March 31, 2021

In the popular imagination, as we head toward the future at warp speed, technology manages and “optimizes” ever more of our lives. Ads already follow us across social media, recommendation systems offer products we might want and need, deepfake videos distort political discourse, and dating apps make it easier to establish connections, particularly during the desperate and lonesome times of lockdown. But what if we had technology at our disposal that let us “optimize” the search for true love, without all the trial and error? Netflix’s new science-fiction drama series The One introduces a near future rocked by a new technology that promises to find every person their match, but setting people up with “the one” isn’t as simple as its makers claim.

The One is trending in Netflix’s Top 10 this month. The main character is Rebecca Webb (played by Hannah Ware), CEO of The One, a tech/genetics company that promises to “match” people with their perfect soulmate on the basis of a simple DNA test. She and her co-researcher designed a service that seems to work exactly as advertised, measuring genetic data found in chromosomes (we never fully learn how; the nerdy details are spared the general public except for two very brief mentions) to find a person’s ideal partner. The service works so well that when potential soulmates are actually matched, they find it hard to resist each other. This, in turn, leads to divorces, family dramas and other turbulence. But what matters is that the technology works. The One depicts people’s incredible determination, and a somewhat inexplicable drive, to use technology to better their lives no matter the cost. This drive leads many of them to cheat and harm others for the promise of a world made much simpler (better?) by ubiquitous tech.

Obviously, The One is not the first production of its kind. In fact, we have been bombarded with Hollywood productions and TV series telling us the same thing: technology will eventually take over. Consider the widely acclaimed Black Mirror, Ex Machina, or Westworld. And when we translate these visions into our daily lives, the question that ultimately matters is: are we okay with algorithmic decision-making in most areas of our lives?

Don’t get me wrong, though. I can see a strong case for algorithmic systems supporting human decision-making in various areas of our lives. I understand that medical diagnoses can, at least in part, be performed very effectively by robust image-processing systems, and I can easily find applications for both supervised and unsupervised machine learning in weather forecasting, logistics, agriculture and many other sectors. But as an AI expert who deploys machine learning as a scientific method to solve problems on a daily basis, I am generally not okay with algorithmic decision-making in most other areas, particularly the sensitive ones.

Let’s take the example of dating. A variety of apps already facilitate dating and meeting people. Some rely on randomization, some primarily on geolocation, and some apply filters based on age, gender, hobbies and other preferences to improve the odds of a match. Most of these services implement quite advanced algorithms that are often IP-protected, and many of them profited greatly from the dire times of the pandemic. Tinder boldly expanded, introducing new features such as “Background Check” and the ability to gift Lyft rides to dates. What is more, several dating websites already claim to use genetics to match people with potential romantic partners; DNA Romance and Gene Partner are two examples.
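
To make concrete what such filter-based matching looks like under the hood, here is a minimal Python sketch of the general idea. The profile fields, thresholds and distance shortcut are my own hypothetical illustration, not any particular app’s actual (and, as noted, IP-protected) logic.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    age: int
    lat: float   # latitude of the last check-in
    lon: float   # longitude of the last check-in
    hobbies: set = field(default_factory=set)

def km_between(a: Profile, b: Profile) -> float:
    """Crude flat-map distance (~111 km per degree); a real service
    would use haversine distance or a geospatial index."""
    return (((a.lat - b.lat) * 111) ** 2 + ((a.lon - b.lon) * 111) ** 2) ** 0.5

def candidate_matches(me: Profile, pool: list,
                      min_age: int, max_age: int, max_km: float) -> list:
    """Filter-then-rank: keep profiles that pass every hard filter,
    then order by the number of shared hobbies as a crude affinity proxy."""
    passing = [p for p in pool
               if min_age <= p.age <= max_age
               and km_between(me, p) <= max_km
               and me.hobbies & p.hobbies]
    return sorted(passing, key=lambda p: len(me.hobbies & p.hobbies), reverse=True)
```

The point of spelling it out is that every step here is inspectable: you can tell exactly which filter excluded a candidate, which is precisely what black-boxed matching does away with.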

There is also a dating app developed by Harvard professor George Church that aims to prevent genetic diseases from being passed down to children by parents who both carry the same genetic mutations. SafeM8 would use genome sequencing to identify people who are not genetically compatible and eliminate them from each other’s searches, according to BioNews.org.

“You wouldn’t find out who you’re not compatible with. You’ll just find out who you are compatible with,” Church explained. Even though I am hesitant about Church’s idea and the “techy” slogan behind it (“Safer, smarter relationships”), services like these still leave room for choice. And as problematic as their use can sometimes be, none of them tells you that there is only a small pool of people (limited to one or two persons?) who, according to some black-boxed criteria, are your matches.
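
BioNews describes the SafeM8 idea only at a high level, but the underlying screening logic is simple enough to sketch. The snippet below is my own illustration under that assumption, not Church’s actual implementation; the people are invented, though the gene symbols are real recessive disease genes. A couple is flagged only when both partners carry a mutation in the same recessive disease gene, and flagged candidates are silently dropped from the results.

```python
def shares_recessive_risk(carrier_genes_a: set, carrier_genes_b: set) -> bool:
    """Two carriers pose a risk as a couple only if they carry mutations
    in the SAME recessive disease gene; carrying different ones is fine."""
    return bool(carrier_genes_a & carrier_genes_b)

def visible_matches(my_carrier_genes: set, candidates: dict) -> list:
    """Return only the candidates the app would show, silently dropping
    genetically incompatible ones -- per Church, you never learn who was hidden."""
    return [name for name, genes in candidates.items()
            if not shares_recessive_risk(my_carrier_genes, genes)]

# Hypothetical example: the gene symbols are real disease genes, the people are not.
me = {"CFTR"}                      # cystic fibrosis carrier
pool = {
    "Alex": {"HBB"},               # sickle-cell carrier -> compatible
    "Sam":  {"CFTR", "SMN1"},      # also a CFTR carrier -> hidden
    "Rio":  set(),                 # carries none of the screened variants
}
print(visible_matches(me, pool))   # ['Alex', 'Rio']
```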

Algorithms allow for unprecedented efficiency and can make our work easier. Many of the mundane, routine tasks we tend not to like will be performed by algorithms in the future. I believe, however, that wherever possible, algorithms and their workings should be transparent and explainable, so that we understand what actually happens, how the information is processed and where the results we see come from. Many serious research projects led by pioneering teams worldwide focus on increasing the explainability of AI algorithms. Most experts currently believe that whenever a satisfactory level of explainability cannot be reached, humans should be the ones who make the decision and who are thus responsible for it. Transparency is usually a less scientifically thorny problem; it is largely a matter of how much is revealed to the user. I do think it would be beneficial to have more transparency in products and services that rely on processing users’ data.
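
To illustrate the transparency point, here is a toy sketch, entirely my own and in the same hypothetical dating setting as above, of the difference between a bare verdict and an explainable one: the system returns its decision together with the named rules that failed, so the user can see exactly why a candidate was hidden.

```python
def decide(candidate: dict, rules: dict) -> tuple:
    """Evaluate named, human-readable rules and return the verdict
    together with the reasons -- the opposite of a bare black-box score."""
    failed = [name for name, rule in rules.items() if not rule(candidate)]
    return ("hidden" if failed else "shown", failed or ["passed all rules"])

# Hypothetical, inspectable matching rules (illustration only).
rules = {
    "within age range 25-40": lambda c: 25 <= c["age"] <= 40,
    "within 50 km":           lambda c: c["distance_km"] <= 50,
}
print(decide({"age": 44, "distance_km": 12}, rules))
# -> ('hidden', ['within age range 25-40'])
```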

As Yuval Noah Harari argues in Forbes, with the Covid pandemic “engineers are creating new digital platforms that help us function in lockdown, and new surveillance tools that help us break the chains of infection.” He also notes that surveillance jeopardizes our privacy and may allow for the emergence of unprecedented totalitarian rule. When Big Brother-like surveillance is combined with a lack of transparency and explainability, things become even more problematic. And it gets worse still when the public is fed stories that only opaque, complex tracking technology can keep them from getting sick, or can replace them in finding a person to spend their lives with.

On a side note, Harari also asked whose job it is to find the right balance between useful surveillance and dystopian nightmares: engineers or politicians? My answer is: ours. We all need and deserve tools that we understand and can control. That is why I think we should be vocal about this and resist technologies that, instead of supporting and collaborating with humans, take over and make decisions on our behalf. Pop culture feeds us these scenarios as inevitable. Let’s write different ones in our real lives.

Aleksandra Przegalinska

Aleksandra is an Associate Professor of management and obtained her PhD in the philosophy of artificial intelligence at the Institute of Philosophy of the University of Warsaw. She currently serves as Vice-Rector for International Cooperation and ESR at Kozminski University in Poland, and is a former Postdoctoral Research Fellow at the Massachusetts Institute of Technology.

She is a graduate of The New School for Social Research in New York, where she participated in research on identity in virtual reality, with a particular emphasis on Second Life.

She was a Visiting Research Fellow at the American Institute for Economic Research. In the fall of 2021 she will begin working as a Senior Research Fellow at the Labor and Worklife Program at Harvard University.

Aleksandra is interested in artificial intelligence, natural language processing, machine learning, social robots and wearable technologies.
