AI and the Social Contract

Musings on AI and how it’s impacting our ability to be active agents in our lives

Irene Bratsis
4 min read · Feb 27, 2022
Image by author

When I got into the field of AI/ML/Data Science, I was like a newborn baby, fresh off a prior career in tech that had left me largely uninspired. Excited about the potential of AI, I saw so many applications that could help us predict, optimize, see, hear, speak and sense using the power of data.

I was learning how to put together machine learning projects that drew conclusions and produced useful outputs, making connections in a way human reasoning couldn’t compete with. The idea that we were on our way to using collective knowledge to inform decision making seemed like a no-brainer. Why wouldn’t we harness the amazing abilities of the spectrum of tools available to us that we collectively call “AI”?

Then, as my eyes widened through various work experiences, conversations, book club discussions and events I was putting on with Women in Data, Women in AI, General Assembly and Women in Trusted AI, I started to see the potential downsides of AI. This is where it gets interesting and where I began to see a new existential concern about AI.

No, I’m not worried about autonomous threats or a “conscious” AI. I’m also not super worried about the threat of negative externalities from the pervasive use of AI, because that threat is already alive and well in our present reality and I’ve more or less come to terms with it (if you’re not sure what I’m talking about, please join our book club; I moderate monthly discussions on books related to ML/AI and data science! And yes, things aren’t going super great).

What I am worried about is my own ability to make decisions. Depending on where you fall in the free will philosophical debate, this might not feel as visceral. In a world where our selection of partners, jobs, mortgage rates, credit lines, lifestyle inspo and political ideologies are dominated by invisible layers of AI that make decisions on our behalf, how can we say we operate in the world as free agents? Taking it a step further, is the use of codified AI just bringing us closer and closer to the realization that the “humans don’t have free will” camp was right all along?

AI can’t force you to fall in love with a person or a house, and it might not have anything to do with how well you perform in a job interview or behavioral assessment, but it certainly takes liberties with the options it puts forth for you, without your explicit knowledge that that’s what it’s doing. For instance, Uber puts us into buckets based on our mutual ratings, affecting things like driver quality and how quickly a ride can get to you. Dating apps similarly group us into buckets based on the quality of our profile photos and attractiveness. Behavior begets behavior, and ultimately this kind of optimization is manipulating how we relate to each other and how we perceive the world around us.
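
To make that concrete, here’s a minimal, purely hypothetical sketch of what rating-based bucketing could look like. The tiers, thresholds and pairing rule are my own illustrative assumptions, not any platform’s actual algorithm.

```python
# A toy sketch of rating-based bucketing. The tiers, thresholds and
# pairing rule below are illustrative assumptions, NOT any platform's
# real code.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class User:
    name: str
    rating: float  # e.g. an average of mutual star ratings, 1.0-5.0

def bucket(user: User) -> str:
    """Collapse a continuous rating into a coarse, invisible tier."""
    if user.rating >= 4.8:
        return "top"
    if user.rating >= 4.0:
        return "middle"
    return "bottom"

def match(rider: User, drivers: List[User]) -> Optional[User]:
    """Prefer counterparts in the rider's own tier; the rider never
    learns that other drivers were filtered out of the pool."""
    same_tier = [d for d in drivers if bucket(d) == bucket(rider)]
    pool = same_tier or drivers  # fall back to everyone if the tier is empty
    return max(pool, key=lambda d: d.rating, default=None)

rider = User("Alex", 4.9)
drivers = [User("Sam", 4.95), User("Kim", 4.2), User("Lee", 3.7)]
chosen = match(rider, drivers)
print(bucket(rider), "->", chosen.name if chosen else "no match")
```

The details don’t matter; what matters is that a handful of thresholds like these, invisible to everyone they sort, quietly decide which options ever reach you.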

We don’t have philosophical models for this yet. The social contract has reached a critical point where we’ve surpassed the philosophers we study in school and we’ve entered a philosophical frontier. We need new definitions for what’s uniquely human, what level of help we want to allow ourselves to get from AI, what role our government must play in protecting us from AI and what the virtues of analog human experience really are.

The term “social contract” refers to an implicit agreement among the members of a society to cooperate for social benefits, for example by sacrificing some individual freedom in exchange for state protection. Theories of the social contract became popular in the 17th and 18th centuries among theorists such as Thomas Hobbes, John Locke, and Jean-Jacques Rousseau, as a means of explaining the origin of government and the obligations of subjects.

But we’ve known for a long time that big tech’s influence has surpassed the influence of most governments. In many cases, it’s surpassed the economic resources of most governments as well. The global artificial intelligence software market is forecast to reach around 126 billion U.S. dollars by 2025. Governmental bodies themselves are not able to create guardrails quickly enough to rein in algorithmic oppression.

Misinformation, disinformation and cyber-warfare are all results of algorithms being used for nefarious purposes. Russia’s meddling in our 2016 and 2020 elections still hasn’t faced a reckoning, and the country is now experiencing cyberattacks by independent actors in response to its aggression against Ukraine.

So are we all just prisoners here?

Are we doomed to build lives and identities unaware that what we’re mimicking and creating stems from things we’ve seen as a result of AI? From the clothes we wear to the way we express ourselves to the people we spend time with, can we truly say our thoughts and ambitions are our own? Finally, am I doing a service to the people I impact through my work and volunteering efforts?

I hope the answer is yes. I hope our salvation lies in our ability to control this extraordinary set of tools. I hope it takes all of us to make AI work for, and benefit, all of us. When I entered the field, it was a mix of curiosity, excitement, apprehension and humanitarian interest that fueled me. Those admirable qualities are still there; the only difference is that I now feel the pangs of fear.

Not the fear of AI taking on a mind of its own.

The fear of AI taking over my own mind.


Irene Bratsis

Director, Digital Product & Data @ IWBI, Women in Data Regional Lead, Women in AI WaiTalk Moderator, Founding Member of Women in Trusted AI, Data Haus Book Club