To hear an AI enthusiast tell it, our robot overlords are not only here; they also plan to take your job, your girl, and your mental faculties. Oh, and those are all supposed to be good things, apparently.

Indeed, various forms of AI are cropping up all over the place, despite the fact that public sentiment regarding AI is overall more negative than positive. Companies large and small have spent the last couple of years trying to shove AI of some kind or another into every nook and cranny of their products and processes, usually in ways that their customers and employees neither want nor need.
So why is it that the vocal proponents of generative AI almost always sound so… rape-y?
“You need to adapt, or you’re going to be left behind.”
“It’s here to stay, and there’s nothing you can do about it, so you might as well get used to it.”
“Just let it happen.”
These are the types of comments that are often thrown out in conversation when someone argues that outsourcing everything you can think of to various GenAI models is the only path forward. The people making these arguments are all over the internet: in proper news articles, in blog posts, in comments all over social media, and basically anywhere the ethics of AI use are being discussed. Take the time to post a neutral-to-negative opinion about AI or its effects, and an AI bro will almost invariably show up to call you a techno-pessimist and insist that soon you will have no choice but to use their preferred LLM.

The reason is simple, really: the creators and proponents of AI as it currently exists do not value consent. They certainly do not value the consent of the artists and other creators whose work they have stolen to train their AI models. Part of this, of course, is that in addition to disregarding consent, they largely do not understand or value art or its creation. If and when a lawsuit is brought against any of them, usually for copyright infringement, the companies in question will claim that they did not in fact steal anything, but rather were simply exercising fair use in training their models. It’s worth noting that if a human were to pull directly from various pieces of source material the way GenAI does and then attempt to pass that material off as their own, it would be called plagiarism.
It’s not just (“just”) the consent of the creators whose work is being stolen that is being wholly disregarded. No regard whatsoever is given to the consent of the public, i.e., you, me, or anyone else. These companies don’t care that they are actively making everything worse for all of us (including making their users dumber), nor do they care whether we want or need any of these tools (I am using the word ‘tools’ pretty loosely here). No one asked Microsoft to shove Copilot into every single program in the Microsoft ecosystem, yet there it is, and oh yeah, it’s close to impossible to effectively remove. Nor did anyone ask for autocorrect to suddenly and catastrophically degrade on seemingly every phone keyboard in existence. For the record, I will not be looking for ways to integrate generative AI into each step of my workflow for any task.
Yet another thing to which no one consented is the environmental impact. The enormous data centers being built across the globe to power these tools have vast implications, affecting local temperatures, water use, air quality, and noise pollution, for starters. Data centers generate a great deal of heat, and some of the larger ones use as much as 5 million gallons of fresh water per day for cooling. Elevated temperatures have been observed in areas as far as 6.2 miles from a data center. Then there is, of course, the habitat loss that comes with building thousands of data center facilities, each with a footprint of tens of thousands of square feet or more. And don’t forget the impacts of the infrastructure needed to support the data centers, like the electricity to power them and the aforementioned water to cool them.
There is another insidious consent-related concern that these large language models present: the obsequiousness with which AI chatbots engage with their users, and what that behavior enables and implies. More and more, AI chatbots are being coded as feminine/female, given feminine-sounding names and voices and a fawning, deferential manner toward their users. While ChatGPT’s parent company OpenAI has made strides to reduce what it calls ‘AI sycophancy’ in its products, LLM chatbots are generally designed to be flattering and agreeable. This behavior persists even in situations where it doesn’t make sense or where there should be safeguards, such as when the user is factually incorrect, or even when the user is suggesting harming themselves or others.
Despite all these downsides (and more! There are so many things I did not cover here), GenAI, and AI more broadly, is still being shoved down our collective throats. But to what end? The possibilities are, uh, not great.
As previously mentioned, using GenAI can profoundly affect users’ cognitive faculties, reducing their ability to think critically and to form their own independent thoughts. The more often and the longer users interact with these tools (“tools”), the greater these impacts become. So it’s probably fine that companies across the globe are pushing (and in some cases, forcing) employees to integrate these tools into their various workflows while reducing workforces in every industry imaginable. Similarly, even though the use of AI for schoolwork has been shown to have far more negative effects than positive ones, it’s also probably a good thing that there’s been an explosion in AI products “for the classroom,” famously intended to be a place devoid of original thought.
It’s all okay, though: the goal is for all of us to eventually outsource all of our thinking to these companies and their tools. I guess in that case, maybe we should all just shut up and let it happen, huh?