October 10th, 2025
If you know how to turn on a computer, you know how many exaggerations, grand promises, and outright lies about AI there are — all aimed at luring investors with visions of co-owning the new future. But the opposite side goes too far as well. Criticism of AI often comes from good intentions, yet it tends to overgeneralize.
Models are trained on content without the creators' consent or any compensation — that's the biggest problem with AI. But it's hardly unique, nor the most egregious example of private business exploiting culture or publicly built infrastructure for profit. The Internet itself began as a publicly funded research project, Walmart employees rely on food stamps, and, more fundamentally, modern business couldn't exist without roads, schools, or healthcare. This is simply how things work under capitalism, and a critique of capitalism is beyond my scope here; I'm accepting that it defines the conditions we live under. AI is unethical, yes, but no more so than animal cruelty in the food industry or sweatshop labor.
Currently, AI is barely regulated because it hasn't been around for long, and laws always lag behind technology. Realistically, when regulations do arrive, they will be driven more by pragmatism than by ethical concerns. Without a global ban on AI training, any country that ignores the ethical issues gains an advantage over one that restricts its industry, which makes widespread bans highly unlikely. The only realistic approach, then, is additional taxation on AI companies to compensate for the harm they cause.
And machine learning technologies ARE useful. I can't believe people dispute even that as part of the anti-AI sentiment. It's the same process of automation that started with computers themselves, and its benefits are not diminished by how heavily the technology is oversold.
The hype pushed AI into things that don't need it, just like every major tech trend before it, and like every trend it will eventually balance out. A few years ago everything suddenly had to have a mobile app. Over a decade ago every local bakery was rushing to build a website — you can still find remnants of that era on Neocities. Over time, the trend corrected itself: today it's mostly businesses that process payments themselves that maintain websites, while most others shifted to a social media presence. I'd argue the same will happen with AI assistants — once the hype dies down, they'll be used mainly where they make practical sense.
As for people losing their jobs — that's nothing new either, and of course it's sad. It's even sadder that we don't live in a world where people can welcome everything that makes their work easier and faster. When people are unhappy that their careers have become obsolete, it's usually because they've lost their source of income. Even in the worst case, where a LOT of people lose their jobs simultaneously, there is hope for enough political momentum to push for change. What matters is that this momentum focuses on realistic and broadly agreed-upon goals — and banning AI altogether is not one of them.
Talking about people losing jobs is hard; talking about artists losing their livelihood is even harder. I wholeheartedly believe that art is necessary both for the author and for the audience: it's an important part of the human experience. But art as a career has always been something that happens despite the odds. And art faces a much bigger problem: its main competition is not AI art but the ever-available, dopamine-overdriving, attention-span-destroying brainrot content (handcrafted or generated, it doesn't matter). The places where art will be replaced first are porn and placeholder commercial art, and, to be honest, I'm not sure I wouldn't prefer a world where such content is created by machines, leaving humans free to focus on other pursuits.
The reality is that creating art is more of a human need than a commercial demand from society. Many artists of old survived on the patronage of the wealthy and powerful, and the alternative of state-sponsored art, even under the most virtuous state imaginable, is not something I'm thrilled to see. I'm okay with a future where human-created art exists primarily as a hobby. I love crowdfunding platforms for artists, and the artists there aren't threatened by AI that much, because they build their following not on art alone but also on personal connection and likeability.
But what if AI limits human creativity? You could say that people might get so used to the convenience of AI that, over time, they stop trying to create on their own. But that's part of a broader question of convenience making us less driven. Again, I think creating art is a human need, so I'm optimistic: painting survived photography, theater survived movies, books survived everything.
At the same time, AI opens up new possibilities. Soon it will be possible to create full-fledged movies with it. You might dislike the idea of watching AI-generated movies, but consider this: anyone could make their own. Until now, that was a medium closed to anyone without serious money and connections. How is that limiting creativity? And while high art will remain out of AI's reach, comedy has clearly gained from it: absurd humor and parodies are a perfect match for AI.
But what about disinformation? Sure, AI can lie. But I'd argue there's a good chance it will still be better than the sources the average person currently gets information from. People will always find some dumb shit to believe in, even without AI. Okay, but won't it make it so much easier to feed people propaganda? My experience with propaganda tells me that people don't need sophisticated messaging; they'll believe anyone who appears to be on their side or confirms something they already believe. But yes, I agree it's going to get worse.
If there's one theme running through this text, it's that the downsides of AI are nothing new or unusual. The same goes for lowered work standards or corner-cutting in education. The invention of computers didn't abolish mathematics — it just made some processes faster and enabled much more advanced research. Once we move past this transitional phase, ML will prove itself a powerful assistive tool. That's optimism speaking, but it's my belief and I don't think it's unreasoned. Ultimately, there's a fundamental incentive for technologies and products to be useful. I believe humans have an innate drive to improve, optimize, and make everything more convenient — just like the earlier point about the hunger for art. Not everyone has it, but enough do; you can see it in the sheer volume of open-source work being done.
And honestly, I just dislike the idea of being a pessimist. Humans simply aren't good at long-term intuition — if they were, addictions would be rare, casinos unprofitable, and the world a far less chaotic place. People spent the entire 20th century worrying about overpopulation, and now they worry about extinction. Acid rain was stopped by passing the necessary environmental laws. The future is unknowable, and I don't think it's healthy to pick the worst version of it to believe in.
I have to admit that the movement against AI, even if imperfect, helps prevent the worst effects. I hardly see any AI in spaces curated by communities rather than companies. The writers' strike kept AI out of Hollywood and, through the reach of US cultural exports, slowed its spread into movies worldwide. Many advertisers are wary of placing ads on generated content because people actively complain about it. All of this helps, but these are focused, calibrated actions. So maybe general dislike of AI is somewhat useful, but I wish it were more nuanced and measured.
Lastly, I want to ask some genuine questions. Are you against AI itself, or against socialized research being turned into privatized wealth? Is it about people losing jobs without any chance to retrain? About digital products getting worse? Or do you really believe that AI has no good use?