Let’s be honest: I think it’s great that technology is so embedded in our daily lives. It helps us find information faster, complete tasks more efficiently, and get inspired, and it occasionally scares the hell out of us with those crazy (fake) videos. I help a lot of companies implement AI, so in the end it pays my bills. But after spending a ridiculous amount of time with all these new technologies, I feel it’s time to reflect on the things I really hate about AI.
1. Everyone is an expert in AI – it’s impossible to cut through the noise
Every time I open LinkedIn, it’s total chaos. AI is everywhere, and suddenly everyone is an expert. It ranges from people sharing their “best prompt templates” to AI “gurus” analyzing every tiny update from OpenAI, Google, Anthropic, and others.
Don’t get me wrong—I love that people are enthusiastic. But even as someone who works full-time in this field, it’s overwhelming. The average business user doesn’t need to know the difference between GPT-4o and Claude Opus, or whether a model uses a mixture-of-experts architecture. What they need is clarity, not chaos.
The annoying part? The reality is often 180 degrees different from what’s being posted. Some claims are just nonsense. Like building a fully automated startup in 10 clicks with n8n and a ChatGPT plugin. Sounds great, but anyone who’s built a real business knows that coming up with an idea, designing a logo, and setting up a support inbox does not make you money. Real businesses require differentiation, execution, and a lot of care.
This reminds me of those “how to make 100K/month with 2 hours a week” courses. If it were really that easy, why are they selling courses instead of just doing it?
We’ve built a lot of AI solutions running in production for large enterprises over the last 6 years. Trust me: there is no holy grail. Making AI work for your business takes time, customization, and a lot of iteration.
2. AI is often just wrong or doesn’t do what it’s supposed to
AI tools are incredibly powerful—but also incredibly unreliable. Hallucinations remain a major issue. I’ve seen models confidently generate completely incorrect answers, fabricate citations, invent legal policies, or produce entirely useless summaries.
You can’t just “plug in” a large language model and expect it to work flawlessly. Even for relatively simple use cases, you’ll need robust validation, fallback logic, and clearly defined boundaries. And no, prompt engineering is not a magic wand that fixes everything.
The problem is that even the best models can make mistakes on very simple tasks, which can be extremely frustrating. Something as basic as asking a model to update an .xlsx file might silently break date and time formats or introduce other subtle errors.
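What does “robust validation and fallback logic” actually look like? Here is a minimal sketch. The `call_llm()` helper and the invoice-summary task are invented for illustration; the point is the scaffolding around the model call, not any particular API.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client you actually use."""
    raise NotImplementedError

def summarize_invoice(raw_text: str, max_retries: int = 2) -> dict:
    """Ask the model for a structured answer, validate it, and fall back
    to a safe default instead of blindly trusting the output."""
    prompt = (
        "Summarize this invoice as JSON with keys "
        "'vendor' (string) and 'total' (non-negative number):\n" + raw_text
    )
    for _ in range(max_retries + 1):
        answer = call_llm(prompt)
        try:
            data = json.loads(answer)
        except json.JSONDecodeError:
            continue  # malformed output, try again
        # Validation: reject hallucinated structure or impossible values.
        if (
            isinstance(data, dict)
            and isinstance(data.get("vendor"), str)
            and isinstance(data.get("total"), (int, float))
            and data["total"] >= 0
        ):
            return data
    # Fallback: route to a human instead of returning a wrong answer.
    return {"vendor": None, "total": None, "needs_human_review": True}
```

Even a toy example like this makes the point: the model is one line, and the boundaries around it are everything else.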
3. Business leaders now put “AI” in every sentence
I get it: AI is exciting. Transformative, even. But let’s be honest: some executives are starting to treat it like a catch-all shortcut to success. Want to sound innovative? Say you’re “exploring AI.” Need more budget? Pitch an “AI-powered roadmap.” Trying to impress investors? Just sprinkle in phrases like “foundation models” and “data strategy” and watch the heads nod.
Even Apple (one of the most disciplined tech companies in the world) is stumbling here. Its newly announced “Apple Intelligence” has already drawn criticism for overpromising and underexplaining.
The problem? AI doesn’t magically fix bad business models, lack of execution, or broken processes. If your organization struggles with decision-making, culture, or strategy—AI is just going to make those problems worse, faster.
4. AI is too big of a term—it doesn’t fit all the things we call AI
“AI” used to mean something specific. Now it’s a placeholder for anything remotely technical. Is it a chatbot? AI. A recommendation algorithm? AI. A basic automation script? AI. A fancy Excel formula? AI.
Today, AI is a catch-all for several subfields:
- Machine Learning (ML): Algorithms that find patterns in data and make predictions (e.g., spam filters, fraud detection).
- Natural Language Processing (NLP): Understanding and generating human language (e.g., chatbots, language translation).
- Computer Vision: Analyzing images or video (e.g., facial recognition, autonomous vehicles).
- Robotics: Physical machines that perceive and act in the real world.
- Expert Systems: Rule-based decision systems from earlier AI eras, still used in fields like medicine and finance.
We’ve lumped everything under one umbrella, and that creates confusion. Business leaders don’t know what’s what. Vendors rebrand old features as “AI-powered” just to sound modern. And it becomes nearly impossible to have a real conversation about needs and capabilities.
We need new words—or at least clearer distinctions.
5. It requires an insane amount of GPU processing power
Let’s talk about the elephant in the room: GPUs. Training and running state-of-the-art AI models requires massive computing power, and we’re not talking about your standard cloud VM here. We’re talking thousands of high-end NVIDIA H100s running in parallel—each costing upwards of €25,000 if you can even get your hands on them.
OpenAI’s GPT-4, Google’s Gemini, and Anthropic’s Claude are all built on compute infrastructures worth hundreds of millions. Even inference—just running these models at scale—requires huge GPU clusters. For most startups or researchers, that level of power is completely out of reach.
If you want to fine-tune a smaller open-source model like LLaMA 3 or Mistral, you’re looking at significant GPU time. Renting a single A100 on AWS or Azure can cost €2 to €5 per hour—and that’s if there’s availability. For actual experiments or training jobs, you often need 4, 8, or even 16 GPUs just to get started.
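To put those hourly rates in perspective, here is a deliberately rough back-of-the-envelope estimate. The numbers (8 A100s, €3.50 per GPU-hour, a three-day run) are assumptions picked from the ranges above, not a quote from any provider.

```python
# Rough cost estimate for a modest fine-tuning run (illustrative assumptions).
gpu_count = 8             # A100s rented in parallel
price_per_gpu_hour = 3.5  # EUR, midpoint of the €2-€5 range above
hours = 72                # roughly three days of training

cost = gpu_count * price_per_gpu_hour * hours
print(f"Estimated compute cost: €{cost:,.0f}")  # Estimated compute cost: €2,016
```

And that’s before a single failed run, hyperparameter sweep, or the data preparation that comes before training even starts.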
6. The environmental impact is staggering
All those GPUs crunching numbers? They consume enormous amounts of electricity. Training a single large language model can emit as much CO2 as multiple transatlantic flights. And we’re not just training one model—we’re training thousands, running millions of inference queries daily, and constantly iterating.
Data centers are expanding rapidly to keep up with AI demand, and most still rely heavily on non-renewable energy sources. While companies like Google and Microsoft are investing in carbon offsets and renewable energy, the net impact is still concerning.
7. Copyright, ownership, and ethical gray zones
Who owns AI-generated content? What about the data used to train these models? These questions remain largely unanswered, and it’s causing real problems.
Artists, writers, and creators are rightfully upset that their work was scraped without permission to train models that now compete with them. Legal battles are mounting. Regulations are slow to catch up. And companies are stuck in the middle, unsure whether AI-generated content can even be copyrighted.
8. Job displacement is real—and we’re not ready
Yes, AI creates new jobs. But it also eliminates many existing ones, and not everyone can transition easily. Customer service representatives, data entry specialists, junior analysts, and even some creative professionals are already feeling the pressure.
The typical response—“people should just upskill”—is oversimplified. Upskilling takes time, resources, and access to education, none of which are equally available to everyone. We need better safety nets, retraining programs, and honest conversations about the economic impact.
9. The lack of transparency and accountability
When an AI makes a mistake—whether it’s denying a loan application, misdiagnosing a patient, or spreading misinformation—who’s responsible? The model? The company? The developer? The person who deployed it?
Most AI systems are black boxes. Even the teams building them often can’t explain why a model made a specific decision. This lack of transparency is dangerous, especially in high-stakes scenarios like healthcare, finance, and law enforcement.
Final Thoughts
Don’t get me wrong—AI is transformative. It has the potential to solve real problems and improve countless lives. But pretending it’s perfect, risk-free, or universally beneficial does no one any favors.
What we need is nuance. Honest conversations about trade-offs, capabilities, expectations, and very real limitations. If we want to move forward, we need less hype, more clarity, and above all—more honesty.