Ep 664: 3 AI Lies Most People Believed In 2025 (But You Shouldn’t)
OpenAI hits code red and a new model leaks, Gemini 3 and Nano Banana Pro go global, Apple's AI chief steps down and more
Outsmart The Future
Today in Everyday AI
8-minute read
🎙 Daily Podcast Episode: You’ve been lied to about AI in 2025. We unpack the 3 biggest AI lies you’ve been told. Find out more in today’s show and give it a watch/listen.
🕵️‍♂️ Fresh Finds: Free Opus 4.5 trial for Pro users, Amazon workers are rebelling, new AI language tools and more. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: OpenAI hits code red and a new model leaks, Gemini 3 and Nano Banana Pro go global, Apple's AI chief steps down and more. Read on for Byte Sized News.
🧠 Learn & Leverage AI: How can you separate the shady AI studies from reality? We show you how. Keep reading for that!
↩️ Don’t miss out: Accenture and OpenAI strike huge deal, Kling Omni "Omniverse" Model launch, Runway’s big AI video splash and more. Check it here!
Ep 664: 3 AI Lies Most People Believed In 2025 (But You Shouldn’t)
You've been lied to about AI. 🤥
A lot.
So on today's Hot Take Tuesday episode, we're breaking down 3 of the most viral AI half-truths of 2025 and setting the record straight.
Did Anthropic overtake OpenAI?
Do 95% of AI pilots fail?
Is half of the internet AI slop?
We tell all on today’s episode.
Also on the pod today:
• 57% “AI slop” internet myth 🌐
• Garbage AI detectors exposed 🗑️
• Anthropic vs OpenAI enterprise share 👔
It’ll be worth your 39 minutes:
Listen on our site:
Subscribe and listen on your favorite podcast platform
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – X-Design is your Creative AI Agent for Branding, Runway Gen-4.5 is a New Frontier for Video Generation, Shipper Helps You Turn Your Ideas Into Real Apps, and In A Minute lets you Create a Revenue-Ready Product by Chatting With AI.
Free Opus 4.5 Trial — Perplexity quietly lets Pro users trial its new Opus 4.5 model
AI Music — AI‑generated worship hits charts, but leaves listeners with a spiritual “ick.”
Google AI — Google may lead in AI—but its real battle is for ad dollars
AI Language Tool — New AI tool helps renters and workers spot hidden contract risks
AI YouTube Scares — YouTube’s deepfake defense sparks new fears over creators’ biometric control
Amazon Worker Backlash — Amazon workers accuse company of sacrificing climate and jobs for AI expansion
AI Churches — Houston faith leaders quietly test AI ‘Godbots’ to keep pews filled
1. OpenAI hits ‘code red’ as AI race heats up 🚨
OpenAI is reportedly hitting the panic button right now, with CEO Sam Altman declaring a “code red” and ordering an all-out push to upgrade ChatGPT as rivals rapidly gain ground. According to reports from The Wall Street Journal and The Information, Altman is shelving splashy projects like ads, shopping and health agents, and a personal assistant called Pulse so teams can focus on making ChatGPT faster, more reliable, more personalized, and able to answer more questions.
The move marks a sharp shift for a company once seen as comfortably ahead, and it reflects mounting pressure as Google’s user base swells and its Gemini 3 model tops key industry benchmarks.
2. Google’s Gemini 3 and Nano Banana Pro go global in Search 🍌
Google has announced that its top-tier Gemini 3 model is rolling out in AI Mode in Search to nearly 120 countries and territories in English, marking a major expansion of its most advanced reasoning tech. Starting today, Google AI Pro and Ultra subscribers can switch to “Thinking with 3 Pro” in the model menu, unlocking smarter handling of complex questions, richer multimodal understanding and new interactive layouts generated on the fly.
The company is also widening access to Nano Banana Pro, its latest generative imagery model built on Gemini 3 Pro, so users can turn Search queries into visual content like infographics in more English-speaking markets.
3. Apple’s AI Chief Steps Down In Major Shake-Up 🍏
Apple is shaking up its AI leadership, with longtime AI boss John Giannandrea stepping down and former Microsoft and DeepMind researcher Amar Subramanya taking over at a pivotal moment for the company’s strategy.
The move comes as Apple faces criticism for lagging behind rivals in AI and after its much-hyped Apple Intelligence rollout, including a revamped Siri, hit delays and lukewarm reviews. Subramanya will now report to software chief Craig Federighi, who is increasingly steering Apple’s AI push while other AI-related teams shift under operations and services leadership.
4. IBM CEO Pours Cold Water on AI Data Center Gold Rush 💧
In a timely reality check for the AI hype cycle, IBM CEO Arvind Krishna says the current data center spending spree for AGI has "no way" of turning a profit at today's costs. On the "Decoder" podcast, he estimated that filling a one-gigawatt AI data center runs about $80 billion, which scales to roughly $8 trillion in global commitments if companies pursue 100 gigawatts of capacity.
Krishna argued that such a bill would require around $800 billion in annual profit just to service the interest, before even counting the hit from AI chips that need to be replaced roughly every five years.
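If you want to sanity-check Krishna’s math yourself, here’s the back-of-the-envelope version. (The ~10% carrying rate is our inference from his figures; he didn’t state a rate.)

```python
# Back-of-the-envelope check on Krishna's numbers from the "Decoder" interview.
cost_per_gigawatt = 80e9      # ~$80B to fill a single 1 GW AI data center
planned_capacity_gw = 100     # the industry-wide buildout scenario he cites

total_commitment = cost_per_gigawatt * planned_capacity_gw
print(f"Total buildout: ${total_commitment / 1e12:.0f} trillion")  # $8 trillion

# $800B/year on $8T implies a ~10% annual cost of capital (our inference).
annual_service = 800e9
print(f"Implied carrying rate: {annual_service / total_commitment:.0%}")  # 10%
```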
5. UN warns AI could spark a ‘great divergence’ 💥
A new UNDP report out today warns that artificial intelligence could reverse decades of shrinking global inequality by turbocharging growth in rich countries while leaving poorer nations further behind.
Framing AI as a turning point on par with the Industrial Revolution, the study says advanced economies in the Asia Pacific are already cashing in, while states with weak infrastructure and limited skills are largely shut out. The report predicts big economic gains for the region but flags serious risks to jobs, especially for women and young people, if governments do not move quickly on protections and training.
6. OpenAI’s ‘Garlic’ Model Targets Google’s Gemini 3 🧄
OpenAI is reportedly close to launching a new model dubbed Garlic to counter Google’s recent advances with Gemini 3, according to a report from The Information. While OpenAI hasn't made any public announcements, insiders say Garlic has shown strong results in coding and reasoning compared to Google’s Gemini 3 and Anthropic’s Opus 4.5, and could launch as GPT-5.2 or GPT-5.5 as early as next year.
Garlic builds on lessons from a previous model, Shallotpeat, and uses improved pretraining strategies that allow smaller models to match the capabilities of larger ones, potentially saving time and resources. This behind-the-scenes push comes as CEO Sam Altman directs a “code red” effort to keep ChatGPT competitive in the rapidly intensifying AI race.
You’ve been scammed. 🤥
Not by a crypto bro or a phishing email, but by reputable names in education and technology.
In 2025, boardrooms froze budgets and executives panicked because of three specific viral reports. Markets moved. Billions of dollars in strategic capital momentarily shifted.
We’re calling them 3 of the biggest AI fibs of 2025 and we’re here to set the record straight.
So we audited the data.
And obvi brought the receipts.
These weren't research papers.
They were marketing funnels disguised as science, designed to manipulate your decision-making to sell specific products.
And some of the smartest people in the room fell for it.
But not you, Everyday AI reader. We got your back.
Let’s dive in.
1. The $100 Million Conflict of Interest 📉
Menlo Ventures published a chart that terrified OpenAI customers.
It claimed Anthropic had "flipped" the market, grabbing 32% share compared to OpenAI's 25%.
In short — Anthropic is the new enterprise top dog, not OpenAI.
(Lolz)
Executives saw this and immediately questioned if they were betting on the losing horse.
But look at who paid for the microphone.
Menlo Ventures is one of Anthropic’s largest investors, having reportedly poured about a billion dollars into the company.
For their “research paper” though?
They didn't survey the enterprise market. They surveyed 150 people from their network and their own "Anthology Fund," a joint program benefiting Anthropic where they literally pay startups to use Anthropic's models.
Of course the people being paid (in credits) to use Claude will say they’re using Claude when asked by a firm that’s a major backer of the company behind… Claude.
This wasn't reputable research.
It was a portfolio report dressed up as industry analysis to pump their own valuation.
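To see why this matters, here’s a toy simulation of sampling-frame bias; all of the numbers are invented for illustration. The punchline: surveying a biased pool recovers the pool’s preferences, not the market’s, and a bigger sample doesn’t fix it.

```python
import random

random.seed(42)

# Hypothetical numbers: suppose Anthropic's true enterprise share is 25%,
# but inside a VC's own portfolio (startups paid in credits to use Claude)
# the share is 60%. Surveying only the portfolio recovers 60%, not 25%.
TRUE_MARKET_SHARE = 0.25
PORTFOLIO_SHARE = 0.60

for n in (150, 15_000):  # more respondents can't fix a biased frame
    hits = sum(random.random() < PORTFOLIO_SHARE for _ in range(n))
    print(f"n={n:>6}: survey says {hits / n:.0%} "
          f"(true market share: {TRUE_MARKET_SHARE:.0%})")
```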
Try This: Open that "State of AI" report your VC sent you last week and scroll immediately to the disclosure section.
If the firm authoring the report is the lead investor in the "winning" tool, treat the entire document as a paid advertisement.
You wouldn't trust a tobacco company’s study on the lung capacity of chain smokers, so stop letting venture capitalists with chips on the table dictate your infrastructure roadmap based on their need to exit.
2. The Bible Is AI-Generated 🤖
Another viral study from SEO company Graphite claimed 57% of the internet is now "AI slop."
Companies panicked. Marketing teams implemented strict "human-only" verification policies. HR departments started using detection tools to screen candidates.
Here’s the problem.
The study relied on commercial AI detectors to grade the web.
(Spoiler alert — there’s no such thing as reliable AI text detectors.)
We ran the opening of the book of Genesis through the same Surfer AI content detector Graphite relied on. It came back as 86% AI-generated.
(Although we do believe much of the internet is AI slop, there’s no definitive way to prove just how much.)
These detectors measure statistical patterns, which means they flag simple, structured writing (especially from non-native English speakers) as robotic. The company behind this study sells SEO services that benefit directly from the fear of "AI slop" degrading search rankings.
They pushed pseudoscience to sell you a real invoice.
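For the curious: Surfer doesn’t publish its scoring internals, but most commercial detectors lean on statistical predictability, i.e. perplexity under a language model. Here’s a minimal sketch of that heuristic using GPT-2, which shows why formulaic, repetitive human prose like Genesis can score as "AI":

```python
# Minimal sketch of the perplexity heuristic behind many AI-text detectors.
# (Surfer's actual scoring is proprietary; this only shows the mechanism.)
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Lower perplexity = more predictable text = more likely to be flagged."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Simple, formulaic phrasing scores low (flagged as "AI");
# idiosyncratic phrasing scores high (passes as "human").
print(perplexity("And God said, Let there be light: and there was light."))
print(perplexity("Grandpa's pickup smelled like diesel and spearmint gum."))
```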
Try This: Ban the use of "AI text detectors" in your hiring and procurement processes right now.
If your HR team is using these tools to screen cover letters, you are currently rejecting qualified candidates based on a random number generator that discriminates against non-native speakers.
Judge output based on accuracy and voice, not a broken algorithm.
3. The 95% Failure Rate Hoax 🚨
This was the most damaging AI fib of the year by a long shot.
An MIT report claimed 95% of AI pilots fail to deliver value.
CFOs used this stat to kill projects. Skeptics took victory laps. But when you dig into the methodology, it’s not just bad science.
It’s a sales pitch. And it was just plain wrong.
The study was based on just 52 interviews. And they defined "success" as a pilot showing measurable P&L impact within six months.
Nothing in enterprise software hits the P&L in six months.
And 52 “directionally accurate” interviews? That’s a vibe study you coulda done in one hour at any business conference.
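For scale: even if those 52 respondents had been a perfect random sample (they weren’t), the sampling error alone would swamp any precise claim. A quick illustrative calculation:

```python
import math

# 95% margin of error for a proportion estimated from n = 52 interviews,
# using worst-case p = 0.5. (Ignores selection bias, the bigger problem.)
n, p = 52, 0.5
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {margin:.0%}")  # about +/- 14 points
```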
They rigged the criteria to ensure failure. Why?
Because the report was actually an infomercial for MIT’s own "NANDA" product—a $250,000 solution they conveniently pitched as the fix for these failures. They manufactured a crisis to sell you the cure.
Oh, and even setting aside the viral 95% stat, this study is moot. The methodology is so flawed that you should throw this sales pitch in the garbage and unfollow anyone who references it.
(It means they’re trying to sell you some junk.)
If you want our full deep dive on this one, we already tore it apart earlier this year.
Try This:
Pull up your current AI pilot roadmap and change the ROI success metric from "immediate P&L impact" to "capability validation" for the first 12 months.
If you kill projects because they didn't generate cash in two quarters, you aren't being fiscally responsible. You're falling for a marketing tactic that just cost you your competitive advantage for 2026.