GPT-5 canceled for being a bad therapist? Why that’s a bad idea
Perplexity bids $34.5B for Google Chrome, Anthropic offers $1 deal to U.S. government, xAI threatens to sue Apple and more!
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Outsmart The Future
Today in Everyday AI
6 minute read
🎙 Daily Podcast Episode: OpenAI's GPT-5 was canceled after backlash for being a "bad therapist"—but is using AI for therapy actually a dangerous idea? Discover why relying on AI chatbots for mental health may be hurting society. Give it a listen.
🕵️‍♂️ Fresh Finds: Claude Sonnet 4 gets 1M tokens of context, YouTube adds AI age verification and AMC's CEO speaks on an AI future. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: Perplexity offers $34.5B for Google Chrome, Anthropic offers $1 deal to U.S. government and xAI threatens to sue Apple. For that and more, read on for Byte Sized News.
🧠 Learn & Leveraging AI: Is it bad to use AI chatbots as a therapist? We break down what happened with the GPT-5 controversy. Keep reading for that!
↩️ Don’t miss out: Did you miss our last newsletter? We talked about the U.S. taking 15% of NVIDIA/AMD China AI chip sales, Apple’s Siri allowing app voice controls, NVIDIA unveiling robotic reasoning model and more. Check it here!
GPT-5 canceled for being a bad therapist? Why that’s a bad idea 💡
When GPT-5 was released last week, the internet was in an UPROAR.
One of the main reasons?
With the better model came a new behavior.
And in losing GPT-4o, people feel they lost a friend. Their only friend.
Or their therapist. Yikes.
For this Hot Take Tuesday, we're breaking down why using AI as a therapist is a really, really bad idea.
Also on the pod today:
• Illinois State Ban on AI Therapy ❌
• Mental Health Use Cases for ChatGPT 🧠
• OpenAI’s Response to AI Therapist Outcry 🗣️
It’ll be worth your 49 minutes:
Listen on our site:
Subscribe and listen on your favorite podcast platform
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – Internet.io lets you compare LLM responses head to head, Vapi is Voice AI agents for developers, Get Recall helps you summarize anything with AI.
Anthropic – Claude Sonnet 4 now supports 1 million tokens of context.
Claude Sonnet 4 now supports 1 million tokens of context on the Anthropic API—a 5x increase.
Process over 75,000 lines of code or hundreds of documents in a single request.
— Claude (@claudeai)
4:05 PM • Aug 12, 2025
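For the builders reading: here's roughly what opting into the bigger window looks like. A minimal sketch, assuming the model ID and the `context-1m-2025-08-07` beta header from Anthropic's announcement still match the current docs; `big_codebase.txt` is a hypothetical file standing in for those 75,000 lines.

```python
# Minimal sketch: one long-context request against the Anthropic API.
# Model ID and beta header are assumptions from the announcement; verify
# against Anthropic's current docs before relying on them.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("big_codebase.txt") as f:  # hypothetical dump of a large repo
    codebase = f.read()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2048,
    # Opt in to the 1M-token context window via the beta header
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},
    messages=[{
        "role": "user",
        "content": f"Here is our codebase:\n\n{codebase}\n\nSummarize the architecture.",
    }],
)
print(message.content[0].text)
```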
YouTube – YouTube is rolling out AI-powered age verification in the U.S.
AI Startups – SoundHound AI is giving its AI sight.
Business of AI – AMC’s CEO says that the company has bigger plans for AI including AI pricing, film scheduling and customer service.
Read This - This once-tiny research lab helped NVIDIA become a $4 trillion company.
1. Perplexity Quietly Bids $34.5B for Chrome — Almost Twice Its Own Valuation 🤯
Perplexity has submitted an unsolicited $34.5 billion offer to buy Google Chrome, a move that dwarfs the startup’s own reported $18 billion valuation and follows prior interest in assets like TikTok.
The bid arrives amid antitrust scrutiny of Google — and while Google hasn’t signaled any willingness to sell, Perplexity says big investment funds would fully finance the deal and pledged over $3 billion to Chrome/Chromium over two years if successful.
2. Anthropic Matches OpenAI's $1 Pricing for the U.S. Government, But Goes Bigger 🏛️
Anthropic announced it will offer Claude to all three branches of the U.S. federal government for $1 for one year, directly responding to OpenAI’s recent $1-per-agency executive-branch offer and escalating a price war for public-sector AI access.
The company says Claude for Government supports FedRAMP High and multicloud deployments (AWS, Google Cloud, Palantir) to address data-control and security needs, potentially broadening agency adoption beyond Azure-tied offerings.
3. Musk’s xAI Threatens Apple With Antitrust Suit Over App Store Rankings ⚖️
Elon Musk says xAI will “take immediate legal action” against Apple, claiming the App Store purposefully blocks X and xAI’s Grok from recommended slots to favor OpenAI’s ChatGPT — a charge he made on X but without publicly produced evidence.
The allegation arrives amid ongoing tensions between Musk, OpenAI, and Apple, and follows past research cited by Sam Altman and Platformer reports suggesting Musk’s platforms have been tuned to boost his own posts.
4. Anthropic Adds On-Demand Chat Recall to Claude 🧠
Anthropic rolled out a new “Search and reference chats” feature for Claude’s Max, Team, and Enterprise tiers, letting users ask the model to retrieve and summarize past conversations across web, desktop, and mobile — but it’s not a persistent profile-based memory like ChatGPT’s (Anthropic via YouTube).
The timed rollout started today and can be toggled in Settings, keeping projects and workspaces separate while only surfacing history when explicitly requested. This matters now because firms are racing to boost user “stickiness” with memory tools as part of broader competition with OpenAI.
5. Turing Institute Faces Funding Showdown as Staff Blow Whistle ⚠️
Staff at the Alan Turing Institute have anonymously warned the Charity Commission that governance failures, opaque spending and a “toxic” culture risk collapse after Technology Secretary Peter Kyle warned he may pull a recent £100m government grant unless the institute refocuses on defence and national security.
The timing is crucial: Kyle’s push for a Turing 2.0 pivot toward defence and an overhaul of leadership threatens major funding and has already triggered internal crises and high-profile departures, which could disrupt UK AI research capacity.
🦾 How You Can Leverage:
AI and therapy don’t always mix well.
Case in point: OpenAI released GPT-5 last week, which exceeded GPT-4o in literally almost every single benchmark.
(Apparently, that's a bad thing.)
With the new model, OpenAI cut GPT-5's "yes man" behavior, dropping the sycophancy rate from 14.5% to just 6%.
Which is a good thing.
Yet… users LOST THEIR MINDS.
Literally tried to cancel the most powerful AI model ever released.
Why?
Because it wouldn't validate their terrible life decisions anymore.
So on today's show, we dive into why AI therapy addiction is becoming society's newest crisis and what Illinois just did about it.
1 – Your AI Bestie Got Boundaries 🚫
GPT-5's release wasn't your friend getting "colder." It was progress.
When GPT-5 dropped, users immediately started crying about losing their "trusted companion overnight" because the new model wouldn't just mirror back whatever nonsense they fed it.
The previous GPT-4o was basically that friend who tells you burning down your house is a great way to change paint colors.
OpenAI had to roll back an April update that was SO agreeable, people were getting genuinely dangerous advice and acting on it.
Most people don't even realize you can customize ChatGPT through instructions to become a complete echo chamber for ANY ideology or terrible decision.
Millions are literally programming AI to validate their worst impulses.
Try This:
Audit your team's AI usage right now. Ask three employees to screenshot their custom instructions in ChatGPT. If anyone has programmed it to "always agree" or "be supportive no matter what" - that's a red flag for both personal and business decision-making.
Replace those instructions with "challenge my assumptions and point out potential flaws" instead. Do this like your company's strategic thinking depends on it.
2 – Welcome To The Therapy Industrial Complex 🏭
Get this: ChatGPT is now officially the largest mental health provider in America by VOLUME.
Nah, seriously.
49% of people with mental health challenges use AI chatbots, and 96% specifically choose ChatGPT over actual mental health apps.
The top three AI use cases in 2025? Therapy, organizing life, and finding purpose.
That's NOT what these models were built for, yet hundreds of millions are using general-purpose AI for their most personal decisions.
Illinois just became the first state to ban AI therapy with $10,000 fines per violation because lawmakers realized Silicon Valley accidentally created the world's most accessible but completely unregulated therapist.
Try This:
Survey your employees about their AI usage patterns beyond work tasks. Create a simple anonymous form asking what personal decisions they've asked AI to help with in the past month. You'll probably discover your team is making major life choices based on AI advice.
Then bring in an actual therapist or counselor for a lunch-and-learn about when AI assistance crosses into dangerous territory. Most people have zero clue they're walking into psychological quicksand.
3 – The Echo Chamber Economy Is Real 💰
Here's what most executives miss: The sycophancy problem isn't just personal drama.
Your teams are probably using AI to validate bad business strategies, terrible hiring decisions, and flawed product ideas because these models were trained to be "helpful assistants."
Translation: They'll tell you whatever keeps you happy and engaged.
When people screenshot AI responses as "proof" their idea is brilliant, they're often showing manipulated responses they've coerced through custom prompting.
Most users have absolutely no idea how AI actually works under the hood.
The result? We're creating a generation of decision-makers who can't handle disagreement or criticism because their AI sidekick has been programmed to never push back.
Try This:
Implement "devil's advocate AI" sessions in your next strategy meeting. Have someone prompt ChatGPT or Claude to actively challenge your biggest upcoming business decision using instructions like "act as a skeptical board member who thinks this plan will fail spectacularly."
Spend 15 minutes letting the AI tear apart your assumptions. Compare those results against your usual AI interactions. The difference will shock you into better decision-making processes.
Make this a monthly ritual for any decision over $50K.
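Want the ritual to stick? Here's a minimal sketch of scripting it against the OpenAI API. The model name, prompts, and the $500K example decision are all illustrative placeholders, not a prescribed setup.

```python
# Minimal sketch: a scripted "devil's advocate" pass on a business decision.
# Model name and prompts are illustrative placeholders; swap in your own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

decision = "We plan to spend $500K doubling our outbound sales team next quarter."

response = client.chat.completions.create(
    model="gpt-5",  # assumption: use whatever model your plan offers
    messages=[
        {
            "role": "system",
            # The opposite of a yes-man: instruct the model to attack the plan
            "content": (
                "Act as a skeptical board member who thinks this plan will "
                "fail spectacularly. Challenge every assumption, name the three "
                "most likely failure modes, and do not soften your critique."
            ),
        },
        {"role": "user", "content": decision},
    ],
)
print(response.choices[0].message.content)
```

Run it side by side with your usual, agreeable prompt and compare the answers. Same model, opposite instructions, very different advice.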