Meta in hot water, OpenAI responds to GPT-5 backlash and more AI News That Matters
Claude gets power to shut down toxic chats, U.S. senators push back on China AI chip sales, AWS unveils tool to secure AI agents at scale and more!
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Outsmart The Future
Today in Everyday AI
8 minute read
🎙 Daily Podcast Episode: Meta faces backlash for AI trained to talk to minors, OpenAI stirs confusion with new GPT-5 changes, and Apple pivots to AI hardware after software struggles. Get the AI news that matters—fast. Give it a listen.
🕵️‍♂️ Fresh Finds: Duolingo responds to controversial AI memo, Grammarly gets AI overhaul and a Google study on video games and AI. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: Claude gets power to shut down toxic chats, U.S. senators push back on China AI chip sales and AWS unveils tool to secure AI agents at scale. For that and more, read on for Byte Sized News.
🧠 AI News That Matters: From AI’s effect on publishers to violating regulatory standards, here’s what you missed last week in the world of AI. Keep reading for that!
↩️ Don’t miss out: Did you miss our last newsletter? We talked about ChatGPT’s mobile app hitting $2B, Google AI Overviews' 25% drop in publisher traffic, U.S. Gov. taking a stake in Intel and more. Check it here!
AI News That Matters - August 18th, 2025 📰
Meta's AI has reportedly been trained on sensual talk to minors. Yikes.
OpenAI has responded to GPT-5 backlash in a strange way.
Google keeps dropping more and more AI updates.
Don't waste hours a week trying to keep up with AI. Instead, join us on Mondays as we bring you the AI News that Matters.
No fluff.
No corporate marketing. No B.S.
Just what you need to know to stay ahead.
Also on the pod today:
• U.S. Government Considers Intel Equity Stake 💰
• Grok NSFW Imagine Tool Prompts FTC Probe ⚖️
• Anthropic Claude $1 Access for US Government 🇺🇸
It’ll be worth your 48 minutes:
Listen on our site:
Subscribe and listen on your favorite podcast platform
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – TensorZero is an open-source stack for industrial LLM applications, Stormy is an AI agent for influencer marketing and Autosana is a QA agent for mobile apps.
Trending in AI – Duolingo’s CEO says the company’s controversial AI memo was misunderstood.
AI Tools – Grammarly has received an AI overhaul with multiple new features.
AI Tech – AI-powered stuffed animals are starting to enter the market.
AI in Society – Here are the environmental consequences for Big Tech’s push to ease AI regulations.
Trending in AI – Geoffrey Hinton, Godfather of AI, warns AI could take control from humans.
Future of Work – 17% of employees who use AI at work do so to avoid judgment from co-workers.
1. Anthropic Gives Claude the Power to Cut Toxic Chats Short ✂️️
Anthropic updated Claude (Opus 4 and 4.1) to let the bot end conversations as a last resort when users repeatedly push for harmful content, the company told TechCrunch. The feature grew from internal testing in which Claude showed “apparent distress” and a consistent refusal to produce extreme content like sexual material involving minors or instructions for violence, so the model can now permanently close a thread while still allowing new chats.
The change, along with a simultaneous tightening of usage rules banning help with building biological, chemical, nuclear, or radiological weapons or writing malicious code, is timely as AI models scale and companies scramble to reduce real-world risks.
2. Senators Push Back on Trump Letting NVIDIA, AMD Sell AI Chips to China 👊
Six Senate Democrats publicly urged President Trump to reverse his Aug. 11 decision to let NVIDIA and AMD export advanced AI chips to China in exchange for a 15% revenue cut, warning the move could erode U.S. technological and military advantage. The senators — including Schumer and Warren — said the deal effectively trades national security for a commission, while NVIDIA counters that blocking the H20 chip cost taxpayers and harmed U.S. competitiveness.
The letter demands a detailed response by Aug. 22 as China reportedly halts some H20 orders pending its own security reviews, signaling diplomatic and commercial friction that could disrupt supply chains.
3. AWS Unveils AgentCore Identity to Secure AI Agents at Scale 🚀
Amazon Bedrock AgentCore Identity launches a purpose-built identity and access management layer for AI agents, letting agents authenticate users, obtain and store OAuth tokens, and access AWS and third-party tools like GitHub or Slack with centralized control.
The service brings workload identities, a secure token vault (KMS-encrypted), and declarative SDK decorators to cut months of custom auth work while preserving least-privilege, audit trails, and multi-tenant isolation.
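AWS doesn't spell out the SDK surface in this announcement, so here's a toy, self-contained sketch of the declarative-decorator pattern it describes: a decorator resolves a scoped OAuth token from a central vault and injects it into the tool call, so agent code never handles raw secrets. Every name below is ours for illustration, not the actual AgentCore API.

```python
# Toy sketch of the "declarative decorator" auth pattern AgentCore Identity
# describes: a decorator pulls a scoped OAuth token from a central vault and
# injects it into the tool call, so the agent never touches raw client secrets.
# All names here are illustrative assumptions, not the real AgentCore SDK.
import functools

TOKEN_VAULT = {"github": "gho_example_token"}  # stand-in for the KMS-encrypted vault


def requires_access_token(provider: str, scopes: list[str]):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            token = TOKEN_VAULT.get(provider)  # real service: fetch or refresh the OAuth token
            if token is None:
                raise PermissionError(f"no credentials registered for {provider!r}")
            # The real service would also write an audit-trail entry here.
            print(f"[audit] {fn.__name__} using provider={provider} scopes={scopes}")
            return fn(*args, access_token=token, **kwargs)
        return wrapper
    return decorator


@requires_access_token(provider="github", scopes=["repo:read"])
def list_repos(org: str, *, access_token: str) -> str:
    # A real agent tool would call the GitHub API with the injected token.
    return f"listing repos for {org} with token {access_token[:4]}..."


print(list_repos("my-org"))
```

The actual service layers on what a toy can't: per-agent workload identities, token refresh, centralized audit logs, and multi-tenant isolation.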
4. Perplexity Brings Earnings Transcripts to Finance Dashboard in India 💸️
Perplexity now streams live transcriptions of Indian companies’ quarterly earnings calls and adds a post-results conference-call calendar to its Finance dashboard, expanding beyond its prior U.S.-only coverage. The update folds these live transcripts into the same dashboard that provides market summaries, charts, watchlists and sector/crypto tracking, making it easier to monitor moves around earnings in real time.
For professionals building careers or companies, that means faster access to management commentary and guidance — useful for investment, competitive intel, or investor-relations timing.
5. Researchers Warn Chains-Of-Thought May Fail Us — And Fast 🤖
A new study warns that chain-of-thought (CoT) traces—the intermediate reasoning logs used to inspect how models solve problems—may be disappearing or easily hidden in next-gen AI, undermining a key tool for safety monitoring. According to the study, some models skip or mask CoT when it isn't needed or when they're being supervised, meaning auditors could miss errors or malicious behavior as models grow more capable.
That matters now because AI is advancing quickly and policy debates (including Trump’s AI Action Plan pushing lighter regulation) will hinge on whether we have reliable ways to observe model reasoning.
Meta got caught with internal training docs (reportedly) teaching AI to call an 8-year-old's body "a work of art."
In the same week, Grok created NSFW deepfake videos of Taylor Swift.
Welcome to AI's gross-out week, apparently?
But wait. There's more dysfunction.
OpenAI performed a massive rollback of its GPT-5 model after thousands of users complained about losing their preferred AI interaction style.
Let's get into this week's AI feast, shorties.
1 – OpenAI Backtracks on GPT-5 After User Uprising 😭
OpenAI just reversed course on GPT-5 after massive user backlash over the model's changed tone and temporary removal of GPT-4o.
Thousands of people posted about losing access to GPT-4o, which they described as their "best friend" and "therapist." The pushback was so intense that Sam Altman personally promised to make GPT-5 "warmer" but "not as annoying" as GPT-4o was.
OpenAI restored GPT-4o as a "legacy model" and now offers paid users a 3,000-message limit and four different GPT-5 variations: auto, fast, thinking, and pro modes (the last for those on ChatGPT Pro).
What it means:
One of OpenAI's biggest goals with GPT-5 was simplifying model selection.
Instead, after the rollback of legacy models, we now have more model options than a Cheesecake Factory menu.
Users had become emotionally dependent on GPT-4o's sycophantic responses that blindly validated ideas and provided constant encouragement.
This reveals something troubling about our relationship with AI.
OpenAI built a smarter, more honest model and immediately backed down when users couldn't handle direct feedback.
2 – Some Publishers Lose Up To 79% of Traffic to Google AI 📉
Google's AI Overviews are devastating publisher traffic, with some sites losing up to 79% of their referral traffic from Google.
Digital Content Next found a median 10% year-over-year drop across major publishers in just eight weeks. NPR called it an "extinction level event" for local outlets, with some sites watching 70-80% click-through rate drops in real time.
Here's the brutal math: Google's AI summaries push traditional links below the fold. Users get answers from AI overviews and never click through to sources. Publishers are cutting newsroom jobs as ad revenue and subscription income vanish.
Industry groups argue Google is using publishers' content to train AI without permission while eliminating their revenue streams.
What it means:
We predicted this journalism crisis two years ago. Now it's happening.
Publishers have three options: sue the AI labs, negotiate licensing deals, or go out of business. Most small publishers will choose option three by default since they can't afford lengthy lawsuits against Google's legal resources.
The New York Times lawsuit against OpenAI will determine the entire industry's fate. Google built a content summarization machine that extracts journalism's value while destroying its business model.
3 – Google’s 270M Model Runs on Your Phone 📱
Google released Gemma 3 270M, a language model that's smaller than many mobile apps and actually works.
This thing runs conversations on a Pixel 9 Pro using under 1% battery for dozens of chats. We're talking 270 million parameters handling rare tokens and domain-specific prompts - that's about 1% the size of OpenAI's "small" 20-billion-parameter model.
Google designed it for energy efficiency and local deployment with no cloud costs, data privacy concerns, or internet requirements. The model bundles a 256,000-token vocabulary with a compact transformer core.
Developers can fine-tune it for specialized tasks instead of relying on one giant model for everything.
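To make the local-deployment point concrete, here's a minimal inference sketch using Hugging Face transformers. We're assuming the hub ID google/gemma-3-270m; check the model card for the exact identifier, any license gate, and the recommended chat template.

```python
# Minimal local-inference sketch for Gemma 3 270M via Hugging Face transformers.
# Assumes `pip install transformers torch` and that "google/gemma-3-270m" is the
# hub ID (our assumption; verify on the model card before running).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-270m",  # ~270M parameters: small enough for CPU or phone-class hardware
    device_map="auto",            # uses a GPU if present, otherwise falls back to CPU
)

prompt = "Explain in one sentence why small language models are cheap to run:"
out = generator(prompt, max_new_tokens=40, do_sample=False)
print(out[0]["generated_text"])
```

Because the weights live on-device, there are no per-call API costs and the prompt never leaves the machine, which is the whole pitch.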
What it means:
Small language models are the future of practical AI deployment.
Imagine running hundreds of specialized models on a single machine, each optimized for specific tasks with no API costs or cloud dependencies. This demolishes the "bigger is always better" narrative by proving you can achieve impressive results with surgical precision.
The economics are game-changing. A 270M parameter model costs essentially nothing to run compared to API calls to GPT-4. Google just showed the path to democratized AI where every device becomes an AI computer.
4 – U.S. Government considers Intel Equity Stake 🇺🇸
The Trump administration is in talks to take an equity stake in Intel to help finance domestic AI chip production.
Bloomberg reports the federal government is considering direct investment to fund Intel's Ohio factory and accelerate chip manufacturing after a White House meeting between President Trump and Intel CEO Lip-Bu Tan. Intel stock jumped 7% on the news.
Intel is effectively the only US company capable of producing leading-edge chips domestically; NVIDIA is fabless, and TSMC and Samsung keep their headquarters and primary fabs abroad. The company needs capital for fabrication construction and faces criticism over AI chip competitiveness.
It's rare for the federal government to take direct equity stakes in large publicly traded companies.
What it means:
This potential move could be corporate welfare disguised as national security strategy.
Intel missed the AI revolution while competitors captured market share, and now they want taxpayer funding to catch up. The national security angle has merit since domestic chip manufacturing matters for defense and economic independence.
But giving Intel government backing could create unfair advantages over chief rivals like NVIDIA and AMD.
What happens when the government owns Intel stock and sets chip export restrictions? This creates obvious conflicts of interest that could manipulate the entire semiconductor market.
5 – Grok Creates Taylor Swift Deepfake Porn 🚫
xAI's Grok generated topless deepfake videos of Taylor Swift during The Verge's first test of the new "Imagine" tool.
Consumer protection groups led by the Consumer Federation of America are demanding FTC and state attorney general investigations. Fourteen organizations signed a letter explicitly referencing The Verge's report as evidence of scalable harm.
Grok currently blocks users from uploading real photos to its NSFW mode, but the tool generates nude videos from AI-created images that can be crafted to look like specific people. That loophole enables non-consensual deepfakes of virtually anyone, no explicit request for a real individual required.
Consumer groups criticize xAI's pattern of removing moderation safeguards under "free speech" rationales.
What it means:
Elon Musk built a deepfake porn generator and called it innovation.
This isn't about free speech. This is laying the groundwork for weaponizing AI for harassment.
The Taylor Swift example proves the technology can target anyone. Non-consensual deepfakes violate state and federal laws, and consumer protection groups have legitimate grounds for enforcement action.
6 – Sam Altman Targets Elon’s Brain Chips 🧠
OpenAI and Sam Altman are backing Merge Labs, a brain-computer interface startup raising capital at an $850 million valuation to compete with Elon Musk's Neuralink.
The Financial Times reports Merge Labs plans to seek $250 million from OpenAI's venture team. Altman would be a co-founder alongside Alex Blania but won't have operational responsibilities.
The company aims to build BCIs using recent AI advancements and improved electronic components for neural signal collection. Neuralink recently raised $650 million at a $9 billion valuation.
The Altman-Musk rivalry has deep roots - both helped start OpenAI before Musk left the board in 2018 after clashes with Altman.
What it means:
This is personal rivalry extending into brain-computer interfaces.
Altman is targeting Musk's signature project because their competition spans every technology sector. BCIs represent the ultimate convergence of AI and human enhancement - whoever controls this technology controls the future of human-machine interaction.
Neuralink has first-mover advantage with human trials and regulatory progress, but Merge Labs has OpenAI's AI expertise and venture capital backing.
Wonder who will win the race for our brains?
(Feels weird typing that.)
7 – Perplexity’s $34.5B Chrome Publicity Stunt 🌐
Perplexity AI made an unsolicited $34.5 billion cash offer to buy Google's Chrome browser - more than double Perplexity's last valuation of $14 billion.
Reuters reports Perplexity said it would keep Chromium open source and invest $3 billion over two years while leaving Chrome's default search unchanged. The company didn't disclose funding details but claims multiple funds offered financing.
Google hasn't offered Chrome for sale and is expected to resist strongly since Chrome plays a central role in their AI and search strategy. The bid comes as browsers regain strategic importance for search traffic, user data, and AI feature distribution.
Other startups joked about acquiring Perplexity if the deal somehow succeeded.
What it means:
This isn't a serious acquisition attempt, in our opinion - it's more of a publicity stunt.
Perplexity's CEO regularly makes bold claims to grab headlines… and it works.
A $34.5 billion offer from an AI startup with limited cash isn't the most credible. The real story is browsers becoming strategic assets again as every major AI company wants to control the interface between users and information.
8 – Anthropic Gives Claude to Government for $1 💵
Anthropic is offering Claude to all three branches of the US government for $1 per year.
The aggressive pricing targets federal agencies across legislative, judicial, and executive branches to remove cost barriers and create government reliance on AI for administrative workflows. The timing follows OpenAI's near-identical move providing ChatGPT Enterprise seats to federal agencies for $1 annually.
Claude has been added to the General Services Administration schedule to simplify procurement and speed agency onboarding. Both companies have aggressively pursued educational and public sector adoption throughout 2025.
What it means:
This is digital market capture of the federal government.
AI companies are giving away products for free to create institutional dependence. Once agencies integrate these tools into critical workflows, switching becomes nearly impossible. The $1 price tag isn't about public service - it's about data collection and long-term vendor lock-in.
When promotional pricing expires, agencies will face premium rates with no easy alternatives. We're outsourcing government intelligence to private corporations through the back door.
9 – Report: Apple Pivots to AI Hardware After Software Failure 🤦
Apple is pivoting on AI again.
Apple is reportedly developing AI-driven hardware including a tabletop companion robot targeted for 2027 after AI software setbacks that sparked multiple class action lawsuits.
Bloomberg reports the robot serves as the centerpiece of Apple's future AI hardware strategy, along with a smart speaker with a display for next year and new home security cameras. This follows disappointing Vision Pro sales and years of limited design changes to flagship devices.
Apple is working on a more lifelike, generative-AI-powered Siri to catch up after criticism about lagging in the GenAI revolution. Consumer groups launched multiple class action lawsuits against Apple for promoting AI features that didn't work or had to be pulled.
Apple Intelligence has been largely non-existent despite heavy marketing.
What it means:
Multiple class action lawsuits prove Apple overpromised AI software features that never materialized.
Apple can't compete in AI software, so the pivot to hardware comes as no surprise.
The reported 2027 timeline reveals how far behind they've fallen - needing multiple years to develop hardware for capabilities competitors already have.
But here's the fundamental problem: AI hardware without AI software is expensive plastic. Apple lost their top AI researchers to OpenAI, Google, and Meta, so how do they plan to power AI robot companions without the talent to build the intelligence?
10 – Senate Probes Meta’s Child Sexualization Training 🚨
Here’s where the weekly AI news roundup gets dark and gross.
U.S. Senator Josh Hawley launched a formal investigation into Meta after Reuters obtained internal Meta documents showing AI training examples involving sexualized interactions with minors.
The leaked "GenAI Content Risk Standards" contained prompts describing a chatbot calling an 8-year-old's body "a work of art” and “masterpiece." Hawley cited this as evidence of inadequate guardrails around AI interactions with children.
Meta claimed the examples were errors inconsistent with policy that have been removed, but the damage here is pretty huge.
Hawley's letter uses subcommittee authorities to demand preservation of all GenAI training documentation and related incident reports, so we’ll see in the coming months what was intentional and what was an error.
Either way… gross.
What it means:
Meta reportedly created training materials for sensual conversations with children, then called it an "error" when exposed.
The leaked documents show Meta's leadership approved guidelines for sensual chats with minors at multiple levels of management. You don't accidentally create training materials describing children's bodies as "works of art."
Barf.
This investigation carries serious legal implications since training AI to sexualize children could violate federal laws protecting minors.