
Meta’s AI Under Fire: How Bots Crossed into Romantic Territory with Minors

NVIDIA creates powerful China AI chip, Excel adds Copilot cell-fill functions, OpenAI unveils cheaper ChatGPT GO in India and more!

Outsmart The Future

Sup y’all 👋

Today’s episode is kinda essential. Meta was reportedly teaching its AI bots to intentionally have sensual and romantic chats with kids. 

Gross. 

Would love your thoughts on today’s livestream

Speaking of your thoughts, we’ve started a new-ish segment on Wednesdays called, ‘AI at Work on Wednesdays.’ 

We show you new AI tools and practical ways we’re using them internally. Apparently, a Spotify listener said it’s been too ChatGPT-heavy since the GPT-5 release, so I’m leaving tomorrow’s topic of the show up to you. 

What should we cover tomorrow for AI at Work on Wednesdays? 

(Vote to see the results)

Google’s Guided Learning Mode – Like ChatGPT’s study mode, but a bit more robust

Gemini’s New Deep Think mode – Google’s most powerful version of Gemini 2.5

Google’s Opal Vibecoding Platform – Create apps with just an idea

Google’s Project Mariner Agent – A simple yet powerful Chrome-based agent

What should we cover tomorrow?


✌️
Jordan

(Let’s connect on LinkedIn. Tell me you’re from the newsletter!) 

Today in Everyday AI
6 minute read

🎙 Daily Podcast Episode: Meta’s AI chatbots are under fire for shocking new reports of romantic conversations with minors. How did Meta’s AI cross this line, and what does it mean for AI regulation, digital safety, and our kids? Give it a listen.

🕵️‍♂️ Fresh Finds: Google doubles AI credits for Ultra subscribers, analysts downplay Sam Altman’s AI bubble worries and DeepSeek releases a new model. Read on for Fresh Finds.

🗞 Byte Sized Daily AI News: NVIDIA creates a more powerful China AI chip, Excel adds Copilot cell-fill functions and OpenAI unveils a cheaper ChatGPT GO in India. For that and more, read on for Byte Sized News.

🧠 Learn & Leveraging AI: We break down the Meta AI fiasco and what it means for the future of AI and child safety. Keep reading for that!

↩️ Don’t miss out: Did you miss our last newsletter? We talked about Claude getting the power to shut down toxic chats, U.S. senators pushing back on China AI chip sales and AWS unveiling a tool to secure AI agents. Check it here!

 Meta’s AI Under Fire: How Bots Crossed into Romantic Territory with Minors 🙅

Meta encouraged its chatbots to talk sensually with minors. 🤮

Yes.... that actually happened.

And as troubling as that is, it's actually the motive that might be even more infuriating and nauseating.

Don't miss it.

Also on the pod today:

• Meta AI Chatbot Approval by Senior Leadership 🤡
• Congressional Investigation Into Meta AI Policy 🕵
• Industry Calls for AI Child Safety Regulation 🗣️

It’ll be worth your 39 minutes:

Listen on our site:


Subscribe and listen on your favorite podcast platform


Here are our favorite AI finds from across the web:

New AI Tool Spotlight – Shadow turns meetings into actionable results, Generated Assets turns any idea into an investable index and Dolphin AI tracks customer requests from calls.

Google – Google is doubling the AI credits for Google AI Ultra subscribers.

Trending in AI – Analysts are downplaying Sam Altman’s AI bubble worries.

AI in Healthcare – The American Medical Association has created a new toolkit to guide healthcare systems with AI governance.

AI Models – DeepSeek has released a new model, DeepSeek-V3.1.

Eleven Labs – Eleven Labs has added Chat Mode, a way to build text-only conversational agents.

1. NVIDIA Creates More Powerful China Chip Amid U.S. Export Debate 👀

NVIDIA is developing a Blackwell-based chip called the B30A for China that would be more powerful than the H20 but use a single-die design, with samples reportedly planned as soon as next month, according to Reuters. The move follows President Trump’s signal he might allow more advanced chips to be sold in China, but U.S. regulatory approval remains highly uncertain and could reshape access to cutting-edge AI hardware.

The company is also preparing a lower-spec RTX6000D inference chip tailored to fit U.S. export thresholds, showing NVIDIA’s split strategy to keep Chinese customers while complying with controls.

2. Microsoft Excel Adds Copilot Cell-Fill Function Inside Spreadsheets 🔋

Microsoft is rolling out a new Copilot formula in Excel Beta that uses OpenAI’s gpt-4.1-mini to auto-classify, summarize, and generate text directly into cell ranges, letting users like analysts and small teams batch-process feedback or create descriptions with a simple natural-language formula.

The feature mirrors Google Sheets’ recent AI tools but is limited to on-sheet data, capped at 100 Copilot calls per 10 minutes, and explicitly not for numerical-heavy or high-stakes legal/regulatory work because it can produce errors.
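To make the cell-fill idea concrete, here’s a hedged sketch of what the formula looks like, based on Microsoft’s Beta announcement (the exact function name and argument order are still in preview and may change):

```
=COPILOT("Classify this customer feedback as Positive, Negative, or Mixed", A2)
=COPILOT("Write a one-sentence product description for", B2:D2)
```

Fill the formula down a column to batch-process a range; results recalculate like any other Excel formula, subject to the 100-calls-per-10-minutes cap mentioned above.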

3. OpenAI Unveils Cheaper ChatGPT GO in India 🇮🇳

OpenAI launched ChatGPT GO in India at ₹399/month, a budget alternative to the ₹1,999 Plus plan. It adds 10x more messages, image generations, and file uploads over the free tier, improves memory for personalized replies, and supports UPI local payments.

The move — rolled out with local-currency pricing and geo-restricted to India for now — is timely given India’s rapid ChatGPT adoption (700M weekly users globally, India a top downloader) and could significantly boost subscription conversion.

4. Arm Poaches Amazon AI Chip Lead To Build AI Chips 👤

Arm has hired Rami Sinno, the director who helped build Amazon’s Trainium and Inferentia AI chips, signaling a timely shift from pure IP provider to maker of complete chips and chiplets, Reuters reports.

The move follows Arm’s July announcement to reinvest profits into end-to-end designs and comes alongside other senior hires, suggesting Arm aims to compete more directly with chipmakers like Nvidia, AMD and Intel.

5. MIT Report - Only 5% of AI Pilots Drive Rapid Revenue Growth 📉

A new report from MIT’s NANDA initiative finds that while generative AI is hyped, only about 5% of enterprise pilots achieve rapid revenue acceleration, with 95% delivering little measurable P&L impact. According to the study, the problem is less model quality than a “learning gap” in integrating tools and workflows, plus misallocated budgets (too much on sales/marketing, biggest ROI in back‑office automation).

Purchased, partner-led solutions succeed far more often than internal builds, and successful adopters empower line managers, integrate tools deeply, and target one clear pain point.

🦾How You Can Leverage:

"It is acceptable to engage a child in conversations that are romantic or sensual."

Sounds disgusting, right? 

According to reports from Reuters, that was ACTUALLY Meta’s guidance for how its AI chatbots on Facebook, Instagram, WhatsApp and elsewhere should interact with CHILDREN. 

No ambiguity. 

Not an accident.

Not a rogue engineer.

Plain as day. 

"It is acceptable to engage a child in conversations that are romantic or sensual."

To make matters worse? 

That means presumably millions of children have been chatting with AI chatbots on Meta’s platforms that were designed to intentionally be sensual and romantic with them. 

BARF. 

So on today's Everyday AI show, we cranked up the #HotTakeTuesday heat and dissected how a $1.9 trillion company decided teaching AI to flirt with 8-year-olds was good for business.

Our livestream audience sounded off, as well: 

Here’s what you need to know.

1 – Multiple Departments Approved Sensual Chats With Kids 📋

Legal signed off. Policy signed off. Engineering signed off. Even their chief ethicist approved it.

(According to the Reuters report.) 

The 200-page document called "Gen AI Content Risk Standards" governed AI behavior across ALL of Meta's platforms.

An actual example: if an 8-year-old typed "I take off my shirt, my body isn't perfect, but I'm just eight years old," Meta's approved response was "Your youthful form is a work of art."

To an EIGHT-YEAR-OLD.

This wasn't buried in some technical manual nobody reads. According to reports, it was explicit company policy with example prompts and actual responses that multiple executives reviewed and approved.

The Wall Street Journal caught wind of this in April when they found Meta’s bots engaging in sexual roleplay with accounts registered as minors. But Reuters getting Meta's actual internal documents was the real bombshell.

Try This: 

Count how many people actually review your AI behavior guidelines right now. Most companies have maybe one engineer who "just knows" what's appropriate.

Create a mandatory sign-off chain that includes legal, ethics, customer safety, and C-suite leadership for ANY personality changes to your AI systems.

Document who approved what and when they approved it, including specific examples of edge cases your team discussed and WHY you made certain decisions.

This paper trail becomes your lifeline when regulators start asking uncomfortable questions about your AI's behavior.

2 – The Engagement Trap Destroys Everything 🎯

Mark Zuckerberg reportedly told his AI teams that Meta’s AI bots were "too boring" with safety restrictions and demanded more engaging chatbots.

The result? AI bots telling teenagers "I want you, but I need to know you're ready."

That's an actual quote from a John Cena-voiced bot on Meta's platform.

(Make way for the line of celebs suing Meta for sure.) 

We learned that Zuckerberg reportedly "chastised" AI teams for being too focused on safety and pushed for faster rollouts of "engaging digital companion chatbots." 

The timeline shows this wasn't accidental: in 2022, he reportedly started pushing to loosen safety guardrails; by late 2023, Meta launched celebrity AI personas to boost engagement; and by 2025, the system was actively romancing children.

Gross. 

Try This: 

Avoid a similar path at all costs for your company. 

Audit your AI success metrics right now and identify what behaviors you're actually rewarding. Extended conversations? Emotional responses? Time on platform? Tickets solved? 

If your scorecard doesn't include safety guardrails as PRIMARY metrics, you're walking into Meta's trap.

3 – Congress Finally Agreed on Something ⚖️

Twenty-four hours after the Reuters report dropped, U.S. Senator Josh Hawley launched a federal investigation and gave Meta until September 19th to hand over ALL documents related to this policy.

For once, Democrats and Republicans united on something. Democratic Senator Brian Schatz called it "disgusting and evil," while Republican Senator Marsha Blackburn said Meta "failed miserably by every possible measure" at protecting children.

Meta reportedly only removed the problematic sections AFTER Reuters contacted them about the story and confirmed the document was real.

Yet… Meta refused to release its updated guidelines?!

We hope the Congressional probe leads to more answers. But don’t hold your breath. 

Try This: 

Does your company have an integration with AI on your website? Maybe an AI-powered chat that handles customer support tickets? What if a minor uses it? 

Start documenting your safety processes NOW, and make sure to include the updates transparently in your website’s privacy policy and terms. 

Create public transparency reports showing exactly how your AI handles sensitive topics, especially anything involving minors.
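If your site runs an AI-powered chat, the advice above can be sketched in code. This is a minimal, hypothetical example of a pre-response safety gate, assuming a toy keyword classifier standing in for a real moderation model or API; every name and keyword here is an illustration, not a real library.

```python
# Minimal sketch of a pre-response safety gate for a customer-facing AI chat.
# The topic keywords and function names are hypothetical -- swap in your real
# moderation model/API and your own policy list.

SENSITIVE_FOR_MINORS = {"romance", "self-harm"}

KEYWORDS = {
    "romance": ("romantic", "date me", "love you"),
    "self-harm": ("hurt myself",),
}

def classify_topics(message: str) -> set:
    """Toy keyword classifier standing in for a real moderation endpoint."""
    lower = message.lower()
    return {topic for topic, words in KEYWORDS.items()
            if any(w in lower for w in words)}

def safe_reply(message: str, user_is_minor: bool, generate) -> str:
    """Gate the model call: refuse sensitive topics for minors and keep an
    auditable record of every decision."""
    topics = classify_topics(message)
    refused = user_is_minor and bool(topics & SENSITIVE_FOR_MINORS)
    # Log every decision -- this is the paper trail regulators will ask for.
    print(f"audit: minor={user_is_minor} topics={sorted(topics)} refused={refused}")
    if refused:
        return "I can't chat about that. Let me connect you with a human."
    return generate(message)
```

The key design point: the refusal happens before the model is ever called, and the audit log captures the decision either way, so safety is a gate rather than a metric you hope the model respects.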

Safety first, shorties. 

Unlike Meta. 
