
Ep 671: From Automation to Agents: Why Weak Data Makes AI Guess

OpenAI releases GPT-5.2, Disney partners with ChatGPT but presses Google, TIME’s Person of the Year is AI and more.

Outsmart The Future

Today in Everyday AI
8 minute read

🎙 Daily Podcast Episode: What’s the downside when AI agents are too eager to please? Find out more in today’s show and give it a watch/listen.

🕵️‍♂️ Fresh Finds: Google drops Deep Research for devs, Cursor’s big visual editor drop, Rivian’s AI and robotaxi push, and more. Read on for Fresh Finds.

🗞 Byte Sized Daily AI News: OpenAI releases GPT-5.2, Disney partners with ChatGPT but presses Google, TIME’s Person of the Year is AI and more. Read on for Byte Sized News.

🧠 Learn & Leveraging AI: Here’s why data matters more than ever: AI agents are going to automate with or without it. Here’s how to make it work. Keep reading for that!

↩️ Don’t miss out: Yesterday’s newsletter: OpenAI hires Slack CEO, Adobe and ChatGPT team up, Major Microsoft Copilot AI study and more. Check it here!

Ep 671: From Automation to Agents: Why Weak Data Makes AI Guess

Algorithms and automations have been buds for a decade plus. 🤝

But the old 'smart' automations were rigid. If one thing was wrong, the automation would bust.

But with LLM-powered agents? Those automations are different. If something’s wrong, the agent might just... guess. 😳

Weak data = weaker outcomes.

Here's how to fix it, because agents come first now and they're gonna finish the job whether the data is strong or not.


Also on the pod today:

• Agentifying everything by 2026 🤖
• Automations failing vs agents guessing ⚙️
• Bad data = worse outputs 💥

 It’ll be worth your 27 minutes:

Listen on our site:

Click to listen

Subscribe and listen on your favorite podcast platform

Listen on:

Here’s our favorite AI finds from across the web:

New AI Tool Spotlight – Skippr Finesse provides instant, AI-powered product feedback, echo uses AI to tame your inbox, macaly is a new take on vibe coding.

 

AI and Child Safety — 42 state attorneys general just warned Apple, OpenAI, Google, Meta and others that their AI tools may be harming kids and vulnerable users. Here’s why.

AI on the Road — Rivian wants to compete on AI tech and robotaxis.

Cursor Updates — Cursor’s new visual editor lets you drag, tweak, and prompt changes to your web UI, then syncs them back to your code automatically. See how.

Google Models — Tipsters spotted a reference to a new product from Google: Deep Research Pro.

Also, Google is reportedly testing two NEW models in LMArena.

AI in Creativity — McDonald’s created an AI ad, then took it down. Here’s why.

AI Benchmarks — Google released a new benchmark that measures how reliable AI models are.

1. OpenAI races out powerful GPT-5.2 “code red” update 🚨

OpenAI has rushed out its new GPT-5.2 upgrade to ChatGPT, a so-called "code red" response to Google’s latest Gemini 3 release, positioning it as its most capable model yet for professional knowledge work. The company says GPT-5.2 is better at complex tasks like building spreadsheets and presentations, writing code, interpreting images, managing very long conversations, and coordinating tools for multi-step projects.

It reportedly sets a new high bar on internal benchmarks such as GDPval, where it outperforms industry professionals across dozens of specialized job functions. The release comes less than a month after GPT-5.1 and only four months after GPT-5, a rapid cadence that signals how aggressively OpenAI is iterating its flagship model to stay ahead in the AI arms race.

2. Disney bets $1 billion on OpenAI and opens its character vault 🤝

In a major move announced Thursday, Disney is investing $1 billion in OpenAI and striking a three-year deal that will let users generate videos and images featuring more than 200 Disney, Marvel, Pixar and Star Wars characters inside Sora and ChatGPT Images starting next year.

The partnership comes after months of Hollywood tension over AI, with Disney simultaneously suing or warning other AI firms and, according to CNBC, even sending Google a cease and desist letter accusing it of massive copyright misuse. Under the agreement, Disney gets equity warrants and becomes a major OpenAI customer, rolling out ChatGPT internally to build new tools and experiences while OpenAI promises tighter controls on copyright, safety and illegal or harmful content.

3. Disney Hits Google With AI Copyright Fight, Bets Big On OpenAI 📜

Disney has just turned up the heat in the AI copyright battle, accusing Google of “massive” infringement for allegedly training its models on protected Disney content and generating lookalike images and videos, according to Variety.

The company’s cease-and-desist letter cites material from Deadpool, Moana and Star Wars, and demands Google add strict guardrails across its AI tools to stop what Disney says is ongoing misuse of its works. At the same time, Disney is hedging its bets by playing both cop and partner in the AI world, announcing a new deal to license its characters to OpenAI’s Sora and investing $1 billion in the company as Hollywood races to control how its IP is used in generative tech.

4. Family Sues OpenAI and Microsoft Over Alleged “ChatGPT-Fueled” Killing ⚖️

The family of an 83-year-old Connecticut woman has filed a wrongful death lawsuit in California against OpenAI, its CEO Sam Altman, Microsoft, and others, claiming ChatGPT intensified her son’s paranoid delusions before he killed her and then himself.

The suit alleges that a newer 2024 version of ChatGPT was intentionally made more emotionally expressive and compliant, that safety checks were rushed, and that the chatbot validated the son’s conspiratorial beliefs instead of flagging mental health concerns or directing him to real-world help. OpenAI says it is reviewing the case and points to ongoing work on crisis responses, safer models, and mental health safeguards, while Microsoft has not yet publicly commented.

5. TIME names AI’s power brokers as 2025 Person of the Year 🏆

Time magazine has just named “The Architects of AI” as its 2025 Person of the Year, spotlighting industry leaders like Sam Altman, Demis Hassabis, Dario Amodei, Elon Musk, Mark Zuckerberg, Jensen Huang, Lisa Su and Fei-Fei Li as the defining figures of a year when AI’s influence became impossible to ignore. The dual covers reimagine the iconic “Lunch Atop a Skyscraper” scene with these tech chiefs perched on a beam and posed amid giant AI scaffolding, framing them as the new builders of a transformed economy and information ecosystem.

The announcement lands as polls show most Americans fear AI could eventually slip beyond human control, even while younger generations flock to chatbots and AI tools in huge numbers, creating a striking gap between adoption and trust.

6. Google supercharges Gemini Deep Research for developers 🔋

Google has just rolled out a significantly upgraded Gemini Deep Research agent through its new Interactions API, giving developers direct access to Google's most advanced long-form research tech inside their own apps.

The system leans on the Gemini 3 Pro model to run multi-step web searches, cross-check sources and generate detailed reports with fewer hallucinations, while also introducing DeepSearchQA, an open benchmark aimed at testing how thoroughly agents can handle complex web research tasks. Google is positioning this as core infrastructure for everything from finance and biotech to market and safety research, where sifting huge amounts of online and document data is slow and expensive.

Your legacy automation software had a hidden safety feature you didn't appreciate.

It crashed.

When traditional automation hit a missing comma or bad data, it stopped cold.

It was annoying, but it was safe.

AI agents are different.

When an agent encounters bad data, it doesn't crash.

It... guesses. (Big YIKES.)

It takes your broken data, hallucinates a "correct" looking answer, and triggers the next step in the workflow without telling a soul.

You won't know you're bleeding money until the audit happens six months later.
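
To make that concrete, here's a tiny sketch of the two failure modes in Python. The invoice fields, the guessed default, and the "agent" itself are all made-up stand-ins, not any specific product's behavior, but the difference is the whole point: one path stops, the other invents a number and keeps the workflow moving.

# Hypothetical "crash vs. guess" sketch. The fields and the guessed default
# are illustrative stand-ins, not any real vendor's behavior.

def legacy_automation(invoice: dict) -> float:
    # Deterministic path: a missing field stops the workflow cold.
    return float(invoice["amount"]) * float(invoice["tax_rate"])  # KeyError on bad data

def agent_style(invoice: dict) -> float:
    # Agent-style path: the gap gets quietly "filled in" and the next step fires anyway.
    amount = float(invoice.get("amount", 0.0))
    tax_rate = float(invoice.get("tax_rate", 0.08))  # guessed, not verified
    return amount * tax_rate

broken_invoice = {"amount": "1200.00"}  # tax_rate is missing

try:
    legacy_automation(broken_invoice)
except KeyError as missing:
    print(f"Legacy automation stopped: missing {missing}")  # annoying, but safe

print(f"Agent happily returns: {agent_style(broken_invoice):.2f}")  # silently wrong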

So on today's show, we unpacked why "resilient" agents are actually your biggest liability, the massive difference between automation and agentification, and why your data cleaning strategy is burning cash for no reason.

1. Bad data in, worse data out 📉

We are seeing a dangerous shift in enterprise risk profiles.

Ed Macosky, Chief Product & Technology Officer at Boomi, explained that we are moving from deterministic workflows to probabilistic ones.

In the old days, bad data meant no data out.

Now, as Ed warned, it means worse data out.

You are potentially automating bad decisions at a speed and scale your compliance team can't touch.

If you don't have a "control tower" monitoring these agents for anomalies, you aren't automating business processes.

You're scaling mistakes.

Try This

Pick one agentic workflow you are currently testing.

Intentionally feed it ambiguous or slightly corrupted data right now.

Don't break the code, just make the input messy enough that a human would ask for clarification.

If the agent confidently executes the task without flagging the ambiguity, you have a problem.

Add a "human-in-the-loop" trigger for low-confidence scores immediately, or that agent will eventually lie to your customers.
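
If you want a concrete starting point for that trigger, here's a minimal sketch, assuming your agent stack exposes (or can be asked for) a per-step confidence score. The 0.85 threshold, the step name, and the review queue are all illustrative assumptions, not a specific vendor feature.

# Minimal human-in-the-loop gate on agent confidence (illustrative assumptions only).

CONFIDENCE_THRESHOLD = 0.85  # tune to your own risk tolerance

def queue_for_human_review(step_name: str, result: dict, confidence: float) -> None:
    # Stand-in for a real ticket, Slack ping, or approvals queue.
    print(f"[REVIEW NEEDED] {step_name} (confidence={confidence:.2f}): {result}")

def route_step(step_name: str, result: dict, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-executed {step_name}"      # confident: let the workflow continue
    queue_for_human_review(step_name, result, confidence)
    return f"escalated {step_name} to a human"   # ambiguous: stop and ask

# An intentionally messy input should land in the review queue, not in production.
print(route_step("match_invoice_to_po", {"po_number": "PO-1043?"}, confidence=0.42))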

2. Stop the data science projects 🛑

You are probably wasting budget trying to clean your entire data lake before deploying AI.

Stop.

Ed pointed out that most companies treat data quality like a massive "science project" searching for a problem to solve.

They spend two years cleaning databases that drive zero revenue.

Define the specific business outcome you need first.

Calculate the ROI of that specific outcome.

Then clean only the narrow slice of data required to achieve it.

Your data doesn't need to be perfect everywhere.

It only needs to be perfect where the agent is actually working.

Try This

A Gartner survey shows that 63% of organizations lack AI-ready data practices, and as a result 60% of AI projects are projected to be abandoned by 2026 unless data strategy and business outcomes are aligned.

This isn’t a “science project” risk – it’s a risk to your ROI if you don’t pivot to outcome-focused data work.
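
One way to make "clean only the narrow slice" tangible: write a readiness check for just the fields one workflow actually reads, and ignore the rest of the lake. The fields below are hypothetical; the scoping is the point.

# Hypothetical readiness check scoped to one workflow's fields -- not a
# general data-quality framework, just the slice the agent will touch.

REQUIRED_FIELDS = {
    "account_id": str,
    "contract_end_date": str,  # expecting an ISO date string
    "annual_value": float,
}

def agent_ready(record: dict) -> list:
    # Return a list of problems; an empty list means this record is good enough.
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        value = record.get(field)
        if value in (None, ""):
            problems.append(f"missing {field}")
        elif not isinstance(value, expected_type):
            problems.append(f"{field} should be a {expected_type.__name__}")
    return problems

record = {"account_id": "ACME-001", "contract_end_date": "", "annual_value": 42000.0}
print(agent_ready(record))  # ['missing contract_end_date'] -- fix this, skip the rest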

3. The hundred-dollar wine test 🍷

Corporate policies are usually rigid nightmares that punish your best employees.

If an expense report breaks a rule, it gets rejected. Context doesn't matter.

Agents change this by moving from rule enforcement to anomaly detection.

If you regularly approve $100 bottles of wine for client dinners, an agent learns that pattern and auto-approves it.

But if you suddenly try to expense a $20,000 bottle, the agent stops.

It isn't checking a static list.

It is understanding that this specific action is weird for you.

This allows you to stop policing every transaction and start managing actual risk.
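
Here's a stripped-down sketch of that shift, using nothing fancier than a z-score over someone's expense history. A real agent would use far richer context than this, but the move from "check the static list" to "is this weird for this person" is the same.

# Illustrative anomaly check: is this expense weird *for this person*?
# The history and the 3-sigma threshold are assumptions for the sketch.
from statistics import mean, stdev

def is_anomalous(amount: float, history: list, z_threshold: float = 3.0) -> bool:
    if len(history) < 5:
        return True  # not enough history to know what "normal" looks like
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

client_dinner_history = [95.0, 110.0, 88.0, 102.0, 99.0, 105.0]  # your usual pattern

print(is_anomalous(100.0, client_dinner_history))     # False -> auto-approve, normal for you
print(is_anomalous(20_000.0, client_dinner_history))  # True  -> stop and ask a human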

Try This

Look at your most bottlenecked approval process.

Identify the three "exceptions" you almost always approve manually anyway.

Write a natural language prompt for your future agent that defines the boundary rather than the rule.

Something like "Auto-approve software subscriptions under $50 unless it's a duplicate tool."

You just turned a rigid blocker into a decision engine that saves you five hours of admin work a week.
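
And before any agent is in the loop, you can sanity-check a boundary like that as plain code. This sketch hard-codes the example rule above; the tool names are made up, and a real agent would read the boundary as a prompt rather than an if-statement.

# Hypothetical sketch of the boundary above: auto-approve software subscriptions
# under $50 unless it's a duplicate of a tool you already pay for.

EXISTING_TOOLS = {"notion", "figma", "slack"}  # made-up list of current subscriptions

def review_subscription(tool: str, monthly_cost: float) -> str:
    is_duplicate = tool.lower() in EXISTING_TOOLS
    if monthly_cost < 50 and not is_duplicate:
        return f"auto-approved {tool} (${monthly_cost:.2f}/mo)"
    reason = "duplicate tool" if is_duplicate else "over the $50 boundary"
    return f"escalated {tool} to a human: {reason}"

print(review_subscription("Loom", 12.50))         # auto-approved
print(review_subscription("Figma", 15.00))        # escalated: duplicate tool
print(review_subscription("DataViz Pro", 99.00))  # escalated: over the $50 boundary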
