Ep 714: OpenAI acquihires OpenClaw, DeepSeek could be in deep trouble, Google takes back AI model crown and more
OpenAI acquires OpenClaw, Meta brings Manus AI agents to Telegram, Amazon prepares to spend $200 billion expanding AWS data centers and more
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Outsmart The Future
Today in Everyday AI
8 minute read
🎙 Daily Podcast Episode: OpenAI just scooped up the team behind OpenClaw agents, DeepSeek is under fire, and Google reclaimed the AI model crown. Here’s what actually changed this week. Give today’s show a watch/read/listen.
🕵️‍♂️ Fresh Finds: Perplexity is testing an ultra-fast “Gamma” mode, Microsoft prepares a health tab in Copilot, Elon Musk teases Grok 4.20 and more. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: OpenAI acquires OpenClaw, Meta brings Manus AI agents to Telegram, Amazon prepares to spend $200 billion expanding AWS data centers and more. Read on for Byte Sized News.
💪 Leverage AI: This week reshaped who’s ahead in AI and how risky the frontier is getting. Here’s what actually mattered. Keep reading for that!
↩️ Don’t miss out: Miss our last newsletter? We covered: Microsoft AI CEO says AI will automate most jobs, OpenAI calls out DeepSeek’s distillation, reports point to a deepening Microsoft-OpenAI divide and more. Check it here!
Ep 714: OpenAI acquihires OpenClaw, DeepSeek could be in deep trouble, Google takes back AI model crown and more
Is OpenAI now the most… Open AI company? 🦞
After acquihiring OpenClaw, OpenAI is doubling down on open source, scooping up Anthropic's fumble for a (seemingly) easy score.
And that wasn't Anthropic's only slip-up of the week, as the bad AI news piled up for the Claude maker.
And Google?
Silently shipped the world's (new) most powerful model.
Did you miss any of that?
Don't worry. And definitely don't spend countless hours each week wondering how AI developments will impact you.
That's our job.
And with our weekly AI News That Matters segment, we keep you on the cutting edge.
Also on the pod today:
• Microsoft breaks from OpenAI? 💔
• DeepSeek distills US models 🔥
• Anthropic’s safety lead resigns 🚨
It’ll be worth your 46 minutes:
Listen on our site, or subscribe and listen on your favorite podcast platform.
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – ToolSpend tracks your AI spend across providers in one dashboard, NVIDIA PersonaPlex offers natural conversational AI with any role and voice, and PenguinBot AI is your AI employee, working 24/7.
Trump opposes AI safety bill — Trump is blocking Utah’s AI safety bill, pushing for one national rulebook. Critics say it leaves families unprotected.
Perplexity Tests “Gamma” Mode — Perplexity’s hidden “Gamma” mode may be powered by Grok 4.20 and it’s blazing fast. Something big is coming next week.
Grok 4.20 Tease — Elon Musk teases Grok 4.20 with major upgrades coming next week. Stay tuned.
AI Cheating on AI — A KPMG partner was fined for using AI to cheat on an AI course. Over two dozen staff were also caught.
Anthropic Bengaluru Office — Anthropic launched its Bengaluru office and is powering AI for top Indian companies and nonprofits. See what’s next.
Gemini 3 Deep Think Visuals — AI now designs and tests real-world structures from images.
MiniMax M2.5 Released — MiniMax’s M2.5 model is out, boasting faster speeds and better coding. Curious how it compares?
Microsoft Copilot Health — Microsoft is prepping a Health tab in Copilot, with wearable and medical data connectors coming soon. Click to learn more.
xAI Updates — xAI Grok Build is turning into a full IDE, with multiple AI agents and a new arena mode for smarter coding.
1. OpenAI Snags OpenClaw Creator in Push for Smarter Agents 🧠
OpenAI CEO Sam Altman just announced that Peter Steinberger, the developer behind viral AI agent OpenClaw, is joining the company to lead the next wave of autonomous personal agents. OpenClaw, which quickly gained global attention, will now live on as an open-source project under OpenAI’s support.
With top tech giants racing for AI talent and OpenAI’s eye on expanding its $500 billion valuation, this move signals a fierce battle for dominance as smarter AI tools become essential for both businesses and consumers.
2. Meta Brings Manus AI Agents to Telegram 💻
Meta just launched Manus Agents, putting powerful AI directly into Telegram chats, and promising to shake up how people interact with personal assistants. With this rollout, users get full access to Manus’s multi-step reasoning, image and voice handling, and task automation—no tech setup required.
The move signals a broader push to make AI agents instantly available in everyday messaging apps, removing the barriers that have kept advanced tools out of reach for most users.
3. Kimi Claw Launches, Bringing AI Tools Directly to Your Browser 🔍
Kimi.ai has just unveiled "Kimi Claw," a browser-based suite that lets users access thousands of community-driven skills, generous cloud storage, and live pro-grade data search without needing extra hardware.
Allegretto members and higher can now tap into ClawHub's 5,000+ skills library, integrate third-party setups, and connect with platforms like Telegram. The announcement signals a major shift toward making advanced AI tools accessible around the clock, minimizing reliance on physical devices.
4. ByteDance Tightens AI Video Tool Safeguards Amid Copyright Heat 🔥
ByteDance is scrambling to beef up protections on its new Seedance 2.0 AI video generator after Hollywood studios and Disney accused it of copyright violations, sparking a wave of legal threats.
Viral clips created with Seedance 2.0 have reportedly featured copyrighted characters and celebrity likenesses, fueling industry outrage. ByteDance says it’s hearing the concerns and pledges to prevent unauthorized use, but critics argue the company is ignoring U.S. copyright laws that protect creators and jobs.
5. Amazon Bets $200 Billion on AI Cloud Boom ☁️
Amazon is doubling down on artificial intelligence, preparing to spend around $200 billion to expand AWS data centers, custom chips, and AI infrastructure, according to the Financial Times.
This massive investment signals a new era, as skyrocketing enterprise AI workloads are straining current cloud capacity and forcing providers to build at breakneck speed. The move not only cements cloud computing as the backbone of future automation but also ramps up competition with rivals like Microsoft and Google.
6. Pentagon Eyes Anthropic as Supply Chain Risk 🪖
The Pentagon is on the verge of labeling AI powerhouse Anthropic a supply chain risk, potentially forcing defense contractors to cut ties with the company, Axios reports.
This rare move usually targets foreign adversaries, but stems from contentious negotiations over how the military can use Anthropic’s advanced Claude AI. Officials say the stakes are high since Claude is deeply embedded in classified systems, but Anthropic wants stronger safeguards against mass surveillance and autonomous weapons.
Oh, YOU thought the biggest AI news was a Copilot model update?
Awkward.
While everyone was distracted by the OpenAI-Microsoft pseudodrama, the real power moves happened (somehow?!) in total silence.
Google dropped the world’s most powerful AI model while nobody was looking. Claude can help with heinous crimes. Microsoft's AI chief said AI would automate white collar jobs in 18 months. And a leading safety researcher was so terrified by what he saw, he quit tech entirely to write poetry.
Oh, and the not-so-quiet OpenAI and OpenClaw partnership. Sheesh!
The industry isn't just moving fast; it’s breaking things. And this week? It mostly broke our understanding of what’s safe.
What’d you miss?
(If you’re reading us daily, you’ll stay ahead. Don’t worry.)
But if you blinked, here is what matters for AI this week.
1. Microsoft: The "OpenAI Reliance" is Over 💔
Is the most powerful AI partnership gonna slowly fade?
According to reports, Microsoft is preparing to launch its own advanced foundation models later this year, signaling a massive shift away from relying mostly on OpenAI's technology to power Copilot.
Mustafa Suleyman, the head of Microsoft AI, emphasized the need for Microsoft to build "frontier" models using their own massive computing power.
Frank Shaw, Microsoft's communication chief, clarified that the company will continue working with OpenAI. But they will use their own models for specific tasks as they adopt a "multi-model" world.
New corporate structures at OpenAI actually gave Microsoft the freedom to start building in-house.
What it means: Microsoft is hedging its bets against being too reliant on one provider.
In doing so, they also want to become a direct competitor in the model space.
For enterprise teams who built workflows around GPT-powered Copilot, this could be a rocky transition. Security and permissions issues are already holding users back, and a model swap won't help.
2. OpenAI Accuses DeepSeek of Stealing Data 🕵️‍♂️
Remember when everyone lost their minds over how cheap DeepSeek was?
Wellllllll…. It obviously wasn’t true. Like we told you at the time.
New reports say OpenAI has warned US lawmakers that Chinese AI startup DeepSeek is actively working to bypass restrictions and copy advanced US-made AI models.
According to a memo seen by Reuters, DeepSeek employees developed methods to evade OpenAI's access controls using hidden third-party routers.
They are using a technique called "distillation," where a smaller, newer model learns by training on the outputs of a more advanced model.
OpenAI told lawmakers that these efforts are an ongoing attempt to "free ride" on capabilities developed by US labs.
They also alleged that some Chinese labs are cutting corners on safety. This could have global implications for responsible AI use. OpenAI is now actively removing users found to be distilling its models.
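If "distillation" is new to you, here's a minimal sketch of the classic recipe: a small student model is trained to imitate a bigger teacher's output distribution. The tiny models, temperature, and training loop below are illustrative stand-ins only, not anything DeepSeek or OpenAI actually runs.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of distillation: a small "student" model learns to
# match the output distribution of a larger "teacher". Illustrative
# stand-ins only; real labs distill across billions of queries.

teacher = torch.nn.Linear(32, 10)   # pretend frontier model
student = torch.nn.Linear(32, 10)   # pretend cheap model
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature softens the teacher's distribution

for step in range(100):
    x = torch.randn(64, 32)                      # a batch of queries
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=-1)
    student_logp = F.log_softmax(student(x) / T, dim=-1)
    # KL divergence pulls the student toward the teacher's outputs
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key point: the student never needs the teacher's weights or training data, only its answers, which is why API access controls are the battleground here.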
What it means: Don't believe all the hype on "cheap" open source models, especially ones from China. OpenAI said the quiet part out loud: DeepSeek's low training costs were likely due to distillation.
If you are a decision-maker in the US, you should be very careful about touching these models. The rules they play by regarding data sharing are much different than US standards.
3. Anthropic Safety Lead Quits: "The World is in Peril" 📝
How bad are things inside the top AI labs right now?
Mrinank Sharma, who led AI safety research at Anthropic, resigned this past week and publicly warned that the world is "in peril."
He cited a cascade of interconnected global threats.
Sharma’s resignation letter, shared on social media, expressed concern over AI risks, bioweapons, and the struggle for companies to act according to their values.
He highlighted his work on AI safeguards. This included studying why AI systems flatter users and combating bioterrorism risks.
But here is the wild part.
Sharma plans to leave the tech industry entirely to pursue poetry.
He says the models' capabilities are scary enough that he needs to step away. It's more than a little concerning when the people building the safety guardrails decide to just quit.
What it means: The people who know the most are the most afraid.
We are sprinting toward autonomous agents with our eyes closed.
While we focus on business utility, researchers are seeing models that can reduce human connection and enable dangerous outcomes. It paints a more dystopian picture than we want to admit.
4. New Report: Claude Can Help With "Heinous Crimes" 🚨
Anthropic's bad week just kept getting worse.
Anthropic revealed in a new report that its newest Claude models show increased vulnerability to being used for heinous crimes. This includes the development of chemical weapons.
The company's analysis found that in certain test scenarios, AI models were willing to provide significant support toward harmful objectives.
Here is the scary part though.
The models did this without malicious human prompts.
Researchers noted that when pushed to a narrow objective, the Opus 4.6 model was more prone to manipulation than earlier versions.
Anthropic CEO Dario Amodei recently warned of a serious risk of a major attack enabled by AI. While they maintain the risk is currently low, they stressed that it is not negligible.
Models are becoming more autonomous and capable of iterating on themselves.
What it means: We have to look at the downside risks.
These models are extremely capable.
In testing, powerful models have shown they will blackmail users or copy themselves to servers if threatened. We are giving agents access to our data while they are still prone to these behaviors.
5. A Year Until White Collar Jobs Vanish? 📉
We hope you are sitting down for this one.
Mustafa Suleyman told the Financial Times that AI could fully automate most white-collar jobs within the next 12 to 18 months.
He claimed AI will soon reach human-level performance for tasks done by lawyers, accountants, project managers, and marketers.
Suleyman introduced the term "Artificial Capable Intelligence." This describes the stage between current large language models and true AGI.
His prediction aligns with other leading voices. Anthropic CEO Dario Amodei said AI could eliminate half of all entry-level white-collar jobs in five years.
The capabilities are already here.
AI models are beating experts in blind judging on knowledge work. The only thing slowing this down is the lag in businesses actually understanding what these tools can do.
What it means: We are already past human-level performance on most tasks.
The real gap is enterprise adoption.
Most companies are slow-moving ships. But if they understood that a model can autonomously research and create reports, they would stop everything to become AI-native immediately.
6. Google Quietly Drops the Most Powerful Model Ever 🤫
And no one is talking about it. Huh?
This past week, Google announced a major update to its Gemini 3 Deep Think model. It has set state-of-the-art benchmark scores on some of the hardest tests in existence.
Gemini 3 Deep Think achieved an unprecedented 84.6% score on the ARC-AGI-2 benchmark. For context, the average human score is 60%.
The model also holds a "Legendary Grandmaster" tier rating on Codeforces. That status is achieved only by a handful of elite human programmers.
It leverages increased "test-time compute." This means it spends more time internally verifying solutions before responding, which significantly reduces the risk of technical errors or hallucinations.
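For the curious, here's a minimal sketch of one common flavor of test-time compute: best-of-N sampling with a verifier. This illustrates the general pattern only; `generate_candidate` and `verifier_score` are hypothetical stand-ins, not Google's actual Deep Think method.

```python
import random

# Sketch of best-of-N sampling with a verifier: spend more compute
# at inference by sampling several candidate answers, scoring each,
# and returning the one the verifier likes best. Both helpers below
# are hypothetical stand-ins for real model calls.

def generate_candidate(prompt: str) -> str:
    """Stand-in for sampling one answer from a model."""
    return f"candidate-{random.randint(0, 9)} for: {prompt}"

def verifier_score(prompt: str, answer: str) -> float:
    """Stand-in for a learned or rule-based checker."""
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    # More samples (a bigger n) means more test-time compute,
    # and usually a better final answer.
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: verifier_score(prompt, a))

print(best_of_n("Prove the sum of two even numbers is even."))
```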
Google proved they can out-ship anyone on sheer capability whenever they want.
What it means: Google can drop the world’s most powerful model whenever they want.
Deep Think leverages test-time compute to verify solutions.
The naming might have hurt the hype, but the capabilities are bonkers. Right now, the updated model is light years ahead of everyone else on reasoning and coding benchmarks.
7. Pentagon Threatens to Dump Anthropic Over Safety Rules 🪖
The military wants AI without the guardrails.
According to a report from Axios, the Pentagon is threatening to end its relationship with Anthropic. Tensions are rising over the firm's refusal to relax restrictions on how the military can use its models.
The Pentagon wants four major AI labs to allow military use for "all lawful purposes." This includes weapons development and battlefield operations.
Anthropic has reportedly refused to drop its hard limit on mass surveillance and fully autonomous weaponry.
A senior administration official said the Pentagon is considering severing its partnership. The contract is valued at up to $200 million.
Tensions escalated after the military used the Claude model in an operation targeting Venezuela's Nicolas Maduro. Other labs like OpenAI and Google have reportedly agreed to relax guardrails.
What it means: AI is the new global superpower metric.
It will become more important than natural resources, military capabilities or GDP.
The country with access to the most powerful AI models will rise to global supremacy. It ultimately matters less what physical weapons a military has and more what intelligence they control.
8. OpenAI Acquires the Viral "OpenClaw" Project 🦞
This might be the biggest fumble in AI history, courtesy of Anthropic dropping the bag and OpenAI scooping it up.
OpenAI CEO Sam Altman announced Sunday evening that OpenAI has hired Peter Steinberger, the Austrian developer behind the fast-growing AI agent OpenClaw, essentially acquihiring the viral project.
Steinberger is joining OpenAI to lead the next generation of personal AI agents.
OpenClaw became one of the most successful AI launches ever. It allows autonomous agents to complete tasks and make decisions for users.
But here is where it gets wild, y'all.
The original version was called "Clawdbot" and defaulted to running Anthropic's models. It was likely driving MASSIVE revenue for Anthropic.
Instead of embracing it, Anthropic reportedly sent a legal letter forcing a name change.
Now? OpenAI swooped in. They acquired the talent and the momentum. They plan to keep OpenClaw as an open-source project through a dedicated foundation.
What it means: Anthropic fumbled the bag.
They chose legal threats over developer community.
OpenAI gets to be the good guy and capture all the momentum. Anthropic lost potential billions in revenue and took a massive bruise to their reputation among developers.