
Ep 744: Leaks show insights on OpenAI and Anthropic’s next models, Claude Computer Use goes viral and more

Why OpenAI killed Sora, Microsoft drops game-changing Copilot updates, Why ChatGPT's App Store might be stalling and more.

Sup y’all 👋

More on this below, but Anthropic launched a super impressive ‘Computer Use’ feature that literally controls your entire computer like a human.

We’re going to give it the Deep Dive treatment for Wednesday’s ‘AI Working Wednesday’ show.

What’s your interest/take on the new viral Computer Use feature?


✌️

Jordan

Outsmart The Future

Today in Everyday AI
8 minute read

🎙 Daily Podcast Episode: New models from OpenAI AND Anthropic? Microsoft brings huge AI features? Yup and Yup. We go over the top AI news stories of the week you prolly missed. Give today’s show a watch/read/listen to learn more.

🕵️‍♂️ Fresh Finds: Microsoft AI chief warns of 'intense competition,' Google tests out Skills in Gemini, and Qwen releases an impressive updated 3.5 model. Read on for Fresh Finds.

🗞 Byte Sized Daily AI News: Why OpenAI killed Sora, Microsoft drops game-changing Copilot updates, Why ChatGPT's App Store might be stalling and more. Read on for Byte Sized News.

💪 Leverage AI: If you missed this week’s top AI stories, you’re gonna feel months behind. We got your quick recap. Keep reading for that!

↩️ Don’t miss out: Miss our last newsletter? We covered: Gemini 3.1 Flash Live Rolls Out Globally, Judge blocks Pentagon’s Anthropic ban, Leak shows new Claude ‘Mythos’ model’s power and more. Check it here!



OpenAI and Anthropic are reportedly releasing new models any day now. 😱

A U.S. judge essentially said, 'No' to the federal government's quest to label Anthropic a supply chain risk.

And Google may have created a method that will change AI forever.

(Oh, and for our livestream audience, we'll be able to break some pretty big AI news live)

Miss any of this?

Also on the pod today:

• OpenAI Spud model leaks 🥔 
• Anthropic Claude controls your Mac 💻
• Claude Mythos security concerns 🚨 

It’ll be worth your 42 minutes:

Listen on our site:


Subscribe and listen on your favorite podcast platform


Here’s our favorite AI finds from across the web:

New AI Tool Spotlight – Notion MCP is a hosted server that gives AI tools secure access to your Notion workspace, PopTasks puts your tasks right in the menu bar, and Invoke is an AI-powered coding companion for building, debugging, and refactoring code.

Trump AI — Sen. Brian Schatz introduced the GUARDRAILS Act to block Trump’s order preempting state AI rules. It would protect states from losing federal funds for AI safeguards.

Claude Boost — Anthropic’s Claude is seeing a surge in paid consumer sign-ups after Super Bowl ads and a public DoD showdown, with most new users choosing the $20 Pro tier.

Gemini for Business — Google is sneaking NotebookLM and a Skills builder into Gemini for Business. Teams could soon pull research and build custom AI helpers without coding.

Samsung AI Chips — South Korean startup Rebellions raised $400M at a $2.34B valuation to push its Rebel-Quad inference chips into U.S. labs, targeting Meta and xAI.

Qwen3.5-Omni — Qwen3.5-Omni brings native text, image, audio, and video smarts. Curious about its Audio-Visual Vibe Coding?

Microsoft's AI Chief — Microsoft’s new AI chief Mustafa Suleyman says the next few years will be defined by Microsoft and OpenAI, and warns rivals to expect a period of “intense competition.”

Microsoft Upgrades — Microsoft revealed big AI upgrades and is giving early-access customers the new Copilot to try. Want to learn more?

Starcloud $1.1B Valuation — Starcloud just hit a $1.1B valuation after raising $200M, locking it into unicorn status as investor appetite for AI infrastructure surges.

1. Microsoft ups Copilot’s research game with Critique and Council 🤓

Today Microsoft rolled out two multi-model features for Researcher in Microsoft 365 Copilot, offering a faster path to more reliable, in-depth research for enterprise users.

Critique separates drafting from rigorous review: one model generates the analysis while a second enforces source quality, completeness, and strict evidence grounding. Microsoft reports it meaningfully improves benchmarked research quality. Council runs two full model reports side-by-side and supplies a judge-written cover letter that flags agreements, divergences, and unique insights to help users compare outputs.
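The drafter-plus-critic control flow described above can be sketched in a few lines of Python. The two "model" functions here are stubs standing in for real LLM API calls, and every name is hypothetical — this is an illustration of the pattern, not Microsoft's implementation:

```python
# Toy sketch of a draft-then-critique pipeline. The two model functions
# below are stand-ins for real LLM calls; all names are hypothetical.

def drafter_model(question: str) -> str:
    # Stand-in for the drafting model: produces a first-pass answer.
    return f"Draft answer to: {question} [claim A] [claim B]"

def critic_model(draft: str) -> list[str]:
    # Stand-in for the reviewing model: returns issues it finds.
    issues = []
    if "[claim A]" in draft:
        issues.append("claim A lacks a cited source")
    return issues

def research_with_critique(question: str, max_rounds: int = 2) -> str:
    draft = drafter_model(question)
    for _ in range(max_rounds):
        issues = critic_model(draft)
        if not issues:
            break  # critic is satisfied; stop revising
        # In a real system the drafter would revise using the critic's
        # notes; here we just patch the draft to show the control flow.
        draft = draft.replace("[claim A]", "[claim A, sourced]")
        draft += " | revised for: " + "; ".join(issues)
    return draft

answer = research_with_critique("How does KV-cache quantization work?")
```

The key design point is that drafting and reviewing are separate calls, so the critic never grades its own work.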

2. OpenAI’s ChatGPT app push stalls amid partner reluctance 🤝

Six months after launching mini apps inside ChatGPT, progress remains slow as partners limit functionality, hide integrations, and balk at ceding payments and customer ties, Bloomberg reports.

Developers complain of a tedious approval process, buggy tools, and scarce usage analytics, leaving many integrations shallow or requiring users to complete transactions off-platform. That hesitation blunts OpenAI’s aim to build an app ecosystem that could challenge Apple’s App Store and shift where users discover services.

3. Microsoft opens Copilot Cowork to Frontier access 😁

Microsoft has moved Copilot Cowork into the Frontier program, giving early access to an assistant designed for long-running, multi-step workflows inside Microsoft 365 while maintaining enterprise security and governance.

The shift makes Copilot capable of planning, coordinating, and executing tasks across calendars, files, and apps rather than just generating content. This milestone highlights Microsoft’s push to embed multi-model intelligence and operational automation into everyday work tools, signaling faster adoption of AI that acts within corporate controls.

4. Why OpenAI killed Sora: it was losing money and compute capacity 💸

OpenAI shut down Sora because the app became a costly, low-use drain on compute, burning roughly $1 million per day while active users plunged from around a million to under 500,000.

Leadership concluded the expensive video workloads were diverting scarce AI chips and teams away from priority models, and competitors like Anthropic were gaining the revenue-driving developer and enterprise market. The decision was framed as strategic reallocation of resources rather than a privacy or data grab, and it abruptly ended major partnerships, including a planned deal with Disney.

5. Tech CEOs Cite AI as Reason for 2026 Job Cuts ✂️

Major tech firms including Meta, Google, Amazon, and others are attributing sizable 2026 layoffs to AI-driven efficiency, with executives saying advanced tools let smaller teams maintain productivity while firms simultaneously ramp AI investment.

Industry observers and investors warn this shift could significantly affect software and engineering roles as companies use AI-generated code and automation to reduce headcount. Firms also use AI spending announcements to signal disciplined cost management to investors while continuing large-scale investment in technology.

6. Meta's Avocado delay exposes internal struggle and Gemini backup 🥑

Meta has pushed its Avocado model release from March to at least May 2026 after internal tests showed it trails leading systems, and internal interfaces reveal multiple Avocado variants still under evaluation.

Evidence suggests Meta is routing some user requests through Google’s Gemini as a stopgap, and leadership has discussed temporarily licensing Gemini to close capability gaps. The company appears to be moving Avocado from open-source roots to a proprietary product, signaling a strategic shift under Zuckerberg’s push for more advanced AI.

Did you accidentally expose your highly classified artificial intelligence models on a public web directory this week?

Whoops.

Anthropic literally did exactly that while warning the public about severe cybersecurity risks. Add in a federal judge aggressively slapping down a Pentagon software ban against Anthropic, and Apple finally admitting Siri is kinda useless without its AI competitors doing the heavy lifting.

Oh, and OpenAI is releasing a new model (already?!) and Microsoft just shipped what could be one of the most powerful and useful AI products of the year by combining the power of ChatGPT and Claude in one mode. 

The AI news cycle went absolutely unhinged this past week. If you looked away for 10 seconds, you prolly missed some massive shifts in how these technology giants operate.

Let’s get it.

1. What OpenAI Spud is and how it targets enterprise

It has only been a few weeks since OpenAI's last major release. The pace is absolutely blistering right now.

They are aggressively shelving consumer toys like Sora to build a massive productivity super app. This new platform securely combines ChatGPT, Codex, and their Atlas browser into one highly unified workspace.

Sam Altman claims this new intelligence will dramatically accelerate the broader economy. That is a truly massive promise for something named after a potato.

The timing clearly aligns with their strategic decision to redeploy valuable resources toward enterprise robotics and business integration. Yuuuuup.

What it means: OpenAI is pivoting hard away from flashy video generators to completely dominate corporate environments.

Consolidating multiple standalone tools into a single integrated application signals a completely new business strategy.

They are betting heavily that productivity suites will drive massive future revenue.

2. How Claude actively controls Apple desktop computers

Anthropic rolled out a research preview this past week letting its chatbot physically operate Mac software.

Your computer is about to start clicking itself. The experimental feature is currently limited to Mac users on Claude's paid Pro or Max tiers.

Claude can now autonomously open local files, navigate web browsers, and type out text. It will even manually control connected applications to seamlessly transfer documents directly to your phone.

The company thoughtfully implemented strict automated safeguards to actively scan for prompt injections and malicious attacks. The system strictly asks for explicit permission before taking over your screen.

You can completely stop the autonomous agent at any time to maintain total security. This balances absolute convenience with necessary user oversight.

What it means: Artificial intelligence is officially moving past simple text generation into full physical desktop automation.

Having a digital assistant that manually manipulates local files completely changes the future of mundane knowledge work.

It represents a massive leap toward software mimicking human employees.

3. Why Codex plugins fix major coding workflows

ChatGPT plugins are back. 

Well….. Kinda. In Codex at least. 

OpenAI officially updated its Codex app to allow developers to build and share custom external integrations. 

Everyone is completely sleeping on this highly capable tool. It is an absolute beast for complex daily knowledge work.

These brand new plugins include highly specific prepackaged scripts and configuration files. Codex can now execute tested code snippets instead of generating everything completely from scratch every single time.

This drastically reduces expensive hallucination risks and cuts down inference costs. The quiet launch includes a robust directory featuring Google Drive editing capabilities and GitHub review integrations.

It brings crucial sub-agent support specifically to help software teams keep development tools perfectly synchronized. This effectively reduces code inconsistencies across massive collaborative corporate projects.

What it means: Codex is actively shifting from a pure coding assistant into a vastly broader enterprise integration platform.

Running pre-tested scripts instead of generating new text every time makes collaborative projects significantly more reliable.

It points to a future of seamless internal editing.

4. When Apple will add Gemini and Claude to Siri

Siri is finally getting a functioning brain. The upcoming iOS 27 update completely ends Apple's exclusive artificial intelligence partnership with OpenAI.

Users will simply select their favorite external bot through a brand new extensions settings menu. The notoriously restricted technology giant is cracking their closed ecosystem wide open.

They also plan to announce a completely overhauled internal Siri built specifically on Google Gemini models at the Worldwide Developers Conference this June. Apple essentially took a year off from major announcements after facing severe backlash over previous failures.

They are clearly hoping this new integration strategy fixes their hardware utility problems.

What it means: Apple essentially admitted they cannot build competitive top-tier intelligence entirely by themselves.

Opening their strict operating system gives average consumers the freedom to natively use vastly superior competitor models.

This is a massive strategic pivot for the historically walled hardware garden.

5. How a federal judge paused the Pentagon AI ban

A federal judge in California just issued an order temporarily stopping the DoD from blocking Anthropic software.

The ongoing drama between the military and Silicon Valley is getting incredibly messy. The administration previously labeled the United States company a massive supply chain risk.

The judge firmly noted this aggressive action looked like classic First Amendment retaliation. Officials apparently attacked the business on pure political grounds instead of citing any actual security defects.

They bizarrely referred to company employees as left-wing extremists in public statements. Anthropic originally sued to secure explicit protection preventing the military from using their systems for autonomous weapons without oversight.

The legal order keeps the software available for outside military contractors while the lawsuit fully proceeds.

What it means: Courts are actively stepping in to prevent angry politicians from weaponizing federal security labels against domestic businesses.

Technology companies are fighting incredibly hard to maintain strict ethical boundaries around military application deployments.

This tension will absolutely define future government technology contracts.

6. What TurboQuant is and how it shrinks memory sizes

Google researchers introduced a highly technical vector compression method that instantly crashed the memory chip stock market.

This sounds incredibly boring, but the long-term hardware implications are massive. The new TurboQuant methodology efficiently reduces temporary cache memory requirements by an astonishing six times.

It acts as a super shredder for the vast information an assistant needs to remember during long chat sessions. The system reliably quantizes caches down to three bits without requiring any expensive model retraining.

It compresses the data so perfectly that the software reads information up to eight times faster. This breakthrough is totally compatible with existing open options like Google Gemma.

It will eventually bring massive computing power directly to significantly smaller desktop machines.
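To make the idea concrete, here is a toy sketch of 3-bit quantization in plain Python. This illustrates the general technique of snapping values to 2³ = 8 levels with a stored scale and offset — it is not TurboQuant's actual algorithm, which isn't detailed here:

```python
# Toy sketch of 3-bit quantization: each value becomes a code in 0..7,
# plus one (scale, offset) pair per vector. Illustrative only; not the
# actual TurboQuant method.

def quantize_3bit(values):
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 7 or 1.0  # 8 levels -> 7 steps; guard constant input
    codes = [round((v - lo) / scale) for v in values]  # each code is 0..7
    return codes, scale, lo

def dequantize_3bit(codes, scale, lo):
    return [c * scale + lo for c in codes]

cache_row = [0.13, -1.2, 0.55, 2.0, -0.4, 1.1, 0.0, -2.0]  # toy cache slice
codes, scale, lo = quantize_3bit(cache_row)
recon = dequantize_3bit(codes, scale, lo)

# 3-bit codes vs. 16-bit floats is roughly a 5.3x size reduction, in the
# ballpark of the ~6x the article cites. Rounding error per value is at
# most half a quantization step (scale / 2).
max_error = max(abs(a - b) for a, b in zip(cache_row, recon))
```

The trade-off is precision for memory: the reconstructed values differ from the originals by at most half a step, which is why doing this without retraining the model is the hard part.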

What it means: Hardware bottlenecks are being successfully bypassed through completely brilliant mathematical software engineering.

Running massive language models locally will soon become significantly cheaper for everyday laptop and mobile phone users.

This poses a massive threat to companies selling expensive cloud computing infrastructure.

7. Why the leaked Claude Mythos model is dangerous

The most explosive document was an unpublished draft detailing a highly classified system named Claude Mythos. Internal communications confidently describe its overall performance as a massive step change.

The leak also revealed concrete plans for a completely new top tier codenamed Capybara. These highly powerful systems are currently restricted to selected early access enterprise customers.

The company explicitly warned that these models are incredibly far ahead in dangerous cyber capabilities. They are deliberately giving defenders early access to harden their codebases against potential automated exploits.

This signals a major pivot away from immediately offering the newest systems to regular monthly subscribers.

What it means: The absolute best autonomous systems are becoming entirely too dangerous for immediate public consumption.

Companies are completely rethinking release strategies to give security defenders crucial time to prepare for automated exploits.

The race for supreme capability is officially outpacing standard safety protocols.

8. How Microsoft forces rival models to verify accuracy

The biggest players in the fiercely competitive industry are finally cooperating. The technology giant rolled out a newly improved Researcher agent built entirely on multi-model intelligence.

The system securely uses OpenAI software to draft a complex analytical response first. It then immediately brings in Anthropic Claude to heavily critique the work for accuracy and citation integrity before delivery.

Straight facts. They are actively productizing a rigorous workflow that power users have been doing entirely manually for months.

The platform also launched a new Model Council feature letting users directly compare side-by-side responses to see exactly where top platforms disagree on facts.

What it means: Relying on a single provider for high-value business analysis is officially a terrible strategic decision.

The future absolutely belongs to multi-model ecosystems where fierce competitors verify each other to eliminate basic errors.

Microsoft is leveraging its unique position to build ultimate safety nets.

 
