Apple’s controversial AI study, Google’s new model and more AI News That Matters
Apple’s AI updates, NVIDIA signs deals with U.K. firms, Gemini gets ‘scheduled actions’ feature and more!
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
In Partnership With
Meet Gemini, Your Personal AI Assistant
Check out Veo 3, Google's state-of-the-art AI video generation model in the Gemini app, which lets you create high-quality, 8-second videos with native audio generation.
Try it with the Google AI Pro plan, or get the highest access with the Ultra Plan. Sign up at Gemini.Google to get started and show us what you create.
Outsmart The Future
Today in Everyday AI
8 minute read
🎙 Daily Podcast Episode: Apple's AI study sparks controversy, Google upgrades its top model, and Reddit takes legal action against Anthropic. Dive into this week's AI news chaos and stay informed. Give it a listen.
🕵️♂️ Fresh Finds: Anthropic’s AI blog taken down, Qualcomm acquires semiconductor firm and U.K. to punish lawyers for AI-generated citations. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: Apple’s AI updates, NVIDIA signs deals with U.K. firms and Google Gemini gets ‘scheduled actions’ feature. For that and more, read on for Byte Sized News.
🧠 AI News That Matters: From Meta’s acquisition plans to DeepSeek’s concerning data sourcing, here’s the AI news you missed last week. Keep reading for that!
↩️ Don’t miss out: Did you miss our last newsletter? We talked about Sage CTO’s advice on AI and Finance, Google Labs' interactive chart visualizations, Perplexity hitting 780M monthly queries, AMD acquiring Untether AI and more. Check it here!
AI News That Matters - June 9th, 2025 📰
↳ Why is Anthropic in hot water with Reddit?
↳ Will OpenAI become the de facto business AI tool?
↳ Did Apple make a mistake in its buzzworthy AI study?
↳ And why did Google release a new model when it was already on top?
So many AI questions. We’ve got the AI answers.
Don’t waste hours each day trying to keep up with AI developments. We do that for you on Mondays with our weekly AI News That Matters segment.
Also on the pod today:
• Meta's Investment in Scale AI 💰
• DeepSeek Accused of Data Sourcing 😮
• Anthropic Cuts Windsurf Claude Access ✂️
It’ll be worth your 47 minutes:
Listen on our site:
Subscribe and listen on your favorite podcast platform
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – Agora is an AI search engine for e-commerce products, Skywork transforms prompts into research-backed content in minutes and MiniCPM is a family of open source models for on-device AI.
Anthropic – Anthropic’s recently released AI-generated blog has already been taken down.
Money in AI – Qualcomm has acquired semiconductor firm Alphawave Semi for $2.4B.
AI Governance – A U.K. court is warning that lawyers could face severe penalties for submitting fake AI-generated citations.
Google – Google DeepMind shares how the U.K. government is using its Gemini foundation model to help council planners.
Extract – a system built by the UK government, using our Gemini foundational model – will help council planners make faster decisions. 🚀
Using multimodal reasoning, it turns complex planning documents – even handwritten notes and blurry maps – into digital data in just 40s.
— Google DeepMind (@GoogleDeepMind)
11:10 AM • Jun 9, 2025
Google.org has released a blog on 20 organizations using AI to address societal issues.
AI in Healthcare – This AI radiology tool delivers a 40% productivity boost and helps save lives.
1. Apple’s WWDC Unveils AI-Powered Features Across Ecosystem 🍎
Today’s WWDC announcements reveal Apple’s deepening integration of AI with ChatGPT-enhanced Image Playground for creative photo transformations and live translation in Messages, FaceTime, and phone calls—all running on-device to protect user privacy.
Developers also gain access to Apple’s on-device large language model, potentially sparking a wave of smarter apps without cloud costs. Meanwhile, visionOS 26 brings PSVR2 controller support and spatial widgets, blending the digital and physical worlds for Vision Pro users.
However, media (and stock analysts) have dragged Apple for playing it too safe on AI and not announcing any major breakthroughs.
2. NVIDIA Powers U.K.’s AI Ambitions with Major GPU Deals 🇬🇧️
NVIDIA just sealed significant deals with U.K. firms to supercharge the country's sovereign AI infrastructure, unveiling plans to deploy over 14,000 of its latest Blackwell GPUs by 2026. This move supports the U.K. government’s mission to boost AI research, public services like the NHS, and developer training through a new NVIDIA AI Technology Center.
The launch of the U.K. Sovereign AI Industry Forum, backed by major players like BAE Systems and BT, signals a serious push to grow a homegrown AI ecosystem and protect economic security.
3. Google’s Gemini Gains Scheduled Task Powers ⏰
Google is rolling out “scheduled actions” for its Gemini AI assistant, allowing subscribers to automate tasks like daily calendar summaries or event recaps. This timely update lets users plan one-off or recurring AI-driven activities, making Gemini more of a proactive digital aide.
The feature, available to AI Pro and Ultra subscribers, can be managed directly in the app’s settings, aligning with a growing trend of AI tools acting as personal agents.
4. Getty Images Takes Stability AI to Court Over Copyright Clash ⚖️
Getty Images has launched a landmark copyright battle against Stability AI in London, challenging the AI firm's use of millions of images to train its popular Stable Diffusion model without permission. The trial, expected to last three weeks, centers on whether AI companies can freely use copyrighted content or must negotiate licensing fees, a question that could reshape AI development and content creators' rights worldwide.
Getty insists this is about protecting intellectual property and fair payment, while Stability argues the case should be heard outside the UK because the AI training itself took place elsewhere.
5. Chinese AI Chatbots Hit Pause During Gaokao Exams ⏯️️
Chinese AI companies including Alibaba, ByteDance, Tencent, and Moonshot have temporarily disabled photo-recognition features in their chatbots during the country’s critical gaokao college entrance exams, which run June 7-10. This move aims to prevent students from using AI tools to cheat, reinforcing existing bans on electronic devices during the test.
These suspensions are a direct response to concerns about exam fairness in a fiercely competitive environment where over 13 million students participate.
6. RSM US Commits $1 Billion to AI Over Three Years 💸
RSM’s U.S. arm is planning a major $1 billion investment in artificial intelligence to turbocharge tax and accounting workflows, a significant jump from its previous $150-$200 million spend, according to the Wall Street Journal. This move aims to automate complex processes—like compliance checks and audit disclosures—boosting productivity by up to 80% using AI agents that act on behalf of employees.
The timing is critical as 92% of middle-market companies report AI implementation challenges, meaning RSM’s push could set new standards for how mid-sized firms harness AI.
OpenAI dropped advanced voice mode upgrades while Reddit's lawyers came for Anthropic's throat.
Google released Gemini 2.5 Pro version 06-05 even though their 05-06 model was already obliterating everything else.
Then, literally hours before WWDC, Apple published research claiming reasoning models are overhyped. Bloomberg reports they're taking an AI gap year because they couldn't deliver on their Apple Intelligence promises and are now facing class action lawsuits.
The audacity is unmatched, shorties.
Popcorn ready? Plenty of drama and updates in this week’s top AI news.
Let's get into this week's AI feast.🍿
1 – OpenAI Voice Mode Finally Works 🗣️
OpenAI rolled out advanced voice mode upgrades to all paid users across platforms just hours ago. The speech quality captures emotions like empathy and sarcasm while delivering real-time language translation during conversations.
Two people speaking different languages? Advanced voice mode handles everything seamlessly throughout the entire conversation.
OpenAI admits occasional audio drops and weird sounds still happen, but nothing as creepy as last year when voice mode said "help get me out of there."
What it means:
This could kill standalone translation apps for business overnight.
For real.
Companies doing international deals could even ditch interpreters for most meetings. OpenAI just caught up to ElevenLabs and Hume AI, which were clearly superior on emotional intelligence.
Information gatekeeping just died a spectacular death. Pew pew.
The research playing field is flattening faster than anyone predicted.
Today's free tools demolish what premium subscribers paid hundreds for last year. By 2026, middle schoolers will conduct PhD-level research between TikTok sessions. The knowledge gap? Dissolving faster than your New Year’s Resolutions by Jan 7.
2 – Reddit Sues Anthropic For Data Theft 📋
Rut-roh.
Reddit has filed a lawsuit against Anthropic in California Superior Court for illegally scraping user comments to train Claude without permission or payment. The lawsuit focuses on breach of terms and unfair competition rather than copyright infringement.
Here's what makes this brutal. OpenAI and Google pay tens of millions for official Reddit licensing agreements that helped Reddit's stock debut. Anthropic apparently grabbed the data for free using automated bots despite explicit requests to stop.
TBH, we think Reddit’s data is more valuable than New York Times content for AI training because it contains exclusive subject matter expert discussions that exist nowhere else.
Stuff the NYT and other outlets cover is usually found in dozens or hundreds of other publications. For whatever reason, Reddit's content authority has gone unmatched for years.
What it means:
Anthropic could get financially wrecked, and if the allegations are true, it seems pretty cut and dried.
Reports say they violated Reddit's terms while competitors paid millions for identical data.
Every platform will probably start charging premium prices for exclusive training access. Reddit seems likely to win this one if the evidence is as overwhelming as reported.
3 – ChatGPT Drops Business Connectors 💼
OpenAI is getting down to business.
This week, they launched cloud connectors for Google Drive, OneDrive, Dropbox, Box, SharePoint, Gmail, Calendar, Teams, and HubSpot plus a meeting recorder that transcribes 120 minutes per session.
Users can now chat directly with their dynamic business data while the recorder generates summaries, emails, and project plans.
The connectors currently only work in deep research mode, requiring 3-15 minute waits while dual o3 models analyze connected sources. This actually reduces hallucinations, since two separate o3 versions cross-check the information.
What it means:
Traditional RAG consulting could be dead overnight.
From 2021 to 2023, companies spent millions building custom RAG systems with vector databases and embeddings. Now similar functionality is plug and play.
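For a sense of what all that money bought, here's a minimal, illustrative sketch of the classic custom RAG pipeline: embed documents, retrieve the ones most similar to a query, and stuff them into the prompt. This isn't OpenAI's connector code; embed() is a stand-in for whatever embedding model a company wired up, and the brute-force search stands in for the vector database.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model: deterministic fake vectors, demo only.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(768)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity score between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    # The "vector database" part, reduced to brute-force nearest-neighbor search.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model in retrieved company data instead of stale training data.
    context = "\n\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n\n{context}\n\nQuestion: {query}"

print(build_prompt("How did Q1 revenue look?", [
    "Q1 revenue grew 12% year over year.",
    "The office dog policy was updated in March.",
    "Q1 churn dropped below 2%.",
]))
```

The new connectors collapse those moving parts into a settings toggle, which is exactly why that consulting work looks endangered.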
Every Fortune 500 will probably implement these connectors within six months because ROI seems immediate.
Marketing teams might finally get accurate insights from their own data instead of outdated training information that could be years old. This is likely how AI platforms compete long-term since models become commoditized.
4 – Google Drops Overkill Gemini Update 🚀
Google keeps waking up and choosing AI violence. Lolz.
Google released its chart-topping Gemini 2.5 Pro version 06-05 despite already dominating LM Arena with its previous 05-06 model.
The latest version landed with a 24-point Elo jump that now gives Google Gemini the top two spots. The June 5th version shows major improvements in coding benchmarks, GPQA, and Humanity's Last Exam while fixing performance drops in creative responses.
This update completely wiped out every advantage Anthropic's Claude 4 claimed in software engineering just 10 days after their launch. Available through Gemini API, Google AI Studio, and Vertex AI.
We literally predicted this would happen within weeks of Claude 4's release because Google doesn't let competitors maintain leads.
And yeah…. Those flimsy Claude 4 USPs are kinda melting now.
What it means:
Google's strategy is suffocation through overwhelming capability and it's working perfectly.
They refuse to let any competitor establish lasting advantages anywhere. Anthropic's brief coding superiority lasted like 10 days before drifting away.
This pattern continues forever. Google immediately counters any claimed breakthrough from OpenAI or Anthropic with superior models within two weeks. Competition is essentially impossible against this approach.
Our take? 2025 is looking like OpenAI and Google pulling even further away from the rest of the competition.
Well…. We’ll see if Meta can bounce back. More on that later.
5 – DeepSeek’s Data Collection in Question 🕵️
Chinese AI lab DeepSeek's R1 model from May 28th shows language patterns strikingly similar to Google's Gemini 2.5 Pro.
Did they steal another AI company’s data?
Researchers found thought traces that appear to be direct copies from Google, marking their second major theft scandal after December's V3 model identified itself as ChatGPT.
A few months back, OpenAI told the Financial Times they found evidence of DeepSeek distilling their models, while Microsoft flagged suspicious data theft from OpenAI developer accounts connected to DeepSeek in late 2024. And using DeepSeek's API or website sends all your data straight to the Chinese government, per their terms.
Those stories about DeepSeek training models for a fraction of the cost? They're apparently copying work from companies that actually pay billions for training.
What it means:
The “cost advantage” DeepSeek's Twitter army keeps pumping might be completely fake if these accusations hold up.
Reports are making this seem more like industrial-scale model theft rather than innovation. Yikes.
Any enterprise using DeepSeek could essentially be doing R&D for Chinese intelligence services.
The SemiAnalysis report we covered suggests they didn't train for the claimed $5.6 million cost. This could expose how many "breakthrough" Chinese models might just be stolen Western research with fancy marketing.
Go check that episode for more.
6 – Anthropic Burns Developer Bridges 🔥
It’s never a good idea to piss off your main customer group, yet here we are. Lolz.
Anthropic reportedly cut Windsurf's access to Claude 3.x models with less than a week's notice after reports emerged that OpenAI is acquiring the AI coding platform Windsurf for $3 billion.
Users now need their own API keys while Gemini 2.5 Pro is offered as a discounted alternative.
The business decision makes sense since they don't want to help train a competitor's platform.
Buuuuuuut the execution was terrible, given that software developers represent Anthropic's primary customer base and these acquisition rumors had been circulating for three weeks.
Five days' notice for paying customers is insulting when they had time for proper due diligence.
What it means:
Anthropic seems to have major PR problems and questionable business sense about customer relations.
Straight up.
You probably don't alienate core paying customers with five-day notice even if the underlying business logic is sound.
This could push more developers toward OpenAI and Google ecosystems permanently.
Anthropic's dev market share mighta just taken a HUGE hit because they apparently can't handle basic communications.
7 – Apple Drops Suspicious Reasoning Study 📚
Good data. Sus framing. Fishy timing.
Apple researchers published a buzzworthy “Illusion of Thinking” research paper hours before WWDC, the company's big yearly conference and usually its most visible moment of the year.
The paper claims AI reasoning models offer only marginal improvements and often fail as tasks grow complex. The study found small prompt changes can degrade performance by 65% and concludes models rely on pattern recognition rather than genuine logic.
The timing is beyond suspicious. Like… cmon, Apple.
Bloomberg reports Apple is taking an AI gap year at WWDC because it couldn't deliver on its Apple Intelligence promises and now faces class action lawsuits. The research exclusively uses abstract logic puzzles while ignoring real-world applications like coding, where reasoning adds value.
This seems like marketing disguised as research to justify their current AI failures.
What it means:
This looks like corporate propaganda disguised as kinda legit research, and Apple might have just nuked its credibility in the AI research community.
They seem to have failed at AI implementation, so now they're reaching for exaggerated evidence that advanced reasoning AI is useless anyway. Dropping the paper literally hours before announcing their AI gap year looks embarrassing.
This study is gonna unravel. It might take a few days or a few months.
Join us tomorrow as we’re gonna drop a HotTakeTuesday on this one.
8 – Meta Eyes $10B Scale AI Deal 💰
Meta is making big data moves to climb to the top tier.
The company is reportedly in talks to invest over $10 billion in Scale AI, which would rank among the largest AI startup investments.
Like….ever.
Scale AI specializes in data labeling, operates platforms for AI researchers across 9,000+ cities, and already counts NVIDIA, Amazon, and Meta as backers.
Deal terms aren't finalized but this would massively boost Meta's AI capabilities since Scale AI has top-tier data infrastructure.
Meta is the only trillion-dollar tech giant pursuing a primarily open-source strategy with its Llama models.
What it means:
Meta is making a legitimate power play to join the top tier, and this could actually work.
Maybe.
Scale AI's data infrastructure could be the missing piece for their open-ish-source AI strategy. If this deal closes, expect massive Llama capability jumps within the year after a kinda lackluster Llama 4 rollout.
Meta's betting that superior training data plus open distribution beats closed-model strategies long-term. Given their resources and Scale's expertise, they might be right about this approach.