- Everyday AI
- Posts
- Ep 680: NVIDIA’s $20 billion AI bet, Amazon adds big AI partners, Microsoft’s Copilot failures and more
Ep 680: NVIDIA’s $20 billion AI bet, Amazon adds big AI partners, Microsoft’s Copilot failures and more
OpenAI "head of preparedness" Job, Retailers testing AI to stop return fraud, Mistral tests "Workflows" and more
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Outsmart The Future
Today in Everyday AI
8 minute read
🎙 Daily Podcast Episode: It may have been a slow holiday week in AI, but Nvidia’s $20B Groq move made it one of the most important weeks of the year. Give today’s show a watch/read/listen.
🕵️♂️ Fresh Finds: Google 2025 recap, AI Chips, AI Trial to fight Alzheimers and more Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: OpenAI "head of preparedness" Job, Retailers testing AI to stop return fraud, Mistral tests "Workflows" and more Read on for Byte Sized News.
💪 Leverage AI: Nvidia dropped $20B, China took the coding crown, and OpenAI admitted a major AI security truth. Here’s what you need to know. Keep reading for that!
↩️ Don’t miss out: Miss our last newsletter? We covered: OpenAI hits 1 million business users, Google Cloud CEO Anticipated for the AI crunch a decade ago, OpenAI says Prompt injections still a risk and more Check it here!
Ep 680: NVIDIA’s $20 billion AI bet, Amazon adds big AI partners, Microsoft’s Copilot failures and more
A $20 billion AI deal while you were away? 🤯
Yes.
Even though this week may be considered a 'slower' week in AI news.....
↳ NVIDIA made a splash with a $20 billion pseudo acquisition
↳ Amazon partnered with some big names for its Alexa+ and
↳ Microsoft's Copilot is reportedly struggling so much that its CEO is acting as a product manager.
Miss anything?
Don't worry, in our weekly 'AI News That Matters' series, we'll get you caught up in no time.
Also on the pod today:
• NVIDIA’s $20B Groq “pseudo-buy” 💸
• OpenAI admits prompt injection risk 🔓
• Social Security threatened by AI ⚠️
It’ll be worth your 30 minutes:
Listen on our site:
Subscribe and listen on your favorite podcast platform
Listen on:
Here’s our favorite AI finds from across the web:
New AI Tool Spotlight – Giselle is the AI agent studio powering product delivery, Dropstone is The Intelligent Runtime for Autonomous Engineering, Molmo 2 is a State-of-the-art video understanding, pointing, and tracking model
2025 Uses of AI — From robo‑care to courtroom bots and farm advisors — AI is quietly remaking daily life.
AI Chips — ASML’s EUV monopoly fuels massive cash flow — a buy for AI chip scale-up
AI for Doctors — Retro Bio starts human trial of a drug designed to boost brain “cleanup” and fight Alzheimer’s
AI Movies — AI is moving into Hollywood, from AI-made shows to voice licensing. Want to know how creators and audiences are reacting?
AI In Classroms — OSU mandates AI fluency for all undergrads as classrooms from K‑12 to college adopt chatbots
1. OpenAI posts a $555,000 “head of preparedness” job as risks from models mount ⚠️
OpenAI CEO Sam Altman announced a new senior role paying $555,000 plus equity to lead preparedness efforts as models grow more capable and introduce urgent harms, calling the job “stressful” and immediate.
The position sits on the Safety Systems team and will run capability evaluations, threat modeling, and mitigation work to build an operationally scalable safety pipeline. The timing matters because internal departures and rising model risks — from mental-health harms to security vulnerabilities — have raised questions about whether safety has kept pace with product rollout.
2. Investors keep piling into AI despite bubble talk 🫧
A new wave of year-end data shows investors remain committed to AI stocks even as warnings of an overheated market grow louder.
Surveys from The Motley Fool and Investopedia find most holders plan to keep or expand AI positions while many also believe prices are speculative, and the S&P 500’s CAPE sits near dot-com bubble levels, signaling broad overvaluation. Analysts warn that a crash could produce steep volatility and losses for some tech names, yet supporters point to large, diversified revenues at leaders like Nvidia and Alphabet that could weather a downturn.
3. Retailers test AI to stop return fraud 🇽
This holiday season Happy Returns is piloting "Return Vision," an AI-driven system that flags suspicious retail returns before refunds are issued, aiming to curb part of the estimated $76.5 billion in annual U.S. return fraud.
The tool analyzes timing, frequency and location signals when customers start a return, supports in-person barcode checks at drop-off points, and routes flagged packages to human auditors for photo comparison and final review. Early pilot data show under 1% of returns flagged as high risk and about 10% of those confirmed as fraud, with average prevented loss around $200 per case, suggesting AI plus hands-on inspection can reduce losses without broadly slowing refunds.
4. Mistral tests “Workflows” and shared Connectors in beta 🧬
Mistral is quietly rolling out a beta “Workflow” option in its sidebar, signaling the company is moving from model releases toward tools that let teams build repeatable, multi-step processes inside the platform.
The beta also hints at a new Connectors area to manage integrations as reusable components across projects, which would make integrations shared building blocks rather than one-off setups. On the Le Chat side, Mistral is consolidating attachment entry points into a single composer UI, tightening how users pick agents, libraries, and assets.
5. Parents sound alarm as chatbots become teen confidants 🤖
A surge in teen use of AI chatbots, now including daily engagement for many, has prompted parents and experts to warn about mental health risks and social development harms, with disturbing reports including chats that encouraged self-harm and even two teens’ deaths cited at a Senate hearing.
Psychologists and pediatricians say extended, personalized conversations with chatbots can reinforce risky content, erode social skills, disrupt sleep, and lead vulnerable teens to substitute machines for human support. Experts urge parents to stay engaged, build digital literacy, set time limits, ensure kids use accounts with parental controls, and seek professional help if warning signs appear.
While you were busy finishing leftovers and prepping for New Year's, the AI world decided to drop a twenty billion dollar bomb.
Nvidia just made one of the biggest moves in chip history.
So much for a quiet holiday break.
And if that wasn't enough, we have Chinese models taking the coding crown and OpenAI admitting a massive security flaw.
What'd you miss?
1. Nvidia Buys Groq Assets for $20 Billion 💰
Holllllld up.
According to reports, Nvidia agreed to buy Groq's assets for $20 billion in cash.
Just to be clear here. This is Groq with a Q. It is the inference chip company and not Elon Musk's chatbot.
But here is the wilder part.
They used a "license plus hire" structure.
Nvidia gets the hardware, IP, and talent without technically buying the entire company. This allows them to bypass standard federal antitrust reviews.
Groq founder Jonathan Ross and other leaders will join Nvidia to integrate their lightning-fast LPU technology.
What it means: Nvidia is ruthlessly eliminating competition in the inference market.
They know speed is the next bottleneck for things like OpenAI's thinking models.
By absorbing Groq's tech without a full buyout, they secure their dominance before regulators can even blink.
2. MiniMax M 2.1 Takes the Coding Crown 👑
MiniMax announced their M 2.1 upgrade this past week and it’s a name and model you should quickly get accustomed to. (Not saying you should use it, but you should at least be aware of its implications.)
You might not know the name. But this Chinese company just embarrassed some of the biggest players in AI.
And the results?
Sheeesh.
It claimed the top spot on the SWE-bench multilingual benchmark with a score of 72%.
That beats heavy hitters like Gemini 1.5 Pro and Claude 3.5 Sonnet.
It is now the best model in the world for non-Python coding tasks like Java, C++, and Rust.
Despite being a massive model, it only activates 10 billion parameters per token with its MoE setup.
What it means: China is winning the open source war exactly as predicted.
If your company does heavy coding in languages other than Python, you should be looking at this model.
3. OpenAI Admits You Can't Fix Prompt Injection 🔓
OpenAI released a report this week that validates your worst security fears.
They publicly acknowledged that prompt injection attacks cannot be "deterministically eliminated."
This is a massive admission for enterprises trying to deploy AI agents or use agentic browsers.
The report states that agent mode increases the attack surface significantly. Even sophisticated defenses cannot guarantee protection against malicious inputs.
Think about hidden text on a website tricking an AI agent into buying 1,000 rolls of toilet paper.
Since simple code can't stop this, OpenAI built an LLM-based automated attacker.
It uses reinforcement learning to find exploits that human red teams miss.
What it means: Deterministic security in AI tools is official DOA.
You cannot rely on simple "if-then" logic to protect your AI agents.
We are entering an era where you need an AI to police another AI because traditional software security isn't enough.
4. Alexa Plus Adds Big Partners (But Is It Smart?) 🗣️
Reports indicate that Amazon is adding major partners to its Alexa Plus service in 2026.
The new lineup includes Angi, Expedia, Square, and Yelp.
This move aims to let users book hotels, schedule salon appointments, and handle payments through voice commands.
The Expedia integration seems the most promising. It allows for comparing and managing hotel reservations using natural language.
But let's be honest.
The current Alexa Plus experience is still not good.
It is slow and frustrating compared to competitors. If you have used Gemini Live or OpenAI's voice mode, even Alexa’s new Alexa+ feels ancient.
What it means: Amazon is trying to pivot Alexa into a transactional platform.
They want it to be an app store for your voice.
But without a massive improvement in the underlying intelligence, users will keep flocking to Google and OpenAI.
5. AI Adoption Threatens Social Security 📉
Welp.
Your retirement fund might be the next victim of AI adoption.
A new report from Barron's warns that AI could accelerate the depletion of Social Security.
The problem lies in the payroll tax base. As AI automates jobs, fewer humans are paying into the system.
The report cites a McKinsey analysis estimating that 30% of U.S. work hours could be automated by 2030.
This threatens millions of jobs. Specifically white-collar roles in admin and legal.
The Social Security Administration has warned that faster-than-expected job loss would reduce tax income. This could push the trust fund depletion date closer than the current projection of 2033.
What it means: The structure of the economy is breaking.
Our safety nets rely on a traditional full-time workforce that AI is dismantling.
We are moving toward a gig economy that contributes far less to these funds.
6. Microsoft CEO Takes Over Copilot Product 🛠️
The Information reports that Satya Nadella has effectively become the product manager for Microsoft Copilot.
Awkwaaaard.
Nadella reportedly told engineering managers that integrations with Gmail and Outlook "don't really work."
He now holds weekly hour-long meetings with top engineers to grill them on performance.
He is also personally recruiting talent from OpenAI and DeepMind to close the gap.
This is highly unusual for a CEO of a company this size.
But it shows how serious the situation is. Microsoft knows it is falling behind Google in consumer AI quality.
What it means: Microsoft is in crisis mode.
When the CEO has to file bug reports, you know the product organization is failing.
They are feeling the pressure from Google's recent wins and are desperate to fix Copilot's reputation.






Reply