
Ep 735: The AI Labor Shift: When It Will Happen and What It Means for Jobs (Start Here Series Vol 13)

Our guide to the AI labor shift, OpenAI Trims Side Projects, NVIDIA Launches Vera CPU, Microsoft’s big Copilot shakeup, and more

 

Sup y’all! 👋

Prepping for our hands-on 'AI at Work on Wednesdays' show now.

I’m torn.

A ton of meaningful releases over the past week or so.

What should we cover tomorrow?

What should we tackle on Wednesday's show? 🤔

🗳️ Vote to see LIVE results 🗳️


✌️

Jordan

P.S. Make sure to check out the lil bonus at the end.

In Partnership With Section

Section: The fastest way to drive, measure, and see returns on AI adoption

Most companies are spending thousands (or even millions) a year on AI tools that employees are barely using, and only 12% are actually getting business value from them.

Section is the platform that fixes that: it coaches employees on real use cases, tracks adoption across your org, and shows you exactly where AI is and isn't creating value.

You go from rolling out tools to proving measurable ROI. Stop guessing if your AI investment is working and check out Section at sectionai.com.

Outsmart The Future

Today in Everyday AI
8-minute read

🎙 Daily Podcast Episode: AI job cuts are rising fast and the way we work is already starting to shift. In Episode 13 of our Start Here Series, we break down what the AI labor shift really means for your future. Give today’s show a watch/read/listen to find out.

🕵️‍♂️ Fresh Finds: Elon Musk Sued by Teens Over xAI Model, Mistral Releases Small 4, MiroFish AI Swarm Raises $4.1M, and more. Read on for Fresh Finds.

🗞 Byte Sized Daily AI News: OpenAI Trims Side Projects, NVIDIA Launches Vera CPU, Microsoft’s big Copilot shakeup, and more. Read on for Byte Sized News.

💪 Leverage AI: The AI labor shift is here. Big Tech is cutting jobs, spending billions on AI, and giving everyone else a preview of what comes next. Keep reading for that!

↩️ Don’t miss out: Miss our last newsletter? We covered: NVIDIA and Mistral team up on open source, Meta eyes 20% staff layoff due to AI, OpenAI in late talks for new $10B enterprise venture and more. Check it here!

Ep 735: The AI Labor Shift: When It Will Happen and What It Means for Jobs (Start Here Series Vol 13)


40,000+ AI-linked job cuts in two weeks. 😬

But AI is also supposed to create millions of new jobs.

So are we watching traditional work slowly die while the replacements don't exist yet?

And did we really spend decades building expertise just to babysit AI agents that are smarter and faster than us?

Join us live to find out.

Also on the pod today:

• AI layoffs: Wall Street boost 📈 
• Entry-level hiring quietly slowing
• $700B shift to AI infrastructure 💸 

It’ll be worth your 34 minutes:

Listen on our site:

Click to listen

Subscribe and listen on your favorite podcast platform


Here are our favorite AI finds from across the web:

New AI Tool Spotlight – OpenViktor hires your AI employee for any role, mTarsier is an open-source platform for managing MCP servers and clients, and Agen offers fully autonomous AI coding agents.

Elon Musk Sued by Teens — Three Tennessee teens sue xAI, saying its model helped create lifelike nonconsensual nudes of them as minors.

Mistral Small 4 Released — Mistral Small 4 packs reasoning, coding, and multimodal power into one open-source model.

Attorneys Fined for AI Hallucinations — Two attorneys were hit with $30,000 in sanctions after a Sixth Circuit review found dozens of fabricated or incorrect citations tied to likely AI-generated “hallucinations.”

MiroFish AI Swarm — A 20-year-old built MiroFish in 10 days, spawning thousands of autonomous AI agents and landing $4.1M. Curious how one dev outscaled whole teams?

Nscale Buying Monarch Campus — Nscale is buying AIP’s up-to-8GW Monarch campus and signed a nonbinding LOI with Microsoft to lease 1.35GW of Nvidia Vera Rubin GPUs.

Perplexity Computer on Android — Perplexity Computer finally lands on Android — full desktop sync on your phone.

NVIDIA's New LPU — NVIDIA's new Groq 3 LPU supercharges inference for agent-based AI, targeting million-token contexts and blistering token throughput.

AI and Work — Most workers say AI will make the workplace feel less human, with 57% worried it will erode critical skills and 29% fearing job loss.

ChatGPT Instant Update — ChatGPT’s Instant mode got a tone tune-up to cut back on teaser-y phrasing and improve follow-up tone.

1. OpenAI refocuses on core products, trims side projects ✂️

OpenAI’s leadership is preparing a strategic shift to concentrate on coding and business users, with top executives reviewing which side projects to deprioritize, a move previewed to staff at an all-hands meeting and reported by the Wall Street Journal.

The change signals a tighter commercial focus as the company scales revenue and reallocates resources toward its most lucrative and defensible offerings. For customers and partners, that means more bets on developer and enterprise tools and less attention to experimental consumer-facing efforts. OpenAI declined to comment on the report.

2. NVIDIA unveils Vera CPU built for agentic AI with big speed and efficiency gains 💻

Announced at GTC, NVIDIA launched the Vera CPU, a purpose-built processor designed to accelerate agentic AI and reinforcement learning, claiming twice the energy efficiency and 50% faster performance than traditional rack-scale CPUs.

The platform pairs 88 custom Olympus cores with high-bandwidth LPDDR5X memory and NVLink-C2C to optimize throughput and responsiveness for large-scale AI services, while offering rack and server designs that scale to tens of thousands of concurrent instances. Major cloud, hyperscaler, and hardware partners including Alibaba, Meta, Oracle Cloud Infrastructure, Dell, HPE and Lenovo are already collaborating on deployments, signaling rapid ecosystem adoption and positioning Vera as a new standard for AI infrastructure.

3. Codex adds subagents for parallel, clean workflows 🖥️

OpenAI announced subagents in Codex, a timely upgrade that lets developers spawn specialized, isolated assistants to run parts of a task in parallel while keeping the main conversation uncluttered.

The feature’s key benefit is context budget separation: each subagent gets a fresh context so exploratory or messy probes do not contaminate the orchestrator thread, changing how teams can split complex work. Adoption questions remain around cache efficiency and how much explicit prompting is needed to create and steer subagents, since they do not auto-spawn and require direct requests.

4. NVIDIA unveils NemoClaw for OpenClaw, simplifying secure AI agents in one install 🦞

Announced at GTC on March 16, 2026, NVIDIA officially introduced NemoClaw, a single-command stack that deploys Nemotron models and the OpenShell runtime to add privacy, sandboxing, and policy controls to OpenClaw agents, making always-on autonomous assistants more secure and manageable.

The stack lets agents run locally on NVIDIA GeForce RTX systems, RTX PRO workstations, DGX Station, and DGX Spark or call cloud models through a privacy router, combining local performance with cloud-scale capabilities.

5. Anthropic hires specialist to stop AI-guided chemical and explosives misuse 💥

Anthropic has posted for an external expert in chemical weapons and high-yield explosives to harden its models against "catastrophic misuse," a timely move as AI firms race to manage dual-use risks.

The role will work with AI safety researchers to set handling rules for sensitive chemical, radioactive and explosives information, reflecting growing industry concern about AI-enabled harm.

6. Manus lets its AI agent work directly on your PC 🖥️

Manus today rolled out "My Computer," a timely upgrade that lets its AI agent access and control files, run command-line tasks, and use local GPUs on macOS and Windows, making local automation immediate and broadly available.

The move shifts Manus from cloud-only limits to direct local execution while keeping user approval and fine-grained permission controls, which the company says differentiates it from competitors.

7. Microsoft reshuffles Copilot leadership as Suleyman pivots to “superintelligence” 🧠

Microsoft announced a leadership overhaul tightening consumer and commercial Copilot teams under Jacob Andreou while freeing Mustafa Suleyman to concentrate on building next‑generation models, a move timed with mounting pressure to show AI returns.

The change hands Copilot product duties to executives reporting directly to Satya Nadella and signals Microsoft is prioritizing model development and cost efficiency over immediate Copilot user growth. Suleyman says the company will double down on “superintelligence” lineages aimed at enterprise needs and lowering model COGS, while still leveraging OpenAI and Anthropic models.

Did we really spend decades building domain expertise just to babysit AI agents that are smarter and faster than us?

Welcome to the AI labor shift. 

Big tech is quietly dumping $700 billion into AI infrastructure this year while slashing 45,000 jobs to literally foot the bill. 

If you haven’t already noticed, 2026 has already started to preview what we’ve talked about since 2023: AI is gonna straight up rattle traditional job markets. 

It starts with Big Tech, but the average American enterprise will eventually follow its lead: fewer humans working and more money invested in AI.

If you think your job or department is immune to this AI upheaval, you are straight up hallucinating, homie.

That’s why we tackled this shift on our Start Here Series on today’s Everyday AI. 

What will jobs look like in the coming quarters and years? And what can you do to prepare? 

Let’s learn y’all.

1. AI layoffs aren’t always AI 🔥🔥

The biggest trap in this episode is believing every “AI layoff” headline means a bot directly replaced a human. Sometimes that’s true. But a whole lot of the time, AI is also the cleanest excuse in the room for companies cleaning up overhiring, shrinking layers, and trying to look brilliant while doing it.

That matters because the market rewards the story. Executives say “AI efficiency,” investors hear discipline, and workers hear doom. Same move. Different audience. Pretty convenient, right?

So the better question is not, “Can AI do this role today?” Nah. The better question is, “Does AI give leadership cover to cut work they already wanted to restructure?” That is the real tell.

If you miss that, you’ll read the labor market wrong. And when you read the market wrong, you redesign your team wrong too.

Try This

Take your current org chart and your last two hiring plans.

Circle every role that mostly routes information, cleans up deliverables, or exists to move work between layers.

Then ask this: if AI disappeared tomorrow, would this role still exist because of judgment, or just because of process?

2. Codified work gets smoked first ⚡

This is where the labor shift gets real ugly, real fast. AI is not coming for every job at once. It’s going after codified work first. Research. First drafts. Slide decks. Summaries. Spreadsheet wrangling. The exact stuff companies used to hand to junior employees so they could learn, earn trust, and slowly climb.

That’s why the bottom of the ladder starts wobbling first. Fewer entry-level hires. Smaller teams. Less middle management. More pressure for a handful of AI-native operators to crank out the output that used to take way more people. The org chart starts looking a lot less like a pyramid and a lot more like a knife.

And when that happens, talent pipelines get weird. Real weird. You can’t build tomorrow’s leaders the old way if the bottom rung disappears.

So yeah, leaders need to stop hiring for tasks and start hiring for judgment, domain context, and the ability to navigate ambiguity when the model gets you 80% there.

Try This

Pick three roles on your team and list the five things each one does most often.

Split every task into one of two buckets: codified or tacit.

If the codified side is doing all the heavy lifting, start redesigning the role now. Don’t wait until finance suddenly “discovers” efficiency.
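Want to make that audit concrete? Here's a rough Python sketch of the two-bucket tally. The roles, tasks, and the 60% "redesign" threshold are all made up for illustration, so swap in your team's real work:

```python
# Rough sketch of the codified-vs-tacit audit.
# Roles, tasks, and the 60% threshold are hypothetical examples.
tasks_by_role = {
    "marketing analyst": {
        "weekly metrics summary": "codified",
        "first-draft slide decks": "codified",
        "vendor data cleanup": "codified",
        "flagging anomalies worth escalating": "tacit",
        "stakeholder negotiation": "tacit",
    },
    "product manager": {
        "sprint status notes": "codified",
        "roadmap tradeoff calls": "tacit",
        "customer interview synthesis": "tacit",
        "competitive research digest": "codified",
        "cross-team conflict resolution": "tacit",
    },
}

def codified_share(tasks):
    """Fraction of a role's tasks that are codified (repeatable, rule-driven)."""
    labels = list(tasks.values())
    return sum(label == "codified" for label in labels) / len(labels)

for role, tasks in tasks_by_role.items():
    share = codified_share(tasks)
    verdict = "redesign the role now" if share >= 0.6 else "keep watching"
    print(f"{role}: {share:.0%} codified -> {verdict}")
```

If a role comes back mostly codified, that's your early-warning light, well before finance "discovers" efficiency.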

3. Build your survival stack now 🚀

The good news is this episode doesn’t just scream that the sky is falling and leave everyone cooked. It lands on a three-step survival guide that actually makes sense.

First, audit your role. Second, document your reasoning, not just your outputs. Third, pick one AI platform and go deep until it becomes an extension of your expertise instead of some random toy you use to summarize meetings and feel productive.

That last part matters a lot. The winners in this shift are not gonna be the people testing every shiny tool of the week like raccoons in a dumpster behind a data center. It’s gonna be the people who pair real domain expertise with deep AI fluency and know how to direct the machine, not just admire it.

That’s the edge. And yep, it’s available right now. But not for people who stay casual.

Try This

Write down one important decision your team makes that never shows up cleanly in a dashboard.

Capture the reasoning behind it. What signals matter, what tradeoffs matter, and what experienced people catch that newer people miss.

Then rebuild one real workflow around one AI platform this week. Depth beats dabbling. Every time.
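If it helps to see what "documenting your reasoning" looks like in practice, here's a minimal Python sketch of a decision-log record. The field names and the example entry are invented for illustration; the point is capturing signals and tradeoffs, not outputs:

```python
# Minimal sketch of a decision-log entry. Field names and the sample
# decision below are hypothetical -- log your team's real reasoning.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    decision: str
    signals: list        # what inputs actually mattered
    tradeoffs: list      # what was weighed against what
    expert_notes: str    # what experienced people catch that dashboards miss

entry = DecisionRecord(
    decision="Delay the Q3 feature launch by two weeks",
    signals=["support ticket spike", "flat trial-to-paid conversion"],
    tradeoffs=["roadmap slip vs. churn risk"],
    expert_notes="Conversion dips like this usually trace to onboarding, not pricing.",
)
print(entry.decision)
```

A few entries like this per month and you've turned tacit judgment, the stuff AI can't scrape, into an asset with your name on it.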

🚨 Bonus 🚨

What do we do with all the ‘extras’ that don’t make the cut?

We put together some amazing additional resources to go with today’s show, including an impressive ‘Cinematic Overview’ from NotebookLM.
