
Ep 681: Who Gets Written Out of the AI Future?

Meta makes huge agent acquisition, Sanders renews call for AI robot tax, and Microsoft's updated AI strategy

 

šŸ‘‰ Subscribe Here | šŸ—£ Hire Us To Speak | šŸ¤ Partner with Us | šŸ¤– Grow with GenAI

Outsmart The Future

Today in Everyday AI
8 minute read

šŸŽ™ Daily Podcast Episode: AI is shaping how we work, think, and tell stories—but if we blindly trust its outputs, marginalized voices risk being pushed further out of the narrative. Give it a watch/read/listen.

šŸ•µļøā€ā™‚ļø Fresh Finds: YouTube still pushing out "AI slop," Google in hot water over AI image training, Shaq is now a vibe coder, and more. Read on for Fresh Finds.

šŸ—ž Byte Sized Daily AI News: Meta makes shocking agent acquisition, Microsoft’s new Copilot moves, Notion tests new AI Workspace and more. Read on for Byte Sized News.

šŸ’Ŗ Leverage AI: Personalized AI feels powerful, but it can trap leaders in flattering echo chambers and erase entire perspectives. Here’s why that’s a serious business risk. Keep reading for that!

ā†©ļø Don’t miss out: Did you miss our last newsletter? We Covered: OpenAI "head of preparedness" Job, Retailers testing AI to stop return fraud, Mistral tests "Workflows" and more Check it here!

Ep 681: Who Gets Written Out of the AI Future?

One of the scariest parts of AI? 😰

Who (or what) gets left out.

LLM outputs are heavily skewed toward the perspectives and content most common in their training data, and toward the people who supervise them.

Which is almost always an absolutely terrible thing.

So, who gets written out of the AI future? And how do we fix it?

Join us to find out.

Also on the pod today:

• Marginalized voices missing in AI šŸ—£ļø
• Over-relying on AI outputs 🚨
• "AI slop" flooding the internet 🌊
 

It’ll be worth your 27 minutes:


Subscribe and listen on your favorite podcast platform


Here are our favorite AI finds from across the web:

New AI Tool Spotlight – Adminder turns product photos into scroll‑stopping video ads, Influcio promises campaign execution 480x faster than a human, and Note67 is a private meeting notes assistant.

AI Image Model — Fal’s new FLUX.2 Turbo slashes generation steps to 8, making 1024x1024 images in about 6.6 seconds. Curious how it stays open-weight but non-commercial?

Vibe Coding — Shaq is a vibe coder now, apparently, as the Replit investor has partnered with the company for vibe coding promotion.

AI Slop — YouTube still pushes mass‑produced "AI slop" — over 20% of new recommendations.

AI Paper Tablet — Paper-like tablet meets AI note tools — TCL’s Note A1 Nxtpaper aims to replace notebooks.

AI New Years Photoshoot — Create dazzling, realistic New Year’s photos from one selfie.

AI Privacy — Google Photos denies using personal photos to train its image AI, but Proton says otherwise.

1. Meta buys Manus for over $2B in rapid deal šŸ¤

According to Bloomberg, Meta agreed to acquire Manus, a Singapore-based AI agent startup with Chinese origins, for more than $2 billion in a transaction completed in about 10 days.

The move gives Meta an immediate subscription revenue stream from Manus’s $125 million annual run rate and adds task-capable agent tech Meta lacks in its current product lineup. Meta says Manus will stop operating in China and that all Chinese investor stakes were bought out, a step likely meant to ease geopolitical concerns while integrating the tech into Meta’s ecosystem.

2. Notion tests AI-first workspace with new sidebar and controls šŸ¤–

Notion is rolling out an early access AI-first workspace that reorganizes the app, adding separate tabs for AI chats, pages, and an inbox to centralize AI interactions and speed workflows.

The update includes a settings toggle to let admins control whether Notion AI can edit pages across a workspace, giving teams fine-grained governance over automated edits. A planned AI credits system would meter usage, show quotas and time-saved breakdowns, and let organizations buy extra credits beyond subscriptions.

3. Sanders revives push to "tax the robots" with new legislation plan šŸ’¼

Former presidential candidate Bernie Sanders renewed his effort to impose a levy on companies that replace human workers with AI or automation, saying Monday that revenue would fund retraining, unemployment support, and other aid for displaced workers.

The proposal would charge firms for each human position eliminated by automation or alter tax rules such as lengthening depreciation schedules for robotic equipment, aiming to remove incentives that favor machines over labor. Sanders cited an October 2025 report warning that AI could displace nearly 100 million U.S. jobs and argued that the gains from automation now flow disproportionately to billionaires and big corporations like Amazon.

4. Microsoft CEO tightens reins as Microsoft reshapes AI leadership 🚧

According to the Financial Times, Microsoft CEO Satya Nadella has overhauled senior leadership to accelerate development of AI models, coding tools and applications after restructuring the company’s partnership with OpenAI, signaling renewed urgency as rivals Amazon and Google advance.

The move responds to competitive pressure and the loss of some exclusive OpenAI advantages, even as Copilot hits 150 million monthly users but still trails larger chatbot audiences. Separately, the Wall Street Journal reports Caterpillar is rapidly expanding generator production for AI data center demand, a shift that could lift company sales and reflects broader infrastructure needs as data center power use climbs.

5. Z.AI launches HK$4.35bn IPO to become Hong Kong’s first major listed LLM developer šŸ¤‘

Zhipu AI, marketed overseas as Z.ai, kicked off a HK$4.35 billion share sale priced at HK$116.20 and aims to list on January 8, positioning itself to be the first large language model developer traded in Hong Kong.

The company expects about HK$4.17 billion in net proceeds and a post-listing valuation near HK$51.16 billion after raising over RMB 8.3 billion in prior funding from heavyweights like Alibaba, Tencent and Xiaomi. The IPO comes amid a wave of tech listings in Hong Kong, with GPU and AI-related debuts sparking massive retail interest but raising questions about market liquidity if many large deals pile up.

You think training AI on your specific voice and preferences makes you efficient.

Sure.

But it actually makes you blind.

(Yikes.)

Most executives are currently building custom instructions that turn sophisticated reasoning engines into digital yes-men. You aren't getting artificial intelligence.

You're getting a high-tech mirror that validates your sometimes bad ideas at lightning speed.

If AI is just a mirror reflecting the people currently in power, what happens to the people standing outside the frame?

They simply vanish.

That’s exactly the sticky situation we un-stickied on today’s show with Bridget Todd, host of Mozilla’s "IRL" podcast. We didn't just talk about bias.

We dug into the terrifying question of who gets written out of the AI future entirely.

While everyone else is high-fiving over personalization features, we explored why building "exclusive" models might be the fastest way to shrink your total addressable market.

Here is why your "customized" AI might be your biggest strategic liability.

1. The Personalization Trap šŸŖž

It feels productive.

But Bridget identified a critical failure point in this approach. She spent time training models to write captions in her exact voice.

When the outputs came back, she loved them.

Not because the writing was objectively good, but because it was a reflection. She fell in love with the mirror.

This is the hidden danger of excessive customization. When you tune models to strictly match your tone and worldview, you strip away the technology's ability to challenge your logic.

You don't need an AI that nods along with your strategy. You need an AI that spots the holes you missed because you were too busy admiring your own reflection.

Try This: Open your custom instructions or system prompts right now. Look for any line where you've told the model to "always" agree, adopt a specific worldview, or never push back. 

Delete it immediately. Instead, add a line instructing the model to explicitly identify logical gaps in your reasoning before generating a response. If your AI assistant never tells you you're wrong, it's not an assistant. It's a sycophant.
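If your team wires prompts up programmatically rather than through a settings page, the same fix is a few lines of code. This is a minimal sketch, not any specific vendor's API: the prompt wording and the `build_messages` helper are hypothetical, and you'd hand the resulting list to whatever chat endpoint you actually use.

```python
# Hypothetical sketch of a "push back first" system prompt.
# The wording and helper name are illustrative; adapt to your own stack.

CHALLENGE_PROMPT = (
    "Before answering, explicitly identify any logical gaps, unstated "
    "assumptions, or counterarguments in my request. Do not simply "
    "agree with my framing."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the challenge instruction so every request gets scrutiny first."""
    return [
        {"role": "system", "content": CHALLENGE_PROMPT},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("Our Q3 strategy is flawless. Draft the announcement.")
```

The design point is simple: the pushback lives in the system role, so it applies to every request by default instead of depending on you remembering to ask for criticism.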

2. Bias Is A Product Failure šŸ“‰

We usually talk about AI bias as a social issue.

It is.

Bridget dropped a specific example that should terrify product managers. When Canva rolled out their image generation tools, the system reportedly flagged "Black women with Bantu knots" as a violation of community guidelines.

Inappropriate content.

Think about the business implication here. This wasn't just a PR nightmare.

It was a functional breakdown where the product literally did not work for a massive segment of the user base because the training data lacked cultural competence. If your AI tools are trained on narrow data, you are building products that only work for narrow markets.

Try This: Audit your current customer-facing AI deployments for "erasure errors" this week. Don't just test if the bot works for your ideal user persona. 

Test it with inputs from demographics, regions, or use cases you typically ignore. If your internal testing team looks exactly like you, you are currently baking liability into your code. 

Diverse testing isn't a "nice to have" for HR. It is the only way to ensure your product actually functions for the total addressable market.

3. The Trust Economy Is Here šŸ¤

The internet is drowning in AI slop. And it’s only gonna get waaaaaay worse. 

Garbage. Content. Everywhere. 

Companies are blindly copying and pasting model outputs to scale content production, creating a wasteland of low-value noise. But this creates a fascinating inverse market opportunity.

As AI generates infinite mediocre content, human trust becomes the premium asset.

Bridget referenced the "FUBU" (For Us, By Us) mentality. The logic is brutal but fair: If it wasn't worth a human's time to create, audiences increasingly feel it isn't worth their time to consume.

The companies that will win in 2026 aren't the ones producing the most AI content. They are the ones using AI to amplify specific human perspectives that build genuine connection.

Try This: Look at your content calendar and identify every piece of material that is 100% AI-generated with zero human oversight. 

Kill it. Replace it with one piece of content where AI did the research but a human wrote the narrative. In an era of infinite slop, your competitive advantage is proving there is a pulse behind the screen. Audiences are learning to spot the difference, and they are punishing the fakes.
