
Ep 747: Responsible AI Playbook: What It Means and 5 Moves to Ensure Your AI Strategy Survives (Start Here Series Vol 17)

Microsoft Releases MAI Models, Google drops Gemma 4, Slack Upgrades Slackbot with AI Features and more

 

Outsmart The Future

Today in Everyday AI
8 minute read

🎙 Daily Podcast Episode: Ep 747: Responsible AI Playbook: What It Means and 5 Moves to Ensure Your AI Strategy Survives (Start Here Series Vol 17) Give today’s show a watch/read/listen to learn more.

🕵️‍♂️ Fresh Finds: Anthropic Tests Conway Agent Platform, Meta Previews New AI Models, OpenAI Expands Codex Plugin Capabilities, and more. Read on for Fresh Finds.

🗞 Byte Sized Daily AI News: Microsoft Releases MAI Models, Slack Upgrades Slackbot with AI Features, Alibaba’s Qwen3.6-Plus Surges in Code Arena, and more. Read on for Byte Sized News.

💪 Leverage AI: Most companies are moving fast with AI, but they’re not protected. That’s the real risk. Keep reading for that!

↩️ Don’t miss out: Miss our last newsletter? We covered: Google Launches Cheaper Veo 3.1 Lite, Anthropic Source Code Leak Sparks Security Frenzy, OpenAI Secures $122B Funding Round, and more. Check it here!

Ep 747: Responsible AI Playbook: What It Means and 5 Moves to Ensure Your AI Strategy Survives (Start Here Series Vol 17)


Half of consumers question the authenticity of what they see online. 🤔

That's the reality of the business world that your company is blindly spraying a gajillion AI-generated artifacts into.

Sure, enterprises want to 'do the right thing' when it comes to ethical and responsible AI.

But it's easier said than done when the tech is outpacing the guardrails.

Also on the pod today:

• Only 30% of companies have AI governance
• Five pillars of responsible AI 🏛️
• The ten second accountability rule ⏱️

It’ll be worth your 26 minutes:


Subscribe and listen on your favorite podcast platform


Here are our favorite AI finds from across the web:

New AI Tool Spotlight – Denovo helps you build your dream startup in minutes, Lightning V3 is a text-to-speech model built for voice agents, and Cosyra runs Claude Code, Codex CLI, OpenCode, and Gemini CLI from a cloud terminal on your phone.

Anthropic Tests Conway — Anthropic is quietly building "Conway," a new always-on agent platform with extensions, webhooks, and deep Claude integration.

Meta Upgrades — Meta is quietly testing next-gen Avocado and a new Paricado AI, both with features not yet public.

GLM-5V-Turbo Coding Mode — GLM-5V-Turbo can turn screenshots and design drafts into real code, mixing visual and programming skills without missing a beat.

Computer in Slack — Perplexity Computer is saving companies millions by running AI-powered workflows right inside Slack.

Codex Plugin — OpenAI's Codex now syncs code and tickets in real time, cutting out endless tab-switching.

Control Streamdeck with AI — Stream Deck 7.4 now lets AI tools like NVIDIA G-Assist and Aitum trigger your actions with voice, text, or live events.

Arcee Trinity-Large-Thinking Released — Trinity-Large-Thinking just dropped, bringing stronger multi-turn reasoning and open weights you can actually own.

Nuclear Energy — South Korea and France are teaming up on AI and nuclear energy. See how this tech-power duo plans to shape the future.

OpenAI Acquisition — OpenAI just bought TBPN, the team behind the popular AI talk show.

1. Microsoft Unveils MAI Models: Cheaper, Faster, and Ready to Roll

Microsoft just launched its new lineup of MAI models, now live for developers on Microsoft Foundry and the MAI Playground.

The company claims these models are not just more affordable but also outperform rivals like Whisper and Gemini in key languages, all while promising robust safety features for enterprise use. With prices starting as low as $0.36 per hour for transcription, Microsoft is betting these models will power everything from consumer apps to large-scale commercial deployments.

2. Slack Unveils the Next Generation of Workplace AI 🧠

Slack announced a major upgrade today, rolling out a new version of Slackbot packed with over 30 features to streamline work for teams of all sizes.

The revamped Slackbot now acts as an AI teammate, capable of transcribing meetings, handling CRM tasks, and connecting seamlessly with Salesforce and other business apps. This marks a strategic move to make workplace AI more accessible and collaborative, eliminating the headaches of juggling multiple tools and platforms.

3. Google Unleashes Gemma 4 for Local AI Power ⚡

Google just pulled back the curtain on Gemma 4, its most advanced open-weight AI models to date, promising a big leap for developers craving local control and speed.

The new lineup includes four models specifically tuned for everything from powerhouse GPUs to mobile devices, all boasting lower memory demands and lightning-fast response times. In a move likely to please developers frustrated by licensing headaches, Google is ditching its custom Gemma license for a more open approach.

4. Qwen3.6-Plus Shakes Up Code Arena Rankings 📈

Qwen3.6-Plus Preview from Alibaba has just surged to #8 overall in Code Arena, catapulting the lab to second place on the coveted React leaderboard.

This new version is grabbing attention for its sharper agentic coding abilities, handling multi-step reasoning, tool use, and complex multi-file apps with impressive speed and smarts. The buzz reflects a growing push toward AI that can tackle real-world tasks with greater autonomy and efficiency.

5. Microsoft and Chevron Team Up for AI Data Center Power 😁

Microsoft and Chevron have just inked an exclusivity deal to supply power to a massive AI data center complex in West Texas, signaling a major energy-tech partnership.

The agreement highlights the urgent need for reliable energy as AI infrastructure rapidly expands, especially in regions like Texas where demand is booming. Both companies are betting big on the future of artificial intelligence and sustainable energy collaboration.

A federal court just ruled "the algorithm did it" is no longer a legal defense.

Your AI vendor and your company can now BOTH be held liable for your AI's outputs.

(And sorry… most companies are out here treating responsible AI like a terms-of-service checkbox they scroll past at 2 AM.)

Half of consumers already question the authenticity of almost everything they encounter online. Courts are calling companies to account. The EU AI Act enforcement clock hits zero in August 2026, with fines up to 7% of global revenue.

Your competitors who figured this out? They're reporting over 5% higher EBIT impact.

We broke this down on today's Everyday AI Start Here Series and handed you the five-move responsible AI playbook you need right now.

This ain't future risk. It's already happening. Time to get right, fam.

1. Your Customers Already Assume It's All Fake 🔥

Half of consumers already distrust almost everything they see online.

Not some of it. Almost everything.

The iProov research tracking this isn't measuring paranoia. It's measuring a permanent default shift happening now as AI-generated fakes get indistinguishable from real life. Deepfake fraud, synthetic media, AI slop flooding every channel your customers use. That baseline trust people used to hand to brands? Evaporating fast.

Consumers are actively shifting toward brands that can PROVE their outputs are authentic.

It's the organic food play of this decade.

When brands started screaming "clean ingredients" ten years ago, people called it niche. Now those brands own loyalty that conventional brands cannot buy back. The same dynamic is playing out in AI right now.

Responsible AI isn't a compliance form. It's a trust infrastructure play that either builds your competitive moat or lets your competitors build theirs first.

Try This

Audit every external-facing AI output your company produces right now. Website copy, graphics, chatbot responses, customer emails. Anything a customer or prospect touches.

For each one, ask: does the customer know AI was involved here? If no, decide whether disclosure makes sense. Spoiler: it usually does.

Then do the same thing internally. Your employees deserve to know how AI agents are making decisions that affect their work.

Transparency starts at home. Then it reaches your customers. Then it becomes your moat.

2. Courts Ruled "The Algorithm Did It" Ain't a Defense ⚡

Something the legal world just settled that most enterprise leaders haven't caught up on yet.

Mobley v. Workday got certified as a nationwide collective action. A federal court ruled that when an AI system makes hiring decisions, both the AI vendor and the company deploying it can share liability for discriminatory outcomes.

Not the algorithm. Your company.

It ain't only hiring, either. Courts are rejecting any clean distinction between software-driven decisions and human-driven ones. If your AI made the call, your company made the call.

Meanwhile, Anthropic just settled a copyright case with authors for $1.5 billion. Courts now treat AI outputs as potentially competing with copyrighted works, and enterprise users face real IP exposure when their vendors trained on unlicensed data.

State laws in California, Illinois, New York, and Colorado already regulate AI in hiring. The "we didn't know" defense?

Cooked.

Try This

Pull your AI vendor agreements this week and look for IP indemnification language. Most enterprise contracts have some. It's often narrower than you think.

Ask your legal team one direct question: if an AI output our company produced gets challenged in court, where exactly does liability sit?

If they can't answer fast, that's your answer. Gap identified.

Document every AI tool touching any customer-facing output or hiring-adjacent decision. That list needs to exist before someone asks for it in discovery.

3. Five Moves Before the August Deadline 🚀

August 2, 2026. Mark it now.

EU AI Act enforcement goes live for high-risk AI: hiring algorithms, credit scoring, biometrics. Noncompliance means fines up to €35 million or 7% of global annual revenue, whichever is higher.

If your company does business in Europe or serves EU customers, this applies to you.

The five-move playbook: one, audit every AI system and classify it by risk level. Two, name the one human accountable when something goes wrong. Three, test your tools for algorithmic bias before regulators and courts do it for you. Four, build expert-driven oversight, not just IT rubber-stamping your agentic stack. Five, treat transparency as a competitive advantage.
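For teams tracking this in a spreadsheet today, moves one and two can be sketched as a simple inventory: every AI system gets a risk tier and exactly one named accountable human. A minimal sketch below; the system names, tiers, and owners are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical sketch of playbook moves one and two: an inventory of
# AI systems, each classified by an EU AI Act-style risk tier, with
# a single accountable owner. All names here are illustrative.
from dataclasses import dataclass

RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

@dataclass
class AISystem:
    name: str
    use_case: str
    risk_tier: str          # one of RISK_TIERS
    accountable_owner: str  # the one human on the hook

    def __post_init__(self):
        # Refuse entries that can't pass the ten-second accountability test.
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")
        if not self.accountable_owner:
            raise ValueError(f"{self.name} has no accountable owner")

inventory = [
    AISystem("resume-screener", "hiring", "high", "VP People Ops"),
    AISystem("support-chatbot", "customer service", "limited", "Head of CX"),
]

# Surface the systems regulators will look at first.
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
print(high_risk)  # -> ['resume-screener']
```

The point isn't the code; it's that the list exists, every row has an owner, and the high-risk rows are one query away when someone asks.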

McKinsey confirms organizations investing heavily in responsible AI are far more likely to report EBIT impact above 5%.

Responsible AI builds the roads and paints the lanes. Without them, your AI cars go everywhere.

Try This

Run the ten-second accountability test on any AI system currently in your org. If something went catastrophically wrong today, can you name the single human being ultimately accountable?

If the answer is "prolly legal" or "that's an IT question," your accountability structure is cooked.

Calendar a hard date before June 2026 to audit every AI system touching hiring, credit, customer scoring, or biometric data.

August will sneak up fast. Companies waiting until July are gonna be in serious, serious trouble.

 
