Ep 737: AI Governance in Plain English: 5 AI Rules Every Company Needs to Follow (Start Here Series Vol 13)
Microsoft reportedly might sue OpenAI, 93% of jobs at risk due to AI, HSBC might cut 20K jobs due to AI and more
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
In Partnership With Section
Section: The fastest way to drive, measure, and see returns on AI adoption
Most companies are spending thousands (or even millions) a year on AI tools that employees are barely using, and only 12% are actually getting business value from them.
Section is the platform that fixes that: it coaches employees on real use cases, tracks adoption across your org, and shows you exactly where AI is and isn't creating value.
You go from rolling out tools to proving measurable ROI. Stop guessing if your AI investment is working and check out Section at sectionai.com.
Outsmart The Future
Today in Everyday AI
8 minute read
🎙 Daily Podcast Episode: AI Governance? What does that even mean with today’s agentic wave? We give you the 101. Give today’s show a watch/read/listen to find out.
🕵️♂️ Fresh Finds: OpenAI’s big acquisition for Codex, Chinese model stuns at a fraction of the cost, Google’s new free AI training and more. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: Microsoft reportedly might sue OpenAI, 93% of jobs at risk due to AI, HSBC might cut 20K jobs due to AI and more. Read on for Byte Sized News.
💪 Leverage AI: There are 5 must-follow rules for AI governance that withstand even today’s pace of innovation. Keep reading for that!
↩️ Don’t miss out: Miss our last newsletter? We covered: OpenAI Debuts GPT-5.4 Mini and Nano, Google Expands Personal Intelligence, Anthropic releases Dispatch for iPhone control and more. Check it here!
Ep 737: AI Governance in Plain English: 5 AI Rules Every Company Needs to Follow (Start Here Series Vol 13)
Pop quiz: If your company's agents go off the rails, can you confidently name the human at fault in 10 seconds?
Prolly not.
Because lately, companies have spent more energy on going fast and breaking things instead of governance.
Ready for the ironic part? People think AI governance slows you down. But, it actually speeds you up.
Join us as we give you the 101 on AI Governance and the 5 AI Rules Every Company Needs to Follow.
Also on the pod today:
• AI governance: stoplight or brake? 🚦
• Shadow AI risks skyrocketing ☁️
• Classify AI by risk tiers ⚠️
It’ll be worth your 31 minutes:
Listen on our site:
Subscribe and listen on your favorite podcast platform
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – OctoClaw helps you build your AI team in minutes, Budibase offers open-source agents that run your ops, and OpenObserve is an open-source Datadog alternative.
Free AI Trainings — Google just released some free GenAI trainings.
Agentic Issues — Agentic AI at Meta pushed advice that triggered a two-hour internal access breach. Find out how.
Acquisition — OpenAI acquired Astral to plug Python’s top dev tools directly into Codex.
Chinese AI — Xiaomi’s MiMo-V2-Pro is a 1T-parameter agent model that rivals top Western AIs at a fraction of the cost.
Apple App Store — Apple is reportedly cracking down on apps that were vibe coded.
AI Lawsuits — BMG is suing Anthropic, claiming Claude was trained on—and spits out—unlicensed hit lyrics.
AI Leaders on the Move — Google DeepMind just hired Bridgewater veteran Jasjeet Sekhon as chief strategy officer, signaling a push to accelerate safe AGI development.
Global AI Policy — Google may allow opt-out in AI search to appease UK concerns.
1. Perplexity’s Agentic Comet browser lands on iPhone 📲
Perplexity has released Comet for iPhone after a one-week delay, bringing its AI-first browser experience to iOS users and joining existing Mac, Windows, and Android versions.
The app centers on Perplexity-powered search and chat, offering in-browser summarization, automation, and context-aware assistance to speed up tasks and research. There is no iPad build at launch, and the release follows Perplexity’s broader push into local AI agents with its Personal Computer announcement for Macs.
2. Microsoft might sue OpenAI as its Amazon deal raises cloud clash ⚖️
Microsoft is reportedly considering legal action after OpenAI struck a $50 billion deal that may let Amazon Web Services serve as the exclusive third-party cloud for its Frontier product, a move that could conflict with Microsoft’s exclusive Azure rights under their renewed agreement, according to the Financial Times.
The dispute is timely because Azure remains Microsoft’s biggest growth engine and OpenAI accounts for roughly 45% of certain cloud commitments, amplifying the commercial stakes and capacity strains behind the headlines.
3. 93% of jobs could be disrupted by AI, Cognizant says 📚
A new report from professional services firm Cognizant says AI is reshaping work now, not over the next decade, updating a 2023 forecast to find 93% of jobs could face some disruption and 30% may face existential risk.
The revised estimate lifts the potential labor value shifted to machines to about $4.5 trillion and signals faster, broader change across white-collar and even manual roles. Major tech companies have already begun cutting staff and reallocating resources toward AI, a trend the report warns could trigger rushed strategies and widespread restructuring.
4. Google’s Stitch becomes an AI-native design canvas 🎨
Google’s Stitch relaunched as an AI-native software design canvas that turns natural language into high-fidelity UI, introducing a redesigned infinite canvas, a context-aware design agent, and an Agent manager to track parallel ideas.
The update adds DESIGN.md for agent-friendly design rules and easy import/export of design systems, plus instant interactive prototyping that auto-generates next screens to speed iteration. New voice controls let designers speak to the canvas for real-time critiques and updates, while MCP, Skills, and SDK integrations bridge designs to developer tools for smoother handoffs.
5. HSBC considers 20,000-job cut as AI reshapes back-office work 🪓
According to Bloomberg, HSBC is weighing cuts of about 20,000 roles globally over the next three to five years, using AI to reduce middle- and back-office headcount and simplify the bank’s operations.
That would affect roughly 10% of HSBC’s 208,720 staff and follow earlier restructurings that already reduced managing director numbers and incurred around $1 billion in severance costs. The plan is framed as a medium-term effort to exit non-core businesses, avoid replacing leavers and shift pay toward top performers, not an immediate round of layoffs.
Your company is prolly treating AI governance like a speed bump.
Spoiler alert?
It actually helps you go faster. If you wanna get stuck in pilot purgatory in your AI implementation, don’t keep reading.
But if you wanna govern agentic AI the right way, today’s episode is the legit blueprint.
Cuz let’s be honest: that flimsy AI governance that you worked on from 2023-2024 and finally implemented in 2025 is older than that wood dresser in your mom’s attic.
Today’s AI governance? It’s gotta be nimble yet strict, fast yet measured.
Make sure you check today’s full episode, but then here’s the 1-2-3 (4-5) of what ya need to know. 👇
1. Audit Your Shadow AI First 🔥
Most companies have no clue what AI tools are actually running inside their own walls. That ain't a knowledge gap. That's a liability waiting to happen.
IBM's 2025 breach data shows shadow AI was involved in 20% of all tracked data breaches and cost companies $670,000 MORE per incident than normal leaks. Over half of organizations have zero AI tool inventory right now.
Banning the unauthorized tools doesn't fix it. Employees just hop off the VPN and keep using them anyway.
Try This
Stop acting like a strict parent and start acting like an internal investigator. Pull your IT logs this week, then ask your team directly what AI tools they actually use, cuz the approved list and the honest list ain't gonna match.
Map the exact problems people are solving with unauthorized tools, then fast-track authorization for the two or three they rely on. Build a request channel and give them a sanctioned path.
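If your IT logs are already centralized, even a crude tally gets the honest list started. Here's a minimal Python sketch, assuming a simplified `user domain` log format and a hand-picked domain watchlist (both hypothetical — adapt to whatever your proxy actually exports):

```python
from collections import Counter

# Hypothetical watchlist of AI tool domains to flag;
# swap in the ones relevant to your org.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "perplexity.ai"}

def tally_ai_traffic(log_lines):
    """Count hits per AI domain from simplified 'user domain' log lines."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_DOMAINS:
            hits[parts[1]] += 1
    return hits

sample = [
    "alice chat.openai.com",
    "bob claude.ai",
    "alice chat.openai.com",
    "carol intranet.example.com",
]
print(tally_ai_traffic(sample))  # Counter({'chat.openai.com': 2, 'claude.ai': 1})
```

The point isn't the script — it's that the tally gives you the shortlist of two or three tools worth fast-tracking.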
2. Not Every AI Risk Is Equal ⚡
A tool drafting a welcome email and a tool deciding who gets approved for a mortgage are not the same risk.
Nah.
Treating every AI deployment like it's nuclear is exactly why governance gets a bad rep inside companies. You don't need triple padlocks on a broom closet.
The EU AI Act already built the framework ya need: unacceptable, high, limited, and minimal risk. You don't have to invent this from scratch. Just borrow it.
Try This
Take your top 10 active AI use cases and sort each into those four tiers. You'll prolly spend two hours and realize most land in limited or minimal risk.
For anything high-risk, especially hiring, credit, or health decisions, lock in mandatory human review before any AI output becomes final. Document who owns each sign-off. That paper trail is your legal defense when something goes sideways.
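That two-hour sorting exercise can start as a keyword lookup. A rough Python sketch — the four tier names come from the EU AI Act, but the keyword mapping here is purely illustrative, not the Act's actual criteria:

```python
# Illustrative first-pass triage. The EU AI Act's fourth tier,
# "unacceptable" (e.g. social scoring), should simply never ship,
# so it's omitted from this sketch.
HIGH_RISK_KEYWORDS = {"hiring", "credit", "mortgage", "health"}
LIMITED_KEYWORDS = {"chatbot", "summarization"}

def classify(use_case: str) -> str:
    """Sort a use-case description into a rough risk tier."""
    words = set(use_case.lower().split())
    if words & HIGH_RISK_KEYWORDS:
        return "high"     # mandatory human review before output is final
    if words & LIMITED_KEYWORDS:
        return "limited"  # transparency obligations
    return "minimal"

print(classify("AI mortgage approval"))      # high
print(classify("welcome email drafting"))    # minimal
```

A real classification needs a human reading the Act's criteria, but a first pass like this shows most of your list landing in limited or minimal.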
3. Real Lawsuits Need No Evil Intent 🚀
UnitedHealthcare is facing a class action for allegedly letting an AI model deny elderly patient care at a 90% error rate. Workday is defending a nationwide age discrimination suit over its AI hiring screening tool.
Neither company woke up deciding to harm people. The AI just acted. Without guardrails.
When agents make decisions, your company owns every single one. The FTC has already said so out loud.
Try This
Map your highest-stakes AI use cases and ask one question for each: if this blows up tomorrow, can ya show exactly who reviewed and approved this deployment?
If the answer is no for even one use case, that is your most urgent governance gap. Start there. Build the paper trail before you need one, not after.
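That who-approved-this question is easy to make answerable in seconds instead of meetings. A toy sketch of an approval ledger, with hypothetical deployment names and reviewers:

```python
from datetime import date

# Illustrative sign-off ledger: one accountable reviewer per deployment.
approvals = {
    "resume-screener": {"reviewer": "J. Doe", "approved": date(2025, 11, 3)},
}

def who_approved(deployment: str) -> str:
    """Answer 'who reviewed and approved this?' in one lookup, or flag the gap."""
    record = approvals.get(deployment)
    return record["reviewer"] if record else "GOVERNANCE GAP: no approval on file"

print(who_approved("resume-screener"))  # J. Doe
print(who_approved("pricing-agent"))    # GOVERNANCE GAP: no approval on file
```

Every deployment missing from the ledger is a gap you found before a regulator or plaintiff did.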
4. One Owner or Total Chaos 🔥
Real talk. If an AI agent went completely off the rails at your company right now, could you name the ONE accountable person in under 10 seconds?
Not a committee.
One person.
If ya can't do it, you don't have governance. You've got a vibe and a prayer, and that ain't gonna hold up when something goes sideways.
Try This
Assign that person this week with documented authority to pause any deployment without waiting for approval chains. That is the bar, fam.
Then build a five-person governance crew around them: an executive sponsor, legal, IT, a domain expert, and a daily AI user. That daily AI user is your frontline intelligence. They know how agents are actually running long before any incident report lands on anyone's desk.
5. Governance IS the Gas Pedal 🚀
75% of companies are stuck in pilot purgatory right now. Small AI experiments running on a loop, never scaling, cuz corporate policy moves too slow for agentic AI that moves too fast.
But companies with mature governance break out 40% faster. We broke down exactly why on Everyday AI this week.
Governance is not ethics wallpaper. It is the finely tuned engine that lets you push the pedal to the metal without flying off the cliff.
Try This
Throw out your static policy doc and replace it with a playbook for every AI use case you actually run. Every deployment needs five answers before it launches: the task, access level, accuracy measure, human reviewer, and escalation path.
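Those five answers can even be enforced mechanically before anything launches. A minimal sketch, with illustrative field names standing in for the five questions:

```python
# A deployment is launch-ready only when all five playbook questions
# are answered; these field names are illustrative, not a standard.
REQUIRED_FIELDS = {
    "task", "access_level", "accuracy_measure", "human_reviewer", "escalation_path",
}

def missing_answers(deployment: dict) -> set:
    """Return the playbook questions a deployment hasn't answered yet."""
    return {f for f in REQUIRED_FIELDS if not deployment.get(f)}

draft = {
    "task": "Summarize support tickets",
    "access_level": "read-only CRM",
    "human_reviewer": "support lead",
}
print(sorted(missing_answers(draft)))  # ['accuracy_measure', 'escalation_path']
```

Wire a check like this into your deployment pipeline and "we forgot to name a reviewer" stops being a possible failure mode.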
Then set a monthly governance review. Monthly. The last three months of AI capabilities have outpaced the last three years combined. If your governance doc doesn't know that yet, it's already a liability.