Ep 750: The Vibe Coding Boom: Why Vibe Coding isn't Going Away and How it's Both Good and Bad
Anthropic shocks with its Mythos drop, Broadcom strikes a major AI chip deal with Anthropic and Google, OpenAI pushes for an investigation into Elon Musk, and more.
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Outsmart The Future
Today in Everyday AI
8 minute read
🎙 Daily Podcast Episode: Vibe coding is exploding right now, letting anyone build apps fast—but it’s also creating a wave of fragile software. Give today’s show a watch/read/listen to learn more.
🕵️♂️ Fresh Finds: OpenAI released new AI policy ideas, Meta is testing next-gen AI models, OpenAI is quietly testing a new image model, and more. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: Anthropic shocks with its Mythos drop, Broadcom strikes a major AI chip deal with Anthropic and Google, OpenAI pushes for an investigation into Elon Musk, and more. Read on for Byte Sized News.
💪 Leverage AI: Vibe coding is letting anyone build software fast, but most teams can’t maintain or secure what they’ve created. Keep reading for that!
↩️ Don’t miss out: Miss our last newsletter? We covered: Sam Altman proposed a radical AI wealth plan, OpenAI signaled IPO delays, Microsoft’s Copilot disclaimer is raising questions, and more. Check it here!
Ep 750: The Vibe Coding Boom: Why Vibe Coding isn't Going Away and How it's Both Good and Bad
Is Vibe Coding dying already?
Or will it be as essential to the next decade of work as the browser was for the past 20 years?
And how can your company balance the speed and innovation side of vibe coding without accidentally leaking data or building a product that breaks more often than it works?
We'll break down the basics on this Start Here Series deep(ish) dive into Vibe Coding.
Also on the pod today:
• Dream house analogy for code 🏠
• 41-46% code by AI 🧑💻
• Coders skipping code reviews 🚫👀
It’ll be worth your 34 minutes:
Listen on our site:
Subscribe and listen on your favorite podcast platform
Here’s our favorite AI finds from across the web:
New AI Tool Spotlight – NovaVoice is smart dictation, an AI assistant, and app control in one; Lessie AI is an AI people search engine; OpenOwl is your AI assistant that controls your desktop.
OpenAI Policy — OpenAI just dropped new policy ideas aimed at making sure AI benefits everyone, including a push for a public AI wealth fund.
Meta Open Source — Meta is rolling out new AI models led by Alexandr Wang, keeping some parts proprietary while teasing open source releases. Could this hybrid approach help Meta finally catch up to OpenAI and Anthropic?
OpenAI Testing Image Model — OpenAI is quietly testing a new image model that finally nails UI design and text accuracy, possibly rivaling Google's best.
ChatGPT March Madness — OpenAI has released a diagram of the most-referenced college basketball team by state during the tournament.
Meta Testing Models — Meta’s next-gen AI models are already in live testing, with new names like Avocado Mango and Paricado popping up. The full upgrade could be closer than Meta lets on.
Telegram Updates — Telegram just dropped AI-powered text editing, smarter polls, Live Photos, and bots that can create other bots.
Github Copilot CLI — GitHub Copilot CLI’s new Rubber Duck feature lets a second AI review your code plans, catching mistakes before they snowball.
Google Dictation — Google just dropped a new offline dictation app for iOS that edits out filler words and polishes your speech into clean text.
AI Video — Google dropped the price for Veo 3.1 Fast.
1. Broadcom Strikes AI Chip Deals with Google and Anthropic 🖥️
Broadcom just announced major new partnerships, agreeing to build future AI chips for Google and powering Anthropic’s massive expansion in computing capacity. Anthropic’s rapid growth is fueling this demand, with its annual revenue reportedly tripling and its Claude app topping the App Store charts.
The infrastructure, mostly in the U.S., will tap into Google’s custom processors, and Broadcom expects a surge in business as AI adoption skyrockets.
2. OpenAI Fires Back at Musk, Demands AG Probe Ahead of Blockbuster Trial 😨
In a dramatic escalation just weeks before their high-stakes legal showdown, OpenAI has urged California and Delaware attorneys general to investigate Elon Musk and his associates for alleged anti-competitive tactics meant to undermine the AI company.
OpenAI claims Musk, now a competitor via his xAI venture, has teamed up with Meta’s Mark Zuckerberg to derail their progress and is resorting to questionable opposition research and personal attacks on CEO Sam Altman. The company warns that Musk’s actions could shift the future of artificial general intelligence into the hands of rivals who are less concerned with safety and public benefit.
3. Gemini 3.1 Pro Shakes Up Augment Code with Smarter, Cheaper AI 🤓
Gemini 3.1 Pro is now live in Augment Code, quickly grabbing attention by outperforming Opus 4.6 on a complex codebase planning task—at less than half the cost per message.
This update means engineers get top-tier structural reasoning and faster bug hunting without breaking the bank, making it a game changer for daily development work. While Gemini sometimes needs clearer instructions, its thorough planning and efficiency make it a strong contender for routine coding tasks.
4. OpenAI Launches Safety Fellowship for AI Researchers 💪
Applications just opened for the inaugural OpenAI Safety Fellowship, a new program running September 2026 to February 2027 aimed at supporting independent research into AI safety and alignment.
The fellowship targets a diverse crowd from computer scientists to ethicists, offering mentorship, workspace in Berkeley or remote options, and a monthly stipend. OpenAI says they’re prioritizing technical strength and the potential for real-world impact over degrees or credentials.
5. Anthropic Unveils Mythos AI to Hunt Software Flaws 🕵️
Anthropic just rolled out a preview of its new Mythos AI model to a select group of tech giants, aiming to shake up cybersecurity by spotting hidden software bugs, some of them decades old.
The initiative, dubbed Project Glasswing, brings together over 40 major partners like Amazon, Apple, and Microsoft, who will test the model’s power to strengthen digital defenses and share their findings with the industry.
While Anthropic touts Mythos as its most advanced model yet, the launch comes on the heels of a messy data leak and ongoing government tensions over AI risks. The preview isn’t public, but it signals a new phase in using AI for high-stakes security.
6. China’s Z.ai Unleashes Open-Source AI Marathoner GLM-5.1 🥇
In a major move today, Chinese AI startup Z.ai has open-sourced its GLM-5.1 model under the MIT License, allowing developers worldwide to download, modify, and deploy it for free. Unlike most rivals obsessed with speed, GLM-5.1 is built for the long haul, capable of working autonomously on a single task for up to eight hours while outperforming Western giants like OpenAI and Anthropic in key engineering benchmarks.
This shift signals China’s growing ambition to lead not just in AI scale but in practical, agent-driven automation, as Z.ai cements itself as a heavyweight in the global AI race.
Nearly half of all code written globally this year was generated by AI.
And 63% of the people using these tools have NEVER written a single line of code.
So what happens 90 days later, when the whole thing becomes untouchable and the humans AND the AI have forgotten how that app was built?
Such is the predicament of Vibe Coding, the AI phenomenon that makes it easier to code a working app than it is to read that confusing parking sign with 58 words on it.
So is Vibe Coding the future of work? Or is it a walking enterprise liability waiting to explode?
(TBH, it’s prolly a bit of both. But we’ll help you avoid the latter and focus on the former.)
Almost half of all code written globally is now AI-generated. Andrej Karpathy coined "vibe coding" in February 2025, then declared it passé barely a year later because the tools evolved THAT fast.
This isn't developer territory anymore.
Cursor hit $2 billion in annual recurring revenue by February. OpenAI's Codex has two million weekly active users. Google AI Studio ships full-stack apps for free. Microsoft is bringing Copilot Cowork to the average knowledge worker.
Sixty-three percent of vibe coding users have never written a line of code. That means your finance and ops teams are prolly building tools that touch your data. Whether you've approved it or not.
Atonom's head of finance built a CRM on Lovable in three hours and replaced their $40,000 Salesforce contract. Zendesk cut prototype timelines from six weeks to three hours.
This wave is already inside your org.
Try This
Start with a quick audit: send an internal survey asking which AI tools your team uses to build workflows, apps, or automations. No judgment. Just raw intel.
What you find will prolly genuinely surprise you.
Designate one person per department as your AI builder liaison. Their job: surface what's being built before it hits production. About 30 minutes to set up. Could save you from finding out about a vibe-coded tool the same week it exposes your entire customer database.
2. The 90-Day Black Box Is Wrecking Startups ⚡
Engineers have a name for what happens 90 days into an AI-built app. They call it the three-month black box effect.
The human who built it forgot how it works. The AI lost the context. Infrastructure changed. Someone wants an update.
A Stack Overflow study found developer confidence in AI-generated code dropped from 43% to 29% in a few months. AppSec Santa's 2026 study tested 534 AI code samples and found about 25% had confirmed security flaws. One in four.
Moltbook, an AI-built social network for AI agents, exposed 1.5 million API authentication tokens from a single database misconfiguration. Security researchers found the hole in minutes.
Studies show 8,000 startups now need full or partial code rebuilds, at costs ranging from $50,000 to a couple million dollars. That's the rescue engineering economy.
Peter Steinberger, creator of OpenClaw, ships code without reading it. Wild? Sure. Common? Absolutely.
Try This
Pick one internal AI-built tool and block 30 minutes this week to audit it. Ask the room: does anyone here actually know how this thing works?
If the answer is no, that tool needs documentation before it touches anything sensitive.
Start a living doc. One row per tool: what it does, who built it, what data it touches, when it was last reviewed. Do it before Deborah in finance retires to Florida and takes all the institutional knowledge with her.
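If it helps to picture it, that living doc can be as simple as one table. The tools and names below are made-up placeholders, not recommendations:

```markdown
| Tool                  | What it does          | Who built it     | Data it touches       | Last reviewed |
|-----------------------|-----------------------|------------------|-----------------------|---------------|
| InvoiceBot (example)  | Auto-drafts invoices  | Deborah, Finance | Customer billing data | 2026-01-15    |
| LeadFinder (example)  | Pulls prospect lists  | Sam, Ops         | CRM contacts          | Never         |
```

Any row that says "Never" in the last column is your first audit candidate.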
3. Governance Is Now the Actual Competitive Moat 🚀
Companies riding this wave without governance are gonna pay in breach costs and rebuild bills. Companies treating AI-assisted development like real engineering, with human oversight, documentation, and security checkpoints, are the ones capturing actual value.
This isn't optional anymore. But how your org governs it is entirely your call.
Lovable became the first vibe coding platform to add built-in penetration testing because security is table stakes now, not a feature. And at Anthropic, engineers reportedly don't even write code by hand anymore. The people building Claude Code are actively using Claude Code. Let that sink in.
Vibe coding is turning into vibe working. Nontechnical people are gonna build software to replace pieces of your enterprise stack, whether you've got a policy or not.
Companies with governance are gonna win this.
Companies without it are building a very expensive house on paper floors.
Try This
Pull together a one-page AI coding policy this week. A Google Doc is fine. Three things: what tools are sanctioned, what data is off-limits inside those tools, and who reviews anything before it touches production.
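A minimal skeleton for that one-pager, with placeholder tools and reviewers you'd swap for your own, could look like this:

```markdown
# AI Coding Policy (v0.1)

## Sanctioned tools
- Cursor, Lovable, GitHub Copilot (example list; yours will differ)

## Off-limits data
- Customer PII, credentials and API keys, anything under NDA

## Review gate
- Nothing touches production without sign-off from [your designated reviewer]
```

Keep it to one page on purpose: if nobody reads the policy, it governs nothing.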
Then find one person already building with these tools and make them your internal vibe coding champion. Give them a dedicated channel to share what's working and what ain't.
Governance doesn't kill innovation. It just makes sure the house you're building can actually hold furniture.