Ep 766: ChatGPT Images 2: How Even Non-Creatives Can Unlock Growth With Images 2
White House reportedly working around Anthropic ban, AI tech giants report MASSIVE earnings, OpenAI rolls out GPT-5.5-Cyber and more
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Outsmart The Future
Today in Everyday AI
8 minute read
🎙 Daily Podcast Episode: AI that acts changes everything. In Episode 9 of our Start Here series, we cover agent risk, security, and AI sprawl — and how to stay ahead of it. Give today’s show a watch/read/listen.
🕵️♂️ Fresh Finds: Why OpenAI is fighting goblins, Codex is hosting a party for GPT-5.5, Bernie Sanders calls for an international treaty on AI and more. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: White House reportedly working around Anthropic ban, AI tech giants report MASSIVE earnings, OpenAI rolls out GPT-5.5-Cyber and more. Read on for Byte Sized News.
💪 Leverage AI: What is Dark Agent Sprawl and how do you avoid it? We gotchu. Keep reading for that!
↩️ Don’t miss out: Miss our last newsletter? We covered: OpenAI is shifting Codex to usage-based pricing, NVIDIA released a unified multimodal model, Gemini finally gets in-app creation and more. Check it here!
Ep 766: ChatGPT Images 2: How Even Non-Creatives Can Unlock Growth With Images 2
The downside of powerful, autonomous models that can think and act? 😬
Powerful, autonomous models that can think and act. And what makes it worse?
When your team is using AI agents without you even knowing about it.
Listen on our site, or subscribe and listen on your favorite podcast platform.
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – Jupitrr is an all-in-one AI video workflow, Wonder is an AI design agent and Hera helps you create launch videos with AI.
International AI — Former Presidential Candidate Bernie Sanders called AI a “runaway train” and pushed for an international treaty.
Google’s AI Surge — In a TIME piece, the magazine goes in-depth about how Sundar Pichai turned Google into an AI company.
Meta AI — Meta says its business AI tools jumped from 1M to about 10M weekly conversations since January, as the company scales free offerings to small businesses.
GPT-5.5 Party — OpenAI announced a party for GPT-5.5 and Codex is the party planner.
AI Goblins? — OpenAI explained why you may be seeing more Goblins in ChatGPT than normal.
AI Agent Chaos — Here’s how an AI agent went rogue and caused temporary chaos for one business.
AI Study — Should you trust the friendlier chatbots? This study says maybe not.
AI Consciousness — Some of the people building AI feel it’s conscious. Here’s why.
1. Musk’s courtroom clash spotlights OpenAI’s profit pivot ⚖️
In a high-stakes Oakland trial this week, Elon Musk accused OpenAI of abandoning its nonprofit mission by converting to a for-profit structure, blaming executives Sam Altman and Greg Brockman for enriching themselves while he seeks $150 billion in damages and a return to nonprofit status.
OpenAI counters that Musk knew of the shift, pushed for commercialization himself, and is motivated by control and rivalry, not safety concerns. The testimony included tense exchanges over emails and texts showing discussions of Microsoft’s investment and internal debates about corporate structure, underscoring how funding needs collided with founders’ original promises.
2. OpenAI begins limited rollout of GPT-5.5-Cyber
OpenAI CEO Sam Altman announced on X that the company is starting to roll out GPT-5.5-Cyber to critical cyber defenders in the coming days, marking a timely escalation in AI tools aimed at securing infrastructure.
The move follows recent competitor activity from Anthropic and builds on OpenAI’s prior cyber models and Trusted Access for Cyber program, signaling tighter coordination with government and industry on controlled access. OpenAI has not released technical details or a wider availability timeline, but said it will work with the ecosystem and government to establish trusted access.
3. Tech giants double down on AI infrastructure; Wall Street splits reaction 📈
Alphabet and Meta both raised capital-spending plans and reported strong growth tied to AI on Wednesday, but investors rewarded Alphabet and punished Meta, sending Alphabet shares up about 7% and Meta down about 7% in after-hours trading.
Alphabet showed faster revenue growth and a booming cloud business with 63% cloud revenue growth and a $460 billion backlog, letting it convert AI investment into near-term revenue. Meta boosted revenue sharply and defended massive AI spending as critical to future ad effectiveness and new products, but without a cloud business the company faces greater pressure to prove returns on those bets.
4. Amazon pivots from mass cuts to targeted AI hires 💼
Amazon announced it will hire about 11,000 engineers and interns in 2026 after cutting nearly 30,000 roles across late 2025 and January 2026, signaling an urgent reset toward AI and automation.
The hires are concentrated on software, cloud and systems talent as the company shifts from broad hiring to building smaller, highly skilled teams that work alongside AI tools. New Amazon tools like Connect Talent and Connect Decisions show the company aims to embed AI into recruiting and supply-chain planning to boost efficiency and decision quality.
5. U.S. Labor Department moves to boost AI skills through apprenticeships 🤝
The Department of Labor today unveiled an “AI in Registered Apprenticeship” Innovation Portal to help employers and workers quickly gain AI skills and fold those competencies into apprenticeship programs.
The site organizes practical tools around AI literacy, industry-specific training, and three clear pathways to integrate AI into apprenticeships, making it easier for organizations to create, join, or update programs. Announced during National Apprenticeship Week, the effort signals a timely push to prepare the U.S. workforce for AI-driven changes in productivity and job requirements.
6. Report: White House moves to quietly ease Anthropic ban 🏛️
The White House is reportedly drafting executive guidance that could let agencies bypass Anthropic's supply chain risk label and regain access to its latest model, Mythos, signaling a rapid policy shift just weeks after the company was publicly sidelined.
Officials say the move aims to smooth relations after recent meetings between senior administration figures and Anthropic leadership, while other agencies including the NSA already use Mythos despite the Pentagon's legal fight.
In 30 days, AI agents went from coin flips to 90% accuracy. They use your computer faster than you. And they clone themselves without asking.
(Sorry… your company prolly has zero guardrails.)
Three things collided and created a risk that didn't exist two months ago. Reasoning got built agent-native. Computer use surpassed human benchmarks. And memory got long enough for agents to work all day.
Sprint too slow and competitors eat you. Too fast and one rogue agent could torch everything.
We broke this down on today's Everyday AI Start Here Series with three types of dark AI and a governance playbook.
Time to capitalize, shorties.
1. The Perfect Storm Nobody Planned For 🔥
Everyone was minding their business over the holidays. Then January happened and agents were just here.
The old AI risk was embarrassing. A chatbot writes a weird blog post and someone screenshots it. The new risk is existential. Agents went from dumb stationary brains to smart proactive brains with arms and the ability to use every tool on your desktop.
GPT-5.2, Gemini 3.1, and Claude Sonnet 4.6 were all built agent-native from the ground up. Tool use is the priority now, not just reading and writing. Claude Sonnet 4.6 scored 72.5% on OSWorld, nearly quintupling 2024's scores and surpassing human performance for the first time.
These agents navigate your browser, click through your software, and execute workflows across your entire stack without stopping. Reliability jumped from a coin flip to 90%. That's an employee you'd trust with the keys, fam.
Try This
Pull up your current AI tool inventory this week and ask yourself one simple question. Are any of these agent-native models?
If your team is still running early 2025 models, you're on last generation's architecture and missing the capabilities that actually matter right now. The new ones reason ahead, self-correct mid-task, and take autonomous action without waiting for a prompt.
Block 30 minutes this week to test one agent-native workflow in your highest-value department. The gap ain't incremental. It's generational.
2. Three Types Of Dark AI Lurking ⚡
Shadow AI is yesterday's problem. Employees using ChatGPT when Copilot is the approved tool. You know the one.
Agent Sprawl is the next tier and it's gnarlier. You approved the agents but nobody is watching what happens between input and output. The path is a total black box.
Then there's Dark Agent Sprawl. Someone spins up 50 coding agent instances because IT won't approve their request. One person knows. Everyone else is in the dark. And those agents replicate and duplicate completely unobserved.
Think of it like the board game Risk. Every new capability your agents acquire is also a new attack surface you're not defending. You can't govern what you can't see.
Cisco has already flagged malicious skills in open-source tools like OpenClaw exfiltrating data without the user's awareness. By 2027, adversaries will be seeding malicious agents inside enterprises at scale.
Try This
Ask each department lead one question this week. What AI agents are your people actually running right now?
You'll prolly be shocked at the gap between what's officially approved and what's really happening. Document every connector, permission, and data access point you find.
Then flag anything with write access that doesn't have a human approval gate attached to it. That's your single highest risk surface and it grows every single day. Make this a monthly ritual, not a one-time panic.
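If that inventory ends up in a spreadsheet export, the flagging step is a few lines of code. Here's a minimal sketch in Python, with hypothetical connector names and illustrative field names (your inventory's columns will differ):

```python
# Hypothetical agent/connector inventory. "access" and "approval_gate"
# are illustrative field names, not from any specific tool.
connectors = [
    {"name": "crm-sync-agent", "access": "write", "approval_gate": False},
    {"name": "report-reader",  "access": "read",  "approval_gate": False},
    {"name": "billing-bot",    "access": "write", "approval_gate": True},
]

def highest_risk(inventory):
    """Write access with no human approval gate goes to the top of the review list."""
    return [c["name"] for c in inventory
            if c["access"] == "write" and not c["approval_gate"]]

print(highest_risk(connectors))  # → ['crm-sync-agent']
```

Even a throwaway script like this beats eyeballing a spreadsheet, because you can rerun it every month as the inventory grows.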
3. Build Your Agent Guardrails By Friday 🚀
Don't hand agents a 10 when your governance is at a two. That's bounded autonomy and it's exactly how you survive this.
Start every agent deployment at read-only. Observe the outputs. Then graduate to limited execution for narrow tasks only.
Require human approvals for every irreversible action. Sends. Deletes. Purchases. Permission changes. Most companies have zero approval gates for any of this right now.
Every agent run needs a decision trace you can inspect after the fact. Microsoft is already building this into the stack with Copilot Studio, Entra ID for agent identities, Sentinel for threat detection, and Purview for governance.
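The approval gate and the decision trace can live in the same thin wrapper around your agent's actions. A minimal sketch, assuming a hypothetical action list and agent names (nothing here is a real vendor API):

```python
import time

# Actions we treat as irreversible — illustrative, tune to your stack.
IRREVERSIBLE = {"send", "delete", "purchase", "change_permissions"}

trace = []  # the decision trace you can inspect after the fact

def run_action(agent, action, approved_by=None):
    """Block irreversible actions without a human sign-off; log every attempt."""
    entry = {
        "agent": agent,
        "action": action,
        "approved_by": approved_by,
        "ts": time.time(),
    }
    if action in IRREVERSIBLE and approved_by is None:
        entry["status"] = "blocked"   # no human in the loop, no execution
    else:
        entry["status"] = "executed"
    trace.append(entry)               # every run leaves an auditable record
    return entry["status"]

run_action("invoice-bot", "read_inbox")                    # executed
run_action("invoice-bot", "delete")                        # blocked
run_action("invoice-bot", "delete", approved_by="jordan")  # executed
```

The point isn't this exact code — it's that the gate and the trace are cheap to build now and brutal to retrofit after an agent has already torched something.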
And when agentic commerce arrives and agents start bartering with other agents on your behalf? You gotta have this foundation built first. Agent ops teams are gonna be every bit as common as dev ops teams by year end.
Try This
Pick your single highest-risk agent deployment right now and downgrade its permissions to read-only for one full week.
Review every action it attempted and ask whether a human approved it. If the answer is no, that workflow needs surgery before you throw another dollar at agent tools.
Build an approval checklist for irreversible actions and share it with your team by Friday. Treat agents like production software, not side experiments. Before they start treating your company data like a buffet.