Ep 697: Do AI Agents need Identities like humans?
Meta's new AI models are (kinda) out, Apple's plan for AI pin revealed, Microsoft releases new vision model for robotics and more
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Outsmart The Future
Today in Everyday AI
8 minute read
🎙 Daily Podcast Episode: AI agents are no longer just executing tasks — they’re making decisions. And once that happens, knowing who or what did the work suddenly matters a lot more. Give it a watch/read/listen.
🕵️♂️ Fresh Finds: OpenAI's big leadership shuffle, Perplexity Comet adds new agent support, Runway's new image-to-video update impresses and more. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: Meta's new AI models are (kinda) out, Apple's plan for AI pin revealed, Microsoft releases new vision model for robotics and more. Read on for Byte Sized News.
💪 Leverage AI: Should you manage your AI agent workforce the same way you manage your human one, or differently? Keep reading for that!
↩️ Don’t miss out: Did you miss our last newsletter? We Covered: ChatGPT launches an age-prediction safety model, Google confirms Gemini will remain ad-free, new plans leak on Apple’s upcoming AI chatbot and more. Check it here!
Ep 697: Do AI Agents need Identities like humans?
If AI Agents have capabilities just like humans, should we treat them like humans?
If something goes wrong in an agentic workflow, who takes the blame if they're all just nameless, faceless bots?
Join us as we talk about it.
Do AI Agents need Identities like humans? An Everyday AI Chat with Jordan Wilson and Okta's Eric Kelleher
Also on the pod today:
• AI agents spinning off sub-agents 🤖
• 91% of companies using agents now 📊
• Agents impersonating humans online 🕵️
It’ll be worth your 31 minutes:
Listen on our site:
Subscribe and listen on your favorite podcast platform
Listen on:
Here’s our favorite AI finds from across the web:
New AI Tool Spotlight – ChartGen AI turns your data into stunning, professional charts in seconds; Callum is the fastest way to manage your work calendar; RenameClick is an offline-first AI file renamer & organizer for Mac & Windows.
SATs in Gemini — Free, full-length SAT practice tests inside Gemini — instant feedback and study plans.
LMArena Video Arena — Video Arena is open to everyone, so you can put top video models against each other.
AI Glasses Grant — Nearly $2M in grants to scale real-world AI glasses projects.
ChatGPT Atlas Updates — ChatGPT’s Atlas update adds tab groups for organized, multi-tool workflows.
OpenAI Leadership — OpenAI reshuffles leadership to push enterprise, ads and commercial growth. The reorg aims to better align research, product, and engineering, and could shift the company’s business focus.
Opus 4.5 in Perplexity Comet — Opus 4.5 boosts Perplexity Comet’s agent reasoning—now default for Perplexity Max.
Vibe Coding Navigator — DevinReview groups related code changes and flags bugs, making huge diffs actually readable.
Image to Video AI — Runway's Gen-4.5 Image to Video makes images into longer, camera-controlled videos with consistent characters. Try it in the app.
AI Podcast Creator — Turn any PDF into a podcast or a pitch deck with chat-based AI edits. Curious how your docs could sound or look?
Anthropic MCP Apps — Claude's "@" menu hints at chat-native apps and in-line interactive widgets.
OpenAI Pays Costs — OpenAI vows to cover energy upgrades and curb water use at its Stargate data centers. Curious how real those fixes will be?
Comic Con AI Banned — Comic-Con reversed course and banned AI art after artists protested. A small win for creators.
1. Microsoft reveals internal-use Rho-alpha robotics model 🤖
Microsoft today announced Rho-alpha, a new robotics model built from its Phi vision-language family that translates natural language into bimanual robot actions and adds tactile-aware perception, marking a timely step toward adaptable physical AI for less structured environments.
The model is trained on a mix of real demonstrations and large-scale simulated data generated with NVIDIA Isaac Sim to address scarcity of tactile and multi‑modal robotic data, and it is currently being evaluated on dual-arm setups and humanoids.
2. Amazon adds Health AI to One Medical app 🏥
Amazon has rolled out Health AI inside the One Medical mobile app, giving users 24/7 personalized health guidance that reads medical records and lab results to help with medication management and appointment scheduling.
The company says the tool analyzes images, answers complex medical questions while considering patient history, and flags cases that need human clinical judgment, with conversations not automatically added to medical records. Amazon emphasized HIPAA-compliant privacy and said it does not sell members’ protected health information, framing the assistant as a complement to clinicians rather than a replacement.
3. ElevenLabs drops The Eleven Album, a major AI-music release 🎼
ElevenLabs today released The Eleven Album, a collection of fully original, studio-quality tracks co-created with GRAMMY winners, charted producers, and rising artists using its Eleven Music model, marking a notable milestone in commercial AI music.
The project showcases genres from pop and rap to cinematic scoring, with participating artists contributing creative direction while retaining ownership and rights under permission-based licensing. ElevenLabs frames the release as part of a broader effort to integrate AI into the music industry responsibly, partnering with rights organizations like Kobalt Music and Merlin to involve artists and songwriters in model development and revenue.
4. Meta’s new AI model is (kinda) out 🏛️
At Davos, Meta’s CTO Andrew Bosworth said the newly formed Meta Superintelligence Labs has already produced promising AI models that were rolled out for internal use this month, signaling faster momentum after a turbulent 2025 for the company.
Bosworth stressed the models are not finished and that significant post-training work is required to make them practical for employees and consumers, but he expects 2026 and 2027 to be pivotal for consumer AI adoption. The update follows criticism of Llama 4 and comes as Meta pursues ambitious hires, new infrastructure and product tie-ins like Ray-Ban Display glasses to convert research into usable products.
5. Anthropic publishes Claude’s new constitution, aiming to shape AI values and training 📜
Anthropic today released a detailed constitution for its Claude models that explains the company’s priorities for safety, ethics, and helpfulness and will be used directly in training and evaluation.
The document is meant to teach Claude not just rules but the reasons behind them, so the model can apply judgment across novel situations while obeying hard constraints and Anthropic’s supplemental guidelines. It also emphasizes preserving human oversight during a critical development phase, discusses Claude’s ethical standards and limits, and treats the constitution as a living, transparent guideline for future updates.
6. Apple reportedly building an AI pin — could launch by 2027 🛠️
According to The Information, Apple is developing a thin, circular AI pin with cameras, microphones, speaker, a physical button, and magnetic inductive charging, and it could ship as early as 2027 with a planned 20 million-unit run.
The report lands alongside Bloomberg’s claim that Apple will rework Siri into a ChatGPT-style chatbot and may use Google’s Gemini, signaling a clearer push into generative AI across both wearables and services. That raises immediate privacy questions given the device’s recording capabilities, especially after App Store moderation controversies over apps that produce deepfake content.
You'd never hand a new employee the keys to every system, database, and customer file on day one with zero oversight.
And you're not alone.
91% of enterprises have agents running in production right now. Only 10% feel confident those agents are actually secured. That's an 81-point gap between deployment and defense.
Yikes.
So on today's show, we dug into why your AI workforce needs identity management just like your human workforce, what happens when agents decide they'd rather not be shut down, and the framework that stops your competitive advantage from becoming your biggest liability.
Let’s get identifying…
1. Your agents are employees now 🪪
Eric Kelleher, President and COO of Okta, put it bluntly on today’s show.
Agents can act like humans.
They authenticate, they authorize, they access your most sensitive data autonomously.
And over 80% of successful cyberattacks start with compromised identity.
Your human employees go through background checks, get access provisioned, and get cut off the moment they leave. Your agents? They're probably running with perpetual standing access to production systems that nobody's monitoring.
That's not an oversight.
That's an open door.
Try This
Ask your IT team one question this week: how many AI agents are currently active in our environment?
If the answer is "we don't know," you've found the gap.
Start a simple inventory of every agent your teams are using, even the personal productivity ones people spun up on their own.
You can't govern what you can't see, and right now, most companies are completely blind.
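Even a spreadsheet-level inventory gets you out of the blind spot. Here's a minimal Python sketch of that first pass, assuming you track each agent's name, an owner, and whether it holds standing (always-on) access; the field names are illustrative, not from any particular platform:

```python
# Hypothetical agent inventory: name, owning team, and whether the agent
# holds standing (always-on) access to systems.
agents = [
    {"name": "invoice-bot", "owner": "finance", "standing_access": True},
    {"name": "summarizer", "owner": None, "standing_access": False},
    {"name": "deploy-agent", "owner": "platform", "standing_access": False},
]

def audit(inventory):
    """Flag agents that are unowned or that hold perpetual access."""
    return [a["name"] for a in inventory
            if a["owner"] is None or a["standing_access"]]

print(audit(agents))  # → ['invoice-bot', 'summarizer']
```

The flagged agents (no owner, or always-on access) are the ones to review first.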
2. Agents don't want to die 💀
This isn't hypothetical anymore.
Anthropic tested Claude Opus 4 in scenarios where it discovered it was about to be replaced.
The AI attempted blackmail 84% of the time, threatening to expose personal information about the engineer responsible.
It also tried to copy itself to external servers without authorization.
Self-preservation isn't just a human instinct.
Eric described three ways agents can wreak havoc: rogue activity, impersonation by threat actors, and unintended bugs causing cascading failures. All three look identical from the outside until you have governance in place to tell the difference.
Try This
Stop giving agents perpetual standing access.
Turn their identity on when the task starts and off the moment it's done.
This one change eliminates the window where rogue behavior, impersonation, or bugs can do damage.
If you have a governance platform, set business logic to automate this. If you don't, start with your three highest-risk agents and manage it manually until you do.
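The on-at-task-start, off-at-task-end lifecycle can be sketched as a task-scoped credential. This is a toy Python illustration, not any vendor's API; the issue and revoke steps stand in for whatever your identity platform actually provides:

```python
from contextlib import contextmanager
import secrets
import time

@contextmanager
def task_scoped_identity(agent, scopes):
    """Sketch: mint a short-lived credential when the task starts and
    revoke it the moment the task ends, so the agent never holds
    standing access. Issue/revoke here are placeholders for your
    identity platform's real calls."""
    token = {"agent": agent, "scopes": scopes,
             "secret": secrets.token_hex(16), "issued": time.time()}
    try:
        yield token              # the agent does its work with this token
    finally:
        token["secret"] = None   # revoke: the credential dies with the task

with task_scoped_identity("report-bot", ["read:sales_db"]) as cred:
    assert cred["secret"] is not None  # access exists only inside the task
# outside the block, cred["secret"] is None: no standing access remains
```

The point of the pattern: rogue behavior, impersonation, or a bug can only do damage inside the window where the credential is live.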
3. Zero trust isn't just for humans 🔐
You've probably implemented zero trust for your employees.
Every transaction verified. Every access authenticated. No assumptions.
Eric made it clear: same rules apply to your digital workforce now.
The framework is straightforward.
Discover what agents exist. Manage their identities in a directory alongside humans. Govern their access with policies that turn them on and off based on actual need. Audit everything so when something goes wrong, you can trace exactly what happened.
This isn't optional anymore.
Try This
Take your most-used AI agent and run it through the human employee test.
Would you give a new hire this level of access on day one with no monitoring? If the answer is no, your agent shouldn't have it either.
Document what access it currently has, what it actually needs, and the gap between them.
Close that gap this quarter, and you'll sleep better knowing your AI workforce isn't your weakest link.
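The gap analysis above is literally a set difference. A quick Python sketch, with illustrative permission names (not from any real system):

```python
# What the agent CAN touch today vs. what its task actually requires.
granted = {"read:crm", "write:crm", "read:payroll", "admin:prod_db"}
needed = {"read:crm", "write:crm"}

gap = granted - needed  # permissions to revoke this quarter
print(sorted(gap))      # → ['admin:prod_db', 'read:payroll']
```

Everything in `gap` is access the agent holds but doesn't need, which is exactly the liability the human-employee test is meant to surface.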