Data Dreams & Digital Delusions: The role of AI in health tech
Google launches Gemini for Government, Meta pauses AI hiring spree, China calls out NVIDIA's AI chips and more!
Outsmart The Future
Today in Everyday AI
6 minute read
Daily Podcast Episode: AI data centers are booming, but will bigger investments mean safer health tech? Explore AI hallucinations, the impact of generative AI in healthcare, and the truth behind data quality. Give it a listen.
Fresh Finds: Cohere unveils an enterprise reasoning model, Google gets called out for misleading claims and Amazon's future AI agent plans. Read on for Fresh Finds.
Byte Sized Daily AI News: Google launches Gemini for Government, Meta pauses AI hiring spree and China calls out NVIDIA's AI chips. For that and more, read on for Byte Sized News.
Learn & Leveraging AI: What impact will AI data centers have on health tech? We break it down for you. Keep reading for that!
Don't miss out: Did you miss our last newsletter? We talked about Grok publishing user convos, Meta's stock dropping, Google unveiling nuclear energy plans and more. Check it here!
Data Dreams & Digital Delusions: The role of AI in health tech
Will more data solve AI hallucinations?
Maybe.
But what about the industries that have the most to gain (and lose) from AI transformation like healthcare?
Join us as we dive deep into the role of data and transformation and what obstacles the healthcare industry still has to clear to turn their digital delusions into data dreams.
Also on the pod today:
• AI Hallucinations and Patient Safety Risks
• Data Quality, Cleanliness, and Health Tech Outcomes
• Importance of RAG (Retrieval Augmented Generation)
It'll be worth your 26 minutes:
Subscribe and listen on your favorite podcast platform.
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – Polymet is an AI product designer, Magic Inspector is an AI web-test automation platform and Nuvio provides AI-powered financial management.
Cohere – Cohere has unveiled Command A Reasoning, its most advanced model for enterprises.
Introducing Command A Reasoning, our most advanced model for enterprise reasoning tasks.
– cohere (@cohere)
2:52 PM • Aug 21, 2025
Google – Google says that a typical AI text prompt only uses 5 drops of water – experts say it's misleading.
Amazon – Amazon's AGI Labs chief spoke on the company's plans for AI agents.
Trending in AI – An AI writing scandal has affected multiple media outlets.
AI in Government – The U.S. Federal Reserve is telling banks to embrace AI or get left behind.
AI Startups – The NVIDIA- and Bill Gates-backed robotics startup Field AI has hit a $2 billion valuation after a recent raise.
1. Google Launches "Gemini for Government" with GSA Partnership
Google today introduced "Gemini for Government," a FedRAMP High-authorized AI platform offered through the GSA's OneGov program that bundles Gemini models, Workspace discounts, enterprise search, NotebookLM, image/video generation, and pre-built AI agents for U.S. agencies at a very low per-agency price.
According to Google and the GSA, the package emphasizes agency choice and control (agent galleries, connectors to enterprise data, Vertex AI integration, and user-access controls) while promising built-in advanced security and compliance.
2. Meta Pauses AI Hiring Amid Big Reorganization
Meta has frozen hiring and internal transfers in its AI division as it restructures into four teams, including a superintelligence-focused "TBD Lab," after a burst of aggressive recruiting, sources tell the Wall Street Journal. The pause, which requires Alexandr Wang's sign-off for exceptions, reflects a push to rein in spiraling stock-based compensation and align headcount with budgeting and planning, according to a Meta spokesperson and WSJ reporting.
For AI practitioners and startup founders, the move signals a cooling in one of the market's most frenzied talent pipelines and suggests fewer rapid reverse-acquihire exits and blockbuster offers in the immediate term.
3. China Curbs NVIDIA H20 Sales After U.S. Official's Remarks
According to the Financial Times, Beijing quietly discouraged local firms from buying NVIDIA's China-tailored H20 AI chip after U.S. Commerce Secretary Howard Lutnick said American policy intentionally keeps top chips out of China – comments senior Chinese leaders found "insulting." The guidance, issued by several Chinese regulators, undermines the recent U.S.-China trade thaw that allowed H20 sales to resume and risks a meaningful hit to NVIDIA, given that China accounts for at least ~15% of its revenue and the H20 is popular among local developers.
Major domestic cloud and AI players (Alibaba, Baidu, ByteDance) have pushed back, preferring NVIDIA's performance to nascent local alternatives, but some startups (per the FT) already face setbacks training models on Huawei hardware.
4. Google Expands AI Mode Globally with New Personalization Tools
Google is rolling AI Mode out to English users in 180 more countries, moving the experimental conversational Search feature beyond its prior U.S., U.K., and India testbeds. The company is also adding agentic capabilities (starting with dinner reservations) that can search live availability across booking platforms for Google AI Ultra subscribers, and plans to add local services and event tickets later.
Personalization for U.S. experiment users will tailor results (first for dining) using past searches, Maps activity, and prior AI Mode chats, though users can change privacy settings in their Google Account.
5. Anthropic Bundles Claude Code into Enterprise Plans
Anthropic now offers Claude Code inside Claude for Enterprise, letting businesses add the popular command-line coding tool to their enterprise suites with richer admin controls and scalable spending limits, aiming to fix earlier user pain from unexpected caps.
The move matches similar enterprise launches from Google and GitHub and unlocks tighter integrations between Claude Code, the Claude.ai chatbot, and internal data sources for things like automated customer-feedback synthesis.
6. Microsoft Warns of Rising "AI Psychosis" as Chatbots Feel Human
Mustafa Suleyman, Microsoft's head of AI, warned this week that increasingly persuasive chatbots, though not conscious, are causing "AI psychosis," where users form delusions about or relationships with tools like ChatGPT, Claude and Grok. Personal accounts and a Bangor University study cited by the BBC show a growing number of people treating conversational AI as real, sometimes with serious mental-health and decision-making consequences.
Experts urge clearer guardrails and that companies stop implying consciousness, while clinicians warn heavy AI use could reshape thinking the way ultra-processed foods changed diets.
How You Can Leverage:
The healthcare industry just lost more than 230,000 workers while tech companies dump trillions of dollars into data centers.
Could those two things be related?
Now that LLMs are becoming increasingly more reliable and hallucinations are going down, might AI finally get its shining moment in the health tech space?
Or are the stakes for AI too high in some medical spaces, regardless of how much we invest in better, cleaner data?
Data Dream?
Or digital delusion?
That's what we set out to answer on today's episode of Everyday AI.
Health tech executive Smriti Kirubanandan explained why this massive investment gap can create a dangerous illusion that more data automatically equals better patient outcomes, when the reality is far more complex and potentially deadly.
Spoiler alert: it doesn't work that way.
Make sure to check today's full episode, but here's the gist of what you need to know.
1 – Why transparency beats honesty every time
Here's a story Smriti shared from a wine conversation that'll change how you think about AI accountability.
What's the difference between honesty and transparency?
Honesty means telling someone you had dinner with friends.
Transparency means specifying that you ate at Javier's restaurant with Jordan, Eve, and Alice for three hours discussing quarterly projections.
Most healthcare executives stay honest about their AI investments, but transparency about data pipelines?
Crickets, y'all.
Ask your team where training data originates or who controls modification permissions across AI systems.
You'll get blank stares. That's gotta change.
Try This:
Create an AI transparency audit this week mapping every data source, update frequency, and access permission for each system.
Present it to your executive team and watch their faces when they realize nobody knows where training data comes from.
Most healthcare leaders discover zero visibility into AI data sources, creating massive regulatory risks that could shut down programs overnight.
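To make the audit concrete, here's a minimal sketch of what such an inventory could look like in code. Everything in it is hypothetical for illustration (the record fields, the system names, and the `audit_gaps` helper are assumptions, not a real tool):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of the transparency audit for a deployed AI system."""
    name: str
    data_sources: list = field(default_factory=list)        # where training/reference data originates
    update_frequency: str = "unknown"                        # e.g. "daily", "quarterly", "unknown"
    modification_owners: list = field(default_factory=list)  # who controls modification permissions

def audit_gaps(records):
    """Return the systems with zero visibility: missing sources, owners, or cadence."""
    gaps = []
    for r in records:
        if not r.data_sources or not r.modification_owners or r.update_frequency == "unknown":
            gaps.append(r.name)
    return gaps

inventory = [
    AISystemRecord("prior-auth-triage", ["claims_db_2019_2024"], "quarterly", ["clinical-data-team"]),
    AISystemRecord("radiology-summarizer"),  # nobody filled this in: the blank-stare case
]

print(audit_gaps(inventory))  # -> ['radiology-summarizer']
```

Even a spreadsheet with these three columns surfaces the same gaps; the point is that "unknown" anywhere in the table is itself the finding you present to the executive team.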
2 – RAG is your hallucination prevention system
Remember when everyone was obsessed with RAG 24 months ago?
Then people stopped talking about it, hoping bigger context windows would magically solve hallucination problems.
Bad move.
Smriti calls Retrieval Augmented Generation your essential checkpoint system.
RAG forces AI to verify responses against real-time databases instead of regurgitating outdated training data.
Here's the nightmare scenario: patient uploads MRI asking about cancer detection.
AI hallucinates and reports 80% cancer probability when the scan is completely clear. Wrong diagnosis devastates families and creates massive liability exposure.
Try This:
Implement RAG verification on prior authorization decisions and clinical documentation this week.
Configure systems to cross-reference three verified medical databases before any patient-facing recommendations. Start with low-risk cases, document every outcome, then expand gradually.
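As a rough illustration of that checkpoint idea, here's a toy sketch: a claim is only surfaced if enough independent verified sources agree, and everything else escalates to a human reviewer. The databases, claims, threshold, and function names are all made up for illustration, not a real clinical system:

```python
# A recommendation passes only when it is supported by at least
# REQUIRED_AGREEMENT independent verified sources; otherwise it is
# routed to human review instead of trusting the model alone.
REQUIRED_AGREEMENT = 3  # cross-reference three verified databases

def lookup(database, claim):
    """Stand-in retriever: True if this database supports the claim."""
    return claim in database

def verified_answer(claim, databases):
    support = sum(lookup(db, claim) for db in databases)
    if support >= REQUIRED_AGREEMENT:
        return {"status": "approved", "claim": claim, "sources": support}
    return {"status": "needs_human_review", "claim": claim, "sources": support}

# Toy "verified databases" modeled as sets of accepted statements.
db_a = {"drug X approved for indication Y"}
db_b = {"drug X approved for indication Y"}
db_c = {"drug X approved for indication Y"}

print(verified_answer("drug X approved for indication Y", [db_a, db_b, db_c])["status"])
# -> approved
print(verified_answer("80% cancer probability", [db_a, db_b, db_c])["status"])
# -> needs_human_review
```

The design choice that matters is the failure mode: an unsupported claim never reaches the patient-facing output, it defaults to escalation, which is what "start with low-risk cases and document every outcome" looks like in practice.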
3 – Going Slow to Go Fast Actually Works
Why does healthcare move slower than other industries on AI adoption?
Because it should. Itās life or death, not a productivity race.
Smriti nailed this: current AI adoption is like giving kids unlimited candy store access.
Too much sugar causes diabetes. The same principle applies to rushing AI deployment without proper safeguards.
Human lives depend on every algorithmic decision.
While other industries can move fast and break things, healthcare organizations that break things face lawsuits, regulatory shutdowns, and permanent patient trust damage.
Better data doesn't automatically mean safer outcomes if you're deploying recklessly.
Try This:
Establish a three-phase deployment protocol starting this month. Phase one uses synthetic data with zero patient risk. Phase two requires supervised real-data testing with clinical oversight.
Phase three enables gradual rollout with human verification at critical points. Takes longer initially, but prevents reputation-destroying failures that could end careers forever.