
Data Dreams & Digital Delusions: The role of AI in health tech

Google launches Gemini for Government, Meta pauses AI hiring spree, China calls out NVIDIA’s AI chips and more!

👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI

Outsmart The Future

Today in Everyday AI
6 minute read

🎙 Daily Podcast Episode: AI data centers are booming, but will bigger investments mean safer health tech? Explore AI hallucinations, the impact of generative AI in healthcare, and the truth behind data quality. Give it a listen.

🕵️‍♂️ Fresh Finds: Cohere unveils an enterprise reasoning model, Google gets called out for misleading water stats, and Amazon’s future AI agent plans. Read on for Fresh Finds.

🗞 Byte Sized Daily AI News: Google launches Gemini for Government, Meta pauses AI hiring spree and China calls out NVIDIA’s AI chips. For that and more, read on for Byte Sized News.

🧠 Learn & Leveraging AI: What impact will AI data centers have on health tech? We break it down for you. Keep reading for that!

↩️ Don’t miss out: Did you miss our last newsletter? We talked about Grok publishing user convos, Meta’s stock dropping, Google unveiling nuclear energy plans and more. Check it here!

Data Dreams & Digital Delusions: The role of AI in health tech 💡

Will more data solve AI hallucinations? 

Maybe. 

But what about the industries that have the most to gain (and lose) from AI transformation like healthcare? 

Join us as we dive deep into the role of data and transformation, and what obstacles the healthcare industry still has to clear to turn its digital delusions into data dreams.

Also on the pod today:

• AI Hallucinations and Patient Safety Risks ⚠️
• Data Quality, Cleanliness, and Health Tech Outcomes 🏥
• Importance of RAG (Retrieval Augmented Generation) 🤔

It’ll be worth your 26 minutes:

Listen on our site:


Subscribe and listen on your favorite podcast platform


Here are our favorite AI finds from across the web:

New AI Tool Spotlight – Polymet is an AI product designer, Magic Inspector is an AI web test automation platform and Nuvio provides AI-powered financial management.

Cohere – Cohere has unveiled Command A Reasoning, its most advanced model for enterprises.

Google – Google says that a typical AI text prompt only uses 5 drops of water — experts say it’s misleading.

Amazon – Amazon’s AGI Labs chief spoke on the company’s plans for AI agents.

Trending in AI – An AI writing scandal has affected multiple media outlets.

AI in Government – The U.S. Federal Reserve is telling banks to embrace AI or get left behind.

AI Startups – NVIDIA- and Bill Gates-backed robotics startup Field AI has hit a $2 billion valuation after a recent raise.

1. Google Launches ‘Gemini for Government’ with GSA Partnership 🇺🇸

Google today introduced “Gemini for Government,” a FedRAMP High–authorized AI platform offered through the GSA’s OneGov program that bundles Gemini models, Workspace discounts, enterprise search, NotebookLM, image/video generation, and pre-built AI agents for U.S. agencies at a very low per-agency price.

According to Google and the GSA, the package emphasizes agency choice and control—agent galleries, connectors to enterprise data, Vertex AI integration, and user-access controls—while promising built‑in advanced security and compliance.

2. Meta Pauses AI Hiring Amid Big Reorganization ⏯️

Meta has frozen hiring and internal transfers in its AI division as it restructures into four teams — including a superintelligence-focused "TBD Lab" — after a burst of aggressive recruiting, sources tell the Wall Street Journal. The pause, which requires Alexandr Wang’s sign‑off for exceptions, reflects a push to rein in spiraling stock‑based compensation and align headcount with budgeting and planning, according to a Meta spokesperson and WSJ reporting.

For AI practitioners and startup founders, the move signals a cooling in one of the market’s most frenzied talent pipelines and suggests fewer rapid reverse‑acquihire exits and blockbuster offers in the immediate term.

3. China Curbs NVIDIA H20 Sales After U.S. Official’s Remarks 🤖

According to the Financial Times, Beijing quietly discouraged local firms from buying NVIDIA’s China-tailored H20 AI chip after U.S. Commerce Secretary Howard Lutnick said American policy intentionally keeps top chips out of China — comments senior Chinese leaders found “insulting.” The guidance, issued by several Chinese regulators, undermines the recent U.S.-China trade thaw that allowed H20 sales to resume and risks a meaningful hit to NVIDIA given China is at least ~15% of its revenue and H20’s popularity among local developers.

Major domestic cloud and AI players — Alibaba, Baidu, ByteDance — have pushed back, preferring NVIDIA’s performance to nascent local alternatives, but some startups (per FT) already face setbacks training models on Huawei hardware.

4. Google Expands AI Mode Globally with New Personalization Tools 🌎

Google is rolling AI Mode out to English users in 180 more countries, moving the experimental conversational Search feature beyond its prior U.S., U.K., and India testbeds. The company is also adding agentic capabilities (starting with dinner reservations) that can search live availability across booking platforms for Google AI Ultra subscribers, and plans to add local services and event tickets later.

Personalization for U.S. experiment users will tailor results—first for dining—using past searches, Maps activity, and prior AI Mode chats, though users can change privacy settings in their Google Account.

5. Anthropic Bundles Claude Code into Enterprise Plans 💡

Anthropic now offers Claude Code inside Claude for Enterprise, letting businesses add the popular command-line coding tool to their enterprise suites with richer admin controls and scalable spending limits, aiming to fix earlier user pain from unexpected caps.

The move matches similar enterprise launches from Google and GitHub and unlocks tighter integrations between Claude Code, the Claude.ai chatbot, and internal data sources for things like automated customer-feedback synthesis.

6. Microsoft Warns of Rising “AI Psychosis” as Chatbots Feel Human 🚨

Mustafa Suleyman, Microsoft’s head of AI, warned this week that increasingly persuasive chatbots — though not conscious — are causing “AI psychosis,” where users form delusions or relationships with tools like ChatGPT, Claude and Grok. Personal accounts and a Bangor University study cited by the BBC show a growing number of people treating conversational AI as real, sometimes with serious mental-health and decision-making consequences.

Experts urge clearer guardrails and that companies stop implying consciousness, while clinicians warn heavy AI use could reshape thinking the way ultra‑processed foods changed diets.

🦾 How You Can Leverage:

The healthcare industry just lost more than 230,000 workers while tech companies dump trillions of dollars into data centers. 

Could those two things be related? 

Now that LLMs are becoming increasingly reliable and hallucinations are going down, might AI finally get its shining moment in the health tech space?

Or are the stakes for AI too high in some medical spaces, regardless of how much we invest in better, cleaner data?

Data Dream? 

Or digital delusion? 

That’s what we set out to answer on today’s episode of Everyday AI. 

Health tech executive Smriti Kirubanandan explained why this massive investment gap can create a dangerous illusion that more data automatically equals better patient outcomes, when the reality is far more complex and potentially deadly.

Spoiler alert: it doesn't work that way.

Make sure to check today’s full episode, but here’s the gist of what you need to know. 

1 – Why transparency beats honesty every time 💪

Here's a story Smriti shared from a wine conversation that'll change how you think about AI accountability.

What's the difference between honesty and transparency?

Honesty means telling someone you had dinner with friends.

Transparency means specifying that you ate at Javier's restaurant with Jordan, Eve, and Alice for three hours discussing quarterly projections.

Most healthcare executives stay honest about their AI investments, but transparency about data pipelines?

Crickets, y’all.

Ask your team where training data originates or who controls modification permissions across AI systems.

You'll get blank stares. That’s gotta change. 

Try This: 

Create an AI transparency audit this week mapping every data source, update frequency, and access permission for each system.

Present it to your executive team and watch their faces when they realize nobody knows where training data comes from.

Most healthcare leaders discover zero visibility into AI data sources, creating massive regulatory risks that could shut down programs overnight.
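If you want a starting point for that audit, here’s a minimal sketch of what one row per AI system could look like — the class, field names, and example systems are all hypothetical illustrations, not anything from the episode:

```python
from dataclasses import dataclass

@dataclass
class AISystemAudit:
    """One row of an AI transparency audit: where a system's data comes from
    and who is allowed to change it."""
    system_name: str
    data_sources: list[str]        # where training / retrieval data originates
    update_frequency: str          # e.g. "nightly", "quarterly", "unknown"
    modification_owners: list[str] # who controls modification permissions

def flag_gaps(audits: list[AISystemAudit]) -> list[str]:
    """Return the systems with zero visibility: no known sources, no known
    owners, or an unknown update cadence."""
    return [
        a.system_name
        for a in audits
        if not a.data_sources
        or not a.modification_owners
        or a.update_frequency == "unknown"
    ]

# Hypothetical audit of two systems — the second is the blank-stare case.
audits = [
    AISystemAudit("prior-auth-model", ["claims-db"], "nightly", ["ml-platform-team"]),
    AISystemAudit("triage-chatbot", [], "unknown", []),
]
print(flag_gaps(audits))  # → ['triage-chatbot']
```

Even a spreadsheet works; the point is that every system gets a row, and every empty cell is a regulatory risk made visible.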

2 – RAG is your hallucination prevention system 🛑

Remember when everyone was obsessed with RAG 24 months ago?

Then people stopped talking about it, hoping bigger context windows would magically solve hallucination problems.

Bad move.

Smriti calls Retrieval Augmented Generation your essential checkpoint system.

RAG forces AI to verify responses against real-time databases instead of regurgitating outdated training data.

Here’s the nightmare scenario: a patient uploads an MRI asking about cancer detection.

The AI hallucinates and reports an 80% cancer probability when the scan is completely clear. A wrong diagnosis devastates families and creates massive liability exposure.

Try This: 

Implement RAG verification on prior authorization decisions and clinical documentation this week. 

Configure systems to cross-reference three verified medical databases before any patient-facing recommendations. Start with low-risk cases, document every outcome, then expand gradually.
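As a rough illustration of that cross-referencing step, here’s a sketch of a gate that only releases a patient-facing recommendation when every verified source corroborates it — the function, the three mock lookups, and the vote scheme are assumptions for illustration, not a real medical-database API:

```python
from typing import Callable, Optional

# Each lookup returns True (supports), False (contradicts), or None (no record).
Lookup = Callable[[str], Optional[bool]]

def verify_against_sources(claim: str, sources: list[Lookup]) -> dict:
    """Cross-reference a model's claim against verified databases before it
    can reach a patient. Anything short of unanimous support is held back."""
    votes = [lookup(claim) for lookup in sources]
    if any(v is False for v in votes):
        return {"release": False, "reason": "contradicted by a verified source"}
    if sum(v is True for v in votes) < len(sources):
        return {"release": False, "reason": "insufficient corroboration — route to human review"}
    return {"release": True, "reason": "corroborated by all sources"}

# Three mock lookups standing in for three verified medical databases.
db_a: Lookup = lambda claim: True
db_b: Lookup = lambda claim: True
db_c: Lookup = lambda claim: None  # no matching record → held for human review

result = verify_against_sources("scan indicates malignancy", [db_a, db_b, db_c])
print(result["release"])  # → False
```

Requiring unanimity is deliberately conservative — in a clinical setting, a missing record should mean a human looks at it, not that the model ships its guess.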

3 – Going slow to go fast actually works 🏥

Why does healthcare move slower than other industries on AI adoption?

Because it should. It’s life or death, not a productivity race. 

Smriti nailed this: current AI adoption is like giving kids unlimited candy store access.

Too much sugar causes diabetes. The same principle applies to rushing AI deployment without proper safeguards.

Human lives depend on every algorithmic decision.

While other industries can move fast and break things, healthcare organizations that break things face lawsuits, regulatory shutdowns, and permanent patient trust damage.

Better data doesn't automatically mean safer outcomes if you're deploying recklessly.

Try This: 

Establish a three-phase deployment protocol starting this month. Phase one uses synthetic data with zero patient risk. Phase two requires supervised real-data testing with clinical oversight. 

Phase three enables gradual rollout with human verification at critical points. Takes longer initially, but prevents reputation-destroying failures that could end careers forever.
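To make the gating between those phases concrete, here’s a minimal sketch — the phase table, the 50-case threshold, and the outcome-log shape are assumptions to illustrate the idea, not numbers from the episode:

```python
# Hypothetical phase definitions mirroring the three-phase protocol above.
PHASES = {
    1: {"data": "synthetic",          "patient_risk": "zero"},
    2: {"data": "real (supervised)",  "patient_risk": "clinical oversight on every case"},
    3: {"data": "real (production)",  "patient_risk": "human verification at critical points"},
}

def may_advance(current_phase: int, outcomes: list[dict]) -> bool:
    """Gate promotion to the next phase on a documented outcome log:
    enough cases, and every one of them passed review."""
    MIN_CASES = 50  # assumed threshold — tune per organization and risk level
    if current_phase >= max(PHASES):
        return False  # already at final phase
    return len(outcomes) >= MIN_CASES and all(o["passed"] for o in outcomes)

# A clean 50-case synthetic-data log clears the gate out of phase one.
log = [{"passed": True} for _ in range(50)]
print(may_advance(1, log))  # → True
```

The slow part isn’t the code — it’s insisting on the documented outcome log before anyone touches the promotion switch.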
