• Everyday AI
  • Posts
  • Top Reason For AI Failure: Cognitive Bias

Top Reason For AI Failure: Cognitive Bias

Microsoft unveils AI sales agents, Anthropic's AI policy suggestion for White House, Alibaba unveils QwQ-32B and more!

👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI

Outsmart The Future

Today in Everyday AI
6 minute read

🎙 Daily Podcast Episode: Many factors in how we use AI can introduce cognitive bias, ultimately leading to failure. How can you avoid it? We take a look. Give it a listen.

🕵️‍♂️ Fresh Finds: Apple adds AI app review summaries, an ex-OpenAI researcher criticizes OpenAI's AI safety approach and Anthropic updates its Console. Read on for Fresh Finds.

🗞 Byte Sized Daily AI News: Microsoft unveils AI sales agents, Anthropic's AI policy suggestion for White House and Alibaba unveils QwQ-32B. For that and more, read on for Byte Sized News.

🚀 AI In 5: Google's AI image model, Imagen 3, is kinda underrated. We dive in and see how it stacks up against the competition. See it here.

🧠 Learn & Leveraging AI: We break down how you can avoid AI bias to implement and use AI successfully. Keep reading for that!

ā†©ļø Donā€™t miss out: Did you miss our last newsletter? We talked about Google Search getting AI updates, Amazon launching an agentic AI group, OpenAI AI agent pricing and Musk's block on OpenAI being denied. Check it here!

Top Reason For AI Failure: Cognitive Bias 🧠

Training data is biased. Humans are flawed.

Which is a major reason AI can fail – cognitive bias.

Anatoly Shilman, CEO of Cogbias AI, joins us as we chat about what cognitive bias is in AI, why it's important, and what we can all do about it.

Join the conversation and ask Jordan questions on AI bias here.

Also on the pod today:

• Understanding Cognitive Bias 🤔
• Future of AI and Managing Bias 🔮
• Training Data and Model Development 📊

It'll be worth your 35 minutes:

Listen on our site:

Click to listen

Subscribe and listen on your favorite podcast platform


Here are our favorite AI finds from across the web:

New AI Tool Spotlight – Smyth lets you easily build and deploy AI agents, Remention uses AI to get your product name in all of the internet's discussions, and AISmartCube gives no-code people an AI tool-building BFF.

OpenAI – An ex-OpenAI policy researcher criticized OpenAI for "rewriting the history" of its AI safety approach.

Big Tech – Google DeepMind, Cohere and Twelve Labs all spoke at TechCrunch Sessions: AI on how founders can build using their AI models.

Trending in AI – Former Google CEO Eric Schmidt and others have released a policy paper warning the US against a Manhattan Project for AGI.

Apple – Apple is adding AI-powered app review summaries with iOS 18.4.

Anthropic – Anthropic has updated its Console to serve as a one-stop shop for prompting.

AI Governance – WHO has announced a new collaborating center on AI for health governance.

AI Search – DuckDuckGo is updating its AI search tool.

AI Agents – Convergence AI has unveiled Template Hub, a repository of workflow-specific agents.

AI Video – Hunyuan has released HunyuanVideo I2V.

AI Models – AI21 Labs has launched Jamba 1.6, an open-source model for private enterprise deployment.

1. Microsoft Unveils New AI Sales Agents 🧑‍💼

Microsoft has announced two new AI agents designed to streamline processes for sales teams. The Sales Agent autonomously converts contacts into qualified leads, while Sales Chat equips reps with crucial insights, cutting down on prep time for meetings. With nearly 70% of Fortune 500 companies already leveraging Microsoft Copilot, these advancements promise to transform how businesses approach revenue generation.

As these tools become available in public preview this May, the potential for organizations to optimize operations and boost sales is clearer than ever.

2. Anthropic's AI Policy Push to the White House 🏛

Anthropic has submitted new AI policy recommendations to the White House, aiming to shape America's future in artificial intelligence. These suggestions include maintaining the AI Safety Institute from the previous administration and establishing national security evaluations for powerful AI models.

Additionally, the company is advocating for strict export controls on AI chips to China and a significant investment in dedicated power for AI data centers.

3. Alibaba Unveils QwQ-32B, Stocks Surge 🚀

Alibaba launched its new reasoning model, QwQ-32B, claiming it can rival DeepSeek's blockbuster R1. Following the announcement, Alibaba's shares soared by 8.39% in Hong Kong, marking a 52-week high, as the company emphasizes efficiency with its 32 billion parameters compared to DeepSeek's hefty 671 billion.

This surge reflects a broader trend where companies are racing to innovate in AI technology, potentially transforming the way businesses approach data processing and decision-making.

4. Meta's Plans for Agentic AI Deployment 👀

Meta is making waves in the generative AI landscape, with its open-source Llama LLMs downloaded over 800 million times and the upcoming Llama 4 set to enhance AI agents that can perform complex tasks like web surfing. Clara Shih, head of business AI at Meta, emphasizes that these advancements will enable small businesses to harness AI without needing extensive in-house teams, fundamentally changing how they interact with customers.

With predictions that every job function will be transformed by AI, Shih encourages individuals to embrace learning and experimentation to navigate this new landscape.

5. Microsoft Boosts AI Investments in South Africa 💸

Microsoft has announced an additional investment of approximately $300 million in AI infrastructure in South Africa, as revealed by Vice Chair and President Brad Smith during an event in Johannesburg. This investment aims to enhance local capabilities, including funding technical certification exams for 50,000 people in essential digital skills.

This initiative aligns with Microsoft's broader fiscal strategy, which includes a whopping $80 billion for developing data centers to support AI and cloud technologies.

6. Amazon Bedrock Adds More AI Models 🤩

Amazon Bedrock has expanded its lineup with Anthropic's latest Claude 3.7 Sonnet and Luma AI's Ray2 video model, solidifying its position as a leader in AI solutions. Claude 3.7 Sonnet, touted as Anthropic's most intelligent model yet, offers two modes: standard for quick responses and extended thinking for more complex tasks, making it a versatile tool for professionals across various fields.

Meanwhile, DeepSeek's cost-efficient models are now also part of the Bedrock family, promising powerful reasoning capabilities without heavy infrastructure investments.

7. Mistral Unveils OCR API for PDF Documents 📑

Mistral has just launched its Mistral OCR API, a multimodal optical character recognition tool designed to transform complex PDF documents into neatly formatted Markdown text. This innovative API not only recognizes text but also identifies images and illustrations, making it a powerful asset for companies looking to streamline their workflows.

Mistral claims its OCR technology outperforms competitors like Google and Microsoft, particularly for documents with advanced layouts and mathematical expressions.

Google Imagen 3 Review

Google's image generation model, Imagen 3, is underrated…

Is it better than DALL-E 3 or Midjourney?

We show you how to access Imagen 3 and give a head-to-head comparison.

🦾 How You Can Leverage:

AI isn't failing because of hallucinations.

It's failing because it's biased.

Just like you.

Sorry not sorry.

Thatā€™s what Anatoly Shilman, CEO of Cogbias AI, revealed when he joined the Everyday AI show.

He said that AI doesn't just give answers.

It gives the answers you expect. The ones shaped by how it was built, who built it, and the biases you unknowingly feed into it.

If you're not actively countering AI bias, you're making decisions on flawed data. And that's already costing companies millions.

Here's how bias infects AI, how it's sabotaging business strategy, and what you need to do to stop it.

Miss this? Then keep trusting bad AI advice. 👇

1 – AI Just Tells You What You Want to Hear 🗣

You ask AI a question. It responds.

But it's not just pulling facts. It's mirroring your own beliefs.

That's confirmation bias. And it's ruining your data.

Shilman broke it down with a real-world scenario. A company uses AI to create a survey. The prompt? "What do you love most about our product?"

AI runs with it. Customers respond. And guess what?

The company walks away thinking their product is flawless. Because AI set them up to get the answers they wanted.

AI doesn't challenge assumptions. It reinforces them. If you don't force it to push back, you're stuck in an echo chamber.

Try this:

Make AI argue against itself. 

Ask, "What would a critic say?" If AI can't poke holes in its own reasoning, you're getting one-sided, useless insights.

A recent article by Capgemini highlights this issue, emphasizing that our collective focus on confirmation may hinder progress in AI.
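If you script your AI calls, the "argue against itself" step is easy to automate. Here's a minimal sketch; `ask_llm` is a placeholder for whatever chat-completion call you use, not any specific vendor's API.

```python
def critique_prompt(question: str, draft_answer: str) -> str:
    """Build a follow-up prompt that forces the model to argue against itself."""
    return (
        f"Original question: {question}\n"
        f"Your earlier answer: {draft_answer}\n"
        "Now act as a skeptical critic. List the strongest objections to that "
        "answer, and the evidence that would prove it wrong."
    )

def ask_with_critique(ask_llm, question: str) -> dict:
    """Get a draft answer, then a counter-argument, and return both."""
    draft = ask_llm(question)                              # first pass
    critique = ask_llm(critique_prompt(question, draft))   # critic pass
    return {"answer": draft, "critique": critique}
```

Reading the answer and the critique side by side is the point: if the critique is empty or toothless, you're back in the echo chamber.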

2 – The Way You Phrase a Question Controls the Answer 💬

AI doesn't just process words.

It follows the framing you give it. And that's where bias sneaks in.

Shilman explained it perfectly. Ask AI, "How can we increase revenue through price hikes?" It assumes price hikes are the solution.

But tweak it. "What's the best way to increase revenue without losing customers?"

Now, AI explores retention strategies. Cost-cutting. Alternative pricing models.

See the difference?

AI doesn't challenge bad framing. It just follows orders. If you frame a bad question, AI gives you a bad answer. Simple as that.

Try this:

Rewrite your prompts. 

Shift the framing. 

Compare results. 

If a small tweak changes AI's entire response, you've exposed bias in how it processes information.
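The rewrite-shift-compare loop can also be automated if you call models from code. A toy sketch, again with a placeholder `ask_llm` callable; the similarity measure and threshold are illustrative assumptions, not a rigorous bias metric.

```python
from difflib import SequenceMatcher

def framing_divergence(ask_llm, framing_a: str, framing_b: str) -> float:
    """Return 0.0 (identical answers) to 1.0 (totally different answers)
    for the same question asked under two framings."""
    a, b = ask_llm(framing_a), ask_llm(framing_b)
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def flag_framing_bias(ask_llm, framing_a: str, framing_b: str,
                      threshold: float = 0.5) -> bool:
    """True if a change in framing flips the answer substantially."""
    return framing_divergence(ask_llm, framing_a, framing_b) > threshold
```

A crude text-similarity score won't catch every flip in meaning, but a high divergence on a small rewording is exactly the tell the section describes.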

Psychology Today discusses how framing can affect the output produced by AI, leading to confirmation bias.

3 – AI Grabs the Easiest Answer, Not the Right One 🤔

AI isn't searching for the best data.

It's pulling whatever is easiest to find.

That's availability bias. And it's dangerous.

Shilman shared a real disaster. A major law firm used AI for legal research. The AI couldn't find enough relevant cases.

So it invented them.

Completely fabricated legal precedent almost made its way into real court arguments before anyone caught the mistake.

AI doesn't admit when it doesn't know something; it just fills in gaps. And if you're not checking, you're making decisions based on fiction.

Try this:

Never trust AI blindly. Run the same query through different models. Demand sources. If an answer feels too neat, assume it's wrong until you verify it.

Yeah, real professionals are actually skipping this. 

For example, a lawyer was fined $15,000 for using AI-generated fictitious cases in court filings.

Yikes. 
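If you query models programmatically, the cross-check from the Try This above takes only a few lines. A sketch with illustrative model callables and a deliberately crude agreement rule; the "demand sources" check here is just a string heuristic, not real verification.

```python
def cross_check(models: dict, prompt: str) -> dict:
    """Ask every model the same question and collect the answers.
    `models` maps a label to any callable that answers a prompt."""
    return {name: ask(prompt) for name, ask in models.items()}

def needs_review(answers: dict) -> bool:
    """Flag for human verification when models disagree, or when any
    answer cites no source at all (a common tell for fabrication)."""
    unique = {a.strip().lower() for a in answers.values()}
    missing_sources = any("source:" not in a.lower() for a in answers.values())
    return len(unique) > 1 or missing_sources
```

Nothing here replaces checking the citations yourself; it just decides which answers you're not allowed to trust yet.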

⌚

Numbers to watch

$111 Million

Turing, a coding provider for OpenAI and other LLM companies, has raised $111M.

Now This …

Let us know your thoughts!


If our newsletter was sent at a more consistent time, would you read it more?

Be honest. Our feelings won't be hurt
