Confronting AI Bias and AI Discrimination in the Workplace

OpenAI disrupts 20 cybercrime operations worldwide, Tesla unveils an AI taxi, AMD challenges NVIDIA with new AI chips, and more!

Outsmart The Future

Sup y’all! 👋

Latest newsletter ever? Sorry.

(Proof this thing is written by a human and not copy-pasted like 90% of AI newsletters nowadays. lolz)

I actually had the opportunity to run an AI enablement workshop and keynote a CEO retreat for Tritium Partners. (And I'm currently on dial-up-speed wifi somewhere between Texas and Chicago, in the MIDDLE SEAT. YIKES.)

FYI — hiring us to speak is kinda hidden on our website, but, like Tritium, you can bring the Everyday AI experience to your team/next event. Holler at us here.

(Oh, today’s guest from EY was fantastic. Go learn from her below.) 

✌️
Jordan

Today in Everyday AI
7 minute read

🎙 Daily Podcast Episode: AI isn’t something you can just set and forget. You need to watch out for AI bias and discrimination. We have an AI expert from EY explain how you can navigate AI in the workplace ethically. Give it a listen.

🕵️‍♂️ Fresh Finds: Amazon’s new AI tool for packages, Zoom unveils custom AI avatars and a new AI video generator arrives to compete. Read on for Fresh Finds.

🗞 Byte Sized Daily AI News: OpenAI stops 20 deceptive operations worldwide, Microsoft’s new AI tool for healthcare and AMD challenges NVIDIA with new AI chip. For that and more, read on for Byte Sized News.

🚀 AI In 5: xAI released Grok 2 not too long ago. Is it any better than Grok 1? We dive in to find out. See it here

🧠 Learn & Leveraging AI: Wondering how you can make sure to use AI in the workplace ethically? We break down what governance needs to be in place. Keep reading for that!

↩️ Don’t miss out: Did you miss our last newsletter? We talked about Google in trouble with DOJ, Meta unveiling a new AI video tool and NVIDIA's new AI initiatives. Check it here!

Confronting AI Bias and AI Discrimination in the Workplace ⚖️

Think AI is neutral? Think again.

This is the workplace impact you never saw coming.

What happens when the tech we rely on to be impartial actually reinforces bias?

Join us for a deep dive into AI bias and discrimination with Samta Kapoor, EY’s Americas Energy AI and Responsible AI Leader.

Join the conversation and ask Jordan and Samta questions on AI here.

Also on the pod today:

• Bias and Discrimination in AI Models 🤖
• AI Guardrails For Businesses 🛡
• AI and the Future of Work 💼

It’ll be worth your 27 minutes:

Listen on our site, or subscribe and listen on your favorite podcast platform.

Here are our favorite AI finds from across the web:

New AI Tool Spotlight – Latitude is an open-source prompt engineering platform, Height.app is an autonomous project management tool and Handinger helps extract data from the internet.

Big Tech – Amazon has unveiled an AI tool that helps drivers find packages faster.

Zoom – Zoom has announced a custom AI avatar tool that it plans to release next year.

AI Video – A new AI video generator, Pyramid Flow, has launched and it’s giving competitors a run for their money.

Google – Google TV’s AI screensavers are now widely available on Chromecast and other devices.

AI in Healthcare – AI startup Suki has raised $70 million to build AI assistants for hospitals.

Read This – This expert is listing the jobs most at risk of being replaced by AI.

1. OpenAI Foils 20 Cybercrime Networks Globally 🚨

In a recent announcement, OpenAI revealed it has disrupted over 20 deceptive operations worldwide this year, aimed at exploiting its platform for malicious activities like generating fake social media content and crafting malware. Notable operations included efforts by SweetSpecter, a suspected Chinese adversary, and Iranian-linked groups using AI models for cyber reconnaissance and misinformation campaigns surrounding elections.

OpenAI emphasized that while threat actors are adapting, their attempts have not led to significant advances in creating new malware or building viral audiences.

2. Microsoft Unveils New AI Tools for Healthcare 🧑‍⚕️

Microsoft has introduced a suite of AI-driven tools aimed at revolutionizing the healthcare industry. With nurses currently spending upwards of 41% of their time on documentation, these innovations—including medical imaging models and an automated documentation solution—are designed to streamline workflows and reduce burnout among clinicians.

As healthcare organizations begin testing these tools, the potential to enhance efficiency and collaboration in patient care is clear. According to Microsoft, the integration of AI in healthcare could significantly lighten the load on medical staff, ultimately benefiting both professionals and patients alike.

3. AMD Revamps AI Chips with New Releases 🚀

Advanced Micro Devices (AMD) announced plans to kick off mass production of its MI325X AI chip in Q4 2024, aiming to compete with NVIDIA's dominance in the market. Alongside this, AMD revealed its next-gen MI350 series chips, set to launch in H2 2025, boasting enhanced performance and memory capabilities over previous models.

Despite a recent uptick in its stock, shares fell by 3.3% amid investor caution over new customer announcements.

4. Tesla Reveals New Robotaxi “Cybercab” 🚕

Tesla is set to unveil its new "Cybercab" prototype, marking a pivotal moment in its quest for fully autonomous vehicles. While the company aims to impress investors with this breakthrough, it faces uphill challenges, particularly in convincing regulators and the public about safety—an area where competitors like Waymo have a head start with their fleet of operational robotaxis.

Tesla's strategy, which relies solely on computer vision and end-to-end AI, could yield high rewards but also presents significant risks, especially in handling rare driving scenarios.

5. Nobel Prizes Ignite Debate on AI Research 🤔

The recent awarding of Nobel Prizes to AI leaders Demis Hassabis, John Jumper, and Geoffrey Hinton has sparked significant discussion about the influence of Big Tech in scientific research. Critics argue that the absence of a dedicated prize for mathematics or computer science skews recognition, as these laureates' work may not fit traditional categories like physics and chemistry.

This situation raises concerns about how advancements in AI are valued and the struggle of academia to compete with tech giants in groundbreaking research. As the lines blur between fields, the implications for future research funding and career opportunities in AI remain a hot topic.

Sponsored by

“If you want to understand artificial intelligence, get better at understanding human intelligence,” says author and professor of psychology Tomas Chamorro-Premuzic.

He joins the Microsoft WorkLab podcast to discuss how AI can help unlock greater human performance.

Learn how work is changing on WorkLab, available wherever you get your podcasts.

Grok-2 Review: Better Than Grok 1 or Another Flop from Twitter?

Not too long ago xAI released Grok 2, a new and updated model from Elon Musk’s company.

Is this new model better than Grok 1?

We’re giving you a live review and going over the pros and cons.

🦾 How You Can Leverage:

A tricky hurdle to clear? 

How to feast from the large language model buffet but hold the bias. 

Trying to navigate the tricky world of AI bias in the workplace? 

Same. 

When done right, AI can be an invaluable asset to your team. 

But when bias creeps in, things can take a turn for the worse.

Yikes. 

Enter Samta Kapoor, the Energy AI and Responsible AI Leader at Ernst & Young, the global consulting powerhouse that boasts over 400,000 employees worldwide. 

Samta's jam? 

She’s all-in on countering AI bias and steering this transformative technology into a force that fuels growth and innovation while upholding ethics and fairness. 

(Yeah, we REALLY need peeps like her.) 

From setting up strategic guardrails to fostering an environment where AI is respected and used responsibly, Samta talks through strategies employed by global teams at EY and shares her secrets to fighting bias. 

Wanna karate chop bias in the neck? 

Let’s get it. 

Here’s what you need to know. 

1 – Look in the mirror, human 🪞

Real talk — when models spit out info that seems a bit biased or discriminatory, we immediately put that LLM in the corner for timeout. 

But maybe we should take a look in the mirror, Samta said. 

After all, the generative AI tools we use (text, video, images, etc.) are all reflections of their training data. 

Should humans do a better job of training models and weeding out the garbage? Yes. 

Is it possible to free a model of bias?

Not really. 

Try this: 

Dig into where the big model makers actually get their training data. (It’s kinda a sneaky feat, but they essentially just scrape most of the open internet. And a lot more.) 

And guess what’s on the internet? (Aside from cat memes.)

Misinformation, hate speech, bias and prejudice. So much of it, in fact, that even well-intentioned humans training models are bound to miss some. 

That’s why companies tweaking models in-house gotta prioritize guardrails, which are essential for ethical and reliable use of AI, Samta said. 
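
Not sure what a guardrail even looks like in practice? Here’s a minimal, totally hypothetical sketch in Python (the flagged phrases and function name are made up for illustration, not anything EY prescribes): a thin screen that checks model output before it ever reaches an employee or a customer, and routes anything sketchy to a human. Real guardrails layer in moderation models, policy engines and human review, but the basic shape looks roughly like this.

```python
# Minimal, hypothetical guardrail sketch: screen LLM output before anyone sees it.
# The phrases and checks below are illustrative only, not a production policy.

FLAG_TERMS = {"guaranteed hire", "only young candidates", "native speakers only"}

def screen_output(model_text: str) -> dict:
    """Return the text plus a flag if it trips a simple bias check."""
    lowered = model_text.lower()
    hits = [term for term in FLAG_TERMS if term in lowered]
    return {
        "text": model_text,
        "flagged": bool(hits),   # route flagged text to a human reviewer
        "reasons": hits,         # keep the "why" for your records
    }

if __name__ == "__main__":
    draft = "We should screen for native speakers only to keep the team cohesive."
    result = screen_output(draft)
    print(result["flagged"], result["reasons"])
```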

2 – The onus is on you 🫵

Whose responsibility is it to make sure models are fair? 

Not the CEO. Or the CISO. Or your ML team. 

It’s everyone’s responsibility. Passing the buck can’t be an excuse for copying-and-pasting content that can be harmful or hurtful. 

Samta said every employee interacting with AI, not just the experts, should have a foundational understanding of the technology's implications.

Try this: 

Cutting out bias isn’t just some nice-to-have checkbox. It’s gotta be intentional, with an accountability system behind it. 

Easier said than done? Fosho.

Expert tip: Businesses should include data tracking and audit trails as part of their AI strategy to maintain transparency in AI usage.
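
What might that audit trail look like day to day? Here’s a minimal, hypothetical sketch in Python: every prompt and response gets logged with who asked and when, so usage stays reviewable later. (The `call_model` stand-in, the log path and the example email are our own illustrative assumptions, not a tool Samta or EY recommends.)

```python
# Minimal audit-trail sketch: log every AI interaction so usage stays reviewable.
# `call_model` is a placeholder for whatever LLM client your team actually uses.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")

def call_model(prompt: str) -> str:
    # Stand-in for a real API call.
    return f"(model response to: {prompt})"

def tracked_completion(user: str, prompt: str) -> str:
    """Run the model and append who asked what, and what came back, to the log."""
    response = call_model(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # one JSON line per interaction
    return response

if __name__ == "__main__":
    print(tracked_completion("jordan@example.com", "Summarize this candidate's resume."))
```

Swap the stand-in for your real model call and you’ve got a paper trail your compliance folks can actually audit.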

With great superpowers of AI comes super responsibility. 

3 – Train before you can walk 🚶‍♂️

We get it. 

Everyone wants AI to legit be biz rocket fuel. (Side note… can we drop the 🚀 emoji when talking about AI on social media? Like… can we all just make a pact? Cool.) 

Oh… train before you can walk. 

Y’all remember computer classes? 

Or typing classes? 

Or.... internet classes? 

Yeah, young’uns, those were all actually things. 

You gotta walk before you run, but you gotta train before you can walk. 

Try this: 

That means — AI 101 is a necessity for keeping bias out of your LLM outputs.

(Or, similarly, if you’re fine-tuning models for your company’s use. Same rules apply.) 

Samta said GenAI training has been a huge priority at EY. 

And it should be a top priority for any biz leader. Samta said employers gotta actively invest in AI training to improve fairness and get more accurate outcomes. 

Numbers to watch

$32 Million

Relyance AI raises $32M to help companies comply with data regulations.

Now This …

Let us know your thoughts!

Do you attend our livestreams?

Every weekday, we bring you fresh AI insights, exclusive interviews, and breaking news with our Everyday AI livestream.
