
Sam Altman joins Microsoft after OpenAI firing 🤯

🤖 How to use AI safely and ethically, Sam Altman joins Microsoft after OpenAI firing, creating presentations with AI, and more!

Outsmart The Future

Today in Everyday AI
7 minute read

🎙 Daily Podcast Episode: Is AI actually safe? How can we use AI tools that are ethical and trustworthy? We dive in with the president of the Mozilla Foundation, a leading voice in the safe AI revolution. Give it a listen.

🕵️‍♂️ Fresh Finds: An AI language tutor, OpenAI’s new interim CEO, and Mastercard’s new AI fraud tool. Read on for Fresh Finds.

🗞 Byte Sized Daily AI News: Sam Altman joins Microsoft after OpenAI firing, Meta disbands Responsible AI team, and Germany, France, and Italy agree on AI regulation. For that and more, read on for Byte Sized News.

🚀 AI In 5: Creating presentations and pitch decks can be time-consuming. But this new AI presentation tool might just save you time! See it here

🧠 Learn & Leveraging AI: So how do we know when we can trust AI that’s ethical? What steps can we take to use safe AI? Keep reading for that!

↩️ Don’t miss out: Did you miss our last newsletter? We talked about the future of AI in biology, Meta’s new AI video editing tools, and your new AI learning sidekick. Check it here!

Safer AI - Why we all need ethical AI tools we can trust ⚙️

Do you trust the AI tools that you use?

Are they ethical and safe?

We often overlook the safety behind AI, and it's something we should pay attention to.

Mark Surman, President at Mozilla Foundation, joins us to discuss how we can trust and use ethical AI.

Also on the pod today:

• Responsible AI regulation 🔓
• AI concerns to be aware of 🤔
• Creating balanced government regulation ⚖️

It’ll be worth your 32 minutes.


Subscribe and listen on your favorite podcast platform


Upcoming Everyday AI Livestreams

Tuesday, November 21st at 7:30 am CST

Here are our favorite AI finds from across the web:

New AI Tool Spotlight – Chatty Tutor is an AI language tutor, another find lets you create your own AI avatar profile picture, WhatLetter explains documents in your language, and PostNitro.ai lets you create stunning carousel posts.

Trending in AI – So with Sam Altman out, who’s the new CEO at OpenAI? Meet the interim CEO, who thinks AI could destroy life.

Big Tech – Over 100 AI executives and US government officials met in Utah to discuss AI security issues and the technology's future.

Business of AI – Mastercard is implementing a new AI tool to prevent fraud through cryptocurrency exchanges.

AI in Science – Generate Biomedicines has developed an AI that can generate protein structures and predict the potential functionality of the proteins generated.

1. Sam Altman's New Role At Microsoft 💼

Sam Altman, the former CEO of OpenAI, has transitioned to a new role at Microsoft. This change follows Altman's unexpected firing on Friday from OpenAI, the company he co-founded. Altman's move to Microsoft is seen as a strategic step to deepen the collaboration between the tech giant and OpenAI, focusing on advancing AI technologies.

Microsoft CEO Satya Nadella announced the new position for Altman leading Microsoft’s AI research team on X/Twitter.

Emmett Shear, who most recently served as the CEO of streaming giant Twitch, was announced as OpenAI’s interim CEO.

2. OpenAI Employees Demand Board Resignation 🚨 

More than 600 employees at OpenAI are threatening to resign and join Sam Altman at his new role at Microsoft unless the current OpenAI board steps down and appoints new leadership. This dramatic move comes in the wake of former CEO Sam Altman's departure from the company on Friday and a tumultuous weekend of back and forth negotiations between Altman, OpenAI’s board, Microsoft and other players.

The employees are demanding the reinstatement of Altman and a complete overhaul of the board. This situation indicates significant internal unrest within OpenAI.

3. Meta Disbands Responsible AI Team Amidst Reorganization 🚫

Meta, the parent company of Facebook, has dissolved its Responsible AI team. This decision comes amidst a larger corporate restructuring process. The Responsible AI team was tasked with ensuring ethical practices in AI development and usage, and its disbandment raises questions about Meta's future direction in ethical AI.

4. AI Regulation Agreement by Germany, France, and Italy 🌍

Germany, France, and Italy have reached an agreement on the regulation of artificial intelligence. This trilateral agreement is a significant step towards establishing a unified European approach to AI regulation. The collaboration aims to create a regulatory framework that balances innovation with ethical and societal considerations.

5. Amazon's 'AI Ready' Initiative: Free AI Training for Millions 📚

Amazon has launched 'AI Ready', a global initiative to provide free AI training to 2 million people by 2025. The program includes a range of free AI courses and scholarships, targeting the increasing need for AI skills in the workforce. This initiative reflects Amazon's commitment to expanding AI education and skill development worldwide.

6. FDA's Challenge in Regulating Healthcare AI 🏥

The U.S. Food and Drug Administration (FDA) is facing challenges in regulating generative AI in the healthcare sector. The FDA is currently working on a framework to manage the safe use of AI in medical devices and applications. This task is complicated by the rapid evolution of AI technologies and their potential impact on patient care and safety.

All in one presentation creator! 🎨

Creating presentations and pitch decks can be time-consuming.

But this new AI presentation tool might just save you time!

We’re showing you Pitch and all its capabilities.

🤷‍♂️ What’s Going On and Why It Matters:

Yeah, the Generative AI space is shaking up, especially when it comes to privacy, safety and ethics. 

(Understatement of the year, we know.) 

Between the breaking news of the OpenAI/Microsoft shakeup, Meta breaking up its Responsible AI team and other recent developments, it’s more important than ever to pay close attention to your GenAI strategy. 

To be honest, the last few months (and hours!) of AI happenings have been a lot to take in. That’s one of the reasons why Mark Surman joined us on the show. 

Mark is the President of the Mozilla Foundation, and a leader in creating safer AI systems for us all to use through the recent launch of Mozilla.ai.

As Big Tech shakes things up in the Generative AI world, it’s more important than ever to make sure your company focuses on safer AI.

Here’s a bite-sized preview of what we tackled: 

  • Need for more sophisticated AI tools to combat misinformation

  • Potential challenges in regulating generative AI systems

  • Concerns about sacrificing ethical considerations for faster development

  • Governments and the public advocating for AI regulations

  • Importance of transparency from AI companies and governments

  • Mozilla AI's focus on building trustworthy and open source AI

  • Mozilla AI's goal to avoid hallucinations and discrimination in AI models

And a TON more


To put it lightly, there’s a lot of work to be done.

So let’s take a deeper dive to see how we can make it work. 👇

🦾How You Can Leverage:

Safer AI might sound impossible. 

We get it. 

‘I’m not building these tools, I’m just using them.’ 

Makes sense. Yet individuals and business leaders are still in the driver’s seat when it comes to safer, more ethical AI use.

As President of the Mozilla Foundation, Mark is helping shape the future of safe AI use across the globe with Mozilla.ai.

So, today’s show is definitely worth a re-listen.

But if you’re ready to implement what Mark talked about, here’s your 1-2-3 guide y’all.   

1 – Build trust with transparency 🤝

Sounds simple enough, right? 

But how you build trust in your organization while using AI is paramount.

And Mark said one of the most important elements is transparency. Without transparency in AI models and their usage, there is no safety.

Try this: Here’s a 1-2 combo on how to build trust and transparency in your organization or department. First, being able to explain and understand which GenAI models you use (and why!) is crucial. We talked about that recently with Nick Schmidt, on how to understand and fix biased AI.

After giving that a listen, check out the ‘Creating Trustworthy AI’ paper from Mozilla that Mark mentioned. (And when it gets updated in the coming months like Mark mentioned, we’ll be sure to share those updates!) 

2 – Verify before you share 🗣️

Can we keep it real here quick? 

So much of GPT or AI-created content is hot garbage.

The downside of Generative AI is that it makes it easy for anyone to create quality-ish content at scale. Why’s that a bad thing? Well, aside from the spiraling downfall of insightful, human-written content, it’s easier than ever to accidentally share disinformation or hallucinations from a large language model. 

Try this: Take a page out of Mark’s book and verify, verify, verify.  

In a world brimming with AI-generated content, it's crucial to wear your detective hat and double-check before hitting that share button. Know how large language models work, and how (and why) they sometimes lie. 

Don't perpetuate stereotypes and biases that sneak their way into AI-generated creations. Take a critical eye to the content you encounter, and don't let the AI-generated variety fool you. Verify, verify, and verify some more.

3 – Slowly go fast 🐢

Mark dropped a thought-provoking take on the speed of AI.

Its impact is both too fast and too slow.

Wait, what? 

You heard that right. We agree with this take, and the ‘slowly go fast’ methodology can be applied to your Generative AI strategy.

Try this: When implementing GenAI in your org, take your time, weigh the options, and make thoughtful choices. Once the groundwork is solid and you’ve got stakeholder buy-in and governance in place, use GenAI to supercharge your velocity. 

It's all about finding that sweet spot where caution and acceleration come together. We actually covered the “slowly go fast” approach recently in an episode about adapting to an AI-first world.

Now This …

We wanna hear from you!


Do you trust AI tools?

