Google warns on military AI, California’s bill divides and more. AI News That Matters
OpenAI supports AI content bill, Inflection's Pi adds usage caps, TikTok adds custom AI voices and more!
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Outsmart The Future
Today in Everyday AI
6 minute read
🎙 Daily Podcast Episode: Microsoft's controversial feature is finally coming out, Google employees are worried about their AI's role in the military, and what's up with the new California AI bill? Here's this week's AI news that matters. Give it a listen.
🕵️‍♂️ Fresh Finds: Nearly half of FDA-approved AI medical devices weren't trained on real patient data, Hugging Face and Google Cloud team up on Deep Learning Containers, and Oklahoma City police are writing crime reports with AI. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: Inflection’s Pi gets usage caps, IBM introduces AI chip tech and TikTok adds custom AI voices. For that and more, read on for Byte Sized News.
🚀 AI In 5: We're checking out a GPT inside ChatGPT that lets you convert a file to almost any format you need! See it here
🧠 AI News That Matters: What's the controversial Microsoft AI feature that's finally launching? Why are Google DeepMind employees uneasy about military contracts? And why is Silicon Valley split over California's new AI bill? Here's our breakdown of the AI news that matters. Keep reading for that!
↩️ Don’t miss out: Did you miss our last newsletter? We talked about DeepMind's worries over Google defense deals, China looks for US chip access and Salesforce unveils new AI agents. Check it here!
AI News That Matters - August 26th, 2024 📰
What's the controversial Microsoft AI feature that's finally coming?
Why are Google employees worried about their AI in the military's hands?
And why is Silicon Valley fighting over this new AI bill in California?
Here's this week's edition of AI News That Matters.
Join the conversation and ask Jordan any questions on AI here.
Also on the pod today:
• Salesforce's New AI Tools 💼
• Fortune 500 Companies and AI Risk 🏢
• Microsoft's Recall AI Feature Launch 🧠
It’ll be worth your 27 minutes:
Listen on our site, or subscribe and listen on your favorite podcast platform.
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – TheySaid provides conversational surveys for deeper insights, Mimrr helps eliminate technical debt for your startup, and IndieScore turns your meetings into concise highlight clips.
Trending in AI – The trailer for the upcoming movie Megalopolis included fake, AI-generated critic quotes and is now receiving backlash.
LLMs – Hugging Face and Google Cloud have partnered to create Deep Learning Containers (DLCs).
🐳 Big news! @huggingface and @googlecloud have joined forces to bring you a collection of Deep Learning Containers (DLCs) to transform the way you build AI with open models on Google Cloud!
With pre-configured, optimized environments for PyTorch Training (GPU) and Inference… x.com/i/web/status/1…
— Alvaro Bartolome (@alvarobartt)
2:00 PM • Aug 26, 2024
AI in Medicine – A new study shows that almost half of FDA-approved AI medical devices weren't trained on real patient data.
Meta – Meta has released new trust & safety research for LLMs.
As part of the release of Llama 3.1, we also released new trust & safety research including CyberSecEval 3. We've published our research on this work to continue the conversation on empirically measuring LLM cybersecurity risks & capabilities.
Paper ➡️ go.fb.me/yv32a9
— AI at Meta (@AIatMeta)
6:12 PM • Aug 26, 2024
AI Startup – Viggle AI is taking social media by storm with its controllable AI characters for memes and ideas.
AI in Society – Oklahoma City police officers are using AI chatbots to write crime reports.
AI in Media – Actress Jenna Ortega spoke about her struggles with explicit AI-generated images of herself.
1. OpenAI Backs New California Bill to Label AI Content 🏷
OpenAI is backing California's AB 3211, a bill that would require tech companies to label AI-generated content, a crucial step during an important election year. The legislation, which passed the Assembly in a unanimous 62-0 vote, aims to cover everything from misleading deepfakes to harmless memes.
OpenAI emphasizes that transparency, such as watermarking AI content, is vital for helping voters distinguish between authentic and AI-generated information.
2. Inflection’s Pi Faces Usage Caps 🧢
Inflection is set to impose usage caps on its AI chatbot Pi as part of a strategic pivot toward enterprise offerings, after securing $1.3 billion in funding last year. Following Microsoft's reported $650 million deal to bring on Inflection's founders and much of its staff, antitrust regulators are scrutinizing the arrangement for potential anti-competitive behavior.
In a notable move, users can now export their conversations from Pi, establishing a new benchmark for data mobility in the AI sector.
3. TikTok Unleashes Custom AI Voices 🗣
TikTok has rolled out an innovative feature that allows users to create their own AI voices for video clips, as highlighted by researcher Jonah Manzano. The new option not only reduces strain on users' vocal cords but also lets voiceovers be translated into multiple languages, providing significant convenience.
However, this development raises concerns about diminishing the authenticity of social interactions on the platform, as critics argue that the reliance on AI voices could undermine the essence of social media.
4. U.S. and Chinese Scientists Create New AI Model for Drug Discovery 💊
Scientists from China and the U.S. have introduced a groundbreaking AI model named ActFound, poised to transform drug development by predicting bioactivity with reduced data requirements and costs. This advanced model outperforms its competitors by addressing challenges such as limited data and assay incompatibilities while demonstrating strong capabilities in predicting cancer drug bioactivity.
With its training on an impressive 35,644 assays and 16 million bioactivities, ActFound could significantly enhance the efficiency of drug discovery processes.
5. Berkeley Researchers Tame ChatGPT's Math Errors 🧮
Researchers from UC Berkeley have made significant strides in reducing ChatGPT's math errors, known as "hallucinations." Their study revealed that by having the chatbot solve the same algebra problem 10 times, its error rate dramatically decreased from 29% to just 2%. Although challenges remain in statistics, these findings suggest a promising future for automated tutoring systems.
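If you want to play with the same trick yourself, it boils down to what researchers often call self-consistency: sample several independent solutions and keep the answer that shows up most often. Here's a minimal Python sketch of that general idea, not the Berkeley team's actual code; the model choice, prompt wording, and the "ANSWER:" parsing convention are our own assumptions.

```python
# Minimal sketch of majority-vote ("self-consistency") answering.
# Assumptions: an OpenAI-style chat API, numeric final answers, and an
# "ANSWER: <number>" convention invented purely for easy parsing.
import re
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_once(problem: str) -> str | None:
    """Ask the model once and pull out its final numeric answer, if any."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # hypothetical choice; the study's setup may differ
        temperature=0.7,       # some randomness so repeated runs can disagree
        messages=[{
            "role": "user",
            "content": f"{problem}\n\nWork step by step, then end with 'ANSWER: <number>'.",
        }],
    )
    match = re.search(r"ANSWER:\s*(-?\d+(?:\.\d+)?)",
                      response.choices[0].message.content)
    return match.group(1) if match else None


def majority_answer(problem: str, runs: int = 10) -> str | None:
    """Solve the same problem several times and return the most common answer."""
    answers = [a for a in (ask_once(problem) for _ in range(runs)) if a is not None]
    return Counter(answers).most_common(1)[0][0] if answers else None


print(majority_answer("Solve for x: 3x + 7 = 22"))  # expect "5" most of the time
```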
Convert Anything Inside ChatGPT
We're checking out a GPT inside ChatGPT that lets you convert a file to almost any format you need!
ConvertAnything lets you convert files to video, text, audio, compressed files, and more.
Check out today's AI in 5.
Y'all, this week in AI has been hotter than a single GPU trying to run Dall-E 3.
(Why you still look so cartoonish though?)
We've got Google employees raising eyebrows over military contracts, Salesforce dropping AI sales assistants, and California cooking up some controversial AI legislation.
Aaaaaannnnnd, Microsoft's reportedly unleashing that feature that had privacy advocates on high alert.
Grab your favorite caffeinated beverage, shorties.
Let's dive into this week's AI rollercoaster.
Here’s what ya need to know. 👇
1 – Google's DeepMind Drama: AI Ethics vs. Military Contracts 🎖️
At least 200 DeepMind employees are giving Google the side-eye over its military contracts, especially with the Israeli army.
Employees of Google DeepMind circulated an internal letter back in May, arguing that military involvement undermines DeepMind's position as a leader in responsible AI.
What it means:
This isn't just Google's headache.
It's a wake-up call for the whole tech industry.
As AI gets more powerful, these ethical dilemmas are gonna pop up more often. Companies need to start thinking hard about where they draw the line, or they might find their top talent heading for the exits.
2 – Salesforce's AI Duo: Your New Sales BFFs? 💼
You gonna hand over all your CRM duties to a Salesforce AI helper?
The company kinda wants you to.
Salesforce just dropped two AI tools that might make your sales team feel like they've got superpowers.
First up, the Einstein SDR Agent – a 24/7 inbound lead nurturing machine.
Then there's the Einstein Sales Coach Agent, an AI mentor for your pitches.
These AI sidekicks aim to tackle a big problem: sales pros reportedly spend 70% of their time on non-selling tasks.
That's a lot of coffee runs and spreadsheet wrestling.
What it means:
This could be a game-changer for sales teams.
But it's not just about boosting numbers. It's about freeing up your human talent to do what they do best – build relationships and close deals that need that human touch.
Companies that find the right balance between AI efficiency and human ingenuity are gonna come out on top.
3 – California's AI Bill: Silicon Valley's New Frenemy? 💻 🧑‍💻
The new AI bill in California has officially divided Silicon Valley.
California's trying to regulate AI with the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SSIFAIMA, if that's a real acronym).
The Safe and Secure Innovation for Frontier AI Models Act (SB 1047) is a California bill aimed at regulating high-cost AI models. Here’s the high level.
• Targets AI models that cost over $100 million to train and need a lot of computing power.
• Requires safety checks and risk-reduction plans.
• Requires companies to report any safety incidents.
• Offers protections for whistleblowers.
• Creates a new Frontier Model Division for oversight.
Silicon Valley is still mixed.
Meta, OpenAI, and some startups are against it, while Anthropic supports it with some changes.
Google and Microsoft have taken a more mixed stance. The big issue is finding a balance between innovation and safety.
What it means:
This isn't just California's show.
If this bill passes, it could change the AI game globally.
Companies might need to pump the brakes on their AI development and think more about the consequences. It's time to start baking ethics into your AI strategy now, before the law makes you.
4 – Google's $250M News Deal: Clever Move or Band-Aid Solution? 📰
According to reports, Google's dropping $250 million over 5 years on California newsrooms to sidestep a bill that could've cost them billions.
It's $180 million for news outlets (sorry, broadcasters) and $70 million for AI tools to boost journalism.
This deal helps Google avoid a proposed state bill that would've required them to compensate for linking to news articles.
That bill estimated Google and Meta could owe US publishers up to $13.9 billion annually.
Yikes.
That $250 milly seems like a bargain, no?
What it means:
This is about more than Google saving some cash. It's about figuring out how news and AI can coexist.
For businesses in media and tech, this is a glimpse into the future. We're watching new models for content creation and distribution emerge in real-time. Time to start thinking about how you'll navigate this new landscape.
Keep an eye on how similar legislation might find its way to Perplexity and OpenAI’s doorsteps.
5 – Microsoft's Recall: Your PC's New Memory Bank 🧠
Microsoft's launching Recall for Windows Insiders in October, according to reports.
It's like giving your PC a photographic memory, automatically taking screenshots for later searches.
Privacy folks are raising red flags, but Microsoft says it'll be off by default and wrapped in tight security.
TBH, we don’t hate it at all.
It'll only work on compatible "Copilot Plus PCs" designed to handle AI workloads locally.
No timeline for wider release yet, but Microsoft's eyeing that holiday season hype.
What it means:
This could be super useful, but it comes with some pretty big privacy concerns and grey areas.
For businesses, it's time to think about policies for these kinds of tools before your employees start using them. It's a reminder that the line between helpful AI and privacy concerns is pretty thin.
6 – Fortune 500's AI Reality Check: From Hype to Caution 🚨
A recent report highlights that over half of Fortune 500 companies are now mentioning AI as a potential risk in their SEC filings.
A study by Arize AI found that nearly two-thirds of these companies mentioned AI in their latest reports, with one in five specifically referencing generative AI.
Only 31% discuss AI benefits outside the risk sections.
The most likely industries to disclose AI risk?
Media, technology, telecom, healthcare, and financial services.
What it means:
Studies are showing that the AI honeymoon is winding down.
Companies are realizing that just slapping "AI-powered" on everything isn't a magic solution.
It's time to get real about AI.
Focus on how it actually fits into your strategy, and for Pete's sake, train your dang people, y’all!
Throwing AI at untrained employees is a recipe for disaster, yet that's just about how every company is doing it.
7 – AWS CEO's Warning: Coders, Time to Level Up 💻🔝
Matt Garman, AWS's big head honcho, says AI could take over coding tasks within two years.
He's predicting that being a dev in 2025 will look totally different than in 2020.
But don't panic yet.
Garman's not saying devs are going extinct; he's saying they need to evolve.
It's time to focus on the stuff AI can't do (yet) – like innovation and understanding customer needs.
What it means:
This is your wake-up call, tech world.
AWS said that AI-assisted coding and development has saved them 4,500 developer-years' worth of work.
(Seriously.)
The days of just churning out code the old school way could be numbered.
For businesses, it's time to rethink how you build and manage tech teams. You need people who can work with AI, not compete against it.
Start investing in upskilling your devs now, or you might find yourself with a team that can't keep up with the machines.
What do you think?
(Or your fave LLM like Claude, Gemini, Copilot, etc)