Microsoft is creating AI tool for spies 🕵️
7 common mistakes people make with LLMs, possible proof of GPT-4.5, Katy Perry's mom fooled by AI and more
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Outsmart The Future
Sup y’all 👋
We’re trying something a bit different today.
Wanna roll the dice? 🎲
Today, we talked about the 7 most common mistakes people make when working with Large Language Models.
So, we decided to give away one free 90-minute LLM session. Whatever question you have — we’ll do our best to answer. Live, 1-on-1.
Entering the giveaway is easy. Just click ‘Yes’ to enter and we’ll announce the winner in tomorrow’s newsletter.
Enter to win a free, 1-on-1, 90-minute LLM training session? Click "Yes" to enter to win.
This will be fun — we’ll solve your LLM problems together!
✌️
Jordan
Today in Everyday AI
8 minute read
🎙 Daily Podcast Episode: We’ve trained thousands of business leaders on using Large Language Models, and we see the same mistakes. Over and over and over again. So, we’re tackling the 7 most common LLM mistakes and how to avoid them. Give it a read or listen.
🕵️‍♂️ Fresh Finds: Is GPT-4.5 actually a thing? AI image of Katy Perry at Met Gala fools her own mom, OpenAI announces new content controls and more. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: ChatGPT makes moves for its new search engine and gpt2-chatbot makes a return, more new Apple AI announcements, Microsoft creating AI tool for spies and more. Read on for Byte Sized News.
🚀 AI In 5: Can AI help you go viral? See if this AI video tool can help catapult you to viral vertical video fame. See it here
🧠 Learn & Leverage AI: Pitfalls. Pitfalls everywhere. We’ve taken hundreds of hours of LLM training and have written an easy guide to help you avoid common LLM pitfalls. Keep reading for that!
↩️ Don't miss out: Did you miss our last newsletter? We talked about Microsoft's surprise model announcement called MAI-1, new AI video updates from Sora, and one simple trick in Copilot that'll save you tons of time. Check it here!
Stop making these 7 Large Language Model mistakes 👎
You wouldn't ride a unicycle on a highway. 🚳
Sure, that's technically a way you can travel.
↳ But that doesn't mean pedaling a unicycle is an acceptable way to travel from point A to point B.
↳ That's how people are using Large Language Models.
↳ There are millions of people using LLMs like they're riding a unicycle on an interstate.
Don't worry.
We'll set the record straight and help you trade in that unicycle for a friggin Bentley.
(Or like a 2009 Toyota Prius hybrid. Whatever's your speed.)
On today's show, we showed you how to stop making these 7 Large Language Model mistakes.
Also on the pod today:
• Avoiding common LLM mistakes ⛔️
• Staying up to date with GenAI 📰
• Preparing for the future of work 🔮
It’ll be worth your 43 minutes:
Listen on our site:
Subscribe and listen on your favorite podcast platform
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – AFFiNE AI uses AI to help you better draw, write and present, Storyville gives you (or maybe your kids) personalized bedtime stories with the help of AI and Flownote uses AI to transcribe your meetings into concise summaries.
Trending in AI – Some AI photos of Katy Perry at the Met Gala were so good, even her mom was fooled.
Ethics in AI — OpenAI announced new ethical safeguards with its Media Manager, which OpenAI says can help creators and content owners better control how their content is used in AI.
Our approach to content and data in the age of AI: openai.com/index/approach…
— OpenAI (@OpenAI)
3:04 PM • May 7, 2024
AI in Medicine – Study finds that AI is as good as a physician when it comes to prioritizing patient care.
AI in Social Media — It seems that just about every social media company has integrated some sort of AI chat or assistant. Here’s what it means.
New AI models — The im-a-good-gpt2-chatbot model, presumably from OpenAI, seems to be hinting that it will actually be called GPT-4.5.
We found it here:
It's never very accurate to ask a model what it is. Yet, it looks like im-a-good-gpt2-chatbot has been answering this way pretty consistently.
More on what this could mean later
#gpt2 #gpt2chatbot #imagoodgpt2chatbot
— Jordan Talks Everyday AI (@EverydayAI_)
11:55 AM • May 7, 2024
AI in Politics — Republicans and Democrats are jostling with how to best use AI in elections. See the latest here.
ChatGPT News — ChatGPT can now be accessed at a new domain. This might be the first sign of its new search engine dropping.
Breaking:
You can now access ChatGPT at chatgpt.com
This could be a big sign.
Here's why 👇
— Jordan Talks Everyday AI (@EverydayAI_)
7:15 PM • May 6, 2024
AI in society – Should AI have rights, like humans? It’s more complex than you may think.
AI advancements — The U.S. is starting to crack down on how AI can make discoveries with synthetic DNA.
1. Reports: Apple working on AI chips for data centers 📈
Apple has been developing chips for artificial intelligence in data centers, codenamed Project ACDC (rock on!), with the help of Taiwan Semiconductor Manufacturing Co. This move is seen as an effort to catch up with competitors such as Google and Microsoft in the AI race.
Apple has been investing in the development of AI capabilities, and CEO Tim Cook has hinted at a major AI-related announcement later this year. The company's server chips will likely focus on AI inference rather than training, an area currently dominated by NVIDIA.
Apple also unveiled its M4 chip; more on that in #4 below. 👇
2. ChatGPT search engine and new model (might be) coming soon🚨
In some original research and reporting, we spotted that OpenAI was likely prepping for a major announcement surrounding a potential search engine play by changing ChatGPT’s domain to ChatGPT.com
Previous reports pointed to a May 9 announcement from OpenAI about a search engine. It looks like OpenAI has started the technical groundwork to make way for such an announcement, if it comes to fruition.
🚨 Breaking: #ChatGPT has a new domain.
Here's what it means.
🧵
— Jordan Talks Everyday AI (@EverydayAI_)
8:08 PM • May 6, 2024
To add more fuel to the fires of speculation, OpenAI re-released a preview model of gpt2-chatbot, which we covered previously.
Will OpenAI be launching a search-focused version of ChatGPT in an attempt to compete more closely with Google and Perplexity?
Stay tuned this week to find out.
3. AI startup raises $1 billion for self-driving car software 🚘
Wayve, a UK-based self-driving car technology startup, has received over $1 billion of investment led by SoftBank to develop embodied AI for autonomous vehicles. This marks the biggest investment in a European AI startup and signals the strength of the UK's AI ecosystem.
SoftBank, NVIDIA, and Microsoft are among the major investors in Wayve's latest funding round. Some experts have noted that Wayve's technology is still in its early stages and may face challenges in real-world situations.
4. Apple unveils its next-gen M4 chip to power future GenAI 🍎
Apple has unveiled its newest Apple Silicon chip, M4, which features a 3 nanometer chip architecture and is built for AI.
This groundbreaking chip, designed specifically for artificial intelligence tasks, features a new display engine for enhanced color and brightness on the iPad Pro.
Apple announced this next-gen AI focused chip and other product announcements, such as a new iPad Pro and iPad Air, at its Apple Event.
5. Microsoft creates top secret GenAI for spies 🕵️
According to a Bloomberg report, Microsoft has successfully deployed a generative AI model that is completely isolated from the internet. This allows US intelligence agencies to utilize the powerful technology to analyze sensitive information with confidence.
This is the first major language model to operate entirely separate from the internet, providing a secure system for analyzing top-secret information. Most AI models rely on cloud services for learning and inference, but Microsoft wanted to offer a more secure option for the intelligence community.
This breakthrough could have significant implications for how AI is used in sensitive and confidential settings.
An AI-powered shortcut to create viral vertical videos?
Ever wondered how to create those viral vertical videos that dominate social media feeds?
Well today’s AI in 5 is for you!
We're breaking down Spikes Studio, an AI-powered video clipping tool that takes your long-form video and creates recommended viral clips in vertical format.
Check out today's AI in 5.
We’ve literally taught thousands of business leaders how to prompt inside large language models.
And for the past year, we’ve seen the same mistakes.
Debunked the same myths.
And prioritized the same truths.
So, we thought it was time for a dedicated episode going over the 7 most common LLM mistakes that people make.
So, let’s get to it. 👇
Mistake 7 – Not understanding an LLM's knowledge cutoff 🧠
Forget what the companies are trying to tell you in their marketing. Even if a model is 'connected to the internet,' it's not always up to date.
In short:
1. Models are trained on data.
2. That data is scraped from the internet. (Whether that's legal or not will be decided in the coming years.)
3. Humans train the models based on that data.
But the process between steps 2 and 3?
There's an expiration date, of sorts. The model training process can take many months, and the whole time, the model's training data (and knowledge cutoff) only gets more and more stale.
Do this instead:
The LMSYS Chatbot Arena has a pretty up-to-date list of what each popular model’s knowledge cutoff is.
Side note — it incorrectly lists the GPT-3.5 knowledge cutoff as September 2021 whereas it’s actually January 2022. (The rest all look good!)
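Wanna see a knowledge cutoff in action for yourself? Here's a minimal Python sketch (assuming you have the openai package installed and an OPENAI_API_KEY set; the model name and question wording are just examples, not a recommendation) that asks a model about its training data and about a recent, dateable event:

```python
# A minimal sketch for probing a model's knowledge cutoff.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set;
# the model name and question wording here are just illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from your environment

questions = [
    "What is the most recent date covered by your training data?",
    "Who won the most recent Super Bowl, and when was it played?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # swap in whichever model you're testing
        messages=[{"role": "user", "content": question}],
        temperature=0,  # keep the probe as repeatable as possible
    )
    print(f"Q: {question}\nA: {response.choices[0].message.content}\n")
```

Keep in mind a model's self-report about its own cutoff isn't gospel (just like asking gpt2-chatbot what it is), so the date-anchored question is the more telling one.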
Need to know more about what a knowledge cutoff is, how it impacts LLMs and how they work?
Mistake 6 – Not investigating internet connectivity 🛜
If Big Tech pinky promises that their model is connected to the internet, that means knowledge cutoffs don’t matter, right?
And that we can also feel confident in any model’s output?
Wrrroooooong.
The approach (and consistency) of how different models talk to the internet varies.
And sometimes, it’s downright awful. (We’re looking at you, Google!)
Do this instead:
Nothing beats first-hand experience of trying different time-sensitive queries, and observing how different “internet-connected” LLMs act.
Or, you can sit on the couch and watch as we did the heavy lifting for you.
This episode uses real-world tests to compare how different LLMs interact with the internet. This single episode is gonna save you time, improve your accuracy, and cut down on those dang hallucinations.
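If you do want to run the experiment yourself, here's a tiny, hand-rolled Python sketch (the prompts, model names and CSV layout are just suggestions, not a standard tool): it builds a few time-sensitive prompts and writes out a blank scorecard you can fill in as you paste each one into your "internet-connected" chatbots of choice.

```python
# A minimal sketch: a DIY "is it really internet-connected?" test harness.
# You paste each prompt into the chatbots you care about, then record what you saw.
# Prompt wording, model names and the scorecard columns are just suggestions.
import csv
from datetime import date

today = date.today().isoformat()

test_prompts = [
    f"What is today's date? (I'm asking on {today}.)",
    "What was the closing price of the S&P 500 yesterday?",
    "Summarize the top technology headline from this morning.",
]

models_to_test = ["ChatGPT", "Gemini", "Claude", "Copilot"]  # whichever you use

# Write a blank scorecard you can fill in as you run each prompt by hand.
with open("connectivity_scorecard.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["model", "prompt", "answer_was_current (y/n)", "notes"])
    for model in models_to_test:
        for prompt in test_prompts:
            writer.writerow([model, prompt, "", ""])

print(f"Scorecard written with {len(models_to_test) * len(test_prompts)} rows to fill in.")
```

The point isn't the script. It's forcing yourself to check whether each answer was actually current, confidently stale, or flat-out hallucinated.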
Mistake 5 – Not managing your memory ⁉️
LLMs aren’t infinitely smart.
Just like us (and goldfish), they can only remember so many things.
And while models like Claude 3 and Gemini have stolen the show with big memories and long context windows, they're not always accurate.
And the GPT models still lag a bit behind Anthropic's and Google's big-brained models.
Do this:
It’s like how the Star Wars text scrolls up and out of the screen — LLMs can only retain so much information at a time.
Wanna dork out and go all-in on understanding tokenization and memory? If you really wanna up your LLM game, this is essential reading/watching.
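If you want a quick, hands-on feel for tokens before you dive into that, here's a minimal sketch using OpenAI's tiktoken tokenizer (the 128k context window below is just an illustrative number, not any one model's spec, and other vendors tokenize differently, so treat the counts as a GPT-flavored estimate):

```python
# A minimal sketch for counting tokens, so you know how much "memory" a prompt eats.
# Assumes the `tiktoken` package (OpenAI's tokenizer) is installed; other vendors
# tokenize differently, so treat these numbers as a GPT-flavored estimate.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")

prompt = (
    "Summarize the attached meeting notes, keep every action item, "
    "and draft a follow-up email to the team."
)

tokens = encoding.encode(prompt)
print(f"{len(tokens)} tokens for a {len(prompt)}-character prompt")

# If your pasted documents push the running total past the model's context window,
# the oldest text scrolls off, Star Wars-style.
context_window = 128_000  # an illustrative window size, not any one model's spec
print(f"That's {len(tokens) / context_window:.4%} of a {context_window:,}-token window")
```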
Mistake 4 – Paying attention to screenshots 🖥️
Guess what a screenshot from a large language model means?
Absolutely nothing.
You can tell a model to parrot anything you want, then share that screenshot online.
I’m rich!
Do this:
Screenshots are a dime a dozen.
"AI experts" share screenshots showing off their super-duper AI skills.
AI skeptics share screenshots showing how dumb LLMs are.
And what do all those screenshots really prove?
That those people don't understand how models work.
If you really wanna show your work, you can always just share the chat URL, like this.
(Yeah, Jordan really didn’t win the lottery after all.)
Mistake 3 – Thinking that LLMs are deterministic 🚫
AI chats aren’t like search engines.
You can put 1 prompt in 100 times and get 100 very different answers.
Or 50 different answers.
Or 2 slightly different answers.
Large Language Models are generative by nature: they predict the next token with a dose of randomness built in, rather than looking up one fixed answer.
A little random. A bit unpredictable.
Do this:
Go into the OpenAI playground, and play around with Top-P, temperature and more. We’d love to walk you through this, step-by-step, but going through the process on your own really helps you understand how generative models actually work.
(Reply to this email if you’d be interested in an episode on the OpenAI Playground.)
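And if you'd rather see the randomness than take our word for it, here's a minimal API sketch that mirrors the Playground experiment (assuming the openai package and an OPENAI_API_KEY; the prompt, model and temperature values are just examples): same prompt, several runs, two temperature settings.

```python
# A minimal sketch: the same prompt, several runs, two temperature settings.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the prompt, model name and settings are illustrative.
from openai import OpenAI

client = OpenAI()
prompt = "Give me a one-sentence tagline for a coffee shop run by robots."

for temperature in (0.0, 1.2):
    print(f"\n--- temperature={temperature} ---")
    for run in range(3):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",   # any chat model works for this experiment
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,  # low = more repeatable, high = more random
            top_p=1.0,                # leave top_p alone while you vary temperature
        )
        print(f"run {run + 1}: {response.choices[0].message.content.strip()}")
```

At temperature 0 the answers will cluster (though even that isn't a hard guarantee of identical outputs); crank it up and you'll watch the same prompt wander.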
Mistake 2 – Thinking copy and paste prompts work 💾
If you see someone shilling copy-and-paste prompts promising to solve all of your problems, run for the hills.
Don’t pay attention to Billy Boys like this.
Here's the truth: copy-and-paste prompts don't really do much.
Sure, they can get you from an F to a C- pretty quick, but that's about all they're good for. Going from hot garbage to lukewarm trash.
Do this:
If you’re feeling reaaaaalllllly spicy, go read this 43-page research paper on chain-of-thought prompting. (We have.)
Or, you can just look at this graph and agree with math, science and logic: copy-and-paste prompts give you poor outputs and proper prompt engineering wins out every single time.
(If you’re not a graphs person, this says that ‘few shot’ prompting always outperforms zero-shot prompting. In other words, having a conversation with an LLM and giving it examples and working with it like an expert will always give you better results than copy-paste prompts.)
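To make the zero-shot vs. few-shot difference concrete, here's a minimal sketch (again assuming the openai package and an API key; the ticket-classification task and example labels are made up for illustration):

```python
# A minimal sketch: zero-shot vs. few-shot prompting for the same task.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the ticket-classification task and labels are made up for illustration.
from openai import OpenAI

client = OpenAI()

task = ("Classify this support ticket as Billing, Bug, or Feature Request: "
        "'The export button crashes the app.'")

# Zero-shot: just the bare instruction (the "copy-and-paste prompt" approach).
zero_shot = [{"role": "user", "content": task}]

# Few-shot: show the model a couple of worked examples first, like briefing an expert.
few_shot = [
    {"role": "user", "content": "Classify: 'I was charged twice this month.'"},
    {"role": "assistant", "content": "Billing"},
    {"role": "user", "content": "Classify: 'Please add dark mode.'"},
    {"role": "assistant", "content": "Feature Request"},
    {"role": "user", "content": task},
]

for label, messages in [("zero-shot", zero_shot), ("few-shot", few_shot)]:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0,
    )
    print(f"{label}: {response.choices[0].message.content.strip()}")
```

The few-shot version is just the code-shaped way of working with the model like an expert: you show it what good answers look like before you ask for one.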
Mistake 1 – Not understanding LLMs are the future of work 🔮
This ain’t a hot take.
Google — all in on LLMs.
Amazon — all in on LLMs.
Microsoft — all in on LLMs.
Meta — all in on LLMs.
Apple — (reportedly) all in on LLMs.
That’s 5 of the 6 largest companies in the U.S. (and the other is NVIDIA — the one literally powering the GenAI and LLM revolution.)
If you think AI is a fad or something that’s gonna come and go, we’ll try to be nice when we say this: you’re very wrong.
Do this:
Here’s a fun little trick we started talking about last year. Instead of using the term ‘Generative AI’ or ‘Large Language Model,’ start using the term Internet.
Would you use the internet to help you get a job? (Yes)
Would you use the Internet to do your work? (Yes)
Would you use the Internet to grow your business? (Yes)
That’s just how the world works now.
The future of our personal and professional lives is based around LLM and Generative AI technology.
So the next time you're thinking, "Should we use an LLM for this?" just swap in the word internet.
Or just know the answer is almost always ‘Yes.’
⌚
Numbers to watch
$1 billion
Amount raised by Wayve, an AI startup that could compete with Tesla in using AI to make cars more autonomous.
Wait, did you not vote in that poll yet?
Make sure to go do that!