- Everyday AI
- 5 Facts you NEED to know about ChatGPT 👀
🧠 5 ChatGPT facts you should know, FTC investigates AI Big Tech, OpenAI updates Turbo models, and more!
🦾 How You Can Leverage:
Let’s keep it real — there’s a lot of bad info out there about ChatGPT.
There are 19-year-old “ChatGPT experts” spewing garbage.
Media companies regurgitate what’s in press releases.
And OpenAI does what any billion-dollar company does best — slick marketing.
So we decided it was time for a reality check today and to separate fact from fiction.
Whether you’re a ChatGPT expert, a casual user, or just wanna know more about how large language models work, today’s show is for you.
We did all the homework, so you can just copy our answers.
So, let’s cut through the confusion and get to the crux of the matter. And we’ll talk about how knowing these 5 facts about ChatGPT can help you grow your company and your career.
Let’s get it. 👇
1 – Tokens, not words 🪙️
One of the most gag-worthy things we see online is people saying, “Here’s how you get ChatGPT to write like you! Just send it 10 examples of your writing, and PRESTO!”
Nah. Doesn’t work that way shorties.
That’s because ChatGPT (like other large language models) doesn’t actually understand words.
Instead, it converts chunks of words and sentences into tokens, and uses context to try to understand what your words actually mean.
What it means: The better you understand tokenization, the better your results from ChatGPT will be. And as enterprise companies ramp up their use of ChatGPT, it’s more important than ever to understand the basics of how ChatGPT actually works.
We suggest playing around with OpenAI’s tokenizer, then checking out this episode.
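To make the token-vs-word distinction concrete, here’s a toy sketch of subword tokenization. This is NOT OpenAI’s real tokenizer (ChatGPT uses byte-pair encoding, and you should try the official tokenizer linked above) — the vocabulary below is made up purely to show that models see token pieces, not whole words:

```python
# Toy greedy longest-match subword tokenizer. The vocabulary is
# hypothetical, for illustration only — real BPE vocabularies are
# learned from data and contain tens of thousands of entries.
TOY_VOCAB = {
    "token", "iz", "ation", "chat", "gpt",
    "un", "believ", "able",
}

def toy_tokenize(word: str) -> list[str]:
    """Greedily match the longest vocabulary entry at each position."""
    word = word.lower()
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest substring first
            if word[i:j] in TOY_VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # unknown character: keep it as its own piece
            i += 1
    return pieces

print(toy_tokenize("tokenization"))  # ['token', 'iz', 'ation']
print(toy_tokenize("chatgpt"))       # ['chat', 'gpt']
```

Notice that “tokenization” becomes three pieces — that’s why feeding ChatGPT “10 examples of your writing” doesn’t magically teach it your word choices: it never sees your words as words in the first place.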
2 – Keep memory in mind 🧠
You know that feeling when ChatGPT starts a conversation doing EXACTLY what you wanted?
Like, for a few back-and-forth exchanges, it performs like Superman before quickly morphing into Clark Kent?
What the flip Clark!? Bring Superman back.
That’s because ChatGPT has a much smaller memory than the internet might want you to believe.
What it means: When OpenAI announced its updated model, GPT-4 Turbo, it said the model had a 128K-token context window, or roughly a 95,000-word memory.
But, as you saw live on today’s show, that’s not actually true for ChatGPT. The 128K context window applies to the API, not to ChatGPT, which still has a 32K-token memory.
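What does a limited context window mean in practice? Once a conversation outgrows the window, the oldest turns get dropped. Here’s a rough sketch of that behavior, using the common rule of thumb of ~4 characters per token as a stand-in for a real tokenizer (the function names and the 4-chars-per-token estimate are our own assumptions, not anything OpenAI publishes):

```python
# Sketch: keeping a chat history inside a fixed token budget.
# approx_tokens() is a crude estimate — real systems count tokens
# with the model's actual tokenizer, not a character heuristic.

def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget: int = 32_000) -> list[str]:
    """Drop the oldest messages until the conversation fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):  # newest messages are kept first
        cost = approx_tokens(msg)
        if total + cost > budget:
            break  # everything older than this gets forgotten
        kept.append(msg)
        total += cost
    return list(reversed(kept))

# With a tiny budget, only the most recent messages survive:
history = ["intro " * 50, "details " * 50, "latest question"]
print(trim_history(history, budget=100))
```

That silent trimming is exactly the Superman-to-Clark-Kent moment: your careful early instructions fall out of the window, and the model quietly stops following them.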
3 – Browse with Bing is sneaky
Berry, berry sneaky.
Not on purpose, though. But as we start to incorporate LLMs into our daily business lives, you GOTTA be able to look under the hood and understand the engine.
(Also, can we all PLEASE stop paying attention to these 21-year-old ChatGPT “experts”? They’re just trying to smoke-and-mirror you into buying some crap product they’re peddling, and following their “advice” could legit tank your biz. Rant over.)
So, here’s why you gotta pay close attention to Browse with Bing, which connects ChatGPT Plus to Microsoft’s Bing search engine. In short — it can actually CAUSE hallucinations, which we showed today.
What it means: If you’re using ChatGPT for any important project, you’ve gotta keep a watchful eye on what information Browse with Bing brings in.
Is it useful? Sure!
Is it potentially problematic? Absolutely.
We recommend using ChatGPT’s Plugins mode first and foremost — though keep in mind it may be phased out soon. Also, check out how and why some AI chats are lying to you about the internet.
4 – Hallucinations are actually rare 🦄
When we hear AI leaders talk about how LLMs are untrustworthy because of hallucinations, that means one thing.
They legit don’t understand proper ways to use ChatGPT.
If you make your inputs more precise and narrow, you greatly reduce the likelihood of hallucinations. (Like, we use ChatGPT 15+ hours most weeks and hardly ever run into hallucinations.)
What it means: If you’re running into hallucinations, you should do two things.
First, check out this episode:
THEN, take our PPP course, y’all!
Just reply “PPP” to this email, and we’ll send you the private registration page for our free live Prime, Prompt, Polish ChatGPT training.
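To show what “precise and narrow” looks like in practice, here’s a minimal sketch of turning a vague ask into a structured prompt. The template fields (role, task, constraints) are our own convention for illustration — not an official OpenAI format or the PPP framework itself:

```python
# Sketch: assembling a narrow, structured prompt from explicit parts.
# The more you pin down, the less room the model has to hallucinate.

def build_prompt(role: str, task: str, constraints: list[str]) -> str:
    """Combine an explicit role, task, and constraints into one prompt."""
    lines = [f"You are {role}.", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

vague = "Write something about our product."  # invites the model to guess

precise = build_prompt(
    role="a B2B email copywriter",
    task="Write a 3-sentence cold email introducing our invoicing app.",
    constraints=[
        "Audience: small-business owners",
        "Tone: friendly, no jargon",
        "End with one clear call to action",
    ],
)
print(precise)
```

The vague version forces the model to fill in the audience, tone, and goal on its own — and that guesswork is where hallucinations creep in.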
5 – OpenAI is making a mistake 🤦
Not even #HotTakeTuesday, and we’re bringing a little spice. So this is technically an opinion with some fact behind it, but ‘4.5 Facts You Might Not Know About ChatGPT’ sounded like a confusing episode title to us.
In short, OpenAI is making a huge mistake with its rumored/kinda confirmed decision to pull the plug on Plugins mode and instead prioritize and replace that mode with GPTs.
What it means:
OpenAI is an amazing company, but it sometimes seems to make decisions with developers in mind (1% of its users) rather than the general public (99% of its users).
It’s been kinda confirmed that OpenAI is eventually phasing out plugins in favor of its new GPTs.
While GPTs are great for beginners, getting advanced functionality and true business automation inside GPTs requires some advanced coding and technical knowledge.
Even with that advanced coding and technical knowledge, the best GPT will still lack the usefulness of a chat with 3 plugins enabled in Plugins mode.
We teach ‘Plugin Packs’ in our free PPP and free PPP Pro courses, which show you how to pick the right combination of 3 plugins to automate some of the most time-consuming manual tasks in your business.
If and when Plugins mode is retired, that easy, zero-code plugin automation goes out the window and can’t easily be replaced, even with the most advanced GPT (the way they’re built today).
This potential oversight could ultimately cause ChatGPT to lose paying customers to Microsoft Copilot Pro — and maybe even Claude or Bard, if Google can get its act together.
Numbers to watch
Intel’s stock dropped more than 12% after it released its first-quarter revenue. Analysts suggested Intel wasn’t focused enough on its AI offerings and innovation.