OpenAI’s Deep Research Updates: What’s new and how to make it work for you
Amazon unveils AI-powered Alexa+, Microsoft releases its next-gen Phi AI models, OpenAI could launch GPT-4.5 soon, and more!
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Outsmart The Future
Today in Everyday AI
6 minute read
🎙 Daily Podcast Episode: OpenAI just released a HUGE update with the access of its Deep Research to Plus users. We break down what’s new and what it means for you. Give it a listen.
🕵️‍♂️ Fresh Finds: Perplexity’s new voice mode, Amazon launches an Alexa site and app, and ElevenLabs now allows audiobook publishing. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: Amazon unveils AI-powered Alexa+, Microsoft releases its next-gen Phi AI models, and OpenAI could launch GPT-4.5 any day now. For that and more, read on for Byte Sized News.
🚀 AI In 5: This Perplexity feature gives you insightful data and allows you to visualize just about anything. See it here
🧠 Learn & Leveraging AI: Here’s everything you need to know about OpenAI’s latest Deep Research update and how it’ll impact you. Keep reading for that!
↩️ Don’t miss out: Did you miss our last newsletter? We talked about Claude 3.7 Sonnet, Google and Salesforce expanding their partnership, DeepSeek’s R2 launch, and Google unveiling a free coding assistant for developers. Check it here!
OpenAI’s Deep Research Updates: What’s new and how to make it work for you 🔍
The biggest AI update of 2025 just happened and you probably had no clue. This isn't a new model. It's not even technically a new feature. It's all about access.
And now tens of millions of ChatGPT Plus users will have access to OpenAI's Deep Research. (Including some of the new bells and whistles they JUST released)
Join us as we tell you how to take advantage.
Join the conversation and ask Jordan questions on OpenAI here.
Also on the pod today:
• Accessibility of OpenAI’s Deep Research 💰
• Deep Research Capabilities and Features 🕵
• Use Cases for OpenAI's Deep Research 💼
It’ll be worth your 1 hour and 10 minutes:
Listen on our site:
Subscribe and listen on your favorite podcast platform
Upcoming Everyday AI Livestreams
Thursday, February 27th at 7:30 am CST ⬇️
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – Polymet is an AI product designer, Magic Inspector is an AI web test automation platform, and Nuvio provides AI-powered financial management.
Perplexity – Perplexity has launched a new voice mode.
Introducing Perplexity's new voice mode.
Ask any question. Hear real-time answers.
Update your iOS app to start using. Coming soon to Android and Mac app.
— Perplexity (@perplexity_ai)
4:36 PM • Feb 26, 2025
Amazon – In addition to announcing Alexa+, Amazon has launched Alexa.com and a new app for Alexa+.
Alexa+ also brings AI-powered ‘Explore’ and ‘Stories’ features for kids.
AI Audio – ElevenLabs is now letting authors create and publish audiobooks.
AI Research – A new project called Gibberlink Mode is designed to enhance interactions between AI voice agents.
AI Models – Claude 3.7 Sonnet has claimed the #1 spot in the WebDev Arena.
BREAKING: Claude 3.7 Sonnet claims the #1 spot in WebDev Arena with a +100 score jump 🚀 over Claude 3.5 Sonnet! 🔥 Huge congrats to @AnthropicAI on this incredible milestone!
Have you tried Claude 3.7 Sonnet in the WebDev Arena yet? Test it now (link below) x.com/i/web/status/1…
— lmarena.ai (formerly lmsys.org) (@lmarena_ai)
8:01 PM • Feb 26, 2025
1. Amazon Unveils AI-Enhanced Alexa+ 🗣
Amazon has introduced Alexa+, a revamped version of its voice assistant that utilizes a “model agnostic” system to optimize performance across various tasks. Powered by Amazon Bedrock, AWS’s generative AI platform, Alexa+ taps into multiple generative AI models, including Amazon’s own Nova models and models from Anthropic, to enhance user interactions. With the capability to browse the web and coordinate multiple services, Alexa+ promises to streamline everyday tasks like booking dinner reservations and securing rides, potentially transforming how users manage their time and responsibilities.
However, not all Echo devices will be eligible for the upgrade, leaving many first-generation models in the dust. Alexa+ will be included for all Prime members, but Amazon also introduced a standalone subscription at $19.99 per month for those without a Prime membership.
2. Microsoft Unveils Next-Gen Phi AI Models ⚡️
Microsoft has launched its latest Phi AI models, Phi-4-multimodal and Phi-4-mini, available now on Azure AI Foundry and Hugging Face. Phi-4-multimodal handles speech recognition, translation, summarization, audio understanding, and image analysis, while Phi-4-mini focuses on speed and efficiency.
This rollout not only empowers developers to integrate advanced AI into smartphones, PCs, and vehicles but also hints at a future where intelligent systems become even more accessible and impactful in everyday life.
3. OpenAI's GPT-4.5 Launching VERY Soon 👀
OpenAI's much-anticipated GPT-4.5, codenamed Orion, is set to launch soon. This comes on the heels of Reddit users uncovering references to the new model in the Android version of ChatGPT, sparking excitement across the community.
As companies and individuals alike seek to leverage AI advancements for growth, this update could offer enhanced capabilities that significantly impact productivity and innovation. With the official rollout imminent, all eyes are on how GPT-4.5 will reshape the landscape of artificial intelligence.
4. ElevenLabs Launches Scribe: Speech-to-Text 💬
ElevenLabs has unveiled Scribe, its first stand-alone speech-to-text model, after securing a hefty $180 million in funding. With support for 99 languages and competitive pricing of $0.40 per hour of transcribed audio, Scribe aims to challenge established players like Google and OpenAI.
The model boasts impressive accuracy, with over 25 languages achieving a word error rate below 5%, highlighting the startup's commitment to improving speech detection across various languages.
5. NVIDIA's Strong Outlook Amidst AI Competition 💪
NVIDIA's quarterly forecast exceeded expectations, suggesting that tech giants like Microsoft and Amazon continue to invest heavily in AI infrastructure despite concerns over cost-effective alternatives from newcomers like DeepSeek. The chipmaker's shares rose nearly 2% as it reported a staggering 78% increase in quarterly revenue, although its profit margins are expected to tighten slightly.
While the so-called "Magnificent Seven" stocks have faced recent declines, NVIDIA's leading position suggests that demand for high-end AI chips remains robust.
Create custom GPTs that save you time!
This Perplexity Cards feature is super useful!
BUT they’re kinda hard to trigger.
If you can get it to work, it gives you insightful data and allows you to visualize just about anything.
We show you how to use it!
Check out today's AI in 5.
🦾 How You Can Leverage:
OpenAI just pulled the sneakiest power move of 2025.
And if you pay close attention to this newsletter and today’s episode, you can cash in shorties.
While everyone’s drooling over shiny new models, they quietly gave $20-a-month ChatGPT Plus subscribers access to Deep Research, previously a $200/month Pro-only feature.
It’s the best AI feature we’ve used.
Bar none.
Deep Research isn’t even a new model. Not even a new feature, as it was released to the expensive ChatGPT Pro tier more than 3 weeks ago.
What is it, though?
Just pure, unadulterated access to what might be the most productive hours you'll buy this year.
We've used Deep Research daily since February 2nd.
The verdict? It's the 90s Bulls of AI research tools.
Unstoppa-bull. Jordan-esque. Tongue-waggingly good.
And now tens of millions of paid ChatGPT customers can access it for the price of lunch.
If you’ve still been looking for the ROI of GenAI, take your time w/ this newsletter/show.
Miss this update and you're essentially volunteering to work harder than everyone else in your industry.
Here's what you need to know.
1 – Wider Access = Work Changes Now 🔀
The democratization just happened overnight.
ChatGPT Plus users now get 10 Deep Research searches monthly. Pro accounts get 120 (up from 100). Even free users will soon get 2 monthly searches.
(Just not yet. Lolz.)
This isn't just another feature rollout. It's an accessibility revolution that puts enterprise-grade research capabilities in the hands of everyday professionals.
What took specialized teams days now takes minutes.
What required advanced Boolean search skills and like 3.5 dozen tabs now needs only clear instructions.
What demanded expensive subscriptions now costs less than streaming services.
The playing field just leveled, but only for those who recognize it.
Try This:
Make your 10 monthly searches count like precious currency.
How?
Go share today’s LinkedIn episode and we’ll:
- Send you our guide to 10 use cases for Deep Research
And by doing that:
- You’ll also be entered in a giveaway for a free 90-minute consult; we’ll announce the winner in next week’s newsletter. (So you gotta keep your eyes open.)
2 – o3 + o3 Mini: On Another Level 🚀
Deep Research isn't just "better searching."
It's synthetic intelligence that makes Google, Perplexity, and Grok seem kinda primitive by comparison.
The secret weapon?
A dual-model architecture combining a fine-tuned version of OpenAI's unreleased o3 (not the mini variant available in ChatGPT—the full version that hasn't even been benchmarked publicly) with o3 Mini as a supporting system for summarizing complex reasoning chains.
Yeah, according to OpenAI’s just-released system card for Deep Research, this behemoth is actually using two versions of o3.
This dual approach creates something unprecedented: an AI that doesn't just retrieve information but builds reasoning trees that mimic expert human research—making decisions at each step about where to branch next.
Small example: when it hits a Forbes paywall, it automatically pivots to find that same content on the MIT Sloan website.
When it discovers a trend, it validates from multiple sources before reporting. It even analyzes competing perspectives to deliver nuanced conclusions.
Sheesh.
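For the curious, here’s a purely conceptual sketch of that dual-model pattern in Python. To be clear, this is not OpenAI’s code: Deep Research runs entirely on OpenAI’s side, the `search_web()` helper below is a made-up placeholder, and `o3-mini` is just the closest publicly available stand-in for the models involved.

```python
# Conceptual sketch only: NOT OpenAI's implementation of Deep Research.
# It illustrates the general pattern described above: a reasoning model
# that decides where to branch next, plus a second pass that summarizes
# what has been found so far.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment


def search_web(query: str) -> str:
    """Hypothetical placeholder for a web-search step."""
    return f"(search results for: {query})"


def research(topic: str, steps: int = 3) -> str:
    notes = []
    for _ in range(steps):
        # "Planner": propose the next most useful query given notes so far.
        plan = client.chat.completions.create(
            model="o3-mini",  # stand-in; Deep Research uses a fine-tuned o3
            messages=[{
                "role": "user",
                "content": f"Researching: {topic}\nNotes so far: {notes}\n"
                           "Propose the single next web search query.",
            }],
        )
        query = plan.choices[0].message.content.strip()
        notes.append(search_web(query))

    # "Summarizer": condense the accumulated findings into a report.
    summary = client.chat.completions.create(
        model="o3-mini",
        messages=[{
            "role": "user",
            "content": f"Write a short research report on {topic} "
                       f"from these notes:\n{notes}",
        }],
    )
    return summary.choices[0].message.content


if __name__ == "__main__":
    print(research("AI adoption challenges in manufacturing"))
```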
Try This:
Leverage the o3 reasoning capabilities by designing multi-stage research queries.
Start with "First, identify the three most significant challenges facing AI implementation in manufacturing. Then for each challenge, find documented solutions with success metrics. Finally, synthesize a comprehensive adoption framework based on these insights."
This approach forces Deep Research to deploy its full reasoning architecture rather than simple retrieval, resulting in insights no human researcher could produce in comparable time.
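If you want to experiment with that multi-stage structure outside the ChatGPT interface, here’s a minimal sketch using the OpenAI Python SDK. One assumption to flag: Deep Research itself isn’t exposed through the API, so this sends the same prompt to o3-mini (which is available there) purely to show how the staged instructions hang together.

```python
# Minimal sketch: the same multi-stage research prompt sent to o3-mini
# via the OpenAI Python SDK. Deep Research itself is a ChatGPT feature,
# so this only demonstrates the prompt structure, not the feature.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

multi_stage_prompt = (
    "First, identify the three most significant challenges facing AI "
    "implementation in manufacturing. Then, for each challenge, find "
    "documented solutions with success metrics. Finally, synthesize a "
    "comprehensive adoption framework based on these insights."
)

response = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": multi_stage_prompt}],
)

print(response.choices[0].message.content)
```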
3 – Combine Your Work and Research 💥
The most overlooked update isn't about web browsing—it's about Deep Research's dramatically improved ability to understand and reference your uploaded files.
That’s one of the improvements OpenAI just shipped for Deep Research.
We tested this by feeding it 15,000 rows of Google Search Console data. The system didn't just analyze this mountain of information—it connected it directly to external research, creating insights impossible to get from either source alone.
Within minutes, it delivered a comprehensive content strategy with 10 precisely targeted opportunities, each with sourced data points and strategic rationales—all tied directly back to our original SEO data.
And all well-resourced and sourced like WTF kinda wizardry is this?
Try This:
Stop using Deep Research as a better Google.
Start stacking your normal work, tasks, files, and research questions together for maximum time savings and insight generation.
Stack your workflows by combining internal data with external research in single queries.
(That’s knowledge work 101. You might as well get used to it.)
Upload your quarterly sales report alongside competitive intelligence documents, then ask: "Based on our performance data and market trends, identify our three biggest growth opportunities, potential obstacles, and specific action plans with implementation timelines."
The magic happens when Deep Research connects your internal context with external realities, creating insights impossible to get from isolated research processes.
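Here’s a rough sketch of what that stacking looks like if you script it: summarize your internal data with pandas, then fold that summary into the same prompt as the external research question. The file name, column names, and the o3-mini stand-in are all illustrative assumptions; inside ChatGPT you’d simply attach the files to a Deep Research request.

```python
# Sketch of combining internal data with an external-research prompt.
# File and column names are illustrative assumptions; in ChatGPT you
# would just attach the files to a Deep Research request instead.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Load internal data and boil it down to something prompt-sized.
sales = pd.read_csv("quarterly_sales.csv")  # hypothetical export
top_products = (
    sales.groupby("product")["revenue"].sum().nlargest(5).to_string()
)

prompt = (
    "Here is a summary of our internal quarterly sales data:\n"
    f"{top_products}\n\n"
    "Based on our performance data and current market trends, identify "
    "our three biggest growth opportunities, potential obstacles, and "
    "specific action plans with implementation timelines."
)

response = client.chat.completions.create(
    model="o3-mini",  # stand-in for Deep Research, which is ChatGPT-only
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```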