GPT-4.5 hands on – What it can do and how you can use it
OpenAI’s $50M education initiative, Amazon's AI reasoning model, Google unveils an open-source wildlife AI model and more!
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Outsmart The Future
Today in Everyday AI
7 minute read
🎙 Daily Podcast Episode: OpenAI’s GPT-4.5 is now out! So what can it do and what are its best use cases? We break it down for you. Give it a listen.
🕵️‍♂️ Fresh Finds: OpenAI chairman speaks about AI agents, new Gemini Live capabilities and Samsung and NVIDIA collab on 6G tech. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: OpenAI’s $50M education initiative, Amazon’s AI reasoning model and Google unveils an open-source wildlife AI model. For that and more, read on for Byte Sized News.
🚀 AI In 5: Do Google’s recent updates to Gemini Workspace make it actually usable? We dive in to find out. See it here
🧠 Learn & Leveraging AI: Wondering what’s new with GPT-4.5? Here’s everything you need to know. Keep reading for that!
↩️ Don’t miss out: Did you miss our last newsletter? We talked about Microsoft going AI Dragon on healthcare, GPT-4.5 topping the leaderboards and NVIDIA's stock tumbling. Check it here!
OpenAI’s new GPT-4.5: What’s new and who can benefit the most ✨
OpenAI's newest model, GPT-4.5, is out and very impressive.
But do you know its best use cases?
Join the conversation and ask Jordan questions on OpenAI here.
Also on the pod today:
• Comparison of Models 🤔
• Tools Available in GPT-4.5 🛠️
• Demonstration and Analysis of GPT-4.5 🕵
It’ll be worth your 59 minutes:
Listen on our site:
Subscribe and listen on your favorite podcast platform
Here are our favorite AI finds from across the web:
New AI Tool Spotlight – TestSprite is an AI agent automating software testing processes, 2Read is an AI Kindle reading buddy and Agents Base deploys swarms of cloud marketing agents that automate A/B tests.
OpenAI – OpenAI chairman Bret Taylor spoke about AI agents at the Mobile World Congress.
Google – New Gemini Live capabilities are coming later this month.
iPhone users can now talk to Gemini from their lock screens.
Big Tech – Keysight Technologies is collaborating with Samsung and NVIDIA to advance AI channel estimations for 5G and 6G tech.
AI Chips – TSMC plans to spend $100 billion on US chip facilities.
AI Tech – T-Mobile’s parent company is creating an AI phone with Perplexity Assistant.
Future of Work – Salesforce’s head of talent growth spoke on how the company is training its employees on AI.
Business of AI – GM has hired the former head of AI at Cisco as its first chief AI officer.
AI Governance – The creator of SB 1047 has introduced a new AI bill in California.
AI Research – Researchers have found that less educated areas are adopting AI writing tools more quickly.
Read This – Banks have loaned $2 billion to build a 100-acre AI data center in Utah.
1. OpenAI Launches $50M Initiative to Transform Research and Education 🧑‍🏫
OpenAI has unveiled NextGenAI, a consortium comprising 15 leading institutions and a whopping $50 million in funding. This initiative aims to harness the power of AI to tackle pressing challenges in fields like healthcare and education, with partners including Harvard, MIT, and Texas A&M stepping up to the plate.
As universities leverage cutting-edge tools and resources, students and researchers alike will benefit from enhanced learning models and innovative applications, potentially redefining career pathways in the process.
2. Amazon Unveils Nova AI Model with Advanced Reasoning 🧠
Amazon is set to launch its advanced reasoning AI model, dubbed "Nova," in June 2025, positioning itself against giants like Meta AI and Google's Gemini. This new model aims to incorporate a "hybrid reasoning" approach for complex problem-solving, further intensifying the competition in the generative AI space.
With a focus on affordability and competitive performance on key benchmarks, the new model is part of Amazon’s broader strategy to enhance its AI offerings, reflecting the growing trend toward more sophisticated generative AI solutions.
3. Google Unleashes SpeciesNet for Wildlife Monitoring 🏞
Google has open-sourced SpeciesNet, an AI model capable of identifying over 2,000 animal species from camera trap images. This model, a key component of the Wildlife Insights platform, aims to streamline the analysis of vast amounts of wildlife data generated by researchers globally. Trained on an impressive 65 million images, SpeciesNet not only enhances biodiversity monitoring but also provides valuable tools for developers and academics alike.
As organizations increasingly turn to technology for environmental solutions, this initiative could significantly impact wildlife research and conservation strategies moving forward.
4. Singapore Shuts Down NVIDIA Smuggling Ring 🚫
Singaporean authorities have arrested three men for allegedly smuggling NVIDIA chips amidst growing concerns over China's access to advanced technology. The suspects are accused of diverting servers, presumably containing these restricted chips, away from their intended destination in Malaysia.
This incident comes on the heels of NVIDIA’s report indicating that Singapore accounted for 18% of its revenue for the fiscal year 2025, raising eyebrows about the true flow of its products.
5. Cohere Unveils Aya Vision Multimodal AI 👁
Cohere has just rolled out Aya Vision, an open-source AI model capable of handling tasks across 23 languages, from image captioning to translating text. This innovative model aims to bridge the performance gap in multilingual and multimodal applications, boasting impressive capabilities even against much larger competitors. The release includes two versions, with the Aya Vision 32B model outperforming others like Meta's Llama-3.2 90B Vision on visual benchmarks.
With its focus on efficiency through synthetic annotations, Cohere's latest offering not only enhances accessibility for researchers but also aims to address the ongoing evaluation crisis in AI.
6. Mistral's Call for Data Center Investments at MWC 🏭
At the Mobile World Congress, Mistral CEO Arthur Mensch urged telecom leaders to invest in building data center infrastructure to bolster Europe's AI ecosystem. Mensch emphasized the importance of decentralizing cloud services and reducing reliance on U.S. technology giants, advocating for a stronger domestic presence in the sector.
He also highlighted opportunities for AI to transform network operations, suggesting that partnerships with telcos could pave the way for enhanced consumer access to AI products.
7. Trump Administration's Cuts Hit AI Research Hard 😬
The Trump Administration has laid off key employees at the National Science Foundation (NSF), threatening crucial funding for AI research initiatives. The Directorate for Technology, Innovation, and Partnerships, known for channeling government grants into AI projects, has seen many of its review panels postponed or canceled due to these layoffs, stalling progress across the sector.
This move has sparked backlash from AI experts, including Geoffrey Hinton, who criticized the cuts and called for accountability from billionaire Elon Musk’s Department of Government Efficiency.
Gemini Workspace Updates: What's New & What Still Needs Work
When we first reviewed Gemini inside Google Workspace, we weren’t impressed.
But Google recently rolled out a lot of updates to its Gemini integration into Google Workspace.
So we decided to take a look at what’s new and see if the updates actually make Gemini Workspace usable.
Check out today's AI in 5.
🦾How You Can Leverage:
OpenAI's newest model aces the only benchmark that matters – creating outputs that not only FEEL more human, but help other humans create content for (you guessed it) humans.
So while OpenAI’s newest GPT-4.5 model might not break every STEM/Math/Dev benchmark into smithereens, it’s arguably grabbing even more important headlines.
According to the people, it’s the best model in the world.
Hot dang GPT-4.5. I thought the haters said you’d hit a wall? Lolz.
OpenAI’s newest GPT-4.5 model just grabbed a first-place tie on the LM Arena as the best LLM in the world, even though it didn’t crash every single benchmark.
That was kinda the point of today’s show, in which we gave GPT-4.5 the live, hands-on treatment.
Biggest takeaway?
The people LOVE GPT-4.5’s output compared to GPT-4o.
And even though the new GPT-4.5 currently requires the $200/mo Pro subscription in ChatGPT (or a crazy-high API price for devs), it should be released any day now for ChatGPT Plus subscribers.
Ready to get your hands dirty?
Samesies. Let’s get it.
1 – The EQ Advantage: When Machines Learn to Read the Room 🧠
The most shocking revelation? GPT-4.5 writes emails you'd actually want to receive.
We tested identical prompts about responding to a colleague's family emergency. GPT-4o gave us corporate-speak: "I understand that the deadline was affected." Helpful, but bloodless.
Four-five? It wrote: "I've been thinking of you... please don't worry about the missed deadline."
That's not just different wording.
That's emotional intelligence.
It's the difference between sympathy and empathy. For the first time, an AI understands the human on the other end needs support, not just project updates.
Try This:
Next time you face a delicate workplace situation, run parallel prompts through GPT-4o and 4.5.
Note how 4.5 acknowledges emotions before tasks, uses first-person language that creates connection, and varies sentence structure to sound genuinely human. Then steal those phrasings for your own communications.
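If you want to make that side-by-side repeatable, here's a minimal sketch using the official `openai` Python package. The model IDs (`gpt-4o`, `gpt-4.5-preview`) and the sample prompt are our assumptions, not from the episode, so swap in whatever models your account exposes.

```python
# Sketch: send the SAME prompt to two models so the only variable is the model.
# Assumes the official `openai` package and an OPENAI_API_KEY in your env.
# Model IDs below are assumptions; adjust to what your account offers.

PROMPT = (
    "A colleague just told me they missed our deadline because of a "
    "family emergency. Draft a short, supportive reply."
)

def build_request(model: str, prompt: str) -> dict:
    """Build an identical chat-completion payload, varying only the model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

requests = [build_request(m, PROMPT) for m in ("gpt-4o", "gpt-4.5-preview")]

# To actually run the comparison:
# from openai import OpenAI
# client = OpenAI()
# for req in requests:
#     reply = client.chat.completions.create(**req)
#     print(req["model"], "->", reply.choices[0].message.content, "\n")
```

Keeping the payloads identical except for the model name is what makes the comparison fair: any difference in tone is the model, not the prompt.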
2 – The Writing Breakthrough: AI That Finally Sounds Human ✍️
Content structure reveals everything.
GPT-4o writes like somebody's uncle who just discovered emojis. Every sentence is 15-20 words. No rhythm. When asked for motivational content, it spat out bland corporate inspiration with random fire emojis and hashtags.
GPT-4.5? Completely different beast.
It varies sentence length intentionally. Short. Medium. Long and flowing with natural cadence.
When asked to write a Microsoft Zune revival memo, 4.5 immediately launched in Canvas mode. It anticipated you'd want to edit further.
We counted compound sentences with em dashes in GPT-4o’s output. Three out of six sentences used this exact structure. In 4.5's output? Just one. The rest varied beautifully.
Try This:
Feed your next important document to GPT-4.5 with this prompt: "Rewrite this with varied sentence structure, strategic emphasis, and authentic language that will genuinely connect with my audience."
Study the before and after. Notice how it transforms sentence length, eliminates clichés, and creates natural rhythm. Apply these techniques to instantly sound more human.
3 – The Strategic Implementation: How To Actually Use This Thing 🎯
Let's talk money.
GPT-4.5 costs $75 per million input tokens and $150 per million output tokens in the API. That's 30X more than previous models.
Ouch.
Very few companies can afford to run this at scale. The strategic value lies in specific, high-stakes human interactions where emotional connection creates ROI.
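That 30X figure checks out against GPT-4o's list pricing. Here's a quick back-of-envelope sketch; the GPT-4o prices (roughly $2.50/M input, $10/M output at the time) are our assumption, not from the episode.

```python
# Back-of-envelope API cost comparison (USD per 1M tokens).
# GPT-4.5 prices come from the text; GPT-4o prices are an assumption
# based on OpenAI's published list pricing at the time.
PRICES = {
    "gpt-4.5": {"input": 75.00, "output": 150.00},
    "gpt-4o":  {"input": 2.50,  "output": 10.00},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one call, given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A typical "rewrite this email" call: ~1,000 tokens in, ~500 out.
for model in PRICES:
    print(f"{model}: ${cost(model, 1_000, 500):.4f} per call")

# Input-price multiplier vs GPT-4o:
print(PRICES["gpt-4.5"]["input"] / PRICES["gpt-4o"]["input"])  # 30.0
```

At these rates a single short email rewrite is still fractions of a cent, which is why the math only hurts at scale: high-volume pipelines, not one-off high-stakes messages.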
We identified five key use cases where 4.5 demolishes everything else: personal/business coaching, therapeutic communication, high-stakes content writing, strategic direction and creative partnership.
The knowledge cutoff was rolled BACK from June 2024 (4o) to October 2023 (4.5). This shows OpenAI prioritized emotional intelligence over recency. (Can’t win em all, we guess?)
And here's what everyone missed: This is OpenAI's last pre-reasoning model. Future versions will combine this emotional intelligence with reasoning capabilities, creating hybrids that both understand AND relate.
Try This:
Identify the three communication scenarios in your organization with the highest emotional stakes. Create GPT-4.5 prompts specifically designed for these scenarios.
Test against your current approaches and measure both efficiency AND emotional response. Pay special attention to how improved emotional tone affects conversion rates, resolution times, and customer satisfaction metrics.
The future isn't just smarter AI.
It's AI that makes you feel understood.
⌚
Numbers to watch
2027
Apple reportedly won’t release a truly ‘modernized’ conversational Siri until 2027.
Now This …
Let us know your thoughts!