
How Distributed Computing is Unlocking Affordable AI at Scale

OpenAI's safety reasoning monitor, Copilot Studio gets Computer Use, U.S. may ban DeepSeek and more!

šŸ‘‰ Subscribe Here | šŸ—£ Hire Us To Speak | šŸ¤ Partner with Us | šŸ¤– Grow with GenAI

Outsmart The Future

Today in Everyday AI
6 minute read

šŸŽ™ Daily Podcast Episode: Discover how distributed computing is revolutionizing AI at scale. Find out how your business can tap into affordable AI computing solutions. Give it a listen.

šŸ•µļøā€ā™‚ļø Fresh Finds: Grok gets improved memory, OpenAI names new nonprofit advisors and Gemini Live screen share now free for Android. Read on for Fresh Finds.

šŸ—ž Byte Sized Daily AI News: Google claps back at OpenAI, Copilot Studio gets new computer use feature and U.S. may ban DeepSeek. For that and more, read on for Byte Sized News.

🧠 Learn & Leveraging AI: Whether you're an enterprise company or a small business, here's how you can leverage distributed computing. Keep reading for that!

ā†©ļø Don't miss out: Did you miss our last newsletter? We talked about OpenAI unveiling o3 and o4-mini, Claude getting a new research feature, Veo 2 added to Gemini Advanced and adapting your brand to AI search. Check it here!

How Distributed Computing is Unlocking Affordable AI at Scale šŸ§‘ā€šŸ’»

Everyone’s chasing bigger AI.

The real opportunity? Smarter scaling.

Distributed computing is quietly rewriting the rules of what’s possible— not just for tech giants, but for everyone building with AI.

We’re talking cost. We’re talking scale. And we’re definitely talking disruption.

Tom Curry, CEO and Co-Founder of DistributeAI, joins us as we dig into the future of distributed power and practical AI performance.

Also on the pod today:

• Small Business AI Compute Solutions šŸ’¼
• Open Source vs. Proprietary AI Models šŸ¤”
• Edge Computing and Privacy Concerns šŸ”

It’ll be worth your 22 minutes:

Listen on our site:

Click to listen

Subscribe and listen on your favorite podcast platform


Here are our favorite AI finds from across the web:

New AI Tool Spotlight – Omniflow is an AI operating system for product development, AdClone AI remixes any ad in LinkedIn’s Ad Library and Music AI Bubble connects ChatGPT to your music app for music trivia and info.

xAI – Grok now has improved memory and can remember your conversations and give personalized recommendations.

OpenAI – OpenAI has named its new nonprofit advisors.

Google – Gemini Live's screen share feature is now free for Android users.

Google has also released starter apps V2 in Google AI Studio so you can quickly prototype with Gemini API.

AI Media – Wikipedia is giving AI developers its data to help fight bot scrapers.

AI Video – D-ID has released Studio V3, an all-in-one creative studio for AI avatar videos.

AI in Government – A new report shows that US police are using an AI tool called Overwatch to deploy AI bots to infiltrate criminal networks.

1. OpenAI Rolls Out New Safety Monitor for Its Latest AI Models šŸ”’

OpenAI has introduced a new "safety-focused reasoning monitor" to oversee its latest AI reasoning models, o3 and o4-mini, aiming to block prompts related to biological and chemical threats. This move comes after internal tests revealed these models have improved capabilities that could be misused to generate harmful content, with the monitor declining risky requests nearly 99% of the time during simulations.

While the company admits the system isn’t foolproof and will continue using human oversight, it signals a growing reliance on automated safeguards as AI power surges.

2. Microsoft Unleashes New AI "Computer Use" Feature for Copilot Studio šŸ’»

Microsoft just rolled out a new "computer use" capability in Copilot Studio, allowing AI agents to interact directly with websites and desktop apps by clicking, typing, and navigating menus, with no API needed. This means businesses can automate everything from data entry to market research, even when traditional integration isn't available.

Unlike the limited "Actions" feature in consumer Copilot, this tool promises broader compatibility and smarter adaptability to UI changes, making automation smoother and more reliable.

3. US Considers DeepSeek Ban Amid AI Chip Restrictions 🚫

The Trump administration is reportedly weighing a ban on Chinese AI startup DeepSeek, aiming to block its access to Nvidia’s AI chips and potentially prevent Americans from using its services, according to The New York Times. This move follows recent tightened restrictions on Nvidia chip sales to China, reflecting ongoing efforts to curb China’s technological advances in AI.

DeepSeek’s rapid rise and aggressive pricing have disrupted Silicon Valley, while allegations of intellectual property theft add fuel to U.S. regulatory scrutiny.

4. OpenAI Eyes $3 Billion Acquisition of Windsurf šŸ’°

OpenAI is reportedly in talks to acquire Windsurf, an AI-assisted coding startup, for about $3 billion, marking its biggest potential deal yet, Bloomberg News reveals. This move comes as Windsurf, formerly Codeium, seeks to capitalize on soaring investor interest in AI tools, having recently closed a $150 million funding round valuing it at $1.25 billion.

The deal, still tentative, could significantly boost OpenAI’s footprint in developer tools. With OpenAI simultaneously planning a massive $40 billion funding round, the tech giant’s aggressive expansion signals ongoing acceleration in AI-driven innovation and investment.

5. Motorola Razr to Feature Perplexity's AI Assistant in Upcoming Launch šŸ“±

Motorola is set to unveil its new Razr foldable on April 24th, featuring Perplexity’s AI voice assistant alongside Google’s Gemini. This partnership aims to give users a fresh AI interaction option with a specialized interface designed to highlight Perplexity’s capabilities.

The move also hints at broader plans, as Perplexity is reportedly in early talks to bring its assistant to Samsung devices, challenging the dominance of Google’s AI on Galaxy phones.

6. Microsoft's Copilot Vision Now Freely Available in Edge Browser šŸ‘

Microsoft AI CEO Mustafa Suleyman announced on Bluesky that Copilot Vision is now free to use within the Edge browser. This feature lets users speak commands and get real-time guidance, like cooking help or interview prep, by "seeing" what's on the screen, though it won't click links for you.

Broader system-wide capabilities remain locked behind Copilot Pro subscriptions. Microsoft says it respects your privacy by logging your responses but not your screen content, making it an intriguing tool for professionals looking to streamline tasks without extra software.

7. Google Unleashes Gemini 2.5 Flash AI Model Preview šŸ“ø

Google has just dropped an "early version" of its Gemini 2.5 Flash hybrid reasoning model for public preview, according to Mashable. The new model claims sharper reasoning skills, smarter compute management, and is available both in the Gemini app and for developers via Google AI Studio and Vertex AI.

With OpenAI launching its o3 and o4-mini models just hours earlier, the race for AI dominance is heating up, and Google's latest move means more accessible, cost-efficient AI tools are now at everyone's fingertips.

🦾 How You Can Leverage:

Remember when ChatGPT straight-up rejected new users because their "GPUs were melting"?

Not hyperbole.

The world is LITERALLY running out of compute power as we all flock to sprinkle AI on everything. 

 Even billion-dollar companies like OpenAI and Anthropic say they can't secure enough chips.

But what if the solution is collecting dust in your office right now?

Those computers sitting idle after 5pm could be your secret weapon for affordable AI.

Tom Curry, CEO of DistributeAI, joined the Everyday AI show today and blew our minds explaining how medium-sized companies can finally join the AI revolution without breaking the bank.

So if you're an enterprise company looking for a bit more compute, or just a bit compute-curious (lolz), then read on for our top takeaways.

šŸ‘‡

1 – Silicon Chip Technology Has Hit The Physics Wall šŸ›‘

Five years ago, gaming drove GPU demand.

Now?

AI has devoured everything.

Tom revealed we're not just facing manufacturing bottlenecks. We've smacked straight into the PHYSICAL LIMITS of silicon chip technology.

When will breakthrough chip tech arrive?

A decade from now.

TEN. WHOLE. YEARS.

Our power grid is literally struggling under the weight of AI data centers. Tom explained that when you run these massive models, you're "stretching every resource that we have in the world."

This explains the dreaded ChatGPT capacity messages. Tom pointed out that when millions of requests flood their servers, especially for resource-intensive tasks, the necessary computing power simply doesn't exist.

Need an image generated? That's 10-20 seconds of precious GPU time.

Want a video? We're talking MINUTES even on the world's fastest hardware.

Try this

Tom's solution is brilliantly simple. 

Install their one-click program that activates your office computers at night when they'd otherwise sit idle. These machines join a distributed network providing compute power. 

During business hours, you get significantly discounted access to AI models running on that same network.

It's like turning your empty office into an overnight data center, then using the proceeds to fund your daytime AI projects. Setup takes minutes, and companies immediately reduce costs compared to traditional cloud GPU rentals.
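Curious what that looks like under the hood? Here's a rough, stdlib-only sketch of the scheduling idea. To be clear, this is not DistributeAI's actual client (theirs is a one-click install); the `distributed-worker` command and the business hours below are made-up placeholders.

```python
import subprocess
import time
from datetime import datetime

# Hypothetical worker command -- a stand-in for whatever client your
# compute network provides (DistributeAI's is a one-click install).
WORKER_CMD = ["distributed-worker", "--join", "office-pool"]

BUSINESS_START, BUSINESS_END = 8, 18  # 8am-6pm local time
worker = None

while True:
    off_hours = not (BUSINESS_START <= datetime.now().hour < BUSINESS_END)

    if off_hours and worker is None:
        # After hours: lend the idle machine to the network.
        worker = subprocess.Popen(WORKER_CMD)
    elif not off_hours and worker is not None:
        # Business hours: reclaim the machine for daytime work.
        worker.terminate()
        worker.wait()
        worker = None

    time.sleep(300)  # re-check every 5 minutes
```

In practice the vendor's installer handles all of this for you, but the principle is the same: your machines contribute compute overnight, and you spend the savings on discounted inference during the day.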

2 – Tiny Models Are Getting Scarily Good 🫣

The AI world is experiencing the weirdest evolution right now.

Behemoth models keep ballooning in size.

Yet Google's tiny Gemma 3 (just 27 billion parameters) is absolutely demolishing DeepSeek's 600+ billion parameter model in head-to-head tests.

Tom compared this to how mobile phones initially shrunk to tiny sizes, then suddenly grew again when we wanted bigger screens and batteries.

But here's what nobody's talking about: smaller doesn't equal efficient.

Tom explained these compact models use "chain-of-thought" reasoning that secretly consumes billions of tokens behind the scenes. The model might be tiny, but the computation remains massive.

Want proof? 

Tom suggested clicking "show thinking" in Claude or Gemini.

You'll be shocked watching it generate an essay-length internal monologue just to tell you Springfield is Illinois' capital.
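Want to see those hidden tokens as numbers instead of scrolling a "show thinking" pane? Most APIs now report them. Here's a minimal sketch using the OpenAI SDK, assuming you have access to an o-series reasoning model and an OPENAI_API_KEY in your environment; other providers expose similar usage fields under different names.

```python
# Minimal sketch: measure how many hidden "thinking" tokens a reasoning
# model burns on a trivial question. Assumes the official openai SDK and
# an o-series model you have access to; adjust the model name as needed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user", "content": "What is the capital of Illinois?"}],
)

usage = resp.usage
reasoning = usage.completion_tokens_details.reasoning_tokens
visible = usage.completion_tokens - reasoning

print(f"Answer: {resp.choices[0].message.content}")
print(f"Visible answer tokens: {visible}")
print(f"Hidden reasoning tokens: {reasoning}")  # often dwarfs the answer
```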

When combined with real-time web access, these compact models can match much larger ones for everyday tasks.

Try this

Don't automatically default to Claude and GPT-4o. Tom insists you should benchmark both premium models AND smaller open-source alternatives for your specific business needs.

Many companies discover they're hemorrhaging money for negligible performance gains. For most real-world applications, these smaller models deliver 90% of the capabilities at a fraction of the price.

The best part? They often run on hardware you already own, eliminating cloud computing fees entirely. 
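Here's a bare-bones way to run that benchmark, sketched with the OpenAI SDK against one hosted model and one local OpenAI-compatible server such as Ollama. The endpoint, model tags, and prompts are placeholders; swap in your own workload before drawing any conclusions.

```python
# Rough benchmarking harness: run the same prompts through a premium hosted
# model and a small local open-source model, then compare quality vs. latency.
# Assumes the openai SDK plus a local OpenAI-compatible server (e.g. Ollama
# serving a Gemma-class model at localhost:11434); swap names to match yours.
import time
from openai import OpenAI

PROMPTS = [
    "Summarize this support ticket in two sentences: ...",
    "Draft a polite follow-up email to a late-paying client.",
]

candidates = {
    "hosted-premium": (OpenAI(), "gpt-4o"),
    "local-small": (OpenAI(base_url="http://localhost:11434/v1", api_key="unused"), "gemma3:27b"),
}

for name, (client, model) in candidates.items():
    for prompt in PROMPTS:
        start = time.time()
        resp = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        elapsed = time.time() - start
        print(f"[{name}] {elapsed:.1f}s -> {resp.choices[0].message.content[:80]}...")
```

Score the outputs however you judge quality today (even a quick human eyeball works), then weigh that against latency and your per-token or hardware costs.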

3 – Open Source Models Are Just Weeks Away From Matching Closed Ones šŸ”

Two years ago, OpenAI seemed untouchable.

Today? Open-source models are just weeks away from matching the closed ones.

This isn't just interesting trivia. It fundamentally reshapes how every business should approach AI strategy.

When all models essentially perform the same (happening RIGHT NOW), your competitive advantage shifts from which fancy model you use to how efficiently you deliver it.

Tom shared the uncomfortable financial reality: OpenAI and Anthropic "lose money every day already." Their business models face existential threats when comparable open-source alternatives cost nothing.

For privacy-focused organizations, Tom painted an exciting future: your devices becoming mini data centers running sophisticated AI that never shares your information with tech giants.

 

Try this

Tom's #1 piece of advice? Stay ridiculously flexible with your AI infrastructure.

Create a technical architecture that lets you swap between providers as easily as changing channels. The next DeepSeek or Gemma breakthrough could arrive next month and completely flip your cost-benefit calculations.

Tom emphasized compute flexibility over model loyalty. 

This approach keeps you nimble enough to leverage both open source advancements and proprietary innovations without getting trapped in technological dead ends.
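One lightweight way to build in that flexibility, sketched in Python: keep every provider behind the same tiny interface so the swap really is a one-line change. The class names, endpoint, and model tags below are illustrative, not a prescribed stack.

```python
# Sketch of "swap providers like changing channels": hide every provider
# behind one tiny interface so a model change is a config edit, not a
# rewrite. Class names, endpoints, and model tags are illustrative only.
from typing import Protocol

from openai import OpenAI


class ChatProvider(Protocol):
    def complete(self, prompt: str) -> str: ...


class HostedProvider:
    """A proprietary hosted model behind the usual SDK."""

    def __init__(self, model: str = "gpt-4o"):
        self.client, self.model = OpenAI(), model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model, messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content


class LocalProvider:
    """Any OpenAI-compatible local runner (Ollama, vLLM, etc.)."""

    def __init__(self, model: str, base_url: str = "http://localhost:11434/v1"):
        self.client = OpenAI(base_url=base_url, api_key="unused")
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model, messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content


# The rest of your app only ever sees ChatProvider, so next month's
# breakthrough model is a one-line swap right here.
provider: ChatProvider = LocalProvider("gemma3:27b")
print(provider.complete("Give me three subject lines for a product launch email."))
```

When the next DeepSeek or Gemma moment hits, you change one constructor and rerun your benchmark instead of rewriting your app.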
