Meta’s new AI lab, OpenAI hit with huge copyright blow, AI’s impact on content publishers and more
AI put Trump under the microscope, hidden prompts to sway peer review, AI putting recent grads out of jobs and more!
👉 Subscribe Here | 🗣 Hire Us To Speak | 🤝 Partner with Us | 🤖 Grow with GenAI
Sup y’all! 👋
For our U.S. friends, hope you had a Happy Fourth and it was good to recharge a bit.
(At least Meta didn’t come for us like they did for OpenAI during their recharge period. Yikes.)
Speaking of hot takes, what should we cover on tomorrow’s livestream/podcast?
It’s Hot Take Tuesday…. but I work for you.
What should we cover for Hot Take Tuesday? Your vote wins.
You decide Everyday AI’s fate.
I just wake up early and do what you tell me to.
✌️
Jordan
(Let’s connect on LinkedIn. Tell me you’re from the newsletter. Otherwise stranger danger ya know.)
Outsmart The Future
Today in Everyday AI
8 minute read
🎙 Daily Podcast Episode: Meta officially unveils its AI Dream Team, OpenAI takes a copyright loss and Cloudflare (?!) could change the AI game. Give it a listen.
🕵️♂️ Fresh Finds: Capgemini makes a $3.3 billion bet on AI, Microsoft exec's dicey advice on AI and job displacement, Elon teases Grok 4 (again) and more. Read on for Fresh Finds.
🗞 Byte Sized Daily AI News: AI put Trump under the microscope, hidden prompts to sway peer review, AI putting recent grads out of jobs, read on for Byte Sized News.
🧠 AI News That Matters: OpenAI lost a battle in the copyright war against the New York Times, Meta is getting testy with AI bots and why the U.S. Senate shot down AI legislation. Don’t miss out on this past week’s biggest AI news stories and what they mean. Keep reading for that!
↩️ Don’t miss out: Did you miss our last newsletter? We talked about Google Workspace getting Gemini Gems, OpenAI powering up with Oracle data centers, the EU delaying its AI compliance code and more. Check it here!
AI News That Matters - July 7th, 2025 📰
Meta just announced the legit Dream Team of AI. 🥇
That's got OpenAI scrambling.... but getting top talent poached is the least of their worries after a recent copyright case ruling. 🧑⚖️
And publishers might be able to better protect their content from LLMs, but it might make us all a little dumber. 🫤
Don't waste hours each day trying to understand what AI news actually matters.
That's our job.
Join us on Mondays as we bring you the AI News That Matters.
Also on the pod today:
• OpenAI battles Meta's talent raid 🤼
• Cloudflare blocks AI crawlers 🛡️
• AI-driven publisher revenue loss 📉
It’ll be worth your 44 minutes:
Listen on our site, or subscribe and listen on your favorite podcast platform.
Here are our favorite AI finds from across the web:
New AI Tool Spotlight — TensorBlock forge promises one API that works with all AI models; TeammatesAI launched its AI interviewer, Sara; and Context is an AI-first office suite.
AI And Shopping — Amazon Prime Day is around the corner. Here’s how AI can help you find the best deals.
AI in the Media — Publishers urge EU action as Google’s AI Overviews spark antitrust battle.
Big Tech Acquisitions — Capgemini is buying WNS for $3.3B to boost its AI-powered business services.
AI and Jobs — Microsoft cut thousands of jobs—then told laid-off workers to use AI to cope, sparking backlash over tone-deaf advice. Curious how tech and empathy are clashing in the workplace?
Workplace Ethics — Nearly 66% of surveyed managers said they consulted AI when making layoff decisions. Yikes.
AI Models — Elon Musk tweeted about a Grok 4 livestream this Wednesday, after months of delays and pushbacks for Grok 3.5, which was never released.
Grok 4 release livestream on Wednesday at 8pm PT @xai
— Elon Musk (@elonmusk)
8:52 PM • Jul 7, 2025
Similarly, Perplexity’s CEO tweeted a cryptic message about this Wednesday, which many believe points to an updated release of Perplexity’s promising Comet browser.
Compute and Big Tech — Amazon’s building a massive AI supercomputer for Anthropic, powered by its own Trainium2 chips instead of GPUs. See the details.
AI in the Home — Apple is falling behind in AI in another realm: the home.
1. Academics Plant Hidden AI Prompts to Game Peer Review 🫠
According to reports, researchers have been quietly hiding AI prompts in academic papers on arXiv, instructing AI review tools to deliver glowing feedback.
These prompts, often disguised in white text or minuscule fonts, urge AI to highlight the papers’ strengths and novelty. Some academics claim this tactic is meant to counteract reviewers who use AI themselves, sparking debate over fairness in research evaluation.
2. AI Puts Trump Claims Under the Microscope—And Most Don’t Pass 🕵️
A new analysis published in the Washington Post found that five top AI models (ChatGPT, Gemini, Grok, Claude, and Perplexity) disproved the majority of President Trump’s major public claims, with all five models debunking his statements in 15 out of 20 tested scenarios.
Despite Trump’s vocal support for AI and a sweeping executive order to boost U.S. dominance, the technology he champions doesn’t back his signature claims, which raises tough questions about truth and tech in today’s political climate.
The study reports that across every question, at least 3 out of 5 AIs rejected Trump’s assertions, and in most cases, the refutation was unanimous.
3. Anthropic Calls for New AI Transparency Rules as Lawmakers Retreat 🔎
Anthropic just announced a plan to require advanced AI developers to publicly share their risk and safety practices.
The timing is key, coming days after Congress scrapped a proposed decade-long ban on state AI regulation, a moratorium CEO Dario Amodei had criticized as far too blunt an instrument. Their proposal won quick praise from AI watchdogs for demanding real accountability and transparency. For anyone growing a company or career in AI, this shift could reward those who get serious about openness before stricter rules arrive.
4. AI Pushes Recent Grad Unemployment to Record High 💼
A new report from Oxford Economics says the unemployment rate for recent college grads has climbed to a record 6.6% over the past year, outpacing the national average as employers lean harder on artificial intelligence to fill entry-level roles.
The automation wave is shrinking job openings that used to serve as a springboard into the workforce. Experts note that picking up AI skills could help young professionals stand out in a job market increasingly shaped by the technology.
5. Judge Fines Mike Lindell’s Lawyers for AI Blunder in Defamation Trial 👨⚖️
In a July 7 ruling, a federal judge slapped two of Mike Lindell’s attorneys with a $3,000 fine for submitting a court filing riddled with bogus citations generated by artificial intelligence. Lindell is the MyPillow CEO and prominent supporter of U.S. President Donald Trump.
The lawyers tried to blame the mishap on a draft mix-up, but the judge called them out for poor oversight and an even worse attitude, noting a pattern of similar errors in other cases. This public embarrassment highlights that even top legal teams are struggling to manage AI tools responsibly as pressure mounts to adopt them.
6. Microsoft Unveils Deep Research for Azure AI Foundry 🔭
Microsoft just launched the public preview of Deep Research for Azure AI Foundry, letting developers tap into OpenAI-powered, API-driven research agents that automate and audit web-scale information gathering—right inside enterprise workflows.
According to Microsoft, this tech leap means companies can now build programmable agents that plan, analyze, and synthesize data from across the web, all with traceable sources and airtight compliance. The move lets businesses automate everything from market analysis to regulatory reports, plugging deep research directly into their own apps and workflows.
What a time to take a week off to recharge.
OpenAI, with most of its employees not working during the week of the July 4th holiday to recoup, got hit with quite a few heavy blows the past few days.
(So much for that whole recharge thang.)
OpenAI wasn’t the only company grabbing headlines.
Cloudflare just blocked 20% of the internet from AI training.
Apple admitted total AI failure.
News publishers watched 40% of their traffic vanish.
The AI world just reshuffled the entire deck.
Most people have no clue what just happened.
1. OpenAI CEO Sam Altman Responds to Meta's AI Talent Raid
According to reports, OpenAI CEO Sam Altman has responded to Meta's recent aggressive recruitment of AI talent in internal Slack messages to his research team.
Meta's been systematically poaching OpenAI's core researchers with offers exceeding $100 million annually following a series of high-profile departures to Meta's new superintelligence team.
That's more than LeBron James makes.
For coding. (But like obviously more than coding. But….. sheesh.)
Altman's response? A company-wide message about "missionaries beating mercenaries" while promising compensation reviews for everyone. His chief research officer, Mark Chen, went full emotional and compared Meta's hiring to someone breaking into their home and stealing something.
The defensive move? Altman claimed Meta "had to go quite far down their list" to fill positions, suggesting OpenAI's top talent stayed.
But the defensive tone kinda says otherwise.
What it means: OpenAI's unassailable lead just became very assailable.
When your CEO is sending defensive Slack messages about company culture while promising pay raises, you're not winning the talent war.
You're falling behind in talent.
(And Google has already caught up on the model side.)
2. Meta Tests AI Chatbots That Message Users Without Permission
According to a report from Business Insider, Meta has been developing and testing AI chatbots that send unsolicited follow-up messages to users on Facebook, WhatsApp, and Instagram.
The initiative is called Project Omni, and it aims to boost user engagement and retention by having chatbots reach out to users without being prompted.
Not responding to your questions.
Initiating conversations.
Think AI cold calling, but worse. These bots use your engagement history to craft personalized messages that feel human but aren't. Meta projects its generative AI products could generate $2-3 billion in revenue in 2025, up from $1.4 billion, and as much as $1.4 trillion by 2035.
The legal precedent isn't promising. Character.AI and Replika faced lawsuits for similar proactive messaging features.
What it means: Meta's turning your social media feeds into AI sales funnels.
This isn't innovation.
It's desperation disguised as engagement strategy. When you own the platforms, you can force AI interactions that users never requested.
Expect this to backfire spectacularly when people realize their "authentic" brand interactions are just sophisticated spam.
And if Meta's approach seems questionable, wait until you see what Apple's been up to...
3. Apple in Talks with Anthropic and OpenAI to Overhaul Siri
According to Bloomberg, Apple is reportedly negotiating with leading AI firms, including Anthropic and OpenAI, to overhaul its Siri platform.
Apple's in advanced discussions to potentially use their large language models as the foundation for a new, more capable, smarter AI-powered Siri. Apple has requested custom versions of these models to run on Apple's own private cloud compute infrastructure.
Why the desperation?
Apple's internal large language models have reportedly lagged behind just about everyone else's, even though they've been spending millions of dollars daily on LLM development since 2023.
(We’ve been covering Apple’s AI failures plenty lately.)
Anthropic was reportedly looking for a multibillion dollar annual fee that would escalate year over year. We don't have reports on what OpenAI was potentially asking for.
What it means:
The richest company in the world just waved the white flag on AI.
This proves that money can't buy AI competence.
Unless you’re Zuck. Then it definitely can.
When Apple - masters of hardware integration and tech innovation - admits defeat and begs competitors for help, that speaks volumes.
4. Cloudflare Blocks AI Crawlers from 20% of the Internet by Default
Spicy move, Cloudflare.
Cloudflare, which powers about 20% of the Internet, will now block AI bots and crawlers from accessing content by default.
This move could remove one-fifth of open web content from AI training datasets.
Every new website signing up with Cloudflare will automatically prevent AI crawlers from scraping their content unless site owners specifically allow it. Cloudflare's also introducing granular privacy controls, letting individual site owners decide which AI bots can access their content.
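To make the mechanics concrete, here's a minimal sketch of what per-bot blocking looks like at the request level. This is illustrative only, not Cloudflare's actual implementation (which happens at its network edge): a tiny Python server that denies requests from publicly documented AI crawler user agents like GPTBot, ClaudeBot, CCBot, and PerplexityBot.

```python
# Illustrative sketch only, NOT Cloudflare's implementation: deny-by-default
# handling of requests whose User-Agent matches known AI crawlers.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Crawler user agents documented by their operators (OpenAI, Anthropic,
# Common Crawl, Perplexity).
AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot")

class BlockAICrawlers(BaseHTTPRequestHandler):
    def do_GET(self):
        user_agent = self.headers.get("User-Agent", "")
        if any(bot in user_agent for bot in AI_CRAWLERS):
            # Blocked by default; a real setup would let the site owner
            # allow-list specific bots (or charge them per crawl).
            self.send_response(403)
            self.end_headers()
            self.wfile.write(b"AI crawling not permitted\n")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Welcome, human (or permitted bot)\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), BlockAICrawlers).serve_forever()
```

In practice, publishers express this preference declaratively (via robots.txt or Cloudflare's dashboard) rather than writing server code, but the logic is the same: match the user agent, deny by default, allow-list the bots you actually want.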
The twist?
Cloudflare launched a new pay-per-crawl initiative allowing AI companies to pay for access to web content from certain publishers, with publishers setting their own rates.
Major publishers including Adweek, BuzzFeed, Fortune, Stack Overflow, The Atlantic, Time, and Quora have signed on to support the initiative.
What it means:
The free lunch just ended for Big Tech.
AI companies built trillion-dollar valuations by scraping content without permission. Now they'll pay market rates or train on garbage data.
This fundamentally breaks the economics of AI development and creates massive moats for companies with content partnerships.
Future AI models will only be as good as their licensing budgets.
5. US Senate Votes 99-1 to Strike Down Proposed AI Regulation Ban
The US Senate has voted 99 to 1 to strike down a proposed ten-year ban on state-level AI regulation from President Trump's domestic policy bill.
The moratorium would have blocked states from enforcing laws against harmful AI uses such as deepfakes and political manipulation for a decade, but bipartisan opposition prevailed.
That's rarer than solar eclipses. lolz.
The Senate vote means US states will keep their power to make and enforce their own AI-related laws. If the original moratorium had passed, states wouldn't have been able to address problems like deepfake pornography, AI-driven scams, or election interference at the local level for ten years.
AI companies wanted something like this to pass because it would have let them operate without adhering to different rules across the US.
What it means: States just kept their power to regulate AI however they want.
This guarantees a patchwork of conflicting laws that'll make nationwide AI deployment a nightmare.
California will go aggressive. Texas will go permissive. Everyone else will pick sides.
6. Federal Judge Orders OpenAI to Hand Over User Logs to New York Times
A federal judge has ordered OpenAI to grant the New York Times, New York Daily News, and the Center for Investigative Reporting access to its user logs, including deleted records, as part of a high-profile copyright infringement lawsuit.
This sweeping order allows the plaintiffs to examine whether OpenAI's language models were trained on copyrighted news content and if ChatGPT is capable of reproducing or plagiarizing that material.
The New York Times argues that users may delete their ChatGPT histories after bypassing paywalls, making full access to logs essential for uncovering potential copyright violations.
OpenAI has objected to the ruling, saying it undermines long-standing privacy norms, and has vowed to continue fighting the order.
The order covers logs from free ChatGPT and API users but specifically exempts enterprise and educational accounts.
What it means:
Your deleted ChatGPT conversations might not actually be deleted….. yet.
This precedent opens every AI company to fishing expeditions from content creators claiming copyright theft.
If publishers can access private user data to prove their case, AI companies just became liable for every user interaction with copyrighted material.
7. New Study Shows Major US News Sites Lost Up to 40% Traffic from Google's AI Overviews
New data from SimilarWeb shows that 37 out of the top 50 US news websites have suffered significant year-over-year traffic declines since Google introduced its AI Overviews search feature in May 2024.
Forbes and Huffington Post experienced the steepest drops, each losing about 40% of their website traffic. Daily Mail fell 32% and CNN dropped 28%.
The numbers are brutal. The average click-through rate for the top organic search results on queries that triggered AI Overviews dropped from 7.3% in March 2024 to just 2.6% a year later.
Google's automated summaries answer user questions directly at the top of search results, reducing the need to click through to news sites.
What it means: Google mighta just accidentally destroyed a big chunk of online journalism.
When people get answers without clicking, publishers lose ad revenue and eventually die.
This creates a death spiral where less original content gets created, making AI summaries increasingly worthless.
Google's algorithm might be eating its own tail. But, they gotta eat when everyone else is, right?
8. OpenAI Intensifies Enterprise Consulting with $10 Million Minimum Fees
According to The Information, OpenAI is intensifying its enterprise consulting efforts, now charging at least $10 million per client.
The company's engineers, known internally as forward deployed engineers, work directly with third-party organizations to tailor models like GPT-4o to specific business data and develop custom AI applications.
This move places OpenAI in direct competition with established consulting firms like Accenture and Deloitte, as well as companies like Palantir.
High-profile clients already include the US Department of Defense and Southeast Asian tech company Grab.
What it means: OpenAI just admitted the technology isn't plug-and-play ready.
But we all knew that, right?
When you're charging consulting fees that exceed most companies' annual AI budgets, you're not selling software.
You're selling implementation expertise because your product is too complex for customers to deploy alone.
This proves AI adoption is still in the expensive, hand-holding phase.
9. Meta Announces New AI Organization with Massive Talent Acquisitions
News Team, Assemble.
(Except instead of news team, it’s the dream AI Team. And instead of Ron Burgundy, it’s Mark Zuckerberg.)
Meta has announced MSL (Meta Superintelligence Labs), its new organizational structure uniting its foundation, product, and FAIR teams with a dedicated lab for next-generation AI models, according to a memo obtained by Wired.
The move marks a major escalation in the AI talent wars as Meta essentially acquired a 49% stake in Scale AI and hired its CEO, Alexandr Wang, as chief AI officer to lead MSL.
Nat Friedman, the former GitHub CEO, will co-lead the new lab alongside Wang.
But wait, there's more.
• Trapit Bansal: RL on chain-of-thought, o-series models
• Shuchao Bi: GPT-4o voice mode, multimodal post-training
• Huiwen Chang: GPT-4o image gen, MaskGIT/Muse architectures
• Ji Lin: o3/o4-mini, reasoning stack
• Joel Pobar: inference (Anthropic), ML infrastructure
• Jack Rae: Gemini pre-training/reasoning, Gopher/Chinchilla
• Hongyu Ren: GPT-4o, post-training
• Johan Schalkwyk: speech systems, Sesame, Maya
• Pei Sun: post-training/coding for Gemini, Waymo perception
• Jiahui Yu: GPT-4o/4.1, multimodal perception
• Shengjia Zhao: ChatGPT/mini models, synthetic data
Meta essentially bought the humans who built nearly every breakthrough AI feature you've used in the past year.
What it means: Meta stopped playing defense and went full offense after a lackluster Llama 4.
They're not trying to catch up anymore. They're trying to leapfrog everyone by hiring the actual brains behind every major AI advancement.
Smart move.
When you can't out-innovate competitors, you buy their innovation teams.
Zuckerberg just turned AI development into a talent acquisition war he's determined to win.