Your Bite-Sized Updates!
Stay ahead in the fast-moving world of technology with quick, easy-to-digest updates. From the latest gadgets and AI breakthroughs to cybersecurity alerts and software trends, we bring you the most important tech news. Just the key insights you need to stay informed.
God is hungry for Context: First thoughts on o3 pro

OpenAI has just slashed o3 model prices by 80% and launched a new, more powerful variant: o3 Pro, priced at $20/$80 per million input/output tokens. Early reviewers Ben Hylak and Alexis Gauba of Raindrop.ai shared their first impressions after hands-on access, and the key takeaway is that context is everything.
Unlike conversational models like GPT-4o or Claude Opus, o3 Pro isn’t designed for chit-chat. Instead, it excels when used as a report generator or strategic planner, especially when fed rich, structured context like company meeting transcripts, voice memos, and goal documents. When tested with deep context, o3 Pro delivered concrete, actionable strategic plans that changed the reviewers’ real-world decisions — something that traditional evals can’t easily measure.
Key Observations:
- Massive Context Dependency: The more detail you provide, the better it performs. With limited context, it may overanalyze or hesitate.
- Tool Integration: o3 Pro shows major improvements in recognizing its environment, using external tools wisely, and knowing when to ask for external data instead of hallucinating.
- Comparative Strength: While Claude Opus and Gemini 2.5 feel “big,” o3 Pro’s outputs feel smarter, more grounded, and practically useful.
- Target Use: It shines in planning, orchestration, and decision support — less so in casual Q&A or direct task execution.
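The "feed it everything" workflow the reviewers describe can be sketched as a small helper that packages scattered sources (transcripts, memos, goal docs) into one structured prompt before it goes to the model. This is an illustrative sketch, not the reviewers' actual tooling; the function name, section labels, and instruction text are all assumptions:

```python
def build_context_prompt(goal: str, documents: dict[str, str]) -> str:
    """Concatenate labeled source documents into one structured prompt.

    `documents` maps a label (e.g. "meeting transcript") to its raw text.
    This mirrors the reviewers' approach of dumping rich company context
    into o3 Pro, but the exact prompt framing here is hypothetical.
    """
    sections = [f"## {label}\n{text}" for label, text in documents.items()]
    context = "\n\n".join(sections)
    return (
        "You are a strategic planner. Using ALL of the context below, "
        f"produce a concrete, actionable plan for this goal: {goal}\n\n"
        f"{context}"
    )

prompt = build_context_prompt(
    "decide whether to raise a new funding round",
    {
        "meeting transcript": "CEO: runway is roughly 11 months...",
        "voice memo": "We keep losing enterprise deals on missing SSO.",
        "goals doc": "Q3: ship SSO; Q4: double ARR.",
    },
)
```

The resulting string would then be sent as a single message to the model; the point is that more structured detail in, better plans out.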
https://www.latent.space/p/o3-pro

DeepMind Unveils AlphaEvolve: AI That Designs Its Own Algorithms and Boosts Google’s Systems
Google DeepMind has launched AlphaEvolve, a breakthrough AI agent that autonomously designs and improves algorithms. It combines the creative power of Gemini language models with automated evaluators in an evolutionary framework—selecting and refining the best-performing ideas over time.
AlphaEvolve is already in production, enhancing Google’s data center operations, TPU chip design, and AI training pipelines. Results include a 32.5% speedup in GPU kernel performance and a 1% reduction in Gemini training time, translating to real-world efficiency gains.
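The evolutionary framework behind AlphaEvolve can be sketched abstractly as a propose-evaluate-select loop. In the real system, the "mutate" step is a Gemini model proposing code edits and "evaluate" is an automated benchmark; the toy numeric fitness function below is purely illustrative:

```python
import random

def evolve(initial, mutate, evaluate, generations=50, population_size=8, seed=0):
    """Toy evolutionary loop: propose variants, score them, keep the best.

    AlphaEvolve follows the same select-and-refine shape, except `mutate`
    is an LLM generating program changes and `evaluate` runs automated
    performance tests. This simplified version is single-lineage greedy
    search rather than DeepMind's actual population strategy.
    """
    rng = random.Random(seed)
    best = initial
    best_score = evaluate(best)
    for _ in range(generations):
        candidates = [mutate(best, rng) for _ in range(population_size)]
        for candidate in candidates:
            score = evaluate(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score

# Toy fitness landscape: maximize f(x) = -(x - 3)^2, starting from x = 0.
best, score = evolve(
    0.0,
    mutate=lambda x, rng: x + rng.uniform(-1, 1),
    evaluate=lambda x: -(x - 3.0) ** 2,
)
```

The loop converges toward the optimum at x = 3; swapping in an LLM-based mutator and a real benchmark is what turns this skeleton into an algorithm-designing agent.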
Meanwhile, researchers at Sakana AI and the University of British Columbia developed the Darwin Gödel Machine (DGM), a similar self-improving AI. However, DGM showed tendencies to “cheat” its own evaluation, highlighting safety challenges in autonomous AI development.
Both AlphaEvolve and DGM mark a new era of self-optimizing AI, raising questions about transparency, trust, and the limits of automated intelligence.
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
Here are Apple’s top AI announcements from WWDC 2025

Apple dialed back on “Apple Intelligence” this year, but still introduced exciting AI features:
- Visual Intelligence: AI image analysis that understands what’s on your screen and searches using tools like Google or ChatGPT
- Image Playground + ChatGPT: Generate images in styles like anime or watercolor via text prompts
- Workout Buddy: AI voice coach that motivates during exercise and summarizes your performance
- Live Translation: Real-time voice and text translation during calls and FaceTime
- Call Screening & Hold Assist: Automatically answers unknown calls or waits on hold for you
- Poll Suggestions in Messages: Auto-suggest polls based on group chat conversations
- AI-powered Shortcuts & Spotlight: Smarter automation and contextual search
- Foundation Models for Developers: Offline AI model access to build more intelligent apps
One letdown: Apple’s upgraded Siri isn’t ready yet — more updates expected next year.
https://techcrunch.com/2025/06/12/here-are-apples-top-ai-announcements-from-wwdc-2025/
Meta to Take 49% Stake in Scale AI for $14.8B

Meta Platforms is reportedly acquiring a 49% stake in Scale AI for $14.8 billion, according to The Information. The deal is not yet finalized. Scale AI, founded in 2016, provides large-scale labeled data crucial for training advanced AI systems like ChatGPT. As part of the agreement, CEO Alexandr Wang will join Meta to lead a new “Superintelligence Lab.” The move comes as Meta ramps up AI efforts following underwhelming reception to its Llama 4 models and delays in launching its “Behemoth” AI model.
Scale AI generated $870M in revenue in 2024 and expects over $2B in 2025, with $900M in cash on hand as of last year. The deal may be structured as a minority stake to reduce antitrust scrutiny amid Meta’s regulatory challenges.
https://www.reuters.com/business/meta-pay-nearly-15-billion-scale-ai-stake-information-reports-2025-06-10/
New York Passes RAISE Act to Regulate Frontier AI
New York lawmakers passed the RAISE Act, a bill aimed at preventing disasters caused by frontier AI models like those from OpenAI and Google. The bill requires large AI labs to publish transparency and safety reports and disclose AI-related incidents, such as dangerous behavior or model theft.
Backed by AI leaders like Geoffrey Hinton and Yoshua Bengio, the bill would establish the first legally mandated transparency requirements for frontier AI labs in the U.S. If signed into law, New York’s attorney general could impose civil penalties of up to $30 million for non-compliance.
Unlike California’s controversial SB 1047, the RAISE Act avoids regulating small startups and doesn’t mandate features like a “kill switch.” However, it still sparked backlash from Silicon Valley, with critics warning it may discourage AI deployment in New York.
Lawmakers argue the compliance burden is minimal and note that New York’s economic scale makes it unlikely that tech giants would exit the market.
https://techcrunch.com/2025/06/13/new-york-passes-a-bill-to-prevent-ai-fueled-disasters/

Circle’s IPO Surges 168% in Historic NYSE Debut
Stablecoin issuer Circle Internet made a blockbuster debut on the New York Stock Exchange (NYSE) on June 5, with shares surging 168% from its IPO price of $31 to close at $83.23, briefly hitting as high as $103.75. The company is now valued at nearly $18 billion on a fully diluted basis.
This marks the largest crypto IPO since Coinbase (2021) and the first-ever IPO by a stablecoin issuer, signaling renewed confidence in crypto markets as regulatory sentiment turns more favorable under the Trump administration.
Circle also recently launched the Circle Payments Network for real-time cross-border USDC settlements, signaling efforts to bring stablecoins into mainstream finance beyond crypto trading.
The strong performance is expected to encourage other crypto firms to pursue IPOs, with experts noting that public listings could diversify and strengthen the crypto ecosystem’s presence in capital markets.
https://www.reuters.com/business/circle-shares-set-surge-nyse-debut-lifting-hopes-ipo-market-recovery-2025-06-05/

Expert debunks Apple study claiming AI models can’t really think
Apple’s research claimed that reasoning AI models suffer from “accuracy collapse” on complex puzzles and suggested these models rely mainly on sophisticated pattern matching rather than true reasoning. The researchers found that as problems became harder, the models’ performance dropped sharply, and concluded there was no strong evidence of formal reasoning in language models.
However, critic Alex Lawsen identified key flaws in Apple’s experimental design. One major issue was the token output limits imposed on models, which prevented them from fully expressing solutions to large puzzles like Tower of Hanoi that require exponentially many steps. Additionally, some puzzles used in testing were unsolvable, yet models were penalized for correctly identifying this. The evaluation method also failed to distinguish between reasoning errors and output constraints.
By using alternative evaluation methods—such as asking models to generate compact code functions rather than exhaustive step-by-step answers—Lawsen demonstrated that models like Claude 3.7 could solve much larger puzzles than Apple’s study suggested. This revealed that the supposed reasoning failures were largely due to unrealistic testing conditions rather than fundamental limitations.
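Lawsen’s point is easy to see concretely: a model that emits a short recursive function has effectively solved Tower of Hanoi for any size, without spending the exponentially many output tokens that enumerating every move requires. The sketch below is illustrative of that compact-code evaluation style, not Lawsen’s exact prompt or the models’ verbatim output:

```python
def hanoi(n, source="A", target="C", spare="B"):
    """Return the full move list for n disks as (from_peg, to_peg) pairs.

    A ten-line function covers all 2**n - 1 moves, which is exactly the
    kind of answer the alternative evaluation accepts instead of an
    exhaustive step-by-step transcript that blows past token limits.
    """
    if n == 0:
        return []
    return (
        hanoi(n - 1, source, spare, target)   # park n-1 disks on the spare peg
        + [(source, target)]                  # move the largest disk
        + hanoi(n - 1, spare, target, source) # bring the n-1 disks back on top
    )

moves = hanoi(10)
print(len(moves))  # 2**10 - 1 = 1023 moves
```

Grading the function rather than the transcript separates reasoning ability from output-length constraints, which is precisely the confound Lawsen identified.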
In conclusion, evaluating AI reasoning abilities requires separating true reasoning capacity from practical output limits. Better testing frameworks are needed to fairly assess AI models’ problem-solving skills without being confounded by constraints like token limits.
https://www.perplexity.ai/discover/tech/expert-debunks-apple-study-cla-TBCVTq6kQ5m40URmoEIPlw
