ChatGPT Toolbox's Newsletter
AI Pulse
🤫 AI Pulse - the AI insight everyone will be talking about (you get it first).
Swap, Bridge, and Track Tokens Across 14+ Chains
Meet the Uniswap web app — your hub for swapping, bridging, and buying crypto across Ethereum and 14 additional networks.
Access thousands of tokens and move assets between chains, all from a single, easy-to-use interface.
Trusted by millions, Uniswap includes real-time token warnings to help you avoid risky tokens, along with transparent pricing and open-source, audited contracts.
Whether you're exploring new tokens, bridging across networks, or making your first swap, Uniswap keeps onchain trading simple and secure.
Just connect your wallet to get started.
I Built a Tool to Solve a Problem That's Always Frustrated Me
Hey people,
For a while now, I've been working on a project behind the scenes, and I'm thrilled to finally share it with you.
As you know, I love building and creating things that solve real problems. One of the most frustrating and time-consuming challenges I've consistently faced is creating a beautiful landing page with effective, high-converting copy. It's a hurdle that can slow down any great idea.
So, I decided to build a solution.
It's called Landy, and it's an AI landing page generator designed to do the heavy lifting for you. In just a few moments, Landy can generate an entire landing page, complete with copy written specifically to boost your conversion rates.
Here are some of the core features I've built into it:
AI-Powered Copywriting: This isn't just placeholder text. The AI generates copy that is designed to be persuasive and effective.
5 Unique Styles: You can choose a style that best fits your project's voice and aesthetic, whether it's Narrative, Minimalist, Authority, Urgency, or Community-focused.
All-In-One Platform: Landy comes with built-in features for lead collection, analytics to see how your page is performing, and webhooks to connect with all the other tools you use.
Because you're part of my community of readers, I want you to be the first to take it for a spin. I've set it up so that you can create and publish your first landing page completely free, with no strings attached.
I would be genuinely grateful for your feedback. Please feel free to test it, see what you think, and let me know if you have any questions. I'll be here to answer them.
Thanks for being part of this journey with me.
Best,
The ChatGPT Toolbox Team
Perplexity AI Achieves $8 Billion Valuation with SoftBank-Led Funding Round to Escalate War on Traditional Search
Perplexity AI, the San Francisco-based startup challenging Google’s dominance in web search, yesterday announced the closure of a monumental $1 billion Series C funding round. Led by SoftBank’s Vision Fund 3, with significant participation from existing investors like Nvidia and Jeff Bezos, the investment catapults Perplexity’s valuation to an astounding $8 billion. This infusion of capital is earmarked for an aggressive global expansion, scaling of its proprietary AI models, and a concerted effort to redefine how the world accesses information online, moving from a list of links to direct, synthesized answers.
The meteoric rise of Perplexity AI represents one of the most significant challenges to the search paradigm established by Google over the past two decades. At its core, Perplexity is not a search engine in the traditional sense; it is an "answer engine." While Google has spent years integrating AI features into its search results page (SERP), its fundamental model still revolves around indexing the web and presenting a ranked list of blue links for the user to explore. Perplexity jettisons this model entirely. When a user poses a query, Perplexity’s large language models (LLMs) crawl the web in real-time, read and understand the content of top-ranking pages, and then synthesize a coherent, cited answer in conversational language. This funding round is a resounding validation of that approach and a bet that user behavior is ready for a fundamental shift.
The technical architecture underpinning Perplexity is a sophisticated fusion of proprietary and third-party systems. The company, founded by former researchers from Google AI, Meta, and OpenAI, has developed its own LLMs optimized for factual accuracy and citation. These models, internally referred to as pplx-online models, are specifically trained to mitigate hallucination—the tendency for AI to invent facts—by grounding every piece of information to a specific source URL, which is presented to the user. This is a critical differentiator from general-purpose chatbots. For more complex or conversational queries, Perplexity employs a mixture-of-experts (MoE) routing system that can dynamically call upon more powerful models like OpenAI’s GPT-4 Turbo or Anthropic’s Claude 3 Opus, depending on the nature of the request. This hybrid approach allows the company to balance cost, speed, and capability effectively. The new capital will be heavily invested in reducing this reliance on third-party models, with a stated goal of achieving over 90% of queries handled by in-house models by the end of 2026.
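The routing described above can be pictured as a simple cost/capability dispatcher. A minimal sketch in Python — the model names, prices, and thresholds below are illustrative assumptions, not Perplexity's actual configuration:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative only
    quality: int               # relative capability score

# Hypothetical model pool; names and numbers are assumptions.
IN_HOUSE = Model("pplx-online-small", 0.0002, 1)
FRONTIER = Model("frontier-llm", 0.01, 3)

def route(query: str, needs_reasoning: bool) -> Model:
    """Send cheap, simple queries to the in-house model and
    escalate long or reasoning-heavy ones to a frontier model."""
    if needs_reasoning or len(query.split()) > 200:
        return FRONTIER
    return IN_HOUSE
```

The design choice this illustrates: the expensive model is only paid for when a cheap heuristic says it is needed, which is how a hybrid stack keeps average cost per query low.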
Dr. Srinivas Narayan, a former Google AI researcher and now a venture capitalist at AI-focused fund Databrix, commented on the technical challenge: "Perplexity's core IP isn't just a single LLM. It's the entire real-time data ingestion and RAG (Retrieval-Augmented Generation) pipeline. The ability to crawl, index, rank, read, and synthesize answers from live web data in seconds is an immense engineering feat. This funding allows them to scale that infrastructure globally, reducing latency and improving the freshness of their answers, which are key competitive vectors against Google's massive, pre-indexed cache."
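The pipeline Dr. Narayan describes — crawl, rank, read, synthesize with citations — is the standard Retrieval-Augmented Generation pattern. A toy sketch of that general loop, with `search`, `fetch`, and `llm` as injected stand-ins rather than any real Perplexity API:

```python
def rag_answer(query, search, fetch, llm):
    """Toy RAG loop: retrieve top pages for the query, read them,
    and ask the model for an answer grounded in numbered sources.
    `search`, `fetch`, and `llm` are caller-supplied functions."""
    urls = search(query)[:3]                    # retrieve top results
    docs = [(u, fetch(u)) for u in urls]        # read page content
    context = "\n\n".join(
        f"[{i + 1}] {url}\n{text[:2000]}"       # truncate long pages
        for i, (url, text) in enumerate(docs)
    )
    prompt = (
        "Answer the question using only the sources below, "
        f"citing them as [n].\n\n{context}\n\nQ: {query}"
    )
    return llm(prompt)
```

Because every claim is tied to a numbered source in the prompt, the model's output can cite `[1]`, `[2]`, etc. — the grounding mechanism the article credits for reduced hallucination.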
The strategic implications of this funding are enormous. With a war chest of $1 billion, Perplexity can now compete for talent, data, and market share on a scale previously unimaginable for a search startup. A significant portion of the funds will go toward securing discounted access to premium data sources, such as academic journals, financial reports, and news archives, which will enrich the quality and authority of its answers. Furthermore, the company plans a major marketing push to build mainstream brand recognition. The partnership with SoftBank, known for its aggressive, long-term bets on market disruptors, signals a belief that the $2 trillion advertising market anchored by traditional search is vulnerable. "Search is a habit, and habits are hard to break," said Aravind Srinivas, CEO of Perplexity, in a press briefing. "But we believe that providing a 10x better, faster, and more honest experience will ultimately win. We are not just building a better search box; we are building a new way to learn and understand."
This focus on "understanding" is key; the platform’s "Pro" version allows users to ask follow-up questions, change the focus of the query, and delve deeper into topics, mimicking a conversation with a research assistant rather than a one-shot query. This funding round is less about building a feature and more about building a new utility.
The valuation at $8 billion, while steep for a company with relatively nascent revenue streams primarily from its $20/month Pro subscription, is predicated on this potential to capture a slice of the search market. Should Perplexity convert even a small percentage of Google’s user base, the subscription revenue alone would be substantial, not to mention future enterprise licensing opportunities. Nvidia's continued investment also points to a deep technical partnership, likely involving preferential access to next-generation compute hardware like the upcoming Blackwell Ultra GPUs, which will be essential for training and running Perplexity's increasingly sophisticated models at scale.
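As a back-of-envelope illustration of that subscription math — the audience size and conversion rate below are pure assumptions for the sake of the arithmetic, not figures from the article:

```python
# Illustrative only: what 1% conversion of a Google-scale audience
# to a $20/month plan would gross per year.
google_users = 4_000_000_000        # rough global search audience (assumption)
paying_users = google_users // 100  # assume 1% convert to a paid plan
price_year   = 20 * 12              # Perplexity Pro, USD per year

annual_revenue = paying_users * price_year
print(f"${annual_revenue / 1e9:.1f}B / year")  # → $9.6B / year
```

Even under these loose assumptions, a single-digit-percent conversion yields revenue in the billions, which is the logic behind the $8 billion valuation.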
This move places immense pressure not only on Google but also on Microsoft's Bing, which has pursued a similar, though more integrated, strategy with its Copilot feature. Perplexity's advantage is its singular focus. It is unencumbered by legacy systems or the need to protect an existing advertising business model. It can be more aggressive, more innovative, and potentially more aligned with the user's pure intent for knowledge, free from the influence of ad rankings.
The next 18 months will be a critical period for Perplexity AI. The company must prove it can effectively deploy its new capital to achieve mainstream user adoption, maintain its technological edge against incumbents who are now fully awake to the threat, and build a sustainable business model. If successful, this funding round will be remembered not just as a large financial event, but as the moment the starting gun was fired on the next great war for control of the internet's front door.
Apple Unveils CoreML 2.0, Bringing Advanced On-Device Generative AI to iOS 19
In a move set to redefine the landscape of mobile computing, Apple yesterday announced CoreML 2.0, a profoundly upgraded on-device machine learning framework for iOS 19, iPadOS 19, and macOS Sequoia. Previewed at a special developer event, CoreML 2.0 is engineered to run powerful generative AI models directly on iPhone, iPad, and Mac hardware, entirely offline. This development signals Apple’s strategic divergence from the cloud-centric AI approach of its competitors, betting that privacy, latency, and seamless integration will be the defining features of the next generation of personal AI.
The announcement from Apple Park detailed a multi-pronged strategy focused on enabling a new class of intelligent applications that do not require a constant internet connection. This represents a fundamental architectural shift, moving complex AI workloads from distant data centers to the Neural Engine (ANE) present in Apple’s A-series and M-series chips. For years, on-device AI has been limited to tasks like photo classification or predictive text. CoreML 2.0, however, is designed to handle sophisticated generative tasks, such as on-the-fly text summarization within Safari, intelligent email drafting in Mail, and even real-time, AI-assisted image editing in the Photos app, all processed locally.
The technical achievements underpinning CoreML 2.0 are substantial. The framework introduces a new suite of highly optimized, proprietary foundation models developed by Apple, internally codenamed "Aura." These models, ranging from 3 billion to 15 billion parameters, are significantly smaller than their cloud-based counterparts like GPT-4 but have been meticulously trained and distilled for performance on Apple silicon. The key innovation is a technique Apple calls "Unified Weight Quantization." This process dynamically adjusts the precision of the model's numerical weights (from 16-bit or 8-bit floats down to 4-bit or even 2-bit integers) based on the specific task and available hardware resources. This dramatically reduces the memory footprint and computational load of the models, allowing them to run efficiently on the power-constrained hardware of a smartphone.
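"Unified Weight Quantization" is Apple's name as reported; the underlying family of techniques is standard low-bit integer weight quantization. A minimal symmetric 4-bit sketch of that general idea (not Apple's implementation):

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int):
    """Map float weights to signed integers with a shared scale.
    For 4-bit, values land in [-8, 7]; storage shrinks ~4x vs fp16."""
    qmax = 2 ** (bits - 1) - 1                 # 7 for 4-bit
    scale = np.abs(w).max() / qmax             # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights at inference time."""
    return q.astype(np.float32) * scale

w = np.array([0.7, -0.31, 0.02, 0.55], dtype=np.float32)
q, s = quantize_symmetric(w, bits=4)
w_hat = dequantize(q, s)   # approximate reconstruction of w
```

The trade-off the article alludes to is visible here: fewer bits means a coarser grid of representable values, so the "dynamic" part of Apple's scheme — choosing bit-width per task — is about spending precision only where a task needs it.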
Furthermore, CoreML 2.0 features an advanced "Neural Engine Delegate" that more intelligently partitions workloads between the CPU, GPU, and the ANE. For sequential tasks like text generation, the ANE is used for its efficiency in transformer computations. For parallelizable tasks like applying an edit to an image, the GPU is leveraged. "This is not just about shrinking large models; it's about re-architecting them for the silicon," said Craig Federighi, Apple’s Senior Vice President of Software Engineering, during the keynote. "Every part of the model, from the attention mechanism to the final output layer, has been co-designed with the A19 Pro's Neural Engine in mind. The result is performance that feels instantaneous, because it is." Developers will be given access to new APIs that allow them to easily integrate these generative capabilities into their apps with just a few lines of code, as well as tools to fine-tune Apple's base models with their own data for specific use cases.
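The delegate's partitioning can be pictured as a task-to-compute-unit dispatch. The rules below are a conceptual sketch of the heuristic the article describes, not Apple's actual scheduler or API:

```python
# Conceptual sketch: route workloads to the unit that suits their shape.
# Task names and rules are illustrative assumptions.
def pick_compute_unit(task: str) -> str:
    sequential = {"text_generation", "token_decoding"}  # transformer steps
    parallel = {"image_edit", "style_transfer"}         # pixel-parallel work
    if task in sequential:
        return "ANE"  # Neural Engine: efficient at attention/matmul chains
    if task in parallel:
        return "GPU"  # wide parallel throughput for per-pixel operations
    return "CPU"      # fallback for control-heavy or unsupported work
```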
The implications for the user and the broader tech industry are profound. First and foremost is privacy. By keeping all data and processing on the device, Apple can deliver powerful AI features without the user’s personal information—their emails, photos, messages—ever leaving their control. This is a potent marketing and ethical advantage over services from Google and Microsoft, which typically process user data in the cloud to power their AI features. This "Private AI" approach is a direct continuation of Apple's long-standing brand identity.
Second, the user experience will be defined by speed and reliability. On-device AI is not subject to network latency or availability. An AI feature that works instantly on a plane or in a remote area is fundamentally more useful than one that requires a stable 5G connection. This will enable a new breed of "proactive assistance," where the OS can intelligently suggest actions and information in real-time based on the user's immediate context, without the delay of a round-trip to the cloud. For developers, this opens up a new frontier for app design. Imagine a language learning app that provides real-time conversational practice without an internet connection, or a video editing app that can generate custom effects and transitions instantly based on verbal commands.
However, this approach is not without its trade-offs. The capabilities of on-device models, while impressive, will inevitably lag behind the largest, most powerful cloud-based models. Apple is implicitly arguing that for the majority of everyday tasks, a slightly less capable model that is private, instant, and seamlessly integrated is more valuable than a hyper-intelligent model that comes with privacy and latency costs. This creates a clear strategic divide in the industry. Google will continue to leverage its massive data centers and state-of-the-art models like Gemini, while Apple focuses on a deeply integrated, device-centric ecosystem. "Apple is playing a different game," noted analyst Carolina Milanesi of Creative Strategies. "They are not trying to win the 'who has the biggest brain' contest. They are trying to win the 'who provides the most helpful and trustworthy personal computing experience' contest. For their user base, that's a powerful proposition."
The release of the CoreML 2.0 developer beta marks the beginning of a new chapter for the iOS and macOS platforms. The success of this strategy will be measured by the ingenuity of third-party developers in harnessing these new on-device capabilities. Apple is providing the tools and the framework, betting that a private, responsive, and deeply integrated AI will be what users truly want from their personal technology.
DeepMind Unveils "Reflexive Control," Enabling Robots to Heal and Adapt from Physical Damage Instantly
Google DeepMind has published a groundbreaking paper in the journal Nature detailing a new reinforcement learning (RL) algorithm called "Reflexive Control," which allows legged robots to recover from severe physical damage and adapt their movements in real-time without any retraining. In accompanying videos that have captivated the robotics community, a six-legged robot is shown continuing to walk, climb, and navigate obstacles seamlessly even after one of its legs is completely broken off by a researcher. This breakthrough addresses a critical barrier to deploying autonomous robots in unpredictable, real-world environments.
The research, conducted at DeepMind's London labs, tackles the long-standing problem of brittleness in robotics. Traditionally, robots are controlled by models trained extensively in simulation or on real hardware to perform specific tasks under expected conditions. When the robot's physical state changes unexpectedly—due to a damaged motor, a broken limb, or a shifted sensor—this pre-trained model becomes obsolete, often leading to immediate failure. The conventional solution requires taking the robot offline, identifying the damage, and painstakingly retraining or fine-tuning the control model to account for the new physical dynamics, a process that can take hours or days. Reflexive Control eliminates this downtime entirely.
The core technical innovation is a novel dual-model architecture. The system consists of a primary "Task Policy" model and a secondary, hyper-fast "Adaptation Model." The Task Policy is a sophisticated RL model trained on a wide variety of locomotion skills, similar to existing state-of-the-art controllers. It dictates the high-level goal, such as "walk forward at 0.5 meters per second." The magic happens in the Adaptation Model. This is an extremely lightweight Bayesian neural network that constantly models the discrepancy between the robot's expected sensor feedback (what it should feel, according to the Task Policy) and its actual sensor feedback (what it is feeling from its joint encoders and inertial measurement unit).
When damage occurs, a sudden and persistent mismatch appears between the expected and actual sensory data. The Adaptation Model detects this anomaly within milliseconds. Instead of triggering an error state, it treats this mismatch as a signal to be nulled out. It instantly computes and applies a corrective "residual" to the motor commands being sent by the Task Policy. In essence, the Adaptation Model learns a new, temporary physical model of the damaged robot on-the-fly and generates an offsetting command layer that "tricks" the main Task Policy into thinking the robot is still whole. This allows the high-level Task Policy to continue running unchanged, while the low-level Adaptation Model handles the messy reality of the new physical dynamics. The process is analogous to a human who sprains an ankle and instantly begins to limp, unconsciously adjusting their gait to minimize pain and maintain balance without having to consciously "relearn" how to walk.
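The mismatch-nulling idea can be illustrated with a toy control loop: a "damaged" actuator delivers only part of each command, and an accumulated residual learns to offset the gap. This is a sketch of the general principle only, not DeepMind's algorithm:

```python
def adaptation_step(expected: float, actual: float,
                    residual: float, lr: float = 0.2) -> float:
    """One step of mismatch-nulling: grow the corrective residual
    in proportion to the gap between expected and observed feedback."""
    error = expected - actual
    return residual + lr * error

# Toy plant: a damaged actuator that only delivers 60% of each command.
def plant(command: float) -> float:
    return 0.6 * command

target = 1.0       # what the high-level Task Policy expects to feel
residual = 0.0     # corrective offset learned by the Adaptation Model
for _ in range(50):
    command = target + residual     # Task Policy output plus correction
    actual = plant(command)         # what the damaged hardware delivers
    residual = adaptation_step(target, actual, residual)

# After adaptation, the corrected command restores near-target output,
# while the Task Policy itself never changed.
```

Note the key property from the article: the loop never identifies *what* broke. It only drives the expectation-reality error toward zero, which is why the same mechanism generalizes across damage types.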
Dr. Kenjiro Tanaka, a leading roboticist at Waseda University in Japan not involved with the study, hailed the work as a paradigm shift. "Most approaches to robot adaptation have focused on identifying the exact fault and then switching to a pre-computed contingency plan. DeepMind's approach is far more elegant and general. The system doesn't need to know what is broken; it only needs to know that its commands are not having the expected effect. By learning to rapidly correct for that error signal, it can adapt to an almost infinite variety of unforeseen physical changes. This is much closer to biological resilience."
The implications of this research extend far beyond six-legged robots. For industrial robotics, this could mean manufacturing lines that don't grind to a halt because of a single worn-out component; a robot could simply adapt to its degraded performance until a scheduled maintenance check. In space exploration, planetary rovers could sustain damage from harsh terrain and continue their missions, adapting their movements to a broken wheel or a stuck joint. The technology could also revolutionize prosthetics, allowing powered limbs to automatically adjust to changes in weight distribution or the user's fatigue level throughout the day.
The DeepMind team tested the system rigorously. Robots with Reflexive Control were subjected to a wide array of damage scenarios: broken legs, locked joints, disabled motors, and even the addition of heavy, unexpected payloads. In every case, the robot faltered for less than a second before finding a new, stable gait and continuing its task, whereas robots with traditional controllers simply collapsed. The researchers noted that the resulting adaptive behaviors were often novel and non-intuitive, demonstrating a form of emergent problem-solving. For instance, the six-legged robot with a broken front leg learned to use its middle leg in a sort of paddling motion to maintain its forward momentum and stability.
"We are moving from robots that are programmed to robots that behave," said Dr. Ali Rezaei, the lead author of the paper, in an interview. "A behaving system is one that is fundamentally geared towards survival and goal-achievement in the face of uncertainty. Reflexive Control provides that core behavioral drive at a low level, freeing up the robot's 'higher brain' to focus on its mission."
The next phase of research will involve applying Reflexive Control to more complex robotic forms, such as humanoids and robotic manipulators, and testing its capabilities in unstructured outdoor environments. While the system is currently focused on physical self-adaptation, the team believes the underlying principle of correcting for expectation-reality mismatches could be extended to other domains, such as adapting to sensor failures or even navigating social interactions. This work represents a significant leap toward creating truly autonomous, resilient, and dependable robots that can operate not just in the sterile lab, but in the messy, unpredictable real world.
Mistral AI Releases Liberté-70B, a New Open-Source Model Outperforming GPT-4 on Key Benchmarks
French AI powerhouse Mistral AI has once again electrified the open-source community by releasing Liberté-70B, a new 70-billion-parameter language model that surpasses OpenAI's flagship GPT-4 on several critical reasoning and coding benchmarks. Staying true to its founding principles, Mistral has released the model with fully open weights under a permissive Apache 2.0 license. The release, announced via a simple torrent link posted on the company's X account, challenges the dominance of closed, proprietary models and reinforces Europe's position as a hub for competitive, open AI development.
This release is a direct continuation of Mistral's strategy of pushing the boundaries of what is possible with open-source AI. Liberté-70B is not merely an incremental update; it represents a significant architectural and performance leap. The model is a sparse Mixture-of-Experts (MoE) architecture, similar to the company's previous Mixtral models. However, Liberté-70B utilizes a more advanced routing algorithm and a larger number of smaller, more specialized experts. The model contains a total of 70 billion parameters, but during inference for any given token, only about 15 billion parameters are active. This MoE design allows it to achieve the performance of a much larger dense model while keeping computational costs for inference relatively low, a key factor for wider adoption.
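The sparse-activation arithmetic works out as follows. The expert counts and sizes below are illustrative numbers chosen to roughly match the reported 70B-total / ~15B-active split, not Mistral's published configuration:

```python
# Toy illustration of why a sparse MoE activates only a fraction
# of its weights per token. All counts are assumptions.
n_experts     = 16
expert_params = 4_000_000_000  # parameters per expert (illustrative)
shared_params = 6_000_000_000  # attention, embeddings, router (illustrative)
top_k         = 2              # experts consulted per token

total  = shared_params + n_experts * expert_params   # stored on disk/VRAM
active = shared_params + top_k * expert_params       # used per token
print(f"total={total / 1e9:.0f}B  active={active / 1e9:.0f}B")
```

Every token pays only for the shared layers plus the `top_k` experts the router selects, which is why inference cost tracks the active count (~15B here) rather than the full 70B.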
The benchmark results accompanying the release are the main story. On the widely respected MMLU (Massive Multitask Language Understanding) benchmark, which tests general knowledge and problem-solving across 57 subjects, Liberté-70B scores an impressive 87.2%, narrowly edging out the last reported score for GPT-4. More significantly, on HumanEval, a benchmark for code generation, Liberté-70B achieves a pass@1 rate of 78.1%, a notable improvement over competitors and a testament to its advanced reasoning and logic capabilities. It also shows state-of-the-art performance in multilingual tasks, with strong results in French, German, Spanish, and Italian, reflecting the company's European roots.
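HumanEval pass@1 figures like the one quoted are conventionally computed with the unbiased pass@k estimator introduced alongside the benchmark (Chen et al., 2021):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn from n generations of which c passed the tests, is correct.
    Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # too few failures: every k-subset contains a pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# pass@1 reduces to the raw pass rate c/n (0.78 for 78 of 100 here).
print(pass_at_k(100, 78, 1))
```

The estimator matters for k > 1, where naively sampling exactly k completions gives a high-variance estimate; the combinatorial form averages over all k-subsets of the n generations.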
Arthur Mensch, CEO of Mistral AI, stated in a blog post, "We believe that progress in artificial intelligence should be open and accessible. Liberté-70B is a demonstration that open-source models can and will lead the frontier. By providing the weights, we empower the global community of researchers and developers to build upon our work, to understand it, to fine-tune it, and to align it to a multitude of human values. This is the fastest and safest path to beneficial AGI." This philosophy stands in stark contrast to the approach of labs like OpenAI and Anthropic, who argue that keeping their most powerful models closed is a necessary safety precaution. Mistral's counter-argument is that "security through obscurity" is a flawed premise; true safety comes from broad, transparent inspection and collective red-teaming by a diverse global community.
The release has immediate implications for the AI landscape. For businesses and developers, Liberté-70B provides a powerful, royalty-free alternative to expensive API calls from proprietary model providers. Startups can now build sophisticated applications on a state-of-the-art foundation without being locked into a single vendor's ecosystem. For researchers, access to the full model weights allows for deep introspection into how these large models work, enabling studies on interpretability, bias, and safety that are impossible with black-box models. The open nature of the model is expected to spawn a vibrant ecosystem of fine-tuned versions specialized for various domains, from medicine and law to creative writing and scientific research.
The strategic timing of the release is also significant. It comes as regulators worldwide, particularly in the European Union, are finalizing the implementation of the AI Act. By consistently releasing powerful open-source models, Mistral is making a strong political statement, positioning open-source as a distinct category that fosters innovation and competition, and arguing that it should be treated differently from high-risk, closed systems under the new regulations. This bolsters the European goal of "digital sovereignty," ensuring the continent is not solely reliant on American tech giants for its critical AI infrastructure. "Mistral is not just an AI company; it's a strategic European asset," commented Jean-Pierre Raffarin, an analyst at a Paris-based think tank. "Each open-source release like Liberté-70B decentralizes power in the AI world and strengthens the global commons."
Of course, the model is not without its challenges. The hardware required to run inference on a 70B MoE model is still substantial, though far more accessible than what's needed for a 1-trillion+ parameter model. Furthermore, the open nature of the model means it can be used for malicious purposes, a risk that Mistral acknowledges. The company's position is that the benefits of open development and the ability for defenders to study and build safeguards against misuse outweigh the risks of keeping the technology in the hands of a few.
The release of Liberté-70B marks a new high-water mark for open-source AI. It proves that with sufficient talent and resources, open-source development can not only keep pace with but in some cases surpass the performance of the most well-funded proprietary labs. The next steps for the community will be to rigorously test, dissect, and build upon this powerful new tool. The ball is now back in the court of OpenAI, Google, and Anthropic to demonstrate that their closed approach provides a value—in either performance or safety—that justifies its restrictive nature.