AI Pulse

🤫 The AI insight everyone will be talking about (you get it first).

In partnership with

You Don’t Need to Be Technical. Just Informed.

AI isn’t optional anymore—but coding isn’t required.

The AI Report gives business leaders the edge with daily insights, use cases, and implementation guides across ops, sales, and strategy.

Trusted by professionals at Google, OpenAI, and Microsoft.

👉 Get the newsletter and make smarter AI decisions.

Google to sign EU’s AI code of practice despite concerns

Alphabet’s Google announced on July 30, 2025, in Brussels that it will sign the EU’s voluntary AI Code of Practice, designed by 13 independent experts to guide compliance with the landmark AI Act by requiring providers to publish summaries of training content and adhere to EU copyright rules. While endorsing the Code’s aim of promoting secure, first‑rate AI tools for European citizens and businesses, Google cautioned that certain provisions—particularly those that could slow approvals, deviate from existing copyright law, or expose trade secrets—risk chilling model development and diminishing Europe’s competitiveness.

The EU AI Act’s obligations for general‑purpose AI models take effect on August 2, 2025. Ahead of that date, the European Commission published the voluntary Code of Practice on July 10, 2025 to offer legal certainty to providers of such models. The Code lays out baseline requirements—such as issuing concise summaries of the datasets used to train AI systems and complying with EU copyright law—to help signatories demonstrate conformity with the new rules.

The framework is structured around three core pillars—Transparency, Copyright, and Safety & Security—each containing detailed measures for companies developing AI models. According to a July 2025 update from law firm Dentons, the Transparency pillar introduces a “model documentation form” that functions as a user‑friendly template for disclosing model capabilities, intended use cases, known limitations, and key performance metrics.
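
Dentons’ summary names the fields the form covers but not an official schema. Purely as an illustration, a documentation entry spanning those four areas might look like the following sketch; every field name here is an assumption, not the official template.

```python
# Illustrative only: a hypothetical rendering of the kinds of fields the
# Code's "model documentation form" asks providers to disclose. Field names
# are assumptions based on press coverage, not the official template.
model_documentation = {
    "model_name": "example-model-v1",  # hypothetical model
    "capabilities": ["text generation", "summarization"],
    "intended_use_cases": ["customer support drafting", "research assistance"],
    "known_limitations": ["may hallucinate facts", "limited coverage of low-resource languages"],
    "performance_metrics": {"benchmark_accuracy": 0.71},  # placeholder number
    "training_data_summary": "Concise public summary of dataset sources and licensing.",
}
```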

Under the Copyright pillar, signatories must implement processes to ensure that training data complies with EU copyright law, including honoring takedown requests from rightsholders and providing summaries of content usage. This reflects the AI Act’s requirement that providers of general‑purpose AI models furnish regulators with robust evidence that datasets either consist of public‑domain materials or have been licensed appropriately.

The Safety & Security section compels companies to perform rigorous risk assessments, adopt measures to mitigate systemic risks—such as misuse for disinformation or biometric surveillance—and establish incident‑reporting protocols for any malfunctions or misuses. It also encourages adoption of technical tools like watermarking and provenance tracking to help detect and attribute AI‑generated content in real time.

In her analysis, Gry Hasselbalch, a contributor to the EU’s AI ethics guidelines, told Euronews that while the Code provides a useful compliance roadmap, it “does not introduce meaningful safeguards to prevent large‑scale harms such as bias amplification or mass surveillance,” and risks becoming a checkbox exercise rather than a driver of genuine responsibility.

Google’s Global Affairs President, Kent Walker, praised the Code’s potential to “promote European citizens’ and businesses’ access to secure, first‑rate AI tools” but warned that “departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe’s competitiveness.” This balanced stance reflects Google’s broader strategy of supporting regulation while safeguarding its proprietary technologies and research pipelines.

Microsoft President Brad Smith has indicated that Microsoft will likely follow suit and sign the Code, highlighting his company’s desire to “find a way to be supportive” and engage directly with the EU AI Office to refine implementation details. In contrast, Meta Platforms declined to sign, with Chief Global Affairs Officer Joel Kaplan arguing on LinkedIn that the Code “introduces legal uncertainties for model developers and goes far beyond the scope of the AI Act,” posing a threat to AI innovation in Europe.

Beyond the major U.S. players, Germany’s Aleph Alpha also plans to sign the Code, demonstrating early buy‑in from European AI startups eager to gain regulatory clarity ahead of the AI Act’s enforcement. Meanwhile, French AI firm Mistral and OpenAI have already confirmed their signatures, joining Google in endorsing the voluntary framework and setting a precedent for other model developers.

Industry observers warn that despite the Code’s voluntary nature, regulators are likely to use it as a de facto compliance benchmark, meaning that non‑signatories could face heightened scrutiny and enforcement actions under the AI Act once it takes effect. Companies that demonstrate adherence through the Code may benefit from reduced administrative burdens and stronger legal defenses in case of disputes.

From a technical perspective, the Code raises important questions about the trade‑off between transparency and protection of proprietary model internals. Requirements for detailed dataset summaries and model documentation could force companies to reveal sensitive information about data sources, annotation schemes, and hyperparameter settings, potentially eroding competitive advantages in foundation model development.

At the same time, proponents argue that the Code’s focus on transparency and accountability is critical for building public trust in AI systems and preventing misuse. By standardizing documentation practices—such as publishing model cards and data sheets—companies can facilitate independent audits, enable safer third‑party integrations, and help regulators identify high‑risk applications more efficiently.

Regionally, signing the Code sends a strong signal of commitment to Europe’s vision of digital sovereignty and responsible AI governance. It may also influence other jurisdictions, such as the United States and United Kingdom, to adopt similar voluntary frameworks or strengthen existing guidelines for general‑purpose AI models.

Ultimately, Google’s decision to sign the EU AI Code of Practice reflects the nuanced calculus that major AI developers face: balancing support for regulatory clarity and global standards against the need to protect proprietary research and maintain innovation‑friendly environments. Its endorsement sets a high bar for other firms and signals that the world’s leading AI developers take the EU’s regulatory ambitions seriously, even as they urge careful calibration to preserve trade secrets.

Looking ahead, the Code’s real‑world impact will depend on how rigorously signatories implement its requirements and how actively the EU AI Office engages in oversight. As the AI Act’s enforcement deadline approaches, signatories will likely be better positioned to navigate audits and compliance checks, while non‑signatories may need to meet the same standards under tighter legal constraints. Over the next year, regulators and signatory companies will need to collaborate closely to refine reporting templates, clarify ambiguous provisions, and develop enforcement mechanisms that balance safety, transparency, and competitiveness. The Code will also need periodic revision to address emerging risks such as AI‑driven cyberattacks, advanced synthetic media, and algorithmic decision making in critical infrastructure.

Other jurisdictions will be watching closely. A successful rollout could encourage similar voluntary frameworks in North America and Asia, while a clumsy implementation might push firms toward less regulated markets and undermine global harmonization efforts. The chief risk is that compliance becomes a perfunctory, checkbox exercise; the Code’s principles must instead be embedded in organizational cultures, R&D practices, and risk management frameworks, with independent audits and empirical research over the coming quarters providing the evidence base for refinements.

The ultimate test of the framework will be whether it prevents real‑world harms, such as disinformation campaigns during elections, biased hiring algorithms, and AI‑driven cyber threats, while still enabling Europe to compete at the forefront of global AI research and commercialization. The coming cycle of audits, compliance reports, and stakeholder dialogues will reveal whether this experiment in voluntary self‑regulation becomes a global gold standard or a well‑intentioned but ultimately toothless exercise.

Conclusion

Google’s decision to sign the voluntary EU AI Code of Practice reflects a critical intersection of regulatory ambition and innovation strategy, with signatories poised to benefit from early clarity yet mindful of the trade‑offs between transparency requirements and proprietary protection. As the AI Act’s enforcement deadline approaches, the real impact of this framework will hinge on rigorous implementation, active regulatory engagement, and ongoing revisions to address emerging risks—efforts that will define Europe’s role as a global standard setter in responsible AI.

Researchers create ‘virtual scientists’ to solve complex biological problems

Stanford Medicine researchers have developed an AI-driven virtual laboratory of agentic large language models to autonomously conduct interdisciplinary scientific research and design novel SARS-CoV-2 nanobodies in record time.

On July 29, 2025, Stanford Medicine researchers led by Dr. James Zou, together with collaborators at the Chan Zuckerberg Biohub, unveiled a Virtual Lab that uses multiple AI agents—acting as principal investigator, immunologist, machine learning expert, and critic—to autonomously design and validate novel nanobody binders against SARS-CoV-2 variants. The breakthrough demonstrates how agentic AI can accelerate interdisciplinary biomedical discovery.

Stanford Medicine’s announcement of a Virtual Lab composed of AI agents represents a paradigm shift in the conduct of scientific research. Traditionally, cutting‑edge biomedical discovery has depended on cross‑disciplinary teams of human experts, from immunologists to computational biologists. The Virtual Lab replaces or augments these human collaborations by deploying agentic large language models (LLMs) configured to serve as a principal investigator, specialists, and critic, enabling autonomous orchestration of research workflows. By leveraging advances in natural language reasoning, tool use, and parallel processing, agentic AI can tackle open‑ended research challenges with minimal human intervention. This proof‑of‑concept demonstrates that AI‑driven laboratories can accelerate hypothesis generation, experimental design, and candidate validation, drastically reducing the time and resources required to reach scientific insights.

The core of the Virtual Lab architecture is a hierarchy of AI agents. At the top sits the AI principal investigator (PI) agent, responsible for interpreting high‑level research questions and orchestrating the allocation of specialized agents. These include an immunology agent, versed in viral pathogenesis; a computational biology agent, skilled in sequence analysis and structure prediction; a machine learning agent, adept at model development and hyperparameter tuning; and a critic agent, whose role is to challenge assumptions, identify pitfalls, and maintain the integrity of the workflow. Each agent communicates via natural language, exchanging research agendas, hypotheses, and data analyses during automated “meeting” cycles. Agents request access to software tools, which are then integrated into their capabilities, enabling dynamic problem‑solving across stages.
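
The team’s actual prompts and orchestration code are not reproduced in the coverage. The sketch below is only a minimal illustration of the meeting cycle described above; the `Agent` abstraction, the `ask_llm` stand-in, and the round structure are all assumptions for the example, not Stanford’s implementation.

```python
# A minimal sketch (not Stanford's code) of an agentic "lab meeting" loop.
# `ask_llm` stands in for any chat-completion API call made with the agent's
# persona; roles and prompts are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Agent:
    role: str          # e.g. "principal investigator", "immunologist", "critic"
    system_prompt: str

def ask_llm(agent: Agent, message: str) -> str:
    """Placeholder: send `message` to an LLM using the agent's persona."""
    raise NotImplementedError  # wire up to an LLM provider of your choice

def lab_meeting(pi: Agent, specialists: list[Agent], critic: Agent,
                research_question: str, rounds: int = 3) -> str:
    agenda = ask_llm(pi, f"Set an agenda for: {research_question}")
    transcript = [f"PI agenda: {agenda}"]      # full audit trail, per the article
    for _ in range(rounds):
        for s in specialists:
            proposal = ask_llm(s, "\n".join(transcript))   # specialist contributes
            critique = ask_llm(critic, proposal)           # critic challenges it
            transcript += [f"{s.role}: {proposal}", f"critic: {critique}"]
        # The PI synthesizes the round and decides next steps.
        transcript.append("PI summary: " + ask_llm(pi, "\n".join(transcript)))
    return transcript[-1]   # final synthesized plan
```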

To translate conceptual designs into molecular candidates, the Virtual Lab implements a novel computational nanobody pipeline that fuses three powerful modeling frameworks. First, an Evolutionary Scale Modeling (ESM) component generates initial sequence proposals optimized for stability and antigen engagement. Next, the AlphaFold‑Multimer algorithm predicts the three‑dimensional geometry of nanobody‑spike protein complexes, providing structural insights into binding interfaces. Finally, the Rosetta suite refines and scores candidate nanobody structures, evaluating factors such as binding energy, solvent accessibility, and conformational flexibility. This integrated pipeline enables the rapid generation and assessment of dozens of nanobody designs in silico, automating tasks that previously required extensive manual scripting, iterative experimentation, and high‑performance compute resources.
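
ESM, AlphaFold‑Multimer, and Rosetta are real tools, but each requires substantial setup of its own. The sketch below shows only the data flow of the three‑stage loop described above, with stand‑in functions in place of the actual model invocations.

```python
# Sketch of the three-stage in-silico pipeline described above. The three
# stage functions are stand-ins: real runs would invoke ESM models, an
# AlphaFold-Multimer installation, and Rosetta, each with nontrivial setup.
def propose_mutations_with_esm(parent_nanobody: str, n: int) -> list[str]:
    """Stand-in: rank candidate mutations by ESM likelihood, return top n sequences."""
    ...

def fold_complex_with_alphafold_multimer(nanobody: str, spike: str) -> dict:
    """Stand-in: predict the nanobody-spike complex structure and confidence."""
    ...

def score_with_rosetta(structure: dict) -> float:
    """Stand-in: refine the structure and score the binding interface."""
    ...

def design_round(parent: str, spike_variant: str, n_candidates: int = 92):
    candidates = propose_mutations_with_esm(parent, n_candidates)
    scored = []
    for seq in candidates:
        structure = fold_complex_with_alphafold_multimer(seq, spike_variant)
        scored.append((seq, score_with_rosetta(structure)))
    return sorted(scored, key=lambda x: x[1])  # lower energy score = better binder
```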

In applying this pipeline to SARS‑CoV‑2, the Virtual Lab team focused on nanobodies—single‑domain antibody fragments derived from camelid heavy‑chain antibodies. Their small size and simple architecture offer advantages for computational modeling, as fewer residues simplify folding predictions and reduce conformational search spaces. Upon prompting, the AI agents converged on nanobody candidates targeting recent variants such as JN.1 and KP.3. They reasoned that nanobodies’ compact frameworks would facilitate more accurate AlphaFold predictions while preserving high‑affinity binding. Within days, the Virtual Lab produced 92 distinct nanobody sequences, each accompanied by in silico binding scores and structural annotations, allowing rapid down‑selection to the most promising leads.

Following computational design, select nanobody candidates were synthesized and validated in Dr. John Pak’s laboratory. Expression and purification protocols yielded stable nanobody preparations, which were subjected to surface plasmon resonance and enzyme‑linked immunosorbent assays to quantify binding kinetics. Two top candidates exhibited sub‑nanomolar affinity for both recent variants and the ancestral Wuhan spike protein, demonstrating broad neutralization potential. Off‑target screening showed minimal cross‑reactivity with unrelated proteins, underscoring the specificity of the AI‑designed molecules. These experimental results confirm the Virtual Lab’s ability to generate clinically relevant biomolecules, bridging the gap between computational prediction and real‑world efficacy in record time.

The Virtual Lab’s operational efficiency stems from the parallel nature of AI agent meetings and the elimination of human scheduling bottlenecks. During proof‑of‑concept studies, agents conducted multiple concurrent discussions, simulating weekly lab meetings and one‑on‑one consults in seconds. Human researchers intervened in less than one percent of exchanges, providing only high‑level guidance and budget constraints rather than micromanaging protocol details. Automated transcript logging preserves a complete audit trail, ensuring transparency and reproducibility. Where a conventional research cycle might span months—from hypothesis formulation to candidate validation—the Virtual Lab compressed this timeline to a single workweek, highlighting the transformative potential of agentic AI for accelerating discovery.

Dr. James Zou, associate professor of biomedical data science at Stanford, emphasizes that the Virtual Lab is not intended to supplant human ingenuity but to augment it. He notes, “Good science happens when diverse minds collaborate; agentic AI creates virtual diversity at scale, empowering researchers to explore far more hypotheses than human teams could handle.” Dr. Pak echoes this sentiment, describing the emergence of nanobody designs “beyond what previous human researchers published,” and lauding the AI agents’ strategic creativity. Lindzi Wessel of CZ Biohub SF highlights the practical significance: “The AI agents formulated a sophisticated pipeline that was both innovative and grounded in reality, delivering experimental blueprints in mere days.” These endorsements underscore the broad enthusiasm for AI‑human synergy.

Despite its promise, the Virtual Lab approach raises important considerations. Agentic AI systems can hallucinate plausible but incorrect data or become overconfident in edge‑case scenarios, underscoring the need for rigorous validation. Computational pipelines depend on the quality of underlying models and training data, which may introduce biases or blind spots. Ethical and regulatory frameworks must evolve to address accountability when AI‑generated designs advance to clinical trials. Data security and intellectual property issues also arise when proprietary datasets inform AI reasoning. Ongoing human oversight, transparent reporting, and systematic peer review remain essential to ensure that AI‑accelerated research maintains the highest scientific and ethical standards.

Looking ahead, the Virtual Lab framework offers a blueprint for democratizing access to specialized expertise across scientific domains. Beyond virology, similar multi‑agent platforms could tackle materials discovery, drug repurposing, or climate modeling by assembling domain‑specific agents equipped with tailored toolchains. Integration with laboratory automation and robotics could close the loop from in silico design to physical experimentation without human delay. Moreover, customizable agent roles could enable educational platforms where students engage in simulated research projects. As large language models continue to improve in reasoning and tool use, agentic AI may become an indispensable collaborator across academia and industry, reshaping the pace and scope of innovation in ways only beginning to be explored.

By refining agent protocols, expanding tool integrations, and embedding Virtual Labs within automated wet lab workflows, researchers can harness agentic AI to tackle increasingly complex challenges. Continued collaboration between AI developers, domain experts, and regulators will be key to realizing the full potential of AI‑driven discovery while safeguarding scientific integrity and societal benefit.

Google expands AI Mode in Search with Canvas, Search Live, file uploads, and desktop AI integrations

Google today unveiled significant upgrades to its AI Mode in Search, introducing Canvas, real‑time Search Live, file uploads, and expanded desktop features to enhance multimodal learning and exploration.

On July 29, 2025, Google announced that it is rolling out major enhancements to AI Mode in Search, its experimental AI‑powered feature, including the Canvas side panel for project planning, support for image and PDF uploads, real‑time visual assistance via Search Live with Google Lens integration, and new Chrome context‑menu shortcuts. These updates, launching first in the U.S. through the AI Mode Labs experiment, aim to transform how users learn, research, and interact with search results across mobile and desktop environments.

AI Mode was first introduced by Google as part of its ongoing effort to incorporate advanced generative AI into search, leveraging the company’s Gemini 2.5 model to process complex, multi‑step queries and deliver synthesized responses alongside traditional web links. Early users have praised its ability to handle nuanced questions and follow‑up prompts, calling it a bridge between conversational AI assistants and traditional search. However, until this week, AI Mode’s capabilities were largely limited to text‑based interactions and were available only on mobile devices to U.S. users enrolled in the AI Mode Labs experiment.

Canvas introduces a persistent side panel where users can assemble information snippets, create study plans, plan trips, or draft project outlines over multiple search sessions. By clicking the “Create Canvas” button, users can input a high‑level prompt—such as “plan my study guide for organic chemistry”—and the AI will generate structured outlines and resource lists that can be refined with follow‑up queries. The feature supports dragging and dropping text results, images, and links directly into the Canvas, effectively functioning as an AI‑powered notebook that maintains state across browser sessions. Google plans to extend Canvas to support additional file types and integrations with Google Drive later this year, potentially enabling collaborative editing and sharing.

AI Mode now allows users to upload images and PDF documents directly into the search interface, enabling the model to parse visual and textual content for contextualized responses. On desktop, students can upload lecture slides in PDF format, and the AI will summarize key points or answer detailed questions based on the document’s content. Image uploads leverage the same underlying technology as Google Lens, performing OCR and visual recognition to identify objects, text, and chart data within photos. Support for additional formats—such as Google Drive files and other document types—is slated to arrive in the coming months, broadening the utility for researchers and professionals.

Search Live, an integration of Project Astra, brings real‑time AI assistance through video input, allowing the AI to see through the device camera and provide live commentary. Users enrolled in the U.S. AI Mode Labs experiment can tap the Live icon in the Google app, point their camera at physical objects or diagrams, and engage in a conversational back‑and‑forth informed by both the visual feed and web data. This feature is powered by a low‑latency video pipeline that captures frames at configurable intervals, sending them to Google’s AI processing clusters for immediate analysis. Early testers report that Search Live can identify complex historical artifacts, chemical reactions, and technical schematics, making it a versatile tool for fieldwork and classroom demonstrations.

Google also announced that AI Mode features will be integrated directly into Chrome via the “Ask Google about this page” context menu, enabling users to highlight any portion of a webpage—text, diagram, or code snippet—and request AI‑generated explanations or summaries. This desktop integration extends AI Mode beyond the standalone tab in Search, embedding it into the browsing experience across all sites. PDF and image uploads will be available on desktop in the Google Search homepage interface, allowing multi‑modal queries without switching to mobile. Google plans to leverage Chrome’s AI Tools API to surface AI suggestions in omnibox and toolbar contexts, creating quick‑access shortcuts for common tasks like unit conversion, code debugging, and language translation.

Behind the scenes, AI Mode relies on a custom implementation of Google’s Gemini 2.5 multimodal model, which supports text, vision, and document understanding. The system uses a query fan‑out technique, breaking down user prompts into sub‑queries that are executed in parallel against Google’s index, retrieval‑augmented generation, and specialized task models. Canvas state is maintained by a server‑side session management service, which logs actions and refinements to a secure storage layer built on Google Cloud. Image and PDF analysis runs through an OCR and vision pipeline in Google Lens, with anonymized logging to preserve user privacy while improving model accuracy. Search Live’s live video processing leverages WebRTC streams and a GPU‑accelerated inference cluster, routing frames to a containerized model‑serving environment optimized for low latency.
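
Google has not published these internals, so the following is only a schematic of the fan‑out pattern itself: decompose a prompt into sub‑queries, run them concurrently, then synthesize the results. The function names and the three‑way split are illustrative, not Google’s code.

```python
# The query fan-out pattern in miniature. Both `decompose` and `retrieve`
# are stand-ins (an LLM decomposition step and an index lookup, respectively);
# only the concurrency structure is the point of this sketch.
import asyncio

async def decompose(prompt: str) -> list[str]:
    # Stand-in for an LLM call that splits a complex prompt into sub-queries.
    return [f"{prompt} - aspect {i}" for i in range(3)]

async def retrieve(sub_query: str) -> str:
    # Stand-in for a retrieval-augmented lookup against a search index.
    await asyncio.sleep(0.1)   # simulate I/O latency
    return f"results for: {sub_query}"

async def answer(prompt: str) -> str:
    sub_queries = await decompose(prompt)
    # Fan out: all sub-queries execute concurrently.
    results = await asyncio.gather(*(retrieve(q) for q in sub_queries))
    # Stand-in for the synthesis step that composes the final response.
    return "\n".join(results)

print(asyncio.run(answer("compare lithium-ion vs solid-state batteries")))
```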

Gemini 2.5, the backbone of AI Mode, was trained on a diverse mix of web crawls, high‑quality corpora, code repositories, and image‑text pairs to achieve comprehensive reasoning and multimodal understanding. Google has fine‑tuned the model on instructional datasets, including educational texts and step‑by‑step guides, to optimize its performance on classroom and research queries. Additional adapters facilitate domain‑specific tasks—for instance, a specialized chemistry adapter for parsing molecular diagrams and reaction equations. Performance benchmarks indicate that Gemini 2.5 achieves 92% accuracy on the Science Question Answering (SciQA) benchmark and outperforms previous versions by 15% in multimodal reasoning tasks.

By combining text, image, and video inputs, AI Mode exemplifies Google’s commitment to truly multimodal AI experiences, setting it apart from single‑modality chatbots. The integration of Google Lens, Chrome APIs, and cloud‑based compute creates a seamless workflow across devices and contexts. Developers can already prototype AI Mode‑like features using Google AI Studio and Vertex AI, though the tight integration with Search’s index remains unique to Google. Industry observers note that this architecture could serve as a blueprint for other web‑based AI assistants seeking deep search integration.

Students and educators are likely to embrace Canvas and file uploads for back‑to‑school planning, interactive tutoring, and curricular research. Universities may pilot AI Mode as part of digital literacy programs, teaching students to critically evaluate AI‑generated responses and use them responsibly. EdTech companies could integrate Canvas‑like features into learning management systems, creating collaborative AI‑driven workspaces for group projects. Some professors caution that reliance on AI for homework and study could exacerbate academic integrity concerns, necessitating robust honor code updates. However, proponents argue that interactive AI companions can enhance comprehension, encourage inquiry, and democratize access to specialized knowledge.

Google’s latest updates put pressure on Microsoft’s Bing Chat and OpenAI, both of which recently added file upload and browsing features to their AI assistants. Meta and Amazon have also announced AI search experiments, though none currently offer the same depth of integration with core search and browser experiences. Google’s advantage lies in its unmatched search index, global infrastructure, and browser footprint, which together provide a feature set difficult for newcomers to replicate. Analysts predict that these enhancements could increase Google Search engagement metrics and ad impressions through AI‑powered side panels and tooltips. This arms race in search‑centric AI may accelerate the rollout of similar features across ecosystems and drive consolidation among AI platform providers.

As AI Mode handles sensitive content from user‑uploaded files and live video streams, privacy safeguards are paramount, and Google has committed to end‑to‑end encryption for file uploads. Data used for model improvement is anonymized, and users will have granular controls to delete their Canvas sessions and uploaded files. Under the EU AI Act and related frameworks, features like Canvas summaries and file parsing may require transparency reports and data lineage documentation. Google will update its privacy policy and include disclosures in the AI Mode settings panel to comply with upcoming regulations.

These feature upgrades could drive monetization through premium AI subscriptions, similar to Google Workspace AI add‑ons, and bolster Google Cloud’s AI usage metrics. Advertisers may explore embedding sponsored Canvas templates or paid interactive guides into search workflows. For enterprises, the ability to upload project documents and receive AI‑driven analytics inline with search offers productivity applications beyond consumer use cases. Google’s investments in AI Mode also underscore its broader strategy to integrate generative AI across its product ecosystem, from Ads to Maps to YouTube.

“Canvas transforms search from a one‑off lookup into a continuous workspace,” says Robby Stein, VP of Product at Google Search. “By combining multimodal inputs and real‑time feedback, we’re redefining how people learn and explore online,” adds Hema Budaraju, Google’s VP of Product Management for Search. Chris Smith of BGR described AI Mode as “one of the most exciting new AI products of the year,” highlighting its potential to reshape search. Researchers Emma Harvey et al. caution that AI‑driven educational tools may inadvertently prioritize technical correctness over pedagogical process, underscoring the need for educator‑centered design in future iterations.

Conclusion

As these features roll out, Google will need to monitor user feedback, refine privacy controls, and expand integrations with Google Drive and Workspace to fully realize AI Mode’s potential across consumer, educational, and enterprise contexts.

HubSpot Launches First CRM Connector for Anthropic’s Claude

HubSpot has introduced the first‑ever CRM connector for Anthropic’s Claude AI assistant, allowing teams to ask natural‑language questions about their HubSpot data and receive contextualized insights, visualizations, and actions directly within existing workflows.

On July 29, 2025, in Boston, HubSpot unveiled its new CRM connector for Anthropic’s Claude AI assistant. This integration brings each customer’s unique HubSpot context—contacts, deals, tickets, and custom properties—into Claude, empowering marketing, sales, support, and customer‑success teams to generate tailored answers, charts, and graphs without leaving their CRM environment. By embedding AI directly into daily workflows, HubSpot aims to overcome the limitations of standalone language models that lack access to proprietary business data.

HubSpot’s Claude connector is built on a secure, permission‑aware architecture that respects existing HubSpot roles and data‑governance settings. Under the hood, it uses HubSpot’s public APIs and OAuth 2.0 authentication to grant Claude read‑only access to standard CRM objects. Once connected, teams can prompt Claude in plain English—for example, “Show me all high‑value deals closing this month, sorted by likelihood.” Claude then retrieves the relevant records, applies natural‑language understanding to interpret the query, and uses embedded charting libraries to generate shareable visualizations. From within the same interface, users can also ask follow‑up questions—such as “What’s the win rate for deals in the past quarter?”—or trigger HubSpot actions like creating tasks or drafting personalized emails based on the insights returned.

Connector Architecture and Security
The connector operates as a multi‑tenant service running on HubSpot’s Kubernetes clusters, with Claude requests routed through a dedicated connector microservice. When a user issues a query, the service validates their credentials and permissions via HubSpot’s identity platform, ensuring that Claude sees only the data the user is allowed to access. Queries are then translated into optimized GraphQL calls against HubSpot’s CRM API. Retrieved data is streamed to Anthropic’s Claude Sonnet 4 model via a secure, encrypted channel. Claude processes the data in‑memory, generating both text responses and JSON‑formatted visualization instructions, which the connector microservice renders client‑side using D3.js and HubSpot’s own UI components. All interactions are logged in HubSpot’s audit trail for compliance, and no data is stored beyond the duration of each session, preserving data privacy.
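
HubSpot has not released the connector’s source, so the sketch below only traces the request path described above. The permission check and GraphQL retrieval are stand‑ins for HubSpot‑internal services; the Claude call uses Anthropic’s public Python SDK, with the model identifier an assumption.

```python
# Sketch of one request through the connector, per the description above.
# `fetch_permitted_records` is a stand-in for HubSpot's permission-aware
# GraphQL layer; nothing is persisted after the response is returned.
import anthropic

def fetch_permitted_records(user_token: str, graphql_query: str) -> list[dict]:
    """Stand-in: validate the user's OAuth token, run a permission-scoped CRM query."""
    ...

def answer_crm_question(user_token: str, question: str) -> dict:
    records = fetch_permitted_records(
        user_token,
        'query { deals(filter: {stage: "closing"}) { name amount probability } }',
    )
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",   # assumed model identifier
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"CRM records: {records}\n\nQuestion: {question}\n"
                       "Answer, and include chart instructions as JSON.",
        }],
    )
    # The JSON visualization instructions would be parsed and rendered
    # client-side (e.g., with D3.js), per the architecture described above.
    return {"text": reply.content[0].text}
```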

“We built the HubSpot connector for Claude to ensure users only see the CRM data they’re allowed to access, while delivering powerful AI‑driven insights right where teams work,” said Karen Ng, EVP and Head of Product at HubSpot. “By grounding Claude’s reasoning in real business context, we’re giving teams a trusted copilot that can accelerate decision‑making without compromising security.”

Use Cases Across Departments

  • Marketing Optimization: Marketers can ask Claude to identify contacts who opened the last three email campaigns but did not click through, then instantly generate a pie chart illustrating segment sizes and next‑step recommendations. Claude can also surface campaign performance anomalies, such as sudden drops in click‑through rates, and suggest A/B testing strategies or audience refinements.

  • Sales Prioritization: Sales reps can query high‑value deals by stage and probability, then visualize pipeline velocity trends over time. Claude can recommend which deals to prioritize this week based on historical win rates, recent engagement metrics, and deal size thresholds. It can even draft follow‑up email templates personalized to each prospect’s industry and past interactions.

  • Support Insights: Support teams can ask Claude to list open tickets by priority and age, with a bar chart view of ticket volumes by issue type. Claude can recommend routing rules to balance workloads, highlight recurring customer pain points, and draft knowledge‑base articles for high‑frequency issues.

  • Customer Success Strategies: Customer‑success managers can query churn risk signals—such as a drop in login frequency or negative NPS feedback—and generate a dashboard of at‑risk accounts. Claude can propose tailored outreach cadences and content recommendations to re‑engage customers based on product usage patterns.

Industry Context and Competitive Landscape
HubSpot’s move reflects a broader trend of embedding AI directly into core business systems, a shift driven by the realization that generic AI assistants lack the domain‑specific context necessary for true productivity gains. Gartner recently forecast that by 2026, 70% of CRM platforms will include native AI connectors to enterprise data lakes and CRM objects. HubSpot, having maintained independence after a potential acquisition by Google fell through in mid‑2024, has aggressively expanded its AI capabilities over the past year—first with its Breeze Intelligence engine that unified first‑ and third‑party data, and then with a deep‑research connector for OpenAI’s ChatGPT in early June. The Claude connector differentiates itself through Anthropic’s focus on safety and interpretability, leveraging Claude’s advanced prompting techniques to reduce hallucinations and produce more reliable, context‑aware outputs.

MarTech veteran Sydney Blanchard of KMWorld commented, “HubSpot’s two‑pronged approach—supporting both ChatGPT and Claude—signals that enterprises will increasingly demand multi‑model support to balance innovation with governance. Each model brings unique strengths, and organizations will pick connectors based on security, compliance, and performance trade‑offs.”

Technical and Operational Implications

  • Performance and Scalability: The connector handles thousands of concurrent queries by autoscaling the microservice pods based on load. HubSpot reports 95th‑percentile query latencies under 800 ms for simple retrievals and under 2 s for complex visualization requests.

  • Governance and Compliance: By respecting HubSpot’s permission tiers and logging every query in the audit trail, the connector helps organizations meet GDPR and CCPA transparency requirements. Data never leaves the encrypted session context, addressing concerns about sending proprietary data to third‑party AI models.

  • Customization and Extensibility: Through HubSpot’s open‑source CLI and APIs, developers can create custom connectors that extend Claude’s capabilities to proprietary modules—such as custom ticket‑scoring algorithms or industry‑specific dashboards—using HubSpot’s app framework.

  • Future Roadmap: HubSpot plans to add write‑back capabilities, enabling Claude not only to read data but also to suggest and enact updates—like adjusting deal stages or updating contact records—subject to admin approval. Early tests of automated lead‑scoring workflows show promise in reducing manual data entry by up to 30%.

Expert Perspectives
“Integrating Claude directly with CRM data is a game‑changer for enterprise AI adoption,” said analyst Priya Patel of Forrester Research. “Enterprises want both the power of large language models and the governance that comes from having AI act on real business data within trusted boundaries. HubSpot’s connector hits that sweet spot.” An Anthropic spokesperson added, “We collaborated closely on the connector to ensure Claude’s safety mechanisms—such as guarded generation and content redaction—are preserved, even when handling sensitive CRM records.”

Challenges and Considerations
Despite the promise, organizations must carefully manage connector rollout to avoid overreliance on AI. Training and change management are critical to ensure teams understand both Claude’s strengths and its limitations. IT teams must monitor usage to avoid hitting API rate limits, and data‑quality issues in CRM records can lead to misleading insights if not properly cleansed. Lastly, as AI becomes more embedded, organizations should update governance policies to outline when AI‑driven recommendations require human approval.

HubSpot’s CRM connector for Claude represents a significant step toward truly contextualized AI in customer‑facing workflows. By marrying enterprise data with advanced language models in a secure, permission‑aware environment, it paves the way for more intelligent marketing, smarter sales prioritization, and more proactive customer support—heralding a new era of AI‑augmented business productivity.

Potential Next Steps
As adoption grows, expect HubSpot to extend the connector to support custom objects, project management modules, and third‑party integrations (e.g., ERP systems), while introducing write‑back automation and advanced analytics templates powered by generative AI—further blurring the line between data insight and operational execution.

Rakuten Launches Full‑Scale Rakuten AI, Unveils Vision for Agentic Ecosystem

Rakuten Group, Inc. and Rakuten Mobile, Inc. today announced in Tokyo the full‑scale launch of Rakuten AI—an agentic AI assistant integrated into Rakuten Link and available as a standalone web app—to revolutionize user interactions across shopping, fintech, travel, education, and more by unifying fragmented data, decoding intent, and driving personalized actions at scale.

Rakuten AI debuts on July 30, 2025, in Tokyo, offering mobile subscribers an AI chat interface, auto‑generated prompts, voice input, image search, translation, coding assistance, and more, all free of charge. Built with deep Japanese context awareness and seamless integration into Rakuten’s ecosystem, it aims to empower users with expert‑level insights and task automation across services like Rakuten Ichiba, Rakuma, Books, and Fashion. The launch marks the start of Rakuten’s agentic AI era and sets the stage for a broader rollout through its web app and e‑commerce marketplaces in autumn 2025.

Since its beta debut as Rakuten Link AI in November 2024, Rakuten AI has evolved into a sophisticated, multi‑modal assistant designed to function as an orchestration layer for the company’s sprawling digital ecosystem. At its core, the AI leverages a proprietary large‑language model fine‑tuned on Japanese web crawls, customer interaction logs, and domain‑specific corpora spanning retail, finance, travel, and lifestyle. By fusing natural language understanding with a customizable action engine, Rakuten AI decodes user intent—whether drafting a product return request, summarizing loyalty point balances, or comparing airfare options—and maps it onto concrete service APIs for immediate execution.

On mobile, Rakuten AI is embedded directly into the Rakuten Link app, which more than 20 million subscribers use daily for messaging and calls. Users tap the AI icon to initiate a chat, then type, speak, or snap a photo to kick off a conversation. Behind the scenes, prompts are tokenized and routed through an inference cluster on Rakuten Cloud’s secure Kubernetes platform, where the model runs parallel chains of retrieval‑augmented generation and action planning. An adaptive prompt library offers follow‑up suggestions—such as “Compare shipping times for these three products” or “Translate this chat into English”—that evolve based on each user’s preferences and usage history.

The standalone Rakuten AI web app, accessible via any browser with a Rakuten ID, extends this functionality beyond mobile. It features a workspace UI where users can store conversation threads, pin critical information, and share AI‑generated summaries across multiple sessions. Specialized modules support coding help—auto‑completing snippets in JavaScript or Python—and visual creation tools that generate custom social‑media graphics from text descriptions. Voice mode employs a real‑time speech‑to‑text engine paired with the LLM, enabling hands‑free queries and responses, while the image search module uses integrated OCR and object detection to analyze uploaded photos or screenshots for contextual answers.

“By seamlessly integrating Rakuten AI into all of our core services, we are empowering users to get more done in less time, whether that’s complex research, instant translation, or simply finding the best deal across our platforms,” said Ting Cai, Rakuten Group Chief AI & Data Officer. “This launch is a key milestone in realizing our vision for an Agentic Ecosystem, where AI agents drive deeper engagement and create greater value for our customers and partners.”

Rakuten’s agentic approach emphasizes not only understanding user queries but also orchestrating multi‑step tasks across services. For example, a user shopping for a new laptop can ask Rakuten AI to compare specifications across top sellers on Rakuten Ichiba, apply coupon codes from loyalty programs, and schedule delivery—all within a single conversational flow. The AI maintains a session state that tracks user directives, context switches, and partial results, reducing the need for repetitive prompts and minimizing cognitive load.
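
Rakuten has not detailed the session‑state mechanics, so the following is only a toy illustration of the idea: a session object accumulates partial results as each step of the laptop example executes, so the user never has to restate context. The step names and handlers are assumptions.

```python
# A toy illustration (not Rakuten's implementation) of session state carried
# across a multi-step agentic flow. Each handler stands in for a service-API
# call; the point is that every step sees everything gathered so far.
from dataclasses import dataclass, field

@dataclass
class SessionState:
    goal: str
    facts: dict = field(default_factory=dict)     # accumulated partial results
    pending: list = field(default_factory=list)   # remaining steps

def run_session(goal: str) -> SessionState:
    state = SessionState(
        goal=goal,
        pending=["compare_specs", "apply_coupons", "schedule_delivery"],
    )
    handlers = {
        "compare_specs":     lambda s: s.facts.update(best_laptop="example-sku"),
        "apply_coupons":     lambda s: s.facts.update(discount="5% loyalty coupon"),
        "schedule_delivery": lambda s: s.facts.update(delivery="2025-08-05"),
    }
    while state.pending:
        step = state.pending.pop(0)
        handlers[step](state)   # stand-in for a real service call
    return state

print(run_session("buy a new laptop").facts)
```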

Security and privacy are central to Rakuten AI’s design. All user inputs and model interactions occur over end‑to‑end encrypted channels, and sensitive information—such as financial data or personal identifiers—is redacted before being fed into the model. Temporary session tokens enforce data access controls, ensuring that the AI only retrieves information for services the user has authorized. Transcripts and usage logs are retained in encrypted form within a private cloud vault, with user‑centric controls allowing account holders to review or delete their data at any time.
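
Rakuten has not specified its redaction tooling. As a minimal sketch of the pre‑inference redaction step described above, pattern‑based masking might look like the following; real systems would use far more robust entity detection than these toy regexes.

```python
# A minimal sketch of redacting sensitive spans before text reaches the model.
# The patterns are illustrative toys, not production-grade PII detection.
import re

REDACTIONS = [
    (re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"), "[CARD]"),   # card numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),              # email addresses
    (re.compile(r"\b0\d{1,4}-\d{1,4}-\d{3,4}\b"), "[PHONE]"),             # JP phone formats
]

def redact(text: str) -> str:
    """Mask matched spans so raw identifiers are never fed into the model."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

assert redact("Mail me at taro@example.jp") == "Mail me at [EMAIL]"
```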

Industry analysts view Rakuten’s move as a pivotal step in democratizing AI‑driven commerce. “Rakuten AI’s deep integration across e‑commerce, fintech, and communications is a blueprint for how digital ecosystems can leverage agentic AI to boost engagement and lifetime value,” said Yuko Tanaka, senior analyst at TechnoRX Insights. “By unifying diverse datasets—purchase histories, loyalty points, and user preferences—Rakuten can deliver hyper‑personalized experiences that keep users within its ecosystem rather than bouncing between standalone apps.”

Competition is heating up. Google’s AI Mode in Search and Microsoft’s Copilot integrations already offer multi‑modal assistance, but Rakuten AI’s unique selling point lies in its native access to proprietary service data and localized Japanese context. Amazon’s Alexa and Apple’s Siri remain voice‑first interfaces with limited action orchestration, whereas Rakuten AI supports both conversational planning and direct execution of commerce, content, and communication tasks under unified session management.

Looking ahead, Rakuten plans to roll out Rakuten AI in its flagship marketplace, Rakuten Ichiba, in autumn 2025. There, AI agents will leverage granular transaction data and user attributes—such as browsing history, cart activity, and past purchase trends—to surface personalized product recommendations, dynamic pricing alerts, and customized promotional bundles. Integration with third‑party logistics partners could enable real‑time shipping cost estimates and route optimizations, while synergy with Rakuten Travel and Rakuten Bank could allow end‑to‑end trip planning and expense management under one conversational umbrella.

Beyond consumer applications, Rakuten AI’s extensible architecture supports developer‑facing APIs that allow external apps to tap into its understanding and action capabilities. Early partners in education are prototyping AI‑tutoring tools that combine Rakuten AI’s deep think search with domain‑specific knowledge modules for language learning and coding instruction. FinTech startups are exploring integrations that let users manage investment portfolios via conversational prompts, while hospitality platforms are testing chat‑based booking workflows powered by Rakuten AI.

Despite its promise, Rakuten AI faces challenges common to agentic systems: ensuring response accuracy, preventing hallucinations, and maintaining privacy compliance across jurisdictions. Rakuten’s internal “AI‑nization” initiative is driving ongoing model retraining with human‑in‑the‑loop feedback, domain‑specific evaluation benchmarks, and robust red‑team testing to identify edge‑case failures. Regulatory alignment with Japan’s emerging AI governance standards and global frameworks such as the EU AI Act will require continual updates to transparency reports, data lineage documentation, and consent mechanisms.

By turning Rakuten Link into an AI gateway and scaling its web app globally, Rakuten is staking its claim as a pioneer of the agentic web—an ecosystem where intelligent agents anticipate needs, unify data, and execute actions without friction. As the technology matures, Rakuten AI could evolve into an open platform where external developers contribute specialized agents—ranging from nutrition planners to legal assistants—under a shared orchestration layer. Such extensibility would mirror smartphone app stores but for AI‑powered workflows, potentially creating a vibrant marketplace of agentic microservices.

Conclusion

With its full‑scale launch of Rakuten AI, Rakuten has taken a decisive leap toward an interconnected, agent‑driven digital ecosystem that unifies services, personalizes experiences, and automates complex tasks. As the company rolls out support across its core platforms and opens the door for developer integrations, Rakuten AI is poised to redefine how users shop, communicate, and manage daily life—paving the way for an era of seamless, AI‑orchestrated experiences.