ChatGPT Toolbox AI Pulse

Your weekly digest of breakthroughs in AI hardware, open‑source reasoning, policy shifts, enterprise Copilot tools, and data‑center economics.

BRICS Leaders to Call for Data Protections Against Unauthorized AI Use

Background
During the opening of the BRICS summit in Rio de Janeiro, leaders dedicated significant discussions to the role of artificial intelligence in economic development and data governance.

Key Proposals
The core proposal urges the creation of legal mechanisms safeguarding individual and corporate data against unconsented AI training and inference activities. It stipulates that companies using large datasets for AI development must transparently report their data sources and obtain explicit licenses or permissions. The statement also recommends establishing an independent oversight body within the BRICS secretariat to monitor compliance and mediate disputes over data usage. Signatories would agree to a “fair payment” model, whereby data originators receive royalties or licensing fees proportionate to the value generated by AI applications.

Geopolitical Implications
By advocating for robust data protections, BRICS nations aim to counterbalance the dominance of Western‑based AI providers that often leverage loosely regulated data streams. This move could foster regional AI ecosystems by keeping data residency and privacy controls within member states, thereby stimulating local innovation hubs. Critics argue the proposals may introduce compliance burdens that deter smaller AI developers or fragment global AI supply chains. Nonetheless, supporters believe a harmonized BRICS approach could pressure multinational tech firms to negotiate equitable data‑sharing agreements.

Industry Response and Next Steps
Major technology firms operating in BRICS markets, including Alibaba, Google, and Meta, are likely to engage in lobbying efforts to shape the final wording of the regulations. Legal experts anticipate that drafting and ratification processes could span 12–18 months, with pilot programs launching in select sectors such as finance, healthcare, and telecommunications. Data‑intensive startups are already exploring self‑regulatory certification schemes to demonstrate compliance ahead of formal rules. The summit’s closing communiqué is expected to formalize the timeline for publishing model legislation and establishing the proposed oversight body.

Conclusion
BRICS’ call for data protections marks a strategic pivot toward asserting digital sovereignty and fair economic participation in the AI era. By balancing innovation incentives with privacy and compensation principles, member states aim to chart an alternative path to AI governance that prioritizes both growth and equity.

Character.AI Introduces TalkingMachines, a New AI Model That Can Generate Interactive Videos

Background
On July 5, Character.AI announced TalkingMachines, a cutting‑edge AI model designed to generate interactive videos using just a static image and an accompanying voice signal. The launch reflects a broader industry trend toward multimodal generative AI capabilities that seamlessly blend visual and auditory content. Character.AI, known for its advanced conversational agents, is expanding its footprint by enabling users to create dynamic video clips without specialized hardware or extensive editing skills.

Technical Innovations
TalkingMachines leverages synchronized vision‑language architectures trained on large-scale video datasets to understand facial movements, lip synchronization, and expression dynamics. The model incorporates a novel audio‑visual alignment module that ensures generated mouth movements accurately reflect the input voice signal, resulting in lifelike interactions. Pre‑configured style presets allow users to adjust mood, lighting, and animation intensity, offering creative flexibility without technical expertise. Character.AI claims TalkingMachines can produce videos in multiple styles—ranging from photorealistic to stylized animation—within seconds on consumer‑grade GPUs.
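Character.AI has not published the alignment module's internals, but the underlying idea of scoring audio‑visual synchrony can be sketched as a per‑timestep similarity between paired audio and frame embeddings. The embedding vectors below are toy placeholders, not anything from the actual model:

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def alignment_score(audio_embs, frame_embs):
    # Mean per-timestep similarity between paired audio and visual
    # embeddings; higher means mouth movements track the voice signal.
    pairs = list(zip(audio_embs, frame_embs))
    return sum(cosine(a, f) for a, f in pairs) / len(pairs)

# Toy embeddings: aligned pairs point the same way, misaligned pairs do not.
audio = [[1.0, 0.0], [0.0, 1.0]]
aligned_frames = [[1.0, 0.0], [0.0, 1.0]]
shuffled_frames = [[0.0, 1.0], [1.0, 0.0]]

print(alignment_score(audio, aligned_frames))   # 1.0
print(alignment_score(audio, shuffled_frames))  # 0.0
```

In a real system the embeddings would come from trained audio and vision encoders, and a loss built on this kind of score would be minimized during training rather than checked after the fact.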

Use Cases and Applications
Potential applications for TalkingMachines span marketing, education, and entertainment sectors, where dynamic video generation can personalize user experiences at scale. E‑commerce companies could deploy AI‑generated spokespeople for product demonstrations, while educators might create custom video lessons featuring virtual instructors. Social media influencers and content creators stand to benefit from cost‑effective video production workflows, leveraging TalkingMachines to produce engaging content without professional teams. Character.AI has also hinted at an API launch later this year to integrate TalkingMachines into third‑party applications.

Industry Reception
Early adopters have praised the model’s ease of use and output quality, though some have noted artifacts in complex lighting conditions or rapid head movements. Competitors such as Meta and Google are racing to release similar multimodal models, underscoring a high-stakes battle for market leadership in generative video AI. Ethical considerations regarding deepfake potential have prompted Character.AI to include watermarking features and usage monitoring to mitigate misuse. The company is also engaging with policy forums to develop industry best practices around transparency and consent in synthetic media.

Future Directions
Looking ahead, Character.AI plans to enhance TalkingMachines with real‑time editing capabilities, allowing users to tweak generated videos instantly during playback. Research efforts are focused on improving robustness to occlusions, complex backgrounds, and multi‑speaker scenarios. Partnerships with cloud providers aim to scale inference pipelines and reduce latency for enterprise deployments. As generative AI models evolve, TalkingMachines represents a step toward truly immersive, AI‑driven video creation across industries.

Start learning AI in 2025

Everyone talks about AI, but no one has the time to learn it. So, we found the easiest way to learn AI in as little time as possible: The Rundown AI.

It's a free AI newsletter that keeps you up-to-date on the latest AI news, and teaches you how to apply it in just 5 minutes a day.

Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.

How Brex Is Keeping Up with AI by Embracing the ‘Messiness’

Background
On July 6, corporate credit card provider Brex revealed a transformative strategy for evaluating AI tools amid rapid industry evolution. Faced with a proliferation of over 1,000 AI products, Brex’s engineering teams initially struggled with lengthy pilot programs that often outlasted the tools’ relevance. The company’s CTO, James Reggio, described how conventional procurement cycles hindered agility in adopting emerging technologies.

Redesigned Procurement Framework
Brex implemented a decentralized testing framework that empowers individual engineers with a monthly budget to license approved AI tools directly. This approach leverages a “superhuman product‑market‑fit test,” which focuses on rapid, hands‑on evaluation rather than protracted approvals. Legal validations and data processing agreements have been standardized to reduce friction when introducing new tools. According to Reggio, this shift cut evaluation times from several months to just days, ensuring the company stays on the cutting edge.

Culture of Experimentation
By decentralizing decision‑making, Brex fostered a culture where experimentation and controlled risk‑taking became the norm. Teams can trial up to five major AI deployments concurrently and cancel underperforming tools without centralized approvals. This “messiness” acknowledges that no single tool fits all use cases, and that organic adoption patterns reveal the most valuable solutions.

Business Impact
Brex reports improved development velocity and a clearer understanding of enterprise‑level AI tool usage across its organization. Data from pilot programs enabled the procurement team to negotiate broader licensing deals based on actual headcount usage, leading to cost savings. The methodology also highlighted security and compliance priorities as teams surfaced tool-specific risks during rapid evaluations.

Lessons for Enterprises
Reggio advises that companies resist over‑engineering their AI adoption processes and instead embrace iterative, decentralized frameworks. Organizations should establish clear guardrails—such as pre‑approved tool lists and budget thresholds—while granting autonomy to end users. This balance between control and flexibility can accelerate innovation and prevent enterprises from falling behind in the fast‑moving AI landscape.
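Brex has not published its internal tooling, but the guardrails Reggio describes, a pre‑approved tool list plus a per‑engineer budget threshold, could be sketched as a simple check. Tool names and budget figures below are hypothetical:

```python
APPROVED_TOOLS = {"code-assistant", "doc-summarizer", "test-generator"}  # hypothetical names
MONTHLY_BUDGET = 500.00  # per-engineer monthly cap, hypothetical figure

def can_license(tool: str, cost: float, spent_this_month: float):
    # Autonomy within guardrails: engineers license approved tools
    # directly, as long as the monthly budget is not exceeded.
    if tool not in APPROVED_TOOLS:
        return False, "not on the pre-approved list"
    if spent_this_month + cost > MONTHLY_BUDGET:
        return False, "monthly budget exceeded"
    return True, "approved"

print(can_license("code-assistant", 49.00, 120.00))  # (True, 'approved')
print(can_license("shadow-it-tool", 10.00, 0.00))    # (False, 'not on the pre-approved list')
```

The point of the design is that the slow steps (legal review, data processing agreements) happen once, when a tool joins the approved list, so each individual licensing decision needs no central sign‑off.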

Criminal Hackers Are Employing AI To Facilitate Identity Theft

Overview of Emerging Threats
Cybercriminals are increasingly leveraging AI-driven methods to orchestrate sophisticated identity theft schemes, exploiting deepfake audio, synthetic images, and network infiltration techniques. According to recent reports, attackers deploy AI tools to automate the generation of fraudulent IDs, personalize phishing messages, and bypass traditional security checks. These AI‑enhanced fraud tactics significantly lower the effort required to create convincing forgeries, amplifying the scale and speed of attacks.

Tactics and Techniques
One common tactic involves training generative models on leaked personal data to produce synthetic identities that pass KYC (Know Your Customer) verifications. Attackers also use voice cloning to mimic victims’ voices on customer service calls and push through fraudulent transactions. Deepfake videos are employed to manipulate AI‑based surveillance and facial recognition systems, allowing imposters to gain physical access to restricted areas. Additionally, AI-driven network scanning tools identify vulnerable endpoints with high-privilege credentials, facilitating lateral movement within corporate networks.

Impact on Businesses and Individuals
Financial institutions report a marked increase in fraud losses tied to AI‑powered identity theft, with some banks experiencing spikes of up to 30% in unauthorized account openings. Victims face not only financial losses but also long‑term credit damage and privacy breaches, as personal data is circulated in underground marketplaces. Small and mid‑sized enterprises (SMEs) are particularly vulnerable due to limited cybersecurity resources and reliance on automated verification processes.

Mitigation Strategies
Experts recommend adopting multi‑factor authentication (MFA) beyond voice or SMS to include behavioral biometrics and device‑based signals. AI countermeasures, such as anomaly detection models trained on genuine user behavior, can flag suspicious access attempts in real time. Regular audits of KYC workflows, combined with adversarial testing of identity verification systems, help identify gaps exploitable by AI‑driven attacks. Collaboration between financial institutions and cybersecurity vendors to share threat intelligence is also critical.
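As an illustration of the "anomaly detection trained on genuine user behavior" idea, a minimal behavioral‑biometrics check might baseline a user's typical keystroke timing and flag sessions that deviate sharply. The feature values below are invented for the example; a production system would use many features and a proper model:

```python
from statistics import mean, stdev

def fit_baseline(values):
    # Learn a simple baseline (mean, std dev) from genuine sessions.
    return mean(values), stdev(values)

def is_anomalous(value, baseline, threshold=3.0):
    # Flag a session whose feature lies more than `threshold`
    # standard deviations from the genuine-user baseline.
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Mean keystroke interval (ms) across a user's past genuine sessions.
genuine_sessions = [112, 118, 109, 121, 115, 117, 111, 119]
baseline = fit_baseline(genuine_sessions)

print(is_anomalous(114, baseline))  # False: consistent with the user's habits
print(is_anomalous(45, baseline))   # True: bot-like, scripted input speed
```

Such checks complement, rather than replace, MFA: a cloned voice or stolen password carries none of the victim's behavioral signals, so the anomaly score gives the defender an independent line of evidence.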

Future Outlook
As AI capabilities continue to advance, the arms race between cybercriminals and defenders will intensify. Organizations must invest in AI‑powered security platforms that evolve alongside emerging attack vectors. Policymakers are considering regulations that mandate minimum security standards for identity verification systems and penalties for providers whose tools are misused for fraud.

Learn AI in 5 minutes a day

What’s the secret to staying ahead of the curve in the world of AI? Information. Luckily, you can join 1,000,000+ early adopters reading The Rundown AI — the free newsletter that makes you smarter on AI with just a 5-minute read per day.

‘Will AI Take My Job?’ A Trip to a Beijing Fortune‑Telling Bar to See What Lies Ahead

Context and Cultural Phenomenon
On July 5, The Guardian published a feature on the rise of “fortune‑telling bars” in Beijing, where patrons seek mystical insights into their career prospects amid growing AI disruption. These bars blend traditional divination practices with modern concerns over automation, offering personalized readings that tap into anxieties about machine‑driven job displacement.

The Fortune‑Telling Experience
Visitors choose from various rituals—such as tarot, palmistry, or symbolic readings—in settings adorned with neon lights and vintage decor. Practitioners employ a fusion of ancient Chinese metaphysics and contemporary pop culture references to deliver forecasts that resonate with tech‑savvy millennials. Many clients report feeling reassured after their readings, even if the predictions are intentionally ambiguous.

Societal Implications
The popularity of these bars underscores widespread uncertainty about AI’s impact on employment, particularly among recent graduates and mid‑career professionals. While China invests heavily in AI research and development, the labor market has yet to fully absorb displaced workers from manufacturing and service sectors. The trend illuminates a psychological coping mechanism, blending cultural heritage with contemporary fears.

Expert Perspectives
Economists interviewed for the article emphasize that while AI will automate routine tasks, it will also create new job categories focused on AI oversight, data labeling, and human‑AI collaboration. However, they warn that workforce retraining programs have lagged behind technology adoption, exacerbating skill gaps. Social commentators note that the fortune‑telling phenomenon reflects a deeper need for community and narrative amid rapid technological change.

Conclusion
The Beijing fortune‑telling bar trend illustrates how individuals navigate AI‑induced uncertainty through cultural rituals that provide psychological comfort. As AI continues to reshape industries, blending traditional wisdom with futuristic concerns may offer a blueprint for collective resilience in times of upheaval.