Meta description: Global AI news for November 28, 2025: China moves AI training offshore, Google’s AI comeback accelerates, MIT warns 12% of jobs are automatable, and US–Japan ramp up AI policy.
Table of Contents
- Top 5 Global AI News Stories – November 28, 2025
- 1. China’s Tech Giants Move AI Training Offshore to Access NVIDIA Chips
- 2. Google’s AI Comeback: Gemini 3, Ironwood TPUs, and Market Repricing
- 3. MIT–ORNL “Iceberg Index” Finds AI Could Already Do 11.7% of U.S. Jobs
- 4. Trump’s “Genesis Mission” Executive Order Puts AI at the Core of U.S. Scientific Strategy
- 5. Japan Boosts AI Budget and Safety Capacity in New Supplementary Spending Plan
- Structured Data and Compliance Considerations
Top 5 Global AI News Stories – November 28, 2025
China’s Offshore AI Shift, Google’s Comeback, Labor Shock Forecasts, and New National AI Strategies
The global artificial intelligence landscape on November 28, 2025 is defined by intensifying geopolitical competition, aggressive corporate investment, and mounting societal concerns over automation. Chinese tech giants are reportedly moving cutting-edge machine learning training offshore to access restricted NVIDIA chips, highlighting how export controls are reshaping the AI supply chain. At the same time, Google’s Gemini 3 and its Ironwood AI chips are powering an “AI comeback” that has put Alphabet on the verge of a historic valuation, while a major MIT–ORNL study finds that today’s AI could already perform work equivalent to nearly 12% of U.S. jobs. In parallel, the Trump administration’s Genesis Mission aims to use AI to accelerate scientific discovery, and Japan is sharply increasing public AI spending and safety capacity. Together, these developments underscore how the AI industry is now a central axis of economic strategy, labor policy, and national security in the emerging era of global AI trends.
1. China’s Tech Giants Move AI Training Offshore to Access NVIDIA Chips
Headline: Chinese AI Leaders Shift Model Training Overseas to Circumvent U.S. Chip Controls
Top Chinese technology companies, including Alibaba and TikTok owner ByteDance, are moving the training of their latest large language models to data centers in Southeast Asia in order to access high‑end NVIDIA GPUs that are restricted inside China, according to reporting by the Financial Times and Reuters. These firms are leasing capacity in foreign‑owned data centers, particularly in Singapore and other regional hubs, to train frontier models such as Alibaba’s Qwen and ByteDance’s Doubao on NVIDIA accelerators that cannot be sold directly into the Chinese market under U.S. export rules. One notable exception is Chinese startup DeepSeek, which reportedly stockpiled NVIDIA chips ahead of the latest export bans and continues to train domestically while partnering with Huawei and other local chipmakers to optimize future Chinese AI hardware.
Additional coverage notes that offshore training activity surged after Washington tightened restrictions on export‑compliant chips such as NVIDIA’s H20 in April, and after a proposed “AI diffusion rule” that would have constrained foreign leasing arrangements was withdrawn earlier this year. Under the current framework, Chinese companies are barred from buying top‑end U.S. GPUs for use in China, and China has banned foreign AI chips from state‑funded data centers, but leasing foreign‑owned clusters abroad remains legal so long as the hardware is operated by non‑Chinese entities.
Editorial analysis:
This development illustrates how quickly sophisticated AI labs adapt to regulatory constraints. Technically, U.S. export controls still impede China’s ability to build sovereign, domestic AI training infrastructure at scale, but the offshore workaround preserves access to state‑of‑the‑art compute for foundational model training. For global policymakers, it underscores a core compliance challenge: regulating AI strictly by geography or hardware owner, rather than by beneficial user and model capability, leaves substantial room for arbitrage. For the AI industry, it also signals that high‑end compute has become as strategically contested as advanced semiconductors themselves, with enforcement design now as critical as headline bans.
2. Google’s AI Comeback: Gemini 3, Ironwood TPUs, and Market Repricing
Headline: Google’s Gemini 3 and Ironwood Chips Power Alphabet’s Resurgent AI Leadership
A detailed CNBC analysis describes how Google has “put together the pieces” of an AI comeback, driven by its Gemini 3 model family and its latest Ironwood Tensor Processing Units (TPUs). Gemini 3, launched this month, has been praised by partners such as Salesforce CEO Marc Benioff, who called the performance leap “insane” and said he was “not going back” to prior models after testing Gemini 3 for only two hours. Alphabet’s stock has surged nearly 70% year‑to‑date, briefly overtaking Microsoft’s market capitalization and fueling expectations that the company could soon join or surpass the $4 trillion valuation threshold highlighted in recent market reports.
On the infrastructure side, Google’s Ironwood (its seventh‑generation TPU) is claimed to be almost 30 times more energy‑efficient than its first cloud TPU from 2018, enabling customers to train and serve “the largest, most data‑intensive models” more cheaply. Reports that Meta is in talks to spend billions on Google’s TPUs from 2026–2027 further cement Google’s position as a serious challenger to NVIDIA’s dominance in AI hardware, and have already contributed to short‑term pressure on NVIDIA’s share price.
Editorial analysis:
The narrative that Google “missed the AI moment” after ChatGPT’s debut is now clearly being reassessed. Strategically, Google is executing on a vertically integrated stack—models, data, distribution platforms like YouTube, and now competitive custom silicon—that could make it uniquely resilient if frontier models become commoditized. From a machine learning and infrastructure perspective, the emerging Google–Meta TPU axis suggests a more multipolar AI hardware market, which could ease supply bottlenecks and price pressures for enterprises. However, Alphabet’s rapid re‑rating also raises the risk of investors over‑pricing short‑term model leadership in a field where state‑of‑the‑art status is measured in months, not years.
3. MIT–ORNL “Iceberg Index” Finds AI Could Already Do 11.7% of U.S. Jobs
Headline: New MIT Study: Today’s AI Is Technically Capable of Performing Work Equal to Nearly 12% of U.S. Jobs
A new study by the Massachusetts Institute of Technology (MIT) and Oak Ridge National Laboratory (ORNL) concludes that current AI systems are already technically and economically capable of performing tasks equivalent to 11.7% of the U.S. labor market, a wage‑weighted share worth about 1.2 trillion dollars annually. The research relies on the newly developed Iceberg Index, a large‑scale labor simulation tool that models roughly 151 million workers across 923 occupations and more than 32,000 skills to assess where AI agents can economically substitute for human tasks.
Rather than predicting specific layoffs, the Iceberg Index distinguishes between “visible” disruptions—such as well‑publicized tech layoffs—and “hidden” exposure in white‑collar sectors like finance, human resources, healthcare administration, logistics, legal and accounting services, where current artificial intelligence tools can already automate routine cognitive tasks. States such as Tennessee, North Carolina and Utah have begun using the Index to inform state‑level AI workforce action plans, while the tool is described by researchers as a “digital twin of the U.S. labor market” that can support stress‑testing of policy scenarios before decisions on training, regulation, or infrastructure investment are made.
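To make the wage‑weighted framing concrete, the sketch below computes a toy exposure aggregate in the spirit of the Iceberg Index: sum the wage bill of the task share AI can perform, divided by the total wage bill. Every occupation, headcount, wage, and automatable‑task share here is an invented placeholder for illustration, not a figure from the MIT–ORNL study, and the real Index uses a far richer simulation over 923 occupations and 32,000+ skills.

```python
# Toy wage-weighted "exposure" calculation, loosely in the spirit of the
# Iceberg Index. All numbers below are invented placeholders.
occupations = [
    # (occupation, workers, mean annual wage USD, share of tasks AI can do)
    ("payroll clerk",        120_000,   52_000, 0.60),
    ("logistics analyst",     90_000,   68_000, 0.45),
    ("registered nurse",   3_200_000,   86_000, 0.10),
    ("software developer", 1_600_000,  130_000, 0.30),
]

# Total annual wage bill across all modeled occupations.
total_wages = sum(n * w for _, n, w, _ in occupations)

# Wage bill attached to tasks that AI could technically perform today.
exposed_wages = sum(n * w * share for _, n, w, share in occupations)

# Wage-weighted share of work that is technically automatable.
exposure = exposed_wages / total_wages
print(f"Exposed wage share: {exposure:.1%}")
```

Note how the headline number is a share of wages, not a count of jobs: a high exposure can coexist with zero layoffs if the automatable tasks are spread thinly across many roles.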
Editorial analysis:
The Iceberg Index is one of the most concrete attempts to bridge the gap between abstract “AI exposure” studies and real policy planning. Its key message is not that 11.7% of jobs will disappear imminently, but that a meaningful share of current work is already automatable at today’s prices—well before more capable generations of generative AI arrive. For governments and firms, this implies that delaying reskilling, social safety net reforms, and productivity‑sharing mechanisms could turn a manageable transition into a disruptive shock. It also reframes AI risk away from only blue‑collar automation toward professional and administrative roles long perceived as insulated from such change.
4. Trump’s “Genesis Mission” Executive Order Puts AI at the Core of U.S. Scientific Strategy
Headline: U.S. Launches Genesis Mission to Harness AI, Supercomputing and Federal Data for Breakthrough Research
President Donald Trump has signed an executive order launching the Genesis Mission, a national effort to integrate AI systems, U.S. national lab supercomputers, and decades of federally funded scientific data into a unified platform for accelerated discovery. According to an official White House fact sheet, the Department of Energy is directed to build a “closed‑loop experimentation system” where foundational AI models can control robotic laboratories and run large‑scale experiments across priority areas including biotechnology, critical materials, nuclear fusion and fission, semiconductors, quantum information science, and space exploration.
Major technology and hardware vendors—including NVIDIA, AMD, Hewlett Packard Enterprise, and Dell—have pledged to deploy advanced computing systems into national laboratories as part of the initiative. Legal and policy analyses note that the order effectively places AI at the center of long‑term U.S. strategic competition, akin to the role nuclear research played during the Cold War, while leaving open questions about the scale and sources of funding required to support the envisioned compute footprint.
Editorial analysis:
For the AI industry, Genesis is significant less as a single project and more as a signal: U.S. federal science policy is being re‑architected on the assumption that frontier AI plus high‑performance computing is now a general‑purpose research accelerator. If implemented fully, this could compress timelines for breakthroughs in clean energy, materials and biomedicine—areas where the economic upside is very large and where the U.S. seeks to maintain or regain leadership. At the same time, concentrating sensitive models and critical datasets inside a powerful national platform heightens concerns about cybersecurity, dual‑use research, and the need for transparent governance frameworks that go beyond traditional export control logic.
5. Japan Boosts AI Budget and Safety Capacity in New Supplementary Spending Plan
Headline: Japan Earmarks About ¥400 Billion for AI, Fusion and Quantum – AI Safety Agency to Triple Staff
Japan’s government plans to allocate roughly ¥400 billion (around 2.6 billion U.S. dollars) in its fiscal 2025 supplementary budget to support artificial intelligence, nuclear fusion, and quantum technologies, according to The Japan News. Of this, about ¥190 billion is designated for AI‑related initiatives, including ¥45 billion specifically for integrating AI into scientific research to improve research productivity and accelerate data‑driven discovery. Additional funds—about ¥25.3 billion—will support development of AI‑powered robots and autonomous vehicles, while ¥4.4 billion will help promote AI use within government ministries and agencies.
A related report indicates that Japan’s AI Safety Agency (AISI) will also receive approximately ¥8.8 billion in supplemental funding to strengthen defenses against “backdoors” and other vulnerabilities in foreign‑made AI systems, with plans to triple its staff as part of the expansion. The agency’s remit includes developing systems to test third‑party AI models for hidden capabilities, monitoring model updates, and supporting ministries in secure deployment of AI across government.
Editorial analysis:
Japan’s move highlights how mid‑to‑large economies can craft a dual‑track AI strategy: aggressive adoption for productivity and scientific competitiveness, coupled with institutionalized AI safety oversight. Unlike purely industrial subsidies, the emphasis on AI in government workflows, robotics, and autonomous systems suggests a bid to weave AI deeply into public services and advanced manufacturing. At the same time, dedicated funding for AISI reflects growing recognition that security reviews, model evaluation, and supply‑chain assurance are not optional add‑ons, but core infrastructure for any country that relies heavily on imported AI technologies.
Structured Data and Compliance Considerations
For publishers syndicating this AI news coverage and seeking best‑practice SEO and compliance:
Schema.org/NewsArticle is recommended for the overall piece, with fields such as headline, datePublished, author, articleSection, about (e.g., “artificial intelligence”, “machine learning”, “AI industry”), and mainEntityOfPage. A small FAQPage block can be added for common questions (for example: “What is the MIT Iceberg Index?” or “How are Chinese firms bypassing NVIDIA export controls?”), provided answers are concise, factual, and sourced.
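A minimal sketch of the recommended markup, generated as JSON‑LD with Python’s standard library. The @type values and field names (headline, datePublished, author, articleSection, about, mainEntityOfPage, FAQPage, Question, Answer) are standard schema.org vocabulary; the headline, author name, URL, and answer text are placeholders, not actual page metadata.

```python
import json

# Placeholder NewsArticle markup following the fields recommended above.
news_article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Top 5 Global AI News Stories – November 28, 2025",
    "datePublished": "2025-11-28",
    "author": {"@type": "Organization", "name": "Example Newsroom"},  # placeholder
    "articleSection": "Artificial Intelligence",
    "about": ["artificial intelligence", "machine learning", "AI industry"],
    "mainEntityOfPage": "https://example.com/ai-news-2025-11-28",  # placeholder URL
}

# Optional FAQPage block for common reader questions, kept concise and factual.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the MIT Iceberg Index?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "A labor-market simulation by MIT and ORNL estimating how "
                        "much current work today's AI systems could already perform.",
            },
        }
    ],
}

# Each object would ship inside a <script type="application/ld+json"> tag.
print(json.dumps(news_article, indent=2))
print(json.dumps(faq_page, indent=2))
```

Serializing through json.dumps guarantees the embedded script payload is valid JSON, which structured-data validators require before they check the schema.org vocabulary itself.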
Where visual explainers or timelines are used, ImageObject or VideoObject markup should be attached, with clear attribution to data sources and rights holders.
All factual content above is drawn from reputable third‑party sources including Reuters, the Financial Times, CNBC, Fortune, NDTV, Tom’s Hardware, and national outlets such as The Japan News and Chosun Ilbo. This article itself is original editorial synthesis and analysis created for informational and journalistic purposes. Use of third‑party information is limited to what is necessary to report current events and is intended to fall under fair‑use and comparable news‑reporting exceptions in applicable copyright regimes. Any AI‑generated text here is presented transparently as news analysis, not as a substitute for the primary sources it cites.
