[Comprehensive Report by Global Network and Red Star News] In late February 2026, the U.S. artificial intelligence startup Anthropic found itself at the center of two major industry events at once: the U.S. Pentagon issued an ultimatum demanding that it lift its AI ethics restrictions to accommodate military use, while the company quietly abandoned its core commitment to “never train AI systems unless safety is fully ensured,” prompting an industry-wide debate on how to balance safety governance against commercial competition. Together, the two events reflect the complex interplay of technological competition, ethical boundaries, and geopolitics in the global AI field, and developments have been tracked continuously by authoritative outlets including Time magazine and Red Star News.
Pentagon Issues Ultimatum: Lift Ethical Barriers, or Face Forced Takeover of Claude Technology
On February 24, local time, the U.S. Pentagon delivered an explicit ultimatum to Anthropic: lift the ethical restrictions on its core product, the Claude AI system, by 5:01 p.m. that Friday and allow the U.S. military to use it without restriction for “all legitimate purposes.” Otherwise, the Pentagon would take punitive measures, including canceling a $200 million contract and forcibly taking over the product’s technology under the Cold War-era Defense Production Act. The news was disclosed by Red Star News, citing multiple insiders and U.S. defense officials.
Anthropic’s Claude model has reportedly long been the only AI model available to the U.S. military for sensitive intelligence work, weapons development, and battlefield operations, but the two sides have long differed over the ethics of war. The immediate trigger for the dispute was the U.S. military’s operation in January to seize Venezuelan leader Nicolás Maduro: sources said the Claude model was used to assist the operation, prompting deep concern at Anthropic about how its products were actually being used and even an inquiry to the Pentagon to verify the situation. That inquiry was seen as an indirect challenge to the military’s authority over the model’s use.
Facing the pressure, Anthropic CEO Dario Amodei made clear that he refused to let Claude be used to drive fully autonomous weapons or to conduct mass surveillance of Americans, while emphasizing that the company’s ethical restrictions would not interfere with “legal and legitimate” U.S. military operations. After the meeting, he issued a statement warning that some of the safety guardrails in the global AI field have begun to erode. As of press time on February 25, Anthropic had signaled it would withdraw from the negotiations and forfeit the $200 million military contract; the Pentagon had not yet commented officially, and the two sides remained in a nominal state of “good-faith dialogue.”
Notably, while Anthropic remains locked in a standoff with the Pentagon, Elon Musk’s xAI has already compromised, signing an agreement that allows the U.S. military to use its Grok model in classified systems and making Grok the second AI model approved for the Department of Defense’s most highly classified systems. This has further intensified the competitive pressure and geopolitical bind facing Anthropic.
Parallel Reversal: Anthropic Abandons Its “Safety First” Commitment Under Competitive Pressure
On February 25, a day after the Pentagon’s ultimatum, Time magazine revealed another major shift at Anthropic: the company has thoroughly revised the Responsible Scaling Policy (RSP) it had followed since 2023, officially dropping its core commitment to “never train or release any AI model unless risk-mitigation measures are fully proven to be in place.” A safety baseline once praised across the industry has been abandoned under competitive pressure.
Anthropic was founded by former core members of OpenAI and has held “safety first” as its central philosophy since its founding. Its RSP was widely seen as a counterweight to the “speed first” ethos of the AI industry, and the company’s leadership has repeatedly warned in public that AI developed without strict safety controls could pose catastrophic risks to human society. The central reason for the policy reversal is intensifying competition: as giants such as OpenAI and Google accelerate development of large models, Anthropic faces the risk of being marginalized.
Jared Kaplan, Anthropic’s chief science officer, told Time magazine: “Stopping AI model training is not good for anyone. Amid rapid technological iteration, it is not wise for us to unilaterally honor the commitment if competitors pull ahead.” The remark lays out plainly the practical calculus behind abandoning the safety commitment. The adjustment means Anthropic will shift fully to a “technology iteration first” strategy, entering all-out competition with giants such as OpenAI and Google, and marks a major loosening of the AI safety and ethical baseline it had long upheld.
Industry Shock: Imbalance Between Safety and Competition, New Challenges for Global AI Governance
Anthropic’s twin moves quickly triggered broad discussion across the global AI industry, exposing the core contradiction of the current AI field: the imbalance between technological competition and safety governance. According to the 2025 Artificial Intelligence Index Report from the Stanford Institute for Human-Centered AI (HAI), global private AI investment reached $252.3 billion in 2024, up 44.5% year on year, of which U.S. private AI investment was $109.1 billion, more than 43% of the global total. Heavy capital inflows have pushed the industry into a stage of race-style development, while AI-related safety incidents rose 56.4% year on year, making the lag of safety governance behind technological iteration increasingly pronounced.
Industry analysts argue that Anthropic’s compromise is, to a degree, representative of the industry: on one hand, AI startups squeezed by the giants face survival pressure and must relinquish some safety commitments to stay competitive; on the other, governments’ demand for AI applications, especially in the military domain, deepens the dilemma companies face between ethics and interests, safety and development. According to CB Insights’ Q2 2025 Global Artificial Intelligence Report, there are now 302 AI unicorns worldwide, more than a third of them in the United States; intensifying competition is forcing companies to rebalance the priority of safety against development.
Anthropic’s case also poses new challenges for global AI governance. More than 20 countries have issued Responsible AI (RAI) governance frameworks, but unified implementation standards and supervision mechanisms are lacking, leaving companies prone to loosening their ethical baselines under competitive pressure or government demands. AI safety experts warn that Anthropic’s abandonment of its core safety commitments could trigger a chain reaction: if more companies follow suit, AI risks could slip out of control and the public trust deficit in the industry would deepen. A survey in the 2025 Artificial Intelligence Index Report found that only 47% of the public believe AI companies can effectively protect data, and trust in AI fairness is still declining.
Outlook: Industry Pattern Changes, Governance System Urgently Needs Improvement
As of February 26, negotiations between Anthropic and the Pentagon had made no substantive progress, and the company had not yet announced its first technology iteration plan since abandoning the safety commitment. Still, the incident is likely to reshape the global AI landscape in several ways. First, competition among U.S. AI giants will intensify, with the four-way race among OpenAI, Google, xAI, and Anthropic becoming more pronounced: according to Blue Whale News, OpenAI is advancing a financing round on the order of $100 billion at an expected valuation of $730 billion, while Anthropic’s pre-IPO valuation is about $380 billion, and the gap is widening. Second, AI ethics and safety will return to the top of policymakers’ agendas worldwide, with more countries expected to introduce targeted policies within six months to regulate the boundaries of AI research, development, and deployment. Third, the accountability regime for AI companies faces reconstruction: companies must find a sounder balance among technological iteration, commercial interests, and safety ethics, or risk policy sanctions and crises of market trust.
Industry observers predict that as AI penetrates deeper into critical fields such as defense, healthcare, and finance, the tension between technological competition and safety governance will persist for a long time. Anthropic’s double shock may prove a turning point at which the global AI industry moves from unchecked expansion toward disciplined development, pushing the industry toward a new pattern that gives equal weight to safety and development.
News Source and Data Source Description
- Core news events (Pentagon pressure; Anthropic abandoning its safety commitments): Global Network (February 25, 2026); Red Star News (February 25, 2026); Time magazine (February 25, 2026);
- AI industry investment data and unicorn counts: CB Insights’ Q2 2025 Global Artificial Intelligence Report (released January 2026);
- Data on AI safety incidents, public trust, and national governance frameworks: Stanford HAI’s 2025 Artificial Intelligence Index Report (released February 2026);
- Valuation and financing information for OpenAI and Anthropic: Blue Whale News (February 24, 2026);
- Image source: Visual China (February 16, 2026; screenshot of the Claude AI application).