Thairath Online

ChatGPT's Negative Reviews Surge Nearly 300% as Sam Altman Admits Opportunism and Carelessness

Tech companies · 04 Mar 2026 11:32 GMT+7



Sam Altman, CEO of OpenAI, acknowledged "rushing the announcement of the deal with the Pentagon." This followed OpenAI taking over the contract from Anthropic, which had refused to cooperate: Anthropic declined to allow the U.S. military to use its AI models in operations targeting Venezuela and Iran alongside Israel, issued a statement on its stance on safety and ethics, and requested contract revisions to ensure its AI systems would not be used for domestic surveillance within the U.S.

On 3 March, Altman posted on the platform X, admitting it was a mistake to hastily announce the deal immediately after the agreement between his competitor Anthropic and the Pentagon collapsed. He stated, “These issues are very complex and require clear communication. Our company intends to reduce tensions and avoid worse outcomes, but I admit it appeared opportunistic and careless.”

This admission came after criticism of OpenAI suggesting the deal could open the door to using the technology for domestic surveillance or fully autonomous weapons. This was why Anthropic firmly refused, affirming its boundary that its technology would not be used for mass domestic surveillance or for fully autonomous weapons without human control.

Backlash from users

Following the cooperation announcement, OpenAI faced heavy criticism from many users, including internal staff and general users. Market analysis firm Sensor Tower reported a sharp increase in ChatGPT app uninstallations after the news, especially on Saturday (28 Feb), when removals in the U.S. surged 295% in a single day, versus an average daily increase of just 9% over the previous 30 days. Downloads in the U.S. dropped 13% that day and another 5% on Sunday (1 Mar), whereas on Friday (27 Feb), before the deal news, downloads had increased 14%.

Meanwhile, Anthropic's Claude app gained momentum, rising to No. 1 on Apple's U.S. App Store and maintaining that position into the early week. It also topped charts in six other countries: Belgium, Canada, Germany, Luxembourg, Norway, and Switzerland. Data from Similarweb showed Claude's downloads in the U.S. last week were 20 times higher than in January, though they cautioned other factors besides politics might contribute.

What new limits has OpenAI added in the contract?

Under pressure, OpenAI clarified contract language explicitly specifying three main red lines for its collaboration with the Pentagon:

  • No use of OpenAI technology for mass domestic surveillance.
  • No use of OpenAI technology to control or operate autonomous weapons systems.
  • No use of OpenAI technology for high-impact automated decision-making (such as "social credit" systems).

It also added key conditions, including:

1. AI systems shall not be used for domestic surveillance, including prohibitions on tracking, monitoring, and acquiring or using commercial personal or identifiable data, referencing laws such as the U.S. Constitution's Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978 (FISA).

2. Intelligence agencies like the National Security Agency (NSA) cannot use OpenAI systems unless a specific contract amendment is approved.

3. Use of AI in fully or semi-automated systems must undergo rigorous testing, review, and evaluation before deployment. Systems are prohibited from controlling autonomous weapons where laws or regulations require human oversight, and may not replace critical human-approved decisions.

4. Models will be deployed only on cloud infrastructure (cloud-only deployment) and not installed on endpoint (edge) devices that could enable use in lethal autonomous weapons. OpenAI retains independent monitoring and enforcement to ensure no red lines are violated.

5. Collaborative governance with the Pentagon includes deploying certified OpenAI engineers and security researchers to participate closely in oversight.

Additionally, OpenAI's statement emphasized that its contract provides more safeguards and accountability measures than previous agreements, including Anthropic's. It stressed that its red lines are enforceable because use is cloud-limited and OpenAI personnel are involved in oversight. Regarding concerns about enabling autonomous weapons or citizen surveillance, the company asserted that the cloud architecture, security measures, contract terms, and existing laws make such misuse impossible.

Finally, Altman affirmed OpenAI's commitment to democracy and its desire to provide those charged with national defense the best tools. He stated that the U.S. military needs robust AI models, especially amid increasing threats, and that such use must comply with legal frameworks and strict safety measures.
