Latest research: Safety oversight at leading AI companies like OpenAI fails to meet global standards.

2026-01-15 11:57:28

① The AI Safety Index released by the Future of Life Institute shows that major AI companies have not met global standards in safety governance. Although they are racing to develop "superintelligence," none of them has formulated a sound control strategy. ② The report notes that while the leadership of several companies has spoken about addressing existential risks, this has not yet been translated into concrete safety plans or credible internal monitoring and control measures.

The Future of Life Institute, a non-profit organization, released a new edition of its AI Safety Index on Wednesday, which finds that the safety governance of major artificial intelligence companies such as Anthropic, OpenAI, xAI, and Meta falls far short of emerging global standards.

The institute stated that a safety assessment conducted by an independent panel of experts found that, despite the companies racing to develop "superintelligence," none has developed a sufficiently robust strategy or plan for controlling such advanced AI systems.

According to the report, Anthropic achieved the highest overall score in the assessment but still received only a D rating in "existential safety," meaning the company has not yet established adequate strategies to prevent catastrophic misuse or loss of control. This is the second consecutive report in which no company scored higher than a D on this metric.

With the exception of a few companies such as Meta, the AI companies responded to the list of questions issued by the Future of Life Institute, providing further information about their safety practices.

The report shows that leadership at several companies has discussed addressing existential risks, but the researchers note that "this rhetoric has not translated into quantifiable safety plans, specific strategies for mitigating alignment failures, or credible internal monitoring and control measures."

On specific metrics, Anthropic and OpenAI received high scores, an A and a B respectively, in information sharing, risk assessment, and governance and accountability.

While xAI and Meta have risk management frameworks, they lack commitments to safety monitoring and have not provided evidence that their investment in safety research goes beyond minimum standards.

This research comes amid growing public concern about the social impact of increasingly intelligent AI systems with human-like reasoning abilities. Several earlier cases of suicide and self-harm have been linked to AI chatbots.

MIT professor and director of the Future of Life Institute, Max Tegmark, said: “Despite the recent controversies surrounding AI-assisted hacking and AI causing mental breakdowns and self-harm, AI companies in the United States are still less regulated than restaurants and continue to lobby against binding safety standards.”

Meanwhile, the AI race shows no signs of slowing down, with major tech companies still investing hundreds of billions of dollars to expand and upgrade their machine learning capabilities. The US government is also attempting to pass legislation to prohibit states from enforcing AI regulations over the next decade.

The Future of Life Institute is a non-profit organization focused on the risks that intelligent machines pose to humanity. Founded in 2014, it initially received funding from Tesla CEO Elon Musk.

In October of this year, a group of researchers, including scientists Geoffrey Hinton and Yoshua Bengio, called for a halt to the development of superintelligent AI until the public demands it and science finds a safe path forward.

(Article source: CLS)
