Recent advances in the capabilities of AI models from DeepSeek and other Chinese firms have heightened the possibility that these systems could be misused by bad actors or escape human control, according to Concordia AI, a Beijing-based consultancy focused on AI safety in China.
As AI models have grown more powerful, their frontier risks – their potential to endanger public safety and social stability – have raised alarms among experts about possible catastrophic consequences, including the destruction of humanity.
Concordia AI’s assessment, based on its analysis of 50 leading AI models and shared exclusively with the Post, indicated that Chinese models had joined their US counterparts in pushing the boundaries of such risks.
“We hope our findings can support these companies as they improve the safety of their models,” said Fang Liang, the consultancy’s head of AI safety and governance in China.
