AI Index 2025: What’s changing and why it matters

Stanford's Institute for Human-Centered AI recently released its AI Index 2025 report, which is packed with data on how AI is evolving. For CISOs, it's a solid check-in on where things stand: what the technology can do now, how governments are responding, and where public opinion is heading. Here's what's worth knowing.

AI is improving fast and showing up everywhere

New models are performing better on hard benchmarks and tackling complex tasks like coding and math with more success than a year ago. Some agents even outperform humans, at least when working under tight time limits. Video generation, protein sequencing, and medical diagnosis are all areas where AI is advancing quickly.

Meanwhile, AI is no longer just a lab tool. It’s embedded in everyday life. In 2023, the FDA approved 223 AI-powered medical devices. Robotaxis are now in regular use in U.S. and Chinese cities. On the enterprise side, 78% of organizations reported using AI in 2024, up from 55% the year before.

Industry leads the charge, but with growing risks

Nearly 90% of notable AI models in 2024 came from industry, not academia, yet the research community remains the top producer of highly cited papers. Model training is scaling fast: training compute doubles roughly every five months. And while costs are falling, emissions are rising. GPT-4's training produced over 5,000 tons of carbon; Llama 3.1's largest model generated nearly 9,000 tons.
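To put that growth rate in perspective, here's a quick back-of-the-envelope sketch; the arithmetic is ours, not a figure from the report:

```python
# Back-of-the-envelope sketch (our arithmetic, not from the AI Index):
# if training compute doubles every 5 months, the implied growth
# factor over any span of m months is 2**(m / 5).
def compute_growth(months: float, doubling_months: float = 5.0) -> float:
    """Return the multiplicative growth in compute after `months`."""
    return 2 ** (months / doubling_months)

print(f"1 year:  {compute_growth(12):.1f}x")   # ~5.3x
print(f"2 years: {compute_growth(24):.1f}x")   # ~27.9x
```

In other words, a five-month doubling time compounds to more than a fivefold increase in training compute every year.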

The tools are getting cheaper, too. The cost to run a GPT-3.5-level model fell more than 280-fold in 18 months. Open-weight models are also catching up to their closed-source counterparts, narrowing the performance gap to under 2% on some benchmarks.
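The same kind of rough arithmetic shows how fast that price decline is (again ours, not the report's):

```python
import math

# Sketch of the implied decline rate (our arithmetic, not a figure
# from the AI Index): a 280-fold price drop over 18 months means the
# cost halves every 18 / log2(280) months.
halving_time = 18 / math.log2(280)   # ~2.2 months
monthly_factor = 280 ** (1 / 18)     # ~1.37x cheaper each month
print(f"Cost halves about every {halving_time:.1f} months")
print(f"Equivalent to a ~{(1 - 1 / monthly_factor):.0%} price cut per month")
```

That works out to inference costs halving roughly every two months over that stretch.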

Governments are ramping up oversight and investment

Regulation is catching up. U.S. agencies issued 59 AI-related rules in 2024, twice the number from 2023. More than 75 countries mentioned AI in legislation last year. State-level regulation of deepfakes in elections has also grown, with 24 U.S. states passing laws so far.

Spending is rising too. Canada pledged $2.4 billion for AI infrastructure. China launched a $47.5 billion chip fund. Saudi Arabia’s $100 billion “Project Transcendence” aims to fuel AI development. These moves signal a global shift from policy talk to large-scale deployment.

Trust and safety gaps persist

Responsible AI practices remain uneven. Reported AI incidents rose 56% last year. While tools for measuring safety, factuality, and bias are improving, few organizations use them consistently. Even models trained to reduce bias still show patterns that favor men over women and reinforce racial stereotypes.

Public opinion is split. Optimism is growing in countries like China and Indonesia, but remains low in places like the U.S., Canada, and the Netherlands. Trust in AI companies is also slipping, especially around data privacy and fairness.

Education is expanding but unevenly

Two-thirds of countries now offer K-12 computer science education, and CS degree output is rising. But access gaps persist, especially in regions lacking infrastructure. In the U.S., most CS teachers want to teach AI but don’t feel prepared to do so.

