CC Signals lets you set boundaries with AI without locking down your work
Creative Commons introduced CC Signals, a new framework that helps data and content owners communicate how they want their work used by AI systems. The idea is to build a shared understanding of what’s acceptable, and to encourage more fair and open use of public content.
As AI models grow more powerful, they continue learning from vast amounts of data, much of it pulled from public websites. Some worry this means AI will benefit from open knowledge without giving anything back. Others fear that public data will become locked away. CC Signals is Creative Commons’ attempt to offer a middle path.
Anna Tumadóttir, CEO of Creative Commons, says CC Signals is “designed to sustain the commons in the age of AI.” The organization hopes to bring the same clarity and collaboration to AI that it once brought to the early web with CC licenses. The goal is reciprocity. People who contribute data should see a return, not just for themselves, but for the public good.
CC Signals allows people and institutions to attach simple, readable “signals” to their data. These signals express preferences or conditions for how the content may be used. Some signals may carry legal weight; others rely on community norms. The key is consistency: the more people adopt the same signal, the closer it comes to being a shared standard.
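The article does not describe the signal format itself, which is still being drafted. As a purely hypothetical sketch of the idea of attaching a usage signal to content and having a cooperating crawler honor it, one might imagine something like the following. Every name here (the `cc_signal` field, the `"credit"` value, the helper function) is invented for illustration and is not taken from the CC Signals specification:

```python
# Hypothetical sketch only: the "cc_signal" metadata key, the "credit"
# signal value, and this checking logic are invented for illustration.
# The real CC Signals vocabulary is still in draft on GitHub.

CONTENT_METADATA = {
    "title": "Field guide to lichens",
    "license": "CC BY 4.0",
    "cc_signal": "credit",  # hypothetical signal: reuse should give attribution
}

def may_train_on(metadata: dict, crawler_gives_credit: bool) -> bool:
    """Decide whether a signal-honoring AI crawler should use this item."""
    signal = metadata.get("cc_signal")
    if signal is None:
        return True  # no signal expressed, so no stated condition
    if signal == "credit":
        return crawler_gives_credit  # condition met only if credit is given
    return False  # unknown signal: be conservative and skip the item

print(may_train_on(CONTENT_METADATA, crawler_gives_credit=True))
print(may_train_on(CONTENT_METADATA, crawler_gives_credit=False))
```

The point of the sketch is the consistency argument from the article: a signal only works if both sides agree on what the same small vocabulary means.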
Sarah Hinchliff Pearson, Creative Commons’ general counsel, says one person’s request won’t change much. But collective action can influence how AI systems are trained. “We can demand a different way,” she said.
Creative Commons spent several years developing this idea, testing it with researchers, librarians, educators, and technologists. Now, the team is inviting public feedback. Early design documents are posted on GitHub. An alpha version of the system is scheduled for release in November 2025.