State of Data & AI: Security and Privacy

Human considerations

While the technical requirements for securing AI systems are significant, many organisations are finding it pays to take a humanistic approach to the challenge.

At the international advertising agency TBWA, chief AI and innovation officer for Australia, Lucio Ribeiro, said both ethics and security had been front of mind for management from the earliest stages of the company’s AI journey. TBWA has established a Collective AI framework, which provides structure across governance, risk, and transparency, and has proved vital when assessing the suitability and security of AI investments.

“We’ve turned down plenty of tools that don’t meet our standards – no matter how good they look on social media,” said Ribeiro.

“A lot of what people publish online – GenAI videos and advertisements, workflow automations, image generators, or AI-generated case studies – may look impressive, but many of those tools can’t be responsibly used in an enterprise setting.

“They often lack clear IP rights, data protections, or commercial terms. We simply can’t afford to experiment irresponsibly. Our clients – and we – can’t afford to be AI cowboys.”

Ribeiro said that TBWA’s quest to embrace AI safely had seen the business take steps to ensure that this requirement did not impede its ability to innovate using AI. This had led to the creation of ‘safe-to-fail’ environments with clear boundaries that supported AI-based experimentation.

“If trust is compromised, creativity is too,” Ribeiro said.

“So our principle remains: build fast, test safely, scale only what’s secure.”

Secure AI foundations

Security has been embedded as a fundamental pillar of the transformation program being undertaken at the Australian National University, where the adoption of AI technology has introduced additional security considerations.

According to the university’s director of digital infrastructure and information security, Sajid Hassan, the concentration of computational power for AI workloads has required ANU to develop secure compute environments with isolated processing capabilities for sensitive research.

“Compliance with evolving AI regulations and guidelines has become a moving target that requires constant attention,” Hassan said.

“We’ve had to carefully balance the openness required for research collaboration with protection of intellectual property, particularly as AI models themselves become valuable research outputs.

“These considerations have led us to develop specific governance frameworks for AI research that go beyond traditional IT security measures.”

Investments in AI have also spurred an evolution in the university’s ethical frameworks, leading to the creation of comprehensive guidelines for responsible AI use in research that address issues from bias in algorithms to transparency in AI-driven decision-making.

“We’ve worked to ensure alignment between AI innovation and university values, including commitments to equity, accessibility, and research integrity,” Hassan said.

“This has involved extensive consultation with researchers, ethicists, and the broader university community to develop frameworks that enable innovation while maintaining ethical standards.”

Data governance frameworks are being enhanced to address privacy concerns specific to AI applications, particularly regarding the use of personal data in research. The university has also implemented transparency requirements for AI-driven decisions that affect students or staff to ensure there is always human oversight and the ability to understand and challenge automated decisions.
