OneTrust AI Governance helps organizations manage AI systems and mitigate risk


OneTrust has announced that OneTrust AI Governance is now available on the Trust Intelligence Platform.

OneTrust AI Governance provides visibility and transparency into how AI is adopted, used, and governed throughout the organization, so companies can manage AI systems and mitigate risk.

“Companies are turning to AI to drive value and innovation for their business, but it comes with significant challenges around data privacy, governance, ethics, and risk management,” said Blake Brannon, Chief Product and Strategy Officer at OneTrust. “To excel in the evolving AI domain, you need a robust and adaptable governance strategy. OneTrust AI Governance provides our customers with a leading-edge tool to understand where AI is being used across the business, manage the AI lifecycle, and assess risk against global laws and frameworks.”

OneTrust AI Governance helps organizations:

Understand where AI technology is being used across the business: OneTrust AI Governance helps compliance teams and data scientists maintain an up-to-date inventory of the projects, models, and datasets that leverage AI and ML technology. Integrate with MLOps tools to auto-detect AI models and sync them to a centralized inventory (a sketch of this pattern follows below). Build relationships between models, datasets, projects, vendors, and processing activities to understand processes involving AI and perform privacy risk assessments.
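
The announcement does not describe the integration mechanics, but the general "auto-detect and sync" pattern looks roughly like the following: enumerate registered models in an MLOps tool (MLflow here) and upsert each one into a central inventory. The inventory endpoint and payload shape are hypothetical illustrations, not OneTrust's actual API.

```python
# Sketch: pull registered models from MLflow and push them into a
# centralized AI inventory. INVENTORY_URL and the payload shape are
# hypothetical, not OneTrust's actual API.
import requests
from mlflow.tracking import MlflowClient

INVENTORY_URL = "https://inventory.example.com/api/models"  # hypothetical

def sync_models_to_inventory() -> None:
    client = MlflowClient()
    for model in client.search_registered_models():
        record = {
            "name": model.name,
            "description": model.description,
            "source": "mlflow",
        }
        # Upsert the detected model into the central inventory.
        requests.post(INVENTORY_URL, json=record, timeout=10).raise_for_status()

if __name__ == "__main__":
    sync_models_to_inventory()
```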

Govern the AI development lifecycle: OneTrust enables organizations to evaluate AI use cases, surface risks, and govern every phase of AI development: ideation, experimentation, production, and archive. Capture context and surface potential risks for AI projects at the start of the project lifecycle. Track distinct AI initiatives and relevant information, including intended purposes and business stakeholders (illustrated in the sketch below).
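
To make the lifecycle concrete, here is a minimal sketch of how an AI initiative and its stages might be modeled; every name in it is a hypothetical illustration rather than OneTrust's data model.

```python
# Sketch: track an AI initiative through the lifecycle stages named above.
# All class and field names are hypothetical, not OneTrust's data model.
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    IDEATION = "ideation"
    EXPERIMENTATION = "experimentation"
    PRODUCTION = "production"
    ARCHIVE = "archive"

@dataclass
class AIProject:
    name: str
    intended_purpose: str
    business_stakeholders: list[str]
    stage: Stage = Stage.IDEATION
    risks: list[str] = field(default_factory=list)

    def advance(self, next_stage: Stage) -> None:
        # A real governance tool would gate each transition on
        # completed risk assessments and approvals.
        self.stage = next_stage

project = AIProject(
    name="Support-ticket triage model",
    intended_purpose="Route inbound tickets by topic and urgency",
    business_stakeholders=["Customer Support", "IT"],
)
project.risks.append("Training data may encode historical routing bias")
project.advance(Stage.EXPERIMENTATION)
```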

Mitigate AI risk: OneTrust AI Governance makes it easy to assess AI against the business's established responsible-use policies, as well as global laws and frameworks such as the NIST AI RMF, the EU AI Act, UK ICO guidance, ALTAI, and the OECD Framework for the Classification of AI Systems. Auto-assign risk levels for bias, fairness, and transparency (a simplified sketch follows below), and escalate high-risk projects to the appropriate workstreams. Identify each model's risk level and track the number of high-risk AI projects using dynamic dashboards and reports.
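
How risk levels are auto-assigned is not specified in the announcement; the sketch below shows the general idea with a deliberately simple rubric. The indicator names and thresholds are hypothetical and are not drawn from the NIST AI RMF, the EU AI Act, or the other frameworks above.

```python
# Sketch: derive a risk level from yes/no assessment answers about bias,
# fairness, and transparency. The rubric and indicator names are
# hypothetical illustrations only.
def assign_risk_level(answers: dict[str, bool]) -> str:
    score = sum(answers.values())  # count the indicators that apply
    if score >= 2:
        return "high"    # would be escalated to a review workstream
    if score == 1:
        return "medium"
    return "low"

level = assign_risk_level({
    "affects_protected_groups": True,   # hypothetical indicator
    "decisions_unexplainable": True,    # hypothetical indicator
    "no_human_oversight": False,        # hypothetical indicator
})
assert level == "high"
```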


