Generative AI (GenAI) has opened the doors for new AI initiatives, making the need to implement robust AI trust, risk and security management (AI TRiSM) capabilities even more urgent. AI TRiSM is a set of solutions that support AI model governance, trustworthiness, fairness, reliability, robustness, transparency and data protection.
GenAI has sparked extensive interest in AI pilots, but organizations often don’t consider the risks until AI models or applications are already in production or use. An AI TRiSM program helps organizations integrate much-needed governance upfront, and proactively ensure AI systems are compliant, fair, reliable and protect data privacy.
The democratization of access to AI has made the need for AI TRiSM even more urgent. Gartner has found that by 2026, AI models from organizations that operationalize AI transparency, trust, and security will achieve a 50 per cent improvement in terms of adoption, business goals and user acceptance.
However, despite the benefits that come with the necessary steps of AI TRiSM, it still may be questioned by c-suite or board members that are not as close to these projects. Here are four reasons AI leaders can utilize to explain why organizations should incorporate AI TRiSM into their AI models:
GenAI and third-party AI tools pose data risks
GenAI has transformed how many organizations compete and do work. The risks associated with GenAI applications are significant and quickly evolving. Without guardrails, any type of AI model can rapidly generate compounding negative effects that spin out of control, overshadowing any positive performance and gains from AI.
As organizations integrate AI models and tools from third-party providers, they also absorb the large datasets used to train those AI models. Users could be accessing confidential data within others’ AI models, potentially creating regulatory, commercial, and reputational consequences for organizations. They can also be accessing copyrighted materials that they do not have a legal right to.
 AI models and apps require constant monitoring
Specialized risk management processes must be integrated into AI model and application operations to keep AI compliant, fair, and ethical. There are several solutions in the market, but many of these are offered by startups as the market continues to emerge in the face of increasing customer demands. Controls must be applied continuously — for example, throughout model and application development, testing and deployment, and ongoing operations.
With new tools come new threats previously unencountered
Malicious attacks against AI (both homegrown and embedded in third-party models) lead to various types of organizational harm and loss — for example, financial, reputational, or related to intellectual property, personal information, or proprietary data. Add specialized controls and practices for testing, validating and improving the robustness of AI workflows, beyond those used for other types of apps.
Regulations will soon define compliance controls
The EU AI Act and other regulatory frameworks are already establishing regulations to manage the risks of AI applications. Be prepared to comply, beyond what’s already required for regulations such as those pertaining to privacy protection or identifying hallucinations that can steer companies in undesirable directions.
Organizations that do not consistently manage AI risks are exponentially more inclined to experience adverse outcomes such as project failures and breaches. Inaccurate, unethical, or unintended AI outcomes, process errors, and interference from malicious actors can result in security failures, financial and reputational loss or liability, and social harm. AI misperformance can also lead to suboptimal business decisions.
AI TRiSM capabilities are needed to ensure the reliability, trustworthiness, security and privacy of AI models and applications. They drive better outcomes related to AI adoption, achieving business goals and ensuring user acceptance. Make AI use safer and more reliable by enhancing application security and risk management programs, keeping up with the increasing maturity of available controls to operate AI models, and getting ahead of compliance issues by deploying AI TRiSM principles.
Avivah Litan is a Distinguished VP Analyst at Gartner, where she covers AI, AI TRiSM and blockchain. Gartner analysts will provide additional analysis on GenAI risks at Gartner Security & Risk Management Summit, taking place June 3-5, in National Harbor, MD.