As adoption of artificial intelligence (AI) accelerates across organizations, cybersecurity experts are raising concerns about governance. A survey of 1,253 cybersecurity professionals found that while 73% of organizations use AI tools, only 7% have established real-time governance to enforce security policies, a 66-point gap between AI adoption and the controls meant to keep pace with it.
The study highlights a paradox: although 90% of respondents increased their AI security budgets this year, 29% feel less secure than they did a year ago. Pressure to adopt AI quickly, inadequate security frameworks, and outdated tooling all contribute. Visibility into AI operations is another major weakness, with 94% of organizations acknowledging visibility gaps and 88% struggling to distinguish personal from corporate AI accounts, which complicates data governance.
Furthermore, a striking 91% of respondents can detect unauthorized actions by AI agents only after they occur, and 37% experienced operational problems caused by such actions in the past year. With most organizations describing their AI governance as reactive or still maturing, concern is growing that these governance failures will lead to a significant AI-related breach.