
AI Security in the Age of Generative AI

Insights from IDC Research Director Grace Trinidad

What is AI security in the context of generative AI?

AI security is the discipline of protecting enterprise data, identities, and workflows as organizations adopt AI systems. It is built on two core pillars: data control (how data is used and governed) and identity management (who can access and use AI systems). In the age of generative AI, this also includes managing risks like automated attacks, deepfakes, and workflow vulnerabilities.

According to Grace Trinidad, Research Director for AI Security and Trust at IDC, AI security begins with understanding:

  • Who is using AI?
  • What data are they using?
  • How is that data governed?

Why is cybersecurity harder in the age of AI?

Cybersecurity is becoming more difficult due to:

  • A dramatic increase in threat volume
  • Accelerated attack speed
  • Growing complexity of enterprise environments
  • New vulnerabilities introduced by AI adoption

As Trinidad explains:

“Orders of magnitude more threats, more vulnerabilities… an entirely different architecture.”

AI increases both the scale and speed of cyber risk—without fundamentally changing its nature.

How does generative AI change cyber threats?

Generative AI is amplifying existing attack methods rather than creating entirely new ones.

Key impacts include:

  • Faster attack deployment
  • Automated iteration of attacks
  • More convincing phishing attempts
  • Advanced impersonation (e.g., deepfakes)

“Generative AI has helped accelerate the speed… and automate the attacks.”

Generative AI doesn’t invent new attacks—it makes existing ones faster, cheaper, and harder to detect.

Why are AI security failures still a human and workflow problem?

Most AI-driven attacks succeed due to failures in workflows and processes—not just technology gaps.

Example scenario:

  • A suspicious deepfake request is received
  • An employee notices red flags
  • The transaction is still executed
  • No verification workflow is in place

“It’s still a people problem… more of a workflow process.”

AI security failures are most often caused by workflow gaps, not technology gaps.

What are the most effective defenses for AI security?

IDC recommends reinforcing fundamental, often low-tech controls:

  • Multi-person approval processes
  • Verification workflows
  • Strong authentication mechanisms
  • Redundancy in critical processes

“Redundancy is the name of the game.”

Simple controls—when consistently applied—are still the most effective defense against AI-driven threats.
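The controls above can be made concrete in a short sketch. The following Python is illustrative only (the `PaymentRequest` class, the two-approver threshold, and the field names are assumptions for the example, not IDC guidance); it shows how a multi-person approval workflow with built-in redundancy prevents a single employee from executing a suspicious transaction on their own.

```python
from dataclasses import dataclass, field
from typing import ClassVar

@dataclass
class PaymentRequest:
    """A high-risk transaction that must clear a verification workflow before execution."""
    amount: float
    requester: str
    approvals: set = field(default_factory=set)

    REQUIRED_APPROVERS: ClassVar[int] = 2  # redundancy: no single person can release funds

    def approve(self, approver: str) -> None:
        # Separation of duties: the requester cannot approve their own request.
        if approver == self.requester:
            raise ValueError("requester cannot approve their own transaction")
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        return len(self.approvals) >= self.REQUIRED_APPROVERS

req = PaymentRequest(amount=250_000, requester="alice")
req.approve("bob")
assert not req.can_execute()   # a single approval is not enough
req.approve("carol")
assert req.can_execute()       # two independent approvers satisfy the workflow
```

Even a deepfake convincing enough to fool one employee fails here, because the workflow itself demands a second, independent verification.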

What are the core pillars of AI security?

1. Data control

  • Know where your data resides
  • Secure and classify it
  • Remove outdated or low-value data

2. Identity and access management

  • Identify who is using AI systems
  • Control access permissions
  • Continuously monitor usage

“Identity is the foundation of AI security.”

Identity and data governance—not AI models—are the foundation of AI security.

Why does data strategy need to come before AI strategy?

Organizations are shifting from AI-first to data-first approaches.

“It starts with the data… early adopters are hitting roadblocks.”

Without a strong data foundation:

  • AI initiatives stall
  • Risk increases
  • Governance breaks down

AI strategy fails without a clear, governed data foundation.

How does AI security relate to traditional cybersecurity?

AI security does not replace traditional cybersecurity—it extends it.

“AI security does not replace traditional security. It’s a layer on top.”

AI security builds on existing security frameworks—it doesn’t replace them.

What should organizations do now to improve AI security?

  • Clean and organize enterprise data
  • Remove redundant or low-value data
  • Modernize identity systems
  • Update access controls and governance policies

“Look closely at your data… and modernize identity and access management.”
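The first two actions, cleaning data and removing redundant copies, can start with something as simple as flagging byte-identical files for review. A hypothetical sketch (the `find_redundant` helper and the sample records are assumptions for the example):

```python
import hashlib

def find_redundant(records):
    """Flag byte-identical records so duplicates can be reviewed and removed."""
    seen, redundant = {}, []
    for path, blob in records:
        digest = hashlib.sha256(blob).hexdigest()
        if digest in seen:
            redundant.append((path, seen[digest]))  # (duplicate, original it copies)
        else:
            seen[digest] = path
    return redundant

docs = [("a.txt", b"policy v1"), ("b.txt", b"policy v2"), ("copy.txt", b"policy v1")]
assert find_redundant(docs) == [("copy.txt", "a.txt")]
```

Less redundant data means a smaller surface for an AI system to expose, which is the practical payoff of the cleanup step.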

What is the key tension in AI security?

Organizations must balance:

  • Rapid AI innovation
  • Digital sovereignty and risk tolerance

“What is the degree of risk… you are willing to tolerate?”

How will AI security evolve?

“AI security… is going to completely change in the next year.”

Expect rapid shifts in:

  • Threat landscapes
  • Defensive strategies
  • Governance requirements

Key takeaways

  • AI increases both the scale and complexity of threats
  • Generative AI accelerates and enhances attack methods
  • Security failures are often driven by human and process gaps
  • Identity and data governance are foundational
  • AI security builds on—not replaces—existing frameworks

FAQ

What is AI security?
AI security is the practice of protecting data, identities, and workflows as organizations adopt AI systems, with a focus on governance and access control.

Why is AI security important now?
Because generative AI increases the speed, scale, and realism of cyberattacks while introducing new data and workflow risks.

What are the biggest AI security risks?
Data exposure, identity misuse, automated attacks, deepfakes, and failures in verification workflows.

Source


AI security and the rise of generative AI cyber risk

AI-powered cyberattacks are accelerating in speed and sophistication, forcing organizations to rethink how they approach security, data governance, and risk.


AI-Powered Cyberattacks are Here. Are You Ready?

Businesses are locked in an AI arms race. Cybercriminals are using generative AI, synthetic identities, and deepfake technology to accelerate attacks.


Are Businesses Ready to Defend Against AI-Powered Cyberattacks?

A BizTech Magazine article featuring Grace Trinidad.