
Managing AI issues

When AQtive Guard analyzes data, it evaluates detected AI assets and their properties against the currently active rules. If an asset fails to meet the defined rule parameters, an issue is generated.

Issues alert you to critical deviations in model health and risk posture, enabling you to take prompt action to address security and integrity concerns.
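
Conceptually, this works like a set of checks run over every detected asset. The following Python sketch is purely illustrative and is not AQtive Guard's implementation or API; the class names (AIAsset, Rule, Issue), the health_score property, and the example threshold are all assumptions, used only to show how assets that fail a rule's parameters could be collected into issues.

```python
from dataclasses import dataclass, field

# Illustrative names only; these are assumptions, not AQtive Guard's API.
@dataclass
class AIAsset:
    name: str
    asset_type: str                # e.g. "AI model", "Agent", "MCP server"
    properties: dict = field(default_factory=dict)

@dataclass
class Rule:
    name: str
    severity: str                  # "Low" / "Medium" / "High"
    parameter: str                 # asset property the rule inspects (assumed)
    threshold: float               # value the property must meet (assumed)

    def is_violated(self, asset: AIAsset) -> bool:
        # An asset fails the rule if the inspected property falls below the threshold.
        return asset.properties.get(self.parameter, 0) < self.threshold

@dataclass
class Issue:
    rule: str
    severity: str
    impacted_assets: list[str]     # the issue's occurrences

def evaluate(assets: list[AIAsset], rules: list[Rule]) -> list[Issue]:
    """Check every active rule against every detected asset; emit one issue per violated rule."""
    issues = []
    for rule in rules:
        impacted = [a.name for a in assets if rule.is_violated(a)]
        if impacted:
            issues.append(Issue(rule.name, rule.severity, impacted))
    return issues

# Example: a hypothetical rule flags any model whose health score is below 0.7.
assets = [AIAsset("fraud-model-v2", "AI model", {"health_score": 0.55})]
rules = [Rule("Minimum model health", "High", "health_score", 0.7)]
print(evaluate(assets, rules))
```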

To view AI issues:

  1. From the AQtive Guard (AQG) main menu, select Issues.

  2. Select the AI-SPM tab to view a full list of issues discovered during analysis.

AI issues table

  • Rule (AI-SPM) - The out-of-policy check or rule that was flagged during analysis.
  • AI-SPM Objects analyzed - The type of AI asset that triggered the issue (AI models / Agents / MCP servers).
  • Severity - The severity of the issue, as determined by parameters set in the associated rule.
  • Occurrences - The number of discovered AI assets impacted by this specific issue.
  • Details - Opens a panel showing the full scope of the issue, including a Security knowledge graph of impacted assets.

Issue details

In the AI issues table, select Details for any rule to explore the issues flagged by that rule. This panel provides a deep dive into the rule violation, the scope of impact, and contextual data for remediation.

Issue scope and metrics

The top section of the panel displays key summary data for the violation:

  • Severity - The specific severity level assigned to the overall finding (Low / Medium / High).
  • Occurrences - The total number of unique assets impacted by this specific rule violation.
  • Objects analyzed - The type of asset that triggered the rule (Models / Agents / MCP servers).

Security knowledge graph

This visual component displays the relationships between the impacted assets, showing model lineage and potential dependencies to help you understand the full impact of the issue.
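
One way to reason about this (a rough mental model only, not how the product builds the graph) is as a directed graph whose edges capture lineage and dependency links; walking the graph from a flagged asset surfaces everything downstream that inherits the risk. The asset names and edges below are hypothetical.

```python
from collections import deque

# Hypothetical lineage/dependency edges: parent -> assets derived from or depending on it.
edges = {
    "base-model": ["fine-tuned-model"],
    "fine-tuned-model": ["customer-support-agent"],
    "customer-support-agent": ["ticketing-mcp-server"],
}

def downstream(asset: str) -> list[str]:
    """Breadth-first walk over lineage/dependency edges to list all downstream assets."""
    seen, queue, order = {asset}, deque([asset]), []
    while queue:
        for child in edges.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
                order.append(child)
    return order

print(downstream("base-model"))   # everything potentially affected if the base model is flagged
```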

Impacted assets table

This table details each occurrence of the flagged rule, with one row per impacted asset.

  • Name - The unique name or identifier of the model.
  • Model health score - The metric indicating the model’s overall AI security posture.
  • Severity - The severity level assigned to this asset based on this issue.