Compliance Frameworks
The Trusera platform evaluates scan results against industry compliance frameworks to help organizations meet regulatory requirements and security standards. AI-BOM findings are automatically mapped to framework categories.
Supported frameworks
OWASP LLM Top 10
The OWASP Top 10 for LLM Applications identifies the most critical security risks in LLM-based applications.
| ID | Risk | AI-BOM mapping |
|---|---|---|
| LLM01 | Prompt Injection | AI agent nodes without input validation, unguarded tool use |
| LLM02 | Insecure Output Handling | LLM outputs routed to code execution without sanitization |
| LLM03 | Training Data Poisoning | Unverified model sources, unpinned model versions |
| LLM04 | Model Denial of Service | Unbounded token limits, missing rate limiting |
| LLM05 | Supply Chain Vulnerabilities | Shadow AI, unvetted AI packages, deprecated SDKs |
| LLM06 | Sensitive Information Disclosure | Hardcoded API keys, credentials in workflow JSON |
| LLM07 | Insecure Plugin Design | MCP servers without auth, tool chains without guardrails |
| LLM08 | Excessive Agency | AI agents with unrestricted tool access, code execution |
| LLM09 | Overreliance | AI components without human-in-the-loop controls |
| LLM10 | Model Theft | Exposed model files, unprotected model endpoints |
EU AI Act
The EU AI Act, whose obligations phase in from 2025, requires organizations to maintain transparency about the AI systems they build and use. Article 53 requires providers of general-purpose AI models to keep detailed technical documentation, which in practice depends on a complete inventory of AI components.
AI-BOM supports EU AI Act compliance by:
- Component inventory - Generating a complete Bill of Materials for all AI components
- Risk classification - Mapping components to EU AI Act risk categories
- Documentation - CycloneDX and SPDX output formats meet SBOM requirements (see the example fragment after this list)
- Continuous monitoring - Scheduled scans detect new AI components as they are introduced
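For example, a detected model can be recorded as a CycloneDX machine-learning-model component (a component type supported since CycloneDX 1.5). The fragment below is a trimmed illustration; the model name and version are placeholders, and the exact fields in AI-BOM's output may differ:
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "machine-learning-model",
      "name": "gpt-4o",
      "version": "2024-08-06",
      "supplier": { "name": "OpenAI" }
    }
  ]
}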
Key EU AI Act requirements mapped to AI-BOM capabilities:
| Requirement | AI-BOM capability |
|---|---|
| Article 53 - AI component transparency | Full AI inventory with CycloneDX SBOM output |
| Article 9 - Risk management | 0-100 risk scoring with severity classification |
| Article 15 - Accuracy and robustness | Detection of deprecated models and unpinned versions |
| Article 13 - Transparency | Source location tracking for every detected component |
| Article 17 - Quality management | CI/CD integration with policy enforcement |
OWASP Agentic Security Top 10
The OWASP Top 10 for Agentic Security addresses risks specific to AI agent architectures.
| ID | Risk | AI-BOM mapping |
|---|---|---|
| ASI01 | Agent Identity Spoofing | MCP servers without authentication |
| ASI02 | Tool Misuse | Code execution tools connected to AI agents |
| ASI03 | Privilege Escalation | Agents with unrestricted system access |
| ASI04 | Memory Poisoning | Vector store configurations without access controls |
| ASI05 | Resource Exhaustion | Unbounded agent loops, missing timeout configurations |
| ASI06 | Agent Communication Tampering | A2A protocol without encryption |
| ASI07 | Cascading Failures | Multi-agent chains without circuit breakers |
| ASI08 | Data Exfiltration | Agents with network access and sensitive data |
| ASI09 | Audit Evasion | AI operations without logging |
| ASI10 | Supply Chain Compromise | Unverified agent packages and MCP servers |
Custom rules (v2)
The platform supports custom compliance rules for organization-specific requirements. Rules are created via the API and evaluated against scan results.
Creating a custom rule
curl -X POST https://your-instance.trusera.dev/api/v1/compliance/rules \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{
"name": "No hardcoded API keys",
"description": "All API keys must use environment variables or secret managers",
"framework": "custom",
"condition": {
"field": "flags",
"operator": "contains",
"value": "hardcoded_api_key"
},
"severity": "critical"
}'
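To verify that the rule was created, list the existing rules using the endpoint shown under Managing rules below:
curl -H "Authorization: Bearer <token>" \
  https://your-instance.trusera.dev/api/v1/compliance/rules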
Rule conditions
| Field | Operators | Description |
|---|---|---|
| flags | contains, not_contains | Check component risk flags |
| severity | eq, gte, lte | Check component severity level |
| risk_score | gt, gte, lt, lte, eq | Check the numeric risk score |
| type | eq, in | Check component type |
| provider | eq, in, not_in | Check AI provider |
| name | eq, contains, matches | Check component name |
Example rules
Block all hardcoded credentials:
{
"name": "No hardcoded credentials",
"condition": {
"field": "flags",
"operator": "contains",
"value": "hardcoded_credentials"
},
"severity": "critical"
}
Maximum risk score (the rule flags any component whose risk score is 80 or higher):
{
  "name": "Risk score under 80",
  "condition": {
    "field": "risk_score",
    "operator": "gte",
    "value": "80"
  },
  "severity": "high"
}
Block specific providers:
{
"name": "No DeepSeek in production",
"condition": {
"field": "provider",
"operator": "eq",
"value": "deepseek"
},
"severity": "high"
}
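The in and not_in operators take a list of values, which allows allow-list style rules. A sketch, assuming the value field accepts a JSON array; the provider names are placeholders for your approved list:
{
  "name": "Only approved AI providers",
  "condition": {
    "field": "provider",
    "operator": "not_in",
    "value": ["openai", "anthropic"]
  },
  "severity": "medium"
}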
Managing rules
| Operation | Endpoint | Required role |
|---|---|---|
| Create | POST /api/v1/compliance/rules | Editor |
| List | GET /api/v1/compliance/rules | Any authenticated user |
| Update | PUT /api/v1/compliance/rules/:id | Editor |
| Delete | DELETE /api/v1/compliance/rules/:id | Editor |
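Update and delete target a single rule by appending its ID to the collection URL. A sketch, with the rule ID as a placeholder; the update body is assumed to accept the same fields as create:
# Raise the severity of an existing rule (rule ID is a placeholder)
curl -X PUT https://your-instance.trusera.dev/api/v1/compliance/rules/<rule-id> \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"severity": "critical"}'
# Remove a rule
curl -X DELETE https://your-instance.trusera.dev/api/v1/compliance/rules/<rule-id> \
  -H "Authorization: Bearer <token>"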
CLI policy enforcement
For CI/CD pipelines, use the --policy flag with a YAML policy file:
# .ai-bom-policy.yml
max_critical: 0
max_high: 5
max_risk_score: 75
block_providers: []
block_flags:
- hardcoded_api_key
- hardcoded_credentials
ai-bom scan . --policy .ai-bom-policy.yml --quiet
This returns exit code 1 if any policy violations are found, making it suitable for CI gates.
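As a sketch, a GitHub Actions job can run the scan as a merge gate; the install step depends on how the ai-bom CLI is distributed and is left as a placeholder:
# .github/workflows/ai-bom.yml (illustrative)
name: AI-BOM policy gate
on: [pull_request]
jobs:
  ai-bom:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install the ai-bom CLI here; the method depends on your distribution
      - run: ai-bom scan . --policy .ai-bom-policy.yml --quiet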