# n8n Workflow Scanner
AI-BOM is the first security tool to scan n8n workflows for AI components. The n8n scanner detects AI Agent nodes, MCP client connections, hardcoded credentials, and other security risks in n8n workflow JSON files.
## Scanning methods

### Scan workflow files

```bash
# Scan exported workflow JSON files
ai-bom scan ./workflows/

# Scan a specific workflow file
ai-bom scan workflow.json
```
### Scan local n8n installation

```bash
ai-bom scan . --n8n-local
```

This scans the `~/.n8n/` directory where n8n stores workflow data.
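With n8n's default configuration, workflow data lives in a SQLite database under `~/.n8n/`. The local scan can be approximated by reading that database directly, as in this sketch (the `database.sqlite` path and the `workflow_entity` table are n8n internals and may change between versions; `load_local_workflows` is illustrative, not AI-BOM's API):

```python
import json
import sqlite3
from pathlib import Path

def load_local_workflows(n8n_dir: str = "~/.n8n") -> list:
    """Read workflows from n8n's default SQLite database.

    Assumes the default SQLite backend; the database filename and the
    workflow_entity table/columns are n8n internals, so treat this as
    a sketch rather than a stable interface.
    """
    db_path = Path(n8n_dir).expanduser() / "database.sqlite"
    conn = sqlite3.connect(db_path)
    try:
        # Each row stores the workflow name and its nodes as JSON text.
        rows = conn.execute("SELECT name, nodes FROM workflow_entity").fetchall()
    finally:
        conn.close()
    return [{"name": name, "nodes": json.loads(nodes)} for name, nodes in rows]
```

Each recovered workflow has the same `nodes` array as an exported JSON file, so the same detection rules apply.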
### Scan a running n8n instance via API

```bash
ai-bom scan . --n8n-url http://localhost:5678 --n8n-api-key YOUR_KEY
```

This connects to a running n8n instance and scans all workflows via the n8n API.
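API scanning amounts to listing workflows through n8n's public REST API (`GET /api/v1/workflows`, authenticated with the `X-N8N-API-KEY` header) and scanning each result. A minimal sketch (the function names are illustrative, not AI-BOM's internal API; the endpoint and header come from n8n's public API):

```python
import json
import urllib.request

def build_workflows_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a request for n8n's public workflow-listing endpoint."""
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/api/v1/workflows",
        headers={"X-N8N-API-KEY": api_key, "Accept": "application/json"},
    )

def fetch_workflows(base_url: str, api_key: str) -> list:
    """Fetch all workflows from a running n8n instance."""
    with urllib.request.urlopen(build_workflows_request(base_url, api_key)) as resp:
        # The public API wraps results in a "data" array.
        return json.load(resp)["data"]
```

Each returned workflow carries the same `nodes` array as an exported file, so the same detection logic runs on both.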
## What it detects

### AI node types

The scanner identifies these n8n AI node types:
| Node Type | Category |
|---|---|
| `@n8n/n8n-nodes-langchain.agent` | AI Agents |
| `@n8n/n8n-nodes-langchain.lmChatOpenAi` | LLM Chat (OpenAI) |
| `@n8n/n8n-nodes-langchain.lmChatAnthropic` | LLM Chat (Anthropic) |
| `@n8n/n8n-nodes-langchain.lmChatGoogleGemini` | LLM Chat (Google) |
| `@n8n/n8n-nodes-langchain.mcpClientTool` | MCP Client |
| `@n8n/n8n-nodes-langchain.toolCode` | Code Tools |
| `@n8n/n8n-nodes-langchain.toolWorkflow` | Workflow Tools |
| `@n8n/n8n-nodes-langchain.embeddingsOpenAi` | Embeddings |
| `@n8n/n8n-nodes-langchain.vectorStoreInMemory` | Vector Stores |
| `@n8n/n8n-nodes-langchain.chainSummarization` | Summarization Chains |
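Detection of these node types can be approximated as a lookup on each node's `type` field in the exported workflow JSON. A sketch (`AI_NODE_TYPES` and `detect_ai_nodes` mirror the table above for illustration; they are not AI-BOM's internal API):

```python
import json

# Map of n8n AI node types to categories (mirrors the table above).
AI_NODE_TYPES = {
    "@n8n/n8n-nodes-langchain.agent": "AI Agents",
    "@n8n/n8n-nodes-langchain.lmChatOpenAi": "LLM Chat (OpenAI)",
    "@n8n/n8n-nodes-langchain.mcpClientTool": "MCP Client",
    # ...remaining entries from the table above
}

def detect_ai_nodes(workflow: dict) -> list:
    """Return the AI nodes found in an exported n8n workflow dict."""
    findings = []
    for node in workflow.get("nodes", []):
        category = AI_NODE_TYPES.get(node.get("type"))
        if category:
            findings.append({
                "name": node.get("name"),
                "type": node["type"],
                "category": category,
            })
    return findings

# Exported n8n workflows carry a top-level "nodes" array like this:
workflow = json.loads("""{
  "name": "Research Pipeline",
  "nodes": [
    {"name": "AI Research Agent", "type": "@n8n/n8n-nodes-langchain.agent"},
    {"name": "Set", "type": "n8n-nodes-base.set"}
  ]
}""")
print(detect_ai_nodes(workflow))
```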
### Security risks
- Hardcoded credentials - API keys embedded directly in workflow JSON parameters
- Webhook triggers without authentication - Workflows exposed without auth headers
- Dangerous tool combinations - Code execution tools connected to AI agents without guardrails
- Cross-workflow AI chains - Multi-workflow AI agent architectures
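As an illustration of the hardcoded-credential check, a scanner can walk each node's `parameters` tree and flag string values matching common secret patterns. A sketch (the patterns and function names are illustrative, not AI-BOM's actual ruleset, which would be much broader):

```python
import re

# Illustrative patterns for secrets commonly hardcoded in parameters.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),               # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS access key ID
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]{20,}"),  # bearer token
]

def find_hardcoded_credentials(node: dict) -> list:
    """Walk a node's parameters and return paths of suspicious values."""
    hits = []

    def walk(value, path):
        if isinstance(value, dict):
            for k, v in value.items():
                walk(v, f"{path}.{k}")
        elif isinstance(value, list):
            for i, v in enumerate(value):
                walk(v, f"{path}[{i}]")
        elif isinstance(value, str):
            if any(p.search(value) for p in SECRET_PATTERNS):
                hits.append(path)

    walk(node.get("parameters", {}), node.get("name", "?"))
    return hits

node = {
    "name": "HTTP Request",
    "parameters": {
        "headers": {"Authorization": "Bearer sk-abc123def456ghi789jkl012"}
    },
}
print(find_hardcoded_credentials(node))
# → ['HTTP Request.headers.Authorization']
```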
## n8n community node

For scanning directly from the n8n UI, install the Trusera community node:

```bash
npm install n8n-nodes-trusera
```

Or via the n8n UI: Settings > Community Nodes > Install > `n8n-nodes-trusera`
### Setup

1. Add a Trusera Dashboard node to a workflow
2. Create credentials with your n8n API URL and API key
3. Optionally set a dashboard password for AES-256-GCM encryption
4. Execute the node: it fetches all workflows, scans them, and returns an interactive HTML dashboard
### Dashboard features
- Severity distribution charts and risk score stat cards
- Sortable findings table with search and severity/type filters
- Per-finding remediation cards with fix steps and guardrail recommendations
- OWASP LLM Top 10 category mapping for every risk flag
- CSV and JSON export
- Light/dark theme toggle
- Optional password protection (AES-256-GCM encrypted, client-side decryption)
## Example output

Scanning an n8n workflow with AI agents produces output like:

```
Component: n8n AI Agent
Type: agent
Source: workflow.json (node: "AI Research Agent")
Risk Score: 65 (HIGH)
Flags: ai_agent, tool_use, no_guardrails
Workflow: "Research Pipeline" (id: abc123)

Component: OpenAI LLM Chat
Type: llm_provider
Source: workflow.json (node: "OpenAI Chat Model")
Risk Score: 45 (MEDIUM)
Flags: llm_provider, model_reference
Model: gpt-4o
```
## Risk scoring for n8n

n8n components are scored based on:
- Node type - AI agents score higher than simple LLM calls
- Tool connections - Agents with code execution tools score higher
- Authentication - Missing webhook auth increases the score
- Credentials - Hardcoded credentials are critical severity
- Agent chains - Multi-agent workflows score higher due to complexity
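The factors above can be combined in a simple additive model. A sketch (the weights and flag names are assumptions, chosen here so that the flag sets from the example output land at 65/HIGH and 45/MEDIUM; AI-BOM's actual scoring model may differ):

```python
# Assumed per-flag weights; agents outweigh plain LLM calls, and
# hardcoded credentials are severe enough to force CRITICAL.
FACTOR_WEIGHTS = {
    "ai_agent": 30,
    "llm_provider": 25,
    "model_reference": 20,
    "tool_use": 15,
    "no_guardrails": 20,
    "no_webhook_auth": 20,
    "hardcoded_credentials": 50,
    "agent_chain": 15,
}

def risk_score(flags: set):
    """Sum the weights for a component's flags, capped at 100."""
    score = min(100, sum(FACTOR_WEIGHTS.get(f, 0) for f in flags))
    if "hardcoded_credentials" in flags:
        severity = "CRITICAL"
    elif score >= 60:
        severity = "HIGH"
    elif score >= 40:
        severity = "MEDIUM"
    else:
        severity = "LOW"
    return score, severity

print(risk_score({"ai_agent", "tool_use", "no_guardrails"}))  # → (65, 'HIGH')
print(risk_score({"llm_provider", "model_reference"}))        # → (45, 'MEDIUM')
```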