AI changes everything.
Especially the attack surface.
Every LLM deployment, every AI agent, every RAG pipeline is a new attack vector your adversaries are already probing. Fortify North's AI Security practice applies the same collective intelligence model to AI threats — multiple specialists, simultaneously, across every layer of your AI stack.
The AI threat surface is unlike anything in traditional security.
Traditional perimeter security doesn't apply when the attack vector is natural language. Adversaries don't need to find a CVE — they need to find the right sentence.
Prompt Injection
Malicious instructions embedded in user input or external data hijack LLM behaviour, causing unauthorized actions or data exfiltration.
Jailbreaking
Adversarial prompting techniques that bypass safety guardrails, exposing the model to produce harmful, confidential, or off-policy output.
Training Data Poisoning
Corrupting training or fine-tuning datasets to embed backdoors, introduce biases, or degrade model performance on specific inputs.
Model Extraction
Systematic API querying to reconstruct proprietary model weights, capabilities, or training data — model theft without direct access.
Agentic Escalation
Autonomous AI agents with tool access can be manipulated into taking unintended real-world actions: deleting data, exfiltrating files, sending messages.
RAG Data Leakage
Improperly secured retrieval pipelines expose sensitive documents to users who should not have access — often without any audit trail.
MCP Tool Poisoning
Malicious Model Context Protocol servers embed instructions in tool schemas that hijack agent behaviour — trusted by design, exploited by default.
AI Bot Abuse
AI-enhanced automation defeats CAPTCHAs, generates convincing synthetic identities, and runs credential attacks at a sophistication no rule-based filter can match.
LLM Security Assessment
Red-team your AI before your adversaries do.
Our collective conducts adversarial testing of your LLM-powered applications using the OWASP LLM Top 10 and MITRE ATLAS frameworks. Multiple specialists attack simultaneously — one focused on prompt injection, one on data extraction, one on guardrail evasion — producing coverage no single tester could achieve.
Deliverables
- Prompt injection and indirect injection testing
- Jailbreak resistance evaluation across leading techniques
- Sensitive data and PII leakage assessment
- System prompt extraction attempts
- Output integrity and hallucination risk evaluation
- OWASP LLM Top 10 coverage report
- Prioritized remediation roadmap with implementation guidance
Agentic AI Security Architecture
Autonomous AI needs non-autonomous security controls.
AI agents with tool access — browsing the web, writing code, sending emails, calling APIs — represent a fundamentally new attack surface. Fortify North designs the security architecture around your agentic systems: permission models, sandboxing, human-in-the-loop checkpoints, and audit logging. We apply the same collective analysis used for enterprise security architecture to your AI deployment.
Deliverables
- Agentic system threat model (inputs, tools, outputs, trust boundaries)
- Principle of least privilege design for tool access
- Sandboxing and isolation architecture
- Human-in-the-loop control point mapping
- Prompt injection attack surface analysis for multi-agent pipelines
- MCP (Model Context Protocol) security review
- Audit logging and observability framework
- Incident response playbook for agentic AI events
AI Supply Chain Security
The model you're trusting may not be what you think.
Enterprise AI deployments depend on pre-trained models, open-source packages, fine-tuning pipelines, and third-party APIs — each a potential attack vector. Fortify North assesses your full AI supply chain: model provenance, dependency risk, fine-tuning data integrity, and third-party API exposure.
Deliverables
- AI dependency inventory and risk classification
- Model provenance and integrity verification
- Training and fine-tuning data integrity assessment
- Third-party AI API security review (data retention, logging, terms)
- Hugging Face / model hub artifact scanning
- Supply chain attack scenario simulation
- Vendor AI security questionnaire and scoring framework
RAG & AI Data Security
Your knowledge base is only as secure as its access controls.
Retrieval-Augmented Generation pipelines connect LLMs to your most sensitive internal data. A misconfigured RAG system can expose legal documents, HR records, financial data, and proprietary IP to any user who knows how to ask. Fortify North audits the full retrieval pipeline — from vector database access controls to chunking logic to output filtering.
Deliverables
- RAG architecture security review (chunking, embedding, retrieval logic)
- Vector database access control assessment
- Document-level authorization and multi-tenancy review
- Data exfiltration via crafted query testing
- PII and sensitive data detection in the knowledge base
- Output filtering and content policy review
- Embedding model security assessment
- Monitoring and anomaly detection framework for RAG queries
MCP Security Review
The protocol is trusted. That's exactly the problem.
Model Context Protocol (MCP) lets LLMs connect to tools, files, APIs, and databases through a standardized server interface. What makes MCP powerful also makes it dangerous: by design, the LLM trusts everything an MCP server tells it — including the tool descriptions themselves. A malicious or compromised MCP server can inject instructions directly into the agent's reasoning chain at the protocol level, bypassing all application-layer guardrails. Fortify North audits your MCP deployment before that trust is exploited.
Deliverables
- MCP server inventory, trust boundary mapping, and permission scope audit
- Tool schema inspection for embedded prompt injection payloads
- Authentication and authorization review between LLM client and MCP servers
- Network isolation and exposure analysis of MCP server interfaces
- Rug-pull attack surface assessment (dynamic tool definition updates)
- Multi-server privilege escalation chain testing
- Least-privilege redesign recommendations for all tool permissions
- MCP-specific incident response playbook
EU AI Act Gap Assessment
Enforcement is live. Most organizations aren't ready.
The EU AI Act (in force August 2024, phased enforcement through 2026) creates binding obligations for organizations that develop, deploy, or use AI systems affecting EU residents — regardless of where those organizations are headquartered. Canadian firms with EU customers or subsidiaries are in scope. Fortify North's gap assessment maps your AI systems against the Act's risk tiers, identifies compliance obligations, and produces a prioritized remediation roadmap your legal and technical teams can execute against.
Deliverables
- AI system inventory and risk tier classification (Prohibited / High-Risk / Limited / Minimal)
- High-risk system conformity requirements mapping
- General-purpose AI (GPAI) model obligations assessment
- Transparency and disclosure obligation review
- Human oversight and control mechanism evaluation
- Technical documentation and record-keeping gap analysis
- Incident reporting obligation framework
- Remediation roadmap with legal and technical workstreams
Secure AI deployment starts with a layered defense model.
Fortify North applies a five-layer security architecture to every AI deployment engagement. Each layer must be independently hardened — a breach at any layer can compromise the entire system regardless of how well the others are secured.
Unlike traditional application security, AI systems have bidirectional data flows between untrusted inputs (users, external data) and privileged outputs (tool calls, database writes, communications). The architecture must account for this at every layer.
Standards we work to
Red team by default
Every AI assessment includes adversarial testing. We don't just review architecture — we attack it, using the same multi-specialist parallel approach that finds what solo testers miss.
Framework-mapped findings
All findings are mapped to OWASP LLM Top 10 and MITRE ATLAS, giving your team a standardized remediation framework that aligns with board-level risk reporting.
Built for fast-moving AI
AI threat techniques evolve weekly. The collective model means you benefit from specialists who are continuously tracking jailbreak research, agentic exploits, and supply chain developments.
Deploying AI? Start with a threat model.
Tell us about your AI stack. We'll respond with a proposed assessment scope tailored to your deployment architecture.