AI Governance Platforms
Platforms for AI inventory, policy mapping, model approval workflows, control evidence, and audit readiness.
Curated shortlists and buyer checklists across AI governance platforms, runtime security, observability, and privacy tooling — aligned to EU AI Act, NIST AI RMF, and ISO 42001 programs.
Each category aligns with the team that typically leads the evaluation: security, risk and legal, ML platform, or privacy.
AI governance platforms: AI inventory, policy mapping, model approval workflows, control evidence, and audit readiness.
Runtime security: vendors focused on prompt injection defense, model scanning, runtime guardrails, LLM app security, and AI-specific attack surfaces.
Observability: monitoring, testing, tracing, and evaluation tooling for model quality, safety, drift, and production behavior.
Privacy tooling: vendors centered on PII detection, privacy-preserving AI workflows, sensitive data controls, and broader data governance obligations.
Platforms frequently shortlisted by enterprise evaluators.
Enterprise AI governance platform for inventory, risk controls, policy automation, and audit-ready evidence.
Governance, risk, and compliance platform with an emphasis on AI assurance and policy operationalization.
AI governance and compliance tooling designed to plug into existing model, risk, and process infrastructure.
Security tooling for protecting LLM applications against prompt injection, jailbreaks, and unsafe interactions.
AI security company focused on securing models, pipelines, registries, and the broader ML supply chain.
Observability platform for monitoring model performance, explainability, quality, and AI system behavior in production.
Evaluation and testing platform for LLM outputs, safety, reliability, and application quality.
Privacy-focused tooling for detecting and redacting PII in text and data flowing through AI systems.
Start with the category that maps to your team. Every vendor page includes a buyer checklist, framework fit, and comparisons with similar platforms.