AI Security & Runtime Controls

Lakera

Security tooling for protecting LLM applications against prompt injection, jailbreaks, and unsafe interactions.

Best for: Teams shipping external or employee-facing LLM applications.
Deployment: API and security platform
Primary motion: Protect LLM apps at runtime against prompt injection and abuse threats.

What This Vendor Covers

Lakera is a strong fit for buyers who are already deploying LLM apps and need defensive controls around prompts, responses, and runtime misuse. It is most useful when governance concerns are driven by security incidents rather than audit programs.

  • prompt security
  • runtime protection
  • jailbreak defense
  • LLM apps

Buyer Checklist

  • Can it inspect both prompts and model outputs in real time?
  • What coverage exists for jailbreak and prompt injection patterns?
  • How is false-positive tuning handled for production traffic?
  • Does it support multiple model providers and self-hosted models?
  • How are incidents logged for later review?
  • Can controls be enforced without breaking user experience?
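The runtime pattern behind the first two checklist items can be sketched as a guard layer that screens the inbound prompt before the model call and the model's output afterward, logging anything flagged for later review. Everything below is a minimal illustrative sketch: the `screen` function, its regex patterns, and `log_incident` are hypothetical placeholders, not Lakera's API or detection logic.

```python
import re
from dataclasses import dataclass

# Illustrative injection phrasings only; a real product uses far
# richer detection than keyword patterns.
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal .*system prompt",
    r"pretend you are",
]

@dataclass
class Verdict:
    flagged: bool
    reason: str = ""

def screen(text: str) -> Verdict:
    """Flag text matching known prompt-injection phrasings (hypothetical)."""
    lowered = text.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return Verdict(True, f"matched: {pattern}")
    return Verdict(False)

def log_incident(stage: str, reason: str) -> None:
    """Record a flagged interaction for later review (placeholder sink)."""
    print(f"[incident] stage={stage} {reason}")

def guarded_call(prompt: str, model) -> str:
    """Wrap a model call with pre- and post-screening."""
    inbound = screen(prompt)
    if inbound.flagged:
        log_incident("prompt", inbound.reason)
        return "Request blocked by policy."
    response = model(prompt)
    outbound = screen(response)
    if outbound.flagged:
        log_incident("response", outbound.reason)
        return "Response withheld by policy."
    return response
```

Note that the wrapper returns a polite refusal rather than an error, which speaks to the last checklist item: enforcement should degrade the conversation gracefully, not break it.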