Guardrails are programmatic constraints and checks placed around LLM systems to ensure outputs stay within acceptable boundaries. They filter inputs and outputs, validate responses, and prevent misuse, acting as safety barriers around the core model.
Guardrails provide defense-in-depth, catching issues that model training alone may not prevent.
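To make the idea concrete, the sketch below wraps a model call with an input check and an output check. It is a minimal illustration, not a specific library's API: the names (`call_model`, the blocked-pattern and deny-list contents) are hypothetical placeholders, and real guardrails would use far more robust classification than simple pattern matching.

```python
import re

# Illustrative rules only; real systems would use trained classifiers or policy engines.
BLOCKED_INPUT_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),  # naive prompt-injection check
]
BLOCKED_OUTPUT_TERMS = ["internal-api-key", "system prompt"]  # placeholder deny-list


def input_guardrail(prompt: str):
    """Return a refusal message if the prompt violates a rule, else None."""
    if len(prompt) > 4000:
        return "Input too long."
    for pattern in BLOCKED_INPUT_PATTERNS:
        if pattern.search(prompt):
            return "Request blocked by input guardrail."
    return None


def output_guardrail(response: str) -> str:
    """Validate the model's response; withhold it if it violates a rule."""
    if any(term in response.lower() for term in BLOCKED_OUTPUT_TERMS):
        return "Response withheld by output guardrail."
    return response


def guarded_call(prompt: str, call_model) -> str:
    """Wrap a model call (any callable taking a prompt) with both checks."""
    refusal = input_guardrail(prompt)
    if refusal is not None:
        return refusal
    return output_guardrail(call_model(prompt))
```

Layering the checks this way means a problematic request can be stopped before the model is ever invoked, and a problematic response can still be caught afterward, which is the defense-in-depth property described above.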