AI Safety encompasses the practices, techniques, and research aimed at ensuring AI systems behave as intended without causing harm. This includes preventing dangerous outputs, protecting user privacy, and ensuring models remain controllable.
Safety is a critical consideration throughout the AI lifecycle, from training data curation to deployment and monitoring.
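As a concrete illustration of one such practice, the sketch below shows a minimal output-side safety filter applied before model text reaches a user. The `FLAGGED_PATTERNS` list and `moderate_output` function are hypothetical names for this example; production systems typically rely on trained moderation classifiers rather than simple pattern matching.

```python
import re

# Hypothetical blocklist a deployment might check outputs against.
# Real systems usually use learned classifiers, not keyword rules.
FLAGGED_PATTERNS = [
    re.compile(r"\bhow to build a bomb\b", re.IGNORECASE),  # dangerous content
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                   # SSN-like string (privacy)
]

def moderate_output(text: str) -> str:
    """Return the model output if it passes checks, else a refusal."""
    for pattern in FLAGGED_PATTERNS:
        if pattern.search(text):
            return "I can't help with that request."
    return text

if __name__ == "__main__":
    print(moderate_output("The capital of France is Paris."))   # passes through
    print(moderate_output("My SSN is 123-45-6789."))            # blocked
```

Checks like this sit at the deployment and monitoring end of the lifecycle; the earlier stages (data curation, training-time alignment) aim to reduce how often such filters are needed at all.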