Self-calibration is a technique in which an LLM assesses its own confidence in its answers, with the goal of aligning stated confidence with actual accuracy. A well-calibrated model should be correct about 80% of the time across the answers it labels "80% confident."
This is crucial for building reliable AI systems that know when to seek human input or additional verification.
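As a minimal sketch of how calibration can be quantified, the Python snippet below computes a standard Expected Calibration Error (ECE): answers are bucketed by stated confidence, and each bucket's average confidence is compared to its empirical accuracy. The `Answer` type and the example data are hypothetical, assuming you have already collected (stated confidence, was-correct) pairs from the model.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    confidence: float  # model's stated confidence in [0, 1] (assumed already collected)
    correct: bool      # whether the answer matched the reference

def expected_calibration_error(answers: list[Answer], n_bins: int = 10) -> float:
    """Bucket answers by stated confidence and compare each bucket's
    average confidence to its empirical accuracy (standard ECE)."""
    bins: list[list[Answer]] = [[] for _ in range(n_bins)]
    for a in answers:
        # Map confidence in [0, 1] to a bin index; clamp 1.0 into the last bin.
        idx = min(int(a.confidence * n_bins), n_bins - 1)
        bins[idx].append(a)

    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(a.confidence for a in bucket) / len(bucket)
        accuracy = sum(a.correct for a in bucket) / len(bucket)
        # Weight each bucket's |confidence - accuracy| gap by its share of answers.
        ece += (len(bucket) / len(answers)) * abs(avg_conf - accuracy)
    return ece

# Example: a model that says "0.8" and is right 4 out of 5 times is well calibrated.
answers = [Answer(0.8, True)] * 4 + [Answer(0.8, False)]
print(f"ECE: {expected_calibration_error(answers):.3f}")  # 0.000
```

In practice, the confidence values might come from prompting the model to report a probability alongside each answer, or from token-level probabilities; an ECE near zero indicates stated confidence tracks actual accuracy.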