Logits are the raw, unnormalized output scores from a neural network before applying softmax. In language models, logits represent the model's "confidence" for each possible next token—higher logits mean the model considers that token more likely.
Logits are converted to probabilities via the softmax function, which exponentiates each logit and normalizes so the values sum to 1; the resulting distribution is used for sampling or for computing the loss (typically cross-entropy) during training.
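A minimal sketch of this conversion, using made-up logits for a hypothetical 4-token vocabulary (the values are illustrative, not from any real model):

```python
import math

def softmax(logits):
    # Subtract the max logit before exponentiating for numerical stability;
    # this shifts all exponents but leaves the normalized result unchanged.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for 4 candidate next tokens.
logits = [2.0, 1.0, 0.1, -1.0]
probs = softmax(logits)

# The probabilities sum to 1, and the largest logit yields the largest probability.
print(probs)
```

Note the max-subtraction trick: without it, large logits can overflow `exp`, which is why virtually every softmax implementation includes it.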