In-context learning (ICL) is the ability of LLMs to learn new tasks or patterns from examples provided in the prompt, without any weight updates or fine-tuning. The model adapts its behavior based purely on the context you give it.
This is what makes few-shot prompting work: the model "learns" the pattern from your examples at inference time, and that learned behavior disappears once the context is gone.
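To make this concrete, here is a minimal sketch of how a few-shot prompt is typically assembled. The task (country to capital) and the `Input:`/`Output:` formatting are illustrative choices, not a fixed standard; the point is that the task is specified entirely by the examples in the prompt, with no weight updates.

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Format input/output example pairs plus a new query into one prompt.

    The model is expected to infer the task pattern from the examples
    at inference time — this is in-context learning.
    """
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    # End with the new query and a trailing "Output:" cue so the model
    # continues the established pattern.
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)


examples = [("France", "Paris"), ("Japan", "Tokyo")]
prompt = build_few_shot_prompt(examples, "Canada")
print(prompt)
```

Sent to an LLM, a prompt like this usually elicits "Ottawa" even though the task was never stated explicitly: the examples alone define it. Adding or changing examples changes the model's behavior immediately, with no training step involved.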