Prompt Engineering Fundamentals / Learning Approaches

In-Context Learning

Intermediate [3/5]
ICL · Prompt-based learning · Learning from examples

Definition

In-context learning (ICL) is the ability of LLMs to learn new tasks or patterns from examples provided in the prompt, without any weight updates or fine-tuning. The model adapts its behavior based purely on the context you give it.

This is what enables few-shot prompting to work—the model "learns" the pattern from your examples during inference.
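A few-shot prompt is just text: demonstrations followed by an unanswered query in the same shape. The sketch below assembles one as a plain string; the sentiment-labeling task and the example pairs are illustrative choices, not tied to any particular API.

```python
# Demonstration pairs the model will "learn" from at inference time.
examples = [
    ("The food was amazing", "positive"),
    ("Terrible service, never again", "negative"),
    ("It was fine, nothing special", "neutral"),
]

def build_icl_prompt(examples, query):
    """Format demonstration pairs followed by the unanswered query.

    The model is expected to continue the pattern and emit a label for
    the final input — no weight updates involved.
    """
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_icl_prompt(examples, "Best purchase I've made all year")
print(prompt)
```

The prompt ends mid-pattern ("Sentiment:"), which is exactly what invites the model to complete it.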

Key Concepts

  • No training required: Learning happens at inference time
  • Pattern recognition: Model identifies patterns from examples
  • Temporary: Learning only lasts for the current context
  • Emergent ability: Appeared in large-scale models

Examples

ICL in Action
Learning a New Format
PROMPT (with examples):

Convert to our internal format:
"John Smith" → SMITH_JOHN
"Jane Doe" → DOE_JANE
"Bob Wilson" → WILSON_BOB
Now convert: "Alice Johnson"

RESPONSE:
→ JOHNSON_ALICE

The model learned:
1. Split by space
2. Reverse order
3. UPPERCASE
4. Join with underscore

All from just 3 examples!
The model infers complex transformation rules from examples alone.
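The inferred rule can be written out explicitly. This hypothetical reference implementation makes the four steps concrete and checkable:

```python
def to_internal_format(name: str) -> str:
    """Apply the rule the model inferred from the three examples."""
    first, last = name.split()        # 1. split by space
    return f"{last}_{first}".upper()  # 2-4. reverse order, join with "_", UPPERCASE

print(to_internal_format("Alice Johnson"))  # → JOHNSON_ALICE
```

The model never sees this function; it reconstructs equivalent behavior from the input/output pairs alone.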
ICL Types
Variations of In-Context Learning
ZERO-SHOT
  • No examples, just instructions
  • "Translate to French: Hello"
  • Relies on pre-trained knowledge

FEW-SHOT (1-5 examples)
  • Small number of demonstrations
  • Model infers pattern from examples
  • Most common ICL approach

MANY-SHOT (10+ examples)
  • More examples for complex tasks
  • Better pattern recognition
  • Uses more context window

CHAIN-OF-THOUGHT ICL
  • Examples include reasoning steps
  • Model learns to show its work
  • Better for complex reasoning
Different numbers and types of examples affect ICL effectiveness.
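Zero-, few-, and many-shot prompts differ only in how many demonstrations are prepended to the same query. A sketch, assuming an English-to-French translation task with illustrative pairs:

```python
# Illustrative demonstration pool; slicing it controls the "shot" count.
DEMOS = [
    ("Hello", "Bonjour"),
    ("Thank you", "Merci"),
    ("Good night", "Bonne nuit"),
]

def make_prompt(query: str, n_shots: int = 0) -> str:
    """Build the same prompt in zero-shot (n_shots=0) or few-shot form."""
    parts = ["Translate English to French."]
    for en, fr in DEMOS[:n_shots]:
        parts.append(f"English: {en}\nFrench: {fr}")
    parts.append(f"English: {query}\nFrench:")
    return "\n\n".join(parts)

print(make_prompt("Good morning", n_shots=0))  # instruction only
print(make_prompt("Good morning", n_shots=3))  # pattern + query
```

Many-shot is the same construction with a larger pool, at the cost of context-window budget.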
Why ICL Works
The Magic Behind In-Context Learning
TRADITIONAL ML:
Data → Training → Weight updates → Deploy
(Weeks/months, requires infrastructure)

IN-CONTEXT LEARNING:
Examples → Prompt → Immediate output
(Seconds, no infrastructure)

THEORIES ON HOW IT WORKS:
1. PATTERN COMPLETION: Model treats examples as a pattern to continue
2. IMPLICIT FINE-TUNING: Attention heads simulate gradient updates
3. META-LEARNING: Training included "learn from examples" tasks
4. BAYESIAN INFERENCE: Model updates beliefs based on evidence

The exact mechanism is still debated by researchers!
ICL is a surprising emergent capability that researchers are still studying.

Interactive Exercise

Test ICL Understanding

Write examples to teach a model this transformation (ICL-style):

Rule: Convert dates from "Month DD, YYYY" to "YYYY-MM-DD"

Write 3 examples that would help a model learn this pattern.
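One possible answer, with a small checker: before handing examples to a model, it is worth verifying that each target string really is the ISO form of its source date. The three date pairs below are sample choices, not the only valid answer.

```python
from datetime import datetime

# Sample teaching examples for the exercise.
examples = [
    ("January 5, 2023", "2023-01-05"),
    ("March 17, 1999", "1999-03-17"),
    ("December 31, 2024", "2024-12-31"),
]

# Sanity check: parse each source with the "Month DD, YYYY" pattern and
# confirm it round-trips to the stated "YYYY-MM-DD" target.
for source, target in examples:
    parsed = datetime.strptime(source, "%B %d, %Y")
    assert parsed.strftime("%Y-%m-%d") == target, (source, target)

prompt = "\n".join(f'"{s}" → {t}' for s, t in examples)
prompt += '\nNow convert: "April 9, 2021"'
print(prompt)
```

Note the examples vary the month, day width, and year, so the model cannot latch onto a spurious shortcut.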

Pro Tips
  • Use diverse examples that cover edge cases
  • Keep examples consistent in format
  • Order can matter—sometimes put hardest examples last
  • 3-5 examples is often the sweet spot
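The consistency tip above can be enforced mechanically. A minimal sketch, assuming examples use the `"input" → OUTPUT` arrow format shown earlier on this page:

```python
import re

# Template every example should match, so the model sees one uniform
# pattern rather than several competing ones.
PATTERN = re.compile(r'^"[^"]+" → [A-Z_]+$')

examples = [
    '"John Smith" → SMITH_JOHN',
    '"Jane Doe" → DOE_JANE',
    '"Bob Wilson" → WILSON_BOB',
]

assert all(PATTERN.match(ex) for ex in examples), "inconsistent example format"
```

A stray separator or casing change in one example is exactly the kind of inconsistency that weakens the pattern the model infers.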

Related Terms