Transfer learning is the practice of taking a model trained on one task and adapting it to a different but related task. Instead of training from scratch, you leverage knowledge already learned, dramatically reducing data and compute requirements.
Transfer learning is why you can fine-tune a model like GPT or BERT on a specific task with just a few hundred labeled examples, rather than the billions of tokens those models saw during pretraining.
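The core mechanic can be shown in miniature: freeze the weights of a "pretrained" feature extractor and train only a small new head on the target task. This is a hedged sketch, not any library's API; the extractor here is a stand-in (a fixed random projection, scaled so the nonlinearity stays informative), where a real pipeline would reuse layers from a model such as BERT or a pretrained ConvNet.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- "Pretrained" feature extractor: weights are FROZEN, never updated ---
# Hypothetical stand-in for learned layers; scaled so tanh doesn't saturate.
W_frozen = rng.normal(size=(20, 16)) * 0.1

def extract_features(x):
    # Frozen forward pass: map raw inputs into the learned feature space.
    return np.tanh(x @ W_frozen)

# --- Small target-task dataset (hundreds of examples, not billions) ---
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic binary labels

feats = extract_features(X)  # computed once; the extractor never trains

# --- New task-specific head: the ONLY trainable parameters ---
w = np.zeros(16)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    grad_w = feats.T @ (p - y) / len(y)         # logistic-loss gradient
    grad_b = np.mean(p - y)
    w -= lr * grad_w                            # update head only
    b -= lr * grad_b

acc = np.mean(((feats @ w + b) > 0) == (y == 1))
print(f"head-only training accuracy: {acc:.2f}")
```

Because only the small head is optimized, training is fast and needs little data; this is the same economics that makes fine-tuning a large pretrained model with a frozen (or lightly tuned) backbone practical.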