The learning rate is a hyperparameter that controls how much model weights are adjusted at each training step. It sets the size of the step taken in the direction opposite to the gradient. Set too high, it causes unstable training or outright divergence; set too low, it makes convergence needlessly slow.
The learning rate is often the single most important hyperparameter to tune for successful training.
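The effect described above can be seen in a minimal sketch of gradient descent on a one-dimensional quadratic, f(w) = (w - 3)^2. The function, the learning-rate values, and the step count are all illustrative choices, not anything prescribed by the text:

```python
def gradient_descent(lr, steps=50, w=0.0):
    """Minimize f(w) = (w - 3)^2 by plain gradient descent."""
    for _ in range(steps):
        grad = 2 * (w - 3)   # gradient of (w - 3)^2
        w -= lr * grad       # step opposite the gradient, scaled by lr
    return w

# A moderate learning rate converges toward the minimum at w = 3.
print(gradient_descent(0.1))   # close to 3.0
# A learning rate that is too large overshoots the minimum on every
# step, and the error doubles each iteration instead of shrinking.
print(gradient_descent(1.5))   # diverges far from 3.0
```

With lr = 0.1 the update contracts the distance to the minimum by a factor of 0.8 per step; with lr = 1.5 that factor becomes -2, so each step moves farther away, which is the instability the paragraph warns about.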