Encoder-decoder is a transformer architecture with two components: an encoder that processes the full input bidirectionally, and a decoder that generates output autoregressively while attending to the encoder's representations.
This was the original transformer architecture, introduced in "Attention Is All You Need" (Vaswani et al., 2017) and designed for sequence-to-sequence tasks like translation.
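The two-phase flow can be sketched in plain Python: a minimal, illustrative scaled dot-product attention over toy vectors, with the encoder attending bidirectionally over the whole input and each decoder step attending first to previously generated tokens (causal self-attention) and then to the encoder's output (cross-attention). The function names (`encode`, `decode_step`) and the use of raw embeddings in place of learned projections are simplifying assumptions, not any particular library's API.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, keys, values):
    # scaled dot-product attention for a single query vector
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    # weighted sum of value vectors
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

def encode(src):
    # bidirectional self-attention: every position sees every other position
    return [attend(q, src, src) for q in src]

def decode_step(tgt_so_far, memory):
    # causal self-attention: the newest position only sees already-generated tokens,
    # then cross-attention into the encoder's representations ("memory")
    q = tgt_so_far[-1]
    self_out = attend(q, tgt_so_far, tgt_so_far)
    return attend(self_out, memory, memory)
```

A real decoder would repeat `decode_step` once per generated token, feeding each output back in; here the projections, multi-head splitting, feed-forward layers, and residual connections are all omitted to keep the encoder/decoder division visible.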