Embedding models convert text (or other data) into dense vector representations that capture semantic meaning. These vectors enable similarity comparison, clustering, and retrieval, the foundation of modern retrieval-augmented generation (RAG) systems.
Unlike generative LLMs, which emit text token by token, embedding models output a single fixed-length vector optimized for comparison tasks.
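The core comparison operation on these vectors is typically cosine similarity. A minimal sketch with NumPy, using made-up toy vectors (real embedding models output hundreds to thousands of dimensions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 = same direction, 0.0 = orthogonal."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" — purely illustrative values.
query = np.array([0.9, 0.1, 0.3, 0.0])
doc_a = np.array([0.8, 0.2, 0.4, 0.1])   # points in a similar direction
doc_b = np.array([0.0, 0.9, 0.0, 0.8])   # points in a different direction

print(cosine_similarity(query, doc_a))   # high similarity
print(cosine_similarity(query, doc_b))   # low similarity
```

In a retrieval setting, the query embedding is compared against every stored document embedding and the top-scoring documents are returned; vector databases accelerate this search at scale.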