A multi-lingual, versatile text embedding model supporting dense, sparse, and multi-vector retrieval. <metadata> gpu: T4 | collections: ["Information Retrieval"] </metadata>
An embedding model trained on a mixture of multilingual datasets, supporting 100 languages. <metadata> gpu: T4 | collections: ["HF Transformers"] </metadata>
Text embedding model, delivering 768-dimensional embeddings and supporting up to 8192-token inputs. <metadata> gpu: T4 | collections: ["Information Retrieval"] </metadata>
Text embedding model trained on extensive Mandarin corpora, producing high-quality vector representations for semantic search, clustering, classification, and retrieval. <metadata> gpu: T4 | collections: ["HF Transformers"] </metadata>
Fine-tuned on MS MARCO for efficient dense sentence embeddings, excelling in semantic search and retrieval. <metadata> gpu: T4 | collections: ["Information Retrieval"] </metadata>
Generates embeddings of biomedical articles that can be used for semantic search (dense retrieval). <metadata> gpu: T4 | collections: ["HF Transformers","Batch Input Processing"] </metadata>
A 600M-parameter, 100-language embedding model that turns inputs of up to 32k tokens into instruction-aware vectors. <metadata> gpu: A10 | collections: ["HF_Transformers"] </metadata>
Google’s Universal Sentence Encoder Multilingual QA produces high-quality sentence embeddings optimized for cross-lingual question answering and semantic similarity tasks. <metadata> gpu: T4 | collections: ["Information Retrieval"] </metadata>
Generates embeddings of short biomedical texts such as questions, search queries, and sentences. <metadata> gpu: T4 | collections: ["HF Transformers","Batch Input Processing"] </metadata>
A 3.8B multimodal, multilingual embedding model that unifies text and image understanding in a single late-interaction space, delivering both dense and multi-vector outputs. <metadata> gpu: A10 | collections: ["HF_Transformers"] </metadata>
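Several of the descriptions above distinguish dense, sparse, and multi-vector (late-interaction) retrieval. A minimal sketch of how each style scores a query against a document, using plain Python (the function names and toy vectors are illustrative, not any specific model's API):

```python
def dense_score(q, d):
    # Dense retrieval: one vector per text, compared by cosine similarity.
    dot = sum(a * b for a, b in zip(q, d))
    norm = (sum(a * a for a in q) ** 0.5) * (sum(b * b for b in d) ** 0.5)
    return dot / norm

def sparse_score(q, d):
    # Sparse retrieval: token -> weight mappings, scored by a dot
    # product over the tokens shared between query and document.
    return sum(w * d[t] for t, w in q.items() if t in d)

def multi_vector_score(q_vecs, d_vecs):
    # Multi-vector / late interaction (MaxSim): each query token vector
    # is matched to its best-scoring document token vector, then the
    # per-token maxima are summed.
    return sum(
        max(sum(a * b for a, b in zip(qv, dv)) for dv in d_vecs)
        for qv in q_vecs
    )

# Toy usage with 2-dimensional vectors:
print(dense_score([1.0, 0.0], [1.0, 0.0]))                      # 1.0
print(sparse_score({"apple": 2.0}, {"apple": 1.5, "pie": 0.5}))  # 3.0
print(multi_vector_score([[1.0, 0.0]], [[0.0, 1.0], [1.0, 0.0]]))  # 1.0
```

Models that support all three styles (such as the first entry above) emit all of these representations from one encoder pass, letting the index choose the trade-off between speed and accuracy.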