[Model] VLM2Vec, the first multimodal embedding model in vLLM (#9303)
tests/models/embedding/vision_language/__init__.py (new file, 0 lines)