Tags: #any-to-any, #multimodal-large-language-model
NExT-GPT/NExT-GPT
The first end-to-end multimodal large language model (MM-LLM) that perceives input and generates output in arbitrary combinations (any-to-any) of text, image, video, and audio.