liuhaotian

34 models

llava-v1.5-7b
image-text-to-text · inference: false
264,085 downloads · 509 likes

llava-v1.6-vicuna-13b
36,627 downloads · 58 likes

llava-v1.5-13b

Model type: LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture.
Model date: LLaVA-v1.5-13B was trained in September 2023.
Paper or resources for more information: https://llava-vl.github.io/
License: Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
Where to send questions or comments about the model: https://github.com/haotian-liu/LLaVA/issues

Intended use
Primary intended uses: The primary use of LLaVA is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.

Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.

33,532 downloads · 518 likes
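At inference time, LLaVA-v1.5 checkpoints expect their input wrapped in a Vicuna-v1-style conversation template, with an image placeholder token in the user turn. Below is a minimal sketch of that layout; the exact system string and separators live in the LLaVA repository's conversation templates, so the constants here should be treated as illustrative rather than authoritative.

```python
# Sketch of the Vicuna-v1-style prompt layout used by LLaVA-v1.5 checkpoints.
# The system string and separators are assumptions modeled on the LLaVA repo's
# conversation templates, not copied verbatim from it.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions."
)

def build_prompt(question: str) -> str:
    """Wrap a single-image question in the chat template.

    The <image> token marks where the vision encoder's patch embeddings
    are spliced into the token sequence.
    """
    return f"{SYSTEM} USER: <image>\n{question} ASSISTANT:"

print(build_prompt("What is shown in this image?"))
```

The model's reply is whatever the language model generates after the trailing `ASSISTANT:` marker.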

llava-v1.6-vicuna-7b

Model type: LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture.
Base LLM: lmsys/vicuna-7b-v1.5
Model date: LLaVA-v1.6-Vicuna-7B was trained in December 2023.
Paper or resources for more information: https://llava-vl.github.io/
License: Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
Where to send questions or comments about the model: https://github.com/haotian-liu/LLaVA/issues

Intended use
Primary intended uses: The primary use of LLaVA is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.

Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.

29,249 downloads · 137 likes
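The training mixtures quoted in the v1.5-13B and v1.6-Vicuna-7B cards differ only in the size of the VQA share and the added GPT-4V data; a quick tally of the sample counts stated above:

```python
# Sample counts (in thousands) taken directly from the two model cards above.
v15 = {"laion_cc_sbu": 558, "gpt_instruct": 158, "vqa": 450, "sharegpt": 40}
v16 = {"laion_cc_sbu": 558, "gpt_instruct": 158, "vqa": 500,
       "gpt4v": 50, "sharegpt": 40}

total_v15 = sum(v15.values())  # 1206K, i.e. about 1.2M samples
total_v16 = sum(v16.values())  # 1306K, i.e. about 1.3M samples
print(total_v15, total_v16)
```

So the v1.6 recipe adds roughly 100K samples over v1.5, mostly from the enlarged VQA mixture and the new GPT-4V data.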

llava-v1.6-mistral-7b
apache-2.0 · 18,904 downloads · 239 likes

llava-v1.6-34b

Model type: LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture.
Base LLM: NousResearch/Nous-Hermes-2-Yi-34B
Model date: LLaVA-v1.6-34B was trained in December 2023.
Paper or resources for more information: https://llava-vl.github.io/
Where to send questions or comments about the model: https://github.com/haotian-liu/LLaVA/issues

Intended use
Primary intended uses: The primary use of LLaVA is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.

Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.

apache-2.0 · 4,832 downloads · 354 likes

llava-llama-2-13b-chat-lightning-preview
546 downloads · 47 likes

llava-v1.5-7b-lora
134 downloads · 24 likes

llava-llama-2-13b-chat-lightning-gptq
llama · 105 downloads · 8 likes

llava-v1.5-mlp2x-336px-pretrain-vicuna-7b-v1.5
77 downloads · 22 likes

LLaVA-Lightning-MPT-7B-preview
cc-by-nc-sa-4.0 · 70 downloads · 52 likes

LLaVA-Lightning-7B-delta-v1-1
apache-2.0 · 61 downloads · 22 likes

LLaVA-13b-delta-v0
llama · 55 downloads · 221 likes

llava-llama-2-7b-chat-lightning-lora-preview
33 downloads · 13 likes

llava-v1.5-mlp2x-336px-pretrain-vicuna-13b-v1.5
30 downloads · 2 likes

llava-lcs558k-scienceqa-vicuna-13b-v1.3
28 downloads · 6 likes

llava-pretrain-vicuna-7b-v1.3
25 downloads · 2 likes

llava-v1.5-13b-lora
22 downloads · 27 likes

llava-336px-pretrain-llama-2-7b-chat
18 downloads · 0 likes

LLaVA-13b-delta-v1-1
apache-2.0 · 17 downloads · 14 likes

LLaVA-7b-delta-v0
llama · 15 downloads · 17 likes

llava-v1-0719-336px-lora-merge-vicuna-13b-v1.3
15 downloads · 9 likes

llava-336px-pretrain-vicuna-7b-v1.3
13 downloads · 3 likes

llava-pretrain-vicuna-13b-v1.3
13 downloads · 0 likes

llava-336px-pretrain-vicuna-13b-v1.3
12 downloads · 7 likes

llava-pretrain-llama-2-7b-chat
12 downloads · 4 likes

llava-v1-0719-336px-lora-vicuna-13b-v1.3
9 downloads · 8 likes

llava-336px-pretrain-llama-2-13b-chat
8 downloads · 2 likes

llava-pretrain-llama-2-13b-chat
7 downloads · 2 likes

LLaVA-13b-delta-v0-science_qa
llama · 6 downloads · 6 likes

llava-v1.5-13b-shard3gb
2 downloads · 14 likes

LLaVA-Pretrained-Projectors
apache-2.0 · 0 downloads · 17 likes

llava-vicuna-7b-v1.1-lcs_558k-instruct_80k_1e-lora-preview_alpha
apache-2.0 · 0 downloads · 5 likes

llava-v1.6-34b-tokenizer
0 downloads · 2 likes