Model
llava-internlm-7b is a LLaVA model fine-tuned from InternLM-Chat-7B and CLIP-ViT-Large-patch14-336 on the LLaVA-Pretrain and LLaVA-Instruct datasets, using XTuner.
Quickstart
Installation
pip install -U 'xtuner[deepspeed]'
Chat
xtuner chat internlm/internlm-chat-7b \
--visual-encoder openai/clip-vit-large-patch14-336 \
…
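The chat command above is truncated; based on XTuner's documented chat CLI, a complete invocation might look like the following. The adapter repo, prompt template, and image path are assumptions for illustration, not part of this card.

```shell
# Hedged sketch of a full `xtuner chat` call; --llava adapter repo,
# --prompt-template value, and image path are assumptions.
xtuner chat internlm/internlm-chat-7b \
  --visual-encoder openai/clip-vit-large-patch14-336 \
  --llava xtuner/llava-internlm-7b \
  --prompt-template internlm_chat \
  --image ./example.jpg
```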
Training procedure
Framework versions
PEFT 0.4.0
Source link
LLaVA_X_KoLlama2-7B-pretrain-0.2v
KoT-platypus2 X LLaVA
Model Details
Model Developers: Nagase_Kotono
Base Model: kyujinpy/KoT-platypus2-7B
Training Dataset: KoLLaVA-CC3M-Pretrain-595K, LLaVA Visual Instruct Pretrain LCS-558K
Source link
kuyesu22/ll-avatar
Image-Text-to-Text
Source link
Inference Preparation
This is a fork of liuhaotian/llava-v1.6-mistral-7b made fully
compatible with SGLang inference.
No other changes were made.
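A fork like this would typically be served with SGLang's launch script. The repo placeholder, port, and host below are illustrative assumptions; this card does not specify them.

```shell
# Hedged sketch: serve the fork with SGLang's OpenAI-compatible server.
# <fork-repo-id>, host, and port are placeholders, not from this card.
python -m sglang.launch_server \
  --model-path <fork-repo-id> \
  --host 0.0.0.0 \
  --port 30000
```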
LLaVA Model Card
Model details
Model type:
LLaVA is an open-source chatbot trained…
GGUF Quantized LLaVA 1.6 Vicuna 13B
Updated quants and projector from PR #5267
| Name | Quant method | Bits | Size | Use case |
| --- | --- | --- | --- | --- |
| llava-v1.6-vicuna-13b.Q3_K_XS.gguf | Q3_K_XS | 3 | 5.31 GB | very small, high quality loss |
| llava-v1.6-vicuna-13b.Q3_K_M.gguf | Q3_K_M | 3 | 6.34 GB | very small, high quality loss |
| llava-v1.6-vicuna-13b.Q4_K_M.gguf | Q4_K_M | 4 | 7.87 GB | medium, balanced quality - recommended |
| llava-v1.6-vicuna-13b.Q5_K_S.gguf | Q5_K_S | 5 | 8.97… | |
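Quants like these are typically run with llama.cpp's LLaVA example together with the multimodal projector mentioned above. The binary name, projector filename, image path, and prompt below are assumptions, not taken from this card.

```shell
# Hedged sketch: run the recommended Q4_K_M quant with llama.cpp's
# LLaVA example. Binary name (llava-cli), projector filename, image
# path, and prompt are assumptions for illustration.
./llava-cli \
  -m llava-v1.6-vicuna-13b.Q4_K_M.gguf \
  --mmproj mmproj-model-f16.gguf \
  --image ./photo.jpg \
  -p "Describe this image."
```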
