
Ziya-BLIP2-14B-Visual-v1

Main Page: Fengshenbang · GitHub: Fengshenbang-LM

Ziya series models: Ziya-BLIP2-14B-Visual-v1, Ziya-LLaMA-13B-v1.1, Ziya-LLaMA-13B-v1, Ziya-LLaMA-7B-Reward, Ziya-LLaMA-13B-Pretrain-v1

Brief Introduction: The Ziya-Visual multimodal large model is trained on top of the Ziya general-purpose large model V1 (Ziya-LLaMA-13B-v1) and supports visual question answering and multimodal dialogue. In March of this year OpenAI released GPT-4, a multimodal model that can understand images, but to date most users still have no access to GPT-4's image-input feature. Ziya-Visual draws on excellent open-source implementations such as Mini-GPT4 and LLaVA to add image understanding to Ziya, so that Chinese-speaking users can experience a large model that combines the vision and language modalities.
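The card excerpt above does not include loading code. As a hedged illustration of the BLIP-2-style visual question answering interface that Ziya-Visual builds on, the sketch below uses the public Salesforce/blip2-opt-2.7b checkpoint as a stand-in; the actual Ziya-Visual weights ship with custom code in the Fengshenbang-LM repository, so its loading path may differ.

```python
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Stand-in checkpoint for illustration only; NOT the Ziya-BLIP2-14B-Visual-v1 weights.
checkpoint = "Salesforce/blip2-opt-2.7b"
processor = Blip2Processor.from_pretrained(checkpoint)
model = Blip2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.float16
).to("cuda")

# Visual question answering: pass an image plus a question-style prompt.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "Question: how many animals are in the picture? Answer:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```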

mms-tts-eng

Massively Multilingual Speech (MMS): English Text-to-Speech
Contents: Model Details · Usage · BibTex citation · License
This repository contains the English (eng) language…
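The Usage section is truncated above. A minimal sketch of the standard transformers interface for the MMS VITS checkpoints, assuming the hub id facebook/mms-tts-eng:

```python
import torch
from transformers import VitsModel, AutoTokenizer

# Assumed hub id for the English MMS text-to-speech checkpoint (VITS architecture).
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer("Some example text for the English voice.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

waveform = outputs.waveform[0]  # 1-D audio tensor, sampled at model.config.sampling_rate
```

The resulting waveform can then be written to disk, for example with scipy.io.wavfile.write using model.config.sampling_rate as the sample rate.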

co-instruct

News: see its paper: https://huggingface.co/papers/2402.16641

Load Model:

import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("q-future/co-instruct", …
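The load call above is cut off. A hedged sketch of what a complete call might look like, assuming the repository ships custom modeling code (hence trust_remote_code=True); the dtype and device placement are illustrative choices, not taken from the original card:

```python
import torch
from transformers import AutoModelForCausalLM

# Hedged completion of the truncated call above; keyword arguments beyond the repo id
# are assumptions, not copied from the source.
model = AutoModelForCausalLM.from_pretrained(
    "q-future/co-instruct",
    trust_remote_code=True,    # assumed: the repo defines its own model class
    torch_dtype=torch.float16,  # illustrative: half precision to fit on a single GPU
    device_map="cuda:0",        # illustrative device placement
)
```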