VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
Model Details
Usage
BibTeX citation
License
…
The crispy rerank family from mixedbread ai.
mxbai-rerank-xsmall-v1
This is the smallest model in our family of powerful reranker models. You can learn more about the models in…
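A reranker like this one scores each (query, document) pair and reorders a retrieved candidate list by that score. Below is a minimal sketch of the retrieve-then-rerank pattern; the `overlap_score` function is a toy stand-in for the model's actual relevance score, not part of the mxbai-rerank API:

```python
def overlap_score(query: str, doc: str) -> float:
    """Placeholder scorer: fraction of query tokens found in the document.
    A real reranker (cross-encoder) would produce this relevance score."""
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def rerank(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    """Score every candidate against the query and keep the best top_k."""
    ranked = sorted(docs, key=lambda d: overlap_score(query, d), reverse=True)
    return ranked[:top_k]

candidates = [
    "The capital of France is Paris.",
    "Reranking improves retrieval quality.",
    "Paris is known for the Eiffel Tower.",
]
print(rerank("capital of France", candidates, top_k=1))
# → ['The capital of France is Paris.']
```

In practice the first-stage retriever (e.g. BM25 or a bi-encoder) supplies `candidates`, and only this short list is passed through the more expensive cross-encoder.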
Ziya-Visual-14B-Chat
Main Page: Fengshenbang
Github: Fengshenbang-LM
Ziya Series Models
Ziya-LLaMA-13B-v1.1
Ziya-LLaMA-13B-v1
Ziya-LLaMA-7B-Reward
Ziya-LLaMA-13B-Pretrain-v1
Software Dependencies
pip install torch==1.12.1 tokenizers==0.13.3 git+https://github.com/huggingface/transformers
Model Taxonomy
Demand | Task | Series | Model | Parameters…
MADNet Keras
MADNet is a deep stereo depth estimation model. Its key defining features are:
It has a lightweight architecture, which gives it low latency.
It supports self-supervised…
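Self-supervised stereo training of this kind typically needs no ground-truth depth: the right image is warped toward the left view using the predicted disparity, and a photometric reconstruction error supplies the training signal. A NumPy sketch of that idea (an illustrative loss, not MADNet's exact formulation):

```python
import numpy as np

def warp_right_to_left(right: np.ndarray, disparity: np.ndarray) -> np.ndarray:
    """Reconstruct the left view by sampling the right image at x - d.
    right, disparity: (H, W) arrays; nearest-neighbour sampling for brevity."""
    h, w = right.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    src_x = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    return right[rows, src_x]

def photometric_loss(left, right, disparity) -> float:
    """Mean absolute error between the left image and the warped right image.
    This is the self-supervision signal: no ground-truth depth is required."""
    return float(np.abs(left - warp_right_to_left(right, disparity)).mean())

# Toy example: a vertical stripe shifted by 2 pixels between the two views.
left = np.zeros((4, 8)); left[:, 4] = 1.0
right = np.zeros((4, 8)); right[:, 2] = 1.0
print(photometric_loss(left, right, np.full((4, 8), 2.0)))  # correct disparity → 0.0
```

A wrong disparity leaves a residual error, which is what gradient descent minimizes; full systems add smoothness terms and occlusion handling on top of this basic reconstruction loss.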
Massively Multilingual Speech (MMS) : Text-to-Speech Models
Table Of Contents
Usage
Supported Languages
Model details
Additional Links
This…
Reward Model Overview
The reward model is trained from the base model google/gemma-2b-it. See the 7B version RM-Gemma-7B.
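A reward model of this kind maps a (prompt, response) pair to a scalar score, which is commonly used to rank candidate generations (best-of-n sampling) or as the reward signal in RLHF. A minimal sketch of best-of-n selection; the `reward` function here is a toy heuristic standing in for the actual model (a scoring head on the gemma-2b-it base):

```python
def reward(prompt: str, response: str) -> float:
    """Placeholder scalar reward. A real reward model computes this by
    running the full conversation through a fine-tuned LM with a value head."""
    # Toy heuristic: prefer responses that mention the prompt's words, briefly.
    relevance = sum(w in response.lower() for w in prompt.lower().split())
    brevity_penalty = len(response.split()) / 100.0
    return relevance - brevity_penalty

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Best-of-n sampling: return the candidate the reward model scores highest."""
    return max(candidates, key=lambda r: reward(prompt, r))

print(best_of_n("name the capital of France",
                ["I am not sure.", "The capital of France is Paris."]))
# → The capital of France is Paris.
```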
Model Details
If you have any question…
TinyLLaVA: A Framework of Small-scale Large Multimodal Models
🎉 News
[2024.02.25] Update evaluation scripts and docs!
[2024.02.25] Data descriptions out. Release…
YuRen-BaiChuan-7B
YuRen-BaiChuan-7B is an open-source multimodal large language model created by multi-task supervised fine-tuning of baichuan-inc/baichuan-7B, building on Pleisto's data-centric AI work. It delivers strong performance across multi-turn dialogue, open-domain question answering, role-play, text generation, text comprehension, image understanding, and other tasks.
