DistilRoberta-financial-sentiment
This model is a fine-tuned version of distilroberta-base on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
Loss: 0.1116
Accuracy: 0.9823
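For reference, a minimal sketch of querying such a checkpoint with the transformers pipeline API; the repo id below is a placeholder, not the actual Hub path of this model:

from transformers import pipeline

# Placeholder repo id; substitute the real Hub path of this checkpoint.
sentiment = pipeline("text-classification", model="your-org/distilroberta-financial-sentiment")
print(sentiment("Operating profit rose 20% year over year."))
# financial_phrasebank labels are positive / negative / neutral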
Base Model description
…
HelpingAI-Vision
Model details
The fundamental concept behind HelpingAI-Vision is to generate one token embedding per N parts of an image, as opposed to producing N…
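To make the idea concrete, here is a toy sketch (shapes and the pooling choice are assumptions for illustration, not the model's actual code) of collapsing every group of N patch features into a single token embedding:

import torch

# Toy illustration only: pool every group of N vision-encoder patch features
# into one token embedding (mean pooling is an assumed choice).
patch_features = torch.randn(1, 576, 1024)   # (batch, num_patches, hidden_dim)
N = 4
b, p, d = patch_features.shape
tokens = patch_features.view(b, p // N, N, d).mean(dim=2)
print(tokens.shape)  # torch.Size([1, 144, 1024]): one token per N patches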
Ziya-BLIP2-14B-Visual-v1
Main Page: Fengshenbang
Github: Fengshenbang-LM
Ziya Model Series
Ziya-BLIP2-14B-Visual-v1
Ziya-LLaMA-13B-v1.1
Ziya-LLaMA-13B-v1
Ziya-LLaMA-7B-Reward
Ziya-LLaMA-13B-Pretrain-v1
Brief Introduction
The Ziya-Visual multimodal large model is trained on top of the general-purpose Ziya-LLaMA-13B-v1 model and has visual question answering and dialogue capabilities. In March of this year, OpenAI released GPT-4, a multimodal large model with image-understanding ability; regrettably, to this day most users still have not been granted access to GPT-4's image input. Ziya-Visual draws on excellent open-source implementations such as Mini-GPT4 and LLaVA to fill in Ziya's image-understanding capability, so that Chinese-speaking users can experience the power of a large model that combines the two modalities of vision and language.
Model Details: DPT-Large (also known as MiDaS 3.0)
Dense Prediction Transformer (DPT) model trained on 1.4 million images for monocular depth estimation.
It was introduced in the paper Vision…
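A minimal sketch of running the model for monocular depth estimation with the standard transformers DPT classes, assuming the checkpoint is the one published as Intel/dpt-large:

import torch
from PIL import Image
from transformers import DPTForDepthEstimation, DPTImageProcessor

processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

image = Image.open("example.jpg")            # any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    depth = model(**inputs).predicted_depth  # (batch, H, W) relative depth map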
Massively Multilingual Speech (MMS): English Text-to-Speech
This repository contains the English (eng) language…
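MMS TTS checkpoints load through the transformers VITS classes; a minimal sketch, assuming the English checkpoint id facebook/mms-tts-eng:

import torch
from transformers import AutoTokenizer, VitsModel

model = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer("Hello from the MMS English text-to-speech model.", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # (batch, samples) at model.config.sampling_rate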
The crispy rerank family from mixedbread ai.
mxbai-rerank-base-v1
This is the base model in our family of powerful reranker models. You can learn more about the models in…
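A minimal usage sketch via the sentence-transformers CrossEncoder class; the Hub id below is assumed from the model name and should be checked against the repository:

from sentence_transformers import CrossEncoder

# Assumed Hub id; verify against the actual repository.
reranker = CrossEncoder("mixedbread-ai/mxbai-rerank-base-v1")

query = "Who wrote 'To Kill a Mockingbird'?"
docs = [
    "'To Kill a Mockingbird' is a 1960 novel by Harper Lee.",
    "'Moby-Dick' was written by Herman Melville.",
]
scores = reranker.predict([(query, d) for d in docs])  # higher score = more relevant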
News
See the paper: https://huggingface.co/papers/2402.16641
Load Model
import torch
from transformers import AutoModelForCausalLM

# trust_remote_code is needed for co-instruct's custom modeling code;
# the dtype/device settings are typical assumptions, not given in this excerpt.
model = AutoModelForCausalLM.from_pretrained(
    "q-future/co-instruct", trust_remote_code=True,
    torch_dtype=torch.float16, device_map="auto")
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Model card for BLIP trained on visual question answering (base architecture, with a ViT-base backbone).
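A minimal VQA sketch with the transformers BLIP classes, assuming the checkpoint is the one published as Salesforce/blip-vqa-base:

from PIL import Image
from transformers import BlipForQuestionAnswering, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

image = Image.open("example.jpg")
inputs = processor(image, "How many dogs are in the picture?", return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))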
