Massively Multilingual Speech (MMS): Thai Text-to-Speech
Model Details
Usage
BibTeX citation
License
This repository contains the Thai (tha) language…
MiniLM: Small and Fast Pre-trained Models for Language Understanding and Generation
MiniLM is a distilled model from the paper "MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers".…
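The distillation objective named in the paper pulls the student's self-attention distributions toward the teacher's via KL divergence. A minimal pure-Python sketch of that idea (the function names and the toy 2-token attention maps are illustrative, not from the MiniLM codebase):

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete probability distributions given as lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def attention_distillation_loss(teacher_attn, student_attn):
    """Average KL between teacher and student attention rows.

    Each argument is a list of rows; each row is one query's attention
    distribution over the sequence positions.
    """
    losses = [kl_divergence(t, s) for t, s in zip(teacher_attn, student_attn)]
    return sum(losses) / len(losses)

# Toy 2-token example: loss is zero when the student matches the teacher,
# and positive when its attention distributions drift away.
teacher = [[0.9, 0.1], [0.2, 0.8]]
aligned = [[0.9, 0.1], [0.2, 0.8]]
shifted = [[0.5, 0.5], [0.5, 0.5]]

print(attention_distillation_loss(teacher, aligned))  # 0.0 for a perfect match
print(attention_distillation_loss(teacher, shifted))  # positive when they differ
```

In training, minimizing this loss (alongside an analogous term on value relations) is what transfers the teacher's behavior into the smaller student.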
Model card for Pix2Struct - Finetuned on Widget Captioning (Captioning a UI component on a screen) - large version
Table of Contents
TL;DR
Using the model…
glpn-nyu-finetuned
This model is a fine-tuned version of vinvino02/glpn-nyu on the diode-subset dataset.
It achieves the following results on the evaluation set:
Loss: 1.5286
MAE: 3.1196
RMSE: 3.5796
Abs Rel:…
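The metrics above are standard depth-estimation errors. A minimal sketch of how MAE, RMSE, and absolute relative error are computed from flat lists of predicted and ground-truth depths (the helper name and toy values are illustrative, not from the evaluation script):

```python
import math

def depth_metrics(pred, target):
    """MAE, RMSE, and absolute relative error for depth predictions.

    `pred` and `target` are flat lists of depths (e.g. in metres);
    ground-truth values must be positive for the relative error.
    """
    n = len(target)
    mae = sum(abs(p - t) for p, t in zip(pred, target)) / n
    rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / n)
    abs_rel = sum(abs(p - t) / t for p, t in zip(pred, target)) / n
    return {"mae": mae, "rmse": rmse, "abs_rel": abs_rel}

# Toy example with four depth values.
pred = [2.0, 3.5, 1.0, 4.0]
target = [2.5, 3.0, 1.0, 5.0]
print(depth_metrics(pred, target))
```

RMSE penalizes large errors more heavily than MAE, and Abs Rel normalizes each error by the true depth, which is why all three are reported together.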
Massively Multilingual Speech (MMS): Uyghur Text-to-Speech
Model Details
Usage
BibTeX citation
License
This repository contains the Uyghur (uig-script_arabic) language…
Model Trained Using AutoNLP
Problem type: Multi-class Classification
Model ID: 5521155
Validation Metrics
Loss: 1.3173143863677979
Accuracy: 0.8220706757594545
Macro F1: 0.5713688384455807
Micro F1: 0.8220706757594544
Weighted F1: 0.8217158913702755
Macro…
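The gap between the macro F1 (≈0.57) and micro F1 (≈0.82) above is typical of class imbalance: macro averages per-class F1 equally, so poorly predicted rare classes drag it down, while micro pools all decisions. A minimal sketch of the three averages on imbalanced toy labels (the function and the data are illustrative, not from the AutoNLP evaluation):

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Per-class, macro, micro, and weighted F1 from label lists."""
    labels = sorted(set(y_true) | set(y_pred))
    per_class, support = {}, Counter(y_true)
    tp_all = fp_all = fn_all = 0
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
        per_class[c] = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    macro = sum(per_class.values()) / len(labels)
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)
    weighted = sum(per_class[c] * support[c] for c in labels) / len(y_true)
    return per_class, macro, micro, weighted

# Imbalanced toy data: the dominant class "a" is predicted well,
# the rare class "b" is half missed, which drags macro F1 below micro F1.
y_true = ["a"] * 8 + ["b"] * 2
y_pred = ["a"] * 8 + ["a", "b"]
per_class, macro, micro, weighted = f1_scores(y_true, y_pred)
print(per_class, macro, micro, weighted)
```

Note that in single-label multiclass classification, micro F1 equals accuracy, which is why the card reports near-identical values for the two (0.82207…).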
Sharded BLIP-2 Model Card - flan-t5-xl
This is a sharded version of the blip2-flan-t5-xl which leverages Flan T5-xl for image-to-text tasks such as image captioning and visual…
Onegafer/glpn-nyu-finetuned-diode-230530-200638 · Depth Estimation · Updated May 30, 2023 · 28
