distilbert-base-uncased-finetuned-mnli-amazon-query-shopping
This model is a fine-tuned version of nlptown/bert-base-multilingual-uncased-sentiment on the Amazon US Customer Reviews dataset. The code for the fine-tuning process can be found
here. This model is uncased:…
Indonesian RoBERTa Base Sentiment Classifier
Indonesian RoBERTa Base Sentiment Classifier is a sentiment text-classification model based on RoBERTa. It was initialized from the pre-trained Indonesian RoBERTa Base model,…
bert-base-multilingual-uncased-sentiment
This is a bert-base-multilingual-uncased model fine-tuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish, and Italian. It predicts the sentiment of the…
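A model like this is typically used through the transformers `pipeline` API. The sketch below is illustrative, not taken from the model card: the `stars_to_sentiment` helper and its bucket boundaries are assumptions for collapsing the model's star-rating labels into coarse sentiment.

```python
def stars_to_sentiment(label: str) -> str:
    """Map an "N star(s)" label to a coarse sentiment bucket (hypothetical mapping)."""
    stars = int(label.split()[0])  # "4 stars" -> 4
    return "negative" if stars <= 2 else "neutral" if stars == 3 else "positive"

def rate_review(text: str):
    # Deferred import so the helper above stays usable without transformers installed.
    from transformers import pipeline
    clf = pipeline("sentiment-analysis",
                   model="nlptown/bert-base-multilingual-uncased-sentiment")
    label = clf(text)[0]["label"]  # e.g. "4 stars"
    return int(label.split()[0]), stars_to_sentiment(label)

# Example (downloads the model on first use):
#   rate_review("Ottimo prodotto, lo consiglio!")
```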
MiniLM: Small and Fast Pre-trained Models for Language Understanding and Generation
MiniLM is a distilled model from the paper "MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers".…
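The core idea of that paper is training the student to mimic the teacher's self-attention distributions. This is a toy pure-Python illustration of that loss (KL divergence between attention rows, averaged over query positions), not the paper's implementation; the example matrices are made up.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions, with a small epsilon for stability."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def attention_transfer_loss(teacher_attn, student_attn):
    # Row i of each matrix is the attention distribution (sums to 1) that
    # query position i places over all key positions.
    assert len(teacher_attn) == len(student_attn)
    return sum(kl_divergence(t, s)
               for t, s in zip(teacher_attn, student_attn)) / len(teacher_attn)

teacher = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]]
student = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.3, 0.2, 0.5]]
print(attention_transfer_loss(teacher, student))  # small positive value
```

Minimizing this loss pushes the student's attention maps toward the teacher's, which is what lets a much smaller model retain most of the teacher's behavior.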
Model Trained Using AutoNLP
Problem type: Multi-class Classification
Model ID: 5521155
Validation Metrics
Loss: 1.3173143863677979
Accuracy: 0.8220706757594545
Macro F1: 0.5713688384455807
Micro F1: 0.8220706757594544
Weighted F1: 0.8217158913702755
Macro…
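The large gap between macro F1 (0.571) and micro F1 (0.822) above usually signals class imbalance: rare classes score poorly and drag the macro average down, while micro F1 (which, for single-label multi-class data, equals accuracy) is dominated by the frequent classes. A minimal from-scratch sketch on a made-up imbalanced dataset shows the effect:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Per-class, macro, and micro F1 for single-label multi-class data."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    per_class = {}
    for c in labels:
        denom = 2 * tp[c] + fp[c] + fn[c]
        per_class[c] = 2 * tp[c] / denom if denom else 0.0
    macro = sum(per_class.values()) / len(labels)
    # Micro F1 pools counts across classes before computing F1.
    micro = (2 * sum(tp.values())
             / (2 * sum(tp.values()) + sum(fp.values()) + sum(fn.values())))
    return per_class, macro, micro

# Toy imbalanced data: the rare class "c" is always misclassified.
y_true = ["a"] * 8 + ["b"] * 8 + ["c"] * 2
y_pred = ["a"] * 8 + ["b"] * 8 + ["a", "b"]
per_class, macro, micro = f1_scores(y_true, y_pred)
print(per_class, macro, micro)  # micro is much higher than macro
```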
mdhugol/indonesia-bert-sentiment-classification
Text Classification · Updated Sep 14, 2021
Passage Reranking Multilingual BERT
Model description
Input: supports over 100 languages. See the list of supported languages for all that are available.
Purpose: This module takes a search…
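A reranker of this kind takes a query plus a list of candidate passages and reorders them by a relevance score. The sketch below illustrates only that interface with a lexical-overlap stand-in scorer; it is not the BERT cross-encoder, and the sample passages are invented.

```python
def score(query: str, passage: str) -> float:
    """Stand-in relevance score: Jaccard overlap of lowercased tokens."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q | p) if q | p else 0.0

def rerank(query, passages):
    """Return passages sorted by descending relevance to the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)

passages = [
    "The capital of France is Paris.",
    "Bananas are rich in potassium.",
    "Paris hosts the Louvre museum.",
]
print(rerank("capital of France", passages)[0])  # "The capital of France is Paris."
```

In the real model, `score` would be a BERT forward pass over the concatenated query-passage pair; the surrounding sort logic stays the same.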
Model Card for deberta-v3-base-prompt-injection
This model is a fine-tuned version of microsoft/deberta-v3-base on multiple combined datasets of prompt injections and normal prompts.
It aims to identify prompt injections, classifying…
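In practice such a classifier gates user input before it reaches an LLM. A minimal gating sketch follows; the `{"label": ..., "score": ...}` result shape matches the transformers pipeline output, but the label name "INJECTION" and the 0.9 threshold are assumptions here, not taken from the model card.

```python
def allow_prompt(result: dict, threshold: float = 0.9) -> bool:
    """Reject the prompt only when the classifier is confident it is an injection."""
    return not (result["label"] == "INJECTION" and result["score"] >= threshold)

print(allow_prompt({"label": "INJECTION", "score": 0.98}))  # False: blocked
print(allow_prompt({"label": "SAFE", "score": 0.99}))       # True: passed through
```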
