MobileViT (small-sized model)
MobileViT model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer by Sachin Mehta…
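A minimal image-classification sketch using the Transformers Auto classes (the apple/mobilevit-small checkpoint id is an assumption based on the card title, not stated in the text above):

from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

# Any RGB image works; this COCO image is a common example
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "apple/mobilevit-small" is assumed from the card title
processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = AutoModelForImageClassification.from_pretrained("apple/mobilevit-small")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
# Map the top logit to one of the 1,000 ImageNet-1k labels
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])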
PPO Agent playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1
using the stable-baselines3 library.
Usage (with Stable-baselines3)
A minimal loading sketch (the original card leaves this as a TODO; the repo id and filename are placeholders for the actual Hub checkpoint):

from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder repo id/filename; substitute the real checkpoint for this card
checkpoint = load_from_hub(repo_id="<user>/ppo-CartPole-v1", filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)
persian-tts-female-vits
Persian female-voice VITS model for text-to-speech.
Persian (فارسی)
Single-speaker female voice
Trained on the persian-tts-dataset-male dataset
GitHub Repo
Demo
Uses
Install dependencies:
!pip install TTS
!sudo apt-get -y install espeak-ng
Generate audio from text:
…
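The card's example is truncated above; a minimal sketch with Coqui TTS's Synthesizer API (the checkpoint and config paths are placeholders for the files in this repo):

from TTS.utils.synthesizer import Synthesizer

# Placeholder paths; point these at the checkpoint and config
# downloaded from this repository
synthesizer = Synthesizer("best_model.pth", "config.json")
wav = synthesizer.tts("سلام دنیا")  # "Hello, world" in Persian
synthesizer.save_wav(wav, "speech.wav")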
persian-tts-female-glow_tts
Persian female-voice GlowTTS model for text-to-speech.
Persian (فارسی)
Single-speaker female voice
Trained on the persian-tts-dataset-famale dataset
GitHub Repo
Uses
Install dependencies:
!pip install TTS
!sudo apt-get -y install espeak-ng
Generate audio from text:
Using the CLI (the text and paths below are placeholders; point --model_path and --config_path at the files downloaded from this repo):
!tts --text "your text here" --model_path best_model.pth --config_path config.json --out_path speech.wav
Startup Profile
The company focuses on e-commerce, banking, customer support, medical & nutrition, and IoT & robotics. It concentrates on a unified algorithm architecture to achieve human-level intelligence in language. Smarter.codes adds a layer of intelligence to applications and devices with its cloud-based machine intelligence platform. It provides…
Pretrained large language models (LLMs) boast remarkable language-processing abilities but require substantial computational resources. Binarization, which reduces model weights to a single bit, offers a solution by drastically cutting computation and memory demands. However, existing quantization techniques fall short of maintaining LLM performance at such low bit widths. This makes efficient deployment of LLMs challenging…
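As a concrete illustration of the idea (a minimal sketch of sign-plus-scale weight binarization in the XNOR-Net style, not any specific paper's method):

import torch

def binarize(w: torch.Tensor) -> torch.Tensor:
    # Keep only the sign of each weight (1 bit) plus one scale per row;
    # the scale alpha is the row's mean absolute value
    alpha = w.abs().mean(dim=1, keepdim=True)
    return alpha * w.sign()

w = torch.randn(4, 8)   # a toy weight matrix
w_bin = binarize(w)     # each row takes values {-alpha_i, +alpha_i}

Storing only the sign bits plus one scale per row is where the memory savings come from: roughly 1 bit per weight instead of 16 or 32.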
Aligning large pretrained models with human preferences has gained prominence in research as these models have grown in capability. This alignment becomes particularly challenging when larger datasets unavoidably contain poor behaviours. To address this issue, reinforcement learning from human feedback (RLHF) has become popular. RLHF approaches use human preferences…
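To make "use human preferences" concrete: the standard first step of RLHF trains a reward model on pairwise comparisons with a Bradley-Terry loss. A minimal PyTorch sketch (the scores below are illustrative stand-ins for a reward model's outputs):

import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry model: P(chosen beats rejected) = sigmoid(r_chosen - r_rejected);
    # minimizing the negative log-likelihood teaches the reward model to
    # score human-preferred responses higher than rejected ones
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Illustrative reward-model scores for a batch of three response pairs
r_chosen = torch.tensor([1.2, 0.3, 0.8])
r_rejected = torch.tensor([0.4, 0.5, -0.1])
loss = preference_loss(r_chosen, r_rejected)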
