mlagents-snowballfight-1vs1-ppo ☃️
This is a saved model of a PPO 1vs1 agent playing Snowball Fight.
Source link
PPO Agent playing CartPole-v1
Usage (with Stable-baselines3)
This is a trained model of a PPO agent playing CartPole-v1
using the stable-baselines3 library.…
Snowball Fight ☃️, a multi-agent environment for ML-Agents made by Hugging Face
The Environment
Observation Space
Action Space (Discrete)
Agent Reward Function (dependent):
Addendum
How to use it
Set-up…
Model Card for Jat
Model Details
Model Description
Model Sources
Training
How to Get Started with the Model
This is a…
Play PongNoFrameskip-v4 with MuZero Policy
Model Description
Model Usage
Install the Dependencies
Git Clone from Huggingface and Run the Model
Run Model by Using Huggingface_ding
Model Training
Train the…
An APPO model trained on the doom_health_gathering_supreme environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
Downloading…
poca Agent playing SoccerTwos
Usage (with ML-Agents)
Resume the training
Watch your Agent play
This is a trained model…
A2C Agent playing PandaReachDense-v3
Usage (with Stable-baselines3)
This is a trained model of an A2C agent playing PandaReachDense-v3
using the stable-baselines3 library.…
