PPO Agent playing PongNoFrameskip-v4
Evaluation Results
Usage (with Stable-baselines3)
Training Code
This is a trained model of a PPO agent playing…
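All of the agents in these cards were trained with PPO. As background (a standard-PPO illustration, not code taken from any of these cards), the per-sample clipped surrogate objective at the heart of PPO can be sketched in a few lines of plain Python:

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-sample PPO clipped surrogate objective.

    ratio: pi_new(a|s) / pi_old(a|s) for the sampled action.
    advantage: estimated advantage A(s, a).
    eps: clip range (0.2 is a common default).
    """
    clipped_ratio = max(1.0 - eps, min(1.0 + eps, ratio))
    # Pessimistic bound: take the smaller of the unclipped and clipped terms,
    # which removes the incentive to move the policy far from the old one.
    return min(ratio * advantage, clipped_ratio * advantage)

# A large ratio with a positive advantage is capped at (1 + eps) * advantage.
print(ppo_clip_objective(1.5, 1.0))  # → 1.2
```

In training, the negative mean of this objective over a minibatch is minimized by gradient descent; the clipping is what lets PPO take multiple optimization epochs per batch of rollouts.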
PPO Agent playing BreakoutNoFrameskip-v4
Evaluation Results
Usage (with Stable-baselines3)
Training Code
This is a trained model of a PPO agent playing…
ppo-Walker2DBulletEnv-v0
Usage (with Stable-baselines3)
Evaluation Results
This is a pre-trained model of a PPO agent playing Walker2DBulletEnv-v0 using the stable-baselines3 library.
Usage (with…
mlagents-snowballfight-1vs1-ppo ☃️
This is a saved model of a PPO 1vs1 agent playing Snowball Fight.
Source link
PPO Agent playing CartPole-v1
Usage (with Stable-baselines3)
This is a trained model of a PPO agent playing CartPole-v1 using the stable-baselines3 library.…
Snowball Fight ☃️, a multi-agent environment for ML-Agents made by Hugging Face
The Environment
Observation Space
Action Space (Discrete)
Agent Reward Function (dependent):
Addendum
How to use it
Set-up…
Model Card for Jat
Model Details
Model Description
Model Sources
Training
How to Get Started with the Model
This is a…
Play PongNoFrameskip-v4 with MuZero Policy
Model Description
Model Usage
Install the Dependencies
Git Clone from Huggingface and Run the Model
Run Model by Using Huggingface_ding
Model Training
Train the…
