This project is a fork of an original repository that provided the base environment for Mario Kart 64. The original repository can be found here: Original Repository.
The environment in the original repository was based on Python 2, which restricts the use of modern reinforcement learning libraries such as the latest versions of PyTorch and TensorFlow. Directly updating the Python version in the Docker container led to execution issues. Therefore, this repository maintains the original Python 2 environment and uses a socket-based approach to interface with a Python 3 environment for executing reinforcement learning programs.
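The socket bridge between the two Python versions can be illustrated with a small sketch. Everything below is an assumption for illustration only — the message format (newline-delimited JSON), the toy `serve_once` server standing in for the Python 2 side, and the `send_action` helper are not the repository's actual protocol; see `SocketWrapper.py` for the real implementation.

```python
import json
import socket
import threading

def serve_once(server):
    """Toy stand-in for the Python 2 side: accept one connection and echo
    the received action back with a dummy observation and reward."""
    conn, _ = server.accept()
    with conn:
        request = json.loads(conn.makefile().readline())
        reply = {"action": request["action"], "obs": [0, 0, 0], "reward": 0.0}
        conn.sendall((json.dumps(reply) + "\n").encode())

def send_action(port, action):
    """Python 3 side: send one action and block for the environment's reply."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall((json.dumps({"action": action}) + "\n").encode())
        return json.loads(conn.makefile().readline())

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))      # OS-assigned free port for the demo
    port = server.getsockname()[1]
    server.listen(1)
    threading.Thread(target=serve_once, args=(server,), daemon=True).start()
    print(send_action(port, 3))        # one request/reply round trip
    server.close()
```

The key point is that the Python 2 container and the Python 3 training process only share a TCP socket, so each side can run whatever interpreter and libraries it needs.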
The easiest, cleanest, most consistent way to get up and running with this project is via Docker. These instructions will focus on that approach.
Pre-requisites:
- Docker & docker-compose (if you are using the Compose plugin for Docker, replace `docker-compose` with `docker compose` in the commands below).
- Ensure you have a copy of the ROMs you wish to use, and place them inside `gym_mupen64plus/ROMs`.
Steps:

- Clone the repository and `cd` into the root folder of the project.

- Build the Docker image with the following command:

  ```shell
  docker build -t bz/gym-mupen64plus:0.0.1 .
  ```
- Please note that, in order to enable multiple instances of the environment, the original docker-compose file is split into two parts: a base file (`docker-compose.yml`) and override files (e.g. `instance1.yml`). The following command gives an example of instantiating an environment:

  ```shell
  docker-compose -p agent1 -f docker-compose.yml -f instance1.yml up --build -d
  ```

  This will start the following 4 containers:

  - `xvfbsrv` runs XVFB
  - `vncsrv` runs a VNC server connected to the Xvfb container
  - `agent` runs the example python script
  - `emulator` runs the mupen64plus emulator
  Note:

  - The `-p` flag sets the name of this environment instance.
  - Before creating a new instance, be sure to create an override file to modify the port numbers (see `instance1.yml` for more details).
  - Make sure that the `docker-compose down` command given below also matches your instance name and override file name.
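  An override file for a second instance might look like the sketch below. The service names come from the container list above, but the exact keys and port numbers are illustrative guesses — mirror the structure of the real `instance1.yml` rather than copying this verbatim.

  ```yaml
  # instance2.yml -- hypothetical override for a second instance.
  # Keys and port numbers are illustrative; follow instance1.yml.
  services:
    vncsrv:
      ports:
        - "5902:5900"   # expose VNC on a different host port
    agent:
      ports:
        - "8083:8082"   # shift the socket port used for training
  ```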
- Under the root of the repository, there is a Python 3 file `SocketWrapper.py`. This file contains the wrapper for our RL training. First, create a virtual environment for the project:

  ```shell
  python -m venv RL_env
  ```

  Activate the environment:

  ```shell
  source RL_env/bin/activate
  ```

  Install the required packages:

  ```shell
  pip install -r requirements.txt
  ```

  In your training script:

  ```python
  from SocketWrapper import SocketWrapper

  env = SocketWrapper()
  ```
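A minimal random-agent rollout could then be sketched as below. Note the Gym-style `reset()`/`step(action)` interface, the action space size, and the stub `SocketWrapper` class are assumptions made for a self-contained example — check `SocketWrapper.py` for the wrapper's actual method names and replace the stub with the real import.

```python
import random

class SocketWrapper:
    """Stand-in for the repository's wrapper so this sketch runs on its own;
    in practice use `from SocketWrapper import SocketWrapper`. The Gym-style
    reset()/step() interface shown here is an assumption."""
    def reset(self):
        return [0.0] * 4                        # dummy observation
    def step(self, action):
        obs = [random.random() for _ in range(4)]
        return obs, random.random(), random.random() < 0.1, {}

def run_episode(env, max_steps=200):
    """Roll out one episode with uniformly random discrete actions."""
    obs = env.reset()
    total_reward, steps, done = 0.0, 0, False
    while not done and steps < max_steps:
        action = random.randrange(4)            # hypothetical action space size
        obs, reward, done, _ = env.step(action)
        total_reward += reward
        steps += 1
    return total_reward, steps

if __name__ == "__main__":
    ret, n = run_episode(SocketWrapper())
    print("episode return %.2f over %d steps" % (ret, n))
```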
- Then you can use your favorite VNC client (e.g., VNC Viewer) to connect to `localhost` and watch the XVFB display in real time. Note that running the VNC server and client can cause some performance overhead.

  For VSCode & TightVNC users:

  - Forward port 5901/5902 to the desired port on the local host.
  - Open TightVNC and connect to `localhost::desired_port_num`, e.g. `localhost::5901`.
- To shut down the docker-compose containers (e.g., following the naming convention above, with `agent1` as the instance name and `instance1.yml` as the override file), use the following command:

  ```shell
  docker-compose -p agent1 -f docker-compose.yml -f instance1.yml down
  ```
  Note:

  - To create another instance, you can open another tmux window and repeat the steps above with a different instance name and override file.
Additional Notes:

- To view the status (output logs) of a single compose instance, you can use the following commands (suppose our instance name is `agent1`):

  ```shell
  docker-compose -p agent1 logs xvfbsrv
  docker-compose -p agent1 logs vncsrv
  docker-compose -p agent1 logs emulator
  docker-compose -p agent1 logs agent
  ```
- SAC and BC Training Script: a script to train Soft Actor-Critic (SAC) and Behavior Cloning (BC) agents.
- Grad-CAM Visualization: Tools to visualize the learned features using Grad-CAM.
This repository enhances the Mario Kart 64 Gym Environment with modern reinforcement learning capabilities. Follow the setup instructions to get started with training and visualizing your own AI agents in Mario Kart 64.