https://github.com/openai/retro:
> Gym Retro lets you turn classic video games into Gym environments for reinforcement learning and comes with integrations for ~1000 games. It uses various emulators that support the Libretro API, making it fairly easy to add new emulators.
.nes is listed in the supported ROM types: https://retro.readthedocs.io/en/latest/integration.html#supp...
> Integrating a Game: To integrate a game you need to define a done condition and a reward function. The done condition lets Gym Retro know when to end a game session, while the reward function provides a simple numeric goal for machine learning agents to maximize.
> To define these, you find variables from the game’s memory, such as the player’s current score and lives remaining, and use those to create the done condition and reward function. An example done condition is when the `lives` variable is equal to 0, an example reward function is the change in the `score` variable.
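The score/lives example above can be sketched in plain Python. This is an illustrative sketch, not Gym Retro's actual implementation (the real integration declares these in JSON metadata files); the variable names `score` and `lives` are taken from the quoted docs.

```python
# Illustrative sketch: deriving a reward and a done condition from two
# successive snapshots of game-memory variables, as described above.
# Not Gym Retro's real code -- just the idea behind it.

def step_signals(prev_vars, curr_vars):
    """Return (reward, done) from consecutive memory snapshots.

    reward: change in the `score` variable between frames.
    done:   True once the `lives` variable reaches 0.
    """
    reward = curr_vars["score"] - prev_vars["score"]
    done = curr_vars["lives"] == 0
    return reward, done

# Example: the agent gained 100 points and still has lives left.
prev = {"score": 400, "lives": 3}
curr = {"score": 500, "lives": 3}
print(step_signals(prev, curr))  # → (100, False)
```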
PPO (Proximal Policy Optimization) with openai/baselines: https://retro.readthedocs.io/en/latest/getting_started.html#...
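For context, the core of PPO is its clipped surrogate objective; here is a minimal single-sample sketch (variable names are mine, not from baselines):

```python
# Minimal sketch of PPO's clipped surrogate objective for one
# (state, action) sample. Illustrative only; real implementations
# average this over batches and add value/entropy terms.

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """ratio = pi_new(a|s) / pi_old(a|s); advantage = estimated A(s, a).

    Returns min(r*A, clip(r, 1-eps, 1+eps)*A), which PPO maximizes
    to keep each policy update conservative.
    """
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped * advantage)

# A large ratio with positive advantage is clipped at 1 + eps:
print(ppo_clip_objective(1.5, 2.0))  # → 2.4
```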
MuZero: https://en.wikipedia.org/wiki/MuZero
MuZero-unplugged with PyTorch: https://github.com/DHDev0/Muzero-unplugged
Farama-Foundation/Gymnasium is a fork of OpenAI/gym and it has support for additional environments like MuJoCo: https://github.com/Farama-Foundation/Gymnasium#environments
Farama-Foundation/MO-Gymnasium: "Multi-objective Gymnasium environments for reinforcement learning": https://github.com/Farama-Foundation/MO-Gymnasium
There are sysdig chisels that reference gdb.
DOTA also had decent API links, as far as I recall.
Also, there's a lot of research on older games, see https://github.com/openai/retro for interfacing with NES/SNES/GBA/etc games.