ALE Release Notes

ALE v0.10.1

Released on 2024-09-28 - GitHub - PyPI

Reverted the requirements change that pinned numpy < 2.0; the requirement is now numpy > 1.20. Also added support for building from the source distribution, tar.gz (though this is not recommended).

v0.10.0: ALE v0.10

Released on 2024-09-24 - GitHub - PyPI

In v0.10, ALE now has its own dedicated website, https://ale.farama.org/, with the Atari documentation moved over from Gymnasium.

We have moved the project's main code from src into src/ale to make it easier to incorporate ALE into C++ projects. In the Python API, we have updated get_keys_to_action to work with gymnasium.utils.play by changing the no-op key from None to the "e" key.
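
As a quick sketch of the new mapping (assuming ale-py >= 0.10 and Gymnasium are installed; gymnasium.utils.play requires an rgb_array render mode):

import gymnasium as gym
import ale_py
from gymnasium.utils.play import play

gym.register_envs(ale_py)

# play() queries the environment's get_keys_to_action mapping;
# with ALE v0.10, pressing the "e" key issues the no-op action.
env = gym.make("ALE/Breakout-v5", render_mode="rgb_array")
play(env)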

Furthermore, we have updated the API to support continuous actions by @jjshoots and @psc-g.

Previously, users could interact with the ALE interface only through discrete actions linked to the joystick controls, i.e.:

  • All left actions (LEFTDOWN, LEFTUP, LEFT...) -> paddle left max
  • All right actions (RIGHTDOWN, RIGHTUP, RIGHT...) -> paddle right max
  • Up... etc.
  • Down... etc.

However, for games using paddles, this loses the ability to specify non-maximal values for moving left or right. Therefore, this release adds the ability to use continuous actions to both the Python and C++ interfaces (note that this only affects environments with paddles; other environments cannot make use of this change).

C++ interface changes

Old Discrete ALE interface

reward_t ALEInterface::act(Action action)

New Mixed Discrete-Continuous ALE interface

reward_t ALEInterface::act(Action action, float paddle_strength = 1.0)

Games where the paddle is not used simply ignore the paddle_strength parameter.
This mirrors the real-world scenario of having a paddle connected to a game that does not react when the paddle is turned.
This maintains backwards compatibility.

Python interface changes

Old Discrete ALE Python Interface

ale.act(action: int)

New Mixed Discrete-Continuous ALE Python Interface

ale.act(action: int, strength: float = 1.0)
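
For example, driving the interface directly (a minimal sketch; the ROM path is a placeholder):

from ale_py import ALEInterface

ale = ALEInterface()
ale.loadROM("breakout.bin")  # placeholder path to a supported ROM

action = ale.getLegalActionSet()[0]
reward = ale.act(action)       # discrete call, backwards compatible
reward = ale.act(action, 0.5)  # same action at half paddle strength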

The continuous action space is implemented at the Python level within the Gymnasium environment.

if continuous:
    # action is expected to be a [3,] array of floats:
    # (radius, theta, fire), with the joystick position given in polar coordinates
    x, y = action[0] * np.cos(action[1]), action[0] * np.sin(action[1])
    action_idx = self.map_action_idx(
        left_center_right=(
            -int(x < self.continuous_action_threshold)
            + int(x > self.continuous_action_threshold)
        ),
        down_center_up=(
            -int(y < self.continuous_action_threshold)
            + int(y > self.continuous_action_threshold)
        ),
        fire=(action[-1] > self.continuous_action_threshold),
    )
    # the second element of the action array is forwarded as the paddle strength
    ale.act(action_idx, action[1])
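
As a usage sketch of the continuous mode (this assumes the Gymnasium environment accepts continuous and continuous_action_threshold keyword arguments, as described above):

import gymnasium as gym
import ale_py
import numpy as np

gym.register_envs(ale_py)

env = gym.make("ALE/Breakout-v5", continuous=True)
obs, info = env.reset()

# a continuous action: (radius, theta, fire)
action = np.array([0.8, np.pi, 0.0], dtype=np.float32)
obs, reward, terminated, truncated, info = env.step(action)
env.close()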

Full Changelog: v0.9.1...v0.10.0

ALE v0.9.1

Released on 2024-08-01 - GitHub - PyPI

This release adds support for NumPy 2.0 by updating the pybind11 version used to compile the wheels to 2.13.1; see #535 for the changes.

We have also added support for compiling against a user-provided pybind11 version if one is already installed.

Full Changelog: v0.9.0...v0.9.1

ALE v0.9.0

Released on 2024-05-20 - GitHub - PyPI

Previously, ALE implemented only a Gym-based environment; however, Gym is no longer maintained (the last commit was 18 months ago). We have updated ale-py to use Gymnasium >= 1.0.0a1 (a maintained fork of Gym) as the sole backend environment implementation. For more information on Gymnasium's API, see their introduction page.

import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # optional: importing ale_py already registers the environments, but this keeps IDEs from flagging the import as unused

env = gym.make("ALE/Pong-v5", render_mode="human")

obs, info = env.reset()
episode_over = False
while not episode_over:
    action = policy(obs)  # replace with actual policy
    obs, reward, terminated, truncated, info = env.step(action)
    episode_over = terminated or truncated
env.close()

An important change in this update is that the Atari ROMs are packaged within the PyPI installation, so users no longer require pip install "gym[accept-rom-license]" (AutoROM) or ale-import-roms for downloading or loading ROMs. This should significantly simplify installing Atari for users. Users who wish to load ROMs from an alternative folder can use the ALE_ROM_DIR system environment variable to specify its directory.
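
For example (a sketch; whether the variable must be set before importing ale_py is an assumption, so it is set first here):

import os
os.environ["ALE_ROM_DIR"] = "/path/to/roms"  # folder containing *.bin ROM files

import gymnasium as gym
import ale_py

gym.register_envs(ale_py)
env = gym.make("ALE/Pong-v5")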

Importantly, Gymnasium 1.0.0 removes the registration plugin system that ale-py utilised to register the Atari environments behind the scenes. As a result, projects will need to import ale_py to register all the Atari environments before an Atari environment can be created with gymnasium.make. We understand this will cause annoyance to some users; however, the previous method brought significant complexity behind the scenes that the development team believed caused more issues than it helped.

Other changes

  • Added Python 3.12 support.
  • Replaced interactive exit with sys.exit (#498)
  • Fixed C++ documentation example links (#501)
  • Added support for gcc 13 (#503)
  • Unpinned the cmake dependency and removed wheel from the build system (#493)
  • Added missing imports for cstdint (#486)
  • Allowed installing without git (#492)
  • Updated to require importlib-resources for Python < 3.9 (#491)

Full Changelog: v0.8.1...v0.9.0

v0.8.1: Arcade Learning Environment 0.8.1

Released on 2023-02-17 - GitHub - PyPI

Added

  • Added type stubs for the native ALE Python module generated via pybind11. You'll now get type hints in your IDE.

Fixed

  • Fixed render_mode attribute on legacy Gym environment (@younik)
  • Fixed a bug in parsing ROM names containing numbers, e.g., TicTacToe3D or Pitfall2
  • Changed the ROM identifier of VideoChess & VideoCube to match VideoCheckers & VideoPinball.
    Specifically, the environment ID changed from Videochess -> VideoChess and Videocube -> VideoCube.
    Most ROMs already had the correct IDs (video_chess.bin and video_cube.bin), but for those that
    didn't, you can simply run ale-import-roms, which will automatically correct this for you.
  • Reverted back to manylinux2014 (glibc 2.17) to better support older operating systems.

v0.8.0: Arcade Learning Environment 0.8.0

Released on 2022-09-09 - GitHub - PyPI

Added

  • Added compliance with the Gym v26 API. This includes multiple breaking changes to the Gym API. See the Gym release for additional information.
  • Reworked the ROM plugin API resulting in reduced startup time when importing ale_py.roms.
  • Added a truncation API to the ALE interface to query whether an episode was truncated or terminated (ale.game_over(with_truncation=true/false) and ale.game_truncated()); see the sketch after this list
  • Added proper Gym truncation on max episode frames. This no longer relies on the TimeLimit wrapper with the new truncation API in Gym v26.
  • Added a setting for truncating on loss-of-life.
  • Added a setting for clamping rewards.
  • Added const keywords to attributes in ale::ALEInterface (#457) (@AlessioZanga).
  • Added explicit exports via __all__ in ale-py so linting tools can better detect exports.
  • Added builds for Python 3.11.
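
A minimal sketch of the truncation API from Python (the ROM path is a placeholder, and the frame limit must be set before loading the ROM):

from ale_py import ALEInterface

ale = ALEInterface()
ale.setInt("max_num_frames_per_episode", 10_000)
ale.loadROM("breakout.bin")  # placeholder path to a supported ROM

terminated = ale.game_over(with_truncation=False)   # true termination only
truncated = ale.game_truncated()                    # frame limit reached?
episode_over = ale.game_over(with_truncation=True)  # terminated or truncated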

Fixed

  • Moved the Gym environment entrypoint from gym.envs.atari:AtariEnv to ale_py.env.gym:AtariEnv. This resolves many issues with the namespace package but does break backwards compatibility for some Gym code that relied on the entry point being prefixed with gym.envs.atari.

v0.7.5: Arcade Learning Environment 0.7.5

Released on 2022-04-18 - GitHub - PyPI

Added

  • Added validation for Gym's frameskip values.
  • Made ROM loading more robust with module-level __getattr__ and __dir__ (see the sketch after this list).
  • Added py.typed to the Python module's root directory to support type checkers.
  • Bumped SDL to v2.0.16.
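
The module-level __getattr__ and __dir__ referenced above are the PEP 562 hooks; a generic sketch of the pattern (the ROM table here is illustrative, not the actual ale_py.roms source):

# illustrative module, e.g., roms.py
_ROMS = {"Breakout": "breakout.bin", "Pong": "pong.bin"}

def __getattr__(name):
    # called only when normal module attribute lookup fails
    try:
        return _ROMS[name]
    except KeyError:
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}") from None

def __dir__():
    # lets dir(module) and tab completion list the available ROMs
    return sorted(_ROMS)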

Fixed

  • Fixed Gym render mode metadata. (@vwxyzjn)
  • Fixed Gym warnings about seeding.hash_seed and random.randint.
  • Fixed build infrastructure issues from the migration to setuptools>=61.

Removed

  • Removed Gym's .render(mode='human'). Gym now uses the render_mode keyword argument in the environment constructor.

v0.7.4: Arcade Learning Environment 0.7.4

Released on 2022-02-17 - GitHub - PyPI

Added

  • Proper C++ namespacing for the ALE and Stella (@tuero)
  • vcpkg manifest. You can now install dependencies via CMake.
  • Support for the new Gym (0.22) reset API, i.e., the seed and return_info keyword arguments.
  • Moved cibuildwheel config from GitHub Actions to pyproject.toml.

Fixed

  • Fixed a bug with the terminal signal in ChopperCommand #434
  • Fixed warnings with importlib-metadata on Python < 3.9.
  • Reverted the Gym v5 defaults to align with the post-DQN literature. That is, moving from a frameskip of 5 -> 4, and full action set -> minimal action set.

v0.7.3: Arcade Learning Environment 0.7.3

Released on 2021-11-02 - GitHub - PyPI

This update includes a minor addition that allows users to load ROMs from a directory specified by the environment variable ALE_PY_ROM_DIR.

Added

  • Environment variable ALE_PY_ROM_DIR which if specified will search for ROMs in ${ALE_PY_ROM_DIR}/*.bin. (@joshgreaves)

v0.7.2: Arcade Learning Environment 0.7.2

Released on 2021-10-07 - GitHub - PyPI

This release includes a bug fix for Windows and Python 3.10 wheels. Note that we no longer build wheels for Python 3.6 which is considered end of life as of December 2021.

Added

  • Package Tetris by Colin Hughes. This ROM is made publicly available by the author. This is useful for other open-source packages to be able to unit test against the ALE. (@tfboyd)
  • Python 3.10 prebuilt wheels

Fixed

  • Fixed an issue with isSupportedROM on Windows which was causing incorrect ROM hashes.

Removed

  • Python 3.6 prebuilt wheels

v0.7.1: Arcade Learning Environment 0.7.1

Released on 2021-09-29 - GitHub - PyPI

This release adds some niceties around Gym as well as expands upon some deprecation warnings which may have been confusing. The biggest change in this release is that the Gym environment is now installed to gym.envs.atari:AtariEnv, which is backwards compatible with the previous entry point.

Furthermore, users no longer need to import the ALE when constructing a *-v5 environment. We now use the new Gym environment plugin system for all environments, i.e., v0, v4, v5. Additionally, Gym adds new tools for downloading/installing ROMs. For more info, check out Gym's release notes.

Added

  • Added ale-import-roms --import-from-pkg {pkg}
  • Use gym.envs.atari as a namespace package to maintain backwards compatibility with the AtariEnv entry point.
  • The ALE now uses Gym's environment plugin system in gym>=0.21 (openai/gym#2383, openai/gym#2409, openai/gym#2411). Users no longer are required to import ale_py to use a -v5 environment.

Changed

  • Silenced the unsupported ROMs warning behind an ImportWarning. To view these warnings you should now supply the environment variable PYTHONWARNINGS=default::ImportWarning:ale_py.roms.
  • Reworked ROM error messages to provide more helpful suggestions.
  • General metadata changes to the Python package.

Fixed

  • Added missing std:: name qualifier when enabling SDL (@anadrome)
  • Fixed mandatory kwarg for gym.envs.atari:AtariEnv.clone_state.

v0.7.0: Arcade Learning Environment 0.7.0

Released on 2021-09-14 - GitHub - PyPI

This release focuses on consolidating the ALE into a cohesive package to reduce fragmentation across the community. To this end, the ALE now distributes native Python wheels, replaces the legacy Atari wrapper in OpenAI Gym, and includes additional features like out-of-the-box SDL support for visualizing your agents.

For a full explainer see our release blog post: https://brosa.ca/blog/ale-release-v0.7.

Added

  • Native support for OpenAI Gym
  • Native Python interface using pybind11 which results in a speedup for Python workloads as well as proper support for objects like ALEState
  • Python ROM management, e.g., ale-import-roms
  • PyPI Python wheels published as ale-py + we distribute SDL2 for out-of-the-box visualization + audio support
  • isSupportedROM(path) to check if a ROM file is supported by the ALE
  • Added new games: Atlantis2, Backgammon, BasicMath, Blackjack, Casino, Crossbow, DarkChambers, Earthworld, Entombed, ET, FlagCapture, Hangman, HauntedHouse, HumanCannonball, Klax, MarioBros, MiniatureGolf, Othello, Pacman, Pitfall2, SpaceWar, Superman, Surround, TicTacToe3D, VideoCheckers, VideoChess, VideoCube, WordZapper (thanks @tkoeppe)
  • Added (additional) mode/difficulty settings for: Lost Luggage, Turmoil, Tron Dead Discs, Pong, Mr. Do, King Kong, Frogger, Adventure (thanks @tkoeppe)
  • Added cloneState(include_rng) which will eventually replace cloneSystemState (behind the scenes cloneSystemState is equivalent to cloneState(include_rng=True)).
  • Added setRAM which can be useful for modifying the environment, e.g., learning a causal model over RAM transitions, altering game dynamics, etc. (see the sketch after this list)
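
A minimal sketch of the two additions above (the ROM path is a placeholder):

from ale_py import ALEInterface

ale = ALEInterface()
ale.loadROM("breakout.bin")  # placeholder path to a supported ROM
noop = ale.getLegalActionSet()[0]  # first legal action (no-op)

snapshot = ale.cloneState(include_rng=True)  # equivalent to the old cloneSystemState
ale.setRAM(0, 255)          # overwrite the first byte of Atari RAM
ale.act(noop)               # step the emulator
ale.restoreState(snapshot)  # rewind the emulator (and RNG) state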

Changed

  • Rewrote SDL support using SDL2 primitives
  • SDL2 now renders every frame independent of frameskip
  • SDL2 renders at the proper ROM framerate (added benefit of audio sync support)
  • Rewrote entire CMake infrastructure which now supports vcpkg natively
  • C++ minimum version is now C++17
  • Changed all relative imports to absolute imports
  • Switched from Travis CI to GitHub Actions
  • Allowed the paddle controller's min/max setting to be configured
  • More robust version handling between C++ & Python distributions
  • Updated Markdown documentation to replace TeX manual

Fixed

  • Fixed bankswitching type for UA cartridges
  • Fixed a SwapPort bug in Surround
  • Fixed multiple bugs in handling invalid ROM files (thanks @tkoeppe)
  • Fixed initialization of TIA static data to make it thread safe (thanks @tkoeppe)
  • Fixed RNG initialization; this was one of the last barriers to making the ALE fully deterministic. The ALE is now fully deterministic.

Removed

  • Removed FIFO interface
  • Removed RL-GLUE support
  • Removed ALE CLI interface
  • Removed Java interface
  • Removed ALEInterface::load(), ALEInterface::save(). If you require this stack functionality it's easy to implement on your own using ALEInterface::cloneState(include_rng)
  • Removed os-dependent filesystem code in favour of C++17 std::filesystem
  • Removed human control mode
  • Removed old makefile build system in favour of CMake
  • Removed bspf
  • Removed unused controller types: Driving, Booster, Keyboard
  • Removed AtariVox
  • Removed Stella types (e.g., Array) in favour of STL types
  • Removed the Stella debugger
  • Removed the Stella CheatManager
  • Lots of code cleanups conforming to best practices (thanks @tkoeppe)

v0.6.1: Arcade Learning Environment 0.6.1

Released on 2019-11-21 - GitHub - PyPI

This collects a number of minor changes from 0.6.0, spanning about two years.

Changed

  • Speedup of up to 30% by optimizing variable types (@qstanczyk)

Fixed

  • Fixed switch fall-through with Gravitar lives detection (@lespeholt)

v0.6.0: Arcade Learning Environment 0.6.0

Released on 2017-12-01 - GitHub - PyPI

This is the first release of a brand new version of the ALE, including modes, difficulties, and a dozen new games.

Added

  • Support for modes and difficulties in Atari games (@mcmachado)
  • Frame maxpooling as a post-processing option (@skylian)
  • Added support for: Turmoil, Koolaid, Tron Deadly Discs, Mr. Do, Donkey Kong, Keystone Kapers, Frogger, Sir Lancelot, Laser Gates, Lost Luggage
  • Added MD5 list of supported ROMs

Changed

  • Disabled color averaging by default
  • Replaced TinyMT with C++11 random

Fixed

  • Fixed old color averaging scheme (PR #181)
  • Fixed minimal action set in Pong
  • Fixed termination issues in Q*Bert

v0.5.2: Arcade Learning Environment 0.5.2

Released on 2017-08-20 - GitHub - PyPI

This is a minor release of ALE 0.5, meant to reflect a number of bug fixes and PRs that have been added over the last two years. Note that a new major release (0.6) should be released within the next three months.

Added

  • Routines for ALEState serialization (@Jragonmiris).

Fixed

  • Fix RNG issues introduced in 0.5.0.
  • Additional bug fixes.

v0.5.1: Arcade Learning Environment 0.5.1

Released on 2015-10-08 - GitHub - PyPI

This is the official release of the Arcade Learning Environment, version 0.5.1. This version sees bug fixes from 0.5.0, additions to the C++ and Python interfaces, and additional error checking. The interfaces should be considered mostly stable, but are likely to see a few tweaks before version 1.0.

Added

  • Added RNG serialization capability.

Changed

  • Refactored Python getScreenRGB to return unpacked RGB values (@spragunr).
  • Set the default value of the color_averaging flag to true. It was true by default in previous versions but was changed in 0.5.0; reverted for backward compatibility.

Fixed

  • Bug fixes from ALE 0.5.0.

v0.5.0: Arcade Learning Environment 0.5.0

Released on 2015-06-23 - GitHub - PyPI

This is the official release of the Arcade Learning Environment, version 0.5.0. This version sees a major code overhaul, including simpler installation, better interfaces, visualization, and optional controller stochasticity. The interfaces should be considered mostly stable, but may see a few tweaks before version 1.0.

Added

  • Added action_repeat_stochasticity.
  • Added sound playback, visualization.
  • Added screen/sound recording ability.
  • CMake now available.
  • Incorporated Benjamin Goodrich's Python interface.
  • Added examples for shared library, Python, fifo, RL-Glue interfaces.
  • Incorporated Java agent into main repository.

Changed

  • Better ALEInterface.
  • Many other changes.

Fixed

  • Some game fixes.

Removed

  • Removed internal controller, now superseded by shared library interface.
  • Removed the following command-line flags: 'output_file', 'system_reset_steps', 'use_environment_distribution', 'backward_compatible_save', internal agent flags
  • The flag 'use_starting_actions' was removed and internally its value is always 'true'.
  • The flag 'disable_color_averaging' was renamed to 'color_averaging' and its default value is false.