pantheonrl.common.multiagentenv.DummyEnv
- class DummyEnv(base_env, agent_ind, extractor=<function extract_obs>)[source]
Bases: Env
Environment representing an interface for single-agent RL algorithms that assume access to a gym environment.
In its basic use, it just defines the observation and action spaces. However, it may also be used directly to run a single-agent RL algorithm.
Warning
Use caution when trying to directly train a policy on this environment. You must create a separate thread and manage potential deadlocks. If you are using the SB3 algorithms, we strongly advise using our OnPolicyAgent and OffPolicyAgent classes instead to avoid deadlocks.
- Parameters:
base_env (Env) – The base MultiAgentEnv
agent_ind (int) – The player number in the larger environment
extractor (Callable[[Observation], Any]) – Function to call to process the Observation into a usable value. By default, transforms the Observation into a numpy array of the partial observation.
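For reference, the recommended pattern from the PantheonRL README wraps the DummyEnv of a partner player in an OnPolicyAgent rather than training on it directly. The sketch below follows that example and assumes the Overcooked example environment and the getDummyEnv helper described in the README are available; exact module paths and the gym/gymnasium import may differ across PantheonRL versions.

import gym
from stable_baselines3 import PPO
from pantheonrl.common.agents import OnPolicyAgent

# Example multi-agent environment from the PantheonRL README (assumed installed).
env = gym.make('OvercookedMultiEnv-v0', layout_name='simple')

# The partner trains through an OnPolicyAgent wrapped around the DummyEnv for
# player 1, instead of stepping the DummyEnv directly (see the warning above).
partner = OnPolicyAgent(PPO('MlpPolicy', env.getDummyEnv(1), verbose=1))
env.add_partner_agent(partner)

# The ego agent trains on the multi-agent environment itself.
ego = PPO('MlpPolicy', env, verbose=1)
ego.learn(total_timesteps=10000)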
Methods
- close() – After the user has finished using the environment, close contains the code necessary to "clean up" the environment.
- get_wrapper_attr(name) – Gets the attribute name from the environment.
- render() – Compute the render frames as specified by render_mode during the initialization of the environment.
- reset(*, seed, options) – Resets the environment to an initial internal state, returning an initial observation and info.
- step(action) – Run one timestep from the perspective of the agent.
Attributes
- metadata
- np_random – Returns the environment's internal _np_random that if not set will initialise with a random seed.
- render_mode
- reward_range
- spec
- unwrapped – Returns the base non-wrapped environment.
- action_space
- observation_space
- close()[source]
After the user has finished using the environment, close contains the code necessary to “clean up” the environment.
This is critical for closing rendering windows, database or HTTP connections. Calling close on an already closed environment has no effect and won't raise an error.
- get_wrapper_attr(name)
Gets the attribute name from the environment.
- Parameters:
name (str) –
- Return type:
Any
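As a small illustration (assuming a gymnasium version that provides Env.get_wrapper_attr; the attribute names below are only examples), the lookup is by name and, when called on a wrapped environment, searches through the wrapper stack:

# Look up attributes by name, even when the environment is wrapped.
metadata = env.get_wrapper_attr("metadata")
render_mode = env.get_wrapper_attr("render_mode")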
- property np_random: Generator
Returns the environment's internal _np_random that if not set will initialise with a random seed.
- Returns:
Instances of np.random.Generator
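As a quick illustration of the seeded generator (shown on a plain gymnasium environment, since a DummyEnv should not normally be reset or stepped directly, per the warning above):

import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=42)             # seeds env.np_random
sample = env.np_random.normal(size=3)      # reproducible draw from the internal Generator
print(type(env.np_random).__name__)        # Generator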
- render()[source]
Compute the render frames as specified by render_mode during the initialization of the environment.
The environment's metadata render modes (env.metadata["render_modes"]) should contain the possible ways to implement the render modes. In addition, list versions for most render modes are achieved through gymnasium.make, which automatically applies a wrapper to collect rendered frames.
- Note:
As the render_mode is known during __init__, the objects used to render the environment state should be initialised in __init__.
By convention, if the render_mode is:
- None (default): no render is computed.
- "human": The environment is continuously rendered in the current display or terminal, usually for human consumption. This rendering should occur during step() and render() doesn't need to be called. Returns None.
- "rgb_array": Return a single frame representing the current state of the environment. A frame is a np.ndarray with shape (x, y, 3) representing RGB values for an x-by-y pixel image.
- "ansi": Return a string (str) or StringIO.StringIO containing a terminal-style text representation for each time step. The text can include newlines and ANSI escape sequences (e.g. for colors).
- "rgb_array_list" and "ansi_list": List-based versions of render modes are possible (except "human") through the wrapper gymnasium.wrappers.RenderCollection, which is automatically applied during gymnasium.make(..., render_mode="rgb_array_list"). The frames collected are popped after render() or reset() is called.
- Note:
Make sure that your class's metadata "render_modes" key includes the list of supported modes.
Changed in version 0.25.0: The render function was changed to no longer accept parameters; these parameters should instead be specified when the environment is initialised, e.g.,
gymnasium.make("CartPole-v1", render_mode="human")
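A small sketch of the render-mode behaviour described above, using CartPole purely as an example gymnasium environment:

import gymnasium as gym

# "rgb_array": render() returns a single frame as an np.ndarray of shape (x, y, 3).
env = gym.make("CartPole-v1", render_mode="rgb_array")
env.reset(seed=0)
frame = env.render()
print(frame.shape)                         # e.g. (400, 600, 3)

# "rgb_array_list": gymnasium.make applies RenderCollection, so render() returns
# every frame collected since the last render()/reset() call.
env = gym.make("CartPole-v1", render_mode="rgb_array_list")
env.reset(seed=0)
env.step(env.action_space.sample())
frames = env.render()                      # list of frames; the buffer is then cleared
print(len(frames))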
- reset(*, seed=None, options=None)[source]
Resets the environment to an initial internal state, returning an initial observation and info.
This method generates a new starting state, often with some randomness, to ensure that the agent explores the state space and learns a generalised policy about the environment. This randomness can be controlled with the seed parameter; otherwise, if the environment already has a random number generator and reset() is called with seed=None, the RNG is not reset.
Therefore, reset() should (in the typical use case) be called with a seed right after initialization and then never again.
For custom environments, the first line of reset() should be super().reset(seed=seed), which implements the seeding correctly.
Changed in version v0.25: The return_info parameter was removed and now info is expected to be returned.
- Args:
- seed (optional int): The seed that is used to initialize the environment's PRNG (np_random). If the environment does not already have a PRNG and seed=None (the default option) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset. If you pass an integer, the PRNG will be reset even if it already exists. Usually, you want to pass an integer right after the environment has been initialized and then never again.
- options (optional dict): Additional information to specify how the environment is reset (optional, depending on the specific environment)
- Returns:
- observation (ObsType): Observation of the initial state. This will be an element of observation_space (typically a numpy array) and is analogous to the observation returned by step().
- info (dictionary): This dictionary contains auxiliary information complementing observation. It should be analogous to the info returned by step().
- Parameters:
seed (int | None) –
options (dict[str, Any] | None) –
- Return type:
tuple[Observation, dict[str, Any]]
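For a custom environment, the seeding convention above (calling super().reset(seed=seed) on the first line) looks roughly like this; the spaces and initial-state logic are placeholders:

import numpy as np
import gymnasium as gym

class MyCustomEnv(gym.Env):
    observation_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)
    action_space = gym.spaces.Discrete(2)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)           # seeds self.np_random correctly
        # Any randomness in the initial state should go through self.np_random.
        obs = self.np_random.uniform(-1.0, 1.0, size=4).astype(np.float32)
        return obs, {}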
- step(action)[source]
Run one timestep from the perspective of the agent.
Accepts the agent's action and returns a tuple of (observation, reward, terminated, truncated, info) from the perspective of the ego agent.
Note that when the environment is done, the final observation is the latest observation provided by the environment, which may be the same as the previous observation given to the agent, especially in turn-based settings.
- Parameters:
action (ndarray) – An action provided by the ego-agent.
- Returns:
observation: Ego-agent’s next observation
reward: Amount of reward returned after previous action
terminated: Whether the episode has ended (call reset() if True)
truncated: Whether the episode was truncated (call reset() if True)
info: Extra information about the environment
- Return type:
tuple[Observation | Any, float, bool, bool, dict[str, Any]]
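Because step() follows the five-element gymnasium convention, the usual episode loop applies. The sketch below uses a plain gymnasium environment, since (per the warning above) a DummyEnv should normally only be stepped through the OnPolicyAgent/OffPolicyAgent machinery:

import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
done = False
while not done:
    action = env.action_space.sample()     # stand-in for a trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated         # either flag ends the episode; reset() before stepping again
env.close()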
- property unwrapped: Env[ObsType, ActType]
Returns the base non-wrapped environment.
- Returns:
Env: The base non-wrapped gymnasium.Env instance