pantheonrl.algos.adap.adap_learn.ADAP

class ADAP(policy, env, learning_rate=0.0003, n_steps=2048, batch_size=64, n_epochs=10, gamma=0.99, gae_lambda=0.95, clip_range=0.2, clip_range_vf=None, normalize_advantage=True, ent_coef=0.0, vf_coef=0.5, max_grad_norm=0.5, use_sde=False, sde_sample_freq=-1, target_kl=None, stats_window_size=100, tensorboard_log=None, policy_kwargs=None, verbose=0, seed=None, device='auto', _init_setup_model=True, context_loss_coeff=0.1, context_size=3, num_context_samples=5, context_sampler='l2', num_state_samples=32)[source]

Bases: OnPolicyAlgorithm

Borrows from the Proximal Policy Optimization (PPO) algorithm (clip version).

Methods

collect_rollouts

Collect rollouts using the current policy and fill a RolloutBuffer.

get_env

Returns the current environment (can be None if not defined).

get_parameters

Return the parameters of the agent.

get_vec_normalize_env

Return the VecNormalize wrapper of the training env if it exists.

learn

Return a trained model.

load

Load the model from a zip-file.

predict

Get the policy action from an observation (and optional hidden state).

save

Save all the attributes of the object and the model parameters in a zip-file.

set_env

Set the environment to use.

set_logger

Setter for the logger object.

set_parameters

Load parameters from a given zip-file or a nested dictionary containing parameters for different modules (see get_parameters).

set_random_seed

Set the seed of the pseudo-random generators (python, numpy, pytorch, gym, action_space).

train

Update policy using the currently gathered rollout buffer.

Attributes

logger

Getter for the logger object.

policy_aliases

rollout_buffer

policy

observation_space

action_space

n_envs

lr_schedule

Parameters:
  • policy (ActorCriticPolicy) –

  • env (Env | VecEnv | str) –

  • learning_rate (float | Callable[[float], float]) –

  • n_steps (int) –

  • batch_size (int) –

  • n_epochs (int) –

  • gamma (float) –

  • gae_lambda (float) –

  • clip_range (float | Callable[[float], float]) –

  • clip_range_vf (None | float | Callable[[float], float]) –

  • normalize_advantage (bool) –

  • ent_coef (float) –

  • vf_coef (float) –

  • max_grad_norm (float) –

  • use_sde (bool) –

  • sde_sample_freq (int) –

  • target_kl (float | None) –

  • stats_window_size (int) –

  • tensorboard_log (str | None) –

  • policy_kwargs (Dict[str, Any] | None) –

  • verbose (int) –

  • seed (int | None) –

  • device (device | str) –

  • _init_setup_model (bool) –

  • context_loss_coeff (float) –

  • context_size (int) –

  • num_context_samples (int) –

  • context_sampler (str) –

  • num_state_samples (int) –
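
A construction sketch follows; the AdapPolicy import path and the CartPole environment are assumptions used only for illustration, while the ADAP-specific keyword arguments (context_size, context_loss_coeff, num_context_samples, context_sampler, num_state_samples) mirror the signature above:

    # Construction sketch. The AdapPolicy import path and the environment
    # are assumptions; adapt them to your PantheonRL installation.
    import gym  # PantheonRL builds on the Gym API

    from pantheonrl.algos.adap.adap_learn import ADAP
    from pantheonrl.algos.adap.policies import AdapPolicy  # assumed module path

    env = gym.make("CartPole-v1")  # placeholder single-agent environment

    model = ADAP(
        AdapPolicy,        # an ActorCriticPolicy subclass, per the signature
        env,
        learning_rate=3e-4,
        n_steps=2048,
        batch_size=64,
        # ADAP-specific settings: size of the latent context vector, weight of
        # the context (diversity) loss, and how candidate contexts are sampled.
        context_size=3,
        context_loss_coeff=0.1,
        num_context_samples=5,
        context_sampler="l2",
        num_state_samples=32,
        verbose=1,
    )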

collect_rollouts(env, callback, rollout_buffer, n_rollout_steps)[source]

Collect rollouts using the current policy and fill a RolloutBuffer. The term rollout here refers to the model-free notion and should not be confused with the rollout concept used in model-based RL or planning.

Parameters:
  • env (VecEnv) – The training environment

  • callback (BaseCallback) – Callback that will be called at each step (and at the beginning and end of the rollout)

  • rollout_buffer (RolloutBuffer) – Buffer to fill with rollouts

  • n_rollout_steps (int) – Number of experiences to collect per env

Returns:

True if function returned with at least n_rollout_steps collected, False if callback terminated rollout prematurely.

Return type:

bool

get_env()

Returns the current environment (can be None if not defined).

Returns:

The current environment

Return type:

VecEnv | None

get_parameters()

Return the parameters of the agent. This includes parameters from different networks, e.g. critics (value functions) and policies (pi functions).

Returns:

Mapping from names of the objects to PyTorch state-dicts.

Return type:

Dict[str, Dict]

get_vec_normalize_env()

Return the VecNormalize wrapper of the training env if it exists.

Returns:

The VecNormalize env.

Return type:

VecNormalize | None

learn(total_timesteps, callback=None, log_interval=1, tb_log_name='ADAP', reset_num_timesteps=True, progress_bar=False)[source]

Return a trained model.

Parameters:
  • total_timesteps (int) – The total number of samples (env steps) to train on

  • callback (None | Callable | List[BaseCallback] | BaseCallback) – callback(s) called at every step with the state of the algorithm.

  • log_interval (int) – The number of episodes before logging.

  • tb_log_name (str) – the name of the run for TensorBoard logging

  • reset_num_timesteps (bool) – whether or not to reset the current timestep number (used in logging)

  • progress_bar (bool) – Display a progress bar using tqdm and rich.

Returns:

the trained model
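
A minimal training sketch, assuming model is the ADAP instance constructed earlier:

    # Training sketch; the timestep budget is arbitrary.
    model.learn(
        total_timesteps=100_000,
        log_interval=1,       # log every episode
        tb_log_name="ADAP",   # TensorBoard run name (used if tensorboard_log is set)
        progress_bar=True,    # requires tqdm and rich
    )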

classmethod load(path, env=None, device='auto', custom_objects=None, print_system_info=False, force_reset=True, **kwargs)

Load the model from a zip-file. Warning: load re-creates the model from scratch, it does not update it in-place! For an in-place load use set_parameters instead.

Parameters:
  • path (str | Path | BufferedIOBase) – path to the file (or a file-like) where to load the agent from

  • env (Env | VecEnv | None) – the new environment to run the loaded model on (can be None if you only need prediction from a trained model); has priority over any saved environment

  • device (device | str) – Device on which the code should run.

  • custom_objects (Dict[str, Any] | None) – Dictionary of objects to replace upon loading. If a variable is present in this dictionary as a key, it will not be deserialized and the corresponding item will be used instead. Similar to custom_objects in keras.models.load_model. Useful when you have an object in the file that cannot be deserialized.

  • print_system_info (bool) – Whether to print system info from the saved model and the current system info (useful to debug loading issues)

  • force_reset (bool) – Force call to reset() before training to avoid unexpected behavior. See https://github.com/DLR-RM/stable-baselines3/issues/597

  • kwargs – extra arguments to change the model when loading

Returns:

new model instance with loaded parameters

Return type:

SelfBaseAlgorithm
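
A loading sketch; the path "adap_model" is a placeholder and env is an environment compatible with the saved model:

    # Loading sketch: load() builds a new model instance from scratch.
    from pantheonrl.algos.adap.adap_learn import ADAP

    model = ADAP.load(
        "adap_model",            # the .zip suffix may be omitted
        env=env,                 # a new env takes priority over any saved one
        device="auto",
        print_system_info=True,  # helps debug version mismatches when loading
    )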

property logger: Logger

Getter for the logger object.

predict(observation, state=None, episode_start=None, deterministic=False)

Get the policy action from an observation (and optional hidden state). Includes preprocessing to handle different observation types (e.g. normalizing images).

Parameters:
  • observation (ndarray | Dict[str, ndarray]) – the input observation

  • state (Tuple[ndarray, ...] | None) – The last hidden states (can be None, used in recurrent policies)

  • episode_start (ndarray | None) – The last masks (can be None, used in recurrent policies). This corresponds to the beginning of episodes, where the hidden states of the RNN must be reset.

  • deterministic (bool) – Whether or not to return deterministic actions.

Returns:

the model’s action and the next hidden state (used in recurrent policies)

Return type:

Tuple[ndarray, Tuple[ndarray, …] | None]
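
A sketch of a greedy evaluation loop built on predict(), assuming the classic Gym step API (a 4-tuple from step()) and the env and model from the earlier sketches:

    # Evaluation sketch using the classic Gym API.
    obs = env.reset()
    done = False
    while not done:
        action, _state = model.predict(obs, deterministic=True)
        obs, reward, done, info = env.step(action)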

save(path, exclude=None, include=None)

Save all the attributes of the object and the model parameters in a zip-file.

Parameters:
  • path (str | Path | BufferedIOBase) – path to the file where the rl agent should be saved

  • exclude (Iterable[str] | None) – name of parameters that should be excluded in addition to the default ones

  • include (Iterable[str] | None) – name of parameters that might be excluded but should be included anyway

Return type:

None
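
A saving sketch; the path and the exclude list are illustrative:

    # Saving sketch: writes attributes and model parameters to adap_model.zip.
    model.save(
        "adap_model",
        exclude=["tensorboard_log"],  # drop attributes beyond the default exclusions
    )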

set_env(env, force_reset=True)[source]

Set the environment to use.

set_logger(logger)

Setter for the logger object.

Warning

When passing a custom logger object, this will overwrite tensorboard_log and verbose settings passed to the constructor.

Parameters:

logger (Logger) –

Return type:

None

set_parameters(load_path_or_dict, exact_match=True, device='auto')

Load parameters from a given zip-file or a nested dictionary containing parameters for different modules (see get_parameters).

Parameters:
  • load_path_or_dict (str | Dict[str, Tensor]) – Location of the saved data (path or file-like, see save), or a nested dictionary containing nn.Module parameters used by the policy. The dictionary maps object names to a state-dictionary returned by torch.nn.Module.state_dict().

  • exact_match (bool) – If True, the given parameters must include parameters for each module and each of their parameters; otherwise an exception is raised. If set to False, this can be used to update only specific parameters.

  • device (device | str) – Device on which the code should run.

Return type:

None
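
A sketch of an in-place parameter update; other_model is a hypothetical second ADAP instance with the same architecture:

    # In-place parameter transfer sketch (other_model is hypothetical).
    params = model.get_parameters()   # e.g. {"policy": <state_dict>, ...}
    other_model.set_parameters(params, exact_match=True, device="auto")

    # Loading parameters directly from a saved zip-file works the same way:
    # other_model.set_parameters("adap_model", exact_match=True)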

set_random_seed(seed=None)

Set the seed of the pseudo-random generators (python, numpy, pytorch, gym, action_space).

Parameters:

seed (int | None) –

Return type:

None

train()[source]

Update policy using the currently gathered rollout buffer.

Return type:

None