pantheonrl.algos.modular.learn.ModularAlgorithm
- class ModularAlgorithm(policy, env, learning_rate=0.0003, n_steps=2048, batch_size=64, n_epochs=10, gamma=0.99, gae_lambda=0.95, clip_range=0.2, clip_range_vf=None, ent_coef=0.0, vf_coef=0.5, max_grad_norm=0.5, use_sde=False, sde_sample_freq=-1, target_kl=None, tensorboard_log=None, policy_kwargs=None, verbose=0, seed=None, device='auto', _init_setup_model=True, marginal_reg_coef=0.0)[source]
Bases: OnPolicyAlgorithm
The base for On-Policy algorithms (ex: A2C/PPO).
Methods
collect_rollouts – Collect rollouts using the current policy and fill a RolloutBuffer.
get_env – Returns the current environment (can be None if not defined).
get_parameters – Return the parameters of the agent.
get_vec_normalize_env – Return the VecNormalize wrapper of the training env if it exists.
learn – Return a trained model.
load – Load the model from a zip-file.
predict – Get the policy action from an observation (and optional hidden state).
save – Save all the attributes of the object and the model parameters in a zip-file.
set_env – Checks the validity of the environment, and if it is coherent, set it as the current environment.
set_logger – Setter for the logger object.
set_parameters – Load parameters from a given zip-file or a nested dictionary containing parameters for different modules (see get_parameters).
set_random_seed – Set the seed of the pseudo-random generators (python, numpy, pytorch, gym, action_space).
train – Update policy using the currently gathered rollout buffer.
Attributes
logger – Getter for the logger object.
policy_aliases
rollout_buffer
policy
observation_space
action_space
n_envs
lr_schedule
- Parameters:
policy (ActorCriticPolicy) –
env (Env | VecEnv | str) –
learning_rate (float | Callable[[float], float]) –
n_steps (int) –
batch_size (int | None) –
n_epochs (int) –
gamma (float) –
gae_lambda (float) –
clip_range (float | Callable[[float], float]) –
clip_range_vf (None | float | Callable[[float], float]) –
ent_coef (float) –
vf_coef (float) –
max_grad_norm (float) –
use_sde (bool) –
sde_sample_freq (int) –
target_kl (float | None) –
tensorboard_log (str | None) –
policy_kwargs (Dict[str, Any] | None) –
verbose (int) –
seed (int | None) –
device (device | str) –
_init_setup_model (bool) –
marginal_reg_coef (float) –
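Example (a minimal construction sketch; the ModularPolicy import path and the externally created multi-agent env are assumptions, not part of this page):

from pantheonrl.algos.modular.learn import ModularAlgorithm
# Assumed import path for the modular actor-critic policy; adjust to your install.
from pantheonrl.algos.modular.policies import ModularPolicy

# `env` is assumed to be a PantheonRL-compatible training environment created
# elsewhere (single env or VecEnv, as accepted by the constructor).
model = ModularAlgorithm(
    policy=ModularPolicy,       # actor-critic policy class
    env=env,
    learning_rate=3e-4,
    n_steps=2048,
    batch_size=64,
    n_epochs=10,
    marginal_reg_coef=0.0,      # regularization coefficient specific to ModularAlgorithm
    verbose=1,
    seed=0,
)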
- collect_rollouts(env, callback, rollout_buffer, n_rollout_steps, partner_idx=0)[source]
Collect rollouts using the current policy and fill a RolloutBuffer.
- Parameters:
env (VecEnv) – The training environment
callback (BaseCallback) – Callback that will be called at each step (and at the beginning and end of the rollout)
rollout_buffer (RolloutBuffer) – Buffer to fill with rollouts
n_rollout_steps (int) – Number of experiences to collect per environment
partner_idx (int) –
- Returns:
True if function returned with at least n_rollout_steps collected, False if callback terminated rollout prematurely.
- Return type:
bool
- get_env()
Returns the current environment (can be None if not defined).
- Returns:
The current environment
- Return type:
VecEnv | None
- get_parameters()
Return the parameters of the agent. This includes parameters from different networks, e.g. critics (value functions) and policies (pi functions).
- Returns:
Mapping from names of the objects to PyTorch state-dicts.
- Return type:
Dict[str, Dict]
- get_vec_normalize_env()
Return the VecNormalize wrapper of the training env if it exists.
- Returns:
The VecNormalize env.
- Return type:
VecNormalize | None
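Example (a sketch; applies only when the training env was wrapped in VecNormalize, and assumes a model constructed as above):

vec_env = model.get_vec_normalize_env()
if vec_env is not None:
    # Persist the running normalization statistics alongside the model.
    vec_env.save("vec_normalize.pkl")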
- learn(total_timesteps, callback=None, log_interval=1, tb_log_name='OnPolicyAlgorithm', reset_num_timesteps=True, progress_bar=False)[source]
Return a trained model.
- Parameters:
total_timesteps (int) – The total number of samples (env steps) to train on
callback (None | Callable | List[BaseCallback] | BaseCallback) – callback(s) called at every step with state of the algorithm.
log_interval (int) – The number of episodes before logging.
tb_log_name (str) – the name of the run for TensorBoard logging
reset_num_timesteps (bool) – whether or not to reset the current timestep number (used in logging)
progress_bar (bool) – Display a progress bar using tqdm and rich.
- Returns:
the trained model
- Return type:
OnPolicyAlgorithm
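Example (a training sketch continuing from a constructed model; the run name is arbitrary):

model.learn(
    total_timesteps=100_000,
    log_interval=10,
    tb_log_name="modular_run",   # only used when tensorboard_log is set
    progress_bar=False,
)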
- classmethod load(path, env=None, device='auto', custom_objects=None, print_system_info=False, force_reset=True, **kwargs)
Load the model from a zip-file.
Warning
load re-creates the model from scratch; it does not update it in-place! For an in-place load use set_parameters instead.
- Parameters:
path (str | Path | BufferedIOBase) – path to the file (or a file-like) where to load the agent from
env (Env | VecEnv | None) – the new environment to run the loaded model on (can be None if you only need prediction from a trained model) has priority over any saved environment
device (device | str) – Device on which the code should run.
custom_objects (Dict[str, Any] | None) – Dictionary of objects to replace upon loading. If a variable is present in this dictionary as a key, it will not be deserialized and the corresponding item will be used instead. Similar to custom_objects in keras.models.load_model. Useful when you have an object in the file that can not be deserialized.
print_system_info (bool) – Whether to print system info from the saved model and the current system info (useful to debug loading issues)
force_reset (bool) – Force call to reset() before training to avoid unexpected behavior. See https://github.com/DLR-RM/stable-baselines3/issues/597
kwargs – extra arguments to change the model when loading
- Returns:
new model instance with loaded parameters
- Return type:
SelfBaseAlgorithm
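Example (a sketch; the file name is arbitrary and `env` is the new environment, created elsewhere):

from pantheonrl.algos.modular.learn import ModularAlgorithm

# Builds a fresh model from the saved zip-file; it does not modify an existing instance.
model = ModularAlgorithm.load("modular_model.zip", env=env, device="auto")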
- property logger: Logger
Getter for the logger object.
- predict(observation, state=None, episode_start=None, deterministic=False)
Get the policy action from an observation (and optional hidden state). Includes sugar-coating to handle different observations (e.g. normalizing images).
- Parameters:
observation (ndarray | Dict[str, ndarray]) – the input observation
state (Tuple[ndarray, ...] | None) – The last hidden states (can be None, used in recurrent policies)
episode_start (ndarray | None) – The last masks (can be None, used in recurrent policies); this corresponds to the beginning of episodes, where the hidden states of the RNN must be reset.
deterministic (bool) – Whether or not to return deterministic actions.
- Returns:
the model’s action and the next hidden state (used in recurrent policies)
- Return type:
Tuple[ndarray, Tuple[ndarray, …] | None]
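Example (an evaluation sketch, assuming `env` follows the Stable-Baselines3 VecEnv step API):

obs = env.reset()
for _ in range(1_000):
    # Deterministic actions for evaluation; `predict` also returns the next
    # hidden state, which is only relevant for recurrent policies.
    action, _state = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)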
- save(path, exclude=None, include=None)
Save all the attributes of the object and the model parameters in a zip-file.
- Parameters:
path (str | Path | BufferedIOBase) – path to the file where the rl agent should be saved
exclude (Iterable[str] | None) – name of parameters that should be excluded in addition to the default ones
include (Iterable[str] | None) – name of parameters that might be excluded but should be included anyway
- Return type:
None
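Example (a sketch; pairs with ModularAlgorithm.load above):

model.save("modular_model.zip")   # writes parameters and attributes to a zip-file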
- set_env(env, force_reset=True)
Checks the validity of the environment, and if it is coherent, sets it as the current environment. Furthermore, wraps any non-vectorized env into a vectorized one. Checked parameters: observation_space, action_space.
- Parameters:
env (Env | VecEnv) – The environment for learning a policy
force_reset (bool) – Force call to
reset()before training to avoid unexpected behavior. See issue https://github.com/DLR-RM/stable-baselines3/issues/597
- Return type:
None
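Example (a sketch; `new_env` is assumed to be a compatible environment with the same observation and action spaces):

model.set_env(new_env)
# Continue training on the new environment without resetting the timestep counter.
model.learn(total_timesteps=50_000, reset_num_timesteps=False)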
- set_logger(logger)
Setter for the logger object.
Warning
When passing a custom logger object, this will overwrite the tensorboard_log and verbose settings passed to the constructor.
- Parameters:
logger (Logger) –
- Return type:
None
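Example (a sketch using the Stable-Baselines3 logger helper; note that this overrides the constructor's tensorboard_log and verbose settings):

from stable_baselines3.common.logger import configure

new_logger = configure("./logs/modular", ["stdout", "csv", "tensorboard"])
model.set_logger(new_logger)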
- set_parameters(load_path_or_dict, exact_match=True, device='auto')
Load parameters from a given zip-file or a nested dictionary containing parameters for different modules (see get_parameters).
- Parameters:
load_path_or_dict (str | Dict[str, Tensor]) – Location of the saved data (path or file-like, see save), or a nested dictionary containing nn.Module parameters used by the policy. The dictionary maps object names to a state-dictionary returned by torch.nn.Module.state_dict().
exact_match (bool) – If True, the given parameters should include parameters for each module and each of their parameters, otherwise raises an Exception. If set to False, this can be used to update only specific parameters.
device (device | str) – Device on which the code should run.
- Return type:
None
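Example (a sketch copying weights between two models constructed as above; `source_model` and `target_model` are placeholder names):

params = source_model.get_parameters()            # mapping of module names to state-dicts
target_model.set_parameters(params, exact_match=True)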
- set_random_seed(seed=None)
Set the seed of the pseudo-random generators (python, numpy, pytorch, gym, action_space)
- Parameters:
seed (int | None) –
- Return type:
None
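Example (a sketch; seed the model after construction for reproducible rollouts):

model.set_random_seed(42)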