parl.Algorithm

class Algorithm(model=None)[source]
alias: parl.Algorithm
alias: parl.core.fluid.algorithm.Algorithm
Algorithm defines how to update the parameters of the Model. This is where we define the loss functions and the optimizer of the neural network. An Algorithm has at least one model.

PARL provides implementations of various algorithms (DQN/DDPG/PPO/A3C/IMPALA) that can be reused quickly and are accessible through parl.algorithms.

Example:

import parl

model = Model()  # Model is a user-defined subclass of parl.Model
dqn = parl.algorithms.DQN(model, lr=1e-3)
Variables:
  • model (parl.Model) – a neural network that represents a policy or a Q-value function.
Public Functions:
  • get_weights: return a Python dictionary containing the parameters of the current model.
  • set_weights: copy parameters from get_weights() to the model.
  • sample: return a noisy action to perform exploration according to the policy.
  • predict: return an action given the current observation.
  • learn: define the loss function and create an optimizer to minimize the loss.
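
For concreteness, below is a minimal sketch of a custom Algorithm subclass that fills in predict, sample, and learn with a simple REINFORCE-style policy gradient. It assumes the paddle dynamic-graph API and a user-defined parl.Model whose forward pass returns action logits; the class name PolicyGradient, the lr argument, and the loss itself are illustrative assumptions, not part of the documented interface.

import paddle
import paddle.nn.functional as F
import parl


class PolicyGradient(parl.Algorithm):
    def __init__(self, model, lr=1e-3):
        # model: assumed to be a parl.Model mapping observations to action logits
        super().__init__(model)
        self.optimizer = paddle.optimizer.Adam(
            learning_rate=lr, parameters=self.model.parameters())

    def predict(self, obs):
        # Greedy action: argmax over the policy logits.
        return paddle.argmax(self.model(obs), axis=-1)

    def sample(self, obs):
        # Exploratory action: sample from the softmax policy distribution.
        probs = F.softmax(self.model(obs), axis=-1)
        return paddle.multinomial(probs, num_samples=1)

    def learn(self, obs, action, reward):
        # REINFORCE-style loss: -log pi(a|s) weighted by the return.
        logits = self.model(obs)
        neg_log_prob = F.cross_entropy(logits, action, reduction='none')
        loss = paddle.mean(neg_log_prob * reward)
        self.optimizer.clear_grad()
        loss.backward()
        self.optimizer.step()
        return loss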

__init__(model=None)[source]
Parameters: model (parl.Model) – a neural network that represents a policy or a Q-value function.
get_weights()[source]

Get weights of self.model.

Returns: a Python dict containing the parameters of self.model.
Return type: weights (dict)
learn(*args, **kwargs)[source]

Define the loss function and create an optimizer to minimize the loss.

predict(*args, **kwargs)[source]

Define the prediction process, e.g., use the policy model to predict actions.

sample(*args, **kwargs)[source]

Define the sampling process. This function returns an action with noise to perform exploration.

set_weights(params)[source]

Copy parameters from get_weights() to the model.

Parameters: weights (dict) – a Python dict containing the parameters of self.model.
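
get_weights and set_weights are typically paired to copy parameters between algorithm instances that wrap models with the same architecture. The snippet below is a usage sketch only: Model is assumed to be a user-defined parl.Model subclass, and the DQN arguments simply mirror the example at the top of this page.

import parl

learner = parl.algorithms.DQN(Model(), lr=1e-3)  # instance updated by learn()
actor = parl.algorithms.DQN(Model(), lr=1e-3)    # instance used only for acting

weights = learner.get_weights()  # plain Python dict of the learner's parameters
actor.set_weights(weights)       # copy them into the actor's model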