using ActionType = typename PolicyType::ActionType
    Convenient typedef for action. More...

AggregatedPolicy(std::vector<PolicyType> policies, const arma::colvec &distribution)

void Anneal()
    Exploration probability will anneal at each step. More...

ActionType Sample(const arma::colvec &actionValue, bool deterministic=false)
    Sample an action based on given action values. More...
class mlpack::rl::AggregatedPolicy< PolicyType >

Template Parameters
    PolicyType    The type of the child policy.

Definition at line 27 of file aggregated_policy.hpp.
◆ ActionType

using ActionType = typename PolicyType::ActionType

Convenient typedef for action.
◆ AggregatedPolicy()

inline AggregatedPolicy(std::vector<PolicyType> policies, const arma::colvec &distribution)
Parameters
    policies        Child policies.
    distribution    Probability distribution over the child policies. The user should make sure its size equals the number of policies and that its elements sum to 1.

Definition at line 39 of file aggregated_policy.hpp.
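A minimal construction sketch. ArgmaxPolicy below is a hypothetical stand-in for the child PolicyType (any type providing an ActionType typedef, Sample(), and Anneal() is assumed to work); the include paths are inferred from the file listed at the bottom of this page, and only the AggregatedPolicy constructor call itself follows the signature above.

    #include <mlpack/core.hpp>
    #include <mlpack/methods/reinforcement_learning/policy/aggregated_policy.hpp>

    #include <vector>

    // Hypothetical child policy used only for illustration: it always returns
    // the index of the highest action value and has nothing to anneal.
    struct ArgmaxPolicy
    {
      using ActionType = size_t;

      ActionType Sample(const arma::colvec& actionValue,
                        bool /* deterministic */ = false)
      {
        return actionValue.index_max();
      }

      void Anneal() { /* Nothing to anneal in this toy policy. */ }
    };

    int main()
    {
      // Two child policies; the distribution sends about 70% of the draws to
      // the first policy and 30% to the second, and its elements sum to 1.
      std::vector<ArgmaxPolicy> policies(2);
      arma::colvec distribution = { 0.7, 0.3 };

      mlpack::rl::AggregatedPolicy<ArgmaxPolicy> policy(std::move(policies),
                                                        distribution);
    }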
◆ Anneal()

void Anneal()

Exploration probability will anneal at each step.
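A hedged sketch of where Anneal() is assumed to fit: once per training step, so that exploration decays "at each step" as the brief above says. The surrounding training code is a placeholder.

    #include <mlpack/core.hpp>
    #include <mlpack/methods/reinforcement_learning/policy/aggregated_policy.hpp>

    // One assumed training step: draw an action, act, then anneal.
    template<typename PolicyType>
    typename PolicyType::ActionType
    TrainingStep(mlpack::rl::AggregatedPolicy<PolicyType>& policy,
                 const arma::colvec& actionValue)
    {
      typename PolicyType::ActionType action = policy.Sample(actionValue);
      // ... act in the environment and update the learner here ...
      policy.Anneal();
      return action;
    }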
◆ Sample()

inline ActionType Sample(const arma::colvec &actionValue, bool deterministic = false)

Sample an action based on given action values.
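A usage sketch for the deterministic flag, under the assumption that the default (false) gives a stochastic draw for training while true requests a deterministic choice for evaluation; how the deterministic action is picked internally is not documented on this page.

    #include <mlpack/core.hpp>
    #include <mlpack/methods/reinforcement_learning/policy/aggregated_policy.hpp>

    // Stochastic action selection during training, deterministic at
    // evaluation time (an assumption based on the parameter name).
    template<typename PolicyType>
    typename PolicyType::ActionType
    ChooseAction(mlpack::rl::AggregatedPolicy<PolicyType>& policy,
                 const arma::colvec& actionValue,
                 const bool evaluation)
    {
      return policy.Sample(actionValue, /* deterministic */ evaluation);
    }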
The documentation for this class was generated from the following file:
- /home/ryan/src/mlpack.org/_src/mlpack-git/src/mlpack/methods/reinforcement_learning/policy/aggregated_policy.hpp