ContinuousActionEnv Class Reference

To use the dummy environment, one may start by specifying the state and action dimensions. More...

Classes

class  Action
 Implementation of continuous action. More...

 
class  State
 Implementation of state of the dummy environment. More...

 

Public Member Functions

State InitialSample ()
 Dummy function to mimic initial sampling in an environment. More...

 
bool IsTerminal (const State &) const
 Dummy function to find terminal state. More...

 
double Sample (const State &, const Action &, State &)
 Dummy function to mimic sampling in an environment. More...

 

Detailed Description

To use the dummy environment, one may start by specifying the state and action dimensions.

Eg:
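A minimal sketch, assuming the dummy environment exposes static State::dimension and Action::size members for this purpose (as its discrete counterpart does):

  ContinuousActionEnv::State::dimension = 4;
  ContinuousActionEnv::Action::size = 2;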

Now the ContinuousActionEnv class can be used as an EnvironmentType in RL methods, just like any other of mlpack's implementations of gym environments.

Definition at line 121 of file env_type.hpp.

Member Function Documentation

◆ InitialSample()

State InitialSample() [inline]

Dummy function to mimic initial sampling in an environment.

Returns
the dummy state.

Definition at line 193 of file env_type.hpp.

References DiscreteActionEnv::State::State().

◆ IsTerminal()

bool IsTerminal(const State &) const [inline]

Dummy function to find terminal state.

Parameters
    state    The current state.
Returns
    It is of no practical use, so it is always false.

Definition at line 200 of file env_type.hpp.

◆ Sample()

double Sample(const State &, const Action &, State &) [inline]

Dummy function to mimic sampling in an environment.

Parameters
    state        The current state.
    action       The current action.
    nextState    The next state.
Returns
    It is of no practical use, so it is always 0.

Definition at line 183 of file env_type.hpp.
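
As an illustration of the interface above, the following is a minimal sketch that exercises the three member functions. It assumes the mlpack::rl namespace and that the state and action dimensions have been set as in the example in the Detailed Description; the return values reflect the dummy behaviour documented for each function.

  #include <mlpack/methods/reinforcement_learning/environment/env_type.hpp>

  using namespace mlpack::rl;

  void DummyRollout()
  {
    ContinuousActionEnv env;

    // Draw the dummy initial state.
    ContinuousActionEnv::State state = env.InitialSample();

    // Sample a transition using a default-constructed action.
    ContinuousActionEnv::Action action;
    ContinuousActionEnv::State nextState;
    double reward = env.Sample(state, action, nextState);  // Of no use; always 0.

    // The dummy environment never reports a terminal state.
    bool terminal = env.IsTerminal(nextState);             // Always false.

    (void) reward;
    (void) terminal;
  }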


The documentation for this class was generated from the following file:
  • /home/ryan/src/mlpack.org/_src/mlpack-git/src/mlpack/methods/reinforcement_learning/environment/env_type.hpp