
REINFORCEjs

GridWorld: Dynamic Programming Demo


(Interactive demo: a grid of cells, each showing its current Value estimate, with several cells carrying a reward of R -1.0 and one cell a reward of R 1.0. Selecting a cell lets you change its reward with the Cell reward slider.)

Setup

This is a toy environment called GridWorld that is often used as a simple testbed in the Reinforcement Learning literature.

In short, this is a deterministic, finite Markov Decision Process (MDP), and as always the goal is to find an agent policy (shown here by arrows) that maximizes the future discounted reward. My favorite part is letting Value iteration converge, then changing the cell rewards and watching the policy adjust.

Interface. The color of the cells (initially all white) shows the current estimate of the Value (discounted reward) of that state under the current policy. Note that you can select any cell and change its reward with the Cell reward slider.

Dynamic Programming

An interested reader should refer to Richard Sutton's Free Online Book on Reinforcement Learning, in this particular case Chapter 4. Briefly, an agent interacts with the environment based on its policy \(\pi(a \mid s)\). This is a mapping from states \(s\) to actions \(a\), or more generally to a distribution over the possible actions. After every action, the agent also receives a reward \(r\) from the environment. The interaction between the agent and the environment hence takes the form \(s_t, a_t, r_t, s_{t+1}, a_{t+1}, r_{t+1}, s_{t+2}, \ldots\), indexed by \(t\) (time), and our goal is to find the policy \(\pi^*\) that maximizes the amount of reward over time. In the DP approach, it is helpful to define a Value function \(V^\pi(s)\) that gives the expected discounted reward when starting from state \(s\) and then interacting with the environment according to \(\pi\):

$$ V^\pi(s) = E \{ r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \ldots \mid s_t = s \} $$

Note that the expectation is over both 1. the agent's (in general) stochastic policy and 2. the environment dynamics, that is, how the environment evolves when the agent takes an action \(a\) in state \(s\) and what reward is obtained in that case. The constant \(\gamma\) is a discount factor that gives more weight to earlier rewards than to later ones. We can start writing out the expectation: to get the value of state \(s\) we sum over all the actions the agent could take, then over all the ways the environment could respond, and so on back and forth. When you do this, you'll find that:

$$ V^\pi(s) = \sum_{a} \pi(s,a) \sum_{s'} \mathcal{P}_{ss'}^{a} \left[ \mathcal{R}_{ss'}^{a} + \gamma V^\pi(s') \right] $$

In the above equation \(\mathcal{P}_{ss'}^{a} , \mathcal{R}_{ss'}^{a} \) are fixed constants specific to the environment, and give the probability of the next state \(s'\) given that the agent took action \(a\) in state \(s\), and the expected reward for that transition, respectively. This equation is called the Bellman Equation and interestingly, it describes a recursive relationship that the value function for a given policy should satisfy.
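As a concrete (purely illustrative) instance of this backup: suppose the policy in state \(s\) deterministically picks an action that leads to a state \(s'\) with \(V^\pi(s') = 1.0\), the transition reward is \(0\), and \(\gamma = 0.9\). Both sums then collapse to a single term,

$$ V^\pi(s) = 1 \cdot 1 \cdot \left[ 0 + 0.9 \times 1.0 \right] = 0.9 $$

so the neighbor of a valuable state picks up a discounted share of that value; repeating the backup is exactly how reward diffuses backwards through the grid.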

With our goal of finding the optimal policy \(\pi^*(s,a)\) that gets the most Value from all states, our strategy will be to follow the Policy Iteration scheme: we start out with some diffuse initial policy and evaluate the Value function of every state under that policy by turning the Bellman equation into an update. The value function effectively diffuses the rewards backwards through the environment dynamics and the agent's expected actions, as given by its current policy. Once we estimate the values of all states, we update the policy to be greedy with respect to the Value function. We then re-estimate the Value of each state, and continue iterating until we converge to the optimal policy (the process can be shown to converge). These two steps can be controlled by the corresponding buttons in the interface.

Sketch of the Code

The goal of Policy Evaluation is to update the value of every state by diffusing the rewards backwards through the dynamics of the world and current policy (this is called a backup). The code looks like this:


evaluatePolicy: function() {
  // perform a synchronous update of the value function
  var Vnew = zeros(this.ns); // initialize new value function array for each state
  for(var s=0;s < this.ns;s++) {
    var v = 0.0;
    var poss = this.env.allowedActions(s); // fetch all possible actions
    for(var i=0,n=poss.length;i < n;i++) {
      var a = poss[i];
      var prob = this.P[a*this.ns+s]; // probability of taking action under current policy
      var ns = this.env.nextStateDistribution(s,a); // look up the next state
      var rs = this.env.reward(s,a,ns); // get reward for s->a->ns transition
      v += prob * (rs + this.gamma * this.V[ns]);
    }
    Vnew[s] = v;
  }
  this.V = Vnew; // swap
},

Here, we see this.ns (the number of states), this.P, which stores the current policy, and this.env, which is a pointer to the Environment object. The policy array is one-dimensional in this implementation but stores the probability of taking any action in any state, so I'm using funny indexing (this.P[a*this.ns+s]) to avoid dealing with 2D arrays in JavaScript. The code implements the synchronous Value backup for each state:

\begin{align} V^\pi(s) & = E_\pi \{ r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \ldots \mid s_t = s \} \\ & = E_\pi \{ r_t + \gamma V^\pi(s_{t+1}) \mid s_t = s \} \\ & = \sum_a \pi(s,a) \sum_{s'} \mathcal{P}_{ss'}^a \left[ \mathcal{R}_{ss'}^a + \gamma V(s') \right] \end{align}

However, note that in the code we only take the expectation over the policy's actions (the first sum above); the second sum is not evaluated because this Gridworld is deterministic (i.e. ns in the code is just a single integer indicating the next state). Hence, the entire probability mass falls on a single next state and no expectation over next states is needed.
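For completeness, here is roughly what that inner loop would look like in a stochastic environment, where the sum over next states does have to be evaluated. This is only a sketch: it assumes a hypothetical version of nextStateDistribution(s,a) that returns an array of {ns, p} pairs, which is not the API used by this deterministic demo.


// Sketch only (not the demo's code): backup for one state s in a stochastic MDP.
// Assumes a hypothetical nextStateDistribution(s,a) returning [{ns:..., p:...}, ...].
var v = 0.0;
var poss = this.env.allowedActions(s);
for(var i=0,n=poss.length;i < n;i++) {
  var a = poss[i];
  var prob = this.P[a*this.ns+s]; // pi(a|s), stored flat at index a*ns+s
  var nsdist = this.env.nextStateDistribution(s,a);
  for(var j=0;j < nsdist.length;j++) {
    var nsj = nsdist[j].ns, p = nsdist[j].p;         // next state and its probability
    var rs = this.env.reward(s,a,nsj);               // reward for the s -> a -> ns transition
    v += prob * p * (rs + this.gamma * this.V[nsj]); // both sums of the Bellman equation
  }
}
Vnew[s] = v;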

Once the Value function has been re-estimated, we can perform a Policy Update, making the policy greedy with respect to the estimated Value in each state. This is a very simple process, but the code below is a little bloated because we're carefully handling ties between actions and distributing the policy probability equally over all winning actions:


updatePolicy: function() {
  // update policy to be greedy w.r.t. learned Value function
  // iterate over all states...
  for(var s=0;s < this.ns;s++) {
    var poss = this.env.allowedActions(s);
    // compute value of taking each allowed action
    var vmax, nmax;
    var vs = [];
    for(var i=0,n=poss.length;i < n;i++) {
      var a = poss[i];
      // compute the value of taking action a
      var ns = this.env.nextStateDistribution(s,a);
      var rs = this.env.reward(s,a,ns);
      var v = rs + this.gamma * this.V[ns];
      // bookkeeping: store it and maintain the running max
      vs.push(v);
      if(i === 0 || v > vmax) { vmax = v; nmax = 1; }
      else if(v === vmax) { nmax += 1; }
    }
    // update policy smoothly across all argmaxy actions
    for(var i=0,n=poss.length;i < n;i++) {
      var a = poss[i];
      this.P[a*this.ns+s] = (vs[i] === vmax) ? 1.0/nmax : 0.0;
    }
  }
},

Here, we are computing the action value function \(Q(s,a)\) at each state, which measures how much expected reward we would get by taking action \(a\) in state \(s\) and then following the current policy. The function has the form:

\begin{align} Q^\pi (s,a) &= E_\pi \{ r_t + \gamma V^\pi(s_{t+1}) \mid s_t = s, a_t = a \} \\ &= \sum_{s'} \mathcal{P}_{ss'}^a \left[ \mathcal{R}_{ss'}^a + \gamma V^\pi(s') \right] \end{align}
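For reference, the state Value and action Value functions are related by averaging over the policy's action choices:

$$ V^\pi(s) = \sum_a \pi(s,a) \, Q^\pi(s,a) $$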

In our Gridworld example, we are looping over all states and evaluating the Q function for each of the (up to) four possible actions. Then we update the policy to take the argmaxy actions at each state. That is,

$$ \pi'(s) = \arg\max_a Q(s,a) $$

It can be shown (see Richard Sutton's book) that performing these updates over and over again is guaranteed to converge to the optimal policy. In this Gridworld example, this corresponds to arrows that perfectly guide the agent to the terminal state where it gets reward +1.
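Putting the two pieces together: the agent's learn() step essentially just runs one Policy Evaluation sweep followed by one Policy Update. The following is a sketch consistent with the methods above (and with the API example below), not necessarily the demo's exact code:


learn: function() {
  // one step of Policy Iteration (sketch; assumes the two methods shown above)
  this.evaluatePolicy(); // 1. a synchronous sweep of Bellman backups over all states
  this.updatePolicy();   // 2. make the policy greedy w.r.t. the freshly updated values
},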

REINFORCEjs API use of DP

If you'd like to use the REINFORCEjs Dynamic Programming agent for your own MDP, you have to define an environment object env with a few methods that the DP agent will need: the ones called in the code above (allowedActions(s), nextStateDistribution(s,a) and reward(s,a,ns)), plus methods that tell the agent how many states and actions there are.
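As a rough, illustrative sketch only (not code from the library): a minimal deterministic environment could look like the following. The allowedActions, nextStateDistribution and reward methods mirror the calls made by the agent code above; the names getNumStates() and getMaxNumActions() for reporting the sizes follow REINFORCEjs conventions but should be checked against the library source.


// Illustrative sketch: a tiny 1D "corridor" MDP with n states in a row.
// The agent can step left (action 0) or right (action 1); stepping into the
// right-most state yields reward +1. This is not part of the library.
var CorridorEnv = function(n) { this.n = n; };
CorridorEnv.prototype = {
  getNumStates: function() { return this.n; },      // total number of states
  getMaxNumActions: function() { return 2; },       // at most two actions in any state
  allowedActions: function(s) {
    var as = [];
    if(s > 0) { as.push(0); }           // can move left unless at the left edge
    if(s < this.n - 1) { as.push(1); }  // can move right unless at the right edge
    return as;
  },
  nextStateDistribution: function(s, a) {
    return a === 0 ? s - 1 : s + 1;     // deterministic: a single next state index
  },
  reward: function(s, a, ns) {
    return ns === this.n - 1 ? 1.0 : 0.0; // +1 for reaching the goal state
  }
};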

See the GridWorld environment in this demo's source code for an example. An example of creating and training the agent will then look something like:


// create environment
env = new Gridworld(); 
// create the agent, yay! Discount factor 0.9
agent = new RL.DPAgent(env, {'gamma':0.9}); 

// call this function repeatedly until convergence:
agent.learn();

// once trained, get the agent's behavior with:
var action = agent.act(s); // returns the index of the action the policy picks in state s
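Since learn() is meant to be called repeatedly, a (purely illustrative) way to drive it is a simple loop; the number of sweeps here is an arbitrary choice rather than a convergence check:


// illustrative driver only: 100 sweeps is an arbitrary number, not a convergence test
for(var k=0; k<100; k++) {
  agent.learn();
}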

Pros and Cons

In practice you'll rarely see people use Dynamic Programming to solve Reinforcement Learning problems. There are numerous reasons for this, but the two biggest ones are probably that DP methods assume a perfect model of the environment's dynamics (the transition probabilities \(\mathcal{P}_{ss'}^a\) and rewards \(\mathcal{R}_{ss'}^a\)), which is almost never available in practice, and that they require a full sweep over every state on each update, which quickly becomes infeasible as the state space grows.

However, the nice parts are that the approach is mathematically exact and easy to analyze, and the updates are guaranteed to converge to the optimal policy (as noted above), with the Bellman backup at its core serving as the building block for most other Reinforcement Learning methods.