DP simplifies the MDP problem, allowing us to find the policy $\alpha = \{\alpha_1, \ldots, \alpha_T\}$ by a recursive procedure. In essence, it uses the value function $V$ as a shadow price that maps a stochastic, multiperiod problem into a deterministic, static optimization problem. We are going to focus on infinite horizon problems, where $V$ is the unique solution of the Bellman equation $V = \Gamma(V)$.
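To make the fixed-point characterization concrete, below is a minimal value-iteration sketch: starting from an arbitrary guess, the Bellman operator $\Gamma$ is applied repeatedly until $V \approx \Gamma(V)$. The state and action spaces, rewards `r`, transition kernel `P`, and discount factor `beta` are all hypothetical placeholders, not taken from the text; the operator is implemented as the standard "reward plus discounted expected continuation value, maximized over actions".

```python
# Minimal value-iteration sketch for an infinite-horizon MDP.
# All primitives (rewards, transitions, discount factor) are hypothetical.
import numpy as np

n_states, n_actions = 3, 2
beta = 0.95                              # discount factor (assumed)
rng = np.random.default_rng(0)

# r[a, s]: immediate reward of action a in state s (made-up numbers)
r = rng.uniform(0.0, 1.0, size=(n_actions, n_states))
# P[a, s, s']: probability of moving from s to s' under action a
P = rng.uniform(size=(n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)        # normalize to valid distributions

def bellman_operator(V):
    """Apply Gamma: Q(a, s) = r(a, s) + beta * E[V(s') | s, a], then max over a."""
    Q = r + beta * P @ V                 # shape (n_actions, n_states)
    return Q.max(axis=0), Q.argmax(axis=0)

# Iterate V <- Gamma(V) until (approximately) reaching the fixed point V = Gamma(V).
V = np.zeros(n_states)
for _ in range(1000):
    V_new, policy = bellman_operator(V)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("Value function:", V)
print("Greedy stationary policy:", policy)
```

Because $\Gamma$ is a contraction when the discount factor is below one, the iteration converges to the unique fixed point regardless of the initial guess, and the maximizing actions at that fixed point give a stationary policy.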