Recursive Regression with Neural Networks: Approximating the HJI PDE Solution

Vicenç Rubies Royo, Claire Tomlin

Nov 04, 2016 (modified: Jan 13, 2017) ICLR 2017 conference submission
  • Abstract: Most machine learning applications using neural networks seek to approximate a function g(x) by minimizing a cost criterion. In the simplest case, if one has access to pairs of the form (x, y) where y = g(x), the problem can be framed as a regression problem. Beyond this family of problems, we find many cases where the unavailability of data pairs makes this approach infeasible. However, similar to what we find in the reinforcement learning literature, if we have some known properties of the function we seek to approximate, there is still hope to frame the problem as a regression problem. In this context, we present an algorithm that approximates the solution to a partial differential equation known as the Hamilton-Jacobi-Isaacs PDE and compare it to current state-of-the-art tools. This PDE, which is found in the fields of control theory and robotics, is of particular importance in safety-critical systems where guarantees of performance are a must.
  • TL;DR: A neural network that learns an approximation to a function by generating its own regression points
  • Conflicts: berkeley.edu
  • Keywords: Supervised Learning, Games, Theory
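The core idea described in the TL;DR, a model that generates its own regression targets from a known recursive property of the target function, can be illustrated with a minimal sketch. The snippet below is not the paper's HJI algorithm; it is a hypothetical toy example (fitted fixed-point iteration on a 1D recursion with a linear-in-features model instead of a neural network) that shows the self-generated-targets training loop: at each step, the current approximation is used to build the regression targets for the next fit.

```python
import numpy as np

# Toy recursion (assumed for illustration): V(x) = c(x) + gamma * V(f(x))
# with c(x) = x^2, f(x) = 0.5*x, gamma = 0.9.
# Analytic solution: V(x) = x^2 / (1 - gamma * 0.25).
gamma = 0.9
c = lambda x: x ** 2
f = lambda x: 0.5 * x

# Sample states once; features stand in for a neural network.
rng = np.random.default_rng(0)
xs = rng.uniform(-2.0, 2.0, 200)
phi = lambda x: np.stack([np.ones_like(x), x, x ** 2], axis=1)

w = np.zeros(3)  # model parameters
for _ in range(100):
    # Self-generated regression points: targets come from the
    # current approximation plugged into the known recursion.
    targets = c(xs) + gamma * (phi(f(xs)) @ w)
    # Refit the model to its own targets (least squares here).
    w, *_ = np.linalg.lstsq(phi(xs), targets, rcond=None)

V_true = lambda x: x ** 2 / (1.0 - gamma * 0.25)
max_err = np.max(np.abs(phi(xs) @ w - V_true(xs)))
```

Because the recursion is a contraction and the analytic solution lies in the feature span, the iterates converge to it; in the paper the same loop structure is applied with a neural network and targets derived from a discretized HJI PDE.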
