Recursive Regression with Neural Networks: Approximating the HJI PDE Solution

ICLR 2017 Invite to Workshop
Abstract: Most machine learning applications using neural networks seek to approximate some function g(x) by minimizing a cost criterion. In the simplest case, if one has access to pairs (x, y) where y = g(x), the problem can be framed as a regression problem. Beyond this family of problems, there are many cases where the unavailability of such data pairs makes this approach infeasible. However, as in the reinforcement learning literature, if we know some properties of the function we seek to approximate, there is still hope of framing the problem as regression. In this context, we present an algorithm that approximates the solution to a partial differential equation known as the Hamilton-Jacobi-Isaacs PDE and compare it to current state-of-the-art tools. This PDE, which arises in control theory and robotics, is of particular importance in safety-critical systems where performance guarantees are a must.
TL;DR: A neural network that learns an approximation to a function by generating its own regression points
Conflicts: berkeley.edu
Keywords: Supervised Learning, Games, Theory
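The abstract and TL;DR describe the core idea: since no (x, y) pairs are available, the network generates its own regression targets from known properties of the value function, here a discretized dynamic-programming backup of the HJI PDE. Below is a minimal, hypothetical sketch of that idea; the dynamics f, the terminal cost l, the discretized control/disturbance sets, the min/max ordering, and all hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch (assumed setup, not the authors' implementation) of
# "recursive regression": fit V_theta to targets generated from its own
# predictions via a one-step minimax backup of an assumed dynamical system.
import torch
import torch.nn as nn

dt = 0.01                                            # assumed time step

def f(x, u, d):
    # Placeholder dynamics x_dot = f(x, u, d); replace with the real system.
    return u - d

l = lambda x: x.norm(dim=-1, keepdim=True) - 1.0     # assumed terminal/target cost
U = torch.linspace(-1.0, 1.0, 5)                     # assumed discretized control set
D = torch.linspace(-0.5, 0.5, 5)                     # assumed discretized disturbance set

V = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                  nn.Linear(64, 64), nn.Tanh(),
                  nn.Linear(64, 1))                   # value-function approximator
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

for step in range(5000):
    x = 4.0 * torch.rand(256, 1) - 2.0                # sample states in an assumed domain
    with torch.no_grad():
        # Self-generated regression targets: backup of the network's own
        # prediction (min over controls, max over disturbances is one common
        # convention; the paper's convention may differ), combined with the
        # terminal cost to keep the running minimum over time.
        backup = torch.stack([
            torch.stack([V(x + dt * f(x, u, d)) for d in D]).max(dim=0).values
            for u in U
        ]).min(dim=0).values
        y = torch.minimum(l(x), backup)
    loss = nn.functional.mse_loss(V(x), y)            # plain regression onto the targets
    opt.zero_grad(); loss.backward(); opt.step()
```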