Abstract: In memory-constrained algorithms, access to the input is restricted to be read-only, and the number of extra variables that the algorithm can use is bounded. In this paper we introduce the compressed stack technique, a method for transforming algorithms whose main memory consumption takes the form of a stack into memory-constrained algorithms. Given an algorithm \(\mathcal {A}\) that runs in \(O(n)\) time using a stack of length \(\Theta (n)\), we can modify it so that it runs in \(O(n^2\log n/2^s)\) time using a workspace of \(O(s)\) variables (for any \(s\in o(\log n)\)) or in \(O(n^{1+1/\log p})\) time using \(O(p\log _p n)\) variables (for any \(2\le p\le n\)). We also show how the technique can be applied to solve various geometric problems, namely computing the convex hull of a simple polygon, a triangulation of a monotone polygon, the shortest path between two points inside a monotone polygon, a 1-dimensional pyramid approximation of a 1-dimensional vector, and the visibility profile of a point inside a simple polygon. Our approach improves or matches, up to an \(O(\log n)\) factor, the running time of the best-known results for these problems in constant-workspace models (when they exist), and gives a trade-off between the size of the workspace and the running time. To the best of our knowledge, this is the first general framework for obtaining memory-constrained algorithms.
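As a rough illustration (not taken from the paper), the sketch below shows the kind of stack-based scan the compressed stack technique targets: a single pass over a read-only input whose only significant memory use is a push/pop stack. The running example, upper-hull computation on points sorted by x-coordinate, and all function names are assumptions made for this sketch.

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def upper_hull(points):
    """Stack-based scan: pop while the new point breaks convexity, then push.

    Runs in O(n) time but may keep Theta(n) points on the stack; the
    compressed stack technique replaces such an explicit stack with a
    compressed representation, trading running time for workspace.
    """
    stack = []
    for p in points:                      # read-only, one-directional access
        while len(stack) >= 2 and cross(stack[-2], stack[-1], p) >= 0:
            stack.pop()                   # pop condition looks only at the top of the stack
        stack.append(p)                   # each input element is pushed at most once
    return stack

if __name__ == "__main__":
    pts = [(0, 0), (1, 2), (2, 1), (3, 3), (4, 0)]
    print(upper_hull(pts))                # [(0, 0), (1, 2), (3, 3), (4, 0)]
```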