Learning stable representations in a changing world with on-line t-SNE: proof of concept in the songbird

Stéphane Deny, Emily Mackevicius, Tatsuo Okubo, Gordon Berman, Joshua Shaevitz, Michale Fee

Feb 18, 2016 (modified: Feb 18, 2016) ICLR 2016 workshop submission
  • Abstract: Many real-world time series involve repeated patterns that evolve gradually, following slow underlying trends. The evolution of relevant features prevents conventional learning methods from extracting representations that separate distinct patterns while remaining consistent over the whole time series. Here, we present an unsupervised learning method for finding representations that are consistent over time and that separate patterns in non-stationary time series. We developed an on-line version of t-Distributed Stochastic Neighbor Embedding (t-SNE): we apply t-SNE iteratively on a running window, and for each displacement of the window, we seed the next embedding with the final positions of the points obtained in the previous embedding. This process maintains a consistent representation of slowly evolving patterns, while ensuring that the embedding at each step is optimally adapted to the current window. We apply this method to the song of the developing zebra finch and show that we can track multiple distinct syllables that slowly emerge over multiple days, from babbling to the adult song stage.
  • Conflicts: none.
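The windowed seeding scheme described in the abstract can be sketched with scikit-learn's `TSNE`, which accepts an explicit coordinate array through its `init` parameter. This is a minimal illustration, not the authors' implementation: the window size, step size, and the placement of points that newly enter the window are assumptions made here for the sketch.

```python
import numpy as np
from sklearn.manifold import TSNE


def online_tsne(X, window=200, step=50, seed=0):
    """Embed a time series with t-SNE on a running window, seeding each
    window's embedding with the final positions from the previous one.

    X : array of shape (n_timepoints, n_features)
    Returns a list of (window_start, embedding) pairs, where each
    embedding has shape (window, 2).
    """
    rng = np.random.default_rng(seed)
    embeddings = []
    init = None
    for start in range(0, len(X) - window + 1, step):
        win = X[start:start + window]
        if init is None:
            # First window: small random initialization, as in standard t-SNE.
            init = rng.normal(scale=1e-4, size=(window, 2))
        tsne = TSNE(n_components=2, init=init, random_state=seed)
        Y = tsne.fit_transform(win)
        embeddings.append((start, Y))
        # Seed the next window: points shared between consecutive windows
        # keep their final positions; the `step` points that newly enter
        # are placed near the last embedded point (an illustrative choice).
        init = np.vstack([
            Y[step:],
            Y[-1] + rng.normal(scale=1e-2, size=(step, 2)),
        ])
    return embeddings
```

Because overlapping points carry their coordinates forward, slowly drifting clusters stay in roughly the same region of the map from one window to the next, while each window is still optimized on its own data.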