Description of Contents 

1_example_episode_sighted: 
An example episode traversed by a sighted agent (equipped with a depth camera and an egomotion sensor). The left panel shows the egocentric RGB view (a red border indicates a collision); the right shows a top-down map (for visualization only; not available to the agent). Notice that the path traversed is close to the shortest possible path. 

2_example_episode_blind:
The same example for a blind agent (equipped with only an egomotion sensor). Both the RGB view and the top-down map are for visualization only. Notice the wall-following behavior and the backtracking. 

[3-5]_collision_neurson_visualization:
Left: egocentric RGB. Right: 2-dimensional t-SNE visualization of the agent’s internal representation for detecting collisions. We find 4 distinct, semantically meaningful clusters: one always fires for collisions, one for forward actions that did not result in a collision, and the other two correspond to turning actions. The black dot indicates the t-SNE embedding of the current frame. 
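The cluster structure described above can be illustrated with a toy sketch. Everything here is synthetic: the hidden-state size, the four behavior modes, and the cluster means are invented, and a fixed random 2-D projection is used as a lightweight stand-in for the t-SNE embedding shown in the videos.

```python
import random

random.seed(0)
D = 16  # assumed hidden-state size (illustrative)

# Four behavior modes, mirroring the four clusters in the videos
modes = ["collision", "forward_no_collision", "turn_left", "turn_right"]

# Each mode gets a distinct (synthetic) mean hidden state
means = {m: [random.gauss(0, 1) * 4 for _ in range(D)] for m in modes}

# Sample noisy hidden states around each mode's mean
states, labels = [], []
for m in modes:
    for _ in range(50):
        states.append([x + random.gauss(0, 0.3) for x in means[m]])
        labels.append(m)

# Fixed random projection to 2-D (stand-in for t-SNE)
proj = [[random.gauss(0, 1) for _ in range(2)] for _ in range(D)]

def embed(v):
    """Project a D-dimensional hidden state to 2-D."""
    return tuple(sum(v[i] * proj[i][k] for i in range(D)) for k in range(2))

emb = [embed(s) for s in states]
```

Plotting `emb` colored by `labels` would show four well-separated blobs, the analogue of the clusters visible in the videos.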

[6-8]_BlindAgent(S->T)_Probe(T->S):
Example episodes showing a blind agent navigating from source (blue) to target (red), followed by a probe (implanted with the final memory of the agent) navigating from target (red) back to source (blue). Notice that the agent (S->T) hugs walls, makes exploratory excursions, and backtracks, while the probe (T->S) takes more direct paths and cuts out excursions (e.g. it never enters some rooms that the agent does). Overall, the behavior of the probe is similar to that of the sighted agent in video 1.
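The "implanted memory" setup can be sketched with a minimal toy recurrent agent. The Elman-style update, hidden size, weights, and observations below are all invented for illustration; the only point is that the probe starts from the agent's final hidden state rather than from zeros.

```python
import math
import random

random.seed(1)
H = 8  # assumed hidden-state size (illustrative)

# Random weights for a toy Elman-style recurrent update (not the paper's model)
W = {"h": [[random.gauss(0, 0.5) for _ in range(H)] for _ in range(H)],
     "x": [random.gauss(0, 0.5) for _ in range(H)]}

def step(h, obs):
    """One recurrent update: h' = tanh(W_h h + W_x * obs)."""
    return [math.tanh(sum(W["h"][i][j] * h[j] for j in range(H)) + W["x"][i] * obs)
            for i in range(H)]

# Agent navigates S->T, accumulating memory in its hidden state
h = [0.0] * H
for obs in [0.1, -0.3, 0.7, 0.2]:  # toy egomotion observations
    h = step(h, obs)
agent_final_memory = h

# The probe is initialized ("implanted") with the agent's final memory,
# then navigates T->S with its own observations from there on
probe_h = list(agent_final_memory)
```

The implantation is just the last line: copying the agent's final recurrent state into the probe before its episode begins.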

Extra-occupancy-decoder-examples:
This folder contains 100 example outputs of the occupancy decoder. The ground truth is shown on the left and the prediction on the right. One behavior to notice: the decoder shifts its prediction when it believes the agent has a wall on its left or right. 
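One simple way to compare predictions like these against the ground truth is intersection-over-union on binary occupancy grids. The metric and the toy 3x3 grids below are illustrative assumptions, not the paper's evaluation protocol.

```python
def iou(gt, pred):
    """Intersection-over-union between two binary occupancy grids (lists of rows)."""
    inter = sum(g and p for rg, rp in zip(gt, pred) for g, p in zip(rg, rp))
    union = sum(g or p for rg, rp in zip(gt, pred) for g, p in zip(rg, rp))
    return inter / union if union else 1.0

# Toy grids: 1 = occupied, 0 = free
gt   = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
pred = [[1, 0, 0],
        [0, 1, 0],
        [0, 1, 0]]

# intersection = 2 cells, union = 4 cells -> IoU = 0.5
score = iou(gt, pred)
```

A score of 1.0 means the predicted grid matches the ground truth exactly; lower scores reflect missed or spurious occupied cells.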