From Generalization to Inner Activations

One side effect of deep learning models growing ever larger is the sheer volume of intermediate data they generate, data that is then often ignored. Researchers have used this information to build explainability methods that attribute importance relative to a specific input, but the intermediate activations themselves are frequently disregarded and underutilized. Although this data can become unwieldy in size as networks grow, the ability to monitor specific layers is a valuable tool: it provides insight into how the model is learning as well as broad generalities about how it behaves overall (e.g., where specific features begin to be extracted, or where noise is filtered out).
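
As a concrete illustration of what "monitoring specific layers" can look like in practice, here is a minimal sketch that captures intermediate activations with a PyTorch forward hook. The model (ResNet-18), the chosen layer (`layer2`), and the random input are illustrative assumptions, not a specific method from this post.

```python
import torch
import torchvision.models as models

# Illustrative model choice; any nn.Module works the same way.
model = models.resnet18(weights=None)
model.eval()

activations = {}

def save_activation(name):
    # Return a hook that stores the layer's output tensor under `name`.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register the hook on an intermediate layer we want to monitor.
model.layer2.register_forward_hook(save_activation("layer2"))

# A single forward pass populates `activations` as a side effect.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    model(x)

print(activations["layer2"].shape)  # e.g. torch.Size([1, 128, 28, 28])
```

Captured tensors like these are the raw material for the kinds of questions above: comparing them across layers or training checkpoints hints at where features emerge and where noise is suppressed.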
