# Do Wide And Deep Networks Learn The Same Things? Uncovering How Neural Network Representations Vary With Width And Depth
###### tags: `papers`
MAIN IDEAS:
- There is a limited understanding of how depth and width affect the representations a network learns
- How does varying depth and width affect model hidden representations?
- Larger-capacity models develop a characteristic "block structure" in their hidden representations: a contiguous range of layers whose representations are highly similar to one another
- The block structure emerges when model capacity is large relative to the size of the training set
- It arises from the underlying layers preserving and propagating the dominant principal component of their representations
- Representations outside the block structure are similar across architectures, but the block structure itself is unique to each model
- Even when overall accuracy is similar, wide and deep models exhibit distinctive error patterns and variations across classes.
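The layer-to-layer similarity comparisons behind these findings use linear centered kernel alignment (CKA). Below is a minimal NumPy sketch of linear CKA between two sets of layer activations; the function name `linear_cka` and the shapes are illustrative, not the paper's code:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA similarity between two representation matrices.

    X: (n_examples, d1) activations from one layer
    Y: (n_examples, d2) activations from another layer
    Returns a scalar in [0, 1]; 1 means identical up to rotation/scaling.
    """
    # Center each feature column; CKA is defined on centered features
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-based formulation for the linear kernel
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)
```

The block structure appears when CKA is computed between every pair of layers in one model and plotted as a heatmap: a large bright square indicates a run of layers with near-identical representations. Note that linear CKA is invariant to orthogonal transformations and isotropic scaling of the features, which is what makes it suitable for comparing layers of different widths.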