# Complex Data Structures / Tour de Cache
In this lab, you will apply caching optimizations to the perceptron-based ML model you studied in the previous lab.
This lab will be completed on your own.
Check Gradescope for the due date(s).
## Grading
Your grade for this lab will be based on your completion of the data collection steps described in this document and the completed worksheets.
| Part                       | Value |
|----------------------------|-------|
| Optimizations | 70% |
| Worksheet 1 | 15% |
| Worksheet 2 | 15% |
## The Leaderboard
There is a leaderboard set up for this lab. It records the speedup of your code vs. the starter code for neural net training. You can use it to gauge your progress.
For this lab, the leaderboard does not impact your grade.
## Skills to Learn and Practice
1. Gain intuition for how code characteristics affect processor behavior.
2. Gain intuition for the memory behavior of complex (non-matrix) data structures.
3. Explore the memory behavior of Python vs. C++.
## Software You Will Need
1. A computer with Docker installed (either the cloud Docker container via ssh, or your own laptop). See the intro lab for details.
2. The repo for the GitHub Classroom assignment for this lab. Find the link on the course home page: https://github.com/CSE141pp/Home/.
3. A PDF annotator/editor to fill out `worksheet.pdf`. You'll submit this via *a separate assignment* in Gradescope. We **will not** look at the version in your repo.
4. Moneta - This is currently available on the pod that you obtain using launch-142.
## Tasks to Perform
### Inspect The Code
## Turn in Your Work
Submit your code repo via Gradescope. You can submit the code portion
as many times as you like.
The grade you see in the autograder output is out of the 80% that is
based purely on your metrics. The leaderboard part of the grade will be
incorporated later.