# Welcome to Assignment 1: Setup and Warm-up

In this assignment we are going to go through all the setup and background we need for this course! We've divided it into 3 sections as follows:

1. Setting up our Python environment.
2. Reviewing math that we will use throughout this course.
3. Getting comfortable with Python, which we will use for all assignments and the final project.

:::info
This homework is due by **Tuesday, September 23rd, 2025 at 6:00 PM EST**.
:::

## 1. Environment Setup

### Getting started

Please click <ins>[here](https://classroom.github.com/a/ct6UTxFu)</ins> to get the stencil code. Reference this <ins>[guide](https://hackmd.io/gGOpcqoeTx-BOvLXQWRgQg)</ins> for more information about GitHub and GitHub Classroom.

### Roadmap

In order to complete (programming) assignments for this course, you will need a way to code, run, and debug your own Python code. While you are always free to use department machines for this (they have a pre-installed version of the course environment that every assignment has been tested against), you are also free to work on your own machines. Below we give you some information that is helpful for either of these situations.

### A. Configuring Environment

#### Developing Locally

In order to set up your virtual environment for this course, we _highly recommend_ (and only formally support) the use of Anaconda to create and manage your Python environment. Once you have cloned the GitHub Classroom repository for this assignment, you can do the following to set up your virtual environment:

1. Download the **Anaconda** installer from [here](https://www.anaconda.com/download#downloads), and install it on your computer. We recommend using the Distribution Installer for the correct system (Windows / (Intel) Mac / Mac M1).

   :::info
   **Note:** If you have an existing Anaconda or Miniconda installation (such as from CS200), then you don't need to reinstall, and can just use that! You can tell if you have an existing install if the command `conda --version` is recognized.
   :::

   :::danger
   **Windows**: When installing using the graphical installer, **be sure to check the box which adds `conda` to your `PATH`**.
   :::

2. Open a new terminal window and navigate to the root of the cloned assignment in a terminal (such as the one in VSCode) using `cd` and `ls`, and run `./env_setup/Other/conda_create.sh`. This should set up a virtual environment named `csci2470` on your computer. **If you have an Apple M1, this script will be different.** (See below.)

   :::info
   You may need to restart your terminal after installing Anaconda in order for this to work.
   :::

   :::warning
   **Note:** This might be slightly different depending on your platform:
   - **Apple M1**: We provide a slightly different script, `./env_setup/Apple_Silicon/conda_create_silicon.sh`, for those who have Apple silicon.
   - **Windows and Others**: If you are using Windows PowerShell, then you can just run `./env_setup/Other/conda_create.sh` (forward slashes), but if you are using *Command Prompt*, then you need to run `.\env_setup\Other\conda_create.sh` (backslashes).

   **Some users have also experienced problems running the `*.sh` files entirely.** If you are getting a permissions issue, try running `chmod a+x` on the script from inside your repo. If that does not work, you can just open the `conda_create.sh` script in a text editor (such as VSCode) and run each line individually in your terminal.
   :::

3. Run `conda activate csci2470`. **You will need to do this in every shell where you want to use the virtual environment.**

:::warning
If the above procedure doesn't work for you (although we **highly recommend trying to troubleshoot that first**), here is another method that does not rely on `conda` commands:
:::

:::spoiler Alternative setup without `conda`
1. **Install Python 3.11** (or Python 3.9; we have not found any differences in functionality between them for our projects).
2. **Install the following packages** in either a virtual environment or your main Python environment:
   - `ipython==8.8.0`
   - `matplotlib==3.5.3`
   - `numpy==1.23.5`
   - `Pillow==9.4.0`
   - `scipy==1.9.3`
   - `tensorflow==2.11.0`
   - `tqdm==4.64.1`

- We suggest you create a virtual environment. You can do so by running the following commands:
  1. Create a folder where you plan to keep all your homework assignments.
  2. In that folder, run `python -m venv cs2470`. This will create a new virtual environment called `cs2470`.
  3. Activate your virtual environment by running `cs2470/Scripts/activate` on Windows machines or `source cs2470/bin/activate` on Mac and Linux machines. Do this before starting any homework assignment.
  4. You can deactivate the virtual environment by typing `deactivate`.
- You can install individual packages using `pip` commands (e.g. `pip install ipython==8.8.0`).
- You can install a group of packages by pasting them into a `requirements.txt` file and running `pip install -r requirements.txt`.
:::

Once this is complete, you should have a local environment to use for the course!
#### Department Machines

:::info
**Note:** Sometimes even if you set up your local environment correctly, you may experience unexpected bugs and errors that are unique to your local setup. To prevent this from hindering your ability to complete assignments, we **highly recommend** that you familiarize yourself with the department machines, even if you expect to usually be working locally.
:::

Department machines serve as a common, uniform way to work on and debug assignments. There are a variety of ways in which you can use department machines:

1. **In Person.** If you are in the CIT, you can (almost) always head into the Sunlab/[ETC] and work on a department machine.
2. **FastX**. FastX allows you to VNC into a department machine from your own computer, from anywhere! A detailed guide to getting FastX working on your own computer can be found [here](https://cs.brown.edu/about/system/connecting/fastx/).
3. **SSH**. The department machines can also be accessed by SSH (Secure Shell) from anywhere, which should allow you to perform command line activities (cloning repositories, running assignment code). You can check out an SSH guide [here](https://cs.brown.edu/about/system/connecting/ssh/).

When using the department machines, you can activate the course virtual environment (which we have already installed) using:

```
source /course/cs2470/cs2470_env/bin/activate
```

From here, you should be able to clone the repository (see a GitHub guide for more information on using Git via the command line) and work on your assignment.

:::info
**Note**: Python files using `tensorflow` may require a little more time on startup to run on department machines (likely because it is pulling files from the department filesystem), but they should all run nonetheless.
:::

### B. Test your environment

#### What is an environment?

Python packages, or libraries, are external sets of code written by other developers which might prove really helpful! (Imagine coding how to draw a graph in Python every single time.) However, different classes, tasks, and even projects might require different sets of Python packages. We can manage these as different virtual environments, each of which has its own set of packages installed.

#### Conda Specifics

If you are using `conda`, you might notice the `(base)` prefix in your terminal. This signifies that you're in the default (hence `(base)`) environment. To access CSCI2470's virtual environment, you can use `conda activate csci2470`. You should now see the `(csci2470)` prefix in your terminal! To return to the base environment, you can use `conda deactivate`.
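Once the environment is activated, a quick way to confirm that everything is wired up is to import the core packages and run a tiny operation. This is just a sanity-check sketch (it is not part of the stencil code), and the exact version numbers you see may differ slightly from the pins listed above:

```python
# A quick sanity check: run this with the csci2470 environment active.
import numpy as np
import tensorflow as tf

print("NumPy version:     ", np.__version__)   # expect something like 1.23.x
print("TensorFlow version:", tf.__version__)   # expect something like 2.11.x

# A tiny end-to-end operation: if this prints a 2x2 matrix of 2s,
# the core packages are importable and working.
x = tf.ones((2, 2))
print((x + tf.ones((2, 2))).numpy())
```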
## 2. Math Review

### A. Matrix Multiplication

1. Given two column vectors $\mathbf{a} \in \mathbb{R}^{m \times 1}, \; \mathbf{b} \in \mathbb{R}^{n \times 1}$, the _outer product_ is
    $$\mathbf{a} \otimes \mathbf{b} = \mathbf{a}\mathbf{b}^T = \begin{bmatrix}a_0 \\ \vdots \\ a_{m-1}\end{bmatrix} \begin{bmatrix}b_0 & \cdots & b_{n-1}\end{bmatrix} = \begin{bmatrix} a_0 \mathbf{b}^T\\ \vdots \\ a_{m-1} \mathbf{b}^T\\ \end{bmatrix} = \begin{bmatrix} a_0 b_0 & \cdots & a_0 b_{n-1}\\ \vdots & \ddots & \vdots \\ a_{m-1} b_0 & \cdots & a_{m-1} b_{n-1}\\ \end{bmatrix} \in \mathbb{R}^{m\times n}$$
2. Given two column vectors $\mathbf{a}$ and $\mathbf{b}$, both in $\mathbb{R}^{r\times 1}$, the _inner product_ (or the _dot product_) is defined as:
    $$ \mathbf{a} \cdot \mathbf{b} = \mathbf{a}^T\mathbf{b} = \begin{bmatrix} a_0\ \cdots\ a_{r-1} \end{bmatrix} \begin{bmatrix}b_0 \\ \vdots \\ b_{r-1}\end{bmatrix} = \sum_{i=0}^{r-1} a_i b_i $$
    where $\mathbf{a}^T$ is the _transpose_ of a vector, which converts between column and row vector alignment. The same idea extends to matrices as well.
3. Given a matrix $\mathbf{M} \in \mathbb{R}^{r\times c}$ and a vector $\mathbf{x}\in \mathbb{R}^c$, let $\mathbf{M}_i$ be the $i$-th row of $\mathbf{M}$. The matrix-vector product is defined as:
    $$\mathbf{Mx} \ =\ \mathbf{M}\begin{bmatrix} x_0\\ \vdots \\ x_{c-1}\\ \end{bmatrix} \ =\ \begin{bmatrix} \mathbf{M}_0\\ \vdots \\ \mathbf{M}_{r-1}\\ \end{bmatrix}\mathbf{x} \ =\ \begin{bmatrix} \ \mathbf{M}_0 \cdot \mathbf{x}\ \\ \vdots \\ \ \mathbf{M}_{r-1} \cdot \mathbf{x}\ \\ \end{bmatrix} $$
    Further, given a matrix $\mathbf{N} \in \mathbb{R}^{c\times m}$, and writing $\mathbf{N}^T_j$ for the $j$-th row of $\mathbf{N}^T$ (equivalently, the $j$-th column of $\mathbf{N}$), we define
    $$ \mathbf{MN} = \begin{bmatrix} \mathbf{M}_0\cdot \mathbf{N}^T_0 & \cdots & \mathbf{M}_0\cdot \mathbf{N}^T_{m-1} \\ \vdots & \ddots & \vdots \\ \mathbf{M}_{r-1}\cdot \mathbf{N}^T_0 & \cdots & \mathbf{M}_{r-1}\cdot \mathbf{N}^T_{m-1}\end{bmatrix} $$
    and we have $\mathbf{MN} \in \mathbb{R}^{r\times m}$.
4. $\mathbf{M} \in \mathbb{R}^{r\times c}$ implies that the function $f(x) = \mathbf{Mx}$ can map $\mathbb{R}^{c\times 1} \to \mathbb{R}^{r\times 1}$.
5. $\mathbf{M_1} \in \mathbb{R}^{d\times c}$ and $\mathbf{M_2} \in \mathbb{R}^{r\times d}$ implies $f(x) = \mathbf{M_2M_1x}$ can map $\mathbb{R}^c \to \mathbb{R}^r$.

:::info
Given this and your own knowledge, try solving these:
- __Prove that $(2) + (3)$ implies $(4)$__. In other words, use your understanding of the inner and matrix-vector products to explain why $(4)$ has to be true.
- __Prove that $(4)$ implies $(5)$__.
:::
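These definitions map directly onto the NumPy operations you'll practice in Section 3. If it helps, here is a small, optional illustration (the arrays below are made up and not part of the assignment):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])    # think of a as a vector in R^3
b = np.array([4.0, 5.0])         # think of b as a vector in R^2

outer = np.outer(a, b)           # shape (3, 2): entry (i, j) is a_i * b_j
inner = np.dot(a, a)             # scalar: sum_i a_i * a_i = 14.0

M = np.arange(6).reshape(2, 3)   # M in R^{2x3}
x = np.array([1.0, 0.0, -1.0])   # x in R^3
Mx = M @ x                       # shape (2,): entry i is the dot product M_i . x

print(outer.shape, inner, Mx)    # (3, 2) 14.0 [-2. -2.]
```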
### B. Differentiation

Recall that differentiation is finding the rate of change of one variable relative to another variable. Some nice reminders:
$$
\begin{align}
\frac{df(x)}{dx} & \text{ is how $f(x)$ changes with respect to $x$}.\\
\frac{\partial f(x,y)}{\partial x} & \text{ is how $f(x,y)$ changes with respect to $x$ (holding the other variables fixed)}.\\
\frac{dz}{dx} &= \frac{dy}{dx} \cdot \frac{dz}{dy} \text{ via the chain rule, if these factors are easier to compute}.
\end{align}
$$

:::info
Given this and your own knowledge:

Use (and internalize) the log properties to solve the following:
$$\frac{\partial}{\partial y}\ln(x^5/y^2)$$
The log properties are as follows:
$$\log(x^p) = p\log(x)$$
$$\log(xy) = \log(x) + \log(y)$$
$$\log(x/y) = \log(x) - \log(y)$$

Solve the following partial for a valid $j$ and all valid $i$:
$$\frac{\partial}{\partial x_j}\ln\bigg[\sum_i x_iy_i\bigg]$$
Hint: Consider using the chain rule. Let $g_1(x) = \sum_i x_iy_i$...
:::
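If you'd like to sanity-check the answers you derive by hand in the box above, a central-difference approximation is a handy (and optional) trick: evaluate your analytic formula at some test point and compare it to the numerical estimate. The sketch below is not part of the stencil; the function name `f` and the test point are just placeholders for the first expression in the box:

```python
import numpy as np

def f(x, y):
    # the first expression from the exercise above
    return np.log(x**5 / y**2)

# Central-difference approximation of the partial derivative with respect to y
# at an arbitrary test point. Compare this number to your analytic formula
# evaluated at the same (x, y).
x0, y0, h = 2.0, 3.0, 1e-5
approx = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
print(approx)
```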
### C. Jacobians

Now, the previous examples focused on scalar functions (functions that output a single number), but many functions output vectors. For example, consider the function:
$$ f(x,y)= \begin{bmatrix} x^2+y \\ 2xy \\ x-y^2 \end{bmatrix}$$
This function takes two inputs $(x,y)$ and produces three outputs. When we want to understand how this vector function changes with respect to its inputs, we organize all the partial derivatives into a matrix called the **Jacobian**. For a function mapping $\mathbb{R}^n \to \mathbb{R}^m$, the Jacobian is **always** an $m \times n$ matrix: it has as many rows as outputs and as many columns as inputs.

The Jacobian matrix $\mathbf{J}$ has the form:
$$\mathbf{J} = \frac{\partial \mathbf{f}}{\partial (x,y)} = \begin{bmatrix} \frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y} \\ \frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y} \\ \frac{\partial f_3}{\partial x} & \frac{\partial f_3}{\partial y} \end{bmatrix}$$
Each **row** corresponds to one output component, and each **column** corresponds to one input variable. The entry in row $i$, column $j$ tells us how output $i$ changes with respect to input $j$.

Returning to our example above, we can define our component functions as:
- $f_1(x,y) = x^2 + y$
- $f_2(x,y) = 2xy$
- $f_3(x,y) = x - y^2$

Therefore, the complete Jacobian is:
$$\mathbf{J} = \begin{bmatrix} 2x & 1 \\ 2y & 2x \\ 1 & -2y \end{bmatrix}$$
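To make the row/column convention concrete, the optional sketch below numerically approximates the Jacobian of the worked example above and compares it against the analytic matrix we just derived. It is not part of the stencil code; the helper names (`analytic_J`, `numerical_J`) and the test point are just for illustration:

```python
import numpy as np

def f(v):
    x, y = v
    return np.array([x**2 + y, 2*x*y, x - y**2])

def analytic_J(v):
    # the Jacobian derived above
    x, y = v
    return np.array([[2*x, 1.0],
                     [2*y, 2*x],
                     [1.0, -2*y]])

def numerical_J(func, v, h=1e-6):
    # Build the Jacobian one column at a time: column j holds the partial
    # derivatives of every output with respect to input j (central differences).
    v = np.asarray(v, dtype=float)
    cols = []
    for j in range(v.size):
        e = np.zeros_like(v)
        e[j] = h
        cols.append((func(v + e) - func(v - e)) / (2 * h))
    return np.stack(cols, axis=1)

v0 = np.array([1.5, -0.5])
print(analytic_J(v0))
print(numerical_J(f, v0))   # should match to several decimal places
```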
:::info
Given this and your knowledge, consider the following vector functions:
$$\mathbf{f}(s,t) = \begin{bmatrix} s^2t \\ s+e^t \\ \ln(1+s^2) \end{bmatrix}$$
$$\mathbf{g}(u,v,w) = \begin{bmatrix} e^{u} + \ln(1+e^{v}) \\ w(u^2+1) \\ \frac{1}{1+e^{-v}} + w^3 \end{bmatrix}$$
$$\mathbf{p}(a,b) = \begin{bmatrix} a^2 + b^2 \\ 2ab \end{bmatrix}$$

**Part A: Jacobian Computation and Analysis**
1. Compute the complete Jacobian matrix $\mathbf{J}_g$ for function $\mathbf{g}$.
2. Evaluate $\mathbf{J}_g$ at the point $(u,v,w) = (1,0,1)$. What do you notice about the values in the first column versus the other columns?

**Part B: Function Composition and Chain Rule**
1. Consider the potential composition $\mathbf{p}(\mathbf{g}(\mathbf{f}(s,t)))$. Is this composition valid? If not, explain what's wrong in terms of dimensional compatibility. If it is valid, what would be the dimensions of the resulting Jacobian when applying the chain rule?
2. Analyze the composition $\mathbf{g}(\mathbf{f}(s,t))$:
    - Write out the result of the composition explicitly as a function of $(s,t)$
    - Find $\frac{\partial}{\partial s}\mathbf{g}(\mathbf{f}(s,t))$ using direct differentiation
    - Now compute the same derivative using the chain rule: $\mathbf{J}_g(\mathbf{f}(s,t)) \cdot \mathbf{J}_f(s,t)$
    - Verify that both methods yield identical results
:::

A special class of vector functions applies the same scalar function to each component independently. An element-wise function $\mathbf{h}: \mathbb{R}^n \to \mathbb{R}^n$ has the form:
$$\mathbf{h}(\mathbf{x}) = \begin{bmatrix} h(x_1) \\ h(x_2) \\ \vdots \\ h(x_n) \end{bmatrix}$$
Since each output depends only on its corresponding input, the Jacobian is **diagonal**:
$$\mathbf{J}_h = \begin{bmatrix} h'(x_1) & 0 & \cdots & 0 \\ 0 & h'(x_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & h'(x_n) \end{bmatrix}$$

:::info
Given this and your knowledge, attempt the following:
1. Answer the following questions where $\mathbf{r}(\mathbf{x}) = \max(0, \mathbf{x})$:
    a) Find the scalar derivative $r'(x)$.
    b) Write the Jacobian $\mathbf{J}_{\mathbf{r}}$ for $\mathbf{x} = [x_1, x_2, x_3]^T$.
2. Compare the following functions of the input $\mathbf{x}$:
    - **X (Input)**: $\mathbf{x} = [x_1, x_2]^T$
    - **A (Element-wise):** $\mathbf{f}_A(\mathbf{x}) = \begin{bmatrix} x_1^2 \\ x_2^2 \end{bmatrix}$
    - **B (Non-element-wise):** $\mathbf{f}_B(\mathbf{x}) = \begin{bmatrix} x_1^2 + x_2 \\ x_1 + x_2^2 \end{bmatrix}$

    Compute both Jacobians $\mathbf{J}_A$ and $\mathbf{J}_B$. Which one is diagonal and why?
3. When computing the product $\mathbf{J}\mathbf{v}$ for some vector $\mathbf{v}$, how many multiplications are needed when $\mathbf{J}$ is diagonal versus when it's a full $n \times n$ matrix? Why does this matter for large $n$?
:::
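If the diagonal structure is hard to picture, here is a tiny concrete illustration. It deliberately uses a cube function rather than any of the exercise functions, and it is optional (not part of the stencil):

```python
import numpy as np

# An element-wise function: h(x) = x**3 applied componentwise.
# Its Jacobian is diagonal with entries h'(x_i) = 3 * x_i**2.
x = np.array([1.0, -2.0, 0.5])

J_h = np.diag(3 * x**2)   # diagonal Jacobian, shape (3, 3)
print(J_h)

# Because only the diagonal is nonzero, multiplying J_h by a vector v is the
# same as an element-wise product with the diagonal entries.
v = np.array([1.0, 1.0, 1.0])
print(J_h @ v)            # full matrix-vector product
print(3 * x**2 * v)       # element-wise shortcut, same result
```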
### D. Probability

#### Fundamental Concepts

**Random Variables**: A random variable $X$ is a function that assigns numerical values to the outcomes of a random experiment. We write $X \sim P(x)$ to indicate that $X$ follows probability distribution $P$.

**Independence**: Events $A$ and $B$ are independent if $P(A \cap B) = P(A)P(B)$. For random variables, $X$ and $Y$ are independent if knowing the value of $X$ tells us nothing about the probability distribution of $Y$.

**Conditional Probability**: $P(A|B) = \frac{P(A \cap B)}{P(B)}$ represents the probability of $A$ given that $B$ has occurred.

:::info
Given this and your own knowledge:
- You're trying to train a cat/dog classifier which takes in an image $x$ from our dataset $X$ and outputs a prediction $\hat{y}\in \{0, 1\}$ (0 if the image is a cat, 1 if it is a dog). Let $\hat{Y}(x)$ be a random variable that represents our classifier. Suppose that the dataset of cats and dogs is balanced (i.e. there are an equal number of cat and dog examples). Your friend argues that since the dataset is balanced, the classifier should ignore the input data and produce each prediction with equal probability:
$$\mathbb{P}[\hat{Y}=0] = \mathbb{P}[\hat{Y}=1]$$
    - If your friend's assumption were correct, what value of $\mathbb{P}[\hat{Y}=0]=\mathbb{P}[\hat{Y}=1]$ would make this a valid probability distribution?
    - Is your friend's assumption correct? Why or why not?
:::

#### Expectation and Variance: The Computational Foundation

The **expectation** (or mean) of a random variable $X$ is:
$$\mathbb{E}[X] = \begin{cases} \sum_{x} x \cdot P(X = x) & \text{if } X \text{ is discrete} \\ \int_{-\infty}^{\infty} x \cdot p(x) \, dx & \text{if } X \text{ is continuous} \end{cases}$$

The **variance** measures spread around the mean:
$$\mathbb{V}[X] = \mathbb{E}[(X - \mathbb{E}[X])^2] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2$$

**Key Properties** (these will be crucial for optimization algorithms):
- **Linearity of expectation**: $\mathbb{E}[aX + bY] = a\mathbb{E}[X] + b\mathbb{E}[Y]$ (even if $X, Y$ are not independent!)
- **Variance of scaled variables**: $\mathbb{V}[aX] = a^2\mathbb{V}[X]$
- **Independence and variance**: If $X, Y$ are independent, then $\mathbb{V}[X + Y] = \mathbb{V}[X] + \mathbb{V}[Y]$

**Normal Distribution**: $X \sim \mathcal{N}(\mu, \sigma^2)$ has the familiar bell curve shape.
- **Density**: $p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
- **Expectation**: $\mathbb{E}[X] = \mu$
- **Variance**: $\mathbb{V}[X] = \sigma^2$

**Standard Normal**: $\mathcal{N}(0, 1)$ is the normal distribution with mean 0 and variance 1.

**Key Transformation**: If $X \sim \mathcal{N}(\mu, \sigma^2)$, then $\frac{X - \mu}{\sigma} \sim \mathcal{N}(0, 1)$.

:::info
Given this and your knowledge, attempt the following:

**Part A: Basic Computations**
1. A random variable $Z$ takes values $\{-2, 0, 2\}$ with probabilities $\{0.3, 0.4, 0.3\}$.
    - Compute $\mathbb{E}[Z]$ and $\mathbb{V}[Z]$
    - What is $\mathbb{E}[Z^2]$? Verify that $\mathbb{V}[Z] = \mathbb{E}[Z^2] - (\mathbb{E}[Z])^2$
2. If $X \sim \mathcal{N}(2, 9)$ and $Y \sim \mathcal{N}(-1, 4)$ are independent:
    - What is the distribution of $X + Y$?
    - What is the distribution of $3X - 2Y + 5$?

**Part B: Matrix-Vector Products with Random Matrices**

3. Consider a $3 \times 2$ matrix $\mathbf{A}$ where each entry $A_{ij}$ is independent with $\mathbb{E}[A_{ij}] = 0$ and $\mathbb{V}[A_{ij}] = 1$.
    - For the deterministic vector $\mathbf{v} = [1, -1]^T$, compute $\mathbb{E}[\mathbf{A}\mathbf{v}]$
    - What is $\mathbb{V}[(\mathbf{A}\mathbf{v})_1]$? (Note: $(\mathbf{A}\mathbf{v})_1 = A_{11} \cdot 1 + A_{12} \cdot (-1)$)
    - More generally, if $\mathbf{v}$ is any vector with $\|\mathbf{v}\|^2 = c$, what is $\mathbb{V}[(\mathbf{A}\mathbf{v})_i]$ for any component $i$?

**Part C: Optimization from Probabilistic Assumptions**

4. Suppose you observe noisy measurements: $y_i = 2x_i + 3 + \epsilon_i$ where each $\epsilon_i \sim \mathcal{N}(0, 1)$ independently.
    - Given data points $(x_1, y_1) = (1, 4.8)$, $(x_2, y_2) = (2, 7.2)$, $(x_3, y_3) = (3, 9.1)$, what's the probability density of observing $y_1 = 4.8$ given $x_1 = 1$?
    - If you want to find the best estimates $\hat{a}$ and $\hat{b}$ for the model $y = ax + b$, explain why minimizing $\sum_{i=1}^3 (y_i - ax_i - b)^2$ makes sense from a probabilistic perspective.

**Part D: Averaging Independent Quantities**

5. You measure the same quantity 16 times, getting independent measurements $M_1, M_2, \ldots, M_{16}$ where each $M_i$ has $\mathbb{E}[M_i] = \mu$ (the true value) and $\mathbb{V}[M_i] = \sigma^2$.
    - What is $\mathbb{E}[\bar{M}]$ where $\bar{M} = \frac{1}{16}\sum_{i=1}^{16} M_i$?
    - What is $\mathbb{V}[\bar{M}]$?
    - How many measurements would you need to make the variance of your average 4 times smaller?
:::
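These expectation and variance properties are easy to check empirically with a quick simulation. The optional sketch below uses made-up distributions and coefficients (deliberately not the ones from the exercises) and NumPy's random generator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Simulated draws from two independent normals (note: `scale` is the std dev).
X = rng.normal(loc=1.0, scale=2.0, size=n)   # X ~ N(1, 4)
Y = rng.normal(loc=3.0, scale=1.0, size=n)   # Y ~ N(3, 1), independent of X

# Linearity of expectation: E[2X + 5Y] = 2*E[X] + 5*E[Y] = 17
print(np.mean(2*X + 5*Y), "vs", 2*1.0 + 5*3.0)

# Variance of a scaled variable: V[2X] = 2^2 * V[X] = 16
print(np.var(2*X), "vs", 4 * 4.0)
```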
## 3. Python Review

### A. Basic Python Review

For an overview of Python syntax and common Python uses, we recommend checking out <ins>[this](https://www.w3schools.com/python)</ins> Python tutorial for a refresher.

### B. Advanced Python

#### I. Dunder Methods

##### Constructors

:::info
Write up a Python class for a `Square` whose constructor (the `__init__` method) takes in a string `name` and a numeric `length` field. You can use this code to verify functionality:
```python
square1 = Square("square1", 5)
square1.name == "square1"
square1.length == 5
```
:::

##### Calling

We can give a class some special interaction patterns with other dunder methods. If we give a class, say a `Multiplier`, a `__call__` method, then we can specify what happens when we call an instance of the class! For example:
```python
multer = Multiplier()
multer(5, 10)
```
This is effectively the same as calling:
```python
multer.__call__(5, 10)
```

:::info
Write up a Python class `Multiplier` which, when called on two integers, returns the product of the two integers. Use this code to check:
```python
multer = Multiplier()
multer(5, 10) == 50
```
:::

#### II. Classes and OOP

:::warning
Throughout this course, we will be working with Python extensively. Though you won't need to be an OOP expert, we do expect some basics which help deep learning libraries work in organized and efficient ways.
:::

##### Objects vs Classes

OOP (Object Oriented Programming) strongly focuses on objects, which encompass any value, variable, *etc*. It's really any "tangible" thing.
```python
thing1 = "i am an object!"
thing2 = 1234567
...
```
However, we might find it useful to organize these things into classes of things, with properties like instance variables or methods shared across all things that are members of the same class. You might be familiar with Python's `str`, `int`, and `float` classes, for example. When working with Python, you'll almost always be working with objects which are instances of classes. You can check the type of a variable with the built-in `type` function!

##### Inheritance

If classes are sets and objects are set elements, then we also need subsets and supersets! In Python, we can make a "child" class inherit all the methods from a "parent" class like so:
```python
class ChildClass(ParentClass1):
    ...
```

##### Class-level vs Instance Variables

Consider the instance of our `Square` from earlier.
```python
square1 = Square("square1", 5)
square1.name == "square1"
square1.length == 5
```

:::success
Notice that the `name` and `length` variables are instance variables!
:::

In contrast with instance variables, class-level variables are shared across all instances of the class. Say in the declaration of `Square`, we included this:
```python
class Square:
    shape = "square"

    def __init__(self, name, length):
        ...
```
Then, if we checked `square1.shape` or `square2.shape`, you'll notice that they both return the string `"square"`. This also applies to checking the class directly: `Square.shape` will also return `"square"`! `shape` is a class-level variable because it is shared across the whole class!

:::warning
Class variables can be redefined from any instance (*e.g.* `square1.shape = "rectangle"`) or directly through the class (*e.g.* `Square.shape = "rectangle"`), but we **strongly** recommend doing it through the class directly.
:::
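To see the difference in action, here is a small toy class (hypothetical, and not part of the assignment) showing how a class-level variable is shared while instance variables are not:

```python
class Dog:
    # class-level variable: shared by every Dog
    species = "canine"

    def __init__(self, name):
        # instance variable: each Dog gets its own
        self.name = name

rex = Dog("Rex")
fido = Dog("Fido")

print(rex.name, fido.name)        # Rex Fido      (per-instance)
print(rex.species, fido.species)  # canine canine (shared)

Dog.species = "good dog"          # change it on the class...
print(rex.species, fido.species)  # good dog good dog (...and every instance sees it)
```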
##### Putting it Together

:::info
Task 1. __Make a parent class named `Logger`. Above the constructor, include this line:__
```python
logging_tape: LoggingTape | None = None
```
This will be our log of things that happen! The `: LoggingTape | None` annotation indicates that the variable `logging_tape` will either be of type `LoggingTape` (which we'll make in just a second), or `None`.
:::

#### IV. Context Managers

Context managers in Python are a great tool for things that should only exist temporarily. For instance, you have probably seen
```python
with open('file.txt', 'r') as f:
    ...  # do things with the file f
# f is now closed, do other things!
```
You'll notice that `f` is only properly defined as being the file opened with read permissions while in the "context" of the `with` statement (within the `with` statement's indent block). This is a context manager!

Context managers derive their functionality from the special dunder methods `__enter__` and `__exit__`. Let's work through an example! Say we want to set the **class variable** `Logger.logging_tape` to be a new `LoggingTape`, but only temporarily. Sounds perfect for a `with` statement, huh? Here's a starter:
```python
class LoggingTape:
    def __init__(self):
        ...

    def __enter__(self):
        ...

    def __exit__(self, *args):
        ...

    def add_to_log(self, new_log):
        ...

    def print_logs(self):
        for log in self.logs:
            print(log)
```
We might see some code using the `LoggingTape` like:
```python=
with LoggingTape() as tape:
    ...
...
```
On line 1, `LoggingTape`'s `__enter__` method is called to enter the `with` statement. Then, after the indent block (so after line 2 but before line 3), `LoggingTape`'s `__exit__` method is called to exit from the `with` statement.

:::info
In `LoggingTape`'s constructor, make an empty list called `logs`. We'll store strings in it as log messages of whatever happened.

Then, in `__enter__`, set `Logger.logging_tape = self` and `return self`. We're setting a class-level variable!

Next, in `__exit__`, set `Logger.logging_tape = None`.

In `add_to_log`, append `new_log` to the end of `logs`.
:::

Now, check out this code block:
```python=
with LoggingTape() as tape:  # runs LoggingTape's __enter__()
    # Logger.logging_tape is now defined as tape (from line 1)!
    tape.add_to_log("Hi!")
# runs LoggingTape's __exit__()
# Now Logger.logging_tape is defined as None
```
This might seem a little trivial now, but what this enables us to do is have any `Logger` class record to `tape` while inside the `with` statement (lines 2-3 in the example)! Say we have a `Car` class:
```python=
class Car(Logger):
    def travel(self, distance):
        self.logging_tape.add_to_log(f"Traveled Distance {distance}")
```
```python=
car = Car()
with LoggingTape() as tape:
    car.travel(5)
tape.print_logs()
```

:::success
The output will be "Traveled Distance 5". The `LoggingTape` kept track of the logged item automatically for us. ***I wonder if this will be useful in Homework 2...***
:::
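Context managers show up well beyond logging. As one more toy example of the same `__enter__`/`__exit__` pattern (hypothetical, and not used anywhere in the course code), here is a `Timer` that measures whatever happens inside its `with` block:

```python
import time

class Timer:
    """A toy context manager: times whatever happens inside the `with` block."""

    def __enter__(self):
        self.start = time.perf_counter()
        return self  # this is what `as t` gets bound to

    def __exit__(self, *args):
        self.elapsed = time.perf_counter() - self.start
        print(f"Block took {self.elapsed:.4f} seconds")

with Timer() as t:
    total = sum(range(1_000_000))  # any work you want to time
# __exit__ has already run here, so t.elapsed is available
print(t.elapsed)
```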
### C. Libraries for this Course

#### I. Numerical Python (NumPy)

##### Making NumPy arrays (also shapes)

```python=
import numpy as np

# 1. Make a NumPy array of zeros with shape (5,10).
zeros = ...
assert(zeros.shape == (5,10))
assert(np.max(zeros) == np.min(zeros) == 0)

# 2. Do it again, but make it full of ones!
ones = ...
assert(ones.shape == (5,10))
assert(np.max(ones) == np.min(ones) == 1)

# 3. Slice the array to get the first row of ones!
first_row = ...
assert(first_row.shape == (10,))

# 4. Slice the array to get the first *column* of ones!
# (Hint: Try passing in `:` as one of the slice indices!)
first_col = ...
assert(first_col.shape == (5,))

# 5. Create a new dimension on the ones array.
# You should end up with shape (5,10,1). (Check out `np.expand_dims`)
expanded = ...
assert(expanded.shape == (5,10,1))

# 6. Cast a list `[1,2,3]` into a NumPy array.
# Then, change the first element to `4`.
arr = ...
...
np.testing.assert_array_equal(arr, np.array([4,2,3]))

# 7. Make a NumPy array of integers 0 to 9,
# inclusive, in shape (2,5) using `np.arange` and `np.reshape`.
incr = ...
...
assert(incr.shape == (2,5))
np.testing.assert_array_equal(incr, np.array([[0,1,2,3,4],[5,6,7,8,9]]))

# 8. With incr from 7., use `np.vstack` to add
# a new row `[10, 11, 12, 13, 14]`.
vstacked = ...
v_target = [[0, 1, 2, 3, 4],
            [5, 6, 7, 8, 9],
            [10, 11, 12, 13, 14]]
np.testing.assert_array_equal(vstacked, np.array(v_target))

# 9. With incr from 7., use `np.hstack` to add a new column of 0's.
# *Hint: think about the dimensionality of the original matrix.
# What dimensions do you need to represent a new column?*
hstacked = ...
h_target = [[0, 1, 2, 3, 4, 0],
            [5, 6, 7, 8, 9, 0]]
np.testing.assert_array_equal(hstacked, np.array(h_target))
```

##### Basic Operations (addition, subtraction, scalars)

```python=
import numpy as np

# 1. Add two NumPy arrays of ones with shape (5,10).
ones_1 = ...
ones_2 = ...
sum_ones = ...
assert(sum_ones.shape == (5,10))
assert(np.max(sum_ones) == np.min(sum_ones) == 2)

# 2. Subtract a NumPy array of ones with shape (5,10) from another one.
# Reuse ones_1 and ones_2
diff_ones = ...
assert(diff_ones.shape == (5,10))
assert(np.max(diff_ones) == np.min(diff_ones) == 0)

# 3. Multiply a NumPy array of ones with shape (5,10) by the scalar two.
scaled = ...
assert(scaled.shape == (5,10))
assert(np.max(scaled) == np.min(scaled) == 2)
```

##### Matrix Operations (matrix product, element-wise product/division, mean, axes)

```python=
# 1. Use NumPy matrix multiplication (`np.matmul` or using the `@` symbol)
# to calculate the inner product of vectors v1, v2
v1 = np.array([1,2,3])
v2 = np.array([3,2,1])
inner_prod = ...
assert(inner_prod == 10)

# 2. Use NumPy matrix multiplication (`np.matmul` or using the `@` symbol)
# to calculate the matrix product of matrices m1 and m2.
m1 = np.array([[1, 2, 3],
               [0, 1, 0]])
m2 = np.array([[4, 6],
               [2, 1],
               [0, 5]])
mat_prod = ...
m_target = np.array([[8, 23],
                     [2, 1]])
np.testing.assert_array_equal(mat_prod, m_target)

# 3. Use NumPy element-wise matrix multiplication (using the `*` symbol)
# to calculate the element-wise product of matrices m1 (above) and m3.
m3 = np.array([[4, 6, 2],
               [1, 0, 5]])
elem_prod = ...
e_target = np.array([[4, 12, 6],
                     [0, 0, 0]])
np.testing.assert_array_equal(elem_prod, e_target)

# 4. Use NumPy element-wise matrix division (using the `/` symbol)
# to calculate the element-wise quotient of matrices m1 and m4.
m4 = np.array([[4, 6, 2],
               [1, 1, 5]])
quot = ...
q_target = np.array([[0.25, 0.33333333, 1.5],
                     [0., 1., 0.]])
np.testing.assert_allclose(quot, q_target)

# 5. Use NumPy functions to find the average of the entries in matrix m5.
# Do it again, but get the average per row
# Then, do it per column
m5 = np.array([[1,2],
               [0,1]])
avg = ...
row_avg = ...
col_avg = ...
assert(avg == 1)
np.testing.assert_allclose(row_avg, [1.5, 0.5])
np.testing.assert_allclose(col_avg, [0.5, 1.5])
```

##### Logical Operations (masking, `np.where`, `argmax`)

```python=
# 1. Use a masking operation on matrix m1.
# We want masked to be a matrix whose entries are `False` where
# m1's entries are less than 6, and `True` otherwise.
m1 = np.array([[1, 9, 5],
               [8, 0, 2]])
masked = ...
masked_target = np.array([[False, True, False],
                          [True, False, False]])
np.testing.assert_array_equal(masked, masked_target)

# 2. Use `np.where` on matrix m1 to
# keep entries greater than or equal to 6
# and replace any entries less than 6 with 0.
replaced = ...
replaced_target = np.array([[0, 9, 0],
                            [8, 0, 0]])
np.testing.assert_array_equal(replaced, replaced_target)

# 3. Use `np.argmax` on matrix m1 to find, per row,
# the index of the greatest element.
max_inds = ...
target_inds = [1,0]
np.testing.assert_array_equal(max_inds, target_inds)
```

##### Broadcasting

Broadcasting is a powerful mechanism that allows NumPy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array. Take a glance through some of these simple examples!

```python
import numpy as np

# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v  # Add v to each row of x using broadcasting
print(y)   # Prints "[[ 2  2  4]
           #          [ 5  5  7]
           #          [ 8  8 10]
           #          [11 11 13]]"
```

```python
import numpy as np

# Let's say we have the following matrix A.
A = np.random.random((50, 100, 20))
# We can imagine A as 50 instances of (100,20) matrices.

# We have the following matrix B
B = np.random.random((20,40))

# We want to multiply each (100,20) instance of A by B.
# We can do this because the inner dimensions match up: 20 = 20
print((A @ B).shape)
# output should be of shape (50, 100, 40):
# each of the 50 (100,20) matrices is multiplied by the (20,40) matrix to yield (100, 40)
```
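Before trying the exercise below, it may help to see the shape-matching rule spelled out. This optional sketch (the arrays here are arbitrary) shows which shapes broadcast and which don't:

```python
import numpy as np

# Broadcasting aligns shapes from the *right*. A pair of dimensions is
# compatible if the sizes match or one of them is 1 (which gets stretched).
x = np.zeros((2, 3))

print((x + np.ones((1, 3))).shape)   # (2, 3): the leading 1 is stretched to 2
print((x + np.ones(3)).shape)        # (2, 3): shape (3,) is treated like (1, 3)

try:
    x + np.ones(2)                   # (2,) aligns with the last axis (size 3) -> error
except ValueError as err:
    print("Broadcasting error:", err)
```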
:::info
Consider the following arrays:
```python
A = np.array([[0,1,2],[3,4,5]])       # shape (2,3)
B = np.array([[1,1,1]])               # shape (1,3)
C = np.array([[-1,-1,-1],[1,1,1]])    # shape (2,3)
```
1. Create matrix `D` as A - B using broadcasting
2. Create matrix `E` with shape (3,2) by reshaping `C`
3. Create matrix `F` with shape (2,2) by matrix multiplying `D` by `E`

You can use the following to confirm your results look as they should!
```python
assert(np.all(D == [[-1,0,1],[2,3,4]]))
assert(np.all(E == [[-1,-1],[-1,1],[1,1]]))
assert(np.all(F == [[2,2],[-1,5]]))
```
:::

#### II. Tensorflow

Let's try some of the important examples again, but with Tensorflow. You'll find that for a lot of things, you can just replace `np` with `tf`. However, in some cases, the method might be named something else. Again, you should get used to searching for methods you'd like to use in the [documentation](https://www.tensorflow.org/api_docs).

:::success
**Hint:** If you know the NumPy method you'd like to use, you can usually get away with googling `<numpy method name> in tensorflow`.
:::

##### Making Tensors (also shapes)

```python=
import tensorflow as tf

# 1. Make a tf Tensor of zeros with shape (5,10).
zeros = ...
assert(zeros.shape == (5,10))
assert(tf.reduce_max(zeros) == tf.reduce_min(zeros) == 0)

# 2. Slice the tensor to get the first *column* of zeros!
# (Hint: Try passing in `:` as one of the slice indices!)
first_col = ...
assert(first_col.shape == (5,))

# 3. Create a new dimension on the zeros tensor.
# You should end up with shape (5,10,1). (Check out `tf.expand_dims`)
expanded = ...
assert(expanded.shape == (5,10,1))

# 4. Cast a list `[1,2,3]` into a tensor. (Check out tf.convert_to_tensor)
arr = ...
assert(tf.reduce_all(arr == [1,2,3]))

# 5. Make a tensor of integers 0 to 9,
# inclusive, in shape (2,5) using `tf.range` and `tf.reshape`.
incr = ...
...
assert(incr.shape == (2,5))
assert(tf.reduce_all(incr == [[0,1,2,3,4],[5,6,7,8,9]]))
```

##### Basic Operations (addition, subtraction, scalars)

```python=
import tensorflow as tf

# 1. Add two tensors of ones with shape (5,10).
ones_1 = ...
ones_2 = ...
sum_ones = ...
assert(sum_ones.shape == (5,10))
assert(tf.reduce_max(sum_ones) == tf.reduce_min(sum_ones) == 2)

# 2. Multiply a tensor of ones with shape (5,10) by the scalar two.
scaled = ...
assert(scaled.shape == (5,10))
assert(tf.reduce_max(scaled) == tf.reduce_min(scaled) == 2)
```

##### Matrix Operations (matrix product, element-wise product/division, mean, axes)

```python=
# 1. Use Tensorflow matrix multiplication (`tf.matmul` or using the `@` symbol)
# to calculate the matrix product of matrices m1 and m2.
m1 = tf.convert_to_tensor([[1, 2, 3],
                           [0, 1, 0]])
m2 = tf.convert_to_tensor([[4, 6],
                           [2, 1],
                           [0, 5]])
mat_prod = ...
m_target = tf.convert_to_tensor([[8, 23],
                                 [2, 1]])
tf.debugging.assert_equal(mat_prod, m_target)

# 2. Use Tensorflow element-wise matrix multiplication (using the `*` symbol)
# to calculate the element-wise product of matrices m1 (above) and m3.
m3 = tf.convert_to_tensor([[4, 6, 2],
                           [1, 0, 5]])
elem_prod = ...
e_target = tf.convert_to_tensor([[4, 12, 6],
                                 [0, 0, 0]])
tf.debugging.assert_equal(elem_prod, e_target)

# 3. Use Tensorflow functions to find the average of the entries in matrix m5.
# Do it again, but get the average per row
# Then, do it per column
m5 = tf.convert_to_tensor([[1,2],
                           [0,1]], dtype=tf.float32)
avg = ...
row_avg = ...
col_avg = ...
assert(avg == 1)
assert(tf.reduce_all(row_avg == [1.5, 0.5]))
assert(tf.reduce_all(col_avg == [0.5, 1.5]))
```

##### Logical Operations

```python=
# 1. Use a masking operation on matrix m1.
# We want masked to be a matrix whose entries are `False` where
# m1's entries are less than 6, and `True` otherwise.
m1 = tf.convert_to_tensor([[1, 9, 5],
                           [8, 0, 2]])
masked = ...
masked_target = np.array([[False, True, False],
                          [True, False, False]])
tf.debugging.assert_equal(masked, masked_target)

# 2. Use `tf.argmax` on matrix m1 to find, per row,
# the index of the greatest element.
max_inds = ...
target_inds = [1,0]
assert(tf.reduce_all(max_inds == target_inds))
```

##### Broadcasting

:::info
Consider the following arrays:
```python
A = tf.convert_to_tensor([[0,1,2],[3,4,5]])       # shape (2,3)
B = tf.convert_to_tensor([[1,1,1]])               # shape (1,3)
C = tf.convert_to_tensor([[-1,-1,-1],[1,1,1]])    # shape (2,3)
```
1. Create matrix `D` as A - B using broadcasting
2. Create matrix `E` with shape (3,2) by reshaping `C`
3. Create matrix `F` with shape (2,2) by matrix multiplying `D` by `E`

You can use the following to confirm your results look as they should!
```python
assert(tf.reduce_all(D == [[-1,0,1],[2,3,4]]))
assert(tf.reduce_all(E == [[-1,-1],[-1,1],[1,1]]))
assert(tf.reduce_all(F == [[2,2],[-1,5]]))
```
:::

# Conclusion

Woohoo! You just completed your first assignment of CSCI2470! There is nothing to submit for this assignment.
