### Vector operations

### 1Q) a) Give an example/explanation of the dot product.

The dot product (or scalar product) of two vectors is an important operation in vector algebra that results in a scalar. It is calculated by multiplying corresponding components of two vectors and then summing these products.

Given two vectors $\mathbf{a}$ and $\mathbf{b}$:

$$ \mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} $$

The dot product of $\mathbf{a}$ and $\mathbf{b}$ is defined as:

$$ \mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + \dots + a_n b_n = \sum_{i=1}^{n} a_i b_i $$

This operation results in a scalar value, which is a single number.

**Example:** If the vectors are:

$$ \mathbf{a} = \begin{pmatrix} 2 \\ 3 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} 4 \\ 5 \end{pmatrix} $$

The dot product is calculated as:

$$ \mathbf{a} \cdot \mathbf{b} = (2 \times 4) + (3 \times 5) = 8 + 15 = 23 $$

Thus, the dot product of $\mathbf{a}$ and $\mathbf{b}$ is 23. The dot product is useful in various contexts, such as determining the angle between two vectors or checking if two vectors are orthogonal (perpendicular).

### b) How can you calculate direction and magnitude?

To calculate the direction and magnitude of a vector, you need to understand two key concepts: vector magnitude (length) and the angle (direction) relative to a coordinate system.

### Magnitude:

The magnitude (or length) of a vector $\mathbf{v}$ is a measure of how long the vector is.
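Before working through the formulas, here is how the two ideas connect in code: a minimal pure-Python sketch (no libraries assumed, and `dot`/`magnitude` are illustrative helper names) of the dot product from part (a), with the magnitude obtained as the square root of a vector's dot product with itself.

```python
import math

def dot(a, b):
    """Dot product: multiply corresponding components and sum the results."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    return sum(x * y for x, y in zip(a, b))

def magnitude(v):
    """|v| = sqrt(v . v): the Pythagorean theorem in any dimension."""
    return math.sqrt(dot(v, v))

# The worked dot-product example from part (a): a = (2, 3), b = (4, 5)
print(dot([2, 3], [4, 5]))   # 8 + 15 = 23

# Magnitude of (3, 4), the vector used in the 2D example below
print(magnitude([3, 4]))     # 5.0
```

The 2D and 3D magnitude formulas that follow are simply the $n = 2$ and $n = 3$ cases of this `magnitude`.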
For a vector in 2D space, given by:

$$ \mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} $$

The magnitude is calculated using the Pythagorean theorem:

$$ |\mathbf{v}| = \sqrt{v_1^2 + v_2^2} $$

For a vector in 3D space:

$$ \mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} $$

The magnitude is:

$$ |\mathbf{v}| = \sqrt{v_1^2 + v_2^2 + v_3^2} $$

### Direction:

The direction of a vector is the angle it makes with the positive x-axis (in 2D) or a reference axis (in 3D). In 2D, for a vector $\mathbf{v}$ with components $v_1$ and $v_2$, the direction $\theta$ is given by:

$$ \theta = \tan^{-1}\left(\frac{v_2}{v_1}\right) $$

(Note that $\tan^{-1}$ alone gives the correct angle only when $v_1 > 0$; in general, the signs of $v_1$ and $v_2$ must be used to place $\theta$ in the correct quadrant.)

In 3D space, the direction is often described by angles relative to the coordinate axes. For a vector $\mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}$, the direction cosines are given by:

$$ \cos(\alpha) = \frac{v_1}{|\mathbf{v}|}, \quad \cos(\beta) = \frac{v_2}{|\mathbf{v}|}, \quad \cos(\gamma) = \frac{v_3}{|\mathbf{v}|} $$

Here, $\alpha$, $\beta$, and $\gamma$ are the angles between the vector and the x, y, and z axes, respectively.

### Example:

For a 2D vector $\mathbf{v} = \begin{pmatrix} 3 \\ 4 \end{pmatrix}$:

- Magnitude: $$ |\mathbf{v}| = \sqrt{3^2 + 4^2} = \sqrt{9 + 16} = \sqrt{25} = 5 $$
- Direction: $$ \theta = \tan^{-1}\left(\frac{4}{3}\right) \approx 53.13^\circ $$

### 2Q) Trigonometric definitions

### Can you write the relationship between polar coordinates and Cartesian coordinates (i.e., from x and y to r and theta)?

To relate polar coordinates to Cartesian coordinates, you can use the following trigonometric definitions.

### Relationship between Polar and Cartesian Coordinates

In polar coordinates, a point is defined by its distance $r$ from the origin and the angle $\theta$ from the positive x-axis. In Cartesian coordinates, a point is defined by its $x$ and $y$ coordinates. The relationship between these coordinate systems is given by:

**1.
Converting from Polar to Cartesian Coordinates:** Given polar coordinates $(r, \theta)$, the Cartesian coordinates $(x, y)$ are:

$$ x = r \cos(\theta) $$
$$ y = r \sin(\theta) $$

**2. Converting from Cartesian to Polar Coordinates:** Given Cartesian coordinates $(x, y)$, the polar coordinates $(r, \theta)$ can be calculated as:

$$ r = \sqrt{x^2 + y^2} $$
$$ \theta = \tan^{-1}\left(\frac{y}{x}\right) $$

(As with vector direction, $\tan^{-1}$ must be adjusted by the signs of $x$ and $y$ to land in the correct quadrant; this is what the two-argument arctangent, atan2, does.)

### Example:

Suppose you have the polar coordinates $(r, \theta) = (5, \frac{\pi}{4})$:

- To convert to Cartesian coordinates: $$ x = 5 \cos\left(\frac{\pi}{4}\right) = 5 \times \frac{\sqrt{2}}{2} = \frac{5\sqrt{2}}{2} $$ $$ y = 5 \sin\left(\frac{\pi}{4}\right) = 5 \times \frac{\sqrt{2}}{2} = \frac{5\sqrt{2}}{2} $$
- To convert back to polar coordinates from Cartesian coordinates $(x, y) = \left(\frac{5\sqrt{2}}{2}, \frac{5\sqrt{2}}{2}\right)$: $$ r = \sqrt{\left(\frac{5\sqrt{2}}{2}\right)^2 + \left(\frac{5\sqrt{2}}{2}\right)^2} = \sqrt{25} = 5 $$ $$ \theta = \tan^{-1}\left(\frac{\frac{5\sqrt{2}}{2}}{\frac{5\sqrt{2}}{2}}\right) = \frac{\pi}{4} $$

### 3Q) Solving linear equations in vector format and matrix format. If you like, use 2x2 or 3x3 matrices in your example(s)

### Solving Linear Equations in Vector and Matrix Format

#### Vector Format:

In vector format, a system of linear equations can be represented as:

$$ \mathbf{A} \mathbf{x} = \mathbf{b} $$

where:

- $\mathbf{A}$ is the matrix of coefficients,
- $\mathbf{x}$ is the vector of variables,
- $\mathbf{b}$ is the vector of constants.
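A system in this form can also be solved programmatically. The sketch below is a minimal pure-Python illustration for the 2x2 case, using the same explicit inverse formula derived in the example that follows; `solve_2x2` is a hypothetical helper name, not a standard library function.

```python
def solve_2x2(A, b):
    """Solve the 2x2 system A x = b via A^{-1} = (1/det A) adj(A)."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is singular: no unique solution")
    b1, b2 = b
    # adj(A) = [[a22, -a12], [-a21, a11]], applied to b and scaled by 1/det
    x = (a22 * b1 - a12 * b2) / det
    y = (-a21 * b1 + a11 * b2) / det
    return x, y

# The system worked through below: 2x + 3y = 5, 4x - y = 6
x, y = solve_2x2([[2, 3], [4, -1]], [5, 6])
print(x, y)  # 23/14 and 4/7
```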
**Example (2x2 system):** Consider the system of linear equations:

$$ \begin{cases} 2x + 3y = 5 \\ 4x - y = 6 \end{cases} $$

In vector format, this system can be written as:

$$ \mathbf{A} = \begin{pmatrix} 2 & 3 \\ 4 & -1 \end{pmatrix}, \quad \mathbf{x} = \begin{pmatrix} x \\ y \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} 5 \\ 6 \end{pmatrix} $$

So:

$$ \begin{pmatrix} 2 & 3 \\ 4 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 5 \\ 6 \end{pmatrix} $$

#### Matrix Format:

To solve the system in matrix format, use matrix algebra techniques. You can use methods such as Gaussian elimination, or find the inverse of the matrix $\mathbf{A}$ if it exists.

**Example (2x2 system):** Given:

$$ \mathbf{A} = \begin{pmatrix} 2 & 3 \\ 4 & -1 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} 5 \\ 6 \end{pmatrix} $$

To find $\mathbf{x}$, we solve:

$$ \mathbf{x} = \mathbf{A}^{-1} \mathbf{b} $$

**Finding the Inverse:** The inverse of $\mathbf{A}$ is given by:

$$ \mathbf{A}^{-1} = \frac{1}{\text{det}(\mathbf{A})} \text{adj}(\mathbf{A}) $$

where $\text{det}(\mathbf{A})$ is the determinant of $\mathbf{A}$, and $\text{adj}(\mathbf{A})$ is the adjugate matrix of $\mathbf{A}$. Calculate:

$$ \text{det}(\mathbf{A}) = (2 \times -1) - (3 \times 4) = -2 - 12 = -14 $$
$$ \text{adj}(\mathbf{A}) = \begin{pmatrix} -1 & -3 \\ -4 & 2 \end{pmatrix} $$
$$ \mathbf{A}^{-1} = \frac{1}{-14} \begin{pmatrix} -1 & -3 \\ -4 & 2 \end{pmatrix} = \begin{pmatrix} \frac{1}{14} & \frac{3}{14} \\ \frac{2}{7} & -\frac{1}{7} \end{pmatrix} $$

Then:

$$ \mathbf{x} = \begin{pmatrix} \frac{1}{14} & \frac{3}{14} \\ \frac{2}{7} & -\frac{1}{7} \end{pmatrix} \begin{pmatrix} 5 \\ 6 \end{pmatrix} = \begin{pmatrix} \frac{5}{14} + \frac{18}{14} \\ \frac{10}{7} - \frac{6}{7} \end{pmatrix} = \begin{pmatrix} \frac{23}{14} \\ \frac{4}{7} \end{pmatrix} $$

So $x = \frac{23}{14}$ and $y = \frac{4}{7}$; substituting back confirms $2x + 3y = 5$ and $4x - y = 6$.

### 4Q) Reducing matrices (Note: we will see this again in a later lecture)

Give an example of row reduction for a 2x2 or 3x3 matrix.
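Row reduction can also be carried out programmatically. Below is a minimal pure-Python sketch of Gauss-Jordan elimination (floating-point arithmetic with partial pivoting; `rref` is a hypothetical helper name, not a library function); the step-by-step hand computation follows in the answer below.

```python
def rref(matrix):
    """Reduce a matrix to reduced row echelon form (Gauss-Jordan).

    Rows are lists of numbers; the input is not modified.
    """
    m = [[float(v) for v in row] for row in matrix]
    n_rows, n_cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(n_cols):
        if pivot_row >= n_rows:
            break
        # Partial pivoting: pick the row with the largest entry in this column.
        best = max(range(pivot_row, n_rows), key=lambda r: abs(m[r][col]))
        if abs(m[best][col]) < 1e-12:
            continue  # no pivot in this column
        m[pivot_row], m[best] = m[best], m[pivot_row]
        # Scale the pivot row so the pivot becomes 1.
        p = m[pivot_row][col]
        m[pivot_row] = [v / p for v in m[pivot_row]]
        # Clear this column in every other row (above and below the pivot).
        for r in range(n_rows):
            if r != pivot_row:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

# The 3x3 example worked by hand below reduces (up to floating-point
# rounding) to [[1, 0, -1], [0, 1, 2], [0, 0, 0]]
print(rref([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
```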
### Reducing Matrices

Row reduction (or Gaussian elimination) is a method used to simplify a matrix to its row echelon form or reduced row echelon form (RREF). This process involves applying a series of elementary row operations to the matrix.

### Example: Row Reduction of a 3x3 Matrix

Consider the matrix:

$$ \mathbf{A} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} $$

The goal is to reduce this matrix to its reduced row echelon form.

#### Step 1: Write Down the Matrix

Start from the matrix $\mathbf{A}$ itself (if we were solving a system $\mathbf{A}\mathbf{x} = \mathbf{b}$, we would instead row-reduce the augmented matrix $(\mathbf{A} \mid \mathbf{b})$):

$$ \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} $$

#### Step 2: Apply Row Operations

1. **Make the element at (1,1) a pivot:** The element at (1,1) is already 1, so no change is needed.
2. **Eliminate the elements below the pivot (1,1):**
   - Subtract 4 times Row 1 from Row 2: $$ R_2 \leftarrow R_2 - 4R_1 \implies \begin{pmatrix} 1 & 2 & 3 \\ 0 & -3 & -6 \\ 7 & 8 & 9 \end{pmatrix} $$
   - Subtract 7 times Row 1 from Row 3: $$ R_3 \leftarrow R_3 - 7R_1 \implies \begin{pmatrix} 1 & 2 & 3 \\ 0 & -3 & -6 \\ 0 & -6 & -12 \end{pmatrix} $$
3. **Make the element at (2,2) a pivot:** Divide Row 2 by -3: $$ R_2 \leftarrow \frac{1}{-3}R_2 \implies \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & -6 & -12 \end{pmatrix} $$
4. **Eliminate the elements above and below the pivot (2,2):**
   - Add 6 times Row 2 to Row 3: $$ R_3 \leftarrow R_3 + 6R_2 \implies \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix} $$
   - Subtract 2 times Row 2 from Row 1: $$ R_1 \leftarrow R_1 - 2R_2 \implies \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix} $$

After the first operation in step 4, the matrix is already in row echelon form; the second operation, which also clears the entry above the second pivot, takes it all the way to reduced row echelon form.

#### Summary

The row reduction of the matrix $\mathbf{A}$ results in:

$$ \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix} $$

This matrix is in reduced row echelon form (RREF): every pivot is 1 and is the only nonzero entry in its column. The row of zeros shows that the rows of $\mathbf{A}$ are linearly dependent (the matrix has rank 2).

### 5Q) Linear Combinations

What is a linear combination?
How is it calculated with vectors?

### Linear Combinations

#### Definition

A **linear combination** of a set of vectors is a new vector created by multiplying each vector by a scalar and then adding the results. In mathematical terms, given a set of vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ and scalars $c_1, c_2, \ldots, c_n$, a linear combination is expressed as:

$$ \mathbf{L} = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_n \mathbf{v}_n $$

where $\mathbf{L}$ is the resulting vector.

#### Calculation with Vectors

**Example:** Given vectors:

$$ \mathbf{v}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \quad \mathbf{v}_2 = \begin{pmatrix} 3 \\ 4 \end{pmatrix} $$

and scalars:

$$ c_1 = 2, \quad c_2 = -1 $$

The linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$ is:

$$ \mathbf{L} = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 $$

Substitute the values:

$$ \mathbf{L} = 2 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + (-1) \begin{pmatrix} 3 \\ 4 \end{pmatrix} = \begin{pmatrix} 2 \\ 4 \end{pmatrix} + \begin{pmatrix} -3 \\ -4 \end{pmatrix} = \begin{pmatrix} 2 - 3 \\ 4 - 4 \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix} $$

#### b) Can you write a linear combination with non-linear functions (e.g., $\log(x)$)?

### Linear Combination with Non-linear Functions

Linear combinations are defined in the context of linear algebra, so they specifically involve linear operations (addition and scalar multiplication). Non-linear functions, such as $\log(x)$, do not fit within the strict definition of linear combinations of vectors in $\mathbb{R}^n$ because they involve non-linear transformations. However, in some contexts, we can combine vectors that are functions, including non-linear functions, although the resulting function is no longer linear in its argument.
For instance, if we have functions $f_1(x) = \log(x)$ and $f_2(x) = x^2$, and we want to create a combination:

$$ f(x) = c_1 f_1(x) + c_2 f_2(x) = c_1 \log(x) + c_2 x^2 $$

where $c_1$ and $c_2$ are scalars, the resulting function $f(x)$ is non-linear in $x$. (In the broader setting of function spaces, where the functions themselves play the role of vectors, this expression can still be treated as a linear combination of $f_1$ and $f_2$.)

### 6Q) Linear (In)dependence

How can you use multiple vectors to "create" another vector? What does the word "span" mean (in the context of "vector spaces")?

### Linear (In)dependence and Span

In linear algebra, the concepts of linear dependence, linear independence, and span are fundamental to understanding vector spaces and how vectors can be used to construct other vectors.

#### Linear (In)dependence

- **Linear Independence**: A set of vectors $\{ \mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n \}$ is said to be linearly independent if no vector in the set can be written as a linear combination of the others. In other words, if the only solution to the equation $$ c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_n \mathbf{v}_n = \mathbf{0} $$ is $c_1 = c_2 = \cdots = c_n = 0$, then the vectors are linearly independent.
- **Linear Dependence**: Conversely, a set of vectors is linearly dependent if at least one of the vectors can be expressed as a linear combination of the others. Equivalently, there exist scalars $c_1, c_2, \ldots, c_n$, not all zero, such that: $$ c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_n \mathbf{v}_n = \mathbf{0} $$

#### Span

The **span** of a set of vectors is the set of all possible linear combinations of those vectors.
For a set of vectors $\{ \mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n \}$, the span is defined as:

$$ \text{Span}(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n) = \{ c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_n \mathbf{v}_n \mid c_1, c_2, \ldots, c_n \text{ are scalars} \} $$

In other words, the span of a set of vectors is the collection of all vectors that can be "reached" or "created" by taking linear combinations of the given vectors.

### Example:

Consider two vectors in $\mathbb{R}^2$:

$$ \mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \mathbf{v}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} $$

The span of $\mathbf{v}_1$ and $\mathbf{v}_2$ is:

$$ \text{Span}(\mathbf{v}_1, \mathbf{v}_2) = \left\{ c_1 \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ 1 \end{pmatrix} \mid c_1, c_2 \in \mathbb{R} \right\} $$

This span includes all vectors in $\mathbb{R}^2$, as any vector $\begin{pmatrix} x \\ y \end{pmatrix}$ can be expressed as $x \mathbf{v}_1 + y \mathbf{v}_2$.

### 7Q) Matrix operations

What are we doing when we multiply a matrix by a vector?

### Matrix-Vector Multiplication

When you multiply a matrix by a vector, you perform a linear transformation on the vector. This operation is fundamental in linear algebra and is used in various applications, such as computer graphics, engineering, and machine learning.

#### Mathematical Definition

Given a matrix $\mathbf{A}$ of size $m \times n$ and a vector $\mathbf{x}$ of size $n \times 1$, the matrix-vector multiplication $\mathbf{A} \mathbf{x}$ results in a new vector $\mathbf{b}$ of size $m \times 1$.
If $\mathbf{A}$ is an $m \times n$ matrix and $\mathbf{x}$ is an $n \times 1$ vector:

$$ \mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} $$

$$ \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} $$

The product $\mathbf{A} \mathbf{x}$ is:

$$ \mathbf{b} = \mathbf{A} \mathbf{x} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix} $$

where each element $b_i$ of the resulting vector $\mathbf{b}$ is computed as:

$$ b_i = a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n $$

#### Interpretation

1. **Linear Transformation**: Multiplying a matrix by a vector transforms the vector into another vector in a potentially different space. The matrix $\mathbf{A}$ represents a linear transformation that changes the direction and/or magnitude of the vector $\mathbf{x}$.
2. **Geometric Interpretation**: In geometric terms, if you consider the matrix as a transformation matrix, multiplying it by a vector applies that transformation to the vector. For example, in 2D space, a rotation matrix applied to a vector rotates it by a specified angle.
3. **System of Equations**: If the matrix $\mathbf{A}$ holds the coefficients of a system of linear equations and $\mathbf{x}$ holds the variables, then $\mathbf{A} \mathbf{x} = \mathbf{b}$ expresses the whole system as a single matrix equation.
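The componentwise formula $b_i = a_{i1} x_1 + \cdots + a_{in} x_n$ translates directly into code. A minimal pure-Python sketch (`matvec` is a hypothetical helper name):

```python
def matvec(A, x):
    """Multiply an m x n matrix (a list of rows) by a length-n vector."""
    if any(len(row) != len(x) for row in A):
        raise ValueError("each matrix row must match the vector's length")
    # Each b_i is the dot product of row i with x.
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Same numbers as the worked example that follows: A = [[1, 2], [3, 4]], x = [5, 6]
print(matvec([[1, 2], [3, 4]], [5, 6]))  # [17, 39]
```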
### Example

Consider a matrix $\mathbf{A}$ and a vector $\mathbf{x}$:

$$ \mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad \mathbf{x} = \begin{pmatrix} 5 \\ 6 \end{pmatrix} $$

The product $\mathbf{A} \mathbf{x}$ is:

$$ \mathbf{A} \mathbf{x} = \begin{pmatrix} 1 \cdot 5 + 2 \cdot 6 \\ 3 \cdot 5 + 4 \cdot 6 \end{pmatrix} = \begin{pmatrix} 17 \\ 39 \end{pmatrix} $$

### 8Q)

### Basic Arithmetic and Algebra

- **Addition**: \( a + b \)
- **Subtraction**: \( a - b \)
- **Multiplication**: \( a \cdot b \) or \( a \times b \)
- **Division**: \( \frac{a}{b} \)
- **Exponentiation**: \( a^b \)
- **Roots**: \( \sqrt{a} \) or \( \sqrt[n]{a} \)

### Calculus

- **Integral**: \( \int_a^b f(x) \, dx \)
- **Partial Derivative**: \( \frac{\partial f}{\partial x} \)
- **Limit**: \( \lim_{x \to \infty} f(x) \)

### Linear Algebra

- **Matrix**: \( \mathbf{A} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \)
- **Vector**: \( \mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} \)
- **Determinant**: \( \text{det}(\mathbf{A}) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \)

### Logic and Set Theory

- **Logical AND**: \( p \land q \)
- **Logical OR**: \( p \lor q \)
- **Not**: \( \neg p \)
- **Set Union**: \( A \cup B \)
- **Set Intersection**: \( A \cap B \)
- **Subset**: \( A \subseteq B \)

### Functions and Relations

- **Function**: \( f(x) = x^2 + 3x + 2 \)
- **Inverse Function**: \( f^{-1}(x) \)
- **Composition of Functions**: \( (f \circ g)(x) = f(g(x)) \)

### Special Functions and Constants

- **Euler's Number**: \( e \approx 2.718 \)
- **Pi**: \( \pi \approx 3.141 \)
- **Gamma Function**: \( \Gamma(x) \)
- **Beta Function**: \( B(x, y) \)

### Trigonometric Functions

- **Sine**: \( \sin(\theta) \)
- **Cosine**: \( \cos(\theta) \)
- **Tangent**: \( \tan(\theta) \)
- **Secant**: \( \sec(\theta) \)
- **Cosecant**: \( \csc(\theta) \)
- **Cotangent**: \( \cot(\theta) \)

### Miscellaneous

- **Summation**: \( \sum_{i=1}^{n} a_i \)
- **Product**: \( \prod_{i=1}^{n} a_i \)
- **Infinity**: \( \infty \)
- **Nabla (Gradient)**: \( \nabla f \)
- **Angle**: \( \angle ABC \)