# Math 55 Definition List

## Lecture 1: Propositional Logic

1. A *proposition* is a declarative sentence that is either true or false but not both. A proposition has a *truth value*, which is either $T$ or $F$.
2. A letter used to denote a proposition is called a *propositional variable*.
3. If $p$ is a proposition, the *negation* of $p$, denoted $\lnot p$, is the proposition "it is not the case that $p$". The truth value of $\lnot p$ is the opposite of that of $p$.
4. If $p$ and $q$ are propositions, the *conjunction* of $p$ and $q$ (denoted $p\land q$) is the proposition "$p$ and $q$". Its truth value is $T$ when both $p$ and $q$ are $T$, and it is $F$ otherwise.
5. If $p$ and $q$ are propositions, the *disjunction* of $p$ and $q$ (denoted $p\lor q$) is the proposition "$p$ or $q$". Its truth value is $T$ when at least one of $p$ or $q$ is $T$, and $F$ if both $p$ and $q$ are $F$.
6. If $p$ and $q$ are propositions, the *conditional* $p\rightarrow q$ is the proposition "if $p$ then $q$". Its truth values are given by the following *truth table* (which lists its truth values in terms of those of $p$ and $q$): [see Table 5 in Rosen] The *converse* of $p\rightarrow q$ is $q\rightarrow p$.
7. A *compound proposition* consists of logical operations (the four listed above) applied to propositions or propositional variables, possibly with parentheses. The truth value of a compound proposition can be mechanically determined given the truth values of its constituents.

## Lecture 2: Propositional Functions, Quantifiers, (begin) Rules of Inference

1. A compound proposition which is always true (regardless of the truth values of its constituent propositions) is called a *tautology*. A compound proposition that is always false is called a *contradiction*. A compound proposition that is neither is called *contingent*.
2.
Two compound propositions $c_1,c_2$ are *logically equivalent*, denoted $c_1\equiv c_2$, if they have the same truth values for all truth values of their propositional variables (and all domains of their propositional functions).
3. The *biconditional* $p\leftrightarrow q$ is the proposition $(p\rightarrow q)\land (q\rightarrow p)$ (read "$p$ if and only if $q$"). Two propositions $p$ and $q$ are logically equivalent if $p\leftrightarrow q$ is a tautology. This is denoted $p\equiv q$. Logically equivalent propositions have the same truth table.
4. The *contrapositive* of a conditional $p\rightarrow q$ is $\lnot q\rightarrow \lnot p$. The contrapositive is logically equivalent to the original conditional.
5. A *propositional function* is a statement containing one or more variables (taking values in an implicitly or explicitly specified domain) which becomes a proposition when each variable is instantiated with a value.
6. The *universal quantification* of $P(x)$ is the proposition "For all $x$ in the domain, $P(x)$", denoted $\forall x P(x)$. The *existential quantification* of $P(x)$ is the proposition "There exists an $x$ in the domain such that $P(x)$", denoted $\exists x P(x)$.
7. A variable appearing with a quantifier is called *bound*. A statement in which every variable appearing in a propositional function is bound is a proposition; otherwise it is not a proposition.

## Lecture 3: Rules of Inference for Quantifiers, Direct Proof, Contrapositive, Contradiction

1. An *argument* is a sequence of propositions, each of which is a premise (i.e., an assumption) or a conclusion (i.e., follows from previous propositions via rules of inference).
2. An integer $n$ is *even* if there exists an integer $k$ such that $n=2k$. An integer $n$ is *odd* if there exists an integer $k$ such that $n=2k+1$.
3. A real number $x$ is *rational* if there exist integers $p$ and $q\neq 0$ such that $x=p/q$.

## Lecture 4: More Proofs, Sets

1.
A *set* is an unordered collection of objects, which are called its *elements*. A set *contains* its elements, denoted $x\in A$. For every $x$, the statement "$x\in A$" must be a proposition (otherwise $A$ is not a set).
2. The set with no elements is called the *empty set*, denoted $\varnothing$.
3. Two sets $A$ and $B$ are *equal*, denoted $A=B$, if they have the same elements, i.e., $\forall x(x\in A\leftrightarrow x\in B)$.
4. If $A$ and $B$ are sets, $A$ is a *subset* of $B$, denoted $A\subseteq B$, if every element of $A$ is an element of $B$, i.e., $\forall x (x\in A \rightarrow x\in B)$.
5. If $A$ is a set, the *power set* of $A$, denoted $\mathcal{P}(A)$, is the set of all subsets of $A$: $$\mathcal{P}(A)=\{S:S\subseteq A\}.$$
6. If $A$ and $B$ are sets, their *union* is $$A\cup B = \{x:x\in A\lor x\in B\},$$ their *intersection* is $$A\cap B = \{x:x\in A\land x\in B\},$$ and their *difference* is $$A-B = \{x:x\in A\land x\notin B\}.$$ If $A$ is a subset of a *universe* $U$ (which is just another set), then the *complement* of $A$ (with respect to $U$) is $$\overline{A}= \{x\in U:x\notin A\}=U-A.$$

## Lecture 5: Functions

1. The *Cartesian product* of two sets $A$ and $B$ is the set of ordered pairs $(a,b)$ such that $a\in A$ and $b\in B$. It is denoted $$A\times B = \{(a,b):a\in A\land b\in B\}.$$
2. If $A$ and $B$ are nonempty sets, a *function* from $A$ to $B$ is a rule which assigns exactly one element of $B$ to each element of $A$. This assignment is denoted $f(a)=b$. If $f$ is a function from $A$ to $B$ we write $f:A\rightarrow B$. $A$ is called the *domain* of $f$ and $B$ is called the *codomain*.
3. A function $f:A\rightarrow B$ is *onto* (*surjective*) if for every $b\in B$ there exists an $a\in A$ such that $f(a)=b$.
4. A function $f:A\rightarrow B$ is *one to one* (*injective*) if for all $a_1,a_2\in A$, $f(a_1)=f(a_2)$ implies $a_1=a_2$ (equivalently, $a_1\neq a_2$ implies $f(a_1)\neq f(a_2)$).
5.
A function $f:A\rightarrow B$ is called a *bijection* if it is one to one and onto.
6. The *composition* of $f:B\rightarrow C$ and $g:A\rightarrow B$ is the function $f\circ g:A\rightarrow C$ defined by $f\circ g(a)=f(g(a))$.
7. If a function $f:A\rightarrow B$ is a bijection, then the function $f^{-1}$ which assigns to each $b\in B$ the unique element $a\in A$ mapping to $b$ is called its *inverse*.
8. If $f:A\rightarrow B$ and $S\subseteq A$, then the *image* of $S$ is $f(S) = \{b\in B:\exists a\in S \text{ s.t. } f(a)=b\}$. The *inverse image* (*preimage*) of $S\subseteq B$ is $f^{-1}(S) = \{a\in A:f(a)\in S\}$. The *range* of $f$ is $f(A)$.

$\newcommand{\Z}{\mathbb{Z}}$

## Lecture 6: Cardinality

1. $\Z_+$ denotes the set of positive integers.
2. A set $A$ is called *finite* if it is the empty set or if it has exactly $n$ elements for some $n\in \Z_+$. The *cardinality* of $A$, denoted $|A|$, is $n$.
3. Two sets $A$ and $B$ *have the same cardinality*, denoted $|A|=|B|$, if there is a bijection $f:A\rightarrow B$ (also called a "one to one correspondence" in this context). If there is an injection $f:A\rightarrow B$ we say that $|A|\le |B|$. If $|A|\le |B|$ but $|A|\neq|B|$ then we write $|A|<|B|$.
4. An infinite set with $|A|=|\Z_+|$ is called *countable*. An infinite set which is not countable is called *uncountable*.
5. A function $f:\Z_+\rightarrow A$ is called a *sequence*. If $f$ is a bijection, the sequence is called an *enumeration* of $A$.

Theorems: The integers and rationals are countable. The interval $(0,1)$ is uncountable.

## Lecture 7: Division, Modular Arithmetic

1. If $a\neq 0$ and $b$ are integers, then $a$ *divides* $b$ (denoted $a\mid b$) if there exists an integer $k$ such that $b=ka$.
2. ("Division Algorithm") If $a$ is an integer and $d\in\Z_+$, then there are unique integers $q,r$ such that $a=dq+r$ and $0\le r<d$. $q$ is called the *quotient* and $r$ is called the *remainder*, sometimes denoted $r=a$ **mod** $d$.
3.
(Axiom) Well-ordering principle: Every nonempty subset of $\Z_{\ge 0}$ has a least element.
4. For $a,b\in\Z$ we say $a\equiv b \pmod{m}$ if $m\mid(b-a)$.
5. If $b\in\Z_+$ with $b>1$ and $a\in \Z_+$, then there is a unique way to write $a$ as a sum of multiples of powers of $b$ with coefficients in $\{0,\ldots,b-1\}$. This is called the *base $b$ expansion* of $a$.

## Lecture 8: Primes, GCD, and Bezout's Theorem

1. An integer $n>1$ is called *prime* if it has no positive divisors other than $1$ and itself. An integer $n>1$ is called *composite* if it is not prime.
2. If $a,b$ are integers, not both zero, then the *greatest common divisor* of $a$ and $b$, denoted $\gcd(a,b)$, is the largest $d\in \Z_+$ such that $d\mid a$ and $d\mid b$.

Theorems in this lecture: Infinitude of Primes, Fundamental Theorem of Arithmetic, Bezout's theorem.

## Lecture 9: Inverses

**Defn.** $\overline{a}$ is an *inverse of $a$ modulo $m$* if $a\overline{a}\equiv 1 \pmod{m}$.

## Lecture 10: Chinese Remainder Theorem, Cryptography

## Lectures 12/13: Induction, no definitions

## Lecture 14: Graphs, Paths, Circuits, Coloring

A **graph** is a pair $G=(V,E)$ where $V$ is a finite set of **vertices** and $E$ is a finite multiset of $2$-element subsets of $V$, called **edges**. If $E$ has repeated elements, $G$ is called a *multigraph*; otherwise it is called a *simple graph*. We will not consider directed graphs or graphs with loops (edges from a vertex to itself).

Two vertices $x,y\in V$ are **adjacent** if $\{x,y\}\in E$. An edge $e=\{x,y\}$ is said to be **incident with** $x$ and $y$. The **degree** of a vertex $x$ is the number of edges incident with it.

A **path** of length $k$ in $G$ is an alternating sequence of vertices and edges $$x_0, e_1, x_1, e_2, \ldots, x_{k-1},e_k,x_k$$ where $e_i=\{x_{i-1},x_i\}\in E$ for all $i=1,\ldots,k$. The path is said to *connect* $x_0,x_k$, *visit* the vertices $x_0,\ldots,x_k$, and *traverse* the edges $e_1,\ldots,e_k$. A path is called **simple** if it contains no repeated vertices. If $x_0=x_k$ then the path is called a **circuit**.
Note that in a simple graph (i.e., a graph with no multiple edges), a path is uniquely specified by the sequence of vertices $x_0,\ldots,x_k$ (as there is only one possible choice of edge between each pair of consecutive vertices).

A graph $G$ is *connected* if for every pair of distinct vertices $x,y$ of $G$, there is a path in $G$ between $x$ and $y$.

A **$k$-coloring** of $G$ is a function $f:V\rightarrow \{1,\ldots, k\}$ such that $f(x)\neq f(y)$ whenever $x$ is adjacent to $y$. A graph is **$k$-colorable** if it has a $k$-coloring. (In class we did the case $k=2$.)

## Lecture 15: $k$-coloring

A graph $H=(W,F)$ is a **subgraph** of $G$ if $W\subseteq V$ and $F\subseteq E$. If $W=V$ then $H$ is called a **spanning** subgraph of $G$.

A **$k$-coloring** of $G$ is a function $f:V\rightarrow \{1,\ldots, k\}$ such that $f(x)\neq f(y)$ whenever $x$ is adjacent to $y$.

**Defn.** If $G=(V,E)$ is a graph, a subset $S\subseteq V$ is called a *clique* if $\{x,y\}\in E$ for every pair of distinct vertices $x,y\in S$. $S$ is called an *independent set* if $\{x,y\}\notin E$ for every pair of distinct vertices $x,y\in S$.

**Defn.** If $G=(V,E)$ is a graph, a subgraph $H$ of $G$ is called a *connected component* of $G$ if $H$ is connected and *maximal*, i.e., it is not possible to add any vertices or edges to $H$ while keeping it connected (formally: if $H'$ is a subgraph of $G$ such that $H\neq H'$ and $H$ is a subgraph of $H'$, then $H'$ must be disconnected).

## Lectures 16-17: See the Graph Theory Notes

## Lecture 18: Counting

An ordering of $n$ distinct elements is called a *permutation*. An ordered sequence of $r$ distinct objects chosen from $n$ distinct objects is called an *$r$-permutation*. The number of $r$-permutations of $n$ objects is denoted $P(n,r)$.

An unordered set of $r$ distinct objects chosen from $n$ distinct objects is called an *$r$-combination*. The number of $r$-combinations of $n$ objects is denoted $\binom{n}{r}$ or $C(n,r)$.
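These counts are available directly in Python's standard library; a quick sanity check (a sketch, assuming Python 3.8+ for `math.perm` and `math.comb`):

```python
from math import comb, factorial, perm

# P(n, r): ordered selections of r objects from n, i.e., n!/(n-r)!
assert perm(5, 2) == 20

# C(n, r): unordered selections of r objects from n, i.e., n!/(r!(n-r)!)
assert comb(5, 2) == 10

# Each r-combination corresponds to r! orderings, so P(n, r) = C(n, r) * r!
assert perm(5, 2) == comb(5, 2) * factorial(2)
```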
## Lecture 19: Permutations and Combinations with Repetition

No definitions.

## Lecture 20: Labeled and Unlabeled Tokens in Boxes, Recurrences

A *recurrence relation* is a recursive definition of a sequence $(a_0,a_1,a_2,\ldots)$.

## Lecture 21: Pigeonhole Principle

Generalized Pigeonhole Principle: If $N$ objects are placed in $k$ boxes, then at least one box contains at least $\lceil N/k\rceil$ objects.

## Lecture 22: Probability

$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$

An *experiment* is a well-defined procedure with a set of possible *outcomes*. An *outcome* is a complete description of all relevant features of the result of an experiment. The set of all outcomes is called the *sample space* $S$. An *event* is a subset of the sample space, $E\subseteq S$.

A *probability space* consists of a sample space $S$ along with a function $p:S\rightarrow \R$ satisfying $p(s)\in [0,1]$ for all $s\in S$ and $\sum_{s\in S}p(s) = 1$. The *probability* of an event is $$\P(E)=\sum_{s\in E}p(s).$$

## Lecture 23: Conditional Probability and Independence

Two events $E,F\subseteq S$ are *disjoint* if $E\cap F=\emptyset$.

If $E,F$ are events and $\P(F)>0$, then the *probability of $E$ given $F$* (also called $E$ conditional on $F$) is $$\P(E|F)=\frac{\P(E\cap F)}{\P(F)}.$$

Two events $E$ and $F$ are *independent* if $$\P(E\cap F) = \P(E)\P(F).$$ If $\P(F)>0$ this is equivalent to $\P(E|F)=\P(E)$, and if $\P(E)>0$ this is equivalent to $\P(F|E)=\P(F)$.

## Lecture 24: Random Variables and Linearity of Expectation

$\newcommand{\E}{\mathbb{E}}$

A *random variable* on a probability space $(S,p)$ is a function $X:S\rightarrow \R$.
The *expectation* of a random variable is defined as $$\E X = \sum_{s\in S} p(s) X(s).$$ This may be equivalently rewritten as a sum over the range of $X$: $$\E X = \sum_{r\in \mathrm{range}(X)} r\cdot \P(X=r).$$

## Lecture 25: Examples of Distributions

If $X:S\rightarrow \R$ is a random variable on a probability space, the function $p_X(r)=\P(X=r)$ is called the *distribution* of the random variable.

A *Bernoulli* random variable with bias $q\in [0,1]$ has distribution $\P(X=1)=q$, $\P(X=0)=1-q$.

A *Binomial* random variable with bias $q\in [0,1]$ and $n$ trials has distribution $\P(X=k)=\binom{n}{k}q^k(1-q)^{n-k}$ for $0\le k\le n$.

A *Geometric* random variable with success probability $q$ has distribution $\P(X=k)=(1-q)^{k-1}q$ for $k=1,2,\ldots$.

## Lecture 26: Algebra with Random Variables, Variance

If $X,Y:S\rightarrow \R$ are random variables, their *sum* is defined as $(X+Y)(s)=X(s)+Y(s)$. If $c$ is a real number, the *scalar multiple* $cX$ is defined as $(cX)(s) = c\cdot X(s)$. The *product* is defined as $(XY)(s) = X(s)Y(s)$. Notice that doing algebra on random variables means doing algebra on their "outputs".

Two random variables $X,Y$ are *independent* if for all $r_1,r_2\in\R$, the events $\{X=r_1\}$ and $\{Y=r_2\}$ are independent. This condition implies that $\E XY = \E X \E Y$, which is not true in general.

The *variance* of a random variable is defined as $V(X) = \E (X-\mu_X)^2$, where $\mu_X=\E X$.

## Lecture 27: Concentration Inequalities

Markov's Inequality: If $Y$ is a nonnegative random variable, then for every $t>0$: $$\P(Y\ge t)\le \frac{\E Y}{t}.$$

Chebyshev's Inequality: If $X$ is any random variable, then for every $t>0$: $$\P(|X-\E X|\ge t)\le \frac{V(X)}{t^2}.$$
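On a finite probability space, both inequalities can be checked exactly by summing over outcomes. A minimal sketch in Python; the fair-die example is an illustrative assumption, not from the lectures:

```python
from fractions import Fraction

# Illustrative probability space (an assumption for this sketch):
# a fair six-sided die, S = {1,...,6}, p(s) = 1/6, and X(s) = s.
p = {s: Fraction(1, 6) for s in range(1, 7)}
X = {s: s for s in p}

EX = sum(p[s] * X[s] for s in p)              # expectation E[X] = 7/2
VX = sum(p[s] * (X[s] - EX) ** 2 for s in p)  # variance V(X) = 35/12

# Markov's inequality (X is nonnegative), with t = 5:
tail = sum(p[s] for s in p if X[s] >= 5)      # P(X >= 5) = 1/3
assert tail <= EX / 5                          # 1/3 <= 7/10

# Chebyshev's inequality, with t = 2:
dev = sum(p[s] for s in p if abs(X[s] - EX) >= 2)  # P(|X - 7/2| >= 2) = 1/3
assert dev <= VX / 2**2                        # 1/3 <= 35/48
```

Using `Fraction` keeps every probability exact, so the assertions verify the inequalities with no floating-point slack.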