# [Public] Exercises + Some Solutions

## 1. Detecting a negative-weight cycle

Design and analyze a randomized algorithm that, given an input directed weighted graph $G=(V, E)$, in near-linear ($O(|E| (\log |E|)^{O(1)})$) time returns a negative-weight cycle in $G$ if one exists (otherwise, the algorithm returns "no negative cycle").

You can assume that there exists an algorithm $SSSP(G,s)$ that takes as input a graph $G$ and a source $s$ and in near-linear ($O(|E| (\log |E|)^{O(1)})$) time returns the following:

* If the graph $G$ contains a negative-weight cycle, then the algorithm returns "negative cycle" (but it does not report such a cycle).
* If the graph $G$ does not contain a negative-weight cycle, then the algorithm returns a shortest path tree $T$ from $s$.

**Hint:**

1. Assume that we are given a graph $H=(V, E, w)$ with weight function $w$ such that the minimum-weight cycle (say $C$) has weight zero. Prove that for an arbitrary $s$, if we set the price of every node $v$ to $p(v)=dist_H(s, v)$, then for every edge $(u,v)$ in $C$, the "adjusted" weight $w_p(u,v):=w_H(u,v)+p(u)-p(v)$ is zero.
2. For the graph $H$ with adjusted weight $w_p$ above, can you find a zero-weight cycle? Can you use this to find a zero-weight cycle in $H$ (with weight $w$)?
3. Now assume that the input graph $G$ contains a negative-weight cycle and that we know a number $x$ such that adding $x$ to the weight of every edge results in a graph $H$ whose minimum-weight cycle (say $C$) has weight zero. Can you use the algorithm above to find a negative-weight cycle in $G$?

==Solution:== Section 7 in [BNW22]

[BNW22] Negative-Weight Single-Source Shortest Paths in Near-linear Time. [arXiv](https://arxiv.org/abs/2203.03456)

## 2. ScaleDown(G,B,Δ)

In the lecture I stated ScaleDown(G,B,Δ) and some theorems without proofs. Prove Theorems 1 and 3. The proofs are very similar to the proofs for ScaleDown(G,1) explained in the lecture. ([Slides](http://tiny.cc/vjltuz))

## 3. Spanner via graph decompositions

Given an undirected unweighted graph $G=(V, E)$ and an integer $k\geq 1$, a *$k$-spanner* is a subgraph $H=(V, E'\subseteq E)$ of $G$ such that for every $u,v\in V$, $dist_H(u,v)\leq k\cdot dist_G(u,v)$.

Design and analyze an algorithm that, given an undirected unweighted input graph $G=(V,E)$, takes $O(|E| (\log |E|)^{O(1)})$ expected time and, w.h.p., returns a $(\log |E|)^{O(1)}$-spanner containing $O(|V| (\log |E|)^{O(1)})$ edges. Can you extend your algorithm to return a $k$-spanner? How many edges does your spanner contain?

**Hints:**

1. If $G$ has diameter $(\log |E|)^{O(1)}$, how can you compute its spanner?
2. Can you use LDD/expander decompositions to partition the edges of $G$ into $(\log |E|)^{O(1)}$ sets where each connected component of each set has diameter $(\log |E|)^{O(1)}$?

==**Solution:**==

* Run LDD with $D=\mathrm{polylog}(n)$. Then run it again on the inter-cluster edges. Each time, the number of edges is reduced by a polylog factor, so we have to repeat only polylog many times.
* For each connected component of each edge set, which has polylog diameter, compute a spanning tree. This is a polylog-spanner for that component.

==**Bonus question:**== Can you use expander decomposition instead of LDD?

## 4. (Directed) LDD with strong diameter

Solve the exercise at the end of Section 3 of the [LDD notes](https://hackmd.io/@U0nm1XUhREKPYLt1eUmE6g/Sycpovkiq).

## 5. Uniform distribution?

Consider modifying the UndiLDD algorithm in Section 2.1 of the [LDD notes](https://hackmd.io/@U0nm1XUhREKPYLt1eUmE6g/Sycpovkiq) by selecting $R_v$ uniformly from $[1,D]$ instead. Argue that Theorem 1 no longer holds.

==Solution sketch==

Let $D=\Theta(n^{0.1})$. Consider a graph with $k\approx n/D$ paths $P_1, \ldots, P_k$, where $P_i$ is a path of length $D-1$ from node $v_i$ to node $c$. All paths share exactly one node, $c$. Add an edge $(c,v)$.
The probability that $(c,v)$ is in $E_{rem}$ when we start growing the balls from $v_1, v_2, \ldots$ is $\sum_{i=0}^k (1-1/D)^i (1/D)$, which is $\Omega(1)$.

## 6. Hypergraph global mincut

Solve the exercise at the end of Section 3 in the [isolating cut notes](https://hackmd.io/@U0nm1XUhREKPYLt1eUmE6g/rku_qeUn9).

## 7. Symmetric submodular minimization (optional)

Prove the theorem below.

**Definitions:**

* A function $f:2^V\rightarrow R$ is *submodular* if for every $X,Y\subseteq V$ with $X\subseteq Y$ and every $x\in V\setminus Y$, $f(X\cup \{x\})-f(X) \geq f(Y\cup \{x\})-f(Y)$.
* Moreover, $f$ is *symmetric* if $f(X)=f(V\setminus X)$ for every $X\subseteq V$.
* For any function $f$, we assume that there is an *oracle* $O_f$ that we can *query* for the value of $f(X)$ for any $X\subseteq V$.

**Assume** that for any symmetric submodular function $f:2^V\rightarrow R$ and any distinct $s,t\in V$, we can compute $\min_{X\subseteq V,\, s\in X,\, t\notin X} f(X)$ by making $O(|V|)$ queries to $O_f$.

**Theorem:** Let $f:2^V\rightarrow R$ be a symmetric submodular function. There is a randomized algorithm that computes $\min_{\emptyset \subsetneq X \subsetneq V} f(X)$ by making $O(|V|\log^{O(1)}(|V|))$ queries to $O_f$.

*Hint:* Use the isolating cut technique.

*Remark:* Computing $\min_{X\subseteq V,\, s\in X,\, t\notin X} f(X)$ by making $O(|V|)$ queries to $O_f$ is called *$st$-submodular function minimization* ($st$-SFM), and computing $\min_{\emptyset \subsetneq X \subsetneq V} f(X)$ is called *nontrivial* submodular function minimization. The proof of the above theorem can be extended to show that computing nontrivial SFM takes essentially the same query complexity as computing SFM.

==Solution:== See [CQ21](https://drops.dagstuhl.de/opus/volltexte/2021/14119/) and [MN21](https://arxiv.org/abs/2103.15724)
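To make the objects in Exercise 7 concrete, here is a small brute-force sketch in Python. It instantiates a canonical symmetric submodular function (the cut function of an undirected graph), a brute-force stand-in for the assumed $st$-SFM subroutine, and the naive reduction from nontrivial SFM to $|V|-1$ calls of $st$-SFM (valid by symmetry: the optimal $X$ may be assumed to contain a fixed $s$, and some $t\neq s$ lies outside it). This is only a sanity check of the definitions, not the near-linear isolating-cut algorithm of the theorem; the names `cut_function`, `st_sfm`, and `nontrivial_sfm` are illustrative, not from any library.

```python
from itertools import combinations

def cut_function(adj):
    """Return the cut function of an undirected graph, a canonical
    symmetric submodular function. adj: dict node -> set of neighbors."""
    def f(X):
        X = set(X)
        # Count each edge with exactly one endpoint in X once (from the X side).
        return sum(1 for u in X for v in adj[u] if v not in X)
    return f

def st_sfm(f, V, s, t):
    """Brute-force st-SFM: min f(X) over X with s in X, t not in X.
    Stands in for the assumed O(|V|)-query st-SFM subroutine."""
    rest = [v for v in V if v not in (s, t)]
    best = f({s})
    for r in range(1, len(rest) + 1):
        for sub in combinations(rest, r):
            best = min(best, f({s, *sub}))
    return best

def nontrivial_sfm(f, V):
    """Naive reduction: fix s = V[0]; since f is symmetric, the optimal
    nontrivial X may be assumed to contain s, and some t != s is outside it,
    so minimizing st-SFM(s, t) over all t gives the nontrivial minimum."""
    s = V[0]
    return min(st_sfm(f, V, s, t) for t in V[1:])

# Example: two triangles {0,1,2} and {3,4,5} joined by the bridge edge (2,3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
f = cut_function(adj)
print(nontrivial_sfm(f, list(adj)))  # global mincut of the bridge graph: 1
```

The brute force makes exponentially many queries per $st$-SFM call, and even with the assumed $O(|V|)$-query $st$-SFM this reduction uses $O(|V|^2)$ queries overall; the point of the theorem is that the isolating-cut technique brings this down to $O(|V|\log^{O(1)}|V|)$.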