# Minor modification to BPCG algorithm

In this comment we briefly explain that the change to the BPCG algorithm in the `FrankWolfe.jl` package does not adversely affect convergence (up to a very small constant factor), while improving sparsity. The starting point is Lemma 3.5 in the [original BPCG paper](https://arxiv.org/pdf/2110.12650.pdf). The algorithm now uses the new condition:

$$
\tag{relaxCond}
K \cdot \langle \nabla f(x_t), a_t - s_t \rangle \geq \langle \nabla f(x_t), x_t - w_t \rangle,
$$

with $K \geq 1.0$ a factor equal to the lazification parameter $K$; the original algorithm simply had $K = 1.0$, however for better sparsity it can be helpful to favor local steps with $K > 1.0$ (see also below).

## Change in main argument

**Lemma.** (Modified version of Lemma 3.5) Let $d_t$ denote the direction of the step taken at iteration $t$, i.e., $d_t = a_t - s_t$ for a pairwise step and $d_t = x_t - w_t$ for a FW step. Then

$$
(K+1) \cdot \langle \nabla f(x_t), d_t \rangle \geq \langle \nabla f(x_t), a_t - w_t \rangle.
$$

**Proof Sketch.** If we take a pairwise step we have

$$
K \cdot \langle \nabla f(x_t), a_t - s_t \rangle \geq \langle \nabla f(x_t), x_t - w_t \rangle.
$$

Moreover, as before it holds that

$$
\tag{convexComb}
\langle \nabla f(x_t), x_t \rangle \geq \langle \nabla f(x_t), s_t \rangle,
$$

so that we obtain

$$
K \cdot \langle \nabla f(x_t), a_t - s_t \rangle \geq \langle \nabla f(x_t), s_t - w_t \rangle.
$$

Now we add $\langle \nabla f(x_t), a_t - s_t \rangle$ to both sides of this inequality and obtain:

$$
(K+1) \cdot \langle \nabla f(x_t), a_t - s_t \rangle \geq \langle \nabla f(x_t), a_t - w_t \rangle.
$$

In case we took a normal FW step it holds:

$$
K \cdot \langle \nabla f(x_t), a_t - s_t \rangle < \langle \nabla f(x_t), x_t - w_t \rangle,
$$

and together with (convexComb)

$$
K \cdot \langle \nabla f(x_t), a_t - x_t \rangle < \langle \nabla f(x_t), x_t - w_t \rangle.
$$

Now adding $K \cdot \langle \nabla f(x_t), x_t - w_t \rangle$ to both sides we obtain

$$
K \cdot \langle \nabla f(x_t), a_t - w_t \rangle < (K+1) \cdot \langle \nabla f(x_t), x_t - w_t \rangle,
$$

and hence

$$
\langle \nabla f(x_t), a_t - w_t \rangle < \frac{K+1}{K} \cdot \langle \nabla f(x_t), x_t - w_t \rangle \leq (K+1) \cdot \langle \nabla f(x_t), x_t - w_t \rangle,
$$

as required.

**Note.** The default parameter in `FrankWolfe.jl` is $K = 2.0$, favoring local steps at a (potentially) slightly reduced convergence speed by a constant factor.

**Note.** Optimality over the local active set implies that the away-gap is $0$.

**Note.** Non-lazy BPCG is sparser than lazy BPCG. While counter-intuitive, the optimization over the local active set already maximizes sparsity, and the non-lazy variant uses tighter $\Phi$ bounds as they are updated in each iteration.
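For concreteness, here is a minimal sketch of how (relaxCond) steers the choice between a local pairwise step and a global FW step. The function and variable names (`bpcg_direction`, `grad`, etc.) are illustrative assumptions and not the actual `FrankWolfe.jl` internals; it only mirrors the condition stated above.

```julia
using LinearAlgebra

# Minimal sketch of the relaxed condition (relaxCond); illustrative only,
# not the actual FrankWolfe.jl implementation. Returns a descent direction
# d_t for the update x ← x - γ d_t, where:
#   grad : ∇f(x_t)                         x    : current iterate x_t
#   a, s : away vertex a_t and local FW vertex s_t over the active set
#   w    : global FW vertex w_t            K    : lazification factor, K ≥ 1
function bpcg_direction(grad, x, a, s, w; K = 2.0)
    if K * dot(grad, a - s) >= dot(grad, x - w)  # (relaxCond)
        return a - s   # local pairwise step: shifts weight from a_t to s_t
    else
        return x - w   # global FW step: may add w_t to the active set
    end
end
```

With the default $K = 2.0$ the local pairwise branch is taken even when it promises only half the progress of the global FW step, which is exactly the sparsity-favoring trade-off discussed above.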
# Using the FW gap instead of the Strong FW gap

Recall that gaps are proxies for primal progress via the smoothness inequality (assuming no clipping of the short-step step-size). Thus many algorithms compare the respective gaps of the associated directions to pick the most promising direction for making progress. There are various notions of gaps, and in the `FrankWolfe.jl` package the implementations of some of the algorithms deviate from the textbook variants in that they all use the standard FW gap as dual information (to maintain consistency across algorithms). We will now argue that this induces at most a constant factor degradation in the convergence rates.

The algorithms where this is relevant are active set-based FW methods (away-step FW, pairwise CG, blended pairwise CG, blended CG, fully-corrective FW, etc.) that typically use the strong FW gap

$$
\tag{strongFWgap}
\max_{v \in P, a \in S(x)} \langle \nabla f(x), a - v \rangle,
$$

with $S(x)$ being the active set used for the representation of $x$, to ensure sufficient primal progress, especially when lazifying algorithms; we will discuss this separately below. We will now argue that using the standard FW gap

$$
\tag{FWgap}
\max_{v \in P} \langle \nabla f(x), x - v \rangle,
$$

for step selection, together with the selection criteria in the algorithm, also implies the corresponding inequality for the strong FW gap (with a scaling factor). This ensures that we make enough progress relative to the strong FW gap as well:

$$
\tag{progressEstimate}
\begin{align*}
f(x_t) - f(x_{t+1}) & \geq \frac{\max_{v \in P} \langle \nabla f(x), x - v \rangle^2}{2LD^2} \\
& \geq O(1) \frac{\max_{v \in P, a \in S(x)} \langle \nabla f(x), a - v \rangle^2}{2LD^2}.
\end{align*}
$$

To this end, let $x$ be an iterate and $S(x)$ its active set. We define:

$$
\begin{align*}
w & \leftarrow \arg\max_{v \in P} \langle \nabla f(x), x - v \rangle & \text{(global FW vertex)} \\
v & \leftarrow \arg\max_{v \in S(x)} \langle \nabla f(x), x - v \rangle & \text{(local FW vertex)} \\
a & \leftarrow \arg\max_{a \in S(x)} \langle \nabla f(x), a - x \rangle & \text{(away vertex)}
\end{align*}
$$

so that the strong FW gap becomes $\langle \nabla f(x), a - w \rangle$ and the FW gap becomes $\langle \nabla f(x), x - w \rangle$. Further, let $\kappa \geq 1$ be some multiplicative guarantee; the typical choice is $\kappa = 1.0$ (or $\kappa = 2.0$ for BPCG to promote sparsity).

## Away Step

We select an away step if

$$
\kappa \langle \nabla f(x), a - x \rangle \geq \langle \nabla f(x), x - w \rangle,
$$

i.e., the away gap promises more progress than the regular FW gap. Now simply add $\langle \nabla f(x), a - x \rangle$ to both sides of that inequality to obtain:

$$
(\kappa + 1) \langle \nabla f(x), a - x \rangle \geq \langle \nabla f(x), a - w \rangle,
$$

i.e., the gap of the away step (the away gap) can be lower bounded by a multiple of the strong FW gap.

## Local FW Step

We typically select a local FW step with the vertex $v$ if

$$
\kappa \langle \nabla f(x), x - v \rangle \geq \langle \nabla f(x), x - w \rangle,
$$

and the associated local gap is larger than the away gap:

$$
\langle \nabla f(x), x - v \rangle \geq \langle \nabla f(x), a - x \rangle.
$$

It suffices to add up these two inequalities to obtain:

$$
(\kappa + 1) \langle \nabla f(x), x - v \rangle \geq \langle \nabla f(x), a - w \rangle,
$$

i.e., the local FW gap can be lower bounded by a multiple of the strong FW gap.

## Local Pairwise Step

We typically select a local pairwise step with the vertices $v$ and $a$ if

$$
\kappa \langle \nabla f(x), a - v \rangle \geq \langle \nabla f(x), x - w \rangle.
$$

Simply add the away gap $\langle \nabla f(x), a - x \rangle$ (which is always nonnegative) to both sides to obtain

$$
\kappa \langle \nabla f(x), a - v \rangle + \langle \nabla f(x), a - x \rangle \geq \langle \nabla f(x), a - w \rangle,
$$

and observe that $\langle \nabla f(x), a - x \rangle \leq \langle \nabla f(x), a - v \rangle$, so that we obtain

$$
(\kappa + 1) \langle \nabla f(x), a - v \rangle \geq \langle \nabla f(x), a - w \rangle,
$$

as desired.
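The three criteria above can be collected into a small dispatch routine. The sketch below is one possible ordering of the tests under assumed names (`vertices` as an explicit vertex list, `active` as an index set into it); it is not the `FrankWolfe.jl` implementation. It returns a descent direction $d$ for the update $x \leftarrow x - \gamma d$, matching the notation used in the inequalities above.

```julia
using LinearAlgebra

# Minimal sketch of the step-selection tests above; illustrative only.
# The polytope P is assumed to be given by its vertex list `vertices`,
# the active set S(x) by the index set `active`.
function select_direction(grad, x, vertices, active; kappa = 1.0)
    # Global FW vertex w, local FW vertex v, away vertex a.
    w = vertices[argmax([dot(grad, x - u) for u in vertices])]
    v = vertices[active[argmax([dot(grad, x - vertices[i]) for i in active])]]
    a = vertices[active[argmax([dot(grad, vertices[i] - x) for i in active])]]

    fw_gap    = dot(grad, x - w)   # standard FW gap  ⟨∇f(x), x - w⟩
    away_gap  = dot(grad, a - x)   # away gap         ⟨∇f(x), a - x⟩
    local_gap = dot(grad, x - v)   # local FW gap     ⟨∇f(x), x - v⟩

    if kappa * local_gap >= fw_gap && local_gap >= away_gap
        return (:local_fw, x - v)             # local FW step
    elseif kappa * away_gap >= fw_gap
        return (:away, a - x)                 # away step
    elseif kappa * (away_gap + local_gap) >= fw_gap
        return (:local_pairwise, a - v)       # ⟨∇f(x), a - v⟩ = away_gap + local_gap
    else
        return (:frank_wolfe, x - w)          # global FW step
    end
end
```

Note that each branch tests the relevant gap against the same quantity, the standard FW gap, which is the point of the argument: whichever step is selected, its gap is within a constant factor of the strong FW gap.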
## FW Step

The argument for the standard FW step is slightly different and more involved. The key is that we take a standard FW step if the "better steps" do not provide enough progress. This means that typically both the away step condition and the local FW step condition (or the local pairwise condition) are not met, i.e.,

$$
\kappa \langle \nabla f(x), a - x \rangle < \langle \nabla f(x), x - w \rangle,
$$

and

$$
\kappa \langle \nabla f(x), x - v \rangle < \langle \nabla f(x), x - w \rangle,
$$

and adding these two up we arrive at the combined inequality:

$$
\kappa \langle \nabla f(x), a - v \rangle < 2 \langle \nabla f(x), x - w \rangle,
$$

which also happens to be the local pairwise condition up to the factor of $2$; this does not affect the argument, as the factor can simply be absorbed into $\kappa$. Now add $\kappa \langle \nabla f(x), v - w \rangle$ to both sides and we obtain:

$$
\kappa \langle \nabla f(x), a - w \rangle < 2 \langle \nabla f(x), x - w \rangle + \kappa \langle \nabla f(x), v - w \rangle \leq (\kappa + 2) \langle \nabla f(x), x - w \rangle,
$$

as $\langle \nabla f(x), x - v \rangle \geq 0$, so that we finally have

$$
\frac{\kappa + 2}{\kappa} \langle \nabla f(x), x - w \rangle \geq \langle \nabla f(x), a - w \rangle.
$$

## Lazification

So far we have only considered the non-lazy case, where we use the desired inequalities to ensure enough primal progress. When lazifying we typically maintain a dual gap estimate $\Phi$ that is successively halved during the run of the algorithm. The logic in the algorithms changes so that we compare the step selection not against the FW gap $\langle \nabla f(x), x - w \rangle$ but rather against the estimate $\Phi$; e.g., for the away step we replace

$$
\kappa \langle \nabla f(x), a - x \rangle \geq \langle \nabla f(x), x - w \rangle,
$$

by

$$
\kappa \langle \nabla f(x), a - x \rangle \geq \Phi,
$$

in the algorithms. With this, (progressEstimate) changes to:

$$
\tag{progressEstimateScaling}
f(x_t) - f(x_{t+1}) \geq \frac{\Phi^2}{2LD^2}.
$$

Now we need to relate $\Phi$ back to the strong FW gap $\langle \nabla f(x), a - w \rangle$. This only needs to be done when we update $\Phi$, which happens in a dual step where none of the steps promises enough progress compared to $\Phi$. In these cases we have the following inequalities implied by the algorithms:

$$
\kappa \langle \nabla f(x), a - x \rangle < \Phi,
$$

(i.e., no away step with progress promise $\Phi$ is possible) and

$$
\kappa \langle \nabla f(x), x - w \rangle < \Phi,
$$

(i.e., no standard FW step with progress promise $\Phi$ is possible), and adding these two up we arrive at the combined inequality, where the inner product on the left is the strong FW gap:

$$
\tag{phiRelation}
\kappa \langle \nabla f(x), a - w \rangle < 2 \Phi.
$$

With (phiRelation) we can now invoke standard inequalities to obtain the convergence rates.

**Example.** Geometric strong convexity (for the strongly convex case over polytopes) gives

$$
h(x) \leq O(1) \langle \nabla f(x), a - w \rangle^2 \leq O(1) \frac{4}{\kappa^2} \Phi^2,
$$

which combined with (progressEstimateScaling) gives:

$$
f(x_t) - f(x_{t+1}) \geq \frac{\Phi^2}{2LD^2} \geq \frac{\kappa^2}{O(1)\, 8 L D^2} h(x_t).
$$
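To illustrate where (phiRelation) enters, here is a minimal sketch of the lazified selection logic, assuming the vertices $a$, $v$, $w$ have already been computed as before; the names are again hypothetical and this is not the actual `FrankWolfe.jl` code.

```julia
using LinearAlgebra

# Minimal sketch of the lazified step selection described above; illustrative
# only. `phi` is the dual gap estimate Φ, halved in a dual step whenever no
# direction promises progress at least Φ. Returns the step type, a descent
# direction (or `nothing` for a dual step), and the updated Φ.
function lazy_select(grad, x, a, v, w, phi; kappa = 1.0)
    away_gap  = dot(grad, a - x)
    local_gap = dot(grad, x - v)
    fw_gap    = dot(grad, x - w)
    if kappa * (away_gap + local_gap) >= phi
        return (:local_pairwise, a - v, phi)   # local step, Φ unchanged
    elseif kappa * fw_gap >= phi
        return (:frank_wolfe, x - w, phi)      # global FW step, Φ unchanged
    else
        # Dual step: no step promises progress Φ, so both conditions above
        # fail; by (phiRelation) κ⟨∇f(x), a - w⟩ < 2Φ and we halve the estimate.
        return (:dual, nothing, phi / 2)
    end
end
```

The dual-step branch is exactly the situation in which (phiRelation) holds, which is what ties the estimate $\Phi$ back to the strong FW gap in the convergence argument above.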