## Response to Reviewer 5ZqR
Thank you for your valuable comments, and for taking the time to review our paper. We are glad that you appreciate the strength of our results and we thank you for supporting our paper.
## Response to Reviewer wurv
Thank you for your valuable comments and suggestions. We are glad that you appreciate our proof overview, and thank you for supporting our paper. We answer your specific questions below.
**"regularizer that depends on $\theta$..."**
Thank you for the suggestion. While there are settings where a regularizer that depends on the position $\theta$ may lead to a faster runtime, this would likely require additional access to the function $f$, or different assumptions on the structure of $f$, beyond what we assume in our paper.
For the class of functions considered in our paper, an $\ell_2$ regularizer which does not depend on $\theta$ is likely optimal. This is because we consider the class of functions $f$ which are $L$-Lipschitz or $\beta$-smooth with respect to the $\ell_2$ norm, and our bound on $L$ or $\beta$ does not depend on $\theta$. Moreover, we only have access to the function $f$ through an oracle which returns the value of $f$ at any given point $\theta$, but does not tell us how $f$ changes at nearby points.
We will add a remark about this in the final version.
**“The authors might want to also discuss the following paper…”**
Thank you for pointing us to this reference; we will discuss it in the related work section. In particular, we note that the bounds in [Chalkis, Fisikopoulos, Papachristou, Tsigaridas, ACM Transactions on Mathematical Software, 2023] assume that $f$ is $M$-strongly convex for some $M>0$ (we assume $f$ is convex, but not necessarily $M$-strongly convex for $M>0$), and thus are not directly comparable to our bounds.
**“matrix multiplication constant $\omega$.”**
We will add the value of $\omega$ to the introduction.
**“Notations…”** Thank you for pointing this out. We will add these definitions to the notation section.
**“lower bound of $R/r$...”** Any convex body satisfies the lower bound $R/r \geq 1$ ($R/r = 1$ for a ball, and one can construct a polytope which approximates a ball and for which $R/r$ is arbitrarily close to $1$).
Moreover, for any convex body $K$, there always exists a linear transformation $T$ such that $TK$ contains a ball of radius $r$ and is contained in a ball of radius $R$ with $R/r \leq O(\sqrt{d})$ (such a $TK$ is referred to as a “well-rounded” convex body).
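For instance, the hypercube $K=[-1,1]^d$ contains the unit ball and is contained in a ball of radius $\sqrt{d}$, so it is already well-rounded:
$$r=1,\qquad R=\sqrt{d},\qquad \frac{R}{r}=\sqrt{d}=O(\sqrt{d}).$$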
We will add this information to the table caption.
## Response to Reviewer ACUh
Thank you for your valuable comments and suggestions. We are glad you appreciate our theoretical results and the applications to differential privacy, and thank you for supporting our paper. We apologize for any lack of clarity in our presentation. We answer your specific questions below.
**“technical contributions can also be provided…”**
Our work makes the following novel technical contributions (discussed in Section 3).
**(1)** We introduce self-concordant barrier functions which simultaneously take into account the geometry of both the constraint polytope $K$ and the Lipschitz or smoothness property of the target log-concave function $f$.
**(2)** The main technical challenge in bounding the mixing time of our Dikin walk is to prove that the determinantal term $\frac{\det\Phi(z)}{\det\Phi(\theta)}$ in the Metropolis acceptance probability is $\Omega(1)$ with high probability (w.h.p.), where $\Phi$ is the Hessian of our regularized log-barrier function. In previous works on the Dikin walk, which use the log-barrier without a regularizer, the determinantal term can be bounded using the following inequality of [Vaidya, Atkinson, 1993], which holds for the Hessian $H$ of the log-barrier:
$$(\nabla V(\theta))^\top[H(\theta)]^{-1}\nabla V(\theta)\leq O(d)\qquad\forall\theta\in\textrm{int}(K),$$
where $V(\theta):=\log\det H(\theta)$. Unfortunately, this inequality does not hold for every self-concordant barrier function. We prove that it *does*, however, hold for our regularized barrier functions. To see why, we first show that the Hessian of our regularized barrier function, $\Phi(\theta)=\alpha^{-1}H(\theta)+\eta^{-1}I_d$, can be viewed as the limit of an infinite sequence $\{H_j(\theta)\}_{j=1}^\infty$ of matrices, where each $H_j$ is the Hessian of a log-barrier obtained by representing $K$ by an increasing set of (redundant) inequalities. Roughly, this allows us to show that the above inequality, which holds for any log-barrier, must also hold for our regularized barrier function (with $H(\theta)$ replaced by $\Phi(\theta)$ both in the inequality and in the definition of $V(\theta)$).
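Spelled out, the inequality we establish for the regularized barrier is
$$(\nabla V_\Phi(\theta))^\top[\Phi(\theta)]^{-1}\nabla V_\Phi(\theta)\leq O(d)\qquad\forall\theta\in\textrm{int}(K),\qquad\textrm{where }V_\Phi(\theta):=\log\det\Phi(\theta).$$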
**“…insights about how your algorithm can speed up the sampling…”**
The regularizer in our algorithm speeds up the runtime by allowing the Dikin walk to take larger steps, while still ensuring these steps are accepted w.h.p. by the Metropolis accept/reject rule for $f$. Taking larger steps allows our Dikin walk to converge more quickly to the target distribution $\propto e^{-f}$.
To see why, note that from any point $\theta$, the original Dikin walk proposes updates $z=\theta+y$, where $y$ is normally distributed with covariance matrix $\alpha H(\theta)^{-1}$; here $H$ is the Hessian of, e.g., the log-barrier function for $K$, and $\alpha>0$ is a hyperparameter. If one applies the Dikin walk to sample from a non-constant distribution $\pi\propto e^{-f}$, one needs to ensure that the stationary distribution it converges to is equal to $\pi$. This can be done by accepting the proposed step with probability given by the Metropolis rule $\min(1, e^{f(\theta)-f(z)})$ and rejecting it otherwise. However, the acceptance probability may be very low (e.g., if half the eigenvalues of $\alpha H(\theta)^{-1}$ are $>\frac{c}{dL^2}$ for some $c>\Omega(1)$, the acceptance probability may be exponentially small in $c$).
One approach (used in [Narayanan, Rakhlin, JMLR 2017]) to ensuring that the acceptance probability is high is to choose a smaller $\alpha$ such that all the eigenvalues of $\alpha H(\theta)^{-1}$ are $\leq\frac{1}{dL^2}$, which ensures that $e^{f(\theta)-f(z)}=\Omega(1)$ w.h.p. Unfortunately, this can lead the Dikin walk to propose steps whose covariance matrix has many unnecessarily small eigenvalues, since some eigenvalues of $H(\theta)^{-1}$ may be much larger than others (e.g., if $K$ is much wider in some directions than in others).
To overcome this, our Dikin walk proposes steps with covariance $(\alpha^{-1}H(\theta)+\eta^{-1}I_d)^{-1}$, where we set $\eta=\frac{1}{dL^2}$ (and set $\alpha$ to the same value $\frac{1}{d}$ used in prior works which apply in the special case where $f$ is constant). This ensures the largest eigenvalues of the covariance matrix are no larger than $\frac{1}{dL^2}$, *without* reducing (by more than a constant factor) the eigenvalues which were already $\leq\frac{1}{dL^2}$.
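To make this concrete, here is a minimal NumPy sketch of one step of a Metropolis-adjusted Dikin-style walk with the regularized Hessian (our own illustration: the function and variable names are ours, and details of our actual algorithm, such as lazification and the precise form of the acceptance rule, are simplified):

```python
import numpy as np

def regularized_dikin_step(theta, A, b, f, alpha, eta, rng):
    """One step of a Metropolis-adjusted Dikin-style walk with a regularized
    log-barrier Hessian (an illustrative sketch, not our exact algorithm).
    K = {x : A x <= b}; the target density is proportional to exp(-f)."""
    d = theta.shape[0]

    def Phi(x):
        # Regularized barrier Hessian: alpha^{-1} H(x) + eta^{-1} I_d, where
        # H(x) = A^T diag(b - A x)^{-2} A is the Hessian of the log-barrier.
        s = b - A @ x                      # slacks; positive for x in int(K)
        H = A.T @ (A / s[:, None] ** 2)
        return H / alpha + np.eye(d) / eta

    Phi_theta = Phi(theta)
    # Propose z ~ N(theta, Phi(theta)^{-1}) via a Cholesky factor of Phi(theta).
    C = np.linalg.cholesky(Phi_theta)
    z = theta + np.linalg.solve(C.T, rng.standard_normal(d))
    if np.any(A @ z >= b):                 # proposal left the polytope: reject
        return theta

    Phi_z = Phi(z)
    # Metropolis-Hastings log-ratio: determinantal term, Gaussian proposal
    # densities in both directions, and the target term f(theta) - f(z).
    log_ratio = (
        0.5 * (np.linalg.slogdet(Phi_z)[1] - np.linalg.slogdet(Phi_theta)[1])
        - 0.5 * (theta - z) @ Phi_z @ (theta - z)
        + 0.5 * (z - theta) @ Phi_theta @ (z - theta)
        + f(theta) - f(z)
    )
    return z if np.log(rng.uniform()) <= min(0.0, log_ratio) else theta
```

With $\alpha=\frac{1}{d}$ and $\eta=\frac{1}{dL^2}$ as above, the proposal covariance in this sketch is $(\alpha^{-1}H(\theta)+\eta^{-1}I_d)^{-1}$.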
We will add this discussion to Section 3.
**"…Some small examples…"**
Thank you for the suggestion. We will add one or two concrete examples, in addition to the examples given in Section 2 and in the last two columns of Table 1, to better illustrate the runtime improvement.
**“…computation of the Lip/smooth constant”**
Oftentimes, a bound on the Lipschitz or smoothness constant can be calculated analytically. This includes, e.g., applications to training Bayesian or differentially private logistic regression models (or other generalized linear models), as well as support vector machines (SVMs). In these applications, $f(\theta)=\sum_{i=1}^n\ell(\theta^\top x_i)$, where $\ell:\mathbb{R}\rightarrow\mathbb{R}$ is a convex loss and $\{x_1,\ldots,x_n\}\subseteq\mathbb{R}^d$ is a dataset. The loss $\ell$ may be $O(1)$-Lipschitz (e.g., if $\ell$ is the logistic loss $\ell(s)=\log(1+e^{-s})$, or the hinge loss $\ell(s)=\max(0,1-s)$ used to train SVMs) or may be $O(1)$-smooth (e.g., in logistic regression).
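For instance, for the logistic loss $\ell(s)=\log(1+e^{-s})$ one can verify directly that
$$|\ell'(s)|=\frac{e^{-s}}{1+e^{-s}}\leq 1\qquad\textrm{and}\qquad \ell''(s)=\frac{e^{-s}}{(1+e^{-s})^2}\leq\frac{1}{4}\qquad\forall s\in\mathbb{R},$$
so $\ell$ is $1$-Lipschitz and $\frac{1}{4}$-smooth; consequently, $f(\theta)=\sum_{i=1}^n\ell(\theta^\top x_i)$ is Lipschitz with constant at most $\sum_{i=1}^n\|x_i\|_2$ and smooth with constant at most $\frac{1}{4}\sum_{i=1}^n\|x_i\|_2^2$.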
When the Lipschitz or smoothness constant is not known, one can in practice set our algorithm's hyperparameters by hand so that the average acceptance probability is, e.g., $>\frac{1}{2}$. One can then run the Markov chain until an easily computed heuristic convergence metric (e.g., the autocorrelation time) falls below some desired value (see, e.g., [Durmus, Moulines, “High-dimensional Bayesian inference…,” Bernoulli 2019], who use a similar approach to choose the hyperparameters of a different Markov chain).
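As one concrete instantiation of this heuristic (a minimal sketch; the statistic, window, and truncation rule below are common choices, not necessarily those used in [Durmus, Moulines, 2019]):

```python
import numpy as np

def integrated_autocorrelation_time(x, max_lag=None):
    """Estimate the integrated autocorrelation time of a scalar statistic
    recorded along the Markov chain (e.g., one coordinate of theta)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    if max_lag is None:
        max_lag = n // 4
    # Normalized autocorrelation function at lags 0, 1, ..., max_lag - 1.
    acf = np.array([np.dot(x[: n - t], x[t:]) / np.dot(x, x) for t in range(max_lag)])
    # Truncate the sum at the first negative autocorrelation (a common heuristic).
    cutoff = int(np.argmax(acf < 0)) if np.any(acf < 0) else max_lag
    return 1.0 + 2.0 * acf[1:cutoff].sum()

# Example tuning loop: shrink the step size (hyperparameters) until, e.g.,
# integrated_autocorrelation_time(samples[:, 0]) is below a desired threshold.
```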
We will add a remark in the final version.
## Response to Reviewer D286
Thank you for your valuable comments and suggestions. We are glad that you appreciate our state-of-the-art results and their many applications to training differentially private and Bayesian ML models, and we thank you for supporting our paper. We answer your specific questions below.
**“…generalize to other barrier functions”**
It is an interesting question whether one can extend our results to more general barrier functions. In Appendix F we show that, if any $\nu'$-self-concordant barrier function for the polytope $K$ is used in place of the log-barrier function in our algorithm, then our regularized barrier function is self-concordant with parameter $\nu = \nu' + L^2 R^2$ (in, e.g., the setting where $f$ is $L$-Lipschitz). In the special case where $f$ is constant, [Laddha, Lee, Vempala, STOC 2020] show that their Dikin walk Markov chain has mixing time $O(\bar{\nu} d)$, but they require the barrier function to satisfy a stronger condition, “strong self-concordance with symmetric self-concordance parameter” $\bar{\nu}$ (in particular, they show that the Lewis weights barrier satisfies this stronger condition with parameter $\bar{\nu}=d$). Showing that our regularized barrier function is *strongly* self-concordant with symmetric self-concordance parameter $\bar{\nu} = d + L^2 R^2$ when, e.g., the Lewis weights barrier is used, is thus a natural direction for future work.
**"The algorithm takes $O(m d^{\omega-1})$ to form the Hessian matrix. This does not seem to be tight.”**
Investigating whether one can reduce the cost of computing the log-barrier Hessian matrix at each step of our algorithm is an interesting open problem. In particular, we note that [Laddha, Lee, Vempala, STOC 2020] show, in the special case where $f$ is constant, that the (average) cost of computing the Hessian matrix of the log-barrier of the polytope $K:=\{\theta \in \mathbb{R}^d : A \theta \leq b\}$ at each step of their Dikin walk can be improved to roughly $O(d^2 + \textrm{nnz}(A))$ arithmetic operations, where $\textrm{nnz}(A)$ denotes the number of non-zero entries of $A$. Whether their result can be extended to the problem of computing the regularized barrier functions used in our algorithms, in the more general setting where $f$ is $L$-Lipschitz or $\beta$-smooth, is an interesting direction for future work. We will discuss this in the Conclusions, limitations, and future work section.
**“Dependence on $L$ and $R$...”**
It may be possible to eliminate the polynomial dependence on $L$ and $R$, but doing so is beyond the scope of the current paper. This is a challenging problem; we discuss it in the Conclusions, limitations, and future work section, and in more detail in Appendix F.
One challenge in obtaining bounds which are independent of $L$ and $R$ is that the isoperimetric inequality used (in our paper, and in many prior works on the Dikin walk) to bound the mixing time of Dikin walk Markov chains is stated in terms of the cross-ratio distance for the polytope $K$. Roughly speaking, this metric measures the distance between Markov chain steps by how quickly these steps approach the boundary of $K$. Measured in the cross-ratio distance, the steps proposed by our Dikin walk (or, more generally, by any version of the Dikin walk whose steps are small enough that the term $e^{f(\theta)-f(z)}$ in the Metropolis acceptance probability is $\Omega(1)$ when $f$ is $L$-Lipschitz on a polytope contained in a ball of radius $R$) may be as small as roughly $O(\frac{1}{RL})$, and the mixing time bounds one would obtain via this isoperimetric inequality therefore depend polynomially on $RL$.
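For reference, for distinct $u,v\in\textrm{int}(K)$, letting $p,q\in\partial K$ denote the endpoints of the chord of $K$ through $u$ and $v$ (ordered so that $p,u,v,q$ appear in this order), the cross-ratio distance is
$$\sigma_K(u,v)=\frac{\|u-v\|_2\,\|p-q\|_2}{\|p-u\|_2\,\|v-q\|_2}.$$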