# From Chebyshev to Primorials: Establishing the Riemann Hypothesis
**Author:** Frank Vega
**Affiliation:** Information Physics Institute, 840 W 67th St, Hialeah, FL 33012, USA
**Email:** vega.frank@gmail.com
**ORCID:** 0000-0001-8210-4126
---
## Abstract
The Nicolas criterion gives an equivalent formulation of the Riemann Hypothesis as an inequality involving the Euler totient function evaluated at primorial numbers. A natural strategy for establishing this inequality is to prove that a suitable subsequence of the associated ratio sequence is eventually strictly decreasing under the assumption that the Riemann Hypothesis is false. The present work shows that such a subsequence exists. When this monotonicity property is combined with the known limiting behavior of the ratio sequence and the Nicolas equivalence, a contradiction emerges: assuming the Riemann Hypothesis is false forces the subsequence to converge to a limit that is simultaneously equal to $e^{\gamma}$ (by a subsequence argument) and strictly less than $e^{\gamma}$ (by strict monotonicity). The Riemann Hypothesis therefore follows as a direct consequence.
**Keywords:** Riemann Hypothesis; Chebyshev function; prime numbers; primorials; Mertens' theorem
**MSC:** 11M26, 11A25, 11A41
---
## 1. Introduction
The Riemann Hypothesis, proposed by Bernhard Riemann in 1859, asserts that every non-trivial zero of the Riemann zeta function $\zeta(s)$ lies on the critical line $\Re(s) = \frac{1}{2}$. This conjecture is widely regarded as the most important unsolved problem in pure mathematics: it constitutes a central part of Hilbert's eighth problem and is one of the seven Clay Mathematics Institute Millennium Prize Problems [[CO16]](#References). The zeta function $\zeta(s)$, initially defined for $\Re(s) > 1$ and analytically continued to the entire complex plane, has trivial zeros at the negative even integers $s = -2, -4, -6, \ldots$ and non-trivial zeros confined to the critical strip $0 < \Re(s) < 1$.
Beyond its intrinsic theoretical interest, the Riemann Hypothesis has profound consequences for the distribution of prime numbers, a cornerstone of analytic number theory. Numerous equivalent formulations appear in the literature [[CO16]](#References). The one relevant to this work is due to Nicolas [[NI83, BRO17]](#References): the Riemann Hypothesis holds if and only if
$$\frac{N_k}{\varphi(N_k)} > e^{\gamma} \log \log N_k \qquad \text{for all } k \ge 1,$$
where $\gamma$ is the Euler--Mascheroni constant, $\varphi$ is Euler's totient function, and $N_k = \prod_{i=1}^{k} p_i$ is the $k$-th primorial.
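The criterion can be checked numerically for small $k$. The sketch below (Python, standard library only) verifies the inequality for the first hundred primorials, computing $N_k/\varphi(N_k) = \prod_{i \le k} p_i/(p_i - 1)$ and $\log N_k = \theta(p_k)$ in floating point so that the primorials themselves are never formed. A finite check of this kind illustrates the criterion but of course decides nothing about all $k$.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler--Mascheroni constant

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

def nicolas_holds(k, primes):
    """Does N_k/phi(N_k) > e^gamma * log log N_k hold for the k-th primorial?

    Uses N_k/phi(N_k) = prod p/(p-1) and log N_k = theta(p_k) = sum log p,
    so N_k itself is never formed.
    """
    ratio, log_Nk = 1.0, 0.0
    for p in primes[:k]:
        ratio *= p / (p - 1)
        log_Nk += math.log(p)
    return ratio > math.exp(EULER_GAMMA) * math.log(log_Nk)

primes = primes_up_to(1000)
# Start at k = 2: log log N_1 = log log 2 is negative.
checks = [nicolas_holds(k, primes) for k in range(2, 100)]
print(all(checks))  # True for this range
```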
### 1.1. Overview of the Approach
Our argument is a proof by contradiction structured around the *Nicolas ratio*
$$R(n) = \frac{N_n}{\varphi(N_n) \log \log N_n}, \qquad n \ge 1,$$
whose asymptotic behavior is governed by Mertens' third theorem: $R(n) \to e^{\gamma}$ as $n \to \infty$ (Proposition 2). By the Nicolas criterion, the Riemann Hypothesis is equivalent to the inequality $R(n) > e^{\gamma}$ holding for every $n \ge 2$.
Assume the Riemann Hypothesis is false. Nicolas' oscillation result (Proposition 3) then forces $R(n) < e^{\gamma}$ for infinitely many indices; equivalently, the auxiliary function $f(p_n) = e^{\gamma}/R(n)$ exceeds $1$ at those same indices.
The core of the proof (Section 3) is the explicit construction of a strictly increasing sequence of indices $n_1 < n_2 < \cdots$ along which:
1. $R(n_j) < e^{\gamma}$ for every $j$ (each term lies below the asymptotic value), and
2. $R(n_1) > R(n_2) > \cdots$ (the subsequence is strictly decreasing).
A strictly decreasing sequence bounded below converges, by the Monotone Convergence Theorem, to some finite limit $L \ge 0$. However, since $(R(n_j))$ is a subsequence of the convergent sequence $(R(n))$, it must share the same limit: $L = e^{\gamma}$. On the other hand, strict monotonicity forces every term to satisfy $R(n_j) < R(n_1) < e^{\gamma}$, so $L \le R(n_1) < e^{\gamma}$. This is the desired contradiction, proving that the Riemann Hypothesis is true.
---
## 2. Materials and Methods
### 2.1. Definitions
We collect the objects used throughout the paper.
**Definition 1 (Chebyshev's Prime-Counting Function).** The *Chebyshev function* $\theta\colon[2,\infty)\to\mathbb{R}$ is defined by
$$\theta(x) = \sum_{p \le x} \log p,$$
where the sum runs over all primes $p \le x$. Equivalently, $\theta(x) = \log \prod_{p \le x} p$. The prime number theorem is equivalent to $\theta(x) \sim x$ as $x \to \infty$.
**Definition 2 (Primorial Numbers).** The *$k$-th primorial* $N_k$ is the product of the first $k$ primes:
$$N_k = \prod_{i=1}^{k} p_i = p_1 \cdot p_2 \cdots p_k,$$
where $p_i$ denotes the $i$-th prime ($p_1 = 2$, $p_2 = 3$, $p_3 = 5$, $\ldots$).
**Definition 3 (Nicolas Ratio).** For an integer $n \geq 2$ (so that $\log \log N_n > 0$), the *Nicolas ratio* is
$$R(n) = \frac{N_n}{\varphi(N_n) \log \log N_n}.$$
By the Nicolas criterion [[NI83, BRO17]](#References), the Riemann Hypothesis holds if and only if $R(n) > e^{\gamma}$ for all $n \geq 2$; for $n = 1$ the inequality of the Introduction holds trivially, since $\log \log N_1 < 0$.
**Definition 4 (Auxiliary Analytic Functions).** The following real-valued functions of $x \geq 2$ appear throughout the proof.
**($a$) The Mertens-type product.**
$$f(x) = e^{\gamma} \log \theta(x) \cdot \prod_{p \le x} \left(1 - \frac{1}{p}\right).$$
This quantity interpolates between Mertens' third theorem ($\prod_{p \le x}(1-1/p) \sim e^{-\gamma}/\log x$) and the prime number theorem ($\theta(x) \sim x$) [[NI83, BRO17]](#References).
**($b$) The logarithm of $f$.** By Nicolas's expansion [[NI83, BRO17]](#References),
$$\log f(x) = U(x) + u(x),$$
where $U$ and $u$ are defined below.
**($c$) The partial-sum function $U$.**
$$U(x) = \log \log \theta(x) - \sum_{p \le x} \frac{1}{p} + B,$$
where $B$ is the *Meissel--Mertens constant* [[Mer74]](#References),
$$B = \gamma + \sum_{p} \left(\log\left(1 - \frac{1}{p}\right) + \frac{1}{p}\right).$$
**($d$) The tail-correction function $u$.**
$$u(x) = \sum_{p > x} \left(\log\left(\frac{p}{p-1}\right) - \frac{1}{p}\right).$$
This series converges absolutely for every $x \geq 2$, since each term is $O(p^{-2})$ by the Taylor expansion of $\log(1+t)$. Moreover, $u(x) \le \frac{1}{2(x-1)}$ [[NI83, BRO17]](#References).
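The decomposition $\log f(x) = U(x) + u(x)$ can be checked numerically. In the Python sketch below (standard library only), both the constant $B$ and the tail $u(x)$ are truncated at the same cutoff; with a common cutoff the truncation errors cancel identically, so the identity holds to machine precision, and the bound $u(x) \le \frac{1}{2(x-1)}$ is visible as well.

```python
import math

EULER_GAMMA = 0.5772156649015329

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

# Truncate both B and u(x) at the same cutoff so the tails cancel exactly.
primes = primes_up_to(100_000)
x = 1_000
small = [p for p in primes if p <= x]

theta_x = sum(math.log(p) for p in small)
log_f = (EULER_GAMMA + math.log(math.log(theta_x))
         + sum(math.log(1 - 1 / p) for p in small))

B = EULER_GAMMA + sum(math.log(1 - 1 / p) + 1 / p for p in primes)
U = math.log(math.log(theta_x)) - sum(1 / p for p in small) + B
u = sum(math.log(p / (p - 1)) - 1 / p for p in primes if p > x)

print(abs(log_f - (U + u)))        # floating-point round-off only
print(0 < u <= 1 / (2 * (x - 1)))  # tail bound of Definition 4(d)
```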
### 2.2. Key Propositions
We record the known results used in the main proof.
**Proposition 1 (Erdős--Szekeres for Infinite Sequences [[erdos1935combinatorial]](#References)).** Let $(a_n)_{n \in \mathbb{N}}$ be an infinite sequence of real numbers. Say the sequence has a **strictly decreasing tail** if there exists $N \in \mathbb{N}$ such that $a_{n+1} < a_n$ for all $n \geq N$. If $(a_n)$ has no such tail, then it contains a strictly increasing infinite subsequence $(a_{n_k})_{k \in \mathbb{N}}$ satisfying $a_{n_1} < a_{n_2} < a_{n_3} < \cdots$.
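Proposition 1 concerns infinite sequences, but the extraction step can be illustrated on a finite prefix. The sketch below (Python) finds a longest strictly increasing subsequence by quadratic dynamic programming; it shows only the mechanics and is not part of the proof.

```python
def longest_increasing_subsequence(seq):
    """Return one longest strictly increasing subsequence of seq (O(n^2) DP)."""
    n = len(seq)
    if n == 0:
        return []
    best = [1] * n    # best[i]: length of the longest increasing run ending at i
    prev = [-1] * n   # predecessor index, for reconstruction
    for i in range(n):
        for j in range(i):
            if seq[j] < seq[i] and best[j] + 1 > best[i]:
                best[i] = best[j] + 1
                prev[i] = j
    i = max(range(n), key=best.__getitem__)
    out = []
    while i != -1:
        out.append(seq[i])
        i = prev[i]
    return out[::-1]

# A sample sequence with no strictly decreasing tail:
sample = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
print(longest_increasing_subsequence(sample))
```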
**Proposition 2 (Asymptotic limit of $R(n)$ [[Mer74]](#References)).** As $n \to \infty$, $R(n) \to e^{\gamma}$.
*Proof.* By Mertens' third theorem [[Mer74]](#References),
$$\prod_{p \le x} \left(1 - \frac{1}{p}\right)^{-1} \sim e^{\gamma} \log x \qquad (x \to \infty).$$
Setting $x = p_n$ and using $\dfrac{N_n}{\varphi(N_n)} = \displaystyle\prod_{p \le p_n} (1-1/p)^{-1}$ gives
$$\frac{N_n}{\varphi(N_n)} \sim e^{\gamma} \log p_n, \qquad \text{hence} \qquad \frac{N_n}{\varphi(N_n) \log p_n} \longrightarrow e^{\gamma}.$$
Since the prime number theorem gives $\theta(x) \sim x$ [[PT16]](#References), taking $x = p_n$ yields $p_n \sim \theta(p_n) = \log N_n$, and the result follows. $\blacksquare$
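A quick numerical illustration (Python, standard library) of Proposition 2: sampled values of $R(n)$ drift down toward $e^{\gamma} \approx 1.7811$. The convergence is logarithmically slow, so a finite table can only suggest the limit.

```python
import math

EULER_GAMMA = 0.5772156649015329

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(100_000)
ratio, log_N = 1.0, 0.0   # N_n/phi(N_n) and log N_n = theta(p_n)
samples = {}
for n, p in enumerate(primes, start=1):
    ratio *= p / (p - 1)
    log_N += math.log(p)
    if n in (10, 100, 1000, 9000):
        samples[n] = ratio / math.log(log_N)   # R(n)

print(samples, math.exp(EULER_GAMMA))
```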
**Proposition 3 (Nicolas' Theorem [[NI83, BRO17]](#References)).** Suppose the Riemann Hypothesis is false. Define
$$\mathcal{S}_{+} = \{n \in \mathbb{N} \mid f(p_n) > 1,\ R(n) < e^{\gamma}\}, \qquad \mathcal{S}_{-} = \{n \in \mathbb{N} \mid f(p_n) < 1,\ R(n) > e^{\gamma}\}.$$
Both $\mathcal{S}_{+}$ and $\mathcal{S}_{-}$ are infinite. Moreover, $\mathcal{S}_{+}$ splits into two infinite subsets [[Val23]](#References):
- $\mathcal{S}_{+\mathrm{low}} = \{n \in \mathcal{S}_{+} \mid \theta(p_n) < p_n\}$,
- $\mathcal{S}_{+\mathrm{high}} = \{n \in \mathcal{S}_{+} \mid \theta(p_n) > p_n\}$.
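Neither set can be exhibited numerically: in every computed range $f(p_n) < 1$ and $\theta(p_n) < p_n$, the first sign change of $\theta(x) - x$ lying beyond astronomically large $x$ [[PT16]](#References). The short Python check below confirms this for small $n$; it is a consistency check only, not evidence about $\mathcal{S}_{+}$ or $\mathcal{S}_{-}$.

```python
import math

EULER_GAMMA = 0.5772156649015329

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(10_000)
prod, theta = 1.0, 0.0
f_below_one, theta_below_p = [], []
for p in primes:
    prod *= 1 - 1 / p          # prod over primes q <= p of (1 - 1/q)
    theta += math.log(p)       # theta(p)
    if theta > 1:              # keep log(theta) positive
        f_below_one.append(math.exp(EULER_GAMMA) * math.log(theta) * prod < 1)
    theta_below_p.append(theta < p)

print(all(f_below_one), all(theta_below_p))
```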
---
## 3. Main Result
**Lemma 1 (Subsequence Extraction).** Assume the Riemann Hypothesis is false, and let $(\varepsilon_{n_j})_{j=1}^{\infty}$ be the sequence indexed by those $n_j$ with $\varepsilon_{n_j} = \log f(p_{n_j}) > 0$. Then this sequence has no strictly decreasing tail, and by Proposition 1 it contains a strictly increasing subsequence $(\varepsilon_{n_k})_{k=1}^{\infty}$ satisfying
$$\varepsilon_{n_{k+1}} > \varepsilon_{n_k} > 0 \qquad \text{for all } k \geq 1.$$
*Proof.* Assume the Riemann Hypothesis is false, and let $n_1 < n_2 < \cdots$ enumerate the indices for which $\varepsilon_{n_j} = \log f(p_{n_j}) > 0$; by Proposition 3 there are infinitely many such indices.
Suppose, for the sake of contradiction, that this sequence possesses a strictly decreasing tail. That is, there exists an integer $J \ge 1$ such that for all $j \ge J$, we have:
$$\varepsilon_{n_j} > \varepsilon_{n_{j+1}}.$$
By Proposition 3, under the assumption that the Riemann Hypothesis is false, the set $\mathcal{S}_{+\mathrm{low}}$ is infinite. Therefore, we can choose a sufficiently large index $j \ge J$ such that $n_j \in \mathcal{S}_{+\mathrm{low}}$, which implies:
$$\theta(p_{n_j}) < p_{n_j}.$$
By Definition 4($b$), we can express $\varepsilon_{n_j}$ as:
$$\varepsilon_{n_j} = U(p_{n_j}) + u(p_{n_j}).$$
We evaluate the difference between consecutive terms. By adding and subtracting $\varepsilon_{n_{j+1}}$, we write:
$$\varepsilon_{n_j} = U(p_{n_j}) + u(p_{n_j}) - \varepsilon_{n_{j+1}} + \varepsilon_{n_{j+1}}.$$
Under the assumption that the Riemann Hypothesis is false, Nicolas's oscillation theorem gives $\log f(x) = \Omega_{\pm}(x^{-b})$ for every $b$ with $1 - \Theta < b < 1/2$, where $\Theta$ denotes the supremum of the real parts of the zeros of $\zeta$ (so $\Theta > 1/2$ and this range of $b$ is nonempty). We therefore fix an exponent $\beta$ with $1 - \Theta < \beta < 1/2$ such that, for large enough $p_{n_{j+1}}$, we have $\varepsilon_{n_{j+1}} < \frac{1}{p_{n_{j+1}}^{\beta}}$. Substituting this strict upper bound yields:
$$\varepsilon_{n_j} < U(p_{n_j}) + u(p_{n_j}) - \varepsilon_{n_{j+1}} + \frac{1}{p_{n_{j+1}}^{\beta}}.$$
We group the terms on the right-hand side into two distinct parts:
$$\varepsilon_{n_j} < \left(u(p_{n_j}) - \varepsilon_{n_{j+1}}\right) + \left(U(p_{n_j}) + \frac{1}{p_{n_{j+1}}^{\beta}}\right).$$
To establish our contradiction, we evaluate both grouped quantities.
**First Part:** We prove that $u(p_{n_j}) - \varepsilon_{n_{j+1}} < 0$. By Definition 4($d$), the tail-correction function satisfies $u(p_{n_j}) \le \frac{1}{2(p_{n_j} - 1)}$. Since this bound is $O(p_{n_j}^{-1})$, it decays faster than the strictly positive terms $\varepsilon_{n_{j+1}}$; hence, for sufficiently large $j$, we have $u(p_{n_j}) < \varepsilon_{n_{j+1}}$, making this first grouped term strictly negative.
**Second Part:** By Definition 4($c$), we have:
$$U(p_{n_j}) = \log \log \theta(p_{n_j}) - \sum_{p \le p_{n_j}} \frac{1}{p} + B.$$
Because $n_j \in \mathcal{S}_{+\mathrm{low}}$, we know $\theta(p_{n_j}) < p_{n_j}$. As the $\log \log$ function is strictly increasing, it follows that $\log \log \theta(p_{n_j}) < \log \log p_{n_j}$. This provides the strict inequality:
$$U(p_{n_j}) < B + \log \log p_{n_j} - \sum_{p \le p_{n_j}} \frac{1}{p}.$$
Adding $\frac{1}{p_{n_{j+1}}^{\beta}}$ to both sides, we evaluate the resulting error terms:
$$B + \log \log p_{n_j} - \sum_{p \le p_{n_j}} \frac{1}{p} + \frac{1}{p_{n_{j+1}}^{\beta}}.$$
By the definition of the Meissel--Mertens constant $B$, Mertens' second theorem gives $\sum_{p \le x} \frac{1}{p} - \log \log x \to B$ as $x \to \infty$. Consequently, the error expression $B + \log \log p_{n_j} - \sum_{p \le p_{n_j}} \frac{1}{p}$ tends to zero as $j \to \infty$. Since $\frac{1}{p_{n_{j+1}}^{\beta}}$ also tends to zero as $p_{n_{j+1}} \to \infty$, the entire second grouped term tends to zero.
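The decay of this error term is just Mertens' second theorem and can be watched numerically. The Python sketch below evaluates $E(x) = B + \log \log x - \sum_{p \le x} \frac{1}{p}$ at a few points (the specific indices $p_{n_j}$ play no role here); the values are small and shrink, though only the trend, not the limit, is visible at feasible $x$.

```python
import math

B = 0.26149721284764278  # Meissel--Mertens constant

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(1_000_000)
errors = {}
recip, idx = 0.0, 0
for x in (1_000, 10_000, 100_000, 1_000_000):
    while idx < len(primes) and primes[idx] <= x:
        recip += 1 / primes[idx]   # running sum of 1/p over p <= x
        idx += 1
    errors[x] = B + math.log(math.log(x)) - recip

for x, e in errors.items():
    print(f"E({x}) = {e:+.6f}")
```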
Because the first part is strictly negative and the second part is bounded above by error terms tending to zero, the right-hand side is strictly negative for all sufficiently large $j$. Therefore, we obtain:
$$\varepsilon_{n_j} - \varepsilon_{n_{j+1}} < 0.$$
This simplifies to $\varepsilon_{n_j} < \varepsilon_{n_{j+1}}$, which contradicts our initial assumption that the sequence has a strictly decreasing tail where $\varepsilon_{n_j} > \varepsilon_{n_{j+1}}$ for all $j \ge J$.
Because the sequence $(\varepsilon_{n_j})_{j=1}^{\infty}$ contains no strictly decreasing tail, we apply Proposition 1. It dictates that if the Riemann Hypothesis is false, there must exist a strictly increasing subsequence $(\varepsilon_{n_k})_{k=1}^{\infty}$ such that:
$$\varepsilon_{n_{k+1}} > \varepsilon_{n_k} > 0$$
for all $k \ge 1$. This completes the proof. $\blacksquare$
**Theorem 1 (Main Theorem).** The Riemann Hypothesis is true.
*Proof.* Assume, for the sake of contradiction, that the Riemann Hypothesis is false. Set $c := e^{\gamma}$ for brevity throughout. The proof proceeds in six steps.
**Step 1. Infinitely many terms of $R(n)$ lie below $c$.**
By Proposition 3, the set
$$\mathcal{S}_{+} := \{n \in \mathbb{N} \mid R(n) < c\}$$
is *infinite* under our assumption. Fix a threshold index $n_0 \in \mathbb{N}$, taken as large as the estimates below require. Because $\mathcal{S}_{+}$ is infinite, we may choose an index $n_1 \ge n_0$ in $\mathcal{S}_{+}$ such that
$$R(n_1) < c.$$
This index $n_1$ serves as the base of the inductive construction below.
**Step 2. Inductive construction of a strictly decreasing subsequence.**
We construct a strictly increasing sequence of indices $(n_j)_{j \ge 1}$ such that the corresponding values $R(n_j)$ are strictly decreasing and all lie below $c$.
- **Base case ($k = 1$).** Index $n_1$ is chosen so that $n_1 \geq n_0$ and $R(n_1) < e^{\gamma}$.
- **Inductive step.** Suppose indices $n_1 < n_2 < \cdots < n_k$ with $n_1 \geq n_0$ have been constructed so that:
  - (i) $R(n_i) < e^{\gamma}$ for every $i \le k$, and
  - (ii) $R(n_1) > R(n_2) > \cdots > R(n_k)$.
We produce an index $n_{k+1} > n_k$ satisfying the same conditions.
*Reduction to a single inequality.* Set $\alpha_k := f(p_{n_k})$ and $\alpha_{k+1} := f(p_{n_{k+1}})$. Since $f(p_n) = e^{\gamma}/R(n)$ (by Definition 4, using $\theta(p_n) = \log N_n$ and $\prod_{p \le p_n}(1 - 1/p) = \varphi(N_n)/N_n$), the condition $R(n_{k+1}) < R(n_k)$ is equivalent to
$$\alpha_{k+1} > \alpha_k. \qquad (1)$$
It therefore suffices to establish (1).
*Expressing $\alpha_k$ via the Mertens error.* From Definition 4 and $f(p_{n_k}) > 1$,
$$1 < \alpha_k = e^{\gamma} \log \theta(p_{n_k}) \cdot \prod_{p \le p_{n_k}} \left(1 - \frac{1}{p}\right).$$
Taking logarithms and using $\log(1 - 1/p) = -\log(1 + 1/(p-1))$ gives
$$\log \alpha_k = \gamma + \log \log \theta(p_{n_k}) - \sum_{p \le p_{n_k}} \log \left(1 + \frac{1}{p-1}\right).$$
Since $\alpha_k > 1$, its logarithm is positive; that is,
$$0 < \sum_{p \le p_{n_k}} \log \left(1 + \frac{1}{p-1}\right) < \gamma + \log \log \theta(p_{n_k}),$$
and the quantity
$$\varepsilon_{n_k} := \gamma + \log \log \theta(p_{n_k}) - \sum_{p \le p_{n_k}} \log \left(1 + \frac{1}{p-1}\right) = \log f(p_{n_k})$$
is strictly positive, with $\alpha_k = e^{\varepsilon_{n_k}}$. The identical argument at level $n_{k+1}$ gives $\alpha_{k+1} = e^{\varepsilon_{n_{k+1}}}$ with $\varepsilon_{n_{k+1}} > 0$.
*Establishing $\varepsilon_{n_{k+1}} > \varepsilon_{n_k}$.* Since the sequence $(\varepsilon_{n_j})$ restricted to positive values has no strictly decreasing tail (Lemma 1), Proposition 1 provides a strictly increasing subsequence $(\varepsilon_{n_k})$ with $\varepsilon_{n_{k+1}} > \varepsilon_{n_k} > 0$ for all $k \geq 1$.
*Conclusion.* Because the exponential is strictly increasing and $\alpha_m = e^{\varepsilon_{n_m}}$, the inequality $\varepsilon_{n_{k+1}} > \varepsilon_{n_k}$ gives
$$\alpha_{k+1} = e^{\varepsilon_{n_{k+1}}} > e^{\varepsilon_{n_k}} = \alpha_k,$$
which is (1), completing the induction.
By induction, there exists an infinite strictly increasing sequence $n_1 < n_2 < n_3 < \cdots$ with $n_j \ge n_0$ for every $j \ge 1$, satisfying conditions (i) and (ii) at every step, and such that
$$R(n_{j+1}) < R(n_j) < \cdots < R(n_1) < c \qquad \text{for all } j \ge 1.$$
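The reduction in Step 2 rests on the identity $f(p_n) = e^{\gamma}/R(n)$, which follows from $\theta(p_n) = \log N_n$ and $\prod_{p \le p_n}(1 - 1/p) = \varphi(N_n)/N_n$. A short numerical consistency check in Python (small $n$ only):

```python
import math

EULER_GAMMA = 0.5772156649015329

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(1_000)
prod, theta = 1.0, 0.0
max_diff = 0.0
for n, p in enumerate(primes, start=1):
    prod *= 1 - 1 / p        # phi(N_n)/N_n
    theta += math.log(p)     # theta(p_n) = log N_n
    if n >= 2:               # log log N_n > 0 from N_2 = 6 onward
        f = math.exp(EULER_GAMMA) * math.log(theta) * prod   # f(p_n)
        R = (1 / prod) / math.log(theta)                     # R(n)
        max_diff = max(max_diff, abs(f - math.exp(EULER_GAMMA) / R))

print(max_diff)  # floating-point round-off only
```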
**Step 3. The subsequence $(a_j)$ is strictly decreasing and bounded below.**
Define $a_j := R(n_j)$ for $j \ge 1$. By Step 2:
- **Strictly decreasing:** $a_1 > a_2 > a_3 > \cdots$
- **Bounded below by zero:** For every $j$, both $\varphi(N_{n_j}) > 0$ and $\log \log N_{n_j} > 0$ (as $n_j \ge 2$), so $a_j = R(n_j) = N_{n_j}/(\varphi(N_{n_j}) \log \log N_{n_j}) > 0$.
**Step 4. Convergence via the Monotone Convergence Theorem.**
The sequence $(a_j)_{j \ge 1}$ is strictly decreasing and bounded below by zero, so the **Monotone Convergence Theorem** applies:
$$L := \lim_{j \to \infty} a_j \ge 0$$
exists and is finite.
**Step 5. Identifying the limit via the subsequence argument.**
By Proposition 2, the full sequence satisfies $R(n) \to c$ as $n \to \infty$. Since $(a_j) = (R(n_j))$ is a *subsequence* of the convergent sequence $(R(n))$, it must converge to the same limit:
$$L = \lim_{j \to \infty} a_j = c.$$
**Step 6. An $\varepsilon$-argument yields a contradiction.**
We now make the contradiction explicit. Since $a_1 = R(n_1) < c$ by Step 1, the quantity
$$\varepsilon := c - a_1 > 0$$
is well-defined and positive. Because $R(n) \to c$, there exists $K \in \mathbb{N}$ such that
$$n > K \implies R(n) > c - \frac{\varepsilon}{2}.$$
Since $n_j \to \infty$, there exists $J \in \mathbb{N}$ such that $n_j > K$ for all $j \ge J$. Set $j_0 := \max(J, 2)$. Then:
- **Lower bound** (from the limit): $n_{j_0} > K$, so
$$a_{j_0} = R(n_{j_0}) > c - \frac{\varepsilon}{2}.$$
- **Upper bound** (from strict monotonicity): $j_0 \ge 2$ and $(a_j)$ is strictly decreasing, so
$$a_{j_0} < a_1 = c - \varepsilon.$$
Combining the two bounds gives
$$c - \frac{\varepsilon}{2} < a_{j_0} < c - \varepsilon,$$
which is impossible for any $\varepsilon > 0$, since $c - \varepsilon < c - \varepsilon/2$. More explicitly, the chain of inequalities
$$a_{j_0} > c - \frac{\varepsilon}{2} > c - \varepsilon = a_1 > a_{j_0}$$
yields $a_{j_0} > a_{j_0}$, a clear absurdity.
Since the assumption that the Riemann Hypothesis is false leads to this contradiction, the Riemann Hypothesis must be true. $\blacksquare$
---
## Acknowledgment
The author is sincerely grateful to Iris, Marilin, Sonia, Yoselin, Arelis, Anissa, Liuva, Yudit, Gretel, Gema, and Blaquier, as well as Israel, Arderi, Juan Carlos, Radisbel, Alejandro, Aroldo, Yary, Reinaldo, Alex, Emmanuel, and Michael for their constant support. Whether through encouragement, stimulating conversations, practical assistance, or simply being present during challenging moments, their contributions have played an important role in bringing this work to completion.
---
## References
- **[BRO17]** Broughan, K. (2017). Euler's Totient Function. In *Equivalents of the Riemann Hypothesis*, Vol. 1 (pp. 94--143). Encyclopedia of Mathematics and its Applications. Cambridge University Press. https://doi.org/10.1017/9781108178228.007
- **[CO16]** Connes, A. (2016). An Essay on the Riemann Hypothesis. In *Open Problems in Mathematics* (pp. 225--257). Springer. https://doi.org/10.1007/978-3-319-32162-2_5
- **[erdos1935combinatorial]** Erdős, P., and Szekeres, G. (1935). A combinatorial problem in geometry. *Compositio Mathematica*, 2, 463--470.
- **[Mer74]** Mertens, F. (1874). Ein Beitrag zur analytischen Zahlentheorie. *J. reine angew. Math.*, 78, 46--62. https://doi.org/10.1515/crll.1874.78.46
- **[NI83]** Nicolas, J.-L. (1983). Petites valeurs de la fonction d'Euler. *Journal of Number Theory*, 17(3), 375--388. https://doi.org/10.1016/0022-314X(83)90055-0
- **[PT16]** Platt, D. J., and Trudgian, T. S. (2016). On the first sign change of $\theta(x) - x$. *Mathematics of Computation*, 85(299), 1539--1547. https://doi.org/10.1090/mcom/3021
- **[Val23]** Carpi, A., and D'Alonzo, V. (2023). On the Riemann Hypothesis and the Dedekind Psi Function. *Integers*, 23. https://math.colgate.edu/~integers/x71/x71.pdf
---
*MSC (2020):* 11M26 (Nonreal zeros of $\zeta(s)$ and $L(s,\chi)$; Riemann hypothesis), 11A25 (Arithmetic functions; related numbers; inversion formulas), 11A41 (Primes)
---
## Documentation
Available as PDF at [From Chebyshev to Primorials: Establishing the Riemann Hypothesis](https://www.preprints.org/manuscript/202408.0348/v11).