# Solve b variable (with Will Handley):

### Goal: Solve the b variable: $b(\frac{1}{b})''=a(\frac{1}{a})''+K$

### Analytic solution of b in the kinetic-dominance (KD) and slow-roll (SR) limits with K=1:
1. **Dinf** (deep inflation):
> $\dot{H}=-\frac{\dot{\phi}^2}{2m^2}+\frac{K}{a^2}$; eliminate the $\dot{\phi}^2$ term $\rightarrow \dot{H}=\frac{K}{a^2}\rightarrow \frac{a''}{a}-2(\frac{a'}{a})^2-K=0\rightarrow a=C_1\sec(\eta+C_2)$, choose $a=\sec(\eta)$
> Apply a to the ODE, get $b=\frac{C}{D-\eta}$
2. **KD**:
> Take the KD limit ($\dot{\phi}^2\gg V(\phi)$); the Friedmann equations become:
> $H^2+\frac{K}{a^2}=\frac{\dot{\phi}^2}{6m^2}, \quad \frac{\ddot{a}}{a}=-\frac{\dot{\phi}^2}{3m^2}$
> $\rightarrow \frac{a''}{a}+(\frac{a'}{a})^2+2K=0$; with K=1
> $\rightarrow a=C_2\sqrt{\cos(2\eta-C_1)}$, choose $a=\sqrt{\sin(2\eta)}$
> Apply a to the ODE, get $b=\sqrt{2\eta}+\frac{1}{6\sqrt{2}}(-1-48A+3\pi+12\ln(2)-6\ln(\eta))\eta^{5/2}+O(\eta^{7/2})$

### Set the IC of b in SR and KD with the analytic solutions, and match the two numerical solutions of b (K=1)
1. Choose a moment in SR and in KD respectively.
2. Start integrating from that moment, and make sure the evolution of the BG variables is similar to the one starting from the start of inflation.
> (1) Using method RK45:
> ![](https://i.imgur.com/OdrDG0L.png)
> (2) Using method DOP853:
> ![](https://i.imgur.com/NHRhoCW.png)
> $\rightarrow$ use DOP853
3. Solve the ODE of b to get numerical solutions starting from SR and from KD, and try to match them in between (see the shooting sketch at the end of this section).
> (1) Match b and b' at t=0.0 to get A and C (fix B). Problem: b is not similar to a.
> ![](https://i.imgur.com/HaU6lgc.png)
> (2) Match b and b' at t=2 to get A and B (fix C): to make the solution of b similar to a in deep inflation, I fix the parameter C by comparing b to a during the inflation era. Finally, I use a root finder to find the two parameters A & B in the analytic KD solution of b, by matching the numerical solutions of b and b' (starting from SR and from KD) at t=4.5.
> **ODE method: DOP853**
> a. Zoomed-in figure. Note that the difference between the numerical solution of b starting from SR and the solution of a at t~0 comes from numerical error. That is why I choose to match them at t=4.5.
> ![](https://i.imgur.com/jVA2OB3.png)
> b. The two solutions blow up in deep inflation. This is an error, since the SR analytic solution of b should perfectly match a in the inflation era.
> ![](https://i.imgur.com/Wr7abFD.png)
>
> **ODE method: Radau**
> a. I changed the method of scipy.integrate.solve_ivp from DOP853 to Radau, and the result looks different. This shows that both the deviation of the SR solution around t=0 and the blow-up behavior during inflation come from numerical error. However, the parameters A and B found by the root finder do not change much, and if we focus on the region t=2~10 (deep inflation starts at t=2.68), the solutions of b starting from SR and from KD look similar.
> ![](https://i.imgur.com/UzWCi4J.png)
> b. As a result, I think we have already found the solution for the b variable: (1) the numerical solution of b starting from deep KD, integrated backward and forward (to the start of deep inflation, t=2.68), and (2) the analytic SR-limit solution in the deep-inflation era.
> ![](https://i.imgur.com/HMGVIyF.png)
> The first difference between a and b is in the region t=-1~2.0 (shown below). I used the two ODE methods DOP853 and Radau; both give similar results, which shows that this is a physical property, not numerical error. I will do more convergence tests to check the result.
> ![](https://i.imgur.com/Yxb2SNl.png)
> The second difference between a and b is in the region t=16~28. However, it may come from error, since we eliminated K in the analytic solution.
> ![](https://i.imgur.com/qo23waC.png)
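The two-sided matching above is effectively a small shooting problem. A minimal sketch (my own illustration, not the actual code): with $u=1/b$ the ODE becomes $u''=S(t)\,u$, and I assume the source term `S(t)`, the anchor times `t_kd`/`t_sr`, and analytic IC helpers `u_kd_ic`/`u_sr_ic` are available from the background solution.

```pythonscript=
# Sketch: integrate u = 1/b from the KD and SR sides and root-find (A, B)
# so that u and u' agree at the matching point. Names are illustrative.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root

t_match = 4.5                      # matching point chosen above

def rhs(t, y):
    u, du = y
    return [du, S(t) * u]          # S(t) = a*(1/a)'' + K/a^2, assumed interpolant

def mismatch(params):
    A, B = params
    # forward from the KD side: IC from the analytic KD expansion
    kd = solve_ivp(rhs, (t_kd, t_match), u_kd_ic(t_kd, A, B),
                   method="DOP853", rtol=1e-12, atol=1e-12)
    # backward from the SR side: IC from the analytic SR solution
    sr = solve_ivp(rhs, (t_sr, t_match), u_sr_ic(t_sr),
                   method="DOP853", rtol=1e-12, atol=1e-12)
    return [kd.y[0, -1] - sr.y[0, -1],   # match u  at t_match
            kd.y[1, -1] - sr.y[1, -1]]   # match u' at t_match

A, B = root(mismatch, x0=[0.1, 1.0]).x
```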
### Analytic solution of b in the KD and SR limits with K=-1:
1. **Dinf**:
> $\dot{H}=-\frac{\dot{\phi}^2}{2m^2}+\frac{K}{a^2}$; eliminate the $\dot{\phi}^2$ term $\rightarrow \dot{H}=\frac{K}{a^2}\rightarrow a=C_1\,\mathrm{csch}(\eta+C_2)$, choose $a=\mathrm{csch}(\eta)$
> Apply a to the ODE, get $b=\frac{C}{D-\eta}$
2. **KD**:
> Take the KD limit ($\dot{\phi}^2\gg V(\phi)$); the Friedmann equations become:
> $H^2+\frac{K}{a^2}=\frac{\dot{\phi}^2}{6m^2}, \quad \frac{\ddot{a}}{a}=-\frac{\dot{\phi}^2}{3m^2}$
> $\rightarrow \frac{a''}{a}+(\frac{a'}{a})^2+2K=0$; with K=-1 $\rightarrow a=C_2\sqrt{\sinh(2\eta-C_1)}$
> choose $a=\sqrt{\sinh(2\eta)}$
> Apply a to the ODE, get $b=\Re\!\left[\frac{(2+2i)\pi^2\sqrt{\eta}}{B\,\Gamma(-\frac{1}{2})^2}-\frac{\frac{8}{3}(1+i)\pi^2\eta^{5/2}\left(6(-1)^{1/4}A\pi^2+B\,\Gamma(\frac{3}{4})^2\left(-1+(3-3i)\pi+12\ln(2)-6\ln(\eta)\right)\right)}{B^2\,\Gamma(-\frac{1}{4})^4}\right]$

### Set the IC of b in SR and KD with the analytic solutions, and match the two numerical solutions of b (K=-1)
> The two analytic solutions are only approximations in SR or KD; they blow up outside those regions. Since the two solutions have no overlap, I cannot find a whole solution:
> ![](https://i.imgur.com/fb111YD.png)
> However, if we set b=a, $\dot{b}=\dot{a}$ in deep SR or KD, there is a solution for b:
> ![](https://i.imgur.com/CYYtlHp.png)
> To solve this problem, we can match the two analytic solutions in the deep KD era:
> ![](https://i.imgur.com/wULoOCi.png)
> The difference between a and b is the following (K=-1); we can see that b is larger than a.
> ![](https://i.imgur.com/1Rhs7As.png)
> Compare with the K=1 case, where b is smaller than a:
> ![](https://i.imgur.com/ebkIfKN.png)

### Switch $\eta$ to the new variable N, and then solve for the analytic solution of b (K=-1)
1. The reason we want to do this is that the analytic solution of a looks nice in the inflation and KD eras when K=-1, but that of b does not. Maybe b takes a nicer form as a function of N.
2. ODE:
> $y'+\frac{H'}{H}y'+y''=[-\frac{H'}{H}+\frac{K}{(aH)^2}]y$ (note: here $'$ is $\frac{d}{dN}$)
> In the KD limit: $\frac{a''}{a}+(\frac{a'}{a})^2+2K=0$ (note: here $'$ is $\frac{d}{d\eta}$)
> $\rightarrow \frac{K}{(aH)^2}=-\frac{1}{2}(3+\frac{H'}{H})$ (note: here $'$ is $\frac{d}{dN}$)
> The first ODE becomes: $y'+\frac{H'}{H}y'+y''=-\frac{3}{2}(1+\frac{H'}{H})y$
> **With K=-1**: $a(\eta)=\sqrt{\sinh(2\eta)}\rightarrow N=\ln(a)\rightarrow \eta=\frac{1}{2}\sinh^{-1}(e^{2N})$
> $\rightarrow H=\frac{\dot{a}}{a}=\frac{a'}{a^2}=\frac{\coth(2\eta)}{\sqrt{\sinh(2\eta)}}=e^{-3N}\sqrt{1+e^{4N}}$
> Applying this to the ODE gives: $y''(N)=\frac{3y+2y'}{1+e^{4N}}$
> The solution is: $A\,e^{3N}\,\mathrm{Hypergeometric2F1}[3/4, 3/4, 2, -e^{4N}] + B\,\mathrm{MeijerG}[\{\{\},\{1,1\}\},\{\{-1/4,3/4\},\{\}\}, -e^{4N}]$ (A, B are constants)
3. Result: these are the solutions. The first solution blows up as $\eta\rightarrow 0$. The second solution is a complex function, with real part similar to its imaginary part as $\eta\rightarrow 0$. It seems no better than the previous solutions.
> ![](https://i.imgur.com/ZnOxobs.png)
> ![](https://i.imgur.com/1oO5Q8R.png)
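As a quick sanity check of this closed form (my own addition), one can verify numerically with mpmath that the first solution satisfies $y''(N)=\frac{3y+2y'}{1+e^{4N}}$:

```pythonscript=
# Cross-check y = e^{3N} 2F1(3/4, 3/4; 2; -e^{4N}) against the ODE above.
from mpmath import hyp2f1, diff, exp

def y(N):
    return exp(3 * N) * hyp2f1(0.75, 0.75, 2, -exp(4 * N))

for N in [-2.0, 0.0, 1.0]:
    lhs = diff(y, N, 2)                                   # y''(N), numerically
    rhs = (3 * y(N) + 2 * diff(y, N)) / (1 + exp(4 * N))
    print(N, lhs - rhs)                                   # should be ~0
```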
### Plot PPS result:
1. Read [Mary's paper](https://arxiv.org/pdf/2211.17248.pdf) to learn how to get the PPS from our present stage:
> Step 1: Find the ODEs and the IC.
> Step 2: Reproduce her result.
> Step 3: Apply my b solution and find the new result.
> Q1: Why use minimised-RSET instead of frozen IC?
> Q2: Since b is similar to a in deep inflation, I don't see why the result should be any different. Or should we pick R_k at another time?
> Ans: The reason is that the initial conditions are not set deep in inflation. If, for example, they are set at the start of inflation (or some other early time), then there is a difference. For this reason one sees a difference in the primordial power spectra in figures 2 and 3 of Mary's paper.
2. Result:
> The PPS result looks strange: it increases at high-k modes with the ODE -> fixed the ODE so that it is no longer increasing.
> ![](https://i.imgur.com/WyziTzJ.png)
> I found that I had mistyped the ODE in my code.
> This is the ODE, eq. (13) in Mary's paper 2211.17248 (see the first-order sketch below): $0=a^2(\kappa_D^2+K\epsilon)\ddot{R}+2a^2(\frac{\dot{a}}{a}\kappa_D^2+K\frac{\dot{a}}{a}\epsilon)\dot{R}+[K(1+\epsilon-2\frac{a}{\dot{a}}\frac{\dot{z}}{z})\kappa_D^2-K^2\epsilon+\kappa_D^4]R$
> which is equal to eq. (12) in 1907.08524v2:
> $0=a^2(\kappa_D^2+K\epsilon)\ddot{R}+a^2[(H+2\frac{\dot{z}}{z})\kappa_D^2+3KH\epsilon]\dot{R}+[K(1+\epsilon-\frac{2}{H}\frac{\dot{z}}{z})\kappa_D^2-K^2\epsilon+\kappa_D^4]R$
> It is also equal to eqs. (50)-(52) in Lukas's paper [2205.07374].
> After correcting it and applying the Starobinsky potential, we get the result (K=-1, N0=12):
> (1) Applying $b_0$ and $\dot{b}_0$ as in Mary's paper (g0=1, f0=0 at t0):
> ![](https://i.imgur.com/Vyw7pg2.png)
> (2) Applying the actual $b_0$ and $\dot{b}_0$ obtained from the numerical solution of b:
> ![](https://i.imgur.com/XbmZ5le.png)
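To make the corrected equation concrete, here is a minimal sketch (my own illustration, not the production code) of casting eq. (13) into first-order form for scipy; `a`, `adot`, `z`, `zdot` and `eps` are assumed to be interpolants built from the background solution:

```pythonscript=
import numpy as np
from scipy.integrate import solve_ivp

K = -1

def R_rhs(t, y, kD):
    R, dR = y
    H = adot(t) / a(t)
    cdd = a(t)**2 * (kD**2 + K * eps(t))                 # coefficient of Rddot
    cd = 2 * a(t)**2 * (H * kD**2 + K * H * eps(t))      # coefficient of Rdot
    c0 = (K * (1 + eps(t) - (2 / H) * zdot(t) / z(t)) * kD**2
          - K**2 * eps(t) + kD**4)                       # coefficient of R
    return [dR, -(cd * dR + c0 * R) / cdd]

# e.g.: sol = solve_ivp(R_rhs, (t0, t_end), [R0, dR0], args=(kD,), method="DOP853")
```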
3. Numerical setup:
> (1) Assume the [Planck 2018](https://wiki.cosmos.esa.int/planck-legacy-archive/images/b/be/Baseline_params_table_2018_68pc.pdf) TTTEEE+lowl+lowE+lensing best-fit parameters.
> (2) Chaotic and Starobinsky potentials are incompatible with the Planck best-fit cosmology when curvature is included -> choose $V(\phi)\propto \phi^{4/3}$
> (3) The only remaining degree of freedom is $\Omega_K$ or $N_0$ (initial scale factor): $\Omega_K\in [-2.1\%,-150]\rightarrow (min=-0.021, max=-150, med=\sqrt{min^2+max^2})$
> (4) The numerical integration uses a solver that is capable of accurately navigating the many oscillations between the IC and horizon exit: [Agocs et al.](https://arxiv.org/pdf/1906.01421.pdf). How these background evolutions are constructed can be seen in [L. T. Hergt (2019)](https://journals.aps.org/prd/pdf/10.1103/PhysRevD.100.023501).
4. Problems:
> (1) I should apply the potential $V(\phi)\propto\phi^{4/3}$: first get $\phi_0$ and $\sigma$, then get $b_0$ and $\dot{b}_0$, and apply all of them to get the PPS.
> (2) I do not really understand the definition of the inflationary sound speed (cs), which is related to P(X,phi)=X-V(phi). As a result, I set it to 1. Do you think that is valid?
> Ans: cs would be 1 for most Lagrangians; however, for a general Lagrangian it might not equal 1.
> (3) I don't understand why the definition of kappa looks like this: ![](https://i.imgur.com/sRQD7Sh.png) Since k=[1.e-4~1.e0], it would never be larger than 2. How should we define kappa when K=1?
> Ans: the k in kappa is comoving k, while k=[1.e-4~1.e0] is physical k: k_phy=k_com/a.
5. Answers (discussed with Mary):
> (1) Read [Daniel Baumann's lecture notes](https://arxiv.org/pdf/0907.5424.pdf) for more BG knowledge for the project.
> (2) [pycoscode video](https://www.youtube.com/watch?v=u7E82j8UIM4&ab_channel=Enthought)
> (3) $c_a^2$:
\begin{align} \dot{\phi} &= \frac{Hz}{a} \\ \ddot{\phi} &= \frac1a (\dot{H}z + H\dot{z} - zH^2) \nonumber \\ &= \dot{\phi}\left(\frac{\dot{z}}{z} + \frac{\dot{H}}{H} - H\right) \nonumber \\ &= -3H\dot{\phi} - \partial_{\phi}V(\phi) \\ \phi^{(3)} &= -3\dot{H}\dot{\phi} - 3H\ddot{\phi} - \dot{\phi}\partial_{\phi\phi}V(\phi) \\ \dot{H} &= -\frac12\dot{\phi}^2 + \frac{K}{a^2} \\ \ddot{H} &= -\dot{\phi}\ddot{\phi} - \frac{2K}{a^2}H \\ \ddot{z} &= z\left(\left(\frac{\dot{z}}{z}\right)^2 + \dot{H} + \frac{\phi^{(3)}}{\dot{\phi}} - \left(\frac{\ddot{\phi}}{\dot{\phi}}\right)^2 - \frac{\ddot{H}}{H} + \left(\frac{\dot{H}}{H}\right)^2\right) \end{align}
so finally
\begin{align} c_a^2 &= \frac{1}{3H}\left(3H + 2\frac{\dot{H}}{H} - 2\frac{\dot{z}}{z}\right) \\ \frac{\dot{c_a}}{c_a} &= \frac{1}{2c_a^2}\left(-\frac{\dot{H}}{3H^2}\left(3H + 2\frac{\dot{H}}{H} - 2\frac{\dot{z}}{z}\right)\right. \nonumber \\ &\quad\left.+ \frac{1}{3H}\left(3\dot{H} + 2\frac{\ddot{H}}{H} - 2\left(\frac{\dot{H}}{H}\right)^2 - 2\frac{\ddot{z}}{z} + 2\left(\frac{\dot{z}}{z}\right)^2\right)\right) \end{align}

### Plot PPS by Will's code:
1. Problem:
> (1) The numerical solutions starting from KD and from SR cannot be connected with each other at the start of SR when N is small.
> Sol: start from the beginning of SR ($w=p/\rho=-0.99$) rather than from deep SR. In this case the numerical solution of b is stable even in KD. Then we can connect it with the numerical solution starting from deep KD, and also find the IC of b at t=0 (start of inflation).
2. PPS result with different N_IC:
> (1) Using the IC of the actual $b, \dot{b}$:
> ![](https://i.imgur.com/8B6rF69.png)
> (2) Using the IC of $b, \dot{b}$ defined in Mary's paper:
> ![](https://i.imgur.com/9f3cWTf.png)
3. PPS result with different R_IC:
> (1) N=N_min:
> ![](https://i.imgur.com/MfI58Jp.png)
> (2) N=N_med:
> ![](https://i.imgur.com/qCXzHKm.png)
> (3) N=N_max:
> ![](https://i.imgur.com/uiWRUET.png)
4. Evolution of b:
> The solution of b looks different from a for N_min, but looks like a for N_max. This is because in the equation $b\frac{d^2}{dt^2}(\frac{1}{b})=a\frac{d^2}{dt^2}(\frac{1}{a})+\frac{K}{a^2}$ the K term carries a factor of $1/a^2$: the smaller N_i is, the larger the effect of the K term, i.e. the larger the difference between b and a.
> (1) N=N_min:
> ![](https://i.imgur.com/bB0bl6n.png)
> (2) N=N_med:
> ![](https://i.imgur.com/R6EAKhn.png)
5. CMB: CMB with different N_i: ![](https://i.imgur.com/hujWN0E.png)
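CMB curves like the one above come from feeding the external PPS into a Boltzmann solver. A minimal sketch with CAMB (my addition; `ks` [Mpc^-1] and `P_R` are assumed to hold the tabulated spectrum from the mode solver, and the cosmology numbers are illustrative):

```pythonscript=
import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.3, ombh2=0.0224, omch2=0.120, omk=-0.01)
pars.set_for_lmax(2500)
pars.set_initial_power_table(ks, P_R)         # external primordial spectrum
results = camb.get_results(pars)
cls = results.get_cmb_power_spectra(pars, CMB_unit="muK")["total"]
```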
6. Meeting with Will:
> (1) I found that the PPS and CMB that match b with a in KD and SR look nice and similar. However, the one matched at t=0 explodes (with V=Starobinsky potential and small N_i). Note: Mary chose V∝$\phi^{4/3}$ instead of Starobinsky; that's why her result doesn't explode when N_i is small (I tested this by applying V∝$\phi^{4/3}$ in my code, and the explosion disappears). Do you think this is physical, or is it a numerical error?
> Ans: there might be a restriction for different $N_i$. I can make a time vs $1/(aH)^2$ plot to see at what $N_i$ it explodes at the start of inflation (t=0). Try N_i=11.72 to see how it looks (with the Starobinsky potential).
> (2) For the next project, one of the problems is to find the exact initial time to quantize. Can we refer to loop quantum cosmology, i.e. quantize at the big bounce?
> Ans: actually Dinelle is doing this analytically.
> (3) Shall we consider quantum reference frames (QRF)? Now we consider the vacuum as minimizing $T_{\mu\nu}$ for an observer locally. But is it possible to find an initial condition such that all observers would see $T_{\mu\nu}$ minimized everywhere? (This needs the transformation of QRFs.)
> Ans: Refer to [Fruzsina's paper](https://arxiv.org/abs/2002.07042), IV.A.3 (page 9), the paragraph beginning "To make sure this is the case".
> I found the min N_i, but I also have to find the max N_i at which things break down. Different potentials give different PPS, so don't try to find it by matching the PPS in Mary's paper.
> Find the parameter set [Omega_m, Omega_K, Omega_l] that makes the two curves join. They sum to 1, so there are only two degrees of freedom. After finding them, we can compare with Planck data to see whether it is allowed. I also have to consider As and ns; try to find their relationship.
> Result: I found that H0, Omega_m, Omega_K, logA don't change the graph much, except for ns. I find ns=0.96535 best matches the $1/(aH)^2$ curve before and after reheating. This is the result (using Will's code, with N_i=0.5, Starobinsky potential):
> ![](https://i.imgur.com/h6S4WZV.png)
> (4) Why not apply FIC to curved spacetime? Why set R_k constant? Is it better to set R_k by RSET in deep KD?
> Ans: it is because the code broke down while plotting the PPS before. But I can plot it with my code and see how it goes.
> (5) Make the parameter-set plots (Fig. 23) and the best-fit CMB plot (Fig. 25) -> find the exact parameter sets, using the code from [T. Hergt, Inference products for "Finite inflation in curved space"](https://zenodo.org/record/6547872#.Y-UpNBzP3HA).
> (6) The computer runs slow (check by typing "htop" in the command line) -> I can get access to a supercomputer (I can set up primpy on it), and I can also get a new computer (a Dell Latitude 7420 is recommended).
> (7) nsamples cannot be too high, or it needs a lot of memory (my present computer only has 7.6 GB of memory).
> (8) Numba can compile Python down to C-like speed; however, our bottleneck is loading data, which is already handled in C, so it may not be too helpful.
> (9) How to make the PPS data? Turn the ipynb into a .py file -> run on the supercomputer. How to check correctness?

### Find best-fit parameters by comparing with data
1. Install [Cobaya](https://cobaya.readthedocs.io/en/latest/): a code for Bayesian analysis in cosmology.
2. Before installing the data, we need to [install BLAS and LAPACK](https://askubuntu.com/questions/623578/installing-blas-and-lapack-packages): (1) sudo apt-get install gfortran, (2) sudo apt-get install libblas-dev liblapack-dev
3. ![](https://i.imgur.com/ai357bn.png)
4. [Planck data](https://wiki.cosmos.esa.int/planck-legacy-archive/index.php/CMB_spectrum_%26_Likelihood_Code): includes TT, EE, TE, BB, BE power spectra and likelihoods (in clik file format)
5. Install clik:
>> mkdir new_dir
>> cd new_dir
>> python -m venv venv
>> source venv/bin/activate
>> pip install cobaya
>> cobaya-install -p packages cosmo [and then run ./wag](https://cosmologist.info/cosmomc/readme_planck.html)
6. Evolution of b, matching at different w values:
(1) primpy starting from SR (w=-0.99~-0.4): ![](https://i.imgur.com/4TFPJQA.png)
(2) primpy starting from KD (w=-0.3~0.99): ![](https://i.imgur.com/7TvXf6o.png)
(3) using PPS_will_b: ![](https://i.imgur.com/v31VnkP.png)
7. chi_eff of the PPS (matching a & b at different t):
(1) using primpy and example_planck_likelihood (Nstar=55) ![](https://i.imgur.com/5lqisGY.png)
(2) using PPS_will_b: ![](https://i.imgur.com/pYk6u6g.png)
(3) using PPS_will_b and example_planck_likelihood ![](https://i.imgur.com/IAN3fwA.png)
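A sketch of the scan behind these chi_eff comparisons (my own illustration; `run_TT` is the likelihood wrapper that appears in the PolyChord section below, while `base_params` and `w_match` are assumed names for the fixed parameters and the a&b matching point):

```pythonscript=
import numpy as np

ws = np.linspace(-0.99, 0.99, 20)          # matching points (equation of state)
chi_eff = []
for w in ws:
    params = dict(base_params, w_match=w)  # vary only the matching point
    plik, lowl, lowE, lensing, chi_eff_sq = run_TT(params)
    chi_eff.append(chi_eff_sq)
```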
8. Compare output PPS (a & b matched in SR):
(1) primpy:
a. Modified IC of R (solve b) ![](https://i.imgur.com/47OXcLh.png)
b. Original primpy ![](https://i.imgur.com/gbRH3oA.png)
(2) PPS_will_b: ![](https://i.imgur.com/A0wWhj9.png)
(3) w=-0.99~0.99: ![](https://i.imgur.com/bMEoQ0m.png)
9. Next steps:
(1) Plot the PPS as log-log, and make sure the quality is as good as Fig. 14 in Lukas's paper (2205.07374) -> we should see the fluctuations at low k and no zig-zag (numerical error). If not, find a way to improve accuracy -> chi_eff needs to be accurate.
(2) Use scipy.optimize.minimize to find the best-fit parameter set for each fixed w (equation of state) ranging from -1 (SR) to 1 (KD), and see if they differ. It takes a lot of computing resources -> use the supercomputer.
(3) Although chi_eff is slightly different when varying the a & b matching point, the difference is not large (100 or more would be important).
(4) I can use [array jobs](https://docs.hpc.cam.ac.uk/hpc/user-guide/batch.html#array-jobs) to submit all jobs at once -> w=linspace[]

### Use PolyChord to plot posteriors of parameters
#### Introduction:
1. The blue curve is the prior (volume of the parameter space), the yellow one is the likelihood, and the green one is the posterior (posterior=prior$\times$likelihood). The x-axis is the iteration (from low likelihood to high likelihood).
2. Nested sampling is analogous to thermodynamics: the prior corresponds to temperature=infinity, while the likelihood corresponds to temperature=0. Since $\beta=\frac{1}{kT}$, increasing $\beta$ (lowering the temperature) shifts the peak in the lower-left figure from left to right. ![](https://hackmd.io/_uploads/r1aOe6kjn.png)
#### Settings:
1. Setup:
> step1: Install [PolyChord](https://github.com/PolyChord/PolyChordLite/blob/master/run_pypolychord.py)
> step2: mpirun -np 1 python run_pypolychord.py ---> make sure the code works.
2. Modify run_pypolychord.py (an assembled sketch follows this list):
> step1:
> ```pythonscript=
> def to_optimise(params):
>     plik, lowl, lowE, lensing, chi_eff_sq = run_TT(params)
>     return -chi_eff_sq
> ```
> step2: nDims = number of parameters, nDerived = number of derived parameters
> step3:
> ```pythonscript=
> def likelihood(theta):
>     return to_optimise(theta), []
> ```
> step4:
> ```pythonscript=
> def prior(hypercube):
>     return [UniformPrior(-0.3, 0.3)(hypercube[0]), UniformPrior(20, 100)(hypercube[1]), ...]
> ```
> step5: settings.nlive should be larger than both nDims$\times 25$ and 200
> step6: The results are saved in /chains. Read them with
> ```pythonscript=
> import anesthetic as ac
> ac.read_chains("chains/gaussian")
> ```
> step7: make 2D plots with [anesthetic](https://anesthetic.readthedocs.io/en/latest/)
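Putting the steps above together, a minimal assembled script might look like this (a sketch following PolyChordLite's run_pypolychord.py; `run_TT` is the likelihood wrapper from step 1, and the prior ranges beyond step 4's two examples are illustrative):

```pythonscript=
import pypolychord
from pypolychord.settings import PolyChordSettings
from pypolychord.priors import UniformPrior

nDims, nDerived = 5, 0

def to_optimise(params):
    plik, lowl, lowE, lensing, chi_eff_sq = run_TT(params)
    return -chi_eff_sq

def likelihood(theta):
    return to_optimise(theta), []

ranges = [(-0.3, 0.3), (20, 100), (-1, 5), (-0.02, -0.0001), (55, 75)]

def prior(hypercube):
    return [UniformPrior(lo, hi)(x) for (lo, hi), x in zip(ranges, hypercube)]

settings = PolyChordSettings(nDims, nDerived)
settings.nlive = max(25 * nDims, 200)   # step 5 above
settings.file_root = "pps_run"          # illustrative root name

output = pypolychord.run_polychord(likelihood, nDims, nDerived, settings, prior)
```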
3. MPI:
> **note**: on SL3, 12 hours is the maximum running time. However, we can use those 12 hours much more effectively with parallelisation.
> step1: OMP_NUM_THREADS=8, so we need cpus-per-task=8 and export OMP_NUM_THREADS=8
> step2: since we only use 1 node (there are 56 CPUs/node) and cpus-per-task=8, ntasks=7.
> step3: If it is still not fast enough, I can increase the number of nodes, and ntasks ($=7\times$ number of nodes)
> step4: CMD="mpirun -ppn $mpi_tasks_per_node -np $np $application $options"
4. Problem 1: [cpu-p-100:79638:0:79638] Caught signal 11 (Segmentation fault: address not mapped to object at address 0xfffffffc063c20f0)
> solution: check whether there are NaNs in cls (sometimes cosmo produces NaNs if things go bad).
> Result: Indeed, after changing this, the code runs with any number of MPI processes.
5. Problem 2: The code didn't finish within 12 hours, even with 2 nodes and 10 processes in total.
> solution: (1) if there are NaNs in cls, set chi_eff_sqr=2e30 directly without calculating -> saves time (2) change the size of the hypercube -> Z_live is a small portion of Z -> converges (3) reduce nlive, since the dominant cost is the generation of a new live point (4) use a different prior, not uniform (5) nlive is proportional to the number of MPI processes (6) test original primpy
> result: after (1)(2)(3)(5)(6), the problem remains.
6. Discuss with David, Will, and Adam:
> (1) Try logl = to_optimise(theta) and set logl=settings.logzero if there is a CosmoComputationError or np.isnan(logl). This prevents PolyChord from sampling in the non-physical region. Limiting the prior range also helps.
> (2) Setting UniformPrior is fine.
> (3) [Register for a DiRAC account](https://safe.epcc.ed.ac.uk/dirac/main.jsp#form) -> can run longer (36 hours)
> (4) Each MPI process should write its PPS result to a different file.
> (5) To check whether an error message comes from a specific line of code, I can sandwich the line with print("message", flush=True). This prints directly to the output without delay (sometimes MPI waits for other processes).
7. Problem 2 is solved: I use only MPI (without OpenMP) and set --cpus-per-task=1 (each MPI process runs on one CPU). In this case, all CPUs are used at 100%. Here is the result with nlive=20 and a limited prior: ![](https://hackmd.io/_uploads/rJnxbkWHn.png)
8. Increase nlive to 112, with the prior still limited: the result shows that for logA_SR and H0 the nlive is enough (the result doesn't change much). However, the other parameters may need more nlive. ![](https://hackmd.io/_uploads/Hk6v4ocBn.png)
9. Extend the prior to the range Lukas used (Table 1 in [2205.07374]), with nlive=224. Since PolyChord shrinks the range in log scale, the time needed is $\propto \log(\text{range of prior})$. Also, keep the number of MPI processes < nlive: too many MPI processes won't speed up the calculation, so increase nlive instead, and the code runs faster. Result (nlive=224):
> ```pythonscript=
> prior = np.array([[3.0, 3.1],       # logA_SR
>                   [35, 75],         # N_star
>                   [-1, 5],          # log10f_i
>                   [-0.02, -0.0001], # omega_k
>                   [55, 75]])        # H0
> ```
![](https://hackmd.io/_uploads/Sy97hrQUh.png)
10. Compare with Lukas's result in Fig. 20 of [2205.07374]: his result is different from ours (especially H0, N_star, omega_k). Lukas uses a different Planck likelihood and some more detailed methods (hard to reproduce). We will first redo the plot with original primpy, to see whether the difference comes from that additional setup or just from the different IC of the comoving curvature perturbation ($R$). ![](https://hackmd.io/_uploads/rJ3JKcQ83.png) This is the result using original primpy: ![](https://hackmd.io/_uploads/H1ddSGyw3.png)
11. Compare with Lukas's result: the results are different. There are two ways to address the problem: (1) use a wider prior (as Lukas does) (2) use cube_samples in pypolychord/settings.py to set the limited prior to what Lukas showed; remember to rescale it to the range [0, 1] (see the sketch below). Ask Kilian or David how to do that. ![](https://hackmd.io/_uploads/HkPAJ1Lv2.png) ![](https://hackmd.io/_uploads/Sy5R11Lw2.png)
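For option (2), a sketch of the rescaling (my own illustration; `initial_points` stands for whatever physical-space samples one wants to seed, and the prior box is the one listed above):

```pythonscript=
import numpy as np

prior = np.array([[3.0, 3.1], [35, 75], [-1, 5], [-0.02, -0.0001], [55, 75]])

def to_cube(theta):
    # map physical parameter vectors into the unit hypercube for settings.cube_samples
    return (np.asarray(theta) - prior[:, 0]) / (prior[:, 1] - prior[:, 0])

settings.cube_samples = np.array([to_cube(t) for t in initial_points])
```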
12. Why don't the priors look uniform even though I assume "UniformPrior"? There are several reasons why priors might not look/be uniform: (a) the general 'fluffiness' below the diagonal comes from infelicities in kernel density estimation (you get a better picture for uniform distributions by looking directly at the samples above the diagonal); (b) the Planck 'likelihood' places a non-trivial prior on 'unphysical' regions of parameter space (i.e. when it returns logzero) -- this is something I've had a problem with for some time, since one shouldn't technically define a prior by the dataset; (c) Lukas places a physical prior on the validity of universes (e.g. Fig. 8 of his paper, associated with collapse/sufficient conformal time to solve the horizon problem, and reheating considerations).
13. Answer from Lukas: (1) The "Methods" section of [2205.07374] details my use of Cobaya+PolyChord+CLASS and also which data/likelihoods I used. The sections on conformal time and reheating explain the prior exclusions. (2) I did not use the CMB lensing likelihood, which you seem to be using. That will cause a big posterior difference, especially in Omega_K0. (3) The exclusion of parameter sets that do not solve the horizon problem or that cause impossible reheating scenarios explains the difference you see in the prior of log10f_i. (4) For reproducing the PolyChord runs, essentially all the information is on zenodo in the form of the Cobaya yaml files. (5) The constraint N_star<70 is also down to reheating. Check out figures 6, 10, and 23 in [2205.07374]. Figure 6 visualises how `N_star` factors into all this; figure 10 visualises how `w_reh` factors in. Note the theoretical constraints, which set the prior range to `-1/3<w_reh<1`. Together those figures should give an idea of how interlinked reheating is with the observable number of e-folds. With that understood, figure 23 sums it all up in a triangle plot. In particular, the prior constraint `w_reh<1` results in the limit `N_star<~70` and also `n_s<~0.975`. It also sets lower bounds on `r`, but these are model dependent.
14. Edit the likelihood and prior: (1) delete the lensing likelihood (2) use conformal_time_ratio() in [Lukas's zenodo](https://zenodo.org/record/6547872) plotting_code/curved_conformal_time.ipynb to calculate the conformal time ratio (3) if conformal_time_ratio<1 -> set the log-likelihood to $-\infty$ (4) apply the reheating constraint.
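A sketch of how edits (1)-(4) fit into the likelihood wrapper (my own illustration; `conformal_time_ratio` and `to_optimise` as named above, with the signature of `conformal_time_ratio` assumed; CosmoComputationError is classy's exception, per the discussion earlier):

```pythonscript=
import numpy as np
from classy import CosmoComputationError

def likelihood(theta):
    try:
        # reject universes without enough conformal time to solve the horizon problem
        if conformal_time_ratio(theta) < 1:
            return settings.logzero, []
        logl = to_optimise(theta)
        if np.isnan(logl):
            return settings.logzero, []
        return logl, []
    except CosmoComputationError:
        return settings.logzero, []
```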
15. Use Cobaya: (1) cobaya-run -p ~/rds/hpc-work/clik_installs/Cobaya pcs3d500_TTTEEElite_lowl_lowE_BK15_stb_omegak_AsfoH_perm.yaml -f (2) To avoid having to type the command in every time, I use a python file, run_Cobaya.py: mpirun -np 2 python run_Cobaya.py pcs3d500_TTTEEElite_lowl_lowE_BK15_stb_omegak_AsfoH_perm (3) Lukas used the 2015 BICEP/Keck likelihood rather than the default one in Cobaya (2018), because the BK18 data hadn't been released when Lukas produced his results. At this point it's better to use BK18; it should only lead to minor differences, mostly in r. (4) Running `generate_stb_omegak_AsfoH_perm_logA_Ns_logfi_ok__H0.py` requires modifying CLASS to use the Hubble parameter as an 11th input parameter to the primordial module. Normally the primordial module doesn't know about parameters from the background module. Lukas's GitHub fork of CLASS has an `external_H0_2.9.4` branch with that modification. (5) What is our goal? If reproducing the runs is the main goal, it means modifying both CLASS and Cobaya; it won't work with the master branches. But since I am trying to do things beyond reproducing Lukas's runs, starting by reproducing his runs exactly is the right place to start. If I am looking for new things, then I definitely shouldn't go for BK15 anymore, but for BK18. Also, how important is it to use CLASS? By now, CAMB has overtaken CLASS when it comes to convenience in interfacing with external power spectra, particularly because CAMB allows sampling primordial parameters independently from the parameters needed for computing the transfer functions. This is a huge advantage for curvature runs. (6) As a result, I can first use PolyChord with the horizon and reheating constraints, and then try Cobaya with CAMB.

### Use margarine for complicated priors:
1. [margarine GitHub](https://github.com/htjb/margarine)
2. Steps:
(1) Generate n samples for each parameter uniformly, one by one -> $n^{nDim}$ samples in total (there are nDim parameters)
(2) Keep only the samples which satisfy the conditions (reheating and horizon-problem constraints):
> ```pythonscript=
> # discard samples that violate either constraint (pseudocode made explicit)
> for sample in all_samples:
>     if not satisfies_condition1(sample):
>         continue  # discard
>     if not satisfies_condition2(sample):
>         continue  # discard
>     samples.append(sample)
> ```
(3) Input the samples as an array of shape (number of samples satisfying the constraints, nDim) to margarine:
> ```pythonscript=
> w = np.ones(len(samples))  # weights, set to one since we applied a uniform distribution
> flow = MAF(samples, w)
> flow.train(epochs=1000, early_stop=True)
> # epochs=1000 is the number of training epochs
> # early_stop stops training early if the model stops improving; it helps prevent overfitting
> file_name = 'path/to/pickled/MAF.pkl'
> flow.save(file_name)  # save the trained model
> flow(cube)  # generate samples from the learned distribution, which can be the prior we need
> ```
3. Discuss with Will: (1) Separate the margarine and PolyChord parts into different code. (2) Improve the neural network by modifying the number of epochs in MAF.train() and the learning_rate. (3) Use Lukas's posterior or prior instead of mine to train the MAF. (4) Use MAF.sample() to inspect the output prior; check that the prior looks similar to Lukas's result (see the sketch at the end of this section). (5) I can use margarine to train the likelihood, too (ask Harry Bevins). (6) References: [original neural-network paper](https://arxiv.org/abs/1705.07057), follow-up paper [nested sampling for any prior you like](https://arxiv.org/abs/2102.12478), and [technical details of margarine](https://arxiv.org/abs/2207.11457). (7) Committee for the first-year report: Dr Eloy de Lera Acedo and Professor Roberto Maiolino. Read the report and understand any unclear parts.
4. Prior result: (1) Use Lukas's prior to train the MAF (learning_rate=1e-3, epochs=1000, early_stop=True): ![](https://hackmd.io/_uploads/Hke8Jeaq3.png) (2) Use Lukas's prior to train the MAF (learning_rate=1e-4, epochs=1000, early_stop=True): ![](https://hackmd.io/_uploads/rk8jJla93.png) (3) Compare with Lukas's result (Lukas's prior in blue, MAF in orange): ![](https://hackmd.io/_uploads/BkgXIxpch.png)
5. Problem: it is different from Lukas's prior. It might be because I use prior.head(1000), not all of the samples. There are two ways to solve it: (1) prior.head(prior.neff()): this chooses the prior samples with high enough weights. The advantage is that it keeps the weights; the shortcoming is that samples with high prior weights do not necessarily have higher likelihood. ![](https://hackmd.io/_uploads/B14Xu0lih.png) (2) samples.compress(n): this keeps samples with higher likelihood. However, it loses the weights: since there are repeated samples, and the ones with higher likelihood repeat more often, I can set weight=1/n. ![](https://hackmd.io/_uploads/ryI9uClj2.png)
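A sketch of the check in point 3(4) above (my addition; the file path and parameter names are assumptions, and anesthetic's MCMCSamples is used purely for corner plotting):

```pythonscript=
import numpy as np
import matplotlib.pyplot as plt
from margarine.maf import MAF
from anesthetic import MCMCSamples

flow = MAF.load('path/to/pickled/MAF.pkl')   # the model saved above
generated = np.asarray(flow.sample(10000))   # draw from the trained flow

params = ['logA_SR', 'N_star', 'log10f_i', 'omega_k', 'H0']
samples = MCMCSamples(data=generated, columns=params)
samples.plot_2d(params)                      # compare by eye with Lukas's prior
plt.show()
```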
6. I applied the prior from margarine to PolyChord; however, it runs quite slowly. After 36 hours we got only one posterior point, with extremely high weighting. Solution: eliminate parameter sets containing inf or nan:
> ```pythonscript=
> def prior(hypercube):
>     return bij(hypercube)
>
> def likelihood(theta):
>     if not np.isfinite(np.sum(theta)):
>         return -1e30, []
>     else:
>         # ... your existing likelihood function ...
>         return like, []
> ```
7. I can only use one node for running. Solution: don't set cpus-per-task (this setting is only for code that isn't parallelised well, so we shouldn't need it for PolyChord), i.e. remove this line from the job script:
> ```bashscript=
> #SBATCH --cpus-per-task=1
> ```
8. There are too few equal-weight points to plot:
> It's unlikely that the plotting will work well if there are too few equal-weight points (the plot calls will probably be trying to plot one single sample, as that dominates the current evidence estimate).
> One way that might be more stable while the run is still going is to specify the plots to be scatters:
> ```pythonscript=
> samples.plot_2d(axes, kinds=dict(upper="scatter_2d", lower="scatter_2d"))
> ```
> as the errors are likely due to running a KDE estimate on a small number of points. The above may help the plotting work, but there will likely only be one or two points to show in this case.
> Here is the result (only 6 equal-weight points):
> ![](https://hackmd.io/_uploads/SyZFFrb62.png)
> After running longer, we get more points. Here is the posterior result: although the posterior is narrower, which might be because of the lack of points, the positions of the peaks are similar.
> ![](https://hackmd.io/_uploads/BJiCaUlJp.png)
9. Problem: the posterior still doesn't look similar to Lukas's result -> use the anesthetic GUI to diagnose the problem: "anesthetic <ns file root>" ([see here](https://anesthetic.readthedocs.io/en/latest/))

### Set the quantum vacuum away from the start of inflation
1. Question: what would the PPS look like if we set the quantum vacuum at the start of inflation?
2. Steps:
(1) First consider flat RSET: ![](https://hackmd.io/_uploads/HJXJrzLP2.png)
(2) Evolve the BG variables to the new quantization time (defined by the equation of state $w$), in KD or SR. Use those values to set the quantum IC of $R$ at that time.
(3) Use [oscode](https://github.com/fruzsinaagocs/oscode/blob/master/examples/introduction_to_pyoscode.ipynb) to evolve it back to the start of inflation, and return the $R$ and $R'$ values at that time.
(4) oscode_solver.py then uses these $R$ and $R'$ values as its initial condition to get the PPS.
(5) Extend the result to curved RSET (with the $b$ variable).
3. Here is the result for flat RSET. We can only set the quantum vacuum before inflation, because in step (3) oscode only allows forward calculation. The result is similar to FIC (fig. 4 in [2104.03016]): both have larger oscillations. The reason FIC has them is that R_k=constant rather than $\propto \frac{1}{\sqrt{2k}}$ (BD, HD, RSET, RHM have the latter). Since $P_R\propto R_k^2$, the oscillations decay as k increases.
(1) PPS with the quantum IC set at different w (equation of state): ![](https://hackmd.io/_uploads/SyRgSGLw3.png)
(2) PPS with the quantum IC set at different N: ![](https://hackmd.io/_uploads/rJn0hH1uh.png)
4. I found that if we want to extend this to curved RSET (with the $b$ variable), $\zeta$ and $\dot{\zeta}$ blow up in KD. As a result, we cannot set the curved-RSET quantum vacuum in KD.
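A sketch of step (3) above with pyoscode (my addition; the `ts`/`ws`/`gs` grids and the IC `(x0, dx0)` are assumed precomputed from the background, following the pyoscode introduction notebook linked above):

```pythonscript=
import numpy as np
import pyoscode

# ts: time grid; ws: frequency of the mode equation; gs: damping term
# (x0, dx0): quantum-vacuum IC of R_k at t_vacuum (the chosen w defines t_vacuum)
sol = pyoscode.solve(ts, ws, gs, t_vacuum, t_inflation_start, x0, dx0)
R_k, dR_k = sol["sol"][-1], sol["dsol"][-1]   # R and R' at the start of inflation
```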
5. Next steps (discussed with Will):
(1) Try to set flat RSET close to the Bang (using a, t, or N) to see whether it blows up. The blow-up behavior might cancel the blow-up in curved RSET. ![](https://hackmd.io/_uploads/BkDdnb9wn.png)
(2) Try to set RSET when $k^{-1}$ enters the horizon. It may behave similarly to FIC. Here is the result with flat RSET: ![](https://hackmd.io/_uploads/Syt0maFP2.png)

### Ideas for next steps:
1. Make the CMB and find the best model and parameter set:
> (1) Make the CMB with a Boltzmann solver (CLASS), with the 'primpy' code to consider various inflationary potentials.
> (2) Find the posterior distribution (best parameter set given data and model) with the 'Cobaya' code.
> (3) In 'Cobaya', we apply 'CosmoMC' as the MCMC sampler and 'PolyChord' for nested sampling.
> Q1: What do 'cosmo' and 'curved' aim to get?
> Ans: The $\Lambda CDM$ result and our result, respectively.
> Q2: What is $\chi^2$?
> Ans: The difference between the theoretical prediction and the data, with $\Delta\chi^2=\chi_{result}^2-\chi_{\Lambda CDM}^2$ -> the smaller the better.
2. Check whether RSET is canonically invariant when $K\neq 0$ (refer to section VII in [Mary's paper](https://arxiv.org/pdf/2211.17248.pdf) and [Conformal invariance](https://journals.aps.org/prd/abstract/10.1103/PhysRevD.102.023507)):
> Apply a canonical redefinition to eq. 19 and minimize the new energy-momentum tensor to get the new solution. Finally, transform it back to see whether it is the same as the original result, eq. 40.
> Answer from Mary: I can refer to the calculation of RSET (Appendix A in Mary's paper); the only difference in changing from a flat to a curved universe is changing k to $\kappa$. Then everything should be fine. But this needs to be checked.
3. Find the frozen IC in the $K\neq 0$ case.
4. Radius of the universe.

### 1/5 Meeting
1. 'b' analytic solution with $N,\dot{N}$ instead of $N,t$.
2. PPS whose IC is set by the exact 'b' solution.
3. Get the CMB from a Boltzmann solver, and find the best model with the best parameter set (Q: do we have a supercomputer to run the code?).
4. Which supervisions to take? (1) Part II Astrophysical Fluid Dynamics (2) Statistical Physics (lectured in DAMTP) (3) Introduction to Cosmology (lecturer: George Efstathiou) (4) Topics in Astrophysics (lecturers: Mark Wyatt & Oli Shorttle)
5. Which course to take? (1) Astrostatistics (?) (2) Field Theory in Cosmology (L24), by Enrico Pajer
6. Ted Jacobson

### Why kinetic dominance? How to choose a specific time?
We agree that eternal slow-roll inflation is attractive, since all observable scales are much smaller than the comoving horizon at the beginning; as a result we can set initial conditions (e.g. Bunch-Davies) without singling out any specific time. However, the problem is this: if there were eternal slow-roll inflation, we would not have observed non-zero curvature, since all curvature would be smoothed out by inflation. This hints that inflation should not be infinite. Moreover, this paper (1809.07185) shows that if we consider plateau potentials (e.g. Starobinsky) or hilltop potentials (e.g. Landau-Ginzburg), initial conditions for inflation at the Planck epoch should be set in the kinetically dominated regime. As to which time to choose for setting our initial condition, this is one of the major unsolved questions in the field.
We mention this in passing on page 5 of this paper (1607.04148): "It is important to recognize setting these conditions at η0 is equivalent to forcing the universe into a vacuum state at that moment, but there is minimal theoretical guidance as to when this should be [8]. Indeed, there is little reason to imagine that the universe should be in a vacuum state at any given moment. However, these conditions could also be used to build a formalism of excited states." One could of course try to fit for this vacuum state using data, if indeed these models of just-enough inflation provide better fits to present/future data. In this paper (2104.03016), we set the initial conditions representing the quantum vacuum at the end of kinetic dominance. However, it is more natural to impose perturbation-mode initial conditions at the start of the universe (i.e. in our model at the beginning of the kinetic dominance epoch, which is t -> -1/2k_t, v_k -> 0). Since none of the initial conditions depending on a quantum vacuum can be set at the singularity, a new method called "frozen initial conditions" is proposed in this paper (IV.C), which can be set at the start of the universe. After comparing the predictions with Planck data, we found that the data alone cannot determine which initial condition is preferred. More observations are needed.

## Summary of the project: Quantum initial conditions for inflation
This project will begin by continuing theoretical investigations performed by the group in the field of quantum initial conditions for inflation. We will explore the definition of the quantum vacuum in the primordial universe for cosmologies with spatial curvature at the start of inflation, combining results from 2002.07042, 1607.04148, 0909.5384, 1907.08524 and 2112.07547. The project will then proceed to examine the impact that such theories could have on observations of the night sky in the cosmic microwave background, baryon acoustic oscillations and large-scale structure surveys. We will explore the possibility of testing these theories against the latest cosmological datasets using cutting-edge Bayesian inference and machine learning techniques developed by the group. The student will have access to the group's collective expertise, computational resources, and world-class research environment. They will be given the option to attend graduate lectures in cosmology, particle physics, astrostatistics, data-intensive science and machine learning.

## Background knowledge:
Here are two books that are a nice introduction to QFT in curved spacetime:
1. Fulling: Aspects of Quantum Field Theory in Curved Spacetime
2. Mukhanov: Introduction to Quantum Effects in Gravity

The more standard text is Quantum Fields in Curved Spacetime by Birrell and Davies, but this is very heavy going, so it should be reserved for a second pass. I would also recommend David Tong's quantum field theory notes, which you should read with an eye to what changes between traditional QFT and quantum fields in curved spacetime.

## Open questions of the project:
Open questions, which are where a project in and around this would begin, are:
1. What is the correct variable to quantise, and how? At the moment one generally applies the quantisation procedure (and definition of the quantum vacuum) to the Mukhanov variable v = zR.
This is well-defined in the de Sitter regime (approximated by slow-roll inflation), but [2002.07042] investigates the assumptions underlying this procedure and problems with it (which generally surround its gauge dependence, and therefore lack of physicality). Instead of defining the vacuum as 'particle-less' or as 'diagonalising the Hamiltonian', one can define it as a 'minimum energy state' [1607.04148], with energy defined via the renormalised stress-energy tensor. This removes the gauge dependence, makes observational predictions, and suggests that the appropriate variable to quantise is in fact the comoving curvature perturbation R. However, when one moves beyond spatially flat spacetimes [1907.08524, 2112.07547] (which one must if one is examining the start of inflation), even R seems inappropriate (since the action becomes non-local). Work previously done by the group [0909.5384] may yet provide a better way to define the fundamental physical variable to quantise at the start of the universe.
2. Frozen initial conditions [2104.03016]. These present a classical alternative to the quantum initialisation of the universe. In this instance, the universe begins in a kinetically dominated white-noise state (though the justification for this is loose at the moment, it could likely be argued on 'maximum entropy' grounds). The entry of modes into the horizon during kinetic dominance provides the same spectral tilt that a de Sitter vacuum does, without the need to invoke quantum mechanics. These theories predict oscillatory power spectra, which reduce to LCDM in the short-oscillation limit. Open questions here are: (a) how do these theories impact cosmic structure formation (since they should in principle imprint themselves on the matter power spectrum as well as the CMB power spectrum)? (b) how do these theories impact lensing (current lensing codes are not capable of working with highly oscillatory primordial power spectra)? (c) how well can future datasets constrain these theories, in particular the CMB combined with a better measurement of reionisation (tau)?
3. Palindromic universes, inflation and curvature. There is an underlying question of how to link inflation and/or spatial curvature with palindromic universes [2111.14588], [2104.01938], [2104.02521], and the impact that this interplay would have on the wavefunction quantisation they predict.

## Project
1. Task 1: Replace the R in the action (eq. 26 in [1907.08524]) with $\zeta$ (eq. 104 in [0909.5384]).
> The reason we want to do this: the original R is only applicable to flat spacetime. To extend it to curved spacetime, we have to resort to $\zeta$. To understand its evolution in 3+1 spacetime, we have to replace R by $\zeta$ in the action and try to get the equation of motion.
2. Task 2: Implement figures 5 & 6 in [this paper](https://arxiv.org/pdf/2205.07374.pdf).

### Structure of the code:
**There are three python scripts:**
1. BG_find_sigma_loop_comoving.py --> input K and $N_i$, solve for $\phi_i$ and $\sigma$
2. Com_HubbleHorizon.py --> get phi_i from N_star instead of N_end
3. BG_plot_manyLines.py --> plot all results

**BG_find_sigma_loop_comoving.py**
1. Set K and $N_i$.
2. Function Integrate($\phi_i$): returns N_tot
> Find_sigma($\sigma, \phi_i$): returns P_R_star * $\sigma^2$ - As_cons
> Use a root finder to get sigma, which makes P_R_star after scaling = 1.e-9
> To get P_R_star, I use the Horizon_crossing event: ks = $0.05\,Mpc^{-1}\times a_0$
3. Use a root finder to get $\phi_i$, which makes N_end=70.
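A minimal sketch of this nested root-finding structure (my own illustration; `bg_rhs`, `ic_from_phi`, `epsilon`, `N_of`, `P_R_star` and `As_cons` stand in for the actual implementations; the `inflating` event and `direction=1` follow the meeting notes below):

```pythonscript=
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def integrate(phi_i):
    """Evolve the background from phi_i until inflation ends; return N_tot."""
    def inflating(t, y):
        return epsilon(y) - 1          # inflation ends when epsilon crosses 1
    inflating.terminal = True
    inflating.direction = 1            # catch the end (not the start) of inflation
    sol = solve_ivp(bg_rhs, (0, 1e8), ic_from_phi(phi_i), events=inflating)
    return N_of(sol)                   # total number of e-folds

def find_sigma(sigma, phi_i):
    # scale the PPS amplitude: root of P_R_star * sigma^2 - As_cons
    return P_R_star(phi_i) * sigma**2 - As_cons

phi_i = brentq(lambda p: integrate(p) - 70, 10, 100)        # N_end = 70
sigma = brentq(lambda s: find_sigma(s, phi_i), 1e-8, 1e-2)  # illustrative bracket
```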
## 1+3 covariant approach:
1. Reference: [Theoretical and Observational Cosmology (Marc Lachieze-Rey, 1999)](https://link.springer.com/content/pdf/10.1007/978-94-011-4455-1.pdf)

### 10/6 Meeting with Will
1. Choosing V0=1 is fine; it is just a rescaling (read eqs. 111-113 in 1401.2253).
2. Use solve_ivp: first set phi_0 with event = inflating, and find when inflation ends, so that I can get N_tot (Integrate(phi_i)=N_tot).
3. After getting Integrate(phi_0)=N_tot, use a root finder with f = Integrate(phi_i) - 70, with phi_0 ranging over [10, 100]. (If phi_0 is smaller, N is smaller.)
4. Get the minimum phi theoretically: since Omega_Ki has to be smaller than 1, we can get the minimum phi. ![](https://i.imgur.com/syarIg4.png)
5. Get the maximum phi theoretically: assume the slow-roll approximation (phi=constant) and N_tot=70, then we can get the maximum phi.
6. phi_0=10, N_0=10: ![](https://i.imgur.com/WfN40Gb.png)

### 10/13 Will meeting: background variable evolution
1. Get the time when the universe begins by setting an event with a=0.
2. Set inflating.direction = 1 to select the event where inflation ends.
3. I should keep my code in natural units (with l_p=t_p=m_p=1), and convert physical numbers (such as k=50 Mpc^-1) into Planck units.

### 10/27 Will meeting:
1. Try to use a single root finder with two variables (phi_0, sigma).
2. Make Fig. 13 and the left panel of Fig. 14.
3. Plot the variable "b" in Mary's paper.

### 11/3 Will meeting:
1. Compare with Lukas's code to see how he deals with the instability problem (root guesses etc.)
2. Plot more k points -> then it may look like Fig. 14.
3. Change b_dot to another number, to make b~a when K=0 (flat universe), since that makes g=1 & f=0.

## Report from Will
### Report on Wei-Ning Deng for MT2022
Following my advice, Wei-Ning attended the DAMTP cosmology course, which will have laid the theoretical groundwork for the rest of her PhD. We will discuss suitable courses to attend this term early in the new year. Wei-Ning attends the weekly arXiv scrolling club, making good contributions (which I encourage her to continue) as well as our weekly one-on-one meetings. In addition, after appropriate consideration, Wei-Ning prepared and delivered Part II Physics Relativity supervisions. This is commendable and challenging, and will have helped solidify her understanding of core theoretical material, as well as giving her experience of supervising at a very high Cambridge level. If she wishes to, she can do this again next year, where it will be considerably less work now that she has prepared the material. Wei-Ning's project has begun from the starting point of a recently published analysis by the group [2211.17248], which determined a class of canonical primordial perturbation variables suitable for primordial universes including spatial curvature. Prompted by an interaction following an excellent 'flash presentation' she gave at the KICC at the end of term, she has been in correspondence with one of the notable names in the field (Prof Ted Jacobson), defending kinetic dominance and models of inflation including curvature. She has reached the stage where she is able to reproduce plots and results from the group's previous papers, including Lukas Hergt's [2205.07374]. This term we will focus on mastering the intricacies of the theory which this work uncovers, with the aim of producing a theoretical publication which can form the bulk of a first-year report.
After that we will review if Wei-Ning wishes to continue in this line (or perhaps pursue other topics she has expressed interest in), but the next natural stage would be to incorporate this into a Boltzmann solver and explore the effects of the new initial conditions on cosmological fits using the full machinery of [2205.07374].