noisyoscillator

@noisyoscillator

Joined on May 23, 2021

  • Approximate Gaussian process inference for the drift function in stochastic differential equations
     Approximate Bayes learning of stochastic differential equations, Phys. Rev. E 98, 022109 (2018) (aps.org)
     Gaussian Process Approximations of Stochastic Differential Equations (mlr.press)
     Moment-Based Variational Inference for Stochastic Differential Equations (mlr.press)
     Variational Inference for Stochastic Differential Equations, Opper (2019), Annalen der Physik (Wiley Online Library)
     Sparse Gaussian Processes for Stochastic Differential Equations (OpenReview)
  • A curated list of repositories related to fluid dynamics. Please send pull requests or raise issues to improve this list. Contents: Educational Notebooks, Lecture Series.
  • 🎉 What's new? :warning: Starting from Shapash v2.0.0, the '.compile()' parameters must be provided in the SmartExplainer init. For a clearer initialization, the method's parameters are now passed at construction time: xpl = SmartExplainer(model, backend, preprocessing, postprocessing, features_groups) instead of xpl.compile(x, model, backend, preprocessing, postprocessing, features_groups). Version | New Feature | Description
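A minimal sketch of the migration described above, assuming Shapash >= 2.0.0. The toy data and scikit-learn model are illustrative only (not from the note); the keyword usage follows the parameters named above, but check the docs of your installed Shapash version for the exact signatures.

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from shapash import SmartExplainer  # Shapash >= 2.0.0; older versions use a longer import path

# Toy data and model, just to have something to explain (illustrative placeholders).
X = pd.DataFrame({"x1": [0, 1, 2, 3], "x2": [1, 0, 1, 0]})
y = pd.Series([0.0, 1.0, 2.0, 3.0])
model = DecisionTreeRegressor().fit(X, y)

# Pre-2.0 pattern (no longer valid): everything went through .compile()
#   xpl = SmartExplainer()
#   xpl.compile(x=X, model=model, preprocessing=..., postprocessing=...)

# 2.0+ pattern: model-related arguments move to the constructor;
# .compile() only receives the data to explain.
xpl = SmartExplainer(model=model)
xpl.compile(x=X)
```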
  • Books
     Zwanzig - Nonequilibrium Statistical Mechanics
     Memory Functions, Projection Operators, and the Defect Technique (Lecture Notes in Physics)
     (Advances in Chemical Physics) - Memory Function Approaches to Stochastic Problems in Condensed Matter, Volume 62 (1985)
     Boon, Yip - Molecular Hydrodynamics (Chapters 1, 2)
     Projection Operator Techniques in the Theory of Fluctuations, by Bruce Berne, in Modern Theoretical Chemistry, Vol. 5: Statistical Mechanics, Part B: Time-dependent Processes
     Dieter Forster - Hydrodynamic Fluctuations, Broken Symmetry, and Correlation Functions
     Chapter 11 in Bruce J. Berne, Robert Pecora - Dynamic Light Scattering
  • The evolution of a physical system governed by a nonlinear differential equation often occurs on two time scales when the parameters affecting the dynamics are small or large. This separation of time scales vastly simplifies their study and is a fundamental requirement for all sorts of perturbation techniques in parameter space that one could employ. The subclass of linear systems, on the other hand, can be readily generalized to the case of linear partial differential equations such as the Fokker-Planck (forward Kolmogorov) equations or the Euler/Burgers equations using the Mori-Zwanzig projection operator formalism, which has the effect of integrating out the fast degrees of freedom of the system (e.g. the solvent) and projecting this averaging effect onto the slow variables (e.g. the time evolution of the spatial coordinates of a protein molecule), whose dynamics now acquire a memory kernel plus noise. The appearance of the memory kernel makes the overall dynamics non-Markovian. The projection operator formalism is a very powerful technique that leads to reduced-order modelling of the system, where one only tracks (or models) the dynamics of the slow variables with a clear separation of the two time scales. More details on this formalism can be found in the following excellent presentation: {%youtube e8QFNh5u_1U %} For a detailed introduction to slow-fast dynamical systems, take a look at Nils Berglund, Barbara Gentz - Noise-Induced Phenomena in Slow-Fast Dynamical Systems: A Sample-Paths Approach. Many real systems, however, possess a continuum of time scales, with no clear separation. In the multiscale analysis of such systems one is concerned with bridging disparate scales by merging different mathematical models appropriate at different scales (such as quantum, molecular, and continuum). Finally, one looks for a phenomenological description of the phenomena under study that is valid at all scales using the more powerful renormalization group theory. This paper discusses all these powerful theoretical concepts beautifully.
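Schematically, the reduced equation for a slow variable $A(t)$ produced by the Mori-Zwanzig projection is the generalized Langevin equation; this is the standard Mori form quoted from the literature, not taken from the note itself:

$$
\frac{dA(t)}{dt} \;=\; i\Omega\,A(t) \;-\; \int_0^t K(t-s)\,A(s)\,ds \;+\; \eta(t),
\qquad
\langle \eta(t)\,\eta(0)\rangle \;\propto\; K(t)
$$

The memory kernel $K$ is what makes the reduced dynamics non-Markovian, and the second relation (a fluctuation-dissipation statement) ties the noise statistics to that same kernel.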
  • [1] J. Michael Steele - Stochastic Calculus and Financial Applications [2] Girsanov theorem for multifractional Brownian processes (tandfonline.com) Introduction Can a stochastic process $X_t$ with drift $\mu$ be viewed as another stochastic process $Y_t$ without drift? This is related to the fact that almost any question about Brownian motion with drift may be paraphrased as an equivalent but slightly modified question about standard Brownian motion $B_t$. In this note we will focus on this sort of inquiry, condensed into a few theorems known collectively as Girsanov's theory. This theory is crucial in mathematical finance, where it provides the foundation for risk-neutral pricing of financial instruments such as options and derivatives. For example, it is applied to the celebrated Black-Scholes model to find a probability measure which transforms the current value of the stock price $S_t$ into a very interesting mathematical object called a martingale. Almost everything in option and derivative pricing theory depends on such mathematical objects, together with semi/sub/super-martingales. Here, however, we will not dive deeper into the mathematical intricacies of such objects except for stating their important properties. To motivate the question asked in the first sentence of this note we will begin with a simple simulation technique called importance sampling. The idea of importance sampling is then extended in a natural way to random processes, and in short order this extension leads to the first Girsanov theorem. We will illustrate the effectiveness of this theorem with a few examples at the end, one of them being the derivation of the elegant Lévy-Bachelier formula for the density of the first hitting time of Brownian motion to a slope. The first hitting time is also known as the first passage time, which is more familiar to physicists.
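For reference, the simplest (constant-drift) form of the change of measure discussed above, stated here from standard sources rather than from the note: if $B_t$ is a standard Brownian motion under $P$ and $X_t = B_t + \mu t$, then $X_t$ is a standard Brownian motion under the measure $Q$ defined on $\mathcal{F}_T$ by

$$
\left.\frac{dQ}{dP}\right|_{\mathcal{F}_T} \;=\; \exp\!\Big(-\mu B_T - \tfrac{1}{2}\mu^2 T\Big)
$$

Expectations of functionals of the drifted process under $P$ can then be rewritten as expectations under $Q$ weighted by this exponential martingale, which is exactly the importance-sampling idea extended to path space.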
  • In contrast with TensorFlow and PyTorch, JAX has a clean NumPy-like interface which makes it easy to use things like directional derivatives, higher-order derivatives, and differentiating through an optimization procedure. There are several neural net libraries built on top of JAX. Depending on what you're trying to do, you have several options: For toy functions and simple architectures (e.g. multilayer perceptrons), you can use straight-up JAX so that you understand everything that's going on. Stax is a very lightweight neural net package with easy-to-follow source code. It's good for implementing simpler architectures like CIFAR conv nets, and has the advantage that you can understand the whole control flow of the code. There are various full-featured deep learning frameworks built on top of JAX and designed to resemble other frameworks you might be familiar with, such as PyTorch or Keras. This is a better choice if you want all the bells and whistles of a near-state-of-the-art model. The main choices are Flax, Haiku, and Objax, and the choice between them might come down to which ones already have a public implementation of something you need. While some of these frameworks involve some magic for defining and training architectures, they still provide a functional API for network computations, making it easy to compute things like Hessian-vector products. Neural Tangents is a library for working with the neural tangent kernel and infinite-width limits of neural nets (see Lecture 6). You are welcome to use whatever language and framework you like, but keep in mind that some of the key concepts, such as directional derivatives or Hessian-vector products, might not be so straightforward to use in some frameworks.
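A short JAX sketch of the two quantities mentioned above (a directional derivative and a Hessian-vector product), using a toy scalar loss as a stand-in for a network's loss function; the function and values are illustrative only.

```python
import jax
import jax.numpy as jnp

# Toy scalar "loss"; a stand-in for a network's loss over its parameters.
def loss(w):
    return jnp.sum(jnp.tanh(w) ** 2)

w = jnp.array([0.1, -0.3, 0.7])   # "parameters"
v = jnp.ones_like(w)              # direction vector

# Directional derivative of the loss along v (a Jacobian-vector product).
_, dir_deriv = jax.jvp(loss, (w,), (v,))

# Hessian-vector product via forward-over-reverse differentiation;
# the full Hessian is never materialized.
_, hvp = jax.jvp(jax.grad(loss), (w,), (v,))

print(dir_deriv, hvp)
```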
  • Single Pages Single pages, as the name suggests, are the pages for single pieces of content. For example, a single blog post, a project description, or a publication overview. The URL for these pages will be of the form domain.com/section/example-title, e.g. domain.com/post/my-first-post. Examples of actual single pages from the demo site can be found here, here, or here. The code that generates these pages can be found in the GitHub repo for Academic, specifically the layouts folder. The actual HTML file, named single.html for each content section, can be found in the directory with the corresponding name, e.g. layouts/project/single.html. To understand how these HTML files work, see the Hugo templates docs (https://gohugo.io/templates/). Git submodules Before we continue to the other types of pages, we will briefly explain git submodules and how they are used for this theme. Essentially, submodules allow you to make a git repo a subdirectory of another git repo. If you look at the themes directory in your website repo you should see a subdirectory named academic, which is actually a submodule based on the GitHub repo for Academic that was mentioned earlier. If you're looking at this on github.com, you'll see that the folder icon has a white arrow on it. In order to make changes to this theme repo, you'll want to create a fork, then update the submodule to use your theme repo. To do this, you first need to change the URL for the submodule from https://github.com/gcushen/hugo-academic.git to the URL for the theme repo that you just forked (e.g. https://github.com/your-github-handle/hugo-academic.git) in the .gitmodules file, which is located in the root directory of the website repo. Next, you'll run the following git commands to actually update the submodule: git submodule sync and then git submodule update --init (relevant stackoverflow question). If you click the themes/academic directory on github.com it should now take you to your forked theme repo. The final thing regarding submodules is that if you change the theme repo, these changes are not automatically updated in your website repo. In order to propagate the changes to the website repo, you'll need to run git submodule update --remote --merge, then the standard add, commit, push commands (relevant [stackoverflow question](https://stackoverflow.com/questions/5828324/update-git-submodule-to-latest-commit-on-origin)). Section pages Section pages are the ones that give an overview of all the content within that section, e.g. a list of blog posts or publications. The URL for these pages will be of the form domain.com/section. An example from the demo site for a posts page can be found here. To build this page we use a list page template that defines how the full section page can be composed out of smaller elements that contain just a summary of each single page. The order content section of the docs is useful to look at, since it describes how you can change the order of the list elements as well as their default ordering (Weight > Date > LinkTitle > FilePath). We also have a template that generates and formats these summaries, but we will talk about that a bit later.
  • obsidian.md (https://obsidian.md/) and HackMD (https://hackmd.io/). Another alternative is to use GistPad in VS Code.
  • Google Colab As of October 13, 2018, Google Colab provides a single 12GB NVIDIA Tesla K80 GPU that can be used for up to 12 hours continuously. Recently, Colab also started offering free TPUs. Getting the Most Out of Your Google Colab Clouderizer Clouderizer uses Colab as a backend platform for the GPU and Google Drive as your permanent disk: it is a free computation engine thanks to Colab, and a free permanent disk for saving datasets and models because you can connect your Google Drive account to it. The team at Clouderizer has already created a Community Project for fast.ai which does all the configuration for you, loads the updated libraries each time you start the project, loads any Kaggle datasets at start, and installs Python or Linux libraries.