Markdown is a so-called markup language, along with HTML and $\LaTeX$, meaning that it is a syntax for marking up text documents (making words bold, making large headers, etc.). However, it is much easier to read than HTML or pure $\LaTeX$, and it allows embedded $\LaTeX$ for mathematical notation.
Here's a DFA $D$
```mermaid
graph LR
    _ --> a
    a((a)) --0--> b
    a --1--> c
    b --0--> c
    b --1--> b
```
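As a sanity check, the transition table drawn in the diagram can be encoded directly. This is just an illustrative sketch (`delta` and `run` are names I made up); the diagram leaves transitions out of $c$, as well as the set of accepting states, unspecified, so those are omitted here:

```python
# Transition table of the DFA D, exactly the edges shown in the diagram.
# Transitions out of state c are not drawn, so they are left undefined.
delta = {
    ("a", "0"): "b",
    ("a", "1"): "c",
    ("b", "0"): "c",
    ("b", "1"): "b",
}

def run(w, start="a"):
    """Follow D's transition function on input string w.

    Returns the final state, or None if an undefined transition is hit.
    """
    state = start
    for symbol in w:
        state = delta.get((state, symbol))
        if state is None:
            return None
    return state
```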
self-stabilizing function computation
If we really do not care about time complexity at all, then I suspect the following simpler idea would work. If not, it would benefit the paper to discuss why this idea fails.
The agents can increase counts, as in the paper, through direct interaction of two agents with the same input. For example, suppose we have input alphabet $\{a,b,c\}$. Fix $k$ to be the maximum count in any root set. Then for agents with input $a$, we have transitions
$a,a \to a_2,a$
$a_2,a_2 \to a_3,a_2$
$a_3,a_3 \to a_4,a_3$
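To make the counting dynamics concrete, here is a small simulation sketch (my own illustration, not code from the paper), in which two agents with the same input and the same current count $i < k$ interact to promote one of them to count $i+1$; `step` and `simulate` are hypothetical names:

```python
import random

# Each agent is a pair (input symbol, count), starting at count 1.
# The transitions a,a -> a_2,a ; a_2,a_2 -> a_3,a_2 ; ... generalize to:
# (x,i),(x,i) -> (x,i+1),(x,i) whenever i < k.

def step(agents, k):
    """Pick two distinct agents uniformly at random and apply the rule
    (x,i),(x,i) -> (x,i+1),(x,i) when both share input x and count i < k."""
    u, v = random.sample(range(len(agents)), 2)
    (x, i), (y, j) = agents[u], agents[v]
    if x == y and i == j and i < k:
        agents[u] = (x, i + 1)

def simulate(inputs, k, steps=10_000):
    """Run the counting transitions for a fixed number of interactions."""
    agents = [(x, 1) for x in inputs]
    for _ in range(steps):
        step(agents, k)
    return agents
```

Note that an agent can leave count $i$ only by meeting another agent at count $i$, which leaves one agent behind at count $i$; so starting from all counts equal to 1, the occupied counts fill in contiguously from below.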
This paper introduces a potentially very influential and insightful new definition of "adversarial runtime" in computation by chemical reaction networks (CRNs), which consist of reactions among abstract species such as $A+B \to X+Y$. Existing studies of CRN computation (many of which are done under the guise of the distributed computing model known as population protocols, which captures a very expressive and powerful subclass of CRNs) typically take the following approach. They require that the protocol be "guaranteed correct" in the sense that any "fair" yet otherwise adversarial schedule of reactions will eventually produce the correct output. This is known as stable computation, a.k.a. "rate-independent" computation, since reaction rates can vary (almost) arbitrarily over time, yet the protocol is still correct.
But of course we are also interested in the time complexity of such protocols. The approach taken in other papers has been to use the standard stochastic scheduler (one that picks a pair of molecules uniformly at random to be the next to react) to analyze the runtime. So there's a bit of a disconnect between the definitions of correctness and speed: analysis of correctness allows a powerful adversary to schedule reactions in improbable ways, but analysis of speed gets to assume the well-behaved stochastic scheduler. The purpose of requiring stable computation/rate-independence is to ensure that the system is robust to violations of the assumptions inherent in the stochastic scheduler. For example, it assumes that the solution is well-mixed, but it may not be initially well-mixed, e.g., if inputs are added by pipette and only vortexed several seconds or minutes later. Or the CRN may be operating inside a cell, which uses many spatial structures such as membranes to keep itself from being well-mixed. This could in turn mean that the protocol will not in fact produce an output as quickly as under the standard stochastic scheduler.
One prior work addressed this ([CCDS17]).
Strengths
Weaknesses
My main issue with the paper is that it is heavy on formal notation and light on intuition. I was okay reading some of it because I already know much of the material, but the material that is new to this paper, such as the definitions of adversarial time complexity, was very difficult to parse. I think that this paper could be very influential, but only if people can read it, so I urge the authors to take seriously my suggestions below.
CRN where each reaction has 2 reactants, 2 products, and rate constant 1, e.g., $X,Y \to X,Z$.
Translation key:
- state = chemical species
- transition = reaction
- agent = molecule
- interaction = collision of two molecules (may or may not result in a reaction)
Any unspecified transition is null: $x,y \to x,y$
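This null-transition convention can be captured by a dictionary lookup that defaults to the identity; `interact` and `delta` are illustrative names, with the single transition $X,Y \to X,Z$ from above as the only non-null rule:

```python
def interact(delta, p, q):
    """Apply the transition delta if one is specified for the ordered
    pair (p, q); otherwise the interaction is null: x,y -> x,y."""
    return delta.get((p, q), (p, q))

# Example: the single transition X,Y -> X,Z; every other pair is null.
delta = {("X", "Y"): ("X", "Z")}
```

Note that the lookup is on the *ordered* pair, so $Y,X$ is unspecified here and hence null.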