
Suppose you have a stochastic Petri net with two species A and B and the following transitions

A+B -> 2A at rate u

A+B -> 2B at rate u

A -> B at rate v

B -> A at rate v

Now, assuming I am not horribly confused about something, and haven't miscalculated...

The total number of A's and B's is a constant N. Let the number of A's be n, so the number of B's is N-n. This system is simple enough to find the equilibrium distribution, which is a beta-binomial distribution with $\alpha = \beta = v/u$. If u>v, this is U-shaped, and if u>>v you get what I think of as a 'bistable' system. The system will spend a long time with n close to 0, then 'flip' quite quickly to n close to N, stay there for some time, then flip back, and so on. This is true for any N>2.

But the rate equation for the proportion x=n/N is just

$d x/d t = v(1-2x)$

which just says that x tends to 1/2.
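(A quick check, for completeness: the ODE is linear, so it solves in closed form as

$$x(t) = \frac{1}{2} + \left( x(0) - \frac{1}{2} \right) e^{-2 v t}$$

Differentiating gives $dx/dt = -2v(x - 1/2) = v(1-2x)$, so every solution decays exponentially to $x = 1/2$ at rate $2v$, independent of N. The rate equation therefore cannot see the flipping behaviour at all.)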

What bothers me is that even when N is a zillion, the continuous approximation gives a very poor idea of what is going on.

BTW, I can't see a better category than Chat for a post like this.

## Comments

I agree that the rate equation says that the system will move towards an equilibrium where the number of A's equals the number of B's.

When I eyeball the reactions, it does look like the system will always be in an unstable state if A and B are not equal. The first two reactions have the same rate constant, the same reactants A + B, and one has product 2A while the other has product 2B. So these two alone should lead to no expected drift in the concentrations. This is confirmed by the fact that the rate equation for these two reactions alone says that the time derivatives of each of the concentrations is zero.

So the first two transitions alone would lead to some kind of random walk, under stochastic conditions, with no attractor point.

The other two transitions clearly pull the concentrations towards equality.

So the only true equilibrium point is at equality.

Now by saying u >> v, you are giving a high weight to the random walk, and a low weight towards the directed process. So it will look alot like the random walk, and it may often even reach states where the counts of A or B are close to zero or zero.

But is this really bi-stability? To me it looks like there is a single stable state, with huge "fluctuations" caused by the fact that u >> v.

Can you give the details of how you computed the equilibrium distribution?

> But is this really bi-stability? To me it looks like there is a single stable state, with huge "fluctuations" caused by the fact that u >> v.

I am sure it is not defined to be bistable in the theory of Petri nets, not by John anyway. Your description in terms of huge fluctuations is not wrong, but if the system is, say, spending 99.99% of the time outside (N/10, 9N/10), it seems to fit this description: http://en.wikipedia.org/wiki/Bistability

Sketch:

There are N+1 states, n=0 to n=N. The rate at which n->n-1 is n(N-n)u + nv. The rate at which n->n+1 is n(N-n)u + (N-n)v. You can now write down an $(N+1)\times(N+1)$ rate matrix R for the Markov process on these states. I use the convention that states are row vectors on the left. R is tridiagonal and row n looks like

$$0 ... 0 \quad n(N-n)u+n v \quad X \quad n(N-n)u + (N-n)v \quad0 ... 0$$ for 0<n<N, where X is on the diagonal and is minus the two other terms, so the row sum is zero. You can do cases n=0,N yourself. Then setting yR=0 and solving for $y=(y_0, \dots y_N)$ gives the equilibrium distribution. I did that by showing

$$\frac{y_{i+1}}{y_i} = \frac{N-i}{i+1} \frac{i+v/u}{N-i-1+v/u}$$ (I did this directly for i=0, then by induction on i.) The beta-binomial also has this (defining) property.
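To make the sketch concrete, here is a short Python check (my own, not part of the original comment; the values N=20, u=1, v=0.05 are arbitrary illustrative choices) that builds the distribution from the ratio above and compares it term by term to the beta-binomial pmf with $\alpha = \beta = v/u$:

```python
import math

def equilibrium(N, u, v):
    """Unnormalised equilibrium y_0..y_N from the ratio
    y_{i+1}/y_i = ((N-i)/(i+1)) * ((i + v/u)/(N-i-1 + v/u)), then normalise."""
    a = v / u
    y = [1.0]
    for i in range(N):
        y.append(y[-1] * (N - i) / (i + 1) * (i + a) / (N - i - 1 + a))
    s = sum(y)
    return [t / s for t in y]

def beta_binomial_pmf(n, N, alpha, beta):
    """Beta-binomial pmf: C(N,n) * B(n+alpha, N-n+beta) / B(alpha, beta)."""
    return math.exp(
        math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)
        + math.lgamma(n + alpha) + math.lgamma(N - n + beta)
        - math.lgamma(N + alpha + beta)
        + math.lgamma(alpha + beta) - math.lgamma(alpha) - math.lgamma(beta)
    )

N, u, v = 20, 1.0, 0.05
y = equilibrium(N, u, v)
bb = [beta_binomial_pmf(n, N, v / u, v / u) for n in range(N + 1)]
assert all(abs(p - q) < 1e-9 for p, q in zip(y, bb))
# With u >> v the distribution is U-shaped: mass piles up at n = 0 and n = N.
assert y[0] > y[N // 2]
```

With u >> v the computed distribution is strongly U-shaped, matching the 'bistable' picture above.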

That does look curious.

I think the underlying issue can be highlighted by looking at the first two transitions alone: A + B -> 2A and A + B -> 2B, both proceeding at the same rate. The rate equation for this system says that the derivatives are zero. So every state is an equilibrium state. But the reaction proceeds more slowly as n gets closer to zero or to N. So the amount of time spent at the extremes will be greater. All states are equilibrium states, but some are preferred over others.

Also, this two-transition system will eventually get stuck in one of the states n = 0 or n = N. So these are "attractor" points in a stochastic sense, to which the continuous deterministic model is oblivious.

What are the general lessons to be learned from this example you posted, and the two-transition subsystem that it contains? To me it shows that the deterministic limit is glossing over a lot of considerations, including some that pertain to probabilistic equilibrium and bi-stability.
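The claim that the v=0 system eventually gets stuck at n=0 or n=N is easy to check numerically. Here is a minimal Gillespie-style simulation (my own sketch, not from the thread; N, u and the run count are arbitrary illustrative choices):

```python
import random

def gillespie_two_transition(N, u, n0, rng):
    """Simulate only A+B -> 2A and A+B -> 2B, both at rate u.
    Returns the absorbing state (0 or N) and the absorption time."""
    n, t = n0, 0.0
    while 0 < n < N:
        total_rate = 2 * u * n * (N - n)      # each reaction fires at rate u*n*(N-n)
        t += rng.expovariate(total_rate)      # exponential waiting time
        n += 1 if rng.random() < 0.5 else -1  # the two reactions are equally likely
    return n, t

rng = random.Random(1)
N = 20
finals = [gillespie_two_transition(N, 1.0, N // 2, rng)[0] for _ in range(2000)]
frac_all_A = finals.count(N) / len(finals)
assert set(finals) <= {0, N}      # every run gets stuck at an extreme
assert 0.45 < frac_all_A < 0.55   # by symmetry, about half end with all A's
```

The rate equation for this subsystem says nothing moves, while every stochastic run ends in fixation; that is exactly the gap under discussion.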

Yes, the two-transition, or v=0, case is a good place to start in understanding the system. It is more interesting (to me anyway) if v is very small but nonzero.

This system is interesting as a simple model for some real world things, like:

1. A protein which can fold into two conformations A and B. At a very low rate v they flip from one to the other 'spontaneously'. When an A bumps into a B, they both 'try' to bend the other into their own shape.

2. A bunch of sheep which can get into two fields A and B. Since they like to stay together, they are usually all in one field or the other, but occasionally they move from one to the other.

3. The way a group of people decide which side of the road to drive on.

I was actually thinking of two alleles A and B, with equal fitness in a gene pool when I wrote down the transitions, but the gene pool example seems to be more complex. It has the same sort of behaviour (one allele goes to fixation, and the system gets stuck there for a while, but not forever). Based on the gene pool example, exact symmetry is not necessary to get the same sort of behaviour.

Oh, and thanks for finally asking a question, something I managed not to do, despite the title.

Graham wrote:

> What bothers me is that even when N is a zillion, the continuous approximation gives a very poor idea of what is going on.

It shouldn't bother you - you should be happy that you came up with this interesting example! There are theorems saying that:

1) In the limit of large numbers you can take any solution of the rate equation and cook up a solution of the master equation where the probability distribution is peaked near that solution of the rate equation for a while.

2) Any complex balanced equilibrium solution of the rate equation gives an equilibrium solution of the master equation where the numbers of each kind of species are distributed in a Poisson distribution whose mean matches that equilibrium solution of the rate equation - the [Anderson-Craciun-Kurtz theorem](http://math.ucr.edu/home/baez/networks/networks_10.html).

But these theorems, being true, aren't contradicted by your example!

If I'm really good I'll include this example in my book. It shows that the continuum limit can easily be misleading - if we're easily misled.

As David noted, if you leave out two transitions and just keep these:

A+B -> 2A at rate u

A+B -> 2B at rate u

and start the system off in a state with an equal number of A's and B's, and evolve it stochastically according to the _master_ equation, the _expected value_ of the number of A's and B's will remain constant. However, the probability that the number of A's is nonzero will eventually drop to 1/2. Similarly, the probability that the number of B's is nonzero will also eventually drop to 1/2. It's like a random walk on a closed interval where when you hit either end you're stuck there.

Well it bothers me because I'm more of an applied mathematician than a pure one these days and I want things to work, not be interesting!

I think some progress can be made like this. If n is the number of A's, then we can look at what happens to the expectation of n^2. There is an equal chance of n going to n-1 or n+1, so the expected change in n^2 per transition is

$$\frac{(n-1)^2 +(n+1)^2}{2} - n^2 = 1$$ So the expectation of the second moment will increase until n=0 or n=N. Presumably one can derive differential equations for second moments, and do this quite generally for Petri nets, and get an equilibrium solution for first and second moments, so you would get a vector of means and a covariance matrix describing the equilibrium. I imagine people have already done this.
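A quick numerical sanity check of the '+1 per transition' claim, using the embedded jump chain of the v=0 system (my own sketch; N, the start state and the step count are arbitrary choices). Starting at n0, after k transitions, with k small enough that absorption cannot yet have occurred, the second moment should be exactly n0^2 + k:

```python
import random

def mean_square_after(N, n0, steps, runs, rng):
    """Embedded jump chain of the v = 0 system: each transition sends n to
    n-1 or n+1 with equal probability, until n is absorbed at 0 or N."""
    total = 0.0
    for _ in range(runs):
        n = n0
        for _ in range(steps):
            if 0 < n < N:
                n += 1 if rng.random() < 0.5 else -1
        total += n * n
    return total / runs

rng = random.Random(7)
N, n0, steps = 100, 50, 10
est = mean_square_after(N, n0, steps, 100_000, rng)
# E[n^2] grows by 1 per transition: expect n0^2 + steps here (to Monte Carlo error).
assert abs(est - (n0 * n0 + steps)) < 10
```

Once trajectories start hitting 0 or N the growth stops, which is what pins the equilibrium second moment down.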

Graham wrote:

> ... I want things to work, not be interesting!

It works, but unfortunately in an interesting way.

That's an interesting idea. I haven't seen it! Let me just state your idea a bit more precisely, in case someone wants to try it. Starting from the master equation we can derive a differential equation, the rate equation, saying how the mean values of the number of things of each species change with time, _under the assumption that the mean number of things is large and the standard deviation and some higher moments are small in comparison_. Under some similar assumptions we should be able to derive differential equations saying how the higher moments change with time.

I don't think I've even seen a rigorous derivation of the rate equation in the way I've described! Not in the published literature, that is. I've seen it in my own notebook. I should work out the details, write it up and include it in my book. The necessary assumptions are probably a bit subtler than the italicized phrase - I forget right now - but basically you just do the calculation and see which terms need to be 'negligibly small' to get the calculation to work.

The same sort of calculation should let us see how the higher moments change.

Hmm, this sounds like a good project for one of my students! Starting in a couple days I'll be teaching a course on stochastic Petri nets, based on the book. So, after that course is done, they'll be in the position to try this project.

> Hmm, this sounds like a good project for one of my students!

Excellent idea!

I need to remember this excellent idea, which is the only reason I'm writing this comment.
