I’ve been studying from Biological Modeling: a free course^{[1]}. These are some of my notes for Chapter 1, about gene networks. I wanted to implement the section on autoregulation^{[2]} myself instead of using their recommended software^{[3]}.

The context: DNA leads to RNA leads to proteins. These proteins can do a wide range of things, but the ones covered here are *transcription factors*, which affect the rate of protein production (gene expression).

## 1 Autoregulation

Surprisingly, some proteins affect *their own* rate of production, a process called *autoregulation*. To understand this, let’s first look at a simple model of protein production:

- In the book there are two proteins, `X` and `Y`. But `X` is constant, so I'm going to focus on `Y` in my code.
- The first reaction is `X` → `X` + `Y`. Since `X` is constant, that means `Y` is being produced at a constant rate, which I'll call `reaction`.
- The second reaction is `Y` → null. This means `Y` is being removed at a rate `recycle` proportional to the amount of `Y` in the system.

These two reactions lead to an equilibrium for the level of `Y`:

By playing with the sliders I can see that `reaction` is a pure scaling factor, and `recycle` is the main parameter affecting shape. The chart is from a simulation, but this simple case can also be represented in closed form: `Y(t) = reaction/recycle * (1 - e^(-recycle * t))`.
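The simulation itself is only a few lines of forward-Euler integration. Here's a sketch of how I set it up (the parameter values and step size are arbitrary picks of mine, not from the book):

```python
import math

# Forward-Euler simulation of dY/dt = reaction - recycle * Y,
# compared against the closed-form solution above.
reaction = 2.0   # constant production rate of Y (via constant X)
recycle = 0.5    # removal rate, proportional to the amount of Y
dt = 0.001       # time step
steps = 20000    # simulate t = 0 .. 20

y = 0.0
for _ in range(steps):
    y += (reaction - recycle * y) * dt

t = steps * dt
closed_form = reaction / recycle * (1 - math.exp(-recycle * t))
print(y, closed_form)  # both are close to the equilibrium reaction/recycle = 4.0
```

Both curves level off at `reaction / recycle`, which matches the scaling-factor observation: doubling `reaction` doubles the whole curve, while `recycle` changes both the equilibrium and how quickly it's approached.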

Now it’s time to add autoregulation.

- The autoregulation reaction is `2Y` → `Y`. This means `Y` is being removed at a rate `autoregulation` times `Y²`.

This also leads to an equilibrium, but it’s lower than in the original system:

But why would we ever want this? It seems weird to evolve a new mechanism to reach a lower equilibrium, since the system can already do that without autoregulation: we can just reduce the `reaction` parameter instead.

The way to think about it is that instead of the *input* parameters `reaction` and `recycle` being fixed, it’s the *equilibrium* value that’s fixed. Let’s set the equilibrium to always be 1.0 and see what autoregulation does:

This explains why we want autoregulation! For any value of the `reaction` rate, **we reach equilibrium faster** if we also have `autoregulation > 0`.
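Here's a sketch of how I checked that (my own code; to hold the equilibrium at 1.0 I set `reaction = recycle + autoregulation`, which makes `dY/dt` zero at `Y = 1`):

```python
def time_to_90_percent(recycle, autoregulation, dt=0.001):
    """Time for Y (starting at 0) to first reach 90% of its equilibrium of 1.0."""
    reaction = recycle + autoregulation  # forces the equilibrium to Y = 1.0
    y, t = 0.0, 0.0
    while y < 0.9:
        y += (reaction - recycle * y - autoregulation * y * y) * dt
        t += dt
    return t

plain = time_to_90_percent(recycle=1.0, autoregulation=0.0)  # ~2.3 (= ln 10)
auto = time_to_90_percent(recycle=1.0, autoregulation=4.0)   # much sooner
print(plain, auto)
```

The autoregulated system starts out with a much higher production rate (it has to, to end up at the same equilibrium), and the `Y²` removal only kicks in near the top, so it races to 1.0 and then brakes.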

So that’s interesting! I learned a bit by studying this and implementing the simulation.

Other notes I made while studying this section:

- without autoregulation, the differential equation is solved by `Y(t) = reaction / recycle * (1 - exp(-recycle * t))`, and equilibrium is at `Y = reaction / recycle`.
- with autoregulation, I can find the equilibrium by solving `dy = 0`, i.e. `reaction - recycle * y - autoregulation * y * y = 0`. This is a quadratic equation with `a = autoregulation`, `b = recycle`, `c = -reaction`, so the solution is `y = (-recycle ± sqrt(recycle² + 4 * autoregulation * reaction)) / (2 * autoregulation)`, or with Muller’s method we get `y = -2*reaction / (-recycle ∓ sqrt(recycle² + 4 * autoregulation * reaction))`. The latter has the advantage of working when `a` is zero, which is a case I want to handle, since I want to allow setting `autoregulation` to 0.
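The Muller-form root translates directly into code. A sketch (the function name is mine):

```python
import math

def equilibrium(reaction, recycle, autoregulation):
    """Equilibrium of Y: the positive root of
    reaction - recycle*y - autoregulation*y^2 = 0.
    The Muller form never divides by `autoregulation`,
    so autoregulation = 0 is handled for free."""
    return -2 * reaction / (
        -recycle - math.sqrt(recycle**2 + 4 * autoregulation * reaction)
    )

print(equilibrium(2.0, 0.5, 0.0))  # 4.0, matching reaction / recycle
print(equilibrium(5.0, 1.0, 4.0))  # 1.0
```

Choosing the `-` branch in the denominator picks out the positive root.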

Read the course material^{[4]} for a lot more detail!

I also made a spreadsheet version^{[5]} of this.

## 2 Feedforward

For the rest of this page, I took notes but I did not implement the simulations. I have worked with these types of simulations (in a non-biology context) enough that it all “clicked” without needing to implement it. So what I’m learning here is the biology part but not the simulation or math part.

They show a three-value network.

- X regulates Y
- Y regulates Z
- X also regulates Z!

The signs matter. An “incoherent feed-forward loop” is: X adds to Y, Y subtracts from Z, but X adds to Z. I guess there are three signs, each of which could be + or -, so eight types total.

X starts out in a *steady state*. So it’s constant. It’s adding to both Y and Z. But Y is removing from Z. And in this simulation, Y and Z are both constantly decreasing (by some fraction every tick). X needs to add to Z faster than to Y. Otherwise Y will never be big enough to decrease Z.

But won’t this lead to a steady state for Y? It’s only affected by X, which is a steady state. I guess the point of this section is to show that feedforward can *converge* quicker than regular autoregulation.

So if X is adding to Z and then Y removes from Z, then Z will also end up in a steady state.

And the idea is that X adding to Z, before Y has grown, will *overshoot*. So we’ll get Z to a desired level more quickly than if we only used regular autoregulation.
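I didn't implement this section, but to make the overshoot idea concrete, here's a minimal sketch of an incoherent feed-forward loop; all the rate constants are arbitrary guesses of mine:

```python
# Incoherent feed-forward loop: X adds to Y, X adds to Z, Y removes from Z.
# X is held at a steady state; Y and Z also decay by a fraction every tick.
x = 1.0
y, z = 0.0, 0.0
dt = 0.01
history = []
for _ in range(2000):  # t = 0 .. 20
    dy = 1.0 * x - 0.5 * y                 # produced by X, decays
    dz = 4.0 * x - 2.0 * y * z - 0.5 * z   # produced by X, removed by Y, decays
    y += dy * dt
    z += dz * dt
    history.append(z)

peak, final = max(history), history[-1]
print(peak, final)  # Z overshoots, then settles to a lower steady state
```

Early on Y is still small, so Z shoots up; once Y grows to its own steady state of 2.0, it pulls Z back down to `4 / (2*2 + 0.5) ≈ 0.89`.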

I think I understand this. Do I want to implement it too? I dunno. I think if I implement it I could see all eight types of networks. But I can probably look them up somewhere.

If there’s overshoot, will there be oscillation, like car springs? Maybe.

## 3 Oscillators

The next section is about oscillators: https://biologicalmodeling.org/motifs/oscillators^{[6]}

X reduces Y, Y reduces Z, Z reduces X. But all of them are produced at a constant rate, and all of them are also reduced at a constant rate. This produces an oscillator. This “clicked” for me as I’ve built these kinds of things before (non-biology contexts).
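I didn't implement this one either, but a sketch of what I'd write (made-up rates; I clamp levels at zero since amounts can't go negative):

```python
# Repression ring: Z reduces X, X reduces Y, Y reduces Z.
# Each protein is produced at a constant rate and also decays proportionally.
production, repress, decay, dt = 1.0, 1.0, 0.1, 0.01

x, y, z = 1.0, 0.0, 0.0
xs = []
for _ in range(40000):  # t = 0 .. 400
    dx = production - repress * z - decay * x
    dy = production - repress * x - decay * y
    dz = production - repress * y - decay * z
    x = max(0.0, x + dx * dt)
    y = max(0.0, y + dy * dt)
    z = max(0.0, z + dz * dt)
    xs.append(x)

tail = xs[len(xs) // 2:]  # discard the transient
print(max(tail) - min(tail))  # stays large: sustained oscillation, no convergence
```

The symmetric fixed point is unstable here, so instead of settling, the three levels chase each other around it indefinitely.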

Interestingly, the X, Y, Z oscillator is *noise suppressing*. If you add noise to the system it works just fine.

https://biologicalmodeling.org/motifs/conclusion^{[7]}

They switch from simulating individual molecules to simulating just the count, which is what I did from the beginning because particle simulations are more work.

They show that greatly altering one of X, Y, Z will cause the system to recover pretty quickly. This would be something I could show in a simulation. But *why*? Do I think I would learn this material better by simulating it myself? Maybe. But it would take a lot longer, and I'd only learn it slightly better, I think. So I am inclined to skip it, unless I want to make a tutorial.

I reached the end of the chapter and will move on.