Federica wrote: ↑Tue Sep 05, 2023 9:00 pm
I understand the equations have to assign probabilities to each alternative reactive scenario, since the chemicals could potentially engage in a multitude of reaction patterns? How do they assign the probability of certain reaction alternatives occurring first, thereby imparting a certain distinct direction to the reactive general scenario, since the hypothesis is perfect diffusion of all chemicals and so any reaction (among the many possible) could ‘start the dance’?
I’m asking this because I'm not clear how the simulation is adding value to the general intention of making progress towards simulating life.
I cannot give you an exact answer because I myself am not very knowledgeable in the details of these methods. I understand the material well enough to have some general orientation in the domain, but otherwise I'm a complete layperson.
As far as I understand, the kinetic models don't work by deciding which reaction to allow at each time step. Actually, timesteps are not an intrinsic part of the model at all; they belong only to a particular method for solving it.
Let's compare this with continuum mechanics. Think of water flowing through a pipe junction, where the flow splits into pipes with different diameters. Let's say we have a known flow rate (liters per minute, for example) at the inlet. We solve the equations and get the flow rate for each of the outlet pipes. A simple constraint here is that the sum of the rates at the outlets must equal the rate at the inlet(s). We don't pose the question by imagining a water molecule and asking through which of the outlet pipes it would go. In a sense, we assume the flow is infinitely divisible. For example, a flow rate of 0.00...0001 l/min might be so low that realistically only a few water molecules would pass in an hour. But in continuum mechanics we don't care about this. The flow rate is what we simulate, and it can be anything we want; we're not concerned that the flow is made of particles. Similarly, we don't ask which reaction gets to go first but simply what the reaction rates are.
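To make the conservation constraint concrete, here's a toy sketch in Python. The rule that each outlet takes a share of the flow proportional to its cross-sectional area is my own simplifying assumption (a real pipe network would be solved from pressures and resistances); the only point is that the outlet rates sum to the inlet rate, with no individual molecules in sight anywhere.

Code: Select all
# Toy illustration of flow-rate conservation at a junction.
# Assumption (mine, not from the paper): each outlet takes a share of the
# inlet flow proportional to its cross-sectional area. A real solver would
# derive the split from pressures and pipe resistances.

def split_flow(inlet_rate_l_per_min, outlet_diameters_mm):
    areas = [d ** 2 for d in outlet_diameters_mm]  # area is proportional to d^2
    total = sum(areas)
    return [inlet_rate_l_per_min * a / total for a in areas]

outlets = split_flow(10.0, [20, 30, 50])
print(outlets)       # three outlet rates
print(sum(outlets))  # 10.0 -- the outlets sum to the inlet rate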
The timestep is used for something else. If we imagine a steady-state pipe system, then with a constant input flow rate we'll probably also have constant output rates. But imagine that the input pressure fluctuates sinusoidally. In that case the output rates will also likely oscillate in some way. If we can solve the equations analytically, we may get as a result some nicely formed mathematical function such as sin(t) (with some coefficients, of course). Then if we want to know the output rate at any arbitrary time t, we simply plug that into the function and get the result.
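For illustration, such a closed-form answer might look like the sketch below. The coefficients are made up; the point is only that a formula can be evaluated directly at any time t, with no stepping from initial conditions.

Code: Select all
import math

# Hypothetical closed-form result of the kind described above:
# suppose the analysis yields output rate q(t) = Q0 + A*sin(w*t).
Q0, A, w = 5.0, 1.5, 2.0  # made-up coefficients for illustration

def output_rate(t):
    return Q0 + A * math.sin(w * t)

print(output_rate(0.0))      # 5.0
print(output_rate(123.456))  # evaluated directly at an arbitrary time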
Alas, such exact mathematical functions can be found only for the simplest cases. In the general case we can't simply calculate the state of the system at an arbitrary time t. Instead, we have to start from some initial conditions and modify the system in tiny steps until we reach the time we need. At every step we're still dealing with continuum mechanics. We still care only about the rates; we don't ask which molecule will go where. It is similar with the chemical models. Think of the reactions as pipe flows that transform one substance into another at certain rates. At every step all reactions happen, just as at every step water flows through all of the pipes.
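Here's a minimal sketch of this kind of stepping, using forward Euler on a toy reaction chain A -> B -> C with mass-action rates. The network and rate constants are invented, and real kinetic models use richer rate laws and better solvers, but the principle is the same: at every step all reactions proceed at their current rates, each nudging the concentrations a little.

Code: Select all
# Forward Euler stepping of a toy reaction chain A -> B -> C.
# Rate constants and initial concentrations are made up for illustration.

k1, k2 = 0.5, 0.3        # rate constants (arbitrary)
A, B, C = 1.0, 0.0, 0.0  # initial concentrations
dt, t_end = 0.01, 10.0   # step size and end time

t = 0.0
while t < t_end:
    r1 = k1 * A          # current rate of A -> B
    r2 = k2 * B          # current rate of B -> C
    A -= r1 * dt         # both reactions act simultaneously,
    B += (r1 - r2) * dt  # each nudging the concentrations a little
    C += r2 * dt
    t += dt

print(A, B, C)           # A + B + C stays 1.0 (up to rounding): conservation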
In other words, the timestep is used to break the temporal process down into tiny snapshots. The rates are well defined in each snapshot. The steps are needed in order to understand how the rates change from moment to moment. But we don't care that at a lower level everything is discrete.
So this is the main point: we don't think about probabilities - which reaction is likely to happen - but assume that all reactions happen all the time and smoothly vary the chemical concentrations (transforming them into one another). Such probabilities can be inferred from the rates, but we don't have to do that. We only care about how a given chemical concentration (the water at the inlet) is transformed and distributed into other chemical concentrations.
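For completeness, here's how such probabilities could be read off the rates, in the spirit of stochastic simulation methods such as Gillespie's algorithm (my reference, not something named in the paper): the chance that a given reaction is the next one to fire is simply its rate divided by the sum of all current rates. The deterministic model above never needs this; it works with the rates directly.

Code: Select all
# Inferring 'which reaction fires next' probabilities from current rates,
# as in stochastic simulation (e.g. Gillespie's algorithm). The rates are
# made up for illustration.

rates = {"A->B": 0.5, "B->C": 0.3, "B->A": 0.2}  # current reaction rates
total = sum(rates.values())

for reaction, r in rates.items():
    print(reaction, "fires next with probability", r / total)
# prints 0.5, 0.3 and 0.2 respectively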
Federica wrote: ↑Tue Sep 05, 2023 9:00 pm
That was to try and grasp the intentions of the researchers and in what direction they invest their efforts.
It is simply a normal part of the modern scientific process. The vast majority of papers published today don't present anything groundbreaking; they are only small steps. For example, the present paper wouldn't have been possible if hundreds of previous papers hadn't studied the dynamics of all the different reactions in the cell. So to understand the motivation we always have to think of a greater goal: everything contributes to the growing body of knowledge.
Now maybe you ask more specifically: why would scientists want to simulate a cell? One answer is to verify whether our current understanding (implemented as models) stands up to the facts. Then, if the model seems to match known dynamics, we can more confidently assume that it will give correct results even for novel simulated conditions. For example, a gene can be removed (which prevents the corresponding protein from being synthesized) and we can see how this affects the workings of the whole cell.
It might still be unclear why we would do that instead of simply trying it out in a real cell. Part of the answer is that it's simply technologically impossible at our stage to have a clear overview of the cellular processes. This can be pictured with an image. Think of a big amusement park. From an airplane we can get only a very general picture of the processes there; the people are not even visible. This is what we can get of a cell with an optical microscope. There's a physical limit to how much we can zoom in. It's not a matter of perfecting the lenses. It's simply that at small scales everything acts as a diffraction grating, since the wavelength of light is comparable to the sizes of the molecules. Thus at some level we simply get a complicated diffraction pattern instead of sharp outlines of the small details. These details can be studied in roundabout ways, but what we learn about them can be compared to schematic snapshots. For example, we can have a snapshot of a person handing over money at the ticket booth, another of a person fastening the safety belt on a ride, and so on. From these snapshots of transactions we have to build up the picture of what life in an amusement park is like.
Even though every textbook has many images of cellular processes, and today we also have nice animations, in reality we can't see things this way directly. The overall picture is patched together from our disconnected snapshots, our partial understanding of the different reactions. Simulating all the reactions simultaneously is one way to see whether everything really plays out as we conceive it.
Federica wrote: ↑Tue Sep 05, 2023 9:00 pm
Cleric K wrote: ↑Thu Aug 31, 2023 4:07 pm
there's no danger of some higher life spaces creeping into our algorithm and meaningfully manipulating the results of our calculation (the same can't be said for the way we think out the algorithm itself. I'll explain later when I write in connection with the full paper)
Is there more to add at this point?
As said above, our knowledge consists of quite sparse snapshots which we connect through our thinking. The part that we fill in is largely influenced by our beliefs. Let's consider the fact that the minimal cell needs some 500 genes (blueprints for proteins) in order to work. Some of these are completely critical: if they are missing, the whole cell fails to work. The question of how this could have evolved is very hard, because there's no simple account of how all of it could have arisen gradually through tiny mutations. Nevertheless, the materialistic belief holds that it must have happened somehow. Thus science behaves as if we have certain sparse snapshots and it's only a matter of time before the interior is filled in with the details.
It is in this sense that I've said: higher spaces can't alter the way a computer algorithm works, but they can surely steer the thinking that conceives the algorithms and devises them in such a way that it seeks confirmation of its beliefs.
It's similar with the workings of the cell (even if we ignore how it came to be). It is assumed that it's all chemical automata, and the algorithms we develop seek to implement these ideas. Thus the spaces can't interfere with the algorithm once it is implemented in silicon, but they surely steer how the algorithm is thought out.
The trouble is that the gaps in knowledge are so large that we can always justify saying: "We shouldn't be in a hurry to dismiss the possibility that life is a purely mechanical process (basically random collisions between chemicals that, looked at from above, seem to form a pattern). There's no need to invoke magic just because we don't understand all the details. Maybe it is precisely in understanding the details that we'll see that everything is perfectly explainable by random collisions of chemicals."