Hoffmann, Peter M. Life’s Ratchet: How Molecular Machines Extract Order from Chaos. New York, New York: Basic Books, 2012.
This book is 288 pages in length (paperback version published by Basic Books). It describes how physics, chemistry, and biology intersect to define what life is: what turns a collection of seemingly inert molecules into the self-propelling molecular machinery that drives what we know as life. At the molecular level, the author notes that the “secret of life’s activity is found at the scale of a nanometer”. Peter Hoffmann is a physicist whose conversion to biology began when, as a doctoral student, he used an atomic force microscope to view deposits of DNA molecules. Years later, as a Wayne State University professor, after inheriting a collaboration with a molecular biologist measuring the “motions of particular molecular machines implicated in the spread of cancer”, he changed course, reinforced his biology knowledge, and started a new research direction on the science of molecular machines (from the Introduction section of the book).
INTRODUCTION: WHAT IS LIFE?
In the introduction to the book, the author gives a preview of what the book aims to accomplish for the reader. The goal of this book is to answer the question to which the book title alludes: What turns this collection of molecules into a life-giving machine? The author’s short and cryptic answer is that the “force that drives life is chaos”. The author will trace the many discoveries of the early philosophers and of the scientists who followed to explain “what it takes to turn a molecule into a machine and many molecular machines into a living cell”.
CHAPTER 1 – THE LIFE FORCE
This chapter traces very early ideas arising from debates about what gives life to living organisms. Starting from the creation of the universe, the author notes that “All life started as a circle dance of molecules billions of years ago.” It then goes on to give a summary of how the discourse on what constitutes life has evolved over the ages, beginning with ideas put forth by Greek philosophers. For hundreds of years, the debate about what constitutes “life” has centered on the concepts of “purpose and mechanism”. The author states that the three most prominently held explanations for life, from the Greek philosophers to the early-19th-century biologists, are embodied in the following belief systems: animism, vitalism, and mechanism/atomism. The practice of medicine gave the first empirical insinuations in this debate, contributing to an increasingly scientific and quantitative knowledge of the workings of the human body and offering some concrete evidence related to the debate of what constitutes life. For example, William Harvey was noted as being one of the first to bring a more quantitative method to understanding the workings of the human body, questioning how the heart could possibly be the source of blood and asking where the blood eventually ends up, by measuring blood volume and pumping rates. Studies and analyses mainly by proponents of the mechanical view helped usher in the scientific revolution, of which Galileo Galilei (1564 – 1642) and Isaac Newton (1642 – 1726) were the major players. Both of these men were atomists and used experiment and reason to investigate nature. This more mechanical and atomistic view emerging during the scientific revolution helped propel the pursuit of instruments (with help from improvements in the understanding of optics) to study the living world, the microscope being one of the most important. The microscope gave the first glimpse of the fundamental unit of life, the cell, though the scientists who first observed cells did not recognize them as such. In his most popular work, L’homme machine, de la Mettrie concluded that organisms function as a result of their physical and mechanical make-up, based on observations of how bodily and mental functions were altered by injury and the observation that muscle tissue can move even without being attached to a living body. Animal heat was also very much on the minds of those studying the basis of life. Before Harvey’s time, Galen had correctly suggested a relationship between food, heat, and respiration. Further experiments by Boyle, Mayow, and Lavoisier contributed solid observations supporting this relationship. The birth of modern biology in the 19th century ushered in new methods and new ideas for investigating the “self-sustaining, self-organizing activity” that is a basic attribute of life. In the field of embryology, the thinkers were divided along the lines of the preformationists and the epigenesists, but this era also gave birth to the teleomechanistic philosophy and to ideas supporting vitalist views. The teleomechanistic views of Kant and Blumenbach led to the development of the cell theory, the idea that all living organisms are composed of fundamental units called cells. In de la Mettrie’s view, however, irritability is evidence of the purely mechanistic basis for life. The study of irritability gained traction using new methods of applying electricity to animal parts. These experiments and observations suggested that electricity could be the vital force propounded by many believers.
The mid-19th century saw the revival of the mechanistic view driven by the work of Charles Darwin (1809 – 1882), who “destroyed teleology”, and Hermann von Helmholtz (1821 – 1894), who “vanquished the vital force”. Helmholtz dismissed any utility of the vital force theory, suggesting instead that energy must be conserved. His experiments showed that “all the hallmarks of being alive, from animal heat to irritability – had to occur within the energy budget prescribed by the physicochemical world”. By the end of the 19th century, the success of these mechanistic studies brought to light the fact that biology, physics, and chemistry must intersect to explain what gives life to organisms, but it did not answer the fundamental issue of “purpose”. Charles Darwin and Alfred Russel Wallace provided an answer when they developed the theory of evolution based on natural selection. In On the Origin of Species, published in 1859, Darwin presented his observations and his arguments for natural selection as the driving force of evolution: variations within individuals of a given species that favor reproductive success lead to progeny, thus survival and propagation. Mendel’s work on genetics and inheritance answered the follow-up question of how those traits leading to survival and propagation are transferred to the offspring: “traits are inherited whole, and that traits from each parent can be combined in various ways in the offspring”. The fact that some of these individual traits from parents can be conserved, and not always blended, was crucial to Darwin’s theory, for only the complete conservation and transfer of a specific trait that favors reproduction, and not its dilution, can lead to the trait’s propagation within the species. In the next chapter, the author will answer the question of what gives rise to these variations.
CHAPTER 2 – CHANCE AND NECESSITY
In the minds of many scientists and philosophers pondering what life is, randomness was not entertained at all as a viable player in the discussion until the end of the 19th century. This chapter starts off with a brief history of the development of statistics and the calculation of probabilities. The author uses Pascal’s triangle as an early example of calculating the number of ways one can choose a certain number of items (k items) out of n available items. This number can also be calculated using the binomial coefficient expression: n!/((n-k)!k!). “Statistics has been called the theory of ignorance”, but it “provides the clues to understanding the underlying regularities or the emergence of new phenomena arising from the interaction of many parts.” The real story started when biologists themselves began to apply statistics in their own studies of organisms, with Galton and Quetelet being the pioneers of these extensive studies. Charles Darwin’s cousin, the mathematician Francis Galton, applied Quetelet’s ideas (“how the error law, the normal distribution, and the central limit theorem governed almost everything”) to a wide range of biological phenomena: heights, masses of organs, and circumferences of limbs, and found that they all follow a normal distribution. It is these statistics that Mendel used to develop his laws of heredity. In the section “Randomness and Life: Three Views”, the author describes the beliefs of the main players on the role of randomness in understanding life: the how and why of the many phenomena of life, and life itself, can be understood given enough understanding of the complexities involved (Thompson); life events happen due to a higher purpose, reconciling science and religion (Chardin); and the beginning of life is an improbable event, but once it happened, evolution and the game of chance took over (Monod). These beliefs can be summarized into two dichotomies: the dichotomy of mechanics (“mere physics”) versus higher forces (life forces, the soul) and the dichotomy of chance and necessity. The physicists found themselves at the intersection of mechanics (physical forces) and necessity. With the development of the kinetic theory, which evolved and broadened into the laws of thermodynamics and was mathematically sharpened into statistical mechanics, they finally found a way to incorporate and tame randomness in their efforts to explain (and predict) the properties of matter based on the existence of atoms: with well-defined probability distributions developed by averaging the random motions of atoms over large numbers. Quantum mechanics came into the picture in the early 20th century, replacing the “iron-clad model of necessity, classical physics” with a “fundamentally statistical picture of nature”. Meanwhile, developments had been occurring in biology. Chromosomes (bundles of DNA) and their duplication during cell division were known by 1882, but it was not until 1900 that a connection was made between chromosomes and Mendel’s hereditary traits. By 1909, genetic experiments were being conducted on the fruit fly Drosophila by Thomas Hunt Morgan (1866 – 1945). Morgan discovered that traits are not independent, as Mendel thought, but are linked, suggesting that “traits were contained in some kind of linear arrangement on chromosomes with nearby traits more likely to be inherited together. The mixing of traits was assigned to a crossing-over of linear molecules.” By 1920, it was clear that hereditary information was contained in the linear arrangement of the chromosomes.
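As a quick check of the counting arithmetic described earlier in this chapter summary (Pascal’s triangle and the binomial coefficient), here is a short sketch of my own, not from the book:

```python
from math import comb  # comb(n, k) = n! / ((n - k)! * k!)

def pascal_row(n):
    """Return row n of Pascal's triangle (row 0 is [1])."""
    row = [1]
    for k in range(n):
        # Each entry follows from the previous one: C(n, k+1) = C(n, k) * (n - k) / (k + 1)
        row.append(row[-1] * (n - k) // (k + 1))
    return row

for n in range(6):
    row = pascal_row(n)
    # Every entry should equal the binomial coefficient n! / ((n - k)! k!)
    assert row == [comb(n, k) for k in range(n + 1)]
    print(f"n={n}: {row}")

# Example: the number of ways to choose 2 items out of 5
print(comb(5, 2))  # 10
```

Each row of the triangle reproduces the binomial coefficients, which is why the triangle and the factorial formula give the same counts.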
In 1926, Hermann Joseph Muller discovered, after years of experimentation using controlled doses of x-rays, that radiation increased the probability of new genetic traits being created due to mutations. This correlation was later refined with the help of Max Delbruck, and they subsequently showed that the mutation rates depend on temperature and x-ray dose. In the words of the author, this sequence of events during the first half of the 20th century illustrates how “previously mysterious biological processes, such as heredity and variation, became connected to measurable physical (molecular) entities”. Erwin Schrodinger got hold of these ideas and expounded on them, not always correctly. Based on Delbruck and others’ data, he surmised that this genetic entity must be a molecule of about a thousand atoms with very stable bonds that can withstand the elevated thermal motions within the cell. To contain a large, complex amount of information, it has to be aperiodic, or non-repetitive. We now know, of course, that the genetic material is an “aperiodic” polymer called DNA. Towards the end of the chapter, the author offers the following statement to reconcile the interplay between necessity, laws of nature, and randomness: “Life can best be understood as a game of chance – played on the chessboard of space and time with rules supplied by physics and mathematics.” These games begin at the level of the atoms.
CHAPTER 3 – THE ENTROPY OF A LATE-NIGHT ROBBER
The author begins this chapter by asking how organized living organisms can emerge from the chaos and random motions of atoms and molecules, and specifically where that threshold is crossed. To answer these questions, the author suggests starting from the fundamental building blocks of matter, atoms and molecules, and looking at how these particles that are in constant motion come together to create ordered objects. In the late 1800s, three scientists focused their research on this: Boltzmann, Maxwell, and Gibbs. The main focus of their study was how gaseous particles constantly in random motion result in macroscopic properties that follow the gas laws. To answer this, they developed statistical mechanics: “applying statistics to the chaos of atoms and molecules, they found that averaged over time and space, the randomness of atomic motion gives way to order and regularity”. Using statistical mechanics, Maxwell and Boltzmann showed that the particle speeds in a gas follow a well-defined distribution (the Maxwell-Boltzmann distribution). Energy distribution is an important aspect of how the individual behavior of atoms and molecules gives rise to macroscopic properties. The behavior of atoms and particles is governed by energy conservation, the strictest law of nature. The kinetic energy of atoms and molecules comes from thermal motion. It was Count Rumford who first arrived at the conclusion that heat is a form of energy, after observing that the mechanical work of boring a cannon out of a metal cylinder was converted into heat. This interrelationship between heat and kinetic energy was the focus of studies done by Maxwell, Boltzmann, and others who developed the kinetic theory, which had to assume the existence of small particles such as atoms that are in constant motion. Evidence for atoms came from Robert Brown’s observation of the random motion of pollen grains, now referred to as Brownian motion; years later, Albert Einstein showed that this random motion was due to the random motion of much smaller particles. Bridging the behavior at this level to observable macroscopic properties is one of the goals of thermodynamics: the science that deals with thermal energy and is the macroscopic “sister science” of statistical mechanics. It is “what emerges when we average the random motions of atoms using tools of statistical mechanics.” How the thermal energy of atoms and molecules can be converted into other forms depends on the distribution of energies among the individual particles. While temperature gives a direct measure of the average kinetic energy, macroscopic properties (or macrostates) cannot tell us much about the energies and speeds of individual atoms (or microstates). To understand why some forms of energy are more usable than others, the concept of entropy is discussed in terms of microstates. A disordered arrangement where particles are randomly positioned is more probable because there are more microstates leading to this macrostate. The author makes the following comparison in terms of the entropy “content” of energy: gravitational energy is low-entropy energy while heat is high-entropy energy.
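The microstate-versus-macrostate argument can be made concrete with a toy model of my own (not the author’s): distribute N gas particles between the left and right halves of a box and count how many microstates lie behind each macrostate “n particles on the left”.

```python
from math import comb, log

K_B = 1.380649e-23  # Boltzmann's constant, J/K

N = 100  # number of particles in the box

# Macrostate: n particles in the left half; microstates: which particular particles they are.
for n in (0, 25, 50):
    W = comb(N, n)           # number of microstates for this macrostate
    S = K_B * log(W) if W > 1 else 0.0  # Boltzmann entropy S = k ln W
    print(f"{n:3d} particles on the left: W = {float(W):.3e} microstates, S = {S:.2e} J/K")

# The "all on one side" macrostate has exactly 1 microstate, while the evenly
# mixed macrostate has ~1e29, so the disordered arrangement is the one observed.
```

The evenly spread macrostate overwhelms the ordered one by an astronomical factor, which is the sense in which disorder is “more probable”.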
Energy is dissipated in processes like friction and impact, which “are great randomizers of energy.” Nature tends toward the dissipation of energy into a less usable form, and low-entropy energy that is “completely organized, concentrated, and tidy is a rather artificial, low-probability situation.” This is reflected in the author’s statement of the second law of thermodynamics: “There can be no process whose only result is to convert high-entropy (randomly distributed) energy into low-entropy (simply distributed or concentrated) energy. Moreover, each time we convert one type of energy into another, we always end up overall with higher-entropy energy. In energy conversions, overall entropy always increases”. Entropy is a measure of the degree to which energy is dispersed (positional entropy ultimately contributes to the entropy of energy because a particle’s energy depends not only on its position but also on other modes of motion irrespective of position). Entropy is not just disorder. An example of this is the difference in entropy between randomly stacked marbles (lower entropy, narrower energy distribution) and orderly stacked marbles (higher entropy, more freedom of motion). Many biological structures that are highly ordered can spontaneously form with an increase in entropy (e.g., assemblies of proteins, cell membrane structures, and fibers) because in the process, disorder is transferred to the water molecules. Thus, the emergence of (highly ordered) life is not a violation of the second law: “life reduces entropy locally while increasing it globally.” The author then goes on in some detail explaining another thermodynamic consideration: free energy. In the author’s words, free energy is the usable energy left over after the dispersed energy associated with entropy is subtracted from a system’s total energy. “In the language of free energy, the second law is restated this way: At constant temperature, a system tends to decrease its free energy until, at equilibrium, free energy has reached a minimum. The second law tells us that useful energy will become degraded, and eventually we will only be left with dispersed, unusable energy.” In the following statement, the author succinctly homes in on how necessity and chance are reconciled in the spontaneous emergence of life: “The concept of free energy captures the tug-of-war between deterministic forces (chemical bonds) and the molecular storm – or in other words, between necessity and chance, in one elegant formula, F = E – TS.”
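Restating the balance in the book’s own symbols (a standard thermodynamic identity, not anything new):

\[
F = E - TS, \qquad \Delta F = \Delta E - T\,\Delta S \;\le\; 0 \quad \text{(spontaneous change at constant temperature)}
\]

A highly ordered assembly can therefore still form spontaneously if bond formation lowers E enough, or if the total entropy S (including that of the surrounding water) increases enough, which is exactly the local-versus-global entropy argument above.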
Where did all this free energy come from that sustains life on earth? “The big bang started out as pure energy and very little entropy” (a singular point). Shortly after the big bang, new particles were formed from this featureless energy (“energy congealing” is the phrase used by the author): quarks, electrons, muons, neutrinos, photons, etc. Chaos was created and entropy increased as free energy was degraded. As the universe cooled down, the first protons and neutrons were formed when three quarks combined, resulting in a release of energy. Protons and neutrons collided and stuck together to form nuclei, further releasing energy, which increased the entropy of the surroundings even as the entropy of the system decreased. As the universe cooled down further and became less dense, no nuclei with a greater mass than hydrogen and helium could form. Gravitational forces then exerted their influence as denser regions of the ensuing universe attracted more atoms. As nebulae grew into giant systems, they started to collapse under their own weight; the cores became dense and hot enough to initiate nuclear fusion and create heavier nuclei all the way up to iron. “Hydrogen and helium were cooked into heavier elements, and stars were born.” Energy in the form of heat from nuclear fusion reaches the earth as free energy that atoms and molecules can absorb. In a nutshell: “As free energy is dissipated, and the entropy of the universe increased, new structures are born, from quarks to nuclei to atoms to…life.” Toward the end of the chapter, the author concludes with the profound statement that living systems are open, tightly controlled, dissipative, near-equilibrium complex systems. Living organisms are open systems because they allow the flow of energy, and dissipative because they consume highly organized, low-entropy energy and produce highly dispersed, high-entropy energy. They are efficient users of energy because their processes are always near equilibrium and do not involve large fluctuations in energy flow (at equilibrium, organisms are dead). Living organisms carry the necessary structures and tools to push thermodynamics to its limit, and these structures “operate at the nanometer scale, the tiny scale of molecules. But what is so special about this scale that chaos can become structure, and noise can become directed motion?” That is the question for the next chapter.
CHAPTER 4 – ON A VERY SMALL SCALE
At the beginning of the chapter, the author mentions attending a biophysics research conference in 2011 and seeing motility assays: “attaching proteins called myosins to a surface and then seeding fibrous proteins, called actins, on top of the myosins.” Fluorophores are attached to the molecules for visualization. In the video, the actin molecules are being moved around by the myosin molecules, passed from one to the next much like crowd-surfing at a rock concert, as the author put it, fueled by ATP molecules. The role of nanoscience is the focus of this chapter. As the author claims toward the end of the last chapter, “life’s engines operate at the nanometer scale”. This is the scale where we expect the chaos of atoms and molecules to become structures with functions and to gain the ability to propel themselves. “Biophysics is nanophysics.” The author defines nanoscience as “the production, measurement, and understanding of systems where at least one spatial dimension is in the nanometer range”. The author notes that credit has usually been given to Richard Feynman for jumpstarting the nanoscience revolution. His many predictions pertaining to the construction of nanostructures for specific purposes have come true. The coming of age of nanotechnology was much helped by the invention of the scanning probe microscopes (SPM): the scanning tunneling microscope and the atomic force microscope. Some of the challenges involved in studying at the nanoscale include graininess and interfaces and the greater prominence of viscosity and stickiness between particles. At that small scale, there are profound changes in behavior depending on size: as objects get smaller, the surface-to-volume ratio increases (for a sphere, 4πr²/((4/3)πr³) = 3/r). At this scale, surface forces dominate; at the macroscale, mass forces such as gravity and inertia dominate. At the biological small scale, stickiness is controlled by balancing forces between the molecules and the salty water. Some special properties emerge at the nanoscale: quantum mechanical effects, the importance of thermal noise and entropy, cooperative dynamics, large ranges of relevant time scales, and the convergence of energy scales. Quantum mechanical effects are fairly trivial at the nanoscale in biological systems, as these effects are dwarfed by thermal motion; according to the author, essentially all molecular biology can be explained by classical physics (except chemical bonding). The ability to self-assemble is an important attribute of life and is a capability that nanoscale structures possess. For self-assembly to take place, attractive forces must be present and particles must have a way to come together, energy-wise and path-wise. Self-assembly depends on chance and necessity, entropy and energy, combining to create stable, “robust” structures. It can be induced by controlling other conditions: using oddly shaped molecules that can direct how the structure is formed, applying non-equilibrium conditions (such as electric fields, pressure, or a liquid flow), or letting entropy preside over the assembly. Emulsifiers can be used to provide a glue allowing particles that normally would not attract each other to mix. They help stabilize mixtures of polar and nonpolar substances, for instance by forming micelles, spherical cages in which the oily, nonpolar molecules are trapped to keep them mixed with the more polar component. The free energy of the resulting structure is much reduced, and thermal motions are too weak to break the micelles apart.
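The surface-to-volume scaling noted above is easy to check numerically; the sketch below (my own round-number illustration, not from the book) compares a marble, a bacterium, and a protein-sized sphere.

```python
def surface_to_volume(radius_m):
    """Surface-to-volume ratio of a sphere: 4*pi*r**2 / ((4/3)*pi*r**3) = 3/r."""
    return 3.0 / radius_m

for name, r in [("1 cm marble", 1e-2), ("1 micron bacterium", 1e-6), ("10 nm protein", 1e-8)]:
    print(f"{name:20s} radius = {r:.0e} m -> S/V = {surface_to_volume(r):.1e} per meter")

# The ratio grows as 3/r, so shrinking from centimeters to nanometers increases
# the relative amount of surface by about a factor of a million, which is why
# surface (adhesion) forces dominate over gravity and inertia at the nanoscale.
```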
The formation of micelles is an example of cooperative dynamics, or cooperativity, requiring an optimal number of molecules to come together to form a stable structure. The properties of the solution may change when micelles spontaneously form at the critical concentration, e.g., a reduction in osmotic pressure as the number of dissolved particles decreases upon aggregation into micelles. Micelle formation usually occurs as a sudden event. While it may sound counterintuitive, the random motions of particles are actually a prerequisite for the assembly of stable structures. These parts and pieces need to be shuffled around and collide with the correct piece for the free-energy-minimizing self-assembly to take place. Inside cells, minimization of free energy can take place by maximizing entropy, even with the appearance of more highly ordered structures. The driving force for these processes is called an entropic force. For instance, even though protein assembly leads to higher order and lower entropy, the gain in entropy from making more space available to the aqueous contents of the cell more than makes up for the loss. Thus, overall the entropy still increases, and the assembly of more organized and larger structures is driven by this entropic force. In the synthesis of proteins, DNA only encodes the sequence of amino acids, not the instructions for how the polymer is to fold. The folding process is governed by both chemical and physical forces and by the external conditions of pH, temperature, ion concentration, etc. In all this, random motions provide the means by which the lowest-energy shape is found. The functional structures of these complex macromolecules are stabilized mostly by relatively weak interactions (hydrophobic forces and hydrogen bonds) but also by relatively strong bonds such as salt bridges and disulfide bonds. In gluing atoms together, there must be a balance between the stability provided by strong bonds and the flexibility provided by relatively weaker bonds. Bonds “cooperate” to provide this balance to macromolecules. Bond and molecular cooperativity is critical for cell function. A process that is facilitated by cooperativity is a sudden change in molecular shape triggered by a small external cause, which allows these molecules to act as molecular switches. Molecular switches are “molecules that can effect large changes in response to small causes, such as the binding of a small molecule”. In addition to thermal motion, entropic forces, and cooperativity, another key property of nanoscale systems is their ability to act as molecular machines, which are energy conversion devices (e.g., myosin is a molecular machine that converts the chemical energy from ATP into kinetic energy). Nanoscale structures have the special property that many types of energy are of roughly the same magnitude, and these energies are of the same order of magnitude as the thermal energy (the “molecular storm”) at room (or body) temperature. This means that nanoscale structures can undergo tremendous fluctuations as they absorb thermal energy and convert it to other forms. At the end of the chapter, the author provides a good summary of what makes nanostructures special in terms of energy conversion and their capacity for spontaneous change and motion with just a little push from the thermal energy they are immersed in: “Thus, the nanoscale is truly special.
Only at the nanoscale is the thermal energy of the right magnitude to allow the formation of complex molecular structures and assist the spontaneous transformation of different energy forms (mechanical, electrical, chemical) into one another. Moreover, the conjunction of energy scales allows for the self-assembly, adaptability, and spontaneous motion needed to make a living being. The nanoscale is the only scale at which machines can work completely autonomously. To jump into action, nanoscale machines just need a little push. And this push is provided by thermal energy of the molecular storm.”
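To put the “convergence of energy scales” into numbers, here is a rough comparison of my own (typical order-of-magnitude literature values, not figures quoted from the book) between thermal energy at body temperature and a few energies relevant to molecular machines:

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # body temperature, K

kT = K_B * T         # thermal energy scale, roughly 4.3e-21 J (about 4.3 pN*nm)
print(f"kT at {T:.0f} K = {kT:.2e} J = {kT * 1e21:.1f} pN*nm")

# Rough literature values, for order-of-magnitude comparison only:
energies_J = {
    "hydrogen bond":          8e-21,   # a few kT
    "ATP hydrolysis (cell)":  8e-20,   # ~20 kT
    "covalent C-C bond":      6e-19,   # ~140 kT, essentially unbreakable by the storm
}
for name, E in energies_J.items():
    print(f"{name:25s} ~ {E:.1e} J = {E / kT:5.1f} kT")
```

Weak bonds sit within a few kT of the molecular storm, which is exactly why they can be made and broken by thermal fluctuations while covalent backbones stay intact.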
CHAPTER 5 – MAXWELL’S DEMON AND FEYNMAN’S RATCHET
The second law of thermodynamics was developed to explain the limitations of machines, steam engines in particular, for which most of the heat generated from burning coal is wasted and not used for mechanical work. For example, a modern internal combustion engine is only about 20% efficient. This is because engines and other machines use gradients (e.g., temperature, pressure) to convert chemical energy from fuel into mechanical energy for motion. The efficiency of a machine is thus dependent on the size of this gradient: the larger the gradient, the higher the efficiency, but it can never reach 100%. The efficiency goes to zero when the engine’s temperature reaches that of the surroundings. As a practical implication, “the second law of thermodynamics allows us to extract work from gradients, at the cost of creating waste heat and the leveling of the gradient. The result is equilibrium – a state of uniform temperature and pressure, a state from which no further work can be extracted.” In this chapter, the author discusses two possible answers to the question of how the molecular machines of living organisms can seemingly extract order (organized motion, or work) from random, disordered motions (thermal energy, or heat) in the uniform-temperature bath of cells. Maxwell’s demon is a hypothetical scenario that Maxwell used to illustrate the statistical nature of the second law of thermodynamics. In large-number (macroscopic) systems, the low-probability event of organized motion arising from random motions is virtually undetectable, because such a rare, unlikely event can only occur with a small number of particles and so cannot manifest itself in macroscopic measurements. For small-scale (e.g., nanoscale) systems, however, the energy-adding effect (instead of the scavenging effect) of low-probability events – the random motions of surrounding atoms actually helping a non-random (organized) motion instead of resisting it (friction) – is measurable enough to imply an occasional violation of the second law, as confirmed by the RNA pulling and loop-closing measurements of Bustamante and Liphardt described below. Thus, organized motion (work!), albeit very small and rare, can be extracted from otherwise “senseless motions of molecules” (heat!). But one cannot build a machine out of these small systems, because these favorable instances are not repeatable, and therefore molecular machines “cannot repeatedly extract energy from a uniform heat bath”. To make a small machine perform repeated motions, a reset step is needed, as the machine needs to be returned to its original state before it can begin a new cycle. And it is this reset step that leads to an inevitable increase in entropy. “The second part of the answer to how irreversibility can emerge from the reversible mechanics of particles is that the system has to be large enough – must contain enough molecules – so that collisions always mix things up.” “The second law tells us any directed motion of a system will always encounter the resistance of friction.
Friction is the result of many randomly moving molecules scavenging energy away from any non-random motion.” When you roll a macroscopic ball down an incline, its kinetic energy at the bottom is always less than its starting potential energy, because collisions against the plane (friction) cause energy to be scavenged away. However, when a nanoscale ball is rolled down the same plane, sometimes the kinetic energy of the nanoscale ball at the bottom might actually be greater than the initial potential energy, because the randomly moving atoms of the surroundings did not cause any friction but actually pushed the ball in the direction it was rolling. This is because, for very small systems like the nanoscale, these low-probability events have a larger, more noticeable effect than in macroscopic systems. Thus, “when systems are small enough, there is a finite probability, though rare, that the atomic chaos surrounding the system actually adds energy to the system, rather than stealing the energy”. Experiments by Bustamante and Liphardt at UC Berkeley on RNA loop-closing energies have confirmed this. When the RNA was pulled at high speeds, the measured energy difference was higher (friction takes away energy in addition to the actual amount lost) than when measured at slower pulling speeds, where the amount scavenged by random motions of surrounding atoms was smaller as well. Sometimes, the energy difference measured was lower than the minimum energy required to open the loop, implying the rare case where random motions of surrounding atoms actually helped open the loop instead of resisting it, thus violating the second law. This confirms what is known as Jarzynski’s formula: “Nanoscale systems sometimes violate the second law of thermodynamics. At the molecular scale, entropy can sometimes spontaneously decrease (although, strictly speaking entropy is not defined at this scale). When that happens, it is as if time has reversed.” Going back to the idea of how work can be done in biological systems if we have a constant-temperature environment, could the observations on these nanosized molecules explain how the body can generate order and extract energy from the uniform heat bath of living organisms? No, the occasional instance when energy to do work can be extracted from the random motions of surrounding particles cannot explain how order can arise from chaos in living organisms. These low-probability instances are not repeatable enough to sustain life, even though they act on small particles. Overall, living organisms generate waste energy; the efficiency of the human body is only about 20%. Another possibility is a molecular device (a ratchet) that allows motion in only one direction and can, theoretically, create organized motion (work!) from the otherwise random (disorganized) thermal motions (heat!) of particles. See Figure 5.2. Feynman’s calculations, however, showed that such a ratchet would simply bob back and forth with equal probabilities (an equilibrium state), so it cannot be used to account for the seemingly second-law-violating assumption that living organisms can extract organized motion to do work from random thermal motions (heat) in the uniform heat bath of a living organism. The failure of Feynman’s ratchet and the low probability of thermal motions being converted to organized motion show that the second law still rules large-number systems, “a powerful illustration of the second law of thermodynamics” (it is a statistical law!). “Work cannot be repeatedly extracted from an isolated reservoir at uniform temperature.
If it were possible to make machines that could do this, our energy problems would be solved: Such machines would convert heat in our environment back into ordered mechanical energy.” So the question still stands: what allows living organisms to be able to extract organized motion (work!) from random, thermal motions (heat!)?
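For reference, Jarzynski’s formula mentioned above relates the fluctuating work W measured in repeated non-equilibrium pulls to the equilibrium free-energy difference ΔF (this is the standard statement; the notation is mine, not the book’s):

\[
\left\langle e^{-W/k_{B}T} \right\rangle = e^{-\Delta F/k_{B}T}, \qquad \text{which implies } \langle W \rangle \ge \Delta F .
\]

Because the exponential average is dominated by rare trajectories with W < ΔF, individual pulls can occasionally “beat” the second law – the rare, unrepeatable events seen in the Bustamante–Liphardt experiments – while the average over many pulls still obeys the second law.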
CHAPTER 6 – THE MYSTERY OF LIFE
The two possible answers to how order emerges from thermal chaos discussed in the previous chapter were invalidated by the second law of thermodynamics: “The second law is an inescapable (macroscopic) consequence of the randomizing power of the inescapable (microscopic) molecular storm”. In this chapter, the author describes two ways by which free energy from an external source (metabolized food), stored in ATP molecules, can, with the help of random thermal motion, propel molecules forward via two motor mechanisms (tightly coupled and loosely coupled). But first, he gives an example of how “motion” is created through the binding of molecules. Cells have to respond to their environment and be able to make decisions. One of the ways they do this is by the action of enzymes, whose activity and shape are altered by the binding of molecules. In allosteric control, a control molecule binds at a site different from the substrate binding site. Upon binding of the control molecule to this allosteric site, the shape of the binding site may change to either encourage substrate binding (positive feedback) or prohibit substrate binding (negative feedback). The binding of a substrate or a control molecule that results in a shape change can be viewed as a form of motion. However, without a way to “bias” the motion, there is no net forward movement. To support continuous forward motion, an irreversible reset step that cancels the backward motion in a reversible process (powered by random thermal motion) is required. Irreversible steps use free energy (low-entropy; food or sunlight) that is then degraded into heat, which cannot be reused as usable free energy. The free energy derived from metabolized food is stored in ATP (adenosine triphosphate) molecules. When a phosphate group detaches, this energy is transferred to an enzyme (or other molecule) in the form of vibrational energy. The ADP then goes back to the mitochondrion (the “cell’s recharging station”) to recharge from the breakdown of sugars as a phosphate group is reattached to reform ATP. In an example described by the author (see Figure 6.9), the forward motion of a kinesin molecule along a microtubule is propelled by a combination of allosteric binding of ATP to one “foot”, causing a shape change that allows it to clamp down on the microtubule with a forward tilt of the leg, while the other, dangling foot is eventually moved forward (due to the tilt bias) by the release of energy from an ATP. The two feet exchange roles, and the cycle repeats as long as ATP molecules are present. This is an example of a movement process controlled by a tightly coupled motor rather than a loosely coupled one. A loosely coupled process derives its motion partly from random motions (diffusion) and partly from force-directed motion (drift). For example, a molecule can attach and be subject to forces (drift), then detach and be subject to random motions. The actual stepping is due to random motions, and the asymmetric energy contour (dips, rises, and flats) in which the molecule is immersed determines the most probable direction of motion. The process of detaching requires free energy and its degradation to heat. As the author states, “it can be shown that any molecular machine that operates on an asymmetric energy landscape and incorporates an irreversible, energy-degrading step can extract useful work from the molecular storm”.
But because diffusion occurs far more often than drift, loosely coupled motors are not as efficient in moving forward as tightly coupled machines.
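The “asymmetric energy landscape plus an irreversible, energy-degrading step” idea can be illustrated with a minimal flashing-ratchet toy model (a generic physics sketch of my own, not the author’s code or the book’s figures): a particle diffusing in a sawtooth potential that is periodically switched on and off drifts in one direction, while the same particle with the potential always on merely jitters in place.

```python
import math
import random

K_B_T = 1.0        # thermal energy (natural units)
GAMMA = 1.0        # drag coefficient
DT = 1e-4          # time step
L = 1.0            # period of the sawtooth potential
A = 0.2 * L        # asymmetry: steep rise over 0..A, gentle fall over A..L
U0 = 10.0 * K_B_T  # barrier height, well above kT, so the "on" potential traps the particle

def force(x, potential_on):
    """Force from an asymmetric sawtooth potential with period L (zero when switched off)."""
    if not potential_on:
        return 0.0
    xp = x % L
    return -U0 / A if xp < A else U0 / (L - A)

def simulate(flashing, steps=500_000, switch_every=1_000):
    """Overdamped Langevin dynamics; returns the particle's net displacement in periods."""
    x, on = 0.0, True
    noise = math.sqrt(2.0 * K_B_T * DT / GAMMA)
    for i in range(steps):
        if flashing and i % switch_every == 0:
            on = not on  # switching the potential: the irreversible, free-energy-consuming step
        x += force(x, on) / GAMMA * DT + noise * random.gauss(0.0, 1.0)
    return x / L

random.seed(0)
print(f"flashing ratchet, net displacement: {simulate(True):.1f} periods")
print(f"potential always on, net displacement: {simulate(False):.1f} periods")
```

With the switching disabled the particle stays trapped near one minimum (detailed balance, no net drift); with switching, the asymmetry of the landscape biases where the diffusing particle is recaptured, giving steady forward motion at the cost of the free energy spent on each switch.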
READING NOTES
INTRODUCTION: WHAT IS LIFE?
What turns this collection of molecules into a life-giving machine? The author believes that the “force that drives life is chaos”.
“The fundamental goal of this book is to follow the discoveries of [these] scientists and to find out what it takes to turn a molecule into a machine; and many molecular machines into a living cell.”
D’Arcy Wentworth Thompson (1917), a British biologist and mathematician, contended that “the structure of the living organism was the necessary result of mathematics and physics”.
Life is driven by the interaction between chance and necessity: “As we enter the microscopic world of life’s molecules, we find that chaos, randomness, chance, and noise are our allies. Without the shaking and rattling of the atoms, life’s molecules would be frozen in place, unable to move. Yet, if there were only chaos, there would be no direction, no purpose, to all of this shaking. To make this molecular storm a useful force for life, it needs to be harnessed and tamed by physical laws and sophisticated structures – it must be tamed by molecular machines.”
The chaos of the “random motions of the atoms in our bodies” is an “afterglow of the creation of the universe, big bang. The big bang created a universe full of energy, and, eventually, it created stars like our sun. With the sun as intermediary, the energy of the big bang shakes the atoms of our cells – making life on Earth possible.”
By the end of this book, whether the reader likes it or not, the author will have made a complete argument for chaos as the life force, “tempered by physical law”.
CHAPTER 1 – THE LIFE FORCE
· “All life started as a circle dance of molecules billions of years ago.”
· The author states that humans and other living beings are not “sources of energy” but “consumers of energy”. In large part that is true, but manual labor has been used for a long time to modify our environment.
· This chapter discusses how the discourse on what constitutes life has evolved over the ages beginning from ideas put forth by Greek philosophers. For hundreds of years, the debate about what constitutes “life” has centered on the concepts of “purpose and mechanism”. The author summarized the three most prominently held explanations for life, starting from the Greek philosophers to the early-19th century biologists:
o Animism: “assumes an overarching universal principle that determines the purpose of the entire universe”; “erased the clear distinction of the inanimate and alive”
o Vitalism: “assumes a special life force that distinguishes life from matter, thus reserving purpose for life alone; gratuitously introduced an unseen force and raised the additional question of how this force interacted with the body”
o Mechanism and atomism: “denies purpose altogether”; “seemed impotent to account for those of life’s activities that seemed to show clear purposefulness, such as growth and reproduction”
· It was Epicurus who addressed the seemingly fatalistic [my word] character of atomism – its denial of purpose altogether – and believed that an atomistic explanation “needed a mixture of necessity and chance”.
· In “Medicine and Magic”, the author gives an account of how medicine, “the practical science of life”, has contributed to an increasingly scientific and quantitative knowledge of the workings of the human body that offered some concrete evidence related to the debate of what constitutes life. “Originally based on magic and faith-healing, medicine was put on a more rational footing by Hippocrates and other Hippocratic thinkers around the time of Aristotle”.
· Some of the other players who brought the observations of medicine into the debate include Galen, who believed that a “pneuma” or “life spirit”, inhaled into the lungs, mixed with blood in the heart to produce “vital spirits” responsible for movement, generating heat when it mixed with the air. There was also Paracelsus, whose belief in alchemy introduced concepts of chemistry to the debate.
· “Harvey’s mathematical reasoning had an enormous impact on the subsequent history of the life sciences: Life, like the rest of nature, could yield to quantitative analysis and, with it, careful experimentation”. William Harvey re-introduced and championed a more mechanical philosophy into the discussion of what drives life. He carried out a more quantitative method of understanding the workings of the human body questioning how the heart could possibly be the source of blood and where it eventually ends up, having calculated by multiplying the volume of the heart and the pumping rate that about 540 pounds of blood must be produced and must end up somewhere.
· Descartes also contributed ideas that supported the mechanical view of life: “The mind or spirit was to be the realm of the soul and the divine, while the body was pure machine”.
· This was the level and flavor of discourse that helped usher in the Scientific Revolution: “The mechanical worldview, the revival of atomism, and the combination of rational examination and experiment were the foundation for one of the most influential periods in the history of science, the scientific revolution, which lasted from the late sixteenth to the eighteenth century.”
· Galileo Galilei (1564 – 1642) and Isaac Newton (1642 – 1726) were the major players of this period. Both of these men were atomists and used experiment and reason to investigate nature. For instance, Newton believed that matter is composed of small hard particles.
· This more mechanical and atomistic view emerging during the scientific revolution helped propel the pursuit of instruments (with help from improvements in the understanding of optics) to study the living world, the microscope being one of the most important. It was then that Robert Hooke (1635 – 1703) and later Antonie Philips van Leeuwenhoek (1632 – 1723) first visualized cells, but neither understood that they were looking at the fundamental unit of all living things. Hooke’s and others’ further observations further convinced them of the mechanical nature of living things.
· Another medical doctor and philosopher, Julien Offray de la Mettrie, contributed to the discourse based on his experiences with the injured as a medical officer to the French Guards. In his most popular work, L’homme machine, de la Mettrie concluded that organisms function as a result of their physical and mechanical make-up, based on two observations:
o “the functions of the body and mind could be greatly altered by physical influences and therefore could not be independent of them”
o “living tissue, such as muscle, could move on its own, even when removed from the body”
· On the question of “Animal Heat”, the first commonly held belief, held even by Harvey himself, was that the heart was the source of bodily heat. Before Harvey’s time, Galen had correctly suggested a relationship between food and heat and had observed that both fire and life die in the absence of air, deducing that discovering why flames are extinguished in the absence of air may lend some answers to how the heat in animals relies on respiration.
· This belief was challenged by observations of animals that remain alive even though they are cold, e.g., frogs, by van Helmont. Van Helmont put forth the idea that heat was a product of chemical processes in the body and not its driver.
· Robert Boyle (1627 – 1691), John Mayow (1641 – 1679), and Robert Hooke showed that air is somehow involved with both fire and respiration and their production of heat.
· Lavoisier, the father of modern chemistry, thinking that heat or fire is an element, carried out studies on heat in a complicated experiment done at the peak of winter to prevent the incursion of heat from the surroundings. He and Laplace found that “breathing and combustion generated roughly the same amount of heat for the same amount of carbon dioxide released”. It wasn’t until around the turn of the 19th century that Benjamin Thompson (Count Rumford) correctly showed that heat is a form of energy.
· The birth of modern biology in the 19th century ushered in new methods and new ideas for investigating the “self-sustaining, self-organizing activity” that is a basic attribute of life. In the field of embryology, the thinkers were divided along the lines of preformationists and the epigenesists:
· “Preformation was the idea that every living being had to be preformed in the egg or sperm.”
· “Epigenesists claimed that unformed matter was shaped into a complex living being during embryonic development.”
· Teleomechanistic and vitalist views (late 18th to mid-19th century): Immanuel Kant and biologist Johann Blumenbach came up with teleomechanism as a new way to combine ideas from the mechanical and the vitalistic points of view, espousing the idea that a vital force, rather than being separate from the organism, is a “result of its special organization and structure”. According to the author, this “view of self-contained special forces in organically organized bodies helped shape biology into an autonomous science”. These vital forces were identified by German biologist Kielmeyer as “sensibility, irritability, reproduction, secretion, and propulsion”. The teleomechanistic views of Kant and Blumenbach led to the development of the cell theory, the idea that all living organisms are composed of fundamental units called cells.
· In de la Mettrie’s view, however, irritability is evidence of the purely mechanistic basis for life. The study of irritability gained traction using new methods of applying electricity to animal parts (Galvani and his severed frog legs experiment was noted by the author as the iconic image of this era). These experiments and observations suggested that electricity could be the vital force propounded by many believers.
· Mid-19th century revival of the mechanistic view driven by the work of Charles Darwin (1809 – 1882) who “destroyed teleology” and Hermann von Helmholtz (1821 – 1894) who “vanquished the vital force”.
· Before Helmholtz, there was Robert Mayer, who put forth ideas on the conservation of energy based on biological observations, noting that energy and matter can only be converted from one form to another but ‘creation of either one or the other never takes place’.
· Helmholtz dismissed any utility of the vital force theory, stating that its presence implies the possibility of a perpetual-motion machine that can generate energy from nothing. Instead, he suggested that energy must be conserved, beginning with the assumption that “matter was made of pointlike particles, interacting through forces depending only on the distance between the particles”. Helmholtz’s experiments showed that “all the hallmarks of being alive, from animal heat to irritability – had to occur within the energy budget prescribed by the physicochemical world”. These experiments:
o Showed that muscle movement is a result of a chemical process, by comparing the chemical extracts from frog legs irritated by electricity and from those not subjected to electricity (some chemicals in the muscles irritated by electricity were transformed from water-soluble to ethanol-soluble substances).
o Showed that the difference in energy between the latent heat of consumed food and the latent heat of excrement correlated with the amount of animal heat. This experiment was also able to explain a 10% higher energy expenditure than could be accounted for by the amount of oxygen used in the oxidation from respiration, noting that food already contains some oxygen. “If this additional oxygen was included, food energy perfectly matched animal heat plus energy of the excrements, and no vital force was needed.”
o Showed that the chemical energy was used to move muscles by showing that only electricity applied to the entire frog leg resulted in a temperature change (compared to electricity applied to only the frog leg without the nerve and just the nerve itself; it was believed that the nervous system supplied the vital force). To measure these very small temperature changes associated with muscle movement, Helmholtz built a thermocouple sensitive enough to measure down to 1/1000 of a degree.
· By the end of the 19th century, biology had returned to mechanism, with “all biological processes occurring within the framework of chemistry and physics”.
· Darwin and Mendel: From Chance to Purpose:
· The success of these mechanistic studies brought to light the fact that biology, physics, and chemistry must intersect to explain what gives life to organisms, but it did not answer the fundamental issue of “purpose”:
· “By the end of the 1850’s, nobody could deny that to explain life’s processes, physical, chemical, and mechanical forces had to be invoked. Yet mechanics seemed woefully insufficient to explain the extraordinary complexity and purposefulness of life…How can complexity emerge from chaos?”
· Charles Darwin and Alfred Russel Wallace provided an answer when they developed the theory of evolution based on natural selection. In On the Origin of Species, published in 1859, Darwin presented his observations and his arguments for natural selection as the driving force of evolution: variations within individuals of a given species that favor reproductive success lead to progeny, thus survival and propagation.
· Mendel’s work on genetics and inheritance answered the follow-up question of how those traits leading to survival and propagation are transferred to the offspring: “traits are inherited whole, and that traits from each parent can be combined in various ways in the offspring”. The fact that some of these individual traits from parents can be conserved, and not always blended, was crucial to Darwin’s theory: “Only traits that could be passed on whole to the next generation could spread through a population and explain the emergence of a new species. If traits were blending, any new traits would soon be blended back into mediocrity”.
· The next issue to resolve is the question of what gives rise to these variations.
CHAPTER 2 – CHANCE AND NECESSITY
· “Until the end of the nineteenth century, everybody believed that randomness had no place in any explanation of the world…The consensus was that if something happened by chance, it only seemed that way because of our ignorance of all the circumstances.”
· Pascal’s triangle can be used to determine how many ways you can choose a certain number of items (k items) out of n available items. This number can also be calculated using the binomial coefficient expression: n!/((n-k)!k!)
· “Statistics has been called the theory of ignorance.” “Statistics provides the clues to understanding the underlying regularities or the emergence of new phenomena arising from the interaction of many parts.”
· Laplace’s central limit theorem: “any measurement that depends on a number of random influences tends to have errors that follow the normal distribution”. (A short numerical illustration of this appears after this list.)
· Charles Darwin’s cousin, the mathematician Francis Galton, applied Quetelet’s ideas (“how the error law, the normal distribution, and the central limit theorem governed almost everything”) to a wide range of biological phenomena: heights, masses of organs, and circumferences of limbs and found that they all follow a normal distribution.
· Galton discovered “regression toward the mean”, wherein the offspring of an outlier parent (e.g., in height) take on a characteristic that brings their combined numbers closer to the mean or the average (an extremely tall father tends to have shorter offspring).
· Galton also discovered the coefficient of correlation, a statistical measure of how two different variables are statistically linked or correlated. “A correlation is a hint of connection, not a proof.”
· Mendel used statistics to develop the laws of heredity.
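As a small illustration of the central limit theorem and “error law” noted above (my own sketch, not from the book): sums of many small, independent random influences come out approximately normally distributed, even when each influence is uniform rather than bell-shaped.

```python
import random
from statistics import mean, stdev

random.seed(1)

def measurement(n_influences=100):
    """One 'measurement' built from many small, independent random influences."""
    return sum(random.uniform(-1.0, 1.0) for _ in range(n_influences))

samples = [measurement() for _ in range(10_000)]
mu, sigma = mean(samples), stdev(samples)

# Crude check of normality: fraction of samples within 1 and 2 standard deviations
within1 = sum(abs(s - mu) < sigma for s in samples) / len(samples)
within2 = sum(abs(s - mu) < 2 * sigma for s in samples) / len(samples)
print(f"mean = {mu:.2f}, std = {sigma:.2f}")
print(f"within 1 sigma: {within1:.1%} (normal: ~68%), within 2 sigma: {within2:.1%} (normal: ~95%)")
```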
Randomness and Life: Three Views
· Randomness: Quetelet’s and Galton’s work in biology → existence of atoms, statistical and quantum mechanics → molecular evolution and the role of mutation
· D’Arcy Wentworth Thompson (1860 – 1948)
o “Cell and tissue, shell and bone, leaf and flower, are so many portions of matter, and it is in obedience to the laws of physics that their particles have been moved, molded, and conformed.”
o Thompson believed that the why and how of many phenomena of life and life itself can be explained given enough understanding of the complexities involved. “Invoking chance, God, or any extraneous life principle when met with ignorance was a cheap trick…to keep us from doing the hard work of finding the true causes”.
· Pierre Teilhard de Chardin (1881 – 1955)
o Believed that life events happen due to a higher purpose and worked to reconcile science and religion.
o Saw evolution as a process toward a more and more complex and sophisticated form of life akin to God, embracing evolution but still relying on the existence of a higher purpose.
o Saw the connection of physics and biology in the cell
· Jacques Monod (1910 – 1976)
o The main premise of his belief is that the beginning of life is an improbable event but once it happened, evolution took over.
o Placed a high importance on the role of chance and randomness.
· The two dichotomies of how life may have come about: (see Figure 2.2)
o Dichotomy of mechanics (“mere physics”) versus higher forces (life forces, the soul)
o Dichotomy of chance and necessity
· Kinetic Theory → Laws of Thermodynamics → Statistical Mechanics → Quantum Mechanics
· The physicists lie within the intersection of mechanics (physical forces) and necessity, but with the development of the kinetic theory, which evolved and broadened into the Laws of Thermodynamics, they finally found a way to incorporate and tame randomness into their efforts to explain (and predict) the properties of matter based on the existence of atoms. The Laws of Thermodynamics became Statistical Mechanics when physicists “tamed randomness” with well-defined probability distributions developed by averaging the random motions of atoms over large numbers.
· “Statistical mechanics is the science of averaging large numbers of randomly moving molecules to arrive at precise macroscopic laws.”
· Quantum mechanics came into the picture in the early 20th century, replacing the “iron-clad model of necessity, classical physics” with a “fundamentally statistical picture of nature”.
· Meanwhile in Biology:
· Chromosomes (bundles of DNA) and their duplication during cell division were known by 1882 but it was not until 1900 that a connection was made between chromosomes and Mendel’s hereditary traits.
· By 1909, genetic experiments were being conducted on fruit fly drosophila by Thomas Hunt Morgan (1866 – 1945). Morgan discovered that traits are not independent as Mendel thought but are linked suggesting that “traits were contained in some kind of linear arrangement on chromosomes with nearby traits more likely to be inherited together. The mixing of traits was assigned to a crossing-over of linear molecules.”
· By 1920, it was clear that hereditary information was contained in the linear arrangement of the chromosomes. In 1926, Hermann Joseph Muller discovered, after years of experimentation using controlled doses of x-rays, that radiation increased the probability of new genetic traits being created due to mutations. This correlation was later refined with the help of Max Delbruck, and they subsequently showed that mutation rates depend on temperature and x-ray dose.
· In the words of the author, this sequence of events during the first half of the 20th century illustrates how “previously mysterious biological processes, such as heredity and variation, became connected to measurable physical (molecular) entities”. (Helmholtz’s contribution was restrictive; it “subtracted vital forces from the list of possibilities”.)
· Erwin Schrodinger’s “What is Life” book: Schrodinger got a hold of the green book published by Delbruck et al and made some deductions:
· If x-rays were strong enough to ionize one atom out of a thousand, mutations would occur with near certainty – from this Schrodinger assumed the relevant entity must be about one thousand atoms large and therefore roughly 3 nm on a side. (This did not take into account, something likely unknown at the time, that it is radicals that cause bond breaking, and radicals can travel farther than 3 nm.)
· Genes must be molecules due to their stability despite the elevated conditions in the body, molecules that are able to withstand the thermal motions within the cell. To hold a large amount of information, this “crystal” had to be aperiodic or non-repetitive.
· (We now know of course that the genetic material is an “aperiodic” polymer called DNA).
· To reconcile the interplay between necessity, laws of nature, and randomness, the author offers the following statement as a preview of the next chapter: “Life can best be understood as a game of chance – played on the chessboard of space and time with rules supplied by physics and mathematics.” These games begin at the level of the atoms.
CHAPTER 3 – THE ENTROPY OF A LATE-NIGHT ROBBER
· The author starts off this chapter by asking the following questions:
o How do atoms and molecules assemble into a flower and a human?
o Where do we cross the threshold from lifeless atoms and molecules to living organisms?
· To answer these questions, the author suggests starting from the fundamental building blocks of matter, atoms and molecules, and look at how these particles that are in constant motion come together to create ordered objects.
· In the late 1800s, three scientists focused their research on this: Boltzmann, Maxwell, and Gibbs. The main focus of their study was how gaseous particles constantly in random motion give rise to macroscopic properties that follow the gas laws. To answer this, they developed statistical mechanics: “applying statistics to the chaos of atoms and molecules, they found that averaged over time and space, the randomness of atomic motion gives way to order and regularity”. Using statistical mechanics, Maxwell and Boltzmann showed that the particle speeds in a gas follow a well-defined, bell-shaped distribution (the Maxwell–Boltzmann distribution).
· The behavior of atoms and particles is governed by energy conservation, the strictest law of nature.
· Heat as a form of energy was the conclusion reached by Count Rumford after observing that the mechanical energy expended in boring a cannon (a metal cylinder) was converted to heat.
· Maxwell, Boltzmann, and others developed kinetic theory invoking the existence of small particles such as atoms that are in constant motion.
· Brownian motion was first observed by Robert Brown, who noticed the ceaseless random motion of pollen grains suspended in water. Years later, Albert Einstein showed that this random motion was caused by collisions with much smaller, invisible particles – the molecules of the liquid.
· Physicists later confirmed the existence of atoms and their constant motion using microscopes in the early 1900s.
· The continuous motion of atoms and molecules is called thermal motion. Their speeds can reach up to 500 m/s, comparable to the speed of an airplane. If we were to scale ourselves down to the size of a molecule, it would be akin to standing in a molecular storm. Even though molecules move very fast, they do not travel far because they are in constant collision with other atoms and molecules.
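· [CODE SKETCH: A rough check, in Python, of the molecular speeds quoted above, using the Maxwell–Boltzmann picture. The choice of a water molecule and body temperature (310 K) are my assumptions for illustration.]

    import numpy as np

    kB = 1.380649e-23          # Boltzmann constant, J/K
    T = 310.0                  # body temperature, K
    m = 18.0 * 1.66054e-27     # mass of a water molecule (18 u), kg

    # Mean speed from the Maxwell-Boltzmann distribution: <v> = sqrt(8 kT / (pi m))
    mean_speed = np.sqrt(8 * kB * T / (np.pi * m))
    print(f"mean thermal speed of a water molecule: {mean_speed:.0f} m/s")   # ~600 m/s

    # Equivalent check by sampling: each velocity component is Gaussian
    # with variance kT/m, and the speed is the length of the velocity vector.
    rng = np.random.default_rng(1)
    v = rng.normal(0.0, np.sqrt(kB * T / m), size=(200_000, 3))
    speeds = np.linalg.norm(v, axis=1)
    print(f"sampled mean speed: {speeds.mean():.0f} m/s")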
· First law of thermodynamics: conservation of energy
· “Thermodynamics is the science that deals with thermal energy and is the macroscopic “sister science” of statistical mechanics. Thermodynamics is what emerges when we average the random motions of atoms using tools of statistical mechanics.”
· Not all types of energy are interchangeable. The convertibility of energy in a system of particles depends on the distribution of energies among the individual atoms. Temperature is only a measure of the average kinetic energy; thus macroscopic properties (macrostates) cannot tell us much about the energies and speeds of individual atoms (microstates).
· “Why are some types of energy more useful than others, specifically, why can some types of energy be converted while others appear difficult to convert, thus making them useless?”
· The concept of entropy is discussed in terms of microstates. A disordered arrangement, where particles are randomly positioned, is more probable because there are more microstates leading to this macrostate. The author makes the following comparison in terms of the entropy “content” of energy: gravitational energy is low-entropy energy while heat is high-entropy energy. “Friction and impact are great randomizers of energy.” “Energy that is completely organized, concentrated, and tidy is a rather artificial, low-probability situation.”
· This is the second law of thermodynamics: “in any transformation in a closed system, entropy always increases” (“energy becomes more and more dispersed and thus unusable”).
· Author’s statement of the second law of thermodynamics: “There can be no process whose only result is to convert high-entropy (randomly distributed) energy into low-entropy (simply distributed or concentrated) energy. Moreover, each time we convert one type of energy into another, we always end up overall with higher-entropy energy. In energy conversions, overall entropy always increases”.
· Entropy is a measure of the degree to which energy is dispersed (positional entropy is only one contribution to the total entropy, because a particle’s energy depends not only on its position but also on other modes of motion irrespective of position). Entropy is not just disorder.
· An example of the above is the difference in entropy between randomly stacked marbles and orderly stacked marbles. In a random stack, the freedom of motion is actually reduced because some marbles are immobilized; in an orderly stack, the marbles have more freedom of motion. Randomly stacked marbles therefore have higher positional entropy but lower energy entropy because of a narrower energy distribution. [USE IN 1B]
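· [CODE SKETCH: A toy illustration of the microstate/macrostate idea above: count how many ways distinguishable particles can be arranged in boxes for a “piled in one corner” versus a “spread out” macrostate, and convert the count to entropy via S = k ln W. The tiny system size is an assumption so the counting stays exact; it is not from the book.]

    from math import factorial, log

    kB = 1.380649e-23   # J/K

    def multiplicity(occupancies):
        """Number of ways to place distinguishable particles so box i holds occupancies[i]."""
        n = sum(occupancies)
        w = factorial(n)
        for ni in occupancies:
            w //= factorial(ni)
        return w

    N, M = 12, 4
    piled  = [N, 0, 0, 0]            # ordered: everything in one box
    spread = [N // M] * M            # disordered: evenly dispersed

    for name, occ in [("piled in one box", piled), ("evenly dispersed", spread)]:
        W = multiplicity(occ)
        print(f"{name:20s}  W = {W:8d}   S = kB ln W = {kB * log(W):.2e} J/K")
    # The dispersed macrostate has vastly more microstates, hence higher entropy,
    # and is therefore overwhelmingly more probable.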
· Many biological structures that are highly ordered spontaneously form with an increase in entropy (e.g., assemblies of proteins, cell membrane structures, and fibers) because in the process, disorder is transferred to the water molecules.
· The emergence of (highly-ordered) life is not a violation of the second law. Entropy can be locally reduced but globally increased in a transformation. Also, there are examples of highly ordered structures that embody more entropy.
· “Life reduces entropy locally while increasing it globally.”
· The author on free energy:
o “Free energy, F, is the total energy E minus the product of temperature T and entropy S of the system (F = E-TS). Because entropy represents how much energy has become dispersed and useless, free energy represents that part of the energy that is still “concentrated” and useful (because we are subtracting the useless part, TS).”
o “In the language of free energy, the second law is restated this way: At constant temperature, a system tends to decrease its free energy until, at equilibrium, free energy has reached a minimum. The second law tells us that useful energy will become degraded, and eventually we will only be left with dispersed, unusable energy.”
o “The concept of free energy captures the tug-of-war between deterministic forces (chemical bonds) and the molecular storm – or in other words, between necessity and chance, in one elegant formula, F = E – TS.”
o “According to the second law, free energy will eventually be degraded and reach a minimum.”
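o [CODE SKETCH: A small Python sketch of the tug-of-war in F = E − TS for a two-state system: a low-energy state competing against a high-multiplicity (high-entropy) state. The energy gap and microstate counts are invented for illustration; they are not values from the book.]

    import numpy as np

    kB = 1.380649e-23                 # J/K
    eps = 10 * kB * 300               # state B lies 10 kT(300 K) above state A in energy
    W_A, W_B = 1, 1_000_000           # number of microstates in each macrostate (invented)

    def free_energy(E, W, T):
        """F = E - T S with S = kB ln W."""
        return E - T * kB * np.log(W)

    for T in (100, 300, 1000, 3000):
        FA = free_energy(0.0, W_A, T)
        FB = free_energy(eps, W_B, T)
        winner = "A (energy wins)" if FA < FB else "B (entropy wins)"
        print(f"T = {T:5d} K   F_A = {FA: .2e} J   F_B = {FB: .2e} J   ->  {winner}")

    # Crossover: the entropy-rich state takes over when T > eps / (kB ln W_B)
    print("crossover temperature ~", round(eps / (kB * np.log(W_B))), "K")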
· The Big Bang and where free energy on earth came from:
o “The big bang started out as pure energy and very little entropy” (a singular point).
o Shortly after the big bang, new particles were formed from this featureless energy (energy congealing is the phrase used by the author): quarks, electrons, muons, neutrinos, photons, etc. Chaos was created and entropy increased as free energy was degraded.
o After 300,000 years, the universe cooled down; the first protons and neutrons were formed when three quarks combined, resulting in a release of energy. Protons and neutrons collided and stuck together to form nuclei, further releasing energy, which increased the entropy of the surroundings even as the entropy of the system decreased. As the universe cooled down further and became less dense, no nuclei heavier than hydrogen and helium could form.
o Gravitational forces then exerted their influence as denser regions of the ensuing universe attracted more atoms. As nebulae grew into giant systems, they started to collapse under their own weight; the cores became dense and hot enough to initiate nuclear fusion, creating heavier nuclei all the way up to iron. “Hydrogen and helium were cooked into heavier elements, and stars were born.”
o Energy in the form of heat from nuclear fusion reaches the earth in the form of free energy that atoms and molecules can absorb.
o In a nutshell: “As free energy is dissipated, and the entropy of the universe increased, new structures are born, from quarks to nuclei to atoms to…life.”
· Living systems are open, tightly controlled, dissipative, near-equilibrium complex systems:
o “The continual flux of energy is a fact of life – a fact that keeps living systems out of thermodynamic equilibrium. Equilibrium is the state in which all available free energy has been degraded and no usable energy remains. Equilibrium means death. Living beings must avoid equilibrium. As long as we are alive, energy continues to flow through us. In thermodynamics, systems through which energy and matter flow from and to the environment are called open systems.”
o In living systems, “what enters is not the same as what leaves the system. Living beings gobble up low-entropy energy, degrade the energy, and expel high entropy energy into the environment. We call such systems dissipative systems, because they continuously dissipate free energy into high-entropy energy.”
o “Life is a highly efficient process. Efficiency is best achieved when we do not stray too far from equilibrium, because large movements cause friction and, consequently, rapid degradation of low-entropy energy…By staying away from equilibrium, we stay alive. By staying close to equilibrium, we increase efficiency.”
· “Such a complex system as life can only work if its parts are designed to push thermodynamics to its limits. Life does not exist despite the second law of thermodynamics; instead, life has evolved to take full advantage of the second law where it can.”
· How can it do this? “Life’s engines operate at the nanometer scale, the tiny scale of molecules. But what is so special about this scale that chaos can become structure, and noise can become directed motion?”
CHAPTER 4 – ON A VERY SMALL SCALE
· At the beginning of the chapter, the author mentions attending a biophysics research conference in 2011 showing motility assays: “attaching proteins called myosins to a surface and then seeding fibrous proteins, called actins, on top of the myosins.” Fluorophores are attached to the molecules for visualization, and ATP molecules fuel the motions. I thought this was really awesome and googled some videos showing this.
· The actin molecules are being moved around by the myosin molecules from one to the next, much like crowd-surfing in a rock concert as the author put it.
· The role of nanoscience is the focus of this chapter. As the author claims toward the end of the last chapter, “life’s engines operate at the nanometer scale, the tiny scale of molecules”. This is the scale at which we expect the chaos of atoms and molecules to become structures with functions that gain the ability to propel themselves. “Biophysics is nanophysics.”
· “Life must begin at the nanoscale. This is where complexity beyond simple atoms begins to emerge and where energy readily transforms from one form to another. It is here where chance and necessity meet. Below the nanoscale, we find only chaos; above this scale only rigid necessity.”
· The author defines nanoscience as “the production, measurement, and understanding of systems where at least one spatial dimension is in the nanometer range”.
· The author notes that credit has usually been given to Richard Feynman for jumpstarting the nanoscience revolution. Many of the predictions about constructing nanostructures for specific purposes given in his 1959 American Physical Society meeting talk have come true.
· K. Eric Drexler, in 1986, wrote the founding work on modern nanotechnology “Engines of Creation: The Coming Era of Nanotechnology”.
· The coming age of nanotechnology was much helped by the invention of the scanning probe microscope (SPM).
· Scanning Tunneling Microscope – the first SPM to be invented (1982). The contour of an object’s surface is resolved at the scale of 1/10 of a nanometer. The tunneling current measured depends on the width of the tunneling barrier (the empty space between the tip of a very sharp metal needle and the surface of the object) and on the number of electrons in the sample; the current can drop tenfold when the needle is retracted by only 1/10 of a nanometer from the surface. This level of sensitivity allows the STM to resolve images of single atoms.
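· [CODE SKETCH: The exponential sensitivity of the tunneling current can be checked with a short estimate. This assumes the standard vacuum-tunneling form I ∝ exp(−2κd) and a typical metal work function of ~5 eV; the numbers are illustrative, not from the book.]

    import numpy as np

    hbar = 1.054571817e-34          # J*s
    m_e  = 9.1093837015e-31         # electron mass, kg
    phi  = 5.0 * 1.602176634e-19    # work function ~5 eV, in J

    kappa = np.sqrt(2 * m_e * phi) / hbar   # decay constant of the electron wavefunction, 1/m
    delta_d = 0.1e-9                        # retract the tip by 0.1 nm

    ratio = np.exp(-2 * kappa * delta_d)    # I(d + delta_d) / I(d)
    print(f"kappa ~ {kappa * 1e-10:.2f} per angstrom")
    print(f"current drops by a factor of ~{1 / ratio:.0f} for a 0.1 nm retraction")  # ~10x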
· Atomic Force Microscopy – the second one invented and now the most popular.
· Challenges at the nanoscale:
· Graininess and interfaces produce more viscosity and stickiness between particles.
· Profound changes in behavior depending on size: as objects get smaller, the surface-to-volume ratio increases (for a sphere, 4πr² / (4/3 πr³) = 3/r, which grows as r shrinks; see the sketch below). At this scale, surface forces dominate. At the macroscale, mass forces such as gravity and inertia dominate.
· At the biological small scale, stickiness is controlled by balancing forces between the molecules and the salty water.
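· [CODE SKETCH: The surface-to-volume scaling above, evaluated numerically for spheres from everyday size down to the nanoscale; the radii are arbitrary examples.]

    import numpy as np

    radii = np.array([1.0, 1e-3, 1e-6, 1e-9])          # 1 m, 1 mm, 1 um, 1 nm
    surface = 4 * np.pi * radii**2
    volume  = (4.0 / 3.0) * np.pi * radii**3
    for r, ratio in zip(radii, surface / volume):       # ratio = 3 / r
        print(f"r = {r:8.0e} m   surface/volume = {ratio:10.1e} 1/m")
    # The ratio grows as 3/r: at the nanoscale, surface (sticking) forces
    # dwarf volume (mass) effects such as gravity and inertia.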
· Some special properties emerge at the nanoscale: quantum mechanical effects, the importance of thermal noise and entropy, cooperative dynamics, large ranges of relevant time scales, and the convergence of energy scales. Quantum mechanical effects are largely negligible at the nanoscale in biological systems, as they are dwarfed by thermal motion. According to the author, essentially all molecular biology can be explained by classical physics (except chemical bonding). [ASIDE: “Much research on quantum computing, spintronics, or other fancy new quantum electronics is done at low temperatures – much too low for any living system.”]
· On Self-Assembly:
o The formation of a snowflake is a good example of self-assembly. The author describes at length the creation of a snowflake, whose six-sided general structure arises from the stable hexagonal geometry formed by six water molecules in the crystalline state. The formation of a snowflake is a good illustration of how chance and necessity, and entropy and energy, combine to create stable, “robust” structures.
o The simplest form of self-assembly can be induced by depositing spherical particles on a surface. If attractive forces are present and they have a way to come together, they will stick to each other and form close-packed layers, the most ubiquitous in nature.
o Other particles can be induced to self-assemble in more interesting ways by controlling other conditions: using oddly shaped molecules that direct how the structure forms, applying non-equilibrium conditions (such as electric fields, pressure, or a liquid flow), or letting entropy preside over the assembly.
· There are molecules that act as glue between two types of particles that normally would not attract each other or mix. These substances are called emulsifiers, and they are composed of amphiphilic molecules with dual solubility. They help stabilize mixtures of polar and nonpolar substances, for instance by forming micelles, spherical cages in which the oily, nonpolar molecules are trapped to keep them mixed with the more polar component. The free energy of the resulting structure is much reduced, and thermal motions are too weak to break the micelles apart.
· The formation of these micelles is an example of cooperative dynamics, or cooperativity, requiring an optimal number of molecules to come together to form a stable structure. The properties of the solution may change when micelles spontaneously form at the critical concentration, e.g., a reduction in osmotic pressure as the number of dissolved particles decreases upon aggregation into micelles.
· A more complex but similar structure formed by lipids is the vesicle, a double-walled sphere that can form a separation between two volumes of water. The inside of such a double-walled sphere can be used as a protected reaction vessel, much like the double-layer membrane surrounding the chemical factory of a cell.
· On Entropic Forces
o While it may sound counterintuitive, the random motions of particles are actually a prerequisite for the assembly of stable structures. These parts and pieces need to be shuffled around and collide with the correct piece for the free-energy-minimizing self-assembly to take place. As in the random-stacking analogy with marbles, particles may sometimes aggregate into a more ordered structure that leads to a higher number of microstates for motion or energy.
o Inside cells, minimization of free energy can take place by maximizing entropy, even with the appearance of more highly ordered structures. The driving force for these processes is called an entropic force. In the crowded space inside a cell, protein macromolecules are separated from each other by only about 10 nanometers (considering that proteins are about 10 – 100 nm in size, this distance is analogous to cars separated by a foot) of an aqueous solution containing small molecules and ions. Each of these macromolecules is surrounded by an exclusion or depletion zone that excludes the small molecules. When these proteins assemble, their exclusion zones merge, increasing the amount of space available to the water molecules and the small molecules and ions contained therein. So, even though protein assembly leads to higher order and lower entropy, the gain in entropy from making more space available to the aqueous contents of the cell more than makes up for the loss. Thus, overall entropy still increases, and the assembly of more organized and larger structures is driven by this entropic force.
o Entropic force is used by the cells to assemble fibrous proteins such as collagen, actin, and microtubule filaments.
o The driving force for a spontaneous process can thus be dominated either by energy minimization (e.g., bond formation) or by entropy maximization (entropic forces); both lower the free energy. A rough experimental test: if assembly is enhanced when the temperature is increased, the process is mostly entropy-driven; if higher temperature disrupts the assembly, it is mostly energy-driven.
o There are two important forces at play in biological systems: hydrogen-bonding and hydrophobic forces. A hydrogen bond is a relatively strong attraction between a hydrogen atom and an oxygen atom that are not covalently bonded to each other. It is this force that gives water its high boiling point and high heat capacity compared to similar molecules. Hydrophobic forces are entropic forces; they are what cause oil droplets dispersed in water to coalesce. They are also the force operative in the making of cheese. About 80% of the protein in cow’s milk is casein (a phosphorus-containing protein). When salt, acid, or rennin is added to milk, the hydrophobic but negatively charged casein molecules congeal to form cheese. The ions in salt and the hydrogen ions in acids help neutralize the negative charges on casein, which allows hydrophobic forces to drive the molecules to coalesce into cheese. The enzyme rennin causes the same coalescing effect by removing the negatively charged parts of casein.
o Hydrophobic forces are one of the forces that stabilize proteins. A protein molecule must go through random changes in shape before finding the most stable one. This lowest-energy configuration is achieved by hydrophobic forces drawing the nonpolar side chains into the interior of the folded shape while the hydrophilic side chains remain on the external surface of the fold. Because it may take some time for a protein to find this low-energy configuration, folding sometimes has to be facilitated by chaperonin molecules that mold certain parts to aid the folding. As the author notes at the end of this section, “protein folding is probably the best example of how physical laws, randomness, and information – provided by evolution – work together to create life’s complexities”.
· On cooperativity:
o DNA only encodes the sequence of amino acids but not the instruction for how the polymer is to fold. The folding process is governed by both chemical and physical forces and the external conditions of pH, temperature, ion concentration, etc. In all this, random motions provide the means by which the lowest-energy shape is found.
o The functional structures of these complex macromolecules are stabilized mostly by relatively weak interactions (hydrophobic forces and hydrogen bonds) but also by relatively strong bonds such as salt bridges and disulfide bonds. In gluing atoms together, there must be a balance between the stability provided by strong bonds and the flexibility provided by relatively weaker bonds. Bonds “cooperate” to provide this balance to macromolecules.
o In a study of water molecules squeezed into a nanoscale gap, the author and his students found that cooperativity between water molecules (e.g., simultaneously moving away to make room for an AFM tip) can occur on the relatively long order of seconds (molecular collisions happen a million billion times faster than this). Calculations suggest that at this long time scale, some 30 – 40 molecules are involved in the cooperative motion. Furthermore, they also observed a sharp change in properties from liquid-like to solid-like in response to a small change in squeeze rate. Such sudden shifts are indicators of cooperative behavior.
o Bond and molecular cooperativity is critical for cell function. One process facilitated by cooperativity is a sudden change in molecular shape triggered by a small external cause, which allows these molecules to act as molecular switches. Molecular switches are “molecules that can effect large changes in response to small causes, such as the binding of a small molecule”. From molecular switches, molecular circuits can be built that control cell activity. [In electronics, transistors are switches that allow a small change in voltage to control a large current.]
· On molecular machines and energy transfer:
· In addition to thermal motion, entropic forces, and cooperativity, another key property of nanoscale systems is their ability to act as molecular machines. Machines are, by definition, energy-conversion devices (e.g., a car is a machine that converts chemical energy into kinetic energy, and myosin is a molecular machine that converts the chemical energy of ATP into kinetic energy).
· Nanoscale structures have the special property that many types of energy are of roughly the same magnitude. Figure 4.6 in the book shows how electrostatic energies, chemical bond energies, and elastic energies all converge at the 10^-9 m scale. In addition, these energies are of the same order of magnitude as thermal energy (the “molecular storm”) at room (or body) temperature. This means that nanoscale structures can undergo a tremendous amount of fluctuation as they absorb thermal energy and convert it into other forms. In contrast, the binding energies of atoms and nuclei are far too high for thermal energy to induce any discernible fluctuation. The same holds at the other end of the spectrum: for structures much larger than nanostructures, mechanical and electrical energies are too large to be affected by thermal energies and “everything becomes deterministic”, unable to undergo the spontaneous change and self-assembly that are necessary in living systems.
· In the end of the chapter, the author provides a good summary of what makes nanostructures special in terms of energy conversion and their capacity for spontaneous change and motion with just a little push from the thermal energy they are immersed in: “Thus, the nanoscale is truly special. Only at the nanoscale is the thermal energy of the right magnitude to allow the formation of complex molecular structures and assist the spontaneous transformation of different energy forms (mechanical, electrical, chemical) into one another. Moreover, the conjunction of energy scales allows for the self- assembly, adaptability, and spontaneous motion needed to make a living being. The nanoscale is the only scale at which machines can work completely autonomously. To jump into action, nanoscale machines just need a little push. And this push is provided by thermal energy of the molecular storm.”
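· [CODE SKETCH: A back-of-the-envelope comparison, in Python, of thermal energy at body temperature with typical molecular energy scales, to put the “convergence of energy scales” above in numbers. The bond-energy values are textbook ballpark figures, not taken from the book.]

    kB = 1.380649e-23            # J/K
    eV = 1.602176634e-19         # J per electron-volt
    kT = kB * 310                # thermal energy at body temperature

    # Ballpark energy scales (approximate, for orientation only)
    scales = {
        "thermal energy kT (310 K)":    kT,
        "weak hydrogen bond (~0.1 eV)": 0.1 * eV,
        "ATP hydrolysis (~0.3 eV)":     0.3 * eV,
        "covalent C-C bond (~3.6 eV)":  3.6 * eV,
    }
    for name, E in scales.items():
        print(f"{name:32s} {E:.1e} J   = {E / kT:6.1f} kT")
    # Hydrogen bonds and ATP energies sit within an order of magnitude or so of kT,
    # so thermal kicks can make and break them; covalent bonds (and nuclear energies,
    # far larger still) are effectively untouched by the molecular storm.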
CHAPTER 5 – MAXWELL’S DEMON AND FEYNMAN’S RATCHET
· One of Maxwell’s original statements of the Second Law of Thermodynamics:
o “One of the best established facts in thermodynamics is that it is impossible in a [closed] system…which permits neither change of volume nor passage of heat, and in which both the temperature and pressure are everywhere the same, to produce any inequality of temperature or pressure without expenditure of work. This is the second law of thermodynamics, and it is undoubtedly true as long as we can deal with bodies only in mass, and have no power of perceiving or handling the separate molecules of which they are made up.”
· Practical implications of the second law of thermodynamics:
o The second law of thermodynamics was developed to explain the limitations of machines, steam engines, in particular, for which most of the heat generated from burning the coal is wasted and not used for mechanical work. At present, for example, an internal combustion engine is only about 20% efficient.
o Engines and other machines use gradients (e.g., temperature, pressure) to convert chemical energy from fuel into mechanical energy for motion. The efficiency of a machine is therefore dependent on the size of this gradient: the larger the gradient, the higher the efficiency, but it can never reach 100%. The efficiency goes to zero when the engine’s temperature reaches that of its surroundings.
o “The second law of thermodynamics allows us to extract work from gradients, at the cost of creating waste heat and the leveling of the gradient. The result is equilibrium – a state of uniform temperature and pressure, a state from which no further work can be extracted.”
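o [CODE SKETCH: The gradient dependence of engine efficiency can be made concrete with the Carnot limit, η = 1 − T_cold/T_hot, the best any heat engine can do. Using this particular formula, and the temperatures below, are my choices for illustration, not the book’s.]

    def carnot_efficiency(t_hot, t_cold):
        """Maximum fraction of heat convertible to work between two reservoirs (temperatures in kelvin)."""
        return 1.0 - t_cold / t_hot

    t_cold = 300.0                                   # surroundings, K
    for t_hot in (1200.0, 600.0, 400.0, 300.0):
        eta = carnot_efficiency(t_hot, t_cold)
        print(f"T_hot = {t_hot:6.0f} K  ->  max efficiency = {eta:5.1%}")
    # The larger the temperature gradient, the higher the ceiling; when the engine's
    # temperature equals its surroundings, the efficiency drops to zero and no more
    # work can be extracted: the gradient has been leveled.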
· “How can molecular machines extract work from the uniform-temperature environment of cells without violating the second law of thermodynamics?”
o “The second law is a statistical law – which states that most of the time (usually very close to “always”), systems tend toward their most probable state.” “…the second law emerges once we talk about a large number of molecules. It is a statistical law.”
o “In a large system, one visible with an optical microscope or larger, a violation of the second law will, for all practical purposes, never happen. However, a really small system (a single molecule, for example) can violate the second law relatively often.” But, one cannot build a machine out of these small systems because one “cannot repeatedly extract energy from a uniform heat bath”.
o “When systems are small enough, there is a finite probability, though rare, that the atomic chaos surrounding the system actually adds energy to the system rather than stealing energy.”
o To make a small machine perform repeated motions, a reset step is needed, as the machine needs to be returned to its original state before it can begin a new cycle. And it is this reset step that leads to an inevitable increase in entropy.
o Related to this is the seeming connection between entropy and information storage: “each time you erase information, you dissipate energy and increase entropy”.
· On Reversibility and Irreversibility:
o [MY THOUGHTS: The equilibrium state is the most probable state. A low-probability state is a non-equilibrium state. LeChatelier’s principle states that a reaction will shift in the direction that counteracts whatever perturbation was introduced. Is this shift always spontaneous even though it might not be in the spontaneous direction? No, a shift in the non-spontaneous direction will require an input of delta G. Entropy must always increase as systems arrive at the most probable state.]
o “Irreversibility (comes) from the fact that that initial system was not at equilibrium. That is, it was not in a state of maximum entropy. This has consequences for the entire universe we live in: If there is such a thing as the arrow of time, which points from past to future, this arrow can only be there because the universe started in a very low – entropy state. Stars, galaxies, planets, and living beings have been feeding off the low entropy ever since.”
o “The second part of the answer to how irreversibility can emerge from the reversible mechanics of particles is that the system has to be large enough – must contain enough – molecules – so that collisions always mix things up.” “The second law tells us any directed motion of a system will always encounter the resistance of friction. Friction is the result of many randomly moving molecules scavenging energy away from any non-random motion.” When you roll a macroscopic ball down an incline, its kinetic energy at the bottom is always less than it starting potential energy because collisions against the plane (friction) cause energy to be scavenged away. However, when a nanoscale ball is rolled down the same plane, sometimes, the kinetic energy of the nanoscale ball at the bottom might actually be greater than the initial potential energy because the randomly moving atoms of the surroundings did not cause any friction but actually pushed the ball in the direction it is rolling. This is because, for very small systems, like the nanoscale, these low-probability events have a larger, more noticeable effect than in the macroscopic system. Thus, “when systems are small enough, there is a finite probability, though rare, that the atomic chaos surrounding the system actually adds energy to the system, rather than stealing the energy”. (Experiments by Bustamante and Liphardt at UCB on RNA loop-closing energies have confirmed this. When measured at high speeds of moving the RNA, the energy difference was higher (friction takes away energy in addition to the actual amount lost) than measured at slower kinetic energies where the amount scavenged by random motions of surrounding atoms was smaller as well. Sometimes, the energy difference measured was lower than the minimum energy required to open the loop, implying the rare case where random motions of surrounding atoms actually helped open the loop instead of resisting it thus violating the second law. This confirms what is known as Jarzynski’s formula: “Nanoscale systems sometimes violate the second law of thermodynamics. At the molecular scale, entropy can sometimes spontaneously decrease (although, strictly speaking entropy is not defined at this scale). When that happens, it is as if time has reversed.”
· Applying this to biological systems:
· Going back to the idea of how work can be done in biological systems if we have a constant-temperature environment. Could the observations on these nanosize molecules explain how the body can generate order and extract energy from the uniform heat bath of living organisms?
· No, because living organisms generate waste energy. The efficiency of a human body is about 20%. Efficiency, in this case, is defined as the amount of physical work that can be done using the energy contained in food consumed. 80% of the energy from food is released as heat through friction or used to maintain metabolic processes in our cells [BUT DON’T THESE COUNT AS SOME TYPE OF CHEMICAL WORK?]
· However, a molecular device (a ratchet) that allows motion in only one direction can, theoretically, create organized motion from otherwise random (disorganized) thermal motions of particles: Feynman’s Ratchet (the idea was originated by Smoluchowski).
· Failure of the ratchet: the work gained from motion in the “easier” direction is equal to the work required to go in the opposite, harder direction. In the easier direction, the incline distance is longer even though less force is required for the pawl to move up the gentle incline; in the harder, “restricted” direction, the force required for the pawl to climb the almost vertical wall is much higher, but the distance is shorter (work = force × distance). The spring attached to the pawl, which allows it to move up and down, must be weak enough for water molecules to push the pawl over one tooth. But then a single unfavorable hit while the pawl is raised can cause the ratchet to turn backwards. In Feynman’s calculation, the probabilities of the ratchet moving forward and backward are always the same, and the ratchet ends up just bobbing back and forth with equal probabilities [equilibrium! No further net work can be done!]
· The failure of Feynman’s ratchet and the low probability of thermal motions being converted into organized motion show that the second law still rules systems with large numbers of particles – “a powerful illustration of the second law of thermodynamics” (it is a statistical law!). “Work cannot be repeatedly extracted from an isolated reservoir at uniform temperature. If it were possible to make machines that could do this, our energy problems would be solved: Such machines would convert heat in our environment back into ordered mechanical energy.”
· So the question still stands: what allows living organisms to be able to extract organized motion (work!) from random, thermal motions (heat!)?
CHAPTER 6 – THE MYSTERY OF LIFE
· “Molecular machines read and translate DNA; make new machines, operate the processes that makes cells reproduce, transport nutrients, and expel wastes; and help the cell change shape and move about. These tiny machines are the basis of life. But how do they work?”
· The two possible answers discussed in the previous chapter were invalidated by the second law of thermodynamics: “The second law is an inescapable (macroscopic) consequence of the randomizing power of the inescapable (microscopic) molecular storm”.
· The author describes the following quantitative estimations given in a 2002 Physics Today article on Brownian motors written by Dean Astumian and Peter Hanggi:
· Power density of molecular motors: A typical molecular motor uses about 100 to 1,000 ATP molecules per second, corresponding to a power output of ~10^-16 watts. The 10^21 molecular motors needed to provide the 130 horsepower of a modern car would fit in a teaspoon. Molecular motors have a power density of about 10^8 watts per cubic meter, about 1,000 times larger than a car engine’s.
· Collision rate and power at 310 K: A water molecule hits a molecular motor about once every 10^-13 seconds. Each collision delivers about 4.3 × 10^-21 J (= kT = 1.38 × 10^-23 J/K × 310 K). So the power input is about 10^-8 watts.
· The power of the molecular storm: each molecular machine is thus being hit by water molecules carrying about 10^8 times more power than the machine’s own power output. This is equivalent to a car being hit by winds of about 70,000 mph.
· This power is delivered from random directions, however. How do these molecular machines “tame the chaos of thermal motion”?
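· [CODE SKETCH: The Astumian/Hanggi-style estimates above can be reproduced in a few lines of Python; the energy per ATP hydrolysis (~20 kT) and the other round numbers are the usual ballpark values, assumed here for illustration.]

    kB = 1.380649e-23
    T = 310.0
    kT = kB * T                          # ~4.3e-21 J per collision

    atp_per_second = 1000                # upper end of the 100-1,000 ATP/s range
    energy_per_atp = 20 * kT             # ballpark usable energy per ATP hydrolysis
    motor_power = atp_per_second * energy_per_atp          # ~1e-16 W output

    collision_interval = 1e-13           # one water-molecule hit every 0.1 ps
    storm_power = kT / collision_interval                  # ~1e-8 W delivered by collisions

    print(f"kT at 310 K          : {kT:.1e} J")
    print(f"motor power output   : {motor_power:.1e} W")
    print(f"molecular-storm input: {storm_power:.1e} W")
    print(f"ratio storm/output   : {storm_power / motor_power:.0e}")   # ~10^8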
· For a chemical reaction to take place, reactant molecules first go through an “awkward”, high-energy transition state. How much strain the chemical bonds have to endure determines the size of the activation energy required to push molecules through this stage before they can reach another low-energy state. For molecular machines, this energy is provided by the thermal motions of the molecules around them. But this can take a very long time (the higher the activation energy, the slower the rate, as there are fewer high-energy collisions). Enzymes, biological catalysts, make the transition state more “comfortable” (less strained) so that the required activation energy is lower. The author describes enzyme–substrate binding and complex formation and how it leads to a lower-energy transition state and the subsequent release of the lower-energy product molecules. “A molecule does not know how it will fit into the binding pocket of the enzyme. Instead the constant bombardment by water molecules (about every 10^-13 seconds) rapidly rotates and deforms the molecule. After a millisecond, by chance the molecule is pounded into the right shape and orientation to form a complex with the enzyme.”
· Phosphoglucomutase speeds up the conversion of an indigestible type of sugar into a digestible one a trillion-fold, with a single phosphoglucomutase molecule converting about a hundred sugar molecules per second.
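· [CODE SKETCH: In the Arrhenius picture, rate ∝ exp(−Ea/kT), so a trillion-fold speed-up corresponds to lowering the activation barrier by kT·ln(10^12). Treating the speed-up purely as barrier lowering is a simplifying assumption on my part, not a calculation from the book.]

    import numpy as np

    R = 8.314          # gas constant, J/(mol*K)
    T = 310.0          # body temperature, K

    speedup = 1e12     # trillion-fold rate enhancement quoted above
    barrier_drop = R * T * np.log(speedup)   # J/mol

    print(f"equivalent lowering of the activation barrier: {barrier_drop / 1000:.0f} kJ/mol "
          f"(~{np.log(speedup):.0f} kT)")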
· Cells have to respond to their environment and be able to make decisions. One of the ways they do this is through the action of enzymes, whose activity can be modulated by positive or negative feedback.
· In competitive inhibition, the activity of an enzyme is inhibited by the binding of a competing non-substrate molecule in the binding site of the enzyme and no reaction takes place.
· In allosteric inhibition, a control molecule binds at a site different from the substrate binding site. Upon binding of the control molecule, the shape of the binding site may change to either encourage substrate binding (positive feedback) or prohibit substrate binding (negative feedback).
· In positive feedback, the product itself binds to the allosteric site and changes the shape of the active site to accommodate substrate binding. This results in a “rapid, explosive increase in products (until the reactants run out)” necessary if the cell needs a huge amount of a compound very quickly as it responds to an external stimulus.
· In negative feedback allosteric inhibition, the product binds to the allosteric site which then causes the active site to change shape to prohibit substrate binding, moderating its own production in the cell.
· Other control molecules participate in more “complicated schemes of feedback loops and mutual enhancement and inhibition [that] provide the computing power that makes living cells seem intelligent”.
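· [CODE SKETCH: A toy discrete-time model of allosteric feedback in Python: production of a product either throttles itself (negative feedback) or amplifies itself until the substrate runs out (positive feedback). The rate constants and pool sizes are invented for illustration.]

    def simulate(feedback, steps=60):
        """Toy enzyme kinetics: product is made from substrate; the product itself
        modulates the enzyme's activity through an allosteric site."""
        substrate, product = 100.0, 0.0
        base_rate = 0.05
        history = []
        for _ in range(steps):
            if feedback == "negative":
                activity = 1.0 / (1.0 + product / 5.0)      # product inhibits its own production
            else:  # positive feedback
                activity = 1.0 + product / 5.0               # product enhances its own production
            made = min(substrate, base_rate * activity * substrate)
            substrate -= made
            product += made
            history.append(product)
        return history

    neg = simulate("negative")
    pos = simulate("positive")
    print("negative feedback, product over time:", [round(x, 1) for x in neg[::10]])
    print("positive feedback, product over time:", [round(x, 1) for x in pos[::10]])
    # Negative feedback throttles production so the product accumulates slowly and
    # steadily; positive feedback gives a rapid, explosive rise until the substrate
    # is exhausted, as described in the notes above.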
· The binding of a substrate or a control molecule which results in a shape change can be viewed as a form of motion.
· To support continuous forward motion, an irreversible reset step that cancels the backward motion in a reversible process (powered by random thermal motion) is required. Irreversible steps use free energy that is then degraded into heat which cannot be reused as free usable energy. These irreversible steps are fueled by “free (low-entropy) energy from the outside (food or sunlight) and this free energy is degraded (dissipated) by our molecular machines as they use it to harness the molecular storm”.
· The energy from metabolized food is stored in ATP molecules (adenosine triphosphate). When a phosphate group detaches, this energy is transferred to an enzyme in the form of vibrational energy. The resulting motion of the enzyme is equivalent to raising its temperature from 98 F to 7,000 F. The ADP then goes back to the mitochondrion (the “cell’s recharging station”) to recharge. The mitochondrion is where energy from the breakdown of sugars is stored as a phosphate group is reattached to an ADP to form ATP.
· In an example described by the author (see Figure 6.9), the forward motion of a kinesin molecule along a microtubule is propelled by a combination of allosteric binding of ATP to one “foot”, causing a shape change that allows it to clamp down on the microtubule with a forward tilt of the leg, while the other foot is moved forward (eventually) by the release of energy from an ATP. The two feet exchange roles, and the cycle repeats as long as ATP molecules are present. This is an example of a tightly coupled motor rather than a loosely coupled one.
· “The hallmark of a tightly coupled molecular motor is that it goes through well-defined cycles, using up a fixed number of ATP molecules during each step. Nevertheless, random motion is the drive behind the motor’s locomotion, as it ultimately moves the legs of the motor forward – of course, rectified by the allosteric interaction of the motor’s legs with ATP.”
· A loosely coupled process derives its motion partly from random motions (diffusion) and partly from force-directed motion (drift). For example, a molecule can attach and be subject to forces (drift) and detach and be subject to random motions. The actual stepping process is due to random motions and the asymmetric energy contour (dips, rises, and flats) in which the molecule is immersed determines the most probable direction of motion. The process of detaching requires free energy and its degradation to heat. As the author states, “it can be shown that any molecular machine that operates on an asymmetric energy landscape and incorporates an irreversible, energy-degrading step can extract useful work from the molecular storm”. But because diffusion occurs far more often than drift, loosely coupled motors are not as efficient in moving forward as tightly coupled machines.
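· [CODE SKETCH: A minimal “flashing ratchet” simulation in Python, illustrating the loosely coupled picture above: free diffusion (the molecular storm) alternates with relaxation into the minima of an asymmetric sawtooth energy landscape (the irreversible, free-energy-consuming step). The parameters are arbitrary assumptions; the point is that randomness plus asymmetry plus a reset step yields net directed motion.]

    import numpy as np

    rng = np.random.default_rng(3)

    L = 8.0          # period of the sawtooth potential, nm
    a = 0.2          # asymmetry: the barrier sits only a*L ahead of each minimum
    sigma = 2.0      # rms diffusion distance while the potential is switched off, nm
    cycles = 20_000

    def snap_to_minimum(x):
        """With the potential on, the particle relaxes to the minimum of the basin it is in.
        Minima sit at n*L; the basin of minimum n spans ((n-1+a)*L, (n+a)*L)."""
        n = np.floor(x / L - a) + 1
        return n * L

    x = 0.0
    for _ in range(cycles):
        x += rng.normal(0.0, sigma)      # potential off: pure Brownian motion
        x = snap_to_minimum(x)           # potential on: irreversible relaxation (uses free energy)

    print(f"net displacement after {cycles} cycles: {x:.0f} nm "
          f"({x / (cycles * L):+.3f} periods per cycle)")
    # With a symmetric potential (a = 0.5) the average drift vanishes; the asymmetry
    # rectifies the random kicks into forward motion, at the cost of degrading the
    # free energy needed to switch the potential on and off.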