A Comprehensive Guide to 21st Century Skills.

Our 21st-century world is vastly different from the one our education systems were designed for. In the past two decades, and even just in the past two years, so much has changed. That is usually a problem for education, an institution rooted in tradition and historically slow to keep up with the times. Technology has changed everything, and Covid-19 has hijacked our timelines, propelling change and demanding adaptability at an incredible rate and scale.

In education and in our world in general, the stakes are higher and time is of the essence. Not understanding the sciences and humanities can have farther-reaching effects than ever before. What we do, learn, think, say, post, and tweet on one side of the world now affects people on the other side, too. Our education needs to reflect this new world and prepare our children with the skills needed to succeed within it.

The term “21st century skills” is generally used to refer to certain core competencies such as collaboration, digital literacy, critical thinking, and problem-solving that advocates believe schools need to teach to help students thrive in today’s world. In a broader sense, however, the idea of what learning in the 21st century should look like is open to interpretation—and controversy. (Richard Allington, Professor of Education, University of Tennessee; Early-Reading Expert)

What future-ready skills do our children need in this ever-shrinking, post-pandemic society? Working backwards from the objective, what is the end goal of a 21st-century education? What should it mean to be a high school graduate in the US education system? Although there are a variety of perspectives that span global and US societies, The Brookings Institution has found that, across these many cultures, “there is a common drive for individuals who are literate and numerate, with knowledge of global societies, who understand the scientific principles that underlie how the physical world operates, and who have the competencies and skills to function adaptively and effectively within their immediate environments, globally, and virtually” (2018). This widely shared 21st-century view favours a globally conscious and more diverse and inclusive scope of learning.

The Brookings Institution, known for its nonpartisan, in-depth research, has been closely following this shift to 21st Century Skills (21CS), which are based on the United Nations’ Sustainable Development Goals (2016), and weighing the best ways for educators to handle it. They derive their definition of 21CS from Binkley et al. and Scoular and Care: “21st-century skills are tools that can be universally applied to enhance ways of thinking, learning, working and living in the world. The skills include critical thinking/reasoning, creativity/creative thinking, problem-solving, metacognition, collaboration, communication and global citizenship.” These 21st Century Skills also include the many literacies, such as reading, writing, numeracy, information, and technology, but these are not sharp departures from previous models.

As Brookings Institution’s Esther Care, Helyn Kim, Alvin Vista, and Kate Anderson present in their paper (2018) on “Education System Alignment for 21st Century Skills,” implementing these future-ready skills in an education system poses a few big challenges: it requires a clear understanding of the nature of 21CS, a strong sense of different competency levels, and a solid grasp of how to design appropriate and authentic assessments. “[I]t may be that countries have difficulty in imagining how to move from rhetoric to reality.”

In the 21st century, the way students learn, interact and prepare themselves for the world outside the classroom has changed. Teachers have kept up with these changes and have readied themselves to understand what skills students need to know; however, they may not know how to teach those skills.

During the first season of the Competencies without a Classroom podcast, we interviewed business leaders, decision-makers, hiring managers and executives on what competencies they’re looking for from their young employees, teammates and co-workers in order to succeed in today’s competitive landscape.

What we heard from these leaders was that ‘soft’ skills like resilience, problem-solving, critical thinking and resourcefulness are among the most in-demand traits for young people making the shift from the classroom to the workplace. These are the skills that set one applicant apart from another, one individual contributor from another, and good leaders apart from great ones.

Teachers and educators know this. Educators know that these skills, among others, are the traits that their students will require to thrive in the digital era.

The challenge?

Educators may not know how to instill these skills into their students in the classroom setting.

#21For21 was created to equip teachers with the tools, tactics, and resources they need to empower their students to develop the skills to succeed in our 21st century world.

We conducted 21 interviews with 21 teachers to hear how they implement 21st century skills in their classrooms.

If you had a magic wand and could change one thing about the education system as we know it today, what would you change? This is how some of our guests from season 2 of the Competencies without a Classroom podcast answered that question.

“As a teacher, you have tried to explain how the concepts you are teaching in the classroom will help to carry your students forward as they enter the ‘real world.’ The Competencies without a Classroom podcast provides classroom teachers with access to brilliant minds and hearts in the ‘real world,’ bringing alive the skills and competencies required to be successful in the 21st century.” (Tanya Clift, District Career Facilitator)

  1. Critical Thinking/Reasoning
  2. Creativity/Creative Thinking
  3. Problem Solving
  4. Metacognition
  5. Collaboration
  6. Communication
  7. Global Citizenship

Let’s join the conversation and move this talk “to reality.” Each of the seven 21st Century Skills listed above, derived from The Brookings Institution’s chosen definition, is discussed below along with ideas for implementation in the classroom:

Critical Thinking/Reasoning

We hear about critical thinking skills all the time, but what exactly are they? A single answer isn’t easy to nail down. “After a careful review of the mountainous body of literature,” the University of Louisville settled on Michael Scriven and Richard Paul (2003) for the most comprehensive and concise definition: “Critical thinking is the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.” In other words, critical thinking is following a trail of internal questioning and deep thinking that leads to one’s beliefs. A child who questions the existence of Santa Claus is building those critical thinking skills. An adult who refrains from opening a suspicious email is practicing critical thinking skills and probably saving themselves from being hacked. Scriven and Paul add that critical thinking is based on “universal intellectual values that transcend subject matter divisions,” meaning that every content area can implement and practice this 21st Century Skill.

Rasmussen University conveniently boiled critical thinking down to the following 6 types, each of which can be a step in the process of a student-led research assignment across content areas and throughout K-12 education:

  • Identification: Students can choose or be given a topic based on the unit of study. They then need to identify, based on the clearly outlined goal of the lesson, the problem or central question and the steps to achieve the goal.
  • Research: Students can choose or be given their resource materials and evaluate the sources of information for their assigned topic.
  • Identifying Biases: During the research portion of the assignment, students can identify, verbally or in writing, the biases of their sources and any potential biases of their own.
  • Inference: Making their best guess, students can bridge gaps in information or even apply the newfound information to the central text. A possible question to build inference skills is “Knowing what I know now, what can I best guess about this particular character or event in the text?”
  • Determining Relevance: Students decide which information is relevant to include in the assignment based on the unit as a whole and the explicit objective of the lesson.
  • Curiosity: Students can create open-ended questions for their topic and answer those questions themselves or open them up to the class for discussion.

Creativity/Creative Thinking

Paul Torrance, the “Father of Creativity” and creator of the widely used Torrance Test of Creative Thinking, described the four elements of creativity: Fluency (number of ideas), Flexibility (variety of ideas), Originality (uniqueness of ideas), and Elaboration (details of ideas). Using this approach, competency levels can be assessed relative to peers and progressively paced through the age groups.

Continuing with the research assignment example, this divergent-thinking skill set could be developed through an artistic activity in which students create a visual, possibly within clearly defined parameters of time and/or materials. Each of the four elements can reveal different competency levels. Teaching the research behind creativity, such as findings that caffeine can hinder creative thinking and that the colour blue can help, also gives students methods and processes to engage with during the creative portion of the research assignment.

Problem Solving

According to MIT, “Problem-solving is the process of identifying a problem, developing possible solution paths, and taking the appropriate course of action,” and it is something we all do every day; there isn’t necessarily always a right or wrong answer, but there are many possible answers and some are better or worse than others. These skills are essential not only in our daily lives but also in our careers. If a student misbehaves in class, there are a myriad of ways that the teacher can respond, displaying their own problem-solving skills.

One way to improve students’ problem-solving skills is to lead them to the tools they need to strategize and solve problems. The Florida Comprehensive Assessment Test (FCAT) materials include a review of problem-solving strategies that work in math, such as looking for a pattern, guessing and checking, drawing a diagram, or working backwards. These same strategies can transfer to other content areas and age groups. Sticking with the research assignment model, students can use many of them after the identification portion of the assignment: once they have identified the problem or situation, they need to decide how to approach the assignment (choosing which resource to use and why), divide the labour (using a chart), and plan how to complete it on time (working backwards from the due date).

Metacognition

Metacognition, a term credited to developmental psychologist John Flavell (1979), is thinking about your own thinking. It sounds simple enough, but it is a high-level skill that helps improve every other skill. Paul R. Pintrich from The Ohio State University’s College of Education (2002) claims that “students who know about the different kinds of strategies for learning, thinking, and problem-solving will be more likely to use them.” This “level of awareness above the subject matter,” as Vanderbilt University’s Nancy Chick wrote in her essay on metacognition in teaching and learning, shows that this 21st Century Skill can, again, cross content areas. But can we implement metacognition skills in all grade levels? In “How People Learn: Brain, Mind, Experience, and School,” Bransford, Brown, and Cocking from the National Academies of Sciences, Engineering, and Medicine (2000) assert that “children differ from adult learners in many ways, but there are also surprising commonalities across learners of all ages.”

Chick clues us in on the particulars of metacognition: “a key element is recognizing the limitations of one’s knowledge or ability and then figuring out how to expand that knowledge or extend the ability.” This skill marks the difference between being critically self-aware and falling prey to the Dunning-Kruger effect (2013), a cognitive bias in which people overestimate their own competence. Pintrich argues that these skills need to be taught explicitly: “We are continually surprised at the number of students who come to college having very little metacognitive knowledge; knowledge about different strategies, different cognitive tasks, and particularly, accurate knowledge about themselves.” We teachers can help with that. Reflection is our game.

Reflective journals, pre-assessments, post-assessments, and everything in between should continue as part of good pedagogy, but explicitly teaching students the different strategies is crucial for metacognition. The IRIS Center at Vanderbilt University’s Peabody College advises teachers to share with students the questions involved in planning, monitoring, and modifying their work, teaching students “how to consider the appropriateness of the problem-solving approach, make sure that all procedural steps are implemented, and check for accuracy or to confirm that their answers make sense.” This process of planning, self-monitoring and modifying, and then reflecting can be implemented easily with countless types of activities, especially the aforementioned example of a research assignment. Modelling good questions and scaffolding student learning will be key in teaching metacognition.

Collaboration

“Collaboration occurs when meeting a goal requires more than what any one individual is able to manage alone and needs to pool resources with others” (E. Care, H. Kim, A. Vista, and K. Anderson, 2018). In the workforce and in education, especially since the Covid-19 pandemic, virtual collaboration has been crucial. The authors give examples of the knowledge, skills, and attitudes needed for collaboration that could be shown through successful group work, such as knowing when it is appropriate to listen or to speak, introducing new ideas, compromising, sharing resources and responsibility, having meaningful conversations, and valuing others’ contributions. Modelling these skills explicitly for the students and then practicing them often will instill this highly important future-ready skill.

Communication

Often referred to as one of the “4 C’s of learning” in 21st century US education (creativity, collaboration, critical thinking, and communication), this skill is more than just writing or speaking. Verbal, nonverbal, and technological communication is broad and ever-evolving, but the skills behind them all still involve one common element: empathy. Being able to understand how the audience will respond is a timeless skill with ever-increasing importance. Teachers can directly teach these communication skills for group work and presentations, but they also can teach the concept of empathy, especially through global literature with common themes.

Global Citizenship

Sometime this century, it is likely that being able to communicate in more than one language will be a necessity for success. Time Magazine reported that 21st-century education “is a story about … whether an entire generation of kids will fail to make the grade in the global economy because they can’t think their way through abstract problems, work in teams, distinguish good information from bad, or speak a language other than [their own]” (2006). UN Secretary-General Ban Ki-moon advised us to “be a global citizen. Act with passion and compassion. Help us make this world safer and more sustainable today and for the generations that will follow us. That is our moral responsibility.” Students can learn how to be global citizens by collaborating on projects with students from around the world. Today’s technology has made this possible. Let’s make the most of its potential for global education.

The Brookings Institution claims that “any major reform in an educational philosophy shift must ensure alignment across the areas of curriculum, pedagogy, and assessment” and that “learning progression models are key to ensuring alignment through the education delivery system” (E. Care, H. Kim, A. Vista, and K. Anderson, 2019). Now that we know the 21st Century Skills, we must next design learning progression models and aligned assessments. It is time for action.

UN Secretary-General Ban Ki-moon looks forward with dark optimism, acknowledging our potential but helping us feel the urgency: “Ours can be the first generation to end poverty – and the last generation to address climate change before it is too late.” Becoming a global citizen with these 21st Century Skills in this smaller, digital world isn’t really an option anymore; it’s a vital necessity and responsibility.

How can the theory of relativity be reconciled with quantum mechanics? What is spin? Where does electric charge come from?

Quantum mechanics is a fundamental theory in physics that provides a description of the physical properties of nature at the scale of atoms and subatomic particles.  It is the foundation of all quantum physics including quantum chemistry, quantum field theory, quantum technology, and quantum information science.

Classical physics, the collection of theories that existed before the advent of quantum mechanics, describes many aspects of nature at an ordinary (macroscopic)  scale, but is not sufficient for describing them at small (atomic and  subatomic)  scales. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large (macroscopic) scale.

Quantum mechanics differs from classical physics in that energy, momentum, angular momentum, and other quantities of a bound system are restricted to discrete values (quantization), objects have characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle).

Quantum mechanics arose gradually from theories to explain observations which could not be reconciled with classical physics, such as Max Planck’s solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein’s 1905 paper which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the “old quantum theory”, led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle’s energy, momentum, and other physical properties may yield.

Overview and fundamental concepts

Quantum mechanics allows the calculation of properties and behaviour of physical  systems. It is typically applied to microscopic systems: molecules, atoms and sub-atomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as  Wigner’s friend,  and its application to the universe as a whole remains speculative.  Predictions of quantum mechanics have been verified experimentally to an extremely high degree of  accuracy.

A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a  complex number, known as a probability amplitude. This is known as the  Born rule,  named after physicist  Max Born.  For example, a quantum particle like an  electron  can be described by a   wave function,  which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives a  probability density function  for the position that the electron will be found to have when an experiment is performed to measure it.  This is the best the theory can do; it cannot say for certain where the electron will be found. The  Schrödinger equation  relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.
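
To make the Born rule concrete, the following minimal numerical sketch (not drawn from any particular source; the Gaussian envelope, wave number and grid are arbitrary choices) discretizes a one-dimensional wave function, normalizes it, and reads off position probabilities from the squared modulus of the amplitude.

    import numpy as np

    # Discretize a 1D wave function on a grid (arbitrary width, wave number and grid).
    x = np.linspace(-10.0, 10.0, 2001)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / 2) * np.exp(1j * 1.5 * x)   # Gaussian envelope times a plane wave

    # Normalize so that the total probability integrates to 1.
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    # Born rule: the position probability density is |psi(x)|^2.
    density = np.abs(psi)**2
    print("total probability:", np.sum(density) * dx)              # ~1.0
    print("P(-1 < x < 1):    ", np.sum(density[np.abs(x) < 1]) * dx)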

One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between different measurable quantities. The most famous form of  this  uncertainty principle  says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its  momentum.

Another consequence of the mathematical rules of quantum mechanics is the phenomenon of  quantum interference,  which is often illustrated with the  double-slit experiment.  In the basic version of this experiment, a  coherent light source,  such as a  laser  beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate.  The wave nature of light causes the light waves passing through the two slits to   interfere,  producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles.  However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected  photon  passes through one slit (as would a classical particle), and not through both slits (as would a wave).  However,  such experiments  demonstrate that particles do not form the interference pattern if one detects which slit they pass through. Other atomic-scale entities, such as  electrons,  are found to exhibit the same behavior when fired towards a double slit. This behavior is known as  wave–particle duality.
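
The role of adding amplitudes rather than probabilities can be illustrated with a toy far-field calculation (a sketch only; the slit separation and wavelength below are made-up values, and diffraction-envelope effects are ignored): summing the two path amplitudes before squaring produces fringes, while summing the two path probabilities, as effectively happens when the path is known, does not.

    import numpy as np

    # Toy far-field double-slit model; slit separation d and wavelength lam are made up.
    lam, d = 0.5e-6, 5e-6                           # metres
    theta = np.linspace(-0.2, 0.2, 1001)            # observation angle on the screen
    phase = 2 * np.pi * d * np.sin(theta) / lam     # relative phase between the two paths

    amp1 = np.ones_like(theta, dtype=complex)       # amplitude via slit 1
    amp2 = np.exp(1j * phase)                       # amplitude via slit 2

    interference = np.abs(amp1 + amp2)**2           # amplitudes add: bright/dark fringes
    which_path = np.abs(amp1)**2 + np.abs(amp2)**2  # probabilities add: no fringes

    print("fringe contrast with interference:", interference.max() - interference.min())  # ~4
    print("fringe contrast without:          ", which_path.max() - which_path.min())      # 0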

Another counter-intuitive phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential. In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy and the tunnel diode.

When quantum systems interact, the result can be the creation of  quantum entanglement:  their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement “…the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought”. Quantum entanglement enables the counter-intuitive properties of  quantum pseudo-telepathy, and can be a valuable resource in communication protocols, such as  quantum key distribution  and  superdense coding. Contrary to popular misconception, entanglement does not allow sending signals  faster than light,  as demonstrated by the  no-communication theorem.

Another possibility opened by entanglement is testing for “hidden variables”, hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory can provide. A collection of results, most significantly Bell’s theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell’s theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed, using entangled particles, and they have shown results incompatible with the constraints imposed by local hidden variables.

It is not possible to present these concepts in more than a superficial way without introducing the actual mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects. Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples.

Mathematical formulation

In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector $\psi$ belonging to a (separable) complex Hilbert space $\mathcal{H}$. This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys $\langle \psi ,\psi \rangle = 1$, and it is well-defined up to a complex number of modulus 1 (the global phase), that is, $\psi$ and $e^{i\alpha}\psi$ represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions $L^{2}(\mathbb{C})$, while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors $\mathbb{C}^{2}$ with the usual inner product.

Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue $\lambda$ is non-degenerate and the probability is given by $|\langle \lambda ,\psi \rangle |^{2}$, where $\lambda$ is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by $\langle \psi ,P_{\lambda}\psi \rangle$, where $P_{\lambda}$ is the projector onto its associated eigenspace. In the continuous case, these formulas give instead the probability density.

After the measurement, if result $\lambda$ was obtained, the quantum state is postulated to collapse to $\lambda$, in the non-degenerate case, or to $P_{\lambda}\psi / \sqrt{\langle \psi ,P_{\lambda}\psi \rangle}$, in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a “measurement” has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of “wave function collapse” (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.
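
In a finite-dimensional toy model the measurement postulates can be spelled out directly (a minimal sketch; the 3×3 Hermitian matrix standing in for an observable and the state are generated at random): diagonalize the observable, apply the Born rule to get outcome probabilities, and project onto the corresponding eigenvector to model collapse.

    import numpy as np

    rng = np.random.default_rng(0)

    # An arbitrary 3x3 Hermitian matrix standing in for an observable, and a random state.
    M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    A = (M + M.conj().T) / 2
    psi = rng.normal(size=3) + 1j * rng.normal(size=3)
    psi /= np.linalg.norm(psi)

    # Eigenvalues are the possible outcomes; the Born rule gives their probabilities.
    outcomes, eigvecs = np.linalg.eigh(A)
    probs = np.abs(eigvecs.conj().T @ psi)**2
    print("outcomes:     ", outcomes)
    print("probabilities:", probs, " sum =", probs.sum())

    # Simulate one measurement and collapse onto the corresponding eigenvector.
    k = rng.choice(len(outcomes), p=probs / probs.sum())
    psi_after = eigvecs[:, k]
    print("observed outcome:", outcomes[k])
    print("post-measurement state:", np.round(psi_after, 3))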

The time evolution of a quantum state is described by the Schrödinger equation:

$$i\hbar \frac{d}{dt}\psi(t) = H\psi(t).$$

Here $H$ denotes the Hamiltonian, the observable corresponding to the total energy of the system, and $\hbar$ is the reduced Planck constant. The constant $i\hbar$ is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle.

The solution of this differential equation is given by

$$\psi(t) = e^{-iHt/\hbar}\psi(0).$$

The operator $U(t) = e^{-iHt/\hbar}$ is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state $\psi(0)$ – it makes a definite prediction of what the quantum state $\psi(t)$ will be at any later time.
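
A quick numerical check of these properties (a sketch with an arbitrary Hermitian matrix as the Hamiltonian and units in which $\hbar = 1$): exponentiating $-iHt/\hbar$ yields a unitary matrix, and applying it to an initial state deterministically produces the later state while preserving its norm.

    import numpy as np
    from scipy.linalg import expm

    hbar = 1.0
    rng = np.random.default_rng(1)

    # Arbitrary Hermitian Hamiltonian and normalized initial state.
    M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    H = (M + M.conj().T) / 2
    psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi0 /= np.linalg.norm(psi0)

    t = 2.0
    U = expm(-1j * H * t / hbar)                 # time-evolution operator U(t)

    print("U is unitary:", np.allclose(U.conj().T @ U, np.eye(4)))
    psi_t = U @ psi0                             # definite prediction of the later state
    print("norm preserved:", np.isclose(np.linalg.norm(psi_t), 1.0))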

Fig. 1: Probability densities corresponding to the wave functions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, …) and angular momenta (increasing across from left to right: s, p, d, …).

 
Denser areas correspond to higher probability density in a position measurement. Such wave functions are directly comparable to Chladni’s figures of acoustic modes of vibration in classical physics and are modes of oscillation as well, possessing a sharp energy and thus a definite frequency. The angular momentum and energy are quantized and take only discrete values like those shown (as is the case for resonant frequencies in acoustics).

Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian. Many systems that are treated dynamically in classical mechanics are described by such “static” wave functions. For example, a single  electron  in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the  atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an  s orbital (Fig. 1).

Analytic solutions of the Schrödinger equation are known for  very few relatively simple model Hamiltonians  including the  quantum harmonic oscillator, the  particle in a box,  the  dihydrogen cation, and the  hydrogen atom. Even the  helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment.

However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy. Another method is called “semi-classical equation of motion”, which applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion. This approach is particularly important in the field of  quantum chaos.

Uncertainty principle

One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum. Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator $X$ and momentum operator $P$ do not commute, but rather satisfy the canonical commutation relation:

$$[X,P] = i\hbar.$$

Given a quantum state, the Born rule lets us compute expectation values for both $X$ and $P$, and moreover for powers of them. Defining the uncertainty for an observable by a standard deviation, we have

$$\sigma_{X} = \sqrt{\langle X^{2}\rangle - \langle X\rangle^{2}},$$

and likewise for the momentum:

$$\sigma_{P} = \sqrt{\langle P^{2}\rangle - \langle P\rangle^{2}}.$$

The uncertainty principle states that

$$\sigma_{X}\sigma_{P} \geq \frac{\hbar}{2}.$$

Either standard deviation can in principle be made arbitrarily small, but not both simultaneously. This inequality generalizes to arbitrary pairs of self-adjoint operators $A$ and $B$. The commutator of these two operators is

$$[A,B] = AB - BA,$$

and this provides the lower bound on the product of standard deviations:

$$\sigma_{A}\sigma_{B} \geq \frac{1}{2}\left|\langle [A,B]\rangle\right|.$$

Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an $i/\hbar$ factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum $p$ is replaced by $-i\hbar \frac{\partial}{\partial x}$, and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times $-\hbar^{2}$.[19]
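
The generalized inequality can be verified numerically (a sketch using randomly generated Hermitian matrices rather than position and momentum): for any state, the product of the two standard deviations is never smaller than half the modulus of the expectation value of the commutator.

    import numpy as np

    rng = np.random.default_rng(2)

    def random_hermitian(n):
        M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        return (M + M.conj().T) / 2

    def stddev(op, psi):
        # sigma = sqrt(<op^2> - <op>^2) for a normalized state psi
        mean = np.real(psi.conj() @ op @ psi)
        mean_sq = np.real(psi.conj() @ op @ op @ psi)
        return np.sqrt(mean_sq - mean**2)

    A, B = random_hermitian(5), random_hermitian(5)
    psi = rng.normal(size=5) + 1j * rng.normal(size=5)
    psi /= np.linalg.norm(psi)

    lhs = stddev(A, psi) * stddev(B, psi)
    rhs = 0.5 * abs(psi.conj() @ (A @ B - B @ A) @ psi)
    print(lhs, ">=", rhs, ":", lhs >= rhs)       # the generalized uncertainty bound holds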

Composite systems and entanglement

When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. For example, let A and B be two quantum systems, with Hilbert spaces $\mathcal{H}_{A}$ and $\mathcal{H}_{B}$, respectively. The Hilbert space of the composite system is then

$$\mathcal{H}_{AB} = \mathcal{H}_{A}\otimes \mathcal{H}_{B}.$$

If the state for the first system is the vector $\psi_{A}$ and the state for the second system is $\psi_{B}$, then the state of the composite system is

$$\psi_{A}\otimes \psi_{B}.$$

Not all states in the joint Hilbert space $\mathcal{H}_{AB}$ can be written in this form, however, because the superposition principle implies that linear combinations of these “separable” or “product states” are also valid. For example, if $\psi_{A}$ and $\phi_{A}$ are both possible states for system $A$, and likewise $\psi_{B}$ and $\phi_{B}$ are both possible states for system $B$, then

$$\frac{1}{\sqrt{2}}\left(\psi_{A}\otimes \psi_{B} + \phi_{A}\otimes \phi_{B}\right)$$

is a valid joint state that is not separable. States that are not separable are called entangled.[22][23]
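
Whether a two-qubit state of this form is separable can be tested directly (a small sketch; the basis vectors chosen for each subsystem are arbitrary): reshaping the joint state vector into a 2×2 coefficient matrix, a product state gives a rank-1 matrix while the superposition above gives rank 2, i.e. an entangled state.

    import numpy as np

    # Concrete two-qubit choice: orthonormal basis states |0> and |1> for each subsystem.
    psi_A, phi_A = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    psi_B, phi_B = np.array([1.0, 0.0]), np.array([0.0, 1.0])

    product = np.kron(psi_A, psi_B)                                    # a separable state
    superposition = (np.kron(psi_A, psi_B) + np.kron(phi_A, phi_B)) / np.sqrt(2)

    def schmidt_rank(state):
        # Reshape the length-4 joint vector into a 2x2 coefficient matrix; its rank
        # counts how many product terms are needed to write the state.
        return np.linalg.matrix_rank(state.reshape(2, 2))

    print("product state rank: ", schmidt_rank(product))        # 1 -> separable
    print("superposition rank: ", schmidt_rank(superposition))  # 2 -> entangled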

If the state for a composite system is entangled, it is impossible to describe either component system A or system B by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory. As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic.

Equivalence between formulations

There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the “transformation theory” proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger).[26] An alternative formulation of quantum mechanics is Feynman‘s path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.

Symmetries and conservation laws

The Hamiltonian $H$ is known as the generator of time evolution, since it defines a unitary time-evolution operator $U(t) = e^{-iHt/\hbar}$ for each value of $t$. From this relation between $U(t)$ and $H$, it follows that any observable $A$ that commutes with $H$ will be conserved: its expectation value will not change over time. This statement generalizes, as mathematically, any Hermitian operator $A$ can generate a family of unitary operators parameterized by a variable $t$. Under the evolution generated by $A$, any observable $B$ that commutes with $A$ will be conserved. Moreover, if $B$ is conserved by evolution under $A$, then $A$ is conserved under the evolution generated by $B$. This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law.
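
This conservation statement lends itself to a simple numerical check (a sketch with $\hbar = 1$, a random Hamiltonian, and $H^{2}$ chosen as an observable that trivially commutes with $H$): the expectation value of the commuting observable stays fixed under time evolution, while that of a generic observable drifts.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)

    def random_hermitian(n):
        M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        return (M + M.conj().T) / 2

    H = random_hermitian(4)
    A = H @ H                          # commutes with H, so <A> should be conserved
    B = random_hermitian(4)            # a generic observable, generally not conserved
    psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi0 /= np.linalg.norm(psi0)

    def expectation(op, psi):
        return np.real(psi.conj() @ op @ psi)

    for t in (0.0, 1.0, 2.0):
        psi_t = expm(-1j * H * t) @ psi0          # hbar = 1
        print(f"t={t}: <A>={expectation(A, psi_t):.6f}  <B>={expectation(B, psi_t):.6f}")
    # <A> stays the same at every time; <B> generally drifts.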

Examples

Free particle

Position space probability density of a Gaussian wave packet moving in one dimension in free space.

The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy:

$$H = \frac{1}{2m}P^{2} = -\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}.$$

The general solution of the Schrödinger equation is given by

$$\psi(x,t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{\psi}(k,0)\,e^{i(kx - \frac{\hbar k^{2}}{2m}t)}\,\mathrm{d}k,$$

which is a superposition of all possible plane waves $e^{i(kx - \frac{\hbar k^{2}}{2m}t)}$, which are eigenstates of the momentum operator with momentum $p = \hbar k$. The coefficients of the superposition are $\hat{\psi}(k,0)$, which is the Fourier transform of the initial quantum state $\psi(x,0)$.

It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet:

$$\psi(x,0) = \frac{1}{\sqrt[4]{\pi a}}\,e^{-\frac{x^{2}}{2a}}$$

which has Fourier transform, and therefore momentum distribution

$$\hat{\psi}(k,0) = \sqrt[4]{\frac{a}{\pi}}\,e^{-\frac{ak^{2}}{2}}.$$

We see that as we make $a$ smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making $a$ larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle.
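
The same tradeoff can be computed explicitly for the Gaussian packet above (a sketch with $\hbar = 1$ and an arbitrary grid and set of widths): the position spread grows like $\sqrt{a}$, the momentum spread shrinks like $1/\sqrt{a}$, and their product stays at the minimum value $\hbar/2$.

    import numpy as np

    hbar = 1.0
    x = np.linspace(-40.0, 40.0, 8001)
    dx = x[1] - x[0]

    for a in (0.5, 1.0, 2.0):                                  # arbitrary packet widths
        psi = (np.pi * a) ** -0.25 * np.exp(-x**2 / (2 * a))   # the Gaussian packet above
        dpsi = np.gradient(psi, dx)

        spread_x = np.sqrt(np.sum(x**2 * psi**2) * dx)         # <x> = 0 by symmetry
        spread_p = hbar * np.sqrt(np.sum(dpsi**2) * dx)        # <p> = 0 for a real psi

        print(f"a={a}: dx={spread_x:.4f}  dp={spread_p:.4f}  product={spread_x * spread_p:.4f}")
    # The product stays at hbar/2 = 0.5 while the individual spreads trade off.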

As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant.

Particle in a box

1-dimensional potential energy box (or infinite potential well)

The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region. For the one-dimensional case in the $x$ direction, the time-independent Schrödinger equation may be written

$$-\frac{\hbar^{2}}{2m}\frac{d^{2}\psi}{dx^{2}} = E\psi.$$

With the differential operator defined by

$$\hat{p}_{x} = -i\hbar\frac{d}{dx}$$

the previous equation is evocative of the classic kinetic energy analogue,

$$\frac{1}{2m}\hat{p}_{x}^{2} = E,$$

with state $\psi$ in this case having energy $E$ coincident with the kinetic energy of the particle.

The general solutions of the Schrödinger equation for the particle in a box are

$$\psi(x) = Ae^{ikx} + Be^{-ikx}\qquad\qquad E = \frac{\hbar^{2}k^{2}}{2m}$$

or, from Euler’s formula,

$$\psi(x) = C\sin(kx) + D\cos(kx).$$

The infinite potential walls of the box determine the values of $C$, $D$, and $k$ at $x = 0$ and $x = L$ where $\psi$ must be zero. Thus, at $x = 0$,

$$\psi(0) = 0 = C\sin(0) + D\cos(0) = D$$

and $D = 0$. At $x = L$,

$$\psi(L) = 0 = C\sin(kL),$$

in which $C$ cannot be zero as this would conflict with the postulate that $\psi$ has norm 1. Therefore, since $\sin(kL) = 0$, $kL$ must be an integer multiple of $\pi$,

$$k = \frac{n\pi}{L}\qquad\qquad n = 1,2,3,\ldots.$$

This constraint on $k$ implies a constraint on the energy levels, yielding

$$E_{n} = \frac{\hbar^{2}\pi^{2}n^{2}}{2mL^{2}} = \frac{n^{2}h^{2}}{8mL^{2}}.$$
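
These quantized energies can be cross-checked numerically (a sketch with $\hbar = m = 1$ and an arbitrarily chosen box length): discretizing the second derivative on a grid, with the wave function pinned to zero at the walls, reproduces $E_{n} = n^{2}\pi^{2}\hbar^{2}/(2mL^{2})$ for the low-lying levels.

    import numpy as np

    hbar = m = 1.0
    L = 1.0                            # box length (arbitrary choice)
    N = 500                            # number of interior grid points
    x = np.linspace(0.0, L, N + 2)[1:-1]
    h = x[1] - x[0]

    # Finite-difference second derivative with psi = 0 enforced at both walls.
    lap = (np.diag(np.full(N, -2.0))
           + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1)) / h**2
    H = -hbar**2 / (2 * m) * lap

    E_numeric = np.linalg.eigvalsh(H)[:4]
    E_exact = np.array([(n * np.pi * hbar)**2 / (2 * m * L**2) for n in range(1, 5)])
    print("numeric:", E_numeric)
    print("exact:  ", E_exact)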

A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem, as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions, as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.

Harmonic oscillator

Some trajectories of a harmonic oscillator (i.e. a ball attached to a spring) in classical mechanics (A-B) and quantum mechanics (C-H). In quantum mechanics, the position of the ball is represented by a wave (called the wave function), with the real part shown in blue and the imaginary part shown in red. Some of the trajectories (such as C, D, E, and F) are standing waves (or “stationary states“). Each standing-wave frequency is proportional to a possible energy level of the oscillator. This “energy quantization” does not occur in classical physics, where the oscillator can have any energy.

As in the classical case, the potential for the quantum harmonic oscillator is given by

$$V(x) = \frac{1}{2}m\omega^{2}x^{2}.$$

This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant “ladder method” first proposed by Paul Dirac. The eigenstates are given by

$$\psi_{n}(x) = \sqrt{\frac{1}{2^{n}\,n!}}\cdot\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\cdot e^{-\frac{m\omega x^{2}}{2\hbar}}\cdot H_{n}\!\left(\sqrt{\frac{m\omega}{\hbar}}\,x\right),\qquad n = 0,1,2,\ldots,$$

where $H_{n}$ are the Hermite polynomials

$$H_{n}(x) = (-1)^{n}e^{x^{2}}\frac{d^{n}}{dx^{n}}\left(e^{-x^{2}}\right),$$

and the corresponding energy levels are

$$E_{n} = \hbar\omega\left(n + \frac{1}{2}\right).$$

This is another example illustrating the discretization of energy for bound states.
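
The same spectrum can be reproduced in a truncated number basis (a sketch with $\hbar = \omega = 1$; the truncation size is an arbitrary choice): building $H = \hbar\omega(a^{\dagger}a + \tfrac{1}{2})$ from the standard annihilation operator gives the evenly spaced levels directly.

    import numpy as np

    hbar = omega = 1.0
    N = 12                                          # truncation of the number basis

    # Annihilation operator in the |0>, ..., |N-1> basis: a|n> = sqrt(n) |n-1>.
    a = np.diag(np.sqrt(np.arange(1, N)), 1)
    adag = a.T

    H = hbar * omega * (adag @ a + 0.5 * np.eye(N))
    print(np.linalg.eigvalsh(H)[:6])                # 0.5, 1.5, 2.5, ... = hbar*omega*(n + 1/2)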

Mach–Zehnder interferometer

Schematic of a Mach–Zehnder interferometer.

The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement. 

We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the “lower” path which starts from the left, goes straight through both beam splitters, and ends at the top, and the “upper” path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector $\psi \in \mathbb{C}^{2}$ that is a superposition of the “lower” path $\psi_{l} = \begin{pmatrix}1\\0\end{pmatrix}$ and the “upper” path $\psi_{u} = \begin{pmatrix}0\\1\end{pmatrix}$, that is, $\psi = \alpha\psi_{l} + \beta\psi_{u}$ for complex $\alpha ,\beta$. In order to respect the postulate that $\langle \psi ,\psi \rangle = 1$ we require that $|\alpha|^{2} + |\beta|^{2} = 1$.

Both beam splitters are modelled as the unitary matrix $B = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & i\\ i & 1\end{pmatrix}$, which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of $1/\sqrt{2}$, or be reflected to the other path with a probability amplitude of $i/\sqrt{2}$. The phase shifter on the upper arm is modelled as the unitary matrix $P = \begin{pmatrix}1 & 0\\ 0 & e^{i\Delta\Phi}\end{pmatrix}$, which means that if the photon is on the “upper” path it will gain a relative phase of $\Delta\Phi$, and it will stay unchanged if it is in the lower path.

A photon that enters the interferometer from the left will then be acted upon with a beam splitter $B$, a phase shifter $P$, and another beam splitter $B$, and so end up in the state

$$BPB\psi_{l} = ie^{i\Delta\Phi /2}\begin{pmatrix}-\sin(\Delta\Phi /2)\\ \cos(\Delta\Phi /2)\end{pmatrix}$$

and the probabilities that it will be detected at the right or at the top are given respectively by

$$p(u) = |\langle \psi_{u}, BPB\psi_{l}\rangle|^{2} = \cos^{2}\frac{\Delta\Phi}{2},$$
$$p(l) = |\langle \psi_{l}, BPB\psi_{l}\rangle|^{2} = \sin^{2}\frac{\Delta\Phi}{2}.$$

One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities.
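
The calculation above reduces to 2×2 linear algebra and is easy to reproduce (a sketch; the phase value is arbitrary): applying beam splitter, phase shifter and beam splitter to the “lower” input state recovers $p(u) = \cos^{2}(\Delta\Phi/2)$ and $p(l) = \sin^{2}(\Delta\Phi/2)$.

    import numpy as np

    dphi = 0.7                                          # relative phase (arbitrary value)
    B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)       # 50/50 beam splitter
    P = np.array([[1, 0], [0, np.exp(1j * dphi)]])      # phase shifter on the upper arm

    psi_l = np.array([1, 0])                            # photon enters on the "lower" path
    psi_u = np.array([0, 1])

    out = B @ P @ B @ psi_l
    p_u = abs(np.vdot(psi_u, out))**2
    p_l = abs(np.vdot(psi_l, out))**2
    print(p_u, np.cos(dphi / 2)**2)                     # these two numbers agree
    print(p_l, np.sin(dphi / 2)**2)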

It is interesting to consider what would happen if the photon were definitely in either the “lower” or “upper” paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases there will be no interference between the paths anymore, and the probabilities are given by $p(u) = p(l) = 1/2$, independently of the phase $\Delta\Phi$. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths.

Applications

Quantum mechanics has had enormous success in explaining many of the features of our universe, with regards to small-scale and discrete quantities and interactions which cannot be explained by classical methods.[note 4] Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics.

In many aspects modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, and medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.

Relation to other scientific theories

Classical mechanics

The rules of quantum mechanics assert that the state space of a system is a  Hilbert space  and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers.  One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.

When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.

Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos  studies the relationship between classical and quantum descriptions in these systems.

Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement  becomes simply classical correlations. Quantum coherence is not typically evident at macroscopic scales, except maybe at temperatures approaching absolute zero at which quantum behavior may manifest macroscopically.

Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics. 

Special relativity and electrodynamics

Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the  electromagnetic  interaction.  Quantum electrodynamics  is, along with  general relativity, one of the most accurate physical theories ever devised.

The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical $-e^{2}/(4\pi\epsilon_{0}r)$ Coulomb potential. This “semi-classical” approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.

Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg.

Relation to general relativity

Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of  quantum gravity  is an important issue in  physical cosmology  and the search by physicists for an elegant “Theory of Everything” (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon.

One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force. Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric “woven” of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately $1.616\times 10^{-35}$ m, and so lengths shorter than the Planck length are not physically meaningful in LQG.

Philosophical implications

Unsolved problem in physics: Is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the “superposition of states” and “wave function collapse“, give rise to the reality we perceive?

Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, “I think I can safely say that nobody understands quantum mechanics.”  According to Steven Weinberg, “There is now in my opinion no entirely satisfactory interpretation of quantum mechanics.”

The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the “Copenhagen interpretation”. According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of “causality”. Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations remain popular in the 21st century.

Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein’s long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR’s principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to obtain these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism.

Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem.

Everett’s many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.  This is a consequence of removing the axiom of the collapse of the wave packet.  All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical  quantum superposition.  While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we don’t observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful.

Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later.

History

 

Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light.

During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible units – the word “atom” deriving from the Greek for “uncuttable” – the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard was Michael Faraday’s 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday’s work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons.

The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete “quanta” (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation.  The word quantum derives from the Latin, meaning “how great” or “how much”. According to Planck, quantities of energy could be thought of as divided into “elements” whose size (E) would be proportional to their frequency (ν):

{\displaystyle E=h\nu },

where h is Planck’s constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck’s quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck’s ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen.[69] Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency. In his paper “On the Quantum Theory of Radiation,” Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation, which became the basis of the laser.
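
As a rough numerical sketch of the Planck relation above (not part of the original text; the constants are standard rounded values and the 500 nm wavelength is an arbitrary illustrative choice), the energy carried by a single quantum of visible light can be computed directly:

    # Sketch: energy of one photon from the Planck relation E = h*nu.
    h = 6.626e-34          # Planck constant, J*s (rounded)
    c = 2.998e8            # speed of light, m/s (rounded)
    eV = 1.602e-19         # joules per electron-volt

    wavelength = 500e-9    # 500 nm, an illustrative green-light wavelength
    nu = c / wavelength    # frequency in Hz
    E = h * nu             # energy of one quantum, in joules

    print(f"frequency = {nu:.3e} Hz")
    print(f"E = h*nu  = {E:.3e} J  (about {E / eV:.2f} eV)")

For 500 nm light this gives roughly 2.5 eV per photon, which is the energy scale relevant to the photoelectric effect described above.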

The 1927 Solvay Conference in Brussels was the fifth world physics conference.

This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye’s work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen’s proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld’s extension of the Bohr model to include special-relativistic effects.

In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie’s approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger’s wave function in July 1926. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.

By 1930 quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the ‘observer’. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors and superfluids.


Explanatory notes

  1. ^ See, for example, Precision tests of QED. The relativistic refinement of quantum mechanics known as quantum electrodynamics (QED) has been shown to agree with experiment to within 1 part in 10^8 for some atomic properties.
  2. ^ Physicist John C. Baez cautions, “there’s no way to understand the interpretation of quantum mechanics without also being able to solve quantum mechanics problems – to understand the theory, you need to be able to use it (and vice versa)”.[15] Carl Sagan outlined the “mathematical underpinning” of quantum mechanics and wrote, “For most physics students, this might occupy them from, say, third grade to early graduate school – roughly 15 years. […] The job of the popularizer of science, trying to get across some idea of quantum mechanics to a general audience that has not gone through these initiation rites, is daunting. Indeed, there are no successful popularizations of quantum mechanics in my opinion – partly for this reason.”[16]
  3. ^ A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise, a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle’s Hilbert space. Physicists sometimes introduce fictitious “bases” for a Hilbert space comprising elements outside that space. These are invented for calculational convenience and do not represent physical states.[19]: 100–105 
  4. ^ See, for example, the Feynman Lectures on Physics for some of the technological applications which use quantum mechanics, e.g., transistors (vol III, pp. 14–11 ff), integrated circuits, which are follow-on technology in solid-state physics (vol II, pp. 8–6), and lasers (vol III, pp. 9–13).
  5. ^ see macroscopic quantum phenomena, Bose–Einstein condensate, and Quantum machine
  6. ^ The published form of the EPR argument was due to Podolsky, and Einstein himself was not satisfied with it. In his own publications and correspondence, Einstein used a different argument to insist that quantum mechanics is an incomplete theory.

Video simulation of the merger of GW150914, showing the spacetime distortion from gravity as the black holes orbit and merge

The theory of relativity usually encompasses two interrelated theories by Albert Einstein: special relativity and general relativity, proposed and published in 1905 and 1915, respectively.[1] Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to other forces of nature.[2] It applies to the cosmological and astrophysical realm, including astronomy.[3]

The theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton.[3][4][5] It introduced concepts including 4-dimensional spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves.

Development and acceptance

Albert Einstein published the theory of special relativity in 1905, building on many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. Max Planck, Hermann Minkowski and others did subsequent work.

Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916.[3]

The term “theory of relativity” was based on the expression “relative theory” (German: Relativtheorie) used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression “theory of relativity” (German: Relativitätstheorie).[6][7]

By the 1920s, the physics community understood and accepted special relativity.[8] It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics, and quantum mechanics.

By comparison, general relativity did not appear to be as useful, beyond making minor corrections to predictions of Newtonian gravitation theory.[3] It seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. Its mathematics seemed difficult and fully understandable only by a small number of people. Around 1960, general relativity became central to physics and astronomy. New mathematical techniques to apply to general relativity streamlined calculations and made its concepts more easily visualized. As astronomical phenomena were discovered, such as quasars (1963), the 3-kelvin microwave background radiation (1965), pulsars (1967), and the first black hole candidates (1981),[3] the theory explained their attributes, and measurement of them further confirmed the theory.

Special relativity

Special relativity is a theory of the structure of spacetime. It was introduced in Einstein’s 1905 paper “On the Electrodynamics of Moving Bodies” (for the contributions of many other physicists see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics:

  1. The laws of physics are the same for all observers in any inertial frame of reference relative to one another (principle of relativity).
  2. The speed of light in a vacuum is the same for all observers, regardless of their relative motion or of the motion of the light source.

The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are:

  • Relativity of simultaneity: Two events, simultaneous for one observer, may not be simultaneous for another observer if the observers are in relative motion.
  • Time dilation: Moving clocks are measured to tick more slowly than an observer’s “stationary” clock.
  • Length contraction: Objects are measured to be shortened in the direction that they are moving with respect to the observer.
  • Maximum speed is finite: No physical object, message or field line can travel faster than the speed of light in a vacuum.
    • The effect of gravity can only travel through space at the speed of light, not faster or instantaneously.
  • Mass–energy equivalence: E = mc^2; energy and mass are equivalent and transmutable.
  • Relativistic mass, an idea used by some researchers.[9]

The defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations. (See Maxwell’s equations of electromagnetism.)
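
The consequences listed above all follow from the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2) that appears in the Lorentz transformations. A minimal numerical sketch (not from the original text; the speed 0.8c and the unit quantities are arbitrary illustrative choices):

    import math

    # Sketch: Lorentz factor, time dilation and length contraction at v = 0.8c.
    c = 299_792_458.0                 # speed of light, m/s
    v = 0.8 * c                       # illustrative speed

    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    proper_time = 1.0                 # 1 s elapsed on the moving clock
    proper_length = 1.0               # 1 m rod, measured in its rest frame

    print(f"gamma              = {gamma:.4f}")                  # ~1.6667
    print(f"time in lab frame  = {gamma * proper_time:.4f} s")  # moving clock runs slow
    print(f"contracted length  = {proper_length / gamma:.4f} m")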

General relativity

General relativity is a theory of gravitation developed by Einstein in the years 1907–1915. The development of general relativity began with the equivalence principle, under which the states of accelerated motion and being at rest in a gravitational field (for example, when standing on the surface of the Earth) are physically identical. The upshot of this is that free fall is inertial motion: an object in free fall is falling because that is how objects move when there is no force being exerted on them, instead of this being due to the force of gravity as is the case in classical mechanics. This is incompatible with classical mechanics and special relativity because in those theories inertially moving objects cannot accelerate with respect to each other, but objects in free fall do so. To resolve this difficulty Einstein first proposed that spacetime is curved. In 1915, he devised the Einstein field equations which relate the curvature of spacetime with the mass, energy, and any momentum within it.

Some of the consequences of general relativity are:

  • Gravitational time dilation: clocks run more slowly in deeper gravitational wells.
  • Precession of orbits: planetary orbits precess in a way not expected in Newton’s theory of gravity (observed, for example, in the orbit of Mercury and in binary pulsars).
  • Deflection of light: rays of light bend in the presence of a gravitational field (gravitational lensing).
  • Gravitational redshift: light loses energy as it climbs out of a gravitational field.
  • Gravitational waves: ripples in the curvature of spacetime that propagate at the speed of light.

Technically, general relativity is a theory of gravitation whose defining feature is its use of the Einstein field equations. The solutions of the field equations are metric tensors which define the topology of the spacetime and how objects move inertially.

Experimental evidence

Einstein stated that the theory of relativity belongs to a class of “principle-theories”. As such, it employs an analytic method, which means that the elements of this theory are not based on hypothesis but on empirical discovery. By observing natural processes, we understand their general characteristics, devise mathematical models to describe what we observed, and by analytical means we deduce the necessary conditions that have to be satisfied. Measurement of separate events must satisfy these conditions and match the theory’s conclusions.[2]

Tests of special relativity

 
A diagram of the Michelson–Morley experiment

Relativity is a falsifiable theory: It makes predictions that can be tested by experiment. In the case of special relativity, these include the principle of relativity, the constancy of the speed of light, and time dilation.[11] The predictions of special relativity have been confirmed in numerous tests since Einstein published his paper in 1905, but three experiments conducted between 1881 and 1938 were critical to its validation. These are the Michelson–Morley experiment, the Kennedy–Thorndike experiment, and the Ives–Stilwell experiment. Einstein derived the Lorentz transformations from first principles in 1905, but these three experiments allow the transformations to be induced from experimental evidence.

Maxwell’s equations—the foundation of classical electromagnetism—describe light as a wave that moves with a characteristic velocity. The modern view is that light needs no medium of transmission, but Maxwell and his contemporaries were convinced that light waves were propagated in a medium, analogous to sound propagating in air, and ripples propagating on the surface of a pond. This hypothetical medium was called the luminiferous aether, at rest relative to the “fixed stars” and through which the Earth moves. Fresnel’s partial ether dragging hypothesis ruled out the measurement of first-order (v/c) effects, and although observations of second-order effects (v^2/c^2) were possible in principle, Maxwell thought they were too small to be detected with then-current technology.[12][13]

The Michelson–Morley experiment was designed to detect second-order effects of the “aether wind”—the motion of the aether relative to the earth. Michelson designed an instrument called the Michelson interferometer to accomplish this. The apparatus was sufficiently accurate to detect the expected effects, but he obtained a null result when the first experiment was conducted in 1881,[14] and again in 1887.[15] Although the failure to detect an aether wind was a disappointment, the results were accepted by the scientific community.[13] In an attempt to salvage the aether paradigm, FitzGerald and Lorentz independently created an ad hoc hypothesis in which the length of material bodies changes according to their motion through the aether.[16] This was the origin of FitzGerald–Lorentz contraction, and their hypothesis had no theoretical basis. The interpretation of the null result of the Michelson–Morley experiment is that the round-trip travel time for light is isotropic (independent of direction), but the result alone is not enough to discount the theory of the aether or validate the predictions of special relativity.[17][18]

 
The Kennedy–Thorndike experiment shown with interference fringes.

While the Michelson–Morley experiment showed that the velocity of light is isotropic, it said nothing about how the magnitude of the velocity changed (if at all) in different inertial frames. The Kennedy–Thorndike experiment was designed to do that, and was first performed in 1932 by Roy Kennedy and Edward Thorndike.[19] They obtained a null result, and concluded that “there is no effect … unless the velocity of the solar system in space is no more than about half that of the earth in its orbit”.[18][20] That possibility was thought to be too coincidental to provide an acceptable explanation, so from the null result of their experiment it was concluded that the round-trip time for light is the same in all inertial reference frames.[17][18]

The Ives–Stilwell experiment was carried out by Herbert Ives and G.R. Stilwell first in 1938[21] and with better accuracy in 1941.[22] It was designed to test the transverse Doppler effect – the redshift of light from a moving source in a direction perpendicular to its velocity—which had been predicted by Einstein in 1905. The strategy was to compare observed Doppler shifts with what was predicted by classical theory, and look for a Lorentz factor correction. Such a correction was observed, from which was concluded that the frequency of a moving atomic clock is altered according to special relativity.

Those classic experiments have been repeated many times with increased precision. Other experiments include, for instance, relativistic energy and momentum increase at high velocities, experimental testing of time dilation, and modern searches for Lorentz violations.

Tests of general relativity

General relativity has also been confirmed many times, the classic experiments being the perihelion precession of Mercury‘s orbit, the deflection of light by the Sun, and the gravitational redshift of light. Other tests confirmed the equivalence principle and frame dragging.

Modern applications

Far from being simply of theoretical interest, relativistic effects are important practical engineering concerns. Satellite-based measurement needs to take into account relativistic effects, as each satellite is in motion relative to an Earth-bound user and is thus in a different frame of reference under the theory of relativity. Global positioning systems such as GPS, GLONASS, and Galileo must account for all of the relativistic effects, such as the consequences of Earth’s gravitational field, in order to work with precision.[23] This is also the case in the high-precision measurement of time.[24] Instruments ranging from electron microscopes to particle accelerators would not work if relativistic considerations were omitted.[25]

Asymptotic symmetries

The spacetime symmetry group for Special Relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in General Relativity. A tractable case may be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group.

In 1962, Hermann Bondi, M. G. van der Burg, A. W. Metzner[26] and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at light-like infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group — not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity does not reduce to special relativity in the case of weak fields at long distances.


Quantum mechanics


Quantum mechanics is a fundamental theory in physics that provides a description of the physical properties of nature at the scale of atoms and subatomic particles. It is the foundation of all quantum physics, including quantum chemistry, quantum field theory, quantum technology, and quantum information science.

Classical physics, the collection of theories that existed before the advent of quantum mechanics, describes many aspects of nature at an ordinary (macroscopic)  scale, but is not sufficient for describing them at small (atomic and  subatomic)  scales. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large (macroscopic) scale.

Quantum mechanics differs from classical physics in that energy, momentum, angular momentum, and other quantities of a bound system are restricted to discrete values (quantization), objects have characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle).

Quantum mechanics arose gradually from theories to explain observations which could not be reconciled with classical physics, such as Max Planck’s solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein’s 1905 paper which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the “old quantum theory”, led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle’s energy, momentum, and other physical properties may yield.

Overview and fundamental concepts

Quantum mechanics allows the calculation of properties and behaviour of physical  systems. It is typically applied to microscopic systems: molecules, atoms and sub-atomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as  Wigner’s friend,  and its application to the universe as a whole remains speculative.  Predictions of quantum mechanics have been verified experimentally to an extremely high degree of  accuracy.

A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a  complex number, known as a probability amplitude. This is known as the  Born rule,  named after physicist  Max Born.  For example, a quantum particle like an  electron  can be described by a   wave function,  which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives a  probability density function  for the position that the electron will be found to have when an experiment is performed to measure it.  This is the best the theory can do; it cannot say for certain where the electron will be found. The  Schrödinger equation  relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.
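
A minimal numerical sketch of the Born rule (not part of the original article; the Gaussian amplitude below is an arbitrary illustrative choice) discretizes a wave function on a grid and reads position probabilities off the squared amplitudes:

    import numpy as np

    # Sketch: Born rule applied to a discretized, arbitrary complex amplitude.
    x = np.linspace(-5.0, 5.0, 2001)
    dx = x[1] - x[0]

    psi = np.exp(-x**2 / 2.0) * np.exp(1j * 3.0 * x)   # illustrative amplitude
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)        # normalize: total probability 1

    density = np.abs(psi)**2                           # Born rule: probability density
    print("total probability:", np.sum(density) * dx)              # ~1.0
    print("P(-1 < x < 1):    ", np.sum(density[(x > -1) & (x < 1)]) * dx)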

One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between different measurable quantities. The most famous form of  this  uncertainty principle  says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its  momentum.

Another consequence of the mathematical rules of quantum mechanics is the phenomenon of  quantum interference,  which is often illustrated with the  double-slit experiment.  In the basic version of this experiment, a  coherent light source,  such as a  laser  beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate.  The wave nature of light causes the light waves passing through the two slits to   interfere,  producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles.  However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected  photon  passes through one slit (as would a classical particle), and not through both slits (as would a wave).  However,  such experiments  demonstrate that particles do not form the interference pattern if one detects which slit they pass through. Other atomic-scale entities, such as  electrons,  are found to exhibit the same behavior when fired towards a double slit. This behavior is known as  wave–particle duality.

Another counter-intuitive phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential. In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy and the tunnel diode.

When quantum systems interact, the result can be the creation of  quantum entanglement:  their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement “…the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought”. Quantum entanglement enables the counter-intuitive properties of  quantum pseudo-telepathy, and can be a valuable resource in communication protocols, such as  quantum key distribution  and  superdense coding. Contrary to popular misconception, entanglement does not allow sending signals  faster than light,  as demonstrated by the  no-communication theorem.

Another possibility opened by entanglement is testing for “hidden variables”, hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory can provide. A collection of results, most significantly Bell’s theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell’s theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed, using entangled particles, and they have shown results incompatible with the constraints imposed by local hidden variables.

It is not possible to present these concepts in more than a superficial way without introducing the actual mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also  linear algebradifferential equationsgroup theory,  and other more advanced subjects.  Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples.

Mathematical formulation

In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector {\displaystyle \psi } belonging to a (separable) complex Hilbert space {\displaystyle {\mathcal {H}}}. This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys {\displaystyle \langle \psi ,\psi \rangle =1}, and it is well-defined up to a complex number of modulus 1 (the global phase), that is, {\displaystyle \psi } and {\displaystyle e^{i\alpha }\psi } represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions {\displaystyle L^{2}(\mathbb {C} )}, while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors {\displaystyle \mathbb {C} ^{2}} with the usual inner product.

Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue {\displaystyle \lambda } is non-degenerate and the probability is given by {\displaystyle |\langle {\vec {\lambda }},\psi \rangle |^{2}}, where {\displaystyle {\vec {\lambda }}} is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by {\displaystyle \langle \psi ,P_{\lambda }\psi \rangle }, where {\displaystyle P_{\lambda }} is the projector onto its associated eigenspace. In the continuous case, these formulas give instead the probability density.

After the measurement, if result {\displaystyle \lambda } was obtained, the quantum state is postulated to collapse to {\displaystyle {\vec {\lambda }}}, in the non-degenerate case, or to {\displaystyle P_{\lambda }\psi /{\sqrt {\langle \psi ,P_{\lambda }\psi \rangle }}}, in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a “measurement” has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of “wave function collapse” (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.
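
The measurement rule just described can be made concrete with a small sketch (not from the original article; the two-level observable and the initial state are illustrative choices): diagonalize a Hermitian observable, apply the Born rule to each eigenvector, and note that the post-measurement state is the eigenvector belonging to the observed outcome.

    import numpy as np

    # Sketch: measuring an observable on a two-level system.
    A = np.array([[0, 1],
                  [1, 0]], dtype=complex)      # illustrative Hermitian observable
    psi = np.array([1, 0], dtype=complex)      # normalized initial state

    eigvals, eigvecs = np.linalg.eigh(A)       # possible outcomes and eigenstates
    for lam, v in zip(eigvals, eigvecs.T):
        p = abs(np.vdot(v, psi))**2            # Born rule probability for this outcome
        print(f"outcome {lam:+.0f}: probability {p:.2f}, "
              f"post-measurement state {np.round(v, 3)}")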

The time evolution of a quantum state is described by the Schrödinger equation:

{\displaystyle i\hbar {\frac {d}{dt}}\psi (t)=H\psi (t).}

Here {\displaystyle H} denotes the Hamiltonian, the observable corresponding to the total energy of the system, and {\displaystyle \hbar } is the reduced Planck constant. The constant {\displaystyle i\hbar } is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle.

The solution of this differential equation is given by

{\displaystyle \psi (t)=e^{-iHt/\hbar }\psi (0).}

The operator {\displaystyle U(t)=e^{-iHt/\hbar }} is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state {\displaystyle \psi (0)} – it makes a definite prediction of what the quantum state {\displaystyle \psi (t)} will be at any later time.

Fig. 1: Probability densities corresponding to the wave functions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, …) and angular momenta (increasing across from left to right: s, p, d, …). Denser areas correspond to higher probability density in a position measurement. Such wave functions are directly comparable to Chladni’s figures of acoustic modes of vibration in classical physics and are modes of oscillation as well, possessing a sharp energy and thus, a definite frequency. The angular momentum and energy are quantized and take only discrete values like those shown (as is the case for resonant frequencies in acoustics).

Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian. Many systems that are treated dynamically in classical mechanics are described by such “static” wave functions. For example, a single  electron  in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the  atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an  s orbital (Fig. 1).

Analytic solutions of the Schrödinger equation are known for  very few relatively simple model Hamiltonians  including the  quantum harmonic oscillator, the  particle in a box,  the  dihydrogen cation, and the  hydrogen atom. Even the  helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment.

However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy. Another method is called “semi-classical equation of motion”, which applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion. This approach is particularly important in the field of  quantum chaos.

Uncertainty principle

One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum. Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator {\displaystyle {\hat {X}}} and momentum operator {\displaystyle {\hat {P}}} do not commute, but rather satisfy the canonical commutation relation:

{\displaystyle [{\hat {X}},{\hat {P}}]=i\hbar .}

Given a quantum state, the Born rule lets us compute expectation values for both {\displaystyle X} and {\displaystyle P}, and moreover for powers of them. Defining the uncertainty for an observable by a standard deviation, we have

{\displaystyle \sigma _{X}={\sqrt {\langle X^{2}\rangle -\langle X\rangle ^{2}}},}

and likewise for the momentum:

{\displaystyle \sigma _{P}={\sqrt {\langle P^{2}\rangle -\langle P\rangle ^{2}}}.}

The uncertainty principle states that

{\displaystyle \sigma _{X}\sigma _{P}\geq {\frac {\hbar }{2}}.}

Either standard deviation can in principle be made arbitrarily small, but not both simultaneously. This inequality generalizes to arbitrary pairs of self-adjoint operators {\displaystyle A} and {\displaystyle B}. The commutator of these two operators is

{\displaystyle [A,B]=AB-BA,}

and this provides the lower bound on the product of standard deviations:

{\displaystyle \sigma _{A}\sigma _{B}\geq {\frac {1}{2}}\left|\langle [A,B]\rangle \right|.}

Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an {\displaystyle i/\hbar } factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum {\displaystyle p} is replaced by {\displaystyle -i\hbar {\frac {\partial }{\partial x}}}, and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times {\displaystyle -\hbar ^{2}}.[19]
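
A quick numerical sketch of this correspondence (not from the original article; it sets ħ = 1 and uses an arbitrary wavenumber k = 2.5) checks that applying -iħ d/dx to the plane wave exp(ikx) returns ħk times the same wave:

    import numpy as np

    # Sketch: the momentum operator -i*hbar*d/dx acting on a plane wave exp(i*k*x).
    hbar, k = 1.0, 2.5
    x = np.linspace(0.0, 10.0, 10001)
    psi = np.exp(1j * k * x)

    p_psi = -1j * hbar * np.gradient(psi, x)   # finite-difference derivative
    ratio = (p_psi[1:-1] / psi[1:-1]).real     # should be ~ hbar*k everywhere
    print("hbar*k =", hbar * k, " numerical estimate:", round(ratio.mean(), 3))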

Composite systems and entanglement

When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. For example, let A and B be two quantum systems, with Hilbert spaces {\displaystyle {\mathcal {H}}_{A}} and {\displaystyle {\mathcal {H}}_{B}}, respectively. The Hilbert space of the composite system is then

{\displaystyle {\mathcal {H}}_{AB}={\mathcal {H}}_{A}\otimes {\mathcal {H}}_{B}.}

If the state for the first system is the vector {\displaystyle \psi _{A}} and the state for the second system is {\displaystyle \psi _{B}}, then the state of the composite system is

{\displaystyle \psi _{A}\otimes \psi _{B}.}

Not all states in the joint Hilbert space {\displaystyle {\mathcal {H}}_{AB}} can be written in this form, however, because the superposition principle implies that linear combinations of these “separable” or “product states” are also valid. For example, if {\displaystyle \psi _{A}} and {\displaystyle \phi _{A}} are both possible states for system {\displaystyle A}, and likewise {\displaystyle \psi _{B}} and {\displaystyle \phi _{B}} are both possible states for system {\displaystyle B}, then

{\displaystyle {\tfrac {1}{\sqrt {2}}}\left(\psi _{A}\otimes \psi _{B}+\phi _{A}\otimes \phi _{B}\right)}

is a valid joint state that is not separable. States that are not separable are called entangled.[22][23]
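
Whether a two-component state of this form is separable can be checked from its Schmidt (singular) values. The sketch below is illustrative and not from the original article; it takes {\displaystyle \psi _{A}=\psi _{B}} to be the first basis vector and {\displaystyle \phi _{A}=\phi _{B}} the second, so the combination above becomes a maximally entangled two-qubit state:

    import numpy as np

    # Sketch: separability test via Schmidt (singular) values of a two-qubit state.
    psi_A = psi_B = np.array([1, 0], dtype=complex)
    phi_A = phi_B = np.array([0, 1], dtype=complex)

    product = np.kron(psi_A, psi_B)                                   # separable
    entangled = (np.kron(psi_A, psi_B) + np.kron(phi_A, phi_B)) / np.sqrt(2)

    for name, state in [("product  ", product), ("entangled", entangled)]:
        # Reshape the joint vector into a 2x2 matrix; more than one nonzero
        # singular value means the state cannot be written as a single product.
        schmidt = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
        print(name, "Schmidt coefficients:", np.round(schmidt, 3))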

If the state for a composite system is entangled, it is impossible to describe either component system A or system B by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory. As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic.

Equivalence between formulations

There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the “transformation theory” proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger).[26] An alternative formulation of quantum mechanics is Feynman‘s path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.

Symmetries and conservation laws

The Hamiltonian {\displaystyle H} is known as the generator of time evolution, since it defines a unitary time-evolution operator {\displaystyle U(t)=e^{-iHt/\hbar }} for each value of {\displaystyle t}. From this relation between {\displaystyle U(t)} and {\displaystyle H}, it follows that any observable {\displaystyle A} that commutes with {\displaystyle H} will be conserved: its expectation value will not change over time. This statement generalizes, as mathematically, any Hermitian operator {\displaystyle A} can generate a family of unitary operators parameterized by a variable {\displaystyle t}. Under the evolution generated by {\displaystyle A}, any observable {\displaystyle B} that commutes with {\displaystyle A} will be conserved. Moreover, if {\displaystyle B} is conserved by evolution under {\displaystyle A}, then {\displaystyle A} is conserved under the evolution generated by {\displaystyle B}. This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law.
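
A small numerical sketch of this conservation statement (not from the original article; the 2×2 Hamiltonian is arbitrary, ħ is set to 1, and the conserved observable is simply taken to be A = H^2, which commutes with H by construction):

    import numpy as np

    # Sketch: an observable commuting with H has a time-independent expectation value.
    hbar = 1.0
    H = np.array([[1.0, 0.5],
                  [0.5, 2.0]])                  # illustrative Hermitian Hamiltonian
    A = H @ H                                   # commutes with H by construction
    psi0 = np.array([1, 0], dtype=complex)

    energies, V = np.linalg.eigh(H)             # diagonalize H to build U(t)
    for t in (0.0, 1.0, 5.0):
        U = V @ np.diag(np.exp(-1j * energies * t / hbar)) @ V.conj().T
        psi_t = U @ psi0
        expect = np.vdot(psi_t, A @ psi_t).real
        print(f"t = {t:3.1f}:  <A> = {expect:.6f}")   # same value at every time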

Examples

Free particle

Position space probability density of a Gaussian wave packet moving in one dimension in free space.

The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy:

{\displaystyle H={\frac {1}{2m}}P^{2}=-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}.}

The general solution of the Schrödinger equation is given by

{\displaystyle \psi (x,t)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\hat {\psi }}(k,0)e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}\mathrm {d} k,}

which is a superposition of all possible plane waves {\displaystyle e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}}, which are eigenstates of the momentum operator with momentum {\displaystyle p=\hbar k}. The coefficients of the superposition are {\displaystyle {\hat {\psi }}(k,0)}, which is the Fourier transform of the initial quantum state {\displaystyle \psi (x,0)}.

It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet:

{\displaystyle \psi (x,0)={\frac {1}{\sqrt[{4}]{\pi a}}}e^{-{\frac {x^{2}}{2a}}}}

which has Fourier transform, and therefore momentum distribution

{\displaystyle {\hat {\psi }}(k,0)={\sqrt[{4}]{\frac {a}{\pi }}}e^{-{\frac {ak^{2}}{2}}}.}

We see that as we make {\displaystyle a} smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making {\displaystyle a} larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle.
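
The same tradeoff can be checked numerically. The sketch below (not part of the original article; it sets ħ = a = 1) integrates the position and momentum densities of the Gaussian packet above and recovers the minimum-uncertainty product ħ/2:

    import numpy as np

    # Sketch: position and momentum spreads of the Gaussian packet psi(x, 0).
    hbar, a = 1.0, 1.0
    x = np.linspace(-20, 20, 4001)
    dx = x[1] - x[0]
    psi_x = (np.pi * a) ** -0.25 * np.exp(-x**2 / (2 * a))
    sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi_x)**2) * dx)       # <x> = 0 by symmetry

    k = np.linspace(-20, 20, 4001)
    dk = k[1] - k[0]
    psi_k = (a / np.pi) ** 0.25 * np.exp(-a * k**2 / 2)           # Fourier transform above
    sigma_p = hbar * np.sqrt(np.sum(k**2 * np.abs(psi_k)**2) * dk)

    print("sigma_x * sigma_p =", round(sigma_x * sigma_p, 4), " vs hbar/2 =", hbar / 2)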

As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant.

Particle in a box

1-dimensional potential energy box (or infinite potential well)

The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region.  For the one-dimensional case in the {\displaystyle x} direction, the time-independent Schrödinger equation may be written

{\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi .}

With the differential operator defined by

{\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}}

the previous equation is evocative of the classic kinetic energy analogue,

{\displaystyle {\frac {1}{2m}}{\hat {p}}_{x}^{2}=E,}

with state {\displaystyle \psi } in this case having energy {\displaystyle E} coincident with the kinetic energy of the particle.

The general solutions of the Schrödinger equation for the particle in a box are

{\displaystyle \psi (x)=Ae^{ikx}+Be^{-ikx}\qquad \qquad E={\frac {\hbar ^{2}k^{2}}{2m}}}

or, from Euler’s formula,

{\displaystyle \psi (x)=C\sin(kx)+D\cos(kx).\!}

The infinite potential walls of the box determine the values of {\displaystyle C,D,} and {\displaystyle k} at {\displaystyle x=0} and {\displaystyle x=L} where {\displaystyle \psi } must be zero. Thus, at {\displaystyle x=0},

{\displaystyle \psi (0)=0=C\sin(0)+D\cos(0)=D}

and {\displaystyle D=0}. At {\displaystyle x=L},

{\displaystyle \psi (L)=0=C\sin(kL),}

in which {\displaystyle C} cannot be zero as this would conflict with the postulate that {\displaystyle \psi } has norm 1. Therefore, since {\displaystyle \sin(kL)=0}, {\displaystyle kL} must be an integer multiple of {\displaystyle \pi },

{\displaystyle k={\frac {n\pi }{L}}\qquad \qquad n=1,2,3,\ldots .}

This constraint on {\displaystyle k} implies a constraint on the energy levels, yielding

{\displaystyle E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}.}
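
As a numerical illustration (not part of the original article; the 1 nm box width is an arbitrary choice), these levels can be evaluated for an electron:

    # Sketch: E_n = n^2 h^2 / (8 m L^2) for an electron in a 1 nm box.
    h = 6.626e-34          # Planck constant, J*s (rounded)
    m_e = 9.109e-31        # electron mass, kg (rounded)
    L = 1e-9               # box width, m (illustrative)
    eV = 1.602e-19         # joules per electron-volt

    for n in (1, 2, 3):
        E_n = n**2 * h**2 / (8 * m_e * L**2)
        print(f"n = {n}:  E = {E_n / eV:.2f} eV")   # roughly 0.4, 1.5, 3.4 eV

The n^2 growth of the levels, and the fact that their spacing increases as the box is made smaller, are the generic signatures of confinement-induced quantization.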

The finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.

Harmonic oscillator

Some trajectories of a harmonic oscillator (i.e. a ball attached to a spring) in classical mechanics (A-B) and quantum mechanics (C-H). In quantum mechanics, the position of the ball is represented by a wave (called the wave function), with the real part shown in blue and the imaginary part shown in red. Some of the trajectories (such as C, D, E, and F) are standing waves (or “stationary states“). Each standing-wave frequency is proportional to a possible energy level of the oscillator. This “energy quantization” does not occur in classical physics, where the oscillator can have any energy.

As in the classical case, the potential for the quantum harmonic oscillator is given by

{\displaystyle V(x)={\frac {1}{2}}m\omega ^{2}x^{2}.}

This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant “ladder method” first proposed by Paul Dirac. The eigenstates are given by

{\displaystyle \psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\cdot \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\cdot e^{-{\frac {m\omega x^{2}}{2\hbar }}}\cdot H_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),\qquad }
{\displaystyle n=0,1,2,\ldots .}

where Hn are the Hermite polynomials

{\displaystyle H_{n}(x)=(-1)^{n}e^{x^{2}}{\frac {d^{n}}{dx^{n}}}\left(e^{-x^{2}}\right),}

and the corresponding energy levels are

{\displaystyle E_{n}=\hbar \omega \left(n+{1 \over 2}\right).}

This is another example illustrating the discretization of energy for bound states.
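
A numerical cross-check of this result (not from the original article; it sets ħ = m = ω = 1 and uses an arbitrary grid): discretize the Hamiltonian with a standard finite-difference kinetic term and compare the lowest eigenvalues with ħω(n + 1/2).

    import numpy as np

    # Sketch: finite-difference harmonic-oscillator Hamiltonian, hbar = m = omega = 1.
    N = 1500
    x = np.linspace(-10, 10, N)
    dx = x[1] - x[0]

    # Kinetic term -(1/2) d^2/dx^2 via the three-point stencil.
    kinetic = (-0.5 / dx**2) * (np.diag(np.ones(N - 1), -1)
                                - 2.0 * np.diag(np.ones(N))
                                + np.diag(np.ones(N - 1), 1))
    potential = np.diag(0.5 * x**2)
    H = kinetic + potential

    print(np.round(np.linalg.eigvalsh(H)[:4], 4))   # close to 0.5, 1.5, 2.5, 3.5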

Mach–Zehnder interferometer

Schematic of a Mach–Zehnder interferometer.

The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement. 

We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the “lower” path which starts from the left, goes straight through both beam splitters, and ends at the top, and the “upper” path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector {\displaystyle \psi \in \mathbb {C} ^{2}} that is a superposition of the “lower” path {\displaystyle \psi _{l}={\begin{pmatrix}1\\0\end{pmatrix}}} and the “upper” path {\displaystyle \psi _{u}={\begin{pmatrix}0\\1\end{pmatrix}}}, that is, {\displaystyle \psi =\alpha \psi _{l}+\beta \psi _{u}} for complex {\displaystyle \alpha ,\beta }. In order to respect the postulate that {\displaystyle \langle \psi ,\psi \rangle =1} we require that {\displaystyle |\alpha |^{2}+|\beta |^{2}=1}.

Both beam splitters are modelled as the unitary matrix {\displaystyle B={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&i\\i&1\end{pmatrix}}}, which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of {\displaystyle 1/{\sqrt {2}}}, or be reflected to the other path with a probability amplitude of {\displaystyle i/{\sqrt {2}}}. The phase shifter on the upper arm is modelled as the unitary matrix {\displaystyle P={\begin{pmatrix}1&0\\0&e^{i\Delta \Phi }\end{pmatrix}}}, which means that if the photon is on the “upper” path it will gain a relative phase of {\displaystyle \Delta \Phi }, and it will stay unchanged if it is in the lower path.

A photon that enters the interferometer from the left will then be acted upon with a beam splitter {\displaystyle B}, a phase shifter {\displaystyle P}, and another beam splitter {\displaystyle B}, and so end up in the state

BPB\psi_l = i e^{i\Delta\Phi/2} \begin{pmatrix} -\sin(\Delta\Phi/2) \\ \cos(\Delta\Phi/2) \end{pmatrix}

and the probabilities that it will be detected at the right or at the top are given respectively by

p(u) = |\langle \psi_u, BPB\psi_l \rangle|^2 = \cos^2\frac{\Delta\Phi}{2},
p(l) = |\langle \psi_l, BPB\psi_l \rangle|^2 = \sin^2\frac{\Delta\Phi}{2}.

One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities.

It is interesting to consider what would happen if the photon were definitely in either the “lower” or “upper” paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases there will be no interference between the paths anymore, and the probabilities are given by p(u) = p(l) = 1/2, independently of the phase \Delta\Phi. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths.
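These results are easy to check numerically. The following is a minimal sketch in Python/numpy (written purely for illustration; the phase values are arbitrary), using only the matrices B and P and the basis states ψ_l, ψ_u defined above:

import numpy as np

def mzi_probabilities(delta_phi):
    # Beam splitter and phase shifter as defined above
    B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
    P = np.array([[1, 0], [0, np.exp(1j * delta_phi)]])
    psi_l = np.array([1, 0], dtype=complex)   # photon enters from the left
    psi_u = np.array([0, 1], dtype=complex)
    out = B @ P @ B @ psi_l                   # state after the full interferometer
    p_l = abs(np.vdot(psi_l, out)) ** 2       # probability of detection at the top
    p_u = abs(np.vdot(psi_u, out)) ** 2       # probability of detection at the right
    return p_l, p_u

for phi in (0.0, np.pi / 3, np.pi / 2, np.pi):
    p_l, p_u = mzi_probabilities(phi)
    # Agrees with the closed forms sin^2(phi/2) and cos^2(phi/2) quoted above
    print(f"phi={phi:.3f}  p(l)={p_l:.3f}  p(u)={p_u:.3f}")

# Removing the first beam splitter (photon definitely on one path) destroys the
# interference: both output probabilities become 1/2, independent of the phase.
phi = 0.7
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
P = np.array([[1, 0], [0, np.exp(1j * phi)]])
out = B @ (P @ np.array([1, 0], dtype=complex))
print(abs(out[0]) ** 2, abs(out[1]) ** 2)     # 0.5 0.5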

Applications

Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods.[note 4] Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics.

In many aspects modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, and medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.

Relation to other scientific theories

Classical mechanics

The rules of quantum mechanics assert that the state space of a system is a  Hilbert space  and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers.  One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.

When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.

Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos  studies the relationship between classical and quantum descriptions in these systems.

Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement  becomes simply classical correlations. Quantum coherence is not typically evident at macroscopic scales, except maybe at temperatures approaching absolute zero at which quantum behavior may manifest macroscopically.

Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics. 

Special relativity and electrodynamics

Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the  electromagnetic  interaction.  Quantum electrodynamics  is, along with  general relativity, one of the most accurate physical theories ever devised.

The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical -e^2/(4\pi\epsilon_0 r) Coulomb potential. This “semi-classical” approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.
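As a concrete sketch of what this semi-classical model looks like (standard textbook material, stated here for orientation rather than drawn from the text above), the hydrogen-atom Hamiltonian pairs the quantum kinetic-energy operator with the classical Coulomb potential, and its bound-state energies fall off as 1/n^2:

\hat{H} = -\frac{\hbar^2}{2\mu}\nabla^2 - \frac{e^2}{4\pi\epsilon_0 r}, \qquad E_n = -\frac{\mu e^4}{2(4\pi\epsilon_0)^2\hbar^2}\,\frac{1}{n^2} \approx -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots

where \mu is the reduced mass of the electron–proton system.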

Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg.

Relation to general relativity

Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of  quantum gravity  is an important issue in  physical cosmology  and the search by physicists for an elegant “Theory of Everything” (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon.

One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force. Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric “woven” of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10⁻³⁵ m, and so lengths shorter than the Planck length are not physically meaningful in LQG.

Philosophical implications

Unsolved problem in physics: Is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the “superposition of states” and “wave function collapse“, give rise to the reality we perceive?

Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, “I think I can safely say that nobody understands quantum mechanics.”  According to Steven Weinberg, “There is now in my opinion no entirely satisfactory interpretation of quantum mechanics.”

The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the “Copenhagen interpretation”. According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of “causality”. Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations remain popular in the 21st century.

Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein’s long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR’s principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to obtain these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism.

Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem.

Everett’s many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.  This is a consequence of removing the axiom of the collapse of the wave packet.  All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical  quantum superposition.  While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we don’t observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful.

Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later.

History

 

Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light.

During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible units – the word “atom” deriving from the Greek for “uncuttable” – the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard was Michael Faraday’s 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday’s work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons.

The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete “quanta” (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation.  The word quantum derives from the Latin, meaning “how great” or “how much”. According to Planck, quantities of energy could be thought of as divided into “elements” whose size (E) would be proportional to their frequency (ν):

E = h\nu,

where h is Planck’s constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck’s quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck’s ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen.[69] Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency. In his paper “On the Quantum Theory of Radiation,” Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation, which became the basis of the laser.
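For a sense of scale, Planck’s relation can be evaluated directly (a minimal sketch; the frequency chosen is an illustrative value for green light, not a figure from the text):

h = 6.62607015e-34           # Planck constant, J*s
nu = 5.4e14                  # illustrative frequency of green light, Hz
E = h * nu                   # energy of one quantum, E = h*nu
print(E)                     # ~3.6e-19 J
print(E / 1.602176634e-19)   # ~2.2 eV, the same energy in electronvolts

A single quantum of visible light thus carries only a few electronvolts of energy, which is one reason energy quantization goes unnoticed at everyday scales.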

The 1927 Solvay Conference in Brussels was the fifth world physics conference.

This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of  heuristic  corrections to  classical mechanics.  The theory is now understood as a  semi-classical approximation  to modern quantum mechanics. Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye‘s work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen‘s proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld‘s extension of the Bohr model to include special-relativistic effects.

In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie’s approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger’s wave function in July 1926. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.

By 1930 quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann, with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the ‘observer’. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors and superfluids.

Explanatory notes

  1. ^ See, for example, Precision tests of QED. The relativistic refinement of quantum mechanics known as quantum electrodynamics (QED) has been shown to agree with experiment to within 1 part in 10⁸ for some atomic properties.
  2. ^ Physicist John C. Baez cautions, “there’s no way to understand the interpretation of quantum mechanics without also being able to solve quantum mechanics problems – to understand the theory, you need to be able to use it (and vice versa)”.[15] Carl Sagan outlined the “mathematical underpinning” of quantum mechanics and wrote, “For most physics students, this might occupy them from, say, third grade to early graduate school – roughly 15 years. […] The job of the popularizer of science, trying to get across some idea of quantum mechanics to a general audience that has not gone through these initiation rites, is daunting. Indeed, there are no successful popularizations of quantum mechanics in my opinion – partly for this reason.”[16]
  3. ^ A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise, a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle’s Hilbert space. Physicists sometimes introduce fictitious “bases” for a Hilbert space comprising elements outside that space. These are invented for calculational convenience and do not represent physical states.[19]: 100–105 
  4. ^ See, for example, the Feynman Lectures on Physics for some of the technological applications which use quantum mechanics, e.g., transistors (vol III, pp. 14–11 ff), integrated circuits, which are follow-on technology in solid-state physics (vol II, pp. 8–6), and lasers (vol III, pp. 9–13).
  5. ^ See macroscopic quantum phenomena, Bose–Einstein condensate, and quantum machine.
  6. ^ The published form of the EPR argument was due to Podolsky, and Einstein himself was not satisfied with it. In his own publications and correspondence, Einstein used a different argument to insist that quantum mechanics is an incomplete theory.


Biology is the scientific study of life. It is a natural science with a broad scope but has several unifying themes that tie it together as a single, coherent field. For instance, all organisms are made up of cells that process hereditary information encoded in genes, which can be transmitted to future generations. Another major theme is evolution, which explains the unity and diversity of life. Energy processing is also important to life as it allows organisms to move, grow, and reproduce. Finally, all organisms are able to regulate their own internal environments. Biologists are able to study life at multiple levels of organization, from the molecular biology of a cell to the anatomy and physiology of plants and animals, and evolution of populations. Hence, there are multiple subdisciplines within biology, each defined by the nature of their research questions and the tools that they use. Like other scientists, biologists use the scientific method to make observations, pose questions, generate hypotheses, perform experiments, and form conclusions about the world around them. Life on Earth, which emerged more than 3.7 billion years ago, is immensely diverse. Biologists have sought to study and classify the various forms of life, from prokaryotic organisms such as archaea and bacteria to eukaryotic organisms such as protists, fungi, plants, and animals. These various organisms contribute to the biodiversity of an ecosystem, where they play specialized roles in the cycling of nutrients and energy through their biophysical environment.

Biology derives from the Ancient Greek words βίος (bíos), meaning 'life', and -λογία (-logía), meaning 'branch of study' or 'to speak'. Combined, they form the Greek word βιολογία (biología), 'biology'. Despite this, the term βιολογία as a whole did not exist in Ancient Greek; it was first borrowed into English and French (biologie). Historically there was another term for biology in English, lifelore; it is rarely used today.

The Latin-language form of the term first appeared in 1736 when Swedish scientist Carl Linnaeus (Carl von Linné) used biologi in his Bibliotheca Botanica. It was used again in 1766 in a work entitled Philosophiae naturalis sive physicae: tomus III, continens geologian, biologian, phytologian generalis, by Michael Christoph Hanov, a disciple of Christian Wolff. The first German use, Biologie, was in a 1771 translation of Linnaeus' work. In 1797, Theodor Georg August Roose used the term in the preface of a book, Grundzüge der Lehre von der Lebenskraft. Karl Friedrich Burdach used the term in 1800 in a more restricted sense of the study of human beings from a morphological, physiological and psychological perspective (Propädeutik zum Studien der gesammten Heilkunst). The term came into its modern usage with the six-volume treatise Biologie, oder Philosophie der lebenden Natur (1802–22) by Gottfried Reinhold Treviranus, who announced:

The objects of our research will be the different forms and manifestations of life, the conditions and laws under which these phenomena occur, and the causes through which they  have been affected. The  science  that concerns itself with these objects we will indicate by the name biology [Biologie] or the doctrine of life [Lebenslehre].

Many other terms used in biology to describe plants, animals, diseases, and drugs have been derived from Greek and Latin due to the historical contributions of the Ancient Greek and Roman  civilizations as well as the continued use of these two languages in European universities during the Middle Ages and at the beginning of the  Renaissance.

Diagram of a fly from Robert Hooke's innovative Micrographia, 1665

The earliest roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia in around 3000 to 1200 BCE. Their contributions later entered and shaped Greek natural philosophy of classical antiquity. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. His works such as History of Animals were especially important because they revealed his naturalist leanings, and later more empirical works that focused on biological causation and the diversity of life. Aristotle's successor at the Lyceum, Theophrastus, wrote a series of books on botany that survived as the most important contribution of antiquity to the plant sciences, even into the Middle Ages.[19]

Scholars of the medieval Islamic world who wrote on biology included al-Jahiz (781–869), Al-Dīnawarī (828–896), who wrote on botany,[20] and Rhazes (865–925), who wrote on anatomy and physiology. Medicine was especially well studied by Islamic scholars working in Greek philosophical traditions, while natural history drew heavily on Aristotelian thought, especially in upholding a fixed hierarchy of life.

Biology began to quickly develop and grow with Anton van Leeuwenhoek's dramatic improvement of the microscope. It was then that scholars discovered spermatozoa, bacteria, infusoria and the diversity of microscopic life. Investigations by Jan Swammerdam led to new interest in entomology and helped to develop the basic techniques of microscopic dissection and staining.

Advances in microscopy also had a profound impact on biological thinking. In the early 19th century, a number of biologists pointed to the central importance of the cell. Then, in 1838, Schleiden and Schwann began promoting the now universal ideas that (1) the basic unit of organisms is the cell and (2) that individual cells have all the characteristics of life, although they opposed the idea that (3) all cells come from the division of other cells. However, Robert Remak and Rudolf Virchow were able to reify the third tenet, and by the 1860s most biologists accepted all three tenets which consolidated into cell theory.

Meanwhile, taxonomy and classification became the focus of natural historians. Carl Linnaeus published a basic taxonomy for the natural world in 1735 (variations of which have been in use ever since), and in the 1750s introduced scientific names for all his species.[24] Georges-Louis Leclerc, Comte de Buffon, treated species as artificial categories and living forms as malleable—even suggesting the possibility of common descent. Although he was opposed to evolution, Buffon is a key figure in the history of evolutionary thought; his work influenced the evolutionary theories of both Lamarck and Darwin.

In 1842, Charles Darwin penned his first sketch of On the Origin of Species.[26]

Serious evolutionary thinking originated with the works of Jean-Baptiste Lamarck, who was the first to present a coherent theory of evolution. He posited that evolution was the result of environmental stress on properties of animals, meaning that the more frequently and rigorously an organ was used, the more complex and efficient it would become, thus adapting the animal to its environment. Lamarck believed that these acquired traits could then be passed on to the animal's offspring, who would further develop and perfect them. However, it was the British naturalist Charles Darwin, combining the biogeographical approach of Humboldt, the uniformitarian geology of Lyell, Malthus's writings on population growth, and his own morphological expertise and extensive natural observations, who forged a more successful evolutionary theory based on natural selection; similar reasoning and evidence led Alfred Russel Wallace to independently reach the same conclusions.

Darwin's theory of evolution by natural selection quickly spread through the scientific community and soon became a central axiom of the rapidly developing science of biology.

Modern genetics began with the work of Gregor Mendel, who presented his paper, "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), in 1865, which outlined the principles of biological inheritance. However, the significance of his work was not realized until the early 20th century when evolution became a unified theory as the modern synthesis reconciled Darwinian evolution with classical genetics.[33] In the 1940s and early 1950s, a series of experiments by Alfred Hershey and Martha Chase pointed to DNA as the component of chromosomes that held the trait-carrying units that had become known as genes. A focus on new kinds of model organisms such as viruses and bacteria, along with the discovery of the double-helical structure of DNA by James Watson and Francis Crick in 1953, marked the transition to the era of molecular genetics. From the 1950s onwards, biology has been vastly extended in the molecular domain. The genetic code was cracked by Har Gobind Khorana, Robert W. Holley and Marshall Warren Nirenberg after DNA was understood to contain codons. Finally, the Human Genome Project was launched in 1990 with the goal of mapping the general human genome. This project was essentially completed in 2003, with further analysis still being published. The Human Genome Project was the first step in a globalized effort to incorporate accumulated knowledge of biology into a functional, molecular definition of the human body and the bodies of other organisms.

Atoms and molecules

In the Bohr model of an atom, electrons (blue dot) orbit around an atomic nucleus (red-filled circle) in specific  atomic orbitals  (grey empty circles).

All organisms are made up of matter and all matter is made up of elements.[35] Oxygen, carbon, hydrogen, and nitrogen are the four elements that account for 96% of all organisms, with calcium, phosphorus, sulfur, sodium, chlorine, and magnesium constituting the remaining 3.7%.[35] Different elements can combine to form compounds such as water, which is fundamental to life.[35] Life on Earth began from water and remained there for about three billion years prior to migrating onto land.[36] Matter can exist in different states as a solid, liquid, or gas.

The smallest unit of an element is an atom, which is composed of an atomic nucleus and one or more electrons moving around the nucleus, as described by the Bohr model.[37] The nucleus is made of one or more protons and a number of neutrons. Protons have a positive electric charge, neutrons are electrically neutral, and electrons have a negative electric charge.[38] Atoms with equal numbers of protons and electrons are electrically neutral. The atom of each specific element contains a unique number of protons, which is known as its atomic number, and the sum of its protons and neutrons is an atom's mass number. The masses of individual protons, neutrons, and electrons can be measured in grams or Daltons (Da), with the mass of each proton or neutron rounded to 1 Da.[38] Although all atoms of a specific element have the same number of protons, they may differ in the number of neutrons, thereby existing as isotopes.[35] Carbon, for example, can exist as a stable isotope (carbon-12 or carbon-13) or as a radioactive isotope (carbon-14), the latter of which can be used in radiometric dating (specifically radiocarbon dating) to determine the age of organic materials.[35]

Individual atoms can be held together by chemical bonds to form molecules and ionic compounds.[35] Common types of chemical bonds include ionic bonds, covalent bonds, and hydrogen bonds. Ionic bonding involves the electrostatic attraction between oppositely charged ions, or between two atoms with sharply different electronegativities,[39] and is the primary interaction occurring in ionic compounds. Ions are atoms (or groups of atoms) with an electrostatic charge. Atoms that gain electrons make negatively charged ions (called anions) whereas those that lose electrons make positively charged ions (called cations).

Unlike ionic bonds, a covalent bond involves the sharing of electron pairs between atoms. These electron pairs and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is known as covalent bonding.[40][better source needed]

A hydrogen bond is primarily an electrostatic force of attraction between a hydrogen atom which is covalently bound to a more electronegative atom or group such as oxygen. A ubiquitous example of a hydrogen bond is found between water molecules. In a discrete water molecule, there are two hydrogen atoms and one oxygen atom. Two molecules of water can form a hydrogen bond between them. When more molecules are present, as is the case with liquid water, more bonds are possible because the oxygen of one water molecule has two lone pairs of electrons, each of which can form a hydrogen bond with a hydrogen on another water molecule.

Water

Model of hydrogen bonds (1) between molecules of water

Life arose from the Earth's first ocean, which was formed approximately 3.8 billion years ago.[38] Since then, water continues to be the most abundant molecule in every organism. Water is important to life because it is an effective solvent, capable of dissolving solutes such as sodium and chloride ions or other small molecules to form an aqueous solution. Once dissolved in water, these solutes are more likely to come in contact with one another and therefore take part in chemical reactions that sustain life.[38]

In terms of its molecular structure, water is a small polar molecule with a bent shape formed by the polar covalent bonds of two hydrogen (H) atoms to one oxygen (O) atom (H2O).[38] Because the O–H bonds are polar, the oxygen atom has a slight negative charge and the two hydrogen atoms have a slight positive charge.[38] This polar property of water allows it to attract other water molecules via hydrogen bonds, which makes water cohesive.[38] Surface tension results from the cohesive force due to the attraction between molecules at the surface of the liquid.[38] Water is also adhesive as it is able to adhere to the surface of any polar or charged non-water molecules.[38]

Water is denser as a liquid than it is as a solid (or ice).[38] This unique property of water allows ice to float above liquid water such as ponds, lakes, and oceans, thereby insulating the liquid below from the cold air above.[38] The lower density of ice compared to liquid water is due to the lower number of water molecules that form the crystal lattice structure of ice, which leaves a large amount of space between water molecules.[38] In contrast, there is no crystal lattice structure in liquid water, which allows more water molecules to occupy the same amount of volume.[38]

Water also has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol.[38] Thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into gas (or water vapor).[38]

As a molecule, water is not completely stable as each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again.[38] In pure water, the number of hydrogen ions balances (or equals) the number of hydroxyl ions, resulting in a pH that is neutral. If hydrogen ions were to exceed hydroxyl ions, then the pH of the solution would be acidic. Conversely, a solution's pH would turn basic if hydroxyl ions were to exceed hydrogen ions.
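The pH scale mentioned here is simply the negative base-10 logarithm of the hydrogen-ion concentration; a minimal sketch (standard chemistry values used purely for illustration, not taken from the text):

import math

def pH(hydrogen_ion_molarity):
    # pH = -log10([H+])
    return -math.log10(hydrogen_ion_molarity)

print(pH(1e-7))    # pure water: [H+] = [OH-] = 1e-7 M  -> pH 7, neutral
print(pH(1e-3))    # excess hydrogen ions               -> pH 3, acidic
print(pH(1e-11))   # excess hydroxyl ions               -> pH 11, basic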

Organic compounds

Organic compounds such as glucose are vital to organisms.

Organic compounds are molecules that contain carbon bonded to another element such as hydrogen.[38] With the exception of water, nearly all the molecules that make up each organism contain carbon.[38][41] Carbon has six electrons, two of which are located in its first shell, leaving four electrons in its valence shell. Thus, carbon can form covalent bonds with up to four other atoms, making it the most versatile atom on Earth as it is able to form diverse, large, and complex molecules.[38][41] For example, a single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide (CO2), or a triple covalent bond such as in carbon monoxide (CO). Moreover, carbon can form very long chains of interconnecting carbon–carbon bonds such as octane or ring-like structures such as glucose.

The simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other elements such as oxygen (O), hydrogen (H), phosphorus (P), and sulfur (S), which can change the chemical behavior of that compound.[38] Groups of atoms that contain these elements (O-, H-, P-, and S-) and are bonded to a central carbon atom or skeleton are called functional groups.[38] There are six prominent functional groups that can be found in organisms: amino groupcarboxyl groupcarbonyl grouphydroxyl groupphosphate group, and sulfhydryl group.[38]

In 1953, Stanley Miller and Harold Urey conducted a classic experiment (otherwise known as the Miller–Urey experiment), which showed that organic compounds could be synthesized abiotically within a closed system that mimicked the conditions of early Earth, leading them to conclude that complex organic molecules could have arisen spontaneously on early Earth, most likely near volcanoes, and could have been part of the early stages of abiogenesis (or the origin of life).[42][38]

Macromolecules

A phospholipid bilayer consists of two adjacent sheets of phospholipids, with the hydrophobic tails facing inwards and the hydrophilic heads facing outwards.

Macromolecules are large molecules made up of smaller molecular subunits that are joined together.[43] Small molecules such as sugars, amino acids, and nucleotides can act as single repeating units called monomers to form chain-like molecules called polymers via a chemical process called condensation.[44] For example, amino acids can form polypeptides whereas nucleotides can form strands of nucleic acid. Polymers make up three of the four macromolecules (polysaccharideslipidsproteins, and nucleic acids) that are found in all organisms. Each of these macromolecules plays a specialized role within any given cell.

Carbohydrates (or sugar) are molecules with the molecular formula (CH2O)n, with n being the number of carbon-hydrate groups.[45] They include monosaccharides (monomer), oligosaccharides (small polymers), and polysaccharides (large polymers). Monosaccharides can be linked together by glycosidic linkages, a type of covalent bond.[45] When two monosaccharides such as glucose and fructose are linked together, they can form a disaccharide such as sucrose.[45] When many monosaccharides are linked together, they can form an oligosaccharide or a polysaccharide, depending on the number of monosaccharides. Polysaccharides can vary in function. Monosaccharides such as glucose can be a source of energy and some polysaccharides can serve as storage material that can be hydrolyzed to provide cells with sugar.
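As a small worked example of the (CH2O)n formula (the atomic masses are standard values and the function name is ours, not from the text), glucose corresponds to n = 6:

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}   # g/mol

def carbohydrate_molar_mass(n):
    # Molar mass of a carbohydrate with formula (CH2O)_n
    return n * (ATOMIC_MASS["C"] + 2 * ATOMIC_MASS["H"] + ATOMIC_MASS["O"])

print(carbohydrate_molar_mass(1))   # one CH2O unit: ~30.0 g/mol
print(carbohydrate_molar_mass(6))   # glucose, C6H12O6: ~180.2 g/mol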

Lipids are the only class of macromolecules that are not made up of polymers. The most biologically important lipids are steroids, phospholipids, and fats.[44] These lipids are organic compounds that are largely nonpolar and hydrophobic.[46] Steroids are organic compounds that consist of four fused rings.[46] Phospholipids consist of glycerol that is linked to a phosphate group and two hydrocarbon chains (or fatty acids).[46] The glycerol and phosphate group together constitute the polar and hydrophilic (or head) region of the molecule whereas the fatty acids make up the nonpolar and hydrophobic (or tail) region.[46] Thus, when in water, phospholipids tend to form a phospholipid bilayer whereby the hydrophilic heads face outwards to interact with water molecules, while the hydrophobic tails face inwards towards other hydrophobic tails to avoid contact with water.[46]

The (a) primary, (b) secondary, (c) tertiary, and (d) quaternary structures of a hemoglobin protein

Proteins are the most diverse of the macromolecules; they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. The basic unit (or monomer) of a protein is an amino acid, which has a central carbon atom that is covalently bonded to a hydrogen atom, an amino group, a carboxyl group, and a side chain (or R-group, "R" for residue).[43] There are twenty amino acids that make up the building blocks of proteins, with each amino acid having its own unique side chain.[43] The polarity and charge of the side chains affect the solubility of amino acids. An amino acid with a side chain that is polar and electrically charged is soluble as it is hydrophilic, whereas an amino acid with a side chain that lacks a charged or an electronegative atom is hydrophobic and therefore tends to coalesce rather than dissolve in water.[43] Proteins have four distinct levels of organization (primary, secondary, tertiary, and quaternary). The primary structure consists of a unique sequence of amino acids that are covalently linked together by peptide bonds.[43] The side chains of the individual amino acids can then interact with each other, giving rise to the secondary structure of a protein.[43] The two common types of secondary structures are alpha helices and beta sheets.[43] The folding of alpha helices and beta sheets gives a protein its three-dimensional or tertiary structure. Finally, multiple tertiary structures can combine to form the quaternary structure of a protein.

Nucleic acids are polymers made up of monomers called nucleotides.[47] Their function is to store, transmit, and express hereditary information.[44] Nucleotides consist of a phosphate group, a five-carbon sugar, and a nitrogenous base. Ribonucleotides, which contain ribose as the sugar, are the monomers of ribonucleic acid (RNA). In contrast, deoxyribonucleotides contain deoxyribose as the sugar and constitute the monomers of deoxyribonucleic acid (DNA). RNA and DNA also differ with respect to one of their bases.[47] There are two types of bases: purines and pyrimidines.[47] The purines include guanine (G) and adenine (A), whereas the pyrimidines consist of cytosine (C), uracil (U), and thymine (T). Uracil is used in RNA whereas thymine is used in DNA. Taken together, when the different sugars and bases are taken into consideration, there are eight distinct nucleotides that can form two types of nucleic acids: DNA (A, G, C, and T) and RNA (A, G, C, and U).[47]
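The counting of the eight distinct nucleotides above can be made explicit with a few lines of Python (purely illustrative):

bases = {"DNA": ["A", "G", "C", "T"], "RNA": ["A", "G", "C", "U"]}
nucleotides = [(acid, base) for acid, base_list in bases.items() for base in base_list]
print(len(nucleotides))   # 8: four deoxyribonucleotides plus four ribonucleotides
print(nucleotides)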

Cell  theory  states that  cells  are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division.[48] Most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope.[49] There are generally two types of cells: eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. Prokaryotes are single-celled organisms such as bacteria, whereas eukaryotes can be single-celled or multicellular. In multicellular organisms, every cell in the organism's body is derived ultimately from a single cell in a fertilized egg.

Cell structure

Structure of an animal cell depicting various organelles

Every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space.[50] A cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. Cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions.[51] Cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane, serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes that shape the cell.[52] Cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling, and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton.

Structure of a plant cell

Within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids.[53] In addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially distinct units.[54] These organelles include the cell nucleus, which contains most of the cell's DNA, and mitochondria, which generate adenosine triphosphate (ATP) to power cellular processes. Other organelles such as the endoplasmic reticulum and Golgi apparatus play a role in the synthesis and packaging of proteins, respectively. Biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. Plant cells have additional organelles that distinguish them from animal cells, such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and the breakdown of plant seeds.[54] Eukaryotic cells also have a cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles.[54] In terms of their structural composition, the microtubules are made up of tubulin (e.g., α-tubulin and β-tubulin), whereas intermediate filaments are made up of fibrous proteins.[54] Microfilaments are made up of actin molecules that interact with other strands of proteins.[54]

Metabolism

Example of an enzyme-catalysed exothermic reaction

All cells require energy to sustain cellular processes. Energy is the capacity to do work, which, in thermodynamics, can be calculated using Gibbs free energy. According to the first law of thermodynamics, energy is conserved, i.e., cannot be created or destroyed. Hence, chemical reactions in a cell do not create new energy but are involved instead in the transformation and transfer of energy.[55] Nevertheless, all energy transfers lead to some loss of usable energy, which increases entropy (or state of disorder) as stated by the second law of thermodynamics. As a result, an organism requires continuous input of energy to maintain a low state of entropy. In cells, energy can be transferred as electrons during redox (reduction–oxidation) reactions, stored in covalent bonds, and generated by the movement of ions (e.g., hydrogen, sodium, potassium) across a membrane.

Metabolism is the set of life-sustaining chemical reactions in organisms. The three main purposes of metabolism are: the conversion of food to energy to run cellular processes; the conversion of food/fuel to building blocks for proteins, lipids, nucleic acids, and some carbohydrates; and the elimination of metabolic wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. Metabolic reactions may be categorized as catabolic—the breaking down of compounds (for example, the breaking down of glucose to pyruvate by cellular respiration); or anabolic—the building up (synthesis) of compounds (such as proteins, carbohydrates, lipids, and nucleic acids). Usually, catabolism releases energy, and anabolism consumes energy.

The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy that will not occur by themselves, by coupling them to spontaneous reactions that release energy. Enzymes act as catalysts—they allow a reaction to proceed more rapidly without being consumed by it—by reducing the amount of activation energy needed to convert reactants into products. Enzymes also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell's environment or to signals from other cells.
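To see why lowering the activation energy matters so much, one can plug numbers into the Arrhenius relation (a standard chemistry formula, not stated in the text above; the two barrier heights below are purely illustrative):

import math

R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # roughly body temperature, K

def rate_factor(activation_energy_j_per_mol):
    # Arrhenius factor exp(-Ea / RT); the shared prefactor cancels in the ratio below
    return math.exp(-activation_energy_j_per_mol / (R * T))

uncatalyzed = 75_000.0   # illustrative barrier without an enzyme, J/mol
catalyzed   = 50_000.0   # illustrative lower barrier with an enzyme, J/mol
print(rate_factor(catalyzed) / rate_factor(uncatalyzed))   # ~1.6e4-fold speed-up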

Cellular respiration

Respiration in a eukaryotic cell

Cellular respiration is a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into adenosine triphosphate (ATP), and then release waste products.[56] The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, releasing energy. Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it clearly does not resemble one when it occurs in a cell because of the slow, controlled release of energy from the series of reactions.

Sugar in the form of glucose is the main nutrient used by animal and plant cells in respiration. Cellular respiration involving oxygen is called aerobic respiration, which has four stages: glycolysis, the citric acid cycle (or Krebs cycle), the electron transport chain, and oxidative phosphorylation.[57] Glycolysis is a metabolic process that occurs in the cytoplasm whereby glucose is converted into two pyruvates, with two net molecules of ATP being produced at the same time.[57] Each pyruvate is then oxidized into acetyl-CoA by the pyruvate dehydrogenase complex, which also generates NADH and carbon dioxide. Acetyl-CoA enters the citric acid cycle, which takes place inside the mitochondrial matrix. At the end of the cycle, the total yield from 1 glucose (or 2 pyruvates) is 6 NADH, 2 FADH2, and 2 ATP molecules. The final stage is oxidative phosphorylation, which, in eukaryotes, occurs in the mitochondrial cristae. Oxidative phosphorylation comprises the electron transport chain, which is a series of four protein complexes that transfer electrons from one complex to another, thereby releasing energy from NADH and FADH2 that is coupled to the pumping of protons (hydrogen ions) across the inner mitochondrial membrane (chemiosmosis), which generates a proton motive force.[57] Energy from the proton motive force drives the enzyme ATP synthase to synthesize more ATP by phosphorylating ADP. The transfer of electrons terminates with molecular oxygen being the final electron acceptor.
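A rough bookkeeping sketch of the overall ATP yield per glucose, combining the per-stage counts given above with commonly quoted (approximate) conversion factors of about 2.5 ATP per NADH and 1.5 ATP per FADH2, plus the 2 NADH from glycolysis that standard textbooks add (not listed in the paragraph above); actual yields vary by cell type:

substrate_level_atp = 2 + 2      # 2 ATP from glycolysis + 2 ATP from the citric acid cycle
nadh = 2 + 2 + 6                 # glycolysis + pyruvate oxidation + citric acid cycle
fadh2 = 2                        # citric acid cycle
oxidative_atp = nadh * 2.5 + fadh2 * 1.5
print(substrate_level_atp + oxidative_atp)   # ~32 ATP per glucose (approximate)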

If oxygen is not present, pyruvate is not metabolized by cellular respiration but undergoes a process of fermentation. The pyruvate is not transported into the mitochondrion but remains in the cytoplasm, where it is converted to waste products that may be removed from the cell. This serves the purpose of oxidizing the electron carriers so that they can perform glycolysis again and removing the excess pyruvate. Fermentation oxidizes NADH to NAD+ so it can be re-used in glycolysis. In the absence of oxygen, fermentation prevents the buildup of NADH in the cytoplasm and provides NAD+ for glycolysis. The waste product varies depending on the organism. In skeletal muscles, the waste product is lactic acid. This type of fermentation is called lactic acid fermentation. In strenuous exercise, when energy demands exceed energy supply, the respiratory chain cannot process all of the hydrogen atoms joined by NADH. During anaerobic glycolysis, NAD+ regenerates when pairs of hydrogen combine with pyruvate to form lactate. Lactate formation is catalyzed by lactate dehydrogenase in a reversible reaction. Lactate can also be used as an indirect precursor for liver glycogen. During recovery, when oxygen becomes available, NAD+ attaches to hydrogen from lactate to form ATP. In yeast, the waste products are ethanol and carbon dioxide. This type of fermentation is known as alcoholic or ethanol fermentation. The ATP generated in this process is made by substrate-level phosphorylation, which does not require oxygen.

Photosynthesis

Photosynthesis changes sunlight into chemical energy, splits water to liberate O2, and fixes CO2 into sugar.

Photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism's metabolic activities via cellular respiration. This chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water.[58][59][60] In most cases, oxygen is also released as a waste product. Most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the Earth's atmosphere, and supplies most of the energy necessary for life on Earth.[61]

Photosynthesis has four stages: Light absorption, electron transport, ATP synthesis, and carbon fixation.[57] Light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. The absorbed light energy is used to remove electrons from a donor (water) to a primary electron acceptor, a quinone designated as Q. In the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of NADP+, which is reduced to NADPH, a process that takes place in a protein complex called photosystem I (PSI). The transport of electrons is coupled to the movement of protons (or hydrogen) from the stroma to the thylakoid membrane, which forms a pH gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. This is analogous to the proton-motive force generated across the inner mitochondrial membrane in aerobic respiration.[57]

During the third stage of photosynthesis, the movement of protons down their concentration gradient from the thylakoid lumen to the stroma through the ATP synthase is coupled to the synthesis of ATP by that same ATP synthase.[57] The NADPH and ATP generated by the light-dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate (RuBP), in a sequence of light-independent (or dark) reactions called the Calvin cycle.[62]

Cell signaling

Cell communication (or signaling) is the ability of cells to receive, process, and transmit signals with their environment and with themselves.[63][64] Signals can be non-chemical, such as light, electrical impulses, and heat, or chemical signals (or ligands) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside a cell.[65][64] There are generally four types of chemical signals: autocrine, paracrine, juxtacrine, and hormones.[65] In autocrine signaling, the ligand affects the same cell that releases it. Tumor cells, for example, can reproduce uncontrollably because they release signals that initiate their own self-division. In paracrine signaling, the ligand diffuses to nearby cells and affects them. For example, brain cells called neurons release ligands called neurotransmitters that diffuse across a synaptic cleft to bind with a receptor on an adjacent cell such as another neuron or muscle cell. In juxtacrine signaling, there is direct contact between the signaling and responding cells. Finally, hormones are ligands that travel through the circulatory systems of animals or vascular systems of plants to reach their target cells. Once a ligand binds with a receptor, it can influence the behavior of the receiving cell, depending on the type of receptor. For instance, neurotransmitters that bind with an ionotropic receptor can alter the excitability of a target cell. Other types of receptors include protein kinase receptors (e.g., the receptor for the hormone insulin) and G protein-coupled receptors. Activation of G protein-coupled receptors can initiate second messenger cascades. The process by which a chemical or physical signal is transmitted through a cell as a series of molecular events is called signal transduction.

Cell cycle

In meiosis, the chromosomes duplicate and the homologous chromosomes exchange genetic information during meiosis I. The daughter cells divide again in meiosis II to form haploid gametes.

The cell cycle is a series of events that take place in a cell that cause it to divide into two daughter cells. These events include the duplication of its DNA and some of its organelles, and the subsequent partitioning of its cytoplasm into two daughter cells in a process called cell division.[66] In eukaryotes (i.e., animal, plant, fungal, and protist cells), there are two distinct types of cell division: mitosis and meiosis.[67] Mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. Cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. In general, mitosis (division of the nucleus) is preceded by the S stage of interphase (during which the DNA is replicated) and is often followed by telophase and cytokinesis, which divides the cytoplasm, organelles, and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. The different stages of mitosis all together define the mitotic phase of an animal cell cycle—the division of the mother cell into two genetically identical daughter cells.[68] The cell cycle is a vital process by which a single-celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. After cell division, each of the daughter cells begins the interphase of a new cycle. In contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of DNA replication followed by two divisions.[69] Homologous chromosomes are separated in the first division (meiosis I), and sister chromatids are separated in the second division (meiosis II). Both of these cell division cycles are used at some point in the life cycle of sexually reproducing organisms. Both are believed to be present in the last eukaryotic common ancestor.

Prokaryotes (i.e., archaea and bacteria) can also undergo cell division (or binary fission). Unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus in the cell. Before binary fission, DNA in the bacterium is tightly coiled. After it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases in size to prepare for splitting. Growth of a new cell wall begins to separate the bacterium (triggered by FtsZ polymerization and "Z-ring" formation).[70] The new cell wall (septum) fully develops, resulting in the complete split of the bacterium. The new daughter cells have tightly coiled DNA rods, ribosomes, and plasmids.

Inheritance

Punnett square depicting a cross between two pea plants heterozygous for purple (B) and white (b) blossoms

Genetics is the scientific study of inheritance.[71][72][73] Mendelian inheritance, specifically, is the process by which genes and traits are passed on from parents to offspring.[32] It was formulated by Gregor Mendel, based on his work with pea plants in the mid-nineteenth century. Mendel established several principles of inheritance. The first is that genetic characteristics are discrete and occur in alternate forms, now called alleles (e.g., purple vs. white or tall vs. dwarf), each inherited from one of the two parents. His law of dominance and uniformity states that some alleles are dominant while others are recessive; an organism with at least one dominant allele will display the phenotype of that dominant allele.[74] Exceptions to this rule include penetrance and expressivity.[32] Mendel noted that during gamete formation, the alleles for each gene segregate from each other so that each gamete carries only one allele for each gene, which is stated by his law of segregation. Heterozygous individuals produce gametes with an equal frequency of the two alleles. Finally, Mendel formulated the law of independent assortment, which states that genes of different traits can segregate independently during the formation of gametes, i.e., genes are unlinked. An exception to this rule would include traits that are sex-linked. Test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype.[75] A Punnett square can be used to predict the results of a test cross. The chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by Thomas Morgan's experiments with fruit flies, which established the sex linkage between eye color and sex in these insects.[76] In humans and other mammals (e.g., dogs), it is not feasible or practical to conduct test cross experiments. Instead, pedigrees, which are genetic representations of family trees,[77] are used to trace the inheritance of a specific trait or disease through multiple generations.[78]
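
To illustrate how a Punnett square enumerates the offspring of a cross, the short sketch below crosses two pea plants heterozygous for the blossom-color alleles (B and b) shown in the figure caption; the 1:2:1 genotype ratio it reports is the classic monohybrid result.

from collections import Counter
from itertools import product

def punnett(parent1, parent2):
    """Enumerate offspring genotypes for a single gene, with one allele
    contributed by each parent's gamete."""
    offspring = Counter()
    for a, b in product(parent1, parent2):
        # Sort so that 'Bb' and 'bB' are counted as the same genotype.
        offspring["".join(sorted((a, b)))] += 1
    return offspring

# Cross of two heterozygotes (B = purple, b = white), as in the figure.
print(punnett("Bb", "Bb"))  # Counter({'Bb': 2, 'BB': 1, 'bb': 1}) -> 1:2:1 ratio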

DNA

Bases lie between two spiraling DNA strands.

A gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid (DNA) that carries genetic information that influences the form or function of an organism in specific ways. DNA is a molecule composed of two polynucleotide chains that coil around each other to form a double helix, which was first described by James Watson and Francis Crick in 1953.[79] It is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. A chromosome is an organized structure consisting of DNA and histones. The set of chromosomes in a cell and any other hereditary information found in the mitochondria, chloroplasts, or other locations is collectively known as a cell's genome. In eukaryotes, genomic DNA is localized in the cell nucleus, or with small amounts in mitochondria and chloroplasts.[80] In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid.[81] The genetic information in a genome is held within genes, and the complete assemblage of this information in an organism is called its genotype.[82] Genes encode the information needed by cells for the synthesis of proteins, which in turn play a central role in influencing the final phenotype of the organism.

The two polynucleotide strands that make up DNA run in opposite directions to each other and are thus antiparallel. Each strand is composed of nucleotides,[83][84] with each nucleotide containing one of four nitrogenous bases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. It is the sequence of these four bases along the backbone that encodes genetic information. Bases of the two polynucleotide strands are bound together by hydrogen bonds, according to base pairing rules (A with T and C with G), to make double-stranded DNA. The bases are divided into two groups: pyrimidines and purines. In DNA, the pyrimidines are thymine and cytosine whereas the purines are adenine and guanine.
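
The base-pairing rules just described translate directly into code; the minimal sketch below derives the complementary, antiparallel partner of one DNA strand.

# Complementary base-pairing (A-T, C-G) applied to one DNA strand.
# Because the strands are antiparallel, the complement is read in the
# reverse direction to give the partner strand in the 5'->3' orientation.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand):
    return "".join(COMPLEMENT[base] for base in reversed(strand.upper()))

print(reverse_complement("ATGCGT"))  # ACGCAT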

There are grooves that run along the entire length of the double helix due to the uneven spacing of the DNA strands relative to each other.[79] The two grooves differ in size, with the major groove being larger and therefore more accessible to the binding of proteins than the minor groove.[79] The outer edges of the bases are exposed to these grooves and are therefore accessible for additional hydrogen bonding.[79] Because each groove can have two possible base-pair configurations (G-C and A-T), there are four possible base-pair configurations within the entire double helix, each of which is chemically distinct from the others.[79] As a result, protein molecules are able to recognize and bind to specific base-pair sequences, which is the basis of specific DNA-protein interactions.

DNA replication is a semiconservative process whereby each strand serves as a template for a new strand of DNA.[79] The process begins with the unwinding of the double helix at an origin of replication, which separates the two strands, thereby making them available as two templates. This is then followed by the binding of the enzyme primase to the template to synthesize a starter RNA (or DNA in some viruses) strand called a primer in the 5’ to 3’ direction.[79] Once the primer is completed, the primase is released from the template, followed by the binding of the enzyme DNA polymerase to the same template to synthesize new DNA. The rate of DNA replication in a living cell was measured as 749 nucleotides added per second under ideal conditions.[85]

DNA replication is not perfect, as the DNA polymerase sometimes inserts bases that are not complementary to the template (e.g., inserting an A opposite a G in the template strand).[79] In eukaryotes, the initial error or mutation rate is about 1 in 100,000.[79] Proofreading and mismatch repair are the two mechanisms that repair these errors, reducing the mutation rate to about 10⁻¹⁰, particularly before and after a cell cycle.[79]
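
To put these rates in perspective, the short calculation below estimates the expected number of uncorrected errors per genome replication. The genome size of 3 × 10⁹ base pairs is an illustrative, roughly human-sized figure and is an assumption of the example, while the error rates are those quoted above.

# Expected replication errors before and after proofreading/mismatch repair.
# The ~3e9 bp genome size is an illustrative, human-sized assumption; the
# error rates (1e-5 before repair, 1e-10 after) are the ones quoted above.
genome_size_bp = 3e9
rate_before_repair = 1e-5
rate_after_repair = 1e-10

print(f"errors before repair: ~{genome_size_bp * rate_before_repair:,.0f}")  # ~30,000
print(f"errors after repair:  ~{genome_size_bp * rate_after_repair:.1f}")    # ~0.3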

Mutations are heritable changes in DNA.[79] They can arise spontaneously as a result of replication errors that were not corrected by proofreading, or can be induced by an environmental mutagen such as a chemical (e.g., nitrous acid, benzopyrene) or radiation (e.g., x-rays, gamma rays, ultraviolet radiation, or particles emitted by unstable isotopes).[79] Mutations can appear as a change in a single base or at a larger scale involving chromosomal mutations such as deletions, inversions, or translocations.[79]

In multicellular organisms, mutations can occur in somatic or germline cells.[79] In somatic cells, the mutations are passed on to daughter cells during mitosis.[79] In a germline cell such as a sperm or an egg, the mutation will appear in an organism at fertilization.[79] Mutations can lead to several types of phenotypic effects such as silent, loss-of-function, gain-of-function, and conditional mutations.[79]

Some mutations can be beneficial, as they are a source of genetic variation for evolution.[79] Others can be harmful if they result in a loss of function of genes needed for survival.[79] Mutagens such as carcinogens are typically avoided as a matter of public health policy.[79] One example is the banning of chlorofluorocarbons (CFCs) by the Montreal Protocol, as CFCs tend to deplete the ozone layer, resulting in more ultraviolet radiation from the sun passing through the Earth's upper atmosphere, thereby causing somatic mutations that can lead to skin cancer.[79] Similarly, smoking bans have been enforced throughout the world in an effort to reduce the incidence of lung cancer.[79]

Gene expression

The extended central dogma of molecular biology includes all the processes involved in the flow of genetic information.

Gene expression is the molecular process by which a genotype gives rise to a phenotype, i.e., observable trait. The genetic information stored in DNA represents the genotype, whereas the phenotype results from the synthesis of proteins that control an organism's structure and development, or that act as enzymes catalyzing specific metabolic pathways. This process is summarized by the central dogma of molecular biology, which was formulated by Francis Crick in 1958.[86][87][88] According to the Central Dogma, genetic information flows from DNA to RNA to protein. Hence, there are two gene expression processes: transcription (DNA to RNA) and translation (RNA to protein).[89] These processes are used by all life—eukaryotes (including multicellular organisms), prokaryotes (bacteria and archaea), and are exploited by viruses—to generate the macromolecular machinery for life.

During transcription, messenger RNA (mRNA) strands are created using DNA strands as a template, which is initiated when RNA polymerase binds to a DNA sequence called a promoter, which instructs the RNA polymerase to begin transcription of one of the two DNA strands.[90] The DNA bases are exchanged for their corresponding bases except in the case of thymine (T), for which RNA substitutes uracil (U).[91] In eukaryotes, a large part of the DNA (e.g., more than 98% in humans) is non-coding, including introns, which do not serve as templates for protein sequences. The coding regions or exons are interspersed along with the introns in the primary transcript (or pre-mRNA).[90] Before translation, the pre-mRNA undergoes further processing whereby the introns are removed (or spliced out), leaving only the spliced exons in the mature mRNA strand.[90]

The translation of mRNA to protein occurs in ribosomes, whereby the transcribed mRNA strand specifies the sequence of amino acids within proteins using the genetic code. Gene products are often proteins, but in non-protein-coding genes such as transfer RNA (tRNA) and small nuclear RNA (snRNA), the product is a functional non-coding RNA.[92][93]
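
A minimal sketch of these two steps of the central dogma is shown below: transcription copies the template DNA into mRNA (with uracil replacing thymine), and translation reads the mRNA three bases at a time using the genetic code. Only a handful of the 64 codons are included here, and the example sequence is hypothetical.

# Transcription: the mRNA is complementary to the template DNA strand,
# with uracil (U) in place of thymine (T).
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

# A small excerpt of the genetic code (codon -> amino acid); the full
# table has 64 codons, including three stop codons.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "UAA": "STOP"}

def transcribe(template_strand):
    return "".join(DNA_TO_RNA[base] for base in template_strand)

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):           # read one codon at a time
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("TACAAACCGTTTATT")  # hypothetical template, written 3'->5'
print(mrna)                           # AUGUUUGGCAAAUAA
print(translate(mrna))                # ['Met', 'Phe', 'Gly', 'Lys']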

Gene regulation

Regulation of various stages of gene expression

The regulation of gene expression (or gene regulation) by environmental factors and during different stages of development can occur at each step of the process, such as transcription, RNA splicing, translation, and post-translational modification of a protein.[94]

The ability of gene transcription to be regulated allows for the conservation of energy, as cells will only make proteins when needed.[94] Gene expression can be influenced by positive or negative regulation, depending on which of the two types of regulatory proteins called transcription factors bind to the DNA sequence close to or at a promoter.[94] A cluster of genes that share the same promoter is called an operon, found mainly in prokaryotes and some lower eukaryotes (e.g., Caenorhabditis elegans).[94][95] The operon was first identified in Escherichia coli—a prokaryotic cell that can be found in the intestines of humans and other animals—in the 1960s by François Jacob and Jacques Monod.[94] They studied the prokaryotic cell's lac operon, which comprises three genes (lacZ, lacY, and lacA) that encode three lactose-metabolizing enzymes (β-galactosidase, β-galactoside permease, and β-galactoside transacetylase).[94] In positive regulation of gene expression, the activator is the transcription factor that stimulates transcription when it binds to the sequence near or at the promoter. In contrast, negative regulation occurs when another transcription factor called a repressor binds to a DNA sequence called an operator, which is part of an operon, to prevent transcription. When a repressor binds to a repressible operon (e.g., the trp operon), it does so only in the presence of a corepressor. Repressors can be inhibited by compounds called inducers (e.g., allolactose), which exert their effects by binding to a repressor to prevent it from binding to an operator, thereby allowing transcription to occur.[94] Specific genes that can be activated by inducers are called inducible genes (e.g., lacZ or lacA in E. coli), in contrast to constitutive genes that are almost always active.[94] In contrast to both, structural genes encode proteins that are not involved in gene regulation.[94]
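
The repressor–inducer logic of the lac operon described above can be captured as a small truth-table sketch: in this simplified model, transcription proceeds whenever the operator is free, which happens when the inducer (allolactose) is present or the repressor is non-functional. Catabolite repression by glucose is deliberately ignored here.

def lac_operon_transcribed(inducer_present, repressor_functional=True):
    """Toy model of negative regulation at the lac operon.

    The repressor blocks transcription by sitting on the operator unless an
    inducer (allolactose) binds it; a non-functional repressor (e.g., from a
    mutated repressor gene) also leaves the operator free. Catabolite
    repression is ignored in this simplified sketch.
    """
    repressor_bound_to_operator = repressor_functional and not inducer_present
    return not repressor_bound_to_operator

print(lac_operon_transcribed(inducer_present=False))  # False: repressor blocks transcription
print(lac_operon_transcribed(inducer_present=True))   # True: inducer frees the operator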

In prokaryotic cells, transcription is regulated by proteins called sigma factors, which bind to RNA polymerase and direct it to specific promoters.[94] Similarly, transcription factors in eukaryotic cells can also coordinate the expression of a group of genes, even if the genes themselves are located on different chromosomes.[94] Coordination of these genes can occur as long as they share the same regulatory DNA sequences that bind to the same transcription factors.[94] Promoters in eukaryotic cells are more diverse but tend to contain a core sequence that RNA polymerase can bind to, with the most common sequence being the TATA box, which contains multiple repeating A and T bases.[94] Specifically, RNA polymerase II is the RNA polymerase that binds to a promoter to initiate transcription of protein-coding genes in eukaryotes, but only in the presence of multiple general transcription factors, which are distinct from the transcription factors that have regulatory effects, i.e., activators and repressors.[94] In eukaryotic cells, DNA sequences that bind with activators are called enhancers, whereas those sequences that bind with repressors are called silencers.[94] Transcription factors such as nuclear factor of activated T-cells (NFAT) are able to identify specific nucleotide sequences based on the base sequence (e.g., CGAGGAAAATTG for NFAT) of the binding site, which determines the arrangement of the chemical groups within that sequence that allows for specific DNA-protein interactions.[94] The expression of transcription factors is what underlies cellular differentiation in a developing embryo.[94]

In addition to regulatory events involving the promoter, gene expression can also be regulated by epigenetic changes to chromatin, which is a complex of DNA and protein found in eukaryotic cells.[94]

Post-transcriptional control of mRNA can involve the alternative splicing of primary mRNA transcripts, resulting in a single gene giving rise to different mature mRNAs that encode a family of different proteins.[94][96] A well-studied example is the Sxl gene in Drosophila, which determines the sex of these animals. The gene itself contains four exons, and alternative splicing of its pre-mRNA transcript can generate two active forms of the Sxl protein in female flies and one inactive form of the protein in males.[94] Another example is the human immunodeficiency virus (HIV), which has a single pre-mRNA transcript that can generate up to nine proteins as a result of alternative splicing.[94] In humans, eighty percent of all 21,000 genes are alternatively spliced.[94] Given that both chimpanzees and humans have a similar number of genes, it is thought that alternative splicing might have contributed to human complexity, as alternative splicing is more extensive in the human brain than in the brain of chimpanzees.[94]
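
As a toy illustration of how alternative splicing multiplies the number of possible mature mRNAs, the sketch below enumerates transcripts from a hypothetical four-exon gene in which the two internal exons may each be included or skipped; real splicing patterns are regulated and far more constrained.

from itertools import product

# Hypothetical gene: exons 1 and 4 are always included; exons 2 and 3 are optional.
optional = ["exon2", "exon3"]

transcripts = []
for choice in product([False, True], repeat=len(optional)):
    included = [exon for exon, keep in zip(optional, choice) if keep]
    transcripts.append(["exon1", *included, "exon4"])

for transcript in transcripts:
    print("-".join(transcript))
# exon1-exon4
# exon1-exon3-exon4
# exon1-exon2-exon4
# exon1-exon2-exon3-exon4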

Translation can be regulated in three known ways, one of which involves the binding of tiny RNA molecules called microRNA (miRNA) to a target mRNA transcript, which inhibits its translation and causes it to degrade.[94] Translation can also be inhibited by modification of the 5’ cap, whereby the modified guanosine triphosphate (GTP) at the 5’ end of an mRNA is replaced with an unmodified GTP molecule.[94] Finally, translational repressor proteins can bind to mRNAs and prevent them from attaching to a ribosome, thereby blocking translation.[94]

Once translated, the stability of proteins can be regulated by targeting them for degradation.[94] A common example is when an enzyme attaches a regulatory protein called ubiquitin to a lysine residue of a targeted protein.[94] Other ubiquitins are then attached to the primary ubiquitin to form a polyubiquitinated protein, which then enters a much larger protein complex called the proteasome.[94] Once the polyubiquitinated protein enters the proteasome, the polyubiquitin detaches from the target protein, which is unfolded by the proteasome in an ATP-dependent manner, allowing it to be hydrolyzed by three proteases.[94]

Genomes

Composition of the human genome

A genome is an organism's complete set of DNA, including all of its genes.[97] Sequencing and analysis of genomes can be done using high-throughput DNA sequencing and bioinformatics to assemble and analyze the function and structure of entire genomes.[98][99][100] The genomes of prokaryotes are small, compact, and diverse. In contrast, the genomes of eukaryotes are larger and more complex, with more regulatory sequences, and much of the genome is made up of non-coding DNA, including sequences for functional RNA (e.g., rRNA and tRNA) and regulatory sequences. The genomes of various model organisms such as arabidopsis, fruit fly, mice, nematodes, and yeast have been sequenced. The Human Genome Project was a major undertaking by the international scientific community to sequence the entire human genome, which was completed in 2003.[101] The sequencing of the human genome has yielded practical applications such as DNA fingerprinting, which can be used for paternity testing and forensics. In medicine, sequencing of the entire human genome has allowed for the identification of mutations that cause tumors as well as genes that cause a specific genetic disorder.[101] The sequencing of genomes from various organisms has led to the emergence of comparative genomics, which aims to draw comparisons of genes from the genomes of those different organisms.[101]

Many genes encode more than one protein, with post-translational modifications increasing the diversity of proteins within a cell. An organism's proteome is its entire set of proteins expressed by its genome, and proteomics seeks to study this complete set of proteins produced by an organism.[101] Because many proteins are enzymes, their activities tend to affect the concentrations of substrates and products. Thus, as the proteome changes, so do the amounts of small molecules, or metabolites.[101] The complete set of small molecules in a cell or organism is called the metabolome, and metabolomics is the study of the metabolome in relation to the physiological activity of a cell or organism.[101]

Biotechnology

Construction of recombinant DNA, in which a foreign DNA fragment is inserted into a plasmid vector

Biotechnology is the use of cells or organisms to develop products for humans.[102] One commonly used technology with wide applications is the creation of recombinant DNA, which is a DNA molecule assembled from two or more sources in a laboratory. Before the advent of polymerase chain reaction, biologists would manipulate DNA by cutting it into smaller fragments using restriction enzymes. They would then purify and analyze the fragments using gel electrophoresis and then later recombine the fragments into a novel DNA sequence using DNA ligase.[102] The recombinant DNA is then cloned by inserting it into a host cell, a process known as transformation if the host cells were bacteria such as E. coli, or transfection if the host cells were eukaryotic cells like yeast, plant, or animal cells. Once the host cell or organism has received and integrated the recombinant DNA, it is described as transgenic.[102]
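
As an illustration of the cutting step, the sketch below locates EcoRI recognition sites (GAATTC, cut after the first base) in a short, made-up sequence and reports the fragment lengths that gel electrophoresis would resolve.

def digest(sequence, site="GAATTC", cut_offset=1):
    """Cut a linear DNA sequence at every occurrence of a restriction site.

    EcoRI recognises GAATTC and cuts after the first base (G^AATTC); only
    the fragments of one strand are returned in this sketch.
    """
    fragments, start, pos = [], 0, 0
    while True:
        pos = sequence.find(site, pos)
        if pos == -1:
            break
        fragments.append(sequence[start:pos + cut_offset])
        start = pos + cut_offset
        pos += 1
    fragments.append(sequence[start:])
    return fragments

pieces = digest("ATGAATTCGGGCCCGAATTCTT")  # made-up sequence with two EcoRI sites
print(pieces)                      # ['ATG', 'AATTCGGGCCCG', 'AATTCTT']
print([len(p) for p in pieces])    # fragment lengths, as resolved on a gel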

A recombinant DNA can be inserted in one of two ways. A common method is to simply insert the DNA into a host chromosome, with the site of insertion being random.[102] Another approach is to insert the recombinant DNA as part of another DNA sequence called a vector, which then integrates into the host chromosome or has its own origin of DNA replication, thereby allowing it to replicate independently of the host chromosome.[102] Plasmids from bacterial cells such as E. coli are typically used as vectors due to their relatively small size (e.g., 2000-6000 base pairs in E. coli), the presence of restriction sites, genes that confer resistance to antibiotics, and the presence of an origin of replication.[102] A gene coding for a selectable marker such as antibiotic resistance is also incorporated into the vector.[102] Inclusion of this marker allows for the selection of only those host cells that contain the recombinant DNA while discarding those that do not.[102] Moreover, the marker also serves as a reporter gene that, once expressed, can be easily detected and measured.[102]

Once the recombinant DNA is inside individual bacterial cells, those cells are then plated and allowed to grow into a colony that contains millions of transgenic cells that carry the same recombinant DNA.[103] These transgenic cells then produce large quantities of the transgene product such as human insulin, which was the first medicine to be made using recombinant DNA technology.[102]

One of the goals of molecular cloning is to identify the function of specific DNA sequences and the proteins they encode.[102] For a specific DNA sequence to be studied and manipulated, millions of copies of DNA fragments containing that DNA sequence need to be made.[102] This involves breaking down an intact genome, which is much too large to be introduced into a host cell, into smaller DNA fragments. Although no longer intact, the collection of these DNA fragments still makes up an organism's genome, with the collection itself being referred to as a genomic library, due to the ability to search and retrieve specific DNA fragments for further study, analogous to the process of retrieving a book from a regular library.[102] DNA fragments can be obtained using restriction enzymes and other processes such as mechanical shearing. Each obtained fragment is then inserted into a vector that is taken up by a bacterial host cell. The host cell is then allowed to proliferate on a selective medium (e.g., one requiring antibiotic resistance), which produces a colony of these recombinant cells, each of which contains many copies of the same DNA fragment.[102] These colonies can be grown by spreading them over a solid medium in Petri dishes, which are incubated at a suitable temperature. One dish alone can hold thousands of bacterial colonies, which can be easily screened for a specific DNA sequence.[102] The sequence can be identified by first duplicating a Petri dish with bacterial colonies and then probing the DNA of the duplicated colonies by hybridization with complementary nucleotide probes labeled radioactively or fluorescently.[102]

Smaller DNA libraries that contain genes from a specific tissue can be created using complementary DNA (cDNA).[102] The collection of these cDNAs from a specific tissue at a particular time is called a cDNA library, which provides a "snapshot" of transcription patterns of cells at a specific location and time.[102]

Other biotechnology tools include DNA microarrays, expression vectors, synthetic genomics, and CRISPR gene editing.[102][104] Other approaches such as pharming can produce large quantities of medically useful products through the use of genetically modified organisms.[102] Many of these other tools also have wide applications such as creating medically useful proteins, or improving plant cultivation and animal husbandry.[102]

Genes, development, and evolution

Model of concentration gradient building up; fine yellow-orange outlines are cell boundaries.[105]

Development is the process by which a multicellular organism (plant or animal) goes through a series of changes, starting from a single cell, and taking on various forms that are characteristic of its life cycle.[106] There are four key processes that underlie development: determination, differentiation, morphogenesis, and growth. Determination sets the developmental fate of a cell, which becomes more restrictive during development. Differentiation is the process by which specialized cells arise from less specialized cells such as stem cells.[107][108] Stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell.[109] Cellular differentiation dramatically changes a cell's size, shape, membrane potential, metabolic activity, and responsiveness to signals, which are largely due to highly controlled modifications in gene expression and epigenetics. With a few exceptions, cellular differentiation almost never involves a change in the DNA sequence itself.[110] Thus, different cells can have very different physical characteristics despite having the same genome. Morphogenesis, or development of body form, is the result of spatial differences in gene expression.[106] Specifically, the organization of differentiated tissues into specific structures such as arms or wings, which is known as pattern formation, is governed by morphogens, signaling molecules that move from one group of cells to surrounding cells, creating a morphogen gradient as described by the French flag model. Apoptosis, or programmed cell death, also occurs during morphogenesis, such as the death of cells between digits in human embryonic development, which frees up individual fingers and toes. Expression of transcription factor genes can determine organ placement in a plant, and a cascade of transcription factors themselves can establish body segmentation in a fruit fly.[106]
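
The French flag model mentioned above can be sketched numerically: each cell reads a morphogen concentration that decays with distance from the source and adopts one of three fates depending on which threshold that concentration exceeds. The decay constant and thresholds below are arbitrary illustrative values.

import math

def cell_fate(distance, c0=1.0, decay=0.3, high=0.6, low=0.25):
    """Assign a cell fate from a morphogen gradient (French flag model).

    The morphogen concentration decays exponentially with distance from the
    source; the decay rate and the two thresholds separating 'blue', 'white',
    and 'red' fates are arbitrary illustrative values.
    """
    concentration = c0 * math.exp(-decay * distance)
    if concentration >= high:
        return "blue"
    if concentration >= low:
        return "white"
    return "red"

print([cell_fate(d) for d in range(10)])
# ['blue', 'blue', 'white', 'white', 'white', 'red', 'red', 'red', 'red', 'red']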

A small fraction of the genes in an organism's genome called the developmental-genetic toolkit control the development of that organism. These toolkit genes are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. Differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. Among the most important toolkit genes are the Hox genes. Hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva.[111] Variations in the toolkit may have produced a large part of the morphological evolution of animals. The toolkit can drive evolution in two ways. A toolkit gene can be expressed in a different pattern, as when the beak of Darwin's large ground-finch was enlarged by the BMP gene,[112] or when snakes lost their legs as Distal-less (Dlx) genes became under-expressed or not expressed at all in the places where other reptiles continued to form their limbs.[113] Or, a toolkit gene can acquire a new function, as seen in the many functions of that same gene, Distal-less, which controls such diverse structures as the mandible in vertebrates,[114][115] legs and antennae in the fruit fly,[116] and the eyespot pattern in butterfly wings.[117] Given that small changes in toolkit genes can cause significant changes in body structures, they have often enabled convergent or parallel evolution.

Evolutionary processes

Natural selection for darker traits

A central organizing concept in biology is that life changes and develops through evolution, which is the change in heritable characteristics of populations over successive generations.[118][119] Evolution is now used to explain the great variations of life on Earth. The term evolution was introduced into the scientific lexicon by Jean-Baptiste de Lamarck in 1809.[120][121] He proposed that evolution occurred as a result of the inheritance of acquired characteristics, an idea that proved unconvincing, but no alternative explanation existed at the time.[120] Charles Darwin, an English naturalist, returned to England in 1836 from his five-year voyage on HMS Beagle, during which he studied rocks and collected plants and animals from various parts of the world such as the Galápagos Islands.[120] He had also read Principles of Geology by Charles Lyell and An Essay on the Principle of Population by Thomas Malthus and was influenced by them.[122] Based on his observations and readings, Darwin began to formulate his theory of evolution by natural selection to explain the diversity of plants and animals in different parts of the world.[120][122] Alfred Russel Wallace, another English naturalist who had studied plants and animals in the Malay Archipelago, came to the same idea, later and independently of Darwin.[120] Darwin's essay and Wallace's manuscript were jointly presented at the Linnean Society of London in 1858, giving both naturalists credit for the discovery of evolution by natural selection.[120][123][124][125][126] Darwin went on to publish his book On the Origin of Species in 1859, which explained in detail how the process of evolution by natural selection works.[120]

To explain natural selection, Darwin drew an analogy with humans modifying animals through artificial selection, whereby animals were selectively bred for specific traits, giving rise to individuals that no longer resemble their wild ancestors.[122] Darwin argued that in the natural world, it was nature that played the role of humans in selecting for specific traits. He came to this conclusion based on two observations and two inferences.[122] First, members of any population tend to vary with respect to their heritable traits. Second, all species tend to produce more offspring than can be supported by their respective environments, resulting in many individuals not surviving and reproducing.[122] Based on these observations, Darwin inferred that those individuals who possess heritable traits that are better adapted to their environments are more likely to survive and produce more offspring than other individuals.[122] He further inferred that the unequal or differential survival and reproduction of certain individuals over others will lead to the accumulation of favorable traits over successive generations, thereby increasing the match between organisms and their environment.[122][127][128] Thus, taken together, natural selection is the differential survival and reproduction of individuals in subsequent generations due to differences in one or more heritable traits.[129][122][120]

Darwin was not aware of Mendel's work on inheritance, so the exact mechanism of inheritance that underlies natural selection was not well understood[130] until the early 20th century, when the modern synthesis reconciled Darwinian evolution with classical genetics, establishing a neo-Darwinian perspective of evolution by natural selection.[129] This perspective holds that evolution occurs when there are changes in the allele frequencies within a population of interbreeding organisms. In the absence of any evolutionary process acting on a large, randomly mating population, the allele frequencies will remain constant across generations as described by the Hardy–Weinberg principle.[131]
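
A brief numerical sketch of the Hardy–Weinberg principle: for a locus with two alleles at frequencies p and q = 1 − p, the expected genotype frequencies are p², 2pq, and q², and in the absence of evolutionary forces they remain the same in every generation.

def hardy_weinberg(p):
    """Expected genotype frequencies for a two-allele locus at equilibrium."""
    q = 1.0 - p
    return {"AA": p ** 2, "Aa": 2 * p * q, "aa": q ** 2}

freqs = hardy_weinberg(0.7)
print(freqs)                 # approximately {'AA': 0.49, 'Aa': 0.42, 'aa': 0.09}
print(sum(freqs.values()))   # ~1.0; the frequencies stay constant across generations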

Another process that drives evolution is genetic drift, which is the random fluctuations of allele frequencies within a population from one generation to the next.[132] When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward at each successive generation because the alleles are subject to sampling error.[133] This drift halts when an allele eventually becomes fixed, either by disappearing from the population or replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone.
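
Drift can be illustrated with a simple simulation in which each generation's allele count is obtained by random sampling from the previous generation's frequency (a Wright–Fisher-style model, which is an assumption of this sketch rather than something specified above); in a small population the frequency wanders until the allele is lost or fixed.

import random

def drift(pop_size=20, freq=0.5, generations=200, seed=1):
    """Simulate neutral genetic drift of one allele in a diploid population.

    Each generation, 2N allele copies are drawn by random sampling from the
    current frequency (a Wright-Fisher-style model); the frequency stops
    changing once the allele is lost (0.0) or fixed (1.0).
    """
    rng = random.Random(seed)
    copies = 2 * pop_size
    trajectory = [freq]
    for _ in range(generations):
        count = sum(rng.random() < freq for _ in range(copies))
        freq = count / copies
        trajectory.append(freq)
        if freq in (0.0, 1.0):
            break
    return trajectory

print(drift()[:10])  # the allele frequency wanders at random until loss or fixation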

Speciation

A species is a group of organisms that mate with one another, and speciation is the process by which one lineage splits into two lineages as a result of having evolved independently of each other.[134] For speciation to occur, there has to be reproductive isolation.[134] Reproductive isolation can result from incompatibilities between genes as described by the Bateson–Dobzhansky–Muller model. Reproductive isolation also tends to increase with genetic divergence. Speciation can occur when there are physical barriers that divide an ancestral species, a process known as allopatric speciation.[134] In contrast, sympatric speciation occurs in the absence of physical barriers.

Pre-zygotic isolating mechanisms such as mechanical, temporal, behavioral, habitat, and gametic isolation can prevent different species from hybridizing.[134] Similarly, post-zygotic isolating mechanisms can result in hybridization being selected against due to the lower viability of hybrids or hybrid infertility (e.g., the mule). Hybrid zones can emerge if there is incomplete reproductive isolation between two closely related species.

Phylogeny

Phylogenetic tree showing the domains of bacteria, archaea, and eukaryotes

A phylogeny is an evolutionary history of a specific group of organisms or their genes.[135] It can be represented using a phylogenetic tree, which is a diagram showing lines of descent among organisms or their genes. Each line drawn on the time axis of a tree represents a lineage of descendants of a particular species or population. When a lineage divides into two, it is represented as a node (or split) on the phylogenetic tree. The more splits there are over time, the more branches there will be on the tree, with the common ancestor of all the organisms in that tree being represented by the root of that tree. Phylogenetic trees may portray the evolutionary history of all life forms, a major evolutionary group (e.g., insects), or an even smaller group of closely related species. Within a tree, any group of species designated by a name is a taxon (e.g., humans, primates, mammals, or vertebrates), and a taxon that consists of a common ancestor and all of its evolutionary descendants is a clade, otherwise known as a monophyletic taxon.[135] Closely related species are referred to as sister species, and closely related clades are sister clades. In contrast to a monophyletic group, a polyphyletic group does not include the most recent common ancestor of all its members, whereas a paraphyletic group does not include all the descendants of a common ancestor.[135]

Phylogenetic trees are the basis for comparing and grouping different species.[135] Different species that share a feature inherited from a common ancestor are described as having homologous features (or synapomorphies).[136][137][135] Homologous features may be any heritable traits such as DNA sequences, protein structures, anatomical features, and behavior patterns. A vertebral column is an example of a homologous feature shared by all vertebrate animals. Traits that have a similar form or function but were not derived from a common ancestor are described as analogous features. Phylogenies can be reconstructed for a group of organisms of primary interest, which is called the ingroup. A species or group that is closely related to the ingroup but is phylogenetically outside of it is called the outgroup, which serves as a reference point in the tree. The root of the tree is located between the ingroup and the outgroup.[135] When phylogenetic trees are reconstructed, multiple trees with different evolutionary histories can be generated. Based on the principle of parsimony (or Occam's razor), the tree that is favored is the one that requires the fewest evolutionary changes to be assumed across all traits in all groups. Computational algorithms can be used to determine how a tree might have evolved given the evidence.[135]
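
The parsimony criterion can be made concrete with Fitch's small-parsimony algorithm, which counts the minimum number of state changes a given tree requires for a single character. The tiny hard-coded tree and character states below are purely hypothetical.

def fitch_changes(tree, states):
    """Minimum number of character-state changes on a rooted binary tree
    (Fitch's small-parsimony algorithm). `tree` is a nested tuple of leaf
    names; `states` maps each leaf to its observed character state."""
    changes = 0

    def visit(node):
        nonlocal changes
        if isinstance(node, str):               # leaf: its state set is fixed
            return {states[node]}
        left, right = (visit(child) for child in node)
        if left & right:                        # children can agree on a state
            return left & right
        changes += 1                            # otherwise a change must be assumed
        return left | right

    visit(tree)
    return changes

# Hypothetical tree ((human, chimp), (mouse, fish)) and one character.
tree = (("human", "chimp"), ("mouse", "fish"))
states = {"human": "A", "chimp": "A", "mouse": "A", "fish": "G"}
print(fitch_changes(tree, states))  # 1 change is the most parsimonious explanation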

Phylogeny provides the basis of biological classification, which is based on the Linnaean taxonomy developed by Carl Linnaeus in the 18th century.[135] This classification system is rank-based, with the highest rank being the domain, followed by kingdom, phylum, class, order, family, genus, and species.[135] All organisms can be classified as belonging to one of three domains: Archaea (originally Archaebacteria), Bacteria (originally Eubacteria), or Eukarya (which includes the protist, fungi, plant, and animal kingdoms).[138] A binomial nomenclature is used to classify different species. Based on this system, each species is given two names, one for its genus and another for its species.[135] For example, humans are Homo sapiens, with Homo being the genus and sapiens being the species. By convention, the scientific names of organisms are italicized, with only the first letter of the genus capitalized.[139][140]

History of life

The history of life on Earth traces the processes by which organisms have evolved from the earliest emergence of life to the present day. Earth formed about 4.5 billion years ago and all life on Earth, both living and extinct, descended from a last universal common ancestor that lived about 3.5 billion years ago.[141][142] The dating of the Earth's history can be done using several geological methods such as stratigraphy, radiometric dating, and paleomagnetic dating.[143] Based on these methods, geologists have developed a geologic time scale that divides the history of the Earth into major divisions, starting with four eons (Hadean, Archean, Proterozoic, and Phanerozoic), the first three of which are collectively known as the Precambrian, which lasted approximately 4 billion years.[143] Each eon can be divided into eras, with the Phanerozoic eon that began 539 million years ago[144] being subdivided into the Paleozoic, Mesozoic, and Cenozoic eras.[143] These three eras together comprise eleven periods (Cambrian, Ordovician, Silurian, Devonian, Carboniferous, Permian, Triassic, Jurassic, Cretaceous, Tertiary, and Quaternary), and each period is divided into epochs.[143]

The similarities among all known present-day species indicate that they have diverged through the process of evolution from their common ancestor.[145] Biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes.[146][10][147][148] Microbial mats of coexisting bacteria and archaea were the dominant form of life in the early Archean eon, and many of the major steps in early evolution are thought to have taken place in this environment.[149] The earliest evidence of eukaryotes dates from 1.85 billion years ago,[150][151] and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. Later, around 1.7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions.[152]

Algae-like multicellular land plants date back to about 1 billion years ago,[153] although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2.7 billion years ago.[154] Microorganisms are thought to have paved the way for the inception of land plants in the Ordovician period. Land plants were so successful that they are thought to have contributed to the Late Devonian extinction event.[155]

Ediacara biota appeared during the Ediacaran period,[156] while vertebrates, along with most other modern phyla, originated about 525 million years ago during the Cambrian explosion.[157] During the Permian period, synapsids, including the ancestors of mammals, dominated the land,[158] but most of this group became extinct in the Permian–Triassic extinction event 252 million years ago.[159] During the recovery from this catastrophe, archosaurs became the most abundant land vertebrates;[160] one archosaur group, the dinosaurs, dominated the Jurassic and Cretaceous periods.[161] After the Cretaceous–Paleogene extinction event 66 million years ago killed off the non-avian dinosaurs,[162] mammals increased rapidly in size and diversity.[163] Such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify.[164]

Bacteria and Archaea

 
Bacteria – Gemmatimonas aurantiaca (scale bar: 1 micrometer)

Bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. Typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. Bacteria were among the first life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit soil, water, acidic hot springsradioactive waste,[165] and the deep biosphere of the earth's crust. Bacteria also live in symbiotic and parasitic relationships with plants and animals. Most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory.[166]

Archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria (in the Archaebacteria kingdom), a term that has fallen out of use.[167] Archaeal cells have unique properties separating them from the other two domainsBacteria and Eukaryota. Archaea are further divided into multiple recognized phyla. Archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat and square cells of Haloquadratum walsbyi.[168] Despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. Other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes,[169] including archaeols. Archaea use more energy sources than eukaryotes: these range from organic compounds, such as sugars, to ammoniametal ions or even hydrogen gasSalt-tolerant archaea (the Haloarchaea) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both. Archaea reproduce asexually by binary fissionfragmentation, or budding; unlike bacteria, no known species of Archaea form endospores.

The first observed archaea were extremophiles, living in extreme environments, such as hot springs and salt lakes with no other organisms. Improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. Archaea are particularly numerous in the oceans, and the archaea in plankton may be one of the most abundant groups of organisms on the planet.

Archaea are a major part of Earth's life. They are part of the microbiota of all organisms. In the human microbiome, they are important in the gut, mouth, and on the skin.[170] Their morphological, metabolic, and geographical diversity permits them to play multiple ecological roles: carbon fixation; nitrogen cycling; organic compound turnover; and maintaining microbial symbiotic and syntrophic communities, for example.[171]

Protists

 
Diversity of protists

Eukaryotes are hypothesized to have split from archaea, which was followed by their endosymbioses with bacteria (or symbiogenesis) that gave rise to mitochondria and chloroplasts, both of which are now part of modern-day eukaryotic cells.[172] The major lineages of eukaryotes diversified in the Precambrian about 1.5 billion years ago and can be classified into eight major clades: alveolates, excavates, stramenopiles, plants, rhizarians, amoebozoans, fungi, and animals.[172] Five of these clades are collectively known as protists, which are mostly microscopic eukaryotic organisms that are not plants, fungi, or animals.[172] While it is likely that protists share a common ancestor (the last eukaryotic common ancestor),[173] protists by themselves do not constitute a separate clade, as some protists may be more closely related to plants, fungi, or animals than they are to other protists. Like groupings such as algae, invertebrates, or protozoans, the protist grouping is not a formal taxonomic group but is used for convenience.[172][174] Most protists are unicellular; these are also known as microbial eukaryotes.[172]

The alveolates are mostly photosynthetic unicellular protists that possess sacs called alveoli (hence their name alveolates) located beneath their cell membrane, providing support for the cell surface.[172] Alveolates comprise several groups such as dinoflagellates, apicomplexans, and ciliates. Dinoflagellates are photosynthetic and can be found in the ocean, where they play a role as primary producers of organic matter.[172] Apicomplexans are parasitic alveolates that possess an apical complex, which is a group of organelles located in the apical end of the cell.[172] This complex allows apicomplexans to invade their hosts' tissues. Ciliates are alveolates that possess numerous hair-like structures called cilia. A defining characteristic of ciliates is the presence of two types of nuclei in each ciliate cell. A commonly studied ciliate is the paramecium.[172]

The excavates are groups of protists that began to diversify approximately 1.5 billion years ago shortly after the origin of the eukaryotes.[172] Some excavates do not possess mitochondria, which are thought to have been lost over the course of evolution as these protists still possess nuclear genes that are associated with mitochondria.[172] The excavates comprise several groups such as diplomonadsparabasalidsheteroloboseanseuglenids, and kinetoplastids.[172]

Stramenopiles, most of which can be characterized by the presence of tubular hairs on the longer of their two flagella, include diatoms and brown algae.[172] Diatoms are primary producers and contribute about one-fifth of all photosynthetic carbon fixation, making them a major component of phytoplankton.[172]

Rhizarians are mostly unicellular and aquatic protists that typically contain long, thin pseudopods.[172] The rhizarians comprise three main groups: cercozoansforaminiferans, and radiolarians.[172]

Amoebozoans are protists with a body form characterized by the presence of lobe-shaped pseudopods, which help them to move.[172] They include groups such as loboseans and slime molds (e.g., plasmodial slime molds and cellular slime molds).[172]

Plant diversity

 
Diversity of plants

Plants are mainly multicellular organisms, predominantly photosynthetic eukaryotes of the kingdom Plantae, which excludes fungi and some algae. A shared derived trait (or synapomorphy) of Plantae is the primary endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts.[175] The first several clades that emerged following primary endosymbiosis were aquatic, and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience as not all algae are closely related.[175] Algae comprise several distinct clades such as glaucophytes, which are microscopic freshwater algae that may have resembled in form the early unicellular ancestor of Plantae.[175] Unlike glaucophytes, the other algal clades such as red and green algae are multicellular. Green algae comprise three major clades: chlorophytes, coleochaetophytes, and stoneworts.[175]

Land plants (embryophytes) first appeared in terrestrial environments approximately 450 to 500 million years ago.[175] A synapomorphy of land plants is an embryo that develops under the protection of tissues of its parent plant.[175] Land plants comprise ten major clades, seven of which constitute a single clade known as vascular plants (or tracheophytes) as they all have tracheids, which are fluid-conducting cells, and a well-developed system that transports materials throughout their bodies.[175] In contrast, the other three clades are nonvascular plants as they do not have tracheids.[175] They also do not constitute a single clade.[175]

Nonvascular plants include liverwortsmosses, and hornworts. They tend to be found in areas where water is readily available.[175] Most live on soil or even on vascular plants themselves. Some can grow on bare rock, tree trunks that are dead or have fallen, and even buildings.[175] Most nonvascular plants are terrestrial, with a few living in freshwater environments and none living in the oceans.[175]

The seven clades (or divisions) that make up vascular plants include horsetails and ferns, which together can be grouped as a single clade called monilophytes.[175] Seed plants (or spermatophytes) comprise the other five divisions, four of which are grouped as gymnosperms and one as angiosperms. Gymnosperms include conifers, cycads, Ginkgo, and gnetophytes. Gymnosperm seeds develop either on the surface of scales or leaves, which are often modified to form cones, or solitary as in yew, Torreya, and Ginkgo.[176] Angiosperms are the most diverse group of land plants, with 64 orders, 416 families, approximately 13,000 known genera, and 300,000 known species.[177] Like gymnosperms, angiosperms are seed-producing plants. They are distinguished from gymnosperms by characteristics such as flowers, endosperm within their seeds, and the production of fruits that contain the seeds.

Fungi

 
Diversity of fungi. Clockwise from top left: Amanita muscaria, a basidiomycete; Sarcoscypha coccinea, an ascomycete; bread covered in mold; a chytrid; an Aspergillus conidiophore.

Fungi are eukaryotic organisms that digest foods outside of their bodies.[178] They do so through a process called absorptive heterotrophy, whereby they first secrete digestive enzymes that break down large food molecules before absorbing them through their cell membranes. Many fungi are also saprobes, as they are able to take in nutrients from dead organic matter and are hence the principal decomposers in ecological systems.[178] Some fungi are parasites, absorbing nutrients from living hosts, while others are mutualists.[178] Fungi, along with two other lineages, choanoflagellates and animals, can be grouped as opisthokonts. A synapomorphy that distinguishes fungi from the other two opisthokonts is the presence of chitin in their cell walls.[178]

Most fungi are multicellular, but some are unicellular such as yeasts, which live in liquid or moist environments and are able to absorb nutrients directly through their cell surfaces.[178] Multicellular fungi, on the other hand, have a body called a mycelium, which is composed of a mass of individual tubular filaments called hyphae that allow nutrient absorption to occur.[178]

Fungi can be divided into six major groups based on their life cycles: microsporidia, chytrids, zygospore fungi (Zygomycota), arbuscular mycorrhizal fungi (Glomeromycota), sac fungi (Ascomycota), and club fungi (Basidiomycota).[178] Fungi are classified by the particular processes of sexual reproduction they use. The usual cellular products of meiosis during sexual reproduction are spores that are adapted to survive inclement times and to spread. A principal adaptive benefit of meiosis during sexual reproduction in the Ascomycota and Basidiomycota was proposed to be the repair of DNA damage through meiotic recombination.[179]

The fungus kingdom encompasses an enormous diversity of taxa with varied ecologies, life cycle strategies, and morphologies ranging from unicellular aquatic chytrids to large mushrooms. However, little is known of the true biodiversity of Kingdom Fungi, which has been estimated at 2.2 million to 3.8 million species.[180] Of these, only about 148,000 have been described,[181] with over 8,000 species known to be detrimental to plants and at least 300 that can be pathogenic to humans.[182]

Animal diversity

 
Diversity of animals. From top to bottom, first column: Echinoderm, cnidarian, bivalve, tardigrade, crustacean, and arachnid. Second column: Sponge, insect, mammal, bryozoan, acanthocephalan, and flatworm. Third column: Cephalopod, annelid, tunicate, fish, bird, and phoronid.

Animals are multicellular eukaryotic organisms that form the kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million animal species in total. They have complex interactions with each other and their environments, forming intricate food webs.

Animals can be divided into two groups based on their developmental characteristics.[183] For instance, embryos of diploblastic animals such as ctenophores, placozoans, and cnidarians have two cell layers (ectoderm and endoderm), whereas the embryos of triploblastic animals have three tissue layers (ectoderm, mesoderm, and endoderm), which is a synapomorphy of these animals.[183] Triploblastic animals can be further divided into two major clades based on the pattern of gastrulation, whereby an opening called a blastopore is formed from the indentation of the blastula. In protostomes, the blastopore gives rise to the mouth, which is then followed by the formation of the anus.[183] In deuterostomes, the blastopore gives rise to the anus, followed by the formation of the mouth.[183]

Animals can also be differentiated based on their body plan, specifically with respect to four key features: symmetry, body cavity, segmentation, and appendages.[183] The bodies of most animals are symmetrical, with symmetry being either radial or bilateral.[183] Triploblastic animals can be divided into three types based on their body cavity: acoelomate, pseudocoelomate, and coelomate.[183] Segmentation can be observed in the bodies of many animals, which allows for specialization of different parts of the body as well as allowing the animal to change the shape of its body to control its movements.[183] Finally, animals can be distinguished based on the type and location of their appendages, such as antennae for sensing the environment or claws for capturing prey.[183]

Sponges, the members of the phylum Porifera, are a basal Metazoa (animal) clade as a sister of the diploblasts.[184][185][186][187][188] They are multicellular organisms that have bodies full of pores and channels allowing water to circulate through them, consisting of jelly-like mesohyl sandwiched between two thin layers of cells.

The majority (~97%) of animal species are invertebrates,[189] which are animals that do not have a vertebral column (or backbone or spine), derived from the notochord. This includes all animals apart from the subphylum Vertebrata. Familiar examples of invertebrates include sponges, cnidarians (hydras, jellyfishes, sea anemones, and corals), mollusks (chitons, snails, bivalves, squids, and octopuses), annelids (earthworms and leeches), and arthropods (insects, arachnids, crustaceans, and myriapods). Many invertebrate taxa have a greater number and variety of species than the entire subphylum of Vertebrata.[190]

In contrast, vertebrates comprise all species of animals within the subphylum Vertebrata, which are chordates with vertebral columns. These animals have four key features, which are an anterior skull with a brain, a rigid internal skeleton supported by a vertebral column that encloses a spinal cord, internal organs suspended in a coelom, and a well-developed circulatory system driven by a single large heart.[183] Vertebrates represent the overwhelming majority of the phylum Chordata, with currently about 69,963 species described.[191] Vertebrates comprise different major groups that include jawless fishes (not including hagfishes), jawed vertebrates such as cartilaginous fishes (sharks, rays, and ratfish), bony fishes, and tetrapods such as amphibians, reptiles, birds, and mammals.[183]

The two remaining groups of jawless fishes that have survived beyond the Devonian period are the hagfishes and lampreys, which are collectively known as cyclostomes (for their circular mouths).[183] Both groups of animals have elongated eel-like bodies with no paired fins.[183] However, because hagfishes have a weak circulatory system with three accessory hearts, a partial skull with no cerebellum, no jaws or stomach, and no jointed vertebrae, some biologists do not classify them as vertebrates but instead as a sister group of vertebrates.[183] In contrast, lampreys have a complete skull and distinct vertebrae that are cartilaginous.[183]

Mammals have four key features that distinguish them from other animals: sweat glands, mammary glands, hair, and a four-chambered heart.[183] Small and medium-sized mammals co-existed with large dinosaurs for much of the Mesozoic era but radiated rapidly following the mass extinction of dinosaurs at the end of the Cretaceous period.[183] There are approximately 57,000 mammal species, which can be divided into two primary groups: prototherians and therians. Prototherians do not possess nipples on their mammary glands but instead secrete milk onto their skin, allowing their offspring to lap it off their fur.[183] They also lack a placenta, lay eggs, and have sprawling legs. Currently, there are only five known species of prototherians (the platypus and four species of echidnas).[183] The therian clade is viviparous and can be further divided into two groups: marsupials and eutherians.[183] Marsupial females have a ventral pouch to carry and feed their offspring. Eutherians form the majority of mammals and include major groups such as rodents, bats, even-toed ungulates and cetaceans, shrews and moles, primates, carnivores, rabbits, African insectivores, spiny insectivores, armadillos, treeshrews, odd-toed ungulates, long-nosed insectivores, anteaters and sloths, pangolins, hyraxes, sirenians, elephants, colugos, and aardvarks.[183]

A split in the primate lineage occurred approximately 90 million years ago during the Cretaceous, which brought about two major clades: prosimians and anthropoids.[183] The prosimians include lemurs, lorises, and galagos, whereas the anthropoids comprise tarsiers, New World monkeys, Old World monkeys, and apes.[183] Apes separated from Old World monkeys about 35 million years ago, with various species living in Africa, Europe, and Asia between 22 and 5.5 million years ago.[183] The modern descendants of these animals include chimpanzees and gorillas in Africa, gibbons and orangutans in Asia, and humans worldwide. A split in the ape lineage occurred about six million years ago in Africa, which resulted in the emergence of chimpanzees as one group and a hominid clade as another group that includes humans and their extinct relatives.[183] Bipedalism emerged in the earliest protohominids known as ardipithecines. As an adaptation, bipedalism conferred three advantages. First, it enabled the ardipithecines to use their forelimbs to manipulate and carry objects while walking.[183] Second, it elevated the animal's eyes to spot prey or predators over tall vegetation.[183] Finally, bipedalism is more energetically efficient than quadrupedal locomotion.[183]

Viruses

 
Bacteriophages attached to a bacterial cell wall

Viruses are submicroscopic infectious agents that replicate inside the cells of organisms.[192] Viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea.[193][194] More than 6,000 virus species have been described in detail.[195] Viruses are found in almost every ecosystem on Earth and are the most numerous type of biological entity.[196][197]

When infected, a host cell is forced to rapidly produce thousands of identical copies of the original virus. When not inside an infected cell or in the process of infecting a cell, viruses exist in the form of independent particles, or virions, consisting of the genetic material (DNA or RNA), a protein coat called a capsid, and in some cases an outside envelope of lipids. The shapes of these virus particles range from simple helical and icosahedral forms to more complex structures. Most virus species have virions too small to be seen with an optical microscope, as they are about one-hundredth the size of most bacteria.

The origins of viruses in the evolutionary history of life are unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria. In evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction.[198] Because viruses possess some but not all characteristics of life, they have been described as "organisms at the edge of life",[199] and as self-replicators.[200]

Viruses can spread in many ways. One transmission pathway is through disease-bearing organisms known as vectors: for example, viruses are often transmitted from plant to plant by insects that feed on plant sap, such as aphids; and viruses in animals can be carried by blood-sucking insects. Influenza viruses are spread by coughing and sneezing. Norovirus and rotavirus, common causes of viral gastroenteritis, are transmitted by the faecal–oral route, passed by hand-to-mouth contact or in food or water. Viral infections in animals provoke an immune response that usually eliminates the infecting virus. Immune responses can also be produced by vaccines, which confer an artificially acquired immunity to the specific viral infection.

Plant form and function

Plant body

 
Root and shoot systems in a eudicot

The plant body is made up of organs that can be organized into two major organ systems: a root system and a shoot system.[201] The root system anchors the plant in place. The roots themselves absorb water and minerals and store photosynthetic products. The shoot system is composed of stems, leaves, and flowers. The stems hold and orient the leaves to the sun, which allows the leaves to conduct photosynthesis. The flowers are shoots that have been modified for reproduction. Shoots are composed of phytomers, which are functional units that consist of a node carrying one or more leaves, an internode, and one or more buds.

A plant body has two basic patterns (apical–basal and radial axes) that are established during embryogenesis.[201] Cells and tissues are arranged along the apical–basal axis from root to shoot, whereas the three tissue systems (dermal, ground, and vascular) that make up a plant's body are arranged concentrically around its radial axis.[201] The dermal tissue system forms the epidermis (or outer covering) of a plant, which is usually a single cell layer consisting of cells that have differentiated into three specialized structures: stomata for gas exchange in leaves, trichomes (or leaf hairs) for protection against insects and solar radiation, and root hairs for increased surface area and absorption of water and nutrients. The ground tissue makes up virtually all the tissue that lies between the dermal and vascular tissues in the shoots and roots. It consists of three cell types: parenchyma, collenchyma, and sclerenchyma cells. Finally, the vascular tissues are made up of two constituent tissues: xylem and phloem. The xylem is made up of two types of conducting cells, tracheids and vessel elements, whereas the phloem is characterized by the presence of sieve tube elements and companion cells.[201]

Plant nutrition and transport

The xylem (blue) transports water and minerals from the roots upwards whereas the phloem (orange) transports carbohydrates between organs.

Like all other organisms, plants are primarily made up of water and other molecules containing elements that are essential to life.[202] The absence of specific nutrients (or essential elements), many of which have been identified in hydroponic experiments, can disrupt plant growth and reproduction. The majority of plants are able to obtain these nutrients from solutions that surround their roots in the soil.[202] Continuous leaching and harvesting of crops can deplete the soil of its nutrients, which can be restored with the use of fertilizers. Carnivorous plants such as Venus flytraps are able to obtain nutrients by digesting arthropods, whereas parasitic plants such as mistletoes can parasitize other plants for water and nutrients.

Plants need water to conduct photosynthesis, transport solutes between organs, cool their leaves by evaporation, and maintain internal pressures that support their bodies.[202] Water is able to diffuse in and out of plant cells by osmosis. The direction of water movement across a semipermeable membrane is determined by the water potential across that membrane.[202] Water is able to diffuse across a root cell's membrane through aquaporins, whereas solutes are transported across the membrane by ion channels and pumps. In vascular plants, water and solutes are able to enter the xylem, a vascular tissue, by way of the apoplast and symplast. Once in the xylem, the water and minerals are distributed upward by transpiration from the soil to the aerial parts of the plant.[175][202] In contrast, the phloem, another vascular tissue, distributes carbohydrates (e.g., sucrose) and other solutes such as hormones by translocation from a source (e.g., a mature leaf or root) in which they were produced to a sink (e.g., a root, flower, or developing fruit) in which they will be used and stored.[202] Sources and sinks can switch roles, depending on the amount of carbohydrates accumulated or mobilized for the nourishment of other organs.
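The rule governing direction can be stated simply: water tends to move toward the side of the membrane with the lower (more negative) water potential. The short sketch below is only an illustration of that rule; the function name and the megapascal values are assumptions made for the example, not figures from this text.

    # Minimal sketch of osmosis direction: water tends to move toward the side
    # of a membrane with the lower (more negative) water potential.
    # The potentials below are illustrative values in megapascals (MPa).

    def water_flow_direction(psi_outside, psi_inside):
        """Return which side water tends to move toward across a membrane."""
        if psi_outside > psi_inside:
            return "into the cell"      # the inside potential is lower
        if psi_outside < psi_inside:
            return "out of the cell"    # the outside potential is lower
        return "no net movement"

    # Soil water around a root is usually at a higher (less negative) potential
    # than the root cell interior, so water tends to move inward.
    print(water_flow_direction(psi_outside=-0.3, psi_inside=-0.7))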

Plant development

Plant development is regulated by environmental cues and the plant's own receptors, hormones, and genome.[203] Moreover, plants have several characteristics that allow them to obtain resources for growth and reproduction, such as meristems, post-embryonic organ formation, and differential growth.

Development begins with a seed, which is an embryonic plant enclosed in a protective outer covering. Most plant seeds are initially dormant, a condition in which the seed's normal activity is suspended.[203] Seed dormancy may last weeks, months, years, or even centuries. Dormancy is broken once conditions are favorable for growth, and the seed begins to sprout, a process called germination. Imbibition is the first step in germination, whereby water is absorbed by the seed. Once water is absorbed, the seed undergoes metabolic changes whereby enzymes are activated and RNA and proteins are synthesized. Once the seed germinates, it obtains carbohydrates, amino acids, and small lipids that serve as building blocks for its development. These monomers are obtained from the hydrolysis of starch, proteins, and lipids that are stored in either the cotyledons or the endosperm. Germination is completed once the embryonic root, called the radicle, has emerged from the seed coat. At this point, the developing plant is called a seedling and its growth is regulated by its own photoreceptor proteins and hormones.[203]

Unlike animals, in which growth is determinate, i.e., ceases when the adult state is reached, plant growth is indeterminate, as it is an open-ended process that could potentially be lifelong.[201] Plants grow in two ways: primary and secondary. In primary growth, the shoots and roots are formed and lengthened. The apical meristem produces the primary plant body, which can be found in all seed plants. During secondary growth, the thickness of the plant increases as the lateral meristem produces the secondary plant body, which can be found in woody eudicots such as trees and shrubs. Monocots do not go through secondary growth.[201] The plant body is generated by a hierarchy of meristems. The apical meristems in the root and shoot systems give rise to primary meristems (protoderm, ground meristem, and procambium), which in turn give rise to the three tissue systems (dermal, ground, and vascular).

Plant reproduction

Reproduction and development in sporophytes

Most angiosperms (or flowering plants) engage in sexual reproduction.[204] Their flowers are organs that facilitate reproduction, usually by providing a mechanism for the union of sperm with eggs. Flowers may facilitate two types of pollination: self-pollination and cross-pollination. Self-pollination occurs when the pollen from the anther is deposited on the stigma of the same flower, or another flower on the same plant. Cross-pollination is the transfer of pollen from the anther of one flower to the stigma of another flower on a different individual of the same species. Self-pollination happens in flowers where the stamen and carpel mature at the same time and are positioned so that the pollen can land on the flower's stigma. This type of pollination does not require the plant to invest in providing nectar and pollen as food for pollinators.[205]

Plant responses

Like animals, plants produce hormones in one part of their bodies to signal cells in another part to respond. The ripening of fruit and loss of leaves in the winter are controlled in part by the production of the gas ethylene by the plant. Stress from water loss, changes in air chemistry, or crowding by other plants can lead to changes in the way a plant functions. These changes may be affected by genetic, chemical, and physical factors.

To function and survive, plants produce a wide array of chemical compounds not found in other organisms. Because they cannot move, plants must also defend themselves chemically from herbivores, pathogens, and competition from other plants. They do this by producing toxins and foul-tasting or smelling chemicals. Other compounds defend plants against disease, permit survival during drought, and prepare plants for dormancy, while other compounds are used to attract pollinators or herbivores to spread ripe seeds.

Many plant organs contain different types of photoreceptor proteins, each of which reacts very specifically to certain wavelengths of light.[206] The photoreceptor proteins relay information such as whether it is day or night, duration of the day, intensity of light available, and the source of light. Shoots generally grow towards light, while roots grow away from it, responses known as phototropism and skototropism, respectively. They are brought about by light-sensitive pigments like phototropins and phytochromes and the plant hormone auxin.[207] Many flowering plants bloom at the appropriate time because of light-sensitive compounds that respond to the length of the night, a phenomenon known as photoperiodism.

In addition to light, plants can respond to other types of stimuli. For instance, plants can sense the direction of gravity to orient themselves correctly. They can respond to mechanical stimulation.[208]


General features

 
Negative feedback is necessary for maintaining homeostasis such as keeping body temperature constant.

The cells in each animal body are bathed in interstitial fluid, which makes up the cell's environment. This fluid and all its characteristics (e.g., temperature, ionic composition) can be described as the animal's internal environment, which is in contrast to the external environment that encompasses the animal's outside world.[209] Animals can be classified as either regulators or conformers. Animals such as mammals and birds are regulators, as they are able to maintain a constant internal environment, such as body temperature, despite their environments changing. These animals are also described as homeotherms, as they exhibit thermoregulation by keeping their internal body temperature constant. In contrast, animals such as fishes and frogs are conformers, as they adapt their internal environment (e.g., body temperature) to match their external environments. These animals are also described as poikilotherms or ectotherms, as they allow their body temperatures to match their external environments. In terms of energy, regulation is more costly than conformity, as an animal expends more energy to maintain a constant internal environment, for example by increasing its basal metabolic rate, which is the rate of energy consumption.[209] Similarly, homeothermy is more costly than poikilothermy. Homeostasis is the stability of an animal's internal environment, which is maintained by negative feedback loops.[209][210]
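The figure's point about negative feedback can be made concrete with a small numerical sketch: a deviation from the set point produces a correction in the opposite direction, so the regulated variable drifts back toward the set point rather than away from it. The set point, gain, and drift values below are illustrative parameters, not physiological measurements from this text.

    # Toy negative-feedback loop holding body temperature near a set point.
    # All parameter values are illustrative only.

    SET_POINT = 37.0   # target core temperature (deg C)
    GAIN = 0.5         # corrective response per degree of deviation

    def regulate(temperature, ambient_drift, steps=10):
        """Return the temperature trajectory under negative feedback."""
        trajectory = [round(temperature, 2)]
        for _ in range(steps):
            error = temperature - SET_POINT               # deviation from the set point
            temperature += ambient_drift - GAIN * error   # correction opposes the error
            trajectory.append(round(temperature, 2))
        return trajectory

    # Starting above the set point in a cooling environment, the trajectory
    # settles near the set point instead of continuing to drift downward.
    print(regulate(temperature=39.0, ambient_drift=-0.1))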

The body sizes of terrestrial animals vary across species, but their use of energy does not scale linearly with size.[209] Mice, for example, are able to consume three times more food than rabbits in proportion to their weights, as the basal metabolic rate per unit weight in mice is greater than in rabbits.[209] Physical activity can also increase an animal's metabolic rate. When an animal runs, its metabolic rate increases linearly with speed.[209] However, the relationship is non-linear in animals that swim or fly. When a fish swims faster, it encounters greater water resistance, and so its metabolic rate increases exponentially.[209] In birds, by contrast, the relationship between flight speed and metabolic rate is U-shaped.[209] At low flight speeds, a bird must maintain a high metabolic rate to remain airborne. As it speeds up, its metabolic rate decreases as air flows rapidly over its wings. However, as its speed increases even further, its metabolic rate rises again due to the increased effort associated with rapid flight. Basal metabolic rates can be measured based on an animal's rate of heat production.
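A common way to see why energy use does not scale linearly with body size is an allometric rule of thumb such as Kleiber's law, under which whole-body metabolic rate grows roughly as body mass to the 0.75 power, so the rate per unit mass falls as animals get larger. The exponent and the body masses below are assumptions used only for illustration; they are not values given in this text.

    # Illustrative allometric scaling: whole-body rate ~ mass ** 0.75, so the
    # rate per kilogram is higher for smaller animals. Numbers are rough.

    def mass_specific_rate(mass_kg, exponent=0.75):
        """Relative metabolic rate per kilogram of body mass."""
        return (mass_kg ** exponent) / mass_kg

    mouse_kg, rabbit_kg = 0.02, 2.0   # rough body masses chosen for illustration
    ratio = mass_specific_rate(mouse_kg) / mass_specific_rate(rabbit_kg)
    print(round(ratio, 1))  # the smaller animal burns roughly 3x more energy per kg here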

Water and salt balance

 
Diffusion of water and ions in and out of a freshwater fish

An animal's body fluids have three properties: osmotic pressure, ionic composition, and volume.[211] Osmotic pressures determine the direction of the diffusion of water (or osmosis), which moves from a region where osmotic pressure (total solute concentration) is low to a region where osmotic pressure (total solute concentration) is high. Aquatic animals are diverse with respect to their body fluid compositions and their environments. For example, most invertebrate animals in the ocean have body fluids that are isosmotic with seawater. In contrast, ocean bony fishes have body fluids that are hyposmotic to seawater. Finally, freshwater animals have body fluids that are hyperosmotic to fresh water. Typical ions found in an animal's body fluids are sodium, potassium, calcium, and chloride. The volume of body fluids can be regulated by excretion. Vertebrate animals have kidneys, which are excretory organs made up of tiny tubular structures called nephrons, which make urine from blood plasma. The kidneys' primary function is to regulate the composition and volume of blood plasma by selectively removing material from the blood plasma itself. The ability of xeric animals such as kangaroo rats to minimize water loss by producing urine that is 10–20 times more concentrated than their blood plasma allows them to adapt to desert environments that receive very little precipitation.[211]

Nutrition and digestion

 
Different digestive systems in marine fishes

Animals are heterotrophs, as they feed on other organisms to obtain energy and organic compounds.[212] They are able to obtain food in three major ways: targeting visible food objects, collecting tiny food particles, or depending on microbes for critical food needs. The amount of energy stored in food can be quantified based on the amount of heat (measured in calories or kilojoules) emitted when the food is burnt in the presence of oxygen. If an animal consumes food that contains an excess of chemical energy, it stores most of that energy in the form of lipids for future use and some of that energy as glycogen for more immediate use (e.g., meeting the brain's energy needs).[212] The molecules in food are chemical building blocks that are needed for growth and development. These molecules include nutrients such as carbohydrates, fats, and proteins. Vitamins and minerals (e.g., calcium, magnesium, sodium, and phosphorus) are also essential. The digestive system, which typically consists of a tubular tract that extends from the mouth to the anus, is involved in the breakdown (or digestion) of food into small molecules as it travels peristaltically through the gut lumen shortly after it has been ingested. These small food molecules are then absorbed into the blood from the lumen, where they are distributed to the rest of the body as building blocks (e.g., amino acids) or sources of energy (e.g., glucose).[212]
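Because the energy stored in food is expressed in calories or kilojoules, a rough bookkeeping example may help: macronutrient masses can be converted to energy with the usual Atwater factors (about 4 kcal/g for carbohydrate and protein, 9 kcal/g for fat) and then to kilojoules. These factors and the sample composition are general rules of thumb assumed for illustration, not values taken from this text.

    # Rough estimate of food energy from macronutrient content using Atwater
    # factors; 1 kcal = 4.184 kJ. The sample grams are made up for illustration.

    KCAL_PER_GRAM = {"carbohydrate": 4.0, "protein": 4.0, "fat": 9.0}
    KJ_PER_KCAL = 4.184

    def food_energy_kj(grams_by_nutrient):
        """Approximate chemical energy of a food item in kilojoules."""
        kcal = sum(KCAL_PER_GRAM[n] * g for n, g in grams_by_nutrient.items())
        return kcal * KJ_PER_KCAL

    # e.g., 30 g carbohydrate, 10 g protein, 5 g fat
    print(round(food_energy_kj({"carbohydrate": 30, "protein": 10, "fat": 5})))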

In addition to their digestive tracts, vertebrate animals have accessory glands such as a liver and pancreas as part of their digestive systems.[212] The processing of food in these animals begins in the foregut, which includes the mouth, esophagus, and stomach. Mechanical digestion of food starts in the mouth, with the esophagus serving as a passageway for food to reach the stomach, where it is stored and disintegrated (by the stomach's acid) for further processing. Upon leaving the stomach, food enters the midgut, which is the first part of the intestine (or small intestine in mammals) and is the principal site of digestion and absorption. Food that does not get absorbed is stored as indigestible waste (or feces) in the hindgut, which is the second part of the intestine (or large intestine in mammals). The hindgut then completes the reabsorption of needed water and salt prior to eliminating the feces from the rectum.[212]

Breathing

 
Respiratory system in a bird

The respiratory system consists of specific organs and structures used for gas exchange in animals. The anatomy and physiology that make this happen varies greatly, depending on the size of the organism, the environment in which it lives and its evolutionary history. In land animals the respiratory surface is internalized as linings of the lungs.[213] Gas exchange in the lungs occurs in millions of small air sacs; in mammals and reptiles these are called alveoli, and in birds they are known as atria. These microscopic air sacs have a very rich blood supply, thus bringing the air into close contact with the blood.[214] These air sacs communicate with the external environment via a system of airways, or hollow tubes, of which the largest is the trachea, which branches in the middle of the chest into the two main bronchi. These enter the lungs where they branch into progressively narrower secondary and tertiary bronchi that branch into numerous smaller tubes, the bronchioles. In birds the bronchioles are termed parabronchi. It is the bronchioles, or parabronchi that generally open into the microscopic alveoli in mammals and atria in birds. Air has to be pumped from the environment into the alveoli or atria by the process of breathing, which involves the muscles of respiration.

Circulation

 
Circulatory systems in arthropods, fish, reptiles, and birds/mammals

A circulatory system usually consists of a muscular pump such as a heart, a fluid (blood), and a system of blood vessels that deliver it.[215][216] Its principal function is to transport blood and other substances to and from cells and tissues. There are two types of circulatory systems: open and closed. In open circulatory systems, blood exits blood vessels as it circulates throughout the body, whereas in closed circulatory systems, blood is contained within the blood vessels as it circulates. Open circulatory systems can be observed in invertebrate animals such as arthropods (e.g., insects, spiders, and lobsters), whereas closed circulatory systems can be found in vertebrate animals such as fishes, amphibians, and mammals. Circulation in animals occurs between two types of tissues: systemic tissues and breathing (or pulmonary) organs.[215] Systemic tissues are all the tissues and organs that make up an animal's body other than its breathing organs. Systemic tissues take up oxygen from the blood but add carbon dioxide to it, whereas breathing organs take up carbon dioxide from the blood but add oxygen to it.[217] In birds and mammals, the systemic and pulmonary systems are connected in series.

In the circulatory system, blood is important because it is the means by which oxygen, carbon dioxide, nutrients, hormones, agents of the immune system, heat, wastes, and other commodities are transported.[215] In annelids such as earthworms and leeches, blood is propelled by peristaltic waves of contractions of the heart muscles that make up the blood vessels. Other animals, such as crustaceans (e.g., crayfish and lobsters), have more than one heart to propel blood throughout their bodies. Vertebrate hearts are multichambered and are able to pump blood when their ventricles contract at each cardiac cycle, which propels blood through the blood vessels.[215] Although vertebrate hearts are myogenic, their rate of contraction (or heart rate) can be modulated by neural input from the body's autonomic nervous system.

Muscle and movement

 
Asynchronous muscles power flight in most insects. a: Wings b: Wing joint c: Dorsoventral muscles power upstrokes d: Dorsolongitudinal muscles power downstrokes.

In vertebrates, the muscular system consists of skeletal, smooth, and cardiac muscles. It permits movement of the body, maintains posture, and circulates blood throughout the body.[218] Together with the skeletal system, it forms the musculoskeletal system, which is responsible for the movement of vertebrate animals.[219] Skeletal muscle contractions are neurogenic as they require synaptic input from motor neurons. A single motor neuron is able to innervate multiple muscle fibers, thereby causing the fibers to contract at the same time. Once innervated, the protein filaments within each skeletal muscle fiber slide past each other to produce a contraction, which is explained by the sliding filament theory. The contraction produced can be described as a twitch, summation, or tetanus, depending on the frequency of action potentials. Unlike skeletal muscles, contractions of smooth and cardiac muscles are myogenic as they are initiated by the smooth or heart muscle cells themselves instead of a motor neuron. Nevertheless, the strength of their contractions can be modulated by input from the autonomic nervous system. The mechanisms of contraction are similar in all three muscle tissues.

In invertebrates such as earthworms and leeches, circular and longitudinal muscle cells form the body wall of these animals and are responsible for their movement.[220] In an earthworm that is moving through soil, for example, contractions of circular and longitudinal muscles occur reciprocally while the coelomic fluid serves as a hydroskeleton by maintaining the turgidity of the earthworm.[221] Other animals, such as mollusks and nematodes, possess obliquely striated muscles, which contain bands of thick and thin filaments that are arranged helically rather than transversely, as in vertebrate skeletal or cardiac muscles.[222] Advanced insects such as wasps, flies, bees, and beetles possess asynchronous muscles that constitute the flight muscles in these animals.[222] These flight muscles are often called fibrillar muscles because they contain myofibrils that are thick and conspicuous.[223]

Nervous system

 
Mouse pyramidal neurons (green) and GABAergic neurons (red)[224]

Most multicellular animals have nervous systems[225] that allow them to sense and respond to their environments. A nervous system is a network of cells that processes sensory information and generates behaviors. At the cellular level, the nervous system is defined by the presence of neurons, which are cells specialized to handle information.[226] They can transmit or receive information at sites of contact called synapses.[226] More specifically, neurons can conduct nerve impulses (or action potentials) that travel along their thin fibers called axons, which can then be transmitted directly to a neighboring cell through electrical synapses or cause chemicals called neurotransmitters to be released at chemical synapses. According to the sodium theory, these action potentials can be generated by the increased permeability of the neuron's cell membrane to sodium ions.[227] Cells such as neurons or muscle cells may be excited or inhibited upon receiving a signal from another neuron. The connections between neurons can form neural pathways, neural circuits, and larger networks that generate an organism's perception of the world and determine its behavior. Along with neurons, the nervous system contains other specialized cells called glia or glial cells, which provide structural and metabolic support.

In vertebrates, the nervous system comprises the central nervous system (CNS), which includes the brain and spinal cord, and the peripheral nervous system (PNS), which consists of nerves that connect the CNS to every other part of the body. Nerves that transmit signals from the CNS are called motor nerves or efferent nerves, while those nerves that transmit information from the body to the CNS are called sensory nerves or afferent nerves. Spinal nerves are mixed nerves that serve both functions. The PNS is divided into three separate subsystems: the somatic, autonomic, and enteric nervous systems. Somatic nerves mediate voluntary movement. The autonomic nervous system is further subdivided into the sympathetic and the parasympathetic nervous systems. The sympathetic nervous system is activated in cases of emergencies to mobilize energy, while the parasympathetic nervous system is activated when organisms are in a relaxed state. The enteric nervous system functions to control the gastrointestinal system. Both autonomic and enteric nervous systems function involuntarily. Nerves that exit directly from the brain are called cranial nerves while those exiting from the spinal cord are called spinal nerves.

Many animals have sense organs that can detect their environment. These sense organs contain sensory receptors, which are sensory neurons that convert stimuli into electrical signals.[228] Mechanoreceptors, for example, which can be found in skin, muscle, and hearing organs, generate action potentials in response to changes in pressure.[228][229] Photoreceptor cells such as rods and cones, which are part of the vertebrate retina, can respond to specific wavelengths of light.[228][229] Chemoreceptors detect chemicals in the mouth (taste) or in the air (smell).[229]

Hormonal control

Hormones are signaling molecules transported in the blood to distant organs to regulate their function.[230][231] Hormones are secreted by internal glands that are part of an animal's endocrine system. In vertebrates, the hypothalamus is the neural control center for all endocrine systems. In humans specifically, the major endocrine glands are the thyroid gland and the adrenal glands. Many other organs that are part of other body systems have secondary endocrine functions, including bone, kidneys, liver, heart, and gonads. For example, kidneys secrete the endocrine hormone erythropoietin. Hormones can be amino acid complexes, steroids, eicosanoids, leukotrienes, or prostaglandins.[232] The endocrine system can be contrasted to both exocrine glands, which secrete hormones to the outside of the body, and paracrine signaling between cells over a relatively short distance. Endocrine glands have no ducts, are vascular, and commonly have intracellular vacuoles or granules that store their hormones. In contrast, exocrine glands, such as salivary glands, sweat glands, and glands within the gastrointestinal tract, tend to be much less vascular and have ducts or a hollow lumen.

Animal reproduction

Animals can reproduce in one of two ways: asexually or sexually. Nearly all animals engage in some form of sexual reproduction.[233] They produce haploid gametes by meiosis. The smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova.[234] These fuse to form zygotes,[235] which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge.[236] In most other groups, the blastula undergoes more complicated rearrangement.[237] It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm.[238] In most cases, a third germ layer, the mesoderm, also develops between them.[239] These germ layers then differentiate to form tissues and organs.[240] Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids.[241][242]

 

Animal development

 
Cleavage in zebrafish embryo

Animal development begins with the formation of a zygote that results from the fusion of a sperm and egg during fertilization.[243] The zygote undergoes rapid, multiple rounds of mitotic cell division called cleavage, which forms a ball of similar cells called a blastula. Gastrulation then occurs, whereby morphogenetic movements convert the cell mass into three germ layers: the ectoderm, mesoderm, and endoderm.

The end of gastrulation signals the beginning of organogenesis, whereby the three germ layers form the internal organs of the organism.[244] The cells of each of the three germ layers undergo differentiation, a process in which less-specialized cells become more specialized through the expression of a specific set of genes. Cellular differentiation is influenced by extracellular signals such as growth factors that are exchanged with adjacent cells, which is called juxtacrine signaling, or with neighboring cells over short distances, which is called paracrine signaling.[245][246] Intracellular signals, in which a cell signals itself (autocrine signaling), also play a role in organ formation. These signaling pathways allow for cell rearrangement and ensure that organs form at specific sites within the organism.[244][247]

Immune system

 
Processes in the primary immune response

The immune system is a network of biological processes that detects and responds to a wide variety of pathogens. Many species have two major subsystems of the immune system. The innate immune system provides a preconfigured response to broad groups of situations and stimuli. The adaptive immune system provides a tailored response to each stimulus by learning to recognize molecules it has previously encountered. Both use molecules and cells to perform their functions.

Nearly all organisms have some kind of immune system. Bacteria have a rudimentary immune system in the form of enzymes that protect against virus infections. Other basic immune mechanisms evolved in ancient plants and animals and remain in their modern descendants. These mechanisms include phagocytosis, antimicrobial peptides called defensins, and the complement system. Jawed vertebrates, including humans, have even more sophisticated defense mechanisms, including the ability to adapt to recognize pathogens more efficiently. Adaptive (or acquired) immunity creates an immunological memory leading to an enhanced response to subsequent encounters with that same pathogen. This process of acquired immunity is the basis of vaccination.

Animal behavior

 
Brood parasites, such as the cuckoo, provide a supernormal stimulus to the parenting species.

Behaviors play a central role in animals' interactions with each other and with their environment.[248] They are able to use their muscles to approach one another, vocalize, seek shelter, and migrate. An animal's nervous system activates and coordinates its behaviors. Fixed action patterns, for instance, are genetically determined and stereotyped behaviors that occur without learning.[248][249] These behaviors are under the control of the nervous system and can be quite elaborate.[248] Examples include the pecking of kelp gull chicks at the red dot on their mother's beak. Other behaviors that have emerged as a result of natural selection include foraging, mating, and altruism.[250] In addition to evolved behavior, animals have evolved the ability to learn by modifying their behaviors as a result of early individual experiences.[248]

Ecosystems

 
Terrestrial biomes are shaped by temperature and precipitation.

Ecology is the study of the distribution and abundance of life, the interaction between organisms and their environment.[251] The community of living (biotic) organisms in conjunction with the nonliving (abiotic) components (e.g., water, light, radiation, temperature, humidity, atmosphere, acidity, and soil) of their environment is called an ecosystem.[252][253][254] These biotic and abiotic components are linked together through nutrient cycles and energy flows.[255] Energy from the sun enters the system through photosynthesis and is incorporated into plant tissue. By feeding on plants and on one another, animals play an important role in the movement of matter and energy through the system. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and other microbes.[256]

The Earth's physical environment is shaped by solar energy and topography.[254] The amount of solar energy input varies in space and time due to the spherical shape of the Earth and its axial tilt. Variation in solar energy input drives weather and climate patterns. Weather is the day-to-day temperature and precipitation activity, whereas climate is the long-term average of weather, typically averaged over a period of 30 years.[257][258] Variation in topography also produces environmental heterogeneity. On the windward side of a mountain, for example, air rises and cools, with water changing from gaseous to liquid or solid form, resulting in precipitation such as rain or snow.[254] As a result, wet environments allow for lush vegetation to grow. In contrast, conditions tend to be dry on the leeward side of a mountain due to the lack of precipitation as air descends and warms, and moisture remains as water vapor in the atmosphere. Temperature and precipitation are the main factors that shape terrestrial biomes.

Populations

 
Reaching carrying capacity through a logistic growth curve

A population is the number of organisms of the same species that occupy an area and reproduce from generation to generation.[259][260][261][262][263] Its abundance can be measured using population density, which is the number of individuals per unit area (e.g., land or tree) or volume (e.g., sea or air).[259] Given that it is usually impractical to count every individual within a large population to determine its size, population size can be estimated by multiplying population density by the area or volume.
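The last sentence amounts to simple arithmetic, and the logistic growth toward a carrying capacity shown in the figure can be sketched in the same spirit. The density, area, growth rate r, and carrying capacity K below are illustrative numbers chosen for the example, not data from this text.

    # Estimating population size from density, and one-generation steps of
    # logistic growth toward a carrying capacity K. All numbers are illustrative.

    def estimate_population_size(density_per_km2, area_km2):
        """Population size is roughly density multiplied by the area occupied."""
        return density_per_km2 * area_km2

    def logistic_step(n, r=0.5, k=1000):
        """One generation of logistic growth: N + r * N * (1 - N / K)."""
        return n + r * n * (1 - n / k)

    print(estimate_population_size(density_per_km2=12, area_km2=250))  # 3000

    n = 50
    for _ in range(20):
        n = logistic_step(n)
    print(round(n))  # growth slows as N approaches the carrying capacity K = 1000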