Theories and Models


The clonal selection theory is a hypothesis stating that each B-cell lymphocyte expresses a receptor specific to a single antigen, and that this specificity is determined before the cell ever encounters that antigen. Activation occurs within the lymph nodes, spleen, or similar lymphoid organs, and it triggers clonal proliferation, so that each resulting cell can target its specific antigen effectively.

The theory can be summarized in four key points.

  1. Every lymphocyte bears a single type of receptor with a unique specificity.
  2. Receptor occupation (the binding of antigen) is required for cell activation to occur.
  3. Every differentiated effector cell derived from an activated lymphocyte bears a receptor of identical specificity to that of its parent cell.
  4. Lymphocytes that bear receptors for self-molecules are deleted at an early stage of development.
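The four points above can be sketched as a toy simulation. This is a deliberately crude abstraction: the integer "receptor shapes," the population size, and the clone count are all invented for illustration and are not biological values.

```python
import random

random.seed(0)

# Point 1: each B cell bears one receptor with one fixed specificity,
# modeled here as a single integer "shape".
b_cells = [random.randrange(100) for _ in range(50)]

# Point 4: cells whose receptors match self-molecules are deleted early.
self_molecules = {b_cells[1]}
repertoire = [cell for cell in b_cells if cell not in self_molecules]

antigen = repertoire[0]  # an antigen that at least one surviving cell matches

# Point 2: activation requires receptor occupation; only matching cells respond.
activated = [cell for cell in repertoire if cell == antigen]

# Point 3: every clone inherits its parent's specificity unchanged.
clones = [cell for cell in activated for _ in range(1000)]

print(len(activated), all(clone == antigen for clone in clones))
```

The antibody diversity the theory explains lives in the pre-existing variety of the repertoire; selection and cloning only amplify what is already there.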

This theory of antibody production was first proposed in 1957 by Frank Macfarlane Burnet, an Australian physician who was attempting to explain how such a diverse array of antibodies could be produced during an immune response. Experimental evidence came just a year later, when Joshua Lederberg and Gustav Nossal showed that a single B-cell always produces a single antibody.

History of the Clonal Selection Theory

Paul Ehrlich is credited with proposing a side-chain theory of antibody production, which essentially stated that cells carry membrane-bound receptors, or “side-chains,” able to react with different antigens. An antigen would bind to the matching side-chain, stimulating the cell to produce and release more copies of it, which could then neutralize the antigen throughout the body.

Although it would prove to be somewhat inaccurate, when it was first proposed in 1900, it was much closer to what evidence would show to be true than other theories regarding immunology at the time.

Then, in 1955, Niels Jerne offered the idea that soluble antibodies are already present in the body before any infection takes place. This Danish immunologist suggested that the body would select the matching antibody in response to the antigen.

Burnet essentially combined these two ideas together to formulate his hypothesis, which would become backed by empirical evidence.

Who Is Sir Frank Macfarlane Burnet?

Born in September 1899, Frank Burnet was the son of a Scottish emigrant to Australia. Due to his family structure, he often found himself alone as a child, so his pursuits were often “bookish” in nature. This led to an emphasis in studying that would benefit him later on in life.

Dr. Burnet would win the Nobel Prize in 1960 for predicting acquired immune tolerance. His development of the clonal selection theory is his best known work.

He earned his Doctorate of Medicine in 1924 from the University of Melbourne and then his PhD from the University of London in 1928. Much of his work, including the development of the clonal selection theory of antibody production, took place at the Walter and Eliza Hall Institute of Medical Research in Melbourne. Burnet served as the director of the institute from 1944 to 1965.

Burnet continued to work at the University of Melbourne after his retirement in 1965 and remained active in his field of study. He was a founding member of the Australian Academy of Science and served as its president from 1965 to 1969.

In 1978, Burnet was made a Knight of the Order of Australia. He also received numerous honorary doctorates, the Lasker Award, the Royal Medal, and the Copley Medal in addition to the Nobel Prize.

What Are the Results of the Clonal Selection Theory?

Because of the clonal selection theory, Dr. Burnet was able to propose that tissues could be successfully transplanted into a foreign recipient. This has brought about numerous advances in the fields of tissue and organ transplantation, along with a deeper understanding of what the immune system is able to do.

Immune network theory is also based on the clonal selection theory. Proposed by Niels Kaj Jerne, who won a Nobel Prize in 1984, it suggests that the immune system functions as a network in which the variable regions of lymphocytes and their molecules regulate one another through their interactions.

Much of the understanding that we have today regarding the immune system comes from the work and proposals of Frank Burnet. The clonal selection theory of antibody production has become a foundation of immunology and modern medicine, allowing us to better treat health issues through the understanding it has provided.


Carl Rogers believed that humans are constantly reacting to the stimuli they encounter within their reality. These stimuli change constantly, which requires each person to develop a concept of self based on the feedback they receive from their reality. Rogers, a humanistic psychologist, believed that his theory of personality would help explain why self-fulfilling tendencies and prophecies play such an important role in shaping personality.

This means it is up to each individual to shape their own personality, based on the type of feedback that they receive from their external and internal worlds. It also means that the personality of an individual can change over time because the stimuli they encounter, either real or perceived, may vary.

Rogers believed that humans are always active. They are always experiencing life as it occurs in creative ways. This causes them to form perceptions, which will evolve into relationships, and then this creates encounters where the personality can be developed. In other words, if a person wants to have a humorous personality, they will create life situations that will help them to develop such a personality.

Does This Mean a Person’s Personality Is Based on Free Will?

When Rogers formed the humanistic theory of personality, he placed the concept of free will at the theory’s foundation. He also made a general assumption that humans have the potential, and the inclination, to do good on a regular basis.

This allowed him to develop what he called the “Phenomenal Field.” At the center of this field is the “self,” or the core of who each individual happens to be. Each person is then surrounded by five specific influential factors that interact with them inside the phenomenal field.

  • People.
  • Thoughts.
  • Objects.
  • Behaviors.
  • Images.

Although the self does not change, it can be constantly influenced within the phenomenal field by these five influential factors. This is why there are internal and external factors involved with personality development and why a person’s personality can change over time.

Yet the phenomenal field is not the only influential factor involved in personality development. The environment can act upon a person’s phenomenal field, creating unanticipated changes and interactions with the influential personality factors. The personal motivations an individual may have, like the pursuit of a dream or specific goal, can also have an influence on the phenomenal field.

This, in turn, creates a situation where a battle for dominance begins to take place. It is a battle between the real self and the ideal self.

What Is the Real Self and Why Is It Different from the Ideal Self?

Rogers looked at his humanistic theory of personality and realized that the “concept of self” needed to be divided into two distinct categories: the “real” self and the “ideal” self. The real self is the person you happen to be right now, whereas the ideal self is the person that you would like to be one day.

Rogers decided that there needed to be a certain level of consistency between these two concepts of self. This is what the battle between the real self and the ideal self is intended to do. As balance is created and achieved, it influences the personality of that individual.

If balance can be achieved, then it creates a personality that is based on high levels of self-worth. People with a good balance, according to Rogers, have the best opportunities to create a life for themselves that is both healthy and productive.

For those who are unable to achieve a balance, meaning their ideal life is a great deal different from their real life, it creates a state of maladjustment. It forms a personality that is based on discontent and other forms of personalized negative energy.

Rogers called this state of imbalance “incongruence.”

How Can We Have the Good Life?

In the humanistic theory of personality, Rogers believed that there was no greater influence on a person than unconditional positive regard, or unconditional love. When there is such a positive influence, it limits the amount of incongruence that can be created. This, in turn, helps to define a positive self-worth, allowing the individual to create an even better balance.

When this balance could be achieved, then individuals could begin to pursue what Rogers called the “Good Life.” It is a pursuit that is based on the traits that only balance could provide, such as being open, trusting personal judgment, and embracing freedom of choice.

Personalities can be quite varied. They can also change over time. A personality may also have several permanent elements to it. With the humanistic theory of personality, Carl Rogers helps us all be able to understand why we are the people we happen to be today.


In 1983, Howard Gardner proposed that intelligence wasn’t dominated by a single, generalized ability. Gardner felt that any candidate intelligence had to fulfill eight specific criteria, and he chose eight different abilities that he felt met those criteria. This would allow people to identify the ways in which they learn most effectively, including through non-cognitive abilities, so that every person had the opportunity to grow in a way that made the most sense for them.

The eight criteria that Gardner identified as needing to be fulfilled were: an evolutionary history and plausibility, the potential for isolation by brain damage, an identifiable core operation or set of operations, susceptibility to encoding in a symbol system, a developmental progression that is distinctive in nature, the existence of savants or prodigies, support from experimental psychological tasks, and support from psychometric findings.

Once Gardner identified these core components for his theory, he also identified specific abilities that could be used as evidence to show that the criteria for intelligence had been met.

What Are the Eight Abilities that Gardner Chose?

Gardner chose abilities that would show evidence of mental awareness or interaction that could be processed with a certain response. Each ability offers something specific that could be classified as intelligence on its own, or when combined with the other abilities, would show evidence of a potentially superior form of intelligence.

Here are the eight abilities that Gardner chose.

1. Musical Rhythmic (Harmonic)

This modality involves how intelligence interacts with tones, sounds, rhythms, and even music. People who are rated highly in this modality often have excellent pitch. Some even have absolute pitch. These individuals are well-equipped in the areas of singing, composing music, and playing various instruments.

High modality individuals are also very sensitive to meter, tone, pitch and rhythm. They tend to be very hard on themselves for making an error and want melody and timbre to be as perfect as possible. They learn difficult concepts easily when they are set to music or melody.

2. Visual Spatial

People who are rated high in this modality are able to visualize space with accuracy by picturing it in their mind. This is also one of the intelligence factors included in other models of intelligence, such as hierarchical models of cognitive ability.

3. Verbal Linguistic

Most people are able to communicate with verbal linguistic skills at some level. Those who rate highly on this modality are able to demonstrate a greater facility with language fluency, using specific words instead of generalities to accurately communicate within conversation or writing. People with this intelligence modality may also be adept at learning new languages with ease.

This ability is also displayed through various verbal techniques, such as memorization, storytelling, reading, and remembering information. It is one of the factors most strongly associated with general intelligence, reflecting overall mental ability, and it can be tested through the use of a verbal IQ testing model.

4. Logical Mathematical

Although mathematics and logic are central points to this modality, hence its name, the real focus here is on the ability to perform critical thinking. It offers evidence of having the ability to understand an underlying principle. People who rate highly in this modality are able to handle numbers, abstractions, and logical reasoning because there is a higher overall fluid intelligence that is present.

5. Bodily Kinesthetic

Being intelligent requires more than command over thinking processes. There must also be an ability to show that one can overcome instinctual urges. The mind must have the ability to control bodily motions to such an extent that objects can be handled with skill.

People who rate highly in this modality are also able to handle issues that involve timing. They can train their responses to correspond with specific movements, actions, or timeframes in order to accomplish a specific goal. You will find individuals who excel in sports, acting, or musical careers tend to rank highly in this intelligence modality.

Law enforcement, military service, and construction also require high modality levels in this area of intelligence for success. It is an intelligence that can be developed over time, but individuals must be actively participating in the skill for it to develop. Simulations have proven to be ineffective in enhancing this modality.

6. Interpersonal

Sometimes the people who rate highly in this intelligence modality are referred to as having a high “emotional intelligence.” This is because individuals equipped with this modality are particularly sensitive to the changing feelings, moods, and motivations that occur in the people who are around them. Those with particularly high rankings in this modality can even recognize specific emotions and anticipate reactions to those emotions from strangers.

Most people who use this intelligence modality to their advantage look for ways to cooperate with a team or group. They look for the place where they can fill a needed gap so that everyone can get along without conflict.

Sometimes this is thought of as being an extrovert, but introverted personalities can easily rank high in interpersonal intelligence. It’s not about liking people; it’s about understanding them at a core level. These folks enjoy a good discussion or watching a debate, making them excellent teachers, counselors, or even social workers.

7. Intrapersonal

This intelligence modality is much like the interpersonal modality, but looking inward at oneself. It is the ability to deeply understand personal morals and values. People who rank highly in this modality are very aware of their personal strengths and weaknesses. They also know what makes them unique and are unafraid to push that uniqueness out for the world to see.

A unique trait that comes with the intrapersonal intelligence modality is an ability to be able to predict personal emotions, reactions, and behaviors.

8. Naturalistic

This intelligence modality wasn’t part of the original theory of multiple intelligences; Gardner himself added it in 1995. People who rank highly in this modality are able to recognize and identify flora and fauna. They can also see how personal decisions affect the natural world, and they use their skills in this modality to protect and preserve it.

Are There Any New Additions Coming to the Theory of Multiple Intelligences?

Gardner has not wanted to commit to two additional forms of intelligence, but he has admitted that there is evidence of their existence.

The first is a spiritual intelligence. Individuals who would be considered philosophers or developmental theorists might be said to have a high level of this modality. The problem with this modality is that it is difficult to quantify. After all, spirituality is more about the individual than about a general scientific principle. Although others have attempted to give this modality a greater definition, Gardner prefers to consider the possibility of an existential intelligence.

One hallmark of being gifted in this modality is the ability to be a source of guidance for other people, even if they happen to disagree with the conclusions that have helped in your personal guidance.

The other modality that is being considered by Gardner is what he calls the Teaching Pedagogical intelligence. According to Gardner, this would be a modality which would allow someone to teach others in a successful way.

Other modalities have been proposed by those who are familiar with Gardner’s theory of intelligences, but he does not acknowledge that they should be part of it. These include proposed modalities involving sexuality and reproduction, cooking and other specific vocational skills, and humor.

How Has Gardner’s Theory of Multiple Intelligences Been Received?

The primary criticism of Gardner’s theory is that it doesn’t actually expand upon the definition of intelligence. It simply denies intelligence as it has been traditionally understood. Instead of an ability that someone has, the theory redefines intelligence as a set of modalities that may be partly inherited.

Gardner’s theory also predicts low correlations between the different aspects of intelligence, whereas psychometric instruments, such as intelligence tests, tend to find high correlations between the different components of intelligence.

The bottom line? There really isn’t any empirical evidence that backs up what Gardner has theorized. Many people can pick out specific skills or attributes that correspond with the varying intelligence modalities. Certain animals may also rank as highly as, or even higher than, humans in certain categories. When combined with the low correlation levels, this tends to add more confusion to what intelligence really happens to be.

Of course, there are also some people who believe that the traditional definition of intelligence is too generalized. Just because one person is good at math and another is good at music doesn’t mean one is more intelligent than the other. Intelligence becomes subjective and individualized instead of categorized, which makes it more like how humans learn and think instead of creating a cookie-cutter definition.

Whether one agrees or disagrees, Howard Gardner’s theory of multiple intelligences continues to inspire conversation and debate. In that way, it may even be contributing to its own modalities.


The Expanding Earth Theory offers the idea that continental movements and positioning are due not to the actual motion of plates, but to an increase in the volume of the Earth. This theory offers three specific hypotheses.

  1. The mass of the Earth has remained constant while its volume increased, causing surface gravity to decrease as time passes.
  2. The mass of the Earth has grown in volume in such a way that it has allowed surface gravity to remain constant.
  3. Surface gravity has increased over time due to the Earth growing in both mass and volume.
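All three hypotheses turn on the same relationship, g = GM/R²: surface gravity is set by mass and radius together. A minimal sketch with standard present-day values (the half-radius scenario is purely hypothetical, included only to show the scale of the effect under hypothesis 1):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # present mass of the Earth, kg
R_EARTH = 6.371e6   # present mean radius of the Earth, m

def surface_gravity(mass_kg, radius_m):
    """Gravitational acceleration at the surface of a sphere: g = G*M / R^2."""
    return G * mass_kg / radius_m ** 2

g_today = surface_gravity(M_EARTH, R_EARTH)  # about 9.8 m/s^2

# Hypothesis 1: constant mass with a growing radius means surface gravity
# was higher in the past, e.g. four times higher at half the present radius.
g_half_radius = surface_gravity(M_EARTH, 0.5 * R_EARTH)
print(round(g_today, 2), round(g_half_radius / g_today, 1))  # 9.82 4.0
```

Because g falls off with the square of the radius, even modest expansion would leave a large gravitational signature, which is part of why the theory makes testable claims.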

Since the development of plate tectonics, the Expanding Earth Theory has been essentially debunked. This is because the hypotheses behind the theory have never been shown to have a verifiable mechanism of action. There hasn’t even been a plausible one on the hypothetical level.

The scientific community also points to several verified findings that contradict the key points made in the Expanding Earth Theory.

1. High-precision measurements disprove growth.

Using high-precision geodetic techniques to model the planet, measurements of the horizontal motions of independent rigid plates at the surface are accurate to about 0.2 mm per year and show no growth of the planet at all. This is believed to show that the solid Earth is not currently growing larger, within the limits of these small measurement uncertainties.

2. The motions of plates and subduction zones.

The measured motions of the various tectonic plates and subduction zones, obtained through geodetic, geological, and geophysical techniques, support plate tectonics on a planet of constant size rather than an expanding one.

3. Lithosphere imaging supports consumption.

Imaging of lithosphere fragments shows that material is consumed into the mantle of the planet through a process called “subduction.” The lithosphere is defined as the outer part of our planet, consisting of the crust and the upper mantle.

Subduction is a process that takes place when one tectonic plate moves underneath another and is forced to sink into the mantle of the planet. This is how subduction zones are created. As the plate goes into the upper layer of the mantle, the heat of the planet consumes the excess amount within the subduction zone so that the size of the planet remains consistent.

This is how mountain building and island arcs occur. It is also why earthquakes are more frequent in these zones, since two plates are moving together, with one being shoved underneath the other.

4. Long-term calculations may indicate size consistency.

Researchers have used paleomagnetism, or the study of magnetic fields within the rocks of our planet, to help calculate what the size of the ancient world would have been. Although this specific debunking effort of the Expanding Earth theory is contested, the results show that the overall size of the Earth was very similar 400 million years ago to what it happens to be today.

5. Geological data from ancient Earth show consistency.

Another way to examine the overall size development of the planet is to look at geological data from the Paleozoic Era. By looking at the moment of inertia, which is the angular mass of the planet, there is evidence to conclude that there has not been a significant change to the size of Earth’s radius in over 600 million years.

Is There Any Evidence Which Supports the Expanding Earth Theory?

The primary evidence that is used to support the idea of the Expanding Earth theory is the concept of Pangea. Because the continents seem to fit together like a planetary-sized puzzle, the thought is that during the late Paleozoic period, there was a time of expansion that allowed the land mass on our planet to break apart and be separated by the upwelling of a growing ocean.

Some support the idea of a smaller planet because it would have had lower overall gravity. This would have allowed the great dinosaurs to develop and move with greater ease than our standard gravity would allow today. There are also arguments involving the topography of the ocean floor and how it may show “expansion scars” from planetary expansion.

The scientific community supports plate tectonic theory based on observable evidence. Although the Expanding Earth theory cannot be completely debunked, it is an idea that is generally not accepted today.


Topology is the study of geometric properties and spatial relations that are unaffected by the continuous deformation of a figure’s shape or size. Two continuous functions from one topological space to another are called “homotopic” if one can be continuously deformed into the other; the deformation itself is referred to as a “homotopy.”

So what is homotopy type theory? It’s an idea that brings something new into the world of mathematics. It suggests that there is an invariant conception within the objects of mathematics, offering the idea that intrinsic homotopical content is present. The goal is to create another step toward having all mathematics have consistent foundations, unifying the language of numbers.

Why Is a New Language Needed for Mathematics?

Much of the work in homotopy type theory involves fine-tuning its formulations. This allows it to work in conjunction with traditional homotopy theory so we can understand both at a deeper level. Yet for those who have been involved with mathematics for some time, there is a complaint that what this new theory is doing is creating a new language for reformulation, and one that is not really necessary.

The reason this new theory is needed is that it allows ordinary logic to work through hypothetical systems. Instead of needing to deal with equality or formulations that create specific outcomes that could influence the end result, homotopy type theory allows for a literal interpretation of the equation elements being studied.

It’s a subtle shift. Instead of equality, homotopy type theory promotes equivalence.

So what do we learn from this process? That mathematics may still be a formal language, but that it can also be a natural language that includes more people. Instead of making people think a specific way, the homotopy type theory allows people to think in a natural way. This changes the formulation of an equation, but it still creates the exact same result.

It’s Less About Looks and More About Feelings

If you’re not active in the world of mathematics, then discussing dependent sums, product types, homotopy pullback, and other terms is going to feel like a foreign language. You can look at it all you want, but until someone explains what those terms mean, you’re going to be unable to communicate with someone who does speak that language.

What homotopy type theory provides is a more transparent language that helps to understand the equations, proofs, and other elements that are being evaluated when looking at advanced mathematics. This allows it to be understood on a core level by more individuals, even if they are not putting in the effort to solve the equations involved.

Think about it like this. You know 2 + 2 = 4. But why do you know this? Because you can see two items, understand that “+” means to add the numbers together, and the “=” is your final answer. Under that expression, the actual equation becomes transparent. You know how it was solved, even if you didn’t watch the person solve it.

This equation also gives you the information you need to begin solving other equations that are similar to it. It becomes a foundation to build on. Now that you know 2 + 2 = 4, you can figure out other expressions. For example: 2 + 2 + 2 = 6.

Yet what if the universe doesn’t work in that way? Maybe the universe says that 2 + 2 = 3 + 1. Using homotopy type theory, it becomes possible to express solutions in such a way.
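A claim like this can be made literal in a type-theoretic proof assistant. As a sketch (Lean 4 syntax, used here only as one concrete implementation of type theory), the statement 2 + 2 = 3 + 1 is itself a type, and proving it means constructing a term of that type:

```lean
-- The proposition `2 + 2 = 3 + 1` is a type. Both sides compute to the
-- same numeral, so reflexivity (`rfl`) is a term of that type, i.e. a proof.
example : 2 + 2 = 3 + 1 := rfl
```

In homotopy type theory this identity type carries extra structure: its terms behave like paths, and distinct proofs of the same equality need not themselves be equal.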

You would also be able to change how these equations are being expressed in a transparent way that is simpler to understand. It’s like transforming 2 + 2 + 2 = 6 into 2 x 3 = 6, but on a much larger scale. Like having heuristics become a theorem.

For example: LX = {x : X | x = x}, the free loop space of X.

This gives us a mathematical universe where items are similar to, but potentially more interesting than, what they were under the traditional foundations of mathematics. It creates simplicity from complication, adds transparency, and hopefully makes unfamiliar things more familiar because expressions are more accurate.

This is because equivalence becomes the point of evidence instead of equality needing to be required in order for a solution to be discovered.


John Dalton’s atomic theory was the first attempt to describe all matter, in a complete way, through atoms and their properties. His theory was based on two verified scientific laws: the law of conservation of mass and the law of constant composition.

The law of conservation of mass says that within a closed system, matter can be neither created nor destroyed. This means that if a chemical reaction creates something new, the amount of each element in the products must equal the amount in the starting materials. It is for this reason that chemical equations must balance.

The law of constant composition says that pure compounds will always have the same proportion of the same elements. That means if you were to look at salt crystals, you would find the same proportions of the base elements, sodium and chlorine, no matter how much salt you had or where you got it. Other substances could be mixed with the salt to change it, but the composition of the salt itself is always the same.

The Five Principles of Dalton’s Atomic Theory

When Dalton proposed his atomic theory, it was based on ideas, assumptions, and principles more than on directly observable facts. There are five components to the atomic theory offered by Dalton.

  1. All matter is made up of atoms. This means that everything that is made of matter is composed of atoms, which are indivisible by design.
  2. All atoms can be identified by mass and properties. This means that any given element has atoms that must be identical in properties, including their mass. It also means that an element can be identified because its atoms will act like a fingerprint to identify it.
  3. All compounds are made up of atom combinations. For a compound to form, Dalton suggested with his atomic theory that it would have to be composed of at least two different types of atoms. A combination may also include more than two.
  4. All chemical reactions are a rearrangement of atoms. This indicates that when a chemical reaction occurs, it is because the atoms are being rearranged into a different combination, always in whole-number ratios.
  5. If elements react, their atoms may sometimes combine in more than one simple whole-number ratio. This would help to explain why weight ratios in various gases were simple multiples of each other.
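The fifth point, the law of multiple proportions, can be illustrated with a quick calculation. Carbon forms both CO and CO2, and the oxygen masses that combine with a fixed mass of carbon in the two compounds stand in a simple 1:2 ratio (the atomic masses below are modern approximate values, not Dalton’s):

```python
C_MASS = 12.011   # atomic mass of carbon, u
O_MASS = 15.999   # atomic mass of oxygen, u

# Mass of oxygen per unit mass of carbon in each compound.
o_per_c_in_co = O_MASS / C_MASS        # CO:  one oxygen atom per carbon
o_per_c_in_co2 = 2 * O_MASS / C_MASS   # CO2: two oxygen atoms per carbon

# The two quantities stand in a simple whole-number ratio.
print(o_per_c_in_co2 / o_per_c_in_co)  # 2.0
```

Dalton could measure these combining-weight ratios directly, long before individual atoms could be observed, which is what made the postulate testable.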

Dalton had another postulate that he included with his initial atomic theory that, unfortunately, made it difficult for the scientific community to accept his ideas in their entirety. He believed that when two elements formed only one known compound, it should be assumed to combine in a simple binary (1:1) ratio. This caused him to believe that the formula for water was HO instead of H2O and that ammonia was NH instead of NH3.

Dalton had made the same mistake that many had before. Based on his own work, he made an assumption that turned out to not be true. This is why experimentation is so critical to the scientific process.

The Atomic Theory, Experimentations, and Its Modern View

When we look at an atomic theory experiment, what we’re trying to do is either prove that Dalton’s theory is correct or prove that it is incorrect. Evidence must be obtained in order for this to occur, which can only be done through experimentation and observation. Since the theory was first proposed, we have learned quite a lot about atoms and can show that some components of Dalton’s theory are categorically incorrect.

For example: in Principle #1, Dalton stated that atoms were indivisible by design. We know that this is not the case. Atoms are actually made of positive components called protons, negative components called electrons, and neutral components called neutrons. Rather than being solid units of mass, atoms were shown by later experiments to be mostly empty space, with nearly all of their mass concentrated in a tiny nucleus.

There are more experiments that have helped to disprove other elements of Dalton’s atomic theory as well, though it would take several generations for scientists to realize that there was a greater truth to find.

The Issue of Neutrons and Isotopes with the Atomic Theory

In Principle #2 of Dalton’s atomic theory, we have found that the idea of all atoms of a specific element having the same mass is also incorrect. This is because the number of neutrons present within an atom can vary, giving rise to the different isotopes that exist for the same element.

This means Dalton was partially correct, but also partially incorrect. Here’s why.

Let’s take carbon as an example. At the time of this writing, there are 15 known isotopes of carbon. Some are natural, while others are artificial. The most stable radioactive isotope, Carbon-14, has a half-life of about 5,700 years, while the most stable artificial carbon isotope has a half-life of just 20 minutes. Three isotopes of carbon occur in nature.

Each isotope is assigned a number. Using the naturally occurring isotopes as an example, they are Carbon-12, Carbon-13, and Carbon-14. These numbers are not assigned based on the order in which the isotopes were discovered; each number is the isotope's mass number, the total count of protons and neutrons, which is very close to its isotopic mass.

This means Carbon-8 has an isotopic mass close to 8 u, Carbon-12 has a mass of exactly 12 u by definition, and so forth.

So what the atomic theory experiments regarding atomic number, mass number, and isotopes have been able to determine is this: atoms of the same element can have different masses. Atoms of the same specific isotope, however, do not. So Dalton was partially correct, because every Carbon-12 atom really does have the same mass as every other Carbon-12 atom. He was partially incorrect because, at the time, it was not known that an element could have isotopes with different masses.
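This is also why the standard atomic weight of an element is usually a fractional number: it is the abundance-weighted average over the naturally occurring isotopes. A short sketch for carbon, using approximate textbook abundance values for its two stable isotopes (Carbon-14 is present only in traces):

```python
# Approximate isotopic masses (in u) and natural abundances for
# carbon's two stable isotopes. Standard textbook values.
isotopes = {
    12.000: 0.9893,   # Carbon-12
    13.003: 0.0107,   # Carbon-13
}

# The standard atomic weight is the abundance-weighted average mass.
average_mass = sum(mass * abundance for mass, abundance in isotopes.items())
print(round(average_mass, 3))  # 12.011
```

The result matches the familiar 12.011 u listed on the periodic table, even though no individual carbon atom has that mass.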

Dalton’s Atomic Theory and Its One Missing Item

Maybe you’ve heard of a quark. No – not the Ferengi bartender on the show Star Trek: Deep Space Nine. Quarks are subatomic particles that carry a fractional electric charge. They have never been observed in isolation, but their existence has been predicted and confirmed through experimentation. Quarks are considered to be elementary particles.

Quarks are considered to be the very building blocks of each atom. They are the primary constituents of protons and neutrons, which means they are part of all ordinary matter. Whether a particle is a proton or a neutron is determined by the combination of “up” and “down” quarks it contains.

Two up quarks with one down quark make up a proton. Two down quarks with one up quark make up a neutron.
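These combinations also account for each particle's electric charge. Up quarks carry a charge of +2/3 and down quarks −1/3 (in units of the elementary charge), so summing the quark charges recovers the familiar values. A quick check:

```python
from fractions import Fraction

# Electric charges of the up and down quarks, in units of the
# elementary charge e.
UP = Fraction(2, 3)
DOWN = Fraction(-1, 3)

proton = 2 * UP + DOWN     # two up quarks, one down quark
neutron = 2 * DOWN + UP    # two down quarks, one up quark

print(proton)   # 1  (the proton's charge of +1e)
print(neutron)  # 0  (the neutron is electrically neutral)
```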

But these aren’t the only quarks that have been found since Dalton first proposed the atomic theory. Here are some of the other quarks that have been determined to exist.

  • Strange Quark. Discovered with the lambda particle, the quark was deemed to be strange because it gave the particle an unexpectedly long lifetime. A lambda particle is a different baryon formation than what creates protons and neutrons; it consists of one up quark, one down quark, and one strange quark.
  • Charm Quark. This quark was discovered through experimentation in 1974, when two independent teams observed the J/ψ particle, which is made up of a charm quark and a charm antiquark.
  • Top Quark. Evidence of a sixth quark was reported in 1995, found through the collision of protons and antiprotons in a collider. Little is known about this quark other than its mass, which is quite large compared to the other quarks.

Before Dalton arrived at his atomic theory, he conducted meteorology experiments because he wanted to prove that evaporated water could exist in the atmosphere as an independent gas. Instead of water molecules and air molecules mixing together, what would happen if it could be proven that they were actually separate?

This caused him to perform experiments on a series of gas mixtures to determine what effect each individual gas may have on the other. Through his observations, he was able to come up with what would become the first version of the atomic theory. It is a process that is still being evaluated to this day.

What Does Dalton’s Atomic Theory Mean Today?

When Dalton first proposed his atomic theory, there was no way to even predict the existence of protons, electrons, and neutrons – much less the existence of quarks or other subatomic particles. Yet when one looks at the entirety of the theory that was offered, many components of it are still considered to be true. It even provides much of the framework that is used in modern chemistry efforts.

Through experimentation, parts of the theory have been modified because of new knowledge. The principles, however, have allowed multiple generations of scientists and researchers to learn more about the smallest components of our universe. With future experimentation, we can continue to use Dalton’s atomic theory as a foundation for new discoveries.


In the behaviorist learning theory, the idea is to create specific behaviors through rewards for wanted behaviors and consequences for unwanted behaviors. When it is applied to a classroom setting, it becomes a method of operant conditioning. It is used not to help children understand the benefits of following the rules through logical debate, but to shape behavior through the use of positive and negative reinforcement.

With the behaviorist learning theory in the classroom, there are four basic types of reinforcement that can be used.

  • Positive Reinforcement. This is an immediate reinforcement of a wanted behavior when it is observed. Giving a student verbal praise for a wanted behavior is a common form of positive reinforcement that teachers offer to students.
  • Negative Reinforcement. Instead of offering a student a compliment, this type of reinforcement tells a student that their behavior is not wanted. The goal isn’t to embarrass the student, but to offer an alternative behavior that could bring about a desired reward.
  • Presentation Punishment. This option is often used as a form of showing an entire class what will create a negative consequence. If Johnny keeps yelling during story time, a teacher might bring him up to the front of the class and tell him that his behavior is inappropriate at that moment. The goal here is not simply to embarrass Johnny, but to discourage the other students from replicating his behavior.
  • Removal Reinforcement. This may be used by removing a disruptive student with negative behaviors from the classroom. It may also be used through a period of negotiation so that a teacher gets what they want, but a student can also have something that they want.

Each reinforcement opportunity has specific benefits and disadvantages that must be considered before it is implemented in a classroom setting.

Pros and Cons of Positive Reinforcement

It offers an immediate reinforcement of a wanted behavior. Specific statements of praise help to reinforce the compliment being offered. Specific actions, such as “clipping up” or “earning a star,” can also be included to initiate rewards.

Some students aren’t motivated by rewards. They don’t care about the classroom setting and will not respond to the positive reinforcement opportunities.

Pros and Cons of Negative Reinforcement

It creates an immediate “consequence” for an unwanted behavior. Some students may hear this consequence and not want to have it themselves, which will modify their behavior. It can create immediate change within a student who is motivated by rewards.

Some students are not motivated by a negative reinforcement either. “Who cares what you think?” Their behaviors are more about their individual needs and those needs don’t involve the classroom setting.

Pros and Cons of Presentation Punishment

It impacts the entire classroom. You’re able to modify the behavior of a large group by using an unwanted behavior from one individual. It can address a specific and potentially dangerous unwanted behavior immediately.

It causes the student being used as a presentation to be targeted by other students. They may make fun of that student or not want to be associated with them. Some students are sensitive and may resent being used as an example toward other students, which increases the number and the aggressiveness of their unwanted behaviors.

Pros and Cons of Removal Reinforcement

It is a way to meet the needs of a specific student without disrupting the entire class. It may remove an unwanted behavior from the classroom immediately. Removal minimizes impact while allowing learning progression. It takes away something that a student sees as “good,” which encourages them to “earn it back” with wanted behaviors.

It may encourage a student to continue offering unwanted behaviors so they can get their way. They learn that there is a direct connection between behaving “badly” and getting what they want. It may cause other students in the classroom setting to behave in the same way so they can receive “special treatment” as well.

Which Option Is Right for Teachers Today?

Teachers should be using all of these options when appropriate to address wanted and unwanted behaviors in the classroom. The goal should always be to avoid an unpleasant consequence, but sometimes a punishment is necessary to remove an unwanted behavior. Teachers should never belittle a student. They should always be looking for a way to generate a positive outcome.

And behaviorist learning theory in the classroom works best when an individualized approach is taken. A group consequence creates resentment in students who weren’t involved. Group rewards can even reinforce unwanted behaviors, since students who weren’t meeting expectations are rewarded anyway. By finding the middle ground, the classroom can really become a good learning environment.


Professor Geert Hofstede developed the Cultural Dimensions Theory through his studies of how the values of a workplace can be influenced by culture. Under Hofstede’s definition, culture is considered to be the collective programming of the mind, which distinguishes the members of one group or category of people from those of another.

Under this theory, there are six dimensions of national culture that have been identified. Here is a look at those six dimensions and what it means for the modern workplace.

Power Distance

This cultural dimension is an expression of the degree to which the less powerful members of a society accept and even expect that power is distributed unequally. How does a society handle inequality when it is discovered amongst its people? When there is a large amount of this dimension within a society that has a hierarchy, then people accept their place and role within that society without complaint.

If there is a low amount of this dimension within a society, then people will work to create an equalization of power amongst all members of that society. They will demand that any inequality discovered be either rectified or justified.


Individualism vs. Collectivism

This cultural dimension reflects a preference for a loosely-knit social framework in which individuals are expected to take care of themselves and their immediate families. When there are high levels of this dimension, people work hard for themselves and their own advancement so that they can enjoy a better standard of life.

When there are low levels of individualism within a society, then collectivism begins to take shape. The framework of society becomes one in which individuals can expect to be taken care of by family and friends. This is done in exchange for loyalty to the collective. Instead of “me,” society focuses on “we.”


Masculinity vs. Femininity

This cultural dimension looks at the preference a society has for traits that are traditionally labeled masculine. This may include heroism, assertiveness, and other material markers of achievement. It is a measurement of how success is defined. With high levels of masculinity, these traits will be emphasized and treasured, creating a competitive way of life.

With low levels of masculinity, there is more of a preference toward modesty and cooperation. The society is more orientated toward consensus results and caring for those who may need some level of assistance.

Uncertainty Avoidance

This cultural dimension focuses on how often and how much the members of a society become uncomfortable with events that are strange, uncertain, or ambiguous. If there are high levels of this dimension operating within a society, then there will be rigid behavioral codes that will be enforced and anyone “thinking outside of the box” will be harshly criticized.

When there are low levels of this dimension in a society, then the goal is often to let the future happen instead of trying to control it. Attitudes are more relaxed toward creative ideas because the end that is achieved is more important than the processes that were used to reach the end of the journey.

Long Term Orientation

This cultural dimension is a reflection of how a society looks at the past. Do they use the lessons learned from before to make challenges in the present easier? Are there future plans being initiated from current situations? Or is the society ignoring the past completely?

When a society focuses on this dimension at a high level, it will typically take an approach that can only be described as pragmatic. They will encourage greater education and learning opportunities while practicing thrift to create a better future.

With low levels of this dimension present, any change to society is considered to be suspicious. The people who complete tasks based on traditions and previous best practices are often celebrated.


Indulgence vs. Restraint

This cultural dimension looks at the degree to which the members of a society allow themselves gratification that goes beyond their basic needs. It is a reflection of how freely an individual may pursue their personal enjoyment of life and be able to have fun. In societies that have a high indulgence level, spending often reflects the individual’s wants and virtually anything within reason is allowed.

When there is a low level of this dimension in a society, then indulgence is often regulated by policies, procedures, or even laws. The goal is to suppress the desire for individual gratification, and this is often enforced through the creation and enforcement of very strict norms within the society.

In Conclusion

Geert Hofstede’s Cultural Dimensions Theory offers us a glimpse at how we can expect a group of individuals to behave within a society of any size and scope. Whether it’s at work, in a community, or even on a national level, these six dimensions help to define who we are and who we plan to be.


The hegemonic stability theory is one that has often been offered as an explanation behind the successful cooperation that occurs within an international system. By having a single, dominant actor, international politics is able to provide a desirable outcome for everyone that is involved within that international system.

This means the reverse side of this theory is that an absence of such a dominant actor would create an undesirable outcome for everyone involved in the international system. An international system without a hegemon would be associated with disorder instead.

The hegemonic stability theory is limited because it applies only under very special and specific conditions. It only becomes a valid theory based on the following.

  • The extent to which a hypothesis regarding public goods or consumption is able to explain issues that are being seen within the realm of international politics.
  • The extent to which the theory's core assumption happens to be true: that collective action within an international system is essentially impossible when there isn't a dominant actor.

Because of these limits, the hegemonic stability theory can only be an empirical truth if the following factors are present.

  • The dominant actor is able to provide a greater level of stability within the international system with their presence.
  • The stability that is achieved by the dominant actor is able to benefit everyone within the international system. It may even need to benefit the smaller states in an international system more than the larger states.

To overcome the natural limits of hegemonic stability theory, it becomes necessary to look at specific influences that may affect the limits and truths that can be achieved. This means dynamic processes must be included in their entirety while there is clarification offered in terms of size. There must also be a clear role for the hegemonic power being offered, which can either be coercive or benevolent in nature.

Then there must be a definition of those contrasts based on whether the hegemonic structures are centralized or decentralized.

What Does It Take to Become a Hegemon?

For a dominant actor within an international system to be defined as a hegemon, it must have three specific attributes.

  1. It must be able to enforce the rules. Whatever system becomes developed for the international system must be able to be enforced by the single dominant actor. This would mean that if the world were to sign a peace treaty, it would be up to the hegemon to enforce its application.
  2. It must have the will to enforce the rules. If the hegemon does not want to enforce the rules, then by definition, it limits the hegemonic stability theory. The dominant actor must enforce the rules for them to be beneficial to all in an equalized way.
  3. It must be committed to the system. A hegemon cannot be involved only to further its own best interests or the interests of its allies. It must be committed to a system which is mutually beneficial to all states.

In order to have these specific attributes, a hegemon must be able to demonstrate it has the capability attributes that will be required for the successful implementation of this theory. This includes having an economy that is large and continues to grow. It must be able to prove dominance within a major economic or technological sector. There must also be political powers in place that are protected by a military power.

Once these attributes are in place, the dominant actor is able to create a system that works toward the collective good of all. The only problem is that the other members of the international system want to put in as little work as possible to gather the benefits that are being produced by the hegemon. This means the dominant actor must continue to convince (or coerce) all other parties involved within the international system to participate as needed.

What Is a Current Example of the Hegemonic Stability Theory?

There can be many forms of the hegemonic stability theory operating simultaneously and independently of one another. In a modern example of two hegemons, the US qualifies in terms of government structure and societal emphasis, while China qualifies in terms of trade and manufacturing.

Why is the US a hegemon? The United States attempts to promote capitalism and democracy throughout the world today. Through this effort, the overall goal is to promote human rights. The idea of capitalism is that an individual can work to achieve their own goals instead of being forced to achieve the goals of the government. In order to enforce these ideas, the US is backed by a military force that will help smaller countries adopt the same ideas for their governmental and societal structures.

Why is China a hegemon? China qualifies as a hegemon for trade and manufacturing because the benefit to the rest of the world is affordable goods. By focusing on import/export, China is able to side-step the US role as a hegemon and avoid the enforcement of democracy and capitalism within their own borders. Because the US utilizes the manufacturing and trade that is available, they concede that the benefits being provided are better than the benefits of attempting to create a certain society within Chinese culture.

For all practical purposes, the US must continue to remain committed to democracy and capitalism. It must do so even if other countries put up barriers to such a system. Putting up barriers is simply a way for that group to remove themselves from the international system. If the US were to put up barriers, it would collapse as a hegemon.

China is in a similar situation. If it were to stop being committed to open manufacturing and trade, then its role as an economic hegemon would collapse. Even if other countries were to impose tariffs or import restrictions on Chinese goods, that role would survive; it would only collapse if China itself began imposing such barriers as a condition of access to its trade and manufacturing systems.

What Happens When Technologies Change in the Hegemonic Stability Theory?

As time goes by, there will be different states or groups that are able to achieve a better product, service, or idea than the one that is being offered by the hegemon. When growth becomes uneven because power within the international system shifts due to new practices, methods, or technologies that are produced outside the influence of the dominant actor, then this creates limits on the ability of the hegemonic stability theory to operate properly.

Once the system becomes unstable, it will begin to erode the hierarchy that was developed around the actions and decisions that were offered by the hegemon. Once the position of the dominant actor is undermined, other actors, referred to as “pretenders,” will begin to emerge.

Pretenders appear when the benefits of the current system being enforced come to be perceived as unfair by those who are participating within the system. Once this occurs, the hegemon has two options.

  1. They can attempt to re-establish their dominance by showing the international system that the pretenders cannot meet the qualifications of a hegemon as effectively as the current dominant actor has in the past.
  2. They can cede the hegemon role to the pretender, assuming that the pretender has the ability to meet all of the qualifications to enforce the hegemonic stability theory.

Hegemons have been present throughout human history, from Portugal dominating colonialism because of their superior methods of navigation, to Britain dominating for nearly 300 years because of their textiles, their naval fleet, and their development of initial industrial supremacy. At some point, other participants are able to innovate and become a potential pretender, which causes the dominance to go away.

And this is what limits hegemonic stability theory at its core. Because a dominant actor cannot fully control the actions and activities of every individual within participating states in the international system, there will always be a chance that a better idea, system, or role can be developed to replace the current hegemon.

Do We Need a Hegemon to Be Successful in the Modern World?

Hegemons certainly play a role in society. This doesn’t mean that a hegemon is always present, however. Although a hegemon will typically emerge in some way, there are periods of transition where a true hegemon may not exist. One such period stretched from the start of World War I to the end of World War II. As the world fought over moral, societal, and foundational issues, it reshaped itself.

After both world wars, international organizations were formed to help limit what a hegemon could potentially do to the international system. Both the League of Nations and the UN served as a check and balance for potential hegemons.

So do we need a hegemon? No – and that limits hegemonic stability theory. Yet with a hegemon, the world can be more stable, productive, and happy, which means the argument to support this theory will also be ever-present.


Developed by Walter Cannon and Philip Bard, the Cannon-Bard theory of emotion is the idea that the emotional and physiological responses to a stimulus occur simultaneously. This is opposed to other theories of emotion, which infer that an emotion occurs only as a result of physiological arousal. Under the Cannon-Bard theory, the same pattern of physiological arousal can accompany different emotions and physical responses.

The classic example given when explaining this theory is of a woman who is walking through the woods. She happens to encounter a bear while walking on the trail. This causes her to begin feeling nervous. Her muscles tense up. She may begin to start trembling. Sweating might happen. Although not every single physical response may show at the exact same moment, the stimulus triggers the emotion and the physical responses at the same time.

This creates a simple sequence which the Cannon-Bard theory of emotion is able to follow. There is a stimulus, which simultaneously triggers an emotion and a physiological response, which is then followed by a behavioral reaction. Here are some more examples that could be applied to this theory.

1. Excitement

John has a concert he is going to be playing in tonight. It’s his first concert. The idea of playing before a large group of people has him feeling nervous. His stomach feels squeamish. His head starts to hurt a little bit. His breathing becomes a little bit heavier. Yet he knows that he needs to play in the concert in order to get a good grade in Band, so he gets dressed.

2. Happiness

Carol sees her wedding dress for the first time. She instantly feels a tear begin to well up in her eye. Her heart starts to beat faster. Her palms are starting to feel very sweaty. She also feels a little worried because she doesn’t know if she can actually afford this dress. Yet because seeing it makes her feel happy, she decides to try it on to see just how good she looks while wearing it.

3. Grief

Mia wakes up to discover that one of her fish in her aquarium has died. She’s 7 and this is the first time one of her pets has died. She suddenly feels this pain in her stomach like nothing she’s ever had before. It has become difficult for her to breathe. The whole world seems to be spinning out of control. This causes her to run out of her room, just as fast as her feet can move, as her whole body begins to sob uncontrollably.

4. Sadness

Harold just submitted his work project to his boss. It was a project that had taken 6 months to complete. His boss looks at the work, then looks at Harold, and says, “You need to start this over from the very beginning.” Harold’s stomach suddenly drops. There’s a weight that has been placed on his chest. All of the hopes he’d had about what might happen because of his hard work are suddenly gone. So Harold says, “Yes sir.” Then backs slowly out of the office and lightly closes the door.

5. Anger

Harold just submitted his work project to his boss. It was a project that had taken 6 months to complete. His boss looks at the work, then looks at Harold, and says, “You need to start this over from the very beginning.” Harold suddenly feels his cheeks get very hot. His forehead becomes very sweaty. He feels his chest and abdomen clench up. His eyebrows furrow. “I will not!” Harold yells at his boss. He leaves the office in a rush, slamming the door as hard as he can behind him. Pictures on the wall clatter down.

What Makes the Cannon-Bard Theory of Emotion Unique?

What is unique about the Cannon-Bard theory of emotion is that the same stimulus can cause two very different emotional reactions (see examples #4 and #5 above). Even in the classic example of the woman and the bear, some people might decide to become aggressive with the bear. Some people might become overwhelmingly sad.

Because different emotional responses may occur, a different response is also likely to occur after the emotion has been triggered by a stimulus. This is why the simultaneous nature of the physiological and emotional changes is an important part of this theory.

If the physical response did not occur at the same time as the emotion, we could theoretically all have the same response to any situation we encounter – no matter which emotion was being experienced.