Definitions and Examples of Theory

Dr. Paul Hersey and Ken Blanchard have identified four primary leadership styles that are implemented in personal and professional circumstances.

  • Telling. This type of leader tends to tell their direct reports what to do and how the task needs to be completed with specificity.
  • Selling. This leader engages in back-and-forth interaction with their followers. Ideas are “sold” to the group as a way to get the entire team to “buy in” to the process which needs to be completed.
  • Participating. This type of leader offers less direction to their direct reports, allowing the individuals on their team to take on an active problem-solving role. These leaders help the team to brainstorm ideas, make their own decisions, and oversee the process to ensure its completion.
  • Delegating. This leader takes a hands-off approach to their direct reports. Members of the team are generally asked to make a majority of the decisions and are responsible for the outcomes which occur. The role of the leader is to take tasks that are given and then distribute them to appropriate team members.

The Hersey-Blanchard situational leadership theory suggests that there is a fifth type of leader: one that can adapt their style based on the situation that they encounter. In some situations, they may need to have a telling style. In others, they may need to be a participating leader. By being adaptive, the situational leader can lead their direct reports in the most efficient manner possible because they’ve been able to identify the team’s current needs.

How Maturity Affects the Leadership Style Chosen

Hersey and Blanchard suggest with their leadership theory that individuals choose what type of leader they plan to be. One of the key identification markers which leaders use to determine the type of leader they will be is the maturity level of their direct reports.

In general terms, teams that are less mature are going to require more hands-on leadership from the person who is designated to be in charge. The situational leadership theory identifies four general maturity levels.

  • Level 1. The direct reports lack the knowledge needed to complete the job. They may not have the skills which are necessary or a willingness to complete a task.
  • Level 2. A team with this maturity level is willing to complete a task, but does not have the necessary skills to get the job done.
  • Level 3. Direct reports with this maturity level have the capability and skills to complete a task, but they do not wish to take responsibility for the decisions that may need to be made.
  • Level 4. This team is highly skilled, willing to complete any task, and willing to accept the responsibility for the outcomes which are achieved.

These levels are not static. People develop new skills every day. Situational leaders can recognize this fact and adapt their leadership style to the changing maturity levels of their direct reports. A new job may require a telling style of leadership because there isn’t any knowledge or skill available to complete a task. As time passes and team members become qualified, the leader may transition to a participating style so that the team can develop their problem-solving skills next.

By being adaptable, leaders can then avoid the pitfalls which occur when someone is locked into a specific style. A team that is very mature will struggle with a leader who wants to take a telling approach because they already have developed the necessary skills to work independently. The same is true for the opposite type of team. Taking a delegating leadership approach to an unskilled team will make it difficult to complete an assigned task.

Behavior and Situational Leaders

Individuals must also be addressed by situational leaders because a team-only approach does not account for enough variables. Some team members may have high commitment levels, but low competence levels. Others are self-reliant achievers, with both high competence and a high commitment to the cause.

There may also be disillusioned workers on a team, who have an average level of competence, but have low levels of commitment because of setbacks that have happened to them. Some may be cautious with their commitment levels, waiting to see what leadership style is going to be employed.

The Hersey-Blanchard situational leadership theory makes it possible for today’s leaders to recognize the skills, maturity, and behaviors of their direct reports and adjust their leadership style to meet specific needs. In doing so, it becomes possible to lead any team to a successful outcome.

Have you ever had something happen to you that was unplanned? Perhaps a promotion came up unexpectedly because a co-worker moved away, or someone wrote a blog post about what you do. These unexpected events may be positive, like having your boss come to you, say they’ve recommended you for the promotion, and encourage you to apply for it.

They may also be negative, like having someone write a negative blog post that attempts to ruin your public reputation and have that content go viral.

John Krumboltz developed the happenstance theory to show how positive or negative events can be the foundation of indecision or a stepping stone to something greater. He suggests that every opportunity or chance encounter that may happen to an individual over the course of any given day offers some type of benefit.

By focusing on the potential benefit, it becomes more likely that the benefit will be realized in that person’s life.

What Is the Core of Happenstance Theory?

The future can be predicted to some extent, but there are times when we can also make our own luck. This is the core of the happenstance theory. Chance events, unpredictable social factors, or predictable environmental factors can all have a unique influence on an individual. These experiences may be positive or negative, but certain personality traits can turn any encounter into one that can eliminate indecision.

Here are the four personality traits that Krumboltz recommends developing within the happenstance theory.

  • A curiosity to explore whatever learning opportunities might be made available to an individual, whether planned or unplanned.
  • A persistent attitude that allows for individuals to deal with roadblocks or obstacles that may come up over the course of any given day.
  • A flexibility to address the events, circumstances, problems, or successes that may occur through the pattern of choices an individual makes during the day.
  • A focus on positive energy instead of negative energy so that optimism becomes the foundation of choice when an unplanned event occurs.

Krumboltz suggests that when an individual can focus on these four personality traits and develop them over time, they will have the ability to capitalize on chance events which occur to them. Coincidence becomes opportunity.

Factors That Improve the Chances of Implementing Happenstance Theory

There are several factors which can be helpful to individuals who are seeking to turn “lemons into lemonade.” Krumboltz suggests that implementing multiple factors on a regular basis at a personal level makes it possible for someone to identify chance encounters and turn them into a choice opportunity.

These are the factors which are suggested for individuals to highlight in their own lives.

  • Ongoing self-assessments that are open and honest about personal strengths and weaknesses. The focus should be to improve weak areas without compromising strong areas.
  • A commitment to developing personal skills by taking advantage of ongoing learning opportunities. Once an individual becomes comfortable with their circumstances, they are less likely to seek out those opportunities.
  • Receiving feedback and assessments from trusted family, friends, supervisors, and colleagues. By seeing oneself through the eyes of another, it becomes possible to improve areas of weakness that may not have been otherwise identified.
  • Networking effectively in personal and professional circles.
  • Achieving a balance between personal and professional responsibilities. Identifying areas that seek to unbalance an individual and either eliminating or reducing their influence allows for better happenstance recognition.
  • Planning for the future, including financial planning, so that periods of unemployment or uncertainty can be turned into periods of opportunity.

When these tasks and attributes become a personal point of focus, it becomes possible to turn any encounter or occurrence that happens over the course of a day into an amazing personal or professional opportunity.

Resistance Within the Happenstance Theory

Becoming comfortable is what creates resistance to the happenstance theory. Comfort stops people from acting on recognized opportunities or being willing to take a risk. This is generally from a career perspective, but there are personal applications which may apply.

There is a difference between feeling happy and satisfied and feeling comfortable. You may be satisfied with where you are because you’ve reached your goals, but there should still be future goals toward which you strive. Comfort occurs when there is a lack of future goals. There is no need to stretch oneself because personal or professional achievement has been “maximized.”

Except in happenstance theory, maximization never occurs. You can always be a little bit better every day if you’re willing to look for the opportunities that come your way.

Geometric measure theory is the study of the geometric properties of sets, typically in Euclidean space. When calculating a coordinate, it is necessary to have three specific points available in the two-dimensional Euclidean plane to determine a specific location, a process that is similar to triangulation. Where the three lines connect becomes the point in Euclidean space that is being examined.

This is different from a standard spatial coordinate, which would require six specific points to determine a specific location within the space. That is because standard spatial coordinates work in three dimensions instead of two.

By studying the geometric properties of sets, it becomes possible to apply various geometric tools to surfaces that may not be smooth and would otherwise be difficult to interpret.

Why Was Geometric Measure Theory Developed?

Geometric measure theory was developed out of the need to solve the Plateau problem. The problem asks whether, for every smooth closed curve, there exists a surface of least area among all surfaces whose boundary is that curve. It was first posed in 1760 and not solved until the 1930s.
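
As a rough sketch of the modern statement (the notation below is a standard phrasing, not Plateau’s or Lagrange’s original wording): given a closed boundary curve, the problem is to find a spanning surface of least area.

```latex
% Plateau problem, informal modern statement: given a closed curve \Gamma in
% \mathbb{R}^3, find a surface S with boundary \Gamma of minimal area:
\operatorname{Area}(S) \;=\; \min\{\, \operatorname{Area}(S') \;:\; \partial S' = \Gamma \,\}.
```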

The problem is named after the 19th-century Belgian physicist Joseph Plateau, who studied soap films and found that many patterns in nature followed four specific laws.

  1. They are made of unbroken, smooth surfaces.
  2. The mean curvature of each smooth portion of film is constant at every point.
  3. Films always meet in threes along an edge, and they do so at an angle of 120 degrees.
  4. These edges meet in fours at a vertex, and they do so at an angle of approximately 109.47 degrees (the tetrahedral angle).

These laws hold for minimal surfaces, and geometric measure theory has been used to prove them mathematically.

What Is Central to Geometric Measure Theory?

There are four objects that are considered central to geometric measure theory.

  • Rectifiable sets (and Radon measures), which have the least possible regularity required to admit approximate tangent spaces.
  • Integral currents, which generalize oriented manifolds, possibly with boundary.
  • Flat chains, an alternative generalization of manifolds, which may also have a boundary.
  • Sets of locally finite perimeter, sometimes called Caccioppoli sets, which generalize the notion of a manifold to which the divergence theorem applies.

There are also four theorems or concepts that are considered central to the use of geometric measure theory.

  • Area Formula. This generalizes the concept of a change of variables during integration.
  • Coarea Formula. This generalizes and adapts Fubini’s theorem to geometric measure theory.
  • Isoperimetric Inequality. This states that, among all closed curves enclosing a given area, the circle has the smallest possible circumference (see the sketch after this list).
  • Flat Convergence. This generalizes the concept of manifold convergence.
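
As a sketch of the classical planar statement that geometric measure theory generalizes: for a closed curve of length L enclosing an area A, the isoperimetric inequality reads as follows.

```latex
% Classical isoperimetric inequality in the plane:
4\pi A \;\le\; L^{2},
% with equality if and only if the curve is a circle. Geometric measure theory
% extends this statement to sets of finite perimeter in higher dimensions.
```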

What About Unoriented Surfaces?

Geometric measure theory does an excellent job of investigating surfaces that are oriented. For unoriented surfaces, however, certain problems arise with the equations. This led to the development of the theory of varifolds, which works in conjunction with geometric measure theory. By incorporating varifolds, it becomes possible to gain information about the rectifiability and the degree of smoothness of the surfaces in the calculation.

It should also be noted that there are several variants, both within and outside geometric measure theory, that may produce similar results. The success of the theory in Euclidean spaces, however, is beyond dispute. Its success suggests that the ideas presented within the theory could be applied to other, more general spaces, including higher-dimensional settings.

Geometric measure theory has made the language of mathematics more accessible. Through its study and application, it becomes possible to study dimensional spaces with more accuracy, allowing for the possibility of proving past ideas and theories. By finding the surface of minimal area spanning a boundary curve, we unlock more ways to explore the universe.

Frontier molecular orbital theory is an application of molecular orbital (MO) theory that describes chemical reactivity in terms of HOMO and LUMO interactions. First published in the Journal of Chemical Physics by Kenichi Fukui in 1952, it is a theory of reactivity that would eventually help Fukui share the Nobel Prize in Chemistry for his work on reaction mechanisms.

He would become the first Asian scientist to win a chemistry-based Nobel Prize.

The foundation of the theory is found by looking at the frontier orbitals, which are the HOMO and the LUMO. Fukui made three primary observations for his theory as he studied how two molecules interact with one another.

  • When there are occupied orbitals of different molecules, they will repel one another.
  • The positive charges of one molecule will attract the negative charges of the other molecule.
  • The occupied orbitals of one molecule and the unoccupied orbitals of the other molecule, with specificity to the HOMO and LUMO interactions, cause an attraction between the two molecules.

Because of these observations, the frontier molecular orbital theory can explain how the HOMO of one species is naturally attracted to the LUMO of another species.

Why Is It Called the “Frontier” Molecular Orbital Theory?

Frontier molecular orbital theory looks at the orbitals which are at the outer edges of a molecule instead of all the orbitals that may exist. These outer-edge orbitals, on the “frontier” of the molecule, are the ones that tend to be the most spatially delocalized. This means they tend to have the highest and lowest energies, whether they are occupied or unoccupied.

This is where the HOMO and LUMO interactions come into play.

HOMO stands for “highest occupied molecular orbital.” LUMO stands for “lowest unoccupied molecular orbital.” The “high” and “low” components of the description refer to the energies that are present.

Different degrees of energy are present within these orbitals. The occupied orbital immediately below the HOMO in energy is designated HOMO−1, followed by HOMO−2, and so on. The unoccupied orbitals above the LUMO are designated LUMO+1, LUMO+2, and so on.
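
As a minimal sketch of how these labels follow from a list of orbital energies, the snippet below identifies the HOMO, the LUMO, and the gap between them; the energies, the electron count, and the helper name frontier_orbitals are hypothetical choices for illustration, not data for any real molecule.

```python
# Minimal sketch: given orbital energies (lowest to highest) and an electron
# count, identify the HOMO, the LUMO, and the HOMO-LUMO gap. Assumes a
# closed-shell molecule with two electrons per occupied orbital.
def frontier_orbitals(orbital_energies, n_electrons):
    """Return (HOMO energy, LUMO energy, gap) for a closed-shell molecule."""
    energies = sorted(orbital_energies)
    n_occupied = n_electrons // 2          # two electrons fill each orbital
    homo = energies[n_occupied - 1]        # highest occupied molecular orbital
    lumo = energies[n_occupied]            # lowest unoccupied molecular orbital
    return homo, lumo, lumo - homo

# Hypothetical orbital energies in eV, for illustration only (not real data).
energies = [-21.4, -15.8, -13.2, -10.9, -1.4, 2.3]
homo, lumo, gap = frontier_orbitals(energies, n_electrons=8)
print(f"HOMO = {homo} eV, LUMO = {lumo} eV, gap = {gap:.1f} eV")
```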

Why We Need the Frontier Molecular Orbital Theory

Atoms can begin to form bonds when they are able to share electrons. When two atoms share a pair of electrons, a chemical bond is formed. Atoms can share up to three pairs of electrons, forming single, double, and triple bonds in the process.

Electrons surround the nucleus, but not in the way that the Earth orbits the sun; they exist as standing waves. That means the lowest possible energy an electron can take is analogous to the fundamental frequency of a wave on a string. Under classical mechanics, an electron’s orbit would eventually decay and spiral into the nucleus of the atom, causing it to collapse, which is why a different description of orbitals must take its place.

This is where FMO theory helps describe the interplay of energies in the orbitals of every atom. The potential energy of an electron becomes more negative as it moves into the attractive field of the atom’s nucleus. The total energy remains constant, however, so the loss of potential energy is compensated by an increase in kinetic energy.

Then, by examining the electrons in the outermost orbitals, the degree of attraction between those orbitals becomes a predictor of a chemical reaction. The highest occupied orbitals hold the electrons that are most easily given up, and the lowest unoccupied orbitals most readily accept them. In doing so, a balance can be created.

It also creates the potential for bonding when two atoms are brought together.
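
As a hedged sketch of why the frontier pair matters most: in a second-order perturbation treatment, the stabilization gained when a filled orbital on one molecule mixes with an empty orbital on the other is roughly inversely proportional to their energy separation, so the HOMO and LUMO, being the closest filled-empty pair in energy, contribute the most.

```latex
% Approximate two-electron stabilization from the HOMO-LUMO interaction, where
% H_{\mathrm{HL}} is the interaction matrix element between the donor HOMO and
% the acceptor LUMO:
\Delta E_{\text{stab}} \;\approx\; \frac{2\,\lvert H_{\mathrm{HL}} \rvert^{2}}{E_{\mathrm{LUMO}} - E_{\mathrm{HOMO}}}.
```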

Why Are the HOMO and LUMO So Important to FMO?

In frontier molecular orbital theory, the HOMO and LUMO are the orbitals most likely to be involved in a chemical reaction. A reaction involves the redistribution of electrons in some way, including the formation or breaking of bonds through reduction, oxidation, and other allowed processes.

The HOMO is the occupied orbital with the highest energy. That makes it the orbital from which electrons are most easily removed.

The LUMO is the lowest-lying orbital that is not occupied. That makes it the orbital to which electrons are most easily added.

Frontier molecular orbital theory may focus on the HOMO and LUMO, but they are not always the orbitals involved in chemical reactivity. Symmetry has a role to play within this theory. If the correct symmetry is not present, the interaction may shift to the next orbitals in energy, such as the HOMO−1 or the LUMO+1, to complete the process.

By recognizing this process, it becomes possible to predict what can happen during a chemical reaction.

The Three Applications of Frontier Molecular Orbital Theory

There are three specific reactions that occur within frontier molecular orbital theory that are worth noting.

1. Cycloadditions.

This is a reaction in which two or more open-chain molecules are converted into a ring through the simultaneous formation of at least two new bonds. It is a pericyclic reaction because the electrons involved typically move in a continuous ring. The theory also finds that the stereoselectivity of the reaction can be predicted, as illustrated by how ethylene and butadiene react with one another.

In the reaction of cyclopentadiene with maleic anhydride, for example, the dominant allowed interaction is between the diene’s HOMO and the dienophile’s LUMO. The endo product is favored in frontier molecular orbital theory because secondary orbital interactions lower the overall energy of the transition state.

2. Sigmatropic Rearrangement

This is a reaction that occurs when a sigma bond moves across a conjugated pi system. A concomitant shift in the pi bonds must be present for this reaction to occur. Antarafacial and suprafacial shifts in the sigma bond are possible. This creates a predictable result through the frontier molecular orbital theory by observing the HOMO and LUMO of the two species.

In this application, two separate ideas should be considered:

  • Is the reaction allowed or not allowed?
  • Through which mechanism does the reaction proceed?

3. Electrocyclic Reactions

This is a pericyclic reaction that involves the creation of a sigma bond and the formation of a ring, together with the net loss of a pi bond. The reaction proceeds through either a disrotatory or a conrotatory mechanism; which pathway is allowed depends on the symmetry of the frontier orbital of the pi system.

Frontier molecular orbital theory is a foundational model of organic chemistry. By observing the HOMO and LUMO and how they interact, it becomes possible to predict the results of a chemical reaction.

To understand the benefits of FMO theory, however, it is necessary to have a solid introduction to molecular orbital theory. FMO theory is based on the key principles of MO theory, which itself builds on earlier bonding models such as Lewis theory, and it helps to explain the mechanisms of many reactions.

Ester Boserup was an economist who studied agricultural and economic development. Her work involved agrarian change on the international level and what the role of women should be within societal development. Much of her work was for the United Nations and other international organizations.

Her best-known work regarding population cycles and agricultural production is called The Conditions of Agricultural Growth and was published in 1965. Unlike other theories of population change, Boserup didn’t draw an apocalyptic view of the future. Instead, the major point that she attempts to make with her theory is that humans look at necessity as an inspiration to invent new processes.

What Is the Ester Boserup Theory?

For more than two centuries, population growth centered around a theory proposed by Thomas Malthus. He suggested that if human populations continued to grow, then food production would be unable to keep up with the demands placed upon it. Eventually, the planet would reach a point where there wouldn’t be enough food that could be grown to support the number of people living.

This would create a famine that would likely kill many people, thus adjusting the population levels to the maximum number that could be supported by food production activities. Referred to as Malthusian theory, the idea is that humanity will one day exceed its carrying capacity.

The Ester Boserup theory takes a different approach. Instead of human population levels being limited to the amount of food that a society can grow, she suggests that food production will continue to increase as population levels increase.

Boserup developed her theory based on her knowledge and experiences in the agrarian world. She showed that when there is a threat of starvation to a population center, there is an enhanced level of motivation for people to improve their farming methods. They will invent new technologies and change their labor patterns so that more food can be produced.

How Accurate Is Boserupian Theory?

When Malthus first suggested his theory, there were fewer than 800 million people living on the planet. It wasn’t until the early 19th century that the global population was first estimated at 1 billion.

When Boserup proposed her theory, US and UN census data estimated a global population level of over 3 billion people.

Today, there are more than 7 billion people living on our planet. By 2050, the population is estimated to reach between 9 and 10 billion.

Data released by Oxfam suggest that Boserupian theory has some merit. Their reporting on recent agricultural harvests shows that crop yields in 2010 produced 17% more food than was needed for every person on the planet to have enough to eat. Hunger exists because of the governmental and distribution structures that are in place, and billions of tons of food are wasted annually because of those structures.

This means the Ester Boserup theory has great merit. Since she first proposed it, population levels have more than doubled, and the world is nearly ten times more populated today than it was during the time of Malthus. Yet we are still producing more food, from a total capacity standpoint, than we need, and production levels continue to rise.

Is There a Limit to Potential Growth?

Boserup suggests that agricultural production is based on the idea of “intensification.” Farmers may own land, but choose not to maximize their property’s production levels. There might be three fields owned by the farmer, but only two will be used because the third doesn’t have optimal growing conditions. If the farmer has more children and must support a larger family, he will use the third field in some way to support the higher food needs that are required.

This means there is no real limit to the potential growth that humanity could experience when it comes to food production.

The fact is that we are still greatly under-utilizing our croplands today. In the United States, about 350 million acres are designated as cropland, and 80% of that cropland is used for four crops: feed corn, soybeans, alfalfa, and wheat. The first three crops are generally used to feed livestock, which is then used to create animal products within the food system.

Just 3 million acres are set aside in the US right now for vegetable production. There are nearly 800 million acres of pasture and another 250 million acres of grazed forest lands that could potentially be converted into croplands – and that’s just in the United States.

There could be some truth to what Malthus suggests, but Boserup shows that we are a long way from such an apocalyptic future right now. That’s why she suggests we should hope for the future instead of despair.

What do you feel when someone close to you dies? What would you feel if a doctor told you that you had a terminal illness?

For many, the emotion of these circumstances would be “grief.” The Elisabeth Kubler-Ross theory originally suggested that when severe grief occurs, people undergo a series of emotional stages, though she later explained that the stages are not intended to be linear. The five stages are common experiences that can occur in any order, and some individuals may not experience them all.

What Are the 5 Stages of Grief?

Published in On Death and Dying, the Elisabeth Kubler-Ross theory of grief offers stages of emotion that are sometimes abbreviated as DABDA.

  • Denial. In this emotional stage, an individual believes that their circumstances are somehow incorrect. Either the diagnosis was wrong, the news was incorrect, or something else has happened to make everyone believe something that is not true. The goal is to cling to a reality that is preferable, but false.
  • Anger. At some point, the individual realizes that they can no longer continue existing in their false reality. This creates frustration, often targeted at the individuals who originally brought them the news which caused grief. A common response to this stage of grief is to ask questions, such as “Why me?” Statements such as, “It’s not fair,” are also present.
  • Bargaining. Once the energy from anger begins to fade away, the individual begins searching for a way to avoid feeling grief. The goal is to create a source of hope. People may bargain with God, with doctors, their family, or themselves, asking for more time or for circumstances to be different. In return, the individual will live a better life or offer to give anything in return for more time with someone they have lost.
  • Depression. If the bargaining doesn’t provide the hope which is desired, a state of sadness descends upon the individual. It is a depression that is based on the recognition of their own mortality or the loss that has been experienced. It is common for people to become sullen, silent, and isolated during this stage of grief. They may feel like nothing is worth doing because of how they feel, their diagnosis, or the loss which was experienced.
  • Acceptance. In this stage, the individual will make a decision. They will either begin preparing to confront their circumstances head-on or realize that life will continue to go on, despite what has happened.

The initial Kubler-Ross theory of grief was a description of major events in life. It was a reflection of the death of a loved one or the diagnosis of a terminal illness. After her initial publication of the DABDA process, she expanded it to include other major life events that can happen to people.

The five stages of grief could occur, according to Kubler-Ross, when someone lost their job or a source of income. It might happen because of a divorce or the ending of a long relationship. Drug addiction, the onset of a long illness, infertility, or even a long-term incarceration could also result in these stages of grief occurring.

Kubler-Ross also suggests that any traumatic event that occurs to an individual may cause feelings of grief to occur, which would initiate the DABDA process. Something as simple as a person’s favorite sports team losing an important game could cause grief.

A supported candidate losing a political election could also trigger this process.

Criticism of the Elisabeth Kubler-Ross Theory

The primary criticism of the Kubler-Ross theory of grief is that it is difficult to obtain empirical evidence to support it. The existence of the stages is difficult to demonstrate because people handle their emotions in unique ways. Some people can endure grief with great tenacity while others are completely overwhelmed by their emotions and shut down.

A person’s emotions are also directly affected by their environment. Someone in a supportive family environment with regular counseling may not endure the same severity in their stages of grief as someone trying to cope entirely on their own.

This means that helping someone who is experiencing grief creates a blurred line where the description of what is happening to them may also be part of the prescription needed to handle the difficult emotion. For example: if someone needs to confront their false reality, they must first realize that their reality is false.

Grief is never easy to endure. By recognizing this emotion and the DABDA stages, however, the Elisabeth Kubler-Ross theory suggests that it can be managed.

Theories of aging often look at the behaviors of older adults and how they are influenced by personal choices, societal pressure, and changes to socioeconomic networks. What if the way people think, behave, and act as they age were also biologically influenced? This is essentially what Thomas Kirkwood proposed in 1977 when he published his disposable soma theory of aging.

Kirkwood worked as a statistician at the time he published this initial theory of aging. He has gone on to publish several additional works regarding the science of aging through his research at the University of Newcastle. His idea is this: an organism has only a limited amount of energy, and it must be divided between reproduction and the maintenance of the rest of the organism (the soma).

Does the Human Body Budget Its Energy?

Kirkwood proposes that a human body is required to budget the amount of energy that is available to it on a daily basis. Every action taken, either voluntarily or not, has an energy expenditure. The disposable soma theory breaks down the budget line for energy distribution into three separate categories: metabolism, reproduction, and repair/maintenance.

This budget must be in place because there is a finite food supply given to the human body each day. It requires a compromise to be made so that each system does not operate at its full potential. Over time, as people age, the energy requirements for each system evolve as well. The compromises made for the three major systems shift. More energy is budgeted for repair and maintenance and the metabolism, which means less energy is available for reproduction.

Although there is individual variability in how these energy transfers occur, the compromises follow a similar curve for everyone. Over time, less energy is dedicated to reproduction. In evolutionary terms, this creates biological pressure to reproduce before a certain age.

If you’ve ever heard of someone talking about their “ticking clock” to have children, that would be a description of Kirkwood’s disposable soma theory.
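
The sketch below is a toy illustration only, not Kirkwood’s actual model: a fixed daily energy budget is split across the three categories, with reproduction receiving whatever remains after metabolism and repair are funded. The function name and all numbers are hypothetical.

```python
# Toy illustration of the disposable soma trade-off (not Kirkwood's model):
# a fixed energy budget is split across metabolism, repair/maintenance, and
# reproduction. Reproduction gets whatever is left after the other two demands.
def allocate_energy(total_budget, metabolism_demand, repair_demand):
    """Fund metabolism and repair first; the remainder goes to reproduction."""
    reproduction = max(0.0, total_budget - metabolism_demand - repair_demand)
    return {"metabolism": metabolism_demand,
            "repair": repair_demand,
            "reproduction": reproduction}

# Hypothetical numbers: the budget stays constant while repair demands rise
# with age, so the energy left for reproduction shrinks.
for age, repair in [(25, 20), (45, 35), (65, 50)]:
    print(age, allocate_energy(total_budget=100, metabolism_demand=45, repair_demand=repair))
```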

How Much Energy Goes to Reproduction?

As people age, there are several pressures placed on the reproductive system. This includes the amount of food that people eat: there is a direct correlation between lower food intake and lower energy allocated to the reproductive system. The other budgeted energy levels are also reduced, but not necessarily to the same extent.

That is why the idea that reproductive energy is the first line item the body compromises is the foundation of the disposable soma theory. With a lower caloric intake, the body’s repair and maintenance requirements are lower, but the energy budget remains proportionally the same. The same is true for the metabolic energy requirement: even when there is little to digest, the body will pull energy from stored fat reserves if needed to maintain proper function.

This theory of aging explains why people can feel pressure to have children at different times. Based on their diet, lifestyle, and other factors that affect their health, more energy may be dedicated to maintenance and repair or to metabolism. Since there is a finite budget of energy available, the body draws from reproductive energy to support the other needs.

Concerns with the Disposable Soma Theory

The idea that bodily processes begin to deteriorate as a person ages is widely accepted, yet many maintenance processes continue throughout life. Dead cells are replaced with live cells. Your fingernails continue to grow. Your hair continues to grow, and it might even start growing in places you don’t like as you age – like inside your ear canal. Wounds heal. Infections are defeated.

In the disposable soma theory, an assumption must be made for it to work properly. Organisms would be able to reduce repair or maintenance in time, but the adverse effects of that reduction would not occur until later. This creates an energy trade-off that doesn’t account for the fact that some biological needs are short-term, but others are long-term.

Your hair grows a little bit every day. Your brain cells are replaced with much less frequency. This would mean that reducing long-term maintenance resources would create very little energy transfer. It has also been shown through direct observation that some animals and people still have an increased reproductive capacity as older adults, which conflicts with the idea that there is a tradeoff which occurs between aging and reproduction.

The disposable soma theory offers a way to explain aging through a scientific process. Further research and experimentation will be required to determine how accurate it happens to be.

How humans age has always been the subject of great debate. The disengagement theory of aging proposes that as people age, they withdraw from interactions and relationships with the various systems to which they belong. The theory states that this withdrawal is inevitable and mutual.

It is one of three major psychosocial theories describing the development process of individuals as they age. The other two theories are the Activity Theory of Aging and the Continuity Theory of Aging.

First proposed in 1961, the idea was that older adults should find it acceptable, even natural, to withdraw from society. It was published in the book Growing Old, authored by Elaine Cumming and William E. Henry. What it proposes places this theory at odds with the other two major psychosocial theories of aging.

Postulates of the Disengagement Theory of Aging

Cumming and Henry propose that there are 9 postulates that describe the process of disengagement within their theory of aging.

1. Everyone expects death.
This means that older adults accept that their abilities will deteriorate over time. As a result of this deterioration, they begin to lose contact with their societal networks.

2. Fewer contacts create behavioral freedom.
When individuals reduce their interactions with societal networks, there are fewer constraints placed on them to behave in a certain way. This freedom feels liberating to the individual, which encourages it to continue happening.

3. Men are different than women.
The disengagement theory of aging suggests that women play socioemotional roles, while men play instrumental roles, and this causes disengagement differences.

4. The ego evolves as it ages.
Age-grading allows younger individuals to take over from older individuals in knowledge- and skill-based positions in society. This means older adults step aside for younger adults through the retirement process, which encourages disengagement. Instead of seeking power, the ego of an older adult evolves to seek out personal enjoyment.

5. Complete disengagement occurs when society is ready for it.
Only when society and older adults both approve of their disengagement will it occur. If society is not ready to let go of an individual, then they cannot completely disengage from their personal networks.

6. Disengagement can occur if people lose their roles.
The disengagement theory of aging suggests that a man’s central role is providing labor, while the woman’s role is family and marriage. If these roles are abandoned, then the disengagement process begins unless different roles can be assumed within their state.

7. Readiness equates to societal permission.
Readiness for disengagement occurs when older adults become aware of the scarcity of their remaining time, perceive their life space decreasing, and lose “ego energy.” Society then grants disengagement to these individuals because of the occupational system requirements of the society, differential death rates, or the nature of the family unit.

8. Relational rewards become more diverse.
By disengaging from society and the central roles they once played, people transform their relational rewards. Rewards become more diverse, shifting from the vertical rewards of hierarchy and status toward the horizontal rewards found in their remaining interpersonal relationships.

9. This theory is independent of culture.
The disengagement theory of aging is meant to apply across cultures, yet the particular form that disengagement takes is bound by the individual’s culture.

Concerns with the Disengagement Theory of Aging

Since its publication in the 1960s, the disengagement theory of aging has been on the receiving end of strong concerns regarding its validity.

One of the primary criticisms of this theory is that it is unidirectional. There is no concept of individual circumstances within this theory except for the idea that society may not allow certain people to disengage while they age because they still have contributions to be made. Those contributions are focused on the central roles that people play in this theory.

Those central roles are clearly dated by time. Men are not always the household provider and women are not always the spouse that stays home. This theory assumes that each family unit is a two-parent household with a father and a mother. There is no consideration for the single parent in this structure. One could argue that in a same-gender family unit, one person could be the “father” and the other could be the “mother” to make this theory fit, but it would be a difficult argument to make because the central roles in this theory are clearly based on gender.

The disengagement theory of aging proposes different ideas about what happens to people as they get older. It may be controversial to some, but it has also played a significant role in our current understanding of gerontology.

Founded in 2001, Theory of a Deadman is a rock band based out of Delta, British Columbia. Signed to Roadrunner Records and 604 Records, this Canadian group has so far had a total of eight Top 10 hits on the US Billboard Hot Mainstream Rock Tracks chart. This includes their two #1 hits on that chart, “Lowlife” and “Bad Girlfriend.”

Members of the band are Dave Brenner, Dean Back, Tyler Connolly, and Joey Dandeneau.

How Theory of a Deadman Started

When Nickelback frontman Chad Kroeger began 604 Records in 2001, Theory of a Deadman became the first act to sign with them. Their self-titled debut album was released in September 2002. The name of the band comes from one of the songs on their first album, which is about a man who was prepping for his own suicide.

After the song’s release, the track would be renamed “The Last Song” to avoid confusion with the band name.

The Gasoline Era of Theory of a Deadman

Gasoline was the second album released by the Canadian rockers, hitting shelves on March 29, 2005. The band then began a promotional tour with The Exies and Breaking Benjamin. The music from the album helped to propel the band into the mainstream, with songs from Gasoline appearing on video games and in World Wrestling Entertainment promotions.

During the era, Theory of a Deadman also toured with No Address and Shinedown.

“No Surprise” would be the top performing single off the album, peaking at #8 on the US main chart and at #24 on the US alternative chart. “Say Goodbye,” “Santa Monica,” and “Hello Lonely” would all be Top 30 hits for the band.

Scars and Souvenirs with Theory of a Deadman

In 2008, Dave Brenner and Theory of a Deadman released their third album, Scars and Souvenirs. Eight singles would eventually be released from this album, with guest vocalists including Robin Diaz and Chris Daughtry on some of the tracks.

This would be the album that sent the band toward even more fame, as it contained their first #1 Billboard hit. Throughout 2008, Brenner and the band would perform during the Grey Cup halftime show, be part of Crue Fest 2, and make an appearance at the Juno Awards that year.

Within 53 weeks, Scars and Souvenirs became their first album to be certified gold, with sales reaching 500,000 copies in the United States. It would also achieve a #1 peak position on the US Billboard hard rock albums chart and reach #2 on the Canadian albums chart.

Theory of a Deadman and The Truth Is Era

The fourth album from Dave Brenner and Theory of a Deadman was recorded in 2010 and released in the summer of 2011. Called The Truth Is…, the album was highlighted by “Lowlife,” the first single released. It would become the band’s second #1 hit.

With the fourth album released, Theory of a Deadman would use the year to work with Alter Bridge as co-headliners of the Carnival of Madness Tour. They would also be asked to contribute a song to the Transformers: Dark of the Moon soundtrack that year. Two additional singles were released from the album, but did not perform as well as “Lowlife.”

The Truth Is… would be a #1 album for the band on the Billboard rock, alternative, and hard rock album charts. It would also be a #2 digital album and reach #8 on the Billboard 200.

Savages, Angel, and a Change for Theory of a Deadman

In 2014, the fifth album released from the band would be Savages. Although the singles from this album underperformed for Brenner and company compared to previous albums, their song “Panic Room” would be used by World Wrestling Entertainment for one of their pay-per-view promotions that year. “Angel” would obtain a peak chart position at #2 in the United States, while peaking at #33 for the Canadian rock charts.

Although the singles for Savages did not perform as well, the album would reach #1 on both the Alternative and Hard Rock album charts. It would peak at #8 in the Billboard 200.

The following year brought some changes to Dave Brenner and Theory of a Deadman. They released their first acoustic EP in 2015, called Angel. The EP contains a cover of a Tove Lo song and four acoustic reworkings of their own songs from past albums.

Dave Brenner and Theory of a Deadman have a sixth studio album planned for 2017 and their latest releases are covers of “Shape of My Heart” and “Cold Water.”

Counterpoint, in music theory, is the relationship within a composition between voices that are independent in contour and rhythm but still interdependent harmonically. It allows two or more musical lines that can stand on their own to be combined into a composition where they all work together as a whole.

There are two species of counterpoint to consider here: first species and second species.

What Is First-Species Counterpoint?

To begin a first-species counterpoint, it is necessary to first have a cantus firmus. This is an existing melody that will be used as the basis of the composition. A single new line is composed above or below the cantus firmus. This new line is the counterpoint. It is a new line that will contain one note for every note that is already in the existing melody.

That is why this type of counterpoint is often referred to as 1:1 counterpoint. The melody and the counterpoint will be whole notes.

Beginning a first-species counterpoint means starting with a perfect consonance. If the counterpoint is below the cantus firmus, the first note should form a P1 or P8 with it. If the counterpoint is above, a P5 may also be used in addition to the P1 or P8.

A P5 cannot be used when the counterpoint is below, because the lowest sounding note would no longer be the tonic and the tonal context could be misheard by the listener.

Some may prefer to use a P12.

The final note of a first-species counterpoint should form a P1 or P8, whether it is above or below the melody. This creates a settled ending that still gives the listener a clear sense of the composition’s goal. In the penultimate measure, a major sixth is typically used when the counterpoint is above the cantus firmus and a minor third when it is below, so the line can resolve smoothly into the final interval.

The counterpoint should have its own climax and should not cross voices with the melody unless it is absolutely necessary. Any voice crossing reduces the independence of each musical line and thus undermines the effectiveness of the counterpoint in those locations.
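
A minimal sketch of the 1:1 consonance check described above, assuming both lines are given as lists of MIDI note numbers of equal length; the note values and the helper name check_first_species are hypothetical choices for illustration.

```python
# Consonant simple intervals in semitones: P1/P8, m3, M3, P5, m6, M6.
CONSONANT = {0, 3, 4, 7, 8, 9}

def check_first_species(cantus_firmus, counterpoint):
    """Return (measure, semitones, consonant?) for each 1:1 pairing."""
    if len(cantus_firmus) != len(counterpoint):
        raise ValueError("First species needs one counterpoint note per cantus firmus note")
    report = []
    for measure, (cf, cp) in enumerate(zip(cantus_firmus, counterpoint), start=1):
        interval = abs(cp - cf) % 12   # reduce compound intervals to simple ones
        report.append((measure, interval, interval in CONSONANT))
    return report

# Hypothetical example: a short cantus firmus with a counterpoint above it
# (MIDI note numbers; 62 = D4, 74 = D5).
cantus = [62, 65, 64, 62, 67, 65, 69, 67, 65, 64, 62]
counter = [74, 69, 72, 74, 71, 74, 72, 76, 74, 73, 74]

for measure, semis, ok in check_first_species(cantus, counter):
    print(f"measure {measure}: {semis} semitones -> {'consonant' if ok else 'dissonant'}")
```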

What Is Second-Species Counterpoint?

When creating a second-species counterpoint, the composer must move in half notes against the whole notes of the primary melody. In a composition in 4/4 time, the cantus firmus and first-species counterpoint would be whole notes, while the second-species counterpoint would be half notes.

That is why this type of counterpoint is often referred to as 2:1 counterpoint.

When added to the composition, the listener can pick up the differentiation that the counterpoint creates between strong and weak beats. At the same time, the composition begins to include passing-tone dissonance. The goal with this counterpoint is to add textural variety and tension to the sound, balanced so that it does not seem harsh or grating to the listener.

A second-species counterpoint must have stepwise motion and a single climax. Because there are added notes to this counterpoint, there must be small steps contained within it so that it doesn’t interfere with the leaps that the melody will be making.

This type of counterpoint will also have secondary climaxes employed throughout the composition. That allows the composer to draw certain phrases or expressions within the composition to a logical conclusion. It also helps to maintain the integrity of the lines, preserving the shape of the cantus firmus and the counterpoint so the listener feels like they are hearing a consistent thought instead of multiple tangents.

Unlike in first-species counterpoint, unisons are allowed when beginning a second-species counterpoint. It can begin with two half notes in the first bar if desired, but a standard approach is to start with a half rest followed by a single half note in the first bar. Using the rest as the initial entrance makes the two lines easier to hear separately and makes the composition easier to write.

Downbeats are always consonant in this counterpoint.

Ending a second-species counterpoint can be either two half notes or a single whole note. This depends on how the composer wants to end the piece.
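
A minimal sketch of the downbeat rule for second species, reusing the same consonance set and the same MIDI-number convention as the sketch above; the helper name check_second_species_downbeats is again a hypothetical choice.

```python
# Downbeat check for a 2:1 (second-species) counterpoint. Only downbeats must
# be consonant; weak beats may carry dissonant passing tones if approached and
# left by step (that melodic condition is not checked here).
def check_second_species_downbeats(cantus_firmus, counterpoint):
    """Return the measures whose downbeat is not a consonance."""
    if len(counterpoint) != 2 * len(cantus_firmus):
        raise ValueError("Second species needs two counterpoint notes per cantus firmus note")
    dissonant_measures = []
    for measure, cf in enumerate(cantus_firmus, start=1):
        downbeat = counterpoint[2 * (measure - 1)]
        if abs(downbeat - cf) % 12 not in {0, 3, 4, 7, 8, 9}:
            dissonant_measures.append(measure)
    return dissonant_measures
```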

Why Counterpoint Music Theory Is Important to Know

Whether you compose structured or improvisational pieces, music requires movement. Counterpoint is one effective method for creating that movement by composing two or more lines that can stand independently, but work better together.

Listeners can pick out each expression, while at the same time listening to the entire piece, and this creates a memorable experience.