In statistics, a mediation model is one that seeks to identify and explain the mechanism or process that underlies an observed relationship between an independent variable and a dependent variable via the inclusion of a third hypothetical variable, known as a mediator variable (also a mediating variable, intermediary variable, or intervening variable). Rather than a direct causal relationship between the independent variable and the dependent variable, a mediation model proposes that the independent variable influences the (non-observable) mediator variable, which in turn influences the dependent variable. Thus, the mediator variable serves to clarify the nature of the relationship between the independent and dependent variables.
Mediation analyses are employed to understand a known relationship by exploring the underlying mechanism or process by which one variable influences another through a mediator variable. Mediation analysis facilitates a better understanding of the relationship between the independent and dependent variables when these variables appear not to have a definite connection. Mediator variables are hypothetical constructs: they are studied by means of operational definitions and have no existence apart from those definitions.
Baron and Kenny (1986)  laid out several requirements that must be met to form a true mediation relationship. They are outlined below using a real-world example. See the diagram above for a visual representation of the overall mediating relationship to be explained.
Step 1: Independent variable → dependent variable
Step 2: Independent variable → mediator
Step 3: Mediator → dependent variable (controlling for the independent variable)
The following example, drawn from Howell (2009), explains each step of Baron and Kenny's requirements to understand further how a mediation effect is characterized. Step 1 and step 2 use simple regression analysis, whereas step 3 uses multiple regression analysis.
Step 1: How you were parented → confidence in your own parenting abilities.
Step 2: How you were parented → feelings of competence and self-esteem.
Step 3: Feelings of competence and self-esteem → confidence in your own parenting abilities (controlling for how you were parented).
Such findings would lead to the conclusion that your feelings of competence and self-esteem mediate the relationship between how you were parented and how confident you feel about parenting your own children.
Note: If step 1 does not yield a significant result, one may still have grounds to move to step 2. Sometimes there actually is a significant relationship between the independent and dependent variables, but because of small sample sizes or other extraneous factors, there may not be enough statistical power to detect the effect that actually exists (see Shrout & Bolger, 2002 for more information).
In the diagram shown above, the indirect effect is the product of path coefficients "A" and "B". The direct effect is the coefficient " C' ". The direct effect measures the extent to which the dependent variable changes when the independent variable increases by one unit and the mediator variable remains unaltered. In contrast, the indirect effect measures the extent to which the dependent variable changes when the independent variable is held fixed and the mediator variable changes by the amount it would have changed had the independent variable increased by one unit. In linear systems, the total effect is equal to the sum of the direct and indirect effects (C' + AB in the model above). In nonlinear models, the total effect is not generally equal to the sum of the direct and indirect effects, but to a modified combination of the two.
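The product-of-coefficients decomposition above can be illustrated with ordinary least squares on simulated data; a minimal sketch (the data-generating coefficients are illustrative, not from any real study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data with a direct X -> Y path and an X -> M -> Y path
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)            # true path A = 0.5
Y = 0.3 * X + 0.7 * M + rng.normal(size=n)  # true C' = 0.3, B = 0.7

def ols(y, *cols):
    """Least-squares fit with an intercept; returns the slope coefficients."""
    Z = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1:]

a, = ols(M, X)             # path A: regress the mediator on X
c_prime, b = ols(Y, X, M)  # paths C' and B: regress Y on X and M together
c, = ols(Y, X)             # total effect C: regress Y on X alone

print(f"direct C' = {c_prime:.3f}, indirect A*B = {a * b:.3f}, total C = {c:.3f}")
```

In this linear, single-mediator case the OLS estimates satisfy the identity C = C' + A·B exactly, mirroring the total-effect decomposition described above.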
A mediator variable can either account for all or some of the observed relationship between two variables.
Maximum evidence for mediation, also called full mediation, would occur if inclusion of the mediation variable drops the relationship between the independent variable and dependent variable (see pathway c in diagram above) to zero. This rarely, if ever, occurs. The most likely event is that c becomes a weaker, yet still significant path with the inclusion of the mediation effect.
Partial mediation maintains that the mediating variable accounts for some, but not all, of the relationship between the independent variable and dependent variable. Partial mediation implies that there is not only a significant relationship between the mediator and the dependent variable, but also some direct relationship between the independent and dependent variable.
In order for either full or partial mediation to be established, the reduction in variance explained by the independent variable must be significant as determined by one of several tests, such as the Sobel test. The effect of an independent variable on the dependent variable can become nonsignificant when the mediator is introduced simply because a trivial amount of variance is explained (i.e., not true mediation). Thus, it is imperative to show a significant reduction in variance explained by the independent variable before asserting either full or partial mediation. It is possible to have statistically significant indirect effects in the absence of a total effect. This can be explained by the presence of several mediating paths that cancel each other out, and become noticeable when one of the cancelling mediators is controlled for. This implies that the terms 'partial' and 'full' mediation should always be interpreted relative to the set of variables that are present in the model. In all cases, the operation of "fixing a variable" must be distinguished from that of "controlling for a variable," which has been inappropriately used in the literature. The former stands for physically fixing, while the latter stands for conditioning on, adjusting for, or adding to the regression model. The two notions coincide only when all error terms (not shown in the diagram) are statistically uncorrelated. When errors are correlated, adjustments must be made to neutralize those correlations before embarking on mediation analysis (see Bayesian Networks).
As mentioned above, Sobel's test is performed to determine if the relationship between the independent variable and dependent variable has been significantly reduced after inclusion of the mediator variable. In other words, this test assesses whether a mediation effect is significant. It examines the relationship between the independent variable and the dependent variable compared to the relationship between the independent variable and dependent variable including the mediation factor.
The Sobel test is more accurate than the Baron and Kenny steps explained above; however, it has low statistical power. As such, large sample sizes are required in order to have sufficient power to detect significant effects. This is because the key assumption of Sobel's test is the assumption of normality. Because Sobel's test evaluates a given sample against the normal distribution, small sample sizes and skewness of the sampling distribution can be problematic (see Normal distribution for more details). Thus, the rule of thumb suggested by MacKinnon et al. (2002) is that a sample size of 1000 is required to detect a small effect, a sample size of 100 is sufficient to detect a medium effect, and a sample size of 50 is required to detect a large effect.
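The Sobel statistic itself is straightforward to compute from the two path estimates and their standard errors; a minimal sketch (the coefficient and standard-error values below are hypothetical, for illustration only):

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel z-statistic for the indirect effect a*b.

    a, se_a: coefficient and standard error for the IV -> mediator path.
    b, se_b: coefficient and standard error for the mediator -> DV path
             (taken from the regression that also includes the IV).
    """
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical path estimates
z = sobel_z(a=0.50, se_a=0.10, b=0.40, se_b=0.12)
print(round(z, 3))  # compare |z| against 1.96 for a two-tailed test at alpha = .05
```

Because the statistic is referred to the normal distribution, the small-sample caveats discussed above apply directly to this computation.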
The bootstrapping method provides some advantages over the Sobel test, primarily an increase in power. The Preacher and Hayes bootstrapping method is a non-parametric test (see Non-parametric statistics for a discussion of non-parametric tests and their power). As such, the bootstrap method does not rely on the assumption of normality and is therefore recommended for small sample sizes. Bootstrapping involves repeatedly and randomly sampling observations with replacement from the data set to compute the desired statistic in each resample. Hundreds or thousands of bootstrap resamples provide an approximation of the sampling distribution of the statistic of interest. Hayes offers a macro <http://www.afhayes.com/> that performs bootstrapping directly within SPSS, a computer program used for statistical analyses. This method provides point estimates and confidence intervals by which one can assess the significance or nonsignificance of a mediation effect. Point estimates reveal the mean over the number of bootstrapped samples, and if zero does not fall within the resulting confidence interval, one can confidently conclude that there is a significant mediation effect to report.
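The percentile-bootstrap procedure just described can be sketched in a few lines. This is a minimal illustration on simulated data, not the Preacher and Hayes macro itself; the data-generating coefficients and the resample count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Simulated small sample: X -> M -> Y with a direct X -> Y path
X = rng.normal(size=n)
M = 0.4 * X + rng.normal(size=n)
Y = 0.5 * M + 0.2 * X + rng.normal(size=n)

def indirect_effect(x, m, y):
    """a*b estimated from two OLS fits: M ~ X, then Y ~ X + M."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

boot = []
for _ in range(2000):                     # resample observations with replacement
    idx = rng.integers(0, n, size=n)
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile confidence interval
print(f"indirect effect: {indirect_effect(X, M, Y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

If zero lies outside the interval [lo, hi], one would report a significant mediation effect, exactly as described above.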
As outlined above, there are a few different options one can choose from to evaluate a mediation model.
Bootstrapping is becoming the most popular method of testing mediation because it does not require the normality assumption to be met, and because it can be effectively utilized with smaller sample sizes (N < 25). However, mediation continues to be most frequently determined using the logic of Baron and Kenny or the Sobel test. It is becoming increasingly difficult to publish tests of mediation based purely on the Baron and Kenny method, or tests that make distributional assumptions such as the Sobel test. Thus, it is important to consider your options when choosing which test to conduct.
While the concept of mediation as defined within psychology is theoretically appealing, the methods used to study mediation empirically have been challenged by statisticians and epidemiologists, and have since been given a formal interpretation.
(1) Experimental-causal-chain design
An experimental-causal-chain design is used when the proposed mediator is experimentally manipulated. Such a design implies that one manipulates some controlled third variable that they have reason to believe could be the underlying mechanism of a given relationship.
(2) Measurement-of-mediation design
A measurement-of-mediation design can be conceptualized as a statistical approach. Such a design implies that one measures the proposed intervening variable and then uses statistical analyses to establish mediation. This approach does not involve manipulation of the hypothesized mediating variable, but only involves measurement.
Experimental approaches to mediation must be carried out with caution. First, it is important to have strong theoretical support for the exploratory investigation of a potential mediating variable. A criticism of a mediation approach rests on the ability to manipulate and measure a mediating variable. Thus, one must be able to manipulate the proposed mediator in an acceptable and ethical fashion. As such, one must be able to measure the intervening process without interfering with the outcome. One must also be able to establish the construct validity of the manipulation. One of the most common criticisms of the measurement-of-mediation approach is that it is ultimately a correlational design. Consequently, it is possible that some other third variable, independent from the proposed mediator, could be responsible for the proposed effect. However, researchers have worked hard to provide counter-evidence to this criticism. Specifically, the following counter-arguments have been put forward:
(1) Temporal precedence. For example, if the independent variable precedes the dependent variable in time, this would provide evidence suggesting a directional, and potentially causal, link from the independent variable to the dependent variable.
(2) Nonspuriousness and/or no confounds. For example, should one identify other third variables and demonstrate that they do not alter the relationship between the independent variable and the dependent variable, one would have a stronger argument for the mediation effect. See Other third variables below.
Mediation can be an extremely useful and powerful statistical technique; however, it must be used properly. It is important that the measures used to assess the mediator and the dependent variable are theoretically distinct and that the independent variable and mediator do not interact. Should there be an interaction between the independent variable and the mediator, one would have grounds to investigate moderation.
In experimental studies, there is a special concern about aspects of the experimental manipulation or setting that may account for study effects, rather than the motivating theoretical factor. Any of these problems may produce spurious relationships between the independent and dependent variables as measured. Ignoring a confounding variable may bias empirical estimates of the causal effect of the independent variable.
In general, the omission of suppressors or confounders will lead to either an underestimation or an overestimation of the effect of X on Y, thereby either reducing or artificially inflating the magnitude of a relationship between two variables.
Mediation and moderation can co-occur in statistical models. It is possible to mediate moderation and moderate mediation.
Moderated mediation is when the effect of the treatment A on the mediator and/or the partial effect B on the dependent variable depend in turn on levels of another variable (moderator). Essentially, in moderated mediation, mediation is first established, and then one investigates if the mediation effect that describes the relationship between the independent variable and dependent variable is moderated by different levels of another variable (i.e., a moderator). This definition has been outlined by Muller, Judd, and Yzerbyt (2005) and Preacher, Rucker, and Hayes (2007).
There are five possible models of moderated mediation, as illustrated in the diagrams below.
Mediated moderation is a variant of both moderation and mediation. This is where there is initially overall moderation, and the direct effect of the moderator variable on the outcome is mediated. The main difference between mediated moderation and moderated mediation is that for the former there is initial (overall) moderation and this effect is mediated, whereas for the latter there is no overall moderation but either the effect of the treatment on the mediator (path A) is moderated or the effect of the mediator on the outcome (path B) is moderated.
In order to establish mediated moderation, one must first establish moderation, meaning that the direction and/or the strength of the relationship between the independent and dependent variables (path C) differs depending on the level of a third variable (the moderator variable). Researchers next look for the presence of mediated moderation when they have a theoretical reason to believe that there is a fourth variable that acts as the mechanism or process that causes the relationship between the independent variable and the moderator (path A) or between the moderator and the dependent variable (path C).
The following is a published example of mediated moderation in psychological research. Participants were presented with an initial stimulus (a prime) that made them think of morality or made them think of might. They then participated in the Prisoner's Dilemma Game (PDG), in which participants pretend that they and their partner in crime have been arrested, and they must decide whether to remain loyal to their partner or to compete with their partner and cooperate with the authorities. The researchers found that prosocial individuals were affected by the morality and might primes, whereas proself individuals were not. Thus, social value orientation (proself vs. prosocial) moderated the relationship between the prime (independent variable: morality vs. might) and the behaviour chosen in the PDG (dependent variable: competitive vs. cooperative).
The researchers next looked for the presence of a mediated moderation effect. Regression analyses revealed that the type of prime (morality vs. might) mediated the moderating relationship of participants' social value orientation on PDG behaviour. Prosocial participants who experienced the morality prime expected their partner to cooperate with them, so they chose to cooperate themselves. Prosocial participants who experienced the might prime expected their partner to compete with them, which made them more likely to compete with their partner and cooperate with the authorities. In contrast, participants with a pro-self social value orientation always acted competitively.
Muller, Judd, and Yzerbyt (2005) outline three fundamental models that underlie both moderated mediation and mediated moderation. Mo represents the moderator variable(s), Me represents the mediator variable(s), and εi represents the measurement error of each regression equation.
Step 1: Moderation of the relationship between the independent variable (X) and the dependent variable (Y), also called the overall treatment effect (path C in the diagram).
Step 2: Moderation of the relationship between the independent variable and the mediator (path A).
Step 3: Moderation of both the residual relationship between the independent and dependent variables (path C′) and the relationship between the mediator and the dependent variable (path B).
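The three steps above are commonly written as moderated regression equations. The following is a sketch in the spirit of Muller, Judd, and Yzerbyt's (2005) notation; the coefficient labels βij are illustrative rather than the authors' exact subscripts:

```latex
% Step 1: moderation of the overall treatment effect (path C)
Y  = \beta_{10} + \beta_{11} X + \beta_{12} Mo + \beta_{13} (X \cdot Mo) + \varepsilon_{1}

% Step 2: moderation of the treatment effect on the mediator (path A)
Me = \beta_{20} + \beta_{21} X + \beta_{22} Mo + \beta_{23} (X \cdot Mo) + \varepsilon_{2}

% Step 3: moderation of the residual direct effect (path C') and/or of the
% mediator's effect on the outcome (path B)
Y  = \beta_{30} + \beta_{31} X + \beta_{32} Mo + \beta_{33} (X \cdot Mo)
   + \beta_{34} Me + \beta_{35} (Me \cdot Mo) + \varepsilon_{3}
```

In each equation, a significant interaction coefficient (β13, β23, β33, or β35) signals moderation of the corresponding path.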
Mediation analysis quantifies the extent to which a variable participates in the transmittance of change from a cause to its effect. It is inherently a causal notion, hence it cannot be defined in statistical terms. Traditionally, however, the bulk of mediation analysis has been conducted within the confines of linear regression, with statistical terminology masking the causal character of the relationships involved. This led to difficulties, biases, and limitations that have been alleviated by modern methods of causal analysis, based on causal diagrams and counterfactual logic.
The source of these difficulties lies in defining mediation in terms of changes induced by adding a third variable into a regression equation. Such statistical changes are epiphenomena which sometimes accompany mediation but, in general, fail to capture the causal relationships that mediation analysis aims to quantify.
The basic premise of the causal approach is that it is not always appropriate to "control" for the mediator M when we seek to estimate the direct effect of X on Y (see the figure above). The classical rationale for "controlling" for M is that, if we succeed in preventing M from changing, then whatever changes we measure in Y are attributable solely to variations in X, and we are justified then in proclaiming the effect observed as the "direct effect of X on Y". Unfortunately, "controlling for M" does not physically prevent M from changing; it merely narrows the analyst's attention to cases of equal M values. Moreover, the language of probability theory does not possess the notation to express the idea of "preventing M from changing" or "physically holding M constant". The only operator probability provides is "conditioning", which is what we do when we "control" for M, or add M as a regressor in the equation for Y. The result is that, instead of physically holding M constant (say at M = m) and comparing Y for units under X = 1 to those under X = 0, we allow M to vary but ignore all units except those in which M achieves the value M = m. These two operations are fundamentally different, and yield different results, except in the case of no omitted variables.
To illustrate, assume that the error terms of M and Y are correlated. Under such conditions, the structural coefficients B and A (between M and Y and between Y and X) can no longer be estimated by regressing Y on X and M. In fact, the regression slopes may both be nonzero even when C is zero. This has two consequences. First, new strategies must be devised for estimating the structural coefficients A, B, and C. Second, the basic definitions of direct and indirect effects must go beyond regression analysis and should invoke an operation that mimics "fixing M", rather than "conditioning on M".
Such an operator, denoted do(M = m), was defined in Pearl (1994), and it operates by removing the equation of M and replacing it by a constant m. For example, if the basic mediation model consists of the equations:

X = f(ε1),  M = g(X, ε2),  Y = h(X, M, ε3),

then after applying the operator do(M = m) the model becomes:

X = f(ε1),  M = m,  Y = h(X, m, ε3),

and after applying the operator do(X = x) the model becomes:

X = x,  M = g(x, ε2),  Y = h(x, M, ε3),
where the functions f and g, as well as the distributions of the error terms ε1 and ε3, remain unaltered. If we further rename the variables M and Y resulting from do(X = x) as M(x) and Y(x), respectively, we obtain what came to be known as "potential outcomes" or "structural counterfactuals". These new variables provide convenient notation for defining direct and indirect effects. In particular, four types of effects have been defined for the transition from X = 0 to X = 1:
(a) Total effect: TE = E[Y(1) − Y(0)]
(b) Controlled direct effect: CDE(m) = E[Y(1, m) − Y(0, m)]
(c) Natural direct effect: NDE = E[Y(1, M(0)) − Y(0, M(0))]
(d) Natural indirect effect: NIE = E[Y(0, M(1)) − Y(0, M(0))]
where E[·] stands for expectation taken over the error terms.
These effects have the following interpretations:
(a) TE measures the expected increase in Y as X changes from 0 to 1, while the mediator is allowed to track the change in X naturally.
(b) CDE measures the expected increase in Y as X changes from 0 to 1, while the mediator is fixed at a prespecified level M = m uniformly over the entire population.
(c) NDE measures the expected increase in Y as X changes from 0 to 1, while the mediator is set to whatever value it would have attained (for each individual) prior to the change, i.e., under X = 0.
(d) NIE measures the expected increase in Y when X is held constant at X = 0, and M changes to whatever value it would have attained (for each individual) under X = 1.
A controlled version of the indirect effect does not exist because there is no way of disabling the direct effect by fixing a variable to a constant.
According to these definitions, the total effect can be decomposed as the sum

TE = NDE − NIEr,
where NIEr stands for the reverse transition, from X = 1 to X = 0; it becomes additive in linear systems, where reversal of transitions entails sign reversal.
The power of these definitions lies in their generality; they are applicable to models with arbitrary nonlinear interactions, arbitrary dependencies among the disturbances, and both continuous and categorical variables.
In linear analysis, all effects are determined by sums of products of structural coefficients, giving (in the model above)

TE = C′ + AB,  NDE = C′,  NIE = AB.
Therefore, all effects are estimable whenever the model is identified. In non-linear systems, more stringent conditions are needed for estimating the direct and indirect effects. For example, if no confounding exists (i.e., ε1, ε2, and ε3 are mutually independent), the following formulas can be derived:

TE = E[Y | X = 1] − E[Y | X = 0]
NDE = Σm [E[Y | X = 1, M = m] − E[Y | X = 0, M = m]] P(M = m | X = 0)
NIE = Σm E[Y | X = 0, M = m] [P(M = m | X = 1) − P(M = m | X = 0)]
The last two equations are called Mediation Formulas and have become the target of estimation in many studies of mediation. They give distribution-free expressions for direct and indirect effects and demonstrate that, despite the arbitrary nature of the error distributions and the functions f, g, and h, mediated effects can nevertheless be estimated from data using regression. The analyses of moderated mediation and mediating moderators fall out as special cases of causal mediation analysis, and the Mediation Formulas identify how various interaction coefficients contribute to the necessary and sufficient components of mediation.
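For binary X and M, the Mediation Formulas reduce to sums over m ∈ {0, 1}. A minimal sketch with an illustrative joint distribution (all probabilities and expectations below are hypothetical):

```python
# Hypothetical conditional quantities for binary X and M
p_m1_given_x = {0: 0.3, 1: 0.7}           # P(M = 1 | X = x)
e_y = {(0, 0): 0.2, (0, 1): 0.5,          # E[Y | X = x, M = m]
       (1, 0): 0.4, (1, 1): 0.8}

def p_m(m, x):
    """P(M = m | X = x) for binary M."""
    return p_m1_given_x[x] if m == 1 else 1 - p_m1_given_x[x]

# Mediation Formulas under no confounding
nde = sum((e_y[1, m] - e_y[0, m]) * p_m(m, 0) for m in (0, 1))
nie = sum(e_y[0, m] * (p_m(m, 1) - p_m(m, 0)) for m in (0, 1))
te  = (sum(e_y[1, m] * p_m(m, 1) for m in (0, 1))
       - sum(e_y[0, m] * p_m(m, 0) for m in (0, 1)))

print(f"NDE = {nde:.2f}, NIE = {nie:.2f}, TE = {te:.2f}")
# Note that TE need not equal NDE + NIE outside linear systems.
```

With these numbers, NDE = 0.23, NIE = 0.12, and TE = 0.39, so TE ≠ NDE + NIE, illustrating the non-additive decomposition discussed above.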
Assume the model takes the form

M = a0 + a1X + ε2
Y = b0 + b1X + b2M + b3X·M + ε3

where the parameter b3 quantifies the degree to which M modifies the effect of X on Y. Even when all parameters are estimated from data, it is still not obvious what combinations of parameters measure the direct and indirect effects of X on Y or, more practically, how to assess the fraction of the total effect that is explained by mediation and the fraction that is owed to mediation. In linear analysis (b3 = 0), the former fraction is captured by the product a1b2/TE, the latter by the difference (TE − b1)/TE, and the two quantities coincide. In the presence of interaction, however, each fraction demands a separate analysis, as dictated by the Mediation Formula, which yields:

NDE = b1 + b3a0
NIE = a1b2
TE = b1 + b3(a0 + a1) + a1b2 = NDE + a1(b2 + b3)

Thus, the fraction of output response for which mediation would be sufficient is

NIE/TE = a1b2/TE,

while the fraction for which mediation would be necessary is

1 − NDE/TE = a1(b2 + b3)/TE.

These fractions involve non-obvious combinations of the model's parameters and can be constructed mechanically with the help of the Mediation Formula. Significantly, due to interaction, a direct effect can be sustained even when the parameter b1 vanishes and, moreover, a total effect can be sustained even when both the direct and indirect effects vanish. This illustrates that estimating parameters in isolation tells us little about the effect of mediation and, more generally, that mediation and moderation are intertwined and cannot be assessed separately.
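A numerical sketch makes the interaction point concrete. Assume a linear model with an X·M interaction, M = a0 + a1X + ε2 and Y = b0 + b1X + b2M + b3X·M + ε3, with mean-zero errors; all coefficient values below are hypothetical:

```python
# Hypothetical structural coefficients; note b1 = 0 (no "pure" direct coefficient)
a0, a1 = 0.5, 0.4
b0, b1, b2, b3 = 1.0, 0.0, 0.3, 0.6

# Natural effects for the transition X = 0 -> X = 1 (errors have mean zero)
nde = b1 + b3 * a0                     # direct effect at M's natural X = 0 level
nie = a1 * b2                          # indirect effect through M, with X held at 0
te  = b1 + b3 * (a0 + a1) + a1 * b2    # total effect = NDE + a1*(b2 + b3)

print(f"NDE = {nde:.2f}, NIE = {nie:.2f}, TE = {te:.2f}")
# NDE = 0.30 even though b1 = 0: the interaction alone sustains a direct effect.
```

Here the sufficient fraction is NIE/TE = 0.12/0.66 while the necessary fraction is 1 − NDE/TE = 0.36/0.66, so the two analyses genuinely diverge once interaction is present.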
As of 19 June 2014, this article is derived in whole or in part from Causal Analysis in Theory and Practice. The copyright holder has licensed the content in a manner that permits reuse under CC BY-SA 3.0 and GFDL. All relevant terms must be followed.