In mathematical modeling, statistical modeling and experimental sciences, the values of dependent variables depend on the values of independent variables. The dependent variables represent the output or outcome whose variation is being studied. The independent variables represent inputs or causes, i.e., potential reasons for variation or, in the experimental setting, the variable controlled by the experimenter. Models and experiments test or determine the effects that the independent variables have on the dependent variables. Sometimes, independent variables may be included for other reasons, such as for their potential confounding effect, without a wish to test their effect directly.
In mathematics, a function is a rule for taking an input (in the simplest case, a number or set of numbers) and providing an output (which may also be a number). A symbol that stands for an arbitrary input is called an independent variable, while a symbol that stands for an arbitrary output is called a dependent variable. The most common symbol for the input is x, and the most common symbol for the output is y; the function itself is commonly written y = f(x).
It is possible to have multiple independent variables and/or multiple dependent variables. For instance, in multivariable calculus, one often encounters functions of the form z = f(x, y), where z is a dependent variable and x and y are independent variables. Functions with multiple outputs are often referred to as vector-valued functions.
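A brief Python sketch of these cases (the function names and formulas are arbitrary, chosen only for illustration):

```python
# y = f(x): one independent variable, one dependent variable.
def f(x):
    return 3 * x + 2

# z = g(x, y): two independent variables, one dependent variable.
def g(x, y):
    return x ** 2 + y ** 2

# A vector-valued function: one input, multiple outputs.
def h(t):
    return (t, t ** 2)

print(f(4))     # 14
print(g(3, 4))  # 25
print(h(2))     # (2, 4)
```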
In set theory, a function between a set X and a set Y is a subset of the Cartesian product X × Y such that every element of X appears in an ordered pair with exactly one element of Y. In this situation, a symbol representing an element of X may be called an independent variable and a symbol representing an element of Y may be called a dependent variable, such as when X is a manifold and the symbol x represents an arbitrary point in the manifold. However, many advanced textbooks do not distinguish between dependent and independent variables.
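For finite sets, this defining property can be checked directly. The following Python sketch (the helper name is_function is hypothetical) tests whether a set of ordered pairs is a function:

```python
# A function between X and Y, viewed as a set of ordered pairs: every
# element of X must appear as a first coordinate exactly once.
def is_function(pairs, X):
    firsts = [x for (x, _) in pairs]
    return all(firsts.count(x) == 1 for x in X)

X = {1, 2, 3}
good = {(1, "a"), (2, "b"), (3, "a")}  # a function from X to {"a", "b"}
bad = {(1, "a"), (1, "b"), (3, "a")}   # not a function: 1 appears twice, 2 never

print(is_function(good, X))  # True
print(is_function(bad, X))   # False
```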
Through the modeling effort performed in social sciences research using the data percolation methodology, four types of variables are used and suffice to express all combinations pertaining to the phenomena under investigation. The bonds between the independent and the dependent variables can be descriptive (structural or functional), of influence, longitudinal, or causal. Each bond is polarized, either positively or negatively: a positive bond occurs when an increase in the value of the independent variable is accompanied by an increase in the value of the dependent variable. Variables of influence can be direct or indirect, and indirect variables can be moderators or mediators. Longitudinal (or temporal) variables can be retroactive when a loop is necessary, for example in dynamic systems.
In an experiment, a variable manipulated by the experimenter is called an independent variable. The dependent variable is the event expected to change when the independent variable is manipulated.
In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as the target variable (or in some tools as the label attribute), while an independent variable may be assigned a role as a regular variable. Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data. The target variable is used in supervised learning algorithms but not in unsupervised learning.
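As a minimal sketch of this division of roles, using scikit-learn on invented data (the variables and coefficients here are purely illustrative):

```python
# The target (dependent) variable y is known for the training and test
# sets; the fitted model predicts it for other data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))  # two "regular" (independent) variables
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)  # supervised learning
print(model.score(X_test, y_test))  # fit quality on held-out targets
print(model.predict(X_test[:3]))    # predicted target values for new data
```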
In mathematical modeling, the dependent variable is studied to see if and how much it varies as the independent variables vary. In the simple stochastic linear model y_i = a + b x_i + e_i, the term y_i is the i-th value of the dependent variable and x_i is the i-th value of the independent variable. The term e_i is known as the "error" and contains the variability of the dependent variable not explained by the independent variable.
With multiple independent variables, the model is y_i = a + b_1 x_{1,i} + b_2 x_{2,i} + … + b_n x_{n,i} + e_i, where n is the number of independent variables.
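A brief Python sketch of the simple model, simulated with illustrative values a = 2 and b = 3 and estimated by ordinary least squares:

```python
# Simulate y_i = a + b x_i + e_i with a = 2, b = 3, then recover a and b.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 10, size=n)
e = rng.normal(scale=1.0, size=n)  # the "error" term e_i
y = 2.0 + 3.0 * x + e              # dependent variable

A = np.column_stack([np.ones(n), x])  # design matrix: intercept and x
(a_hat, b_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a_hat, b_hat)  # close to 2 and 3; adding more columns to A handles
                     # the multiple-regressor model in the same way
```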
In simulation, the dependent variable is changed in response to changes in the independent variables.
Depending on the context, an independent variable is sometimes called a "predictor variable", "regressor", "controlled variable", "manipulated variable", "explanatory variable", "exposure variable" (see reliability theory), "risk factor" (see medical statistics), "feature" (in machine learning and pattern recognition) or "input variable."
Depending on the context, a dependent variable is sometimes called a "response variable", "regressand", "predicted variable", "measured variable", "explained variable", "experimental variable", "responding variable", "outcome variable", "output variable" or "label".
"Explanatory variable" is preferred by some authors over "independent variable" when the quantities treated as independent variables may not be statistically independent or independently manipulable by the researcher. If the independent variable is referred to as an "explanatory variable" then the term "response variable" is preferred by some authors for the dependent variable.
"Explained variable" is preferred by some authors over "dependent variable" when the quantities treated as "dependent variables" may not be statistically dependent. If the dependent variable is referred to as an "explained variable" then the term "predictor variable" is preferred by some authors for the independent variable.
Variables may also be referred to by their form: continuous, binary/dichotomous, nominal categorical, and ordinal categorical, among others.
A variable may be thought to alter the dependent or independent variables but may not actually be the focus of the experiment, so it is kept constant or monitored to try to minimize its effect on the experiment. Such variables may be designated as a "controlled variable", "control variable", or "extraneous variable".
Extraneous variables, if included in a regression analysis as independent variables, may aid a researcher with accurate response parameter estimation, prediction, and goodness of fit, but are not of substantive interest to the hypothesis under examination. For example, in a study examining the effect of post-secondary education on lifetime earnings, some extraneous variables might be gender, ethnicity, social class, genetics, intelligence, age, and so forth. A variable is extraneous only when it can be assumed (or shown) to influence the dependent variable. If included in a regression, it can improve the fit of the model. If it is excluded from the regression and if it has a non-zero covariance with one or more of the independent variables of interest, its omission will bias the regression's result for the effect of that independent variable of interest. This effect is called confounding or omitted variable bias; in these situations, design changes and/or statistical control is necessary.
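Omitted-variable bias can be demonstrated with a small simulation; in this hedged Python sketch, the confounder z and all coefficients are invented for illustration:

```python
# z influences both x and y; omitting z biases the estimated effect of x.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
z = rng.normal(size=n)                       # extraneous/confounding variable
x = 0.8 * z + rng.normal(size=n)             # x is correlated with z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)   # true effect of x is 1.0

def ols(design, y):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

with_z = ols(np.column_stack([np.ones(n), x, z]), y)
without_z = ols(np.column_stack([np.ones(n), x]), y)
print(with_z[1])     # ~1.0: unbiased when z is controlled for
print(without_z[1])  # ~2.0: biased upward because z is omitted
```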
Extraneous variables are often classified into three types:
1. Subject variables, which are the characteristics of the individuals being studied that might affect their actions. These variables include age, gender, health status, mood, background, etc.
2. Blocking variables or experimental variables, which are characteristics of the persons conducting the experiment that might influence how a person behaves. Gender, the presence of racial discrimination, language, or other factors may qualify as such variables.
3. Situational variables, which are features of the environment in which the study or research was conducted that have a bearing on the outcome of the experiment in a negative way. Included are the air temperature, level of activity, lighting, and the time of day.
In modelling, variability that is not covered by the independent variable is designated by e_i and is known as the "residual", "side effect", "error", "unexplained share", "residual variable", or "tolerance".
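As a final illustrative sketch (again with invented coefficients), these residuals can be computed directly from a least-squares fit:

```python
# Residuals e_i = y_i - (a_hat + b_hat * x_i): the part of y's
# variability that the independent variable x does not explain.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 3.0 * x + rng.normal(size=100)

A = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ beta
print(residuals.mean())  # ~0: least squares with an intercept centers them
print(residuals.std())   # ~1: estimates the spread of the error term
```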