The loss function
Rafael Irizarry
In this section, we describe the general approach to defining "best" in machine learning: we define a loss function, which can be applied to both categorical and continuous data.
The most commonly used loss function is the squared loss function. If $\hat{y}$ is our predictor and $y$ is the observed outcome, the squared loss function is simply:

$$(\hat{y} - y)^2$$
Because we often have a test set with many observations, say $N$, we use the mean squared error (MSE):

$$\text{MSE} = \frac{1}{N} \sum_{i=1}^N (\hat{y}_i - y_i)^2$$
In practice, we often report the root mean squared error (RMSE), which is $\sqrt{\text{MSE}}$, because it is in the same units as the outcomes. But doing the math is often easier with the MSE, and it is therefore more commonly used in textbooks, since these usually describe theoretical properties of algorithms.
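As a minimal sketch, not part of the original text, here is how the MSE and RMSE can be computed directly in R. The vectors `y` and `y_hat` below are made-up values used only for illustration:

```r
# Hypothetical observed outcomes and predictions (made-up numbers)
y     <- c(62, 68, 71, 65, 70)   # observed outcomes
y_hat <- c(64, 67, 69, 66, 72)   # predictions from some algorithm

mse  <- mean((y_hat - y)^2)  # mean squared error
rmse <- sqrt(mse)            # root mean squared error, same units as y
mse
rmse
```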
If the outcomes are binary, both RMSE and MSE are equivalent to one minus accuracy, since $(\hat{y} - y)^2$ is 0 if the prediction was correct and 1 otherwise. In general, our goal is to build an algorithm that minimizes the loss so it is as close to 0 as possible.
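To illustrate the binary case, this short sketch (with made-up 0/1 vectors) shows that the MSE computed from binary predictions equals one minus the accuracy:

```r
# Made-up binary outcomes and predictions
y     <- c(1, 0, 1, 1, 0, 1)
y_hat <- c(1, 0, 0, 1, 1, 1)

mean((y_hat - y)^2)     # MSE: each term is 0 if correct, 1 otherwise
1 - mean(y_hat == y)    # one minus accuracy: same value
```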
Because our data is usually a random sample, we can think of the MSE as a random variable, and the observed MSE can be thought of as an estimate of the expected MSE, which in mathematical notation we write like this:

$$\text{E}\left\{\frac{1}{N}\sum_{i=1}^N \left(\hat{Y}_i - Y_i\right)^2\right\}$$
This is a theoretical concept because in practice we only have one dataset to work with. But in theory, we think of having a very large number of random samples (call it $B$), apply our algorithm to each, obtain an MSE for each random sample, and think of the expected MSE as:

$$\frac{1}{B} \sum_{b=1}^B \frac{1}{N}\sum_{i=1}^N \left(\hat{y}_i^b - y_i^b\right)^2$$
with $y_i^b$ denoting the $i$th observation in the $b$th random sample and $\hat{y}_i^b$ the resulting prediction obtained from applying the exact same algorithm to the $b$th random sample. Again, in practice we only observe one random sample, so the expected MSE is only theoretical.
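When we control the data-generating process, the expected MSE can be approximated by simulation. The sketch below is purely illustrative and not from the text: it assumes a made-up model in which the outcome is a predictor plus noise, takes the "algorithm" to be using the predictor itself as the prediction, and averages the MSE over $B$ simulated random samples:

```r
set.seed(1)
B <- 1000   # number of simulated random samples
N <- 100    # observations per sample

mse_b <- replicate(B, {
  x <- rnorm(N)                 # made-up predictor
  y <- x + rnorm(N, sd = 0.5)   # made-up outcome: predictor plus noise
  y_hat <- x                    # "algorithm": predict y with x itself
  mean((y_hat - y)^2)           # MSE for this random sample
})

mean(mse_b)   # Monte Carlo approximation of the expected MSE (about 0.25)
```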
Note that there are loss functions other than the squared loss. For example, the Mean Absolute Error uses absolute values, $|\hat{Y}_i - Y_i|$, instead of the squared errors $(\hat{Y}_i - Y_i)^2$. However, in this book we focus on minimizing squared loss since it is the most widely used.
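For comparison, a short sketch (again with made-up vectors) computing the mean absolute error alongside the MSE:

```r
y     <- c(62, 68, 71, 65, 70)   # made-up observed outcomes
y_hat <- c(64, 67, 69, 66, 72)   # made-up predictions

mean(abs(y_hat - y))   # mean absolute error (MAE)
mean((y_hat - y)^2)    # mean squared error (MSE), for comparison
```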