Precision vs Recall

Nick Anderson
2 min read · Apr 22, 2021

I had a bit of trouble learning and deciding which classification metric to use during my last data science project, which led me to write this blog post to work out, and hopefully help others with, what the best metric for classification is…precision or recall?

Let's start off with what each one does, and then we can elaborate on which metric we should be using for different classification problems and see if we can't optimize our choice based on the data we're using.

Precision aims to answer the following question: What proportion of the positive classifications were actually correct?

Recall aims to answer the following question: What proportion of actual positives were correctly identified?

These are calculated using the following variables:

TP = True Positive

TN = True Negative

FP = False Positive

FN = False Negative

Precision formula: TP / (TP+FP)

Recall formula: TP / (TP + FN)
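The two formulas above can be sketched as a couple of small Python functions (the counts used here are made-up numbers for illustration):

```python
# Precision and recall computed directly from confusion-matrix counts.
def precision(tp, fp):
    """Of everything predicted positive, what fraction was actually positive?"""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of everything actually positive, what fraction did we catch?"""
    return tp / (tp + fn)

# Hypothetical example: 80 true positives, 20 false positives, 10 false negatives.
print(precision(80, 20))  # 0.8
print(recall(80, 10))     # 0.888...
```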

Usually, these two metrics are in a 'tug-of-war' with each other: unfortunately, as one gets better, the other tends to get worse. They each measure different things. High precision means the model makes few mistakes among the predictions it labels positive. High recall means the model misses few of the actual positives in the data.

It's good to have both strong precision and strong recall, but depending on the problem at hand, one may matter more. For example, if we're looking to identify cancer in patients, the penalty for wrongly classifying an actual cancer patient as cancer-free would be much higher than the penalty for wrongly classifying a healthy patient as having cancer. In this instance we want to avoid our model mistaking cancer for no cancer, so we would focus on recall. By focusing on recall, our model will tend to over-classify patients as having cancer based on our perceived risk/reward in identifying cancer. It's much better to raise a false alarm than to miss cancer in a patient.
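The tug-of-war shows up clearly when we sweep the classification threshold. Here is a minimal sketch, using made-up scores and labels: lowering the threshold catches more true positives (recall goes up) but also lets in more false alarms (precision goes down).

```python
# Made-up model scores and true labels (1 = positive, 0 = negative).
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.5, 0.3, 0.2, 0.2, 0.1]

def metrics_at(threshold):
    """Precision and recall when predicting positive for score >= threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

for t in (0.8, 0.5, 0.3):
    p, r = metrics_at(t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
# threshold=0.8: precision=1.00, recall=0.50
# threshold=0.5: precision=0.60, recall=0.75
# threshold=0.3: precision=0.57, recall=1.00
```

For a high-recall problem like the cancer example, we would deliberately push the threshold down and accept the drop in precision.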

Deciding which metric to use for a given classification problem may be difficult, but a useful general rule is this: recall is more important than precision when the cost of acting on a positive prediction is low but the cost of missing a true positive is high, and vice versa. With that rule in mind, you should be able to work out which metric matters more for your individual data set.
