Accuracy, Precision, Recall, and F1 Score in Machine Learning



In machine learning, accuracy is a measure of how well a model is performing. It tells you the percentage of correct predictions out of the total predictions the model makes.

Imagine you have a simple model that predicts whether it will rain or not based on the weather features it has learned. If the model makes 10 predictions and 9 of them match what actually happened, then the accuracy of the model is 90%. This means the model is correct 90% of the time.

Accuracy is a good metric when the classes (categories) in your dataset are balanced. However, in situations where one class is much more common than the others (imbalanced dataset), accuracy might not provide a complete picture, and other metrics like precision, recall, or F1 score could be more informative.
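To see why accuracy can mislead on an imbalanced dataset, consider a hypothetical example (the numbers here are illustrative, not from the confusion matrix below): if 95 out of 100 cases are negative, a model that always predicts "negative" still scores 95% accuracy while catching zero positives.

```python
# Hypothetical imbalanced labels: 95 negatives, 5 positives
labels = [0] * 95 + [1] * 5

# A "model" that always predicts the majority class (negative)
predictions = [0] * 100

# Accuracy: fraction of predictions that match the true labels
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(accuracy)  # 0.95 -- looks strong, yet the model never finds a positive
```

Despite the high accuracy, the recall for the positive class here is 0, which is exactly the kind of blind spot precision, recall, and F1 are designed to expose.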


Let’s understand these metrics with a confusion matrix:


Suppose you have a set of 100 predictions made by a model, and you know the actual outcomes. Here's a breakdown:

True Positives (model correctly predicted positive): 70

True Negatives (model correctly predicted negative): 20

False Positives (model predicted positive, but it was actually negative): 5

False Negatives (model predicted negative, but it was actually positive): 5


Accuracy:

Now, we can calculate accuracy using the formula:

Accuracy = (TP + TN) / (TP + TN + FP + FN) = (70 + 20) / 100 = 0.90

So, the accuracy of the model is 90%, indicating that it made correct predictions for 90 out of the total 100 cases.
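Plugging the counts above into the formula can be sketched in a few lines of plain Python:

```python
# Confusion-matrix counts from the example above
tp, tn, fp, fn = 70, 20, 5, 5

# Accuracy: correct predictions over all predictions
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.9
```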


Precision:

Precision is a metric that focuses on the accuracy of positive predictions made by a model. It is calculated using the formula:

Precision = TP / (TP + FP) = 70 / (70 + 5) ≈ 0.9333

So, the precision of the model is approximately 93.33%. This means that when the model predicts a positive outcome, it is correct about 93.33% of the time. Precision is especially useful when you want to be confident that the positive predictions are accurate, even if the overall number of positive predictions is low.
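Using the same counts, the precision calculation looks like this:

```python
# True positives and false positives from the example above
tp, fp = 70, 5

# Precision: of all positive predictions, how many were correct
precision = tp / (tp + fp)
print(round(precision, 4))  # 0.9333
```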


Recall:

Recall, also known as Sensitivity or True Positive Rate, measures how many of the actual positive cases the model correctly identified. It is calculated using the formula:

Recall = TP / (TP + FN) = 70 / (70 + 5) ≈ 0.9333

So, the recall of the model is approximately 93.33%. This means the model catches about 93.33% of all actual positive cases. Recall is especially useful when missing a positive case is costly, such as in disease screening.
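With the counts from the confusion matrix, recall works out as follows:

```python
# True positives and false negatives from the example above
tp, fn = 70, 5

# Recall: of all actual positives, how many the model found
recall = tp / (tp + fn)
print(round(recall, 4))  # 0.9333
```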


F1 Score:

The F1 score is a metric that combines precision and recall into a single value, providing a balance between the two. It is the harmonic mean of precision and recall, calculated using the formula:

F1 Score = 2 × (Precision × Recall) / (Precision + Recall) = 2 × (0.9333 × 0.9333) / (0.9333 + 0.9333) ≈ 0.9333

So, the F1 score of the model is approximately 0.9333. The F1 score is useful when you want to consider both precision and recall together and find a balanced measure of a model's performance.
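Since precision and recall are both 70/75 in this example, the F1 score equals them exactly, as a quick calculation confirms:

```python
# Precision and recall from the earlier sections (both 70 / 75 here)
precision = 70 / 75
recall = 70 / 75

# F1: harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9333
```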

