What is Regularization?

A machine learning model can quickly become overfitted or underfitted during training. To prevent this, we use regularization in machine learning to fit the model appropriately to the training data so that it generalizes to unseen data. Regularization methods help us obtain a better model by lowering the likelihood of overfitting.

In this article we will look at what regularization is and the types of regularization. We will also discuss bias, variance, underfitting, and overfitting.

Bias and Variance


Bias refers to the simplifying assumptions a model makes about the target function. Bias reduces the model’s sensitivity to individual data points and can improve generalization; it also reduces training time because the learned function is simpler. High bias means the model makes stronger assumptions about the target function, which can cause it to underfit.


Variance is a type of error that arises from a model’s sensitivity to even the smallest fluctuations in the dataset. Because of this high variance, an algorithm ends up modelling the noise and outliers in the training set, a behaviour most frequently described as overfitting. In this case the model effectively memorizes every data point, so it cannot make accurate predictions when tested on a fresh dataset.

Underfitting and Overfitting

What is Underfitting?

A model is said to be underfit when it has not adequately learnt the patterns in the training data, which prevents it from generalising correctly to new data. An underfit model performs poorly and produces weak predictions even on the training data. Underfitting happens when the bias is high and the variance is low.

What is Overfitting?

A model is considered to be overfit when it performs exceptionally well on training data but poorly on test data. In this situation the machine learning model picks up the noise and idiosyncrasies of the training data, which has a detrimental impact on how well the model performs on test data. Overfitting occurs when low bias coexists with high variance.
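Both failure modes can be seen in a minimal numpy sketch on synthetic data (the sine-wave dataset and the chosen polynomial degrees are illustrative assumptions, not from the article): a degree-1 polynomial underfits a sine wave, while a degree-9 polynomial fitted to only ten noisy points drives the training error to almost zero yet blows up the test error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an underlying sine wave (a hypothetical dataset)
def make_data(n):
    x = np.linspace(0, 1, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=n)

x_train, y_train = make_data(10)
x_test, y_test = make_data(200)

def errors(degree):
    # Fit a polynomial of the given degree; return (train MSE, test MSE)
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

# degree 1: underfit (high bias)     -> high train AND test error
# degree 3: reasonable fit           -> both errors moderate
# degree 9: overfit (high variance)  -> near-zero train error, high test error
tr1, te1 = errors(1)
tr3, te3 = errors(3)
tr9, te9 = errors(9)
```

Raising the degree always lowers the training error, but past a point the test error rises again: that gap is exactly the overfitting that regularization is meant to control.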

What is Regularization in Machine Learning?

The term “regularization” describes methods for calibrating machine learning models by minimizing an adjusted (penalized) loss function, which helps avoid overfitting or underfitting.

Regularization lets us fit our machine learning model properly so that it generalizes well, which lowers the error on the test set.
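The “adjusted loss” is just the ordinary loss plus a penalty on the weights. A tiny numeric sketch (the weight vector, base loss, and lambda value below are made-up illustrative numbers) shows the two penalties discussed next:

```python
import numpy as np

# Hypothetical weight vector and base (unregularized) loss
w = np.array([0.5, -1.2, 3.0])
base_loss = 0.40
lam = 0.1  # regularization strength lambda; an illustrative value

# L1 (lasso) penalty: lambda * sum of absolute weights
l1_loss = base_loss + lam * np.sum(np.abs(w))  # 0.40 + 0.1 * 4.7  = 0.87

# L2 (ridge) penalty: lambda * sum of squared weights
l2_loss = base_loss + lam * np.sum(w ** 2)     # 0.40 + 0.1 * 10.69 = 1.469
```

The penalty grows with the size of the weights, so minimizing the adjusted loss pushes the model toward smaller, simpler weight vectors.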

Types of Regularization

The commonly used regularization techniques are:

  1. Lasso regularization (L1)
  2. Ridge regularization (L2)

Lasso regularization (L1)

  • Lasso regression is a regularization method used to lessen the model’s complexity. LASSO stands for Least Absolute Shrinkage and Selection Operator.
  • It is comparable to ridge regression, except that the penalty term uses the absolute values of the weights rather than their squares.
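Because the L1 penalty uses absolute values, lasso can push coefficients to exactly zero, effectively selecting features. Below is a minimal coordinate-descent sketch of lasso (the helper names `soft_threshold` and `lasso_cd` and the synthetic dataset are my own illustrative choices, not from the article):

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 penalty: shrink z toward 0, exactly 0 if |z| <= t
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iters=200):
    # Coordinate descent for (1/2n)||y - Xw||^2 + lam * ||w||_1
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iters):
        for j in range(d):
            # Residual with feature j's contribution removed
            residual = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ residual / n
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w

# Demo: 10 features, but only the first two truly matter
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, size=100)

w = lasso_cd(X, y, lam=0.1)
# The L1 penalty drives the irrelevant coefficients to exactly zero
```

Note that the two genuinely useful coefficients survive (slightly shrunk toward zero), while the rest are zeroed out; this sparsity is what distinguishes lasso from ridge.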

Ridge regularization (L2)

  • Ridge regression is a form of linear regression that introduces a small amount of bias in order to obtain better long-term predictions.
  • It is a regularization method employed to make the model less complex, and it also goes by the name L2 regularization.
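Unlike lasso, ridge has a simple closed-form solution, and its L2 penalty shrinks all coefficients toward zero without making them exactly zero. A short sketch (the function name `ridge_fit` and the synthetic data are my own illustrative choices):

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic data with known true weights
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, size=50)

w_small = ridge_fit(X, y, lam=0.01)   # barely regularized, close to plain least squares
w_large = ridge_fit(X, y, lam=100.0)  # heavily regularized: coefficients shrink
```

Increasing `lam` uniformly shrinks the coefficient vector, trading a little bias for lower variance, which is exactly the bias-variance trade the bullet points above describe.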

You can read my entire article on Lasso and Ridge Regularization.

If you like my article and my efforts towards the community, you can support and encourage me by simply buying me a coffee.


Well, I have good news for you: I will be bringing more articles explaining machine learning concepts and models with code, so leave a comment and tell me how excited you are about this.



Aviral Bhardwaj

One of the youngest writers and mentors in AI-ML & Technology.