Article by Khalid Ahmed
Edited by Shreeya Jain
Reviewed by Dheeraj Vaidya, CFA, FRM

What Is Overfitting?

Overfitting refers to a scenario in machine learning (ML) where a model becomes too closely aligned with the data it was trained on, to the point that it performs poorly on new, unseen data. Addressing overfitting aims to make the model capable of generalizing to fresh data rather than remaining overly specialized to the training set.



Overfitting arises in machine learning tasks such as natural language processing and image recognition. Once it is addressed, these models can predict new data more accurately and make sound forecasts in real-world applications. It occurs when the dataset is too small, the data contains a large quantity of irrelevant information, the model is trained too long on a particular dataset, or the model is more complex than the problem requires.

Key Takeaways

  • Overfitting is a term used in machine learning (ML) that describes when a model becomes overly complex and is too closely fitted to the training data.
  • The primary objective of addressing overfitting is to help the model achieve better generalization performance on new data. 
  • One can detect it by monitoring training and validation losses, observing the learning curve, subjecting the model to cross-validation, and visually inspecting whether the predictions fit the training data suspiciously closely.
  • One can use the following techniques to prevent overfitting in machine learning: regularization using L1 and L2, cross-validation, early stopping, data augmentation, dropout, and feature selection.

Overfitting Explained

Overfitting occurs when a machine learning model becomes too specialized in the training data and fails to generalize well to new, unseen data. In neural networks, this happens when the model places too much importance on unimportant information in the training data. As a result, the model struggles to make accurate forecasts on fresh data because it fails to separate noise from the relevant signal that forms the underlying pattern.

Overfitting could happen due to the following reasons:

  • The training data for the model is unclean and contains significant noise.
  • The training dataset is too small.
  • The model is built using only a portion of the available data, which does not accurately reflect the whole dataset.

Thus, one can detect an overfitting model by examining validation metrics such as accuracy and loss (for example, in decision trees). Usually, these metrics improve up to a certain point, after which they either start deteriorating or plateau as overfitting sets in. The model strives to achieve an optimal fit, and once that point is passed, the trend of these metrics begins to decline or flatten. Hence, finding the right balance between model complexity and the available training data is essential.
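The train/validation gap described above can be made concrete with a small sketch. This example uses scikit-learn and synthetic data (both assumptions, not part of the article): an unconstrained decision tree memorizes noisy training labels, so its training accuracy is near-perfect while its test accuracy lags behind.

```python
# Illustrative sketch (assumed scikit-learn setup): detecting overfitting
# by comparing training and test accuracy of an over-complex decision tree.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, deliberately noisy data: 20% of labels are flipped.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree grows until it memorizes every training point.
deep_tree = DecisionTreeClassifier(max_depth=None, random_state=0)
deep_tree.fit(X_train, y_train)

train_acc = deep_tree.score(X_train, y_train)
test_acc = deep_tree.score(X_test, y_test)
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
# A large gap (train near 1.0, test much lower) is the overfitting signal.
```

The gap between the two printed numbers, rather than either number alone, is what indicates overfitting.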

Therefore, to address overfitting in neural networks, several techniques can be employed:

  • Regularization
  • Dropout
  • Early stopping
  • Architecture modification

Hence, applying these techniques helps mitigate overfitting in neural networks, allowing them to generalize better and make accurate predictions on new and unseen data.
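Two of the techniques listed above, regularization and early stopping, can be sketched together. This is a minimal illustration using scikit-learn's `MLPClassifier` on synthetic data (both are assumptions; the article names no library or dataset), not a production configuration.

```python
# Hedged sketch: L2 regularization (alpha) plus early stopping in a small
# neural network, using scikit-learn's MLPClassifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = MLPClassifier(
    hidden_layer_sizes=(64, 64),
    alpha=1e-3,              # L2 penalty term added to the loss (regularization)
    early_stopping=True,     # hold out part of the data as a validation set
    validation_fraction=0.1, # 10% of training data used for the validation score
    n_iter_no_change=10,     # stop after 10 epochs without validation improvement
    max_iter=500,
    random_state=0,
)
model.fit(X, y)
print("epochs actually run:", model.n_iter_)  # typically fewer than max_iter
```

With `early_stopping=True`, training halts once the held-out validation score stops improving, before the network can memorize the training set.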



Examples

Let us use a few examples to understand the topic.

Example #1

Suppose Jack is a real estate agent trying to predict house prices based on features such as size, number of bedrooms, and location. Therefore, he decides to use a machine learning algorithm to develop a predictive model.

Hence, he collects a dataset of 100 houses with their corresponding features and prices. Then, he splits the data into a training set (80%) and a test set (20%) for evaluation. 

As he trains the model, he notices it achieves near-perfect accuracy on the training set. The model captures all the intricacies and specific details of the training data, including the noise and outliers.

However, when he evaluates the model on the test set, he finds that its performance is significantly worse: the model fails to accurately predict the prices of houses it has never seen before. His model has overfit the training data, meaning it has learned the specific peculiarities and noise in that dataset.

Upon further investigation, he discovers that the model is too complex for his limited training data. As a result, it fails to generalize to new, unseen houses.

To address this overfitting issue, he can take several steps. For instance:

  1. Feature selection: Choose the most relevant features with a stronger correlation with house prices rather than including every available feature.
  2. Regularization: Apply regularization techniques, such as L1 or L2 regularization.
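Step 2 above can be sketched with ridge (L2) regression. The data here is synthetic and the library (scikit-learn) and penalty strength are assumptions for illustration; Jack's actual features and prices are not reproduced. The penalty shrinks the coefficients on irrelevant features, which is the mechanism that curbs overfitting.

```python
# Hypothetical sketch of L2 (ridge) regularization on synthetic
# "house-price" data: many features, only two of which actually matter.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 100                                    # 100 houses, as in the example
X = rng.normal(size=(n, 30))               # 30 features, most irrelevant
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=2.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

plain = LinearRegression().fit(X_tr, y_tr)
ridge = Ridge(alpha=10.0).fit(X_tr, y_tr)  # penalty shrinks noisy coefficients

print("plain R^2 on test:", round(plain.score(X_te, y_te), 3))
print("ridge R^2 on test:", round(ridge.score(X_te, y_te), 3))
print("coefficient norm, plain vs ridge:",
      round(np.linalg.norm(plain.coef_), 2),
      round(np.linalg.norm(ridge.coef_), 2))
```

Ridge is mathematically guaranteed to produce coefficients with a smaller L2 norm than plain least squares; on noisy data with many irrelevant features, that shrinkage usually translates into better test-set performance.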

Example #2

Suppose Janet is a quantitative analyst at AQR Capital Inc. working on developing a stock trading strategy. She has historical stock price data for a particular company and decides to build a model to predict the future price movements of its stock.

Therefore, Janet develops a complex algorithm incorporating numerous technical indicators, such as moving averages, the relative strength index (RSI), and the stochastic oscillator. She trains the model on the historical data and fine-tunes it to achieve high returns on the training set.

Excited by the model's impressive performance on the training data, she decides to deploy it in real-world trading. However, when she starts using the model to make actual trades, Janet notices that the strategy consistently underperforms and fails to generate the expected profits.

Upon investigation, she realizes that the model has overfitted the historical data. It has become too specialized in capturing the specific price patterns, noise, and anomalies in the training set.

In this case, overfitting in the statistical model has led to poor performance in real-world trading. The model's excessive complexity and its tendency to fit noise and random fluctuations in the historical data have undermined its effectiveness in capturing the genuine patterns driving stock price movements.

How To Detect It?

Several methods can be used to detect overfitting:

  • Monitor all losses; overfitting is indicated when the training loss decreases while the validation loss increases.
  • Observe the learning curve closely, as divergence between the validation and training curves signifies overfitting.
  • Use cross-validation; a model that scores well on its training folds but poorly on held-out folds is overfitting.
  • Lastly, inspect the model's predictions visually; if the model fits the training data too closely, it indicates overfitting.
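The learning-curve divergence described in the list above can be measured directly. This sketch uses scikit-learn's `validation_curve` on synthetic noisy data (both assumptions): as tree depth grows, training accuracy keeps rising while the cross-validated score stalls.

```python
# Sketch (assumed synthetic data): measuring train/validation divergence
# with scikit-learn's validation_curve as model complexity grows.
from sklearn.datasets import make_classification
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, flip_y=0.2,
                           random_state=0)

depths = [1, 3, 5, 10, 20]
train_scores, val_scores = validation_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    param_name="max_depth", param_range=depths, cv=5)

# Average across the 5 cross-validation folds for each depth.
for d, tr, va in zip(depths,
                     train_scores.mean(axis=1),
                     val_scores.mean(axis=1)):
    print(f"max_depth={d:2d}  train={tr:.2f}  validation={va:.2f}")
# As depth grows, train accuracy keeps rising while validation accuracy
# plateaus or falls -- the widening gap is the overfitting signal.
```

In a real workflow these two curves would typically be plotted; the printed table conveys the same divergence.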

How To Prevent It?

Many techniques help prevent overfitting in machine learning:

  • L1 and L2 regularization add a penalty term to the loss function. This discourages the model from fitting the training data too closely and, in turn, prevents overfitting.
  • Cross-validation helps prevent it by evaluating the model on multiple held-out folds rather than a single split.
  • With early stopping, training halts automatically just before the model reaches the point of overfitting.
  • Data augmentation creates new training samples from the existing training data. As a result, the model is exposed to a greater variety of samples, which helps prevent overfitting.
  • The dropout technique forces the model to learn more robust and generalizable representations, preventing overfitting.
  • Decreasing the number of features through feature selection prevents the model from fitting noise in irrelevant features.

Therefore, following these methods reduces the overfitting problem, producing a more accurate and useful machine learning model.
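The cross-validation bullet above can be shown in a few lines. This minimal sketch (scikit-learn and synthetic data are assumptions) scores a model on five folds instead of a single train/test split, giving a more honest estimate of generalization.

```python
# Minimal cross-validation sketch: evaluate on 5 folds instead of one split.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)

# Each fold is held out once; the model never sees its own test fold.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("fold accuracies:", scores.round(2))
print("mean accuracy:", scores.mean().round(2))
```

A large spread across folds, or fold scores far below the training score, suggests the model depends on quirks of particular subsets of the data.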

Overfitting vs Underfitting

Let us discuss the differences between overfitting and underfitting using the table below:

  • Definition: Overfitting leads to poor performance on fresh data because the model is too complex and fits the training data too tightly. Underfitting refers to poor performance on both training and fresh data because the model is too simple, so it fails to capture the underlying patterns in the data.
  • Cause: Overfitting is primarily caused by poor feature selection, too many model parameters, and insufficient training data. Underfitting is primarily caused by an overly simple model, too few features, and excessive regularization.
  • Performance on training data: Overfitted models show low training error. Underfitted models show high training error.
  • Bias: Overfitting is generally associated with low bias. Underfitting is associated with high bias.
  • Solutions: Overfitting can be addressed through feature selection, regularization, cross-validation, data augmentation, and early stopping. Underfitting can be addressed by adding more features, increasing model complexity, changing the model architecture, and collecting more data.

Frequently Asked Questions (FAQs)

1. Why are decision trees prone to overfitting?

Decision trees are prone to overfitting due to their inherent tendency to create complex, detailed splits in the training data. This tendency can be addressed by applying pruning techniques, controlling tree growth, and utilizing ensemble methods to create more robust and accurate models.
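The pruning remedy mentioned in the answer can be sketched with scikit-learn's cost-complexity pruning (an assumed choice of technique; the `ccp_alpha` value is illustrative). A pruned tree keeps far fewer leaves than an unconstrained one.

```python
# Hedged sketch: cost-complexity pruning (ccp_alpha) shrinks an overgrown
# decision tree trained on noisy synthetic data.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, flip_y=0.2,
                           random_state=0)

full = DecisionTreeClassifier(random_state=0).fit(X, y)           # unpruned
pruned = DecisionTreeClassifier(ccp_alpha=0.02,                   # pruned
                                random_state=0).fit(X, y)

print("leaves before pruning:", full.get_n_leaves())
print("leaves after pruning: ", pruned.get_n_leaves())
```

Fewer leaves means fewer memorized noise points, trading a little training accuracy for better generalization; `max_depth` or `min_samples_leaf` achieve a similar effect by limiting growth up front.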

2. Can overfitting occur in any machine learning algorithm? 

Yes, it can occur in any machine learning algorithm. However, complex models with many parameters or a significant degree of flexibility, such as deep neural networks or decision trees, are more susceptible to overfitting.

3. Is more data always the solution to overfitting? 

While having more data can help reduce overfitting, it is not always the sole solution. The quality and relevance of the data are crucial. Adding more irrelevant or noisy data may not improve the model’s performance. Other techniques like regularization or feature selection may be more effective in addressing this issue, even with limited data.

This has been a guide to what overfitting is. Here, we compare it with underfitting and explain how to detect and prevent it, along with examples.
