**Here are 10 commonly asked data science interview questions & answers:**

1. What is the difference between supervised and unsupervised learning?

Supervised learning learns from labeled data in order to predict outcomes for new inputs, while unsupervised learning finds structure or patterns in unlabeled data.
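A minimal sketch of the contrast, assuming scikit-learn is available: a classifier is fit on features *and* labels, while a clustering algorithm sees only the features.

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic data with two well-separated groups.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: the labels y are provided and the model learns to predict them.
clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))  # training accuracy

# Unsupervised: only X is given; KMeans discovers groupings on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_[:10])
```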

2. Explain the bias-variance tradeoff in machine learning.

The bias-variance tradeoff is a key concept in machine learning. Models with high bias are too simple and underfit the data, while models with high variance are too complex and overfit to the training data. The goal is to find the right balance between bias and variance so the model generalizes well.
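The tradeoff can be illustrated with a quick NumPy sketch: a straight line (high bias) underfits a noisy sinusoid, while a high-degree polynomial (high variance) drives training error down by chasing the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.size)

# Degree 1 = high bias (underfits); degree 15 = high variance (overfits).
mse = {}
for degree in (1, 15):
    coefs = np.polyfit(x, y, degree)
    mse[degree] = np.mean((np.polyval(coefs, x) - y) ** 2)
    print(degree, round(mse[degree], 4))
```

Note the low training error of the degree-15 fit is exactly the symptom of overfitting: it would not carry over to fresh data drawn from the same sinusoid.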

3. What is the Central Limit Theorem and why is it important in statistics?

The Central Limit Theorem (CLT) states that the sampling distribution of the sample mean will be approximately normally distributed regardless of the underlying population distribution, as long as the sample size is sufficiently large. It is important because it justifies normal-theory inference, such as hypothesis tests and confidence intervals, even when the population itself is not normal.
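The theorem is easy to verify by simulation with NumPy. Here the population is a heavily skewed exponential distribution with mean 1, yet the means of repeated samples of size 50 cluster around 1 with spread close to σ/√n:

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 50, 10_000

# 10,000 samples of size 50 from a skewed (exponential) population.
sample_means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

# CLT: the sample means center on the population mean (1.0)
# with standard deviation close to sigma / sqrt(n) = 1 / sqrt(50).
print(sample_means.mean())
print(sample_means.std())
```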

4. Describe the process of feature selection and why it is important in machine learning.

Feature selection is the process of selecting the most relevant features (variables) from a dataset. This is important because irrelevant or redundant features can lead to overfitting, slower training times, and reduced accuracy.
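One common approach, sketched here with scikit-learn (assumed available), is univariate selection: score each feature against the target and keep the top k. The dataset below has 20 features of which only 5 are informative by construction.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# 20 features, only 5 of which carry signal about the label.
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

# Keep the 5 features with the strongest ANOVA F-score against y.
selector = SelectKBest(f_classif, k=5).fit(X, y)
X_new = selector.transform(X)
print(X_new.shape)
```

Other families of methods include wrapper approaches (e.g. recursive feature elimination) and embedded approaches (e.g. L1 penalties that zero out coefficients).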

5. What is the difference between overfitting and underfitting in machine learning? How do you address them?

Overfitting occurs when a model is too complex and fits the training data too closely, resulting in poor performance on unseen data. Underfitting occurs when a model is too simple to capture the underlying pattern, resulting in poor performance on both training and unseen data. Techniques to address overfitting include regularization, early stopping, and collecting more training data, while techniques to address underfitting include using a more complex model or adding more informative features.

6. What is regularization and why is it used in machine learning?

Regularization is a technique used to prevent overfitting in machine learning. It involves adding a penalty term to the loss function to limit the complexity of the model, shrinking the model's weights toward zero (and, in the case of L1 regularization, driving some of them exactly to zero).
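The shrinkage effect is visible by comparing ordinary least squares to ridge regression (the L2 penalty), sketched here with scikit-learn assumed available:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))
y = X @ rng.normal(size=10) + rng.normal(0, 0.5, size=40)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)  # alpha controls the penalty strength

# The L2 penalty shrinks the coefficient vector toward zero.
print(np.linalg.norm(ols.coef_), np.linalg.norm(ridge.coef_))
```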

7. How do you handle missing data in a dataset?

Missing data can be handled by deleting the affected samples, imputing the missing values (e.g. with the mean, median, or a model-based estimate), or using models that can handle missing values directly.
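The first two options look like this in pandas (the toy DataFrame below is illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 31, 40],
                   "income": [50_000, 62_000, np.nan, 58_000]})

# Option 1: drop any row containing a missing value.
dropped = df.dropna()

# Option 2: impute, e.g. with the column mean (median is a robust alternative).
imputed = df.fillna(df.mean())

print(dropped.shape)
print(imputed.isna().sum().sum())  # no missing values remain
```

The right choice depends on how much data is missing and whether the missingness is random or informative.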

8. What is the difference between classification and regression in machine learning?

Classification is a type of supervised learning where the goal is to predict a categorical or discrete outcome, while regression is a type of supervised learning where the goal is to predict a continuous or numerical outcome.

9. Explain the concept of cross-validation and why it is used.

Cross-validation is a technique used to evaluate the performance of a machine learning model. It involves splitting the data into training and validation sets, and then training and evaluating the model on multiple such splits. Cross-validation gives a better estimate of the model’s generalization ability and helps detect overfitting.
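With scikit-learn (assumed available), k-fold cross-validation is a one-liner: the model is trained on k−1 folds and scored on the held-out fold, rotating through all k folds.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold CV: one held-out accuracy score per fold.
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())
```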

10. What evaluation metrics would you use to evaluate a binary classification model?

Some commonly used evaluation metrics for binary classification models are accuracy, precision, recall, F1 score, and ROC-AUC. The choice of metric depends on the problem, in particular on class imbalance and on the relative cost of false positives versus false negatives.
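To make the definitions concrete, the first four metrics can be computed by hand from the confusion-matrix counts (scikit-learn's `sklearn.metrics` module provides the same quantities); the toy predictions below are illustrative:

```python
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Confusion-matrix counts for the positive class (1).
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)                          # of predicted positives, how many were right
recall = tp / (tp + fn)                             # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
print(accuracy, precision, recall, f1)
```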