Machine Learning Q/A


We have collected the most asked machine learning interview questions, with answers. Prepare yourself!


Most asked Machine Learning interview questions and answers

Q1. What is the difference between supervised and unsupervised machine learning?


Answer:

Supervised learning uses data with labels associated with it, i.e., if an input row contains an image of an apple, the label "apple" is associated with that row. In unsupervised machine learning, no output labels are associated with the inputs, so the algorithm must find structure in the data on its own.
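As a toy illustration (the data values and feature names here are invented), the two settings differ only in whether labels accompany the rows:

```python
# Supervised setting: every input row comes paired with a label.
labeled_data = [
    ([7.0, 0.3], "apple"),
    ([6.5, 0.2], "apple"),
    ([9.1, 0.8], "orange"),
]

# Unsupervised setting: the same kind of rows, but no labels at all;
# the algorithm can only look for structure (e.g. clusters).
unlabeled_data = [
    [7.2, 0.25],
    [9.0, 0.75],
]

# A supervised learner can be scored against the known labels.
features, labels = zip(*labeled_data)
print(labels)  # ('apple', 'apple', 'orange')
```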

Q2. How is KNN different from k-means clustering?


Answer:

K-Nearest Neighbors is a supervised classification algorithm, while k-means clustering is an unsupervised clustering algorithm. While the mechanisms may seem similar at first, what this really means is that for K-Nearest Neighbors to work, you need labeled data to classify an unlabeled point into (hence the "nearest neighbor" part). K-means clustering requires only a set of unlabeled points and the number of clusters k: the algorithm gradually learns to cluster the points into groups by repeatedly assigning each point to its nearest centroid and recomputing each centroid as the mean of its assigned points. The critical difference here is that KNN needs labeled points and is thus supervised learning, while k-means doesn't, and is thus unsupervised learning.
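A minimal pure-Python sketch of the contrast (the points and labels are made up for illustration; KNN uses k=1 here, and only a single k-means assignment pass is shown, not the full iteration):

```python
import math

def euclidean(a, b):
    return math.dist(a, b)

# --- KNN (supervised): requires labeled points ---
labeled = [((1.0, 1.0), "A"), ((1.2, 0.9), "A"), ((5.0, 5.0), "B")]

def knn_predict(point, labeled, k=1):
    # Majority label among the k nearest labeled neighbors.
    nearest = sorted(labeled, key=lambda item: euclidean(point, item[0]))[:k]
    votes = [lab for _, lab in nearest]
    return max(set(votes), key=votes.count)

# --- k-means (unsupervised): requires only points and k centroids ---
points = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.0), (5.1, 4.8)]

def kmeans_assign(points, centroids):
    # One assignment step: index of the nearest centroid for each point.
    return [min(range(len(centroids)),
                key=lambda i: euclidean(p, centroids[i])) for p in points]

print(knn_predict((1.1, 1.0), labeled))                  # A
print(kmeans_assign(points, [(1.0, 1.0), (5.0, 5.0)]))   # [0, 0, 1, 1]
```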

Q3. Why is “Naive” Bayes naive?


Answer:

Despite its practical applications, especially in text mining, Naive Bayes is considered “Naive” because it makes an assumption that is virtually impossible to see in real-life data: the conditional probability is calculated as the pure product of the individual probabilities of components. This implies the absolute independence of features — a condition probably never met in real life.

Q4. What’s your favorite algorithm, and can you explain it to me in less than a minute?


Answer:

This type of question tests your ability to communicate complex technical nuances with poise, and to summarize quickly and efficiently. Have an algorithm in mind, and make sure you can explain different algorithms so simply and effectively that a five-year-old could grasp the basics!

Q5. What’s the difference between Type I and Type II error?


Answer:

Type I error is a false positive, while Type II error is a false negative. Briefly stated, Type I error means claiming something has happened when it hasn’t, while Type II error means that you claim nothing is happening when in fact something is. A clever way to think about this is to think of Type I error as telling a man he is pregnant, while Type II error means you tell a pregnant woman she isn’t carrying a baby.
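Counting the two error types from a toy set of predictions (the labels below are made up) makes the definitions concrete:

```python
# Ground truth and model predictions (1 = positive, 0 = negative).
y_true = [1, 0, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1]

# Type I error: predicted positive when the truth is negative (false positive).
type_i = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

# Type II error: predicted negative when the truth is positive (false negative).
type_ii = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

print(type_i, type_ii)  # 2 1
```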

Q6. What’s the difference between a generative and a discriminative model?


Answer:

A generative model learns how the data in each category is distributed (modeling the joint distribution of inputs and labels), while a discriminative model simply learns the distinction between different categories (modeling the decision boundary, or the labels conditioned on the inputs). Discriminative models will generally outperform generative models on classification tasks.
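A toy 1-D sketch of the two viewpoints (the data values are invented; a real discriminative model would fit the boundary directly from the data rather than deriving it from class means):

```python
import statistics

# Toy 1-D, two-class data.
class_a = [1.0, 1.2, 0.8]
class_b = [5.0, 4.8, 5.2]

# Generative view: model what each category's data looks like,
# here reduced to a per-class mean of P(x | class).
mean_a, mean_b = statistics.mean(class_a), statistics.mean(class_b)

# Discriminative view: all that is kept is the boundary between classes.
boundary = (mean_a + mean_b) / 2

def predict(x):
    return "A" if x < boundary else "B"

print(predict(2.0))  # A
```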

Q7. How is a decision tree pruned?


Answer:

Pruning removes branches with weak predictive power from a decision tree in order to reduce the complexity of the model and improve its predictive accuracy. Pruning can happen bottom-up or top-down, with approaches such as reduced-error pruning and cost-complexity pruning.
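A brief sketch of cost-complexity pruning, assuming scikit-learn is available; the `ccp_alpha` value below is an arbitrary choice for illustration (larger values prune more aggressively):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# An unpruned tree versus one pruned via cost-complexity pruning.
unpruned = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X, y)

# Pruning cuts weak branches, so the pruned tree has fewer nodes.
print(unpruned.tree_.node_count, pruned.tree_.node_count)
```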

Q8. What’s the trade-off between bias and variance?


Answer:

The bias-variance decomposition essentially decomposes the expected error of any learning algorithm into three terms: the (squared) bias, the variance, and an irreducible error due to noise in the underlying dataset. Essentially, if you make the model more complex and add more variables, you’ll reduce bias but gain some variance; to reach the optimally reduced amount of error, you’ll have to trade off bias and variance. You don’t want either high bias or high variance in your model.
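The trade-off can be seen by fitting polynomials of different degrees to noisy toy data (the sine-plus-noise data below is generated just for illustration): a low degree underfits (high bias), while a very high degree chases the noise (high variance) even though its training error keeps shrinking.

```python
import numpy as np

# Toy data: a sine wave plus Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Training error falls as model complexity (degree) grows; low degrees
# underfit, very high degrees memorize the noise and generalize poorly.
train_err = {}
for degree in (1, 3, 15):
    coeffs = np.polyfit(x, y, degree)
    train_err[degree] = float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print(train_err)
```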

Q9. Explain how a ROC curve works.


Answer:

The ROC curve is a graphical representation of the contrast between the true positive rate and the false positive rate at various classification thresholds. It’s often used as a proxy for the trade-off between the sensitivity of the model (true positives) and the fall-out, i.e. the probability it will trigger a false alarm (false positives).
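Each point on the curve comes from picking one threshold and computing the two rates; a small pure-Python sketch with made-up scores:

```python
# Ground-truth labels and the model's predicted scores.
y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]

def roc_point(threshold):
    # Binarize the scores at this threshold, then compute the rates.
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p and t for p, t in zip(preds, y_true))
    fp = sum(p and not t for p, t in zip(preds, y_true))
    tpr = tp / sum(y_true)                  # sensitivity
    fpr = fp / (len(y_true) - sum(y_true))  # fall-out
    return fpr, tpr

# Sweeping the threshold traces out the ROC curve point by point.
for th in (0.2, 0.5, 0.9):
    print(th, roc_point(th))
```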

Q10. Explain the difference between L1 and L2 regularization.


Answer:

L2 regularization tends to spread the penalty among all the terms, shrinking every weight toward zero but rarely to exactly zero, while L1 is more binary/sparse, driving many weights to exactly zero so that only the most important features keep nonzero weight. L1 corresponds to placing a Laplace prior on the weights, while L2 corresponds to a Gaussian prior.
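The sparsity difference is easy to demonstrate, assuming scikit-learn is available; the synthetic data and the `alpha` strength below are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Synthetic regression data where only the first two of ten features matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.1, 100)

lasso = Lasso(alpha=0.1).fit(X, y)  # L1 penalty
ridge = Ridge(alpha=0.1).fit(X, y)  # L2 penalty

# L1 zeroes out the irrelevant weights; L2 merely shrinks them.
print("zero L1 weights:", int(np.sum(lasso.coef_ == 0)))
print("zero L2 weights:", int(np.sum(ridge.coef_ == 0)))
```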
