1.
In L1 regularization we penalize the absolute values of the weights, while in L2 regularization we penalize the squared values of the weights.
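A minimal sketch of this contrast using scikit-learn's Lasso (L1) and Ridge (L2); the data set is synthetic and the alpha values are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic regression problem with only 3 of 10 features informative.
X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)  # L1: penalizes sum of |w_i|
ridge = Ridge(alpha=1.0).fit(X, y)  # L2: penalizes sum of w_i**2

# The L1 penalty tends to drive uninformative coefficients exactly to zero,
# while the L2 penalty only shrinks them toward zero.
print("Lasso zero coefficients:", np.sum(lasso.coef_ == 0))
print("Ridge zero coefficients:", np.sum(ridge.coef_ == 0))
```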
2.
The scikit-learn Python machine learning library provides the ColumnTransformer class, which allows you to selectively apply data transforms to different columns of your dataset.
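A short sketch of this usage; the column names and toy data frame below are made up for illustration:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [25, 32, 47, 51],
    "income": [40_000, 52_000, 81_000, 95_000],
    "city": ["London", "Paris", "London", "Berlin"],
})

# Scale the numeric columns and one-hot encode the categorical column,
# each transform applied only to the columns named for it.
ct = ColumnTransformer([
    ("scale_numeric", StandardScaler(), ["age", "income"]),
    ("encode_city", OneHotEncoder(), ["city"]),
])

X = ct.fit_transform(df)
print(X.shape)  # (4, 5): 2 scaled columns + 3 one-hot city columns
```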
3.
It is good practice to use min-max scaling on a feature with a few extreme outliers.
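To see what is at stake, here is an illustrative example (with made-up numbers) of how a single extreme outlier interacts with MinMaxScaler:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

x = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])  # one extreme outlier

# The outlier defines the maximum of the range, so the remaining values
# are squashed into a narrow band near zero.
scaled = MinMaxScaler().fit_transform(x)
print(scaled.ravel())  # approximately [0. 0.001 0.002 0.003 1.]
```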
4.
In kNN, the parameter k should take odd values so that there are no ties in the voting.
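A small sketch of the binary case this refers to, on synthetic data: with two classes, an odd k makes the majority vote unambiguous.

```python
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# k=5 (odd) guarantees at least a 3-2 majority in a two-class vote;
# k=4 could split 2-2 and force an arbitrary tie-break.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict(X[:3]))
```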
5.
Treating a non-ordinal categorical variable as a continuous variable would result in a better predictive model.
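The two treatments in question can be compared directly; the colour column below is made up, and the sparse_output argument assumes scikit-learn 1.2 or later:

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

colors = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# Integer coding imposes an artificial order on a nominal variable...
as_integers = OrdinalEncoder().fit_transform(colors)
# ...whereas one-hot encoding keeps the categories unordered.
as_one_hot = OneHotEncoder(sparse_output=False).fit_transform(colors)

print(as_integers.ravel())  # [2. 1. 0. 1.] -- implies blue < green < red
print(as_one_hot)           # one binary column per colour
```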
6.
One-hot encoding increases the dimensionality of a data set.
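A quick sketch of the effect using pandas get_dummies (the city column is made up):

```python
import pandas as pd

df = pd.DataFrame({"city": ["London", "Paris", "Berlin", "Paris"]})
encoded = pd.get_dummies(df, columns=["city"])

# One categorical column with k distinct values becomes k binary columns.
print(df.shape)       # (4, 1)
print(encoded.shape)  # (4, 3)
print(encoded.columns.tolist())
```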
7.
OLS regression is expected to overfit more (lower bias) than ridge regression.
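A hedged sketch of the contrast on synthetic data: with few samples relative to features, unpenalized OLS fits larger weights than ridge, whose penalty trades a little bias for lower variance. The sizes and alpha below are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge

# Few samples relative to features: a setting where OLS tends to overfit.
X, y = make_regression(n_samples=30, n_features=20, noise=10.0, random_state=0)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

# The ridge penalty shrinks the coefficient vector relative to OLS.
print("OLS   ||w||:", np.linalg.norm(ols.coef_))
print("Ridge ||w||:", np.linalg.norm(ridge.coef_))
```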
8.
Fitting your scaling transformation separately to your training and test sets improves model performance.
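For contrast, here is a sketch of the usual protocol: fit the scaler on the training set only and reuse the fitted statistics on the test set. The data is random noise, purely for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(0).normal(size=(100, 3))
X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)    # statistics come from training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)  # same means and variances reused

# Fitting a second scaler on the test set would put the two sets on
# different scales and leak test-set statistics into preprocessing.
```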
9.
The more features that we use to represent our data, the better the learning algorithm will generalize to new data points.
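One way to probe this claim is a small simulation; everything below (sample sizes, number of noise features, choice of kNN) is an arbitrary choice for the sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=5, n_informative=5,
                           n_redundant=0, random_state=0)

# Append purely random features and watch cross-validated accuracy.
for n_noise in (0, 20, 100):
    noise = rng.normal(size=(X.shape[0], n_noise))
    X_aug = np.hstack([X, noise])
    score = cross_val_score(KNeighborsClassifier(), X_aug, y, cv=5).mean()
    print(f"{X_aug.shape[1]:3d} features -> accuracy {score:.3f}")
```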
10.
It is not good machine learning practice to use the test set to help adjust the hyperparameters of your learning algorithm.
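A sketch of the practice this statement points at: tune hyperparameters with cross-validation on the training set, and touch the held-out test set only once at the end. The data and parameter grid are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hyperparameters are chosen by cross-validation, without the test set.
search = GridSearchCV(KNeighborsClassifier(),
                      {"n_neighbors": [1, 3, 5, 7, 9]}, cv=5)
search.fit(X_train, y_train)

print("best k:", search.best_params_["n_neighbors"])
print("test accuracy (used once):", search.score(X_test, y_test))
```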
11.
Say there are two kids, Jack and Jill, taking a maths exam. Jack only learnt addition, while Jill memorized the questions and their answers from the maths book. Now, who will succeed in the exam? The answer is neither. In machine learning lingo, Jack is blank1 and Jill is blank2.
12.
You are working on a classification problem. For validation purposes, you have randomly split the training data set into train and validation sets. You are confident that your model will work incredibly well on unseen data because your validation accuracy is high. However, you are shocked by the poor test accuracy. What might have gone wrong?
13.
Which of the following models is more acceptable than the others to be applied to new data points?
14.
Which of the following is not correct regarding LogisticRegression and LinearSVM?