Q91
Q91 What is the main idea behind Principal Component Analysis (PCA)?
To maximize variance along new dimensions
To minimize data loss
To reduce the number of data points
To classify data
Q92
Q92 What happens to the original features in PCA after transformation?
They remain the same
They are transformed into orthogonal components
They are multiplied by a scalar
They are clustered
Q93
Q93 What is the role of the eigenvectors in PCA?
They represent the directions of the principal components
They increase the variance
They minimize the cost function
They maximize the distance between clusters
Q94
Q94 Which function in sklearn is used to implement PCA?
pca_reduction()
PrincipalComponent()
PCA()
ComponentAnalysis()
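For reference, a minimal sketch of the correct answer, PCA() from sklearn.decomposition, run on made-up data:

```python
# Sketch: PCA with scikit-learn; the data here is illustrative only.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 5)          # 100 samples, 5 features (made-up data)
pca = PCA(n_components=2)           # keep the 2 highest-variance directions
X_reduced = pca.fit_transform(X)    # shape (100, 2)
print(pca.explained_variance_ratio_)
```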
Q95
Q95 How can you set the number of components to retain in PCA using sklearn?
num_components()
n_components
retain_components
reduce_dimensions
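For reference, n_components is a constructor parameter of PCA(); in scikit-learn it accepts either an integer count of components or a float in (0, 1) meaning the fraction of variance to retain:

```python
# Sketch: two ways to set n_components in scikit-learn's PCA.
from sklearn.decomposition import PCA

pca_fixed = PCA(n_components=3)     # keep exactly 3 components
pca_var = PCA(n_components=0.95)    # keep enough components for 95% of the variance
```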
Q96
Q96 Which library can be used to implement t-SNE for dimensionality reduction in Python?
sklearn
pandas
numpy
matplotlib
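For reference, a minimal sketch of t-SNE via sklearn.manifold.TSNE on made-up data:

```python
# Sketch: t-SNE embedding with scikit-learn.
import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(200, 50)         # made-up high-dimensional data
X_embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(X_embedded.shape)             # (200, 2)
```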
Q97
Q97 After applying PCA, a model's performance drops significantly. What could be the issue?
Too many components retained
Too few components retained
The model is overfitting
Data was not scaled properly
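For reference, the usual fix for the scaling issue is to standardize features before PCA; a minimal sketch using a scikit-learn pipeline (the step choices are illustrative):

```python
# Sketch: standardize features before PCA so that no feature dominates the
# variance purely because of its scale.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

pipeline = make_pipeline(StandardScaler(), PCA(n_components=2))
# X_reduced = pipeline.fit_transform(X)   # X is your feature matrix
```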
Q98
Q98 A t-SNE model does not correctly represent the structure of high-dimensional data. What could improve it?
Use fewer iterations
Increase perplexity
Reduce the learning rate
Use PCA before t-SNE
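For reference, a minimal sketch of running PCA as a preprocessing step before t-SNE; the dimensions chosen here are illustrative:

```python
# Sketch: reduce to ~50 dimensions with PCA first, then run t-SNE on the
# result; a common way to denoise the input and speed up the embedding.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X = np.random.rand(300, 200)                            # made-up data
X_pca = PCA(n_components=50).fit_transform(X)
X_embedded = TSNE(n_components=2, random_state=0).fit_transform(X_pca)
```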
Q99
Q99 A dimensionality reduction algorithm removes important features from the data. What could prevent this?
Increase the number of components
Use regularization
Perform feature selection first
Use a different distance metric
Q100
Q100 What is the purpose of model evaluation in machine learning?
To reduce the number of features
To increase the accuracy
To assess the performance of a model
To select the best algorithm
Q101
Q101 Which metric is most appropriate for evaluating classification problems?
Mean Squared Error
Precision and Recall
R-squared
Mean Absolute Error
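For reference, a minimal sketch of precision and recall with scikit-learn on made-up labels:

```python
# Sketch: precision and recall for a toy set of binary labels.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(precision_score(y_true, y_pred))  # TP / (TP + FP)
print(recall_score(y_true, y_pred))     # TP / (TP + FN)
```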
Q102
Q102 What does the ROC curve represent in a classification task?
The trade-off between true positive and false positive rates
The accuracy of the model
The training time
The distribution of classes
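For reference, a minimal sketch of computing the ROC curve (true positive rate vs. false positive rate across thresholds) with scikit-learn; the scores are made up:

```python
# Sketch: ROC curve points and area under the curve for made-up scores.
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]
fpr, tpr, thresholds = roc_curve(y_true, y_scores)
print(roc_auc_score(y_true, y_scores))
```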
Q103
Q103 What does a high variance in a model indicate?
Underfitting
Overfitting
Balanced performance
Poor training accuracy
Q104
Q104 What is the F1-Score used for in classification problems?
To measure the ratio of true positives
To balance precision and recall
To calculate accuracy
To measure sensitivity
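For reference, the F1-Score is the harmonic mean of precision and recall; a minimal sketch with scikit-learn on made-up labels:

```python
# Sketch: F1 = 2 * (precision * recall) / (precision + recall).
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(f1_score(y_true, y_pred))
```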
Q105
Q105 Which function in sklearn is used to calculate accuracy for classification models?
calc_accuracy()
accuracy()
accuracy_score()
classification_accuracy()
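For reference, a minimal sketch of the correct answer, accuracy_score(), on made-up labels:

```python
# Sketch: fraction of predictions that match the true labels.
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]
print(accuracy_score(y_true, y_pred))  # 0.75
```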
Q106
Q106 How can you calculate the confusion matrix in sklearn?
confusion_matrix()
conf_matrix()
calc_confusion()
matrix_conf()
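For reference, a minimal sketch of confusion_matrix() on made-up labels:

```python
# Sketch: rows are true classes, columns are predicted classes.
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
print(confusion_matrix(y_true, y_pred))
# [[2 0]
#  [1 2]]
```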
Q107
Q107 How do you implement cross-validation in Python using sklearn?
cross_validate()
cross_val_score()
validation_score()
cv_validate()
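For reference, a minimal sketch of cross_val_score(); the estimator and dataset are illustrative choices:

```python
# Sketch: 5-fold cross-validation on the iris dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())
```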
Q108
Q108 A model has a high accuracy but poor performance on new data. What is the issue?
Overfitting
Underfitting
Low variance
Incorrect metric
Q109
Q109 A classification model has a high false positive rate. Which metric should be optimized?
Accuracy
Recall
Precision
F1-Score
Q110
Q110 A model performs well on the training set but poorly on the validation set. What could be the cause?
Underfitting
Overfitting
Balanced data
High recall
Q111
Q111 What is the role of the activation function in a neural network?
To adjust weights
To control the learning rate
To introduce non-linearity
To increase training speed
Q112
Q112 What is the vanishing gradient problem in deep learning?
Weights become too large
Gradients become too small as they propagate back to earlier layers
Gradients become too large
The model overfits
Q113
Q113 What is the purpose of dropout in a neural network?
To prevent overfitting
To increase learning rate
To improve accuracy
To increase data size
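For reference, a minimal Keras-style sketch of dropout; the layer sizes and dropout rate are illustrative, assuming tensorflow.keras:

```python
# Sketch: Dropout randomly zeroes a fraction of activations during training
# to reduce overfitting.
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential([
    Input(shape=(20,)),               # input dimension is made up
    Dense(64, activation='relu'),
    Dropout(0.5),                     # drop 50% of units at training time
    Dense(1, activation='sigmoid'),
])
```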
Q114
Q114 How does backpropagation work in neural networks?
By adjusting the input data
By updating weights using gradients
By changing the model architecture
By increasing the number of layers
Q115
Q115 Which Python library is commonly used to implement neural networks?
numpy
pandas
keras
matplotlib
Q116
Q116 Which function in Keras is used to compile a neural network model?
model.compile()
network.compile()
compile_nn()
compile_model()
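For reference, a minimal sketch of model.compile(); the optimizer, loss, and metric are illustrative choices, assuming tensorflow.keras:

```python
# Sketch: compile() attaches the optimizer, loss, and metrics before training.
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([Input(shape=(10,)), Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```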
Q117
Q117 How do you add a dense layer to a neural network in Keras?
add_dense()
model.add(Dense())
add_layer()
layer.add(Dense())
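For reference, a minimal sketch of the add()-style API with model.add(Dense()), assuming tensorflow.keras; the layer sizes are illustrative:

```python
# Sketch: build a model layer by layer with model.add(Dense(...)).
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Input(shape=(8,)))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
```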
Q118
Q118 A neural network performs well on the training set but poorly on the test set. What could be the issue?
Underfitting
Overfitting
Data leakage
Incorrect architecture
Q119
Q119 A neural network fails to converge during training. What could be the cause?
Low learning rate
High number of epochs
Small dataset
Overfitting
Q120
Q120 A deep neural network suffers from the vanishing gradient problem. What can help mitigate this issue?
Use a larger dataset
Increase learning rate
Use ReLU activation function
Reduce the number of layers