
Q61 Which library in Python provides pre-trained GloVe embeddings?
nltk
spaCy
gensim
torchtext
Q62 How do you load pre-trained Word2Vec embeddings using gensim?
KeyedVectors.load_word2vec_format()
gensim.load()
gensim.word2vec.load()
gensim.model_load()
Q63 Which method retrieves the most similar words to a given word in a Word2Vec model?
most_similar()
similarity()
find_similar()
retrieve_closest()
Q64 How do you calculate the cosine similarity between two words in Word2Vec using gensim?
word.similarity()
model.wv.similarity()
gensim.cosine_similarity()
gensim.vector_similarity()
Q65 A Word2Vec model struggles to learn word relationships. What should you adjust?
Use a larger window size
Reduce vocabulary size
Use smaller datasets
Disable embeddings
Q66 Pre-trained embeddings show poor performance on domain-specific data. What should you do?
Fine-tune embeddings
Ignore domain data
Train embeddings from scratch
Use smaller datasets
Q67 A GloVe model fails to represent polysemy effectively. What could solve this issue?
Use positional embeddings
Use context-based embeddings
Reduce dimensionality
Increase vocabulary size
Q68 What is the primary objective of sentiment analysis in NLP?
To identify grammatical errors
To detect the polarity of text
To tokenize text
To classify entities
Q69 Which machine learning algorithm is commonly used for sentiment classification tasks?
K-Means
Naive Bayes
Apriori
DBSCAN
Q70 How does bag-of-words representation affect sentiment analysis?
Captures context
Ignores context
Improves polarity detection
Improves grammatical accuracy
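The point behind Q70 is easy to demonstrate: a bag-of-words vector discards word order, so two sentences with opposite sentiment can map to identical vectors. A minimal sketch using scikit-learn's `CountVectorizer` (assumed installed; the example sentences are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Opposite sentiment, identical word counts
a = "the movie was good not bad"
b = "the movie was bad not good"

vec = CountVectorizer()
X = vec.fit_transform([a, b]).toarray()

# The two rows are identical: bag-of-words ignores context/order,
# which is also why it struggles with negation (see Q77)
print((X[0] == X[1]).all())  # True
```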
Q71 What is the role of a sentiment lexicon in sentiment analysis?
Provides context
Offers predefined word sentiments
Improves training speed
Tokenizes text
Q72 Which neural network architecture is most suitable for capturing sentiment in long texts?
CNN
RNN
Transformer
Naive Bayes
Q73 Which Python library provides a SentimentIntensityAnalyzer for sentiment analysis?
nltk
TextBlob
VADER
spaCy
Q74 How can you calculate sentiment polarity using TextBlob?
text.sentiment.polarity
text.sentiment
text.polarity
textblob.polarity
Q75 Which attribute of VADER provides a compound sentiment score?
polarity
subjectivity
compound
intensity
Q76 How do you improve sentiment analysis for domain-specific text?
Use a general lexicon
Fine-tune with domain data
Use static embeddings
Increase batch size
Q77 A sentiment analysis model fails to recognize negation in sentences. What should you adjust?
Add more negation examples
Reduce dataset size
Disable embeddings
Change tokenization logic
Q78 A sentiment analysis pipeline misclassifies sarcastic sentences. How can you improve it?
Add a sarcasm detection layer
Reduce training data
Use a smaller model
Ignore such cases
Q79 A sentiment model overfits on training data but performs poorly on test data. What should you do?
Add regularization
Reduce vocabulary size
Use static embeddings
Increase the number of epochs
Q80 What is the primary goal of machine translation in NLP?
To translate text between languages
To tokenize text
To generate embeddings
To classify text
Q81 Which model architecture is widely used for neural machine translation?
CNN
RNN
Transformer
Naive Bayes
Q82 What is the role of the attention mechanism in machine translation?
It encodes sentences
It focuses on relevant parts of the input
It generates word embeddings
It speeds up training
Q83 What is the advantage of sequence-to-sequence models in machine translation?
They simplify preprocessing
They handle variable-length input and output
They improve tokenization
They use static embeddings
Q84 Which evaluation metric is commonly used for machine translation quality?
BLEU
F1 Score
RMSE
Precision
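BLEU from Q84 can be computed with NLTK's built-in implementation, a minimal sketch assuming `nltk` is installed (no corpus download needed; the reference/candidate pair is invented):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One reference translation and one candidate, both pre-tokenized
reference = [["the", "cat", "sat", "on", "the", "mat"]]
candidate = ["the", "cat", "is", "on", "the", "mat"]

# Smoothing avoids a zero score when some higher-order n-gram is missing
smooth = SmoothingFunction().method1
score = sentence_bleu(reference, candidate, smoothing_function=smooth)
print(round(score, 3))  # closer to 1.0 means closer to the reference
```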
Q85 Which Python library provides pre-trained translation models via the transformers package?
spaCy
nltk
Hugging Face
TextBlob
Q86 How do you load a pre-trained translation model using transformers?
TranslationPipeline()
load_model()
pipeline(task="translation")
transformers.load()
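The `pipeline(task="translation")` pattern from Q86 looks like the sketch below, assuming the `transformers` library (plus a backend such as PyTorch and `sentencepiece`) is installed. `Helsinki-NLP/opus-mt-en-de` is one public English-to-German checkpoint; it downloads on first use, and the input sentence is invented:

```python
from transformers import pipeline

# Q86: build a translation pipeline with an explicit model checkpoint
translator = pipeline(task="translation", model="Helsinki-NLP/opus-mt-en-de")

# The pipeline returns a list of dicts with a "translation_text" key
result = translator("Machine translation maps text between languages.")
print(result[0]["translation_text"])
```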
Q87 How do you train a machine translation model using transformers?
Use a translation pipeline
Use a dataset of aligned sentence pairs
Train on a single language
Use an unsupervised dataset
Q88 How can you handle out-of-vocabulary (OOV) words during machine translation?
Skip OOV words
Use byte pair encoding (BPE)
Replace with synonyms
Ignore them completely
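Why BPE handles OOV words (Q88) can be shown with a toy segmenter. This is a hand-written sketch: the merge table below is invented for illustration, whereas real BPE learns its merges from corpus pair frequencies.

```python
# Toy merge table, ordered as they would have been learned
merges = [("l", "o"), ("lo", "w"), ("e", "r"), ("low", "er")]

def bpe_segment(word, merges):
    """Greedily apply learned merges to split a word into subword units."""
    symbols = list(word)
    for a, b in merges:
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i:i + 2] = [a + b]  # fuse the adjacent pair
            else:
                i += 1
    return symbols

# An unseen word like "lowest" still decomposes into known subwords,
# so the model never needs a hard <UNK> token for it
print(bpe_segment("lowest", merges))  # ['low', 'e', 's', 't']
print(bpe_segment("lower", merges))   # ['lower']
```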
Q89 A translation model produces overly literal translations. What could improve this?
Use a larger dataset
Add an attention mechanism
Fine-tune on diverse data
Reduce model size
Q90 A model struggles to handle long input sequences in translation. What should you adjust?
Use transformers
Use static embeddings
Reduce dataset size
Ignore such inputs