Q91 A test case fails repeatedly due to an environmental issue. What should be done?
Log the defect as invalid
Report the issue as a blocker
Skip the test
Modify the test case
Q92 During test execution, multiple defects are discovered in a single module. What is the best course of action?
Log all defects individually
Log one defect and ignore the rest
Merge all defects
Fix one defect and retest
Q93 What is the main goal of User Acceptance Testing (UAT)?
To check code quality
To validate the system meets user requirements
To test integration
To automate test cases
Q94 Who is primarily responsible for conducting User Acceptance Testing?
Test engineers
Developers
End users or clients
Product managers
Q95 Which document serves as the basis for UAT?
Test strategy
Requirement specification document
Test execution log
Defect log
Q96 What is a key difference between UAT and system testing?
UAT is conducted by developers
System testing validates only user interfaces
UAT validates business requirements
System testing is always automated
Q97 How can UAT scenarios be documented effectively?
Include test steps and expected outcomes
Focus only on negative scenarios
Exclude business processes
List only critical paths
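Note (Q97): as a minimal sketch of "test steps and expected outcomes", the Python structure below records one hypothetical UAT scenario; the scenario, field names, and values are illustrative, not a prescribed format.

    # Hypothetical UAT scenario: each step pairs an action with the outcome
    # the business user should verify before moving on.
    uat_scenario = {
        "id": "UAT-014",
        "title": "Customer places an order and receives confirmation",
        "preconditions": ["Registered customer account", "Item in stock"],
        "steps": [
            {"action": "Log in with a customer account",
             "expected": "Dashboard shows the customer's name"},
            {"action": "Add the item to the cart and check out",
             "expected": "Order summary shows the correct item and total"},
            {"action": "Confirm the order",
             "expected": "Confirmation page and email contain an order number"},
        ],
    }

    for i, step in enumerate(uat_scenario["steps"], start=1):
        print(f"Step {i}: {step['action']}\n  Expect: {step['expected']}")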
Q98 Which tool is commonly used to manage and document UAT test cases?
Postman
JIRA
Microsoft Word
Tableau
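Note (Q98): teams that track UAT cases in JIRA can also create issues through its REST API. The sketch below assumes JIRA Cloud; the base URL, project key, credentials, and issue type are placeholders, and your instance may use a dedicated test-case issue type instead of "Task".

    import requests

    # Placeholders: substitute your own JIRA instance, project, and API token.
    JIRA_URL = "https://your-company.atlassian.net"
    AUTH = ("uat.lead@example.com", "api-token")  # JIRA Cloud: email + API token

    payload = {
        "fields": {
            "project": {"key": "UAT"},      # hypothetical project key
            "summary": "UAT-014: Order placement and confirmation",
            "description": "Steps and expected outcomes as agreed with users.",
            "issuetype": {"name": "Task"},  # or a custom 'Test' type if configured
        }
    }

    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    print("Created issue:", resp.json()["key"])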
Q99 How can you ensure that a complex business workflow is covered in UAT?
Test only edge cases
Use detailed workflow scenarios
Rely on system testing results
Skip redundant steps
Q100 What should be done if a critical defect is found during UAT?
Ignore it
Log it and fix it immediately
Postpone deployment
Update the requirements document
Q101 What should a tester do if a UAT participant struggles to execute a test case?
Execute it for them
Provide guidance and document feedback
Ignore the issue
Stop testing
Q102 During UAT, a participant reports an issue that contradicts approved requirements. What should be done?
Ignore the issue
Update the requirements
Log and escalate the issue
Re-test the system
Q103 What is the primary purpose of regression testing?
To test new functionalities
To verify that existing functionalities remain unaffected by changes
To fix defects
To validate design
Q104 When is regression testing typically conducted?
Before the initial release
After code modifications
During system installation
Only during UAT
Q105 Which type of test cases are prioritized during regression testing?
High-risk areas and frequently used functionalities
Newly added features
Deprecated features
Minor functionalities
Q106 What distinguishes regression testing from retesting?
Regression testing focuses on new features
Retesting verifies fixed defects, while regression ensures no new defects
Regression tests random areas
Retesting involves integration testing only
Q107 How can regression testing be automated effectively?
By prioritizing test cases and creating reusable scripts
By testing manually
By focusing on deprecated features
By skipping minor changes
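Note (Q107): one common way to realize "prioritized, reusable" regression automation is pytest markers plus shared fixtures; the application, fixture, and tests below are hypothetical.

    import pytest

    @pytest.fixture
    def logged_in_session():
        # Reusable setup shared by the regression tests (hypothetical app).
        session = {"user": "uat_user", "authenticated": True}
        yield session
        session.clear()  # teardown

    @pytest.mark.regression  # register "regression" in pytest.ini to avoid warnings
    def test_login_still_works(logged_in_session):
        assert logged_in_session["authenticated"]

    @pytest.mark.regression
    def test_order_total_unchanged(logged_in_session):
        # High-risk, frequently used functionality: order totals.
        assert sum([10.0, 2.5]) == 12.5

The prioritized subset can then be run selectively with: pytest -m regression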
Q108 How would you update a regression test suite when a new feature is added?
Add relevant test cases to the suite
Remove existing test cases
Retest only the new feature
Skip updating the suite
Q109 How do you select test cases for regression testing after a minor code change?
Select only critical test cases related to the change
Test all cases in the system
Ignore testing
Select only unit tests
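Note (Q109): "only the critical test cases related to the change" can be made concrete with a maintained map from modules to their regression tests; the module and test names below are hypothetical.

    # Hypothetical coverage map maintained by the team.
    MODULE_TO_TESTS = {
        "billing": ["test_invoice_totals", "test_tax_calculation"],
        "auth": ["test_login", "test_password_reset"],
        "reports": ["test_monthly_summary"],
    }

    def select_tests(changed_modules):
        """Return the critical regression tests covering the changed modules."""
        selected = set()
        for module in changed_modules:
            selected.update(MODULE_TO_TESTS.get(module, []))
        return sorted(selected)

    print(select_tests(["billing"]))
    # ['test_invoice_totals', 'test_tax_calculation']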
Q110 During regression testing, a test case that previously passed now fails. What should be done?
Log a defect and reassign it to the development team
Ignore the issue
Mark it as invalid
Retest unrelated modules
Q111 What should you do if a regression test identifies multiple failures in unrelated areas?
Stop testing
Isolate each failure and log defects
Merge all failures
Retest only one area
Q112 What is the primary objective of compatibility testing?
To test the software's performance
To ensure software works across different environments
To validate new features
To automate testing
Q113 How does backward compatibility testing differ from forward compatibility testing?
Backward checks newer versions, forward checks older versions
Backward ensures compatibility with previous versions
Backward focuses on hardware
Forward validates test cases only
Q114 Which aspect of software is commonly validated during hardware compatibility testing?
Database schema
Operating system requirements
Peripheral device integration
Network performance
Q115 Why is cross-browser compatibility testing critical in web applications?
To optimize server performance
To ensure consistent functionality across browsers
To validate back-end logic
To minimize test execution time
Q116 How would you validate cross-browser compatibility for a web application?
Use a single browser
Use browser testing tools like BrowserStack
Focus only on Chrome
Ignore visual differences
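Note (Q116): a minimal Selenium sketch of the same check looped across locally installed browsers; a service such as BrowserStack works the same way through webdriver.Remote sessions pointed at its hub. The URL and title check are placeholders.

    from selenium import webdriver

    APP_URL = "https://example.com"  # placeholder application URL

    browsers = {
        "chrome": webdriver.Chrome,
        "firefox": webdriver.Firefox,
    }

    for name, launch in browsers.items():
        driver = launch()
        try:
            driver.get(APP_URL)
            # Hypothetical check: the title should be consistent in every browser.
            assert "Example" in driver.title, f"Title mismatch in {name}"
            print(f"{name}: OK ({driver.title})")
        finally:
            driver.quit()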
Q117 How can operating system compatibility be ensured for a desktop application?
Test on all supported OS versions
Test on a single OS
Ignore minor OS differences
Rely on virtual machines only
Q118 How can you test mobile compatibility for a web application?
Test using physical devices and emulators
Focus only on emulators
Ignore device-specific differences
Rely on a single test tool
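Note (Q118): emulators complement physical devices. As one hedged example, Chrome's built-in device emulation can be driven from Selenium; "Pixel 7" must exist in the local Chrome's DevTools device list, and the URL is a placeholder. Real devices remain necessary for touch, sensors, and performance.

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    # Use a device profile listed under DevTools > Device toolbar.
    options.add_experimental_option("mobileEmulation", {"deviceName": "Pixel 7"})

    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://example.com")  # placeholder URL
        # The emulated viewport exposes layout issues a desktop run hides.
        print("Viewport width:", driver.execute_script("return window.innerWidth"))
    finally:
        driver.quit()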
Q119 During compatibility testing, a feature fails on one browser but works on others. What should be done?
Mark it as non-critical
Log the issue and identify browser-specific behavior
Skip the test
Focus on server compatibility
Q120 A mobile app crashes on specific devices during testing. What is the best approach to resolve this issue?
Ignore the issue
Log the crash details, including device specifications
Test only on other devices
Focus on OS testing