Manual Testing Multiple Choice Questions (MCQs) and Answers

Master Manual Testing with our curated collection of practice Multiple Choice Questions (MCQs). Ideal for placement and interview preparation, the questions range from basic to advanced and give comprehensive coverage of Manual Testing concepts. Begin your placement preparation journey now!

Q91. A test case fails repeatedly due to an environmental issue. What should be done?
A. Log the defect as invalid
B. Report the issue as a blocker
C. Skip the test
D. Modify the test case

Q92. During test execution, multiple defects are discovered in a single module. What is the best course of action?
A. Log all defects individually
B. Log one defect and ignore the rest
C. Merge all defects
D. Fix one defect and retest

Q93. What is the main goal of User Acceptance Testing (UAT)?
A. To check for code quality
B. To validate the system meets user requirements
C. To test integration
D. To automate test cases

Q94. Who is primarily responsible for conducting User Acceptance Testing?
A. Test engineers
B. Developers
C. End users or clients
D. Product managers

Q95. Which document serves as the basis for UAT?
A. Test strategy
B. Requirement specification document
C. Test execution log
D. Defect log

Q96. What is a key difference between UAT and system testing?
A. UAT is conducted by developers
B. System testing validates only user interfaces
C. UAT validates business requirements
D. System testing is always automated

Q97. How can UAT scenarios be documented effectively?
A. Include test steps and expected outcomes
B. Focus only on negative scenarios
C. Exclude business processes
D. List only critical paths
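
A minimal sketch of the approach in Q97's correct option: record each UAT scenario with explicit steps and an expected outcome that a business user can tick off. The field names, scenario content, and CSV export are illustrative assumptions, not a prescribed template.

```python
# Sketch: a lightweight UAT scenario record with steps and expected outcomes.
# Field names, scenario content, and the CSV output format are illustrative only.
import csv
from dataclasses import dataclass, field
from typing import List

@dataclass
class UATScenario:
    scenario_id: str
    business_process: str          # the end-to-end workflow being validated
    steps: List[str] = field(default_factory=list)
    expected_outcome: str = ""
    actual_outcome: str = ""       # filled in by the UAT participant
    status: str = "Not Run"        # Pass / Fail / Blocked / Not Run

scenarios = [
    UATScenario(
        scenario_id="UAT-001",
        business_process="Customer places an order",
        steps=[
            "Log in as a registered customer",
            "Add an item to the cart",
            "Complete checkout with a saved payment method",
        ],
        expected_outcome="Order confirmation page and email are generated",
    ),
]

# Export to CSV so business users can execute scenarios and record results in a spreadsheet.
with open("uat_scenarios.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Business process", "Steps", "Expected outcome", "Actual outcome", "Status"])
    for s in scenarios:
        writer.writerow([s.scenario_id, s.business_process, " -> ".join(s.steps),
                         s.expected_outcome, s.actual_outcome, s.status])
```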

Q98. Which tool is commonly used to manage and document UAT test cases?
A. Postman
B. JIRA
C. Microsoft Word
D. Tableau
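
JIRA (Q98) is often used to track UAT test cases and the defects they raise. The hedged sketch below creates a JIRA issue for one UAT test case through the standard REST endpoint for issue creation; the base URL, project key, credentials, and issue type are placeholders and depend entirely on how the JIRA instance is configured.

```python
# Sketch: logging a UAT test case as a JIRA issue via the REST API.
# URL, project key, issue type, and credentials are placeholders for illustration.
import requests

JIRA_BASE = "https://your-company.atlassian.net"    # assumed JIRA Cloud instance
AUTH = ("uat.lead@example.com", "api-token-here")   # email + API token (placeholder)

payload = {
    "fields": {
        "project": {"key": "UAT"},                  # assumed project key
        "issuetype": {"name": "Task"},              # or a custom "Test Case" type
        "summary": "UAT-001: Customer places an order",
        "description": ("Steps: log in, add item to cart, complete checkout.\n"
                        "Expected: order confirmation page and email are generated."),
    }
}

resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload, auth=AUTH, timeout=30)
resp.raise_for_status()
print("Created issue:", resp.json()["key"])
```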

Q99. How can you ensure that a complex business workflow is covered in UAT?
A. Test only edge cases
B. Use detailed workflow scenarios
C. Rely on system testing results
D. Skip redundant steps

Q100. What should be done if a critical defect is found during UAT?
A. Ignore it
B. Log it and fix immediately
C. Postpone deployment
D. Update the requirements document

Q101. What should a tester do if a UAT participant struggles to execute a test case?
A. Execute it for them
B. Provide guidance and document feedback
C. Ignore the issue
D. Stop testing

Q102. During UAT, a participant reports an issue that contradicts approved requirements. What should be done?
A. Ignore the issue
B. Update the requirements
C. Log and escalate the issue
D. Re-test the system

Q103. What is the primary purpose of regression testing?
A. To test new functionalities
B. To verify that existing functionalities remain unaffected by changes
C. To fix defects
D. To validate design

Q104. When is regression testing typically conducted?
A. Before the initial release
B. After code modifications
C. During system installation
D. Only during UAT

Q105. Which type of test cases are prioritized during regression testing?
A. High-risk areas and frequently used functionalities
B. Newly added features
C. Deprecated features
D. Minor functionalities

Q106. What distinguishes regression testing from retesting?
A. Regression testing focuses on new features
B. Retesting verifies fixed defects, while regression ensures no new defects
C. Regression tests random areas
D. Retesting involves integration testing only

Q107. How can regression testing be automated effectively?
A. By prioritizing test cases and creating reusable scripts
B. By testing manually
C. By focusing on deprecated features
D. By skipping minor changes
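
For Q107, one common way to realize "prioritized test cases and reusable scripts" is a pytest suite where shared setup lives in fixtures and high-priority regression checks carry a marker so they can be selected on every build. The application functions used here (login, checkout_total) are hypothetical stand-ins for real application calls.

```python
# Sketch: reusable, prioritized regression checks with pytest.
# `login` and `checkout_total` stand in for real application calls (assumptions).
import pytest

def login(user, password):
    # Placeholder for the real login call (UI driver or API client).
    return user == "alice" and password == "secret"

def checkout_total(items):
    # Placeholder for the real checkout calculation.
    return sum(price for _, price in items)

@pytest.fixture
def cart():
    # Reusable setup shared by several regression tests.
    return [("book", 12.50), ("pen", 1.25)]

@pytest.mark.regression          # tag lets us select these with `pytest -m regression`
def test_login_existing_user():
    assert login("alice", "secret")

@pytest.mark.regression
def test_checkout_total_unchanged(cart):
    # Guards a frequently used, high-risk calculation against side effects of changes.
    assert checkout_total(cart) == pytest.approx(13.75)
```

Running `pytest -m regression` then executes only the tagged, high-priority cases; registering the `regression` marker in `pytest.ini` avoids unknown-marker warnings.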

Q108. How would you update a regression test suite when a new feature is added?
A. Add relevant test cases to the suite
B. Remove existing test cases
C. Retest only the new feature
D. Skip updating the suite

Q109. How do you select test cases for regression testing after a minor code change?
A. Select only critical test cases related to the change
B. Test all cases in the system
C. Ignore testing
D. Select only unit tests
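
For Q109, a practical way to "select only critical test cases related to the change" is to map changed modules to test keywords and let the runner filter on them. The module-to-keyword mapping below is a made-up example; a real map would come from the project's structure.

```python
# Sketch: pick a narrow regression scope from the list of changed modules.
# CHANGE_MAP contents are illustrative; a real map reflects the project's own layout.
import subprocess

CHANGE_MAP = {
    "payments/": "payments or checkout",   # pytest -k expression for payment changes
    "auth/": "login or session",
    "reports/": "reporting",
}

def keyword_filter(changed_files):
    """Build a pytest -k expression covering only areas touched by the change."""
    parts = {expr for path, expr in CHANGE_MAP.items()
             if any(f.startswith(path) for f in changed_files)}
    return " or ".join(sorted(parts))

changed = ["payments/gateway.py", "payments/refunds.py"]
expr = keyword_filter(changed)
if expr:
    # Run only the related critical tests rather than the full suite.
    subprocess.run(["pytest", "-k", expr], check=False)
```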

Q110. During regression testing, a test case that previously passed now fails. What should be done?
A. Log a defect and reassign
B. Ignore the issue
C. Mark it as invalid
D. Retest unrelated modules

Q111. What should you do if a regression test identifies multiple failures in unrelated areas?
A. Stop testing
B. Isolate each failure and log defects
C. Merge all failures
D. Retest only one area

Q112. What is the primary objective of compatibility testing?
A. To test software's performance
B. To ensure software works across different environments
C. To validate new features
D. To automate testing

Q113. How does backward compatibility testing differ from forward compatibility testing?
A. Backward checks newer versions, forward checks older versions
B. Backward ensures compatibility with previous versions
C. Backward focuses on hardware
D. Forward validates test cases only

Q114. Which aspect of software is commonly validated during hardware compatibility testing?
A. Database schema
B. Operating system requirements
C. Peripheral device integration
D. Network performance

Q115. Why is cross-browser compatibility testing critical in web applications?
A. To optimize server performance
B. To ensure consistent functionality across browsers
C. To validate back-end logic
D. To minimize test execution time

Q116. How would you validate cross-browser compatibility for a web application?
A. Use a single browser
B. Use browser testing tools like BrowserStack
C. Focus only on Chrome
D. Ignore visual differences
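
For Q116, a minimal local sketch of the idea: run the same Selenium check against more than one browser. Cloud services such as BrowserStack apply the same pattern through a Remote WebDriver pointed at their grid; the target URL and title check below are illustrative assumptions.

```python
# Sketch: the same functional check executed on multiple browsers with Selenium.
# Requires the matching browsers installed; the URL and title check are illustrative.
from selenium import webdriver

def check_homepage(driver):
    driver.get("https://example.com")          # assumed application URL
    assert "Example" in driver.title           # assumed expected page title

for make_driver in (webdriver.Chrome, webdriver.Firefox):
    driver = make_driver()
    try:
        check_homepage(driver)
        print(f"{make_driver.__name__}: OK")
    finally:
        driver.quit()
```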

Q117. How can operating system compatibility be ensured for a desktop application?
A. Test on all supported OS versions
B. Test on a single OS
C. Ignore minor OS differences
D. Rely on virtual machines only

Q118. How can you test mobile compatibility for a web application?
A. Test using physical devices and emulators
B. Focus only on emulators
C. Ignore device-specific differences
D. Rely on a single test tool
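
For Q118, emulation can complement (but never replace) checks on physical devices. A hedged sketch: a quick mobile-viewport smoke check using Chrome's mobile emulation through Selenium, with the viewport metrics, user agent, and URL chosen purely for illustration.

```python
# Sketch: a quick mobile-viewport check using Chrome's mobile emulation via Selenium.
# Emulation complements, but does not replace, physical-device testing; the metrics,
# user agent, and URL below are illustrative assumptions.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_experimental_option("mobileEmulation", {
    "deviceMetrics": {"width": 390, "height": 844, "pixelRatio": 3.0},
    "userAgent": "Mozilla/5.0 (Linux; Android 13) AppleWebKit/537.36 Mobile Safari/537.36",
})

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")          # assumed application URL
    width = driver.execute_script("return window.innerWidth")
    assert width <= 500, "layout is not rendering in a mobile-sized viewport"
finally:
    driver.quit()
```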

Q119. During compatibility testing, a feature fails on one browser but works on others. What should be done?
A. Mark it as non-critical
B. Log the issue and identify browser-specific behavior
C. Skip the test
D. Focus on server compatibility

Q120. A mobile app crashes on specific devices during testing. What is the best approach to resolve this issue?
A. Ignore the issue
B. Log the crash details, including device specifications
C. Test only on other devices
D. Focus on OS testing
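
For Q120, capturing the crash together with its device context is what makes the defect actionable. A hedged sketch of a structured crash report follows; every field name and value is an illustrative placeholder.

```python
# Sketch: a structured crash report that records device specifications alongside the failure.
# All field names and values are illustrative placeholders.
import json
from datetime import datetime, timezone

crash_report = {
    "summary": "App crashes when opening the camera screen",
    "device": {
        "model": "Pixel 6a",
        "os": "Android 14",
        "app_version": "3.2.1 (build 456)",
        "locale": "en_IN",
        "free_memory_mb": 212,
    },
    "steps_to_reproduce": [
        "Launch the app",
        "Tap the camera icon on the home screen",
    ],
    "observed": "App closes immediately; no error dialog",
    "attachments": ["logcat.txt", "screen_recording.mp4"],
    "reported_at": datetime.now(timezone.utc).isoformat(),
}

# Persist the report so it can be attached to the defect in the tracking tool.
with open("crash_report.json", "w") as f:
    json.dump(crash_report, f, indent=2)
```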
