
Test scenarios for Pen

This is one of the common questions asked during software testing interviews. A candidate may wonder for a moment whether they are interviewing at a software testing company or at a pen manufacturing company! Interviewers ask this question to tickle your grey cells, to check your ability to think of many possible test scenarios, and at the same time to assess how realistic those scenarios are. The interviewer knows that if you can come up with a good set of test scenarios for testing a pen, you can come up with good test scenarios for any other application. Of course, the interviewer might not expect that you have read this article and come prepared for this surprise question.

Here is how I would start answering. First, we need to understand the requirements against which the pen was created. Our test scenarios should be based on factors such as the type of pen (ball point, ink or felt tip), functionality, compatibility, installation and uninstallation, performance and usability. Since I don't have requirements for the pen to be tested, I will presume the pen is used by ordinary people for their day-to-day writing needs, and not by somebody like an astronaut writing in zero gravity or a deep sea diver who wants to write underwater. Below are the test scenarios for the pen, categorized by type of test.

Test Scenarios for Functionality:

1) Verify the type of pen (ball point, ink or felt tip). Let's assume it is a ball point pen, as further scenarios will change based on the pen type.

2) Test that the pen colour is as per specification.

3) Test that the ink colour is as per specification.

4) Test that the ink impression/colour on paper remains consistent from the start until the ink in the refill runs out.

5) Test that the dimensions of the pen (size, length and shape) meet the requirements.

6) Test that the pen writes smoothly and continuously, without breaks or blots on the paper.

7) Test that ink on paper dries quickly as per requirement (assume less than 1 second as per requirement).

8) Test that the thickness of the line drawn by the pen tip is as per specification (0.4 mm etc.).

9) Keep the ball point of the pen in contact with paper for a considerable amount of time and check whether the ink blots.

10) Write on different types of paper (smooth, rough, cardboard etc.) and confirm the pen writes correctly on all of them.

11) Check that the pen cap has a clip/gripper, so that the user can fix the pen firmly to a shirt pocket.

12) Test whether the ink is waterproof. Write on paper, dip the paper in water and check whether the writing has faded or smudged.

13) Verify that the ink does not have a bad odour because of the chemicals used in it.

14) Check that the weight of the pen is as per requirements.

15) Test that the pen can be used for writing at various angles (straight, slanted etc.). Different writing angles should not affect the quality of the impression on the paper, nor should they damage the pen tip.

Performance test scenarios:

1) Test whether the pen works at extreme temperatures, for example below 0 degrees and at higher temperatures such as 60 degrees.

2) Test whether the pen works at higher and lower altitudes.

3) Test whether the pen works at lower and higher atmospheric pressures; ink may not flow evenly at either extreme.

4) Test whether the pen still functions when used for writing continuously for a long duration (say 10 hours).

5) Scribble on paper vigorously and continuously for several minutes to check that friction on the ball does not damage the pen tip.

6) Test how much can be written with one complete refill. Measure the distance it can write and compare it with the specification.

7) Test whether the ink dries up when the pen is kept open (without its cap) for a long duration.

8) Dip the pen in water for a considerable amount of time, take it out, let it dry in open air and write. Verify the pen still writes without any issues.

Load test scenarios:

1) Drop the pen from a reasonable height (say 5 to 6 feet) multiple times and at different angles, for example with the tip hitting the floor dead on and with the pen landing flat on the ground. Verify the pen is not broken and still writes without issues.

2) Press the pen hard onto the surface while writing; check that the pen still writes and that neither the tip nor the pen itself breaks.

3) Drop the pen on the ground and stamp on it to test whether it can bear a reasonable external weight (e.g. up to 100 kg). Verify the pen is not broken and still writes as expected.

Compatibility test scenarios:

1) Write on different types of materials such as leather, cloth, plastic sheet and rubber. Verify the ink dries quickly.

2) Test compatibility with refills of the same and other brands.

Installation and uninstallation test scenarios:

1) Disassemble and reassemble the pen (remove the cap, remove the refill etc., and put them back). Test that the pen writes as expected.

2) Change the refill and test that the pen writes as expected.

Usability test scenarios:

1) Test that the pen is usable across age groups (children to the elderly) and that different professionals, such as lawyers, doctors and business people, can write comfortably with it.

2) Check with different audiences whether the colour and dimensions of the pen are acceptable for their usage or profession.

3) Test whether ink leaks from the refill when the pen is kept upside down for a long duration.

4) Test whether the materials used in the pen are recyclable.

5) Check whether there are small parts in the pen that people could accidentally swallow; children in particular tend to put the rear end of a pen in their mouth while thinking, and some people have the habit of chewing on a pen.

6) Test that the external material of the pen is non-toxic.

Test strategy and Test approach


Test Strategy is a high-level document describing the testing portion of the SDLC (Software Development Life Cycle) for a release. It outlines the different types of testing that will be performed for the release, e.g. APT (Application Product Test), IPT (Integration Product Test), Performance Test, UAT and ORT. Test Strategy tries to answer questions like What (the testing scope of the release), Why (the objective of each testing type) and Who (which team is responsible), along with other high-level questions that arise about testing.

Test Approach is a detailed document explaining the specifics of one testing type, i.e. separate test approach documents are created for APT (Application Product Test), IPT (Integration Product Test), Performance Test and so on. Test Approach answers questions like What (the scope of that particular testing type) and How (the details of the testing) for that testing type. Some topics appear in both Test Strategy and Test Approach; most of the topics called out in the Test Strategy are explained in more detail in the Test Approach. For example:

a) Risks: the Test Strategy mentions risks that can impact the entire testing phase or any of the testing types planned for the release, i.e. Assembly Testing, APT, IPT, Performance Testing and UAT. A Test Approach details only the risks that could impact the testing type for which the document was created, e.g. the Application Product Test Approach will highlight in detail the risks that could impact APT only, and not Performance Testing, since a separate Test Approach document will be created for Performance Testing.

b) Timelines: the Test Strategy provides timelines for the different testing types, i.e. Assembly Testing, Application Product Test, Integration Product Test, Performance Test, User Acceptance Test and Operations Readiness Test. The Test Approach breaks these timelines down in much more detail, e.g. the Application Product Test Approach will not just mention the start and end dates of APT; it will also include the start and end dates of each pass/test cycle.

Software Bug Report Dos and Don'ts


A bug report (or defect report) is the only right way to record and communicate issues to developers and other stakeholders, and to track bugs until they get fixed. A tester who writes good bug reports is always appreciated by developers and by their lead or manager. A bug report highlights the strengths or weaknesses of the tester who logged it.

Bug report Dos:

1. Always include steps to reproduce: Include steps to reproduce in every bug report, no matter how trivial the issue is. Never assume that a developer will follow the exact steps that you did. If the developer does not follow the correct steps, they may reject your bug as not reproducible or reject it asking for more details. Instances like this reduce the confidence of developers and project managers in you and in your written communication skills.

2. Attach screenshots: Always attach screenshot(s). Screenshots not only provide a visualization of the bug but also serve as proof of its existence; developers cannot simply reject a bug as not reproducible after looking at the screenshot. Remember, a picture is worth a thousand words.

3. Describe the exact scenario: Before logging a bug, explore the various combinations under which the bug does and does not occur. Think about and try combinations of test data, test flow etc. to check whether the bug occurs for one particular scenario, for a few, or for any combination. If you don't explore, the developer will have to investigate during debugging and will eventually spend more time on it. If instead you spend a little more time to nail the bug down to a particular scenario, you are literally helping the developer debug faster, and developers really appreciate the effort of testers who provide the exact scenario that exposes the bug. For example, suppose you report a bug "Unable to open an invoice". If you explore a little more, you may find that only older invoices (last year and beyond) have the problem and that invoices created in the current version of the software open correctly. If you report this detail, the developer knows the exact scenario that will expose the bug; otherwise the developer may check with a new invoice and reject your bug as not reproducible, and you will have to add a comment asking the developer to try again with an older invoice.

4. Report important bugs first: Report high severity and high priority bugs early in the testing phase, and report lower priority and severity bugs later. By doing so you give developers more time to fix the important bugs and help them channel their effort accordingly.

5. Attach error logs when necessary: Most of the time an error log helps developers debug the issue much faster. If the tester has not attached the error log, the developer has to execute the steps to reproduce from the bug report and then look at the error log. If the error log is readily available, the developer can go straight to the code instead of spending time reproducing the error.

6. Provide details of the test data used: It is always better to provide the test data you used to find the bug. This helps the developer reproduce the bug and also to investigate whether the bug is valid or the result of incorrect test data.

7. Get your bugs reviewed: It is a good practice to get your bugs reviewed by a peer, or preferably by your Test Lead. In fact, the Test Lead should review each bug logged by his or her team members. If your Test Lead is not reviewing them, request a review and confirm that you have set the severity and priority of the bug correctly.

8. Associate failed test cases to bug(s): Every bug you log should be traceable to test case(s), and each failed test case should be traceable to an open bug (a bug in New, Open or Assigned state). Traceability helps in retesting while verifying fixed bugs.

9. Assign your new bugs: Newly logged bugs must be assigned to a person (developer, project manager or test lead) as per the process; whoever it is, get the bug assigned to the person responsible so that it is not forgotten. Most people only look at the bugs assigned to them, so bugs assigned to nobody can remain unattended for some time. If any of your bugs is unassigned, or assigned to a person who has left the project, it is your responsibility to get it assigned to the right person.

10. Spell-check your report: Do a spell check on your bug report. Most popular bug reporting tools provide a spell check; if yours does not, compose the report in a word processor and then copy and paste the content into the tool. Also avoid non-standard chat abbreviations like u, gt, thx etc. Many people tend to use them in bug reports and emails under the influence of chat, but they do not belong in a bug report.

11. Latest comments on top: Ask all stakeholders to add comments at the top of the comments section, i.e. in reverse chronological order with the latest comment on top. A few controversial bugs can attract more than 20 comments, and this keeps the most recent information visible.

12. Minimize attachment size: Be it a screenshot or an error file, try to restrict the size of the attachment to roughly 200 KB to 500 KB. For example, instead of attaching a bitmap file, save it in JPEG format to reduce the file size: bitmap files are usually a couple of MB, whereas the same image stored as JPEG occupies a few hundred KB. Likewise, when attaching spreadsheets or error files larger than 500 KB, zip the file(s) before attaching them to the bug report (a small sketch after this section shows one way to do this).

Bug report Don'ts:

1. Do not combine multiple bugs in a single report: Each bug in a combined report may have to be assigned to a different developer, and the severities and priorities of the bugs can differ. It is always better to create one bug report per bug.

2. Do not report duplicate bugs: Before reporting a bug, spend a few minutes checking whether the issue has already been reported. Have a word with your peers or your Test Lead, or search in the bug reporting tool.

3. Do not blindly accept rejected bugs: Always review rejected bugs. If your bug is rejected and you feel it is valid, discuss it with your Test Lead or Project Manager and then with the developer, with supporting evidence such as requirements or design documents that prove the validity of your bug.

4. Do not yield to developers: Do not yield to developers and deviate from the process. Some developers may ask you not to log a bug in the bug management software and to email them the details instead, so that they can get the fix into the next build.

5. Do not set incorrect severity or priority: Some testers who want their bugs fixed as early as possible tend to inflate severity or priority. Testers should always appraise their bugs with the correct severity and priority. If the project manager or developers notice that a tester is unnecessarily inflating severity or priority, they will complain to your supervisor or management and blame you for misleading developers on bug prioritization; the tester may then lose credit for all the good bugs he or she logged earlier.

6. Do not edit others' bug reports: During the testing phase, other stakeholders (developers, business analysts, technical architects etc.) can also report bugs, and their reports may not be as detailed as those written by testers. If you feel that steps to reproduce or test data information is missing from such a report, add the details in the comments; do not modify the original description section. If you do have to change the description for a good reason, take permission from the bug reporter before updating the report.
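As a practical note on Do #12 above, shrinking a screenshot and zipping a log before attaching them can be scripted. The following is a minimal sketch, assuming the Pillow library is installed; the file names are purely illustrative.

```python
# Sketch: shrinking bug-report attachments before upload.
# Assumes Pillow is installed (pip install Pillow); file names are illustrative only.
import zipfile
from PIL import Image

# Re-save a bitmap screenshot as JPEG -- typically a few hundred KB instead of several MB.
img = Image.open("screenshot.bmp")
img.convert("RGB").save("screenshot.jpg", "JPEG", quality=85)

# Zip a large error log so it stays well under the ~500 KB guideline.
with zipfile.ZipFile("error_log.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("error.log")
```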

Entry and Exit Criteria


Entry and exit criteria are the sets of conditions that must be met in order to commence and to close a particular project phase or stage. Each SDLC (Software Development Life Cycle) phase or stage will have one or more entry/exit criteria conditions defined, documented and signed off. If any condition specified in the entry or exit criteria cannot be met after it has been documented and signed off, approval should be taken from the stakeholders who signed off on the criteria or on the document containing them. For example, the Application Test Approach document contains the entry and exit criteria for the APT phase, so any change to those conditions requires written approval from the stakeholders who signed off on that document.

Every test stage, be it AT (Assembly Test), APT (Application Product Test), IPT (Integration Product Test), Performance Test or User Acceptance Test, will have its own set of entry and exit criteria. If a modification of the entry/exit criteria involves waiver of a deliverable (e.g. the Requirements Traceability Matrix), a waiver request should be sent to the SQA (Software Quality Assurance) team and a written approval obtained. Below is a sample set of entry and exit criteria for the Application Product Test stage.

Entry Criteria:

- Build notes are provided to the APT team.
- All defects logged during earlier phases (Requirements, Design or Development) that are planned to be fixed during the APT phase are logged in the test management software with a target resolution date.
- Business Analysts, Technical Architects, Developers, DBAs (Database Administrators), build deployment and support resources are identified and made available as required during APT (Application Product Test) testing.
- The RTM (Requirements Traceability Matrix) is signed off by the required stakeholders.
- The Test Closure Report for AT (Assembly Testing) is signed off by the required stakeholders.
- The following deliverables are completed and signed off before APT can start: Test Approach, test conditions and expected results, test scenarios, test scripts, and the common test data sheet.
- The APT environment is ready in terms of hardware, software and build, and is made available to the APT team.
- The build deployed in the APT environment has met the exit criteria defined for Assembly Testing.
- There are no pending Severity 1 defects logged during the Unit Testing or AT (Assembly Testing) phases.

Exit Criteria:

- All planned test scripts of Pass 3 are executed and 95% of the Pass 3 test cases have passed.
- Any APT test cases marked as NA (Not Applicable) have been reviewed and approved prior to exiting APT.
- There are no open/pending Severity 1 and Severity 2 defects; pending Severity 3 and Severity 4 defects can be deferred only if they have been reviewed and approved by UAT users, business users and other project stakeholders.
- All defects found in pre-APT phases are closed, or deferred with approval from all required stakeholders.
- Application Product Test resources, Business Analysts, Technical Architects, Developers, DBAs (Database Administrators), build deployment and support resources are identified and made available for the next phase of testing, IPT (Integration Product Test).
- The Test Closure Report for APT (Application Product Test) is signed off by the required stakeholders and handed off to the IPT (Integration Product Test) team lead.
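Two of the exit conditions above (the Pass 3 pass rate and the absence of open Severity 1/2 defects) are mechanical enough to evaluate in a script. The sketch below is hypothetical: the function name, threshold and data shapes are illustrative, and the real numbers would come from your test management tool.

```python
# Hypothetical sketch: evaluating two of the sample APT exit-criteria conditions.
# Test results and open defects would normally come from the test management tool.

def apt_exit_criteria_met(test_results, open_defects, pass_threshold=0.95):
    """test_results: list of 'Passed'/'Failed'/'NA' outcomes for the final pass (Pass 3).
    open_defects: list of dicts with a 'severity' key (1 = most severe)."""
    executed = [r for r in test_results if r != "NA"]
    pass_rate = sum(r == "Passed" for r in executed) / len(executed) if executed else 0.0

    # Exit condition 1: at least 95% of executed Pass 3 test cases have passed.
    pass_rate_ok = pass_rate >= pass_threshold
    # Exit condition 2: no open Severity 1 or Severity 2 defects remain.
    no_blocking_defects = all(d["severity"] > 2 for d in open_defects)
    return pass_rate_ok and no_blocking_defects

# Example: 96% pass rate but one open Severity 2 defect -> criteria not met.
print(apt_exit_criteria_met(["Passed"] * 96 + ["Failed"] * 4,
                            [{"severity": 2}]))   # prints False
```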

IP address
An IP (Internet Protocol) address is a numeric identifier and logical address assigned to a computer or to any other device on a network, such as a modem, gateway, printer or router, that uses the Internet Protocol for communication. The designers of TCP/IP defined an IP address as a 32-bit number, and this addressing system was named Internet Protocol version 4 (IPv4). Due to the rapid growth of the internet and the resulting depletion of IPv4 addresses, a new addressing system known as IPv6, which uses 128-bit addresses, was developed. Below are examples of IPv4 and IPv6 addresses:

IPv4 (four parts): 208.67.200.201
IPv6: 2002:db8:0:2345:0:678:1:1

Private IP addresses: these are IPv4 address blocks reserved (per RFC 1918) for use as private addresses on networks that are not directly connected to the Internet. IP addresses in the ranges below are assigned to computers or devices on a LAN (Local Area Network); these computers or devices are not accessible over the internet but only on the local network, i.e. a home or office network.

Start          End
10.0.0.0       10.255.255.255
172.16.0.0     172.31.255.255
192.168.0.0    192.168.255.255
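Python's standard ipaddress module already knows these reserved blocks, so a script can check whether an address falls in a private range without hard-coding the table above. A small illustration:

```python
# Checking addresses against the private (RFC 1918) ranges with the standard library.
import ipaddress

for addr in ["10.1.2.3", "172.31.255.1", "192.168.0.10", "208.67.200.201"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "-> private" if ip.is_private else "-> public")
```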


Loopback IP address: any address in the 127.0.0.0/8 block, most commonly 127.0.0.1, is known as the loopback address and refers to the local computer or device itself. If you ping 127.0.0.1 on your computer, you can verify that the TCP/IP networking software on that machine is functioning as expected.
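The same check can be scripted. The sketch below verifies loopback networking without relying on any external service, by listening and connecting on 127.0.0.1 within a single process using only the standard socket module.

```python
# Sketch: verifying the loopback interface works by sending data to ourselves.
# No external service is needed -- we listen and connect on 127.0.0.1 in one script.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.create_connection(("127.0.0.1", port))
conn, _ = server.accept()
client.sendall(b"ping")
print(conn.recv(4))                     # b'ping' -> loopback networking works

for s in (client, conn, server):
    s.close()
```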

Smoke and sanity testing


Below are the differences between Smoke and Sanity Testing.

Objective:
Smoke Testing - to verify that all the critical and major functionality of an application works as expected before going ahead with full-fledged testing, i.e. functional or regression testing.
Sanity Testing - to confirm that the new build, environment and external services are stable enough to carry out any testing at all, i.e. even before carrying out a smoke test.

Breadth:
Smoke Testing - smoke tests are broad and shallow; they are designed to catch any critical or high severity defect across all the important functionalities.
Sanity Testing - sanity tests are very narrow (usually a single flow or two at most); they are not designed to test all the important functionality of the application.

Defects targeted:
Smoke Testing - smoke tests are designed to catch show stoppers (that were not caught during the sanity test) or blocker defects, i.e. defects that indicate that a particular flow or functionality cannot be tested.
Sanity Testing - sanity tests are intended to verify that the application is available (up and running) and can interact successfully with the database, external services and external devices, if any. They are designed to catch show stopper defects such as being unable to log in to the application, or the application not functioning due to a JDBC connection failure.

Who performs it:
Smoke Testing - done by the testing team only, as the focus of this testing is on validating application functionality.
Sanity Testing - mostly done by the build deployment/operations team after every new build is deployed, or once the environment is brought up after scheduled application/environment maintenance, because the issues encountered immediately after a new build deployment are most often configuration, database access and other setup issues. In some bigger projects the testing team may be asked to perform sanity testing.

When it is done:
Smoke Testing - usually done after sanity testing is completed.
Sanity Testing - done immediately after a new build deployment, or after scheduled application/environment maintenance.

Documentation:
Smoke Testing - smoke test cases are mostly documented. The smoke test suite is built by picking functional test cases that validate all the critical and important functionalities of the application, e.g. a) submit orders and pay by different tender types (cash, credit, debit, gift card etc.), b) verify cancel order is working fine, c) verify return functionality is working fine, d) verify sales data in the Oracle daily sales report, and more test cases to cover all the important functionality of the application.
Sanity Testing - sanity test cases are usually not documented, i.e. there are no written test cases. Most companies follow a sanity checklist, e.g. a) verify login, b) submit an order and pay by cash, c) verify Oracle reports can be opened. As you can see, the intent of sanity testing is to find show stopper issues that make the system completely untestable; the example tests are very narrow and do not cover all the important functionality of the application, they only check whether the system is testable at all.
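Because sanity checks are narrow and run after every deployment, build/operations teams often script them. The sketch below is hypothetical: the health-check URL and the SQLite database path are placeholders standing in for whatever "application is up" and "database is reachable" mean in a real environment.

```python
# Hypothetical sanity check: is the application up and can it reach its database?
# The URL and database path are placeholders for illustration only.
import sqlite3
import urllib.request

def app_is_up(url="http://localhost:8080/health", timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def db_is_reachable(path="app.db"):
    try:
        with sqlite3.connect(path) as conn:
            conn.execute("SELECT 1")
        return True
    except sqlite3.Error:
        return False

if __name__ == "__main__":
    print("application reachable:", app_is_up())
    print("database reachable:  ", db_is_reachable())
```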

Duplicate Defects
Duplicate defects do not benefit anyone in any way; instead they cause discomfort to developers, the test lead and, finally, the person who logged them. So how do you avoid logging duplicate defects? Unfortunately there is no silver bullet, but you can try to minimize them.

a) Testers should take this seriously and make a conscious effort not to log duplicate defects.

b) If there are hundreds or thousands of defects in the defect tracking system, it will not be possible to go through all of them trying to figure out which one is a duplicate of the issue you are about to log. A better option is to check with the Defect Coordinator (if there is one on your project), your Test Lead or your test peers whether they are aware of the issue you are about to log. In my experience this approach has helped me avoid logging duplicate defects 7 times out of 10; it should work for you as well.

c) The above approach will not work when the team is spread across different geographic locations. In that case, the option is to use the search facility provided by the test management / defect tracking application. For example, QC (Quality Center) has a feature known as Match Defects that searches for similar defects. Each time a defect is logged, QC stores a list of keywords from the summary and description fields, and these are used for finding matching defects. Below are the steps to search for matching defects:

a) Navigate to the Defects tab.
b) Clear the filter (click the arrow next to the Set Filter button and choose Clear Filter from the dropdown list).
c) Click the New Defect button; the New Defect window will be displayed.
d) Enter two or three keywords related to the defect you want to find in the Summary field, e.g. if you want to find a defect related to login failure, enter the text "login fail".
e) Click the Find Similar Defects button.
f) QC will display a list of matching defects that contain "login" and "fail" in the Summary and/or Description fields of existing defects. Along with the matching defect ID and summary, a field named Similar is displayed showing the percentage of relevance with respect to the keywords you searched with. Read the summary and description of each defect with a high similarity percentage to figure out whether a duplicate already exists before logging your defect.

There is one more way to quickly search defects, using keywords and wildcard characters:

a) Navigate to the Defects tab.
b) Clear the filter (click the arrow next to the Set Filter button and choose Clear Filter from the dropdown list).
c) Set the filter condition to show defects related to your release / test phase etc., and set Status to Not Closed.
d) Enter keywords in combination with wildcards in the Summary field, e.g. to find a defect related to login failure enter *login*fail*. Any defect whose summary contains the text "login" followed by "fail" will be displayed in the search results.

e) Quickly go through the list of defects in the result to decide whether a defect for the issue being logged already exists.
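The Similar percentage that QC shows is essentially a text-similarity score over the summary and description fields. If your defect tracking tool has no such feature, a rough equivalent can be scripted against an export of open defect summaries. This sketch uses difflib from the Python standard library; the defect list and threshold are made up for illustration.

```python
# Rough duplicate-defect check: compare a new summary against existing ones.
# The defect summaries are made-up examples; difflib gives a 0-1 similarity ratio.
from difflib import SequenceMatcher

existing = [
    (101, "Login fails with valid credentials after password reset"),
    (102, "Daily sales report does not open in Oracle"),
    (103, "Cancel order button throws JDBC connection error"),
]

new_summary = "Unable to login with valid credentials following password reset"

for defect_id, summary in existing:
    score = SequenceMatcher(None, new_summary.lower(), summary.lower()).ratio()
    if score > 0.6:                      # threshold chosen just for illustration
        print(f"Possible duplicate of defect {defect_id}: {score:.0%} similar")
```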

Integration Testing
Integration Testing: tests the data flow from one module to another. Example, a Gmail account:

Compose: in the compose box, type valid details and click the Send button.
Sent Items: the composed mail should appear in the Sent Items folder.

A minimal code sketch of this kind of check appears after this section.

System Testing: once you have thoroughly completed functional and integration testing, you start system testing. System testing means end-to-end testing, carried out in a test environment that resembles production, i.e. you test the application against the customer's real business flow. Example, online shopping:

- Go to the product page and select a product.
- Add the quantity you prefer for the selected product.
- Select a payment option such as PayPal or credit card.
- Select the shipping method and enter the shipping details.
- Pay for the order.
- Confirm the order by clicking the confirmation button.

This is the client's business flow; you need to test the application according to the business flow of your project. At this stage you do not need to concentrate on functional and integration testing, because you enter system testing only after those are complete.
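To make the "data flow between modules" idea concrete, here is a minimal, hypothetical sketch of the Gmail-style check described above. The MailBox class and its methods are invented stand-ins for the modules under test, not a real mail API.

```python
# Hypothetical integration test: data composed in one module must appear in another.
# MailBox and its methods are invented stand-ins for the modules under test.

class MailBox:
    def __init__(self):
        self.sent_items = []

    def compose_and_send(self, to, subject, body):
        # In a real system this would call the compose module, which hands the
        # message to the sending module, which writes it to the Sent Items store.
        message = {"to": to, "subject": subject, "body": body}
        self.sent_items.append(message)
        return message

def test_sent_mail_appears_in_sent_items():
    box = MailBox()
    box.compose_and_send("alice@example.com", "Hello", "Integration test body")
    # The integration point: the output of 'compose' is the input of 'sent items'.
    assert any(m["subject"] == "Hello" for m in box.sent_items)

test_sent_mail_appears_in_sent_items()
print("integration check passed")
```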

CMM levels

Level One: Initial

Company has no standard process for software development. Nor does it have a project-tracking system that enables developers to predict costs or finish dates with any accuracy.
Level Two: Repeatable

Company has installed basic software management processes and controls. But there is no consistency or coordination among different groups.
Level Three: Defined

Company has pulled together a standard set of processes and controls for the entire organization so that developers can move between projects more easily and customers can begin to get consistency from different groups.
Level Four: Managed

In addition to implementing standard processes, company has installed systems to measure the quality of those processes across all projects.

Level Five: Optimizing

Company has accomplished all of the above and can now begin to see patterns in performance over time, so it can tweak its processes in order to improve productivity and reduce defects in software development across the entire organization.
