
Process that needs to be in place for Foundations pack projects.

Background:
We had been tracking the Foundations pack projects through a variety of means, including Excel spreadsheets as well as some newer Thor reports. As the applications passed the In QA milestone for Denali Phase 2, we generated QANs to track these issues.

What went wrong?


While generating these QANs for the Foundations pack projects, so that the applications could resolve the outstanding issues, several things went wrong.

First, the initial auto-generation run produced bogus QANs. This was an avoidable problem: we had overwritten the existing ZQN import spreadsheet with the new, modified data, and failed to notice leftover rows from previous tests at the bottom of the spreadsheet, which were bogus and corrupt. When we ran the auto-generation, roughly 30% of the approximately 2,200 QANs generated were bogus, and we had no programmatic way to separate the bogus QANs from the valid ones. We therefore had to ask Brent Warner from EMC2 to void all the generated QANs for us.

Second, the checks for Foundations pack issues were never formally developed. We maintain a copy of the standard Foundations Code Search Utility, which we often modify to add checks for ad hoc internal searches as well as for the Foundations pack projects. These checks were PQAed by Dan Hardgrove, but there was never a DLG associated with this development, so none of the issues we faced or the decisions we made while developing the checks were documented anywhere. As a consequence of this informal development, there was a communication gap between the people involved in the project. Because the checks were not developed under a standard DLG, nobody other than the developer knew which checks belonged to the Foundations pack projects, or how to run them to produce output in the format needed for the Thor reports and for the spreadsheet used to build the ZQN import file.
As a result, when Dan Hardgrove tried to use the utility in the developer's absence, he ran some wrong checks that were never part of the Foundations pack projects. For some time, the checks that were meant to appear in the Thor reports did not show up, and the wrong checks appeared instead.

Third, as already mentioned, we maintain a copy of the standard Foundations Code Search Utility for internal searches and for the Foundations pack projects. As we work on the copy, we make modifications, improvements, and bug fixes in it, so the original utility drifts out of sync and lacks those changes. Likewise, when we change the standard utility, we rarely copy the modifications over to the copy. The two versions are therefore out of sync with each other, which became a real problem a few weeks ago: Dan Hardgrove tried to sync the changes from the standard utility into the copy, and in the process the changes already in place in the copy were lost. The utility was broken, in the sense that it was not searching records, and for a few weeks we were not reporting any records for the Foundations pack projects.

Fourth, after we generated the QANs, we found that about 10% of them were false positives. While searching for occurrences of a specific issue in our code, we had deliberately considered several theoretical cases and cast a wide net so as not to miss anything. For example, when searching for direct references to data globals, we reported on strings like EN_US or EN-US on the theory that EN_ and EN- could be global references, even though these are clearly not global references. That looseness was acceptable for Thor reports, because reports can be regenerated without any hassle; but QANs, once generated, cannot be regenerated without voiding the existing ones, or we end up with duplicate QANs. We used the same code to check for issues for both Thor reports and QAN generation, when we could have used a stricter strategy for QAN generation.
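The EN_US false-positive pattern above can be tightened mechanically. As a sketch only (the utility's real search logic is not shown in this document, and the patterns below are assumptions, illustrated in Python rather than the utility's own language), a check for direct global references could require the leading caret that a real global reference carries, instead of matching a bare EN_ or EN- substring:

```python
import re

# Naive substring search (the approach that caused false positives):
# it also matches locale strings such as "EN_US" or "EN-US".
NAIVE = re.compile(r"EN[_-]")

# Tighter sketch (assumption: real data-global references are written
# with a leading caret and a subscript list, e.g. ^EN("REC",...)):
# require the caret and an opening paren so plain text never matches.
GLOBAL_REF = re.compile(r"\^EN[A-Z0-9]*\(")

def flags_naive(line: str) -> bool:
    return bool(NAIVE.search(line))

def flags_strict(line: str) -> bool:
    return bool(GLOBAL_REF.search(line))

print(flags_naive('SET LANG="EN_US"'))           # naive check: false positive
print(flags_strict('SET LANG="EN_US"'))          # strict check: no match
print(flags_strict('SET X=$GET(^EN("REC",1))'))  # real global reference: match
```

A looser pattern could still be kept for the Thor reports, with the strict one reserved for QAN generation, since reports tolerate false positives and QANs do not.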
Fifth, there was no communication between the developer and the owners of the Foundations pack projects. As a result, we were flagging several patterns as issues that were in fact legitimate usage requiring no code changes, which produced another batch of false positives. For example, the check for $$zterm should never have been part of the search, because deciding whether a use of $$zterm is legitimate requires context, and since the Foundations Code Search Utility cannot build up the context of a usage, we cannot determine programmatically whether a particular use of $$zterm should be reported.

Sixth, several fields in the QANs were not populated. After we voided all the bogus QANs, we decided to do a PQA pass on the code that generates the results, as well as on the ZQN import spreadsheet, to make sure we did not miss anything. But because very little time was spent reviewing the spreadsheet, I, Dan Hardgrove, and Rob Owen failed to notice that a few required fields were missing. We realized this only after the QANs had been generated.
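One way to handle context-dependent checks like the $$zterm search is to split the checks into two tiers, so that checks needing human judgement feed only the regenerable Thor reports and never the QAN import. This is a suggestion sketched in Python, not something the current utility does; the check names and the tier assignments below are illustrative assumptions:

```python
# Sketch: tag each check with the output it is safe for.
# Thor reports tolerate false positives (they can be regenerated);
# QANs do not (voiding them requires outside help).
CHECKS = {
    "direct-global-reference": {"tier": "qan"},    # mechanical, low false-positive rate
    "zterm-usage":             {"tier": "report"}, # legitimacy depends on context
}

def checks_for(output: str):
    """Return the check names allowed for a given output type."""
    if output == "qan":
        # Only context-free, mechanical checks may generate QANs.
        return [name for name, c in CHECKS.items() if c["tier"] == "qan"]
    # Reports are cheap to regenerate, so run every check.
    return list(CHECKS)

print(checks_for("qan"))     # only the mechanical check
print(checks_for("report"))  # everything, including context-dependent checks
```

The same tagging could drive the Thor report generation and the ZQN import file from one code base while still using different strategies for each, which is the gap identified above.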

Finally, we did no ahead-of-time preparation for the auto-generation of these QANs. Because we never had a checklist of to-do items, we missed the most important part of the overall process: documentation. Once the QANs were assigned to the applications, they knew nothing about the utility's false positives, and some people did not know what they needed to do with these QANs.
We can now break the issues described above down into a summary.

- We had no process in place that could have prevented the mistakes above.
- Our development was informal, with no associated DLG.
- We worked on a copy of the standard utility, leaving the standard and the copy out of sync.
- We used the same strategy for generating Thor reports as for QANs.
- Very little time was spent reviewing the ZQN import spreadsheet.
- There was a lack of communication between the developer and the project owners.
- There was no ahead-of-time preparation.
- There was no documentation to help people resolve the issues.

What is required to be done?


We need a process in place so that the mistakes we made in the past can be avoided in future QAN generation for Foundations pack projects. To that end, we need to keep in mind our previous mistakes and the feedback we got from the applications, and improve the areas we ourselves identified as weak.

As mentioned in the What went wrong? section above, we did no development under a standard DLG for the Foundations pack projects, and I think this led to most of the problems. Had we done formal development, we would have avoided the issues that should have been caught during PQA and by the QAers. The DLG would also have been a standard place to document the expected behavior, so anyone reading the DLG could get a fair idea of what the utility checks look for in the code. However, some informal development may remain necessary at times, partly because the development relates only to Foundations pack issues and requires an output file in a format we can use for importing QANs, and partly because we do not want to release this code to customers. We can therefore adopt the following strategy:

1) Identify the checks that need to be developed for the Foundations pack projects at the start of the release.

2) The developer must communicate with the project owners before starting development, to get a fair idea of what the checks must flag.

3) The development of these checks must be done under a standard DLG.

4) For the extra pieces, such as the code that produces the ZQN-import-compatible output file and that gathers object information from TRACK, we must maintain a spreadsheet in a shared folder that acts as a substitute for the DLG PQA and notes sections. There the developer can document the changes made to the copy of the utility, and PQA comments can be addressed in a somewhat formal way. Every revision must require a sign-off from the PQAers. This way we can track the changes made to the copy of the utility in the future, apply them to the standard utility, and keep the two in sync.

5) The process of creating the results file must be automated; that is, we must have a batch run of the utility that performs its searches at scheduled times. This requires updating the batch job with the new checks every release.

6) After development is done, a demo must be given to the R&D TL, the QA TL, and the owners of the associated projects. This ensures that what we are searching for is correct and meets everyone's expectations.

7) A meeting must be held with DIV OPS to ensure that nothing has changed in the process of creating Thor reports.

8) Once we are ready to run the reports, this must be conveyed to the R&D TL, so they know we have started creating the reports and can make the applications people aware of them.

9) A wiki must be created for the new Foundations pack projects, keeping in mind that we will be creating QANs later in the release. It must contain all the information the people looking into these QANs will need, including FAQs and to-do steps.

10) The developer must ensure that all of the above steps have been completed before QANs are generated for the applications.

11) The ZQN import file must be reviewed by at least two people on the team as another PQA round. This ensures that the import file does not contain any bogus or corrupted data, so we avoid creating bogus QANs, and that all the necessary fields have been filled in correctly.

12) Finally, we can create the QANs for the teams and prepare ourselves for the feedback.
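The manual review step can be backed by an automated sanity pass over the import file before anyone runs the import. The sketch below assumes the file can be exported as CSV; the column names are illustrative assumptions, not the real ZQN schema. It catches the two failure modes described earlier: leftover blank rows at the bottom of the sheet, and required fields left empty.

```python
import csv

# Illustrative required columns -- not the actual ZQN import schema.
REQUIRED = ["title", "project", "owner", "severity"]

def validate(path: str):
    """Return a list of problems; an empty list means safe to import."""
    problems = []
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for i, row in enumerate(rows, start=2):  # row 1 is the header
        # Catch leftover/blank rows at the bottom of the sheet --
        # the failure mode that produced the bogus QANs.
        if not any((v or "").strip() for v in row.values()):
            problems.append(f"row {i}: empty leftover row")
            continue
        # Catch the unpopulated-required-field failure mode.
        for field in REQUIRED:
            if not (row.get(field) or "").strip():
                problems.append(f"row {i}: missing required field '{field}'")
    return problems
```

Run against the exported file, a non-empty result blocks the import; this complements, rather than replaces, the two-person PQA review in step 11.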
