Software Quality Assurance Part IV

From Low-Level to High-Level (Testing in Stages)
Except for small programs, systems should not be tested as a single unit. Large systems are built out of sub-systems, which are built out of modules that are composed of procedures and functions. The testing process should therefore proceed in stages where testing is carried out incrementally in conjunction with system implementation.

The most widely used testing process consists of five stages, grouped here with their orientation:

Component testing (process oriented; uses white-box testing techniques, i.e. tests derived from knowledge of the program's structure and implementation):
    Unit Testing
    Module Testing

Integrated testing:
    Sub-system Testing
    System Testing

User testing (product oriented; uses black-box testing techniques, i.e. tests derived from the program specification):
    Acceptance Testing

However, defects discovered at any one stage require program modifications to correct them, and this may require other stages in the testing process to be repeated.
Errors in program components, for example, may only come to light at a later stage of the testing process. The process is therefore an iterative one, with information being fed back from later stages to earlier parts of the process.

How do you test for, and obtain, the difference between two images displayed in the same window?

How are you doing your comparison? If you are doing it manually, then you should be able to see any major differences. If you are using an automated tool, then there is usually a comparison facility in the tool to do that.

JasPer is an open-source image-processing library, written in C, which includes an imgcmp utility that compares JPEG files in very good detail as long as they have the same dimensions and number of components.

Rational has a comparison tool that may be used, and I'm sure Mercury offers a similar tool.

The key question is whether we need a bit-for-bit exact comparison, which the current tools are good at, or an equivalency comparison: which differences between the images should not count as differences? Near-match comparison has been the subject of a lot of research in printer testing, including an M.Sc. thesis at Florida Tech. It's a tough problem.
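The distinction between exact and equivalency comparison can be sketched as follows. This is a minimal illustration, not any particular tool's algorithm; the function names, the pixel-list representation, and the tolerance values are all invented for the example.

```python
# Sketch of exact vs. near-match ("equivalency") image comparison.
# Images are represented as equal-length lists of (r, g, b) pixel tuples.

def exact_match(img_a, img_b):
    """Bit-for-bit comparison: any differing pixel fails."""
    return img_a == img_b

def near_match(img_a, img_b, per_channel_tolerance=3, max_bad_pixels=0):
    """Equivalency comparison: pixels may differ by a small per-channel
    amount, and up to max_bad_pixels pixels may exceed that tolerance."""
    if len(img_a) != len(img_b):
        return False  # like imgcmp, require the same dimensions
    bad = 0
    for (ra, ga, ba), (rb, gb, bb) in zip(img_a, img_b):
        if max(abs(ra - rb), abs(ga - gb), abs(ba - bb)) > per_channel_tolerance:
            bad += 1
    return bad <= max_bad_pixels

a = [(10, 10, 10), (200, 200, 200)]
b = [(12, 9, 10), (200, 200, 200)]   # first pixel differs slightly
print(exact_match(a, b))   # False: not bit-for-bit identical
print(near_match(a, b))    # True: within tolerance, so "equivalent"
```

The interesting testing decisions are all in the parameters: how large a per-channel difference, and how many off-tolerance pixels, still count as "the same image".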

Testing Strategies

A testing strategy is a general approach to the testing process rather than a method of devising particular system or component tests.
Different strategies may be adopted depending on the type of system to be tested and the development process used. The main testing strategies are:

Top-Down Testing
Bottom-Up Testing
Thread Testing
Stress Testing
Back-to-Back Testing

1. Top-down testing
Where testing starts with the most abstract component and works downwards.

2. Bottom-up testing
Where testing starts with the fundamental components and works upwards.

3. Thread testing
Which is used for systems with multiple processes where the processing of a transaction threads its way through these processes.

4. Stress testing
Which relies on stressing the system by going beyond its specified limits, and hence tests how well the system can cope with overload situations.

5. Back-to-back testing
Which is used when more than one version of a system is available. The versions are run with the same test inputs and their outputs are compared.

6. Performance testing.
This is used to test the run-time performance of software.

7. Security testing.
This attempts to verify that the protection mechanisms built into a system will protect it from improper penetration.

8. Recovery testing.
This forces software to fail in a variety of ways and verifies that recovery is properly performed.
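Back-to-back testing, in particular, is easy to automate once two versions exist. The sketch below is illustrative only: the "old" and "new" sort routines stand in for two versions of the same component, and the random inputs stand in for a real test suite.

```python
# Back-to-back testing sketch: run two versions of a component on the
# same inputs and flag every input on which their outputs disagree.
import random

def version_old(xs):
    """Trusted reference version of the component."""
    return sorted(xs)

def version_new(xs):
    """New implementation under test."""
    ys = list(xs)
    ys.sort()
    return ys

def back_to_back(inputs):
    mismatches = []
    for xs in inputs:
        if version_old(xs) != version_new(xs):
            mismatches.append(xs)
    return mismatches

random.seed(0)  # reproducible inputs
test_inputs = [[random.randint(0, 99) for _ in range(10)] for _ in range(100)]
print(back_to_back(test_inputs))   # [] means the two versions agree
```

Any non-empty result pinpoints concrete inputs on which the versions disagree, which is exactly the evidence back-to-back testing is meant to produce.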

Large systems are usually tested using a mixture of these strategies rather than any single approach. Different strategies may be needed for different parts of the system and at different stages in the testing process.

Whatever testing strategy is adopted, it is always sensible to adopt an incremental approach to sub-system and system testing. Rather than integrate all components into a system and then start testing, the system should be tested incrementally. Each increment should be tested before the next increment is added to the system. This process should continue until all modules have been incorporated into the system.

When a module is introduced at some stage in this process, tests which were previously unsuccessful may now detect defects. These defects are probably due to interactions with the new module. The source of the problem is localized to some extent, thus simplifying defect location and repair.

Common approaches to locating such defects are brute force, backtracking, and cause elimination.

Unit Testing


Focuses on each module and whether it works properly. Makes heavy use of white-box testing.

Integration Testing


Centered on making sure that each module works with the other modules.
It comprises two kinds:
Top-down and
Bottom-up integration.
Alternatively, it can be described as focusing on the design and construction of the software architecture.
Makes heavy use of black-box testing. (Either answer is acceptable.)

Validation Testing


Ensuring conformity with requirements

Systems Testing (Systems Engineering)

Making sure that the software product works with the external environment, e.g., the computer system and other software products.

Driver and Stubs

Driver: a dummy main program that calls the unit under test.
Stub: a dummy sub-program that stands in for a unit called by the code under test.
These are needed because the modules are not yet stand-alone programs; drivers and/or stubs have to be developed to test each unit.
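The idea can be sketched in a few lines of Python. Everything here is hypothetical: the discount() unit, the pricing-service dependency it calls, and the canned rates in the stub are invented for the illustration.

```python
# Sketch: unit-testing a module with a driver and a stub.
# discount() is the unit under test; it depends on a rate-lookup
# service that is not yet built, so a stub is supplied in its place.

def discount(price, customer_type, lookup_rate):
    """Unit under test: apply the discount rate obtained from lookup_rate."""
    rate = lookup_rate(customer_type)      # dependency (stubbed below)
    return round(price * (1 - rate), 2)

def rate_stub(customer_type):
    """Stub: dummy sub-program returning canned rates instead of
    querying the real, unfinished pricing service."""
    return {"regular": 0.0, "gold": 0.10}.get(customer_type, 0.0)

def driver():
    """Driver: dummy main program that exercises the unit under test."""
    assert discount(100.0, "regular", rate_stub) == 100.0
    assert discount(100.0, "gold", rate_stub) == 90.0
    print("unit tests passed")

if __name__ == "__main__":
    driver()
```

The driver plays the role of the missing caller, and the stub plays the role of the missing callee, so the unit can be tested before either really exists.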

When do we prepare a Test Plan?

[Should a test plan always be prepared for every new version or release of the product?]

For four or five features at once, a single plan is fine. Write new test cases rather than new test plans. Test plans are written for two very different purposes: sometimes the test plan is a product; sometimes it's a tool.

What is boundary value analysis?

Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include the maximum, the minimum, values just inside and just outside the boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.

Boundary value analysis is a method of testing that complements equivalence partitioning. In this case, data input as well as data output are tested. The rationale behind BVA is that errors typically occur at the boundaries of the data. The boundaries refer to the upper limit and the lower limit of a range of values, more commonly known as the "edges" of the range.
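As a concrete illustration, here is boundary value analysis applied to a hypothetical validator; the accepts_age() function and its 18 to 65 range are invented for the example.

```python
# Boundary value analysis sketch for a hypothetical validator that
# accepts ages in the inclusive range 18..65.

def accepts_age(age):
    """Unit under test: valid iff 18 <= age <= 65."""
    return 18 <= age <= 65

# Boundary values: the minimum and maximum, values just inside and
# just outside each boundary, a typical value, and an error value.
cases = {
    17: False,   # just below the lower boundary
    18: True,    # lower boundary
    19: True,    # just inside the lower boundary
    40: True,    # typical value
    64: True,    # just inside the upper boundary
    65: True,    # upper boundary
    66: False,   # just above the upper boundary
    -1: False,   # error value
}

for age, expected in cases.items():
    assert accepts_age(age) == expected, age
print("all boundary cases passed")
```

Note that the off-by-one cases (17, 19, 64, 66) are the ones most likely to catch a mistakenly written `<` instead of `<=` in the implementation.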

Describe methods to determine if you are testing an application too much?

While testing, you always need to keep two things in mind:
— Percentage of requirements coverage
— Number of bugs present, and the rate at which the bug count falls

Over-testing can then show up in several ways:
— First, the requirements may be covered quite adequately, but the number of bugs does not fall. This indicates over-testing.
— Second, parts of the application that are not affected by a change or bug fix may also be being tested. This is again a case of over-testing.
— Third, as suggested above with slight modification: the bug count has dropped off sufficiently, but testing is still being carried out at the same level as before.

Methods to determine if an application is being over-tested are:
1. Compare the rate of drop in the number of bugs with the effort invested in testing (with all requirements met). That is, if the bug rate is falling (as it generally does in all applications) but the effort invested in man-hours does not fall, this implies over-testing.
2. Compare achievement of the bug-rate threshold with the effort invested in testing (with all requirements met). That is, if the bug rate has already reached the value agreed upon with the business, and testing effort is still being invested with little or no reduction, this again implies over-testing.
3. Verify that the impact analysis for change requests has been done properly and is being implemented correctly. That is, check that only the components of the AUT impacted by the new change are being tested, and that no unaffected component is being tested unnecessarily. If unaffected components are being tested, this implies over-testing.

If the bug find rate has dropped off considerably, the test group should shift its testing strategy. One of the key problems with heavy reliance on regression testing is that the bug find rate drops off even though there are plenty of bugs not yet found. To find new bugs, you have to run new tests.
Every test technique is stronger for some types of bugs and weaker for others. Many test groups use only a few techniques. In our consulting, James Bach and I repeatedly worked with companies that relied on only one or two main techniques.
When one technique, any one test technique, yields few bugs, shifting to new technique(s) is likely to expose new problems.
At some point, you can use a measure that is only partially statistical — if your bug find rate is low AND you can’t think of any new testing approaches that look promising, THEN you are at the limit of your effectiveness and you should ship the product. That still doesn’t mean that the application is overtested. It just means that YOU’RE not going to find many new bugs.

The best way is to monitor the test defects over a period of time.
Refer to William Perry's book, where he describes the concepts of 'under-test' and 'over-test'; in fact, the data can be plotted to see the criteria.
One criterion is to monitor the defect rate and see if it is almost zero; a second method is to use test coverage, stopping when it reaches 100% (or 100% requirement coverage).

Procedural Software Testing Issues

Software testing in the traditional sense can miss a large number of errors if used alone. That is why processes like software inspections and Software Quality Assurance (SQA) have been developed.

However, even testing all by itself is very time-consuming and very costly, and it ties up resources that could be used elsewhere. When combined with inspections and/or SQA, or when formalized, it also becomes a project of its own, requiring analysis, design, implementation and a supportive communications infrastructure. Interpersonal problems arise with it and need managing.

On the other hand, when testing is conducted by the developers, it will most likely be very subjective. Another problem is that developers are trained to avoid errors. As a result, they may conduct tests that prove the product works as intended (i.e. proving there are no errors) instead of creating test cases that tend to uncover as many errors as possible.