Wednesday, June 10, 2009

(8) TYPES OF TESTING

Black box testing:
Black box testing involves testing the game in isolation from its implementation. Rather than inspecting the game's code, sample data is passed into the program (or parts thereof) to check that the expected outputs are produced. Several specific testing techniques fall under this category, including equivalence partitioning, boundary value analysis and smoke testing.
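As a minimal sketch of boundary value analysis, consider a hypothetical game function (invented here for illustration) that clamps a character's health to the range 0–100. The tests below treat it as a black box: they check only inputs against expected outputs, probing the edges of the valid range where bugs most often hide.

```python
# Hypothetical function under test: applies damage and clamps health to [0, 100].
def apply_damage(health, damage):
    return max(0, min(100, health - damage))

# Black box tests: no reference to internals, only input/output pairs.
# Boundary value analysis deliberately targets the edges of the range.
assert apply_damage(100, 0) == 100   # upper boundary left untouched
assert apply_damage(1, 1) == 0       # input that lands exactly on the lower boundary
assert apply_damage(0, 5) == 0       # health must not drop below zero
assert apply_damage(50, 20) == 30    # a typical mid-range (equivalence class) value
```

Equivalence partitioning works the same way: the mid-range case above stands in for the whole class of "ordinary" inputs, so only one representative needs testing.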

Playtesting:
Playtesting is the process by which a game is played thoroughly before release in order to find bugs. This process generally benefits from as much gameplay as possible during the testing period, and so is often performed by large teams of beta testers or through public beta programs.

White box testing:
White box testing, like black box testing, is used to check that individual parts of a program produce the expected outputs for given input data. However, it differs in that the tester is aware of the code used in the software, and so can make judgements about the specific causes of bugs without the aid of another developer. This type of testing is done extensively by programmers during game development.
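Because the white box tester can read the source, test cases can be chosen to exercise specific branches of the code rather than guessed from the outside. A minimal sketch, using a hypothetical inventory function invented for illustration:

```python
# Hypothetical function with two branches; a white box tester reads the
# source and writes one test case per execution path.
def pickup_item(inventory, item, capacity=10):
    if len(inventory) >= capacity:   # branch 1: inventory is full, pickup refused
        return False
    inventory.append(item)           # branch 2: item is added
    return True

# One case per branch, chosen by inspecting the code above.
full_bag = ["potion"] * 10
assert pickup_item(full_bag, "sword") is False   # exercises branch 1

empty_bag = []
assert pickup_item(empty_bag, "sword") is True   # exercises branch 2
assert empty_bag == ["sword"]
```

A black box tester might never think to try a full inventory; a white box tester writes that case because the `capacity` check is visible in the code.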


Regression testing:
As game development progresses, it is necessary to go back and re-check features, art and text (content) that were previously error-free to ensure they still are. Though tedious, this type of testing is extremely important, since late and supposedly benign changes to code and other content can produce disastrous side effects that may make some features inoperable or can destroy the game as a whole.

Regression testing is any type of software testing which seeks to uncover regression bugs. Regression bugs occur whenever software functionality that previously worked as desired stops working or no longer works in the same way that was previously planned. Typically regression bugs occur as an unintended consequence of program changes.
Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged.
Experience has shown that as software is developed, this kind of re-emergence of faults is quite common. Sometimes it occurs because a fix gets lost through poor revision control practices (or simple human error in revision control), but just as often a fix for a problem will be "fragile": if some other change is made to the program, the fix no longer works. Finally, it has often been the case that when a feature is redesigned, the same mistakes are made in the redesign that were made in the original implementation of the feature.
Therefore, in most software development situations it is considered good practice that when a bug is located and fixed, a test that exposes the bug is recorded and regularly re-run after subsequent changes to the program. Although this may be done manually, it is often done using automated testing tools. Such a "test suite" contains software tools that allow the testing environment to execute all the regression test cases automatically; some projects even set up automated systems to re-run all regression tests at specified intervals and report any regressions. Common strategies are to run such a system after every successful compile (for small projects), every night, or once a week.
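A minimal sketch of this practice, using Python's standard `unittest` module and a hypothetical scoring bug invented for illustration: when the rollover bug is fixed, a test exposing it is recorded alongside the fix, and the whole suite is re-run after every subsequent change.

```python
import unittest

# Hypothetical bug, now fixed: the score used to roll over past 999999
# back to zero. The fix caps the score at the maximum instead.
def add_score(score, points):
    return min(score + points, 999999)

class RegressionTests(unittest.TestCase):
    # Recorded when the rollover bug was fixed; re-run after every change.
    def test_score_does_not_roll_over(self):
        self.assertEqual(add_score(999990, 50), 999999)

    # Guards against the fix breaking ordinary scoring.
    def test_normal_scoring_still_works(self):
        self.assertEqual(add_score(100, 50), 150)

# Run the suite programmatically, e.g. from a nightly build script.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests)
unittest.TextTestRunner().run(suite)
```

In practice the nightly or per-build run described above would invoke the project's whole suite through a test runner rather than one file at a time, but the principle is the same: every fixed bug leaves behind a test that fails loudly if the bug returns.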
Regression testing is an integral part of the extreme programming software development methodology. In this methodology, design documents are replaced by extensive, repeatable, and automated testing of the entire software package at every stage in the software development cycle.
