ESD's Software Quality Assurance Management System

Quality Assurance Procedures



The path that the source code follows to reach the Quality Assurance Team, together with the testing procedures, is presented in the figure below.

Note: The red lines and notes indicate the steps that have to be taken after the source has been tested.



[Figure: source code flows from Developers 1-4 to the QA Team, through a test decision (Yes/No), to source code ready for release.]

Communication Flow and Releases

Whenever the QA team member gives his/her approval for sending the source code to the Integration Test Team, a release is made. Each release should contain only items that have previously been installed and tested in the testing environment.

The QA person must also consider whether he/she will accept new patches in the middle of a test cycle. The key issues here are regression testing and the implications of the patch for the validity of the previous test cycles' results. Will the QA person have to do a complete reset and start the test cycle over? These unexpected arrivals can cause real problems, especially toward the end of the system test phase, if the QA receives new releases every few days or even every few hours.

The test team member is the only person who may authorize the Configuration Manager to make the release for the Integration Test Team.

Releases are made only after the test team member has performed a thorough GAP analysis, checking that the application as built respects the client's specifications and that there are no missing features in the module proposed for release. The test team must also make sure that the released code contains no showstoppers or major logic flaws.

The test team member should receive notifications from the Project Manager for every new client-specification document uploaded into CTPS. After reviewing those documents, the test team member should estimate the time needed to test the module properly and communicate this estimate to the Project Manager.

When the Project Manager decides the deadlines for releases, the QA should receive a complete and precise notification so that the QA can coordinate properly with the developers and present a good release at the established time.

When deciding the time required for testing, the QA team must keep an important fact in mind: even the best test plan will not survive its first encounter with reality. Good tests will find bugs, which will delay releases. Some bugs hold up releases to the point that entire revision levels will be skipped for certain cycles. Logistical problems - lack of proper build platforms, compiler glitches, and so forth - will also delay releases. Every now and then a release will come in early, and some releases might be late because of the developers.

Each release has to be accompanied by a QA Statement providing information about the new features the release brings, the things still to be done, the bugs fixed with that release, the bugs still to be fixed, and the tasks completed since the last release. The QA team must make sure that the Integration Test Team has all the information necessary to continue work on the Integration Test environment.
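
As a loose illustration, the required contents of a QA Statement can be captured as a simple structure; the field names below are assumptions made for this sketch, not a mandated format.

```python
# Sketch of the QA Statement described above; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class QAStatement:
    release: str
    new_features: list[str] = field(default_factory=list)
    open_items: list[str] = field(default_factory=list)    # things still to do
    bugs_fixed: list[str] = field(default_factory=list)
    bugs_open: list[str] = field(default_factory=list)     # to be fixed later
    tasks_completed_since_last_release: list[str] = field(default_factory=list)

statement = QAStatement(release="1.4.0", new_features=["bulk upload"])
```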

The information about the release should flow from the QA Team to the Integration Test Team/Project Manager, and back from the Integration Test Team/Project Manager to the QA Team. If this flow is not followed, the chain will miss a link and the project will fail to proceed correctly. For more details, see the figure below:

[Figure: the release information flow between the QA Team, the Integration Test Team/Project Manager, and the Client.]



3. Quality Assurance Techniques and Tools



Difference between Testing Techniques and Tools

A tool is a vehicle for performing a test process. The tool is a resource to the tester, but by itself is insufficient to conduct testing. For example, a hammer is a tool, but until the technique for using it is determined, the tool will lie dormant.

A testing technique is a process for ensuring that some aspect of an application system or unit functions properly. There are few techniques, but many tools. For example, a technique would be the leverage provided by swinging an instrument to apply force to accomplish an objective - the swinging of a hammer to drive in a nail. The hammer is the tool used by the swinging technique to drive in a nail. On the other hand, a swinging technique can also be used to split a log using an axe or to drive a stake in the ground with a sledgehammer.

The concept of tools and techniques is important in the testing process. It is a combination of the two that enables the test process to be performed. The tester should first understand the testing techniques and then understand the tools that can be used with each of the techniques.

Testing Techniques

Structural System Testing Techniques

Structural system testing is designed to verify that the developed system and programs work. The objective is to ensure that the product designed is structurally sound and will function correctly. It attempts to determine that the technology has been used properly and that when all the component parts are assembled they function as a cohesive unit. The structural system testing techniques provide a way to determine that the implemented configuration and the interrelationship of its parts function together to perform the intended tasks. The techniques are not designed to ensure that the application system is functionally correct, but rather that it is structurally sound. The structural system testing techniques are described in the table below:

Technique | Description | Example
STRESS | Determine system performs with expected volumes | Sufficient disk space allocated; communication lines adequate
EXECUTION | System achieves desired level of proficiency | Transaction turnaround time adequate; software/hardware use optimized
RECOVERY | System can be returned to an operational status after a failure | Induce failure; evaluate adequacy of backup data
OPERATIONS | System can be executed in a normal operational status | Determine the system can be run using the documentation
COMPLIANCE (TO PROCESS) | System is developed in accordance with standards and procedures | Standards followed; documentation complete
SECURITY | System is protected in accordance with importance to organization | Access denied; procedures in place

A. Stress Testing Technique

Stress testing is designed to determine if the system can function when subject to large volumes - larger than would normally be expected. The areas that are stressed include input transactions, internal tables, disk space, output, communications, computer capacity, and interaction with people. If the application functions adequately under test, it can be assumed that it will function properly with normal volumes of work.

Stress Testing Examples

Stress tests can be designed to test all or parts of an application system. Some specific examples of stress testing include:

Enter transactions to determine that sufficient disk space has been allocated to the application.

Ensure that the communication capacity is sufficient to handle the volume of work by attempting to overload the network with transactions.

Test system overflow conditions by entering more transactions than can be accommodated by tables, queues, internal storage facilities, and so on.
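
To make the overflow example concrete, here is a minimal sketch in Python, assuming a bounded queue stands in for the system's internal tables; the capacity figure and the deferral policy are illustrative assumptions.

```python
# Minimal stress-test sketch: floods a bounded transaction queue with more
# entries than it can hold and checks that overflow is handled by deferral
# rather than by losing transactions.
import queue

CAPACITY = 1_000                      # assumed table/queue limit under test
q = queue.Queue(maxsize=CAPACITY)
deferred = []                         # transactions stored for later processing

for txn_id in range(CAPACITY * 2):    # twice the expected volume
    try:
        q.put_nowait(txn_id)
    except queue.Full:
        deferred.append(txn_id)       # overflow must be deferred, not dropped

assert q.qsize() + len(deferred) == CAPACITY * 2, "transactions were lost"
print(f"queued={q.qsize()} deferred={len(deferred)}")
```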

B. Execution Testing Technique

Execution testing is designed to determine whether the system achieves the desired level of proficiency in a production status. Execution testing can verify response times and turnaround times, as well as design performance. The execution of a system can be tested in whole or in part, using the actual system or a simulated model of the system.

Execution Testing Examples

Some specific examples of execution testing include:

Calculating turnaround time on transactions processed through the application.

Determining that the hardware and software selected provide the optimum processing capability.

Using software monitors to determine that the program code is effectively used.
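
A minimal sketch of a turnaround-time check follows; the process_transaction() stand-in and the 10 ms target are assumptions standing in for the system under test and its documented service level.

```python
# Execution (performance) check sketch: measures per-transaction turnaround
# time against an assumed target service level.
import time

def process_transaction(txn):         # stand-in for the system under test
    time.sleep(0.001)

TARGET_SECONDS = 0.01                 # assumed turnaround requirement
timings = []
for txn in range(100):
    start = time.perf_counter()
    process_transaction(txn)
    timings.append(time.perf_counter() - start)

worst = max(timings)
assert worst <= TARGET_SECONDS, f"turnaround {worst:.4f}s exceeds target"
print(f"mean={sum(timings)/len(timings):.4f}s worst={worst:.4f}s")
```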

C. Recovery Testing Technique

Recovery is the ability to restart operations after the integrity of the application has been lost. The process normally involves reverting to a point where the integrity of the system is known, and then reprocessing transactions up to the point of failure. The time required to recover operations is affected by the number of restart points, the volume of applications run at the computer center, the training and skill of the people conducting the recovery operation, and the tools available for recovery. The importance of recovery will vary from application to application.

Recovery Testing Examples

Recovery testing can involve the manual functions of an application, loss of input capability, loss of communication lines, hardware or operating system failure, loss of database integrity, operator error, or application system failure. It is desirable to test all aspects of recovery processing. Some specific examples of recovery testing include:

Inducing a failure in one of the application system programs during processing. This could be accomplished by inserting a special instruction to look for a transaction code that upon identification would cause an abnormal program termination.

The recovery could be conducted from a known point of integrity to ensure that the available backup data was adequate for the recovery process. When the recovery had been completed, the files at the point where the exercise was requested could be compared to the files recreated during the recovery process.
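
The checkpoint-and-reprocess idea can be sketched as follows; the state, journal, and checkpoint names are assumptions, and a real recovery would replay from durable storage rather than in-memory copies.

```python
# Recovery sketch: revert to the last checkpoint (a known point of integrity),
# reprocess the journal up to the failure point, then compare the recovered
# state with the state captured at the time of failure.
import copy

state = {"balance": 0}
journal = []                                   # transactions since checkpoint
checkpoint = copy.deepcopy(state)

def apply(txn):
    journal.append(txn)
    state["balance"] += txn

for txn in (10, 20, 30):
    apply(txn)

state_at_failure = copy.deepcopy(state)        # take the "before" picture
state = copy.deepcopy(checkpoint)              # simulate the failure: restart
for txn in journal:                            # reprocess up to the failure
    state["balance"] += txn

assert state == state_at_failure, "recovered files differ from originals"
```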

D. Operations Testing Technique

After testing, the application will be integrated into the operating environment. At this point in time, the application will be executed using the normal operations staff, operations procedures, and documentation. Operations testing is designed to verify prior to production that the operating procedures and staff can properly execute the application.

Operations Testing Examples

Operations testing is a specialized technical test of executing the application system. Some specific examples of operations testing include:

Determining that the operator instructions have been prepared and documented in accordance with other operations instructions, and that computer operators have been trained in any unusual procedures.

Testing that the job control language statements and other operating systems support features perform the predetermined tasks.

Verifying that the file labeling and protection procedures function properly.
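
As a rough modern analog of verifying that job control statements perform the predetermined tasks, the sketch below runs a batch job as a subprocess and checks the documented outcome; the job command and the expected output file are assumptions.

```python
# Operations sketch: runs a batch job exactly as the operations staff would
# and checks the documented outcome: a zero exit status and the expected
# output artifact.
import pathlib, subprocess, sys

job = [sys.executable, "-c", "open('nightly_report.txt', 'w').write('ok')"]
result = subprocess.run(job, capture_output=True, text=True)

assert result.returncode == 0, f"job failed: {result.stderr}"
assert pathlib.Path("nightly_report.txt").exists(), "expected output missing"
```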

E. Compliance Testing Technique

Compliance testing verifies that the application was developed in accordance with information technology standards, procedures, and guidelines. The methodologies are used to increase the probability of success, to enable the transfer of people in and out of the project with minimal cost, and to increase the maintainability of the application system. The type of testing conducted varies with the phase of the system development life cycle. However, it may be more important to compliance-test adherence to the process during requirements than at later stages in the life cycle, because it is difficult to correct applications when requirements are not adequately documented.

Compliance Testing Examples

A peer group of programmers would be assembled to test, line by line, that a computer program complies with programming standards. At the end of the peer review, the programmer would be given a list of the non-compliant items that need to be corrected.
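
The example above is a manual peer review; as a loose automated analog, this sketch checks a single assumed programming standard (an 80-column limit) line by line and lists the non-compliant lines for correction.

```python
# Standards-compliance sketch: flags every line that exceeds an assumed
# 80-column local standard, checking this very file.
import pathlib

MAX_COLUMNS = 80                                  # assumed local standard
source = pathlib.Path(__file__)

violations = [
    (number, line)
    for number, line in enumerate(source.read_text().splitlines(), 1)
    if len(line) > MAX_COLUMNS
]
for number, line in violations:
    print(f"line {number}: {len(line)} columns exceeds {MAX_COLUMNS}")
```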

F. Security Testing Technique

Security is a protection system that is needed both to secure confidential information and, for competitive purposes, to assure third parties that their data will be protected. The amount of security provided will depend on the risks associated with compromise or loss of information. Protecting the confidentiality of the information is designed to protect the resources of the organization. However, improper disclosure of information such as customer lists may result in a loss of customer business to competitors. Security testing is designed to evaluate the adequacy of the protective procedures and countermeasures.

Security Testing Examples

Security testing involves a wide spectrum of conditions. Testing can first be divided into physical and logical security. Physical security deals with penetration by people in order to physically gather information, while logical security deals with the use of computer processing and/or communication capabilities to improperly access information. Second, access control can be divided by type of perpetrator, such as employee, consultant, cleaning or service personnel, as well as by category of employee. The type of test to be conducted will vary depending on the condition being tested.

Some specific examples of security testing include:

Determination that the resources being protected are identified, and access is defined for each resource. Access can be defined by program or individual.

Evaluation as to whether the designed security procedures have been properly implemented and function in accordance with the specifications.

Unauthorized access can be attempted in on-line systems to ensure that the system can identify and prevent access by unauthorized sources.
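
A minimal sketch of the last example, assuming a toy access-control table: it attempts unauthorized access to each protected resource and confirms that access is denied while an authorized path still works.

```python
# Security sketch: unauthorized access is attempted against each protected
# resource to confirm the system identifies and prevents it.
ACCESS = {"payroll_file": {"alice"}, "audit_log": {"alice", "bob"}}  # assumed

def can_access(user: str, resource: str) -> bool:
    return user in ACCESS.get(resource, set())

for resource in ACCESS:
    assert not can_access("intruder", resource), f"{resource}: not denied"
assert can_access("alice", "payroll_file")   # authorized path still works
```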

Functional System Testing Techniques

Functional system testing is designed to ensure that the system requirements and specifications are achieved. The process normally involves creating test conditions for use in evaluating the correctness of the application. The types of techniques useful in performing functional testing include the following:

Technique | Description | Example
REQUIREMENTS | System performs as specified | Prove system requirements; compliance to policies and regulations
REGRESSION | Verifies that anything unchanged still performs correctly | Unchanged system segments function; unchanged manual procedures correct
ERROR HANDLING | Errors can be prevented or detected, and then corrected | Error introduced into test; errors reentered
MANUAL SUPPORT | The people-computer interaction works | Manual procedures developed; people trained
INTERSYSTEMS | Data is correctly passed from system to system | Intersystem parameters changed; intersystem documentation updated
CONTROL | Controls reduce system risk to an acceptable level | File reconciliation procedures work; manual controls in place
PARALLEL | Old system and new system are run and the results compared to detect unplanned differences | Old and new systems can reconcile; operational status of old system maintained

A. Requirements Testing Technique

Requirements testing must verify that the system can perform its function correctly and that the correctness can be sustained over a continuous period of time. Unless the system can function correctly over an extended period of time, management will not be able to rely upon the system. The system can be tested for correctness throughout the life cycle, but it is difficult to test the reliability until the program becomes operational.

Requirements Testing Examples

Some specific requirements testing examples include:

Creating a test matrix to prove that the system requirements as documented are the requirements desired by the user.

Using a checklist prepared specifically for the application to verify the application's compliance with organizational policies and governmental regulations.

Determining that the system meets the auditability requirements established by the organization's department of internal auditors.
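
The test matrix from the first example can be built mechanically: map each documented requirement to the tests that prove it, and flag any requirement left uncovered. The requirement IDs and test names below are hypothetical.

```python
# Requirements test-matrix sketch: every documented requirement must be
# covered by at least one test, so untested requirements are caught.
requirements = {"REQ-001", "REQ-002", "REQ-003"}   # from the spec (assumed IDs)
test_matrix = {
    "test_login_rejects_bad_password": {"REQ-001"},
    "test_report_totals_match_input":  {"REQ-002", "REQ-003"},
}

covered = set().union(*test_matrix.values())
missing = requirements - covered
assert not missing, f"requirements with no test: {sorted(missing)}"
```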

B. Regression Testing Technique

One of the attributes that has plagued information technology professionals for years is the snowballing or cascading effect of making changes to an application system. One segment of the system is developed and thoroughly tested. Then a change is made to another part of the system, which has a disastrous effect on the thoroughly tested portion. Either the incorrectly implemented change causes a problem, or the change introduces new data or parameters that cause problems in a previously tested segment. Regression testing retests previously tested segments to ensure that they still function properly after a change has been made to another part of the application.

Regression Testing Examples

Some specific examples of regression testing include:

Rerunning previously conducted tests to ensure that the unchanged system segments function properly.

Reviewing previously prepared manual procedures to ensure that they remain correct after changes have been made to the application system.

Obtaining a printout from the data dictionary to ensure that the documentation for data elements that have been changed is correct.
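
One way to automate the first example is to compare current outputs against a stored baseline; the compute_report() function and the baseline file name below are assumptions standing in for a previously tested segment.

```python
# Regression sketch: outputs for a fixed set of inputs are compared against a
# stored baseline, so a change elsewhere that disturbs a previously tested
# segment shows up as a difference.
import json, pathlib

def compute_report(n):                 # stand-in for the unchanged segment
    return {"input": n, "square": n * n}

BASELINE = pathlib.Path("regression_baseline.json")
current = [compute_report(n) for n in range(10)]

if BASELINE.exists():
    baseline = json.loads(BASELINE.read_text())
    assert current == baseline, "previously tested behaviour has changed"
else:
    BASELINE.write_text(json.dumps(current))   # first run records the baseline
```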

C. Error-Handling Testing Technique

One of the characteristics that differentiates automated from manual systems is predetermined error handling. Manual systems can deal with problems as they occur, but automated systems must have their error handling preprogrammed. In many instances the completeness of error handling affects the usability of the application. Error-handling testing determines the ability of the application system to properly process incorrect transactions.

Error-Handling Testing Examples

Error handling requires the tester to think negatively. The testers must try to determine how the system might fail due to errors, so they can test to determine if the software can properly process the erroneous data.

Some specific examples of error handling include the following:

Produce a representative set of transactions containing errors and enter them into the system to determine whether the application can identify the problems.

Through iterative testing, enter errors that will result in corrections, and then reenter those transactions with errors that were not included in the original set of test transactions.

Enter improper master data, such as prices or employee pay rates, to determine that errors that will occur repetitively are subject to greater scrutiny than those causing single-error results.
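
A minimal sketch of the first example, with assumed validation rules: a representative set of erroneous transactions is entered, and each must be identified as invalid.

```python
# Error-handling sketch: feed representative bad transactions to the
# validation logic and confirm each one is identified and rejected.
def validate(txn: dict) -> list[str]:
    errors = []
    if not txn.get("account"):
        errors.append("missing account")
    if not isinstance(txn.get("amount"), (int, float)) or txn["amount"] <= 0:
        errors.append("invalid amount")
    return errors

bad_transactions = [
    {"account": "", "amount": 100},        # missing identifier
    {"account": "A-1", "amount": -5},      # out-of-range value
    {"account": "A-1", "amount": "ten"},   # wrong type
]
for txn in bad_transactions:
    assert validate(txn), f"erroneous transaction accepted: {txn}"
```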

D. Manual-Support Testing Technique

Systems commence when transactions originate and conclude with the use of the results of processing. The manual part of the system requires the same attention in testing as the automated segment. Although the timing and testing methods may be different, the objectives of manual testing remain the same as those for testing the automated segment of the application system.

Manual-Support Testing Examples

Some specific examples of manual-support testing include the following:

Provide input personnel with the type of information they would normally receive from their customers and then have them transcribe that information and enter it into the computer.

Output reports are prepared by the computer based on typical conditions, and the users are then asked to take the necessary action based on the information contained in those reports.

Users can be provided a series of test conditions and then asked to respond to those conditions. Conducted in this manner, manual support testing is like an examination in which the users are asked to obtain the answer from the procedures and manuals available to them.

E. Intersystem Testing Technique

Application systems are frequently interconnected with other application systems. The interconnection may be data coming into the system from another application, leaving for another application, or both. Frequently multiple applications - sometimes called cycles or functions - are involved. For example, there is a revenue function or cycle that interconnects all of the income-producing applications such as order entry, billing, receivables, shipping, and returned goods. Intersystem testing is designed to ensure that the interconnections between applications function correctly.

Intersystem Testing Examples

Some specific examples of intersystem testing include:

Developing a representative set of test transactions in one application for passage to another application for processing verification.

Entering test transactions in a live production environment using the integrated test facility so that the test conditions can be passed from application to application, to verify that the processing is correct.

Manually verifying that the documentation in the affected systems is updated based upon the new or changed parameters in the system being tested.
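
The first example can be sketched as a field-by-field check of a handoff between two applications; the billing and receivables record layouts below are assumptions.

```python
# Intersystem sketch: a test transaction produced by an upstream "billing"
# step is passed to a downstream "receivables" step, and the handoff is
# verified field by field.
def billing_export(order):                       # upstream application
    return {"invoice_id": order["id"], "amount_due": order["total"]}

def receivables_import(record):                  # downstream application
    return {"invoice": record["invoice_id"], "open_amount": record["amount_due"]}

order = {"id": "INV-42", "total": 99.50}
passed = receivables_import(billing_export(order))
assert passed == {"invoice": "INV-42", "open_amount": 99.50}, "handoff corrupted"
```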

F. Control Testing Technique

Approximately one-half of the total system development effort is directly attributable to controls. Controls include data validation, file integrity, audit trail, backup and recovery, documentation, and the other aspects of systems related to integrity. Although control testing is included in the other testing techniques, the control testing technique is designed to ensure that the mechanisms that oversee the proper functioning of an application system work.

Control Testing Examples

Control-oriented people frequently do control testing. Like error handling, it requires a negative look at the application system to ensure that those "what-can-go-wrong" conditions are adequately protected. Error handling is a subset of controls oriented toward the detection and correction of erroneous information. Control in the broader sense looks at the totality of the system.

Specific examples of control testing include:

Determining that there is adequate assurance that the detailed records in a file equal the control total. This is normally done by running a special program that accumulates the detail and reconciles it to the total.

Determining that the manual controls used to ensure that computer processing is correct are in place and working.

Selecting transactions and verifying that the processing for those transactions can be reconstructed on a test basis.
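
The first example mentions a special program that accumulates the detail and reconciles it to the control total; a minimal sketch of that reconciliation, with assumed file contents, follows.

```python
# Control-testing sketch: accumulate the detail records in a file and
# reconcile the sum against the stored control total.
detail_records = [120.00, 75.50, 304.50]   # assumed detail file contents
control_total = 500.00                     # assumed stored control record

accumulated = round(sum(detail_records), 2)
assert accumulated == control_total, (
    f"detail {accumulated} does not reconcile to control total {control_total}"
)
```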

G. Parallel Testing Technique

In the early days of computer systems, parallel testing was one of the more popular testing techniques. However, as systems have become more integrated and complex, the difficulty of conducting parallel tests has increased, and the popularity of the technique has diminished. Parallel testing is used to determine that the results of the new application are consistent with the processing of the previous application or version of the application.

Parallel Testing Examples

Specific examples of parallel testing include:

Operating a new and old version of a payroll system to determine that the paychecks from both systems are reconcilable.

Running the old version of the application system to ensure that the operational status of the old system has been maintained in the event that problems are encountered in the new application.
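
The payroll example can be sketched by running the same inputs through both versions and reconciling every paycheck; both payroll functions below are toy assumptions.

```python
# Parallel-testing sketch: the same inputs run through the old and new
# versions, and every paycheck is reconciled.
def old_payroll(hours, rate):
    return round(hours * rate, 2)

def new_payroll(hours, rate):                    # rewritten implementation
    return round(rate * hours, 2)

cases = [(40, 17.50), (38.5, 22.00), (0, 30.00)]
for hours, rate in cases:
    old, new = old_payroll(hours, rate), new_payroll(hours, rate)
    assert old == new, f"paychecks differ for {hours}h @ {rate}: {old} vs {new}"
```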

Testing Tools

The selection of the appropriate tool is an important aspect of the test process. Techniques are few in number and broad in scope, while tools are many in number and narrow in scope. Each tool provides different capabilities and is designed to accomplish a specific testing objective.

Listed below are the more common testing tools:

Tool name | Testing use
Acceptance Test Criteria | The development of system standards that must be achieved before the user will accept the system for production purposes.
Boundary Value Analysis | A method of dividing application systems into segments so that testing can occur within the boundaries of those segments. The concept complements top-down system design.
Cause-Effect Graphing | Attempts to show the effect of each event processed, in order to categorize events by the effect that will occur as a result of processing. The objective is to reduce the number of test conditions by eliminating the need for multiple test events that all produce the same effect.
Checklist | A series of probing questions designed for use in reviewing a predetermined area or function.
Code Comparison | Identifies differences between two versions of the same program. It can be used with either object or source code.
Compiler-based Analysis | Uses the diagnostics produced by a compiler, or diagnostic routines added to a compiler, to identify program defects during compilation.
Complexity-based Metric Testing | Uses statistics and mathematics to develop highly predictive relationships that can be used to identify the complexity of computer programs and the completeness of testing in evaluating complex logic.
Confirmation/Examination | Verifies the correctness of many aspects of the system by contacting third parties, such as users, or by examining a document to verify that it exists.
Control Flow Analysis | Requires the development of a graphic representation of a program in order to analyze the branch logic within the program and identify logic problems.
Correctness Proof | Involves developing a set of statements and hypotheses that define the correctness of processing. These hypotheses are then tested to determine whether the application system performs processing in accordance with the correctness statements.
Coverage-based Metric Testing | Uses mathematical relationships to show what percentage of the application system has been covered by the test process. The resulting metric should be usable for predicting the effectiveness of the test process.
Data Dictionary | A documentation tool for recording data elements and their attributes that, in some implementations, can produce test data to validate the system's data edits.
Data Flow Analysis | A method of ensuring that the data used by the program has been properly defined and that the defined data is properly used.
Design-based Functional Testing | Recognizes that functions within an application system are necessary to support the requirements. This process identifies those design-based functions for test purposes.
Design Reviews | Reviews conducted during the systems development process, normally in accordance with the systems development methodology. The primary objective of design reviews is to ensure compliance with the design methodology.
Desk Checking | Review by the originator of the requirements, design, or program as a check on that individual's own work.
Disaster Test | A procedure that predetermines a disaster as a basis for testing the recovery process. The test group then causes or simulates the disaster in order to test the procedures and training for the recovery process.
Error Guessing | Uses the experience or judgment of people to predict, by guessing, what the most probable errors will be, and then tests to ensure that the system can handle those conditions.
Executable Specs | Requires a special language for writing system specifications so that the specifications can be compiled into a testable program. The compiled specs have less detail and precision than the final implemented programs will, but are sufficient to evaluate the completeness and proper functioning of the specifications.
Exhaustive Testing | Performs sufficient testing to evaluate every possible path and condition in the application system. This is the only test method that guarantees the proper functioning of the application system.
Fact Finding | An investigative process of obtaining, looking up, or searching for the facts regarding a predetermined condition, used to conduct a test or provide assurance of the correctness of information on a document.
Flowchart | Graphically represents the system and/or program flow in order to evaluate the completeness of the requirements, design, or program specifications.
Inspections | A highly structured, step-by-step review of the deliverables produced by each phase of the systems development life cycle in order to identify potential defects.
Instrumentation | The use of monitors and/or counters to determine the frequency with which predetermined events occur.
Integrated Test Facility | A concept that permits the introduction of test data into a production environment so that applications can be tested while they are running in production. The concept permits testing the accumulation of data over many iterations of the process, and facilitates intersystem testing.
Mapping | Analyzes which parts of a computer program are exercised during a test and how frequently each statement or routine is executed. Can be used to detect system flaws, determine how much of a program is executed during testing, and identify areas where more efficient code may reduce execution time.
Modeling | A method of simulating the functioning of the application system and/or its environment to determine whether the design specifications will achieve the system objectives.
Parallel Operation | Runs both the old and new versions within the same time frame in order to identify differences between the two processes. The tool is most effective when there is a minimal number of changes between the old and new versions of the system.
Parallel Simulation | Develops a less precise version of a segment of a computer system in order to determine whether the results produced by the test are reasonable. Effective when used with large volumes of data to automatically determine the correctness of the results of processing. Normally only approximates actual processing.
Peer Review | A review process that uses peers to review the aspect of the systems development life cycle with which they are most familiar. Normally the peers review compliance with standards, procedures, guidelines, and the use of good practices, as opposed to the efficiency, effectiveness, and economy of the design and implementation.
Risk Matrix | Tests the adequacy of controls through the identification of risks and of the controls implemented in each part of the application system to reduce those risks to a level acceptable to the user.
SCARF (System Control Audit Review File) | Evaluates the operational system over a period of time, or compares the operation of like entities at a specific point in time. The tool uses information collected during operations to perform the analysis. For example, all data entry errors would be collected over a period of time to show whether the quality of input is improving or degrading.
Scoring | A method used to determine which aspects of the application system should be tested by determining the applicability of problem criteria to the application being tested. The process can be used to determine the degree of testing (for example, high-risk systems would be subject to more tests than low-risk systems) or to identify areas within the application system needing more testing.
Snapshot | A method of printing the status of computer memory at predetermined points during processing. Computer memory can be printed when specific instructions are executed or when data with specific attributes is processed.
Symbolic Execution | Permits the testing of programs without test data. The symbolic execution of a program results in an expression that can be used to evaluate the completeness of the programming job.
System Logs | Uses information collected during the operation of a computer system to analyze how well the system performed. The logs used are those produced by operating software such as database management systems, operating systems, and job accounting systems.
Test Data | System transactions created for the purpose of testing the application system.
Test Data Generator | Software that can automatically generate test data for testing purposes. Frequently, these generators require only the parameters of the data element values in order to generate large numbers of test transactions.
Tracing | A representation of the paths followed by computer programs as they process data, or of the paths followed in a database to locate one or more pieces of data used to produce a logical record for processing.
Utility Programs | General-purpose software packages that can be used in testing an application system. The most valuable utilities are those that analyze or list data files.
Volume Testing | The creation of specific types of test data in order to test predetermined system limits and verify how the system functions when those limits are reached or exceeded.
Walkthroughs | A process in which the programmer or analyst explains the application system to a test team, normally using a simulation of the execution of the application system. The objective of the walkthrough is to give the test team a basis for questioning and thereby identifying defects.

4. Program Testing

Objective

The objective of this step is to determine whether the software system will perform correctly in an executable mode. The software is executed in a test environment in approximately the same operational mode as it would be in an operational environment. The test should be executed in as many different ways as necessary. Any deviation from the expected results should be recorded during this step. Depending on the nature and severity of the problems uncovered, changes may need to be made to the software before it is placed in production status. If the problems are extensive, it may be necessary to stop testing completely and return the software to the developers to make the needed changes prior to resuming testing.

Test Factor: Manual, Regression, and Functional Testing (Reliability)

Recommended Test | Test Technique | Test Tool
Verify that data validation programs reject data not conforming to data element specifications. | Error handling | Test data & Test data generator
Verify that the system rejects data relationships that do not conform to system specifications. | Error handling | Test data & Test data generator
Verify that the program rejects invalid identifiers. | Error handling | Test data & Test data generator
Confirm that the system detects missing sequence numbers. | Requirements | Test data
Verify that the system will detect inaccurate batch totals. | Error handling | Test data
Verify that the programs will detect data missing from batches and scheduled data that does not arrive on time. | Manual support | Test data & Test data generator
Conduct a regression test to ensure that unchanged portions of the program are not affected by invalid data. | Execution | Inspections, Test data & Test data generator
Verify the correctness of the results obtained from the recovery process. | Recovery | Disaster test & Inspections

Test Factor: Compliance Testing (Authorization)

Recommended Test | Test Technique | Test Tool
Test manual procedures to verify that authorization procedures are followed. | Security | Cause-effect graphing
Verify that programs enforce automated authorization rules. | Control | Test data & Test data generator
Confirm that the actual identifiers for the authorization are included in the programs. | Control | Inspections & Confirmation/examination
Verify that the authorization programs reject unauthorized transactions. | Security | Symbolic execution
Verify that multiple authorization procedures perform properly. | Control | Exhaustive testing
Verify that the system can identify potential violations of authorization limits caused by entering multiple transactions below the limit. | Security | Exhaustive testing
Verify that the procedure to change the authorization rules of a program performs properly. | Control | Test data
Verify that the authorization reports are properly prepared and delivered. | Control | Test data & Confirmation/examination

Test Factor: Functional Testing (File Integrity)

Recommended Test | Test Technique | Test Tool
Verify that the procedures to balance the files function properly. | Requirements | Test data & Test data generator
Verify that the independently maintained control totals can confirm the automated file control totals. | Requirements | Inspections
Verify that the new control totals properly reflect the updated transactions. | Requirements | Test data & Test data generator
Cause a program to fail in order to determine whether the failure affects file integrity. | Recovery | Disaster test
Enter erroneous data to determine that it cannot affect the integrity of the file totals. | Error handling | Test data & Test data generator
Verify that the manual procedures can be properly performed to produce correct independent control totals. | Control | Test data & Test data generator
Change a data element that is held redundantly in several files to verify that the other files are changed accordingly. | Requirements | Test data
Run the system with one record and with no records on each file. | Requirements | Test data

Test Factor: Functional Testing (Audit Trail)

Recommended Test | Test Technique | Test Tool
Verify that a given source transaction can be traced to the appropriate control total. | Requirements | Tracing
Determine for a control total that all the supporting transactions can be identified. | Requirements | Inspections
Verify that the processing of a single transaction can be reconstructed. | Recovery | Disaster test
Examine the audit trail to verify that it contains the appropriate information. | Requirements | Inspections
Verify that the audit trail is marked to be saved for the appropriate time frame. | Control | Checklist & Fact finding
Verify that people can reconstruct processing by using the audit trail procedures. | Recovery | Disaster test
Determine the cost of using the audit trail, to establish whether it is economical to use. | Requirements | Fact finding
Verify with the auditors that the audit trail is satisfactory for their purposes. | Control | Confirmation/examination

Test Factor: Recovery Testing (Continuity of Processing)

Recommended Test | Test Technique | Test Tool
Simulate a disaster to verify that recovery can occur after a disaster. | Recovery | Disaster test
Verify that a recovery can be performed directly from the recovery procedures. | Operations | Disaster test
Conduct a recovery test to determine that it can be performed within the required time frame. | Recovery | Disaster test
Confirm with operations personnel that they have received appropriate recovery training. | Operations | Confirmation/examination
Verify that the system can recover from each of the various types of system failures. | Recovery | Disaster test
Simulate a system disaster to verify that the manual procedures are adequate. | Stress | Volume testing
Verify that the system users can properly enter data that has accumulated during system failures. | Recovery | Disaster test
Require that the alternate manual processing procedures be performed using only the written procedures. | Recovery | Disaster test

Test Factor: Stress Testing (Service Level)

Recommended Test | Test Technique | Test Tool
Confirm with the project leader that all the project limits are documented. | Operations | Confirmation/examination
Verify that the application limits have been tested. | Stress | Volume testing
Confirm that when more transactions are entered than the system can handle, they are stored for later processing. | Stress | Volume testing
Verify that excessive input will not result in system problems. | Stress | Volume testing
Verify that when people receive more transactions than they can process, no transactions are lost. | Stress | Volume testing
Verify that when communication systems are required to process more transactions than they have capacity for, transactions are not lost. | Stress | Volume testing
Evaluate the reasonableness of the excess capacity procedures. | Operations | Fact finding
Test the functioning of the system when operated by backup personnel. | Execution | Disaster test

Test Factor: Compliance Test (Performance)

Recommended Test | Test Technique | Test Tool
Verify that the system can be operated with the anticipated manual effort. | Stress | Test data & Test data generator
Verify that the transaction processing costs are within expected tolerances. | Stress | Test data & Test data generator
Verify from the accounting reports that the test phase has been performed within budget. | Execution | Test data & Test data generator
Confirm with the project leader that the problems uncovered will not significantly affect the cost-effectiveness of the system. | Execution | Confirmation/examination & Fact finding
Confirm with user management that the expected benefit should be received. | Execution | Confirmation/examination
Confirm with computer operations whether projected changes to hardware and software will significantly reduce operations and maintenance costs. | Execution | Confirmation/examination & Fact finding
Examine the completeness of the test phase work program. | Compliance | Inspections
Confirm with an independent source the soundness of the implementation technology. | Execution | Confirmation/examination & Fact finding

Test Factor: Compliance Testing (Security)

Recommended Test | Test Technique | Test Tool
Examine the completeness of the protection against the identified security risks. | Security | Cause-effect graphing & Risk matrix
Attempt to violate physical security to determine its adequacy. | Security | Error guessing & Inspections
Conduct procedures that violate access security to test whether the security procedures are adequate. | Security | Error guessing & Inspections
Attempt to utilize computer resources without proper authorization. | Security | Error guessing & Inspections
Conduct security violations during non-working hours to determine the adequacy of security procedures. | Security | Error guessing & Inspections
Conduct repeated security violations to determine whether security can be broken through repetitive attempts. | Security | Error guessing & Inspections
Attempt to gain access to computer programs and system documentation. | Security | Error guessing & Inspections
Verify that employees know and follow security procedures. | Control | Confirmation/examination

Test Factor: Test Complies with Methodology

Recommended Test | Test Technique | Test Tool
Verify that the operational system results comply with the organization's policies and procedures. | Compliance | Checklist & Inspections
Verify that the operational system results comply with the information services policies and procedures. | Compliance | Checklist & Inspections
Verify that the operational system results comply with the accounting policies and procedures. | Compliance | Checklist & Inspections
Verify that the operational system results comply with governmental regulations. | Compliance | Checklist & Inspections
Verify that the operational system results comply with industry standards. | Compliance | Checklist & Inspections
Verify that the operational system results comply with the user department's policies and procedures. | Compliance | Checklist & Inspections
Verify that the test plan was fully implemented. | Compliance | Confirmation/examination & Fact finding
Confirm with the user the completeness of the testing performed to verify that sensitive data is protected. | Compliance | Confirmation/examination

Test Factor: Functional Testing (Correctness)

Recommended Test | Test Technique | Test Tool
Verify that the transaction origination procedures perform in accordance with system requirements. | Requirements | Test data & Test data generator
Verify that the input procedures perform in accordance with system requirements. | Requirements | Test data & Test data generator
Verify that the processing procedures perform in accordance with system requirements. | Requirements | Test data & Test data generator
Verify that the storage retention procedures perform in accordance with system requirements. | Requirements | Test data & Test data generator
Verify that the output procedures perform in accordance with system requirements. | Requirements | Test data & Test data generator
Verify that the error-handling procedures perform in accordance with system requirements. | Error handling | Test data & Test data generator
Verify that the manual procedures perform in accordance with system requirements. | Manual support | Test data & Test data generator
Verify that the data retention procedures perform in accordance with system requirements. | Requirements | Test data & Test data generator

Test Factor: Manual Support Testing (Ease of Use)

Recommended Test | Test Technique | Test Tool
Confirm with clerical personnel that they understand the procedures. | Manual support | Confirmation/examination
Examine the results of using the reference documents. | Manual support | Inspections
Examine processing for correctness. | Requirements | Inspections
Examine the correctness of use of output documents. | Requirements | Checklist
Identify the time span for manual processing. | Requirements | Fact finding
Examine outputs for priority-of-use indications. | Requirements | Fact finding
Examine documents for clarity of identification. | Requirements | Fact finding
Confirm with clerical personnel the ease of use of the system. | Manual support | Confirmation/examination

Test Factor: Inspections (Maintainability)

Recommended Test | Test Technique | Test Tool
Determine that all program statements are entrant. | Compliance | Compiler-based analysis & Mapping
Examine the reasonableness of program processing results. | Compliance | Test data
Introduce an error into the program. | Compliance | Test data
Verify that the executable version of the program conforms to the program documentation. | Compliance | Inspections
Examine the completeness of the history of program changes. | Compliance | Inspections
Examine the usability of test data for maintenance. | Compliance | Peer review
Examine the usability of expected test results for maintenance. | Compliance | Peer review
Verify that errors detected during testing have been corrected. | Compliance | Inspections

Test Factor: Disaster Testing (Portability)

Recommended Test | Test Technique | Test Tool
Confirm that alternate site requirements have been identified. | Intersystems | Test data & Test data generator
Execute data files on the new facilities. | Intersystems | Test data & Test data generator
Execute programs on the new facilities. | Intersystems | Inspections
Request that normal operators execute the system on the new facilities. | Intersystems | Inspections
Examine the usability of outputs produced using the new facilities. | Operations | Test data
Monitor execution time on the new facility. | Intersystems | Fact finding
Recompile programs using the new facility. | Intersystems | Confirmation/examination & Fact finding
Request users to operate the system using the new facilities. | Control | Confirmation/examination & Fact finding

Test Factor: Operations Test (Ease of Operations)

Recommended Test | Test Technique | Test Tool
Verify that documented instructions conform to standards. | Compliance | Inspections
Confirm with operators the completeness of the instructions. | Operations | Confirmation/examination & Fact finding
Examine the call-in list. | Operations | Confirmation/examination
Determine that the operator instructions are complete. | Operations | Inspections
Examine the schedule for reasonable allocation of time. | Operations | Fact finding
Verify the completeness of the retention procedures. | Operations | Inspections
Verify that operators can operate the system using only the operator instructions. | Operations | Test data
Verify that operator recommendations have been adequately reviewed. | Operations | Confirmation/examination & Fact finding

5. Testing Tips

The most important thing to remember about Quality Assurance is that you will have to test early, often, and everything.

The five questions that a Quality Assurance team member has to ask the developers when finding a problem in the system are:

Is it a big problem?

Why is it a big/small problem?

Will the change that comes with solving this problem affect the testing environment?

What modules will be affected? How will they be affected?

Is there any other way to fix this problem?

When testing an application, we usually start with an empty database and try to act as a user, entering logical information so that we can create valid reports based on this data. After the system has been tested properly starting from an empty database, we can continue by loading large amounts of data to check how the application behaves in that situation.

Act as a user, think as a user: enter data into the application as a user would, think about the mistakes a user might make while using the application, and test how the system reacts when confronted with those mistakes.

Try to create test data for every possible condition and every path in the program.

When adding new records to the application, check if the data has been correctly and completely saved and verify results on all the affected screens (if the data you have entered is transmitted to more than one screen).

Generate test data to verify data validation; always enter characters like: ~ ! @ # $ % ^ & * ( ) _ + | : " < > ? , . / ; ' [ ] \ = - ` (see the sketch after these tips).

Try to close the browser while the application is working.

Test the entire application using only the keyboard, not the mouse (tab through the application).
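
A small sketch for the data-validation tip above, assuming a save_record() stand-in for the application's save path: each special character is embedded in a value and checked for silent corruption.

```python
# Data-validation sketch: builds test strings from the special characters
# listed in the tips and confirms none are mangled on the save path.
SPECIAL = list("~!@#$%^&*()_+|:\"<>?,./;'[]\\=-`")

def save_record(value: str) -> str:              # stand-in for the application
    return value                                 # a real app would persist it

for ch in SPECIAL:
    stored = save_record(f"name{ch}test")
    assert stored == f"name{ch}test", f"character {ch!r} corrupted on save"
```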

More tips on the way!!!!! (still have to work on this section)


