Scope
This document discusses developer testing: testing the code by the developers themselves before the artifacts are delivered to the QA/test team.
Benefits of developer testing
• Reduce bug fix costs by detecting defects earlier, before the code is delivered to the QA/test team.
• Early and frequent development tests give early feedback to the development team.
• The statistics and trend charts can help the management team assess the maturity/reliability of the product and decide whether early actions are necessary to correct the process. For example, it is too late to discover that you need to hire a security expert after the product has already been shipped to an external party with security bugs in it. The QA manager can also use the test statistics to decide whether or not to accept the product from the development team.
Define the process
• Determine how developers test within the software process, e.g.
o build the tests before the developers start coding (TDD/Agile)
o run the tests once the code is mature enough, before delivery to the test team (waterfall)
o perform automated continuous integration tests after every SCM commit (Agile)
o perform exploratory tests (Agile)
o scrum demo / user test for user feedback at the end of each Sprint (Agile)
o spiral/incremental test: run the tests iteratively, adding new tests for the integration of new SOA components at each Scrum sprint (while keeping the previous tests running as regression tests)
• Do you need a test plan / documented test cases?
• Does the test plan need to be reviewed (e.g. for completeness)?
• Define how the tests will be conducted, e.g. automated tests (unit tests, Selenium GUI tests), manual user tests, manual exploratory tests (a minimal JUnit sketch is given after this list)
• Determine entry criteria (e.g. code is mature enough)
• Determine exit criteria (e.g. approval by developer manager, approval by QA manager that the code is mature enough to be delivered to the QA/test team)
• Determine metrics (e.g. error list with severity & type)
• Are tools available to assist the test process (e.g. SOAPUI, YSlow)?
• Determine the defect reporting/communication channel: how to report test results (e.g. Trac, Bugzilla), how to archive test cases & results (e.g. svn, wiki), and defect management (e.g. how to track test status, rework and retesting)
• Determine who will play the tester role. You may assign several testers to specific areas (e.g. a security specialist for penetration testing) or invite customers for use-case testing.
• Determine the time needed to develop and perform the tests. Discuss the time/plan with the project manager / team lead to obtain management support. Schedule the meetings. Set time limits.
• Do the tests, register the anomalies.
• Discuss whether or not a fix is needed.
• Discuss the fix: decide in which version the fix should be done and who will do the rework, then estimate/plan the rework.
• Determine the exit decision, e.g. re-inspection after the required rework, or minor rework with no further verification needed.
• Schedule the follow-up / re-inspection of the rework.
• Collect "lessons to learn" to improve the development process.
• Do you need permission from, or need to inform, other departments? For example, you had better seek permission from the infrastructure manager before bombarding the servers with DoS penetration tests or performance stress tests. The same applies to red-team testers (who perform penetration tests without prior knowledge of the system and without the IT staff's awareness): always seek permission from management first.
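To make the automated unit test step concrete, here is a minimal JUnit 4 sketch; the PriceCalculator class is a toy example defined inline only to keep the sketch self-contained, not part of any real project. A test like this can be run by the continuous integration server after every SCM commit.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Toy class under test, defined inline so the sketch compiles on its own.
    static class PriceCalculator {
        double discount(double amount, double percent) {
            return amount - amount * percent / 100.0;
        }
    }

    @Test
    public void discountReducesTheAmount() {
        // convenient round numbers (10000 instead of 47921) are easy to check by hand
        assertEquals(9000.0, new PriceCalculator().discount(10000.0, 10.0), 0.001);
    }

    @Test
    public void zeroPercentLeavesTheAmountUnchanged() {
        assertEquals(10000.0, new PriceCalculator().discount(10000.0, 0.0), 0.001);
    }
}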
Best practices
• Tests are performed by someone other than the developer who implemented the code: this avoids blind spots, keeps the testing objective, and encourages good documentation.
• Determine how to share the test code with other developers & the QA team (for code reuse and reproducible results). Reuse tests via test libraries / a knowledge repository (e.g. a test case library). Use version control (e.g. svn).
• Regression test: rerun the past tests to detect if the current fix has introduced new bugs
• Automated tests are better than manual tests: repeatable, less error-prone, more efficient to run and reuse, and they can be run frequently (e.g. continuous integration)
• Discuss realistic scenarios with your client/users when defining the test data
• Find out the typical usage (e.g. the average message size, how many requests per minute) by asking the users
• Find out the typical failures in production (e.g. network outages) by asking the production team
• Find out the typical environment/configuration in production (e.g. browser, OS). Do you need to consider old environments/data for backward compatibility (e.g. IE 5.0)?
• Build a test case for every requirement / use case item. Mention the requirement number in the test case document for traceability.
• Determine which tests to perform within the limited time, e.g. installation/configuration/uninstall tests, user functional tests, performance tests, security tests, compatibility tests.
• Don't try to cover everything. Prioritize the test cases based on the most likely errors (e.g. which functional area, which class) and the risk.
• Avoid overlap of test cases
• Use test cases with convenient values that are easy to check by hand (e.g. 10000 instead of 47921)
• Make sure that the testers have business knowledge of the domain (e.g. terminology, business logic, workflows, typical inputs)
• Consider automatic test case generator
• Review and test the test code
• GUI prototyping/pilot test: involve only a limited number of testers & use simpler scenarios
• Consider positive (e.g. good data) as well as negative (e.g. wrong data, database connection failure) test cases (see the JUnit sketch after this list)
• Use test framework/tools (avoid reinventing the wheel) e.g. SOAPUI, Selenium, JMeter.
• Keep, interpret and report the test statistics. Useful charts:
o defect gap analysis: found bugs and solved bugs vs time
o number of bugs per function/module/area (bugs tend to be concentrated in certain modules)
o number of bugs per severity level (e.g. critical, major, minor)
o number of bugs per status (e.g. ok, solved, unsolved, not yet tested)
o test burndown graph: number of unsolved bugs and not-yet-run test cases vs time
o number of bugs per root causes (e.g. incomplete requirement, database data/structure, etc).
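To illustrate the positive/negative test cases and the requirement traceability mentioned in this list, here is a small JUnit sketch. The requirement id REQ-042, the AccountService class and its methods are hypothetical names used only for this example.

import java.util.HashMap;
import java.util.Map;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

// Covers the hypothetical requirement REQ-042 (withdrawal rules): the requirement
// number is mentioned here for traceability.
public class AccountServiceTest {

    // Toy service defined inline so the sketch compiles on its own.
    static class AccountService {
        private final Map<String, Integer> balances = new HashMap<String, Integer>();

        void deposit(String account, int amount) {
            balances.put(account, balance(account) + amount);
        }

        int withdraw(String account, int amount) {
            if (amount > balance(account)) {
                throw new IllegalArgumentException("overdraft not allowed");
            }
            balances.put(account, balance(account) - amount);
            return balance(account);
        }

        private int balance(String account) {
            Integer b = balances.get(account);
            return b == null ? 0 : b;
        }
    }

    @Test
    public void positiveCaseValidWithdrawal() {
        AccountService service = new AccountService();
        service.deposit("acc-1", 10000);          // convenient round numbers
        assertEquals(9000, service.withdraw("acc-1", 1000));
    }

    @Test
    public void negativeCaseOverdraftIsRejected() {
        AccountService service = new AccountService();
        service.deposit("acc-1", 100);
        try {
            service.withdraw("acc-1", 1000);      // more than the balance
            fail("expected an overdraft error");
        } catch (IllegalArgumentException expected) {
            // the negative test passes when the wrong input is rejected
        }
    }
}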
Test checklists
Please see http://soa-java.blogspot.nl/2012/09/test-checklists.html
The test pyramid
• Level 1: automated unit tests
• Level 2: service integration tests (e.g. the connections between services); a minimal integration-test sketch is given below
• Level 3: user acceptance / system tests (e.g. GUI, security, performance)
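As an illustration of level 2, here is a minimal sketch of a service integration test written with JUnit and plain HTTP. The endpoint http://localhost:8080/orders/ping is a hypothetical service deployed in the test environment; in practice you would often use a tool such as SOAPUI instead of hand-written HTTP code.

import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderServiceIntegrationTest {

    @Test
    public void orderServiceIsReachable() throws Exception {
        // Hypothetical endpoint of a deployed service in the test environment
        URL url = new URL("http://localhost:8080/orders/ping");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        connection.setConnectTimeout(5000);   // fail fast if the service is down

        assertEquals(200, connection.getResponseCode());
    }
}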
Tools
• Unit tests e.g. JUnit, NUnit
• Service functional tests e.g. SOAPUI
• Performance tests e.g. SOAPUI, JMeter, YSlow (GUI)
• Security tests e.g. SOAPUI, Paros, SPIKE, Wireshark
• GUI tests e.g. Selenium, HttpUnit (see the Selenium sketch below)
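For the GUI level, here is a minimal Selenium WebDriver sketch (Java binding). The application URL, the field name "q" and the expected page title are hypothetical placeholders for your own application.

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertTrue;

public class GuiSmokeTest {

    @Test
    public void searchReturnsAResultPage() {
        WebDriver driver = new FirefoxDriver();   // or another WebDriver implementation
        try {
            driver.get("http://localhost:8080/myapp");            // hypothetical application URL
            driver.findElement(By.name("q")).sendKeys("10000");   // hypothetical search field
            driver.findElement(By.name("q")).submit();

            assertTrue("unexpected page title: " + driver.getTitle(),
                       driver.getTitle().contains("Results"));
        } finally {
            driver.quit();                        // always close the browser
        }
    }
}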
Please share your comments.
Source: Steve's blog http://soa-java.blogspot.com
References:
• Software Testing and Continuous Quality Improvement by Lewis
• Code Complete by McConnell