Monday, October 1, 2012

How to address software performance issues: Proactive vs. Reactive?


As a Java developer, I find it quite fun to learn from the .NET community, for example through the "patterns & practices" series that Microsoft provides for free. Here are some lessons from "Improving .NET Application Performance and Scalability" by Meier et al.

Reactive approach

• You investigate performance only when you face problems after design and coding, on the grounds that this avoids premature optimization.
• Your bet is that you can tune and scale vertically (buying faster, more expensive hardware or more cloud resources). You end up with higher hardware expenses and total cost of ownership.
• Performance problems are frequently introduced early in the design and cannot always be fixed through tuning or more efficient coding. Moreover, fixing architectural or design issues late in the cycle is very expensive and not always possible.
• You generally cannot tune a poorly designed system to perform as well as a system that was well designed from the start.

Proactive approach

• You incorporate performance modelling and validation from the early design phase onward.
• You iteratively test your assumptions and design decisions by prototyping and measuring the performance of each alternative (e.g. Hibernate vs. iBatis); see the timing sketch after this list.
• You evaluate the trade-offs between performance/scalability and other quality-of-service attributes (data integrity, security, availability, manageability) from the design phase onward.
• You know where to focus your optimization efforts.
• You decrease the need to tune and redesign; therefore, you save money.
• You can save money with less expensive hardware or less frequent hardware upgrades.
• You have reduced operational costs.
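A minimal sketch of such a prototype comparison, in plain Java: the two Supplier workloads below are placeholders standing in for, say, a Hibernate-based and an iBatis-based data access layer. For serious measurements a dedicated harness such as JMH is preferable, since naive timing loops are easily skewed by JIT compilation and garbage collection.

import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Sketch: prototype two design alternatives and compare their measured latency.
// The suppliers are placeholders; in a real prototype they would wrap e.g.
// a Hibernate-based and an iBatis-based DAO.
public class PrototypeBenchmark {

    static long measureAvgMicros(String label, Supplier<?> candidate, int iterations) {
        // Warm up so JIT compilation does not skew the measurement.
        for (int i = 0; i < 1_000; i++) {
            candidate.get();
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            candidate.get();
        }
        long avgMicros = TimeUnit.NANOSECONDS.toMicros((System.nanoTime() - start) / iterations);
        System.out.printf("%s: %d us/op%n", label, avgMicros);
        return avgMicros;
    }

    public static void main(String[] args) {
        // Placeholder workloads standing in for the two persistence designs.
        Supplier<String> alternativeA = () -> "order-" + Math.sqrt(42);
        Supplier<String> alternativeB = () -> String.valueOf(42 * 42);

        measureAvgMicros("alternative A (e.g. Hibernate)", alternativeA, 100_000);
        measureAvgMicros("alternative B (e.g. iBatis)", alternativeB, 100_000);
    }
}

The point is not the absolute numbers but the relative comparison under the same workload, which is what guides the design decision.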


Performance modelling process

1. Identify key scenarios (use cases with specific performance requirements/SLAs, frequently executed, consuming significant system resources, or running in parallel)
2. Identify the workload (e.g. total concurrent users, data volume)
3. Identify performance objectives (e.g. response time, throughput, resource utilization)
4. Identify budgets (maximum processing time, server timeout, CPU utilization percentage, memory in MB, disk I/O, network I/O in Mbps, number of database connections, hardware & license costs)
5. Identify the processing steps for each scenario (e.g. submit order, validate, database processing, respond to the user)

For each step:

6. Allocate a budget to the step
7. Evaluate (by prototyping and testing/measuring): Does the budget meet the objective? Are the requirements and budget realistic? Do you need to modify the design or deployment topology? (A small sketch of this budget check follows the list.)
8. Validate your model.
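To make steps 6 and 7 concrete, here is a minimal sketch in plain Java of allocating per-step time budgets for a hypothetical "submit order" scenario and checking measured timings against them. The step names, budget numbers, and measurements are all illustrative assumptions, not from a real system.

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of steps 6-7: allocate a time budget to each processing step of a
// scenario and check measured timings against it.
public class BudgetEvaluation {

    public static void main(String[] args) {
        // Budget allocation (ms) for a "submit order" scenario with a
        // 3000 ms end-to-end response-time objective.
        Map<String, Long> budgetMs = new LinkedHashMap<>();
        budgetMs.put("validate input", 200L);
        budgetMs.put("database processing", 2000L);
        budgetMs.put("render response", 500L);
        // Leave headroom: 200 + 2000 + 500 = 2700 ms < 3000 ms objective.

        // Measured timings would come from profiling a prototype;
        // these numbers are made up for illustration.
        Map<String, Long> measuredMs = Map.of(
                "validate input", 150L,
                "database processing", 2400L,
                "render response", 300L);

        for (Map.Entry<String, Long> step : budgetMs.entrySet()) {
            long measured = measuredMs.get(step.getKey());
            String verdict = measured <= step.getValue()
                    ? "OK" : "OVER BUDGET -> revisit design";
            System.out.printf("%-20s budget=%4d ms, measured=%4d ms: %s%n",
                    step.getKey(), step.getValue(), measured, verdict);
        }
    }
}

A step that blows its budget (here the database processing) is exactly the signal that the design or deployment topology needs to change before more code is written.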


Performance Model Document

The contents:
• Performance objectives.
• Budgets.
• Workloads.
• Itemized scenarios with goals.
• Test cases with goals.

Use risk-driven agile architecture

First, prototype and test the riskiest areas (e.g. unfamiliar technologies, strict SLA requirements). The results will guide your next design step. Repeat earlier tests (regression testing) in subsequent spirals, for example using continuous integration. When you address the riskiest areas first, you still have room to look for alternatives or to renegotiate with the customer if problems arise.
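Such regression tests can carry a performance assertion so that every CI build re-checks an earlier result. A minimal sketch, assuming JUnit 5 on the classpath; the OrderService call and its 500 ms objective are hypothetical placeholders.

import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;
import org.junit.jupiter.api.Test;

// Sketch of a performance regression test that runs in every CI build.
class OrderServicePerformanceTest {

    @Test
    void submitOrderMeetsResponseTimeObjective() {
        // Fails the build if the call exceeds the budgeted 500 ms, so a
        // performance regression is caught in the next spiral.
        assertTimeout(Duration.ofMillis(500), () -> {
            // new OrderService().submit(sampleOrder());  // hypothetical call
            Thread.sleep(100); // stand-in workload for the sketch
        });
    }
}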



Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)


Reference:

"Improving .NET Application Performance and Scalability" by Meier et.al.
