Sunday, March 21, 2010

Architectural Patterns That Limit Application Scalability & Performance Optimization

Note - Currently working on this blog

Client & Server Pattern
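As a rough sketch of the client & server pattern, here is a single-threaded blocking echo server (the port and "protocol" are made up). Because one thread accepts and serves requests sequentially, throughput is capped at roughly 1 / per-request latency no matter how many clients are waiting, which is exactly the kind of structural limit this post is about.

import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal blocking client/server sketch (port 9090 is arbitrary).
// One thread accepts and serves requests sequentially, so throughput
// is capped at roughly 1 / (per-request latency), regardless of how
// many clients are queued up.
public class EchoServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9090)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String request = in.readLine();   // blocks on this one client
                    out.println("echo: " + request);  // serves one request at a time
                }
            }
        }
    }
}

The usual way out is to hand each accepted connection to a pool of worker threads, which is where the master & worker pattern below comes in.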


Master & Worker Pattern
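A minimal master & worker sketch using java.util.concurrent (the pool size and the squaring task are placeholders): the master thread splits the job into independent tasks, the workers execute them in parallel, and the master gathers and combines the results.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Master/worker sketch: the master thread submits independent tasks to a
// fixed pool of workers and then gathers the results.
public class MasterWorker {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4); // 4 workers (arbitrary)
        List<Future<Integer>> results = new ArrayList<Future<Integer>>();

        for (int task = 0; task < 20; task++) {            // master hands out the work
            final int n = task;
            results.add(workers.submit(new Callable<Integer>() {
                public Integer call() {                    // worker does the actual computation
                    return n * n;
                }
            }));
        }

        int total = 0;
        for (Future<Integer> f : results) {
            total += f.get();                              // master collects / combines results
        }
        workers.shutdown();
        System.out.println("total = " + total);
    }
}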

Partition For Parallelism 
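The sketch below illustrates partitioning for parallelism under an assumed hash-partitioning scheme: keys are mapped to a fixed number of partitions, and each partition can then be processed by a separate thread, process, or node with no shared state (the same idea a partitioned cache such as Coherence uses). The keys and partition count are invented.

import java.util.*;

// Partition-for-parallelism sketch: split a data set into N independent
// partitions by key hash, so each partition can be processed by a separate
// worker (thread, process, or node) with no shared state.
public class Partitioner {
    public static void main(String[] args) {
        int partitionCount = 4;                              // arbitrary
        List<List<String>> partitions = new ArrayList<List<String>>();
        for (int i = 0; i < partitionCount; i++) {
            partitions.add(new ArrayList<String>());
        }

        String[] keys = {"order-1", "order-2", "order-3", "order-4", "order-5"};
        for (String key : keys) {
            int p = Math.abs(key.hashCode() % partitionCount); // same key always lands in the same partition
            partitions.get(p).add(key);
        }

        // Each partition is now an independent unit of work.
        for (int i = 0; i < partitionCount; i++) {
            System.out.println("partition " + i + " -> " + partitions.get(i));
        }
    }
}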

Common Approach To Improve Throughput

Performance Optimization Concepts





Latency = the time delay between starting an activity and when its results are available / detectable.
Throughput = the number of tasks completed per unit of time.
Performance (perceived speed / responsiveness) = the number of requests made and acknowledged in a unit of time.
Throughput and Performance are often confused! (sometimes they are the same)
Example:
Average Throughput = 10 tasks / sec
Average Latency = 1000 ms (1 sec per task - consistent with 10 tasks/sec only if roughly 10 tasks are in flight at once)
Performance = unknown (the two averages alone do not tell you how responsive the system feels to a user)

NOTE - 


To improve performance: reduce latencies (between request and response).
To improve throughput: increase capacity (or reduce total latency).
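To make the distinction concrete, the sketch below simulates the numbers from the example above: each task takes about 1000 ms, but with 10 tasks in flight at once the system still completes roughly 10 tasks per second. The task count, pool size, and sleep time are illustrative only.

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Throughput vs. latency sketch: each task takes ~1000 ms, but with 10 tasks
// in flight at once the system still completes ~10 tasks per second.
public class ThroughputVsLatency {
    public static void main(String[] args) throws Exception {
        final int concurrency = 10;
        final int taskCount = 50;
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        final AtomicLong totalLatencyMs = new AtomicLong();

        long start = System.nanoTime();
        for (int i = 0; i < taskCount; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    long t0 = System.nanoTime();
                    try {
                        Thread.sleep(1000);              // simulated 1000 ms of work per task
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    totalLatencyMs.addAndGet((System.nanoTime() - t0) / 1000000);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.MINUTES);
        double elapsedSec = (System.nanoTime() - start) / 1000000000.0;

        System.out.printf("throughput  = %.1f tasks/sec%n", taskCount / elapsedSec);    // ~10
        System.out.printf("avg latency = %d ms%n", totalLatencyMs.get() / taskCount);   // ~1000
    }
}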


Only Execute Mandatory Tasks

Increase CPU speed (scale up)

Optimize Algorithm
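As a small illustration of optimizing the algorithm rather than the hardware, the sketch below replaces a repeated linear scan with a one-time index build and constant-time hash lookups; the customer names are made up.

import java.util.*;

// "Optimize algorithm" sketch: replacing a repeated linear scan (O(n) per lookup)
// with a one-time index build and O(1) hash lookups. Same result, far less work.
public class LookupOptimization {
    public static void main(String[] args) {
        List<String> customers = Arrays.asList("alice", "bob", "carol", "dave");

        // Naive: scan the whole list on every lookup.
        boolean foundSlow = customers.contains("carol");      // O(n) each call

        // Optimized: build a HashSet once, then each lookup is O(1).
        Set<String> index = new HashSet<String>(customers);
        boolean foundFast = index.contains("carol");

        System.out.println(foundSlow + " " + foundFast);
    }
}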

Exploit Parallelism (Scale Out)

Optimize Large Latencies
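One common way to optimize a large latency is to pay it once and cache the result. The sketch below hides a slow remote call behind an in-memory cache; loadFromRemoteService() and the 200 ms delay are made-up stand-ins for whatever the expensive call really is.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// "Optimize large latencies" sketch: cache the result of a slow call
// (remote service, database, disk) so repeat requests skip the big delay.
public class PriceService {
    private final Map<String, Double> cache = new ConcurrentHashMap<String, Double>();

    public double getPrice(String symbol) {
        Double cached = cache.get(symbol);
        if (cached != null) {
            return cached;                            // hit: microseconds
        }
        double price = loadFromRemoteService(symbol); // miss: pays the large latency once
        cache.put(symbol, price);
        return price;
    }

    private double loadFromRemoteService(String symbol) {
        try {
            Thread.sleep(200);                        // simulate a 200 ms remote call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return 42.0;                                  // dummy value
    }

    public static void main(String[] args) {
        PriceService svc = new PriceService();
        System.out.println(svc.getPrice("ORCL"));     // slow first call
        System.out.println(svc.getPrice("ORCL"));     // fast cached call
    }
}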

Best of all worlds - Do them all

Reduce Use Of XML
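To show why reducing XML can matter, the sketch below encodes the same three fields as an XML string and as a compact binary record; the XML payload is several times larger and also has to be parsed. The trade fields are made up, and a real system would weigh this against XML's readability and tooling.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// "Reduce use of XML" sketch: the same three fields as an XML string
// versus a compact binary record.
public class PayloadSizes {
    public static void main(String[] args) throws IOException {
        String xml = "<trade><symbol>ORCL</symbol><qty>100</qty><price>24.5</price></trade>";

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeUTF("ORCL");       // symbol
        out.writeInt(100);          // qty
        out.writeDouble(24.5);      // price
        out.flush();

        System.out.println("xml bytes    = " + xml.getBytes("UTF-8").length);
        System.out.println("binary bytes = " + bytes.size());
    }
}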


Use Case - Oracle Coherence In Compute Grid
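A sketch of the compute-grid idea with Oracle Coherence, under the assumption of a partitioned cache named "orders" holding prices keyed by order id, coherence.jar on the classpath, and a running cluster: instead of pulling every entry back to one client, an EntryProcessor is shipped to the storage nodes and executed in parallel against the partitions each node owns, so the work scales out with the data. The 10% repricing logic is a made-up example.

import java.io.Serializable;
import java.util.Map;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.filter.AlwaysFilter;
import com.tangosol.util.processor.AbstractProcessor;

// Compute-grid sketch: the processor is serialized, shipped to the storage
// members, and run in parallel against the partitions each member owns.
public class RepriceAllOrders {

    // Must be serializable so Coherence can send it across the wire.
    public static class RepriceProcessor extends AbstractProcessor implements Serializable {
        public Object process(InvocableMap.Entry entry) {
            Double price = (Double) entry.getValue();
            entry.setValue(price * 1.10);             // work happens where the data lives
            return null;
        }
    }

    public static void main(String[] args) {
        NamedCache orders = CacheFactory.getCache("orders");
        // Runs the processor near the data, in parallel across the cluster.
        Map results = orders.invokeAll(new AlwaysFilter(), new RepriceProcessor());
        System.out.println("entries processed: " + results.size());
    }
}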
