by Dave Linthicum

More on SPO…

analysis
Feb 12, 2007 | 6 mins


Continued from my last blog on SPO…Service Performance Optimization

Second, Experiment and Test

Many of those who focus on the discipline of performance within complex distributed systems such as SOA will first steer you toward modeling. Unfortunately, we don’t know enough about how services behave to model how they will perform, so it’s a good idea to test the services that will make up your SOA before you build your performance model; otherwise, you’re just guessing.

So, how do you test services you’ve not yet built? It’s called a “proof-of-concept,” meaning you stand up very raw and simplistic versions of the services (either existing abstractions or new services) for the purpose of proving that they work and illustrating their operational characteristics. This is typically done in parallel with the existing design work, and the proof-of-concept is largely a throwaway after you gather your data, but it is nonetheless important to your understanding of the final product before you complete the design and development.

Testing services, even proof-of-concept services, means that you simulate operational characteristics during the test; that is, how you intend to leverage the service. You do this by building or buying test harnesses that can load the service as needed for testing. You should run low-use, medium-use, and high-use scenarios to determine how the service behaves under an increasing load, and make sure you have some sort of monitoring mechanism in place to gather the data for analysis.
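A minimal test harness for those low-, medium-, and high-use scenarios might look like the following sketch. The `invoke_service` function is a hypothetical stand-in (so the example runs on its own); in practice you would replace it with a real call to the proof-of-concept service and feed the results into your monitoring mechanism.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def invoke_service():
    """Hypothetical stand-in for a real service call; replace with an
    actual invocation of the proof-of-concept service under test."""
    time.sleep(0.002)  # simulated service latency
    return "ok"

def run_load_scenario(concurrency, total_calls):
    """Drive the service with a fixed number of concurrent callers and
    report throughput (calls/sec) and mean per-call latency."""
    latencies = []

    def timed_call(_):
        start = time.perf_counter()
        invoke_service()
        latencies.append(time.perf_counter() - start)

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(timed_call, range(total_calls)))
    elapsed = time.perf_counter() - start
    return {
        "concurrency": concurrency,
        "throughput": total_calls / elapsed,
        "mean_latency": statistics.mean(latencies),
    }

# Low-, medium-, and high-use scenarios under increasing load.
for level in (1, 10, 50):
    print(run_load_scenario(concurrency=level, total_calls=200))
```

Plotting throughput against concurrency from runs like these is what reveals the saturation point discussed next: throughput climbs with load, then flattens or drops.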

What you’ll find, in most cases, is that the service will reach a saturation point where performance drops off significantly as the load increases. The saturation point is largely dependent on the usage patterns of the service. For instance, a transactional service should be able to support a much higher load than a lightweight service.

Creation of a SOA Performance Model

SOAs are not unlike any other distributed computing systems, and thus designing a performance model should be nothing too new. At this point we understand exactly how each service behaves under an increasing load, and we have enough data to plug into a model. Now, it’s just a matter of building a model.

There are very expensive performance monitoring and simulation tools for sale on the market, but sometimes the least expensive and most simplistic tools work best…in many cases, just a spreadsheet will do. For our purposes, we need to consider both information and behavior in the context of performance; both are also core features of a SOA.

Information Movement Modeling, typically asynchronous in nature, means we’re attempting to simulate how information moves from point to point, from one point to many points, or from many points to many points.

Based on the information we accumulated we know:

– Information production rate from a service.

– Information consumption rate from a service.

For example, an instance of a service is able to produce 52 messages (or similar groupings of information) per second…the source service. An instance of a service is able to consume 34 messages per second…the target service. This is a simple point-to-point relationship, but keep in mind that multi-point-to-multi-point relationships are always possible, and those are a bit more complex to model, since you have to determine the patterns of movement between multiple points rather than all messages produced by a single service being consumed by another.
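For the simple point-to-point case, the spreadsheet-style arithmetic can be sketched in a few lines. Using the figures above (a source producing 52 messages/sec, a target instance consuming 34 messages/sec), the model shows the backlog growing and how many target instances would keep pace; the function names are my own, not part of any tool.

```python
import math

def backlog_growth(produce_rate, consume_rate):
    """Messages per second accumulating between source and target
    (zero when the target keeps up)."""
    return max(0.0, float(produce_rate - consume_rate))

def consumers_needed(produce_rate, consume_rate_per_instance):
    """Target-service instances required to keep pace with the source."""
    return math.ceil(produce_rate / consume_rate_per_instance)

# Figures from the example: source produces 52 msgs/sec,
# a single target instance consumes 34 msgs/sec.
print(backlog_growth(52, 34))    # 18.0 messages/sec pile up
print(consumers_needed(52, 34))  # 2 target instances keep pace
```

This is exactly the kind of calculation a spreadsheet handles well; the point is that the tested production and consumption rates, not guesses, drive the numbers.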

Moreover, keep in mind transformation and routing latency is typically an issue here as well, and needs to be modeled along with consumption and production. You should have test data from these services, but the performance of transformation and routing services will be largely dependent upon the complexities of the transformations and logic associated with the routing. What many do when creating performance models is to model very complex, complex, and simple transformation scenarios, and the percentages of each.
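The "percentages of each" approach described above amounts to a weighted average. A sketch, with hypothetical shares and latencies (the 10/30/60 mix and millisecond figures are illustrative assumptions, not measured data):

```python
def expected_latency(scenarios):
    """Weighted-average transformation/routing latency, given
    (share, latency_sec) pairs for each scenario class."""
    total_share = sum(share for share, _ in scenarios)
    assert abs(total_share - 1.0) < 1e-9, "shares must sum to 100%"
    return sum(share * latency for share, latency in scenarios)

# Hypothetical mix: 10% very complex (250 ms), 30% complex (80 ms),
# 60% simple (10 ms) transformations.
mix = [(0.10, 0.250), (0.30, 0.080), (0.60, 0.010)]
print(expected_latency(mix))  # expected per-message latency in seconds
```

The expected per-message latency then plugs into the movement model alongside the production and consumption rates.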

Service Invocation Modeling, typically more synchronous in nature, means we’re attempting to determine the number of times a service is able to provide a behavior (application function) in an instance of time, typically a second.

For instance, you may have a service that provides a risk calculation for the insurance business, and is perhaps abstracted into several different applications (composites). We know through testing that each composite can invoke the service up to 100 times a second before it hits a saturation point, meaning the performance of the service quickly diminishes as additional load is placed upon it. This saturation number plugs into the model, as well as the number of applications that are abstracting this service. You have to model all of these services in the same way.
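Plugging the saturation number into the model can be as simple as comparing aggregate demand against the tested limit. A sketch, assuming the 100-invocations-per-second saturation point applies to the service's aggregate load and using made-up per-composite rates:

```python
def invocation_headroom(saturation_rate, demand_by_composite):
    """Compare aggregate invocation demand from all composites against
    a service's tested saturation point (invocations/sec)."""
    demand = sum(demand_by_composite.values())
    return {
        "demand": demand,
        "saturation": saturation_rate,
        "utilization": demand / saturation_rate,
        "saturated": demand > saturation_rate,
    }

# Hypothetical figures: the risk-calculation service saturates at
# 100 invocations/sec; three composites drive it at these rates.
demand = {"underwriting": 40, "claims": 35, "portal": 20}
print(invocation_headroom(100, demand))
```

Running this per service, as the text suggests, flags which services sit closest to their saturation points as the number of abstracting applications grows.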

Models are important because they allow you to predict performance under changing needs without having to actually build and test the system. Models, of course, are not perfect, and you must constantly adjust assumptions and modeling information as you learn more about the behavior of the architecture.

Designing for SOA Performance, Monitoring, and Optimizing

So, now that we know how to diagnose the performance of a SOA, as well as model for it to determine how it will behave under a changing environment, how do we design a service and SOA with optimized performance? Here are a few tips.

– The more processing you can place at the origin of the service, the better your SOA will perform. In many SOAs, the architects abstract the services to a single server, and performance can be somewhat problematic in larger implementations.

– Many services are built on top of more traditional legacy APIs, and the translations required to expose those APIs as services may cause performance problems. The ability to leverage existing legacy systems as services is a powerful notion, but you must be careful in selecting the proper enabling technology to do this. Service invocations that take a second or more to produce behavior, or information bound to behavior, will cause big problems when you align them with hundreds of other services doing the same thing.

– Use of too many fine-grained services may cause performance problems. Indeed, you should not be afraid to leverage fine-grained services within your SOA. However, you need to understand the performance issues with doing so, taking the network bandwidth and how other applications leverage the services into careful consideration.

– Make sure to consider performance when selecting your orchestration layer. Many BPEL engines are notoriously poor performers, and can become the bottleneck for the SOA.

– Understand the basic rule that, while the value of an SOA is the ability to leverage many remote services, the more services you leverage, the more problematic your SOA will become.

Core SOA Performance Issues

Making solutions scale is nothing new. However, the SOA technologies and approaches recently employed are largely untested under higher application, information, and service-management traffic loads. SOA implementers were happy to get their solutions up and running, yet, in many cases, scalability simply was not a consideration within the SOA, nor was load testing or other performance fundamentals. We are seeing the results of this neglect now that SOA problem domains are exceeding the capacity of their architectures and technology. It does not have to be this way.