by Maggie Biggs

Dashboards for the enterprise

reviews
Mar 4, 2005 | 6 mins

Indicative and Segue performance managers are far-reaching and flexible

Enterprise application performance and service-level management solutions not only help companies adhere to business-defined SLAs; they also help IT staffs quickly identify problems in complex, distributed applications that often span multiple technology stacks from many vendors. When production issues do occur, these tools promptly pinpoint the problem, cutting time to resolution and eliminating finger-pointing among technology groups.

Among the dozens of such solutions in the marketplace, Indicative Service Director and SLA Manager 6.5 from Indicative Software, and SilkCentral Performance Manager 2.7 from Segue Software, are two that are definitely up to the task of managing performance in large, mixed enterprise environments.

Both solutions provide monitoring agents for a wide range of platforms and technologies, including server operating systems, networks, Web servers, and application servers. They offer tremendous flexibility in configuring performance metrics and defining service levels, and can eliminate the need for a large number of disparate monitoring tools in your datacenter.

In addition to passive and agent-based monitoring of production systems, both solutions perform active and agentless monitoring, allowing transaction scripting for regression and load testing of applications during staging. Both offer hooks into other management frameworks such as HP OpenView and IBM Tivoli; Segue also supports Computer Associates’ Unicenter.

From a functional standpoint, you’d be hard-pressed to find something that one could do and the other couldn’t. The main differences lie in ease of implementation and use. In both areas, Indicative has the edge.

To test the solutions, I used them to monitor a half-dozen applications running on Apache and Microsoft IIS Web servers, BEA WebLogic and IBM WebSphere J2EE application servers, and IBM DB2 and Oracle databases.

Indicative Service Director and SLA Manager

The Indicative solution installed quickly and easily. Components include the Diagnostic Measurement Server, which is used for gathering and interpreting metrics; the administrative console, used to configure the monitoring environment; and the end-user graphical interface, which supports role-based access.

The admin console made it easy to define monitors and thresholds for my various servers. I simply dragged and dropped templates into my “service model” and set the threshold to the appropriate response time for my WebSphere servers, Oracle databases, and other systems.

Indicative provides more than 700 templates for monitoring everything from CPU utilization of your servers to the performance of Java or .Net applications, e-mail systems, network infrastructure, and Web services. For custom applications and other cases that the templates don’t cover, administrators can define their own monitors; the same is true for Segue’s solution.
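A user-defined monitor of the kind described above typically boils down to probing a resource, timing the response, and mapping the result to a status. The sketch below is purely illustrative, assuming nothing about either product's actual API; the function names and thresholds are my own.

```python
# Hypothetical sketch of a custom response-time monitor; names and
# thresholds are examples, not Indicative's or Segue's interfaces.
import time
import urllib.request

def classify(latency_s, warn_s, crit_s):
    """Map a measured response time (seconds) to a traffic-light status."""
    if latency_s >= crit_s:
        return "red"
    if latency_s >= warn_s:
        return "yellow"
    return "green"

def probe(url, warn_s=1.0, crit_s=3.0, timeout=5.0):
    """Time one HTTP GET against the target and classify the result."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            pass
    except OSError:
        return "red"  # an unreachable target counts as a critical violation
    return classify(time.monotonic() - start, warn_s, crit_s)
```

In practice a product-supplied template would package the probe, thresholds, and scheduling together; the point here is only the threshold logic.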

One particularly useful feature in Indicative (and lacking in Segue) is autodiscovery of hosts and applications on the network. I used autodiscovery to locate and populate monitors for all the Web servers in my test environment. If you have a particularly large enterprise that requires deploying many monitors, autodiscovery is a real time-saver.
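Conceptually, autodiscovery amounts to sweeping an address range for hosts answering on well-known service ports. Real implementations use far richer techniques (SNMP, service fingerprinting); the toy sketch below, with example hosts and ports of my own choosing, shows only the core idea.

```python
# Minimal illustration of network autodiscovery: find hosts in a list
# that answer on a given TCP port (e.g. 80 for Web servers).
import socket

def is_port_open(host, port, timeout=0.5):
    """Return True if a TCP connect to host:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def discover(hosts, port=80):
    """Return the subset of hosts with a listener on the given port."""
    return [h for h in hosts if is_port_open(h, port)]
```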

The end-user graphical interface is very easy to navigate and interpret, providing the familiar red, green, and yellow traffic-light colors to reflect current conditions. A rolled-up view of the service model and its status is available for top-level executives, whereas IT administrators can have more specialized and detailed views into specific applications or portions of the service model. Overall, I found Indicative’s GUI to be more comprehensive and intuitive than Segue’s. The views were better tailored to different roles, and it was easier to drill down from dashboard information to diagnostic details.

To test Indicative’s active monitoring, I made several changes to three interrelated Web sites and then generated load against a staging environment to validate the modifications. I soon saw that the changes had improved performance when compared with the data previously collected using passive monitoring in the production environment.
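Generating load against a staging environment, as in the test above, follows a simple pattern: fire many concurrent requests and collect per-request latencies for comparison against baseline data. This sketch uses a generic callable as the target and is not either product's scripting language.

```python
# Toy load generator: invoke a target callable N times across a thread
# pool and return the measured latency of each call, in seconds.
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(target, requests=50, concurrency=10):
    """Run `requests` calls to target() with up to `concurrency` threads."""
    def timed_call(_):
        start = time.monotonic()
        target()  # e.g. a function wrapping one scripted HTTP transaction
        return time.monotonic() - start
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_call, range(requests)))
```

Comparing the latency distribution from such a run before and after a change is the essence of the regression check described above.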

Indicative includes good reporting and notification capabilities. Web-based reports can be produced for admins, end users, and other stakeholders on a scheduled basis or on demand, and a full range of export formats is supported.

Segue SilkCentral Performance Manager

Segue’s solution didn’t install as easily as Indicative’s did. Unlike Indicative, which includes an embedded database, Segue requires an external repository — Oracle (on Windows or Solaris), Microsoft SQL Server, or MSDE (Microsoft SQL Desktop Engine). I had some issues getting database connectivity working, but eventually resolved them.

Behind its browser-based GUI, Segue puts a number of components to work. A front-end server presents the GUI. An application server runs the logic needed to support actions such as distributing schedule information or saving monitoring results to the database. An execution server runs agent-based testing using SilkTest and SilkPerformer (Segue’s companion products) together with SilkPerformer-specific agents. A charting server generates results graphs for viewing in the user’s Web browser.

I also installed the separate passive monitoring module and the TrueLog Viewer, which is used to view the results of simulated monitoring sessions graphically, as well as the InetSoft Style Report Designer, which is used to create custom reports of monitoring activity.

After installation was complete, I created several user accounts under different roles and configured other portions of the solution, such as the charting server and the locations I would monitor.

The next step was to create projects and monitors to watch the server health of my test systems as well as the performance of my test applications. It wasn’t hard to configure all the monitors I needed for the tests, but Segue’s interfaces proved less intuitive than those of the Indicative solution. As a result, it took longer to complete the Segue setup.

One particularly useful feature in Segue’s solution is its capability to capture all aspects of an end-user’s session, which proved valuable during functional testing of applications in my staging environment. After Segue’s monitor alerts you to an error, you can use the TrueLog Viewer to walk through the end-user session and view the error exactly as the end-user saw it.

Like Indicative, Segue matches run-time health against SLAs you define. Although it provides roughly the same flexibility in defining metrics and service levels, it’s not quite as easy to set up as Indicative. After setting up SLAs, I was able to use the SLA status tab to view — via red, green, or yellow graphics — when my test applications were meeting or violating my response-time thresholds.
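Matching run-time health against a defined SLA, as both products do, typically means computing what fraction of sampled transactions fell within the response-time threshold over a window and comparing that to the SLA target (e.g., "95% of requests under 2 seconds"). The sketch below illustrates the idea; the thresholds, target, and warning margin are invented examples, not either vendor's defaults.

```python
# Hedged illustration of SLA window evaluation: compute compliance over
# a window of response-time samples and map it to red/yellow/green.
def sla_status(samples_s, threshold_s=2.0, target=0.95, warn_margin=0.02):
    """Return (status, compliance) for one window of latency samples."""
    if not samples_s:
        return "green", 1.0  # no traffic observed: treat as compliant
    within = sum(1 for s in samples_s if s <= threshold_s)
    compliance = within / len(samples_s)
    if compliance >= target:
        return "green", compliance
    if compliance >= target - warn_margin:
        return "yellow", compliance  # near-miss: warn before violation
    return "red", compliance
```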

Segue fell slightly short in my usability tests. Functionally, the two are on a par. Both deserve a close look from any enterprise wanting to monitor application service levels across a large and varied infrastructure. Segue’s ability to replay end-user sessions is a welcome feature and a great benefit when diagnosing some types of application problems. Indicative’s autodiscovery feature and smoother GUI, however, earn it my nod.

InfoWorld Scorecard

                                                  Setup  Monitoring  Scalability  Reporting  Manageability  Value  Overall
Weight                                             10%      20%         20%         20%          20%         10%    100%
Indicative Service Director and SLA Manager 6.5    9.0      9.0         8.0         8.0          9.0         9.0    8.6
Segue SilkCentral Performance Manager 2.7          7.0      9.0         8.0         9.0          8.0         8.0    8.3