User experience can boost or kill your revenue. Unhappy users are likely to abandon a service they struggle with and move to your competitors. To manage the experience of your users effectively, you need to monitor and understand their transactions in your mobile, web, and enterprise applications. More importantly, and often overlooked, the practice of User Experience Management (UEM) does not end at the client application. Common user experience tools that look only at the client fail to ensure holistic UEM, in the same way that performance management based solely on analyzing server logs falls short. Neither approach will shed light on the true user experience.
UEM is one of the key aspects of Application Performance Management (APM). It can be realized through several distinct technologies. In this article, we discuss why no single one of them can provide enough visibility on its own. We argue that true, end-to-end UEM should rely on data gathered by different monitoring technologies.
Technologies For End-User Experience Monitoring
Although UEM is only one of the dimensions of APM, it is the one that gets the most attention. End-user experience is the point where business processes interact with the technology stack. We can define three types of UEM technologies, each observing from a different perspective (see Fig. 1):
- Synthetic, Transaction-based UEM: Using synthetic scripts, we can monitor web and intranet applications from different locations on both the Internet and internal WANs, using mobile or web client applications. This allows us to simulate multiple end-user environments during off-peak or non-business hours to check service levels against the SLA. The result is constant, repeatable, and comparable measurements, well suited for baselining. It resembles testing car safety with crash-test dummies: you learn a lot about car safety, but it might not directly translate to the experience of actual passengers in real-life situations (see Fig. 2).
- Endpoint Instrumentation: When we look at user experience from the perspective of endpoint (mobile client or web-based) instrumentation, we even get data to analyze user behavior. Nevertheless, performance analysis based on endpoint instrumentation alone is like looking only at the speedometer and a few other gauges on your car's dashboard: it gives you plenty of information about your speed, RPMs, and fuel level, but if the engine blows a piston and stops your car, the dashboard alone won't tell you the source of the problem … it just says your speed is zero! Endpoint instrumentation is also heavily dependent on the client's technology stack, with which it has to interact seamlessly.
- Network Packet Capture and Analysis within the Data Center: Analyzing network traffic across the whole data center, all the way to the end-user application, enables you to correlate end-user experience with the actual state of the network and services; it remains the only feasible UEM solution when the client front end cannot be instrumented but still uses TCP to connect to the data center. This perspective, however, remains blind to the impact of everything that happens outside of the data center, e.g., third-party components. As with endpoint instrumentation, even a full set of dashboard instruments might not always accurately reflect the quality of our journey.
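To make the synthetic perspective concrete, here is a minimal, illustrative sketch of a synthetic probe: it times a single request and classifies the result against an SLA threshold. The URL, the `SLA_MS` threshold, and the function names are hypothetical examples, not part of any monitoring product.

```python
import time
import urllib.request

SLA_MS = 2000.0  # hypothetical SLA threshold, in milliseconds


def probe(url: str) -> float:
    """Fetch the URL once and return the elapsed time in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()  # drain the body so the full transfer is timed
    return (time.perf_counter() - start) * 1000.0


def sla_status(elapsed_ms: float, sla_ms: float = SLA_MS) -> str:
    """Classify a single measurement against the SLA threshold."""
    return "OK" if elapsed_ms <= sla_ms else "BREACH"


if __name__ == "__main__":
    # A real synthetic agent would run probes like this on a schedule,
    # from many locations, and record the results for baselining.
    elapsed = probe("https://example.com/")
    print(f"{elapsed:.0f} ms -> {sla_status(elapsed)}")
```

A scheduler running such probes from several locations during off-hours is what produces the repeatable, comparable measurements described above.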
Comprehensive UEM: Correlated Answers Instead of Separated Data
Using technologies that report from different perspectives is only half the battle. Whether you've got the complex gauges of an airplane cockpit or the simple dashboard of a car, turning their readings into actionable information isn't something you can just eyeball.
When we bring all three UEM technologies together, we still need a report which tells us the correlated state (from the end user perspective) of the applications we monitor.
For each application, we could show a correlated performance status covering (see Fig. 3):
- Which applications perform according to expectations or fail to deliver
- The status of transactions delivered by those applications
- How application performance is seen from the perspective of the synthetic agent
- Whether there are any network problems within the data center or on the way to the end user
- Whether any of the technology tiers, e.g., web server, middleware, database, mainframe, are failing to deliver
- The business impact of the problem – how many users or visits to the given application experience performance problems, and where those users are coming from.
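The correlation step itself can be pictured as a simple roll-up: each monitoring perspective reports a traffic-light status per application, and the report surfaces the worst observed state. The following is an illustrative sketch under assumed status names ("green"/"yellow"/"red", with "void" meaning no data), not an actual product rule.

```python
# Hypothetical severity ordering for traffic-light statuses.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}


def rollup(statuses: dict[str, str]) -> str:
    """Return the worst non-void status across all monitoring perspectives.

    `statuses` maps a perspective name (e.g. "synthetic", "real_users",
    "network") to its traffic-light status; "void" marks a perspective
    that has no data yet.
    """
    reported = [s for s in statuses.values() if s != "void"]
    if not reported:
        return "void"
    return max(reported, key=lambda s: SEVERITY[s])


app = {"synthetic": "green", "real_users": "yellow", "network": "green"}
print(rollup(app))  # the worst status across perspectives wins
```

The point of the roll-up is that one degraded perspective is enough to flag the application, while the per-perspective statuses remain available for fault isolation.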
With data coming from these different measurement perspectives, we can quickly tell which applications are not offering the best performance for our end users, and why. Synthetic agent measurements will warn us about problems ahead of busy office hours, or about problems at a particular location: the Synthetic Agent will turn red while Real Users are still green or void. This might give us just enough time to react before real users start using our application. The endpoint instrumentation delivering information about Real Users tells us exactly how users perceive our applications, while correlating data from network monitoring (the remaining traffic lights to the left of Synthetic Agents) helps isolate the fault domain responsible for the problem.
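The early-warning pattern described above – synthetic red while real users are still green or void – can be sketched as a simple predicate. This is an illustrative rule under the same assumed status names, not an actual product alert definition.

```python
def early_warning(synthetic: str, real_users: str) -> bool:
    """Flag a problem seen by synthetic agents before real users are affected.

    Hypothetical rule: the synthetic agent reports "red" while real-user
    monitoring is still "green" or has no data yet ("void").
    """
    return synthetic == "red" and real_users in ("green", "void")


# An off-hours probe fails before users arrive: time to react pre-emptively.
print(early_warning("red", "void"))   # True
print(early_warning("red", "red"))    # False: users are already affected
```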
All of the aforementioned perspectives on User Experience Management are implemented by different components of the Dynatrace APM suite:
- APM SaaS and on-premise solutions deliver synthetic monitoring for Enterprise, Mobile, and Web applications
- Data Center Real User Monitoring (DC RUM) analyzes network packets to correlate end-user experience with the state of the network and the data center
The new landing-page report in DC RUM, Application Health Status (see Fig. 3), enables you to correlate performance data gathered from all of the aforementioned perspectives and products on a single pane of glass. Please visit our APM Community to learn how this report delivers comprehensive UEM.