This topic provides an overview of some of the tools AppDynamics provides for monitoring Java applications and troubleshooting common issues.
JVM Key Performance Indicators
A typical JVM exposes thousands of attributes that reflect various aspects of its activity and state. The key performance indicators that AppDynamics focuses on as most useful for evaluating performance include:
- Total classes loaded and how many are currently loaded
- Thread usage
- Percent CPU process usage
On a per-node basis, AppDynamics reports:
- Heap usage
- Garbage collection
- Memory pools and caching
- Java object instances
You can configure additional monitoring for:
- Automatic leak detection
- Custom memory structures
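These indicators are all exposed through the JVM's standard platform MXBeans, which is also how monitoring agents typically read them. A minimal sketch that reads them locally (the `com.sun.management.OperatingSystemMXBean` cast for process CPU assumes a HotSpot-based JDK):

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class JvmKpiSnapshot {
    public static void main(String[] args) {
        // Total classes loaded since startup vs. currently loaded
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        System.out.println("Classes loaded (total/current): "
                + cl.getTotalLoadedClassCount() + "/" + cl.getLoadedClassCount());

        // Thread usage
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("Live threads: " + threads.getThreadCount());

        // Process CPU usage requires the com.sun.management extension (HotSpot JDKs)
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        System.out.println("Process CPU load: " + os.getProcessCpuLoad());

        // Heap usage
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap used (bytes): " + mem.getHeapMemoryUsage().getUsed());
    }
}
```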
Monitoring JVM Performance
You can view JVM performance information from the node dashboard or from the Metric Browser.
In the node dashboard, see the following tabs for JVM-specific information:
- The Memory subtab of the dashboard allows you to view various types of JVM performance information: Heap and Garbage Collection, Automatic Leak Detection, Object Instance Tracking, and Custom Memory Structures.
- The JMX subtab of the dashboard allows you to view information about JVM classes, garbage collection, memory, threads, and process CPU. In the JMX Metrics subtab metric tree, click an item and drag it to the line graph to plot current metric data.
In the Metric Browser, click Application Infrastructure Performance and expand the JVM folder for a given node to access information about Garbage Collection, Classes, Process CPU, Memory, and Thread use.
Alert for JVM Health
You can set up health rules based on JVM or JMX metrics. Once you have a health rule, you can create specific policies based on health rule violations. One type of response to a health rule violation is an alert. See Alert and Respond for a discussion of how health rules, alerts, and policies can be used.
You can also create additional persistent JMX metrics from MBean attributes. See Configure JMX Metrics from MBeans.
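As a sketch of where such MBean attributes come from, the following registers a custom MBean on the platform MBean server; its `OrdersProcessed` attribute is the kind of value that can then be mapped to a persistent JMX metric. The `OrderStats` name and `com.example` domain are illustrative, not part of AppDynamics:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class CustomMetricExample {
    // Management interface; StandardMBean lets us name it explicitly
    // instead of relying on the FooMBean naming convention.
    public interface OrderStats {
        long getOrdersProcessed();
    }

    static class OrderStatsImpl implements OrderStats {
        private volatile long ordersProcessed;
        @Override public long getOrdersProcessed() { return ordersProcessed; }
        void recordOrder() { ordersProcessed++; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        OrderStatsImpl stats = new OrderStatsImpl();
        ObjectName name = new ObjectName("com.example:type=OrderStats"); // illustrative name
        server.registerMBean(new StandardMBean(stats, OrderStats.class), name);

        stats.recordOrder();
        // The attribute is now readable through JMX, e.g. by a monitoring agent
        System.out.println("OrdersProcessed = "
                + server.getAttribute(name, "OrdersProcessed"));
    }
}
```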
JVM Crash Guard
Using the Standalone Machine Agent, you can be notified almost immediately when a JVM crash occurs on a machine or node and take remediation action. Prompt notification of a JVM crash is important because a crash may be a sign of a severe runtime problem in an application. Implemented as part of JVM Crash Guard, JVM Crash is an event type that you can activate to provide the critical information you need to handle JVM crashes expeditiously.
Monitor Memory Usage
Memory management includes managing the heap, certain memory pools, and garbage collection. This section focuses on managing the heap. You can view heap information in the Metric Browser or in the Memory tab for a given node, as described in "Monitoring JVM Performance."
The size of the JVM heap can affect performance and should be adjusted if needed:
- A heap that is too small will cause excess garbage collections and increases the chances of
- A heap that is too big will delay garbage collection and stress the operating system when needing to page the JVM process to cope with large amounts of live data
For more detail on garbage collection, see Garbage Collection.
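As a rough way to check heap headroom from inside the application, you can read the configured bounds (set with the `-Xms` and `-Xmx` JVM options) through `MemoryMXBean`; a minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapHeadroom {
    public static void main(String[] args) {
        // Heap bounds are configured with -Xms (initial) and -Xmx (maximum)
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        long used = heap.getUsed();
        long max = heap.getMax(); // -1 if the maximum is undefined

        System.out.println("Heap used: " + used / (1024 * 1024) + " MB");
        if (max > 0) {
            System.out.printf("Heap max: %d MB (%.0f%% used)%n",
                    max / (1024 * 1024), 100.0 * used / max);
        }
    }
}
```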
Detect Memory Leaks
By monitoring the JVM heap and memory pools, you can identify potential memory leaks. Consistently increasing heap valleys (the low points after each garbage collection) might indicate either an improper heap configuration or a memory leak. You can identify potential memory leaks by analyzing the usage pattern of either the survivor space or the old generation. To troubleshoot memory leaks, see Java Memory Leaks.
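One way to observe this usage pattern yourself is to sample the old-generation pool's post-collection usage with `MemoryPoolMXBean`; a steadily rising series of these values suggests a leak. A sketch, assuming the standard platform MXBeans (pool names vary by collector, e.g. "G1 Old Gen" or "PS Old Gen"):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class OldGenWatch {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP
                    && pool.getName().toLowerCase().contains("old")) {
                // getCollectionUsage() reports usage measured after the last GC
                // (null before the first collection); log it periodically and
                // watch the trend rather than any single sample.
                System.out.println(pool.getName() + " after last GC: "
                        + pool.getCollectionUsage());
            }
        }
    }
}
```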
Detect Memory Thrash
Memory thrash is caused when a large number of temporary objects are created in very short intervals. Although these objects are temporary and are eventually cleaned up, the garbage collection mechanism might struggle to keep up with the rate of object creation. This might cause application performance problems. Monitoring the time spent in garbage collection can provide insight into performance issues, including memory thrash. For example, an increase in the number of spikes for major collections affects the JVM's ability to serve Business Transaction traffic, and might indicate potential memory thrash.
The Object Instance Tracking subtab helps you isolate the root cause of possible memory thrash. To troubleshoot memory thrash, see Java Memory Thrash.
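The time spent in garbage collection can also be sampled directly from `GarbageCollectorMXBean`; a rising per-interval delta, especially for major collections, is the kind of warning sign described above. A sketch, not part of AppDynamics:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;

public class GcTimeSampler {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Long> lastTime = new HashMap<>();
        for (int i = 0; i < 3; i++) {
            for (GarbageCollectorMXBean gc
                    : ManagementFactory.getGarbageCollectorMXBeans()) {
                // Cumulative milliseconds spent in this collector (-1 if undefined)
                long total = Math.max(0, gc.getCollectionTime());
                long delta = total - lastTime.getOrDefault(gc.getName(), 0L);
                lastTime.put(gc.getName(), total);
                System.out.printf("%s: +%d ms GC time this interval%n",
                        gc.getName(), delta);
            }
            // Allocate short-lived garbage so the sampler has something to observe
            byte[][] junk = new byte[1000][];
            for (int j = 0; j < junk.length; j++) junk[j] = new byte[10_000];
            Thread.sleep(100);
        }
    }
}
```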
Monitor Long-lived Collections
AppDynamics automatically tracks long-lived Java collections (HashMap, ArrayList, and so on) with Automatic Leak Detection. Custom memory structures that you have configured are displayed on the Custom Memory Structures subtab of the Memory tab in the node dashboard.
AppDynamics provides visibility into:
- Cache access for slow, very slow, and stalled business transactions
- Usage statistics (rolled up to Business Transaction level)
- Keys being accessed
- Deep size of internal cache structure