This page describes hardware and software requirements for the Controller and other components hosted on private or public cloud to help you prepare for your Splunk AppDynamics deployment.

Network Considerations

Your network or the host machine may have built-in firewall rules that you need to adjust to accommodate the Splunk AppDynamics On-Premises platform. You may also need to allow network traffic on the ports used by the system. For more information, see Port Settings.

For expected bandwidth consumption for the agents, see Install App Server Agents.
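If a host firewall is enabled, open the Controller ports before installation. The following is a minimal firewalld sketch that assumes the default Controller HTTP listen port of 8090; confirm the actual ports for your deployment against Port Settings:

# Allow inbound agent and UI traffic on the Controller HTTP port (assumed default 8090)
sudo firewall-cmd --permanent --add-port=8090/tcp
sudo firewall-cmd --reload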

System User Account

Install all the platform components with a single user account, or with accounts that have equivalent permissions on the operating system. The user requires write permission on the installation directory. For information on component compatibility, see Controller Compatibility with Operating Systems and Components.

The following sections describe the requirements for each platform component.

Every deployment is unique. Factors such as the nature of the application, workload, and the Splunk AppDynamics configuration can all affect the resources required for your specific scenario. Be sure to test the performance of your system in a staging environment, so that you can fully understand your requirements before deploying Splunk AppDynamics On-Premises to its live operating environment. 

Before installation, it is a good idea to estimate your deployment size based on the number of nodes. For Java, for example, a node corresponds to a JVM. However, the best indicator of the actual workload on your Controller is the metric ingestion rate.

After the initial installation, verify your Controller sizing using the metric upload rate. You then need to continue monitoring the Controller for changing workload brought about by changes in the monitored application, its usage patterns, or the Splunk AppDynamics configuration.

General Hardware Requirements 

The following general requirements apply to the machine on which you install the Controller:

  • A production Controller must run on a dedicated machine. The requirements here assume that no other major processes are running on the machine where the Controller is installed, including other Controllers. 
  • The Controller is supported on amd64/x86-64 architectures. It is not supported on machines that use Power Architecture processors, including PowerPC processors.
  • Ensure that the Controller host has approximately 200 MB of free space available in the system temporary directory. 
  • Disk I/O is a key element to Controller performance, particularly low latency. See Disk I/O Requirements for more information. 

Controller Sizing

The following profiles describe Controller sizing by metric ingestion rate and agent count. As previously noted, the actual metrics generated by a node can vary greatly depending on the nature of the application on the node and the Splunk AppDynamics configuration. Be sure to validate your sizing against the metric ingestion rate before deploying to production. 

  • Demo: up to 10,000 metrics/minute, ~5 agents, Linux or Windows
    • Compute: 2 cores, 8 GB RAM
    • Storage: 50 GB
    • Note: This profile is not supported when installing with Aurora DB.
  • Small: up to 50,000 metrics/minute, ~50 agents, Linux or Windows
    • Compute: 4 cores, 16 GB RAM
    • Storage: 400 GB
    • Note: This profile is not supported when installing with Aurora DB.
  • Medium: up to 1,000,000 metrics/minute, ~2,000 agents, Linux or Windows
    • Bare-metal compute: 8 cores, 128 GB RAM; storage: 5 TB SAS SSDs in a hardware-based RAID 5 configuration
    • VM compute: 16 vCPUs, 128 GB RAM; storage: 5 TB SSD-based SAN with 10 Gb/s FCoE
  • Large: up to 5,000,000 metrics/minute, ~15,000 agents, Linux only (not supported on Windows)
    • Bare-metal compute: 28 cores, 512 GB RAM
    • VM compute: 32 vCPUs, 256 GB RAM
    • Storage: 2 x 800 GB write-intensive NVMe cards for MySQL redo logs in a software-based (mdadm) RAID 1 configuration, plus 20 TB SAS SSDs for the main data volume in a hardware-based RAID 5 configuration
  • Extra Large: For deployments that exceed the Large profile configuration defined here, contact Splunk AppDynamics Professional Services for a thorough viability evaluation of your Controller.

Amazon Web Services (AWS) Sizing for On-Premises

  • Medium (AWS profile with Aurora): up to 1,000,000 metrics/minute, ~1,500 agents, Linux
    • Compute: EC2 r4.2xlarge
    • Instance size for AWS Aurora storage: db.r4.4xlarge
    • Block storage (for Controller application files only)*: 10 GB GP2 EBS volume. We recommend using a volume other than the instance's root volume.
  • Large (AWS profile with Aurora): up to 5,000,000 metrics/minute, ~10,000 agents, Linux
    • Compute: EC2 r4.8xlarge
    • Instance size for AWS Aurora storage: db.r4.16xlarge
    • Block storage (for Controller application files only)*: 10 GB GP2 EBS volume. We recommend using a volume other than the instance's root volume.

* The specified disk space must be available for use by the Controller. Specifications do not include overhead from the operating system, file system, and so on.

Elastic Network Interface (ENI)

The ENI numbers were last updated on Feb 28, 2018.

For AWS, provision an ENI for each Controller host and link the license to the MAC address of the ENI. For more information about ENI, see AWS documentation.

This document contains references to Amazon Web Services and Microsoft documentation. Splunk AppDynamics does not own any rights to, and assumes no responsibility for, the accuracy or completeness of such third-party documentation.

Disk I/O Requirements

A critical factor in a machine's ability to support the performance requirements of a Controller in a production environment is the machine's disk I/O performance. 

There are two requirements related to I/O latency: 

  • Disk I/O must perform such that the maximum write latency for the Controller’s primary storage does not exceed 3 milliseconds while the Controller is under sustained load. Splunk AppDynamics cannot provide support for Controller problems resulting from excessive disk latency. (A sample latency check appears after this list.)
  • Self-monitoring must be set up for the Controller. Self-monitoring consists of a SIM agent that measures the latency of the data partitions on the Controller host, and the configuration needs to include dashboard and health rule alerts that trigger when the maximum latency exceeds 3 ms. For details on Controller self-monitoring, contact customer support. 
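If you want to spot-check write latency on the data partition before self-monitoring is in place, a benchmarking tool such as fio can help. This is a sketch only; fio is not bundled with Splunk AppDynamics, and the data directory path shown is a placeholder for your actual Controller data partition:

# Sustained 16 KB random-write test for 60 seconds; completion latency (clat)
# should stay comfortably under 3 ms on suitable Controller storage
fio --name=controller-latency --directory=/path/to/controller/data \
    --rw=randwrite --bs=16k --size=1G --runtime=60 --time_based \
    --ioengine=libaio --direct=1 --iodepth=1 --numjobs=1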

Disk I/O Operations

The Controller performs two types of I/O operations important to Controller performance:

  • The MySQL intent log is very sensitive to latency, and MySQL performs these writes using varying block sizes.
  • MySQL’s InnoDB storage engine uses random, asynchronous, 16 KB reads and writes to move database pages between storage and cache. In a properly sized Controller, most reads are satisfied from one of the software caches.

For best performance, the stripe size of the RAID configuration should match the write size. The two write sizes are 16 KB (for the database) and 128 KB (for the logs). Use the smallest stripe size supported, but no smaller than 16 KB. If you use a hardware-based RAID controller, be sure that it supports these stripe sizes. The stripe size is the number of data disks multiplied by the strip/segment/chunk size (the portion of data stored on a single disk).
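To confirm the chunk (strip) size of an existing software RAID volume, you can query mdadm; hardware RAID controllers expose the equivalent setting through their vendor tools. A minimal sketch, assuming a hypothetical array device /dev/md0:

# Show the RAID level and chunk size of the array; multiply the chunk size by
# the number of data disks to get the stripe size
sudo mdadm --detail /dev/md0 | grep -i -e 'level' -e 'chunk'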

SAN-based Storage Limitations

While onboard disks typically satisfy I/O requirements, SAN-based storage could be hampered by poor I/O latency performance. In addition, refrain from using an NFS-mounted filesystem. NFS adds latency and throughput constraints that can negatively affect Controller performance and even lead to data corruption. Similarly, you should avoid iSCSI or other SAN technologies that are subject to quality of service issues from the underlying network.

If you choose to deploy one of these latency-challenged storage technologies on a system that is expected to process 1M metrics/min or greater, we recommend a mirrored NVMe device configured as a write-back cache for all storage accesses. Such a device hides some of the longer latencies seen in these environments.

In all cases, be sure to thoroughly test the deployment with real-world traffic load before putting a Controller into a live environment.

The Enterprise Console can run on the same host as the Controller and the embedded Events Service. If this is the case, the machine you choose to run the Enterprise Console must meet the requirements for all the components that run on that machine. 

However, we recommend that you place the Enterprise Console on its own separate dedicated host, particularly if you deploy Controllers as High Availability pairs.

Supported Web Browsers

The Splunk AppDynamics On-Premises Enterprise Console UI is an HTML5-based browser application that works best with the latest version of any browser. 

The Enterprise Console UI has been tested with and supports the last two versions of these browsers:

  • Safari
  • Chrome
  • Firefox
  • Microsoft Edge
  • Internet Explorer

Certain types of ad blockers can interfere with features in the Enterprise Console UI. We recommend disabling ad blockers while using the Enterprise Console UI.

Disk Space Requirements

The Enterprise Console requires 10 GB of free space to install. After the Enterprise Console installation, there must be at least 1 GB of additional space on the Enterprise Console host to perform any operations, such as installing a remote Controller.
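As a quick pre-installation check, verify the free space on the volume that will hold the Enterprise Console. A minimal sketch, assuming a hypothetical installation directory of /opt/appdynamics:

# Needs at least 10 GB free to install, plus 1 GB free afterwards for operations
df -h /opt/appdynamics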

Network Protocol Requirements

The Enterprise Console requires SSH and Secure File Transfer Protocol (SFTP) to be properly configured and enabled for it to use remote hosts.

To access remote hosts, the Enterprise Console uses the Java Secure Channel (JSch) API with the provided key file. The Enterprise Console does not support an SSH jump server. If you use an SSH jump server, or have a jump host configuration, contact customer support for deployment options.
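Before adding a remote host, you can confirm that SSH and SFTP are reachable from the Enterprise Console machine. A minimal sketch, where appduser, remote-host.example.com, and the key path are placeholders for your own deployment:

# Confirm key-based SSH access to the remote host
ssh -i ~/.ssh/appd_key appduser@remote-host.example.com 'echo SSH OK'
# Confirm the SFTP subsystem is enabled on the remote host
echo 'ls' | sftp -i ~/.ssh/appd_key appduser@remote-host.example.com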

Microsoft Visual C++ 2019 Redistributable Package Requirement for Windows

Starting with Enterprise Console version 23.7, the MySQL database is upgraded to version 8. MySQL 8 on Windows requires the Microsoft Visual C++ 2019 redistributable package. Install the package from the Microsoft Download Center before you install or upgrade the Enterprise Console to version 23.7 or later on Windows.

Software Requirements

On systems that run Linux, you must have cURL and netstat installed. Linux systems must also have the libaio library installed, which provides asynchronous I/O operations on the system. 

See Required Libraries for how to install libaio and other libraries on some common flavors of the Linux operating system.
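To confirm the required tools and libraries are present before running the installer, you can check for them from a shell. A minimal sketch for common Linux systems:

# Verify the required command-line tools are on the PATH
command -v curl netstat || echo "curl or netstat is missing"
# Verify the required shared libraries are registered with the dynamic linker
ldconfig -p | grep -E 'libaio|libnuma|libtinfo' || echo "a required library is missing"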

Required Libraries

Linux systems must include these libraries for Enterprise Console operation:

  • libaio
  • numactl package, which includes libnuma.so.1 for RHEL, CentOS, and Fedora, and libnuma1 for Ubuntu and Debian
  • glibc 2.12 

    This glibc version is included with a given operating system release and therefore cannot be updated.

  • tzdata for RHEL, CentOS, Fedora, openSUSE Leap 15.5, and Ubuntu version 20.x and higher

    The tzdata package is also required by the MySQL connector.

  • libncurses5 (and above) for Ubuntu, CentOS, Debian, openSUSE Leap 15.5

    As of MySQL 5.5.57 and 5.7.19, libtinfo.so.5 is a required prerequisite library.

  • ncurses-libs-5.x for RHEL and CentOS

    As of MySQL 5.5.57 and 5.7.19, libtinfo.so.5 is a required prerequisite library.

  • libxml2-2 and libxml2-tools for SLES12 and SLES15

This table provides instructions on how to install the libraries on some common flavors of the Linux operating system.

If you cannot install the library, check that you have a supported version of your Linux flavor.

  • Red Hat
  • CentOS
  • CentOS Stream
  • Amazon

Use yum to install the library, such as:

  • yum install libaio
  • yum install numactl
  • yum install tzdata
  • yum install ncurses-libs-5.x

Ensure that only one package manager for rpm packages is installed before running the Enterprise Console installer.

 

For RHEL8 and CentOS8, you can either manually install version 5 of ncurses or use version 6.

  • To install version 5, follow these steps:
  1. sudo rpm -ivh --force ncurses-base-5.x.rpm 
  2. sudo rpm -ivh --force ncurses-libs-5.x.rpm

The ncurses-libs package depends on ncurses-base, so you must install ncurses-base first. Download the RPMs from a trusted source.

  • To install version 6, follow these steps:

You must either create symlinks for ncurses-libs-5 which points to ncurses-libs-6, or install the ncurses-compat-libs package, to provide ABI version 5 compatibility.

RHEL8 symlink:
sudo ln /usr/lib64/libtinfo.so.6.1 /usr/lib64/libtinfo.so.5
sudo ln /usr/lib64/libncurses.so.6.1 /usr/lib64/libncurses.so.5

CentOS8 symlink:
sudo ln /usr/lib64/libtinfo.so.6.1 /usr/lib64/libtinfo.so.5
sudo ln /usr/lib64/libncurses.so.6.1 /usr/lib64/libncurses.so.5

RHEL8 compat-libs:
sudo yum install -y ncurses-compat-libs

CentOS8 compat-libs:
sudo yum install -y ncurses-compat-libs

Install the following prerequisites on CentOS Stream:

  • sudo yum install -y libaio.x86_64
  • sudo yum install -y numactl.x86_64
  • sudo yum install -y tzdata
  • sudo yum install -y ncurses-compat-libs.x86_64
Fedora

Install the library RPM from the Fedora website:
  • yum install libaio
  • yum install numactl
  • yum install tzdata
Ubuntu

Use apt-get, such as:

  • sudo apt-get install libaio1
  • sudo apt-get install numactl
  • sudo apt-get install tzdata
  • sudo apt-get install libncurses5

Ensure that only one package manager between dpkg and rpm is installed before running the Enterprise Console installer. This package manager utility is used to verify mandatory packages before the Enterprise Console installation.

 

For Ubuntu 20, you can install libncurses5 or libncurses6.

  • If you choose libncurses5:

    sudo apt-get install libncurses5

  • If you choose libncurses6:

    sudo apt-get install libncurses6

    Note: For libncurses6, you need to create symlinks so that the libncurses5 library names point to libncurses6.

    sudo ln -s /usr/lib/x86_64-linux-gnu/libncurses.so.6.2 /usr/lib/x86_64-linux-gnu/libncurses.so.5
    sudo ln -s /usr/lib/x86_64-linux-gnu/libtinfo.so.6.2 /usr/lib/x86_64-linux-gnu/libtinfo.so.5

Debian

Use a package manager (such as APT) to install the libraries, as described in the Ubuntu instructions above. 
openSUSE Leap 15.5

Use zypper to install the library, such as:

  • sudo zypper install libaio
  • sudo zypper install libnuma1
  • sudo zypper install tzdata

  • sudo zypper install libncurses5

    For openSUSE Leap 15.5, you can install libncurses5 or libncurses6.

    • If you choose libncurses5:

      sudo zypper install libncurses5

    • If you choose libncurses6:

      sudo zypper install libncurses6

      Note: For libncurses6 you need to create symlink for libncurses5 pointing to libncurses6.

      sudo ln /lib64/libncurses.so.6.1 /lib64/libncurses.so.5
      sudo ln -s /lib64/libtinfo.so.6.1 /lib64/libtinfo.so.5

Ensure that only one package manager for rpm packages is installed before running the Enterprise Console installer. Also, you need to add the openSUSE machine repository before installing the tzdata package.

For openSUSE Tumbleweed run the following as root:
zypper addrepo https://download.opensuse.org/repositories/home:amshinde/openSUSE_Tumbleweed/home:amshinde.repo
zypper refresh
zypper install tzdata

For openSUSE Leap 42.1 run the following as root:
zypper addrepo https://download.opensuse.org/repositories/home:amshinde/openSUSE_Leap_42.1/home:amshinde.repo
zypper refresh
zypper install tzdata

For openSUSE 13.2 run the following as root:
zypper addrepo https://download.opensuse.org/repositories/home:amshinde/openSUSE_13.2/home:amshinde.repo
zypper refresh
zypper install tzdata

For openSUSE 13.1 run the following as root:
zypper addrepo https://download.opensuse.org/repositories/home:amshinde/openSUSE_13.1/home:amshinde.repo
zypper refresh
zypper install tzdata
 
You may run into file conflicts when two packages attempt to install files with the same name but different contents. If you choose to continue, the old files and their contents will be replaced.
See the openSUSE website (https://software.opensuse.org/download.html?project=home%3Aamshinde&package=tzdata) to manually download and install the tzdata package.
SLES12 and SLES15

Use zypper to install the library, such as:

  • sudo zypper install libxml2-2
  • sudo zypper install libxml2-tools
  • sudo zypper install libaio1
  • sudo zypper install numactl
  • sudo zypper install libncurses5
  • sudo zypper install tzdata

See Platform Requirements for operating system support information.

High Availability Requirements

You must install rsync if you plan to deploy a Controller high availability (HA) pair. In addition, when using SSH or an SSH client, note that OpenSSH 5.3p1 is the minimum version supported by the Enterprise Console for HA.
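To confirm these prerequisites on each HA node, you can check the installed versions from a shell; a minimal sketch:

# rsync must be installed; OpenSSH must be version 5.3p1 or later
rsync --version | head -n 1
ssh -V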

Supported SSH Key Exchanges, Cipher Algorithms, MAC and Host Key Type

You can use these ssh key exchanges, cipher algorithms, MAC types, and host key types to customize the ssh configuration on your host(s):

Key Exchanges
  • diffie-hellman-group-exchange-sha1
  • diffie-hellman-group1-sha1
  • diffie-hellman-group14-sha1
  • diffie-hellman-group-exchange-sha256
  • ecdh-sha2-nistp256
  • ecdh-sha2-nistp384
  • ecdh-sha2-nistp521
Cipher Algorithms
  • blowfish-cbc
  • 3des-cbc
  • aes128-cbc
  • aes192-cbc
  • aes256-cbc
  • aes128-ctr
  • aes192-ctr
  • aes256-ctr
  • 3des-ctr
  • arcfour
  • arcfour128
  • arcfour256

MAC Type

  • hmac-md5
  • hmac-sha1
  • hmac-md5-96
  • hmac-sha1-96
Host Key Type
  • ssh-dss
  • ssh-rsa
  • ecdsa-sha2-nistp256
  • ecdsa-sha2-nistp384
  • ecdsa-sha2-nistp521

CPU and Memory Space Requirements

The Enterprise Console is not CPU intensive and can therefore manage multiple platforms with two cores. When the Enterprise Console host is shared with the Controller host, the host only needs to meet the Controller requirements, because the Enterprise Console adds no additional memory requirement in that case.

However, when the Enterprise Console host is not shared with the Controller host, the Enterprise Console requires additional memory and disk space: 1 GB of free RAM for its Java and MySQL processes. 

See Prepare the Controller Host for additional space requirements.

General Requirements

  • Determine which version of the Events Service is compatible with your other platform components.
  • Use a 64-bit Windows or Linux operating system supported by the platform. See Platform Requirements.

  • Solid-state drives (SSD) can significantly outperform hard disk drives (HDD), and are therefore recommended for production deployments. Ideally, the disk size should be 1 TB or larger. 
  • The Events Service must run on dedicated machines with identical directory structures, user account profiles, and hardware profiles.
  • For heap space allocation, we recommend allocating half of the available RAM to the Events Service process, with a minimum of 7 GB and a maximum of 31 GB (see the sketch after this list).
  • When testing the events ingestion rate in your environment, it is important to understand that events are batched. Ingestion rates observed at the scale of a minute or two may not reflect the overall ingestion rate. For best results, observe ingestion rate over an extended period of time, several days at least. 
  • Events Service <= 4.5.2 requires Java 8u172. Events Service >= 23.2.0 requires Java 17, which is bundled with the Events Service.
  • Keep the clocks on your application, Controller, and Events Service nodes in sync to ensure consistent event time reporting across the deployment.
  • Your firewall must not block the Events Service REST API port 9080; otherwise, the Enterprise Console cannot reach the Events Service remotely.
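The following sketch illustrates the heap sizing guideline above (half of the available RAM, clamped between 7 GB and 31 GB). The variable names are illustrative only; apply the resulting value through your Events Service heap configuration:

# Suggest an Events Service heap size from total RAM (in GB)
total_ram_gb=$(free -g | awk '/^Mem:/ {print $2}')
heap_gb=$(( total_ram_gb / 2 ))
if [ "$heap_gb" -lt 7 ]; then heap_gb=7; fi
if [ "$heap_gb" -gt 31 ]; then heap_gb=31; fi
echo "Suggested Events Service heap: ${heap_gb} GB"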

Hardware Capacity and Resource Planning

When estimating your hardware requirements, the first step is to determine the event ingestion rate (for Transaction Analytics) or the amount of data being indexed (for Log Analytics). This helps you to determine the number of analytics license units you will need.

Once you determine your license unit requirements, consider other factors that affect hardware capacity, such as the processing load of queries run against the Events Service and the actual type of hardware used. A physical server is likely to perform better than a virtual machine. Also take into account seasonal or daily spikes in activity in your monitored environment. 

An event is the basic unit of data in the events service. In terms of application performance management, a Transaction Analytics event corresponds to a call received at a tier. A business transaction instance that crosses three tiers, therefore, would result in three events being generated. In application performance management metrics, the number of business transaction instances is reflected by the number of calls metric for the overall application. In End User Monitoring, each page view equates to an event, as does each Ajax request, network request, or crash report. 

Events Service Node Sizing Based on License Units

You can plan your hardware requirements with the data in this section. It describes recommended hardware configurations (in the context of Amazon EC2 instance types) corresponding to the number of license entitlement units for Log and Transaction Analytics. See License Entitlements and Restrictions for details about license units for Log and Transaction Analytics. 

For additional Events Service sizing information, see the following Splunk AppDynamics Community articles:

  • How do I calculate the size of my on-premise Events Service cluster data store?
  • How do I size the EUM Server and Events Service?
  • How do I configure Analytics for on-premises Controllers?

The hardware shown for each license amount represents the hardware capacity for a theoretical combined load of both Transaction Analytics and Log Analytics events. The numbers were derived from actual tests performed with an uncombined load and then extrapolated. Note that the test conditions did not include query load and so may not be representative of a true production analytics environment.

The following sizing recommendations also reflect the size of the clusters used for testing. This does not mean you are limited to a seven-node Events Service. If you need to go beyond seven nodes, contact customer support to ensure proper sizing for your specific environment.

Note that the retention period can be 8, 30, 60, or 90 days, which directly affects storage requirements.

  • i2.2xlarge (61 GB RAM, 8 vCPU, 1600 GB SSD)
    • Transaction Analytics license units: 20 (1 node), 37 (3 nodes), 44 (5 nodes), 63 (7 nodes)
    • Log Analytics license units: 7 (1 node), 10 (3 nodes), 17 (5 nodes), 19 (7 nodes)
  • i2.4xlarge (122 GB RAM, 16 vCPU, 3200 GB SSD)
    • Transaction Analytics license units: 22 (1 node), 41 (3 nodes), 84 (5 nodes), 113 (7 nodes)
    • Log Analytics license units: 16 (1 node), 19 (3 nodes), 32 (5 nodes), 44 (7 nodes)
  • i2.8xlarge (244 GB RAM, 32 vCPU, 6400 GB SSD)
    • Transaction Analytics license units: 53 (1 node), 94 (3 nodes), 120 (5 nodes)
    • Log Analytics license units: 39 (1 node), 116 (3 nodes), 270 (5 nodes)

The following points describe the test conditions under which the license units-to-hardware profile mappings in the table were generated: 

  • Average Log event size in bytes: 350
  • Average size of business transaction event: 1 KB
  • Tiers in business transaction: 3 

The tests were conducted on virtual hardware and programmatically generated workload. Real-world workloads may vary. To best estimate your hardware sizing requirements, carefully consider the traffic patterns in your application and test the Events Service in a test environment that closely resembles your production application and user activity.

Minimum Events Service Node Sizing

A minimum of three nodes is required to configure the Events Service.

Database Visibility Events Service Sizing

Database Visibility features use the Events Service for storage. The ingestion capacity and sizing profile for Database Visibility Analytics events are equivalent to that of Log Analytics, with the size of the raw event being about 2 kilobytes on average.

End User Monitoring Events Service Sizing

End User Monitoring includes Analytics-related features that send data to the Events Service.

In End User Monitoring, each page view equates to an event, as does each Ajax request, network request, or crash report. There can be a few dozen Ajax requests for every page load. In general, the ingestion capacity and sizing profile for EUM or Database Visibility Analytics events are equivalent to that for Log Analytics, with the size of the raw events being about 2 kilobytes on average.

To calculate the sizing for EUM, multiply the peak number of browser records in a day by 12 KB. If peak capacity is reached, the Events Service starts dropping traffic.
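For example, a worked calculation under an assumed peak of 1,000,000 browser records per day:

# 1,000,000 records/day x 12 KB per record = 12,000,000 KB, roughly 11 GB per day
echo $(( 1000000 * 12 / 1024 / 1024 )) GB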

The following list provides details about the memory requirements, optionality, and default retention of the different types of browser records. The default retention period is configurable.

  • BasePage, iFrame, Virtual Page: 1 KB per event (1.5 KB with sessions enabled). Optional: no. Default retention: 8 days.
  • Ajax requests: 1 KB per event. Optional: yes. By default, Ajax requests are not stored in the Events Service.

Sizing the Controller for the Events Service

The Events Service is a file-based storage facility used by EUM, Database Monitoring, and Analytics. Database Monitoring uses the Events Service instance embedded in the Controller by default. The disk space required will vary depending upon how many databases are active and how many are being monitored. 

For redundancy and optimum performance, the Events Service must run on a separate machine.  


 Hardware Requirements

The requirements and guidelines for the EUM Server machine (basic usage) are as follows:

  • Minimum 50 GB extra disk space. See Disk Requirements Based on Resource Timing Snapshots to learn when more disk space is needed.
  • 64-bit Windows or Linux operating system
  • Processing: 4 cores
  • 10 Mbps network bandwidth
  • Minimum 8 GB memory total (4 GB is defined as max heap in JVM). See RAM Requirements Based on the Beacon Load to learn when more RAM is required.
  • NTP enabled on both the EUM Server host and the Controller machine. The machine clocks need to be able to synchronize.

A machine with these specs can be expected to handle around 10K page requests a minute or 10K simultaneous mobile users. Adding on-premises Analytics capability requires increasing these requirements—particularly disk space—considerably, depending on the use case.

RAM Requirements Based on the Beacon Load

Beacons are sent to the EUM Server every 10 seconds, and each beacon can contain data for multiple events. You can configure the JavaScript Agent to limit the number of Ajax requests.

The following list specifies the required RAM based on your beacon load per minute, along with the composition of a typical beacon:

  • ~3K beacons per minute (typically 600 sessions, 1K base pages, 2K virtual pages, 7K Ajax requests): 8 GB RAM
  • ~16K beacons per minute (typically 1.8K sessions, 5K base pages, 10K virtual pages, 40K Ajax requests): 16 GB RAM
  • ~26K beacons per minute (typically 3.6K sessions, 8K base pages, 17K virtual pages, 62K Ajax requests): 16 GB RAM
  • ~33K beacons per minute (typically 3.9K sessions, 10K base pages, 20K virtual pages, 74K Ajax requests): 32 GB RAM
  • >40K beacons per minute (typically 12K base pages): 32 GB RAM

Disk Requirements Based on Resource Timing Snapshots

By default, the EUM Server accepts a maximum of 1K resource timing snapshots per minute and retains those snapshots for 15 days. On average, each snapshot takes 3 KB of disk space. 

Because the number of resource timing snapshots impacts disk usage, follow these guidelines:

  • ~500 snapshots per minute: ≥ 40 GB
  • ~1000 snapshots per minute: ≥ 64 GB
  • ~1500 snapshots per minute: ≥ 96 GB
  • ~2000 snapshots per minute: ≥ 128 GB
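As a rough cross-check of these figures, the disk estimate follows from the defaults above: snapshots per minute x 3 KB per snapshot x 15 days of retention. A sketch using the default 1K snapshots per minute:

# 1000 snapshots/min x 60 min x 24 h x 15 days x 3 KB per snapshot
echo $(( 1000 * 60 * 24 * 15 * 3 / 1024 / 1024 )) GB   # about 61 GB, in line with the 64 GB guideline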

If needed, you can reduce the number of resource timing snapshots or reduce the disk space allotted for storing resource snapshots.

Filesystem Requirements

The filesystem of the machine on which you install EUM should be tuned to handle a large number of small files. In practical terms, this means that either the filesystem should be allocated with a large number of inodes or the filesystem should support dynamic inode allocation. 

Controller Version

The Splunk AppDynamics Platform you use with the EUM server must have a supported Controller version installed. To determine the Controller compatibility with other components, see Controller Compatibility with Operating Systems and Components.

Open File Descriptor and User Process Limits

On Linux, also ensure that open file descriptor and user process limits on the EUM Server machine are set to a sufficient value. For the EUM Server, the hard and soft limits should be as follows:

  • Open file descriptor limit (nofile): 65535
  • Process limit (nproc): 8192

See "Configure User Limits in Linux" below for information on how to check and set user limits. 

Configure User Limits in Linux

The following log warnings may indicate insufficient limits:

  • Warning in database log: "Could not increase number of max_open_files to more than xxxx".
  • Warning in server log: "Cannot allocate more connections".

To check your existing settings, as the root user, enter the following commands:

ulimit -S -n
ulimit -S -u

The output indicates the soft limits for the open file descriptor and soft limits for processes, respectively. If the values are lower than recommended, you need to modify them. 

Where you configure the settings depends upon your Linux distribution:  

  • If your system has a /etc/security/limits.d directory, add the settings as the content of a new, appropriately named file under the directory.
  • If it does not have a /etc/security/limits.d directory, add the settings to /etc/security/limits.conf
  • If your system does not have a /etc/security/limits.conf file, it is possible to put the ulimit command in /etc/profile. However, check the documentation for your Linux distribution for the recommendations specific for your system.  

To configure the limits: 

  1. Determine whether you have a /etc/security/limits.d directory on your system, and take one of the following steps depending on the result:
    • If you do not have a /etc/security/limits.d directory:
      1. As the root user, open the limits.conf file for editing: /etc/security/limits.conf

      2. Set the open file descriptor limit by adding the following lines, replacing <login_user> with the operating system username under which the EUM Server runs:

        <login_user> hard nofile 65535
        <login_user> soft nofile 65535
        <login_user> hard nproc 8192
        <login_user> soft nproc 8192
    • If you do have a /etc/security/limits.d directory:
      1. As the root user, create a new file in the limits.d directory. Give the file a descriptive name, such as the following: /etc/security/limits.d/appdynamics.conf

      2. In the file, add the configuration setting for the limits, replacing <login_user> with the operating system username under which the EUM Server runs: 

        <login_user> hard nofile 65535
        <login_user> soft nofile 65535
        <login_user> hard nproc 8192
        <login_user> soft nproc 8192
  2.  Enable the file descriptor and process limits as follows: 
    1. Open the following file for editing: /etc/pam.d/common-session

    2.  Add the line: session required pam_limits.so

  3. Save your changes to the file. 

When you log in again as the user identified by login_user, the limits will take effect.

Network Settings

The network settings on the operating system need to be tuned for high-performance data transfers. Incorrectly tuned network settings can manifest themselves as stability issues on the EUM Server.

The following command listing demonstrates tuning suggestions for Linux operating systems. As shown, Splunk AppDynamics recommends reducing the TCP FIN timeout to 5 seconds (the default is typically 60), setting the TCP connection keepalive time to 1800 seconds (reduced from the typical 7200), and disabling TCP window scaling, TCP SACK, and TCP timestamps.

echo 5 > /proc/sys/net/ipv4/tcp_fin_timeout 
echo 1800 >/proc/sys/net/ipv4/tcp_keepalive_time 
echo 0 >/proc/sys/net/ipv4/tcp_window_scaling 
echo 0 >/proc/sys/net/ipv4/tcp_sack 
echo 0 >/proc/sys/net/ipv4/tcp_timestamps

The commands configure the network settings in the /proc filesystem. To ensure the settings persist across system reboots, configure the equivalent settings in /etc/sysctl.conf, or in the network stack configuration file appropriate for your operating system. 
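A minimal sketch of the equivalent persistent configuration in /etc/sysctl.conf (apply it with sysctl -p after editing):

# /etc/sysctl.conf entries equivalent to the echo commands above
net.ipv4.tcp_fin_timeout = 5
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0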

Required Libraries

libaio Requirement

The EUM processor requires the libaio library to be on the system. This library facilitates asynchronous I/O operations on the system. Note that if you have a NUMA-based architecture, you must also install the numactl package.

Install libaio on the host machine if it does not already have it installed. The following table provides instructions on how to install libaio for some common flavors of the Linux operating system.

Linux Flavor
Command
Red Hat and CentOS

Use yum to install the library, such as:

  • yum install libaio
  • yum install numactl
Fedora

Install the library RPM from the Fedora website:

  • yum install libaio
  • yum install numactl
Ubuntu

Use apt-get, such as:

  • sudo apt-get install libaio1
  • sudo apt-get install numactl
Debian

Use a package manager such as APT to install the library (as described in the Ubuntu instructions above).

tar Requirement

You will need the tar utility to unpack the EUM Server installer.

Install tar on the host machine if it does not already have it installed. The following table provides instructions on how to install tar for some common flavors of the Linux operating system.

Linux Flavor

Command

Red Hat and CentOS

Use yum to install the library, such as:

  • yum install tar
Fedora

Install the library RPM from the Fedora website:

  • yum install tar
Ubuntu

Use apt-get, such as:

  • sudo apt-get install tar
Debian

Use a package manager such as APT to install the library (as described in the Ubuntu instructions above).

For more information on EUM Processor sizing, see Analytics' Recipe Book for on-prem configuration in the Splunk AppDynamics Community.

End User Monitoring (EUM) Considerations

End User Monitoring (EUM) increases the number of collected metrics. Accordingly, the Small Controller profile is not supported for installations that use EUM. A Medium profile running 20 or more high-traffic BRUM or MRUM agents should be sized closer to a Large profile.

EUM impacts metrics as follows:

  • Web RUM can increase the number of individual metric data points per minute by up to 22,000.
  • Mobile RUM can increase the number of individual metric data points per minute by as much as 15K to 25K per instrumented application for heavily accessed applications. The actual number depends on how many network requests your applications receive.

  • Monitoring EUM is memory intensive and may require more space allocated to the metrics cache.

The number of separate EUM metric names saved in the Controller database can be larger than the number of metrics actively reporting data points. For example, a metric name for an iOS 5 metric may remain in the database even after all your users have migrated away from iOS 5. The metric name no longer has an impact on resource utilization, but it still counts against the default limit in the Controller for metric names per application. The default limit is 200,000 names for Browser RUM and 100,000 for Mobile RUM.

Related Articles

On premises EUM Server Sizing Guide

How do I calculate the size of an EUM JavaScript Agent beacon using Chrome Developer Tools?

Sizing XL profiles for on-premises EUM Server and the Events Service: What's recommended?

Splunk AppDynamics Components

To deploy the Synthetic Server, you need to install the following Splunk AppDynamics components:

  • Controller
  • Events Service
  • Synthetic Agent
    •     Private Synthetic Agent 
    •     Hosted Synthetic Agent

Ensure that:

  • Synthetic Server has access to the *.launchdarkly.com domain.
  • Synthetic Server and Private Synthetic Agent have access to the internet.
  • the eum.synthetic.onprem.installation property is set to true in the admin.jsp page. If this property is not set to true, then the User Experience > API Monitoring tab is not displayed on the Controller.

Certain Synthetic Server features—specifically, Synthetic Sessions Analytics, features of Application Analytics that extend the functionality of Synthetic Sessions—require access to the Splunk AppDynamics Events Service. 

Synthetic Agent Requirements

The requirements for deploying Private Synthetic Agents and Hosted Synthetic Agents are as follows:

  • Private Synthetic Agents: see Install the Private Synthetic Agent.
  • Hosted Synthetic Agents:
    • Hosted Synthetic Agent license
    • Splunk AppDynamics Access (HMAC) Key (part of the license file for the Hosted Synthetic Agent)

Hardware Requirements

These requirements assume that the Synthetic Server is installed on a separate machine. If other Splunk AppDynamics platform components are installed on the same machine, the requirements (particularly for memory) can vary greatly and may require many more resources.

  • Storage: 50 GB free disk space
  • Memory: 8 GB memory 
  • CPU: 64-bit CPU with at least 2 cores 
  • Network bandwidth: 50 Mbps 

NTP should be enabled on both the EUM Server host and the Controller machine. The machine clocks need to be able to synchronize.

Scaling Requirements

You must have one EUM account for each on-premises deployment of the Synthetic Server. The machine hosting the Synthetic Server should be able to support 100 concurrent Synthetic Agents, or 10 locations with 10 Synthetic Agents per location.

If you need the Synthetic Server to support more than 100 concurrent Synthetic Agents, see Increase the Synthetic Agent Support.

Operating System Support

The Synthetic Server is supported on the following operating systems:

Linux (64 bit)

  • RHEL 8 and 9
  • CentOS 8.x
  • Ubuntu 20.x and 22.x
  • OpenSUSE Leap 15.5

 You can use the following file systems for machines that run Linux:

  • ZFS
  • EXT4
  • XFS 

On-premises deployments on Linux are only supported on Intel architecture. Windows is not supported at this time.

Network Requirements

The network settings on the operating system need to be tuned for high-performance data transfers. Incorrectly tuned network settings can manifest themselves as stability issues.

The following command listing demonstrates tuning suggestions for Linux operating systems. As shown, Splunk AppDynamics recommends reducing the TCP FIN timeout to 5 seconds (the default is typically 60), setting the TCP connection keepalive time to 1800 seconds (reduced from the typical 7200), and disabling TCP window scaling, TCP SACK, and TCP timestamps.

echo 5 > /proc/sys/net/ipv4/tcp_fin_timeout 
echo 1800 >/proc/sys/net/ipv4/tcp_keepalive_time 
echo 0 >/proc/sys/net/ipv4/tcp_window_scaling 
echo 0 >/proc/sys/net/ipv4/tcp_sack 
echo 0 >/proc/sys/net/ipv4/tcp_timestamps

The commands configure the network settings in the /proc filesystem. To ensure the settings persist across system reboots, configure the equivalent settings in /etc/sysctl.conf, or in the network stack configuration file appropriate for your operating system. 

Software Requirements

The Synthetic Server requires the following software to run and function correctly. You are required to have outbound internet access to install Python, pip, and flake8.

Java 17

The Synthetic Server requires JDK 17 to run services such as Synthetic Scheduler and Synthetic Shepherd.

You need to set the environmental variable JAVA_HOME to the home directory of the JDK.

Python 2.7+

The Synthetic Server relies on Python to validate scripts.

pip 9+

Python uses pip to install software. For example, pip could be used on some Linux distributions to install flake8, a Python utility used to lint scripts. 

If the machine where you're installing the Synthetic Server does not have internet access, follow these steps to fetch and install flake8:

  1. From a machine with internet access and pip installed:

    1. Create a directory for the flake8 library:

       mkdir ~/flake8

    2. Download the flake8 package (use python or python3, depending on your system):

      python -m pip download flake8 -d ~/flake8
      python3 -m pip download flake8 -d ~/flake8


    3. Zip and tar the flake8 package:

      tar cvfz flake8.tgz ~/flake8
    4. Copy flake8.tgz to the $HOME directory of the host machine of the Synthetic Server.

  2. From the host of the Synthetic Server that has no internet access, but does have pip installed:
    1. Unzip and extract the flake8.tgz file:

      tar xvfz flake8.tgz ~/flake8
    2. Change to the flake8 directory.

    3. Install the flake8 library with pip, replacing <version> with the correct version (use python or python3, depending on your system):

      python -m pip install flake8-<version>-py2.py3-none-any.whl -f ./ --no-index
      python3 -m pip install flake8-<version>-py2.py3-none-any.whl -f ./ --no-index
libaio (required version: N/A)

The Synthetic Server requires the libaio library to be on the system. This library facilitates asynchronous I/O operations on the system.

See How to Install libaio for instructions.


How to Install libaio 

Install libaio on the host machine if it does not already have it installed. You may require outbound internet access if you don't have a locally hosted repository.

The following table provides instructions on how to install libaio for some common flavors of the Linux operating system. Note that if you have a NUMA based architecture, you are required to install the numactl package.

Linux Flavor
Command
Red Hat and CentOS

Use yum to install the library, such as:

  • yum install libaio
  • yum install numactl
Fedora

Install the library RPM from the Fedora website:

  • yum install libaio
  • yum install numactl
Ubuntu

Use apt-get, such as:

  • sudo apt-get install libaio1
  • sudo apt-get install numactl
Debian

Use a package manager such as APT to install the library (as described in the Ubuntu instructions above).



Hardware Requirements

Hardware requirements vary depending on database activity. If your database activity increases, you may need to adjust your hardware configuration.

The machine running the Database Agent should meet the following hardware requirements:

  • 1 GB of heap space, plus an additional 512 MB of heap space for each monitored database instance. For less busy databases, you can reduce this to 256 MB per monitored database instance.
  • 2 GHz or higher CPU.

Database Instance

A database instance can be a node in an Oracle RAC, MongoDB, or Couchbase cluster; a standalone collector; or a sub-collector.

The following sample calculations show heap space allocation:

  • 5 database instances: (5 x 512 MB) + 1024 MB = 3,584 MB
  • 20 database instances: (20 x 512 MB) + 1024 MB = 11,264 MB
  • 100 database instances: (100 x 512 MB) + 1024 MB = 52,224 MB
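To apply the calculated heap size, pass it to the JVM that runs the Database Agent. A minimal sketch for the 5-instance example; the agent path and jar name are assumptions, so adjust them to match your installation:

# Start the Database Agent with a 3,584 MB heap (5 monitored database instances)
java -Xms3584m -Xmx3584m -jar /opt/appdynamics/dbagent/db-agent.jar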

Splunk AppDynamics On-Premises Controller Sizing Requirements

The Controller database should meet the following hardware requirements:

  • 500 MB of disk space per collector per day
  • 500 MB of disk space for the Events Service per day. By default, the Events Service retains data for 10 days.

See Controller System Requirements.

Note

The Database Agent requires the Events Service. Start the Events Service before you start the Database Agent.

Software Requirements

Network Requirements

  • The machine on which the database is running, or the machine you want to monitor, must be accessible from the machine where the Database Agent is installed and running. The machines must be connected over a network, whether the internet or an intranet.
  • If your databases are behind a firewall, you must configure the firewall to permit the machine running the Database Agent program access to the databases. The database listener port (and optionally the SSH or WMI port) must be open.
  • The network bandwidth used between the agent and the Controller is approximately 300 KB per minute per collector for a large database with 200 clients using 50 schemas and processing about 10,000 queries a minute. The actual numbers depend on the type of database server, the number of individual schemas on the server, and the number of unique queries executed daily, and therefore vary.


Database Monitoring Considerations

The following guidelines can help you determine additional disk and RAM required for the machine hosting the Controller that is monitoring the Database Agent. For very large installations, you should work with your Splunk AppDynamics representative for additional guidelines. 

For on-premises installations, the machine running the Controller and Events Service requires the following additional resources for a data retention period of 10 days:

  • 1–10 collectors: 2 GB RAM, Single CPU
  • 10–20 collectors: 4 GB RAM, 2 CPUs
  • More than 20 collectors: 8 GB RAM, 4 CPUs

For supported environments, see Database Visibility Supported Environments.

Splunk AppDynamics Infrastructure Visibility provides end-to-end visibility into the performance of the hardware running your applications. Use Infrastructure Visibility to identify and troubleshoot problems that can affect application performance, such as server failures, JVM crashes, and network packet loss. The following pages provide requirements for the various Infrastructure Visibility components:

Additional Sizing Considerations

  • Large installations are not supported on virtual machines or systems that use network-attached storage. 
  • The RAM recommendations leave room for operating system processes. However, the recommendations assume that no other memory intensive applications are running on the same machine. While the Enterprise Console can run on the same host as the Controller in small or demo profile Controllers, it is not recommended for medium and larger profiles or for high availability deployments. See Enterprise Console Requirements if the Enterprise Console is on the same host as the Controller.
  • Disk sizing in the sizing table represents the approximate space consumption for metrics, about 7 MB for each metric per minute.
  • The motherboard should not have more than 2 sockets. 
  • See Calculating Node Count in .NET Environments for information related to sizing a .NET environment. 
  • The agent counts do not reflect additional requirements for EUM or Database Visibility. See the following sections for more information.

Calculating Node Count in .NET Environments

The .NET Agent dynamically creates nodes depending on the monitored application configuration in the IIS server. An IIS server can create multiple instances of each monitored IIS application. For every instance, the .NET Agent creates a node. For example, if an IIS application has five instances, the .NET Agent will create five nodes, one for each instance.

The maximum number of instances of a particular IIS application is determined by the number of worker processes configured for its application pool, as illustrated in the following diagram:

IIS diagram

The diagram shows three application pools — AppPool-1, AppPool-2, and AppPool-3 — with the following characteristics:

  • AppPool-1 and AppPool-3 can have a maximum of two worker processes (known as a web garden), containing two applications (AppA, AppB) and one application (AppF), respectively. 
  • AppPool-2 can have one worker process. It has three applications.

To determine the number of nodes, multiply the number of applications by the maximum number of worker processes for each AppPool. Add those together, plus one node for each Windows service or standalone application process. 

This example results in nine AppPool nodes. Adding one for a Windows service gives a total of ten nodes, calculated as follows: 

AppPool-1: 2 (applications) * 2 (max number of worker processes)  = 4
AppPool-2: 3 (applications) * 1 (max number of worker processes)  = 3
AppPool-3: 1 (application) * 2 (max number of worker processes)   = 2
Windows Service or standalone application process                 = 1
------
Total:                                                            = 10

To find the number of CLRs that will be launched for a particular .NET Application/App Pool:

  1. Open the IIS manager and view the number of applications assigned to that AppPool.
  2. Check if any AppPools are configured to run as a Web Garden. This would be a multiplier for the number of .NET nodes coming from this AppPool as described in the preceding example.

Also, see View Applications in an Application Pool (IIS 7).

Asynchronous Call Monitoring Considerations

The Small profile is not supported for installations with extensive async monitoring. A Medium profile running 40+ agents may need to upgrade to a configuration closer to a Large profile if extensive asynchronous monitoring is added. 

Monitoring asynchronous calls can increase the number of metrics to a maximum of 23,000 per minute.