Events Service Host Requirements
Before starting, review the Release Notes for known issues and late-breaking information on using the Events Service and the Platform Admin application. Also observe the following requirements:
- The Events Service can be deployed as a single node or as a multi-node cluster of three or more nodes.
- The supported versions of Linux are the flavors and versions supported by the Controller, as indicated in Prepare Linux for the Controller.
- The Events Service must run on a dedicated machine. The machine should not run other applications or processes unrelated to the Events Service.
- Use appropriately sized hardware for the Events Service machines. The Platform Admin application checks the target system for minimum hardware requirements. For more information on these requirements, see the description of the profile argument to the Events Service install command in Install the Events Service Cluster.
- The Controller and Events Service must reside on the same local network and communicate over the internal network. Do not deploy the cluster to nodes on different networks, whether relative to each other or to the Controller where the Platform Admin application runs. When identifying cluster hosts in the configuration, use the internal DNS name or IP address of each host, not its externally routable DNS name. In an AWS deployment, for example, use the private IP address, such as 172.31.2.19, rather than the public DNS hostname, such as ec2-34-201-129-89.us-west-2.compute.amazonaws.com.
- Make sure that the appropriate ports on each Events Service host are open. See Port Settings for more information.
- The Platform Admin application uses an SSH key to access the Events Service hosts. See Create the SSH Key below for information on generating the key.
- Events Service nodes normally operate behind a load balancer. When installing an Events Service node, the Platform Admin application automatically configures a direct connection from the Controller to the node. If you deploy a cluster, the first master node is automatically configured as the connection point in the Controller. You will need to reconfigure the Controller to connect through the load balancer VIP after installation, as described below. For sample configurations, see Load Balance Events Service Traffic.
Each machine must have the following ports accessible to external (outside the cluster) traffic:
- Events Service API Store Port: 9080
- Events Service API Store Admin Port: 9081
For a cluster, also ensure that the following ports are open for communication between machines within the cluster. Typically, this requires configuring iptables or other OS-level firewall software on each machine to open the ports listed below:
- 9300 – 9400
The following shows an example of iptables commands to configure the operating system firewall:
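A sketch of such rules, using the ports listed above; run these as root, and note that how rules are persisted across reboots depends on your distribution:

```shell
# Allow the Events Service API store and admin ports from outside the cluster
iptables -I INPUT -p tcp --dport 9080 -j ACCEPT
iptables -I INPUT -p tcp --dport 9081 -j ACCEPT
# Allow the intra-cluster port range between cluster nodes
iptables -I INPUT -p tcp --dport 9300:9400 -j ACCEPT
```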
If a port on an Events Service node is blocked, the Events Service installation command fails for that node, and the Platform Admin application command output and logs include an error message similar to the following:
If you see this error, make sure that the ports indicated in this section are available to other cluster nodes.
Create the SSH Key
When installing Events Service, you will need to provide the SSH key that the Platform Admin application can use to access Events Service hosts remotely. Before starting, create the PEM public and private keys in RSA format. The key file must not use password protection.
For example, using ssh-keygen, you can create the key using the following command:
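A minimal sketch using standard OpenSSH options; the file name appd-analytics is just an example:

```shell
# Generate an RSA key pair in PEM format; -N "" sets an empty passphrase
ssh-keygen -t rsa -m PEM -N "" -f appd-analytics
```

This writes the private key to appd-analytics and the public key to appd-analytics.pub in the current directory.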
Configure SSH Passwordless Login
The Platform Administration Application needs to be able to access each cluster machine using passwordless SSH. Before starting, enable key-based SSH access.
This setup involves generating a key pair on the Controller host and adding the Controller's public key as an authorized key on the cluster nodes. The following steps take you through the configuration procedure for an example scenario. You will need to adjust the steps based on your environment.
If you are using EC2 instances on AWS, the following steps are taken care of for you when you provision the EC2 hosts. At that time, you are prompted for your PEM file, which causes the public key for the PEM file to be copied to the authorized_keys of the hosts. You can skip these steps in this case.
On the Platform Admin machine, follow these steps:
Log in to the platform-admin machine or switch to the user you will use to perform the deployment:
Create a directory for SSH artifacts (if it doesn't already exist) and set permissions on the directory, as follows:
Change to the directory:
Generate PEM public and private keys in RSA format:
The key file must not use password protection.
When prompted, enter a name for the file in which to save the key, such as appd-analytics.
Rename the key file by adding the .pem extension:
You will later configure the path to it as the sshKeyFile setting in the Platform Administration Application configuration file, as described in Deploying an Events Service Cluster.
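Taken together, the steps above can be sketched as follows (the key name appd-analytics is an example):

```shell
mkdir -p ~/.ssh            # create the SSH directory if it doesn't exist
chmod 700 ~/.ssh           # restrict access to the owner
cd ~/.ssh
# Generate an RSA key pair in PEM format with no passphrase
ssh-keygen -t rsa -m PEM -N "" -f appd-analytics
mv appd-analytics appd-analytics.pem   # add the .pem extension to the private key
```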
Transfer a copy of the public key to the cluster machines. For example, you can use scp to perform the transfer as follows:
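A sketch, assuming the public key is ~/.ssh/myserver.pub, the remote user is user1, and the cluster nodes are host1, host2, and host3 (all placeholders):

```shell
scp ~/.ssh/myserver.pub user1@host1:/tmp/
scp ~/.ssh/myserver.pub user1@host2:/tmp/
scp ~/.ssh/myserver.pub user1@host3:/tmp/
```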
Continuing with the example, myserver should be appd-analytics.
The first time you connect, you may need to confirm the connection to add the cluster machine to the list of known hosts, and to enter the user's password.
On each cluster node (host1, host2, and host3), create the .ssh directory in the user home directory, if not already there, and add the public key you just copied as an authorized key:
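On each node, that can look like the following, assuming the public key was copied to /tmp/myserver.pub (a placeholder path from the earlier example):

```shell
mkdir -p ~/.ssh                                  # create the directory if missing
chmod 700 ~/.ssh
cat /tmp/myserver.pub >> ~/.ssh/authorized_keys  # register the copied public key
chmod 600 ~/.ssh/authorized_keys
```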
Test the configuration from the Controller machine by trying to log in to a cluster node using ssh:
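For example, using the placeholder key file, user, and host names from the steps above:

```shell
ssh -i ~/.ssh/appd-analytics.pem user1@host1
```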
If unable to connect, make sure that the cluster machines have the openssh-server package installed and that you have modified the operating system firewall rules to accept SSH connections. If successful, you can use the Platform Administration Application on the Controller host to deploy the Events Service cluster, as described next.
If the Platform Administration Application attempts to install the Events Service on a node for which passwordless SSH is not properly configured, you will see the following error message:
If you encounter this error, use the instructions in this section to double check your passwordless SSH configuration.
Installing the Events Service
Set up load balancing. See Load Balance Events Service Traffic for information about configuring the load balancer.
At the command line, navigate to the platform-admin directory created during Platform Admin application installation. See Install the Platform Admin Application.
If it has been more than one day since your last session, you will have to log in with the following command:
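With the Platform Admin CLI, the login command typically looks like the following; treat the exact option names as version-dependent and confirm them in the Platform Admin application documentation:

```shell
bin/platform-admin.sh login --user-name <admin_username> --password <admin_password>
```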
Create a platform as follows:
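A sketch of the command; the platform name and directory are placeholders, and exact option names may vary by version:

```shell
bin/platform-admin.sh create-platform --name <platform_name> --installation-dir <installation_dir>
```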
The installation directory is the directory where the application installs all platform components.
The same installation directory must exist and is used on all remote nodes, to keep the configuration consistent across nodes.
Add the SSH key that the Platform Admin application will use to access and manage the Events Service hosts remotely. (See Create the SSH Key for more information):
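A sketch with placeholder names; confirm the exact command syntax for your version:

```shell
bin/platform-admin.sh add-credential --credential-name <credential_name> --type ssh \
  --user-name <ssh_username> --ssh-key-file <file path to the key file>
```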
<file path to the key file> is the private key for the Platform Admin machine. The installation process uses the key to connect to the Events Service hosts. The key is not deployed to the hosts; instead, it is encrypted and stored in the Platform Admin database.
The platform-name parameter is optional.
Add hosts to the platform, passing the credential you added to the platform:
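For example, with placeholder hostnames and the credential name added in the previous step (option names may vary by version):

```shell
bin/platform-admin.sh add-hosts --hosts host1 host2 host3 --credential <credential_name>
```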
The platform-name parameter is optional.
On each Events Service destination node in the cluster, create an installation directory for the Events Service. This is the directory you specified as the installation-dir argument when creating the platform in step (2).
Again at the command line for the Platform Admin application machine, run the following command from the platform-admin directory. Pass the cluster configuration settings as arguments to the command. The format for the command is the following:
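The general shape of the command is roughly as follows; the exact command and option names vary by version, so confirm them in Install the Events Service Cluster:

```shell
bin/platform-admin.sh install-events-service --profile <prod|dev> \
  --hosts <host1> <host2> <host3>
```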
The platform-name parameter is optional.
- hosts: Use this argument or host-file to specify the internal DNS hostnames or IP addresses of the cluster hosts in your deployment. With this argument, pass the hostnames or addresses as parameters. For example:
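For instance, using the illustrative private addresses from the earlier AWS example:

```shell
--hosts 172.31.2.19 172.31.2.20 172.31.2.21
```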
- host-file: As an alternative to specifying hosts as --hosts arguments, pass them as a list in a text file you specify with this argument. Specify the internal DNS hostname or IP address for each cluster host as a separate line in the plain text file:
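A host file might contain, one host per line (addresses are illustrative):

```
172.31.2.19
172.31.2.20
172.31.2.21
```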
- profile: By default (with profile not specified), the installation is treated as a production installation. Specifying the developer profile (--profile dev) directs the Platform Admin application to use a reduced hardware profile requirement, suitable for non-production environments only. The Platform Admin application checks for the following resources:
- For the dev profile: 1 CPU core, 1 GB of RAM, and 2 GB of disk space.
- Otherwise: 4 CPU cores, 12 GB of RAM, and 128 GB of disk space.
If using a hosts text file, use the following command:
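Assuming a host file named hosts.txt (the file name is illustrative, and the exact option syntax should be confirmed in Install the Events Service Cluster):

```shell
bin/platform-admin.sh install-events-service --profile prod --host-file hosts.txt
```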
Log in to each Events Service node machine, and run the script for setting up the environment as follows:
Add execute permission to the tune-system.sh script:
Run the script:
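For example, from the directory containing tune-system.sh on the node (the location is environment-specific):

```shell
chmod +x tune-system.sh   # add execute permission
./tune-system.sh          # apply the OS environment settings
```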
Configure the Controller connection to the Events Service as follows. If you are using a load balancer, use the virtual IP for the Events Service as presented at the load balancer:
- Open the Administration Console.
- In the Controller settings pane, find appdynamics.on.premise.event.service.key and paste in the value taken from the Events Service machine, which can be found under its key in the file <es_install_dir>/processor/conf/events-service-api-store.properties.
- In the Controller settings pane, find appdynamics.on.premise.event.service.url and change its value to the URL of the virtual IP for the Events Service at the load balancer. If you are using a single-node Events Service without a load balancer, you can use the address of that Events Service node as the URL.
- Restart the Controller.
When finished, use the Platform Admin application for any Events Service administrative functions. You should not need to access the cluster node machines directly once they are deployed. In particular, do not attempt to use scripts included in the Events Service node home directories.