
The AppDynamics Platform Admin application automates the task of installing and administering an Events Service deployment. For information on the Platform Admin application, see Install the Platform Admin Application.

Events Service Host Requirements

Before starting, be sure to review the Release Notes for known issues and late-breaking information on using the Events Service and Platform Admin application. Also observe the following requirements:

  • The Events Service can be deployed as a single node or as a multi-node cluster of three or more nodes. 

  • The versions of Linux supported include the flavors and versions supported by the Controller, as indicated by Prepare Linux for the Controller.
  • The Events Service must run on a dedicated machine. The machine should not run applications or processes unrelated to the Events Service.

  • Use appropriately sized hardware for the Events Service machines. The Platform Admin application checks the target system for minimum hardware requirements. For more information on these requirements, see the description of the profile argument to the Events Service install command in Install the Events Service Cluster.
  • The Controller and Events Service must reside on the same local network and communicate over an internal network. Do not deploy the cluster to nodes on different networks, whether relative to each other or to the Controller where the Platform Admin application runs. When identifying cluster hosts in the configuration, use the internal DNS name or IP address of the host, not the externally routable DNS name. 
    For example, in terms of an AWS deployment, use the private IP address such as 172.31.2.19 rather than public DNS hostname such as ec2-34-201-129-89.us-west-2.compute.amazonaws.com.
  • Make sure that the appropriate ports on each Events Service host are open. See Port Settings for more information. 

  • The Platform Admin application uses an SSH key to access the Events Services hosts. See the section below for information on generating the key. 

  • Events Service nodes normally operate behind a load balancer. When installing an Events Service node, the Platform Admin application automatically configures a direct connection from the Controller to the node. If you deploy a cluster, the first master node is automatically configured as the connection point in the Controller. You will need to reconfigure the Controller to connect through the load balancer VIP after installation, as described below. For sample configurations, see Load Balance Events Service Traffic. 

Port Settings

Each machine must have the following ports accessible to external (outside the cluster) traffic: 

  • Events Service API Store Port: 9080
  • Events Service API Store Admin Port: 9081

For a cluster, ensure that the following ports are open for communication between machines within the cluster. Typically, this requires configuring iptables or OS-level firewall software on each machine to open the ports listed below:

  • 9300 – 9400

The following shows an example of iptables commands to configure the operating system firewall: 

-A INPUT -m state --state NEW -m tcp -p tcp --dport 9080 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9081 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp --dports 9300:9400 -j ACCEPT
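
On distributions that manage the firewall with firewalld rather than raw iptables rules, a roughly equivalent setup might look like the following sketch (assuming the default zone; adjust for your environment):

firewall-cmd --permanent --add-port=9080/tcp
firewall-cmd --permanent --add-port=9081/tcp
firewall-cmd --permanent --add-port=9300-9400/tcp
firewall-cmd --reload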

If a port on the Events Service node is blocked, the Events Service installation command will fail for the node and the Platform Admin application command output and logs will include an error message similar to the following: 

failed on host: <ip_address> with message: Uri [http://localhost:9080/_ping] is un-pingable.

If you see this error, make sure that the ports indicated in this section are available to other cluster nodes. 
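
Once a node is up, you can verify reachability from another cluster machine; for example, with curl against the _ping endpoint or a plain TCP probe using nc (a sketch, with 192.168.32.105 standing in for a node's internal address):

curl http://192.168.32.105:9080/_ping
nc -zv 192.168.32.105 9300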

Create the SSH Key

When installing Events Service, you will need to provide the SSH key that the Platform Admin application can use to access Events Service hosts remotely. Before starting, create the PEM public and private keys in RSA format. The key file must not use password protection.

For example, using ssh-keygen, you can create the key using the following command:

ssh-keygen -t rsa -b 2048 -v 
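
To create the key non-interactively, you can pass the output file name and an empty passphrase on the command line (a sketch using the appd-analytics file name from the procedure below):

ssh-keygen -t rsa -b 2048 -f ~/.ssh/appd-analytics -N ""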

Configure SSH Passwordless Login

The Platform Admin application needs to be able to access each cluster machine using passwordless SSH. Before starting, enable key-based SSH access.

This setup involves generating a key pair on the Controller host and adding the Controller's public key as an authorized key on the cluster nodes. The following steps take you through the configuration procedure for an example scenario. You will need to adjust the steps based on your environment.  

If you are using EC2 instances on AWS, these steps are taken care of for you when you provision the EC2 hosts. At that time, you are prompted for your PEM file, which causes the public key for the PEM file to be copied to the authorized_keys file on the hosts. In this case, you can skip these steps.  

On the Platform Admin machine, follow these steps:

  1. Log in to the platform-admin machine or switch to the user you will use to perform the deployment:

    su - $USER
  2. Create a directory for SSH artifacts (if it doesn't already exist) and set permissions on the directory, as follows:

    mkdir -p ~/.ssh 
    chmod 700 ~/.ssh
  3. Change to the directory:

    cd ~/.ssh
  4. Generate PEM public and private keys in RSA format:

    ssh-keygen -t rsa -b 2048 -v 

    The key file must not use password protection.

  5. When prompted, enter a name for the file in which to save the key, such as appd-analytics.

  6. Rename the key file by adding the .pem extension:  

    mv appd-analytics appd-analytics.pem

    You will later configure the path to it as the sshKeyFile setting in the Platform Admin application configuration file, as described in Deploying an Events Service Cluster.

  7. Transfer a copy of the public key to the cluster machines. For example, you can use scp to perform the transfer as follows (for a one-command alternative, see the ssh-copy-id sketch after these steps): 

    scp ~/.ssh/appd-analytics.pub host1:/tmp
    scp ~/.ssh/appd-analytics.pub host2:/tmp
    scp ~/.ssh/appd-analytics.pub host3:/tmp

    The first time you connect, you may need to confirm the connection to add the cluster machine to the list of known hosts and to enter the user's password. 

  8. On each cluster node (host1, host2, and host3), create the .ssh directory in the user home directory, if it is not already there, and add the public key you just copied as an authorized key:

    mkdir -p ~/.ssh
    chmod 700 ~/.ssh
    cat /tmp/appd-analytics.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
  9. Test the configuration from the Controller machine by trying to log in to a cluster node over SSH:

    ssh -i ~/.ssh/appd-analytics.pem host1

    If you are unable to connect, make sure that the cluster machines have the openssh-server package installed and that you have modified the operating system firewall rules to accept SSH connections. If successful, you can use the Platform Admin application on the Controller host to deploy the Events Service cluster, as described next. 
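
Where the ssh-copy-id utility is available on the Platform Admin machine, it can stand in for steps 7 and 8 by copying and installing the public key in one operation per host (a sketch; you are prompted for the user's password once per host):

ssh-copy-id -i ~/.ssh/appd-analytics.pub host1
ssh-copy-id -i ~/.ssh/appd-analytics.pub host2
ssh-copy-id -i ~/.ssh/appd-analytics.pub host3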

If the Platform Admin application attempts to install the Events Service on a node for which passwordless SSH is not properly configured, you will see an error message similar to the following: 

./bin/platform-admin.sh install-events-service --ssh-key-file /root/e2e-demo.pem --remote-user username --installation-dir /home/username/ --hosts 172.31.57.202 172.31.57.203 172.31.57.204
...
Events Service installation failed. Task: Copying JRE to the remote host failed on host: 172.31.57.204 with message: Failed to upload file: java.net.ConnectException: Connection timed out

If you encounter this error, use the instructions in this section to double check your passwordless SSH configuration.  
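
A quick way to confirm passwordless access to a specific node is to run ssh in batch mode, which fails immediately instead of prompting for a password (a sketch reusing the key file, user, and host from the example above):

ssh -i /root/e2e-demo.pem -o BatchMode=yes username@172.31.57.204 echo ok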

Install the Events Service

  1. Set up load balancing. See Load Balance Events Service Traffic for information about configuring the load balancer. 

  2. At the command line, navigate to the platform-admin directory created when you installed the Platform Admin application. See Install the Platform Admin Application.

  3. If it has been more than one day since your last session, you will have to log in with the following command:

    bin/platform-admin.sh login --user-name <admin_username> --password <admin_password>
  4. Create a platform as follows: 

    bin/platform-admin.sh create-platform --name <platform_name> --installation-dir <platform_installation_directory>

    The installation directory is the directory where the application installs all platform components.

    The same installation directory must exist on all remote nodes; using one path everywhere keeps the configuration homogeneous across nodes.
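
    For example, a hypothetical invocation that creates a platform named my-platform under /opt/appdynamics/platform:

    bin/platform-admin.sh create-platform --name my-platform --installation-dir /opt/appdynamics/platform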

  5. Add the SSH key that the Platform Admin application will use to access and manage the Events Service hosts remotely. (See Create the SSH Key for more information): 

    bin/platform-admin.sh add-credential --credential-name <name> --type ssh --user-name <username> --ssh-key-file <file path to the key file> --platform-name <name of platform>

    <file path to the key file> is the private key for the Platform Admin machine. The installation process uses the key to connect to the Events Service hosts. The key is not deployed to the hosts; instead, it is encrypted and stored in the Platform Admin database.
    The platform-name parameter is optional.
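
    Continuing the hypothetical example, using the appd-analytics.pem key created earlier and a remote user named appduser:

    bin/platform-admin.sh add-credential --credential-name appd-key --type ssh --user-name appduser --ssh-key-file ~/.ssh/appd-analytics.pem --platform-name my-platform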

  6. Add hosts to the platform, passing the credential you added to the platform: 

    bin/platform-admin.sh add-hosts --hosts es_host_1 es_host_2 es_host_3 --credential <credential name> --platform-name <name of platform>

    The platform-name parameter is optional.
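
    For example, with the credential added above and the cluster hosts' internal IP addresses:

    bin/platform-admin.sh add-hosts --hosts 192.168.32.105 192.168.32.106 192.168.32.107 --credential appd-key --platform-name my-platform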

  7. On each Events Service destination node in the cluster, create an installation directory for the Events Service. This is the directory you specified as the --installation-dir argument when creating the platform in step 4.

  8. Again at the command line for the Platform Admin application machine, run the following command from the platform-admin directory. Pass the cluster configuration settings as arguments to the command. The format for the command is the following:

    bin/platform-admin.sh install-events-service --profile prod --hosts <host1> <host2> <host3> --data-dir "<data_directory_on_node>" --platform-name <name of platform>

    The platform-name parameter is optional.
    Arguments are: 

    • hosts: Use this argument or host-file to specify the internal DNS hostnames or IP addresses of the cluster hosts in your deployment. With this argument, pass the hostnames or addresses as parameters. For example:

      --hosts 192.168.32.105 192.168.32.106 192.168.32.107
    • host-file: As an alternative to specifying hosts as --hosts arguments, pass them as a list in a text file you specify with this argument. Specify the internal DNS hostname or IP address for each cluster host as a separate line in the plain text file:

      192.168.32.105
      192.168.32.106
      192.168.32.107
    • profile: By default (with profile not specified), the installation is considered a production installation. Specifying a developer profile (--profile dev) directs the Platform Admin application to use a reduced hardware profile requirement, suitable for non-production environments only. The Platform Admin application checks for the following resources:

      • For a dev profile: 1 CPU core, 1 GB RAM, and 2 GB of disk space. 

      • Otherwise: 4 CPU cores, 12 GB RAM, and 128 GB of disk space. 

    For example:

    bin/platform-admin.sh install-events-service --profile dev --hosts ip-172-31-20-21.us-west-2.compute.internal ip-172-31-20-22.us-west-2.compute.internal ip-172-31-20-23.us-west-2.compute.internal 

    If using a hosts text file, use the following command:

    bin/platform-admin.sh install-events-service --host-file=/home/appduser/hosts.txt
  9. Log in to each Events Service node machine and run the script that tunes the operating system environment, as follows:

    1. Add execute permission to the tune-system.sh script:

      chmod +x <installation_dir>/events-service/processor/bin/tool/tune-system.sh
    2. Run the script:

      sudo <installation_dir>/events-service/processor/bin/tool/tune-system.sh
  10. Configure the Controller connection to the Events Service as follows. If you are using a load balancer, use the virtual IP for the Events Service as presented at the load balancer. 

    1. Open the Administration Console.

    2. In the Controller settings pane, find appdynamics.on.premise.event.service.key and paste in the value of the ad.accountmanager.key.controller key, taken from the file <es_install_dir>/processor/conf/events-service-api-store.properties on the Events Service machine (see the grep sketch after this step).
    3. In the Controller settings pane, find appdynamics.on.premise.event.service.url and change its value to the URL of the virtual IP for the Events Service at the load balancer. If you are using a single-node Events Service without a load balancer, you can use the address of that Events Service as the URL.

    It may take a few minutes for the Controller and Events Service to synchronize account information after you modify connection settings in the console. 
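
    One way to read the key value on an Events Service machine is to grep the properties file directly (a sketch; substitute your actual installation directory for <es_install_dir>):

    grep ad.accountmanager.key.controller <es_install_dir>/processor/conf/events-service-api-store.properties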
  11. Restart the Controller.

When finished, use the Platform Admin application for any Events Service administrative functions. You should not need to access the cluster node machines directly once they are deployed. In particular, do not attempt to use scripts included in the Events Service node home directories. 
