This page describes how to prepare the machines that will host Events Service nodes, along with general requirements for the environment.

Network and Port Settings

The Controller and Events Service must reside on the same local network and communicate over the internal network. Do not deploy the cluster to nodes on different networks, whether relative to each other or to the Controller and the Enterprise Console. When identifying cluster hosts in the configuration, use the internal DNS name or IP address of each host, not its externally routable DNS name.

For example, in an AWS deployment, use the private IP address, such as 172.31.2.19, rather than the public DNS hostname, such as ec2-34-201-129-89.us-west-2.compute.amazonaws.com.
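
If you are unsure of a host's internal address, you can usually look it up on the host itself. The following is a hedged example: hostname -I works on most Linux distributions, and the instance metadata query applies only to EC2 hosts with IMDSv1 enabled (IMDSv2 requires a session token).

hostname -I
curl http://169.254.169.254/latest/meta-data/local-ipv4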

On each machine, the following ports must be accessible to traffic from outside the cluster:

  • Events Service API Store Port: 9080
  • Events Service API Store Admin Port: 9081

For a cluster, ensure that the following port range is open for communication between machines within the cluster. Typically, this requires configuring iptables or other OS-level firewall software on each machine to open the ports listed below:

  • 9300 – 9400

The following shows an example of iptables commands to configure the operating system firewall: 

-A INPUT -m state --state NEW -m tcp -p tcp --dport 9080 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9081 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp --dports 9300:9400 -j ACCEPT
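
If your hosts use firewalld rather than directly managed iptables rules, an equivalent configuration would look like the following sketch (assuming the ports should be opened in the default zone):

firewall-cmd --permanent --add-port=9080/tcp
firewall-cmd --permanent --add-port=9081/tcp
firewall-cmd --permanent --add-port=9300-9400/tcp
firewall-cmd --reload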

If a port on an Events Service node is blocked, the Events Service installation command fails for that node, and the Enterprise Console command output and logs include an error message similar to the following:

failed on host: <ip_address> with message: Uri [http://localhost:9080/_ping] is un-pingable. 

If you see this error, make sure that the ports indicated in this section are available to other cluster nodes. 
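
One way to verify reachability before retrying the installation is to request the _ping endpoint from another cluster node or from the Enterprise Console host, substituting the node's internal IP address. This is a hedged check: if a firewall is dropping traffic to the port, the request typically times out, whereas an open port returns an HTTP response (or an immediate connection refused if the Events Service process is not yet running).

curl -v http://172.31.2.19:9080/_ping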

Configure SSH Passwordless Login

For Linux deployments, you will use the Enterprise Console to deploy and manage the Events Service cluster. 

For a non-embedded Events Service, the Enterprise Console must be able to access each cluster machine using passwordless SSH. Before starting the deployment, enable key-based SSH access as described below.

This setup involves generating a key pair on the Enterprise Console and adding the public key as an authorized key on the cluster nodes. The following steps take you through the configuration procedure for an example scenario. You will need to adjust the steps based on your environment.  

If you are using EC2 instances on AWS, these steps are taken care of for you when you provision the EC2 hosts. At that time, you are prompted for your PEM file, and the public key for that PEM file is copied to the authorized_keys file on each host. In this case, you can skip the following steps.

On the host machine, follow these steps:

  1. Log in to the Enterprise Console host machine as the user who will perform the deployment, or switch to that user:

    su - <username>
  2. Create a directory for SSH artifacts (if it doesn't already exist) and set permissions on the directory, as follows:

    mkdir -p ~/.ssh 
    chmod 700 ~/.ssh
  3. Change to the directory:

    cd ~/.ssh
  4. Generate PEM public and private keys in RSA format:

    ssh-keygen -t rsa -b 2048 -v -m pem
  5. Enter a name for the file in which to save the key when prompted, such as appd-analytics.
  6. Rename the key file by adding the .pem extension:  

    mv appd-analytics appd-analytics.pem

    You will later configure the path to this key file as the sshKeyFile setting in the Enterprise Console configuration file, as described in Deploying an Events Service Cluster.

  7. Transfer a copy of the public key to each cluster machine. For example, you can use scp to perform the transfer as follows (an alternative using ssh-copy-id is shown after these steps):

    scp ~/.ssh/appd-analytics.pub host1:/tmp
    scp ~/.ssh/appd-analytics.pub host2:/tmp
    scp ~/.ssh/appd-analytics.pub host3:/tmp

    The first time you connect, you may need to confirm the connection to add the cluster machine to the list of known hosts and to enter the user's password.

  8. On each cluster node (host1, host2, and host3), create the .ssh directory in the user's home directory, if it does not already exist, and add the public key you just copied as an authorized key:

    mkdir -p ~/.ssh
    chmod 700 ~/.ssh
    cat /tmp/appd-analytics.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
  9. Test the configuration from the Enterprise Console host by logging in to a cluster node over SSH. Because the key in this example is not stored under a default file name, specify it explicitly (or configure it in your SSH client configuration or ssh-agent):

    ssh -i ~/.ssh/appd-analytics.pem host1

    If you are unable to connect, make sure that the cluster machines have the openssh-server package installed and that you have modified the operating system firewall rules to accept SSH connections. If the test succeeds, you can use the Enterprise Console to deploy the platform.
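
As an alternative to steps 7 and 8, if the ssh-copy-id utility is available on the Enterprise Console host, it copies the public key to a node and appends it to authorized_keys in a single step. This is a hedged sketch; the username and host name are placeholders for your environment:

ssh-copy-id -i ~/.ssh/appd-analytics.pub <username>@host1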

If you encounter the following error, use the instructions in this section to double-check your passwordless SSH configuration:

Copying JRE to the remote host failed on host: 172.31.57.204 with message: Failed to upload file: java.net.ConnectException: Connection timed out
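
To confirm that passwordless login itself is working (as opposed to a network or firewall problem), you can attempt a non-interactive connection from the Enterprise Console host. This is a hedged check: the BatchMode=yes option makes ssh fail immediately rather than prompt for a password, and -v prints the authentication details.

ssh -v -o BatchMode=yes -i ~/.ssh/appd-analytics.pem host1 'echo passwordless SSH OK'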