- Events Service API Store Port: 9080
- Events Service API Store Admin Port: 9081
For a cluster, ensure that the following ports are open for communication between machines within the cluster. Typically, this requires configuring iptables or OS-level firewall software on each machine to open the ports listed:
9300 – 9400
The following shows an example of iptables commands to configure the operating system firewall:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9080 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9081 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp --dports 9300:9400 -j ACCEPT
If a port on the Events Service node is blocked, the Events Service installation command will fail for the node and the Enterprise Console command output and logs will include an error message similar to the following:
failed on host: <ip_address> with message: Uri [http://localhost:9080/_ping] is un-pingable.
If you see this error, make sure that the ports indicated in this section are available to other cluster nodes.
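Before running the installer, you can check reachability of these ports from another machine in the cluster. The following is a minimal sketch using bash's /dev/tcp pseudo-device; events-node-1 is a placeholder host name, not a real node:

```shell
# check_port: succeed (exit 0) if a TCP connection to host:port can be
# opened within 3 seconds, using bash's built-in /dev/tcp pseudo-device.
check_port() {
  local host="$1" port="$2"
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Check the Events Service API and admin ports on a peer node
# (events-node-1 is a placeholder).
for port in 9080 9081; do
  if check_port events-node-1 "$port"; then
    echo "port $port reachable"
  else
    echo "port $port blocked or closed"
  fi
done
```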
When installing Events Service, you will need to provide the SSH key that the Enterprise Console can use to access Events Service hosts remotely. Before starting, create the PEM public and private keys in RSA format. The key file must not use password protection.
For example, using ssh-keygen, you can create the key using the following command:
ssh-keygen -t rsa -b 2048 -v
The Enterprise Console needs to be able to access each cluster machine using passwordless SSH. Before starting, enable key-based SSH access.
This setup involves generating a key pair on the Enterprise Console host and adding the Enterprise Console's public key as an authorized key on the cluster nodes. The following steps take you through the configuration procedure for an example scenario. You will need to adjust the steps based on your environment.
If you are using EC2 instances on AWS, the following steps are taken care of for you when you provision the EC2 hosts. At that time, you are prompted for your PEM file, which causes the public key for the PEM file to be copied to the authorized_keys of the hosts. You can skip these steps in this case.
On the Enterprise Console machine, follow these steps:
Log in to the Enterprise Console machine or switch to the user you will use to perform the deployment:
su - $USER
Create a directory for SSH artifacts (if it doesn't already exist) and set permissions on the directory, as follows:
mkdir -p ~/.ssh
chmod 700 ~/.ssh
Change to the directory:
cd ~/.ssh
Generate PEM public and private keys in RSA format:
ssh-keygen -t rsa -b 2048 -v
The key file must not use password protection.
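If you are scripting this step, the same key can be generated non-interactively; passing -N "" sets an empty passphrase, which satisfies the no-password requirement. The file name matches the appd-analytics example used below:

```shell
# Create the .ssh directory if needed, then generate a 2048-bit RSA key
# pair non-interactively; -N "" sets an empty passphrase and -q keeps
# the output quiet.
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/appd-analytics -q

# Print the fingerprint to confirm the key pair was created.
ssh-keygen -l -f ~/.ssh/appd-analytics.pub
```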
- Enter a name for the file in which to save the key when prompted, such as appd-analytics.
Rename the key file by adding the .pem extension:
mv appd-analytics appd-analytics.pem
You will later configure the path to it as the sshKeyFile setting in the Enterprise Console configuration file, as described in Deploying an Events Service Cluster.
Transfer a copy of the public key to the cluster machines. For example, you can use scp to perform the transfer as follows:
scp ~/.ssh/myserver.pub host1:/tmp
scp ~/.ssh/myserver.pub host2:/tmp
scp ~/.ssh/myserver.pub host3:/tmp
Continuing with the example, myserver should be appd-analytics.
The first time you connect you may need to confirm the connection to add the cluster machine to the list of known hosts and enter the user's password.
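When the cluster has many nodes, the per-host scp commands above can be collected into a small helper; distribute_pubkey is a hypothetical function name, not part of the product:

```shell
# distribute_pubkey: copy a public key file to /tmp on each listed host.
# Hypothetical convenience wrapper around the scp commands above.
distribute_pubkey() {
  local pubkey="$1"; shift
  local host
  for host in "$@"; do
    scp "$pubkey" "${host}:/tmp" || echo "copy to $host failed"
  done
}

# Example:
# distribute_pubkey ~/.ssh/appd-analytics.pub host1 host2 host3
```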
On each cluster node (host1, host2, and host3), create the .ssh directory in the user home directory, if not already there, and add the public key you just copied as an authorized key:
cat /tmp/appd-analytics.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
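If you are scripting the node setup, the append and permission commands can be wrapped as follows; install_authorized_key is a hypothetical helper, not an AppDynamics tool:

```shell
# install_authorized_key: append a public key to a user's authorized_keys
# file and set the permissions sshd requires.
# Arguments: path to public key file, home directory of the target user.
install_authorized_key() {
  local pubkey="$1" home_dir="$2"
  mkdir -p "$home_dir/.ssh"
  chmod 700 "$home_dir/.ssh"
  cat "$pubkey" >> "$home_dir/.ssh/authorized_keys"
  chmod 600 "$home_dir/.ssh/authorized_keys"
}

# Example: install the copied key for the current user.
# install_authorized_key /tmp/appd-analytics.pub "$HOME"
```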
Test the configuration from the Enterprise Console machine by trying to log in to a cluster node using ssh. For example:
ssh -i ~/.ssh/appd-analytics.pem username@host1
If you are unable to connect, make sure that the cluster machines have the openssh-server package installed and that you have modified the operating system firewall rules to accept SSH connections. If successful, you can use the Enterprise Console to deploy the Events Service cluster, as described next.
If the Enterprise Console attempts to install the Events Service on a node for which passwordless SSH is not properly configured, you will see the following error message:
./bin/platform-admin.sh install-events-service --ssh-key-file /root/e2e-demo.pem --remote-user username --installation-dir /home/username/ --hosts 172.31.57.202 172.31.57.203 172.31.57.204
...
Events Service installation failed.
Task: Copying JRE to the remote host failed on host: 172.31.57.204 with message: Failed to upload file: java.net.ConnectException: Connection timed out
If you encounter this error, use the instructions in this section to double-check your passwordless SSH configuration.
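One way to verify all nodes at once before rerunning the installation is a loop that attempts a non-interactive login on each host; check_passwordless_ssh is a hypothetical helper, and the key path and host names are placeholders:

```shell
# check_passwordless_ssh: attempt a non-interactive login to each host with
# the given key. BatchMode=yes makes ssh fail instead of prompting for a
# password, so any misconfigured node is reported immediately.
check_passwordless_ssh() {
  local key="$1"; shift
  local host failed=0
  for host in "$@"; do
    if ssh -i "$key" -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; then
      echo "OK: $host"
    else
      echo "FAILED: $host"
      failed=1
    fi
  done
  return $failed
}

# Example:
# check_passwordless_ssh ~/.ssh/appd-analytics.pem host1 host2 host3
```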