The Controller and Events Service must reside on the same local network and communicate over the internal network. Do not deploy cluster nodes on different networks, whether relative to each other or to the Controller and the Enterprise Console. When identifying cluster hosts in the configuration, use the internal DNS name or IP address of the host, not the externally routable DNS name.
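Before configuring the cluster, it can help to confirm that each host resolves by its internal name from the Enterprise Console machine. A minimal sketch, where host1 through host3 are placeholder names for your cluster nodes:

```shell
# Verify that each cluster host resolves via internal DNS.
# host1..host3 are placeholders; substitute your internal host names.
for host in host1 host2 host3; do
  if getent hosts "$host" > /dev/null; then
    echo "$host resolves"
  else
    echo "$host does NOT resolve internally"
  fi
done
```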
If a port on an Events Service node is blocked, the Events Service installation command fails for that node, and the Enterprise Console command output and logs include an error message similar to the following:
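To catch blocked ports before the installation fails, you can probe TCP reachability from the Enterprise Console host. The following is a sketch only: host1 through host3 are placeholders, and 9080 and 9300 are assumed example ports; substitute the ports your Events Service configuration actually uses.

```shell
# Check TCP reachability of the Events Service ports on each node before
# installing. Hosts and ports below are placeholders for your deployment.
check_port() {
  # succeeds if a TCP connection to host $1, port $2 opens within 3 seconds
  timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

for host in host1 host2 host3; do
  for port in 9080 9300; do
    if check_port "$host" "$port"; then
      echo "$host:$port reachable"
    else
      echo "$host:$port blocked or unreachable"
    fi
  done
done
```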
For Linux deployments, you use the Enterprise Console to deploy and manage the Events Service cluster.
The Enterprise Console must be able to access each cluster machine using passwordless SSH for a non-embedded Events Service. Before starting, enable key-based SSH access as described in the following steps.
On the host machine, follow these steps:
Log in to the Enterprise Console host machine as, or switch to, the user who will perform the deployment:
su - $USER
Create a directory for SSH artifacts (if it doesn't already exist) and set permissions on the directory, as follows:
mkdir -p ~/.ssh
chmod 700 ~/.ssh
Change to the directory:
cd ~/.ssh
Generate PEM public and private keys in RSA format:
ssh-keygen -t rsa -b 2048 -v
When prompted, enter a name for the file in which to save the key, such as appd-analytics.
Rename the key file by adding the .pem extension:
mv appd-analytics appd-analytics.pem
You will later configure the path to it as the sshKeyFile setting in the Enterprise Console configuration file, as described in Deploying an Events Service Cluster.
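For illustration only, the entry might look like the following; the exact file format and surrounding settings are defined in Deploying an Events Service Cluster, and the path shown is an assumed example:

```
sshKeyFile: /home/appduser/.ssh/appd-analytics.pem
```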
Transfer a copy of the public key to the cluster machines. For example, you can use scp to perform the transfer as follows:
scp ~/.ssh/myserver.pub host1:/tmp
scp ~/.ssh/myserver.pub host2:/tmp
scp ~/.ssh/myserver.pub host3:/tmp
Continuing with the example, myserver should be appd-analytics.
The first time you connect you may need to confirm the connection to add the cluster machine to the list of known hosts and enter the user's password.
On each cluster node (host1, host2, and host3), create the .ssh directory in the user home directory, if not already there, and add the public key you just copied as an authorized key:
cat /tmp/appd-analytics.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
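sshd silently refuses key-based logins when these permissions are too open, so it is worth confirming the modes on each node. A small check, assuming GNU coreutils stat (on BSD/macOS, use stat -f '%Lp' instead):

```shell
# Print the octal permission modes sshd expects: 700 for ~/.ssh and
# 600 for authorized_keys. GNU coreutils stat is assumed here.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
stat -c '%a' ~/.ssh ~/.ssh/authorized_keys
```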
Test the configuration from the host machine by logging in to a cluster node over SSH:
ssh -i ~/.ssh/appd-analytics.pem host1
If you cannot connect, make sure that the cluster machines have the openssh-server package installed and that the operating system firewall rules accept SSH connections. If successful, you can use the Enterprise Console to deploy the platform.
If you encounter the following error, use the instructions in this section to double-check your passwordless SSH configuration: