Network and Port Settings
The Controller and Events Service must reside on the same local network and communicate over the internal network. Do not deploy the cluster to nodes on different networks, whether relative to each other or to the Controller and the Enterprise Console. When identifying cluster hosts in the configuration, use the internal DNS name or IP address of the host, not the externally routable DNS name.
For example, in an AWS deployment, use the private IP address, such as 172.31.2.19, rather than the public DNS hostname, such as ec2-34-201-129-89.us-west-2.compute.amazonaws.com.
On each machine, the following ports need to be accessible to external (outside the cluster) traffic:
- Events Service API Store Port: 9080
- Events Service API Store Admin Port: 9081
For a cluster, ensure that the following ports are open for communication between machines within the cluster. Typically, this requires configuring iptables or OS-level firewall software on each machine to open the ports listed below:
- 9300–9400
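Before adjusting firewall rules, it can help to see what is already listening on a node. This sketch assumes a Linux host with the iproute2 `ss` utility installed:

```shell
# List all listening TCP sockets with their ports (Linux, iproute2).
ss -ltn

# Narrow the view to the Events Service API ports.
ss -ltn '( sport = :9080 or sport = :9081 )'
```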
The following shows an example of iptables commands to configure the operating system firewall:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9080 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9081 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp --dports 9300:9400 -j ACCEPT
If a port on the Events Service node is blocked, the Events Service installation command will fail for the node and the Enterprise Console command output and logs will include an error message similar to the following:
If you see this error, make sure that the ports indicated in this section are available to other cluster nodes.
Configure Cluster Nodes that Run Linux
If deploying to Linux machines, on each node in the Events Service cluster, make these configuration changes:
Using a text editor, open /etc/sysctl.conf and add the following:
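The exact entries to add come from the product documentation. As an illustration only (this parameter name and value are an assumption, not taken from this document; they are a kernel setting commonly raised for Elasticsearch-backed services), an entry might look like:

```
# Illustrative only -- confirm the exact parameters and values in the
# product documentation before applying them.
vm.max_map_count = 262144
```

After editing, running `sysctl -p` as root reloads the file without a reboot.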
Raise the open file descriptor limit in /etc/security/limits.conf, as follows:
<username_running_eventsservice> soft nofile 96000
<username_running_eventsservice> hard nofile 96000
Replace <username_running_eventsservice> with the username under which the Events Service processes run. For example, if you run Analytics as the user appduser, use that name as the first entry on each line.
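To confirm that the new limits take effect, open a fresh login shell as that user and check the soft and hard open-file limits; after the change above, both should report 96000:

```shell
# Show the current soft and hard open-file limits for this shell.
# Note: limits.conf changes apply to new login sessions only.
ulimit -Sn
ulimit -Hn
```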
Configure SSH Passwordless Login
For Linux deployments, you will use the Enterprise Console to deploy and manage the Events Service cluster.
The Enterprise Console needs to be able to access each cluster machine using passwordless SSH for a non-embedded Events Service. Before starting, enable key-based SSH access as described in the following steps.
This setup involves generating a key pair on the Enterprise Console host and adding the public key as an authorized key on the cluster nodes. The following steps take you through the configuration procedure for an example scenario. You will need to adjust the steps based on your environment.
If you are using EC2 instances on AWS, the following steps are taken care of for you when you provision the EC2 hosts. At that time, you are prompted for your PEM file, which causes the public key for the PEM file to be copied to the authorized_keys of the hosts. You can skip these steps in this case.
On the host machine, follow these steps:
Log in to the Enterprise Console host machine or switch to the user you will use to perform the deployment:
Create a directory for SSH artifacts (if it doesn't already exist) and set permissions on the directory, as follows:
Change to the directory:
Generate PEM public and private keys in RSA format:
- Enter a name for the file in which to save the key when prompted, such as appd-analytics.
Rename the key file by adding the .pem extension:
You will later configure the path to it as the sshKeyFile setting in the Enterprise Console configuration file, as described in Deploying an Events Service Cluster.
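Assuming OpenSSH, the key-generation and rename steps above can be sketched as follows. The appd-analytics file name comes from the example; -m PEM requests PEM-format output on OpenSSH 7.8 or later, and -N "" creates a key with no passphrase (omit it to be prompted instead):

```shell
# Generate an RSA key pair in PEM format; this writes appd-analytics
# (private key) and appd-analytics.pub (public key).
ssh-keygen -t rsa -b 2048 -m PEM -f appd-analytics -N ""

# Rename the private key so it carries the .pem extension expected by
# the sshKeyFile setting.
mv appd-analytics appd-analytics.pem
```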
Transfer a copy of the public key to the cluster machines. For example, you can use scp to perform the transfer as follows:
Continuing with the example, myserver should be appd-analytics.
The first time you connect you may need to confirm the connection to add the cluster machine to the list of known hosts and enter the user's password.
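Continuing the example, the transfer might look like the following sketch; the username appduser and host host1 are placeholders for your environment, not values from this document:

```
# Copy the public key to each cluster node (repeat for host2 and host3).
scp ~/.ssh/appd-analytics.pub appduser@host1:/tmp/appd-analytics.pub
```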
On each cluster node (host1, host2, and host3), create the .ssh directory in the user home directory, if not already there, and add the public key you just copied as an authorized key:
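A sketch of those node-side commands, assuming the public key was copied to /tmp/appd-analytics.pub as in the earlier example and that you are logged in as the deployment user:

```
# Create the .ssh directory with the permissions sshd requires.
mkdir -p ~/.ssh
chmod 700 ~/.ssh

# Append the copied public key as an authorized key.
cat /tmp/appd-analytics.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```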
Test the configuration from the host machine by trying to log in to a cluster node by ssh:
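For example, with the placeholder username and host from the earlier steps:

```
# From the Enterprise Console host; this should log in to the node
# without prompting for a password.
ssh -i ~/.ssh/appd-analytics.pem appduser@host1
```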
If unable to connect, make sure that the cluster machines have the openssh-server package installed and that you have modified the operating system firewall rules to accept SSH connections. If successful, you can use the Enterprise Console to deploy the platform.
If you encounter the following error, use the instructions in this section to double-check your passwordless SSH configuration: