Install the Cluster Agent with Helm Charts
This page describes how to use Cluster Agent Helm Charts to deploy the Cluster Agent.
Helm is a package manager for Kubernetes. Helm charts are a collection of files that describe a set of Kubernetes resources. The Cluster Agent Helm chart is a convenient method to deploy the Splunk AppDynamics Operator and Cluster Agent. You can also use the Cluster Agent Helm chart to deploy multiple Cluster Agents in a single cluster. This may be necessary for larger clusters that exceed the pod monitoring limit for a single Cluster Agent. See Cluster Agent Requirements and Supported Environments.
Requirements
- Cluster Agent version >= 20.6
- Controller version >= 20.6
- Cluster Agent Helm charts are compatible with Helm 3.0
It is recommended to use Cluster Agent Helm Charts version >= v1.1.0.
Use Cluster Agent Helm Charts version >= v1.10.0 to install Cluster Agent >= 23.2.0 and Splunk AppDynamics Operator version >= 23.2.0.
Use Cluster Agent Helm Charts version >= v1.1.0 to install Cluster Agent >= 21.12.0 and Splunk AppDynamics Operator version >= 21.12.0.
You can install Cluster Agent version <= 21.10.0 and Splunk AppDynamics Operator version <= 0.6.11 using the older major version of Cluster Agent Helm Charts (<=0.1.19).
Install a Single Cluster Agent in a Cluster
Delete any previously installed CustomResourceDefinitions (CRDs) related to the Splunk AppDynamics Agent by using these commands:

$ kubectl get crds
$ kubectl delete crds <crd-names>
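Because CRDs are cluster-scoped, it helps to narrow the listing before deleting anything. A minimal sketch (the CRD names below are simulated and illustrative; in a live cluster, pipe `kubectl get crds -o name` into the `grep` instead of `printf`):

```shell
# Simulated `kubectl get crds -o name` output; the AppDynamics CRD name
# shown here is illustrative -- always review the real list before deleting.
printf '%s\n' \
  'customresourcedefinitions.apiextensions.k8s.io/clusteragents.cluster.appdynamics.com' \
  'customresourcedefinitions.apiextensions.k8s.io/prometheuses.monitoring.coreos.com' \
  | grep -i appdynamics
```

Only the AppDynamics-related CRD passes the filter; everything else (here, a Prometheus CRD) is left alone.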
Add the chart repository to Helm:

helm repo add appdynamics-cloud-helmcharts https://appdynamics.jfrog.io/artifactory/appdynamics-cloud-helmcharts/
Create a namespace for appdynamics in your cluster:

kubectl create namespace appdynamics
Create a Helm values file, in this example called values-ca1.yaml. Update the controllerInfo properties with the credentials from your Controller. Update the clusterAgent properties to set the namespaces and pods to monitor. See Configure the Cluster Agent for information about the available properties: nsToMonitorRegex, nsToExcludeRegex, and podFilter.

values-ca1.yaml

# To install Cluster Agent
installClusterAgent: true

# Controller info
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  username: <appdynamics-controller-username>
  password: <appdynamics-controller-password>
  accessKey: <appdynamics-controller-access-key>

# Cluster agent config
clusterAgent:
  nsToMonitorRegex: dev-.*
From Cluster Agent 24.9 onwards, server monitoring user credentials or API-user credentials are no longer required to mark associated nodes as historical upon pod deletion. The associated node in the Controller is automatically marked as historical when a pod is deleted.
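You can preview the effect of a candidate regex pair against your namespace names before deploying. A sketch using the dev-.* value from the example values file (the namespace names and the nsToExcludeRegex value are invented; in a cluster, pipe `kubectl get ns -o name` instead of the `printf`):

```shell
# Preview which namespaces a candidate regex pair would select.
monitor='dev-.*'        # nsToMonitorRegex from the example values file
exclude='dev-sandbox.*' # hypothetical nsToExcludeRegex
printf '%s\n' dev-api dev-web dev-sandbox-1 prod-api \
  | grep -E "^(${monitor})$" \
  | grep -Ev "^(${exclude})$"
# Selects: dev-api and dev-web
```

The first `grep` keeps namespaces matching nsToMonitorRegex; the second drops those matching nsToExcludeRegex, mirroring how the two properties combine.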
See Configuration Options for values.yaml for more information about the available options. You can also download a copy of values.yaml from the Helm Chart repository using this command:

helm show values appdynamics-cloud-helmcharts/cluster-agent
- (Optional) If you require multiple Cluster Agents to monitor a single cluster, set up the Target Allocator. In the values-ca1.yaml file, enable the Target Allocator and set the number of Cluster Agent replicas so that the Operator can create them.

values-ca1.yaml

...
# Cluster agent config
clusterAgent:
  nsToMonitorRegex: dev-.*

# Instrumentation config
instrumentationConfig:
  enabled: false

# Target Allocator config
targetAllocator:
  enabled: true
  clusterAgentReplicas: 3
  autoScaling:
    enabled: false # false by default
    replicaProfile: Default
    maxClusterAgentReplicas: 12
    scaleDown:
      stabilizationWindowSeconds: 86400 # in seconds

# Target Allocator pod specific properties
targetAllocatorPod:
  imagePullPolicy: ""
  imagePullSecret: ""
  priorityClassName: ""
  nodeSelector: {}
  tolerations: []
  resources:
    limits:
      cpu: "500m"
      memory: "500Mi"
    requests:
      cpu: "200m"
      memory: "200Mi"
  labels: {}
  securityContext: {}
For more information about Target Allocator, see Target Allocator.
- (Optional) Create a secret based on the Controller access key:

kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key='<access-key>' --from-literal=api-user='<username@account:password>'
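The api-user literal follows the <username@account:password> form. A small sketch composing it from placeholder values before passing it to the `--from-literal` flag:

```shell
# Build the api-user value in <username@account:password> form.
# All three values below are placeholders for illustration.
USERNAME='admin'
ACCOUNT='customer1'
PASSWORD='s3cret'
API_USER="${USERNAME}@${ACCOUNT}:${PASSWORD}"
echo "$API_USER"
```

Using single quotes around the final value on the kubectl command line keeps shell-special characters in the password intact.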
If you have not installed the Kubernetes metrics-server in the cluster (usually located in the kube-system namespace), set install.metrics-server to true in the values file to invoke the subchart that installs it:

install:
  metrics-server: true
Setting install.metrics-server to true installs metrics-server in the namespace specified by the --namespace flag, which is the same namespace as the Cluster Agent.

Deploy the Cluster Agent to the appdynamics namespace:

helm install -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace=appdynamics
Enable Auto-Instrumentation
Once you have validated that the Cluster Agent was successfully installed, you can add configuration to the instrumentationConfig section of the values YAML file to enable auto-instrumentation. In this example, instrumentationConfig.enabled is set to true, and multiple instrumentationRules are defined. See Auto-Instrument Applications with the Cluster Agent.
values-ca1.yaml with Auto-Instrumentation Enabled
# To install Cluster Agent
installClusterAgent: true

# AppDynamics controller info
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  username: <appdynamics-controller-username>
  password: <appdynamics-controller-password>
  accessKey: <appdynamics-controller-access-key>

# Cluster agent config
clusterAgent:
  nsToMonitorRegex: ecom|books|groceries

instrumentationConfig:
  enabled: true
  instrumentationMethod: Env
  nsToInstrumentRegex: ecom|books|groceries
  defaultAppName: Ecommerce
  tierNameStrategy: manual
  enableInstallationReport: false
  imageInfo:
    java:
      image: "docker.io/appdynamics/java-agent:latest"
      agentMountPath: /opt/appdynamics
      imagePullPolicy: Always
  instrumentationRules:
    - namespaceRegex: groceries
      language: dotnetcore
      tierName: tier
      imageInfo:
        image: "docker.io/appdynamics/dotnet-core-agent:latest"
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always
    - namespaceRegex: books
      matchString: openmct
      language: nodejs
      imageInfo:
        image: "docker.io/appdynamics/nodejs-agent:20.5.0-alpinev10"
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always
      analyticsHost: <hostname of the Analytics Agent>
      analyticsPort: 443
      analyticsSslEnabled: true
After saving the values-ca1.yaml file with the added auto-instrumentation configuration, you must upgrade the Helm release:

helm upgrade -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace appdynamics
Configuration Options
Config option | Description | Required |
---|---|---|
installClusterAgent | Set this to true to install the Cluster Agent. | Optional (Defaults to true) |
Image config options (Config options under imageInfo key in values.yaml ) | ||
imageInfo.agentImage | Cluster agent image address in format <registryUrl>/<registryAccount>/<project> | Optional (Defaults to the Docker Hub image) |
imageInfo.agentTag | Cluster agent image tag/version | Optional (Defaults to latest) |
imageInfo.operatorImage | Operator image address in format <registryUrl>/<registryAccount>/<project> | Optional (Defaults to the Docker Hub image) |
imageInfo.operatorTag | Operator image tag/version | Optional (Defaults to latest) |
imageInfo.imagePullPolicy | Image pull policy for the operator pod | Optional |
Controller config options (Config options under controllerInfo key in values.yaml ) | ||
controllerInfo.accessKey | Controller access key | Required. This is not required if you have created a secret based on the access key. See Create Secret. |
controllerInfo.account | Controller account | Required |
controllerInfo.authenticateProxy | true/false if the proxy requires authentication | Optional |
controllerInfo.customSSLCert | Base64 encoding of PEM formatted SSL certificate | Optional |
controllerInfo.password | Controller password | Password for a local user from the Controller. Required only when auto-instrumentation is enabled. This is not required if you have created a secret based on the access key. See Create Secret. |
controllerInfo.proxyPassword | Password for proxy authentication | Optional |
controllerInfo.proxyUrl | Proxy URL if the Controller is behind some proxy | Optional |
controllerInfo.proxyUser | Username for proxy authentication | Optional |
controllerInfo.url | Controller URL | Required |
controllerInfo.username | Controller username | Username for a local user from the Controller. Required only when auto-instrumentation is enabled. |
Cluster Agent Config (Config options under clusterAgent key in values.yaml ) | ||
clusterAgent.appName | Name of the cluster; displays in the Controller UI as your cluster name. | Required |
clusterAgent.eventUploadInterval | How often Kubernetes warning and state-change events are uploaded to the Controller in seconds. See Monitor Kubernetes Events. | Optional |
clusterAgent.httpClientTimeout | The number of seconds after which the server call is terminated if no response is received from the Controller. | Optional |
clusterAgent.imagePullSecret | Credential file used to authenticate when pulling images from your private Docker registry or repository. Based on your Docker registry configuration, you may need to create a secret file for the Splunk AppDynamics Operator to use when pulling the image for the Cluster Agent. See Create a Secret by providing credentials on the command line. | Optional |
clusterAgent.instrumentationMaxPollingAttempts | The maximum number of times Cluster Agent checks for the successful rollout of instrumentation before marking it as failed. | Optional |
clusterAgent.logProperties.logFileSizeMb | Maximum file size of the log in MB. | Optional |
clusterAgent.logProperties.logFileBackups | Maximum number of backups saved in the log. When the maximum number of backups is reached, the oldest log file after the initial log file is deleted. | Optional |
clusterAgent.logProperties.logLevel | Level of detail in the logs. | Optional |
clusterAgent.logProperties.maxPodLogsTailLinesCount | Number of lines to be tailed while collecting logs. To use this parameter, enable the log capturing feature. See Enable Log Collection for Failing Pods. | Optional |
clusterAgent.logProperties.stdoutLogging | Specifies whether the Cluster Agent logs are also written to stdout. By default, the Cluster Agent writes to a log file. | Optional |
clusterAgent.nsToMonitorRegex | The regular expression for selecting the namespaces to be monitored in the cluster. To monitor multiple namespaces, separate them with the pipe character. If you are using Target Allocator, you must specify all the namespaces that you require to monitor; Target Allocator auto-allocates these namespaces to individual Cluster Agent replicas. See Edit Namespaces. Any modification to the namespaces in the UI takes precedence over the YAML configuration. | Optional |
clusterAgent.nsToExcludeRegex | The regular expression for the namespaces that must be excluded from the set of namespaces matched by nsToMonitorRegex. This parameter can be used only if you have specified a value for nsToMonitorRegex. | Optional |
clusterAgent.priorityClassName | The name of the pod priority class, which is used in the pod specification to set the priority. | Optional |
clusterAgent.securityContext.runAsGroup | If you configured the application container to run as a non-root user, provide the GID of the group here. This sets the appropriate file permissions on the agent artifacts and is applied to all the instrumented resources. Add this parameter if you require to override the default value. The Cluster Agent image contains a group with GID 9001. | Optional |
clusterAgent.securityContext.runAsUser | If you configured the application container to run as a non-root user, provide the UID of the user here. This sets the appropriate file permissions on the agent artifacts and is applied to all the instrumented resources. Add this parameter if you require to override the default value. The Cluster Agent image contains a user with UID 9001. | Optional |
clusterAgent.securityContext.allowPrivilegeEscalation | Controls whether a process can gain more privileges than its parent process. The value is always true when the container runs as privileged or has the CAP_SYS_ADMIN capability. | Optional |
clusterAgent.securityContext.capabilities | Adds or removes POSIX capabilities from the running containers. If not specified, the container runtime's default set of capabilities is used. | Optional |
clusterAgent.securityContext.privileged | Runs the container in privileged mode, which is equivalent to root on the host. | Optional |
clusterAgent.securityContext.procMount | The type of proc mount to use for the containers. This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
clusterAgent.securityContext.readOnlyRootFilesystem | Specifies whether this container has a read-only root filesystem. | Optional |
clusterAgent.securityContext.runAsNonRoot | Specifies whether the container must run as a non-root user. If the value is true, the Kubelet validates the image at runtime to ensure that the container fails to start when run as root. If this parameter is not specified or if the value is false, there is no validation. This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
clusterAgent.securityContext.seLinuxOptions | Applies the SELinux context to the container. If this parameter is not specified, the container runtime allocates a random SELinux context for each container. | Optional |
clusterAgent.securityContext.seccompProfile | Specifies the seccomp options used by the container. If seccomp options are specified at both the pod and container level, the container options override the pod options. | Optional |
clusterAgent.securityContext.windowsOptions | Specifies Windows-specific options for every container. | Optional |
Cluster Agent Pod Config | ||
agentPod.labels | Adds any required pod labels to the Cluster Agent pod. These labels are also added to the deployment of Cluster Agent. | Optional |
agentPod.nodeSelector | The Cluster Agent pod runs on the node that includes the specified key-value pair within its labels property. See nodeSelector. | Optional |
agentPod.resources | Requests and limits of CPU and memory resources for the Cluster Agent. | Optional |
agentPod.tolerations | An array of tolerations required for the pod. See Taint and Tolerations. | Optional |
Pod Filter Config | ||
podFilter | Blocklists or allowlists pods based on pod names or labels. Blocklisting or allowlisting by name takes precedence over blocklisting or allowlisting by labels. For example, if a pod's labels match blocklistedLabels but its name matches allowlistedNames, the pod is still monitored because the name-based rule wins. | Optional |
Target Allocator Config | ||
targetAllocator.enabled | Enables auto-allocation of namespaces to available Cluster Agent replicas. This is disabled by default; set it to true to enable it. For information about Target Allocator, see Target Allocator. | Optional |
targetAllocator.clusterAgentReplicas | The number of Cluster Agent replicas. If Target Allocator is enabled, the default value is 3. Set the number of replicas based on your requirements; to decide how many replicas are required, see Cluster Agent Requirements and Supported Environments. | Optional. This is required when targetAllocator.enabled is set to true. |
targetAllocator.autoScaling.enabled | Enables auto-scaling of Cluster Agent replicas. The default value is false. | Optional |
targetAllocator.autoScaling.replicaProfile | The replica profile to be used. Currently, only the Default profile is supported. | Optional. Required when auto-scaling is enabled. |
targetAllocator.autoScaling.maxClusterAgentReplicas | Specify the maximum number of replicas that you require to auto-scale. | Optional |
targetAllocator.autoScaling.scaleDown.stabilizationWindowSeconds | The time in seconds after which Target Allocator can scale down the replicas. Scale-down may result in a temporary drop in metrics. By default, this parameter is disabled. | Optional |
Target Allocator Pod Config | ||
targetAllocatorPod.agentPod.labels | Adds any required pod labels to the pod. These labels are also added to the deployment of Target Allocator. | Optional |
targetAllocatorPod.nodeSelector | The Target Allocator pod runs on the node that includes the specified key-value pair within its labels property. See nodeSelector. | Optional |
targetAllocatorPod.agentPod.resources | Requests and limits of CPU and memory resources for the Target Allocator. | Optional |
targetAllocatorPod.agentPod.tolerations | An array of tolerations required for the pod. See Taint and Tolerations. | Optional |
targetAllocatorPod.imagePullPolicy | Image pull policy for Target Allocator. | Optional |
targetAllocatorPod.imagePullSecret | Credential file used to authenticate when pulling images from your private Docker registry or repository. Based on your Docker registry configuration, you may need to create a secret file for the Splunk AppDynamics Operator to use when pulling the image for the Target Allocator, which is the same as Cluster Agent image. See Create a Secret by providing credentials on the command line. | Optional |
targetAllocatorPod.priorityClassName | The name of the pod priority class, which is used in the pod specification to set the priority. | Optional |
targetAllocatorPod.nodeSelector | The Target Allocator pod runs on the node that includes the specified key-value pair within its labels property. See nodeSelector. | Optional |
targetAllocatorPod.tolerations | An array of tolerations required for the pod. See Taint and Tolerations. | Optional |
targetAllocatorPod.resources | Requests and limits of CPU and memory resources for the Target Allocator. | Optional |
targetAllocatorPod.securityContext | The security context for the Target Allocator pod. You can include the same parameters that are available under clusterAgent.securityContext (runAsGroup, runAsUser, allowPrivilegeEscalation, capabilities, privileged, procMount, readOnlyRootFilesystem, runAsNonRoot, seLinuxOptions, seccompProfile, and windowsOptions); they behave the same way as for the Cluster Agent pod. | Optional |
Install Multiple Cluster Agents in a Cluster
The Cluster Agent Helm Chart supports multiple Cluster Agent installations in a cluster. This may be necessary for larger clusters that exceed the pod monitoring limit for a single Cluster Agent. See Cluster Agent Requirements and Supported Environments.
If you do not require auto-instrumentation and manual correlation (for Kubernetes >= 1.25), you can install the Cluster Agent with the Target Allocator.
The Target Allocator:
- simplifies the monitoring of large clusters by creating the specified number of replicas of the Cluster Agent.
- auto-allocates namespaces to the available Cluster Agent replicas.
- aggregates the cluster data to send to Controller.
Each Cluster Agent that is deployed must have a different configuration. This is achieved by limiting monitoring to a distinct set of namespaces and pods using the nsToMonitorRegex, nsToExcludeRegex, and podFilter properties. See Configure the Cluster Agent.
To install Cluster Agents:
Create a new values file, in this example called values-ca2.yaml, that uses the same controllerInfo properties as the first Cluster Agent. Add additional properties, such as nsToMonitorRegex and podFilter, to set the monitoring scope for this Cluster Agent.

values-ca2.yaml

# To install Cluster Agent
installClusterAgent: true

# AppDynamics controller info
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  accessKey: <appdynamics-controller-access-key>

# Cluster agent config
clusterAgent:
  nsToMonitorRegex: stage.*
  podFilter:
    allowlistedLabels:
      - label1: value1
      - label2: value2
    blocklistedLabels: []
    allowlistedNames: []
    blocklistedNames: []
Create a namespace distinct from the namespace used for the first installation:

kubectl create ns appdynamics-ca2
Install the additional Cluster Agent:

helm install -f ./values-ca2.yaml "<my-2nd-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace=appdynamics-ca2
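The podFilter precedence described in Configuration Options (name-based rules win over label-based rules) can be sketched in plain shell. The pod names, label, and rules below are invented for illustration only:

```shell
# Decide whether a pod is monitored under a hypothetical podFilter in which
# an allowlistedNames match is checked before a blocklistedLabels match.
is_monitored() {
  name=$1 label=$2
  case "$name" in
    checkout-*) echo monitored; return ;;  # allowlistedNames match wins first
  esac
  case "$label" in
    tier=debug) echo blocked; return ;;    # blocklistedLabels match
  esac
  echo monitored                           # default: pod is monitored
}
is_monitored checkout-7f9 tier=debug   # name allowlist overrides label blocklist
is_monitored payments-c4d tier=debug   # blocked by label
```

The first call prints "monitored" even though the label is blocklisted, because the name rule is evaluated first; the second prints "blocked".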
Cluster Agent Helm Chart Configuration Examples
These examples display various configurations for the Cluster Agent Helm chart:
Use the Cluster Agent Helm Chart to Enable Custom SSL
user-values.yaml
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  accessKey: <appdynamics-controller-access-key>
  #=====
  customSSLCert: "<base64 of PEM formatted cert>"
  #=====
agentServiceAccount: appdynamics-cluster-agent-ssl # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl # Can be any valid name
Use the Cluster Agent Helm Chart to Enable the Proxy Controller
Without authentication:
user-values.yaml
# To install Cluster Agent
installClusterAgent: true
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  accessKey: <appdynamics-controller-access-key>
  #=====
  proxyUrl: http://proxy-url.appd-controller.com
  #=====
agentServiceAccount: appdynamics-cluster-agent-ssl # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl # Can be any valid name
With authentication:
user-values.yaml
# To install Cluster Agent
installClusterAgent: true
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  accessKey: <appdynamics-controller-access-key>
  #=====
  authenticateProxy: true
  proxyUrl: http://proxy-url.appd-controller.com
  proxyUser: hello
  proxyPassword: world
  #=====
agentServiceAccount: appdynamics-cluster-agent-ssl # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl # Can be any valid name
Use the Cluster Agent Helm Chart to Add nodeSelector and tolerations
user-values.yaml
agentPod:
  nodeSelector:
    nodeLabelKey: nodeLabelValue
  tolerations:
    - effect: NoExecute
      operator: Equal
      key: key1
      value: val1
      tolerationSeconds: 11
operatorPod:
  nodeSelector:
    nodeLabelKey: nodeLabelValue
    anotherNodeLabel: anotherNodeLabel
  tolerations:
    - operator: Exists
      key: key1
Best Practices for Sensitive Data
We recommend using multiple values.yaml files so that sensitive data is kept in separate files. Examples of these values are:
controllerInfo.password
controllerInfo.accessKey
controllerInfo.customSSLCert
controllerInfo.proxyPassword
Each values file follows the structure of the default values.yaml, enabling you to share files containing non-sensitive configuration properties while keeping sensitive values safe.
Default user-values.yaml
File Example
user-values.yaml
# To install Cluster Agent
installClusterAgent: true
imageInfo:
  agentImage: dtr.corp.appdynamics.com/sim/cluster-agent
  agentTag: latest
  operatorImage: docker.io/appdynamics/cluster-agent-operator
  operatorTag: latest
  imagePullPolicy: Always
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  username: <appdynamics-controller-username>
  password: <appdynamics-controller-password>
  accessKey: <appdynamics-controller-access-key>
agentServiceAccount: appdynamics-cluster-agent-ssl # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl # Can be any valid name
user-values-sensitive.yaml
controllerInfo:
  password: welcome
  accessKey: abc-def-ghi-1516
When installing the Helm Chart, use multiple -f parameters to reference the files:
helm install -f ./user-values.yaml -f ./user-values-sensitive.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace ca-appdynamics