This page describes the contents of the Cluster Agent bundle downloaded from the Download portal, and explains how to perform common configuration tasks.

See Cluster Agent YAML File Configuration Reference for configuration option details.

This page contains links to Kubernetes documentation. Splunk AppDynamics makes no representation as to the accuracy of Kubernetes documentation because Kubernetes controls its own documentation.

Directory Structure of the Cluster Agent Bundle

An unzipped Cluster Agent bundle contains one of these directory structures, depending on the downloaded variant (Alpine AMD64, Alpine ARM64, or RHEL AMD64):

cluster-agent
ā”œā”€ā”€ cluster-agent-operator.yaml
ā”œā”€ā”€ appdynamics-operator-alpine-linux-amd64-<version>
ā”œā”€ā”€ cluster-agent-operator-openshift-1.15-or-less.yaml
ā”œā”€ā”€ cluster-agent-operator-openshift.yaml
ā”œā”€ā”€ cluster-agent.yaml
ā”œā”€ā”€ infraviz.yaml
ā”œā”€ā”€ README-alpine.md
ā”œā”€ā”€ docker
│   ā”œā”€ā”€ cluster-agent.zip
│   ā”œā”€ā”€ Dockerfile
│   ā”œā”€ā”€ LICENSE
│   └── start-appdynamics
└── helm-charts
    ā”œā”€ā”€ Chart.yaml
    ā”œā”€ā”€ README.md
    ā”œā”€ā”€ crds
    ā”œā”€ā”€ templates
    └── values.yaml
TEXT
cluster-agent
ā”œā”€ā”€ cluster-agent-operator.yaml
ā”œā”€ā”€ appdynamics-operator-alpine-linux-arm64-<version>
ā”œā”€ā”€ cluster-agent-operator-openshift-1.15-or-less.yaml
ā”œā”€ā”€ cluster-agent-operator-openshift.yaml
ā”œā”€ā”€ cluster-agent.yaml
ā”œā”€ā”€ infraviz.yaml
ā”œā”€ā”€ README-alpine.md
ā”œā”€ā”€ docker
│   ā”œā”€ā”€ cluster-agent.zip
│   ā”œā”€ā”€ Dockerfile
│   ā”œā”€ā”€ LICENSE
│   └── start-appdynamics
└── helm-charts
    ā”œā”€ā”€ Chart.yaml
    ā”œā”€ā”€ README.md
    ā”œā”€ā”€ crds
    ā”œā”€ā”€ templates
    └── values.yaml
TEXT
cluster-agent
ā”œā”€ā”€ cluster-agent-operator.yaml
ā”œā”€ā”€ appdynamics-operator-rhel-linux-amd64-<version>
ā”œā”€ā”€ cluster-agent-operator-openshift-1.15-or-less.yaml
ā”œā”€ā”€ cluster-agent-operator-openshift.yaml
ā”œā”€ā”€ cluster-agent.yaml
ā”œā”€ā”€ README-rhel.md
ā”œā”€ā”€ docker
│   ā”œā”€ā”€ cluster-agent.zip
│   ā”œā”€ā”€ Dockerfile-rhel
│   ā”œā”€ā”€ LICENSE
│   └── start-appdynamics
└── helm-charts
    ā”œā”€ā”€ Chart.yaml
    ā”œā”€ā”€ README.md
    ā”œā”€ā”€ crds
    ā”œā”€ā”€ templates
    └── values.yaml
TEXT

Cluster Agent Bundle Files

This table describes the Cluster Agent directory files:

appdynamics-operator-alpine-linux-amd64-<version>

The Splunk AppDynamics Operator artifacts, including the Dockerfile, Operator binary, licenses, and scripts used to build Alpine AMD64-based Operator images.

appdynamics-operator-alpine-linux-arm64-<version>

The Splunk AppDynamics Operator artifacts, including the Dockerfile, Operator binary, licenses, and scripts used to build Alpine ARM64-based Operator images.

appdynamics-operator-rhel-linux-amd64-<version>

The Splunk AppDynamics Operator artifacts, including the Dockerfile-rhel, Operator binary, licenses, and scripts used to build RHEL-based Operator images.

cluster-agent.yaml

File used to configure and deploy the Cluster Agent.

  • The cluster-agent.yaml file provides Controller details and starts the Cluster Agent.
  • Where values are specified in the Splunk AppDynamics Operator configuration, these values always take precedence over any internal configuration file.

cluster-agent-operator.yaml

Files used to deploy the Cluster Agent Operator. These files set the default values for Kubernetes, Amazon EKS, and AKS, including a minimal set of RBAC permissions.

cluster-agent-operator-openshift.yaml

cluster-agent-operator-openshift-1.15-or-less.yaml

Files used to deploy the Cluster Agent on Red Hat OpenShift. These files set the default values for Red Hat OpenShift, including a minimal set of RBAC permissions.

docker

Docker directory containing all files required to create the Cluster Agent image (see the build sketch after this table).

Dockerfile

Dockerfile used to create the Alpine-based Cluster Agent image.

Dockerfile-rhel

Dockerfile used to create the RHEL-based Cluster Agent image.
infraviz.yaml

File used to configure and deploy the InfraViz.

  • The infraviz.yaml file provides Controller details and starts the Infrastructure Visibility and Network Visibility Agents.
  • Where values are specified in the Splunk AppDynamics Operator configuration, these values always take precedence over any internal configuration file.

LICENSE

Latest EULA file attached with the Cluster Agent image.

cluster-agent.zip

Zip archive containing the Cluster Agent binaries and configuration files.
helm-charts

Folder used to build the charts for deploying the Cluster Agent using Helm in Kubernetes.

README-rhel.md

README-alpine.md

Contains instructions on how to start the Cluster Agent using your preferred operating system.

start-appdynamics

Script used to run the Cluster Agent within Docker.
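The docker directory is what you would use to build your own Cluster Agent image. This is a minimal sketch of such a build, assuming you run it from inside the docker directory; the registry and tag are placeholders:

$ cd cluster-agent/docker
$ docker build -t <your-docker-registry>/appdynamics/cluster-agent:<tag> -f Dockerfile .
$ docker push <your-docker-registry>/appdynamics/cluster-agent:<tag>
CODE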

Configure Proxy Support

To understand proxy in Kubernetes, see the Kubernetes documentation (Proxies in Kubernetes).

  1. Locate and edit the cluster-agent.yaml file.

  2. Add a proxyUrl parameter to the cluster-agent.yaml file:

    proxyUrl: <protocol>://<host>:<port>
    TEXT
  3. (Optional) If the proxy server requires authentication:

    1. Add a proxyUser:

      proxyUser: <user>
      CODE
    2. Create a secret with a proxy-password:

      kubectl -n appdynamics create secret generic cluster-agent-proxy-secret --from-literal=proxy-password='<password>'
      CODE
  4. (Optional) If you are using SSL for your proxy only:
    1. Create a secret from a .pem certificate file (the certificate file must be named proxy-ssl.pem):

      kubectl -n appdynamics create secret generic ssl-cert --from-file=proxy-ssl.pem
      TEXT
    2. Set a secret filename in the cluster-agent.yaml file:

      customSSLSecret: "ssl-cert"
      TEXT
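For reference, this is a minimal sketch of how the proxy settings from the steps above could look together in the spec section of cluster-agent.yaml; the host, port, user, and secret name are placeholders:

spec:
  # proxy endpoint and basic-auth user (the password lives in the cluster-agent-proxy-secret secret)
  proxyUrl: "https://myproxy.example.com:8080"
  proxyUser: "user1"
  # secret holding proxy-ssl.pem, only needed if the proxy uses SSL
  customSSLSecret: "ssl-cert"
CODE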

To use SSL with your proxy and your Controller, see Proxy and On-Premises Certificates Combined.

Configure the Cluster Agent to Use SSL for On-Premises Controllers

Cluster Agent SSL is automatically handled for SaaS Controllers.

Controllers with Public and Self-Signed Certificates

To configure SSL with a public or self-signed certificate, use kubectl to generate a secret. Enter this kubectl command, and include the path to your public or self-signed certificate:

kubectl -n appdynamics create secret generic ssl-cert --from-file=<path-to-your-self-signed-certs>/custom-ssl.pem
CODE

The certificate file must be named custom-ssl.pem.

After the secret is created, add the customSSLSecret property to the cluster-agent.yaml file and set it to the secret name created in the previous step:

customSSLSecret: "ssl-cert"
CODE
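If you do not already have the Controller certificate as a .pem file, one way to export it is with standard openssl commands (a sketch only; the Controller host and port are placeholders, and your security team may prefer a different method of obtaining the certificate):

openssl s_client -connect <controller-host>:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > custom-ssl.pem
CODE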

Proxy and On-Premises Certificates Combined

If you have two different SSL certificates (one for the proxy server, and a different one for the on-premises Controller), then you can encapsulate both of them into a single secret:

kubectl -n appdynamics create secret generic ssl-cert --from-file=proxy-ssl.pem --from-file=<path-to-your-self-signed-certs>/custom-ssl.pem
TEXT
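To confirm that both certificates are present in the secret, inspect its data keys; kubectl describe lists each .pem entry and its size:

kubectl -n appdynamics describe secret ssl-cert
CODE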

The Cluster Agent pulls each certificate from the secret identified in the customSSLSecret attribute and uses it appropriately. 

This example shows a cluster-agent.yaml file with the customSSLSecret attribute defined:

apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent-manual
  namespace: appdynamics
spec:
  # init agent configuration
  appName: "test-k8s-cluster-agent"
  controllerUrl: "https://<controller-url>:443" # always schema and port
  account: "<account-name>" # account
  # agent related properties
  # custom SSL secret name
  customSSLSecret: "ssl-cert"
  # logging properties
  logLevel: INFO
  logFileSizeMb: 7
  logFileBackups: 6
  # docker image info
  image: "<image-url>" 
CODE

Create Secret

If the Cluster Agent requires a secret to pull images from a container registry, use the Kubernetes API to create the secret and reference it in cluster-agent.yaml.

$ kubectl -n appdynamics create secret docker-registry myregcred --docker-server=https://index.docker.io/v1 --docker-username=<docker-username> --docker-password=<docker-password> --docker-email=unused
CODE

On Red Hat OpenShift, use oc instead, and link the secret to the Splunk AppDynamics Operator service account:

$ oc -n appdynamics create secret docker-registry myregcred --docker-server=https://index.docker.io/v1 --docker-username=<docker-username> --docker-password=<docker-password> --docker-email=unused
$ oc -n appdynamics secrets link appdynamics-operator myregcred --for=pull
CODE
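Optionally, verify that the pull secret exists and, on OpenShift, that it is linked to the Splunk AppDynamics Operator service account (a quick check; the service account name matches the link command above):

$ kubectl -n appdynamics get secret myregcred
$ oc -n appdynamics get serviceaccount appdynamics-operator -o jsonpath='{.imagePullSecrets}'
CODE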

Set the imagePullSecret property in cluster-agent.yaml to the name of the secret created above (myregcred):

apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "mycluster"
  controllerUrl: "http://<appdynamics-controller-host>:8080"
  account: "<account-name>"
  image: "<your-docker-registry>/appdynamics/cluster-agent:tag"
  serviceAccountName: appdynamics-cluster-agent
  imagePullSecret: "myregcred"
CODE

Cluster Agent YAML File Configuration Reference

To configure the Cluster Agent, use the cluster-agent.yaml file included with the download package as a template. You can modify these parameters:

Each parameter entry below lists its description, an example, the default value, whether it can be changed dynamically, its type, and whether it is required.

account

Splunk AppDynamics account name.

Example: admin. Default: N/A. Dynamically configurable: No. Type: String. Required.

appName

Name of the cluster; displays in the Controller UI as your cluster name.

Ensure that this name is unique for each Cluster Agent installed in the same cluster, or in different clusters that are part of the same Controller.

Example: k8s-cluster. Default: N/A. Dynamically configurable: No. Type: String. Required.

controllerUrl

Full Splunk AppDynamics Controller URL, including protocol and port.

Examples:
  • HTTP: http://appd-controller.com:8090/
  • HTTPS: https://appd-controller.com:443

Default: N/A. Dynamically configurable: No. Type: String. Required.

customSSLSecret

Provides the self-signed or public certificates to the Cluster Agent.
"ssl-cert"
N/ANoStringOptional

eventUploadInterval

The interval, in seconds, at which Kubernetes warning and state-change events are uploaded to the Controller. See Monitor Kubernetes Events.

Example: 10. Default: 10. Dynamically configurable: No. Type: Integer. Optional.

httpClientTimeout

Number of seconds after which a call to the Controller is terminated if no response is received.

Example: 30. Default: 30. Dynamically configurable: No. Type: Integer. Optional.

image

Cluster Agent image.
Example: your-docker-registry/appdynamics/cluster-agent:latest. Default: N/A. Dynamically configurable: No. Type: String. Required.

imagePullPolicy

Image pull policy for the Cluster Agent.

Example: IfNotPresent. Default: Always. Dynamically configurable: No. Type: String. Optional.

imagePullSecret

Credential file used to authenticate when pulling images from your private Docker registry or repository. Based on your Docker registry configuration, you may need to create a secret file for the Splunk AppDynamics Operator to use when pulling the image for the Cluster Agent. See Create a Secret by providing credentials on the command line.

Example: regcred. Default: N/A. Dynamically configurable: No. Type: String. Optional.
instrumentationMaxPollingAttempts

The maximum number of times Cluster Agent checks for the successful rollout of instrumentation before marking it as failed.

Example: instrumentationMaxPollingAttempts: 15. Default: 10. Dynamically configurable: Yes. Type: Integer. Optional.
instrumentationNsStatusPollingIntervalMinutes

The polling interval, in minutes, to add or remove the APPD_INSTRUMENTATION_CLUSTER_AGENT annotation. This applies to agents that are part of the same cluster. When a namespace is uninstrumented from a Cluster Agent, this parameter periodically checks at the defined interval to remove the annotation from that Cluster Agent.

Example: instrumentationNsStatusPollingIntervalMinutes: 10. Default: 5. Dynamically configurable: Yes. Type: Integer. Optional.
labels

Adds any required pod labels to the Cluster Agent pod. These labels are also added to the Cluster Agent deployment.

Example:
labels:
  key1: value1
  key2: value2

The following labels are created by default and cannot be modified:
  • name: clusterAgent
  • clusterAgent_cr: <name of agent>
  • pod-template-hash: <assigned by Kubernetes>

The key-value pairs that you specify for this parameter are added to the Cluster Agent pod along with the default labels.

Dynamically configurable: No. Type: map[string]string. Optional.

logFileSizeMb

Maximum file size of the log in MB.
Example: 5. Default: 5. Dynamically configurable: Yes. Type: Integer. Optional.

logFileBackups

Maximum number of backups saved in the log. When the maximum number of backups is reached, the oldest log file after the initial log file is deleted.
Example: 3. Default: 3. Dynamically configurable: Yes. Type: Integer. Optional.

logLevel

Level of log detail: INFO, WARNING, DEBUG, or TRACE.

Example: "INFO". Default: INFO. Dynamically configurable: Yes. Type: String. Optional.
maxPodLogsTailLinesCount

Number of lines to be tailed while collecting logs.

To use this parameter, enable the log capturing feature. See Enable Log Collection for Failing Pods.

Example: 500. Default: 500. Dynamically configurable: Yes. Type: Integer. Optional.

nodeSelector

The Cluster Agent pod runs on the node that includes the specified key-value pair within its labels property. See nodeSelector.
Example:
nodeSelector:
  kubernetes.io/e2e-az-name: az1

Default: N/A. Dynamically configurable: No. Type: map[string]string. Optional.
nsToMonitorRegex

The regular expression for selecting the required namespaces to be monitored in the cluster.

If you need to monitor multiple namespaces, separate them using | without spaces.

If you are using Target Allocator, you must specify all the namespaces that you want to monitor. Target Allocator auto-allocates these namespaces to each Cluster Agent replica.

See Edit Namespaces.

Any modification to the namespaces in the UI takes precedence over the YAML configuration.

Examples:
  • nsToMonitorRegex: .*
  • nsToMonitorRegex: namespace1|namespace2

Default: N/A. Dynamically configurable: Yes. Type: Regular expression. Optional.
nsToExcludeRegex

The regular expression for the namespaces that must be excluded from the selected namespaces that match the regular expression mentioned for nsToMonitorRegex.

  • This parameter is supported in Cluster Agent >= 20.9 and Controller >= 20.10.
  • Any modification to the namespaces in the UI takes precedence over the YAML configuration.

This parameter can be used only if you have specified a value for the nsToMonitorRegex parameter.

Example: nsToExcludeRegex: ns.*

Default: N/A. Dynamically configurable: Yes. Type: Regular expression. Optional.

podFilter

Blocklist or allowlist pods based on:
  • Regular expressions for pod names
  • Pod labels

Blocklisting or allowlisting by name takes precedence over blocklisting or allowlisting by labels. For example, if you have this podFilter:

podFilter:
  blocklistedLabels:
    - release: v1
  allowlistedNames:
    - ^podname

This blocks all pods that have the label 'release=v1' except those whose names start with 'podname'.

  • When a pod is listed as allowed by name and blocked by name, it is allowlisted.
  • When a pod is listed as allowed by a label and blocked by a label, it is allowlisted.

Example:
podFilter:
  blocklistedLabels:
    - label1: value1
  allowlistedLabels:
    - label1: value1
    - label2: value2
  allowlistedNames:
    - name1
  blocklistedNames:
    - name2

Default: N/A. Dynamically configurable: Yes. Type: String. Optional.

podMetricCollectionMaxGoRoutines

The maximum number of Go routines that the Cluster Agent uses to fetch pod metrics in a collection cycle.

Example: podMetricCollectionMaxGoRoutines: 5. Default: 3. Dynamically configurable: Yes. Type: Integer. Optional.

podMetricCollectionRequestTimeoutSeconds

The timeout, in seconds, for the Cluster Agent requests that collect pod metrics.

Example: podMetricCollectionRequestTimeoutSeconds: 10. Default: 5. Dynamically configurable: Yes. Type: Integer. Optional.
priorityClassName

The name of the pod priority class, which is used in the pod specification to set the priority.

Example: priorityClassName: system-node-critical. Default: N/A. Dynamically configurable: No. Type: String. Optional.

proxyUrl

Publicly accessible host name of the proxy.

Example: https://myproxy.example.com:8080. Default: N/A. Dynamically configurable: No. Type: String. Optional.

proxyUser

Username associated with the basic authentication credentials.

"user1"
N/A

No

StringOptional
resources

Requests and limits of CPU and memory resources for the Cluster Agent.

Example:
resources:
  limits:
    cpu: 300m
    memory: "200Mi"
  requests:
    cpu: 200m
    memory: "100Mi"

Default:
  • CPU: request 750m, limit 1250m
  • Memory: request 150Mi, limit 300Mi

Dynamically configurable: Yes. Type: Array. Optional.

stdoutLogging

By default, the Cluster Agent writes to a log file in the logs directory. Additionally, the stdoutLogging parameter is provided to send logs to the container stdout.

"true", "false"
true
YesStringOptional
targetAllocator

enabled: Enables the use of auto allocation of namespaces to available Cluster Agent replicas. This is disabled by default. To enable this property, set enabled to true.

For information about Target Allocator, see Target Allocator.

Example: enabled: true. Default: false. Dynamically configurable: Yes. Type: String. Optional.

clusterAgentReplicas: The number of Cluster Agent replicas. If Target Allocator is enabled, the default value is 3. Set the number of replicas based on your requirements. To decide how many replicas are required, see Cluster Agent Requirements and Supported Environments.

Example: clusterAgentReplicas: 5. Default: 3. Dynamically configurable: Yes. Type: Integer. Optional. This is required when targetAllocator.enabled is set to true.

autoScaling

enabled: The default value is false. Specify true to enable auto-scaling for creating replicas.

Example:
autoScaling:
  enabled: true
  replicaProfile: Default
  maxClusterAgentReplicas: 12
  scaleDown:
    stabilizationWindowSeconds: 86400
CODE

Default: false. Dynamically configurable: Yes. Type: String. Optional.

replicaProfile: The profile to be used. Currently, only the Default profile is available. The Default profile uses 1550Mi memory and 3750m CPU to monitor 2500 pods.

Default: Default. Dynamically configurable: Yes. Type: String. Optional; required when auto-scaling is enabled.

maxClusterAgentReplicas: Specify the maximum number of replicas to which you want to auto-scale.

Default: N/A. Dynamically configurable: Yes. Type: Integer. Optional.

scaleDown.stabilizationWindowSeconds: Specify the time in seconds after which Target Allocator can scale down the replicas.

Scale-down may result in a drop in metrics. By default, this parameter is disabled.

Default: N/A. Dynamically configurable: Yes. Type: Integer. Optional.

tolerations

An array of tolerations required for the pod. See Taints and Tolerations.

Example:
tolerations:
- effect: NoSchedule
  key: type
  value: test
- effect: NoExecute
  key: node.kubernetes.io/not-ready
  operator: Exists
  tolerationSeconds: 600

Default: N/A. Dynamically configurable: No. Type: Array. Optional.

securityContext

For OpenShift version > 4.14, ensure that all the child parameters within securityContext are specified based on the permissible values outlined by the security context constraints (SCCs). See Managing Security Context Constraints in the Red Hat OpenShift documentation.

For example, if you want to use the runAsUser property, the user ID (UID) must be in the permissible range. The SCC permissible range for the UID is 1000 to 9001; therefore, you can set the runAsUser value only within this range. The same applies to the other security context parameters.


You can include the following parameters under securityContext:

runAsGroup: If you configured the application container as a non-root user, provide the groupId of the corresponding group.

This sets the appropriate file permission on the agent artifacts.

This value is applied to all the instrumented resources.

Add this parameter if you need to override the default value of runAsGroup that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.

securityContext:
  runAsUser: 1001
  runAsGroup: 1001
  readOnlyRootFilesystem: false
  allowPrivilegeEscalation: false
  runAsNonRoot: false
  privileged: false
  seLinuxOptions:
    level: "s0:c123,c456"
  capabilities:
    drop: [ "ALL" ]
  seccompProfile:
    type: RuntimeDefault
  procMount: Default
  windowsOptions:
CODE


Default: N/A. Dynamically configurable: No. Type: Array. Optional.

runAsUser: If you configured the application container as a non-root user, provide the userId of the corresponding user.

This sets the appropriate file permission on the agent artifacts.

This value is applied to all the instrumented resources.

Add this parameter if you need to override the default value of runAsUser that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.


allowPrivilegeEscalation: To control whether a process can gain more privileges than its parent process. The value is always true when the container:

  • Runs as a privileged container
  • Has the CAP_SYS_ADMIN capability

If you do not set this parameter, Helm uses the default value of true.

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.

capabilities: To add or remove POSIX capabilities from the running containers. This uses the default set of capabilities during container runtime. 

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.

privileged: To run the container in privileged mode, which is equivalent to root on the host.

If you do not set this parameter, Helm uses the default value of true.

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.

procMount: The type of proc mount to use for the containers. 

This parameter is currently available for Deployment and DeploymentConfig mode.


readOnlyRootFilesystem: To specify if this container has a read-only root filesystem. 

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.

runAsNonRoot: To specify if the container must run as a non-root user.

If the value is true, the kubelet validates the image at runtime and the container fails to start if it runs as root. If this parameter is not specified or if the value is false, there is no validation.

This parameter is currently available for Deployment and DeploymentConfig mode.


seLinuxOptions: To apply the SELinux context to the container. If this parameter is not specified, the container runtime allocates a random SELinux context for each container.

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.

seccompProfile: To specify the seccomp options used by the container. If seccomp options are specified at both the pod and container level, the container options override the pod options. 

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.

windowsOptions: To specify Windows-specific options for every container.  

  • This parameter is unavailable when spec.os.name is linux.
  • This parameter is currently available for Deployment and DeploymentConfig mode.

For specific auto-instrumentation configurations, see Auto-Instrument Applications with the Cluster Agent. The .yaml file also includes the permissions for auto-instrumentation, which is enabled by default. If you do not want to use auto-instrumentation, you can remove the following text from the .yaml file:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: appdynamics-cluster-agent-instrumentation
subjects:
  - kind: ServiceAccount
    name: appdynamics-cluster-agent
    namespace: appdynamics
roleRef:
  kind: ClusterRole
  name: appdynamics-cluster-agent-instrumentation
  apiGroup: rbac.authorization.k8s.io
CODE
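If the Operator yaml is already applied to a cluster, the equivalent change on a running cluster is to delete the binding directly (a sketch; confirm the binding name in your yaml before deleting, and note that ClusterRoleBindings are cluster-scoped, so no namespace flag is needed):

kubectl delete clusterrolebinding appdynamics-cluster-agent-instrumentation
CODE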

Cluster Agent File Example

This example shows a cluster-agent.yaml configuration file:

apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "<app-name>"
  controllerUrl: "<protocol>://<appdynamics-controller-host>:8080"
  account: "<account-name>"
  # docker image info
  image: "<your-docker-registry>/appdynamics/cluster-agent:tag"
  nsToMonitorRegex: namespace1|namespace2
  eventUploadInterval: 10
  containerRegistrationInterval: 120
  httpClientTimeout: 30
  customSSLSecret: "<secret-name>"
  proxyUrl: "<protocol>://<domain>:<port>"
  proxyUser: "<proxy-user>"
  metricsSyncInterval: 30
  clusterMetricsSyncInterval: 60
  metadataSyncInterval: 60
  containerBatchSize: 25
  containerParallelRequestLimit: 3
  podBatchSize: 30
  metricUploadRetryCount: 3
  metricUploadRetryIntervalMilliSeconds: 5
  podFilter:
    # blocklistedLabels:
    #   - label1: value1
    # allowlistedLabels:
    #   - label1: value1
    #   - label2: value2
    # allowlistedNames:
    #   - name1
    # blocklistedNames:
    #   - name2
  logLevel: "INFO"
  logFileSizeMb: 5
  logFileBackups: 3
  stdoutLogging: "true"
  resources:
    limits:
      cpu: 300m
      memory: "200Mi"
    requests:
      cpu: 200m
      memory: "100Mi"
  labels:
    key1: value1
    key2: value2
CODE
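After editing the file, deployment is typically a matter of applying it to the namespace where the Splunk AppDynamics Operator runs (a sketch, assuming the appdynamics namespace used throughout this page and an Operator that is already installed):

kubectl -n appdynamics apply -f cluster-agent.yaml
CODE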