This page describes how to use Helm Charts to deploy the Cluster Agent.

Helm is a package manager for Kubernetes. A Helm chart is a collection of files that describes a set of Kubernetes resources. The Cluster Agent Helm chart is a convenient method to deploy the Cluster Agent Operator and the Cluster Agent. You can also use the Cluster Agent Helm chart to deploy multiple Cluster Agents in a single cluster. This may be necessary for larger clusters that exceed the pod monitoring limit for a single Cluster Agent. See Cluster Agent Requirements and Supported Environments.

Requirements

  • Cluster Agent version >= 20.6
  • Controller version >= 20.6
  • Cluster Agent Helm charts are compatible with Helm 3.0
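
Before you begin, you can quickly confirm these prerequisites from your workstation. A minimal sketch, assuming helm and kubectl are installed and your current kubeconfig context points at the target cluster:

# Confirm the Helm client is a 3.x release
helm version --short

# Confirm the cluster is reachable from the current kubeconfig context
kubectl cluster-info
BASH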

Install a Single Cluster Agent in a Cluster

  1. Add the chart repository to Helm:

    helm repo add appdynamics-charts https://ciscodevnet.github.io/appdynamics-charts
    BASH
  2. Create a namespace for appdynamics in your cluster:

    kubectl create namespace appdynamics
    BASH
  3. Create a Helm values file; in this example it is called values-ca1.yaml.
    Update the controllerInfo properties with the credentials from your Controller.
    Update the clusterAgent properties to set the namespaces and pods to monitor. See Configure the Cluster Agent for information about the available properties nsToMonitor, nsToMonitorRegex, nsToExcludeRegex, and podFilter.

    values-ca1.yaml

    # AppDynamics controller info
    controllerInfo:
      url: https://<controller-url>:443
      account: <appdynamics-controller-account>                   
      username: <appdynamics-controller-username>                          
      password: <appdynamics-controller-password>                                 
      accessKey: <appdynamics-controller-access-key>  
    
    # Cluster agent config
    clusterAgent:
      nsToMonitorRegex: dev-.*
    YML

    See Configuration Options for more information regarding the available options. You can also download a copy of the default values.yaml from the Helm chart repository using this command:

    helm show values appdynamics-charts/cluster-agent
    BASH
  4. If the Kubernetes metrics-server is not already installed in the cluster (it is usually located in the kube-system namespace), set install.metrics-server to true in the values file to invoke the subchart that installs it. The sketch after these steps shows how to check whether metrics-server is already present.

    install:
      metrics-server: true
    YML

    Setting install.metrics-server to true installs metrics-server in the namespace specified with the --namespace flag, which is the same namespace as the Cluster Agent.

  5. Deploy the Cluster Agent to the appdynamics namespace (the sketch after these steps shows how to verify the release):

    helm install -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-charts/cluster-agent --namespace=appdynamics
    BASH
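
The following commands are a minimal verification sketch for the steps above, assuming the release name and namespace shown there. The first command helps you decide whether to set install.metrics-server (step 4); the others confirm that the release and its pods are running (step 5):

# Check whether metrics-server is already installed; it usually runs in kube-system
kubectl get deployment metrics-server -n kube-system

# Confirm the Helm release was created
helm list --namespace appdynamics

# Confirm the Cluster Agent and Operator pods are running
kubectl get pods -n appdynamics
BASH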

Enable Auto-Instrumentation

Once you have validated that the Cluster Agent was successfully installed, you can add additional configuration to the instrumentationConfig section of the values YAML file to enable auto-instrumentation. In this example, instrumentationConfig.enabled has been set to true, and multiple instrumentationRules have been defined. See Auto-Instrument Applications with the Cluster Agent.

values-ca1.yaml with Auto-Instrumentation Enabled

# AppDynamics controller info
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>                   
  username: <appdynamics-controller-username>                          
  password: <appdynamics-controller-password>                                 
  accessKey: <appdynamics-controller-access-key>  

# Cluster agent config
clusterAgent:
  nsToMonitorRegex: ecom|books|groceries

instrumentationConfig:
  enabled: true
  instrumentationMethod: Env
  nsToInstrumentRegex: ecom|books|groceries
  defaultAppName: Ecommerce
  appNameStrategy: namespace
  imageInfo:
    java:
      image: "docker.io/appdynamics/java-agent:latest"
      agentMountPath: /opt/appdynamics
      imagePullPolicy: Always
  instrumentationRules:
    - namespaceRegex: groceries
      language: dotnetcore
      imageInfo:
        image: "docker.io/appdynamics/dotnet-core-agent:latest"
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always
    - namespaceRegex: books
      matchString: openmct
      language: nodejs
      imageInfo:
        image: "docker.io/appdynamics/nodejs-agent:20.5.0-alpinev10"
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always
      analyticsHost: <hostname of the Analytics Agent>
      analyticsPort: 443
      analyticsSslEnabled: true
YML

After saving the values-ca1.yaml file with the added auto-instrumentation configuration, you must upgrade the Helm Chart:

helm upgrade -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-charts/cluster-agent --namespace appdynamics
BASH
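
To confirm that the upgrade was applied, you can inspect the release history and the pods. A minimal sketch, assuming the release name and namespace used above:

# The upgrade should appear as a new revision of the release
helm history "<my-cluster-agent-helm-release>" --namespace appdynamics

# The Cluster Agent pod restarts with the updated configuration
kubectl get pods -n appdynamics
BASH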


Configuration Options

Config option | Description | Required
deploymentMode | Used for multiple Cluster Agent deployments in a single cluster | Optional

Image config options (config options under the imageInfo key in values.yaml)
imageInfo.agentImage | Cluster Agent image address in the format <registryUrl>/<registryAccount>/<project> | Optional (defaults to the Docker Hub image)
imageInfo.agentTag | Cluster Agent image tag/version | Optional (defaults to latest)
imageInfo.operatorImage | Operator image address in the format <registryUrl>/<registryAccount>/<project> | Optional (defaults to the Docker Hub image)
imageInfo.operatorTag | Operator image tag/version | Optional (defaults to latest)
imageInfo.imagePullPolicy | Image pull policy for the Operator pod | Optional

Controller config options (config options under the controllerInfo key in values.yaml)
controllerInfo.accessKey | AppDynamics Controller access key | Required
controllerInfo.account | AppDynamics Controller account | Required
controllerInfo.authenticateProxy | true/false if the proxy requires authentication | Optional
controllerInfo.customSSLCert | Base64 encoding of a PEM-formatted SSL certificate | Optional
controllerInfo.password | AppDynamics Controller password | Required only when auto-instrumentation is enabled
controllerInfo.proxyPassword | Password for proxy authentication | Optional
controllerInfo.proxyUrl | Proxy URL if the Controller is behind a proxy | Optional
controllerInfo.proxyUser | Username for proxy authentication | Optional
controllerInfo.url | AppDynamics Controller URL | Required
controllerInfo.username | AppDynamics Controller username | Required only when auto-instrumentation is enabled

RBAC config
agentServiceAccount | Service account to be used by the Cluster Agent | Optional
createServiceAccount | Set to true if the service accounts mentioned above are to be created by Helm | Optional
operatorServiceAccount | Service account to be used by the AppDynamics Operator | Optional

Agent pod config
agentPod.nodeSelector | Kubernetes node selector field in the Cluster Agent pod spec | Optional
agentPod.tolerations | Kubernetes tolerations field in the Cluster Agent pod spec | Optional
agentPod.resources | Kubernetes CPU and memory resources in the Cluster Agent pod spec | Optional
agentPod.labels | Adds any required pod labels to the Cluster Agent pod | Optional

Operator pod config
operatorPod.nodeSelector | Kubernetes node selector field in the AppDynamics Operator pod spec | Optional
operatorPod.tolerations | Kubernetes tolerations field in the AppDynamics Operator pod spec | Optional
operatorPod.resources | Kubernetes CPU and memory resources in the AppDynamics Operator pod spec | Optional

Install switches
install.metrics-server | Set to true if metrics-server is to be installed. Metrics-server is installed in the same namespace as the Cluster Agent. | Optional
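
Any option in this table can be supplied in a values file passed with -f or overridden individually with --set. A hedged sketch that pins the Cluster Agent image tag at upgrade time (the tag value is only a placeholder):

helm upgrade -f ./values-ca1.yaml \
  --set imageInfo.agentTag=<cluster-agent-version> \
  "<my-cluster-agent-helm-release>" appdynamics-charts/cluster-agent \
  --namespace appdynamics
BASH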

Install Additional Cluster Agents in a Cluster

The Cluster Agent Helm chart supports installing multiple Cluster Agents in a single cluster. This may be necessary for larger clusters that exceed the pod monitoring limit for a single Cluster Agent. See Cluster Agent Requirements and Supported Environments.

Each additional Cluster Agent that is deployed must have a different configuration from any previously deployed Cluster Agents. This is achieved by limiting monitoring to a distinct set of namespaces and pods using the nsToMonitor, nsToMonitorRegex, nsToExcludeRegex, and podFilter properties. See Configure the Cluster Agent.

The first Cluster Agent must be installed using the steps above, where the deploymentMode property keeps its default value of PRIMARY. Additional Cluster Agents must set deploymentMode to NAMESPACED.

To install additional Cluster Agents: 

  1. Create a new values file (called values-ca2.yaml in this example) that uses the same controllerInfo properties as the first Cluster Agent.
    Set deploymentMode to NAMESPACED.
    Add additional properties, such as nsToMonitorRegex and podFilter, to set the monitoring scope for this Cluster Agent.

    values-ca2.yaml

    deploymentMode: NAMESPACED
    
    # AppDynamics controller info
    controllerInfo:
      url: https://<controller-url>:443
      account: <appdynamics-controller-account>                   
      username: <appdynamics-controller-username>                          
      password: <appdynamics-controller-password>                                 
      accessKey: <appdynamics-controller-access-key>  
    
    # Cluster agent config
    clusterAgent:
      nsToMonitorRegex: stage.*
    
    podFilter:   
      allowlistedLabels:
        - label1: value1
        - label2: value2   
      blocklistedLabels: []
      allowlistedNames: []   
      blocklistedNames: []
    YML
  2. Create a namespace distinct from the one used for the first installation:

    kubectl create ns appdynamics-ca2
    BASH
  3. Install the additional Cluster Agent (the sketch after these steps shows how to verify both releases):

    helm install -f ./values-ca2.yaml "<my-2nd-cluster-agent-helm-release>" appdynamics-charts/cluster-agent --namespace=appdynamics-ca2
    BASH
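
Once both releases are installed, you can confirm that each Cluster Agent runs in its own namespace. A minimal check, assuming the namespaces used above:

# List the Cluster Agent releases across all namespaces
helm list --all-namespaces

# Confirm the agent pods in each namespace
kubectl get pods -n appdynamics
kubectl get pods -n appdynamics-ca2
BASH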

You can enable auto-instrumentation only in the first Cluster Agent, which runs in PRIMARY mode. The Helm chart generates an error if auto-instrumentation is enabled for additional Cluster Agents running in NAMESPACED mode.

Cluster Agent Helm Chart Configuration Examples

These examples display various configurations for the Cluster Agent Helm chart:

Use the Cluster Agent Helm Chart to Enable Custom SSL

user-values.yaml

controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>                   
  username: <appdynamics-controller-username>                          
  password: <appdynamics-controller-password>                                 
  accessKey: <appdynamics-controller-access-key>

  #=====
  customSSLCert: "<base64 of PEM formatted cert>"
  #=====
 
agentServiceAccount: appdynamics-cluster-agent-ssl     # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl       # Can be any valid name 
YML
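
The customSSLCert value is the Base64 encoding of the PEM-formatted certificate. A minimal sketch for producing that value, assuming a hypothetical certificate file named custom-ssl.pem and GNU coreutils:

# Base64-encode the PEM certificate on a single line
base64 -w0 custom-ssl.pem
BASH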

Use the Cluster Agent Helm Chart to Enable the Proxy Controller

Without authentication:

user-values.yaml

controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>                   
  username: <appdynamics-controller-username>                          
  password: <appdynamics-controller-password>                                 
  accessKey: <appdynamics-controller-access-key>
  
  #=====
  proxyUrl: http://proxy-url.appd-controller.com
  #=====
 
agentServiceAccount: appdynamics-cluster-agent-ssl     # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl       # Can be any valid name   
YML


With authentication:

user-values.yaml

controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>                   
  username: <appdynamics-controller-username>                          
  password: <appdynamics-controller-password>                                 
  accessKey: <appdynamics-controller-access-key>
  
  #=====
  authenticateProxy: true 
  proxyUrl: http://proxy-url.appd-controller.com
  proxyUser: hello
  proxyPassword: world
  #=====

agentServiceAccount: appdynamics-cluster-agent-ssl     # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl       # Can be any valid name  
YML
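
Before installing, you may want to confirm that the Controller is reachable through the proxy from your environment. A hedged sketch using curl with the proxy and Controller URLs from the example above:

# Send a HEAD request to the Controller through the proxy
# (add -U <user>:<password> if the proxy requires authentication)
curl -x http://proxy-url.appd-controller.com -sI https://<controller-url>:443
BASH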

Use the Cluster Agent Helm Chart to add nodeSelector and tolerations

user-values.yaml

agentPod:
  nodeSelector:
    nodeLabelKey: nodeLabelValue
  tolerations:
    - effect: NoExecute
      operator: Equal
      key: key1
      value: val1
      tolerationSeconds: 11

operatorPod:
  nodeSelector:
    nodeLabelKey: nodeLabelValue
    anotherNodeLabel: anotherNodeLabel
  tolerations:
    - operator: Exists
      key: key1
YML
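
For these settings to take effect, the target nodes must carry the matching label and taint. A minimal sketch, assuming a hypothetical node name and the keys and values from the example above:

# Label a node so it matches the nodeSelector
kubectl label nodes <node-name> nodeLabelKey=nodeLabelValue

# Taint a node so that only pods with the matching toleration run on it
kubectl taint nodes <node-name> key1=val1:NoExecute
BASH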

Best Practices for Sensitive Data

We recommend using multiple values files so that sensitive data is kept in its own values file. Examples of these values are:

  • controllerInfo.password
  • controllerInfo.accessKey
  • controllerInfo.customSSLCert
  • controllerInfo.proxyPassword

Each values file follows the structure of the default values.yaml, enabling you to easily share files with non-sensitive configuration properties while keeping sensitive values safe.

Default user-values.yaml File Example

user-values.yaml

deploymentMode: PRIMARY
 
imageInfo:
  agentImage: docker.io/appdynamics/cluster-agent
  agentTag: latest
  operatorImage: docker.io/appdynamics/cluster-agent-operator
  operatorTag: latest
  imagePullPolicy: Always                            
 
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>                   
  username: <appdynamics-controller-username>                          
  password: <appdynamics-controller-password>                                 
  accessKey: <appdynamics-controller-access-key>
 
agentServiceAccount: appdynamics-cluster-agent-ssl     # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl       # Can be any valid name
YML

user-values-sensitive.yaml

controllerInfo:
  password: welcome
  accessKey: abc-def-ghi-1516
YML

When installing the Helm Chart, use multiple -f parameters to reference the files:

helm install -f ./user-values.yaml -f ./user-values-sensitive.yaml "<my-cluster-agent-helm-release>" appdynamics-charts/cluster-agent --namespace ca-appdynamics
BASH
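
When the same property appears in several files, the rightmost -f file takes precedence, so keep the sensitive file last. Alternatively, Helm's --set flag can supply sensitive values directly; a hedged sketch (note that command-line values may be recorded in shell history):

helm install -f ./user-values.yaml \
  --set controllerInfo.password=<appdynamics-controller-password> \
  --set controllerInfo.accessKey=<appdynamics-controller-access-key> \
  "<my-cluster-agent-helm-release>" appdynamics-charts/cluster-agent --namespace ca-appdynamics
BASH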