This page describes how to use the Cluster Agent Helm Charts to deploy Infrastructure Visibility (InfraViz).

Helm is a package manager for Kubernetes. Helm charts are a collection of files that describe a set of Kubernetes resources. The Cluster Agent Helm Chart is a convenient method to deploy the AppDynamics Operator and InfraViz.

Windows Containers are not supported for this deployment.

Requirements 

  • Machine Agent version >= 21.9.0 

  • NetViz version >= 21.3.0 

  • Controller version >= 20.6.0 

  • Cluster Agent Helm charts are compatible with Helm 3.0

  • The Cluster Agent Helm Charts version must be >= 1.1.0 to install InfraViz. Older versions (<= 0.1.19) of the Cluster Agent Helm Charts do not work.
  • For environments running Kubernetes >= 1.25: PodSecurityPolicy is removed in Kubernetes 1.25 (https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes), and pod security restrictions are now applied at the namespace level (https://kubernetes.io/docs/concepts/security/pod-security-admission/) using Pod Security Standard levels. Therefore, you must set the level to Privileged for the namespace in which the Infrastructure Visibility pod runs.
  • For environments running Kubernetes < 1.25 where PodSecurityPolicies block certain pod security context configurations, such as privileged pods, you must deploy infraviz-pod-security-policy.yaml before editing the infraviz.yaml file. You must explicitly attach the PodSecurityPolicy to the appdynamics-infraviz service account.
  • For environments where OpenShift SecurityContextConstraints block certain pod security context configurations, such as privileged pods, you must deploy infraviz-security-context-constraint-openshift.yaml before editing the infraviz.yaml file.
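
For Kubernetes 1.25 and later, the Privileged level can be set with a Pod Security Standard label on the namespace. A minimal sketch, assuming the appdynamics namespace that is created later on this page:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: appdynamics
  labels:
    # Required because the InfraViz pod needs a privileged security context
    pod-security.kubernetes.io/enforce: privileged
```

Equivalently, the label can be applied to an existing namespace with `kubectl label namespace appdynamics pod-security.kubernetes.io/enforce=privileged`.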

 Install Infrastructure Visibility in a Cluster 

  1. Delete all previously installed CRDs related to the AppDynamics Agent by using these commands: 

    $ kubectl get crds 
    $ kubectl delete crds <crd-names>
    CODE
  2. Add the chart repository to Helm:

    $ helm repo add appdynamics-cloud-helmcharts https://appdynamics.jfrog.io/artifactory/appdynamics-cloud-helmcharts/
    CODE
  3. Create a namespace for appdynamics in your cluster:

    $ kubectl create namespace appdynamics
    CODE
  4. Create a Helm values file; this example names it values-ca1.yaml. Update the controllerInfo properties with the credentials from the Controller.

    Update the infraViz and netViz properties. See InfraViz Configuration Settings for information about the available properties such as enableMasters, enableContainerHostId, enableServerViz, and so on.

    values-ca1.yaml

    # To install InfraViz
    installInfraViz: true

    # AppDynamics controller info
    controllerInfo:
      url: https://<controller-url>:443
      account: <appdynamics-controller-account>
      username: <appdynamics-controller-username>
      password: <appdynamics-controller-password>
      accessKey: <appdynamics-controller-access-key>
      globalAccount: <appdynamics-controller-global-account>

    # InfraViz config
    infraViz:
      nodeOS: "linux"
      enableMasters: false
      stdoutLogging: false
      enableContainerHostId: true
      enableServerViz: true
      enableDockerViz: false
      runAsUser: <UID of runAsUser>
      runAsGroup: 1001

    # NetViz config
    netViz:
      enabled: true
      netVizPort: 3892
    YML

    See Configuration Options for values.yaml for more information regarding the available options. Also, you can download a copy of values.yaml from the Helm Chart repository using this command:

    helm show values appdynamics-cloud-helmcharts/cluster-agent
    CODE
  5. Deploy the InfraViz to the appdynamics namespace:

    $ helm install -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace=appdynamics
    CODE

Configuration Options

Configuration option 

Description 

Required 

installInfraViz 

Used for installing InfraViz. This must be set to true. 

Required (Defaults to false) 

Image configuration options (options under imageInfo key in values.yaml) 

imageInfo.operatorImage 

Operator image address in format <registryUrl>/<registryAccount>/cluster-agent-operator 

Optional (Defaults to the Docker Hub image) 

imageInfo.operatorTag 

Operator image tag/version 

Optional (Defaults to 22.1.0) 

imageInfo.imagePullPolicy 

Image pull policy for the operator pod 

Optional 

imageInfo.machineAgentImage 

Machine Agent image address in format <registryUrl>/<registryAccount>/machine-agent 

Optional (Defaults to Docker Hub image)

imageInfo.machineAgentTag 

Machine Agent image tag/version 

Optional (Defaults to latest) 

imageInfo.netVizImage 

NetViz Agent image address in format <registryUrl>/<registryAccount>/machine-agent-netviz 

Optional (Defaults to the Docker Hub image) 

imageInfo.netvizTag 

NetViz Agent image tag/version 

Optional (Defaults to latest) 

Controller configuration options (Config options under controllerInfo key in values.yaml) 

controllerInfo.accessKey 

AppDynamics Controller accessKey 

Required 

controllerInfo.globalAccount 

AppDynamics Controller globalAccount 

Required 

controllerInfo.account 

AppDynamics Controller account 

Required 

controllerInfo.authenticateProxy 

true/false if the proxy requires authentication 

Optional 

controllerInfo.customSSLCert 

Base64 encoding of PEM formatted SSL certificate 

Optional 

controllerInfo.password 

AppDynamics Controller password 

Required only when auto-instrumentation is enabled. 

controllerInfo.proxyPassword 

Password for proxy authentication 

Optional 

controllerInfo.proxyUrl 

Proxy URL if the Controller is behind some proxy 

Optional 

controllerInfo.proxyUser 

Username for proxy authentication 

Optional 

controllerInfo.url 

AppDynamics Controller URL 

Required 

controllerInfo.username 

AppDynamics Controller username 

Required only when auto-instrumentation is enabled. 

RBAC configuration 

infravizServiceAccount 

Service account to be used by the InfraViz 

Optional 

createServiceAccount 

Set to true if ServiceAccounts mentioned are to be created by Helm 

Optional 

operatorServiceAccount 

Service account to be used by the AppDynamics Operator 

Optional 

NetViz config

netViz.resourcesNetViz

Set resources for the Network Visibility (NetViz) container

Optional

netViz.netVizPort

When > 0, the Network Agent is deployed in a sidecar with the Machine Agent. By default, the Network Visibility Agent works with port 3892.

Optional

InfraViz config

infraViz.enableContainerHostId

Flag that determines how container names are derived; specify either true or false.

Required

infraViz.enableMasters

By default, only Worker nodes are monitored. When set to true, Server Visibility is provided for Master nodes. For managed Kubernetes providers, the flag has no effect because the Master plane is not accessible.

Optional

infraViz.enableServerViz

Enable Server Visibility

Required

infraViz.enableDockerViz

Enable Docker Visibility

Required

infraViz.runAsUser

The UID (User ID) to run the entry point of the container process. If you do not specify the UID, this defaults to the user id specified in the image.

If you need to run as any other UID, change the UID for runAsUser without changing the group ID.

Optional
infraViz.runAsGroup

The GID (Group ID) to run the entry point of the container process. If you do not specify the GID, this defaults to the group ID specified in the image:

docker.io/appdynamics/machine-agent

docker.io/appdynamics/machine-agent-analytics:latest

Optional

infraViz.stdoutLogging

Determines if logs are saved to a file or redirected to the Console.

Optional

InfraViz pod config 

infravizPod.nodeSelector 

Kubernetes node selector field in the InfraViz pod spec.

Optional 

infravizPod.resources 

Kubernetes CPU and memory resources in the InfraViz pod spec.

Optional 

infravizPod.imagePullSecret

The credential file used to authenticate when pulling images from your private Docker registry or repository.

Optional

infravizPod.priorityClassName

The name of the pod priority class, which is used in the pod specification to set the priority.

Optional

infravizPod.env

List of environment variables.

Optional

infravizPod.overrideVolumeMounts

The list of volumeMounts.

Optional

infravizPod.tolerations

List of tolerations based on the taints that are associated with nodes.

Optional

Operator pod config 

operatorPod.nodeSelector 

Kubernetes node selector field in the AppDynamics Operator pod spec 

Optional 

operatorPod.tolerations 

Kubernetes tolerations field in the AppDynamics Operator pod spec 

Optional 

operatorPod.resources 

Kubernetes CPU and memory resources in the AppDynamics Operator pod spec 

Optional 

Best Practices for Sensitive Data

We recommend using multiple values.yaml files so that sensitive data is kept in separate files. Examples of these values are:

  • controllerInfo.password
  • controllerInfo.accessKey
  • controllerInfo.customSSLCert
  • controllerInfo.proxyPassword

Each values file follows the structure of the default values.yaml, so you can easily share files with non-sensitive configuration properties while keeping sensitive values safe.
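
For controllerInfo.customSSLCert, the value is the Base64 encoding of the PEM-formatted certificate file. A minimal Python sketch (the certificate content and file name here are dummy placeholders, not real values):

```python
# customSSLCert is the Base64 encoding of the PEM-formatted certificate.
# A dummy placeholder certificate is used here for illustration.
import base64

# In practice you would read your real file, e.g.:
#   pem = open("my-controller-cert.pem", "rb").read()   # hypothetical file name
pem = b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"

# Encode the raw PEM bytes and decode to a plain ASCII string
custom_ssl_cert = base64.b64encode(pem).decode("ascii")

# This string is what you paste into controllerInfo.customSSLCert
print(custom_ssl_cert)
```

The same result can be produced on the command line with `base64 -w0 my-controller-cert.pem` on Linux.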

Default user-values.yaml File Example

user-values.yaml

# To install InfraViz 
installInfraViz: true 
 
imageInfo: 
 operatorImage: docker.io/appdynamics/cluster-agent-operator 
 operatorTag: 22.1.0 
 imagePullPolicy: Always            # Will be used for operator pod 
 machineAgentImage: docker.io/appdynamics/machine-agent 
 machineAgentTag: latest 
 netVizImage: docker.io/appdynamics/machine-agent-netviz
 netvizTag: latest   
 
controllerInfo: 
 url: https://<controller-url>:443 
 account: <appdynamics-controller-account>  
 username: <appdynamics-controller-username>  
 password: <appdynamics-controller-password>  
 accessKey: <appdynamics-controller-access-key> 
 
infravizServiceAccount: appdynamics-infraviz-ssl # Can be any valid name 
operatorServiceAccount: appdynamics-operator-ssl # Can be any valid name
YML

user-values-sensitive.yaml

controllerInfo:
  password: welcome
  accessKey: abc-def-ghi-1516
YML

When installing the Helm Chart, use multiple -f parameters to reference the files:

helm install -f ./user-values.yaml -f ./user-values-sensitive.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace ca-appdynamics
BASH
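
Helm merges values files from left to right, with keys from later files overriding earlier ones, which is why the sensitive overrides can live in their own file. An illustrative Python sketch of that merge behavior (Helm itself implements this in Go; the dictionaries mirror the example files above):

```python
# Sketch of how Helm combines multiple -f values files: nested mappings are
# merged recursively, and files given later on the command line win.
# Illustrative only -- not Helm's actual implementation.

def merge_values(base, override):
    """Recursively merge `override` into `base`; override values win."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_values(result[key], value)
        else:
            result[key] = value
    return result

# user-values.yaml (non-sensitive) as a dict, with a placeholder password
user_values = {
    "installInfraViz": True,
    "controllerInfo": {
        "url": "https://<controller-url>:443",
        "password": "<placeholder>",
    },
}

# user-values-sensitive.yaml as a dict
sensitive = {
    "controllerInfo": {"password": "welcome", "accessKey": "abc-def-ghi-1516"},
}

merged = merge_values(user_values, sensitive)
print(merged["controllerInfo"]["password"])  # the sensitive file wins: welcome
print(merged["installInfraViz"])             # non-sensitive keys are preserved: True
```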

Install Cluster Agent and Infrastructure Visibility in a Cluster

To install Cluster Agent and Infrastructure Visibility simultaneously, follow the same steps listed in Install Infrastructure Visibility in a Cluster along with the following updates:

  1. Specify the following in the values file (for example, values-ca1.yaml):

    installClusterAgent: true 
    installInfraViz: true
    CODE

     

  2. Update the controllerInfo properties with the credentials from the Controller.
    Update the clusterAgent properties to set the namespace and pods to monitor. See Configure the Cluster Agent for information about the available properties such as nsToMonitor, nsToMonitorRegex, nsToExcludeRegex, and so on. 
    Update the InfraViz and NetViz properties. See InfraViz Configuration Settings for information about the available properties such as enableMasters, enableContainerHostId, enableServerViz, and so on in values.yaml.
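
Combined, the additions to the values file might look like this sketch (the clusterAgent keys shown, such as nsToMonitor, are illustrative examples; see Configure the Cluster Agent and InfraViz Configuration Settings for the full lists):

```yaml
installClusterAgent: true
installInfraViz: true

# Illustrative Cluster Agent settings
clusterAgent:
  nsToMonitor:
    - default
    - appdynamics

# Illustrative InfraViz settings
infraViz:
  enableServerViz: true
  enableDockerViz: false
```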