This page describes how to use the Cluster Agent Helm Charts to deploy Infrastructure Visibility (InfraViz).

Helm is a package manager for Kubernetes. Helm charts are a collection of files that describe a set of Kubernetes resources. The Cluster Agent Helm Chart is a convenient method to deploy the Splunk AppDynamics Operator and InfraViz.

Windows Containers are not supported for this deployment.

Requirements 

  • Machine Agent version >= 21.9.0 

  • NetViz version >= 21.3.0 

  • Controller version >= 20.6.0 

  • Cluster Agent Helm charts are compatible with Helm 3.0

  • Cluster Agent Helm Charts version must be >= 1.1.0 to install InfraViz; older versions (<= 0.1.19) do not support it.
  • For environments running Kubernetes >= 1.25, PodSecurityPolicy is removed (https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes). Pod security restrictions are now applied at the namespace level (https://kubernetes.io/docs/concepts/security/pod-security-admission/) using Pod Security Standard levels. Therefore, you must set the level to Privileged for the namespace in which the Infrastructure Visibility pod runs.
  • For environments running Kubernetes < 1.25 where PodSecurityPolicies block certain pod security context configurations, such as privileged pods, you must deploy the infraviz-pod-security-policy.yaml before editing the infraviz.yaml file. You must explicitly attach the PodSecurityPolicy to the appdynamics-infraviz service account.
  • For environments where OpenShift SecurityContextConstraints block certain pod security context configurations, such as privileged pods, you must deploy the infraviz-security-context-constraint-openshift.yaml before editing the infraviz.yaml file.
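For Kubernetes >= 1.25, the Privileged Pod Security Standard level is applied by labeling the namespace. A minimal sketch, assuming InfraViz runs in a namespace named appdynamics:

```shell
# Label the namespace so the Pod Security admission controller
# enforces the Privileged level, which the InfraViz pod requires.
kubectl label namespace appdynamics \
  pod-security.kubernetes.io/enforce=privileged
```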

 Install Infrastructure Visibility in a Cluster 

  1.  Delete all the previously installed CRDs related to Splunk AppDynamics Agent by using these commands: 

    $ kubectl get crds 
    $ kubectl delete crds <crd-names>
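    The first command lists every CRD in the cluster; to narrow the list to AppDynamics-related CRDs before deleting, you can filter on the name (this assumes, as is typical, that the CRD names contain the string appdynamics):

    ```shell
    # List only AppDynamics-related CRDs and delete them in one pass.
    kubectl get crds -o name | grep appdynamics | xargs kubectl delete
    ```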
  2. Add the chart repository to Helm:

    $ helm repo add appdynamics-cloud-helmcharts https://appdynamics.jfrog.io/artifactory/appdynamics-cloud-helmcharts/
  3. Create a namespace for appdynamics in your cluster:

    $ kubectl create namespace appdynamics
  4. Create a Helm values file; this example names it values-ca1.yaml. Update the controllerInfo properties with the credentials from the Controller.

    Update the infraViz and netViz properties. See InfraViz Configuration Settings for information about the available properties, such as enableMasters, enableContainerHostId, enableServerViz, and so on.

    values-ca1.yaml

    # To install InfraViz
    installInfraViz: true

    # Cisco AppDynamics controller info
    controllerInfo:
      url: https://<controller-url>:443
      account: <appdynamics-controller-account>
      username: <appdynamics-controller-username>
      password: <appdynamics-controller-password>
      accessKey: <appdynamics-controller-access-key>
      globalAccount: <appdynamics-controller-global-account>

    # InfraViz config
    infraViz:
      nodeOS: "linux"
      enableMasters: false
      stdoutLogging: false
      enableContainerHostId: true
      enableServerViz: true
      enableDockerViz: false
      runAsUser: <UID of runAsUser>
      runAsGroup: 1001

    # Netviz config
    netViz:
      enabled: true
      netVizPort: 3892

    See Configuration Options for values.yaml for more information regarding the available options. Also, you can download a copy of values.yaml from the Helm Chart repository using this command:

    helm show values appdynamics-cloud-helmcharts/cluster-agent
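    Before installing, you can also render the chart locally to verify that your values file parses and produces the manifests you expect (helm template renders without contacting the cluster; the release name my-release is just a placeholder):

    ```shell
    helm template my-release appdynamics-cloud-helmcharts/cluster-agent \
      -f ./values-ca1.yaml --namespace appdynamics
    ```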
  5. Deploy the InfraViz to the appdynamics namespace:

    $ helm install -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace=appdynamics
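    To confirm the deployment, check the release status and the pods in the namespace (pod names vary by release name and chart version):

    ```shell
    helm list --namespace appdynamics
    kubectl get pods --namespace appdynamics
    ```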

Configuration Options

Configuration option 

Description 

Required 

installInfraViz 

Used for installing InfraViz. This must be set to true. 

Required (Defaults to false) 

Image configuration options (options under the imageInfo key in values.yaml) 

imageInfo.operatorImage 

Operator image address in format <registryUrl>/<registryAccount>/cluster-agent-operator 

Optional (Defaults to the Docker Hub image) 

imageInfo.operatorTag 

Operator image tag/version 

Optional (Defaults to 22.1.0) 

imageInfo.imagePullPolicy 

Image pull policy for the operator pod 

Optional 

imageInfo.machineAgentImage 

Machine Agent image address in format <registryUrl>/<registryAccount>/machine-agent 

Optional (Defaults to Docker Hub image)

imageInfo.machineAgentTag 

Machine Agent image tag/version 

Optional (Defaults to latest) 

imageInfo.netVizImage 

NetViz Agent image address in format <registryUrl>/<registryAccount>/machine-agent-netviz 

Optional (Defaults to the Docker Hub image) 

imageInfo.netvizTag 

NetViz Agent image tag/version 

Optional (Defaults to latest) 

Controller configuration options (options under the controllerInfo key in values.yaml) 

controllerInfo.accessKey 

Controller accessKey 

Required 

controllerInfo.globalAccount 

Controller globalAccount 

Required 

controllerInfo.account 

Controller account 

Required 

controllerInfo.authenticateProxy 

true/false if the proxy requires authentication 

Optional 

controllerInfo.customSSLCert 

Base64 encoding of PEM formatted SSL certificate 

Optional 

controllerInfo.password 

Controller password 

Required only when auto-instrumentation is enabled. 

controllerInfo.proxyPassword 

Password for proxy authentication 

Optional 

controllerInfo.proxyUrl 

Proxy URL if the Controller is behind some proxy (protocol://domain:port).

Optional 

controllerInfo.proxyUser 

Username for proxy authentication (user@password)

Optional 

controllerInfo.url 

Controller URL 

Required 

controllerInfo.keyStoreFileSecret

Keystore file to apply the custom SSL configuration.

Optional

controllerInfo.keyStorePasswordSecret

Keystore password to apply the custom SSL configuration.

Optional

controllerInfo.username 

Controller username 

Required only when auto-instrumentation is enabled. 

RBAC configuration 

infravizServiceAccount 

Service account to be used by the InfraViz 

Optional 

createServiceAccount 

Set to true if ServiceAccounts mentioned are to be created by Helm 

Optional 

operatorServiceAccount 

Service account to be used by the Splunk AppDynamics Operator 

Optional 

NetViz config

netViz.resourcesNetViz

Set resources for the Network Visibility (NetViz) container

Optional

netViz.netVizPort

When > 0, the Network Agent is deployed in a sidecar with the Machine Agent. By default, the Network Visibility Agent works with port 3892.

Optional

netViz.securityContext.runAsGroup

If you configured the application container as a non-root user, provide the groupId of the corresponding group.

This sets the appropriate file permission on the agent artifacts.

This value is applied to all the instrumented resources.

Add this parameter if you need to override the default value of runAsGroup that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.

Optional

netViz.securityContext.runAsUser

If you configured the application container as a non-root user, provide the userId of the corresponding user.

This sets the appropriate file permission on the agent artifacts.

This value is applied to all the instrumented resources.

Add this parameter if you need to override the default value of runAsUser that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.

Optional

netViz.securityContext.allowPrivilegeEscalation

Controls whether a process can gain more privileges than its parent process. The value is always true when the container runs as:

  • Privileged container
  • CAP_SYS_ADMIN

  • NetViz does not run if this parameter is set to false.
  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
netViz.securityContext.capabilities

To add or remove POSIX capabilities from the running containers. This uses the default set of capabilities during container runtime. 

These values are included by default irrespective of whether you specify the value:

  • NET_ADMIN
  • NET_RAW

If you specify any value for capabilities, Helm considers it along with the default values.

  • NetViz does not run if the value for this parameter is set to false.
  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
netViz.securityContext.privileged

Runs the container in privileged mode, which is equivalent to root on the host. 

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
netViz.securityContext.procMount

The type of proc mount to use for the containers. 

This parameter is currently available for Deployment and DeploymentConfig mode.

Optional
netViz.securityContext.readOnlyRootFilesystem

To specify if this container has a read-only root filesystem. 

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
netViz.securityContext.runAsNonRoot

To specify if the container must run as a non-root user.

If the value is true, the kubelet validates the image at runtime and fails to start the container if it runs as root. If this parameter is not specified or if the value is false, there is no validation. 

This parameter is currently available for Deployment and DeploymentConfig mode.

Optional
netViz.securityContext.seLinuxOptions

To apply the SELinux context to the container. If this parameter is not specified, the container runtime allocates a random SELinux context for each container.

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
netViz.securityContext.seccompProfile

To specify the seccomp options used by the container. If seccomp options are specified at both the pod and container level, the container options override the pod options. 

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
netViz.securityContext.windowsOptions

To specify Windows-specific options for every container.  

  • This parameter is unavailable when spec.os.name is linux.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
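The netViz.securityContext options above are set as a nested block in values.yaml. A minimal sketch, assuming the chart mirrors the Kubernetes securityContext schema; the specific values are illustrative, not recommendations:

```yaml
netViz:
  enabled: true
  netVizPort: 3892
  securityContext:
    runAsUser: 1001
    runAsGroup: 1001
    allowPrivilegeEscalation: true
    capabilities:
      add: ["NET_ADMIN", "NET_RAW"]   # merged with the chart defaults
```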

InfraViz config

infraViz.appName

Name of the cluster displayed on the Controller UI as your cluster name. This configuration groups the nodes of the cluster based on the master, worker, infra, worker-infra roles and displays them on the Metric Browser. 

Optional
infraViz.enableContainerd

Enable containerd visibility on Machine Agent. Specify either true or false. The default value is false.

Optional
infraViz.enableContainerHostId

Flag that determines how container names are derived; specify either true or false.

Required
infraViz.enableMasters

By default, only Worker nodes are monitored. When set to true, Server Visibility is provided for Master nodes. For managed Kubernetes providers, the flag has no effect because the Master plane is not accessible.

Optional
infraViz.enableServerViz

Enable Server Visibility.

Required
infraViz.enableDockerViz

Enable Docker Visibility.

Required
infraViz.eventServiceUrl

The Event Service endpoint.

Optional
infraViz.runAsUser

The UID (User ID) to run the entry point of the container process. If you do not specify the UID, this defaults to the user id specified in the image.

To run as a different UID, change runAsUser without changing the group ID.

If you also specify runAsUser within infraViz.securityContext, then the securityContext value takes precedence and overrides the infraViz.runAsUser value.

This parameter is deprecated. We recommend using infraViz.securityContext.runAsUser.

Optional

infraViz.logProperties.logLevel

Level of logging verbosity. Valid options are: info or debug.

Optional

infraViz.metricProperties.metricsLimit

Maximum number of metrics that the Machine Agent sends to the Controller.

Optional

infraViz.propertyBag

String with any other Machine Agent parameters

Optional

infraViz.runAsGroup
The GID (Group ID) to run the entry point of the container process. If you do not specify the GID, this defaults to the group specified in the image:

docker.io/appdynamics/machine-agent

docker.io/appdynamics/machine-agent-analytics:latest

If you also specify runAsGroup within infraViz.securityContext, then the securityContext value takes precedence and overrides the infraViz.runAsGroup value.

This parameter is deprecated. We recommend using infraViz.securityContext.runAsGroup.

Optional
infraViz.stdoutLogging

Determines if logs are saved to a file or redirected to the console.

Optional

infraViz.uniqueHostId

Unique host ID in Splunk AppDynamics. Valid options are: spec.nodeName or status.hostIP.

Optional

infraViz.securityContext.runAsGroup

If you configured the application container as a non-root user, provide the groupId of the corresponding group.

This sets the appropriate file permission on the agent artifacts.

This value is applied to all the instrumented resources.

Add this parameter if you need to override the default value of runAsGroup that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.

Optional

infraViz.securityContext.runAsUser

If you configured the application container as a non-root user, provide the userId of the corresponding user.

This sets the appropriate file permission on the agent artifacts.

This value is applied to all the instrumented resources.

Add this parameter if you need to override the default value of runAsUser that is configured for default instrumentation, or if you require a specific value for the resources that satisfy this rule.

Optional

infraViz.securityContext.allowPrivilegeEscalation

Controls whether a process can gain more privileges than its parent process. The value is always true when the container runs as:

  • Privileged container
  • CAP_SYS_ADMIN 

If you do not set this parameter, Helm uses the default value of true.

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
infraViz.securityContext.capabilities

To add or remove POSIX capabilities from the running containers. This uses the default set of capabilities during container runtime. 

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
infraViz.securityContext.privileged

Runs the container in privileged mode, which is equivalent to root on the host. 

If you do not set this parameter, Helm uses the default value of true.

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
infraViz.securityContext.procMount

The type of proc mount to use for the containers. 

This parameter is currently available for Deployment and DeploymentConfig mode.

Optional
infraViz.securityContext.readOnlyRootFilesystem

To specify if this container has a read-only root filesystem. 

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
infraViz.securityContext.runAsNonRoot

To specify if the container must run as a non-root user.

If the value is true, the kubelet validates the image at runtime and fails to start the container if it runs as root. If this parameter is not specified or if the value is false, there is no validation. 

This parameter is currently available for Deployment and DeploymentConfig mode.

Optional
infraViz.securityContext.seLinuxOptions

To apply the SELinux context to the container. If this parameter is not specified, the container runtime allocates a random SELinux context for each container.

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
infraViz.securityContext.seccompProfile

To specify the seccomp options used by the container. If seccomp options are specified at both the pod and container level, the container options override the pod options. 

  • This parameter is unavailable when spec.os.name is Windows.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
infraViz.securityContext.windowsOptions

To specify Windows-specific options for every container.  

  • This parameter is unavailable when spec.os.name is linux.
  • This parameter is currently available for Deployment and DeploymentConfig mode.
Optional
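As with the NetViz block, the infraViz.securityContext options form a nested block in values.yaml. A sketch under the assumption that the chart mirrors the Kubernetes securityContext schema (the values shown are illustrative):

```yaml
infraViz:
  enableServerViz: true
  securityContext:
    privileged: true            # defaults to true if unset
    runAsUser: 1001
    runAsGroup: 1001
    readOnlyRootFilesystem: false
```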

InfraViz pod config 

infravizPod.nodeSelector 

Kubernetes node selector field in the InfraViz pod spec.

Optional 

infravizPod.resources 

Kubernetes CPU and memory resources in the InfraViz pod spec.

Optional 

infravizPod.imagePullPolicy

The image pull policy for the InfraViz pod.

Optional

infravizPod.imagePullSecret

The credential file used to authenticate when pulling images from your private Docker registry or repository.

Optional

infravizPod.priorityClassName

The name of the pod priority class, which is used in the pod specification to set the priority.

Optional

infravizPod.env

List of environment variables.

Optional

infravizPod.overrideVolumeMounts

The list of volumeMounts.

Optional

infravizPod.tolerations

List of tolerations based on the taints that are associated with nodes.

Optional
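The infravizPod options combine into a single block in values.yaml. An illustrative sketch (the resource sizes and toleration shown are examples, not sizing guidance):

```yaml
infravizPod:
  nodeSelector:
    kubernetes.io/os: linux
  resources:
    requests:
      cpu: 200m
      memory: 800Mi
    limits:
      cpu: 500m
      memory: 1Gi
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
```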

Operator pod config 

operatorPod.nodeSelector 

Kubernetes node selector field in the Splunk AppDynamics Operator pod spec 

Optional 

operatorPod.tolerations 

Kubernetes tolerations field in the Splunk AppDynamics Operator pod spec 

Optional 

operatorPod.resources 

Kubernetes CPU and memory resources in the Splunk AppDynamics Operator pod spec 

Optional 

Best Practices for Sensitive Data

We recommend separating sensitive data into its own values.yaml file. Examples of these values are:

  • controllerInfo.password
  • controllerInfo.accessKey
  • controllerInfo.customSSLCert
  • controllerInfo.proxyPassword

Each values file follows the structure of the default values.yaml, enabling you to share files containing non-sensitive configuration properties while keeping sensitive values safe.

Default user-values.yaml File Example

user-values.yaml

# To install InfraViz
installInfraViz: true

imageInfo:
  operatorImage: docker.io/appdynamics/cluster-agent-operator
  operatorTag: 22.1.0
  imagePullPolicy: Always            # Will be used for operator pod
  machineAgentImage: docker.io/appdynamics/machine-agent
  machineAgentTag: latest
  netVizImage: docker.io/appdynamics/machine-agent-netviz
  netvizTag: latest

controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  username: <appdynamics-controller-username>
  password: <appdynamics-controller-password>
  accessKey: <appdynamics-controller-access-key>

infravizServiceAccount: appdynamics-infraviz-ssl # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl # Can be any valid name

user-values-sensitive.yaml

controllerInfo:
  password: welcome
  accessKey: abc-def-ghi-1516

When installing the Helm Chart, use multiple -f parameters to reference the files:

helm install -f ./user-values.yaml -f ./user-values-sensitive.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace ca-appdynamics
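After installing, you can confirm which values Helm merged across the -f files (later files override earlier ones) with helm get values:

```shell
# Shows the user-supplied values for the release, merged in -f order.
helm get values "<my-cluster-agent-helm-release>" --namespace ca-appdynamics
```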

Install Cluster Agent and Infrastructure Visibility in a Cluster

To install Cluster Agent and Infrastructure Visibility simultaneously, follow the same steps listed in Install Infrastructure Visibility in a Cluster along with the following updates:

  1. Specify the following in the yaml file (for example, values-ca1.yaml):

    installClusterAgent: true 
    installInfraViz: true

     

  2. Update the controllerInfo properties with the credentials from the Controller.
    Update the clusterAgent properties to set the namespace and pods to monitor. See Configure the Cluster Agent for information about the available properties, such as nsToMonitorRegex, nsToExcludeRegex, and so on. 
    Update the InfraViz and NetViz properties. See InfraViz Configuration Settings for information about the available properties such as enableMasters, enableContainerHostId, enableServerViz, and so on in values.yaml.