Install Infrastructure Visibility with Helm Charts
This page describes how to use the Cluster Agent Helm Charts to deploy Infrastructure Visibility (InfraViz).
Helm is a package manager for Kubernetes. Helm charts are a collection of files that describe a set of Kubernetes resources. The Cluster Agent Helm Chart is a convenient method to deploy the Splunk AppDynamics Operator and InfraViz.
Windows Containers are not supported for this deployment.
Requirements
- Machine Agent version >= 21.9.0
- NetViz version >= 21.3.0
- Controller version >= 20.6.0
- Cluster Agent Helm charts are compatible with Helm 3.0
- To install InfraViz by using the Cluster Agent Helm Charts, the chart version must be >= 1.1.0. Older chart versions (<= 0.1.19) do not support this installation.
- For environments running Kubernetes >= 1.25, PodSecurityPolicy is removed (https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes). Pod security restrictions are now applied at the namespace level using Pod Security Standard levels (https://kubernetes.io/docs/concepts/security/pod-security-admission/). Therefore, you must set the privileged level on the namespace in which the Infrastructure Visibility pod runs (see the namespace example after this list).
- For environments running Kubernetes < 1.25 where PodSecurityPolicies block certain pod security context configuration, such as privileged pods, you must deploy the infraviz-pod-security-policy.yaml file before editing the infraviz.yaml file. You must attach the PodSecurityPolicy to the appdynamics-infraviz service account explicitly.
- For environments where OpenShift SecurityContextConstraints block certain pod security context configuration, such as privileged pods, you must deploy the infraviz-security-context-constraint-openshift.yaml file before editing the infraviz.yaml file.
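For Kubernetes >= 1.25, one way to set the privileged level is to label the namespace that runs the Infrastructure Visibility pods with the Pod Security Standards enforce label. A minimal sketch, assuming the appdynamics namespace that is created later in this procedure:

apiVersion: v1
kind: Namespace
metadata:
  name: appdynamics
  labels:
    # Enforce the "privileged" Pod Security Standard so the InfraViz
    # pods may run with the elevated security context they require.
    pod-security.kubernetes.io/enforce: privileged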
Install Infrastructure Visibility in a Cluster
Delete all the previously installed CRDs related to Splunk AppDynamics Agent by using these commands:
$ kubectl get crds
$ kubectl delete crds <crd-names>
Add the chart repository to Helm:
$ helm repo add appdynamics-cloud-helmcharts https://appdynamics.jfrog.io/artifactory/appdynamics-cloud-helmcharts/
Create a namespace for appdynamics in your cluster:
$ kubectl create namespace appdynamics
Create a Helm values file; in this example, it is named values-ca1.yaml. Update the controllerInfo properties with the credentials from the Controller. Update the infraViz and netViz properties. See InfraViz Configuration Settings for information about the available properties, such as enableMasters, enableContainerHostId, enableServerViz, and so on.
values-ca1.yaml
# To install InfraViz
installInfraViz: true

# Cisco AppDynamics controller info
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  username: <appdynamics-controller-username>
  password: <appdynamics-controller-password>
  accessKey: <appdynamics-controller-access-key>
  globalAccount: <appdynamics-controller-global-account>

# InfraViz config
infraViz:
  nodeOS: "linux"
  enableMasters: false
  stdoutLogging: false
  enableContainerHostId: true
  enableServerViz: true
  enableDockerViz: false
  runAsUser: <UID of runAsUser>
  runAsGroup: 1001

# Netviz config
netViz:
  enabled: true
  netVizPort: 3892
See Configuration Options for values.yaml for more information about the available options. You can also download a copy of values.yaml from the Helm Chart repository using this command:
helm show values appdynamics-cloud-helmcharts/cluster-agent
Deploy InfraViz to the appdynamics namespace:
$ helm install -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace=appdynamics
Configuration Options
Configuration option | Description | Required |
---|---|---|
installInfraViz | Used for installing InfraViz. This must be set to true. | Required (Defaults to false) |
Image configuration options (options under imageInfo key in values.yaml) | ||
imageInfo.operatorImage | Operator image address in the format <registryUrl>/<registryAccount>/cluster-agent-operator | Optional (Defaults to the Docker Hub image) |
imageInfo.operatorTag | Operator image tag/version | Optional (Defaults to 22.1.0) |
imageInfo.imagePullPolicy | Image pull policy for the operator pod | Optional |
imageInfo.machineAgentImage | Machine Agent image address in the format <registryUrl>/<registryAccount>/machine-agent | Optional (Defaults to the Docker Hub image) |
imageInfo.machineAgentTag | Machine Agent image tag/version | Optional (Defaults to latest) |
imageInfo.netVizImage | NetViz Agent image address in the format <registryUrl>/<registryAccount>/machine-agent-netviz | Optional (Defaults to the Docker Hub image) |
imageInfo.netvizTag | NetViz Agent image tag/version | Optional (Defaults to latest) |
Controller configuration options (Config options under controllerInfo key in values.yaml) | ||
controllerInfo.accessKey | Controller accessKey | Required |
controllerInfo.globalAccount | Controller globalAccount | Required |
controllerInfo.account | Controller account | Required |
controllerInfo.authenticateProxy | true/false if the proxy requires authentication | Optional |
controllerInfo.customSSLCert | Base64 encoding of PEM formatted SSL certificate | Optional |
controllerInfo.password | Controller password | Required only when auto-instrumentation is enabled. |
controllerInfo.proxyPassword | Password for proxy authentication | Optional |
controllerInfo.proxyUrl | Proxy URL if the Controller is behind a proxy | Optional |
controllerInfo.proxyUser | Username for proxy authentication | Optional |
controllerInfo.url | Controller URL | Required |
| Keystore file to apply the custom SSL configuration. | Optional |
| Keystore password to apply the custom SSL configuration. | Optional |
controllerInfo.username | Controller username | Required only when auto-instrumentation is enabled. |
RBAC configuration | ||
infravizServiceAccount | Service account to be used by the InfraViz pods | Optional |
createServiceAccount | Set to true if the ServiceAccounts mentioned are to be created by Helm | Optional |
operatorServiceAccount | Service account to be used by the Splunk AppDynamics Operator | Optional |
NetViz config | ||
netViz.resourcesNetViz | Set resources for the Network Visibility (NetViz) container | Optional |
netViz.netVizPort | When > 0, the Network Agent is deployed in a sidecar with the Machine Agent. By default, the Network Visibility Agent works with port 3892. | Optional |
netViz.securityContext.runAsGroup | If you configured the application container to run as a non-root user, provide the group ID (GID) of that user. This sets the appropriate file permissions on the agent artifacts. This value is applied to all the instrumented resources. Add this parameter if you require to override the default value. | Optional |
netViz.securityContext.runAsUser | If you configured the application container to run as a non-root user, provide the user ID (UID) of that user. This sets the appropriate file permissions on the agent artifacts. This value is applied to all the instrumented resources. Add this parameter if you require to override the default value. | Optional |
netViz.securityContext.allowPrivilegeEscalation | To control whether a process can gain more privileges than its parent process. The value is always true when the container runs as privileged or has the CAP_SYS_ADMIN capability. | Optional |
netViz.securityContext.capabilities | To add or remove POSIX capabilities from the running containers. The container runtime's default set of capabilities is used; if you specify any value for capabilities, Helm considers the specified values along with the default values. | Optional |
netViz.securityContext.privileged | To run the container in privileged mode, which is equivalent to root on the host. | Optional |
netViz.securityContext.procMount | The type of proc mount to use for the containers. This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
netViz.securityContext.readOnlyRootFilesystem | To specify if this container has a read-only root filesystem. | Optional |
netViz.securityContext.runAsNonRoot | To specify if the container must run as a non-root user. If the value is true, the Kubelet validates the image at runtime to ensure that the container fails to start when run as root. If this parameter is not specified or if the value is false, there is no validation. This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
netViz.securityContext.seLinuxOptions | To apply the SELinux context to the container. If this parameter is not specified, the container runtime allocates a random SELinux context for each container. | Optional |
netViz.securityContext.seccompProfile | To specify the seccomp options used by the container. If seccomp options are specified at both the pod and container level, the container options override the pod options. | Optional |
netViz.securityContext.windowsOptions | To specify Windows-specific options for every container. | Optional |
InfraViz config | ||
infraViz.appName | Name of the cluster displayed on the Controller UI as your cluster name. This configuration groups the nodes of the cluster based on this name. | Optional |
infraViz.enableContainerd | Enable containerd visibility on Machine Agent. Specify either true or false. The default value is false. | Optional |
infraViz.enableContainerHostId | Flag that determines how container names are derived; specify either true or false. | Required |
infraViz.enableMasters | By default, only Worker nodes are monitored. When set to true, Server Visibility is provided for Master nodes. For managed Kubernetes providers, the flag has no effect because the Master plane is not accessible. | Optional |
infraViz.enableServerViz | Enable Server Visibility | Required |
infraViz.enableDockerViz | Enable Docker Visibility | Required |
infraViz.eventServiceUrl | The Event Service Endpoint. | Optional |
infraViz.runAsUser | The UID (User ID) to run the entry point of the container process. If you do not specify the UID, this defaults to the user ID specified in the image. If you require to run as any other UID, change the UID for runAsUser without changing the group ID. This parameter is deprecated; we recommend that you use infraViz.securityContext.runAsUser instead. | Optional |
infraViz.logLevel | Level of logging verbosity. | Optional |
infraViz.metricsLimit | Maximum number of metrics that the Machine Agent sends to the Controller. | Optional |
infraViz.propertyBag | String with any other Machine Agent parameters | Optional |
infraViz.runAsGroup | The GID (Group ID) to run the entry point of the container process. If you do not specify the GID, this uses the group specified in the image. This parameter is deprecated; we recommend that you use infraViz.securityContext.runAsGroup instead. | Optional |
infraViz.stdoutLogging | Determines if logs are saved to a file or redirected to the Console. | Optional |
infraViz.uniqueHostId | Unique host ID in Splunk AppDynamics. | Optional |
infraViz.securityContext.runAsGroup | If you configured the application container to run as a non-root user, provide the group ID (GID) of that user. This sets the appropriate file permissions on the agent artifacts. This value is applied to all the instrumented resources. Add this parameter if you require to override the default value. | Optional |
infraViz.securityContext.runAsUser | If you configured the application container to run as a non-root user, provide the user ID (UID) of that user. This sets the appropriate file permissions on the agent artifacts. This value is applied to all the instrumented resources. Add this parameter if you require to override the default value. | Optional |
infraViz.securityContext.allowPrivilegeEscalation | To control whether a process can gain more privileges than its parent process. The value is always true when the container runs as privileged or has the CAP_SYS_ADMIN capability. If you do not set this parameter, Helm uses the chart's default value. | Optional |
infraViz.securityContext.capabilities | To add or remove POSIX capabilities from the running containers. This uses the default set of capabilities during container runtime. | Optional |
infraViz.securityContext.privileged | To run the container in privileged mode, which is equivalent to root on the host. If you do not set this parameter, Helm uses the chart's default value. | Optional |
infraViz.securityContext.procMount | The type of proc mount to use for the containers. This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
infraViz.securityContext.readOnlyRootFilesystem | To specify if this container has a read-only root filesystem. | Optional |
infraViz.securityContext.runAsNonRoot | To specify if the container must run as a non-root user. If the value is true, the Kubelet validates the image at runtime to ensure that the container fails to start when run as root. If this parameter is not specified or if the value is false, there is no validation. This parameter is currently available for Deployment and DeploymentConfig mode. | Optional |
infraViz.securityContext.seLinuxOptions | To apply the SELinux context to the container. If this parameter is not specified, the container runtime allocates a random SELinux context for each container. | Optional |
infraViz.securityContext.seccompProfile | To specify the seccomp options used by the container. If seccomp options are specified at both the pod and container level, the container options override the pod options. | Optional |
infraViz.securityContext.windowsOptions | To specify Windows-specific options for every container. | Optional |
InfraViz pod config | ||
infravizPod.nodeSelector | Kubernetes node selector field in the InfraViz pod spec. | Optional |
infravizPod.resources | Kubernetes CPU and memory resources in the InfraViz pod spec. | Optional |
infravizPod.imagePullPolicy | The image pull policy for the InfraViz pod. | Optional |
infravizPod.imagePullSecret | The credential file used to authenticate when pulling images from your private Docker registry or repository. | Optional |
infravizPod.priorityClassName | The name of the pod priority class, which is used in the pod specification to set the priority. | Optional |
infravizPod.env | List environment variables. | Optional |
infravizPod.overrideVolumeMounts | The list of volumeMounts. | Optional |
infravizPod.tolerations | List of tolerations based on the taints that are associated with nodes. | Optional |
Operator pod config | ||
operatorPod.nodeSelector | Kubernetes node selector field in the Splunk AppDynamics Operator pod spec | Optional |
operatorPod.tolerations | Kubernetes tolerations field in the Splunk AppDynamics Operator pod spec | Optional |
operatorPod.resources | Kubernetes CPU and memory resources in the Splunk AppDynamics Operator pod spec | Optional |
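The dotted option names in the table map to nested keys in the values file. Below is a minimal sketch of overriding the InfraViz and NetViz security contexts, assuming an example UID/GID of 1001 (the same value used for runAsGroup in the values-ca1.yaml example above); all other keys keep their chart defaults:

infraViz:
  securityContext:
    # Run the InfraViz container as a non-root user (example IDs).
    runAsUser: 1001
    runAsGroup: 1001
netViz:
  securityContext:
    # Apply the same example IDs to the NetViz sidecar container.
    runAsUser: 1001
    runAsGroup: 1001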
Best Practices for Sensitive Data
We recommend using multiple values files so that sensitive data is kept in separate values files. Examples of these values are:
controllerInfo.password
controllerInfo.accessKey
controllerInfo.customSSLCert
controllerInfo.proxyPassword
Each values file follows the structure of the default values.yaml, enabling you to easily share files with non-sensitive configuration properties while keeping sensitive values safe.
Default user-values.yaml File Example
user-values.yaml
# To install InfraViz
installInfraViz: true

imageInfo:
  operatorImage: docker.io/appdynamics/cluster-agent-operator
  operatorTag: 22.1.0
  imagePullPolicy: Always   # Will be used for operator pod
  machineAgentImage: docker.io/appdynamics/machine-agent
  machineAgentTag: latest
  netVizImage: docker.io/appdynamics/machine-agent-netviz
  netvizTag: latest

controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  username: <appdynamics-controller-username>
  password: <appdynamics-controller-password>
  accessKey: <appdynamics-controller-access-key>

infravizServiceAccount: appdynamics-infraviz-ssl   # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl   # Can be any valid name
user-values-sensitive.yaml
controllerInfo:
  password: welcome
  accessKey: abc-def-ghi-1516
When installing the Helm Chart, use multiple -f parameters to reference the files:
helm install -f ./user-values.yaml -f ./user-values-sensitive.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace ca-appdynamics
Install Cluster Agent and Infrastructure Visibility in a Cluster
To install Cluster Agent and Infrastructure Visibility simultaneously, follow the same steps listed in Install Infrastructure Visibility in a Cluster along with the following updates:
Specify the following in the values file (for example, values-ca1.yaml):
installClusterAgent: true
installInfraViz: true
- Update the controllerInfo properties with the credentials from the Controller.
- Update the clusterAgent properties to set the namespace and pods to monitor. See Configure the Cluster Agent for information about the available properties, such as nsToMonitorRegex, nsToExcludeRegex, and so on.
- Update the infraViz and netViz properties. See InfraViz Configuration Settings for information about the available properties, such as enableMasters, enableContainerHostId, enableServerViz, and so on, in values.yaml. A combined values sketch follows this list.
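For reference, a minimal combined values sketch that enables both agents. The clusterAgent block is illustrative: nsToMonitorRegex is one of the properties documented in Configure the Cluster Agent, and the ".*" value here is only a hypothetical example. The remaining properties follow the values-ca1.yaml example above.

installClusterAgent: true
installInfraViz: true

controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  username: <appdynamics-controller-username>
  password: <appdynamics-controller-password>
  accessKey: <appdynamics-controller-access-key>

# Cluster Agent config (see Configure the Cluster Agent for all properties)
clusterAgent:
  nsToMonitorRegex: ".*"   # hypothetical example: monitor all namespaces

# InfraViz config (see InfraViz Configuration Settings for all properties)
infraViz:
  enableMasters: false
  enableContainerHostId: true
  enableServerViz: true

# NetViz config
netViz:
  enabled: true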