Install Infrastructure Visibility with Helm Charts
This page describes how to use the Cluster Agent Helm Charts to deploy Infrastructure Visibility (InfraViz).
Helm is a package manager for Kubernetes. Helm charts are collections of files that describe a set of Kubernetes resources. The Cluster Agent Helm Chart is a convenient way to deploy the AppDynamics Operator and InfraViz.
Windows Containers are not supported for this deployment.
Requirements
- Machine Agent version >= 21.9.0
- NetViz version >= 21.3.0
- Controller version >= 20.6.0
- Cluster Agent Helm Charts are compatible with Helm 3.0
- Cluster Agent Helm Charts version must be >= 1.1.0 to install InfraViz. Older versions (<= 0.1.19) of the Cluster Agent Helm Charts do not support it.
- For environments running Kubernetes >= 1.25, PodSecurityPolicy has been removed (https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes). Pod security restrictions are now applied at the namespace level using Pod Security Standard levels (https://kubernetes.io/docs/concepts/security/pod-security-admission/). Therefore, you must set the level to Privileged for the namespace in which the Infrastructure Visibility pod runs.
- For environments running Kubernetes < 1.25, where PodSecurityPolicies block certain pod security context configurations, such as privileged pods, you must deploy the `infraviz-pod-security-policy.yaml` file before editing the `infraviz.yaml` file. You must explicitly attach the PodSecurityPolicy to the `appdynamics-infraviz` service account.
- For OpenShift environments where SecurityContextConstraints block certain pod security context configurations, such as privileged pods, you must deploy the `infraviz-security-context-constraint-openshift.yaml` file before editing the `infraviz.yaml` file.
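For the Kubernetes >= 1.25 case, the Pod Security Standard level is applied with a namespace label. A minimal sketch, assuming InfraViz runs in the `appdynamics` namespace created later on this page:

```shell
# Apply the "privileged" Pod Security Standard level to the namespace
# so the InfraViz pod's privileged security context is admitted
kubectl label namespace appdynamics \
  pod-security.kubernetes.io/enforce=privileged
```

The `pod-security.kubernetes.io/enforce` label is the standard Pod Security Admission mechanism; adjust the namespace name to match your deployment.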
Install Infrastructure Visibility in a Cluster
Delete all the previously installed CRDs related to the AppDynamics Agent by using these commands:

```
$ kubectl get crds
$ kubectl delete crds <crd-names>
```
Add the chart repository to Helm:

```
$ helm repo add appdynamics-cloud-helmcharts https://appdynamics.jfrog.io/artifactory/appdynamics-cloud-helmcharts/
```
Create a namespace for appdynamics in your cluster:

```
$ kubectl create namespace appdynamics
```
Create a Helm values file; in this example it is named values-ca1.yaml. Update the `controllerInfo` properties with the credentials from the Controller. Update the `infraViz` and `netViz` properties. See InfraViz Configuration Settings for information about the available properties, such as `enableMasters`, `enableContainerHostId`, `enableServerViz`, and so on.

values-ca1.yaml

```yaml
# To install InfraViz
installInfraViz: true

# AppDynamics controller info
controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  username: <appdynamics-controller-username>
  password: <appdynamics-controller-password>
  accessKey: <appdynamics-controller-access-key>
  globalAccount: <appdynamics-controller-global-account>

# InfraViz config
infraViz:
  nodeOS: "linux"
  enableMasters: false
  stdoutLogging: false
  enableContainerHostId: true
  enableServerViz: true
  enableDockerViz: false
  runAsUser: <UID of runAsUser>
  runAsGroup: 1001

# Netviz config
netViz:
  enabled: true
  netVizPort: 3892
```
See Configuration Options for more information regarding the available options in `values.yaml`. You can also download a copy of `values.yaml` from the Helm Chart repository using this command:

```
helm show values appdynamics-cloud-helmcharts/cluster-agent
```
Deploy InfraViz to the appdynamics namespace:

```
$ helm install -f ./values-ca1.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace=appdynamics
```
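After the deployment, you can confirm that the release and its pods came up. A sketch, assuming the release name and the `appdynamics` namespace used above:

```shell
# List Helm releases in the namespace; the release should show STATUS "deployed"
helm list --namespace appdynamics

# Check that the InfraViz (machine-agent) pods are Running
kubectl get pods --namespace appdynamics
```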
Configuration Options
| Configuration option | Description | Required |
|---|---|---|
| `installInfraViz` | Used for installing InfraViz. This must be set to true. | Required (Defaults to false) |
| **Image configuration options** (under the `imageInfo` key in values.yaml) | | |
| `imageInfo.operatorImage` | Operator image address in the format `<registryUrl>/<registryAccount>/cluster-agent-operator` | Optional (Defaults to the Docker Hub image) |
| `imageInfo.operatorTag` | Operator image tag/version | Optional (Defaults to 22.1.0) |
| `imageInfo.imagePullPolicy` | Image pull policy for the Operator pod | Optional |
| `imageInfo.machineAgentImage` | Machine Agent image address in the format `<registryUrl>/<registryAccount>/machine-agent` | Optional (Defaults to the Docker Hub image) |
| `imageInfo.machineAgentTag` | Machine Agent image tag/version | Optional (Defaults to latest) |
| `imageInfo.netVizImage` | NetViz Agent image address in the format `<registryUrl>/<registryAccount>/machine-agent-netviz` | Optional (Defaults to the Docker Hub image) |
| `imageInfo.netvizTag` | NetViz Agent image tag/version | Optional (Defaults to latest) |
| **Controller configuration options** (under the `controllerInfo` key in values.yaml) | | |
| `controllerInfo.accessKey` | AppDynamics Controller accessKey | Required |
| `controllerInfo.globalAccount` | AppDynamics Controller globalAccount | Required |
| `controllerInfo.account` | AppDynamics Controller account | Required |
| `controllerInfo.authenticateProxy` | true/false if the proxy requires authentication | Optional |
| `controllerInfo.customSSLCert` | Base64 encoding of the PEM-formatted SSL certificate | Optional |
| `controllerInfo.password` | AppDynamics Controller password | Required only when auto-instrumentation is enabled |
| `controllerInfo.proxyPassword` | Password for proxy authentication | Optional |
| `controllerInfo.proxyUrl` | Proxy URL if the Controller is behind a proxy | Optional |
| `controllerInfo.proxyUser` | Username for proxy authentication | Optional |
| `controllerInfo.url` | AppDynamics Controller URL | Required |
| `controllerInfo.username` | AppDynamics Controller username | Required only when auto-instrumentation is enabled |
| **RBAC configuration** | | |
| `infravizServiceAccount` | Service account to be used by InfraViz | Optional |
| `createServiceAccount` | Set to true if the ServiceAccounts mentioned are to be created by Helm | Optional |
| `operatorServiceAccount` | Service account to be used by the AppDynamics Operator | Optional |
| **NetViz config** | | |
| `netViz.resourcesNetViz` | Set resources for the Network Visibility (NetViz) container | Optional |
| `netViz.netVizPort` | When > 0, the Network Agent is deployed in a sidecar with the Machine Agent. By default, the Network Visibility Agent works with port 3892. | Optional |
| **InfraViz config** | | |
| `infraViz.enableContainerHostId` | Flag that determines how container names are derived; specify either true or false. | Required |
| `infraViz.enableMasters` | By default, only Worker nodes are monitored. When set to true, Server Visibility is provided for Master nodes. For managed Kubernetes providers, the flag has no effect because the Master plane is not accessible. | Optional |
| `infraViz.enableServerViz` | Enable Server Visibility | Required |
| `infraViz.enableDockerViz` | Enable Docker Visibility | Required |
| `infraViz.runAsUser` | The UID (User ID) used to run the entry point of the container process. If you do not specify the UID, this defaults to the user ID specified in the image. To run as any other UID, change the UID for runAsUser without changing the group ID. | Optional |
| `infraViz.runAsGroup` | The GID (Group ID) used to run the entry point of the container process. If you do not specify the GID, this uses the GID specified in the image. | Optional |
| `infraViz.stdoutLogging` | Determines whether logs are saved to a file or redirected to the console. | Optional |
| **InfraViz pod config** | | |
| `infravizPod.nodeSelector` | Kubernetes node selector field in the InfraViz pod spec | Optional |
| `infravizPod.resources` | Kubernetes CPU and memory resources in the InfraViz pod spec | Optional |
| `infravizPod.imagePullSecret` | The credential file used to authenticate when pulling images from your private Docker registry or repository | Optional |
| `infravizPod.priorityClassName` | The name of the pod priority class, which is used in the pod specification to set the priority | Optional |
| `infravizPod.env` | List of environment variables | Optional |
| `infravizPod.overrideVolumeMounts` | The list of volumeMounts | Optional |
| `infravizPod.tolerations` | List of tolerations based on the taints that are associated with the nodes | Optional |
| **Operator pod config** | | |
| `operatorPod.nodeSelector` | Kubernetes node selector field in the AppDynamics Operator pod spec | Optional |
| `operatorPod.tolerations` | Kubernetes tolerations field in the AppDynamics Operator pod spec | Optional |
| `operatorPod.resources` | Kubernetes CPU and memory resources in the AppDynamics Operator pod spec | Optional |
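As an illustration of how the pod-level options fit together, a values fragment might look like the following sketch (the resource amounts and priority class name are examples only, not recommended values):

```yaml
infravizPod:
  nodeSelector:
    kubernetes.io/os: linux
  resources:
    requests:
      cpu: 200m
      memory: 800Mi
    limits:
      cpu: 500m
      memory: 1Gi
  priorityClassName: system-node-critical

operatorPod:
  nodeSelector:
    kubernetes.io/os: linux
  resources:
    limits:
      cpu: 200m
      memory: 300Mi
```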
Best Practices for Sensitive Data
We recommend separating sensitive data into their own values.yaml files. Examples of such values are:
controllerInfo.password
controllerInfo.accessKey
controllerInfo.customSSLCert
controllerInfo.proxyPassword
Each values file follows the structure of the default `values.yaml`, enabling you to easily share files containing non-sensitive configuration properties while keeping sensitive values safe.
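As an alternative to a separate sensitive values file, Helm's `--set` flag can override individual values on the command line at install time. A sketch, using the same placeholder credentials as elsewhere on this page:

```shell
# Override sensitive values at install time instead of storing them in a file
helm install -f ./user-values.yaml \
  --set controllerInfo.password=<appdynamics-controller-password> \
  --set controllerInfo.accessKey=<appdynamics-controller-access-key> \
  "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent \
  --namespace appdynamics
```

Note that values passed via `--set` may still appear in shell history, so a separate values file remains the safer option for long-lived setups.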
Default user-values.yaml File Example
user-values.yaml

```yaml
# To install InfraViz
installInfraViz: true

imageInfo:
  operatorImage: docker.io/appdynamics/cluster-agent-operator
  operatorTag: 22.1.0
  imagePullPolicy: Always # Will be used for operator pod
  machineAgentImage: docker.io/appdynamics/machine-agent
  machineAgentTag: latest
  netVizImage: docker.io/appdynamics/machine-agent-netviz
  netvizTag: latest

controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  username: <appdynamics-controller-username>
  password: <appdynamics-controller-password>
  accessKey: <appdynamics-controller-access-key>

infravizServiceAccount: appdynamics-infraviz-ssl # Can be any valid name
operatorServiceAccount: appdynamics-operator-ssl # Can be any valid name
```
user-values-sensitive.yaml

```yaml
controllerInfo:
  password: welcome
  accessKey: abc-def-ghi-1516
```
When installing the Helm Chart, use multiple `-f` parameters to reference the files:

```
helm install -f ./user-values.yaml -f ./user-values-sensitive.yaml "<my-cluster-agent-helm-release>" appdynamics-cloud-helmcharts/cluster-agent --namespace ca-appdynamics
```
Install Cluster Agent and Infrastructure Visibility in a Cluster
To install Cluster Agent and Infrastructure Visibility simultaneously, follow the same steps listed in Install Infrastructure Visibility in a Cluster along with the following updates:
Specify the following in the values file (for example, values-ca1.yaml):

```yaml
installClusterAgent: true
installInfraViz: true
```
- Update the `controllerInfo` properties with the credentials from the Controller.
- Update the `clusterAgent` properties to set the namespace and pods to monitor. See Configure the Cluster Agent for information about the available properties, such as `nsToMonitor`, `nsToMonitorRegex`, `nsToExcludeRegex`, and so on.
- Update the `infraViz` and `netViz` properties. See InfraViz Configuration Settings for information about the available properties in `values.yaml`, such as `enableMasters`, `enableContainerHostId`, `enableServerViz`, and so on.
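Putting the updates above together, a combined values file might look like this sketch (assembled from keys shown elsewhere on this page; the namespaces under `nsToMonitor` are placeholders):

```yaml
# Install both agents in one release
installClusterAgent: true
installInfraViz: true

controllerInfo:
  url: https://<controller-url>:443
  account: <appdynamics-controller-account>
  accessKey: <appdynamics-controller-access-key>

# Cluster Agent: which namespaces to monitor (example values)
clusterAgent:
  nsToMonitor:
    - default
    - appdynamics

# InfraViz and NetViz settings as described above
infraViz:
  enableServerViz: true
  enableDockerViz: false

netViz:
  enabled: true
  netVizPort: 3892
```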