Install Infrastructure Visibility with the Kubernetes CLI
This page describes how to install the Machine and Network Agents in a Kubernetes cluster where the Cluster Agent Operator is installed.
The Cluster Agent Operator provides a custom resource definition called InfraViz. You can use InfraViz to simplify deploying the Machine and Network Agents as a daemonset in a Kubernetes cluster. Additionally, you can deploy these agents by creating a daemonset YAML which does not require the Cluster Agent Operator. For more information, see these examples.
To deploy the Analytics Agent as a daemonset in a Kubernetes cluster, see Install Agent-Side Components in Kubernetes.
Windows Containers are not supported for this deployment.
Requirements
Before you begin, verify that you have:
- Installed kubectl >= 1.16
- Cluster Agent >= 21.3.1
- Met these requirements: Cluster Agent Requirements and Supported Environments.
- If Server Visibility is required, sufficient Server Visibility licenses based on the number of worker nodes in your cluster.
- Permissions to view servers in the Splunk AppDynamics Controller.
Installation Procedure
1. Install the Cluster Agent. This example uses Alpine Linux:
   1. Download the Cluster Agent bundle.
   2. Unzip the Cluster Agent bundle and create the appdynamics namespace:

      ```bash
      unzip appdynamics-cluster-agent-alpine-linux-<version>.zip
      kubectl create namespace appdynamics
      ```

   3. Deploy the Cluster Agent Operator using the CLI, specifying the correct Kubernetes or OpenShift version (if applicable):

      ```bash
      kubectl create -f cluster-agent-operator.yaml
      ```

      For OpenShift:

      ```bash
      kubectl create -f cluster-agent-operator-openshift.yaml
      ```

      For OpenShift on Kubernetes 1.15 or less:

      ```bash
      kubectl create -f cluster-agent-operator-openshift-1.15-or-less.yaml
      ```

      You can also install the Cluster Agent Operator from OpenShift OperatorHub in your OpenShift cluster.
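   Before continuing, you can optionally confirm that the operator pod has started (pod names are generated by Kubernetes and vary per cluster):

   ```bash
   # The Cluster Agent Operator pod should reach STATUS Running.
   kubectl -n appdynamics get pods
   ```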
2. Create a Cluster Agent secret using the Machine Agent access key to connect to the Controller. If a `cluster-agent-secret` does not exist, you must create one; see Install the Cluster Agent with the Kubernetes CLI.

   ```bash
   kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>
   ```
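   To confirm that the secret exists (an optional sanity check; `get` does not print the secret value):

   ```bash
   kubectl -n appdynamics get secret cluster-agent-secret
   ```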
3. (Optional) Create an Infrastructure Visibility secret by using the keystore credentials.
   1. Run the following command to import your CA certificate from the custom-ssl.pem file:

      ```bash
      keytool -import -alias rootCA -file custom-ssl.pem -keystore cacerts.jks -storepass <your-password>
      ```

   2. Create the keystore file secret:

      ```bash
      kubectl -n appdynamics create secret generic <cacertinfraviz> --from-file=cacerts.jks
      ```

   3. Create the keystore password secret:

      ```bash
      kubectl -n appdynamics create secret generic <kspassinfraviz> --from-literal=keystore-password="<your-password>"
      ```

   Here, `cacertinfraviz` is the name of the keystore file secret and `kspassinfraviz` is the name of the keystore password secret for Infrastructure Visibility. Specify the keystore file and password secrets in the `infraviz.yaml` file to apply the custom SSL configuration. For example:

   ```yaml
   keyStoreFileSecret: cacertinfraviz
   keystorePasswordSecret: kspassinfraviz
   ```
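   You can optionally verify that the CA certificate was imported before creating the secrets; a quick check with keytool:

   ```bash
   # Lists keystore entries; expect an entry with the alias rootCA.
   keytool -list -keystore cacerts.jks -storepass <your-password>
   ```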
4. Update the `infraviz.yaml` file to set the `controllerUrl` and `account` values based on the information from the Controller's License page.
   - To enable Server Visibility, set `enableServerViz` to `true` (shown in the `infraviz.yaml` configuration example).
   - To deploy a Machine Agent without Server Visibility enabled, set `enableServerViz` to `false`.

   infraviz.yaml Configuration File with Server Visibility Enabled

   ```yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: appdynamics-infraviz
     namespace: appdynamics
   ---
   apiVersion: cluster.appdynamics.com/v1alpha1
   kind: InfraViz
   metadata:
     name: appdynamics-infraviz
     namespace: appdynamics
   spec:
     controllerUrl: "https://mycontroller.saas.appdynamics.com"
     image: "docker.io/appdynamics/machine-agent:latest"
     account: "<your-account-name>"
     globalAccount: "<your-global-account-name>"
     enableContainerHostId: true
     enableServerViz: true
     resources:
       limits:
         cpu: 500m
         memory: "1G"
       requests:
         cpu: 200m
         memory: "800M"
   ```
   This `infraviz.yaml` configuration example deploys a daemonset that runs a single pod per node in the cluster. Each pod runs a single container from which the Machine Agent or Server Visibility Agent runs. To enable the Network Visibility Agent to run in a second container in the same pod, add the `netVizImage` and `netVizPort` keys and values as shown in this configuration file example:

   infraviz.yaml Configuration File with Second Container in a Single Pod

   ```yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: appdynamics-infraviz
     namespace: appdynamics
   ---
   apiVersion: cluster.appdynamics.com/v1alpha1
   kind: InfraViz
   metadata:
     name: appdynamics-infraviz
     namespace: appdynamics
   spec:
     controllerUrl: "https://mycontroller.saas.appdynamics.com"
     image: "docker.io/appdynamics/machine-agent:latest"
     account: "<your-account-name>"
     enableContainerHostId: true
     enableServerViz: true
     netVizImage: appdynamics/machine-agent-netviz:latest
     netVizPort: 3892
     resources:
       limits:
         cpu: 500m
         memory: "1G"
       requests:
         cpu: 200m
         memory: "800M"
   ```
5. Use `kubectl` to deploy `infraviz.yaml`. In clusters without pod security restrictions, run:

   ```bash
   kubectl create -f infraviz.yaml
   ```

   For environments where Kubernetes < 1.25 and `PodSecurityPolicies` block certain pod security context configuration, such as privileged pods, you must deploy `infraviz-pod-security-policy.yaml` before editing the `infraviz.yaml` file, and you must attach the PodSecurityPolicy to the `appdynamics-infraviz` service account explicitly:

   ```bash
   kubectl create -f infraviz-pod-security-policy.yaml
   kubectl create -f infraviz.yaml
   ```

   For environments where Kubernetes >= 1.25, PodSecurityPolicy is removed (https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes) and pod security restrictions are instead applied at the namespace level (https://kubernetes.io/docs/concepts/security/pod-security-admission/) using Pod Security Standard levels. You must therefore set the privileged level on the namespace in which the Infrastructure Visibility pod runs. Specify the following Kubernetes labels on the namespace where Infrastructure Visibility is installed:

   - `pod-security.kubernetes.io/<MODE>: <LEVEL>` (Required)
   - `pod-security.kubernetes.io/<MODE>-version: <VERSION>` (Optional)

   For more information, see https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/.

   sample-namespace.yaml

   ```yaml
   apiVersion: v1
   kind: Namespace
   metadata:
     name: appdynamics
     labels:
       pod-security.kubernetes.io/enforce: privileged
       pod-security.kubernetes.io/enforce-version: v1.27
       pod-security.kubernetes.io/audit: privileged
       pod-security.kubernetes.io/audit-version: v1.27
       pod-security.kubernetes.io/warn: privileged
       pod-security.kubernetes.io/warn-version: v1.27
   ```
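   If the appdynamics namespace already exists, you can apply the same Pod Security Standards labels in place instead of re-creating it; a minimal sketch using the enforce mode and version shown above:

   ```bash
   # --overwrite updates the labels if they were previously set.
   kubectl label --overwrite namespace appdynamics \
     pod-security.kubernetes.io/enforce=privileged \
     pod-security.kubernetes.io/enforce-version=v1.27
   ```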
   Then run the following command:

   ```bash
   kubectl create -f infraviz.yaml
   ```

   For environments where OpenShift `SecurityContextConstraints` block certain pod security context configuration, such as privileged pods, you must deploy `infraviz-security-context-constraint-openshift.yaml` before editing the `infraviz.yaml` file:

   ```bash
   kubectl create -f infraviz-security-context-constraint-openshift.yaml
   kubectl create -f infraviz.yaml
   ```
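   On OpenShift, you can optionally confirm that the SecurityContextConstraints object was created (this assumes the oc CLI is installed; the SCC name is defined in infraviz-security-context-constraint-openshift.yaml):

   ```bash
   # Look for the SCC created by the file above in the list.
   oc get scc
   ```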
6. Confirm that the `appdynamics-infraviz` pod is running and that the Machine Agent, Server Visibility Agent, and Network Agent containers are ready:

   ```bash
   kubectl -n appdynamics get pods

   NAME                         READY   STATUS    RESTARTS   AGE
   appdynamics-infraviz-shkhj   2/2     Running   0          18s
   ```

7. To verify that the agents are registering with the Controller, review the logs and confirm that the agents display in the Agents Dashboard of the Controller Administration UI. If Server Visibility is enabled, the nodes are visible under Controller > Servers.

   ```bash
   kubectl -n appdynamics logs appdynamics-infraviz-shkhj -c appd-infra-agent
   ...
   Started Machine Agent Successfully
   ```
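   Because InfraViz deploys the agents as a daemonset, you can also confirm that one pod was scheduled per eligible node; a quick check:

   ```bash
   # DESIRED and READY should match the number of eligible nodes.
   kubectl -n appdynamics get daemonsets
   kubectl -n appdynamics get pods -o wide
   ```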
InfraViz Configuration Settings
To configure Infrastructure Visibility, you can modify these parameters in the infraviz.yaml file included with the download package. After changing the file, delete and re-create the InfraViz deployment to ensure the changes are applied.
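For example, a minimal re-create cycle after editing infraviz.yaml (this assumes the InfraViz resource was originally created from this file, as in the procedure above):

```bash
kubectl delete -f infraviz.yaml
kubectl create -f infraviz.yaml
```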
| Parameter | Description | Required/Optional | Default |
|---|---|---|---|
account | Splunk AppDynamics account name | Required | N/A |
appName | Name of the cluster displayed on the Controller UI as your cluster name. This configuration groups the nodes of the cluster based on the master, worker, infra, worker-infra roles and displays them on the Metric Browser. | Optional | N/A |
args | List of command arguments | Optional | N/A |
controllerUrl | URL of the Splunk AppDynamics Controller | Required | N/A |
enableContainerd | Enable containerd visibility on the Machine Agent. Specify either true or false. | Optional | false |
enableContainerHostId | Flag that determines how container names are derived; specify either true or false. | Required | true |
enableMasters | By default, only Worker nodes are monitored. When set to true, Server Visibility is provided for Master nodes. For managed Kubernetes providers, the flag has no effect because the Master plane is not accessible. | Optional | false |
enableServerViz | Enable Server Visibility | Required | false |
enableDockerViz | Enable Docker Visibility | Required | false |
env | List of environment variables | Optional | N/A |
eventServiceUrl | Event Service Endpoint | Optional | N/A |
globalAccount | Global account name | Optional | N/A |
image | The Machine Agent image to use. The default retrieves the most recent version of the Machine Agent image. | Optional | appdynamics/machine-agent:latest |
imagePullPolicy | The image pull policy for the InfraViz pod. | Optional | |
imagePullSecret | Name of the image pull secret | Optional | N/A |
logLevel | Level of logging verbosity. Valid options are: info or debug. | Optional | info |
metricsLimit | Maximum number of metrics that the Machine Agent sends to the Controller. | Optional | N/A |
netVizImage | The Network Agent image to use. The default retrieves the most recent version of the Network Agent image. | Optional | appdynamics/machine-agent-netviz:latest |
netVizPort | When > 0, the Network Agent is deployed in a sidecar with the Machine Agent. By default, the Network Visibility Agent works with port 3892. | Optional | 3892 |
netVizSecurityContext | You can include the following parameters under securityContext for the NetViz container:<br>• fsGroup — sets the appropriate file permission on the agent artifacts. This value is applied to all the instrumented resources. Add this parameter if you require to override the default value.<br>• capabilities — default ["NET_ADMIN","NET_RAW"]. The default values are not overridden by the specified values; when you specify a value for capabilities, it is considered along with the default values.<br>• runAsNonRoot — if the value is true, the Kubelet validates the image at runtime to ensure that the container fails to start when run as root; if this parameter is not specified or the value is false, there is no validation. This parameter is currently available for Deployment and DeploymentConfig mode.<br>For certain other boolean parameters, if you do not set the parameter, the Helm chart uses the default value true. | Optional | N/A |
nodeSelector | OS specific label that identifies nodes for scheduling of the daemonset pods. | Optional | linux |
overrideVolumeMounts | The list of volume mounts. | Optional | N/A |
priorityClassName | Name of the priority class that determines pod priority when a pod needs to be evicted. | Optional | N/A |
propertyBag | String with any other Machine Agent parameters | Optional | N/A |
proxyUrl | URL of the proxy server (protocol://domain:port) | Optional | N/A |
proxyUser | Proxy user credentials (user@password) | Optional | N/A |
resources | Definitions of resources and limits for the Machine Agent | Optional | N/A |
resourcesNetViz | Set resources for the Network Visibility (NetViz) container | Optional | Preset request and limit values |
runAsUser | The UID (user ID) used to run the entry point of the container process. If you do not specify the UID, this defaults to the user ID specified in the image. If you require running as any other UID, change the UID for runAsUser without changing the group ID. This parameter is deprecated; we recommend that you use securityContext instead. | Optional | The UID specified in the image |
runAsGroup | The GID (group ID) used to run the entry point of the container process. If you do not specify the GID, the group ID specified in the image is used. This parameter is deprecated; we recommend that you use securityContext instead. | Optional | GID: 1001 (username: appdynamics) |
securityContext | You can include the following parameters under securityContext:<br>• fsGroup — sets the appropriate file permission on the agent artifacts. This value is applied to all the instrumented resources. Add this parameter if you require to override the default value.<br>• runAsNonRoot — if the value is true, the Kubelet validates the image at runtime to ensure that the container fails to start when run as root; if this parameter is not specified or the value is false, there is no validation. This parameter is currently available for Deployment and DeploymentConfig mode.<br>For certain boolean parameters, if you do not set the parameter, the Helm chart uses the default value true. | Optional | N/A |
stdoutLogging | Determines if logs are saved to a file or redirected to the Console. | Optional | false |
tolerations | List of tolerations based on the taints that are associated with nodes. | Optional | N/A |
uniqueHostId | Unique host ID in Splunk AppDynamics. | Optional | spec.nodeName |
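To illustrate how several of these settings fit together, here is a hypothetical InfraViz spec fragment combining some of the optional parameters from the table. Values are placeholders, not recommendations, and field shapes follow standard Kubernetes conventions; confirm names and types against the InfraViz CRD in your operator version.

```yaml
apiVersion: cluster.appdynamics.com/v1alpha1
kind: InfraViz
metadata:
  name: appdynamics-infraviz
  namespace: appdynamics
spec:
  controllerUrl: "https://mycontroller.saas.appdynamics.com"
  account: "<your-account-name>"
  enableServerViz: true
  enableDockerViz: false
  logLevel: "debug"                      # valid options: info, debug
  stdoutLogging: true                    # redirect logs to the console instead of a file
  propertyBag: "<other-machine-agent-parameters>"
  tolerations:                           # schedule onto tainted nodes if needed
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
```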