Install Infrastructure Visibility with the Kubernetes CLI
This page describes how to install the Machine Agent and Network Agent in a Kubernetes cluster where the Cluster Agent Operator is installed.

The Cluster Agent Operator provides a custom resource definition called `InfraViz`. You can use `InfraViz` to simplify deploying the Machine and Network Agents as a DaemonSet in a Kubernetes cluster. Alternatively, you can deploy these agents by creating a DaemonSet YAML file, which does not require the Cluster Agent Operator. For more information, see these examples.

To deploy the Analytics Agent as a DaemonSet in a Kubernetes cluster, see Install Agent-Side Components in Kubernetes.
Windows Containers are not supported for this deployment.
Requirements
Before you begin, verify that you have:
- Installed kubectl >= 1.11.3
- Installed Cluster Agent >= 21.3.1
- Met these requirements: Cluster Agent Requirements and Supported Environments.
- If Server Visibility is required, sufficient Server Visibility licenses based on the number of worker nodes in your cluster.
- Permissions to view servers in the AppDynamics Controller.
Installation Procedure

1. Install the Cluster Agent. This example uses the Alpine Linux bundle:

   Download the Cluster Agent bundle, then unzip it and create the `appdynamics` namespace:

   ```bash
   unzip appdynamics-cluster-agent-alpine-linux-<version>.zip
   kubectl create namespace appdynamics
   ```

   Deploy the Cluster Agent Operator using the CLI, specifying the file that matches your Kubernetes and OpenShift version (if applicable).

   Kubernetes:

   ```bash
   kubectl create -f cluster-agent-operator.yaml
   ```

   OpenShift:

   ```bash
   kubectl create -f cluster-agent-operator-openshift.yaml
   ```

   Kubernetes 1.14 or earlier:

   ```bash
   kubectl create -f cluster-agent-operator-1.14-or-less.yaml
   ```

   OpenShift on Kubernetes 1.14 or earlier:

   ```bash
   kubectl create -f cluster-agent-operator-openshift-1.14-or-less.yaml
   ```
2. Create a Cluster Agent secret using the Machine Agent access key to connect to the Controller. If a `cluster-agent-secret` does not exist, you must create one; see Install the Cluster Agent with the Kubernetes CLI.

   ```bash
   kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>
   ```
3. Update the `infraviz.yaml` file to set the `controllerUrl`, `account`, and `globalAccount` values based on the information from the Controller's License page.

   To enable Server Visibility, set `enableServerViz` to `true` (shown in the `infraviz.yaml` configuration example). To deploy a Machine Agent without Server Visibility enabled, set `enableServerViz` to `false`.

   infraviz.yaml Configuration File with Server Visibility Enabled:

   ```yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: appdynamics-infraviz
     namespace: appdynamics
   ---
   apiVersion: appdynamics.com/v1alpha1
   kind: InfraViz
   metadata:
     name: appd-infraviz
     namespace: appdynamics
   spec:
     controllerUrl: "https://mycontroller.saas.appdynamics.com"
     image: "docker.io/appdynamics/machine-agent-analytics:latest"
     account: "<your-account-name>"
     globalAccount: "<your-global-account-name>"
     enableServerViz: true
     resources:
       limits:
         cpu: 500m
         memory: "1G"
       requests:
         cpu: 200m
         memory: "800M"
   ```
   This `infraviz.yaml` configuration example deploys a DaemonSet that runs a single pod per node in the cluster. Each pod runs a single container in which the Machine Agent or Server Visibility Agent runs.

   To enable the Network Visibility Agent to run in a second container in the same pod, add the `netVizImage` and `netVizPort` keys and values as shown in this configuration file example:

   infraviz.yaml Configuration File with a Second Container in a Single Pod:

   ```yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: appdynamics-infraviz
     namespace: appdynamics
   ---
   apiVersion: appdynamics.com/v1alpha1
   kind: InfraViz
   metadata:
     name: appd-infraviz
     namespace: appdynamics
   spec:
     controllerUrl: "https://mycontroller.saas.appdynamics.com"
     image: "docker.io/appdynamics/machine-agent-analytics:latest"
     account: "<your-account-name>"
     globalAccount: "<your-global-account-name>"
     enableServerViz: true
     netVizImage: appdynamics/machine-agent-netviz:latest
     netVizPort: 3892
     resources:
       limits:
         cpu: 500m
         memory: "1G"
       requests:
         cpu: 200m
         memory: "800M"
   ```
4. Use `kubectl` to deploy `infraviz.yaml`.

   For environments where Kubernetes `PodSecurityPolicies` block certain pod security context configurations, such as privileged pods, you must deploy `infraviz-pod-security-policy.yaml` before deploying `infraviz.yaml`. For environments where OpenShift `SecurityContextConstraints` block such configurations, you must deploy `infraviz-security-context-constraint-openshift.yaml` before deploying `infraviz.yaml`.

   Standard deployment:

   ```bash
   kubectl create -f infraviz.yaml
   ```

   With a PodSecurityPolicy:

   ```bash
   kubectl create -f infraviz-pod-security-policy.yaml
   kubectl create -f infraviz.yaml
   ```

   OpenShift with a SecurityContextConstraint:

   ```bash
   kubectl create -f infraviz-security-context-constraint-openshift.yaml
   kubectl create -f infraviz.yaml
   ```
5. Confirm that the `appd-infraviz` pod is running, and that the Machine Agent, Server Visibility Agent, and Network Agent containers are ready:

   ```bash
   kubectl -n appdynamics get pods

   NAME                  READY   STATUS    RESTARTS   AGE
   appd-infraviz-shkhj   2/2     Running   0          18s
   ```

6. To verify that the agents are registering with the Controller, review the logs and confirm that the agents display in the Agents Dashboard of the Controller Administration UI. In the Controller, if Server Visibility is enabled, the nodes are visible under Controller > Servers.

   ```bash
   kubectl -n appdynamics logs appd-infraviz-shkhj -c appd-infra-agent
   ...
   Started AppDynamics Machine Agent Successfully
   ```
InfraViz Configuration Settings

To configure Infrastructure Visibility, you can modify these parameters in the `infraviz.yaml` file included with the download package. After changing the file, delete and re-create the `InfraViz` deployment to ensure the changes are applied.
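A minimal sketch of the delete-and-re-create cycle, assuming the `infraviz.yaml` file and `appdynamics` namespace from the installation steps above:

```shell
# Remove the existing InfraViz deployment defined in the file
kubectl -n appdynamics delete -f infraviz.yaml

# Re-create it so the updated configuration takes effect
kubectl -n appdynamics create -f infraviz.yaml
```

Deleting by file (`-f infraviz.yaml`) removes exactly the resources that file created, so you do not need to know the generated resource names.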
| Parameter | Description | Required/Optional | Default |
|---|---|---|---|
| `account` | AppDynamics account name | Required | N/A |
| `args` | List of command arguments | Optional | N/A |
| `controllerUrl` | URL of the AppDynamics Controller | Required | N/A |
| `enableContainerHostId` | Flag that determines how container names are derived; specify either `true` or `false`. | Optional | `true` |
| `enableMasters` | By default, only worker nodes are monitored. When set to `true`, Server Visibility is provided for master nodes. For managed Kubernetes providers, the flag has no effect because the master plane is not accessible. | Optional | `false` |
| `enableServerViz` | Enables Server Visibility. | Optional | `true` |
| `env` | List of environment variables | Optional | N/A |
| `eventServiceUrl` | Events Service endpoint | Optional | N/A |
| `globalAccount` | Global account name | Required | N/A |
| `image` | Retrieves the most recent version of the Machine Agent image. | Optional | `appdynamics/machine-agent-analytics:latest` |
| `imagePullSecret` | Name of the image pull secret | Optional | N/A |
| `logLevel` | Level of logging verbosity. Valid options are `info` or `debug`. | Optional | `info` |
| `metricsLimit` | Maximum number of metrics that the Machine Agent sends to the Controller. | Optional | N/A |
| `netVizImage` | Retrieves the most recent version of the Network Agent image. | Optional | `appdynamics/machine-agent-netviz:latest` |
| `netVizPort` | When > 0, the Network Agent is deployed in a sidecar with the Machine Agent. By default, the Network Visibility Agent works with port `3892`. | Optional | `3892` |
| `nodeSelector` | OS-specific label that identifies nodes for scheduling of the DaemonSet pods. | Optional | `linux` |
| `priorityClassName` | Name of the priority class that determines priority when a pod needs to be evicted. | Optional | N/A |
| `propertyBag` | String with any other Machine Agent parameters | Optional | N/A |
| `proxyUrl` | URL of the proxy server (`protocol://domain:port`) | Optional | N/A |
| `proxyUser` | Proxy user credentials (`user@password`) | Optional | N/A |
| `resources` | Definitions of resources and limits for the Machine Agent | Optional | N/A |
| `resourcesNetViz` | Sets resources for the Network Visibility (NetViz) container | Optional | Request / Limit |
| `stdoutLogging` | Determines whether logs are saved to a file or redirected to the console. | Optional | `false` |
| `tolerations` | List of tolerations based on the taints that are associated with nodes. | Optional | N/A |
| `uniqueHostId` | Unique host ID in AppDynamics. Valid options are `spec.nodeName` or `status.hostIP`. | Optional | `spec.nodeName` |
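To illustrate how several of the optional parameters combine, the fragment below extends the earlier `InfraViz` spec. This is a sketch, not a recommended configuration: the toleration, the NetViz resource values, and the node-selector label shown here are hypothetical examples you would replace with values appropriate to your cluster.

```yaml
apiVersion: appdynamics.com/v1alpha1
kind: InfraViz
metadata:
  name: appd-infraviz
  namespace: appdynamics
spec:
  controllerUrl: "https://mycontroller.saas.appdynamics.com"
  account: "<your-account-name>"
  globalAccount: "<your-global-account-name>"
  enableServerViz: true
  # Monitor master nodes too (no effect on managed Kubernetes providers)
  enableMasters: true
  # Hypothetical toleration: also schedule pods on control-plane nodes
  # that carry this common taint
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
  # Hypothetical resources for the NetViz sidecar container
  resourcesNetViz:
    requests:
      cpu: 100m
      memory: 150Mi
    limits:
      cpu: 200m
      memory: 300Mi
  # Redirect agent logs to the console instead of a file
  stdoutLogging: true
```

Because `InfraViz` is a custom resource, any parameter not set in the spec falls back to the defaults listed in the table above.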