Auto-Instrument Applications with the Cluster Agent
This page describes how to auto-instrument Kubernetes workloads running in a cluster where the Cluster Agent is deployed. See Install the Cluster Agent.
For instrumentation options, see Container Installation Options.
AppDynamics recommends using auto-instrumentation to simplify operations.
With auto-instrumentation, you can dynamically add an App Server Agent to workloads for these application types:
- Java
- .NET Core on Linux
- Node.js
These Kubernetes workloads are supported: Deployments, DeploymentConfigs, and StatefulSets.
Auto-Instrumentation Overview
To enable auto-instrumentation, you either:

- Add configuration to the cluster-agent.yaml file, or
- Add configuration to the Cluster Agent Helm Chart values.yaml file. See Install the Cluster Agent with Helm Charts.

Then, you can apply the changes using kubectl or upgrade the Cluster Agent Helm Chart.
When the Cluster Agent detects a supported workload, and the workload matches the configured auto-instrumentation rules, the Cluster Agent modifies the workload's spec using the Kubernetes API. The Cluster Agent attaches an init container with the AppDynamics .NET Core, Node.js, or Java Agent image to the workload. When the application restarts, the required agent is copied into the application container. As a result, the application container references the AppDynamics agent (Node.js Agent, .NET Core on Linux Agent, or Java Agent) in an auto-instrumented application.
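For illustration, an auto-instrumented Java workload might end up with a spec similar to the following sketch. All names here (init container, volume, and agent path) are hypothetical; the values the Cluster Agent actually generates vary by agent type and version.

spec:
  template:
    spec:
      initContainers:
        - name: appd-agent-attach-java            # hypothetical init container added by the Cluster Agent
          image: docker.io/appdynamics/java-agent:21.3.0
          command: ["cp", "-r", "/opt/appdynamics/.", "/opt/appdynamics-java"]
          volumeMounts:
            - name: appd-agent-repo               # hypothetical shared volume
              mountPath: /opt/appdynamics-java
      containers:
        - name: account-service
          env:
            - name: JAVA_TOOL_OPTIONS             # default environment variable for Java; see defaultEnv
              value: "-javaagent:/opt/appdynamics-java/javaagent.jar"
          volumeMounts:
            - name: appd-agent-repo
              mountPath: /opt/appdynamics-java
      volumes:
        - name: appd-agent-repo
          emptyDir: {}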
Requirements
AppDynamics Requirements
- Cluster Agent >= 20.5. See Cluster Agent Requirements and Supported Environments.
- The latest Cluster Agent and Operator versions are installed in the cluster. The cluster-agent-operator.yaml file sets up the permissions required by the Cluster Agent to perform auto-instrumentation. See Install the Cluster Agent.
- At least one application is deployed to the cluster that was not previously instrumented with the required AppDynamics agent.
- A Controller with sufficient agent licenses, based on the number of applications that will be auto-instrumented.
- Sufficient cluster capacity to process pod restarts. See Minimize the Impact of Pod Restarts.
- A unique app, tier, and node name tuple across all Kubernetes instances. If the tuple is not unique, the nodes may not report properly. This applies to agents that require app, tier, and node uniqueness, such as the Java and Node.js Agents.
Language-Specific Requirements
- Node.js >= 8.6
- Java
  - Java applications must support including the -javaagent argument in the Java command using an environment variable. By default, the Cluster Agent uses JAVA_TOOL_OPTIONS; however, you can change this using the defaultEnv property.
  - Based on Java Agent resource requirements, you may need to adjust the configured memory requests or limits for the application pods. See Install the Java Agent.
- .NET Core on Linux and Node.js
  Ensure that the application base image Operating System (OS) matches the App Server Agent base image OS (Linux versus Alpine). For example, if your .NET Core on Linux application uses an Ubuntu base image, then you must set the imageInfo.image tag to the Linux version. In this example, the image tag is 20.11.0-linux. If the application used an Alpine Linux base image, then the tag would be 20.11.0-alpine. See the AppDynamics Docker Hub page.

  apiVersion: cluster.appdynamics.com/v1alpha1
  kind: Clusteragent
  metadata:
    name: k8s-cluster-agent
    namespace: appdynamics
  spec:
    # content removed for brevity
    instrumentationRules:
      - namespaceRegex: dev
        language: dotnetcore
        appName: MyDotNetAppOnUbuntu
        imageInfo:
          image: "docker.io/appdynamics/dotnet-core-agent:20.11.0-linux"
          agentMountPath: /opt/appdynamics

  If the base image Operating Systems do not match, then the App Server Agent may not start. See Validate Auto-Instrumentation. The Node.js Agent also has specific Node.js runtime requirements that may prevent the agent from starting. See Node.js Supported Environments.
- For .NET Core and Node.js applications that communicate with an on-premises Controller, see Use Auto-Instrumentation with an On-Premises Controller.
- If transaction analytics data is required, then you must configure the Cluster Agent analyticsHost and analyticsPort properties, as sketched below this list.
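For example, the two analytics properties sit at the spec level of the Clusteragent resource. This is a hedged sketch: the host and port values are placeholders for your own Analytics Agent endpoint, not defaults confirmed by this page.

apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
spec:
  # content removed for brevity
  analyticsHost: <analytics-agent-host>   # placeholder: your Analytics Agent host
  analyticsPort: 9090                     # placeholder: your Analytics Agent port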
Enable Auto-Instrumentation for the Cluster Agent
To set up auto-instrumentation for the Cluster Agent:

First, remove any deleted pods from the Controller Tiers & Nodes Dashboard. Then, re-create the cluster-agent-secret that was created in Install the Cluster Agent to include api-user. Set the api-user value to a local Controller user with the Administrator role:

kubectl -n appdynamics delete secret cluster-agent-secret
kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key> --from-literal=api-user="<username>@<customer>:<password>"

The Cluster Agent uses the api-user to mark the associated node in the Controller as historical upon pod deletion.
Next, add auto-instrumentation configuration to the cluster-agent.yaml file or the Helm values.yaml file. The configuration determines which Deployments, DeploymentConfigs, and StatefulSets workloads to target for auto-instrumentation, and which agent types and versions to use. See Auto-Instrumentation Configuration.

After you save the configuration, apply or upgrade the Cluster Agent deployment. The related pods and containers restart based on the deployment rollout strategy associated with the applications.

kubectl apply -f cluster-agent.yaml

helm upgrade -f ./ca1-values.yaml "<my-cluster-agent-helm-release>" appdynamics-charts/cluster-agent --namespace appdynamics

To validate and troubleshoot auto-instrumentation, see Validate the Cluster Agent Installation.
- If a workload does not match the properties defined in instrumentationRules, then auto-instrumentation is not enabled.
- If an auto-instrumentation property is not defined as a default, or in instrumentationRules, then the Cluster Agent uses the corresponding default value specified in Auto-Instrumentation Configuration. If there is no corresponding default value, then auto-instrumentation is not enabled.
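For instance, this minimal sketch illustrates that precedence using the application name properties described later on this page: a rule-level value wins, and rules without one fall back to the spec-level default.

spec:
  # content removed for brevity
  defaultAppName: Ecommerce        # spec-level default
  instrumentationRules:
    - namespaceRegex: books
      appName: BookStore           # rule-level value overrides the default
    - namespaceRegex: groceries    # no appName; falls back to Ecommerce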
Configuration Examples
Example 1 targets Java applications in the namespaces that match the ecom.* pattern. Each matching application is instrumented with a 20.20.1 Java Agent and reports to the Ecommerce application in the AppDynamics Controller. By default, the tier name is the name of the Kubernetes workload, but you can override it by setting the tierName property (see the sketch after Example 1).
Example 1: cluster-agent-auto-1.yaml
apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "<app-name>"
  controllerUrl: "<protocol>://<appdynamics-controller-host>:8080"
  account: "<account-name>"
  image: "docker.io/appdynamics/cluster-agent:20.12.1"
  serviceAccountName: appdynamics-cluster-agent
  nsToMonitorRegex: ecom.*
  #
  # auto-instrumentation config
  #
  instrumentationMethod: Env
  nsToInstrumentRegex: ecom.*
  defaultAppName: Ecommerce
  instrumentationRules:
    - language: java
      imageInfo:
        image: docker.io/appdynamics/java-agent:20.20.1
        agentMountPath: /opt/appdynamics
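As a sketch of the tierName override mentioned above, a rule can direct every matching workload to a single tier instead of one tier per workload. The tier name here is hypothetical.

instrumentationRules:
  - language: java
    tierName: ecom-java-services   # hypothetical: all matching workloads report to this tier
    imageInfo:
      image: docker.io/appdynamics/java-agent:20.20.1
      agentMountPath: /opt/appdynamics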
Example 2 targets namespaces that contain Java and .NET Core on Linux applications, and incorporates these advanced configurations:

- Uses multiple instrumentationRules to target Java applications versus .NET Core on Linux applications.
- Uses the labelMatch strategy to determine the agent type and associated agent image based on the value of the framework label in the workload specs auto-instrumented-dotnet-app.yaml and auto-instrumented-java-app.yaml below.
- Rather than assigning a Controller application name in the YAML file, the configuration uses appNameStrategy: label to assign an application name based on a label from the workload spec.
- For the Java applications, it uses instrumentContainer: select and containerMatchString: .*service to instruct the Cluster Agent to auto-instrument only the application service container and ignore any other defined containers.
Example 2: cluster-agent-auto-2.yaml
apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "<app-name>"
  controllerUrl: "<protocol>://<appdynamics-controller-host>:8080"
  account: "<account-name>"
  image: "docker.io/appdynamics/cluster-agent:20.12.1"
  serviceAccountName: appdynamics-cluster-agent
  nsToMonitorRegex: stage
  #
  # auto-instrumentation config
  #
  instrumentationMethod: Env
  nsToInstrumentRegex: stage
  appNameStrategy: label
  instrumentationRules:
    - namespaceRegex: stage
      language: dotnetcore
      labelMatch:
        - framework: dotnetcore
      appNameLabel: appName
      imageInfo:
        image: "docker.io/appdynamics/dotnet-core-agent:20.11.0-linux"
        agentMountPath: /opt/appdynamics
    - namespaceRegex: stage
      language: java
      labelMatch:
        - framework: java
      appNameLabel: appName
      instrumentContainer: select
      containerMatchString: .*service
      imageInfo:
        image: "docker.io/appdynamics/java-agent:21.3.0"
        agentMountPath: /opt/appdynamics
Examples 3 and 4 show Deployment specs for .NET and Java services that define the appName and framework labels, based on the auto-instrumentation configuration from cluster-agent-auto-2.yaml:
.NET
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnet-profile-service
  labels:
    appName: backend-services
    framework: dotnetcore
spec:
  # ...
  template:
    # ...
    spec:
      containers:
        - image: myrepo/profile-service:v2
          name: profile-service
      # ...
Java
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-account-service
  labels:
    appName: backend-services
    framework: java
spec:
  # ...
  template:
    # ...
    spec:
      containers:
        - image: myrepo/account-service:v2
          name: account-service
        - image: myrepo/proxy-util:v1
          name: proxy-util
      # ...
The value of containerMatchString in cluster-agent-auto-2.yaml indicates that only the account-service container will be auto-instrumented in auto-instrumented-java-app.yaml.
For additional configuration examples, see Auto-Instrumentation Configuration Examples.
AppDynamics Application Name Strategies
The Controller's Application Dashboard provides three application name strategies. Select a strategy by assigning the appNameStrategy property to one of these values:

- manual: Use the defaultAppName or appName parameters in the cluster-agent.yaml file to set the application name.
- label: Use a label from the workload's spec as the application name.
- namespace: Use the Kubernetes namespace as the application name.
Manual Strategy
By default, the appNameStrategy is manual, which uses the defaultAppName or appName parameter to set the application name.

- If defaultAppName is provided, then it is used (unless overridden in an instrumentation rule).
- If appName is provided in an instrumentation rule, then it is used.

For example, in this spec, ECommerce is the default application name applied to the ecom and groceries namespaces, and BookStore is the application name applied to the books namespace.
apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "<cluster-name>"
  # ...
  # auto-instrumentation config
  instrumentationMethod: Env
  nsToInstrumentRegex: ecom|books|groceries
  appNameStrategy: manual
  defaultAppName: ECommerce
  instrumentationRules:
    - namespaceRegex: books
      appName: BookStore
Label Strategy
This option uses a label from the workload spec as the application name. To use the label strategy, specify a value in the appNameLabel parameter. The appNameLabel value refers to a label specified in the workload spec.

- If spec.appNameLabel is specified, then the spec-level value is used.
- If appNameLabel is specified in an instrumentation rule, then that value is used for matching workloads instead of the spec-level appNameLabel.
- If the appNameLabel mentioned in the instrumentation rule is not found in the deployment spec, then the spec-level appNameLabel value is used.

In the following example, appNameLabel: app is used in the instrumentation rule, but suppose a deployment spec does not have the label app and instead has a label appname with the value eCommerce. In that case, the spec-level appNameLabel applies, and the Controller displays the data under the eCommerce application.

In the following spec, the workload spec label appname is used to set the application name in the ecom and groceries namespaces, and the label app is used in the books namespace.
apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "<cluster-name>"
  # ...
  # auto-instrumentation config
  instrumentationMethod: Env
  nsToInstrumentRegex: ecom|books|groceries
  appNameStrategy: label
  appNameLabel: appname
  instrumentationRules:
    - namespaceRegex: books
      appNameLabel: app
An application deployed to the ecom or groceries namespace that sets the label appname (shown in this Deployment spec snippet) reports to the eCommerce application in the Controller's Application Dashboard.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecom-app
  labels:
    appname: eCommerce
spec:
  # ...
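By the same logic, a workload in the books namespace would set the label app, which the books instrumentation rule reads through appNameLabel: app. The workload name and label value in this sketch are illustrative.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: books-app              # hypothetical workload in the books namespace
  namespace: books
  labels:
    app: BookStore             # read by the rule-level appNameLabel: app
spec:
  # ...

If this workload omitted the app label, the spec-level appNameLabel: appname would apply instead, as described above.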
Namespace Strategy
This option uses the name of the Kubernetes namespace where an application is deployed as the application name in the Controller's Application Dashboard.

In this spec, each application in the ecom, books, and groceries namespaces uses the name of the namespace it is deployed to as its application name.
apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "<cluster-name>"
  # ...
  # auto-instrumentation config
  instrumentationMethod: Env
  nsToInstrumentRegex: ecom|books|groceries
  appNameStrategy: namespace
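With this configuration, for example, a workload deployed to the books namespace reports to an application named books in the Controller's Application Dashboard.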
Minimize the Impact of Pod Restarts
When auto-instrumentation is enabled, the related pods restart based on the deployment rollout strategy associated with the workload. Pod restarts often create CPU and memory usage spikes that may adversely impact performance or exhaust available capacity. To accommodate pod restarts, you may need to increase the memory and CPU quotas associated with the impacted namespaces. To reduce the impact of restarting a large number of pods, the Cluster Agent allows only two concurrent auto-instrumentation tasks by default; subsequent workloads (resourcesToInstrument) are auto-instrumented after the rollout of an instrumented workload completes. You can configure the numberOfTaskWorkers parameter to specify the number of concurrent auto-instrumentation tasks based on your cluster's requirements, as sketched below.
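A minimal sketch of this parameter at the spec level of the Clusteragent resource; the value of 4 is only an example, not a recommendation:

apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
spec:
  # content removed for brevity
  numberOfTaskWorkers: 4   # example: allow four concurrent auto-instrumentation tasks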