This page describes deployment options for Transaction Analytics and Log Analytics in Kubernetes applications instrumented with Splunk AppDynamics app server agents.

Transaction Analytics (except for the Java and .NET Agent versions noted below) and Log Analytics require that an Analytics Agent be deployed alongside the app server agent.

For Transaction Analytics, the Java Agent >= 4.5.15 and the .NET Agent >= 20.10 support "agentless" analytics, which does not require a deployed Analytics Agent. See Deploy Analytics Without the Analytics Agent.

Transaction Analytics

The Analytics Agent acts as a proxy between the app server agent and the Events Service. See Deploy Analytics With the Analytics Agent.

There are two deployment options for the Analytics Agent to support Transaction Analytics on a Kubernetes application.

  1. A sidecar to the application container. 
     Transaction Analytics Side Car Diagram
    In this model, an Analytics Agent container is added to each application pod and will start/stop with the application container.
  2. A shared agent, where a single Analytics Agent is deployed on each Kubernetes worker node. Each pod on the node uses that Analytics Agent to communicate with the Events Service.
     Log Analytics Side Car Diagram
     In this model, the Analytics Agent is deployed as a DaemonSet.

Log Analytics

Once deployed, the Analytics Agent has access to the application's logs and can send log data to the Events Service.

There are three deployment options for the Analytics Agent to support Log Analytics on a Kubernetes application.

  1. A sidecar to the application container.

    Log Analytics STDOUT Diagram
    In this model, an Analytics Agent container is added to each application pod and will start/stop with the application container. The Analytics Agent and application container are configured to share a volume where the application logs are written.
  2. If the application bypasses the container filesystem and emits log data to STDOUT and STDERR, the Analytics Agent can be deployed on each Kubernetes worker node. The Analytics Agent can then access the log output of every application container on the worker node's filesystem, stored by Kubernetes under /var/log/containers as a unique file per container (a sketch of this DaemonSet appears at the end of this page).

    Log Analytics STDOUT Diagram

    In this model, the Analytics Agent is deployed as a DaemonSet.

    For some Kubernetes distributions such as OpenShift, the Analytics Agent will require elevated permissions to access the files under /var/log/containers.

  3. If a syslog provider is available in the Kubernetes cluster, the Analytics Agent can be deployed to receive syslog messages with TCP transport. A single Analytics Agent instance is required per syslog provider. See Collect Log Analytics Data from Syslog Messages.

For Transaction and Log Analytics, the sidecar approach is simpler to deploy, but consumes more cluster resources because it requires one additional container per application pod. The shared agent approach adds another deployment object to manage, but can significantly reduce the overall resource consumption for a cluster.

Example Configurations to Deploy the Analytics Agent

The option to import root certificates (configured through APPDYNAMICS_EVENTS_CERTIFICATE_PATH in the examples below) is supported only on Windows.


The following deployment specs are specific examples of how to implement the deployment options explained above.

Transaction Analytics: Deployment Spec Using a Sidecar or Pod

The following deployment spec defines two containers: the application container flight-services, which uses an image instrumented with an app server agent, and the Analytics Agent container appd-analytics-agent, which uses the Analytics Agent image from Docker Hub.

The appd-analytics-agent container leverages a ConfigMap and Secret to configure the Events Service credentials required by the Analytics Agent, including the account access key and global account name. See Install Agent-Side Components.

As a sidecar, the Analytics Agent is available at localhost and uses the default port 9090. The app server agent will connect automatically and no additional configuration is required.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flight-services
spec:
  selector:
    matchLabels:
      name: flight-services
  replicas: 1
  template:
    metadata:
      labels:
        name: flight-services
    spec:
      containers:
      - name: flight-services
        image: <flight-services-docker-image>
        imagePullPolicy: IfNotPresent
        envFrom:
          - configMapRef:
              name: controller-info
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        - name: APPDYNAMICS_AGENT_TIER_NAME
          value: flight-services
        ports:
        - containerPort: 8080
          protocol: TCP
      - name: appd-analytics-agent
        envFrom:
        - configMapRef:
            name: controller-info
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        - name: APPDYNAMICS_EVENTS_API_URL
          valueFrom:
            configMapKeyRef:
              key: EVENT_ENDPOINT
              name: controller-info
        - name: EVENT_ENDPOINT
          value: <events-service-endpoint>
        - name: APPDYNAMICS_EVENTS_CERTIFICATE_PATH
          value: <path-of-root-certificate-in-container>   # for example, /certs/root_ca.crt (the certificate must be mounted into the container)
        - name: APPDYNAMICS_AGENT_GLOBAL_ACCOUNT_NAME
          valueFrom:
            configMapKeyRef:
              key: FULL_ACCOUNT_NAME
              name: controller-info
        image: docker.io/appdynamics/analytics-agent:24.10.0-595-debian
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 900M
          requests:
            cpu: 100m
            memory: 600M
...

The controller-info ConfigMap is defined in the Controller Info YAML File. The command to create the appd-secret Secret is described in Secret.
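
For reference, here is a minimal sketch of what the controller-info ConfigMap and the appd-secret Secret could contain, limited to the keys used on this page. The linked pages remain the authoritative definitions; the APPDYNAMICS_CONTROLLER_* and APPDYNAMICS_AGENT_* keys shown here are an assumption about what the ConfigMap supplies through envFrom, and all values are placeholders.

apiVersion: v1
kind: ConfigMap
metadata:
  name: controller-info
data:
  # Controller connection settings consumed via envFrom (assumed keys).
  APPDYNAMICS_CONTROLLER_HOST_NAME: <controller-host-name>
  APPDYNAMICS_CONTROLLER_PORT: "<controller-port>"
  APPDYNAMICS_CONTROLLER_SSL_ENABLED: "false"
  APPDYNAMICS_AGENT_ACCOUNT_NAME: <account-name>
  APPDYNAMICS_AGENT_APPLICATION_NAME: <application-name>
  # Keys referenced explicitly by configMapKeyRef in the specs on this page.
  EVENT_ENDPOINT: <events-service-endpoint>
  FULL_ACCOUNT_NAME: <global-account-name>

The appd-secret Secret can be created with a command similar to:

kubectl create secret generic appd-secret --from-literal=appd-key=<account-access-key>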

The following deployment spec is for the same flight-services application, but instead of using a sidecar, it references a shared Analytics Agent deployed separately as a DaemonSet. The flight-services container sets the agent environment variables APPDYNAMICS_ANALYTICS_HOST and APPDYNAMICS_ANALYTICS_PORT to point at the analytics-proxy Service that exposes the shared Analytics Agent; a sketch of that DaemonSet and Service follows this example.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flight-services
spec:
  selector:
    matchLabels:
      name: flight-services
  replicas: 1
  template:
    metadata:
      labels:
        name: flight-services
    spec:
      containers:
      - name: flight-services
        image: <flight-services-docker-image>
        imagePullPolicy: IfNotPresent
        envFrom:
          - configMapRef:
              name: controller-info
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        - name: APPDYNAMICS_AGENT_TIER_NAME
          value: flight-services
        - name: APPDYNAMICS_ANALYTICS_HOST
          value: analytics-proxy
        - name: EVENT_ENDPOINT
          value: <events-service-endpoint>
        - name: APPDYNAMICS_EVENTS_CERTIFICATE_PATH
          value: <path-of-root-certificate-in-container>   # for example, /certs/root_ca.crt
        - name: APPDYNAMICS_ANALYTICS_PORT
          value: "9090"
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: certs-volume
          mountPath: /certs       # location in the container where the root certificate is accessible
      restartPolicy: Always
      volumes:
      - name: certs-volume
        hostPath:
          path: <path-on-host-where-certificate-resides>   # formatted for the host OS, for example, /mnt/c/User/NewUser/certs
          type: Directory
...
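
The shared Analytics Agent itself is not shown in the spec above. The following is a minimal sketch of what it could look like, assuming the same controller-info ConfigMap, appd-secret Secret, and Analytics Agent image as in the sidecar example; the DaemonSet name and labels are illustrative, and only the Service name analytics-proxy must match the APPDYNAMICS_ANALYTICS_HOST value set by the application.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: appd-analytics-agent
spec:
  selector:
    matchLabels:
      name: appd-analytics-agent
  template:
    metadata:
      labels:
        name: appd-analytics-agent
    spec:
      containers:
      - name: appd-analytics-agent
        image: docker.io/appdynamics/analytics-agent:24.10.0-595-debian
        imagePullPolicy: IfNotPresent
        envFrom:
        - configMapRef:
            name: controller-info
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        - name: APPDYNAMICS_EVENTS_API_URL
          valueFrom:
            configMapKeyRef:
              key: EVENT_ENDPOINT
              name: controller-info
        - name: APPDYNAMICS_AGENT_GLOBAL_ACCOUNT_NAME
          valueFrom:
            configMapKeyRef:
              key: FULL_ACCOUNT_NAME
              name: controller-info
        ports:
        - containerPort: 9090
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  # Name referenced by APPDYNAMICS_ANALYTICS_HOST in the application deployment.
  name: analytics-proxy
spec:
  selector:
    name: appd-analytics-agent
  ports:
  - protocol: TCP
    port: 9090
    targetPort: 9090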


Log Analytics: Deployment Spec Using a Sidecar or Pod

The following deployment specification snippet is for a Java application that includes two containers:

  • travelapp application container
  • analyticsagent container, which serves as a sidecar

These containers share a volume named shared-storage. The Analytics Agent container mounts this volume at the /opt/appdynamics/app-logs path, the Java application is configured to write its logs to the shared volume, and the Analytics Agent reads the logs from this path and sends them to the Events Service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-container-deployment
spec:
  selector:
    matchLabels:
      app: two-container-app
  template:
    metadata:
      labels:
        app: two-container-app
    spec:
      containers:
      - name: analyticsagent
        image: docker.io/appdynamics/analytics-agent:24.10.0-595-debian
        imagePullPolicy: Never
        ports:
        - containerPort: 9090
        env:
        - name: APPDYNAMICS_CONTROLLER_HOST_NAME
          value: <controller-host-name>
        - name: APPDYNAMICS_CONTROLLER_SSL_ENABLED
          value: "false"
        - name: APPDYNAMICS_CONTROLLER_PORT
          value: "<controller-port>"
        - name: APPDYNAMICS_AGENT_ACCOUNT_NAME
          value: <account-name>
        - name: APPDYNAMICS_AGENT_APPLICATION_NAME
          value: <application-name>
        - name: APPDYNAMICS_AGENT_GLOBAL_ACCOUNT_NAME
          value: <global-account-name>
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          value: <account-access-key>
        - name: EVENT_ENDPOINT
          value: <events-service-endpoint>
        - name: APPDYNAMICS_EVENTS_CERTIFICATE_PATH
          value: <path-of-root-certificate-in-container>   # for example, /certs/root_ca.crt (the certificate must be mounted into the container)
        volumeMounts:
        - mountPath: /opt/appdynamics/app-logs
          name: shared-storage
      - name: travelapp
        image: travel-debian:latest
        imagePullPolicy: Never
        env:
        - name: APPDYNAMICS_CONTROLLER_HOST_NAME
          value: <controller-host-name>
        - name: APPDYNAMICS_CONTROLLER_SSL_ENABLED
          value: "false"
        - name: APPDYNAMICS_CONTROLLER_PORT
          value: "<controller-port>"
        - name: APPDYNAMICS_AGENT_ACCOUNT_NAME
          value: <account-name>
        - name: APPDYNAMICS_AGENT_APPLICATION_NAME
          value: <application-name>
        - name: APPDYNAMICS_AGENT_GLOBAL_ACCOUNT_NAME
          value: <global-account-name>
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          value: <account-access-key>
        - name: EVENT_ENDPOINT
          value: <events-service-endpoint>
        volumeMounts:
        - mountPath: /opt/appdynamics/analytics-demo/appAgent/ver24.10.0-595/logs/Travel_one-pod-two-cntr-Node
          name: shared-storage
      volumes:
        - name: shared-storage
          emptyDir: {}

---
apiVersion: v1
kind: Service
metadata:
  name: service-analytics-agent
spec:
  selector:
    app: two-container-app
  ports:
  - protocol: TCP
    port: 9090
    targetPort: 9090
  type: NodePort


The following pod spec deploys the Analytics Agent in its own pod, rather than as a sidecar. The Analytics Agent reads application logs from a shared PersistentVolumeClaim (shared-data-pvc) mounted at /opt/appdynamics/app-logs and sends the log data to the Events Service.

apiVersion: v1
kind: Pod
metadata:
  name: agent-pod
  namespace: ns1
  labels:
    app: agent-container
spec:
  containers:
    - name: aa-container
      image: docker.io/appdynamics/analytics-agent:24.10.0-595-debian
      imagePullPolicy: Never
      ports:
        - containerPort: 9090
      env:
        - name: APPDYNAMICS_CONTROLLER_HOST_NAME
          value: <controller-host-name>
        - name: APPDYNAMICS_CONTROLLER_SSL_ENABLED
          value: "false"
        - name: APPDYNAMICS_CONTROLLER_PORT
          value: "<controller-port>"
        - name: APPDYNAMICS_AGENT_ACCOUNT_NAME
          value: <account-name>
        - name: APPDYNAMICS_AGENT_APPLICATION_NAME
          value: <application-name>
        - name: APPDYNAMICS_AGENT_GLOBAL_ACCOUNT_NAME
          value: <global-account-name>
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          value: <account-access-key>
        - name: EVENT_ENDPOINT
          value: <events-service-endpoint>
        - name: APPDYNAMICS_EVENTS_CERTIFICATE_PATH
          value: <path-of-root-certificate-in-container>   # for example, /certs/root_ca.crt
      volumeMounts:
        - name: shared-volume
          mountPath: /opt/appdynamics/app-logs
        - name: certs-volume
          mountPath: /certs       # location in the container where the root certificate is accessible
  volumes:
    - name: shared-volume
      persistentVolumeClaim:
        claimName: shared-data-pvc
    - name: certs-volume
      hostPath:
        path: <path-on-host-where-certificate-resides>   # formatted for the host OS, for example, /mnt/c/User/NewUser/certs
        type: Directory

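The DaemonSet option for Log Analytics (applications that write log data to STDOUT and STDERR, option 2 above) is not covered by the examples on this page. The following is a minimal sketch under the same assumptions as the previous examples (controller-info ConfigMap, appd-secret Secret, and the Analytics Agent image from Docker Hub); the object names are illustrative. The agent reads the per-container log files that Kubernetes keeps under /var/log/containers; because those entries are symlinks into /var/log/pods, both host paths are mounted read-only. On some distributions, such as OpenShift, the pod also requires elevated permissions (for example, a privileged security context) to read these files.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: appd-log-analytics-agent
spec:
  selector:
    matchLabels:
      name: appd-log-analytics-agent
  template:
    metadata:
      labels:
        name: appd-log-analytics-agent
    spec:
      containers:
      - name: appd-analytics-agent
        image: docker.io/appdynamics/analytics-agent:24.10.0-595-debian
        imagePullPolicy: IfNotPresent
        envFrom:
        - configMapRef:
            name: controller-info
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        - name: APPDYNAMICS_EVENTS_API_URL
          valueFrom:
            configMapKeyRef:
              key: EVENT_ENDPOINT
              name: controller-info
        - name: APPDYNAMICS_AGENT_GLOBAL_ACCOUNT_NAME
          valueFrom:
            configMapKeyRef:
              key: FULL_ACCOUNT_NAME
              name: controller-info
        volumeMounts:
        # Per-container log files written by Kubernetes (symlinks).
        - name: varlogcontainers
          mountPath: /var/log/containers
          readOnly: true
        # Targets of the /var/log/containers symlinks.
        - name: varlogpods
          mountPath: /var/log/pods
          readOnly: true
      volumes:
      - name: varlogcontainers
        hostPath:
          path: /var/log/containers
      - name: varlogpods
        hostPath:
          path: /var/log/pods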