This page explains how to add the Kafka Prometheus exporter to your deployed environment using Helm charts. The AppDynamics Helm chart package provides all the necessary dependencies. If you don’t have the AppDynamics Helm chart installed, see Install Kubernetes and App Service Monitoring.


Requirements

Before you integrate Prometheus, you must have configured the Kafka and JMX exporters in your environment. See Prometheus Exporters and integrations.

Scaling Requirements

Each AppDynamics Distribution for OpenTelemetry™ Collector replica can process 5,000 Kafka partitions. This limit ensures that you have enough storage for AppDynamics metrics, events, logs, and traces (MELT) data. To process additional partitions, add one AppDynamics Distribution for OpenTelemetry™ Collector replica for every 5,000 partitions. For example, a 16-node Kubernetes cluster with 16 AppDynamics Distribution for OpenTelemetry™ Collector replicas can handle a total of 80,000 Kafka partitions.
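
To estimate the number of replicas you need, divide the current partition count by 5,000 and round up. The following is a minimal sketch, assuming the kafka-topics.sh tool that ships with Kafka and a placeholder bootstrap server address:

# Count partitions across all topics; replace the bootstrap server address with your own.
PARTITIONS=$(kafka-topics.sh --bootstrap-server localhost:9092 --describe | grep -c "Partition: ")

# One AppDynamics Distribution for OpenTelemetry Collector replica per 5,000 partitions, rounded up.
echo $(( (PARTITIONS + 4999) / 5000 ))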

This document contains references to third-party documentation. AppDynamics does not own any rights and assumes no responsibility for the accuracy or completeness of such third-party documentation.

Enable Prometheus Exporter Monitoring for Kafka

To enable Prometheus exporter monitoring for Kafka using the AppDynamics Helm chart, update the collectors-values.yaml file and annotate your exporter services:

  1. Copy and paste the following snippet into the collectors-values.yaml file in your Kubernetes deployment. For the full list of settings, see AppDynamics Distribution for OpenTelemetry Collector Settings.

    appdynamics-otel-collector:
      enablePrometheus: true
      spec: 
        replicas: <your-desired-number-of-replicas-here> 

    Set the replicas value to the number of Kubernetes nodes in your cluster. Each AppDynamics Distribution for OpenTelemetry™ Collector replica can process 5,000 Kafka partitions, so add one replica for every additional 5,000 partitions you need to process. See Scaling Requirements above.
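
    For example, for a 16-node cluster handling up to 80,000 Kafka partitions (see Scaling Requirements), the snippet would be:

    appdynamics-otel-collector:
      enablePrometheus: true
      spec:
        replicas: 16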

    You must set replicas to enable the AppDynamics deployment; otherwise, the AppDynamics Distribution for OpenTelemetry™ Collector will not run.

  2. Run the Helm chart commands using the latest version to upgrade the operators and collectors. See Upgrade Operators and Collectors.
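
    The exact command depends on your installation; in this sketch, the release name and chart reference are placeholders:

    helm upgrade <your-release-name> <appdynamics-collectors-chart> \
      -n appdynamics \
      -f collectors-values.yaml

    After the upgrade completes, run this command to validate the deployment: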

    kubectl get po -n appdynamics

    Sample output:

    NAME                                                              READY   STATUS    RESTARTS        AGE
    appd-prom-appdynamics-otel-collector-collector-0                  1/1     Running   0               115s
    appd-prom-appdynamics-otel-collector-collector-1                  1/1     Running   0               114s
    appd-prom-appdynamics-otel-collector-targetallocator-676bfkxxxv   1/1     Running   0               115s
    opentelemetry-operator-controller-manager-58d65d7848-z5t5n        2/2     Running   0               7h19m
  3. Kubernetes annotations attach metadata to objects as key/value pairs that clients can retrieve. Add the following annotations to your exporter services to expose the Prometheus exporter endpoints.

    If you start the Prometheus exporter with a path or port that is different from the default values, you need to update the prometheus.io/path and prometheus.io/port annotation values accordingly.


    1. Kafka exporter: 

      appdynamics.com/exporter_type: "kafka"
      appdynamics.com/kafka_cluster_name: "<your-kafka-cluster-name-here>"
      prometheus.io/path: "/metrics"
      prometheus.io/port: "9308"

      Once configured, run:

      kubectl describe svc [Kafka exporter service name] [-n namespace]

      Sample output:

      Name:              kafka-demo-metrics
      Namespace:         kafka
      Annotations:       appdynamics.com/exporter_type: kafka
                         appdynamics.com/kafka_cluster_name: kafka-demo
                         prometheus.io/path: /metrics
                         prometheus.io/port: 9308
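
      If you manage the exporter Service with a manifest, place the annotations under metadata.annotations. The following is a minimal sketch that reuses the service name and namespace from the sample output; the selector and port values are illustrative placeholders:

      apiVersion: v1
      kind: Service
      metadata:
        name: kafka-demo-metrics
        namespace: kafka
        annotations:
          appdynamics.com/exporter_type: "kafka"
          appdynamics.com/kafka_cluster_name: "kafka-demo"
          prometheus.io/path: "/metrics"
          prometheus.io/port: "9308"
      spec:
        # Selector and ports are placeholders; match them to your Kafka exporter deployment.
        selector:
          app: kafka-exporter
        ports:
          - name: metrics
            port: 9308
            targetPort: 9308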
    2. JMX exporter for Kafka: 

      appdynamics.com/exporter_type: "kafkajmx"
      appdynamics.com/kafka_cluster_name: "<your-kafka-cluster-name-here>"
      prometheus.io/path: "/"
      prometheus.io/port: "5556"

      Once configured, run:

      kubectl describe svc [JMX exporter service name] [-n namespace]

      Sample output:

      Name:              kafka-demo-jmx-metrics
      Namespace:         kafka
      Annotations:       appdynamics.com/exporter_type: kafkajmx
                         appdynamics.com/kafka_cluster_name: kafka-demo
                         prometheus.io/path: /
                         prometheus.io/port: 5556
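
      Alternatively, you can add the annotations to an existing Service with kubectl annotate. This example reuses the JMX exporter service name and namespace from the sample output; substitute your own values:

      # Service name and namespace are placeholders for your environment.
      kubectl annotate service kafka-demo-jmx-metrics -n kafka \
        appdynamics.com/exporter_type=kafkajmx \
        appdynamics.com/kafka_cluster_name=kafka-demo \
        prometheus.io/path=/ \
        prometheus.io/port=5556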
  4. Update your JMX exporter configuration rules so that the generated metrics are ingested correctly. Apply the following rules in place of the default rules.

    jmxUrl: <your-full-jmx-url-to-connect-to-here>
    lowercaseOutputName: true
    lowercaseOutputLabelNames: true
    whitelistObjectNames: ["kafka.controller:*","kafka.server:*","java.lang:*","kafka.network:*","kafka.log:*"] 
    rules:
      - pattern: kafka.controller<type=(KafkaController), name=(.+)><>(Value)
        name: kafka_controller_$1_$2_$3
        type: GAUGE
      - pattern: kafka.controller<type=(ControllerStats), name=(.+)><>(Count)
        name: kafka_controller_$1_$2_total
        type: COUNTER
      - pattern: kafka.network<type=(RequestMetrics), name=(RequestsPerSec), request=(.+), version=(.+)><>(Count)
        name: kafka_network_$1_$2_total
        type: COUNTER
        labels:
          request: $3
          version: $4
      - pattern: kafka.network<type=(RequestMetrics), name=(TotalTimeMs), request=(.+)><>(Count)
        name: kafka_network_$1_$2_$3_total
        type: COUNTER
        labels:
          request: $3
      - pattern: kafka.server<type=(KafkaServer|ReplicaManager), name=(.+)><>(Value)
        name: kafka_server_$1_total_$2_$3
        type: GAUGE
      - pattern: kafka.server<type=(.+), name=(.+), topic=(.+)><>(Count)
        name: kafka_server_$1_$2_total
        type: COUNTER
        labels:
          topic: $3
      - pattern: kafka.server<type=(DelayedOperationPurgatory), name=(PurgatorySize), delayedOperation=(.+)><>(Value)
        name: kafka_server_$1_$2_$3_$4
        type: GAUGE
      - pattern: kafka.server<type=(ReplicaManager), name=(.+)><>(Count)
        name: kafka_server_$1_broker_$2_total
        type: COUNTER
      - pattern: kafka.server<type=(BrokerTopicMetrics), name=(BytesOutPerSec|BytesInPerSec|MessagesInPerSec)><>(Count)
        name: kafka_server_$1_broker_$2_total
        type: COUNTER
      - pattern: kafka.server<type=(BrokerTopicMetrics), name=(FailedProduceRequestsPerSec|FailedFetchRequestsPerSec)><>(Count)
        name: kafka_server_$1_$2_total
        type: COUNTER
      - pattern: kafka.server<type=(SessionExpireListener), name=(.+)><>(Count)
        name: kafka_server_$2_total
        type: COUNTER
      - pattern: kafka.log<type=(LogFlushStats), name=(LogFlushRateAndTimeMs)><>(Count)
        name: kafka_log_$1_$2_total
        type: COUNTER
      - pattern: kafka.log<type=(Log), name=(Size), topic=(.+), partition=(.+)><>(.+)
        name: kafka_log_$1_$2
        labels:
          topic: $3
          partition: $4
        type: GAUGE
      - pattern: java.lang<type=(.*)>
        type: GAUGE
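
    How you load this configuration depends on how you run the JMX exporter. The following is a minimal sketch assuming the standalone HTTP server form of the exporter (which uses the jmxUrl setting above); the jar name, version, and config file path are placeholders:

    # Serve Kafka JMX metrics on port 5556 (the port used in the annotations above).
    # The jar name/version and the config file path are placeholders for your setup.
    java -jar jmx_prometheus_httpserver-<version>-jar-with-dependencies.jar 5556 kafka-jmx-config.yaml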

Next Steps

You can observe entity details in the AppDynamics Cloud UI. See Observe Kafka Entities.

KAFKA is a registered trademark of The Apache Software Foundation and has been licensed for use by AppDynamics and its affiliates (together, "AppDynamics"). AppDynamics has no affiliation with and is not endorsed by The Apache Software Foundation.

Prometheus® and Kubernetes® (as applicable) are trademarks of The Linux Foundation®.