This page describes the default and recommended resource values for the Kubernetes collectors.

This document contains references to third-party documentation. Splunk AppDynamics does not own any rights and assumes no responsibility for the accuracy or completeness of such third-party documentation.

Default Resource Values

By default, the Helm chart that installs the Kubernetes Collectors provisions each collector and operator with the following resources:

| Collector | CPU Limit | Memory Limit | CPU Requests | Memory Requests |
| --- | --- | --- | --- | --- |
| Cluster Collector | 1,000 millicores | 1,000 Mi | 500 millicores | 750 Mi |
| Infrastructure Collector | 350 millicores | Linux: 100 Mi / Windows: 300 Mi | 200 millicores | Linux: 64 Mi / Windows: 150 Mi |
| Cisco AppDynamics Distribution of OpenTelemetry Collector | 200 millicores | 1 Gi | 10 millicores | 256 Mi |
| Cisco AppDynamics Operator | 200 millicores | 128 Mi | 100 millicores | 64 Mi |

The default resource values for the Cisco AppDynamics Distribution of OpenTelemetry Collector represent the minimum resources needed to run a proof of concept (POC). When deploying production collectors, follow the Recommended Resource Values to provision resources appropriate for your cluster size.

For additional details on the Cisco AppDynamics Distribution of OpenTelemetry Collector, see Performance and Scaling for the Cisco AppDynamics Distribution of OpenTelemetry Collector.
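To override these defaults, the Helm chart's values file accepts standard Kubernetes resource blocks. The sketch below is illustrative only: the top-level key path (`clusterCollector`) is an assumption; consult the values.yaml shipped with your chart version for the exact structure.

```yaml
# Illustrative values.yaml override (key paths are assumptions; check your
# chart's own values.yaml). The values shown mirror the Cluster Collector
# defaults from the table above.
clusterCollector:
  resources:
    limits:
      cpu: 1000m
      memory: 1000Mi
    requests:
      cpu: 500m
      memory: 750Mi
```

An override file like this would typically be applied with `helm upgrade --install ... -f values.yaml`.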

Recommended Resource Values

The following table lists the suggested collector and operator resource values, based on the stated deployment count and distribution of pods and nodes, with an added buffer. If your distribution differs, use the table as guidance for your own resource allocation. These values represent the maximum recorded resource usage of a single collector instance during a 24-hour period.

| Pods | Nodes | Logs per Minute | Spans per Minute | Pods per Node | Cluster Collector (CPU / Memory Requests) | Cisco AppDynamics Distribution of OpenTelemetry Collector (CPU / Memory Requests) | Infrastructure Collector (CPU / Memory Requests) | Cisco AppDynamics Operator (CPU / Memory Requests) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1,000 | 6 | 2,500,000 | 2,000,000 | 167 | 160 millicores / 350 Mi | 580 millicores / 250 Mi | 100 millicores / 125 Mi | 50 millicores / 100 Mi |
| 5,000 | 26 | 5,000,000 | 4,000,000 | 192 | 350 millicores / 1,200 Mi | 850 millicores / 375 Mi | 100 millicores / 125 Mi | 50 millicores / 100 Mi |
| 10,000 | 51 | 10,000,000 | 8,000,000 | 196 | 450 millicores / 2,100 Mi | 875 millicores / 450 Mi | 100 millicores / 150 Mi | 100 millicores / 125 Mi |
| 15,000 | 76 | 15,000,000 | 12,000,000 | 197 | 1,000 millicores / 3,600 Mi | 875 millicores / 650 Mi | 100 millicores / 200 Mi | 125 millicores / 175 Mi |
| 20,000 | 101 | 20,000,000 | 16,000,000 | 198 | 1,350 millicores / 4,200 Mi | 900 millicores / 1,550 Mi | 100 millicores / 200 Mi | 200 millicores / 200 Mi |
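When your pod count falls between two rows of the table, one option is to interpolate linearly between the nearest rows and then apply your own buffer. The helper below is purely illustrative: the row data is copied from the table above, but the linear-scaling assumption between rows is ours, not official Splunk AppDynamics guidance.

```python
# Illustrative only: linearly interpolate the Cluster Collector CPU and
# memory requests between the nearest rows of the recommended-values table.
# The linear-scaling assumption is ours, not vendor guidance.
from bisect import bisect_left

# (pods, CPU request in millicores, memory request in Mi) -- from the table
ROWS = [
    (1_000, 160, 350),
    (5_000, 350, 1_200),
    (10_000, 450, 2_100),
    (15_000, 1_000, 3_600),
    (20_000, 1_350, 4_200),
]

def cluster_collector_requests(pods: int) -> tuple[int, int]:
    """Estimate (cpu_millicores, memory_mi) for a given pod count."""
    if pods <= ROWS[0][0]:
        return ROWS[0][1], ROWS[0][2]
    if pods >= ROWS[-1][0]:
        return ROWS[-1][1], ROWS[-1][2]
    i = bisect_left([r[0] for r in ROWS], pods)
    (p0, c0, m0), (p1, c1, m1) = ROWS[i - 1], ROWS[i]
    t = (pods - p0) / (p1 - p0)
    return round(c0 + t * (c1 - c0)), round(m0 + t * (m1 - m0))

print(cluster_collector_requests(7_500))  # -> (400, 1650)
```

For example, a 7,500-pod cluster falls midway between the 5,000- and 10,000-pod rows, giving roughly 400 millicores and 1,650 Mi before any buffer.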

Additional Guidance

  • We recommend setting your resource limits to values greater than or equal to the corresponding request values, at your discretion.
  • If the number of secrets, ConfigMaps, DaemonSets, or StatefulSets in your deployment is higher than our configured values, you'll need to increase the resources allocated to your collectors. The average size of the ConfigMaps and secrets in our tests was 200 bytes; the recommended resource values account for an extrapolated size of 1 KB.
  • If your ratio of pods per node is different than the counts in the table, you'll need to increase or decrease the amount of resources allocated to your collectors accordingly.
  • For the purposes of this testing, CPU throttling was kept under 50% by increasing the CPU limits of the collectors. The recommended resource values contain an added buffer to further reduce CPU throttling, account for sudden traffic increases, and account for backed-up data. For the detailed counts of Kubernetes entities that our recommendations are based on, contact support.
  • For the Cisco AppDynamics Distribution of OpenTelemetry Collector:
    • If your MELT footprint exceeds the values in the table, you'll need to increase the resource values for the Cisco AppDynamics Distribution of OpenTelemetry Collector and possibly make changes to the batch processor configuration.
    • If the memory limiter processor is enabled in your Cisco AppDynamics Distribution of OpenTelemetry Collector, it should reduce your memory resource requirements.
    • The recommended resource values are based on a DaemonSet deployment. If your Cisco AppDynamics Distribution of OpenTelemetry Collector is deployed as a StatefulSet, we recommend setting your replica count to the corresponding node count in the table and using the recommended resource values from the table. If you reduce the number of OpenTelemetry replicas in StatefulSet mode, you'll need to increase the resource allocation for the OpenTelemetry pods accordingly. For any questions on resource configuration, contact support.
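The batch and memory limiter processors mentioned above are standard upstream OpenTelemetry Collector components (`batch` and `memory_limiter`). The fragment below is a sketch of the relevant settings; the numeric values are placeholders for illustration, not tuned Splunk AppDynamics recommendations.

```yaml
# Illustrative OpenTelemetry Collector processor settings; the numbers
# are placeholders, not tuned recommendations.
processors:
  batch:
    timeout: 5s
    send_batch_size: 8192        # consider raising for a larger MELT footprint
    send_batch_max_size: 16384
  memory_limiter:
    check_interval: 1s
    limit_mib: 900               # keep below the container memory limit
    spike_limit_mib: 200
```

In a pipeline, upstream guidance is to place `memory_limiter` before `batch`, e.g. `processors: [memory_limiter, batch]`.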

Third party names, logos, marks, and general references used in these materials are the property of their respective owners or their affiliates in the United States and/or other countries. Inclusion of such references are for informational purposes only and are not intended to promote or otherwise suggest a relationship between Splunk AppDynamics and the third party.