This page describes how to configure optional settings for the Cisco AppDynamics Distribution of OpenTelemetry Collector. For advanced settings, see Advanced Settings for the Cisco AppDynamics Distribution of OpenTelemetry Collector.

Configure Proxy Settings

  1. Set the HTTPS_PROXY environment variable to your proxy URL. For other proxy settings, see proxy support in the Go net/http package.

    The HTTP_PROXY environment variable does not apply to HTTPS communication, which the Cisco Cloud Observability ingestion endpoint uses.

  2. To set up the environment variable for the Helm chart, modify the appdynamics-otel-collector spec:

    appdynamics-otel-collector:
      spec:
        env:
          - name: HTTPS_PROXY
            value: <proxy url>
    YML
  3. If your proxy performs HTTPS inspection or SSL termination, you must also configure your proxy's root certificate authority (CA) for the otlphttp exporter. In Kubernetes® environments, Cisco AppDynamics recommends loading your CA certificate as a secret and mounting that secret as a volume the Cisco AppDynamics Distribution of OpenTelemetry Collector can read.

    Example spec:

    appdynamics-otel-collector:
      spec:
        volumes:
          - name: proxycavol
            secret:
              secretName: proxyca
        volumeMounts:
          - name: proxycavol
            mountPath: /path/to/proxy
      configOverride:
        exporters:
          otlphttp:
            tls:
              ca_file: /path/to/proxy/ca.crt
    YML
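The proxyca secret referenced in the example above must exist before the collector starts. It can be created from your CA certificate file; the file path and namespace below are assumptions, so adjust them for your environment:

```shell
# Create the Kubernetes secret that holds the proxy root CA certificate.
# The certificate path (./proxy-ca.crt) and the namespace are assumptions.
kubectl create secret generic proxyca \
  --from-file=ca.crt=./proxy-ca.crt \
  --namespace appdynamics
```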

Provide Client Secret with Environment Variable

Client secrets are typically provided as plain text in your Helm values file. The Cisco Cloud Observability Helm chart also lets you inject the client secret through an environment variable, with the actual secret value stored in a Kubernetes ConfigMap or Secret.

The following example shows an environment variable that reads the Cisco Cloud Observability secret from a Kubernetes secret named <secret-name> under the key <secret-key>:

appdynamics-otel-collector:  
  clientId: <client-id>
  clientSecretEnvVar: 
    valueFrom:
      secretKeyRef:
        name: <secret-name>
        key: <secret-key>
  tokenUrl: <token-url>
  endpoint: <endpoint>
YML
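The Kubernetes secret referenced by clientSecretEnvVar can be created ahead of time; for example, as a sketch where the name, key, and literal value are placeholders:

```shell
# Create the secret that clientSecretEnvVar references.
# <secret-name>, <secret-key>, the literal value, and the namespace
# are placeholders; substitute your own values.
kubectl create secret generic <secret-name> \
  --from-literal=<secret-key>='<your-client-secret>' \
  --namespace appdynamics
```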

Provide Client Secret with Mounted Volume

The Cisco Cloud Observability Helm chart provides the option to inject the client secret with a mounted volume, where the actual secret value can be stored in a Kubernetes secret and read by the Cisco AppDynamics Distribution of OpenTelemetry Collector as a mounted volume:

clientSecretVolume: 
  secretName: <secret-name>
  secretKey: <secret-key>
YML

For more information on these configuration options, see appdynamics-otel-collector.clientSecretVolume.

Configure Batch Size for Rate Limiting

The Cisco Cloud Observability Helm chart has a default batch size of 1,000. If you have a higher load, configure the batch size according to your token tier and traffic.

  1. Check the rate limit for your metrics, logs, and traces. The rate limit depends on your token tier. 
  2. Calculate your batch size based on your traffic and token tier rate limit. Given traffic of A requests/min and a rate limit of B requests/min, your batch size should be at least:

    $$\lceil A/B \rceil$$

    For example, if you are generating 1.5 million metrics/min with a Tier 1 token (500 requests/min), your batch size should be at least 1,500,000 / 500 = 3,000 metrics per batch.

  3. To customize your batch size, you need to configure the batch processors for metrics, logs, and traces. The following example shows how to configure the batch processors in your configuration file:

    appdynamics-otel-collector:
      configOverride:
        processors:
          batch/traces:
            send_batch_size: 3000
            timeout: 10s
            send_batch_max_size: 3000
          batch/metrics:
            send_batch_size: 3000
            timeout: 10s
            send_batch_max_size: 3000 
          batch/logs:
            send_batch_size: 3000
            timeout: 10s
            send_batch_max_size: 3000 
    YML
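The batch-size arithmetic from step 2 can be sanity-checked with shell integer math, computing the ceiling of A/B as (A + B - 1) / B:

```shell
# Minimum batch size = ceil(traffic / rate limit).
# 1,500,000 metrics/min with the Tier 1 limit of 500 requests/min:
A=1500000
B=500
echo $(( (A + B - 1) / B ))   # prints 3000
```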

Customize Resource Request

If you need to increase the collector resource limit due to high traffic or decrease the resource limit for a small cluster, you can configure the resource request through the Cisco Cloud Observability Helm chart.

The default resource limits for the Cisco AppDynamics Distribution of OpenTelemetry Collector are listed under hardware requirements and are based on our performance test results.

The following example shows how to increase the resource configuration; these values double the default resource limits:

appdynamics-otel-collector:
  spec:
    resources:
      limits:
        cpu: 400m
        memory: 2048Mi
YML
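After upgrading the Helm release, you can confirm the new limits were applied to the running pods. The namespace and label selector below are assumptions and may differ in your installation:

```shell
# Inspect the collector container's effective resource limits.
# The namespace (appdynamics) and label selector are assumptions.
kubectl -n appdynamics get pods \
  -l app.kubernetes.io/name=appdynamics-otel-collector \
  -o jsonpath='{.items[*].spec.containers[*].resources.limits}'
```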

The Cisco Cloud Observability Helm chart automatically configures the memory_limiter processor based on the resource request and limit configuration. You do not need to change the memory_limiter configuration when you change the resource settings.

Configure Max Export Byte Size

The batchbybytesize processor breaks large OTLP data packets into smaller packets, which prevents Cisco Cloud Observability from rejecting data packets due to size limitations. Configure the processor's maximum packet size as follows:

appdynamics-otel-collector:
  configOverride: 
    processors:
      batchbybytesize:
        batch_byte_size: 1e6
YML

The batchbybytesize processor must be added at the end of each processor pipeline to take effect. If you have overridden the processor pipeline, you must add batchbybytesize manually. The following configuration shows the batchbybytesize processor added manually to each pipeline alongside the other required processors.

appdynamics-otel-collector:
  configOverride:
    service:
      pipelines:
        metrics:
          processors: [memory_limiter, transform/jvmmetric, filter/jvm, metricstransform/jvmdatapoint, transform/truncate, batch/metrics, batchbybytesize]
        traces:
          processors: [memory_limiter, k8sattributes, transform/truncate, batch/traces, batchbybytesize]
        logs:
          processors: [memory_limiter, filter/non_appd, k8sattributes/logs, transform/logs, transform/truncate, batch/logs, batchbybytesize]
YML

The batchbybytesize processor was released in 24.3.0 but is not enabled by default until 24.4.0. If you are using the 24.3.0 release, you must enable the processor manually even if you did not override the pipeline configuration.

OpenTelemetry™ is a trademark of The Linux Foundation®.