This page describes steps to fix problems with the deployment of the Log Collector. 

General Tips

  • Check the logs of the clustermon pod (example commands follow this list).
  • Make sure kubectl is connected to the correct cluster.
  • Make sure the credentials you specify in collectors-values.yaml are the right ones for your tenant and cluster.
  • See useful kubectl commands in Useful kubectl and helm Commands.
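
For example, assuming the collectors are installed in the appdynamics namespace used elsewhere on this page, the following commands show which cluster kubectl is connected to and display the clustermon logs (the pod name placeholder is illustrative):

# show the cluster that kubectl is currently connected to
kubectl config current-context

# find the clustermon pod and view its logs
kubectl -n appdynamics get pods | grep clustermon
kubectl -n appdynamics logs <clustermon-pod-name>
BASH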

Log Collector Installation Fails

Installation can fail due to the following configuration errors in collectors-values.yaml:

  • In the simplified YAML layout, if you set multiLinePattern but not multiLineMatch, you get the following error:

    Error: INSTALLATION FAILED: execution error at (appdynamics-collectors/charts/appdynamics-cloud-k8s-monitoring/templates/logCollector.yaml:28:8): "multiLineMatch" field is mandatory, if "multiLinePattern" is set.
    CODE

    Solution: Add multiLineMatch to your collectors-values.yaml; see Log Collector Settings. Then uninstall and reinstall the Log Collector: see "Reinstall Collectors" in Upgrade or Uninstall Kubernetes and App Service Monitoring.
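
    For example, a minimal sketch of the fragment that belongs under the matching condition's config block in a simplified-layout collectors-values.yaml; the pattern and match values are illustrative, and the exact surrounding structure should be verified against Log Collector Settings:

    config:
      multiLinePattern: '^\d{4}-\d{2}-\d{2}'   # illustrative: lines starting with a date begin a new event
      multiLineMatch: after                    # required whenever multiLinePattern is set
    YML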

Log Collector is Not Deployed on Tainted Nodes

See Monitor the Tainted Nodes.

Log Collector is Not Reporting to Cisco Cloud Observability

If the Log Collector does not appear in Cisco Cloud Observability, there could be a connectivity issue with Cisco Cloud Observability.

  1. Verify that a Server Visibility license is available in Administration > License > Account Usage. The Log Collector requires a Server Visibility license to register successfully.

  2. Review the Log Collector events in the appdynamics namespace:

    kubectl -n appdynamics get events
    
    # to sort by most recent events:
    kubectl -n appdynamics get events --sort-by='.lastTimestamp'
    BASH
  3. Review the Log Collector pod specification for additional events:

    kubectl -n appdynamics get pod <log-collector-pod> -o yaml
    BASH
  4. Review the Log Collector logs for errors related to communication with Cisco Cloud Observability:

    kubectl -n appdynamics logs <log-agent-pod-name>
    BASH
  5. Verify that the configuration of the Log Collector matches what you expect:

    kubectl -n appdynamics describe cm <log-collector-configmap-name>
    BASH
  6. Verify that the latest Log Collector is installed. If you have upgraded the Log Collector from a previous version, the AppDynamics Operator configuration or image may not be compatible. You must install both the Log Collector and the AppDynamics Operator again:

    1. To uninstall, see Uninstall Kubernetes and App Service Monitoring.

    2. To install, see Deploy the Log Collector.

  7. Configure the Log Collector to log Filebeat activity. See Log Collector Settings.
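
    For example, raising Filebeat's own log verbosity can help when troubleshooting the collector itself. logging.level and logging.to_files are standard Filebeat settings; where this fragment belongs in collectors-values.yaml depends on your layout, so confirm the exact keys in Log Collector Settings:

    # standard Filebeat logging settings (placement in collectors-values.yaml
    # depends on your layout; see Log Collector Settings)
    logging:
      level: debug
      to_files: true
    YML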

Log Collector Restarts

Check the pod details to verify whether a restart occurred:

kubectl get pods -n appdynamics
# to show only crash-looping pods, append: | grep CrashLoop
CODE

Sample output:

NAME                                         READY   STATUS    RESTARTS   AGE
appdynamics-operator-6fff76b466-qtx57        1/1     Running   0          4h18m
k8s-log-agent-perf-jg-6fc498d557-q7zst       1/1     Running   1          83m
CODE
  • If Log Collector pods are crash looping, there is probably not enough CPU or memory on the node. Compare the Log Collector resource requirements with the available CPU and memory on your nodes. See Log Collector Requirements.
  • If the Log Collector restarts unexpectedly, the RESTARTS value is greater than zero. In that case, you must explicitly reset both the namespaces and the logs.
  • The logs from the Log Collector persist even if it restarts. To view the logs for the pod that restarted, run this command: 

    kubectl -n appdynamics logs --previous <log-collector-pod-name>
    CODE

Log Collector Pods Are Not Visible in Cisco Cloud Observability

If pods are not visible in Cisco Cloud Observability, or if pods are not registered and reporting, compare the Log Collector resource requirements with the available CPU and memory on your nodes. See Log Collector Requirements.
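
For example, the following commands show each node's allocatable capacity and the resources already requested on it, which you can compare against the Log Collector's requests and limits:

# allocatable CPU and memory per node
kubectl describe nodes | grep -A 5 "Allocatable"

# resources already requested on each node
kubectl describe nodes | grep -A 7 "Allocated resources"

# live usage per node (requires metrics-server)
kubectl top nodes
BASH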

Log Collector Pods Are Not Created When a Security Policy is Enabled

If you have pod security policies enabled on the cluster, the Log Collector needs to run as the root user:

  1. Set logCollectorPod.securityContext.runAsUser to 0 in your collectors-values.yaml file: 

    ...
    logCollectorPod:
      securityContext:
        runAsUser: 0
    YML
  2. Upgrade the AppDynamics collectors: See Upgrade or Uninstall Kubernetes and App Service Monitoring.


  3. (Optional) To confirm that changes have been applied, run the commands kubectl describe pod and kubectl get pod -o yaml.
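
    For example, assuming the appdynamics namespace, the following commands show whether runAsUser: 0 is present in the running pod's security context (the pod name placeholder is illustrative):

    kubectl -n appdynamics describe pod <log-collector-pod-name>
    kubectl -n appdynamics get pod <log-collector-pod-name> -o yaml | grep -B 2 "runAsUser"
    BASH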

Logs Are Not Parsed

You might need to update your parser types or patterns. See Log Parsing Validator.

If you see log messages with the value unknown in the field severity, it probably means your logs are not parsed. It could also mean that your logs don't include a field named severity.

Why Did Parsing Fail?

  1. Navigate to the Logs page and find a log message with the value unknown in the field severity.
  2. In the Properties panel, look at the value of the parsing_failure_reason field. This value can indicate why parsing failed.

Does Your Pattern Match the Log Message's Pattern?

  1. Navigate to the Logs page and find a log message with the value unknown in the field severity.
  2. In the Properties panel, look at the value of the _messageParser object. If this object doesn't have the parser type and pattern that match this log message, you need to modify your parsing configuration. 

Do You Have Multiple Log Patterns?

If you are sending application logs from multiple containers on a Kubernetes cluster, you must create a condition+config pair for every log-generating container or Kubernetes infrastructure component. See Configure the Log Collector.
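
For example, a minimal sketch of two condition+config pairs in a simplified-layout collectors-values.yaml, one per container. The container names, condition fields, and parser settings are illustrative; verify the exact structure against Configure the Log Collector:

logCollectorConfig:
  container:
    conditionalConfigs:
      # pair 1: a container that writes Log4j-style logs
      - condition:
          key: kubernetes.container.name
          operator: equals
          value: orders-service
        config:
          messageParser:
            log4J:
              enabled: true
              pattern: "%d{yyyy-MM-dd HH:mm:ss} %p %c - %m%n"
      # pair 2: a container that writes JSON logs
      - condition:
          key: kubernetes.container.name
          operator: equals
          value: payments-service
        config:
          messageParser:
            json:
              enabled: true
YML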

Is Your Configuration Out of Date?

Verify that your collectors-values.yaml contains the required settings. See Configure the Log Collector.

Logs Are Not Associated With Entities

If your collectors-values.yaml is in the legacy layout, make sure it contains these settings:


OpenTelemetry™ and Kubernetes® (as applicable) are trademarks of The Linux Foundation®.