Follow these troubleshooting steps if you face any of the following issues during or after installing the Splunk AppDynamics On-Premises Virtual Appliance.

Update DNS Configuration for an Air-Gapped Environment

An air-gapped environment is a network setup that has no Internet connectivity. In this environment, the configured DNS server may become unreachable. To fix this issue, configure a DNS server that can be reached.

The following example details are used to explain how to update the DNS configuration:

The IP addresses 10.0.0.1, 10.0.0.2, and 10.0.0.3 belong to the Virtual Appliance cluster.

10.0.0.5 is the IP address of the standalone Controller.

standalone-controller is the DNS name of the standalone on-premises Controller.

  1. Update the /etc/hosts file.
    This ensures that the appdcli ping command can reach the DNS server.

    Example

    # AppDOS Cluster Hosts
    10.0.0.1 example-air-gap-va-node-3 10.0.0.1.nip.io
    10.0.0.2 example-air-gap-va-node-1 10.0.0.2.nip.io
    10.0.0.3 example-air-gap-va-node-2 10.0.0.3.nip.io
    CODE
  2. Edit the coredns configmap file to add the external Controller IP address.
    kubectl -n kube-system edit configmap/coredns
    CODE
  3. In the coredns configmap file, add the following entry in the .:53 section:

    Example

    hosts {
        10.0.0.5 standalone-controller
        fallthrough
    }
    CODE
  4. Edit the globals.yaml.gotmpl file to update dnsDomain and dbHost with the DNS name of the standalone on-premises Controller.
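Taken together, the hosts entry from step 3 sits inside the .:53 server block of the coredns configmap. A sketch of the edited block is shown below; the surrounding plugins are typical CoreDNS defaults and will vary, so keep whatever your configmap already contains and only add the hosts stanza:

```
.:53 {
    errors
    health
    hosts {
        10.0.0.5 standalone-controller
        fallthrough
    }
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
    cache 30
}
```

The fallthrough directive matters: names not listed in the hosts block are passed on to the remaining plugins instead of returning NXDOMAIN.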

Update CIDR of the Pod

If you need to change the default CIDR of the pod network, update the CIDR to an available subnet range. Perform the following steps to update the CIDR of the pod:

  1. Log in to the node console using the appduser credentials.
  2. Stop the services:
    appdcli stop appd
    appdcli stop operators
    CODE
  3. Back up the following files:
    /var/snap/microk8s/current/args/cni-network/cni.yaml
    /var/snap/microk8s/current/args/kube-proxy
    CODE
  4. Update the cni.yaml file.

    Existing Content

    - name: CALICO_IPV4POOL_CIDR
         value: "10.1.0.0/16" 
    CODE

    Updated Content

    Provide the available subnet range. For example: 10.2.0.0/16.

    - name: CALICO_IPV4POOL_CIDR
         value: "10.<Number>.0.0/16" 
    CODE
  5. Update the kube-proxy file.

    Existing Content

    --cluster-cidr=10.1.0.0/16
    CODE

    Updated Content

    Provide the available subnet range. For example: 10.2.0.0/16.

    --cluster-cidr=10.<Number>.0.0/16
    CODE


  6. Run the following command to apply the changes:
    microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
    CODE
  7. Restart the nodes.
    microk8s stop
    microk8s start
    CODE
  8. Verify the node status.
    microk8s status
    CODE
  9. Delete the default ippool and restart the calico-node pods:
    microk8s kubectl delete ippools default-ipv4-ippool
    microk8s kubectl rollout restart daemonset/calico-node -n kube-system
    CODE
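The file edits in steps 4 and 5 can also be made with sed. The following is a minimal sketch, assuming 10.2.0.0/16 is the free subnet you chose; the in-place edits are commented out so they can be reviewed before applying, and should only run after the backups from step 3:

```shell
OLD_CIDR="10.1.0.0/16"   # default pod CIDR
NEW_CIDR="10.2.0.0/16"   # assumption: an unused subnet in your network

# Uncomment to edit the files in place (back them up first, as in step 3):
# sed -i "s|$OLD_CIDR|$NEW_CIDR|" /var/snap/microk8s/current/args/cni-network/cni.yaml
# sed -i "s|$OLD_CIDR|$NEW_CIDR|" /var/snap/microk8s/current/args/kube-proxy

# The same substitution, shown on a sample kube-proxy line:
echo "--cluster-cidr=$OLD_CIDR" | sed "s|$OLD_CIDR|$NEW_CIDR|"
```

Using `|` as the sed delimiter avoids having to escape the `/` inside the CIDR values.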

Insufficient Permissions to Access Microk8s

Sometimes this error appears if the terminal was inactive between installation steps. If you face this error, log out of the terminal and log in again.
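If logging in again does not clear the error, it is worth confirming that the user still belongs to the microk8s group, which the MicroK8s snap requires for non-root access. The check below is a sketch; the usermod fix is a common MicroK8s remedy rather than something specific to the appliance:

```shell
# Check whether the current user belongs to the 'microk8s' group:
if id -nG | grep -qw microk8s; then
  echo "microk8s group membership: ok"
else
  echo "microk8s group membership: missing"
  # Common MicroK8s fix; log out and back in afterwards:
  # sudo usermod -a -G microk8s "$USER"
fi
```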

Error Appears for appdctl show boot

When you run the appdctl show boot command, the following error appears if any background processes are pending:

Error: Get "https://127.0.0.1/boot": Socket /var/run/appd-os.sock not found. Bootstrapping maybe in progress
Please check appd-os service status with following command:
systemctl status appd-os
CODE

Run the command again after a few minutes.
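Instead of retrying by hand, you can poll until the appd-os service reports active and then rerun the command. A minimal sketch; the 30-attempt, 10-second budget is an arbitrary choice:

```shell
# Retry a check command until it succeeds or attempts run out.
wait_for() {  # usage: wait_for <attempts> <sleep_seconds> <command...>
  attempts=$1; pause=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    sleep "$pause"
    i=$((i + 1))
  done
  return 1
}

# Wait up to ~5 minutes for the service, then retry the boot status:
# wait_for 30 10 systemctl is-active --quiet appd-os && appdctl show boot
```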

Restore the MySQL Service

If a virtual machine in the cluster restarts, the MySQL service does not start automatically. To start the MySQL service, complete the following steps:

  1. Run the following command:

    appdcli run mysql_restore
    CODE
  2. Verify the pod status.

    appdcli run infra_inspect
    CODE
    NAME                                READY   STATUS      RESTARTS   AGE
    appd-mysqlsh-0                      1/1     Running     0          4m33s
    appd-mysql-0                        2/2     Running     0          4m33s
    appd-mysql-1                        2/2     Running     0          4m33s
    appd-mysql-2                        2/2     Running     0          4m33s
    appd-mysql-router-9f8bc6784-g7zx7   1/1     Running     0          5s
    appd-mysql-router-9f8bc6784-fhjnp   1/1     Running     0          5s
    appd-mysql-router-9f8bc6784-wrcwk   1/1     Running     0          5s
    CODE
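To confirm the restore succeeded without reading the table by eye, you can check the READY column programmatically. The sketch below parses listing output of the shape shown above; the all_ready helper name is made up for illustration:

```shell
# Succeeds only if every appd-mysql* pod reports all containers ready (e.g. 2/2).
all_ready() {
  awk 'NR > 1 && $1 ~ /^appd-mysql/ {
    split($2, ready, "/")
    if (ready[1] != ready[2]) exit 1
  }'
}

# Example use:
# appdcli run infra_inspect | all_ready && echo "MySQL pods are ready"
```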

EUM Health is Failing After Multiple Retries

Run the following commands to restart the Events and EUM pods:

kubectl delete pod events-ss-0 -n cisco-events
kubectl delete pod eum-ss-0 -n cisco-eum
CODE


IOException Error Occurs in the Controller UI

In the Controller UI, when you select Alert and Respond > Anomaly Detection, the following IOException error occurs:

IOException while calling 'https://pi.appdynamics.com/pi-rca/alarms/modelSensitivityType/getAll?accountId=2&controllerId=onprem&startRecordNo=0&appId=7&recordCount=1'
CODE

To work around this issue, run the following commands to find and restart the Controller pod:

kubectl get pods -n cisco-controller
kubectl delete pod <Controller-Pod-Name> -n cisco-controller
CODE