Observability into a Service Mesh with EraSearch

EraSearch gives you service-level insights into a service mesh with all the benefits that EraSearch is known for: separation of storage and compute, object storage, and zero-schema design for low-cost, high-performing, high-scale observability data management.

[Image: Observability into a Service Mesh with EraSearch]

The shift to microservices comes with a new set of challenges for DevOps teams. They have to manage issues like service discovery, routing, load balancing, and encryption to ensure service health, reliability, and security. And DevOps teams need to aggregate the logs, events, and metrics originating from multiple microservices and integrate them into their observability and monitoring ecosystem for troubleshooting and optimizing application code.

To solve these operational challenges, many DevOps teams are adopting a service mesh, like Kuma. A service mesh is a complementary management system layered on top of a microservice architecture that provides a consistent way to connect, manage, and secure communication between containerized services. By abstracting service discovery, routing, load balancing, and security into a sidecar proxy that attaches to each service instance and communicates with all the other sidecars on the network, a service mesh frees DevOps teams from building their own tools to manage inter-service communication, while providing out-of-the-box policies for service traffic and network configuration.

With EraSearch, you can place your service mesh log data in the context of your infrastructure as a whole and create alerts to notify teams of possible issues within your service mesh deployment.

Understanding Kuma Service Mesh

Before we show how to integrate logging, we’ll need to deploy a service mesh within our Kubernetes cluster. For this walkthrough, we’re going to use Kuma, an easy-to-use service mesh and Cloud Native Computing Foundation (CNCF) Sandbox project.

To install Kuma with Helm, run:

helm repo add kuma https://kumahq.github.io/charts
helm install --version 0.7.1 --create-namespace --namespace kuma-system kuma kuma/kuma

# for the management UI
kubectl port-forward svc/kuma-control-plane -n kuma-system 5681:5681

If all went to plan, then loading http://localhost:5681/gui should present the Kuma GUI:

Now with the Kuma Control Plane deployed to the cluster, we can start adding services to the mesh. To quickly highlight the difference between the service mesh Control Plane and the Data Planes:

  • The Control Plane (or CP), as the name implies, simply controls the service mesh, but is not involved in the mesh communication directly. This means that traffic between pods and services in the mesh does not flow through the CP; the CP is only needed to keep track of the members of the mesh and make sure the mesh policies are being enforced.

  • The Data Planes (or DPs) are the sidecar proxies that sit alongside every pod in the mesh. These sidecars do the heavy lifting when it comes to service mesh communication. The DPs receive policies from the CP and then apply them to the network traffic.

You may be asking yourself what we mean by policies in this context. You can find the full list in the Kuma documentation here, but you can think of a policy as anything you want to do to traffic running through the service mesh. Do you want to encrypt traffic? Restrict traffic? Route it in a special way? Log it? Trace it? All of these traffic directives are expressed in the form of policies.
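For instance, a TrafficPermission policy restricts which services are allowed to talk to each other. A minimal sketch looks roughly like this (the service names follow the demo app used later in this walkthrough and are illustrative):

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-demo-to-redis
spec:
  # only the demo-app service may open connections to redis
  sources:
    - match:
        kuma.io/service: demo-app_kuma-demo_svc_5000
  destinations:
    - match:
        kuma.io/service: redis_kuma-demo_svc_6379
```

The traffic logging policy we configure below uses this same sources/destinations matching structure.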

With that out of the way, let’s add some services to the mesh. Luckily for us, the Kuma project includes a demo app that can be used to quickly explore the mesh capabilities and features. To deploy the demo app into your cluster, run:

kubectl apply -f https://raw.githubusercontent.com/kumahq/kuma-counter-demo/master/demo.yaml

# you may need to wait a minute or two
kubectl port-forward svc/demo-app -n kuma-demo 5000:5000

If it was successful, navigating to http://localhost:5000 should display the lovely Kuma Counter Demo web application:

With the demo app deployed, within the Kuma GUI you should now see two mesh Data Planes (DPs): one for the demo-app service and another for the redis service.

With the DPs in place, any network communication going to or from the demo-app or redis services will flow through the mesh.

With our services deployed and our service traffic now flowing through the mesh, we can configure a logging policy for the mesh to generate logs on every request. To do this, we’ll need to:

  • Update the default mesh with a logging backend, then 

  • Create a new traffic logging policy to generate logs on every request

Let’s start with the logging backend, which can be configured by updating the default Mesh Custom Resource Definition (CRD). To do this, write the following content to a default-mesh.yml file:

apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  logging:
    defaultBackend: stdout
    backends:
      - name: stdout
        type: file
        format: '{"start_time": "%START_TIME%", "source": "%KUMA_SOURCE_SERVICE%", "destination": "%KUMA_DESTINATION_SERVICE%", "source_address": "%KUMA_SOURCE_ADDRESS_WITHOUT_PORT%", "destination_address": "%UPSTREAM_HOST%", "duration_millis": "%DURATION%", "bytes_received": "%BYTES_RECEIVED%", "bytes_sent": "%BYTES_SENT%"}'
        conf:
          path: /dev/stdout

Then apply it with the command:

kubectl apply -f default-mesh.yml

With the logging backend set, now we can enable the traffic logging policy to log all traffic in the mesh. To do this, write the following content to a log-all-traffic.yml file:

apiVersion: kuma.io/v1alpha1
kind: TrafficLog
mesh: default
metadata:
  name: all-traffic
spec:
  # This TrafficLog policy applies to all traffic in the Mesh.
  sources:
    - match:
        kuma.io/service: '*'
  destinations:
    - match:
        kuma.io/service: '*'

And then apply it with the command:

kubectl apply -f log-all-traffic.yml

With the traffic log policy in place, logs will now be emitted on every request. To validate this, run the following command and refresh the demo app web page to generate an event.

kubectl logs deployment/demo-app -c kuma-sidecar -n kuma-demo --tail=10 -f

You should see a JSON message similar to the following logged after refreshing the demo app web page:

{"start_time": "2022-01-07T17:19:59.271Z", "source": "demo-app_kuma-demo_svc_5000", "destination": "redis_kuma-demo_svc_6379", "source_address": "", "destination_address": "", "duration_millis": "2", "bytes_received": "77", "bytes_sent": "4121"}
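Note that the source and destination fields follow Kuma’s service naming convention, <name>_<namespace>_svc_<port>. As a quick sanity check, you can pull a field out of a captured log line from the shell (a sketch; jq is the better tool if it’s available in your environment):

```shell
# A captured Kuma access log line (trimmed from the sample above)
LOG='{"start_time": "2022-01-07T17:19:59.271Z", "source": "demo-app_kuma-demo_svc_5000", "destination": "redis_kuma-demo_svc_6379", "duration_millis": "2"}'

# Extract the source service with sed; Kuma names services
# as <name>_<namespace>_svc_<port>
src=$(echo "$LOG" | sed -n 's/.*"source": "\([^"]*\)".*/\1/p')
echo "$src"   # demo-app_kuma-demo_svc_5000
```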

It works! The last thing to do is to forward these logs into EraSearch.

Getting Logs into EraSearch

To get logs into EraSearch, we’ll need to deploy a collector/forwarder to scoop up the logs from our Kubernetes environment and ship them to EraSearch’s ingest endpoint. For this walkthrough we’re going to use Vector, which is a lightweight yet versatile log collector that makes it simple to collect and ship Kubernetes logs. We’ll also use EraCloud, the hosted offering of EraSearch.

Below is the values file for the Vector Helm deployment we’ll be using to collect logs from our Kubernetes pods. In this config, we’re telling Vector to:

  • Collect Kubernetes logs from all pods/namespaces

  • Attempt to parse any JSON-formatted lines, ignoring any errors in case they’re not JSON-formatted

  • Ship those logs to EraSearch for storage and visualization

role: Agent
customConfig:
  sources:
    kube_logs:
      type: kubernetes_logs

  transforms:
    parse_json:
      type: remap
      inputs: ["kube_logs"]
      source: |-
        structured, err = parse_json(string!(.message))
        if err == null {
          ., _ = merge(., structured)
        }

  sinks:
    erasearch:
      type: elasticsearch
      inputs: ["parse_json"]
      endpoint: "https://${ERACLOUD_HOSTNAME}"
      bulk:
        index: kube-logs
      healthcheck: false
      request:
        headers:
          Authorization: "Bearer ${ERACLOUD_API_TOKEN}"

With the above content written to a vector-values.yaml file, you can deploy Vector and configure it with the following commands:

helm repo add vector https://helm.vector.dev
helm repo update
helm install vector vector/vector --namespace vector --create-namespace --values vector-values.yaml

Once Vector is successfully deployed, you should start to see logs flow into the kube-logs EraSearch index.

Getting Insights into Service Mesh Logs

With logs now reaching EraSearch, let’s look at our service mesh logs. If you refresh the demo app web page a few times (or click the Increment button in the demo app), you should start to see log lines similar to the following show up in EraSearch:

Notice that the fields we specified in our mesh logging policy, like source, destination, and duration_millis, were all parsed from the JSON output by Vector and are present in the log object. All of this with just a few minutes of effort to format the logs in Kuma and send them to EraSearch with Vector! These logs can be modified/enriched further both by updating the mesh logging policy as well as manipulating them within Vector using VRL before being forwarded to EraSearch.
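As one example of that enrichment, a follow-on remap transform in the Vector config could coerce duration_millis from a string into an integer for numeric queries and tag each event with the mesh it came from (a sketch; the transform name, field names, and mesh tag are illustrative):

```yaml
transforms:
  enrich_mesh_logs:
    type: remap
    inputs: ["parse_json"]
    source: |-
      # convert the stringified duration to an integer for numeric queries,
      # defaulting to 0 if the field is missing or not a number
      .duration_millis = to_int(.duration_millis) ?? 0
      # tag every event with the mesh it came from
      .mesh = "default"
```

To use it, the elasticsearch sink’s inputs would point at enrich_mesh_logs instead of parse_json.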

Now that the logs are being collected, we can start taking advantage of the extra data included in our logs by using special charts for the mesh-specific data. For example, in Grafana we can chart the bytes sent/received scoped by service, as well as the mean request duration:

For more information on what you can do with EraSearch data in Grafana, see our docs here.

If you’re new to EraSearch, start your free EraCloud trial today!