Era Software

Send your CloudWatch logs to EraSearch

Centralize your logs, including your cloud logs, in EraSearch. This article walks you through the AWS and Vector configuration.



Adding logs to EraSearch centralizes them across clouds, providing a holistic view of operations at any point in time. EraSearch is a modern log management platform built for high-volume workloads. DevOps, IT operations, and infrastructure teams use CloudWatch to monitor the performance of their cloud applications and infrastructure. Sending those logs to EraSearch for analysis lets teams take advantage of highly cost-optimized object storage and the fast query performance critical for high data volumes.

This article shows how to set up streaming of Amazon CloudWatch logs through Amazon Kinesis Data Firehose (Kinesis Firehose) to EraSearch.

Step 1: Configure AWS products to log to CloudWatch

This guide assumes logs are already flowing into CloudWatch and are ready to be sent to Amazon Kinesis.
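If you want to confirm that events are arriving before wiring anything else up, the AWS CLI (v2) can tail a log group directly. This is a sketch; the log group name below is a hypothetical placeholder.

```shell
# Sketch: verify that events are arriving in a CloudWatch log group.
# "/aws/lambda/my-function" is a hypothetical log group name.
aws logs tail "/aws/lambda/my-function" --since 5m
```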

Step 2: Configure Kinesis Firehose

Following the AWS documentation, set up a Kinesis Firehose delivery stream. Then create a CloudWatch Logs subscription filter to stream your logs to it.
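The subscription step can also be sketched with the AWS CLI: a subscription filter is what forwards CloudWatch log events to the Firehose delivery stream. All names, the account ID, region, and ARNs below are hypothetical placeholders, and the IAM role must allow CloudWatch Logs to write to Firehose.

```shell
# Sketch: subscribe a log group to an existing Firehose delivery stream.
# Names, account ID, region, and ARNs are hypothetical placeholders.
# An empty filter pattern forwards all log events.
aws logs put-subscription-filter \
  --log-group-name "/aws/lambda/my-function" \
  --filter-name "to-erasearch" \
  --filter-pattern "" \
  --destination-arn "arn:aws:firehose:us-east-1:123456789012:deliverystream/my-stream" \
  --role-arn "arn:aws:iam::123456789012:role/CWLtoFirehoseRole"
```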

Step 3: Send data to a destination

In the Firehose delivery stream settings, choose your destination. For this setup, use an HTTP endpoint pointing at the Vector instance you will configure in Step 4, and set an access key; Vector will validate this key on incoming requests.

Step 4: Configure Vector

Create a Vector configuration with an AWS Kinesis Firehose source. You will need the access key from Step 2 (configuring the Kinesis Firehose). The access key ${AKF_PWD} should be passed in from a secret, and ${VECPORT} is passed in on the docker command line.

# Global options
data_dir = "/var/lib/vector"

# Kinesis Firehose source
[sources.akf]
type = "aws_kinesis_firehose"
# VECPORT is passed in on the docker command line
address = "0.0.0.0:${VECPORT}"
# AKF_PWD is the access key configured on the Firehose HTTP endpoint
access_key = "${AKF_PWD}"
tls.enabled = true
# The file names under /var/lib/vector/cert/ are examples; point these
# at your actual certificate and key files
tls.crt_file = "/var/lib/vector/cert/server.crt"
tls.key_file = "/var/lib/vector/cert/server.key"

Next, parse each line into JSON and add timestamps:

# Parse the log line into JSON and add timestamps
# See the Vector Remap Language reference for more info:
[transforms.parse_logs_dev]
type = "remap"
inputs = ["akf"]
source = '''
. = parse_json!(.message)
._lid = to_unix_timestamp(now(), unit: "nanoseconds")
._ts = to_unix_timestamp(now(), unit: "milliseconds")
'''

Lastly, we send the line to EraSearch using the Elasticsearch sink (EraSearch exposes an Elasticsearch-compatible API). Again, the password ${ERAPWD} should be stored in a secret and passed in on startup.

[sinks.era]
type = "elasticsearch"
inputs = ["parse_logs_dev"]
endpoint = "${ERAHOST}"
auth.strategy = "basic"
auth.user = "${ERAUSR}"
auth.password = "${ERAPWD}"
# This is the index you want to use; you will need it in Step 5
index = "logs-akf"
healthcheck.enabled = false

Vector's own metrics can also be exposed for Prometheus to scrape:

[sources.internal]
type = "internal_metrics"

[sinks.prometheus]
type = "prometheus_exporter"
inputs = ["internal"]
# 0.0.0.0:9598 is the exporter's default listen address
address = "0.0.0.0:9598"
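With the sections above saved as a single file (for example, vector.toml), Vector can be started in Docker, passing ${VECPORT} and the secrets in as environment variables for Vector to interpolate. This is a sketch; the image tag, port, mount paths, and secret file names are assumptions.

```shell
# Sketch: run Vector in Docker; the image tag, port, mount paths, and
# secret file names are hypothetical examples.
docker run -d --name vector \
  -v "$(pwd)/vector.toml:/etc/vector/vector.toml:ro" \
  -v "$(pwd)/cert:/var/lib/vector/cert:ro" \
  -e VECPORT=9000 \
  -e AKF_PWD="$(cat akf_pwd.secret)" \
  -e ERAPWD="$(cat era_pwd.secret)" \
  -p 9000:9000 \
  timberio/vector:0.16.1-alpine
```

Any other variables your config interpolates (such as an EraSearch endpoint or user) would be passed the same way with additional `-e` flags.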

Step 5: View data in EraSearch

Now that your logs are being sent to EraSearch, it's easy to create a Grafana data source.

We are using Grafana 8.1.1 and assume you have logged into Grafana and are on the Data sources page under Configuration.


Whether or not data sources have already been created, click Add data source.


In the Filter by name or type field, type Elas and click the Elasticsearch entry that appears.


Once the Select button is clicked, the following screen is displayed. Enter the Name, ${ERAHOST}, ${ERAUSR}, and ${ERAPWD}. Also don't forget to toggle the Basic auth and With Credentials sliders so they show blue.


After those are entered, scroll to the bottom and enter the index (logs-akf, for instance) that you set in Step 4 under the EraSearch sink. For the Time field name, remove @timestamp and replace it with _ts, set the version to 7.10+, and click Save & test.


At this point you should have a valid data source and can explore your data, build dashboards, and create alerts.

To try CloudWatch logs with EraSearch and many more capabilities, please contact us to sign up for our free trial.