

Linux - March 13, 2014

Logstash, Elasticsearch, and Kibana in an EC2/AWS Environment

Here I will go over how to set up Logstash, Kibana, Redis, and Elasticsearch in an EC2 environment behind a public Load Balancer. The setup I'll be doing will have:

1) One server for Redis to act as the broker/buffer to receive logs.
2) One server using Logstash to receive logs from Redis and parse/index them into Elasticsearch.
3) One server for Elasticsearch to receive logs and Kibana to view them in a browser.
4) One server to send the logs using logstash.
5) One public Load Balancer.

I am using m1.mediums for my servers. You can use whichever size you'd like, but I'd definitely recommend at least an m1.medium for the Elasticsearch server. The public ELB is there to allow public access; if that is not needed, you can make it an internal ELB or skip it entirely. Either way, make sure to mount the partition if using an m1.medium or larger.

This may seem like a lot, but follow these steps and you'll get the hang of it. :)

What you will need:
1) AWS/EC2 access and keys.
2) A basic understanding of AWS and security groups.
3) Linux shell terminal.
4) A Load Balancer.

Elasticsearch Server:

This is pretty easy and straightforward. I won't go into too much detail here; I'll save that for another blog.
1) Download, unzip and extract the Elasticsearch tar.gz file. Current release at this time is 1.0
2) Cd into /root/elasticsearch-1.0.0/config and open elasticsearch.yml.
3) Change the cluster name and node name to whatever you want. I recommend this as it can help separate and organize nodes and clusters later on if need be.
4) At the bottom of the config add:
    cloud:
        aws:
            access_key: ACCESS_KEY
            secret_key: SECRET_KEY
    discovery:
        type: ec2
        ec2:
            groups: SECURITY-GROUP-NAME

Since this is a YAML file, spacing matters: the wrong indentation can cause errors, or a setting may simply be ignored.
5) cd ../ and run ./bin/plugin -install elasticsearch/elasticsearch-cloud-aws/2.0.0.RC1. This installs the cloud plugin for AWS/EC2.
6) Start Elasticsearch: /root/elasticsearch-1.0.0/bin/elasticsearch &
7) Make sure security groups are updated to allow the correct instances to connect using ports 9200-9300.

If all is done correctly you can query it by running: curl -XGET 'IP_ADDRESS_HERE:9200/_cluster/health?pretty=true'. It should look like this:
{
  "cluster_name" : "searchbuild",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 65,
  "active_shards" : 65,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 65
}

Again, I can go into even more detail, but that's for another blog post. And yes, I know the status is yellow: on a single node there is nowhere to allocate the replica shards. :)

Redis Server:

This will be our server to broker logs as they come in.
1) Download and unzip Redis tar.gz file. Current version at this time is 2.8.6
2) Install gcc and jemalloc: yum install gcc jemalloc -y
3) cd into redis-2.8.6/ and run: make. If you receive an error:
zmalloc.h:50:31: fatal error: jemalloc/jemalloc.h: No such file or directory
compilation terminated.
make[1]: *** [adlist.o] Error 1
make[1]: Leaving directory `/root/redis-2.8.6/src’
make: *** [all] Error 2

Then cd into ./deps and run: make hiredis lua jemalloc linenoise
cd ../ and run make again.

4) Start Redis: ./src/redis-server --loglevel verbose

That's it for Redis. The prompt will show 0 clients connected for now; until an indexer or shipper's input/output criteria are met, Redis will keep showing 0 clients connected.

If you'd like to test, open another terminal, run ./src/redis-cli, and issue a command such as PING.
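Under the hood, Logstash's redis output and input simply push and pop JSON strings on the "logstash" list key. Here is a rough stand-in for that flow, using an in-memory deque in place of a live Redis server (illustration only; a real setup uses RPUSH/BLPOP against Redis):

```python
import json
from collections import deque

# Stand-in for the Redis "logstash" list key. A real shipper would
# RPUSH to Redis and a real indexer would BLPOP from it.
broker = deque()

def ship(event):
    # shipper side: serialize the event and push it onto the list
    broker.append(json.dumps(event))

def consume():
    # indexer side: pop the oldest event and decode it (codec => json)
    return json.loads(broker.popleft())

ship({"type": "example", "message": "hello from the shipper", "host": "web-01"})
event = consume()
print(event["message"])
```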

Kibana:

There are two ways to set up Kibana. Method one involves installing Apache, downloading and unpacking Kibana's tar.gz file, and editing Kibana's config.js file.
Method two just uses Logstash's built-in Java web server with Kibana bundled in.

Method 1:
1) Install Apache, unpack the Kibana tar.gz file, and move the directory to /var/www/html/ (or wherever the web server's root directory is located).
2) cd into /var/www/html/kibana/app/dashboard and run: cp logstash.json default.json
3) Open config.js and edit: elasticsearch: 'http://ELB_URL_HERE:9200',
4) Start Apache.
5) Create public Load Balancer and add Elasticsearch/Kibana server to ELB.
6) Configure ELB security groups (HTTP/HTTPS) to allow 9200-9300.
7) Configure ELB on listeners for 9200 to 9200. You can forward port 80 to 9200 in the ELB as well.

Test by going to http://ELB_URL_HERE:9200/kibana-3.0.0milestone5/#/dashboard/file/default.json

Method 2:
1) Create public Load Balancer and add the Elasticsearch/Kibana server to ELB.
2) Configure ELB security groups to allow 9200-9300.
3) Configure ELB on listeners for 9292 to 9292. You can forward port 80 to 9292 using the ELB.
4) Run: java -jar logstash-1.3.3-flatjar.jar web

Test by going to http://ELB_URL_HERE:9292

Either method works, but if you want control over the config, method one is the way to go.

Conf files:

Again, the indexer is the server that uses Logstash to receive the logs from Redis and parse them out to Elasticsearch. The shipper is the server sending the actual logs to Redis.

Indexer file:
Create an indexer.conf file and add in:
input {
  redis {
    host => "redis-IP"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
output {
  elasticsearch_http {
    host => "elasticsearch-IP"
  }
}

Pretty straightforward. The indexer is receiving the input from our Redis server and sending the output to our Elasticsearch server.

Start logstash:
java -jar logstash-1.3.3-flatjar.jar agent -f indexer.conf

  • You can put Redis on one server and the Logstash indexer on a second server. Just make sure the IPs in the conf files are changed to the correct hosts.
  • Make sure the key field matches in all configs.
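Conceptually, the indexer's job is to pull JSON events off Redis and index them into Elasticsearch. Here is a hypothetical sketch of turning a batch of events into an Elasticsearch _bulk request body (the function and index names are illustrative, not the plugin's internals):

```python
import json

def to_bulk_body(events, index="logstash-2014.03.13"):
    """Build an Elasticsearch _bulk request body: one action line
    followed by one source line per event, newline-terminated."""
    lines = []
    for event in events:
        lines.append(json.dumps({"index": {"_index": index,
                                           "_type": event.get("type", "logs")}}))
        lines.append(json.dumps(event))
    return "\n".join(lines) + "\n"

events = [
    {"type": "example", "message": "line one", "host": "web-01"},
    {"type": "example", "message": "line two", "host": "web-01"},
]
body = to_bulk_body(events)
print(body.count("\n"))  # 4 lines: an action line plus a source line per event
```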

Shipper conf file:

1) Create a shipper.conf file (or whatever you want) and add:
input {
  file {
    sincedb_path => "/path/to/whatever/"
    path => "/path/to/log-file"
    type => "example"
  }
}
filter {
  dns {
    add_field => [ "IPs", "Logs, from %{host}" ]
    type => [ "MESSAGES" ]
    resolve => [ "host" ]
    action => [ "append" ]
  }
}
output {
  redis { host => "redis-IP" data_type => "list" key => "logstash" }
}

2) Start Logstash:
java -jar logstash-1.3.3-flatjar.jar agent -f shipper.conf


  • With the DNS filter you can choose to replace the host with the IP, but using append lets you keep both the hostname and the IP. This requires that the hostname be set and /etc/hosts be updated.
  • The "sincedb_path" option is there because I am using this with a Logstash init script.
  • Including a stdout output when output is already being sent to Redis can cause log errors in /var/log/messages, since it's redundant and unnecessary output.
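The dns filter's append behavior can be pictured with this rough sketch, using Python's socket.gethostbyname in place of Logstash's resolver (the function and field names here are illustrative, not the filter's internals):

```python
import socket

def dns_append(event, field="IPs", source="host"):
    """Mimic dns { resolve => host, action => append }: resolve the
    hostname in `source` and tack the IP onto `field` rather than
    replacing the existing value."""
    try:
        ip = socket.gethostbyname(event[source])
    except socket.gaierror:
        return event  # leave the event untouched if resolution fails
    event.setdefault(field, [])
    event[field].append(ip)
    return event

# "localhost" should resolve via /etc/hosts, as the note above requires
event = dns_append({"host": "localhost", "IPs": ["Logs, from localhost"]})
print(event["IPs"])
```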

To recap, the startup order is:
1) Set up and run Redis.
2) Set up and run Elasticsearch.
3) Set up and run the Logstash indexer, and start Logstash/Kibana on the Elasticsearch server.
4) Set up and run Logstash on the shipper server and start logging.

Init Script:
I have an init script I am using to set up Logstash on the shipper servers. It works well for starting Logstash at boot, as I have my own custom rc.local script that my EC2 instances run.

1) Download or copy the init script from here.
2) Copy script into /etc/init.d/logstash and edit the paths accordingly. Make sure the script has executable permissions.
3) Run: chkconfig --add logstash and chkconfig logstash on
4) Run /etc/init.d/logstash start


  • If you get the error "No HOME environment variable set, I don't know where to keep track of the files I'm watching. Either set HOME in your environment, or set sincedb_path in your logstash config for the file", you may need to set sincedb_path => "/path/" in the conf file.
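The sincedb file is just Logstash's record of how far into each watched file it has read, so a restart doesn't re-ship old lines. A rough illustration of the idea (not the actual sincedb file format):

```python
import os
import tempfile

def read_new_lines(path, sincedb):
    """Read only the lines added since the offset recorded in sincedb,
    then advance the offset -- roughly what the file input does."""
    offset = 0
    if os.path.exists(sincedb):
        with open(sincedb) as f:
            offset = int(f.read() or 0)
    with open(path) as f:
        f.seek(offset)
        lines = f.readlines()
        new_offset = f.tell()
    with open(sincedb, "w") as f:
        f.write(str(new_offset))
    return [line.rstrip("\n") for line in lines]

tmp = tempfile.mkdtemp()
log = os.path.join(tmp, "test.log")
db = os.path.join(tmp, "sincedb")

with open(log, "w") as f:
    f.write("first\n")
print(read_new_lines(log, db))   # ['first']

with open(log, "a") as f:
    f.write("second\n")
print(read_new_lines(log, db))   # ['second'] -- 'first' is not re-read
```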

If you have any questions or if something is incorrect feel free to comment.
