Here are the presumptions and limitations of this article:
- Not terribly familiar with Linux (e.g., you're a Windows admin)
- Familiar with ELK as a single-instance install, and looking at a distributed deployment for better scalability
- You understand the benefits of centralized log storage and event correlation offered by the ELK stack, or similar systems such as Splunk or Sumo Logic
- The next walk-through will document getting IIS logs and Kiwi Syslog data imported into the ELK stack, so little effort will be spent here on log collection from endpoints
- Not dealing with user and security controls in this walk-through
- Presume you can at least install basic Ubuntu Linux with SSH server capabilities
Total config time for the steps in this article is roughly 2-6 hours.
Setup for this configuration:
5 virtual servers with the following specifications:
Basic load of Ubuntu 16.04.1 server, installing only the SSH Server option
- DEVLSIN (Logstash inbound) - 1024 MB RAM, 1 CPU, 8 GB HDD
- DEVLSOUT (Logstash outbound) - 1024 MB RAM, 1 CPU, 8 GB HDD
- DEVES (Elasticsearch) - 4096 MB RAM, 1 CPU, 10 GB HDD
- DEVMQ (Redis queuing server) - 2048 MB RAM, 1 CPU, 12 GB HDD
- DEVKIB (Kibana server) - 2048 MB RAM, 1 CPU, 8 GB HDD
These are bare-minimum specifications. If you're looking at a production deployment, you would likely need additional memory and disk resources, but this was sufficient for a small lab connected to a handful of logging sources.
Setup: DEVLSIN
Pre-requisite: Java Runtime Environment
Update your apt package database:
sudo apt-get update
Install the default Java Runtime Environment (OpenJDK) with this command:
sudo apt-get install default-jre
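To confirm the JRE is in place before moving on, check the version it reports:
java -version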
Install: Logstash Input Server
First task is to import the public signing key from Elasticsearch:
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, install the apt-transport-https package:
sudo apt-get install apt-transport-https
Now, save the repository information for the 5.x stack:
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
With this combined command, the package database will be updated and Logstash will be installed:
sudo apt-get update && sudo apt-get install logstash
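To verify the install, you can ask Logstash for its version; the packaged binary lives under /usr/share/logstash (run this way it may warn about a missing logstash.yml, which is harmless for a version check):
/usr/share/logstash/bin/logstash --version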
Install: Plugins
For the common input types and output to Redis, run the following commands:
cd /usr/share/logstash
sudo bin/logstash-plugin install logstash-input-beats
sudo bin/logstash-plugin install logstash-input-syslog
sudo bin/logstash-plugin install logstash-input-tcp
sudo bin/logstash-plugin install logstash-output-redis
*Note* If you want to see all the input and output types, you can run:
sudo bin/logstash-plugin list
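To quickly confirm the Redis output plugin registered, you can filter that list (still from /usr/share/logstash):
sudo bin/logstash-plugin list | grep redis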
Setup: DEVLSOUT
Pre-requisite: Java Runtime Environment
Update your apt package database:
sudo apt-get update
Install the default Java Runtime Environment (OpenJDK) with this command:
sudo apt-get install default-jre
Install: Logstash Index/Filter Server
First task is to import the public signing key from Elasticsearch:
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, install the apt-transport-https package:
sudo apt-get install apt-transport-https
Now, save the repository information for the 5.x stack:
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
With this combined command, the package database will be updated and Logstash will be installed:
sudo apt-get update && sudo apt-get install logstash
Install: Plugins
For this system, we need to install the plugins that allow input from Redis with output to Elasticsearch:
cd /usr/share/logstash
sudo bin/logstash-plugin install logstash-input-redis
sudo bin/logstash-plugin install logstash-output-elasticsearch
Setup: DEVES
Pre-requisite: Java Runtime Environment
Update your apt package database:
sudo apt-get update
Install the default Java Runtime Environment (OpenJDK) with this command:
sudo apt-get install default-jre
Install: Elasticsearch 5.0
First task is to import the public signing key from Elasticsearch:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, install the apt-transport-https package:
sudo apt-get install apt-transport-https
Now, save the repository information for the 5.x stack:
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
With this combined command, the package database will be updated and Elasticsearch will be installed:
sudo apt-get update && sudo apt-get install elasticsearch
Set the Elasticsearch service to automatically start on reboot, then start the service:
sudo update-rc.d elasticsearch defaults 95 10
sudo service elasticsearch restart
Test to see that the service is running and returning results:
curl -XGET 'localhost:9200/?pretty'
or, if localhost doesn't answer, use the server's IP address:
curl -XGET '<server IP>:9200/?pretty'
Response output should look similar to the following example:
{
  "name" : "Cp8oag6",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
  "version" : {
    "number" : "5.0.0",
    "build_hash" : "f27399d",
    "build_date" : "2016-03-30T09:51:41.449Z",
    "build_snapshot" : false,
    "lucene_version" : "6.2.0"
  },
  "tagline" : "You Know, for Search"
}
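As an additional sanity check, you can ask Elasticsearch for its cluster health. A single-node setup like this one will typically report a "yellow" status, since replica shards have no second node to live on:
curl -XGET 'localhost:9200/_cluster/health?pretty'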
Setup: DEVMQ
Install: Redis
The following information applies to Redis 3.0.6; mileage may vary on newer versions.
Install Redis from the default application repository:
sudo apt-get install redis-server
Check that the redis-server service shows "active (running)". If it's not running, proceed with the configuration adjustments below anyway; it's especially important to change the configuration file so that logging is enabled.
sudo service redis-server status
Set the Redis server to start automatically after reboot:
sudo update-rc.d redis-server defaults 95 10
Part of the recommended Redis setup is to disable transparent hugepages. Add these commands to your /etc/rc.local file:
sudo nano /etc/rc.local
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
   echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
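The rc.local entries only take effect at boot. To apply the same settings immediately without rebooting, you can write the values in by hand (tee is used because a plain sudo echo redirect won't cross the privilege boundary):
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag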
The first time Redis starts up, you'll want to check the logs under /var/log/redis/redis-server.log
For the specifications used in this walkthrough, I had to make a couple of adjustments to the /etc/sysctl.conf file. These were called out in the logs and will differ based on the memory allocated to your system.
sudo nano /etc/sysctl.conf
vm.overcommit_memory = 1
fs.file-max = 1024
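To apply these kernel settings without a reboot, reload the file with sysctl:
sudo sysctl -p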
You'll also want to make a few changes to your redis.conf file, located in /etc/redis: bind Redis to your server IP, set the location of the log file, and set the maxclients limit. Because the memory in my system was so low, I adjusted maxclients downward quite a bit; the logs will have an entry if it needs to be adjusted.
sudo nano /etc/redis/redis.conf
# Examples:
#
# bind 127.0.0.1
bind <your server IP>

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/log/redis/redis-server.log

################################### LIMITS ####################################

# Once the limit is reached Redis will close all the new connections
# sending an error 'max number of clients reached'.
maxclients 1024
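After saving the changes, restart Redis and verify it responds on the bound address; redis-cli is pulled in with the redis-server package on Ubuntu, and a healthy server answers PONG:
sudo service redis-server restart
redis-cli -h <your server IP> ping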
Setup: DEVKIB
Install: Kibana
First task is to import the public signing key from Elasticsearch:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, install the apt-transport-https package:
sudo apt-get install apt-transport-https
Now, save the repository information for the 5.x stack:
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
With this combined command, the package database will be updated and Kibana will be installed:
sudo apt-get update && sudo apt-get install kibana
Set the Kibana service to automatically start on reboot:
sudo update-rc.d kibana defaults 95 10
Look for the "server.host" line and set it to 0.0.0.0, which allows connections from any address.
Next, look for the "elasticsearch.url" entry and put in the address of your Elasticsearch server.
sudo nano /etc/kibana/kibana.yml
# Specifies the address to which the Kibana server will bind. IP addresses and
# host names are both valid values. The default is 'localhost', which usually
# means remote machines will not be able to connect. To allow connections from
# remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://<Elasticsearch IP>:9200"
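After saving kibana.yml, restart the service and confirm it answers on Kibana's default port, 5601 (this check assumes curl is installed on whatever machine you run it from):
sudo service kibana restart
curl -I http://<Kibana IP>:5601/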
Check-In
At this point, the servers should all be running. To finalize the configuration, the Logstash input and output servers need a couple of basic config files in order to route traffic to and from the message queuing server and the Elasticsearch instance.
On the DEVLSIN system, create the following configuration file to accept Filebeat input and forward on to the Redis system:
sudo nano /etc/logstash/conf.d/10-logstash-shipper.conf
input {
  beats {
    port => 5044
  }
}

output {
  redis {
    host => "<Redis IP>"
    data_type => "list"
    key => "logstash"
  }
}
On the DEVLSOUT system, create the following configuration files to pull from Redis, parse syslog data and forward on to Elasticsearch. I've included a filter file to parse Kiwi Syslog input as an example.
sudo nano /etc/logstash/conf.d/10-indexer-input.conf
input {
  redis {
    host => "<Redis IP>"
    data_type => "list"
    key => "logstash"
  }
}
sudo nano /etc/logstash/conf.d/12-kiwi-filter.conf
filter {
  if [type] == "kiwisyslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:raw_log_time} %{IP:raw_host_ip} %{GREEDYDATA:raw_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
  }
}
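As a purely illustrative example (this line is made up, not taken from a real Kiwi install), the grok pattern above would split a message like the following into raw_log_time, raw_host_ip, and raw_message:
Oct 27 14:02:11 10.0.0.15 The user DOMAIN\jsmith logged on successfully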
sudo nano /etc/logstash/conf.d/30-indexer-out.conf
output {
  elasticsearch {
    hosts => ["<Elasticsearch IP>:9200"]
    sniffing => false
    manage_template => false
  }
}
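Before restarting, you can optionally have Logstash validate the config files. The flags below are for the 5.x packaged install; --path.settings points the binary at its logstash.yml, and -f accepts the whole conf.d directory:
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d --config.test_and_exit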
After adding these configuration files, restart Logstash on both servers (DEVLSIN and DEVLSOUT):
sudo service logstash restart
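To confirm events are actually flowing, two quick checks: on DEVLSIN, verify Logstash is listening on the Beats port, and on DEVMQ, watch the length of the logstash list; it should hover near zero while DEVLSOUT drains it promptly:
ss -tln | grep 5044
redis-cli -h <Redis IP> llen logstash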
With all this in place, your systems should now be able to accept input, with Kiwi Syslog events parsed and indexed. I'll cover filters for other log types, such as IIS, in another post.