Thursday, May 25, 2017

IIS Log Monitoring Update - Logstash 5.4

It would appear that, since my first post, some additional GROK parameters are needed to properly index the IIS fields as numbers; this likely applies to any numeric field parsed through GROK.

I recently set up a new instance of ELK on the 5.4 stack. I set up all my filters as I normally had; in fact, I had SCP'd the files over to my workstation and simply copied them back down.

When I went to graph status codes and bytes in Kibana, I noticed the fields were not available for aggregation. Checking my index in Kibana, sure enough, all the fields were being indexed as strings. After some back and forth with the support guys, it turned out the grok pattern needs additional type information to properly index numerical values.

Here is the filter from my original post for full IIS logging:

filter {
  if [type] == "iis" {
    if [message] =~ "^#" {
      drop {}
    }
    grok {
      match => { "message" => "%{DATESTAMP:Event_Time} %{WORD:site_name} %{HOSTNAME:host_name} %{IP:host_ip} %{URIPROTO:method} %{URIPATH:uri_target} (?:%{NOTSPACE:uri_query}|-) %{NUMBER:port} (?:%{WORD:username}|-) %{IP:client_ip} %{NOTSPACE:http_version} %{NOTSPACE:user_agent} (?:%{NOTSPACE:cookie}|-) (?:%{NOTSPACE:referer}|-) (?:%{HOSTNAME:host}|-) %{NUMBER:status} %{NUMBER:substatus} %{NUMBER:win32_status} %{NUMBER:bytes_received} %{NUMBER:bytes_sent} %{NUMBER:time_taken}" }
    }
  }
}



The new format looks something like this:


filter {
  if [type] == "iis" {
    if [message] =~ "^#" {
      drop {}
    }
    grok {
      match => { "message" => "%{DATESTAMP:Event_Time} %{WORD:site_name} %{HOSTNAME:host_name} %{IP:host_ip} %{URIPROTO:method} %{URIPATH:uri_target} (?:%{NOTSPACE:uri_query}|-) %{NUMBER:port:int} (?:%{WORD:username}|-) %{IP:client_ip} %{NOTSPACE:http_version} %{NOTSPACE:user_agent} (?:%{NOTSPACE:cookie}|-) (?:%{NOTSPACE:referer}|-) (?:%{HOSTNAME:host}|-) %{NUMBER:status} %{NUMBER:substatus:float} %{NUMBER:win32_status:float} %{NUMBER:bytes_received:float} %{NUMBER:bytes_sent:float} %{NUMBER:time_taken:float}" }
    }
  }
}
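If you want to confirm the conversions took effect, one quick check is to pull the field mapping for an index straight from Elasticsearch; the converted fields should show up as numeric types rather than strings. Note that an existing index keeps its old string mappings, so you'll only see the change on an index created after the filter update (this assumes the default logstash-* index naming):

curl -XGET 'localhost:9200/logstash-*/_mapping?pretty'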

Monday, May 1, 2017

Error: which: no javac in (/sbin:/bin:/usr/sbin:/usr/bin) when installing a Plugin for Logstash

This took me down a rabbit hole, but it ended up having an easy solution.

The solution: in addition to the JRE environment, I had to install the JDK as well.
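If you're on Oracle Java like I was, download and install the matching JDK package from Oracle; on OpenJDK it's just the devel package. A minimal sketch for the OpenJDK route on a RHEL/CentOS-style system (which mine is, hence the pathmunge reference below) looks like:

# The -devel package provides javac alongside the JRE
sudo yum install java-1.8.0-openjdk-devel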

If you install both and still have the issue, ensure you have the PATH variable set in your profile.

Mine is as such:

JAVA_HOME=/usr/java/jre1.8.0_131
PATH=$JAVA_HOME/bin:$PATH
export PATH JAVA_HOME

Check your profile. I did so with sudo nano /etc/profile and added those lines just above the "pathmunge ()" line.

When I only had the JRE installed, running "which javac" produced the path error above. After installing the JDK, that command returned a path successfully and my Logstash plugins installed without error as well.

Tuesday, December 6, 2016

Walk-Through, Part 1: How to Install Elasticsearch 5.0, Logstash 5.0 and Kibana 5.0 in a Distributed Configuration on Ubuntu 16.04.1

The purpose of this walk-through is to get you up and running with a distributed ELK stack on the 5.x version as quickly as possible. It's like the J.G. Wentworth commercials: "I want my money and I want it now!" There is very little fluff in this article; I skip NGINX and X-Pack setup and use the default indexes created by Logstash. At the end of this walk-through, you should have a total of 5 servers in your ELK stack: a front-end Logstash (input server), a Redis queuing server, a back-end Logstash (indexing and filter server), an Elasticsearch server, and a Kibana server.

Diagram: distributed ELK (Elasticsearch, Logstash, Kibana) configuration with front-end Logstash, Redis queuing, back-end Logstash, Elasticsearch, and Kibana.



Here are the presumptions and limitations of this article:
  • You're not terribly familiar with Linux (e.g. you're a Windows admin)
  • You're familiar with ELK in a single-instance install, and you're looking at a distributed setup for more scalability
  • You understand the benefits of centralized log storage and event correlation offered by the ELK stack, or by similar systems such as Splunk or Sumo Logic
  • The next walk-through will document getting IIS logs and Kiwi Syslog imported into the ELK stack, so little effort is spent here on log collection from endpoints
  • User and security controls are not dealt with in this walk-through
  • I presume you can at least install basic Ubuntu Linux with SSH server capabilities

Total configuration time for the steps in this article is roughly 2-6 hours.

Setup for this configuration:

5 virtual servers with the following specifications:

Basic load of Ubuntu 16.04.1 server, installing only the SSH Server option
  • DEVLSIN (Logstash inbound) - 1024 MB RAM, 1 CPU, 8 GB HDD
  • DEVLSOUT (Logstash outbound) - 1024 MB RAM, 1 CPU, 8 GB HDD
  • DEVES (Elasticsearch) - 4096 MB RAM, 1 CPU, 10 GB HDD
  • DEVMQ (Redis queuing server) - 2048 MB RAM, 1 CPU, 12 GB HDD
  • DEVKIB (Kibana server) - 2048 MB RAM, 1 CPU, 8 GB HDD

These were bare-minimum specifications. If you're looking at a production deployment, you would likely need additional memory and disk resources, but this was sufficient for a small lab connected to a handful of logging sources.




Setup: DEVLSIN

Pre-requisite: Java Runtime Environment

Update your apt package database:

sudo apt-get update

Install the default Java Runtime Environment (OpenJDK) with this command:

sudo apt-get install default-jre
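You can confirm Java is available (here and on the other servers later) by checking the reported version:

java -version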

Install: Logstash Input Server

The first task is to import the Elastic public signing key:

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Next, install the apt-transport-https package:

sudo apt-get install apt-transport-https

Now, save the repository definition for the 5.x stack:

echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | 
sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

This combined command updates the package lists and installs Logstash:

sudo apt-get update && sudo apt-get install logstash
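If you'd like to confirm the install before adding plugins, the Logstash binary under /usr/share/logstash can report its version:

/usr/share/logstash/bin/logstash --version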

Install: Plugins

For the common input types and output to Redis, run the following commands:

cd /usr/share/logstash
sudo bin/logstash-plugin install logstash-input-beats
sudo bin/logstash-plugin install logstash-input-syslog
sudo bin/logstash-plugin install logstash-input-tcp
sudo bin/logstash-plugin install logstash-output-redis

*Note* If you want to see all installed plugins, including the input and output types, you can run:

sudo bin/logstash-plugin list

Setup: DEVLSOUT



Pre-requisite: Java Runtime Environment

Update your apt package database:

sudo apt-get update



Install the default Java Runtime Environment (OpenJDK) with this command:

sudo apt-get install default-jre

Install: Logstash Index/Filter Server



The first task is to import the Elastic public signing key:

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Next, install the apt-transport-https package:

sudo apt-get install apt-transport-https

Now, save the repository definition for the 5.x stack:

echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" 
|sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

This combined command updates the package lists and installs Logstash:

sudo apt-get update && sudo apt-get install logstash

Install: Plugins

For this system, we need to install the plugins that allow input from Redis and output to Elasticsearch:


cd /usr/share/logstash
sudo bin/logstash-plugin install logstash-input-redis
sudo bin/logstash-plugin install logstash-output-elasticsearch

Setup: DEVES



Pre-requisite: Java Runtime Environment

Update your apt package database:

sudo apt-get update



Install the default Java Runtime Environment (OpenJDK) with this command:

sudo apt-get install default-jre


Install: Elasticsearch 5.0

The first task is to import the Elastic public signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Next, install the apt-transport-https package:

sudo apt-get install apt-transport-https

Now, save the repository definition for the 5.x stack:

echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main"
| sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

This combined command updates the package lists and installs Elasticsearch:

sudo apt-get update && sudo apt-get install elasticsearch

Set the Elasticsearch service to automatically start on reboot, then start the service:

sudo update-rc.d elasticsearch defaults 95 10
sudo service elasticsearch restart

Test to see that the service is running and returning results:

curl -XGET 'localhost:9200/?pretty'

or you may have to use

curl -XGET '<server IP>:9200/?pretty'


Response output should look similar to the following example:

{
  "name" : "Cp8oag6",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
  "version" : {
    "number" : "5.0.0",
    "build_hash" : "f27399d",
    "build_date" : "2016-03-30T09:51:41.449Z",
    "build_snapshot" : false,
    "lucene_version" : "6.2.0"
  },
  "tagline" : "You Know, for Search"
}
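You can also check overall cluster health; for a single-node lab setup like this one, expect a status of green or yellow:

curl -XGET 'localhost:9200/_cluster/health?pretty'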



Setup: DEVMQ

Install: Redis

The following applies at least to Redis 3.0.6; mileage may vary on newer versions.

Install Redis from the default Ubuntu repository:


sudo apt-get install redis-server



Check that the redis-server service is "active (running)". If it's not running, proceed with the configuration adjustments below; the most important change is enabling logging in the configuration file so you can see why Redis fails to start.



sudo service redis-server status


Set the Redis server to start automatically after reboot:


sudo update-rc.d redis-server defaults 95 10




Part of the recommended Redis setup is to disable transparent hugepages. Add these lines to your /etc/rc.local file (before the final exit 0, if present):

sudo nano /etc/rc.local

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
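After the next reboot (or after running those echo lines manually), you can verify that transparent hugepages are disabled; the active value is the one shown in brackets:

cat /sys/kernel/mm/transparent_hugepage/enabled
# expected: always madvise [never]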

The first time Redis starts up, you'll want to check the logs under /var/log/redis/redis-server.log

For the specifications used in this walkthrough, I had to make a couple of adjustments to the /etc/sysctl.conf file. These were called out in the logs and will differ based on the memory allocated to your system.

sudo nano /etc/sysctl.conf

vm.overcommit_memory = 1
fs.file-max = 1024
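To apply the new kernel settings without rebooting, reload them from the file:

sudo sysctl -p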


You'll also want to make a few changes to your redis.conf file, located in /etc/redis: bind Redis to your server IP, set the location of the log file, and set the maxclients limit. Because the memory in my system was so low, I adjusted maxclients downward quite a bit; again, the logs will have an entry if it needs to be adjusted.

sudo nano /etc/redis/redis.conf
# Examples:
#
# bind 127.0.0.1
bind <your server IP>

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/log/redis/redis-server.log

################################### LIMITS ####################################
# Once the limit is reached Redis will close all the new connections
# sending an error 'max number of clients reached'.
#
maxclients 1024
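After restarting the redis-server service so these changes take effect, a quick way to confirm Redis is listening on its new bind address is a ping from redis-cli:

sudo service redis-server restart
redis-cli -h <your server IP> ping
# expected reply: PONG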



Setup: DEVKIB

The first task is to import the Elastic public signing key:


wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -


Next, install the apt-transport-https package:


sudo apt-get install apt-transport-https

Now, save the repository definition for the 5.x stack:


echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | 
sudo tee -a /etc/apt/sources.list.d/elastic-5.x.listdo apt-get install redis-server


This combined command updates the package lists and installs Kibana:

sudo apt-get update && sudo apt-get install kibana


Set the Kibana service to automatically start on reboot:

sudo update-rc.d kibana defaults 95 10

Now update the kibana.yml configuration file to allow connections from other hosts:

Look for the "server.host" line and put in 0.0.0.0, which allows connections from any address.

Next, look for the "elasticsearch.url" entry and put in the address of your Elasticsearch server.

sudo nano /etc/kibana/kibana.yml

# Specifies the address to which the Kibana server will bind. IP addresses and 
# host names are both valid values. The default is 'localhost', which usually 
# means remote machines will not be able to connect. To allow connections from 
# remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://<Elasticsearch IP>:9200"



Check-In

At this point, the servers should all be running. To finalize the configuration, the Logstash input and output servers need a couple of basic config files in order to route traffic to and from the message queuing server and the Elasticsearch instance.

On the DEVLSIN system, create the following configuration file to accept Filebeat input and forward on to the Redis system:

sudo nano /etc/logstash/conf.d/10-logstash-shipper.conf

input {
  beats { port => 5044 }
}
output {
  redis { host => "<Redis IP>" data_type => "list" key => "logstash" }
}



On the DEVLSOUT system, create the following configuration files to pull from Redis, parse syslog data and forward on to Elasticsearch. I've included a filter file to parse Kiwi Syslog input as an example.

sudo nano /etc/logstash/conf.d/10-indexer-input.conf


input {
  redis { host => "<Redis IP>" data_type => "list" key => "logstash" }
}


sudo nano /etc/logstash/conf.d/12-kiwi-filter.conf


filter {
  if [type] == "kiwisyslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:raw_log_time} %{IP:raw_host_ip} %{GREEDYDATA:raw_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
  }
}


sudo nano /etc/logstash/conf.d/30-indexer-out.conf

output {
  elasticsearch {
    hosts => ["<elasticsearch server IP>:9200"]
    sniffing => false
    manage_template => false
  }
}
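Before restarting the services, you can optionally have Logstash validate the configuration files; a quick sanity check on the 5.x packages (settings live in /etc/logstash by default) looks something like this:

cd /usr/share/logstash
sudo bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/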



After adding these configuration files, run this on both logstash servers DEVLSIN/DEVLSOUT:

sudo service logstash restart

With all this in place, your systems should now be able to accept input, indexed for Kiwi Syslog. I'll post filters for other log types, such as IIS, in another post.
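
To verify data is flowing end to end, two quick checks I find useful (assuming the key name and default index naming used above) are the Redis queue depth and the indices Elasticsearch has created:

# On DEVMQ: events waiting in the logstash list (0 is fine if DEVLSOUT is keeping up)
redis-cli -h <Redis IP> llen logstash

# On DEVES: list the indices Logstash has created
curl -XGET 'localhost:9200/_cat/indices?v'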