@nu11secur1ty
Last active March 13, 2023 18:20

Revisions

  1. nu11secur1ty revised this gist Jan 23, 2016. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions readme.md
    @@ -181,10 +181,10 @@ Now open the Nginx configuration file in your favorite editor. We will use vi:

    Find the default server block (starts with server {), the last configuration block in the file, and delete it. When you are done, the last two lines in the file should look like this:
    nginx.conf excerpt

    ```
    include /etc/nginx/conf.d/*.conf;
    }

    ```
    Save and exit.

    Now we will create an Nginx server block in a new file:
  2. nu11secur1ty revised this gist Jan 23, 2016. 1 changed file with 4 additions and 4 deletions.
    8 changes: 4 additions & 4 deletions readme.md
    @@ -106,10 +106,10 @@ Now that Elasticsearch is up and running, let's install Kibana.
    Install Kibana

    Before installing Kibana, let's set up a kibana user and group, which will own and run Kibana:

    ```
    sudo groupadd -g 1005 kibana
    sudo useradd -u 1005 -g 1005 kibana

    ```
If those commands fail because the 1005 GID or UID already exists, replace the numbers with IDs that are free.

    Download Kibana to your home directory with the following command:
    @@ -126,9 +126,9 @@ Open the Kibana configuration file for editing:

    In the Kibana configuration file, find the line that specifies server.host, and replace the IP address ("0.0.0.0" by default) with "localhost":
    kibana.yml excerpt (updated)

    ```
    server.host: "localhost"

    ```
    Save and exit. This setting makes it so Kibana will only be accessible to the localhost. This is fine because we will use an Nginx reverse proxy to allow external access.

    Let's copy the Kibana files to a more appropriate location. Create the /opt directory with the following command:
  3. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion readme.md
    @@ -1,4 +1,4 @@
    # Installing ELK (CentOS 6,7)
# Installing ELK (CentOS (6 - NOTE: with your own modifications), 7)
    Introduction

    In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on CentOS 7—that is, Elasticsearch 2.1.x, Logstash 2.1.x, and Kibana 4.3.x. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location, using Filebeat 1.0.x. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.
  4. nu11secur1ty revised this gist Jan 16, 2016. No changes.
  5. nu11secur1ty renamed this gist Jan 16, 2016. 1 changed file with 0 additions and 0 deletions.
    File renamed without changes.
  6. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 0 additions and 1 deletion.
    1 change: 0 additions & 1 deletion readme.md
    @@ -1 +0,0 @@
    installation and configuration
  7. nu11secur1ty revised this gist Jan 16, 2016. 2 changed files with 500 additions and 473 deletions.
    499 changes: 499 additions & 0 deletions !readme.md
    @@ -0,0 +1,499 @@
    # Installing ELK (CentOS 6,7)
    Introduction

    In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on CentOS 7—that is, Elasticsearch 2.1.x, Logstash 2.1.x, and Kibana 4.3.x. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location, using Filebeat 1.0.x. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.

    Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

    It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering.
    Our Goal

    The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.

    Our ELK stack setup has four main components:

    Logstash: The server component of Logstash that processes incoming logs
    Elasticsearch: Stores all of the logs
    Kibana: Web interface for searching and visualizing logs, which will be proxied through Nginx
Filebeat: Installed on client servers; serves as a log-shipping agent that uses the lumberjack networking protocol to send their logs to Logstash

    ELK Infrastructure

    We will install the first three components on a single server, which we will refer to as our ELK Server. Filebeat will be installed on all of the client servers that we want to gather logs for, which we will refer to collectively as our Client Servers.
    Prerequisites

To complete this tutorial, you will require root access to a CentOS 7 VPS. Instructions to set that up can be found here (steps 3 and 4): Initial Server Setup with CentOS 7.

    If you would prefer to use Ubuntu instead, check out this tutorial: How To Install ELK on Ubuntu 14.04.

    The amount of CPU, RAM, and storage that your ELK Server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a VPS with the following specs for our ELK Server:

    OS: CentOS 7
    RAM: 4GB
    CPU: 2

    In addition to your ELK Server, you will want to have a few other servers that you will gather logs from.

    Let's get started on setting up our ELK Server!
    Install Java 8

    Elasticsearch and Logstash require Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK, if you decide to go that route. Following the steps in this section means that you accept the Oracle Binary License Agreement for Java SE.

    Change to your home directory and download the Oracle Java 8 (Update 65) JDK RPM with these commands:

    cd ~
    wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"

    Then install the RPM with this yum command (if you downloaded a different release, substitute the filename here):

    sudo yum localinstall jdk-8u65-linux-x64.rpm

    Now Java should be installed at /usr/java/jdk1.8.0_65/jre/bin/java, and linked from /usr/bin/java.
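
To confirm the installation, you can check the version reported by the linked binary (a quick sanity check; the exact build string may differ):

```
java -version
# Should report something like: java version "1.8.0_65"
```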

    You may delete the archive file that you downloaded earlier:

    rm ~/jdk-8u65-linux-x64.rpm

Now that Java 8 is installed, let's install Elasticsearch.
    Install Elasticsearch

    Elasticsearch can be installed with a package manager by adding Elastic's package repository.

    Run the following command to import the Elasticsearch public GPG key into rpm:

    sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

    Create and edit a new yum repository file for Elasticsearch:

    sudo vi /etc/yum.repos.d/elasticsearch.repo

    Add the following repository configuration:
    /etc/yum.repos.d/elasticsearch.repo

    [elasticsearch-2.1]
    name=Elasticsearch repository for 2.x packages
    baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
    gpgcheck=1
    gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
    enabled=1

    Save and exit.

    Install Elasticsearch with this command:

    sudo yum -y install elasticsearch

    Elasticsearch is now installed. Let's edit the configuration:

    sudo vi /etc/elasticsearch/elasticsearch.yml

You will want to restrict outside access to your Elasticsearch instance (port 9200) so outsiders can't read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host, uncomment it, and replace its value with "localhost" so it looks like this:
    elasticsearch.yml excerpt (updated)

    network.host: localhost

    Save and exit elasticsearch.yml.

    Now start Elasticsearch:

    sudo systemctl start elasticsearch

    Then run the following command to start Elasticsearch automatically on boot up:

    sudo systemctl enable elasticsearch
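
If you want to verify that Elasticsearch is up before moving on, you can query it over HTTP (it may take a few seconds after startup to accept connections):

```
curl -X GET 'http://localhost:9200'
# A small JSON document with the cluster name and version number means it is running
```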

    Now that Elasticsearch is up and running, let's install Kibana.
    Install Kibana

    Before installing Kibana, let's set up a kibana user and group, which will own and run Kibana:

    sudo groupadd -g 1005 kibana
    sudo useradd -u 1005 -g 1005 kibana

If those commands fail because the 1005 GID or UID already exists, replace the numbers with IDs that are free.
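
A quick way to check whether an ID is taken before choosing a replacement:

```
getent group 1005   # prints a line if the GID is in use, nothing if it is free
getent passwd 1005  # the same check for the UID
```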

    Download Kibana to your home directory with the following command:

    cd ~; wget https://download.elastic.co/kibana/kibana/kibana-4.3.0-linux-x64.tar.gz

    Extract Kibana archive with tar:

    tar xvf kibana-*.tar.gz

    Open the Kibana configuration file for editing:

    vi ~/kibana-4*/config/kibana.yml

    In the Kibana configuration file, find the line that specifies server.host, and replace the IP address ("0.0.0.0" by default) with "localhost":
    kibana.yml excerpt (updated)

    server.host: "localhost"

Save and exit. This setting makes Kibana accessible only from the localhost, which is fine because we will use an Nginx reverse proxy to allow external access.

    Let's copy the Kibana files to a more appropriate location. Create the /opt directory with the following command:

    sudo mkdir -p /opt/kibana

    Now copy the Kibana files into your newly-created directory:

    sudo cp -R ~/kibana-4*/* /opt/kibana/

    Make the kibana user the owner of the files:

    sudo chown -R kibana: /opt/kibana

    Kibana can be started by running /opt/kibana/bin/kibana, but we want it to run as a service. Download a Kibana init script with this command:

    cd /etc/init.d && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-init
    cd /etc/default && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-default

    Now enable the Kibana service, and start it:

    sudo chmod +x /etc/init.d/kibana
    sudo service kibana start
    sudo chkconfig kibana on
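
To check that Kibana started successfully, you can probe its port on the loopback interface (Kibana listens on 5601 by default):

```
curl -I http://localhost:5601
# An HTTP response means Kibana is listening; "connection refused" means it has not started yet
```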

    Before we can use the Kibana web interface, we have to set up a reverse proxy. Let's do that now, with Nginx.
    Install Nginx

    Because we configured Kibana to listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose.

    Note: If you already have an Nginx instance that you want to use, feel free to use that instead. Just make sure to configure Kibana so it is reachable by your Nginx server (you probably want to change the host value, in /opt/kibana/config/kibana.yml, to your Kibana server's private IP address). Also, it is recommended that you enable SSL/TLS.

    Add the EPEL repository to yum:

    sudo yum -y install epel-release

    Now use yum to install Nginx and httpd-tools:

    sudo yum -y install nginx httpd-tools

    Use htpasswd to create an admin user, called "kibanaadmin" (you should use another name), that can access the Kibana web interface:

    sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

    Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.

    Now open the Nginx configuration file in your favorite editor. We will use vi:

    sudo vi /etc/nginx/nginx.conf

    Find the default server block (starts with server {), the last configuration block in the file, and delete it. When you are done, the last two lines in the file should look like this:
    nginx.conf excerpt

    include /etc/nginx/conf.d/*.conf;
    }

    Save and exit.

    Now we will create an Nginx server block in a new file:

    sudo vi /etc/nginx/conf.d/kibana.conf

    Paste the following code block into the file. Be sure to update the server_name to match your server's name:
    /etc/nginx/conf.d/kibana.conf

```
server {
    listen 80;

    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Nginx will also read the htpasswd.users file that we created earlier and require basic authentication.
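
Before starting Nginx, it is worth checking the configuration for syntax errors:

```
sudo nginx -t
# Should print "syntax is ok" and "test is successful"
```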

    Now start and enable Nginx to put our changes into effect:

    sudo systemctl start nginx
    sudo systemctl enable nginx

    Note: This tutorial assumes that SELinux is disabled. If this is not the case, you may need to run the following command for Kibana to work properly: sudo setsebool -P httpd_can_network_connect 1

Kibana is now accessible via your FQDN or the public IP address of your ELK Server, i.e. http://elk_server_public_ip/. If you go there in a web browser and enter the "kibanaadmin" credentials, you should see a Kibana welcome page that asks you to configure an index pattern. We will come back to that after we install the other components.
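
If you prefer to test the proxy from the command line first, you can do so with curl (substitute your own admin username if you chose a different one):

```
curl -I -u kibanaadmin http://elk_server_public_ip/
# curl will prompt for the password; a 200 response means the proxy and basic auth are working
```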
    Install Logstash

    The Logstash package shares the same GPG Key as Elasticsearch, and we already installed that public key, so let's create and edit a new Yum repository file for Logstash:

    sudo vi /etc/yum.repos.d/logstash.repo

    Add the following repository configuration:
    /etc/yum.repos.d/logstash.repo

    [logstash-2.1]
    name=logstash repository for 2.1 packages
    baseurl=http://packages.elasticsearch.org/logstash/2.1/centos
    gpgcheck=1
    gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
    enabled=1

    Save and exit.

    Install Logstash with this command:

    sudo yum -y install logstash

    Logstash is installed but it is not configured yet.
    Generate SSL Certificates

Since we are going to use Filebeat to ship logs from our Client Servers to our ELK Server, we need to create an SSL certificate and key pair. The certificate is used by Filebeat to verify the identity of the ELK Server. On CentOS, the directories that will store the certificate and private key (/etc/pki/tls/certs and /etc/pki/tls/private) already exist, so there is nothing to create.

    Now you have two options for generating your SSL certificates. If you have a DNS setup that will allow your client servers to resolve the IP address of the ELK Server, use Option 2. Otherwise, Option 1 will allow you to use IP addresses.
    Option 1: IP Address

If you don't have a DNS setup that would allow the servers you will gather logs from to resolve the IP address of your ELK Server, you will have to add your ELK Server's private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file:

    sudo vi /etc/pki/tls/openssl.cnf

    Find the [ v3_ca ] section in the file, and add this line under it (substituting in the ELK Server's private IP address):
    openssl.cnf excerpt

    subjectAltName = IP: logstash_server_private_ip

    Save and exit.

    Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:

    cd /etc/pki/tls
    sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

    The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration. If you went with this option, skip option 2 and move on to Configure Logstash.
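
Before moving on, you can confirm that the IP address made it into the certificate:

```
openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'
# The output should show the private IP address you added to openssl.cnf
```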
    Option 2: FQDN (DNS)

    If you have a DNS setup with your private networking, you should create an A record that contains the ELK Server's private IP address—this domain name will be used in the next command, to generate the SSL certificate. Alternatively, you can use a record that points to the server's public IP address. Just be sure that your servers (the ones that you will be gathering logs from) will be able to resolve the domain name to your ELK Server.

    Now generate the SSL certificate and private key, in the appropriate locations (/etc/pki/tls/...), with the following command (substitute in the FQDN of the ELK Server):

    cd /etc/pki/tls
    sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

    The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration.
    Configure Logstash

Logstash configuration files use a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.

    Let's create a configuration file called 02-filebeat-input.conf and set up our "filebeat" input:

    sudo vi /etc/logstash/conf.d/02-filebeat-input.conf

    Insert the following input configuration:
    02-filebeat-input.conf

```
input {
  beats {
    port => 5044
    type => "logs"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```

Save and quit. This specifies a beats input that will listen on TCP port 5044 and use the SSL certificate and private key that we created earlier.

    Now let's create a configuration file called 10-syslog.conf, where we will add a filter for syslog messages:

    sudo vi /etc/logstash/conf.d/10-syslog.conf

    Insert the following syslog filter configuration:
    10-syslog.conf

```
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
```

Save and quit. This filter looks for logs that are labeled as "syslog" type (by Filebeat) and uses grok to parse incoming syslog lines so that they are structured and queryable.
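
For illustration, here is a hypothetical syslog line and the fields that the grok pattern above would extract from it (the input line and its values are made up):

```
# Input:
#   Dec 23 14:30:01 webserver1 sshd[1234]: Failed password for root from 203.0.113.5 port 22 ssh2
# Extracted fields:
#   syslog_timestamp => "Dec 23 14:30:01"
#   syslog_hostname  => "webserver1"
#   syslog_program   => "sshd"
#   syslog_pid       => "1234"
#   syslog_message   => "Failed password for root from 203.0.113.5 port 22 ssh2"
```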

    Lastly, we will create a configuration file called 30-elasticsearch-output.conf:

    sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

    Insert the following output configuration:
    /etc/logstash/conf.d/30-elasticsearch-output.conf

```
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
```

Save and exit. This output configures Logstash to store the logs in Elasticsearch, which is running at localhost:9200.

    With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (e.g. unfiltered Nginx or Apache logs would appear as flat messages instead of categorizing messages by HTTP response codes, source IP addresses, served files, etc.).

    If you want to add filters for other applications that use the Filebeat input, be sure to name the files so they sort between the input and the output configuration (i.e. between 02- and 30-).

    Test your Logstash configuration with this command:

    sudo service logstash configtest

    It should display Configuration OK if there are no syntax errors. Otherwise, try and read the error output to see what's wrong with your Logstash configuration.

    Restart and enable Logstash to put our configuration changes into effect:

    sudo systemctl restart logstash
    sudo chkconfig logstash on
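
Once Logstash has restarted (it can take a little while to come up), you can confirm that the beats input is listening:

```
sudo ss -tlnp | grep 5044
# A LISTEN line on port 5044 means the input from 02-filebeat-input.conf is active
```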

    Now that our ELK Server is ready, let's move onto setting up Filebeat.
    Set Up Filebeat (Add Client Servers)

    Do these steps for each CentOS or RHEL 7 server that you want to send logs to your ELK Server. For instructions on installing Filebeat on Debian-based Linux distributions (e.g. Ubuntu, Debian, etc.), refer to the Set Up Filebeat (Add Client Servers) section of the Ubuntu variation of this tutorial.
    Copy SSL Certificate

On the ELK Server, copy the SSL certificate to the Client Server (substitute the client server's IP address, and your own login):

    scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp

    After providing the login credentials, ensure that the certificate copy was successful. It is required for communication between the client servers and the ELK server.
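
One way to make sure the copy was not corrupted in transit is to compare checksums on both machines:

```
# On the ELK Server:
sha256sum /etc/pki/tls/certs/logstash-forwarder.crt
# On the Client Server, after the scp:
sha256sum /tmp/logstash-forwarder.crt
# The two hashes must match
```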
    Install Filebeat Package

On the Client Server, run the following command to import the Elasticsearch public GPG key into rpm:

    sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

    Create and edit a new yum repository file for Filebeat:

    sudo vi /etc/yum.repos.d/elastic-beats.repo

    Add the following repository configuration:
    /etc/yum.repos.d/elastic-beats.repo

    [beats]
    name=Elastic Beats Repository
    baseurl=https://packages.elastic.co/beats/yum/el/$basearch
    enabled=1
    gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
    gpgcheck=1

    Save and exit.

    Install Filebeat with this command:

    sudo yum -y install filebeat

    Filebeat is installed but it is not configured yet.

    Now copy the ELK Server's SSL certificate into the appropriate location (/etc/pki/tls/certs):

    sudo mkdir -p /etc/pki/tls/certs
    sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

    Configure Filebeat

    Now we will configure Filebeat to connect to Logstash on our ELK Server. This section will step you through modifying the example configuration file that comes with Filebeat. When you complete the steps, you should have a file that looks something like this.

On the Client Server, create and edit the Filebeat configuration file:

    sudo vi /etc/filebeat/filebeat.yml

    Note: Filebeat's configuration file is in YAML format, which means that indentation is very important! Be sure to use the same number of spaces that are indicated in these instructions.

    Near the top of the file, you will see the prospectors section, which is where you can define prospectors that specify which log files should be shipped and how they should be handled. Each prospector is indicated by the - character.

We'll modify the existing prospector to send the secure and messages logs to Logstash. Under paths, comment out the - /var/log/*.log entry. This will prevent Filebeat from sending every .log file in that directory to Logstash. Then add new entries for /var/log/secure and /var/log/messages. It should look something like this when you're done:
    filebeat.yml excerpt 1 of 4

```
...
    paths:
      - /var/log/secure
      - /var/log/messages
      # - /var/log/*.log
...
```
    Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
    filebeat.yml excerpt 2 of 4


    ```
    ...
    document_type: syslog
    ...
    ```
    This specifies that the logs in this prospector are of type syslog (which is the type that our Logstash filter is looking for).

    If you want to send other files to your ELK server, or make any changes to how Filebeat handles your logs, feel free to modify or add prospector entries.

    Next, under the output section, find the line that says elasticsearch:, which indicates the Elasticsearch output section (which we are not going to use). Delete or comment out the entire Elasticsearch output section (up to the line that says logstash:).

    Find the commented out Logstash output section, indicated by the line that says #logstash:, and uncomment it by deleting the preceding #. In this section, uncomment the hosts: ["localhost:5044"] line. Change localhost to the private IP address (or hostname, if you went with that option) of your ELK server:
    filebeat.yml excerpt 3 of 4

```
### Logstash as output
logstash:
  # The Logstash hosts
  hosts: ["ELK_server_private_IP:5044"]
```

    This configures Filebeat to connect to Logstash on your ELK Server at port 5044 (the port that we specified an input for earlier).

    Next, find the tls section, and uncomment it. Then uncomment the line that specifies certificate_authorities, and change its value to ["/etc/pki/tls/certs/logstash-forwarder.crt"]. It should look something like this:
    filebeat.yml excerpt 4 of 4

```
...
tls:
  # List of root certificates for HTTPS server verifications
  certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```

    This configures Filebeat to use the SSL certificate that we created on the ELK Server.

    Save and quit.

    Now start and enable Filebeat to put our changes into place:

    sudo systemctl start filebeat
    sudo chkconfig filebeat on

    Again, if you're not sure if your Filebeat configuration is correct, compare it against this example Filebeat configuration.

Now Filebeat is sending the messages and secure logs to your ELK Server! Repeat this section for all of the other servers that you wish to gather logs for.
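
To confirm on the ELK Server that the shipped logs are actually reaching Elasticsearch, you can list its indices; a new dated index, created by the Logstash output we configured earlier, should appear once events start flowing:

```
curl 'http://localhost:9200/_cat/indices?v'
# Look for a dated index with a growing docs.count
```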
    Connect to Kibana

    When you are finished setting up Filebeat on all of the servers that you want to gather logs for, let's look at Kibana, the web interface that we installed earlier.

In a web browser, go to the FQDN or public IP address of your ELK Server. After entering the "kibanaadmin" credentials, you should see a page prompting you to configure an index pattern.

    Go ahead and select @timestamp from the dropdown menu, then click the Create button to create the first index.

Now click the Discover link in the top navigation bar. By default, this will show you all of the log data over the last 15 minutes. You should see a histogram of log events, with log messages listed below.
    Right now, there won't be much in there because you are only gathering syslogs from your client servers. Here, you can search and browse through your logs. You can also customize your dashboard.

    Try the following things:

    Search for "root" to see if anyone is trying to log into your servers as root
    Search for a particular hostname (search for host: "hostname")
    Change the time frame by selecting an area on the histogram or from the menu above
    Click on messages below the histogram to see how the data is being filtered

    Kibana has many other features, such as graphing and filtering, so feel free to poke around!
    Conclusion

    Now that your syslogs are centralized via Elasticsearch and Logstash, and you are able to visualize them with Kibana 4, you should be off to a good start with centralizing all of your important logs. Remember that you can send pretty much any type of log to Logstash, but the data becomes even more useful if it is parsed and structured with grok.

    To improve your new ELK stack, you should look into gathering and filtering your other logs with Logstash, and creating Kibana dashboards. These topics are covered in the second and third tutorials in this series. Also, if you are having trouble with your setup, follow our How To Troubleshoot Common ELK Stack Issues tutorial.

    Scroll down for links to learn more about using your ELK stack!
    474 changes: 1 addition & 473 deletions readme.md
@@ -1,473 +1 @@
    installation and configuration
  8. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 4 additions and 5 deletions.
    9 changes: 4 additions & 5 deletions readme.md
    @@ -422,15 +422,14 @@ filebeat.yml excerpt 1 of 4
    ...
    ```
    Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
    ```
    ...
    filebeat.yml excerpt 2 of 4
    ...
    filebeat.yml excerpt 2 of 4


    ```
    ...
    document_type: syslog
    ...

    ```
    This specifies that the logs in this prospector are of type syslog (which is the type that our Logstash filter is looking for).

    If you want to send other files to your ELK server, or make any changes to how Filebeat handles your logs, feel free to modify or add prospector entries.
  9. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 6 additions and 3 deletions.
    9 changes: 6 additions & 3 deletions readme.md
    @@ -422,10 +422,13 @@ filebeat.yml excerpt 1 of 4
    ...
    ```
    Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
    filebeat.yml excerpt 2 of 4

    ```
    ...
    filebeat.yml excerpt 2 of 4
    ...
    ```
    ...
    document_type: syslog
    document_type: syslog
    ...

    This specifies that the logs in this prospector are of type syslog (which is the type that our Logstash filter is looking for).
  10. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 2 additions and 1 deletion.
    3 changes: 2 additions & 1 deletion readme.md
    @@ -413,13 +413,14 @@ Near the top of the file, you will see the prospectors section, which is where y
    We'll modify the existing prospector to send secure and messages logs to Logstash. Under paths, comment out the - /var/log/*.log file. This will prevent Filebeat from sending every .log in that directory to Logstash. Then add new entries for syslog and auth.log. It should look something like this when you're done:
    filebeat.yml excerpt 1 of 4

    ```
    ...
    paths:
    - /var/log/secure
    - /var/log/messages
    # - /var/log/*.log
    ...

    ```
    Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
    filebeat.yml excerpt 2 of 4

  11. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 469 additions and 289 deletions.
    758 changes: 469 additions & 289 deletions readme.md
# Installing ELK (CentOS 6,7)
    Introduction

    In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on CentOS 7—that is, Elasticsearch 2.1.x, Logstash 2.1.x, and Kibana 4.3.x. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location, using Filebeat 1.0.x. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.

    Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

    It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering.
    Our Goal

    The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.

    Our ELK stack setup has four main components:

    Logstash: The server component of Logstash that processes incoming logs
    Elasticsearch: Stores all of the logs
    Kibana: Web interface for searching and visualizing logs, which will be proxied through Nginx
    Filebeat: Installed on client servers that will send their logs to Logstash, Filebeat serves as a log shipping agent that utilizes the lumberjack networking protocol to communicate with Logstash

    ELK Infrastructure

    We will install the first three components on a single server, which we will refer to as our ELK Server. Filebeat will be installed on all of the client servers that we want to gather logs for, which we will refer to collectively as our Client Servers.
    Prerequisites

To complete this tutorial, you will require root access to a CentOS 7 VPS. Instructions to set that up can be found here (steps 3 and 4): Initial Server Setup with CentOS 7.

    If you would prefer to use Ubuntu instead, check out this tutorial: How To Install ELK on Ubuntu 14.04.

    The amount of CPU, RAM, and storage that your ELK Server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a VPS with the following specs for our ELK Server:

    OS: CentOS 7
    RAM: 4GB
    CPU: 2

    In addition to your ELK Server, you will want to have a few other servers that you will gather logs from.

    Let's get started on setting up our ELK Server!
    Install Java 8

Elasticsearch and Logstash require Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK if you decide to go that route. Following the steps in this section means that you accept the Oracle Binary License Agreement for Java SE.

    Change to your home directory and download the Oracle Java 8 (Update 65) JDK RPM with these commands:

```
cd ~
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"
```

    Then install the RPM with this yum command (if you downloaded a different release, substitute the filename here):

    sudo yum localinstall jdk-8u65-linux-x64.rpm

    Now Java should be installed at /usr/java/jdk1.8.0_65/jre/bin/java, and linked from /usr/bin/java.
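
If you want a quick sanity check, you can ask the linked binary for its version (the exact build string may differ from this sketch):

```
java -version
# expected to report something like: java version "1.8.0_65"
```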

    You may delete the archive file that you downloaded earlier:

    rm ~/jdk-8u65-linux-x64.rpm

Now that Java 8 is installed, let's install Elasticsearch.
    Install Elasticsearch

    Elasticsearch can be installed with a package manager by adding Elastic's package repository.

    Run the following command to import the Elasticsearch public GPG key into rpm:

    sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

    Create and edit a new yum repository file for Elasticsearch:

    sudo vi /etc/yum.repos.d/elasticsearch.repo

    Add the following repository configuration:
    /etc/yum.repos.d/elasticsearch.repo

```
[elasticsearch-2.1]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
```

    Save and exit.

    Install Elasticsearch with this command:

    sudo yum -y install elasticsearch

    Elasticsearch is now installed. Let's edit the configuration:

    sudo vi /etc/elasticsearch/elasticsearch.yml

You will want to restrict outside access to your Elasticsearch instance (port 9200), so outsiders can't read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host, uncomment it, and replace its value with "localhost" so it looks like this:
    elasticsearch.yml excerpt (updated)

```
network.host: localhost
```

    Save and exit elasticsearch.yml.

    Now start Elasticsearch:

    sudo systemctl start elasticsearch

    Then run the following command to start Elasticsearch automatically on boot up:

    sudo systemctl enable elasticsearch
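
To confirm that Elasticsearch is answering on the loopback interface, you can send it a simple request; the exact name and version fields in the response will vary:

```
curl -XGET 'http://localhost:9200'
# a healthy node returns a small JSON document with the
# node name, cluster name, and version information
```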

    Now that Elasticsearch is up and running, let's install Kibana.
    Install Kibana

    Before installing Kibana, let's set up a kibana user and group, which will own and run Kibana:

```
sudo groupadd -g 1005 kibana
sudo useradd -u 1005 -g 1005 kibana
```

If those commands fail because the 1005 GID or UID already exists, replace the numbers with IDs that are free.

    Download Kibana to your home directory with the following command:

    cd ~; wget https://download.elastic.co/kibana/kibana/kibana-4.3.0-linux-x64.tar.gz

Extract the Kibana archive with tar:

    tar xvf kibana-*.tar.gz

    Open the Kibana configuration file for editing:

    vi ~/kibana-4*/config/kibana.yml

    In the Kibana configuration file, find the line that specifies server.host, and replace the IP address ("0.0.0.0" by default) with "localhost":
    kibana.yml excerpt (updated)

```
server.host: "localhost"
```

Save and exit. This setting makes Kibana accessible only from the localhost. This is fine because we will use an Nginx reverse proxy to allow external access.

    Let's copy the Kibana files to a more appropriate location. Create the /opt directory with the following command:

    sudo mkdir -p /opt/kibana

    Now copy the Kibana files into your newly-created directory:

    sudo cp -R ~/kibana-4*/* /opt/kibana/

    Make the kibana user the owner of the files:

    sudo chown -R kibana: /opt/kibana

    Kibana can be started by running /opt/kibana/bin/kibana, but we want it to run as a service. Download a Kibana init script with this command:

```
cd /etc/init.d && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-init
cd /etc/default && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-default
```

Now make the init script executable, start the Kibana service, and enable it at boot:

```
sudo chmod +x /etc/init.d/kibana
sudo service kibana start
sudo chkconfig kibana on
```
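
At this point Kibana should be listening on its default port, 5601, though only on the loopback interface. A quick way to check (a sketch; ss ships with CentOS 7's iproute package):

```
sudo ss -tlnp | grep 5601
# should show a LISTEN socket bound to 127.0.0.1:5601
```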

    Before we can use the Kibana web interface, we have to set up a reverse proxy. Let's do that now, with Nginx.
    Install Nginx

    Because we configured Kibana to listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose.

    Note: If you already have an Nginx instance that you want to use, feel free to use that instead. Just make sure to configure Kibana so it is reachable by your Nginx server (you probably want to change the host value, in /opt/kibana/config/kibana.yml, to your Kibana server's private IP address). Also, it is recommended that you enable SSL/TLS.

    Add the EPEL repository to yum:

    sudo yum -y install epel-release

    Now use yum to install Nginx and httpd-tools:

    sudo yum -y install nginx httpd-tools

    Use htpasswd to create an admin user, called "kibanaadmin" (you should use another name), that can access the Kibana web interface:

    sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

    Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.

    Now open the Nginx configuration file in your favorite editor. We will use vi:

    sudo vi /etc/nginx/nginx.conf

    Find the default server block (starts with server {), the last configuration block in the file, and delete it. When you are done, the last two lines in the file should look like this:
    nginx.conf excerpt

```
include /etc/nginx/conf.d/*.conf;
}
```

    Save and exit.

    Now we will create an Nginx server block in a new file:

    sudo vi /etc/nginx/conf.d/kibana.conf

    Paste the following code block into the file. Be sure to update the server_name to match your server's name:
    /etc/nginx/conf.d/kibana.conf

```
server {
    listen 80;

    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Also, Nginx will use the htpasswd.users file that we created earlier, and require basic authentication.
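
Before starting Nginx, it's worth letting it validate the new configuration; a syntax error is much easier to fix now than to debug later:

```
sudo nginx -t
# expected: "syntax is ok" and "test is successful"
```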

    Now start and enable Nginx to put our changes into effect:

```
sudo systemctl start nginx
sudo systemctl enable nginx
```

    Note: This tutorial assumes that SELinux is disabled. If this is not the case, you may need to run the following command for Kibana to work properly: sudo setsebool -P httpd_can_network_connect 1

Kibana is now accessible via your FQDN or the public IP address of your ELK Server, i.e. http://elk_server_public_ip/. If you go there in a web browser and enter the "kibanaadmin" credentials, you should see a Kibana welcome page that asks you to configure an index pattern. We will get back to that later, after we install all of the other components.
    Install Logstash

    The Logstash package shares the same GPG Key as Elasticsearch, and we already installed that public key, so let's create and edit a new Yum repository file for Logstash:

    sudo vi /etc/yum.repos.d/logstash.repo

    Add the following repository configuration:
    /etc/yum.repos.d/logstash.repo

```
[logstash-2.1]
name=logstash repository for 2.1 packages
baseurl=http://packages.elasticsearch.org/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
```

    Save and exit.

    Install Logstash with this command:

    sudo yum -y install logstash

    Logstash is installed but it is not configured yet.
    Generate SSL Certificates

Since we are going to use Filebeat to ship logs from our Client Servers to our ELK Server, we need to create an SSL certificate and key pair. The certificate is used by Filebeat to verify the identity of the ELK Server. Create the directories that will store the certificate and private key (on CentOS 7 they typically exist already, but creating them is harmless) with the following commands:
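
```
# -p makes these safe to re-run if the directories already exist
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir -p /etc/pki/tls/private
```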

    Now you have two options for generating your SSL certificates. If you have a DNS setup that will allow your client servers to resolve the IP address of the ELK Server, use Option 2. Otherwise, Option 1 will allow you to use IP addresses.
    Option 1: IP Address

If you don't have a DNS setup that would allow the servers you will gather logs from to resolve the IP address of your ELK Server, you will have to add your ELK Server's private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file:

    sudo vi /etc/pki/tls/openssl.cnf

    Find the [ v3_ca ] section in the file, and add this line under it (substituting in the ELK Server's private IP address):
    openssl.cnf excerpt

```
subjectAltName = IP: logstash_server_private_ip
```

    Save and exit.

    Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:

```
cd /etc/pki/tls
sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
```

The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash, but we will do that a little later. Let's complete our Logstash configuration. If you went with this option, skip Option 2 and move on to Configure Logstash.
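
If you want to confirm that the IP address actually made it into the certificate, you can inspect the SAN field (a sketch, assuming the paths used above):

```
openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'
# should print the IP address you added to openssl.cnf
```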
    Option 2: FQDN (DNS)

    If you have a DNS setup with your private networking, you should create an A record that contains the ELK Server's private IP address—this domain name will be used in the next command, to generate the SSL certificate. Alternatively, you can use a record that points to the server's public IP address. Just be sure that your servers (the ones that you will be gathering logs from) will be able to resolve the domain name to your ELK Server.

    Now generate the SSL certificate and private key, in the appropriate locations (/etc/pki/tls/...), with the following command (substitute in the FQDN of the ELK Server):

```
cd /etc/pki/tls
sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
```

The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash, but we will do that a little later. Let's complete our Logstash configuration.
    Configure Logstash

Logstash configuration files are written in JSON format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.

    Let's create a configuration file called 02-filebeat-input.conf and set up our "filebeat" input:

    sudo vi /etc/logstash/conf.d/02-filebeat-input.conf

    Insert the following input configuration:
    02-filebeat-input.conf

```
input {
  beats {
    port => 5044
    type => "logs"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```

Save and quit. This specifies a beats input that will listen on TCP port 5044, and it will use the SSL certificate and private key that we created earlier.

    Now let's create a configuration file called 10-syslog.conf, where we will add a filter for syslog messages:

    sudo vi /etc/logstash/conf.d/10-syslog.conf

    Insert the following syslog filter configuration:
    10-syslog.conf

```
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
```

Save and quit. This filter looks for logs that are labeled as "syslog" type (by Filebeat), and it will try to use grok to parse incoming syslog logs to make them structured and queryable.
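
As a rough illustration of what the grok pattern produces (the log line and values here are hypothetical), an incoming event would be split into fields roughly like this:

```
# example input line (hypothetical):
#   Feb  3 12:14:01 client1 sshd[1234]: Failed password for invalid user admin
#
# fields grok would extract, approximately:
#   syslog_timestamp => "Feb  3 12:14:01"
#   syslog_hostname  => "client1"
#   syslog_program   => "sshd"
#   syslog_pid       => "1234"
#   syslog_message   => "Failed password for invalid user admin"
```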

    Lastly, we will create a configuration file called 30-elasticsearch-output.conf:

    sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

    Insert the following output configuration:
    /etc/logstash/conf.d/30-elasticsearch-output.conf

```
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
```

Save and exit. This output configures Logstash to store the logs in Elasticsearch, which is running at localhost:9200.

    With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (e.g. unfiltered Nginx or Apache logs would appear as flat messages instead of categorizing messages by HTTP response codes, source IP addresses, served files, etc.).

    If you want to add filters for other applications that use the Filebeat input, be sure to name the files so they sort between the input and the output configuration (i.e. between 02- and 30-).

    Test your Logstash configuration with this command:

    sudo service logstash configtest

It should display Configuration OK if there are no syntax errors. Otherwise, read the error output to see what's wrong with your Logstash configuration.

    Restart and enable Logstash to put our configuration changes into effect:

```
sudo systemctl restart logstash
sudo chkconfig logstash on
```
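
Once Logstash has finished starting (which can take several seconds), you can optionally confirm that the Beats input is listening:

```
sudo ss -tlnp | grep 5044
# should show a LISTEN socket for the Logstash (java) process
```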

Now that our ELK Server is ready, let's move on to setting up Filebeat.
    Set Up Filebeat (Add Client Servers)

    Do these steps for each CentOS or RHEL 7 server that you want to send logs to your ELK Server. For instructions on installing Filebeat on Debian-based Linux distributions (e.g. Ubuntu, Debian, etc.), refer to the Set Up Filebeat (Add Client Servers) section of the Ubuntu variation of this tutorial.
    Copy SSL Certificate

On the ELK Server, copy the SSL certificate to the Client Server (substitute the client server's IP address, and your own login):

    scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp

    After providing the login credentials, ensure that the certificate copy was successful. It is required for communication between the client servers and the ELK server.
    Install Filebeat Package

On the Client Server, run the following command to import the Elasticsearch public GPG key into rpm:

    sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

    Create and edit a new yum repository file for Filebeat:

    sudo vi /etc/yum.repos.d/elastic-beats.repo

    Add the following repository configuration:
    /etc/yum.repos.d/elastic-beats.repo

```
[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
```

    Save and exit.

    Install Filebeat with this command:

    sudo yum -y install filebeat

    Filebeat is installed but it is not configured yet.

    Now copy the ELK Server's SSL certificate into the appropriate location (/etc/pki/tls/certs):

```
sudo mkdir -p /etc/pki/tls/certs
sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
```

    Configure Filebeat

Now we will configure Filebeat to connect to Logstash on our ELK Server. This section will step you through modifying the example configuration file that comes with Filebeat. When you complete the steps, you should have a file that looks something like the excerpts below.

On the Client Server, edit the Filebeat configuration file:

    sudo vi /etc/filebeat/filebeat.yml

    Note: Filebeat's configuration file is in YAML format, which means that indentation is very important! Be sure to use the same number of spaces that are indicated in these instructions.

    Near the top of the file, you will see the prospectors section, which is where you can define prospectors that specify which log files should be shipped and how they should be handled. Each prospector is indicated by the - character.

We'll modify the existing prospector to send the secure and messages logs to Logstash. Under paths, comment out the - /var/log/*.log entry. This will prevent Filebeat from sending every .log file in that directory to Logstash. Then add new entries for /var/log/secure and /var/log/messages. It should look something like this when you're done:
    filebeat.yml excerpt 1 of 4

```
...
      paths:
        - /var/log/secure
        - /var/log/messages
        # - /var/log/*.log
...
```

    Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
    filebeat.yml excerpt 2 of 4

```
...
      document_type: syslog
...
```

    This specifies that the logs in this prospector are of type syslog (which is the type that our Logstash filter is looking for).

    If you want to send other files to your ELK server, or make any changes to how Filebeat handles your logs, feel free to modify or add prospector entries.

    Next, under the output section, find the line that says elasticsearch:, which indicates the Elasticsearch output section (which we are not going to use). Delete or comment out the entire Elasticsearch output section (up to the line that says logstash:).

    Find the commented out Logstash output section, indicated by the line that says #logstash:, and uncomment it by deleting the preceding #. In this section, uncomment the hosts: ["localhost:5044"] line. Change localhost to the private IP address (or hostname, if you went with that option) of your ELK server:
    filebeat.yml excerpt 3 of 4

```
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["ELK_server_private_IP:5044"]
```

This configures Filebeat to connect to Logstash on your ELK Server at port 5044 (the port we specified an input for earlier).
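
A quick reachability check from the client can save debugging time later. This sketch assumes the nc utility (from the nmap-ncat package on CentOS 7) is available; substitute your ELK Server's address:

```
nc -zv ELK_server_private_IP 5044
# "Connected" (or "succeeded") means the Beats port is reachable
```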

    Next, find the tls section, and uncomment it. Then uncomment the line that specifies certificate_authorities, and change its value to ["/etc/pki/tls/certs/logstash-forwarder.crt"]. It should look something like this:
    filebeat.yml excerpt 4 of 4

```
...
    tls:
      # List of root certificates for HTTPS server verifications
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```

    This configures Filebeat to use the SSL certificate that we created on the ELK Server.

    Save and quit.

    Now start and enable Filebeat to put our changes into place:

```
sudo systemctl start filebeat
sudo chkconfig filebeat on
```

Again, if you're not sure whether your Filebeat configuration is correct, compare it against the excerpts above.

Now Filebeat is sending your messages and secure logs to your ELK Server! Repeat this section for all of the other servers that you wish to gather logs for.
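
Before moving on, you can verify from the ELK Server that events are actually reaching Elasticsearch. With the simple output configured earlier, Logstash writes to its default daily indices (logstash-YYYY.MM.DD), so a wildcard query should return hits:

```
curl -XGET 'http://localhost:9200/logstash-*/_search?pretty'
# a non-zero "total" in the "hits" section means the pipeline works
```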
    Connect to Kibana

    When you are finished setting up Filebeat on all of the servers that you want to gather logs for, let's look at Kibana, the web interface that we installed earlier.

In a web browser, go to the FQDN or public IP address of your ELK Server. After entering the "kibanaadmin" credentials, you should see a page prompting you to configure an index pattern.
  12. nu11secur1ty revised this gist Jan 16, 2016. No changes.
  13. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 289 additions and 469 deletions.
    758 changes: 289 additions & 469 deletions readme.md
    Original file line number Diff line number Diff line change
    @@ -1,470 +1,290 @@
    # Installing ELK (CentOS 6,7)
    Introduction

    In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on CentOS 7—that is, Elasticsearch 2.1.x, Logstash 2.1.x, and Kibana 4.3.x. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location, using Filebeat 1.0.x. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.

    Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

    It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering.
    Our Goal

    The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.

    Our ELK stack setup has four main components:

    Logstash: The server component of Logstash that processes incoming logs
    Elasticsearch: Stores all of the logs
    Kibana: Web interface for searching and visualizing logs, which will be proxied through Nginx
    Filebeat: Installed on client servers that will send their logs to Logstash, Filebeat serves as a log shipping agent that utilizes the lumberjack networking protocol to communicate with Logstash

    ELK Infrastructure

    We will install the first three components on a single server, which we will refer to as our ELK Server. Filebeat will be installed on all of the client servers that we want to gather logs for, which we will refer to collectively as our Client Servers.
    Prerequisites

    To complete this tutorial, you will require root access to an CentOS 7 VPS. Instructions to set that up can be found here (steps 3 and 4): Initial Server Setup with CentOS 7.

    If you would prefer to use Ubuntu instead, check out this tutorial: How To Install ELK on Ubuntu 14.04.

    The amount of CPU, RAM, and storage that your ELK Server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a VPS with the following specs for our ELK Server:

    OS: CentOS 7
    RAM: 4GB
    CPU: 2

    In addition to your ELK Server, you will want to have a few other servers that you will gather logs from.

    Let's get started on setting up our ELK Server!
    Install Java 8

    Elasticsearch and Logstash require Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK, if you decide to go that route. Following the steps in this section means that you accept the Oracle Binary License Agreement for Java SE.

    Change to your home directory and download the Oracle Java 8 (Update 65) JDK RPM with these commands:

    cd ~
    wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"

    Then install the RPM with this yum command (if you downloaded a different release, substitute the filename here):

    sudo yum localinstall jdk-8u65-linux-x64.rpm

    Now Java should be installed at /usr/java/jdk1.8.0_65/jre/bin/java, and linked from /usr/bin/java.

    You may delete the archive file that you downloaded earlier:

    rm ~/jdk-8u65-linux-x64.rpm

    Now that Java 8 is installed, let's install ElasticSearch.
    Install Elasticsearch

    Elasticsearch can be installed with a package manager by adding Elastic's package repository.

    Run the following command to import the Elasticsearch public GPG key into rpm:

    sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

    Create and edit a new yum repository file for Elasticsearch:

    sudo vi /etc/yum.repos.d/elasticsearch.repo

    Add the following repository configuration:
    /etc/yum.repos.d/elasticsearch.repo

    [elasticsearch-2.1]
    name=Elasticsearch repository for 2.x packages
    baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
    gpgcheck=1
    gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
    enabled=1

    Save and exit.

    Install Elasticsearch with this command:

    sudo yum -y install elasticsearch

    Elasticsearch is now installed. Let's edit the configuration:

    sudo vi /etc/elasticsearch/elasticsearch.yml

    You will want to restrict outside access to your Elasticsearch instance (port 9200), so outsiders can't read your data or shutdown your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host, uncomment it, and replace its value with "localhost" so it looks like this:
    elasticsearch.yml excerpt (updated)

    network.host: localhost

    Save and exit elasticsearch.yml.

    Now start Elasticsearch:

    sudo systemctl start elasticsearch

    Then run the following command to start Elasticsearch automatically on boot up:

    sudo systemctl enable elasticsearch

    Now that Elasticsearch is up and running, let's install Kibana.
    Install Kibana

    Before installing Kibana, let's set up a kibana user and group, which will own and run Kibana:

    sudo groupadd -g 1005 kibana
    sudo useradd -u 1005 -g 1005 kibana

    If those commands fail because the 1005 GID or UID already exist, replace the number with IDs that are free.

    Download Kibana to your home directory with the following command:

    cd ~; wget https://download.elastic.co/kibana/kibana/kibana-4.3.0-linux-x64.tar.gz

    Extract Kibana archive with tar:

    tar xvf kibana-*.tar.gz

    Open the Kibana configuration file for editing:

    vi ~/kibana-4*/config/kibana.yml

    In the Kibana configuration file, find the line that specifies server.host, and replace the IP address ("0.0.0.0" by default) with "localhost":
    kibana.yml excerpt (updated)

    server.host: "localhost"

    Save and exit. This setting makes it so Kibana will only be accessible to the localhost. This is fine because we will use an Nginx reverse proxy to allow external access.

    Let's copy the Kibana files to a more appropriate location. Create the /opt directory with the following command:

    sudo mkdir -p /opt/kibana

    Now copy the Kibana files into your newly-created directory:

    sudo cp -R ~/kibana-4*/* /opt/kibana/

    Make the kibana user the owner of the files:

    sudo chown -R kibana: /opt/kibana

    Kibana can be started by running /opt/kibana/bin/kibana, but we want it to run as a service. Download a Kibana init script with this command:

    cd /etc/init.d && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-init
    cd /etc/default && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-default

    Now enable the Kibana service, and start it:

    sudo chmod +x /etc/init.d/kibana
    sudo service kibana start
    sudo chkconfig kibana on

    Before we can use the Kibana web interface, we have to set up a reverse proxy. Let's do that now, with Nginx.
    Install Nginx

    Because we configured Kibana to listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose.

    Note: If you already have an Nginx instance that you want to use, feel free to use that instead. Just make sure to configure Kibana so it is reachable by your Nginx server (you probably want to change the host value, in /opt/kibana/config/kibana.yml, to your Kibana server's private IP address). Also, it is recommended that you enable SSL/TLS.

    Add the EPEL repository to yum:

    sudo yum -y install epel-release

    Now use yum to install Nginx and httpd-tools:

    sudo yum -y install nginx httpd-tools

    Use htpasswd to create an admin user, called "kibanaadmin" (you should use another name), that can access the Kibana web interface:

    sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

    Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.

    Now open the Nginx configuration file in your favorite editor. We will use vi:

    sudo vi /etc/nginx/nginx.conf

    Find the default server block (starts with server {), the last configuration block in the file, and delete it. When you are done, the last two lines in the file should look like this:
    nginx.conf excerpt

    include /etc/nginx/conf.d/*.conf;
    }

    Save and exit.

    Now we will create an Nginx server block in a new file:

    sudo vi /etc/nginx/conf.d/kibana.conf

    Paste the following code block into the file. Be sure to update the server_name to match your server's name:
    /etc/nginx/conf.d/kibana.conf

    server {
    listen 80;

    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
    proxy_pass http://localhost:5601;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    }
    }

    Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Also, Nginx will use the htpasswd.users file, that we created earlier, and require basic authentication.

    Now start and enable Nginx to put our changes into effect:

    sudo systemctl start nginx
    sudo systemctl enable nginx

    Note: This tutorial assumes that SELinux is disabled. If this is not the case, you may need to run the following command for Kibana to work properly: sudo setsebool -P httpd_can_network_connect 1

    Kibana is now accessible via your FQDN or the public IP address of your ELK Server i.e. http://elk_server_public_ip/. If you go there in a web browser, after entering the "kibanaadmin" credentials, you should see a Kibana welcome page which will ask you to configure an index pattern. Let's get back to that later, after we install all of the other components.
    Install Logstash

    The Logstash package shares the same GPG Key as Elasticsearch, and we already installed that public key, so let's create and edit a new Yum repository file for Logstash:

    sudo vi /etc/yum.repos.d/logstash.repo

    Add the following repository configuration:
    /etc/yum.repos.d/logstash.repo

    [logstash-2.1]
    name=logstash repository for 2.1 packages
    baseurl=http://packages.elasticsearch.org/logstash/2.1/centos
    gpgcheck=1
    gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
    enabled=1

    Save and exit.

    Install Logstash with this command:

    sudo yum -y install logstash

    Logstash is installed but it is not configured yet.
    Generate SSL Certificates

    Since we are going to use Filebeat to ship logs from our Client Servers to our ELK Server, we need to create an SSL certificate and key pair. The certificate is used by Filebeat to verify the identity of ELK Server. Create the directories that will store the certificate and private key with the following commands:

    Now you have two options for generating your SSL certificates. If you have a DNS setup that will allow your client servers to resolve the IP address of the ELK Server, use Option 2. Otherwise, Option 1 will allow you to use IP addresses.
    Option 1: IP Address

If you don't have a DNS setup that would allow the servers you will gather logs from to resolve the IP address of your ELK Server, you will have to add your ELK Server's private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file:

    sudo vi /etc/pki/tls/openssl.cnf

    Find the [ v3_ca ] section in the file, and add this line under it (substituting in the ELK Server's private IP address):
    openssl.cnf excerpt

    subjectAltName = IP: logstash_server_private_ip

    Save and exit.

    Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:

    cd /etc/pki/tls
    sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

    The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration. If you went with this option, skip option 2 and move on to Configure Logstash.
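
As an optional sanity check, you can confirm that the IP address made it into the certificate's SAN field (run from /etc/pki/tls):

```
openssl x509 -in certs/logstash-forwarder.crt -noout -text | grep -A 1 "Subject Alternative Name"
```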
    Option 2: FQDN (DNS)

    If you have a DNS setup with your private networking, you should create an A record that contains the ELK Server's private IP address—this domain name will be used in the next command, to generate the SSL certificate. Alternatively, you can use a record that points to the server's public IP address. Just be sure that your servers (the ones that you will be gathering logs from) will be able to resolve the domain name to your ELK Server.

    Now generate the SSL certificate and private key, in the appropriate locations (/etc/pki/tls/...), with the following command (substitute in the FQDN of the ELK Server):

    cd /etc/pki/tls
    sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

    The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration.
    Configure Logstash

Logstash configuration files use a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.

    Let's create a configuration file called 02-filebeat-input.conf and set up our "filebeat" input:

    sudo vi /etc/logstash/conf.d/02-filebeat-input.conf

    Insert the following input configuration:
    02-filebeat-input.conf

```
input {
  beats {
    port => 5044
    type => "logs"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```

Save and quit. This specifies a beats input that will listen on TCP port 5044, and it will use the SSL certificate and private key that we created earlier.

    Now let's create a configuration file called 10-syslog.conf, where we will add a filter for syslog messages:

    sudo vi /etc/logstash/conf.d/10-syslog.conf

    Insert the following syslog filter configuration:
    10-syslog.conf

```
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
```

Save and quit. This filter looks for logs that are labeled as "syslog" type (by Filebeat), and it will use grok to parse the incoming syslog lines into structured, queryable fields.
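
For illustration, a hypothetical syslog line such as the one below would be broken into the following fields (the host name, program, and message are made up):

```
# Example input line (hypothetical):
#   Feb  3 12:00:01 webserver sshd[1234]: Failed password for invalid user admin
#
# Fields extracted by the grok pattern above:
#   syslog_timestamp => "Feb  3 12:00:01"
#   syslog_hostname  => "webserver"
#   syslog_program   => "sshd"
#   syslog_pid       => "1234"
#   syslog_message   => "Failed password for invalid user admin"
```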

    Lastly, we will create a configuration file called 30-elasticsearch-output.conf:

    sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

    Insert the following output configuration:
    /etc/logstash/conf.d/30-elasticsearch-output.conf

```
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
```

Save and exit. This output configures Logstash to store the logs in Elasticsearch, which is running at localhost:9200.
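
If you want to confirm that Elasticsearch is reachable at that address, a quick query from the ELK Server should return the cluster's name and version information:

```
curl -XGET 'http://localhost:9200/?pretty'
```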

    With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (e.g. unfiltered Nginx or Apache logs would appear as flat messages instead of categorizing messages by HTTP response codes, source IP addresses, served files, etc.).

    If you want to add filters for other applications that use the Filebeat input, be sure to name the files so they sort between the input and the output configuration (i.e. between 02- and 30-).

    Test your Logstash configuration with this command:

    sudo service logstash configtest

It should display Configuration OK if there are no syntax errors. Otherwise, read the error output to see what's wrong with your Logstash configuration.

    Restart and enable Logstash to put our configuration changes into effect:

    sudo systemctl restart logstash
    sudo chkconfig logstash on
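
Logstash can take a little while to start up. Once it is running, you can confirm that the beats input is listening on port 5044 (ss is part of the iproute package, installed by default on CentOS 7):

```
sudo ss -tlnp | grep 5044
```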

    Now that our ELK Server is ready, let's move onto setting up Filebeat.
    Set Up Filebeat (Add Client Servers)

    Do these steps for each CentOS or RHEL 7 server that you want to send logs to your ELK Server. For instructions on installing Filebeat on Debian-based Linux distributions (e.g. Ubuntu, Debian, etc.), refer to the Set Up Filebeat (Add Client Servers) section of the Ubuntu variation of this tutorial.
    Copy SSL Certificate

On the ELK Server, copy the SSL certificate to the Client Server (substitute the client server's IP address and your own login):

    scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp

    After providing the login credentials, ensure that the certificate copy was successful. It is required for communication between the client servers and the ELK server.
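
One simple way to verify the copy is to compare checksums on the two machines; the output should be identical:

```
# On the ELK Server:
sha256sum /etc/pki/tls/certs/logstash-forwarder.crt

# On the Client Server:
sha256sum /tmp/logstash-forwarder.crt
```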
    Install Filebeat Package

On the Client Server, run the following command to import the Elasticsearch public GPG key into rpm:

    sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

    Create and edit a new yum repository file for Filebeat:

    sudo vi /etc/yum.repos.d/elastic-beats.repo

    Add the following repository configuration:
    /etc/yum.repos.d/elastic-beats.repo

```
[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
```

    Save and exit.

    Install Filebeat with this command:

    sudo yum -y install filebeat

    Filebeat is installed but it is not configured yet.

    Now copy the ELK Server's SSL certificate into the appropriate location (/etc/pki/tls/certs):

    sudo mkdir -p /etc/pki/tls/certs
    sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

    Configure Filebeat

    Now we will configure Filebeat to connect to Logstash on our ELK Server. This section will step you through modifying the example configuration file that comes with Filebeat. When you complete the steps, you should have a file that looks something like this.

On the Client Server, create and edit the Filebeat configuration file:

    sudo vi /etc/filebeat/filebeat.yml

    Note: Filebeat's configuration file is in YAML format, which means that indentation is very important! Be sure to use the same number of spaces that are indicated in these instructions.

    Near the top of the file, you will see the prospectors section, which is where you can define prospectors that specify which log files should be shipped and how they should be handled. Each prospector is indicated by the - character.

We'll modify the existing prospector to send the secure and messages logs to Logstash. Under paths, comment out the - /var/log/*.log entry. This will prevent Filebeat from sending every .log file in that directory to Logstash. Then add new entries for /var/log/secure and /var/log/messages. It should look something like this when you're done:
    filebeat.yml excerpt 1 of 4
    ```
...
      paths:
        - /var/log/secure
        - /var/log/messages
        # - /var/log/*.log
...
    ```
    Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
    filebeat.yml excerpt 2 of 4

```
...
      document_type: syslog
...
```

    This specifies that the logs in this prospector are of type syslog (which is the type that our Logstash filter is looking for).

    If you want to send other files to your ELK server, or make any changes to how Filebeat handles your logs, feel free to modify or add prospector entries.

    Next, under the output section, find the line that says elasticsearch:, which indicates the Elasticsearch output section (which we are not going to use). Delete or comment out the entire Elasticsearch output section (up to the line that says logstash:).

    Find the commented out Logstash output section, indicated by the line that says #logstash:, and uncomment it by deleting the preceding #. In this section, uncomment the hosts: ["localhost:5044"] line. Change localhost to the private IP address (or hostname, if you went with that option) of your ELK server:
    filebeat.yml excerpt 3 of 4

```
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["ELK_server_private_IP:5044"]
```

    This configures Filebeat to connect to Logstash on your ELK Server at port 5044 (the port that we specified an input for earlier).

    Next, find the tls section, and uncomment it. Then uncomment the line that specifies certificate_authorities, and change its value to ["/etc/pki/tls/certs/logstash-forwarder.crt"]. It should look something like this:
    filebeat.yml excerpt 4 of 4

```
...
    tls:
      # List of root certificates for HTTPS server verifications
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
...
```

    This configures Filebeat to use the SSL certificate that we created on the ELK Server.

    Save and quit.

    Now start and enable Filebeat to put our changes into place:

    sudo systemctl start filebeat
    sudo chkconfig filebeat on

    Again, if you're not sure if your Filebeat configuration is correct, compare it against this example Filebeat configuration.

    Now Filebeat is sending your syslog messages and secure files to your ELK Server! Repeat this section for all of the other servers that you wish to gather logs for.
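
To confirm that events are actually arriving, you can query Elasticsearch on the ELK Server. With the simple output configuration above, Logstash writes to its default logstash-YYYY.MM.DD indices, so a search against logstash-* should return your syslog messages:

```
curl -XGET 'http://localhost:9200/logstash-*/_search?pretty'
```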
    Connect to Kibana

    When you are finished setting up Filebeat on all of the servers that you want to gather logs for, let's look at Kibana, the web interface that we installed earlier.

In a web browser, go to the FQDN or public IP address of your ELK Server. After entering the "kibanaadmin" credentials, you should see a page prompting you to configure an index pattern.
    1: Introduction
    2: In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on CentOS 7—that is, Elasticsearch 2.1.x, Logstash 2.1.x, and Kibana 4.3.x. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location, using Filebeat 1.0.x. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.
    3: Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.
    4: It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering.
    5: Our Goal
    6: The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.
    7: Our ELK stack setup has four main components:
    8: Logstash: The server component of Logstash that processes incoming logs
    9: Elasticsearch: Stores all of the logs
    10: Kibana: Web interface for searching and visualizing logs, which will be proxied through Nginx
    11: Filebeat: Installed on client servers that will send their logs to Logstash, Filebeat serves as a log shipping agent that utilizes the lumberjack networking protocol to communicate with Logstash
    12: ELK Infrastructure
    13: We will install the first three components on a single server, which we will refer to as our ELK Server. Filebeat will be installed on all of the client servers that we want to gather logs for, which we will refer to collectively as our Client Servers.
    14: Prerequisites
    15: To complete this tutorial, you will require root access to an CentOS 7 VPS. Instructions to set that up can be found here (steps 3 and 4): Initial Server Setup with CentOS 7.
    16: If you would prefer to use Ubuntu instead, check out this tutorial: How To Install ELK on Ubuntu 14.04.
    17: The amount of CPU, RAM, and storage that your ELK Server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a VPS with the following specs for our ELK Server:
    18: OS: CentOS 7
    19: RAM: 4GB
    20: CPU: 2
    21: In addition to your ELK Server, you will want to have a few other servers that you will gather logs from.
    22: Let's get started on setting up our ELK Server!
    23: Install Java 8
    24: Elasticsearch and Logstash require Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK, if you decide to go that route. Following the steps in this section means that you accept the Oracle Binary License Agreement for Java SE.
    25: Change to your home directory and download the Oracle Java 8 (Update 65) JDK RPM with these commands:
    26: cd ~
    27: wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"
    28: Then install the RPM with this yum command (if you downloaded a different release, substitute the filename here):
    29: sudo yum localinstall jdk-8u65-linux-x64.rpm
    30: Now Java should be installed at /usr/java/jdk1.8.0_65/jre/bin/java, and linked from /usr/bin/java.
    31: You may delete the archive file that you downloaded earlier:
    32: rm ~/jdk-8u65-linux-x64.rpm
    33: Now that Java 8 is installed, let's install ElasticSearch.
    34: Install Elasticsearch
    35: Elasticsearch can be installed with a package manager by adding Elastic's package repository.
    36: Run the following command to import the Elasticsearch public GPG key into rpm:
    37: sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
    38: Create and edit a new yum repository file for Elasticsearch:
    39: sudo vi /etc/yum.repos.d/elasticsearch.repo
    40: Add the following repository configuration:
    41: /etc/yum.repos.d/elasticsearch.repo
    42: [elasticsearch-2.1]
    43: name=Elasticsearch repository for 2.x packages
    44: baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
    45: gpgcheck=1
    46: gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
    47: enabled=1
    48: Save and exit.
    49: Install Elasticsearch with this command:
    50: sudo yum -y install elasticsearch
    51: Elasticsearch is now installed. Let's edit the configuration:
    52: sudo vi /etc/elasticsearch/elasticsearch.yml
    53: You will want to restrict outside access to your Elasticsearch instance (port 9200), so outsiders can't read your data or shutdown your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host, uncomment it, and replace its value with "localhost" so it looks like this:
    54: elasticsearch.yml excerpt (updated)
    55: network.host: localhost
    56: Save and exit elasticsearch.yml.
    57: Now start Elasticsearch:
    58: sudo systemctl start elasticsearch
    59: Then run the following command to start Elasticsearch automatically on boot up:
    60: sudo systemctl enable elasticsearch
    61: Now that Elasticsearch is up and running, let's install Kibana.
    62: Install Kibana
    63: Before installing Kibana, let's set up a kibana user and group, which will own and run Kibana:
    64: sudo groupadd -g 1005 kibana
    65: sudo useradd -u 1005 -g 1005 kibana
    66: If those commands fail because the 1005 GID or UID already exist, replace the number with IDs that are free.
    67: Download Kibana to your home directory with the following command:
    68: cd ~; wget https://download.elastic.co/kibana/kibana/kibana-4.3.0-linux-x64.tar.gz
    69: Extract Kibana archive with tar:
    70: tar xvf kibana-*.tar.gz
    71: Open the Kibana configuration file for editing:
    72: vi ~/kibana-4*/config/kibana.yml
    73: In the Kibana configuration file, find the line that specifies server.host, and replace the IP address ("0.0.0.0" by default) with "localhost":
    74: kibana.yml excerpt (updated)
    75: server.host: "localhost"
    76: Save and exit. This setting makes it so Kibana will only be accessible to the localhost. This is fine because we will use an Nginx reverse proxy to allow external access.
    77: Let's copy the Kibana files to a more appropriate location. Create the /opt directory with the following command:
    78: sudo mkdir -p /opt/kibana
    79: Now copy the Kibana files into your newly-created directory:
    80: sudo cp -R ~/kibana-4*/* /opt/kibana/
    81: Make the kibana user the owner of the files:
    82: sudo chown -R kibana: /opt/kibana
    83: Kibana can be started by running /opt/kibana/bin/kibana, but we want it to run as a service. Download a Kibana init script with this command:
    84: cd /etc/init.d && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-init
    85: cd /etc/default && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-default
    86: Now enable the Kibana service, and start it:
    87: sudo chmod +x /etc/init.d/kibana
    88: sudo service kibana start
    89: sudo chkconfig kibana on
    90: Before we can use the Kibana web interface, we have to set up a reverse proxy. Let's do that now, with Nginx.
    91: Install Nginx
    92: Because we configured Kibana to listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose.
    93: Note: If you already have an Nginx instance that you want to use, feel free to use that instead. Just make sure to configure Kibana so it is reachable by your Nginx server (you probably want to change the host value, in /opt/kibana/config/kibana.yml, to your Kibana server's private IP address). Also, it is recommended that you enable SSL/TLS.
    94: Add the EPEL repository to yum:
    95: sudo yum -y install epel-release
    96: Now use yum to install Nginx and httpd-tools:
    97: sudo yum -y install nginx httpd-tools
    98: Use htpasswd to create an admin user, called "kibanaadmin" (you should use another name), that can access the Kibana web interface:
    99: sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
    100: Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.
    101: Now open the Nginx configuration file in your favorite editor. We will use vi:
    102: sudo vi /etc/nginx/nginx.conf
    103: Find the default server block (starts with server {), the last configuration block in the file, and delete it. When you are done, the last two lines in the file should look like this:
    104: nginx.conf excerpt
    105: include /etc/nginx/conf.d/*.conf;
    106: }
    107: Save and exit.
    108: Now we will create an Nginx server block in a new file:
    109: sudo vi /etc/nginx/conf.d/kibana.conf
    110: Paste the following code block into the file. Be sure to update the server_name to match your server's name:
    111: /etc/nginx/conf.d/kibana.conf
    112: server {
    113: listen 80;
    114: server_name example.com;
    115: auth_basic "Restricted Access";
    116: auth_basic_user_file /etc/nginx/htpasswd.users;
    117: location / {
    118: proxy_pass http://localhost:5601;
    119: proxy_http_version 1.1;
    120: proxy_set_header Upgrade $http_upgrade;
    121: proxy_set_header Connection 'upgrade';
    122: proxy_set_header Host $host;
    123: proxy_cache_bypass $http_upgrade;
    124: }
    125: }
    126: Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Also, Nginx will use the htpasswd.users file, that we created earlier, and require basic authentication.
    127: Now start and enable Nginx to put our changes into effect:
    128: sudo systemctl start nginx
    129: sudo systemctl enable nginx
    130: Note: This tutorial assumes that SELinux is disabled. If this is not the case, you may need to run the following command for Kibana to work properly: sudo setsebool -P httpd_can_network_connect 1
    131: Kibana is now accessible via your FQDN or the public IP address of your ELK Server i.e. http://elk_server_public_ip/. If you go there in a web browser, after entering the "kibanaadmin" credentials, you should see a Kibana welcome page which will ask you to configure an index pattern. Let's get back to that later, after we install all of the other components.
    132: Install Logstash
    133: The Logstash package shares the same GPG Key as Elasticsearch, and we already installed that public key, so let's create and edit a new Yum repository file for Logstash:
    134: sudo vi /etc/yum.repos.d/logstash.repo
    135: Add the following repository configuration:
    136: /etc/yum.repos.d/logstash.repo
    137: [logstash-2.1]
    138: name=logstash repository for 2.1 packages
    139: baseurl=http://packages.elasticsearch.org/logstash/2.1/centos
    140: gpgcheck=1
    141: gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
    142: enabled=1
    143: Save and exit.
    144: Install Logstash with this command:
    145: sudo yum -y install logstash
    146: Logstash is installed but it is not configured yet.
    147: Generate SSL Certificates
    148: Since we are going to use Filebeat to ship logs from our Client Servers to our ELK Server, we need to create an SSL certificate and key pair. The certificate is used by Filebeat to verify the identity of ELK Server. Create the directories that will store the certificate and private key with the following commands:
    149: Now you have two options for generating your SSL certificates. If you have a DNS setup that will allow your client servers to resolve the IP address of the ELK Server, use Option 2. Otherwise, Option 1 will allow you to use IP addresses.
    150: Option 1: IP Address
    151: If you don't have a DNS setup—that would allow your servers, that you will gather logs from, to resolve the IP address of your ELK Server—you will have to add your ELK Server's private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file:
    152: sudo vi /etc/pki/tls/openssl.cnf
    153: Find the [ v3_ca ] section in the file, and add this line under it (substituting in the ELK Server's private IP address):
    154: openssl.cnf excerpt
    155: subjectAltName = IP: logstash_server_private_ip
    156: Save and exit.
    157: Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:
    158: cd /etc/pki/tls
    159: sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
    160: The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration. If you went with this option, skip option 2 and move on to Configure Logstash.
    161: Option 2: FQDN (DNS)
    162: If you have a DNS setup with your private networking, you should create an A record that contains the ELK Server's private IP address—this domain name will be used in the next command, to generate the SSL certificate. Alternatively, you can use a record that points to the server's public IP address. Just be sure that your servers (the ones that you will be gathering logs from) will be able to resolve the domain name to your ELK Server.
    163: Now generate the SSL certificate and private key, in the appropriate locations (/etc/pki/tls/...), with the following command (substitute in the FQDN of the ELK Server):
    164: cd /etc/pki/tls
    165: sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
    166: The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration.
    167: Configure Logstash
    168: Logstash configuration files are in the JSON-format, and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
    169: Let's create a configuration file called 02-filebeat-input.conf and set up our "filebeat" input:
    170: sudo vi /etc/logstash/conf.d/02-filebeat-input.conf
    171: Insert the following input configuration:
    172: 02-filebeat-input.conf
    173: input {
    174: beats {
    175: port => 5044
    176: type => "logs"
    177: ssl => true
    178: ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    179: ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    180: }
    181: }
    182: Save and quit. This specifies a beats input that will listen on tcp port 5044, and it will use the SSL certificate and private key that we created earlier.
    183: Now let's create a configuration file called 10-syslog.conf, where we will add a filter for syslog messages:
    184: sudo vi /etc/logstash/conf.d/10-syslog.conf
    185: Insert the following syslog filter configuration:
    186: 10-syslog.conf
    187: filter {
    188: if [type] == "syslog" {
    189: grok {
    190: match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    191: add_field => [ "received_at", "%{@timestamp}" ]
    192: add_field => [ "received_from", "%{host}" ]
    193: }
    194: syslog_pri { }
    195: date {
    196: match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    197: }
    198: }
    199: }
    200: Save and quit. This filter looks for logs that are labeled as "syslog" type (by Filebeat), and it will try to use grok to parse incoming syslog logs to make it structured and query-able.
    201: Lastly, we will create a configuration file called 30-elasticsearch-output.conf:
    202: sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf
    203: Insert the following output configuration:
    204: /etc/logstash/conf.d/30-elasticsearch-output.conf
    205: output {
    206: elasticsearch { hosts => ["localhost:9200"] }
    207: stdout { codec => rubydebug }
    208: }
    209: Save and exit. This output basically configures Logstash to store the logs in Elasticsearch, which is running at localhost:9200.
    210: With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (e.g. unfiltered Nginx or Apache logs would appear as flat messages instead of categorizing messages by HTTP response codes, source IP addresses, served files, etc.).
    211: If you want to add filters for other applications that use the Filebeat input, be sure to name the files so they sort between the input and the output configuration (i.e. between 02- and 30-).
    212: Test your Logstash configuration with this command:
    213: sudo service logstash configtest
    214: It should display Configuration OK if there are no syntax errors. Otherwise, try and read the error output to see what's wrong with your Logstash configuration.
    215: Restart and enable Logstash to put our configuration changes into effect:
    216: sudo systemctl restart logstash
    217: sudo chkconfig logstash on
    218: Now that our ELK Server is ready, let's move onto setting up Filebeat.
    219: Set Up Filebeat (Add Client Servers)
    220: Do these steps for each CentOS or RHEL 7 server that you want to send logs to your ELK Server. For instructions on installing Filebeat on Debian-based Linux distributions (e.g. Ubuntu, Debian, etc.), refer to the Set Up Filebeat (Add Client Servers) section of the Ubuntu variation of this tutorial.
    221: Copy SSL Certificate
    222: On ELK Server, copy the SSL certificate to Client Server (substitute the client server's IP address, and your own login):
    223: scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp
    224: After providing the login credentials, ensure that the certificate copy was successful. It is required for communication between the client servers and the ELK server.
    225: Install Filebeat Package
    226: On Client Server, create run the following command to import the Elasticsearch public GPG key into rpm:
    227: sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
    228: Create and edit a new yum repository file for Filebeat:
    229: sudo vi /etc/yum.repos.d/elastic-beats.repo
    230: Add the following repository configuration:
    231: /etc/yum.repos.d/elastic-beats.repo
    232: [beats]
    233: name=Elastic Beats Repository
    234: baseurl=https://packages.elastic.co/beats/yum/el/$basearch
    235: enabled=1
    236: gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
    237: gpgcheck=1
    238: Save and exit.
    239: Install Filebeat with this command:
    240: sudo yum -y install filebeat
    241: Filebeat is installed but it is not configured yet.
    242: Now copy the ELK Server's SSL certificate into the appropriate location (/etc/pki/tls/certs):
    243: sudo mkdir -p /etc/pki/tls/certs
    244: sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
    245: Configure Filebeat
    246: Now we will configure Filebeat to connect to Logstash on our ELK Server. This section will step you through modifying the example configuration file that comes with Filebeat. When you complete the steps, you should have a file that looks something like this.
    247: On Client Server, create and edit Filebeat configuration file:
    248: sudo vi /etc/filebeat/filebeat.yml
    249: Note: Filebeat's configuration file is in YAML format, which means that indentation is very important! Be sure to use the same number of spaces that are indicated in these instructions.
    250: Near the top of the file, you will see the prospectors section, which is where you can define prospectors that specify which log files should be shipped and how they should be handled. Each prospector is indicated by the - character.
    251: We'll modify the existing prospector to send secure and messages logs to Logstash. Under paths, comment out the - /var/log/*.log file. This will prevent Filebeat from sending every .log in that directory to Logstash. Then add new entries for syslog and auth.log. It should look something like this when you're done:
    252: filebeat.yml excerpt 1 of 4
    253: ...
    254: paths:
    255: - /var/log/secure
    256: - /var/log/messages
    257: # - /var/log/*.log
    258: ...
    259: Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
    260: filebeat.yml excerpt 2 of 4
    261: ...
    262: document_type: syslog
    263: ...
    264: This specifies that the logs in this prospector are of type syslog (which is the type that our Logstash filter is looking for).
    265: If you want to send other files to your ELK server, or make any changes to how Filebeat handles your logs, feel free to modify or add prospector entries.
    266: Next, under the output section, find the line that says elasticsearch:, which indicates the Elasticsearch output section (which we are not going to use). Delete or comment out the entire Elasticsearch output section (up to the line that says logstash:).
    267: Find the commented out Logstash output section, indicated by the line that says #logstash:, and uncomment it by deleting the preceding #. In this section, uncomment the hosts: ["localhost:5044"] line. Change localhost to the private IP address (or hostname, if you went with that option) of your ELK server:
    268: filebeat.yml excerpt 3 of 4
    269: ### Logstash as output
    270: logstash:
    271: # The Logstash hosts
    272: hosts: ["ELK_server_private_IP:5044"]
    273: This configures Filebeat to connect to Logstash on your ELK Server at port 5044 (the port that we specified an input for earlier).
    274: Next, find the tls section, and uncomment it. Then uncomment the line that specifies certificate_authorities, and change its value to ["/etc/pki/tls/certs/logstash-forwarder.crt"]. It should look something like this:
    275: filebeat.yml excerpt 4 of 4
    276: ...
    277: tls:
    278: # List of root certificates for HTTPS server verifications
    279: certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
    280: This configures Filebeat to use the SSL certificate that we created on the ELK Server.
    281: Save and quit.
    282: Now start and enable Filebeat to put our changes into place:
    283: sudo systemctl start filebeat
    284: sudo chkconfig filebeat on
    285: Again, if you're not sure if your Filebeat configuration is correct, compare it against this example Filebeat configuration.
    286: Now Filebeat is sending your syslog messages and secure files to your ELK Server! Repeat this section for all of the other servers that you wish to gather logs for.
    287: Connect to Kibana
    288: When you are finished setting up Filebeat on all of the servers that you want to gather logs for, let's look at Kibana, the web interface that we installed earlier.
    289: In a web browser, go to the FQDN or public IP address of your ELK Server. After entering the "kibanaadmin" credentials, you should see a page prompting you to configure an index pattern:
  14. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 3 additions and 3 deletions.
    6 changes: 3 additions & 3 deletions readme.md
    Original file line number Diff line number Diff line change
    @@ -412,14 +412,14 @@ Near the top of the file, you will see the prospectors section, which is where y

    We'll modify the existing prospector to send secure and messages logs to Logstash. Under paths, comment out the - /var/log/*.log file. This will prevent Filebeat from sending every .log in that directory to Logstash. Then add new entries for syslog and auth.log. It should look something like this when you're done:
    filebeat.yml excerpt 1 of 4

    ```
    ...
    paths:
    - /var/log/secure
    - /var/log/messages
    `#` - /var/log/*.log
    - /var/log/*.log
    ...

    ```
    Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
    filebeat.yml excerpt 2 of 4

  15. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion readme.md
    Original file line number Diff line number Diff line change
    @@ -417,7 +417,7 @@ filebeat.yml excerpt 1 of 4
    paths:
    - /var/log/secure
    - /var/log/messages
    # - /var/log/*.log
    `#` - /var/log/*.log
    ...

    Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
  16. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 3 additions and 8 deletions.
    11 changes: 3 additions & 8 deletions readme.md
    Original file line number Diff line number Diff line change
    @@ -1,8 +1,4 @@
    # Installing ELK (CentOS 6,7)


    Tutorial Series
    This tutorial is part 1 of 5 in the series: Centralized Logging with Logstash and Kibana On CentOS 7
    Introduction

    In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on CentOS 7—that is, Elasticsearch 2.1.x, Logstash 2.1.x, and Kibana 4.3.x. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location, using Filebeat 1.0.x. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.
    @@ -270,7 +266,7 @@ Save and exit.
    Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:

    cd /etc/pki/tls
    sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
    sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

    The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration. If you went with this option, skip option 2 and move on to Configure Logstash.
    Option 2: FQDN (DNS)
    @@ -421,7 +417,7 @@ filebeat.yml excerpt 1 of 4
    paths:
    - /var/log/secure
    - /var/log/messages
    `#` - /var/log/*.log
    # - /var/log/*.log
    ...

    Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
    @@ -471,5 +467,4 @@ Connect to Kibana

    When you are finished setting up Filebeat on all of the servers that you want to gather logs for, let's look at Kibana, the web interface that we installed earlier.

    In a web browser, go to the FQDN or public IP address of your ELK Server. After entering the "kibanaadmin" credentials, you should see a page prompting you to configure an index pattern:

    In a web browser, go to the FQDN or public IP address of your ELK Server. After entering the "kibanaadmin" credentials, you should see a page prompting you to configure an index pattern:
  17. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions readme.md
    Original file line number Diff line number Diff line change
    @@ -270,7 +270,7 @@ Save and exit.
    Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:

    cd /etc/pki/tls
    sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
    sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

    The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration. If you went with this option, skip option 2 and move on to Configure Logstash.
    Option 2: FQDN (DNS)
    @@ -421,7 +421,7 @@ filebeat.yml excerpt 1 of 4
    paths:
    - /var/log/secure
    - /var/log/messages
    # - /var/log/*.log
    `#` - /var/log/*.log
    ...

    Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
  18. nu11secur1ty revised this gist Jan 16, 2016. 1 changed file with 395 additions and 60 deletions.
    455 changes: 395 additions & 60 deletions readme.md
    Original file line number Diff line number Diff line change
    @@ -1,66 +1,78 @@
    # Installing ELK (CentOS 6,7)

    Prerequisites:

    To complete this tutorial, you will require root access to an CentOS 7 VPS. Instructions to set that up can be found here (steps 3 and Initial Server Setup with CentOS 7.
    Tutorial Series
    This tutorial is part 1 of 5 in the series: Centralized Logging with Logstash and Kibana On CentOS 7
    Introduction

    In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on CentOS 7—that is, Elasticsearch 2.1.x, Logstash 2.1.x, and Kibana 4.3.x. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location, using Filebeat 1.0.x. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.

    Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

    It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering.
    Our Goal

    The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.

    Our ELK stack setup has four main components:

    Logstash: The server component of Logstash that processes incoming logs
    Elasticsearch: Stores all of the logs
    Kibana: Web interface for searching and visualizing logs, which will be proxied through Nginx
    Filebeat: Installed on client servers that will send their logs to Logstash, Filebeat serves as a log shipping agent that utilizes the lumberjack networking protocol to communicate with Logstash

    ELK Infrastructure

    We will install the first three components on a single server, which we will refer to as our ELK Server. Filebeat will be installed on all of the client servers that we want to gather logs for, which we will refer to collectively as our Client Servers.
    Prerequisites

    To complete this tutorial, you will require root access to an CentOS 7 VPS. Instructions to set that up can be found here (steps 3 and 4): Initial Server Setup with CentOS 7.

    If you would prefer to use Ubuntu instead, check out this tutorial: How To Install ELK on Ubuntu 14.04.

    The amount of CPU, RAM, and storage that your ELK Server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a VPS with the following specs for our ELK Server:
    ```

    OS: CentOS 7
    RAM: 4GB
    CPU: 2
    ```

    In addition to your ELK Server, you will want to have a few other servers that you will gather logs from.

    Let's get started on setting up our ELK Server!
    Install Java 8

    Elasticsearch and Logstash require Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK, if you decide to go that route. Following the steps in this section means that you accept the Oracle Binary License Agreement for Java SE.

    Change to your home directory and download the Oracle Java 8 (Update 65) JDK RPM with these commands:

    ```
    cd ~

    wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"

    ```
    cd ~
    wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"

    Then install the RPM with this yum command (if you downloaded a different release, substitute the filename here):

    ```
    sudo yum localinstall jdk-8u65-linux-x64.rpm
    ```
    sudo yum localinstall jdk-8u65-linux-x64.rpm

    Now Java should be installed at /usr/java/jdk1.8.0_65/jre/bin/java, and linked from /usr/bin/java.

    You may delete the archive file that you downloaded earlier:
    ```
    rm ~/jdk-8u65-linux-x64.rpm
    ```
    Now that Java 8 is installed, let's install ElasticSearch.

    rm ~/jdk-8u65-linux-x64.rpm

    Now that Java 8 is installed, let's install ElasticSearch.
    Install Elasticsearch

    Elasticsearch can be installed with a package manager by adding Elastic's package repository.

    Run the following command to import the Elasticsearch public GPG key into rpm:
    ```
    sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
    ```

    sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

    Create and edit a new yum repository file for Elasticsearch:

    ```
    sudo vi /etc/yum.repos.d/elasticsearch.repo
    ```
    Add the following repository configuration:
    sudo vi /etc/yum.repos.d/elasticsearch.repo

    ```
    /etc/yum.repos.d/elasticsearch.repo
    Add the following repository configuration:
    /etc/yum.repos.d/elasticsearch.repo

    [elasticsearch-2.1]
    name=Elasticsearch repository for 2.x packages
    @@ -69,72 +81,395 @@ Add the following repository configuration:
    gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
    enabled=1

    ```
    Save and exit.

    Install Elasticsearch with this command:

    ```
    sudo yum -y install elasticsearch
    ```
    sudo yum -y install elasticsearch

    Elasticsearch is now installed. Let's edit the configuration:

    ```
    sudo vi /etc/elasticsearch/elasticsearch.yml
    ```
    sudo vi /etc/elasticsearch/elasticsearch.yml

    You will want to restrict outside access to your Elasticsearch instance (port 9200), so outsiders can't read your data or shutdown your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host, uncomment it, and replace its value with "localhost" so it looks like this:
    ```
    elasticsearch.yml excerpt (updated)
    elasticsearch.yml excerpt (updated)

    network.host: localhost

    ```
    Save and exit ```elasticsearch.yml.```
    Save and exit elasticsearch.yml.

    Now start Elasticsearch:

    ```
    sudo systemctl start elasticsearch
    ```
    sudo systemctl start elasticsearch

    Then run the following command to start Elasticsearch automatically on boot up:

    ```
    sudo systemctl enable elasticsearch

    ```
    sudo systemctl enable elasticsearch

    Now that Elasticsearch is up and running, let's install Kibana.
    Install Kibana

    Before installing Kibana, let's set up a kibana user and group, which will own and run Kibana:

    ```
    sudo groupadd -g 1005 kibana
    sudo useradd -u 1005 -g 1005 kibana
    ```
    If those commands fail because the ```1005``` GID or UID already exist, replace the number with IDs that are free.

    If those commands fail because the 1005 GID or UID already exist, replace the number with IDs that are free.

    Download Kibana to your home directory with the following command:

    ```
    cd ~; wget https://download.elastic.co/kibana/kibana/kibana-4.3.0-linux-x64.tar.gz
    ```
    cd ~; wget https://download.elastic.co/kibana/kibana/kibana-4.3.0-linux-x64.tar.gz

    Extract Kibana archive with tar:

    tar xvf kibana-*.tar.gz

    ```
    tar xvf kibana-*.tar.gz
    ```
    Open the Kibana configuration file for editing:

    ```
    vi ~/kibana-4*/config/kibana.yml
    ```
    vi ~/kibana-4*/config/kibana.yml

    In the Kibana configuration file, find the line that specifies server.host, and replace the IP address ("0.0.0.0" by default) with "localhost":
    kibana.yml excerpt (updated)

    server.host: "localhost"

    Save and exit. This setting makes it so Kibana will only be accessible to the localhost. This is fine because we will use an Nginx reverse proxy to allow external access.

    Let's copy the Kibana files to a more appropriate location. Create the /opt directory with the following command:

    sudo mkdir -p /opt/kibana

    Now copy the Kibana files into your newly-created directory:

    sudo cp -R ~/kibana-4*/* /opt/kibana/

    Make the kibana user the owner of the files:

    sudo chown -R kibana: /opt/kibana

    Kibana can be started by running /opt/kibana/bin/kibana, but we want it to run as a service. Download a Kibana init script with this command:

    cd /etc/init.d && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-init
    cd /etc/default && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-default

    Now enable the Kibana service, and start it:

    sudo chmod +x /etc/init.d/kibana
    sudo service kibana start
    sudo chkconfig kibana on

    Before we can use the Kibana web interface, we have to set up a reverse proxy. Let's do that now, with Nginx.
    Install Nginx

    Because we configured Kibana to listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose.

    Note: If you already have an Nginx instance that you want to use, feel free to use that instead. Just make sure to configure Kibana so it is reachable by your Nginx server (you probably want to change the host value, in /opt/kibana/config/kibana.yml, to your Kibana server's private IP address). Also, it is recommended that you enable SSL/TLS.

    Add the EPEL repository to yum:

    sudo yum -y install epel-release

    Now use yum to install Nginx and httpd-tools:

    sudo yum -y install nginx httpd-tools

    Use htpasswd to create an admin user, called "kibanaadmin" (you should use another name), that can access the Kibana web interface:

    sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

    Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.

    Now open the Nginx configuration file in your favorite editor. We will use vi:

    sudo vi /etc/nginx/nginx.conf

    Find the default server block (starts with server {), the last configuration block in the file, and delete it. When you are done, the last two lines in the file should look like this:
    nginx.conf excerpt

    include /etc/nginx/conf.d/*.conf;
    }

    Save and exit.

    Now we will create an Nginx server block in a new file:

    sudo vi /etc/nginx/conf.d/kibana.conf

    Paste the following code block into the file. Be sure to update the server_name to match your server's name:
    /etc/nginx/conf.d/kibana.conf

    server {
    listen 80;

    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
    proxy_pass http://localhost:5601;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    }
    }

    Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Also, Nginx will use the htpasswd.users file, that we created earlier, and require basic authentication.

    Now start and enable Nginx to put our changes into effect:

    sudo systemctl start nginx
    sudo systemctl enable nginx

    Note: This tutorial assumes that SELinux is disabled. If this is not the case, you may need to run the following command for Kibana to work properly: sudo setsebool -P httpd_can_network_connect 1

    Kibana is now accessible via your FQDN or the public IP address of your ELK Server i.e. http://elk_server_public_ip/. If you go there in a web browser, after entering the "kibanaadmin" credentials, you should see a Kibana welcome page which will ask you to configure an index pattern. Let's get back to that later, after we install all of the other components.
    Install Logstash

    The Logstash package shares the same GPG Key as Elasticsearch, and we already installed that public key, so let's create and edit a new Yum repository file for Logstash:

    sudo vi /etc/yum.repos.d/logstash.repo

    Add the following repository configuration:
    /etc/yum.repos.d/logstash.repo

    [logstash-2.1]
    name=logstash repository for 2.1 packages
    baseurl=http://packages.elasticsearch.org/logstash/2.1/centos
    gpgcheck=1
    gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
    enabled=1

    Save and exit.

    Install Logstash with this command:

    sudo yum -y install logstash

    Logstash is installed but it is not configured yet.
    Generate SSL Certificates

    Since we are going to use Filebeat to ship logs from our Client Servers to our ELK Server, we need to create an SSL certificate and key pair. The certificate is used by Filebeat to verify the identity of ELK Server. Create the directories that will store the certificate and private key with the following commands:

    Now you have two options for generating your SSL certificates. If you have a DNS setup that will allow your client servers to resolve the IP address of the ELK Server, use Option 2. Otherwise, Option 1 will allow you to use IP addresses.
    Option 1: IP Address

    If you don't have a DNS setup—that would allow your servers, that you will gather logs from, to resolve the IP address of your ELK Server—you will have to add your ELK Server's private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file:

    sudo vi /etc/pki/tls/openssl.cnf

    Find the [ v3_ca ] section in the file, and add this line under it (substituting in the ELK Server's private IP address):
    openssl.cnf excerpt

    subjectAltName = IP: logstash_server_private_ip

    Save and exit.

    Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:

    cd /etc/pki/tls
    sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

    The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration. If you went with this option, skip option 2 and move on to Configure Logstash.
    Option 2: FQDN (DNS)

    If you have a DNS setup with your private networking, you should create an A record that contains the ELK Server's private IP address—this domain name will be used in the next command, to generate the SSL certificate. Alternatively, you can use a record that points to the server's public IP address. Just be sure that your servers (the ones that you will be gathering logs from) will be able to resolve the domain name to your ELK Server.

    Now generate the SSL certificate and private key, in the appropriate locations (/etc/pki/tls/...), with the following command (substitute in the FQDN of the ELK Server):

    cd /etc/pki/tls
    sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

    The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration.
    Configure Logstash

    Logstash configuration files use a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.

    Let's create a configuration file called 02-filebeat-input.conf and set up our "filebeat" input:

    sudo vi /etc/logstash/conf.d/02-filebeat-input.conf

    Insert the following input configuration:
    02-filebeat-input.conf

    input {
      beats {
        port => 5044
        type => "logs"
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      }
    }

    Save and quit. This specifies a beats input that will listen on TCP port 5044, and it will use the SSL certificate and private key that we created earlier.

    Now let's create a configuration file called 10-syslog.conf, where we will add a filter for syslog messages:

    sudo vi /etc/logstash/conf.d/10-syslog.conf

    Insert the following syslog filter configuration:
    10-syslog.conf

    filter {
      if [type] == "syslog" {
        grok {
          match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        syslog_pri { }
        date {
          match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }

    Save and quit. This filter looks for logs that are labeled as "syslog" type (by Filebeat) and uses grok to parse incoming syslog messages so that they are structured and queryable.
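
    To see what this pattern produces, consider a hypothetical syslog line (the hostname, program, and message below are invented for illustration):

    Dec  1 11:32:04 webclient1 sshd[4123]: Failed password for root from 203.0.113.5 port 51214 ssh2

    The grok pattern above would extract these fields:

    syslog_timestamp => "Dec  1 11:32:04"
    syslog_hostname => "webclient1"
    syslog_program => "sshd"
    syslog_pid => "4123"
    syslog_message => "Failed password for root from 203.0.113.5 port 51214 ssh2"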

    Lastly, we will create a configuration file called 30-elasticsearch-output.conf:

    sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

    Insert the following output configuration:
    /etc/logstash/conf.d/30-elasticsearch-output.conf

    output {
      elasticsearch { hosts => ["localhost:9200"] }
      stdout { codec => rubydebug }
    }

    Save and exit. This output configures Logstash to store the logs in Elasticsearch, which is running at localhost:9200.
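
    If you want to confirm that Elasticsearch is reachable there, a plain request to its HTTP API should return a JSON banner with the node name and version:

    curl -XGET 'http://localhost:9200/?pretty'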

    With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (unfiltered Nginx or Apache logs, for example, would appear as flat messages instead of being broken out by HTTP response code, source IP address, served file, and so on).

    If you want to add filters for other applications that use the Filebeat input, be sure to name the files so they sort between the input and the output configuration (i.e. between 02- and 30-).
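
    For example, a hypothetical 11-nginx-access.conf could parse Nginx access logs with the stock COMBINEDAPACHELOG grok pattern (Nginx's default "combined" log format is compatible with it). The nginx-access type here is an assumption; it would have to match a document_type that you configure in Filebeat on the client:

    filter {
      if [type] == "nginx-access" {
        grok {
          match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
      }
    }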

    Test your Logstash configuration with this command:

    sudo service logstash configtest

    It should display Configuration OK if there are no syntax errors. Otherwise, read the error output to see what is wrong with your Logstash configuration.

    Restart and enable Logstash to put our configuration changes into effect:

    sudo systemctl restart logstash
    sudo systemctl enable logstash
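
    Once Logstash has restarted (give the JVM a few seconds to come up), you can confirm that the beats input is listening on port 5044; the ss utility ships with CentOS 7:

    sudo ss -tlnp | grep 5044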

    Now that our ELK Server is ready, let's move onto setting up Filebeat.
    Set Up Filebeat (Add Client Servers)

    Do these steps for each CentOS or RHEL 7 server that you want to send logs to your ELK Server. For instructions on installing Filebeat on Debian-based Linux distributions (e.g. Ubuntu, Debian, etc.), refer to the Set Up Filebeat (Add Client Servers) section of the Ubuntu variation of this tutorial.
    Copy SSL Certificate

    On the ELK Server, copy the SSL certificate to the Client Server (substitute the client server's IP address, and your own login):

    scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp

    After providing the login credentials, ensure that the certificate copy was successful. It is required for communication between the client servers and the ELK server.
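
    One way to confirm the copy is to compare checksums on both machines. Run the first command on the ELK Server and the second on the Client Server, and check that the two hashes match:

    sha256sum /etc/pki/tls/certs/logstash-forwarder.crt
    sha256sum /tmp/logstash-forwarder.crt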
    Install Filebeat Package

    On the Client Server, run the following command to import the Elasticsearch public GPG key into rpm:

    sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

    Create and edit a new yum repository file for Filebeat:

    sudo vi /etc/yum.repos.d/elastic-beats.repo

    Add the following repository configuration:
    /etc/yum.repos.d/elastic-beats.repo

    [beats]
    name=Elastic Beats Repository
    baseurl=https://packages.elastic.co/beats/yum/el/$basearch
    enabled=1
    gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
    gpgcheck=1

    Save and exit.

    Install Filebeat with this command:

    sudo yum -y install filebeat

    Filebeat is installed but it is not configured yet.

    Now copy the ELK Server's SSL certificate into the appropriate location (/etc/pki/tls/certs):

    sudo mkdir -p /etc/pki/tls/certs
    sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

    Configure Filebeat

    Now we will configure Filebeat to connect to Logstash on our ELK Server. This section will step you through modifying the example configuration file that comes with Filebeat. When you complete the steps, your file should match the excerpts shown below.

    On the Client Server, open the Filebeat configuration file for editing:

    sudo vi /etc/filebeat/filebeat.yml

    Note: Filebeat's configuration file is in YAML format, which means that indentation is very important! Be sure to use the same number of spaces that are indicated in these instructions.

    Near the top of the file, you will see the prospectors section, which is where you can define prospectors that specify which log files should be shipped and how they should be handled. Each prospector is indicated by the - character.

    We'll modify the existing prospector to send the secure and messages logs to Logstash. Under paths, comment out the - /var/log/*.log entry. This will prevent Filebeat from sending every .log file in that directory to Logstash. Then add new entries for /var/log/secure and /var/log/messages. It should look something like this when you're done:
    filebeat.yml excerpt 1 of 4

    ...
      paths:
        - /var/log/secure
        - /var/log/messages
        # - /var/log/*.log
    ...

    Then find the line that specifies document_type:, uncomment it and change its value to "syslog". It should look like this after the modification:
    filebeat.yml excerpt 2 of 4

    ...
      document_type: syslog
    ...

    This specifies that the logs in this prospector are of type syslog (which is the type that our Logstash filter is looking for).

    If you want to send other files to your ELK server, or make any changes to how Filebeat handles your logs, feel free to modify or add prospector entries.

    Next, under the output section, find the line that says elasticsearch:, which indicates the Elasticsearch output section (which we are not going to use). Delete or comment out the entire Elasticsearch output section (up to the line that says logstash:).

    Find the commented out Logstash output section, indicated by the line that says #logstash:, and uncomment it by deleting the preceding #. In this section, uncomment the hosts: ["localhost:5044"] line. Change localhost to the private IP address (or hostname, if you went with that option) of your ELK server:
    filebeat.yml excerpt 3 of 4

    ### Logstash as output
    logstash:
      # The Logstash hosts
      hosts: ["ELK_server_private_IP:5044"]

    This configures Filebeat to connect to Logstash on your ELK Server at port 5044 (the port that we specified an input for earlier).

    Next, find the tls section, and uncomment it. Then uncomment the line that specifies certificate_authorities, and change its value to ["/etc/pki/tls/certs/logstash-forwarder.crt"]. It should look something like this:
    filebeat.yml excerpt 4 of 4

    ...
      tls:
        # List of root certificates for HTTPS server verifications
        certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

    This configures Filebeat to use the SSL certificate that we created on the ELK Server.

    Save and quit.
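
    Before starting Filebeat, you can optionally check that the client can complete a TLS handshake with the Logstash beats input. Substitute your ELK Server's private IP; the handshake should end with "Verify return code: 0 (ok)":

    echo | openssl s_client -connect ELK_server_private_IP:5044 -CAfile /etc/pki/tls/certs/logstash-forwarder.crt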

    Now start and enable Filebeat to put our changes into place:

    sudo systemctl start filebeat
    sudo systemctl enable filebeat

    Again, if you're not sure whether your Filebeat configuration is correct, compare it against the excerpts above.

    Now Filebeat is sending the secure and messages logs from the Client Server to your ELK Server! Repeat this section for all of the other servers that you wish to gather logs from.
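
    To verify that events are arriving, query Elasticsearch on the ELK Server. With the plain elasticsearch output above, Logstash writes to daily logstash-* indices by default:

    curl -XGET 'http://localhost:9200/logstash-*/_search?pretty'

    If the response contains hits with your syslog data, the pipeline is working end to end; if the total is 0, recheck the Filebeat and Logstash configurations.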
    Connect to Kibana

    When you are finished setting up Filebeat on all of the servers that you want to gather logs for, let's look at Kibana, the web interface that we installed earlier.

    In a web browser, go to the FQDN or public IP address of your ELK Server. After entering the "kibanaadmin" credentials, you should see a page prompting you to configure an index pattern.
