
Cloud Security Monitoring with Open-Source Tools

July 23, 2015 by Dejan Lukan

Introduction

Security is an important aspect of our IT world, because breaches at various companies take place on a daily basis. There are many cases where even security companies are compromised: in one of the most recent breaches, Hacking Team was compromised and more than 400 GB of its data was stolen and dumped on the Internet.

Therefore, every company must move security to the top of its priority list in order to be protected against the latest sophisticated attacks. Additionally, there are many cases where hackers use common techniques to infiltrate a company's network – for example, malicious Word and Excel documents containing PowerShell code which, once macros are enabled, executes on the victim's machine.


Due to the vast number of security defense and threat detection techniques, attackers have started concentrating on the weakest link in the chain, which is the human user. Therefore, social engineering attacks have mushroomed and are now commonly used to infiltrate a company's network. There are many ways to protect against such attacks, the most important of them being user education about malicious threats, but if all else fails, a device in the network will get compromised and it will be the job of a cloud security professional to detect and remove the threat.

Therefore, we will not focus on how to prevent an attack from succeeding in the first place, but rather on how to detect it as soon as it happens, in order to limit the attacker's reach and prevent him from stealing sensitive information. This is the plan at least, which isn't always feasible, since other security measures have to be in place to fight the attacker once he has infiltrated the network. It has been said a million times and it will probably be said a million times more: security is like an onion, with multiple layers the attacker has to circumvent in order to access the data at a certain layer – the deeper the layer, the more sensitive the information.

With that in mind, we'll take a look at how to start gathering security-related information in a central place, which is the basis of a good monitoring solution. We'll set up a monitoring system that stores the logging information from every machine in our network.

Tools of the Trade

There are numerous tools that we can use to collect logging information in a central place. Various solutions exist for gathering logs from web servers, firewalls, network devices, IDS/IPS systems, workstations, etc. Some of them are the following:

  • Splunk: a great tool for gathering and searching all the logs, but it's an enterprise solution with high costs involved. There's also a free edition, which handles up to 500 MB of data per day.
  • Fluentd: an open-source log collector, which enables us to collect log messages from 125+ types of systems, including gathering logs from many cloud platforms automatically.
  • AlienVault: actually a SIEM system, which gathers logs and analyzes them in order to detect threats.
  • SEC (Simple Event Correlator): a simple event correlation tool written in Perl, which gathers log messages and compares them against rules to detect attacks (a minimal rule sketch follows this list).
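
To give a feel for how SEC rules look, below is a minimal sketch that raises an alert when five failed SSH logins from the same source address are seen within one minute. The file name, pattern and thresholds are illustrative assumptions, not taken from a real deployment.

[plain]

# cat ssh-bruteforce.sec
# alert when 5 failed SSH logins from the same IP occur within 60 seconds
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed password for \S+ from ([\d.]+)
desc=Possible SSH brute force from $1
action=write - Possible SSH brute force from $1
window=60
thresh=5

# run SEC against the authentication log
# sec --conf=ssh-bruteforce.sec --input=/var/log/auth.log

[/plain]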

There are other tools as well, but we won't list them all here. In this article, we'll take a closer look at Elasticsearch, Logstash, and Kibana, more commonly known as the ELK stack.

Getting Started

There is an easy way to get started with the ELK stack, because the Protean Security team has already done the heavy lifting. They have provided a Docker image that automates the installation of the whole ELK stack, which enables other researchers to start using ELK right away without the installation hassle. I would like to point out that there are many ELK stack repositories on the Docker registry hub, but the one we're going to use is rather simple and provides the ELK stack without any unnecessary features. It also uses the latest versions at the time of writing: Elasticsearch 1.6.0, Logstash 1.5.2 and Kibana 4.1.1.

Getting started with Docker

To get started, we can simply pull the image with the command below:

[plain]

# docker pull proteansec/elk

[/plain]

Then we can run the Docker image with the command below, which will automatically start supervisord, which in turn starts SSH, Elasticsearch, Logstash and Kibana.

[plain]

# docker run -it proteansec/elk

[/plain]
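
If we want to reach the services from the host or from other machines on the network, we can publish the relevant ports when starting the container. The sketch below assumes Kibana listens on 5601 and the Logstash inputs on 5000 and 5514; adjust the numbers to whatever the Logstash configuration inside the image actually uses.

[plain]

# docker run -it -p 5601:5601 -p 5000:5000 -p 5514:5514 proteansec/elk

[/plain]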

If we don't want to start all the services automatically, we can instead use the following command to get a /bin/bash shell in the Docker container, which gives us a chance to start everything manually.

[plain]

$ docker run -it proteansec/elk /bin/bash

root@3cd0a8058b70:/opt# /etc/init.d/supervisor start

Starting supervisor: 2015-07-19 20:58:12,524 CRIT Supervisor running as root (no user in config file)

2015-07-19 20:58:12,525 WARN Included extra file "/etc/supervisor/conf.d/kibana.conf" during parsing

2015-07-19 20:58:12,525 WARN Included extra file "/etc/supervisor/conf.d/default.conf" during parsing

2015-07-19 20:58:12,526 WARN Included extra file "/etc/supervisor/conf.d/logstash.conf" during parsing

2015-07-19 20:58:12,526 WARN Included extra file "/etc/supervisor/conf.d/elasticsearch.conf" during parsing

2015-07-19 20:58:12,527 WARN Included extra file "/etc/supervisor/conf.d/sshd.conf" during parsing

2015-07-19 20:58:12,537 INFO RPC interface 'supervisor' initialized

2015-07-19 20:58:12,538 CRIT Server 'unix_http_server' running without any HTTP authentication checking

2015-07-19 20:58:12,538 INFO supervisord started with pid 9

2015-07-19 20:58:13,541 INFO spawned: 'sshd' with pid 12

2015-07-19 20:58:13,543 INFO spawned: 'elasticsearch' with pid 13

2015-07-19 20:58:13,545 INFO spawned: 'logstash' with pid 14

2015-07-19 20:58:13,550 INFO spawned: 'kibana' with pid 15

[/plain]

To connect to the same Docker container, we can run the docker exec command with the ID of the already running container – we can identify the ID by running the docker ps command. In the output below, we've first connected to the same Docker container and then issued the netstat command to identify the listening ports.

[plain]

$ docker exec -it 3cd0a8058b70 /bin/bash

root@3cd0a8058b70:/opt# netstat -luntp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name

tcp 0 0 127.0.0.1:5601 0.0.0.0:* LISTEN 15/node

tcp6 0 0 127.0.0.1:9200 :::* LISTEN 13/java

tcp6 0 0 127.0.0.1:9300 :::* LISTEN 13/java

[/plain]
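
As a quick sanity check, we can also query Elasticsearch directly from inside the container, since it's bound to 127.0.0.1; it answers on port 9200 with a short JSON status document (the exact fields vary between Elasticsearch versions).

[plain]

root@3cd0a8058b70:/opt# curl http://127.0.0.1:9200/
root@3cd0a8058b70:/opt# curl http://127.0.0.1:9200/_cluster/health?pretty

[/plain]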

Getting started with Vagrant

To get started with the ELK stack using Vagrant, we can pull down the vagrant-elk repository from Github and run the "vagrant up" command; the rest should take care of itself.

[plain]

# git clone https://github.com/proteansec/vagrant-elk

# cd vagrant-elk/

# vagrant up

[/plain]

We won't go into the details with Vagrant, as the steps are quite similar to Docker. The stage for experimentation has been provided; you just have to use it the same way as with Docker.
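
For orientation, a Vagrantfile for such a setup typically looks roughly like the sketch below; the box name, forwarded ports and provisioning script are illustrative assumptions, since the actual file in the vagrant-elk repository may differ.

[plain]

Vagrant.configure("2") do |config|
  # base box the VM is built from (illustrative)
  config.vm.box = "ubuntu/trusty64"

  # forward Kibana and the Logstash syslog input to the host (assumed ports)
  config.vm.network "forwarded_port", guest: 5601, host: 5601
  config.vm.network "forwarded_port", guest: 5514, host: 5514

  # install Elasticsearch, Logstash and Kibana via a shell script (hypothetical name)
  config.vm.provision "shell", path: "bootstrap.sh"
end

[/plain]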

Getting the Logs

After we've set up our environment, we can easily send logs to it for processing. The Logstash input filter has been configured in such a way that it can process syslog messages easily. Therefore, we can simply pipe everything from the /var/log/ directory into the system in order to be monitored.
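
The configuration shipped with the image isn't reproduced here, but a Logstash configuration accepting raw log lines on TCP port 5000 and rsyslog traffic on port 5514 could look roughly like the sketch below; the port numbers follow the ones used later in this article and the grok pattern is an assumption.

[plain]

input {
  # raw log lines piped in with netcat (see the echo example below)
  tcp {
    port => 5000
    type => "raw_syslog"
  }
  # messages forwarded by rsyslog (parsed automatically by this input)
  syslog {
    port => 5514
  }
}

filter {
  # the tcp input delivers unparsed lines, so split them into fields ourselves
  if [type] == "raw_syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
  }
}

output {
  # Logstash 1.5.x syntax; newer versions use 'hosts' instead of 'host'
  elasticsearch { host => "localhost" }
}

[/plain]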

Afterwards, we can connect to Kibana and configure the index pattern, which only needs to be done when first starting to use Kibana, in order to manage the index used for searching and analysis in Elasticsearch. In the screenshot below, we have to click on the 'Create' button to create the index pattern.

Let's now echo something to the open port 5000, to which we can connect with a TCP client like netcat. We can use the echo command to pipe a line from /var/log/messages into Logstash, which is listening on port 5000.

[plain]

# echo "Oct 31 11:31:50 user kernel: [525887.354176] grsec: denied resource overstep by requesting 4096 for RLIMIT_CORE against limit 0" | nc localhost 5000 -q 1

[/plain]

Usually, you won't have to do this manually, but it's nice to know how to pipe a sample into Logstash to be processed, so we can easily experiment with it. If we go to the Kibana interface, we'll see that entry in the Discover menu, as shown below.

We can also expand the entry to get a better view of the parsed entry. We can see that the single line we echoed to Logstash was parsed into multiple fields, each containing its own value, which can be further used to search over all the data stored in Elasticsearch and fed into the system through Logstash.
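
With a grok pattern such as %{SYSLOGLINE}, the single line echoed above ends up stored in Elasticsearch roughly as the following document; the field names depend on the pattern actually used, so treat this as an illustration rather than the exact output.

[plain]

{
  "message":   "Oct 31 11:31:50 user kernel: [525887.354176] grsec: denied resource overstep ...",
  "timestamp": "Oct 31 11:31:50",
  "logsource": "user",
  "program":   "kernel",
  "type":      "raw_syslog"
}

[/plain]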

To automatically feed all generated log information into Kibana, we can install the rsyslog program and set it up to send all messages to Logstash listening on TCP port 5514. We can install rsyslog with a simple "apt-get install rsyslog" command or use our default package manager, whatever it may be.

Then we need to check whether the port is open and nothing in between is blocking the connection, so that logs can be sent from arbitrary servers to Logstash. We can do that with a simple nmap command, as shown below.

[plain]

# nmap docker -p 5514

Starting Nmap 6.47 ( http://nmap.org ) at 2015-07-20 19:29 CEST

Nmap scan report for docker (192.168.1.2)

Host is up (0.0021s latency).

PORT STATE SERVICE

5514/tcp open unknown

Nmap done: 1 IP address (1 host up) scanned in 1.23 seconds

[/plain]

After confirming that port 5514 is indeed open, we need to create a new file /etc/rsyslog.d/10-logstash.conf with the following contents, which tell rsyslog to send every log message to the docker server on port 5514 (the port configured in Logstash); the double @@ means the messages are sent over TCP rather than UDP. Afterwards, we have to restart the service for the changes to take effect.

[plain]
# cat /etc/rsyslog.d/10-logstash.conf

*.* @@docker:5514
# /etc/init.d/rsyslog restart

[/plain]

Since we're only testing rsyslog for now, there's a better way of checking whether rsyslog works the way we expect it to. First, we have to stop the rsyslog daemon and then run rsyslogd manually with debug logging enabled, which can be done with the commands presented below.

[plain]

# /etc/init.d/rsyslog stop

# export RSYSLOG_DEBUGLOG="/var/log/rsyslog.log"

# export RSYSLOG_DEBUG="Debug"

# rsyslogd -c6

[/plain]
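
Once rsyslog is running again, we can generate a test message with the logger utility and watch it travel through rsyslog and Logstash into Kibana; the tag is arbitrary and only serves to make the message easy to find later.

[plain]

# logger -t elk-test "test message from $(hostname)"

[/plain]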

Soon after, we'll see log messages showing up in the Kibana web interface, now gathered in one central place. The majority of the work has been done at this point; we just need to ensure the logs from all the servers are being sent to Logstash, where they are parsed and indexed by Elasticsearch. By using Kibana, we also get a nice web interface presenting the logs, where we can execute simple queries to display only the log messages we care about.
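
The search bar in Kibana accepts Lucene query string syntax, so queries along the lines of the examples below can be used to narrow the view down to the interesting messages; the field names follow the grok sketch earlier and are therefore assumptions.

[plain]

program:sshd AND message:"Failed password"
logsource:webserver01 AND NOT program:cron

[/plain]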

Conclusion

This article demonstrates that malicious attacks are common nowadays, and we have to do everything in our power to protect ourselves. However, as is the case in most projects, the available budget is usually not very high, so we have to rely on open-source solutions in order to implement security best practices. In this article we haven't presented how to defend ourselves in real time, but rather how to detect that our defenses have been penetrated by a malicious attacker. Once we've discovered that, we have to determine which attacks the attacker carried out in our network and which sensitive files were exposed and possibly copied. It's also advisable to know what other actions the attacker performed in order to determine the steps he/she took once inside the network.

If we know which attacks were performed in our network, we can better protect ourselves when removing all the infections and backdoors left behind by the attacker. If we don't have any logs whatsoever, we certainly won't be able to detect that an attacker has penetrated our network, let alone remove every malicious backdoor the attacker installed. Failing to do so will allow an attacker continuous access to our internal network and our sensitive information for months before discovery.

We've established that keeping logs from the various devices in our network – web servers, database servers, network devices, workstations, etc. – is beneficial when it comes to protecting ourselves once an intrusion happens. Having the logs stored in a central place doesn't guarantee that we'll be able to detect all kinds of attacks, nor does it assure us that we'll be able to track everything an attacker does in our network and remove every malicious backdoor; it merely gives us a better view of the overall network and system activity. It certainly doesn't provide a complete security solution on its own, but it adds an additional layer an attacker must circumvent to stay invisible in the network.


By adding additional layers to the whole security infrastructure, we're improving our overall security design and certainly making the attacker's life more difficult and complicated. Since the primary concern of most attackers is to compromise as many systems as possible in as little time as possible, such an attacker will probably leave us alone rather than spend the extra time required to stay invisible.

Dejan Lukan

Dejan Lukan is a security researcher for InfoSec Institute and a penetration tester from Slovenia. He is very interested in finding new bugs in real-world software products with source code analysis, fuzzing and reverse engineering. He also has a great passion for developing his own simple scripts for security-related problems and learning about new hacking techniques. He knows a great deal about programming languages, as he can write in a couple dozen of them. He is also passionate about antivirus bypassing techniques, malware research and operating systems, mainly Linux, Windows and BSD. His blog is available at http://www.proteansec.com/.