Category Archives: Installing and Configuring (notes to my future self)

Installing a John the Ripper Cluster (Fedora 23/24)

John the Ripper is an excellent password cracking tool that I regularly use during penetration tests to recover plaintext passwords from multiple hash formats.

I recently started building a new dedicated rig with the sole purpose of cracking passwords. I didn’t have any money to invest in this project, so I am using whatever servers and workstations were lying around unused in the back of the server room. I therefore decided my best bet for maximum hash cracking goodness would be to run John in parallel across all these machines. This is a first for me, so I thought I had better document how I did it for when it all burns to the ground and I have to start again. There are several guides online describing how to achieve this kind of setup using Kali or Ubuntu, but I prefer Fedora, so I had to alter a lot of the commands to suit and encountered some odd errors along the way. I hope this (rambling) guide is useful to others as well as my future self.

[Note this post is a little bit of a work in progress as I continue to build and refine the cracking rig.]

I’m using Fedora 23 Server on each of the hosts, have configured a static IP address using the Cockpit interface on port 9090 (i.e. https://ip_address:9090), and created a user which will be used to authenticate between the hosts.

[Update: I have successfully used the same process on Fedora 24]

On a side note, if when running dnf commands you encounter an error like:

Error: Failed to synchronize cache for repo 'updates' from 'https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64': Cannot prepare internal mirrorlist: Curl error (60): Peer certificate cannot be authenticated with given CA certificates for https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64 [Peer's Certificate has expired.]

You may not actually have access to the Internet because your network firewall/proxy is trying to man-in-the-middle the connection. In my case I fixed this by shouting over to the firewall guy in the office.

As I need support for a large variety of hash formats, the version of John in the Fedora repositories is useless to me; instead I am using the community enhanced (jumbo) edition.

Compiling John

Non-clustered

There are several dependencies that you need before attempting to build:

sudo dnf install openssl openssl-devel gcc

I first tried to install this from the tarball available on the openwall site. However when running:

cd src/
./configure && make

I encountered some errors like this:

In file included from /usr/include/stdio.h:27:0,
 from jumbo.h:20,
 from os-autoconf.h:29,
 from os.h:20,
 from bench.c:25:
 /usr/include/features.h:148:3: warning: #warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE" [-Wcpp]
 # warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE"
 ^
 gpg2john.c: In function ‘pkt_type’:
 gpg2john.c:1194:7: warning: type of ‘tag’ defaults to ‘int’ [-Wimplicit-int]
 char *pkt_type(tag) {
 ^
 /usr/bin/ar: creating aes.a
 dynamic_fmt.o: In function `DynamicFunc__crypt_md5_to_input_raw_Overwrite_NoLen':
 /opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4989: undefined reference to `MD5_body_for_thread'
 dynamic_fmt.o: In function `DynamicFunc__crypt_md5':
 /opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4425: undefined reference to `MD5_body_for_thread'
 dynamic_fmt.o: In function `DynamicFunc__crypt_md5_in1_to_out2':
 /opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4732: undefined reference to `MD5_body_for_thread'
 dynamic_fmt.o: In function `DynamicFunc__crypt_md5_to_input_raw':
 /opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4903: undefined reference to `MD5_body_for_thread'
 dynamic_fmt.o: In function `DynamicFunc__crypt_md5_to_input_raw_Overwrite_NoLen_but_setlen_in_SSE':
 /opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4946: undefined reference to `MD5_body_for_thread'
 dynamic_fmt.o:/opt/john-1.8.0-jumbo-1/src/dynamic_fmt.c:4817: more undefined references to `MD5_body_for_thread' follow
 collect2: error: ld returned 1 exit status
 Makefile:294: recipe for target '../run/john' failed
 make[1]: *** [../run/john] Error 1
 Makefile:185: recipe for target 'default' failed
 make: *** [default] Error 2

I have no idea why this happens and could not find a fix, but I did discover that the version on GitHub is not affected by this problem.

sudo dnf install git
git clone https://github.com/magnumripper/JohnTheRipper.git
cd JohnTheRipper/src
./configure && make -s clean && make -s

You should now be able to use John:

cd ../run
./john --test

This gives a useable version of John for a single machine, but it will not work for a cluster.

Clustered (openmpi support)

For a cluster we need openmpi support.

First install the dependencies:

sudo dnf install openssl openssl-devel gcc openmpi openmpi-devel mpich

If you now try to build with openmpi support:

cd src/
./configure --enable-mpi && make -s clean && make -s

You will probably encounter an error like this:

checking build system type... x86_64-unknown-linux-gnu
 checking host system type... x86_64-unknown-linux-gnu
 checking whether to compile using MPI... yes
 checking for mpicc... no
 checking for mpixlc_r... no
 checking for mpixlc... no
 checking for hcc... no
 checking for mpxlc_r... no
 checking for mpxlc... no
 checking for sxmpicc... no
 checking for mpifcc... no
 checking for mpgcc... no
 checking for mpcc... no
 checking for cmpicc... no
 checking for cc... cc
 checking for gcc... (cached) cc
 checking whether the C compiler works... yes
 checking for C compiler default output file name... a.out
 checking for suffix of executables...
 checking whether we are cross compiling... no
 checking for suffix of object files... o
 checking whether we are using the GNU C compiler... yes
 checking whether cc accepts -g... yes
 checking for cc option to accept ISO C89... none needed
 checking whether cc understands -c and -o together... yes
 checking for function MPI_Init... no
 checking for function MPI_Init in -lmpi... no
 checking for function MPI_Init in -lmpich... no
 configure: error: in `/opt/john/JohnTheRipper/src':
 configure: error: No MPI compiler found
 See `config.log' for more details

The problem here is that the MPI compiler and its libraries (which we installed previously) cannot be found, because they are installed to a directory that is not in your path.

To fix this temporarily:

export PATH=$PATH:/usr/lib64/openmpi/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/lib

To fix it more permanently, add the following to your ~/.bashrc:

export PATH=$PATH:/usr/lib64/openmpi/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/lib

You should now be able to build John with openmpi support:

./configure --enable-mpi && make -s clean && make -s

Setting up the cluster

Now that we know how to get John installed successfully with openmpi support on Fedora Server 23, it’s time to set up the other services required to allow hashes to be cracked as part of a cluster.

In the example commands below, the master node is 192.168.0.1 and the slave node is 192.168.0.2.

EDIT: [I have recently noticed that once the number of nodes passes a certain threshold, ALL nodes need to be able to authenticate to and communicate with EVERY other node. Therefore every node must have an SSH key, with the public key in the authorized_keys file on every other node, and firewall rules allowing traffic between all nodes. I will find out how to automate this process and update this post in the future.]
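Until I work out the proper automation, the full mesh of keys can be sketched with a simple loop. This is only a dry run that prints the commands rather than executing them; the node list and username are placeholders, and it assumes each node already has its own key pair:

```shell
# Sketch: push every node's public key to every other node (full mesh).
# Assumes the same username on all nodes and an existing key on each.
mesh_keys() {
    nodes="$1"
    for src in $nodes; do
        for dst in $nodes; do
            [ "$src" = "$dst" ] && continue
            # Dry run: print the command instead of executing it.
            echo "ssh $src ssh-copy-id -i ~/.ssh/id_rsa.pub username@$dst"
        done
    done
}
mesh_keys "192.168.0.1 192.168.0.2 192.168.0.3"
```

Remove the echo (and add error handling) to actually run the commands; with N nodes this performs N*(N-1) copies.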

Setup SSH Keys

On the master node generate an SSH key. The following command will prevent the passphrase prompt from appearing and store the key pair in plaintext (this is not very secure and I will be changing it to a more secure option in the near future; however, since this key is only going to be used to access the slave nodes, and I am the only user on the box, it will do for now…):

ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa -q -N ""

Issue the following command on the master node to copy the ssh key to the authorized_keys file of the user on the slave node:

ssh-copy-id -i ~/.ssh/id_rsa.pub username@192.168.0.2

Note it is easiest if the user account on all of the nodes has the same name, however I believe it is possible to use different usernames (and keys) provided the ~/.ssh/config file has appropriate entries.
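As a hypothetical example of such an entry, a stanza like the following in ~/.ssh/config on the master node would make SSH use a different account and key for one slave (the username and key file name here are illustrative, not from my setup):

```
Host 192.168.0.2
    User otheruser
    IdentityFile ~/.ssh/id_rsa_cluster
```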

Ensure the permissions on the slave node are correct if you are having trouble using your key:

chmod 600 ~/.ssh/authorized_keys
chmod 700 ~/.ssh

And make sure that selinux is not getting in the way (again on the slave node):

restorecon -R -v ~/.ssh

Setup NFS

We will use an NFS share to store files that need to be shared between the nodes, for example the john.pot file and any wordlists we wish to use.

NFS and RPCBind are already installed in Fedora Server, they just need some configuration.

Make a directory to use as the nfs share on the master node and change the ownership to the user account we are using:

sudo mkdir /var/johnshare
sudo chown -R username /var/johnshare

Add this directory to the list of “exports”. Note that using ‘*’ in place of the host (in this case ‘192.168.0.2’) is a security vulnerability, as it would allow ANY host to mount the share. Note also that any user on 192.168.0.2 will be able to mount this share; in this case this is not a serious issue since I have sole control over the node. (Information about securing NFS can be found here.)

sudo vim /etc/exports
/var/johnshare  192.168.0.2(rw,sync)

Start exporting this directory and start the service:

sudo exportfs -a
sudo systemctl start nfs

At this point you should be able to see the export on localhost:

showmount -e 127.0.0.1
Export list for 127.0.0.1:
/var/johnshare 192.168.0.2

But if you try it from the slave node you will get the error:

clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)

This is because the ports are not open on the firewall of the master node. The following commands will reconfigure the firewall to allow the services we need (on the master node):

firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload

Note the above commands will open several ports through your firewall from ANY host. While this is often not too much of a concern on an internal trusted network, ideally you should limit the rule to only the hosts required (see below).
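As a sketch of that tighter approach, firewalld rich rules can scope each service to a single source host instead of opening it to everyone (adjust the address to your slave node):

```
sudo firewall-cmd --permanent --add-rich-rule 'rule family="ipv4" source address="192.168.0.2/32" service name="nfs" accept'
sudo firewall-cmd --permanent --add-rich-rule 'rule family="ipv4" source address="192.168.0.2/32" service name="mountd" accept'
sudo firewall-cmd --permanent --add-rich-rule 'rule family="ipv4" source address="192.168.0.2/32" service name="rpc-bind" accept'
sudo firewall-cmd --reload
```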

Mount the NFS Share on the Nodes

Make a directory on the slave node to mount the nfs share onto, and change the ownership to the user we are using:

sudo mkdir /var/johnshare
sudo chown -R username /var/johnshare

You could mount the nfs share manually using the following command:

sudo mount 192.168.0.1:/var/johnshare /var/johnshare

However this will not survive a reboot. Instead you could use /etc/fstab as described here, but apparently using autofs is more efficient.
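For reference, the equivalent /etc/fstab entry on the slave node would be a single line like the following (I am using autofs instead, so treat this as an untested sketch):

```
192.168.0.1:/var/johnshare  /var/johnshare  nfs  defaults  0 0
```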

Add the following configuration file to the slave node to use autofs:

cat /etc/auto.john
/var/johnshare -fstype=nfs 192.168.0.1:/var/johnshare

Edit file /etc/auto.master on the slave node and add the line:

/- /etc/auto.john

Start the automount service to mount the share and start it at boot:

sudo systemctl start autofs
sudo systemctl enable autofs

Mpiexec Firewall Rules

Ideally we would add specific ports for mpiexec to the firewall configuration; however, a large number of ports are used for communication and they are dynamically assigned. According to the documentation and mailing lists it should be possible to restrict the range of ports that will be used and then allow these through the firewall. Unfortunately during my setup I was unable to accomplish this. Unwilling to fall back to turning off the firewall permanently out of frustration, I decided to go for the middle ground of allowing all ports, but only from specific hosts. While this is not as granular as I would like, it is certainly more secure than no firewall at all.

sudo firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.2/32" accept'
sudo firewall-cmd --reload

Enter the same commands on the slave node, substituting the IP address.

[Update: it is possible to restrict the ports mpiexec uses by entering the following configuration into /etc/openmpi-x86_64/openmpi-mca-params.conf on all of the nodes:

oob_tcp_port_min_v4 = 10000
oob_tcp_port_range_v4 = 100
btl_tcp_port_min_v4 = 10000
btl_tcp_port_range_v4 = 100

This will make mpiexec use TCP ports 10000 to 10100, and you can therefore restrict the firewall configuration by both host and ports to minimise the attack surface. Note that as more nodes are added more ports may be required.]
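With the port range pinned down, the blanket per-host rule above could in principle be replaced by one restricted to those ports as well as the host. A sketch (untested on my rig; adjust the source address per node):

```
sudo firewall-cmd --permanent --add-rich-rule 'rule family="ipv4" source address="192.168.0.2/32" port port="10000-10100" protocol="tcp" accept'
sudo firewall-cmd --reload
```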

You can test that your openmpi installation is working by issuing the following command on the master node:

mpiexec -n 1 -host 192.168.0.2 hostname

If all is well you should see the hostname of the slave node. If you see something like this:

ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
 one or more nodes. Please check your PATH and LD_LIBRARY_PATH
 settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
 Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
 Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
 (e.g., on Cray). Please check your configure cmd line and consider using
 one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
 lack of common network interfaces and/or no route found between
 them. Please check network connectivity (including firewalls
 and network routing requirements).

Then, as the error indicates, you have a problem that could have multiple causes; the one I encountered was the local firewall blocking the unsolicited connection from the remote node. Ensure you have added appropriate allow rules and either issued the reload command or restarted the service.

Install John on the Slave Node

Assign a static IP address and add the ssh key (ensure it is possible to authenticate with the key). Follow the steps above to install the dependencies, clone John from the repo, and build it (note that slave nodes also require openmpi). Ensure that it is installed in the same path as on the other nodes. Configure and start autofs.

Running John Across the Cluster

On the master node, create a list of nodes and the available cores on each:

cat nodes.txt
192.168.0.1 slots=8
192.168.0.2 slots=8

To run John on the configured nodes from the master node:

mpiexec -display-map -tag-output -hostfile nodes.txt ./john /var/johnshare/hashes.pwdump --format=nt --pot=/var/johnshare/john.pot --session=/var/johnshare/my_session_name

The -display-map parameter will output a list of the processes and the hosts they are running on at the start of the job; -tag-output will prefix every line of output from the program with the job id and the process number. I find this information helpful; however, if you prefer less verbose output, they can be omitted.

Note it is important that the session file is accessible by all nodes (i.e. it must be on the NFS share), otherwise it will not be possible to resume a crashed/cancelled session. If you do not store the session file on the share, you will see an error like the one below from each of the cores on each of your slave nodes, but the session will resume on the master node:

9@192.168.0.2: fopen: my_session_name.rec: No such file or directory
8@192.168.0.2: fopen: my_session_name.rec: No such file or directory
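For completeness, resuming a session stored on the share should look something like the following sketch, using John's --restore option (the session name must match the one given when the job was started):

```
mpiexec -hostfile nodes.txt ./john --restore=/var/johnshare/my_session_name
```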

Another error that you may encounter is:

9@192.168.0.2: 8@192.168.0.2: [192.168.0.2:28647] *** Process received signal ***
[192.168.0.2:28647] Signal: Segmentation fault (11)
[192.168.0.2:28647] Signal code: Address not mapped (1)
[192.168.0.2:28647] Failing at address: 0x1825048b64c4
[192.168.0.2:28648] *** Process received signal ***
[192.168.0.2:28648] Signal: Segmentation fault (11)
[192.168.0.2:28648] Signal code: Address not mapped (1)
[192.168.0.2:28648] Failing at address: 0x1825048b64c4
[192.168.0.2:28647] [ 0] /lib64/libpthread.so.0(+0x109f0)[0x7f504ffb19f0]
[192.168.0.2:28647] [ 1] /lib64/libc.so.6(_IO_vfprintf+0xaef)[0x7f504fc2c38f]
[192.168.0.2:28647] [ 2] /lib64/libc.so.6(+0x4e441)[0x7f504fc2e441]
[192.168.0.2:28647] [ 3] /lib64/libc.so.6(_IO_vfprintf+0x1bd)[0x7f504fc2ba5d]
[192.168.0.2:28647] [ 4] /opt/john/JohnTheRipper/run/john[0x6354eb]
[192.168.0.2:28647] [ 5] /opt/john/JohnTheRipper/run/john[0x625b03]
[192.168.0.2:28647] [ 6] /opt/john/JohnTheRipper/run/john[0x6237cb]
[192.168.0.2:28647] [ 7] /opt/john/JohnTheRipper/run/john[0x624227]
[192.168.0.2:28647] [ 8] /opt/john/JohnTheRipper/run/john[0x62516c]
[192.168.0.2:28647] [ 9] /lib64/libc.so.6(__libc_start_main+0xf0)[0x7f504fc00580]
[192.168.0.2:28647] [10] /opt/john/JohnTheRipper/run/john[0x4065a9]
[192.168.0.2:28647] *** End of error message ***
[192.168.0.2:28648] [ 0] /lib64/libpthread.so.0(+0x109f0)[0x7f386381b9f0]
[192.168.0.2:28648] [ 1] /lib64/libc.so.6(_IO_vfprintf+0xaef)[0x7f386349638f]
[192.168.0.2:28648] [ 2] /lib64/libc.so.6(+0x4e441)[0x7f3863498441]
[192.168.0.2:28648] [ 3] /lib64/libc.so.6(_IO_vfprintf+0x1bd)[0x7f3863495a5d]
[192.168.0.2:28648] [ 4] /opt/john/JohnTheRipper/run/john[0x6354eb]
[192.168.0.2:28648] [ 5] /opt/john/JohnTheRipper/run/john[0x625b03]
[192.168.0.2:28648] [ 6] /opt/john/JohnTheRipper/run/john[0x6237cb]
[192.168.0.2:28648] [ 7] /opt/john/JohnTheRipper/run/john[0x624227]
[192.168.0.2:28648] [ 8] /opt/john/JohnTheRipper/run/john[0x62516c]
[192.168.0.2:28648] [ 9] /lib64/libc.so.6(__libc_start_main+0xf0)[0x7f386346a580]
[192.168.0.2:28648] [10] /opt/john/JohnTheRipper/run/john[0x4065a9]
[192.168.0.2:28648] *** End of error message ***
Session aborted
3 0g 0:00:00:02 0.00% (ETA: Wed 09 Jul 2031 01:57:37 BST) 0g/s 0p/s 0c/s 0C/s
4 0g 0:00:00:02 0.02% (ETA: 00:00:21) 0g/s 305045p/s 305045c/s 859007KC/s bear902..zephyr902
[192.168.0.1][[28350,1],3][btl_tcp_endpoint.c:818:mca_btl_tcp_endpoint_complete_connect] connect() to 192.168.0.2 failed: Connection refused (111)
5 0g 0:00:00:04 0.37% (ETA: 21:56:45) 0g/s 2599Kp/s 2599Kc/s 7557MC/s 47886406..M1911a16406
[192.168.0.1][[28350,1],4][btl_tcp_endpoint.c:818:mca_btl_tcp_endpoint_complete_connect] connect() to 192.168.0.2 failed: Connection refused (111)
6 0g 0:00:00:06 0.78% (ETA: 21:51:27) 0g/s 3527Kp/s 3527Kc/s 10677MC/s skyler.&01..virgil.&01
[192.168.0.1][[28350,1],5][btl_tcp_endpoint.c:818:mca_btl_tcp_endpoint_complete_connect] connect() to 192.168.0.2 failed: Connection refused (111)
7 0g 0:00:00:08 1.12% (ETA: 21:50:34) 0g/s 3873Kp/s 3873Kc/s 11466MC/s Angie18%$..Cruise18%$
[192.168.0.1][[28350,1],6][btl_tcp_endpoint.c:818:mca_btl_tcp_endpoint_complete_connect] connect() to 192.168.0.2 failed: Connection refused (111)
--------------------------------------------------------------------------
mpiexec noticed that process rank 7 with PID 0 on node 192.168.0.2 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

In my experience this extremely unhelpful error means that the slave node cannot access the NFS share. I first encountered this when a slave node rebooted and I learned that I had not enabled autofs to start on boot.

You can get each process to output its status by sending the USR1 signal using pkill (you should also run this command before cancelling a session to ensure that the minimum possible work is lost):

pkill -USR1 mpiexec

You can force a pot sync to stop other nodes from working on hashes and salts that have already been cracked by sending the USR2 signal:

pkill -USR2 mpiexec

Adding New Nodes

Follow the same process as above: add the ssh public key of the master node to the new slave node, install John with openmpi support, configure autofs to mount the nfs share, add the new node to the /etc/exports file and to the list of nodes on the master node, restrict the ports in use if required, and add the firewall rules on the master and slave nodes to allow full access between them.

More Troubleshooting

After adding some nodes to my cluster and running some jobs I started to find that long running tasks (such as running john with a large wordlist and some complex rules) hung with the following error:

mca_btl_tcp_frag_recv: readv failed: Connection timed out

The Google results I found talking about this error were mostly discussing when this error was encountered at the start of a job and were often the result of host based firewalling errors. These did not seem to be relevant to my situation since the task completed, but hung on this error instead of exiting.

A quirk of my cluster is that it is distributed around a network rather than being on its own network segment, as recommended for MPI clusters. This means that the connections between the nodes pass through a firewall, and although the required ports are allowed, the firewall turned out to be the cause of my problem.

The idle connection timeout on the firewall rule was set to the default of 180 seconds; increasing this to 86400 seconds (1 day) resolved the issue.

My best guess for why this happens is that the connection is established at the start of the job, but then remains idle until a hash is cracked; on long running jobs the idle timeout is therefore exceeded and the firewall terminates the connection. I would have expected the program to re-establish the connection when it needs to use it again, but this does not appear to be the case; instead it times out trying to use its existing (terminated) connection.

Obviously a better solution would be to move the nodes onto the same network to remove the firewall from the equation, but unfortunately that isn’t an option right now.


As always, if you have any questions, comments or suggestions please feel free to get in touch.

Using BitLocker To Go on Fedora 23 (dislocker)

I have multiple machines, some run Windows and some run Fedora. I also need to keep a significant amount of my data encrypted, and I need to be able to access it from Windows machines that are not under my control.

Both Windows and Linux have multiple encryption solutions available, with varying levels of uptake and acceptance. However in order to be as compatible as possible for my clients, I decided to use a Windows solution and figure out how to use it on Linux.

BitLocker is the obvious choice for Windows compatibility. Enabling BitLocker on a USB stick will include the executables required to mount the volume on any Windows machine.

Dislocker can be used on Linux systems to mount the BitLocker volume; although this tool was initially read-only, it now supports read/write. (Note I have only tried this using exFAT formatted drives; however, I believe FAT and NTFS will also work.)

The following is my cheat sheet of how to install and use dislocker.

Note: all commands are run as root.

Installing dislocker

Install exfat support

dnf install exfat-utils fuse-exfat

At the moment we need to enable the testing repo so that we can get a version of dislocker that supports USB drives (at least v0.5 is needed). [You could also install from source…]

dnf config-manager --set-enabled updates-testing

Install dislocker

dnf install dislocker fuse-dislocker

Disable the testing repo so that we don’t get any other unstable packages the next time we install something:

dnf config-manager --set-disabled updates-testing


Make a mount point for the dislocker container:

mkdir /mnt/dislocker-container

Make a mount point for the dislocker file

mkdir /mnt/dislocker


Mounting a BitLocker USB device

Find the USB device (probably /dev/sdc1 or similar):

fdisk -l

Mount the dislocker container (assuming /dev/sdc1 is the USB device) using the user password you configured when you set up BitLocker (you will be prompted). Note recovery passwords and key files are also supported.

dislocker -v -V /dev/sdc1 -u -- /mnt/dislocker-container

Make sure that this has worked correctly; ‘dislocker-file’ should be within that directory:

ls /mnt/dislocker-container/dislocker-file

Mount the dislocker-file as a loop device and give everyone permission to write to it (this should perhaps be restricted more…):

mount -o loop,umask=0,uid=nobody,gid=nobody /mnt/dislocker-container/dislocker-file /mnt/dislocker

That’s it!
Work on the files in /mnt/dislocker; you should have read/write access (for all users).

Common Errors:

Some error about /mnt/dislocker-container already existing: you don’t have fuse-dislocker installed, so dislocker is trying to create an unencrypted copy of the USB device.

It’s taking ages and running out of disk space: same as above, it’s trying to make an unencrypted copy of your volume.

Unmounting

Make sure you aren’t in the directories you need to unmount or the command will error:

cd /mnt

Unmount the dislocker-file mount point

umount /mnt/dislocker

Unmount the dislocker container mount point

umount /mnt/dislocker-container

Eject the USB device using the file manager on the system.

Done!


As always, if you have any comments or suggestions please feel free to get in touch.

Change the TPM Owner Password and BitLocker Recovery Key

I recently purchased a Microsoft Surface Pro 4 which came with Windows 10. BitLocker was enabled by default during setup, however the recovery key was automatically uploaded to my Microsoft account. While this is a really good feature and for the vast majority of users will not pose a problem, I have slightly different concerns than the average user… therefore I decided I did not want my recovery key to be entrusted to Microsoft.

The quickest and easiest option was to delete the recovery key from my Microsoft account, which can be done here. However, although this removes my ability to retrieve the recovery key from my Microsoft account, it gives me absolutely no guarantee that Microsoft actually deleted it in any kind of permanent way; given that everyone has a rigorous backup process (right? 😉), it is very likely that they still have my recovery key.

To have slightly more confidence I decided to change both the TPM Owner Password and BitLocker Recovery Key on my machine and keep them in a safe place offline in case I ever needed them.

To change the TPM Owner Password, open tpm.msc, then select “Change Owner Password…” in the top right. I followed the prompts within the dialogue box to change the password and save the file to external media.

Changing the BitLocker Recovery Key is slightly more involved and utilises the BitLocker Drive Encryption configuration tool:

manage-bde

Assuming C: is the BitLocker protected drive whose recovery password you want to change, do the following within an elevated command prompt.

List the recovery passwords:

 manage-bde C: -protectors -get -type RecoveryPassword

Locate the protector you want to change (there is probably only one) and copy its ID field, including the curly braces.

Delete this protector:

manage-bde C: -protectors -delete -id [ID you copied]

Create a new protector:

manage-bde C: -protectors -add -rp

Note you can specify a 48 digit password at the end of the previous command if you wish; however, if one is not specified, one is randomly generated for you. Computers are much better at randomly generating passwords than you, so it is probably best to let it do it.

Take heed of the output of the last command:

ACTIONS REQUIRED:

1. Save this numerical recovery password in a secure location away from your computer:

[YOUR RECOVERY KEY IS HERE]

To prevent data loss, save this password immediately. This password helps ensure that you can unlock the encrypted volume.

As always, if you have any comments or suggestions please feel free to get in touch.

Unknown incremental mode: LM_ASCII

LanManager is an obsolete hashing format used by older versions of Windows. It is extremely weak, as it first splits the password into two 7 character blocks, then uppercases and hashes these individually, drastically reducing the number of permutations. Storing LM hashes is a security vulnerability and should be addressed immediately by preventing Windows from storing the LM hash, and by changing passwords more times than the stored password history length in order to completely purge the old LM hashes.
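To see how little work an attacker faces, the split-and-uppercase step can be illustrated in the shell. This only mimics the pre-processing (uppercase, truncate to 14, split into two halves), not the DES hashing itself:

```shell
# Mimic LM pre-processing: uppercase, truncate to 14 chars, split into two 7-char halves.
lm_halves() {
    upper=$(printf '%s' "$1" | tr 'a-z' 'A-Z' | cut -c1-14)
    printf '%s %s\n' "$(printf '%s' "$upper" | cut -c1-7)" "$(printf '%s' "$upper" | cut -c8-14)"
}
lm_halves "Password123"   # -> PASSWOR D123: each half is attacked independently
```

Each half is at most 7 uppercase/ASCII characters, which is why LM falls so quickly to brute force.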

One of the tools I use to crack hashes is John the Ripper; however, I recently encountered an error (probably due to my mangling of config files) when incrementing through the permutations using john (i.e. without a wordlist).

Unknown incremental mode: LM_ASCII

In order to fix this error I added the following to my john.conf file:

[Incremental:LM_ASCII]
File = $JOHN/lm_ascii.chr
MinLen = 0
MaxLen = 7
CharCount = 69

[List.External:Filter_LM_ASCII]
void filter()
{
	int i, c;

	i = 0;
	while (c = word[i]) {
		if (c < 0x20 || c > 0x7e || // Require ASCII-only
		    i >= 14) {              // passwords of up to 14 characters long
			word = 0; return;
		}
		if (c >= 'a' && c <= 'z') // Convert to uppercase
			word[i] &= 0xDF;
		i++;
	}
	word[7] = 0; // Truncate at 7 characters
}

I believe this should be there by default; however, it caused me enough of a headache when I found it to be missing that I thought it worth a quick blog post for my future self. 🙂

Installing pytesseract – practically painless

A recent project of mine called for optical character recognition.  After a brief Google search and a personal recommendation I decided to use tesseract because it is cross platform, under active development, and has a Python API (pytesseract).

Installing these was surprisingly easy:

tesseract has a Windows installer which comes with the English language data available here.

pytesseract can be installed using pip:

pip install pytesseract

pytesseract states that it requires the Python Imaging Library (PIL); however, that project no longer appears to be active, so I used pillow, the maintained fork. This can also be installed using pip:

pip install pillow

And that’s it!

You should now be able to do some optical character recognition with Python:

import pytesseract
from PIL import Image
print pytesseract.image_to_string(Image.open('test.jpg'))


As always, if you have any comments or suggestions please feel free to get in touch.

Installing rdpy on Windows 7 64bit

At the time of writing I am working on a tool that utilises rdpy. I encountered some problems installing it, so I thought I’d document how I solved them for when I inevitably need to rebuild my laptop, and in the hope that it will be useful to others.

Following the instructions in the readme: https://github.com/citronneur/rdpy/blob/master/README.md

pip install rdpy

Failed with error:

error: Unable to find vcvarsall.bat

This cryptic error message means that the compiler required to compile the C/C++ code in the package is not present on the system. Usually in this situation the easiest option is to download a precompiled version of the package. Unfortunately I haven’t been able to find a precompiled version of rdpy, so a compiler is required, preferably the same one used to compile the version of Python you are using. For 2.7 this is Visual Studio C++ 2008 and for 3.4.1 this is Visual Studio C++ 2010.

You can double check which version you need using the Python interactive interpreter. If the version information in the banner (the [MSC v.xxxx …] part shown below) shows MSC v.1500 you need Visual Studio C++ 2008, whereas if it shows MSC v.1600 you need Visual Studio C++ 2010.

c:\>python
ActivePython 2.7.8.10 (ActiveState Software Inc.) based on
Python 2.7.8 (default, Jul 2 2014, 19:48:49) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()

So where do you get Visual Studio C++ 2008 from? Well, you only need the Express edition, so you can get it for free from here (at the time of writing).

You will also need to install the Windows SDK. Again, the version is important: Windows SDK for Windows 7 and .NET Framework 3.5 SP1. You should be able to uncheck everything except:

Developer Tools >> Visual C++ Compilers

(Note: I encountered installation errors when I attempted this. This is probably unique to my machine, but if you also hit a problem, try installing the complete SDK, which worked for me.)

(You should register Visual Studio (for free) within 30 days.)

At this point you should now be able to install the package; however, you may get this error (discussed here):

ValueError: [u'path']

This is because the latest version of Visual Studio changed the name of the .bat file, but Python has not yet (at the time of writing) been updated to account for this.

To fix this error you simply need to copy the <install_location>\VC\bin\vcvars64.bat to <install_location>\VC\bin\vcvarsamd64.bat and <install_location>\VC\bin\amd64\vcvarsamd64.bat

Assuming the default install path is used (C:\Program Files (x86)\Microsoft Visual Studio 9.0) the following will accomplish this if run in an elevated command prompt (i.e. Run as Administrator):

copy "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64.bat" "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvarsamd64.bat"
copy "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64.bat" "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\amd64\vcvarsamd64.bat"
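The same copy can be scripted in Python if you prefer (it still needs to be run from an elevated prompt). This is a sketch; the helper name is my own, and it assumes the two target names shown in the copy commands above:

```python
import os
import shutil

def patch_vcvars(vc_bin_dir):
    """Copy vcvars64.bat to the vcvarsamd64.bat names that the build
    tooling looks for. Returns the list of files created."""
    source = os.path.join(vc_bin_dir, "vcvars64.bat")
    targets = [
        os.path.join(vc_bin_dir, "vcvarsamd64.bat"),
        os.path.join(vc_bin_dir, "amd64", "vcvarsamd64.bat"),
    ]
    created = []
    for target in targets:
        target_dir = os.path.dirname(target)
        if not os.path.isdir(target_dir):
            os.makedirs(target_dir)  # create amd64\ if it is missing
        shutil.copyfile(source, target)
        created.append(target)
    return created
```

For the default install path you would call it as patch_vcvars(r"C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin").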

With Visual Studio installed, rdpy should now install correctly.

rdpy has two dependencies, PyWin32 and PyQt4. The easiest way I have found to install PyWin32 is to install ActivePython, which bundles it. An installer for PyQt4 is available here (ensure you select the correct installer for your version of Python and operating system architecture).

With the dependencies met you should now be able to use rdpy. However, you are likely to encounter this error:

C:\Python27\lib\site-packages\twisted\internet\_sslverify.py:184: UserWarning: You do not have the service_identity module installed. Please install it from <https://pypi.python.org/pypi/service_identity>. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.  verifyHostname, VerificationError = _selectVerifyImplementation()

As this helpful error message indicates, you need to install the service_identity module. This can be done easily using pip:

pip install service_identity


As always, if you have any comments or suggestions please feel free to get in touch.

Installing python-ldap on Windows 7 64bit

TL;DR: Architecture is important (duh), download the 64bit exe from here.


I recently had a horrendous time trying to install python-ldap on Windows 7 64bit so I decided to write a short post for when my future self needs to install it again in the hope that it will go much more smoothly.

As it turns out I missed one key piece of information: the “preferred point for downloading the ‘official’ source distribution is now the PyPI repository which supports installing via setuptools” (http://www.python-ldap.org/download.shtml) BUT this only has 32bit versions, not 64bit!

Therefore installing using pip or the exe fails (obvious in hindsight).

Here are some of the super helpful error messages I encountered while I epically failed to take architecture into account:

1) Installing using pip (pip install python-ldap) requires Visual C++ 2008 Express Edition to be installed, which I didn’t have, resulting in:

error: Unable to find vcvarsall.bat

To fix this, install it for free from Microsoft here (it will be required for other modules in the future anyway), then read 2).

2) Installing using pip (pip install python-ldap) tries to install a 32bit version, which isn’t much use when the OS and Python interpreter are 64bit, so it results in:

ValueError: [u'path']

To fix: there isn’t a 64bit version in the PyPI repository as far as I am aware – skip to the punchline below.

3) Installing using the officially distributed executable on PyPI failed to find my Python install in the registry and gave me this error:

Python version 2.7 required, which was not found in the registry.

I use ActiveState Python because it comes with pywin32, but apparently it didn’t put the entry in the registry. Looking at the documentation afterwards, it appears the installer didn’t run with admin privileges and therefore created entries under HKEY_CURRENT_USER rather than HKEY_LOCAL_MACHINE.

4) After fixing 3) the exe was able to complete installation (and for a few brief moments I was happy…), but when I tried to import the module it failed with this error:

ImportError: DLL load failed: %1 is not a valid Win32 application

This is once again because of an architecture mismatch, my Python interpreter is 64bit but the module is 32bit.
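To avoid this sort of mismatch in future, it is worth checking the interpreter’s architecture before downloading anything. A quick stdlib-only check (the pointer size in bits tells you the interpreter’s bitness):

```python
import struct
import platform

# A pointer is 4 bytes on a 32-bit interpreter and 8 bytes on a 64-bit one.
bits = struct.calcsize("P") * 8
print("%d-bit Python interpreter" % bits)

# platform.architecture() reports the same information as a string.
print(platform.architecture()[0])
```

Note this reports the bitness of the Python build itself, not of the operating system, which is exactly what matters when picking a module installer.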


After a cup of tea I noticed the architecture problem and found the alternative installers by Christoph Gohlke here. (The python-ldap download page links to the maintainer’s (Waldemar Osuch) webpage, which links to Christoph Gohlke’s page.)

I was then able to download and install the 64bit version with no problems.

As always, if you have any comments or suggestions please feel free to get in touch.