InputOutput.io - The free-thinkin' free-speakin' rabble-rousin' geek.

Remote Monitoring of Network Connections with Arduino and LEDs

Using cerealbox to create a colorful visualization of your TCP/UDP connections.



Inspiration

At Defcon in 2011 I attended a talk by Steve Ocepek over at SpiderLabs introducing a neat little project he was working on. I recommend you watch the talk, but here's the gist of it. He was using an Arduino with an 8×8 LED board (each coordinate having an RGB value) to visualize the current established connections on a given network interface. Each coordinate of the matrix was color-coded based on country code, so you could differentiate connections based on region. As you made an outbound connection, lights would suddenly appear on the board, indicating where you were connecting to. And when those connections dropped or you disconnected, the lights disappeared. Network monitoring was done with libpcap, and a list of the active connections was sent to the Arduino over the serial interface by a perl script he coded. He called it cerealbox. I thought it was really neat to have a display of network connections always visible, without exhausting valuable screen real estate. Not only was it really useful for network admins, it was also really pretty! I mean, who doesn't like shiny bright LEDs? And he provided the source for it, so you could buy the boards and set this up yourself.

I’d never worked with Arduino before, but I was inspired enough by this proof of concept to at least get his demo working for myself, and maybe make a few modifications. But suiting the project to my needs required a few additional considerations.

Requirements

Continent Codes, not Country Codes

Getting the demo set up was the easy part. But the color-coding was randomized based on country code. I wanted something a bit more useful – a scheme that would let me see at a glance what region a connection was coming from, rather than a randomized color. The problem is that there are a lot of countries in the world. Like, almost 200. With so many countries, the variations in color would be too slight for me to figure out at a glance which region a connection was coming from. So I decided to code it based on continent code instead, with higher contrast between indicators. Here's my schema:

  • Blue = Europe
  • Orange = Asia
  • Purple = Oceania
  • Yellow = Africa
  • White = South America
  • Teal = North America
  • Pink = Antarctica & Local
  • Green = United States
  • Red = Special IPs (My VPSes)

Okay, so I cheated – not every one of those is a continent. The US gets its own color, and it seems a little weird to have Antarctica grouped with local connections. But connections coming from Antarctica seemed like such an extreme edge case, and I didn't want to give up an entire color for a continent with no permanent residents. So there are those exceptions, and then red for my own VPSes.
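For concreteness, here's what that schema might look like as a lookup table – a minimal sketch in Python, with purely illustrative RGB values (the real colors live in the Arduino sketch, and the special-IP list is a separate file, as described below):

# Hypothetical continent-code -> RGB mapping; the values are illustrative only
CONTINENT_COLORS = {
    'EU': (0, 0, 255),       # Europe: blue
    'AS': (255, 128, 0),     # Asia: orange
    'OC': (128, 0, 255),     # Oceania: purple
    'AF': (255, 255, 0),     # Africa: yellow
    'SA': (255, 255, 255),   # South America: white
    'NA': (0, 128, 128),     # North America: teal
    'AN': (255, 105, 180),   # Antarctica & local: pink
    'US': (0, 255, 0),       # special-cased country: green
}
SPECIAL_COLOR = (255, 0, 0)  # my VPSes: red, checked before the continent lookup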

Client-Server Architecture

One of the things I really wanted to do was be able to visualize the connections of any machine, not just the one connected to the Arduino. The original project bundled the packet sniffing and the serial connection to the Arduino in one neat little perl script, but I wanted to separate the part that monitors the network connections from the part that sends them to the Arduino. The former belongs to the client, and the latter to the server.

I originally considered just using the Arduino standalone, with the color shield chained directly on top of a WiFly shield, and sending network connections to a server hosted directly on the Arduino. I even modified the Colorduino library to use different pins from the WiFly. But in the end I wanted to ensure that the connection to the Arduino was secure. This would be difficult to implement with the 32k space limitations of the Uno I was working with.

Secure Transport Layer

Between the client and server, I wanted a secure network layer. I decided on having the client and server negotiate an SSL connection with simple password authentication.

Python, not Perl

Not to start a religious war, but I’m more comfortable in Python, so I’d have to rewrite the network monitoring and serial communication components of the script Steve had written.

RasPi

I wanted the server to run on the Raspberry Pi. Actually, this was the easiest part – with Raspbian, it was just a matter of installing pip for python 2.7 and then installing everything else with pip. No assembly required!

Implementation

Client

Since the geolocation lookup happens immediately as a connection is read on the client side, I needed a few geoip libraries: pygeoip to perform a lookup on the country code, and incf.countryutils to then fetch the continent code. Additionally, a packet sniffer was needed. I considered using scapy for this task, which I've had a lot of fun with in the past and highly recommend as a versatile tool for python packet-slicing. However, scapy seemed a bit heavyweight for the task at hand, so I decided on pcapy for sniffing and impacket for dissecting packets.
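The two-step lookup is then just a matter of chaining the libraries together. A minimal sketch, assuming a GeoIP.dat database in the usual location and the transformations module of incf.countryutils (treat the exact path and function names as assumptions):

import pygeoip
from incf.countryutils import transformations

gi = pygeoip.GeoIP('/usr/share/GeoIP/GeoIP.dat')  # database path is an assumption

def lookup(ip):
    country = gi.country_code_by_addr(ip)            # e.g. '8.8.8.8' -> 'US'
    continent = transformations.cca_to_ctn(country)  # e.g. 'US' -> 'North America'
    return country, continent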

As packets come across the wire, we keep track of them, keeping a hash (in python parlance, a dict) of udp and tcp connections. For TCP connections, we look for the syn + ack flags, indicating a connection has been established. Conversely, a fin or rst flag indicates the connection has been severed. Since UDP is stateless, we immediately record a connection as established when we see any UDP traffic. Periodically, a sweeper runs to expire timed-out connections.
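In outline, the bookkeeping looks something like this sketch (flag handling is simplified – the real script dissects packets with impacket – and the hypothetical notify_server helper stands in for the geoip lookup and SSL send described next):

import time

connections = {}   # (proto, remote_ip, remote_port) -> last-seen timestamp
UDP_TIMEOUT = 60   # seconds; an arbitrary illustrative value

def on_tcp_packet(ip, port, syn, ack, fin, rst):
    key = ('tcp', ip, port)
    if syn and ack:
        connections[key] = time.time()       # connection established
        notify_server('open', key)
    elif (fin or rst) and key in connections:
        del connections[key]                 # connection severed
        notify_server('close', key)

def on_udp_packet(ip, port):
    key = ('udp', ip, port)
    if key not in connections:
        notify_server('open', key)
    connections[key] = time.time()           # refresh last-seen time

def sweep():
    # expire entries that haven't been seen recently
    now = time.time()
    for key, seen in connections.items():
        if now - seen > UDP_TIMEOUT:
            del connections[key]
            notify_server('close', key)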

Once a connection is established or severed, we perform a geoip lookup on the remote IP and immediately send that data to the server via an ssl socket.

Server

The server is super simple. It just receives signals coming in over the listening ssl socket and forwards each one directly to the Arduino. Serial communications are handled by the pyserial module. Several standard library modules, such as ssl and socket, are used by both the client and the server to establish the secure channel.

By the time we write to the serial interface, the message we send contains the following information: [Connection Closed or Opened],[Remote Mac Address],[Remote ipv4 IP],[Remote Port],[Country Code],[Continent Code]
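Put together, the server's main loop can be as small as the following sketch. The port, certificate paths, and serial device here are assumptions, and the real server also performs the password check mentioned above:

#!/usr/bin/env python
# Sketch: accept SSL connections and relay messages to the Arduino.
import socket, ssl, serial

arduino = serial.Serial('/dev/ttyUSB0', 9600)    # serial device is an assumption

bindsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
bindsock.bind(('0.0.0.0', 4444))                 # port is an assumption
bindsock.listen(5)

while True:
    newsock, addr = bindsock.accept()
    conn = ssl.wrap_socket(newsock, server_side=True,
                           certfile='server.crt', keyfile='server.key')
    try:
        data = conn.read()
        if data:
            # e.g. "open,00:11:22:33:44:55,1.2.3.4,443,DE,EU" (illustrative values)
            arduino.write(data)
    finally:
        conn.close()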

Arduino

I didn’t change what Steve wrote a whole lot here, except for reading the continent code and setting the colors appropriately. I did separate out a file for IP addresses I wanted to specially highlight.

Challenges

One of the challenges I've had is determining when a timeout on a TCP connection has occurred. Some connections (SSH, for example) can stand for several hours before timing out. Others (HTTP, for example) time out very quickly. From my understanding, it's impossible to tell from the transport layer alone whether a connection has timed out in a given period of time. We could infer the timeout from the application layer, but that seems an inelegant solution. I have yet to find a good way of dealing with this problem. If you know of a solution, please contact me.

On the client, the sniffing of packets by pcapy is a blocking call. Sending a ctrl-c doesn't throw a KeyboardInterrupt exception until a new packet is actually read. In order to ensure that users can kill the client immediately, I had to use the multiprocessing module and run the sniffing loop in a separate process, with the parent process watching for the KeyboardInterrupt and killing both itself and the sniffer. This seems a bit silly to me, but I'm not sure there's any better way to do it. Again, if you have any suggestions, let me know.
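The workaround is, roughly, this shape (a minimal sketch; sniff_loop stands in for the actual blocking pcapy loop):

import multiprocessing, time

def sniff_loop():
    # the blocking pcapy capture loop would run here
    while True:
        time.sleep(1)

if __name__ == '__main__':
    sniffer = multiprocessing.Process(target=sniff_loop)
    sniffer.start()
    try:
        sniffer.join()           # parent blocks, but remains interruptible
    except KeyboardInterrupt:
        sniffer.terminate()      # ctrl-c kills the sniffer immediately
        sniffer.join()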

Left: cerealbox visualizing http(s) connections. Right: bittorrent traffic

Outcome

There are still a few bugs to squash. But it works, it's pretty, and it's useful! Check out colorduino on my github and let me know what you think!

Enigma Machine in Captain America: The First Avenger

In a nod to the history of cryptography, the folks over at Marvel Studios included a modified replica of the Enigma Machine in one scene of the 2011 film Captain America: The First Avenger. If you don't know, the Enigma Machine was a tool used by the German army and navy to encipher and decipher messages during WWII. The breaking of the cipher, led by the brilliant cryptographer Alan Turing and his colleagues at Bletchley Park, gave the Allies vital information on the land movements of Axis troops. In this scene, it was even used in the correct context: decrypting messages of the Nazi train system. I commend Marvel for their thoroughness on this one!

A modified replica enigma as seen in the film

An actual enigma machine from a picture I took last year at the Computer History Museum in Mountain View, CA

Lookbehind / Lookahead Regex in Vim

Here’s a nifty little vim tip for you.

I recently had to switch a few variables in PHP from $varname to $somearray['varname']. Since there were quite a few of these replacements to be done, I found it convenient to use vim's search/replace regex feature. In this case, I have to use lookbehind, since the matching string is simply varname, and I'm not interested in capturing the $ at the beginning. I just want the regex to match anything starting with the $, without having the $ as part of the matched string itself.

So, let’s try to replace the following line:

authenticate($key, $secret, $uri);

with this one:

authenticate($somearray['key'], $somearray['secret'], $somearray['uri']);

We'll want to construct a lookbehind for the $, with some string in front. Then, we'll replace it with $somearray['matching_string']. In vim, lookbehind uses the special @ symbol, rather than the perl (?<=somestring) syntax.

:'<,'>s/\$\@<=[a-z]\+/$somearray['&']/g

This will do the trick. As you can see, the $, @, and + must all be escaped. The positive lookbehind chars, @<=, can be replaced with @<! if a negative search is desired. Lookahead is similar to lookbehind's syntax, but uses @= and @! instead. The special & character in the replace string designates the matched token, which you can use to place the matching string in your replacement.

So for reference:

  • :%s/\(some\)\@<=thing/one/g

    matches thing when preceded by some, and changes thing into one
    end result: something becomes someone

  • :%s/\(some\)\@<!thing/one/g

    matches thing when not preceded by some, and changes thing into one
    end result: something is not changed, but everything becomes everyone

  • :%s/some\(thing\)\@=/every/g

    matches some when followed by thing, and changes some into every
    end result: something becomes everything

  • :%s/some\(thing\)\@!/every/g

    matches some when not followed by thing, and changes some into every
    end result: something is not changed, but someone becomes everyone

Hardening your VPN Setup with iptables

I’ll be heading out to Defcon 19 next month, so I want my VPN connection to be stable and secure.

You probably know the situation. You’re at your local coffee shop, using their (hopefully not) wide-open unsecured wifi hotspot. But you’re smart enough not to send all your data out over the clear, since there might be malicious script kiddies ready to take your sensitive data and sell it to kids on the street. So you use a VPN. You fire up OpenVPN and connect to your VPN service. Then you start browsing, comforted by the fact that your traffic is encapsulated in a secure SSL tunnel. Better yet, the user experience is transparent: you don’t have to configure your applications to manually use a SOCKS5 proxy. OpenVPN handles your routing tables and creates a virtual interface using the tun module. It’s so simple, you don’t need to think about it. But there’s a problem with this setup.

No one can reach into your stream and extract or insert data, but there's a caveat. Anyone can destroy your TCP stream by sending you a spoofed RST packet that appears to come from the remote server, or by otherwise making the service unavailable to you. Destroying the TCP stream destroys the virtual (tun) interface, which, in turn, destroys the routes associated with that interface. Now you're using your physical interface, unprotected from those pesky hackers. Worse still, you don't realize it: since everything is transparent, nothing changes from the perspective of user experience. Now you're screwed.

Little did you know that this all could have been avoided by our friend iptables. Sure, you could modify your routes further to ensure that only traffic destined for the VPN server goes over your physical interface, but that's too easy. Plus, routing tables aren't intended for security; they're intended to move packets along. iptables seems like the tool for the task, so I modified a script I found here to make sure we disallow any traffic that we don't want:

#!/bin/bash
if [[ $EUID -ne 0 ]]; then
	echo "This script must be run as root" 1>&2
	exit 1
fi

# name of primary network interface (before tunnel)
PRIMARY=wlan0

# address of tunnel server
SERVER=seattle.vpn.riseup.net
# address of vpn server
VPN_SERVER=seattle.vpn.riseup.net

# gateway ip address (before tunnel - adsl router ip address)
# automatically determine the ip from the default route
GATEWAY=`route -n | grep $PRIMARY | egrep "^0\.0\.0\.0" | tr -s " " | cut -d" " -f2`

# provided by openvpn: tunnel interface name
TUNNEL=tun0

openvpn --config /my/path/to/riseup.ovpn --auth-user-pass /my/path/to/authentication.conf &

# iptables rules - important!

#LOCAL_NET=192.168.0.0/16
LOCAL_NET=$GATEWAY

# Flush all previous filter rules, you might not want to include this line if you already have other rules setup
iptables -t filter --flush

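# remove any MYVPN chain left over from a previous run, then recreate it
# (the -X will complain harmlessly the first time, when no chain exists yet)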
iptables -t filter -X MYVPN
iptables -t filter -N MYVPN

# Exceptions for local traffic & vpn server
iptables -t filter -A MYVPN -o lo -j RETURN
iptables -t filter -A MYVPN -o ${TUNNEL} -j RETURN
iptables -t filter -A MYVPN --dst 127.0.0.1 -j RETURN
iptables -t filter -A MYVPN --dst $LOCAL_NET -j RETURN
iptables -t filter -A MYVPN --dst ${SERVER} -j RETURN
iptables -t filter -A MYVPN --dst ${VPN_SERVER} -j RETURN
# Add extra local nets here as necessary

iptables -t filter -A MYVPN -j DROP

# MYVPN traffic leaving this host:
iptables -t filter -A OUTPUT -p tcp --syn -j MYVPN
iptables -t filter -A OUTPUT -p icmp -j MYVPN
iptables -t filter -A OUTPUT -p udp -j MYVPN

echo "nameserver 8.8.8.8" > /etc/resolv.conf

You'll want to modify the openvpn command, interfaces, and servers to meet your needs. And that's it! If your stream is taken down, these rules will protect you. I have this script as a post-connect hook for any untrusted networks I connect to (wicd is a nice network manager for adding hooks). Later, if you want your traffic to go over the clear again, you can use this script:

#!/bin/bash
if [[ $EUID -ne 0 ]]; then
	echo "This script must be run as root" 1>&2
	exit 1
fi

iptables -t filter --flush
iptables -t filter -X MYVPN

Rooting a router: Wiretapping dd-wrt / OpenWRT embedded linux firmware

Note: The following post is written partially as a follow-up to the presentation I gave on dd-wrt at the December meeting of the Western North Carolina Linux Users Group.

Concept

If you're running dd-wrt on your router (or OpenWRT, or Tomato for that matter), you already know how powerful it can be. Capabilities such as boosting signal strength, interacting with Dynamic DNS services, running a VPN server, and transitioning to ipv6 all come prepackaged in the standard edition, running on a mere 4MB of flash memory (and the micro edition runs on 2MB!). What I will show you is that using the features commonly bundled with dd-wrt, you can turn your router into a wiretap, regardless of your wireless security. I'll be dealing specifically with dd-wrt v24-sp2, but you can also wiretap OpenWRT by following similar instructions.

The idea is this: you have a router sitting at a critical juncture in your network infrastructure. Packets are rapidly being routed in and out of various interfaces on the router. If we are dealing with a wireless router with security enabled, the router is an endpoint for that encryption, which means the packets are decrypted upon arrival, before being pushed out over the wire. Additionally, the OpenWRT / dd-wrt communities have ported a wide array of Linux projects to the platform. Notably, they've ported tcpdump, the powerful command-line packet analyzer, and libpcap, the C/C++ library required for capturing network traffic. Using such a tool at the router level means your network is owned.

There’s a problem though – once we have a capture going, where do we store it? Most of these routers only have extremely limited flash storage space, usually barely enough for the embedded firmware alone. Even those that have more can only store perhaps a few moments of a heavy traffic capture. Where is all that data to go? Well, we’re in luck: dd-wrt has precompiled support for CIFS (also known as SMB), Microsoft’s network sharing protocol. If we’re able to mount a network share, and store our capture there, then we don’t have to worry about storage limitations. We can even install the packages necessary for the capture on the remote filesystem.

Implementation

Let's start with a base install. We'll need ssh access, so let's load up the web interface and, under Services → Services, enable SSHd. The package manager OpenWRT uses is called ipkg. This is not available until we enable JFFS2 under Administration → Management. Next, create a CIFS share on the local machine. Here, our local machine's ip is 192.168.1.2, and the network share is named "share." SSH in as root, and issue the following commands to insert the CIFS module and mount the network share:

insmod /lib/modules/2.4.35/cifs.o
mount.cifs "\\\\192.168.1.2\\share" /jffs -o user=username,password=password

Nice, now we have the network share mounted. If you encounter an error issuing the mount.cifs command, double check your ip, share name, username, and password. Since we’ve essentially mounted over the already mounted /jffs, we need to issue an additional command for ipkg to cleanly update:

mkdir -p /jffs/tmp/ipkg
ipkg update

Once ipkg has updated its package list, we can see all that is available to us by issuing:

ipkg list

As you can see, there’s a ton of stuff we can install. At this point, though, I started encountering some problems. When I issued the “ipkg install tcpdump” command, it fetched and installed the required libpcap first. Then, it went to install tcpdump, and threw an error that libpcap wasn’t installed. I tried to install them manually, but that didn’t work either. So at this point I started looking for alternatives. Optware is another way to install packages, using ipkg in /opt rather than /jffs. Following the instructions here, I created a local ext2 filesystem available via the network share:

dd if=/dev/zero of=share/optware.ext2 bs=1 count=1 seek=10M
mkfs.ext2 share/optware.ext2

If you want more space for packages, change the seek parameter. Next, we’ll be mounting this to /opt on the router side. We’ll need to install kmod-loop first, and insert the loop and ext2 kernel modules:

ipkg install kmod-loop
insmod /lib/modules/2.4.35/ext2.o
insmod /jffs/lib/modules/2.4.30/loop.o
mount -o loop /jffs/optware.ext2 /opt

Great, now we have /opt mounted to the remote ext2 filesystem. Get the install script and install it:

wget http://www.3iii.dk/linux/optware/optware-install-ddwrt.sh -O - | tr -d '\r' > /tmp/optware-install.sh
sh /tmp/optware-install.sh

Excellent, now we have the optware port installed in /opt! Let's run an update with this new ipkg:

/opt/bin/ipkg update

For comparison's sake, let's look at how many packages we now have available, as opposed to before:

root@DD-WRT:/opt# ipkg list | wc -l
652
root@DD-WRT:/opt# /opt/bin/ipkg list | wc -l
1242

So we've almost doubled the number of packages available to us. And most importantly, no more complications with tcpdump:

/opt/bin/ipkg install tcpdump

Now that we have tcpdump and libpcap, we can dump our packets to the network share:

tcpdump -s 0 -w /jffs/network.cap not host 192.168.1.2

From here on in, we can open the packet dump with wireshark and find lots of useful information. We can even store the commands in a start-up script in the dd-wrt web interface under Administration → Commands:

insmod /lib/modules/2.4.35/cifs.o
insmod /lib/modules/2.4.35/ext2.o
mount.cifs "\\\\192.168.1.2\\share" /jffs -o user=username,password=password
insmod /jffs/lib/modules/2.4.30/loop.o
mount -o loop /jffs/optware.ext2 /opt
tcpdump -s 0 -w /jffs/network.cap not host 192.168.1.2 &

Implications

Given the wide range of routers supported by dd-wrt/OpenWRT, this is a major security concern. Although the attack requires physical access to the device in question, there is nothing to stop an attacker from purchasing an identical model of router, installing dd-wrt and tcpdump on it, and swapping a target router with the malicious one. If the attacker already knows the wireless password, the malicious router can be configured such that the swap would not draw attention. Resetting the router is no defense – the OpenWRT firmware modification kit can easily alter the firmware image file, and an image modified to include traffic-monitoring code means any reset only restores the malicious firmware.
Such an attack need not be local. Most ISPs block CIFS traffic, but the router could be made to forward the CIFS ports through an SSH tunnel to a remote endpoint. The stock Dropbear SSH isn't capable of tunneling, but openssh is available in the ipkg repository, and can be either included in the firmware or installed in the local /jffs space. Sending all network traffic that goes over the wire to a remote endpoint may be impractical for an attacker, but packet headers alone still provide a wealth of information.

Swinedroid, Snort Monitoring tool, available on the Android Market

Swinedroid v0.20 has been released and is now available on the Android Market. If you haven't read my previous post about it, here's the low down. Swinedroid is a remote Snort monitoring application for Android. Currently, it allows you to view server threat statistics, display the latest alerts, search alerts (by alert severity, signature name, and time frame), and view alert details (including a hex dump, if available). It consists of two components: the client, which runs on your Android device, and the server, which runs on the system you wish to monitor (or a third-party server that can access the snort server db port). The server provides statistics requested by the client over a secure and authenticated SSL link.

Since the last (non-market) release, I've introduced a server threat graph (thanks to AChartEngine), alert detail breakdown, SSL authenticity negotiation, functional alert browsing, a more helpful launcher screen, and crash fixes.

Swinedroid server overview and alert overview

Having an Android Snort monitoring application can prove handy in a variety of situations where access to web-based clients is either unavailable or inconvenient. Since this is a monitoring tool that runs natively on Android, it will also be possible to receive notifications based on alert statistics – a feature I plan to implement at some stage. Also upcoming: alert tagging and deletion, more advanced alert statistics, attacker profiling (including reverse DNS / location information), and more. If you have suggestions, please post your feedback.

Download the client app here.

Download the server here.

Swinedroid – the new Snort Monitoring tool for Android

If you've ever been on the go when crisis strikes, you know how convenient it is to have a mobile application for dealing with the problems you might face. For instance, I've found it really convenient that there's an application that interfaces with the API of my Virtual Private Server host, Slicehost. I no longer have to fumble around with the browser trying to find the page which reboots the VPS; I simply load the Slicehost application. It stores my API key, and I'm able to manage my servers in a more streamlined fashion.

It is in this spirit that I began development on Swinedroid. Swinedroid is an Android Snort monitoring and management application. In its current state it allows you to view server alert statistics, display latest alerts, and search alerts based on severity, signature name, and time frame. In the coming months, I plan to add support for viewing alert details (such as the hex dump and whois information), sorting alerts, managing alerts (e.g. tagging or deleting them), and interpreting a variety of Snort log formats.

Here's the way it works. There are two components: the server and the client. The server runs on any machine that you want to monitor. In order for the Swinedroid server component to work, you need to have Snort installed and logging alerts to MySQL. You install the client on your Android device and configure it to communicate with the server component. This communication is done over SSL in a secure (but not authenticated) fashion.

Swinedroid overview screen

The project is still very much in the beginning stages, and there are exciting features to come. Everything is free and open source. I invite you to try it out, and give me your feedback.

Git Repository (Client): git://github.com/Hainish/Swinedroid.git

Git Repository (Server): git://github.com/Hainish/Swinedroid-Server.git

Client Component: http://www.inputoutput.io/files/swinedroid-client_0.10.apk

Server Component: http://www.inputoutput.io/files/swinedroid-server_0.10.tar.gz

Update:
Swinedroid has been released on the Android Market. See this post for more info.

SSL or S-S-Hell?

2009's Beating on SSL, Round One

Hot on the heels of the Microsoft Crypto API patch comes another SSL vulnerability. The last round of attacks on SSL relied on a problem with the deployment of SSL on the web, as the research of Moxie Marlinspike shows. To sum up the crucial point: just because the x509 certificate format accepts strings such as www.paypal.com\0.thoughtcrime.org without terminating at the null byte, that doesn't mean your web browser will do the same. We're able to create a certificate signing request (.csr) with www.paypal.com\0 as a subdomain of a domain we genuinely control. Because of the automated nature of today's domain (and subdomain) verification process, this will go unnoticed by most Certificate Authority signing processes. Once we get the certificate back from the CA, we're able to pose as a man-in-the-middle. Until recently, most browsers would terminate the string at the null character, leaving "www.paypal.com" as the domain for which we've been authenticated. Not only is this a theoretical possibility, but Moxie has released tools for it, available at thoughtcrime.org, which are probably still quite effective against unpatched systems.
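The browser-side bug is easy to illustrate. Here's a small Python sketch (purely illustrative, not Moxie's tooling) of the difference between comparing the full common name and comparing a C-string-style, null-terminated view of it:

# Sketch: why null bytes in a certificate common name are dangerous.
cn = 'www.paypal.com\x00.thoughtcrime.org'

# What the CA effectively validated: the attacker's real domain.
print cn.endswith('.thoughtcrime.org')   # True

# What a buggy, C-string-style comparison sees: everything before the NUL.
truncated = cn.split('\x00')[0]
print truncated                          # www.paypal.com
print truncated == 'www.paypal.com'      # True -- the hostname check passes!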

Round Two: The K.O.

Whereas the null character vulnerability was an issue with web deployment of SSL and certificate chaining, the latest flaw (released on November 5th) seems to be a severe problem with the protocol itself. While there's been a fair degree of hype surrounding a number of supposed vulnerabilities in SSL, this seems to be the real deal. Specifically, the flaw is in SSL 3.0 / TLS 1.0, and involves inserting unverified traffic into the renegotiation process of SSL sessions. Marsh Ray of PhoneFactor discovered the vulnerability, which seems to be severe: "In certain circumstances this flaw could be used in MITM attacks, allowing an attacker to inject attacker-chosen plain text prefix into a secure session of the victim." The bug has been worked on for several months, and OpenSSL has released a patch to deal with it in its 0.9.8l release, available at www.openssl.org. Again, this is not a problem with deployment, or (as with last year's Debian SSL vulnerability) distribution-specific forking; it is a fundamental problem with the way SSL renegotiates sessions. Also unlike last year's Debian vulnerability, which can be exploited retroactively, this exploit requires foreknowledge of the vulnerability and situating oneself as a man-in-the-middle. Exploits are in the wild as of this writing. Kudos to OpenSSL for releasing a patch so quickly.

Using the android browser with tor or any socks proxy & privoxy

Update: If all you're looking to do is use Tor with Android, please use this tutorial. The information below is out of date for such uses.

Prerequisites:

  1. A jailbroken android install.
  2. Debian Armel on android.
  3. SSHD running in the chrooted debian environment.

Want to browse the web anonymously with your android device, without t-mobile recording your every move? Look no further.

Few are aware that the default android browser actually allows you to use an http proxy to connect to the web. It is a rather obscure setting to trigger, and there are no provisions for you to connect through a socks proxy, such as an ssh tunnel or the tor network. Luckily, privoxy handles all this for us. Privoxy is an http proxy that is able to forward http requests through the encrypted socks tunnel, and out to its intended recipient. In this tutorial, I will show you how to set your android browser to use privoxy, and how to configure privoxy to forward to a socks proxy.

Lets jump right in.

Using connectbot (available from the android market), ssh into your chrooted debian on localhost. Run:

apt-get install tor

This will fetch both tor and privoxy for you. Now, you’ll need to configure privoxy to forward its http requests through tor, or whatever other tunnel you’ve created through ssh (see my previous post, http://www.inputoutput.io/how-to-subvert-deep-packet-inspection-the-right-way/). Append the following line to your /etc/privoxy/config file:

forward-socks5 / localhost:9050 .

Change 9050 to whatever port your tor or ssh tunnel is listening on. Default is 9050 for tor. Now, start tor and privoxy with:

/etc/init.d/tor start
privoxy /etc/privoxy/config

I had to make /dev/null world-writable for tor to stop complaining. You’ll have to run that last part every time you restart your android device. Now on to the annoying part. In terminal emulator (also available from the android market):

su
sqlite3 /data/data/com.android.providers.settings/databases/settings.db
SQLite version 3.5.9
Enter ".help" for instructions
sqlite> INSERT INTO system VALUES (99, 'http_proxy', 'localhost:8118');
sqlite> .quit

Change 8118 to whatever port privoxy is listening on (that port is the default). Now the browser is configured to use privoxy as its http proxy. Privoxy, in turn, is configured to forward connections through tor or the ssh tunnel. That means you're done – congratulations!

If you want to stop the browser from using the proxy at any point, in terminal emulator:

su
sqlite3 /data/data/com.android.providers.settings/databases/settings.db
SQLite version 3.5.9
Enter ".help" for instructions
sqlite> DELETE FROM system WHERE name='http_proxy';
sqlite> .quit

It’s quite frustrating to go through this process every time you want to switch between proxified and raw browsing, so I suggest installing a second browser such as ‘steel’ for your raw connection, and only using the default browser for proxified connections.

VMWare Workstation in BackTrack {3, 4} Live

BackTrack 4

Why?

There's been a number of situations in the past where, even though I'm perfectly happy running BackTrack as a host operating system, it would nonetheless be sweet to run any number of virtualized guest machines as well. For instance, if exploit code or a tool has been released for Windows (e.g. Ferret/Hamster) but is not yet, or will never be, released for Linux. Or if you want to do research in a virtualized network environment. And in general, it's just a good idea to keep your options open – to sharpen your axe before you go out and chop some wood. My virtualization software of choice is VMWare Workstation, especially the newer versions >= 6.5. I'm not going to go into why I favor VMWare over other options, but suffice it to say that they're just the best choice for non-commercial virtualized environments (and, uhm, unity mode is kickass). So this will be a quick run-through of creating a customized .lzm file for BackTrack Live with a full and functioning install of VMWare Workstation.

How?

While on the road to creating a customized .lzm file, I steered for the path of least resistance. Basically, I created before- and after-install lists of files across the entire file system. I then compared the two, the difference being the new files created by the install. Copy those files over to a subdirectory structure, run dir2lzm, place the .lzm file into the appropriate directory to be uncompressed at boot time, and you're done. (Here I have to add a disclaimer: this method can probably be improved upon, since it doesn't account for files which the install did not create but may have only modified. Perhaps checking modification timestamps would be better.)

Boot up to BackTrack Live, and let's get started:

mkdir ~/vmware-install-tracking/
cd ~/vmware-install-tracking/
find / | sort > before

Now that we have a list of files before the install takes place, it’s time for us to install VMWare. Once you’ve installed it, run it, customize your settings, enter your serial number, etc. Open a few virtual machines. Get your settings to a point where you’re comfortable with them – you won’t be able to modify them again after this point. Close VMWare.

find / | sort > after
diff before after > new_files
cat new_files | egrep -v "^---$" | egrep -v "^[0-9]" | egrep -v "[><] /dev" | egrep -v "[><] /mnt/live" | egrep -v "[><] /proc" | egrep -v "[><] /sys" | egrep -v "[><] /tmp" | egrep -v "[><] /var/run" | egrep -v "[><] /var/lock/subsys/vmware" | egrep -v "[><] /root/vmware-install-tracking/" | cut -d" " -f2 > required_files
echo "/lib/modules/2.6.28.1/modules.dep" >> required_files # don't forget those modules!

The directory in the last line will vary based on current kernel version. At this point we have compiled a list of all the files and directories we need for the .lzm file. But we need a script that will parse through required_files and create a file/directory structure from it. I threw the following together in python, create_filestructure_from_filelist.py:

#!/usr/bin/python

import subprocess, os, sys
if len(sys.argv) != 3:
        print "Usage: " + sys.argv[0] + " [file list to parse] [destination path]"
        exit()

dest_path = sys.argv[2]
if dest_path[len(dest_path) - 1] == '/':
        dest_path = dest_path[0:len(dest_path) - 1]

try:
        fp = open(sys.argv[1],"r")
except:
        print "Error: Could not open file for reading!"
        exit()

x = fp.readline().strip()
file_list = []
dir_list = []
while x:
        if os.path.isdir(x):
                dir_list.append(x)
        if os.path.isfile(x):
                file_list.append(x)
        x = fp.readline().strip()

for dir in dir_list:
        if not os.path.isdir(dest_path + dir):
                subprocess.call('mkdir -p ' + dest_path + dir,shell=True)

for file in file_list:
        file_components = file.split('/')
        containing_dir = '/'.join(file_components[0:len(file_components) - 1])
        if not os.path.isdir(dest_path + containing_dir):
                subprocess.call('mkdir -p ' + dest_path + containing_dir,shell=True)
        subprocess.call('cp ' + file + ' ' + dest_path + file,shell=True)

Now all that's left to do is call the script, create the .lzm, and put it in the loadtime modules directory. Make sure the destination path in the script has enough storage space.

./create_filestructure_from_filelist.py required_files vmware-tmp/
dir2lzm vmware-tmp/ vmware.lzm
mv vmware.lzm /mnt/sdb1/bt4/modules/

Reboot to your live distribution. You now have a working install of VMWare Workstation on your BackTrack Live. Enjoy!