# Nanown

A tool for identifying, evaluating, and exploiting timing
vulnerabilities remotely. This is part of the output from a research
effort [discussed at BlackHat 2015](https://www.blackhat.com/us-15/briefings.html#web-timing-attacks-made-practical).
This project is still highly experimental and not particularly easy to
use at this point.

# Prerequisites

Linux and Python 3.4+ are required. Yes, really, your Python needs to
be that new. You will also need to install the following modules for
this version of Python:
```
requests
numpy
netifaces
matplotlib
```
On Debian unstable, you can get these by running:
```
apt-get install python3-requests python3-numpy python3-netifaces python3-matplotlib
```
Otherwise, resort to `pip3` (e.g. `pip3 install requests numpy netifaces matplotlib`).

In addition, you'll need to have a C compiler and the development
package for libpcap installed. Under Debian this is probably sufficient:
```
apt-get install libpcap-dev gcc
```


# Installation

Hah! Funny.

Currently there's no installation script...

To attempt to use this code, clone the repository and build the
`nanown-listen` tool with:
```
cd nanown/trunk/src && ./compile.sh
```

That will drop the `nanown-listen` binary under `nanown/trunk/bin`. You
must then put this directory in your `$PATH` in order to perform any
data collection.

To run any of the other scripts, change to the `nanown/trunk` directory
and run them directly from there. E.g.:
```
bin/train ...args...
bin/graph ...args...
```


# Usage

Our goal for a usage workflow is this:

1. Based on example HTTP requests and test cases supplied by the user,
   a script generator creates a new script. This new script serves
   as the sample collection script, customized for your web
   application.

2. After collecting samples using the script from step 1, you run a
   mostly automated script to train and test various classifiers on your
   samples. This will then tell you how many samples you need to
   reliably detect the timing difference.

3. Given the output from step 2 and the inputs to step 1, a second script
   generator creates an attack script for you as a starting point. You
   customize this and run your attacks.

Sounds great, yeah? Well, steps 1 and 3 aren't quite implemented yet. =\

If you are really dying to use this code right now, just make a copy of
the `trunk/bin/sampler` script and hack on it until it sends HTTP requests
that your targeted web application expects. Be sure to define the test
cases appropriately. Then run it to collect at least
50,000 samples for each of the train, test, and train_null data sets
(150,000 samples total). NOTE: Your sampler script must be run as `root`
so it can tweak local networking settings and sniff packets.
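The real `sampler` script drives `nanown-listen` to timestamp packets on the wire, which is why it needs `root`. Purely to illustrate the shape of such a collection loop, here is a minimal userspace sketch; the function names, SQLite schema, and stand-in probe are all hypothetical and do not match nanown's actual database layout:

```python
import sqlite3
import time

def collect_samples(send_probe, cases, n_per_case, db_path=":memory:"):
    """Time n_per_case probes for each test case and record them in SQLite.

    send_probe(value) would issue one HTTP request for that case; any
    callable works here, so the loop can be exercised without a live target.
    """
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS probes (test_case TEXT, rtt_ns INTEGER)")
    for name, value in cases.items():
        for _ in range(n_per_case):
            start = time.perf_counter_ns()
            send_probe(value)
            # Store the round-trip time observed for this probe.
            db.execute("INSERT INTO probes VALUES (?, ?)",
                       (name, time.perf_counter_ns() - start))
    db.commit()
    return db

# Stand-in probe: sleep briefly instead of sending a real HTTP request.
db = collect_samples(lambda v: time.sleep(0.0001), {"train": 1, "train_null": 0}, 50)
rows = db.execute("SELECT test_case, COUNT(*) FROM probes GROUP BY test_case").fetchall()
```

Note that userspace timers like this are far noisier than nanown's packet-capture timestamps; that difference is much of the point of `nanown-listen`.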

Next you can move on to step 2, where you simply run the train script
against the database created by your sampler script:
```
bin/train mysamples.db
```
This will run for a while. If you cancel out and re-run it, it will
pick up where it left off. Pay special attention to the final results
it prints out. This will tell you how many samples are needed to
distinguish between the test cases. Do a little math on your own to
decide how feasible your overall attack will be.

Finally, we come to step 3. If you choose to carry out an attack, you
will need to implement your own attack script that collects batches of
samples, distinguishes between them using the best classifier available
(from step 2) and then repeats as needed. Consider starting with the
sample script at `test/blackhat-demo/jregistrate-attack`.
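The collect-classify-repeat loop described above might look like this sketch, where a median-versus-threshold rule stands in for whatever classifier step 2 actually selects; all names and parameters here are illustrative, not nanown's API:

```python
import random
import statistics

def attack(probe, threshold_ns, batch_size=100, max_batches=10, margin_ns=50):
    """Repeatedly collect a batch of timing samples and classify the target
    as 'slow' or 'fast' by comparing the batch median against a threshold
    learned in step 2. probe() returns one timing sample in nanoseconds."""
    for _ in range(max_batches):
        batch = [probe() for _ in range(batch_size)]
        mid = statistics.median(batch)
        if abs(mid - threshold_ns) >= margin_ns:  # far enough from the boundary?
            return "slow" if mid > threshold_ns else "fast"
    return "undecided"  # never got confident; collect more data or re-train

# Simulated target whose responses sit well above the learned threshold:
random.seed(1)
verdict = attack(lambda: 1200 + random.gauss(0, 100), threshold_ns=1000)
```

Batching with an early exit like this is the usual trade-off in timing attacks: stop as soon as the statistic is confidently on one side, rather than always paying for the worst case.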

Any questions? See the source, watch our BlackHat presentation, read
our research paper, or [post an issue](https://github.com/ecbftw/nanown/issues) on GitHub.