project:gsm:deka:deka-admin (last revised 2017/01/16 by jenda)
====== Installing deka ======
===== Getting the source =====

Get the deka and Kraken sources.

  git clone http://

  git clone <

The original Kraken might not work on recent systems. However, someone published my patched version on GitHub; that version should work on something like Debian Jessie. https://
+ | |||
===== Getting tables =====

Get the table files (*.dlt) generated by TMTO Project/
===== Installing tables =====

If the tables are stored in files, install them this way:

<code>
./
# table format
</code>

However, to avoid filesystem overhead, installing the tables directly on a block device is advised. The install.py script should help you with this.
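For illustration only, here is a sketch of what a table-installing helper like install.py might do: write each *.dlt file onto the block device back to back and record each table's offset so it can later be copied into the configuration. The real install.py ships with deka and may differ; the function name, device path, and file names below are placeholders.

<code python>
# Hypothetical sketch of installing *.dlt tables onto a raw block
# device, recording (name, offset, length) for each table so the
# offsets can be written into tables.conf / delta_config.h later.
import os

def install_tables(table_paths, device_path, start_offset=0):
    """Write each table file to device_path sequentially.

    Returns a list of (table_name, offset, length) tuples.
    """
    layout = []
    offset = start_offset
    with open(device_path, "r+b") as dev:
        for path in table_paths:
            with open(path, "rb") as table:
                data = table.read()
            dev.seek(offset)
            dev.write(data)
            layout.append((os.path.basename(path), offset, len(data)))
            offset += len(data)
    return layout
</code>

The returned layout is exactly the information (offsets per table) that the next section asks you to put into delta_config.h.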
===== Configuring tables for deka =====

Edit delta_config.h and fill in the device paths, index file paths, and offsets from the generated tables.conf.

Protip: do not use /dev/sdX names; use a stable path or UUID (e.g. /dev/disk/by-uuid/). /dev/sdX names tend to get mixed up across reboots!
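To find the stable alias for a device, you can resolve the symlinks under /dev/disk/by-uuid/ (or by-id/) and pick the one pointing at your disk. A small illustrative helper (the function name is mine, not part of deka):

<code python>
import os

def persistent_name(device, by_dir="/dev/disk/by-uuid"):
    """Return a stable alias (e.g. /dev/disk/by-uuid/...) for a
    /dev/sdX device, or None if no symlink in by_dir points at it."""
    target = os.path.realpath(device)
    for name in os.listdir(by_dir):
        link = os.path.join(by_dir, name)
        if os.path.realpath(link) == target:
            return link
    return None
</code>

Use the returned path in delta_config.h instead of the raw /dev/sdX name.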
+ | |||
===== Generating kernel =====

Run ./

Switching to 64-bit would also require changing "

Compiling fails with (older?) nVidia compilers due to unsupported "
+ | |||
<code c>
ulong one = 1; mask |= one << i;
...
ulong all = 0xFFFFFFFFFFFFFFFF;
if(diff != all) {
</code>
===== Setting kernel options =====

In vankusconf.py and vankusconf.h, the number of concurrently launched kernels can also be changed. A good starting value is a small integer multiple of the number of compute cores on your card, minus 1. For example, 4095 if your card has 2048 cores.
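The rule of thumb above, written out as a tiny helper (the function name is mine, not from deka):

<code python>
def suggested_kernel_count(cores, multiple=2):
    """Small integer multiple of the core count, minus 1."""
    return multiple * cores - 1
</code>

For a card with 2048 cores and a multiple of 2 this gives the 4095 mentioned above.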
+ | |||
Additionally,
+ | |||
===== Running deka =====

Run paplon.py.

Run oclvankus.py,

Run delta_client.py,

(Or use init.sh to run all of the above. Running them manually is better the first time, as you can see the debug prints.)
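The startup order above could be scripted much like init.sh presumably does. A hedged sketch; the component names are from this page, but the launcher itself, its delay, and the argument-free invocations are assumptions:

<code python>
# Illustrative launcher for the deka components, started in the order
# described above.  The real init.sh ships with deka and may differ.
import subprocess
import time

COMPONENTS = ["./paplon.py", "./oclvankus.py", "./delta_client.py"]

def launch_all(commands=COMPONENTS, delay=1.0):
    """Start each component in order, pausing between starts.

    Returns the Popen handles so the caller can watch or stop them.
    """
    procs = []
    for cmd in commands:
        procs.append(subprocess.Popen([cmd]))
        time.sleep(delay)  # crude: let servers come up before clients connect
    return procs
</code>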
+ | |||
Then, connect to the server (for example with telnet) and test it.
+ | |||
<code>
Trying ::1...
Connected to localhost.
Escape character is '^]'.
crack 001110001001010111000110000100110100001000011010100001000010000110101100101010100110110100100111110011101110000000
Cracking #0 001110001001010111000110000100110100001000011010100001000010000110101100101010100110110100100111110011101110000000
Found 44D85D82BAF275B4 @ 2 #0 (table:412)
crack #0 took 35586 msec
</code>
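The session above can also be scripted instead of typed into telnet. A minimal client sketch, assuming the line-based protocol shown in the sample session; the host, port, and end-of-request heuristic are assumptions, not part of deka's documented interface:

<code python>
import re
import socket

# Key lines look like: "Found 44D85D82BAF275B4 @ 2 #0 (table:412)"
FOUND_RE = re.compile(r"Found ([0-9A-F]{16}) @")

def parse_found(line):
    """Extract the 64-bit key from a 'Found ... @ ...' response line,
    or return None for any other line."""
    m = FOUND_RE.search(line)
    return m.group(1) if m else None

def crack(burst_bits, host="localhost", port=1234):
    """Send one crack request and collect any keys found.

    The port is a placeholder; use whatever your server listens on.
    Reads until the final "crack ... took ... msec" summary line.
    """
    keys = []
    with socket.create_connection((host, port)) as s:
        s.sendall(("crack %s\n" % burst_bits).encode())
        for line in s.makefile():
            key = parse_found(line)
            if key:
                keys.append(key)
            if line.startswith("crack ") and "took" in line:
                break
    return keys
</code>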
+ | |||
Congratulations,
+ | |||
===== Performance tuning =====

By entering "

Possible speedups:
  * tune loop unrolling in the kernel
  * tune the number of iterations in the kernel (currently 3000)
  * tune the number of kernels executed
  * use async IO or multiple threads to read blocks