Pragmatic Programmer Issues

skipfish – fast, easy and simple

Skipfish is a Google Code project: a web application security scanner built for speed (the authors claim 2000 requests per second, although that figure was measured on a local LAN :)). Because it is a command-line tool without fancy wizards, endless options and so on, it is relatively easy to use, and it is certainly easy to just start scanning.

Skipfish is an active scanner: it first crawls the application to build a map of the site, then recursively runs its various tests against it, and finally generates a report. The documentation is simple and offers plenty of example invocations to start from. So let's see it in action.

One such command is:

$ skipfish -m 5 -LVJ -W /dev/null -o output_dir -b ie http://www.example.com/
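Skipfish creates the output directory itself and will complain if it already exists, so for repeated runs it can be convenient to generate a fresh, timestamped directory name each time. A minimal sketch (the target URL and flags simply mirror the example above; the script only prints the command, uncomment the last line to actually run it):

```shell
# Build a skipfish command line with a per-run output directory.
TARGET="http://www.example.com/"
OUTDIR="scan-$(date +%Y%m%d-%H%M%S)"

CMD="skipfish -m 5 -LVJ -W /dev/null -o $OUTDIR -b ie $TARGET"
echo "$CMD"
# $CMD    # uncomment to actually launch the scan
```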

During the scan, Skipfish displays live statistics:

Scan statistics
---------------

       Scan time : 0:11:07.0068
   HTTP requests : 2446 sent (3.71/s), 16228.73 kB in, 659.18 kB out (25.32 kB/s)  
     Compression : 0.00 kB in, 0.00 kB out (0.00% gain)    
 HTTP exceptions : 34 net errors, 0 proto errors, 0 retried, 0 drops
 TCP connections : 2451 total (1.09 req/conn)  
  TCP exceptions : 0 failures, 1 timeouts, 0 purged
  External links : 745 skipped
    Reqs pending : 219        

Database statistics
-------------------

          Pivots : 471 total, 94 done (19.96%)    
     In progress : 323 pending, 38 init, 12 attacks, 4 dict    
   Missing nodes : 54 spotted
      Node types : 1 serv, 269 dir, 46 file, 1 pinfo, 91 unkn, 63 par, 0 val
    Issues found : 70 info, 111 warn, 49 low, 1 medium, 13 high impact
       Dict size : 0 words (0 new), 0 extensions, 0 candidates
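During a long scan the "Issues found" line is the one to watch. If the console output is saved to a log, the high-impact count can be pulled out with standard tools; a sketch, assuming the statistics format shown above (the sample log line here is just the one from this scan):

```shell
# Extract the high-impact issue count from a saved skipfish console log.
# The sample line mirrors the "Issues found" row shown above.
cat > scan.log <<'EOF'
    Issues found : 70 info, 111 warn, 49 low, 1 medium, 13 high impact
EOF

# Grab the number immediately before "high impact".
high=$(grep 'Issues found' scan.log | sed 's/.*, \([0-9]*\) high impact.*/\1/')
echo "high-impact issues: $high"
```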

After a few minutes or hours, depending on the site being scanned, we get:

[+] Copying static resources...
[+] Sorting and annotating crawl nodes: 1666
[+] Looking for duplicate entries: 1666
[+] Counting unique issues: 1158
[+] Writing scan description...
[+] Counting unique issues: 1666
[+] Generating summary views...
[+] Report saved to outputDir/index.html
[+] This was a great day for science!

The report consists of “crawl results”, “document type overview” and “issue type overview” sections. My last scan produced some real findings, but also a lot of false positives; it seems plenty of work is still waiting for the Skipfish team, but the tool looks promising.
