
    Skipfish: A Lightweight Web Scanning Tool

    1. Skipfish Introduction

    2. Skipfish Basic Operations

    3. Authentication

    I. Skipfish Introduction

    Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the target site by performing a recursive crawl and dictionary-based probes. The resulting map is then annotated with the output of a number of active (but hopefully non-disruptive) security checks. The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.

    Main features:
    High speed: pure C code, highly optimized HTTP handling, and a minimal CPU footprint - easily achieving 2000 requests per second against responsive targets.
    Ease of use: heuristics to support a variety of quirky web frameworks and mixed-technology sites, with automatic learning, on-the-fly wordlist creation, and form autocompletion.
    Cutting-edge security logic: high-quality, low-false-positive, differential security checks capable of spotting a range of subtle flaws, including blind injection vectors.

    More characteristics:
    written in C
    an experimental, active web security assessment tool
    recursive crawling
    dictionary-based probing
    fast
    - multiplexed single-threaded, fully asynchronous network I/O, eliminating memory-management and scheduling overhead
    - heuristic automatic content recognition
    low false-positive rate


    II. Skipfish Basic Operations

    1. skipfish --help

    View the command's options:

     Authentication and access options:
    
          -A user:pass      - use specified HTTP authentication credentials
          -F host=IP        - pretend that 'host' resolves to 'IP'
          -C name=val       - append a custom cookie to all requests
          -H name=val       - append a custom HTTP header to all requests
          -b (i|f|p)        - use headers consistent with MSIE / Firefox / iPhone
          -N                - do not accept any new cookies
          --auth-form url   - form authentication URL
          --auth-user user  - form authentication user
          --auth-pass pass  - form authentication password
          --auth-verify-url -  URL for in-session detection
    
        Crawl scope options:
    
          -d max_depth     - maximum crawl tree depth (16)
          -c max_child     - maximum children to index per node (512)
          -x max_desc      - maximum descendants to index per branch (8192)
          -r r_limit       - max total number of requests to send (100000000)
          -p crawl%        - node and link crawl probability (100%)
          -q hex           - repeat probabilistic scan with given seed
          -I string        - only follow URLs matching 'string'
          -X string        - exclude URLs matching 'string'
          -K string        - do not fuzz parameters named 'string'
          -D domain        - crawl cross-site links to another domain
          -B domain        - trust, but do not crawl, another domain
          -Z               - do not descend into 5xx locations
          -O               - do not submit any forms
          -P               - do not parse HTML, etc, to find new links
    
        Reporting options:
    
          -o dir          - write output to specified directory (required)
          -M              - log warnings about mixed content / non-SSL passwords
          -E              - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
          -U              - log all external URLs and e-mails seen
          -Q              - completely suppress duplicate nodes in reports
          -u              - be quiet, disable realtime progress stats
          -v              - enable runtime logging (to stderr)
    
        Dictionary management options:
    
          -W wordlist     - use a specified read-write wordlist (required)
          -S wordlist     - load a supplemental read-only wordlist
          -L              - do not auto-learn new keywords for the site
          -Y              - do not fuzz extensions in directory brute-force
          -R age          - purge words hit more than 'age' scans ago
          -T name=val     - add new form auto-fill rule
          -G max_guess    - maximum number of keyword guesses to keep (256)
    
          -z sigfile      - load signatures from this file
    
        Performance settings:
    
          -g max_conn     - max simultaneous TCP connections, global (40)
          -m host_conn    - max simultaneous connections, per target IP (10)
          -f max_fail     - max number of consecutive HTTP errors (100)
          -t req_tmout    - total request response timeout (20 s)
          -w rw_tmout     - individual network I/O timeout (10 s)
          -i idle_tmout   - timeout on idle HTTP connections (10 s)
          -s s_limit      - response size limit (400000 B)
          -e              - do not keep binary responses for reporting
    
        Other settings:
    
          -l max_req      - max requests per second (0.000000)
          -k duration     - stop scanning after the given duration h:m:s
          --config file   - load the specified configuration file
    
        Send comments and complaints to <heinenn@google.com>.
    
    
    #Use skipfish's bundled wordlists to discover hidden files on the target server
    #skipfish wordlists end in .wl by default
    root@kali:~# dpkg -L skipfish | grep wl #locate the wordlist files
    /usr/share/skipfish/dictionaries/medium.wl #medium wordlist
    /usr/share/skipfish/dictionaries/minimal.wl #minimal wordlist
    /usr/share/skipfish/dictionaries/extensions-only.wl #extensions-only wordlist
    /usr/share/skipfish/dictionaries/complete.wl #complete wordlist
    
    #-o writes the scan results to the given directory
    #-I only follows URLs matching the given string; here only the /dvwa directory is scanned
    #-S loads a supplemental read-only wordlist used to probe for the target's hidden files
    root@kali:~# skipfish -o test6 -I /dvwa -S /usr/share/skipfish/dictionaries/minimal.wl  http://192.168.128.129/dvwa
    
    #-X: do not check URLs containing the given string
    #-K: do not fuzz the named parameter
    #-D: crawl cross-site links into another domain
    #e.g. the command below scans 192.168.128.129; if links into the xxx.com domain turn up, xxx.com is scanned as well
    root@kali:~# skipfish -o test7  -D xxx.com -I /dvwa -S /usr/share/skipfish/dictionaries/minimal.wl  http://192.168.128.129/dvwa
    
    #-l: maximum requests per second; the example below caps the rate at 20 requests per second (in practice slightly more)
    root@kali:~# skipfish -o test8  -l 20  -S /usr/share/skipfish/dictionaries/minimal.wl  http://192.168.128.129/dvwa 
    
    #-m: maximum simultaneous connections per target IP
    root@kali:~# skipfish -o test9  -m 20  -S /usr/share/skipfish/dictionaries/minimal.wl  http://192.168.128.129/dvwa 
    
    Frequently used options can be written into a configuration file; then pass --config <file> on the command line instead of typing them out each time.
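    
    A minimal sketch of such a file, assuming the config format is one long-form "option = value" pair per line (the option names below come from the help output above, but the file format itself is an assumption; check your build's documentation):
    
    # skipfish.conf (hypothetical sketch; the "option = value" format is an assumption)
    auth-form = http://192.168.128.129/dvwa/login.php
    auth-user = admin
    auth-pass = password
    auth-verify-url = http://192.168.128.129/dvwa/index.php
    
    root@kali:~# skipfish -o test12 --config skipfish.conf http://192.168.128.129/dvwa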
    
    **Skipfish Authentication**
    #-A user:pass: use the given HTTP authentication credentials
    root@kali:~#  skipfish -o test11 -I /dvwa -A admin:password  http://192.168.128.129/dvwa
    
    #-C appends a custom cookie to every request
    #-X excludes URLs containing the given string; here logout.php is skipped (scanning logout.php would log the scanner out)
    root@kali:~#  skipfish -o test10 -I /dvwa -X logout.php -C "PHPSESSID=6f155b6b28fa5b88721ad9e5cbd3f08" -C "security=low"  http://192.168.128.129/dvwa 
    
    #Submit the username and password through the login form
    #--auth-form: the login page URL
    #--auth-user: the username
    #--auth-pass: the password
    #--auth-verify-url: an in-session URL requested after login to verify that authentication succeeded
    #(admin/password and the verify URL below are illustrative values)
    root@kali:~# skipfish -o test12 --auth-form http://192.168.128.129/dvwa/login.php --auth-user admin --auth-pass password --auth-verify-url http://192.168.128.129/dvwa/index.php http://192.168.128.129/dvwa
    
    

      

    2. skipfish -o test  http://1.1.1.1/dvwa/

    1. Scan http://1.1.1.1/dvwa/ and store the results in the test directory.

    2. Open the report in a browser.

    • The entire site is scanned
    • The results are saved in test/index.html 
    root@kali:~# skipfish -o test http://192.168.14.157/dvwa/
        skipfish web application scanner - version 2.10b
        [!] WARNING: Wordlist '/dev/null' contained no valid entries.
        Welcome to skipfish. Here are some useful tips:
    
        1) To abort the scan at any time, press Ctrl-C. A partial report will be written
           to the specified location. To view a list of currently scanned URLs, you can
           press space at any time during the scan.
    
        2) Watch the number requests per second shown on the main screen. If this figure
           drops below 100-200, the scan will likely take a very long time.
    
        3) The scanner does not auto-limit the scope of the scan; on complex sites, you
           may need to specify locations to exclude, or limit brute-force steps.
    
        4) There are several new releases of the scanner every month. If you run into
           trouble, check for a newer version first, let the author know next.
    
        More info: http://code.google.com/p/skipfish/wiki/KnownIssues
    
        Press any key to continue (or wait 60 seconds)... 
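    
    Once the scan finishes (or is aborted with Ctrl-C), the report can be opened straight from the output directory, e.g.:
    
    root@kali:~# firefox test/index.html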
    
    

      

     

    3. skipfish -o test @url.txt    #scan the targets listed in a file

    Scan multiple targets: this command scans every URL in url.txt and stores the results in the test directory, as sketched below.
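    
    A minimal sketch of the list file (the target URLs are illustrative):
    
    root@kali:~# cat url.txt
    http://192.168.128.129/dvwa/
    http://192.168.128.130/mutillidae/
    root@kali:~# skipfish -o test @url.txt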

     

    4. skipfish -o test -S complete.wl -W abc.wl http://1.1.1.1 #wordlists

    # wordlists shipped with skipfish
    root@kali:~# dpkg -L skipfish | grep wl
        /usr/share/skipfish/dictionaries/medium.wl
        /usr/share/skipfish/dictionaries/minimal.wl
        /usr/share/skipfish/dictionaries/extensions-only.wl
        /usr/share/skipfish/dictionaries/complete.wl
    # specify a supplemental read-only wordlist (-S)
    root@kali:~# skipfish -o test1 -I /dvwa/ -S /usr/share/skipfish/dictionaries/minimal.wl http://172.16.10.133/dvwa/
        NOTE: The scanner is currently configured for directory brute-force attacks,
        and will make about 65130 requests per every fuzzable location. If this is
        not what you wanted, stop now and consult the documentation.
    # save keywords learned from the target site into a read-write wordlist (-W)
    root@kali:~# skipfish -o test1 -I /dvwa/ -S /usr/share/skipfish/dictionaries/minimal.wl -W abc.wl http://172.16.10.133/dvwa/
    
    

      

    5. More options

    -I  only follow URLs containing 'string'
    skipfish -o test -I /dvwa/ http://1.1.1.1/dvwa/
    
    -X  exclude URLs containing 'string' (e.g. login)
    skipfish -o test -X /login/ http://1.1.1.1/dvwa/
    
    -S  crawl the site with a supplemental wordlist
    skipfish -o test -S complete.wl http://1.1.1.1/dvwa/
    
    -K  do not fuzz the named parameter
    Use this when a particular parameter should not be fuzzed; see the sketch after this list.
    
    -D  crawl cross-site links into another domain (it takes a domain, not a URL)
    skipfish -o test -D xxx.com -I /dvwa/ http://1.1.1.1/dvwa/
    
    -l  maximum requests per second; actual throughput also depends on your network
    skipfish -o test -l 200 -S complete.wl http://1.1.1.1/dvwa/
    
    -m  maximum simultaneous connections per target IP
    skipfish -o test -m 100 -I /dvwa/ http://1.1.1.1/dvwa/
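    
    For instance, to keep skipfish from fuzzing a parameter named id (the parameter name is illustrative):
    skipfish -o test -K id -I /dvwa/ http://1.1.1.1/dvwa/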
    
    

      

    III. Authentication

    HTTP authentication
    skipfish -A user:pass -o test http://1.1.1.1/dvwa/
    
    
    Cookie-based authentication
    skipfish -C "name=val" -o test http://1.1.1.1/dvwa/
    To send several cookies, add one -C "name=val" per cookie:
    skipfish  -o test -C "name=val"  -C "name=val" http://1.1.1.1/dvwa/
    
    
    
    Form-based authentication
    skipfish  -o test --auth-form url --auth-form-target url --auth-user-field <username field name> --auth-user <username> --auth-pass-field <password field name> --auth-pass <password> --auth-verify-url url http://1.1.1.1/dvwa/ 
    
    --auth-form url  the page containing the login form
    --auth-form-target  the URL the form submits to (the form's action)
    --auth-verify-url  the URL the site redirects to after a successful login, i.e. the in-session page used to verify that authentication worked
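    
    Putting it together against DVWA, assuming its login form fields are named "username" and "password" and the default admin:password credentials (all target-specific values here are assumptions to adapt):
    skipfish -o test -X logout.php --auth-form http://1.1.1.1/dvwa/login.php --auth-form-target http://1.1.1.1/dvwa/login.php --auth-user-field username --auth-user admin --auth-pass-field password --auth-pass password --auth-verify-url http://1.1.1.1/dvwa/index.php http://1.1.1.1/dvwa/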
    

      
