  • 2019 Remote Sensing Image Sparse Representation and Intelligent Analysis Competition - Semantic Segmentation Track

    Theme 1: Remote Sensing Image Scene Classification

    Remote sensing image scene classification aims to interpret remote sensing imagery at the scene level within spatial information networks, assigning a scene category label to every image. The competition targets remote sensing images containing typical scenes: participating teams classify the specified images using the data supplied by the organizers, and the organizers score the classification results against the published criteria.

    Task details: http://rscup.bjxintong.com.cn/#/theme/1          Scene classification code: https://github.com/vicwer/sense_classification

    Theme 2: Remote Sensing Image Object Detection

    The object detection and recognition task uses algorithmic models to automatically determine the classes and positions of one or more targets in a remote sensing image. The competition targets imagery containing typical ground objects: teams perform oriented object detection and recognition on the images supplied by the organizers, and the organizers score the results against the published criteria.

    Task details: http://rscup.bjxintong.com.cn/#/theme/2

    Theme 3: Remote Sensing Image Semantic Segmentation

    The semantic segmentation task analyzes the spectral and spatial information of ground objects in remote sensing imagery and assigns a semantic class label to every pixel. The competition targets optical remote sensing images covering typical land-use categories: teams perform land-use semantic segmentation on the images supplied by the organizers, and the organizers score the results against the published criteria. (A minimal sketch of per-pixel labeling follows the links below.)

    Task details: http://rscup.bjxintong.com.cn/#/theme/3          Semantic segmentation code: https://github.com/huiyiygy/rssrai2019_semantic_segmentation
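
    To make "a label for every pixel" concrete, here is a minimal sketch, illustrative only: the class count and crop size are assumptions, chosen to match the training script further down, and this is not code from the competition kit.

        import numpy as np

        # Fake network output: per-class logits with shape (num_classes, H, W).
        num_classes, height, width = 16, 400, 400
        logits = np.random.randn(num_classes, height, width)

        # Semantic segmentation assigns each pixel the class with the highest score,
        # yielding an (H, W) map of integer class labels.
        label_map = logits.argmax(axis=0)
        print(label_map.shape)  # (400, 400)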

    Theme 4: Remote Sensing Image Change Detection

    Change detection uses multi-temporal remote sensing data to analyze and characterize how land cover changes over time, assigning change labels to the pixels that differ across the multi-temporal images. The competition targets optical remote sensing imagery: teams perform building change detection on the images supplied by the organizers, and the organizers score the results against the published criteria.

    Task details: http://rscup.bjxintong.com.cn/#/theme/4

    Theme 5: Remote Sensing Video Object Tracking

    The automatic tracking task for optical remote sensing satellite video uses algorithmic models to automatically identify and localize a single target across a satellite video, marking it with a rectangular bounding box. The competition targets optical satellite videos containing typical moving objects, delivered as temporally consecutive frames: teams run automatic tracking on the videos supplied by the organizers, and the organizers score the results against the published criteria.

    Task details: http://rscup.bjxintong.com.cn/#/theme/5

     https://www.sohu.com/a/322434729_772793

    Competition website: http://rscup.bjxintong.com.cn/#/theme/3/

    Themes 1 and 3 are the closest to what we are doing here, especially Theme 3.

    Similarity: both are image segmentation.

    Differences: the image sizes differ, and so does the number of images. (One way to bridge the size gap is sketched below.)
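
    A common way to handle much larger source images is to tile them into fixed-size crops before training. This is only a sketch under assumptions, not the repo's actual preprocessing: the 400-pixel crop matches the --crop-size default in the script below, while the image dimensions here are made up.

        import numpy as np

        def tile_image(image: np.ndarray, crop: int = 400):
            """Split an (H, W, C) image into non-overlapping crop x crop tiles,
            dropping partial tiles at the right and bottom edges."""
            h, w = image.shape[:2]
            tiles = []
            for top in range(0, h - crop + 1, crop):
                for left in range(0, w - crop + 1, crop):
                    tiles.append(image[top:top + crop, left:left + crop])
            return tiles

        # Example: a fake 4-band 7200 x 6800 image yields 18 * 17 = 306 tiles.
        big = np.zeros((7200, 6800, 4), dtype=np.uint8)
        print(len(tile_image(big)))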

    # -*- coding:utf-8 -*-
    """
    @function: training script for the rssrai2019 semantic segmentation task
    @author: HuiYi or 会意
    @file: vis.py
    @time: 2019/6/23 7:00 PM
    """
    import argparse
    import os
    import numpy as np
    import torch
    from tqdm import tqdm
    
    from mypath import Path
    from utils.saver import Saver
    from utils.summaries import TensorboardSummary
    from dataloaders import make_data_loader
    from models.backbone.UNet import UNet
    from models.backbone.UNetNested import UNetNested
    from utils.calculate_weights import calculate_weigths_labels
    from utils.loss import SegmentationLosses
    from utils.metrics import Evaluator
    # from utils.lr_scheduler import LR_Scheduler
    from models.sync_batchnorm.replicate import patch_replication_callback
    
    
    class Trainer(object):
        def __init__(self, args):
            self.args = args
    
            # Define Saver
            self.saver = Saver(args)
            self.saver.save_experiment_config()
            # Define Tensorboard Summary
            self.summary = TensorboardSummary(self.saver.experiment_dir)
            self.writer = self.summary.create_summary()
    
            # Define Dataloader
            kwargs = {'num_workers': args.workers, 'pin_memory': True}
            self.train_loader, self.val_loader, self.test_loader, self.nclass = make_data_loader(args, **kwargs)
    
            # Define network
            if self.args.backbone == 'unet':
                model = UNet(in_channels=4, n_classes=self.nclass, sync_bn=args.sync_bn)
                print("using UNet")
            elif self.args.backbone == 'unetNested':
                model = UNetNested(in_channels=4, n_classes=self.nclass, sync_bn=args.sync_bn)
                print("using UNetNested")
            else:
                raise ValueError("unsupported backbone: {}".format(self.args.backbone))
    
            # train_params = [{'params': model.get_params(), 'lr': args.lr}]
            train_params = [{'params': model.get_params()}]
    
            # Define Optimizer
            # optimizer = torch.optim.SGD(train_params, momentum=args.momentum,
            #                             weight_decay=args.weight_decay, nesterov=args.nesterov)
            optimizer = torch.optim.Adam(train_params, self.args.learn_rate, weight_decay=args.weight_decay, amsgrad=True)
    
            # Define Criterion
            # whether to use class balanced weights
            if args.use_balanced_weights:
                classes_weights_path = os.path.join(Path.db_root_dir(args.dataset), args.dataset + '_classes_weights.npy')
                if os.path.isfile(classes_weights_path):
                    weight = np.load(classes_weights_path)
                else:
                    weight = calculate_weigths_labels(args.dataset, self.train_loader, self.nclass)
                weight = torch.from_numpy(weight.astype(np.float32))
            else:
                weight = None
            self.criterion = SegmentationLosses(weight=weight, cuda=args.cuda).build_loss(mode=args.loss_type)
            self.model, self.optimizer = model, optimizer
    
            # Define Evaluator
            self.evaluator = Evaluator(self.nclass)
            # Define lr scheduler
            # self.scheduler = LR_Scheduler(args.lr_scheduler, args.lr, args.epochs, len(self.train_loader))
    
            # Using cuda
            if args.cuda:
                self.model = torch.nn.DataParallel(self.model, device_ids=self.args.gpu_ids)
                patch_replication_callback(self.model)
                self.model = self.model.cuda()
    
            # Resuming checkpoint
            self.best_pred = 0.0
            if args.resume is not None:
                if not os.path.isfile(args.resume):
                    raise RuntimeError("=> no checkpoint found at '{}'".format(args.resume))
                checkpoint = torch.load(args.resume)
                args.start_epoch = checkpoint['epoch']
                if args.cuda:
                    self.model.module.load_state_dict(checkpoint['state_dict'])
                else:
                    self.model.load_state_dict(checkpoint['state_dict'])
                if not args.ft:
                    self.optimizer.load_state_dict(checkpoint['optimizer'])
                self.best_pred = checkpoint['best_pred']
                print("=> loaded checkpoint '{}' (epoch {})"
                      .format(args.resume, checkpoint['epoch']))
    
            # Clear start epoch if fine-tuning
            if args.ft:
                args.start_epoch = 0
    
        def training(self, epoch):
            print('[Epoch: %d, learning rate: %.6f, previous best = %.4f]' % (epoch, self.args.learn_rate, self.best_pred))
            train_loss = 0.0
            self.model.train()
            self.evaluator.reset()
            tbar = tqdm(self.train_loader)
            num_img_tr = len(self.train_loader)
    
            for i, sample in enumerate(tbar):
                image, target = sample['image'], sample['label']
                if self.args.cuda:
                    image, target = image.cuda(), target.cuda()
                # self.scheduler(self.optimizer, i, epoch, self.best_pred)
                self.optimizer.zero_grad()
                output = self.model(image)
                loss = self.criterion(output, target)
                loss.backward()
                self.optimizer.step()
                train_loss += loss.item()
                tbar.set_description('Train loss: %.5f' % (train_loss / (i + 1)))
                self.writer.add_scalar('train/total_loss_iter', loss.item(), i + num_img_tr * epoch)
    
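            # The metrics below are computed from the last batch of the epoch only,
            # so they give a rough progress signal rather than full-epoch training
            # accuracy.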
            pred = output.data.cpu().numpy()
            target = target.cpu().numpy()
            pred = np.argmax(pred, axis=1)
            # Add batch sample into evaluator
            self.evaluator.add_batch(target, pred)
    
            # Fast test during the training
            Acc = self.evaluator.Pixel_Accuracy()
            Acc_class = self.evaluator.Pixel_Accuracy_Class()
            mIoU = self.evaluator.Mean_Intersection_over_Union()
            FWIoU = self.evaluator.Frequency_Weighted_Intersection_over_Union()
            self.writer.add_scalar('train/mIoU', mIoU, epoch)
            self.writer.add_scalar('train/Acc', Acc, epoch)
            self.writer.add_scalar('train/Acc_class', Acc_class, epoch)
            self.writer.add_scalar('train/fwIoU', FWIoU, epoch)
            self.writer.add_scalar('train/total_loss_epoch', train_loss, epoch)
    
            print('train validation:')
            print("Acc:{}, Acc_class:{}, mIoU:{}, fwIoU: {}".format(Acc, Acc_class, mIoU, FWIoU))
            print('Loss: %.3f' % train_loss)
            print('---------------------------------')
    
        def validation(self, epoch):
            test_loss = 0.0
            self.model.eval()
            self.evaluator.reset()
            tbar = tqdm(self.val_loader, desc='\r')
            num_img_val = len(self.val_loader)
    
            for i, sample in enumerate(tbar):
                image, target = sample['image'], sample['label']
                if self.args.cuda:
                    image, target = image.cuda(), target.cuda()
                with torch.no_grad():
                    output = self.model(image)
                loss = self.criterion(output, target)
                test_loss += loss.item()
                tbar.set_description('Test loss: %.5f' % (test_loss / (i + 1)))
                self.writer.add_scalar('val/total_loss_iter', loss.item(), i + num_img_val * epoch)
                pred = output.data.cpu().numpy()
                target = target.cpu().numpy()
                pred = np.argmax(pred, axis=1)
                # Add batch sample into evaluator
                self.evaluator.add_batch(target, pred)
    
            # Fast test during the training
            Acc = self.evaluator.Pixel_Accuracy()
            Acc_class = self.evaluator.Pixel_Accuracy_Class()
            mIoU = self.evaluator.Mean_Intersection_over_Union()
            FWIoU = self.evaluator.Frequency_Weighted_Intersection_over_Union()
            self.writer.add_scalar('val/total_loss_epoch', test_loss, epoch)
            self.writer.add_scalar('val/mIoU', mIoU, epoch)
            self.writer.add_scalar('val/Acc', Acc, epoch)
            self.writer.add_scalar('val/Acc_class', Acc_class, epoch)
            self.writer.add_scalar('val/fwIoU', FWIoU, epoch)
            print('test validation:')
            print("Acc:{}, Acc_class:{}, mIoU:{}, fwIoU: {}".format(Acc, Acc_class, mIoU, FWIoU))
            print('Loss: %.3f' % test_loss)
            print('====================================')
    
            new_pred = mIoU
            if new_pred > self.best_pred:
                is_best = True
                self.best_pred = new_pred
                self.saver.save_checkpoint({
                    'epoch': epoch + 1,
                    # Unwrap DataParallel only when it was applied (i.e. when using CUDA).
                    'state_dict': self.model.module.state_dict() if self.args.cuda else self.model.state_dict(),
                    'optimizer': self.optimizer.state_dict(),
                    'best_pred': self.best_pred,
                }, is_best)
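
        # Model selection uses mIoU. Evaluator (from utils.metrics, not shown here)
        # presumably accumulates a confusion matrix and computes per-class
        # IoU = TP / (TP + FP + FN), averaged over classes; that is an assumption
        # about the module rather than a documented fact.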
    
    
    def main():
        parser = argparse.ArgumentParser(description="PyTorch Unet Training")
        parser.add_argument('--backbone', type=str, default='unet',
                            choices=['unet', 'unetNested'],
                            help='backbone name (default: unet)')
        parser.add_argument('--dataset', type=str, default='rssrai2019',
                            choices=['rssrai2019'],
                            help='dataset name (default: rssrai2019)')
        parser.add_argument('--workers', type=int, default=4,
                            metavar='N', help='dataloader threads')
        parser.add_argument('--base-size', type=int, default=400,
                            help='base image size')
        parser.add_argument('--crop-size', type=int, default=400,
                            help='crop image size')
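        # NB: argparse's type=bool converts any non-empty string to True, so
        # '--sync-bn False' still yields True; leave these flags unset to get the
        # default behavior.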
        parser.add_argument('--sync-bn', type=bool, default=None,
                            help='whether to use sync bn (default: auto)')
        parser.add_argument('--freeze-bn', type=bool, default=False,
                            help='whether to freeze bn parameters (default: False)')
        parser.add_argument('--loss-type', type=str, default='ce',
                            choices=['ce', 'focal'],
                            help='loss func type (default: ce)')
        # training hyper params
        parser.add_argument('--epochs', type=int, default=None, metavar='N',
                            help='number of epochs to train (default: auto)')
        parser.add_argument('--start_epoch', type=int, default=0, metavar='N',
                            help='start epochs (default:0)')
        parser.add_argument('--batch-size', type=int, default=None, metavar='N',
                            help='input batch size for training (default: auto)')
        parser.add_argument('--test-batch-size', type=int, default=None, metavar='N',
                            help='input batch size for testing (default: auto)')
        parser.add_argument('--use-balanced-weights', action='store_true', default=False,
                            help='whether to use balanced weights (default: False)')
        # optimizer params
        parser.add_argument('--learn-rate', type=float, default=None, metavar='LR',
                            help='learning rate (default: auto)')
        parser.add_argument('--lr-scheduler', type=str, default='poly',
                            choices=['poly', 'step', 'cos'],
                            help='lr scheduler mode: (default: poly)')
        parser.add_argument('--momentum', type=float, default=0.9,
                            metavar='M', help='momentum (default: 0.9)')
        parser.add_argument('--weight-decay', type=float, default=5e-4,
                            metavar='M', help='w-decay (default: 5e-4)')
        parser.add_argument('--nesterov', action='store_true', default=True,
                            help='whether to use nesterov (default: True)')
        # cuda, seed and logging
        parser.add_argument('--no-cuda', action='store_true', default=False,
                            help='disables CUDA training')
        parser.add_argument('--gpu-ids', type=str, default='0',
                            help='use which gpu to train, must be a comma-separated list of integers only (default=0)')
        parser.add_argument('--seed', type=int, default=1, metavar='S',
                            help='random seed (default: 1)')
        # checking point
        parser.add_argument('--resume', type=str, default=None,
                            help='put the path to resuming file if needed')
        parser.add_argument('--checkname', type=str, default=None,
                            help='set the checkpoint name')
        # finetuning pre-trained models
        parser.add_argument('--ft', action='store_true', default=False,
                            help='finetuning on a different dataset')
        # evaluation option
        parser.add_argument('--eval-interval', type=int, default=1,
                            help='evaluation interval (default: 1)')
        parser.add_argument('--no-val', action='store_true', default=False,
                            help='skip validation during training')
    
        args = parser.parse_args()
        args.cuda = not args.no_cuda and torch.cuda.is_available()
        if args.cuda:
            try:
                args.gpu_ids = [int(s) for s in args.gpu_ids.split(',')]
            except ValueError:
                raise ValueError('Argument --gpu-ids must be a comma-separated list of integers only')
    
        if args.sync_bn is None:
            if args.cuda and len(args.gpu_ids) > 1:
                args.sync_bn = True
            else:
                args.sync_bn = False
    
        # default settings for epochs, batch_size and lr
        if args.epochs is None:
            default_epochs = {'rssrai2019': 100}
            args.epochs = default_epochs[args.dataset.lower()]
    
        if args.batch_size is None:
            args.batch_size = 4 * len(args.gpu_ids)
    
        if args.test_batch_size is None:
            args.test_batch_size = args.batch_size
    
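        # Linear learning-rate scaling: the base LR (0.01 for rssrai2019) is defined
        # for a batch size of 4 per GPU and is rescaled in proportion to the actual
        # batch size.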
        if args.learn_rate is None:
            lrs = {'rssrai2019': 0.01}
            args.learn_rate = lrs[args.dataset.lower()] / (4 * len(args.gpu_ids)) * args.batch_size
    
        if args.checkname is None:
            args.checkname = str(args.backbone)
    
        print(args)
        torch.manual_seed(args.seed)
        trainer = Trainer(args)
        print('Starting Epoch:', trainer.args.start_epoch)
        print('Total Epoches:', trainer.args.epochs)
        print('====================================')
        for epoch in range(trainer.args.start_epoch, trainer.args.epochs):
            trainer.training(epoch)
            if epoch % args.eval_interval == (args.eval_interval - 1):
                trainer.validation(epoch)
    
        trainer.writer.close()
    
    
    if __name__ == "__main__":
        main()
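
    For reference, a typical invocation might look like this (the flag values are illustrative, and the file name follows the docstring above):

        python vis.py --backbone unetNested --dataset rssrai2019 --batch-size 8 --gpu-ids 0,1 --eval-interval 1

    If --learn-rate is left unset, the script derives it from the batch size as described above; --resume takes a checkpoint path to continue training, and --ft resets the start epoch for fine-tuning.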
  • Original post: https://www.cnblogs.com/2008nmj/p/13607773.html