  • iOS partial image blur, similar to 美图秀秀 (Meitu)

    The source code is available at:
    http://www.demodashi.com/demo/14277.html

    Demo

    Code structure

    A screenshot of the project structure is shown below:

    The core source of this module is MBPartBlurView. To use it in your own project, just copy that directory in; the remaining files are for testing.

    Usage

    How to use

    The comments in the header file MBPartBlurView.h are detailed, and the view is simple to use. Three shapes are currently supported: circle, square, and rectangle.

    @interface MBPartBlurView : UIView
    
    /**
     * The original image
     */
    @property (nonatomic, strong) UIImage *rawImage;
    
    /**
     * The processed image
     */
    @property (nonatomic, strong, readonly) UIImage *currentImage;
    
    /**
     * The blur radius
     */
    @property (nonatomic, assign) CGFloat blurRadius;
    
    /**
     * The region excluded from blurring
     */
    @property (nonatomic, assign) CGRect excludeBlurArea;
    
    /**
     * Whether the excluded region is a circle (otherwise only a square is currently supported)
     */
    @property (nonatomic, assign) BOOL excludeAreaCircle;
    
    @end
    

    Implementation

    Implementing the partial blur
    The partial blur is really two images stacked on top of each other: the bottom layer holds a fully blurred image, and the top layer holds the unblurred original. A mask is applied to the top image, which is then composited over the bottom one.

    a. The bottom image lives on the imageView's layer, and the top image on rawImageLayer.
    maskLayer is the mask applied to rawImageLayer. They are held in the following three properties:

    @property (nonatomic, strong) UIImageView *imageView;
    @property (nonatomic, strong) CALayer *rawImageLayer;
    @property (nonatomic, strong) CAShapeLayer *maskLayer;
    

    b. Initialization. The layer hierarchy is straightforward:

    - (void)commonInit
    {
        [self.imageView addGestureRecognizer:self.tapGesture];
        [self.imageView addGestureRecognizer:self.pinchGesture];
        [self.brushView addGestureRecognizer:self.panGesture];
        [self addSubview:self.imageView];
        [self.imageView.layer addSublayer:self.rawImageLayer];
        [self.imageView addSubview:self.brushView];
        self.rawImageLayer.mask = self.maskLayer;
        [self setExcludeAreaCircle:YES];
        [self setExcludeBlurArea:CGRectMake(100, 100, 100, 100)];
        [self showBrush:NO];
    }
    

    1. The image blur algorithm

    The blur is the industry-standard Gaussian blur, approximated here by three successive box convolutions using the Accelerate framework's vImage API:

    #import <Accelerate/Accelerate.h>

    - (UIImage *)blurryImage:(UIImage *)image withBlurLevel:(CGFloat)blur
    {
        // Re-encode as JPEG to normalize the bitmap layout.
        NSData *imageData = UIImageJPEGRepresentation(image, 1);
        UIImage *destImage = [UIImage imageWithData:imageData];

        // The box kernel size must be odd; blur levels above 0.5 get an extra boost.
        int boxSize = (int)(blur * 100);
        if (blur > 0.5) {
            boxSize += 50;
        }
        boxSize = boxSize - (boxSize % 2) + 1;

        CGImageRef img = destImage.CGImage;
        size_t width = CGImageGetWidth(img);
        size_t height = CGImageGetHeight(img);
        size_t rowBytes = CGImageGetBytesPerRow(img);
        size_t bufferSize = rowBytes * height;

        // Copy the source pixels into a writable buffer: the bytes behind
        // CFDataGetBytePtr are read-only, and the second pass below writes
        // back into inBuffer.
        CGDataProviderRef inProvider = CGImageGetDataProvider(img);
        CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
        void *inData = malloc(bufferSize);
        memcpy(inData, CFDataGetBytePtr(inBitmapData), bufferSize);
        CFRelease(inBitmapData);
        vImage_Buffer inBuffer = { inData, height, width, rowBytes };

        // Output buffer.
        void *pixelBuffer = malloc(bufferSize);
        vImage_Buffer outBuffer = { pixelBuffer, height, width, rowBytes };

        // Intermediate buffer: three box blurs in a row approximate a Gaussian.
        void *pixelBuffer2 = malloc(bufferSize);
        vImage_Buffer outBuffer2 = { pixelBuffer2, height, width, rowBytes };

        vImage_Error error;
        error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer2, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
        if (error) {
            NSLog(@"error from convolution %ld", error);
        }
        error = vImageBoxConvolve_ARGB8888(&outBuffer2, &inBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
        if (error) {
            NSLog(@"error from convolution %ld", error);
        }
        error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
        if (error) {
            NSLog(@"error from convolution %ld", error);
        }

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                                 outBuffer.width,
                                                 outBuffer.height,
                                                 8,
                                                 outBuffer.rowBytes,
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipLast);
        CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
        UIImage *returnImage = [UIImage imageWithCGImage:imageRef];

        // Clean up.
        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);
        free(inData);
        free(pixelBuffer);
        free(pixelBuffer2);
        CGImageRelease(imageRef);
        return returnImage;
    }
    

    2. Gesture handling

    The main gesture is pinch-to-zoom. In the UIPinchGestureRecognizer handler - (void)handlePinch:(UIPinchGestureRecognizer *)gestureRecognizer, when the state is UIGestureRecognizerStateChanged, the core scaling function - (void)scaleMask:(CGFloat)scale is called. The code is as follows:

    - (void)scaleMask:(CGFloat)scale
    {
        CGFloat mS = MIN(self.imageView.frame.size.width/self.brushView.frame.size.width, self.imageView.frame.size.height/self.brushView.frame.size.height);
        CGFloat s = MIN(scale, mS);
        [CATransaction setDisableActions:YES];
        CGAffineTransform zoomTransform = CGAffineTransformScale(self.brushView.layer.affineTransform, s, s);
        self.brushView.layer.affineTransform = zoomTransform;
        zoomTransform = CGAffineTransformScale(self.maskLayer.affineTransform, s, s);
        self.maskLayer.affineTransform = zoomTransform;
        [CATransaction setDisableActions:NO];
    }
    

    In this function, updating the maskLayer's affineTransform grows or shrinks the mask; visually, the unblurred region expands or contracts accordingly.

    3. Handling the automatic rotation of camera images

    - (void)setRawImage:(UIImage *)rawImage
    {
        UIImage *newImage = [self fixedOrientation:rawImage]; // images from the photo library can carry a 90° rotation
        _rawImage = newImage;
        CGFloat w, h;
        if (newImage.size.width >= newImage.size.height) {
            w = self.frame.size.width;
            h = newImage.size.height / newImage.size.width * w;
        }
        else {
            h = self.frame.size.height;
            w = newImage.size.width / newImage.size.height * h;
        }
        self.imageView.frame = CGRectMake(0.5 * (self.frame.size.width - w), 0.5 * (self.frame.size.height - h), w, h);
        self.rawImageLayer.frame = self.imageView.bounds;
        self.imageView.image = [self blurryImage:newImage withBlurLevel:self.blurRadius];
        self.rawImageLayer.contents = (id)newImage.CGImage;
    }
    

    Additional notes

    None for now.

    Note: the copyright of this article belongs to the author. It is published on the author's behalf by demo大师; reproduction requires the author's permission.

  • Original article: https://www.cnblogs.com/demodashi/p/10474054.html