  Super-Easy Integration of HMS ML Kit Face Detection to Implement Cute Stickers

    Foreword


      In this era where beauty is truth and entertainment is for everyone, cute and fun face stickers are widely used in all the major beauty apps, and no longer only in beauty-camera software: social and entertainment apps also have a strong demand for face stickers and AR stickers. This article walks through integrating HMS ML Kit face detection to implement 2D stickers; a later article will cover the development of 3D stickers, so stay tuned.

    Scenarios


      Apps that take photos or process photos, such as beauty cameras, photo-editing apps, and social apps (for example Douyin, Weibo, and WeChat), all need to build their own distinctive stickers.

    Preparations Before Development


    Add the Huawei Maven repository in the project-level gradle file

      Open the project-level build.gradle file in Android Studio.

      Incrementally add the following Maven repository address:

    buildscript {
        repositories {
            maven { url 'http://developer.huawei.com/repo/' }
        }
    }
    allprojects {
        repositories {
            maven { url 'http://developer.huawei.com/repo/' }
        }
    }
    

    Add the SDK dependencies in the app-level build.gradle


    // Face detection SDK.
    implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
    // Face detection model.
    implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
    

    Declare the camera, network access, and storage permissions in AndroidManifest.xml

    <!-- Camera -->
    <uses-feature android:name="android.hardware.camera" />
    <uses-permission android:name="android.permission.CAMERA" />
    <!-- Network access -->
    <uses-permission android:name="android.permission.INTERNET" />
    <!-- Write external storage -->
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <!-- Read external storage -->
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
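
    On Android 6.0 and later, the camera and storage permissions above are dangerous permissions and must also be granted at runtime. The following is a minimal, assumed sketch (the request code and method name are illustrative, not the demo's exact code):

    import android.Manifest;
    import android.app.Activity;
    import android.content.pm.PackageManager;
    import androidx.core.app.ActivityCompat;
    import androidx.core.content.ContextCompat;

    private static final int PERMISSION_REQUEST_CODE = 0x01;

    // Request any missing runtime permissions before opening the camera.
    private void checkPermissions(Activity activity) {
        String[] permissions = {
                Manifest.permission.CAMERA,
                Manifest.permission.WRITE_EXTERNAL_STORAGE,
                Manifest.permission.READ_EXTERNAL_STORAGE
        };
        for (String permission : permissions) {
            if (ContextCompat.checkSelfPermission(activity, permission)
                    != PackageManager.PERMISSION_GRANTED) {
                ActivityCompat.requestPermissions(activity, permissions, PERMISSION_REQUEST_CODE);
                return;
            }
        }
    }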
    

    Key Steps in the Code


    Configure the face detector

    // Analyzer options: skip extra feature detection, enable face key point/contour
    // detection, and use the fast tracking mode.
    MLFaceAnalyzerSetting detectorOptions;
    detectorOptions = new MLFaceAnalyzerSetting.Factory()
           .setFeatureType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_FEATURES)
           .setShapeType(MLFaceAnalyzerSetting.TYPE_SHAPES)
           .allowTracing(MLFaceAnalyzerSetting.MODE_TRACING_FAST)
           .create();
    detector = MLAnalyzerFactory.getInstance().getFaceAnalyzer(detectorOptions);
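
    When detection is no longer needed, the analyzer should be released. A minimal sketch, assuming this is done in the hosting activity's onDestroy (the placement is an assumption, not taken from the demo):

    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (detector != null) {
            try {
                // Release the face analyzer and its resources.
                detector.stop();
            } catch (IOException e) {
                Log.e("TAG", "Failed to stop the face analyzer: " + e.getMessage());
            }
        }
    }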
    

    Here we obtain the camera frame data in the camera preview callback, call the face detector to get the face contour points, and write them into FacePointEngine for the sticker filter to use.

    @Override
    public void onPreviewFrame(final byte[] imgData, final Camera camera) {
       int width = mPreviewWidth;
       int height = mPreviewHeight;
    
       long startTime = System.currentTimeMillis();
       // Keep the orientation consistent for the front and rear cameras
       if (isFrontCamera()){
           mOrientation = 0;
       }else {
           mOrientation = 2;
       }
       MLFrame.Property property =
               new MLFrame.Property.Creator()
                       .setFormatType(ImageFormat.NV21)
                       .setWidth(width)
                       .setHeight(height)
                       .setQuadrant(mOrientation)
                       .create();
    
       ByteBuffer data = ByteBuffer.wrap(imgData);
       // Call the face detection API
       SparseArray<MLFace> faces = detector.analyseFrame(MLFrame.fromByteBuffer(data,property));
       // Check whether any face information was obtained
       if(faces.size()>0){
           MLFace mLFace = faces.get(0);
           EGLFace EGLFace = FacePointEngine.getInstance().getOneFace(0);
           EGLFace.pitch = mLFace.getRotationAngleX();
           EGLFace.yaw = mLFace.getRotationAngleY();
           EGLFace.roll = mLFace.getRotationAngleZ() - 90;
           if (isFrontCamera())
               EGLFace.roll = -EGLFace.roll;
           if (EGLFace.vertexPoints == null) {
               EGLFace.vertexPoints = new PointF[131];
           }
           int index = 0;
           // Get the contour point coordinates of one face and convert them to floats in the OpenGL normalized coordinate system
           for (MLFaceShape contour : mLFace.getFaceShapeList()) {
               if (contour == null) {
                   continue;
               }
               List<MLPosition> points = contour.getPoints();
    
               for (int i = 0; i < points.size(); i++) {
                   MLPosition point = points.get(i);
                   float x = ( point.getY() / height) * 2 - 1;
                   float y = ( point.getX() / width ) * 2 - 1;
                   if (isFrontCamera())
                       x = -x;
                   PointF Point = new PointF(x,y);
                   EGLFace.vertexPoints[index] = Point;
                   index++;
               }
           }
           // Store the face object
           FacePointEngine.getInstance().putOneFace(0, EGLFace);
           // Set the number of faces
           FacePointEngine.getInstance().setFaceSize(faces!= null ? faces.size() : 0);
       }else{
           FacePointEngine.getInstance().clearAll();
       }
       long endTime = System.currentTimeMillis();
       Log.d("TAG","Face detect time: " + String.valueOf(endTime - startTime));
    }
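
    The callback above is the legacy android.hardware.Camera preview callback. As context only, a minimal sketch of how it could be registered (the field names and where cameraId comes from are assumptions about the demo's camera helper):

    // cameraId is selected elsewhere (front or rear camera).
    Camera camera = Camera.open(cameraId);
    Camera.Parameters parameters = camera.getParameters();
    // NV21 matches the MLFrame format set in onPreviewFrame.
    parameters.setPreviewFormat(ImageFormat.NV21);
    parameters.setPreviewSize(mPreviewWidth, mPreviewHeight);
    camera.setParameters(parameters);
    // Deliver preview frames to onPreviewFrame.
    camera.setPreviewCallback(this);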
    

    The face contour points returned by the ML Kit API are shown below:
    [Figure: face contour point indices returned by ML Kit]

    Now let's look at how the stickers are designed, starting with the sticker JSON data definition:

    public class FaceStickerJson {
    
       public int[] centerIndexList;   // Indices of the center key points; the center may be computed from several key points
       public float offsetX;           // X-axis offset, in pixels, relative to the sticker center coordinate
       public float offsetY;           // Y-axis offset, in pixels, relative to the sticker center coordinate
       public float baseScale;         // Base scaling factor of the sticker
       public int startIndex;          // Start index on the face, used to compute the face width
       public int endIndex;            // End index on the face, used to compute the face width
       public int width;               // Sticker width
       public int height;              // Sticker height
       public int frames;              // Number of sticker frames
       public int action;              // Action; 0 means default display, used here to handle sticker actions
       public String stickerName;      // Sticker name, used to identify the sticker folder and its PNG files
       public int duration;            // Display interval between sticker frames
       public boolean stickerLooping;  // Whether the sticker is rendered in a loop
       public int maxCount;            // Maximum number of times the sticker is rendered
    ...
    }
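
    These fields drive where a sticker is placed and how large it is drawn. As an illustration of the idea only (the method names are assumptions, not the demo's actual filter code), the sticker center and scale could be derived from the normalized landmarks roughly like this:

    // Illustrative only: anchor point of a sticker from the face landmarks.
    private PointF computeStickerCenter(EGLFace face, FaceStickerJson sticker) {
        // Average the key points in centerIndexList to get the anchor point.
        float cx = 0f, cy = 0f;
        for (int index : sticker.centerIndexList) {
            cx += face.vertexPoints[index].x;
            cy += face.vertexPoints[index].y;
        }
        cx /= sticker.centerIndexList.length;
        cy /= sticker.centerIndexList.length;
        // In a real filter, offsetX/offsetY (pixels) would first be converted
        // into the same normalized coordinate space before being applied.
        return new PointF(cx, cy);
    }

    // Illustrative only: sticker scale from the face width and baseScale.
    private float computeStickerScale(EGLFace face, FaceStickerJson sticker) {
        PointF start = face.vertexPoints[sticker.startIndex];
        PointF end = face.vertexPoints[sticker.endIndex];
        float faceWidth = (float) Math.hypot(end.x - start.x, end.y - start.y);
        return faceWidth * sticker.baseScale;
    }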
    

    We create the cat-ear sticker JSON file: through the face point indices we locate point 84 between the eyebrows and point 85 at the tip of the nose, attach the ears and the nose respectively, and then put the JSON file and the images under the assets directory.

    {
       "stickerList": [{
           "type": "sticker",
           "centerIndexList": [84],
           "offsetX": 0.0,
           "offsetY": 0.0,
           "baseScale": 1.3024,
           "startIndex": 11,
           "endIndex": 28,
           "width": 495,
           "height": 120,
           "frames": 2,
           "action": 0,
           "stickerName": "nose",
           "duration": 100,
           "stickerLooping": 1,
           "maxcount": 5
       }, {
           "type": "sticker",
           "centerIndexList": [83],
           "offsetX": 0.0,
           "offsetY": -1.1834,
           "baseScale": 1.3453,
           "startIndex": 11,
           "endIndex": 28,
           "width": 454,
           "height": 150,
           "frames": 2,
           "action": 0,
           "stickerName": "ear",
           "duration": 100,
           "stickerLooping": 1,
           "maxcount": 5
       }]
    }
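
    Before the filter can use this configuration, it has to be read from assets and deserialized. A minimal sketch using Gson is shown below; the wrapper class and the assumed file layout assets/cat/cat.json are for illustration only:

    import android.content.Context;
    import com.google.gson.Gson;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import java.util.List;

    // Wrapper matching the top-level "stickerList" array in the JSON above.
    class FaceStickerList {
        List<FaceStickerJson> stickerList;
    }

    static List<FaceStickerJson> loadStickers(Context context, String folderPath) throws IOException {
        // e.g. folderPath = "cat", configuration stored as assets/cat/cat.json (assumed layout)
        try (InputStreamReader reader = new InputStreamReader(
                context.getAssets().open(folderPath + "/" + folderPath + ".json"),
                StandardCharsets.UTF_8)) {
            return new Gson().fromJson(reader, FaceStickerList.class).stickerList;
        }
    }

    Note that the sample JSON above uses "maxcount" and a numeric "stickerLooping" while the class declares maxCount and a boolean, so in practice the field names and types need to be aligned or handled with a custom type adapter.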
    

    To render the sticker texture we use GLSurfaceView, which is simpler to work with than TextureView. First, in onSurfaceCreated, instantiate the sticker filter, pass in the sticker path, and open the camera.

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    
       GLES30.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
       mTextures = new int[1];
       mTextures[0] = OpenGLUtils.createOESTexture();
       mSurfaceTexture = new SurfaceTexture(mTextures[0]);
       mSurfaceTexture.setOnFrameAvailableListener(this);
    
       // Feed the samplerExternalOES camera input into the texture
       cameraFilter = new CameraFilter(this.context);
    
       // Set the face sticker path under the assets directory
       String folderPath ="cat";
       stickerFilter = new FaceStickerFilter(this.context,folderPath);
    
       // Create the screen filter object
       screenFilter = new BaseFilter(this.context);
    
       facePointsFilter = new FacePointsFilter(this.context);
       mEGLCamera.openCamera();
    }
    

    Then initialize the sticker filter in onSurfaceChanged:

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
       Log.d(TAG, "onSurfaceChanged.  " + width + ", height: " + height);
       int previewWidth = mEGLCamera.getPreviewWidth();
       int previewHeight = mEGLCamera.getPreviewHeight();
       if (width > height) {
           setAspectRatio(previewWidth, previewHeight);
       } else {
           setAspectRatio(previewHeight, previewWidth);
       }
       // Set the image size, create the FrameBuffer, and set the display size
       cameraFilter.onInputSizeChanged(previewWidth, previewHeight);
       cameraFilter.initFrameBuffer(previewWidth, previewHeight);
       cameraFilter.onDisplaySizeChanged(width, height);
    
       stickerFilter.onInputSizeChanged(previewHeight, previewWidth);
       stickerFilter.initFrameBuffer(previewHeight, previewWidth);
       stickerFilter.onDisplaySizeChanged(width, height);
    
       screenFilter.onInputSizeChanged(previewWidth, previewHeight);
       screenFilter.initFrameBuffer(previewWidth, previewHeight);
       screenFilter.onDisplaySizeChanged(width, height);
    
       facePointsFilter.onInputSizeChanged(previewHeight, previewWidth);
       facePointsFilter.onDisplaySizeChanged(width, height);
       mEGLCamera.startPreview(mSurfaceTexture);
    }
    

    Finally, draw the sticker onto the screen in onDrawFrame:

    @Override
    public void onDrawFrame(GL10 gl) {
       int textureId;
       // Clear the screen and the depth buffer
       GLES30.glClear(GLES30.GL_COLOR_BUFFER_BIT | GLES30.GL_DEPTH_BUFFER_BIT);
       // Update the texture to get the latest camera image
       mSurfaceTexture.updateTexImage();
       // Get the SurfaceTexture transform matrix
       mSurfaceTexture.getTransformMatrix(mMatrix);
       // Set the camera display transform matrix
       cameraFilter.setTextureTransformMatrix(mMatrix);
    
       // Draw the camera texture
       textureId = cameraFilter.drawFrameBuffer(mTextures[0],mVertexBuffer,mTextureBuffer);
       // Draw the sticker texture
       textureId = stickerFilter.drawFrameBuffer(textureId,mVertexBuffer,mTextureBuffer);
       // Draw to the screen
       screenFilter.drawFrame(textureId , mDisplayVertexBuffer, mDisplayTextureBuffer);
       if(drawFacePoints){
           facePointsFilter.drawFrame(textureId, mDisplayVertexBuffer, mDisplayTextureBuffer);
       }
    }
    

    With that, our sticker is drawn onto the face.
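
    For completeness, the renderer that hosts these callbacks still has to be attached to a GLSurfaceView. A minimal sketch of that wiring, where the GLRenderer class name and the layout id are assumptions:

    GLSurfaceView glSurfaceView = findViewById(R.id.glsurfaceview);
    // The renderer uses GLES30, so request an OpenGL ES 3.0 context.
    glSurfaceView.setEGLContextClientVersion(3);
    glSurfaceView.setRenderer(new GLRenderer(this));
    // Render on demand: call glSurfaceView.requestRender() from
    // onFrameAvailable whenever a new camera frame arrives.
    glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);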

    Demo Effect


    [Demo effect screenshot]

    Source Code


    The demo source code has been uploaded to GitHub: https://github.com/HMS-Core/hms-ml-demo/tree/master/Face2D-Sticker. Feel free to use it as a reference and optimize it for your own scenarios.

    For more details, see:
    Huawei Developers official website: https://developer.huawei.com/consumer/en/hms
    Development guide: https://developer.huawei.com/consumer/en/doc/development
    Reddit community for developer discussions: https://www.reddit.com/r/HMSCore/
    GitHub for demos and sample code: https://github.com/HMS-Core
    Stack Overflow for integration issues: https://stackoverflow.com/questions/tagged/huawei-mobile-services?tab=Newest


    Original post: https://developer.huawei.com/consumer/cn/forum/topicview?tid=0203324526929930082&fid=18

    Original author: 旭小夜
