First, the source addresses for the zxing package:
Zip archive: https://codeload.github.com/zxing/zxing/zip/master
GitHub: https://github.com/zxing/zxing
The download may be fairly large because it also contains source code for other platforms; here we focus on the Android part.
To start, the scanning implementation in the zxing package is locked to landscape mode, and on some phone screens the preview image can appear stretched. Having had some free time recently, I looked into this. Let's first walk through a few issues in the Barcode Scanner source code.
- First, why is it fixed to landscape? When we use the camera on an Android phone, its output is rotated 90° clockwise. Barcode Scanner never rotates the camera preview, so the image only appears upright when the phone is held in landscape. A camera image is also generally wider than it is tall. If you want to switch the phone to another orientation and still have the preview shown correctly, call setDisplayOrientation(degrees); note that the value you pass is not the same as the value returned by getWindowManager().getDefaultDisplay().getRotation(). See the code below.
public int getOrientationDegree() {
    int rotation = mActivity.getWindowManager().getDefaultDisplay().getRotation();
    switch (rotation) {
        case Surface.ROTATION_0:
            return 90;
        case Surface.ROTATION_90:
            return 0;
        case Surface.ROTATION_180:
            return 270;
        case Surface.ROTATION_270:
            return 180;
        default:
            return 0;
    }
}
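For illustration, here is a minimal, hypothetical sketch of how that value could be applied when opening the camera; the method name, the surfaceHolder parameter and the TAG constant are assumptions, not part of the original Barcode Scanner code.

private void startRotatedPreview(SurfaceHolder surfaceHolder) {
    try {
        Camera camera = Camera.open();                          // default back-facing camera
        camera.setDisplayOrientation(getOrientationDegree());   // rotates only the on-screen preview
        camera.setPreviewDisplay(surfaceHolder);                // attach the preview to the SurfaceView
        camera.startPreview();                                  // preview byte buffers are still delivered unrotated
    } catch (IOException e) {
        Log.e(TAG, "Unable to start camera preview", e);
    }
}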
- Regarding the distorted preview on some phones: different phones support different camera resolutions. A partial list: 1920x1080, 1280x960, 1280x720, 960x720, 864x480, 800x480, 720x480, 768x432, 640x480, 576x432, 480x320, 384x288, 352x288, 320x240; all of these are width x height sizes supported by the camera. To find out which sizes a particular camera supports, call Camera.Parameters.getSupportedPreviewSizes(). The camera image is then drawn onto the SurfaceView, and it looks stretched when the aspect ratio of the preview size set on the camera does not match the aspect ratio of the SurfaceView: during preview the image is scaled to fill the whole SurfaceView, so what we see ends up distorted.
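As a quick sketch of that query (assuming camera is an already-opened android.hardware.Camera instance and TAG is a log tag):

Camera.Parameters parameters = camera.getParameters();
for (Camera.Size size : parameters.getSupportedPreviewSizes()) {
    // Each entry is a width x height pair the camera can preview at, e.g. 1280x720.
    Log.d(TAG, "Supported preview size: " + size.width + "x" + size.height);
}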
- While the camera is previewing, we can naturally grab the image we see, but we only want the part inside the viewfinder frame, so we need to work out the frame's position and size. Let's first look at how the original source computes it.
<resources>
    <style name="CaptureTheme" parent="android:Theme.Holo">
        <item name="android:windowFullscreen">true</item>
        <item name="android:windowContentOverlay">@null</item>
        <item name="android:windowActionBarOverlay">true</item>
        <item name="android:windowActionModeOverlay">true</item>
    </style>
</resources>
<com.google.zxing.client.android.ViewfinderView
    android:id="@+id/viewfinder_view"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"/>
/**
 * Like {@link #getFramingRect} but coordinates are in terms of the preview frame,
 * not UI / screen.
 *
 * @return {@link Rect} expressing barcode scan area in terms of the preview size
 */
public synchronized Rect getFramingRectInPreview() {
    if (framingRectInPreview == null) {
        Rect framingRect = getFramingRect();
        if (framingRect == null) {
            return null;
        }
        Rect rect = new Rect(framingRect);
        Point cameraResolution = configManager.getCameraResolution();
        Point screenResolution = configManager.getScreenResolution();
        if (cameraResolution == null || screenResolution == null) {
            // Called early, before init even finished
            return null;
        }
        rect.left = rect.left * cameraResolution.x / screenResolution.x;
        rect.right = rect.right * cameraResolution.x / screenResolution.x;
        rect.top = rect.top * cameraResolution.y / screenResolution.y;
        rect.bottom = rect.bottom * cameraResolution.y / screenResolution.y;
        framingRectInPreview = rect;
    }
    return framingRectInPreview;
}
In the source, the app's theme is fullscreen, so the calculation is done in terms of the screen size: first get the size of the ViewfinderView frame on screen (the box we actually see), then scale it by the ratio of the camera resolution to the screen resolution to get the region of the camera image that should be cropped.
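As a rough worked example of that scaling (the numbers are made up for illustration and are not taken from the source): with a landscape screen of 1920x1080, a camera preview of 1280x720 and an on-screen framing rect of (660, 290, 1260, 790), the integer arithmetic works out like this.

Point cameraResolution = new Point(1280, 720);   // assumed preview size
Point screenResolution = new Point(1920, 1080);  // assumed landscape screen size
Rect rect = new Rect(660, 290, 1260, 790);       // assumed framing rect on screen

rect.left   = rect.left   * cameraResolution.x / screenResolution.x;  // 660  -> 440
rect.right  = rect.right  * cameraResolution.x / screenResolution.x;  // 1260 -> 840
rect.top    = rect.top    * cameraResolution.y / screenResolution.y;  // 290  -> 193
rect.bottom = rect.bottom * cameraResolution.y / screenResolution.y;  // 790  -> 526
// The cropped region in the preview frame is therefore (440, 193, 840, 526).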
/**
 * A factory method to build the appropriate LuminanceSource object based on the format
 * of the preview buffers, as described by Camera.Parameters.
 *
 * @param data A preview frame.
 * @param width The width of the image.
 * @param height The height of the image.
 * @return A PlanarYUVLuminanceSource instance.
 */
public PlanarYUVLuminanceSource buildLuminanceSource(byte[] data, int width, int height) {
    Rect rect = getFramingRectInPreview();
    if (rect == null) {
        return null;
    }
    // Go ahead and assume it's YUV rather than die.
    return new PlanarYUVLuminanceSource(data, width, height, rect.left, rect.top,
                                        rect.width(), rect.height(), false);
}
The code above lives in CameraManager. The data parameter is the full image buffer captured by the camera, and width and height are the image's width and height. PlanarYUVLuminanceSource crops out, from the full frame, the rectangle returned by getFramingRectInPreview(). That cropped data is then decoded, as shown in the code below.
/**
 * Decode the data within the viewfinder rectangle, and time how long it took. For efficiency,
 * reuse the same reader objects from one decode to the next.
 *
 * @param data   The YUV preview frame.
 * @param width  The width of the preview frame.
 * @param height The height of the preview frame.
 */
private void decode(byte[] data, int width, int height) {
    long start = System.currentTimeMillis();
    Result rawResult = null;
    PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
    if (source != null) {
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        try {
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // continue
        } finally {
            multiFormatReader.reset();
        }
    }
    Handler handler = activity.getHandler();
    if (rawResult != null) {
        // Don't log the barcode contents for security.
        long end = System.currentTimeMillis();
        Log.d(TAG, "Found barcode in " + (end - start) + " ms");
        if (handler != null) {
            Message message = Message.obtain(handler, R.id.decode_succeeded, rawResult);
            Bundle bundle = new Bundle();
            bundleThumbnail(source, bundle);
            message.setData(bundle);
            message.sendToTarget();
        }
    } else {
        if (handler != null) {
            Message message = Message.obtain(handler, R.id.decode_failed);
            message.sendToTarget();
        }
    }
}
- Once decoding finishes, a success message is sent to CaptureActivityHandler on success, or a failure message otherwise. As the code below shows, on success the result is handed back to the activity, while on failure another frame is requested from the running preview and decoded again, repeating until something is found.
case R.id.decode_succeeded:
    state = State.SUCCESS;
    Bundle bundle = message.getData();
    Bitmap barcode = null;
    float scaleFactor = 1.0f;
    if (bundle != null) {
        byte[] compressedBitmap = bundle.getByteArray(DecodeThread.BARCODE_BITMAP);
        if (compressedBitmap != null) {
            barcode = BitmapFactory.decodeByteArray(compressedBitmap, 0, compressedBitmap.length, null);
            // Mutable copy:
            barcode = barcode.copy(Bitmap.Config.ARGB_8888, true);
        }
        scaleFactor = bundle.getFloat(DecodeThread.BARCODE_SCALED_FACTOR);
    }
    activity.handleDecode((Result) message.obj, barcode, scaleFactor);
    break;
case R.id.decode_failed:
    // We're decoding as fast as possible, so when one decode fails, start another.
    state = State.PREVIEW;
    cameraManager.requestPreviewFrame(decodeThread.getHandler(), R.id.decode);
    break;
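For context, requestPreviewFrame() in CameraManager essentially asks the camera for one more preview frame and forwards its YUV bytes to the decode handler, which feeds them back into decode(data, width, height). A simplified sketch of that idea (not the exact zxing source) might look like this:

void requestPreviewFrame(final Handler decodeHandler, final int what) {
    // Ask for exactly one preview frame; the callback hands the raw YUV bytes,
    // plus the preview width and height, to the decode thread's handler.
    camera.setOneShotPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Size size = camera.getParameters().getPreviewSize();
            decodeHandler.obtainMessage(what, size.width, size.height, data).sendToTarget();
        }
    });
}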
With the walkthrough of the original source done, let's look at the simplified, modified version.
- First, the screen-rotation problem. Now that we know why rotation causes trouble, the fix is clear: set the camera's display rotation by calling setDisplayOrientation(). Note that after calling setDisplayOrientation(), only the image drawn onto the SurfaceView is rotated; the width and height of the preview data are unchanged. With the phone held upright in portrait, if we display the captured picture we can see it is still the camera's raw, unrotated image, which confirms this. Because the on-screen image has been rotated, extracting data the way the original getFramingRectInPreview() above does can easily push the rectangle out of bounds and crash the app. And if the scan window is a rectangle rather than a square, the window shows an upright image while the extracted picture comes out rotated 90°. So when extracting the image we have to recompute the rectangle's position, width and height. The original code is modified as follows.
public synchronized Rect getFramingRectInPreview() {
    if (framingRectInPreview == null) {
        Rect framingRect = ScanManager.getInstance().getViewfinderRect();
        Point cameraResolution = configManager.getCameraResolution();
        if (framingRect == null || cameraResolution == null || surfacePoint == null) {
            return null;
        }
        Rect rect = new Rect(framingRect);
        float scaleX = cameraResolution.x * 1.0f / surfacePoint.y;
        float scaleY = cameraResolution.y * 1.0f / surfacePoint.x;
        if (isPortrait) {
            rect.left = (int) (framingRect.top * scaleY);
            rect.right = (int) (framingRect.bottom * scaleY);
            rect.top = (int) (framingRect.left * scaleX);
            rect.bottom = (int) (framingRect.right * scaleX);
        } else {
            scaleX = cameraResolution.x * 1.0f / surfacePoint.x;
            scaleY = cameraResolution.y * 1.0f / surfacePoint.y;
            rect.left = (int) (framingRect.left * scaleX);
            rect.right = (int) (framingRect.right * scaleX);
            rect.top = (int) (framingRect.top * scaleY);
            rect.bottom = (int) (framingRect.bottom * scaleY);
        }
        framingRectInPreview = rect;
    }
    return framingRectInPreview;
}
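The surfacePoint and isPortrait fields used above are additions to CameraManager and are not shown being set here. As a rough, hypothetical illustration only (the method name and parameters are assumptions, not the actual code), the wiring could look like this:

private void configurePortraitPreview(Camera camera, Point surfaceSize, boolean portrait) {
    this.surfacePoint = surfaceSize;                  // actual width and height of the SurfaceView
    this.isPortrait = portrait;                       // true when the activity runs in portrait
    camera.setDisplayOrientation(portrait ? 90 : 0);  // rotate only the on-screen preview;
                                                      // the preview buffers stay in landscape
    framingRectInPreview = null;                      // force getFramingRectInPreview() to recompute
}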
- As for the distorted preview, the simplest fix is to compute the SurfaceView's size from the camera's preview size. Let's look at the modified code first.
public void initFromCameraParameters(Camera camera, Point maxPoint) {
    Camera.Parameters parameters = camera.getParameters();
    Point size = new Point(maxPoint.y, maxPoint.x);
    cameraResolution = CameraConfigurationUtils.findBestPreviewSizeValue(parameters, size);
    Log.i(TAG, "Camera resolution: " + cameraResolution);
    Log.i(TAG, "size resolution: " + size);
}
public void findBestSurfacePoint(Point maxPoint) {
    Point cameraResolution = configManager.getCameraResolution();
    if (cameraResolution == null || maxPoint == null || maxPoint.x == 0 || maxPoint.y == 0) {
        return;
    }
    double scaleX, scaleY, scale;
    if (maxPoint.x < maxPoint.y) {
        scaleX = cameraResolution.x * 1.0f / maxPoint.y;
        scaleY = cameraResolution.y * 1.0f / maxPoint.x;
    } else {
        scaleX = cameraResolution.x * 1.0f / maxPoint.x;
        scaleY = cameraResolution.y * 1.0f / maxPoint.y;
    }
    scale = scaleX > scaleY ? scaleX : scaleY;
    if (maxPoint.x < maxPoint.y) {
        surfacePoint.x = (int) (cameraResolution.y / scale);
        surfacePoint.y = (int) (cameraResolution.x / scale);
    } else {
        surfacePoint.x = (int) (cameraResolution.x / scale);
        surfacePoint.y = (int) (cameraResolution.y / scale);
    }
}
In CameraConfigurationManager, when the camera is initialized in initFromCameraParameters, we pass in the maximum width and height the SurfaceView can occupy. CameraConfigurationUtils.findBestPreviewSizeValue(parameters, size) then gives us the best preview size the camera can produce for that SurfaceView, and we use it. But since that size is one of the fixed values listed earlier while SurfaceView dimensions vary widely, the image may still be distorted. So we work backwards from the best camera preview size to a SurfaceView size with a matching aspect ratio, which is what findBestSurfacePoint() does. Once it is computed, we simply resize the SurfaceView accordingly and the preview no longer looks stretched.
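To close the loop, here is a minimal sketch of applying the computed size to the SurfaceView; the getSurfacePoint() getter is an assumed accessor for the Point filled in by findBestSurfacePoint(), not an existing zxing method.

Point surfacePoint = cameraManager.getSurfacePoint();    // assumed getter for the computed size
ViewGroup.LayoutParams params = surfaceView.getLayoutParams();
params.width = surfacePoint.x;                           // match the chosen preview aspect ratio
params.height = surfacePoint.y;
surfaceView.setLayoutParams(params);                     // the preview now fills the view without stretching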
I won't go into further detail here; for the full process, see the implementation code: click to download.
This is my first blog post, so there are surely rough spots and mistakes; corrections and comments are welcome.