  • A tool that reads a file's MD5 hash with the HTML5 File API

    1. Purpose: read a file's MD5 hash in the browser with the HTML5 File API. An MD5 hash is widely used as a practical unique identifier for a file; common industry applications include file identification, "instant upload" (server-side deduplication), and file integrity checks (see the sketch right after this list).

    2. Compatibility: works in both IE and Chrome.

    3. Limitation: for large files it takes quite a long time to produce the MD5 hash (see the Web Worker sketch after the full page below for one way to keep the UI responsive).

    4. Dependency: the page references one JavaScript file, spark-md5.js.
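
    As a concrete illustration of the "file identification / instant upload" use case from item 1, the sketch below sends the computed hash to the server and only uploads the file when the server does not already have it. This is a minimal sketch, not part of the original tool: the /api/check-md5 endpoint and its response shape are hypothetical, and file / md5 are assumed to come from the page shown below.

    // Hypothetical "instant upload" check: `file` is the selected File,
    // `md5` is the hex string returned by spark.end() in the page below.
    function checkBeforeUpload(file, md5) {
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/api/check-md5');                 // hypothetical server endpoint
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.onload = function () {
            var result = JSON.parse(xhr.responseText);      // assumed shape: { "exists": true | false }
            if (result.exists) {
                console.log('Server already has this file; skip the upload.');
            } else {
                console.log('New file; proceed with the normal upload here.');
            }
        };
        xhr.send(JSON.stringify({ name: file.name, size: file.size, md5: md5 }));
    }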

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="utf-8">
    
        <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
    
        <title>HTML5 read files hash</title>
        <meta name="author" content="Mofei">
        <meta name="viewport" content="width=device-width; initial-scale=1.0;">
        <script src="spark-md5.js" type="text/javascript"></script>
    </head>
    
    <body>
        <div>
            <header>
                <h1>HTML5 read files hash</h1>
            </header>
            <div>
                <input type="file" id="file">
                <div id="box"></div>
            </div>
            <footer>
                <p>&copy; Copyright  by Percy(<a href="http://www.cnblogs.com/Percy_Lee/">www.cnblogs.com/Percy_Lee</a>)</p>
            </footer>
        </div>
    
        <script type="text/javascript">
        document.getElementById("file").addEventListener("change", function () {
            var file = this.files[0];
            if (!file) { return; }                          // dialog was cancelled, nothing to hash

            var fileReader = new FileReader(),
                box = document.getElementById('box'),
                // prefer the vendor-prefixed slice where it exists (old Firefox/Chrome), else the standard one
                blobSlice = File.prototype.mozSlice || File.prototype.webkitSlice || File.prototype.slice,
                chunkSize = 2097152,                        // hash the file in 2 MB slices
                chunks = Math.ceil(file.size / chunkSize),
                currentChunk = 0,
                // IE has no readAsBinaryString, so fall back to ArrayBuffer hashing there
                bs = fileReader.readAsBinaryString,
                spark = bs ? new SparkMD5() : new SparkMD5.ArrayBuffer();

            fileReader.onload = function (ee) {
                spark.append(ee.target.result);             // feed this slice into the incremental hash
                currentChunk++;

                if (currentChunk < chunks) {
                    loadNext();                             // keep going until the whole file is read
                } else {
                    box.innerText = 'MD5:  ' + spark.end(); // all slices hashed, show the result
                }
            };

            fileReader.onerror = function () {
                box.innerText = 'Failed to read the file.';
            };

            function loadNext() {
                var start = currentChunk * chunkSize,
                    end = start + chunkSize >= file.size ? file.size : start + chunkSize;
                if (bs) fileReader.readAsBinaryString(blobSlice.call(file, start, end));
                else fileReader.readAsArrayBuffer(blobSlice.call(file, start, end));
            }

            loadNext();
        });
    
        </script>
    </body>
    </html>
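
    The page above keeps memory use flat by reading the file in 2 MB slices and feeding each one into spark-md5's incremental hasher, but the hashing still runs on the main thread, which is why large files make the tab feel sluggish (item 3 above). One common mitigation is to move the hashing into a Web Worker. The sketch below is not part of the original tool; it assumes a worker file named md5-worker.js placed next to spark-md5.js.

    // md5-worker.js — hashes a File off the main thread (sketch).
    importScripts('spark-md5.js');                          // spark-md5 runs fine inside a worker

    self.onmessage = function (e) {
        var file = e.data,                                  // File objects survive postMessage
            chunkSize = 2097152,                            // same 2 MB slices as the page above
            spark = new SparkMD5.ArrayBuffer(),
            reader = new FileReaderSync(),                  // synchronous reads are allowed in workers
            offset = 0;

        while (offset < file.size) {
            var slice = file.slice(offset, Math.min(offset + chunkSize, file.size));
            spark.append(reader.readAsArrayBuffer(slice));  // feed each slice into the incremental hash
            offset += chunkSize;
        }
        self.postMessage(spark.end());                      // send the hex MD5 back to the page
    };

    On the page itself, the change handler would then shrink to roughly:

    var worker = new Worker('md5-worker.js');
    worker.onmessage = function (e) {
        document.getElementById('box').innerText = 'MD5:  ' + e.data;
    };
    document.getElementById('file').addEventListener('change', function () {
        if (this.files[0]) worker.postMessage(this.files[0]);
    });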
  • Original post: https://www.cnblogs.com/Percy_Lee/p/5018825.html