  • Uploading large files in parts to ceph rgw s3 with the Java SDK

    Using the AWS Java SDK for Multipart Upload (High-Level API)

    The AWS SDK for Java exposes a high-level API that simplifies multipart upload (see Uploading Objects Using Multipart Upload API). You can upload data from a file or a stream. You can also set advanced options, such as the part size to use for the multipart upload or the number of threads to use when uploading the parts concurrently, as well as optional object properties, the storage class, or the ACL. You use the PutObjectRequest and TransferManagerConfiguration classes to set these advanced options. The TransferManager class of the Java API provides the high-level API for uploading data.

    When possible, TransferManager attempts to use multiple threads to upload multiple parts of a single upload at once. When dealing with large content sizes and high bandwidth, this can significantly increase throughput.
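    As a sketch of the advanced options mentioned above, the part size and the multipart threshold can be tuned through TransferManagerConfiguration. This assumes the AWS SDK for Java v1, and the 16 MB / 64 MB values are illustrative choices, not recommendations:

    ```java
    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.services.s3.transfer.TransferManager;
    import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;

    public class TransferManagerTuning {
        public static void main(String[] args) {
            TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

            // Illustrative values: use parts of at least 16 MB, and switch to
            // multipart upload only for objects larger than 64 MB.
            TransferManagerConfiguration config = new TransferManagerConfiguration();
            config.setMinimumUploadPartSize(16 * 1024 * 1024);
            config.setMultipartUploadThreshold(64 * 1024 * 1024);
            tm.setConfiguration(config);
        }
    }
    ```

    Larger part sizes mean fewer requests per object; smaller parts allow more parallelism for a fixed object size.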

    In addition to file upload functionality, the TransferManager class provides a method for you to abort multipart uploads in progress. You provide a Date value, and the API aborts all the multipart uploads that were initiated before the specified date.
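    A minimal sketch of that abort call follows; the one-week cutoff is an arbitrary choice for illustration, and the bucket-name placeholder must be filled in before running:

    ```java
    import java.util.Date;

    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.services.s3.transfer.TransferManager;

    public class AbortStaleMultipartUploads {
        public static void main(String[] args) {
            String existingBucketName = "*** Provide existing bucket name ***";

            TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

            // Abort every multipart upload in this bucket that was initiated
            // more than one week ago.
            Date oneWeekAgo = new Date(System.currentTimeMillis() - 7L * 24 * 60 * 60 * 1000);
            tm.abortMultipartUploads(existingBucketName, oneWeekAgo);
        }
    }
    ```

    Aborting stale uploads matters because incomplete multipart uploads continue to consume storage until they are aborted or completed.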

    Upload a File

    The following tasks guide you through using the high-level Java classes to upload a file. The API provides several variations, called overloads, of the upload method to easily upload your data.

     

    High-Level API File Uploading Process

    1. Create an instance of the TransferManager class.

    2. Execute one of the TransferManager.upload overloads, depending on whether you are uploading data from a file or a stream.
    The following Java code example demonstrates the preceding tasks.

    Example

    The following Java code example uploads a file to an Amazon S3 bucket. For instructions on how to create and test a working sample, see Testing the Java Code Examples.

    import java.io.File;
    
    import com.amazonaws.AmazonClientException;
    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.services.s3.transfer.TransferManager;
    import com.amazonaws.services.s3.transfer.Upload;
    
    public class UploadObjectMultipartUploadUsingHighLevelAPI {
    
        public static void main(String[] args) throws Exception {
            String existingBucketName = "*** Provide existing bucket name ***";
            String keyName            = "*** Provide object key ***";
            String filePath           = "*** Path to and name of the file to upload ***";
    
            TransferManager tm = new TransferManager(new ProfileCredentialsProvider());
    
            // TransferManager processes all transfers asynchronously,
            // so this call returns immediately.
            Upload upload = tm.upload(existingBucketName, keyName, new File(filePath));
    
            try {
                // Block and wait for the upload to finish.
                upload.waitForCompletion();
                System.out.println("Upload complete.");
            } catch (AmazonClientException amazonClientException) {
                System.out.println("Unable to upload file, upload was aborted.");
                amazonClientException.printStackTrace();
            }
        }
    }
    
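    The file-based example above has a stream-based counterpart. A hedged sketch, assuming SDK v1: when uploading from a stream you should set the content length in the ObjectMetadata, since the SDK cannot otherwise determine the size without buffering the stream. The in-memory payload here stands in for a real data source.

    ```java
    import java.io.ByteArrayInputStream;
    import java.io.InputStream;

    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.services.s3.model.ObjectMetadata;
    import com.amazonaws.services.s3.transfer.TransferManager;
    import com.amazonaws.services.s3.transfer.Upload;

    public class UploadObjectFromStream {
        public static void main(String[] args) throws Exception {
            String existingBucketName = "*** Provide existing bucket name ***";
            String keyName            = "*** Provide object key ***";

            byte[] data = "example payload".getBytes("UTF-8");
            InputStream input = new ByteArrayInputStream(data);

            // Declaring the content length up front lets TransferManager plan
            // parts without buffering the whole stream in memory.
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setContentLength(data.length);

            TransferManager tm = new TransferManager(new ProfileCredentialsProvider());
            Upload upload = tm.upload(existingBucketName, keyName, input, metadata);
            upload.waitForCompletion();
        }
    }
    ```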

      



    http://docs.aws.amazon.com/AmazonS3/latest/dev/HLuploadFileJava.html

  • Original post: https://www.cnblogs.com/bodhitree/p/6143685.html