  • Setting up an Eclipse environment for developing and testing Spark, with a simple test

    I. Download the Scala IDE for Eclipse

      1. Download it from http://scala-ide.org/download/sdk.html, or from the mirror http://pan.baidu.com/s/1hrexmx2 (password: x0za).

      2. Open the IDE and create a new project named WordCount (File --> New --> Scala Project). You will notice a Scala Library container [2.11.7]; since that is not the version we need, change it: right-click WordCount --> Properties --> Scala Compiler --> Use Project Settings --> Scala Installation, choose the second entry, Latest 2.10 bundle (dynamic), and click OK.

      PS: if your Scala version is 2.11.x, you can skip this step.

      3. Right-click WordCount --> Build Path --> Configure Build Path --> Libraries --> Add External JARs..., select the spark-assembly-1.0.0-hadoop1.0.4.jar you extracted, then click OK.

      Download Spark from http://spark.apache.org/downloads.html, or from the mirror http://pan.baidu.com/s/1eRpWIdG (password: ue3l), and simply extract the archive.

      4. All the dependencies are now on the build path; create a new Scala class and you can start writing Spark programs.

    II. Writing the program

    Below is the program, with every step explained in the comments:

      1. If you want to run Spark locally:

    package com.df.spark

    import org.apache.spark.SparkConf
    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._
    import org.apache.spark.rdd.RDD

    /**
     * A Spark WordCount program written in Scala, run locally.
     * @author liuzhongfeng
     */
    object WordCount {
      def main(args: Array[String]) {
        /**
         * Step 1: create the SparkConf object and set the runtime configuration of the Spark program.
         * For example, setMaster sets the URL of the Spark master the program connects to; setting it
         * to "local" runs the program locally, which is ideal for beginners on modest hardware.
         */
        val conf = new SparkConf() // create the SparkConf object
        conf.setAppName("My First Spark App!") // the application name shown in the monitoring UI
        conf.setMaster("local") // run the program locally

        /**
         * Step 2: create the SparkContext object.
         * SparkContext is the single entry point to all Spark functionality; every Spark program,
         * whether written in Scala, Java, Python or R, must have one.
         * Its core job is to initialize the components the application needs, including the
         * DAGScheduler, TaskScheduler and SchedulerBackend, and to register the program with the Master.
         * It is one of the most important objects in the whole Spark application.
         */
        val sc = new SparkContext(conf) // pass in the SparkConf to customize the runtime configuration

        /**
         * Step 3: create an RDD through the SparkContext from the concrete data source (HDFS, HBase, local FS, S3).
         * There are basically three ways to create an RDD: from an external data source (e.g. HDFS),
         * from a Scala collection, or from another RDD.
         * The data is split by the RDD into a series of partitions; the data assigned to each partition
         * is handled by one task.
         */
        // read a local file as a single partition
        val lines = sc.textFile("H://下载//linux软件包//linux-spark的文件//spark//spark-1.0.0-bin-hadoop1//README.md", 1)

        /**
         * Step 4: apply transformations (map, filter and other higher-order functions) to the initial RDD.
         * Step 4.1: split each line into individual words.
         */
        val words = lines.flatMap { line => line.split(" ") } // split every line and flatten the results into one collection of words

        /**
         * Step 4.2: map each word to a count of 1, i.e. word => (word, 1).
         */
        val pairs = words.map { word => (word, 1) }

        /**
         * Step 4.3: sum the counts per word to get the total number of occurrences of each word in the file.
         */
        val wordCounts = pairs.reduceByKey(_ + _) // accumulate the values for identical keys (both map-side and reduce-side)
        wordCounts.collect.foreach(wordNumberPair => println(wordNumberPair._1 + " : " + wordNumberPair._2))
        sc.stop()
      }
    }

      Right-click the file and choose Run As --> Scala Application; the following output appears:

    16/01/27 16:55:27 INFO SecurityManager: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    16/01/27 16:55:27 INFO SecurityManager: Changing view acls to: liuzhongfeng
    16/01/27 16:55:27 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(liuzhongfeng)
    16/01/27 16:55:28 INFO Slf4jLogger: Slf4jLogger started
    16/01/27 16:55:28 INFO Remoting: Starting remoting
    16/01/27 16:55:28 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@Frank:38059]
    16/01/27 16:55:28 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@Frank:38059]
    16/01/27 16:55:28 INFO SparkEnv: Registering MapOutputTracker
    16/01/27 16:55:28 INFO SparkEnv: Registering BlockManagerMaster
    16/01/27 16:55:28 INFO DiskBlockManager: Created local directory at C:\Users\LIUZHO~1\AppData\Local\Temp\spark-local-20160127165528-81e4
    16/01/27 16:55:28 INFO MemoryStore: MemoryStore started with capacity 1068.9 MB.
    16/01/27 16:55:28 INFO ConnectionManager: Bound socket to port 38062 with id = ConnectionManagerId(Frank,38062)
    16/01/27 16:55:28 INFO BlockManagerMaster: Trying to register BlockManager
    16/01/27 16:55:28 INFO BlockManagerInfo: Registering block manager Frank:38062 with 1068.9 MB RAM
    16/01/27 16:55:28 INFO BlockManagerMaster: Registered BlockManager
    16/01/27 16:55:28 INFO HttpServer: Starting HTTP Server
    16/01/27 16:55:28 INFO HttpBroadcast: Broadcast server started at http://192.168.1.107:38063
    16/01/27 16:55:28 INFO HttpFileServer: HTTP File server directory is C:\Users\LIUZHO~1\AppData\Local\Temp\spark-59ecde39-31f6-4f84-ac49-e86194415dec
    16/01/27 16:55:28 INFO HttpServer: Starting HTTP Server
    16/01/27 16:55:28 INFO SparkUI: Started SparkUI at http://Frank:4040
    16/01/27 16:55:29 INFO MemoryStore: ensureFreeSpace(32816) called with curMem=0, maxMem=1120822886
    16/01/27 16:55:29 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 32.0 KB, free 1068.9 MB)
    16/01/27 16:55:29 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    16/01/27 16:55:29 WARN LoadSnappy: Snappy native library not loaded
    16/01/27 16:55:29 INFO FileInputFormat: Total input paths to process : 1
    16/01/27 16:55:29 INFO SparkContext: Starting job: foreach at WordCount.scala:53
    16/01/27 16:55:29 INFO DAGScheduler: Registering RDD 4 (reduceByKey at WordCount.scala:52)
    16/01/27 16:55:29 INFO DAGScheduler: Got job 0 (foreach at WordCount.scala:53) with 1 output partitions (allowLocal=false)
    16/01/27 16:55:29 INFO DAGScheduler: Final stage: Stage 0(foreach at WordCount.scala:53)
    16/01/27 16:55:29 INFO DAGScheduler: Parents of final stage: List(Stage 1)
    16/01/27 16:55:29 INFO DAGScheduler: Missing parents: List(Stage 1)
    16/01/27 16:55:29 INFO DAGScheduler: Submitting Stage 1 (MapPartitionsRDD[4] at reduceByKey at WordCount.scala:52), which has no missing parents
    16/01/27 16:55:29 INFO DAGScheduler: Submitting 1 missing tasks from Stage 1 (MapPartitionsRDD[4] at reduceByKey at WordCount.scala:52)
    16/01/27 16:55:29 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
    16/01/27 16:55:29 INFO TaskSetManager: Starting task 1.0:0 as TID 0 on executor localhost: localhost (PROCESS_LOCAL)
    16/01/27 16:55:29 INFO TaskSetManager: Serialized task 1.0:0 as 2172 bytes in 2 ms
    16/01/27 16:55:29 INFO Executor: Running task ID 0
    16/01/27 16:55:29 INFO BlockManager: Found block broadcast_0 locally
    16/01/27 16:55:29 INFO HadoopRDD: Input split: file:/H:/下载/linux软件包/linux-spark的文件/spark/spark-1.0.0-bin-hadoop1/README.md:0+4221
    16/01/27 16:55:29 INFO Executor: Serialized size of result for 0 is 775
    16/01/27 16:55:29 INFO Executor: Sending result for 0 directly to driver
    16/01/27 16:55:29 INFO Executor: Finished task ID 0
    16/01/27 16:55:29 INFO TaskSetManager: Finished TID 0 in 231 ms on localhost (progress: 1/1)
    16/01/27 16:55:29 INFO DAGScheduler: Completed ShuffleMapTask(1, 0)
    16/01/27 16:55:29 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
    16/01/27 16:55:29 INFO DAGScheduler: Stage 1 (reduceByKey at WordCount.scala:52) finished in 0.240 s
    16/01/27 16:55:29 INFO DAGScheduler: looking for newly runnable stages
    16/01/27 16:55:29 INFO DAGScheduler: running: Set()
    16/01/27 16:55:29 INFO DAGScheduler: waiting: Set(Stage 0)
    16/01/27 16:55:29 INFO DAGScheduler: failed: Set()
    16/01/27 16:55:29 INFO DAGScheduler: Missing parents for Stage 0: List()
    16/01/27 16:55:29 INFO DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[6] at reduceByKey at WordCount.scala:52), which is now runnable
    16/01/27 16:55:29 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (MapPartitionsRDD[6] at reduceByKey at WordCount.scala:52)
    16/01/27 16:55:29 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
    16/01/27 16:55:29 INFO TaskSetManager: Starting task 0.0:0 as TID 1 on executor localhost: localhost (PROCESS_LOCAL)
    16/01/27 16:55:29 INFO TaskSetManager: Serialized task 0.0:0 as 2003 bytes in 1 ms
    16/01/27 16:55:29 INFO Executor: Running task ID 1
    16/01/27 16:55:29 INFO BlockManager: Found block broadcast_0 locally
    16/01/27 16:55:29 INFO BlockFetcherIterator$BasicBlockFetcherIterator: maxBytesInFlight: 50331648, targetRequestSize: 10066329
    16/01/27 16:55:29 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
    16/01/27 16:55:29 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote fetches in 6 ms
    For : 5
    Programs : 1
    gladly : 1
    Because : 1
    The : 1
    agree : 1
    cluster. : 1
    webpage : 1
    its : 1
    under : 2
    legal : 1
    1.x, : 1
    have : 2
    Try : 1
    MRv1, : 1
    add : 2
    through : 1
    several : 1
    This : 2
    Whether : 1
    "yarn-cluster" : 1
    % : 2
    storage : 1
    To : 2
    setting : 1
    any : 2
    Once : 1
    application : 1
    explicitly, : 1
    use: : 1
    prefer : 1
    SparkPi : 2
    version : 3
    file : 1
    documentation, : 1
    Along : 1
    the : 28
    entry : 1
    author. : 1
    are : 2
    systems. : 1
    params : 1
    not : 2
    different : 1
    refer : 1
    Interactive : 2
    given. : 1
    if : 5
    file's : 1
    build : 3
    when : 2
    be : 2
    Tests : 1
    Apache : 6
    ./bin/run-example : 2
    programs, : 1
    including : 1
    <http://spark.apache.org/documentation.html>. : 1
    Spark. : 2
    2.0.5-alpha : 1
    package. : 1
    1000).count() : 1
    project's : 3
    Versions : 1
    HDFS : 1
    license : 3
    email, : 1
    <artifactId>hadoop-client</artifactId> : 1
    >>> : 1
    "org.apache.hadoop" : 1
    <version>1.2.1</version> : 1
    programming : 1
    Testing : 1
    run: : 1
    environment : 2
    pull : 3
    1000: : 2
    v2 : 1
    <groupId>org.apache.hadoop</groupId> : 1
    Please : 1
    is : 6
    run : 7
    URL, : 1
    SPARK_HADOOP_VERSION=2.2.0 : 1
    threads. : 1
    same : 1
    MASTER=spark://host:7077 : 1
    on : 4
    built : 2
    against : 1
    tests : 1
    examples : 2
    at : 1
    usage : 1
    using : 3
    Maven, : 1
    talk : 1
    submitting : 1
    Shell : 2
    class : 2
    adding : 1
    abbreviated : 1
    directory. : 1
    README : 1
    overview : 1
    dependencies. : 1
    `examples` : 2
    example: : 1
    ## : 9
    N : 1
    set : 2
    use : 3
    Hadoop-supported : 1
    running : 1
    find : 1
    via : 2
    contains : 1
    project : 3
    SPARK_HADOOP_VERSION=2.0.5-alpha : 1
    Pi : 1
    need : 1
    request, : 1
    or : 5
    </dependency> : 1
    <class> : 1
    uses : 1
    "hadoop-client" : 2
    Hadoop, : 1
    (You : 1
    requires : 1
    Contributions : 1
    SPARK_HADOOP_VERSION=1.2.1 : 1
    Documentation : 1
    of : 3
    cluster : 1
    using: : 1
    accepted : 1
    must : 1
    "1.2.1" : 1
    1.2.1 : 2
    built, : 1
    Hadoop : 11
    means : 1
    Spark : 12
    this : 4
    Python : 2
    original : 2
    YARN, : 3
    2.1.X, : 1
    pre-built : 1
    [Configuration : 1
    locally. : 1
    ./bin/pyspark : 1
    A : 1
    locally : 2
    # : 6
    sc.parallelize(1 : 1
    only : 1
    library : 1
    Configuration : 1
    basic : 1
    MapReduce : 2
    documentation : 1
    first : 1
    which : 2
    following : 2
    changed : 1
    also : 4
    Cloudera : 4
    without : 1
    should : 2
    for : 1
    "yarn-client" : 1
    [params]`. : 1
    `SPARK_YARN=true`: : 1
    setup : 1
    mesos:// : 1
    <http://spark.apache.org/> : 1
    GitHub : 1
    requests : 1
    latest : 1
    your : 6
    test : 1
    MASTER : 1
    example : 3
    authority : 1
    SPARK_YARN=true : 3
    scala> : 1
    guide](http://spark.apache.org/docs/latest/configuration.html) : 1
    configure : 1
    artifact : 1
    can : 7
    About : 1
    you're : 1
    instructions. : 1
    do : 3
    2.0.X, : 1
    easiest : 1
    no : 1
    When : 1
    how : 1
    newer : 1
    `./bin/run-example : 1
    source : 2
    copyrighted : 1
    material : 2
    Note : 1
    2.10. : 1
    by : 3
    please : 1
    Lightning-Fast : 1
    spark:// : 1
    so. : 1
    Scala : 3
    Alternatively, : 1
    If : 1
    Cluster : 1
    variable : 1
    submit : 1
    an : 2
    thread, : 1
    them, : 1
    2.2.X : 1
    And : 1
    application, : 1
    return : 2
    developing : 1
    ./bin/spark-shell : 1
    `<dependencies>` : 1
    warrant : 1
    "local" : 1
    start : 1
    You : 4
    <dependency> : 1
    Spark](#building-spark). : 1
    one : 2
    help : 1
    with : 8
    print : 1
    CDH : 4
    2.2.X, : 1
    $ : 5
    SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 : 1
    in : 4
    Contributing : 1
    downloaded : 1
    versions : 4
    online : 1
    `libraryDependencies`: : 1
    - : 1
    section: : 1
    4.2.0 : 2
    comes : 1
    [building : 1
    Python, : 1
    0.23.x, : 1
    `SPARK_HADOOP_VERSION` : 1
    Many : 1
    other : 4
    Running : 1
    sbt/sbt : 5
    building : 1
    way : 1
    SBT, : 1
    Online : 1
    change : 1
    MRv2, : 1
    contribution : 1
    from : 1
    Example : 1
    POM : 1
    open : 2
    sc.parallelize(range(1000)).count() : 1
    you : 8
    runs. : 1
    Building : 1
    protocols : 1
    that : 4
    a : 5
    their : 1
    guide, : 1
    name : 1
    example, : 1
    state : 2
    work : 2
    will : 1
    instance: : 1
    to : 19
    v1 : 1
    core : 1
     : 149
    license. : 1
    "local[N]" : 1
    programs : 2
    package.) : 1
    shell: : 2
    ./sbt/sbt : 2
    assembly : 6
    specify : 1
    and : 9
    Computing : 1
    command, : 2
    SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 : 1
    sample : 1
    requests, : 1
    16/01/27 16:55:29 INFO Executor: Serialized size of result for 1 is 825
    16/01/27 16:55:29 INFO Executor: Sending result for 1 directly to driver
    16/01/27 16:55:29 INFO Executor: Finished task ID 1
    16/01/27 16:55:29 INFO DAGScheduler: Completed ResultTask(0, 0)
    16/01/27 16:55:29 INFO DAGScheduler: Stage 0 (foreach at WordCount.scala:53) finished in 0.126 s
    16/01/27 16:55:29 INFO TaskSetManager: Finished TID 1 in 123 ms on localhost (progress: 1/1)
    16/01/27 16:55:29 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
    16/01/27 16:55:29 INFO SparkContext: Job finished: foreach at WordCount.scala:53, took 0.521885349 s
    16/01/27 16:55:29 INFO SparkUI: Stopped Spark web UI at http://Frank:4040
    16/01/27 16:55:29 INFO DAGScheduler: Stopping DAGScheduler
    16/01/27 16:55:31 INFO MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
    16/01/27 16:55:31 INFO ConnectionManager: Selector thread was interrupted!
    16/01/27 16:55:31 INFO ConnectionManager: ConnectionManager stopped
    16/01/27 16:55:31 INFO MemoryStore: MemoryStore cleared
    16/01/27 16:55:31 INFO BlockManager: BlockManager stopped
    16/01/27 16:55:31 INFO BlockManagerMasterActor: Stopping BlockManagerMaster
    16/01/27 16:55:31 INFO BlockManagerMaster: BlockManagerMaster stopped
    16/01/27 16:55:31 INFO SparkContext: Successfully stopped SparkContext
    16/01/27 16:55:31 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
    16/01/27 16:55:31 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.

      2. If you want to run Spark on a cluster:

    package com.df.spark

    import org.apache.spark.SparkConf
    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._
    import org.apache.spark.rdd.RDD

    /**
     * A Spark WordCount program written in Scala, run on a cluster.
     * @author liuzhongfeng
     */
    object WordCount_Cluster {
      def main(args: Array[String]) {
        /**
         * Step 1: create the SparkConf object and set the runtime configuration of the Spark program.
         * For example, setMaster sets the URL of the Spark master the program connects to; setting it
         * to "local" would run the program locally instead.
         */
        val conf = new SparkConf() // create the SparkConf object
        conf.setAppName("My First Spark App!") // the application name shown in the monitoring UI
        conf.setMaster("spark://cMaster-spark:7077") // run the program on the Spark cluster

        /**
         * Step 2: create the SparkContext object.
         * SparkContext is the single entry point to all Spark functionality; every Spark program,
         * whether written in Scala, Java, Python or R, must have one.
         * Its core job is to initialize the components the application needs, including the
         * DAGScheduler, TaskScheduler and SchedulerBackend, and to register the program with the Master.
         * It is one of the most important objects in the whole Spark application.
         */
        val sc = new SparkContext(conf) // pass in the SparkConf to customize the runtime configuration

        /**
         * Step 3: create an RDD through the SparkContext from the concrete data source (HDFS, HBase, local FS, S3).
         * There are basically three ways to create an RDD: from an external data source (e.g. HDFS),
         * from a Scala collection, or from another RDD.
         * The data is split by the RDD into a series of partitions; the data assigned to each partition
         * is handled by one task.
         */
        val lines = sc.textFile("/in", 1) // read the file from your HDFS

        /**
         * Step 4: apply transformations (map, filter and other higher-order functions) to the initial RDD.
         * Step 4.1: split each line into individual words.
         */
        val words = lines.flatMap { line => line.split(" ") } // split every line and flatten the results into one collection of words

        /**
         * Step 4.2: map each word to a count of 1, i.e. word => (word, 1).
         */
        val pairs = words.map { word => (word, 1) }

        /**
         * Step 4.3: sum the counts per word to get the total number of occurrences of each word in the file.
         */
        val wordCounts = pairs.reduceByKey(_ + _) // accumulate the values for identical keys (both map-side and reduce-side)
        wordCounts.collect.foreach(wordNumberPair => println(wordNumberPair._1 + " : " + wordNumberPair._2))
        sc.stop()
      }
    }

      (1) Package the program into a JAR so it can be run on the Spark cluster on your Linux machine. To export it: right-click the file you want to package, e.g. WordCount.scala, then Export --> Java --> JAR file, choose the output path, and click OK.

      (2) Then copy the exported JAR over to your Linux system, for example as sketched below.
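      One way to do this is with scp; the user name and target directory below are only placeholders for your own setup (the host name comes from the master URL used in the code):

    # copy the exported JAR to the cluster's master node (user and path are assumptions)
    scp WordCount.jar hadoop@cMaster-spark:/home/hadoop/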

        

      Then start your Hadoop cluster and your Spark cluster, and check with jps that the daemons are running.
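      Assuming HADOOP_HOME and SPARK_HOME point at a Hadoop 1.x and a Spark 1.0.0 installation (these environment variables are assumptions), a typical start-up looks roughly like this:

    # start HDFS and MapReduce (Hadoop 1.x keeps its scripts under bin/)
    $HADOOP_HOME/bin/start-all.sh
    # start the standalone Spark master and workers
    $SPARK_HOME/sbin/start-all.sh
    # verify the daemons: you should see NameNode/DataNode and Master/Worker processes
    jps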

            

      Then run the submit command:
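      The exact command from the original screenshot is not reproduced here; with Spark 1.0.0 a submission would look roughly like the following sketch (the JAR name and input file are assumptions, and the /in path read by the code must already exist on HDFS):

    # put an input file at the path the program reads, if it is not there yet (example file is an assumption)
    hadoop fs -put README.md /in
    # submit the packaged application to the standalone cluster
    # (the master set in the code via setMaster takes precedence over --master)
    $SPARK_HOME/bin/spark-submit \
      --class com.df.spark.WordCount_Cluster \
      --master spark://cMaster-spark:7077 \
      WordCount.jar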

    The job now runs successfully!

  • Original post: https://www.cnblogs.com/liuzhongfeng/p/5163952.html