• Compiling hadoop-2.6.0-cdh5.4.4 on Ubuntu 14.04

1 Protocol Buffers

    sudo apt-get install libprotobuf-dev

    asn@hadoop1:~/Desktop$ protoc --version

    libprotoc 2.5.0
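Note: on Ubuntu the protoc binary itself ships in the protobuf-compiler package rather than in libprotobuf-dev, so if the version check above fails with "command not found", installing it separately should help (Hadoop 2.6 expects protoc 2.5.0, which is the version Ubuntu 14.04 provides):

$ sudo apt-get install protobuf-compiler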

2 Install CMake

sudo apt-get install cmake

    asn@hadoop1:~/Desktop$ cmake --version

    cmake version 2.8.12.2

3 Install other dependencies

    sudo apt-get install zlib1g-dev libssl-dev

4 Snappy compression

Build and install snappy from source:

    cd snappy-1.1.2/

    sudo ./configure

    sudo make

    sudo make install
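Since snappy installs its libraries into /usr/local/lib, it can be worth refreshing the dynamic linker cache before moving on, so that later builds find the new .so files:

$ sudo ldconfig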

    asn@hadoop1:~/snappy-1.1.2$ ls /usr/local/lib | grep snappy

    libsnappy.a

    libsnappy.la

    libsnappy.so

    libsnappy.so.1

    libsnappy.so.1.2.1

hadoop-snappy: before building, switch the default gcc from 4.8 to 4.4:

    asn@hadoop1:~/hadoop-snappy$ ls -l /usr/bin/gcc

lrwxrwxrwx 1 root root 7 Apr 27 22:36 /usr/bin/gcc -> gcc-4.8

    asn@hadoop1:~/hadoop-snappy$ sudo rm /usr/bin/gcc

    asn@hadoop1:~/hadoop-snappy$ ls -l /usr/bin/gcc

    ls: cannot access /usr/bin/gcc: No such file or directory

    $ sudo apt-get install gcc-4.4

    asn@hadoop1:~/hadoop-snappy$ sudo ln -s /usr/bin/gcc-4.4 /usr/bin/gcc

    asn@hadoop1:~/hadoop-snappy$ ls -l /usr/bin/gcc

lrwxrwxrwx 1 root root 16 Jul 24 11:30 /usr/bin/gcc -> /usr/bin/gcc-4.4

Building hadoop-snappy with mvn package may then fail with:

[exec] make: *** [libhadoopsnappy.la] Error 1

[exec] libtool: link: gcc -shared -fPIC -DPIC src/org/apache/hadoop/io/compress/snappy/.libs/SnappyCompressor.o src/org/apache/hadoop/io/compress/snappy/.libs/SnappyDecompressor.o -L/usr/local/lib -ljvm -ldl -O2 -m64 -O2 -Wl,-soname -Wl,libhadoopsnappy.so.0 -o .libs/libhadoopsnappy.so.0.0.1

This happens because libjvm.so from the JVM installation has not been symbolically linked into /usr/local/lib.

If your system is 64-bit, look under /root/bin/jdk1.6.0_37/jre/lib/amd64/server/ to see where libjvm.so resides, then fix it with the following command:

    $ sudo ln -s /usr/local/jdk1.6.0_45/jre/lib/amd64/server/libjvm.so /usr/local/lib/
    

That resolves the problem.
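If you are not sure where libjvm.so lives for your JDK, a quick way to locate it is shown below (the JDK path is the one used above; adjust it to your installation), then point the symbolic link at whatever path it prints:

$ find /usr/local/jdk1.6.0_45 -name libjvm.so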

     

Re-running the hadoop-snappy build now succeeds:

mvn package

    [INFO] ------------------------------------------------------------------------

    [INFO] BUILD SUCCESS

    [INFO] ------------------------------------------------------------------------

    [INFO] Total time: 43.710 s

    [INFO] Finished at: 2015-07-24T11:54:09+08:00

    [INFO] Final Memory: 23M/359M

    [INFO] ------------------------------------------------------------------------

5 FindBugs

    http://osdn.jp/projects/sfnet_findbugs/downloads/findbugs/3.0.1/findbugs-3.0.1.zip/

    asn@hadoop1:~$ unzip -h

    UnZip 6.00 of 20 April 2009, by Debian. Original by Info-ZIP.

    Usage: unzip [-Z] [-opts[modifiers]] file[.zip] [list] [-x xlist] [-d exdir]

    Default action is to extract files in list, except those in xlist, to exdir;

    file[.zip] may be a wildcard. -Z => ZipInfo mode ("unzip -Z" for usage).

    -p extract files to pipe, no messages -l list files (short format)

    -f freshen existing files, create none -t test compressed archive data

    -u update files, create if necessary -z display archive comment only

    -v list verbosely/show version info -T timestamp archive to latest

    -x exclude files that follow (in xlist) -d extract files into exdir

    modifiers:

    -n never overwrite existing files -q quiet mode (-qq => quieter)

    -o overwrite files WITHOUT prompting -a auto-convert any text files

    -j junk paths (do not make directories) -aa treat ALL files as text

    -U use escapes for all non-ASCII Unicode -UU ignore any Unicode fields

    -C match filenames case-insensitively -L make (some) names lowercase

    -X restore UID/GID info -V retain VMS version numbers

    -K keep setuid/setgid/tacky permissions -M pipe through "more" pager

    -O CHARSET specify a character encoding for DOS, Windows and OS/2 archives

    -I CHARSET specify a character encoding for UNIX and other archives

    See "unzip -hh" or unzip.txt for more help. Examples:

    unzip data1 -x joe => extract all files except joe from zipfile data1.zip

    unzip -p foo | more => send contents of foo.zip via pipe into program more

    unzip -fo foo ReadMe => quietly replace existing ReadMe if archive file newer

    asn@hadoop1:~$ sudo unzip findbugs-3.0.1.zip -d /usr/local

Configure the environment variables:

    # findbugs

    export FINDBUGS_HOME=/usr/local/findbugs-3.0.1

    export PATH=$PATH:$FINDBUGS_HOME/bin
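These exports only apply to the current shell; to make them permanent, a common approach is to append them to ~/.bashrc and reload it, then sanity-check that the findbugs script is on the PATH:

$ echo 'export FINDBUGS_HOME=/usr/local/findbugs-3.0.1' >> ~/.bashrc
$ echo 'export PATH=$PATH:$FINDBUGS_HOME/bin' >> ~/.bashrc
$ source ~/.bashrc
$ findbugs -version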

6 Compile Hadoop

    mvn package -Pdist,native,docs -DskipTests -Dtar
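To make snappy support explicit, Hadoop's BUILDING.txt describes snappy-related build properties; a variant of the command that fails fast if libsnappy cannot be found and bundles it into the distribution might look like this (the plain command above is what produced the log below):

$ mvn package -Pdist,native,docs -DskipTests -Dtar -Drequire.snappy -Dsnappy.lib=/usr/local/lib -Dbundle.snappy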

    [INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---

    [INFO] Building jar: /home/asn/hadoop-2.6.0-cdh5.4.4-src/hadoop-dist/target/hadoop-dist-2.6.0-cdh5.4.4-javadoc.jar

    [INFO] ------------------------------------------------------------------------

    [INFO] Reactor Summary:

    [INFO]

    [INFO] Apache Hadoop Main ................................. SUCCESS [ 1.612 s]

    [INFO] Apache Hadoop Project POM .......................... SUCCESS [ 0.879 s]

    [INFO] Apache Hadoop Annotations .......................... SUCCESS [ 2.210 s]

    [INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 0.467 s]

    [INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 2.778 s]

    [INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 2.803 s]

    [INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 3.063 s]

    [INFO] Apache Hadoop Auth ................................. SUCCESS [ 21.714 s]

    [INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 4.256 s]

    [INFO] Apache Hadoop Common ............................... SUCCESS [03:50 min]

    [INFO] Apache Hadoop NFS .................................. SUCCESS [ 5.584 s]

    [INFO] Apache Hadoop KMS .................................. SUCCESS [03:47 min]

    [INFO] Apache Hadoop Common Project ....................... SUCCESS [ 0.037 s]

    [INFO] Apache Hadoop HDFS ................................. SUCCESS [09:31 min]

    [INFO] Apache Hadoop HttpFS ............................... SUCCESS [03:25 min]

    [INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [ 6.757 s]

    [INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 3.637 s]

    [INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 0.207 s]

    [INFO] hadoop-yarn ........................................ SUCCESS [ 0.231 s]

    [INFO] hadoop-yarn-api .................................... SUCCESS [01:21 min]

    [INFO] hadoop-yarn-common ................................. SUCCESS [ 21.180 s]

    [INFO] hadoop-yarn-server ................................. SUCCESS [ 0.089 s]

    [INFO] hadoop-yarn-server-common .......................... SUCCESS [ 8.697 s]

    [INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [ 18.131 s]

    [INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [ 2.802 s]

    [INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [ 6.952 s]

    [INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 18.398 s]

    [INFO] hadoop-yarn-server-tests ........................... SUCCESS [ 1.072 s]

    [INFO] hadoop-yarn-client ................................. SUCCESS [ 4.769 s]

    [INFO] hadoop-yarn-applications ........................... SUCCESS [ 0.062 s]

    [INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [ 2.625 s]

    [INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [ 1.906 s]

    [INFO] hadoop-yarn-site ................................... SUCCESS [ 0.088 s]

    [INFO] hadoop-yarn-registry ............................... SUCCESS [ 4.753 s]

    [INFO] hadoop-yarn-project ................................ SUCCESS [ 5.611 s]

    [INFO] hadoop-mapreduce-client ............................ SUCCESS [ 0.204 s]

    [INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 16.280 s]

    [INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 15.060 s]

    [INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [ 5.416 s]

    [INFO] hadoop-mapreduce-client-app ........................ SUCCESS [ 7.769 s]

    [INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [ 6.008 s]

    [INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [ 4.967 s]

    [INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [ 1.901 s]

    [INFO] hadoop-mapreduce-client-nativetask ................. SUCCESS [01:32 min]

    [INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 5.513 s]

    [INFO] hadoop-mapreduce ................................... SUCCESS [ 5.140 s]

    [INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 3.647 s]

    [INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 6.701 s]

    [INFO] Apache Hadoop Archives ............................. SUCCESS [ 2.504 s]

    [INFO] Apache Hadoop Rumen ................................ SUCCESS [ 5.215 s]

    [INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 3.454 s]

    [INFO] Apache Hadoop Data Join ............................ SUCCESS [ 2.347 s]

    [INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [ 2.068 s]

    [INFO] Apache Hadoop Extras ............................... SUCCESS [ 2.509 s]

    [INFO] Apache Hadoop Pipes ................................ SUCCESS [ 12.185 s]

    [INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 5.989 s]

    [INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [01:17 min]

    [INFO] Apache Hadoop Azure support ........................ SUCCESS [ 26.187 s]

    [INFO] Apache Hadoop Client ............................... SUCCESS [ 13.386 s]

    [INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 2.447 s]

    [INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 6.560 s]

    [INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 11.723 s]

    [INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.126 s]

    [INFO] Apache Hadoop Distribution ......................... SUCCESS [03:02 min]

    [INFO] ------------------------------------------------------------------------

    [INFO] BUILD SUCCESS

    [INFO] ------------------------------------------------------------------------

    [INFO] Total time: 33:22 min

    [INFO] Finished at: 2015-07-24T15:56:25+08:00

    [INFO] Final Memory: 230M/791M

    [INFO] ------------------------------------------------------------------------
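Assuming the build ends like this, the distribution lands under hadoop-dist/target (directory and tarball names follow the version string); running hadoop checknative from the built tree is a quick way to confirm that the native zlib/snappy/openssl support was actually compiled in:

$ ls hadoop-dist/target | grep 'tar\.gz'
$ cd hadoop-dist/target/hadoop-2.6.0-cdh5.4.4
$ bin/hadoop checknative -a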

7 Import the Hadoop project into Eclipse

1) Install the hadoop-maven-plugins plugin

    $ cd hadoop-maven-plugins

    $ mvn install

2) In the hadoop-2.6.0-cdh5.4.4-src directory, run

    mvn eclipse:eclipse -DskipTests

asn@hadoop1:~/hadoop-2.6.0-cdh5.4.4-src$ mvn org.apache.maven.plugins:maven-eclipse-plugin:2.6:eclipse -DdownloadJavadocs=true

    Javadoc for some artifacts is not available.

    Please run the same goal with the -DdownloadJavadocs=true parameter in order to check remote repositories for javadoc.

    main:

    [INFO] Executed tasks

    [INFO]

    [INFO] --- maven-remote-resources-plugin:1.0:process (default) @ hadoop-dist ---

    [INFO] inceptionYear not specified, defaulting to 2015

    [INFO]

    [INFO] <<< maven-eclipse-plugin:2.6:eclipse (default-cli) < generate-resources @ hadoop-dist <<<

    [INFO]

    [INFO] --- maven-eclipse-plugin:2.6:eclipse (default-cli) @ hadoop-dist ---

    [INFO] Using Eclipse Workspace: null

    [INFO] Adding default classpath container: org.eclipse.jdt.launching.JRE_CONTAINER

    [INFO] Wrote settings to /home/asn/hadoop-2.6.0-cdh5.4.4-src/hadoop-dist/.settings/org.eclipse.jdt.core.prefs

    [INFO] Wrote Eclipse project for "hadoop-dist" to /home/asn/hadoop-2.6.0-cdh5.4.4-src/hadoop-dist.

    [INFO]

    [INFO] ------------------------------------------------------------------------

    [INFO] Reactor Summary:

    [INFO]

    [INFO] Apache Hadoop Main ................................. SUCCESS [ 0.959 s]

    [INFO] Apache Hadoop Project POM .......................... SUCCESS [ 0.468 s]

    [INFO] Apache Hadoop Annotations .......................... SUCCESS [ 0.239 s]

    [INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 0.129 s]

    [INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 0.109 s]

    [INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 0.957 s]

    [INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 3.097 s]

    [INFO] Apache Hadoop Auth ................................. SUCCESS [ 1.904 s]

    [INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 0.442 s]

    [INFO] Apache Hadoop Common ............................... SUCCESS [ 1.475 s]

    [INFO] Apache Hadoop NFS .................................. SUCCESS [ 0.690 s]

    [INFO] Apache Hadoop KMS .................................. SUCCESS [ 1.199 s]

    [INFO] Apache Hadoop Common Project ....................... SUCCESS [ 0.035 s]

    [INFO] Apache Hadoop HDFS ................................. SUCCESS [ 1.738 s]

    [INFO] Apache Hadoop HttpFS ............................... SUCCESS [ 1.294 s]

    [INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [ 0.448 s]

    [INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 0.369 s]

    [INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 0.040 s]

    [INFO] hadoop-yarn ........................................ SUCCESS [ 0.038 s]

    [INFO] hadoop-yarn-api .................................... SUCCESS [ 0.714 s]

    [INFO] hadoop-yarn-common ................................. SUCCESS [ 0.370 s]

    [INFO] hadoop-yarn-server ................................. SUCCESS [ 0.030 s]

    [INFO] hadoop-yarn-server-common .......................... SUCCESS [ 0.423 s]

    [INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [ 0.576 s]

    [INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [ 0.416 s]

    [INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [ 0.499 s]

    [INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 0.707 s]

    [INFO] hadoop-yarn-server-tests ........................... SUCCESS [ 0.870 s]

    [INFO] hadoop-yarn-client ................................. SUCCESS [ 0.444 s]

    [INFO] hadoop-yarn-applications ........................... SUCCESS [ 0.029 s]

    [INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [ 0.347 s]

    [INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [ 0.357 s]

    [INFO] hadoop-yarn-site ................................... SUCCESS [ 0.037 s]

    [INFO] hadoop-yarn-registry ............................... SUCCESS [ 0.620 s]

    [INFO] hadoop-yarn-project ................................ SUCCESS [ 0.445 s]

    [INFO] hadoop-mapreduce-client ............................ SUCCESS [ 0.125 s]

    [INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 1.548 s]

    [INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 0.960 s]

    [INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [ 0.905 s]

    [INFO] hadoop-mapreduce-client-app ........................ SUCCESS [ 0.814 s]

    [INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [ 0.975 s]

    [INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [ 0.699 s]

    [INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [ 0.446 s]

    [INFO] hadoop-mapreduce-client-nativetask ................. SUCCESS [ 0.470 s]

    [INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 0.786 s]

    [INFO] hadoop-mapreduce ................................... SUCCESS [ 0.197 s]

    [INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 0.127 s]

    [INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [04:15 min]

    [INFO] Apache Hadoop Archives ............................. SUCCESS [ 0.157 s]

    [INFO] Apache Hadoop Rumen ................................ SUCCESS [ 0.174 s]

    [INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 6.043 s]

    [INFO] Apache Hadoop Data Join ............................ SUCCESS [ 0.159 s]

    [INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [ 0.104 s]

    [INFO] Apache Hadoop Extras ............................... SUCCESS [ 0.126 s]

    [INFO] Apache Hadoop Pipes ................................ SUCCESS [ 0.044 s]

    [INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 0.500 s]

    [INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [03:11 min]

    [INFO] Apache Hadoop Azure support ........................ SUCCESS [ 19.169 s]

    [INFO] Apache Hadoop Client ............................... SUCCESS [04:05 min]

    [INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 1.293 s]

    [INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 4.901 s]

    [INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 11.164 s]

    [INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.027 s]

    [INFO] Apache Hadoop Distribution ......................... SUCCESS [ 0.109 s]

    [INFO] ------------------------------------------------------------------------

    [INFO] BUILD SUCCESS

    [INFO] ------------------------------------------------------------------------

    [INFO] Total time: 12:50 min

    [INFO] Finished at: 2015-07-24T19:45:50+08:00

    [INFO] Final Memory: 118M/521M

    [INFO] ------------------------------------------------------------------------
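The .classpath files generated by maven-eclipse-plugin reference dependencies through the M2_REPO classpath variable. If your Eclipse workspace does not define it yet, the plugin can set it for you (the workspace path below is only an example), or it can be added manually under Window > Preferences > Java > Build Path > Classpath Variables:

$ mvn eclipse:configure-workspace -Declipse.workspace=/home/asn/workspace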

8 Single-node pseudo-distributed mode

1) Edit the configuration files under etc/hadoop

    core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/asn/tmp</value>
    </property>
</configuration>

    hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/asn/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/asn/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
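Besides these two XML files, etc/hadoop/hadoop-env.sh normally needs JAVA_HOME set explicitly, otherwise the start scripts may report that JAVA_HOME is not set. The path below reuses the JDK mentioned earlier; adjust it to the JDK you actually run:

# in etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.6.0_45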

2) Format the NameNode

    $ bin/hdfs namenode -format
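start-dfs.sh in the next step logs in to localhost over ssh, so passwordless ssh to localhost should already work; if it does not, a typical one-time setup is:

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost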

3) Start the NameNode and DataNode daemons

    $ sbin/start-dfs.sh

    asn@hadoop1:~/hadoop-2.6.0-cdh5.4.4$ sbin/start-dfs.sh

    Starting namenodes on [localhost]

    localhost: starting namenode, logging to /home/asn/hadoop-2.6.0-cdh5.4.4/logs/hadoop-asn-namenode-hadoop1.out

    localhost: datanode running as process 44763. Stop it first.

    Starting secondary namenodes [0.0.0.0]

    0.0.0.0: secondarynamenode running as process 44939. Stop it first.

4) Run the grep example

    $ bin/hdfs dfs -mkdir /user

    $ bin/hdfs dfs -mkdir /user/asn

    $ bin/hdfs dfs -put etc/hadoop input

    $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.4.4.jar grep input output 'dfs[a-z.]+'
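MapReduce refuses to overwrite an existing output directory, so when re-running the example the old output on HDFS has to be removed first:

$ bin/hdfs dfs -rm -r output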

5) Check the results

asn@hadoop1:~/hadoop-2.6.0-cdh5.4.4$ bin/hdfs dfs -get output output ## copy the output files from HDFS to the local filesystem

    asn@hadoop1:~/hadoop-2.6.0-cdh5.4.4$ ls

    bin etc include lib libexec LICENSE.txt logs NOTICE.txt output README.txt sbin share

    asn@hadoop1:~/hadoop-2.6.0-cdh5.4.4$ ls output

    part-r-00000 _SUCCESS

asn@hadoop1:~/hadoop-2.6.0-cdh5.4.4$ cat output/* ## view the local copy of the results

    6    dfs.audit.logger

    4    dfs.class

    3    dfs.server.namenode.

    2    dfs.audit.log.maxfilesize

    2    dfs.audit.log.maxbackupindex

    2    dfs.period

    1    dfsmetrics.log

    1    dfsadmin

    1    dfs.webhdfs.enabled

    1    dfs.servers

    1    dfs.replication

    1    dfs.file

    1    dfs.datanode.data.dir

    1    dfs.namenode.name.dir

asn@hadoop1:~/hadoop-2.6.0-cdh5.4.4$ bin/hdfs dfs -cat output/* ## view the results directly on HDFS

    6    dfs.audit.logger

    4    dfs.class

    3    dfs.server.namenode.

    2    dfs.audit.log.maxfilesize

    2    dfs.audit.log.maxbackupindex

    2    dfs.period

    1    dfsmetrics.log

    1    dfsadmin

    1    dfs.webhdfs.enabled

    1    dfs.servers

    1    dfs.replication

    1    dfs.file

    1    dfs.datanode.data.dir

    1    dfs.namenode.name.dir

9 Port summary

dfs.namenode.http-address (default 0.0.0.0:50070): the address and base port on which the DFS NameNode web UI listens.

dfs.namenode.secondary.http-address (default 0.0.0.0:50090): the secondary NameNode HTTP server address and port.

dfs.datanode.http.address (default 0.0.0.0:50075): the DataNode HTTP server address and port.

The HDFS filesystem service itself (fs.defaultFS, configured above) is exposed on port 9000.
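A quick way to confirm these services are up after start-dfs.sh is to check the listening ports, or simply open the web UIs (http://localhost:50070 for the NameNode, http://localhost:50075 for the DataNode, http://localhost:50090 for the secondary NameNode):

$ netstat -tlnp 2>/dev/null | grep -E ':(9000|50070|50075|50090)'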

Handling the error reported after regenerating the ssh key

    asn@hadoop1:~/hadoop-2.6.0-cdh5.4.4$ ssh localhost

    Agent admitted failure to sign using the key.

Solution: run the following command as the current user:

    ssh-add
