Setting Up a Hadoop 2.9.0 Distributed Cluster on Ubuntu 14.04.5


1: Installation Overview

1.1: Versions used: Ubuntu 14.04.5; Hadoop 2.9.0; JDK 1.8.0_121

1.2: Cluster machines: three hosts (or virtual machines) running Ubuntu 14.04.5 on the same LAN form the distributed cluster (you can prepare just one of them first):

ip hostname comment
192.168.1.203 hadoop-master Master
192.168.1.204 hadoop-slave01 Slave 01
192.168.1.205 hadoop-slave02 Slave 02

1.3: Check the IP address:

$ ifconfig
Note: look for the "inet addr:192.168.1.203" field in the output.

1.4: Configure a static IP:

$ vim /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.168.1.203
netmask 255.255.255.0
gateway 192.168.1.1

$ vim /etc/resolvconf/resolv.conf.d/base
nameserver 192.168.1.1
nameserver 8.8.8.8

$ /etc/init.d/networking restart

1.5: Check the hostname:

$ hostname

1.6: Change the hostname:

$ vim /etc/hostname
hadoop-master
$ reboot
Note: the hostname is an arbitrary string; here it is hadoop-master. Reboot for the change to take effect.

1.7: Map IP addresses to hostnames (add the same entries on all three hosts):

$ vim /etc/hosts
127.0.0.1 localhost
192.168.1.203 hadoop-master
192.168.1.204 hadoop-slave01
192.168.1.205 hadoop-slave02
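
Tip: hostname resolution can be checked right away with getent, which reads /etc/hosts (so it works even before the slave machines exist):

$ getent hosts hadoop-slave01
192.168.1.204   hadoop-slave01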

Tip: you can prepare only one machine, 192.168.1.203 (a virtual machine here). Perform the following steps on 203 first, then back up the machine image at the end, import it for the other two hosts, and change their IPs/hostnames to form the cluster. (Everything below is done as root; another user such as hadoop would also work.)

2: Download the Installation Packages

2.1: Hadoop download locations

http://archive.apache.org/dist/hadoop/core/
http://archive.apache.org/dist/hadoop/core/stable2/hadoop-2.9.0.tar.gz
https://dist.apache.org/repos/dist/release/hadoop/common/
http://mirror.bit.edu.cn/apache/hadoop/common/

2.2: JDK download location

http://www.oracle.com/technetwork/java/javase/archive-139210.html

2.3: Hadoop documentation in Chinese (the r1.0.4/cn docs correspond to Hadoop 0.18)

http://hadoop.apache.org/docs/r1.0.4/cn/

2.4: Packages used here

hadoop-2.9.0.tar.gz
jdk-8u121-linux-x64.tar.gz

2.5: Operating system: Ubuntu 14.04.5 LTS (Trusty)

3: Install Hadoop's Required Software

3.1: Install the JDK (Java™ 1.5.x or later; the Sun/Oracle release is recommended). Here: JDK 1.8.0_121.

3.1.1: Extract the JDK archive jdk-8u121-linux-x64.tar.gz
$ tar zxvf jdk-8u121-linux-x64.tar.gz -C /usr/local
3.1.2: Configure the Java environment variables (append at the end of the file)
$ vim /etc/profile
#Jdk
export JAVA_HOME=/usr/local/jdk1.8.0_121
export CLASSPATH=$JAVA_HOME/lib:.

export PATH=$PATH:$JAVA_HOME/bin
3.1.3: Apply the environment variables. Note: /etc/profile is only read by login shells, so run this again in any shell that was already open before the change:
$ source /etc/profile
3.1.4: Check the Java version
$ java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
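
If other Java installations exist on the machine, you can optionally register this JDK with update-alternatives so that /usr/bin/java resolves to it (an optional extra step, not required for this setup; the priority value 100 is arbitrary):

$ update-alternatives --install /usr/bin/java java /usr/local/jdk1.8.0_121/bin/java 100
$ update-alternatives --install /usr/bin/javac javac /usr/local/jdk1.8.0_121/bin/javac 100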

3.2: Install ssh, rsync, and openssh-server

$ sudo apt-get install ssh
$ sudo apt-get install rsync
$ sudo apt-get install openssh-server

$ dpkg -l | grep ssh
ii openssh-client 1:6.6p1-2ubuntu2.8 amd64 secure shell (SSH) client, for secure access to remote machines
ii openssh-server 1:6.6p1-2ubuntu2.8 amd64 secure shell (SSH) server, for secure access from remote machines
ii openssh-sftp-server 1:6.6p1-2ubuntu2.8 amd64 secure shell (SSH) sftp server module, for SFTP access from remote machines
ii ssh 1:6.6p1-2ubuntu2.8 all secure shell client and server (metapackage)
ii ssh-import-id 3.21-0ubuntu1 all securely retrieve an SSH public key and install it locally
$ dpkg -l | grep rsync
ii rsync 3.1.0-2ubuntu0.2 amd64 fast, versatile, remote (and local) file-copying tool
Note: SSH is a mature protocol that secures remote login sessions and other network services. It must be installed, and sshd must be kept running, so that the Hadoop scripts can manage the remote Hadoop daemons.
Note: rsync is a data mirroring and backup tool for Unix-like systems. It uses fast incremental transfer: the first sync copies everything, while later syncs transfer only the files that changed. It can copy locally or sync with remote rsync/SSH hosts, mirror entire directory trees and filesystems, and preserve permissions, timestamps, and hard/soft links. It can compress and decompress data in transit to use less bandwidth, transfer over scp/ssh or a direct socket connection, and supports anonymous transfer, which is convenient for mirroring websites.
Note: the SSH key pair (private and public keys) is generated with ssh-keygen from the OpenSSH suite; openssh-server provides the sshd daemon that the other hosts will connect to.
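
As a small illustration of the incremental syncing described above (a sketch only; the destination host and paths are examples, not a step in this setup):

$ rsync -avz /hadoop/ hadoop@hadoop-slave01:/hadoop/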

4: Create the Directories, Group, User, Password, and Ownership (identical on all three hosts):

$ mkdir -p /hadoop/bin
$ mkdir -p /hadoop/tmp
$ mkdir -p /hadoop/dfs/data
$ mkdir -p /hadoop/dfs/name
$ groupadd hadoop
$ useradd hadoop -g hadoop -d /hadoop -s /bin/bash
$ grep hadoop /etc/passwd
$ chown -R hadoop:hadoop /hadoop
$ passwd hadoop
Note: to check the user's home directory: $ ls ~

5: Passwordless SSH Setup

5.1: Generate a key pair and append the public key to the authorized keys, enabling passwordless ssh to this machine and, later, to the cluster machines (note: ~ = $HOME):
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
5.2: Check: confirm that ssh can log in to localhost without a password:
$ ssh localhost
Note: the first time, answer yes to "Are you sure you want to continue connecting (yes/no)?"; once logged in, $ exit leaves the session.

6: Install Hadoop

$ tar zxvf hadoop-2.9.0.tar.gz -C /hadoop/bin/

7: Modify the Hadoop Configuration Files

Note: all Hadoop configuration files live under /hadoop/bin/hadoop-2.9.0/etc/hadoop/.
7.1: Set JAVA_HOME to the Java installation root (used by both hadoop and yarn):
7.1.1: Edit hadoop-env.sh
$ vim /hadoop/bin/hadoop-2.9.0/etc/hadoop/hadoop-env.sh
Note: in hadoop-env.sh, add after the line "# The java implementation to use.":
export JAVA_HOME=/usr/local/jdk1.8.0_121
7.1.2: Edit yarn-env.sh
$ vim /hadoop/bin/hadoop-2.9.0/etc/hadoop/yarn-env.sh
Note: in yarn-env.sh, add after the line "# some Java parameters.":
export JAVA_HOME=/usr/local/jdk1.8.0_121
7.2: Edit slaves
Note: list the DataNode hostnames in this file, one per line. Here the hadoop-master node serves only as the NameNode.
$ vim /hadoop/bin/hadoop-2.9.0/etc/hadoop/slaves
hadoop-slave01
hadoop-slave02
7.3: Edit core-site.xml
$ vim /hadoop/bin/hadoop-2.9.0/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:9000</value>
  </property>
</configuration>
7.4: Edit hdfs-site.xml
$ vim /hadoop/bin/hadoop-2.9.0/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop/dfs/data</value>
  </property>
</configuration>
7.5: Edit mapred-site.xml
$ cp /hadoop/bin/hadoop-2.9.0/etc/hadoop/mapred-site.xml.template /hadoop/bin/hadoop-2.9.0/etc/hadoop/mapred-site.xml
$ vim /hadoop/bin/hadoop-2.9.0/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
7.6: Edit yarn-site.xml:
$ vim /hadoop/bin/hadoop-2.9.0/etc/hadoop/yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-master</value>
  </property>
</configuration>

8: Configure the Hadoop Environment Variables

$ vim /etc/profile

#Jdk
export JAVA_HOME=/usr/local/jdk1.8.0_121
export CLASSPATH=$JAVA_HOME/lib:.
#Hadoop
export HADOOP_HOME=/hadoop/bin/hadoop-2.9.0

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

$ source /etc/profile
$ chown -R hadoop:hadoop /hadoop
$ hadoop version
Hadoop 2.9.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 756ebc8394e473ac25feac05fa493f6d612e6c50
Compiled by arsuresh on 2017-11-13T23:15Z
Compiled with protoc 2.5.0
From source with checksum 0a76a9a32a5257331741f8d5932f183
This command was run using /hadoop/bin/hadoop-2.9.0/share/hadoop/common/hadoop-common-2.9.0.jar
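
With the environment variables in place, a quick way to confirm that the configuration from section 7 is being picked up is hdfs getconf, a standard HDFS utility; the expected value is the one set in core-site.xml:

$ hdfs getconf -confKey fs.defaultFS
hdfs://hadoop-master:9000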

Tip: at this point the configuration of 192.168.1.203 (the virtual machine here) is complete. Now back up the master's (192.168.1.203) image and import it into the other two hosts (virtual machines here: VirtualBox 5.1.30).

9: Import the Master's Backup Image into the Other Two Hosts (203 is the master)

9.1: On 204 and 205, change the IP address and hostname: 192.168.1.204 (hadoop-slave01), 192.168.1.205 (hadoop-slave02); then reboot.
9.2: Ping each other's IPs and hostnames to verify connectivity (for example, from 192.168.1.203 (hadoop-master)):
$ ping 192.168.1.204
$ ping 192.168.1.205
$ ping hadoop-slave01
$ ping hadoop-slave02
9.3: Distribute and authorize the SSH public key across the cluster:
9.3.1: Each of the three hosts can already ssh to its own localhost; the master must additionally be able to log in to slave01 and slave02 without a password. On the master, send the master's id_rsa.pub to the two slave hosts:
$ scp ~/.ssh/id_rsa.pub hadoop@hadoop-slave01:/hadoop/
$ scp ~/.ssh/id_rsa.pub hadoop@hadoop-slave02:/hadoop/
9.3.2: Then, on hadoop-slave01 and hadoop-slave02, append hadoop-master's public key to each host's authorized_keys and remove the copy:
$ cat /hadoop/id_rsa.pub >> ~/.ssh/authorized_keys
$ rm /hadoop/id_rsa.pub
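
Alternatively, the scp/cat/rm sequence can be collapsed into a single command per slave using ssh-copy-id, the standard OpenSSH helper that appends the local public key to the remote authorized_keys (run on hadoop-master):

$ ssh-copy-id hadoop@hadoop-slave01
$ ssh-copy-id hadoop@hadoop-slave02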
9.3.3: Test passwordless ssh from the master to the slaves (run on hadoop-master):
$ ssh hadoop-slave01
$ ssh hadoop-slave02
Note: if the master and slave hosts use different usernames, also edit ~/.ssh/config on the master (create the file if it does not exist; not needed here):
Host hadoop-master
  User hadoop1
Host hadoop-slave01
  User hadoop2

10: Format the Distributed Filesystem and Start the Hadoop Cluster (run on hadoop-master):

$ /hadoop/bin/hadoop-2.9.0/bin/hdfs namenode -format
$ /hadoop/bin/hadoop-2.9.0/sbin/start-all.sh
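
Note: start-all.sh is deprecated in this release (its own output says so; see Appendix 2). The equivalent two-step form is:

$ /hadoop/bin/hadoop-2.9.0/sbin/start-dfs.sh
$ /hadoop/bin/hadoop-2.9.0/sbin/start-yarn.sh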

11: Stop the Hadoop Cluster

$ /hadoop/bin/hadoop-2.9.0/sbin/stop-all.sh

12: Tests After Startup

12.1: Java processes: run jps on hadoop-master, hadoop-slave01, and hadoop-slave02 (it lists the PIDs of all current Java processes)
root@hadoop-master:/home/server# jps
1346 NameNode
1561 SecondaryNameNode
2009 Jps
1738 ResourceManager
root@hadoop-slave01:/home/server# jps
1718 Jps
1435 DataNode
1551 NodeManager
root@hadoop-slave02:/home/server# jps
1651 Jps
1371 DataNode
1487 NodeManager
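
To see the same cluster state from HDFS's side, hdfs dfsadmin -report lists the live DataNodes with their capacity and usage (a standard command; run on the master, where two live DataNodes are expected here):

$ hdfs dfsadmin -report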
12.2: Listening ports:
root@hadoop-master:/home/server# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 1346/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 801/sshd
tcp 0 0 192.168.1.203:9000 0.0.0.0:* LISTEN 1346/java
tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN 1561/java
tcp6 0 0 :::22 :::* LISTEN 801/sshd
tcp6 0 0 192.168.1.203:8088 :::* LISTEN 1738/java
tcp6 0 0 192.168.1.203:8030 :::* LISTEN 1738/java
tcp6 0 0 192.168.1.203:8031 :::* LISTEN 1738/java
tcp6 0 0 192.168.1.203:8032 :::* LISTEN 1738/java
tcp6 0 0 192.168.1.203:8033 :::* LISTEN 1738/java
root@hadoop-slave01:/home/server# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 823/sshd
tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN 1435/java
tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN 1435/java
tcp 0 0 127.0.0.1:36035 0.0.0.0:* LISTEN 1435/java
tcp 0 0 0.0.0.0:50020 0.0.0.0:* LISTEN 1435/java
tcp6 0 0 :::32979 :::* LISTEN 1551/java
tcp6 0 0 :::22 :::* LISTEN 823/sshd
tcp6 0 0 :::13562 :::* LISTEN 1551/java
tcp6 0 0 :::8040 :::* LISTEN 1551/java
tcp6 0 0 :::8042 :::* LISTEN 1551/java
root@hadoop-slave02:/home/server# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 791/sshd
tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN 1371/java
tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN 1371/java
tcp 0 0 0.0.0.0:50020 0.0.0.0:* LISTEN 1371/java
tcp 0 0 127.0.0.1:37261 0.0.0.0:* LISTEN 1371/java
tcp6 0 0 :::22 :::* LISTEN 791/sshd
tcp6 0 0 :::13562 :::* LISTEN 1487/java
tcp6 0 0 :::8040 :::* LISTEN 1487/java
tcp6 0 0 :::8042 :::* LISTEN 1487/java
tcp6 0 0 :::41422 :::* LISTEN 1487/java
12.3: Web UI access: view NameNode and DataNode information, and browse the files in HDFS online:
http://192.168.1.203:50070
http://192.168.1.204:50075
http://192.168.1.205:50075
http://hadoop-master:50070
http://hadoop-slave01:50075
http://hadoop-slave02:50075
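
If no browser is at hand, the same endpoints can be probed from the shell (assuming curl is installed; an HTTP 200 response means the web UI is up):

$ curl -I http://hadoop-master:50070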

Note: on Windows, edit this file instead so that the hostnames resolve in the browser:

C:\Windows\System32\drivers\etc\hosts
192.168.1.203 hadoop-master
192.168.1.204 hadoop-slave01
192.168.1.205 hadoop-slave02

13: Simple File Operations in HDFS

13.1: Check whether HDFS contains any files or directories; the first listing is empty
$ hadoop fs -ls /
13.2: Create a test1 directory in HDFS
$ hadoop fs -mkdir /test1
$ hadoop fs -ls /
Found 1 items
drwxr-xr-x - root supergroup 0 2017-11-19 18:45 /test1
13.3: Change the owner of the test1 directory in HDFS
$ hadoop fs -chown -R hadoop:hadoop /test1
$ hadoop fs -ls /
Found 1 items
drwxr-xr-x - hadoop hadoop 0 2017-11-19 18:45 /test1
13.4: Change the permissions of the test1 directory in HDFS
$ hadoop fs -chmod 777 /test1
$ hadoop fs -ls /
Found 1 items
drwxrwxrwx - hadoop hadoop 0 2017-11-19 18:45 /test1
13.5: Upload (put) into and download (get) from the test1 directory in HDFS
$ touch 11111.txt
$ vim 11111.txt
$ hadoop fs -put 11111.txt /test1
$ hadoop fs -ls /test1
Found 1 items
-rw-r--r-- 3 root hadoop 1421 2017-11-19 19:06 /test1/11111.txt
$ hadoop fs -get /test1/11111.txt /home
$ ls /home/
11111.txt server
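
A file's contents can also be read directly from HDFS without downloading it first:

$ hadoop fs -cat /test1/11111.txt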
13.6: In the web UI, under the Utilities menu, Browse the file system lets you inspect the files uploaded to (and downloadable from) HDFS.
$ hadoop fs -ls /test1
Found 2 items
-rw-r--r-- 3 root hadoop 1421 2017-11-19 19:06 /test1/11111.txt
-rwxr-xr-x 3 dr.who hadoop 1216816138 2017-11-19 19:12 /test1/卑鄙的我3_神偷奶爸.mp4
$ hadoop fs -du -h /test1
1.4 K /test1/11111.txt
1.1 G /test1/卑鄙的我3_神偷奶爸.mp4
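
When the test data is no longer needed, it can be removed recursively with -r (add -skipTrash to bypass the trash if it is enabled):

$ hadoop fs -rm -r /test1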

Appendix 1: Formatting the distributed filesystem: $ /hadoop/bin/hadoop-2.9.0/bin/hdfs namenode -format

17/11/19 18:27:17 INFO namenode.NameNode: STARTUP_MSG:
  2. /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = hadoop-master/192.168.1.203 STARTUP_MSG: args = [-format] STARTUP_MSG: version = 2.9.0 STARTUP_MSG: classpath = /hadoop/bin/hadoop-2.9.0/etc/hadoop:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/nimbus-jose-jwt-3.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/zookeeper-3.4.6.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jersey-json-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/servlet-api-2.5.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jersey-core-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/paranamer-2.3.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jersey-server-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/log4j-1.2.17.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-cli-1.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/httpclient-4.5.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/json-smart-1.1.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jsch-0.1.54.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jets3t-0.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jsr305-3.0.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jsp-api-2.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/hadoop-auth-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-lang3-3.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jcip-annotations-1.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/hadoop-annotations-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/gson-2.2.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/guava-11.0.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/curator-framework-2.7.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-lang-2.6.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/junit-4.11.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jettison-1.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/curator-client-2.7.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/xmlenc-0.52.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/stax-api-1.0-2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/httpcore-4.4.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/xz-1.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-digester-1.8.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/avro-1.7.7.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib
/commons-io-2.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/asm-3.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-codec-1.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-net-3.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/snappy-java-1.0.5.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jetty-6.1.26.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/stax2-api-3.1.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/activation-1.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/hamcrest-core-1.3.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/hadoop-nfs-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/hadoop-common-2.9.0-tests.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/common/hadoop-common-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/okhttp-2.4.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/okio-1.4.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/commons-io-2.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/asm-3.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/hadoop/bin/hadoop-2.9.0/share
/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/hadoop-hdfs-native-client-2.9.0-tests.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/hadoop-hdfs-2.9.0-tests.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/hadoop-hdfs-client-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/hadoop-hdfs-client-2.9.0-tests.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/hadoop-hdfs-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/hdfs/hadoop-hdfs-native-client-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/nimbus-jose-jwt-3.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jersey-json-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/servlet-api-2.5.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-cli-1.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/java-util-1.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/httpclient-4.5.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/json-smart-1.1.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jsch-0.1.54.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/api-asn1-api-1.0.0-M20.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jets3t-0.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/api-util-1.0.0-M20.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jsp-api-2.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/fst-2.50.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-math-2.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jersey-client-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-lang3-3.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-beanutils-1.7.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jcip-annotations-1.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/gson-2.2.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/guava-11.0.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/curator-framework-2
.7.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jettison-1.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/xmlenc-0.52.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/curator-recipes-2.7.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/httpcore-4.4.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/xz-1.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-digester-1.8.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/javax.inject-1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/avro-1.7.7.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/json-io-2.5.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-io-2.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/asm-3.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/metrics-core-3.0.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/java-xmlbuilder-0.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/htrace-core4-4.1.0-incubating.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-codec-1.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jetty-sslengine-6.1.26.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-net-3.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-configuration-1.6.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/apacheds-i18n-2.0.0-M15.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/snappy-java-1.0.5.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jetty-6.1.26.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/guice-3.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/stax2-api-3.1.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/activation-1.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-beanutils-core-1.8.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/lib/woodstox-core-5.0.3.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-client-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-common-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-server-router-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-registry-2.9
.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-api-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-server-common-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/junit-4.11.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/avro-1.7.7.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/snappy-java-1.0.5.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.9.0-tests.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.9.0.jar:/hadoop/bin/hadoop-2.9.0/contrib/capacity-scheduler/*.jar STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 756ebc8394e473ac25feac05fa493f6d612e6c50; compiled by 'arsuresh' on 2017-11-13T23:15Z STARTUP_MSG: java = 1.8.0_121 
************************************************************/
17/11/19 18:27:17 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/11/19 18:27:17 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-30e36383-87f2-48cc-bdfb-34ed5b983ee5
17/11/19 18:27:20 INFO namenode.FSEditLog: Edit logging is async:true
17/11/19 18:27:20 INFO namenode.FSNamesystem: KeyProvider: null
17/11/19 18:27:20 INFO namenode.FSNamesystem: fsLock is fair: true
17/11/19 18:27:20 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
17/11/19 18:27:20 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
17/11/19 18:27:20 INFO namenode.FSNamesystem: supergroup = supergroup
17/11/19 18:27:20 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/11/19 18:27:20 INFO namenode.FSNamesystem: HA Enabled: false
17/11/19 18:27:20 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
17/11/19 18:27:21 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
17/11/19 18:27:21 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/11/19 18:27:21 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/11/19 18:27:21 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Nov 19 18:27:21
17/11/19 18:27:21 INFO util.GSet: Computing capacity for map BlocksMap
17/11/19 18:27:21 INFO util.GSet: VM type = 64-bit
17/11/19 18:27:21 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/11/19 18:27:21 INFO util.GSet: capacity = 2^21 = 2097152 entries
17/11/19 18:27:21 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/11/19 18:27:21 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
17/11/19 18:27:21 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/11/19 18:27:21 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
17/11/19 18:27:21 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
17/11/19 18:27:21 INFO blockmanagement.BlockManager: defaultReplication = 3
17/11/19 18:27:21 INFO blockmanagement.BlockManager: maxReplication = 512
17/11/19 18:27:21 INFO blockmanagement.BlockManager: minReplication = 1
17/11/19 18:27:21 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
17/11/19 18:27:21 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/11/19 18:27:21 INFO blockmanagement.BlockManager: encryptDataTransfer = false
17/11/19 18:27:21 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
17/11/19 18:27:21 INFO namenode.FSNamesystem: Append Enabled: true
17/11/19 18:27:22 INFO util.GSet: Computing capacity for map INodeMap
17/11/19 18:27:22 INFO util.GSet: VM type = 64-bit
17/11/19 18:27:22 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/11/19 18:27:22 INFO util.GSet: capacity = 2^20 = 1048576 entries
17/11/19 18:27:22 INFO namenode.FSDirectory: ACLs enabled? false
17/11/19 18:27:22 INFO namenode.FSDirectory: XAttrs enabled? true
17/11/19 18:27:22 INFO namenode.NameNode: Caching file names occurring more than 10 times
17/11/19 18:27:22 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false
17/11/19 18:27:22 INFO util.GSet: Computing capacity for map cachedBlocks
17/11/19 18:27:22 INFO util.GSet: VM type = 64-bit
17/11/19 18:27:22 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/11/19 18:27:22 INFO util.GSet: capacity = 2^18 = 262144 entries
17/11/19 18:27:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/11/19 18:27:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/11/19 18:27:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/11/19 18:27:22 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/11/19 18:27:22 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/11/19 18:27:22 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/11/19 18:27:22 INFO util.GSet: VM type = 64-bit
17/11/19 18:27:22 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/11/19 18:27:22 INFO util.GSet: capacity = 2^15 = 32768 entries
17/11/19 18:27:22 INFO namenode.FSImage: Allocated new BlockPoolId: BP-665037173-192.168.1.203-1511087242885
17/11/19 18:27:23 INFO common.Storage: Storage directory /hadoop/dfs/name has been successfully formatted.
17/11/19 18:27:23 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/11/19 18:27:23 INFO namenode.FSImageFormatProtobuf: Image file /hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
17/11/19 18:27:23 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/19 18:27:23 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-master/192.168.1.203
************************************************************/

Appendix 2: Starting the Hadoop cluster: $ /hadoop/bin/hadoop-2.9.0/sbin/start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop-master]
The authenticity of host 'hadoop-master (192.168.1.203)' can't be established.
ECDSA key fingerprint is 1b:49:c4:67:c0:74:d4:aa:af:d4:54:ba:26:bc:2a:bd.
Are you sure you want to continue connecting (yes/no)? yes
hadoop-master: Warning: Permanently added 'hadoop-master,192.168.1.203' (ECDSA) to the list of known hosts.
hadoop-master: starting namenode, logging to /hadoop/bin/hadoop-2.9.0/logs/hadoop-root-namenode-hadoop-master.out
hadoop-slave01: starting datanode, logging to /hadoop/bin/hadoop-2.9.0/logs/hadoop-root-datanode-hadoop-slave01.out
hadoop-slave02: starting datanode, logging to /hadoop/bin/hadoop-2.9.0/logs/hadoop-root-datanode-hadoop-slave02.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is 1b:49:c4:67:c0:74:d4:aa:af:d4:54:ba:26:bc:2a:bd.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /hadoop/bin/hadoop-2.9.0/logs/hadoop-root-secondarynamenode-hadoop-master.out
starting yarn daemons
starting resourcemanager, logging to /hadoop/bin/hadoop-2.9.0/logs/yarn-root-resourcemanager-hadoop-master.out
hadoop-slave01: starting nodemanager, logging to /hadoop/bin/hadoop-2.9.0/logs/yarn-root-nodemanager-hadoop-slave01.out
hadoop-slave02: starting nodemanager, logging to /hadoop/bin/hadoop-2.9.0/logs/yarn-root-nodemanager-hadoop-slave02.out

Appendix 3: Stopping the Hadoop cluster: $ /hadoop/bin/hadoop-2.9.0/sbin/stop-all.sh

This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [hadoop-master]
hadoop-master: stopping namenode
hadoop-slave01: stopping datanode
hadoop-slave02: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
hadoop-slave02: stopping nodemanager
hadoop-slave01: stopping nodemanager
hadoop-slave02: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
hadoop-slave01: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop

Appendix 4: NameNode log during the web file upload: $ tail -f /hadoop/bin/hadoop-2.9.0/logs/hadoop-root-namenode-hadoop-master.log

2017-11-19 19:06:13,169 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741825_1001, replicas=192.168.1.205:50010, 192.168.1.204:50010 for /test1/11111.txt._COPYING_
2017-11-19 19:06:14,076 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /test1/11111.txt._COPYING_
2017-11-19 19:06:14,487 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /test1/11111.txt._COPYING_ is closed by DFSClient_NONMAPREDUCE_-1894088392_1
2017-11-19 19:10:30,011 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 11 Total time for transactions(ms): 71 Number of transactions batched in Syncs: 1 Number of syncs: 10 SyncTimes(ms): 147
2017-11-19 19:10:30,158 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741826_1002, 192.168.1.204:50010 for /test1/卑鄙的我3_神偷奶爸.mp4
2017-11-19 19:10:46,787 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741827_1003, 192.168.1.204:50010 for /test1/卑鄙的我3_神偷奶爸.mp4
2017-11-19 19:11:00,641 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741828_1004, 192.168.1.204:50010 for /test1/卑鄙的我3_神偷奶爸.mp4
2017-11-19 19:11:13,782 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741829_1005, 192.168.1.204:50010 for /test1/卑鄙的我3_神偷奶爸.mp4
2017-11-19 19:11:27,341 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741830_1006, 192.168.1.204:50010 for /test1/卑鄙的我3_神偷奶爸.mp4
2017-11-19 19:11:40,664 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 27 Total time for transactions(ms): 73 Number of transactions batched in Syncs: 8 Number of syncs: 19 SyncTimes(ms): 283
2017-11-19 19:11:40,666 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741831_1007, 192.168.1.204:50010 for /test1/卑鄙的我3_神偷奶爸.mp4
2017-11-19 19:11:56,154 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741832_1008, 192.168.1.204:50010 for /test1/卑鄙的我3_神偷奶爸.mp4
2017-11-19 19:12:11,636 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741833_1009, 192.168.1.204:50010 for /test1/卑鄙的我3_神偷奶爸.mp4
2017-11-19 19:12:25,754 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741834_1010, 192.168.1.204:50010 for /test1/卑鄙的我3_神偷奶爸.mp4
2017-11-19 19:12:40,719 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 39 Total time for transactions(ms): 74 Number of transactions batched in Syncs: 12 Number of syncs: 27 SyncTimes(ms): 422
2017-11-19 19:12:40,722 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741835_1011, 192.168.1.204:50010 for /test1/卑鄙的我3_神偷奶爸.mp4
2017-11-19 19:12:41,380 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /test1/卑鄙的我3_神偷奶爸.mp4 is closed by DFSClient_NONMAPREDUCE_-934323569_29

Appendix 5: Help output: $ hadoop --help

Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  CLASSNAME            run the class named CLASSNAME
 or
  where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch
                       YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings

Most commands print help when invoked w/o parameters.

Appendix 6: Help output: $ hadoop fs --help

--help: Unknown command
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] <path> ...]
        [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] [-x] <path> ...]
        [-expunge]
        [-find <path> ... <expression> ...]
        [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-truncate [-w] <length> <path> ...]
        [-usage [cmd ...]]

Generic options supported are:
-conf <configuration file> specify an application configuration file
-D <property=value> define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port> specify a ResourceManager
-files <file1,...> specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...> specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...> specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

Appendix 7: Screenshots (not included)