Create the virtual machine in VMware
Create a virtual machine and name it master. There are plenty of tutorials online, so I won't go into the details.
Configure the Java environment
Again, there are plenty of tutorials online.
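For reference, the usual approach is to install a JDK and point JAVA_HOME at it in /etc/profile; a minimal sketch, assuming the JDK was unpacked to /usr/java/jdk1.8.0_101 (the path is only an example, adjust it to your own install):
export JAVA_HOME=/usr/java/jdk1.8.0_101
export PATH=$JAVA_HOME/bin:$PATH
Run source /etc/profile afterwards and verify with java -version.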
Clone the virtual machines
First edit /etc/hosts on master and add the following entries:
192.168.197.132 master-01
192.168.197.133 slave-01
192.168.197.134 slave-02
Then clone master twice and name the clones slave1 and slave2.
We now have three virtual machines:
IP                VM name   User
192.168.197.132   master    yang
192.168.197.133   slave1    yang
192.168.197.134   slave2    yang
Passwordless SSH login
(1) CentOS does not enable passwordless SSH login by default. Uncomment the following two lines in /etc/ssh/sshd_config; this has to be done on every server:
#RSAAuthentication yes
#PubkeyAuthentication yes
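After editing sshd_config, the sshd service has to be restarted for the change to take effect; on CentOS 7, for example:
systemctl restart sshd
(on CentOS 6 it would be service sshd restart)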
Set up SSH keys
On master-01, as the yang user, go to the ~/.ssh directory.
Generate the key pair with ssh-keygen -t rsa (keep pressing Enter; do not set a passphrase).
Copy the public key into the .ssh directory under yang's home directory on every machine you want to reach:
scp ~/.ssh/id_rsa.pub yang@master-01:/home/yang/.ssh/authorized_keys
scp ~/.ssh/id_rsa.pub yang@slave-01:/home/yang/.ssh/authorized_keys
scp ~/.ssh/id_rsa.pub yang@slave-02:/home/yang/.ssh/authorized_keys
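If the password-free login below still prompts for a password, the permissions on ~/.ssh are the usual culprit, since sshd ignores an authorized_keys file that is group- or world-writable. A common fix, run as yang on each machine:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys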
Verify that you can log in without a password:
ssh localhost
ssh yang@master-01
ssh yang@slave-01
ssh yang@slave-02
Here only master-01 acts as the master. If you have multiple NameNodes or ResourceManagers, you need passwordless login from every master to all of the remaining nodes (append master-01's authorized_keys to the authorized_keys on slave-01 and slave-02, as sketched below).
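A sketch of that append step, run from master-01 (hostnames as above; this assumes you can already log in to the slaves as yang):
cat ~/.ssh/authorized_keys | ssh yang@slave-01 'cat >> ~/.ssh/authorized_keys'
cat ~/.ssh/authorized_keys | ssh yang@slave-02 'cat >> ~/.ssh/authorized_keys'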
Reference: http://www.jb51.cc/article/p-ziippdps-ku.html
Install and configure Hadoop 2.7.3
Download Hadoop 2.7.3
Download Hadoop 2.7.3 and extract it to /usr/software, then create the hdfs, hdfs/data, hdfs/name, and hdfs/temp directories inside hadoop-2.7.3.
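For example, assuming the tarball was downloaded to the current directory:
tar -zxvf hadoop-2.7.3.tar.gz -C /usr/software
cd /usr/software/hadoop-2.7.3
mkdir -p hdfs/data hdfs/name hdfs/temp
mkdir -p tmp    # core-site.xml below points hadoop.tmp.dir at this directory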
Configure core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master-01:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/software/hadoop-2.7.3/tmp</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131702</value>
</property>
</configuration>
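These notes do not show an hdfs-site.xml, but the hdfs/name and hdfs/data directories created earlier are normally wired in there; a minimal sketch under that assumption (a replication factor of 2 matches the two datanodes in this cluster):
<configuration>
<!-- sketch only; paths match the directories created under /usr/software/hadoop-2.7.3 -->
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/software/hadoop-2.7.3/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/software/hadoop-2.7.3/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>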
Configure mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master-01:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master-01:19888</value>
</property>
</configuration>
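One note: in the stock 2.7.3 tarball this file usually exists only as mapred-site.xml.template, so it may need to be created first:
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml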
Configure yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master-01:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master-01:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master-01:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master-01:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master-01:8088</value>
</property>
<!-- <property> <name>yarn.nodemanager.resource.memory-mb</name> <value>768</value> </property>-->
</configuration>
Set up slaves
Edit the slaves file under hadoop-2.7.3/etc/hadoop and add the two slaves we created earlier:
slave-01
slave-02
Many guides online say you also need to set the Java environment in hadoop-env.sh and yarn-env.sh. I checked both files and they already picked up my environment, so I left them alone.
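That said, if start-dfs.sh later complains that JAVA_HOME is not set, the usual workaround is to hard-code it in etc/hadoop/hadoop-env.sh (the path below is only a placeholder for your own JDK location):
export JAVA_HOME=/usr/java/jdk1.8.0_101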
Configuration complete
Now copy the fully configured hadoop-2.7.3 directory from master to /usr/software on yang@slave-01 and yang@slave-02.
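One way to do that copy from master, assuming yang can write to /usr/software on the slaves:
scp -r /usr/software/hadoop-2.7.3 yang@slave-01:/usr/software/
scp -r /usr/software/hadoop-2.7.3 yang@slave-02:/usr/software/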
Start the cluster
Start Hadoop on the master server; the slave nodes will be brought up automatically. Go to the /usr/software/hadoop-2.7.3 directory.
(1) Initialize the NameNode: bin/hdfs namenode -format
(2) Run sbin/start-dfs.sh; if it prints output like the following, it worked:
Starting namenodes on [master-01]
master-01: starting namenode, logging to /usr/software/hadoop-2.7.3/logs/hadoop-yang-namenode-master-01.out
slave-01: starting datanode, logging to /usr/software/hadoop-2.7.3/logs/hadoop-yang-datanode-slave-01.out
slave-02: starting datanode, logging to /usr/software/hadoop-2.7.3/logs/hadoop-yang-datanode-slave-02.out
Starting secondary namenodes [master-01]
master-01: starting secondarynamenode, logging to /usr/software/hadoop-2.7.3/logs/hadoop-yang-secondarynamenode-master-01.out
(3) Run sbin/start-yarn.sh; output like this means success:
[yang@master-01 hadoop-2.7.3]$ ./sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/software/hadoop-2.7.3/logs/yarn-yang-resourcemanager-master-01.out
slave-02: starting nodemanager, logging to /usr/software/hadoop-2.7.3/logs/yarn-yang-nodemanager-slave-02.out
slave-01: starting nodemanager, logging to /usr/software/hadoop-2.7.3/logs/yarn-yang-nodemanager-slave-01.out
(4) To stop everything, run sbin/stop-dfs.sh and sbin/stop-yarn.sh.
(5) Run jps to see the running daemons:
[yang@master-01 hadoop-2.7.3]$ jps
6932 SecondaryNameNode
7384 Jps
6729 NameNode
7118 ResourceManager
[yang@master-01 hadoop-2.7.3]$ ./bin/hdfs dfsadmin -report
Configured Capacity: 75404550144 (70.23 GB)
Present Capacity: 54191501312 (50.47 GB)
DFS Remaining: 54191452160 (50.47 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (2):

Name: 192.168.197.133:50010 (slave-01)
Hostname: slave-01
Decommission Status : Normal
Configured Capacity: 37702275072 (35.11 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 10606755840 (9.88 GB)
DFS Remaining: 27095494656 (25.23 GB)
DFS Used%: 0.00%
DFS Remaining%: 71.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Sep 27 17:18:44 CST 2016

Name: 192.168.197.134:50010 (slave-02)
Hostname: slave-02
Decommission Status : Normal
Configured Capacity: 37702275072 (35.11 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 10606292992 (9.88 GB)
DFS Remaining: 27095957504 (25.24 GB)
DFS Used%: 0.00%
DFS Remaining%: 71.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Sep 27 17:18:44 CST 2016
Web access: turn off the firewall first
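How exactly depends on the CentOS version; on CentOS 7 with firewalld, for example:
systemctl stop firewalld
systemctl disable firewalld
(on CentOS 6 it would be service iptables stop instead)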
(1) Open http://192.168.197.132:8088/ in a browser.
(2) Open http://192.168.197.132:50070/ in a browser.
It shows information like the following:
Configured Capacity: 35.11 GB
DFS Used: 28 KB (0%)
Non DFS Used: 9.88 GB
DFS Remaining: 25.23 GB (71.87%)
Block Pool Used: 28 KB (0%)
Datanodes usages% (Min/Median/Max/stdDev): 0.00% / 0.00% / 0.00% / 0.00%
Live Nodes 1 (Decommissioned: 0)
Dead Nodes 1 (Decommissioned: 0)
Decommissioning Nodes 0
Total Datanode Volume Failures 0 (0 B)
Number of Under-Replicated Blocks 0
Number of Blocks Pending Deletion 0
Block Deletion Start Time 9/27/2016, 5:15:33 PM
Summary
When I first tried to start the cluster I kept hitting permission errors. It turned out that my earlier steps had all been done as root, so the hadoop-2.7.3 directory was owned by the root group and the yang user had no permissions on it at all. Once the cause was found the fix was simple: change the ownership to yang with chown. Various other problems came up during configuration as well; I solved them with the logs and what I could find online, so I won't list them one by one. If you follow this setup and still run into issues, Baidu and Google are your friends.
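For the record, the ownership fix was essentially this, run as root against the install path used throughout this guide:
chown -R yang:yang /usr/software/hadoop-2.7.3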