Installing Hadoop 2.7.3 in Fully Distributed Mode on Ubuntu 16

Step 1:

Install Java. I won't cover installing from the package downloaded off Oracle's site (there are plenty of guides online); here we install from a PPA instead.

1. Add the PPA:

sudo add-apt-repository ppa:webupd8team/java

sudo apt-get update
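
On a minimal Ubuntu 16 install, add-apt-repository may not be present; it comes from the software-properties-common package:

sudo apt-get install software-properties-common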

2. Install oracle-java8-installer:

sudo apt-get install oracle-java8-installer
The installer will ask you to accept Oracle's license terms: choose OK, then choose Yes.

3. Set the system default JDK:

sudo update-java-alternatives -s java-8-oracle

4. Verify that the JDK installed successfully:

java -version

javac -version
Step 2:

Change the hostname:

sudo vim /etc/hostname
Every node needs this change; reboot afterwards.
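
For example, /etc/hostname on the master node would contain just:

master

and slave1, slave2 on the other two nodes respectively.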

Configure the hosts file

Fill in each node's IP address and its hostname mapping.
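
For example, assuming the three nodes sit at these (hypothetical) addresses, every node's /etc/hosts would contain:

192.168.1.100   master
192.168.1.101   slave1
192.168.1.102   slave2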

Create a hadoop user

sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop
Grant the user sudo privileges:

sudo usermod -aG <sudo group> <username>

Example: sudo usermod -aG sudo hadoop

Here -a means append (to the user's existing groups) and -G specifies the group to add.

Step 3:

Passwordless SSH login

ssh-keygen -t rsa
cd ~/.ssh
cp id_rsa.pub authorized_keys
Run the above on every node.

Then copy the master's authorized_keys to each of the other nodes:

scp /home/hadoop/.ssh/authorized_keys hadoop@slave1:~/.ssh/
scp /home/hadoop/.ssh/authorized_keys hadoop@slave2:~/.ssh/
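
A note on the scp approach: it overwrites whatever authorized_keys already exists on the slave. ssh-copy-id appends instead, so an alternative is to run, from each node:

ssh-copy-id hadoop@master
ssh-copy-id hadoop@slave1
ssh-copy-id hadoop@slave2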
Test (each login should now succeed without a password prompt):

ssh master 
ssh slave1
ssh slave2
Step 4:

Configure the Hadoop files

My Hadoop installation directory:

/home/hadoop/hadoop273
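
Getting it there isn't shown in this article; a sketch, assuming hadoop-2.7.3.tar.gz from the Apache mirrors is already in the current directory:

tar -xzf hadoop-2.7.3.tar.gz -C /home/hadoop
mv /home/hadoop/hadoop-2.7.3 /home/hadoop/hadoop273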
Create the working directories:

mkdir /home/hadoop/tmp
mkdir /home/hadoop/dfs
mkdir /home/hadoop/dfs/name
mkdir /home/hadoop/dfs/data
Set the Java directory (JAVA_HOME) in hadoop-env.sh and yarn-env.sh.
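
The PPA installer above puts the JDK in /usr/lib/jvm/java-8-oracle, so the line to set in both files should be:

export JAVA_HOME=/usr/lib/jvm/java-8-oracle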

Configure slaves:

slave1
slave2
Configure core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
        <description>Abase for other temporary   directories.</description>
    </property>
</configuration>
Configure hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
Configure mapred-site.xml:
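
Note that the Hadoop 2.7.3 distribution ships only a template for this file, so create it first:

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml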

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
    <!-- mapred.job.tracker is a Hadoop 1.x (JobTracker) property and is
         ignored when mapreduce.framework.name is yarn, so it is omitted here. -->
</configuration>
Step 5:

Format HDFS:

bin/hdfs namenode -format
Start Hadoop:

./sbin/start-all.sh
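
start-all.sh still works in 2.x but prints a deprecation warning; the recommended equivalent is to start HDFS and YARN separately:

./sbin/start-dfs.sh
./sbin/start-yarn.sh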
Check that everything came up:
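
Run jps on each machine: the master should show NameNode, SecondaryNameNode, and ResourceManager, and each slave should show DataNode and NodeManager.

jps

The web UIs are another quick check: the NameNode at http://master:50070 and the ResourceManager at http://master:8088.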

Common problems:

While setting up passwordless login, ssh may fail with "Host key verification failed." The fix is to add the following to /etc/ssh/ssh_config:

StrictHostKeyChecking no
UserKnownHostsFile /dev/null

Fixing the warning "Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hadoop/2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now":

vim ~/.bash_profile
# add the following two lines to the file:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
source ~/.bash_profile
Adding the two export lines above to ~/.bash_profile and sourcing it resolves the warning.
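
Note that both lines reference HADOOP_HOME, so it needs to be exported as well; given the install location used earlier, that would be:

export HADOOP_HOME=/home/hadoop/hadoop273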


Fixing the warning "WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable":

This warning appears when running hadoop namenode -format. It goes away after changing HADOOP_OPTS in hadoop-env.sh:

export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib:$HADOOP_PREFIX/lib/native"
