A Detailed Guide to Building a Hadoop Cluster on Ubuntu

Tools used: VMware, hadoop-2.7.2.tar.gz, jdk-8u65-linux-x64.tar.gz, ubuntu-16.04-desktop-amd64.iso

1. Install ubuntu-16.04-desktop-amd64.iso in VMware

Click "Create a New Virtual Machine" → choose "Typical (recommended)" → click "Next":
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090120_0_317.png[/img]
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090120_1_13043.png[/img]
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090120_2_51958.png[/img]
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090121_3_32893.png[/img]
→ click "Finish".
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090121_4_82932.png[/img]

Edit /etc/hostname, set the machine name, then save and exit:
$>vim /etc/hostname
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090122_5_74871.png[/img]

Edit /etc/hosts:
127.0.0.1  localhost
192.168.1.100  s100
192.168.1.101  s101
192.168.1.102  s102
192.168.1.103  s103
192.168.1.104  s104
192.168.1.105  s105
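A quick sanity check that the new hostname and the hosts entries are in effect (getent ships with Ubuntu; it should echo the matching line back):
$>hostname
$>getent hosts s100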
[b]Configure NAT networking[/b]

Check the IP address and gateway on the Windows 10 host:
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090123_6_49589.png[/img]

Configure /etc/network/interfaces:
#interfaces(5) file used by ifup(8) and ifdown(8)
#The loopback network interface
auto lo
iface lo inet loopback

#iface eth0 inet static
iface eth0 inet static
address 192.168.1.105
netmask 255.255.255.0
gateway 192.168.1.2
dns-nameservers 192.168.1.2
auto eth0
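For the static address to take effect, bring the interface down and up again (this assumes the adapter really is named eth0; on some 16.04 installs it shows up as ens33 instead, in which case adjust the stanza above), or simply reboot the VM:
$>sudo ifdown eth0 && sudo ifup eth0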
You can also configure the network through the graphical interface:
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090123_7_31647.png[/img]
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090124_8_92651.png[/img]

Once configured, run ping www.baidu.com to check that the network is up. Once it is, to let the guest and the host ping each other by name, edit the host's C:\Windows\System32\drivers\etc\hosts file so it contains:
127.0.0.1    localhost
192.168.1.100 s100
192.168.1.101 s101
192.168.1.102 s102
192.168.1.103 s103
192.168.1.104 s104
192.168.1.105 s105
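With both hosts files in place, the guest and the Windows host should be able to reach each other by name (assuming the Windows firewall allows ICMP echo). The first command runs in a Windows Command Prompt; the second pings the NAT gateway from the guest:
C:\> ping s100
$>ping -c 3 192.168.1.2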
Switch the apt sources to the NetEase (163) mirror. The entries below use the Ubuntu 14.04 "trusty" codename; on 16.04, substitute xenial.
$>cd /etc/apt/

$>gedit sources.list
Be sure to back up sources.list before changing it.
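A one-line copy is enough as a backup, for example:
$>sudo cp sources.list sources.list.bak
Then replace the file's contents with the following: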
deb http://mirrors.163.com/ubuntu/ trusty main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ trusty-security main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ trusty-updates main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ trusty-proposed main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ trusty-backports main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ trusty main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ trusty-security main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ trusty-updates main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ trusty-proposed main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ trusty-backports main restricted universe multiverse
[b]Update the package index[/b]
$>sudo apt-get update
Create a soft directory under the root directory. Created this way it belongs to root, so change its ownership to the enmoedu user:
$>sudo mkdir /soft
$>sudo chown enmoedu:enmoedu /soft

Install VMware Tools (for the shared folder):
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090124_9_4883.png[/img]
Copy the VMware Tools archive to the desktop, right-click it, and choose "Extract Here":
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090125_10_63514.png[/img]
From the enmoedu user's home directory, change into the extracted folder: cd ~/Desktop/vmware-tools-distrib
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090125_11_98580.png[/img]
Run the ./vmware-install.pl script, pressing Enter to accept each default:
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090126_12_24157.png[/img]
Installation complete:
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090126_13_92689.png[/img]

Copy hadoop-2.7.2.tar.gz and jdk-8u65-linux-x64.tar.gz into ~/Downloads in the enmoedu home directory:
$> sudo cp hadoop-2.7.2.tar.gz jdk-8u65-linux-x64.tar.gz ~/Downloads/
Extract hadoop-2.7.2.tar.gz and jdk-8u65-linux-x64.tar.gz into the current directory:
$> tar -zxvf hadoop-2.7.2.tar.gz

$>tar -zxvf jdk-8u65-linux-x64.tar.gz

$>cp -r hadoop-2.7.2 /soft

$>cp -r jdk1.8.0_65/ /soft
Create symlinks inside /soft, so that /soft/hadoop and /soft/jdk resolve:
$>ln -s hadoop-2.7.2/ hadoop

$>ln -s jdk1.8.0_65/ jdk

$>ls -ll
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090126_14_11957.png[/img]

Configure the environment variables:
$>vim /etc/environment
JAVA_HOME=/soft/jdk
HADOOP_HOME=/soft/hadoop
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/soft/jdk/bin:/soft/hadoop/bin:/soft/hadoop/sbin"
Load the environment variables into the current shell:
$>source /etc/environment
Verify the installation:
$>java -version
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090127_15_24164.png[/img]
$>hadoop version
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090127_16_85622.png[/img]

Edit the configuration files under /soft/hadoop/etc/hadoop/:

[core-site.xml]
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://s100/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/enmoedu/hadoop</value>
  </property>
</configuration>
[hdfs-site.xml]
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>s104:50090</value>
    <description>
      The secondary namenode http server address and port.
    </description>
  </property>
</configuration>
[mapred-site.xml]
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
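One caveat: a stock Hadoop 2.7.2 distribution ships this file only as a template, so create it first:
$>cp /soft/hadoop/etc/hadoop/mapred-site.xml.template /soft/hadoop/etc/hadoop/mapred-site.xml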
[yarn-site.xml]
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>s100</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
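It is also worth setting JAVA_HOME explicitly in hadoop-env.sh, since the Hadoop daemons do not always inherit it from /etc/environment; a minimal addition, using the /soft/jdk link created earlier:
# in /soft/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/soft/jdk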
[b]Configure passwordless SSH login[/b]

Install ssh:
$>sudo apt-get install ssh
Generate a key pair (run in the enmoedu home directory):
$>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Append the public key to the authorized keys file:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090128_17_13563.png[/img]

Test the login to localhost:
$>ssh localhost
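Once the localhost test succeeds, the master's public key has to end up in ~/.ssh/authorized_keys on every node; ssh-copy-id does this in one step per host (a sketch, assuming the enmoedu account exists on each node):
$>ssh-copy-id enmoedu@s101
Repeat for s102 through s105, and run the same steps as root if root logins are needed.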
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090128_18_65454.png[/img]

Then verify from the master node that passwordless login to the other nodes works:
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090129_19_93365.png[/img]

Edit the slaves file [/soft/hadoop/etc/hadoop/slaves]:
s101
s102
s103
s105
The remaining machines can be produced by cloning this VM and then changing each clone's hostname and network configuration. Once the cluster is assembled, format the HDFS filesystem:
$>hadoop namenode -format
Start all the daemons:
$>start-all.sh
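To confirm the daemons actually came up, jps (bundled with the JDK) is a quick check; on the master you would typically expect NameNode and ResourceManager, and on each worker DataNode and NodeManager:
$>jps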
Final result:
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090129_20_63707.png[/img]

Custom script xsync (distributes a file to the same directory on every node in the cluster), saved as [/usr/local/bin/xsync]:
#!/bin/bash
# xsync: copy a file (or directory) to the same path on every node
pcount=$#
if (( pcount<1 ));then
  echo no args;
  exit;
fi
p1=$1;
fname=`basename $p1`
#echo $fname=$fname;

pdir=`cd -P $(dirname $p1) ; pwd`
#echo pdir=$pdir

cuser=`whoami`
for (( host=101;host<106;host=host+1 )); do
  echo ------------s$host----------------
  rsync -rvl $pdir/$fname $cuser@s$host:$pdir
done
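The script needs the executable bit before it can be run, for example:
$>sudo chmod +x /usr/local/bin/xsync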
Test it:
$>xsync hello.txt
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090130_21_85771.png[/img]
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090130_22_24714.png[/img]

Custom script xcall (runs the same command on every host), saved as [/usr/local/bin/xcall]:
#!/bin/bash
# xcall: run the given command locally, then on every node over ssh
pcount=$#
if (( pcount<1 ));then
  echo no args;
  exit;
fi
echo -----------localhost----------------
$@
for (( host=101;host<106;host=host+1 )); do
  echo ------------s$host-------------
  ssh s$host $@

done
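As with xsync, mark it executable first:
$>sudo chmod +x /usr/local/bin/xcall
A handy use is xcall jps to check the daemons cluster-wide, assuming the JDK's bin directory is on the PATH for non-interactive ssh sessions.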
Test it:
$>xcall rm -rf hello.txt
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090130_23_45012.png[/img]
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090131_24_87427.png[/img]

With the cluster built, run the following commands as a smoke test:
touch a.txt
gedit a.txt
hadoop fs -mkdir -p /user/enmoedu/data
hadoop fs -put a.txt /user/enmoedu/data
hadoop fs -lsr /
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090131_25_86992.png[/img]

You can also inspect the result in a browser (for Hadoop 2.x the NameNode web UI listens on port 50070 by default, e.g. http://s100:50070):
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090132_26_37847.png[/img]
[img]http://img.1sucai.cn/uploads/article/2018010709/20180107090136_27_45902.png[/img]

That is the whole walkthrough; hopefully it helps with your Hadoop studies.