Setting Up a Fully Distributed Hadoop Cluster

Version information:

Virtual machine: VMware 14
Operating system: Ubuntu 18.04
Hadoop: 3.1.0
Java: 1.8

Part 1: Server Setup

Since several servers are needed, I use VMware virtual machines: one master and two slaves.

I. Installing VMware: a cracked build and installation instructions can be found on Baidu Tieba, so this is not repeated here.

II. Installing and configuring Ubuntu: download the official Ubuntu 18.04 ISO, create three systems from it, and set each network adapter to NAT mode (screenshot in the original).

1. Give the three systems fixed IPs. In the Virtual Network Editor, pin the NAT network to a fixed subnet; the garbled screenshot text is consistent with subnet IP 192.168.3.0 and subnet mask 255.255.255.0.
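The document does not show how the fixed IP is applied inside each system. On Ubuntu 18.04 this is normally done with netplan; a minimal sketch for the master (the file name, gateway, and DNS entries are assumptions — VMware NAT usually places the gateway at .2 — while the interface name ens33 matches the ifconfig output in the next step):

# /etc/netplan/01-netcfg.yaml  (hypothetical file name)
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: no
      addresses: [192.168.3.10/24]   # use .4 on slave1 and .5 on slave2
      gateway4: 192.168.3.2          # assumed VMware NAT gateway
      nameservers:
        addresses: [192.168.3.2]     # assumed; any reachable DNS works

Apply the file with sudo netplan apply.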

2. Start the systems and check their network information with ifconfig. (The original shows terminal screenshots; the recoverable details are: master at 192.168.3.10, slave1 at 192.168.3.4, and slave2 at 192.168.3.5, all on interface ens33 with netmask 255.255.255.0 and broadcast 192.168.3.255.)

3. Set up passwordless SSH between the three machines. (The beginning of this step is garbled in the original; from what survives, it generates a key pair on each machine as the hadoop user and collects the slaves' public keys on the master as id_rsa.pub.slave1 and id_rsa.pub.slave2 — see the sketch below.)
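A plausible reconstruction of those steps, run as the hadoop user (the key file names come from the surviving commands; the /etc/hosts mapping is an assumption — the hostnames master, slave1, and slave2 must resolve to the fixed IPs somehow):

# on every machine: make the hostnames resolvable, e.g. append to /etc/hosts (assumed mapping)
#   192.168.3.10 master
#   192.168.3.4  slave1
#   192.168.3.5  slave2
# on every machine: generate a key pair with an empty passphrase
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# on slave1: send the public key to the master under a distinct name
scp ~/.ssh/id_rsa.pub hadoop@master:/home/hadoop/.ssh/id_rsa.pub.slave1
# on slave2: likewise
scp ~/.ssh/id_rsa.pub hadoop@master:/home/hadoop/.ssh/id_rsa.pub.slave2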

On the master, in ~/.ssh, append all three public keys (the first two lines are partly reconstructed from garbled text):

cat id_rsa.pub >> authorized_keys
cat id_rsa.pub.slave1 >> authorized_keys
cat id_rsa.pub.slave2 >> authorized_keys

Now authorized_keys holds the public keys of all three machines, so any machine can be reached without a password. All that remains is to copy authorized_keys to slave1 and slave2. On the master, run:

scp authorized_keys hadoop@slave1:/home/hadoop/.ssh/authorized_keys
scp authorized_keys hadoop@slave2:/home/hadoop/.ssh/authorized_keys

Finally, test it: run ssh slave1 on the master. If you are logged in directly instead of being prompted for a username and password, the configuration succeeded. Test passwordless SSH access to the other machines the same way.

Part 2: Install the JDK and Hadoop

Do this on all three systems; the files under the hadoop configuration directory can be set up on the master and then copied to the same path on the slaves.

I. Install Java. (The original shows a file-manager screenshot with jdk1.8.0_171; the JDK lives at /usr/local/jdk1.8.0_171.)

II. Extract the downloaded hadoop archive into the /opt directory.

Part 3: Configure Hadoop (on all three systems)

Log in as the hadoop user.

I. Configure the environment variables. Open the file with vi ~/.bashrc or vi /etc/profile (I configured both) and add the following:

export JAVA_HOME=/usr/local/jdk1.8.0_171
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH

#hadoop environment vars
export HADOOP_HOME=/opt/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
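After saving, reload the file and confirm that both toolchains are on the PATH (a quick check, not in the original):

source ~/.bashrc        # or: source /etc/profile
java -version           # should report 1.8.0_171
hadoop version          # should report Hadoop 3.1.0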

II. Hadoop configuration

cd /opt/hadoop/etc/hadoop

1. Edit hadoop-env.sh, adding the following line:

export JAVA_HOME=/usr/local/jdk1.8.0_171

2. Edit core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>
</configuration>

3. Edit hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>heartbeat.recheck.interval</name>
    <value>10</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>192.168.3.10:50070</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
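These properties point at local directories under /opt/hadoop. Hadoop normally creates them on format or startup, but creating them up front on every node avoids permission surprises — a sketch, not in the original:

mkdir -p /opt/hadoop/tmp /opt/hadoop/hdfs/name /opt/hadoop/hdfs/data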

4. Edit mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>/opt/hadoop/share/hadoop,/opt/hadoop/share/hadoop/common/*,/opt/hadoop/share/hadoop/common/lib/*,/opt/hadoop/share/hadoop/hdfs/*,/opt/hadoop/share/hadoop/hdfs/lib/*,/opt/hadoop/share/hadoop/mapreduce/*,/opt/hadoop/share/hadoop/mapreduce/lib/*,/opt/hadoop/share/hadoop/yarn/*,/opt/hadoop/share/hadoop/yarn/lib/*</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/hadoop</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/hadoop</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/hadoop</value>
  </property>
</configuration>

5. Edit yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
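As noted in Part 2, the finished configuration can simply be copied from the master to the slaves. A sketch, assuming the passwordless SSH from Part 1 and the same /opt/hadoop layout on every node:

scp /opt/hadoop/etc/hadoop/* hadoop@slave1:/opt/hadoop/etc/hadoop/
scp /opt/hadoop/etc/hadoop/* hadoop@slave2:/opt/hadoop/etc/hadoop/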

Part 4: Starting Hadoop

I. Give the hadoop user ownership of the hadoop tree. As root, run:

sudo chown -R hadoop:hadoop /opt/hadoop

II. Reboot all three systems.

III. Master node:

1. As the hadoop user, run hdfs namenode -format to format the namenode.
2. Run start-dfs.sh.
3. Run start-yarn.sh.
4. Run jps; the result is as follows:

17287 DataNode
19400 Jps
17529 SecondaryNameNode
17117 NameNode
17790 ResourceManager
17967 NodeManager
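Optionally, confirm from the master that HDFS and YARN can see the whole cluster (standard commands, not in the original):

hdfs dfsadmin -report    # lists live datanodes and their capacity
yarn node -list          # lists registered nodemanagers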

IV. Slave1 node:

1. Repeat the start-up steps from the master node (the formatting step, hdfs namenode -format, is not needed). The jps result:

4693 NodeManager
8536 Jps
7722 ResourceManager

Notice that there is no DataNode process! So, additionally:

2. Run hadoop-daemon.sh start datanode, then run jps again:

8547 Jps
4693 NodeManager
8090 DataNode
7722 ResourceManager

V. Slave2 node: proceed exactly as for slave1.
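The missing DataNode on the slaves is consistent with the workers file never being populated: in Hadoop 3.x, start-dfs.sh and start-yarn.sh only start daemons on the hosts listed in etc/hadoop/workers, which ships containing only localhost. Listing all nodes there would let the master start everything, making the manual hadoop-daemon.sh step unnecessary — a sketch, assuming this cluster's hostnames:

# /opt/hadoop/etc/hadoop/workers — one worker host per line
master
slave1
slave2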

VI. In a browser, visit http://192.168.3.10:50070; the HDFS web page should come up (shown as a screenshot in the original).

VII. Visit http://192.168.3.10:8088; the YARN cluster page, with its Nodes and Apps lists, should come up (also a screenshot in the original).

Notes:

1. Run the hdfs namenode -format command only once. If the NameNode has to be reformatted, first delete everything under the original NameNode and DataNode directories, or you will get errors. Those directories are set in core-site.xml and hdfs-site.xml by the hadoop.tmp.dir, dfs.namenode.name.dir, and dfs.datanode.data.dir properties. Each format by default creates a new cluster ID and writes it into the VERSION files of the NameNode and DataNodes (located under dfs/name/current and dfs/data/current). Reformatting without deleting the old directories therefore leaves the new cluster ID in the NameNode's VERSION file while the DataNodes keep the old one, and the mismatch causes errors. The other option is to pass the old cluster ID as a parameter when formatting.
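For reference, a clean reformat would look like this — a sketch assuming the paths configured above; the -clusterid flag is the "specify the old cluster ID" option just mentioned:

stop-yarn.sh && stop-dfs.sh                                          # on the master
rm -rf /opt/hadoop/tmp /opt/hadoop/hdfs/name /opt/hadoop/hdfs/data   # on every node
hdfs namenode -format                                                # on the master only
# alternative mentioned above: keep the previous cluster ID instead
# hdfs namenode -format -clusterid <old-cluster-id>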

2. Command to start the history server: mr-jobhistory-daemon.sh start historyserver

3. Running wordcount

a. Location of the jar:

hadoop@master:/opt/hadoop/share/hadoop/mapreduce$ pwd
/opt/hadoop/share/hadoop/mapreduce

b. Generate the data file word.txt and create the HDFS directory:

hadoop@master:~$ echo Hello World >> word.txt
hadoop@master:~$ echo Hello Hadoop >> word.txt
hadoop@master:~$ echo Hello Hive >> word.txt
hadoop@master:~$ hadoop dfs -mkdir /work/data/input

c. Upload word.txt into the HDFS directory /work/data/input:

hadoop@master:~$ hadoop dfs -copyFromLocal word.txt /work/data/input
WARNING: Use of this script to execute dfs is deprecated.
WARNING: Attempting to execute replacement "hdfs dfs" instead.
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
2018-07-18 16:12:30,532 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

d. View the data:

hadoop@master:~$ hadoop dfs -text /work/data/input/word.txt
(the same WARNING lines as above)
2018-07-18 16:13:00,040 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hello World
Hello Hadoop
Hello Hive

e. Run the job (from the hadoop installation directory):

hadoop@master:/opt/hadoop$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar wordcount /work/data/input /work/data/output

f. View the results:

hadoop@master:/opt/hadoop$ hadoop dfs -lsr /work/data/output
(the same WARNING lines as above)
2018-07-18 16:18:53,160 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
lsr: DEPRECATED: Please use 'ls -R' instead.
-rw-r--r--   2 hadoop supergroup    0 2018-07-18 16:18 /work/data/output/_SUCCESS
-rw-r--r--   2 hadoop supergroup   32 2018-07-18 16:18 /work/data/output/part-r-00000
hadoop@master:/opt/hadoop$ hadoop dfs -text /work/data/output/part-r-00000
(the same WARNING lines as above)
2018-07-18 16:19:39,649 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hadoop 1
Hello 3
Hive 1
World 1

Note: if wordcount hangs when launched from a slave, edit yarn-site.xml and add the following properties:

<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
</property>
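As the deprecation warnings indicate, the hadoop dfs forms used above have current replacements. The same walkthrough with non-deprecated commands would be (an aside, not in the original):

hdfs dfs -mkdir -p /work/data/input
hdfs dfs -copyFromLocal word.txt /work/data/input
hdfs dfs -cat /work/data/input/word.txt
hdfs dfs -ls -R /work/data/output
hdfs dfs -cat /work/data/output/part-r-00000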
