@@ -61,6 +61,7 @@ ssh localhost
 ```

 - Copy the public key to both slaves
+- If you log in with a pem key instead, see: [SSH login without a password](SSH-login-without-password.md)

 ```
 ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@172.16.0.43, enter the root password of the hadoop-node1 machine when prompted; a success message is shown when it works
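The key-copy step above can be sketched as one loop over both slaves. Only 172.16.0.43 appears in the guide; the second IP below is a placeholder, and the loop only prints the commands (a dry run):

```shell
# Sketch: print the ssh-copy-id command for each slave (dry run).
# 172.16.0.43 is from the guide; 172.16.0.99 is a placeholder for the second slave.
SLAVES="172.16.0.43 172.16.0.99"

distribute_key() {
  for host in $SLAVES; do
    # Drop the echo to actually copy the key (each run asks for the root password).
    echo "ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@$host"
  done
}

distribute_key
```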
@@ -95,6 +96,7 @@ tar zxvf hadoop-2.6.5.tar.gz, about 191 MB
 ```

 - **First set HADOOP_HOME on all three machines**
+- If you know Ansible, a playbook makes this easier: [Ansible installation and configuration](Ansible-Install-And-Settings.md)

 ```
 vim /etc/profile
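The profile edit amounts to adding two export lines. A minimal sketch, written to a temp file here so nothing system-wide changes; the install path /usr/local/hadoop-2.6.5 is an assumption, not taken from this section:

```shell
# Sketch: HADOOP_HOME settings as they would be appended to /etc/profile.
# /usr/local/hadoop-2.6.5 is an assumed install path.
PROFILE=$(mktemp)
cat >> "$PROFILE" <<'EOF'
export HADOOP_HOME=/usr/local/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF

# Load it into the current shell, as `source /etc/profile` would on a real node.
. "$PROFILE"
echo "$HADOOP_HOME"
```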
@@ -338,29 +340,31 @@ SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1

 ```

-- Start
+## Starting HDFS
+
+- Start: start-dfs.sh, answer yes to each prompt

 ```
-Start: start-dfs.sh, answer yes to each prompt
-hadoop-master starts: NameNode and SecondaryNameNode
-The slave nodes start: DataNode
+What this command does:
+The master node starts the processes: NameNode and SecondaryNameNode
+The slave nodes start the process: DataNode
+

-Check with jps; you can see:
+On the master node, check with jps; you can see:
 21922 Jps
 21603 NameNode
 21787 SecondaryNameNode


-Then on the slave nodes, jps shows:
+On the slave nodes, check with jps; you can see:
 19728 DataNode
 19819 Jps
-
 ```

-```

-For more details on the running state: hdfs dfsadmin -report
+- For more details on the running state: `hdfs dfsadmin -report`

+```
 Configured Capacity: 0 (0 B)
 Present Capacity: 0 (0 B)
 DFS Remaining: 0 (0 B)
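The report fields can be pulled out with awk. The sample below mirrors the zeroed report of a fresh cluster shown here; on a live cluster the same pipeline would read `hdfs dfsadmin -report` directly:

```shell
# Sketch: extract a field from dfsadmin-report-style output.
# `report` holds a sample; on a real node: report=$(hdfs dfsadmin -report)
report='Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)'

capacity=$(printf '%s\n' "$report" | awk -F': ' '/^Configured Capacity/ {print $2}')
echo "$capacity"
```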
@@ -371,15 +375,9 @@ Blocks with corrupt replicas: 0
 Missing blocks: 0
 ```

+- To stop: `stop-dfs.sh`
+- To view the logs: `cd $HADOOP_HOME/logs`

-```
-
-To stop: stop-dfs.sh
-
-To view the logs: cd $HADOOP_HOME/logs
-
-
-```

 ## Running YARN

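The jps checks above can be scripted by grepping the expected daemon names out of the jps output. The sample output is the one shown above for the master:

```shell
# Sketch: confirm the expected daemons show up in jps output on the master.
# jps_output holds the sample from the guide; on a real node: jps_output=$(jps)
jps_output='21922 Jps
21603 NameNode
21787 SecondaryNameNode'

for daemon in NameNode SecondaryNameNode; do
  if printf '%s\n' "$jps_output" | grep -qw "$daemon"; then
    echo "$daemon: running"
  else
    echo "$daemon: missing"
  fi
done
```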
@@ -391,22 +389,53 @@ start-yarn.sh

 Stop: stop-yarn.sh

+```
+
+## Ports

+- All ports currently in use on the master node: `netstat -tpnl | grep java`
+- Ports that will be used (reordered here for readability):
+
+```
+tcp 0 0 172.16.0.17:9000 0.0.0.0:* LISTEN 22932/java >> NameNode
+tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 22932/java >> NameNode
+tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN 23125/java >> SecondaryNameNode
+tcp6 0 0 172.16.0.17:8030 :::* LISTEN 23462/java >> ResourceManager
+tcp6 0 0 172.16.0.17:8031 :::* LISTEN 23462/java >> ResourceManager
+tcp6 0 0 172.16.0.17:8032 :::* LISTEN 23462/java >> ResourceManager
+tcp6 0 0 172.16.0.17:8033 :::* LISTEN 23462/java >> ResourceManager
+tcp6 0 0 172.16.0.17:8088 :::* LISTEN 23462/java >> ResourceManager
+```
+
+- All ports currently in use on the slave nodes: `netstat -tpnl | grep java`
+- Ports that will be used (reordered here for readability):
+
+```
+tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN 14545/java >> DataNode
+tcp 0 0 0.0.0.0:50020 0.0.0.0:* LISTEN 14545/java >> DataNode
+tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN 14545/java >> DataNode
+tcp6 0 0 :::8040 :::* LISTEN 14698/java >> NodeManager
+tcp6 0 0 :::8042 :::* LISTEN 14698/java >> NodeManager
+tcp6 0 0 :::13562 :::* LISTEN 14698/java >> NodeManager
+tcp6 0 0 :::37481 :::* LISTEN 14698/java >> NodeManager
 ```

-- You can see all ports currently in use: `netstat -tpnl | grep java`
+-------------------------------------------------------------------

+## Management UIs

+- HDFS management UI: <http://hadoop-master:50070>
+- YARN management UI: <http://hadoop-master:8088>

-View the HDFS management UI: http://hadoop-master:50070
-Visit the YARN management UI: http://hadoop-master:8088

+-------------------------------------------------------------------

+## Running a job

-After the setup is done, run a MapReduce job to get a feel for it:
-hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar pi 5 10
-hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /data/input /data/output/result
+- Try running a MapReduce job:
+  - `hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar pi 5 10`

+-------------------------------------------------------------------

 ## References

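The port listings can be checked mechanically by pulling the port out of each netstat line. Two sample lines from the listing above are used here; a live check would pipe `netstat -tpnl | grep java` in instead:

```shell
# Sketch: extract listening ports from netstat-style output.
# netstat_out holds two sample lines from the listing above.
netstat_out='tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 22932/java
tcp6 0 0 172.16.0.17:8088 :::* LISTEN 23462/java'

# Field 4 is the local address; the port is the part after the last colon.
ports=$(printf '%s\n' "$netstat_out" | awk '{n = split($4, a, ":"); print a[n]}')
echo "$ports"
```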