
Analyzing a Hadoop Example Run

Published: 2021-12-10 13:40:30 · Source: 亿速云 · Views: 152 · Author: iii · Category: Big Data

This article walks through running the WordCount example that ships with Hadoop, step by step, and then examines the console output the job produces. The steps are kept simple and easy to follow.

1. Locate the examples jar
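On a default tarball install, the examples jar sits under share/hadoop/mapreduce. A quick way to find it (the /hadoop_soft/hadoop-2.7.2 prefix is taken from the command used in step 5):

```shell
# Search the Hadoop install tree for the bundled examples jar.
# The install prefix matches the path used in step 5 of this article.
find /hadoop_soft/hadoop-2.7.2/share/hadoop/mapreduce \
     -name 'hadoop-mapreduce-examples-*.jar'
```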

2. Create the input directory
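Only the input directory needs to be created by hand; the WordCount job creates the output directory itself, and in fact fails with a FileAlreadyExistsException if /wc_output already exists. A minimal sketch, using the paths from step 5:

```shell
# Create the HDFS input directory. Do NOT pre-create /wc_output:
# FileOutputFormat requires the output path to not exist yet.
hdfs dfs -mkdir -p /wc_input
```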

3. Upload the files to be counted to the wc_input directory
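The upload is a plain hdfs put; /root/input/*.txt here is a hypothetical local path standing in for wherever your text files live:

```shell
# Copy local text files into the HDFS input directory.
# /root/input/*.txt is an assumed local path; substitute your own files.
hdfs dfs -put /root/input/*.txt /wc_input/
```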

4. Verify the uploaded files
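A listing of the input directory confirms what was uploaded; the job log later reports "Total input paths to process : 2", i.e. two files:

```shell
# List the uploaded input files in HDFS.
hdfs dfs -ls /wc_input
```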

5. Run the WordCount job; the full console transcript follows:

 [root@hadoop input]# hadoop jar /hadoop_soft/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /wc_input/* /wc_output/
17/08/15 10:25:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/08/15 10:25:25 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.120:18040
17/08/15 10:25:27 INFO input.FileInputFormat: Total input paths to process : 2
17/08/15 10:25:27 INFO mapreduce.JobSubmitter: number of splits:2
17/08/15 10:25:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1502762082449_0001
17/08/15 10:25:28 INFO impl.YarnClientImpl: Submitted application application_1502762082449_0001
17/08/15 10:25:29 INFO mapreduce.Job: The url to track the job: http://hadoop:18088/proxy/application_1502762082449_0001/
17/08/15 10:25:29 INFO mapreduce.Job: Running job: job_1502762082449_0001
17/08/15 10:25:48 INFO mapreduce.Job: Job job_1502762082449_0001 running in uber mode : true
17/08/15 10:25:48 INFO mapreduce.Job:  map 0% reduce 0%
17/08/15 10:25:50 INFO mapreduce.Job:  map 100% reduce 0%
17/08/15 10:25:51 INFO mapreduce.Job:  map 100% reduce 100%
17/08/15 10:25:51 INFO mapreduce.Job: Job job_1502762082449_0001 completed successfully
17/08/15 10:25:52 INFO mapreduce.Job: Counters: 52
 File System Counters
  FILE: Number of bytes read=276
  FILE: Number of bytes written=545
  FILE: Number of read operations=0
  FILE: Number of large read operations=0
  FILE: Number of write operations=0
  HDFS: Number of bytes read=798
  HDFS: Number of bytes written=398613
  HDFS: Number of read operations=66
  HDFS: Number of large read operations=0
  HDFS: Number of write operations=23
 Job Counters
  Launched map tasks=2
  Launched reduce tasks=1
  Other local map tasks=2
  Total time spent by all maps in occupied slots (ms)=1972
  Total time spent by all reduces in occupied slots (ms)=803
  TOTAL_LAUNCHED_UBERTASKS=3
  NUM_UBER_SUBMAPS=2
  NUM_UBER_SUBREDUCES=1
  Total time spent by all map tasks (ms)=1972
  Total time spent by all reduce tasks (ms)=803
  Total vcore-milliseconds taken by all map tasks=1972
  Total vcore-milliseconds taken by all reduce tasks=803
  Total megabyte-milliseconds taken by all map tasks=2019328
  Total megabyte-milliseconds taken by all reduce tasks=822272
 Map-Reduce Framework
  Map input records=5
  Map output records=11
  Map output bytes=111
  Map output materialized bytes=109
  Input split bytes=210
  Combine input records=11
  Combine output records=8
  Reduce input groups=7
  Reduce shuffle bytes=109
  Reduce input records=8
  Reduce output records=7
  Spilled Records=16
  Shuffled Maps =2
  Failed Shuffles=0
  Merged Map outputs=2
  GC time elapsed (ms)=637
  CPU time spent (ms)=1820
  Physical memory (bytes) snapshot=830070784
  Virtual memory (bytes) snapshot=8998096896
  Total committed heap usage (bytes)=500510720
 Shuffle Errors
  BAD_ID=0
  CONNECTION=0
  IO_ERROR=0
  WRONG_LENGTH=0
  WRONG_MAP=0
  WRONG_REDUCE=0
 File Input Format Counters
  Bytes Read=70
 File Output Format Counters
  Bytes Written=57

6. View the job output
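A successful run leaves an empty _SUCCESS marker plus one part file per reducer in the output directory; since this job ran a single reducer, the counts are all in part-r-00000:

```shell
# List the output directory, then print the word counts.
hdfs dfs -ls /wc_output
hdfs dfs -cat /wc_output/part-r-00000
```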

7. Verify the result data
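One way to sanity-check the job is to recompute the counts locally with standard Unix tools and compare against part-r-00000. A minimal sketch, where sample.txt is a hypothetical local copy of the input (WordCount itself emits tab-separated word/count pairs):

```shell
# Hypothetical local stand-in for the job's input files.
printf 'hello world\nhello hadoop\n' > sample.txt

# Split on whitespace, count each word, and print "word count"
# pairs sorted by word -- the same tallies WordCount produces.
tr -s ' \t' '\n' < sample.txt | sort | uniq -c | awk '{print $2, $1}' | sort
# -> hadoop 1
#    hello 2
#    world 1
```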

Thanks for reading. That concludes this walkthrough of running a Hadoop example; after working through it you should have a clearer picture of how a MapReduce job is submitted and what its log output and counters report. The best way to consolidate this is to run the example yourself.
