Today a developer's Hive job failed with the following error:
Query ID = gsadmin_20171113143046_07c2e5ee-c0e3-4624-8947-538e410bbc2b
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1009
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
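As the hints above suggest, when no reducer count is specified Hive estimates one from the input data size. A minimal sketch of that estimation logic (the parameter names are real Hive settings; the defaults shown, 256 MB per reducer and a cap of 1009, match Hive 0.14+ and would explain the 1009 reducers in this log, but verify the defaults on your own version):

```python
import math

def estimate_reducers(total_input_bytes,
                      bytes_per_reducer=256_000_000,  # hive.exec.reducers.bytes.per.reducer (Hive 0.14+ default)
                      max_reducers=1009):             # hive.exec.reducers.max (Hive 0.14+ default)
    """Sketch of how Hive estimates the reducer count when
    mapreduce.job.reduces is not set: one reducer per
    bytes_per_reducer of input, capped at max_reducers."""
    estimated = math.ceil(total_input_bytes / bytes_per_reducer)
    return max(1, min(estimated, max_reducers))

# A large enough input hits the cap, matching the 1009 reducers in the log above:
print(estimate_reducers(2 * 10**12))  # 2 TB of input -> 1009
```

Lowering hive.exec.reducers.bytes.per.reducer therefore raises the estimate, and hive.exec.reducers.max bounds it from above.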
Starting Job = job_1510045652060_3118, Tracking URL = http://TS-HELIUM-002:8088/proxy/application_1510045652060_3118/
Kill Command = /var/gs/hadoop/bin/hadoop job -kill job_1510045652060_3118
Hadoop job information for Stage-1: number of mappers: 31175; number of reducers: 1009
2017-11-13 14:31:14,929 Stage-1 map = 0%, reduce = 0%
2017-11-13 14:31:36,729 Stage-1 map = 57%, reduce = 0%
2017-11-13 14:31:37,790 Stage-1 map = 100%, reduce = 100%
Ended Job = job_1510045652060_3118 with errors
Examining task ID: task_1510045652060_3118_m_000097 (and more) from job
Task with the most failures(4):
-----
Task ID:
task_1510045652060_3118_m_000014
URL:
http://TS-HELIUM-002:8088/taskdetails.jsp?jobid=job_1510045652060_3118&tipid=task_1510045652060_3118_m_000014
-----
Diagnostic Messages for this Task:
Error: Java heap space
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
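The "Java heap space" line is the root cause here; exit code 143 only tells us the container was terminated with SIGTERM after the ApplicationMaster decided to kill it. By the usual Unix convention, exit codes above 128 encode 128 plus the fatal signal number:

```python
import signal

# Exit codes above 128 encode a fatal signal: 128 + signal number.
# SIGTERM (what the ApplicationMaster sends when killing a container) is 15.
print(128 + signal.SIGTERM)  # -> 143
```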
Checking showed mapred.child.java.opts was only 200M; according to references online, 200M is indeed the default value:
gsadmin@TS-DEP-TASK01:~$ hive
Logging initialized using configuration in file:/var/gs/conf/hive/hive-log4j.properties
hive> set mapred.child.java.opts;
mapred.child.java.opts=-Xmx200m
Solution:
After logging into the Hive CLI, run: hive> set mapred.child.java.opts=-Xmx2048M;
then re-run the corresponding SQL.
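For a one-off query this session-level setting is enough. Note that 2048M is simply the value that worked in this case, not a universal recommendation. On MRv2 (YARN) clusters the map and reduce task heaps can also be set separately via the standard replacement properties for the deprecated mapred.child.java.opts:

```sql
-- Session-level: takes effect only for the current Hive session
set mapred.child.java.opts=-Xmx2048M;
-- MRv2 equivalents, allowing map and reduce heaps to differ:
set mapreduce.map.java.opts=-Xmx2048M;
set mapreduce.reduce.java.opts=-Xmx2048M;
```

To make the change permanent for all jobs, the same properties can be set in mapred-site.xml (or hive-site.xml) instead of per session.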