This article explains how to write a custom Hive UDF (user-defined function) in Eclipse. The walkthrough is short and step-by-step; follow along to build, deploy, and test a simple UDF.
While doing log analysis with Hive on Hadoop, some log-processing tasks are awkward to express with Hive's built-in functions, so a UDF is needed to extend Hive.
1. In Eclipse, create a Java project named hiveudf, then create a new class with package com.afan and name UDFLower.
2. Add two jars to the project's build path: hadoop-core-1.1.2.jar (from Hadoop 1.1.2) and hive-exec-0.9.0.jar (from Hive 0.9.0).
3. Implement the UDF:

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// A simple Hive UDF: extend UDF and define an evaluate() method.
// This one lowercases its input, passing null through unchanged.
public class UDFLower extends UDF {
    public Text evaluate(final Text s) {
        if (null == s) {
            return null;
        }
        return new Text(s.toString().toLowerCase());
    }
}
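Before exporting the jar, the evaluate() logic can be sanity-checked in plain Java. This is a minimal sketch, not part of the original article: it uses String in place of Hadoop's Text so it runs without the Hadoop jars, but mirrors the same null handling and lowercasing.

```java
// Standalone check of the UDF's logic, with String standing in for Text.
public class UDFLowerCheck {

    // Same contract as UDFLower.evaluate: null in, null out; else lowercase.
    static String evaluate(String s) {
        if (s == null) {
            return null;
        }
        return s.toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(evaluate("HELLO")); // prints hello
        System.out.println(evaluate(null));    // prints null
    }
}
```

If this behaves as expected, the only remaining differences in the real UDF are the Text wrapping and Hive's invocation machinery.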
4. Compile the project and export it as a jar named udf_hive.jar.
(Steps 1 through 6 here were screenshots walking through Eclipse's JAR export wizard; the images are not reproduced in this text.)
5. Copy udf_hive.jar to the Linux machine where Hive runs, at the path /root/data/udf_hive.jar.
6. Open the Hive CLI to test. First register the jar:
hive> add jar /root/data/udf_hive.jar;
Added udf_hive.jar to class path
Added resource: udf_hive.jar
Create the UDF function:
hive> create temporary function my_lower as 'UDFLower';

The string after AS is the fully qualified name of your class. If the class has a package, e.g. cn.jiang.UDFLower, write as 'cn.jiang.UDFLower'; if it has no package, the bare class name 'UDFLower' is enough.
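Hive loads the class named after AS by reflection, which is why the package prefix matters. A small illustration of that lookup, using a JDK class in place of the UDF (since com.afan.UDFLower is not on an ordinary classpath):

```java
// Demonstrates fully-qualified vs. bare class-name lookup via reflection,
// the same mechanism Hive uses to resolve the name in CREATE TEMPORARY FUNCTION.
public class FqcnDemo {
    public static void main(String[] args) {
        try {
            // Fully-qualified name resolves:
            Class<?> ok = Class.forName("java.util.ArrayList");
            System.out.println(ok.getSimpleName()); // prints ArrayList
        } catch (ClassNotFoundException e) {
            System.out.println("unexpected: " + e);
        }
        try {
            // A bare name only works for classes in the default package,
            // as UDFLower is in this article's jar; this lookup fails:
            Class.forName("ArrayList");
        } catch (ClassNotFoundException e) {
            System.out.println("not found without package");
        }
    }
}
```

So a ClassNotFoundException after CREATE TEMPORARY FUNCTION usually means the name after AS does not match the class's package plus name.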
Create a test table:
hive> create table dual (name string);
Load the data file test.txt. Its contents are:
WHO
AM
I
HELLO
hive> load data local inpath '/root/data/test.txt' into table dual;
hive> select name from dual;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201105150525_0003, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201105150525_0003
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201105150525_0003
2011-05-15 06:46:05,459 Stage-1 map = 0%, reduce = 0%
2011-05-15 06:46:10,905 Stage-1 map = 100%, reduce = 0%
2011-05-15 06:46:13,963 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201105150525_0003
OK
WHO
AM
I
HELLO
Now query through the UDF:
hive> select my_lower(name) from dual;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201105150525_0002, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201105150525_0002
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201105150525_0002
2011-05-15 06:43:26,100 Stage-1 map = 0%, reduce = 0%
2011-05-15 06:43:34,364 Stage-1 map = 100%, reduce = 0%
2011-05-15 06:43:37,484 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201105150525_0002
OK
who
am
i
hello
The output is lowercased as expected; the test passes.
Reference: http://landyer.iteye.com/blog/1070377
That is the whole workflow for a custom Hive UDF in Eclipse: implement evaluate() in a class extending UDF, export a jar, add the jar in the Hive CLI, register a temporary function, and query with it. Try it against your own data to confirm it works in your environment.