Cloudera Developer Training for Spark and Hadoop
Course Time: June 27-30, 2016
Course Location: Berkeley Engineering Innovation Center, Zhangjiang Hi-Tech Park, Pudong New District, Shanghai
Contact us: 400-679-6113
QQ: 1438118790
Certification: CCA-175
Learn how to import data into your Apache Hadoop cluster and process it with Spark, Hive, Flume, Sqoop, Impala, and other Hadoop ecosystem tools.
Audience and Prerequisites
This course is designed for developers and engineers who have programming experience. Apache Spark examples and hands-on exercises are presented in Scala and Python, so the ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful. Prior knowledge of Hadoop is not required.
Course Outline: Developer Training for Spark and Hadoop
Introduction to Hadoop and the Hadoop ecosystem
Hadoop architecture and HDFS
Importing relational data with Apache Sqoop
Introduction to Impala and Hive
Modeling and managing data with Impala and Hive
Data formats
Data partitioning
Capturing data with Apache Flume
Spark basics
Working with RDDs in Spark (see the example sketch after this outline)
Writing and deploying Spark applications
Parallel programming with Spark
Spark caching and persistence
Common patterns in Spark data processing
Preview: Spark SQL
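To give a feel for the "Working with RDDs in Spark" module, below is a minimal PySpark sketch of the kind of exercise the course covers. It is not taken from the course materials; the application name and HDFS path are assumptions used only for illustration. It loads a text file into an RDD and counts word frequencies.

    # Minimal PySpark RDD sketch; the HDFS path below is a placeholder, not course material.
    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setAppName("WordCountSketch")
    sc = SparkContext(conf=conf)

    # Load a text file from HDFS into an RDD of lines (hypothetical path).
    lines = sc.textFile("hdfs:///user/training/sample.txt")

    # Split each line into words, pair each word with 1, and sum the counts per word.
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))

    # Cache the result for reuse and print the ten most frequent words.
    counts.cache()
    for word, count in counts.takeOrdered(10, key=lambda pair: -pair[1]):
        print(word, count)

    sc.stop()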