This article explains how to use the Java API to operate HDFS. The walkthrough is fairly detailed and should be a useful reference; if you're interested, read it through to the end!
On Windows, you need to configure a local Hadoop environment first.
macOS is Unix-based under the hood, so no extra configuration is needed.
See the reference document "Windows & Mac Local Development Environment Configuration":
Link: https://pan.baidu.com/s/1tFJSlRxn18YELUUAUkXXQA
Extraction code: g9ka
Add the following to your project's pom.xml:
<properties>
    <hadoop.version>3.1.4</hadoop.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/junit/junit -->
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <!-- "RELEASE" resolves to the latest available release; pinning an explicit version is more reproducible -->
        <version>RELEASE</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.0</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <encoding>UTF-8</encoding>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <minimizeJar>true</minimizeJar>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
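Because log4j 1.2 is on the classpath, the Hadoop client will print a "no appenders" warning unless logging is configured. A minimal log4j.properties sketch, placed under src/test/resources (the pattern layout here is just a common convention; adjust to taste):

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.out
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{HH:mm:ss} %-5p %c{2}: %m%n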
With the environment and dependencies in place, the following JUnit tests create directories on HDFS (imports shown once for all of the examples below):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;
import org.junit.Test;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

// Simplest form: point the client at the cluster via configuration
@Test
public void mkDirOnHDFS() throws IOException {
    // Client-side configuration
    Configuration configuration = new Configuration();
    // Set the HDFS cluster to connect to
    configuration.set("fs.defaultFS", "hdfs://node01:8020");
    // Obtain the file system handle
    FileSystem fileSystem = FileSystem.get(configuration);
    // Create the directory; returns false if it already exists
    boolean mkdirs = fileSystem.mkdirs(new Path("/kaikeba/dir1"));
    // Release resources
    fileSystem.close();
}
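Since mkdirs also returns false when the directory already exists, it can be useful to check for the directory explicitly. A minimal sketch using FileSystem.exists, assuming the same node01 cluster (the test name is made up for illustration):

@Test
public void checkDirExists() throws IOException {
    Configuration configuration = new Configuration();
    configuration.set("fs.defaultFS", "hdfs://node01:8020");
    FileSystem fileSystem = FileSystem.get(configuration);
    // exists() works for both files and directories
    boolean exists = fileSystem.exists(new Path("/kaikeba/dir1"));
    System.out.println("/kaikeba/dir1 exists: " + exists);
    fileSystem.close();
}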
// Create a directory as a specific HDFS user ("test" here)
@Test
public void mkDirOnHDFS2() throws IOException, URISyntaxException, InterruptedException {
    // Client-side configuration
    Configuration configuration = new Configuration();
    // Obtain the file system as user "test"; the URI takes the place of fs.defaultFS
    FileSystem fileSystem = FileSystem.get(new URI("hdfs://node01:8020"), configuration, "test");
    // Create the directory
    boolean mkdirs = fileSystem.mkdirs(new Path("/kaikeba/dir2"));
    // Release resources
    fileSystem.close();
}
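Passing the user name to FileSystem.get is one way to act as another user. With simple authentication, the Hadoop client also honors the HADOOP_USER_NAME environment variable or system property, so a sketch like the following should behave the same (the path /kaikeba/dir2_b is made up for illustration):

@Test
public void mkDirAsUserViaProperty() throws IOException {
    // Must be set before the client initializes UserGroupInformation,
    // i.e. before the first FileSystem.get call in this JVM
    System.setProperty("HADOOP_USER_NAME", "test");
    Configuration configuration = new Configuration();
    configuration.set("fs.defaultFS", "hdfs://node01:8020");
    FileSystem fileSystem = FileSystem.get(configuration);
    boolean mkdirs = fileSystem.mkdirs(new Path("/kaikeba/dir2_b"));
    System.out.println("mkdirs result: " + mkdirs);
    fileSystem.close();
}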
// Create a directory with an explicit permission
@Test
public void mkDirOnHDFS3() throws IOException {
    Configuration configuration = new Configuration();
    configuration.set("fs.defaultFS", "hdfs://node01:8020");
    FileSystem fileSystem = FileSystem.get(configuration);
    // Owner: all (rwx); group: read; others: read
    FsPermission fsPermission = new FsPermission(FsAction.ALL, FsAction.READ, FsAction.READ);
    boolean mkdirs = fileSystem.mkdirs(new Path("hdfs://node01:8020/kaikeba/dir3"), fsPermission);
    if (mkdirs) {
        System.out.println("Directory created successfully");
    }
    fileSystem.close();
}
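To confirm the permission that actually landed on the directory, you can read it back with getFileStatus (this sketch needs an extra import of org.apache.hadoop.fs.FileStatus); note that HDFS may apply the client-side umask (fs.permissions.umask-mode) when creating directories, so the effective permission can differ from the one requested:

@Test
public void checkDirPermission() throws IOException {
    Configuration configuration = new Configuration();
    configuration.set("fs.defaultFS", "hdfs://node01:8020");
    FileSystem fileSystem = FileSystem.get(configuration);
    // FileStatus carries owner, group and permission metadata
    FileStatus status = fileSystem.getFileStatus(new Path("/kaikeba/dir3"));
    System.out.println("owner: " + status.getOwner());
    System.out.println("permission: " + status.getPermission());
    fileSystem.close();
}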
Note: the programs above will only run correctly if you have set up the Hadoop cluster as described in the earlier environment-setup requirements, configured the relevant host mappings, and started Hadoop successfully.
That is all the content of "How to use the Java API to operate HDFS". Thanks for reading! I hope what was shared here is helpful; for more related knowledge, follow the 亿速云 industry news channel!