HtmlExtractor is a template-based component, written in Java, for precise extraction of structured information from web pages. It contains no crawler functionality of its own, but can be called by a crawler or any other program to extract structured information from pages more precisely.
HtmlExtractor consists of two sub-projects: html-extractor and html-extractor-web. html-extractor implements the extraction logic and acts as the slave node; html-extractor-web provides a web interface for maintaining extraction rules and acts as the master node. html-extractor is a jar that can be pulled in via Maven:
<dependency>
    <groupId>org.apdplat</groupId>
    <artifactId>html-extractor</artifactId>
    <version>1.0</version>
</dependency>
html-extractor-web is a war that must be deployed to a Servlet/JSP container.
Standalone usage example:

// 1. Build the extraction rules
List<UrlPattern> urlPatterns = new ArrayList<>();

// 1.1 Build a URL pattern
UrlPattern urlPattern = new UrlPattern();
urlPattern.setUrlPattern("http://money.163.com/\\d{2}/\\d{4}/\\d{2}/[0-9A-Z]{16}.html");

// 1.2 Build an HTML template
HtmlTemplate htmlTemplate = new HtmlTemplate();
htmlTemplate.setTemplateName("网易财经频道");
htmlTemplate.setTableName("finance");

// 1.3 Associate the HTML template with the URL pattern
urlPattern.addHtmlTemplate(htmlTemplate);

// 1.4 Build a CSS path for the title field
CssPath cssPath = new CssPath();
cssPath.setCssPath("h2");
cssPath.setFieldName("title");
cssPath.setFieldDescription("标题");

// 1.5 Associate the CSS path with the template
htmlTemplate.addCssPath(cssPath);

// 1.6 Build a CSS path for the article body
cssPath = new CssPath();
cssPath.setCssPath("div#endText");
cssPath.setFieldName("content");
cssPath.setFieldDescription("正文");

// 1.7 Associate the CSS path with the template
htmlTemplate.addCssPath(cssPath);

// Multiple URL patterns can be built the same way
urlPatterns.add(urlPattern);

// 2. Obtain the extraction-rule object
ExtractRegular extractRegular = ExtractRegular.getInstance(urlPatterns);

// Note: the rules can be changed dynamically via these three methods:
//extractRegular.addUrlPatterns(urlPatterns);
//extractRegular.addUrlPattern(urlPattern);
//extractRegular.removeUrlPattern(urlPattern.getUrlPattern());

// 3. Obtain the HTML extractor
HtmlExtractor htmlExtractor = HtmlExtractor.getInstance(extractRegular);

// 4. Extract a page
String url = "http://money.163.com/08/1219/16/4THR2TMP002533QK.html";
List<ExtractResult> extractResults = htmlExtractor.extract(url, "gb2312");

// 5. Print the results
int i = 1;
for (ExtractResult extractResult : extractResults) {
    System.out.println((i++) + ". Extraction result for page " + extractResult.getUrl());
    for (ExtractResultItem extractResultItem : extractResult.getExtractResultItems()) {
        System.out.print("\t" + extractResultItem.getField() + " = " + extractResultItem.getValue());
    }
    System.out.println("\tdescription = " + extractResult.getDescription());
    System.out.println("\tkeywords = " + extractResult.getKeywords());
}
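As a quick sanity check, independent of HtmlExtractor and using only `java.util.regex` from the standard library, you can verify that the URL pattern above really matches the sample article URL (the class name here is illustrative):

```java
import java.util.regex.Pattern;

public class UrlPatternCheck {
    public static void main(String[] args) {
        // The URL pattern from the example above; backslashes are
        // doubled because this is a Java string literal
        Pattern pattern = Pattern.compile(
                "http://money.163.com/\\d{2}/\\d{4}/\\d{2}/[0-9A-Z]{16}.html");

        String article = "http://money.163.com/08/1219/16/4THR2TMP002533QK.html";
        String other   = "http://money.163.com/special/index.html";

        // The article URL matches: 2 digits / 4 digits / 2 digits / 16 chars of [0-9A-Z]
        System.out.println(pattern.matcher(article).matches()); // true
        // A non-article URL on the same host does not
        System.out.println(pattern.matcher(other).matches());   // false
    }
}
```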
Distributed usage:
1. Run the master node, which maintains the extraction rules: package the html-extractor-web sub-project as a war and deploy it to Tomcat.
2. Obtain an HtmlExtractor instance (the slave node), as in the following example:
String allExtractRegularUrl = "http://localhost:8080/HtmlExtractorServer/api/all_extract_regular.jsp";
String redisHost = "localhost";
int redisPort = 6379;
HtmlExtractor htmlExtractor = HtmlExtractor.getInstance(allExtractRegularUrl, redisHost, redisPort);
3. Extract information, as in the following example:
String url = "http://money.163.com/08/1219/16/4THR2TMP002533QK.html";
List<ExtractResult> extractResults = htmlExtractor.extract(url, "gb2312");

int i = 1;
for (ExtractResult extractResult : extractResults) {
    System.out.println((i++) + ". Extraction result for page " + extractResult.getUrl());
    for (ExtractResultItem extractResultItem : extractResult.getExtractResultItems()) {
        System.out.print("\t" + extractResultItem.getField() + " = " + extractResultItem.getValue());
    }
    System.out.println("\tdescription = " + extractResult.getDescription());
    System.out.println("\tkeywords = " + extractResult.getKeywords());
}
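The second argument to extract is the character encoding of the source page; NetEase pages of that era were served as GB2312 rather than UTF-8. A minimal, self-contained sketch (standard library only, no HtmlExtractor classes) of why declaring the right charset matters when decoding fetched bytes:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        Charset gb2312 = Charset.forName("GB2312");
        // "财经" (finance) encoded as the server would send it
        byte[] bytes = "财经".getBytes(gb2312);

        // Decoding with the declared charset recovers the original text
        System.out.println(new String(bytes, gb2312)); // 财经
        // Decoding the same bytes as UTF-8 produces mojibake,
        // which is what a wrong charset argument would yield
        System.out.println(new String(bytes, StandardCharsets.UTF_8));
    }
}
```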