Recently I have continued exploring Elasticsearch, reading through related material (blog posts and a few tutorials), and learned about the write consistency mechanism. The principle itself is not hard to understand, but it only really sinks in once you verify it in practice; using ES in a project for all kinds of aggregation, statistics, and search analysis is a very different experience from never having used it at all.
So I ran a test on ES 5.4:
PUT myblog/article/1?consistency=all
{
  "title": "test"
}
The request fails with:
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "request [/myblog/article/1] contains unrecognized parameter: [consistency]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "request [/myblog/article/1] contains unrecognized parameter: [consistency]"
  },
  "status": 400
}
As you can see, the consistency parameter is not recognized. I went on to test 5.2, 5.6, and the old 1.7 release: none of the 5.x versions accept it, but 1.7 does. So it seemed very likely that write consistency had been removed at some point in the ES 5.x line.
I searched online for more information, but nearly every result turned out to derive from the same source document, one written against ES 5.2 that nevertheless still describes the write consistency parameter.
So there was nothing for it but to consult the official documentation.
For ES 5.x, the official documentation describes the change as follows:
writeConsistencyLevel removed on write requests
In previous versions of Elasticsearch, the various write requests had a setWriteConsistencyLevel method to set the shard consistency level for write operations. However, the semantics of write consistency were ambiguous as this is just a pre-operation check to ensure the specified number of shards were available before the operation commenced. The write consistency level did not guarantee that the data would be replicated to those number of copies by the time the operation finished. The setWriteConsistencyLevel method on these write requests has been changed to setWaitForActiveShards, which can take a numerical value up to the total number of shard copies or ActiveShardCount.ALL for all shard copies. The default is to just wait for the primary shard to be active before proceeding with the operation. See the section on wait for active shards for more details.
This change affects IndexRequest, IndexRequestBuilder, BulkRequest, BulkRequestBuilder, UpdateRequest, UpdateRequestBuilder, DeleteRequest, and DeleteRequestBuilder.
See: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/breaking_50_java_api_changes.html#_writeconsistencylevel_removed_on_write_requests
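Judging from that passage, the REST-level counterpart of the removed parameter should be wait_for_active_shards, which accepts a copy count or all. Below is a minimal sketch of how the original test request would presumably look on 5.x; I am reconstructing it from the documentation rather than from a verified run:

PUT myblog/article/1?wait_for_active_shards=all
{
  "title": "test"
}

The parameter also accepts a number, e.g. wait_for_active_shards=2 for the primary plus one replica; per the quoted passage, the default is to wait only for the primary. And note that, as the documentation stresses, this is merely a pre-operation availability check on shards; it does not guarantee the write has actually been replicated to that many copies by the time the request returns. On the Java API side, the same passage says the equivalent is setWaitForActiveShards (taking a number or ActiveShardCount.ALL) on the various request builders.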
Will there be further changes in ES 6.x?
For now I will not dig into the ES 6.x material. My company has only just upgraded from ES 1.x/2.x to 5.6, and a large share of projects still run on the older versions, so the goal is clear: since time and energy are limited, I will mainly focus on ES 5.x going forward.
It is the same with Spark: 2.x has been out for a while, but because the newest version we use in-house is 1.6.3, my own effort, including my Spark development habits, is still based on Spark 1.6.
Of course, I will certainly shift my attention to the newer versions later on.