Chinaunix

Subject: elasticsearch es

Author: freemangui    Time: 2016-01-16 15:26
Subject: elasticsearch es
Hi all, I'm learning Elasticsearch and have run into a problem. Searching Baidu and Google turned up nothing on the cause, so I'm hoping one of the experts here can help. My environment:
CentOS Linux release 7.2.1511 (Core)     64bit
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
elasticsearch-2.1.1
elasticsearch-head is installed.
The following lines have already been added to /etc/security/limits.conf:
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited
Elasticsearch itself was installed via yum.
Any fix or pointers would be much appreciated. Thanks.
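A side note on the memlock warning that still shows up in the log below: with the yum package on CentOS 7 the service runs under systemd, and /etc/security/limits.conf is applied by PAM only to interactive logins, so it never reaches the service. A rough sketch of what is usually needed instead, assuming the standard RPM layout (file names from memory, please double-check against your install):

        # /etc/systemd/system/elasticsearch.service.d/memlock.conf  (systemd drop-in, not PAM)
        [Service]
        LimitMEMLOCK=infinity

        # /etc/elasticsearch/elasticsearch.yml  (setting name used by the 2.x series)
        bootstrap.mlockall: true

        # reload units and restart the service afterwards
        systemctl daemon-reload
        systemctl restart elasticsearch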


[root@Test2 tools]# more /var/log/elasticsearch/my-application.log
[2016-01-16 14:37:51,121][WARN ][bootstrap                ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-01-16 14:37:51,122][WARN ][bootstrap                ] This can result in part of the JVM being swapped out.
[2016-01-16 14:37:51,122][WARN ][bootstrap                ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-01-16 14:37:51,122][WARN ][bootstrap                ] These can be adjusted by modifying /etc/security/limits.conf, for example:
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited
[2016-01-16 14:37:51,122][WARN ][bootstrap                ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2016-01-16 14:37:51,355][INFO ][node                     ] [node-2] version[2.1.1], pid[2741], build[40e2c53/2015-12-15T13:05:55Z]
[2016-01-16 14:37:51,355][INFO ][node                     ] [node-2] initializing ...
[2016-01-16 14:37:51,432][INFO ][plugins                  ] [node-2] loaded [], sites []
[2016-01-16 14:37:51,481][INFO ][env                      ] [node-2] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [14.9gb], net total_space [17.4gb], spins? [unknown], types [rootfs]
[2016-01-16 14:37:53,344][INFO ][node                     ] [node-2] initialized
[2016-01-16 14:37:53,344][INFO ][node                     ] [node-2] starting ...
[2016-01-16 14:37:53,446][WARN ][common.network           ] [node-2] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {192.168.207.135}
[2016-01-16 14:37:53,446][INFO ][transport                ] [node-2] publish_address {192.168.207.135:9300}, bound_addresses {[::]:9300}
[2016-01-16 14:37:53,458][INFO ][discovery                ] [node-2] my-application/M9uHSPkzT7W-DBT6V7xf9A
[2016-01-16 14:37:56,570][INFO ][cluster.service          ] [node-2] new_master {node-2}{M9uHSPkzT7W-DBT6V7xf9A}{192.168.207.135}{192.168.207.135:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-01-16 14:37:56,602][WARN ][common.network           ] [node-2] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {192.168.207.135}
[2016-01-16 14:37:56,603][INFO ][http                     ] [node-2] publish_address {192.168.207.135:9200}, bound_addresses {[::]:9200}
[2016-01-16 14:37:56,603][INFO ][node                     ] [node-2] started
[2016-01-16 14:37:56,652][INFO ][gateway                  ] [node-2] recovered [0] indices into cluster_state
[2016-01-16 14:40:47,482][WARN ][transport.netty          ] [node-2] exception caught on transport layer [[id: 0x80c47f3c, /192.168.207.129:56724 => /192.168.207.135:9300]], closing connection
java.io.StreamCorruptedException: invalid internal transport message format, got (73,74,61,74)
        at org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeHeaderFrameDecoder.java:64)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:26
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:8
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:10
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:17
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:10
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
[2016-01-16 14:40:47,486][WARN ][transport.netty          ] [node-2] exception caught on transport layer [[id: 0x80c47f3c, /192.168.207.129:56724 :> /192.168.207.135:9300]], closing connection
java.io.StreamCorruptedException: invalid internal transport message format, got (73,74,61,74)
        at org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeHeaderFrameDecoder.java:64)
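For what it's worth, the four bytes in that exception, (73,74,61,74), are hex ASCII for the text "stat", so whatever opened that connection from 192.168.207.129 was apparently sending plain text to the transport port 9300 (a monitoring probe or an HTTP-style client, for example) rather than speaking the binary node-to-node protocol. A quick way to see the decoding:

        [root@Test2 tools]# printf '\x73\x74\x61\x74\n'
        stat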
Author: freemangui    Time: 2016-01-17 09:37
This problem is solved now; I made the following changes:

discovery.zen.ping.multicast.enabled: false
This controls whether nodes are discovered via multicast; it defaults to true, so I turned it off.
discovery.zen.ping.unicast.hosts: ["192.168.207.129", "192.168.207.135"]
This explicitly lists the nodes to contact.
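For reference, a minimal sketch of the relevant part of /etc/elasticsearch/elasticsearch.yml on each node (cluster and node names taken from the logs above; adjust the addresses to your own environment):

        cluster.name: my-application
        node.name: node-2                  # node-1 on the other machine
        network.host: 0.0.0.0              # matches the wildcard publish address seen in the log
        discovery.zen.ping.multicast.enabled: false
        discovery.zen.ping.unicast.hosts: ["192.168.207.129", "192.168.207.135"]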
Below is the resulting log entry:
detected_master {node-1}{t9KdncstRpOxx_VQoojD9A}{192.168.207.129}{192.168.207.129:9300}, added {{node-1}{t9KdncstRpOxx_VQoojD9A}{192.168.207.129}{192.168.207.129:9300},}, reason: zen-disco-receive(from master [{node-1}{t9KdncstRpOxx_VQoojD9A}{192.168.207.129}{192.168.207.129:9300}])
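To confirm the cluster really formed, listing the nodes over the HTTP port from either host should now show both of them, for example:

        [root@Test2 tools]# curl 'http://192.168.207.135:9200/_cat/nodes?v'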



