elasticsearch es
Hi everyone. While learning Elasticsearch I've run into a problem that neither Baidu nor Google could explain; I'm hoping someone experienced here can help.
CentOS Linux release 7.2.1511 (Core) 64bit
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
elasticsearch-2.1.1
elasticsearch-head is installed.
The following has already been added to /etc/security/limits.conf:
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
Elasticsearch itself was installed via yum.
Any fix or ideas for troubleshooting would be much appreciated. Thanks.
# more /var/log/elasticsearch/my-application.log
Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
This can result in part of the JVM being swapped out.
Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
If you are logged in interactively, you will have to re-login for the new limits to take effect.
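A side note on the memlock warning above: with a yum install on CentOS 7 the service is started by systemd, and systemd does not read /etc/security/limits.conf (that file only applies to PAM login sessions), which would explain why the entries there have no effect. A minimal sketch of the usual fix, assuming the ES 2.x RPM packaging defaults (the paths below are the packaging conventions, not taken from the logs):

# /etc/elasticsearch/elasticsearch.yml -- ask ES to mlockall() its heap
bootstrap.mlockall: true

# /etc/sysconfig/elasticsearch -- lift the locked-memory limit for the service
MAX_LOCKED_MEMORY=unlimited

# or equivalently for systemd, in a unit override (systemctl edit elasticsearch):
# [Service]
# LimitMEMLOCK=infinity

After a service restart the "Unable to lock JVM Memory" warning should be gone if the limit was picked up.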
version, pid, build
initializing ...
loaded [], sites []
using data paths, mounts [[/ (rootfs)]], net usable_space , net total_space , spins? ,
types
initialized
starting ...
publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {192.168.207.135}
publish_address {192.168.207.135:9300}, bound_addresses {[::]:9300}
my-application/M9uHSPkzT7W-DBT6V7xf9A
new_master {node-2}{M9uHSPkzT7W-DBT6V7xf9A}{192.168.207.135}{192.168.207.135:9300}, reason: zen-disco-join(elected_as_master, joins received)
publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {192.168.207.135}
publish_address {192.168.207.135:9200}, bound_addresses {[::]:9200}
started
recovered indices into cluster_state
exception caught on transport layer [], closing connection
java.io.StreamCorruptedException: invalid internal transport message format, got (73,74,61,74)
at org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeHeaderFrameDecoder.java:64)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
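For what it's worth, the four bytes in the exception decode as ASCII: 0x73 = 's', 0x74 = 't', 0x61 = 'a', 0x74 = 't', i.e. the text "stat". The binary transport protocol expects messages on port 9300 to start with the marker bytes 'E' 'S', so something was evidently sending plain text to the transport port rather than speaking the node-to-node protocol (the HTTP API lives on 9200).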
(The same exception and stack trace repeat for each new connection.)
This problem has since been solved; I made the following changes:
discovery.zen.ping.multicast.enabled: false
(turns off multicast node discovery, which defaults to true)
discovery.zen.ping.unicast.hosts: ["192.168.207.129", "192.168.207.135"]
(sets the explicit list of hosts to contact for discovery)
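Put together, the discovery-related part of /etc/elasticsearch/elasticsearch.yml on this node now looks roughly like the sketch below; cluster.name and node.name come from the logs above, and network.host is inferred from the wildcard publish address (node-1 gets the analogous settings). Before the change each node elected itself master (new_master in the startup log); with unicast discovery node-2 joins node-1 instead:

cluster.name: my-application
node.name: node-2
network.host: 0.0.0.0
# disable multicast and list the cluster nodes explicitly
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.207.129", "192.168.207.135"]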
The log now shows:
detected_master {node-1}{t9KdncstRpOxx_VQoojD9A}{192.168.207.129}{192.168.207.129:9300}, added {{node-1}{t9KdncstRpOxx_VQoojD9A}{192.168.207.129}{192.168.207.129:9300},}, reason: zen-disco-receive(from master [{node-1}{t9KdncstRpOxx_VQoojD9A}{192.168.207.129}{192.168.207.129:9300}])
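To double-check that the two nodes really formed one cluster, the standard cluster health API can be queried on either node; number_of_nodes should be 2:

# curl 'http://192.168.207.135:9200/_cluster/health?pretty'
# expect "number_of_nodes" : 2 and "status" : "green" (or "yellow")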