Hazelcast 3.8-EA WARNING: Received data format is invalid issue - java

While loading a Map from an external data source using MapLoader, a Hazelcast cluster (multicast discovery) gives this error:
WARNING: [<IP>]:5702 [<cluster_name>] [3.8-EA] Received data format is invalid. (An old version of Hazelcast may be running here.)
com.hazelcast.nio.serialization.HazelcastSerializationException: Problem while reading DataSerializable, namespace: 0, id: 0, class: 'com.hazelcast.cluster.impl.JoinRequest', exception: com.hazelcast.cluster.impl.JoinRequest
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.rethrowReadException(DataSerializableSerializer.java:178)
...
Caused by: java.lang.ClassNotFoundException: com.hazelcast.cluster.impl.JoinRequest
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
I have tested this on Hazelcast 3.5.4 and it works fine.
We can ignore this warning, but I am not sure what its impact is. It also floods the log.

The old and new versions of Hazelcast are not compatible in terms of multicast discovery, since the internal protocol changed. That is, the new Hazelcast version cannot identify the old version's discovery packet.
Please change the multicast group according to the documentation found under: http://docs.hazelcast.org/docs/3.8-EA/manual/html-single/index.html#multicast-element
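The same change can also be made programmatically instead of via XML; a minimal sketch (the group and port values below are placeholders, pick any group the old cluster is not using):
Config config = new Config();
// Placeholder group/port: any multicast group the old 3.5.x cluster is not on
config.getNetworkConfig().getJoin().getMulticastConfig()
        .setMulticastGroup("224.2.2.4")
        .setMulticastPort(54327);
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);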

For those that may be running into this problem in an OSGi environment, you may be getting bitten by a nuance of the com.hazelcast.util.ServiceLoader.findHighestReachableClassLoader() method: it sometimes picks the wrong class loader during Hazelcast initialization (it won't always pick the class loader you set on the config). The following shows a way to work around that problem by taking advantage of Java's context class loader:
private HazelcastInstance createHazelcastInstance() {
    // Use the following if you're only using the Hazelcast data serializers
    final ClassLoader classLoader = Hazelcast.class.getClassLoader();
    // Use the following if you have custom data serializers that you need
    // final ClassLoader classLoader = this.getClass().getClassLoader();
    final com.hazelcast.config.Config config = new com.hazelcast.config.Config();
    config.setClassLoader(classLoader);
    final ClassLoader previousContextClassLoader = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(classLoader);
        return Hazelcast.newHazelcastInstance(config);
    } finally {
        if (previousContextClassLoader != null) {
            Thread.currentThread().setContextClassLoader(previousContextClassLoader);
        }
    }
}


Elasticsearch XContent Provider

Help required on the exception below.
Intermittent error with the Elasticsearch RestHighLevelClient; we were not able to reproduce this locally.
Caused by: java.util.ServiceConfigurationError: org.elasticsearch.plugins.spi.NamedXContentProvider: Provider org.elasticsearch.client.indexlifecycle.IndexLifecycleNamedXContentProvider not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.elasticsearch.client.RestHighLevelClient.getProvidedNamedXContents(RestHighLevelClient.java:1887)
I recently experienced this same issue when trying to use the high-level client as part of a more complex application; using the client in a standalone project worked fine, however. I'm still tracking down the cause of the issue, but I fixed it by modifying the high-level client source code to explicitly set the class loader used by ServiceLoader.load. Specifically, the change I made was to the getProvidedNamedXContents() method in the RestHighLevelClient class:
Original:
static List<NamedXContentRegistry.Entry> getProvidedNamedXContents() {
    List<NamedXContentRegistry.Entry> entries = new ArrayList<>();
    for (NamedXContentProvider service : ServiceLoader.load(NamedXContentProvider.class)) {
        entries.addAll(service.getNamedXContentParsers());
    }
    return entries;
}
Modified:
static List<NamedXContentRegistry.Entry> getProvidedNamedXContents() {
    List<NamedXContentRegistry.Entry> entries = new ArrayList<>();
    Class<NamedXContentProvider> namedXContentProviderClass = NamedXContentProvider.class;
    for (NamedXContentProvider service : ServiceLoader.load(namedXContentProviderClass, namedXContentProviderClass.getClassLoader())) {
        entries.addAll(service.getNamedXContentParsers());
    }
    return entries;
}
After rebuilding the client with these changes, the issue no longer occurred for me. I hope this helps.
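If rebuilding the client is not an option, a less invasive workaround may be to set the thread's context class loader before constructing the client, since ServiceLoader.load(Class) resolves providers through the context class loader. A sketch, untested against this exact setup (the host and port are placeholders):
ClassLoader previous = Thread.currentThread().getContextClassLoader();
try {
    // Make ServiceLoader see the NamedXContentProvider implementations from
    // the class loader that loaded the Elasticsearch client classes
    Thread.currentThread().setContextClassLoader(RestHighLevelClient.class.getClassLoader());
    client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("localhost", 9200, "http")));
} finally {
    Thread.currentThread().setContextClassLoader(previous);
}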

WARNING: JGRP000014: Discovery.timeout has been deprecated: GMS.join_timeout should be used instead

WARNING: JGRP000014: Discovery.timeout has been deprecated: GMS.join_timeout should be used instead
Why am I getting this if it's not defined directly by me? At least I don't think it is; it looks like we're using GMS.join_timeout.
Here's how this one is configured:
log().info(
    "Starting JChannel for Distributable Sessions config:{} with channel name of {}",
    configString,
    channelName
);
jChannel = new JChannel(new PlainConfigurator(configString));
jChannel.connect(channelName);
replicatedSessionIds = new ReplicatedHashMap<>(jChannel);
sessionIds = replicatedSessionIds;
if (!sessionDistributedTest) {
    replicatedSessionIds.start(TIME_OUT);
}
and the output of that log message:
Starting JChannel for Distributable Sessions config:TCP(bind_addr=172.20.0.4;bind_port=7800;max_bundle_size=200000):TCPPING(timeout=3000;initial_hosts=dex.master[7800],dex.slave[7800];port_range=1):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK2(use_mcast_xmit=false;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=50000;max_bytes=400000):pbcast.GMS(print_local_addr=true;join_timeout=2000;view_bundling=true):pbcast.STATE_SOCK with channel name of Dex_SpringSecurity_Cluster_Dev
jgroups 3.6.13
You actually do define timeout in the configString passed to the channel constructor: TCPPING.timeout.
I have two suggestions for you:
Switch to XML-based configuration; plain-text configuration will no longer be supported in 4.0.
Use the tcp.xml shipped with 3.6.13 and modify it to your liking; your config looks a bit dated. As an immediate fix for the warning, see the sketch below.
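In the meantime, the warning itself comes from the timeout attribute on TCPPING; dropping it from the configString shown above should silence it, since GMS.join_timeout=2000 already governs the join phase. For example, the TCPPING element would become:
TCPPING(initial_hosts=dex.master[7800],dex.slave[7800];port_range=1)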

Reindex elasticsearch 2.3.3 when using Java NodeClient

With the deprecation of SearchType.SCAN and the new Reindex API, we want to migrate our Elasticsearch cluster and clients from 2.1.1 to 2.3.3.
We use Java and the appropriate libraries to access Elasticsearch. To access the cluster we use the TransportClient; for embedded unit tests we use the NodeClient.
Unfortunately, the Reindex API is provided as a plugin, which the NodeClient seems to be unable to deal with.
So the question is: how do you use the NodeClient with the Reindex plugin?
I already tried exposing the protected NodeClient constructor to pass the ReindexPlugin class as an argument, without success.
Using the NodeClient to start an embedded Elasticsearch and using the TransportClient with the added ReindexPlugin didn't work either. All I get here is an exception: ActionNotFoundTransportException[No handler for action [indices:data/write/reindex]]
Dependencies of interest:
org.elasticsearch:elasticsearch:2.3.3
org.elasticsearch.module:reindex:2.3.3
org.apache.lucene:lucene-expressions:5.5.1
org.codehaus.groovy:groovy:2.4.6
Starting the NodeClient:
Settings.Builder settings = Settings.settingsBuilder();
settings.put("path.data", "/some/path/data");
settings.put("path.home", "/some/path/home");
//settings.put("plugin.types", ReindexPlugin.class.getName()); // > No effect
settings.put("http.port", 9299);
settings.put("transport.tcp.port", 9399);
node = NodeBuilder.nodeBuilder()
        .clusterName("testcluster")
        .settings(settings)
        .local(true)
        .node();
// also tested with local(false); then no transport port is available, resulting in NoNodeAvailableException
Using TransportClient to access the Node:
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", "testcluster")
        .put("discovery.zen.ping.multicast.enabled", false)
        .build();
InetSocketTransportAddress[] addresses = new InetSocketTransportAddress[]
        {new InetSocketTransportAddress(new InetSocketAddress("localhost", 9399))};
client = TransportClient.builder()
        .settings(settings)
        .addPlugin(ReindexPlugin.class)
        .build()
        .addTransportAddresses(addresses);
Main part of triggering reindex:
ReindexRequestBuilder builder = ReindexAction.INSTANCE.newRequestBuilder(getClient())
        .source(indexFrom)
        .destination(indexTo)
        .refresh(true);
I was able to solve this by combining both approaches described above.
So creating the NodeClient involves overriding the Node:
class ExposedNode extends Node {
    public ExposedNode(Environment tmpEnv, Version version, Collection<Class<? extends Plugin>> classpathPlugins) {
        super(tmpEnv, version, classpathPlugins);
    }
}
And using it when starting the NodeClient:
Settings.Builder settings = Settings.settingsBuilder();
settings.put("path.data", "/some/path/data");
settings.put("path.home", "/some/path/home");
settings.put("http.port", 9299);
settings.put("transport.tcp.port", 9399);
// Construct the Node without NodeBuilder
List<Class<? extends Plugin>> classpathPlugins = ImmutableList.of(ReindexPlugin.class);
settings.put("node.local", false);
settings.put("cluster.name", "testcluster");
Settings preparedSettings = settings.build();
node = new ExposedNode(InternalSettingsPreparer.prepareEnvironment(preparedSettings, null), Version.CURRENT, classpathPlugins);
node.start();
After that you can use the TransportClient which adds the ReindexPlugin, as described in the question and shown below.
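For completeness, triggering the reindex through that TransportClient then looks like the snippet from the question; calling get() (inherited from ActionRequestBuilder) runs it synchronously. A sketch, with indexFrom and indexTo as in the question:
ReindexRequestBuilder builder = ReindexAction.INSTANCE.newRequestBuilder(client)
        .source(indexFrom)
        .destination(indexTo)
        .refresh(true);
builder.get(); // blocks until the reindex completes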
Nevertheless, this is a dirty hack that may break in a future release, and it shows how poorly Elasticsearch supports plugin development, in my opinion.

HazelcastClientCachingProvider class not found exception when requesting hazelcast jcache provider

I'm getting a strange warning when I try to use the Hazelcast-based implementation of JCache (i.e. JSR 107) as follows (original sample code):
// Explicitly retrieve the Hazelcast-backed javax.cache.spi.CachingProvider
CachingProvider cachingProvider =
        Caching.getCachingProvider("com.hazelcast.cache.impl.HazelcastCachingProvider");
// Retrieve the javax.cache.CacheManager
CacheManager cacheManager = cachingProvider.getCacheManager();
Here is the logged message:
oct. 30, 2014 5:17:59 PM com.hazelcast.cache.impl.HazelcastCachingProvider
WARNING: Could not load client CachingProvider! Fallback to server one... java.lang.ClassNotFoundException: com.hazelcast.client.cache.impl.HazelcastClientCachingProvider
Why is it trying to load HazelcastClientCachingProvider when I asked for com.hazelcast.cache.impl.HazelcastCachingProvider? Am I using the wrong JCache provider?
HazelcastCachingProvider is just a delegate that automatically chooses either the client-based or the server-based CachingProvider.
In recent 3.4 snapshots, HazelcastCachingProvider was also moved to com.hazelcast.cache.HazelcastCachingProvider. For the new documentation, please see the just-drafted documentation version for 3.4: https://github.com/hazelcast/hazelcast/blob/master/hazelcast-documentation/src/JCache.md
You'll see it got waaaaaaay longer :)
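If you want to skip the client/server detection (and the fallback warning) entirely, you can ask for the server-side provider directly. A sketch, assuming the hazelcast server jar is on the classpath (the class name below is the 3.x server-side provider):
// Bypass the delegating provider and load the server-side one directly
CachingProvider provider =
        Caching.getCachingProvider("com.hazelcast.cache.impl.HazelcastServerCachingProvider");
CacheManager cacheManager = provider.getCacheManager();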

Unable to retrieve slot information for clusters with admission failover level control policy enabled

I'm experiencing some problems when trying to retrieve the slot information for VMware clusters with the "admission failover level" control policy enabled. I use the VI Java API.
When calling the following method:
clusterComputeResource.retrieveDasAdvancedRuntimeInfo()
I either get the following Exception:
java.rmi.RemoteException: VI SDK invoke exception:java.rmi.RemoteException: Exception in
WSClient.invoke:; nested exception is:
java.lang.NoSuchFieldException: slotInfo
at com.vmware.vim25.ws.WSClient.invoke(WSClient.java:122)
at com.vmware.vim25.ws.VimStub.retrieveDasAdvancedRuntimeInfo(VimStub.java:269)
or I get a result of type ClusterDasAdvancedRuntimeInfo,
but I need the subclass ClusterDasFailoverLevelAdvancedRuntimeInfo in order to get the slotInfo field (casting to the required subclass doesn't work either).
I tried to access the web service of a vCenter directly via SoapUI and it worked without any problems, but with the vijava API it just doesn't.
Thanks in advance for any help!!!
After a lot of debugging to see what the VI Java API does internally, I found out that if the web service client (wsc) is invoked with the name of the subclass instead of the name of the superclass (as the last parameter), the response is converted correctly. This way the slot information can be retrieved without any problems. Here is the solution for those experiencing the same problem:
ClusterDasFailoverLevelAdvancedRuntimeInfo clusterDasFailoverLevelAdvancedRuntimeInfo = null;
try {
    final Argument[] paras = new Argument[1];
    paras[0] = new Argument("_this", "ManagedObjectReference", clusterComputeResource.getMOR());
    clusterDasFailoverLevelAdvancedRuntimeInfo =
            (ClusterDasFailoverLevelAdvancedRuntimeInfo) serviceInstance.getServerConnection()
                    .getVimService().getWsc()
                    .invoke("RetrieveDasAdvancedRuntimeInfo", paras, "ClusterDasFailoverLevelAdvancedRuntimeInfo");
} catch (final Exception e) {
    // error handling
}
(Note that this only works if admission control failover level policy is enabled!!!)
