Elasticsearch XContent Provider - Java

Help required on the exception below.
This is an intermittent error with the Elasticsearch RestHighLevelClient; we were not able to reproduce it locally.
Caused by: java.util.ServiceConfigurationError: org.elasticsearch.plugins.spi.NamedXContentProvider: Provider org.elasticsearch.client.indexlifecycle.IndexLifecycleNamedXContentProvider not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.elasticsearch.client.RestHighLevelClient.getProvidedNamedXContents(RestHighLevelClient.java:1887)

I recently experienced this same issue when trying to use the high-level client as part of a more complex application; using the client in a standalone project worked fine, however. I'm still tracking down the cause, but I fixed it by modifying the high-level client source code to explicitly set the class loader used by ServiceLoader.load. Specifically, the change I made was to the getProvidedNamedXContents() method in the RestHighLevelClient class:
Original:
static List<NamedXContentRegistry.Entry> getProvidedNamedXContents() {
    List<NamedXContentRegistry.Entry> entries = new ArrayList<>();
    for (NamedXContentProvider service : ServiceLoader.load(NamedXContentProvider.class)) {
        entries.addAll(service.getNamedXContentParsers());
    }
    return entries;
}
Modified:
static List<NamedXContentRegistry.Entry> getProvidedNamedXContents() {
    List<NamedXContentRegistry.Entry> entries = new ArrayList<>();
    Class<NamedXContentProvider> namedXContentProviderClass = NamedXContentProvider.class;
    for (NamedXContentProvider service : ServiceLoader.load(namedXContentProviderClass, namedXContentProviderClass.getClassLoader())) {
        entries.addAll(service.getNamedXContentParsers());
    }
    return entries;
}
After rebuilding the client with these changes, the issue no longer occurred for me. I hope this helps.
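If rebuilding the client is not an option, a similar effect can sometimes be achieved from the outside, because the single-argument ServiceLoader.load uses the thread's context class loader for provider lookup. The following is only a sketch of that idea, not an official fix; the host and port are placeholders:
// Sketch: temporarily point the context class loader at the loader that can see
// both NamedXContentProvider and the client-side provider implementations.
// HttpHost is org.apache.http.HttpHost; the host/port below are placeholders.
ClassLoader previous = Thread.currentThread().getContextClassLoader();
Thread.currentThread().setContextClassLoader(RestHighLevelClient.class.getClassLoader());
try {
    RestHighLevelClient client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("localhost", 9200, "http")));
    // use the client ...
} finally {
    Thread.currentThread().setContextClassLoader(previous);
}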

Related

Apache mina-sshd ssh client always prints EdDSA provider not supported

I'm using Apache sshd's SSH client. Whenever I establish a connection to the destination SSH server, I see this in the logs. The connection works, but is there something wrong? How can I fix it?
The exception looks like:
(SshException) to process: EdDSA provider not supported
How to fix
To fix the problem, add a dependency on net.i2p.crypto:eddsa. Bouncy Castle does not provide an implementation of EdDSA. For example, in Maven add this dependency:
<dependency>
    <groupId>net.i2p.crypto</groupId>
    <artifactId>eddsa</artifactId>
    <version>0.3.0</version>
</dependency>
Impact of not fixing
If you don't fix this, then you will not be able to validate the host keys. My testing was not impacted because I was not validating the host keys yet. However, once deployed to production, I would have been impacted because host keys must be validated.
Details
In the Apache mina-sshd source code, the class SecurityUtils reveals the problem. That class hardcodes the registrar for EdDSA to EdDSASecurityProviderRegistrar:
public static final List<String> DEFAULT_SECURITY_PROVIDER_REGISTRARS = Collections.unmodifiableList(
        Arrays.asList(
                "org.apache.sshd.common.util.security.bouncycastle.BouncyCastleSecurityProviderRegistrar",
                "org.apache.sshd.common.util.security.eddsa.EdDSASecurityProviderRegistrar"));
Looking through EdDSASecurityProviderRegistrar you see that it expects the class net.i2p.crypto.eddsa.EdDSAKey to exist:
@Override
public boolean isSupported() {
    Boolean supported;
    synchronized (supportHolder) {
        supported = supportHolder.get();
        if (supported != null) {
            return supported.booleanValue();
        }
        ClassLoader cl = ThreadUtils.resolveDefaultClassLoader(getClass());
        supported = ReflectionUtils.isClassAvailable(cl, "net.i2p.crypto.eddsa.EdDSAKey");
        supportHolder.set(supported);
    }
    return supported.booleanValue();
}
A quick Google search shows that net.i2p.crypto.eddsa.EdDSAKey is provided by the library net.i2p.crypto:eddsa.
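If you want to confirm at runtime that the dependency is actually on the classpath, a minimal check that mirrors what the registrar does looks like this; the class and message text here are just illustrative, only the probed class name comes from the registrar:
// Minimal sketch: probe for the same class EdDSASecurityProviderRegistrar checks,
// so a missing net.i2p.crypto:eddsa dependency is reported at startup.
public final class EdDsaAvailabilityCheck {
    public static void main(String[] args) {
        try {
            Class.forName("net.i2p.crypto.eddsa.EdDSAKey");
            System.out.println("EdDSA classes found; the registrar will report EdDSA as supported.");
        } catch (ClassNotFoundException e) {
            System.out.println("net.i2p.crypto:eddsa is not on the classpath; EdDSA host keys cannot be validated.");
        }
    }
}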

Hazelcast 3.8-EA WARNING: Received data format is invalid issue

While loading a Map from an external data source using MapLoader, a Hazelcast cluster (multicast discovery) gives this error:
WARNING: [<IP>]:5702 [<cluster_name>] [3.8-EA] Received data format is invalid. (An old version of Hazelcast may be running here.)
com.hazelcast.nio.serialization.HazelcastSerializationException: Problem while reading DataSerializable, namespace: 0, id: 0, class: 'com.hazelcast.cluster.impl.JoinRequest', exception: com.hazelcast.cluster.impl.JoinRequest
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.rethrowReadException(DataSerializableSerializer.java:178)
...
Caused by: java.lang.ClassNotFoundException: com.hazelcast.cluster.impl.JoinRequest
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
I have tested this on Hazelcast 3.5.4 and it works fine.
We can ignore this warning, but I am not sure what its impact is. It also floods the log.
The old and new versions of Hazelcast are not compatible in terms of multicast discovery, since the internal protocol changed. That is, the new Hazelcast version cannot identify the old version's discovery packet.
Please change the multicast group according to the documentation found under: http://docs.hazelcast.org/docs/3.8-EA/manual/html-single/index.html#multicast-element
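For reference, the multicast group and port can also be set programmatically so the new cluster stops exchanging discovery packets with the old one. A minimal sketch, with example values that are not from the original post:
// Sketch: give the 3.8 cluster its own multicast group/port so old nodes on the
// default group no longer see (and reject) its join packets.
com.hazelcast.config.Config config = new com.hazelcast.config.Config();
config.getNetworkConfig().getJoin().getMulticastConfig()
        .setEnabled(true)
        .setMulticastGroup("224.2.2.4")   // example value
        .setMulticastPort(54328);         // example value
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);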
For those who may be running into this problem in an OSGi environment: you may be getting bitten by a nuance of the com.hazelcast.util.ServiceLoader.findHighestReachableClassLoader() method, which sometimes picks the wrong class loader during Hazelcast initialization (it won't always pick the class loader you set on the config). The following shows a way to work around that problem by taking advantage of Java's thread context class loader:
private HazelcastInstance createHazelcastInstance() {
    // Use the following if you're only using the Hazelcast data serializers
    final ClassLoader classLoader = Hazelcast.class.getClassLoader();
    // Use the following if you have custom data serializers that you need
    // final ClassLoader classLoader = this.getClass().getClassLoader();
    final com.hazelcast.config.Config config = new com.hazelcast.config.Config();
    config.setClassLoader(classLoader);
    final ClassLoader previousContextClassLoader = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(classLoader);
        return Hazelcast.newHazelcastInstance(config);
    } finally {
        if (previousContextClassLoader != null) {
            Thread.currentThread().setContextClassLoader(previousContextClassLoader);
        }
    }
}

Reindex Elasticsearch 2.3.3 when using Java NodeClient

With the deprecation of SearchType.SCAN and the new Reindex API, we want to migrate our Elasticsearch cluster and clients from 2.1.1 to 2.3.3.
We use Java and the appropriate libraries to access Elasticsearch. To access the cluster we use the TransportClient; for embedded unit tests we use the NodeClient.
Unfortunately, the Reindex API is provided as a plugin, which the NodeClient seems to be unable to deal with.
So the question is: how can the NodeClient be used with the Reindex plugin?
I already tried exposing the protected NodeClient constructor to pass the ReindexPlugin class as an argument, without success.
Using the NodeClient to start an embedded Elasticsearch and using the TransportClient with the added ReindexPlugin didn't work either. All I get there is an exception: ActionNotFoundTransportException[No handler for action [indices:data/write/reindex]]
Dependencies of interest:
org.elasticsearch:elasticsearch:2.3.3
org.elasticsearch.module:reindex:2.3.3
org.apache.lucene:lucene-expressions:5.5.1
org.codehaus.groovy:groovy:2.4.6
Starting the NodeClient:
Settings.Builder settings = Settings.settingsBuilder();
settings.put("path.data", "/some/path/data");
settings.put("path.home", "/some/path/home");
//settings.put("plugin.types", ReindexPlugin.class.getName()); > No effect
settings.put("http.port", 9299);
settings.put("transport.tcp.port", 9399);
node = NodeBuilder.nodeBuilder()
        .clusterName("testcluster")
        .settings(settings)
        .local(true)
        .node();
// also tested with local(false), then no transport port is available, resulting in NoNodeAvailableException
Using TransportClient to access the Node:
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", "testcluster")
        .put("discovery.zen.ping.multicast.enabled", false)
        .build();
InetSocketTransportAddress[] addresses = new InetSocketTransportAddress[]
        {new InetSocketTransportAddress(new InetSocketAddress("localhost", 9399))};
client = TransportClient.builder()
        .settings(settings)
        .addPlugin(ReindexPlugin.class)
        .build()
        .addTransportAddresses(addresses);
Main part of triggering reindex:
ReindexRequestBuilder builder = ReindexAction.INSTANCE.newRequestBuilder(getClient())
        .source(indexFrom)
        .destination(indexTo)
        .refresh(true);
I was able to solve this, by combining both approaches described above.
So creating the NodeClient involves overriding the Node:
class ExposedNode extends Node {
    public ExposedNode(Environment tmpEnv, Version version, Collection<Class<? extends Plugin>> classpathPlugins) {
        super(tmpEnv, version, classpathPlugins);
    }
}
And using it when starting the NodeClient:
Settings.Builder settings = Settings.settingsBuilder();
settings.put("path.data", "/some/path/data");
settings.put("path.home", "/some/path/home");
settings.put("http.port", 9299);
settings.put("transport.tcp.port", 9399);
// Construct Node without NodeBuilder
List<Class<? extends Plugin>> classpathPlugins = ImmutableList.of(ReindexPlugin.class);
settings.put("node.local", false);
settings.put("cluster.name", "testcluster");
Settings preparedSettings = settings.build();
node = new ExposedNode(InternalSettingsPreparer.prepareEnvironment(preparedSettings, null), Version.CURRENT, classpathPlugins);
node.start();
After that you can use the TransportClient which adds the ReindexPlugin, as described in the question.
Nevertheless, this is a dirty hack that may break in a future release, and in my opinion it shows how poorly Elasticsearch supports plugin development.

TokenResponseException: missing scope when using the RemoteAPI of AppEngine in Java with OAuth 2.0 on stand-alone application

My goal is for my stand-alone application to access the datastore of a Google App Engine application so that I can query it. My application used to work with ClientLogin, but I have been asked to use OAuth 2.0 for the authentication (and using ClientLogin doesn't work anymore).
I follow the instructions on this page: https://cloud.google.com/appengine/docs/java/tools/remoteapi
I use the provided code, have made a service account, downloaded the JSON key, and made an environment variable pointing to this key. The result is that I get the following exception:
Exception in thread "main" java.lang.ExceptionInInitializerError
at myApplication.myClass4.moveResultsOfFeature(myClass4.java:51)
at myApplication.myClass2.migrate(MyClass3.java:32)
at myApplication.myClass1.main(Starter.java:11)
Caused by: java.lang.RuntimeException: Failed to acquire Google Application Default credential.
at com.google.appengine.tools.remoteapi.RemoteApiOptions.useApplicationDefaultCredential(RemoteApiOptions.java:163)
at commonMigration.RemoteOptions.<clinit>(RemoteOptions.java:18)
... 3 more
Caused by: com.google.appengine.repackaged.com.google.api.client.auth.oauth2.TokenResponseException: 400 Bad Request
{
"error" : "invalid_scope",
"error_description" : "Empty or missing scope not allowed."
}
at com.google.appengine.repackaged.com.google.api.client.auth.oauth2.TokenResponseException.from(TokenResponseException.java:105)
at com.google.appengine.repackaged.com.google.api.client.auth.oauth2.TokenRequest.executeUnparsed(TokenRequest.java:287)
at com.google.appengine.repackaged.com.google.api.client.auth.oauth2.TokenRequest.execute(TokenRequest.java:307)
at com.google.appengine.repackaged.com.google.api.client.googleapis.auth.oauth2.GoogleCredential.executeRefreshToken(GoogleCredential.java:384)
at com.google.appengine.repackaged.com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489)
at com.google.appengine.tools.remoteapi.RemoteApiOptions.useApplicationDefaultCredential(RemoteApiOptions.java:160)
... 4 more
which seems to point to a missing scope argument, a concern which isn't mentioned in the explanation on the web page. Is there an easy way to fix this issue?
Per request, my code (simplified):
public class StackOverflow {
    private static RemoteApiOptions REMOTE_OPTIONS = new RemoteApiOptions()
            .server("<application-id>.appspot.com", 443)
            .useApplicationDefaultCredential();

    public static void main(String[] args0) throws IOException {
        // MAKING THE CONNECTION
        RemoteApiInstaller installer = new RemoteApiInstaller();
        // LOAD FROM Local
        installer.install(REMOTE_OPTIONS);
        try {
            // MY OPERATIONS
        } finally {
            installer.uninstall();
        }
    }
}
This is a current limitation of the Remote API. See the note here:
Note: The Remote API call to useApplicationDefaultCredential() can only use credentials provided by the gcloud command.
(It's possible you followed the instructions before the note was added, since it is a recently discovered limitation). The limitation will be fixed in a future release. For now, you should either run:
gcloud auth login
and use your user account to authenticate with useApplicationDefaultCredential(). Or, you can use a service account with useServiceAccountCredential(), which accepts the service account email and a path to a p12 file instead of the JSON file.
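A minimal sketch of the service-account alternative, assuming useServiceAccountCredential(serviceAccountId, p12KeyPath); the account email and key-file path below are placeholders, not values from the original post:
// Sketch: authenticate the Remote API with a service account and a p12 key file.
// The email and path are placeholders.
RemoteApiOptions options = new RemoteApiOptions()
        .server("<application-id>.appspot.com", 443)
        .useServiceAccountCredential(
                "my-service-account@<application-id>.iam.gserviceaccount.com",
                "/path/to/service-account-key.p12");
RemoteApiInstaller installer = new RemoteApiInstaller();
installer.install(options);
try {
    // datastore operations
} finally {
    installer.uninstall();
}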

Unable to retrieve slot information for clusters with admission failover level control policy enabled

I'm experiencing some problems when trying to retrieve the slot information for VMware clusters with admission failover level control policy enabled. I use the VI Java API.
When calling the following method:
clusterComputeResource.retrieveDasAdvancedRuntimeInfo()
I either get the following Exception:
java.rmi.RemoteException: VI SDK invoke exception:java.rmi.RemoteException: Exception in
WSClient.invoke:; nested exception is:
java.lang.NoSuchFieldException: slotInfo
at com.vmware.vim25.ws.WSClient.invoke(WSClient.java:122)
at com.vmware.vim25.ws.VimStub.retrieveDasAdvancedRuntimeInfo(VimStub.java:269)
or I get a result of type ClusterDasAdvancedRuntimeInfo,
but I need the subclass ClusterDasFailoverLevelAdvancedRuntimeInfo in order to get the SlotInfo field (casting to the required subclass doesn't work either).
I tried to access the web service of a vCenter directly via SoapUI and it worked without any problems, but with the vijava API it just doesn't.
Thanks in advance for any help!!!
After a lot of debugging to see what the VI Java API does internally, I found out that if the web service client (wsc) is invoked with the name of the subclass instead of the name of the superclass (as the last parameter), the response will be converted correctly. This way the slot information can be retrieved without any problems. Here is the solution for those experiencing the same problem:
ClusterDasFailoverLevelAdvancedRuntimeInfo clusterDasFailoverLevelAdvancedRuntimeInfo = null;
try {
    final Argument[] paras = new Argument[1];
    paras[0] = new Argument("_this", "ManagedObjectReference", clusterComputeResource.getMOR());
    clusterDasFailoverLevelAdvancedRuntimeInfo =
            (ClusterDasFailoverLevelAdvancedRuntimeInfo) serviceInstance.getServerConnection().getVimService().getWsc()
                    .invoke("RetrieveDasAdvancedRuntimeInfo", paras, "ClusterDasFailoverLevelAdvancedRuntimeInfo");
} catch (final Exception e) {
    // error handling
}
(Note that this only works if the admission control failover level policy is enabled.)
