I am trying to connect to Elasticsearch with Search Guard from a Spring Boot app.
I create a bean for the TransportClient.
It looks like this:
Settings settings = Settings.builder()
        .put(SSLConfigConstants.SEARCHGUARD_SSL_TRANSPORT_KEYSTORE_TYPE, "PKCS12")
        .put(SSLConfigConstants.SEARCHGUARD_SSL_TRANSPORT_KEYSTORE_FILEPATH, keyStore)
        .put(SSLConfigConstants.SEARCHGUARD_SSL_TRANSPORT_KEYSTORE_PASSWORD, keyPassword)
        .put(SSLConfigConstants.SEARCHGUARD_SSL_TRANSPORT_TRUSTSTORE_FILEPATH, trustStore)
        .put("cluster.name", clusterName)
        .build();

TransportClient client = new PreBuiltTransportClient(settings, SearchGuardPlugin.class);

TransportAddress[] addresses = clusterNodes.stream()
        .map(node -> {
            String[] url = StringUtils.deleteWhitespace(node).split(":");
            return new TransportAddress(new InetSocketAddress(url[0], Integer.parseInt(url[1])));
        })
        .toArray(TransportAddress[]::new);

client.addTransportAddresses(addresses);
My repository extends ElasticsearchRepository.
But I receive a strange exception when my app starts:
ERROR com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport - SSL Problem error:10000410:SSL routines:OPENSSL_internal:SSLV3_ALERT_HANDSHAKE_FAILURE
javax.net.ssl.SSLHandshakeException: error:10000410:SSL routines:OPENSSL_internal:SSLV3_ALERT_HANDSHAKE_FAILURE
What could be the reason? Which part of the code should I check?
I have another app that uses ElasticsearchTemplate directly (only SearchQuery), and there I don't have any problems.
Elasticsearch version: 6.4.3
Fixed it by setting searchguard.ssl.transport.enable_openssl_if_available to false:
...
.put(SSLConfigConstants.SEARCHGUARD_SSL_TRANSPORT_ENABLE_OPENSSL, false)
...
I have this problem when I try to run a function with a BlobTrigger.
Microsoft.Azure.WebJobs.Host: Error indexing method 'Functions.myFunction'. Microsoft.Azure.WebJobs.Extensions.Storage:
Storage account connection string for 'AzureWebJobsAzureCosmosDBConnection' is invalid.
The variables are:
AzureWebJobsAzureCosmosDBConnection = AccountEndpoint=https://example.com/;AccountKey=YYYYYYYYYYY;
AzureWebJobsStorage = UseDevelopmentStorage=true
AzureCosmosDBConnection = AccountEndpoint=https://example.com/;AccountKey=YYYYYYYYYYY;
I don't know why this function throws an exception.
I'm not sure whether your local.settings.json configuration is actually written in key = value format or whether you just presented it that way in the question.
The format of the local.settings.json configuration for any Azure Function is "key": "value":
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=pravustorageac88;AccountKey=<alpha-numeric-symbolic_access_key>;EndpointSuffix=core.windows.net",
"FUNCTIONS_WORKER_RUNTIME": "java",
"MAIN_CLASS":"com.example.DemoApplication",
"AzureWebJobsDashboard": "DefaultEndpointsProtocol=https;AccountName=pravustorageac88;AccountKey=<alpha-numeric-symbolic_access_key>;EndpointSuffix=core.windows.net",
"AzureCosmosDBConnStr": "Cosmos_db_conn_str"
}
}
If you are using a Cosmos DB connection string, then you have to configure it this way:
public HttpResponseMessage execute(
        @HttpTrigger(name = "request", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<User>> request,
        @CosmosDBOutput(name = "database", databaseName = "db_name", collectionName = "collection_name", connectionStringSetting = "AzureCosmosDBConnection")
        ExecutionContext context)
Make sure the Cosmos database connection string present in the local.settings.json file is published to the Azure Function App (Configuration menu > Application Settings).
For that, either uncomment local.settings.json in the .gitignore file or add the configuration settings manually in the Azure Function App Configuration.
I uncommented local.settings.json in the .gitignore file and then published to the Azure Function App, so the Cosmos database connection string was also updated in the configuration.
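For the publish step, Azure Functions Core Tools can also push the values from local.settings.json to the Function App's Application Settings (a sketch; the app name is a placeholder for your own Function App):

```shell
# Publish the function code and also upload the values from local.settings.json
# to the Function App's Application Settings in Azure.
# -i prompts before overwriting settings that already exist remotely.
func azure functionapp publish <your-function-app-name> --publish-local-settings -i
```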
Note:
If you have a proxy on the system, then you have to add the proxy settings in the func.exe configuration file, as given here by @p31415926.
You can configure the Cosmos DB connection in the Azure Function Java stack in two ways: bindings (as in the code above) or using the SDK given in this MS Doc.
I am maintaining a JSP/Servlet application that uses the MongoDB 3.8 Java driver. At this point, I cannot upgrade to a newer version.
I have occasionally experienced some timeouts when connecting from the application to the database. Based on what I read in https://mongodb.github.io/mongo-java-driver/3.8/driver/tutorials/connect-to-mongodb/ I wrote the following code:
CodecRegistry pojoCodecRegistry = fromRegistries(com.mongodb.MongoClient.getDefaultCodecRegistry(),
        fromProviders(PojoCodecProvider.builder().automatic(true).build()));

MongoCredential credential =
        MongoCredential.createCredential(theuser, database, thepassword.toCharArray());

MongoClientSettings settings = MongoClientSettings.builder()
        .credential(credential)
        .codecRegistry(pojoCodecRegistry)
        .applyToClusterSettings(builder ->
                builder.hosts(Arrays.asList(new ServerAddress(addr, 27017))))
        .build();

MongoClient client = MongoClients.create(settings);
This works, but eventually times out (usually when reloading a JSP page).
I figured out I could create a SocketSettings instance with:
SocketSettings socketOptions = SocketSettings.builder().connectTimeout(60,TimeUnit.SECONDS).build();
But I cannot figure out how to apply these settings when creating the MongoClient instance. Any hints?
Thanks.
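If it helps: in the 3.8 driver, `MongoClientSettings.Builder` has an `applyToSocketSettings` method that takes a block configuring the same `SocketSettings.Builder` shown above, so the timeout can be applied inline while building the client settings. A sketch based on the 3.8 driver API (`credential`, `pojoCodecRegistry`, and `addr` are the variables from the code above):

```java
MongoClientSettings settings = MongoClientSettings.builder()
        .credential(credential)
        .codecRegistry(pojoCodecRegistry)
        .applyToClusterSettings(builder ->
                builder.hosts(Arrays.asList(new ServerAddress(addr, 27017))))
        // Apply the socket timeouts here instead of building a
        // standalone SocketSettings instance
        .applyToSocketSettings(builder ->
                builder.connectTimeout(60, TimeUnit.SECONDS)
                       .readTimeout(60, TimeUnit.SECONDS))
        .build();

MongoClient client = MongoClients.create(settings);
```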
I have a problem with the Vert.x HttpClient.
Here's code that tests a GET using both Vert.x and plain Java.
Vertx vertx = Vertx.vertx();

HttpClientOptions options = new HttpClientOptions()
        .setTrustAll(true)
        .setSsl(false)
        .setDefaultPort(80)
        .setProtocolVersion(HttpVersion.HTTP_1_1)
        .setLogActivity(true);

HttpClient client = vertx.createHttpClient(options);
client.getNow("google.com", "/", response -> {
    System.out.println("Received response with status code " + response.statusCode());
});

System.out.println(getHTML("http://google.com"));
Where getHTML() is from here: How do I do a HTTP GET in Java?
This is my output:
<!doctype html><html... etc <- correct output from plain java
Feb 08, 2017 11:31:21 AM io.vertx.core.http.impl.HttpClientRequestImpl
SEVERE: java.net.UnknownHostException: failed to resolve 'google.com'. Exceeded max queries per resolve 3
But Vert.x can't connect. What's wrong here? I'm not using any proxy.
For reference: a solution, as described in this question and in tsegismont's comment here, is to set the flag vertx.disableDnsResolver to true:
-Dvertx.disableDnsResolver=true
in order to fall back to the JVM DNS resolver as explained here:
sometimes it can be desirable to use the JVM built-in resolver, the JVM system property -Dvertx.disableDnsResolver=true activates this behavior
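If you cannot change the JVM launch arguments, the same flag can also be set programmatically, as long as it happens before the first Vertx instance is created (a minimal sketch; the class name is hypothetical):

```java
public class DisableVertxDnsResolver {
    public static void main(String[] args) {
        // Equivalent to passing -Dvertx.disableDnsResolver=true on the
        // command line. It must run before Vertx.vertx() is first called,
        // because the resolver choice is read when Vert.x bootstraps.
        System.setProperty("vertx.disableDnsResolver", "true");
        System.out.println(System.getProperty("vertx.disableDnsResolver"));
    }
}
```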
I observed this DNS resolution issue with a redis client in a kubernetes environment.
I had this issue; what caused it for me was stale DNS servers being picked up by the Java runtime, i.e. servers registered for a network the machine was no longer connected to. The issue originates in the Sun JNDI implementation; it also exists in Netty, which uses JNDI to bootstrap its list of name servers on most platforms, and finally shows up in Vert.x.
I think a good place to fix this would be in the Netty layer, where the set of default DNS servers is bootstrapped. I have raised a ticket with the Netty project, so we'll see if they agree with me! Here is the Netty ticket.
In the meantime, a fairly basic workaround is to filter the default DNS servers detected by Netty based on whether they are reachable or not. Here is a code sample in Kotlin to apply before constructing the main Vert.x instance.
// The default set of name servers provided by JNDI can contain stale entries.
// This default set is picked up by Netty and in turn by Vert.x.
// To work around this, we filter for only reachable name servers on startup.
val nameServers = DefaultDnsServerAddressStreamProvider.defaultAddressList()
val reachableNameServers = nameServers.stream()
        .filter { ns -> ns.address.isReachable(NS_REACHABLE_TIMEOUT) }
        .map { ns -> ns.address.hostAddress }
        .collect(Collectors.toList())

if (reachableNameServers.isEmpty())
    throw StartupException("There are no reachable name servers available")

val opts = VertxOptions()
opts.addressResolverOptions.servers = reachableNameServers

// The primary Vertx instance
val vertx = Vertx.vertx(opts)
A little more detail in case it is helpful. I have a company machine which at some point was connected to the company network by a physical cable. Details of the company's internal name servers were set up by DHCP on the physical interface. Using the wireless interface at home, DNS for the wireless interface is set to my home DNS, while the config for the physical interface is not updated. This is fine, since that device is not active; ipconfig /all does not show the internal company DNS servers. However, looking in the registry, they are still there:
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
They get picked up by the JNDI mechanism, which feeds Netty and in turn Vert.x. Since they are not reachable from my home location, DNS resolution fails. I can imagine this home/office situation is not unique to me! I don't know whether something similar could occur with multiple virtual interfaces on containers or VMs; it could be worth looking at if you are having problems.
Here is the sample code which works for me.
public class TemplVerticle extends HttpVerticle {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Create the web client and enable SSL/TLS with a trust store
        WebClient client = WebClient.create(vertx,
                new WebClientOptions()
                        .setSsl(true)
                        .setTrustAll(true)
                        .setDefaultPort(443)
                        .setKeepAlive(true)
                        .setDefaultHost("www.w3schools.com")
        );
        // Request "/" on the default host configured above
        client.get("/")
                .as(BodyCodec.string())
                .send(ar -> {
                    if (ar.succeeded()) {
                        HttpResponse<String> response = ar.result();
                        System.out.println("Got HTTP response body");
                        System.out.println(response.body());
                    } else {
                        ar.cause().printStackTrace();
                    }
                });
    }
}
Try using WebClient instead of HttpClient; here you have an example (with Rx):
private val client: WebClient = WebClient.create(vertx, WebClientOptions()
        .setSsl(true)
        .setTrustAll(true)
        .setDefaultPort(443)
        .setKeepAlive(true)
)

open fun <T> get(uri: String, marshaller: Class<T>): Single<T> {
    return client.getAbs(host + uri).rxSend()
            .map { extractJson(it, uri, marshaller) }
}
Another option is to use getAbs.
With the deprecation of SearchType.SCAN and the new Reindex API, we want to migrate our Elasticsearch cluster and clients from 2.1.1 to 2.3.3.
We use Java and the appropriate libraries to access Elasticsearch. To access the cluster we use the TransportClient; for embedded unit tests we use the NodeClient.
Unfortunately, the Reindex API is provided as a plugin, which the NodeClient seems unable to deal with.
So the question is how to use the NodeClient with the Reindex-Plugin?
I already tried exposing the protected NodeClient constructor to pass the ReindexPlugin class as an argument, without success.
Using the NodeClient to start an embedded Elasticsearch and using the TransportClient with the added ReindexPlugin didn't work either. All I get here is an exception: ActionNotFoundTransportException[No handler for action [indices:data/write/reindex]]
Dependencies of interest:
org.elasticsearch:elasticsearch:2.3.3
org.elasticsearch.module:reindex:2.3.3
org.apache.lucene:lucene-expressions:5.5.1
org.codehaus.groovy:groovy:2.4.6
Starting the NodeClient:
Settings.Builder settings = Settings.settingsBuilder();
settings.put("path.data", "/some/path/data");
settings.put("path.home", "/some/path/home");
//settings.put("plugin.types", ReindexPlugin.class.getName()); // > No effect
settings.put("http.port", 9299);
settings.put("transport.tcp.port", 9399);

node = NodeBuilder.nodeBuilder()
        .clusterName("testcluster")
        .settings(settings)
        .local(true)
        .node();
// also tested with local(false), then no transport port is available, resulting in NoNodeAvailableException
Using TransportClient to access the Node:
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", "testcluster")
        .put("discovery.zen.ping.multicast.enabled", false)
        .build();

InetSocketTransportAddress[] addresses = new InetSocketTransportAddress[]
        {new InetSocketTransportAddress(new InetSocketAddress("localhost", 9399))};

client = TransportClient.builder()
        .settings(settings)
        .addPlugin(ReindexPlugin.class)
        .build()
        .addTransportAddresses(addresses);
Main part of triggering reindex:
ReindexRequestBuilder builder = ReindexAction.INSTANCE.newRequestBuilder(getClient())
        .source(indexFrom)
        .destination(indexTo)
        .refresh(true);
I was able to solve this by combining both approaches described above.
So creating the NodeClient involves overriding the Node:
class ExposedNode extends Node {
    public ExposedNode(Environment tmpEnv, Version version, Collection<Class<? extends Plugin>> classpathPlugins) {
        super(tmpEnv, version, classpathPlugins);
    }
}
And using it when starting the NodeClient:
Settings.Builder settings = Settings.settingsBuilder();
settings.put("path.data", "/some/path/data");
settings.put("path.home", "/some/path/home");
settings.put("http.port", 9299);
settings.put("transport.tcp.port", 9399);
// Construct Node without NodeBuilder
List<Class<? extends Plugin>> classpathPlugins = ImmutableList.of(ReindexPlugin.class);
settings.put("node.local", false);
settings.put("cluster.name", "testcluster");
Settings preparedSettings = settings.build();
node = new ExposedNode(InternalSettingsPreparer.prepareEnvironment(preparedSettings, null), Version.CURRENT, classpathPlugins);
node.start();
After that you can use the TransportClient which adds the ReindexPlugin, as described in the question.
Nevertheless, this is a dirty hack which may break in a future release, and it shows how poorly Elasticsearch supports plugin development, imo.
I am creating an Elasticsearch connection with the Java API. I am using the TransportClient and I need to set the timeout for the connection.
I haven't configured any property, and the connection takes three minutes to give me a timeout.
Does anybody know if a property exists to set the timeout value?
Thanks.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", cluster_name)
        .put("client.transport.ping_timeout", "30s")
        .build();
TransportClient transport = new TransportClient(settings);
See also the following from ES documentation page:
Fault Detection
Use the code below to update the TransportClient's connection timeout value:
Settings.builder().put("transport.tcp.connect_timeout", "240s")
The complete TransportClient code:
Settings settings = Settings.builder()
        .put("cluster.name", "elasticsearch")
        .put("client.transport.sniff", true)
        .put("transport.tcp.connect_timeout", "240s")
        .build();

Client transportClient = new PreBuiltTransportClient(settings)
        .addTransportAddresses(
                new TransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
Important: Each Elasticsearch version has different config keys. You can read this document to learn about other settings you can change:
www.elastic.co/guide/en/elasticsearch/reference/6.4/modules-transport.html
If you are using AWS Elasticsearch, check the load balancer timeout settings.
docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html
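For a Classic Load Balancer, the idle timeout from that document can be raised with the AWS CLI (a sketch; the load balancer name and timeout value are placeholders for your own setup):

```shell
# Raise the idle timeout (in seconds) so long-running Elasticsearch
# requests are not cut off by the load balancer.
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-load-balancer \
    --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":300}}"
```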