Apache mina-sshd ssh client always prints EdDSA provider not supported - java

I'm using Apache sshd's ssh client. Whenever I establish a connection to the destination ssh server, I see this in the logs. The connection works, but is there something wrong? How can I fix it?
The exception looks like:
(SshException) to process: EdDSA provider not supported

How to fix
To fix the problem, add a dependency on net.i2p.crypto:eddsa. Bouncy Castle does not provide an implementation of EdDSA. For example, in Maven add this dependency:
<dependency>
    <groupId>net.i2p.crypto</groupId>
    <artifactId>eddsa</artifactId>
    <version>0.3.0</version>
</dependency>
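If you are unsure whether the jar actually made it onto the runtime classpath, a minimal check with plain JDK reflection (no mina-sshd API involved; the class name is the one the registrar probes for) could look like this sketch:
// Minimal sketch: confirm that net.i2p.crypto:eddsa is on the runtime classpath.
public class EdDsaClasspathCheck {
    public static void main(String[] args) {
        try {
            Class.forName("net.i2p.crypto.eddsa.EdDSAKey");
            System.out.println("EdDSA classes found - the warning should disappear.");
        } catch (ClassNotFoundException e) {
            System.out.println("EdDSA classes missing - mina-sshd will keep logging the warning.");
        }
    }
}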
Impact of not fixing
If you don't fix this, you will not be able to validate host keys. My testing was not affected because I was not validating host keys yet. Once deployed to production, however, it would have been a problem, because host keys must be validated there.
Details
In the Apache mina-sshd source code, the class SecurityUtils reveals the problem. That class hardcodes the list of security provider registrars, with EdDSA handled by EdDSASecurityProviderRegistrar:
public static final List<String> DEFAULT_SECURITY_PROVIDER_REGISTRARS = Collections.unmodifiableList(
        Arrays.asList(
                "org.apache.sshd.common.util.security.bouncycastle.BouncyCastleSecurityProviderRegistrar",
                "org.apache.sshd.common.util.security.eddsa.EdDSASecurityProviderRegistrar"));
Looking through EdDSASecurityProviderRegistrar you see that it expects the class net.i2p.crypto.eddsa.EdDSAKey to exist:
@Override
public boolean isSupported() {
    Boolean supported;
    synchronized (supportHolder) {
        supported = supportHolder.get();
        if (supported != null) {
            return supported.booleanValue();
        }

        ClassLoader cl = ThreadUtils.resolveDefaultClassLoader(getClass());
        supported = ReflectionUtils.isClassAvailable(cl, "net.i2p.crypto.eddsa.EdDSAKey");
        supportHolder.set(supported);
    }

    return supported.booleanValue();
}
A quick Google search shows that net.i2p.crypto.eddsa.EdDSAKey is provided by the library net.i2p.crypto:eddsa.
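To double-check what mina-sshd itself concluded at runtime, a small sketch like the one below can help; it assumes the SecurityUtils.isEDDSACurveSupported() helper exposed by recent mina-sshd versions:
import org.apache.sshd.common.util.security.SecurityUtils;

// Sketch: ask mina-sshd whether an EdDSA provider could be registered.
public class EdDsaSupportCheck {
    public static void main(String[] args) {
        // Should return true only when a registrar (e.g. the net.i2p.crypto one) is usable.
        System.out.println("EdDSA supported: " + SecurityUtils.isEDDSACurveSupported());
    }
}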

Related

Why does Apache Commons VFS consider Http Proxy and Socks5 Proxy but simply ignore Socks4 Proxy?

I am working on an application that connects to an SFTP server and downloads files using Apache Commons VFS. It works just fine, except that the system needs to allow the user to specify a proxy when needed.
Now, I know Apache Commons VFS is built on top of JSch, and I know JSch contains the classes com.jcraft.jsch.ProxyHTTP, com.jcraft.jsch.ProxySOCKS4 and com.jcraft.jsch.ProxySOCKS5.
The code below is an extract of VFS class org.apache.commons.vfs2.provider.sftp.SftpClientFactory:
public static Session createConnection(
    ...
    final SftpFileSystemConfigBuilder.ProxyType proxyType = builder.getProxyType(fileSystemOptions);
    ...
    final String proxyUser = builder.getProxyUser(fileSystemOptions);
    final String proxyPassword = builder.getProxyPassword(fileSystemOptions);
    Proxy proxy = null;
    if (SftpFileSystemConfigBuilder.PROXY_HTTP.equals(proxyType)) {
        proxy = createProxyHTTP(proxyHost, proxyPort);
        ((ProxyHTTP) proxy).setUserPasswd(proxyUser, proxyPassword);
    } else if (SftpFileSystemConfigBuilder.PROXY_SOCKS5.equals(proxyType)) {
        proxy = createProxySOCKS5(proxyHost, proxyPort);
        ((ProxySOCKS5) proxy).setUserPasswd(proxyUser, proxyPassword);
    } else if (SftpFileSystemConfigBuilder.PROXY_STREAM.equals(proxyType)) {
        proxy = createStreamProxy(proxyHost, proxyPort, fileSystemOptions, builder);
    }
    ...
...
As you can see, there is no "if" branch to instantiate ProxySOCKS4!
I have duplicated the SftpClientFactory class, set my version to load before the original class on the classpath, and changed the code as follows:
public static Session createConnection(
    ...
    final SftpFileSystemConfigBuilder.ProxyType proxyType = builder.getProxyType(fileSystemOptions);
    ...
    final String proxyUser = builder.getProxyUser(fileSystemOptions);
    final String proxyPassword = builder.getProxyPassword(fileSystemOptions);
    Proxy proxy = null;
    if (SftpFileSystemConfigBuilder.PROXY_HTTP.equals(proxyType)) {
        proxy = createProxyHTTP(proxyHost, proxyPort);
        ((ProxyHTTP) proxy).setUserPasswd(proxyUser, proxyPassword);
    /// change start (I also created the PROXY_SOCKS4 constant)
    } else if (SftpFileSystemConfigBuilder.PROXY_SOCKS4.equals(proxyType)) {
        proxy = createProxySOCKS4(proxyHost, proxyPort);
        ((ProxySOCKS4) proxy).setUserPasswd(proxyUser, proxyPassword);
    /// change end
    } else if (SftpFileSystemConfigBuilder.PROXY_SOCKS5.equals(proxyType)) {
        proxy = createProxySOCKS5(proxyHost, proxyPort);
        ((ProxySOCKS5) proxy).setUserPasswd(proxyUser, proxyPassword);
    } else if (SftpFileSystemConfigBuilder.PROXY_STREAM.equals(proxyType)) {
        proxy = createStreamProxy(proxyHost, proxyPort, fileSystemOptions, builder);
    }
    ...
...
.. and guess what: when I set my application to use a SOCKS4 proxy, it works fine with the change above. It is important to say that setting the application to use SOCKS5 does not work if the proxy server is a SOCKS4 type, and that is true not only for my application with VFS but also for every other client I tested, like FileZilla or WinSCP.
So, the main question is:
Why does VFS support the use of ProxyHTTP and ProxySOCKS5 but completely ignore the JSch ProxySOCKS4 class? Am I missing some SFTP or proxy concept here, or should I consider VFS buggy? This is the first time I have worked with VFS.
Please treat the question above as the main one, so as not to make this too broad.
I wasn't able to get or find a better answer in time, so what I did to solve my problem was exactly what I described in the question.
I duplicated the classes SftpClientFactory and SftpFileSystemConfigBuilder, made the necessary adjustments, and used them instead of the original classes. It's ugly, and now I am stuck with a specific VFS version, I know, but the problem was solved.
Lesson for next time: use JSch instead of VFS.
I'll leave the question open though, in case someone else has a proper solution or answer.
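For reference, going through JSch directly, as the lesson above suggests, could look roughly like the sketch below; the hosts, credentials and paths are placeholders, while ProxySOCKS4 and ChannelSftp are the standard JSch classes:
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.ProxySOCKS4;
import com.jcraft.jsch.Session;

// Sketch: SFTP download through a SOCKS4 proxy using JSch directly, no VFS layer.
public class JschSocks4Example {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "sftp.example.com", 22);
        session.setPassword("secret");

        ProxySOCKS4 proxy = new ProxySOCKS4("proxy.example.com", 1080);
        proxy.setUserPasswd("proxyUser", "proxyPassword");
        session.setProxy(proxy);

        // For a real system, configure host key checking properly instead of disabling it.
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();

        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();
        sftp.get("/remote/file.txt", "local-file.txt");
        sftp.disconnect();
        session.disconnect();
    }
}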

How to get remote TLS/SSL certificate when not trusted using jax-ws/SSLSocket

We are using a JAX-WS client over HTTPS to send messages (backed by CXF, which I think uses SSLSocket).
We wish to log the remote certificate details, together with the message details, if the remote certificate is not trusted/invalid.
Initially I hoped we would get a useful exception, but the interesting exceptions in the stack trace are internal (like sun.security.validator.ValidatorException and sun.security.provider.certpath.SunCertPathBuilderException), so they shouldn't really be relied upon, and in any case they don't seem to hold the remote certificate.
So my question is: what would be the tidiest way to get the certificate at the level where I also have the message details (outside the JAX-WS call)?
So far my best guess is to add my own javax.net.ssl.X509TrustManager, which wraps the one currently in use and puts the Certificate on a ThreadLocal where the caller can pick it up later. It doesn't seem very tidy, but it's the best that seems possible so far :)
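To make that idea concrete, a rough sketch of such a wrapping trust manager could look like this (the class name and the ThreadLocal holder are just placeholders of mine, not an existing API):
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// Sketch: wrap the real trust manager and remember the server chain for the calling thread,
// even when validation fails, so the caller can log it next to the message details.
public class CertificateCapturingTrustManager implements X509TrustManager {
    public static final ThreadLocal<X509Certificate[]> LAST_SERVER_CHAIN = new ThreadLocal<>();

    private final X509TrustManager delegate;

    public CertificateCapturingTrustManager(X509TrustManager delegate) {
        this.delegate = delegate;
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        LAST_SERVER_CHAIN.set(chain); // remember the chain before validation can throw
        delegate.checkServerTrusted(chain, authType);
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        delegate.checkClientTrusted(chain, authType);
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return delegate.getAcceptedIssuers();
    }
}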
Many thanks for any suggestions!
The main point is that JSSE does, and hides, all of the things you are looking for in your question. Luckily, it seems that CXF allows some customization.
The idea is to customize the SSLSocketFactory (http://cxf.apache.org/docs/tls-configuration.html#TLSConfiguration-ClientTLSParameters) with your own implementation, which must create sockets that carry your own HandshakeCompletedListener. It is this last object that will dump the information you are looking for. Here is an implementation example:
import java.security.Principal;
import java.security.cert.Certificate;
import javax.net.ssl.HandshakeCompletedEvent;
import javax.net.ssl.HandshakeCompletedListener;
import javax.net.ssl.SSLPeerUnverifiedException;

class CustomHandshakeCompletedListener implements HandshakeCompletedListener {
    private HandshakeCompletedEvent hce;
    private String cipher;
    private Certificate[] peerCertificates = null;
    private Principal peerPrincipal = null;

    @Override
    public void handshakeCompleted(HandshakeCompletedEvent hce) {
        this.hce = hce;
        cipher = hce.getCipherSuite();
        // only cipher suites other than DH_anon* will return a server certificate
        if (!cipher.toLowerCase().contains("dh_anon")) {
            try {
                peerCertificates = hce.getPeerCertificates();
                peerPrincipal = hce.getPeerPrincipal();
                // do anything you want with these certificates and the cipher suite
            } catch (SSLPeerUnverifiedException spue) {
                System.err.println("unexpected exception:");
                spue.printStackTrace();
            }
        }
    }
}
There is still some work to do to achieve your goal; let us know whether this clue works out for you.
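As a usage note that is not CXF-specific: the HandshakeCompletedListener has to be registered on each SSLSocket, so whichever factory you plug into CXF needs to do something along these lines with the sockets it creates; this sketch simply uses the default JSSE factory for illustration:
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Sketch: attach the listener to a socket; a CXF-facing SSLSocketFactory would do this
// for every socket it hands out.
public class ListenerWiringExample {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443);
        socket.addHandshakeCompletedListener(new CustomHandshakeCompletedListener());
        socket.startHandshake(); // the listener fires once the handshake completes
        socket.close();
    }
}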

Vertx HttpClient getNow not working

I have a problem with the Vert.x HttpClient.
Here is code that tests a GET using Vert.x and then plain Java.
Vertx vertx = Vertx.vertx();
HttpClientOptions options = new HttpClientOptions()
        .setTrustAll(true)
        .setSsl(false)
        .setDefaultPort(80)
        .setProtocolVersion(HttpVersion.HTTP_1_1)
        .setLogActivity(true);
HttpClient client = vertx.createHttpClient(options);
client.getNow("google.com", "/", response -> {
    System.out.println("Received response with status code " + response.statusCode());
});
System.out.println(getHTML("http://google.com"));
Where getHTML() is from here: How do I do a HTTP GET in Java?
This is my output:
<!doctype html><html... etc <- correct output from plain java
Feb 08, 2017 11:31:21 AM io.vertx.core.http.impl.HttpClientRequestImpl
SEVERE: java.net.UnknownHostException: failed to resolve 'google.com'. Exceeded max queries per resolve 3
But Vert.x can't connect. What's wrong here? I'm not using any proxy.
For reference: a solution, as described in this question and in tsegismont's comment here, is to set the flag vertx.disableDnsResolver to true:
-Dvertx.disableDnsResolver=true
in order to fall back to the JVM DNS resolver as explained here:
sometimes it can be desirable to use the JVM built-in resolver, the JVM system property -Dvertx.disableDnsResolver=true activates this behavior
I observed this DNS resolution issue with a redis client in a kubernetes environment.
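If you would rather not pass the flag on the command line, setting the same system property programmatically before the first Vertx instance is created should have the same effect; a minimal sketch:
import io.vertx.core.Vertx;

// Sketch: fall back to the JVM built-in DNS resolver by setting the flag
// before any Vertx instance (and thus its Netty-based resolver) is created.
public class DisableDnsResolverExample {
    public static void main(String[] args) {
        System.setProperty("vertx.disableDnsResolver", "true");
        Vertx vertx = Vertx.vertx();
        // ... create HttpClient / WebClient as usual
    }
}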
I had this issue; what caused it for me was stale DNS servers being picked up by the Java runtime, i.e. servers registered for a network the machine was no longer connected to. The issue starts in the Sun JNDI implementation, also exists in Netty (which uses JNDI to bootstrap its list of name servers on most platforms), and finally shows up in Vert.x.
I think a good place to fix this would be in the Netty layer where the set of default DNS servers is bootstrapped. I have raised a ticket with the Netty project so we'll see if they agree with me! Here is the Netty ticket
In the meantime, a fairly basic workaround is to filter the default DNS servers detected by Netty based on whether they are reachable. Here is a code sample in Kotlin to apply before constructing the main Vert.x instance.
// The default set of name servers provided by JNDI can contain stale entries.
// This default set is picked up by Netty and in turn by Vert.x.
// To work around this, we filter for only reachable name servers on startup.
val nameServers = DefaultDnsServerAddressStreamProvider.defaultAddressList()
val reachableNameServers = nameServers.stream()
        .filter { ns -> ns.address.isReachable(NS_REACHABLE_TIMEOUT) }
        .map { ns -> ns.address.hostAddress }
        .collect(Collectors.toList())
if (reachableNameServers.size == 0)
    throw StartupException("There are no reachable name servers available")

val opts = VertxOptions()
opts.addressResolverOptions.servers = reachableNameServers

// The primary Vertx instance
val vertx = Vertx.vertx(opts)
A little more detail in case it is helpful. I have a company machine which at some point was connected to the company network by a physical cable. Details of the company's internal name servers were set up by DHCP on the physical interface. Using the wireless interface at home, DNS for the wireless interface gets set to my home DNS while the config for the physical interface is not updated. This is fine since that device is not active, and ipconfig /all does not show the internal company DNS servers. However, looking in the registry, they are still there:
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
They get picked up by the JNDI mechanism, which feeds Netty and in turn Vert.x. Since they are not reachable from my home location, DNS resolution fails. I can imagine this home/office situation is not unique to me! I don't know whether something similar could occur with multiple virtual interfaces on containers or VMs; it could be worth looking at if you are having problems.
Here is the sample code which works for me.
public class TemplVerticle extends HttpVerticle {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Create the web client and enable SSL/TLS with a trust store
        WebClient client = WebClient.create(vertx,
                new WebClientOptions()
                        .setSsl(true)
                        .setTrustAll(true)
                        .setDefaultPort(443)
                        .setKeepAlive(true)
                        .setDefaultHost("www.w3schools.com")
        );
        client.get("www.w3schools.com")
                .as(BodyCodec.string())
                .send(ar -> {
                    if (ar.succeeded()) {
                        HttpResponse<String> response = ar.result();
                        System.out.println("Got HTTP response body");
                        System.out.println(response.body().toString());
                    } else {
                        ar.cause().printStackTrace();
                    }
                });
    }
}
Try using the WebClient instead of the HttpClient; here is an example (with Rx):
private val client: WebClient = WebClient.create(vertx, WebClientOptions()
        .setSsl(true)
        .setTrustAll(true)
        .setDefaultPort(443)
        .setKeepAlive(true)
)

open fun <T> get(uri: String, marshaller: Class<T>): Single<T> {
    return client.getAbs(host + uri).rxSend()
            .map { extractJson(it, uri, marshaller) }
}
Another option is to use getAbs.
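For completeness, a plain (non-Rx) Java sketch of the getAbs variant might look like this; the URL and options are only examples:
import io.vertx.core.Vertx;
import io.vertx.ext.web.client.WebClient;
import io.vertx.ext.web.client.WebClientOptions;
import io.vertx.ext.web.codec.BodyCodec;

// Sketch: same kind of request with an absolute URI, so no default host is needed.
public class GetAbsExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        WebClient client = WebClient.create(vertx,
                new WebClientOptions().setSsl(true).setTrustAll(true).setDefaultPort(443));
        client.getAbs("https://www.w3schools.com/")
                .as(BodyCodec.string())
                .send(ar -> {
                    if (ar.succeeded()) {
                        System.out.println(ar.result().body());
                    } else {
                        ar.cause().printStackTrace();
                    }
                });
    }
}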

Hazelcast 3.8-EA WARNING:Received data format is invalid issue

While loading a Map from an external data source using a MapLoader, the Hazelcast cluster (multicast discovery) logs this error:
WARNING: [<IP>]:5702 [<cluster_name>] [3.8-EA] Received data format is invalid. (An old version of Hazelcast may be running here.)
com.hazelcast.nio.serialization.HazelcastSerializationException: Problem while reading DataSerializable, namespace: 0, id: 0, class: 'com.hazelcast.cluster.impl.JoinRequest', exception: com.hazelcast.cluster.impl.JoinRequest
at com.hazelcast.internal.serialization.impl.DataSerializableSerializer.rethrowReadException(DataSerializableSerializer.java:178)
...
Caused by: java.lang.ClassNotFoundException: com.hazelcast.cluster.impl.JoinRequest
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
I have tested this on Hazelcast 3.5.4 and it works fine.
We can ignore this warning, but I am not sure what its impact is. It also floods the log.
The old and new versions of Hazelcast are not compatible in terms of multicast discovery, since the internal protocol changed. That is, the new Hazelcast version cannot identify the old version's discovery packet.
Please change the multicast group according to the documentation found under: http://docs.hazelcast.org/docs/3.8-EA/manual/html-single/index.html#multicast-element
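If you configure Hazelcast programmatically rather than through hazelcast.xml, the multicast group and port can be changed along these lines (the group and port values below are only examples):
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// Sketch: move the 3.8 cluster to its own multicast group so old 3.5.x nodes
// on the default group no longer see (and reject) its join packets.
public class MulticastGroupConfig {
    public static void main(String[] args) {
        Config config = new Config();
        config.getNetworkConfig().getJoin().getMulticastConfig()
                .setEnabled(true)
                .setMulticastGroup("224.2.2.4")   // example value, pick a free group
                .setMulticastPort(54328);         // example value, pick a free port
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
    }
}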
For those that may be running into this problem in an OSGi environment, you may be getting bitten by a nuance of the com.hazelcast.util.ServiceLoader.findHighestReachableClassLoader() method, which sometimes picks the wrong class loader during Hazelcast initialization (it won't always pick the class loader you set on the config). The following shows a way to work around that problem by taking advantage of Java's context class loader:
private HazelcastInstance createHazelcastInstance() {
    // Use the following if you're only using the Hazelcast data serializers
    final ClassLoader classLoader = Hazelcast.class.getClassLoader();
    // Use the following if you have custom data serializers that you need
    // final ClassLoader classLoader = this.getClass().getClassLoader();

    final com.hazelcast.config.Config config = new com.hazelcast.config.Config();
    config.setClassLoader(classLoader);

    final ClassLoader previousContextClassLoader = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(classLoader);
        return Hazelcast.newHazelcastInstance(config);
    } finally {
        if (previousContextClassLoader != null) {
            Thread.currentThread().setContextClassLoader(previousContextClassLoader);
        }
    }
}

JSch logger - where can I configure the level

How can I configure the level of JSch logger?
Is it like Log4J configurable via XML?
JSch doesn't seem to use any known logging framework (I use JSch v0.1.49; the latest version is v0.1.51) or any XML configuration file. So here is what I did:
private class JSCHLogger implements com.jcraft.jsch.Logger {

    private Map<Integer, MyLevel> levels = new HashMap<Integer, MyLevel>();
    private final MyLogger LOGGER;

    public JSCHLogger() {
        // Mapping between JSch levels and our own levels
        levels.put(DEBUG, MyLevel.FINE);
        levels.put(INFO, MyLevel.INFO);
        levels.put(WARN, MyLevel.WARNING);
        levels.put(ERROR, MyLevel.SEVERE);
        levels.put(FATAL, MyLevel.SEVERE);

        LOGGER = MyLogger.getLogger(...); // Anything you want here, depending on your logging framework
    }

    @Override
    public boolean isEnabled(int pLevel) {
        return true; // here, all levels enabled
    }

    @Override
    public void log(int pLevel, String pMessage) {
        MyLevel level = levels.get(pLevel);
        if (level == null) {
            level = MyLevel.SEVERE;
        }

        LOGGER.log(level, pMessage); // logging-framework dependent...
    }
}
Then before using JSch:
JSch.setLogger(new JSCHLogger());
Note that instead of MyLevel and MyLogger, you can use any logging framework classes you want (Log4j, Logback, ...)
You can get a complete example here: http://www.jcraft.com/jsch/examples/Logger.java.html
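For a concrete variant that does not depend on the placeholder MyLogger/MyLevel classes, roughly the same bridge wired to java.util.logging could look like this (any other logging framework works the same way):
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;

import com.jcraft.jsch.JSch;

// Sketch: bridge JSch log levels to java.util.logging.
public class JulJschLogger implements com.jcraft.jsch.Logger {
    private static final Logger LOGGER = Logger.getLogger(JulJschLogger.class.getName());
    private static final Map<Integer, Level> LEVELS = new HashMap<>();

    static {
        LEVELS.put(DEBUG, Level.FINE);
        LEVELS.put(INFO, Level.INFO);
        LEVELS.put(WARN, Level.WARNING);
        LEVELS.put(ERROR, Level.SEVERE);
        LEVELS.put(FATAL, Level.SEVERE);
    }

    @Override
    public boolean isEnabled(int level) {
        return true; // let the JUL configuration decide what actually gets written
    }

    @Override
    public void log(int level, String message) {
        LOGGER.log(LEVELS.getOrDefault(level, Level.SEVERE), message);
    }

    public static void main(String[] args) {
        JSch.setLogger(new JulJschLogger());
        // ... create JSch sessions as usual
    }
}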
I just wanted to add a small comment to the accepted answer, but my reputation doesn't allow it. Sorry if doing this via another answer is bad form, but I really want to mention the following.
Activating the log this way works, and it can get you a lot of info about the connection process (key exchange and such). But there is practically no debug output for the core functionality after authentication, at least for SFTP. A look at the source confirms there is no logging in ChannelSftp (and most other classes).
So if you want to activate this in order to inspect communication problems after authentication, it is wasted effort, unless you add suitable log statements to the source yourself (I have not done that yet).
We encounter complete hangs (job threads get stuck for days, seemingly forever) in put, get and even ls, and of course the server provider claims not to be the problem (and indeed the Unix sftp command-line client works, but not from the appserver host, which we have no access to, so we would have to check the network communication). If someone has an idea, thanks.
