We would like to access the same RMI server from different hosts in our network (dev PC via SSH tunnel, Jenkins server via direct connection). The problem is that the RMI host is known under different names on the different client hosts.
This is not a problem when we connect to the registry, because we can set the target host name like this:
Registry registry = LocateRegistry.getRegistry("hostname", 10099, new CustomSslRMIClientSocketFactory());
But when we look up the remote object as below, it contains the wrong hostname.
HelloRemote hello = (HelloRemote) registry.lookup(HelloRemote.class.getSimpleName());
In the debugger I can observe that the host is set as needed on the Registry object, but not on the stub.
We get a connection timeout as soon as we call a method on the stub. If I manually change the host value to localhost in the debugger, the method invocation succeeds.
I'm aware that I can set java.rmi.server.hostname on the server side, but then the connection from Jenkins does not work anymore.
The simplest solution would be to force RMI to use the same host as the registry for all stubs retrieved from that registry. Is there a better way than replacing the host value in the stub via reflection?
Unfortunately RMI has a deeply built-in assumption that the server host has a single 'most public' IP address or hostname. This explains the java.rmi.server.hostname fiasco. If your system doesn't comply you are out of luck.
As pointed out by EJP, there seems to be no elegant out-of-the-box solution.
I can think of two inelegant ones:
1. Changing the network configuration on every client host to redirect traffic for the unreachable IP to localhost instead.
2. Changing the host value on the "hello" object via reflection.
I went for the second option because I'm in a test environment and the code in question will never go to production anyway. I wouldn't recommend doing this otherwise, because the code below might break with future versions of Java and won't work if a security manager is in place.
However, here is my working code:
private static void forceRegistryHostNameOnStub(Object registry, Object stub) {
    try {
        // Read the host that the registry stub actually connects to ...
        String regHost = getReferenceToInnerObject(registry, "ref", "ref", "ep", "host").toString();
        // ... and overwrite the host inside the looked-up stub's endpoint with it.
        Object stubEp = getReferenceToInnerObject(stub, "h", "ref", "ref", "ep");
        Field fStubHost = getInheritedPrivateField(stubEp, "host");
        fStubHost.setAccessible(true);
        fStubHost.set(stubEp, regHost);
    } catch (Exception e) {
        LOG.error("Applying the registry host to the Stub failed.", e);
    }
}

private static Object getReferenceToInnerObject(Object from, String... objectHierarchy) throws NoSuchFieldException, IllegalArgumentException, IllegalAccessException {
    // Walks down a chain of (possibly private, possibly inherited) fields.
    Object ref = from;
    for (String fieldname : objectHierarchy) {
        Field f = getInheritedPrivateField(ref, fieldname);
        f.setAccessible(true);
        ref = f.get(ref);
    }
    return ref;
}

private static Field getInheritedPrivateField(Object from, String fieldname) throws NoSuchFieldException {
    // Looks for the field in the class itself and in all superclasses.
    Class<?> i = from.getClass();
    while (i != null && i != Object.class) {
        try {
            return i.getDeclaredField(fieldname);
        } catch (NoSuchFieldException e) {
            // not declared here, try the superclass
        }
        i = i.getSuperclass();
    }
    // Not found anywhere: let getDeclaredField() throw the NoSuchFieldException.
    return from.getClass().getDeclaredField(fieldname);
}
The method invocation on the Stub succeeds now:
Registry registry = LocateRegistry.getRegistry("hostname", 10099, new CustomSslRMIClientSocketFactory());
HelloRemote hello = (HelloRemote) registry.lookup(HelloRemote.class.getSimpleName());
forceRegistryHostNameOnStub(registry, hello); // manipulate the stub
hello.doSomething(); // succeeds
In short: using AmazonS3Client to connect to a local instance of MinIO results in an UnknownHostException because the URL is resolved to http://{bucket_name}.localhost:port.
Detailed description of the problem:
I'm creating an integration test for a Java service that uses the AmazonS3Client library to retrieve content from S3. I'm using MinIO inside a test container to play the role of Amazon S3, as follows:
@Container
static final GenericContainer<?> minioContainer = new GenericContainer<>("minio/minio:latest")
        .withCommand("server /data")
        .withEnv(
                Map.of(
                        "MINIO_ACCESS_KEY", AWS_ACCESS_KEY.getValue(),
                        "MINIO_SECRET_KEY", AWS_SECRET_KEY.getValue()
                )
        )
        .withExposedPorts(MINIO_PORT)
        .waitingFor(new HttpWaitStrategy()
                .forPath("/minio/health/ready")
                .forPort(MINIO_PORT)
                .withStartupTimeout(Duration.ofSeconds(10)));
and then I export its URL dynamically (because Testcontainers maps it to a random port) using something like this:
String.format("http://%s:%s", minioContainer.getHost(), minioContainer.getFirstMappedPort())
which in turn results in a url like this:
http://localhost:54123
The problem I encountered at runtime of my test lies within the actual implementation of AmazonS3Client.getObject(String, String): when creating the request, it performs the following validation (class S3RequestEndpointResolver, method resolveRequestEndpoint):
...
    if (shouldUseVirtualAddressing(endpoint)) {
        request.setEndpoint(convertToVirtualHostEndpoint(endpoint, bucketName));
        request.setResourcePath(SdkHttpUtils.urlEncode(getHostStyleResourcePath(), true));
    } else {
        request.setEndpoint(endpoint);
        request.setResourcePath(SdkHttpUtils.urlEncode(getPathStyleResourcePath(), true));
    }
}

private boolean shouldUseVirtualAddressing(final URI endpoint) {
    return !isPathStyleAccess && BucketNameUtils.isDNSBucketName(bucketName)
            && !isValidIpV4Address(endpoint.getHost());
}
This in turn returns true for the URL http://localhost:54123 (the bucket name is DNS-compatible and "localhost" is not an IPv4 address), and as a result this method
private static URI convertToVirtualHostEndpoint(URI endpoint, String bucketName) {
    try {
        return new URI(String.format("%s://%s.%s", endpoint.getScheme(), bucketName, endpoint.getAuthority()));
    } catch (URISyntaxException e) {
        throw new IllegalArgumentException("Invalid bucket name: " + bucketName, e);
    }
}
prepends the bucket name to the host, resulting in http://mybucket.localhost:54123, which ultimately causes an UnknownHostException to be thrown. I can work around this by setting the host to 0.0.0.0 instead of localhost, but that is hardly a solution.
Therefore I was wondering: i) is this a bug/limitation in AmazonS3Client, or ii) am I the one who is missing something, e.g. poor configuration?
Thank you for your time.
I was able to find a solution. Looking at the method used by the resolver:
private boolean shouldUseVirtualAddressing(final URI endpoint) {
    return !isPathStyleAccess && BucketNameUtils.isDNSBucketName(bucketName)
            && !isValidIpV4Address(endpoint.getHost());
}
which was returning true and leading the flow to the wrong concatenation, I found that we can set the first variable, isPathStyleAccess, when building the client. In my case, I created a bean in my test configuration to override the main one:
@Bean
@Primary
public AmazonS3 amazonS3() {
    return AmazonS3Client.builder()
            .withPathStyleAccessEnabled(true) // HERE
            .withCredentials(new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials(AWS_ACCESS_KEY.getValue(), AWS_SECRET_KEY.getValue())
            ))
            .withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration(s3Endpoint, region)
            )
            .build();
}
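For reference, s3Endpoint above is just the dynamically exported container URL from the question; a rough sketch of the wiring, with illustrative variable names:
// Illustrative only: build the endpoint for the bean above from the running MinIO container.
String s3Endpoint = String.format("http://%s:%s",
        minioContainer.getHost(), minioContainer.getFirstMappedPort());
String region = "us-east-1"; // assumption: any region string your configuration expects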
For the SDK V2, the solution was pretty similar:
S3AsyncClient s3 = S3AsyncClient.builder()
        .forcePathStyle(true) // adding this one
        .endpointOverride(new URI(s3Endpoint))
        .credentialsProvider(() -> AwsBasicCredentials.create(s3Properties.getAccessKey(), s3Properties.getSecretKey()))
        .build();
I'm trying to make SOCKS v4 work out of the box in java.net, and I seem to have succeeded!
Roughly, the code that I'm using is this:
class SocketImplFactorySocks4 implements SocketImplFactory {
    @Override
    public SocketImpl createSocketImpl() {
        System.out.println("Socket implementation triggered");
        try {
            return socketSocks4Factory();
        } catch (Exception e) {
            e.printStackTrace();
            throw new Error("Can't go further");
        }
    }

    private SocketImpl socketSocks4Factory() throws
            [...] {
        // java.net.SocksSocketImpl is package-private, so reflection is the only way in.
        Class<?> aClass = Class.forName("java.net.SocksSocketImpl");
        Constructor<?> cons = aClass.getDeclaredConstructor();
        if (!cons.isAccessible())
            cons.setAccessible(true);
        Object socket = cons.newInstance();
        Method method = socket.getClass().getDeclaredMethod("setV4");
        if (!method.isAccessible())
            method.setAccessible(true);
        method.invoke(socket);
        // Read the flag back just to verify that setV4() really flipped it.
        Field field = socket.getClass().getDeclaredField("useV4");
        field.setAccessible(true);
        Object value = field.get(socket);
        return (SocketImpl) socket;
    }
}
Long story short, it works when I create a socket and pass -DsocksProxyHost and -DsocksProxyPort.
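Roughly, the working standalone case looks like this (the target host and port are just placeholders):
// Install the factory once per JVM; new Sockets then use the reflective SOCKS v4 impl
// and honour -DsocksProxyHost/-DsocksProxyPort.
Socket.setSocketImplFactory(new SocketImplFactorySocks4());
try (Socket s = new Socket("example.org", 80)) { // placeholder target
    s.getOutputStream().write("HEAD / HTTP/1.0\r\n\r\n".getBytes());
}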
My problem is that when I use the same code in my JUnit test, I can check with reflection that Socket.impl.useV4 is set to true and the socksProxy* settings are set system-wide, but when I use my socket it avoids the proxy altogether (I can see it in Wireshark).
It's either JUnit or Gradle, but I've reached my limits. Please advise on where I should go next. build.gradle.kts for reference:
tasks {
    test {
        systemProperty("socksProxyHost", "localhost")
        systemProperty("socksProxyPort", "8080")
    }
}
Thanks in advance!
Well, it took me way too much time to figure it out, but I did. My initial goal was to test my SOCKS v4 server code, but there were two problems in my way:
1) Even though the Java Socket has client-side support for SOCKS v4, it is not enabled by default, and there is no public way to flip the toggle.
2) Having solved #1, I tried to write an E2E test to smoke-test the whole thing, but for some reason it kept avoiding the SOCKS proxy, even though the toggle (useV4) was true. This is what I came to SO with.
To solve the first problem, I implemented SocketImplFactory (see above in the question).
What helped to tackle the topic question was my admin background, even though it didn't kick in until recently. :) I separated the original suspects (JUnit and Gradle) and put the test into a standalone psvm file. The test still didn't work; it still avoided going through the proxy. And that's when it hit me: there is an exception for localhost!
Basically, there is a hardcoded exception for localhost (127.0.0.1, ::, etc.) deep in the Java core library. After some searching I came across the -DsocksNonProxyHosts option, which didn't help, as you might have guessed already :)
Eventually I ended up at this answer, which mentioned that I might need to implement a ProxySelector. Which I did:
static class myProxySelector extends ProxySelector {
    @Override
    public List<Proxy> select(URI uri) {
        List<Proxy> proxyl = new ArrayList<Proxy>(1);
        InetSocketAddress saddr = InetSocketAddress.createUnresolved("localhost", 1080);
        Proxy proxy = SocksProxy.create(saddr, 4);
        proxyl.add(proxy);
        System.out.println("Selecting Proxy for " + uri.toASCIIString());
        return proxyl;
    }

    @Override
    public void connectFailed(URI uri, SocketAddress sa, IOException ioe) {
        if (uri == null || sa == null || ioe == null) {
            throw new IllegalArgumentException("Arguments can't be null.");
        }
    }
}
The whole socket setup looks like this:
private void setupSocket() throws IOException {
    Socket.setSocketImplFactory(new SocketImplFactorySocks4());
    ProxySelector proxySelector = new myProxySelector();
    ProxySelector.setDefault(proxySelector);
    proxy = new Proxy(Proxy.Type.SOCKS, new InetSocketAddress("127.0.0.1", 1080));
}
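With that in place, even a plain connection to a localhost address should be routed through the SOCKS v4 proxy on 127.0.0.1:1080; a rough usage sketch (the target port is a placeholder):
setupSocket();
// Without the custom ProxySelector this connection would hit the hardcoded
// localhost bypass; with it, it goes through the SOCKS v4 proxy.
try (Socket s = new Socket("localhost", 9999)) { // placeholder target port
    // ... drive the E2E scenario against the local SOCKS v4 server here
}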
Now everything I wanted works: I'm able both to E2E-test my SOCKS v4 code and to do it on localhost.
In my webapp, running in WildFly, there are several roles defined. The user is shown a tab for each role he has (e.g. admin, support, etc.). A user/admin can also enable or disable roles for himself or for other users in the browser. But when a role is added or removed, the corresponding tab should be added or removed as well, and that only happens if the JBoss cache is flushed manually from the CLI or, even worse, the server is restarted. Is it possible to remove the role or flush the server cache at runtime (when the user clicks the button)? Role authentication is done via request.isUserInRole(), but what I need is something like setUserInRole("admin") = false.
This is how I resolved it.
public void flushAuthenticationCache(String userid) {
    final String domain = "mydomain";
    try {
        // The security subsystem exposes a flushCache(String) operation per security domain.
        ObjectName jaasMgr = new ObjectName("jboss.as:subsystem=security,security-domain=" + domain);
        Object[] params = {userid};
        String[] signature = {"java.lang.String"};
        MBeanServer server = (MBeanServer) MBeanServerFactory.findMBeanServer(null).get(0);
        server.invoke(jaasMgr, "flushCache", params, signature);
    } catch (Throwable e) {
        e.printStackTrace();
    }
}
Note that the method above flushes the cache for a specific user only.
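For the tab scenario from the question it can be called right after the roles are changed, for example (the role-update call itself is hypothetical):
// Hypothetical flow: persist the new roles, then evict the cached identity so the
// next request.isUserInRole() re-reads them.
roleService.updateRoles(userid, newRoles); // placeholder for your own role update
flushAuthenticationCache(userid);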
The method below flushes the cache for all users:
public static final void flushJaasCache(String securityDomain) {
    try {
        javax.management.MBeanServerConnection mbeanServerConnection = java.lang.management.ManagementFactory
                .getPlatformMBeanServer();
        javax.management.ObjectName mbeanName = new javax.management.ObjectName("jboss.as:subsystem=security,security-domain=" + securityDomain);
        mbeanServerConnection.invoke(mbeanName, "flushCache", null, null);
    } catch (Exception e) {
        throw new SecurityException(e);
    }
}
For those using the JBoss CLI, I figured out this command to do the equivalent of the above. In the command below I'm using a domain configuration, but something similar should apply to a standalone server.
/host=MyHost/server=MyServer/subsystem=security/security-domain=other:flush-cache(principal=UserToFlush)
I want to build a system so that:
1. The client connects to the server.
2. The client asks the server for a port to serve him.
3. The server creates a remote object to serve the client and binds it to a port.
4. The server returns the port number to the client.
5. The client connects to that port.
My session/connection manager:
public class ConnectionManager extends UnicastRemoteObject implements Server
{
    public List<ClientHandler> clients = Collections.synchronizedList(new ArrayList<>());

    public ConnectionManager() throws RemoteException
    {
        super();
    }

    public static final String RMI_ID = "Server";

    @Override
    public boolean checkConnection() throws RemoteException
    {
        return true;
    }

    @Override
    public int getPort(String ip) throws RemoteException
    {
        int i = 10000 + clients.size() * 2;
        clients.add(new ClientHandler(ip, i));
        return i;
    }
}
Session implementation:
public class ClientHandler extends UnicastRemoteObject implements Transfer
{
    Registry rmi;
    Registry reg;
    PrintWriter log;
    public Client client;

    public ClientHandler(String ip, int port) throws RemoteException
    {
        super();
        try
        {
            File f = new File(ip + " " + new Date() + ".txt");
            log = new PrintWriter(f);
        } catch (Exception e)
        {
            e.printStackTrace();
        }
        rmi = LocateRegistry.createRegistry(port);
        try
        {
            rmi.bind(String.valueOf(port), this);
        } catch (Exception e)
        {
            e.printStackTrace();
        }
    }
The problem is that if the object is created during a remote call, it is considered to be of remote origin, and so unless you are on the same host it is not allowed to bind it to the local Registry, and the server throws java.rmi.AccessException: Registry.Registry.bind disallowed; origin IP address is non-local host.
The problem is that if the object is created during a remote call, it is considered to be of remote origin
Untrue. This is simply not correct. You've just made this up. Don't do that.
and so unless you are on the same host it is not allowed to bind it to the local Registry.
But you are on the same host. The ClientHandler constructor runs on the same host as the ConnectionManager, and it creates a Registry, also on the same host. Indeed, all of this takes place within the same JVM.
and the server throws java.rmi.AccessException: Registry.Registry.bind disallowed; origin IP address is non-local host.
No it doesn't. I tested your code. After fixing a compilation error it worked. Cannot reproduce.
HOWEVER
You don't need the part about the port, or the extra bind. All remote objects can share the same port. The server only needs to return a new instance of the remote session object to the client, directly.
A simple implementation of your requirement, using your class names, looks like this, with a certain amount of guesswork as you didn't provide the Server interface:
public interface Server extends Remote
{
    boolean checkConnection() throws RemoteException;
    Transfer getSession(String ip) throws RemoteException;
}

public class ConnectionManager extends UnicastRemoteObject implements Server
{
    public List<ClientHandler> clients = Collections.synchronizedList(new ArrayList<>());

    public ConnectionManager() throws RemoteException
    {
        super();
    }

    public static final String RMI_ID = "Server";

    @Override
    public boolean checkConnection() throws RemoteException
    {
        return true;
    }

    @Override
    public Transfer getSession(String ip) throws RemoteException
    {
        ClientHandler ch = new ClientHandler(ip);
        clients.add(ch);
        return ch;
    }
}

public class ClientHandler extends UnicastRemoteObject implements Transfer
{
    PrintWriter log;
    public Client client;

    public ClientHandler(String ip) throws RemoteException
    {
        super();
        try
        {
            File f = new File(ip + " " + new Date() + ".txt");
            log = new PrintWriter(f);
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
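On the client side, obtaining a session is then just a lookup plus a remote call; a rough sketch, assuming the ConnectionManager has been bound under its RMI_ID (registry host and port are placeholders):
// Rough client-side sketch; host, port and the passed IP are placeholders.
Registry registry = LocateRegistry.getRegistry("serverhost", 1099);
Server server = (Server) registry.lookup(ConnectionManager.RMI_ID);
Transfer session = server.getSession("192.168.1.100"); // the stub of a new ClientHandler comes back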
EDIT
It is clear from your post and your comment below that you are suffering from a number of major misconceptions about Java RMI:
If a remote object is created during a remote call, it is not 'considered of remote origin'.
The port on which an object is exported has nothing to do with the Registry.
It is determined when you construct an object that extends UnicastRemoteObject, via the super() call, or when you call UnicastRemoteObject.exportObject() for other objects. If you supply a non-zero port number, that port is used and can normally be shared with other remote objects. Otherwise, if there is already an RMI port in use, it is shared with this object; otherwise a system-allocated port number is obtained.
Note that the export step precedes the bind step, so it is quite impossible for your misconception to be correct.
You don't need multiple Registries running in the same host. You can use a single one and bind multiple names to it.
If you first call LocateRegistry.createRegistry(), all other remote objects you export from that JVM can share the port with the Registry (see the sketch below).
Remote methods can return remote objects.
The Registry itself is an example: it is really little more than a remote hash map. Your methods can do it too. The objects are replaced by their stubs during the return process.
For those reasons, your intended design is completely fallacious, and your comment below is complete nonsense.
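To make the port point concrete, here is a minimal sketch (not your code; names and port are illustrative):
// One Registry, one port: remote objects exported afterwards can share its listener.
int port = 1099;
Registry reg = LocateRegistry.createRegistry(port); // registry exported on 1099
Server mgr = new ConnectionManager();               // super(): the RMI port already in use is shared
reg.rebind(ConnectionManager.RMI_ID, mgr);          // one name in one registry is all you need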
Further notes:
A do-nothing method doesn't really check a connection. It is just as likely to create a new connection and check that. Connections don't really exist in RMI, or at least they are very well hidden from you, pooled, expired, etc.
You don't need to pass the client's IP address. You can get it at the server from RemoteServer.getClientHost().
The constructor for ClientHandler should not catch IOExceptions internally: it should throw them to the caller, so the caller can be aware of the problem.
While attempting client-server communication, and despite consulting various forums, I am unable to perform the remote object's lookup on the client machine.
The errors I receive are ConnectIOException (caused by NoRouteToHostException), sometimes ConnectException, and sometimes something else.
But that is not what I want to ask. The main concern is how I should set up the client and server machines, in terms of networking details, because that is what I suspect interferes with my connection.
My questions:
How should I edit my /etc/hosts file on both the client side and the server side? The server's IP is 192.168.1.8 and the client's IP is 192.168.1.100. That is, should I add the system names to both files like this:
192.168.1.8 SERVER-1 # on the server side
192.168.1.100 CLIENT-1 # on the client side
Should I edit them like this? Could this be one of the possible causes? I just want to remove any remaining doubts about performing the RMI communication!
Also, I am setting the server's hostname property using System.setProperty("java.rmi.server.hostname", "192.168.1.8"); on the server side. Should I do the same on the client side too?
I've read about setting the classpath while running the Java program on both the server side and the client side. I did this too, but again the same exceptions; no difference at all. I've also read that since Java 6u45, classpath settings aren't necessary to include! Please throw some light on this too...
If I am missing something, please enlighten me about that too. A brief idea or links to resources are most preferred.
You don't need any of this unless you have a problem. The most usual problem is the one described in the RMI FAQ #A.1, and editing the hosts file of the server or setting java.rmi.server.hostname in the server JVM is the solution to that.
'No route to host' is a network connectivity problem, not an RMI problem, and not one you'll solve with code or system property settings.
Setting the classpath has nothing to do with network problems.
Here is a server example which transfers a concrete class. This class must exist on both the server and the client classpath with the same structure.
The message class:
public class MyMessage implements Serializable {

    private static final long serialVersionUID = -696658756914311143L;

    public String Title;
    public String Body;

    public MyMessage(String strTitle) {
        Title = strTitle;
        Body = "";
    }

    public MyMessage() {
        Title = "";
        Body = "";
    }
}
And here is the server code that receives a message and returns another message:
public class SimpleServer {

    public String ServerName;
    ServerRemoteObject mRemoteObject;

    public SimpleServer(String pServerName) {
        ServerName = pServerName;
    }

    public void bindYourself() {
        try {
            mRemoteObject = new ServerRemoteObject(this);
            java.rmi.registry.Registry iRegistry = LocateRegistry.getRegistry(RegistryContstants.RMIPort);
            iRegistry.rebind(RegistryContstants.CMName, mRemoteObject);
        } catch (Exception e) {
            e.printStackTrace();
            mRemoteObject = null;
        }
    }

    public MyMessage handleEvent(MyMessage mMessage) {
        MyMessage iMessage = new MyMessage();
        iMessage.Body = "Response body";
        iMessage.Title = "Response title";
        return iMessage;
    }

    public static void main(String[] server) {
        SimpleServer iServer = new SimpleServer("SERVER1");
        iServer.bindYourself();
        while (true) {
            try {
                Thread.sleep(10000);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
And here is the remote interface of the server's remote object:
public interface ISimpleServer extends java.rmi.Remote {
    public MyMessage doaction(MyMessage message) throws java.rmi.RemoteException;
}
All you need is to add the MyMessage class to both the server and the client classpath.
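A matching client could then look roughly like this (the registry host is a placeholder, and it assumes the same RegistryContstants and ISimpleServer classes on the client classpath):
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class SimpleClient {
    public static void main(String[] args) throws Exception {
        // "serverhost" is a placeholder for wherever SimpleServer is running.
        Registry registry = LocateRegistry.getRegistry("serverhost", RegistryContstants.RMIPort);
        ISimpleServer server = (ISimpleServer) registry.lookup(RegistryContstants.CMName);
        MyMessage response = server.doaction(new MyMessage("Request title"));
        System.out.println(response.Title + " / " + response.Body);
    }
}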