Sending an object through the internet and invoking its method - java

I've got a Java web server, and I made it so that the server responds to a certain HTTP request by sending back a Java object that contains an "execute" method.
I'd like to be able to execute a remote object's method.
I can't use reflection because I don't send the class; I'm thinking about making a local class with the same method and package name so I can try object.getClass().
I don't want to put the entire block of code in the toString() of the object I send (by overriding it).
I can't cast to an interface.
I'm also thinking about making a .jar library that has the definition of the class file that will be created on the server and accessed on the client; how would this work?
I couldn't find another question regarding this, so I will leave this here.
EDIT:
I'm using URLConnection to communicate with a servlet; the servlet makes an instance of the object on the server, then sends it to the client using ObjectOutputStream, and the client reads it with ObjectInputStream.
Looking for some alternatives to RMI; if there are none, I will look up some RMI tutorials.
Regarding my choice not to use RMI in the first place: maybe I don't want to open a client-server connection every time; maybe I want to deserialize objects and check/invoke their methods.

If you are going to "send" serialized objects from one java virtual machine (java process) to another, you need to have the .class files already present at both ends. If you decide to continue with your current approach, you would need the following:
Your client must be Java, or be able to run Java, and have the .class files that correspond to the objects that it is receiving locally available, or must download them from the server before accessing them.
You must somehow wrap serialized object streams within HTTP. HTTP is a protocol for requesting and sending web pages. It is incompatible with Java's serialization protocol (it contains extra headers, for example), and you would need to wrap Java serialization inside HTTP payloads for things to work as you seem to expect.
When you send serialized objects, you are actually sending "object graphs" (the object and all objects accessible by navigating its fields). This can end up being inefficient. Serialization may not be the best answer for you for this reason.
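A minimal sketch of that wrapping, using the JDK's built-in com.sun.net.httpserver as a stand-in for a real servlet container (the Task class here is hypothetical, just to illustrate the idea; the receiving side only succeeds because Task.class is on its classpath):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.*;
import java.net.InetSocketAddress;
import java.net.URL;
import java.net.URLConnection;

public class ObjectOverHttp {
    // hypothetical class standing in for the object with an "execute" method
    static class Task implements Serializable {
        String execute() { return "ran on client"; }
    }

    public static void main(String[] args) throws Exception {
        // stand-in for the servlet: serve one serialized Task as the HTTP body
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/task", ex -> {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(new Task());
            }
            ex.sendResponseHeaders(200, bos.size());
            ex.getResponseBody().write(bos.toByteArray());
            ex.close();
        });
        server.start();

        // client side: URLConnection + ObjectInputStream, as in the question
        URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/task");
        URLConnection conn = url.openConnection();
        try (ObjectInputStream ois = new ObjectInputStream(conn.getInputStream())) {
            Task t = (Task) ois.readObject();
            System.out.println(t.execute()); // works only because Task.class is local
        }
        server.stop(0);
    }
}
```

Note that here the "server" and "client" share one classpath, which is exactly the requirement described above; against a remote server, the client would fail with a ClassNotFoundException unless Task.class ships with it.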
It is far easier to use other mechanisms:
If you avoid HTTP, you avoid the need for extra wrappers. Writing a simple server that, when connected to, receives and sends serialized objects is much easier and more efficient than wrapping HTTP within a traditional Java webapp (Java app servers tend to be resource-hungry).
Consider using Kryo or other Java serialization/networking libraries - they come with built-in servers, and allow very fine-grained control over what is being sent.
Java has in-built support for RMI ("Remote Method Invocation"). This seems to be what you are actually trying to achieve. You no longer need to be aware that objects are local or remote - they appear to work the same, and all required networking and serialization is done behind the scenes. Read all about it.
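To make the RMI suggestion concrete, here is a minimal sketch with the "server" and "client" halves collapsed into one process for brevity (the Greeter interface, port 2099, and the registry name are all arbitrary choices for illustration):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiSketch {
    // the remote interface must be available on both sides
    public interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    static class GreeterImpl implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    public static void main(String[] args) throws Exception {
        // "server" side: export the implementation and bind its stub
        GreeterImpl impl = new GreeterImpl();
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(impl, 0);
        Registry reg = LocateRegistry.createRegistry(2099);
        reg.rebind("Greeter", stub);

        // "client" side: look up the stub and invoke it like a local object
        Greeter remote = (Greeter) LocateRegistry.getRegistry("localhost", 2099)
                                                 .lookup("Greeter");
        System.out.println(remote.greet("world"));

        // clean shutdown so the JVM can exit
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(reg, true);
    }
}
```

The client never sees GreeterImpl; it only needs the Greeter interface, and the call travels over the wire behind the scenes.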

Related

Is there a Java class I can override to inspect and modify all network requests as they go out?

I would like to implement custom retry logic for all failed HTTP requests in my Java applications without having to modify all the networking code. Is there a class or interface I can implement that would force all outgoing requests through a method where I could inspect the request and response?
A related example from another project is a custom TrustManager implementation I used to handle revocation status for our internal CA PKI. I'm wondering if there is anything related that could be used to centrally manage HTTP behavior.
You can't do that on the source code level.
You see, the corresponding "data" doesn't "travel" through a specific unique instance of any class. Depending on your protocol, all kinds of classes, such as java.net.Socket, will be instantiated over time. You can of course look into what the code you wrote is doing, but you have no chance of changing the behavior of such existing standard classes on the source code side.
The only things that might help:
using a debugger: identify the methods that matter to you, then set breakpoints there
using instrumentation, meaning you hook into JVM startup and add/manipulate classes at runtime
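Short of instrumentation, the usual source-level workaround is to funnel all calls through a single helper of your own, which then becomes the one place to attach retry logic. A minimal sketch (the helper name and retry policy are made up for illustration; in real code the Callable would wrap an HttpClient.send or similar):

```java
import java.util.concurrent.Callable;

public class Retry {
    // run the call, retrying on any exception up to maxAttempts times
    static <T> T withRetries(Callable<T> call, int maxAttempts) throws Exception {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e; // inspect/log the failure here, then retry
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // simulate a request that fails twice, then succeeds
        int[] attempts = {0};
        String result = withRetries(() -> {
            if (++attempts[0] < 3) throw new RuntimeException("transient failure");
            return "ok";
        }, 5);
        System.out.println(result + " after " + attempts[0] + " attempts");
    }
}
```

This only helps for code you control, of course, which is exactly the limitation the answer above describes.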

How to pass an object between two JVM in java?

Recently I was asked this question in an interview: how can you pass an object between two JVMs? My response was using serialization, but I don't know if it's the right answer. How else could an object be passed between two JVMs?
Serialization is perhaps the only way out. Depending on your stack, you have one of several possibilities:
serialize objects and deserialize them on the other end (remember remote EJBs)
write an object to a file (JSON etc.) and read it on the other end from a shared folder
or use microservices to send and receive objects
you could also try out tools like Protobuf or Avro, as they tackle the serialization problem specifically
My personal preference would be to have a small server side component (a service) for exchanging data.
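The core serialization round trip behind all of these options can be sketched in a few lines; the Message class is hypothetical, and the byte array stands in for whatever transport (socket, file, queue) connects the two JVMs:

```java
import java.io.*;

public class SerializationRoundTrip {
    // hypothetical payload; both JVMs need this class on their classpath
    static class Message implements Serializable {
        final String text;
        final int count;
        Message(String text, int count) { this.text = text; this.count = count; }
    }

    public static void main(String[] args) throws Exception {
        // "JVM 1": turn the object into bytes (these could go over a socket,
        // into a shared file, or onto a message queue)
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Message("hello", 42));
        }

        // "JVM 2": rebuild the object from the bytes
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            Message m = (Message) ois.readObject();
            System.out.println(m.text + " " + m.count);
        }
    }
}
```

Whatever replaces the byte array in practice, the contract is the same: only the state travels, so the class definition must already exist on the receiving side.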
You can pass Java objects between two JVMs in a distributed environment as long as both JVM versions are identical. Please note, your JVM is platform dependent (platform meaning hardware/processor plus OS). However, you need to make sure all dependent library files are available on both JVMs. Another drawback is that even if your JVMs are the same, object passing is of no use if you are passing a Java object from one JVM to JRuby running in the receiving JVM, as the data types differ between the two.
Even if the JVMs are the same and all libraries are available, object passing is not recommended, as you might decide to upgrade the OS from 32-bit to 64-bit, in which case you need to upgrade the JVM as well.
Object passing is something I prefer only within the same JVM.
You can store the object externally. Instead of a file you can go with a fast-access data structure store like Redis https://redis.io/ . Both (or more) instances can read and write data to the same store. You can also look into JMS (Java Message Service).

Java RMI, Can I send a serialized object without putting the class file at the webserver?

I'm experimenting with RMI lately and found out that I seem to be unable to send a serialized object if the class file isn't also stored at the webserver. Does this mean that all my serializable classes need to be put in the webservers classpath?
Doesn't really seem like the best design to me IMHO.
No. All these answers are wrong.
The classes don't need to exist at both sides if you use the RMI codebase feature. You can set up a Web server to host the JAR files and set -Djava.rmi.server.codebase= to define where those classes are available as a list of URLs of those JAR files. You set that at either the server or the client or both depending on who is going to be sending classes that the other side doesn't have. Then RMI annotates those classes with those URLs so the target knows where to get them, and downloads them as needed.
Yes, the classes must exist on both sides.
Yes, the class file must exist on the webserver as well, as RMI was intended (way back when) to send an instance of a class across the wire. If you are simply looking to send data to a web server without any business behavior encapsulated in a class, then there are much newer and simpler ways (JSON, XML, SOAP, etc.) to send plain data.

Where do clients get definitions for remote classes that have not been added to registry?

I've managed to create an RMI application that does what I need it to do quite successfully, but I'm having a bit of trouble getting my head around where the client obtains definitions for remote objects. For example:
I have a server that registers itself with the rmiregistry (allowing clients to call methods on it).
UnicastRemoteObject.exportObject(new Server(), 0);
Running reg.list() confirms that my server has indeed been added to the registry. I have another remote object (rObj) running on the same JVM as the server. This is not added to the registry.
In my client, I can get the definition of my Server class by looking up Server in the rmiregistry:
reg.lookup("Server")
After this, I can freely create instances of rObj. The crux of my question is: where does my client get a definition for rObj, even though it's never been added to the registry?
I know it must come from the server, as that's where the class and interface are stored. Does the connection to Server automatically open the pipe for other remote classes to be received?
If so, how does the client know to look on the server for the remote class? Is the server treated almost as an extension of the client's classpath (it will resort to checking the server for classes that aren't in its own classpath)?
First of all, realize that it's not necessary to set up dynamic classloading from the server in order to use RMI. If you compile the interface and implementation into both the client and server jars, then everything will work fine. That is how I've almost always implemented RMI.
If you have a good reason for loading the classes dynamically from the server, you'll need to set up an HTTP server somewhere that has the interfaces and implementation classes (preferably in a jar file, although a class directory will work too). This doesn't happen automatically as part of RMI, you need to build the jars and put them somewhere on your web server. Then launch the client with a system property indicating the URL to this jar file:
-Djava.rmi.server.codebase=http://webline/public/mystuff.jar
This is explained in full detail here: http://download.oracle.com/javase/1.5.0/docs/guide/rmi/codebase.html
If you use new to create new instances of the same type (say, T) as rObj, then of course the Java compiler knew the definition of T, and your application also knows it at runtime. In this case, no RMI is involved at all.
But maybe I misunderstood your question? How exactly do you "freely create instances of rObj"?
Update: I'm eating my words here; of course, being able to compile the file and having the class available on the classpath at runtime are two different issues. Since you were not mentioning the classpath at all, I was assuming you'd somehow ended up having the classes on the client side anyway.

transmit a java.lang.reflect.Proxy over a network

Is there a convenient way to transmit an object including its code (the class) over a network (not just the instance data)?
Don't ask me why I want to do this; it's an assignment. I asked several times if that is really what they meant, and they didn't rephrase their answer, so I guess they really want us to transmit code (not just the field data) over a network. To be honest, I have no clue why we need a Proxy in this assignment anyway; just writing a simple class would do, IMO. The assignment says that we should instantiate the proxy on the server and transmit it to the client (and yes, they talk about a java.lang.reflect.Proxy; they name this class). Because there is no class file for a proxy, I can't deploy that. I guess I would have to somehow read out the bytecode of the generated Proxy, transmit it to the client, and then load it. Which makes absolutely no sense at all, but this seems to be what they want us to do. I don't get why.
This is the core value proposition of the Apache River project (formerly known as Jini when it was run by Sun).
You put the code you need to run remotely in a jar on a "codebase" http server and publish your proxy to a lookup server. River annotates that proxy (which is a serialized instance) with the codebase URL(s). When a client fetches that proxy from the lookup server and instantiates it, the codebase jars are used in a sandboxed classloader. It's common to create "smart proxies" which load a bunch of code to run on the client to manage communication back to the source service, or you can use a simpler proxy to just make RMI calls.
The technology encapsulated by River is complicated, but profound.
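It is worth noting that a java.lang.reflect.Proxy instance does serialize despite having no .class file on disk: Java's serialization writes the proxy's interface names, and the receiving JVM regenerates the proxy class from them, so only the interface and the handler class need to be present on the receiving side. A minimal same-JVM round trip (interface and handler names are made up for illustration):

```java
import java.io.*;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyRoundTrip {
    public interface Service { String ping(); }

    // the handler must itself be Serializable for the proxy to serialize
    static class Handler implements InvocationHandler, Serializable {
        public Object invoke(Object proxy, Method method, Object[] args) {
            return "pong";
        }
    }

    public static void main(String[] args) throws Exception {
        Service s = (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[]{Service.class},
                new Handler());

        // serialize the proxy instance, as it would travel over a socket
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(s);
        }

        // deserialize: the JDK regenerates the proxy class from the
        // interface list, provided Service and Handler are on the classpath
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            Service copy = (Service) ois.readObject();
            System.out.println(copy.ping());
        }
    }
}
```

So "reading out the bytecode" is unnecessary for the proxy class itself; what cannot travel this way is the handler's class definition, which is exactly the gap the River/Jini codebase mechanism fills.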