I'd like to profile the network overhead of my RMI-based application. For instance, I'd be interested to know how many bytes a stub transferred over the network, or how many method calls were made through it. I can't find anything in the RMI API to hook into, though. Is this possible at all?
I am not particularly fond of RMI and have found JSON-based protocols, Thrift, and even XML-RPC easier to work with. However, sometimes we don't have a choice.
There is a microbenchmark suite for RMI, as well as for object serialization, in the "test" tree of the jdk7/jdk repository; see:
jdk/test/java/rmi/reliability/benchmark
The script:
jdk/test/java/rmi/reliability/scripts/create_benchmark_jars.ksh
shows how to create the two JAR files used in the benchmarking. You can pass command-line parameters to each instance for specific settings, such as repetitions per run. (One instance of the jar will run as the client and the other as the server; the role is also configured via a command-line parameter.)
I haven't played much with this myself - usually trusting existing benchmarks, for example:
http://daniel.gredler.net/2008/01/07/java-remoting-protocol-benchmarks
...or using tools such as the following (I haven't looked much at the last two):
JMeter (http://jmeter.apache.org/), Soap-stone (http://soap-stone.sourceforge.net/) or
JVM-serialisers (https://github.com/eishay/jvm-serializers/wiki/)
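If you need numbers for your own application rather than a benchmark harness, one hook that does exist in the RMI API is the custom socket factory. A rough sketch (my own code, not part of the suite above) that counts bytes on every connection it creates; the counters live in whichever JVM actually opens the sockets:

import java.io.*;
import java.net.Socket;
import java.rmi.server.RMIClientSocketFactory;
import java.util.concurrent.atomic.AtomicLong;

public class CountingSocketFactory implements RMIClientSocketFactory, Serializable {
    public static final AtomicLong bytesRead = new AtomicLong();
    public static final AtomicLong bytesWritten = new AtomicLong();

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        return new Socket(host, port) {
            @Override
            public InputStream getInputStream() throws IOException {
                // Wrap the raw stream so every read is tallied.
                return new FilterInputStream(super.getInputStream()) {
                    @Override
                    public int read() throws IOException {
                        int b = super.read();
                        if (b != -1) bytesRead.incrementAndGet();
                        return b;
                    }
                    @Override
                    public int read(byte[] buf, int off, int len) throws IOException {
                        int n = super.read(buf, off, len);
                        if (n > 0) bytesRead.addAndGet(n);
                        return n;
                    }
                };
            }
            @Override
            public OutputStream getOutputStream() throws IOException {
                return new FilterOutputStream(super.getOutputStream()) {
                    @Override
                    public void write(int b) throws IOException {
                        out.write(b);
                        bytesWritten.incrementAndGet();
                    }
                    @Override
                    public void write(byte[] buf, int off, int len) throws IOException {
                        // Write the array directly to avoid double counting.
                        out.write(buf, off, len);
                        bytesWritten.addAndGet(len);
                    }
                };
            }
        };
    }
}

You would pass an instance to UnicastRemoteObject.exportObject(obj, 0, csf, ssf) when exporting your remote objects; per-stub method-call counts could be gathered similarly by wrapping the stub in a java.lang.reflect.Proxy.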
I'm currently writing a Java program that is an interface to another server. The majority of the functions (over 90%) do something on the server. Right now, I'm just writing simple classes that run some actions on the server and then checking the results myself, or adding methods to the tests that read back the written information.
Currently, I'm developing on my own computer, and have a version of the server running locally on a VM.
I don't want to run the tests at every build, as I don't want to keep modifying the server I am connected to. I am not sure of the best way to go about my testing. I have JUnit tests (for simple functions that do not interact externally) that run at every build, but I can't find a standard way in JUnit to write tests that don't have to run at every build (perhaps only when the functions they cover change?).
Alternatively, can anyone point me in the right direction for how best to handle my testing?
Thanks!
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to
This should have raised the alarms for you. Running the tests is what gives you feedback on whether you broke stuff. Not running them means you're blind. It does not mean that everything is fine.
There are several approaches, depending on how much access you have to the server code.
Full Access
If you're writing the server yourself, or you have access to its code, then you can create a test-kit for the server: a modified version of the server that runs completely in-memory and allows you to control how the server responds, so you can simulate different scenarios.
This kind of test-kit is created by separating the logic parts of the server from its surroundings (databases, queues, file-systems, etc.), then mocking those surroundings or creating in-memory versions of them. This allows the server to run very quickly, and it can be created and destroyed within the test itself.
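A minimal sketch of what such a test-kit looks like (all names here are hypothetical, not from any framework):

import java.util.HashMap;
import java.util.Map;

// The server's logic depends on an interface, not a concrete database.
interface DataStore {
    void put(String key, String value);
    String get(String key);
}

// In-memory stand-in for the real database; fast to create and throw away.
class InMemoryDataStore implements DataStore {
    private final Map<String, String> map = new HashMap<>();
    public void put(String key, String value) { map.put(key, value); }
    public String get(String key) { return map.get(key); }
}

// The server logic receives its surroundings instead of reaching out to them,
// so a test can construct it as new Server(new InMemoryDataStore()).
class Server {
    private final DataStore store;
    Server(DataStore store) { this.store = store; }
    void handleWrite(String key, String value) { store.put(key, value); }
    String handleRead(String key) { return store.get(key); }
}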
Limited/No Access
If you have to write tests for integration with a server that's out of your control, such as a 3rd party API, then the approach is to write a "mock" of the remote service, and a contract test to check that the mock still behaves the same way as the real thing. I usually put those in a different build, and run that occasionally just to know that my mock server hasn't diverged from the real server.
Once you have your mock server, you can write an adapter layer for it, covered by integration tests. The rest of your code will only use the adapter, and therefore can be tested using plain unit tests.
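A minimal sketch of the adapter-plus-fake arrangement (all names hypothetical; JUnit 4 assumed on the classpath):

import java.util.HashMap;
import java.util.Map;
import org.junit.Assert;
import org.junit.Test;

// The adapter interface: production code talks to this, never to the wire.
interface RemoteServerClient {
    String fetchStatus(String jobId);
}

// Business logic depends only on the adapter, so it unit-tests cleanly.
class JobMonitor {
    private final RemoteServerClient client;
    JobMonitor(RemoteServerClient client) { this.client = client; }
    boolean isHealthy(String jobId) { return "OK".equals(client.fetchStatus(jobId)); }
}

// A hand-rolled fake: no network, no shared server state to pollute.
class FakeServerClient implements RemoteServerClient {
    final Map<String, String> statuses = new HashMap<>();
    public String fetchStatus(String jobId) {
        return statuses.getOrDefault(jobId, "UNKNOWN");
    }
}

public class JobMonitorTest {
    @Test
    public void unknownJobsAreNotHealthy() {
        Assert.assertFalse(new JobMonitor(new FakeServerClient()).isHealthy("no-such-job"));
    }
}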
The second approach can, of course, be employed when you have full access as well, but writing the test-kit is usually better. Those kinds of tests tend to be duplicated across projects and teams, so when the server changes, a whole bunch of people have to fix their tests; if the test-kit is written as part of the server code, it only has to be altered in one place.
I'm trying to use Ice4j, but there are no tutorials for it or anything. I have tried looking at the source code, but everything goes somewhere else and nothing is explained.
I've read the IcePseudoTcp test and I want to implement my own, but the problem is that the test creates both the local and remote agents together and then has them interact with each other. How do I separate the two, so that I have two programs, one that acts as the local controlling agent and the other that acts as the remote agent, and then have the local agent discover the remote agent?
The function Ice.transferRemoteCandidates uses both Agents, but how do I use the first agent to find the other?
addRemoteCandidateToAgent with addLocalCandidateToContentList will help you.
With addLocalCandidateToContentList, you build YOUR local ContentList (the data that needs to be sent to the remote peer/server, which will then feed it to its own agent via addRemoteCandidateToAgent).
Look over here: http://stellarbuild.com/blog/article/ice4j-networking-tutorial-part-1
I think that tutorial will explain how to connect the two agents. At least it uses SDP, which doesn't need control.
If you want a SIP tutorial perhaps try: http://blog.sharedmemory.fr/en/2014/06/22/gsoc-2014-ice4j-tutorial/
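To make the two-program structure concrete, here is a rough sketch. The ice4j calls follow the linked tutorials (exact signatures may vary between versions), and the three helpers are placeholders you would implement yourself, e.g. by (de)serializing candidates as SDP:

import org.ice4j.Transport;
import org.ice4j.ice.Agent;
import org.ice4j.ice.IceMediaStream;

public class IceEndpoint {
    public static void main(String[] args) throws Exception {
        Agent agent = new Agent();                        // one Agent per process
        IceMediaStream stream = agent.createMediaStream("data");
        agent.createComponent(stream, Transport.UDP, 5000, 5000, 6000);

        // Trade candidate descriptions over ANY out-of-band channel you
        // control: a plain TCP socket, HTTP, SIP, XMPP, even copy/paste.
        String local = describeLocalCandidates(agent);
        String remote = exchange(local);
        applyRemoteCandidates(agent, remote);

        agent.startConnectivityEstablishment();           // ICE checks begin
    }

    // Placeholder helpers -- see the tutorials for real implementations.
    static String describeLocalCandidates(Agent a) { return ""; }
    static String exchange(String localDescription) { return ""; }
    static void applyRemoteCandidates(Agent a, String remoteDescription) { }
}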
I'm coding my server in Java, and throughout the day my server has to connect through 5 different proxies at once to other servers and gather data. However, reading about Java proxy settings on Stack Exchange, I see that when you set a proxy, its effect is VM-wide: if any thread sets the proxy properties, all network activity in that JVM goes through that proxy.
I'm currently using this method of setting a proxy, which, according to some tests, is actually pretty functional and fast.
System.getProperties().put( "http.proxyHost", host );
System.getProperties().put( "http.proxyPort", port );
However, I can't really afford to have 5 JARs doing the same thing with different proxies. I tried that too, and it would be a simple solution, but I can't afford to spend that much RAM on this alone, as my server is huge.
You need to open each connection with its own proxy settings. The answer here by NickDk shows how to open a URL connection with its own proxy; you will need to do the same with each of your 5 proxies separately.
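For reference, a minimal sketch of that per-connection approach (the proxy host, port, and target URL are placeholders). No system properties are touched, so five threads can each use a different proxy inside the same JVM:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class PerConnectionProxy {
    public static void main(String[] args) throws Exception {
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("proxy1.example.com", 8080));
        // The proxy applies to this connection only.
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/").openConnection(proxy);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            System.out.println(in.readLine());
        } finally {
            conn.disconnect();
        }
    }
}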
Described here is the use of a library embedded in the JRE that can handle "proxy PAC" files, in which any combination of proxies can be defined.
Since it is embedded in the JRE, standard ways to configure a Java application with a PAC file (optional launch parameters) might exist, but I am not aware of any.
However, the solution described in the link provided should fit your needs, since your usage is programmatic.
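If you want PAC-style routing rules without relying on JRE internals, a custom java.net.ProxySelector (standard API) can express the same per-URL decisions in plain Java; a minimal sketch with placeholder hosts:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.ProxySelector;
import java.net.SocketAddress;
import java.net.URI;
import java.util.Collections;
import java.util.List;

public class RuleBasedProxySelector extends ProxySelector {
    @Override
    public List<Proxy> select(URI uri) {
        // Route by host, scheme, thread -- anything a PAC file could decide.
        if (uri.getHost() != null && uri.getHost().endsWith(".internal.example.com")) {
            return Collections.singletonList(Proxy.NO_PROXY); // direct connection
        }
        return Collections.singletonList(new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("proxy1.example.com", 8080)));
    }

    @Override
    public void connectFailed(URI uri, SocketAddress sa, IOException ioe) {
        // Required callback; a real implementation might fail over here.
    }

    public static void main(String[] args) {
        ProxySelector.setDefault(new RuleBasedProxySelector()); // JVM-wide hook
    }
}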
Let us assume a Java application, accepting an integer command line argument, say bubu.
Assuming one uses a decent command line parser (and I do - https://github.com/jopt-simple/jopt-simple) plus keeping in mind the -D java switch, these are some of the typical ways to pass this command line parameter:
--bubu 5 (or --bubu=5 or --bubu5)
-Dbubu=5
The first is a program argument and must be handled by the application using some command line parser, whereas the second is a VM argument, already parsed by java and available as Integer.getInteger("bubu").
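For illustration, a minimal program (assuming jopt-simple on the classpath) that reads the same value both ways; run it as java -Dbubu=5 BubuDemo --bubu 5:

import joptsimple.OptionParser;
import joptsimple.OptionSet;

public class BubuDemo {
    public static void main(String[] args) {
        // 1) Program argument: --bubu 5 or --bubu=5, parsed explicitly.
        OptionParser parser = new OptionParser();
        parser.accepts("bubu").withRequiredArg().ofType(Integer.class);
        OptionSet options = parser.parse(args);
        Integer fromArgs = (Integer) options.valueOf("bubu");

        // 2) VM argument: -Dbubu=5, already parsed by the JVM.
        Integer fromProperty = Integer.getInteger("bubu");

        System.out.println("from --bubu: " + fromArgs
                + ", from -Dbubu: " + fromProperty);
    }
}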
I am kinda puzzled. What should I use? Using the system property facility:
seems to cost nothing
does not depend on any command line parser library
provides convenient (albeit unexpected) API to obtain the values
As far as I can see, the only con is that all the command line options have to use the -D flag.
Please advise.
Thanks.
EDIT
Another pro for the system properties - "they're usable even when the application is not a stand-alone app starting from a main, but also when the app is a webapp or a unit test." - thanks https://stackoverflow.com/users/571407/jb-nizet
EDIT2
Let me be more focused here. Is there any serious reason (besides aesthetics) not to just always use system properties?
EDIT3
OK, I think I get it now. If my code is likely to be loaded by a web application, then there is an issue of a potential name clash, since other web applications hosted by the same web container share the system property space with my code.
Therefore, I have to be prudent and disambiguate my system properties beforehand. So, no more bubu, it is com.shunra.myapp.bubu now. Meaning that instead of a simple
-Dbubu=5
I have
-Dcom.shunra.myapp.bubu=5
which becomes less attractive for a simple command line application.
Another reason is given by Mark Peters, which is pretty good to me.
I'd argue that the advantage Fortyrunner cites is actually the most significant negative for system properties--they are available to anyone who asks for them.
If the flag or option is meant to be a command-line option, it should be available to the layer or module of your code that deals with taking input from the command line, not any code that asks for it.
You can get some destructive coupling from global state, and system properties are no different than any other global state.
That said, if you're just trying to make a quick and dirty CLI program, and separation of concerns and coupling are not big concerns for you, system properties are an easy mechanism, though one that (IMO) leads to a poor user experience. A getopt-style library will give you a lot more support for building a good CLI user experience.
One of the main advantages of system properties is that they are available at any time during the life of your program.
Command line arguments are only available in the main method (unless you persist them).
I feel that there are many things an average user like me does not need to know. System properties let the developer of a system preset a number of values that enable the system to run. For example, when I download the GlassFish app server, it comes with many preset parameters that I have no idea what they're for. I am not very experienced with server settings; if you asked me to start GlassFish with 20 parameters on the command line, I would have to learn what each parameter is for, what value to set, and so on. It's too much trouble.
In brief, as a system gets larger, it may have more and more properties. With system properties preset, users only need to learn about them when they really need to. For example, I only need to know about GlassFish's -XX:PermSize when I need to increase memory.
Is there a convenient way to transmit an object including its code (the class) over a network (not just the instance data)?
Don't ask me why I want to do this; it's an assignment. I asked several times whether that is really what they meant, and they didn't rephrase their answer, so I guess they really want us to transmit code (not just the field data) over a network. To be honest, I have no clue why we need a Proxy in this assignment at all; just writing a simple class would do, IMO. The assignment says we should instantiate the proxy on the server and transmit it to the client (and yes, they talk about a java.lang.reflect.Proxy; they name this class). Because there is no class file for a proxy, I can't deploy it. I guess I would have to somehow read out the bytecode of the generated Proxy, transmit it to the client, and then load it. That makes absolutely no sense to me, but this seems to be what they want us to do. I don't get why.
This is the core value proposition of the Apache River project (formerly known as Jini when it was run by Sun).
You put the code you need to run remotely in a jar on a "codebase" http server and publish your proxy to a lookup server. River annotates that proxy (which is a serialized instance) with the codebase URL(s). When a client fetches that proxy from the lookup server and instantiates it, the codebase jars are used in a sandboxed classloader. It's common to create "smart proxies" which load a bunch of code to run on the client to manage communication back to the source service, or you can use a simpler proxy to just make RMI calls.
The technology encapsulated by River is complicated, but profound.
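To see the serialization half of this without River: a java.lang.reflect.Proxy instance is itself serializable, and what travels is the interface name plus the (serializable) InvocationHandler, not the generated bytecode. The receiving JVM regenerates the proxy class, provided it can load the interface and the handler class (from its classpath or, with RMI/River, from a codebase). A self-contained sketch simulating the transmission in one process:

import java.io.*;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyOverTheWire {
    public interface Greeter { String greet(String name); }

    // The handler must be Serializable; it handles only greet(), for brevity.
    public static class UppercaseHandler implements InvocationHandler, Serializable {
        @Override
        public Object invoke(Object proxy, Method method, Object[] args) {
            return ("hello " + args[0]).toUpperCase();
        }
    }

    public static void main(String[] args) throws Exception {
        Greeter original = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                new UppercaseHandler());

        // "Transmit": serialize to bytes, then deserialize.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bytes);
        oos.writeObject(original);
        oos.flush();
        Greeter received = (Greeter) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();

        System.out.println(received.greet("world")); // prints HELLO WORLD
    }
}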