Java 5 - Check whether a network interface is up and running.

Java 6 offers the isUp() method to check whether a network interface is up and running:
http://docs.oracle.com/javase/6/docs/api/java/net/NetworkInterface.html#isUp()
Is there any way to check the same in Java 5?

If you don't mind using an external library, check out Sigar.
You can get the network interface status, along with stats like bytes received and bytes sent.
The only drawback is that it is a C library with a Java binding, so you will need the specific version for your architecture at runtime.

The best way to see whether any resource is available, from an NIC to a database to a Web server on the other side of the world, or the Moon, is to try to use it. Any other technique is liable to the timing problem that it was up when you tested and down when you used it, or the other way around. And when you try to use it you have to deal with the failure cases anyway, because they are generally checked exceptions: why write the same code twice?

The NetworkInterface class has a method getInetAddresses(), delivering an enumeration of InetAddresses. Assuming the adapter supports IPv4, you'll find an Inet4Address among these. If the interface is down, it will report "0.0.0.0" as its address (e.g. via toString()). If an adapter is up but not linked (no dynamic address assigned), it will have a link-local address of the pattern "169.254.x.x".
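For illustration, here is a minimal Java 5 sketch of that address-based heuristic. The interface name "eth0" is a placeholder, and treating "0.0.0.0" / "169.254.x.x" as "down or unlinked" is exactly the heuristic described above, not a contract of the API:

import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Enumeration;

public class InterfaceCheck {
    // Heuristic only: an all-zeros address is treated as "down",
    // a 169.254.x.x (link-local) address as "up but not linked".
    static boolean looksUp(NetworkInterface nif) {
        Enumeration<InetAddress> addrs = nif.getInetAddresses();
        while (addrs.hasMoreElements()) {
            String a = addrs.nextElement().getHostAddress();
            if (a.equals("0.0.0.0") || a.startsWith("169.254.")) {
                continue; // unusable address, keep looking
            }
            if (a.indexOf(':') < 0) { // crude "is IPv4" test
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        NetworkInterface nif = NetworkInterface.getByName("eth0"); // placeholder name
        System.out.println(nif != null && looksUp(nif));
    }
}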

Related

Best approach to stream multiple types with GRPC

I have a server that passes messages to a client. The messages are of different types and the server has a generic handleMessage and passMessage method for the clients.
Now I intend to adapt this and use GRPC for it. I know I could expose all methods of the server by defining services in my .proto file. But is there also a way to:
Stream
heterogeneous types
with one RPC call
using GRPC
There is oneof, which allows me to define a message that has only one of several fields set. I could have a MessageContainer whose oneof covers all my message types. The container would then carry exactly one of the types, and I would only need to write one
service {
    rpc messageHandler(ClientInfo) returns (stream MessageContainer);
}
This way, the server could stream multiple types to the client through one unique interface. Does this make sense? Or is it better to have all methods exposed individually?
UPDATE
I found this thread, which argues oneof would be the way to go. I'd like that, obviously, as it avoids having to create potentially dozens of services and stubs. It would also help to ensure a FIFO setup instead of multiplexing several streams and not being sure which message came first. But it feels dirty for some reason.
Yes, this makes sense (and what you are calling MessageContainer is best understood as a sum type).
... but it is still better to define different methods when you can ("better" here means "more idiomatic, more readable by future maintainers of your system, and better able to be changed in the future when method semantics need to change").
The question of whether to express your service as a single RPC method returning a sum type or as multiple RPC methods comes down to whether or not the particular addend type that will be used can be known at RPC invocation time. Is it the case that when you set request.my_type_determining_field to 5 that the stream transmitted by the server always consists of MessageContainer messages that have their oneof set to a MyFifthKindOfParticularMessage instance? If so then you should probably just write a separate RPC method that returns a stream of MyFifthKindOfParticularMessage messages. If, however, it is the case that at RPC invocation time you don't know with certainty what the used addend types of the messages transmitted from the server will be (and "messages with different addend types in the same stream" is a sub-use-case of this), then I don't think it's possible for your service to be factored into different RPCs and the right thing for you to do is have one RPC method that returns a stream of a sum type.
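For what the client side of the sum-type approach might look like in Java, here is a rough sketch of consuming such a stream and dispatching on the oneof. All names here (MessageService, a oneof called payload, the OrderUpdate/Heartbeat addends, the generated stub) are hypothetical stand-ins for whatever your .proto actually generates:

import io.grpc.stub.StreamObserver;

public class MessageStreamClient {
    // Assumes protoc generated MessageContainer with a oneof named "payload"
    // and an async stub for a service exposing the messageHandler RPC.
    void subscribe(MessageServiceGrpc.MessageServiceStub stub, ClientInfo info) {
        stub.messageHandler(info, new StreamObserver<MessageContainer>() {
            @Override
            public void onNext(MessageContainer msg) {
                // protobuf generates a *Case enum for every oneof
                switch (msg.getPayloadCase()) {
                    case ORDER_UPDATE:   // hypothetical addend
                        handle(msg.getOrderUpdate());
                        break;
                    case HEARTBEAT:      // hypothetical addend
                        handle(msg.getHeartbeat());
                        break;
                    default:
                        // PAYLOAD_NOT_SET, or an addend this client doesn't know yet
                        break;
                }
            }
            @Override public void onError(Throwable t) { /* stream failed */ }
            @Override public void onCompleted() { /* server closed the stream */ }
        });
    }

    private void handle(Object payload) { /* application-specific */ }
}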

How does Apache Spark send functions to other machines under the hood

I started playing with Pyspark to do some data processing. It was interesting to me that I could do something like
rdd.map(lambda x : (x['somekey'], 1)).reduceByKey(lambda x,y: x+y).count()
And it would send the logic in these functions over potentially numerous machines to execute in parallel.
Now, coming from a Java background, if I wanted to send an object containing some methods to another machine, that machine would need to know the class definition of the object I'm streaming over the network. Recently Java added functional interfaces, which let me create an implementation of such an interface with a lambda (i.e. MyInterface impl = () -> System.out.println("Stuff");),
where MyInterface would just have one method, doStuff().
However, if I wanted to send such a function over the wire, the destination machine would need to know the implementation (impl itself) in order to call its 'doStuff()' method.
My question boils down to... How does Spark, written in Scala, actually send functionality to other machines? I have a couple hunches:
The driver streams class definitions to other machines, and those machines dynamically load them with a class loader. Then the driver streams the objects and the machines know what they are, and can execute on them.
Spark has a set of methods defined on all machines (core libraries) which are all that are needed for anything I could pass it. That is, my passed function is converted into one or more function calls on the core library. (Seems unlikely since the lambda can be just about anything, including instantiating other objects inside)
Thanks!
Edit: Spark is written in Scala, but I was interested in hearing how this might be approached in Java (where a function cannot exist unless it's inside a class, so sending it means the worker nodes would need that class definition).
Edit 2:
This is the problem in Java, in case of confusion:
import java.io.ObjectOutputStream;
import java.net.Socket;

public class Playground
{
    private static interface DoesThings
    {
        public void doThing();
    }

    public void func() throws Exception {
        Socket s = new Socket("addr", 1234);
        ObjectOutputStream oos = new ObjectOutputStream(s.getOutputStream());
        oos.writeObject("Hello!"); // Works just fine, you're just sending a String
        oos.writeObject((DoesThings) () -> System.out.println("Hey, I'm doing a thing!!")); // Fails: DoesThings isn't Serializable, and the other machine wouldn't know the lambda's class anyway
        DoesThings dt = (DoesThings) () -> System.out.println("Hey, I'm doing a thing!!");
        System.out.println(dt.getClass());
    }
}
The System.out.println(dt.getClass()) call prints:
"class JohnLibs.Playground$$Lambda$1/23237446"
Now, assume that the interface definition wasn't in the same file, but in a shared file both machines had. This driver program, func(), essentially creates a new class which implements DoesThings.
As you can see, the destination machine is not going to know what JohnLibs.Playground$$Lambda$1/23237446 is, even though it knows what DoesThings is. It all comes down to this: you can't pass a function without it being bound to a class. In Python you could just send a string with the definition and then execute that string (since it's interpreted). Perhaps that's what Spark does, since it uses Scala instead of Java (if Scala can have functions outside of classes).
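Incidentally, the snippet above can be made to serialize if the target interface is Serializable (this is essentially what Spark's Java API requires of user functions). A minimal sketch; the important caveat is that the receiving JVM still needs the class that defines the lambda on its classpath in order to deserialize it:

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializableLambda {
    // Extending Serializable lets the JVM write the lambda out
    // (behind the scenes it is replaced by a SerializedLambda).
    interface DoesThings extends Serializable {
        void doThing();
    }

    public static void main(String[] args) throws Exception {
        DoesThings dt = () -> System.out.println("Hey, I'm doing a thing!");
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bytes);
        oos.writeObject(dt); // no NotSerializableException this time
        oos.close();
        System.out.println("wrote " + bytes.size() + " bytes of " + dt.getClass());
        // Deserializing on another machine still throws ClassNotFoundException
        // unless that JVM has the defining class on its classpath.
    }
}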
Java bytecode, which is, of course, what both Java and Scala are compiled to, was created specifically to be platform independent. So if you have a classfile you can move it to any other machine, regardless of "silicon" architecture, and provided it has a JVM of at least that version, it will run. James Gosling and his team did this deliberately to allow code to move between machines right from the very start, and it was easy to demonstrate in Java 0.98 (the first version I played with).
When the JVM tries to load a class, it uses an instance of a ClassLoader. Classloaders encompass two things: the ability to fetch the binary of a bytecode file, and the ability to load the code (verify its integrity, convert it into an in-memory instance of java.lang.Class, and make it available to other code in the system). At Java 1, you mostly had to write your own classloader if you wanted to take control of how the bytes were loaded, although there was a Sun-specific AppletClassLoader, which was written to load classfiles over HTTP rather than from the file system.
A little later, at Java 1.2, the "how to fetch the bytes of the classfile" part was separated out into URLClassLoader. That can use any supported protocol to load classes. Indeed, the protocol support mechanism was and is extensible via pluggable protocol handlers. So now you can load classes from anywhere without the risk of making mistakes in the harder part, which is how you verify and install the class.
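As a minimal illustration of that "fetch the classfile from somewhere else" idea (the URL and class name below are placeholders):

import java.net.URL;
import java.net.URLClassLoader;

public class RemoteLoading {
    public static void main(String[] args) throws Exception {
        // Any URL a protocol handler understands will do: http, file, jar, ...
        URLClassLoader loader = new URLClassLoader(
                new URL[] { new URL("http://example.com/classes/") }); // placeholder
        Class<?> cls = loader.loadClass("com.example.SomeTask");       // placeholder
        // Typically you cast to a ubiquitous interface both sides already know
        Runnable task = (Runnable) cls.getDeclaredConstructor().newInstance();
        task.run();
    }
}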
Along with that, Java's RMI mechanism allows a serialized object (the class name, along with the "state" part of an object) to be wrapped in a MarshalledObject. This adds "where this class may be loaded from", represented as a URL. RMI automates the conversion of real objects in memory to MarshalledObjects and also ships them around on the network. If a JVM receives a marshalled object for which it already has the class definition, it always uses that class definition (for security). If not, however, then provided a bunch of criteria are met (security, and just plain working correctly, criteria) the classfile may be loaded from that remote server, allowing a JVM to load classes for which it has never seen the definitions. (Obviously, the code for such systems must typically be written against ubiquitous interfaces; if not, there's going to be a lot of reflection going on!)
Now, I don't know (indeed, I found your question while trying to determine the same thing) whether Spark uses RMI infrastructure. I do know that Hadoop does not, seemingly because the authors wanted to create their own system (which is fun and educational, of course) rather than use a flexible, configurable, extensively tested (including security-tested!) system.
However, all that has to happen to make this work in general are the steps I outlined for RMI. The requirements are essentially:
1) Objects can be serialized into some byte sequence format understood by all participants
2) When objects are sent across the wire the receiving end must have some way to obtain the classfile that defines them. This can be a) pre-installation, b) RMI's approach of "here's where to find this" or c) the sending system sends the jar. Any of these can work
3) Security should probably be maintained. In RMI, this requirement was rather "in your face", but I don't see it in Spark, so they either hid the configuration, or perhaps just fixed what it can do.
Anyway, that's not really an answer, since I described principles, with a specific example, but not the actual specific answer to your question. I'd still like to find that!
When you submit a spark application to the cluster, your code is deployed to all worker nodes, so your class and function definitions exist on all nodes.

How can I tell java to use a specific outgoing ip interface for a http request?

Does anybody know a quick way to force an outgoing http request to go through a specific (logical) ip address, in java?
I'm thinking of using the Apache HTTP client (part of HttpComponents), which surely has a way to do it, but the API looks non-trivial. Has anybody already done anything similar with it?
Thank you.
Does this help?
How to make the JVM use a given source IP by default?
Use socket.bind(bindpoint) just before socket.connect(endpoint).
Both bindpoint and endpoint can be an InetSocketAddress.
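A minimal sketch of that bind-then-connect order (the addresses are placeholders):

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SourceBoundSocket {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket(); // created unconnected
        // Bind to the local (logical) address the traffic should leave from;
        // port 0 means "any free local port". 192.0.2.10 is a placeholder.
        socket.bind(new InetSocketAddress(InetAddress.getByName("192.0.2.10"), 0));
        socket.connect(new InetSocketAddress("example.com", 80));
        // ... use the socket's streams for the HTTP exchange ...
        socket.close();
    }
}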
The http.route.local-address parameter is your friend [1]. Alternatively, you might want to implement a custom HttpRoutePlanner in order to have full control over the process of route computation and to assign connections to local interfaces using a strategy of some sort.
[1] http://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html#d4e501
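For example, with the older HttpClient 4.x API the tutorial describes, something along these lines should do it (the local IP is a placeholder; newer client versions offer RequestConfig's setLocalAddress() instead):

import java.net.InetAddress;

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.conn.params.ConnRoutePNames;
import org.apache.http.impl.client.DefaultHttpClient;

public class LocalAddressExample {
    public static void main(String[] args) throws Exception {
        DefaultHttpClient client = new DefaultHttpClient();
        // ConnRoutePNames.LOCAL_ADDRESS is the "http.route.local-address" parameter:
        // routes computed by this client will leave via the given local address.
        client.getParams().setParameter(ConnRoutePNames.LOCAL_ADDRESS,
                InetAddress.getByName("192.0.2.10")); // placeholder local IP
        HttpResponse response = client.execute(new HttpGet("http://example.com/"));
        System.out.println(response.getStatusLine());
    }
}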

Which options do I have for Java process communication?

We have a place in the code of the following form:
void processParam(Object param)
{
    wrapperForComplexNativeObject result = jniCallWhichMayCrash(param);
    processResult(result);
}
processParam - a method which is called with many different arguments.
jniCallWhichMayCrash - a native method which is intended to do some complex processing of its parameter and to create some complex object. It can crash in some cases.
wrapperForComplexNativeObject - a wrapper type generated by SWIG.
processResult - a method written in pure Java which processes its parameter by creating several kinds of objects (by kinds I don't mean classes; more like hierarchies):
1 - Some non-unique objects which reference each other (from the same hierarchy); these objects can have duplicates created from invocations of processParam() with different parameter values. Since it's costly to keep all the duplicates, it's necessary to cache them.
2 - Some unique objects which reference each other (from the same hierarchy) and some of the objects of the 1st kind.
After processParam has been executed for each argument in some set, the data created in processResult will be processed together. The problem is that jniCallWhichMayCrash may crash the entire JVM, and that is very bad. The crash may happen for one argument value and not for another. We've decided that it's better to ignore such crashes and just skip some chunks of data when they occur. To do this we should run processParam inside a separate process and somehow (HOW? HOW?! This is the question) pass the result to the main process; in case of a crash we only lose part of the data (that's OK) without losing everything else. So the main problem is the implementation of transport between the different processes. Which options do I have? I can think of serializing and transmitting binary data over streams, but serialization may not be very fast due to object complexity. Do I have other options for implementing this?
Let us assume that the processes are on the same machine. Your options include:
Use Runtime.exec() (or a ProcessBuilder) to launch a new process for each request, passing the parameter object as a command line argument or via the process's standard input, and reading the result from the process's standard output. The process exits on completion of a single request.
Use Runtime.exec() (or a ProcessBuilder) to launch a long-running process, using the process's standard input / output for sending the requests and replies. The process instance handles multiple requests (a sketch of this approach appears below).
Use a "named pipe" to send requests / replies to an existing local (or possibly remote) process.
Use raw TCP/IP Sockets or Unix Domain Sockets to send requests / replies to an existing local (or possibly remote) process.
For each of the above, you will need to design your own request formats and deal with parameter / result encoding and decoding on both sides.
Implement the process as a web service and use JSON or XML (or something else) to encode the parameters and results. Depending on your chosen encoding scheme, there will be existing libraries that deal with encoding / decoding and (possibly) mapping to Java types.
SOAP / WSDL - with these, you typically design the application protocol at a higher level of abstraction, and the framework libraries take care of encoding / decoding, dispatching requests and so on.
CORBA or an equivalent like ICE. These options are like SOAP / WSDL, but using more efficient wire representations, etc.
Message queuing systems like MQ-series.
Note that the last four are normally used in systems where the client and server are on separate machines, but they work just as well (and maybe faster) when client and server are colocated.
I should perhaps add that an alternative approach is to get rid of the problematic JNI code. Either replace it with pure Java code, or run it as an external command or service without a Java wrapper around it.
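To make the second option above concrete, here is a rough sketch of the parent side: it launches a long-running worker JVM and exchanges serialized objects over the worker's stdin/stdout. WorkerMain and worker.jar are hypothetical, and the worker is assumed to create its ObjectOutputStream before its ObjectInputStream, mirroring the order used here, so the stream headers don't deadlock:

import java.io.EOFException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class NativeWorkerClient {
    public static void main(String[] args) throws Exception {
        // Launch a long-running worker JVM (hypothetical main class and jar)
        Process worker = new ProcessBuilder("java", "-cp", "worker.jar", "WorkerMain").start();

        // worker.getOutputStream() is the worker's stdin, getInputStream() its stdout
        ObjectOutputStream toWorker = new ObjectOutputStream(worker.getOutputStream());
        ObjectInputStream fromWorker = new ObjectInputStream(worker.getInputStream());

        toWorker.writeObject("some parameter"); // parameters must be Serializable
        toWorker.flush();
        try {
            Object result = fromWorker.readObject();
            System.out.println("result: " + result);
        } catch (EOFException workerDied) {
            // The worker crashed (e.g. inside the JNI call): skip this chunk
            // of data and restart the worker for the next parameter.
        }
        worker.destroy();
    }
}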
Have you thought about using web-inspired methods? In your case, web services could be a solution in all their diversity:
REST invocation
WSDL and all the heavyweight mechanisms
Even XML-RPC over HTTP, like the one used by Spring remoting or JSPF net export, could inspire you
If you can isolate the responsibilities of the processes, i.e. P1 is a producer of data and P2 is a consumer, the most robust answer is to use a file to communicate your data. There is overhead (read: CPU cycles) involved in serialization/deserialization, but your processes will not crash each other and it is very easy to debug/synchronize.
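A rough sketch of that file hand-off; writing to a temporary file and renaming it is one simple way to keep the consumer from ever seeing a half-written result:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ProducerWritesFile {
    static void publish(Serializable result, File targetDir) throws IOException {
        // Write to a temp file first, then rename, so the consumer process
        // (which polls targetDir for *.ser files) never reads a partial file.
        File tmp = File.createTempFile("result-", ".tmp", targetDir);
        ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(tmp));
        oos.writeObject(result);
        oos.close();
        File done = new File(targetDir, tmp.getName().replace(".tmp", ".ser"));
        if (!tmp.renameTo(done)) {
            throw new IOException("could not publish " + done);
        }
    }
}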

remote and dynamic proxy

I understand that, once, developing a remote proxy meant generating a stub/skeleton, though today this is no longer needed thanks to reflection (dynamic proxies).
I want a clear explanation as to why and how reflection replaces this need.
For example, I understood the stub was supposed to handle the communication over the network (in case the remote object is on a different computer), plus be in charge of serialization/deserialization, etc. Who is in charge of that now?
Maybe I got the concept of dynamic proxies all wrong.
In addition, I read about this subject in the context of Java and RMI; how would I implement a remote proxy in C++?
I could probably use DCOM; is there another, maybe easier, way? (And do I need a stub/skeleton in DCOM, or, like in Java, no more?)
Thanks
Say you have an interface A, and class B implements A.
The server VM has one instance of B; it's associated with a name, and the server listens on a TCP port for client communication. Something like this:
Server server = new Server(localhost, port);
server.bind("bbbb", new B() );
On the client VM, you want to access that B object residing on the server VM. You need to obtain a reference (of type A) to it:
A bb = lookup(serverIp, port, "bbbb");
bb is an instance of a proxy class implementing A, created with java.lang.reflect.Proxy. Any method call on bb is handled by an InvocationHandler, which encodes the invocation in whichever way and sends it to the server over the wire. The server receives a request of the form "call this method, with these arguments, on the object named bbbb", and has no problem performing that task with reflection. The return value is then sent back to the client in a similar way.
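A stripped-down sketch of that client side with java.lang.reflect.Proxy; the wire format here is invented for illustration and the reply handling is elided:

import java.io.ObjectOutputStream;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.net.Socket;

public class RemoteProxyClient {
    @SuppressWarnings("unchecked")
    static <T> T lookup(final String host, final int port, final String name, Class<T> iface) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                // Encode "call this method, with these args, on the object named <name>"
                // and ship it over the wire; the arguments must be serializable.
                Socket socket = new Socket(host, port);
                ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
                out.writeObject(new Object[] { name, method.getName(), args });
                out.flush();
                // ... a real version reads the server's reply (or exception) here ...
                socket.close();
                return null; // placeholder: no reply handling in this sketch
            }
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}

With something like this, A bb = RemoteProxyClient.lookup("serverIp", port, "bbbb", A.class); would give you the reference described above.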
So it is really not hard to do this yourself. Sun's RMI is probably doing the same thing (RMI has some other features, like distributed garbage collection). See http://java.sun.com/j2se/1.5.0/docs/guide/rmi/relnotes.html
If I'm not mistaken, the stub/skeleton used to be statically created by the rmic tool, while now the whole process is dynamic.
It's not that it's necessarily 'better'; it just removes the hassle of compiling a stub and skeleton. In both cases, the request parameters and the response are serialized in pretty much the same way, at the same level.
As for question #1: some implementations of RMI use CORBA, so if the code is available, you may want to look at that.
All you would need is to interface your C++ code with the CORBA layer, as the stub/skeleton did.
