Imagine you have a standard Java web app using plain servlets, or Spring MVC, or whatever. You also want (for whatever reason) a way to talk to the server without using HTTP - I'll use direct sockets as it's the easiest example I can think of.
Writing a web app is easy: you have servlets acting as entry points. Writing a Java app which listens on a port is also pretty easy. But what about one that does both? Is it allowed without hacking? And if it turns out we agree this is a Bad Idea, what's a better architecture? Note that one of the motivations behind this is performance... we could easily have two separate apps sharing a DB, but we'd prefer to avoid using the DB as a communication tool when information could be cached in memory much more efficiently.
So I assume there is a servlet container like Tomcat in play. If you want it to listen on some other port besides or in addition to 80, sure. You would make a new Connector in server.xml, in Tomcat's case, and specify whatever port you like.
If you want this connector to speak a custom protocol, you need to implement and register your own custom Connector. I've not done it, but it seems straightforward.
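If you'd rather wire this up in code than in server.xml, embedded Tomcat exposes the same idea programmatically. A rough sketch, assuming the embedded Tomcat jars are on the classpath (the ports and class name are arbitrary):

    import org.apache.catalina.connector.Connector;
    import org.apache.catalina.startup.Tomcat;

    public class DualPortServer {
        public static void main(String[] args) throws Exception {
            Tomcat tomcat = new Tomcat();
            tomcat.setPort(8080);               // default HTTP connector
            tomcat.getConnector();              // force creation of the default connector

            Connector extra = new Connector();  // second connector, e.g. for an internal API
            extra.setPort(9090);
            tomcat.getService().addConnector(extra);

            tomcat.start();
            tomcat.getServer().await();
        }
    }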
If you're answering substantially the same requests via two protocols, it makes sense to use one server with different endpoints. I imagine it makes it far easier to share all that common logic.
Even if you want to run a separate app, it still probably pays to go this way, since you'll be leveraging the container's management of connections and such.
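To make the original idea concrete: if the raw-socket listener is to live inside the same web app (so it shares the JVM and the in-memory cache with the servlets), one hedged option is to start it from a ServletContextListener. A sketch with an invented port and a trivial line-based protocol:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.annotation.WebListener;

    @WebListener
    public class SocketListener implements ServletContextListener {

        private ServerSocket serverSocket;

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            try {
                serverSocket = new ServerSocket(9090); // illustrative port
                Thread acceptThread = new Thread(() -> {
                    while (!serverSocket.isClosed()) {
                        try (Socket client = serverSocket.accept();
                             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                            // Trivial line-based protocol: echo each request line back.
                            String line = in.readLine();
                            out.println("echo: " + line);
                        } catch (Exception e) {
                            // socket closed during shutdown, or a client error; ignored in this sketch
                        }
                    }
                }, "socket-listener");
                acceptThread.setDaemon(true);
                acceptThread.start();
            } catch (Exception e) {
                throw new RuntimeException("Could not open listener socket", e);
            }
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            try {
                serverSocket.close(); // unblocks accept() so the thread can exit
            } catch (Exception ignored) {
            }
        }
    }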
Related
There is this Java web application with a lot of users. These users place orders according to the data shown in their panels. The data is updated second by second by calling an outside service (via a web service or similar). The moment we get this data, users' panels must be updated immediately to make sure that users are placing valid orders.
So we need to push data to the client web app. Performance and reliability are of great concern.
What approach or technology do you suggest here? Should I use something like Comet? Or is using primitive WebSockets suitable?
WebSocket is the fastest transport, but it's not well supported by older browsers (notably old versions of Internet Explorer).
AJAX requests are not that fast (the browser will potentially establish a new HTTP connection, and the HTTP headers add overhead too), but they are much better supported. And with correct keep-alive settings the HTTP connection should be reused.
You can use a generic implementation like SockJS (well supported by the Spring Framework). It will choose the best available transport automatically, but it introduces an additional layer of complexity.
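For illustration, a minimal Spring WebSocket configuration with the SockJS fallback enabled (the handler class and path are invented); browsers without WebSocket drop back to streaming or long-polling transports automatically:

    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.socket.TextMessage;
    import org.springframework.web.socket.WebSocketSession;
    import org.springframework.web.socket.config.annotation.EnableWebSocket;
    import org.springframework.web.socket.config.annotation.WebSocketConfigurer;
    import org.springframework.web.socket.config.annotation.WebSocketHandlerRegistry;
    import org.springframework.web.socket.handler.TextWebSocketHandler;

    @Configuration
    @EnableWebSocket
    public class PushConfig implements WebSocketConfigurer {

        @Override
        public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
            // withSockJS() adds the fallback transports for browsers without WebSocket
            registry.addHandler(new PanelUpdateHandler(), "/panel-updates").withSockJS();
        }

        static class PanelUpdateHandler extends TextWebSocketHandler {
            @Override
            protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
                // Echo for illustration; a real handler would push the second-by-second data
                session.sendMessage(new TextMessage("ack: " + message.getPayload()));
            }
        }
    }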
There are a bunch of things to consider.
There are various technologies. You've mentioned Comet; there are also WebSockets. Without support at the protocol level, you are stuck with pretty much polling for data, which is the approach Comet takes.
WebSockets are specifically designed for this, and an established WebSocket connection carries far less overhead per message than repeated HTTP requests.
Are you targeting modern browsers or also need to support older browsers?
Support for the protocols varies across browsers and versions, implementations may have their own caveats, and so on.
Or is using primitive WebSockets suitable?
It is perfectly acceptable, though you have to deal with browser differences, and you may find that porting your WebSocket code across web servers requires some work.
For instance, if you are deploying on Jetty (and using its API natively), you need to implement WebSocketCreator. If you are using Grizzly natively, you need to implement WebSocketListener and so on.
Atmosphere tries to fix this by providing a uniform interface which works across various servers. Again, once you pick such a library, you will need to make changes if you want a different library in the future.
Or you could use a service like Pusher or one of its competitors.
If you Google around, you should be able to find plenty of examples.
Hopefully it helps.
Firstly Cheers to all PROGRAMMERS [ Today = Programmers day :) ]
Secondly,
I'm working on a project where the specifications require using a server as a front end and an application in the back end. The project is an advanced smart home system. The server will handle commands coming from the client through the internet (say, like a remote control from outside the house) and send them (through a communication channel) to the application (I plan on using a Java application), which will handle the main logic: controlling hardware (lights...), reading from a local microphone, and accessing a database to act as an offline speech recognition system.
Now I'm still in the planning phase and I'm not sure which technologies are best for this project. I'm thinking of using Node.js or Apache as the server, a Java application as the back end, and any SQL database for the application's speech recognition system (SRS).
I hope this illustration demonstrates clearly how the system works:
The main question is:
What is the best way to make the Java application communicate with the server (the communication channel must be bidirectional)?
And do you recommend a specific server other than the ones mentioned for this job?
What crossed my mind so far:
1- JSP and servlets (making the server the application too). But I don't want the server to handle the offline stuff, and I'm not sure Java servlets can access hardware interfaces. I also want the server to be kept away from making critical decisions (a separate layer for security reasons, and because it won't be used as frequently as the local [offline] system).
2- Communication channel:
A- A shared file, but it's a bad idea since I don't want the application to keep checking whether the file contents have changed (i.e. a command was received) from time to time (excessive operations).
B- Inter-process communication through a port (socket communication) seems the best solution, but I don't know how that would turn out in terms of operation cost and communication errors.
OS used: Raspbian Linux
EDIT:
I'm sure ZMQ + Apache is good enough for this task, but how does it compare to web services (like SOAP)? Would web services be a better solution in terms of standard implementation and security?
All related suggestions are welcomed, TQ
ZeroMQ is great for internal communications, and for other similar communication problems.
For your case specifically, I can see ZeroMQ being the best fit.
Reasons:
Your offline server has to be agnostic to the web solution.
Communication can be reliable and bidirectional, and other patterns are possible as well (pub/sub, req/rep, etc.).
Restarting either side does not require restarting the sockets (connections) on the other side, as messages are queued.
You can scale not just on the same hardware, but also across the local area network or even over the internet.
A big support community. It might look a bit hard to get into, but in reality it is dead simple: go through the examples, and once the concept is understood it is very easy and neat to work with.
ZeroMQ has bindings for most popular languages, including Java and Node.js.
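As a hedged sketch of what the offline side might look like with the Java binding (JeroMQ assumed; the port and message format are placeholders), a simple request/reply loop:

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class CommandServer {
        public static void main(String[] args) {
            try (ZContext context = new ZContext()) {
                ZMQ.Socket socket = context.createSocket(SocketType.REP);
                socket.bind("tcp://*:5555");           // the web-facing server connects here
                while (!Thread.currentThread().isInterrupted()) {
                    String command = socket.recvStr(); // e.g. {"device":"light1","action":"on"}
                    // ... dispatch to the hardware / speech-recognition logic ...
                    socket.send("OK");
                }
            }
        }
    }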
Considerations:
You need to think about the packets and the data that will be sent, so a popular data format like XML or JSON is a good way to go.
Keep the responsibilities of the different services separate and make sure they do not depend on each other too much. If the main offline server is the core of the system, make sure it does not depend on the web-facing service, so that the web front end can be removed, replaced, improved, etc.
Few more points to think about:
Why Java, and what about a modular approach? For example, if you want to expand and scale - say, add more sensors to the smart home - then one giant application has to change every time; it is harder to maintain, and harder still to serve different clients with their own needs. Think in a modular way: some core functionality for the offline stuff, plus many aggregator processes that talk to the different sensors. This makes it easier to support different setups and environments, and to maintain the system as a whole by improving independent components.
I have an application where both the backend and the frontend are built in Java. The backend provides some functionality such as accessing the DB, while the frontend, built in Struts, calls those functions.
I'm looking for a way to make any Java class easily callable over TCP; ideally, in my mind, this could be done by extending a specific class, let's say:
public class MyClass extends ThisIsAnAPI
making all the public methods callable over a network protocol in this way.
With such a framework the frontend could be easily implemented in other languages, like Ruby (On Rails), by making network requests to the backend APIs written in Java and exposed on TCP.
Any tips?
If you are likely to go to a JavaScript/Ajax UI then I would take the time to expose the backend as RESTful services. Using JAX-RS this is a matter of a few lines of code, some annotations, and an interface.
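A rough example of what that looks like with JAX-RS (the resource path, DTO, and values are invented; a JAX-RS runtime such as Jersey or RESTEasy is assumed):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/orders")
    public class OrderResource {

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Order getOrder(@PathParam("id") long id) {
            // ... delegate to the existing backend logic ...
            return new Order(id, "PENDING");
        }

        // Plain data carrier serialized to JSON by the JAX-RS provider
        public static class Order {
            public long id;
            public String status;

            public Order(long id, String status) {
                this.id = id;
                this.status = status;
            }
        }
    }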
If you are staying pure Java, it's pretty trivial these days to turn a POJO into a remotely callable EJB: just a couple of annotations.
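A sketch of that "couple of annotations" approach with javax.ejb (the interface and bean names are made up):

    // GreetingService.java
    import javax.ejb.Remote;

    @Remote
    public interface GreetingService {
        String greet(String name);
    }

    // GreetingServiceBean.java
    import javax.ejb.Stateless;

    @Stateless
    public class GreetingServiceBean implements GreetingService {
        @Override
        public String greet(String name) {
            // ... real backend logic (DB access, etc.) goes here ...
            return "Hello, " + name;
        }
    }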
It may sound like overkill, but in terms of effort and cost (given a free app server such as WebSphere CE or JBoss) it's not that big a deal. However if you don't go for EJBs then you need to look at two big issues:
Security. You've got some TCP-callable services. How sensitive are those services? Do they need authentication and authorisation? You can all too easily open up sensitive databases to the whole company or even the internet.
Resilience and scaling. How will you manage failure scenarios? EJBs exposed via RMI/IIOP can be clustered, so you can scale and deal with errors. If you start with a technology capable of doing that, even if you don't need the functionality right now, you are well placed for the future.
I would start with RMI which is designed to do this. You create an interface which the client uses and the server implements.
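A minimal RMI sketch along those lines (names and port are illustrative): the client codes against the interface, and the server implements it and registers a stub.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    public interface BackendService extends Remote {
        String lookupCustomer(long id) throws RemoteException;
    }

    class BackendServiceImpl implements BackendService {
        @Override
        public String lookupCustomer(long id) {
            return "customer-" + id;   // ... real DB access here ...
        }

        public static void main(String[] args) throws Exception {
            BackendService stub =
                    (BackendService) UnicastRemoteObject.exportObject(new BackendServiceImpl(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("BackendService", stub);
            System.out.println("BackendService bound in the RMI registry");
        }
    }

The client then obtains the registry with LocateRegistry.getRegistry(host), looks up "BackendService", and calls it like a local object.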
Try Hessian, a lightweight binary RPC protocol (carried over HTTP) that also has bindings for several other platforms, so you will get C#/C++/Flash/... for free. I think it is a bit easier to work with than RMI.
If you need more portability in the future, consider exposing POJOs via SOAP/REST (most WS stacks have this ability; only a few extra annotations are needed, if any).
You might want to take a look at JMS. It's quite high level and easy to use, but you need to run a message broker. It's a bit of a different architecture to point-to-point communication.
As several people have mentioned RMI, you can look at Spring, which has support for this and which I have used successfully myself: http://static.springsource.org/spring/docs/2.0.x/reference/remoting.html
I'd like to write an applet (or a Java Web Start application) that calls its server (a servlet?) to invoke some methods and to send/retrieve data. What would be the best way/technology to send and retrieve those messages?
Protocol:
If you don't care about interoperability with other languages, I'd go with RMI over HTTP. It is supported right out of the JRE, quite easy to set up, and very easy to use once you have the framework in place.
For applicative logic, I'd use either:
The command pattern, passing objects that, when invoked, call methods on the server. This is good for small projects, but tends to get over-complicated as time goes by and more commands are added. It also requires the client to be coupled to the server logic.
Request-by-name plus a DTO approach (see the sketch after this list). This has the benefit of decoupling server logic from the client altogether, leaving the server side free to change as needed. The overhead of building a supporting framework is a bit greater than with the first option, but the separation of client from server is, in my opinion, worth the effort.
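A tiny sketch of that second option, with all names invented: the client only sees a generic gateway and plain serializable DTOs, never the server's internal classes.

    import java.io.Serializable;
    import java.util.Map;

    // The only contract shared with the client: request by name, DTOs in and out.
    public interface RemoteGateway {
        Serializable execute(String serviceName, Map<String, Serializable> parameters);
    }

    // A plain data carrier returned to the applet.
    class CustomerDto implements Serializable {
        private static final long serialVersionUID = 1L;
        public String name;
        public String email;
    }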
Implementation:
If you have not yet started, or you have and are using Spring, then Spring remoting is a great tool. It works from everywhere (including applets) even if you don't use the IoC container.
If you do not want to use Spring, the basic RMI is quite easy to use as well and has an abundance of examples over the web.
HTTP requests? Parameters in, XML out.
XML is still my preferred choice for data interchange.
Use XML with something like XStream, which removes much of the hassle of Java's XML libraries. You can serialize and deserialize objects in a very simple way.
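A small sketch of that (the Message class is invented for the example); note that recent XStream versions also want deserializable types whitelisted:

    import com.thoughtworks.xstream.XStream;

    public class XStreamExample {

        public static class Message {
            String from;
            String body;
        }

        public static void main(String[] args) {
            XStream xstream = new XStream();
            xstream.allowTypes(new Class[] { Message.class }); // security whitelist (newer XStream)
            xstream.alias("message", Message.class);           // <message> instead of the full class name

            Message m = new Message();
            m.from = "applet";
            m.body = "hello server";

            String xml = xstream.toXML(m);                     // serialize for the HTTP request/response
            Message back = (Message) xstream.fromXML(xml);
            System.out.println(xml + " -> " + back.body);
        }
    }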
A lightweight solution could be Hessian too.
A simple example is here.
If you need an ORM for that case: try Cayenne.
I am implementing a website using PHP for the front end and a Java service as the back end. The two parts are as follows:
The PHP front end listens for HTTP requests and interacts with the database.
The Java back end runs continuously and responds to calls from the front end.
More specifically, the back end is a daemon that connects to and maintains links with several IM services (AOL, MSN, Yahoo, Jabber...).
Both layers will be deployed on the same system (a CentOS box, I suppose), and introducing a middle layer (for instance, XML-RPC) would reduce performance (resources are also rather limited).
Question: Is there a way to link the two layers directly? (no more web services in between)
Since this is communication between two separately running processes, a "direct" call (as with JNI) is not possible. The easiest ways to do such interprocess communication are probably named pipes and network sockets. In both cases, you'll have to define a communication protocol and implement it on both sides. Using a standard protocol such as XML-RPC makes this easier, but it is not strictly necessary.
There are generally four patterns for application integration:
via Filesystem, i.e. one producer writes data to a directory monitored by the consumer
via Database, i.e. two applications share a schema or table and use it to swap data
via RMI/RPC/web service/any blocking, synchronous call from one app to another. For PHP to Java you can pick from the various integration libraries mentioned in the other answers, or use a web services standard like SOAP.
via messaging/any non-blocking, async operation where one app sends a message to another app.
Each of these patterns has pros and cons, but a good rule of thumb is to pick the one with the loosest coupling that you can get away with. For example, if you selected #4 your Java app could crash without also taking down your PHP app.
I'd suggest before looking at specific libraries or technologies listed in the answers here that you pick the right pattern for you, then investigate your specific options.
I have tried the PHP/Java Bridge (php-java-bridge.sourceforge.net/pjb/) and it works quite well. Basically, you run a jar file (JavaBridge.jar) which listens on a port (there are several options available, like a local socket, port 8080, and so on). Your Java class files must be available to the JavaBridge on the classpath. You include the file Java.inc in your PHP, and then you can access the Java classes.
Sure, there are lots of ways, but you mentioned the limited resources...
IMHO, define your own lightweight RPC-like protocol and use TCP/IP sockets to communicate. In this case there's really no need for the full machinery of RPC: you only need to define an API for this particular case and implement it on both sides, which lets you keep the serialized packets quite small. You can even assign a kind of GUID to each remote method and use it to save traffic and speed up the intercommunication.
The advantage of using sockets is that your solution will be pretty scalable.
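A sketch of that idea on the Java side, with the opcodes, payload format, and port all invented (the per-method identifiers are shortened to a single byte here just to keep the example small):

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class TinyRpcServer {

        static final byte OP_SEND_IM = 1;   // payload: UTF string "account|message"
        static final byte OP_STATUS  = 2;   // payload: UTF string "account"

        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(7400)) {
                while (true) {
                    try (Socket client = server.accept();
                         DataInputStream in = new DataInputStream(client.getInputStream());
                         DataOutputStream out = new DataOutputStream(client.getOutputStream())) {
                        byte op = in.readByte();       // which remote method
                        String payload = in.readUTF(); // its argument(s)
                        switch (op) {
                            case OP_SEND_IM: out.writeUTF("sent:" + payload); break;
                            case OP_STATUS:  out.writeUTF("online");          break;
                            default:         out.writeUTF("error:unknown-op");
                        }
                        out.flush();
                    }
                }
            }
        }
    }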
You could try the PHP/Java integration.
Also, if the communication is one-way (something like "sendmail for IM"), you could write out the PHP requests to a file and monitor that in your Java app.
I was also faced with this problem recently. The Resin solution mentioned in another answer is actually a complete re-implementation of PHP in Java, along the lines of JRuby, Jython and Rhino; it is called Quercus. But I'm guessing that for you, as it was for me, tossing out your Apache/PHP setup isn't really an option.
And there are more problems with Quercus besides: the free version is GPL, which is tricky if you're developing commercial software (though not as tricky as Resin would like you to believe, but IANAL), and on top of that the free version doesn't support compiling to byte code, so it's basically an interpreter written in Java.
What I decided on in the end was to just exchange simple messages over HTTP. I used PHP's json_encode()/json_decode() and Java's json-lib to encode the messages in JSON (simple, text-based, good match for data model).
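On the Java side, a hedged sketch of that approach using only the JDK's built-in HttpServer (the path, port, and hand-rolled JSON string are illustrative; a real service would build the body with a JSON library):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class ImDaemonHttp {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 8081), 0);
            server.createContext("/im/status", exchange -> {
                // The PHP side calls this with curl/file_get_contents() and json_decode()s the body.
                byte[] body = "{\"status\":\"connected\",\"networks\":[\"jabber\",\"msn\"]}"
                        .getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }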
Another interesting and light-weight option would be to have Java generate PHP code and then use PHP include() directive to fetch that over HTTP and execute it. I haven't tried this though.
If it's the actual HTTP calls you're concerned about (for performance), neither of these solutions will help there. All I can say is that I haven't had problems with PHP and Java on the same LAN. My feeling is that it won't be a problem for the vast majority of applications as long as you keep your RPC calls fairly coarse-grained (which you really should do anyway).
Sorry, this is a bit of a quick answer, but: I heard the Resin app server has support for integrating Java and PHP.
They claim they can smash PHP and Java together: http://www.caucho.com/resin-3.0/quercus/
I've used Resin for serving J2EE applications, but not for its PHP support.
I'd be interested to hear of such adventures.
Why not use a web service?
Build a Java layer and expose it as a web service (Axis, Spring WS, etc.), and have the PHP access the Java layer using a WS client.
I think it's simple and useful.
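For example, a minimal JAX-WS sketch of that layer (names and port are invented); the PHP side could then consume it with PHP's built-in SoapClient:

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    @WebService
    public class ImService {

        @WebMethod
        public String sendMessage(String account, String text) {
            // ... forward to the IM daemon logic ...
            return "queued";
        }

        public static void main(String[] args) {
            // Publishes the service and its WSDL at http://localhost:8082/im?wsdl
            Endpoint.publish("http://localhost:8082/im", new ImService());
        }
    }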
I've come across this page which introduces a means to link the two layers. However, it still requires a middle layer (TCP/IP). Moreover, other services may exploit the Java service as well because it accepts all incoming connections.
http://www.devx.com/Java/Article/20509
[Researching...]