So my most basic question here is: how do you build TCP interfaces into your Java EE applications? Instead of interacting with a legacy EIS, I need to interact with a block of TCP/IP ports. Ideally, I'd like a message-driven bean to have its onMessage method invoked by an incoming TCP request and also be able to respond back over the same connection.
JCA seems general enough to be capable of something like this within a Java EE environment. Would developing a custom connector be the appropriate technique for integrating inbound/outbound TCP interfaces in a Java enterprise ecosystem?
As for what I've tried so far: we're currently using a lifecycle module that starts by kicking off a number of TCP listeners; each incoming request invokes a message-driven bean, which calls a business method, and the result is returned over the same TCP stream. This is actually working all right, but the lifecycle support in my application server (GlassFish) feels like it was added as an afterthought. JCA, by contrast, seems like a first-class solution to this sort of problem, and it appears to let us communicate over TCP.
However, from the initial research we've conducted, the connector architecture does seem to be targeted at legacy information systems rather than generalized TCP communication. So my question could be restated as: are people using custom JCA connectors to integrate TCP/IP into their Java EE applications, or is there a better technique for accepting TCP connections from my EJBs?
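For concreteness, here is a minimal sketch of the lifecycle-module approach described above. It assumes GlassFish's com.sun.appserv.server API; the port, the threading, and the hand-off to the message-driven bean are simplified placeholders, not our production code.

```java
import java.net.ServerSocket;
import java.net.Socket;

import com.sun.appserv.server.LifecycleEvent;
import com.sun.appserv.server.LifecycleListener;
import com.sun.appserv.server.ServerLifecycleException;

// Hypothetical lifecycle module: starts a TCP acceptor when the server is
// ready and shuts it down on server shutdown.
public class TcpLifecycleModule implements LifecycleListener {

    private volatile boolean running;
    private Thread acceptor;

    @Override
    public void handleEvent(LifecycleEvent event) throws ServerLifecycleException {
        int type = event.getEventType();
        if (type == LifecycleEvent.READY_EVENT) {
            running = true;
            acceptor = new Thread(this::acceptLoop, "tcp-acceptor");
            acceptor.start();
        } else if (type == LifecycleEvent.SHUTDOWN_EVENT) {
            running = false;
            if (acceptor != null) acceptor.interrupt();
        }
    }

    private void acceptLoop() {
        try (ServerSocket server = new ServerSocket(9000)) { // example port
            while (running) {
                Socket client = server.accept();
                // Hand the socket to a worker that forwards the payload to the
                // MDB (e.g. via a JMS queue) and writes the reply back over
                // the same connection.
                new Thread(() -> handle(client)).start();
            }
        } catch (Exception e) {
            // log and exit; real code needs robust error handling
        }
    }

    private void handle(Socket client) {
        // read request bytes, dispatch to business logic, write the response,
        // close the connection -- omitted for brevity
    }
}
```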
MXBeans and JCA both work (MXBeans are easier; I have implemented both). Basically you only need two things, start and stop, and possibly to rely on other MXBeans/JCA/JNDI resources to carry out your services, with the application server generating the needed proxies for you.
Real application: a hacked Tomcat with an NIO acceptor that can trap connections on ports 80 and 443 and still use the web server normally.
That was followed by a full platform (including its own (re)deployer) to manage sessions, messages, and all that jazz.
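As a sketch of the MXBean variant (all names here are hypothetical; only the start/stop contract mentioned above is shown):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// MXBean interface: just the start/stop lifecycle the answer refers to.
public interface TcpAcceptorMXBean {
    void start();
    void stop();
    boolean isRunning();
}

class TcpAcceptorService implements TcpAcceptorMXBean {
    private volatile boolean running;

    @Override public void start() { running = true;  /* open ServerSocket, spawn acceptor thread */ }
    @Override public void stop()  { running = false; /* close socket, join thread */ }
    @Override public boolean isRunning() { return running; }

    public static void main(String[] args) throws Exception {
        // Register with the platform MBean server so the app server (or
        // JConsole) can drive start/stop.
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        mbs.registerMBean(new TcpAcceptorService(), new ObjectName("example:type=TcpAcceptor"));
    }
}
```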
It seems you have already resolved your initial problem. That's nice, but to help others through, here is a nice sample on the matter: http://code.google.com/p/jca-sockets
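To give a feel for what the inbound side of such a connector looks like, here is a pared-down sketch in the spirit of that sample. The TcpMessageListener interface and all names are invented; a real adapter also needs an ActivationSpec implementation, RAR packaging, and should use the WorkManager rather than raw threads.

```java
import java.net.ServerSocket;
import java.net.Socket;

import javax.resource.ResourceException;
import javax.resource.spi.ActivationSpec;
import javax.resource.spi.BootstrapContext;
import javax.resource.spi.ResourceAdapter;
import javax.resource.spi.endpoint.MessageEndpoint;
import javax.resource.spi.endpoint.MessageEndpointFactory;
import javax.transaction.xa.XAResource;

// Application-defined listener interface; the MDB implements this instead of
// javax.jms.MessageListener.
interface TcpMessageListener {
    byte[] onMessage(byte[] request);
}

public class TcpResourceAdapter implements ResourceAdapter {

    private volatile boolean running;
    private Thread acceptor;

    @Override
    public void start(BootstrapContext ctx) {
        // A real adapter would keep ctx.getWorkManager() for scheduling work.
    }

    @Override
    public void stop() {
        running = false;
    }

    @Override
    public void endpointActivation(MessageEndpointFactory factory, ActivationSpec spec)
            throws ResourceException {
        running = true;
        acceptor = new Thread(() -> listen(factory), "tcp-ra-acceptor");
        acceptor.start();
    }

    @Override
    public void endpointDeactivation(MessageEndpointFactory factory, ActivationSpec spec) {
        running = false;
        if (acceptor != null) acceptor.interrupt();
    }

    @Override
    public XAResource[] getXAResources(ActivationSpec[] specs) {
        return new XAResource[0]; // no XA support in this sketch
    }

    private void listen(MessageEndpointFactory factory) {
        try (ServerSocket server = new ServerSocket(9000)) { // port would come from the ActivationSpec
            while (running) {
                try (Socket client = server.accept()) {
                    // Java 9+; a real protocol would frame messages instead of
                    // reading to end-of-stream.
                    byte[] request = client.getInputStream().readAllBytes();
                    MessageEndpoint endpoint = factory.createEndpoint(null);
                    try {
                        byte[] reply = ((TcpMessageListener) endpoint).onMessage(request);
                        client.getOutputStream().write(reply); // respond on the same connection
                    } finally {
                        endpoint.release();
                    }
                }
            }
        } catch (Exception e) {
            // log; real code needs robust error handling
        }
    }
}
```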
Related
Consider this scenario: I have N>2 software components (microservices) that can communicate through two different communication protocols depending on how they are deployed. In other words, I have two deployment scenarios:
The components are to be deployed on the same machine. In this case, I don't know if it makes sense to use HTTP for the communication between these components if I think about performance. I understand that there are more efficient ways to communicate between two processes on the same machine in Java, such as sockets, RMI, RPC...
The components are to be deployed on N different machines. In this case, it seems to make sense to use HTTP to communicate between these components.
In short, what I want to do is be able to configure the communication protocol depending on how I perform the deployment: on a single machine, for example, use RMI, but when I deploy on two machines, use HTTP.
Does anyone know how I can do this using Spring Boot?
Many Thanks!
The fundamental building block of protocols like RMI and HTTP is socket communication. If you are not looking for the comfort of HTTP or RMI, and performance is the priority, pure socket communication is your choice.
This will raise other concerns, like deployment difficulties: you need to know the IP addresses of both nodes in advance.
Another option is to go for Unix domain sockets for within-server communication. For that you can depend on junixsocket.
If you want to go another route, check all the inter-process communication options.
EDIT
As you said in a comment, "It is simply no longer a question of two components but of many". In that scenario, each component should be a microservice, and each should be capable of interacting with the others. If that is the choice, the most scalable options are REST and RPC, both of which use HTTP under the hood. REST is an ideal solution for an API developed against a data source using CRUD operations; RPC leans more toward action-oriented APIs. You can find more details on the difference between REST and RPC here.
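To make the choice configurable in Spring Boot, one common pattern is to hide the transport behind an interface and select the implementation with a property. Everything below is a hypothetical sketch; the same-machine variant is left as a stub.

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

// The rest of the application only ever sees this interface.
public interface ComponentClient {
    String fetchStatus();
}

@Component
@ConditionalOnProperty(name = "transport", havingValue = "http")
class HttpComponentClient implements ComponentClient {
    private final RestTemplate rest = new RestTemplate();

    @Override
    public String fetchStatus() {
        // Hypothetical endpoint on the other component.
        return rest.getForObject("http://other-component/status", String.class);
    }
}

@Component
@ConditionalOnProperty(name = "transport", havingValue = "local")
class LocalComponentClient implements ComponentClient {
    @Override
    public String fetchStatus() {
        // Same-machine variant: direct call, Unix domain socket, RMI, etc.
        return "OK";
    }
}
```

Setting transport=http or transport=local in application.properties (or per deployment profile) then picks the implementation at startup without touching the calling code.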
How I understand this is...
if the components (producer and consumer) are deployed on the same host, use an optimized protocol, and if on different hosts, use HTTP(S)
Firstly, there must be a serious driver to go down this route. I take it the driver here is performance: you would like to offer faster performance on local deployments and comparatively compromised speeds on distributed deployments. BTW, given that we are in a distributed-deployment world (or at least that is where we are headed), HTTP is what will survive; custom protocols are discouraged.
Anyway, I would say your producer application should be in a self-healing/discovery mode. On start-up (or periodically) it could check the health of the "optimized" endpoint and decide whether the optimized receiver is around. The receiver would need to stand behind a load balancer. If the receiver is not up, fall back to HTTP(S) and set up this instance accordingly at runtime.
The consumer would need to keep both gates (HTTP and optimized) open. It should be ready to handle requests from either channel.
In Spring Boot you can have a health check implemented and switch the emitter on/off depending on the health of the optimized endpoint. If both endpoints are unhealthy then obviously the producer cannot emit anything. Apart from this, the rest is just normal dependency injection.
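A sketch of that health check with Spring Boot Actuator (the probe logic and all names are invented):

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class OptimizedEndpointHealth implements HealthIndicator {

    @Override
    public Health health() {
        // Hypothetical probe: try to reach the optimized receiver.
        boolean reachable = probeOptimizedEndpoint();
        return reachable ? Health.up().build()
                         : Health.down().withDetail("fallback", "http").build();
    }

    private boolean probeOptimizedEndpoint() {
        // e.g. open a socket with a short timeout; omitted for brevity
        return false;
    }
}
```

The emitter can then consult this indicator (or its own cached probe result) to decide which channel to use for each request.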
My application uses a custom binary protocol which is implemented with Netty. Recently I changed it to use Netty's websocket implementation. It works quite well.
My application also includes a Jetty web server that offers WebSockets, too. Now I want to reduce the number of open ports my server needs and handle all HTTP traffic on one port.
I see three options:
Use either Netty or Jetty to proxy the traffic which belongs to the other implementation.
Reimplement the custom protocol on Jetty without the use of Netty's channels and pipelines.
Create a custom implementation of Netty's channels that sends and receives its data not over a socket but through the methods Jetty's WebSocketListener offers.
Since Netty provides such a good API for writing binary protocols, and a proxy sounds like extra trouble to me, I tend toward the third approach. It shouldn't be too difficult to implement, even though I don't know how to do it yet.
Any thoughts what would be the best option and how I should implement it?
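For what it's worth, a rough sketch of the third option using Netty's EmbeddedChannel as the glue. This is an assumption about how the bridge could look, not a tested design; MyProtocolDecoder and MyProtocolHandler stand in for your existing pipeline handlers, and the API names are from the Jetty 9 WebSocketListener interface.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;

import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.WebSocketListener;

// Feeds Jetty websocket frames through the existing Netty pipeline and writes
// whatever the pipeline emits back to the websocket.
public class NettyBridgeSocket implements WebSocketListener {

    private EmbeddedChannel channel;
    private Session session;

    @Override
    public void onWebSocketConnect(Session session) {
        this.session = session;
        // Install the same handlers the plain-TCP bootstrap would install.
        this.channel = new EmbeddedChannel(new MyProtocolDecoder(), new MyProtocolHandler());
    }

    @Override
    public void onWebSocketBinary(byte[] payload, int offset, int len) {
        channel.writeInbound(Unpooled.wrappedBuffer(payload, offset, len));
        flushOutbound();
    }

    private void flushOutbound() {
        ByteBuf out;
        while ((out = channel.readOutbound()) != null) {
            try {
                session.getRemote().sendBytes(out.nioBuffer()); // blocking send
            } catch (Exception e) {
                session.close();
            } finally {
                out.release();
            }
        }
    }

    @Override
    public void onWebSocketClose(int statusCode, String reason) {
        if (channel != null) channel.close();
    }

    @Override public void onWebSocketText(String message) { /* binary protocol only */ }
    @Override public void onWebSocketError(Throwable cause) { if (channel != null) channel.close(); }
}
```

EmbeddedChannel is officially a testing utility, but it works as an in-memory pipeline and saves you from reimplementing the protocol on Jetty.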
I have a requirement to make a GlassFish server able to receive and forward messages in the NTCIP protocol (basically, to understand NTCIP). Given that GlassFish is an HTTP server, I have no idea where to start. I did a lot of research on the internet and could not find anything specific. However, I did find some generic answers roughly related to my problem, so by now I have figured that I probably need to write a custom JCA connector for this (NTCIP) protocol. I don't even know if this is the right thing to do. Is it? Is it even possible to make GlassFish talk the NTCIP protocol (not HTTP)? If so, how should I go about writing my own JCA connector for that protocol, or ANY custom protocol that does not use HTTP? Can I do it using Java EE?
Thank you in advance for your help.
Yes, you are completely on the right track. I'm working on a project myself at the moment where we are building several JCA adaptors to connect out to other protocols and legacy systems. (Disclaimer: there are a few cases where this is not the right choice; I don't know all the details of your architecture, of course.)
JCA (specified in JSR 322 for version 1.6) supports both inbound and outbound connections and is part of the standard Java EE APIs. (Deployment steps are specific to your application server.)
I'm not that familiar with NTCIP; what you need to do depends on whether you need inbound or outbound communications. Start with these examples:
From the JBoss IronJacamar subproject, there are Hello World examples.
From the Java EE 'official' code samples, the Inbound Mail Server adaptor.
You may find that IDE support for JCA is limited. I usually just use a generic JAR file project template.
There is some complexity to consider around connection pooling, XA transactions, security, etc., but that can be added later.
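As a starting shape, with JCA 1.6 the adapter class can be declared with annotations rather than a full ra.xml (all names below are placeholders):

```java
import javax.resource.spi.ActivationSpec;
import javax.resource.spi.BootstrapContext;
import javax.resource.spi.Connector;
import javax.resource.spi.ResourceAdapter;
import javax.resource.spi.TransactionSupport;
import javax.resource.spi.endpoint.MessageEndpointFactory;
import javax.transaction.xa.XAResource;

// Declares the RAR's entry point; the application server picks this up when
// the resource adapter archive is deployed.
@Connector(
        displayName = "NtcipResourceAdapter",
        vendorName = "Example",
        eisType = "NTCIP",
        version = "1.0",
        transactionSupport = TransactionSupport.TransactionSupportLevel.NoTransaction)
public class NtcipResourceAdapter implements ResourceAdapter {
    @Override public void start(BootstrapContext ctx) { /* open NTCIP transport resources */ }
    @Override public void stop() { /* release them */ }
    @Override public void endpointActivation(MessageEndpointFactory f, ActivationSpec s) {
        /* begin delivering inbound NTCIP messages to the MDB */
    }
    @Override public void endpointDeactivation(MessageEndpointFactory f, ActivationSpec s) { }
    @Override public XAResource[] getXAResources(ActivationSpec[] specs) { return new XAResource[0]; }
}
```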
Scenario:
I have been using Java SE for quite some time, working with threads etc, though I have little experience with Java EE.
I have a 3rd-party Java library that connects to a remote server (at the 3rd-party company). The library creates several threads and keeps the connection alive by itself.
I am not allowed to open new connections over and over (by creating new instances of the library). I need to keep the same instance of the library which will keep the connection up at all time.
This is quite easy in a Java SE application.
Now I want to create a web service (perhaps using GlassFish or similar) for internal use at my company, exposing the functionality of this library and its persistent connection.
In other words, I need a custom remote connection (that is not created by or managed by my code) to be kept alive between request instances.
Question:
Is this possible to achieve? If so, which technology should I take a look at?
You can do that using a connection pool. Whenever a connection to the remote server is required, get the connection from this pool instead of instantiating one every time. This will help you maintain a better memory footprint and efficiency. If a connection is no longer in use, you can return it to the pool.
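A toy illustration of the idea (hand-rolled and generic; in practice a library such as Apache Commons Pool does this properly, with validation and eviction):

```java
import java.util.Collection;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal hand-rolled pool: callers borrow a connection, use it, return it.
final class SimplePool<T> {
    private final BlockingQueue<T> idle;

    SimplePool(Collection<T> connections) {
        this.idle = new LinkedBlockingQueue<>(connections);
    }

    T borrow() throws InterruptedException {
        return idle.take(); // blocks until a connection is free
    }

    void giveBack(T connection) {
        idle.offer(connection);
    }
}
```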
I have recently implemented a similar system, using Tomcat as the Servlet Container and Metro 2.0 as the JAX-WS implementation. My service maintains socket connections to backend components (implemented in C++) and communicates with them using a proprietary network protocol.
I used a 'Component Manager' thread to manage the high-level communication with the Components (connection establishment, handshaking etc.) and a 'Network Selector' thread that managed the actual communication with the Components. This 'Network Selector' used asynchronous non-blocking sockets using the Java Socket Selector family of classes - using a single thread to interact with the Socket Selector class is an important point as some Java platforms exhibit bugs when multiple threads are used.
It's working very well so far, so I can tell you that it's certainly possible. If you require any clarification then please post here or e-mail me (see my profile).
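The single-threaded 'Network Selector' loop described above looks roughly like this with the standard java.nio classes (a bare-bones sketch; framing, writes, and error handling are omitted):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NetworkSelectorLoop implements Runnable {

    @Override
    public void run() {
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress(9000)); // example port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(8192);
            while (!Thread.currentThread().isInterrupted()) {
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        buffer.clear();
                        int n = ((SocketChannel) key.channel()).read(buffer);
                        if (n < 0) { key.channel().close(); continue; }
                        // hand the buffer contents to the 'Component Manager'
                        // thread for decoding and higher-level handling
                    }
                }
            }
        } catch (Exception e) {
            // log and exit the loop
        }
    }
}
```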
You need to have a factory maintaining the connections, and then provide it through JNDI in the same way that, e.g., JDBC connection pools are provided.
You then need to ensure that the connections are returned to said factory, and then integrate it into the application server life cycle so that it is pulled up and down programmatically.
Note that there is a nasty classloader problem lurking here if you are not careful. You will need a class common to both the factory and its clients, and if it is not one in the standard runtime library, you will need to figure out a way to have it correctly shared, unless you want to use reflection to get to the methods.
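If your container supports EJB 3.1, the simplest variant of this pattern (a technique swap relative to the hand-rolled JNDI factory above) is a singleton startup bean that owns the library instance. ThirdPartyClient and its methods are stand-ins for the vendor library:

```java
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;
import javax.ejb.Startup;

// Stand-in for the vendor library; replace with the real class.
class ThirdPartyClient {
    String send(String request) { return "reply"; }
    void shutdown() { }
}

@Singleton
@Startup
public class VendorConnectionHolder {

    private ThirdPartyClient client;

    @PostConstruct
    void connect() {
        // Created once per server; the library keeps the connection alive itself.
        client = new ThirdPartyClient();
    }

    @Lock(LockType.READ) // allow concurrent callers if the library is thread-safe
    public String call(String request) {
        return client.send(request);
    }

    @PreDestroy
    void disconnect() {
        client.shutdown();
    }
}
```

Web service endpoints can then inject the holder with @EJB and share the single live connection across requests.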
I have to design a distributed application composed by one server (developed in Java) and one or more remote GUI clients (Swing application with windows).
As stated before, the clients are Swing GUI applications that can connect to the server in order to receive and send data.
The communication is bidirectional (Server <=> Clients).
Data sent over the network is mainly composed of my domain-logic objects.
Two brief examples: a client calls the server in order to receive data to populate a table inside a window; the server calls a client in order to send data to refresh a specific widget (like a button).
The amount of data transmitted between server and clients and the frequency of the network calls are not particularly high.
Which technology would you suggest for the server-client communication?
I have one technology in mind that seems suitable, but I would like to know your opinions.
Thanks a lot.
The first technology that came to my mind was RMI, which is suitable if you're communicating between a Java client and a Java server. But you may run into difficulties if you want to switch the client technology to, say, a web interface.
I would go with RMI but implement the whole architecture using the Spring Framework. This way it is independent of the technology used and can be switched to other means of communication (such as HTTP or others) with almost no coding.
UPDATE: And Spring will allow you to avoid having any RMI-specific code.
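A sketch of the classic Spring remoting setup being suggested, shown as Java config. Note these classes live in org.springframework.remoting.rmi and were removed in Spring Framework 6, so this applies to older versions; the service interface and all names are invented.

```java
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiProxyFactoryBean;
import org.springframework.remoting.rmi.RmiServiceExporter;

// Shared service interface; the domain objects it exchanges must be Serializable.
interface TableDataService {
    List<String[]> fetchRows(String tableId);
}

@Configuration
class ServerConfig {
    @Bean
    RmiServiceExporter tableDataExporter(TableDataService impl) { // impl provided elsewhere
        RmiServiceExporter exporter = new RmiServiceExporter();
        exporter.setServiceName("TableDataService");
        exporter.setServiceInterface(TableDataService.class);
        exporter.setService(impl);
        exporter.setRegistryPort(1099);
        return exporter;
    }
}

@Configuration
class ClientConfig {
    @Bean
    RmiProxyFactoryBean tableDataProxy() {
        RmiProxyFactoryBean proxy = new RmiProxyFactoryBean();
        proxy.setServiceUrl("rmi://server-host:1099/TableDataService");
        proxy.setServiceInterface(TableDataService.class);
        return proxy;
    }
}
```

The Swing client injects TableDataService and never sees RMI; switching transports means swapping the exporter/proxy pair (e.g. for the HTTP invoker equivalents) while the interface stays the same.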
I believe sockets should do the trick. They are flexible and not especially hard to code or maintain. Most entry-level programmers should also be able to maintain them. They are also fast and adapt to any kind of environment.
Unless your server is going to be off-site or you expect to have firewall issues. In that case, web services are the way to go, since your basic communication happens through port 80.
I would second msparer's suggestion of RMI, except I would just use EJB3 (which uses RMI as the communication protocol). EJB3 is very easy, and even if you don't use the other features EJB gives you (e.g., security) you can still leverage Container Managed Transactions (CMT). It really does make development easy.
As for the server-to-client communication, you would probably want to use JMS. Again, with EJB3 this is pretty easy to do with annotations. The clients will subscribe to the message service and receive update notifications from the server.
And yes, I am currently working on an application that does this very thing. Unfortunately we are using EJB2.1. Still, it is my opinion that this is where EJBs really shine. Using EJBs in a web app is frequently overkill, but in a distributed client/server app they work very well.
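For the server-to-client direction, the client side of that JMS subscription looks roughly like this (a sketch against the javax.jms API; the JNDI names are placeholders for whatever your provider exposes):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;
import javax.swing.SwingUtilities;

public class RefreshSubscriber {

    public void subscribe() throws Exception {
        InitialContext jndi = new InitialContext(); // provider-specific JNDI config assumed
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
        Topic topic = (Topic) jndi.lookup("jms/widgetUpdates");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(topic);

        // Push each update onto the Swing EDT before touching any widget.
        consumer.setMessageListener((Message m) ->
                SwingUtilities.invokeLater(() -> refreshWidget(m)));
        connection.start(); // begin delivery
    }

    private void refreshWidget(Message m) {
        try {
            if (m instanceof TextMessage) {
                String payload = ((TextMessage) m).getText();
                // update the button/table model with 'payload'
            }
        } catch (Exception e) {
            // log
        }
    }
}
```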
You can try using Ice (http://www.zeroc.com) for establishing the server-client connection.