I have a requirement to make a GlassFish server able to receive and forward messages in the NTCIP protocol (basically, to understand NTCIP). Given that GlassFish is an HTTP server, I have no idea where to start. I did a lot of research on the internet and could not find anything specific, but I did find some generic answers roughly related to my problem, so by now I've figured that I probably need to write a custom JCA connector for the NTCIP protocol. I don't even know if this is the right thing to do; is it? Is it even possible to make GlassFish talk the NTCIP protocol (not HTTP)? If so, how should I go about writing my own JCA connector for that protocol, or any custom protocol for that matter that does not use HTTP? Can I do it using Java EE?
Thank you in advance for your help.
Yes, you are completely on the right track. I'm working on a project myself at the moment where we are building several JCA adapters to connect out to other protocols and legacy systems. (Disclaimer: there are a few cases where this is not the right choice; I don't know all the details of your architecture, of course.)
JCA (version 1.6 is specified in JSR 322) covers both inbound and outbound connections and is part of the standard Java EE APIs. (The deployment steps are specific to your application server.)
I'm not that familiar with NTCIP; what you need to do depends on whether you need inbound or outbound communication. Start with these examples:
From the JBoss IronJacamar subproject, the Hello World examples.
From the official Java EE code samples, the Inbound Mail Server adapter.
You may find that IDE support for JCA is limited. I usually just use a generic JAR file project template.
There is some complexity to consider around connection pooling, XA transactions, security, etc., but that can be added later.
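To give a feel for how much code is involved, here is a minimal sketch of an inbound resource adapter class, assuming the annotation-based deployment from JCA 1.6 (JSR 322); every NTCIP-specific name in it is a hypothetical placeholder:

    import javax.resource.ResourceException;
    import javax.resource.spi.ActivationSpec;
    import javax.resource.spi.BootstrapContext;
    import javax.resource.spi.Connector;
    import javax.resource.spi.ResourceAdapter;
    import javax.resource.spi.ResourceAdapterInternalException;
    import javax.resource.spi.endpoint.MessageEndpointFactory;
    import javax.transaction.xa.XAResource;

    // Hypothetical skeleton of an inbound resource adapter for a non-HTTP protocol.
    @Connector
    public class NtcipResourceAdapter implements ResourceAdapter {

        private BootstrapContext bootstrapContext;

        @Override
        public void start(BootstrapContext ctx) throws ResourceAdapterInternalException {
            // Called once when the adapter is deployed; keep the context so the
            // WorkManager can be used later to run listener threads.
            this.bootstrapContext = ctx;
        }

        @Override
        public void stop() {
            // Release sockets, threads and anything else opened in start().
        }

        @Override
        public void endpointActivation(MessageEndpointFactory endpointFactory,
                                       ActivationSpec spec) throws ResourceException {
            // Called when an MDB bound to this adapter is deployed. This is where
            // you would open the NTCIP listener and deliver incoming messages to
            // endpoints created from endpointFactory (typically via the WorkManager).
        }

        @Override
        public void endpointDeactivation(MessageEndpointFactory endpointFactory,
                                         ActivationSpec spec) {
            // Undo whatever endpointActivation() set up for this endpoint.
        }

        @Override
        public XAResource[] getXAResources(ActivationSpec[] specs) throws ResourceException {
            // No XA support in this sketch.
            return null;
        }
    }

The interesting part ends up in endpointActivation(), where the adapter usually schedules a Work instance with the WorkManager obtained from the BootstrapContext so the protocol listener runs on container-managed threads.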
Background Context:
Due to enterprise limitations, an uncooperative 3rd party vendor, and a lack of internal tools, this approach has been deemed most desirable. I am fully aware that there are easier ways to do this, but that decision is a couple of pay grades away from my hands, and I'm not about to fund new development efforts out of my own pocket.
Problem:
We need to send an internal file to an external vendor. The team responsible for these types of files only transfers with SFTP, while our vendor only accepts files via REST API calls. The idea we came up with (considering the above constraints) was to use our OpenShift environment to host a "middle-man" SFTP server (running from a jar file) that will hit the vendor's API after our team sends it the file.
I have learned that if we want to get SFTP to work with OpenShift, we need to set up our cluster and pods with an ingress/external IP. This looks promising, but due to enterprise bureaucracy, I'm waiting for the OpenShift admins to make the required changes before I can see if this works, and I'm running out of time.
Questions:
Is this approach even possible with the technologies involved? Am I on the right track?
Are there other configuration options I should be using instead of what I explained above?
Are there any clever ways in which an SFTP client can send a file via HTTP request? So instead of running an embedded SFTP server, we could just set up a web service instead (this is what our infrastructure supports and prefers).
References:
https://docs.openshift.com/container-platform/4.5/networking/configuring_ingress_cluster_traffic/configuring-externalip.html
https://docs.openshift.com/container-platform/4.5/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-service-external-ip.html#configuring-ingress-cluster-traffic-service-external-ip
That's totally possible; I have done it in the past as well, with OpenShift 3.10. Using externalIPs is the right approach.
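The REST half of the relay can be plain JDK code. Below is a minimal sketch, assuming Java 11+; the vendor URL and API-key header are made-up placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    // Hypothetical forwarder: once the embedded SFTP server has written the
    // uploaded file to disk, POST it to the vendor's REST endpoint.
    public class VendorUploader {

        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        public static void forward(Path receivedFile) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://vendor.example.com/api/upload")) // placeholder URL
                    .header("Content-Type", "application/octet-stream")
                    .header("X-Api-Key", "changeme")                          // placeholder auth
                    .POST(HttpRequest.BodyPublishers.ofFile(receivedFile))
                    .build();

            HttpResponse<String> response =
                    CLIENT.send(request, HttpResponse.BodyHandlers.ofString());

            if (response.statusCode() >= 300) {
                throw new IllegalStateException("Vendor rejected file: " + response.statusCode());
            }
        }
    }

The SFTP half (for example an embedded Apache Mina SSHD server inside the same jar) is the part that genuinely needs the externalIP/ingress work, because SFTP is raw TCP and cannot go through a normal HTTP route.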
Our project is a traditional one that uses RMI for the communication between a server and a client (using Swing).
Recently, we decided to change the protocol from RMI to HTTP (for firewall safety) without changing too much of the original code (keeping the original server logic and Swing GUI).
Is there any good and mature way to do the transition? Thanks.
You can use your code as-is with the RMI/HTTP tunnelling that's built into RMI. You just install the RMI-CGI servlet that's distributed with the sample code, configure it appropriately, and Bob's your auntie.
See the documentation. Thanks to @JoopEggen for the link.
I'm trying to use HTTPS in Java EE, for my own login directives and for transferring information via a secure protocol. This should be trivial, but I'm having trouble finding a tutorial/guide for it.
Currently, I'm using Netbeans for all my J2EE work, which uses Glassfish 4.1.1, along with JDK and JRE at the 1.8 version.
I'm basically looking for a comprehensive guide, or a quick summary, of how to implement HTTPS for a servlet, so that when I access that servlet (mydomain/#/myServlet) the protocol is HTTPS, using my own self-created certificate (I also need help with that), and the GET/POST requests are encrypted so the information can't be read (or at least not trivially).
A list of TO-DO items could be enough; if I know what I have to do, I can look for the information properly. But right now, I really can't find anything easy to understand.
Can anyone help? Thank you!
Your server has one port for HTTP communication and another for HTTPS communication, so if you connect on the HTTPS port, the communication will be over HTTPS. Check your server configuration for the HTTPS port and use that port in the URL.
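On the application side, a minimal sketch of the declarative approach, assuming a Servlet 3.x container like GlassFish 4.x: the CONFIDENTIAL transport guarantee below is what tells the container the servlet must be served over HTTPS, so plain HTTP requests get redirected to the HTTPS port (the servlet and its mapping are hypothetical). The same constraint can also be declared in web.xml.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.HttpConstraint;
    import javax.servlet.annotation.ServletSecurity;
    import javax.servlet.annotation.ServletSecurity.TransportGuarantee;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet; CONFIDENTIAL requires access over SSL/TLS.
    @WebServlet("/myServlet")
    @ServletSecurity(@HttpConstraint(transportGuarantee = TransportGuarantee.CONFIDENTIAL))
    public class MySecureServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // request.isSecure() is true here when the request arrived over HTTPS.
            resp.getWriter().println("secure=" + req.isSecure());
        }
    }

The certificate itself is a server-level concern: GlassFish ships with a keystore containing a default self-signed certificate, and you can generate and import your own with the JDK keytool before enabling the HTTPS listener.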
Does anyone have experience running a JRuby project on JBoss (using TorqueBox or whatever) with the ability to communicate with other Java apps that are not on the same JBoss as the JRuby app, i.e. some Java project on another JBoss?
I know there is torquebox-messaging, but I don't know if it's possible to communicate with an external app (outside the JRuby app's JBoss)?
Best practices are welcome.
Thanks in advance.
P.S. Placing that other app on the JBoss where the JRuby app runs is not an acceptable solution.
I can recommend using Thrift and building the communication with it.
Thrift has code generators for both languages you need (Java and JRuby) and provides good, fast communication.
UPDATED:
Thrift is an RPC (remote procedure call) framework developed at Facebook. You can read about it in detail on Wikipedia.
In a few words, to save you time, here is what it is and how to use it:
You describe your data structures and service interface in a .thrift file (or files), and from this you generate all the needed source files (with all the required serialization) for one or more languages. Then you can create a server and a client in a few lines.
Using the generated code on the client side looks like you are just using a simple class, as in the sketch below.
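For example, assuming a hypothetical HelloService defined in a .thrift file and run through the Thrift compiler, the generated Java client is used roughly like this (the service and method names depend entirely on your IDL):

    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.protocol.TProtocol;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class HelloClient {
        public static void main(String[] args) throws Exception {
            // Plain blocking socket transport with the binary protocol.
            TTransport transport = new TSocket("localhost", 9090);
            transport.open();
            TProtocol protocol = new TBinaryProtocol(transport);

            // HelloService is the hypothetical code generated from the .thrift file.
            HelloService.Client client = new HelloService.Client(protocol);
            System.out.println(client.sayHello("world"));

            transport.close();
        }
    }

The server side is similarly short: you wrap your service implementation in the generated Processor and hand it to one of the provided servers (for example a TThreadPoolServer on a TServerSocket).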
With Thrift you can choose which protocol and transport are used.
In most cases the Binary or Compact protocol is used over a blocking or non-blocking transport, so the network communication is light and fast, with fast serialization on top.
SOAP (XML over HTTP) messages are several times bigger and are a poor fit for sending binary data, and that's not all: XML serialization is also very slow, so with SOAP you get a lot of overhead. On top of that, with SOAP you need to write (or use a third-party) library for calling the server (a thin network layer); Thrift already provides that for you.
SMTP, and basically JMS as well, are inappropriate for real-time, request-response communication.
I mean, if you just need to put a message on a queue so that someone picks it up and processes it at some point, you can (and should) use JMS or any other MQ service (Thrift can do this too, but an MQ architecture is better suited to that problem).
But if you need real-time request-response calls, you should use RPC; as a protocol it can be HTTP-based (REST, SOAP), binary (Thrift, Protobuf, JDBC, etc.), or anything else.
Thrift (and Protobuf) provide a framework for generating the client and server, so they insulate you from low-level issues.
P.S.: I made an example in the past, https://github.com/imysak/using-thrift (communication via a Thrift Java server plus a Java client or a node.js client); maybe it will be useful for someone. But you can find simpler and better examples.
TorqueBox supports JMS. The gem you mentioned, torquebox-messaging, allows publishing and processing of HornetQ messages on the local JBoss AS server/cluster that the JRuby app is running in. I don't think it currently supports connecting to remote servers.
Using this functionality in your JRuby app, you could then configure your Java app on the other server to communicate with the HornetQ instance running in the JBoss AS that hosts the JRuby app, roughly as sketched below.
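A rough sketch of that Java side, using the plain JMS API; the JNDI factory, provider URL and queue name are placeholders and depend heavily on your JBoss AS/HornetQ version and configuration:

    import java.util.Properties;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class RemoteQueueSender {
        public static void main(String[] args) throws Exception {
            // Placeholder JNDI settings; the factory class, URL and names differ
            // between JBoss AS versions and depend on how HornetQ is exposed.
            Properties env = new Properties();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
            env.put(Context.PROVIDER_URL, "jnp://jruby-host:1099");
            InitialContext ctx = new InitialContext(env);

            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("/queues/from_java"); // placeholder queue name

            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                TextMessage message = session.createTextMessage("hello from the other JBoss");
                producer.send(message);
            } finally {
                connection.close();
            }
        }
    }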
Alternatively you could always implement your own communication protocol or use another Java library - you have access to anything Java you want to run from JRuby.
You can use Web Services or JMS for that
So my most basic question here is: how do you build TCP interfaces into your Java EE applications? Instead of interacting with a legacy EIS, I need to interact with a block of TCP/IP ports. Ideally, I'd like a message-driven bean to have its onMessage method invoked by an incoming TCP request and also be able to respond back over the same connection.
JCA seems general enough to be capable of something like this within a Java EE environment. Would developing a custom connector be the appropriate technique for integrating inbound/outbound TCP interfaces in a Java enterprise ecosystem?
As far as what I've tried so far: we're currently utilizing a lifecycle module which starts by kicking off a number of TCP listeners; this invokes a message-driven bean which calls a business method, and it all returns over the same TCP stream. This is actually working alright, but the lifecycle support in my application server (Glassfish) feels like it has been added as an afterthought. So, JCA seems like a first-class solution to this sort of problem and it seems to enable us to communicate over TCP.
However, from the initial research we've conducted, the connector architecture does seem to be targeted at legacy information systems rather than generalized TCP communication. So my question could be put as: are people using custom JCA connectors to integrate TCP/IP into their Java EE applications, or is there a better technique for accepting TCP connections from my EJBs?
MXBeans and JCA (MXBeans are easier; I have implemented both), but basically you only need two things, start and stop, and possibly to rely on other MXBeans/JCA/JNDI to carry out your services, with the app server generating the needed proxies for you.
Real application: a hacked Tomcat with an NIO acceptor that can trap connections on ports 80 and 443 and still use the web server normally.
Followed by a full platform (including its own (re)deployer) to manage sessions/messages and all that jazz.
It seems you have already resolved your initial problem. That's nice, but to help people who land here, this is a nice sample on the subject: http://code.google.com/p/jca-sockets
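To connect that sample back to the question: inside a connector like jca-sockets, the inbound side boils down to scheduling a socket-handling Work with the WorkManager and pushing each request into an MDB endpoint obtained from the MessageEndpointFactory. A rough, hypothetical sketch (the listener interface and the payload handling are made up):

    import java.lang.reflect.Method;
    import javax.resource.spi.endpoint.MessageEndpoint;
    import javax.resource.spi.endpoint.MessageEndpointFactory;
    import javax.resource.spi.work.Work;

    // Hypothetical Work started from endpointActivation(): it accepts TCP requests
    // and hands each payload to an MDB endpoint created by the container.
    public class TcpDispatchWork implements Work {

        private final MessageEndpointFactory endpointFactory;
        private volatile boolean running = true;

        public TcpDispatchWork(MessageEndpointFactory endpointFactory) {
            this.endpointFactory = endpointFactory;
        }

        @Override
        public void run() {
            while (running) {
                byte[] payload = acceptNextRequest(); // placeholder: read from the server socket
                try {
                    // The container builds a proxy around the MDB; cast it to the
                    // listener interface declared by the adapter (hypothetical here).
                    MessageEndpoint endpoint = endpointFactory.createEndpoint(null);
                    Method onMessage = TcpMessageListener.class.getMethod("onMessage", byte[].class);
                    endpoint.beforeDelivery(onMessage);
                    try {
                        ((TcpMessageListener) endpoint).onMessage(payload);
                    } finally {
                        endpoint.afterDelivery();
                        endpoint.release();
                    }
                } catch (Exception e) {
                    // log and keep serving
                }
            }
        }

        @Override
        public void release() {
            // Called by the WorkManager when the adapter is shutting down.
            running = false;
        }

        private byte[] acceptNextRequest() {
            // Placeholder for the real ServerSocket accept/read loop.
            return new byte[0];
        }
    }

    // Hypothetical message listener interface implemented by the MDB.
    interface TcpMessageListener {
        void onMessage(byte[] payload);
    }

The resource adapter schedules this with bootstrapContext.getWorkManager().scheduleWork(...) during endpointActivation(), which is essentially the first-class replacement for the lifecycle-module listener you have today.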