I'm looking at the Java API for MarkLogic, which I assume leverages the HTTP protocol for database connections. Is it possible to establish a connection over TCP? If not with the Java API, is it possible to interrogate the database by any means over TCP?
Our current architecture is based on microservices and includes a number of stages in any given process flow through the system, including queueing, message brokering, etc. Given the number of steps, I'd like to optimise traffic speed as far as possible by leveraging TCP connections.
The Java API uses the REST application services on MarkLogic, which are fully HTTP 1.1 compliant and therefore run over TCP/IP.
Not sure what else you are asking for.
For programs written in Java, the Java API is the recommended API for most uses:
http://developer.marklogic.com/products/java
You can also use the REST services directly, but the Java API adds a lot of best practice and exposes a higher level of abstraction to make coding simpler.
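For instance, a minimal sketch using the Java Client API to open a connection and write a document (the host, port, credentials, and the exact newClient signature are assumptions here; check them against your client version):

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.DatabaseClientFactory.Authentication;
import com.marklogic.client.document.TextDocumentManager;
import com.marklogic.client.io.StringHandle;

public class JavaApiSketch {
    public static void main(String[] args) {
        // Placeholder host, port, and credentials for a REST API instance.
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000, "rest-writer", "password", Authentication.DIGEST);

        // Write a small text document; the API builds and sends the HTTP request for you.
        TextDocumentManager docMgr = client.newTextDocumentManager();
        docMgr.write("/example/hello.txt", new StringHandle("Hello, MarkLogic"));

        client.release();
    }
}
```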
You can use the REST API from any application that can do HTTP
http://docs.marklogic.com/guide/rest-dev/intro
But it's a bit more work, as you have to construct your HTTP messages yourself.
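For example, a bare-bones GET against the documents endpoint might look roughly like this (the host, port, and document URI are placeholders, and a real client would also handle MarkLogic's digest authentication):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder REST instance and document URI.
        URL url = new URL("http://localhost:8000/v1/documents?uri=/example/hello.txt");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // Note: a real client would configure digest authentication here,
        // e.g. via java.net.Authenticator or an HTTP client library.
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```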
You can also create your own HTTP interface and access it over TCP/IP (HTTP) by writing an HTTP App Server in XQuery.
Finally, if you want very low-level but efficient access from Java or .NET, you can use the XCC interface, which is more tedious to use but provides lower-level features for advanced users. This requires the Java or .NET library, as the protocol is not documented.
https://developer.marklogic.com/products/xcc
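A rough sketch of an XCC session (the xcc:// connection string is a placeholder; adjust host, port, credentials, and database):

```java
import java.net.URI;

import com.marklogic.xcc.ContentSource;
import com.marklogic.xcc.ContentSourceFactory;
import com.marklogic.xcc.Request;
import com.marklogic.xcc.ResultSequence;
import com.marklogic.xcc.Session;

public class XccSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string for an XDBC server.
        ContentSource source = ContentSourceFactory.newContentSource(
                new URI("xcc://user:password@localhost:8010/Documents"));

        Session session = source.newSession();
        // Run an ad-hoc XQuery and print the result.
        Request request = session.newAdhocQuery("xdmp:estimate(fn:doc())");
        ResultSequence results = session.submitRequest(request);
        System.out.println("Documents in database: " + results.asString());
        session.close();
    }
}
```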
What language are you going to be using, and what kinds of operations? That can help focus on which API is best for you.
-David Lee
HTTP is built on TCP. So by definition all HTTP connections are over TCP.
If you'd like a proprietary protocol instead of HTTP, one option is to forget the fact you learned that the Java API uses HTTP and imagine it uses TCP directly. :)
If you really want a proprietary protocol over TCP, you can use the XDBC protocol in combination with the XCC client. By default XDBC uses a wire protocol on TCP that isn't published.
My application uses a custom binary protocol which is implemented with Netty. Recently I changed it to use Netty's websocket implementation. It works quite well.
My application also has a Jetty web server included, and it offers websockets too. Now I want to reduce the number of open ports my server needs and handle all HTTP traffic on one port.
I see three options:
Use either Netty or Jetty to proxy the traffic which belongs to the other implementation.
Reimplement the custom protocol on Jetty without the use of Netty's channels and pipelines.
Create a custom implementation of Netty's channels that sends and receives its data not over a socket but through the methods Jetty's WebSocketListener offers.
Since Netty provides such a good API for writing binary protocols, and a proxy sounds like extra problems to me, I tend towards the third approach. It shouldn't be too difficult to implement, even though I don't know how to do it yet; a rough sketch of what I have in mind is below.
Any thoughts on which would be the best option and how I should implement it?
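This is roughly the shape I imagine for option three, assuming Netty's EmbeddedChannel and Jetty 9's WebSocketListener (the pipeline handlers are placeholders for the existing protocol implementation):

```java
import java.nio.ByteBuffer;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.WebSocketListener;

// Bridges a Jetty websocket connection into a Netty pipeline without opening a second port.
public class NettyBridgeSocket implements WebSocketListener {
    // EmbeddedChannel runs the existing protocol handlers without a real network socket.
    private final EmbeddedChannel channel = new EmbeddedChannel(/* existing protocol handlers */);
    private Session session;

    @Override
    public void onWebSocketConnect(Session session) {
        this.session = session;
    }

    @Override
    public void onWebSocketBinary(byte[] payload, int offset, int len) {
        // Feed the incoming websocket frame into the Netty pipeline as inbound data.
        channel.writeInbound(Unpooled.wrappedBuffer(payload, offset, len));
        // Drain whatever the pipeline produced and send it back over the websocket.
        ByteBuf outbound;
        while ((outbound = channel.readOutbound()) != null) {
            byte[] bytes = new byte[outbound.readableBytes()];
            outbound.readBytes(bytes);
            try {
                session.getRemote().sendBytes(ByteBuffer.wrap(bytes));
            } catch (java.io.IOException e) {
                session.close();
            }
        }
    }

    @Override
    public void onWebSocketText(String message) {
        // The custom protocol is binary only.
    }

    @Override
    public void onWebSocketClose(int statusCode, String reason) {
        channel.close();
    }

    @Override
    public void onWebSocketError(Throwable cause) {
        channel.close();
    }
}
```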
Does anyone have experience running a JRuby project on JBoss (using TorqueBox or whatever) with the ability to communicate with other Java apps that are not on the same JBoss as the JRuby app, i.e. some Java project on another JBoss?
I know there is torquebox-messaging, but I don't know whether it's possible to communicate with an external app (outside the JRuby app's JBoss).
Best practices are welcome.
Thanks in advance.
P.S. Placing that other app on the JBoss where the JRuby app runs is not an acceptable solution.
I can recommend that you use Thrift and build the communication with it.
Thrift has generators for both of the languages you need (Java and JRuby) and provides good, fast communication.
UPDATED:
Thrift is an RPC (remote procedure call) framework developed at Facebook. You can read about it in detail on the wiki.
In a few words, to save you time, here is what it is and how to use it:
You describe your data structures and service interface in a .thrift file (or files), and from that file you generate all the needed source files (with all the necessary serialization) for one or more languages. Then you can create a server and a client in a few lines.
Using it on the client side looks like you are just using a simple class.
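For example, assuming the Thrift compiler generated a service named UserService with a getUserName call (both names are made up for illustration), the Java client side looks roughly like this:

```java
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ThriftClientSketch {
    public static void main(String[] args) throws Exception {
        // Connect to a Thrift server on a placeholder host and port.
        TTransport transport = new TSocket("localhost", 9090);
        transport.open();

        TProtocol protocol = new TBinaryProtocol(transport);
        // UserService.Client is generated by the Thrift compiler from the .thrift file.
        UserService.Client client = new UserService.Client(protocol);

        // The remote call reads like an ordinary method call on a local object.
        String name = client.getUserName(42);
        System.out.println(name);

        transport.close();
    }
}
```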
With Thrift you can choose which protocol and transport to use.
In most cases the Binary or Compact protocol is used over a blocking or non-blocking transport, so network communication is light and fast, with fast serialization on top.
SOAP (XML over HTTP) packages are several times bigger and are inappropriate for sending binary data, and not only that: XML serialization is also very slow, so with SOAP you pay a big overhead. In addition, with SOAP you need to write (or use a third-party) library for calling the server (a thin network layer); Thrift has already done that for you.
SMTP and, basically, JMS are inappropriate for realtime question-answer communication.
I mean, if you just need to put a message in a queue and someone will later take that message and process it, you can (and should) use JMS or any other MQ service (Thrift can do this too, but an MQ architecture is better for that problem).
But if you need realtime query-answer calls, you should use RPC; the protocol can be HTTP (REST, SOAP), binary (Thrift, ProtoBuf, JDBC, etc.) or anything else.
Thrift (and ProtoBuf) provide a framework for generating the client and server, so they insulate you from the low-level issues.
P.S:
I made an example in the past, https://github.com/imysak/using-thrift (communication via a Thrift Java server plus a Java client or a node.js client); maybe it will be useful for someone. But you can find simpler and better examples.
Torquebox supports JMS. The gem you specified torquebox-messaging allows for publishing and processing of HornetQ messages on the local JBoss AS server/cluster that the JRuby app is running in. I don't think it currently supports connecting to remote servers.
Using this functionality in your JRuby app you could then configure your Java app on another server to communicate with HornetQ running in the JBoss AS that the JRuby app is running on.
Alternatively you could always implement your own communication protocol or use another Java library - you have access to anything Java you want to run from JRuby.
You can use Web Services or JMS for that.
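If you go the JMS route, a minimal sketch of the Java side might look like this (the JNDI names are placeholders and depend entirely on how HornetQ/JBoss is configured):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class JmsSendSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder JNDI names; configure jndi.properties to point at the remote JBoss/HornetQ.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/queue/jrubyEvents");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);

        // Send a simple text message that the JRuby side can consume.
        TextMessage message = session.createTextMessage("hello from the Java app");
        producer.send(message);

        connection.close();
    }
}
```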
I need to push events to web clients in a cross-browser manner (iPhone, iPad, Android, IE/FF/Chrome/etc.) from a Spring based Java server. I am using backbone.js on the client side.
To the best of my knowledge, I can either go with a WebSocket-only approach or use something like socket.io.
What is the best practice for this issue, and which platform/frameworks should I use?
Thanks
Looks like you're interested in an AJAX Push engine. ICEPush (same group that makes ICEFaces) provides these capabilities, and works with a variety of server- and client-side frameworks. There is also APE.
You can have a look at Lightstreamer.
My company is currently using it to push real time financial data from a web server.
I suppose Grizzly or Netty may fit your needs. I don't have real experience in that area, unfortunately.
I'd recommend socket.io, as you mentioned in your question, if you're doing browser-based eventing from a remote host. Socket.io handles all the connection keep-alives and reconnections directly from JavaScript and has facilities for channelling messages to specific sessions (users). The real advantage comes from the two-way communication of WebSockets without all the boilerplate code of maintaining the connection.
You will need to do some digging for a Java implementation, though. Consider running the server directly on V8.
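If you end up with a plain Java WebSocket implementation instead, a bare JSR-356 endpoint for pushing events could look roughly like this (the endpoint path and payload are placeholders, and a socket.io-style setup would replace this entirely):

```java
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Placeholder path; the endpoint is registered by the servlet container the app is deployed in.
@ServerEndpoint("/events")
public class EventPushEndpoint {
    private static final Set<Session> sessions = new CopyOnWriteArraySet<>();

    @OnOpen
    public void onOpen(Session session) {
        sessions.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        sessions.remove(session);
    }

    // Called from application code to push an event to every connected client.
    public static void broadcast(String eventJson) {
        for (Session session : sessions) {
            try {
                session.getBasicRemote().sendText(eventJson);
            } catch (IOException e) {
                sessions.remove(session);
            }
        }
    }
}
```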
What is the difference between socket programming, RMI and servlets? When to use what?
The Socket APIs are the low-level (transport level) abstraction by which a Java application interacts with the network, and by extension with remote clients and services. Socket and related APIs support reliable byte stream and unreliable messaging services. They are typically used for TCP/IP and UDP/IP, though other networking protocol stacks can (at least in theory) be supported.
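For example, a minimal client using the Socket API (the host, port, and line-based protocol are made up for illustration):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class SocketClientSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder host and port for some line-oriented TCP service.
        try (Socket socket = new Socket("localhost", 9000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println("ping");               // send one line to the server
            System.out.println(in.readLine()); // read the server's reply
        }
    }
}
```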
RMI is a framework and protocol family for implementing application-level networking between Java applications. It models network interactions as Java method calls made against objects that live in other applications. This model requires a mechanism (typically a name server) that allows one application to "publish" objects so that another application can refer to them. This (and the fact that RMI ports are typically blocked by default) means that there is a non-trivial amount of configuration effort in setting up RMI-based applications.
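A minimal RMI sketch (the interface and binding name are illustrative):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface shared between client and server.
interface Greeter extends Remote {
    String greet(String name) throws RemoteException;
}

public class RmiServerSketch implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws Exception {
        // Export the object and publish it in a local RMI registry so clients can look it up.
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new RmiServerSketch(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("Greeter", stub);
        // A client on another JVM would call LocateRegistry.getRegistry(host, 1099).lookup("Greeter").
    }
}
```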
Servlets are a collection of APIs that are primarily designed for implementing the server side of HTTP communications; i.e. for building webservers in Java. They (or more accurately the web container in which they run) take care of the details of the HTTP protocol, so that the programmer (in theory) only needs to deal with "application" concerns.
In practice, the servlet developer and/or deployer has to deal with other things such as mapping URLs to servlets to objects, security and authentication. In addition, Servlets only deal with the server side of an HTTP interaction ... the client side must be handled by different APIs. (You could also argue that Servlets by themselves do not do enough, as evidenced by the proliferation of web application frameworks that are built on top of Servlets.)
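For comparison, a minimal servlet (the URL mapping via web.xml or @WebServlet is omitted):

```java
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The container parses the HTTP request, manages threads and sockets,
// and hands the servlet a ready-made request/response pair.
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        response.setContentType("text/plain");
        response.getWriter().println("Hello, " + request.getParameter("name"));
    }
}
```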
In brief:
Sockets are for low-level network communication
RMI is for high-level Java-to-Java distributed computing
Servlets are for implementing websites and web services
Sockets -- a few simple calls that interface directly with TCP/IP. Very simple, but you have to implement your own buffer handling and deal with incomplete responses and timeouts yourself. No authentication or security provided.
RMI -- handles all of the above, <personal opinion>it's one of the worst APIs to have contaminated the Java standards</personal opinion>, fairly simple to program, handles basic network errors, authentication and security issues. Difficult to configure and deploy.
Servlets -- a lovely, simple API; all network issues are handled for you, and security and authentication come via plugins. No deployment issues, simple configuration.
Use sockets to implement a specific TCP/IP protocol, whether an existing standard or your own custom protocol. You have complete control over all aspects of network communication.
Servlets support request/reply semantics in the general sense, but it is far more likely you will be using HttpServlets, which support, as expected, HTTP request/reply semantics. For example, a web server, or a RESTful HTTP-based endpoint.
Use RMI for distributed Java Objects. RMI is itself implemented using Sockets (see above) and implements the Java Wire Protocol.
I am trying to learn about the Jini API in Java, but I can't get my head around how the server and client interact, and I constantly see things being referred to as "smart proxies". What are smart proxies? And how do the client and server interact?
Thanks.
Jini is based on Java RMI, so clients and servers communicate with each other just as they do in RMI: request/response using RMI protocol on the wire.
As for the "smart proxies": the Jini compiler uses a proxy factory to generate implementation code for your interface that includes an API for sending and receiving metadata about services. This is the magic that makes it possible for a client to send out a request for a certain kind of service on the network (e.g. "I'd like a color laser plotter") and select from the responses to find the best match possible.
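A sketch of the client side of that interaction, using the standard Jini discovery and lookup classes (the Printer service interface is hypothetical):

```java
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.core.lookup.ServiceTemplate;
import net.jini.discovery.DiscoveryEvent;
import net.jini.discovery.DiscoveryListener;
import net.jini.discovery.LookupDiscovery;

public class JiniClientSketch {
    // Hypothetical service interface the client compiled against; the downloaded
    // smart proxy implements it and decides how to talk to the real service.
    interface Printer {
        void print(String document) throws java.rmi.RemoteException;
    }

    public static void main(String[] args) throws Exception {
        // Discover lookup services on the network via multicast discovery.
        LookupDiscovery discovery = new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
        discovery.addDiscoveryListener(new DiscoveryListener() {
            public void discovered(DiscoveryEvent event) {
                for (ServiceRegistrar registrar : event.getRegistrars()) {
                    try {
                        // Ask the lookup service for anything implementing Printer.
                        ServiceTemplate template =
                                new ServiceTemplate(null, new Class[] { Printer.class }, null);
                        Printer printer = (Printer) registrar.lookup(template);
                        if (printer != null) {
                            // The call goes through whatever transport the smart proxy chose.
                            printer.print("hello from a Jini client");
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
            public void discarded(DiscoveryEvent event) { }
        });
    }
}
```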