How can microservices communicate with each other? - java

I want to transition my currently monolithic application to a microservices model. I'm currently stuck on figuring out the best way for independent microservices to communicate with each other. Each independent microservice could theoretically be written in a different language, so the communication protocol has to be language independent. Additionally, files don't work, since the services may run on separate machines, or even on separate continents. Here are my current ideas and the issues with them:
Use a DB/message broker like Redis: this relies on a central DB, so if it goes down the entire application goes down. Not ideal.
Use WebSockets: what about the overhead? And how do I handle authentication?
Expose an HTTP(S) API: same downsides as WebSockets.
Some other protocol designed for end users, like RSS: this seems like a bad idea.
Any additional thoughts on the 4 methods I stated? Which one is the best? If possible, could I also be given some examples of how best to implement them? Java is the language the first microservices will be written in, but remember that the approach has to work with most other languages, NOT only Java! If there's a better way to achieve my goals, please state it and elaborate on how it works.
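Since the question asks for implementation examples, here is a minimal sketch of option 3 (exposing an HTTP(S) API) using only the JDK's built-in com.sun.net.httpserver, so any other language can call it with a plain HTTP client; the service name and endpoint path are made up for illustration, and authentication would still need to be layered on top (e.g. tokens in headers or mutual TLS).

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class InventoryService {
    public static void main(String[] args) throws Exception {
        // Listen on port 8080; other services (in any language) call this over HTTP.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/inventory/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}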

Related

Inter-process-communication between a Java application and a local server

Firstly, cheers to all PROGRAMMERS [today = Programmers' Day :)]
Secondly,
I'm working on a project where the specifications require using a server as a front end and an application in the back end. The project is an advanced smart home system. The server will handle commands coming from the client over the internet (say, like a remote control from outside the house) and send them, through a communication channel, to the application (I'm planning on using a Java application), which will handle the main logic such as controlling hardware (lights, ...), reading from a local microphone, and accessing a database to act as an offline speech recognition system.
Now I'm still in the planning phase and I'm not sure which technologies are best for this project. I'm thinking of using Node.js or Apache as the server, a Java application as the back end, and any SQL database for the application's SRS.
I hope this illustration demonstrates clearly how the system works:
The main question is:
What is the best way to make the Java application communicate with the server (the communication channel must be bidirectional)?
And do you recommend a specific server other than the ones mentioned for this job?
What crossed my mind so far:
1- JSP and servlets (making the server the application too). But I don't want the server to handle the offline stuff, and I'm not sure whether Java servlets can access the hardware interface. I also want the server to be kept away from making critical decisions (a different layer, for security reasons, and because it won't be used as frequently as the local [offline] system).
2- Communication channel :
A- A shared file, but that's a bad idea since I don't want the application to poll the file from time to time to check whether its contents have changed (a command received), which means excessive operations.
B- Inter-process communication through a port (socket communication) seems like the best solution, but I don't know how that would turn out in terms of operation cost and communication errors.
OS used : Linux Raspbian
EDIT:
I'm sure ZMQ + Apache is good enough for this task, but how does it compare to web services (like SOAP)? Would web services be a better solution in terms of standard implementation and security?
All related suggestions are welcomed, TQ
ZeroMQ is great for internal communications, as are other similar messaging solutions.
For your case specifically, I can see ZeroMQ being the best fit.
Reasons:
Your offline server has to be agnostic to the web solution.
Communication can be reliable and bidirectional, with other patterns possible as well (pub/sub, req/rep, etc.).
Restarting either side does not require restarting the sockets (connections) on the other side, since messages are queued.
You can scale not just on the same hardware, but across a local area network or even over the internet.
A big, supportive community. It might look a bit hard to get into, but in reality it is dead simple: go through the examples, and once the concepts are understood it is very easy and neat to work with.
ZeroMQ has bindings for most popular languages, including Java and Node.js.
Considerations:
You need to think about the packets and data that will be sent, so a popular data format like XML or JSON is a good way to go.
Responsibilities of the different services: make sure they are not too dependent on each other. If the main offline server is the core of the system, make sure it does not depend on the web-facing service, so that the web face can be removed, replaced, or improved independently.
A few more points to think about:
Why Java, and what about a modular approach? For example, if you want to expand and scale (add more sensors to the smart home solution), then having one giant application means changing it every time, which makes it harder to maintain, as does supporting different clients with their own needs. Think in a modular way: some core functionality for the offline stuff, plus many aggregator processes that talk to the different sensors. This makes it easier to support different setups and environments, as well as to maintain the system as a whole by improving independent components.
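To make the ZeroMQ suggestion concrete, here is a minimal sketch of the offline core answering commands over a REQ/REP pair, assuming the JeroMQ Java binding; the port, address, and command strings are illustrative only.

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class OfflineCore {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            // REP socket: the web-facing server connects with a REQ socket and sends commands.
            ZMQ.Socket socket = context.createSocket(SocketType.REP);
            socket.bind("tcp://*:5555");
            while (!Thread.currentThread().isInterrupted()) {
                String command = socket.recvStr();   // e.g. "LIGHTS_ON"
                // ... dispatch to the hardware / speech-recognition logic here ...
                socket.send("ACK " + command);       // reply goes back to the requester
            }
        }
    }
}

The same pattern works from Node.js on the web-facing side, since ZeroMQ has bindings for both languages.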

Share data between Java EE servers

What products/projects could help me with the following scenario?
More than one server (same location)
Some state should be shared between the servers (for instance, whether a scheduled task is running and on which server).
The obvious answer would of course be a database, but we are using Seam and there doesn't seem to be a good way to nest transactions inside a Seam bean, so I need an approach where I don't have to go crazy over configuration (I tried to use EJBs, but persistence.xml wasn't pretty afterwards). So I need another way around this problem until Seam supports nested transactions.
This is basically the same scenario as I have if you need more details: https://community.jboss.org/thread/182126.
Any ideas?
Sounds like you need to do distributed job management.
The reality is that in the Java EE world, you are going to end up having to use queues, as in MoM [message-oriented middleware]. Seam will work with JMS, and you can have both point-to-point queues and publish/subscribe topics.
Where you might want to take a look for an alternative is at Akka. It gives you the ability to distribute jobs across machines using an Actor/Agent model that is transparent. That is to say, your agents can cooperate with each other whether they are on the same instance or across the network from each other, and you are not writing a ton of code to make that happen, or special-handling things up and down the message chain.
The other thing Akka has going for it is the notion of Supervision, a.k.a. Go Ahead and Fail, or Let It Crash. This is the idea (followed by the telcos for years) that systems will fail, so you should design for failure and have a means of making things resilient.
Finally, the state of other job-management options in the Java world is dismal. I have used Seam for years. It's great, but they decided to support only Quartz for jobs, which is useless.
Akka is built on Netty, too, which does some pretty crazy stuff in terms of concurrency and performance.
[Not a TypeSafe employee, btw…]
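To make the actor idea concrete, here is a minimal sketch in Java, assuming a recent Akka classic-actor API (newer than this discussion); the actor, system, and job names are hypothetical. The point is that the same message-handling code keeps working whether the worker runs locally, remotely, or under a supervisor.

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class JobWorker extends AbstractActor {

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(String.class, jobName -> {
                    // Run the scheduled task; the sender hears back when it is done.
                    getSender().tell("finished " + jobName, getSelf());
                })
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("jobs");
        ActorRef worker = system.actorOf(Props.create(JobWorker.class), "worker");
        worker.tell("nightly-report", ActorRef.noSender());
    }
}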

Huge data processing/ HPC in Java - suggest me how to begin

I am thinking of working on a programming problem for which, I suppose, I will need to know a lot of advanced programming concepts. For various reasons I have decided to code it in Java, even though I am not proficient in it.
So I want your help with suggestions, guidance, pointers to resources, books, tutorials, or any general advice that you think is pertinent.
Here is the basic nature of my problem:
I need to create a client-server architecture. The server supports multiple concurrent clients. Clients send it simple instructions (maybe the server exposes some kind of API or runs a listener on a specific port); the server executes the instructions and sends results back to the client.
The main job of the server is to do a huge volume of data processing based on the instructions given to it. It takes data from a backend database / file systems. Data volume can easily surge to ~200 GB to 700 GB. Data will usually be streamed to it, but it may need to hold a huge volume of data in an in-memory cache during processing (and page it to disk if RAM is not enough). Computations are generally numerically intensive in nature (say, taking the inverse of a matrix).
The server should be able to do multithreading (I don't know exactly what this term means in Java; what I want is for the server to be able to distribute a job across multiple parallel sub-processes).
The server itself should be very lightweight. I do NOT need any GUI.
It would be great if I design it in a way that lets me integrate it later with HPC frameworks like Hadoop.
Now, if I am going to do this, what kind of programming do I need to learn? By the way, I have a good understanding of OOP, I am somewhat familiar with data structures and algorithms, and I know basic Java (I have never done any network or multithreaded programming in Java before, but I have used typical OOP concepts, generics, the Comparable interface, etc.). I basically work in database programming, but have also done a lot of C, C++, C#, and Python in the past.
Given the requirement and my background, please suggest,
How should I begin to work on this project? What is the way to architect the project?
Should I create some basic API definitions first and then start working on the details?
Should I follow any particular design pattern? Where to learn them from?
What are the things I need to learn in Java and where to learn them from?
What is the best way to read huge data into memory? Is Java NIO a good solution?
If I instantiate a class with a huge amount of data, would it work? (For example, say I have a Vector class representing a matrix with millions of elements, and the constructor of the class reads a huge data set into memory.) What's the best way to handle that?
You will want to define how the client and server will talk to each other. The easiest way is to use an established protocol such as HTTP by creating REST services that the client can call without much coding.
Most frameworks that support HTTP create several listeners that run in different threads. This gives you multithreading out of the box.
I'd suggest looking into Spring controllers; that's what I prefer. Spring is fairly lightweight.
If you want to use these frameworks, you will want a quick way to find them and incorporate them into your application for compilation and packaging.
I would suggest looking into Maven for this. It's a big time saver, in particular using archetypes to create your project's folder structure and automatically downloading dependencies (and their dependencies).
Finally, my words of wisdom: ensure your services are stateless singletons. This means you create the objects only once and each thread uses the same objects, so there is a lot less garbage collection happening. This makes a huge difference when processing large numbers of requests.
Be careful not to use class-level variables to hold state in these services. If you do, different threads will overwrite each other's data.
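As an illustration of such a stateless REST controller, here is a minimal sketch; it assumes Spring Boot (used here only to keep the example short) and a hypothetical /jobs/invert endpoint and payload.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class ComputeController {

    // The controller is a stateless singleton: no mutable fields, so concurrent
    // requests handled on different threads cannot overwrite each other's data.
    @PostMapping("/jobs/invert")
    public double[][] invert(@RequestBody double[][] matrix) {
        // ... the numerically intensive work would go here ...
        return matrix;
    }

    public static void main(String[] args) {
        SpringApplication.run(ComputeController.class, args);
    }
}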
The first thing I would like to say is that, from your explanation, you seem to be in pretty good shape to use Java as your server-side language.
The kind of client-server architecture you choose may depend on what kind of clients you are actually serving. Would they be typical GUI- or CUI-based desktop clients, or web clients?
In the latter case you could use the Spring Framework in the normal fashion, and for the former you could go further and explore Spring's support for RESTful web services. I would advise against going with socket- or TCP-based networking solutions or raw Java networking.
Spring's REST support gives you a very nice abstraction over things like networking and multithreading, even for a desktop-based client. In the case of a desktop client you can use JSON/XML as the response format and the HttpClient library for making calls to the server, which is a very nice abstraction over the underlying networking.
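For the desktop-client side, a minimal sketch of such a call, assuming Apache HttpClient 4.x and a made-up URL, could look like this:

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class DesktopClient {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            // Ask the server for the status of a (hypothetical) job resource.
            HttpGet request = new HttpGet("http://localhost:8080/jobs/42/status");
            try (CloseableHttpResponse response = client.execute(request)) {
                String json = EntityUtils.toString(response.getEntity());
                System.out.println(json);   // parse the JSON/XML body here
            }
        }
    }
}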
Furthermore, Spring's design patterns follow a very linear flow of data. A lot of your fundamental design considerations are catered for by Spring itself through Dependency Injection and Inversion of Control, which are extremely simple to incorporate.
For a detailed analysis of design patterns related to specific requirements, I would suggest reading the book Java Design Patterns: A Tutorial by James W. Cooper (Addison-Wesley).
One more thing about API design: it would be preferable to first create an API specification and then go on to implement it.

Java TCP Server-Client Design Solution

I'm in the process of developing a highly object-oriented solution, (i.e., I want as little coupling as possible, lots of reusability and modular code, good use of design patterns, clean code, etc). I am currently implementing the client-server aspect of the application, and I am new to it. I know how to use Sockets, and how to send streams and receive them. However, I am unsure of actually how to design my solution.
What patterns (if any) are there for TCP Java solutions? I will be sending lots of serialized objects over the network, how do I handle the different requests/objects? In fact, how do I handle a request itself? Do I wrap each object I'm sending inside another object, and then when the object arrives I parse it for a 'command/request', then handle the object contained within accordingly? It is this general design that I am struggling with.
All the tutorials online just seem to be bog-standard echo servers that send back the text the client sent. These are only useful when learning about sockets themselves, but not when applying them to a real situation. Lots of case statements and if statements just seems like poor design. Any ideas? I'd much rather not use a framework at this stage.
Cheers,
Tim.
Consider using a higher-level protocol than TCP/IP; don't reinvent the wheel. RMI is a good option, and you should be able to find good tutorials on it.
I suggest you either use RMI, or look at it in detail so you can determine how you would do things differently. At a minimum I suggest you play with RMI to see how it works before attempting to do it yourself.
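For reference, a minimal RMI sketch looks like the following; the OrderService interface, its method, and the registry port are hypothetical, but the exportObject/rebind/lookup flow is the standard java.rmi API.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface shared by client and server.
interface OrderService extends Remote {
    String placeOrder(String item, int quantity) throws RemoteException;
}

// Server side: implement the interface and register a stub.
class OrderServiceImpl implements OrderService {
    public String placeOrder(String item, int quantity) {
        return "accepted: " + quantity + " x " + item;
    }

    public static void main(String[] args) throws Exception {
        OrderService stub = (OrderService)
                UnicastRemoteObject.exportObject(new OrderServiceImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("orders", stub);
    }
}

// Client side: look up the stub and call it like a local object.
class OrderClient {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.getRegistry("localhost", 1099);
        OrderService orders = (OrderService) registry.lookup("orders");
        System.out.println(orders.placeOrder("sensor", 3));
    }
}

Serialized request/response objects travel as method parameters and return values, so there is no need for hand-rolled case statements over incoming messages.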
If high performance and low latency aren't your main requirements, then just use existing solutions.
And if you decide to use RMI, then consider using Java EE with EJBs; that will give you transaction management on top of RMI.
Otherwise, if you need extremely low latency, take a look at the source of existing solutions that use custom protocols on top of TCP.
For example, OpenChord sends serialized Request and Response objects, and Project Voldemort uses custom messages for its few operations.

Java - Framework to "APIzize" any class and make it available on TCP?

I have an application where both the backend and the frontend are built in Java. The backend provides some functionality, like accessing the DB, while the frontend, built in Struts, calls those functions.
I'm looking for a way to make any Java class easily callable over TCP. Ideally, in my mind, this could be done by extending a specific class, let's say:
public class MyClass extends ThisIsAnAPI
thereby making all the public methods callable over a network protocol.
With such a framework the frontend could easily be implemented in other languages, like Ruby (on Rails), by making network requests to the backend APIs written in Java and exposed over TCP.
Any tips?
If you are likely to go to a JavaScript/Ajax UI, then I would take the time to expose the backend as RESTful services. Using JAX-RS this is a matter of a few lines of code, some annotations, and an interface.
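As a rough sketch of the JAX-RS approach (the resource path and the CustomerDao/Customer classes below are placeholders for whatever backend classes already exist):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

class Customer {
    public long id;
    public String name;
    public Customer() { }
    public Customer(long id, String name) { this.id = id; this.name = name; }
}

class CustomerDao {
    Customer findById(long id) { return new Customer(id, "example"); }
}

// Annotations expose the existing backend logic over HTTP as JSON.
@Path("/customers")
public class CustomerResource {

    private final CustomerDao dao = new CustomerDao();

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Customer find(@PathParam("id") long id) {
        return dao.findById(id);
    }
}

A Ruby on Rails (or any other) frontend can then consume the service with an ordinary HTTP/JSON client.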
If you are staying pure Java, it's pretty trivial these days to turn a POJO into a remotely callable EJB: just a couple of annotations.
It may sound like overkill, but in terms of effort and cost (given a free app server such as WebSphere CE or JBoss) it's not that big a deal. However if you don't go for EJBs then you need to look at two big issues:
Security. You've got some TCP-callable services. How sensitive are those services? Do they need authentication and authorisation? You can all too easily open up sensitive databases to the whole company or even the internet.
Resilience and Scaling. How will you manage failure scenarios? EJBs exposed via RMI/IIOP can be clustered, and hence you can scale and deal with errors. If you start with a technology capable of doing that, even if you don't need the functionality right now, you are well placed for the future.
I would start with RMI which is designed to do this. You create an interface which the client uses and the server implements.
Try Hessian, a lightweight binary remoting protocol that also has bindings for several other platforms, so you will get C#/C++/Flash/... for free. I think it is a bit easier to work with than RMI.
If you need more portability in the future, consider exposing POJOs via SOAP/REST (most WS stacks have this ability; few, if any, extra annotations are needed).
You might want to take a look at JMS. It's quite high level and easy to use, but you need to run a message broker. It's a bit of a different architecture to point-to-point communication.
As several people have mentioned RMI: you can look at Spring, which has support for this and which I have used successfully myself. http://static.springsource.org/spring/docs/2.0.x/reference/remoting.html
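A rough sketch of Spring's RMI remoting, using Java configuration rather than the XML shown in the linked docs; BackendService, the port, and the host name are hypothetical, while RmiServiceExporter and RmiProxyFactoryBean are the actual Spring remoting classes (deprecated in recent Spring releases).

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiProxyFactoryBean;
import org.springframework.remoting.rmi.RmiServiceExporter;

// Hypothetical backend interface shared by both sides.
interface BackendService {
    String fetchReport(long id);
}

@Configuration
class RemotingConfig {

    // Server side: export an existing BackendService bean over RMI.
    @Bean
    public RmiServiceExporter backendExporter(BackendService backendService) {
        RmiServiceExporter exporter = new RmiServiceExporter();
        exporter.setServiceName("BackendService");
        exporter.setServiceInterface(BackendService.class);
        exporter.setService(backendService);
        exporter.setRegistryPort(1199);
        return exporter;
    }

    // Client side: a proxy bean that implements the same interface.
    @Bean
    public RmiProxyFactoryBean backendProxy() {
        RmiProxyFactoryBean proxy = new RmiProxyFactoryBean();
        proxy.setServiceUrl("rmi://backend-host:1199/BackendService");
        proxy.setServiceInterface(BackendService.class);
        return proxy;
    }
}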
