I would like to design an application with a core and multiple modules on top of the core. The core would have the responsibility to receive messages from the network, to parse the incoming messages and to distribute the messages to registered modules.
There are multiple types of messages, and some modules may be interested in receiving only some of those types. The modules can execute in parallel, or they can execute sequentially (e.g. when modules are interdependent, with a well-defined order of execution).
Also, it would be great if the modules could be deployed/undeployed even if the core is up.
This is completely new to me; I used to write modular applications, but with the parts wired together statically.
Which direction (i.e. framework, pattern...) should I take for such a design? I don't know if it's relevant to my question, but I should mention that I'll be using Java.
Thanks
You have a very good approach at the architecture level, but it will only pay off when your application layers/tiers run on separate instances, so that you can shut down one module/server while the other parts keep running. The question is: will you run the modules on separate instances?
Secondly, I would suggest building the application core around web services, either REST or SOAP, as that automatically realizes your idea in the spirit of Service-Oriented Architecture (SOA). You get a producer-consumer relationship, you can run the parts on separate instances, and while deploying/undeploying one module the remaining services keep supporting the other client instances.
Using web services will also give you a global information-exchange mechanism that can serve several application views/front ends.
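For the in-process part of the design, before anything is split across instances, the register-and-dispatch core the question describes could look roughly like this. A minimal sketch with illustrative names, not a prescribed framework:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    interface Module {
        void onMessage(Object message);
    }

    class Core {
        private final Map<Class<?>, List<Module>> registry = new ConcurrentHashMap<>();
        private final ExecutorService pool = Executors.newFixedThreadPool(4);

        // Modules can register and unregister while the core is running.
        void register(Class<?> messageType, Module module) {
            registry.computeIfAbsent(messageType, t -> new CopyOnWriteArrayList<>()).add(module);
        }

        void unregister(Class<?> messageType, Module module) {
            List<Module> modules = registry.get(messageType);
            if (modules != null) {
                modules.remove(module);
            }
        }

        // Dispatch a parsed message to every module registered for its type.
        void dispatch(Object message) {
            List<Module> modules = registry.get(message.getClass());
            if (modules == null) {
                return;
            }
            for (Module m : modules) {
                pool.submit(() -> m.onMessage(message)); // parallel; call directly for sequential chains
            }
        }
    }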
I am developing a product using microservices and am running into a bit of an issue. In order to do any work, I need to have all 9 services running on my local development environment. I am using Cloud Foundry to run the applications, but when running locally I am just running the Spring Boot jars themselves. Is there any way to set up a more lightweight environment so that I don't need everything running? Ideally, I would like only the service I am currently working on to be real.
I believe this is a matter of your testing strategy. If you have a lot of micro-services in your system, it is not wise to always perform end-to-end testing at development time -- it costs you productivity, and the setup is usually complex (as you have observed).
You should really think about what it is you want to test. Within one service, it is usually good to decouple the core logic from the integration points with other services. Ideally, you should be able to write simple unit tests for your core logic. If you want to test the integration points with other services, use a mocking library, as in the sketch below (this Spring post on stubs and mocks looks promising: http://spring.io/blog/2007/01/15/unit-testing-with-stubs-and-mocks/).
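For illustration, a hedged sketch of decoupling core logic from an integration point and stubbing it with a mocking library such as Mockito; all class names here are made up:

    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class OrderServiceTest {

        // Integration point with another micro-service (illustrative).
        interface InventoryClient {
            int stockFor(String sku);
        }

        // Core logic under test; it depends on the integration point only via the interface.
        static class OrderService {
            private final InventoryClient inventory;
            OrderService(InventoryClient inventory) { this.inventory = inventory; }
            boolean canFulfil(String sku, int qty) { return inventory.stockFor(sku) >= qty; }
        }

        @Test
        public void fulfilsWhenStockSuffices() {
            InventoryClient inventory = mock(InventoryClient.class); // no real service running
            when(inventory.stockFor("ABC-1")).thenReturn(5);
            assertTrue(new OrderService(inventory).canFulfil("ABC-1", 3));
        }
    }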
If you don't have one already, I would highly recommend setting up a separate staging environment with all micro-services running. Perform all your end-to-end testing there, before deploying to production.
This post from Martin Fowler has a more comprehensive take on micro-service testing strategy:
https://martinfowler.com/articles/microservice-testing
It boils down to the testing technique that you use. Here is my recent answer on another topic that you may find useful: https://stackoverflow.com/a/44486519/2328781.
In general, I think that WireMock is a good choice, for the following reasons:
It has out-of-the-box support in Spring Boot.
It has out-of-the-box support in Spring Cloud Contract, which makes it possible to use a very powerful technique called Consumer Driven Contracts.
It has a recording feature: set up your WireMock instance as a proxy and make requests through it, and it will generate stubs for you automatically based on your requests and responses.
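A minimal sketch of WireMock standing in for one of the nine services; the port, URL and body are made up:

    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

    import com.github.tomakehurst.wiremock.WireMockServer;

    public class FakeUserService {
        public static void main(String[] args) {
            // Stand-in for the real user service: serves canned JSON on port 8089.
            WireMockServer server = new WireMockServer(8089);
            server.start();
            server.stubFor(get(urlEqualTo("/users/42"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"id\": 42, \"name\": \"Jane\"}")));
            // Point the service under development at http://localhost:8089
            // instead of the real user service.
        }
    }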
There are multiple tools out there that let you create mocked versions of your microservices.
When I encountered this exact problem myself I decided to create my own tool which is tailored for microservice testing. The goal is to never have to run all microservices at once, only the one that you are working on.
You can read more about the tool and how to use it to mock microservices here: https://mocki.io/mock-api-microservices. If you only want to run them locally, that is possible using the open-source CLI tool.
This can be solved if your microservices allow passing metadata along with requests.
A good microservice architecture should use central service discovery, and every service should be able to accept a metadata map along with the request payload. Known fields of this map can be interpreted and modified by a service and then passed on to the next service.
The most popular use of per-request metadata is request tracing (i.e. collecting the tree of nodes used to process a request, with timings for every node), but it can also be used to tell the entire system which nodes to use.
Thus the plan is:
register your local node in the dev environment's service discovery
send a request to the entry node of your system along with metadata telling everyone to use your local service instance instead of the default one
the metadata will propagate, so your local node will be called by the dev environment, and it will then pass its results back to the dev environment (see the sketch after this list)
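A sketch of the metadata handling; the key prefix and the Discovery interface are assumptions for illustration, not a real API:

    import java.util.Map;

    class RoutingMetadata {
        static final String OVERRIDE_PREFIX = "X-Route-Override-"; // hypothetical metadata key prefix

        interface Discovery {
            String lookup(String serviceName); // normal service discovery
        }

        // Each service resolves its downstream instance like this, and must also
        // copy all incoming metadata onto every outgoing request so it propagates.
        static String resolve(String serviceName, Map<String, String> metadata, Discovery discovery) {
            String override = metadata.get(OVERRIDE_PREFIX + serviceName);
            return override != null ? override : discovery.lookup(serviceName);
        }
    }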
Alternatively:
use code generation for inter-service communication, to reduce the risk of failures caused by mistakes in hand-written RPC code
resort to integration tests, mocking all client APIs for the microservice under development
fully automate deployment of your whole system to your local machine. You will possibly need to run nodes with reduced memory (which is generally OK, as memory is commonly consumed only under load) or buy more RAM.
One approach would be to use/deploy an app which maps paths/URLs to JSON response files. I personally haven't used it, but I believe http://wiremock.org/ might help you.
For Java microservices, you could try stubby4j, which mocks the JSON responses of other microservices using a Stubby server. If you find that mocking is not enough to cover all the features of your microservices, set up a local Docker environment to deploy the dependent microservices.
I am confused about when it is proper to build an application with multiple entry points, or I guess an application composed of multiple interconnected modules. I have a network application (Netty) as well as a web application (Spring). I can bundle them together, in effect tightly coupling them, or I can modularize them so that they operate independently of each other while still working together to make the application whole.
Is there any specific reason for making an application a single entity vs. multiple entities? Is it "desired" to have a self-contained application (e.g. one main method)?
First of all, asking about the number of main() methods is a bit misleading. You can have several classes with main() methods in a single JAR file after all.
But the question seems to be more about single application vs. multiple applications, or to be more precise: tiers.
It's important to note that this issue is separate from the question of modularity and multi-threading, all of which can be employed in a single tier application just as easily as in a multi-tier application.
The reasons you'd need a multi-tiered application can vary, but here are a few examples:
It is simply part of the requirements: e.g. chat software will usually need a server and a client, because the requirement is to move data between two computers.
Scaling: you need to spread the work to multiple computers to cope with large amounts of data or requests. (This is a typical use case of message queues for example.)
Separation of concerns: this typically happens in "enterprise" systems, where different functions need to be performed in complete isolation, allowing modules to be replaced/restarted on the fly or scaled separately.
Web applications are supposed to have multiple entry points; think of the URL you type that leads to a resource. In fact, in many web application architectures, such as those built on JAX-RS, exposing resource URIs is encouraged. Each entity, as small as one Java bean, has its own entry point. Not sure if this is what you mean, but that's my opinion.
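For instance, a minimal JAX-RS resource, where each annotated method is its own URI-addressable entry point (names illustrative):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/orders")
    public class OrderResource {

        // GET /orders/42 enters the application here, not through a single main().
        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public String byId(@PathParam("id") long id) {
            return "{\"id\": " + id + "}";
        }
    }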
I'm creating an application that relies heavily on dynamic creation/management of various resources like JMS queues, webservice endpoints, JDBC connections... I have a background in Java EE and am currently working on a JBoss 7 server, but I'm finding it increasingly difficult to control these things programmatically. The hardest thing to control seems to be the webservices. I need to be able to generate WSDLs (and XSDs) on the fly and manage the endpoints, SOAP handlers, etc., and the system simply does not seem to be set up for that.
Other application servers don't seem to really offer any groundbreaking solutions so I'm wondering whether perhaps java EE is not the best solution to this particular problem?
Is there an application server that allows you to do just that? Is there another technology that does? Should I just roll a custom solution that integrates all the separate modules (e.g. a jms server, a web server etc...)?
UPDATE
To clarify: most Java EE configuration is accomplished through a mixture of annotations and XML. This, however, assumes that you have a POJO and/or a jar/war/... per resource.
Suppose I have a @WebServiceProvider bean which can be reused for multiple input/output combinations (for example because it dynamically redirects the content). I need to be able to deploy a new "instance" of the provider on the fly. This means I do not want to duplicate the code and redeploy it; I just want to take that one existing bean on the classpath and deploy it multiple times with different configuration settings. This also means I need to manage the WSDL dynamically. The end result should be a webservice that works pretty much like a standard webservice on the application server, with the necessary integrated security, SOAP handlers, ...
I imagine that at some point in the application server code, there must be a class like "WebserviceManager" with a method like "createWebservice(...)" that is actually used by the deployment module whenever it discovers a webservice annotation. I want access to that method, and to similar methods for creating JDBC connections, JMS queues, ...
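(For reference, the closest standard API to this idea is javax.xml.ws.Endpoint, which can publish the same @WebServiceProvider bean several times with different settings. A minimal sketch of the idea; it uses the standalone JAX-WS runtime bundled with the JDK up to Java 8, rather than the JBoss-integrated deployment the question targets, so it lacks the container's security and handler-chain integration:)

    import java.io.StringReader;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Endpoint;
    import javax.xml.ws.Provider;
    import javax.xml.ws.Service;
    import javax.xml.ws.ServiceMode;
    import javax.xml.ws.WebServiceProvider;

    @WebServiceProvider
    @ServiceMode(Service.Mode.PAYLOAD)
    class RedirectingProvider implements Provider<Source> {
        private final String target; // per-instance configuration
        RedirectingProvider(String target) { this.target = target; }

        @Override
        public Source invoke(Source request) {
            // Dynamically route/transform the payload; stubbed out here.
            return new StreamSource(new StringReader("<routedTo>" + target + "</routedTo>"));
        }
    }

    public class DynamicEndpoints {
        public static void main(String[] args) {
            // One bean class, deployed twice with different configuration:
            Endpoint a = Endpoint.publish("http://localhost:8080/ws/a", new RedirectingProvider("backendA"));
            Endpoint b = Endpoint.publish("http://localhost:8080/ws/b", new RedirectingProvider("backendB"));
            // a.stop(); b.stop(); // undeploy on the fly
        }
    }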
You can use OSGi for this kind of scenario. It is perfect for hot deployment of various modules.
In my environment I need to schedule long-running tasks. I have application A, which just shows the client the list of currently running tasks and allows scheduling new ones. There is also application B, which does the actual hard work.
So app A needs to schedule a task in app B. The only thing they have in common is the database. The simplest thing to do seems to be adding a table with a list of tasks and having app B query that table every once in a while and execute newly scheduled tasks.
Yet it doesn't seem to be the proper way of doing it. At first glance, it seems that the tool for the job in an enterprise environment is a message queue: app A sends a message with the task description to the queue, and app B reads a message from the queue and executes the task. Is it possible in such a case for app A to get the status of all the scheduled tasks (persistent queue?) without creating a table like the one mentioned above, to which app B would write the status of completed tasks? Note also that there may be multiple instances of app A, and each of them needs to know about all tasks of all instances.
The disadvantage of the 'table approach' is that I need to have DB polling.
The disadvantage of the 'message queue approach' is that I'm introducing a new communication channel into the infrastructure (yet another thing that can fail).
What do you think? Any other ideas?
Thank you in advance for any advice :)
========== UPDATE ==========
Eventually I decided on the following approach. There are two sides to this problem: one is communication between A and B; the other is getting information about the tasks.
For communication the right tool for the job is JMS. For getting data the right tool is the database.
So I'll have app A add a new row to the 'tasks' table describing a task (I can query this table later on to get the list of all tasks). Then A will send a message to B via JMS just to say 'you have work to do', as sketched below. B will do the work and update the task status in the table.
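A minimal JMS 2.0 sketch of that flow on app A's side; the JNDI names are assumptions:

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.Queue;
    import javax.naming.InitialContext;

    public class TaskScheduler {

        // Called by app A right after INSERTing the task row; app B reads the id,
        // does the work and UPDATEs the row's status.
        public void schedule(long taskId) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // assumed JNDI name
            Queue queue = (Queue) ctx.lookup("jms/TaskQueue");                              // assumed JNDI name
            try (JMSContext jms = cf.createContext()) {
                // The message is just a nudge: "you have work to do", plus the task id.
                jms.createProducer().send(queue, String.valueOf(taskId));
            }
        }
    }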
Thank you for all responses!
You need to think about your deployment environment, both as it is now and as it is likely to change in the future.
You're effectively looking at two problems, both of which can be solved in several ways, depending on how much infrastructure you are able to obtain and are willing to introduce; but it's also important to "right-size" your design for your problems.
While you're right to think about using both databases and messaging, you need to consider whether these are overkill for your domain, and only you and others who know your domain can really answer that.
My advice would be to look at what is already in use in your area. If you already have database infrastructure that you can build on, then monitoring task activity and scheduling jobs in a database is not a bad idea. However, if you would have to run your own database, get new hardware, or don't have sufficient support resources, then introducing a database may not be a sensible option, and you could look at a simpler but potentially more fragile approach of having your processes write files to schedule jobs and report on tasks.
At the same time, don't look at the introduction of a DB or JMS as inherently error prone. Correctly implemented they are stable and proven technologies that will make your system scalable and manageable.
As @kan says, exposing a web service interface is also a useful option.
Another option is to make B a service, e.g. expose control and status interfaces as REST or SOAP endpoints. In this case A is just a client application of B. B stores its state in the database; A is a stateless application which merely communicates with B.
BTW, using Spring Remoting you could expose an interface and use any of JMS, REST, SOAP or RMI as the transport layer, which could be changed later if necessary (see the sketch below).
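A sketch of that idea with classic Spring Remoting, assuming an RMI transport and illustrative names (note that Spring Remoting is deprecated in recent Spring versions):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.remoting.rmi.RmiServiceExporter;

    // The interface app A programs against; app B provides the implementation.
    interface TaskService {
        long schedule(String description);
        String statusOf(long taskId);
    }

    @Configuration
    class TaskServiceServerConfig {

        // Swapping this exporter (RMI, HTTP invoker, JMS, ...) changes the
        // transport without touching the TaskService interface.
        @Bean
        public RmiServiceExporter taskServiceExporter(TaskService taskService) {
            RmiServiceExporter exporter = new RmiServiceExporter();
            exporter.setServiceName("TaskService");
            exporter.setServiceInterface(TaskService.class);
            exporter.setService(taskService);
            exporter.setRegistryPort(1199);
            return exporter;
        }
    }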
You already have messaging (JMS) in enterprise architecture. Use it; it is available in Java EE containers like GlassFish. Messages can be made persistent, to be sure they will be delivered even if the server reboots while they are in the queue. And you do not even need to care how all of this is implemented.
There can be a couple of approaches here. First, as @kan suggested, have app B expose a web service for the interactions. This will allow heterogeneous clients to communicate with app B, and seems a good approach. App B can internally use whatever persistent store it deems fit.
Alternatively, you can have app B expose a management interface via JMX and have applications like app A talk to app B through this management interface. Implementing task submission, retrieving statistics, etc. would be simpler, and you can additionally leverage JMX notifications for real-time updates on task submissions and completions. The downside is that this would be a Java-specific solution, so supporting heterogeneous clients would remain a distant dream.
I have five separate Java processes, which run as business-logic modules. I would like to develop a process-management application where I can start/ping/monitor/message the child processes.
It may also share resources such as a cache with the child processes, over REST web services or, in the worst case, RMI calls, since those require additional overhead.
I was inclined toward a webservice-based API, which would keep sending information about the business logic running within the processes. The processes can be data-churning, computation, or notification engines.
Any ideas?
One option is to use JMX and publish one or more MBeans. Oracle has documentation on it. You can use it to request information from the processes, or to send them signals to change their behavior.
The bare-bones outline of what you would do: decide which methods you need to expose remotely in each of your child processes; have each of them define an interface with those methods, and then an implementation of that interface; then register those implementations with the MBeanServer, as in the sketch below.
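A minimal sketch of those three steps; all names are illustrative:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // 1. The management interface; by JMX convention its name ends in "MBean".
    interface WorkerMBean {
        long getProcessedCount(); // attribute: readable in JConsole
        void pause();             // operation: invokable in JConsole
    }

    // 2. The implementation, named after the interface minus the suffix.
    class Worker implements WorkerMBean {
        private volatile long processed;
        private volatile boolean paused;
        public long getProcessedCount() { return processed; }
        public void pause() { paused = true; }
    }

    public class ChildProcess {
        public static void main(String[] args) throws Exception {
            // 3. Register with the platform MBeanServer so JConsole (or your own
            //    management application) can find it.
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(new Worker(), new ObjectName("com.example:type=Worker"));
            Thread.currentThread().join(); // keep the child process alive
        }
    }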
The advantage of this approach is that you will immediately get a bare-bones 'management application', because you can open JConsole against your processes and use the MBeans. If you then wish to create a separate application that will more cleanly present your data, you can do so at your leisure, without changing the child processes.
This approach does not really get you anywhere with 'sharing a cache', but sharing a cache between processes (or machines) should really be a separate question (I think).