I have an application with a modular architecture. The modules communicate with each other through an RPC protocol. The application is built on top of a custom framework that uses Spring and Hibernate internally. Each module has a three-layer architecture: an application layer that exposes an API to other modules and to the UI, a service layer that defines the business logic and handles transactions, and a data access layer that accesses/manages data in the DB.
Additionally, each module of this application is deployed as a separate service (WAR) on a dedicated port of the application server, and each module has its own DB connection pool. We are currently working on performance, and there is the following scenario where performance should be improved:
Given: Module1.Component1 is marked as @Transactional
And: Component1.method1() makes a call to Module2.Component1
And: Module2.Component1 responds in:
1 sec;
30 sec;
1 min;
does not respond;
responds with an exception.
There is the following load on Module1.Component1:
1 request/sec
10 requests/sec
With high load on Module1 and a slow response from Module2, both JDBC connection pools fill up and both modules become unable to react to further requests.
As an idea to improve the performance of Module1, the following change could be made:
Extract the external call to the other module out of Module1's open transaction.
This will fix the performance issue, and Module1 will behave appropriately even if Module2's responses are very slow and there is a huge volume of requests on Module1. The fix can be done in the service layer by moving the @Transactional annotation so that the external call happens outside the transaction. The question is how to do this correctly so that similar issues are avoided in the future when developers make similar changes. Could you please advise how this can be restricted through the architecture, i.e. how to force RPC remote calls to be made outside of a DB transaction in future development?
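For illustration only, here is a rough sketch of the kind of split described above, with hypothetical class and method names. The DB work lives in a separate Spring bean so the @Transactional proxy applies, and no JDBC connection is held while waiting for Module2:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical RPC client for Module2.
interface Module2Client {
    String doRemoteWork(String input);
}

@Service
public class Component1Service {

    private final Module2Client module2Client;
    private final Component1TxService txService;

    public Component1Service(Module2Client module2Client, Component1TxService txService) {
        this.module2Client = module2Client;
        this.txService = txService;
    }

    // Intentionally NOT @Transactional: the slow RPC call happens before any
    // connection is taken from the pool.
    public void method1(String input) {
        String remoteResult = module2Client.doRemoteWork(input);
        txService.persistResult(remoteResult);
    }
}

// Separate bean so the transactional proxy is not bypassed by a self-invocation.
@Service
class Component1TxService {

    // Only the short-lived DB work runs inside the transaction.
    @Transactional
    public void persistResult(String remoteResult) {
        // ... DAO calls using the RPC result ...
    }
}
```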
I am new to Java EE for the web, and I am creating a Java version of a REST microservice API for which I already have Delphi and PHP versions.
All of the API's configuration is stored in an Oracle Database and is compatible with all versions of this API architecture.
My goal is to load all of the configuration only once, from the database into the API, rather than on every request, to keep memory and CPU usage low. The only time the API needs to reload the configuration is when I change some part of it (the control panel will then send a request telling the API to reload the config from the DB). The configuration is read-only for the API (thread-safe).
I ask those of you who are specialists in Java Servlets: is the right way to do this the following?
Create a POJO for the config structure;
Create a DAO to load this config into the POJO;
Check if the ServletContext does not have "myConfig" in Application Scope;
If it doesn't, instantiate the DAO and write the POJO into the ServletContext as "myConfig";
Always get the config through "myConfig" instead of instantiating the DAO in every Request Scope.
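For illustration, a rough sketch of steps 1-4 using a Servlet 3.0 ServletContextListener so the load happens once at startup; the class names and the DAO body are hypothetical:

```java
import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Hypothetical read-only config POJO (step 1).
class ApiConfig {
    private final String someSetting;
    ApiConfig(String someSetting) { this.someSetting = someSetting; }
    String getSomeSetting() { return someSetting; }
}

// Hypothetical DAO that reads the config from the Oracle DB (step 2).
class ApiConfigDao {
    ApiConfig load() {
        // ... JDBC access to the configuration tables would go here ...
        return new ApiConfig("example");
    }
}

// Loads the config once at startup and stores it in application scope (steps 3-4).
@WebListener
public class ConfigBootstrapListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        ServletContext ctx = sce.getServletContext();
        if (ctx.getAttribute("myConfig") == null) {
            ctx.setAttribute("myConfig", new ApiConfigDao().load());
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}
```

Servlets would then read the attribute (step 5) via `(ApiConfig) getServletContext().getAttribute("myConfig")`, and the control-panel-triggered reload can simply call `setAttribute` again with a freshly loaded instance.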
Will I get better results with that approach, like:
Fewer database connections;
Less memory consumption;
Less CPU load for my Servlet requests; and
More speed on each request?
Note: I am using pure JavaEE with no frameworks, like Spring. (wanna keep it thin)
Obs.: For those who didn't get what I'm talking about, it's like the classic problem case: how to show on the homepage how many users are online right now.
My question is as follows:
I have a service which queries the DB and retrieves some records. It then updates an external system using that information, and in the end it updates the DB again.
Using Spring transactions and the WebLogic JTA transaction manager, I was able, with the sample code below, not to lose any messages in the following cases:
No records are retrieved (these are mandatory for the external system);
the external system returns an error;
the update back to the DB fails.
So in all of the above cases the JMS listener puts the message back on the queue.
My question: is there any better way, using Spring with all its goodies, to manage that? The sample code below explicitly throws a RuntimeException, which I don't think is good design...
I would appreciate your comments.
EDIT:
The queue is polled by the submissionListener MDP, whose configuration is shown below. After the message is consumed, it invokes registerDocument() on the service (another Spring bean). That service invokes the DAO twice and the external system.
Check out Spring's documentation on JmsTemplate and Message Driven POJOs for patterns in the core Spring framework.
Spring Integration models higher-level abstractions related to message-oriented patterns.
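For illustration, a minimal sketch of a message-driven POJO wired through a DefaultMessageListenerContainer; the bean, queue and service names are hypothetical, and a locally transacted session is assumed (with WebLogic JTA the container would be given a transaction manager instead). The point is that an exception propagating out of the listener rolls the session back and the broker redelivers the message, so no RuntimeException needs to be thrown explicitly in business code:

```java
import javax.jms.ConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.jms.listener.adapter.MessageListenerAdapter;

// Hypothetical service interface standing in for the real registerDocument() bean.
interface DocumentService {
    void registerDocument(String payload);
}

// Message-driven POJO: an exception thrown by the DAO or the external-system call
// simply propagates; the transacted session rolls back and the message is redelivered.
public class SubmissionListener {

    private final DocumentService documentService;

    public SubmissionListener(DocumentService documentService) {
        this.documentService = documentService;
    }

    // Default listener method name expected by MessageListenerAdapter.
    public void handleMessage(String payload) {
        documentService.registerDocument(payload);
    }

    // Container wiring, shown programmatically for brevity (normally XML or Java config).
    public static DefaultMessageListenerContainer listenerContainer(ConnectionFactory cf,
                                                                    DocumentService service) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("submissionQueue"); // hypothetical queue name
        container.setSessionTransacted(true);            // rollback => redelivery
        container.setMessageListener(new MessageListenerAdapter(new SubmissionListener(service)));
        return container;
    }
}
```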
I'm reading up on JMX for the first time, and trying to see if it's a feasible solution to a problem we're having in production.
We have an architecture that is constantly hitting a remote web service (managed by a different team on their own servers) and requesting data from it (we also cache from this service, but it's a sticky problem where caching isn't extremely effective).
We'd like the ability to dynamically turn logging on/off at one specific point in the code, right before we hit the web service, where we can see the exact URLs/queries we're sending to the service. If we just blindly set a logging level and logged all web service requests, we'd have astronomically-large log files.
JMX seems to be the solution, where we control the logging in this section of code with a managed bean, and can then set that bean's state (setLoggingEnabled(boolean), etc.) remotely via some manager (probably just the basic HTML adaptor).
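For concreteness, a sketch of what such a managed bean might look like (a plain standard MBean; the names are hypothetical). The code that builds the web-service request would consult isLoggingEnabled() before logging the URL/query:

```java
// Standard MBean: the interface name must be the implementation class name + "MBean".
interface WebServiceLoggingConfigMBean {
    boolean isLoggingEnabled();
    void setLoggingEnabled(boolean enabled);
}

public class WebServiceLoggingConfig implements WebServiceLoggingConfigMBean {

    // volatile so a flag flipped remotely via JMX is visible to request threads
    private volatile boolean loggingEnabled;

    @Override
    public boolean isLoggingEnabled() {
        return loggingEnabled;
    }

    @Override
    public void setLoggingEnabled(boolean enabled) {
        this.loggingEnabled = enabled;
    }
}
```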
My questions are all deployment-related:
If I write the MBean interface and impl, as well as the agent (which register MBeans and the HTML adaptor with the platform MBean server), do I compile, package & deploy those inside my main web application (WAR), or do they have to compile to their own, say, JAR and sit on the JVM beside my application?
We have Dev, QA, Demo and Prod environments; is it possible to have 1 single HTML adaptor pointing to an MBean server which has different MBeans registered to it, 1 for each environment? It would be nice to have one URL to go to where you can manage beans in different environments.
If the answer to my first question above is that the MBean interface, impl and agent all deploy inside your application, then is it possible to have your JMX-enabled application deployed on one server (say, Demo), but to monitor it from another server?
Thanks in advance!
How you package the MBeans is in large part a matter of portability. Will these specific services have any realistic usefulness outside the scope of this webapp? If not, I would simply declare your webapp "JMX Manageable" and build it in. Otherwise, componentize the MBeans, put them in a JAR, put the JAR in WEB-INF/lib and initialize them using a startup servlet configured in your web.xml.
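A rough sketch of that startup-servlet approach, reusing the hypothetical MBean from the question above and registering it against the platform MBean server; the object name is made up, and the servlet would get a <load-on-startup> entry in web.xml:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

// Runs once at deployment (via <load-on-startup> in web.xml) and registers the MBeans.
public class JmxStartupServlet extends HttpServlet {

    @Override
    public void init() throws ServletException {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(new WebServiceLoggingConfig(),
                    new ObjectName("com.example.myapp:type=WebServiceLoggingConfig"));
        } catch (Exception e) {
            throw new ServletException("Failed to register MBeans", e);
        }
    }
}
```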
For the single HTML adaptor, yes, it is possible. Think of it as having Dev, QA, Demo and Prod MBeanServers, and then one master MBeanServer. Your HTML adaptor should render the master. Then you can use the OpenDMK cascading service to register cascades of Dev, QA, Demo and Prod in the master. Now you will see all five MBeanServers' beans in the HTML adaptor display.
Does that answer your third question?
JMX is a technology for remote management of your application, and a situation where, for example, you want to change configuration without a restart is its most appropriate use.
But in your case, I don't see why you would need JMX. For example, if you use Log4j for your logging you could configure a file watchdog and just change the logging to the lowest possible level, i.e. to DEBUG. This does not require a restart, and IMHO that should have been your initial design in the first place, i.e. work around loggers and levels. Right now, it is not clear what you mean and what happens with setLoggingEnabled.
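For reference, a minimal sketch of the Log4j 1.x watchdog mentioned here; the properties-file path and the 30-second check interval are just examples:

```java
import org.apache.log4j.PropertyConfigurator;

public class LoggingBootstrap {

    public static void init() {
        // Re-reads the file every 30 seconds, so lowering a logger to DEBUG in the
        // file takes effect without a restart.
        PropertyConfigurator.configureAndWatch("/etc/myapp/log4j.properties", 30000L);
    }
}
```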
In any case, the managed bean is supposed to be deployed with your application, and if you are using Spring you are in luck, since it offers really nice integration with JMX and you can expose your Spring beans as managed beans.
Finally, when you connect to your process you will see the managed beans running in that JVM, so I am not sure what exactly you mean by point 2.
Anyway, I hope this helps a little.
We've got a Spring based web application that makes use of Hibernate to load/store its entities to the underlying database.
Since it's a backend application, we want to allow not only our UI but also 3rd-party tools to manually initiate DB transactions. That's why the callers need to:
Call a StartTransaction method and in return get an ID that they can refer to
Do all DB-relevant calls (e.g. creating, modifying, deleting) referring to this ID, to make clear which operations belong to the started transaction
Call the CommitTransaction method to signal to our backend that the transaction can be committed now (or in the negative case RollbackTransaction will be called)
So, keeping in mind that all database handling will be done internally via the Java persistence annotations, how can we open up transaction management to our UI, which behaves like a 3rd-party application that has no direct access to the backend entities and deals only with data transfer objects?
From the Spring Reference: Programmatic transaction management
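For context, a minimal sketch of the programmatic transaction management that reference describes; the class name is hypothetical, and note that this still demarcates one transaction around a single backend call rather than spanning several client requests:

```java
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

public class BackendTransactionHelper {

    private final TransactionTemplate txTemplate;

    public BackendTransactionHelper(PlatformTransactionManager txManager) {
        this.txTemplate = new TransactionTemplate(txManager);
    }

    // Runs a unit of work inside a programmatically demarcated transaction.
    // A runtime exception thrown from the callback rolls the transaction back.
    public void inTransaction(final Runnable work) {
        txTemplate.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                work.run();
            }
        });
    }
}
```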
I think this can be done but it would be a royal pain to implement/verify. You would basically require a transaction manager that is not bound by the "per-thread transaction" definition but spans multiple invocations from the same client.
JTA + Stateful session beans might be something you would want to have a look at.
Why don't you build services around your 'back end application', for example a SOAP interface or a REST interface?
With this strategy you can manage your transactions in the backend.
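As a rough illustration of that suggestion (using current Spring MVC annotations; the endpoint, DTO and service names are all hypothetical): the client sends DTOs over HTTP, and each request maps onto one transaction that begins and ends entirely inside the backend.

```java
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical DTO sent by the UI or a 3rd-party tool.
class OrderDto {
    public String customerId;
    public int quantity;
}

// Hypothetical service whose method is the transactional unit of work.
interface OrderService {
    @Transactional
    void createOrder(OrderDto dto);
}

// Coarse-grained REST endpoint: one HTTP request == one backend-managed transaction.
@RestController
public class OrderEndpoint {

    private final OrderService orderService;

    public OrderEndpoint(OrderService orderService) {
        this.orderService = orderService;
    }

    @PostMapping("/orders")
    public void createOrder(@RequestBody OrderDto dto) {
        orderService.createOrder(dto); // commit/rollback decided entirely in the backend
    }
}
```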
I am working on a project that is developing a webapp with a 100% Flex UI that talks via BlazeDS to a Java backend running on an application server. The team has already created many unit tests, but has only created integration tests for the persistence module. Now we are wondering about the best way to integration test the other parts. Here are the Maven modules we have now; I believe this is a very typical design:
Server Side:
1) a Java domain module -- this only has unit tests
2) a Java persistence module (DAO) -- right now this only has integration tests that talk to a live database to test the DAOs, nothing really to unit test here
3) a Java service module -- right now this only has unit tests
Client Side:
4) a Flex services module that is packaged as a SWC and talks to the Java backend -- currently this has no tests at all
5) a Flex client module that implements the Flex UI on top of the Flex services module - this has only unit tests currently (we used MATE to create a loosely coupled client with no logic in the views).
These 5 modules are packaged up into a WAR that can be deployed in an application server or servlet container.
Here are the 4 questions I have:
Should we add integration tests to the service module, or is this redundant given that the persistence module has integration tests and the service module already has unit tests? It also seems that integration testing the Flex services module is a higher priority and would exercise the services module at the same time.
We like the idea of keeping the integration tests within their modules, but there is a circularity with the Flex services module and the WAR module. Integration tests for the Flex services module cannot run without an app server, and therefore those tests will have to come AFTER the WAR is built, yes?
What is a good technology to integration test the Flex client UIs (e.g. something like Selenium, but for Flex)?
Should we put the final integration tests in the WAR module or create a separate integration testing module that gets built after the WAR?
Any help/opinions is greatly appreciated!
More a hint than a strong answer, but maybe have a look at fluint (formerly dpUInt) and the "Continuous Integration with Maven, Flex, Fluint, and Hudson" blog post.
First off, just some clarification. When you say "4) Flex services module packaged as a SWC", you mean a Flex services library that, I gather, is loaded as an RSL. It's an important difference from writing the services as a runtime module, because the latter could (and typically would) instantiate the services controller itself and distribute the service connection to the other modules. Your alternative, simply a library you build into each module, means they all create their own instance of a service controller. You're better off putting the services logic into a module that the application can load prior to the other modules, and which manages the movement of services between them.
Eg.
Application.swf - starts, initialises IoC container, loads Services.swf, injects any dependencies it requires
Services.swf loads, establishes connection to server, manages required service collection
Application.swf adds managed instances from Services.swf into its container (using some form of contextual awareness so as to prevent conflicts)
Application.swf loads ModuleA.swf, injects any dependencies it requires
ModuleA.swf loads, (has dependencies listed that come from Services.swf injected), uses those dependencies to contact services it requires.
That said, sticking with your current structure, I will answer your questions as accurately as possible.
What do you want to test in integration? That your services are there and returning what you expect, I gather. As such, if you are using Remote Objects in BlazeDS, then you could write tests to ensure you can find the endpoint, that the channels can be found, that the destination(s) exist, and that all remote methods return as expected. The server team are testing the data store (from them to the DB and back), but you are testing that the contract between your client and the server still holds. This contract covers any assumptions, such as Value Objects returned in payloads, remote methods existing, etc.
(See #4 below.) The tests should be within their module; however, I would say here that you really should have a module for the services (instead of a library, as I suggested above). Regardless, yes, still deploy the testing artifacts to a local web server (using Jetty or some such) and ensure the integration-tests goal depends on the WAR packaging you use.
I find some developers interchange UI/functional testing with integration testing. Whilst you can indeed perform the two together, there is still room for automated integration tests in Flex where a web server is loaded up and core services are checked to ensure they exist and return what is required. For the UI/functional tests, Adobe maintains a good collection of resources: http://www.adobe.com/products/flex/related/#ftesting. For integration tests, see what I mentioned above.
Integration tests should have their own goal that depends on the packaged WAR project.