We have multiple web apps running in the same container (Tomcat) that don't interact with each other but share the same data model. Some basic data access operations are used in several of the web apps, and of course we don't want that code duplicated between them.
In this case, is it better to build a library that provides the common functions, or to expose them as a web service?
With the library the user would have to provide the data source to access the database while the web service would be self-contained plus have its own logging.
My question is similar to this SO question, but performance isn't a concern - I think working with a web service on the same container will more than meet our needs. I'm interested to know whether there's a standard way to approach this problem and whether one way is better than the other - I'm sure I haven't considered all the factors.
Thank you.
I would make them a library. This avoids the performance hit you would incur from network traffic and, in general, makes the shared code easier to reach (a library can't go 'down' the way a web service can). If the applications using this library otherwise don't require a network connection, you will be able to free yourself of network connectivity constraints entirely.
If you think you may want to expose some functionality of this library to your users, you should consider making a webservice around this library.
If it is just a model with some non-persistent operations (side-effect-free calculations, etc.), I'd use a JAR library. If it is more like a service (DB/network/... operations), I'd create a separate web service. If you have strong performance requirements, a local library is the only solution.
Also, you can code against interfaces and swap the implementation once it becomes clear which approach to use.
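To make that interface-first approach concrete, here is a minimal sketch (all names hypothetical): the webapps depend only on the interface, and the backing implementation - local JAR or, later, a web-service client - is chosen at wiring time.

```java
import java.util.Map;

// Hypothetical names throughout. Webapps code against this interface only.
interface CustomerLookup {
    String findName(int id);
}

// Library-style implementation: the caller supplies its own data source
// (simplified here to a Map; in practice it would be a javax.sql.DataSource).
class LocalCustomerLookup implements CustomerLookup {
    private final Map<Integer, String> data;

    LocalCustomerLookup(Map<Integer, String> data) {
        this.data = data;
    }

    @Override
    public String findName(int id) {
        return data.get(id);
    }
}
```

A web-service-backed implementation of the same interface could be dropped in later without touching any caller code.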
A web service will certainly have its own share of overhead, both in terms of CPU and of codebase. If you don't want to duplicate the same JAR in every project, you can consider moving it to the server's shared lib directory, so that once it is updated every webapp gets the change. But this approach has a major drawback too: suppose you make a non-backward-compatible change in the model JAR and update one webapp to use the newer model - you will then have to update all the other webapps to adapt to the changes in the common JAR, because you can't run multiple versions from the same server lib. Alternatively, you can package the appropriate version of the common JAR in every webapp, but then even a minor change in the common (model) JAR forces you to repackage and redeploy all the webapps.
I've created my first Play application. Which is the most suitable deployment method for production? Should I copy the whole project to the production server and run play start, or should I make a WAR out of my application and deploy it in Tomcat/JBoss? Which is the recommended way? I'm getting confused comparing it to its Rails-like behavior. Note that this is supposed to be a big-data application and it may also serve heavy request loads later on, so we are thinking about scalability, availability, and performance aspects too. The application is to be deployed in a cloud.
Thanks.
As others have stated, using the dist command is the easiest way to deploy Play for a one-off application. However, to elaborate, I have here some other options and my experience with them:
When I have an app that I update frequently, I usually install Play on the server and perform updates through Git. Doing so, after every update, I simply run play stop (to stop the running server), sometimes I then run play clean to clear out any potentially corrupted libraries or binaries, then I run play stage to ensure all prerequisites are present and to perform compilation, and then finally play start to run the server for the updated app. It seems like a lot, but it is easy to automate via a quick bash script.
Another method is to deploy Play behind a front-end web server such as Apache, Nginx, etc. This is mostly useful if you want to perform some sort of load balancing, but not required as Play comes bundled with its own server. Docs: http://www.playframework.com/documentation/2.1.1/HTTPServer
Creating a WAR archive using the play2war plugin is another way to deploy, but I wouldn't recommend it unless you are handing the app to someone who already has a major infrastructure built on the servlet containers you mentioned (as many large companies do). Using a servlet container adds a level of complexity that Play is designed to remove (hence the integrated server). There are no notable performance gains that I am aware of using this method over the two previously described.
Of course, there is always play dist, which creates the package for you; you upload it to your server and run play start from there. This is probably the easiest option. Docs: http://www.playframework.com/documentation/2.1.1/ProductionDist
For performance and scalability, the Netty server in Play will perform anywhere from very adequately to exceptionally for what you require. Here's a reputable link showing Netty with the fastest performance of all frameworks, and a "stock" Play app coming in somewhere in the middle of the field but way ahead of Rails/Django: http://www.techempower.com/blog/2013/04/05/frameworks-round-2/.
Don't forget, you can always change your deployment architecture down the road to run behind a front-end server as described above if you need more load balancing and such for availability. That is a trivial change with Play. I still would not recommend the WAR deployment option unless, like I said, you already have a large installed base of servlet containers in use that someone is forcing you to serve your app with.
Scalability and performance also depend heavily on other factors, such as your use of caching, the database configuration, use of concurrency (which Play is good at), and the quality of the underlying hardware or cloud platform. For instance, Instagram and Pinterest serve millions of people every day on a Python/Django stack that has mediocre performance by all popular benchmarks. They mitigate that with lots of caching and high-performing databases (the database is usually the bottleneck in large applications).
At the risk of making this answer too long, I'll just add one last thing. I, too, used to fret over performance and scalability, thinking I needed the most powerful stack and configuration around to run my apps. That just isn't the case any more unless you're talking like Google or Facebook scale where every algorithm has to be finely tuned as it will be bombarded a billion times every day. Hardware (or cloud) resources are cheap but developer/sysadmin time isn't. You should consider ease of use and maintainability for deployment of your app over raw performance comparisons, even though in the case of Play the best performing deployment configuration is arguably the easiest option as well.
You don't need to use Play's console for running the application; it consumes extra resources, and its main goal is fast relaunching during the development stage.
The best option is using the dist command, as described in the docs. Thanks to this, you don't even need to install Play on the target machine: dist creates a ready-to-use stand-alone application containing all required elements (including the built-in server, so you don't need to deploy a WAR in any container).
If you are planning to use a cloud, you should also check offerings such as Heroku or CloudBees, which let you deploy your application just by pushing changes to a Git repository - a very comfortable workflow. Check the documentation's home page and scroll down to the "Deploying to..." links for more details.
In our system we have a legacy standalone Java application which we are trying to make available to the new webapps we are developing, all running together in a server (e.g. Tomcat).
To keep requests to this app lightweight, we thought about making them directly within the same JVM using JNDI instead of developing a web service interface.
I would like to start this application's environment in some webapp context, make an API available to the other webapps, and invoke the interfaces' methods.
I've not been able to bind these objects via JNDI in Tomcat's read-only Context without adding the app to the common lib, where I run into more problems due to incompatibilities between dependency versions. Maybe the best solution is to deploy these interfaces as EJBs, in which case I'd use a Java EE server instead of a servlet container. Or maybe I should use some other framework such as Camel.
Thanks in advance; any suggestions will be helpful.
I would suggest wrapping your legacy Java interfaces in REST. When you expose them as REST APIs, they become available to any client, not only Java. Also, you don't need an application server for that; all you need is a JAR for your REST implementation of choice.
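As a minimal sketch of such a wrapper, the JDK's built-in com.sun.net.httpserver is enough to put an HTTP endpoint in front of a legacy call (all names here are hypothetical; a real project would more likely use JAX-RS or Spring MVC on top of the same legacy calls):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class LegacyRestWrapper {
    // Stand-in for a call into the legacy standalone application.
    static String legacyLookup(String id) {
        return "record-" + id;
    }

    // Starts an HTTP endpoint exposing the legacy call; port 0 picks a free port.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/api/records", exchange -> {
            String query = exchange.getRequestURI().getQuery();       // e.g. "id=42"
            String id = query == null ? "" : query.replaceFirst("^id=", "");
            byte[] body = legacyLookup(id).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

Any HTTP-capable client - another webapp, a script, a mobile app - can then call the legacy functionality without sharing a JVM or a classpath with it.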
From a performance perspective: theoretically JNDI should be faster, but in the real world the difference in performance becomes significant ONLY for very, very performance-intensive applications.
However, if performance is your primary requirement then wrap your legacy interfaces in EJBs.
Manual JNDI/RMI lookups are going to be the fastest, BUT - and this is a rather big but - unless you are well experienced in network programming and multithreading, I would advise you to steer clear of that and use a container. There are a lot of nitty-gritty details that the container takes care of, leaving you free to concentrate on implementing your business logic.
I am planning on building a GWT app that will be deployed to GAE. In addition to the normal (GWT) web client, the server-side code will service requests from other clients besides just the web app. Specifically, it will host a RESTful API that can be hit from any HTTP-compliant client, and it will also service requests from native apps on iOS and Android.
If my understanding of GWT is correct, it's your job to code up both the client-side code (which includes the AJAX requests your app makes back to the server) as well as the server-side request handlers.
This got me thinking: why do I need to package the web client and the web server inside the same WAR? This forces me to (essentially) re-deploy the client-side code every time I want to make a change to the backend. Probably not a big deal, but if I don't have to, I'd prefer to honor "separation of concerns".
So I ask: is there a way to deploy an essentially Java-less WAR on GAE such that it just serves pure HTML/JS/CSS back to any clients that use it, and to then deploy the server side in its own WAR and somehow link the two up? Thanks in advance!
The WAR is just for the server side. It includes the client-side classes needed for serializing objects passed between client and server: obviously, both sides need implementations of the same objects in order to handle those objects.
I don't think it will save you any effort or development time to separate the two concerns, but if you really want to, then you can rework your client/server interaction using something other than GWT-RPC. For example, JSON. See https://developers.google.com/web-toolkit/doc/latest/DevGuideServerCommunication#DevGuideHttpRequests for discussion of your options if you want to go down that road.
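If you do switch to plain JSON over HTTP, the same endpoint can serve the GWT client, the native iOS/Android apps, and any other HTTP client alike. A tiny hand-rolled sketch (hypothetical DTO fields; a real project would more likely use a library such as Gson or Jackson):

```java
// Hypothetical example: serializing a simple DTO to JSON by hand, so one
// endpoint can serve GWT, native mobile, and any other HTTP client.
public class CustomerJson {
    public static String toJson(int id, String name) {
        return "{\"id\":" + id + ",\"name\":\"" + escape(name) + "\"}";
    }

    // Minimal escaping for backslashes and quotes; a real serializer does more.
    private static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }
}
```

The trade-off versus GWT-RPC is that you give up automatic Java-object serialization in exchange for a client-neutral wire format.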
No, AFAIK you cannot do a partial update in GAE, i.e. you cannot upload one part of the project to a GAE instance and then upload another part in a separate step (and thereby separate the HTML/JS/CSS from the Java classes).
Hopefully this is what you are looking for.
Finally, the main artifact that you want to deploy might be an EAR file, which you can specify in the main pom.xml.
Most of the Java web application we run in our shop for software development purposes need to have some kind of an APP_HOME directory created where the web application can do work. The applications I am thinking of here are things like Hudson, Nexus, Confluence, JIRA, etc. Perhaps these applications are special because they are for software development (with Confluence perhaps the notable exception).
However, in our own web application we are striving to avoid this requirement, and thus save all the configuration information in a database, access to which can be provided via a JNDI data source and/or entity manager factory. NOTE: our application does NOT have a requirement to do any heavy-duty file management.
What are the pros/cons of having an enterprise web application that uses the filesystem for its work? Is this typical?
Putting configuration in files makes it easy to put into version control; logging out to files is also pretty darn useful.
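For example, file-based configuration can be as simple as a .properties file kept under version control and loaded at startup - a sketch with made-up keys:

```java
import java.io.IOException;
import java.io.Reader;
import java.util.Properties;

// Sketch: load key=value configuration from a file kept in version control.
// The Reader would typically come from Files.newBufferedReader(configPath).
public class AppConfig {
    public static Properties load(Reader source) throws IOException {
        Properties props = new Properties();
        props.load(source);
        return props;
    }
}
```

Diffing and reviewing such a file in version control is trivial, which is much harder to get when configuration lives in database rows.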
File systems are really, really good at storing large blobs of data and providing super fast access and manipulation for them (including fine-grained, byte level manipulation). If you need that, you can wrap the file system storage in a JNDI resource... Using a database for these things may have advantages (backup requirements are crystal clear), but the performance will be quite poor compared to direct file system access. That said, unless your app is actually performance limited by how quickly it can interact with the database for those specific activities, it's probably not worth second guessing yourself.
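As an illustration of that fine-grained access: patching a few bytes in the middle of a large blob is trivial and cheap on a file system, whereas the database equivalent typically means rewriting the whole BLOB. A sketch with hypothetical names:

```java
import java.io.RandomAccessFile;
import java.nio.file.Path;

// Sketch: overwrite bytes at an arbitrary offset without touching the rest
// of the file - the kind of operation databases handle far less gracefully.
public class BlobStore {
    public static void patch(Path file, long offset, byte[] bytes) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            raf.seek(offset);
            raf.write(bytes);
        }
    }
}
```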
I've just spent the last two days reading up all the OSGi stuff I can get my hands on and I finally think I've got my head around it.
I'm now trying to integrate it with an existing application for many reasons such as 3rd party plugins, automatic updates, not to mention that SOA just makes me happy.
I now have a decision I'm struggling to make, which is whether:
My entire application should become an OSGi bundle installed by default in the container; or
My application should launch an embedded OSGi container and interact with it for all the plugged services.
I'd prefer 1, as this lets me update the application easily and the architecture would be consistent. Of course I expect to have to refactor the application into many smaller bundles. However 2 makes things much easier in the short term, but will become awkward in the future.
For option 1) you really don't want your whole application in one bundle - you would lose all the benefits of OSGi - but really that depends on the size of your application.
It really depends where you want to run the application and which tasks you want it to perform. You probably also want some kind of remoting to access the exposed services.
In option 1) you need to enable some kind of HTTP/servlet bundle (a servlet bridge exists for this).
In option 2) your application can run inside an application server, so you don't have to worry about that.
The first question you want to ask yourself is about the operational environment. Who is going to run the application? Do they need/want to be trained on OSGi? Are they more comfortable with the J2EE stack?
I think the best option for you is to keep your options open: there is no real difference between 1) and 2) other than what starts the OSGi framework - either your code or the framework's launcher. Your application itself, i.e. the bundles constituting your application, will be exactly the same.
My advice would be not to worry too much about the OSGi runtime to start with, but to start on OSGi development - nothing stops you from developing "OSGi-style" and running in a standard JRE environment.
I think you want to go with option 1, and have your application consist of a set of bundles inside of an (mostly out-of-the-box) OSGi container.
It will improve modularity of your own code. You may even find that some parts of it can provide services of use outside of the original application.
It is much easier to use other bundles from inside of OSGi, than from the host application. Because the host application cannot see the bundles' classes (and the bundles can only see what you explicitly expose from the host), you have to set up a pretty convoluted classpath or resort to reflection to call bundles from outside the container.
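For illustration, calling a class your code cannot see at compile time ends up looking roughly like the following (the "hidden" class here is a stand-in for one that would live inside the container):

```java
import java.lang.reflect.Method;

public class ReflectiveCall {
    // Stand-in for a class normally visible only inside the OSGi container.
    public static class HiddenService {
        public String greet(String who) {
            return "Hello, " + who;
        }
    }

    // Invoke a single-String-argument method on an object of unknown type.
    public static Object invoke(Object target, String method, String arg) throws Exception {
        Method m = target.getClass().getMethod(method, String.class);
        return m.invoke(target, arg);
    }
}
```

Workable, but every call site loses compile-time type checking, which is exactly why staying inside the container (option 1) tends to be simpler.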
So I'd say that even in the short run, option 1 is probably easier.
Also, I agree with Patrick's assertion that the bulk of your code does not need to care if it runs in OSGi or in a plain JVM. Especially when using Declarative Services and such the need to use OSGi interfaces and mechanisms from your code is greatly reduced: You just add a few descriptor files to the jar's META-INF.
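Those descriptor files are small. A sketch (hypothetical names) of a Declarative Services component descriptor, shipped inside the jar and referenced from the bundle manifest via a Service-Component header, might look like this:

```xml
<!-- Hypothetical component: publishes CustomerLookupImpl under the
     CustomerLookup interface in the OSGi service registry. -->
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
               name="com.example.customerlookup">
  <implementation class="com.example.internal.CustomerLookupImpl"/>
  <service>
    <provide interface="com.example.CustomerLookup"/>
  </service>
</scr:component>
```

The implementation class itself stays plain Java with no OSGi imports; the runtime reads the descriptor and does the registration for you.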
I would rather go with option 2.
Inherently, your application is not a bundle but an application.
If you want the OSGi value-add, spawn the OSGi container from within your application.
That way, if at a future date you decide to move away from OSGi, you can do so simply.
Have you looked at the Spring Application server? Doesn't this allow you to manage this stuff?
I would definitely recommend 1 - the app should become OSGi bundle(s), and not only because of easy updating. If half of your code is in the OSGi framework and half is outside, you will have to construct a bridge for the communication between the two halves; you could also have issues with class visibility.
There are also many benefits from 1, and it is not so difficult to achieve. What I would recommend is the following:
Separate the application in as many modules as it seems logical to you.
You are not forced to have many modules - OSGi can as easily handle two bundles 10 MB each as well as 100 smaller bundles. The separation should be a result of the functionality itself - a good starting point is the UML architecture diagram you probably did before you even started implementing the stuff. The places where the different functional parts communicate with each other are exactly the places where you should think about defining interfaces instead of classes - and these interfaces will then become your OSGi services and the implementations will become the bundles - and the next time you will have to update some part you will find out it is much easier to predict the effect on the other parts of the app because you separated it clearly and declared it in the manifest of the bundles.
Separate any external/open-source libraries you use into their own bundles. They will most probably be the parts that have to be updated more often and on a different timeline than your own code. It is also more important here to define clear package dependencies and package versions, and to depend only on interfaces rather than on implementation classes!
Think about which parts of the app you want to expose to plugins. Then make OSGi services out of those parts - i.e. publish the interfaces in the OSGi service registry. You don't need to implement anything special - you can publish any Java object. The plugins will then use the registry for lookup.
The same goes for the plugins - think about what you want to get from plugins and define the respective interfaces that the plugins can implement and publish and your app can lookup in the registry.
And as a final tip - see what bundles are already available in the OSGi framework you have chosen. There are many standard OSGi interfaces defined by the OSGi spec - for configuration, logging, persistent storage, remote connections, user admin, eventing, and many more. Since they are standard, you can use them without becoming dependent on any specific OSGi implementation. And uninstall what you don't need.