There seems to be a current trend in the Java space to move away from deploying Java web applications to a servlet container (or application server) in the form of a WAR file (or EAR file) and instead package the application as an executable JAR with an embedded servlet/HTTP server like Jetty. And I mean this more in the way newer frameworks are influencing how new applications are developed and deployed than in how applications are delivered to end users (because, for example, I get why Jenkins uses an embedded container: very easy to grab and go). Examples of frameworks adopting the executable JAR option:
Dropwizard, Spring Boot, and Play (well, it doesn't run in a servlet container, but its HTTP server is embedded).
My question is, coming from an environment where we have deployed our (up to this point mostly Struts2) applications to a single Tomcat application server, what changes, best practices, or considerations need to be made if we plan on using an embedded-container approach? Currently, we have about 10 homegrown applications running on a single Tomcat server, and for these smallish applications
the ability to share resources and be managed on one server is nice. Our applications are not intended to be distributed to end users to run within their environment. However, moving forward, if we decide to leverage a newer Java framework, should this approach change? Is the shift to executable JARs spurred on by the increasing use of cloud deployments (e.g., Heroku)?
If you've had experience managing multiple applications in the Play style of deployment versus traditional war file deployment on a single application server, please share your insight.
An interesting question. This is just my view on the topic, so take everything with a grain of salt. I have occasionally deployed and managed applications using both servlet containers and embedded servers. I'm sure there are still many good reasons for using servlet containers but I will try to just focus on why they are less popular today.
Short version: Servlet containers are great to manage multiple applications on a single host but don't seem very useful to manage just one single application. With cloud environments, a single application per virtual machine seems preferable and more common. Modern frameworks want to be cloud compatible, therefore the shift to embedded servers.
So I think cloud services are the main reason for abandoning servlet containers. Just like servlet containers let you manage applications, cloud services let you manage virtual machines, instances, data storage, and much more. This sounds more complicated, but with cloud environments there has been a shift to single-app machines. This means you can often treat the whole machine as if it were the application. Each application runs on a machine of appropriate size. Cloud instances can pop up and vanish at any time, which is great for scaling. If an application needs more resources, you create more instances.
Dedicated servers, on the other hand, are usually powerful but of a fixed size, so you run multiple applications on a single machine to maximize the use of resources. Managing dozens of applications - each with its own configuration, web server, routes, connections, etc. - is not fun, so using a servlet container helps you keep everything manageable and yourself sane. It is harder to scale, though. Servlet containers in the cloud don't seem very useful. They would have to be set up for each tiny instance, without providing much value, since they only manage a single application.
Also, clouds are cool and non-cloud stuff is boring (if we still believe the hype). Many frameworks try to be scalable by default so that they can easily be deployed to the cloud. Embedded servers are fast to deploy and run, so they seem like a reasonable solution. Servlet containers are usually still supported but require a more complicated setup.
Some other points:
The embedded server can be optimized for the framework or be better integrated with the framework's tooling (like the Play console, for example).
Not all cloud environments come with customizable machine images. Instead of writing initialization scripts to download and set up servlet containers, using dedicated software for cloud application deployments is much simpler.
I have yet to find a Tomcat setup that doesn't greet you with a PermGen space error every few redeployments of your app. Taking a bit longer to (re)start embedded servers is no problem when you can almost instantly switch between staging and production instances without any downtime.
As already mentioned in the question, it's very convenient for the end user to just run the application.
Embedded servers are portable and convenient for development. Today everything moves quickly: prototypes and MVPs need to be created and delivered as fast as possible, and no one wants to spend too much time setting up an environment for every developer. A minimal sketch of the pattern is shown below.
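To make the "executable jar with an embedded server" idea concrete, here is a minimal sketch (not any particular framework's approach) of an application that embeds Jetty in a plain main() class. It assumes the Jetty 9 server jars and the servlet API on the classpath; the port and handler body are placeholders.

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.eclipse.jetty.server.Request;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.handler.AbstractHandler;

    public class EmbeddedApp {
        public static void main(String[] args) throws Exception {
            // The application owns the server instead of being deployed into one.
            Server server = new Server(8080); // placeholder port
            server.setHandler(new AbstractHandler() {
                @Override
                public void handle(String target, Request baseRequest,
                                   HttpServletRequest request, HttpServletResponse response)
                        throws java.io.IOException {
                    response.setContentType("text/plain;charset=utf-8");
                    response.getWriter().println("Hello from the embedded server");
                    baseRequest.setHandled(true);
                }
            });
            server.start();
            server.join(); // block until the server is stopped
        }
    }

Packaged together with its dependencies (for example as an uber jar), this runs with java -jar app.jar, which is the grab-and-go model mentioned in the question.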
Related
I've created my first Play application. Which is the most suitable deployment method for production? Should I copy the whole project to the production server and run play start, or should I make a WAR out of my application and deploy it in Tomcat/JBoss? Which is the recommended way? I'm getting confused comparing this to the Rails way of doing things. Note that this is supposed to be a big-data application, and it may also have to serve heavy load later on, so we are thinking about scalability, availability, and performance aspects too. The application is to be deployed in a cloud.
Thanks.
As others have stated, using the dist command is the easiest way to deploy Play for a one-off application. However, to elaborate, here are some other options and my experience with them:
When I have an app that I update frequently, I usually install Play on the server and perform updates through Git. After every update I run play stop (to stop the running server), sometimes followed by play clean to clear out any potentially corrupted libraries or binaries, then play stage to ensure all prerequisites are present and to compile the app, and finally play start to run the server for the updated app. It seems like a lot, but it is easy to automate with a quick bash script.
Another method is to deploy Play behind a front-end web server such as Apache, Nginx, etc. This is mostly useful if you want to perform some sort of load balancing, but not required as Play comes bundled with its own server. Docs: http://www.playframework.com/documentation/2.1.1/HTTPServer
Creating a WAR archive using the play2war plugin is another way to deploy, but I wouldn't recommend it unless you are giving the app to someone who already has a major infrastructure built on the servlet containers you mentioned (as many large companies do). Using a servlet container adds a level of complexity that Play is supposed to remove by nature (hence the integrated server). There are no notable performance gains that I am aware of with this method over the two previously described.
Of course, there is always play dist, which creates the package for you; you upload it to your server and run play start from there. This is probably the easiest option. Docs: http://www.playframework.com/documentation/2.1.1/ProductionDist
For performance and scalability, the Netty server in Play will perform anywhere from very adequately to exceptionally well for what you require. Here's a reputable link showing Netty with the fastest performance of all frameworks, and a "stock" Play app coming in somewhere in the middle of the field but way ahead of Rails/Django in terms of performance: http://www.techempower.com/blog/2013/04/05/frameworks-round-2/.
Don't forget, you can always change your deployment architecture down the road to run behind a front-end server as described above if you need more load balancing and such for availability. That is a trivial change with Play. I still would not recommend the WAR deployment option unless, like I said, you already have a large installed base of servlet containers in use that someone is forcing you to serve your app with.
Scalability and performance also have a lot to do with other factors, such as your use of caching, the database configuration, use of concurrency (which Play is good at), and the quality of the underlying hardware or cloud platform. For instance, Instagram and Pinterest serve millions of people every day on a Python/Django stack, which has mediocre performance by all popular benchmarks. They mitigate that with lots of caching and high-performing databases (which are usually the bottleneck in large applications).
At the risk of making this answer too long, I'll just add one last thing. I, too, used to fret over performance and scalability, thinking I needed the most powerful stack and configuration around to run my apps. That just isn't the case any more unless you're talking about Google or Facebook scale, where every algorithm has to be finely tuned because it will be bombarded a billion times every day. Hardware (or cloud) resources are cheap, but developer/sysadmin time isn't. You should weigh ease of use and maintainability of your deployment over raw performance comparisons, even though in the case of Play the best-performing deployment configuration is arguably the easiest option as well.
You don't need to use Play's console for running the application; it consumes some resources, and its main goal is fast launching during the development stage.
The best option is using the dist command as described in the docs. Thanks to this, you don't even need to install Play on the target machine, as dist creates a ready-to-use, stand-alone application containing all required elements (including the built-in server, so you don't need to deploy it as a WAR in any container).
If you are planning to use a cloud, you should also check offerings from, e.g., Heroku or CloudBees, which allow you to deploy your application just by... pushing changes via a git repository, which is a very convenient way. Check the documentation's home page and scroll down to the "Deploying to..." links for more details.
Looking into hosting sites (for a Play framework application), I have noticed two options: VPS and dedicated JVM Java hosting. Will I eventually be able to achieve the same result with either option, or is one more limited?
Borderline question. In fact, both strategies have advantages and inconveniences. But for the Play framework, you should be thinking about:
Playapps
Heroku
Jelastic
for the JVM hosting. Just take into account that Play is supposed to be served through its embedded Jetty for better performance. When deploying to Jelastic, it will be deployed as a WAR, and performance issues might appear when using WARs instead of the out-of-the-box solution.
On the other hand, a VPS has to be configured by you, can have security issues, and so on. As I said, both have good and bad sides.
There are a couple of things that should be cleared up.
The Play framework comes with the Netty web server (not Jetty, which is the server used by Heroku), and the Play developers advise users to deploy on that server for production, mainly in order not to waste resources (a servlet container comes with lots of stuff that is not needed) and to deploy on the same platform that you develop on.
There are no performance issues deploying your application as an exploded WAR folder on any servlet container; it's just that you might be wasting resources.
The only disadvantage is that you won't be able to take advantage of asynchronous requests.
Now there are lots of options to deploy a Play application: OpenShift, Heroku, GAE, CloudBees, Jelastic, dotCloud, Playapps... in fact, any servlet container will do.
Have a look at this question: Experiences on free and low-cost hosting for play framework applications?.
If you are looking for an inexpensive option, I would go with OpenShift.
Apart from that, it's like Zenklys said: on a VPS you are your own IT department...
I'll need to develop a Java service that is simple because:
It only communicates via a TCP socket, no HTTP.
It runs on a dedicated server (there are no other services except the basic SSH and such)
Should I make this a standalone service (maybe in something like Java Service Wrapper) or make it run in a container like Tomcat? What are the benefits and detriments of both?
If you aren't working with HTTP, you will have to build your own connectors for Tomcat. When I've written these types of applications, I've just written them as standard Java applications. On Windows machines, I use a service wrapper that allows them to be part of the Windows startup process. On non-Windows machines, you just need to add a startup script.
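As an illustration, here is a minimal sketch of such a standalone TCP service: a plain main() class with a ServerSocket and a thread pool, which a service wrapper or startup script can then manage. The port, pool size, and echo protocol are placeholders for whatever the real service does.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class TcpService {
        public static void main(String[] args) throws Exception {
            int port = 9000;                           // placeholder; read from config in practice
            ExecutorService pool = Executors.newFixedThreadPool(50);
            try (ServerSocket listener = new ServerSocket(port)) {
                while (true) {
                    Socket client = listener.accept(); // one task per connection
                    pool.submit(() -> handle(client));
                }
            }
        }

        private static void handle(Socket client) {
            try (Socket c = client;
                 BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
                 PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println("echo: " + line);      // stand-in for the real protocol
                }
            } catch (Exception e) {
                e.printStackTrace();                   // real code would log this properly
            }
        }
    }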
Using a container (regardless of which one) buys you all the details of starting, stopping, scaling, logging, etc., which you would otherwise have to handle yourself, and it is always harder than you think (at least once you reach production).
Scalability in particular is something you need to consider now; later it will be much harder to change your mind.
So, if somebody already wrote most of what you need, then use that.
Tomcat doesn't sound like a good choice to me in your situation. AFAIK it's primarily made for servlets and JSPs, and you have neither. You also don't need to deploy multiple applications on your app server, etc. (so no benefit from .war packaging).
If you need dependency injection, connection pooling, logging, network programming framework etc., there are a lot of good solutions out there and they don't need tomcat.
For example, in my case I went for a standalone app that used Spring, Hibernate, Netty, Apache Commons DBCP, Log4j, etc. These can be set up easily, and this way you have a lot more freedom.
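To show how little container support those pieces actually need, here is a sketch of a Commons DBCP pool configured directly in code rather than through a Tomcat-managed JNDI DataSource. The driver class, URL, and credentials are placeholders, and setMaxActive is the DBCP 1.x name (DBCP 2 renamed it to setMaxTotal).

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.apache.commons.dbcp.BasicDataSource;

    public class PoolSetup {
        public static void main(String[] args) throws Exception {
            // Configure the pool in plain code - no container-managed JNDI resource required.
            BasicDataSource ds = new BasicDataSource();
            ds.setDriverClassName("org.postgresql.Driver");     // placeholder driver
            ds.setUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
            ds.setUsername("app");
            ds.setPassword("secret");
            ds.setMaxActive(20);                                // DBCP 1.x; DBCP 2 uses setMaxTotal

            try (Connection conn = ds.getConnection();
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("pool is up, got: " + rs.getInt(1));
            }
            ds.close();
        }
    }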
Should you need an HTTP server, embedding Jetty is another option. With this option too, you have more control over the app, and it can potentially simplify your implementation compared to using a Tomcat container.
Tomcat doesn't really buy you much if you don't use HTTP.
However, I was forced to move a non-HTTP server to Tomcat for the following reasons:
We needed some simple web pages to display the status/stats of the server, so I needed a web server. Java 6 comes with a simple HTTP server (a minimal sketch follows this list), but Tomcat is more robust.
Our operations tools are geared to run Tomcat only, and a standalone app just falls off the radar in their monitoring system.
We use DBCP for database pooling, and everyone seems more comfortable using it under Tomcat.
The memory footprint of Tomcat (a few MB) is not an issue for us, and we haven't seen any performance change since moving to Tomcat.
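For comparison, here is a sketch of the kind of status page the JDK's built-in com.sun.net.httpserver server can provide without Tomcat (Java 7+ syntax; the port, path, and stats are placeholders):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpHandler;
    import com.sun.net.httpserver.HttpServer;

    public class StatusPage {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0); // placeholder port
            server.createContext("/status", new HttpHandler() {
                @Override
                public void handle(HttpExchange exchange) throws IOException {
                    byte[] body = "server: up\nconnections: 42\n"                  // placeholder stats
                            .getBytes(StandardCharsets.UTF_8);
                    exchange.getResponseHeaders().add("Content-Type", "text/plain");
                    exchange.sendResponseHeaders(200, body.length);
                    try (OutputStream os = exchange.getResponseBody()) {
                        os.write(body);
                    }
                }
            });
            server.start(); // serves http://localhost:8081/status until the process exits
        }
    }

Tomcat was still the better fit for us because of the operations and monitoring points above, but this is roughly what the built-in-server route looks like.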
A container can save you from reinventing the wheel in terms of startup, monitoring, logging, configuration, deployment, etc. Also it makes your service more understandable to non-developers.
I wouldn't necessarily go for Tomcat; check out GlassFish and Geronimo, as they are more modular, so you can have just the bits you need and exclude the HTTP server.
We faced a similar decision a while back, and some parts of the system ended up being JSW (Java Service Wrapper) based and the others deployed as .war files. The .war option is simpler (well, more standard for sure) to build and configure.
We are building a Netty/NIO-based service, and I'm considering how to deploy this service to our production environment. Our standard way of deploying services is as WARs, to be deployed inside Tomcat.
When I suggested the same approach here, I got shouts and complaints that "it shouldn't be done", because both Netty and Tomcat are servers, and "it doesn't make sense to host one server in another".
To me it makes perfect sense because it completely solves my deployment issue, as well as saves me from writing some other code. Why is it such a big "no no" ?
The dynamic WAR deployment and undeployment that Tomcat provides is designed for web applications. The Netty application you are trying to deploy into Tomcat is not a web application but a separate server that merely shares the VM's memory. It means Tomcat is being repurposed into a generic microkernel such as OSGi.
However, I don't think it's a big problem. Since your company uses WAR as the standard deployment mechanism, it might be a good idea to reuse it. You don't even need to write some management functions like remote shutdown because Tomcat already provides them. All you need to do is to make sure all resources are freed up when undeployed.
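A sketch of that cleanup, assuming Netty 4: a ServletContextListener (registered in web.xml or via @WebListener) that starts the Netty server when the WAR is deployed and releases the channel and event loops when it is undeployed. The port and pipeline wiring are placeholders.

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.Channel;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public class NettyLifecycleListener implements ServletContextListener {
        private EventLoopGroup bossGroup;
        private EventLoopGroup workerGroup;
        private Channel serverChannel;

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            bossGroup = new NioEventLoopGroup(1);
            workerGroup = new NioEventLoopGroup();
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // add the application's protocol handlers to ch.pipeline() here
                        }
                    });
            try {
                serverChannel = bootstrap.bind(9090).sync().channel(); // placeholder port
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("Netty server failed to start", e);
            }
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            // Release everything so Tomcat can undeploy the WAR without leaking threads.
            if (serverChannel != null) serverChannel.close();
            if (workerGroup != null) workerGroup.shutdownGracefully();
            if (bossGroup != null) bossGroup.shutdownGracefully();
        }
    }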
Some people might not like this approach, though. Ideally, there should be a common infrastructure for deploying and managing any kind of application (i.e., a microkernel), where even Tomcat is deployed as a module and the microkernel manages WARs directly instead of Tomcat doing so. But that's a long way off.
It makes terrific sense. We in fact run a Java email server in Tomcat.
Tomcat has some huge advantages for housing an application:
The daemon scripts are already written for you
Tomcat allows hot deployment (so long as you're not using Hibernate :) )
At some point you may need either an Admin UI or Admin API
Lots and lots of tools to monitor Tomcat which means free monitoring of your app.
Now with Netty, Mina, or any event-driven network technology, you will not be able to use most MVC frameworks. In fact, you will not be able to use most Java enterprise frameworks, because many things rely on a thread per request (transactions, security, etc.).
Do not start anything inside Tomcat; it is very inconvenient and brings a lot of pain. Just embed Tomcat (or Jetty) inside your application and run your application as a plain Java process, as in the sketch below.
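Here is a sketch of what that can look like with the embedded Tomcat API (assuming Tomcat 8.5/9; the port, docBase, and servlet are placeholders):

    import java.io.File;
    import java.io.IOException;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.apache.catalina.Context;
    import org.apache.catalina.startup.Tomcat;

    public class EmbeddedTomcat {
        public static void main(String[] args) throws Exception {
            Tomcat tomcat = new Tomcat();
            tomcat.setPort(8080);          // placeholder port
            tomcat.getConnector();         // ensure the default HTTP connector is created (Tomcat 9)

            // A context rooted at the working directory; no WAR and no deployment descriptor.
            Context ctx = tomcat.addContext("", new File(".").getAbsolutePath());
            Tomcat.addServlet(ctx, "hello", new HttpServlet() {
                @Override
                protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                        throws IOException {
                    resp.setContentType("text/plain");
                    resp.getWriter().println("Hello from embedded Tomcat");
                }
            });
            ctx.addServletMappingDecoded("/*", "hello");

            tomcat.start();
            tomcat.getServer().await();    // keep the plain java process alive
        }
    }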
My manager has asked me to suggest an application server for web application development work.
What are the factors that need to be considered before we select an application server for Java/J2EE web application development?
If I select one now and want to change to some other application server in the future, will the effort to change be minimal?
Apache Tomcat and Jetty are the two most popular web containers. Tomcat is the reference implementation of a Java servlet container, Jetty is a little bit faster and more lightweight. I personally favor Jetty, but you can't go wrong with either of them. A little comparison of the two can be found here.
Generally the migration of an application between web containers is fairly easy - only some configuration needs to be changed, but nothing in the source code (which is not always the case with full-blown enterprise application servers).
The answer is that you can make it more or less difficult to change application containers based on your development practices. For example, the Liferay portal includes custom XML configuration files for many application containers, allowing it to be used on many of them. So it's certainly possible to switch flexibly, but you have to redo all the server-specific configuration files and you can't rely on container-specific features.
In some cases, the containers themselves make it difficult. For example, the JBoss classloader has a history of scant support for the actual J2EE and Java EE standards. This makes it easy to rely on non-standard features, and in some cases nearly impossible to use standard ones.
Besides making sure that your application server enforces standards compliance, you also want to make sure you actually need a full application server, as opposed to just a servlet container as mentioned above. Does your application need EJBs, or just servlets? If you aren't doing EJB development, then an application server is overkill.
If you are doing EJB development or otherwise using EE features beyond what a servlet container supplies, consider ease of configuration and administration alongside standards compliance, and I think you'll find a server that fits your needs.
A well written Java web application can be deployed on any web container, possibly with a bit of external configuration.
Hence you can choose the one that works the best for you during development, and then do testing on the target deployment server.
For NetBeans, Tomcat is bundled and is fine. Eclipse does not bundle a web container yet, but Tomcat is supported.
In any case, use one that others use, then they can help you, and you them.
For pure development purposes, I would like a server with
Small footprint and very minimal start/stop time.
IDE plugins
So, my vote goes to Jetty for web apps.
If you are on NetBeans, GlassFish is not a bad choice either, as it shows superb performance via Grizzly, which uses NIO.