I'm puzzled and need clever suggestions.
I have a Java 8 web application developed with Spring Boot that runs on an Apache Tomcat server, uses a PostgreSQL database, relies on RabbitMQ to handle requests made by a JS client, and exposes its REST API through Swagger. Its main purpose is to use common libraries like docx4j and Apache POI to read/write Excel/Word files on the system.
Everything works like a charm on a local installation.
But moving to a different environment, with a central server and multiple hosts (at least 2-3) accessing the client app, causes trouble.
The main problem seems related to Tomcat, since the main application constantly needs to be restarted.
Also, the Tomcat process memory keeps growing. (All streams are properly closed after each use.)
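(For illustration, the file read/write code follows the usual Java 8 try-with-resources pattern, roughly like the sketch below; the file names and workbook type are placeholders, and it assumes a POI version where Workbook is Closeable.)

import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ExcelCopy {
    public static void copy(String inPath, String outPath) throws Exception {
        // try-with-resources closes the streams and the workbook even on failure
        try (FileInputStream in = new FileInputStream(inPath);
             XSSFWorkbook workbook = new XSSFWorkbook(in);
             FileOutputStream out = new FileOutputStream(outPath)) {
            workbook.getSheetName(0); // read something from the file
            workbook.write(out);      // write the result back out
        }
    }
}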
No errors are thrown or logged.
I've already tried forcing garbage collection (even though it's not recommended) where possible, with no improvement.
I've already tried changing the application server (WildFly), with no improvement.
I've tried a different OS environment, with no change.
Any ideas?
Thank you for your time.
-- EDIT
Added APR support to Tomcat for better performance; still nothing. I've found a possible bottleneck in a function that makes intense use of the docx4j libraries for document merging, but it occurs only in this Windows environment.
I'll answer my own question.
All issues are related to application memory handling, not the environment.
I have replicated the "troublesome" environment on a different machine and am still getting errors, so it is the application itself.
Thank you all for your time.
Related
I am writing a Java EE application, using Jetty as the app server for convenience during development. Although (re)deployment is fast, I'd like it if Java code changes could be reflected immediately in the running server without restarting. (I'm already using the useFileMappedBuffer setting to see immediate changes to statically served content.)
I've seen questions about using the Maven Jetty plugin and setting scanInterval in order to redeploy a web context, but that's not what I want to do. My Jetty server is started from within a Java application in Eclipse and I'd like code changes to be immediately reflected in the running server, as is possible with ordinary Java applications in Eclipse. I'm running the code "in place", i.e. not building and deploying a WAR file first.
I realise that web apps have their own class loaders in order to conform to the servlet spec, but I don't mind risking non-standard behaviour to get changes deployed more rapidly in development. I've tried using WebAppContext.setClassLoader to set the classloader to a "normal" classloader but to no avail.
Is it possible to do what I want? I believe JRebel claims to do it, but what's it doing that I'm not?
If you connect to Jetty using remote debugging from Eclipse, hot code replace should be possible.
Remote debugging is enabled by adding the following to the Jetty start script:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8787
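On Java 5 and later, the same JDWP agent can also be enabled with the -agentlib form (equivalent effect; the port number here is just an example):

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787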
Even an ordinary Java application needs to be restarted to see code changes.
Basically your options are:
Restart Jetty (you said you don't want that).
Make Jetty scan for application changes and allow hot redeploy if changes are detected (you said you don't want that).
Attach a remote debugger which then does hot-swap your code changes (see answer of Amila). This is limited to method body changes and therefore not really useful.
Use JRebel which provides a useful hot-swap implementation and also can pick up configuration changes for many frameworks.
Use a web framework which implements a proprietary hot-swap implementation based on throw-away-classloaders (e.g. Tapestry or Civilian).
I have a VPS where I want to install a web application for production.
Currently, it works on Tomcat 6 connected through Apache HTTP Server using the AJP module.
Everything works OK, but I want to know if it is a good idea to configure Tomcat as the main web server, since we are going to run only JSP applications.
FYI: the server must handle HTTPS requests.
Thanks in advance.
Misinformation
Years ago there was much misinformation against using Tomcat directly as a web server, but in fact Tomcat works very well as such.
I've used nearly every version of Tomcat on various projects, though all were relatively low volume. It has never let me down, except that the WebDAV module never worked right. Other than that, Tomcat is fast and reliable for both static content and dynamic servlet work.
httpd
Apache HTTP Server ("httpd") is of course very advanced and full of features. If you need those features, then use httpd. If you have very high volume traffic, then use httpd. If you have huge amounts of static access and want to relieve Tomcat of that duty, thereby letting Tomcat focus its performance on your servlets, then use httpd. If you want to keep a front-facing web site running while taking Tomcat up and down, use httpd. But for most relatively simple web sites or simple web apps, Tomcat suffices.
I suggest you try Tomcat alone and see how it goes. Start with your development and testing systems. Review your config files for httpd to see if all its features can be reproduced using Tomcat features. If that all works, then do some load-testing. Either hack some load-tests of your own, or try any of the many load-test frameworks.
While you can hot-deploy webapps to Tomcat, that was not always recommended in production (I'm not sure about current versions). Plan how to handle taking down Tomcat to deploy updated apps and to perform maintenance chores.
Recent Tomcat
You say you are using Tomcat 6. You may want to consider moving to Tomcat 7 or 8, but review the release notes.
Jetty
Also consider Jetty, the most direct competitor/alternative to Tomcat. Jetty is much akin to Tomcat in its purpose, scope of features, and great reputation.
If you're using SSL you will get the following benefits by front-ending with Apache HTTPD:
Ability to specify cipher suites in order of preference, which is practically mandatory if you want case-hardened SSL, and impossible with Tomcat alone.
Ability to request or require client certificates on a per-location basis instead of globally, which is the only mechanism Tomcat offers.
There are many other benefits as well, such as:
the ability to load-balance between multiple Tomcat instances, which alone is probably enough of a reason to do it
more control over what's logged and where the log files go
more tools to defend against attacks of various kinds.
I used standalone Tomcats for years, but having made the switch I would never go back even for a clean-sheet project.
I don't think either is better; it depends on your requirements. In environments where you may have to address security or logging requirements horizontally across heterogeneous backends (IIS, Tomcat, etc.), Apache is your friend (or an expensive load balancer).
Assuming you don't have these requirements, I don't know of any advantage in using Apache for Apache's sake.
SSL is easily configured in Tomcat, and performance these days is likely to be on par with Apache and can be improved further with APR.
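For reference, a bare-bones HTTPS connector in Tomcat's server.xml looks roughly like this (keystore path and password are placeholders; an APR/OpenSSL connector takes different attributes):

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="conf/keystore.jks" keystorePass="changeit" />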
Problem: I have a standalone Java app (henceforth known as "the agent") that runs as a service on internal company servers. It acts as a remote agent for some central servers. As the agent gets deployed in more places, managing them is getting more complicated. Specifically: pushing updates is painful because it's a fairly manual process, and getting access to the logs and other info about the environments where the agents are running is problematic, making debugging difficult. The servers under discussion are headless and unattended, meaning that this has to be a fully automated process with no manual intervention, hence Java Web Start isn't a viable solution.
Proposed solution: Make the agent phone home (to the central servers) periodically to provide agent status and check for updates.
I'm open to other suggested solutions to the problem, but I've already got a working prototype for the "status and self-updates" idea, which is what this question is focused on.
What I came up with is actually a separate project that acts as a wrapper for the agent. The wrapper periodically calls the central server via HTTP to check for an updated version of the agent. Upon finding an update, it downloads the new version, shuts down the running agent, and starts the new one. If that seems like an odd or roundabout solution, here are a few other considerations/constraints worth noting:
When the wrapper gets a new version of the agent, there may be new JAR dependencies, meaning classpath changes, meaning I probably want to spawn a separate Java process (a minimal launch sketch follows these notes) instead of fiddling with ClassLoaders and running the risk of a permanent generation memory leak, which would require manual intervention--exactly what I'm trying to get away from. This is why I ended up with a separate "wrapper" process to manage the agent updates in my prototype.
Some servers where the agents are deployed are resource-limited, so any solution needs to be low on CPU and memory usage. That makes me want a solution that doesn't involve spinning up a new JVM, and is a strike against having a separate wrapper process.
The agent is already deployed to both Windows and RHEL servers, so the solution must be cross-platform, though I wouldn't have a problem duplicating a reasonable amount of the process in batch and bash scripts to get things rolling.
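For what it's worth, the relaunch step in a wrapper like this can be as simple as starting a fresh JVM via ProcessBuilder (a rough sketch; the directory layout and main class name are placeholders):

import java.io.File;

// Start the (possibly updated) agent as a separate JVM with its own classpath,
// so new jar dependencies are picked up and no classloaders are reused.
public class AgentLauncher {
    public static Process launch(File agentDir) throws Exception {
        String javaBin = System.getProperty("java.home")
                + File.separator + "bin" + File.separator + "java";
        ProcessBuilder pb = new ProcessBuilder(
                javaBin,
                "-cp", new File(agentDir, "lib") + File.separator + "*", // wildcard expanded by the java launcher itself
                "com.example.agent.Main");                               // placeholder main class
        pb.redirectErrorStream(true);
        pb.redirectOutput(new File(agentDir, "agent.log"));
        return pb.start();
    }
}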
Question: As stated, I want to know how to make a self-updating Java app. More specifically, are there any frameworks/libraries out there that would help me with this? Can someone with experience in this area give me some pointers?
If your application is OSGi based, you could let OSGi handle bundle updates for you. It is similar to the wrapper approach you suggest, in that the OSGi container itself is "the wrapper" and some of it won't be updated. Here's a discussion on this
Different solution: use (and pay for) install4j. Check out the auto-update features here
No need for a wrapper (saves memory) or Java Web Start (which adds more restrictions on your application): simply let a thread in your application check periodically for updates (e.g. from the cloud) and download them if available, then code these two steps in your application:
Launch a shell script (.sh or .cmd) that updates your artifacts and relaunches your application after a few seconds' pause in the script (to avoid having two instances of your application running at the same time).
Terminate your application (the first instance).
The script can overwrite needed artifacts and re-launch your application.
Enjoy!
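A rough sketch of that idea in Java (the URLs, file names, version constant and script name are all placeholders):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Scanner;

// Background thread: poll for a newer version, download it, hand off to a script, exit.
public class UpdateChecker implements Runnable {
    private static final String CURRENT_VERSION = "1.0.3";

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                Thread.sleep(60 * 60 * 1000L); // poll once an hour
                String latest;
                try (Scanner s = new Scanner(
                        new URL("https://updates.example.com/app/latest-version.txt").openStream())) {
                    latest = s.nextLine().trim();
                }
                if (!latest.equals(CURRENT_VERSION)) {
                    try (InputStream in = new URL(
                            "https://updates.example.com/app/app-" + latest + ".jar").openStream()) {
                        Files.copy(in, Paths.get("update/app-new.jar"),
                                StandardCopyOption.REPLACE_EXISTING);
                    }
                    // The script pauses a few seconds, swaps the jar, then relaunches the app.
                    new ProcessBuilder("./apply-update.sh").start();
                    System.exit(0); // terminate the first instance
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } catch (Exception e) {
                // log and retry on the next cycle
            }
        }
    }
}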
Have a look at Java Web Start.
It is technology that's been part of Java since... 1.5? maybe 1.4? It allows deployment and installation of standalone Java-based apps through a web browser, and it also enables you to always run the latest version of the app.
http://www.oracle.com/technetwork/java/javase/overview-137531.html
http://en.wikipedia.org/wiki/JNLP#Java_Network_Launching_Protocol_.28JNLP.29
also see this question: What's the best way to add a self-update feature to a Java Swing application?
It appears as though Web Start is the only built-in way to do this at the moment.
We're using Windows 2008 and we are thinking of switching application servers from Adobe ColdFusion 9 to Railo 3.1. This would mean using a new Java servlet container, so instead of Adobe JRun 4, we're looking at Apache Tomcat.
Adobe has a helpful perfmon plugin for CF9. We can gather most stats with that. The problem is that, as far as I understand, there is no perfmon plugin for Tomcat.
I wanted to know if there are any kind of free profiling tools we can use to get metrics and performance data on Tomcat, for example requests/sec, memory usage etc.
I don't mind if they are just written to logs so long as we can read them in some format. Also, it doesn't have to be a stand-alone product.
Any and all help appreciated!
Just curious - which application server are you using now? Which one uses perfmon now?
Because you've got to run Tomcat on an operating system - Windows, Linux, etc. You seem to imply that perfmon is useless to you now. I don't believe that's the case.
If you need to embellish the info from perfmon, you can certainly buy something. But the cheapest solution for you would be servlet filters that intercept every incoming request and outgoing response to calculate request counts, response times, etc. You'd write these classes once and declare them in your web.xml. They could write to logs using log4j.
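A hedged sketch of such a filter (the class name and log format are made up; a filter-mapping for /* in web.xml would route everything through it):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.apache.log4j.Logger;

// Logs each request's URI and elapsed time; counts or percentiles could be layered on top.
public class RequestTimingFilter implements Filter {
    private static final Logger LOG = Logger.getLogger(RequestTimingFilter.class);

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        try {
            chain.doFilter(req, res); // let the request proceed
        } finally {
            long elapsed = System.currentTimeMillis() - start;
            String uri = (req instanceof HttpServletRequest)
                    ? ((HttpServletRequest) req).getRequestURI() : "?";
            LOG.info(uri + " took " + elapsed + " ms");
        }
    }

    public void destroy() {}
}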
Or maybe Hyperic's solution is what you have in mind. It used to be open source, but Spring bought them a few years back. Then VMWare bought Spring. It's all part of a grander solution.
LambdaProbe will give you monitoring for memory usage, web app sessions and servlets, connections, etc.
Take a look at the demo site http://demo.lambdaprobe.org/ for more.
Site login: demo/demo
I recently had a problem where my Java code worked perfectly on my local machine, but it just wouldn't work when deployed onto the web server, especially the DB part. The worst part is that the server is not my machine, so I had to go back and forth checking software versions, DB accounts, settings, and so on...
I have to admit that I did not do a good job with the logging mechanism in the system. However, as a newbie programmer with little experience, I have to accept that this is part of the learning curve. So here is a very general but important question:
In your experience, what is most likely to go wrong when an application works perfectly on the development machine but totally surprises you on the production machine?
Thank you for sharing your experience.
The absolute number one cause of problems which occur in production but not in development is Environment.
Your production machine is, more likely than not, configured very differently from your development machine. You might be developing your Java application on a Windows PC whilst deploying to a Linux-based server, for example.
It's important to try and develop against the same applications and libraries as you'll be deploying to in production. Here's a quick checklist:
Ensure the JVM version you're using in development is the exact same one on the production machine (java -version).
Ensure the application server (e.g. Tomcat, Resin) is the same version in production as you're using in development.
Ensure the version of the database you're using is the same in production as in development.
Ensure the libraries (e.g. the database driver) installed on the production machine are the same versions as you're using in development.
Ensure the user has the correct access rights on the production server.
Of course you can't always get everything the same -- a lot of Linux servers now run in a 64-bit environment, whilst this isn't always the case (yet!) with standard developer machines. But, the rule still stands that if you can get your environments to match as closely as possible, you will minimise the chances of this sort of problem.
Ideally you would build a staging server (which can be a virtual machine, as opposed to a real server) which has exactly (or as close as possible to) the same environment as the production server.
If you can afford a staging server, the deployment process should be something like this:
Ensure application runs locally in development and ensure all unit and functional tests pass in development
Deploy to staging server. Ensure all tests pass.
Once happy, deploy to production
You're most likely running under a different user account. So the environment that you inherit as a developer will be vastly different from that of a production user (which is likely to be a very cut-down environment). Your PATH/LD_LIBRARY_PATH (or Windows equivalents) will be different. Permissions will have changed, etc. Plus the installed software will be different.
I would strongly recommend maintaining a test box and a test user account that is set up with the same software, permissions and environment as the production user. Otherwise you really can't guarantee anything. You really need to manage and control the production and test servers with respect to accounts, installed software, etc. Your development box will always be different, but you need to be aware of the differences.
Finally a deployment sanity check is always a good idea. I usually implement a test URL that can be checked as soon as the app is deployed. It will perform database queries or whatever other key functions are required, and report unambiguously as to what's working/not working via a traffic light mechanism.
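A minimal sketch of such a check, assuming a JNDI-registered DataSource and a database where "SELECT 1" is valid (all names are placeholders):

import java.io.IOException;
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

// Deployment sanity check: hit /health right after deploying and look for GREEN.
public class HealthCheckServlet extends HttpServlet {
    private DataSource dataSource;

    @Override
    public void init() throws ServletException {
        try {
            // placeholder JNDI name; use whatever the app's datasource is actually called
            dataSource = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/appDataSource");
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String status = "GREEN";
        try (Connection c = dataSource.getConnection()) {
            c.createStatement().execute("SELECT 1"); // key function: can we reach the database?
        } catch (Exception e) {
            resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
            status = "RED: " + e.getMessage();
        }
        resp.setContentType("text/plain");
        resp.getWriter().println(status);
    }
}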
Specifically, check all the configuration files (*.xml / *.properties) in your application and ensure that you are not hard-coding any paths or variables in your app.
You should maintain separate config files for each environment and verify the installation guide with the environment admin (if one exists).
Other than that, check the versions of all software, the dependency list, etc., as described by others.
A production machine will likely be missing some of the libraries and tools you have on your development machine, or there may be older versions of them. Under some circumstances this may interfere with normal operation of the software.
The database connection situation may also be different, meaning users, roles, and access levels.
One common (albeit easy to detect) problem is conflicting libraries, especially if you're using Maven or Ivy for dependency management and don't double check all the managed dependencies at least once before deploying.
We've had incompatible versions of logging frameworks, and even of the Servlet/JSP API jars, a few times too many in our test deployment environment. Also, it's always a good idea to check what the shared libraries folder of your Tomcat (or equivalent) contains; we've had database datasource class conflicts because someone had put the PostgreSQL JDBC jar in the shared folder while the project came with its own jar for JDBC connectivity.
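When you suspect a duplicate jar, a quick way to see where a class was actually loaded from (the class name here is just an example):

public class WhereIsMyClass {
    public static void main(String[] args) throws Exception {
        Class<?> clazz = Class.forName("org.postgresql.Driver"); // any suspect class
        // getCodeSource() can be null for classes loaded by the bootstrap classloader
        System.out.println(clazz.getName() + " was loaded from "
                + clazz.getProtectionDomain().getCodeSource().getLocation());
    }
}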
I always try to get an exact copy of the server my product runs on. After a few apps and, of course, a lot of bugs, I created a list of common bugs/hints for myself. Another solution I tested for my last project was to take the software running on that server and try to configure it there; strange effects can happen with that. ^^
Last but not least, I always test my apps on different machines.
In my experience there is no definitive answer to this question. Here are some of the issues I have faced.
Automatic updates were not turned on in the dev server (Windows) but were turned on in the production server (which is wrong in the first place!), so one of my web applications crashed due to a patch being applied.
Some batch jobs were running on the production app server which changed data that my application was using.
I'm not the one who does the deployment for my company, so most of the time the people who deploy miss some registry entries or add wrong ones. Simple, but very hard to detect (maybe just for me ;-) ); once it took me hours to identify a stray space in one of the registry values. Now we have a very long release document with all the details about every server used by the application, plus a "current release" checklist that the engineers who deploy the application fill in.
Will add more if I remember any.
Beyond a staging server, another strategy for making sure the environments you deploy into are the same is to set them up automatically. That is, you use a tool like Puppet to install all of the server's dependencies and run your install process before every installation so that all configuration is reset. That way you can ensure the configuration of the box is what you set it to during development, and you keep the configuration of the production environment in source control.