Tomcat 8.5 takes too long to recognize new content - java

I have the following problem: I write an Excel file to C:\Tomcat85\webapps\MyWebApp\Excel\myExcel.xls.
As soon as my Java application finishes writing the file, it starts a download so the user can work with it. This gives a nasty 404 error.
If I wait a few seconds and reload the page, it downloads fine (adding a five-second sleep in Java works just as well).
So what I conclude is that Tomcat takes about 5 seconds to recognize that this new Excel file exists, and only then can it serve it.
Is there any way to make Tomcat perform this task faster? Maybe some configuration in web.xml to treat that "/Excel/" folder differently?
Windows 10 64-bit, Tomcat 8.5, Java 7 (I could try Java 8, but I don't think it will make a difference).
Some code:
new ExcelExport(remoteHandle, context).execute(outFileName, outMessage);
// Thread.sleep(5000);
httpContext.wjLoc = formatLink(outFileName);
The sleep is commented out or left in depending on the test. Without the sleep I get a 404; with the five-second sleep it works fine.
httpContext.wjLoc just performs the download, as a link to the file.
The writing itself works fine: I can see the file, complete and readable, in File Explorer, but if I try to open it by URL I get the same 404.

Static resources are cached by default. The amount of time in milliseconds between revalidations of cache entries is defined by the cacheTtl attribute of the Resources element, described in the Tomcat documentation. Its default value is 5000 (i.e. 5 seconds), which matches the delay you are seeing.
If you want to disable the cache, just set cachingAllowed to false.
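For example, a minimal sketch of the application's META-INF/context.xml (Resources, cachingAllowed and cacheTtl are the documented Tomcat 8.5 names; the values shown are illustrative):

<Context>
    <!-- Disable the static resource cache so freshly written files are served immediately -->
    <Resources cachingAllowed="false" />
</Context>

Disabling the cache trades some static-content throughput for immediate visibility of newly written files; keeping the cache but lowering cacheTtl (e.g. to "1000" on the same element) is the gentler middle ground.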


"ls" in a container within a volume shared with Windows host gets stuck when there are many files in a directory [duplicate]

Background
I am running a Spring Boot application in a docker container (Ubuntu image).
The code is written in Kotlin and it walks through a directory on disk that contains 300,000 files.
I run the following code:
File(dir)
    .walk()
    .forEach { logger.info("{}", it.name) }
and this code blocks for at least 10 minutes.
I would expect lines to start being printed very shortly after invoking walk.
Indeed, the code works as expected when running it from IntelliJ, that is, not in a container.
Question:
Why is this happening and how can I fix it?
What I Have Tried
First Trial
I tried just calling File.listFiles and logging the number of files, like so:
val count = File(dir).listFiles().size
logger.info("{}", count)
This also blocked for a very long time and eventually logged the value 0.
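A lazier probe with java.nio, sketched below in plain Java since the behaviour is JVM-level rather than Kotlin-specific, would show whether even the first entry is slow to arrive (/data/files stands in for the real mounted path):

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ListProbe {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("/data/files"); // placeholder for the shared-volume path
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
            long count = 0;
            for (Path entry : entries) {
                count++;
                if (count % 10_000 == 0) {
                    System.out.println(count + " entries so far"); // progress heartbeat
                }
            }
            System.out.println("total entries: " + count);
        }
    }
}

If this also stalls before printing anything, the time is going into the volume layer underneath, not into Kotlin's walk().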
Second Trial
I changed the settings for Docker Desktop.
I increased the RAM to 20 GB and the swap file to 1 GB.
This had no effect on the result.
Following Stefan Golubovic's comments, I realize that this question is probably unrelated to Kotlin or Java and more likely related to the implementation of shared volumes in Docker Desktop.
I have therefore submitted a more focused question here and will close this one.

ColdFusion is hanging when I use cfdocument

This morning I ran into some issues using the cfdocument tag. When a user runs a report, the report just hangs. The report has been running for years with no issues. I even took all of the code out and just put in the following.
<cfdocument format="PDF">this is a test</cfdocument>
The browser still hangs, no errors and the CPU does not jump. I am not sure why this does not work. Any suggestions?
I had a bunch of programs that included file:/// references in a cfdocument tag.
I had thought that the file reference would be more efficient; however, under ColdFusion 2016 it caused occasional, unpredictable server hangs.
The cfdocument process moves all required files into a work folder and then produces the PDF.
In CF 2016 there is a setting (Clear Temporary Files Created During CFaaS after (Minutes)) that by default clears out work files older than 30 minutes.
However, if you use file:///, the creation date of that file is not reset, so when that cleanup process runs it deletes the file immediately: it is always older than 30 minutes.
If the cfdocument process is halfway through processing and collides with the Clear Temp File process, a required file disappears and cfdocument just hangs.
Subsequent programs that call cfdocument then also hang, as only one is allowed to execute at any one time.
This eventually fills up all the CF processing slots and requires a restart of CF to get things going again.
Adobe ColdFusion has been known to have bugs where an error in the code inside <cfdocument></cfdocument> (e.g. improperly nested or unclosed HTML tags, DB query errors, an invalid variable) fails silently without surfacing an exception. When this happens, all other cfdocument requests pile up behind it. This can happen even while other pages, not using cfdocument, finish just fine.
As you have seen, restarting the CF service also restarts the PDF service and clears the pileup.
The solution is to debug the code inside the cfdocument tag so that it doesn't throw an exception. Since your issue sounds intermittent, that can be really difficult to track down. You could put everything inside the cfdocument in a cftry, then cfcatch any exceptions and email them to yourself.
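A rough sketch of that pattern (the addresses are placeholders, and cfmail assumes a mail server is configured in the CF administrator):

<cftry>
    <cfdocument format="PDF">
        <!--- report markup goes here --->
    </cfdocument>
    <cfcatch type="any">
        <!--- mail the swallowed exception to yourself instead of hanging silently --->
        <cfmail to="you@example.com" from="cf-errors@example.com"
                subject="cfdocument failed: #cfcatch.message#">
            #cfcatch.detail#
        </cfmail>
    </cfcatch>
</cftry>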

Java's File.delete() sometimes leaves behind inaccessible files (Windows)

I am trying to programmatically purge log files from a running(!) system consisting of several Java and non-Java servers. I used Java's File.delete() operation, and it usually works fine. I am also perfectly fine with log files that are currently in use not being deleted, so I just log a warning whenever File.delete() returns false.
However, log files that are currently still being written to by NON-Java applications (Postgres, Apache HTTPD, etc.; Java applications might also be affected, but I haven't noticed it yet, and they all use the same logging framework anyway, which seems fine) are not actually deleted, which is what I expected, and yet File.delete() returns true for them.
Not only do these files still exist on the file system (Windows Explorer and "dir" still show them), but afterwards they are inaccessible: when I try to open them with a text editor I get "access denied" or similar errors, when I try to copy them with Explorer it also claims that I lack permissions, and when I check their "properties" in Explorer it says "You do not have permission to view or edit this object's permissions".
Just to be clear: before I ran the File.delete() operation, I could access or delete these files without any problems; the delete operation "breaks" them. Once I stop the application, the file disappears, and on restart the application creates it from scratch and everything is back to normal.
The problem is that when I do NOT restart the application after the purge, it keeps logging into nirvana.
This behavior reminds me a bit of file deletion on Linux: if you delete a file that an application still holds open, it disappears from the file system, but the application, still holding a file handle, happily continues writing to it; you just can never access it again. The only difference is that here the files remain visible in the FS while being equally inaccessible.
I should mention that both my Java program and the applications themselves run as the "system" user.
I also tried Files.delete(), which allegedly throws an IOException indicating the error... but it seems there is no error.
To work around the problem, I tried to check whether the files are currently locked, using the method described here: https://stackoverflow.com/a/1390669/5837050, but this only works for some of the files, not all of them.
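That kind of check is roughly the following sketch (not necessarily identical to the linked answer; it treats a failed exclusive lock, or a failure to open the file for writing, as "in use"):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public final class LockProbe {
    // returns true if the file appears to be held by another process
    static boolean isLocked(Path file) {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.WRITE);
             FileLock lock = channel.tryLock()) {
            return lock == null; // null means another process holds a lock
        } catch (IOException | OverlappingFileLockException e) {
            return true; // cannot open or lock it: treat it as in use
        }
    }
}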
I basically need a reliable way (at least for Windows; if it also worked on Linux, that would be great) to determine whether a file is still being used by some program, so I could simply not delete it.
Any hints appreciated.
I haven't reproduced it, but it seems like expected OS behaviour: normally different applications run as different users who own these kinds of files. I understand, though, that you want a master purge job in Java which finds the log files not in use and deletes them (running with sufficient rights, of course).
So, given that the OS behaviour is not going to change, I would suggest configuring your logs with rolling-file-appender policies and then only touching files that match those policies.
Check the rolling policies for logback to get an idea:
http://logback.qos.ch/manual/appenders.html#onRollingPolicies
For example, if your appender's rolling policy is "older than one day or larger than 1 GB", then only delete files whose last-modified date is more than one day old or whose size exceeds 1 GB. With this rule you can be sure to delete only log files that are no longer in use.
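A sketch of that age-based purge (the directory and file name pattern are placeholders; it deliberately matches only rolled-over files, never the active log):

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class LogPurge {
    public static void main(String[] args) throws IOException {
        Path logDir = Paths.get("C:/logs/myapp"); // placeholder
        Instant cutoff = Instant.now().minus(1, ChronoUnit.DAYS);
        // matches rolled-over names like app.2016-01-31.log, not the active app.log
        try (DirectoryStream<Path> rolled = Files.newDirectoryStream(logDir, "*.????-??-??.log")) {
            for (Path file : rolled) {
                if (Files.getLastModifiedTime(file).toInstant().isBefore(cutoff)) {
                    Files.deleteIfExists(file);
                }
            }
        }
    }
}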
Note that with a proper rolling policy you may not even need your purge method; look at this configuration example:
<!-- keep 30 days' worth of history capped at 3GB total size -->
<maxHistory>30</maxHistory>
<totalSizeCap>3GB</totalSizeCap>
I hope this helps a bit!

GWT low performance

As an introduction: I am new to GWT and to coding in general, so my questions may appear basic.
I made a web app using GWT, Maven, Hibernate and IntelliJ IDEA. I deployed the app on my own Tomcat server (I have a separate computer for this: HP ProLiant ML310e Gen8 v2, 4-core 3.1 GHz, 4 GB DDR3, 2x1 TB SATA HDD).
It is a simple page with 5 tabs, and it has one 48 KB .png image as a header. The view that is loaded initially has:
2 Labels with few words,
a CellTable containing 20 rows (content is obtained from the database through RPC),
4 Buttons (for table paging)
This view has almost no content; it has the following panels just for the exact layout I want:
3 VerticalPanels, 6 HorizontalPanels, 1 Tree and 1 Grid
The problem is: when I open the URL for the first time, it takes 1:09 min before anything loads. Every next time I paste the URL, it takes about a second to display the app.
(After the page has loaded, everything runs smoothly; widgets display in about a second.)
I read this article: http://blog.trifork.com/2007/11/30/optimizing-startup-time-for-gwt-hosted-mode/ , but the server runs the app in production mode (GWT.getScript() returns true). I also went through a few topics on Stack Overflow, but I can't tell what loading time is "normal" for a small app.
If more than 30 seconds is needed before anything runs, then GWT seems unacceptable for a typical user, who may conclude on the first visit that the link is broken... I don't know how it works: is GWT rebuilding and recompiling the page for every new user request?
I don't know how it works - is GWT rebuilding and recompiling the page for every new user request?
No.
GWT uses caching. The initial loading time really depends on many factors.
When the user's browser requests the page for the very first time, all the resources related to the page are loaded. That takes a few seconds and depends on your connection speed.
Once the complete page has loaded, requesting a new page or reloading the current one does not load all the resources again.
As for rebuilding and recompiling on each request: no, that is not what happens. GWT generates permutations that are specific to each browser; every major browser has its own permutation. If you make the request from Firefox, for example, the permutation for Firefox is loaded. These permutations are generated at compile time, when you build the project in your IDE before deploying it.
The very first time the request hits the browser, all the files for that specific permutation are loaded and cached by the browser. From then onwards you won't see any new files being loaded (you can verify this with Firebug).
Use code splitting, lazy initialization and data compression techniques.
Check the yourapp.gwt.xml file to see whether you have inherited modules that take a long time to load.
If so, the first load takes time; the way GWT works, it then maintains a cache for future loads, which explains why the first request is slow and the following ones take seconds.
GWT does not rebuild or recompile the page; it simply serves the page you compiled.
Take a look at GWT.runAsync; it lets you avoid loading everything at startup and fetch only what you need :)
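A minimal runAsync sketch (HeavyTab is a placeholder for whatever widget you want to defer):

import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.RootPanel;

// code reachable only from onSuccess is compiled into a separate
// JavaScript fragment and downloaded the first time this call runs
GWT.runAsync(new RunAsyncCallback() {
    @Override
    public void onFailure(Throwable reason) {
        Window.alert("Could not load code: " + reason.getMessage());
    }

    @Override
    public void onSuccess() {
        RootPanel.get().add(new HeavyTab()); // placeholder widget
    }
});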

Tomcat 7 keeps using an old jsp after an update

We (the people at my company) created an application for Tomcat a while ago that uses servlets and JSPs as a GUI.
We've just finished an update in which one of those JSPs is heavily altered. But when we replace the WAR file on one particular computer, it keeps using the old JSP (all traces of which were deleted from said computer), whereas the update works perfectly everywhere else.
The problem persists even after the computer was restarted.
Has anyone ever seen such behaviour? What can be done about it?
This may be caused by caching. First, make sure the project is cleaned properly, then check the HTML of the page to see whether it contains the old code or the latest. If the old code is there, the browser is serving stale files, so clear the browser cache and try again.
How to clear the cache: Firefox, Chrome
Does "deleting all traces" also include a "clean" on the server? You probably know that Tomcat keeps some classes (especially compiled JSPs) in its "work" folder.
The problem is very likely caused by a timestamp mismatch: the newly uploaded JSP or servlet has an older timestamp than the cached copy on the server. To avoid the problem, ensure the system clock of the machine the JSP or servlet is uploaded from is in sync with the system clock of the machine the server runs on. To remedy the problem, check the following:
• Make sure the date, time and time zone of the file transfer client (WinSCP is known to cause this problem) are in sync with the Apache Tomcat server.
• Verify the JSP's date, time and time zone are up to date with the Apache Tomcat server. If not, re-deploy the JSP with the correct timestamp.
• If updating the JSP timestamp fails, the last resort is to remove the JSP from the Apache Tomcat work directory, provided you don't have important sessions to keep.
Stop the server.
Delete webapps/APP_NAME folder
Replace webapps/APP_NAME.war with the new one.
Start the server.
This should help :)
I had the same problem, but it wasn't Tomcat.
My Apache was set to allow browser caching of text/html and text/plain types for one month.
The call to that page was made via JavaScript, and even when you reload the page with Ctrl+F5, those JavaScript calls are still served from the browser cache.
After clearing the browser cache I got the right page.
Since then I no longer enable browser caching for those types in Apache.
The problem persists even after the computer was restarted.
If you've deleted the JSPs, then the problem has to be compiled JSPs in the work directory tree (by default work/Catalina/localhost/<app-name> under the Tomcat installation). Take off and nuke them from orbit :-)
