This morning I ran into some issues using the cfdocument tag. When a user runs a report, the report just hangs. The report had been running for years with no issues. I even took all of the code out and just put in the following:
<cfdocument format="PDF">this is a test</cfdocument>
The browser still hangs, there are no errors, and the CPU does not spike. I am not sure why this does not work. Any suggestions?
I had a bunch of programs that included file:/// in a cfdocument tag.
I had thought that the file reference would be more efficient; however, under ColdFusion 2016, it caused occasional, unpredictable server hangs.
The cfdocument process moves all required files into a work folder, and then produces the pdf.
In CF 2016, there is a setting ("Clear Temporary Files Created During CFaaS after (Minutes)") that by default clears out work files older than 30 minutes.
However, if you use a file:/// reference, the creation date of that file is not reset, so when that cleanup process runs it will delete the file immediately - it is always older than 30 minutes.
If the cfdocument process is halfway through processing and it collides with the Clear Temporary Files process, then a required file disappears and cfdocument just hangs.
Then subsequent programs with a call to cfdocument also hang, as only one is allowed to execute at any one time.
This eventually fills up all the CF processing slots and requires a restart of CF to get things going again.
Adobe ColdFusion has been known to have bugs where an error in the code inside <cfdocument></cfdocument> (e.g. improperly nested or unclosed HTML tags, DB query errors, an invalid variable) can fail silently without showing an exception. When this happens, all other cfdocument requests pile up behind it. This can happen even when other pages, not using cfdocument, finish just fine.
As you have seen, restarting the CF service also restarts the PDF service and clears the 'pileup'.
The solution is to debug the code inside the cfdocument tag so that it doesn't throw an exception. Since your issue sounds intermittent, that can be really difficult to track down. You could wrap everything inside the cfdocument in a cftry, then cfcatch any exceptions and email them to yourself.
We have been using Eclipse as our default IDE for some time and it has been a consistent source of headaches, but moving away from it hasn't been an option because our build process is inextricably tied to it. I recently set us up to be able to build from VSCode. The problem is that the build works on every computer except mine, and I can't for the life of me figure out why. When run from my computer, and only my computer, files that have been modified fail to be detected during the build. What is even stranger is that when Passive FTP is set to "yes", the files are simply ignored, as if they hadn't been modified at all. When it is set to "no", however, I get this error:
SendPrivateMCS:
[ftp] sending files
[ftp] transferring C:\...\newtest.cfm
BUILD FAILED
C:\...\build.xml:1262: could not put file: 500 Illegal PORT command.
I get why I am getting this error. I am trying to send to a server configured for passive FTP. That's not a mystery. What is odd is that when active FTP is used, the test file that I modified is detected and an attempt is made to send it to the server, but when passive FTP is used, it is ignored altogether:
SendPrivateMCS:
[ftp] sending files
[ftp] 0 files sent
[ftp] sending files
[ftp] 0 files sent
[ftp] sending files
[ftp] 0 files sent
[ftp] sending files
[ftp] 0 files sent
Again, this only happens on my computer. The other devs are building just fine, in precisely the same way I am, and having no problems, which leads me to believe this is a problem with Java, Ant, or some local FTP setting I'm not aware of, but I can't really figure out where to even begin looking. The build.xml file is the same one we have been using forever and hasn't been modified. I have tried reinstalling Java, reinstalling Ant, altering my environment variables, and looking for improperly uninstalled/deleted files, and so far nothing has worked. I know all I have provided is some build output, but there isn't really any code associated with this. I am just trying to run an Ant build from PowerShell (ant Deploy -DDeployserver=foo) using a proven build.xml that has been in use here for at least half a decade. For some reason, it only sees my files when I use active FTP, and I can't find any resources to help me figure out a possible cause. Any suggestions?
OK, so I'm posting an answer to my own question because I finally came back to this project and we figured out the issue... sort of. Our server time is 6 hours ahead of my local time, and the build file, for some reason, wasn't sure which time to use. I'm not sure why this happened, as the build file worked unmodified for everyone else, and I verified that they were all seeing the same datetime entries I was. However, setting the server time zone explicitly for each ftp call did the trick for me and didn't break anything for them, so now we have a universal VSCode build that works for everybody. I'd still be interested if anyone has any information on why this worked at all. Since modifying the code wasn't necessary for anyone else, I assumed the problem must be with system settings rather than the code, but it ended up being fixed by modifying the build.xml. The problem is fixed, but I still have no idea why it didn't work in the first place or why the fix worked, and I would love it if anyone could clarify that. In any case, the issue is technically fixed, and if anyone else runs into this problem, at least this is something you can try.
I have the following problem: I write an Excel file to C:\Tomcat85\webapps\MyWebApp\Excel\myExcel.xls.
As soon as my Java application finishes writing the file, it triggers a download so the user can work with it. This gives a nasty 404 error.
If I wait a few seconds and reload the page, it downloads all right (or if I add a five-second sleep in Java, it works the same).
So what I conclude is that Tomcat is taking 5 seconds to recognize that this new Excel file exists and only then is it able to serve it.
Is there any way to make Tomcat perform this task faster? Maybe using some configuration in web.xml to treat that "/Excel/" folder differently.
Windows 10 64-bit, Tomcat 8.5, Java 7 (I could try Java 8, but I don't think it will make a difference).
Some code:
new ExcelExport(remoteHandle, context).execute( outFileName, outMessage);
// Thread.sleep(5000);
httpContext.wjLoc = formatLink(outFileName);
The sleep is commented out or left in depending on the test. Without the sleep I get the 404; with the 5-second sleep it works fine.
httpContext.wjLoc just performs the download, as a link to a file.
The writing is working fine, as I can see the file ready and writable in File Explorer, but if I try to open it by URL I get the same 404.
Static resources are cached by default. The amount of time in milliseconds between revalidations of cache entries is defined by the cacheTtl attribute of Tomcat's <Resources> element, referenced in this documentation. By default its value is 5000 milliseconds (5 seconds).
If you want to disable the cache, just set cachingAllowed to false on that element.
I am trying to programmatically purge log files from a running(!) system consisting of several Java and non-Java servers. I use Java's File.delete() operation and it usually works fine. I am also perfectly fine with log files that are currently in use not being deleted, so I just log a warning whenever File.delete() returns false.
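Stripped down, the purge loop looks more or less like this (the directory and file pattern are placeholders, not my real configuration):
import java.io.File;
public class LogPurger {
    public static void main(String[] args) {
        File logDir = new File("D:/logs");  // placeholder directory
        File[] logs = logDir.listFiles((dir, name) -> name.endsWith(".log"));
        if (logs == null) {
            return;  // directory missing or unreadable
        }
        for (File log : logs) {
            if (!log.delete()) {
                // Expected for files currently in use - just note it and move on
                System.out.println("WARN: could not delete " + log);
            }
        }
    }
}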
However, log files which are currently still being written to by NON-Java applications (Postgres, Apache HTTPD, etc.; Java applications might also be affected, but I haven't noticed it yet, and they all use the same logging framework anyway, which seems to be OK) are not actually deleted (which is what I expected), and yet File.delete() returns "true" for them.
Not only do these files still exist on the file system (Windows Explorer and "dir" still show them), but afterwards they are inaccessible: when I try to open them with a text editor etc. I get "access denied" or similar error messages; when I try to copy them with Explorer, it also claims that I do not have permission; and when I check a file's "properties" with Explorer, it tells me "You do not have permission to view or edit this object's permissions".
Just to be clear: before I ran the File.delete() operation, I could access or delete these files without any problems; it is the delete operation that "breaks" them. Once I stop the application, the file disappears, and on restart the application creates it from scratch and everything is back to normal.
The problem is that if the application is NOT restarted after the log file purge operation, it logs to nirvana.
This behavior reminds me a bit of the file deletion behavior of Linux: if you delete a file that is still held open by an application, it disappears from the file system, but the application - still holding a file handle - will happily continue writing to that file, which you will never be able to access afterwards. The only difference is that here the files are still visible in the FS, but otherwise just as inaccessible.
I should mention that both my Java program and the applications themselves are running with "system" user.
I also tried Files.delete(), which allegedly throws an IOException indicating the error... but it seems there is no error.
To work around the problem, I tried checking whether the files are currently locked, using the method described here: https://stackoverflow.com/a/1390669/5837050, but this only works for some of the files, not all of them.
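In case it helps, the kind of check I mean looks roughly like this (a simplified sketch of my own; it may well differ from the linked answer in its details):
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
public class FileInUseCheck {
    // Best-effort check: try to grab an exclusive lock; if that fails, assume the file is in use
    static boolean isProbablyInUse(File file) {
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel channel = raf.getChannel()) {
            FileLock lock = channel.tryLock();
            if (lock == null) {
                return true;   // another process holds an overlapping lock
            }
            lock.release();
            return false;
        } catch (Exception e) {
            return true;       // cannot even open it for writing - treat it as in use
        }
    }
}
My guess is that this only catches files that are opened exclusively or explicitly locked, not files that are merely open with shared write access, which would explain why it works for some files and not others.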
I basically need a reliable way (at least for Windows, if it worked also for Linux, that would be great) to determine if a file is still being used by some program, so I could just not delete it.
Any hints appreciated.
I haven't reproduced it, but it seems like expected OS behaviour: normally different applications run as different users which own these files. I understand, though, that you want something like a master purge job in Java which checks which log files are not in use and deletes them (running with sufficient privileges, of course).
So, considering that the OS behaviour is not going to change, I would suggest configuring your logs with "rolling file appender" policies and then checking the files that match those policies.
Check the rolling policies for logback to get an idea:
http://logback.qos.ch/manual/appenders.html#onRollingPolicies
For example, if your appender's rolling policy is "more than one day or more than 1 GB", then just delete files whose last modification date is more than one day old or whose size exceeds 1 GB. With this rule you can be reasonably sure that you are only deleting log files that are no longer in use.
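A rough sketch of such a purge, using the last-modified time (the directory, glob pattern and one-day threshold are placeholders to adapt to your own policy):
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
public class RolledLogPurge {
    public static void main(String[] args) throws IOException {
        Path logDir = Paths.get("C:/logs");                        // placeholder directory
        Instant cutoff = Instant.now().minus(1, ChronoUnit.DAYS);  // matches a "one day" rolling policy
        try (DirectoryStream<Path> files = Files.newDirectoryStream(logDir, "*.log*")) {
            for (Path file : files) {
                FileTime lastModified = Files.getLastModifiedTime(file);
                // Only touch files the appender has already rolled away from
                if (lastModified.toInstant().isBefore(cutoff)) {
                    try {
                        Files.delete(file);
                    } catch (IOException e) {
                        System.err.println("Skipping " + file + ": " + e.getMessage());
                    }
                }
            }
        }
    }
}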
Note that with a proper rolling policy you may not even need your purge method; look at this configuration example:
<!-- keep 30 days' worth of history capped at 3GB total size -->
<maxHistory>30</maxHistory>
<totalSizeCap>3GB</totalSizeCap>
I hope this could help you a bit!!
I have to create a jar with a Java application that fulfills the following requirements:
There is XML data packed in the jar which is read the first time the application is started. With every subsequent start of the application, the data is loaded from a dynamically created binary file.
A customer should not be able to reset the application to its primary state (e.g. if the binary file gets deleted for some reason, the application should fail to run again and give an error message).
All of this should not depend on the OS it is running on (which means that e.g. setting a registry entry in Windows won't do the job).
Summarizing: I want to prevent a once-started application from being reset, in order to limit illegitimate reuse of the application.
Now to my ideas on how to accomplish that:
Delete the XML from the jar at the first run (so far I have come to the understanding that it is not possible to let an application edit its own jar. Is that true?)
Set a variable/property/setting/whatever in the jar permanently at the first run (is that possible?)
Any suggestions/ideas on how to accomplish that?
Update:
I did not find a solution for this exact problem, but I found a simple workaround: along with my software I ship a certain file which gets changed after the program is started the first time. Of course, if someone keeps a copy of the original file, they can always replace it and start over.
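In essence the workaround boils down to something like this (the marker file name and its contents here are just placeholders for illustration):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class FirstRunMarker {
    // Shipped alongside the jar; both the name and the contents are made up for this sketch
    private static final Path MARKER = Paths.get("app.marker");
    static boolean isFirstRun() throws IOException {
        return "fresh".equals(new String(Files.readAllBytes(MARKER), StandardCharsets.UTF_8).trim());
    }
    static void markAsUsed() throws IOException {
        Files.write(MARKER, "used".getBytes(StandardCharsets.UTF_8));
    }
    public static void main(String[] args) throws IOException {
        if (!Files.exists(MARKER)) {
            throw new IllegalStateException("Marker file missing - refusing to start.");
        }
        if (isFirstRun()) {
            // First start: read the XML packed in the jar, build the binary file, then flip the marker
            markAsUsed();
        } else {
            // Every later start: load the dynamically created binary file instead
        }
    }
}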
Any user able to delete the binary file will, with enough time, also be able to revert any changes made to the jar. When the only existing part of the application is in the hands of the user, you won't be able to prevent changes to it.
A user can easily just keep a backup of the original jar, make a copy, use that for one run, delete it, copy the original jar again, and so on. You would need some sort of mechanism outside the user's machine, like an activation server: the user gets one code to activate an account and can't use that code again.
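A very rough sketch of what the client side of such a check could look like (the endpoint URL and the one-code-per-account protocol here are entirely made up; the real work happens on the server, which records used codes):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
public class ActivationCheck {
    public static void main(String[] args) throws Exception {
        String activationCode = args.length > 0 ? args[0] : "";
        // Hypothetical endpoint: the server marks the code as used and rejects it next time
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/activate?code=" + activationCode))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Activation rejected - code already used or invalid.");
        }
        // Continue with normal startup only after the server has accepted the code
    }
}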
I'm using the Java AWS SDK to download a large number of files from one S3 bucket, edit the files, and copy them back to a different S3 bucket.
I think it's supposed to work fine, but there is one line that keeps throwing me exceptions:
when I use
myClient.getObject(myGetObjectRequest, myFile)
I get an AmazonClientException saying there are too many files open.
Now, each time I download a file, edit it and copy it back to the bucket, I delete the temporary files I create.
I'm assuming it's taking a few milliseconds to delete the file, and maybe that's why I'm getting these errors.
Or is it maybe because of the open files on Amazon's side?
Anyway, I made my application sleep for 3 seconds each time it encounters this exception, so that it would have time to close the files, but that just takes too much time, even if I take it down to 1 second.
Has anybody encountered this problem?
What should I do?
Thanks
Do you actually call "myFile.close()" at some point?
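For what it's worth, a download-edit-upload round trip can be kept to a bounded number of open handles with something like this (the bucket names, key and the trivial "edit" are made up; this is a sketch, not your actual code):
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.List;
public class S3EditCopy {
    public static void main(String[] args) throws IOException {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        File temp = File.createTempFile("s3-edit-", ".tmp");
        try {
            // getObject(request, file) streams straight to disk and closes its own handle
            s3.getObject(new GetObjectRequest("source-bucket", "some/key.txt"), temp);
            // Edit step: Files.readAllLines/write open and close their streams for us
            List<String> lines = Files.readAllLines(temp.toPath(), StandardCharsets.UTF_8);
            lines.replaceAll(String::toUpperCase);
            Files.write(temp.toPath(), lines, StandardCharsets.UTF_8);
            s3.putObject("target-bucket", "some/key.txt", temp);
        } finally {
            // Remove the temp file before moving on to the next object
            Files.deleteIfExists(temp.toPath());
        }
    }
}
If streams or readers are opened anywhere in the edit step without try-with-resources, each leaked handle counts against the process limit, which is typically what a "too many open files" error points to.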