Recently, I reviewed our application code and found an issue.
/**
 * truncate cat tree(s) from the import file
 */
private void truncateCatTreesInFile(File file, String userImplCode) throws Exception
{
    String rowStr = null, treeCode = null;
    BufferedReader reader = new BufferedReader(new FileReader(file));
    rowStr = reader.readLine(); // skip 1st row - header
    Impl impl;
    List<String> row = null;
    Set<String> truncatedTrees = new HashSet<String>();
    while ((rowStr = reader.readLine()) != null)
    {
        row = CrudServiceHelper.getRowFromFile(rowStr);
        if (row == null) continue;
        impl = getCatImportImpl(row.get(ECatTreeExportImportData.IMPL.getIndex()), userImplCode);
        treeCode = row.get(ECatTreeExportImportData.TREE_CODE.getIndex());
        if (truncatedTrees.contains(treeCode)) continue;
        truncatedTrees.add(treeCode);
        CatTree catTree = _treeDao.findByCodeAndImpl(treeCode, impl.getId());
        if (catTree != null) _treeDao.makeTransient(catTree);
    }
    _treeDao.flush();
}
Looking at the above code, the "reader" is never closed. I thought that could be an issue, but actually it works just fine: the file can still be deleted by Tomcat.
Basically, what I am trying to do is upload a file from the browser and generate SQL based on the file to insert data into our database. After everything is done, the file is deleted.
I am surprised this code works fine. Does anybody have an idea why? I tried to Google it, but I did not find anything.
Thanks,
Jack
Not closing a reader may result in a resource leak. Deleting an open file may still be perfectly fine.
Under Linux (and other Unix variants), deleting a file is just unlinking a name from it. A file with no names left actually gets freed. So opening a file, deleting it (removing its name), and then reading from and writing to it is a well-known way to obtain a temporary file. Once the file is closed, the space is freed, but not earlier.
Under Windows, certain programs lock the files they read, which prevents other processes from removing such a file. But not all programs do so. I don't have a Windows machine around to actually test how Java handles this.
The fact that the code does not crash does not mean that the code works completely correctly. The problem you noticed might become visible only much later, if the app just consumes more and more RAM due to the leak. This is unlikely, though: the garbage collector will eventually close readers, and probably soon enough, because reader is local and never leaks out of the method.
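To avoid relying on the garbage collector at all, the reader can be closed deterministically with try-with-resources. A minimal sketch of the pattern (the file contents and counting logic here are stand-ins, not the original application's code):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SafeRead {
    // Reads every line of a file; the reader is closed even if an exception is thrown.
    static int countLines(Path file) throws IOException {
        int count = 0;
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            while (reader.readLine() != null) {
                count++;
            }
        } // reader.close() runs here, releasing the OS-level file handle
        return count;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, java.util.List.of("header", "row1", "row2"));
        System.out.println(countLines(tmp)); // prints 3
        Files.delete(tmp); // deletion succeeds because the handle was released
    }
}
```

With this shape, the file handle is gone the moment the method returns, instead of whenever the collector happens to run.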
Here is my simple app that reads a resource file, and it works fine:
public class App {
    public static void main(String[] args) throws IOException {
        BufferedReader bufferedReader =
                new BufferedReader(
                        new FileReader(
                                Objects.requireNonNull(
                                        Objects.requireNonNull(
                                                App.class.getClassLoader().getResource("file.txt")
                                        ).getFile()
                                )
                        )
                );
        String line = "";
        while ((line = bufferedReader.readLine()) != null) {
            System.out.println(line);
        }
    }
}
I want to make an executable jar file that includes the resource file. I followed this. Unfortunately, when I run the jar it can't find the file. Error: ReadFileExample.jar!/file.txt (No such file or directory). Actually, I don't need to use an IDE; if it is easier to do from the terminal or with Maven plugins, please let me know how I can add my resource file to the jar, either via the IDE, the terminal, or any Maven plugin.
new FileReader(
This bit means it will never work. FileReader reads files. And only files. Hence the name. You 'luck' into it working during dev, as the resource is an actual file at that point.
There's good news though. Your code is incredibly complicated and can be made much simpler:
.getClassLoader().getResource(...) is more code AND worse than just .getResource(...): getClassLoader() can return null in exotic cases. Make sure to adjust the parameter; it is now relative to the package your class is in, and you can get back to 'go from root' by putting a slash in front.
Don't use FileReader, obviously. Actually, never use that class.
Your code fails to specify encoding. This is bad; the encoding will thus default to whatever the system you run it on has as default encoding which therefore by definition is not guaranteed to match what you stuck in that jar file. Always be explicit.
These are 25 year old APIs, java has nicer ones these days. Let's use them to try to make this code easier to read.
The requireNonNull calls are useless here; you already get an NPE if you try to pass a null ref to e.g. FileReader's constructor.
Your code opens a resource and doesn't safely close it. This causes you to leak handles, which means your app will soon be 'dead' - it has a bunch of objects still in memory that are holding open OS-level file handles and the OS simply won't give any further handles to your process. Any attempt to do anything that interacts with the OS (make network connections, open files, etcetera) will just flat out not work anymore, until you shut down the VM and start it up again. That's why try-with-resources exists, to protect from this. This problem is hard to catch with trivial apps and tests.
Putting it all together:
try (var raw = App.class.getResourceAsStream("/file.txt");
     var in = new BufferedReader(new InputStreamReader(raw, StandardCharsets.UTF_8))) {
    String line;
    while ((line = in.readLine()) != null) {
        // process line here.
    }
}
Like the title says, I'm not able to read the contents of a file (a CSV file) when running the same code in a Linux container.
private Set<VehicleConfiguration> loadConfigurations(Path file, CodeType codeType) throws IOException {
    log.debug("File exists? " + Files.exists(file));
    log.debug("Path " + file.toString());
    log.debug("File " + file.toFile().toString());
    log.debug("File absolute path " + file.toAbsolutePath().toString());

    String line;
    Set<VehicleConfiguration> configurations = new HashSet<>(); // this way we ignore duplicates in the same file
    try (BufferedReader br = new BufferedReader(new FileReader(file.toFile()))) {
        while ((line = br.readLine()) != null) {
            configurations.add(build(line, codeType));
        }
    }
    log.debug("Loaded " + configurations.size() + " configurations");
    return configurations;
}
The logs return "true" and the path for the file on both systems (locally on Windows and in the Linux Docker container). On Windows it loads "15185 configurations", but in the container it loads "0 configurations".
The file exists on Linux; I use bash and checked it myself. Using the head command I can see the file has lines.
Before this I tried with Files.lines like so:
var vehicleConfigurations = Files.lines(file)
.map(line -> build(line, codeType))
.collect(Collectors.toCollection(HashSet::new));
But this has a problem (in the container only) regarding the contents. It reads the file, but not the whole file: it reaches a given line (say line 8000) and does not read it completely (it reads about half the line, up to the comma separator). Then I get a java.lang.ArrayIndexOutOfBoundsException because my build method tries to split the line and I access index 1 (which it doesn't have, only 0):
private VehicleConfiguration build(String line, CodeType codeType) {
    String[] cells = line.split(lineSeparator);
    var vc = new VehicleConfiguration();
    vc.setVin(cells[0]);
    vc.setCode(cells[1]);
    vc.setType(codeType);
    return vc;
}
What could be the issue? I don't understand how the same code (in Java) works on Windows but not on a Linux container. It makes no sense.
I'm using Java 11. The file is copied using volumes in a docker-compose file like this:
volumes:
- ./file-sources:/file-sources
I then copy the file (using cp command on the linux container) from file-sources to /root because that's where the app is listening for new files to arrive. File contents are then read with the methods I described. Example file data (does not have weird characters):
Thanks in advance.
UPDATE: Tried with the newBufferedReader method; same result (works on Windows, doesn't work in the Linux container):
private Set<VehicleConfiguration> loadConfigurations(Path file, CodeType codeType) throws IOException {
    String line;
    Set<VehicleConfiguration> configurations = new HashSet<>(); // this way we ignore duplicates in the same file
    try (BufferedReader br = Files.newBufferedReader(file)) {
        while ((line = br.readLine()) != null) {
            configurations.add(build(line, codeType));
        }
    }
    log.debug("Loaded " + configurations.size() + " configurations");
    return configurations;
}
wc -l in the linux container (in /root) returns: 15185 hard_001.csv
Update: This is not a solution, but I found that if I drop the files directly into the file-sources folder and make that the folder the code listens to, the files are read. So the problem seems to appear when using cp/mv inside the container to move the files to another folder. Maybe the file is read before it is fully copied/moved, and that's why it reads 0 configurations?
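If the cp/mv race suspected above is the culprit, one way to rule it out is to stage the copy under a temporary name in the target directory and then rename it atomically, so a watcher never observes a half-written file. A sketch under that assumption (the staging-name convention and paths are hypothetical, not from the original setup):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicDrop {
    // Copies into a hidden staging name first, then renames atomically.
    // Because the staging file lives in the target directory, the final move
    // stays on one filesystem, which ATOMIC_MOVE requires.
    static Path deliver(Path source, Path targetDir) throws IOException {
        Path staging = targetDir.resolve("." + source.getFileName() + ".part");
        Files.copy(source, staging, StandardCopyOption.REPLACE_EXISTING);
        Path target = targetDir.resolve(source.getFileName().toString());
        return Files.move(staging, target, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("drop");
        Path src = Files.createTempFile("data", ".csv");
        Files.writeString(src, "vin1,code1\n");
        Path delivered = deliver(src, dir);
        // The watcher side only ever sees the complete file.
        System.out.println(Files.readString(delivered));
    }
}
```

The same idea works from the shell: cp into a dotfile name in the destination directory, then mv it into place; mv within one filesystem is a rename and is atomic.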
There are a few methods in java you should never use. ever.
new FileReader(File) is one of them.
Any time that you have a thing that represents bytes and somehow chars or Strings fall out, or vice versa? Don't ever use those, unless the spec of said method explicitly points out that it always uses a pre-set charset. Almost all such methods use the 'system default charset' which means that the operation depends on the machine you run it on. That is shorthand for 'this will fail, and your tests won't catch it'. Which you don't want.
Which is why you should never use these things.
FileReader has been fixed (there is a second constructor that takes a charset), but that's only since JDK11. You already have the nice new API, why do you switch back to the dinky old File API? Don't do that.
All the various methods in Files, such as Files.newBufferedReader, are specced to do UTF-8 if you don't specify (in that way, Files is more useful, and unlike most other java core libraries). Thus:
try (BufferedReader br = Files.newBufferedReader(file)) {
which is just.. better.. than your line.
Now, it'll probably still fail on you. But that's good! It'll also fail on your dev machine. Most likely, the file you are reading is not, in fact, in UTF_8. This is the likely guess; most linuxen are deployed with a UTF_8 default charset, and most dev machines are not; if your dev machine is working and your deployment environment isn't, the obvious conclusion is that your input file is not UTF_8. It does not need to be what your dev machine has as default, either; something like ISO_8859_1 will never throw exceptions, but it will read gobbledygook instead. Your code may seem to work (no crashes), but the text you read is still incorrect.
Figure out what text encoding you got, and then specify it. If it's ISO_8859_1, for example:
try (BufferedReader br = Files.newBufferedReader(file, StandardCharsets.ISO_8859_1)) {
and now your code no longer has the 'works on some machines but not on others' nature.
Inspect the line where it fails, in a hex editor if you have to. I bet you dollars to donuts there will be a byte there which is 0x80 or higher (in decimal, 128 or higher). Everything up to and including 127 tends to mean the exact same thing in a wide variety of text encodings, from ASCII to any ISO-8859 variant to UTF-8 to Windows Cp1252 to MacRoman to so many other things, so as long as it's all just plain letters and digits, having the wrong encoding is not going to make any difference. But once you get to 0x80 or higher they're all different. Armed with that byte plus some knowledge of what character it is supposed to be, you usually have a good start in figuring out what encoding that text file is in.
NB: If this isn't it, check how the text file is being copied from your dev machine to your deployment environment. Are you sure it is the same file? If it's being copied through a textual mechanism, charset encoding again can be to blame, but this time in how the file is written, instead of how your java app reads it.
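The byte inspection suggested above can also be done in code instead of a hex editor. A small sketch (the sample string is a stand-in for the failing line):

```java
import java.nio.charset.StandardCharsets;

public class FindHighBytes {
    // Prints the offset and hex value of every byte >= 0x80 and returns how
    // many there were; those are exactly the bytes whose meaning differs
    // between encodings.
    static int dumpHighBytes(byte[] data) {
        int count = 0;
        for (int i = 0; i < data.length; i++) {
            int b = data[i] & 0xFF;
            if (b >= 0x80) {
                System.out.printf("offset %d: 0x%02X%n", i, b);
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // "é" is the single byte 0xE9 in ISO-8859-1, but the two bytes
        // 0xC3 0xA9 in UTF-8 - seeing which pattern is in the file tells
        // you which encoding you are looking at.
        byte[] utf8 = "café".getBytes(StandardCharsets.UTF_8);
        dumpHighBytes(utf8); // prints offsets 3 (0xC3) and 4 (0xA9)
    }
}
```

Run it over the bytes of the offending line (e.g. read with Files.readAllBytes) and compare the high bytes against an encoding table.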
This is a very confusing problem.
We have a Java application (Java 8, running on JBoss 6.4) that loops over a certain number of objects and writes some rows to a file on each round.
On each round we check whether we received the File object as a parameter; if we did not, we create a new object and create the physical file:
if (file == null) {
    File file = new File(filename);
    try {
        file.createNewFile();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
So the idea is that the file gets created only once; after that, the step is skipped and we proceed straight to writing. The variable filename is not a path, it's just a file name with no path, so the file gets created in the path jboss_root/tmp/hsperfdata_username/.
edit1. I'll add here also the methods used from writing if they happen to make relevance:
fw = new FileWriter(indeksiFile, true); // append = true
bw = new BufferedWriter(fw);
out = new PrintWriter(bw);
...
out.println(..)
...
out.flush();
out.close(); // this flushes as well -> line above is useless
So now the problem is that occasionally, though quite rarely, the physical file disappears from the path in the middle of the process. The Java object reference is never lost, but it seems the file itself disappears, because the code automatically creates the file again at the same path and keeps on writing to it. This would not happen if the condition file == null did not evaluate to true. The effect, obviously, is that we lose the rows which were written to the previous file. The Java application does not notice any errors and keeps on working.
So, I would have three questions which are strongly related for which I was not able to find answer from google.
If we call the method File.createNewFile(), is the resulting file a permanent file in the filesystem or some JVM proxy file?
If it's a permanent file, do you have any idea why it's disappearing? The default behavior in our case is that at some point the file is always deleted from the path. My guess is that the same mechanism is deleting the file too early; I just don't know how to control that mechanism.
My best guess is that this is related to this path jboss_root/tmp/hsperfdata_username/ which is some temp-data folder created by the JVM and probably there is some default behavior that cleans the path. Am I even close?
Help appreciated! Thanks!
I have never used File.createNewFile in my code: it is not needed. When you afterwards actually write to the file, it is created anew, or appended to.
In any case there is a race on the file system, and as these are not atomic actions, you might end up with something unstable.
So you want to write to a file, either appending on an existing file, or creating it.
For UTF-8 text:
Path path = Paths.get(filename);
try (PrintWriter out = new PrintWriter(
Files.newBufferedWriter(path, StandardOpenOption.CREATE, StandardOpenOption.APPEND),
false)) {
out.println("Killroy was here");
}
After comment
Honestly, as you are interested in the cause, it is hard to say. An application restart or I/O exceptions would show up in the logs. Add logging to a dedicated log for appends to these files, and add a (logged) periodic check for those files' existence.
Safe-guard
Here we are doing repeated physical access to the file system.
To prevent appending to a file twice at the same time (of which I would expect an exception), one can make a critical section in some form.
// For 16 semaphores:
final int semaphoreCount = 16;
final int semaphoreMask = 0xF;
Semaphore[] semaphores = new Semaphore[semaphoreCount];
for (int i = 0; i < semaphores.length; ++i) {
    semaphores[i] = new Semaphore(1, true); // FIFO
}

int hash = filename.hashCode() & semaphoreMask; // toLowerCase on Windows
Semaphore semaphore = semaphores[hash];
semaphore.acquire();
try {
    // ... append
} finally {
    semaphore.release();
}
File locks would have been a more technical solution, which I would not like to propose.
The best solution, you perhaps already have, would be to queue messages per file.
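A minimal sketch of that queue-per-file idea, using one single-thread executor per file name so appends to the same file are serialized (the class and method names here are my own, not from the original application):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerFileAppender {
    // One single-thread executor per file name: two append(..) calls for the
    // same file can never run concurrently, so there is no race on the file.
    private final ConcurrentHashMap<String, ExecutorService> queues = new ConcurrentHashMap<>();

    Future<?> append(Path file, String line) {
        ExecutorService queue = queues.computeIfAbsent(
                file.toString(), k -> Executors.newSingleThreadExecutor());
        return queue.submit(() -> {
            // CREATE + APPEND: creates the file on first write, appends after.
            Files.write(file,
                    (line + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            return null;
        });
    }

    void shutdown() {
        queues.values().forEach(ExecutorService::shutdown);
    }
}
```

Appends to different files still run in parallel; only writes targeting the same file wait on each other, which is the property the semaphore striping above approximates.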
I am trying to use a simple program to read from a log file. The code used is as follows:
RandomAccessFile in = new RandomAccessFile("/home/hduser/Documents/Sample.txt", "r");
String line;
while (true) {
    if ((line = in.readLine()) != null) {
        System.out.println(line);
    } else {
        Thread.sleep(2000);
    }
}
The code works well for new lines being added to the log file, but it does not replicate the rollover process; i.e., when the content of the log file is cleared, I expect the Java console to continue reading text from the first line newly written to the log. Could that be possible? What changes need to be made to the existing code to achieve that?
At my work I had to deal with the processing of logs that can be rolled over without missing any data. What I do is store a tiny memo file that contains:
A hash of the first 1024 bytes (or less) of the log (I used SHA-1 or something because it's easy)
The number of bytes used to generate the hash
The current file position
I close the log file after processing all lines, or some maximum number of lines, and update the memo file. I sleep for a tiny bit and then open the log file again. This allows me to check whether a rollover has occurred. A rollover is detected when:
The current file is smaller than the last file position
The hash is not the same
In my case, I can use the hash to find the correct log file, and work backwards to get up to date. Once I know I've picked up where I left off in the correct file, I can continue reading and memoizing my position. I don't know if this is relevant to what you want to do, but maybe that gives you ideas.
If you don't have any persistence requirements, you probably don't need to store any memo files. If your 'rollover' just clears the log and doesn't move it away, you probably don't need to remember any file hashes.
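The hash-of-the-head idea described above could be sketched like this (a minimal illustration of the detection check only, not the full memo-file machinery; reading the whole file to take the head is a simplification for short logs):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class RolloverDetector {
    // SHA-1 of the first 1024 bytes (or fewer) of the file, as the answer describes.
    static byte[] headHash(Path file) throws IOException, NoSuchAlgorithmException {
        byte[] all = Files.readAllBytes(file);
        byte[] head = Arrays.copyOf(all, Math.min(all.length, 1024));
        return MessageDigest.getInstance("SHA-1").digest(head);
    }

    // A rollover happened if the file shrank below our last read position,
    // or its leading bytes no longer hash to the remembered value.
    static boolean rolledOver(Path file, long lastPosition, byte[] lastHash)
            throws IOException, NoSuchAlgorithmException {
        return Files.size(file) < lastPosition
                || !MessageDigest.isEqual(lastHash, headHash(file));
    }
}
```

On each reopen: if rolledOver(..) is false, seek to the saved position and keep reading; if true, start over from offset 0 (or go hunting for the rolled-over file by its hash).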
I am sorry... My Bad.. I don't want it to go blank.. I just want the next new line written to the log to be read.
Since what you need is to read from the beginning when your file is cleared, you will need to monitor the length of the file and reset the cursor when the length decreases. You can reset the cursor using the seek(..) method.
See the code below:
RandomAccessFile in = new RandomAccessFile("/home/hduser/Documents/Sample.txt", "r");
String line;
long length = 0; // used to check the file length
while (true) {
    if (in.length() < length) { // new condition: reset position if the file length is reduced
        in.seek(0);
    }
    if ((line = in.readLine()) != null) {
        System.out.println(line);
        length = in.length();
    } else {
        Thread.sleep(2000);
    }
}
it does not replicate the rollover process. i.e. when the content of the log file is cleared I expect the java console to continue reading text from the first line newly written to the log. Could that be possible?
Struggling with this as well. +1 to #paddy for the hash idea.
Another solution (depending on your operating system) is to use the inode of the file, although this may only work under Unix:
Long inode = (Long)Files.getAttribute(logFile.toPath(), "unix:ino");
This returns the inode from the underlying file system associated with the log file. If the inode changes, then the file is a brand new file. This assumes that when the log is rolled over, it is moved aside and the same file is not written over.
To make this work, you would record the inode of the file you are reading, then check whether the inode has changed whenever you haven't gotten any new data for some period of time.
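A small sketch of that inode check (the helper names are mine; as noted above, the "unix:ino" attribute only exists on Unix-like filesystems):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class InodeWatch {
    // Inode of the underlying file; throws on filesystems without "unix:" attributes.
    static long inodeOf(Path file) throws IOException {
        return (Long) Files.getAttribute(file, "unix:ino");
    }

    // True while the path still refers to the same underlying file.
    static boolean sameFile(Path file, long rememberedInode) throws IOException {
        return inodeOf(file) == rememberedInode;
    }
}
```

A tailer would record inodeOf(log) when it opens the file; when no new data arrives for a while, it calls sameFile(log, remembered), and on false it reopens the RandomAccessFile and seeks back to 0.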
My Question: How do I open a file (in the system default [external] program for the file) without saving the file to disk?
My Situation: I have files in my resources and I want to display those without saving them to disk first. For example, I have an xml file and I want to open it on the user's machine in the default program for reading xml file without saving it to the disk first.
What I have been doing: So far I have just saved the file to a temporary location, but I have no way of knowing when they no longer need the file so I don't know when/if to delete it. Here's my SSCCE code for that (well, it's mostly sscce, except for the resource... You'll have to create that on your own):
package main;

import java.io.*;

public class SOQuestion {
    public static void main(String[] args) throws IOException {
        new SOQuestion().showTemplate();
    }

    /** Opens the temporary file */
    private void showTemplate() throws IOException {
        String tempDir = System.getProperty("java.io.tmpdir") + "\\BONotifier\\";
        File parentFile = new File(tempDir);
        if (!parentFile.exists()) {
            parentFile.mkdirs();
        }
        File outputFile = new File(parentFile, "template.xml");
        InputStream inputStream = getClass().getResourceAsStream("/resources/template.xml");
        int size = 4096;
        try (OutputStream out = new FileOutputStream(outputFile)) {
            byte[] buffer = new byte[size];
            int length;
            while ((length = inputStream.read(buffer)) > 0) {
                out.write(buffer, 0, length);
            }
            inputStream.close();
        }
        java.awt.Desktop.getDesktop().open(outputFile);
    }
}
Because of this line:
String tempDir = System.getProperty("java.io.tmpdir") + "\\BONotifier\\";
I deduce that you're working on Windows. You can easily make this code multiplatform, you know.
The answer to your question is: no. The Desktop class needs to know where the file is in order to invoke the correct program with a parameter. Note that there is no method in that class accepting an InputStream, which could be a solution.
Anyway, I don't see where the problem is: you create a temporary file, then open it in an editor or whatever. That's fine. In Linux, when the application is exited (normally) all its temporary files are deleted. In Windows, the user will need to trigger the temporary files deletion. However, provided you don't have security constraints, I can't understand where the problem is. After all, temporary files are the operating system's concern.
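To make the temporary-file approach both multiplatform and self-cleaning, one option is createTempFile plus deleteOnExit, which asks the JVM to remove the file when the VM exits normally. A sketch under that assumption (the resource path is the question's; the class and method names are mine):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ShowResource {
    // Copies a classpath resource to a platform-appropriate temp file and
    // registers it for deletion on normal JVM exit - no hard-coded "\\" paths.
    static Path extractToTemp(String resource) throws IOException {
        Path tmp = Files.createTempFile("template", ".xml");
        tmp.toFile().deleteOnExit();
        try (InputStream in = ShowResource.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("resource not found: " + resource);
            }
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
        }
        return tmp;
    }
}
```

Usage would then be: java.awt.Desktop.getDesktop().open(extractToTemp("/resources/template.xml").toFile()); — with the caveat that deleteOnExit only fires on a normal shutdown, so it does not solve the "is the user done with it?" question either, it just bounds the lifetime.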
Depending on how portable your application needs to be, there might be no "one fits all" solution to your problem. However, you can help yourself a bit:
At least under Linux, you can use a pipe (|) to direct the output of one program to the input of another. A simple example for that (using the gedit text editor) might be:
echo "hello world" | gedit
This will (for gedit) open up a new editor window and show the contents "hello world" in a new, unsaved document.
The problem with the above is, that this might not be a platform-independent solution. It will work for Linux and probably OS X, but I don't have a Windows installation here to test it.
Also, you'd need to find out the default editor by yourself. This older question and it's linked article give some ideas on how this might work.
I don't understand your question very well. I can see only two possible readings:
Open an existing file, and operate on its stream without saving any modifications.
Create a file, so that you can use file I/O to operate on the file stream, but without saving the stream to a file.
In either case, your main motivation is to use the file I/O facilities already at your disposal, am I correct?
I have a feeling that the question is not that simple, and this answer is probably not the one you seek. However, if my understanding does coincide with your question...
If you wish to use stream I/O, instead of using FileOutputStream or FileInputStream (which follow from opening a File object), why not use a non-File InputStream or OutputStream? Your file I/O utilities will ultimately boil down to manipulating I/O streams anyway.
http://docs.oracle.com/javase/7/docs/api/java/io/OutputStream.html
http://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html
No need to involve temp files.