I have a program that opens a FileInputStream, reads it, and writes it to an OutputStream.
I'm using AIX.
When I run it from a terminal, I have no issues. However, when my third-party application runs it, the FileInputStream only reads the first 65535 bytes from the file and the next call to read() returns -1, even though the file is slightly bigger than 65535 bytes (it's 68372 bytes). This results in a truncated output file.
My question is, what can cause this limitation? It doesn't appear to be an intrinsic limit of FileInputStream. I suspect there is a Java option being set somewhere, but I can't for the life of me determine where. Could this be an OS limitation somehow?
Here's my basic code:
OutputStream lOut = new FileOutputStream("/home/fileOut.txt");
FileInputStream fIn = new FileInputStream(new File("/home/fileIn.txt"));
int ch;
byte[] buf = new byte[65536];
while ((ch = fIn.read(buf)) > 0)
{
    lOut.write(buf);
}
fIn.close();
lOut.close();
Leaving aside the fact that your test code is broken (see @EJP's comments; yes, he is correct, trust me / him):
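For reference, here is a minimal corrected version of the copy loop (a sketch, using Java 7 try-with-resources). The bug is that write(buf) always writes the entire 65536-byte buffer, even on iterations where read() returned fewer bytes, so the output can contain stale bytes; write(buf, 0, ch) writes only what was actually read:
try (FileInputStream fIn = new FileInputStream("/home/fileIn.txt");
     OutputStream lOut = new FileOutputStream("/home/fileOut.txt")) {
    byte[] buf = new byte[65536];
    int ch;
    while ((ch = fIn.read(buf)) > 0) {
        lOut.write(buf, 0, ch); // write only the ch bytes that read() actually returned
    }
}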
My question is, what can cause this limitation?
AFAIK, there is no such limitation.
It doesn't appear to be an intrinsic limit of FileInputStream.
There isn't one.
I suspect there is a Java option being set somewhere, but I can't for the life of me determine where.
There is no JVM option that would cause this behavior¹.
Could this be an OS limitation somehow?
In theory, yes. Or it could be a bug / limitation in the file system drivers, a "resource limit" (see "man ulimit"), or a storage media error. In practice, all of these explanations are unlikely. But you could confirm the problem by trying to read ... or copy ... the same file using an OS utility that is known to work.
Actually, I suspect that the real problem is a bug in the third-party application² you are trying to use. It could be a bug similar to the one in your test code, or something else.
¹ - A slight overstatement. If the 3rd-party app did something really stupid (i.e. catching and squashing Throwable), then setting the heap size too small could in theory "cause" this behavior. If you ever encounter a problem like that, change vendors as soon as you can!
² - Just to be clear, a "third party application" is an application written and supplied by a third party. You / your organization are the first party, your JVM vendor is the second party ...
Related
I recently started to learn Java, and as I'm really the type of person who learns much quicker with a task in hand, I decided to take a small application written in C# and create an equivalent in Java.
Perhaps I should have started with smaller tasks, but since I have already started to design it (the C# app is not written very well, hence an equivalent application in terms of features, not design and structure), I don't feel like dropping my idea.
Well, as you have probably realized by now reading this, I am stuck. The application is a kind of editor that acts on data stored in a binary file. What I can't figure out at this time is how to read this file (reading is one thing) and how to parse (extract) the data I need. I know the structure of this file. So here are the things I'm looking for:
How should I access binary data stored in the file? (In C# I would use BinaryReader.)
How do I deal with primitives that are not available in Java (uint8_t, uint16_t, since the file is created in C++)? (See the sketch after this list.)
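For the unsigned C++ types, a common approach in Java is to read the signed value and mask it into a wider type; DataInputStream also has readUnsignedByte() and readUnsignedShort() built in (note that DataInputStream always reads big-endian, which may or may not match how the C++ program wrote the file). A minimal sketch, reusing the filename variable from the code below:
DataInputStream in = new DataInputStream(
        new BufferedInputStream(new FileInputStream(filename)));
int u8  = in.readUnsignedByte();   // 0..255, the equivalent of uint8_t
int u16 = in.readUnsignedShort();  // 0..65535, the equivalent of uint16_t (big-endian)

// Or mask by hand when you already have a signed byte:
byte b = in.readByte();
int unsigned = b & 0xFF;           // strips the sign extension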
EDIT
I should probably also mention that I need to load the whole file into memory before processing! The file is 2 MB.
I usually get lost when dealing with binary files and streams :x
EDIT 2
I think I figured it out in the meantime. I first need to verify a 4-byte signature. So I do this:
byte[] identifier = {'A', 'B', 'C', 'D'};
fs = new FileInputStream(filename);
byte[] extractedIdentifier = new byte[4];
// read() may legitimately return fewer than 4 bytes, so a short read is treated as an error
if (4 != fs.read(extractedIdentifier)) {
    throw new Exception();
}
// compare the expected signature against what was actually read
if (!Arrays.equals(identifier, extractedIdentifier)) {
    throw new Exception();
}
and after this I need the whole file anyway, so I found MappedByteBuffer (https://docs.oracle.com/javase/7/docs/api/java/nio/MappedByteBuffer.html), which I think I will look into, because it seems like the perfect solution at first glance.
FileChannel fc = fs.getChannel();
MappedByteBuffer buf = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
Since I just started reading about this, are there any side effects of using this method? Is there a better way to do this?
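One caveat worth checking (an assumption on my part, since it depends on the C++ program that writes the file): a ByteBuffer defaults to big-endian, while files written by C++ code on x86 hardware are usually little-endian, so you may need to set the byte order explicitly. Unsigned values can then be masked as usual:
FileChannel fc = fs.getChannel();
MappedByteBuffer buf = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
buf.order(ByteOrder.LITTLE_ENDIAN);  // match the byte order the file was written with

int u8  = buf.get() & 0xFF;          // uint8_t
int u16 = buf.getShort() & 0xFFFF;   // uint16_t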
Pretty sure this should be really easy, but I can't write to a file. No I/O exception is thrown, nothing. I was having a similar problem reading earlier and I tried a hundred different ways until
DataInputStream dis = new DataInputStream(reading.class.getResourceAsStream("hello.txt"));
BufferedReader input = new BufferedReader(new InputStreamReader(dis));
this worked! And I could use scanners and such to read from this point.
FileReader, making File file = new File("hello.txt") whatever, none of that worked. I couldn't get any of it to even throw an error when it was an incorrect file name.
Now I have the same problem, except for writing to a file, but there's no equivalent to
reading.class.getResourceAsStream("hello.txt"); that makes an /output/ stream.
Does anyone know how to get the "ResourceAsStream" but as an output stream, /or/ does anyone know what my problem might be?
I know a lot of people on this website have reading/writing issues but none of the posts helped me.
note - yes I was closing, yes I was flushing, yes I had code that wrote to the file.
getResourceAsStream is meant to read resources (e.g. property files) that were distributed and packaged along with the code. There's no guarantee they're writable; e.g., both code and resources could be distributed as a jar, or a jar-inside-a-WAR-inside-an-EAR...
See Write to a file stream returned from getResourceAsStream() for additional discussion and a workaround suggestion, though it's not one I'd particularly recommend. I think the reasonable practice is to distinguish between (a) your immutable code distribution and (b) data editable at runtime ... the latter could reside on a different machine, have different policies for security/replication/backup, etc.
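As a sketch of that practice, using a hypothetical directory under the user's home purely as an example of a writable location outside the code distribution:
// Hypothetical location; any directory your runtime policy treats as writable will do.
File dataDir = new File(System.getProperty("user.home"), ".myapp");
dataDir.mkdirs();                                   // ensure the directory exists
File out = new File(dataDir, "hello.txt");
try (Writer w = new BufferedWriter(new FileWriter(out))) {
    w.write("some editable data");
}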
I have a Java app that fetches a relatively small .zip file using a URL, saves it in a temp directory, unzips it onto the local machine and makes some changes to one of the files. This all works great.
However, I am accessing the .zip file via a BufferedInputStream in the following way:
URL url = new URL("http://somedomain.com/file.zip");
InputStream is = new BufferedInputStream(url.openStream(), 1024);
My concern is that this app will actually be used to transfer very large zip files, and I was wondering if a BufferedInputStream is actually the best way to do this, or whether I would just end up with some type of OutOfMemoryError?
So my question is, will a BufferedInputStream be suitable for this job, or should I be going about it in a completely different way?
BufferedInputStream doesn't load the whole file into memory; it only uses an internal buffer, in your case of size 1024 bytes = 1 KB. It never gets larger than that. You could actually increase the value if you aren't going to have many streams open at once.
Edit: what you may be thinking of is a ByteArrayOutputStream, where data is saved in memory.
It depends on what you do with the content you read. If you read everything into memory, it will fail. If you write it to another stream, then it will be fine. Use BufferedInputStream.
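To illustrate the streaming case, here is a sketch that copies the download straight to disk; memory use stays bounded by the buffer size no matter how large the zip is (the URL and target path are placeholders taken from the question):
URL url = new URL("http://somedomain.com/file.zip");
try (InputStream in = new BufferedInputStream(url.openStream(), 8192);
     OutputStream out = new FileOutputStream("file.zip")) {
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) > 0) {
        out.write(buf, 0, n);  // only the buffer is ever held in memory
    }
}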
From the official Java Tutorials - Buffered Streams:
The Java platform implements buffered I/O streams. Buffered input streams read data from a memory area known as a buffer; the native input API is called only when the buffer is empty. Similarly, buffered output streams write data to a buffer, and the native output API is called only when the buffer is full.
There is another great Sun article.
So the answer is: BufferedInputStream is suitable for this kind of job in the sense of performance.
And yes, memory consumption isn't so much dependent on the type of the input stream as on what you do with the content you read.
I wrote a bit of code that reads download links from a text file and downloads the videos using the copyURLToFile method from Apache's commons-io library, and the download is really slow when I'm on my WLAN.
When I put in an internet stick it is about 6 times faster, although the stick has 4 Mbit and my WLAN is 8 Mbit.
I also tried to do it without the commons-io library, but the problem is the same.
Normally I download at 600-700 KB/s on my WLAN, but with Java it only downloads at about 50 KB/s. With the internet stick it's about 300 KB/s.
Do you know what the problem could be?
Thanks in advance.
// Edit: Here is the code, but I don't think it has anything to do with this. And what do you mean by network IT policies?
// The DataInputStream wrapper was redundant; a reader over the file stream is enough.
BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(linksFile)));
String link;
String name;
while ((link = br.readLine()) != null) {
    name = br.readLine();
    FileUtils.copyURLToFile(new URL(link), new File("videos/" + name + ".flv"));
    System.out.println(link);
}
This isn't likely to be a Java problem.
The code you've posted doesn't actually do any I/O over the network - it just determines a URL and passes it to (presumably Apache Commons') FileUtils.copyURLToFile. As usual with popular third-party libraries, if this method had a bug that caused slow throughput in all but the most unusual situations, it would already have been identified (and hopefully fixed).
Thus the issue is going to lie elsewhere. Do you get the expected speeds when accessing the resource through normal HTTP methods (e.g. in a browser)? If not, then there's a universal problem at the OS level. Otherwise, I'd have a look at the policies on your network.
Two possible causes spring to mind:
The obvious one is some sort of traffic shaping - your network deprioritises the packets that come from your Java app (for a potentially arbitrary reason). You'd need to see how this is configured and look at its logs to see if this is the case.
The problem resides with DNS. If Java's using a primary server that's either blocked or incredibly slow, then it could take up to a few seconds to convert that URL to an IP address and begin the actual transfer. I had a similar problem once when a firewall was silently dropping packets to one server and it took three seconds (per lookup!) for the Java process to switch to the secondary server.
In any case, it's almost certainly not the Java code that's at fault.
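If you want to test the DNS theory, you can time a lookup on its own before any transfer starts; a few seconds here would point straight at the resolver (the host name is a placeholder):
long start = System.nanoTime();
InetAddress addr = InetAddress.getByName("somedomain.com");  // placeholder host
long elapsedMs = (System.nanoTime() - start) / 1_000_000;
System.out.println("Resolved " + addr + " in " + elapsedMs + " ms");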
FileUtils.copyURLToFile internally uses a buffer to read. Increasing the size of that buffer could speed up the download, but the method doesn't let you configure it.
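If you do want control over the buffer size, one option is to hand-roll the copy instead of calling copyURLToFile (a sketch, reusing link and name from the question's code):
URL url = new URL(link);
try (InputStream in = new BufferedInputStream(url.openStream());
     OutputStream out = new FileOutputStream("videos/" + name + ".flv")) {
    byte[] buf = new byte[64 * 1024];  // buffer size is now under your control
    int n;
    while ((n = in.read(buf)) > 0) {
        out.write(buf, 0, n);
    }
}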
I'm running into a strange problem while reading from an InputStream on the Android platform. I'm not sure if this is an Android specific issue, or something I'm doing wrong in general.
The only thing that is Android specific is this call:
InputStream is = getResources().openRawResource(R.raw.myfile);
This returns an InputStream for a file from the Android assets. Anyways, here's where I run into the issue:
byte[] buffer = new byte[2];
is.read(buffer);
When the read() executes it throws an IOException. The weird thing is that if I do two sequential single-byte reads (or any number of single-byte reads), there is no exception. For example, this works:
byte buffer;
buffer = (byte) is.read();
buffer = (byte) is.read();
Any idea why two sequential single byte reads work but one call to read both at once throws an exception? The InputStream seems fine... is.available() returns over a million bytes (as it should).
Stack trace shows these lines just before the InputStream.read():
java.io.IOException
at android.content.res.AssetManager.readAsset(Native Method)
at android.content.res.AssetManager.access$800(AssetManager.java:36)
at android.content.res.AssetManager$AssetInputStream.read(AssetManager.java:542)
Changing the buffer size to a single byte still throws the error. It looks like the exception is only raised when reading into a byte array.
If I truncate the file to 100,000 bytes (file is: 1,917,408 bytes originally) it works fine. Is there a problem with files over a certain size?
Any help is appreciated!
Thanks!
(my post to android-developers isn't showing up, so I'll try reposting it here)
IIRC, this problem comes from trying to access files that were compressed as part of building the APK.
Hence, to work around the issue, give it a file extension that won't be compressed. I forget the list of what extensions are skipped, but file types known to already be compressed (e.g., mp3, jpg) may work.
Changing the file extension to .mp3 to avoid the file compression does work, but the APK of the app is much bigger (in my case 2.3 MB instead of 0.99 MB).
Is there any other way to avoid this issue?
Here is my answer:
Load files bigger than 1M from assets folder
You can compress the file yourself with GZIP and unpack it with the GZIPInputStream class.
http://developer.android.com/reference/java/util/zip/GZIPInputStream.html
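A sketch of that approach, assuming you gzip the asset at build time (whether the packager stores your .gz without recompressing it may depend on the build tools); R.raw.myfile is the identifier from the question:
// Wrap the raw resource stream in a GZIPInputStream and read as usual.
InputStream is = new GZIPInputStream(getResources().openRawResource(R.raw.myfile));
byte[] buffer = new byte[8192];
int n;
while ((n = is.read(buffer)) > 0) {
    // process buffer[0..n)
}
is.close();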
You are correct in that there is a certain size limit for extracting files. You may wish to split larger files into 1 MB pieces, and have a method by which you know which files are made of which pieces, stitching them back together again when your app runs.
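A sketch of the stitching step, assuming hypothetical piece names myfile_0, myfile_1, ...; SequenceInputStream presents the pieces as one continuous stream:
// Hypothetical piece names; adapt to however you split the file.
Vector<InputStream> pieces = new Vector<InputStream>();
pieces.add(getResources().openRawResource(R.raw.myfile_0));
pieces.add(getResources().openRawResource(R.raw.myfile_1));
InputStream whole = new SequenceInputStream(pieces.elements());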