I'm having problems with my code. I'm encrypting a file of more than 300 MB in Base64, but my application throws errors when I open the encrypted file.
This is my code; it crashes on the byte array, and I don't understand why:
private void encript(final File file) {
new AsyncTask<Void, Void, Void>() {
@Override
protected Void doInBackground(Void[] p) {
File new_file = null;
try {
new_file = new File(file.getAbsolutePath() + ".enc.txt");
if (!new_file.exists()) {
new_file.createNewFile();
}
BufferedInputStream mInputStream = new BufferedInputStream(new FileInputStream(file));
OutputStream mOutputStream = new DataOutputStream(new FileOutputStream(new_file));
byte[] data = new byte[mInputStream.available()];
int len = 0;
while (true) {
len = mInputStream.read(data);
if (len > 0) {
mOutputStream.write(Base64.encode(data, 0, len, Base64.DEFAULT));
}
break;
}
mOutputStream.flush();
if (mOutputStream != null) {
mOutputStream.close();
}
if (mInputStream != null) {
mInputStream.close();
}
} catch (Exception io) {
Toast.makeText(MainActivity.this, io.toString(), Toast.LENGTH_LONG).show();
}
return null;
}
@Override
protected void onPostExecute(Void res) {
Toast.makeText(MainActivity.this, "Sucesss", Toast.LENGTH_LONG).show();
}
}.execute(new Void[0]);
}
Note that what you are doing here is Base64 encoding the file contents. Don't imagine that someone can't trivially crack this (so-called) "encryption".
There are lots of things wrong with your attempt. I shall go through the more important ones:
@Override
protected Void doInBackground(Void[] p) {
File new_file = null;
try {
Problem: You should be using try with resources to avoid resource leaks.
new_file = new File(file.getAbsolutePath() + ".enc.txt");
if (!new_file.exists()) {
new_file.createNewFile();
}
Problems:
On the one hand, there is no need to use createNewFile to pre-create an output file. Opening the file using FileOutputStream will create it if it doesn't exist already.
On the other hand, this won't prevent (or report) errors in cases where the file's parent directory doesn't exist, is not writeable and so on.
It would be better to use java.nio.file.Path and java.nio.file.Files from Java 7 / Android API 26. Path and Files are better APIs and they will report problems as exceptions so that you can (hypothetically) report them to the user via your exception handler.
There are even some Files.copy methods, though they are not directly applicable to your use-case since you are encoding the data as you copy it.
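For illustration, a minimal sketch of that skeleton using Files and try-with-resources (assuming API 26+; the encode / copy loop is elided):

Path source = file.toPath();
Path target = Paths.get(file.getAbsolutePath() + ".enc.txt");
// Files.newOutputStream creates the file if it doesn't exist and reports
// problems (missing directory, no write permission, ...) as exceptions.
try (InputStream in = Files.newInputStream(source);
     OutputStream out = Files.newOutputStream(target)) {
    // read / encode / write loop goes here
}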
BufferedInputStream mInputStream =
new BufferedInputStream(new FileInputStream(file));
OutputStream mOutputStream =
new DataOutputStream(new FileOutputStream(new_file));
Problem:
I don't think you need a DataOutputStream. It won't actually be doing anything.
byte[] data = new byte[mInputStream.available()];
Problem:
The available() method should not be used for this. It returns the number of bytes that are "available" to be read right now. The value you get is context dependent. For a socket stream it is typically the number of bytes that are currently in the kernel buffers ready to read. For a "regular" file it may be the length of the input file.
So if you are copying a "really big" file, then you may be attempting to allocate a buffer that will hold the entire file. In the worst case, that will cause your app to OOME!
NOTE - Such an OOME might be the "out of nowhere" problem that you are seeing.
The "best" way is debatable, but I would just use a fixed buffer size ... if I was doing an explicit read / write copy of a stream. The size of the buffer affects throughput, but if you are looking for ultimate performance you shouldn't be doing it this way.
int len = 0;
while (true) {
len = mInputStream.read(data);
if (len > 0) {
mOutputStream.write(
Base64.encode(data, 0, len, Base64.DEFAULT));
}
break;
}
Problem: This loop is simply wrong. You are unconditionally breaking on the first iteration. You should be doing something like this:
int len;
while ((len = mInputStream.read(data)) > 0) {
mOutputStream.write(Base64.encode(data, 0, len, Base64.DEFAULT));
}
In other words, keep reading / writing until read returns a non-positive result.
Note: I'm not sure which Base64 class you are using there. It doesn't appear to be java.util.Base64; the (data, offset, len, flags) signature looks like android.util.Base64.
mOutputStream.flush();
if (mOutputStream != null) {
mOutputStream.close();
}
if (mInputStream != null) {
mInputStream.close();
}
Problems:
The flush() is not necessary. Closing the stream will flush. And besides, what happens to your attempted flush if mOutputStream is null?
This version leaks resources (file descriptors). If an exception has been thrown, these statements won't be executed, and the stream objects will not be closed.
This is all unnecessary if you use try with resources instead.
} catch (Exception io) {
Toast.makeText(MainActivity.this, io.toString(),
Toast.LENGTH_LONG).show();
}
return null;
}
Problems:
Catching Exception is a bad idea. A better idea is to catch and handle the expected exceptions, and let the unexpected ones propagate so that they can be handled further up the stack.
In this case, it looks like you are assuming that the exception will be some sort of I/O exception. In fact, it could also be an unchecked exception such as an NPE. (An OOME is also possible, though this catch wouldn't catch that because OOMEs are Error exceptions.)
You are throwing away the exception details. Unexpected exceptions should be logged so that you can diagnose them via logcat.
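Putting the pieces together, a minimal corrected sketch (untested; it assumes android.util.Base64 and try-with-resources, i.e. minSdkVersion 19+). The buffer length is a multiple of 3 so that every chunk encodes to whole Base64 quads and no '=' padding ends up in the middle of the output:

private void encode(final File file) {
    new AsyncTask<Void, Void, Boolean>() {
        @Override
        protected Boolean doInBackground(Void... p) {
            File newFile = new File(file.getAbsolutePath() + ".enc.txt");
            byte[] data = new byte[3 * 4096]; // multiple of 3: no mid-stream padding
            try (InputStream in = new BufferedInputStream(new FileInputStream(file));
                 OutputStream out = new BufferedOutputStream(new FileOutputStream(newFile))) {
                int len;
                while ((len = in.read(data)) > 0) {
                    out.write(Base64.encode(data, 0, len, Base64.DEFAULT));
                }
                return true;
            } catch (IOException e) {
                Log.e("encode", "encoding failed", e); // log it so you can diagnose via logcat
                return false;
            }
        }

        @Override
        protected void onPostExecute(Boolean ok) {
            // Toasts belong on the UI thread, which is where onPostExecute runs.
            Toast.makeText(MainActivity.this, ok ? "Success" : "Failed",
                    Toast.LENGTH_LONG).show();
        }
    }.execute();
}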
Related
I have a mainframe data file which is greater than 4GB. I need to read and process the data for every 500 bytes. I have tried using FileChannel; however, I am getting an error with the message Integer.Max_VALUE exceeded.
public void getFileContent(String fileName) {
RandomAccessFile aFile = null;
FileChannel inChannel = null;
try {
aFile = new RandomAccessFile(Paths.get(fileName).toFile(), "r");
inChannel = aFile.getChannel();
ByteBuffer buffer = ByteBuffer.allocate(500 * 100000);
while (inChannel.read(buffer) > 0) {
buffer.flip();
for (int i = 0; i < buffer.limit(); i++) {
byte[] data = new byte[500];
buffer.get(data);
processData(new String(data));
buffer.clear();
}
}
} catch (Exception ex) {
// TODO
} finally {
try {
inChannel.close();
aFile.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
Can you help me out with a solution?
The worst problem of your code is the
catch (Exception ex) {
// TODO
}
part, which implies that you won’t notice any exceptions thrown by your code. Since there is nothing in the JRE that prints an “Integer.Max_VALUE exceeded” message, that problem must be connected to your processData method.
It might be worth noting that this method will be invoked way too often with repeated data.
Your loop
for (int i = 0; i < buffer.limit(); i++) {
implies that you iterate as many times as there are bytes within the buffer, up to 500 * 100000 times. You are extracting 500 bytes from the buffer in each iteration, processing a total of up to 500 * 500 * 100000 bytes after each read, but since you have a misplaced buffer.clear(); at the end of the loop body, you will never experience a BufferUnderflowException. Instead, you will invoke processData each of the up to 500 * 100000 times with the first 500 bytes of the buffer.
But the whole conversion from bytes to a String is unnecessarily verbose and contains unnecessary copy operations. Instead of implementing this yourself, you can and should just use a Reader.
Besides that, your code makes a strange detour. It starts with a Java 7 API, Paths.get, to convert it to a legacy File object, create a legacy RandomAccessFile to eventually acquire a FileChannel. If you have a Path and want a FileChannel, you should open it directly via FileChannel.open. And, of course, use a try(…) { … } statement to ensure proper closing.
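For example, a minimal sketch of opening the channel directly (imports from java.nio.file and java.nio.channels assumed):

try (FileChannel inChannel = FileChannel.open(Paths.get(fileName), StandardOpenOption.READ)) {
    // ... read from inChannel ...
}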
But, as said, if you want to process the contents as Strings, you surely want to use a Reader instead:
public void getFileContent(String fileName) {
try( Reader reader=Files.newBufferedReader(Paths.get(fileName)) ) {
CharBuffer buffer = CharBuffer.allocate(500 * 100000);
while(reader.read(buffer) > 0) {
buffer.flip();
while(buffer.remaining()>=500) {
processData(buffer.slice().limit(500).toString());
buffer.position(buffer.position()+500);
}
buffer.compact();
}
// there might be a remaining chunk of less than 500 characters
if(buffer.position()>0) {
processData(buffer.flip().toString());
}
} catch(Exception ex) {
// the *minimum* to do:
ex.printStackTrace();
// TODO real exception handling
}
}
There is no problem with processing files >4GB; I just tested it with an 8GB file. Note that the code above uses the UTF-8 encoding. If you want to retain the behavior of your original code of using whatever happens to be your system’s default encoding, you may create the Reader using
Files.newBufferedReader(Paths.get(fileName), Charset.defaultCharset())
instead.
As stated in the title, should I close stream when reusing a FileOutputStream variable? For example, in the following codes, should I call the outfile.close() before I assign it a new file and why?
Thanks:)
FileOutputStream outfile = null;
int index = 1;
while (true) {
// check whether we should create a new file
boolean createNewFile = shouldCreateNewFile();
//write to a new file if pattern is identified
if (createNewFile) {
/* Should I close the outfile each time I create a new file?
if (outfile != null) {
outfile.close();
}
*/
outfile = new FileOutputStream(String.valueOf(index++) + ".txt");
}
if (outfile != null) {
outfile.write(getNewFileContent());
}
if (shouldEnd()) {
break;
}
}
try {
if (outfile != null) {
outfile.close();
}
} catch (IOException e) {
System.err.println("Something wrong happens...");
}
YES. Once you are done with a file (stream) you should always close it, so that the resources allocated for it, such as file descriptors and buffers, are released to the operating system.
Java Documentation FileOutputStream.close()
Closes this file output stream and releases any system resources associated with this stream. This file output stream may no longer be used for writing bytes.
Unclosed file descriptors can even lead to resource leaks in the Java program.
I think the confusion here revolves around the concept of “re-using” the FileOutputStream. What you are doing is simply re-using an identifier (the name outfile of your variable) by associating a new value with it. But this only has syntactic meaning to the Java compiler. The object referred to by the name – the FileOutputStream – is simply dropped on the floor and will eventually be garbage collected at an unspecified later point in time. It doesn't matter what you do with the variable that once referred to it. Whether you re-assign it another FileOutputStream, set it to null or let it go out of scope is all the same.
Calling close explicitly flushes all buffered data to the file and releases the associated resources. (The garbage collector would release them too but you don't know when this might happen.) Note that close may also throw an IOException so it really matters that you know the point at which the operation is tried which you only do if you call the function explicitly.
Even without automatic resource management, or try-with-resources (see below), your code can be made much more readable and reliable:
for (int index = 1; shouldCreateNewFile(); ++index) {
FileOutputStream outfile = new FileOutputStream(index + ".txt");
try {
outfile.write(getNewFileContent());
}
finally {
outfile.close();
}
}
However, Java 7 introduced a new syntax for this, try-with-resources, that is more reliable and informative in the case of errors. Using it, your code would look like this:
for (int index = 1; shouldCreateNewFile(); ++index) {
try (FileOutputStream outfile = new FileOutputStream(index + ".txt")) {
outfile.write(getNewFileContent());
}
}
The output stream will still be closed, but if there is an exception inside the try block, and another while closing the stream, the exception will be suppressed (linked to the main exception), rather than causing the main exception to be discarded like the previous example.
You should always use automatic resource management in Java 7 or above.
I wanted to make a program in Java that checks if src exists (and if not, throws a FileNotFoundException),
copies the contents of src.txt to des.txt,
and prints the sizes of the two files at opening and at closing.
The output is:
src.txt is in current directory
Before opening files:Size of src.txt:43 Bytes Size of des.txt:0 Bytes
After closing files:Size of src.txt:43 Bytes Size of des.txt:0 Bytes
After src.txt writes its contents into des.txt, des.txt should be 43 bytes.
First, I would like to ask if I can omit the File declaration by writing
PrintWriter outStream = new PrintWriter(new FileWriter("des.txt"));
Secondly, I would like to ask how to adapt the following switch case for a system-independent newline,
in order to add a newline after the one read.
Thirdly, I would like to ask about the importance of the try/catch block while closing a file.
Terribly sorry for this type of question, but in C there was no error handling (I think); close() was certain to work.
I am sorry for these types of questions, but I am a beginner in Java.
import java.io.*;
public class Main
{
public static void main(String[] args) throws FileNotFoundException
{
File src = new File("src.txt");
if(src.exists())
System.out.println("src.txt is in current directory");
else throw new FileNotFoundException("src.txt is not in current directory");
BufferedReader inStream = null;
PrintWriter outStream = null;
try {
File des = new File("des.txt");
inStream = new BufferedReader(new FileReader(src));
outStream = new PrintWriter(new FileWriter(des));
System.out.print("Before opening files:Size of src.txt:"+src.length()+" Bytes\t");
System.out.println("Size of des.txt:"+des.length()+" Bytes");
int c;
while((c = inStream.read()) != -1) {
switch(c){
case ' ': outStream.write('#');
break;
case '\r':
case '\n':outStream.write('\n');
outStream.write('\n');
break;
default:outStream.write(c);
}
}
System.out.print("After closing files:Size of src.txt:"+src.length()+" Bytes\t");
System.out.println("Size of des.txt:"+des.length()+" Bytes");
} catch(IOException io) {
System.out.println("Read/Write Error:"+io.toString());
} finally {
try {
if (inStream != null) {
inStream.close();
}
if (outStream != null) {
outStream.close();
}
} catch (IOException io) {
System.out.println("Error while closing Files:"+io.toString());
}
}
}
}
You have 3 questions inside your main question
The problem of the file sizes not being correct after you are done is caused by buffering of the file contents: by default some data is buffered to prevent short writes to the hard disk, which would lower performance. Check the size of your file after you have closed it and you will see the correct size from the .length() call.
You can use
PrintWriter outStream = new PrintWriter(new FileWriter("des.txt"));
inside your code, since FileWriter accepts a String argument in its constructor.
It is recommended practice to close file handles/streams explicitly, since they are not automatically closed at the moment you are done with them: the garbage collector doesn't run all the time, only when there is a need for it. This can cause problems such as undeletable files that are still in use by a stream you can no longer reach but which is still loaded in memory. It also matters because some streams delay their writes using buffers; if they are never closed, you get exactly the symptom described in your first problem.
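As for the second question (the system-independent newline): System.lineSeparator() (or PrintWriter.println()) gives you the platform's line separator. A minimal sketch of the adapted cases (note that, like the original, it doesn't collapse a Windows "\r\n" pair into a single line break):

case '\r':
case '\n':
    outStream.write(System.lineSeparator()); // the line break that was read
    outStream.write(System.lineSeparator()); // the extra blank line
    break;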
I serialize an object and save it as a file on my HDD. When I read it back, on only some occasions it throws an EOFException. After a couple of hours of debugging I am not able to find the problem.
Here is my code:
public void serialize(MyClass myClass,String path) {
FileOutputStream foStream = null;
ObjectOutputStream ooStream = null;
try {
File file = new File(path);
if (!file.exists()) {
file.createNewFile();
}
foStream = new FileOutputStream(file);
ooStream = new ObjectOutputStream(foStream);
ooStream.writeObject(myClass);
} catch (Throwable t) {
log.error(t);
} finally {
if (ooStream != null) {
try {
ooStream.flush();
ooStream.close();
} catch (IOException e) {
log.error(e);
}
}
}
}
For getting Object:
public MyClass deSerialize(String path) {
MyClass myClass=null;
FileInputStream fiStream = null;
ObjectInputStream oiStream = null;
String errorMessage = "";
try {
File file = new File(path);
if (!file.exists()) {
return null;
}
fiStream = new FileInputStream(path);
oiStream = new ObjectInputStream(fiStream);
Object o = oiStream.readObject();
myClass = (MyClass) o;
} catch (Throwable t) {
log.warn(t);
} finally {
if (oiStream != null) {
try {
oiStream.close();
} catch (IOException e) {
log.error(e);
}
}
}
return myClass;
}
Stacktrace:
java.io.EOFException
    at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2498)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1273)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
    at java.util.LinkedList.readObject(LinkedList.java:776)
    at sun.reflect.GeneratedMethodAccessor583.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:946)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1809)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1908)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1832)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
Question:
Is my serialized object now corrupted, and therefore rubbish?
This object is responsible for rendering the UI as saved by the user: if the user logs in, it should render the previously saved state of the UI. However, for some users the file cannot be deserialized.
EOFException means you are trying to read past the end of the file. Normally you don't have any way of knowing whether there are more objects to read, other than trying, so you shouldn't regard EOFException as a problem in the first place. If it is thrown in a situation where you think you know there are more objects in the file, e.g. when you have prefixed an object count to the file, it indicates a problem with the code that wrote the file, or possibly corruption of the file itself. Another example is a zero-length file that shouldn't be zero length. Whatever the problem is, it can't be solved at the reading end; it is already too late.
I cannot see any problem with the writing and reading of the file.
So my best guess is that the problem is at the file level. For example:
you could be writing one file and reading a different one, or
you could be reading the file before the file write completes, or
something else could be clobbering the file in between the running of your write code and read code.
I suggest that you add some tracing code that uses File.length() to find out what the file size is after you've written it and before you read it.
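For instance, a minimal tracing sketch (assuming your log object has an info method):

File f = new File(path);
log.info("after write: " + f.length() + " bytes");  // right after serialize(...) returns
// ... later, immediately before calling deSerialize(path) ...
log.info("before read: " + f.length() + " bytes");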
A couple of other possibilities:
the writer and reader code is using different versions of MyClass (or a dependent class) with incompatible representations and the same serialVersionUID values, or
you could be using custom readObject and writeObject methods that are incompatible.
In my case, the EOFException was solved by ensuring that reads and writes to the file were thread safe. As Stephen C answered above, if you try to write to a file from one thread while also trying to read it from another, you may be stepping on the ObjectInputStream, which will throw an EOFException in this case.
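For instance, a minimal sketch of such locking (the shared fileLock object is hypothetical; the important part is that both methods synchronize on the same lock):

private final Object fileLock = new Object();

public void serialize(MyClass myClass, String path) {
    synchronized (fileLock) {
        // ... write the object as in the question ...
    }
}

public MyClass deSerialize(String path) {
    synchronized (fileLock) {
        // ... read the object as in the question ...
    }
}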
I have a Servlet in Tomcat 5.5 that reads local images sitting in a folder. The image is then sent back to an Applet.
I'm getting a "javax.imageio.IIOException: Can't create an ImageInputStream!" error and am not sure what's causing it.
Has anyone had this problem before? Could this be a threading issue in ImageIO? I can't reproduce the issue, since it occurs about 3 times for every 1000 requests.
EDIT: This is the Servlet code that reads the image. I just use ImageIO.read(File) in its static form inside the Servlet's doPost method, as shown below:
doPost(req,resp){
...
BufferedImage image = ImageIO.read(imageFile);
...
}
Here is the source code for javax.imageio.ImageIO.read(File):
public static BufferedImage read(File input) throws IOException {
if (input == null) {
throw new IllegalArgumentException("input == null!");
}
if (!input.canRead()) {
throw new IIOException("Can't read input file!");
}
ImageInputStream stream = createImageInputStream(input);
if (stream == null) {
throw new IIOException("Can't create an ImageInputStream!");
}
BufferedImage bi = read(stream);
if (bi == null) {
stream.close();
}
return bi;
}
If the sole functional requirement is to read images from local disk and return it unmodified to the HTTP response using a servlet, then you do not need the ImageIO at all. It only adds unnecessary overhead and other problems like you're having now.
Get rid of the ImageIO stuff and just stream the raw image straight from disk to the HTTP response, along with a set of proper response headers. For example:
String name = request.getParameter("name");
File file = new File("/path/to/images", name);
response.setContentType(getServletContext().getMimeType(file.getName()));
response.setHeader("Content-Length", String.valueOf(file.length()));
response.setHeader("Content-Disposition", "inline; filename=\"" + file.getName() + "\"");
InputStream input = null;
OutputStream output = null;
try {
input = new BufferedInputStream(new FileInputStream(file));
output = new BufferedOutputStream(response.getOutputStream());
byte[] buffer = new byte[8192];
for (int length; (length = input.read(buffer)) > 0;) {
output.write(buffer, 0, length);
}
} finally {
if (output != null) try { output.close(); } catch (IOException logOrIgnore) {}
if (input != null) try { input.close(); } catch (IOException logOrIgnore) {}
}
That's all. You only need ImageIO whenever you would like to manipulate the image in server's memory before returning it, e.g. resizing, transforming or something.
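Incidentally, on Java 7 or newer the copy loop can be reduced to a single call to java.nio.file.Files (a sketch; the servlet container closes the response stream when the request completes):

response.setContentType(getServletContext().getMimeType(file.getName()));
response.setHeader("Content-Length", String.valueOf(file.length()));
Files.copy(file.toPath(), response.getOutputStream());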
The source I have (Java 5, but I doubt it has changed a lot) states that if there are no ImageInputStream service providers registered, the createImageInputStream method returns null, and thus you get that exception.
From the JavaDoc on IIORegistry.getDefaultInstance() which is used by ImageIO:
Each ThreadGroup will receive its own instance; this allows different Applets in the same browser (for example) to each have their own registry.
Thus it might actually be a threading problem in that you get a plain new instance of IIORegistry.
Edit: digging deeper into the source I found the following:
Most likely you'd get a FileImageInputStream, since you pass in a file. However, if an exception occurs the service provider returns null. Thus there might be a FileNotFoundException or any other IOException being thrown which causes the stream not to be created.
Unfortunately, there's no logging in the code, thus you'd have to debug somehow. It's probably due to missing file permissions, a corrupted/incomplete file or the file missing.
Here's the Java5 source for FileImageInputStreamSpi#createInputStreamInstance()
public ImageInputStream createInputStreamInstance(Object input,
boolean useCache,
File cacheDir) {
if (input instanceof File) {
try {
return new FileImageInputStream((File)input);
} catch (Exception e) {
return null;
}
} else {
throw new IllegalArgumentException();
}
}
Where is your use of close() methods within the exception handling? Streams have to be closed when there are exceptions, too, as well as on normal termination of the block of code.
The symptom sounds like you are running out of heap space or something similar.
It is not the coding errors that others have pointed out, since the problem is intermittent.