Why is my image coming out garbled?

I've got some Java code using a servlet and Apache Commons FileUpload to upload a file to a set directory. It's working fine for character data (e.g. text files) but image files are coming out garbled. I can open them but the image doesn't look like it should. Here's my code:
Servlet
protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    try {
        String customerPath = "\\leetest\\";
        // Check that we have a file upload request
        boolean isMultipart = ServletFileUpload.isMultipartContent(request);
        if (isMultipart) {
            // Create a new file upload handler
            ServletFileUpload upload = new ServletFileUpload();
            // Parse the request
            FileItemIterator iter = upload.getItemIterator(request);
            while (iter.hasNext()) {
                FileItemStream item = iter.next();
                String name = item.getFieldName();
                if (item.isFormField()) {
                    // Form field. Ignore for now
                } else {
                    BufferedInputStream stream = new BufferedInputStream(item.openStream());
                    if (stream == null) {
                        LOGGER.error("Something went wrong with fetching the stream for field " + name);
                    }
                    byte[] bytes = StreamUtils.getBytes(stream);
                    FileManager.createFile(customerPath, item.getName(), bytes);
                    stream.close();
                }
            }
        }
    } catch (Exception e) {
        throw new UploadException("An error occurred during upload: " + e.getMessage());
    }
}
StreamUtils.getBytes(stream) looks like:
public static byte[] getBytes(InputStream src, int buffsize)
        throws IOException {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    byte[] buff = new byte[buffsize];
    while (true) {
        int nBytesRead = src.read(buff);
        if (nBytesRead < 0) {
            break;
        }
        byteStream.write(buff);
    }
    byte[] result = byteStream.toByteArray();
    byteStream.close();
    return result;
}
And finally FileManager.createFile looks like:
public static void createFile(String customerPath, String filename,
        byte[] fileData) throws IOException {
    customerPath = getFullPath(customerPath + filename);
    File newFile = new File(customerPath);
    if (!newFile.getParentFile().exists()) {
        newFile.getParentFile().mkdirs();
    }
    FileOutputStream outputStream = new FileOutputStream(newFile);
    outputStream.write(fileData);
    outputStream.close();
}
Can anyone spot what I'm doing wrong?
Cheers,
Lee

One thing I don't like is this block from StreamUtils.getBytes():
1 while (true) {
2     int nBytesRead = src.read(buff);
3     if (nBytesRead < 0) {
4         break;
5     }
6     byteStream.write(buff);
7 }
At line 6, it writes the entire buffer, no matter how many bytes were actually read into it, and I am not convinced the buffer will always be completely filled. It would be more correct like this:
1 while (true) {
2     int nBytesRead = src.read(buff);
3     if (nBytesRead < 0) {
4         break;
5     } else {
6         byteStream.write(buff, 0, nBytesRead);
7     }
8 }
Note the 'else' on line 5, along with the two additional parameters (array index start position and length to copy) on line 6.
I could imagine that for larger files, like images, a read can return before the buffer is filled (perhaps the stream is still waiting for more data). That means you'd be unintentionally writing stale data left over in the tail end of the buffer from an earlier read. This is almost certainly happening at EOF in most cases, assuming a buffer larger than 1 byte, but extra data at EOF is probably not the cause of your corruption... it is just not desirable.
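Putting that fix together, the corrected helper would look something like this (a sketch keeping your two-argument signature):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public static byte[] getBytes(InputStream src, int buffsize)
        throws IOException {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    byte[] buff = new byte[buffsize];
    int nBytesRead;
    while ((nBytesRead = src.read(buff)) >= 0) {
        byteStream.write(buff, 0, nBytesRead); // write only the bytes actually read
    }
    return byteStream.toByteArray(); // closing a ByteArrayOutputStream is a no-op
}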

I'd just use Commons IO. Then you could just do an IOUtils.copy(InputStream, OutputStream);
It's got lots of other useful utility methods.
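For example, in the servlet above the StreamUtils call could become something like this (a sketch, assuming Commons IO is on the classpath):
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.io.IOUtils;

// drop-in replacement for StreamUtils.getBytes(stream);
// IOUtils handles partial reads correctly
public static byte[] getBytes(InputStream src) throws IOException {
    return IOUtils.toByteArray(src);
}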

Are you sure that the image isn't arriving garbled in the first place, or that you aren't dropping some packets on the way in?

I don't know what difference it makes, but there seems to be a mismatch of method signatures. The getBytes() method called in your doPost() method has only one argument:
byte[] bytes = StreamUtils.getBytes(stream);
while the method source you included has two arguments:
public static byte[] getBytes(InputStream src, int buffsize)
Hope that helps.

Can you perform a checksum on your original file and the uploaded file to see if there are any immediate differences?
If there are, then you can perform a diff to determine the exact part(s) of the file that are missing or changed.
Things that spring to mind are the beginning or end of the stream, or endianness.
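If you don't have a checksum tool handy, something like this would do (a sketch using MessageDigest; the path is a placeholder):
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// MD5 checksum of a file, for comparing the original against the uploaded copy
public static String md5Of(String path) throws IOException, NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("MD5");
    try (InputStream in = new FileInputStream(path)) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            md.update(buf, 0, n);
        }
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : md.digest()) {
        hex.append(String.format("%02x", b));
    }
    return hex.toString();
}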

Related

FileOutputStream sends 0 byte file

I am trying to allow a user to download a file (attachment) using Java to serve up the download. I have been partially successful. The file is read, and on the client side there is a prompt for a download. A file is saved successfully, but it has 0 bytes. Here is my server side code:
String stored = "/var/lib/tomcat/webapps/myapp/attachments/" + request.getParameter("stored");
String realname = request.getParameter("realname");

// Open the input and output streams
FileInputStream attachmentFis = new FileInputStream(stored);
FileOutputStream attachmentFos = new FileOutputStream(realname);

try {
    // Send the file
    byte[] attachmentBuffer = new byte[1024];
    int count = 0;
    while ((count = attachmentFis.read(attachmentBuffer)) != -1) {
        attachmentFos.write(attachmentBuffer, 0, count);
    }
} catch (IOException e) {
    // Exception handling
} finally {
    // Close the streams
    attachmentFos.flush();
    attachmentFos.close();
    attachmentFis.close();
}
For context, this is in a servlet. The files have an obfuscated name, which is passed as "stored" here. The actual file name, the name the user will see, is "realname".
What do I need to do to get the actual file to arrive at the client end?
EDIT
Following suggestions in the comments, I changed the write to include the 0, count parameters and put the close stuff in a finally block. However, I am still getting a 0 byte file when I attempt a download.
EDIT 2
Thanks to the logging suggestion from Dave the Dane, I discovered the file was being written locally. A bit of digging and I found I needed to use response.getOutputStream().write instead of a regular FileOutputStream. I have been successful in getting a file to download through this method. Thank you all for your helpful suggestions.
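For reference, a minimal sketch of what the working version looks like (the content type and header values here are illustrative, not my exact code):
response.setContentType("application/octet-stream");
response.setHeader("Content-Disposition", "attachment; filename=\"" + realname + "\"");
try (InputStream in = new FileInputStream(stored);
     OutputStream out = response.getOutputStream()) { // write to the response, not to a local file
    byte[] buffer = new byte[1024];
    int count;
    while ((count = in.read(buffer)) != -1) {
        out.write(buffer, 0, count);
    }
}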
As others have observed, you'd be better off using try-with-resources & let that handle the closing.
Assuming you have some Logging Framework available, maybe the following would cast light on the matter...
try {
    LOG.info("Requesting....");
    final String stored = "/var/lib/tomcat/webapps/myapp/attachments/" + request.getParameter("stored");
    LOG.info("stored.......: {}", stored);
    final String realname = request.getParameter("realname");
    LOG.info("realname.....: {}", realname);
    final File fileStored = new File(stored);
    LOG.info("fileStored...: {}", fileStored.getCanonicalPath());
    final File fileRealname = new File(realname);
    LOG.info("fileRealname.: {}", fileRealname.getCanonicalPath());
    try (final InputStream attachmentFis = new FileInputStream(fileStored);
         final OutputStream attachmentFos = new FileOutputStream(fileRealname))
    {
        final byte[] attachmentBuffer = new byte[64 * 1024];
        int count;
        while ((count = attachmentFis.read(attachmentBuffer)) != -1) {
            attachmentFos.write(attachmentBuffer, 0, count);
            LOG.info("Written......: {} bytes to {}", count, realname);
        }
        attachmentFos.flush(); // Probably done automatically in .close()
    }
    LOG.info("Done.");
}
catch (final Exception e) {
    LOG.error("Problem!.....: {}", request, e);
}
If it won't reach the finally block, you should stop ignoring the IOException which is being thrown:
catch (IOException e) {
    // Exception handling
    System.err.println(e.getMessage());
}
I'd assume that the realname is just missing an absolute path.

Java File Download - Downloaded file size is always zero Kb

I have written a Java controller which handles download requests from the server. The files are present on the server, however the downloaded file is always 0 kB. Please help me. Here is my code -
@RequestMapping(value = "/downloadFile/{docId}")
public void getDownloadFile(@PathVariable(value = "docId") Integer docId, HttpServletResponse response) throws Exception {
    String userName = getUserName();
    try {
        DocVault documentsVault = documentsVaultRepository.findDocumentAttachment(docId);
        String fileName = documentsVault.getDocumentName();
        int customerId = documentsVault.getCustomerId();
        Map<Integer, String> customerInfo = cspUtils.getCustomersInfo(userName);
        Set<Integer> customerIds = customerInfo.keySet();
        for (int custId : customerIds) {
            if (custId == customerId) {
                String path = env.getProperty("doc.rootfolder") + File.separator + documentsVault.getFileName();
                service.downloadFile(fileName, path, response);
            } else {
                logger.info("Customer not linked to user");
            }
        }
    } catch (Exception e) {
        logger.error(e.getMessage(), e);
    }
}
Implementation -
public void downloadFile(String fileName, String path, HttpServletResponse response) {
    try {
        File downloadFile = new File(path);
        FileInputStream inputStream = new FileInputStream(downloadFile);
        response.setContentLength((int) downloadFile.length());
        response.setHeader("Content-Disposition", "attachment;filename=" + fileName);
        // get output stream of the response
        OutputStream outStream = response.getOutputStream();
        byte[] buffer = new byte[BUFFER_SIZE];
        int bytesRead = -1;
        // write bytes read from the input stream into the output stream
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            outStream.write(buffer, 0, bytesRead);
        }
        inputStream.close();
        outStream.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I am not getting any exceptions. Please help.
Thank you in advance.
response.setContentLength((int) downloadFile.length());
Remove this. The container will set it automatically.
int bytesRead = -1;
You don't need to initialize this variable. It gets assigned in the very next line.
Thank you for your help. What I figured out was that there was a problem with my Content-Disposition header. I was passing only the file name without its extension. When I passed the full file name, it worked perfectly; the size and extension of the file were correct.
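In other words, the header needs the complete file name; for example (the name shown is illustrative):
// pass the full name including the extension, quoted in case it contains spaces
response.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\""); // e.g. report.pdf, not report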

Reading a block of bytes from one file and writing to other until all blocks are read?

I am working on a project in which I have to play with some file reading and writing tasks. I have to read 8 bytes from a file at one time, perform some operations on that block, and then write that block to a second file, repeating the cycle until the first file is completely read in chunks of 8 bytes; after manipulation, the data should be appended to the second file. However, in doing so, I am facing some problems. Following is what I am trying:
private File readFromFile1(File file1) {
    int offset = 0;
    long message = 0;
    try {
        FileInputStream fis = new FileInputStream(file1);
        byte[] data = new byte[8];
        file2 = new File("file2.txt");
        FileOutputStream fos = new FileOutputStream(file2.getAbsolutePath(), true);
        DataOutputStream dos = new DataOutputStream(fos);
        while (fis.read(data, offset, 8) != -1) {
            message = someOperation(data); // operation according to business logic
            dos.writeLong(message);
        }
        fos.close();
        dos.close();
        fis.close();
    } catch (IOException e) {
        System.out.println("Some error occurred while reading from File:" + e);
    }
    return file2;
}
I am not getting the desired output this way. Any help is appreciated.
Consider the following code:
private File readFromFile1(File file1) {
    int offset = 0;
    long message = 0;
    File file2 = null;
    try {
        FileInputStream fis = new FileInputStream(file1);
        byte[] data = new byte[8];   // Read buffer
        byte[] tmpbuf = new byte[8]; // Temporary chunk buffer
        file2 = new File("file2.txt");
        FileOutputStream fos = new FileOutputStream(file2.getAbsolutePath(), true);
        DataOutputStream dos = new DataOutputStream(fos);
        int readcnt; // Read count
        int chunk;   // Chunk size to write to tmpbuf
        while ((readcnt = fis.read(data, 0, 8)) != -1) {
            //// POINT A ////
            // Skip the chunking system if an 8 byte octet is read directly.
            if (readcnt == 8 && offset == 0) {
                message = someOperation(data); // operation according to business logic
                dos.writeLong(message);
                continue;
            }
            //// POINT B ////
            chunk = Math.min(tmpbuf.length - offset, readcnt); // Determine how much to add to the temp buf.
            System.arraycopy(data, 0, tmpbuf, offset, chunk);  // Copy bytes to temp buf
            offset = offset + chunk;                           // Advance the offset into the temp buf
            if (offset == 8) {
                message = someOperation(tmpbuf); // operation according to business logic
                dos.writeLong(message);
                if (chunk < readcnt) {
                    System.arraycopy(data, chunk, tmpbuf, 0, readcnt - chunk);
                    offset = readcnt - chunk;
                } else {
                    offset = 0;
                }
            }
        }
        //// POINT C ////
        // Process remaining bytes here...
        // message = someOperation(tmpbuf); // only the first 'offset' bytes are valid
        // dos.writeLong(message);
        fos.close();
        dos.close();
        fis.close();
    } catch (IOException e) {
        System.out.println("Some error occurred while reading from File:" + e);
    }
    return file2;
}
In this excerpt of code, what I did was:
Modify your reading code to include the number of bytes actually returned by the read() method (noted readcnt).
Added a byte chunking system (the processing does not happen until there are at least 8 bytes in the chunking buffer).
Allowed for separate processing of the final bytes (those that do not make up a full 8-byte octet).
As you can see from the code, the data being read is first stored in a chunking buffer (denoted tmpbuf) until at least 8 bytes are available. The chunking only happens when 8 bytes do not arrive in a single read; if 8 bytes are read directly and nothing is already chunked, they are processed immediately (see "Point A" in the code). This is done as a form of optimization, to prevent excess array copies.
The chunking system uses offsets which increment every time bytes are written to tmpbuf until it reaches a value of 8 (it will not go over as the Math.min() method used in the assignment of 'chunk' will limit the value). Upon offset == 8, proceed to execute the processing code.
If that particular read produced more bytes than were actually processed, the leftover bytes are written to tmpbuf again from the beginning, with offset set appropriately; otherwise offset is set to 0.
Repeat cycle.
The code will leave the last few bytes of data that do not fit in an octet in the array tmpbuf with the offset variable indicating how much has actually been written. This data can then be processed separately at point C.
This seems a lot more complicated than it should be, and there probably is a better solution (possibly using existing Java library methods), but off the top of my head, this is what I've got. Hope this is clear enough for you to understand.
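For instance, if a trailing partial block may simply be ignored, a shorter variant using DataInputStream.readFully could look like this (a sketch; someOperation is your business-logic method, assumed to accept a byte[]):
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

private File readWithReadFully(File file1) throws IOException {
    File file2 = new File("file2.txt");
    try (DataInputStream in = new DataInputStream(new BufferedInputStream(new FileInputStream(file1)));
         DataOutputStream dos = new DataOutputStream(new FileOutputStream(file2, true))) {
        byte[] block = new byte[8];
        while (true) {
            try {
                in.readFully(block); // blocks until all 8 bytes are read
            } catch (EOFException e) {
                break; // fewer than 8 bytes remained; the trailing partial block is dropped here
            }
            dos.writeLong(someOperation(block));
        }
    }
    return file2;
}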
You could use the following; it uses NIO, and especially the ByteBuffer class, for the long handling. You can of course implement it the standard Java way, but since I am a NIO fan, here is a possible solution.
The major problem in your code is that while (fis.read(data, offset, 8) != -1) will read up to 8 bytes, not always exactly 8 bytes; besides, reading in such small portions is not very efficient.
I have put some comments in my code, if something is unclear please leave a comment. My someOperation(...) function just copies the next long value from the buffer.
Update:
added finally block to close the files.
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

public class TestFile {
    static final int IN_BUFFER_SIZE = 1024 * 8;
    static final int OUT_BUFFER_SIZE = 1024 * 9; // make the out-buffer > in-buffer; I am lazy and don't want to check for overruns
    static final int MIN_READ_BYTES = 8;
    static final int MIN_WRITE_BYTES = 8;

    private File readFromFile1(File inFile) {
        final File outFile = new File("file2.txt");
        final ByteBuffer inBuffer = ByteBuffer.allocate(IN_BUFFER_SIZE);
        final ByteBuffer outBuffer = ByteBuffer.allocate(OUT_BUFFER_SIZE);
        FileChannel readChannel = null;
        FileChannel writeChannel = null;
        try {
            // open a file channel for reading and writing
            readChannel = FileChannel.open(inFile.toPath(), StandardOpenOption.READ);
            writeChannel = FileChannel.open(outFile.toPath(), StandardOpenOption.CREATE, StandardOpenOption.WRITE);

            long totalReadByteCount = 0L;
            long totalWriteByteCount = 0L;
            boolean readMore = true;

            while (readMore) {
                // fill the in-buffer; stop when it is full or end of file is reached.
                // read() returns 0 once the buffer is full, so check remaining() to avoid looping forever
                int readOp = 0;
                while (inBuffer.hasRemaining() && (readOp = readChannel.read(inBuffer)) != -1) {
                    totalReadByteCount += readOp;
                } // while

                // prepare the in-buffer to be consumed
                inBuffer.flip();

                // check if the end of file was reached
                if (readOp == -1) {
                    // end of file reached, read no more
                    readMore = false;
                } // if

                // now consume the in-buffer while there are at least MIN_READ_BYTES in the buffer
                while (inBuffer.remaining() >= MIN_READ_BYTES) {
                    // add data to the write buffer
                    outBuffer.putLong(someOperation(inBuffer));
                } // while

                // compact the in-buffer and prepare for the next read, if we need to read more.
                // that way the possible remaining bytes of the in-buffer can be consumed after leaving the loop
                if (readMore) inBuffer.compact();

                // prepare the out-buffer to be consumed
                outBuffer.flip();

                // write the out-buffer until the buffer is empty
                while (outBuffer.hasRemaining())
                    totalWriteByteCount += writeChannel.write(outBuffer);

                // reset the out-buffer so it can be filled again
                // (clear(), not flip(): a second flip() would cap the limit at the last batch's size)
                outBuffer.clear();
            } // while

            // error handling
            if (inBuffer.hasRemaining()) {
                System.err.println("Truncated data! Not a long value! bytes remaining: " + inBuffer.remaining());
            } // if

            System.out.println("read total: " + totalReadByteCount + " bytes.");
            System.out.println("write total: " + totalWriteByteCount + " bytes.");
        } catch (IOException e) {
            System.out.println("Some error occurred while reading from File: " + e);
        } finally {
            if (readChannel != null) {
                try {
                    readChannel.close();
                } catch (IOException e) {
                    System.out.println("Could not close read channel: " + e);
                } // catch
            } // if
            if (writeChannel != null) {
                try {
                    writeChannel.close();
                } catch (IOException e) {
                    System.out.println("Could not close write channel: " + e);
                } // catch
            } // if
        } // finally
        return outFile;
    }

    private long someOperation(ByteBuffer bb) {
        // consume the buffer, do whatever you want with it.
        return bb.getLong(); // consumes 8 bytes of the buffer.
    }

    public static void main(String[] args) {
        TestFile testFile = new TestFile();
        File source = new File("input.txt");
        testFile.readFromFile1(source);
    }
}

Faster way of copying data in Java?

I have been given the task of copying data from a server. I am using BufferedInputStream and an output stream to copy the data, and I am doing it byte by byte. It runs, but it is taking ages to copy the data, as some of the files are in the hundreds of MBs, so that definitely isn't going to work. Can anyone suggest an alternative to byte-by-byte copying so that my code can cope with files that are a few hundred MBs?
The buffer size is 2048.
Here is how my code look like:
static void copyFiles(SmbFile[] files, String parent) throws IOException {
    SmbFileInputStream input = null;
    FileOutputStream output = null;
    BufferedInputStream buf_input = null;
    try {
        for (SmbFile f : files) {
            System.out.println("Working on files :" + f.getName());
            if (f.isDirectory()) {
                File folderToBeCreated = new File(parent + f.getName());
                if (!folderToBeCreated.exists()) {
                    folderToBeCreated.mkdir();
                    System.out.println("Folder name " + parent + f.getName() + " has been created");
                } else {
                    System.out.println("exists");
                }
                copyFiles(f.listFiles(), parent + f.getName());
            } else {
                input = (SmbFileInputStream) f.getInputStream();
                buf_input = new BufferedInputStream(input, BUFFER);
                File t = new File(parent + f.getName());
                if (!t.exists()) {
                    t.createNewFile();
                }
                output = new FileOutputStream(t);
                int c;
                int count;
                byte data[] = new byte[BUFFER];
                while ((count = buf_input.read(data, 0, BUFFER)) != -1) {
                    output.write(data, 0, count);
                }
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (input != null) {
            input.close();
        }
        if (output != null) {
            output.close();
        }
    }
}
Here is a link to an excellent post explaining how to use nio channels to make copies of streams. It introduces a helper method ChannelTools.fastChannelCopy that lets you copy streams like this:
final InputStream input = new FileInputStream(inputFile);
final OutputStream output = new FileOutputStream(outputFile);
final ReadableByteChannel inputChannel = Channels.newChannel(input);
final WritableByteChannel outputChannel = Channels.newChannel(output);
ChannelTools.fastChannelCopy(inputChannel, outputChannel);
inputChannel.close();
outputChannel.close();
Well since you're using a BufferedInputStream, you aren't reading byte by byte, but rather the size of the buffer. You could just try increasing the buffer size.
Reading/writing byte-by-byte is definitely going to be slow, even though the actual reading/writing is done in chunks of the buffer size. One way to speed it up is to read/write by blocks. Have a look at the read(byte[] b, int off, int len) method of BufferedInputStream. However, it probably won't give you enough of an improvement.
What would be much better is to use the nio package (New I/O) to copy data using nio channels. Have a look at the nio documentation for more info.
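A sketch of what a channel-based copy could look like for the local-file side (method and path names are illustrative; a non-file stream like the SMB input would still be wrapped via Channels.newChannel):
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

// file-to-file copy with FileChannel.transferTo
static void copyFile(String src, String dst) throws IOException {
    try (FileChannel in = new FileInputStream(src).getChannel();
         FileChannel out = new FileOutputStream(dst).getChannel()) {
        long pos = 0;
        long size = in.size();
        while (pos < size) {
            pos += in.transferTo(pos, size - pos, out); // may transfer less than requested, so loop
        }
    }
}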
I would suggest using FileUtils from org.apache.commons.io. It has enough utility methods to perform file operations.
See the org.apache.commons.io.FileUtils API.
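For example (a sketch assuming Commons IO is on the classpath; file names are placeholders):
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;

public class CopyExample {
    public static void main(String[] args) throws IOException {
        // Commons IO handles the buffering and stream management internally
        FileUtils.copyFile(new File("source.dat"), new File("target.dat"));
        // for whole directory trees:
        // FileUtils.copyDirectory(new File("srcDir"), new File("destDir"));
    }
}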

How to use a chunk delimiter in a raw data file?

I want to save raw data chunks to a file, and later read those chunks back one by one. This would be no big deal except for the following doubt:
What exact bytes should I use as a delimiter, i.e. to identify the end of one chunk and the beginning of the next, given that the chunk data might also contain such a sequence of bytes by random chance?
Note: the chunks are of variable size and contain random data; they are actually JPEG images.
You could first write the length of the chunk to the file as a fixed-size value, e.g. a 4-byte integer, followed by the data itself:
public void appendChunk(byte[] data, File file) throws IOException {
    DataOutputStream stream = null;
    try {
        stream = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(file, true)));
        stream.writeInt(data.length);
        stream.write(data);
    } finally {
        if (stream != null) {
            try {
                stream.close();
            } catch (IOException e) {
                // ignore
            }
        }
    }
}
If you later have to read the chunks back from that file, you start by reading the length of the first chunk. You can then decide whether to read the chunk data, or whether to skip it and continue with the next chunk.
public void processChunks(File file) throws IOException {
    DataInputStream stream = null;
    try {
        stream = new DataInputStream(new BufferedInputStream(new FileInputStream(file)));
        while (true) {
            try {
                int length = stream.readInt();
                byte[] data = new byte[length];
                stream.readFully(data);
                // todo: do something with the data
            } catch (EOFException e) {
                // end of file reached
                break;
            }
        }
    } finally {
        if (stream != null) {
            try {
                stream.close();
            } catch (IOException e) {
                // ignore
            }
        }
    }
}
You can also add other meta-data about the chunks, like writing the original name of the file with stream.writeUTF(...). You only have to make sure that you write and read the same data in the same order.
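For example, extending appendChunk/processChunks above with a name per chunk (the field name is illustrative); the read side must mirror the write side exactly:
// writing: name first, then length, then the data
stream.writeUTF(originalFilename);
stream.writeInt(data.length);
stream.write(data);

// reading: the exact same order
String name = stream.readUTF();
int length = stream.readInt();
byte[] data = new byte[length];
stream.readFully(data);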
Create a second file in which you save the byte ranges of your chunks in the chunk file, or add that information to a header of the chunk file. I did something similar once; don't forget that the byte ranges then have the additional offset of the header length.
int startByte = 0;
int lastByte = 0;
int chunkcount = 0;
File chunkfile;
File structurefile;
for (every chunk) {
    append chunk to chunkfile
    lastByte = startByte + chunk.sizeInBytes() - 1
    append to structurefile: chunkcount startByte lastByte
    chunkcount++
    startByte = lastByte + 1
}
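A sketch of the read-back side, assuming the structure file records inclusive startByte/lastByte pairs as above (names are illustrative):
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// read one chunk back using the inclusive byte range recorded in the structure file
static byte[] readChunk(File chunkfile, long startByte, long lastByte) throws IOException {
    byte[] chunk = new byte[(int) (lastByte - startByte + 1)];
    try (RandomAccessFile raf = new RandomAccessFile(chunkfile, "r")) {
        raf.seek(startByte);
        raf.readFully(chunk);
    }
    return chunk;
}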
