I am trying to write a simple webapp using JSP / Servlet / EJB (with JPA) for uploading files (up to 50 MB) to the DB.
In my entity class (User) I have the following code:
@Lob
private byte[] file;
Here is how I retrieve the file in the Servlet (currently it saves the file to my computer, and I want to change that):
for (Part part : request.getParts()) {
    InputStream is = request.getPart(part.getName()).getInputStream();
    int i = is.available();
    byte[] b = new byte[i];
    is.read(b);
    String fileName = getFileName(part);
    FileOutputStream os = new FileOutputStream("C:/files/" + fileName);
    os.write(b);
    is.close();
}
I don't know how to write the byte arrays (from the for loop) to my User entity. Any ideas?
Your input stream processing is incorrect: the available method doesn't return the length of the entire stream, it only gives you an (estimated) number of bytes that can be read before the stream would block. Emphasis on estimated. You need to loop, reading the stream until a read call returns -1, or use a utility like IOUtils from Apache Commons IO.
final byte[] data = IOUtils.toByteArray(inputStream);
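If you'd rather not pull in Commons IO, a manual loop along these lines does the same job (a sketch; the 8 KB buffer size is arbitrary):

ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] chunk = new byte[8192];
int n;
// keep reading until read() signals end of stream with -1
while ((n = inputStream.read(chunk)) != -1) {
    baos.write(chunk, 0, n);
}
final byte[] data = baos.toByteArray();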
Once you have the data, just set it to your entity:
entity.setFile(data);
And then save it.
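Put together in the servlet, the whole flow could look roughly like this sketch (the EntityManager em, the User entity and its setFile method are assumptions based on your description):

User user = new User();
for (Part part : request.getParts()) {
    try (InputStream is = part.getInputStream()) {
        // read the whole uploaded part into memory and hand it to the entity
        user.setFile(IOUtils.toByteArray(is));
    }
}
em.persist(user); // within a transaction, e.g. a container-managed EJB method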
I am creating an Azure Function using Java. My requirement is to copy a blob from one container to another container, with encryption.
So, to encrypt the blob, I am adding 4 bytes before and after the blob while uploading it to the sink container.
Now I need to fetch the blob content. For this I found one binding in Azure, i.e.:
@BlobInput(
    name = "InputFileName",
    dataType = "binary",
    path = sourceContainerName + "/{InputFileName}")
byte[] content,
Here byte[] content holds the fetched content of the blob.
But I am facing some issues: if I pass any file name as the InputFileName parameter, it gives 200 OK, meaning it reports success; it is also difficult for me to do exception handling.
So I am looking for other ways to fetch blob content. Please let me know if there are any other methods or classes for this.
If you are looking for more control, instead of using the bindings you can use the Azure Storage SDK directly. Check out the quickstart doc to get set up.
This sample code has full end-to-end code that you could build upon. Here is the relevant part of it for reference:
String data = "Hello world!";
InputStream dataStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8));
/*
* Create the blob with string (plain text) content.
*/
blobClient.upload(dataStream, data.length());
dataStream.close();
/*
* Download the blob's content to output stream.
*/
int dataSize = (int) blobClient.getProperties().getBlobSize();
ByteArrayOutputStream outputStream = new ByteArrayOutputStream(dataSize);
blobClient.downloadStream(outputStream);
outputStream.close();
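For reference, the blobClient used above can be built with the SDK's BlobClientBuilder; a sketch along these lines (the connection string variable, container name and blob name are placeholders):

BlobClient blobClient = new BlobClientBuilder()
        .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
        .containerName("source-container")
        .blobName("InputFileName")
        .buildClient();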
Is there a way to read and write a blob in chunks using Hibernate?
Right now I am getting an OutOfMemoryError because the whole blob is loaded into memory as a byte[].
To be more specific, let's say I want to save a large file into a database table called File.
public class File {
    private byte[] data;
}
I open the file with a FileInputStream, and then what?
How do I tell Hibernate that I want to stream the content and will not hand it the whole byte[] at once?
Should I use Blob instead of byte[]? Either way, how can I stream the content?
Regarding reading: is there a way I can tell Hibernate (besides the lazy loading it already does) that I need the blob loaded in chunks, so that retrieving my File does not give me an OutOfMemoryError?
I am using:
Oracle 11.2.0.3.0
Hibernate 4.2.3.Final
Oracle Driver 11.2
If going the Blob route, have you tried using Hibernate's LobHelper createBlob method, which takes an InputStream? To create a Blob and persist to the database, you would supply the FileInputStream object and the number of bytes.
Your File bean/entity class could map the Blob like this (using JPA annotations):
@Lob
@Column(name = "DATA")
private Blob data;
// Getter and setter
And the business logic/data access class could create the Blob for your bean/entity object like this, taking care not to close the input stream before persisting to the database:
FileInputStream fis = new FileInputStream(file);
Blob data = getSession().getLobHelper().createBlob(fis, file.length());
fileEntity.setData(data);
// Persist file entity object to database
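That last persist step might look like this sketch (assuming a Session-based DAO and that fis stays open until the insert has been flushed):

Session session = getSession();
Transaction tx = session.beginTransaction();
try {
    session.save(fileEntity);  // Hibernate reads from fis when the insert is flushed
    tx.commit();
} finally {
    fis.close();               // only close the stream once the data has been written
}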
To go the other way and read the Blob from the database as a stream in chunks, you could call the Blob's getBinaryStream method, giving you the InputStream and allowing you to set the buffer size later if needed:
InputStream is = fileEntity.getData().getBinaryStream();
Struts 2 has a convenient configuration available that can set the InputStream result's buffer size.
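Outside of Struts, reading that stream in chunks could look like this sketch, copying to any OutputStream (the file path is a placeholder):

try (InputStream is = fileEntity.getData().getBinaryStream();
     OutputStream os = new FileOutputStream("/tmp/file-copy.dat")) {
    byte[] buffer = new byte[64 * 1024]; // 64 KB chunks; adjust as needed
    int read;
    while ((read = is.read(buffer)) != -1) {
        os.write(buffer, 0, read);
    }
}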
Below is the code that I have written. I want to do a simple thing: store binary file data into a ByteBuffer.
File file = new File(fileName);
try {
    ReadableByteChannel channel = new FileInputStream(fileName).getChannel();
    ByteBuffer buf = ByteBuffer.allocateDirect((int) file.length());
    // How can I use read to get all the contents into buf?
} catch (Exception e) {
}
I was wondering:
1. How can I use read to get all the data from the channel and store it in the ByteBuffer?
2. Is there a more elegant way to allocate the ByteBuffer, other than using a File object to get the length of the file?
I prefer to use memory mapping.
FileChannel channel = new FileInputStream(fileName).getChannel();
ByteBuffer buf = channel.map(MapMode.READ_ONLY,0,channel.size());
If the file is greater than 2 GB, you have to use more than one mapping. On the plus side, this takes around 10 ms and doesn't use much heap or direct memory, regardless of the size of the file.
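A slightly fuller sketch, with the channel closed via try-with-resources (the mapping itself remains valid after the channel is closed):

ByteBuffer buf;
try (FileChannel channel = new FileInputStream(fileName).getChannel()) {
    buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
}
// buf now covers the whole file; read it with get(), or bulk-copy into a byte[]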
From the ReadableByteChannel Javadocs:
read(ByteBuffer dst)
An attempt is made to read up to r bytes from the channel, where r is the number of bytes remaining in the buffer, that is, dst.remaining(), at the moment this method is invoked.
So ... channel.read(buf);
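Note that a single read call is not guaranteed to fill the buffer, so in practice you would loop until the buffer is full or the channel reaches end-of-stream; a sketch:

while (buf.hasRemaining() && channel.read(buf) != -1) {
    // keep reading until the buffer is full or EOF is hit
}
buf.flip(); // make the data just read available for consumption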
As for your second question, if you want to read the entire contents of the file into memory at once that seems like a reasonable approach.
I'm updating some old code to grab some binary data from a URL instead of from a database (the data is about to be moved out of the database and will be accessible by HTTP instead). The database API seemed to provide the data as a raw byte array directly, and the code in question wrote this array to a file using a BufferedOutputStream.
I'm not at all familiar with Java, but a bit of googling led me to this code:
URL u = new URL("my-url-string");
URLConnection uc = u.openConnection();
uc.connect();
InputStream in = uc.getInputStream();
ByteArrayOutputStream out = new ByteArrayOutputStream();
final int BUF_SIZE = 1 << 8;
byte[] buffer = new byte[BUF_SIZE];
int bytesRead = -1;
while ((bytesRead = in.read(buffer)) > -1) {
    out.write(buffer, 0, bytesRead);
}
in.close();
fileBytes = out.toByteArray();
That seems to work most of the time, but I have a problem when the data being copied is large - I'm getting an OutOfMemoryError for data items that worked fine with the old code.
I'm guessing that's because this version of the code has multiple copies of the data in memory at the same time, whereas the original code didn't.
Is there a simple way to grab binary data from a URL and save it in a file without incurring the cost of multiple copies in memory?
Instead of writing the data to a byte array and then dumping it to a file, you can directly write it to a file by replacing the following:
ByteArrayOutputStream out = new ByteArrayOutputStream();
With:
FileOutputStream out = new FileOutputStream("filename");
If you do so, there is no need for the call out.toByteArray() at the end. Just make sure you close the FileOutputStream object when done, like this:
out.close();
See the documentation of FileOutputStream for more details.
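Put together, the download can then be streamed straight to disk; a sketch using try-with-resources (the URL and file name are placeholders):

URL u = new URL("my-url-string");
URLConnection uc = u.openConnection();
try (InputStream in = uc.getInputStream();
     FileOutputStream out = new FileOutputStream("filename")) {
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = in.read(buffer)) != -1) {
        out.write(buffer, 0, bytesRead);
    }
}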
I don't know what you mean by "large" data, but try using the JVM parameter
java -Xmx256m ...
which sets the maximum heap size to 256 MB (or any value you like).
If you need the content length and your web server is somewhat standards-conforming, it should provide a "Content-Length" header.
URLConnection#getContentLength() should give you that information upfront so that you are able to create your file. (Be aware that if your HTTP server is misconfigured or under the control of an evil entity, that header may not match the number of bytes received. In that case, why don't you stream to a temp file first and copy that file later?)
In addition to that: a ByteArrayOutputStream is a horrible memory allocator. It always doubles the buffer size, so if you read a 32 MB + 1 byte file, you end up with a 64 MB buffer. It might be better to implement your own, smarter byte-array stream, like this one:
http://source.pentaho.org/pentaho-reporting/engines/classic/trunk/core/source/org/pentaho/reporting/engine/classic/core/util/MemoryByteArrayOutputStream.java
Subclassing ByteArrayOutputStream gives you access to the internal buffer and the number of bytes in it.
But of course, if all you want to do is store the data in a file, you are better off using a FileOutputStream.
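If you do go the subclassing route, it could be as small as this sketch (buf and count are the protected fields ByteArrayOutputStream already maintains; the class name here is made up):

public class ExposedByteArrayOutputStream extends ByteArrayOutputStream {
    public ExposedByteArrayOutputStream(int initialSize) {
        super(initialSize);
    }
    // the raw internal buffer; only the first getCount() bytes are valid data
    public byte[] getRawBuffer() {
        return buf;
    }
    public int getCount() {
        return count;
    }
}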
This problem seems to happen inconsistently. We are using a java applet to download a file from our site, which we store temporarily on the client's machine.
Here is the code that we are using to save the file:
URL targetUrl = new URL(urlForFile);
InputStream content = (InputStream)targetUrl.getContent();
BufferedInputStream buffered = new BufferedInputStream(content);
File savedFile = File.createTempFile("temp",".dat");
FileOutputStream fos = new FileOutputStream(savedFile);
int letter;
while ((letter = buffered.read()) != -1)
    fos.write(letter);
fos.close();
Later, I try to access that file by using:
ObjectInputStream keyInStream = new ObjectInputStream(new FileInputStream(savedFile));
Most of the time it works without a problem, but every once in a while we get the error:
java.io.StreamCorruptedException: invalid stream header: 0D0A0D0A
which makes me believe that it isn't saving the file correctly.
I'm guessing that the operations you've done with getContent and BufferedInputStream have treated the file like an ASCII file, converting newlines or carriage returns into carriage return + newline (0x0D0A), and that has confused ObjectInputStream (which expects serialized data objects).
If you are using an FTP URL, the transfer may be occurring in ASCII mode.
Try appending ";type=I" to the end of your URL.
Why are you using ObjectInputStream to read it?
As per the javadoc:
An ObjectInputStream deserializes primitive data and objects previously written using an ObjectOutputStream.
Probably the error comes from the fact that you didn't write it with ObjectOutputStream.
Try reading it with FileInputStream only.
Here's a sample for binary data (although not the most efficient way).
Here's another one for text files.
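Since those linked samples aren't included here, a minimal sketch of reading the saved file back as raw bytes with a FileInputStream might look like this:

byte[] contents;
try (FileInputStream fis = new FileInputStream(savedFile);
     ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
    byte[] buffer = new byte[8192];
    int n;
    while ((n = fis.read(buffer)) != -1) {
        baos.write(buffer, 0, n);
    }
    contents = baos.toByteArray();
}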
There are 3 big problems in your sample code:
You're not simply treating the input as bytes
You're needlessly pulling the entire object into memory at once
You're doing multiple method calls for every single byte read and written; use the array-based read/write!
Here's a redo:
URL targetUrl = new URL(urlForFile);
InputStream is = targetUrl.openStream();
File savedFile = File.createTempFile("temp", ".dat");
FileOutputStream fos = new FileOutputStream(savedFile);
int count;
byte[] buff = new byte[16 * 1024];
while ((count = is.read(buff)) != -1) {
    fos.write(buff, 0, count);
}
fos.close();
is.close();
You could also step back from the code and check whether the file on your client is the same as the file on the server. If you get both files on an XP machine, you should be able to use the FC utility to do a compare (check FC's help if you need to run it as a binary compare, since there is a switch for that). If you're on Unix, I don't know the file compare program, but I'm sure there's something.
If the files are identical, then you're looking at a problem with the code that reads the file.
If the files are not identical, focus on the code that writes your file.
Good luck!