image upload problem - java

I am writing code to upload a file to Oracle as a BLOB, but while saving the file it gives me the exception java.sql.SQLException: ORA-01460: unimplemented or unreasonable conversion requested.
The following are the functions that convert my Blob to a byte array:
private byte[] convertToByteArray(Blob fromBlob) {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try {
        return convertToByteArrayImpl(fromBlob, baos);
    } catch (SQLException e) {
        throw new RuntimeException(e);
    } catch (IOException e) {
        throw new RuntimeException(e);
    } finally {
        if (baos != null) {
            try {
                baos.close();
            } catch (IOException ex) {
            }
        }
    }
}
private byte[] convertToByteArrayImpl(Blob fromBlob, ByteArrayOutputStream baos)
        throws SQLException, IOException {
    byte[] buf = new byte[4000];
    InputStream is = fromBlob.getBinaryStream();
    try {
        for (;;) {
            int dataSize = is.read(buf);
            if (dataSize == -1)
                break;
            baos.write(buf, 0, dataSize);
        }
    } finally {
        if (is != null) {
            try {
                is.close();
            } catch (IOException ex) {
            }
        }
    }
    return baos.toByteArray();
}
I think it's because my byte array is longer than 4000 bytes, but what is the solution for saving more than 4000 bytes?

One of the quirks of working with BLOBs in earlier versions of Oracle was that we could not include the full BLOB in an insert statement. It had to be a two-stage process.
The 4000-byte limit is the key, because that figure is the upper bound of what Oracle considers to be a SQL datatype. So Oracle can handle LOBs of 4000 bytes or less without a hitch, but throws the ORA-01460 exception if we ask it to accept a larger LOB. The workaround was to insert the row with an empty_blob() placeholder and then update the new row:
insert into t42 (id, blob_col) values (1, empty_blob());
update t42
set blob_col = some_blob_variable
where id = 1;
This might be the cause of your problem; it is difficult to tell without seeing the whole of your code.
NB: As far as I can tell the preceding does not apply to Oracle 11g: we can now easily insert rows containing oversize BLOBs.
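For illustration, here is a minimal JDBC sketch of that two-stage approach, using the t42 table from the SQL above; the helper method itself is hypothetical, not taken from the question:
import java.io.ByteArrayInputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public static void insertLargeBlob(Connection conn, int id, byte[] data)
        throws SQLException {
    // Stage 1: insert the row with an empty LOB locator.
    try (PreparedStatement ins = conn.prepareStatement(
            "insert into t42 (id, blob_col) values (?, empty_blob())")) {
        ins.setInt(1, id);
        ins.executeUpdate();
    }
    // Stage 2: stream the real content into the row; setBinaryStream
    // avoids binding the whole array as a 4000-byte-limited SQL value.
    try (PreparedStatement upd = conn.prepareStatement(
            "update t42 set blob_col = ? where id = ?")) {
        upd.setBinaryStream(1, new ByteArrayInputStream(data), data.length);
        upd.setInt(2, id);
        upd.executeUpdate();
    }
}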

Related

Memory Consumption by Java Applet

In my applet I have a GET call to download a file from a remote location. When I try to download a large file of around 13 MB, the applet's memory consumption increases by more than 50 MB. I am using the code below to measure memory consumption:
public static long getMemoryUsage()
{
    long memory = 0;
    // Get the Java runtime
    Runtime runtime = Runtime.getRuntime();
    memory = runtime.totalMemory() - runtime.freeMemory();
    return memory;
}
The code for my GET call is:
public void getFiles(String filePath, long fileSize) throws MyException
{
    InputStream objInputStream = null;
    HttpURLConnection conn = null;
    BufferedReader br = null;
    try
    {
        URL fileUrl = new URL(filePath);
        final String strAPICall = fileUrl.getPath();
        final String strHost = "some.test.com";
        final int iPort = 1000;
        URL url = new java.net.URL("https",
                strHost, iPort, "/" + strAPICall,
                new myHandler());
        // opens the connection via the custom handler supplied above
        conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.connect();
        if (conn.getResponseCode() != 200) {
            objInputStream = conn.getInputStream();
            br = new BufferedReader(new InputStreamReader(objInputStream));
            String output;
            while ((output = br.readLine()) != null) {
                System.out.println(output);
            }
            throw new MyException("Bad response from server",
                    MyError.BAD_RESPONSE_ERROR);
        }
        else
        {
            notifyProgressToObservers(0);
            System.out.println("conn.getResponseCode()" + conn.getResponseCode());
            System.out.println("conn.getResponseMessage()" + conn.getResponseMessage());
            objInputStream = conn.getInputStream();
            int count = objInputStream.available();
            System.out.println("Stream size: " + count);
            System.out.println("fileSize size: " + fileSize);
            byte[] downloadedData = getBytesFromInputStream(objInputStream, count, fileSize);
            notifyChunkToObservers(downloadedData);
            notifyIndivisualFileEndToObservers(true, null);
        }
    }
    catch (MyException pm)
    {
        throw new MyException(pm, MyError.CONNECTION_TIMEOUT);
    }
    catch (IOException pm)
    {
        throw new MyException(pm, MyError.CONNECTION_TIMEOUT);
    }
    catch (Exception e)
    {
        notifyIndivisualFileEndToObservers(false, new MyException(e.toString()));
    }
    finally
    {
        System.out.println("Closing all the streams after getting file");
        if (conn != null)
        {
            try
            {
                conn.disconnect();
            }
            catch (Exception e)
            {
            }
        }
        if (objInputStream != null)
        {
            try {
                objInputStream.close();
            } catch (IOException e) {
            }
        }
        if (br != null)
        {
            try {
                br.close();
            } catch (IOException e) {
            }
        }
    }
}
In the above method, I tried logging the memory consumption after each line and found that after conn.connect(), the applet's memory consumption increases by at least 50 MB, even though the file I am trying to download is only 13 MB.
Is there any memory leak anywhere?
EDIT: Added the implementation of getBytesFromInputStream():
public byte[] getBytesFromInputStream(InputStream is, int len, long fileSize)
        throws IOException
{
    byte[] readBytes = new byte[8192];
    ByteArrayOutputStream getBytes = new ByteArrayOutputStream();
    int numRead = 0;
    while ((numRead = is.read(readBytes)) != -1) {
        getBytes.write(readBytes, 0, numRead);
    }
    return getBytes.toByteArray();
}
It's because of this line:
byte[] downloadedData = getBytesFromInputStream(objInputStream, count, fileSize);
Here you are reading the complete contents of the file into the heap. After that, you need to track down what happens with this array. Maybe you are copying it somewhere, and the GC needs some time to kick in even if you no longer use the reference to the object.
Large files should never be read completely into memory, but rather streamed directly to some processor of the data.
The only way to optimize getBytesFromInputStream() is if you know beforehand exactly how many bytes there are to read. Then you allocate a byte[] of the required size and read from the input directly into the byte[]. For example:
byte[] buffer = new byte[len];
int pos = 0;
while (pos < len) {
    int nosRead = is.read(buffer, pos, len - pos);
    if (nosRead == -1) {
        throw new IOException("incomplete response");
    }
    pos += nosRead;
}
return buffer;
(For more information, read the javadoc.)
Unfortunately, your (apparent) attempt at getting the size is incorrect.
int count = objInputStream.available();
This doesn't return the total number of bytes that can be read from the stream. It returns the number of bytes that can be read right now without the possibility of blocking.
If the server is setting the Content-Length header in the response, then you could use that: call getContentLength() (or getContentLengthLong() if the content may exceed 2 GB) once you have the response. But be prepared for the case where that gives you -1.
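Putting the two together, a minimal sketch of how the header could drive the read. This assumes getBytesFromInputStream() has been reworked into the exact-size loop shown above; the fallback branch is just the original growing-buffer copy:
int len = conn.getContentLength(); // -1 when the server omits the header
byte[] downloadedData;
if (len >= 0) {
    // Exact size known up front: read straight into a right-sized array.
    downloadedData = getBytesFromInputStream(objInputStream, len, fileSize);
} else {
    // Unknown size: fall back to streaming into a growing buffer.
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] chunk = new byte[8192];
    int n;
    while ((n = objInputStream.read(chunk)) != -1) {
        baos.write(chunk, 0, n);
    }
    downloadedData = baos.toByteArray();
}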

Reading a list of Files as a Java 8 Stream

I have a (possibly long) list of binary files that I want to read lazily. There will be too many files to load into memory. I'm currently reading them as a MappedByteBuffer with FileChannel.map(), but that probably isn't required. I want the method readBinaryFiles(...) to return a Java 8 Stream so I can lazy load the list of files as I access them.
public List<FileDataMetaData> readBinaryFiles(
        List<File> files,
        int numDataPoints,
        int dataPacketSize) throws IOException {

    List<FileDataMetaData> fmdList = new ArrayList<FileDataMetaData>();
    IOException lastException = null;
    for (File f : files) {
        try {
            FileDataMetaData fmd = readRawFile(f, numDataPoints, dataPacketSize);
            fmdList.add(fmd);
        } catch (IOException e) {
            logger.error("", e);
            lastException = e;
        }
    }
    if (null != lastException)
        throw lastException;
    return fmdList;
}
// The List<DataPacket> returned will be in the same order as in the file.
public FileDataMetaData readRawFile(File file, int numDataPoints, int dataPacketSize) throws IOException {
    FileDataMetaData fmd;
    FileChannel fileChannel = null;
    try {
        fileChannel = new RandomAccessFile(file, "r").getChannel();
        long fileSz = fileChannel.size();
        ByteBuffer bbRead = ByteBuffer.allocate((int) fileSz);
        MappedByteBuffer buffer = fileChannel.map(FileChannel.MapMode.READ_ONLY, 0, fileSz);
        buffer.get(bbRead.array());
        List<DataPacket> dataPacketList = new ArrayList<DataPacket>();
        while (bbRead.hasRemaining()) {
            int channelId = bbRead.getInt();
            long timestamp = bbRead.getLong();
            int[] data = new int[numDataPoints];
            for (int i = 0; i < numDataPoints; i++)
                data[i] = bbRead.getInt();
            DataPacket dp = new DataPacket(channelId, timestamp, data);
            dataPacketList.add(dp);
        }
        fmd = new FileDataMetaData(file.getCanonicalPath(), fileSz, dataPacketList);
    } catch (IOException e) {
        logger.error("", e);
        throw e;
    } finally {
        if (null != fileChannel) {
            try {
                fileChannel.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return fmd;
}
Returning fmdList.stream() from readBinaryFiles(...) won't accomplish this, because the file contents will already have been read into memory, which I can't afford to do.
The other approaches to reading the contents of multiple files as a Stream rely on using Files.lines(), but I need to read binary files.
I'm open to doing this in Scala or Go if those languages have better support for this use case than Java.
I'd appreciate any pointers on how to read the contents of multiple binary files lazily.
There is no laziness possible for the reading within a file, since you are reading the entire file to construct an instance of FileDataMetaData. You would need a substantial refactoring of that class to be able to construct an instance of FileDataMetaData without having to read the entire file.
However, there are several things to clean up in that code, even with respect to Java 7 rather than Java 8: you don't need the RandomAccessFile detour to open a channel anymore, and there is try-with-resources to ensure proper closing. Note further that your usage of memory mapping makes no sense. When you copy the entire contents into a heap ByteBuffer after mapping the file, there is nothing lazy about it. It's exactly the same as what happens when you call read with a heap ByteBuffer on a channel, except that the JRE can reuse buffers in the read case.
In order to allow the system to manage the pages, you have to read from the mapped byte buffer. Depending on the system, this might still not be better than repeatedly reading small chunks into a heap ByteBuffer.
public FileDataMetaData readRawFile(
        File file, int numDataPoints, int dataPacketSize) throws IOException {
    try (FileChannel fileChannel = FileChannel.open(file.toPath(), StandardOpenOption.READ)) {
        long fileSz = fileChannel.size();
        MappedByteBuffer bbRead = fileChannel.map(FileChannel.MapMode.READ_ONLY, 0, fileSz);
        List<DataPacket> dataPacketList = new ArrayList<>();
        while (bbRead.hasRemaining()) {
            int channelId = bbRead.getInt();
            long timestamp = bbRead.getLong();
            int[] data = new int[numDataPoints];
            for (int i = 0; i < numDataPoints; i++)
                data[i] = bbRead.getInt();
            dataPacketList.add(new DataPacket(channelId, timestamp, data));
        }
        return new FileDataMetaData(file.getCanonicalPath(), fileSz, dataPacketList);
    } catch (IOException e) {
        logger.error("", e);
        throw e;
    }
}
Building a Stream based on this method is straightforward; only the checked exception has to be handled:
public Stream<FileDataMetaData> readBinaryFiles(
        List<File> files, int numDataPoints, int dataPacketSize) throws IOException {
    return files.stream().map(f -> {
        try {
            return readRawFile(f, numDataPoints, dataPacketSize);
        } catch (IOException e) {
            logger.error("", e);
            throw new UncheckedIOException(e);
        }
    });
}
This should be sufficient:
return files.stream().map(f -> readRawFile(f, numDataPoints, dataPacketSize));
…if, that is, you are willing to remove throws IOException from the readRawFile method’s signature. You could have that method catch IOException internally and wrap it in an UncheckedIOException. (The problem with deferred execution is that the exceptions also need to be deferred.)
I don't know how performant this is, but you can use java.io.SequenceInputStream wrapped inside of DataInputStream. This will effectively concatenate your files together. If you create a BufferedInputStream from each file, then the whole thing should be properly buffered.
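A rough sketch of that idea, reusing the List<File> files from the question (the usual java.io imports plus java.util.Vector are assumed, and the surrounding method must be able to throw IOException):
// Give each file its own buffered stream, then chain them end to end.
Vector<InputStream> streams = new Vector<>();
for (File f : files) {
    streams.add(new BufferedInputStream(new FileInputStream(f)));
}
DataInputStream in = new DataInputStream(
        new SequenceInputStream(streams.elements()));
// in.readInt() / in.readLong() can now decode packets across file boundaries.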
Building on VGR's comment, I think his basic solution of:
return files.stream().map(f -> readRawFile(f, numDataPoints, dataPacketSize))
is correct, in that it will lazily process the files (and stop if a short-circuiting terminal operation is invoked on the result of the map() operation). I would also suggest a slightly different implementation of readRawFile that leverages try-with-resources and InputStream, which will not load the whole file into memory:
public FileDataMetaData readRawFile(File file, int numDataPoints, int dataPacketSize)
        throws DataPacketReadException { // <- custom unchecked exception, nested inside the class

    try (FileInputStream fileInput = new FileInputStream(file)) {
        String filePath = file.getCanonicalPath();
        long fileSize = fileInput.getChannel().size();
        DataInputStream dataInput = new DataInputStream(new BufferedInputStream(fileInput));
        return new FileDataMetaData(
                filePath,
                fileSize,
                dataPacketsFrom(dataInput, numDataPoints, dataPacketSize, filePath));
    } catch (IOException e) { // open/close failures become the unchecked exception too
        throw new DataPacketReadException("Unexpected I/O exception on file: " + file, e);
    }
}
private List<DataPacket> dataPacketsFrom(
        DataInputStream dataInput, int numDataPoints, int dataPacketSize, String filePath)
        throws DataPacketReadException {

    List<DataPacket> packets = new ArrayList<>();
    try {
        while (dataInput.available() > 0) {
            // Logic to assemble DataPacket
        }
    }
    catch (EOFException e) {
        throw new DataPacketReadException("Unexpected EOF on file: " + filePath, e);
    }
    catch (IOException e) {
        throw new DataPacketReadException("Unexpected I/O exception on file: " + filePath, e);
    }
    return packets;
}
This should reduce the amount of code, and make sure that your files get closed on error.

The gzip compressor is not working inside the while loop

I am trying to query a database with Java JDBC and compress the data in one column into a gzip file in a specific directory. I have tested my JDBC query and it works fine, but the gzip code does not advance with the while loop; it runs for the first row of the loop and gets stuck there. Why is it stuck? Help me, please!
These folders already existed: D:\Directory\My\year\id1\id2
// Some JDBC query code here; it works well. I query all rows: Data, year, id1, id2, id3.
while (myRs1.next()) {
    String str = Data;
    File myGzipFile = new File("D:\\Directory\\My\\" + year + "\\" + id1 + "\\" + id2 + "\\" + id3 + ".gzip");
    GZIPOutputStream gos = null;
    InputStream is = new ByteArrayInputStream(str.getBytes());
    gos = new GZIPOutputStream(new FileOutputStream(myGzipFile));
    byte[] buffer = new byte[1024];
    int len;
    while ((len = is.read(buffer)) != -1) {
        gos.write(buffer, 0, len);
        System.out.print("done for:" + id3);
    }
    try { gos.close(); } catch (IOException e) { }
}
Try structuring the source like this, so that exceptions are caught and reported instead of silently swallowed:
public class InputStreamDemo {
    public static void main(String[] args) throws Exception {
        InputStream is = null;
        int i;
        char c;
        try {
            is = new FileInputStream("C://test.txt");
            System.out.println("Characters printed:");
            // reads till the end of the stream
            while ((i = is.read()) != -1) {
                // converts integer to character
                c = (char) i;
                // prints character
                System.out.print(c);
            }
        } catch (Exception e) {
            // if any I/O error occurs
            e.printStackTrace();
        } finally {
            // releases system resources associated with this stream
            if (is != null)
                is.close();
        }
    }
}
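Applied to the gzip loop from the question, a minimal sketch using try-with-resources; the variables Data, year, id1, id2, and id3 are taken from the question, while the stream handling here is the assumption:
while (myRs1.next()) {
    String str = Data;
    File myGzipFile = new File("D:\\Directory\\My\\" + year + "\\" + id1
            + "\\" + id2 + "\\" + id3 + ".gzip");
    try (InputStream is = new ByteArrayInputStream(str.getBytes());
         GZIPOutputStream gos = new GZIPOutputStream(new FileOutputStream(myGzipFile))) {
        byte[] buffer = new byte[1024];
        int len;
        while ((len = is.read(buffer)) != -1) {
            gos.write(buffer, 0, len);
        }
        System.out.println("done for: " + id3); // report once per row, after the copy
    } catch (IOException e) {
        e.printStackTrace(); // surface the error instead of swallowing it
    }
}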

Java exception handling: what's the difference between the two styles pasted here?

The first one is from a book and looks very cryptic/complex to me; the second one is the way I have seen people around me write, including me :). Also, for the first style, Eclipse is showing that the catch "IOException openx" block is handling the exception for the part where the read and write happen, viz.
while ((len = is.read(buf)) >= 0)
    out.write(buf, 0, len);
Does that mean the catch "IOException iox" block is useless code?
First style:
File file = new File("hsjdhsaj");
InputStream is = null;
try {
    URL url = new URL("");
    is = url.openStream();
    OutputStream out = new FileOutputStream(file);
    try {
        byte[] buf = new byte[4096];
        int len;
        while ((len = is.read(buf)) >= 0)
            out.write(buf, 0, len);
    } catch (IOException iox) {
    } finally {
        try {
            out.close();
        } catch (IOException closeOutx) {
        }
    }
} catch (FileNotFoundException fnfx) {
} catch (IOException openx) {
} finally {
    try {
        if (is != null)
            is.close();
    } catch (IOException closeInx) {
    }
}
Second style:
File file = new File("hsjdhsaj");
InputStream is = null;
OutputStream out = null;
try {
    URL url = new URL("");
    is = url.openStream();
    out = new FileOutputStream(file);
    byte[] buf = new byte[4096];
    int len;
    while ((len = is.read(buf)) >= 0)
        out.write(buf, 0, len);
} catch (FileNotFoundException fnfx) {
} catch (IOException openx) {
} finally {
    try {
        if (out != null)
            out.close();
        if (is != null)
            is.close();
    } catch (IOException closeInx) {
    }
}
If I put
try {
    if (is != null) is.close();
} catch (IOException closeInx) { }
try {
    if (out != null) out.close();
} catch (IOException closeInx) { }
in the finally block for the second style, then are they both the same?
With the second style, is is not closed when out.close() throws an exception. The first style does not have this problem.
In both code snippets, exceptions are often silently swallowed. This can cause maintenance nightmares: something does not work and you have no idea why.
The first approach is more correct. Your second approach has a bug if an exception is thrown when calling out.close, because you'll never call is.close().
Of course, both of them are ugly. You should be using a utility method like IOUtils.closeQuietly() to close your streams. And you shouldn't be swallowing exceptions.
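For illustration, here is the copy loop rewritten with Apache Commons IO (org.apache.commons.io.IOUtils); file, is, and out mirror the question's variables, and the URL is a placeholder:
InputStream is = null;
OutputStream out = null;
try {
    is = new URL("http://example.com/").openStream();
    out = new FileOutputStream(file);
    IOUtils.copy(is, out); // buffered copy, replaces the manual read/write loop
} catch (IOException e) {
    e.printStackTrace(); // handle, don't swallow
} finally {
    IOUtils.closeQuietly(out); // each close is attempted independently
    IOUtils.closeQuietly(is);
}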
Yes, the first one is more correct, but also very ugly. That's why Java 7 improved exception handling a lot. In your case you can use try-with-resources:
The new syntax allows you to declare resources that are part of the try block. What this means is that you define the resources ahead of time and the runtime automatically closes those resources (if they are not already closed) after the execution of the try block.
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(
                new URL("http://www.yoursimpledate.server/").openStream())))
{
    String line = reader.readLine();
    SimpleDateFormat format = new SimpleDateFormat("MM/dd/yy");
    Date date = format.parse(line);
} catch (ParseException | IOException exception) {
    // handle I/O problems.
}
Take a look at Working with Java SE 7 Exception Changes

Reading zip archive from database

I have a zip file which I store in the database as a BLOB field. When I download it from there, the zip file is corrupted; I can open it only with 7-Zip. The file is fine when I open it before uploading it to the DB and while it is in the DB. When I retrieve the file from the database as a BLOB, I get this error when trying to unzip it on Unix:
Archive: test.zip
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of test.zip or
test.zip.zip, and cannot find test.zip.ZIP, period.
Here is the code that retrieves the zip from the database:
public oracle.sql.BLOB GetBlob(Connection myConn,
        CallableStatement cstmt) throws Exception {
    String strSql = null;
    BLOB tempBlob = null;
    try {
        strSql = .... // Here is the SQL procedure called to retrieve the BLOB field.
        cstmt = myConn.prepareCall(strSql);
        cstmt.registerOutParameter(1, OracleTypes.BLOB);
        cstmt.setLong(2, request_id);
        cstmt.execute();
        tempBlob = (oracle.sql.BLOB) cstmt.getObject(1);
        int bufsize = tempBlob.getBufferSize();
    } catch (Exception e) {
        e.printStackTrace();
        throw e;
    }
    return tempBlob;
}
Here is the reading:
oracle.sql.BLOB tempBlob = null;
Connection myConn = null;
CallableStatement cstmt = null;
try {
    myConn = DBHelper.getConnection();
    if (null == myConn)
        throw new SQLException();
    tempBlob = GetBlob(myConn, cstmt);
    int bufsize = tempBlob.getBufferSize();
    InputStream in = tempBlob.getBinaryStream();
    int length = 0;
    byte buf[] = new byte[bufsize];
    while ((in != null) && ((length = in.read(buf)) != -1)) {
        out.write(buf, 0, length);
    }
    in.close();
    // out.flush();
    // out.close();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (null != myConn) {
        try {
            myConn.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    if (cstmt != null) {
        try {
            cstmt.close();
        } catch (SQLException e) {
        }
    }
}
Could somebody help me? Thanks in advance.
Compare the files before and after. The difference should give you some hint as to what is going wrong.
Possible culprits are:
missing bytes at the end
converted bytes
a messed-up order of bytes
I'd expect that looking at the first 10 bytes, the last 10 bytes, and the total number of bytes should be sufficient to give you a good idea of what is going on.
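A minimal diagnostic sketch along those lines, assuming java.nio.file.Files, java.nio.file.Paths, and java.util.Arrays are imported and the surrounding method may throw IOException; the file names are placeholders:
byte[] a = Files.readAllBytes(Paths.get("before.zip"));
byte[] b = Files.readAllBytes(Paths.get("after.zip"));
System.out.println("sizes: " + a.length + " vs " + b.length);
System.out.println("first 10: "
        + Arrays.toString(Arrays.copyOf(a, 10)) + " vs "
        + Arrays.toString(Arrays.copyOf(b, 10)));
System.out.println("last 10: "
        + Arrays.toString(Arrays.copyOfRange(a, a.length - 10, a.length)) + " vs "
        + Arrays.toString(Arrays.copyOfRange(b, b.length - 10, b.length)));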
