Reading a zip archive from a database - Java

I have a zip file which I store in the database as a BLOB field. When I download it from the database, the zip file is corrupted; I can open it only with 7-Zip. The file is fine when I open it before uploading it to the DB, and also while it is in the DB. When I retrieve the file from the database as a BLOB, I get this error when I try to unzip it on Unix:
Archive: test.zip
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of test.zip or
test.zip.zip, and cannot find test.zip.ZIP, period.
Here is the code that retrieves the zip from the database:
public oracle.sql.BLOB GetBlob(Connection myConn,
        CallableStatement cstmt) throws Exception {
    String strSql = null;
    BLOB tempBlob = null;
    try {
        strSql = .... // Here is the SQL procedure called to retrieve the blob field.
        cstmt = myConn.prepareCall(strSql);
        cstmt.registerOutParameter(1, OracleTypes.BLOB);
        cstmt.setLong(2, request_id);
        cstmt.execute();
        tempBlob = (oracle.sql.BLOB) cstmt.getObject(1);
        int bufsize = tempBlob.getBufferSize();
    } catch (Exception e) {
        e.printStackTrace();
        throw e;
    }
    return tempBlob;
}
Here is the reading code:
oracle.sql.BLOB tempBlob = null;
Connection myConn = null;
CallableStatement cstmt = null;
try {
    myConn = DBHelper.getConnection();
    if (null == myConn)
        throw new SQLException();
    tempBlob = GetBlob(myConn, cstmt);
    int bufsize = tempBlob.getBufferSize();
    InputStream in = tempBlob.getBinaryStream();
    int length = 0;
    byte buf[] = new byte[bufsize];
    while ((in != null) && ((length = in.read(buf)) != -1)) {
        out.write(buf, 0, length);
    }
    in.close();
    // out.flush();
    // out.close();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (null != myConn) {
        try {
            myConn.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    if (cstmt != null) {
        try {
            cstmt.close();
        } catch (SQLException e) {
        }
    }
}
Could somebody help me?
Thanks in advance.

Compare the files before and after. The difference should give you a hint about what is going wrong.
Possible culprits are:
missing bytes at the end
converted bytes
messed-up byte order
I'd expect that looking at the first 10 bytes, the last 10 bytes, and the total number of bytes should be sufficient to give you a good idea of what is going on.
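For example, a minimal sketch (not from the original answer; the file names are hypothetical) that prints each file's length plus its first and last 10 bytes, which is usually enough to spot truncation, conversion, or reordering:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Arrays;

public class CompareFiles {
    public static void main(String[] args) throws IOException {
        dump("original.zip"); // the file before upload (hypothetical name)
        dump("test.zip");     // the file retrieved from the database
    }

    static void dump(String name) throws IOException {
        try (RandomAccessFile f = new RandomAccessFile(name, "r")) {
            long len = f.length();
            byte[] head = new byte[(int) Math.min(10, len)];
            f.readFully(head);
            byte[] tail = new byte[head.length];
            f.seek(Math.max(0, len - tail.length));
            f.readFully(tail);
            System.out.printf("%s: %d bytes, first=%s, last=%s%n",
                    name, len, Arrays.toString(head), Arrays.toString(tail));
        }
    }
}

A valid zip starts with the bytes 0x50 0x4B ("PK"); if the retrieved file's length is shorter or its tail differs, the download path is truncating or converting the data.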

Related

Using Java JDBC to download a blob (audio) from MySQL. Things go well, but I can't play the audio

As the title describes, when I download the blob (audio) file from MySQL, things go well and I get the file, but I can't play the audio immediately, unless I terminate the process.
I guess the audio file is still held open by the program; if so, how can I solve this problem without terminating the program? Thanks!
Here is the code:
public void downloadAudio(int documentid, String pathname) {
    String sql = "SELECT storage FROM chatroom_tool WHERE documentid=?";
    ResultSet rSet = null;
    try {
        pstmt = conn.prepareStatement(sql);
        pstmt.setInt(1, documentid);
        rSet = pstmt.executeQuery();
        File file = new File(pathname);
        FileOutputStream output = new FileOutputStream(file);
        System.out.println("writing to file: " + file.getAbsolutePath());
        while (rSet.next()) {
            InputStream inputStream = rSet.getBinaryStream("storage");
            byte[] buffer = new byte[1024];
            while (inputStream.read(buffer) > 0) {
                output.write(buffer);
            }
        }
        System.out.println("downLoad success +++++");
    } catch (SQLException | IOException e) {
        e.printStackTrace();
    } finally {
        try {
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
Here is the picture I get when I open the audio without terminating the program: [screenshot]
inputStream.read does not have to fill the array with data completely; it returns the number of bytes it read. You must use that return value when you write the data to the output stream:
int bytes;
while ((bytes = inputStream.read(buffer)) > 0)
    output.write(buffer, 0, bytes);
Also, you have to close the output stream: Windows does not let other apps open the file as long as your program has it open.
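Putting both fixes together, a hedged sketch of the corrected loop (using the question's names; try-with-resources closes the streams even when an exception is thrown):

try (FileOutputStream output = new FileOutputStream(pathname)) {
    while (rSet.next()) {
        try (InputStream inputStream = rSet.getBinaryStream("storage")) {
            byte[] buffer = new byte[1024];
            int bytes;
            while ((bytes = inputStream.read(buffer)) > 0) {
                output.write(buffer, 0, bytes); // write only the bytes actually read
            }
        }
    }
}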

Java: corrupted zip file created when copying using NIO

I have implemented the following code to copy a (binary) file:
private void copyFileWithChannels(File aSourceFile, File aTargetFile) {
    log("Copying files with channels.");
    FileChannel inChannel = null;
    FileChannel outChannel = null;
    FileInputStream inStream = null;
    FileOutputStream outStream = null;
    try {
        inStream = new FileInputStream(aSourceFile);
        inChannel = inStream.getChannel();
        outStream = new FileOutputStream(aTargetFile);
        outChannel = outStream.getChannel();
        long bytesTransferred = 0;
        while (bytesTransferred < inChannel.size()) {
            bytesTransferred += inChannel.transferTo(0, inChannel.size(), outChannel);
        }
    } catch (FileNotFoundException e) {
        log.error("FileNotFoundException in copyFileWithChannels()", e);
    } catch (IOException e) {
        log.error("IOException in copyFileWithChannels()", e);
    } catch (Exception e) {
        log.error("Exception in copyFileWithChannels()", e);
    } finally {
        try {
            if (inChannel != null) inChannel.close();
            if (outChannel != null) outChannel.close();
            if (inStream != null) inStream.close();
            if (outStream != null) outStream.close();
        } catch (Exception e) {
            log.error("Exception in copyFileWithChannels() while closing the stream", e);
        }
    }
}
I tested the code with one zip file. When I verified the result, I found that the generated file is corrupt (its size had increased).
The source zip file is about 9 GB.
Try this:
while (bytesTransferred < inChannel.size()) {
    bytesTransferred += inChannel.transferTo(bytesTransferred, inChannel.size() - bytesTransferred, outChannel);
}
Also, I would refer to the Apache Commons IO FileUtils implementation as a reference:
https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/FileUtils.java
specifically
private static void doCopyFile(final File srcFile, final File destFile, final boolean preserveFileDate)
The transferTo method's first argument gives the position from which to transfer, not relative to where the stream left off, but relative to the start of the file. Since you put 0 there, it will always transfer from the start of the file. So that line needs to be
bytesTransferred += inChannel.transferTo(bytesTransferred, inChannel.size(), outChannel);
mavarazy mentioned in his answer that he's not sure you need a loop when using inChannel.size(), since the expectation is that supplying the whole size copies the entire file. However, the actual transfer might be less than the requested number of bytes, for example if the output channel's buffer has less room available, so you do need the loop as in his second code snippet.
Unless you have a good reason not to, it is best to use Files.copy(Path, Path, CopyOption...).
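For instance, a minimal sketch using the standard library (available since Java 7; the paths are hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CopyExample {
    public static void main(String[] args) throws IOException {
        Path source = Paths.get("source.zip");
        Path target = Paths.get("target.zip");
        // Files.copy handles the read/write loop internally and copies the whole file.
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
    }
}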

Reading a text file from MySQL

I'm trying to store a text file in a MySQL database, and when needed, save it to a file.
To save the file, I do:
public void saveFile_InDB(File file)
{
    try {
        String sql = "INSERT INTO sent_emails (fileName, time, clientName) values (?, ?, ?)";
        PreparedStatement statement = conn.prepareStatement(sql);
        statement.setString(1, new Date().toString());
        statement.setString(2, new Date().toString());
        InputStream inputStream = new FileInputStream(file);
        statement.setBinaryStream(3, inputStream);
        int row = statement.executeUpdate();
        if (row > 0) {
            System.out.println("File saved sucessfully.");
        }
        conn.close();
    } catch (SQLException ex) {
        ex.printStackTrace();
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}
And to retrieve and save the file:
public void retrieveFile_fromDB()
{
    try {
        Statement stmt = (Statement) conn.createStatement();
        ResultSet res = stmt.executeQuery("SELECT * FROM sent_emails WHERE clientName='sally'");
        FileOutputStream fos = new FileOutputStream("file.txt");
        if (res.next()) {
            Blob File = (Blob) res.getBlob("fileName");
            InputStream is = File.getBinaryStream();
            int b = 0;
            while ((b = is.read()) != -1) {
                fos.write(b);
            }
            fos.flush();
        }
    } catch (IOException e) {
        e.getMessage(); e.printStackTrace();
        System.out.println(e);
    } catch (SQLException e) {
        e.getMessage(); e.printStackTrace();
        System.out.println(e);
    }
}
Storing the file works, but when I try to retrieve and save it, nothing is stored in the output file?
If you want to read the file from the MySQL DB, change this part of your code:
Blob File = (Blob) res.getBlob("fileName");
InputStream is = File.getBinaryStream();
int b = 0;
while ((b = is.read()) != -1) {
    fos.write(b);
}
fos.flush();
Use this code instead to read an array of bytes:
byte[] bs = res.getBytes("fileName");
fos.write(bs);
It will work.
If you return multiple files from the DB, you must declare
FileOutputStream fos = new FileOutputStream("file.txt");
inside the while loop and change the file name each time to avoid overwriting.
You do not seem to put into the database the things that the column names describe. fileName and time, for example, are both set to a timestamp, and clientName is set to the contents of the file. When you later try to select based on clientName, you are actually selecting based on the contents of the file.
Furthermore, when reading the data, you are reading the blob data from the column fileName, but this is wrong because:
fileName contains new Date().toString(), not the contents of the file
fileName should surely contain the file's name, not its contents
A corrected save method is sketched below.
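A hedged sketch of the corrected insert, assuming the table gains a separate column for the file contents (called contents here, an assumption since the schema isn't shown; conn and the java.sql/java.io imports are as in the question):

public void saveFile_InDB(File file, String clientName)
{
    String sql = "INSERT INTO sent_emails (fileName, time, clientName, contents) values (?, ?, ?, ?)";
    try (PreparedStatement statement = conn.prepareStatement(sql);
         InputStream inputStream = new FileInputStream(file)) {
        statement.setString(1, file.getName());            // the file's name
        statement.setString(2, new Date().toString());     // the time, as in the question
        statement.setString(3, clientName);                // the client's name
        statement.setBinaryStream(4, inputStream);         // the file's contents
        statement.executeUpdate();
    } catch (SQLException | IOException ex) {
        ex.printStackTrace();
    }
}

The retrieval side would then read the blob from contents (via res.getBlob("contents") or, as the other answer suggests, res.getBytes("contents")) instead of from fileName.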

A problem while retrieving a file from an FTPS server

I am working on an application in which I perform file store, retrieve, and delete operations. To identify the files on the server I use an index file (a serialized HashMap). Every time I perform an upload operation, I update the "index" file and upload it to the server along with the other files.
For the delete operation, I first retrieve the "index" file, delete the files from the server based on the index, update the "index" file, and upload it to the server again.
I can perform the upload operation successfully, but during the delete operation I get a "java.io.EOFException" when I try to retrieve the "index" file.
I am using the following code to download the "index" file from the FTPS server:
// download index file
if (service.retrFile("INDEX", "") == service.OK) {
    try {
        ObjectInputStream objIn = new ObjectInputStream(new FileInputStream("INDEX"));
        try {
            Map<String, FileData> filesUploaded = (HashMap<String, FileData>) objIn.readObject();
        } catch (ClassNotFoundException ex) {
            ex.printStackTrace();
        }
        objIn.close();
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}
Where "service.ok" returns '0' if it is successfully connected to FTPS server
and "FileData" contains information about file(attributes).
Same code i am using while performing uploading operation. there it is working fine with no exception. but while performing delete operation when i am retrieving "index" file i am getting exception on the statement :
Map filesUploaded = (HashMap) objIn.readObject();
Exception is :
SEVERE: null
java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2298)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2767)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:798)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:298)
at com.pixelvault.gui.DeleteServerFilesDialog.startDeleting(DeleteServerFilesDialog.java:447)
I have checked that the FTPS server connections are properly closed after the corresponding operations.
I do not see where I am going wrong. Please give me your valuable suggestions; thanks for any that help me overcome this problem.
I am using org.apache.commons.net.ftp, and retrFile is a method I created for retrieving files from the server. Here is the code for retrFile:
FTPSClient ftp;

public int retrFile(String filename, String savePath) {
    if (!connected) {
        return ERR;
    }
    FileOutputStream fout = null;
    InputStream bin = null;
    try {
        ftp.enterLocalPassiveMode();
        fout = new FileOutputStream(savePath + filename);
        bin = ftp.retrieveFileStream(filename);
        if (bin == null) {
            fout.close();
            return ERR;
        }
        byte[] b = new byte[ftp.getBufferSize()];
        int bytesRead = 0;
        while ((bytesRead = bin.read(b, 0, b.length)) != -1) {
            fout.write(b, 0, bytesRead);
        }
        ftp.completePendingCommand();
        fout.close();
    } catch (FTPConnectionClosedException ex) {
        ex.printStackTrace();
        connected = false;
        return NOT_CONNECTED;
    } catch (IOException ex) {
        ex.printStackTrace();
        return ERR;
    } finally {
        try {
            fout.close();
        } catch (IOException ex) {
            ex.printStackTrace();
            return ERR;
        }
        try {
            if (bin != null) {
                bin.close();
            }
        } catch (IOException ex) {
            ex.printStackTrace();
            return ERR;
        }
    }
    return OK;
}
Are you sure the INDEX file is correctly downloaded?
Is it present in the filesystem when the application is closed?
What FTP library are you using? I only know Commons Net from Apache, and I do not recognize the retrFile method. Could it be threaded, so that the file is not completely downloaded when the readObject statement is executed?
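Since you are on Commons Net, one thing worth checking: FTPClient.completePendingCommand() returns false when the server does not confirm a successful transfer, and retrFile ignores that return value. A hedged sketch of the check, using the question's variables:

// After the read loop in retrFile: verify the transfer actually completed
// before reporting OK; otherwise the INDEX file on disk may be truncated,
// which would produce exactly this EOFException on deserialization.
boolean transferOk = ftp.completePendingCommand();
fout.close();
if (!transferOk) {
    return ERR; // incomplete download; do not try to readObject() from it
}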

image upload problem

I am writing code to upload a file to Oracle as a BLOB, but while saving the file I get the exception java.sql.SQLException: ORA-01460: unimplemented or unreasonable conversion requested.
The following are the functions that convert my Blob to a byte array:
private byte[] convertToByteArray(Blob fromBlob) {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try {
        return convertToByteArrayImpl(fromBlob, baos);
    } catch (SQLException e) {
        throw new RuntimeException(e);
    } catch (IOException e) {
        throw new RuntimeException(e);
    } finally {
        if (baos != null) {
            try {
                baos.close();
            } catch (IOException ex) {
            }
        }
    }
}

private byte[] convertToByteArrayImpl(Blob fromBlob, ByteArrayOutputStream baos)
        throws SQLException, IOException {
    byte[] buf = new byte[4000];
    InputStream is = fromBlob.getBinaryStream();
    try {
        for (;;) {
            int dataSize = is.read(buf);
            if (dataSize == -1)
                break;
            baos.write(buf, 0, dataSize);
        }
    } finally {
        if (is != null) {
            try {
                is.close();
            } catch (IOException ex) {
            }
        }
    }
    return baos.toByteArray();
}
I think it's because my byte length is above 4000, but what is the solution for saving more than 4000 bytes?
One of the quirks of working with BLOBs in earlier versions of Oracle was that we could not include the full BLOB in an insert statement; it had to be a two-stage process.
The 4000-byte limit is the key, because that figure is the upper bound of what Oracle considers to be a SQL datatype. So Oracle can handle LOBs of 4000 bytes or less without a hitch, but hurls the ORA-01460 exception if we ask it to accept a larger LOB. The workaround was to insert the row with an empty_blob() placeholder and then update the new row:
insert into t42 (id, blob_col) values (1, empty_blob());
update t42
set blob_col = some_blob_variable
where id = 1;
This might be the cause of your problem; it is difficult to tell without seeing the whole of your code.
NB: As far as I can tell the preceding does not apply to Oracle 11g: we can now easily insert rows containing oversize BLOBs.
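In JDBC the same two-step pattern looks roughly like the following sketch (table and column names are taken from the SQL above; largeByteArray stands in for your converted bytes and is hypothetical, and conn is an open java.sql.Connection):

conn.setAutoCommit(false);
try (Statement st = conn.createStatement()) {
    // Step 1: insert a placeholder LOB, which stays under the 4000-byte SQL limit.
    st.executeUpdate("insert into t42 (id, blob_col) values (1, empty_blob())");
    // Step 2: lock the row, fetch the LOB locator, and stream the real data into it.
    try (ResultSet rs = st.executeQuery("select blob_col from t42 where id = 1 for update")) {
        rs.next();
        Blob blob = rs.getBlob(1);
        try (OutputStream out = blob.setBinaryStream(1)) { // position is 1-based
            out.write(largeByteArray);                     // may exceed 4000 bytes
        }
    }
}
conn.commit();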
