Java upload jpg using JakartaFtpWrapper - makes the file unreadable - java

I've been using JakartaFtpWrapper to upload files from the client Java application to my server (for backup purposes).
The files that are uploaded are text files, png files and jpgs.
I've noticed that the jpg files, which are valid on the local machine, somehow become unreadable (corrupt) on the server they were FTPd to.
The uploaded file's size is similar to the original, but the image itself is defective.
Here's the code I'm using to write the jpg to the LOCAL disk:
public static void writeJpeg(BufferedImage bfImg, String fileName, float quality) throws IOException {
    FileImageOutputStream output = null;
    try {
        Iterator<ImageWriter> iter = ImageIO.getImageWritersByFormatName("jpeg");
        ImageWriter writer = iter.next();
        ImageWriteParam iwp = writer.getDefaultWriteParam();
        iwp.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        iwp.setCompressionQuality(quality); // a float between 0 and 1
        File file = new File(fileName);
        output = new FileImageOutputStream(file);
        writer.setOutput(output);
        IIOImage image = new IIOImage(bfImg, null, null);
        writer.write(null, image, iwp);
    } finally {
        if (output != null) {
            output.close();
        }
    }
}
The FTP code is straightforward:
JakartaFtpWrapper ftpClient = new JakartaFtpWrapper();
ftpClient.connectAndLogin(FTP_URL, FTP_USER, FTP_PASSWORD);
ftpClient.setPassiveMode(true);
File[] imageFiles = folder.listFiles();
for (int j = 0; j < imageFiles.length; j++) {
    File imageFile = imageFiles[j];
    // upload only image files
    if (imageFile != null && imageFile.isFile()
            && (FileUtils.getFileSuffix(imageFile).equals("jpg")
                || FileUtils.getFileSuffix(imageFile).equals("png"))) {
        ftpClient.uploadFile(imageFile.getAbsolutePath(), imageFile.getName());
    }
}
Thanks,
Ran

What's running on the server? Is it an "out of the box" FTP server or something you wrote?
Images are binary data. If JakartaFtpWrapper offers an option to put the FTP transfer into binary mode, use it; the most likely cause of your problem is a default that processes the transfer in text (ASCII) mode. If you compare a small image byte by byte with its uploaded copy, you should see carriage returns ((char) 0x0d == (char) 13) being added or removed next to 0x0a bytes. If so, that's your problem.
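If the wrapper exposes the underlying Apache Commons Net FTPClient (JakartaFtpWrapper is a thin layer over it), switching to binary mode is a single call. Here is a minimal sketch using the plain Commons Net client directly; the host, credentials and file names are placeholders, not your actual setup:
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class BinaryFtpUpload {
    public static void upload(String host, String user, String password, File localFile) throws Exception {
        FTPClient ftp = new FTPClient();
        try {
            ftp.connect(host);
            ftp.login(user, password);
            ftp.enterLocalPassiveMode();
            // The crucial line: switch from the default ASCII mode to binary,
            // so image bytes are transferred unchanged.
            ftp.setFileType(FTP.BINARY_FILE_TYPE);
            try (InputStream in = new FileInputStream(localFile)) {
                ftp.storeFile(localFile.getName(), in);
            }
        } finally {
            if (ftp.isConnected()) {
                ftp.logout();
                ftp.disconnect();
            }
        }
    }
}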

Related

called a soap web service which returns a zip file as an attachment. How to unzip it in memory?

I have seen posts about how to unzip files using Java, where the zip file is located somewhere on disk. In my case it's different.
I have code which calls a SOAP web service. The service response includes an attachment which is a zip file. I have been able to get the attachment. Here is part of the code:
Iterator<?> i = soapResponse.getAttachments();
Object obj = null;
AttachmentPart att = (AttachmentPart) i.next();
So I have the zip file as an AttachmentPart; however, I could also do:
byte[] arr1 = att.getRawContentBytes();
which would give me the array of bytes containing the zip file.
Or I could do:
Object obj = att.getContent();
So I can get the zip file in different formats/types. The zip file contains two .csv files, and I have to do different things with those files. To make my question simpler, all I am looking to do for now is to get the two .csv files and print their contents to the console.
I want to do everything in memory. I don't want to put the content of the zip files on disk.
How can I unzip the attachment and print the content?
If you grab att.getRawContent() from the AttachmentPart object, you can pass it to the built-in ZipInputStream to read the contents of the zip file. You can then write the bytes read from the ZipInputStream directly to System.out to view the contents on the console.
Below is an example that should read the zip contents and then write the entry name followed by the entry contents to standard out, assuming you pass it the AttachmentPart that contains the zip file. It will also filter out any entries that are directories so that they are not printed.
public static void printAttachmentPartZip(AttachmentPart att) throws IOException, SOAPException {
    try (ZipInputStream zis = new ZipInputStream(att.getRawContent())) {
        byte[] buffer = new byte[1024];
        for (ZipEntry zipEntry = zis.getNextEntry(); zipEntry != null; zipEntry = zis.getNextEntry()) {
            if (zipEntry.isDirectory()) {
                continue;
            }
            System.out.println(zipEntry.getName());
            for (int len = zis.read(buffer); len > 0; len = zis.read(buffer)) {
                System.out.write(buffer, 0, len);
            }
        }
    }
}
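If you need the CSV contents in memory for further processing rather than printed to the console, a small variation of the same idea collects each entry into a byte array. This is only a sketch along the same lines as the method above (readZipEntries is a hypothetical helper name):
public static Map<String, byte[]> readZipEntries(AttachmentPart att) throws IOException, SOAPException {
    // Collect every non-directory entry into an in-memory byte array, keyed by entry name.
    Map<String, byte[]> entries = new LinkedHashMap<>();
    try (ZipInputStream zis = new ZipInputStream(att.getRawContent())) {
        byte[] buffer = new byte[1024];
        for (ZipEntry zipEntry = zis.getNextEntry(); zipEntry != null; zipEntry = zis.getNextEntry()) {
            if (zipEntry.isDirectory()) {
                continue;
            }
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            for (int len = zis.read(buffer); len > 0; len = zis.read(buffer)) {
                bos.write(buffer, 0, len);
            }
            entries.put(zipEntry.getName(), bos.toByteArray());
        }
    }
    return entries;
}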

Image compression and ImageIO library

My application is responsible for splitting a single TIFF file into multiple smaller files using a particular algorithm. Everything works fine, but what concerns me is that the files produced by the application are much larger than the originals.
The total size of the original files processed by the application is about 26 MB, while the total size of the produced files is 387 MB! Below is a code snippet of the process. I'm an amateur when it comes to image compression and the ImageIO library and haven't been able to find anything helpful on the web, so I'd like to ask if there's something I could change to bring those sizes closer together. Ideally I'd like to use the same compression as the original.
final ImageWriter writer = ImageIO.getImageWritersByFormatName(resultsExtension).next();
final ImageWriteParam writeParams = writer.getDefaultWriteParam();
writeParams.setCompressionMode(ImageWriteParam.MODE_COPY_FROM_METADATA);
BufferedImage page = ImageUtils.getSinglePageFromTiffFile(documentToSplit, currentPageIndex);
while (currentPageIndex < pagesQty) {
    OutputStream outStream = null;
    ImageOutputStream imgOutStream = null;
    try {
        outStream = new FileOutputStream(newDocFile);
        imgOutStream = ImageIO.createImageOutputStream(outStream);
        writer.setOutput(imgOutStream);
        writer.prepareWriteSequence(null);
        writer.writeToSequence(new IIOImage(page, null, null), writeParams);
        currentPageIndex++;
        if (CONDITION) {
            writer.endWriteSequence();
            break;
        }
        writer.writeToSequence(new IIOImage(page, null, null), writeParams);
        currentPageIndex++;
    } finally {
        if (imgOutStream != null) {
            imgOutStream.close();
        }
        if (outStream != null) {
            outStream.close();
        }
    }
}
getSinglePageFromTiffFile method:
public static BufferedImage getSinglePageFromTiffFile(File file, int pageIndex)
        throws IOException {
    ImageInputStream is = ImageIO.createImageInputStream(file);
    ImageReader reader;
    try {
        reader = ImageIO.getImageReaders(is).next();
        reader.setInput(is);
        return reader.read(pageIndex);
    } finally {
        if (is != null) {
            is.close();
        }
    }
}
Reading your code, I interpret the following:
You are reading the source image into an uncompressed data structure (the BufferedImage). This data structure carries no information about the original compression.
In your writing logic you have set "copy compression from the input image" (ImageWriteParam.MODE_COPY_FROM_METADATA). Since the in-memory image carries no compression information of its own, the image data is written out uncompressed.
With a compressed input image and uncompressed output images, it is no wonder that the individual tiles together are larger than the input file. While it might be possible that the input image also has redundancy between pages compressed away (I don't know TIFF well enough to say for sure), I'd say it is more likely that you are simply writing out uncompressed image data.
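If the goal is output comparable in size to the input, one option is to request explicit compression from the writer instead of MODE_COPY_FROM_METADATA. Below is a rough sketch; the set of supported compression type names ("LZW", "Deflate", "CCITT T.6", ...) depends on which TIFF ImageIO plugin is installed, so the "LZW" choice here is an assumption you should verify against getCompressionTypes():
final ImageWriteParam writeParams = writer.getDefaultWriteParam();
writeParams.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
// Ask the plugin which compression types it actually supports before picking one.
String[] supported = writeParams.getCompressionTypes();
System.out.println("Supported TIFF compression types: " + java.util.Arrays.toString(supported));
writeParams.setCompressionType("LZW"); // lossless; "Deflate" is another common choice
// For bilevel (black and white) scans, "CCITT T.6" usually compresses far better.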

How to make a copy of a file containing images and text using java

I have some Word documents and Excel sheets which contain images along with text content. I want to create a copy of such a file and save it at a specific location. I tried the following method, which creates a file at the specified location, but the file is corrupted and cannot be read.
InputStream document = Thread.currentThread().getContextClassLoader()
        .getResourceAsStream("upgradeworkbench/Resources/Upgrade_TD_Template.docx");
try {
    OutputStream outStream = null;
    Stage stage = new Stage();
    stage.setTitle("Save");
    byte[] buffer = new byte[document.available()];
    document.read(buffer);
    FileChooser fileChooser = new FileChooser();
    fileChooser.setInitialFileName(initialFileName);
    if (flag) {
        fileChooser.getExtensionFilters().addAll(new FileChooser.ExtensionFilter("Microsoft Excel Worksheet", "*.xls"));
    } else {
        fileChooser.getExtensionFilters().addAll(new FileChooser.ExtensionFilter("Microsoft Word Document", "*.docx"));
    }
    fileChooser.setTitle("Save File");
    File file = fileChooser.showSaveDialog(stage);
    if (file != null) {
        outStream = new FileOutputStream(file);
        outStream.write(buffer);
        // IOUtils.copy(document, outStream);
    }
} catch (IOException ex) {
    System.out.println(ex.getMessage());
}
Can anyone suggest a different way to get a proper copy of the file?
PS: I am reading the file using an InputStream because it is inside the project jar.
PPS: I also tried Files.copy() but it didn't work.
I suggest you never trust InputStream.available() to know the real size of the input, because it only returns the number of bytes that can be read immediately without blocking. It might return a small number, which doesn't mean the file is small, only that the buffer is temporarily half-full.
The right algorithm to read an InputStream fully and write it to an OutputStream is this:
int n;
byte[] buffer = new byte[4096];
do {
    n = input.read(buffer);
    if (n > 0) {
        output.write(buffer, 0, n);
    }
} while (n >= 0);
You can use the Files.copy() methods.
Copies all bytes from an input stream to a file. On return, the input stream will be at end of stream.
Use:
Files.copy(document, file.toPath(), StandardCopyOption.REPLACE_EXISTING);
As the javadoc says, the second argument is a Path, not a File.
Generally, since this is 2015, use Path and drop File; if an API still uses File, arrange things so the conversion happens at the last possible moment, and use Path all the way through.
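Putting the two points together, the save step from the question could be reduced to something like the sketch below (the resource path, fileChooser and stage are the question's own; this is an untested outline, not a drop-in fix):
try (InputStream document = Thread.currentThread().getContextClassLoader()
        .getResourceAsStream("upgradeworkbench/Resources/Upgrade_TD_Template.docx")) {
    File file = fileChooser.showSaveDialog(stage);
    if (file != null) {
        // Copies the whole stream regardless of what available() reports.
        Files.copy(document, file.toPath(), StandardCopyOption.REPLACE_EXISTING);
    }
} catch (IOException ex) {
    ex.printStackTrace();
}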

zip the files which are present at one FTP location and copy to another FTP location directly

I want to create a zip file from files that are at one FTP location and copy this zip file to another FTP location, without saving anything locally.
This works well for small files, around 1 MB or so.
But if the file size is large, like 100 MB, 200 MB or 300 MB, it gives the following error:
java.io.FileNotFoundException: STOR myfile.zip : 550 The process cannot access the
file because it is being used by another process.
at sun.net.ftp.FtpClient.readReply(FtpClient.java:251)
at sun.net.ftp.FtpClient.issueCommand(FtpClient.java:208)
at sun.net.ftp.FtpClient.openDataConnection(FtpClient.java:398)
at sun.net.ftp.FtpClient.put(FtpClient.java:609)
My code is:
URLConnection urlConnection = null;
ZipOutputStream zipOutputStream = null;
InputStream inputStream = null;
byte[] buf;
int ByteRead, ByteWritten = 0;

// Destination where the zip file will be written:
// an FTP URL of the form ftp://user:password@host/file;type=i opens an upload stream.
URL url = new URL("ftp://" + ftpuser + ":" + ftppass + "@" + ftphost + "/"
        + fileNameToStore + ";type=i");
urlConnection = url.openConnection();
OutputStream outputStream = urlConnection.getOutputStream();
zipOutputStream = new ZipOutputStream(outputStream);
buf = new byte[size];
for (int i = 0; i < li.size(); i++) {
    try {
        // Source from which each file is read;
        // li holds URLs such as http://xyz.com/folder/myPDF.pdf
        String source = (String) li.get(i);
        URL u = new URL(source);
        URLConnection uCon = u.openConnection();
        inputStream = uCon.getInputStream();
        zipOutputStream.putNextEntry(new ZipEntry(source.substring(source.lastIndexOf("/") + 1).trim()));
        while ((ByteRead = inputStream.read(buf)) != -1) {
            zipOutputStream.write(buf, 0, ByteRead);
            ByteWritten += ByteRead;
        }
        zipOutputStream.closeEntry();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
if (inputStream != null) {
    try {
        inputStream.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
if (zipOutputStream != null) {
    try {
        zipOutputStream.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Can anybody let me know how I can avoid this error and handle large files?
This is unrelated to file sizes; as the error says, you can't replace the file because some other process is currently locking it.
The reason you see it more often with large files is that they take longer to transfer, so the chance of a concurrent access is higher.
So the only solution is to make sure that no one uses the file when you try to transfer it. Good luck with that.
Possible other solutions:
Don't use Windows on the server.
Transfer the file under a temporary name and rename it when the transfer is complete (see the sketch after this list). That way, other processes never see incomplete files. Always a good thing.
Use rsync instead of reinventing the wheel.
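For the rename approach, here is a minimal sketch using Apache Commons Net's FTPClient (assuming that client is acceptable; ftphost, ftpuser, ftppass and localZip are placeholders):
FTPClient ftp = new FTPClient();
ftp.connect(ftphost);
ftp.login(ftpuser, ftppass);
ftp.enterLocalPassiveMode();
ftp.setFileType(FTP.BINARY_FILE_TYPE);
try (InputStream in = new FileInputStream(localZip)) {
    // Upload under a temporary name so other processes never see a half-written zip.
    String tempName = "myfile.zip.part";
    if (ftp.storeFile(tempName, in)) {
        ftp.rename(tempName, "myfile.zip"); // rename is cheap and effectively atomic on most servers
    }
}
ftp.logout();
ftp.disconnect();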
Back in the day, before we had network security, there were FTP servers that allowed 3rd party transfers. You could use site specific commands and send a file to another FTP server directly. Those days are long gone. Sigh.
Ok, maybe not long gone. Some FTP servers support the proxy command. There is a discussion here: http://www.math.iitb.ac.in/resources/manuals/Unix_Unleashed/Vol_1/ch27.htm

check if the file is of a certain type

I want to validate whether all the files in a directory are of a certain type. What I have done so far is:
private static final String[] IMAGE_EXTS = { "jpg", "jpeg" };

private void validateFolderPath(String folderPath, final String[] ext) {
    File dir = new File(folderPath);
    int totalFiles = dir.listFiles().length;
    // Filter the files with JPEG or JPG extensions.
    File[] matchingFiles = dir.listFiles(new FileFilter() {
        public boolean accept(File pathname) {
            return pathname.getName().endsWith(ext[0])
                    || pathname.getName().endsWith(ext[1]);
        }
    });
    // Check if all the files have JPEG or JPG extensions.
    // Terminate if validation fails.
    if (matchingFiles.length != totalFiles) {
        System.out.println("All the files should be of type " + ext[0]
                + " or " + ext[1]);
        System.exit(0);
    } else {
        return;
    }
}
This works fine if the file names have an extension like {file.jpeg, file.jpg}.
It fails if the files have no extension, e.g. {file1, file2}.
When I run file on such a file in my terminal I get:
$ file folder/file1
folder/file1: JPEG image data, JFIF standard 1.01
Update 1:
I tried to check the file's magic number to see whether it is a JPEG:
for (int i = 0; i < totalFiles; i++) {
    DataInputStream input = new DataInputStream(
            new BufferedInputStream(new FileInputStream(
                    dir.listFiles()[i])));
    if (input.readInt() == 0xffd8ffe0) {
        isJPEGFlag = true;
    } else {
        isJPEGFlag = false;
        try {
            input.close();
        } catch (IOException ignore) {
        }
        System.out.println("File not JPEG");
        System.exit(0);
    }
}
I ran into another problem. There are some .DS_Store files in my folder.
Any idea how to ignore them ?
Firstly, file extensions are not mandatory; a file without an extension can very well be a valid JPEG file.
Check the JPEG/JFIF specification: file formats generally start with a fixed sequence of bytes that identifies the format. This is not entirely straightforward, but I am not sure there is a better way.
In a nutshell, you have to open each file, read the first n bytes (how many depends on the format), and check whether they match the format you expect. If they do, it's a valid JPEG file even if it has an .exe extension, or no extension at all.
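In Java, that check boils down to reading the first two bytes and comparing them against the JPEG SOI marker 0xFF 0xD8. A minimal sketch follows (looksLikeJpeg is a hypothetical helper; it also skips hidden files such as .DS_Store):
static boolean looksLikeJpeg(File f) throws IOException {
    // Ignore hidden files like .DS_Store before looking at the content.
    if (f.isHidden() || f.getName().startsWith(".")) {
        return false;
    }
    try (DataInputStream in = new DataInputStream(
            new BufferedInputStream(new FileInputStream(f)))) {
        // Every JPEG starts with the SOI marker 0xFF 0xD8, regardless of JFIF/Exif flavour.
        return in.readUnsignedShort() == 0xFFD8;
    }
}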
For JPEGs you can check the magic number in the header of the file:
static bool HasJpegHeader(string filename)
{
    using (BinaryReader br = new BinaryReader(File.Open(filename, FileMode.Open)))
    {
        UInt16 soi = br.ReadUInt16();
        UInt16 jfif = br.ReadUInt16();
        return soi == 0xd8ff && jfif == 0xe0ff;
    }
}
A more complete method, which covers Exif as well, is here: C# How can I test a file is a jpeg?
One good (though expensive) check for validity as an image understood by J2SE is to try ImageIO.read(File) on it. That method throws some quite helpful exceptions if it does not find an image in the file provided.
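A sketch of that approach (slow but thorough, since it actually decodes the image data; isReadableImage is a hypothetical helper name):
static boolean isReadableImage(File f) {
    try {
        // null means no registered ImageReader recognised the content;
        // an IOException usually means the data is corrupt or unreadable.
        return ImageIO.read(f) != null;
    } catch (IOException e) {
        return false;
    }
}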
