My application is responsible for splitting a single TIFF file into multiple smaller files using a particular algorithm. Everything works fine, but what concerns me is that the files produced by the application far exceed the originals in size.
The total size of the original files processed by the application is about 26 MB, while the total size of the produced files is 387 MB! Below is a code snippet of the process. I'm an amateur when it comes to image compression and the ImageIO library and haven't been able to find anything helpful on the web, so I'd like to ask if there's something I could change to bring those sizes closer together. Ideally I'd like to use the same compression as the original.
final ImageWriter writer = ImageIO.getImageWritersByFormatName(resultsExtension).next();
final ImageWriteParam writeParams = writer.getDefaultWriteParam();
writeParams.setCompressionMode(ImageWriteParam.MODE_COPY_FROM_METADATA);

BufferedImage page = ImageUtils.getSinglePageFromTiffFile(documentToSplit, currentPageIndex);
while (currentPageIndex < pagesQty) {
    OutputStream outStream = null;
    ImageOutputStream imgOutStream = null;
    try {
        outStream = new FileOutputStream(newDocFile);
        imgOutStream = ImageIO.createImageOutputStream(outStream);
        writer.setOutput(imgOutStream);
        writer.prepareWriteSequence(null);
        writer.writeToSequence(new IIOImage(page, null, null), writeParams);
        currentPageIndex++;
        if (CONDITION) {
            writer.endWriteSequence();
            break;
        }
        writer.writeToSequence(new IIOImage(page, null, null), writeParams);
        currentPageIndex++;
    } finally {
        if (imgOutStream != null) {
            imgOutStream.close();
        }
        if (outStream != null) {
            outStream.close();
        }
    }
}
getSinglePageFromTiffFile method:
public static BufferedImage getSinglePageFromTiffFile(File file, int pageIndex)
        throws IOException {
    ImageInputStream is = ImageIO.createImageInputStream(file);
    ImageReader reader;
    try {
        reader = ImageIO.getImageReaders(is).next();
        reader.setInput(is);
        return reader.read(pageIndex);
    } finally {
        if (is != null) {
            is.close();
        }
    }
}
Reading your code I interpret the following:
It seems to me that you are reading from your source image into an uncompressed data structure (the BufferedImage), which does not carry any information about the original data compression.
In your writing logic you have set "copy compression from the input metadata" (ImageWriteParam.MODE_COPY_FROM_METADATA). As the read image does not carry any compression information itself, the image data is written out uncompressed.
With a compressed input image and uncompressed output images, it is no wonder that the individual parts together are larger than the input file. While it is possible that the input image also has the redundancy between pages compressed away (I don't know TIFF well enough to say this for sure), I'd say it is more likely that you are simply writing uncompressed image data out.
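If copying the compression from metadata cannot work because no metadata reaches the writer, one alternative might be to request an explicit compression type on the write parameters. A minimal sketch, under the assumption that the registered TIFF writer supports LZW (check writeParams.getCompressionTypes() for the names that are actually available):
// Sketch: ask the writer for explicit compression instead of copying it from (missing) metadata.
// "LZW" is an assumption; the supported names come from writeParams.getCompressionTypes().
final ImageWriteParam writeParams = writer.getDefaultWriteParam();
writeParams.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
writeParams.setCompressionType("LZW");
writer.writeToSequence(new IIOImage(page, null, null), writeParams);
Alternatively, reading each page's metadata with reader.getImageMetadata(pageIndex) and passing it as the third argument of IIOImage would give MODE_COPY_FROM_METADATA something to copy from.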
I'm desperate... I have tried and searched a lot, with no luck. Please help.
Bit of background:
Using a Raspberry Pi 3, I am developing a webcam streaming server because I don't want the ones already available. With raspistill the fps is very low (4 fps), which is why I am looking into the v4l2 option for streaming the webcam. For this I output the MJPEG video into a pipe.
Reading from this pipe, the first JPEG image is shown, but consecutive reads return null.
To investigate this further I made a small demo program - same result.
Here is the code I use:
Iterating 20 times, reading from the BufferedInputStream:
private void standardRead() {
    BufferedInputStream bis = null;
    try {
        bis = new BufferedInputStream(new FileInputStream(new File(image_path)));
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
    System.out.println("Is mark supported? " + bis.markSupported());
    try {
        for (int i = 0; i < 20; i++) {
            readingImage(bis, i);
            TimeUnit.MILLISECONDS.sleep(250);
        }
    } catch (IOException | InterruptedException e) {
        e.printStackTrace();
    }
}
Read method (enhanced with some System.out)
private void readingImage(BufferedInputStream bis, int iteration) throws IOException {
    System.out.println("Available bytes to read:" + bis.available());
    System.out.println("Reading image" + iteration);
    BufferedImage read = ImageIO.read(bis);
    if (read != null) {
        System.out.println(read.getRGB(25, 25) + " h:" + read.getHeight());
        System.out.println();
    } else {
        System.out.println("image is null");
    }
    read = null;
}
What I have already tried:
- Creating a new BufferedInputStream for each iteration
- Closing and creating a new BufferedInputStream
- Tried using mark and reset (no luck)
- Reading from the stream using read instead of ImageIO (reads forever, obviously, at about 20 fps)
When I execute the program, v4l2 reports that frames are being consumed, so the pipe is being emptied/read by the Java program and new frames can be fed into it.
Only the first image, and only during the first execution of the program, comes back successfully. A second execution of the program returns null for the first image too.
Here is an example output:
Is mark supported? true
Available bytes to read:65536
Reading image0
image is null
Available bytes to read:73720
Reading image1
image is null
Available bytes to read:73712
Reading image2
image is null
Available bytes to read:73704
Reading image3
image is null
Available bytes to read:73696
Reading image4
image is null
Available bytes to read:73688
Reading image5
image is null
One note, in case it is helpful: for the ImageIO.read(InputStream) method, the Javadoc states something I can't quite make sense of:
(...) The InputStream is wrapped in an ImageInputStream. If no
registered ImageReader claims to be able to read the resulting stream,
null is returned (...)
Thanks in advance for your help and advice.
One sleepless night later, I got something working.
Eureka: I stream 1000 frames using the v4l2 library into a Linux pipe and can read all 1000 frames. Including saving each frame to a directory, it takes about 103 seconds, i.e. roughly 10 fps. Not a single frame is skipped.
Here is how:
private void ReadImages(File path) {
    BufferedInputStream bis = null;
    int index = 0;
    ImageReader reader = null;
    try {
        bis = new BufferedInputStream(new FileInputStream(path));
        ImageInputStream stream = ImageIO.createImageInputStream(bis);
        while (bis.available() > 0) {
            if (gotReader(stream)) {
                reader = ImageIO.getImageReaders(stream).next();
                reader.setInput(stream);
                BufferedImage read = reader.read(index);
                System.out.println("Image height" + read.getHeight() + " image width:" + read.getWidth());
                stream.flush();
                index = 0;
            }
        }
    } catch (IOException e) {
        System.err.println(e.getMessage());
        //e.printStackTrace();
    }
}
Tip: Flush the stream frequently and reset the index. Without flushing, the growing buffer degrades performance dramatically.
Tip: Standard ImageIO does not read BGR3, RGB3, YU12, YUYV, YV12 or YVYU, but it does read H264 and MJPEG.
Tip: The reader is tested with:
if(ImageIO.getImageReaders(stream).hasNext())
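For context, the gotReader helper called in the code above is presumably just a wrapper around that check; a minimal sketch (the implementation is assumed, not taken from the original):
// Assumed implementation of the gotReader helper referenced above.
private static boolean gotReader(ImageInputStream stream) {
    return ImageIO.getImageReaders(stream).hasNext();
}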
I have some Word documents and Excel sheets which have some images along with the text content. I want to create a copy of such a file and keep it at a specific location. I tried the following method, which creates a file at the specified location, but the file is corrupted and cannot be read.
InputStream document = Thread.currentThread().getContextClassLoader().getResourceAsStream("upgradeworkbench/Resources/Upgrade_TD_Template.docx");
try {
    OutputStream outStream = null;
    Stage stage = new Stage();
    stage.setTitle("Save");
    byte[] buffer = new byte[document.available()];
    document.read(buffer);
    FileChooser fileChooser = new FileChooser();
    fileChooser.setInitialFileName(initialFileName);
    if (flag) {
        fileChooser.getExtensionFilters().addAll(new FileChooser.ExtensionFilter("Microsoft Excel Worksheet", "*.xls"));
    } else {
        fileChooser.getExtensionFilters().addAll(new FileChooser.ExtensionFilter("Microsoft Word Document", "*.docx"));
    }
    fileChooser.setTitle("Save File");
    File file = fileChooser.showSaveDialog(stage);
    if (file != null) {
        outStream = new FileOutputStream(file);
        outStream.write(buffer);
        // IOUtils.copy(document, outStream);
    }
} catch (IOException ex) {
    System.out.println(ex.getMessage());
}
Can anyone suggest a different way to get a proper copy of the file?
PS: I am reading the file using an InputStream because it is inside the project JAR.
PPS: I also tried Files.copy(), but it didn't work.
I suggest you never trust InputStream.available() to determine the real size of the input, because it just returns the number of bytes currently available to be read without blocking. It might return a small number, but that doesn't mean the file is small; it only means that the buffer is temporarily not full.
The right algorithm to read an InputStream fully and write it over an OutputStream is this:
int n;
byte[] buffer = new byte[4096];
do {
    n = input.read(buffer);
    if (n > 0) {
        output.write(buffer, 0, n);
    }
} while (n >= 0);
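For completeness, here is how that loop might be wrapped with try-with-resources for the original use case. This is only a sketch; the resource path is taken from the question and the method name copyResourceTo is made up for illustration:
// Hypothetical helper wrapping the copy loop above with try-with-resources.
private static void copyResourceTo(File file) throws IOException {
    try (InputStream input = Thread.currentThread().getContextClassLoader()
            .getResourceAsStream("upgradeworkbench/Resources/Upgrade_TD_Template.docx");
         OutputStream output = new FileOutputStream(file)) {
        int n;
        byte[] buffer = new byte[4096];
        do {
            n = input.read(buffer);
            if (n > 0) {
                output.write(buffer, 0, n);
            }
        } while (n >= 0);
    }
}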
You can use the Files.copy() methods.
Copies all bytes from an input stream to a file. On return, the input stream will be at end of stream.
Use:
Files.copy(document, file.toPath(), StandardCopyOption.REPLACE_EXISTING);
As the signature shows, the second argument is a Path, not a File.
Generally, since this is 2015, use Path and drop File; if an API still uses File, convert at the last possible moment and use Path everywhere else.
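In the context of the question, that might look roughly like this; a sketch reusing the question's document stream and the File returned by the FileChooser:
// Sketch: copy the classpath resource straight to the chosen file.
// "document" and "file" are the variables from the question's snippet.
if (file != null) {
    Files.copy(document, file.toPath(), StandardCopyOption.REPLACE_EXISTING);
}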
My question might not be entirely related to Java, but I'm currently looking for a way to combine several compressed (gzipped) text files without having to recompress them manually. Let's say I have 4 files, all text compressed using gzip, and I want to combine these into one single *.gz file without de- and recompressing them. My current method is to open an InputStream, parse each file line by line and store the lines in a GZIPOutputStream, which works but isn't very fast... I could of course also call
zcat file1 file2 file3 | gzip -c > output_all_four.gz
This would work too, but isn't really fast either.
My idea would be to copy the InputStream and write it to the OutputStream directly without "parsing" the stream, as I don't actually need to manipulate anything. Is something like this possible?
Below is a simple solution in Java (it does the same as my zcat ... example). Any kind of buffering of the input/output has been omitted to keep the code slim.
public class ConcatFiles {

    public static void main(String[] args) throws IOException {
        // concatenate the single gzip files to one gzip file
        try (InputStream isOne = new FileInputStream("file1.gz");
             InputStream isTwo = new FileInputStream("file2.gz");
             InputStream isThree = new FileInputStream("file3.gz");
             SequenceInputStream sis = new SequenceInputStream(new SequenceInputStream(isOne, isTwo), isThree);
             OutputStream bos = new FileOutputStream("output_all_three.gz")) {
            byte[] buffer = new byte[8192];
            int intsRead;
            while ((intsRead = sis.read(buffer)) != -1) {
                bos.write(buffer, 0, intsRead);
            }
            bos.flush();
        }

        // un-gzip the single gzip file; the output contains the
        // concatenated content of the single uncompressed files
        try (GZIPInputStream gzipis = new GZIPInputStream(new FileInputStream("output_all_three.gz"));
             OutputStream bos = new FileOutputStream("output_all_three")) {
            byte[] buffer = new byte[8192];
            int intsRead;
            while ((intsRead = gzipis.read(buffer)) != -1) {
                bos.write(buffer, 0, intsRead);
            }
            bos.flush();
        }
    }
}
The above method works if you just need to concatenate many gzipped files. In my case I had written a web servlet and my response was 20-30 KB, so I was sending the response gzipped.
I tried to gzip all my individual JS files at server start only and then append the dynamically generated code at runtime using the above method. I could print the entire response in my log file, but Chrome was able to unzip only the first file; the rest of the output arrived as raw bytes.
After some research I found out that this is not possible with Chrome, and they have closed the bug without fixing it:
https://bugs.chromium.org/p/chromium/issues/detail?id=20884
I want to read the images inside a .CBZ archive and store them in an ArrayList. I have tried the following solution, but it has at least two problems:
I get an OutOfMemoryError after adding 10-15 images to the ArrayList.
There must be a better way of getting the images into the ArrayList than writing them to a temp file and reading that back in.
public class CBZHandler {

    final int BUFFER = 2048;
    ArrayList<BufferedImage> images = new ArrayList<BufferedImage>();

    public void extractCBZ(ZipInputStream tis) throws IOException {
        ZipEntry entry;
        BufferedOutputStream dest = null;
        if (!images.isEmpty())
            images.clear();
        while ((entry = tis.getNextEntry()) != null) {
            System.out.println("Extracting " + entry.getName());
            int count;
            FileOutputStream fos = new FileOutputStream("temp");
            dest = new BufferedOutputStream(fos, BUFFER);
            byte[] data = new byte[BUFFER];
            while ((count = tis.read(data, 0, BUFFER)) != -1) {
                dest.write(data, 0, count);
            }
            dest.flush();
            dest.close();
            BufferedImage img = ImageIO.read(new FileInputStream("temp"));
            images.add(img);
        }
        tis.close();
    }
}
The "OutOfMemoryError" may or may not be inherent in the amount of data you're trying to store in memory. You may need to change your maximum heap size. However, you can certainly avoid writing to disk - just write to a ByteArrayOutputStream instead, then you can get at the data as a byte array - potentially creating a ByteArrayInputStream round it if you need to. Do you definitely need to add them in your list as BufferedImage rather than (say) keeping each as a byte[]?
Note that if you're able to use Guava it makes the "extract data from an InputStream" bit very easy:
byte[] data = ByteStreams.toByteArray(tis);
Each BufferedImage will typically require significantly more memory than the byte[] from which it is constructed. Cache the byte[] and stamp each one out to an image as needed.
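A minimal sketch of that idea, assuming the same ZipInputStream input as in the question; the class layout and the getPage method are illustrative, not taken from the original code:
// Sketch: cache each entry as a byte[] and decode to a BufferedImage only when needed.
public class CBZHandler {
    private final List<byte[]> pages = new ArrayList<>();

    public void extractCBZ(ZipInputStream tis) throws IOException {
        pages.clear();
        ZipEntry entry;
        while ((entry = tis.getNextEntry()) != null) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] data = new byte[2048];
            int count;
            while ((count = tis.read(data)) != -1) {
                bos.write(data, 0, count);
            }
            pages.add(bos.toByteArray()); // raw bytes, much smaller than a BufferedImage
        }
    }

    // Decode a single page on demand.
    public BufferedImage getPage(int index) throws IOException {
        return ImageIO.read(new ByteArrayInputStream(pages.get(index)));
    }
}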
I've been using JakartaFtpWrapper to upload files from the client Java application to my server (for backup purposes).
The files that are uploaded are text files, png files and jpgs.
I've noticed that the JPG files, which are valid on the local machine, somehow become unreadable (corrupt) on the server they were FTPed to.
The image file size is similar to the original, but the file is somehow corrupted.
Here's the code I'm using to write the JPG to the LOCAL disk:
public static void writeJpeg(BufferedImage bfImg, String fileName, float quality) throws IOException {
    FileImageOutputStream output = null;
    try {
        Iterator<ImageWriter> iter = ImageIO.getImageWritersByFormatName("jpeg");
        ImageWriter writer = iter.next();
        ImageWriteParam iwp = writer.getDefaultWriteParam();
        iwp.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        iwp.setCompressionQuality(quality); // a float between 0 and 1
        File file = new File(fileName);
        output = new FileImageOutputStream(file);
        writer.setOutput(output);
        IIOImage image = new IIOImage(bfImg, null, null);
        writer.write(null, image, iwp);
    } finally {
        if (output != null) {
            output.close();
        }
    }
}
The FTP code is straightforward:
JakartaFtpWrapper ftpClient = new JakartaFtpWrapper();
ftpClient.connectAndLogin(FTP_URL, FTP_USER, FTP_PASSWORD);
ftpClient.setPassiveMode(true);

File[] imageFiles = folder.listFiles();
for (int j = 0; j < imageFiles.length; j++) {
    File imageFile = imageFiles[j];
    if (imageFile != null && imageFile.isFile()
            && (FileUtils.getFileSuffix(imageFile).equals("jpg") || FileUtils.getFileSuffix(imageFile).equals("png"))) { // upload only image files
        ftpClient.uploadFile(imageFile.getAbsolutePath(), imageFile.getName());
    }
}
Thanks,
Ran
What's running on the server? Is it an "out of the box" FTP server or something you wrote?
Images are binary data. If JakartaFtpWrapper offers some option of putting the FTP transfer into binary mode, you should do that; I think the most likely cause of your problem is a bad default attempt to process the transfer in text mode. If you compare small images bytewise, you should see Carriage Returns ((char) 0x0d == (char) 13) being added or removed next to 0x0a's. If so, that's your problem.
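I don't know what JakartaFtpWrapper exposes for switching modes, so as a sketch here is how forcing binary mode looks with Apache Commons Net's FTPClient, which the wrapper appears to be built on; treat the exact call site as an assumption:
// Sketch using Apache Commons Net directly: force binary transfer mode before uploading.
FTPClient ftp = new FTPClient();
ftp.connect(FTP_URL);
ftp.login(FTP_USER, FTP_PASSWORD);
ftp.enterLocalPassiveMode();
ftp.setFileType(FTP.BINARY_FILE_TYPE); // prevents CR/LF mangling of the image bytes
try (InputStream in = new FileInputStream(imageFile)) {
    ftp.storeFile(imageFile.getName(), in);
}
ftp.logout();
ftp.disconnect();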