I'm desperate... I have tried and searched a lot, with no luck. Please help.
A bit of background:
Using a Raspberry Pi 3, I am developing a webcam streaming server because the ones available don't fit my needs. With raspistill the frame rate is very low (about 4 fps), which is why I am looking into v4l2 for streaming the webcam. For this I output the MJPEG video into a pipe.
Reading from this pipe, the first JPEG image is decoded, but consecutive reads return null.
To investigate this further I wrote a small demo program, with the same result.
Here is the code I use:
Iterating 20 times, reading from a BufferedInputStream:
private void standardRead()
{
    BufferedInputStream bis = null;
    try {
        bis = new BufferedInputStream(new FileInputStream(new File(image_path)));
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
    System.out.println("Is mark supported? "+bis.markSupported());
    try {
        for(int i=0;i<20;i++)
        {
            readingImage(bis,i);
            TimeUnit.MILLISECONDS.sleep(250);
        }
    } catch (IOException | InterruptedException e) {
        e.printStackTrace();
    }
}
Read method (enhanced with some System.out output):
private void readingImage(BufferedInputStream bis, int iteration) throws IOException
{
    System.out.println("Available bytes to read:"+bis.available());
    System.out.println("Reading image"+iteration);
    BufferedImage read = ImageIO.read(bis);
    if(read!=null){
        System.out.println(read.getRGB(25, 25)+" h:"+read.getHeight());
        System.out.println();
    } else {
        System.out.println("image is null");
    }
    read = null;
}
What I have already tried:
- Creating a new BufferedInputStream for each iteration
- Closing and creating a new BufferedInputStream
- Using mark and reset (no luck)
- Reading from the stream with read() instead of ImageIO (reads forever, obviously, at about 20 fps)
When I execute the program, v4l2 reports that frames are being consumed, so the pipe is being emptied/read by the Java program and new frames can be fed into it.
Only the first image, and only during the first execution of the program, comes back non-null. A second execution of the program returns null for the first image too.
Here is an example output:
Is mark supported? true
Available bytes to read:65536
Reading image0
image is null
Available bytes to read:73720
Reading image1
image is null
Available bytes to read:73712
Reading image2
image is null
Available bytes to read:73704
Reading image3
image is null
Available bytes to read:73696
Reading image4
image is null
Available bytes to read:73688
Reading image5
image is null
One note that may be helpful: for the ImageIO.read(InputStream) method, the Javadoc states something strange which I can't make sense of:
(...) The InputStream is wrapped in an ImageInputStream. If no
registered ImageReader claims to be able to read the resulting stream,
null is returned (...)
Thanks in advance for your help and advice.
One sleepless night later, I got something working.
Eureka: I stream 1000 frames with the v4l2 library into a Linux pipe and can read all 1000 frames. Including saving each frame to a directory, it takes about 103 seconds, i.e. roughly 10 fps. Not a single frame was skipped.
Here is how:
private void ReadImages(File path)
{
    BufferedInputStream bis = null;
    int index = 0;
    ImageReader reader = null;
    try {
        bis = new BufferedInputStream(new FileInputStream(path));
        ImageInputStream stream = ImageIO.createImageInputStream(bis);
        while(bis.available() > 0)
        {
            if(gotReader(stream))
            {
                reader = ImageIO.getImageReaders(stream).next();
                reader.setInput(stream);
                BufferedImage read = reader.read(index);
                System.out.println("Image height"+read.getHeight()+" image width:"+read.getWidth());
                stream.flush();
                index = 0;
            }
        }
    } catch (IOException e) {
        System.err.println(e.getMessage());
        //e.printStackTrace();
    }
}
Tip: flush the stream frequently and reset the index. Without flushing, the growing memory use degrades performance dramatically.
Tip: standard ImageIO does not read BGR3, RGB3, YU12, YUYV, YV12 or YVYU, but it does handle H264 and MJPEG.
Tip: the presence of a reader is tested with
if(ImageIO.getImageReaders(stream).hasNext())
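The gotReader helper called in the loop above isn't shown in the snippet; judging from the tip it is just a thin wrapper around that check. A minimal sketch (the method name comes from the loop, the body is my assumption):

// Assumed helper: true if any registered ImageReader claims the stream's current content.
private boolean gotReader(ImageInputStream stream)
{
    return stream != null && ImageIO.getImageReaders(stream).hasNext();
}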
Related
My application is responsible for splitting a single TIFF file into multiple smaller files using a particular algorithm. Everything works fine, but what concerns me is that the files produced by the application exceed the original files in size.
The total size of the original files processed by the application is about 26 MB, while the total size of the produced files is 387 MB! Below is a code snippet of the process. I'm an amateur when it comes to image compression and the ImageIO library and haven't been able to find anything helpful on the web, so I'd like to ask if there's something I could change to bring those sizes closer together. Ideally I'd like to use the same compression as the original.
final ImageWriter writer = ImageIO.getImageWritersByFormatName(resultsExtension).next();
final ImageWriteParam writeParams = writer.getDefaultWriteParam();
writeParams.setCompressionMode(ImageWriteParam.MODE_COPY_FROM_METADATA);
BufferedImage page = ImageUtils.getSinglePageFromTiffFile(documentToSplit, currentPageIndex);
while (currentPageIndex < pagesQty) {
    OutputStream outStream = null;
    ImageOutputStream imgOutStream = null;
    try {
        outStream = new FileOutputStream(newDocFile);
        imgOutStream = ImageIO.createImageOutputStream(outStream);
        writer.setOutput(imgOutStream);
        writer.prepareWriteSequence(null);
        writer.writeToSequence(new IIOImage(page, null, null), writeParams);
        currentPageIndex++;
        if (CONDITION) {
            writer.endWriteSequence();
            break;
        }
        writer.writeToSequence(new IIOImage(page, null, null), writeParams);
        currentPageIndex++;
    } finally {
        if (imgOutStream != null) {
            imgOutStream.close();
        }
        if (outStream != null) {
            outStream.close();
        }
    }
}
getSinglePageFromTiffFile method:
public static BufferedImage getSinglePageFromTiffFile(File file, int pageIndex)
        throws IOException {
    ImageInputStream is = ImageIO.createImageInputStream(file);
    ImageReader reader;
    try {
        reader = ImageIO.getImageReaders(is).next();
        reader.setInput(is);
        return reader.read(pageIndex);
    } finally {
        if (is != null) {
            is.close();
        }
    }
}
Reading your code, I interpret the following:
It seems to me that you are reading the source image into an uncompressed data structure (the BufferedImage). This data structure does not carry any information about data compression.
In your writing logic you have set "copy compression from input image" (ImageWriteParam.MODE_COPY_FROM_METADATA). Since the read image carries no compression information of its own, the image data is presumably written out uncompressed.
With a compressed input image and uncompressed output images, it is no wonder that the individual files are larger than the input file. While it is possible that the input image also has the redundancy between pages compressed away (I don't know TIFF well enough to say for sure), I'd say it is more likely that you are simply writing out uncompressed image data.
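One thing you could try (a sketch, not a drop-in fix): request an explicit compression type on the write parameters instead of copying it from metadata. The available type names depend on which TIFF writer plugin is registered (e.g. JAI ImageIO or the Java 9+ built-in one), so check getCompressionTypes() first; "LZW" here is an assumption.

final ImageWriter writer = ImageIO.getImageWritersByFormatName("tiff").next();
final ImageWriteParam writeParams = writer.getDefaultWriteParam();
writeParams.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
// See which compression names this particular TIFF plugin offers...
System.out.println(java.util.Arrays.toString(writeParams.getCompressionTypes()));
// ...and pick one the plugin supports, e.g. LZW (assumption).
writeParams.setCompressionType("LZW");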
I have some Word documents and Excel sheets which have some images along with the text content. I want to create a copy of such a file and store it at a specific location. I tried the following method, which creates a file at the specified location, but the file is corrupted and cannot be read.
InputStream document = Thread.currentThread().getContextClassLoader().getResourceAsStream("upgradeworkbench/Resources/Upgrade_TD_Template.docx");
try {
    OutputStream outStream = null;
    Stage stage = new Stage();
    stage.setTitle("Save");
    byte[] buffer = new byte[document.available()];
    document.read(buffer);
    FileChooser fileChooser = new FileChooser();
    fileChooser.setInitialFileName(initialFileName);
    if (flag) {
        fileChooser.getExtensionFilters().addAll(new FileChooser.ExtensionFilter("Microsoft Excel Worksheet", "*.xls"));
    } else {
        fileChooser.getExtensionFilters().addAll(new FileChooser.ExtensionFilter("Microsoft Word Document", "*.docx"));
    }
    fileChooser.setTitle("Save File");
    File file = fileChooser.showSaveDialog(stage);
    if (file != null) {
        outStream = new FileOutputStream(file);
        outStream.write(buffer);
        // IOUtils.copy(document, outStream);
    }
} catch (IOException ex) {
    System.out.println(ex.getMessage());
}
Can anyone suggest a different way to get a proper copy of the file?
PS: I am reading the file using an InputStream because it is inside the project jar.
PPS: I also tried Files.copy() but it didn't work.
I suggest you never rely on InputStream.available to know the real size of the input, because it just returns the number of bytes that are ready to be read immediately from the buffer. It might return a small number, which doesn't mean the file is small, only that the buffer is temporarily half-full.
The right algorithm to read an InputStream fully and write it over an OutputStream is this:
int n;
byte[] buffer = new byte[4096];
do
{
    n = input.read(buffer);
    if (n > 0)
    {
        output.write(buffer, 0, n);
    }
}
while (n >= 0);
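Applied to your case, that loop could look roughly like this. This is only a sketch: the resource path is copied from your snippet, and 'file' is assumed to be the result of fileChooser.showSaveDialog(stage).

// Sketch: copy a classpath resource to the user-chosen file without relying on available().
try (InputStream in = Thread.currentThread().getContextClassLoader()
        .getResourceAsStream("upgradeworkbench/Resources/Upgrade_TD_Template.docx");
     OutputStream out = new FileOutputStream(file)) {
    byte[] buffer = new byte[4096];
    int n;
    while ((n = in.read(buffer)) != -1) {
        out.write(buffer, 0, n);
    }
}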
You can use the Files.copy() methods.
Copies all bytes from an input stream to a file. On return, the input stream will be at end of stream.
Use:
Files.copy(document, file.toPath(), StandardCopyOption.REPLACE_EXISTING);
As the method signature shows, the second argument is a Path, not a File.
Generally, since this is 2015, use Path and drop File; if an API still uses File, convert at the last possible moment and use Path all the way through.
I have an uncompressed binary file in res/raw that I was reading this way:
public byte[] file2Bytes(int rid) {
    byte[] buffer = null;
    try {
        AssetFileDescriptor afd = res.openRawResourceFd(rid);
        FileInputStream in = new FileInputStream(afd.getFileDescriptor());
        int len = (int) afd.getLength();
        buffer = new byte[len];
        in.read(buffer, 0, len);
        in.close();
    } catch (Exception ex) {
        Log.w(ACTNAME, "file2Bytes() fail\n" + ex.toString());
        return null;
    }
    return buffer;
}
However, buffer did not contain what it was supposed to. The source file is 1024 essentially random bytes (a binary key). But buffer, when written out and examined, was not the same. Amongst unprintable bytes at the beginning appeared "res/layout/main.xml" (the literal path), and further down, part of the text content of another file from res/raw. O_O?
Exasperated after a while, I tried:
AssetFileDescriptor afd = res.openRawResourceFd(rid);
//FileInputStream in = new FileInputStream(afd.getFileDescriptor());
FileInputStream in = afd.createInputStream();
Presto, I got the content correctly -- this is easily reproducible.
So the relevant API docs read:
public FileDescriptor getFileDescriptor ()
Returns the FileDescriptor that can be used to read the data in the
file.
public FileInputStream createInputStream ()
Create and return a new auto-close input stream for this asset. This
will either return a full asset
AssetFileDescriptor.AutoCloseInputStream, or an underlying
ParcelFileDescriptor.AutoCloseInputStream depending on whether the
object represents a complete file or sub-section of a file. You should
only call this once for a particular asset.
Why would a FileInputStream constructed from getFileDescriptor() end up with garbage, whereas createInputStream() gives proper access?
As per pskink's comment, the FileDescriptor returned by AssetFileDescriptor.getFileDescriptor() is apparently not an fd that refers just to the file; it refers to whatever bundle/parcel/conglomeration aapt has made of the resources.
AssetFileDescriptor afd = res.openRawResourceFd(rid);
FileInputStream in = new FileInputStream(afd.getFileDescriptor());
in.skip(afd.getStartOffset());
Turns out to be the equivalent of the FileInputStream in = afd.createInputStream() version.
I suppose there is a hint in the difference between "create" (something new) and "get" (something existing). :/
AssetFileDescriptor can be thought of as the entry point to the entire package's assets data.
I have run into the same issue and finally solved it.
If you want to manually create a stream from an AssetFileDescriptor, you have to skip n bytes to reach the requested resource. It is as if you are paging through all the available files in one big file.
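A sketch of that manual approach (assuming an uncompressed raw resource, so openRawResourceFd() succeeds): skip to the start offset, then read at most getLength() bytes so you don't run on into the next packed resource.

// Sketch: manually read one raw resource through the shared asset file descriptor.
AssetFileDescriptor afd = res.openRawResourceFd(rid);
FileInputStream in = new FileInputStream(afd.getFileDescriptor());
try {
    in.skip(afd.getStartOffset());               // jump to where this resource starts
    byte[] buffer = new byte[(int) afd.getLength()];
    int off = 0;
    while (off < buffer.length) {                // read() may return fewer bytes than requested
        int n = in.read(buffer, off, buffer.length - off);
        if (n < 0) break;
        off += n;
    }
} finally {
    in.close();
    afd.close();
}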
Thanks to pskink! I had a look at the hex content of the jpg image I want to acquire; it starts with -1. The thing is, there are two jpg images. I did not know that, so I arbitrarily skipped 76 bytes and got the first image!
I want to create a zip file of files that are present at one FTP location and copy this zip file to another FTP location without saving it locally.
I am able to handle this for small files; it works well for files of about 1 MB.
But if the file size is big, like 100 MB, 200 MB or 300 MB, it gives an error:
java.io.FileNotFoundException: STOR myfile.zip : 550 The process cannot access the
file because it is being used by another process.
at sun.net.ftp.FtpClient.readReply(FtpClient.java:251)
at sun.net.ftp.FtpClient.issueCommand(FtpClient.java:208)
at sun.net.ftp.FtpClient.openDataConnection(FtpClient.java:398)
at sun.net.ftp.FtpClient.put(FtpClient.java:609)
My code is:
URLConnection urlConnection = null;
ZipOutputStream zipOutputStream = null;
InputStream inputStream = null;
byte[] buf;
int ByteRead, ByteWritten = 0;

// Destination where the zipped file will be stored
URL url = new URL("ftp://" + ftpuser + ":" + ftppass + "@" + ftppass + "/" +
        fileNameToStore + ";type=i");
urlConnection = url.openConnection();
OutputStream outputStream = urlConnection.getOutputStream();
zipOutputStream = new ZipOutputStream(outputStream);
buf = new byte[size];
for (int i = 0; i < li.size(); i++)
{
    try
    {
        // Source from where the file will be read
        URL u = new URL((String) li.get(i)); // this li has values like http://xyz.com/folder/myPDF.pdf
        URLConnection uCon = u.openConnection();
        inputStream = uCon.getInputStream();
        zipOutputStream.putNextEntry(new ZipEntry((String) li.get(i).substring(li.get(i).lastIndexOf("/") + 1).trim()));
        while ((ByteRead = inputStream.read(buf)) != -1)
        {
            zipOutputStream.write(buf, 0, ByteRead);
            ByteWritten += ByteRead;
        }
        zipOutputStream.closeEntry();
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
if (inputStream != null) {
    try {
        inputStream.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
if (zipOutputStream != null) {
    try {
        zipOutputStream.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Can anybody let me know how I can avoid this error and handle large files?
This is unrelated to file sizes; as the error says, you can't replace the file because some other process is currently locking it.
The reason you see it more often with large files is that they take longer to transfer, so the chance of concurrent access is higher.
So the only solution is to make sure that no one uses the file when you try to transfer it. Good luck with that.
Possible other solutions:
Don't use Windows on the server.
Transfer the file under a temporary name and rename it when it's complete (see the sketch after this list). That way, other processes won't see incomplete files. Always a good thing.
Use rsync instead of reinventing the wheel.
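For the temporary-name approach, a library such as Apache Commons Net makes the rename step easy. The following is only a sketch: commons-net is assumed to be on the classpath, ftphost is a hypothetical variable for the destination server, and the zip entries are written exactly as in your existing loop.

// Sketch: upload under a temporary name, then rename so readers only ever see a complete file.
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

FTPClient ftp = new FTPClient();
ftp.connect(ftphost);                                   // ftphost: destination server (assumption)
ftp.login(ftpuser, ftppass);
ftp.setFileType(FTP.BINARY_FILE_TYPE);
OutputStream remoteOut = ftp.storeFileStream("myfile.zip.part");
ZipOutputStream zipOutputStream = new ZipOutputStream(remoteOut);
// ... write the zip entries exactly as in the loop above ...
zipOutputStream.close();                                // also closes the remote stream
if (ftp.completePendingCommand()) {                     // wait for the STOR to finish
    ftp.rename("myfile.zip.part", "myfile.zip");
}
ftp.logout();
ftp.disconnect();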
Back in the day, before we had network security, there were FTP servers that allowed 3rd party transfers. You could use site specific commands and send a file to another FTP server directly. Those days are long gone. Sigh.
Ok, maybe not long gone. Some FTP servers support the proxy command. There is a discussion here: http://www.math.iitb.ac.in/resources/manuals/Unix_Unleashed/Vol_1/ch27.htm
I am trying to copy a file of about 80 megabytes from the assets folder of an Android application to the SD card.
The file is another apk. For various reasons I have to do it this way and can't simply link to an online apk or put it on the Android market.
The application works fine with smaller apks, but for this large one I am getting an out-of-memory error.
I'm not sure exactly how this works, but I assume that here I am trying to read the full 80 megabytes into memory.
try {
    int length = 0;
    newFile.createNewFile();
    InputStream inputStream = ctx.getAssets().open("myBigFile.apk");
    FileOutputStream fOutputStream = new FileOutputStream(newFile);
    byte[] buffer = new byte[inputStream.available()];
    while ((length = inputStream.read(buffer)) > 0) {
        fOutputStream.write(buffer, 0, length);
    }
    fOutputStream.flush();
    fOutputStream.close();
    inputStream.close();
} catch (Exception ex) {
    if (ODP_App.getInstance().isInDebugMode())
        Log.e(TAG, ex.toString());
}
I found this interesting:
A question about an out of memory issue with Bitmaps
Unless I've misunderstood, in the case of Bitmaps there appears to be some way to split the stream to reduce memory usage, using BitmapFactory.Options.
Is this doable in my scenario, or is there any other possible solution?
The trick is not to try to read the whole file in one go, but rather to read it in small chunks and write each chunk out before reading the next one into the same memory buffer. The following version reads it in 1 KB chunks. It's an example only; you need to determine the right chunk size.
try {
    int length = 0;
    newFile.createNewFile();
    InputStream inputStream = ctx.getAssets().open("myBigFile.apk");
    FileOutputStream fOutputStream = new FileOutputStream(newFile);
    // note the following line
    byte[] buffer = new byte[1024];
    while ((length = inputStream.read(buffer)) > 0) {
        fOutputStream.write(buffer, 0, length);
    }
    fOutputStream.flush();
    fOutputStream.close();
    inputStream.close();
} catch (Exception ex) {
    if (ODP_App.getInstance().isInDebugMode())
        Log.e(TAG, ex.toString());
}
Do not read the whole file into memory; read 64 KB at a time, write it out, and repeat until you reach the end of the file. Or use IOUtils from Apache Commons IO.
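With Apache Commons IO on the classpath, the whole copy collapses to one call. A sketch, reusing the variable names from the question:

// Sketch: stream the asset to the SD card without buffering it all in memory.
import org.apache.commons.io.IOUtils;

InputStream inputStream = ctx.getAssets().open("myBigFile.apk");
FileOutputStream fOutputStream = new FileOutputStream(newFile);
try {
    IOUtils.copy(inputStream, fOutputStream);   // copies in small chunks internally
} finally {
    IOUtils.closeQuietly(inputStream);
    IOUtils.closeQuietly(fOutputStream);
}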