java.lang.OutOfMemoryError when doing javax.imageio.ImageIO.read("filename") - java

I want to compress JPG images, which are about 4 MB or more. Here is my code:
public static void Compress(String sourceFolder, String destFolder, double proportion) throws IOException {
    File source = new File(sourceFolder);
    File[] sourceFiles = null;
    if (source.isDirectory()) {
        sourceFiles = source.listFiles();
        for (int i = 0; i < sourceFiles.length; i++) {
            javax.imageio.ImageIO.setUseCache(false);
            Image src = javax.imageio.ImageIO.read(sourceFiles[i]);
            String name = sourceFiles[i].getName();
            int width = src.getWidth(null);
            int height = src.getHeight(null);
            int destWidth = (int) (width * proportion);
            int destHeight = (int) (height * proportion);
            BufferedImage tag = new BufferedImage(destWidth, destHeight, BufferedImage.TYPE_INT_RGB);
            Graphics g = tag.getGraphics();
            g.drawImage(src, 0, 0, destWidth, destHeight, null);
            src.flush();
            src = null;
            FileOutputStream out = new FileOutputStream(destFolder + "/" + name);
            JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(out);
            encoder.encode(tag);
            out.close();
        }
    } else {
        System.exit(0);
    }
}
When it executes
Image src = javax.imageio.ImageIO.read("filename");
the following exception occurs:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.awt.image.DataBufferByte.<init>(DataBufferByte.java:58)
at java.awt.image.ComponentSampleModel.createDataBuffer(ComponentSampleModel.java:397)
at java.awt.image.Raster.createWritableRaster(Raster.java:938)
at javax.imageio.ImageTypeSpecifier.createBufferedImage(ImageTypeSpecifier.java:1056)
at javax.imageio.ImageReader.getDestination(ImageReader.java:2879)
at com.sun.imageio.plugins.jpeg.JPEGImageReader.readInternal(JPEGImageReader.java:943)
at com.sun.imageio.plugins.jpeg.JPEGImageReader.read(JPEGImageReader.java:915)
at javax.imageio.ImageIO.read(ImageIO.java:1422)
at javax.imageio.ImageIO.read(ImageIO.java:1282)
at functions.CompressImage.Compress(CompressImage.java:50)
at functions.CompressImage.main(CompressImage.java:24)
I tried the run argument (-Xms=1g), but it still doesn't work. Does anyone know a solution? Please help me, thank you!

You need to get a heap dump and analyze it. The simplest way is to add a JVM parameter such as
-XX:+HeapDumpOnOutOfMemoryError
This will automatically create a heap dump when the error occurs. You can then analyze what went wrong with a Java profiler (YourKit, JProfiler, etc.).
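For example, a full launch command might look like this (the dump path is a placeholder, and the main class is taken from the stack trace in the question; adjust both to your setup):
java -Xmx1g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps functions.CompressImage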

A 4 MB JPG will decompress into a huge bitmap; I think it simply needs a lot of memory. I have often read about large memory consumption in javax.imageio.
To estimate the decoded bitmap size, calculate image_width * image_height * (8 to 10 bits * 3 colors).
Update
Some math, assuming 8 bits per color channel:
7000 * 4900 * 8 * 3 = 823,200,000 bits
≈ 100 MB
I believe a byte[] of roughly that size has to exist in memory. If the JVM heap cannot provide a block that large, you get this exception.
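If the full-resolution BufferedImage is not actually needed (as in the resize use case above), one way to avoid allocating it at all is to let the JPEG reader subsample while decoding. A minimal sketch using plain ImageIO; readSubsampled and factor are illustrative names, not part of the original code:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class SubsampledRead {
    // Decodes only every "factor"-th pixel in x and y, so the resulting raster is
    // roughly 1/(factor*factor) of the full-size image and the full ~100 MB is never allocated.
    public static BufferedImage readSubsampled(File file, int factor) throws IOException {
        try (ImageInputStream in = ImageIO.createImageInputStream(file)) {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
            if (!readers.hasNext()) {
                throw new IOException("No ImageReader found for " + file);
            }
            ImageReader reader = readers.next();
            try {
                reader.setInput(in);
                ImageReadParam param = reader.getDefaultReadParam();
                param.setSourceSubsampling(factor, factor, 0, 0);
                return reader.read(0, param);
            } finally {
                reader.dispose();
            }
        }
    }
}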

Related

Java BufferedImage to byte array conversion is too slow compared to other languages

I am trying to convert an image to a byte array so that I can transfer it over the network for further processing.
In C#, the following code does the job in about 2 to 3 milliseconds.
Image image = Image.FromFile("D:/tst.jpg");
DateTime pre = DateTime.Now;
int sz;
using (MemoryStream sourceImageStream = new MemoryStream())
{
image.Save(sourceImageStream, System.Drawing.Imaging.ImageFormat.Jpeg);
byte[] sourceImageData = sourceImageStream.ToArray();
sz = sourceImageData.Count();
}
MessageBox.Show("Size " + sz + " time : " + (DateTime.Now - pre).TotalMilliseconds);
Output:
Size 268152 time : 3.0118
But in Java doing the same as below takes way too much time.
BufferedImage image = ImageIO.read(new File("D:/tst.jpg"));
ByteArrayOutputStream baos = new ByteArrayOutputStream();
Instant pre = Instant.now();
ImageIO.write( image, "jpeg", baos );
baos.flush();
Instant now = Instant.now();
System.out.println("Size " + baos.size() + " time : " + ChronoUnit.MILLIS.between(pre, now));
Output:
Size 268167 time : 91.0
The source image is a JPG. In C#, when using PNG compression, the time was around 90 ms, so my guess is that Java is somehow still re-compressing the same JPG image. The image dimensions are 2048 * 1536.
Java is frustratingly slow here. How can I get rid of this problem in Java?
Take this image into consideration.
C#:
Size 1987059 time : 11.0129
Java:
Size 845093 time : 155.0
The source image is 1987059 bytes (which is the same as the C# encoded byte array). But in Java it is compressed to 845093 bytes. I have tried setting the compression quality to 1f, but it didn't help to reduce the time.
The main problem with this kind of testing is pointed out in the first comment: this is a micro-benchmark. If you run that code only once in Java, you will mostly measure the time taken to initialize the runtime, load classes, and perform other one-time initialization.
Here's a slightly modified version of your code (I originally wrote this as an answer to your follow-up question that is now closed as a duplicate, but the same concept applies) that at least includes a warm-up phase. You'll see that there is quite a difference in the measurements. On my 2014 MacBook Pro, the output is:
Initial load time 415 ms (5)
Average warm up load time 73 ms (5)
Normal load time 65 ms (5)
As you see, the "normal" time to load an image is a lot less than the initial time, which includes a lot of one-time overhead.
Code:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class TestJPEGSpeed {
    public static void main(String[] args) throws IOException {
        File input = new File(args[0]);
        test(input, 1, "Initial");
        test(input, 100, "Average warm up");
        test(input, 1, "Normal");
    }

    private static void test(File input, int runs, final String type) throws IOException {
        BufferedImage image = null;
        long start = System.currentTimeMillis();
        for (int i = 0; i < runs; i++) {
            image = ImageIO.read(input);
        }
        long stop = System.currentTimeMillis();
        System.out.println(type + " load time " + ((stop - start) / runs) + " ms (" + image.getType() + ")");
    }
}
(I also wrote a different version that took a second parameter and loaded a different file in the "normal" case, but the measurements were similar, so I left it out.)
Most likely there are still issues with this benchmark, like measuring I/O time rather than decoding time, but at least it's a little more fair.
PS: Some bonus background information. If you use an Oracle JRE at least, the bundled JPEG plugin for ImageIO uses JNI and a natively compiled version of IJG's libjpeg (written in C). This is used for both reading and writing JPEG. You could probably see better performance if you used native bindings for libjpeg-turbo. But as this is all native code, it's unlikely that the performance will vary drastically from platform to platform.
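As a side note on the quality setting mentioned in the question: with plain ImageIO, controlling the JPEG quality explicitly generally looks roughly like the sketch below (a generic example, not the asker's original snippet; the class and method names are illustrative):
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class JpegQualityExample {
    // Encodes the image as JPEG with an explicit quality factor (0.0f to 1.0f).
    public static byte[] encodeJpeg(BufferedImage image, float quality) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ImageOutputStream out = ImageIO.createImageOutputStream(baos)) {
            writer.setOutput(out);
            writer.write(null, new IIOImage(image, null, null), param);
        } finally {
            writer.dispose();
        }
        return baos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        BufferedImage image = ImageIO.read(new File(args[0]));
        System.out.println("Size " + encodeJpeg(image, 1f).length);
    }
}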

Is there any way to make image compression and saving faster on Android?

The situation
I need to show a 200-350 frame animation in my application. The images are around 500x300 in resolution. If the user wants to share the animation, I have to convert it to a video. For the conversion I am using an ffmpeg command:
ffmpeg -y -r 1 -i /sdcard/videokit/pic00%d.jpg -i /sdcard/videokit/in.mp3 -strict experimental -ar 44100 -ac 2 -ab 256k -b 2097152 -ar 22050 -vcodec mpeg4 -b 2097152 -s 320x240 /sdcard/videokit/out.mp4
To convert images to a video, ffmpeg wants actual files, not a Bitmap or byte[].
Problem
Compressing the bitmaps to image files takes too much time: converting 210 images takes about 1 minute on an average device (HTC One M7). Converting the image files to mp4 takes about 15 seconds on the same device. Altogether the user has to wait about 1.5 minutes.
What I have tried
I changed the compression format from PNG to JPEG (the 1.5-minute result is achieved with JPEG compression at quality=80; with PNG it takes about 2-2.5 minutes). Success.
I tried to find out how to pass a byte[] or a Bitmap to ffmpeg. No success.
QUESTION
Is there any way (a library, even a native one) to make the saving process faster?
Is there any way to pass byte[] or Bitmap objects (that is, a PNG file decompressed to an Android Bitmap object) to an ffmpeg video-creation method?
Is there any other working library that can create an mp4 (or any format supported by the main social networks) from byte[] or Bitmap objects in about 30 seconds (for 200 frames)?
You can convert a Bitmap (or byte[]) to YUV format quickly using RenderScript (see https://stackoverflow.com/a/39877029/192373). You can pass these YUV frames to the ffmpeg library (as halfelf suggests), or use the built-in native MediaCodec, which uses dedicated hardware on most devices (though its compression options are less flexible than all-software ffmpeg).
There are two steps that slow us down: compressing the images to PNG/JPG and writing them to disk. Both can be skipped if we code directly against the ffmpeg libraries instead of calling the ffmpeg command. (There are other improvements too, such as GPU encoding and multithreading, but they are much more complicated.)
Some approaches to the code:
Use only the C/C++ NDK for the Android programming. FFmpeg will happily work there, but I guess it's not an option here.
Build it from scratch via Java JNI. I don't have much experience with this; I only know it can link Java to C/C++ libraries.
Use a Java wrapper. Luckily, I found javacpp-presets. (There are others too, but this one is good enough and up to date.)
This library includes a good example ported from dranger's famous ffmpeg tutorial, though it is a demuxing example.
We can try to write a muxing one instead, following ffmpeg's muxing.c example.
import java.io.*;
import org.bytedeco.javacpp.*;
import static org.bytedeco.javacpp.avcodec.*;
import static org.bytedeco.javacpp.avformat.*;
import static org.bytedeco.javacpp.avutil.*;
import static org.bytedeco.javacpp.swscale.*;
public class Muxer {
public class OutputStream {
public AVStream Stream;
public AVCodecContext Ctx;
public AVFrame Frame;
public SwsContext SwsCtx;
public void setStream(AVStream s) {
this.Stream = s;
}
public AVStream getStream() {
return this.Stream;
}
public void setCodecCtx(AVCodecContext c) {
this.Ctx = c;
}
public AVCodecContext getCodecCtx() {
return this.Ctx;
}
public void setFrame(AVFrame f) {
this.Frame = f;
}
public AVFrame getFrame() {
return this.Frame;
}
public OutputStream() {
Stream = null;
Ctx = null;
Frame = null;
SwsCtx = null;
}
}
public static void main(String[] args) throws IOException {
Muxer t = new Muxer();
OutputStream VideoSt = t.new OutputStream();
AVOutputFormat Fmt = null;
AVFormatContext FmtCtx = new AVFormatContext(null);
AVCodec VideoCodec = null;
AVDictionary Opt = null;
SwsContext SwsCtx = null;
AVPacket Pkt = new AVPacket();
int GotOutput;
int InLineSize[] = new int[1];
String FilePath = "/path/xxx.mp4";
avformat_alloc_output_context2(FmtCtx, null, null, FilePath);
Fmt = FmtCtx.oformat();
AVCodec codec = avcodec_find_encoder_by_name("libx264");
av_format_set_video_codec(FmtCtx, codec);
VideoCodec = avcodec_find_encoder(Fmt.video_codec());
VideoSt.setStream(avformat_new_stream(FmtCtx, null));
AVStream stream = VideoSt.getStream();
VideoSt.getStream().id(FmtCtx.nb_streams() - 1);
VideoSt.setCodecCtx(avcodec_alloc_context3(VideoCodec));
VideoSt.getCodecCtx().codec_id(Fmt.video_codec());
VideoSt.getCodecCtx().bit_rate(5120000);
VideoSt.getCodecCtx().width(1920);
VideoSt.getCodecCtx().height(1080);
AVRational fps = new AVRational();
fps.den(25); fps.num(1);
VideoSt.getStream().time_base(fps);
VideoSt.getCodecCtx().time_base(fps);
VideoSt.getCodecCtx().gop_size(10);
VideoSt.getCodecCtx().max_b_frames(1); // the setter needs a value; 1 B-frame as an example
VideoSt.getCodecCtx().pix_fmt(AV_PIX_FMT_YUV420P);
if ((FmtCtx.oformat().flags() & AVFMT_GLOBALHEADER) != 0)
VideoSt.getCodecCtx().flags(VideoSt.getCodecCtx().flags() | AV_CODEC_FLAG_GLOBAL_HEADER);
avcodec_open2(VideoSt.getCodecCtx(), VideoCodec, Opt);
VideoSt.setFrame(av_frame_alloc());
VideoSt.getFrame().format(VideoSt.getCodecCtx().pix_fmt());
VideoSt.getFrame().width(1920);
VideoSt.getFrame().height(1080);
av_frame_get_buffer(VideoSt.getFrame(), 32);
// pts counter; this is an unsigned long in C, so use at least long (or even BigInteger) in Java
long nextpts = 0;
av_dump_format(FmtCtx, 0, FilePath, 1);
avio_open(FmtCtx.pb(), FilePath, AVIO_FLAG_WRITE);
avformat_write_header(FmtCtx, Opt);
int[] got_output = { 0 };
while (still_has_input) { // pseudo-condition: loop while you still have input frames to encode
// convert or directly copy your Bytes[] into VideoSt.Frame here
// AVFrame structure has two important data fields:
// AVFrame.data (uint8_t*[]) and AVFrame.linesize (int[])
// data holds the pixel values (layout depends on the format) and linesize is the byte size of each picture line.
// For example, for packed RGB there is one plane: linesize[0] equals image_width * 3 and data[0] points to the interleaved RGB bytes.
// But we will likely need sws_scale() here to convert the pixel format from RGB to yuv420p (or another YUV format).
Pkt = new AVPacket();
av_init_packet(Pkt);
VideoSt.getFrame().pts(nextpts++);
avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, VideoSt.getFrame(), got_output);
av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
Pkt.stream_index(VideoSt.getStream().index());
av_interleaved_write_frame(FmtCtx, Pkt);
av_packet_unref(Pkt);
}
// get delayed frames
for (got_output[0] = 1; got_output[0] != 0;) {
Pkt = new AVPacket();
av_init_packet(Pkt);
avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, null, got_output);
if (got_output[0] > 0) {
av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
Pkt.stream_index(VideoSt.getStream().index());
av_interleaved_write_frame(FmtCtx, Pkt);
}
av_packet_unref(Pkt);
}
// free c structs
avcodec_free_context(VideoSt.getCodecCtx());
av_frame_free(VideoSt.getFrame());
avio_closep(FmtCtx.pb());
avformat_free_context(FmtCtx);
}
}
For porting C code, a few kinds of changes normally have to be made:
Most of the work is replacing every C struct member access (. and ->) with the Java getters/setters (see the small example after these notes).
There are also many C address-of operators (&); just delete them.
Change the C NULL macro and the C++ nullptr pointer to the Java null object.
C code often checks the boolean result of an int directly in if, for and while; in Java you have to compare it with 0.
There may be other API changes as well; as long as you refer to the javacpp-presets docs, it will be fine.
Note that I omitted all error handling code here. It will be needed in real development/production.
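For instance, the struct-member rule maps a field assignment from muxing.c to a setter call on the javacpp binding (an illustrative pairing; the Java calls mirror the ones already used in the code above):
// C (muxing.c style):
//     c->width = 1920;
//     c->pix_fmt = AV_PIX_FMT_YUV420P;
// Java with javacpp-presets:
VideoSt.getCodecCtx().width(1920);
VideoSt.getCodecCtx().pix_fmt(AV_PIX_FMT_YUV420P);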
I really don't want to advertise, but using PKZIP and its SDK may be a good solution. PKZIP compresses files by up to 95%, as they say.
The Smartcrypt SDK is available in all major programming languages, including C++, Java, and C#, and can be used to encrypt both structured and unstructured data. Changes to existing applications typically consist of two or three lines of code.

OutOfMemoryError: Failed to allocate a 82956 byte allocation with 30836 free bytes and 30KB until OOM?

In my application I have a database where I store images, but when I try to load an image into a Bitmap, Logcat shows an OutOfMemoryError. The error doesn't appear when I retrieve only one image from the database (when I don't use the do-while loop my code runs fine). Please help, I'm new to Android.
This class tries to retrieve the images from the DB:
class Abc {
    ArrayList<ForBitMap> pic;
    Cursor mCursor;
    Context context;

    Abc() {
        pic = new ArrayList<>();
        DataBaseClass objOfDataBaseClass = new DataBaseClass(context);
        mCursor = objOfDataBaseClass.showData();
        if (mCursor.moveToNext()) {
            do {
                byte[] mg = mCursor.getBlob(mCursor.getColumnIndex("image"));
                Bitmap bitmap = BitmapFactory.decodeByteArray(mg, 0, mg.length);
                pic.add(new ForBitMap(bitmap));
            } while (mCursor.moveToFirst());
        }
    }
}
This class is used to store the values in the ArrayList (ArrayList<ForBitMap> pic):
class ForBitMap {
    Bitmap btmp;

    ForBitMap(Bitmap btmp) {
        this.btmp = btmp;
    }
}
The Logcat output is:
java.lang.OutOfMemoryError: Failed to allocate a 82956 byte allocation with 30836 free bytes and 30KB until OOM
You probably shouldn't store bitmaps in the database. Just store them in file storage and keep the file paths in the database.
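A minimal sketch of that approach (saveToFile is an illustrative helper, not code from the question; the database column would then hold the returned path instead of an image blob):
import android.content.Context;
import android.graphics.Bitmap;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class BitmapStorage {
    // Writes the bitmap as a JPEG into the app's private files directory and returns its path.
    public static String saveToFile(Context context, Bitmap bitmap, String name) throws IOException {
        File file = new File(context.getFilesDir(), name + ".jpg");
        try (FileOutputStream out = new FileOutputStream(file)) {
            bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        }
        return file.getAbsolutePath();
    }
}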
Try using an LruCache for the images. A Bitmap allocates memory every time it is decoded, even if the same image was loaded previously; with an LruCache the decoded bitmap is kept in cache memory under a key and can be reused.
Refer to this link:
https://developer.android.com/reference/android/util/LruCache.html
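A small sketch of such a cache (sizing it to 1/8 of the app's heap is just a common rule of thumb, not something specified in the question):
import android.graphics.Bitmap;
import android.util.LruCache;

public class BitmapCache {
    private final LruCache<String, Bitmap> cache;

    public BitmapCache() {
        // Use up to 1/8 of the maximum heap (measured in kilobytes) for the cache.
        int maxKb = (int) (Runtime.getRuntime().maxMemory() / 1024 / 8);
        cache = new LruCache<String, Bitmap>(maxKb) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                return value.getByteCount() / 1024; // entry size in kilobytes
            }
        };
    }

    public Bitmap get(String key) {
        return cache.get(key);
    }

    public void put(String key, Bitmap bitmap) {
        if (cache.get(key) == null) {
            cache.put(key, bitmap);
        }
    }
}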

Memory issue when storing images in byteArray

I have an app that needs to access a large number of images very quickly, so I need to load those images into memory in some way. Doing so as bitmaps used over 100MB of RAM, which was completely out of the question, so I opted to read jpg files into memory, storing them inside a byteArray. Then I decode them and write them to the canvas as each is needed. This works pretty well, cutting out the slow disk access, while also respecting memory limits.
However, memory usage seems 'off' to me. I'm storing 450 jpgs with a file size of approximately 33kb each. This totals around 15MB of data. However, the app continually runs at between 35MB and 40MB of RAM as reported by both Eclipse DDMS and Android (on a physical device). I've tried modifying how many jpgs are loaded and the RAM used by the app tends to decrease by around 60-70kb per jpg, indicating that each image is stored twice in RAM. Memory usage does not fluctuate which implies that there is not an actual 'leak' involved.
Here is the relevant loading code:
private byte[][] bitmapArray = new byte[totalFrames][];
for (int x=0; x<totalFrames; x++) {
File file = null;
if (cWidth <= cHeight){
file = new File(directory + "/f"+x+".jpg");
} else {
file = new File(directory + "/f"+x+"-land.jpg");
}
bitmapArray[x] = getBytesFromFile(file);
imagesLoaded = x + 1;
}
public byte[] getBytesFromFile(File file) {
byte[] bytes = null;
try {
InputStream is = new FileInputStream(file);
long length = file.length();
bytes = new byte[(int) length];
int offset = 0;
int numRead = 0;
while (offset < bytes.length && (numRead = is.read(bytes, offset, bytes.length - offset)) >= 0) {
offset += numRead;
}
if (offset < bytes.length) {
throw new IOException("Could not completely read file " + file.getName());
}
is.close();
} catch (IOException e) {
//TODO Write your catch method here
}
return bytes;
}
Eventually, they get written to screen like so:
SurfaceHolder holder = getSurfaceHolder();
Canvas c = null;
try {
c = holder.lockCanvas();
if (c != null) {
int canvasWidth = c.getWidth();
int canvasHeight = c.getHeight();
Rect destinationRect = new Rect();
destinationRect.set(0, 0, canvasWidth, canvasHeight);
c.drawBitmap(BitmapFactory.decodeByteArray(bitmapArray[bgcycle], 0, bitmapArray[bgcycle].length), null, destinationRect, null);
}
} finally {
if (c != null)
holder.unlockCanvasAndPost(c);
}
Am I correct that there is some sort of duplication going on here? Or is there just that much overhead involved in storing jpgs in a byteArray like this?
Storing bytes in RAM is very different from storing data on a hard drive; there is a lot more overhead to it. The references to the objects as well as the byte array structures themselves take up additional memory. There isn't really a single source of all the additional memory, but just remember that loading a file into RAM normally takes up 2-3x more space (from experience; I'm afraid I can't quote any documentation here).
Consider this:
File F = //Some file here (Less than 2 GB please)
FileInputStream fIn = new FileInputStream(F);
ByteArrayOutputStream bOut = new ByteArrayOutputStream(((int)F.length()) + 1);
int r;
byte[] buf = new byte[32 * 1000];
while ((r = fIn.read(buf)) != -1) {
bOut.write(buf, 0, r);
}
//Do a memory measurement at this point. You'll see your using nearly 3x the memory in RAM compared to the file.
//If your actually gonna try this, remember to surround with try-catch and close the streams as appropriate.
Also remember that unused memory is not instantly cleared up. The method getBytesFromFile() may be returning a copy of a byte array, which causes duplication that is not immediately garbage collected. If you want to be safe, check that getBytesFromFile(file) is not leaking any references that should be cleaned up. It won't appear as a memory leak, since you only call it a finite number of times.
It might be because your byte array is two-dimensional. You only need one dimension for loading an image into a byte array, and the second dimension could potentially increase the RAM needed, since it adds structure that you don't actually use.

"mimetype: application" display on a picture instead of "mimetype: image" after a ImageIO.write

When I simply create a new image from another like this:
public static void scaleByTwoRight(String src, String dest)
throws IOException {
BufferedImage bsrc = ImageIO.read(new File(src));
int width = bsrc.getWidth()/2;
int height = bsrc.getHeight();
BufferedImage bdest = bsrc.getSubimage(width, 0, width, height);
ImageIO.write(bdest,"PNG",new File(dest));
}
Source file (src) = C:...\Manga\Shonan Junaï Gumi Tome 11\Shonan Junaï Gumi Tome 11 - 091B.png
Destination file (dest) = C:...\Manga\Shonan Junaï Gumi Tome 11 - 091B_A.png
Example of generated file: https://docs.google.com/file/d/0B1vKCZzB5hxqYzNsUWF5RHA2Wm8/edit?usp=sharing
Problem: The new image has mimetype: application instead of mimetype: image
How I arrived at this conclusion: I'm using a function to test whether the file is an image or not:
public static boolean isImage(String src)
throws IOException {
File f = new File(src);
String mimetype= new MimetypesFileTypeMap().getContentType(f);
String type = mimetype.split("/")[0];
if(type.equals("image")){
return true;
}else{
System.out.println("mimetype: "+type);
return false;
}
}
It doesn't have a huge impact if the MIME type is not correct, but I'd prefer to have this working properly.
Thanks for your help!
Note:
I'm running under Windows 7 / 32b
JVM 1.7 / Eclipse Helios
Your code is working fine on my machine.
I have Windows XP, 32-bit.
I tried it with a JPEG image and it returns the MIME type as image/jpeg.
I hope you are not trying to execute both functions simultaneously.
Also, MimetypesFileTypeMap looks at the file name rather than the content, so the destination file name should have a proper extension such as .jpeg or .png.
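If the type really needs to be determined from the file content rather than the file name, one option is to ask ImageIO whether any registered reader recognizes the bytes. A rough sketch (isImageByContent is an illustrative name, not part of the original code):
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.imageio.stream.ImageInputStream;

public class ContentTypeCheck {
    // Returns true if any registered ImageIO reader recognizes the file's content.
    public static boolean isImageByContent(String src) throws IOException {
        try (ImageInputStream in = ImageIO.createImageInputStream(new File(src))) {
            return in != null && ImageIO.getImageReaders(in).hasNext();
        }
    }
}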
