libGDX doesn't draw texture with big dimensions - java

I cannot draw a big image in libGDX on my desktop. I have an image with dimensions 9494x13082 pixels and use batch.draw(texture, 0, 0, width, height);. Instead of the texture, libGDX draws a black square. If I compress the image to 60% or more, everything works fine. I tried to use TextureRegion, but that also doesn't work.
Please tell me what the problem could be. Maybe I don't have enough RAM?
I work on Linux, OpenGL ES 2.0, 2 GB RAM, minimum heap for Java 1 GB, maximum 2 GB.

First: OpenGL texture memory is allocated based on the texture's width and height. Even if your width does not equal your height, the larger dimension is effectively used for the other as well, so memory ends up being allocated as if for a square of that size.
Second: it is best for your picture to be no larger than 2048x2048. I think the maximum is 4096x4096.
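The actual limit depends on the GPU and driver, so you can query it at runtime instead of guessing. A minimal sketch, assuming it runs after the GL context is created (e.g. in create()):
// Ask the driver for the largest texture edge it supports (commonly 2048-16384).
IntBuffer buf = BufferUtils.newIntBuffer(16);
Gdx.gl.glGetIntegerv(GL20.GL_MAX_TEXTURE_SIZE, buf);
int maxTextureSize = buf.get(0);
Gdx.app.log("gl", "GL_MAX_TEXTURE_SIZE = " + maxTextureSize);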
OpenGL ES 1.x does not support non-power-of-two (NPOT) textures, and libGDX may convert your texture to POT, so the safest texture size is a power of two.
You can disable the forced POT images in libGDX with this call:
Texture.setEnforcePotImages(false);
Now, your solution:
You have to split your picture into smaller pieces, for example 2048x2048 (POT), then pack them with the texture packer and draw each piece where you need it.
Note that file compression is different from texture compression: compressing the file as JPG or PNG does not give you less texture memory on the GPU.
You should use a texture compression format such as ETC1, ETC2, or KTX instead.
I wrote example code for you:
// In this sample the image is split into a 5x4 grid of 2048x2048 tiles
for (int i = 0; i < 5; i++) {
    for (int j = 0; j < 4; j++) {
        batch.draw(textures[i][j], i * 2048, j * 2048, 2048, 2048);
    }
}
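As a companion, a minimal sketch of loading the pre-split tiles into that textures array (the tile file names are an assumption; use whatever your texture packer produced):
Texture[][] textures = new Texture[5][4];
for (int i = 0; i < 5; i++) {
    for (int j = 0; j < 4; j++) {
        // e.g. tiles exported as map_0_0.png ... map_4_3.png
        textures[i][j] = new Texture(Gdx.files.internal("map_" + i + "_" + j + ".png"));
    }
}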

Related

How to handle huge data/images in RAM in Java?

Summary
I am reading a large binary file which contains image data.
Cumulative Count Cut analysis is performed on the data [it requires another array with the same size as the image].
The data is stretched to the 0-255 range and stored in a BufferedImage pixel by pixel, to draw the image on a JPanel.
On this image, zooming is performed using AffineTransform.
Problems
Small Image (<0.5GB)
1.1 When I am increasing the scale factor while zooming, after a point an exception is thrown:
java.lang.OutOfMemoryError: Java heap space
Below is the code used for zooming:
scaled = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
Graphics2D g2d = (Graphics2D)scaled.createGraphics();
AffineTransform transformer = new AffineTransform();
transformer.scale(scaleFactor, scaleFactor);
g2d.setTransform(transformer);
Large Image (>1.5GB)
While loading a huge image (>1.5GB), the same exception occurs as in 1.1. Even if the image is small enough to be loaded, I sometimes get the same error.
Solutions Tried
I tried using BigBufferedImage in place of BufferedImage to store the stretched data: BigBufferedImage image = BigBufferedImage.create(newCol, newRow, BufferedImage.TYPE_INT_ARGB);
But it couldn't be passed to g2d.drawImage(image, 0, 0, this); because the repaint method of the JPanel just stops for some reason.
I tried loading the image at low resolution, reading a pixel and then jumping/skipping a few columns and rows. But the problem is how to decide how many pixels to skip: the image size varies, so I am unable to decide the jump parameter.
MappedByteBuffer buffer = inChannel.map(FileChannel.MapMode.READ_ONLY, 0, inChannel.size());
buffer.order(ByteOrder.LITTLE_ENDIAN);
FloatBuffer floatBuffer = buffer.asFloatBuffer();
for (int i = 0, k = 0; i < nrow; i = i + jump) { // jump is the number of rows/columns skipped, nrow is the image height
    for (int j = 0, l = 0; j < ncol; j = j + jump) { // ncol is the image width
        index = (i * ncol) + j;
        oneDimArray[(k * ncolLessRes) + l] = floatBuffer.get(index); // oneDimArray is sized for the low-resolution image
        l++;
    }
    k++;
}
The problem is to decide how many columns and rows to skip, i.e. what value of jump should be set.
I tried setting -Xmx, but the image size varies and we cannot set the -Xmx value dynamically.
Here are some values:
Image Size | Xmx   | Xms   | Problem
83Mb       | 512m  | 256m  | working
83Mb       | 3096m | 2048m | System hanged
3.84Gb     | 512m  | 256m  | java.lang.OutOfMemoryError: Java heap space
3.84Gb     | 3096m | 512m  | java.lang.OutOfMemoryError: Java heap space
For this I tried finding the memory allocated to the program:
try (BufferedWriter bw = new BufferedWriter(new FileWriter(dtaFile, true))) {
    Runtime runtime = Runtime.getRuntime();
    runtime.gc();
    double oneMB = Math.pow(2, 20);
    // Allocate a large array just to observe how the heap behaves
    long[] arr = IntStream.range(0, (int) (10.432 * Long.BYTES * Math.pow(2, 20))).asLongStream().toArray();
    runtime.gc();
    long freeMemory = runtime.freeMemory();
    long totalMemory = runtime.totalMemory();
    long usedMemory = totalMemory - freeMemory;
    long maxMemory = runtime.maxMemory();
    String fileLine = String.format(" %9.3f %9.3f %9.3f %9.3f ",
            usedMemory / oneMB, freeMemory / oneMB, totalMemory / oneMB, maxMemory / oneMB);
    bw.write(fileLine);
}
The following results were obtained: [Memory Allocation screenshot]
This approach failed because the available memory increases according to how much memory my code is already using. As a result, it is not useful for deciding the jump value.
Result Expected
A way to access the amount of available memory before loading the image, so that I could use it to decide the value of jump. Is there any other way to decide the jump value (i.e., how much I can lower the resolution)?
You can read a specific portion of an image and then scale it down to a reduced resolution for display purposes.
So in your case you can read the image in chunks (read image portions just like we read data from a database row by row).
For example:
// Define the portion / row size, e.g. 50px or 100px
int rowHeight = 50;
int rowsToScan = imageHeight / rowHeight;
if (imageHeight % rowHeight > 0) rowsToScan++;
int x = 0;
int y = 0;
int w = imageWidth;
int h = rowHeight;
ArrayList<BufferedImage> scaledImagePortions = new ArrayList<>();
for (int i = 1; i <= rowsToScan; i++) {
    // Read a portion of the image, scale it,
    // and push the scaled version into the list
    BufferedImage scaledPortionOfImage = this.getScaledPortionOfImage(img, x, y, w, h);
    scaledImagePortions.add(scaledPortionOfImage);
    y = rowHeight * i;
}
// Create a single image out of the scaled image portions
Thread which can help you get a portion of an image: Read region from very large image file in Java
Thread which can help you scale the image (my quick search result :) ): how to resize Image in java?
Thread which can help you merge the buffered images: Merging two images
You can always tweak the snippets :)
OutOfMemoryError is self-explanatory - you are out of memory. That said, it is not the physical RAM on your machine, but rather the JVM hitting the upper memory allocation limit set by the -Xmx setting.
Your -Xmx testing makes little sense, as you try to put a 3.84GB image into a 512MB memory block. It cannot work - you cannot put 10 liters of water in a 5 liter bottle. For memory usage you need at least 3x the size of the image, since you are storing every pixel separately and each consists of 3 bytes (RGB). And that is just the pure image data. On top of that comes the whole app and data object structure overhead, plus additional space required for computation, and probably plenty more that I haven't mentioned or am not even aware of.
You don't want to "dynamically set" -Xmx. Set it to the maximum possible value on your system (trial and error). The JVM will not take that much memory unless it needs it. With additional -X settings you can tell the JVM to free up unused memory, so you don't have to worry about unused memory being "frozen" by the JVM.
I never worked on image processing applications. Are Photoshop or Gimp capable of opening and doing something useful with such big images? Maybe you should look there for clues about processing that much data (if it works).
If the point above is naive because you need this for scientific purposes (and that is not what Photoshop or Gimp are made for, unless you are a flat-earther :) ), you will need scientific-grade hardware.
One thing that comes to my mind is not to read the image into memory at all, but to process it on the fly. This could reduce memory consumption to the order of megabytes.
Take a closer look at the ImageReader API; as it suggests (the readTile method), it might be possible to read only an area of the image (e.g. for zooming in).
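To make that concrete, here is a minimal sketch of the region-reading idea (method, variable, and parameter names are mine, not from the question): only the requested rectangle is decoded, optionally subsampled.
static BufferedImage readRegion(File file, Rectangle region, int skip) throws IOException {
    try (ImageInputStream in = ImageIO.createImageInputStream(file)) {
        ImageReader reader = ImageIO.getImageReaders(in).next();
        reader.setInput(in, true);
        ImageReadParam param = reader.getDefaultReadParam();
        param.setSourceRegion(region);                // only this area is decoded
        param.setSourceSubsampling(skip, skip, 0, 0); // keep every 'skip'-th pixel in x and y
        BufferedImage result = reader.read(0, param);
        reader.dispose();
        return result;
    }
}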

HOG parameters in OpenCV Java Version

Hi, I am developing on Android and I want to use my cellphone camera to do something.
I am using the OpenCV-2.4.9 Java package to extract HOG features, but I am confused about the output vector.
My image size is 480x640. I set the window to 48x64, block size 24x32, cell size 12x16 and 8 bins for each cell. So for each window, I should get 128-dimensional data describing it. After running the following code:
MatOfFloat keyPoints = new MatOfFloat();
Hog.compute(imagePatch, keyPoints);
keyPoints is an array whose length is 172800 (I think it is 1350x128). I think there should be a parameter to set the window stride to control the number of windows. In the library, there is also another function that controls the window stride:
public void compute(Mat img, MatOfFloat descriptors, Size winStride, Size padding, MatOfPoint locations)
but I don't know the meaning of the parameters. Could anyone help me figure this out?
void compute(Mat img, MatOfFloat descriptors, Size winStride, Size padding, MatOfPoint locations)
Mat img
input image to test
MatOfFloat descriptors
output vector of descriptors, one for each window in the sliding window search. In C++ it is a vector treated as an array, that is, all descriptors are concatenated into one long array. You need to know the descriptor size to get back each descriptor: Hog.getDescriptorSize().
Size winStride
size.width = the step of the sliding window in the x direction (a smaller step means more overlap);
size.height = the step of the sliding window in the y direction;
So if you set it to 1,1 it will check a window centered on every pixel. However, this will be slow, so you can set it to the cell size for a good trade-off.
Size padding
This adds a border around the image, so that the detector can find things near the edges. Without this, the first point of detection will be half the window size into the image, so a good choice is to set it to the window size, half the window size, or some multiple of the cell size.
MatOfPoint locations
This is a list of locations that you can pre-specify, for instance if you only want descriptors for certain locations. Leave it empty to do a full search.
Example
Disclaimer: this may not be exactly proper Java, but it should give you an idea of what the parameters do...
Extract some HOGs:
MatOfFloat descriptors = new MatOfFloat();   // an empty vector of descriptors
Size winStride = new Size(Hog.get_winSize().width / 2, Hog.get_winSize().height / 2); // 50% overlap in the sliding window
Size padding = new Size(0, 0);               // no padding around the image
MatOfPoint locations = new MatOfPoint();     // an empty vector of locations, so perform a full search
Hog.compute(img, descriptors, winStride, padding, locations);
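A short follow-up sketch (variable names assumed) for splitting the flat output into one descriptor per window; this is also where the 172800-element length from the question comes from (number of windows times descriptor size):
float[] all = descriptors.toArray();
int descSize = (int) Hog.getDescriptorSize(); // e.g. 128 for the setup in the question
int numWindows = all.length / descSize;       // e.g. 172800 / 128 = 1350 windows
float[][] perWindow = new float[numWindows][descSize];
for (int w = 0; w < numWindows; w++) {
    System.arraycopy(all, w * descSize, perWindow[w], 0, descSize);
}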

Image Interpolation Help Needed

I'm trying to figure out how to use my thermal sensor to change the colors of an overlay that I have over the Android camera. The problem is that the data I get back is a 16x4 array. How do I resize this 16x4 grid to a different resolution, such as 32x8, 48x12, etc.?
Edit:
For instance, I have this as my draw method:
public void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    for (int x = 0; x < 4; x++) {
        for (int y = 0; y < 16; y++) {
            // mapping 2D array to 1D array
            int index = x * GRID_WIDTH + y;
            tempVal = currentTemperatureValues.get(index);
            // 68x68 bitmap squares to display temperature data
            if (tempVal >= 40.0)
                bitmaps[x][y].eraseColor(Color.RED);
            else if (tempVal < 40.0 && tempVal > 35.0)
                bitmaps[x][y].eraseColor(Color.YELLOW);
            else
                bitmaps[x][y].eraseColor(Color.BLUE);
        }
    }
    combinedBitmap = mergeBitmaps();
    // combinedBitmap = fastblur(combinedBitmap, 45);
    paint.setAlpha(alphaValue);
    canvas.drawBitmap(combinedBitmap, xBitmap, yBitmap, paint);
    Log.i(TAG, "Done drawing");
}
The current implementation draws a 16x4 overlay over my camera preview, but the resolution is very low, and I'd like to improve it as best I can.
The Bitmap class in the Android API (that's what I'm assuming you're using) has a static method called createScaledBitmap: http://developer.android.com/reference/android/graphics/Bitmap.html#createScaledBitmap%28android.graphics.Bitmap,%20int,%20int,%20boolean%29
This method accepts an already created Bitmap; you specify the final width and height as well as a boolean flag called filter. Setting this to false does nearest-neighbour interpolation, while true does bilinear interpolation.
As an example, given that you have a 2D array of Bitmaps, you could resize one like so:
Bitmap resize = Bitmap.createScaledBitmap(bitmaps[x][y], 32, 8, true);
The first parameter is the Bitmap you want resized, the second parameter is the width, the third parameter is the height, and the last is the filter flag. The output (of course) is stored in resize and is your resized / scaled image. Currently, the Javadoc for this method (as you can see) provides no explanation of what filter does. I had to look at the Android source to figure out exactly what it was doing, and also drew on experience, as I have used the method before.
Generally, you set this to false if you are shrinking the image, and to true if you are upscaling it. The reason is that when you interpolate an image from small to large, you are trying to create more information than was initially available. Doing this with nearest neighbour introduces blocking artifacts, and bilinear interpolation helps smooth them out. Going from large to small has no noticeable artifacts with either method, so you generally choose nearest neighbour as it is more computationally efficient. There will obviously be blurring as you resize to a larger image: the larger you go, the more blurriness you get, but that beats the blockiness you get with nearest neighbour.
For using just the Android API, this is the best and easiest solution you can get. If you want more sophisticated interpolation techniques (cubic, Lanczos, etc.), unfortunately you will have to implement them yourself. Try bilinear first and see what you get.
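Tying this back to the onDraw code above, a minimal sketch (the grid constants, indexing, and overlay size are assumptions): pack the 16x4 readings into one small Bitmap, then let createScaledBitmap do the bilinear interpolation up to the overlay resolution.
Bitmap grid = Bitmap.createBitmap(GRID_WIDTH, GRID_HEIGHT, Bitmap.Config.ARGB_8888); // e.g. 16x4
for (int gx = 0; gx < GRID_WIDTH; gx++) {
    for (int gy = 0; gy < GRID_HEIGHT; gy++) {
        float t = currentTemperatureValues.get(gy * GRID_WIDTH + gx);
        int color = (t >= 40.0f) ? Color.RED : (t > 35.0f) ? Color.YELLOW : Color.BLUE;
        grid.setPixel(gx, gy, color);
    }
}
// true => bilinear filtering, which smooths the upscaled overlay instead of leaving blocks
Bitmap overlay = Bitmap.createScaledBitmap(grid, overlayWidth, overlayHeight, true);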

Starting point for a tile map

I am due to start work on a 2D platform game in Java using Java2D, and am trying to devise a way to create a world. I have been reading up about this for the last few hours now, and as far as I can tell, a relatively effective way is to have a text file with a "matrix" of values in it, which is read in by the program in order to create the map (stored in a 2D array).
Now, my plan is to have multiple JComponents that display ImageIcons for the various textures in the world; the JComponent object would depend on the character in the given array index.
Is there anything I may have overlooked?
Will this scheme work with a background image, i.e. when there is a character that represents a blank space, will part of the background be shown?
Apologies if this seems like a lazy question, I can assure you it is not out of laziness. I am merely trying to plan this out before hacking code together.
Unless you have a compelling reason to, having a different component for each tile is probably not a good way to go. Look into a Canvas and displaying loaded images at different offsets within it.
Example:
480x640 Canvas
128x16 image file (contains 8 16x16 tile images)
So your file has a bunch of numbers (characters etc.), we'll say 0-7 for the 8 tiles in the image. The file has 30x40 numbers, laid out in a grid the same as the canvas. So
1 2 1 3 4 0 2...
...
And to display it, the code ends up something like (not tested, based on the docs):
Graphics g = /* initialize graphics */;
Image yourTileImage = /* load your image */;
for (int xpos = 0; xpos < maxX; xpos++) {
    for (int ypos = 0; ypos < maxY; ypos++) {
        int number = /* get number from map file */;
        g.drawImage(yourTileImage,
            xpos * 16, ypos * 16, xpos * 16 + 15, ypos * 16 + 15,
            number * 16, 0, number * 16 + 15, 15,
            observer);
    }
}
This basically maps each number to a tile in your tile image, then puts that tile into the right spot on the canvas: the (x, y) coordinates times the size of a tile.
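As a companion, a minimal sketch for loading the text map into a 2D int array first (the file name and whitespace-separated format are assumptions), so the loop above can simply index into it:
int[][] map = new int[maxY][maxX];
try (BufferedReader reader = new BufferedReader(new FileReader("level1.txt"))) {
    for (int ypos = 0; ypos < maxY; ypos++) {
        String[] tokens = reader.readLine().trim().split("\\s+");
        for (int xpos = 0; xpos < maxX; xpos++) {
            map[ypos][xpos] = Integer.parseInt(tokens[xpos]);
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}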
There are a number of good 2D graphics engines available for Java. You would be better off using one of those rather than trying to re-invent the wheel. (Quite apart from anything else, they will make use of the GPU.)
You should easily find one that does what you need.

Make a BufferedImage use less RAM?

I have a Java program that reads a JPEG file from the hard drive and uses it as the background image for various other things. The image itself is stored in a BufferedImage object like so:
BufferedImage background;
background = ImageIO.read(file);
This works great - the problem is that the BufferedImage object itself is enormous. For example, a 215k jpeg file becomes a BufferedImage object that's 4 megs and change. The app in question can have some fairly large background images loaded, but whereas the jpegs are never more than a meg or two, the memory used to store the BufferedImage can quickly exceed 100s of megabytes.
I assume all this is because the image is being stored in ram as raw RGB data, not compressed or optimized in any way.
Is there a way to have it store the image in ram in a smaller format? I'm in a situation where I have more slack on the CPU side than RAM, so a slight performance hit to get the image object's size back down towards the jpeg compression would be well worth it.
In one of my projects I just down-sample the image as it is being read from an ImageStream, on the fly. The down-sampling reduces the image's dimensions to a required width and height without requiring expensive resizing computations or modification of the image on disk.
Because I down-sample the image to a smaller size, it also significantly reduces the processing power and RAM required to display it. For extra optimization, I render the buffered image in tiles as well... But that's a bit outside the scope of this discussion. Try the following:
public static BufferedImage subsampleImage(
        ImageInputStream inputStream,
        int x,
        int y,
        IIOReadProgressListener progressListener) throws IOException {

    BufferedImage resampledImage = null;
    Iterator<ImageReader> readers = ImageIO.getImageReaders(inputStream);
    if (!readers.hasNext()) {
        throw new IOException("No reader available for supplied image stream.");
    }

    ImageReader reader = readers.next();
    ImageReadParam imageReaderParams = reader.getDefaultReadParam();
    reader.setInput(inputStream);

    Dimension d1 = new Dimension(reader.getWidth(0), reader.getHeight(0));
    Dimension d2 = new Dimension(x, y);
    int subsampling = (int) scaleSubsamplingMaintainAspectRatio(d1, d2);
    imageReaderParams.setSourceSubsampling(subsampling, subsampling, 0, 0);

    reader.addIIOReadProgressListener(progressListener);
    resampledImage = reader.read(0, imageReaderParams);
    reader.removeAllIIOReadProgressListeners();

    return resampledImage;
}

public static long scaleSubsamplingMaintainAspectRatio(Dimension d1, Dimension d2) {
    long subsampling = 1;
    if (d1.getWidth() > d2.getWidth()) {
        subsampling = Math.round(d1.getWidth() / d2.getWidth());
    } else if (d1.getHeight() > d2.getHeight()) {
        subsampling = Math.round(d1.getHeight() / d2.getHeight());
    }
    return subsampling;
}
To get the ImageInputStream from a File, use:
ImageIO.createImageInputStream(new File("C:\\image.jpeg"));
As you can see, this implementation respects the images original aspect ratio as well. You can optionally register an IIOReadProgressListener so that you can keep track of how much of the image has been read so far. This is useful for showing a progress bar if the image is being read over a network for instance... Not required though, you can just specify null.
Why is this of particular relevance to your situation? It never reads the entire image into memory, just as much of it as you need so that it can be displayed at the desired resolution. Works really well for huge images, even those that are tens of MB on disk.
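A hypothetical usage of the method above (the file path and target size are placeholders): load the image downsampled to roughly screen size, with no progress listener.
ImageInputStream in = ImageIO.createImageInputStream(new File("C:\\image.jpeg"));
BufferedImage background = subsampleImage(in, 1920, 1200, null); // null = no progress listener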
I assume all this is because the image is being stored in ram as raw RGB data, not compressed or optimized in any way.
Exactly... Say a 1920x1200 JPG can fit in, say, 300 KB on disk. In memory, with (typical) RGB + alpha at 8 bits per component (hence 32 bits per pixel), it will occupy:
1920 x 1200 x 32 / 8 = 9 216 000 bytes
So your 300 KB file becomes a picture needing nearly 9 MB of RAM (note that depending on the type of images you're using from Java, and depending on the JVM and OS, this may sometimes be GFX-card RAM).
If you want to use a picture as a background of a 1920x1200 desktop, you probably don't need to have a picture bigger than that in memory (unless you want to some special effect, like sub-rgb decimation / color anti-aliasing / etc.).
So you have two choices:
make your files less wide and less tall (in pixels) on disk
reduce the image size on the fly
I typically go with number 2, because reducing the file size on disk means you're losing detail (a 1920x1200 picture is less detailed than the "same" at 3940x2400: you'd be "losing information" by downscaling it).
Now, Java kinda sucks big time at manipulating pictures that big (from a performance point of view, a memory usage point of view, and a quality point of view [*]). Back in the day I'd call ImageMagick from Java to resize the picture on disk first, and then load the resized image (say, fitting my screen's size).
Nowadays there are Java bridges / APIs to interface directly with ImageMagick.
[*] There is NO WAY you're downsizing an image using Java's built-in API as fast and with quality as good as that provided by ImageMagick, for a start.
Do you have to use BufferedImage? Could you write your own Image implementation that stores the jpg bytes in memory, and converts to a BufferedImage as necessary and then discards it?
This, applied with some display-aware logic (rescale the image using JAI before storing it in your byte array as jpg), will make it faster than decoding the large jpg every time, and give a smaller footprint than what you currently have (processing memory requirements excepted).
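A minimal sketch of that idea (the class and method names are mine, not an existing API): keep only the compressed bytes resident and decode on demand.
class LazyJpegImage {
    private final byte[] jpegBytes; // compressed data, small footprint

    LazyJpegImage(File file) throws IOException {
        this.jpegBytes = Files.readAllBytes(file.toPath());
    }

    // Decode only when actually needed; the caller can discard the result after drawing.
    BufferedImage decode() throws IOException {
        return ImageIO.read(new ByteArrayInputStream(jpegBytes));
    }
}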
Use imgscalr:
http://www.thebuzzmedia.com/software/imgscalr-java-image-scaling-library/
Why?
Follows best practices
Stupid simple
Interpolation, Anti-aliasing support
So you aren't rolling your own scaling library
Code:
BufferedImage thumbnail = Scalr.resize(image, 150);
or
BufferedImage thumbnail = Scalr.resize(image, Scalr.Method.SPEED, Scalr.Mode.FIT_TO_WIDTH, 150, 100, Scalr.OP_ANTIALIAS);
Also, use image.flush() on your larger image after conversion to help with the memory utilization.
File size of the JPG on disk is completely irrelevant.
The pixel dimensions of the file are what matter. If your image is 15 megapixels, expect it to require a huge amount of RAM to load as a raw uncompressed version.
Resize your image dimensions to just what you need, and that is the best you can do without going to a less rich colorspace representation.
You could copy the pixels of the image to another buffer and see if that occupies less memory than the BufferedImage object. Probably something like this:
BufferedImage background = new BufferedImage(
width,
height,
BufferedImage.TYPE_INT_RGB
);
int[] pixels = background.getRaster().getPixels(
0,
0,
imageBuffer.getWidth(),
imageBuffer.getHeight(),
(int[]) null
);
