More Efficient RGB to ARGB Conversion in Java

I have working code which reads a 700x700 RGB24 TIF file and places it into display memory. The line which assigns the pixelARGB value appears to be extremely inefficient; this code takes 3-4 seconds to redraw the screen. Is there a way I can avoid the shifting and OR-ing and just place the byte values into the correct position within the 32-bit word?
In other languages I have done this with "overlaid variables" or "variant records" or similar. I cannot find an equivalent in Java. Thank you.
for (y = 0; y < 700; y++) { // for each line
    i = 0;
    for (x = 0; x < 700; x++) { // for each dot
        red = lineBuf[i++] & 0xFF;
        green = lineBuf[i++] & 0xFF;
        blue = lineBuf[i++] & 0xFF;
        pixelARGB = 0xFF000000 | (red << 16) | (green << 8) | blue;
        this_g.setPixel(x + BORDER, y + BORDER, pixelARGB);
    }
    size = is.read(lineBuf, 0, 2100);
}

There is at least one way to convert your TIFF image data buffer into a Bitmap more efficiently, and there is an optimization that can possibly be made.
1. Use an int[] array instead of pixel copies:
You still have to calculate each pixel individually, but store the results in an int[] array; it is the per-pixel setPixel() call that is taking all your time.
Example:
final int w = 700;
final int h = 700;
final int n = w * h;
final int[] buf = new int[n];
for (int y = 0; y < h; y++) {
    final int yw = y * w;
    for (int x = 0; x < w; x++) {
        int i = yw + x;
        // Calculate 'pixelARGB' here.
        buf[i] = pixelARGB;
    }
}
Bitmap result = Bitmap.createBitmap(buf, w, h, Bitmap.Config.ARGB_8888);
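For illustration, here is a rough, untested sketch that folds the pixelARGB calculation from your original loop into this approach, reading one 2100-byte scanline at a time as in your code (the method name decodeRgb24 is just a placeholder):
// Untested sketch; needs android.graphics.Bitmap, java.io.InputStream, java.io.IOException.
static Bitmap decodeRgb24(InputStream is) throws IOException {
    final int w = 700;
    final int h = 700;
    final int[] buf = new int[w * h];
    final byte[] lineBuf = new byte[w * 3];          // one 2100-byte RGB24 scanline
    for (int y = 0; y < h; y++) {
        is.read(lineBuf, 0, w * 3);                  // a robust version would loop until the line is full
        int src = 0;
        int dst = y * w;
        for (int x = 0; x < w; x++) {
            int red   = lineBuf[src++] & 0xFF;
            int green = lineBuf[src++] & 0xFF;
            int blue  = lineBuf[src++] & 0xFF;
            buf[dst++] = 0xFF000000 | (red << 16) | (green << 8) | blue;
        }
    }
    return Bitmap.createBitmap(buf, w, h, Bitmap.Config.ARGB_8888);
}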
2. Resize within your Loop:
This will not always apply, but if your destination ImageView for the result image is known to be smaller than the source image, which is 700x700 in your question, then you can resize within your for loop for a very large performance gain.
What you have to do is loop through your destination image pixels, calculate the x, y values you need from your source image, calculate the pixelARGB value for only those pixels, populate a smaller int[] array, and finally generate a smaller Bitmap (see the sketch below). Much. Faster.
You can even enhance the resize quality with a homebrew interpolation of the four nearest source pixels for each destination pixel (effectively bilinear), but I think you will find this unnecessary for display purposes.
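Here is a rough, untested sketch of the nearest-neighbour index mapping described in step 2; in practice you would compute pixelARGB only for the source pixels you actually sample, rather than decoding the whole image first:
// Untested sketch: nearest-neighbour downscale of an already-decoded ARGB buffer.
static Bitmap convertAndResize(int[] srcBuf, int srcW, int srcH, int dstW, int dstH) {
    final int[] dstBuf = new int[dstW * dstH];
    for (int y = 0; y < dstH; y++) {
        final int srcY = y * srcH / dstH;            // nearest source row
        for (int x = 0; x < dstW; x++) {
            final int srcX = x * srcW / dstW;        // nearest source column
            dstBuf[y * dstW + x] = srcBuf[srcY * srcW + srcX];
        }
    }
    return Bitmap.createBitmap(dstBuf, dstW, dstH, Bitmap.Config.ARGB_8888);
}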

Related

How to swap pixels of an image randomly?

I've been working on a Java program that reads an image, subdivides it into a definable number of rectangular tiles, then swaps the pixels of each tile with those of another tile, and then puts them back together and renders the image.
An explanation of the idea: http://i.imgur.com/OPefpjf.png
I've been using the BufferedImage class, so my idea was to first read all width * height pixels from its data buffer and save them to an array.
Then, according to the tile height and width, copy the entire pixel information of each tile to small arrays, shuffle those, and then write back the data contained in these arrays to their position in the data buffer. It should then be enough to create a new BufferedImage with the original color and sample models as well as the updated data buffer.
However, I got ominous errors when creating a new WritableRaster from the updated data buffer, and the number of pixels didn't match up (I suddenly had 24 instead of the original 8, and so forth), so I figured there is something wrong with the way I address the pixel information.
(Reference pages for BufferedImage and WritableRaster.)
I used the following loop to iterate through the 1D data buffer:
// maximum iteration values
int numRows = height / tileHeight;
int numCols = width / tileWidth;

// cut picture into tiles
// for each column of the image matrix
// addressing columns (1D)
for (int column = 0; column < numCols; column++)
{
    // for each row of the matrix
    // addressing cells (2D)
    for (int row = 0; row < numRows; row++)
    {
        byte[] pixels = new byte[(tileWidth+1) * (tileHeight+1)];
        int celloffset = (column + (width * row)); // find cell base address

        // for each row inside the cell
        // addressing column inside a tile (3D)
        for (int colpixel = 0; colpixel < tileWidth; colpixel++)
        {
            // for each column inside the tile -> each pixel of the cell
            for (int rowpixel = 0; rowpixel < tileHeight; rowpixel++)
            {
                // address of pixel in original image buffer array allPixels[]
                int origpos = celloffset + ((rowpixel * tileWidth) + colpixel);
                // translated address of pixel in local pixels[] array of current tile
                int transpos = colpixel + (rowpixel * tileWidth);
                // source, start, dest, offset, length
                pixels[transpos] = allPixels[origpos];
            }
        }
    }
}
Is there something wrong with this code? Or is there perhaps a much easier way to do this that I haven't thought of yet?
The code below edits the image in place, so there is no need to create new objects, which should simplify things. If you need to keep the original, just copy it entirely first. Also, there is no need to save to separate arrays.
Since you said "shuffle" I assume you want to swap the tiles randomly. I made a function for that, and if you just call it many times you will end up with tiles swapped randomly. If you want a pattern or some other rule of how they are swapped, just call the other function directly with your chosen tiles.
I haven't used BufferedImage before, but looking at the documentation,
http://docs.oracle.com/javase/7/docs/api/java/awt/image/BufferedImage.html
and this post,
Edit pixel values
it seems that an easy way is to use the methods getRGB and setRGB:
int getRGB(int x, int y)
Returns an integer pixel in the default RGB color model
(TYPE_INT_ARGB) and default sRGB colorspace.
void setRGB(int x, int y, int rgb)
Sets a pixel in this BufferedImage to the specified RGB value.
I would try something like the following (untested code), using java.util.Random:
http://docs.oracle.com/javase/7/docs/api/java/util/Random.html
int numRows = height / tileHeight;
int numCols = width / tileWidth;

void swapTwoRandomTiles(BufferedImage b) {
    // choose x and y coordinates randomly for the tiles
    int xt1 = random.nextInt(numCols);
    int yt1 = random.nextInt(numRows);
    int xt2 = random.nextInt(numCols);
    int yt2 = random.nextInt(numRows);
    swapTiles(b, xt1, yt1, xt2, yt2);
}

void swapTiles(BufferedImage b, int xt1, int yt1, int xt2, int yt2) {
    int tempPixel = 0;
    for (int x = 0; x < tileWidth; x++) {
        for (int y = 0; y < tileHeight; y++) {
            // save the pixel value to temp
            tempPixel = b.getRGB(x + xt1*tileWidth, y + yt1*tileHeight);
            // write over the part of the image that we just saved, getting data from the other tile
            b.setRGB(x + xt1*tileWidth, y + yt1*tileHeight, b.getRGB(x + xt2*tileWidth, y + yt2*tileHeight));
            // write from temp back to the other tile
            b.setRGB(x + xt2*tileWidth, y + yt2*tileHeight, tempPixel);
        }
    }
}
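For completeness, a rough, untested sketch of a small driver around these methods; the tile-size fields and the number of swaps are assumptions, not something fixed by the code above:
// Untested sketch; needs java.util.Random and java.awt.image.BufferedImage.
// Assumes the numRows/numCols/tileWidth/tileHeight fields used above are already set.
Random random = new Random();

void shuffleTiles(BufferedImage b, int swaps) {
    // Each call swaps two randomly chosen tiles; many calls approximate a random shuffle.
    for (int i = 0; i < swaps; i++) {
        swapTwoRandomTiles(b);
    }
}

// e.g. shuffleTiles(image, numRows * numCols * 4);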

How to create a bulging effect in Java?

I am trying to create a Java function to make a bulging effect on an image by shifting each pixel relative to the centre of the image. I first take the (x, y) coordinate of the pixel, find the relative shift, x = x - (x/2), and convert it to polar form [r*cos(a), r*sin(a)]. r is found by r = Math.sqrt(x*x + y*y). Angle a is found using Math.atan2(y, x). The new radius (r') is found using r' = 2r^1.5. However, the new x, y values from [r*cos(a), r*sin(a)] exceed the dimensions of the image, and errors occur.
Am I making a fundamental mistake?
public void bulge()
{
    double xval, yval = 0;
    // loop through the columns
    for (int x = 0; x < this.getWidth(); x++)
    {
        // loop through the rows
        for (int y = 0; y < this.getHeight(); y++)
        {
            int redValue, greenValue, blueValue = 0;
            double newRadius = 0;
            Pixel pixel = this.getPixel(x, y);
            redValue = pixel.getRed();
            greenValue = pixel.getGreen();
            blueValue = pixel.getBlue();

            xval = x - (x/2);
            yval = y - (y/2);

            double radius = Math.sqrt(xval*xval + yval*yval);
            double angle = Math.atan2(yval, xval);
            newRadius = 2*(Math.pow(radius, 1.5));

            xval = (int)(newRadius*Math.sin(angle));
            yval = (int)(newRadius*Math.cos(angle));

            Pixel pixelNewPos = this.getPixel((int)xval, (int)yval);
            pixelNewPos.setColor(new Color(redValue, greenValue, blueValue));
        }
    }
}
It's a lot easier to successfully apply a transform from source image A to destination image B by doing the reverse transform from pixels in image B to pixels in image A.
By this I mean for each pixel in destination image B, determine the pixel or pixels in source image A that contribute to the color. That way you don't end up with a whole bunch of pixels in the target image that haven't been touched.
As an example using a linear scaling operation by 2, a simple implementation might look like this:
for (int x = 0; x < sourceWidth; ++x) {
    for (int y = 0; y < sourceHeight; ++y) {
        Pixel sourcePixel = sourceImage.getPixel(x, y);
        int destPixelX = x * 2;
        int destPixelY = y * 2;
        destImage.setPixel(destPixelX, destPixelY, sourcePixel);
    }
}
It should be clear from this code that pixels with odd X or Y values will not be set in the destination image.
A better way would be something like this:
for (int x = 0; x < destWidth; ++x) {
    for (int y = 0; y < destHeight; ++y) {
        int sourcePixelX = x / 2;
        int sourcePixelY = y / 2;
        Pixel sourcePixel = sourceImage.getPixel(sourcePixelX, sourcePixelY);
        destImage.setPixel(x, y, sourcePixel);
    }
}
Although this is not a good image upscaling algorithm in general, it does show how to make sure that all the pixels in your target image are set.
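Applied to the bulge from the question, a rough, untested sketch of the reverse mapping could look like the following. It assumes source is an untouched copy of the original image with the same Pixel API as in the question, and uses the inverse of r' = 2*r^1.5, which is r = (r'/2)^(2/3), measured from the image centre:
// Untested sketch; 'Picture' stands in for whatever image class 'this' is,
// and java.awt.Color plus the question's Pixel API are assumed.
public void bulge(Picture source)
{
    int w = this.getWidth();
    int h = this.getHeight();
    double cx = w / 2.0;
    double cy = h / 2.0;

    for (int x = 0; x < w; x++) {
        for (int y = 0; y < h; y++) {
            double dx = x - cx;
            double dy = y - cy;
            double rDest = Math.sqrt(dx*dx + dy*dy);
            double angle = Math.atan2(dy, dx);

            // inverse of r' = 2 * r^1.5
            double rSrc = Math.pow(rDest / 2.0, 2.0 / 3.0);

            int sx = (int) Math.round(cx + rSrc * Math.cos(angle));
            int sy = (int) Math.round(cy + rSrc * Math.sin(angle));

            // clamp so we never read outside the source image
            sx = Math.max(0, Math.min(w - 1, sx));
            sy = Math.max(0, Math.min(h - 1, sy));

            Pixel src = source.getPixel(sx, sy);
            this.getPixel(x, y).setColor(new Color(src.getRed(), src.getGreen(), src.getBlue()));
        }
    }
}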
Am I making a fundamental mistake?
At a conceptual level, yes. Your algorithm is taking a rectangular image and moving the location of the pixels to give a larger, non-rectangular image. Obviously that won't fit into your original rectangle.
So you either need to clip (i.e. discard) the pixels that fall outside of the rectangle, or you need to use a larger rectangle so that all of the mapped pixels fall inside it.
In the latter case, there will be gaps around the edges... if your transformation is doing what you claim it does. A non-linear transformation of a rectangle is not going to have straight sides.

Bitmap image coming out warped

// load pixels into an image
this.image = new BufferedImage(this.width,
                               this.height,
                               BufferedImage.TYPE_INT_RGB);

// get actual image data for easier pixel loading
byte[] iData = new byte[this.size - 54];
for (int i = 0; i < this.size - 54; i++) {
    iData[i] = this.data[i+54];
}

// start from bottom row
for (int y = this.height-1; y >= 0; y--) {
    for (int x = 0; x < this.width; x++) {
        int index = (this.width*y + x) * 3;
        int b = iData[index];
        int g = iData[index+1];
        int r = iData[index+2];
        //System.out.format("R: %s\nG: %s\nB: %s\n\n", r, g, b);

        // merge rgb values to single int
        int rgb = ((r&0x0ff)<<16) | ((g&0x0ff)<<8) | (b&0x0ff);

        // build image from bottom up
        this.image.setRGB(x, this.height-1-y, rgb);
    }
}
I'm reading RGB values from a bitmap. My iData byte array is correct, as I've checked it against a hex editor. However, when I run this loop, my output image comes out warped (see picture). I've been racking my brain for hours trying to fix this; why is it happening?
The input image is a Canadian flag.
(Output image not reproduced here.)
I wasn't accounting for the zero-byte padding on the width, for which the formula is
WidthWithPadding = ceiling((w*colorDepth)/32)*32.
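In code, that padded row size works out as follows (a rough sketch; the helper name is just illustrative):
// Bytes per row for a BMP: row bits rounded up to a multiple of 32, then converted to bytes.
static int rowSizeBytes(int width, int bitsPerPixel) {
    return ((width * bitsPerPixel + 31) / 32) * 4;
}
// e.g. 24-bit pixels, width 7: 7 * 24 = 168 bits -> padded to 192 bits = 24 bytes
// (21 bytes of pixel data plus 3 bytes of zero padding per row).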
Images usually have width, depth (bits per pixel), and stride. Usually the stride is not just width*depth, but padded by some amount (often used to align each row to a 16-bit boundary for memory access reasons).
Your code does not appear to be accounting for the stride or any padding, which explains the offset as you go through the image. I'm surprised the colors don't get switched (suggesting the padding is the same size as a pixel), but those undefined values are causing the black stripe through the image.
Rather than computing your index as (this.width*y + x) * 3, you should use this.stride*y + x*3, where stride is the padded row size in bytes. The BufferedImage class doesn't provide that value in any obvious fashion, so you need to calculate or fetch it some other way.
The general issue that's happening is that your code has conflated stride (distance between rows in bytes) with width (number of bytes in a row); in aligned data formats these are not necessarily the same thing. The general result of this is getting the sort of skew you're seeing, as the second row starts out on the last byte(s) of the first row, the third row starts out on the second-to-last byte(s) of the second row, and so on.
The first thing you need to do is calculate the stride of the data, and use that as the multiplication factor for the row. Here is a simplified loop that also is somewhat more efficient than multiplying on a per-pixel basis:
int stride = (this.width * 3 + 3) & ~3;  // bytes per row: 3 bytes per pixel, rounded up to a multiple of 4
for (int y = 0; y < this.height; y++) {
    int idx = y * stride;
    for (int x = 0; x < this.width; x++) {
        int b = iData[idx++] & 0xFF;
        int g = iData[idx++] & 0xFF;
        int r = iData[idx++] & 0xFF;
        // etc.
    }
}
Note that the rounding trick above ((w + a - 1) & ~(a - 1)) only works when the alignment a is a power of two; a more general form of the rounding trick is:
int stride = (w + a - 1) / a * a;
although it's very uncommon to find alignments that aren't a power of 2 in the first place.
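Putting the stride together with the loop from the original question, a rough, untested sketch of the corrected read (using the question's field names) would be:
// Untested sketch: the original bottom-up read, but stepping rows by the padded byte stride.
int stride = (this.width * 3 + 3) & ~3;          // bytes per row, padded to a multiple of 4
for (int y = this.height - 1; y >= 0; y--) {
    int index = stride * y;                      // start of this row in iData
    for (int x = 0; x < this.width; x++) {
        int b = iData[index++] & 0xFF;
        int g = iData[index++] & 0xFF;
        int r = iData[index++] & 0xFF;
        int rgb = (r << 16) | (g << 8) | b;
        this.image.setRGB(x, this.height - 1 - y, rgb);
    }
}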

reading black/white image in java with TYPE_USHORT_GRAY

I have the following code to read a black-and-white picture in Java.
image = ImageIO.read(new File(path));
BufferedImage bufferedImage = new BufferedImage(image.getWidth(null), image.getHeight(null), BufferedImage.TYPE_USHORT_GRAY);
Graphics g = bufferedImage.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();

int w = img.getWidth();
int h = img.getHeight();
int[][] array = new int[w][h];

for (int j = 0; j < w; j++) {
    for (int k = 0; k < h; k++) {
        array[j][k] = img.getRGB(j, k);
        System.out.print(array[j][k]);
    }
}
As you can see, I have set the type of the BufferedImage to TYPE_USHORT_GRAY, and I expect to see numbers between 0 and 255 in the 2D array, but instead I see -1 and other large integers. Can anyone highlight my mistake, please?
As already mentioned in comments and answers, the mistake is using the getRGB() method, which converts your pixel values to packed int format in the default sRGB color space (TYPE_INT_ARGB). In this format, -1 is the same as 0xffffffff, which means pure white.
To access your unsigned short pixel data directly, try:
int w = img.getWidth();
int h = img.getHeight();

DataBufferUShort buffer = (DataBufferUShort) img.getRaster().getDataBuffer(); // Safe cast as img is of type TYPE_USHORT_GRAY

// Conveniently, the buffer already contains the data array
short[] arrayUShort = buffer.getData();

// Access it like:
int grayPixel = arrayUShort[x + y * w] & 0xffff;

// ...or alternatively, if you like to re-arrange the data to a 2-dimensional array:
int[][] array = new int[w][h];
// Note: I switched the loop order to access pixels in more natural order
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        array[x][y] = buffer.getElem(x + y * w);
        System.out.print(array[x][y]);
    }
}

// Access it like:
grayPixel = array[x][y];
PS: It's probably still a good idea to look at the second link provided by @blackSmith, for proper color to gray conversion. ;-)
A BufferedImage of type TYPE_USHORT_GRAY as its name says stores pixels using 16 bits (size of short is 16 bits). The range 0..255 is only 8 bits, so the colors may be well beyond 255.
And BufferedImage.getRGB() does not return these 16 pixel data bits but quoting from its javadoc:
Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB) and default sRGB colorspace.
getRGB() will always return the pixel in RGB format regardless of the type of the BufferedImage.
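If you prefer not to cast the DataBuffer, Raster.getSample also gives you the raw band value directly; a minimal, untested sketch:
// Untested sketch; needs java.awt.image.Raster. For TYPE_USHORT_GRAY, getSample
// returns the raw gray value in the range 0..65535 (16 bits), not 0..255.
Raster raster = img.getRaster();
int w = img.getWidth();
int h = img.getHeight();
int[][] gray = new int[w][h];
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        gray[x][y] = raster.getSample(x, y, 0);   // band 0 is the single gray band
    }
}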

Image analysis function to calculate middle gray level (max(z)+min(z))/2 in Java

How do I calculate the middle gray level (max(z)+min(z))/2 over the points where the structuring element is 1, and set the output pixel to that value?
I only know a little about how to get the RGB value of each pixel by using image.getRGB(x, y). I have no idea how to get the gray level value of each pixel of the image, or what z is in the formula.
Please help me with this. Thanks in advance.
I'm going to assume that z are the pixels within your structuring element. I'm also going to assume that "structuring element" is in the case of morphology. Here are a few pointers before we start:
You can convert a colour pixel to its graylevel intensity by using the Luminance formula. By consulting the SMPTE Rec. 709 standard, the output graylevel intensity, given the RGB components is: Y = 0.2126*R + 0.7152*G + 0.0722*B.
We're going to assume that the structuring element is odd. This will allow for the symmetric analysis of the structuring element for each pixel in your image where it is placed
I'm going to assume that your image is already loaded in as a BufferedImage.
Your structuring element will be a 2D array of int.
I'm not going to process those pixels where the structuring element traverses out of bounds to make things easy.
As such, the basic algorithm is this:
For each pixel in our image, place the centre of the structuring element at this location
For each pixel location where the structuring element is 1 that coincides with this position, find the max and minimum graylevel intensity
Set the output image pixel at this location to be (max(z) + min(z)) / 2.
Without further ado:
public BufferedImage calculateMiddleGray(BufferedImage img, int[][] mask)
{
    // Declare output image
    BufferedImage outImg = new BufferedImage(img.getWidth(),
            img.getHeight(), BufferedImage.TYPE_INT_RGB);

    // For each pixel in our image...
    for (int i = mask.length/2; i < img.getWidth() - mask.length/2; i++) {
        for (int j = mask[0].length/2; j < img.getHeight() - mask[0].length/2; j++) {
            int maxPix = -1;
            int minPix = 256;

            // For each pixel in the mask...
            for (int x = -mask.length/2; x <= mask.length/2; x++) {
                for (int y = -mask[0].length/2; y <= mask[0].length/2; y++) {

                    // Obtain structuring element pixel
                    int structPix = mask[y+mask.length/2][x+mask[0].length/2];

                    // If not 1, continue
                    if (structPix != 1)
                        continue;

                    // Get RGB pixel
                    int rgb = img.getRGB(i+x, j+y);

                    // Get red, green and blue channels individually
                    int redPixel = (rgb >> 16) & 0xFF;
                    int greenPixel = (rgb >> 8) & 0xFF;
                    int bluePixel = rgb & 0xFF;

                    // Convert to grayscale
                    // Performs SMPTE Rec. 709 lum. conversion using integer logic
                    int lum = (54*redPixel + 183*greenPixel + 19*bluePixel) >> 8;

                    // Find max and min appropriately
                    if (lum > maxPix)
                        maxPix = lum;
                    if (lum < minPix)
                        minPix = lum;
                }
            }

            // Set output pixel
            // Grayscale image has all of its RGB pixels equal
            int outPixel = (maxPix + minPix) / 2;

            // Cap output - ensure we don't go out of bounds
            if (outPixel > 255)
                outPixel = 255;
            if (outPixel < 0)
                outPixel = 0;

            int finalOut = (outPixel << 16) | (outPixel << 8) | outPixel;
            outImg.setRGB(i, j, finalOut);
        }
    }
    return outImg;
}
To call this method, create an image img using any standard method and a structuring element mask as a 2D integer array. Then place this method in your class and invoke it with:
BufferedImage outImg = calculateMiddleGray(img, mask);
Also (and of course), make sure you import the necessary package for the BufferedImage class, or:
import java.awt.image.BufferedImage;
Note: This is untested code. Hope it works!
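For example, a rough, untested driver sketch (the file names are placeholders):
// Untested sketch; needs javax.imageio.ImageIO, java.io.File and java.awt.image.BufferedImage.
BufferedImage img = ImageIO.read(new File("input.png"));     // placeholder file name
int[][] mask = {
    {1, 1, 1},
    {1, 1, 1},
    {1, 1, 1}
};                                                            // 3x3 all-ones structuring element
BufferedImage outImg = calculateMiddleGray(img, mask);
ImageIO.write(outImg, "png", new File("output.png"));        // placeholder file name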
