I've been working on a Java program that reads an image, subdivides it into a definable number of rectangular tiles, swaps the pixels of each tile with those of another tile, and then reassembles and renders the image.
An explanation of the idea: http://i.imgur.com/OPefpjf.png
I've been using the BufferedImage class, so my idea was to first read all width * height pixels from its data buffer and save them to an array.
Then, according to the tile height and width, copy the entire pixel information of each tile to small arrays, shuffle those, and then write back the data contained in these arrays to their position in the data buffer. It should then be enough to create a new BufferedImage with the original color and sample models as well as the updated data buffer.
However, I got errors when creating a new WritableRaster from the updated data buffer, and the number of pixels didn't match up (I suddenly had 24 instead of the original 8, and so forth), so I figure there is something wrong with the way I address the pixel information.
(Reference pages: BufferedImage and WritableRaster)
I used the following loop to iterate through the 1D data buffer:
// maximum iteration values
int numRows = height / tileHeight;
int numCols = width / tileWidth;
// cut picture into tiles
// for each column of the image matrix
// addressing columns (1D)
for (int column = 0; column < numCols; column++)
{
    // for each row of the matrix
    // addressing cells (2D)
    for (int row = 0; row < numRows; row++)
    {
        byte[] pixels = new byte[(tileWidth+1) * (tileHeight+1)];
        int celloffset = (column + (width * row)); // find cell base address
        // for each row inside the cell
        // addressing columns inside a tile (3D)
        for (int colpixel = 0; colpixel < tileWidth; colpixel++)
        {
            // for each column inside the tile -> each pixel of the cell
            for (int rowpixel = 0; rowpixel < tileHeight; rowpixel++)
            {
                // address of pixel in original image buffer array allPixels[]
                int origpos = celloffset + ((rowpixel * tileWidth) + colpixel);
                // translated address of pixel in local pixels[] array of current tile
                int transpos = colpixel + (rowpixel * tileWidth);
                // source, start, dest, offset, length
                pixels[transpos] = allPixels[origpos];
            }
        }
    }
}
Is there something wrong with this code? Or is there perhaps a much easier way to do this that I haven't thought of yet?
The code below edits the image in place, so there's no need to create new objects, which should simplify things. If you need to keep the original, just copy it entirely first. There's also no need to copy the pixels out to separate arrays.
Since you said "shuffle", I assume you want to swap the tiles randomly. I made a function for that; if you call it many times, you will end up with the tiles swapped randomly. If you want a pattern or some other rule for how they are swapped, just call the other function directly with your chosen tiles.
I haven't used BufferedImage before, but looking at the documentation, http://docs.oracle.com/javase/7/docs/api/java/awt/image/BufferedImage.html, and this post, Edit pixel values, it seems that an easy way is to use the getRGB and setRGB methods:
int getRGB(int x, int y)
Returns an integer pixel in the default RGB color model
(TYPE_INT_ARGB) and default sRGB colorspace.
void setRGB(int x, int y, int rgb)
Sets a pixel in this BufferedImage to the specified RGB value.
I would try something like the following (untested code), using java.util.Random (http://docs.oracle.com/javase/7/docs/api/java/util/Random.html):
Random random = new Random();
int numRows = height / tileHeight;
int numCols = width / tileWidth;

void swapTwoRandomTiles(BufferedImage b) {
    // choose the column/row indices of the two tiles randomly
    int xt1 = random.nextInt(numCols);
    int yt1 = random.nextInt(numRows);
    int xt2 = random.nextInt(numCols);
    int yt2 = random.nextInt(numRows);
    swapTiles(b, xt1, yt1, xt2, yt2);
}

void swapTiles(BufferedImage b, int xt1, int yt1, int xt2, int yt2) {
    int tempPixel = 0;
    for (int x = 0; x < tileWidth; x++) {
        for (int y = 0; y < tileHeight; y++) {
            // save the pixel value to temp
            tempPixel = b.getRGB(x + xt1*tileWidth, y + yt1*tileHeight);
            // overwrite the pixel we just saved with the data from the other tile
            b.setRGB(x + xt1*tileWidth, y + yt1*tileHeight, b.getRGB(x + xt2*tileWidth, y + yt2*tileHeight));
            // write from temp back to the other tile
            b.setRGB(x + xt2*tileWidth, y + yt2*tileHeight, tempPixel);
        }
    }
}
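For example, to shuffle the whole image you could just call that swap many times. A rough, untested sketch (it assumes the numRows/numCols fields and tile sizes are set up as above):

// Rough sketch: shuffle the image by performing many random tile swaps.
void shuffleTiles(BufferedImage b) {
    int numTiles = numRows * numCols;
    for (int i = 0; i < numTiles * 2; i++) {
        swapTwoRandomTiles(b);
    }
}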
// load pixels into an image
this.image = new BufferedImage(this.width,
this.height,
BufferedImage.TYPE_INT_RGB);
// get actual image data for easier pixel loading
byte[] iData = new byte[this.size - 54];
for(int i = 0; i < this.size - 54; i++) {
iData[i] = this.data[i+54];
}
// start from bottom row
for(int y = this.height-1; y >= 0; y--) {
for(int x = 0; x < this.width; x++) {
int index = (this.width*y + x) * 3;
int b = iData[index];
int g = iData[index+1];
int r = iData[index+2];
//System.out.format("R: %s\nG: %s\nB: %s\n\n", r, g, b);
// merge rgb values to single int
int rgb = ((r&0x0ff)<<16)|((g&0x0ff)<<8)|(b&0x0ff);
// build image from bottom up
this.image.setRGB(x, this.height-1-y, rgb);
}
}
I'm reading RGB values from a bitmap. My iData byte array is correct, as I've checked it against a hex editor. However, when I run this loop, my output image comes out warped (see picture). I've been racking my brain for hours trying to fix this; why is it happening?
The input image is a Canadian flag; the output image comes out skewed, with a stripe running through it.
I wasn't accounting for the zero-byte padding on each row, for which the formula (in bits) is
WidthWithPadding = ceiling((width*colorDepth)/32)*32.
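For a hypothetical 399-pixel-wide, 24-bit image, for example, that gives ceiling((399*24)/32)*32 = ceiling(9576/32)*32 = 300*32 = 9600 bits, i.e. 1200 bytes per row, whereas my loop assumed 399*3 = 1197 bytes per row.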
Images usually have a width, a depth (bits per pixel), and a stride. The stride is usually not just width*depth but is padded by some amount (often to align each row to a 32-bit boundary for memory-access reasons).
Your code does not appear to be accounting for the stride or any padding, which explains the offset as you go through the image. I'm surprised the colors don't get switched (suggesting the padding is the same size as a pixel), but those undefined values are causing the black stripe through the image.
Rather than computing your index as (this.width*y + x) * 3, you should use this.stride*y + x*3, where stride is the padded row size in bytes. The BufferedImage class doesn't seem to provide that in any obvious fashion, so you need to calculate or fetch it some other way.
The general issue that's happening is that your code has conflated stride (distance between rows in bytes) with width (number of bytes in a row); in aligned data formats these are not necessarily the same thing. The general result of this is getting the sort of skew you're seeing, as the second row starts out on the last byte(s) of the first row, the third row starts out on the second-to-last byte(s) of the second row, and so on.
The first thing you need to do is calculate the stride of the data and use that as the multiplication factor for the row. Here is a simplified loop that is also somewhat more efficient than multiplying on a per-pixel basis:
int stride = (this.width * 3 + 3) & ~3; // bytes per row, rounded up to the nearest multiple of 4
for (int y = 0; y < this.height; y++) {
    int idx = y * stride;
    for (int x = 0; x < this.width; x++) {
        int b = iData[idx++] & 0xFF;
        int g = iData[idx++] & 0xFF;
        int r = iData[idx++] & 0xFF;
        // etc.
    }
}
Note that the rounding trick above ((w + a - 1) & ~(a - 1)) only works when the alignment a is a power of two; a more general form of the rounding trick is:
int stride = (rowBytes + alignment - 1) / alignment * alignment;
although it's very uncommon to find alignments that aren't a power of 2 in the first place.
We are trying to extract only a portion of a captured image. In Java, however, we can only get a rectangular subimage, using image.getSubimage(x, y, width, height). Say I virtually split the image into 10 parts as shown below. How can I extract only parts 1, 2, 4, 6, 8, 9 and 10, as shown in the second image, using plain Java without consuming too many resources or too much time?
Update
Below is the sample code
for (int x = 0; x < columns; x++) {
    for (int y = 0; y < rows; y++) {
        imagePart = img.getSubimage(x * this.smallWidth, y * this.smallHeight,
                this.smallWidth, this.smallHeight);
        if (!ifSelectedPart(imagePart)) {
            smallImages[x][y] = imagePart;
        } else {
            smallImages[x][y] = fillwithAlpha();
        }
    }
}
createImage(smallImages);
If these rectangles are all the same size, you can treat the image as a grid and calculate the region you need with a little math.
int numberColumns = 2;
int numberRows = 5;

public Rectangle getSubregion(int row, int column, int imgWidth, int imgHeight) {
    int cellWidth = imgWidth / numberColumns;
    int cellHeight = imgHeight / numberRows;
    return new Rectangle(column * cellWidth, row * cellHeight, cellWidth, cellHeight);
}

// usage
Rectangle cellOne = getSubregion(0, 0, img.getWidth(), img.getHeight());
Then just render each of those subregions to a new image in memory.
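A rough, untested sketch of what that could look like (my own example; it assumes the cells you want to keep are given as row/column pairs and reuses getSubregion from above):

// Untested sketch: copy only the selected grid cells into a new, initially transparent image.
BufferedImage composeSelectedCells(BufferedImage img, java.util.List<Point> selectedCells) {
    BufferedImage out = new BufferedImage(img.getWidth(), img.getHeight(),
            BufferedImage.TYPE_INT_ARGB); // starts fully transparent
    Graphics2D g = out.createGraphics();
    for (Point cell : selectedCells) {    // cell.x = column, cell.y = row
        Rectangle r = getSubregion(cell.y, cell.x, img.getWidth(), img.getHeight());
        g.drawImage(img.getSubimage(r.x, r.y, r.width, r.height), r.x, r.y, null);
    }
    g.dispose();
    return out;
}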
Images are by their nature rectangular. Perhaps you could draw over the image with a fully transparent (0-alpha) composite to clear the regions you don't want to see, as sketched below. Either that, or create a grid of rectangular sub-images and keep only the ones from the grid that you want to display.
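A minimal sketch of that idea, assuming the image is TYPE_INT_ARGB (my own untested example, not from the original answer):

// Untested sketch: erase one grid cell by clearing it to 0 alpha.
Graphics2D g = argbImage.createGraphics();
g.setComposite(AlphaComposite.Clear);           // everything drawn now clears pixels
Rectangle r = getSubregion(2, 1, argbImage.getWidth(), argbImage.getHeight());
g.fillRect(r.x, r.y, r.width, r.height);        // this cell becomes fully transparent
g.dispose();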
I am at my wits' end with this problem.
Please note I am a really novice coder (although I'm sure you will see by my code).
The basics:
I have an image that I would like to count the number of objects in. Objects in this instance are just joined-up pixels (the image has been thresholded and has undergone binarization and binary erosion to get to this stage; that code is not included).
My problem:
I am trying to write some code to count how many objects are left in this image. Within that method I call another method that is meant to remove an object once it has been counted, by searching for the neighbouring pixels it is attached to. However, my current implementation of this removal method throws a "coordinates out of bounds" error, and I'm asking for help solving this issue.
Code for overall object counting:
/**
 * countObjects in image
 *
 * @param image    binary image to count objects in
 * @param original original image to put labels on
 *
 * @return labelled original image for graphics overlay
 */
public static BufferedImage countObjects(BufferedImage image, BufferedImage original) {
    BufferedImage target = copyImage(image);
    int rgbBand = 0;
    boolean finished = false;
    Graphics labelColour = original.getGraphics();
    labelColour.setColor(Color.RED);
    while (!finished) {
        finished = false;
        for (int i = 0; i <= target.getRaster().getWidth() - 1; i++) {
            for (int j = 0; j < target.getRaster().getHeight() - 1; j++) {
                int clrz = target.getRaster().getSample(i, j, rgbBand);
                if (clrz == 1) {
                    System.out.println(clrz);
                    removeObject(i, j, target);
                    labelColour.drawString("" + count, i, j);
                    finished = true;
                }
            }
        }
    }
    return original;
}
Code for object removal:
/**
 * @param x
 * @param y
 * @param newImage
 */
private static void removeObject(int x, int y, BufferedImage newImage) {
    int rgbBand = 0;
    int[] zero = new int[] { 0 };
    newImage.getRaster().setPixel(x, y, zero);
    for (int a = Math.max(0, x - 1); a <= Math.min(x + 1, newImage.getRaster().getWidth()); a++) {
        for (int b = Math.max(0, y - 1); b <= Math.min(y + 1, newImage.getRaster().getHeight()); b++) {
            int na = a;
            int nb = b;
            if (newImage.getRaster().getSample(na, nb, rgbBand) == 1) {
                removeObject(na, nb, newImage);
            }
        }
    }
}
In the above removeObject method, I am trying to use a recursive technique to remove pixel coordinates from the image being counted once either they or neighbouring pixels have been labelled.
If any of this is unclear (and I know there are probably more than a few confusing parts in my code), please ask and I will explain further.
Thanks for any help.
I don't have enough reputation to comment, hence writing my comment as an answer.
Are you sure you didn't mix up the x and y coordinates? I faced a similar problem some time ago, but in my case I had mixed up the height and width of the image.
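For illustration (my own untested sketch, not part of the original answer): when iterating over a Raster, x has to stay below getWidth() and y below getHeight(), and getSample takes them in (x, y) order:

// Untested sketch: the bounds and argument order to check when walking a Raster.
Raster raster = image.getRaster();
for (int x = 0; x < raster.getWidth(); x++) {       // x is the column index
    for (int y = 0; y < raster.getHeight(); y++) {  // y is the row index
        int value = raster.getSample(x, y, 0);       // getSample(x, y, band)
        // ...
    }
}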
I have this working code which reads in a 700x700 RGB24 TIF file and places it into display memory. The line which assigns the pixelARGB value appears to be extremely inefficient; this code takes 3-4 seconds to redraw the screen. Is there a way I can avoid the shifting and ORing and just place the byte values into the correct positions within the 32-bit word?
In other languages I have done this with "overlaid variables" or "variant records" or the like, but I cannot find anything similar in Java. Thank you.
for (y = 0; y < 700; y++) {         // for each line
    i = 0;
    for (x = 0; x < 700; x++) {     // for each dot
        red = lineBuf[i++] & 0xFF;
        green = lineBuf[i++] & 0xFF;
        blue = lineBuf[i++] & 0xFF;
        pixelARGB = 0xFF000000 | (red << 16) | (green << 8) | blue;
        this_g.setPixel(x + BORDER, y + BORDER, pixelARGB);
    }
    size = is.read(lineBuf, 0, 2100);
}
There is at least one way to convert your TIFF image data buffer into a Bitmap more efficiently, and there is an optimization that can possibly be made.
1. Use an int[] array instead of pixel copies:
You still have to calculate each pixel individually, but set them in an int[] array.
It is the setPixel() function that is taking all your time.
Example:
final int w = 700;
final int h = 700;
final int n = w * h;
final int [] buf = new int[n];
for (int y = 0; y < h; y++) {
final int yw = y * w;
for (int x = 0; x < w; x++) {
int i = yw + x;
// Calculate 'pixelARGB' here.
buf[i] = pixelARGB;
}
}
Bitmap result = Bitmap.createBitmap(buf, w, h, Bitmap.Config.ARGB_8888);
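(If the target is a java.awt.image.BufferedImage rather than an Android Bitmap, the equivalent bulk call, as far as I know, is setRGB over the whole array:)

// Assumed AWT alternative: write the whole int[] in one call instead of per-pixel sets.
BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
out.setRGB(0, 0, w, h, buf, 0, w); // offset 0, scansize = w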
2. Resize within your Loop:
This is not very likely, but if your destination ImageView for the result image is known to be smaller than the source image (700x700 in your question), you can resize within your for loop for a very large performance increase.
What you have to do is loop through your destination image pixels, calculate the pixel x, y values you need from your source image, calculate the pixelARGB value for only those pixels, populate a smaller int[] array, and finally generate a smaller Bitmap. Much. Faster.
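A rough, untested sketch of that idea using nearest-neighbour sampling (the destination size dstW x dstH and the computeARGB helper are my own assumptions, not from the original code):

// Untested sketch: nearest-neighbour downscale while converting to ARGB.
final int dstW = 350, dstH = 350;           // assumed destination size
final int[] small = new int[dstW * dstH];
for (int dy = 0; dy < dstH; dy++) {
    final int sy = dy * h / dstH;           // nearest source row
    for (int dx = 0; dx < dstW; dx++) {
        final int sx = dx * w / dstW;       // nearest source column
        // compute pixelARGB only for the source pixels that are actually needed
        small[dy * dstW + dx] = computeARGB(sx, sy); // hypothetical helper
    }
}
Bitmap result = Bitmap.createBitmap(small, dstW, dstH, Bitmap.Config.ARGB_8888);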
You can even enhance the resize quality with a homebrew cubic interpolation of the four nearest source pixels for each destination pixel, but I think you will find this unnecessary for display purposes.
I'm implementing a diagram that shows the fill level of a container. Depending on the fill level, the colour of the line should change (for instance, close to the maximum it should turn red). Rather than calculating different segments of the line and setting their colours manually, I'd like to define a band in which the colour changes automatically. I thought I could do this with a custom Composite/CompositeContext, but I can't seem to work out the locations of the pixels returned by the raster. My idea is to check their y-values and change the colour if a colour value is defined in the source and the y-value exceeds a threshold value.
My CompositeContext looks like this:
CompositeContext context = new CompositeContext() {
    @Override
    public void compose(Raster src, Raster dstIn, WritableRaster dstOut) {
        int width = Math.min(src.getWidth(), dstIn.getWidth());
        int height = Math.min(src.getHeight(), dstIn.getHeight());
        int[] dstPixels = new int[width];
        for (int y = 0; y < height; y++) {
            dstIn.getDataElements(0, y, width, 1, dstPixels);
            for (int x = 0; x < width; x++) {
                if ( y ??? > 50) {
                    dstPixels[x] = 1;
                } else {
                    // copy pixels from src
                }
            }
            dstOut.setDataElements(0, y, width, 1, dstPixels);
        }
    }
};
"y" seems to be related to something, but it does not contain the absolute y-Value (in fact the compose method is called several times with 32x32 rasters). Maybe someone knows how to retrieve the position on the component or even a better way to define an area in which a given pixel value is replaced by another value.
Can't you just fill with a gradient with 0 alpha and then draw the line with full alpha?