I have an int array where each value stores a bit-packed RGB value (8 bits per channel); alpha is always 255 (opaque), and I want to display that in JavaFX.
My current approach is using a canvas like this:
GraphicsContext graphics = canvas.getGraphicsContext2D();
PixelWriter pw = graphics.getPixelWriter();
pw.setPixels(0, 0, width, height, PixelFormat.getIntArgbInstance(), pixels, 0, width);
However, before that I first have to set the alpha component of each pixel by iterating over the array and OR'ing each value with a mask that turns RGB into ARGB:
for (int i = 0; i < pixels.length; i++) {
    pixels[i] = 0xFF000000 | pixels[i];
}
Is there a more efficient way to do this (the pixels array is updated many times every second)?
I was hoping there was an IntRgbInstance, but unfortunately there isn't (only a ByteRgbInstance).
Other approaches I've tested:
Approach 1: Creating an IntBuffer that is filled like this:
IntBuffer buffer = IntBuffer.allocate(pixels.length); // capacity is counted in ints, not bytes
for (int pixel : pixels) {
    buffer.put(0xFF000000 | pixel);
}
and then generating a PixelBuffer that wraps this buffer; the PixelBuffer is then passed to this WritableImage constructor: https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/WritableImage.html#%3Cinit%3E(javafx.scene.image.PixelBuffer)
I then display that WritableImage using an ImageView.
This, however, didn't speed anything up (it actually made things a bit slower), and I'm guessing that's because I have to construct a new WritableImage instance each time the pixels int array is updated.
Approach 2 (which didn't work for some reason, i.e. it displayed nothing on the screen; perhaps because the buffer's position is left at the end after filling and is never rewound): creating a buffer the same way as above and passing it to one of the setPixels() overloads that takes a buffer:
IntBuffer buffer = IntBuffer.allocate(pixels.length); // capacity is counted in ints, not bytes
for (int pixel : pixels) {
    buffer.put(0xFF000000 | pixel);
}
pw.setPixels(0, 0, width, height, PixelFormat.getIntArgbInstance(), buffer, width);
After a bit more research I found out that I don't need to create a new WritableImage instance each time the pixels array is updated; I can just use the updateBuffer method: https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/PixelBuffer.html#updateBuffer(javafx.util.Callback)
So the code currently looks like this:
pb.updateBuffer(pixelBuffer -> {
    buffer.clear(); // reset the position to 0 before refilling
    for (int pixel : pixels) {
        buffer.put(0xFF000000 | pixel);
    }
    return null; // null marks the entire buffer as dirty
});
where pb and buffer are created only once, like this:
IntBuffer buffer = IntBuffer.allocate(pixels.length); // one int per pixel
PixelBuffer<IntBuffer> pb = new PixelBuffer<>(width, height, buffer, PixelFormat.getIntArgbPreInstance());
view.setImage(new WritableImage(pb));
This did indeed result in a nice speedup (close to 2x compared to my initial approach).
Maybe this https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/WritableImage.html#%3Cinit%3E(javafx.scene.image.PixelBuffer) is what you are looking for. You could create a PixelBuffer from an IntBuffer of your data.
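And if whatever produces the frames can write straight into the buffer's backing array (with the alpha bits already OR'ed in), the per-frame copy loop disappears entirely. A minimal, untested sketch of that idea, reusing the names from the question above:
IntBuffer buffer = IntBuffer.allocate(width * height);
int[] pixels = buffer.array(); // render ARGB values (alpha already set) directly into this array
PixelBuffer<IntBuffer> pb = new PixelBuffer<>(width, height, buffer, PixelFormat.getIntArgbPreInstance());
view.setImage(new WritableImage(pb));
// after each update of pixels, on the JavaFX Application Thread:
pb.updateBuffer(p -> null); // nothing to copy; null marks the whole buffer dirty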
I know that using the LSB means you can store messages at around 12% of the image carrier size.
I made a Java program that splits a message into n fragments and fills the image carrier with these fragments until the 12% is fully occupied. I do this so that the message won't be lost if the image is cropped.
The problem is that the resulting image is distorted and different from the original. I thought that if I fill only 12% of the image, more precisely the LSBs, the image wouldn't get distorted.
int numHides = imLen / (totalLen * DATA_SIZE); // the number of messages I can store in the image
int offset = 0;
for (int h = 0; h < numHides; h++) { // hide all fragments, numHides times
    for (int i = 0; i < NUM_FRAGS; i++) { // NUM_FRAGS = the number of fragments
        hideStegoFrag(imBytes, stegoFrags[i], offset); // hides the fragment in the picture starting at offset
        offset += stegoFrags[i].length * DATA_SIZE;
    }
}
private static boolean hideStegoFrag(byte[] imBytes, byte[] stego, int off) {
    int offset = off;
    for (int i = 0; i < stego.length; i++) { // loop through the stego bytes
        int byteVal = stego[i];
        for (int j = 7; j >= 0; j--) { // loop through the 8 bits of each stego byte
            int bitVal = (byteVal >>> j) & 1;
            // change the last bit of the image byte to the stego bit
            imBytes[offset] = (byte) ((imBytes[offset] & 0xFE) | bitVal);
            offset++;
        }
    }
    return true;
}
The code for getting the bytes of the BufferedImage:
private static byte[] accessBytes(BufferedImage image)
{
WritableRaster raster = image.getRaster();
DataBufferByte buffer = (DataBufferByte) raster.getDataBuffer();
return buffer.getData();
}
The code that writes the image to a file, given a file name and the BufferedImage of the source image:
public static boolean writeImageToFile(String imFnm, BufferedImage im) {
    try {
        ImageIO.write(im, "png", new File(imFnm));
    } catch (IOException ex) {
        Logger.getLogger(MultiSteg.class.getName()).log(Level.SEVERE, null, ex);
        return false; // report the failure instead of always returning true
    }
    return true;
}
The output image you have posted is a 16-color paletted image.
The data I am seeing shows that you have actually applied your changes to the palette indices, not to the colors of the image. The reason you are seeing the distortion is the way the palette is organized: you aren't modifying the LSB of the color, you're modifying the LSB of the index, which can change it to a completely different (and, as you can see, very noticeable) color. (Actually, you're modifying the LSB of every other index; the 16-color form is 4 bits per pixel, 2 pixels per byte.)
It looks like you loaded raw image data and didn't decode it into RGB color information. Your algorithm will only work on raw RGB (or raw grayscale) data: 3 bytes (or 1 for grayscale) per pixel. You need to convert your image to RGB888 or something similar before you operate on it. When you save it, you also need to save it in a lossless, full-color format (unless you actually can fit all your colors in a palette), otherwise you risk losing your information.
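A quick way to check whether this has happened (the file name is just a placeholder) is to look at the ColorModel of the loaded image; an IndexColorModel means the raster holds palette indices rather than colors:
BufferedImage img = ImageIO.read(new File("carrier.png"));
if (img.getColorModel() instanceof IndexColorModel) {
    System.out.println("Paletted image: " + img.getColorModel().getPixelSize() + " bits per pixel");
}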
Your problem actually doesn't lie in the steganography portion of your program, but in the loading and saving of the image data itself.
When you load the image data, you need to convert it to an RGB format. The most convenient format for your application will be BufferedImage.TYPE_3BYTE_BGR, which stores each pixel as three bytes in blue, green, red order (so your byte array will be B,G,R,B,G,R,B,G,R,...). You can do that like so:
public static BufferedImage loadRgbImage(String filename) throws IOException {
    // load the original image (ImageIO.read takes a File, not a String)
    BufferedImage originalImage = ImageIO.read(new File(filename));
    // create a buffer for the converted image in RGB format
    BufferedImage rgbImage = new BufferedImage(originalImage.getWidth(), originalImage.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
    // render the original image into the buffer, converting to the destination format
    rgbImage.getGraphics().drawImage(originalImage, 0, 0, null);
    return rgbImage;
}
If you are frequently working with source images that are already in BGR format anyways, you can make one easy optimization to not convert the image if it's already in the format you want:
public static BufferedImage loadRgbImage(String filename) throws IOException {
    BufferedImage originalImage = ImageIO.read(new File(filename));
    BufferedImage rgbImage;
    if (originalImage.getType() == BufferedImage.TYPE_3BYTE_BGR) {
        rgbImage = originalImage; // no need to convert, just return the original
    } else {
        rgbImage = new BufferedImage(originalImage.getWidth(), originalImage.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
        rgbImage.getGraphics().drawImage(originalImage, 0, 0, null);
    }
    return rgbImage;
}
You can then just use the converted image for all of your operations. Note that the byte array from the converted image will contain 3 * rgbImage.getWidth() * rgbImage.getHeight() bytes.
You shouldn't have to make any changes to your current image saving code; ImageIO will detect that the image is RGB and will write a 24-bit PNG.
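Putting the pieces together, a hypothetical end-to-end flow (the file names are placeholders) would be:
BufferedImage carrier = loadRgbImage("carrier.png"); // convert to TYPE_3BYTE_BGR first
byte[] imBytes = accessBytes(carrier); // exactly 3 bytes per pixel (B,G,R); writes to this array go straight into the image
// ... hide the fragments in imBytes as before ...
writeImageToFile("stego.png", carrier); // lossless PNG keeps the LSBs intact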
I would like to extract the alpha channel from a BufferedImage and paint it into a separate image in grayscale (like it is displayed in Photoshop).
Not tested but contains the main points.
public Image alpha2gray(BufferedImage src) {
    if (src.getType() != BufferedImage.TYPE_INT_ARGB)
        throw new RuntimeException("Wrong image type.");
    int w = src.getWidth();
    int h = src.getHeight();
    // getRGB returns one packed ARGB int per pixel
    // (Raster.getPixels would return one int per sample, i.e. four per pixel)
    int[] srcBuffer = src.getRGB(0, 0, w, h, null, 0, w);
    int[] dstBuffer = new int[w * h];
    for (int i = 0; i < w * h; i++) {
        int a = (srcBuffer[i] >> 24) & 0xff;
        // an opaque gray pixel whose brightness is the source alpha
        dstBuffer[i] = 0xFF000000 | a << 16 | a << 8 | a;
    }
    return Toolkit.getDefaultToolkit().createImage(new MemoryImageSource(w, h, dstBuffer, 0, w));
}
I don't believe there's a single method call to do this; you would have to get all the image data and mask off the alpha byte for each pixel. So for example, use getRGB() to get the ARGB pixels for the image. The most significant byte is the alpha value. So for each pixel in the array you get from getRGB(),
int alpha = (pixel >> 24) & 0xFF;
You could grab the Raster from the BufferedImage, and then create a child Raster of it which contains only the band you're interested in (the bandList parameter). From this child raster you can create a new BufferedImage with a suitable ColorModel which would contain only the grayscale alpha mask.
The benefit of doing it this way instead of manually iterating over the pixels is that the runtime has a chance to see what you are doing, so this might be accelerated by exploiting hardware capabilities. Honestly, I doubt it will be accelerated by current JVMs, but who knows what the future brings?
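Not tested either, but a minimal sketch of that idea: BufferedImage.getAlphaRaster() gives you essentially that child raster (just the alpha band; it returns null for images without alpha), and copying its samples into a TYPE_BYTE_GRAY image sidesteps assembling a custom ColorModel by hand:
public static BufferedImage alphaToGray(BufferedImage src) {
    int w = src.getWidth();
    int h = src.getHeight();
    // a child raster containing only the alpha band of the source
    Raster alphaRaster = src.getAlphaRaster();
    int[] alpha = alphaRaster.getSamples(0, 0, w, h, 0, (int[]) null);
    // copy the alpha samples into the single band of a grayscale image
    BufferedImage gray = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
    gray.getRaster().setSamples(0, 0, w, h, 0, alpha);
    return gray;
}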
To my understanding, the following code
int[] pixels = image.getRaster().getPixels(0, 0, width, height, (int[]) null);
should generate an array of exactly width × height entries, but in practice it seems to be much larger. Why?
The image may be a BufferedImage, Toolkit image, or VolatileImage.
It generates an array of size
new int[numBands * w * h] // numBands = the number of bands of the image data
where numBands comes from the SampleModel of your Raster. For example, a TYPE_INT_ARGB image has four bands, so you get 4 × width × height entries: one sample per band per pixel, not one per pixel.
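If you want one value per pixel instead, ask for a single band, or (on a BufferedImage) for packed ARGB ints; two hypothetical variants of the call above:
// one sample per pixel, from band 0 only:
int[] band0 = image.getRaster().getSamples(0, 0, width, height, 0, (int[]) null);
// or, on a BufferedImage, one packed ARGB int per pixel:
int[] argb = image.getRGB(0, 0, width, height, null, 0, width);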
I have some png files that I am applying a color to. The color changes depending on a user selection. I change the color via 3 RGB values set from another method. The png files are a random shape with full transparency outside the shape. I don't want to modify the transparency, only the RGB value. Currently, I'm setting the RGB values pixel by pixel (see code below).
I've come to realize this is incredibly slow and possibly just not efficient enough to do in an application. Is there a better way I could do this?
Here is what I am currently doing. You can see that the pixel array is enormous for an image that takes up a decent part of the screen:
public void foo(Component component, ComponentColor compColor, int userColor) {
    int h = component.getImages().getHeight();
    int w = component.getImages().getWidth();
    mBitmap[index] = component.getImages().createScaledBitmap(component.getImages(), w, h, true);
    int[] pixels = new int[h * w];
    // get all the pixels from the image
    mBitmap[index].getPixels(pixels, 0, w, 0, 0, w, h);
    // modify the pixel array to the color the user selected
    pixels = changeColor(compColor, pixels);
    // set the image to use the new pixel array
    mBitmap[index].setPixels(pixels, 0, w, 0, 0, w, h);
}
public int[] changeColor(ComponentColor compColor, int[] pixels) {
int red = compColor.getRed();
int green = compColor.getGreen();
int blue = compColor.getBlue();
int alpha;
for (int i=0; i < pixels.length; i++) {
alpha = Color.alpha(pixels[i]);
if (alpha != 0) {
pixels[i] = Color.argb(alpha, red, green, blue);
}
}
return pixels;
}
Have you looked at the functions available in Bitmap? Something like extractAlpha sounds like it might be useful. You can also look at the way functions like that are implemented in Android to see how you could adapt them to your particular case, if they don't exactly meet your needs.
The answer that worked for me was a write-up Square did here: Transparent JPEGs.
They provide a faster code snippet for doing this exact thing. I tried extractAlpha and it didn't work, but Square's solution did. Just modify their solution to change the color bits instead of the alpha bits,
i.e.
pixels[x] = (pixels[x] & 0xFF000000) | (color & 0x00FFFFFF);
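For reference, a minimal sketch of that adaptation for a mutable Bitmap (the method name is mine):
public static void tint(Bitmap bitmap, int color) {
    int w = bitmap.getWidth();
    int h = bitmap.getHeight();
    int[] pixels = new int[w * h];
    bitmap.getPixels(pixels, 0, w, 0, 0, w, h);
    for (int x = 0; x < pixels.length; x++) {
        // keep each pixel's alpha byte, replace its RGB bytes with the user color
        pixels[x] = (pixels[x] & 0xFF000000) | (color & 0x00FFFFFF);
    }
    bitmap.setPixels(pixels, 0, w, 0, 0, w, h);
}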
I know it's possible to convert an image to CS_GRAY using
public static BufferedImage getGrayBufferedImage(BufferedImage image) {
    BufferedImageOp op = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
    BufferedImage sourceImgGray = op.filter(image, null);
    return sourceImgGray;
}
However, this is a chokepoint of my entire program. I need to do this often, on 800x600 pixel images, and the operation takes about 200-300 ms on average. I know I could do this a lot faster with a single for loop that runs through the image data and sets it right away. The code above, on the other hand, constructs a brand-new 800x600 grayscale BufferedImage. I would rather just transform the image I pass in.
Does anyone know how to do this with a for loop, given that the image is in the RGB color space?
ColorConvertOp.filter takes two parameters. The second parameter is also a BufferedImage, which will be the destination. If you pass a suitable BufferedImage to the filter method, it saves you the hassle of creating a fresh BufferedImage every call.
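A minimal sketch, assuming the source size stays fixed so the destination can be allocated once and reused across calls (the names src and gray are placeholders):
BufferedImageOp op = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
BufferedImage gray = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
// writes the converted pixels into gray instead of allocating a new image on every call
op.filter(src, gray);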
private static int grayscale(int rgb) {
    int r = rgb >> 16 & 0xff;
    int g = rgb >> 8 & 0xff;
    int b = rgb & 0xff;
    int cmax = Math.max(Math.max(r, g), b); // gray value = the max of the RGB components
    return (rgb & 0xFF000000) | (cmax << 16) | (cmax << 8) | cmax;
}
public static BufferedImage grayscale(BufferedImage bi) {
BufferedImage bout = new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_INT_ARGB);
int[] rgbArray = new int[bi.getWidth() * bi.getHeight()];
rgbArray = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), rgbArray, 0, bi.getWidth());
for (int i = 0, q = rgbArray.length; i < q; i++) {
rgbArray[i] = grayscale(rgbArray[i]);
}
bout.setRGB(0, 0, bout.getWidth(), bout.getHeight(), rgbArray, 0, bout.getWidth());
return bout;
}
Whatever you're doing, you are likely doing something wrong: you shouldn't be regenerating a BufferedImage over and over again. Rather, figure out a scheme to simply update the existing BufferedImage, or take the pixels from the original and use as the grayscale value the max of the RGB components, as in the code above.
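For example, a minimal in-place variant of the code above, reusing the grayscale(int) helper, that overwrites the source image instead of allocating a new one each time:
public static void grayscaleInPlace(BufferedImage bi) {
    int w = bi.getWidth();
    int h = bi.getHeight();
    int[] rgbArray = bi.getRGB(0, 0, w, h, null, 0, w);
    for (int i = 0; i < rgbArray.length; i++) {
        rgbArray[i] = grayscale(rgbArray[i]); // max of R, G, B, as in the helper above
    }
    bi.setRGB(0, 0, w, h, rgbArray, 0, w);
}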