I want to read individual pixels from one image and "relocate" them to another image. Basically, I want to simulate grabbing pixels one by one from the original image and "moving" them to a blank canvas, turning the pixels I grab from the original image white.
This is what I have right now. I'm able to read the pixels from the image and create a copy of it (which comes out saturated for some reason).
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ImageTest
{
    public static void main(String args[]) throws IOException
    {
        // create buffered image object img
        File oldImgFile = new File("/path/to/image/shrek4life.jpg");
        BufferedImage oldImg = null;
        BufferedImage newImg = null;
        try {
            oldImg = ImageIO.read(oldImgFile);
        } catch (IOException e) {}
        newImg = new BufferedImage(oldImg.getWidth(), oldImg.getHeight(), BufferedImage.TYPE_INT_ARGB);
        File f = null;
        try
        {
            for (int i = 0; i < oldImg.getWidth(); i++) {
                for (int j = 0; j < oldImg.getHeight(); j++) {
                    // get the rgb color of the old image and store it in the new
                    Color c = new Color(oldImg.getRGB(i, j));
                    int r = c.getRed();
                    int g = c.getGreen();
                    int b = c.getBlue();
                    int col = (r << 16) | (g << 8) | b;
                    newImg.setRGB(i, j, col);
                }
            }
            // write image
            f = new File("newImg.jpg");
            ImageIO.write(newImg, "jpg", f);
        } catch (IOException e) {
            System.out.println("Error: " + e);
        }
    } // main() ends here
} // class ends here
And I would like to basically slow the process down and display it happening, but I'm not sure how to do that. Would I need to use threads to accomplish this? I'm somewhat new to threading, but I think I would need multiple threads to handle the painting of both pictures.
First of all, I would like to mention that you are working in a very inefficient way. You are creating a Color, decomposing a pixel into its channels, and moving it to the new image via bit-shifts. It is easier (and more efficient) if you work directly with the integer the whole time.
I will assume the image "/path/to/image/shrek4life.jpg" has an ARGB color space. I recommend ensuring this, because if the old image does not have this color space you should convert it first (see the conversion sketch after the loop below).
When you create the new image, you create it with an ARGB color space, so each channel is expressed in one byte of the int: the first byte for alpha, the second for red, the third for green and the last one for blue.
I think you forgot the alpha channel when you manipulated the old image's pixels to move them into the new image.
With this explanation in mind, I think you can change your code to increase the efficiency, like this:
for (int i = 0; i < oldImg.getWidth(); i++) {
    for (int j = 0; j < oldImg.getHeight(); j++) {
        int pixel = oldImg.getRGB(i, j);
        newImg.setRGB(i, j, pixel);
        // If you want more control over the pixels (e.g. to print them),
        // you can decompose the pixel using Color as you already do.
        // Once you fully understand the process, I recommend removing this.
        Color c = new Color(pixel);
        // Print the color or do whatever you like
    }
}
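To ensure the ARGB layout mentioned above, a minimal conversion sketch could look like this (it assumes the oldImg variable from your code and simply redraws the loaded image into an ARGB buffer):
if (oldImg.getType() != BufferedImage.TYPE_INT_ARGB) {
    // redraw the loaded image into an ARGB buffer so the int layout described above applies
    BufferedImage converted = new BufferedImage(oldImg.getWidth(), oldImg.getHeight(), BufferedImage.TYPE_INT_ARGB);
    converted.getGraphics().drawImage(oldImg, 0, 0, null);
    oldImg = converted;
}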
About how to display the process of pixel relocation:
In process:
You can print each changed pixel as a number together with its position in the image (discouraged): System.out.println("pixel " + pixel + " X:" + i + " Y:" + j);
Use this tutorial on Baeldung to draw an image. I suggest drawing a rectangle with the color of the current pixel and waiting for a key press (Enter, for example) using Scanner. After the key is pressed, you can load the next pixel, and so on.
If a single rectangle for just one pixel carries too little information, I suggest adding an array of rectangles to draw several pixels at a time. You can even draw the whole image and step through the process pixel by pixel, using Scanner to mark each step.
As @haraldK suggests, you can use Swing to display the relocation, driving the updates with a Swing Timer that triggers repaints (see the sketch after this list).
Post process:
Save the image to a file. To keep the process reasonably fast, I suggest saving only once every few pixels (10-100).
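To give an idea of the Swing approach, here is a minimal, untested sketch (the class name, batch size and timer delay are my own choices): it copies a batch of pixels per timer tick, whitens them in the source copy, and repaints both images side by side.
import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.swing.*;

public class RelocationDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage oldImg = ImageIO.read(new File("/path/to/image/shrek4life.jpg"));
        BufferedImage newImg = new BufferedImage(oldImg.getWidth(), oldImg.getHeight(), BufferedImage.TYPE_INT_ARGB);

        JPanel panel = new JPanel() {
            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                g.drawImage(oldImg, 0, 0, null);                      // source on the left
                g.drawImage(newImg, oldImg.getWidth() + 10, 0, null); // destination on the right
            }
        };
        panel.setPreferredSize(new Dimension(oldImg.getWidth() * 2 + 10, oldImg.getHeight()));

        JFrame frame = new JFrame("Pixel relocation");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(panel);
        frame.pack();
        frame.setVisible(true);

        final int[] index = {0};                                      // current pixel in row-major order
        final int total = oldImg.getWidth() * oldImg.getHeight();
        new Timer(10, e -> {                                          // javax.swing.Timer fires on the EDT
            for (int k = 0; k < 100 && index[0] < total; k++, index[0]++) {
                int x = index[0] % oldImg.getWidth();
                int y = index[0] / oldImg.getWidth();
                newImg.setRGB(x, y, oldImg.getRGB(x, y));             // "move" the pixel
                oldImg.setRGB(x, y, 0xFFFFFFFF);                      // leave white behind in the source
            }
            panel.repaint();
            if (index[0] >= total) {
                ((Timer) e.getSource()).stop();
            }
        }).start();
    }
}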
Related
I have an issue: I have an image with a fully black background that I have added text to.
The program works well, except that the background close around the text isn't fully black.
The program takes a picture and, in areas where the pixels are bright, draws characters such as "#", while dark areas are filled with " " (space) or ".".
My current code is:
import java.awt.*;
import java.io.*;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;

public class AsciiArt { // class wrapper added so the snippet compiles

    public static void main(String[] args) throws Exception {
        BufferedImage image = ImageIO.read(new File("Person.jpg"));
        Graphics g = image.getGraphics();
        g.setFont(new Font("Times New Roman", Font.PLAIN, 10));
        Color backGround = new Color(0, 0, 0);
        int bg = backGround.getRGB();
        int pixelSize = 10;
        for (int i = 0; i < image.getWidth() - pixelSize; i += pixelSize) {
            for (int j = 0; j < image.getHeight() - pixelSize; j += pixelSize) {
                Color color = grayScale(image, i, j, pixelSize); /* grayScale() takes the area
                    of the wanted pixel size, averages its color and then grayscales it. */
                for (int x = 0; x < pixelSize; x++) {
                    for (int y = 0; y < pixelSize; y++) {
                        image.setRGB(x + i, y + j, bg);
                    }
                }
                g.setColor(new Color(255, 255, 255));
                g.drawString(acill(color), i, j); /* acill() maps the brightness of the pixel
                    to a string of characters, e.g. String c = " .:!#"; if the pixel is bright
                    (255,255,255) the function returns "#". */
            }
        }
        g.dispose();
        ImageIO.write(image, "jpg", new File("Person1.jpg"));
    }
}
I managed to solve it by changing the picture file format from jpg to png. I do not know what causes this, but at least it works. If anyone knows why, I'd be happy to know.
The noise you see in your image is fully expected, and is an artifact of lossy JPEG compression. You may get rid of some of it by increasing the JPEG quality setting, at the expense of a larger file. But using lossy JPEG will always cause some such noise, and will never be able to exactly retain the information in your original image (thus "lossy"). JPEG is also designed to compress "natural" images efficiently, and is not very good at compressing "artificial" images like this.
As your image is black and white (bitonal) only, it is probably much better to use a lossless file format that natively supports bitonal images, like PNG, TIFF with "fax" compression, or even BMP. For your image, these are likely to compress the data better than JPEG anyway. You could also use JBIG, which is a lossy format created for bitonal images, but only if you can accept that it may not exactly retain your image.
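If you do want to stay with JPEG, the quality setting mentioned above can be raised through an explicit ImageWriter. A minimal sketch (the method name is mine, and 0.95 is just an example value):
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public static void writeHighQualityJpeg(BufferedImage image, File file) throws IOException {
    Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("jpg");
    ImageWriter writer = writers.next();
    ImageWriteParam param = writer.getDefaultWriteParam();
    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    param.setCompressionQuality(0.95f); // 0.0f = smallest file, 1.0f = best quality
    try (ImageOutputStream out = ImageIO.createImageOutputStream(file)) {
        writer.setOutput(out);
        writer.write(null, new IIOImage(image, null, null), param);
    } finally {
        writer.dispose();
    }
}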
Is it possible to edit an image in Java?
I mean to draw a pixel in a certain RGB color at a certain spot and save the image.
I'm working on a game where the objects are loaded from an image, and in order to save the current state of the map I need to edit some pixels and load it later.
Any help is appreciated! :)
It is. If you create an instance of BufferedImage, which is an object that stores image data, you will be able to get the pixels and change them. Here is how:
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;
import java.io.File;
import javax.imageio.ImageIO;

public static void main(String[] args) throws Exception {
    File inputFile = new File("input.png"); // path to the image you want to edit
    BufferedImage originalImage = ImageIO.read(inputFile);
    BufferedImage newImage = originalImage;
    // Note: this cast only works for images backed by an int buffer
    // (e.g. TYPE_INT_RGB or TYPE_INT_ARGB)
    int[] pixels = ((DataBufferInt) newImage.getRaster().getDataBuffer()).getData();
    for (int i = 0; i < pixels.length; i++) {
        // Code for changing pixel data;
        // syntax for setting a pixel color: 0x(HEX COLOR CODE)
        pixels[i] = 0xFFFFFFFF; // white
        // There is no need to set these pixels back on the image; the array is already
        // linked to it. For instance, if you create a Canvas object in a JFrame and use
        // graphics.drawImage(newImage, 0, 0, newImage.getWidth(), newImage.getHeight(), null),
        // it will be up to date. Likewise, if you save newImage to a file, it will already
        // have the white pixels drawn in.
    }
}
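To cover the "save the image" part of the question: you can change a single pixel and write the result back out like this (file names and coordinates are just examples):
BufferedImage img = ImageIO.read(new File("map.png"));
img.setRGB(10, 20, 0xFFFF0000);                       // paint the pixel at x=10, y=20 opaque red
ImageIO.write(img, "png", new File("map-saved.png"));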
I know that using LSB means that you can store messages at around 12% of the image carrier size.
I made a Java program that splits a message into n fragments and fills the image carrier with these fragments until the 12% are all occupied. I do this so that the message won't get lost if the image is cropped.
The problem is that the resulting image is distorted and different from the original image. I thought that if I fill only 12% of the image, or more exactly only the LSBs of the image, the image wouldn't get distorted.
int numHides = imLen / (totalLen * DATA_SIZE); // the number of messages I can store in the image
int offset = 0;
for (int h = 0; h < numHides; h++) {           // hide all frags, numHides times
    for (int i = 0; i < NUM_FRAGS; i++) {      // NUM_FRAGS ..the number of fragments
        hideStegoFrag(imBytes, stegoFrags[i], offset); // the method that hides the fragment into the picture starting at the offset position
        offset += stegoFrags[i].length * DATA_SIZE;
    }
}
private static boolean hideStegoFrag(byte[] imBytes, byte[] stego, int off) {
    int offset = off;
    for (int i = 0; i < stego.length; i++) {   // loop through stego
        int byteVal = stego[i];
        for (int j = 7; j >= 0; j--) {         // loop through 8 bits of stego byte
            int bitVal = (byteVal >>> j) & 1;
            // change last bit of image byte to be the stego bit
            imBytes[offset] = (byte) ((imBytes[offset] & 0xFE) | bitVal);
            offset++;
        }
    }
    return true;
}
The code that extracts the raw bytes from the BufferedImage:
private static byte[] accessBytes(BufferedImage image)
{
    WritableRaster raster = image.getRaster();
    DataBufferByte buffer = (DataBufferByte) raster.getDataBuffer();
    return buffer.getData();
}
The code that writes the buffered image to a file with the provided name:
public static boolean writeImageToFile(String imFnm, BufferedImage im) {
    try {
        ImageIO.write(im, "png", new File(imFnm));
    } catch (IOException ex) {
        Logger.getLogger(MultiSteg.class.getName()).log(Level.SEVERE, null, ex);
    }
    return true;
}
The output image you have posted is a 16-color paletted image.
The data I am seeing shows that you have actually applied your changes to the palette index, not to the colors of the image. The reason you are seeing the distortion is the way the palette is organized: you aren't modifying the LSB of the color, you're modifying the LSB of the index, which can change it to a completely different (and, as you can see, very noticeable) color. (Actually, you're modifying the LSB of every other index, since the 16-color form is 4 bits per pixel, 2 pixels per byte.)
It looks like you loaded raw image data and didn't decode it in to RGB color information. Your algorithm will only work on raw RGB (or raw grayscale) data; 3 bytes (or 1 for grayscale) per pixel. You need to convert your image to RGB888 or something similar before you operate on it. When you save it, you need to save it in a lossless, full color (unless you actually can fit all your colors in a palette) format too, otherwise you risk losing your information.
Your problem actually doesn't lie in the steganography portion of your program, but in the loading and saving of the image data itself.
When you load the image data, you need to convert it to an RGB format. The most convenient format for your application will be BufferedImage.TYPE_3BYTE_BGR, which stores each pixel as three bytes in blue, green, red order (so your byte array will be B,G,R,B,G,R,B,G,R,...). You can do that like so:
public static BufferedImage loadRgbImage(String filename) throws IOException {
    // load the original image
    BufferedImage originalImage = ImageIO.read(new File(filename));
    // create a buffer for the converted image in RGB format
    BufferedImage rgbImage = new BufferedImage(originalImage.getWidth(), originalImage.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
    // render the original image into the buffer, converting to the destination format
    rgbImage.getGraphics().drawImage(originalImage, 0, 0, null);
    return rgbImage;
}
If you are frequently working with source images that are already in BGR format anyways, you can make one easy optimization to not convert the image if it's already in the format you want:
public static BufferedImage loadRgbImage(String filename) throws IOException {
    BufferedImage originalImage = ImageIO.read(new File(filename));
    BufferedImage rgbImage;
    if (originalImage.getType() == BufferedImage.TYPE_3BYTE_BGR) {
        rgbImage = originalImage; // no need to convert, just return the original
    } else {
        rgbImage = new BufferedImage(originalImage.getWidth(), originalImage.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
        rgbImage.getGraphics().drawImage(originalImage, 0, 0, null);
    }
    return rgbImage;
}
You can then just use the converted image for all of your operations. Note that the byte array from the converted image will contain 3 * rgbImage.getWidth() * rgbImage.getHeight() bytes.
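For example, to read the channels of the pixel at (x, y) from the converted image, you could index the array like this (a small sketch; rgbImage is the image returned by loadRgbImage above):
byte[] data = ((DataBufferByte) rgbImage.getRaster().getDataBuffer()).getData();
int index = 3 * (y * rgbImage.getWidth() + x);        // 3 bytes per pixel, row-major order
int blue  = data[index]     & 0xFF;                   // TYPE_3BYTE_BGR stores blue first
int green = data[index + 1] & 0xFF;
int red   = data[index + 2] & 0xFF;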
You shouldn't have to make any changes to your current image saving code; ImageIO will detect that the image is RGB and will write a 24-bit PNG.
For college, we have been given an assignment where, given an image, we have to identify the "figures", their color, and the amount of "pixel-groups" inside them. Let me explain:
The image above has one figure (in the image there can be multiple figures, but let us forget about that for now).
The background color of the canvas is the pixel at 0,0 (in this case, yellow)
The border color of the figure is black (it can be any color other than the canvas' background color).
The figure's background color is white (it can also be the same as the canvas' background color).
A figure can only have one background color.
There are two pixel groups in the figure. One is a pool of blue pixels, and the other is a pool of red with some green inside. As you can see, the color of a pixel group's pixels doesn't matter (it just has to be different from the figure's background color). What matters is the fact that they're in contact (even diagonally). So despite containing two different colors, that group is still counted as just one.
As you can see, the border can be as irregular as you wish. It only has, however, one color.
It is known that a pixel group will not touch the border.
I was told that a pixel group's colors can be any except the figure's background color. I assume that then it can be the same as the figure's border color (black).
We have been given a class capable of taking images and converting them to a matrix (each element being an integer representing the color of the pixel).
And that's it. I'm doing it with Java.
WHAT HAVE I DONE SO FAR
Iterate through each pixel in the matrix
If I find a pixel that is different from the background color, I will assume it belongs to the border of the figure. I will call this pixel initialPixel from now on.
Note that the initialPixel in the image I provided is that black pixel in the top-left corner of the figure. I made a sharp cut there purposefully to illustrate it.
My mission now is to find the background color of the figure (in this case white).
But I'm having a great deal of trouble finding that background color (white). This is the closest method I came up with, which worked for some cases, but not with this image:
Since I know the color of the border, I could find the first different color to the south of the initialPixel. It sounded like a good idea, and it did work sometimes, but it would not work with the image provided: it returns yellow in this case, since initialPixel is quite far away from the figure's contents.
Assuming I did find the figure's background color (white), my next task would be to realize that there exist two pixel groups within the figure. This one seems easier:
Since I now know the figure's background color (white), I can try iterating through each pixel within the figure and, if I find one that does not belong to the border and is not part of the figure's background, I can already tell there is one pixel group. I can begin a recursive function to find all pixels related to such group and "flag" them so that in the future iterations I can completely ignore such pixels.
WHAT I NEED
Yes, my problem is about how to find the figure's background color (keep in mind it can be the same as the whole image's background color - for now it is yellow, but it can be white as well) based on what I described before.
I don't need any code - I'm just having trouble coming up with a proper algorithm for this. The fact that the border can have such weird irregular lines is killing me.
Or even better: have I been doing it wrong all along? Maybe I shouldn't have focused so much on that initialPixel at all. Maybe a different kind of initial method would have worked? Are there any documents/examples about topics like this? I realize there is a lot of research on "computer vision" and such, but I can't find much about this particular problem.
SOME CODE
My function to retrieve a vector with all the figures:
*Note: Figure is just a class that contains some values like the background color and the number of elements.
public Figure[] getFiguresFromImage(Image image) {
    Figure[] tempFigures = new Figure[100];
    int numberOfFigures = 0;
    matrixOfImage = image.getMatrix();
    int imageBackgroundColor = matrixOfImage[0][0];
    int pixel = 0;
    for (int y = 0; y < matrixOfImage.length; ++y) {
        for (int x = 0; x < matrixOfImage[0].length; ++x) {
            pixel = matrixOfImage[y][x];
            if (!exploredPixels[y][x]) {
                // This pixel has not been evaluated yet
                if (pixel != imageBackgroundColor) {
                    // This pixel is different than the background color
                    // Since it is a new pixel, I assume it is the initial pixel of a new figure
                    // Get the figure based on the initial pixel found
                    tempFigures[numberOfFigures] = retrieveFigure(y, x);
                    ++numberOfFigures;
                }
            }
        }
    }
    // ** Do some work here after getting my figures **
    return null;
}
Then, clearly, the function retrieveFigure(y,x) is what I am unable to write.
Notes:
For learning purposes, I should not be using any external libraries.
A good way to solve this problem is to treat the image as a graph, where there is one node ('component' in this answer) for each color filled area.
Here is one way to implement this approach:
Mark all pixels as unvisited.
For each pixel, if the pixel is unvisited, perform the flood fill algorithm on it. During the flood fill, mark each connected pixel as visited (see the flood-fill sketch at the end of this answer).
Now you should have a list of solid color areas in your image (or 'components'), so you just have to figure out how they are connected to each other:
Find the component that has pixels adjacent to the background color component - this is your figure border. Note that you can find the background color component by finding the component with the 0,0 pixel.
Now find the components with pixels adjacent to the newly found 'figure border' component. There will be two such components - pick the one that isn't the background (ie that doesn't have the 0,0 pixel). This is your figure background.
To find the pixel groups, simply count the number of components with pixels adjacent to the figure background component (ignoring of course the figure border component)
Advantages of this approach:
runs in O(# pixels) time.
easy to understand and implement.
doesn't assume the background color and figure background color are different.
To make sure you understand how iterating through the components and their neighbors might work, here's an example pseudocode implementation for step 5:
List<Component> allComponents;   // created in step 2
Component background;            // found in step 3 (this is the component with the 0,0 pixel)
Component figureBorder;          // found in step 3 (the component adjacent to the background)
Component figureBackground;      // found in step 4
List<Component> pixelGroups = new List<Component>(); // list of pixel groups

for each Component c in allComponents:
    if c == background or c == figureBorder or c == figureBackground:
        continue;
    for each Pixel pixel in c.pixelList:
        for each Pixel neighbor in pixel.neighbors:
            if neighbor.getComponent() == figureBackground:
                c.isPixelGroup = true;

int numPixelGroups = 0;
for each Component c in allComponents:
    if (c.isPixelGroup)
        numPixelGroups++;
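For step 2, a minimal Java sketch of the flood-fill labeling could look like this (it assumes the int[][] color matrix described in the question; the class and method names are my own):
import java.util.ArrayDeque;
import java.util.Deque;

public class ComponentLabeler {

    // Returns a matrix of component ids; pixels sharing an id form one solid color area.
    public static int[][] labelComponents(int[][] image) {
        int h = image.length, w = image[0].length;
        int[][] labels = new int[h][w]; // 0 means "unvisited"
        int nextLabel = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (labels[y][x] == 0) {
                    floodFill(image, labels, x, y, ++nextLabel);
                }
            }
        }
        return labels;
    }

    // Iterative flood fill; a stack avoids StackOverflowError on large areas.
    private static void floodFill(int[][] image, int[][] labels, int startX, int startY, int label) {
        int h = image.length, w = image[0].length;
        int color = image[startY][startX];
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{startX, startY});
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            int x = p[0], y = p[1];
            if (x < 0 || y < 0 || x >= w || y >= h) continue;
            if (labels[y][x] != 0 || image[y][x] != color) continue;
            labels[y][x] = label;
            stack.push(new int[]{x + 1, y});
            stack.push(new int[]{x - 1, y});
            stack.push(new int[]{x, y + 1});
            stack.push(new int[]{x, y - 1});
        }
    }
}
The adjacency checks in steps 3-5 can then be done on the labels matrix, remembering that diagonal neighbors count as "in contact" for the pixel groups.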
Try this code:
import java.util.Scanner;
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.ImageIO;

class Analyzer {
    private int pixdata[][];
    private int rgbdata[][];
    private BufferedImage image;
    int background_color;
    int border_color;
    int imagebg_color;

    private void populateRGB() {
        rgbdata = new int[image.getWidth()][image.getHeight()];
        for (int i = 0; i < image.getWidth(); i++) {
            for (int j = 0; j < image.getHeight(); j++) {
                rgbdata[i][j] = image.getRGB(i, j);
            }
        }
        int howmanydone = 0;
        int prevcolor, newcolor;
        prevcolor = rgbdata[0][0];
        /*
        for (int i = 0; i < image.getWidth(); i++) {
            for (int j = 0; j < image.getHeight(); j++) {
                System.out.print(rgbdata[i][j]);
            }
            System.out.println("");
        }*/
        for (int i = 0; i < image.getWidth(); i++) {
            for (int j = 0; j < image.getHeight(); j++) {
                newcolor = rgbdata[i][j];
                if ((howmanydone == 0) && (newcolor != prevcolor)) {
                    background_color = prevcolor;
                    border_color = newcolor;
                    prevcolor = newcolor;
                    howmanydone = 1;
                }
                if ((newcolor != prevcolor) && (howmanydone == 1)) {
                    imagebg_color = newcolor;
                }
            }
        }
    }

    public Analyzer() { background_color = 0; border_color = 0; imagebg_color = 0; }

    public int background() { return background_color; }

    public int border() { return border_color; }

    public int imagebg() { return imagebg_color; }

    public int analyze(String filename, String what) throws IOException {
        image = ImageIO.read(new File(filename));
        pixdata = new int[image.getHeight()][image.getWidth()];
        populateRGB();
        if (what.equals("background")) return background();
        if (what.equals("border")) return border();
        if (what.equals("image-background")) return imagebg();
        else return 0;
    }
}

public class ImageAnalyze {
    public static void main(String[] args) {
        Analyzer an = new Analyzer();
        String imageName;
        Scanner scan = new Scanner(System.in);
        System.out.print("Enter image name:");
        imageName = scan.nextLine();
        try {
            int a = an.analyze(imageName, "border"); // "border", "image-background", "background" will get you different colors
            System.out.printf("Color bg: %x", a);
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
The color returned is in ARGB format. You will need to extract R, G and B from it.
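For example, if argb is the value returned by analyze(), the channels can be extracted with bit shifts:
int a = (argb >>> 24) & 0xFF;
int r = (argb >>> 16) & 0xFF;
int g = (argb >>> 8) & 0xFF;
int b = argb & 0xFF;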
There is a bug in this code. I am working on an implementation using a finite state machine: in the first state you are inside the image, hence 0,0 is the background color; then, when there is a change, the change is the border color; the third state is when you are inside the image and inside the border and the color changes again.
I have an image, and I figured out how to use robot and getPixelColor() to grab the color of a certain pixel. The image is a character that I'm controlling, and I want robot to scan around the image constantly, and tell me if the pixels around it equal a certain color. Is this at all possible? Thanks!
Myself, I'd use the Robot to extract an image that's just a little larger than the "character", and then analyze the BufferedImage obtained. The details of course will depend on the details of your program. Probably the quickest would be to get the BufferedImage's Raster, then get its DataBuffer, then get its data, and analyze the array returned.
For example,
// screenRect is a Rectangle that contains your "character"
// + however many images around your character that you desire
BufferedImage img = robot.createScreenCapture(screenRect);
int[] imgData = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
// now that you've got the image ints, you can analyze them as you wish.
// All I've done below is get rid of the alpha value and display the ints.
for (int i = 0; i < screenRect.height; i++) {
    for (int j = 0; j < screenRect.width; j++) {
        int index = i * screenRect.width + j;
        int imgValue = imgData[index] & 0xffffff;
        System.out.printf("%06x ", imgValue);
    }
    System.out.println();
}