I have this code:
{
    Robot robot = new Robot();
    Color inputColor = new Color(255, 0, 0); // the color to search for (Color has no no-arg constructor)
    Rectangle rectangle = new Rectangle(0, 0, 1365, 770);
    BufferedImage image = robot.createScreenCapture(rectangle);
    for (int x = 0; x < rectangle.getWidth(); x++)
    {
        for (int y = 0; y < rectangle.getHeight(); y++)
        {
            if (image.getRGB(x, y) == inputColor.getRGB())
            {
                return 1; // a break after the return would be unreachable
            }
        }
    }
}
it is supposed to, and does, take a screenshot and find in it a pixel that matches inputColor. However, the program requirements have changed: it now needs to find a run of 5 consecutive pixels that match a given sequence of colors. Is there an easy way to specify this with the existing code, or will I need to change it? I mean, can I keep the existing code and define inputColor as a sequence holding the values of the 5 pixels, or do I need to change the whole algorithm?
I think something like this would work. Not the best efficiency, but it's a bone to chew on.
int[] pixels = image.getRGB(0, 0, image.getWidth(), image.getHeight(), null, 0, image.getWidth());
for (int i = 0; i <= pixels.length - search.length; i++) {
    if (Arrays.equals(search, Arrays.copyOfRange(pixels, i, i + search.length))) {
        // found it
    }
}
search would be an array of integers (your colors); a sketch of building it follows.
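Here is a hedged sketch of building that search array; the five Color values are placeholders for whatever colors you actually need to match:
// Placeholder colors: substitute the five pixel colors you want to find, in order.
Color[] targets = {
    new Color(10, 20, 30), new Color(40, 50, 60), new Color(70, 80, 90),
    new Color(100, 110, 120), new Color(130, 140, 150)
};
int[] search = new int[targets.length];
for (int i = 0; i < targets.length; i++) {
    search[i] = targets[i].getRGB(); // packed ARGB ints, the same format image.getRGB() returns
}
Note that pixels is the image flattened row by row, so a match found this way could wrap across a row boundary; if you only want horizontal runs, also check that (i % image.getWidth()) + search.length <= image.getWidth().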
I was steered over to this forum when I asked my lecturer for advice on a piece of code for a group project. The general idea is that there are two images on top of each other, and the user can wipe the top image away to reveal the one underneath.
Using some other projects from this forum, I have managed to get the basics running; however, I am struggling to get the code to reset to the starting point once the user lets go of the mouse.
I would also appreciate any advice on converting this to use a touch screen. I have looked at the multitouch code within the Processing app, but it does not let me add images, and if I try to use the desktop software it does not seem to like the multitouch. Is there any way around this?
The code I currently have is below; I will be grateful for any advice or input - thanks in advance!
PImage img, front;
int xstart, ystart, xend, yend;
int ray;
void setup()
{
    size(961, 534);
    img = loadImage("back.jpg");
    front = loadImage("front.jpg");
    xstart = 0;
    ystart = 0;
    xend = img.width;
    yend = img.height;
    ray = 50;
}
void draw()
{
    img.loadPixels();
    front.loadPixels();
    // loop over image pixels
    for (int x = xstart; x < xend; x++)
    {
        for (int y = ystart; y < yend; y++)
        {
            int loc = x + y * img.width;
            float dd = dist(mouseX, mouseY, x, y);
            // pixel distance less than ray
            if (mousePressed && dd < 50)
            {
                // set equal pixel
                front.pixels[loc] = img.pixels[loc];
            }
            else
            {
                if (!mousePressed)
                {
                    // reset - this is what I have not been able to get working yet
                    //front.pixels[loc] = ;
                }
            }
        }
    }
    img.updatePixels();
    front.updatePixels();
    // show front image
    image(front, 0, 0);
}
I recommend using a mask instead of changing the pixels of the image. Create an empty image and associate it as a mask with the image:
img = loadImage("back.jpg");
front = loadImage("front.jpg");
mask = createImage(img.width, img.height, RGB);
img.mask(mask);
If you now draw both images, then you can only "see" the front image:
image(front, 0, 0);
image(img, 0, 0);
Set the mask pixel to color(255, 255, 255) instead of changing the pixel of front:
mask.pixels[loc] = color(255, 255, 255);
and reapply the mask to the image
img.mask(mask);
When the mouse button is released, the pixels of the mask have to be changed back to (0, 0, 0) or simply create a new and empty mask:
mask = createImage(img.width, img.height, RGB);
See the example where I applied the suggestions to your original code:
PImage img, front, mask;
int xstart, ystart, xend, yend;
int ray;
void setup() {
    size(961, 534);
    img = loadImage("back.jpg");
    front = loadImage("front.jpg");
    mask = createImage(img.width, img.height, RGB);
    img.mask(mask);
    xstart = 0;
    ystart = 0;
    xend = img.width;
    yend = img.height;
    ray = 50;
}

void draw() {
    img.loadPixels();
    front.loadPixels();
    // loop over image pixels
    for (int x = xstart; x < xend; x++) {
        for (int y = ystart; y < yend; y++) {
            int loc = x + y * img.width;
            float dd = dist(mouseX, mouseY, x, y);
            if (mousePressed && dd < 50) {
                mask.pixels[loc] = color(255, 255, 255);
            }
            else {
                if (!mousePressed) {
                    //mask = createImage(img.width, img.height, RGB);
                    mask.pixels[loc] = color(0, 0, 0);
                }
            }
        }
    }
    mask.updatePixels();
    img.mask(mask);
    // show front image
    image(front, 0, 0);
    image(img, 0, 0);
}
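As a variant on the reset, here is a minimal sketch of the "simply create a new and empty mask" option mentioned above: recreate the mask once in Processing's mouseReleased() callback instead of zeroing it pixel by pixel every frame.
void mouseReleased() {
    // a fresh, all-black mask makes img fully transparent again, so only front shows
    mask = createImage(img.width, img.height, RGB);
    img.mask(mask);
}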
Let's say that I have an image like this one (in reality I have a lot of them, but let's keep it simple):
[example picture]
I'd like to replace the background color (the one that's not roads) with green. To do that, I'd have to iterate through all of the pixels in the map and replace the ones that match the color I want to remove. But as you might guess, my image is not a simple 256x256 picture; it's slightly bigger (1440p), and the performance drop is significant. How would I replace all of the unwanted pixels without iterating through all of them?
I'm working with Processing 3 - Java (Android) and I'm currently using this piece of code:
for (int x = 0; x < img.width; x++) {
    for (int y = 0; y < img.height; y++) {
        // color to transparent
        int index = x + img.width * y;
        if (img.pixels[index] == -1382175 || img.pixels[index] == 14605278 || img.pixels[index] == 16250871) {
            img.pixels[index] = color(0, 0, 0, 0);
        } else {
            // 'bright' is computed elsewhere in the sketch
            img.pixels[index] = color(map(bright, 0, 255, 64, 192));
        }
    }
}
Solved it with this one:
private PImage swapPixelColor(PImage img, int old, int now) {
    // force an opaque alpha so both values match the ARGB ints stored in pixels[]
    old |= 0xFF000000;
    now |= 0xFF000000;
    img.loadPixels();
    int[] p = img.pixels;
    for (int i = p.length - 1; i >= 0; i--) {
        if (p[i] == old) p[i] = now;
    }
    img.updatePixels();
    return img;
}
It works like a charm, and it takes almost no time:
Swapped colors in 140ms // that's replacing it three times (different colors, of course)
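For reference, a minimal usage sketch in Processing; the two ARGB values are placeholders for whichever colors you are swapping:
int before = millis();
// placeholder values: swap a light background color for opaque green
img = swapPixelColor(img, 0xFFEAE5E1, 0xFF00FF00);
println("Swapped colors in " + (millis() - before) + "ms");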
I'm trying to write a method for my game that will take an image, the hex value of the old color, and the hex value of the new color, and then convert all pixels of the old color to the new color. Right now, the method paints the entire image the new color, as if the if statement is not working at all. This is the method:
private void convertColors(BufferedImage img, int oldColor, int newColor)
{
Graphics g = img.getGraphics();
g.setColor(new Color(newColor));
Color old = new Color(oldColor);
for(int x = 0; x < img.getWidth(); x++)
{
for(int y = 0; y < img.getHeight(); y++)
{
Color tmp = new Color(img.getRGB(x, y));
if(tmp.equals(old));
{
System.out.println("Temp=" + tmp.toString() + "Old=" + old.toString() + "New=" + g.getColor().toString());
g.fillRect(x, y, 1, 1);
}
}
}
g.dispose();
}
The hex for oldColor is 0xFFFFFF (white) and for newColor is 0xFF0000 (red).
Using the println method I get these kinds of results:
Temp=java.awt.Color[r=0,g=0,b=0]Old=java.awt.Color[r=255,g=255,b=255]New=java.awt.Color[r=255,g=0,b=0]
Temp=java.awt.Color[r=255,g=255,b=255]Old=java.awt.Color[r=255,g=255,b=255]New=java.awt.Color[r=255,g=0,b=0]
The second line looks correct, the temp color and the old one are the same, but that is obviously not the case with the first. I have also tried creating a new BufferedImage and copying over the pixels, but that gives the same result... Does the equals method not work as I think it does, or does this entire method just not work and there's a better way to do this? Thanks for your help in advance.
Just remove the ; after if(tmp.equals(old)).
Otherwise you compare the colors and do nothing after the comparison, and always choose the new color.
Besides that you need to reorganize your code a little bit to make it slightly more efficient:
Graphics g = img.getGraphics();
g.setColor(new Color(newColor));
for(int x = 0; x < img.getWidth(); x++) {
for(int y = 0; y < img.getHeight(); y++) {
if(img.getRGB(x, y) == oldColor) {//check if pixel color matches old color
g.fillRect(x, y, 1, 1);//fill the pixel with the right color
}
}
}
g.dispose();
Just because I was interested in the topic: relying on image filters, you could do all that via:
class ColorSwapFilter extends RGBImageFilter {

    int newColor, oldColor;

    public ColorSwapFilter(int newColor, int oldColor) {
        canFilterIndexColorModel = true;
        this.newColor = newColor;
        this.oldColor = oldColor;
    }

    @Override
    public int filterRGB(int x, int y, int rgb) {
        // swap matching pixels, leave everything else untouched
        return rgb == oldColor ? newColor : rgb;
    }
}
which should be called via
BufferedImage img;//your image
ColorSwapFilter filter = new ColorSwapFilter(...,...);//your colors to be swapped.
ImageProducer producer = img.getSource();
producer = new FilteredImageSource(producer, filter);
Image im = Toolkit.getDefaultToolkit().createImage(producer);
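If you need a BufferedImage again after filtering (this is not part of the original answer, just a common follow-up), one possible sketch is to force the produced Image to finish loading and draw it into a fresh buffer:
// ImageIcon blocks until the asynchronously produced image is fully loaded
Image loaded = new javax.swing.ImageIcon(im).getImage();
BufferedImage filtered = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g2 = filtered.createGraphics();
g2.drawImage(loaded, 0, 0, null);
g2.dispose();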
You have a semicolon directly after your if statement: if(tmp.equals(old));
This essentially tells Java that your if statement has only one command associated with it, and that command is a single semicolon, effectively meaning "do nothing". If you remove it, it will restore the condition on the block of code beneath it, which right now just runs every time regardless of the condition.
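In other words, a minimal before/after of that one spot:
if (tmp.equals(old));   // empty statement: the if ends here, so the block below always runs
{
    g.fillRect(x, y, 1, 1);
}

if (tmp.equals(old))    // without the semicolon, the block only runs when the colors match
{
    g.fillRect(x, y, 1, 1);
}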
I got it to work; this is the working convertColors method:
private BufferedImage convertColors(BufferedImage img, int oldColor, int newColor)
{
int [] pixels = new int [img.getWidth() * img.getHeight()];
img.getRGB(0, 0, img.getWidth(), img.getHeight(), pixels, 0, img.getWidth());
Color old = new Color(oldColor);
Color newC = new Color(newColor);
for(int x = 0; x < img.getWidth(); x++)
{
for(int y = 0; y < img.getHeight(); y++)
{
Color tmp = new Color(pixels[x + y * img.getWidth()]);
if(tmp.equals(old))
{
pixels[x + y * img.getWidth()] = newC.getRGB();
}
}
}
img.setRGB(0, 0, img.getWidth(), img.getHeight(), pixels, 0, img.getWidth());
return img; // setRGB already wrote the new pixels back into img
}
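A one-line usage example with the colors mentioned earlier (white to red); img stands for whatever BufferedImage you loaded:
BufferedImage recolored = convertColors(img, 0xFFFFFF, 0xFF0000);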
I am working on a method in Java to do some simple edge detection. I want to take the difference of two color intensities, one at a pixel and the other at the pixel directly below it. The picture that I am using is being colored black no matter what threshold I pass to the method. I am not sure whether my current method is just not computing what I need it to, but I am at a loss as to what I should be tracing to find the issue.
Here is my method thus far:
public void edgeDetection(double threshold)
{
Color white = new Color(1,1,1);
Color black = new Color(0,0,0);
Pixel topPixel = null;
Pixel lowerPixel = null;
double topIntensity;
double lowerIntensity;
for(int y = 0; y < this.getHeight()-1; y++){
for(int x = 0; x < this.getWidth(); x++){
topPixel = this.getPixel(x,y);
lowerPixel = this.getPixel(x,y+1);
topIntensity = (topPixel.getRed() + topPixel.getGreen() + topPixel.getBlue()) / 3;
lowerIntensity = (lowerPixel.getRed() + lowerPixel.getGreen() + lowerPixel.getBlue()) / 3;
if(Math.abs(topIntensity - lowerIntensity) < threshold)
topPixel.setColor(white);
else
topPixel.setColor(black);
}
}
}
new Color(1,1,1) calls the Color(int,int,int) constructor of Color which takes values between 0 and 255. So your Color white is still basically black (well, very dark grey, but not enough to notice).
If you want to use the Color(float,float,float) constructor, you need float literals:
Color white = new Color(1.0f,1.0f,1.0f);
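For contrast, a short illustration of the two constructors (the values are only for illustration):
Color nearlyBlack = new Color(1, 1, 1);          // int constructor: 1 out of 255 per channel
Color trueWhite   = new Color(1.0f, 1.0f, 1.0f); // float constructor: 1.0 out of 1.0 per channel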
public void edgeDetection(int edgeDist)
{
    Pixel leftPixel = null;
    Pixel rightPixel = null;
    Pixel bottomPixel = null;
    Pixel[][] pixels = this.getPixels2D();
    Color rightColor = null;
    boolean black;
    for (int row = 0; row < pixels.length; row++)
    {
        for (int col = 0; col < pixels[0].length; col++)
        {
            black = false;
            leftPixel = pixels[row][col];
            // compare with the pixel to the right (if there is one)
            if (col < pixels[0].length - 1)
            {
                rightPixel = pixels[row][col + 1];
                rightColor = rightPixel.getColor();
                if (leftPixel.colorDistance(rightColor) > edgeDist)
                    black = true;
            }
            // compare with the pixel below (if there is one)
            if (row < pixels.length - 1)
            {
                bottomPixel = pixels[row + 1][col];
                if (leftPixel.colorDistance(bottomPixel.getColor()) > edgeDist)
                    black = true;
            }
            if (black)
                leftPixel.setColor(Color.BLACK);
            else
                leftPixel.setColor(Color.WHITE);
        }
    }
}
I am working with a LayeredPane that contains two images, one on each layer. I have been working on a method that obtains the image positions (based on the positions of the labels the images are in) and then saves the bottom image (the lower one within the layeredPane), and also the top one if it is covering the bottom image at all (it may only be a part of the image). However, I have been having some trouble with this and I'm a bit unsure how to get it working properly.
I have been stuck working on this for quite a while now so any help with my existing code or thoughts on how I should approach this another way would be a big help for me.
Thanks in advance.
public void saveImageLayering(BufferedImage topImg,BufferedImage bottomImg, JLabel topLabel, JLabel bottomLabel) {
int width = bottomImg.getWidth();
int height = bottomImg.getHeight();
Point bottomPoint = new Point();
Point topPoint = new Point();
bottomPoint = bottomLabel.getLocation();
topPoint = topLabel.getLocation();
System.out.println("image x coordinate " + bottomPoint.x);
System.out.println("image y coordinate " + bottomPoint.y);
//arrays to store the bottom image
int bottomRedImgArray[][] = new int[width][height];
int bottomGreenImgArray[][] = new int[width][height];
int bottomBlueImgArray[][] = new int[width][height];
//arrays to store the top image
int topRedImgArray[][] = new int[width][height];
int topGreenImgArray[][] = new int[width][height];
int topBlueImgArray[][] = new int[width][height];
//loop through the bottom image and get all pixels rgb values
for(int i = bottomPoint.x; i < width; i++){
for(int j = bottomPoint.y; j < height; j++){
//set pixel equal to the RGB value of the pixel being looked at
pixel = new Color(bottomImg.getRGB(i, j));
//contain the RGB values in the respective RGB arrays
bottomRedImgArray[i][j] = pixel.getRed();
bottomGreenImgArray[i][j] = pixel.getGreen();
bottomBlueImgArray[i][j] = pixel.getBlue();
}
}
//create new image the same size as old
BufferedImage newBottomImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
//set values within the 2d array to the new image
for (int x1 = 0; x1 < width; x1++){
for (int y1 = 0; y1 < height; y1++){
//putting values back into buffered image
int newPixel = (int) bottomRedImgArray[x1][y1];
newPixel = (newPixel << 8) + (int) bottomGreenImgArray[x1][y1];
newPixel = (newPixel << 8) + (int) bottomBlueImgArray[x1][y1];
newBottomImage.setRGB(x1, y1, newPixel);
}
}
//create rectangle around bottom image to check if coordinates of top in inside and save only the ones that are
Rectangle rec = new Rectangle(bottomPoint.x, bottomPoint.y, bottomImg.getWidth(), bottomImg.getHeight());
//loop through the top image and get all pixels rgb values
for(int i = bottomPoint.x; i < bottomImg.getWidth(); i++){
for(int j = bottomPoint.y; j < bottomImg.getHeight(); j++){
//if top image is inside lower image then getRGB values
if (rec.contains(topPoint)) { //___________________________________________________________doesnt contain any..
if (firstPointFound == true) {
//set pixel equal to the RGB value of the pixel being looked at
pixel = new Color(topImg.getRGB(i, j));
//contain the RGB values in the respective RGB arrays
topRedImgArray[i][j] = pixel.getRed();
topGreenImgArray[i][j] = pixel.getGreen();
topBlueImgArray[i][j] = pixel.getBlue();
} else {
firstPoint = new Point(i, j);
firstPointFound = true;
}
}
}
}
//create new image the same size as old
BufferedImage newTopImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
//set values within the 2d array to the new image
for (int x1 = 0; x1 < topImg.getWidth(); x1++){
for (int y1 = 0; y1 < topImg.getHeight(); y1++){
//putting values back into buffered image
int newPixel = (int) topRedImgArray[x1][y1];
newPixel = (newPixel << 8) + (int) topGreenImgArray[x1][y1];
newPixel = (newPixel << 8) + (int) topBlueImgArray[x1][y1];
newTopImage.setRGB(x1, y1, newPixel);
}
}
BufferedImage newImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
//uses the Graphics.drawImage() to place them on top of each other
Graphics g = newImage.getGraphics();
g.drawImage(newBottomImage, bottomPoint.x, bottomPoint.y, null);
g.drawImage(newTopImage, firstPoint.x, firstPoint.y, null);
try {
//then save as image once all in correct order
File outputfile = new File("saved_Layered_Image.png");
ImageIO.write(newImage, "png", outputfile);
JOptionPane.showMessageDialog(null, "New image saved successfully");
} catch (IOException e) {
e.printStackTrace();
}
}
I'm not really sure why you're messing around with the pixels; however, the idea is relatively simple.
Basically, you want to create a third "merged" image which is the same size as the bottomImage. From there, you simply want to paint the bottomImage onto the merged image at 0x0.
Then you need to calculate the distance that the topImage is away from bottomImage's location and paint it at that point.
BufferedImage merged = new BufferedImage(bottomImg.getWidth(), bottomImg.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g2d = merged.createGraphics();
g2d.drawImage(bottomImg, 0, 0, this);
int x = topPoint.x - bottomPoint.x;
int y = topPoint.y - bottomPoint.y;
g2d.drawImage(topImg, x, y, this);
g2d.dispose();
Using this basic idea, I was able to produce these...