I've been stuck on something recently.
What I want to do is get multiple sub-images out of one big image.
Take this example: I have a frame of 128x128 pixels that every image needs to fit in.
I'm putting all the BufferedImages inside a list and scaling all of them to 128x128.
The image at the link below shows that I need 4 sub-images from that image, so at the end I have 4 separate images of 128x128 each.
Or, if you have an image of 128x384, it should give 3 sub-images going from top to bottom.
https://i.stack.imgur.com/RsCkf.png
I know there is a function called
BufferedImage.getSubimage(int x, int y, int w, int h);
But the problem is that I can't figure out what math I need to implement.
What I tried was: if the height or width is higher than 200, divide it by 2. But that never worked for me.
I'm not sure I fully understand what you are asking, but I think what you want is something like this:
First, loop over the image in both dimensions.
Then compute the size of the tile (the smaller of 128 and (image dimension - start pos)). This is to make sure you don't try to fetch a tile out of bounds. If your images are always a multiple of 128 in each dimension, you could skip this step and just use 128 (just make sure you validate that input images follow this assumption).
If you only want tiles of exactly 128x128, you could also just skip the remainder when the tile is smaller than 128x128; I'm not sure what your requirement is here. Anyway, I'll leave that to you. :-)
Finally, get the subimage of that size and coordinates and store in the list.
Code:
BufferedImage image = ...;
int tileSize = 128;

List<BufferedImage> tiles = new ArrayList<>();

for (int y = 0; y < image.getHeight(); y += tileSize) {
    // clamp the tile height at the bottom edge
    int h = Math.min(tileSize, image.getHeight() - y);

    for (int x = 0; x < image.getWidth(); x += tileSize) {
        // clamp the tile width at the right edge
        int w = Math.min(tileSize, image.getWidth() - x);

        tiles.add(image.getSubimage(x, y, w, h));
    }
}
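For completeness, a minimal sketch of how the source image might be loaded before running the loop above (the file name is just a placeholder, and exception handling is omitted):

BufferedImage image = ImageIO.read(new File("big-image.png")); // javax.imageio.ImageIO, java.io.File
// ... run the tiling loop above ...
System.out.println("Got " + tiles.size() + " tiles of up to " + tileSize + "x" + tileSize);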
Can anyone help me and tell me how to create a gray scale image where one pixel of the image is shown as a square with the size 2x2?
I already searched for help and found this: how to create a gray scale image from pixel values using java, but I don't know how to create a gray scale image with the constraint that one pixel is shown as a square with the size 2x2.
Thanks!
To create a picture where each pixel has the size 2x2, you must either scale the image (factor 2) for display only... or, if you want to create an actual image, you have to do it manually: create an image and draw onto it with scale factor 2.
int[] pixels = ...       // we already have our gray scale pixels here (values 0-255)
int widthOriginal = ...  // size of original image
int heightOriginal = ...

// let's create a buffered image twice the size
BufferedImage img =
    new BufferedImage(2 * widthOriginal, 2 * heightOriginal, BufferedImage.TYPE_4BYTE_ABGR);

// we paint on the buffered image's graphics...
Graphics gr = img.getGraphics();

// we draw all pixels on the graphics
for (int y = 0; y < heightOriginal; y++) {
    for (int x = 0; x < widthOriginal; x++) {
        int index = y * widthOriginal + x;
        int gray = pixels[index];
        // to draw we first set the color (same value for red, green and blue gives gray)
        gr.setColor(new Color(gray, gray, gray));
        // then fill a 2x2 square instead of a single pixel
        gr.fillRect(2 * x, 2 * y, 2, 2);
    }
}
Uhm, honestly I've written that code entirely out of my head, so there may be some minor compilation problems... but the technique is explained properly.
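If the 2x2 look is only needed for display (the first option above), a minimal sketch could instead scale while painting. This assumes the original gray scale image already exists as a BufferedImage named original and that g is the Graphics passed to your paint method:

// e.g. inside paintComponent(Graphics g) of a Swing component
Graphics2D g2 = (Graphics2D) g;
// nearest-neighbour interpolation keeps the blocky 2x2 squares instead of smoothing them
g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
        RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
g2.drawImage(original, 0, 0, original.getWidth() * 2, original.getHeight() * 2, null);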
I have tried to make an algorithm in Java to rotate a 2-D pixel array (not restricted to 90 degrees); the only problem I have with it is that the end result leaves me with dots/holes within the image.
Here is the code:
for (int x = 0; x < width; x++)
{
    for (int y = 0; y < height; y++)
    {
        int xp = (int) (nx + Math.cos(rotation) * (x - width / 2)
                + Math.cos(rotation + Math.PI / 2) * (y - height / 2));
        int yp = (int) (ny + Math.sin(rotation) * (x - width / 2)
                + Math.sin(rotation + Math.PI / 2) * (y - height / 2));

        int pixel = pixels[x + y * width];
        Main.pixels[xp + yp * Main.WIDTH] = pixel;
    }
}
'Main.pixels' is an array connected to a canvas display; this is what is displayed on the monitor.
'pixels', and the function itself, live in a sprite class. The sprite class grabs the pixels from a '.png' image when the program initializes.
I've tried looking at the 'rotation matrix' solutions, but they are too complicated for me. I have noticed that as the image gets closer to 45 degrees, it looks somewhat stretched. What is going wrong? And what is the correct code that adds the pixels to a larger-scale array (e.g. Main.pixels[])?
It needs to be Java, and in line with the code format above. I am not looking for complex examples, simply because I will not understand them (as said above). Simple and straight to the point is what I am looking for.
How I'd like the question to be answered:
Your formula is wrong because ....
Do this and the effect will be...
Simplify this...
I'd recommend...
I'm sorry if I'm asking too much, but I have looked for an answer to this question that I can understand and use, only to always be given either a rotation of 90 degrees or an example from another programming language.
You are pushing the pixels forward, and not every pixel is hit by the discretized rotation map. You can get rid of the gaps by calculating the source of each pixel instead.
Instead of
for each pixel p in the source
pixel q = rotate(p, theta)
q.setColor(p.getColor())
try
for each pixel q in the image
pixel p = rotate(q, -theta)
q.setColor(p.getColor())
This will still have visual artifacts. You can improve on this by interpolating instead of rounding the coordinates of the source pixel p to integer values.
Edit: Your rotation formulas looked odd, but they appear ok after using trig identities like cos(r+pi/2) = -sin(r) and sin(r+pi/2)=cos(r). They should not be the cause of any stretching.
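To make the backward mapping concrete in Java, here is a minimal nearest-neighbour sketch. It reuses the question's own names (nx, ny, width, height, pixels, Main.pixels, Main.WIDTH); Main.HEIGHT is assumed to exist as the height of the destination buffer:

double cos = Math.cos(rotation);
double sin = Math.sin(rotation);

for (int yq = 0; yq < Main.HEIGHT; yq++) {
    for (int xq = 0; xq < Main.WIDTH; xq++) {
        // rotate the destination pixel back by -rotation around the destination centre (nx, ny)
        double dx = xq - nx;
        double dy = yq - ny;
        int xs = (int) ( cos * dx + sin * dy) + width / 2;
        int ys = (int) (-sin * dx + cos * dy) + height / 2;

        // copy only if the source coordinate lands inside the sprite
        if (xs >= 0 && xs < width && ys >= 0 && ys < height) {
            Main.pixels[xq + yq * Main.WIDTH] = pixels[xs + ys * width];
        }
    }
}

Interpolating between the four neighbouring source pixels instead of casting to int would reduce the remaining artifacts.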
To avoid holes you can:
compute the source coordinate from destination
(just reverse the computation relative to your current state); it is the same as Douglas Zare's answer
use bilinear or better filtering
use less than a single-pixel step
usually 0.75 pixel is enough to cover the holes, but you need to use floats instead of ints, which is sometimes not possible (due to performance, a missing implementation, or other reasons)
Distortion
If your image gets distorted, then you do not have the aspect ratio applied correctly, so the x-pixel size differs from the y-pixel size. You need to add a scale to one axis so it matches the device/transforms applied. Here are a few hints:
Are the source image and destination image separate (not in place)? That is, Main.pixels and pixels are not the same thing... otherwise you are overwriting some pixels before they are used, which could be another cause of distortion.
I just realized you have cos,cos and sin,sin in the rotation formula, which is non-standard, so maybe you got the angle delta wrongly signed somewhere.
Just to be sure, here is an example of bullet #1 (reverse mapping) with the standard rotation formula (C++-style):
float c = Math.cos(-rotation);
float s = Math.sin(-rotation);
int x0 = Main.width / 2;
int y0 = Main.height / 2;
int x1 = width / 2;
int y1 = height / 2;
for (int a = 0, y = 0; y < Main.height; y++)
    for (int x = 0; x < Main.width; x++, a++)
    {
        // coordinate inside dst image, rotation center biased
        int xp = x - x0;
        int yp = y - y0;
        // rotate inverse
        int xx = int(float(float(xp) * c - float(yp) * s));
        int yy = int(float(float(xp) * s + float(yp) * c));
        // coordinate inside src image
        xp = xx + x1;
        yp = yy + y1;
        if ((xp >= 0) && (xp < width) && (yp >= 0) && (yp < height))
            Main.pixels[a] = pixels[xp + yp * width]; // copy pixel
        else
            Main.pixels[a] = 0; // out-of-src-range pixel is black
    }
I'm trying to scan a bar code for black and white lines (across the image from left to right). Can anyone help me in doing this? There are 95 bits in my bar code image and I want to scan across just once and get the values of those colors scanned with the .getRed, .getGreen, .getBlue methods.
I'm not sure if I started out right, but correct me if I'm wrong:
//Image already loaded in above code
//Scan Array
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
    }
}
I was told that the code above scans the entire image, not just once from left to right. Any help?
Edit:
Black lines would give (0, 0, 0), which would then equal 1.
White lines would give (255, 255, 255), which would then equal 0.
Don't try to do this yourself. Use Zebra Crossing.
https://github.com/zxing/zxing/
You'd need to scan across a row, find the contrast, find a threshold, create candidate lists of valid width predictions and what the bar code could be, and sort them... it is an enormous headache.
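If it helps, here is a minimal sketch of decoding a barcode with ZXing (core plus the javase module, which provides BufferedImageLuminanceSource; the file name is a placeholder):

import com.google.zxing.BinaryBitmap;
import com.google.zxing.LuminanceSource;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.NotFoundException;
import com.google.zxing.Result;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.common.HybridBinarizer;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

public class BarcodeDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage image = ImageIO.read(new File("barcode.png")); // placeholder path

        LuminanceSource source = new BufferedImageLuminanceSource(image);
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));

        try {
            // MultiFormatReader tries all supported barcode formats
            Result result = new MultiFormatReader().decode(bitmap);
            System.out.println("Decoded: " + result.getText());
        } catch (NotFoundException e) {
            System.out.println("No barcode found in the image");
        }
    }
}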
I was working on my game today and I found that the tops of my trees have a weird texture problem where they overlap each other with a black box. It only affects the tops of the trees, which are split up into 9 blocks, each with its own image. The 9 images are transparent, each is 32x32, and I've tried it a bunch of different ways with no luck. Does anyone know what the problem with the texture is? This isn't a generation question but an OpenGL/Slick2D question about textures. Here's a screenshot of the problem: Screenshot
EDIT: Here's a piece of the rendering code.
for (int x = (int) (World.instance.camera.getX() / Block.WIDTH); x < width; x++)
{
    for (int y = (int) (World.instance.camera.getY() / Block.HEIGHT); y < height; y++)
    {
        try
        {
            if (blocks[x][y] != Block.AIR.getId())
            {
                g.drawImage(textureCache.get(blocks[x][y]), x * Block.WIDTH, y * Block.HEIGHT);
            }
        }
        catch (Exception ex)
        {
        }
    }
}
Looking at your code, it seems that you are only drawing a single image at each 32x32 square. So if tree A is in front of tree B, but tree A only partly fills a square, then tree A is the one listed in your blocks array and therefore retrieved from your "texture cache", and not tree B. So tree A is all that is drawn.
To resolve this, your blocks structure would need to be three dimensional - basically, for each 32x32 square, you'd need some kind of "stack" of references to all the images whose corresponding object is found in that square. Then when you draw that square, draw all of the images in order, from the back to the front.
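A minimal sketch of that idea, with a hypothetical layeredBlocks structure holding the ids of every object overlapping each tile (the real field names, texture cache and loop bounds would be whatever your game already uses):

// one list of block ids per tile, ordered back to front (hypothetical structure)
List<Integer>[][] layeredBlocks = ...;

for (int x = startX; x < width; x++) {
    for (int y = startY; y < height; y++) {
        for (int id : layeredBlocks[x][y]) { // draw every layer, back to front
            if (id != Block.AIR.getId()) {
                g.drawImage(textureCache.get(id), x * Block.WIDTH, y * Block.HEIGHT);
            }
        }
    }
}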
I want to look within a certain position in an image to see if the selected pixels have changed in color; how would I go about doing this? (I'm trying to check for movement.)
I was thinking I could do something like this:
public int[] rectanglePixels(BufferedImage img, Rectangle Range) {
    int[] pixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
    int[] boxColors;
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            boxColors = pixels[(x & Range.width) * Range.x + (y & Range.height) * Range.y * width];
        }
    }
    return boxColors;
}
Maybe use that to extract the colors from the position? I'm not sure if I'm doing that right, but after that, should I re-run this method, compare the two arrays for similarities, and if the number of similarities reaches some threshold, declare that the image has changed?
One approach to detect movement is the analysis of pixel color variation, considering the entire image or a sub-image at distinct times (n, n-1, n-2, ...). In this case you are assuming a fixed camera. You might have two thresholds:
The threshold of color channel variation that defines that two pixels are distinct.
The threshold of distinct pixels between the images needed to consider that there is movement. In other words: two images of the same scene at times n and n-1 have just 10 distinct pixels. Is that real movement or just noise?
Below is an example showing how to count the distinct pixels between two images, given a color channel threshold.
for (int y = 0; y < imageA.getHeight(); y++) {
    for (int x = 0; x < imageA.getWidth(); x++) {
        redA   = imageA.getIntComponent0(x, y);
        greenA = imageA.getIntComponent1(x, y);
        blueA  = imageA.getIntComponent2(x, y);
        redB   = imageB.getIntComponent0(x, y);
        greenB = imageB.getIntComponent1(x, y);
        blueB  = imageB.getIntComponent2(x, y);

        if (
            Math.abs(redA - redB)     > colorThreshold ||
            Math.abs(greenA - greenB) > colorThreshold ||
            Math.abs(blueA - blueB)   > colorThreshold
        ) {
            distinctPixels++;
        }
    }
}
However, there are Marvin plug-ins to do this. Check this source code example. It detects and displays regions containing "movement", as shown in the image below.
There are more sophisticated approaches that determine/subtract background for this purpose or deal with camera movements. I guess you should start from the simplest scenario and then go to more complex ones.
You should use BufferedImage.getRGB(startX, startY, w, h, rgbArray, offset, scansize) unless you really want to play around with the loops and extra arrays.
Comparing two values against a threshold would serve as a good indicator. Perhaps you could calculate averages for each array to determine the color and compare the two? If you do not want a threshold value, just use .hashCode().
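As a rough sketch of that idea (the frame images, rectangle and threshold are placeholders you'd replace with your own):

// grab the pixels of the watched rectangle from two frames
int[] before = frameA.getRGB(rect.x, rect.y, rect.width, rect.height, null, 0, rect.width);
int[] after  = frameB.getRGB(rect.x, rect.y, rect.width, rect.height, null, 0, rect.width);

// count pixels whose packed ARGB value changed
int changed = 0;
for (int i = 0; i < before.length; i++) {
    if (before[i] != after[i]) {
        changed++;
    }
}

// declare movement if enough pixels differ (the 5% threshold is arbitrary)
boolean movement = changed > before.length / 20;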