Storing bitmap in integer array - Java

What is the best way to do this? I have an array of height*width in length. Do I simply loop through the bitmap, setting each pixel into the array based on the loop index? Thanks.
[update]
It's an app that places a fisheye distortion on a bitmap. I'm trying to store the pixel data in an array as opposed to calling Bitmap.setPixel(), as that comes with massive GC overhead:
for (int j = 0; j < dst.getHeight(); j++) {
    for (int i = 0; i < dst.getWidth(); i++) {
        origPixel = input.getPixel(i, j);
        getRadXStart = System.currentTimeMillis();
        float x = getRadialX((float)j, (float)i, centerX, centerY, k);
        float y = getRadialY((float)j, (float)i, centerX, centerY, k);
        sampleImage(input, x, y);
        color = ((s[1] & 0x0ff) << 16) | ((s[2] & 0x0ff) << 8) | (s[3] & 0x0ff);
        // System.out.print(i+" "+j+" \\");
        //if (Math.sqrt(Math.pow(i - centerX, 2) + Math.pow(j - centerY, 2)) <= 150) {
        if (Math.pow(i - centerX, 2) + Math.pow(j - centerY, 2) <= 22500) {
            // dst.setPixel(i, j, color);
            arr[i] = color;
        } else {
            //dst.setPixel(i, j, origPixel);
            arr[i] = origPixel;
        }
    }
}
Bitmap dst2 = Bitmap.createBitmap(arr, width, height, input.getConfig());
return dst2;

Have you tried the method getPixels(...) on Bitmap?

To fill it, you will have to loop through the Bitmap to set each pixel. That said, I don't know if you'll see a major performance increase.
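For reference, a minimal sketch of the bulk-copy approach with getPixels()/createBitmap(), assuming the standard Android Bitmap API; the variable names and the processPixel() helper are illustrative placeholders, not from the original code:

int width = input.getWidth();
int height = input.getHeight();
int[] arr = new int[width * height];

// Copy the whole bitmap into the array in one call (no per-pixel getPixel/setPixel).
input.getPixels(arr, 0, width, 0, 0, width, height);

for (int j = 0; j < height; j++) {
    for (int i = 0; i < width; i++) {
        int index = j * width + i;              // row-major index of pixel (i, j)
        arr[index] = processPixel(arr[index]);  // whatever per-pixel transform you need
    }
}

// Build the output bitmap from the array in one call.
Bitmap dst = Bitmap.createBitmap(arr, width, height, input.getConfig());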

Related

Making an Image Concave in Java

I had a quick question, and wondered if anyone had any ideas or libraries I could use for this. I am making a Java game and need to make 2D images concave. The problem is: 1) I don't know how to make an image concave. 2) I need the concave effect to be somewhat of a post-process, think Oculus Rift. Everything is normal, but the camera of the player distorts the normal 2D images to look 3D. I am a sophomore, so I don't know very complex math to accomplish this.
Thanks,
-Blue
If you're not using any 3D libraries or anything like that, just implement it as a simple 2D distortion. It doesn't have to be 100% mathematically correct as long as it looks OK. You can create a couple of arrays to store the distorted texture co-ordinates for your bitmap, which means you can pre-calculate the distortion once (which will be slow but only happens once) and then render multiple times using the pre-calculated values (which will be faster).
Here's a simple function using a power formula to generate a distortion field. There's nothing 3D about it, it just sucks in the center of the image to give a concave look:
int distortionU[][];
int distortionV[][];

public void computeDistortion(int width, int height)
{
    // this will be really slow but you only have to call it once:
    int halfWidth = width / 2;
    int halfHeight = height / 2;
    // work out the distance from the center in the corners:
    double maxDistance = Math.sqrt((double)((halfWidth * halfWidth) + (halfHeight * halfHeight)));
    // allocate arrays to store the distorted co-ordinates:
    distortionU = new int[width][height];
    distortionV = new int[width][height];
    for(int y = 0; y < height; y++)
    {
        for(int x = 0; x < width; x++)
        {
            // work out the distortion at this pixel:
            // find distance from the center:
            int xDiff = x - halfWidth;
            int yDiff = y - halfHeight;
            double distance = Math.sqrt((double)((xDiff * xDiff) + (yDiff * yDiff)));
            // distort the distance using a power function
            double invDistance = 1.0 - (distance / maxDistance);
            double distortedDistance = (1.0 - Math.pow(invDistance, 1.7)) * maxDistance;
            distortedDistance *= 0.7; // zoom in a little bit to avoid gaps at the edges
            // work out how much to multiply xDiff and yDiff by
            // (guard against division by zero at the exact center pixel):
            double distortionFactor = (distance > 0.0) ? distortedDistance / distance : 0.0;
            xDiff = (int)((double)xDiff * distortionFactor);
            yDiff = (int)((double)yDiff * distortionFactor);
            // save the distorted co-ordinates
            distortionU[x][y] = halfWidth + xDiff;
            distortionV[x][y] = halfHeight + yDiff;
            // clamp
            if(distortionU[x][y] < 0)
                distortionU[x][y] = 0;
            if(distortionU[x][y] >= width)
                distortionU[x][y] = width - 1;
            if(distortionV[x][y] < 0)
                distortionV[x][y] = 0;
            if(distortionV[x][y] >= height)
                distortionV[x][y] = height - 1;
        }
    }
}
Call it once passing the size of the bitmap that you want to distort. You can play around with the values or use a totally different formula to get the effect you want. Using an exponent less than one for the pow() function should give the image a convex look.
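For example, you would call it once up front (bitmap here just stands for whatever image you intend to distort):

computeDistortion(bitmap.getWidth(), bitmap.getHeight());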
Then when you render your bitmap, or copy it to another bitmap, use the values in distortionU and distortionV to distort your bitmap, e.g.:
for(int y = 0; y < height; y++)
{
    for(int x = 0; x < width; x++)
    {
        // int pixelColor = bitmap.getPixel(x, y); // gets undistorted value
        int pixelColor = bitmap.getPixel(distortionU[x][y], distortionV[x][y]); // gets distorted value
        canvas.drawPixel(x + offsetX, y + offsetY, pixelColor);
    }
}
I don't know what your actual function for drawing a pixel to the canvas is called, the above is just pseudo-code.
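If your game happens to use java.awt.image.BufferedImage, a hedged sketch of the same lookup with the real getRGB/setRGB API might look like this; src and dst are placeholder names, and the distortion maps are assumed to have been filled by computeDistortion(src.getWidth(), src.getHeight()):

// Sketch only: applies the pre-computed distortion maps to a BufferedImage.
BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
for (int y = 0; y < src.getHeight(); y++) {
    for (int x = 0; x < src.getWidth(); x++) {
        int argb = src.getRGB(distortionU[x][y], distortionV[x][y]); // sample from the distorted position
        dst.setRGB(x, y, argb);                                      // write to the straight position
    }
}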

Hough circle detection accuracy very low

I am trying to detect a circular shape from an image which appears to have very good definition. I do realize that part of the circle is missing but from what I've read about the Hough transform it doesn't seem like that should cause the problem I'm experiencing.
Input:
Output:
Code:
// Read the image
Mat src = Highgui.imread("input.png");

// Convert it to gray
Mat src_gray = new Mat();
Imgproc.cvtColor(src, src_gray, Imgproc.COLOR_BGR2GRAY);

// Reduce the noise so we avoid false circle detection
//Imgproc.GaussianBlur( src_gray, src_gray, new Size(9, 9), 2, 2 );

Mat circles = new Mat();

/// Apply the Hough Transform to find the circles
Imgproc.HoughCircles(src_gray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 1, 160, 25, 0, 0);

// Draw the circles detected
for( int i = 0; i < circles.cols(); i++ ) {
    double[] vCircle = circles.get(0, i);
    Point center = new Point(vCircle[0], vCircle[1]);
    int radius = (int) Math.round(vCircle[2]);

    // circle center
    Core.circle(src, center, 3, new Scalar(0, 255, 0), -1, 8, 0);
    // circle outline
    Core.circle(src, center, radius, new Scalar(0, 0, 255), 3, 8, 0);
}

// Save the visualized detection.
String filename = "output.png";
System.out.println(String.format("Writing %s", filename));
Highgui.imwrite(filename, src);
I have the Gaussian blur commented out because (counterintuitively) it was greatly increasing the number of equally inaccurate circles found.
Is there anything wrong with my input image that would cause Hough to not work as well as I expect? Are my parameters way off?
EDIT: the first answer brought up a good point about the min/max radius hint for Hough. I resisted adding those parameters because the example image in this post is just one of thousands of images, all with varying radii from ~20 to almost infinity.
I've adjusted my RANSAC algorithm from this answer: Detect semi-circle in opencv
Idea:
1. choose randomly 3 points from your binary edge image
2. create a circle from those 3 points
3. test how "good" this circle is
4. if it is better than the previously best found circle in this image, remember it
5. loop 1-4 until some number of iterations is reached, then accept the best found circle
6. remove that accepted circle from the image
7. repeat 1-6 until you have found all circles
Problems:
- at the moment you must know how many circles you want to find in the image
- tested only for that one image
C++ code
Result:
Code:
inline void getCircle(cv::Point2f& p1, cv::Point2f& p2, cv::Point2f& p3, cv::Point2f& center, float& radius)
{
    float x1 = p1.x;
    float x2 = p2.x;
    float x3 = p3.x;

    float y1 = p1.y;
    float y2 = p2.y;
    float y3 = p3.y;

    // PLEASE CHECK FOR TYPOS IN THE FORMULA :)
    center.x = (x1*x1 + y1*y1)*(y2-y3) + (x2*x2 + y2*y2)*(y3-y1) + (x3*x3 + y3*y3)*(y1-y2);
    center.x /= ( 2*(x1*(y2-y3) - y1*(x2-x3) + x2*y3 - x3*y2) );

    center.y = (x1*x1 + y1*y1)*(x3-x2) + (x2*x2 + y2*y2)*(x1-x3) + (x3*x3 + y3*y3)*(x2-x1);
    center.y /= ( 2*(x1*(y2-y3) - y1*(x2-x3) + x2*y3 - x3*y2) );

    radius = sqrt((center.x-x1)*(center.x-x1) + (center.y-y1)*(center.y-y1));
}
std::vector<cv::Point2f> getPointPositions(cv::Mat binaryImage)
{
    std::vector<cv::Point2f> pointPositions;

    for(unsigned int y = 0; y < binaryImage.rows; ++y)
    {
        //unsigned char* rowPtr = binaryImage.ptr<unsigned char>(y);
        for(unsigned int x = 0; x < binaryImage.cols; ++x)
        {
            //if(rowPtr[x] > 0) pointPositions.push_back(cv::Point2i(x,y));
            if(binaryImage.at<unsigned char>(y,x) > 0) pointPositions.push_back(cv::Point2f(x,y));
        }
    }

    return pointPositions;
}
float verifyCircle(cv::Mat dt, cv::Point2f center, float radius, std::vector<cv::Point2f> & inlierSet)
{
    unsigned int counter = 0;
    unsigned int inlier = 0;
    float minInlierDist = 2.0f;
    float maxInlierDistMax = 100.0f;
    float maxInlierDist = radius/25.0f;
    if(maxInlierDist < minInlierDist) maxInlierDist = minInlierDist;
    if(maxInlierDist > maxInlierDistMax) maxInlierDist = maxInlierDistMax;

    // choose samples along the circle and count inlier percentage
    for(float t = 0; t < 2*3.14159265359f; t += 0.05f)
    {
        counter++;
        float cX = radius*cos(t) + center.x;
        float cY = radius*sin(t) + center.y;

        if(cX < dt.cols)
        if(cX >= 0)
        if(cY < dt.rows)
        if(cY >= 0)
        if(dt.at<float>(cY,cX) < maxInlierDist)
        {
            inlier++;
            inlierSet.push_back(cv::Point2f(cX,cY));
        }
    }

    return (float)inlier/float(counter);
}
float evaluateCircle(cv::Mat dt, cv::Point2f center, float radius)
{
    float completeDistance = 0.0f;
    int counter = 0;

    float maxDist = 1.0f;   //TODO: this might depend on the size of the circle!
    float minStep = 0.001f;

    // choose samples along the circle and count inlier percentage
    // HERE IS THE TRICK: no minimum/maximum circle radius is used; instead, the number of points
    // generated along the circle depends on the radius.
    // If this is too slow for you (e.g. too many points created for each circle), increase the step
    // parameter, but only by a factor so that it still depends on the radius.
    // The step parameter must depend on the circle size, otherwise small circles will create more inliers on the circle.
    float step = 2*3.14159265359f / (6.0f * radius);
    if(step < minStep) step = minStep; // TODO: find a good value here.

    //for(float t = 0; t < 2*3.14159265359f; t += 0.05f) // this one, which doesn't depend on the radius, is much worse!
    for(float t = 0; t < 2*3.14159265359f; t += step)
    {
        float cX = radius*cos(t) + center.x;
        float cY = radius*sin(t) + center.y;

        if(cX < dt.cols)
        if(cX >= 0)
        if(cY < dt.rows)
        if(cY >= 0)
        if(dt.at<float>(cY,cX) <= maxDist)
        {
            completeDistance += dt.at<float>(cY,cX);
            counter++;
        }
    }

    return counter;
}
int main()
{
    //RANSAC

    cv::Mat color = cv::imread("HoughCirclesAccuracy.png");

    // convert to grayscale
    cv::Mat gray;
    cv::cvtColor(color, gray, CV_RGB2GRAY);

    // get binary image
    cv::Mat mask = gray > 0;

    unsigned int numberOfCirclesToDetect = 2;   // TODO: if unknown, you'll have to find some nice criteria to stop finding more (semi-) circles

    for(unsigned int j = 0; j < numberOfCirclesToDetect; ++j)
    {
        std::vector<cv::Point2f> edgePositions;
        edgePositions = getPointPositions(mask);

        std::cout << "number of edge positions: " << edgePositions.size() << std::endl;

        // create distance transform to efficiently evaluate distance to nearest edge
        cv::Mat dt;
        cv::distanceTransform(255-mask, dt, CV_DIST_L1, 3);

        unsigned int nIterations = 0;

        cv::Point2f bestCircleCenter;
        float bestCircleRadius;
        //float bestCVal = FLT_MAX;
        float bestCVal = -1;

        //float minCircleRadius = 20.0f; // TODO: if you have some knowledge about your image you might be able to adjust the minimum circle radius parameter.
        float minCircleRadius = 0.0f;

        //TODO: implement some more intelligent RANSAC without a fixed number of iterations
        for(unsigned int i = 0; i < 2000; ++i)
        {
            //RANSAC: randomly choose 3 points and create a circle:
            //TODO: choose randomly but more intelligently,
            //so that it is more likely to choose three points of a circle.
            //For example if there are many small circles, it is unlikely to randomly choose 3 points of the same circle.
            unsigned int idx1 = rand() % edgePositions.size();
            unsigned int idx2 = rand() % edgePositions.size();
            unsigned int idx3 = rand() % edgePositions.size();

            // we need 3 different samples:
            if(idx1 == idx2) continue;
            if(idx1 == idx3) continue;
            if(idx3 == idx2) continue;

            // create circle from 3 points:
            cv::Point2f center; float radius;
            getCircle(edgePositions[idx1], edgePositions[idx2], edgePositions[idx3], center, radius);

            if(radius < minCircleRadius) continue;

            //verify or falsify the circle by inlier counting:
            //float cPerc = verifyCircle(dt, center, radius, inlierSet);
            float cVal = evaluateCircle(dt, center, radius);

            if(cVal > bestCVal)
            {
                bestCVal = cVal;
                bestCircleRadius = radius;
                bestCircleCenter = center;
            }

            ++nIterations;
        }

        std::cout << "current best circle: " << bestCircleCenter << " with radius: " << bestCircleRadius << " and nInlier " << bestCVal << std::endl;
        cv::circle(color, bestCircleCenter, bestCircleRadius, cv::Scalar(0,0,255));

        //TODO: hold and save the detected circle.
        //TODO: instead of overwriting the mask with a drawn circle it might be better to hold and ignore detected circles
        //      and not count new circles which are too close to an old one.
        //      In this current version the chosen radius to overwrite the mask is fixed and might remove parts of other circles too!

        // update mask: remove the detected circle!
        cv::circle(mask, bestCircleCenter, bestCircleRadius, 0, 10); // here the radius is fixed, which isn't so nice.
    }

    cv::namedWindow("edges"); cv::imshow("edges", mask);
    cv::namedWindow("color"); cv::imshow("color", color);

    cv::imwrite("detectedCircles.png", color);

    cv::waitKey(-1);
    return 0;
}
If you set the minRadius and maxRadius parameters properly, it will give you good results.
For your image, I tried the following parameters:
method - CV_HOUGH_GRADIENT
minDist - 100
dp - 1
param1 - 80
param2 - 10
minRadius - 250
maxRadius - 300
I got the following output
Note: I tried this in C++.
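Translated back to the Java bindings used in the question, that call would look roughly like this; treat it as a sketch, since the exact constant names depend on your OpenCV version:

// HoughCircles with the suggested parameters
// (dp = 1, minDist = 100, param1 = 80, param2 = 10, minRadius = 250, maxRadius = 300).
Mat circles = new Mat();
Imgproc.HoughCircles(src_gray, circles, Imgproc.CV_HOUGH_GRADIENT,
        1,      // dp: inverse ratio of accumulator resolution
        100,    // minDist between detected circle centers
        80,     // param1: upper Canny threshold
        10,     // param2: accumulator threshold for circle centers
        250,    // minRadius
        300);   // maxRadius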

Image interpolation - nearest neighbor (Processing)

I've been having trouble with an image interpolation method in Processing. This is the code I've come up with; I'm aware that it will throw an out-of-bounds exception, since the outer loop goes further than the original image, but how can I fix that?
PImage nearestneighbor (PImage o, float sf)
{
    PImage out = createImage((int)(sf*o.width), (int)(sf*o.height), RGB);

    o.loadPixels();
    out.loadPixels();

    for (int i = 0; i < sf*o.height; i++)
    {
        for (int j = 0; j < sf*o.width; j++)
        {
            int y = round((o.width*i)/sf);
            int x = round(j / sf);
            out.pixels[(int)((sf*o.width*i)+j)] = o.pixels[(y+x)];
        }
    }
    out.updatePixels();
    return out;
}
My idea was to divide both components that represent the point in the scaled image by the scale factor and round it in order to obtain the nearest neighbor.
To get rid of the IndexOutOfBoundsException, try caching the result of (int)(sf*o.width) and (int)(sf*o.height).
Additionally you might want to make sure that x and y don't leave the bounds, e.g. by using Math.min(...) and Math.max(...).
Finally, it should be int y = round(i / sf) * o.width; since you want to get the pixel in the original scale and then multiply by the original width. Example: assume a 100x100 image and a scaling factor of 1.2. The scaled height would be 120, and thus the highest value for i would be 119. Now, round((119 * 100) / 1.2) yields round(9916.66) = 9917. On the other hand, round(119 / 1.2) * 100 yields round(99.16) * 100 = 9900 - you have a 17 pixel difference here.
Btw, the variable name y might be misleading here, since it's not the y coordinate but the index of the pixel at the coordinates (0,y), i.e. the first pixel at height y.
Thus your code might look like this:
int scaledWidth = (int)(sf*o.width);
int scaledHeight = (int)(sf*o.height);
PImage out = createImage(scaledWidth, scaledHeight, RGB);
o.loadPixels();
out.loadPixels();
for (int i = 0; i < scaledHeight; i++) {
    for (int j = 0; j < scaledWidth; j++) {
        // clamp to the last valid row/column so rounding can't run past the source image
        int y = Math.min( round(i / sf), o.height - 1 ) * o.width;
        int x = Math.min( round(j / sf), o.width - 1 );
        out.pixels[(scaledWidth * i) + j] = o.pixels[y + x];
    }
}

Drawing an image using sub-pixel level accuracy using Graphics2D

I am currently attempting to draw images on the screen at a regular rate like in a video game.
Unfortunately, because of the rate at which the image is moving, some frames are identical because the image has not yet moved a full pixel.
Is there a way to provide float values to Graphics2D for on-screen position to draw the image, rather than int values?
Initially here is what I had done:
BufferedImage srcImage = sprite.getImage ( );
Position imagePosition = ... ; //Defined elsewhere
g.drawImage ( srcImage, (int) imagePosition.getX(), (int) imagePosition.getY() );
This of course thresholds, so the picture doesn't move between pixels, but skips from one to the next.
The next method was to set the paint color to a texture instead and draw at a specified position. Unfortunately, this produced incorrect results that showed tiling rather than correct antialiasing.
g.setRenderingHint ( RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON );
BufferedImage srcImage = sprite.getImage ( );
g.setPaint ( new TexturePaint ( srcImage, new Rectangle2D.Float ( 0, 0, srcImage.getWidth ( ), srcImage.getHeight ( ) ) ) );
AffineTransform xform = new AffineTransform ( );
xform.setToIdentity ( );
xform.translate ( onScreenPos.getX ( ), onScreenPos.getY ( ) );
g.transform ( xform );
g.fillRect(0, 0, srcImage.getWidth(), srcImage.getHeight());
What should I do to achieve the desired effect of subpixel rendering of an Image in Java?
You can use a BufferedImage and AffineTransform, draw to the buffered image, then draw the buffered image to the component in the paint event.
/* overrides the paint method */
@Override
public void paint(Graphics g) {
    /* clear scene buffer */
    g2d.clearRect(0, 0, (int)width, (int)height);

    /* draw ball image to the memory image with transformed x/y double values */
    AffineTransform t = new AffineTransform();
    t.translate(ball.x, ball.y); // x/y set here, ball.x/y = double, e.g. 10.33
    t.scale(1, 1); // scale = 1
    g2d.drawImage(image, t, null);

    // draw the scene (double-precision image) to the UI component
    g.drawImage(scene, 0, 0, this);
}
Check my full example here: http://pastebin.com/hSAkYWqM
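One detail worth adding (my own note, not from the linked example): a fractional translation only looks smooth if the Graphics2D is told to interpolate; with the default nearest-neighbor behavior the image still snaps to whole pixels.

// BILINEAR (or BICUBIC) interpolation blends neighboring pixels for sub-pixel positions.
g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                     RenderingHints.VALUE_INTERPOLATION_BILINEAR);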
You can composite the image yourself using sub-pixel accuracy, but it's more work on your part. Simple bilinear interpolation should work well enough for a game. Below is pseudo-C++ code for doing it.
Normally, to draw a sprite at location (a,b), you'd do something like this:
for (x = a; x < a + sprite.width; x++)
{
    for (y = b; y < b + sprite.height; y++)
    {
        *dstPixel = alphaBlend (*dstPixel, *spritePixel);
        dstPixel++;
        spritePixel++;
    }
    dstPixel += destLineDiff;      // Move to start of next destination line
    spritePixel += spriteLineDiff; // Move to start of next sprite line
}
To do sub-pixel rendering, you do the same loop, but account for the sub-pixel offset like so:
float xOffset = a - floor (a);
float yOffset = b - floor (b);

for (x = floor(a), spriteX = 0; x < floor(a) + sprite.width + 1; x++, spriteX++)
{
    for (y = floor(b), spriteY = 0; y < floor (b) + sprite.height + 1; y++, spriteY++)
    {
        spriteInterp = bilinearInterp (sprite, spriteX + xOffset, spriteY + yOffset);
        *dstPixel = alphaBlend (*dstPixel, spriteInterp);
        dstPixel++;
        spritePixel++;
    }
    dstPixel += destLineDiff;      // Move to start of next destination line
    spritePixel += spriteLineDiff; // Move to start of next sprite line
}
The bilinearInterp() function would look something like this:
Pixel bilinearInterp (Sprite* sprite, float x, float y)
{
    // Pointers to the two source rows that bracket the sample position
    Pixel* topPtr = sprite->dataPtr + ((floor (y) + 1) * sprite->rowBytes) + floor(x) * sizeof (Pixel);
    Pixel* bottomPtr = sprite->dataPtr + (floor (y) * sprite->rowBytes) + floor (x) * sizeof (Pixel);

    float xOffset = x - floor (x);
    float yOffset = y - floor (y);

    // Interpolate horizontally within each row, then vertically between the rows
    Pixel top = *topPtr + ((*(topPtr + 1) - *topPtr) * xOffset);
    Pixel bottom = *bottomPtr + ((*(bottomPtr + 1) - *bottomPtr) * xOffset);

    return bottom + (top - bottom) * yOffset;
}
This should use no additional memory, but will take additional time to render.
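For a Java version, here is a hedged sketch of the same bilinear sample on a packed-ARGB BufferedImage (my own illustration, blending channel by channel rather than using the Pixel arithmetic above):

// Sketch: bilinearly sample an ARGB BufferedImage at a fractional (x, y).
// Assumes x is in [0, width-2] and y is in [0, height-2]; clamp before calling if needed.
static int bilinearSample(BufferedImage img, float x, float y) {
    int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
    float fx = x - x0, fy = y - y0;

    int c00 = img.getRGB(x0, y0),     c10 = img.getRGB(x0 + 1, y0);
    int c01 = img.getRGB(x0, y0 + 1), c11 = img.getRGB(x0 + 1, y0 + 1);

    int result = 0;
    for (int shift = 0; shift <= 24; shift += 8) {   // blend B, G, R, A channels separately
        float top    = ((c00 >> shift) & 0xff) * (1 - fx) + ((c10 >> shift) & 0xff) * fx;
        float bottom = ((c01 >> shift) & 0xff) * (1 - fx) + ((c11 >> shift) & 0xff) * fx;
        int channel  = Math.round(top * (1 - fy) + bottom * fy);
        result |= (channel & 0xff) << shift;
    }
    return result;
}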
I successfully solved my problem after doing something like lawrencealan proposed.
Originally, I had the following code, where g is transformed to a 16:9 coordinate system before the method is called:
private void drawStar(Graphics2D g, Star s) {
    double radius = s.getRadius();
    double x = s.getX() - radius;
    double y = s.getY() - radius;
    double width = radius*2;
    double height = radius*2;
    try {
        BufferedImage image = ImageIO.read(this.getClass().getResource("/images/star.png"));
        g.drawImage(image, (int)x, (int)y, (int)width, (int)height, this);
    } catch (IOException ex) {
        Logger.getLogger(View.class.getName()).log(Level.SEVERE, null, ex);
    }
}
However, as noted by the questioner Kaushik Shankar, turning the double positions into integers makes the image "jump" around, and turning the double dimensions into integers makes it scale "jumpy" (why the hell does g.drawImage not accept doubles?!). What I found worked for me was the following:
private void drawStar(Graphics2D g, Star s) {
    AffineTransform originalTransform = g.getTransform();

    double radius = s.getRadius();
    double x = s.getX() - radius;
    double y = s.getY() - radius;
    double width = radius*2;
    double height = radius*2;
    try {
        BufferedImage image = ImageIO.read(this.getClass().getResource("/images/star.png"));
        g.translate(x, y);
        g.scale(width/image.getWidth(), height/image.getHeight());
        g.drawImage(image, 0, 0, this);
    } catch (IOException ex) {
        Logger.getLogger(View.class.getName()).log(Level.SEVERE, null, ex);
    }

    g.setTransform(originalTransform);
}
Seems like a stupid way of doing it though.
Change the resolution of your image accordingly: there's no such thing as a bitmap with sub-pixel coordinates, so what you can do is create an in-memory image larger than what you want rendered to the screen, which gives you "sub-pixel" accuracy.
When you draw to the larger image in memory, you copy and resample that into the smaller render visible to the end user.
For example: a 100x100 image and its 50x50 resized / resampled counterpart:
See: http://en.wikipedia.org/wiki/Resampling_%28bitmap%29
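A minimal sketch of that idea, assuming a plain Swing setup where the scene is rendered at 2x resolution and then scaled down with bilinear filtering; viewWidth, viewHeight, scene and componentGraphics are illustrative names:

// Render at 2x into an off-screen buffer, then downscale into the visible component.
int scale = 2;
BufferedImage scene = new BufferedImage(viewWidth * scale, viewHeight * scale, BufferedImage.TYPE_INT_ARGB);

Graphics2D sg = scene.createGraphics();
// draw the game objects here at coordinates multiplied by `scale`,
// so a position of 10.5 becomes pixel 21 in the off-screen buffer
sg.dispose();

Graphics2D g2d = (Graphics2D) componentGraphics;
g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                     RenderingHints.VALUE_INTERPOLATION_BILINEAR);
g2d.drawImage(scene, 0, 0, viewWidth, viewHeight, null); // resample down to screen size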

Looping through bitmap into array

Is it possible to loop through a bitmap and set each color value into an array? At the moment only the top row of the array is getting written to the dst bitmap, e.g.:
Bitmap dst = Bitmap.createBitmap(width, height, input.getConfig()); //output pic
int origPixel = 0;
int[] arr = new int[input.getWidth()*input.getHeight()];
int color = 0;

for (int j = 0; j < dst.getHeight(); j++) {
    for (int i = 0; i < dst.getWidth(); i++) {
        origPixel = input.getPixel(i, j);
        color = ...; // do something special with that pixel, transform it, whatever
        if (Math.pow(i - centerX, 2) + Math.pow(j - centerY, 2) <= 22500) {
            arr[i] = color;
        } else {
            arr[i] = origPixel;
        }
    }
}
Bitmap dst2 = Bitmap.createBitmap(arr, width, height, input.getConfig());
return dst2;
You need to write to arr[k], where k is initialized before the first loop and incremented in the inner loop. See the modified hunk of your code:
int k = 0;
for (int j = 0; j < dst.getHeight(); j++) {
    for (int i = 0; i < dst.getWidth(); i++, k++) {
        origPixel = input.getPixel(i, j);
        color = ...; // do something special with that pixel, transform it, whatever
        if (Math.pow(i - centerX, 2) + Math.pow(j - centerY, 2) <= 22500) {
            arr[k] = color;
        } else {
            arr[k] = origPixel;
        }
    }
}
Otherwise you are overwriting values in the array.
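Equivalently (a sketch of an alternative, not part of the original answer), you can compute the index directly from the loop variables instead of keeping a separate counter, or skip the per-pixel reads entirely with a bulk copy:

// Index arithmetic instead of a running counter:
arr[j * dst.getWidth() + i] = color;

// Or read the whole source bitmap into the array up front (one call, no per-pixel getPixel):
input.getPixels(arr, 0, input.getWidth(), 0, 0, input.getWidth(), input.getHeight());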
