I am a beginner in programming. I came across this class:
http://www.magicandlove.com/blog/2014/03/06/people-detection-in-processing-with-opencv/
I want to run it in NetBeans, but the main method is missing and errors appear like "cannot find PImage", and the same for size and background.
Can you help me run it, and tell me which classes it needs?
PImage small;
Capture cap; // the capture device; this declaration is missing from the snippet
HOGDescriptor hog;
byte [] bArray;
int [] iArray;
int pixCnt1, pixCnt2;
int w, h;
float ratio;
void setup() {
size(640, 480);
ratio = 0.5;
w = int(width*ratio);
h = int(height*ratio);
background(0);
// Define and initialise the default capture device.
cap = new Capture(this, width, height);
cap.start();
// Load the OpenCV native library.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
println(Core.VERSION);
pixCnt1 = w*h*4;
pixCnt2 = w*h;
bArray = new byte[pixCnt1];
iArray = new int[pixCnt2];
small = createImage(w, h, ARGB);
hog = new HOGDescriptor();
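// the default people detector supplies OpenCV's built-in pedestrian SVM coefficients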
hog.setSVMDetector(HOGDescriptor.getDefaultPeopleDetector());
noFill();
stroke(255, 255, 0);
}
void draw() {
if (cap.available()) {
cap.read();
}
else {
return;
}
image(cap, 0, 0);
// ... (the HOG detection part of draw() continues on the linked page)
}
You need to copy everything from that page, including the imports. The objects you're trying to create come from the classes that were supposed to be imported, so that should solve the "cannot find" errors.
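Beyond the imports, running it in NetBeans also means supplying the wrapper that the Processing IDE normally generates for you: a class extending PApplet plus a main method. A minimal sketch of that wrapper (assuming Processing 3 and the OpenCV Java bindings are on the classpath; the class name is a placeholder):
import processing.core.PApplet;
import processing.core.PImage;
import processing.video.Capture;
import org.opencv.core.Core;
import org.opencv.objdetect.HOGDescriptor;
public class PeopleDetection extends PApplet {
// Paste the sketch's fields plus setup() and draw() here, declared public.
// In Processing 3 the size() call moves out of setup() into settings():
public void settings() {
size(640, 480);
}
public static void main(String[] args) {
PApplet.main("PeopleDetection"); // launches the sketch the way the IDE would
}
}
With that wrapper the "missing main method" complaint goes away, and PImage, size(), and background() resolve because they all come from PApplet/processing.core.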
I'm trying to run two Kinect V1 cameras simultaneously in Processing 3. I have gotten to a solution that is not sustainable, and I am trying to make something more stable and reliable.
At the moment, whenever I try to run both cameras simultaneously in a single sketch, I am hit with the error:
"Could not claim interface on camera: -3
Failed to open camera subdevice or it is not disabled.Failed to open motor subddevice or it is not disabled.Failed to open audio subdevice or it is not disabled.There are no kinects, returning null"
One camera opens, the other does not. It is not always consistent which camera opens, which leads me to believe there's something tripping over permissions after the objects are created, or when the second object is initialized.
My code is as follows:
import SimpleOpenNI.*;
import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import org.openkinect.tests.*;
//Imported libraries
//Some might be unnecessary but I don't have time to check
//Better safe than sorry, maybe I'll delete later
Kinect kinect;
Kinect kinect2;
PImage depthImage;
PImage depthImage2;
//Set depth threshold
float minDepth = 996;
float maxDepth = 2493;
float iWidth1 = 0;
float iHeight1 = 0;
float iWidth2 = 0;
float iHeight2 = 0;
//Double check for the number of devices, mostly for troubleshooting
int numDevices = 0;
//control which device is being controlled (in case I want device control)
int deviceIndex = 0;
void setup() {
//set Arbitrary size
size(640, 360);
//Set up window to resize, need to figure out how to keep things centered
surface.setResizable(true);
//not necessary, but good for window management. Window label
surface.setTitle("KINECT 1");
//get number of devices, print to console
numDevices = Kinect.countDevices();
println("number of V1 Kinects = "+numDevices);
//set up depth for the first kinect tracking
kinect = new Kinect(this);
kinect.initDepth();
//Blank Image
depthImage = new PImage(kinect.width, kinect.height);
//set up second window
String [] args = {"2 Frame Test"};
SecondApplet sa = new SecondApplet();
PApplet.runSketch(args, sa);
}
//Draw first window's Kinect Threshold
void draw () {
if ((width/1.7778) < height) {
iWidth1 = width;
iHeight1 = width/1.7778;
} else {
iWidth1 = height*1.7778;
iHeight1 = height;
}
//Raw Image
image(kinect.getDepthImage(), 0, 0, iWidth1, iHeight1);
//Threshold Equation
int[] rawDepth = kinect.getRawDepth();
for (int i=0; i < rawDepth.length; i++) {
if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
depthImage.pixels[i] = color(255);
} else {
depthImage.pixels[i] = color(1);
}
}
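// note: unlike the single-camera example below, this sketch never calls
// depthImage.updatePixels() or draws depthImage after filling its pixels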
}
public class SecondApplet extends PApplet {
public void settings() {
//arbitrary size
size(640, 360);
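// note: settings() is normally reserved for size()/fullScreen();
// device setup like the Kinect init below usually belongs in setup()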
kinect2 = new Kinect(this);
kinect2.initDepth();
//Blank Image
depthImage2 = new PImage(kinect2.width, kinect2.height);
}
void draw () {
if ((width/1.7778) < height) {
iWidth2 = width;
iHeight2 = width/1.7778;
} else {
iWidth2 = height*1.7778;
iHeight2 = height;
}
image(kinect2.getDepthImage(), 0, 0, iWidth2, iHeight2);
surface.setResizable(true);
surface.setTitle("KINECT 2");
int[] rawDepth2 = kinect2.getRawDepth();
for (int i=0; i < rawDepth2.length; i++) {
if (rawDepth2[i] >= minDepth && rawDepth2[i] <= maxDepth) {
depthImage2.pixels[i] = color(255);
} else {
depthImage2.pixels[i] = color(1);
}
}
}
}
Curiously, the code confirms in the console that there are two Kinect devices connected. For some reason, it just cannot access both at the same time.
I'm not a very experienced coder, so this code might look amateurish. I'm open to feedback on other parts, but I'm really just looking to solve this problem.
This code returns the error pasted above when there are two Kinect V1s connected to the computer.
Running macOS 11.6.8 on an Intel MacBook Pro.
Using Daniel Shiffman's OpenKinect for Processing as a starting point for the code.
I've run a successful iteration of this code with a slimmed-down version of Daniel Shiffman's Depth Threshold example.
import org.openkinect.freenect.*;
import org.openkinect.processing.*;
Kinect kinect;
// Depth image
PImage depthImg;
// Which pixels do we care about?
// These thresholds can also be found with a variety of methods
float minDepth = 996;
float maxDepth = 2493;
// What is the kinect's angle
float angle;
void setup() {
size(1280, 480);
kinect = new Kinect(this);
kinect.initDepth();
angle = kinect.getTilt();
// Blank image
depthImg = new PImage(kinect.width, kinect.height);
}
void draw() {
// Draw the raw image
image(kinect.getDepthImage(), 0, 0);
// Calibration
//minDepth = map(mouseX,0,width, 0, 4500);
//maxDepth = map(mouseY,0,height, 0, 4500);
// Threshold the depth image
int[] rawDepth = kinect.getRawDepth();
for (int i=0; i < rawDepth.length; i++) {
if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
depthImg.pixels[i] = color(255);
} else {
depthImg.pixels[i] = color(0);
}
}
// Draw the thresholded image
depthImg.updatePixels();
image(depthImg, kinect.width, 0);
//Comment for Calibration
fill(0);
text("TILT: " + angle, 10, 20);
text("THRESHOLD: [" + minDepth + ", " + maxDepth + "]", 10, 36);
//Calibration Text
//fill(255);
//textSize(32);
//text(minDepth + " " + maxDepth, 10, 64);
}
Using this code, I was able to get both cameras operating with the following process:
1. Connect a single Kinect V1 to the computer.
2. Open and run the above code.
3. Duplicate the sketch file.
4. Connect the second Kinect V1 to the computer.
5. Open and run the duplicated sketch of the same code.
This worked for my purposes and remained stable for an extended period of time. However, this isn't a sustainable solution if anyone other than me wants to utilize this program.
Any help with this problem would be greatly appreciated
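One avenue that may be worth checking before settling on the two-sketch workaround: the OpenKinect for Processing source includes a MultiKinect example for the V1 camera, and if your build of the library exposes its device-selection call, activateDevice(int) (an assumption to verify against your installed version), a single sketch could address each camera by index:
Kinect kinectA;
Kinect kinectB;
void setup() {
size(1280, 360);
kinectA = new Kinect(this);
kinectA.activateDevice(0); // assumed API: bind to the first physical device
kinectA.initDepth();
kinectB = new Kinect(this);
kinectB.activateDevice(1); // assumed API: bind to the second physical device
kinectB.initDepth();
}
void draw() {
image(kinectA.getDepthImage(), 0, 0);
image(kinectB.getDepthImage(), 640, 0);
}
If that call isn't present in your version, the duplicated-sketch approach above remains the fallback.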
I am trying to draw an .obj file offscreen under Android.
The image should then be passed to OpenCV for further use as a matrix.
In 2D the whole thing already works without problems.
import java.nio.ByteBuffer;
import java.nio.IntBuffer;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import processing.core.PApplet;
import processing.core.PGraphics;
import processing.core.PImage;
import processing.core.PShape;
public class ObjDrawer extends PApplet {
PGraphics buffer;
PShape obj;
int width;
int height;
public ObjDrawer(int width, int height) {
this.width = width;
this.height = height;
setup();
}
@Override
public void setup() {
buffer = createGraphics(this.width, this.height, P3D);
//obj = loadShape("Wuerfel_40mm.obj");
noLoop();
}
public Mat testDraw() {
buffer.beginDraw();
buffer.background(255);
buffer.ellipse(200, 200, 100, 100);
//buffer.shape(obj);
buffer.endDraw();
return toMat(buffer.get());
}
Mat toMat(PImage image) {
//source: https://gist.github.com/Spaxe/3543f0005e9f8f3c4dc5
int w = image.width;
int h = image.height;
Mat mat = new Mat(h, w, CvType.CV_8UC4);
byte[] data8 = new byte[w * h * 4];
int[] data32 = new int[w * h];
arrayCopy(image.pixels, data32);
ByteBuffer bBuf = ByteBuffer.allocate(w * h * 4);
IntBuffer iBuf = bBuf.asIntBuffer();
iBuf.put(data32);
bBuf.get(data8);
mat.put(0, 0, data8);
return mat;
}
}
When using 3D, I now get the following error message:
java.lang.NullPointerException: Attempt to invoke virtual method 'boolean processing.core.PGraphics.isGL()' on a null object reference
at processing.core.PApplet.makeGraphics(PApplet.java:1584)
at processing.core.PApplet.createGraphics(PApplet.java:1568)
at Analyzer.ObjDrawer.setup(ObjDrawer.java:52)
at Analyzer.ObjDrawer.<init>(ObjDrawer.java:45)
at com.quickbirdstudios.opencvexample.MainActivity$cvCameraViewListener$1.onCameraFrame(MainActivity.kt:75)
at org.opencv.android.CameraBridgeViewBase.deliverAndDrawFrame(CameraBridgeViewBase.java:392)
at org.opencv.android.JavaCameraView$CameraWorker.run(JavaCameraView.java:373)
at java.lang.Thread.run(Thread.java:920)
In my research I read about A3D, but it is missing for me. Since everything I found regarding A3D was about ten years old: has anything changed in this regard, has it been replaced, or do I need to include it separately?
I have integrated Processing as shown in the tutorial at https://android.processing.org/tutorials/android_studio/index.html.
I use Android Mode for Processing 3 (4.3.0), because the mode manager in Processing 4 is currently broken.
Many thanks in advance!
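For what it's worth, the stack trace points at createGraphics() running before the sketch is initialized: the constructor calls setup() directly, so the primary surface that makeGraphics() checks is still null. In Android mode a sketch normally gets that surface by being hosted through the lifecycle shown in the linked tutorial, roughly like this (a sketch of that pattern, assuming the manual setup() call is removed from the ObjDrawer constructor; MainActivity is a placeholder host):
import android.os.Bundle;
import android.widget.FrameLayout;
import androidx.appcompat.app.AppCompatActivity;
import processing.android.CompatUtils;
import processing.android.PFragment;
import processing.core.PApplet;
public class MainActivity extends AppCompatActivity {
private PApplet sketch;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
FrameLayout frame = new FrameLayout(this);
frame.setId(CompatUtils.getUniqueViewId());
setContentView(frame);
// Let Processing drive the lifecycle: setup() is then called for you once
// the surface exists, so createGraphics(w, h, P3D) has a renderer to attach to.
sketch = new ObjDrawer(640, 480);
PFragment fragment = new PFragment(sketch);
fragment.setView(frame, this);
}
}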
I need to store an HCURSOR in a BufferedImage with its real size and color.
I have found similar questions 1 and 2, which work fine with the standard 32x32 cursor, but if I change the color or size, then the BufferedImage becomes invalid, giving me a result like this:
At first, my problem was getting the real cursor size, but then I found the way to get it via JNA from the registry.
Then I need to save it to a BufferedImage. I tried to use the code snippets getImageByHICON() and getIcon() from the first link above, but there's an error somewhere -- the image is still incorrect or broken. Maybe I don't understand how to use them correctly, because I am not very familiar with BufferedImage creation.
How can I save an HCURSOR to a BufferedImage if I have the cursor's real size and the CURSORINFO?
Here is my full code:
import com.sun.jna.Memory;
import com.sun.jna.platform.win32.*;
import javax.swing.*;
import java.awt.*;
import java.awt.image.BufferedImage;
class CursorExtractor {
public static void main(String[] args) {
BufferedImage image = getCursor();
JLabel icon = new JLabel();
icon.setIcon(new ImageIcon(image));
JFrame frame = new JFrame();
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setContentPane(icon);
frame.pack();
frame.setVisible(true);
Toolkit toolkit = Toolkit.getDefaultToolkit();
Point pointerPos = new Point(1, 1);
Cursor c = toolkit.createCustomCursor(image, pointerPos, "cursorname");
frame.setCursor(c);
}
public static BufferedImage getCursor() {
// Read an int (& 0xFFFFFFFFL for large unsigned int)
int baseSize = Advapi32Util.registryGetIntValue(
WinReg.HKEY_CURRENT_USER, "Control Panel\\Cursors", "CursorBaseSize");
final User32.CURSORINFO cursorinfo = new User32.CURSORINFO();
User32.INSTANCE.GetCursorInfo(cursorinfo);
WinDef.HCURSOR hCursor = cursorinfo.hCursor;
return getImageByHICON(baseSize, baseSize, hCursor);
}
public static BufferedImage getImageByHICON(final int width, final int height, final WinDef.HICON hicon) {
final WinGDI.ICONINFO iconinfo = new WinGDI.ICONINFO();
try {
// get icon information
if (!User32.INSTANCE.GetIconInfo(hicon, iconinfo)) {
return null;
}
final WinDef.HWND hwdn = new WinDef.HWND();
final WinDef.HDC dc = User32.INSTANCE.GetDC(hwdn);
if (dc == null) {
return null;
}
try {
final int nBits = width * height * 4;
// final BitmapInfo bmi = new BitmapInfo(1);
final Memory colorBitsMem = new Memory(nBits);
// // Extract the color bitmap
final WinGDI.BITMAPINFO bmi = new WinGDI.BITMAPINFO();
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = -height;
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = WinGDI.BI_RGB;
GDI32.INSTANCE.GetDIBits(dc, iconinfo.hbmColor, 0, height, colorBitsMem, bmi, WinGDI.DIB_RGB_COLORS);
// g32.GetDIBits(dc, iconinfo.hbmColor, 0, size, colorBitsMem,
// bmi,
// GDI32.DIB_RGB_COLORS);
final int[] colorBits = colorBitsMem.getIntArray(0, width * height);
final BufferedImage bi = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
bi.setRGB(0, 0, width, height, colorBits, 0, height);
return bi;
} finally {
com.sun.jna.platform.win32.User32.INSTANCE.ReleaseDC(hwdn, dc);
}
} finally {
User32.INSTANCE.DestroyIcon(new WinDef.HICON(hicon.getPointer()));
GDI32.INSTANCE.DeleteObject(iconinfo.hbmColor);
GDI32.INSTANCE.DeleteObject(iconinfo.hbmMask);
}
}
}
I originally answered this question suggesting that you use the GetSystemMetrics() function, using the constant SM_CXCURSOR (13) for the width of the cursor in pixels, and SM_CYCURSOR (14) for the height. The linked documentation states "The system cannot create cursors of other sizes."
But then I see you posted a similar question here, and stated that those values don't change from 32x32. What happens there, as noted in this answer, is that the cursor is still actually that size, but only the smaller image is displayed on the screen; the rest of the pixels are simply "invisible". The same appears to be true for "larger" images, in that internally the "icon" associated with the cursor is still the same 32x32 size, but the screen displays something else.
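For reference, querying those metrics through JNA is a one-liner each (13 and 14 are the raw SM_CXCURSOR/SM_CYCURSOR values):
// Both report the size of the cursor bitmap Windows tracks internally,
// which stays 32x32 even when the pointer drawn on screen is larger.
int cursorWidth = User32.INSTANCE.GetSystemMetrics(13); // SM_CXCURSOR
int cursorHeight = User32.INSTANCE.GetSystemMetrics(14); // SM_CYCURSOR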
Interestingly, the icon when hovering over the Swing window is always 32x32. Your choice to use Cursor c = toolkit.createCustomCursor(image, pointerPos, "cursorname"); is scaling down the bitmap image to a new (smaller) cursor in the window. I can keep the default cursor with a simple:
Cursor c = Cursor.getDefaultCursor();
I made the following changes to your code to get an ugly pixellated version at the right size:
changed method arguments width and height to w and h: getImageByHICON(final int w, final int h, final WinDef.HICON hicon)
at the start of the try block, set int width = 32 and int height = 32.
after fetching the colorBitsMem from GetDIBits() inserted the following:
final int[] colorBitsBase = colorBitsMem.getIntArray(0, width * height);
final int[] colorBits = new int[w * h];
for (int row = 0; row < h; row++) {
for (int col = 0; col < w; col++) {
int r = row * 32 / h;
int c = col * 32 / w;
colorBits[row * w + col] = colorBitsBase[r * 32 + c];
}
}
So with a 64x64 system icon, I see this in the swing window:
That size matches my mouse cursor. The pixels, not so much.
Another option, inspired by this answer, is to use better bitmap scaling than my simple integer math on pixels. In your getCursor() method, do:
BufferedImage before = getImageByHICON(32, 32, hCursor);
int w = before.getWidth();
int h = before.getHeight();
BufferedImage after = new BufferedImage(baseSize, baseSize, BufferedImage.TYPE_INT_ARGB);
AffineTransform at = new AffineTransform();
at.scale(baseSize / 32d, baseSize / 32d);
AffineTransformOp scaleOp = new AffineTransformOp(at, AffineTransformOp.TYPE_BILINEAR);
after = scaleOp.filter(before, after);
return after;
Which is giving me this:
Yet another option, in your getCursor() method, is CopyImage().
WinDef.HCURSOR hCursor = cursorinfo.hCursor;
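// CopyImage flags: 2 = IMAGE_CURSOR, 0x00004000 = LR_COPYFROMRESOURCE,
// which tells Windows to re-read the cursor resource at the requested size.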
HANDLE foo = User32.INSTANCE.CopyImage(hCursor, 2, baseSize, baseSize, 0x00004000);
return getImageByHICON(baseSize, baseSize, new WinDef.HCURSOR(foo.getPointer()));
Gives this:
I am trying to create a color-tracking bird flock, using live video from my webcam. I was instructed to use a constructor to create an array of .gifs that could work independently and follow a specific color around the video.
I did some research, and this is as far as I got. Now I am getting an error that I don't really understand. For a very early dummy example of my intentions with the code, please see this .gif: Flock of birds
import processing.video.*;
import gifAnimation.*;
video = new Movie(); /// This is the line that gives me the error
// class
Birdy [] arrayOfBirds;
int numberOfBirds = 10;
class Birdy
{
//variables
int numberOfBeaks;
String birdName;
color birdColor;
PVector location;
// constructor, allows you to make new Birds in the rest of the code
// A constructor is part of the class
Birdy (int nob, String bname, color bColor, PVector loc) {
numberOfBeaks = nob;
birdName = bname;
birdColor = bColor;
location = loc;
}
//The bird appears
void showBird()
{
fill(birdColor);
textSize(24);
text(birdName, location.x, location.y);
ellipse(location.x, location.y, 20, 20);
}
}
void setup() {
size(640, 480);
//fill the array Of Birds with new Birds
arrayOfBirds = new Birdy[numberOfBirds];
//to make 10 birds and put them in the array
for (int i = 0; i < numberOfBirds; i++)
{
// each new bird needs its own set of parameters but will do this when i figure out how to work with this one first!
arrayOfBirds[i]= new Birdy(2, "Tweety "+i, color(255-(i*25), i*25, 255), new PVector(i*40, i*40));
}
}
void draw(int x, int y) {
if (video.available()) {
video.read();
image(video, 0, 0, width, height); // Draw the webcam video onto the screen
int colorX = 0; // X-coordinate of the closest in color video pixel
int colorY = 0; // Y-coordinate of the closest in color video pixel
float closestColor = 500; //we set this to be arbitrarily large; once the program runs, the first pixel it scans will be set to this value
// Search for the closest in color pixel: For each row of pixels in the video image and
// for each pixel in the yth row, compute each pixel's index in the video
background(0);
//show that first bird we called Tweety by calling the showBird() function on Tweety
Tweety.showBird();
//show all the birds in the array by calling the showBird() method on each object in the array
for(int i = 0; i < arrayOfBirds.length; i++){
arrayOfBirds[i].location = new PVector(x,y);
arrayOfBirds[i].showBird();
}
}
setup();
Gif loopingGif;
Capture video;
size(640, 480); // Change size to 320 x 240 if too slow at 640 x 480. Uses the default video input, but I don't think it works
video = new Capture(this, width, height, 30);
video.start();
noStroke();
smooth();
frameRate(10);
loopingGif = new Gif(this, "circle.gif");
String [] animas = {};
video.loadPixels();
int index = 0;
for (int y = 0; y < video.height; y++) {
for (int x = 0; x < video.width; x++) {
// Get the color stored in the pixel
color pixelValue = video.pixels[index];
// Determine the color of the pixel
float colorProximity = abs(red(pixelValue)-27)+abs(green(pixelValue)-162)+abs(blue(pixelValue)-181); //select pixel
// If that value is closer in color value than any previous, then store the
// color proximity of that pixel, as well as its (x,y) location
if (colorProximity < closestColor) {
closestColor = colorProximity;
closestColor=closestColor-10; //Once it "locks" on to an object of color, it wont let go unless something a good bit better (closer in color) comes along
colorY = y;
colorX = x;
}
index++;
}
draw(x,y);
}
image (loopingGif, colorX, colorY);
loopingGif.play();
}
You need to declare your variable by giving it a type:
Movie video = new Movie();
You've got some other weird things going on here. Why are you specifically calling the setup() function? Processing does that for you automatically. You've also got a bunch of code outside of a function at the bottom of your sketch. Maybe you meant to put that code inside the setup() function?
If you're still getting errors, edit your question to include their exact full text.
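In the meantime, here is the rough skeleton a capture sketch normally follows (stripped down to just the video plumbing, with your bird class and color tracking left out):
import processing.video.*;
Capture video;
void setup() {
size(640, 480);
// The capture device can only be created once "this" (the sketch) exists,
// which is why it belongs in setup() rather than at the top of the file.
video = new Capture(this, width, height, 30);
video.start();
}
// draw() takes no parameters; Processing calls it once per frame.
void draw() {
if (video.available()) {
video.read();
}
image(video, 0, 0, width, height);
}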
I'm trying to have an image scale to a certain size depending on the horizontal size sent to an update function, but the following code doesn't seem to size the image correctly.
EDIT: The code:
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import android.view.View;
public class GlassesView extends View {
private Paint paint;
private BitmapFactory.Options options;
private Bitmap bitmapOrg;
private Bitmap target;
private Bitmap bitmapRev;
private Bitmap resizedBitmap;
private int currY;
public int glassesX;
public int glassesY;
public float glassesSizeX;
public float glassesSizeY;
private boolean drawGlasses;
private boolean glassesMirrored;
public GlassesView(Context context) {
super(context);
paint = new Paint();
paint.setDither(false);
paint.setAntiAlias(false);
options = new BitmapFactory.Options();
options.inDither = false;
options.inScaled = false;
bitmapOrg = Bitmap.createScaledBitmap(BitmapFactory.decodeResource(getResources(),
R.drawable.micro_glasses, options), 32, 5, false);
bitmapRev = Bitmap.createScaledBitmap(BitmapFactory.decodeResource(getResources(),
R.drawable.glasses_reverse, options), 32, 5, false);
drawGlasses = false;
}
@Override
protected void onDraw(Canvas canvas) {
canvas.drawBitmap(target, 0, 0, paint);
boolean moving = currY < glassesY;
if (moving) {
currY++;
}
if (drawGlasses) {
int newWidth = resizedBitmap.getWidth();
int newHeight = resizedBitmap.getHeight();
Paint bluey = new Paint();
bluey.setColor(Color.argb(64, 0, 0, 255));
canvas.drawRect(new Rect(glassesX, currY, glassesX + newWidth,
currY + newHeight), bluey);
canvas.drawBitmap(resizedBitmap, glassesX, currY, paint);
}
if (moving) {
invalidate();
}
}
public void drawGlasses(int x1, int x2, int y, boolean mirror) {
drawGlasses = true;
glassesMirrored = mirror;
if (!mirror) {
glassesSizeX = (float) (x2 - x1) / (float) (25 - 16);
glassesSizeY = glassesSizeX;
glassesY = y - (int)(1*glassesSizeX);
glassesX = (int) (x1 - (glassesSizeX * 16));
} else {
glassesSizeX = (float) (x1 - x2) / (float) (25 - 16);
glassesSizeY = glassesSizeX;
glassesY = y - (int)(1*glassesSizeX);
glassesX = (int) (x1 - (glassesSizeX * 16));
}
currY = -1;
if (!glassesMirrored) {
resizedBitmap = Bitmap.createScaledBitmap(bitmapOrg,
(int) (bitmapOrg.getWidth() * glassesSizeX),
(int) (bitmapOrg.getHeight() * glassesSizeY), false);
} else {
resizedBitmap = Bitmap.createScaledBitmap(bitmapRev,
(int) (bitmapRev.getWidth() * glassesSizeX),
(int) (bitmapRev.getHeight() * glassesSizeY), false);
}
}
public void setTargetPic(Bitmap targetPic) {
target = targetPic;
}
}
The result. (The blue rectangle being the bounding box of the image's intended size)
Which part am I getting wrong?
EDIT 2:
Here are the glasses:
EDIT 3:
Out of curiosity, I ran it on my actual phone and got a much different result: the image was stretched past the intended blue box.
EDIT 4:
I tried running the app on a few emulators to see if it was an Android version incompatibility thing, but they all seemed to work perfectly. The scaling issue only occurs on my phone (Vibrant, rooted, CM7) and my cousin's (Droid, also rooted). These are the only physical devices I have tested on, but they both seem to have the same issue.
I'd really appreciate it if someone could help me out here; this is a huge roadblock in my project, and no other forums or message groups are responding.
EDIT 5:
I should mention that in update 4 the code changed a bit, which fixed the problem in the emulators as stated, but it still doesn't work on physical devices. Changes are updated in the code above. *desperate for help* :P
EDIT 6:
Yet another update, I tested the same code on my old G1, and it works perfectly as expected. I have absolutely no clue now.
Have you been using the /res/drawable-nodpi/ directory to store your images?
Apparently if you use the /drawable-ldpi/, /drawable-mdpi/ or /drawable-hdpi/ folders, Android will apply scaling to the image when you load it depending on the device. The reason your G1 may work is that it may not require any scaling depending on the folder you used.
Also, your Imgur links are broken, and your code doesn't seem correct: you call drawGlasses() with no arguments.
Bitmap resizedBitmap = Bitmap.createBitmap(bitmapOrg, 0, 0, glassesWidth,
glassesHeight, matrix, false);
Change the last parameter from false to true.
You can give it a try.
The following is the zoom method I used, scaling with a Matrix:
private void small() {
int bmpWidth=bmp.getWidth();
int bmpHeight=bmp.getHeight();
Log.i(TAG, "bmpWidth = " + bmpWidth + ", bmpHeight = " + bmpHeight);
/* Set the shrink ratio for the image */
double scale=0.8;
/* Compute the cumulative scale for this step */
scaleWidth=(float) (scaleWidth*scale);
scaleHeight=(float) (scaleHeight*scale);
/* Create the resized Bitmap object */
Matrix matrix = new Matrix();
matrix.postScale(scaleWidth, scaleHeight);
Bitmap resizeBmp = Bitmap.createBitmap(bmp,0,0,bmpWidth,
bmpHeight,matrix,true);
Found what was wrong. The image has to be in a folder called "drawable-nodpi", otherwise it will be scaled depending on DPI.
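A quick way to confirm the move worked is to decode the resource and log its size; from drawable-nodpi the decoded bitmap keeps the file's native pixel dimensions on every device (a sketch; run it anywhere a Context is available):
// If micro_glasses.png lives in res/drawable-nodpi/, this logs the file's
// native size regardless of the device's screen density.
Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.micro_glasses);
Log.i("scale-check", bmp.getWidth() + "x" + bmp.getHeight());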