I'm in a situation where I need to blend bitmaps on Android, and make the result match the iOS version of the app.
The iOS developers used Porter-Duff modes that are currently absent from Android's PorterDuff class out of the box.
Specifically I need the soft-light mode, but forget that: I decided to tackle the whole problem of the blend modes Android lacks.
Here are the links that I've found during research:
wikipedia
w3
specification
w3 svg specs
All of them include the desired formulas, so it seems easy enough to just write a method implementing every blending mode I might need.
However, that's where the trouble starts.
The formulas in the links above only deal with the color components, leaving alpha aside as if it were a given. But it's not.
So far I've ended up with the code from this SO question. I'm thinking about rewriting its overlay_byte(int sc, int dc, int sa, int da) method to implement the soft-light formula, but I can't figure out what to do with the sa and da variables.
P. S.: I'm aware that a pure-Java implementation of a blending mode will perform far worse than Android's native PorterDuff class, but that can be dealt with later...
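For reference, the W3C compositing spec folds alpha into a separable blend mode as co = αs·(1−αb)·Cs + αb·(1−αs)·Cb + αs·αb·B(Cb, Cs), normalized by the result alpha. A minimal sketch of that formula on 8-bit channels (the helper name and the functional-interface plumbing are mine, not from any Android API):

```java
// Sketch of the W3C compositing formula for a separable blend mode on
// non-premultiplied 8-bit channels. 'B' is the per-channel blend
// function (e.g. soft-light); cs/cb are source/backdrop channels,
// as/ab their alphas, all in 0..255.
static int blendChannel(int cs, int cb, int as, int ab, java.util.function.IntBinaryOperator B) {
    int term1 = as * (255 - ab) * cs;                 // source over transparent backdrop
    int term2 = ab * (255 - as) * cb;                 // backdrop showing through
    int term3 = as * ab * B.applyAsInt(cb, cs);       // both present: apply the blend
    int ao = as + ab - Math.round((as * ab) / 255f);  // union of the alphas
    if (ao == 0) return 0;
    return (term1 + term2 + term3) / (255 * ao);      // un-premultiply by the result alpha
}
```

This is exactly where the sa/da variables come in: they weight the blended term against the plain source and destination contributions.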
ADDED LATER:
Since yesterday I've done some deeper research and found a way to implement the soft-light blending mode in Java. Here's the source (the same code that runs inside Android when we use out-of-the-box modes like PorterDuff.Mode.LIGHTEN).
I took the softlight_byte method from there, along with everything it uses, and translated it to Java as best I could. The result works, but produces an incorrect image: the bottom part looks OK, but the top part gets over-burned. Here's my code, rewritten in Java:
public Bitmap applyBlendMode(Bitmap srcBmp, Bitmap destBmp) {
    int width = srcBmp.getWidth();
    int height = srcBmp.getHeight();
    int[] srcPixels = new int[width * height];
    int[] destPixels = new int[width * height];
    int[] resultPixels = new int[width * height];
    // Capture the config before recycling: querying a recycled bitmap is unsafe.
    Bitmap.Config config = srcBmp.getConfig();

    try {
        srcBmp.getPixels(srcPixels, 0, width, 0, 0, width, height);
        destBmp.getPixels(destPixels, 0, width, 0, 0, width, height);
        srcBmp.recycle();
        destBmp.recycle();
    } catch (IllegalArgumentException | ArrayIndexOutOfBoundsException e) {
        Log.e(TAG, "getPixels failed", e); // don't swallow the error silently
    }

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int rgbS = srcPixels[y * width + x];
            int aS = (rgbS >> 24) & 0xff;
            int rS = (rgbS >> 16) & 0xff;
            int gS = (rgbS >> 8) & 0xff;
            int bS = rgbS & 0xff;

            int rgbD = destPixels[y * width + x];
            int aD = (rgbD >> 24) & 0xff;
            int rD = (rgbD >> 16) & 0xff;
            int gD = (rgbD >> 8) & 0xff;
            int bD = rgbD & 0xff;

            // NOTE: the destination channel is passed as the 'sc' parameter
            // here; verify the source/dest roles match your intent.
            rS = softlight_byte(rD, rS, aS, aD);
            gS = softlight_byte(gD, gS, aS, aD);
            bS = softlight_byte(bD, bS, aS, aD);
            aS = aS + aD - Math.round((aS * aD) / 255f);

            resultPixels[y * width + x] = (aS << 24) | (rS << 16) | (gS << 8) | bS;
        }
    }
    return Bitmap.createBitmap(resultPixels, width, height, config);
}
int softlight_byte(int sc, int dc, int sa, int da) {
    int m = (da != 0) ? dc * 256 / da : 0;
    int rc;
    if (2 * sc <= sa) {
        rc = dc * (sa + ((2 * sc - sa) * (256 - m) >> 8));
    } else if (4 * dc <= da) {
        int tmp = (4 * m * (4 * m + 256) * (m - 256) >> 16) + 7 * m;
        rc = dc * sa + (da * (2 * sc - sa) * tmp >> 8);
    } else {
        int tmp = sqrtUnitByte(m) - m;
        rc = dc * sa + (da * (2 * sc - sa) * tmp >> 8);
    }
    return clampDiv255Round(rc + sc * (255 - da) + dc * (255 - sa));
}
/** Returns 255 * sqrt(n / 255). */
private int sqrtUnitByte(int n) {
    return skSqrtBits(n, 15 + 4);
}

/** Returns the integer square root of value, with a bias of bitBias. */
int skSqrtBits(int value, int bitBias) {
    if (value < 0) { Log.wtf(TAG, "ASSERTION!"); } // bitBias is always 15 + 4, no check required
    int root = 0;
    int remHi = 0;
    int remLo = value;
    do {
        root <<= 1;
        // Use the unsigned shift here: Java's '>>' sign-extends, so once
        // remLo's top bit is set (which happens for inputs >= 128 after a
        // few iterations) 'remLo >> 30' corrupts remHi. The C original
        // works on unsigned integers and has no such problem.
        remHi = (remHi << 2) | (remLo >>> 30);
        remLo <<= 2;
        int testDiv = (root << 1) + 1;
        if (remHi >= testDiv) {
            remHi -= testDiv;
            root++;
        }
    } while (--bitBias >= 0);
    return root;
}
int clampDiv255Round(int prod) {
    if (prod <= 0) {
        return 0;
    } else if (prod >= 255 * 255) {
        return 255;
    } else {
        return skDiv255Round(prod);
    }
}

/** Just the rounding step of SkDiv255Round: round(value / 255). */
private static int skDiv255Round(int prod) {
    prod += 128;
    return (prod + (prod >> 8)) >> 8;
}
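The integer helpers can be sanity-checked in isolation against their floating-point definitions before wiring everything into applyBlendMode. A standalone copy for quick testing (note the unsigned shift `>>>`, since Java's `>>` sign-extends once the top bit of remLo is set):

```java
// Standalone copies of the integer helpers, for checking them against
// their floating-point definitions: sqrtUnitByte(n) ~ 255 * sqrt(n/255)
// and skDiv255Round(p) == round(p / 255).
static int skSqrtBits(int value, int bitBias) {
    int root = 0, remHi = 0, remLo = value;
    do {
        root <<= 1;
        remHi = (remHi << 2) | (remLo >>> 30); // unsigned shift, not '>>'
        remLo <<= 2;
        int testDiv = (root << 1) + 1;
        if (remHi >= testDiv) { remHi -= testDiv; root++; }
    } while (--bitBias >= 0);
    return root;
}
static int sqrtUnitByte(int n) { return skSqrtBits(n, 15 + 4); }
static int skDiv255Round(int prod) {
    prod += 128;
    return (prod + (prod >> 8)) >> 8;
}
```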
Can someone check this for possible mistakes? This is really important to me!
P. S.: applyBlendMode was taken from the SO question I mentioned above, and the softlight_byte(rD, rS, aS, aD) implementation was rewritten from C++.
Related
I have the following constructor for a RecoloredImage that takes an old image and replaces every pixel of the old color with the new color. However, the image doesn't actually change. The code between the comments is purely for testing purposes, and the resulting printed value is not at all the new color I want.
public RecoloredImaged(Image inputImage, Color oldColor, Color newColor) {
    int width = (int) inputImage.getWidth();
    int height = (int) inputImage.getHeight();
    WritableImage outputImage = new WritableImage(width, height);
    PixelReader reader = inputImage.getPixelReader();
    PixelWriter writer = outputImage.getPixelWriter();
    // -- testing --
    PixelReader newReader = outputImage.getPixelReader();
    // -- end testing --
    int ob = (int) oldColor.getBlue() * 255;
    int or = (int) oldColor.getRed() * 255;
    int og = (int) oldColor.getGreen() * 255;
    int nb = (int) newColor.getBlue() * 255;
    int nr = (int) newColor.getRed() * 255;
    int ng = (int) newColor.getGreen() * 255;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int argb = reader.getArgb(x, y);
            int a = (argb >> 24) & 0xFF;
            int r = (argb >> 16) & 0xFF;
            int g = (argb >> 8) & 0xFF;
            int b = argb & 0xFF;
            if (g == og && r == or && b == ob) {
                r = nr;
                g = ng;
                b = nb;
            }
            argb = (a << 24) | (r << 16) | (g << 8) | b;
            writer.setArgb(x, y, argb);
            // -- testing --
            String s = Integer.toHexString(newReader.getArgb(x, y));
            if (!s.equals("0"))
                System.out.println(s);
            // -- end testing --
        }
    }
    image = outputImage;
}
The cast operator has a higher precedence than the multiplication operator. Your calculations for the or, ..., nb values are therefore compiled to the same bytecode as this code:
int ob = ((int) oldColor.getBlue()) * 255;
int or = ((int) oldColor.getRed()) * 255;
int og = ((int) oldColor.getGreen()) * 255;
int nb = ((int) newColor.getBlue()) * 255;
int nr = ((int) newColor.getRed()) * 255;
int ng = ((int) newColor.getGreen()) * 255;
Just add brackets to tell Java to do the multiplication before casting. Otherwise you'll only get 0 or 255 as results.
int ob = (int) (oldColor.getBlue() * 255);
int or = (int) (oldColor.getRed() * 255);
int og = (int) (oldColor.getGreen() * 255);
int nb = (int) (newColor.getBlue() * 255);
int nr = (int) (newColor.getRed() * 255);
int ng = (int) (newColor.getGreen() * 255);
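A minimal demonstration of the precedence difference (the value is illustrative):

```java
// The unary cast binds tighter than '*', so the fractional color
// component is truncated to 0 (or 1) before the multiplication.
double green = 0.5;                 // a JavaFX color component in [0, 1]
int wrong = (int) green * 255;      // parsed as ((int) green) * 255  ->  0
int right = (int) (green * 255);    // multiply first, then truncate  ->  127
```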
I have a class (an AsyncTask) which does image processing and generates YUV bytes continuously, at roughly 200 ms intervals.
I then send these YUV bytes to another method where they are recorded using an FFmpeg frame recorder:
public void recordYuvData() {
    byte[] yuv = getNV21();
    System.out.println(yuv.length + " returned yuv bytes ");
    if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
        startTime = System.currentTimeMillis();
        return;
    }
    if (RECORD_LENGTH > 0) {
        int i = imagesIndex++ % images.length;
        yuvimage = images[i];
        timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
    }
    /* get video data */
    if (yuvimage != null && recording) {
        ((ByteBuffer) yuvimage.image[0].position(0)).put(yuv);
        if (RECORD_LENGTH <= 0) {
            try {
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                }
                recorder.record(yuvimage);
            } catch (FFmpegFrameRecorder.Exception e) {
                e.printStackTrace();
            }
        }
    }
}
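The RECORD_LENGTH branch above writes each frame into a fixed pool of images, wrapping around with modulo indexing. Seen in isolation (the pool size here is illustrative):

```java
// Ring-buffer indexing as in 'imagesIndex++ % images.length':
// frames cycle through a fixed pool of slots once it fills up.
int poolSize = 4;
int[] slot = new int[8];
int imagesIndex = 0;
for (int frame = 0; frame < 8; frame++) {
    slot[frame] = imagesIndex++ % poolSize;  // 0,1,2,3,0,1,2,3
}
```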
This method, recordYuvData(), is initiated on a button click.
If I initiate it only once, then only the initial image gets recorded; the rest are not.
If I initiate it each time image processing ends, it records, but leads to a 'weird' fps count in the video, and eventually the application crashes.
What I suspect is that at the end of each image-processing pass a new instance of recordYuvData() is created without the previous one ending, accumulating many instances (correct me if I am wrong).
So, how do I update ONLY the yuv bytes in the method without running it again?
Thanks!
Edit:
On click:
record.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        recordYuvdata();
        startRecording();
    }
});
getNV21():
byte[] getNV21(Bitmap bitmap) {
    int inputWidth = 1024;
    int inputHeight = 640;
    int[] argb = new int[inputWidth * inputHeight];
    bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    System.out.println(argb.length + "#getpixels ");
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    return yuv;
}
void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    System.out.println(yuv420sp.length + " #encoding " + frameSize);
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            a = (argb[index] & 0xff000000) >> 24; // a is not used here
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = argb[index] & 0xff;
            // well-known RGB to YUV conversion
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // NV21 has a plane of Y and an interleaved plane of VU samples,
            // each chroma channel subsampled by a factor of 2: for every
            // 4 Y pixels there is 1 V and 1 U. Note the sampling is every
            // other pixel AND every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}
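The subsampling described in the comments fixes the buffer size: a full-resolution Y plane plus one interleaved V/U byte pair per 2×2 block, which is why getNV21 allocates width * height * 3 / 2 bytes. As a standalone check (the helper name is mine):

```java
// NV21 buffer size: width*height luma bytes, then one V/U byte pair
// for every 2x2 pixel block (chroma subsampled by 2 in each direction).
static int nv21BufferSize(int width, int height) {
    int ySize = width * height;
    int vuPairs = (width / 2) * (height / 2); // assumes even dimensions
    return ySize + 2 * vuPairs;               // == width * height * 3 / 2
}
```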
I am trying to get the difference between two images (same size). I found this code:
BufferedImage img1 = null;
BufferedImage img2 = null;
try {
    URL url1 = new URL("http://rosettacode.org/mw/images/3/3c/Lenna50.jpg");
    URL url2 = new URL("http://rosettacode.org/mw/images/b/b6/Lenna100.jpg");
    img1 = ImageIO.read(url1);
    img2 = ImageIO.read(url2);
} catch (IOException e) {
    e.printStackTrace();
}
int width1 = img1.getWidth(null);
int width2 = img2.getWidth(null);
int height1 = img1.getHeight(null);
int height2 = img2.getHeight(null);
if ((width1 != width2) || (height1 != height2)) {
    System.err.println("Error: Images dimensions mismatch");
    System.exit(1);
}
long diff = 0;
for (int i = 0; i < height1; i++) {
    for (int j = 0; j < width1; j++) {
        int rgb1 = img1.getRGB(i, j);
        int rgb2 = img2.getRGB(i, j);
        int r1 = (rgb1 >> 16) & 0xff;
        int g1 = (rgb1 >> 8) & 0xff;
        int b1 = rgb1 & 0xff;
        int r2 = (rgb2 >> 16) & 0xff;
        int g2 = (rgb2 >> 8) & 0xff;
        int b2 = rgb2 & 0xff;
        diff += Math.abs(r1 - r2);
        diff += Math.abs(g1 - g2);
        diff += Math.abs(b1 - b2);
    }
}
double n = width1 * height1 * 3;
double p = diff / n / 255.0;
System.out.println("diff percent: " + (p * 100.0));
It works fine for the two images given in the URLs, but when I changed the images I get this exception:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Coordinate out of bounds!
at sun.awt.image.ByteInterleavedRaster.getDataElements(ByteInterleavedRaster.java:299)
at java.awt.image.BufferedImage.getRGB(BufferedImage.java:871)
at Main.main(Main.java:77)
I changed the code to :
File sourceimage1 = new File("C:\\lo.jpg");
File sourceimage2 = new File("C:\\lo1.jpg");
img1 = ImageIO.read(sourceimage1);
img2 = ImageIO.read(sourceimage2);
The two images are black and white, and their dimensions are smaller than the two previous images (Lenna50 and Lenna100).
lo.jpg and lo1.jpg are the same image, used to test the algorithm, and they are also black and white.
How can I change the code to make it work for any image dimensions?
Swap the i and j
in the following code, as I have done below:
int rgb1 = img1.getRGB(j, i);
int rgb2 = img2.getRGB(j, i);
Your error says that the call img1.getRGB(i, j) goes out of bounds of the image raster: getRGB expects (x, y), so the first argument must stay below the width. Check the values of i and j inside your inner loop. As Hirak already pointed out, it may be that your loop bounds don't match the image, which is why it runs past the height or width.
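A small self-contained demonstration of why the argument order matters on non-square images:

```java
// BufferedImage.getRGB takes (x, y). On a 2x3 image (width 2, height 3),
// passing the row index first overruns the raster as soon as the row
// index reaches the width.
java.awt.image.BufferedImage img =
        new java.awt.image.BufferedImage(2, 3, java.awt.image.BufferedImage.TYPE_INT_RGB);
boolean wrongOrderFailed = false;
for (int i = 0; i < img.getHeight(); i++) {      // i = row
    for (int j = 0; j < img.getWidth(); j++) {   // j = column
        img.getRGB(j, i);                        // correct: x = j, y = i
        try {
            img.getRGB(i, j);                    // swapped: fails once i >= width
        } catch (ArrayIndexOutOfBoundsException e) {
            wrongOrderFailed = true;
        }
    }
}
```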
I am using an Android phone running Ice Cream Sandwich, and I am checking the face-detection behaviour on it, referring to the sample code available from Google.
While debugging on my phone, getMaxNumDetectedFaces() returns 3, so my phone supports face detection.
But the following code is not working:
public void onFaceDetection(Face[] faces, Camera face_camera1) {
    // TODO Auto-generated method stub
    if (faces.length > 0) {
        Log.d("FaceDetection", "face detected:" + faces.length
                + "Face 1 location X:" + faces[0].rect.centerX()
                + "Y:" + faces[0].rect.centerY());
    }
}
Here faces.length returns zero. Please suggest how to solve this.
I worked with face detection some time ago. When I was working on it, onFaceDetection didn't work for me either, so I found another way.
I worked with PreviewCallback: this method takes each frame and you can use it to recognize faces. The only problem is the format; the default is NV21, and you can change it with setPreviewFormat(int), but that didn't work for me either, so I had to do the conversion myself to get the Bitmap that FaceDetector accepts. Here is my code:
public PreviewCallback mPreviewCallback = new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        // getBitmapFromNV21 is the conversion method below
        Bitmap mfoto_imm = this.getBitmapFromNV21(data, size.width, size.height, true);
        Bitmap mfoto = mfoto_imm.copy(Bitmap.Config.RGB_565, true);
        imagen.setImageBitmap(mfoto);
        int alto = mfoto.getHeight();
        int ancho = mfoto.getWidth();
        int count;
        canvas = new Canvas(mfoto);
        dibujo.setColor(Color.GREEN);
        dibujo.setAntiAlias(true);
        dibujo.setStrokeWidth(8);
        canvas.drawBitmap(mfoto, matrix, dibujo);
        FaceDetector mface = new FaceDetector(ancho, alto, 1);
        FaceDetector.Face[] face = new FaceDetector.Face[1];
        count = mface.findFaces(mfoto, face);
        PointF midpoint = new PointF();
        int fpx = 0;
        int fpy = 0;
        if (count > 0) {
            face[count - 1].getMidPoint(midpoint); // you have to take the last one, minus 1
            fpx = (int) midpoint.x; // middle point of the face in x
            fpy = (int) midpoint.y; // middle point of the face in y
        }
        canvas.drawCircle(fpx, fpy, 10, dibujo); // draw a circle on the middle of the face
        imagen.invalidate();
    }
and here are the conversion methods.
public Bitmap getBitmapFromNV21(byte[] data, int width, int height, boolean rotated) {
    Bitmap bitmap = null;
    int[] pixels = new int[width * height];
    // Convert the array
    this.yuv2rgb(pixels, data, width, height, rotated);
    if (rotated) {
        bitmap = Bitmap.createBitmap(pixels, height, width, Bitmap.Config.RGB_565);
    } else {
        bitmap = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.RGB_565);
    }
    return bitmap;
}
public void yuv2rgb(int[] out, byte[] in, int width, int height, boolean rotated)
        throws NullPointerException, IllegalArgumentException {
    final int size = width * height;
    if (out == null) throw new NullPointerException("buffer 'out' == null");
    if (out.length < size) throw new IllegalArgumentException("buffer 'out' length < " + size);
    if (in == null) throw new NullPointerException("buffer 'in' == null");
    if (in.length < (size * 3 / 2)) throw new IllegalArgumentException("buffer 'in' length " + in.length + " < " + (size * 3 / 2));
    // YCrCb
    int Y, Cr = 0, Cb = 0;
    int Rn = 0, Gn = 0, Bn = 0;
    for (int j = 0, pixPtr = 0, cOff0 = size - width; j < height; j++) {
        if ((j & 0x1) == 0)
            cOff0 += width;
        int pixPos = height - 1 - j;
        for (int i = 0, cOff = cOff0; i < width; i++, cOff++, pixPtr++, pixPos += height) {
            // Get Y
            Y = 0xff & in[pixPtr]; // 0xff masks off the sign extension
            // Get Cr and Cb
            if ((pixPtr & 0x1) == 0) {
                Cr = in[cOff];
                if (Cr < 0) Cr += 128; else Cr -= 128; // center the unsigned value on zero
                Cb = in[cOff + 1];
                if (Cb < 0) Cb += 128; else Cb -= 128;
                Bn = Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
                Gn = -(Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
                Rn = Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
            }
            int R = Y + Rn;
            if (R < 0) R = 0; else if (R > 255) R = 255;
            int B = Y + Bn;
            if (B < 0) B = 0; else if (B > 255) B = 255;
            int G = Y + Gn;
            if (G < 0) G = 0; else if (G > 255) G = 255;
            // At this point you could apply a filter to the separate
            // components, e.g. swap two components or drop one.
            int rgb = 0xff000000 | (R << 16) | (G << 8) | B;
            // Depending on the 'rotated' option, the output buffer is
            // filled with or without the rotation applied.
            if (rotated)
                out[pixPos] = rgb;
            else
                out[pixPtr] = rgb;
        }
    }
}
};
setPreviewFormat(int) doesn't work on some devices, but maybe you can try creating the Bitmap without the conversion.
I hope this helps.
In our application, we need to capture frames from the camera at 33 fps and pass them to compression before sending them to the server.
My compression module can take a YUV image. My camera configuration is as follows:
Width = 500 px,
Height = 300 px,
Image format: YV12
In the preview callback:
camera.setPreviewCallback(new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera arg1) {
    }
});
The data size comes out to 230400, but I expected it to be around
500*300 (Y) + 500*300/4 (U) + 500*300/4 (V) = 225000,
i.e. 5400 bytes more than expected arrive. Does that mean I can ignore the remaining bytes?
I also need to create a YuvImage object, but the stride info isn't provided, so how can I create a YuvImage from the data above?
Sincere thanks for reading; I'd really appreciate it if anyone can help me out on this.
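The extra 5,400 bytes are consistent with the stride rules documented for ImageFormat.YV12, where each plane's row stride is padded to a 16-byte boundary (worth verifying against your device, but the arithmetic matches exactly):

```java
// YV12 buffer size per the ImageFormat.YV12 stride rules:
// yStride = align(width, 16), uvStride = align(yStride / 2, 16).
static int yv12BufferSize(int width, int height) {
    int yStride = (width + 15) & ~15;          // 500 -> 512
    int uvStride = ((yStride / 2) + 15) & ~15; // 256 -> 256
    int ySize = yStride * height;              // 512 * 300 = 153600
    int uvSize = uvStride * (height / 2);      // 256 * 150 = 38400
    return ySize + 2 * uvSize;                 // 153600 + 76800 = 230400
}
```

If this holds, the surplus bytes are per-row padding rather than trailing data you can simply drop: each luma row is 512 bytes apart even though only 500 of them are pixels, and those stride values are also what you would pass as the strides parameter when constructing a YuvImage.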
Can't help with the data-size question, but to get YUV from the camera preview you have two choices. If running Android 2.2 or later, you can use the android.graphics.YuvImage class and just pass its constructor your byte array from PreviewCallback.
If you need to support pre-2.2, then you need to do something like:
/**
 * Decodes a YUV frame to a buffer which can be used to create a bitmap.
 * Use this for OS versions below FROYO, which lack a native YUV decoder.
 * Decodes the Y, U, and V values of the YUV 420 buffer described as YCbCr_422_SP by Android.
 * @param rgb the outgoing array of RGB bytes
 * @param fg the incoming frame bytes
 * @param width width of the source frame
 * @param height height of the source frame
 * @throws NullPointerException
 * @throws IllegalArgumentException
 */
private static void decodeYUV_impl(int[] rgb, byte[] fg, int width, int height)
        throws NullPointerException, IllegalArgumentException {
    int sz = width * height;
    if (rgb == null)
        throw new NullPointerException("buffer out is null");
    if (rgb.length < sz)
        throw new IllegalArgumentException("buffer out size " + rgb.length + " < minimum " + sz);
    if (fg == null)
        throw new NullPointerException("buffer 'fg' is null");
    if (fg.length < sz * 3 / 2)
        throw new IllegalArgumentException("buffer fg size " + fg.length + " < minimum " + sz * 3 / 2);
    int i, j;
    int Y, Cr = 0, Cb = 0;
    for (j = 0; j < height; j++) {
        int pixPtr = j * width;
        final int jDiv2 = j >> 1;
        for (i = 0; i < width; i++) {
            Y = fg[pixPtr];
            if (Y < 0)
                Y += 256; // recover the unsigned byte value
            if ((i & 0x1) != 1) {
                final int cOff = sz + jDiv2 * width + (i >> 1) * 2;
                Cb = fg[cOff];
                if (Cb < 0)
                    Cb += 128; // center the unsigned chroma value on zero
                else
                    Cb -= 128;
                Cr = fg[cOff + 1];
                if (Cr < 0)
                    Cr += 128;
                else
                    Cr -= 128;
            }
            int R = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
            if (R < 0)
                R = 0;
            else if (R > 255)
                R = 255;
            int G = Y - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
            if (G < 0)
                G = 0;
            else if (G > 255)
                G = 255;
            int B = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
            if (B < 0)
                B = 0;
            else if (B > 255)
                B = 255;
            // Note: this packs B into the high color byte (ABGR order);
            // swap B and R here if you need standard ARGB output.
            rgb[pixPtr++] = 0xff000000 + (B << 16) + (G << 8) + R;
        }
    }
}