Android semantic segmentation post-processing is too slow - java

I'd really appreciate it if anyone can advise on a task I've been working on without success for the last week.
I have a semantic segmentation model (MobileNetV3 + Lightweight ASPP). Short info: input - 1024x1024, output - same size with 2 classes (bg and vehicle), so my output shape is (1, 1048576, 2). I'm not a mobile dev or a Java person, so I used a few complete Android examples for image segmentation to test it:
the one from google: https://github.com/tensorflow/examples/tree/master/lite/examples/image_segmentation
and another one open-sourced: https://github.com/pillarpond/image-segmenter-android
I successfully converted it to tflite format, and its inference time on a OnePlus 7 with GPU enabled and 10 threads is 105-140ms at this size. But here I run into a problem: the overall execution time in these two Android examples, or any other you can find for semantic segmentation, is about 1050-1300ms (under 1 FPS). The slowest part of this pipeline is image post-processing (~900-1150ms). You can see that part in the Deeplab#segment method. Since I have only 1 class besides bg, I don't have that third loop, but everything else is untouched and still very slow. The output size is not small compared to other common mobile sizes like 128/224/512, but still, I think it shouldn't take so long to process a 1024x1024 matrix and draw rectangles on a canvas on a modern smartphone.
I tried different solutions, like splitting the matrix manipulation across threads, or creating all these objects like RectF and Recognition once up front and just filling their attributes with new data inside the nested loops, but neither helped. On the desktop I handle this easily with NumPy and OpenCV, and I'm not even close to understanding how to do the same on Android, or whether it would even be efficient.
Here's the code I use in Python:
import cv2
import numpy as np
import tensorflow as tf

CLASS_COLORS = [(0, 0, 0), (255, 255, 255)]  # black for bg and white for mask

def get_image_array(image_input, width, height):
    img = cv2.imread(image_input, 1)
    img = cv2.resize(img, (width, height))
    img = img.astype(np.float32)
    img[:, :, 0] -= 128.0
    img[:, :, 1] -= 128.0
    img[:, :, 2] -= 128.0
    img = img[:, :, ::-1]  # BGR -> RGB
    return img

def get_segmentation_array(seg_arr, n_classes):
    output_height = seg_arr.shape[0]
    output_width = seg_arr.shape[1]
    seg_img = np.zeros((output_height, output_width, 3))
    for c in range(n_classes):
        seg_arr_c = seg_arr[:, :] == c
        seg_img[:, :, 0] += (seg_arr_c * CLASS_COLORS[c][0]).astype('uint8')
        seg_img[:, :, 1] += (seg_arr_c * CLASS_COLORS[c][1]).astype('uint8')
        seg_img[:, :, 2] += (seg_arr_c * CLASS_COLORS[c][2]).astype('uint8')
    return seg_img

interpreter = tf.lite.Interpreter(model_path="my_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

img_arr = get_image_array("input.png", 1024, 1024)
interpreter.set_tensor(input_details[0]['index'], np.array([img_arr]))
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]['index'])
output = output.reshape((1024, 1024, 2)).argmax(axis=2)
seg_img = get_segmentation_array(output, 2)
cv2.imwrite("output.png", seg_img)
Maybe there's something more powerful than the current solution for post-processing.
I would really appreciate any help with this. I'm sure there's something that can improve post-processing and reduce its time to ~100ms, so I would have ~5 FPS overall.

New update: thanks to Farmaker, I used a piece of code from his repo linked in the comments above, and now the pipeline looks like this:
int channels = 3;
int n_classes = 2;
int float_byte_size = 4;
int width = model.inputWidth;
int height = model.inputHeight;
int[] intValues = new int[width * height];
ByteBuffer inputBuffer = ByteBuffer.allocateDirect(width * height * channels * float_byte_size).order(ByteOrder.nativeOrder());
ByteBuffer outputBuffer = ByteBuffer.allocateDirect(width * height * n_classes * float_byte_size).order(ByteOrder.nativeOrder());

Bitmap input = textureView.getBitmap(width, height);
input.getPixels(intValues, 0, width, 0, 0, width, height);

inputBuffer.rewind();
outputBuffer.rewind();
// Unpack each ARGB pixel into three floats, normalized with mean 128 / std 1
for (final int value : intValues) {
    inputBuffer.putFloat(((value >> 16 & 0xff) - 128.0f) / 1.0f);
    inputBuffer.putFloat(((value >> 8 & 0xff) - 128.0f) / 1.0f);
    inputBuffer.putFloat(((value & 0xff) - 128.0f) / 1.0f);
}

tfLite.run(inputBuffer, outputBuffer);

final Bitmap output = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
outputBuffer.flip();
// Argmax over the 2 class scores per pixel: transparent for bg, semi-transparent blue for vehicle
int[] pixels = new int[width * height];
for (int i = 0; i < width * height; i++) {
    float max = outputBuffer.getFloat(); // bg score
    float val = outputBuffer.getFloat(); // vehicle score
    pixels[i] = val > max ? 0x990000ff : 0x00000000;
}
output.setPixels(pixels, 0, width, 0, 0, width, height);
resultView.setImageBitmap(resizeBitmap(output, resultView.getWidth(), resultView.getHeight()));
public static Bitmap resizeBitmap(Bitmap bm, int newWidth, int newHeight) {
    int width = bm.getWidth();
    int height = bm.getHeight();
    float scaleWidth = ((float) newWidth) / width;
    float scaleHeight = ((float) newHeight) / height;
    // Create a matrix for the manipulation
    Matrix matrix = new Matrix();
    // Resize the bitmap
    matrix.postScale(scaleWidth, scaleHeight);
    // "Recreate" the new bitmap
    Bitmap resizedBitmap = Bitmap.createBitmap(bm, 0, 0, width, height, matrix, false);
    bm.recycle();
    return resizedBitmap;
}
Right now post-processing takes ~70-130ms (95th percentile around 90ms), which alongside ~60ms of image pre-processing, ~140ms of inference, and 30-40ms for other stuff, with GPU enabled and 10 threads, gives an overall execution time of around 330ms, i.e. 3 FPS! And that's for a large model at 1024x1024.
At this point, I'm more than satisfied and want to try different configurations for my model, including MobileNetV3 small as a backbone.
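One more micro-optimization I may try along the way (a sketch, not benchmarked): bulk-copy the output tensor into a float[] once and do the 2-class argmax from the array, instead of per-element getFloat() calls:

// Sketch: bulk-read the (1, H*W, 2) output, then argmax from a plain array.
outputBuffer.rewind();
float[] scores = new float[width * height * n_classes];
outputBuffer.asFloatBuffer().get(scores);

int[] pixels = new int[width * height];
for (int i = 0; i < width * height; i++) {
    // scores[2*i] = background, scores[2*i + 1] = vehicle
    pixels[i] = scores[2 * i + 1] > scores[2 * i] ? 0x990000ff : 0x00000000;
}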

Related

Draw image using CMYK values

I want to convert a buffered image from RGBA format to CMYK format without using auto conversion tools or libraries, so I tried to extract the RGBA values from individual pixels that I got using BufferedImage.getRGB(), and here is what I've done so far:
BufferedImage img = ImageIO.read(new File("image path")); // BufferedImage has no String constructor
int R, G, B, pixel, A;
float Rc, Gc, Bc, K, C, M, Y;
int height = img.getHeight();
int width = img.getWidth();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        pixel = img.getRGB(x, y);
        // I shifted the int bytes to get RGBA values
        A = (pixel >> 24) & 0xff;
        R = (pixel >> 16) & 0xff;
        G = (pixel >> 8) & 0xff;
        B = (pixel) & 0xff;
        Rc = (float) R / 255.0f;
        Gc = (float) G / 255.0f;
        Bc = (float) B / 255.0f;
        // Equations I found on the internet to get CMYK values
        K = 1 - Math.max(Bc, Math.max(Rc, Gc));
        C = (1 - Rc - K) / (1 - K);
        Y = (1 - Bc - K) / (1 - K);
        M = (1 - Gc - K) / (1 - K);
    }
}
Now that I've extracted the values, I want to draw or construct an image using them. Can you tell me of a method or a way to do this? I don't think BufferedImage.setRGB() would work. Also, when I printed the values of C, Y, M, some of them had NaN values. Can someone tell me what that means and how to deal with it?
While it is possible, converting RGB to CMYK without a proper color profile will not produce the best results. For better performance and higher color fidelity, I really recommend using an ICC color profile (see ICC_Profile and ICC_ColorSpace classes) and ColorConvertOp. :-)
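For reference, a minimal sketch of the profile-based route (the file names here are placeholders; any CMYK ICC profile will do):

// Sketch: RGB -> CMYK using an ICC profile and ColorConvertOp
// (classes from java.awt.color and java.awt.image).
BufferedImage rgb = ImageIO.read(new File("in.png")); // placeholder input
ICC_Profile profile = ICC_Profile.getInstance(new FileInputStream("cmyk.icc")); // placeholder profile path
ICC_ColorSpace cmykSpace = new ICC_ColorSpace(profile);
ComponentColorModel cmykModel = new ComponentColorModel(
        cmykSpace, false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
BufferedImage cmyk = new BufferedImage(
        cmykModel, cmykModel.createCompatibleWritableRaster(rgb.getWidth(), rgb.getHeight()),
        cmykModel.isAlphaPremultiplied(), null);
// ColorConvertOp picks up the source/destination color spaces from the images.
new ColorConvertOp(null).filter(rgb, cmyk);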
Anyway, here's how to do it using your own conversion. The important part is creating a CMYK color space, and a ColorModel and BufferedImage using that color space (you could also load a CMYK color space from an ICC profile as mentioned above, but the colors would probably look more off, as it uses different calculations than you do).
public static void main(String[] args) throws IOException {
    BufferedImage img = ImageIO.read(new File(args[0]));
    int height = img.getHeight();
    int width = img.getWidth();

    // Create a color model and image in CMYK color space (see custom class below)
    ComponentColorModel cmykModel = new ComponentColorModel(CMYKColorSpace.INSTANCE, false, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
    BufferedImage cmykImg = new BufferedImage(cmykModel, cmykModel.createCompatibleWritableRaster(width, height), cmykModel.isAlphaPremultiplied(), null);
    WritableRaster cmykRaster = cmykImg.getRaster();

    int R, G, B, pixel;
    float Rc, Gc, Bc, K, C, M, Y;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            pixel = img.getRGB(x, y);
            // Now, as cmykImg already is in CMYK color space, you could actually just invoke
            // cmykImg.setRGB(x, y, pixel);
            // and the method would perform automatic conversion to the dest color space (CMYK).
            // But, here you go... (I just cleaned up your code a little bit):
            R = (pixel >> 16) & 0xff;
            G = (pixel >> 8) & 0xff;
            B = (pixel) & 0xff;
            Rc = R / 255f;
            Gc = G / 255f;
            Bc = B / 255f;
            // Equations I found on the internet to get CMYK values
            K = 1 - Math.max(Bc, Math.max(Rc, Gc));
            if (K == 1f) {
                // All black (this is where you would get NaN values, I think)
                C = M = Y = 0;
            }
            else {
                C = (1 - Rc - K) / (1 - K);
                M = (1 - Gc - K) / (1 - K);
                Y = (1 - Bc - K) / (1 - K);
            }
            // ...and store the CMYK values (as bytes in 0..255 range) in the raster
            cmykRaster.setDataElements(x, y, new byte[] {(byte) (C * 255), (byte) (M * 255), (byte) (Y * 255), (byte) (K * 255)});
        }
    }

    // You should now have a CMYK buffered image
    System.out.println("cmykImg: " + cmykImg);
}
// A simple and not very accurate CMYK color space
// Full source at https://github.com/haraldk/TwelveMonkeys/blob/master/imageio/imageio-core/src/main/java/com/twelvemonkeys/imageio/color/CMYKColorSpace.java
final static class CMYKColorSpace extends ColorSpace {
    static final ColorSpace INSTANCE = new CMYKColorSpace();
    final ColorSpace sRGB = getInstance(CS_sRGB);

    private CMYKColorSpace() {
        super(ColorSpace.TYPE_CMYK, 4);
    }

    public static ColorSpace getInstance() {
        return INSTANCE;
    }

    public float[] toRGB(float[] colorvalue) {
        return new float[] {
                (1 - colorvalue[0]) * (1 - colorvalue[3]),
                (1 - colorvalue[1]) * (1 - colorvalue[3]),
                (1 - colorvalue[2]) * (1 - colorvalue[3])
        };
    }

    public float[] fromRGB(float[] rgbvalue) {
        // NOTE: This is essentially the same equation you use, except
        // this is slightly optimized, and values are already in range [0..1]
        // Compute CMY
        float c = 1 - rgbvalue[0];
        float m = 1 - rgbvalue[1];
        float y = 1 - rgbvalue[2];
        // Find K
        float k = Math.min(c, Math.min(m, y));
        // Convert to CMYK values
        return new float[] {(c - k), (m - k), (y - k), k};
    }

    public float[] toCIEXYZ(float[] colorvalue) {
        return sRGB.toCIEXYZ(toRGB(colorvalue));
    }

    public float[] fromCIEXYZ(float[] colorvalue) {
        return sRGB.fromCIEXYZ(fromRGB(colorvalue));
    }
}
PS: Your question talks about RGBA and CMYK, but your code just ignores the alpha value, so I did the same. If you really wanted to, you could just keep the alpha value as-is and have a CMYK+A image, to allow alpha-compositing in CMYK color space. I'll leave that as an exercise. ;-)
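If you want a head start on that exercise, a rough (untested) sketch: give the color model an alpha component, store five samples per pixel, and carry the original alpha through:

// Sketch: CMYK + alpha, i.e. 5 components per pixel (C, M, Y, K, A).
ComponentColorModel cmykaModel = new ComponentColorModel(
        CMYKColorSpace.INSTANCE, true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
// ...create the image/raster as before, then per pixel:
int A = (pixel >> 24) & 0xff; // the original alpha, previously ignored
cmykRaster.setDataElements(x, y, new byte[] {
        (byte) (C * 255), (byte) (M * 255), (byte) (Y * 255), (byte) (K * 255), (byte) A});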

How to create BufferedImage for 32 bits per sample, 3 samples image data

I am trying to create a BufferedImage from some image data, which is a byte array. The image is in RGB format with 3 samples per pixel (R, G, and B) and 32 bits per sample (for each sample, not all 3 samples together).
Now I want to create a BufferedImage from this byte array. This is what I have done:
ColorModel cm = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), new int[] {32, 32, 32}, false, false, Transparency.OPAQUE, DataBuffer.TYPE_INT);
Object tempArray = ArrayUtils.toNBits(bitsPerSample, pixels, samplesPerPixel*imageWidth, endian == IOUtils.BIG_ENDIAN);
WritableRaster raster = cm.createCompatibleWritableRaster(imageWidth, imageHeight);
raster.setDataElements(0, 0, imageWidth, imageHeight, tempArray);
BufferedImage bi = new BufferedImage(cm, raster, false, null);
The above code works for a 24 bits per sample RGB image, but not for 32 bits per sample: the generated image is garbage (shown on the right of the comparison image in the original post; it is supposed to look like the left side).
Note: the only image reader on my machine which can read this image is ImageMagick. All the others produce results similar to the garbage one.
The ArrayUtils.toNBits() just translates the byte array to int array with correct endianess. I'm sure this one is correct as I have cross checked with other methods to generate the same int array.
I guess the problem might arise from the fact that I am using the full 32 bits of an int to represent each color sample, which would include negative values. It looks like I need a long data type, but there is no DataBuffer type for long.
Instances of ComponentColorModel created with transfer types
DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, and DataBuffer.TYPE_INT
have pixel sample values which are treated as unsigned integral
values.
The above quote is from the Java documentation for ComponentColorModel. It means the 32-bit samples do get treated as unsigned integer values, so the problem could be somewhere else.
Has anybody met a similar problem and got a workaround, or have I done something wrong here?
Update 2: the "real" problem lies in the fact that when a 32-bit sample is used, the ComponentColorModel algorithm shifts 1 to the left 0 times (1 << 0), since a shift on an int is always within 0~31 inclusive; this is not the expected value. To solve this (i.e. actually shift left 32 times), the only thing that needs to be done is to change the literal 1 from int to long, as 1L, as shown in the fix below.
Update: from the answer by HaraldK and the comments, we have finally agreed that the problem is coming from Java's ComponentColorModel which is not handling 32 bit sample correctly. The proposed fix by HaraldK works for my case too. The following is my version:
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.ComponentColorModel;
import java.awt.image.DataBuffer;

public class Int32ComponentColorModel extends ComponentColorModel {

    public Int32ComponentColorModel(ColorSpace cs, boolean alpha) {
        super(cs, alpha, false, alpha ? Transparency.TRANSLUCENT : Transparency.OPAQUE, DataBuffer.TYPE_INT);
    }

    @Override
    public float[] getNormalizedComponents(Object pixel, float[] normComponents, int normOffset) {
        int numComponents = getNumComponents();
        if (normComponents == null || normComponents.length < numComponents + normOffset) {
            normComponents = new float[numComponents + normOffset];
        }
        switch (transferType) {
            case DataBuffer.TYPE_INT:
                int[] ipixel = (int[]) pixel;
                for (int c = 0, nc = normOffset; c < numComponents; c++, nc++) {
                    normComponents[nc] = ipixel[c] / ((float) ((1L << getComponentSize(c)) - 1));
                }
                break;
            default: // I don't think we can ever come this far. Just in case!!!
                throw new UnsupportedOperationException("This method has not been implemented for transferType " + transferType);
        }
        return normComponents;
    }
}
Update:
This seems to be a known bug: ComponentColorModel.getNormalizedComponents() does not handle 32-bit TYPE_INT, reported 10 (TEN!) years ago, against Java 5.
The upside: Java is now partly open-sourced. We can now propose a patch, and with some luck it will be evaluated for Java 9 or so... :-P
The bug proposes the following workaround:
Subclass ComponentColorModel and override getNormalizedComponents() to properly handle 32 bit per sample TYPE_INT data by dividing the incoming pixel value by 'Math.pow(2, 32) - 1' when dealing with this data, rather than using the erroneous bit shift. (Using a floating point value is ok, since getNormalizedComponents() converts everything to floating point anyway).
My fix is a little different, but the basic idea is the same (feel free to optimize as you see fit :-)):
private static class TypeIntComponentColorModel extends ComponentColorModel {
    public TypeIntComponentColorModel(final ColorSpace cs, final boolean alpha) {
        super(cs, alpha, false, alpha ? TRANSLUCENT : OPAQUE, DataBuffer.TYPE_INT);
    }

    @Override
    public float[] getNormalizedComponents(Object pixel, float[] normComponents, int normOffset) {
        int numComponents = getNumComponents();
        if (normComponents == null) {
            normComponents = new float[numComponents + normOffset];
        }
        switch (transferType) {
            case DataBuffer.TYPE_INT:
                int[] ipixel = (int[]) pixel;
                for (int c = 0, nc = normOffset; c < numComponents; c++, nc++) {
                    normComponents[nc] = ((float) (ipixel[c] & 0xffffffffL)) / ((float) ((1L << getComponentSize(c)) - 1));
                }
                break;
            default:
                throw new UnsupportedOperationException("This method has not been implemented for transferType " + transferType);
        }
        return normComponents;
    }
}
Consider the code below. If run as is, for me it displays a mostly black image, with the upper right quarter white, overlaid with a black circle. If I change the data type to TYPE_USHORT (uncomment the transferType line), it displays half/half white and a linear gradient from black to white, with an orange circle in the middle (as it should).
Using ColorConvertOp to convert to a standard type seems to make no difference.
public class Int32Image {
    public static void main(String[] args) {
        // Define dimensions and layout of the image
        int w = 300;
        int h = 200;
        int transferType = DataBuffer.TYPE_INT;
        // int transferType = DataBuffer.TYPE_USHORT;
        ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), false, false, Transparency.OPAQUE, transferType);
        WritableRaster raster = colorModel.createCompatibleWritableRaster(w, h);
        BufferedImage image = new BufferedImage(colorModel, raster, false, null);

        // Start with linear gradient
        if (raster.getTransferType() == DataBuffer.TYPE_INT) {
            DataBufferInt buffer = (DataBufferInt) raster.getDataBuffer();
            int[] data = buffer.getData();
            for (int y = 0; y < h; y++) {
                int value = (int) (y * 0xffffffffL / h);
                for (int x = 0; x < w; x++) {
                    int offset = y * w * 3 + x * 3;
                    data[offset] = value;
                    data[offset + 1] = value;
                    data[offset + 2] = value;
                }
            }
        }
        else if (raster.getTransferType() == DataBuffer.TYPE_USHORT) {
            DataBufferUShort buffer = (DataBufferUShort) raster.getDataBuffer();
            short[] data = buffer.getData();
            for (int y = 0; y < h; y++) {
                short value = (short) (y * 0xffffL / h);
                for (int x = 0; x < w; x++) {
                    int offset = y * w * 3 + x * 3;
                    data[offset] = value;
                    data[offset + 1] = value;
                    data[offset + 2] = value;
                }
            }
        }

        // Paint something (in color)
        Graphics2D g = image.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, w / 2, h);
        g.setColor(Color.ORANGE);
        g.fillOval(100, 50, w - 200, h - 100);
        g.dispose();

        System.out.println("image = " + image);
        // image = new ColorConvertOp(null).filter(image, new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_ARGB));

        JFrame frame = new JFrame();
        frame.add(new JLabel(new ImageIcon(image)));
        frame.pack();
        frame.setLocationRelativeTo(null);
        frame.setVisible(true);
    }
}
To me, this seems to suggest that there's something wrong with the ColorModel using transferType TYPE_INT. But I'd be happy to be wrong. ;-)
Another thing you could try, is to scale the values down to 16 bit, use a TYPE_USHORT raster and color model, and see if that makes a difference. I bet it will, but I'm too lazy to try. ;-)
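A sketch of that idea (untested; intSamples stands in for the question's 32-bit sample array): keep the 16 most significant bits of each sample and build a TYPE_USHORT image:

// Sketch: downscale 32-bit samples to 16-bit and use TYPE_USHORT instead.
ColorModel cm16 = new ComponentColorModel(
        ColorSpace.getInstance(ColorSpace.CS_sRGB), new int[] {16, 16, 16},
        false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
WritableRaster raster16 = cm16.createCompatibleWritableRaster(imageWidth, imageHeight);

short[] scaled = new short[intSamples.length];
for (int i = 0; i < intSamples.length; i++) {
    scaled[i] = (short) (intSamples[i] >>> 16); // unsigned 32-bit -> unsigned 16-bit
}
raster16.setDataElements(0, 0, imageWidth, imageHeight, scaled);
BufferedImage bi16 = new BufferedImage(cm16, raster16, false, null);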

How would I get a BufferedImage from an OpenGL window?

I'm coding a Java LWJGL game, and everything's going along great, except whenever I try to figure out a way to create a BufferedImage of the current game area. I've searched the internet and browsed all of the OpenGL functions, and I am getting nowhere... Anyone have any ideas? Here's all I have so far, but it only makes a blank .png:
if (Input.getKeyDown(Input.KEY_F2)) {
    try {
        String fileName = "screenshot-" + Util.getSystemTime(false);
        File imageToSave = new File(MainComponent.screenshotsFolder, fileName + ".png");
        int duplicate = 0;
        while (true) {
            duplicate++;
            if (imageToSave.exists() == false) {
                imageToSave.createNewFile();
                break;
            }
            imageToSave = new File(MainComponent.screenshotsFolder, fileName + "_" + duplicate + ".png");
        }
        imageToSave.createNewFile();
        // Create a buffered image:
        BufferedImage image = new BufferedImage(MainComponent.WIDTH, MainComponent.HEIGHT, BufferedImage.TYPE_INT_ARGB);
        // Write the new buffered image to file:
        ImageIO.write(image, "png", imageToSave);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
You never actually write something into your BufferedImage.
Read the Buffer
You can use glReadPixels to access the selected buffer. (I assume WIDTH and HEIGHT are your OpenGL context's dimensions.)
FloatBuffer imageData = BufferUtils.createFloatBuffer(WIDTH * HEIGHT * 3);
GL11.glReadPixels(0, 0, WIDTH, HEIGHT, GL11.GL_RGB, GL11.GL_FLOAT, imageData);
imageData.rewind();
Use whatever parameters suit your needs best, I just picked floats randomly.
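For example, requesting unsigned bytes instead (a variant sketch of the same call) would save the float-to-int conversion later:

// Variant: read the framebuffer as unsigned bytes, one byte per channel.
ByteBuffer byteData = BufferUtils.createByteBuffer(WIDTH * HEIGHT * 3);
GL11.glReadPixels(0, 0, WIDTH, HEIGHT, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, byteData);
byteData.rewind();
// later: int r = byteData.get() & 0xff; // already in the 0-255 range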
Set the Image Data
You already figured out how to create and save your image, but in between you should also write some content into it. You can do this with BufferedImage.setRGB(). (Note that I don't use naming as good as yours, to keep this example concise.)
// create image
BufferedImage image = new BufferedImage(
    WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB
);
// set content
image.setRGB(0, 0, WIDTH, HEIGHT, rgbArray, 0, WIDTH);
// save it
File outputfile = new File("Screenshot.png");
try {
    ImageIO.write(image, "png", outputfile);
} catch (IOException e) {
    e.printStackTrace();
}
The most tricky part is now getting the rgbArray. The problems are that
OpenGL gives you three values (in this case, i.e. using GL11.GL_RGB), while the BufferedImage expects one value.
OpenGL counts the rows from bottom to top while BufferedImage counts from top to bottom.
Calculate one Integer from three Floats
To get rid of problem one, you have to calculate the integer value which fits the three numbers you get.
I will show this with a simple example, the color red which is (1.0f, 0.0f, 0.0f) in your FloatBuffer.
For the integer value it might be easy to think of numbers in hex values, as you might know from CSS where it's very common to name colors with those. Red would be #ff0000 in CSS or in Java of course 0xff0000.
Colors in RGB with integers are usually represented from 0 to 255 (or 00 to ff in hex), while you use 0 to 1 with floats or doubles. So first you have to map them to the correct range by simply multiplying the values by 255 and casting them to integers:
int r = (int)(fR * 255);
Now you can think of the hex value as just putting those numbers next to each other:
rgb = 255 0 0 = ff 00 00
To achieve this you can bit-shift the integer values. Since one hex digit (0-f) is 4 bits, you have to shift the value of green 8 bits to the left (two hex digits) and the value of red 16 bits. After that you can simply add them up.
int rgb = (r << 16) + (g << 8) + b;
Getting from BottomUp to TopDown
I know the terminology bottom-up -> top-down is not correct here, but it was catchy.
To access 2D data in a 1D array you usually use some formula (this case row-major order) like
int index = offset + (y - yOffset) * stride + (x - xOffset);
Since you want to have the complete image the offsets can be left out and the formula simplified to
int index = y * stride + x;
Of course the stride is simply the WIDTH, i.e. the row length.
The problem you now face is that OpenGL uses the bottom row as row 0 while the BufferedImage uses the top row as row 0. To get rid of that problem just invert y:
int index = ((HEIGHT - 1) - y) * WIDTH + x;
Filling the int[]-array with the Buffer's Data
Now you know how to calculate the rgb value, the correct index and you have all data you need. Let's fill the int[]-array with those information.
int[] rgbArray = new int[WIDTH * HEIGHT];
for (int y = 0; y < HEIGHT; ++y) {
    for (int x = 0; x < WIDTH; ++x) {
        int r = (int) (imageData.get() * 255) << 16;
        int g = (int) (imageData.get() * 255) << 8;
        int b = (int) (imageData.get() * 255);
        int i = ((HEIGHT - 1) - y) * WIDTH + x;
        rgbArray[i] = r + g + b;
    }
}
Note three things about this little piece of code.
The size of the array. Obviously it's just WIDTH * HEIGHT and not WIDTH * HEIGHT * 3 as the buffer's size was.
Since OpenGL uses row-major order, you have to use the column value (x) as the inner loop for this 2D array (and of course there are other ways to write this, but this seemed to be the most intuitive one).
Accessing imageData with imageData.get() is probably not the safest way to do it, but since the calculations are carefully done it should do the job just fine. Just remember to flip() or rewind() the buffer before calling get() the first time!
Putting it all together
So with all the information available now we can just put a method saveScreenshot() together.
private void saveScreenshot() {
    // read current buffer
    FloatBuffer imageData = BufferUtils.createFloatBuffer(WIDTH * HEIGHT * 3);
    GL11.glReadPixels(
        0, 0, WIDTH, HEIGHT, GL11.GL_RGB, GL11.GL_FLOAT, imageData
    );
    imageData.rewind();

    // fill rgbArray for BufferedImage
    int[] rgbArray = new int[WIDTH * HEIGHT];
    for (int y = 0; y < HEIGHT; ++y) {
        for (int x = 0; x < WIDTH; ++x) {
            int r = (int) (imageData.get() * 255) << 16;
            int g = (int) (imageData.get() * 255) << 8;
            int b = (int) (imageData.get() * 255);
            int i = ((HEIGHT - 1) - y) * WIDTH + x;
            rgbArray[i] = r + g + b;
        }
    }

    // create and save image
    BufferedImage image = new BufferedImage(
        WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB
    );
    image.setRGB(0, 0, WIDTH, HEIGHT, rgbArray, 0, WIDTH);
    File outputfile = getNextScreenFile();
    try {
        ImageIO.write(image, "png", outputfile);
    } catch (IOException e) {
        e.printStackTrace();
        System.err.println("Can not save screenshot!");
    }
}

private File getNextScreenFile() {
    // create image name
    String fileName = "screenshot_" + getSystemTime(false);
    File imageToSave = new File(fileName + ".png");

    // check for duplicates
    int duplicate = 0;
    while (imageToSave.exists()) {
        imageToSave = new File(fileName + "_" + ++duplicate + ".png");
    }

    return imageToSave;
}

// format the time
public static String getSystemTime(boolean getTimeOnly) {
    SimpleDateFormat dateFormat = new SimpleDateFormat(
        getTimeOnly ? "HH-mm-ss" : "yyyy-MM-dd'T'HH-mm-ss"
    );
    return dateFormat.format(new Date());
}
I also uploaded a very simple full working example.

Android: how to display camera preview with callback?

What I need to do is quite simple: I want to manually display the preview from the camera using the camera callback, and I want to get at least 15fps on a real device. I don't even need colors, I just need to preview a grayscale image.
Images from the camera are in YUV format, and you have to process them somehow, which is the main performance problem. I'm using API 8.
In all cases I'm using camera.setPreviewCallbackWithBuffer(), which is faster than camera.setPreviewCallback(). I can get about 24 fps here if I'm not displaying the preview, so the problem is not there.
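For reference, my buffer setup looks roughly like this (a sketch, assuming the default NV21 preview format):

// Sketch: preview callback with a reusable buffer (NV21 assumed).
Camera.Parameters params = camera.getParameters();
Camera.Size size = params.getPreviewSize();
int bufferSize = size.width * size.height * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
camera.addCallbackBuffer(new byte[bufferSize]);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ...process the frame...
        camera.addCallbackBuffer(data); // hand the buffer back for reuse
    }
});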
I have tried these solutions:
1. Display the camera preview on a SurfaceView as a Bitmap. It works, but the performance is about 6fps.
baos = new ByteArrayOutputStream();
yuvimage = new YuvImage(cameraFrame, ImageFormat.NV21, prevX, prevY, null);
yuvimage.compressToJpeg(new Rect(0, 0, prevX, prevY), 80, baos);
jdata = baos.toByteArray();
bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length); // Convert to Bitmap, this is the main issue, it takes a lot of time
canvas.drawBitmap(bmp, 0, 0, paint);
2. Display the camera preview on a GLSurfaceView as a texture. Here I was displaying only the luminance data (a grayscale image), which is quite easy: it requires only one System.arraycopy() per frame. I can get about 12fps, but I need to apply some filters to the preview, and it seems that can't be done fast in OpenGL ES 1.0, so I can't use this solution. Some details on this are in another question.
3. Display the camera preview on a (GL)SurfaceView using the NDK to process the YUV data. I found a solution that uses a C function with the NDK, but I didn't manage to use it (here are some more details). In any case, that solution is written to return a ByteBuffer for display as an OpenGL texture, so it won't be faster than the previous attempt. I would have to modify it to return an int[] array that can be drawn with canvas.drawBitmap(), but I don't understand C well enough to do that.
So, is there any other way that I'm missing or some improvement to the attempts I tried?
I'm working on exactly the same issue, but haven't got quite as far as you have.
Have you considered drawing the pixels directly to the canvas without encoding them to JPEG first? Inside the OpenCV kit http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.3.1/OpenCV-2.3.1-android-bin.tar.bz2/download (which doesn't actually use opencv; don't worry), there's a project called tutorial-0-androidcamera that demonstrates converting the YUV pixels to RGB and then writing them directly to a bitmap.
The relevant code is essentially:
public void onPreviewFrame(byte[] data, Camera camera, int width, int height) {
    int frameSize = width * height;
    int[] rgba = new int[frameSize + 1];

    // Convert YUV to RGB
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++) {
            int y = (0xff & ((int) data[i * width + j]));
            int u = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 0]));
            int v = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 1]));
            y = y < 16 ? 16 : y;

            int r = Math.round(1.164f * (y - 16) + 1.596f * (v - 128));
            int g = Math.round(1.164f * (y - 16) - 0.813f * (v - 128) - 0.391f * (u - 128));
            int b = Math.round(1.164f * (y - 16) + 2.018f * (u - 128));

            r = r < 0 ? 0 : (r > 255 ? 255 : r);
            g = g < 0 ? 0 : (g > 255 ? 255 : g);
            b = b < 0 ? 0 : (b > 255 ? 255 : b);

            rgba[i * width + j] = 0xff000000 + (b << 16) + (g << 8) + r;
        }

    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.setPixels(rgba, 0 /* offset */, width /* stride */, 0, 0, width, height);
    Canvas canvas = mHolder.lockCanvas();
    if (canvas != null) {
        canvas.drawBitmap(bmp, (canvas.getWidth() - width) / 2, (canvas.getHeight() - height) / 2, null);
        mHolder.unlockCanvasAndPost(canvas);
    } else {
        Log.w(TAG, "Canvas is null!");
    }
    bmp.recycle();
}
Of course you'd have to adapt it to meet your needs (e.g. not allocating rgba each frame), but it might be a start. I'd love to see if it works for you or not -- I'm still fighting problems orthogonal to yours at the moment.
I think Michael's on the right track. First you can try this method to convert from YUV to grayscale. Clearly it's doing almost the same thing as his, but a little more succinctly for what you want.
// YUV space to grayscale
static public void YUVtoGrayScale(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int pix = 0; pix < frameSize; pix++) {
        int pixVal = (0xff & ((int) yuv420sp[pix])) - 16;
        if (pixVal < 0) pixVal = 0;
        if (pixVal > 255) pixVal = 255;
        rgb[pix] = 0xff000000 | (pixVal << 16) | (pixVal << 8) | pixVal;
    }
}
Second, don't create a ton of work for the garbage collector. Your bitmaps and arrays are going to be a fixed size; create them once, not in onPreviewFrame.
Doing that you'll end up with something that looks like this:
public PreviewCallback callback = new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if ((mSelectView == null) || !inPreview)
            return;
        if (mSelectView.mBitmap == null) {
            // initialize SelectView bitmaps, arrays, etc
            // mSelectView.mBitmap = Bitmap.createBitmap(mSelectView.mImageWidth, mSelectView.mImageHeight, Bitmap.Config.RGB_565);
            // etc
        }
        // Pass image data to SelectView
        System.arraycopy(data, 0, mSelectView.mYUVData, 0, data.length);
        mSelectView.invalidate();
    }
};
And then the canvas where you want to put it looks like this:
class SelectView extends View {
    Bitmap mBitmap;
    Bitmap croppedView;
    byte[] mYUVData;
    int[] mRGBData;
    int mImageHeight;
    int mImageWidth;

    public SelectView(Context context) {
        super(context);
        mBitmap = null;
        croppedView = null;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (mBitmap != null) {
            int canvasWidth = canvas.getWidth();
            int canvasHeight = canvas.getHeight();
            // Convert from YUV to grayscale
            YUVtoGrayScale(mRGBData, mYUVData, mImageWidth, mImageHeight);
            mBitmap.setPixels(mRGBData, 0, mImageWidth, 0, 0, mImageWidth, mImageHeight);
            Rect crop = new Rect(180, 220, 290, 400);
            Rect dst = new Rect(0, 0, canvasWidth, (int) (canvasHeight / 2));
            canvas.drawBitmap(mBitmap, crop, dst, null);
        }
        super.onDraw(canvas);
    }
}
This example shows a cropped and distorted selection of the camera preview in real time, but you get the idea. It runs at high FPS on a Nexus S in greyscale and should work for your needs as well.
Is this not what you want? Just use a SurfaceView in your layout, then somewhere in your init like onResume():
SurfaceView surfaceView = ...
SurfaceHolder holder = surfaceView.getHolder();
...
Camera camera = ...;
camera.setPreviewDisplay(holder);
It just sends the frames straight to the view as fast as they arrive.
If you want grayscale, modify the camera parameters with setColorEffect("mono").
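That is (a small sketch; the effect is only available on devices that support it):

Camera.Parameters params = camera.getParameters();
// getSupportedColorEffects() tells you whether EFFECT_MONO is available.
List<String> effects = params.getSupportedColorEffects();
if (effects != null && effects.contains(Camera.Parameters.EFFECT_MONO)) {
    params.setColorEffect(Camera.Parameters.EFFECT_MONO);
    camera.setParameters(params);
}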
For very basic and simple effects, there is
Camera.Parameters parameters = mCamera.getParameters();
parameters.setColorEffect(Parameters.EFFECT_AQUA);
I figured out that these effects behave DIFFERENTLY depending on the device.
For instance, on my phone (Galaxy S II) it looks kind of like a comic effect, whereas on the Galaxy S 1 it is 'just' a blue shade.
Its pro: it works as a live preview.
I looked around some other camera apps, and they obviously also faced this problem.
So what did they do?
They capture the default camera image, apply a filter to the bitmap data, and show the result in a simple ImageView. It's certainly not as cool as a live preview, but you won't ever face performance problems.
I believe I read in a blog that the grayscale data is in the first x*y bytes. YUV's Y should represent luminance, so the data is there, although it isn't a perfect grayscale. It's great for relative brightness, but not true grayscale, since each color isn't equally bright in RGB; green is usually given a stronger weight in luminosity conversions. Hope this helps!
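For reference, a commonly used luma weighting (ITU-R BT.601) is:

Y = 0.299 R + 0.587 G + 0.114 B

so green contributes most, which is why the Y plane only approximates a true RGB grayscale.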
Is there any special reason that you are forced to use GLES 1.0?
Because if not, see the accepted answer here:
Android SDK: Get raw preview camera image without displaying it
Generally it mentions using Camera.setPreviewTexture() in combination with GLES 2.0.
In GLES 2.0 you can render a full-screen-quad all over the screen, and create whatever effect you want.
It's most likely the fastest way possible.
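The core of that approach is only a few lines (a sketch; assumes a GLES 2.0 context and a texture id created for GL_TEXTURE_EXTERNAL_OES):

// Sketch: route the camera preview into an OpenGL texture (API 11+).
SurfaceTexture surfaceTexture = new SurfaceTexture(textureId);
camera.setPreviewTexture(surfaceTexture); // throws IOException
camera.startPreview();
// Per frame, on the GL thread: pull in the newest frame, then draw a
// full-screen quad with a fragment shader that applies your effect.
surfaceTexture.updateTexImage();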

Set BufferedImage alpha mask in Java

I have two BufferedImages I loaded in from pngs. The first contains an image, the second an alpha mask for the image.
I want to create a combined image from the two, by applying the alpha mask. My google-fu fails me.
I know how to load/save the images, I just need the bit where I go from two BufferedImages to one BufferedImage with the right alpha channel.
I'm too late with this answer, but maybe it is of use for someone anyway. This is a simpler and more efficient version of Michael Myers' method:
public void applyGrayscaleMaskToAlpha(BufferedImage image, BufferedImage mask) {
    int width = image.getWidth();
    int height = image.getHeight();

    int[] imagePixels = image.getRGB(0, 0, width, height, null, 0, width);
    int[] maskPixels = mask.getRGB(0, 0, width, height, null, 0, width);

    for (int i = 0; i < imagePixels.length; i++) {
        int color = imagePixels[i] & 0x00ffffff; // Mask preexisting alpha
        int alpha = maskPixels[i] << 24; // Shift blue to alpha
        imagePixels[i] = color | alpha;
    }

    image.setRGB(0, 0, width, height, imagePixels, 0, width);
}
It reads all the pixels into an array at the beginning, thus requiring only one for-loop. Also, it directly shifts the blue byte to the alpha (of the mask color), instead of first masking the red byte and then shifting it.
Like the other methods, it assumes both images have the same dimensions.
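Typical usage might look like this (a sketch; the file names are placeholders, and the source is copied into a TYPE_INT_ARGB buffer first, since setRGB drops alpha on image types without an alpha channel):

// Hypothetical usage of applyGrayscaleMaskToAlpha; paths are placeholders.
BufferedImage source = ImageIO.read(new File("image.png"));
BufferedImage mask = ImageIO.read(new File("mask.png"));

// Ensure an alpha-capable image type.
BufferedImage image = new BufferedImage(
        source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_ARGB);
image.getGraphics().drawImage(source, 0, 0, null);

applyGrayscaleMaskToAlpha(image, mask);
ImageIO.write(image, "png", new File("combined.png"));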
I recently played a bit with this stuff: displaying an image over another one, fading an image to gray, and also masking an image with a mask with transparency (my previous version of this message!).
I took my little test program and tweaked it a bit to get the wanted result.
Here are the relevant bits:
TestMask() throws IOException {
    m_images = new BufferedImage[3];
    m_images[0] = ImageIO.read(new File("E:/Documents/images/map.png"));
    m_images[1] = ImageIO.read(new File("E:/Documents/images/mapMask3.png"));
    Image transpImg = TransformGrayToTransparency(m_images[1]);
    m_images[2] = ApplyTransparency(m_images[0], transpImg);
}

private Image TransformGrayToTransparency(BufferedImage image) {
    ImageFilter filter = new RGBImageFilter() {
        public final int filterRGB(int x, int y, int rgb) {
            return (rgb << 8) & 0xFF000000;
        }
    };
    ImageProducer ip = new FilteredImageSource(image.getSource(), filter);
    return Toolkit.getDefaultToolkit().createImage(ip);
}

private BufferedImage ApplyTransparency(BufferedImage image, Image mask) {
    BufferedImage dest = new BufferedImage(
            image.getWidth(), image.getHeight(),
            BufferedImage.TYPE_INT_ARGB);
    Graphics2D g2 = dest.createGraphics();
    g2.drawImage(image, 0, 0, null);
    AlphaComposite ac = AlphaComposite.getInstance(AlphaComposite.DST_IN, 1.0F);
    g2.setComposite(ac);
    g2.drawImage(mask, 0, 0, null);
    g2.dispose();
    return dest;
}
The remainder just displays the images in a little Swing panel.
Note that the mask image is gray levels, black becoming full transparency, white becoming full opaque.
Although you have resolved your problem, I thought I could share my take on it. It uses a slightly more Java-ish method, using standard classes to process/filter images.
Actually, my method uses a bit more memory (making an additional image) and I am not sure it is faster (measuring respective performances could be interesting), but it is slightly more abstract.
At least, you have choice! :-)
Your solution could be improved by fetching the RGB data more than one pixel at a time (see http://java.sun.com/javase/6/docs/api/java/awt/image/BufferedImage.html), and by not creating three Color objects on every iteration of the inner loop.
final int width = image.getWidth();
int[] imgData = new int[width];
int[] maskData = new int[width];

for (int y = 0; y < image.getHeight(); y++) {
    // fetch a line of data from each image
    image.getRGB(0, y, width, 1, imgData, 0, 1);
    mask.getRGB(0, y, width, 1, maskData, 0, 1);
    // apply the mask
    for (int x = 0; x < width; x++) {
        int color = imgData[x] & 0x00FFFFFF; // mask away any alpha present
        int maskColor = (maskData[x] & 0x00FF0000) << 8; // shift red into alpha bits
        color |= maskColor;
        imgData[x] = color;
    }
    // replace the data
    image.setRGB(0, y, width, 1, imgData, 0, 1);
}
For those who are using alpha in the original image:
I wrote this code in Kotlin. The key point here is that if you have alpha in your original image, you need to multiply the two alpha channels.
Kotlin version:
val width = this.width
val imgData = IntArray(width)
val maskData = IntArray(width)

for (y in 0..(this.height - 1)) {
    this.getRGB(0, y, width, 1, imgData, 0, 1)
    mask.getRGB(0, y, width, 1, maskData, 0, 1)
    for (x in 0..(this.width - 1)) {
        val maskAlpha = (maskData[x] and 0x000000FF) / 255f
        val imageAlpha = ((imgData[x] shr 24) and 0x000000FF) / 255f
        val rgb = imgData[x] and 0x00FFFFFF
        val alpha = ((maskAlpha * imageAlpha) * 255).toInt() shl 24
        imgData[x] = rgb or alpha
    }
    this.setRGB(0, y, width, 1, imgData, 0, 1)
}
Java version (just translated from Kotlin)
int width = image.getWidth();
int[] imgData = new int[width];
int[] maskData = new int[width];

for (int y = 0; y < image.getHeight(); y++) {
    image.getRGB(0, y, width, 1, imgData, 0, 1);
    mask.getRGB(0, y, width, 1, maskData, 0, 1);
    for (int x = 0; x < image.getWidth(); x++) {
        // Normalize (0 - 1)
        float maskAlpha = (maskData[x] & 0x000000FF) / 255f;
        float imageAlpha = ((imgData[x] >> 24) & 0x000000FF) / 255f;
        // Image without alpha channel
        int rgb = imgData[x] & 0x00FFFFFF;
        // Multiplied alpha
        int alpha = ((int) ((maskAlpha * imageAlpha) * 255)) << 24;
        // Add alpha to image
        imgData[x] = rgb | alpha;
    }
    image.setRGB(0, y, width, 1, imgData, 0, 1);
}
Actually, I've figured it out. This is probably not a fast way of doing it, but it works:
// resultImg is an alpha-capable (e.g. TYPE_INT_ARGB) image of the same size, created beforehand
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        Color c = new Color(image.getRGB(x, y));
        Color maskC = new Color(mask.getRGB(x, y));
        Color maskedColor = new Color(c.getRed(), c.getGreen(), c.getBlue(), maskC.getRed());
        resultImg.setRGB(x, y, maskedColor.getRGB());
    }
}
