Android: how to display camera preview with callback? - java

What I need to do is quite simple: I want to manually display the preview from the camera using the camera callback, and I want to get at least 15 fps on a real device. I don't even need the colors, I just need to preview a grayscale image.
Images from the camera are in YUV format and have to be processed somehow, which is the main performance problem. I'm using API 8.
In all cases I'm using camera.setPreviewCallbackWithBuffer(), which is faster than camera.setPreviewCallback(). It seems that I can get about 24 fps here if I'm not displaying the preview, so the problem is not there.
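For reference, the buffered-callback setup looks roughly like this (a minimal sketch; the buffer is sized for NV21, and the field names are just placeholders):
Camera.Parameters params = camera.getParameters();
Camera.Size size = params.getPreviewSize();
// NV21 uses 12 bits per pixel, so one frame needs width*height*3/2 bytes
int bufferSize = size.width * size.height * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
camera.addCallbackBuffer(new byte[bufferSize]);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... process the frame ...
        camera.addCallbackBuffer(data); // hand the buffer back so it can be reused
    }
});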
I have tried these solutions:
1. Display camera preview on a SurfaceView as a Bitmap. It works, but the performance is about 6fps.
baos = new ByteArrayOutputStream();
yuvimage = new YuvImage(cameraFrame, ImageFormat.NV21, prevX, prevY, null);
yuvimage.compressToJpeg(new Rect(0, 0, prevX, prevY), 80, baos); // compress the YUV frame to JPEG
jdata = baos.toByteArray();
bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length); // decode JPEG to Bitmap - this is the main issue, it takes a lot of time
canvas.drawBitmap(bmp, 0, 0, paint);
2. Display camera preview on a GLSurfaceView as a texture. Here I was displaying only the luminance data (a greyscale image), which is quite easy; it requires only one arraycopy() per frame. I can get about 12 fps, but I need to apply some filters to the preview and it seems that this can't be done fast in OpenGL ES 1, so I can't use this solution. Some details of this are in another question.
3. Display camera preview on a (GL)SurfaceView using the NDK to process the YUV data. I found a solution here that uses some C functions and the NDK. But I didn't manage to use it; here are some more details. In any case, this solution is written to return a ByteBuffer to display as a texture in OpenGL, and it won't be faster than the previous attempt. So I would have to modify it to return an int[] array that can be drawn with canvas.drawBitmap(), but I don't understand C well enough to do this.
So, is there any other way that I'm missing or some improvement to the attempts I tried?

I'm working on exactly the same issue, but haven't got quite as far as you have.
Have you considered drawing the pixels directly to the canvas without encoding them to JPEG first? Inside the OpenCV kit http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.3.1/OpenCV-2.3.1-android-bin.tar.bz2/download (which doesn't actually use opencv; don't worry), there's a project called tutorial-0-androidcamera that demonstrates converting the YUV pixels to RGB and then writing them directly to a bitmap.
The relevant code is essentially:
public void onPreviewFrame(byte[] data, Camera camera, int width, int height) {
    int frameSize = width * height;
    int[] rgba = new int[frameSize + 1];

    // Convert YUV to RGB
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++) {
            int y = (0xff & ((int) data[i * width + j]));
            int u = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 0]));
            int v = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 1]));
            y = y < 16 ? 16 : y;

            int r = Math.round(1.164f * (y - 16) + 1.596f * (v - 128));
            int g = Math.round(1.164f * (y - 16) - 0.813f * (v - 128) - 0.391f * (u - 128));
            int b = Math.round(1.164f * (y - 16) + 2.018f * (u - 128));

            r = r < 0 ? 0 : (r > 255 ? 255 : r);
            g = g < 0 ? 0 : (g > 255 ? 255 : g);
            b = b < 0 ? 0 : (b > 255 ? 255 : b);

            rgba[i * width + j] = 0xff000000 + (b << 16) + (g << 8) + r;
        }

    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.setPixels(rgba, 0 /* offset */, width /* stride */, 0, 0, width, height);
    Canvas canvas = mHolder.lockCanvas();
    if (canvas != null) {
        canvas.drawBitmap(bmp, (canvas.getWidth() - width) / 2, (canvas.getHeight() - height) / 2, null);
        mHolder.unlockCanvasAndPost(canvas);
    } else {
        Log.w(TAG, "Canvas is null!");
    }
    bmp.recycle();
}
Of course you'd have to adapt it to meet your needs (e.g. not allocating rgba each frame), but it might be a start. I'd love to see if it works for you or not -- I'm still fighting problems orthogonal to yours at the moment.

I think Michael's on the right track. First, you can try this method to convert from YUV to grayscale. Clearly it's doing almost the same thing as his, but a little more succinctly for what you want.
//YUV Space to Greyscale
static public void YUVtoGrayScale(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int pix = 0; pix < frameSize; pix++) {
        int pixVal = (0xff & ((int) yuv420sp[pix])) - 16; // Y (luminance) value
        if (pixVal < 0) pixVal = 0;
        if (pixVal > 255) pixVal = 255;
        rgb[pix] = 0xff000000 | (pixVal << 16) | (pixVal << 8) | pixVal; // grey ARGB pixel
    }
}
Second, don't create a ton of work for the garbage collector. Your bitmaps and arrays are going to be a fixed size. Create them once, not in onPreviewFrame.
Doing that you'll end up with something that looks like this:
public PreviewCallback callback = new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if ((mSelectView == null) || !inPreview)
            return;
        if (mSelectView.mBitmap == null) {
            //initialize SelectView bitmaps, arrays, etc
            //mSelectView.mBitmap = Bitmap.createBitmap(mSelectView.mImageWidth, mSelectView.mImageHeight, Bitmap.Config.RGB_565);
            //etc
        }
        //Pass Image Data to SelectView
        System.arraycopy(data, 0, mSelectView.mYUVData, 0, data.length);
        mSelectView.invalidate();
    }
};
And then the canvas where you want to put it looks like this:
class SelectView extends View {
    Bitmap mBitmap;
    Bitmap croppedView;
    byte[] mYUVData;
    int[] mRGBData;
    int mImageHeight;
    int mImageWidth;

    public SelectView(Context context) {
        super(context);
        mBitmap = null;
        croppedView = null;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (mBitmap != null) {
            int canvasWidth = canvas.getWidth();
            int canvasHeight = canvas.getHeight();
            // Convert from YUV to greyscale
            YUVtoGrayScale(mRGBData, mYUVData, mImageWidth, mImageHeight);

            mBitmap.setPixels(mRGBData, 0, mImageWidth, 0, 0, mImageWidth, mImageHeight);
            Rect crop = new Rect(180, 220, 290, 400);
            Rect dst = new Rect(0, 0, canvasWidth, (int) (canvasHeight / 2));
            canvas.drawBitmap(mBitmap, crop, dst, null);
        }
        super.onDraw(canvas);
    }
}
This example shows a cropped and distorted selection of the camera preview in real time, but you get the idea. It runs at high FPS on a Nexus S in greyscale and should work for your needs as well.

Is this not what you want? Just use a SurfaceView in your layout, then somewhere in your init like onResume():
SurfaceView surfaceView = ...
SurfaceHolder holder = surfaceView.getHolder();
...
Camera camera = ...;
camera.setPreviewDisplay(holder);
It just sends the frames straight to the view as fast as they arrive.
If you want grayscale, modify the camera parameters with setColorEffect("mono").
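A minimal sketch of that (checking that the effect is supported first; camera is the same Camera instance as above):
Camera.Parameters params = camera.getParameters();
List<String> effects = params.getSupportedColorEffects();
if (effects != null && effects.contains(Camera.Parameters.EFFECT_MONO)) {
    params.setColorEffect(Camera.Parameters.EFFECT_MONO); // "mono" = grayscale preview
    camera.setParameters(params);
}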

For very basic and simple effects, there is
Camera.Parameters parameters = mCamera.getParameters();
parameters.setColorEffect(Parameters.EFFECT_AQUA);
I found out that these effects behave DIFFERENTLY depending on the device.
For instance, on my phone (Galaxy S II) it looks kind of like a comic effect, whereas on the Galaxy S it is 'just' a blue shade.
Its pro: it works as a live preview.
I looked at some other camera apps, and they obviously also faced this problem.
So what did they do?
They capture the default camera image, apply a filter to the bitmap data, and show the result in a simple ImageView. It's certainly not as cool as a live preview, but you won't ever face performance problems.
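A rough sketch of that approach (capturedBitmap and imageView are placeholders here; a ColorMatrix with zero saturation is one way to get the grayscale filter):
ColorMatrix cm = new ColorMatrix();
cm.setSaturation(0f); // drop all color information -> grayscale
Paint paint = new Paint();
paint.setColorFilter(new ColorMatrixColorFilter(cm));

Bitmap filtered = Bitmap.createBitmap(capturedBitmap.getWidth(), capturedBitmap.getHeight(), Bitmap.Config.ARGB_8888);
new Canvas(filtered).drawBitmap(capturedBitmap, 0, 0, paint); // redraw through the filter
imageView.setImageBitmap(filtered);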

I believe I read in a blog that the grayscale data is in the first x*y bytes. The Y in YUV represents luminance, so the data is there, although it isn't a perfect grayscale. It's great for relative brightness, but not true grayscale, since the colors aren't equally bright in RGB; green is usually given a stronger weight in luminosity conversions. Hope this helps!
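In other words (a minimal sketch, assuming data is the NV21 preview buffer and previewWidth/previewHeight are the preview dimensions):
// The luminance (Y) plane is simply the first width*height bytes of the NV21 buffer.
byte[] luminance = Arrays.copyOf(data, previewWidth * previewHeight);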

Is there any special reason you are forced to use GLES 1.0?
Because if not, see the accepted answer here:
Android SDK: Get raw preview camera image without displaying it
Generally it mentions using Camera.setPreviewTexture() in combination with GLES 2.0.
In GLES 2.0 you can render a full-screen-quad all over the screen, and create whatever effect you want.
It's most likely the fastest way possible.
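A minimal sketch of that setup (note: setPreviewTexture() and SurfaceTexture require API 11+, so this doesn't fit the API 8 constraint; camera and the texture handle are placeholders):
// Inside a GLES 2.0 renderer: bind the camera preview to an external OES texture
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
camera.setPreviewTexture(surfaceTexture); // throws IOException
camera.startPreview();

// Per frame: call surfaceTexture.updateTexImage(), then draw a full-screen quad that samples
// the texture in a fragment shader declaring "#extension GL_OES_EGL_image_external : require"
// and using a samplerExternalOES uniform; the filtering happens in that shader.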

Related

Why is setting image pixels very slow?

I am trying to render an image where I can manipulate all of its pixels. It works, but I only get 40 fps, while with Graphics.fillRect I get 4000 fps.
40 fps is really slow, and I need this to make a 3D game, so it is far too slow.
public class Renderer {
    private BufferedImage image = TextureLoader.load("./res/image.png");
    byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();

    public void render(Graphics2D g) {
        for (int x = 0; x < 1920; x++) {
            for (int y = 0; y < 1080; y++) {
                setRGB(x, y, Color.RED.getRGB());
            }
        }
        g.drawImage(image, 0, 0, 1920, 1080, null);
    }

    public void setRGB(int x, int y, int rgb) {
        int a = (y * 1920 + x) * 3;
        pixels[a] = (byte) ((rgb >> 0) & 0xFF);
        pixels[a + 1] = (byte) ((rgb >> 8) & 0xFF);
        pixels[a + 2] = (byte) ((rgb >> 16) & 0xFF);
    }
}
In general, setting every single pixel in a loop is a time-consuming process. You could try to speed up your code by using various methods of WritableRaster, or by writing to the backing array in bulk as in the sketch below, but personally I would use a 3D library like LWJGL or even a 3D game engine.
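For example, a rough sketch of bulk filling (assuming the same 3-bytes-per-pixel, blue-green-red layout that the question's setRGB assumes):
byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
int rgb = Color.RED.getRGB();
byte b = (byte) (rgb & 0xFF);
byte g = (byte) ((rgb >> 8) & 0xFF);
byte r = (byte) ((rgb >> 16) & 0xFF);

// Fill the first row pixel by pixel...
int rowBytes = 1920 * 3;
for (int i = 0; i < rowBytes; i += 3) {
    pixels[i] = b;
    pixels[i + 1] = g;
    pixels[i + 2] = r;
}
// ...then replicate that row into the remaining rows with arraycopy.
for (int y = 1; y < 1080; y++) {
    System.arraycopy(pixels, 0, pixels, y * rowBytes, rowBytes);
}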
Generally it's not a good idea to use Java's provided libraries when one wants to create a 3D game. They are very limited and have poor performance. I would advise looking at third-party libraries such as LWJGL. To replace the Java graphics libraries, use OpenGL and GLFW.
There are many tutorials on OpenGL which will help you learn, such as ThinMatrix's tutorials.
If you don't want to do all the work yourself, try a game engine or framework such as libGDX or jmonkeyengine, which are already optimized and provide an easy way to make your 3D game.

Android semantic segmentation post-processing is too slow

I'd really appreciate it if anyone could advise on a task I've been working on without success for the last week.
I have a semantic segmentation model (MobileNetV3 + Lightweight ASPP). Short info: input - 1024x1024, output - same size and 2 classes (bg and vehicle), so my output shape is (1, 1048576, 2). I'm not a mobile dev or Java-world guy, so I used a few complete Android examples for image segmentation to test it:
the one from google: https://github.com/tensorflow/examples/tree/master/lite/examples/image_segmentation
and another one open-sourced: https://github.com/pillarpond/image-segmenter-android
I successfully converted it to tflite format, and its inference time on a OnePlus 7 with GPU enabled and 10 threads is between 105-140 ms for that size. But here I run into a problem: the overall execution time in these two Android examples, or any you can find for semantic segmentation, is about 1050-1300 ms (which is less than 1 FPS). The slowest part of this pipeline is image post-processing (~900-1150 ms). You can see that part in the Deeplab#segment method. Since I have only 1 class besides bg, I don't have that third loop, but everything else is untouched and still very slow. The output size is not small compared to other common mobile sizes like 128/226/512, but still, I don't think it should take this much time to process a 1024x1024 matrix and draw rectangles on a canvas on a modern smartphone.
I tried different solutions, like splitting the matrix manipulations across threads or creating all the objects like RectF and Recognition once up front and just filling their attributes with new data inside the nested loops, but I didn't succeed with either of them. On the desktop side I handle it easily with numpy and opencv, and I'm not even close to understanding how I can do the same in Android, or whether it would even be efficient.
Here's code which I use in python:
CLASS_COLORS = [(0, 0, 0), (255, 255, 255)]  # black for bg and white for mask

def get_image_array(image_input, width, height):
    img = cv2.imread(image_input, 1)
    img = cv2.resize(img, (width, height))
    img = img.astype(np.float32)
    img[:, :, 0] -= 128.0
    img[:, :, 1] -= 128.0
    img[:, :, 2] -= 128.0
    img = img[:, :, ::-1]
    return img

def get_segmentation_array(seg_arr, n_classes):
    output_height = seg_arr.shape[0]
    output_width = seg_arr.shape[1]
    seg_img = np.zeros((output_height, output_width, 3))
    for c in range(n_classes):
        seg_arr_c = seg_arr[:, :] == c
        seg_img[:, :, 0] += ((seg_arr_c) * (CLASS_COLORS[c][0])).astype('uint8')
        seg_img[:, :, 1] += ((seg_arr_c) * (CLASS_COLORS[c][1])).astype('uint8')
        seg_img[:, :, 2] += ((seg_arr_c) * (CLASS_COLORS[c][2])).astype('uint8')
    return seg_img

interpreter = tf.lite.Interpreter(model_path="my_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

img_arr = get_image_array("input.png", 1024, 1024)
interpreter.set_tensor(input_details[0]['index'], np.array([img_arr]))
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]['index'])
output = output.reshape((1024, 1024, 2)).argmax(axis=2)
seg_img = get_segmentation_array(output, 2)
cv2.imwrite("output.png", seg_img)
Maybe there's something more powerful than the current approach for post-processing.
I would really appreciate any help with this. I'm sure there's something that can improve post-processing and reduce its time to ~100 ms, so I would get ~5 FPS overall.
New update. Thanks to Farmaker, I used a piece of code found in his repo from the comment above, and now the pipeline looks like this:
int channels = 3;
int n_classes = 2;
int float_byte_size = 4;
int width = model.inputWidth;
int height = model.inputHeight;

int[] intValues = new int[width * height];
ByteBuffer inputBuffer = ByteBuffer.allocateDirect(width * height * channels * float_byte_size).order(ByteOrder.nativeOrder());
ByteBuffer outputBuffer = ByteBuffer.allocateDirect(width * height * n_classes * float_byte_size).order(ByteOrder.nativeOrder());

Bitmap input = textureView.getBitmap(width, height);
input.getPixels(intValues, 0, width, 0, 0, width, height);

inputBuffer.rewind();
outputBuffer.rewind();

for (final int value : intValues) {
    inputBuffer.putFloat(((value >> 16 & 0xff) - 128.0f) / 1.0f);
    inputBuffer.putFloat(((value >> 8 & 0xff) - 128.0f) / 1.0f);
    inputBuffer.putFloat(((value & 0xff) - 128.0f) / 1.0f);
}

tfLite.run(inputBuffer, outputBuffer);

final Bitmap output = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
outputBuffer.flip();

int[] pixels = new int[width * height];
for (int i = 0; i < width * height; i++) {
    float max = outputBuffer.getFloat(); // score for class 0 (bg)
    float val = outputBuffer.getFloat(); // score for class 1 (vehicle)
    int id = val > max ? 1 : 0;
    pixels[i] = id == 0 ? 0x00000000 : 0x990000ff;
}

output.setPixels(pixels, 0, width, 0, 0, width, height);
resultView.setImageBitmap(resizeBitmap(output, resultView.getWidth(), resultView.getHeight()));

public static Bitmap resizeBitmap(Bitmap bm, int newWidth, int newHeight) {
    int width = bm.getWidth();
    int height = bm.getHeight();
    float scaleWidth = ((float) newWidth) / width;
    float scaleHeight = ((float) newHeight) / height;
    // CREATE A MATRIX FOR THE MANIPULATION
    Matrix matrix = new Matrix();
    // RESIZE THE BIT MAP
    matrix.postScale(scaleWidth, scaleHeight);
    // "RECREATE" THE NEW BITMAP
    Bitmap resizedBitmap = Bitmap.createBitmap(bm, 0, 0, width, height, matrix, false);
    bm.recycle();
    return resizedBitmap;
}
Right now post-processing time is ~70-130 ms (95th percentile around 90 ms), which alongside ~60 ms of image pre-processing, ~140 ms of inference and around 30-40 ms for other stuff, with GPU enabled and 10 threads, gives an overall execution time of around 330 ms, which is 3 FPS! And this is for a large model with 1024x1024 input.
At this point, I'm more than satisfied and want to try different configurations for my model, including MobilenetV3 small as a backbone.

Alpha channel ignored when using ImageIO.read()

I'm currently having an issue with alpha channels when reading PNG files with ImageIO.read(...)
fileInputStream = new FileInputStream(path);
BufferedImage image = ImageIO.read(fileInputStream);
//Just copying data into an integer array
int[] pixels = new int[image.getWidth() * image.getHeight()];
image.getRGB(0, 0, width, height, pixels, 0, width);
However, when trying to read values from the pixel array by bit shifting as seen below, the alpha channel is always returning -1
int a = (pixels[i] & 0xff000000) >> 24;
int r = (pixels[i] & 0xff0000) >> 16;
int g = (pixels[i] & 0xff00) >> 8;
int b = (pixels[i] & 0xff);
//a = -1, the other channels are fine
By Googling the problem I understand that the BufferedImage type needs to be defined as below to allow for the alpha channel to work:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
But ImageIO.read(...) returns a BufferedImage without giving the option to specify the image type. So how can I do this?
Any help is much appreciated.
Thanks in advance
I think your "int unpacking" code might be wrong.
I used (pixel >> 24) & 0xff (where pixel is the ARGB value of a specific pixel) and it worked fine.
I compared this with the results of java.awt.Color and they matched.
I "stole" the "extraction" code directly from java.awt.Color; this is yet another reason I tend not to perform these operations by hand, it's too easy to screw them up.
And my awesome test code...
BufferedImage image = ImageIO.read(new File("BYO image"));
int width = image.getWidth();
int height = image.getHeight();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int pixel = image.getRGB(x, y);
        //value = 0xff000000 | rgba;
        int a = (pixel >> 24) & 0xff;
        Color color = new Color(pixel, true);
        System.out.println(x + "x" + y + " = " + color.getAlpha() + "; " + a);
    }
}
NB: before someone points out that this is inefficient: I wasn't going for efficiency, I was going for quick to write.
You may also want to have a look at How to convert get.rgb(x,y) integer pixel to Color(r,g,b,a) in Java?, which I also used to validate my results
I think the problem is that you're using arithmetic shift (>>) instead of logical shift (>>>). Thus 0xff000000 >> 24 becomes 0xffffffff (i.e. -1)
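In other words, a one-line sketch of the fix for the question's unpacking code:
int a = (pixels[i] >>> 24) & 0xff; // unsigned shift keeps the alpha in the 0..255 range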

How to create BufferedImage for 32 bits per sample, 3 samples image data

I am trying to create a BufferedImage from some image data which is a byte array. The image is in RGB format with 3 samples per pixel - R, G, and B - and 32 bits per sample (for each individual sample, not all 3 samples combined).
Now I want to create a BufferedImage from this byte array. This is what I have done:
ColorModel cm = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), new int[] {32, 32, 32}, false, false, Transparency.OPAQUE, DataBuffer.TYPE_INT);
Object tempArray = ArrayUtils.toNBits(bitsPerSample, pixels, samplesPerPixel*imageWidth, endian == IOUtils.BIG_ENDIAN);
WritableRaster raster = cm.createCompatibleWritableRaster(imageWidth, imageHeight);
raster.setDataElements(0, 0, imageWidth, imageHeight, tempArray);
BufferedImage bi = new BufferedImage(cm, raster, false, null);
The above code works with 24 bit per sample RGB images but not with 32 bits per sample. The generated image is garbage, shown on the right of the image; it is supposed to look like the left side.
Note: the only image reader on my machine which can read this image is ImageMagick. All the others show results similar to the garbage on the right of the following image.
ArrayUtils.toNBits() just translates the byte array to an int array with the correct endianness. I'm sure this part is correct, as I have cross-checked it with other methods that generate the same int array.
I guess the problem might arise from the fact that I am using the full 32 bits of an int to represent each sample, which means the values can be negative. It looks like I need a long data type, but there is no DataBuffer type for long.
Instances of ComponentColorModel created with transfer types DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, and DataBuffer.TYPE_INT have pixel sample values which are treated as unsigned integral values.
The above quote is from the Java documentation for ComponentColorModel. This means the 32 bit samples do get treated as unsigned integral values, so the problem could be somewhere else.
Has anybody met a similar problem and found a workaround, or have I done something wrong here?
Update 2: The "real" problem lies in the fact that when a 32 bit sample is used, ComponentColorModel ends up shifting 1 to the left 0 times, because shift counts on an int are taken modulo 32, so 1 << 32 evaluates as 1 << 0. This is not the expected value. To fix this (and actually shift left 32 times), the only thing that needs to be done is to change the 1 from int to long (1L), as shown in the fix below.
Update: from the answer by HaraldK and the comments, we have finally agreed that the problem comes from Java's ComponentColorModel, which does not handle 32 bit samples correctly. The proposed fix by HaraldK works for my case too. The following is my version:
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.ComponentColorModel;
import java.awt.image.DataBuffer;

public class Int32ComponentColorModel extends ComponentColorModel {
    //
    public Int32ComponentColorModel(ColorSpace cs, boolean alpha) {
        super(cs, alpha, false, alpha ? Transparency.TRANSLUCENT : Transparency.OPAQUE, DataBuffer.TYPE_INT);
    }

    @Override
    public float[] getNormalizedComponents(Object pixel, float[] normComponents, int normOffset) {
        int numComponents = getNumComponents();

        if (normComponents == null || normComponents.length < numComponents + normOffset) {
            normComponents = new float[numComponents + normOffset];
        }

        switch (transferType) {
            case DataBuffer.TYPE_INT:
                int[] ipixel = (int[]) pixel;
                for (int c = 0, nc = normOffset; c < numComponents; c++, nc++) {
                    normComponents[nc] = ipixel[c] / ((float) ((1L << getComponentSize(c)) - 1));
                }
                break;
            default: // I don't think we can ever come this far. Just in case!!!
                throw new UnsupportedOperationException("This method has not been implemented for transferType " + transferType);
        }

        return normComponents;
    }
}
Update:
This seems to be a known bug: ComponentColorModel.getNormalizedComponents() does not handle 32-bit TYPE_INT, reported 10 (TEN!) years ago, against Java 5.
The upside: Java is now partly open-sourced. We can now propose a patch, and with some luck it will be evaluated for Java 9 or so... :-P
The bug report proposes the following workaround:
Subclass ComponentColorModel and override getNormalizedComponents() to properly handle 32 bit per sample TYPE_INT data by dividing the incoming pixel value by 'Math.pow(2, 32) - 1' when dealing with this data, rather than using the erroneous bit shift. (Using a floating point value is ok, since getNormalizedComponents() converts everything to floating point anyway).
My fix is a little different, but the basic idea is the same (feel free to optimize as you see fit :-)):
private static class TypeIntComponentColorModel extends ComponentColorModel {
    public TypeIntComponentColorModel(final ColorSpace cs, final boolean alpha) {
        super(cs, alpha, false, alpha ? TRANSLUCENT : OPAQUE, DataBuffer.TYPE_INT);
    }

    @Override
    public float[] getNormalizedComponents(Object pixel, float[] normComponents, int normOffset) {
        int numComponents = getNumComponents();

        if (normComponents == null) {
            normComponents = new float[numComponents + normOffset];
        }

        switch (transferType) {
            case DataBuffer.TYPE_INT:
                int[] ipixel = (int[]) pixel;
                for (int c = 0, nc = normOffset; c < numComponents; c++, nc++) {
                    normComponents[nc] = ((float) (ipixel[c] & 0xffffffffL)) / ((float) ((1L << getComponentSize(c)) - 1));
                }
                break;
            default:
                throw new UnsupportedOperationException("This method has not been implemented for transferType " + transferType);
        }

        return normComponents;
    }
}
Consider the code below. If run as is, for me it displays a mostly black image, with the upper right quarter white, overlaid with a black circle. If I change the data type to TYPE_USHORT (uncomment the transferType line), it displays half white and half a linear gradient from black to white, with an orange circle in the middle (as it should).
Using ColorConvertOp to convert to a standard type seems to make no difference.
public class Int32Image {
    public static void main(String[] args) {
        // Define dimensions and layout of the image
        int w = 300;
        int h = 200;
        int transferType = DataBuffer.TYPE_INT;
//        int transferType = DataBuffer.TYPE_USHORT;

        ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), false, false, Transparency.OPAQUE, transferType);
        WritableRaster raster = colorModel.createCompatibleWritableRaster(w, h);
        BufferedImage image = new BufferedImage(colorModel, raster, false, null);

        // Start with linear gradient
        if (raster.getTransferType() == DataBuffer.TYPE_INT) {
            DataBufferInt buffer = (DataBufferInt) raster.getDataBuffer();
            int[] data = buffer.getData();

            for (int y = 0; y < h; y++) {
                int value = (int) (y * 0xffffffffL / h);

                for (int x = 0; x < w; x++) {
                    int offset = y * w * 3 + x * 3;
                    data[offset] = value;
                    data[offset + 1] = value;
                    data[offset + 2] = value;
                }
            }
        } else if (raster.getTransferType() == DataBuffer.TYPE_USHORT) {
            DataBufferUShort buffer = (DataBufferUShort) raster.getDataBuffer();
            short[] data = buffer.getData();

            for (int y = 0; y < h; y++) {
                short value = (short) (y * 0xffffL / h);

                for (int x = 0; x < w; x++) {
                    int offset = y * w * 3 + x * 3;
                    data[offset] = value;
                    data[offset + 1] = value;
                    data[offset + 2] = value;
                }
            }
        }

        // Paint something (in color)
        Graphics2D g = image.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, w / 2, h);
        g.setColor(Color.ORANGE);
        g.fillOval(100, 50, w - 200, h - 100);
        g.dispose();

        System.out.println("image = " + image);

//        image = new ColorConvertOp(null).filter(image, new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_ARGB));

        JFrame frame = new JFrame();
        frame.add(new JLabel(new ImageIcon(image)));
        frame.pack();
        frame.setLocationRelativeTo(null);
        frame.setVisible(true);
    }
}
To me, this seems to suggest that there's something wrong with the ColorModel using transferType TYPE_INT. But I'd be happy to be wrong. ;-)
Another thing you could try, is to scale the values down to 16 bit, use a TYPE_USHORT raster and color model, and see if that makes a difference. I bet it will, but I'm too lazy to try. ;-)
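A minimal sketch of that scaling step (intSamples here is a hypothetical interleaved 32-bit sample array; the idea is just to keep the top 16 bits of each unsigned sample):
short[] shortSamples = new short[intSamples.length];
for (int i = 0; i < intSamples.length; i++) {
    // unsigned right shift keeps the 16 most significant bits of the 32-bit sample
    shortSamples[i] = (short) (intSamples[i] >>> 16);
}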

Cropping image lowers quality and border looks bad

Using some math, I created the following Java function that takes a Bitmap and crops out a centered square, from which a circle is cropped out again with a black border around it.
The rest of the square should be transparent.
Additionally, there is a transparent margin towards the sides so the preview is not damaged when sending the image via messengers.
The code of my function is as following:
public static Bitmap edit_image(Bitmap src, boolean makeborder) {
    int width = src.getWidth();
    int height = src.getHeight();
    int A, R, G, B;
    int pixel;
    int middlex = width/2;
    int middley = height/2;
    int seitenlaenge, startx, starty;
    if (width > height) {
        seitenlaenge = height;
        starty = 0;
        startx = middlex - (seitenlaenge/2);
    } else {
        seitenlaenge = width;
        startx = 0;
        starty = middley - (seitenlaenge/2);
    }
    int kreisradius = seitenlaenge/2;
    int mittx = startx + kreisradius;
    int mitty = starty + kreisradius;
    int border = 2;
    int seitenabstand = 55;

    Bitmap bmOut = Bitmap.createBitmap(seitenlaenge+seitenabstand, seitenlaenge+seitenabstand, Bitmap.Config.ARGB_8888);
    bmOut.setHasAlpha(true);

    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            int distzumitte = (int) (Math.pow(mittx-x, 2) + Math.pow(mitty-y, 2)); // (Xm-Xp)^2 + (Ym-Yp)^2 = dist^2
            distzumitte = (int) Math.sqrt(distzumitte);

            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            R = (int) Color.red(pixel);
            G = (int) Color.green(pixel);
            B = (int) Color.blue(pixel);
            int color = Color.argb(A, R, G, B);

            int afterx = x - startx + (seitenabstand/2);
            int aftery = y - starty + (seitenabstand/2);
            if (x < startx || y < starty || afterx >= seitenlaenge+seitenabstand || aftery >= seitenlaenge+seitenabstand) { //seitenrand
                continue;
            } else if (distzumitte > kreisradius) {
                color = 0x00FFFFFF;
            } else if (distzumitte > kreisradius-border && makeborder) { //border
                color = Color.argb(A, 0, 0, 0);
            }
            bmOut.setPixel(afterx, aftery, color);
        }
    }
    return bmOut;
}
This function works fine, but there are some problems that I haven't been able to resolve yet.
The quality of the image decreases significantly.
The border is not really round, but appears to be flat at the edges of the image (on some devices?!).
I'd appreciate any help regarding these problems. I have to admit that I'm not the best at math, and there is probably a better formula to create the border.
Your source code is hard to read, since it mixes German and English in the variable names. Additionally, you don't say which image library you use, so we don't know exactly where the classes Bitmap and Color come from.
Anyway, it is very obvious that you are operating only on a Bitmap. A bitmap means the whole image is stored in RAM pixel by pixel; there is no lossy compression. I don't see anything in your source code that could affect the quality of the image.
It is very likely that the answer is in the code that you don't show us. Additionally, what you describe (both of the problems) sounds like very typical low-quality JPEG compression. I am sure that somewhere after you call your function, you convert/save the image to a JPEG. Try saving to BMP, TIFF or PNG at that point instead and see whether the error magically disappears. Maybe you can also set the quality level of the JPEG somewhere to avoid it.
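For instance, if the result is currently saved with Bitmap.compress(), switching the format would look roughly like this (a sketch; croppedBitmap and out are placeholders for the Bitmap and OutputStream you already have):
// Lossless PNG instead of JPEG - circle edges and flat areas stay crisp (the quality argument is ignored for PNG)
croppedBitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
// Or, if it must stay JPEG, at least raise the quality:
// croppedBitmap.compress(Bitmap.CompressFormat.JPEG, 95, out);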
To make it easier for others to find a good answer, please allow me to translate your code to English:
public static Bitmap edit_image(Bitmap src, boolean makeborder) {
    int width = src.getWidth();
    int height = src.getHeight();
    int A, R, G, B;
    int pixel;
    int middlex = width/2;
    int middley = height/2;
    int sideLength, startx, starty;
    if (width > height) {
        sideLength = height;
        starty = 0;
        startx = middlex - (sideLength/2);
    } else {
        sideLength = width;
        startx = 0;
        starty = middley - (sideLength/2);
    }
    int circleRadius = sideLength/2;
    int middleX = startx + circleRadius;
    int middleY = starty + circleRadius;
    int border = 2;
    int sideDistance = 55;

    Bitmap bmOut = Bitmap.createBitmap(sideLength+sideDistance, sideLength+sideDistance, Bitmap.Config.ARGB_8888);
    bmOut.setHasAlpha(true);

    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            int distanceToMiddle = (int) (Math.pow(middleX-x, 2) + Math.pow(middleY-y, 2)); // (Xm-Xp)^2 + (Ym-Yp)^2 = dist^2
            distanceToMiddle = (int) Math.sqrt(distanceToMiddle);

            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            R = (int) Color.red(pixel);
            G = (int) Color.green(pixel);
            B = (int) Color.blue(pixel);
            int color = Color.argb(A, R, G, B);

            int afterx = x - startx + (sideDistance/2);
            int aftery = y - starty + (sideDistance/2);
            if (x < startx || y < starty || afterx >= sideLength+sideDistance || aftery >= sideLength+sideDistance) { //margin
                continue;
            } else if (distanceToMiddle > circleRadius) {
                color = 0x00FFFFFF;
            } else if (distanceToMiddle > circleRadius-border && makeborder) { //border
                color = Color.argb(A, 0, 0, 0);
            }
            bmOut.setPixel(afterx, aftery, color);
        }
    }
    return bmOut;
}
I think you need to check out PorterDuffXfermode.
You will find some technical information about image compositing modes HERE.
There is a good example of making a bitmap with rounded edges HERE. You just need to tweak the source code a bit and you're ready to go...
Hope it will help.
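A minimal sketch of that technique (assuming squareSrc is the centered square already cut out of the source; the anti-aliased Paint is what gives the smooth round edge):
int size = squareSrc.getWidth();
Bitmap output = Bitmap.createBitmap(size, size, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(output);

Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
canvas.drawCircle(size / 2f, size / 2f, size / 2f, paint);        // opaque circle used as a mask
paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN));
canvas.drawBitmap(squareSrc, 0, 0, paint);                        // keep the source only where the mask is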
Regarding the quality, I can't see anything wrong with your method. Running the code with Java Swing, no quality is lost. The only problem is that the image has aliased edges.
The aliasing problem tends to disappear as the screen resolution increases and is more noticeable at lower resolutions. This might explain why you see it only on some devices. The same problem applies to your border, but in that case it is more noticeable since the color is solid black.
Your algorithm defines a square area of the original image. To find the square, it starts from the image's center and expands to either the width or the height of the image, whichever is smaller. I am referring to this area as the square.
The aliasing is caused by your code that sets the colors (I am using pseudo-code):
if ( outOfSquare() ) {
    continue;                       // case 1: this works, but you depend upon the new image's default pixel value, i.e. transparent black
} else if ( insideSquare() && !insideCircle() ) {
    color = 0x00FFFFFF;             // case 2: transparent white <- redundant
} else if ( insideBorder() ) {
    color = Color.argb(A, 0, 0, 0); // case 3: black, using the transparency of the original image
} else {                            // inside the inner circle
    // case 4: leave the image color
}
Some notes about the code:
Case 1 depends upon the default pixel value of the new image, i.e. transparent black. It works, but it is better to set it explicitly.
Case 2 is redundant. Handle it in the same way you handle case 1. We are only interested in what happens inside the circle.
Case 3 (when you draw the border) is unclear about what it expects. Using the alpha of the original image can mess up your new image if the original alpha happens to vary along the circle's edge. So this is clearly wrong and, depending on the image, can potentially be another cause of your problems.
Case 4 is ok.
Now at your circle's periphery the following color transitions take place:
If border is not used: full transparency -> full image color (case 2 and 4 in the pseudocode)
If border is used: full transparency -> full black -> full image color (cases 2, 3 and 4)
To achieve better quality at the edges you need to introduce some intermediate states that make the transitions smoother (the newly added transitions are the middle steps listed below):
Border is not used: full transparency -> partial transparency with image color -> full image color
Border is used: full transparency -> partial transparency of Black color -> full Black color -> partial transparency of Black color + Image color (i.e. blending) -> Full image color
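A rough sketch of that partial-transparency band, expressed in the question's own pixel loop (variable names from the question; feather is a made-up local controlling how wide the soft edge is; shown here for the case without the black border):
int feather = 2; // width of the soft edge, in pixels
if (distzumitte > kreisradius) {
    color = 0x00FFFFFF;                                       // fully outside: transparent
} else if (distzumitte > kreisradius - feather) {
    int alpha = 255 * (kreisradius - distzumitte) / feather;  // ramps 0..255 across the band
    color = (alpha << 24) | (color & 0x00FFFFFF);             // keep RGB, fade the alpha
}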
I hope that helps
