Byte array to BufferedImage or DecodedImage on Android - java

I want to use the function below on Android, but the ImageIO.read function cannot be used there: the java.awt and javax.imageio packages are not part of the Android SDK.
DecodedImage decode(byte[] image) {
    return Exceptions.sneak().get(() -> {
        BufferedImage buffered = ImageIO.read(new ByteArrayInputStream(image));
        if (buffered == null)
            throw new IllegalArgumentException("Unsupported image format.");
        int width = buffered.getWidth();
        int height = buffered.getHeight();
        int[] pixels = new int[width * height];
        // Copies packed ARGB pixel values row by row into the flat array.
        buffered.getRGB(0, 0, width, height, pixels, 0, width);
        return new DecodedImage(width, height, pixels);
    });
}
Is there any equivalent function to do this operation in Java for Android?
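There is no ImageIO on Android, but android.graphics.BitmapFactory decodes a byte array directly. A minimal sketch of an equivalent, assuming the same DecodedImage class as above (BitmapFactory returns null for unsupported data, so no checked exception handling is needed):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

DecodedImage decode(byte[] image) {
    // BitmapFactory recognizes PNG, JPEG, GIF, WebP, etc.
    Bitmap bitmap = BitmapFactory.decodeByteArray(image, 0, image.length);
    if (bitmap == null)
        throw new IllegalArgumentException("Unsupported image format.");
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    int[] pixels = new int[width * height];
    // getPixels fills the array with packed ARGB ints, like BufferedImage.getRGB.
    bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
    return new DecodedImage(width, height, pixels);
}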

Related

Converting java.awt.* code to android.graphics.* code

I am trying to port some graphics code that is written using the java.awt.* library to instead use the android.graphics.* library. However, I don't have much experience with graphics.
Here is the java.awt.* code (which works):
/**
 * Converts the given <code>MyImageBitmap</code> to the specified image format and returns it as a byte array.
 *
 * @param myImageBitmap the given MyImageBitmap, not null
 * @param format the given target image format ("png", "gif", "jpg"), not null
 * @return the converted data in byte array format, not null
 */
private byte[] convert(MyImageBitmap myImageBitmap, String format) {
    final int width = myImageBitmap.getWidth();
    final int height = myImageBitmap.getHeight();
    final int[] MASKS = {0x000000ff, 0x000000ff, 0x000000ff};
    DataBuffer buffer = new DataBufferByte(myImageBitmap.getPixels(), myImageBitmap.getLength());
    WritableRaster writableRaster = Raster.createPackedRaster(buffer, width, height, width, MASKS, null);
    BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    bufferedImage.setData(writableRaster);
    try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
        ImageIO.write(bufferedImage, format, outputStream);
        // try-with-resources closes the stream, so no explicit close() is needed
        return outputStream.toByteArray();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
This is the MyImageBitmap class:
/**
 * A <code>MyImageBitmap</code> instance contains an image's information in bitmap format.
 */
public class MyImageBitmap implements Serializable {
    //...member variables

    /**
     * Creates an instance of <code>MyImageBitmap</code> with the specified data.
     *
     * @param pixels the image pixels, not null
     * @param width the image width
     * @param height the image height
     * @param ppi pixels per inch
     * @param depth the image depth
     * @param lossyFlag lossy flag
     */
    public MyImageBitmap(byte[] pixels, int width, int height, int ppi, int depth, int lossyFlag) {
        this.pixels = pixels;
        this.width = width;
        this.height = height;
        this.ppi = ppi;
        this.depth = depth;
        this.lossyFlag = lossyFlag;
        this.length = pixels != null ? pixels.length : 0;
    }

    //...getters
}
This is what I have tried (with no success):
private byte[] convert(MyImageBitmap myImageBitmap, String format) {
    int width = myImageBitmap.getWidth();
    int height = myImageBitmap.getHeight();
    byte[] imgRGB888 = myImageBitmap.getPixels();
    Bitmap bmp2 = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    int[] colors = new int[width * height];
    int r, g, b;
    for (int ci = 0; ci < colors.length; ci++) {
        r = (int)(0xFF & imgRGB888[3*ci]);
        g = (int)(0xFF & imgRGB888[3*ci+1]);
        b = (int)(0xFF & imgRGB888[3*ci+2]);
        colors[ci] = Color.rgb(r, g, b);
    }
    bmp2.setPixels(colors, 0, width, 0, 0, width, height);
    Bitmap.CompressFormat compressFormat;
    if (format.equals("jpeg")) {
        compressFormat = android.graphics.Bitmap.CompressFormat.JPEG;
    } else if (format.equals("png")) {
        compressFormat = android.graphics.Bitmap.CompressFormat.PNG;
    } else { // must be gif... try to convert to png
        compressFormat = android.graphics.Bitmap.CompressFormat.PNG;
    }
    try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
        bmp2.compress(compressFormat, 100, outputStream);
        return outputStream.toByteArray();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
When I run the above code (my attempt at porting the awt code) I get an ArrayIndexOutOfBoundsException on the line r = (int)(0xFF & imgRGB888[3*ci]);.
I ended up figuring out what the issue was: my algorithm for converting the byte array to an int color array was wrong. The pixel data evidently holds one byte per pixel rather than three, so indexing at 3*ci runs past the end of the array. Below is the correct implementation. The images are now being correctly displayed in the Android ImageView!
/**
 * Converts the given <code>MyImageBitmap</code> to the specified image format and returns it as a byte array.
 *
 * @param myImageBitmap the given bitmap, not null
 * @param format the given target image format, not null
 * @return the converted data in byte array format, not null
 */
private byte[] convert(MyImageBitmap myImageBitmap, String format) {
    int width = myImageBitmap.getWidth();
    int height = myImageBitmap.getHeight();
    byte[] imgRGB888 = myImageBitmap.getPixels();
    Bitmap bmp2 = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    int[] colors = new int[width * height];
    // We need to convert the image from a byte array to an
    // int color array so we can create the Android Bitmap.
    // The source data evidently holds one byte per pixel, so the
    // same byte feeds r, g, and b (a grayscale image).
    int r, g, b;
    for (int ci = 0; ci < colors.length; ci++) {
        r = (int)(0x000000ff & imgRGB888[ci]);
        g = (int)(0x000000ff & imgRGB888[ci]);
        b = (int)(0x000000ff & imgRGB888[ci]);
        colors[ci] = Color.rgb(r, g, b);
    }
    bmp2.setPixels(colors, 0, width, 0, 0, width, height);
    Bitmap.CompressFormat compressFormat;
    if (format.equals("jpeg")) {
        compressFormat = android.graphics.Bitmap.CompressFormat.JPEG;
    } else { // must be png
        compressFormat = android.graphics.Bitmap.CompressFormat.PNG;
    }
    try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
        bmp2.compress(compressFormat, 100, outputStream);
        return outputStream.toByteArray();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
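A quick usage sketch (myImageBitmap and imageView are hypothetical variables in the caller's scope): the returned bytes decode straight back into an android.graphics.Bitmap for display.

byte[] png = convert(myImageBitmap, "png");
// decodeByteArray parses the compressed PNG/JPEG bytes back into a Bitmap
Bitmap bitmap = BitmapFactory.decodeByteArray(png, 0, png.length);
imageView.setImageBitmap(bitmap);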

Merge two images with different sizes so that they overlap correctly

I have two images of different sizes. I want to merge them so that the front image correctly overlays the background image.
background (width:144 height:147):
front: (width:227 height:238)
Currently my result looks like this, but I need it to overlay perfectly.
My approach: I resize the smaller image to match the bigger one. For that I am using an external library named imgscalr (https://mvnrepository.com/artifact/org.imgscalr/imgscalr-lib/4.2).
As you can see, the result image is not correct, so I tried to scale the front image so that it overlays the background, and I also changed the origin of the x/y coordinates when drawing the front image on the background, but I still have an offset. Any idea how I can merge the two images so that the strokes image (front) overlays the background?
public static byte[] mergeBackgroundWithStrokes(byte[] backgroundImage, byte[] frontImage)
        throws IOException, PDProcessingException {
    Path backgroundfile = readAllBytes(backgroundImage, "background");
    Path outputFile = Files.createTempFile("output", ".png");
    BufferedImage backgroundBuffImg = ImageIO.read(backgroundfile.toFile());
    BufferedImage frontBuffImg = makeColorTransparent(frontImage, Color.WHITE);
    int width = Math.max(backgroundBuffImg.getWidth(), frontBuffImg.getWidth());
    int height = Math.max(backgroundBuffImg.getHeight(), frontBuffImg.getHeight());
    backgroundBuffImg = resize(backgroundBuffImg, width, height);
    // scaling the front image
    int scaledWidth = width;
    int scaledHeight = (int) (height * 1.02);
    frontBuffImg = resize(frontBuffImg, scaledWidth, scaledHeight);
    BufferedImage newImage = mergeImages(backgroundBuffImg, frontBuffImg);
    ImageIO.write(newImage, "PNG", outputFile.toFile());
    return Files.readAllBytes(outputFile);
}

public static BufferedImage resize(BufferedImage img, int width, int height) {
    if (img.getWidth() == width && img.getHeight() == height) {
        return img;
    } else {
        return Scalr.resize(img, width, height);
    }
}

public static BufferedImage mergeImages(BufferedImage background, BufferedImage front) {
    int width = background.getWidth();
    int height = background.getHeight();
    BufferedImage newImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    Graphics g = newImage.getGraphics();
    g.drawImage(background, 0, 0, null);
    g.drawImage(front, 15, -10, background.getWidth(), background.getHeight(), 0, 0, width, height, null);
    return newImage;
}
Here you can see the complete class: https://pastebin.com/F4VrBwRv
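One thing that stands out is the hand-tuned 1.02 scale factor and the (15, -10) offset in mergeImages. A minimal sketch of an alternative (mergeImagesExact is a hypothetical replacement for mergeImages above), assuming the two images depict the same content and differ only in resolution: let drawImage scale the front image to exactly the background's dimensions and drop the magic offsets.

public static BufferedImage mergeImagesExact(BufferedImage background, BufferedImage front) {
    int width = background.getWidth();
    int height = background.getHeight();
    BufferedImage merged = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = merged.createGraphics();
    g.drawImage(background, 0, 0, null);
    // drawImage scales the front image to the background's size,
    // so both images share the same origin and extent.
    g.drawImage(front, 0, 0, width, height, null);
    g.dispose();
    return merged;
}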

JavaFX PixelReader Alpha-Channel not working

public class Loader {
    public static byte[] loadImage(String path) {
        Image image;
        try {
            image = new Image(new FileInputStream(path));
            int width = (int) image.getWidth();
            int height = (int) image.getHeight();
            byte[] data = new byte[width * height * 4];
            image.getPixelReader().getPixels(0, 0, width, height,
                    PixelFormat.getByteBgraPreInstance(), data, 0, width * 4);
            return data;
        } catch (IOException e) {
            e.printStackTrace();
            System.exit(-1);
        }
        return null;
    }
}
@Override
public void render(GraphicsContext gc) {
    gc.clearRect(0, 0, width, height);
    gc.getPixelWriter().setPixels(x++, 0, 819, 720, PixelFormat.getByteBgraPreInstance(), data, 0, 819 * 4);
    gc.getPixelWriter().setPixels(400, 0, 819, 720, PixelFormat.getByteBgraPreInstance(), data, 0, 819 * 4);
}
Hello, at the moment I have a problem with the PixelWriter/PixelReader from JavaFX. I try to read the pixels from an Image and store them in a buffer; after that I want to render them to the screen. But the image now contains no alpha value, so there are no transparent pixels. I searched for a few hours on the internet but can't find an answer. Maybe there is a problem with the format.
Thanks in advance.
I haven't tried it (because you did not provide a complete and executable example), but I think your problem is that you use a pixel format with pre-multiplied alpha. Try again with one of the simple RGB formats without pre-multiplication.
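A minimal sketch of that suggestion: swap PixelFormat.getByteBgraPreInstance() for the non-premultiplied PixelFormat.getByteBgraInstance() on both the reading and the writing side (width, height, image, data, and gc are the variables from the code above).

// Read with plain (non-premultiplied) BGRA...
byte[] data = new byte[width * height * 4];
image.getPixelReader().getPixels(0, 0, width, height,
        PixelFormat.getByteBgraInstance(), data, 0, width * 4);

// ...and write with the same format, so the alpha channel survives the round trip.
gc.getPixelWriter().setPixels(0, 0, width, height,
        PixelFormat.getByteBgraInstance(), data, 0, width * 4);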

How do I create an image from pixels stored as r, g, b, each occupying a different position in an array?

I am reading a .jpg image and accessing the pixels as:
if (type == BufferedImage.TYPE_3BYTE_BGR) {
    System.out.println("type.3byte.bgr");
    byte[] pixels = (byte[]) sourceImage.getData().getDataElements(0, 0, w, h, null);
}
// process this array called pixels and display the resulting image
// first I convert it to integer
int offset = 0;
int[] data = new int[width * height * 3];
for (int i = 0; i < data.length; i++) {
    data[i] = pixels[offset++] & 0xff;
}
// and then process this array. For now I am not processing it. Now when I create a buffered
// image from the data array and display it, the image displayed is not the same.
// the code I use is
writeEdges(data);

private BufferedImage edgesImage;

private void writeEdges(int arb[]) {
    if (edgesImage == null) {
        edgesImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    }
    edgesImage.getWritableTile(0, 0).setDataElements(0, 0, width, height, arb);
}
For the penguins.jpg image (provided in the sample pictures in Windows 7), the output I get is not the same as the input.
I do not know how your code is running, because it would not compile, as jcomeau_ictx pointed out. The problem is that you use a different image type for reading than for writing.
Here is a code snippet that will generate the same image as the input.
public static void main(String[] args) {
    BufferedImage sourceImage = null;
    try {
        sourceImage = ImageIO.read(new File("IScSH.jpg"));
    } catch (IOException e) {
        e.printStackTrace();
    }
    int type = sourceImage.getType();
    int w = sourceImage.getWidth();
    int h = sourceImage.getHeight();
    byte[] pixels = null;
    if (type == BufferedImage.TYPE_3BYTE_BGR) {
        System.out.println("type.3byte.bgr");
        pixels = (byte[]) sourceImage.getData().getDataElements(0, 0, w, h, null);
    }
    try {
        // Write with the same type the pixels were read as (TYPE_3BYTE_BGR),
        // so the byte layout matches on both sides.
        BufferedImage edgesImage = new BufferedImage(w, h, BufferedImage.TYPE_3BYTE_BGR);
        edgesImage.getWritableTile(0, 0).setDataElements(0, 0, w, h, pixels);
        ImageIO.write(edgesImage, "jpg", new File("nvRhP.jpg"));
    } catch (IOException e) {
        e.printStackTrace();
    }
}
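If the goal is to process the pixels as ints before writing (as the question attempted), here is a minimal sketch of the missing conversion step, assuming pixels holds w * h BGR byte triplets as read above:

// Pack each B, G, R triplet into a single TYPE_INT_RGB pixel.
int[] rgb = new int[w * h];
for (int i = 0; i < rgb.length; i++) {
    int b = pixels[3 * i] & 0xff;
    int g = pixels[3 * i + 1] & 0xff;
    int r = pixels[3 * i + 2] & 0xff;
    rgb[i] = (r << 16) | (g << 8) | b;
}
BufferedImage intImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
intImage.setRGB(0, 0, w, h, rgb, 0, w);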

Java / OpenGL: Getting the image of a Canvas as a BufferedImage

I've got some code that initializes OpenGL to render to a java.awt.Canvas.
The problem is, I can't figure out how I can get the buffer of the canvas and turn it into a BufferedImage.
I've tried overriding getGraphics(), cloning the Raster, and replacing the CanvasPeer with a custom one.
I'm guessing OpenGL doesn't use java graphics in any way then, so how can I get OpenGL's buffer and convert it into a BufferedImage?
I am using LWJGL's code for setting the parent:
Display.setParent(display_parent);
Display.create();
You need to copy the data from the OpenGL buffer. I was using this method:
FloatBuffer grabScreen(GL gl) {
    int w = SCREENWIDTH;
    int h = SCREENHEIGHT;
    FloatBuffer bufor = FloatBuffer.allocate(w * h * 4); // 4 = rgba
    gl.glReadBuffer(GL.GL_FRONT);
    gl.glReadPixels(0, 0, w, h, GL.GL_RGBA, GL.GL_FLOAT, bufor); // read the front buffer into the FloatBuffer
    return bufor;
}
You need to use something similar, depending on your OpenGL wrapper. This is a JOGL example.
And here is the LWJGL version:
private static synchronized byte[] grabScreen() {
    int w = screenWidth;
    int h = screenHeight;
    ByteBuffer bufor = BufferUtils.createByteBuffer(w * h * 3);
    GL11.glReadPixels(0, 0, w, h, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, bufor); // read the framebuffer into the ByteBuffer
    byte[] byteimg = new byte[w * h * 3];
    bufor.get(byteimg, 0, byteimg.length);
    return byteimg;
}
EDIT
This may also be useful (it's not fully mine and should be tuned as well):
BufferedImage toImage(byte[] data, int w, int h) {
    if (data.length == 0)
        return null;
    // 3 bytes per pixel, so the buffer must hold w * h * 3 elements.
    DataBuffer buffer = new DataBufferByte(data, w * h * 3);
    int pixelStride = 3; // assuming r, g, b, r, g, b...
    int scanlineStride = 3 * w; // no extra padding
    int[] bandOffsets = { 0, 1, 2 }; // r, g, b
    WritableRaster raster = Raster.createInterleavedRaster(buffer, w, h, scanlineStride, pixelStride,
            bandOffsets, null);
    ColorSpace colorSpace = ColorSpace.getInstance(ColorSpace.CS_sRGB);
    boolean hasAlpha = false;
    boolean isAlphaPremultiplied = false; // no alpha band, so nothing to premultiply
    int transparency = Transparency.OPAQUE;
    int transferType = DataBuffer.TYPE_BYTE;
    ColorModel colorModel = new ComponentColorModel(colorSpace, hasAlpha, isAlphaPremultiplied,
            transparency, transferType);
    BufferedImage image = new BufferedImage(colorModel, raster, isAlphaPremultiplied, null);
    // glReadPixels returns rows bottom-up, so flip the image vertically.
    AffineTransform flip = AffineTransform.getScaleInstance(1, -1);
    flip.translate(0, -image.getHeight());
    AffineTransformOp op = new AffineTransformOp(flip, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
    return op.filter(image, null);
}
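A quick usage sketch tying the two LWJGL helpers together (screenWidth and screenHeight as defined above):

byte[] data = grabScreen();
BufferedImage screenshot = toImage(data, screenWidth, screenHeight);
try {
    ImageIO.write(screenshot, "png", new File("screenshot.png"));
} catch (IOException e) {
    e.printStackTrace();
}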
I don't think this is possible for your situation, and here's why:
LWJGL doesn't draw directly to the canvas (at least not in Windows). The canvas is only used to obtain a window handle to provide as the parent window to OpenGL. As such, the canvas is never directly drawn to. To capture the contents, you'll probably have to resort to a screen capture.
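A minimal sketch of that fallback, using java.awt.Robot to grab the canvas's on-screen bounds (note this captures whatever is visible there, including any overlapping windows):

import java.awt.AWTException;
import java.awt.Canvas;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.image.BufferedImage;

BufferedImage captureCanvas(Canvas canvas) throws AWTException {
    // getLocationOnScreen requires the canvas to be showing.
    Rectangle bounds = new Rectangle(canvas.getLocationOnScreen(), canvas.getSize());
    return new Robot().createScreenCapture(bounds);
}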
