Android QR bitmap - need help to remove the margin - java

I have encoded a String into a QR bitmap. The picture comes out like this:
What do I need to change so that there is no whitespace around the QR code? I tried to read the documentation for MultiFormatWriter() and setPixels(), but couldn't figure out where it goes wrong.
Here is the code:
Bitmap encodeAsBitmap(String str) throws WriterException {
    BitMatrix result;
    try {
        result = new MultiFormatWriter().encode(str,
                BarcodeFormat.QR_CODE, 500, 500, null);
    } catch (IllegalArgumentException iae) {
        return null;
    }
    int w = result.getWidth();
    int h = result.getHeight();
    int[] pixels = new int[w * h];
    for (int i = 0; i < h; i++) {
        int offset = i * w;
        for (int j = 0; j < w; j++) {
            pixels[offset + j] = result.get(i, j) ? BLACK : WHITE;
        }
    }
    Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, 500, 0, 0, w, h);
    return bitmap;
}

You should use the hints parameter to set a custom margin.
Map<EncodeHintType, Object> hints = new EnumMap<>(EncodeHintType.class);
hints.put(EncodeHintType.MARGIN, marginSize);
BitMatrix result = new MultiFormatWriter().encode(contentsToEncode, BarcodeFormat.QR_CODE, imageWidth, imageHeight, hints);
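For example, here is a minimal sketch of the question's method with the margin hint applied. The margin value of 0 and the BLACK/WHITE constants are illustrative assumptions; note that a QR code with no quiet zone at all may be harder for some scanners to read.

Bitmap encodeAsBitmapNoMargin(String str) throws WriterException {
    final int BLACK = 0xFF000000; // assumed color constants for illustration
    final int WHITE = 0xFFFFFFFF;
    Map<EncodeHintType, Object> hints = new EnumMap<>(EncodeHintType.class);
    hints.put(EncodeHintType.MARGIN, 0); // 0 = no quiet zone; pick the margin you want
    BitMatrix result;
    try {
        result = new MultiFormatWriter().encode(str, BarcodeFormat.QR_CODE, 500, 500, hints);
    } catch (IllegalArgumentException iae) {
        return null;
    }
    int w = result.getWidth();
    int h = result.getHeight();
    int[] pixels = new int[w * h];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            // BitMatrix.get(x, y) takes the column first, then the row
            pixels[y * w + x] = result.get(x, y) ? BLACK : WHITE;
        }
    }
    Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, w, 0, 0, w, h); // stride = actual width, not a hard-coded 500
    return bitmap;
}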

I think the problem is the way you set your pixels in the Bitmap.
According to the documentation:
stride int: The number of colors in pixels[] to skip between rows. Normally this value will be the same as the width of the bitmap, but it can be larger (or negative).
So I suggest the following:
bitmap.setPixels(pixels, 0, w, 0, 0, w, h);
Edit:
Just noticed that you assume the size is 500. You can pass it in as a parameter instead (assuming your image is a square). If it is a rectangle, you have to be able to compute the width and height somehow so the MultiFormatWriter can use them.
So your code can be:
Bitmap encodeAsBitmap(String str, int size) throws WriterException {
    BitMatrix result;
    try {
        result = new MultiFormatWriter().encode(str,
                BarcodeFormat.QR_CODE, size, size, null);
    } catch (IllegalArgumentException iae) {
        return null;
    }
    int[] pixels = new int[size * size];
    for (int i = 0; i < size; i++) {
        int offset = i * size;
        for (int j = 0; j < size; j++) {
            // BitMatrix.get(x, y) takes the column first, then the row
            pixels[offset + j] = result.get(j, i) ? BLACK : WHITE;
        }
    }
    Bitmap bitmap = Bitmap.createBitmap(size, size, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, size, 0, 0, size, size);
    return bitmap;
}
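A short usage sketch (the ImageView id R.id.qr_image and the size of 500 are placeholders of my own, not from the question), assuming this runs inside an Activity after the layout is inflated:

try {
    Bitmap qr = encodeAsBitmap("some text to encode", 500);
    if (qr != null) {
        ((ImageView) findViewById(R.id.qr_image)).setImageBitmap(qr);
    }
} catch (WriterException e) {
    e.printStackTrace();
}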

Related

How to scale a transparent org.eclipse.swt.graphics.Image, loaded from a PNG; Java

I have an org.eclipse.swt.graphics.Image, loaded from a PNG, and want to scale it in high quality (antialiasing, interpolation). But I do not want to lose transparency and get just a white background. (I need this Image to put it on an org.eclipse.swt.widgets.Label.)
Does anybody know how to do that?
Thank you!
Based on Mark's answer I found a better solution without the "hacky bit": first copy the alphaData from the original, then use GC to scale the image.
public static Image scaleImage(final Device device, final Image orig, final int scaledWidth, final int scaledHeight) {
    final Rectangle origBounds = orig.getBounds();
    if (origBounds.width == scaledWidth && origBounds.height == scaledHeight) {
        return orig;
    }
    final ImageData origData = orig.getImageData();
    final ImageData destData = new ImageData(scaledWidth, scaledHeight, origData.depth, origData.palette);
    if (origData.alphaData != null) {
        destData.alphaData = new byte[destData.width * destData.height];
        for (int destRow = 0; destRow < destData.height; destRow++) {
            for (int destCol = 0; destCol < destData.width; destCol++) {
                final int origRow = destRow * origData.height / destData.height;
                final int origCol = destCol * origData.width / destData.width;
                final int o = origRow * origData.width + origCol;
                final int d = destRow * destData.width + destCol;
                destData.alphaData[d] = origData.alphaData[o];
            }
        }
    }
    final Image dest = new Image(device, destData);
    final GC gc = new GC(dest);
    gc.setAntialias(SWT.ON);
    gc.setInterpolation(SWT.HIGH);
    gc.drawImage(orig, 0, 0, origBounds.width, origBounds.height, 0, 0, scaledWidth, scaledHeight);
    gc.dispose();
    return dest;
}
This way we don't have to make assumptions about the underlying ImageData.
Using a method described by Sean Bright here: https://stackoverflow.com/a/15685473/6245535, we can extract the alpha information from the image and use it to fill the ImageData.alphaData array which is responsible for the transparency:
public static Image resizeImage(Display display, Image image, int width, int height) {
    Image scaled = new Image(display, width, height);
    GC gc = new GC(scaled);
    gc.setAntialias(SWT.ON);
    gc.setInterpolation(SWT.HIGH);
    gc.drawImage(image, 0, 0, image.getBounds().width, image.getBounds().height, 0, 0, width, height);
    gc.dispose();
    ImageData canvasData = scaled.getImageData();
    canvasData.alphaData = new byte[width * height];
    // This is the hacky bit that is making assumptions about
    // the underlying ImageData. In my case it is 32 bit data
    // so every 4th byte in the data array is the alpha for that
    // pixel...
    for (int idx = 0; idx < (width * height); idx++) {
        int coord = (idx * 4) + 3;
        canvasData.alphaData[idx] = canvasData.data[coord];
    }
    // Now that we've set the alphaData, we can create our
    // final image
    Image finalImage = new Image(display, canvasData);
    scaled.dispose();
    return finalImage;
}
Note that this method assumes that you are working with a 32-bit color depth; it won't work otherwise.
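If you want to be defensive about that assumption, a rough sketch (my own addition, reusing the variables from the snippet above) is to check the reported depth of the scaled ImageData before touching the byte layout:

ImageData canvasData = scaled.getImageData();
if (canvasData.depth == 32) {
    canvasData.alphaData = new byte[width * height];
    for (int idx = 0; idx < (width * height); idx++) {
        // with 32-bit data every 4th byte is assumed to be the alpha of that pixel
        canvasData.alphaData[idx] = canvasData.data[(idx * 4) + 3];
    }
} else {
    // fall back to the original image or handle other depths explicitly
}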

Android: How to pass Bitmap Images to ArrayList<Bitmap> and retrieve it

"Base64Parts" is a string that has been split into equal parts, and I am trying to generate a QR code for each part and place it in an ArrayList so that I can retrieve it and generate a GIF. Am I adding the bitmap images to the ArrayList in the correct way? Because I can only retrieve bmp_images.get(0), not the others (e.g. bmp_images.get(1)). My code is given below.
// Declaring QR code generator
QRCodeWriter writer = new QRCodeWriter();
// Declaring Array
ArrayList<Bitmap> bmp_images = new ArrayList<Bitmap>();
for (int i = 0; i < numberOfPartsSplit; i++) {
    try {
        Hashtable<EncodeHintType, ErrorCorrectionLevel> hintMap = new Hashtable<EncodeHintType, ErrorCorrectionLevel>();
        hintMap.put(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.L);
        BitMatrix bitMatrix = writer.encode(Base64Parts.get(i), BarcodeFormat.QR_CODE, 512, 512, hintMap);
        int width = bitMatrix.getWidth();
        int height = bitMatrix.getHeight();
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                bmp.setPixel(x, y, bitMatrix.get(x, y) ? Color.BLACK : Color.WHITE);
            }
        }
        bmp_images.add(i, bmp); // the code added for arraylist of images
        ((ImageView) findViewById(R.id.image_holder)).setImageBitmap(bmp_images.get(0)); // use different values
    } catch (WriterException e) {
        e.printStackTrace();
    }
}
((ImageView) findViewById(R.id.image_holder)).setImageBitmap(bmp_images.get(0));
Move this line outside the for loop. Once all iterations are done, pass whichever index you need to get that particular bitmap.
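In other words, a minimal sketch of the suggested change (the index 1 and the view id are placeholders taken from the question):

for (int i = 0; i < numberOfPartsSplit; i++) {
    // ... build the QR bitmap for Base64Parts.get(i) exactly as in the question ...
    bmp_images.add(i, bmp);
}
// After the loop all parts are in the list, so any index can be shown:
((ImageView) findViewById(R.id.image_holder)).setImageBitmap(bmp_images.get(1));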

Awkward image after applying the Sobel operator - java

After a deep search I cannot understand why my result image is not what I expect compared to the one from the Wikipedia article on the Sobel operator, even though I use the same kernel for the Sobel operator.
http://s29.postimg.org/kjex7dx6f/300px_Valve_original_1.png
http://s14.postimg.org/vxhvffm29/Untitled.png
So, I have a button listener that loads a BMP image, applies Sobel, and displays an ImageIcon.
Here is the code:
javax.swing.JFileChooser choose = new javax.swing.JFileChooser();
choose.setFileFilter(new DoFileFilter(".bmp"));
int returnVal = choose.showOpenDialog(this);
if (returnVal == javax.swing.JFileChooser.APPROVE_OPTION) {
    try {
        java.io.FileInputStream imgis = null;
        // System.out.println("Ai ales fisierul : " + choose.getSelectedFile());
        String path = choose.getSelectedFile().toString();
        Path.setText(path);
        imgis = new java.io.FileInputStream(path);
        java.awt.image.BufferedImage img = javax.imageio.ImageIO.read(imgis);
        DirectImgToSobel ds = new DirectImgToSobel(img);
        javax.swing.ImageIcon image;
        image = new javax.swing.ImageIcon(ds.getBuffImg());
        ImgPrev.setIcon(image);
        javax.swing.JFrame frame = (javax.swing.JFrame) javax.swing.SwingUtilities.getWindowAncestor(jPanel1);
        frame.pack();
        frame.repaint();
    } catch (FileNotFoundException ex) {
        Logger.getLogger(Display.class.getName()).log(Level.SEVERE, null, ex);
    } catch (IOException ex) {
        Logger.getLogger(Display.class.getName()).log(Level.SEVERE, null, ex);
    }
}
And the Sobel class:
public class DirectImgToSobel {
    private final java.awt.image.BufferedImage img;
    private java.awt.image.BufferedImage buffimg;
    private int[][]
            sobel_x = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } },
            sobel_y = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };

    public DirectImgToSobel() {
        this.img = null;
    }

    public DirectImgToSobel(java.awt.image.BufferedImage img) {
        this.img = img;
        aplicaFiltru();
    }

    private void aplicaFiltru() {
        this.buffimg = new java.awt.image.BufferedImage(this.img.getWidth(), this.img.getHeight(),
                java.awt.image.BufferedImage.TYPE_BYTE_GRAY);
        for (int x = 1; x < this.img.getWidth() - 1; x++) {
            for (int y = 1; y < this.img.getHeight() - 1; y++) {
                int pixel_x =
                        (sobel_x[0][0] * img.getRGB(x - 1, y - 1)) + (sobel_x[0][1] * img.getRGB(x, y - 1)) + (sobel_x[0][2] * img.getRGB(x + 1, y - 1)) +
                        (sobel_x[1][0] * img.getRGB(x - 1, y)) + (sobel_x[1][1] * img.getRGB(x, y)) + (sobel_x[1][2] * img.getRGB(x + 1, y)) +
                        (sobel_x[2][0] * img.getRGB(x - 1, y + 1)) + (sobel_x[2][1] * img.getRGB(x, y + 1)) + (sobel_x[2][2] * img.getRGB(x + 1, y + 1));
                int pixel_y =
                        (sobel_y[0][0] * img.getRGB(x - 1, y - 1)) + (sobel_y[0][1] * img.getRGB(x, y - 1)) + (sobel_y[0][2] * img.getRGB(x + 1, y - 1)) +
                        (sobel_y[1][0] * img.getRGB(x - 1, y)) + (sobel_y[1][1] * img.getRGB(x, y)) + (sobel_y[1][2] * img.getRGB(x + 1, y)) +
                        (sobel_y[2][0] * img.getRGB(x - 1, y + 1)) + (sobel_y[2][1] * img.getRGB(x, y + 1)) + (sobel_y[2][2] * img.getRGB(x + 1, y + 1));
                this.buffimg.setRGB(x, y, (int) Math.sqrt(pixel_x * pixel_x + pixel_y * pixel_y));
            }
        }
        buffimg = thresholdImage(buffimg, 28);
        java.awt.Graphics g = buffimg.getGraphics();
        g.drawImage(buffimg, 0, 0, null);
        g.dispose();
    }

    public java.awt.image.BufferedImage getBuffImg() {
        return this.buffimg;
    }

    public static java.awt.image.BufferedImage thresholdImage(java.awt.image.BufferedImage image, int threshold) {
        java.awt.image.BufferedImage result = new java.awt.image.BufferedImage(image.getWidth(), image.getHeight(),
                java.awt.image.BufferedImage.TYPE_BYTE_GRAY);
        result.getGraphics().drawImage(image, 0, 0, null);
        java.awt.image.WritableRaster raster = result.getRaster();
        int[] pixels = new int[image.getWidth()];
        for (int y = 0; y < image.getHeight(); y++) {
            raster.getPixels(0, y, image.getWidth(), 1, pixels);
            for (int i = 0; i < pixels.length; i++) {
                if (pixels[i] < threshold)
                    pixels[i] = 0;
                else
                    pixels[i] = 255;
            }
            raster.setPixels(0, y, image.getWidth(), 1, pixels);
        }
        return result;
    }
}
To obtain the same result as in Wikipedia you have to:
Use the brightness of each image point instead of the packed-int color value that getRGB returns.
Normalize the result (map low values to black and high values to white).
EDIT: I accidentally found a good article about Sobel filters in Java: http://asserttrue.blogspot.ru/2010/08/smart-sobel-image-filter.html
EDIT2: Check the question How to convert get.rgb(x,y) integer pixel to Color(r,g,b,a) in Java?, which describes how to extract colors from an image.
But my suggestion is to compute the brightness, for example Color c = new Color(img.getRGB(x, y)); float brightness = Color.RGBtoHSB(c.getRed(), c.getGreen(), c.getBlue(), null)[2];, and apply Sobel to that brightness.
About your threshold function: you should produce a grayscale image, not a black-and-white one.
like:
if (pixels[i] < threshold) pixels[i] = 0;
else pixels[i] = (int)((pixels[i] - threshold)/(255.0 - threshold)*255.0);
But, again, a packed RGBA color representation isn't suitable for this math.
Normalization will be improved by finding the minimum and maximum pixel values and stretching the (min-max) range to (0-255).
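Putting those two points together, here is a rough sketch (the class and method names are my own, not from the question): it applies Sobel to the HSB brightness channel and then stretches the gradient magnitudes to 0-255.

import java.awt.Color;
import java.awt.image.BufferedImage;

public class SobelBrightness {
    public static BufferedImage apply(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        // 1) Extract the brightness channel instead of the packed RGB int
        float[][] bright = new float[w][h];
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                Color c = new Color(src.getRGB(x, y));
                bright[x][y] = Color.RGBtoHSB(c.getRed(), c.getGreen(), c.getBlue(), null)[2];
            }
        }
        // 2) Convolve with the Sobel kernels and track the min/max magnitude
        double[][] mag = new double[w][h];
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (int x = 1; x < w - 1; x++) {
            for (int y = 1; y < h - 1; y++) {
                double gx = -bright[x - 1][y - 1] + bright[x + 1][y - 1]
                        - 2 * bright[x - 1][y] + 2 * bright[x + 1][y]
                        - bright[x - 1][y + 1] + bright[x + 1][y + 1];
                double gy = -bright[x - 1][y - 1] - 2 * bright[x][y - 1] - bright[x + 1][y - 1]
                        + bright[x - 1][y + 1] + 2 * bright[x][y + 1] + bright[x + 1][y + 1];
                mag[x][y] = Math.sqrt(gx * gx + gy * gy);
                min = Math.min(min, mag[x][y]);
                max = Math.max(max, mag[x][y]);
            }
        }
        // 3) Stretch the (min-max) range of gradient magnitudes to (0-255)
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        double range = (max - min) == 0 ? 1 : (max - min);
        for (int x = 1; x < w - 1; x++) {
            for (int y = 1; y < h - 1; y++) {
                int v = (int) ((mag[x][y] - min) / range * 255.0);
                out.getRaster().setSample(x, y, 0, v);
            }
        }
        return out;
    }
}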
change the image type from
TYPE_BYTE_GRAY to TYPE_INT_RGB
use the correct color channel to convolve
sobel_x[0][0] * new Color(img.getRGB(x-1,y-1)).getBlue()
pack the convolved value back into bit-packed RGB, and set the color
int packedRGB=(int)Math.sqrt(pixel_x*pixel_x+pixel_y*pixel_y);
packedRGB=(packedRGB << 16 | packedRGB << 8 | packedRGB);
this.buffimg.setRGB(x, y, packedRGB);
Convolution accepts only one color channel (it can be r, g, b, or gray [(r+g+b)/3]) and returns one color channel; that's why you have to pack it back into bit-packed RGB, because BufferedImage.setRGB() takes only bit-packed RGB.
My code
static BufferedImage inputImg, outputImg;
static int[][] pixelMatrix = new int[3][3];

public static void main(String[] args) {
    try {
        inputImg = ImageIO.read(new File("your input image"));
        outputImg = new BufferedImage(inputImg.getWidth(), inputImg.getHeight(), TYPE_INT_RGB);
        for (int i = 1; i < inputImg.getWidth() - 1; i++) {
            for (int j = 1; j < inputImg.getHeight() - 1; j++) {
                pixelMatrix[0][0] = new Color(inputImg.getRGB(i - 1, j - 1)).getRed();
                pixelMatrix[0][1] = new Color(inputImg.getRGB(i - 1, j)).getRed();
                pixelMatrix[0][2] = new Color(inputImg.getRGB(i - 1, j + 1)).getRed();
                pixelMatrix[1][0] = new Color(inputImg.getRGB(i, j - 1)).getRed();
                pixelMatrix[1][2] = new Color(inputImg.getRGB(i, j + 1)).getRed();
                pixelMatrix[2][0] = new Color(inputImg.getRGB(i + 1, j - 1)).getRed();
                pixelMatrix[2][1] = new Color(inputImg.getRGB(i + 1, j)).getRed();
                pixelMatrix[2][2] = new Color(inputImg.getRGB(i + 1, j + 1)).getRed();
                int edge = (int) convolution(pixelMatrix);
                outputImg.setRGB(i, j, (edge << 16 | edge << 8 | edge));
            }
        }
        File outputfile = new File("your output image");
        ImageIO.write(outputImg, "jpg", outputfile);
    } catch (IOException ex) {
        System.err.println("Image width:height=" + inputImg.getWidth() + ":" + inputImg.getHeight());
    }
}

public static double convolution(int[][] pixelMatrix) {
    int gy = (pixelMatrix[0][0] * -1) + (pixelMatrix[0][1] * -2) + (pixelMatrix[0][2] * -1) + (pixelMatrix[2][0]) + (pixelMatrix[2][1] * 2) + (pixelMatrix[2][2] * 1);
    int gx = (pixelMatrix[0][0]) + (pixelMatrix[0][2] * -1) + (pixelMatrix[1][0] * 2) + (pixelMatrix[1][2] * -2) + (pixelMatrix[2][0]) + (pixelMatrix[2][2] * -1);
    return Math.sqrt(Math.pow(gy, 2) + Math.pow(gx, 2));
}

compare bitmaps after taking views of the android screen - compare method not working

I'm trying to compare two different views by comparing their images to see whether they are the same or not. This is my code:
public boolean equals(View view1, View view2) {
    view1.setDrawingCacheEnabled(true);
    view1.buildDrawingCache();
    Bitmap b1 = view1.getDrawingCache();
    view2.setDrawingCacheEnabled(true);
    view2.buildDrawingCache();
    Bitmap b2 = view2.getDrawingCache();
    ByteBuffer buffer1 = ByteBuffer.allocate(b1.getHeight() * b1.getRowBytes());
    b1.copyPixelsToBuffer(buffer1);
    ByteBuffer buffer2 = ByteBuffer.allocate(b2.getHeight() * b2.getRowBytes());
    b2.copyPixelsToBuffer(buffer2);
    return Arrays.equals(buffer1.array(), buffer2.array());
}
However, this is returning true no matter what. Can anyone tell me what I'm doing wrong?
Not sure what's wrong with that code, if anything, but did you try Bitmap.sameAs(Bitmap)?
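A minimal sketch of that check, reusing the drawing-cache bitmaps from the question:

Bitmap b1 = view1.getDrawingCache();
Bitmap b2 = view2.getDrawingCache();
// sameAs() compares config, dimensions and pixel data (available since API 12)
boolean identical = b1 != null && b2 != null && b1.sameAs(b2);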
UPDATE: The code below works fine, but your code above seems to always return null from .getDrawingCache(); not sure if this is your problem or not. I don't have the time to look too deeply into this, but you might check the question "getDrawingCache() returns null" to see how a similar problem was solved, or else please provide your logcat.
Here is a port (not checked too stringently) of the sameAs function from API 15; the function itself was introduced in API 12.
One special check they do is to see whether the image is an alpha-only (ALPHA_8) bitmap, plus a few optimizations to avoid the array comparison where possible (probably not an issue with your use case). Might as well take advantage of the open source when you can ;-)
boolean SameAs(Bitmap A, Bitmap B) {
    // Different types of image
    if (A.getConfig() != B.getConfig())
        return false;
    // Different sizes
    if (A.getWidth() != B.getWidth())
        return false;
    if (A.getHeight() != B.getHeight())
        return false;
    // Allocate arrays - OK because at worst we have 3 bytes + Alpha (?)
    int w = A.getWidth();
    int h = A.getHeight();
    int[] argbA = new int[w * h];
    int[] argbB = new int[w * h];
    A.getPixels(argbA, 0, w, 0, 0, w, h);
    B.getPixels(argbB, 0, w, 0, 0, w, h);
    // Alpha channel special check
    if (A.getConfig() == Config.ALPHA_8) {
        // in this case we have to manually compare the alpha channel as the rest is garbage.
        final int length = w * h;
        for (int i = 0; i < length; i++) {
            if ((argbA[i] & 0xFF000000) != (argbB[i] & 0xFF000000)) {
                return false;
            }
        }
        return true;
    }
    return Arrays.equals(argbA, argbB);
}
@Idistic's answer helped me to get another solution which also works for images with higher resolutions that can cause an OutOfMemoryError. The main idea is to split the images into several parts and compare their bytes. In my case 10 parts were enough, and I think that is enough for most cases.
private boolean compareBitmaps(Bitmap bitmap1, Bitmap bitmap2)
{
    if (Build.VERSION.SDK_INT > 11)
    {
        return bitmap1.sameAs(bitmap2);
    }
    int chunkNumbers = 10;
    int rows, cols;
    int chunkHeight, chunkWidth;
    rows = cols = (int) Math.sqrt(chunkNumbers);
    chunkHeight = bitmap1.getHeight() / rows;
    chunkWidth = bitmap1.getWidth() / cols;
    int yCoord = 0;
    for (int x = 0; x < rows; x++)
    {
        int xCoord = 0;
        for (int y = 0; y < cols; y++)
        {
            try
            {
                Bitmap bitmapChunk1 = Bitmap.createBitmap(bitmap1, xCoord, yCoord, chunkWidth, chunkHeight);
                Bitmap bitmapChunk2 = Bitmap.createBitmap(bitmap2, xCoord, yCoord, chunkWidth, chunkHeight);
                if (!sameAs(bitmapChunk1, bitmapChunk2))
                {
                    recycleBitmaps(bitmapChunk1, bitmapChunk2);
                    return false;
                }
                recycleBitmaps(bitmapChunk1, bitmapChunk2);
                xCoord += chunkWidth;
            }
            catch (Exception e)
            {
                return false;
            }
        }
        yCoord += chunkHeight;
    }
    return true;
}

private boolean sameAs(Bitmap bitmap1, Bitmap bitmap2)
{
    // Different types of image
    if (bitmap1.getConfig() != bitmap2.getConfig())
        return false;
    // Different sizes
    if (bitmap1.getWidth() != bitmap2.getWidth())
        return false;
    if (bitmap1.getHeight() != bitmap2.getHeight())
        return false;
    int w = bitmap1.getWidth();
    int h = bitmap1.getHeight();
    int[] argbA = new int[w * h];
    int[] argbB = new int[w * h];
    bitmap1.getPixels(argbA, 0, w, 0, 0, w, h);
    bitmap2.getPixels(argbB, 0, w, 0, 0, w, h);
    // Alpha channel special check
    if (bitmap1.getConfig() == Bitmap.Config.ALPHA_8)
    {
        final int length = w * h;
        for (int i = 0; i < length; i++)
        {
            if ((argbA[i] & 0xFF000000) != (argbB[i] & 0xFF000000))
            {
                return false;
            }
        }
        return true;
    }
    return Arrays.equals(argbA, argbB);
}

private void recycleBitmaps(Bitmap bitmap1, Bitmap bitmap2)
{
    bitmap1.recycle();
    bitmap2.recycle();
    bitmap1 = null;
    bitmap2 = null;
}

Two similar methods with BufferedImage, one working, one not. Why?

I have tried to make a method which changes one color of a BufferedImage to be invisible.
I can't find the solution myself, so I am asking for your help.
Here is the method I made:
public static BufferedImage makeWithoutColor(BufferedImage img, Color col)
{
    BufferedImage img1 = img;
    BufferedImage img2 = new BufferedImage(img1.getWidth(), img1.getHeight(), BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = img2.createGraphics();
    g.setComposite(AlphaComposite.Src);
    g.drawImage(img1, null, 0, 0);
    g.dispose();
    for (int i = 0; i < img2.getWidth(); i++)
    {
        for (int j = 0; i < img2.getHeight(); i++)
        {
            if (img2.getRGB(i, j) == col.getRGB())
            {
                img2.setRGB(i, j, 0x8F1C1C);
            }
        }
    }
    return img2;
}
And here is one from a tutorial I read.
public static BufferedImage makeColorTransparent(BufferedImage ref, Color color) {
    BufferedImage image = ref;
    BufferedImage dimg = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = dimg.createGraphics();
    g.setComposite(AlphaComposite.Src);
    g.drawImage(image, null, 0, 0);
    g.dispose();
    for (int i = 0; i < dimg.getHeight(); i++) {
        for (int j = 0; j < dimg.getWidth(); j++) {
            if (dimg.getRGB(j, i) == color.getRGB()) {
                dimg.setRGB(j, i, 0x8F1C1C);
            }
        }
    }
    return dimg;
}
Your mistake is this line:
for(int j = 0; i < img2.getHeight(); i++)
should be:
for(int j = 0; j < img2.getHeight(); j++)
// ^ ^ as Ted mentioned...
I assume that by "invisible" you mean that you want to make one color transparent. You aren't going to be able to do it using this approach, because setRGB doesn't affect the alpha channel. You are better off using an image filter. Here's an approach taken from this thread:
public static Image makeWithoutColor(BufferedImage img, final Color col)
{
    ImageFilter filter = new RGBImageFilter() {
        // the color we are looking for... Alpha bits are set to opaque
        public int markerRGB = col.getRGB() | 0xFF000000;

        public final int filterRGB(int x, int y, int rgb) {
            if ((rgb | 0xFF000000) == markerRGB) {
                // Mark the alpha bits as zero - transparent
                return 0x00FFFFFF & rgb;
            } else {
                // nothing to do
                return rgb;
            }
        }
    };
    ImageProducer ip = new FilteredImageSource(img.getSource(), filter);
    return Toolkit.getDefaultToolkit().createImage(ip);
}
This will turn any pixel with the indicated RGB color and any transparency to a fully transparent pixel of the same color.
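A hedged usage sketch (the file names are placeholders I chose, not from the answer): since createImage returns an asynchronously produced java.awt.Image, you typically force it to load and then draw it into an ARGB BufferedImage before saving or further processing.

BufferedImage src = ImageIO.read(new File("input.png"));
Image transparent = makeWithoutColor(src, Color.WHITE); // make white pixels transparent
// Force the filtered image to finish loading
Image loaded = new ImageIcon(transparent).getImage();
// Draw it into an ARGB BufferedImage so ImageIO can write it with transparency
BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g2 = out.createGraphics();
g2.drawImage(loaded, 0, 0, null);
g2.dispose();
ImageIO.write(out, "png", new File("output.png"));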

Categories

Resources