After a deep search I still cannot understand why my result image is not what I expect compared to the one from Wikipedia (Sobel operator), even though I use the same kernels for the Sobel operator.
http://s29.postimg.org/kjex7dx6f/300px_Valve_original_1.png
http://s14.postimg.org/vxhvffm29/Untitled.png
So, I have a button listener that loads a .bmp image, applies Sobel, and displays an ImageIcon.
Here is the code:
javax.swing.JFileChooser choose = new javax.swing.JFileChooser();
choose.setFileFilter(new DoFileFilter(".bmp"));
int returnVal = choose.showOpenDialog(this);
if (returnVal == javax.swing.JFileChooser.APPROVE_OPTION) {
    try {
        java.io.FileInputStream imgis = null;
        // System.out.println("You chose the file: " + choose.getSelectedFile());
        String path = choose.getSelectedFile().toString();
        Path.setText(path);
        imgis = new java.io.FileInputStream(path);
        java.awt.image.BufferedImage img = javax.imageio.ImageIO.read(imgis);
        DirectImgToSobel ds = new DirectImgToSobel(img);
        javax.swing.ImageIcon image;
        image = new javax.swing.ImageIcon(ds.getBuffImg());
        ImgPrev.setIcon(image);
        javax.swing.JFrame frame = (javax.swing.JFrame) javax.swing.SwingUtilities.getWindowAncestor(jPanel1);
        frame.pack();
        frame.repaint();
    } catch (FileNotFoundException ex) {
        Logger.getLogger(Display.class.getName()).log(Level.SEVERE, null, ex);
    } catch (IOException ex) {
        Logger.getLogger(Display.class.getName()).log(Level.SEVERE, null, ex);
    }
}
And the Sobel class:
public class DirectImgToSobel {

    private final java.awt.image.BufferedImage img;
    private java.awt.image.BufferedImage buffimg;
    private int[][]
            sobel_x = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } },
            sobel_y = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };

    public DirectImgToSobel() {
        this.img = null;
    }

    public DirectImgToSobel(java.awt.image.BufferedImage img) {
        this.img = img;
        aplicaFiltru();
    }

    private void aplicaFiltru() {
        this.buffimg = new java.awt.image.BufferedImage(this.img.getWidth(), this.img.getHeight(),
                java.awt.image.BufferedImage.TYPE_BYTE_GRAY);
        for (int x = 1; x < this.img.getWidth() - 1; x++) {
            for (int y = 1; y < this.img.getHeight() - 1; y++) {
                int pixel_x =
                        (sobel_x[0][0] * img.getRGB(x - 1, y - 1)) + (sobel_x[0][1] * img.getRGB(x, y - 1)) + (sobel_x[0][2] * img.getRGB(x + 1, y - 1)) +
                        (sobel_x[1][0] * img.getRGB(x - 1, y))     + (sobel_x[1][1] * img.getRGB(x, y))     + (sobel_x[1][2] * img.getRGB(x + 1, y)) +
                        (sobel_x[2][0] * img.getRGB(x - 1, y + 1)) + (sobel_x[2][1] * img.getRGB(x, y + 1)) + (sobel_x[2][2] * img.getRGB(x + 1, y + 1));
                int pixel_y =
                        (sobel_y[0][0] * img.getRGB(x - 1, y - 1)) + (sobel_y[0][1] * img.getRGB(x, y - 1)) + (sobel_y[0][2] * img.getRGB(x + 1, y - 1)) +
                        (sobel_y[1][0] * img.getRGB(x - 1, y))     + (sobel_y[1][1] * img.getRGB(x, y))     + (sobel_y[1][2] * img.getRGB(x + 1, y)) +
                        (sobel_y[2][0] * img.getRGB(x - 1, y + 1)) + (sobel_y[2][1] * img.getRGB(x, y + 1)) + (sobel_y[2][2] * img.getRGB(x + 1, y + 1));
                this.buffimg.setRGB(x, y, (int) Math.sqrt(pixel_x * pixel_x + pixel_y * pixel_y));
            }
        }
        buffimg = thresholdImage(buffimg, 28);
        java.awt.Graphics g = buffimg.getGraphics();
        g.drawImage(buffimg, 0, 0, null);
        g.dispose();
    }

    public java.awt.image.BufferedImage getBuffImg() {
        return this.buffimg;
    }

    public static java.awt.image.BufferedImage thresholdImage(java.awt.image.BufferedImage image, int threshold) {
        java.awt.image.BufferedImage result = new java.awt.image.BufferedImage(image.getWidth(), image.getHeight(),
                java.awt.image.BufferedImage.TYPE_BYTE_GRAY);
        result.getGraphics().drawImage(image, 0, 0, null);
        java.awt.image.WritableRaster raster = result.getRaster();
        int[] pixels = new int[image.getWidth()];
        for (int y = 0; y < image.getHeight(); y++) {
            raster.getPixels(0, y, image.getWidth(), 1, pixels);
            for (int i = 0; i < pixels.length; i++) {
                if (pixels[i] < threshold)
                    pixels[i] = 0;
                else
                    pixels[i] = 255;
            }
            raster.setPixels(0, y, image.getWidth(), 1, pixels);
        }
        return result;
    }
}
To obtain the same result as on Wikipedia you have to:
Use the brightness of each image point instead of the packed color int that getRGB returns.
Normalize the result (map low values to black and high values to white).
EDIT: I accidentally found a good article about Sobel filters in Java: http://asserttrue.blogspot.ru/2010/08/smart-sobel-image-filter.html
EDIT2: Check this question: How to convert get.rgb(x,y) integer pixel to Color(r,g,b,a) in Java? It describes how to extract the colors from an image.
But my suggestion is to take the brightness, e.g. java.awt.Color c = new java.awt.Color(img.getRGB(x, y)); float brightness = java.awt.Color.RGBtoHSB(c.getRed(), c.getGreen(), c.getBlue(), null)[2]; (note that RGBtoHSB is a static method), and apply Sobel to that brightness.
About your threshold function: you should get a grayscale image, not black-and-white, like:
if (pixels[i] < threshold) pixels[i] = 0;
else pixels[i] = (int) ((pixels[i] - threshold) / (255.0 - threshold) * 255.0);
But, again, the packed RGBA color representation isn't suitable for math.
Normalizing is improved by finding the minimum and maximum pixel values and stretching the (min..max) range to (0..255).
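Combining those suggestions, here is a minimal sketch of the whole pipeline (my own illustration, not a drop-in fix: it uses a simple channel average as brightness and a min-max stretch for normalization):

// Sketch: Sobel on per-pixel brightness, then min-max normalization to 0..255.
public static java.awt.image.BufferedImage sobelBrightness(java.awt.image.BufferedImage img) {
    int w = img.getWidth(), h = img.getHeight();
    double[][] lum = new double[h][w];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int rgb = img.getRGB(x, y);
            // simple brightness: average of the three unpacked channels
            lum[y][x] = (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 3.0;
        }
    }
    double[][] mag = new double[h][w];
    double min = Double.MAX_VALUE, max = 0;
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            double gx = -lum[y - 1][x - 1] + lum[y - 1][x + 1] - 2 * lum[y][x - 1]
                    + 2 * lum[y][x + 1] - lum[y + 1][x - 1] + lum[y + 1][x + 1];
            double gy = -lum[y - 1][x - 1] - 2 * lum[y - 1][x] - lum[y - 1][x + 1]
                    + lum[y + 1][x - 1] + 2 * lum[y + 1][x] + lum[y + 1][x + 1];
            mag[y][x] = Math.sqrt(gx * gx + gy * gy);
            min = Math.min(min, mag[y][x]);
            max = Math.max(max, mag[y][x]);
        }
    }
    java.awt.image.BufferedImage out = new java.awt.image.BufferedImage(w, h,
            java.awt.image.BufferedImage.TYPE_BYTE_GRAY);
    double range = (max > min) ? max - min : 1; // avoid division by zero on flat images
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int v = (int) ((mag[y][x] - min) / range * 255.0); // stretch (min..max) to (0..255)
            out.setRGB(x, y, v << 16 | v << 8 | v);
        }
    }
    return out;
}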
Change the image type from TYPE_BYTE_GRAY to TYPE_INT_RGB.
Use the correct color channel to convolve:
sobel_x[0][0] * new Color(img.getRGB(x-1, y-1)).getBlue()
Pack the convolved value back into bit-packed RGB, and set the color:
int packedRGB = (int) Math.sqrt(pixel_x * pixel_x + pixel_y * pixel_y);
packedRGB = (packedRGB << 16 | packedRGB << 8 | packedRGB);
this.buffimg.setRGB(x, y, packedRGB);
Convolution accepts only one color channel, which can be r, g, b, or gray [(r+g+b)/3], and it returns one channel; that's why you have to pack it back into bit-packed RGB, because BufferedImage.setRGB() takes only bit-packed RGB.
My code
static BufferedImage inputImg, outputImg;
static int[][] pixelMatrix = new int[3][3];

public static void main(String[] args) {
    try {
        inputImg = ImageIO.read(new File("your input image"));
        outputImg = new BufferedImage(inputImg.getWidth(), inputImg.getHeight(), TYPE_INT_RGB);
        for (int i = 1; i < inputImg.getWidth() - 1; i++) {
            for (int j = 1; j < inputImg.getHeight() - 1; j++) {
                pixelMatrix[0][0] = new Color(inputImg.getRGB(i - 1, j - 1)).getRed();
                pixelMatrix[0][1] = new Color(inputImg.getRGB(i - 1, j)).getRed();
                pixelMatrix[0][2] = new Color(inputImg.getRGB(i - 1, j + 1)).getRed();
                pixelMatrix[1][0] = new Color(inputImg.getRGB(i, j - 1)).getRed();
                pixelMatrix[1][2] = new Color(inputImg.getRGB(i, j + 1)).getRed();
                pixelMatrix[2][0] = new Color(inputImg.getRGB(i + 1, j - 1)).getRed();
                pixelMatrix[2][1] = new Color(inputImg.getRGB(i + 1, j)).getRed();
                pixelMatrix[2][2] = new Color(inputImg.getRGB(i + 1, j + 1)).getRed();

                // clamp to 255 so the shifted channels below cannot overflow
                int edge = (int) Math.min(255, convolution(pixelMatrix));
                outputImg.setRGB(i, j, (edge << 16 | edge << 8 | edge));
            }
        }
        File outputfile = new File("your output image");
        ImageIO.write(outputImg, "jpg", outputfile);
    } catch (IOException ex) {
        System.err.println("Image width:height=" + inputImg.getWidth() + ":" + inputImg.getHeight());
    }
}

public static double convolution(int[][] pixelMatrix) {
    int gy = (pixelMatrix[0][0] * -1) + (pixelMatrix[0][1] * -2) + (pixelMatrix[0][2] * -1)
            + (pixelMatrix[2][0]) + (pixelMatrix[2][1] * 2) + (pixelMatrix[2][2] * 1);
    int gx = (pixelMatrix[0][0]) + (pixelMatrix[0][2] * -1) + (pixelMatrix[1][0] * 2)
            + (pixelMatrix[1][2] * -2) + (pixelMatrix[2][0]) + (pixelMatrix[2][2] * -1);
    return Math.sqrt(Math.pow(gy, 2) + Math.pow(gx, 2));
}
Related
I'm creating an app that gets an image from the camera (using the CameraKit library), processes the image, and does an OCR read using the Google Vision API, and I get this error:
FATAL EXCEPTION: main
Process: com.., PID: 1938
java.lang.OutOfMemoryError: Failed to allocate a 63701004 byte allocation with 16777216 free bytes and 60MB until OOM
    at dalvik.system.VMRuntime.newNonMovableArray(Native Method)
    at android.graphics.Bitmap.nativeCreate(Native Method)
    at android.graphics.Bitmap.createBitmap(Bitmap.java:905)
    at android.graphics.Bitmap.createBitmap(Bitmap.java:882)
    at android.graphics.Bitmap.createBitmap(Bitmap.java:849)
    at com.****.****.Reader.ReaderResultActivity.createContrast(ReaderResultActivity.java:123)
    at com.*****.****.Reader.ReaderResultActivity.onCreate(ReaderResultActivity.java:47)
    at android.app.Activity.performCreate(Activity.java:6672)
    at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1140)
    at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2612)
    at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2724)
    at android.app.ActivityThread.-wrap12(ActivityThread.java)
    at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1473)
    at android.os.Handler.dispatchMessage(Handler.java:102)
    at android.os.Looper.loop(Looper.java:154)
    at android.app.ActivityThread.main(ActivityThread.java:6123)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:867)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:757)
ReaderResultActivity Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_reader_result);

    ImageView img1 = (ImageView) findViewById(R.id.imageView2);
    ImageView img2 = (ImageView) findViewById(R.id.imageView3);
    ImageView img3 = (ImageView) findViewById(R.id.imageView4);
    TextView scanResults = (TextView) findViewById(R.id.textView);

    // Get bitmap from a static class.
    Bitmap bitmap = Reader.img;

    Bitmap grayScale = toGrayscale(bitmap);
    Bitmap blackWhiteImage = createContrast(grayScale, 50);
    Bitmap invertColor = invertColor(blackWhiteImage);

    // Show process steps
    img1.setImageBitmap(grayScale);
    img2.setImageBitmap(blackWhiteImage);
    img3.setImageBitmap(invertColor);

    TextRecognizer detector = new TextRecognizer.Builder(getApplicationContext()).build();
    try {
        if (detector.isOperational()) {
            Frame frame = new Frame.Builder().setBitmap(invertColor).build();
            SparseArray<TextBlock> textBlocks = detector.detect(frame);
            String blocks = "";
            String lines = "";
            String words = "";
            for (int index = 0; index < textBlocks.size(); index++) {
                // extract scanned text blocks here
                TextBlock tBlock = textBlocks.valueAt(index);
                blocks = blocks + tBlock.getValue() + "\n" + "\n";
                for (Text line : tBlock.getComponents()) {
                    // extract scanned text lines here
                    lines = lines + line.getValue() + "\n";
                    for (Text element : line.getComponents()) {
                        // extract scanned text words here
                        words = words + element.getValue() + ", ";
                    }
                }
            }
            if (textBlocks.size() == 0) {
                scanResults.setText("Scan Failed: Found nothing to scan");
            } else {
                lines = lines.replaceAll("o", "0");
                lines = lines.replaceAll("A", "1");
                scanResults.setText(lines + "\n");
            }
        } else {
            scanResults.setText("Could not set up the detector!");
        }
    } catch (Exception e) {
        Toast.makeText(this, "Failed to load Image", Toast.LENGTH_SHORT).show();
        Log.e("312", e.toString());
    }
}
private Bitmap processImage(Bitmap bitmap) {
    Bitmap grayScale = toGrayscale(bitmap);
    Bitmap blackWhiteImage = createContrast(grayScale, 50);
    Bitmap invertColor = invertColor(blackWhiteImage);
    return invertColor;
}

public Bitmap toGrayscale(Bitmap bmpOriginal) {
    int width, height;
    height = bmpOriginal.getHeight();
    width = bmpOriginal.getWidth();

    Bitmap bmpGrayscale = Bitmap.createBitmap(width, height, bmpOriginal.getConfig());
    Canvas c = new Canvas(bmpGrayscale);
    Paint paint = new Paint();
    ColorMatrix cm = new ColorMatrix();
    cm.setSaturation(0);
    ColorMatrixColorFilter f = new ColorMatrixColorFilter(cm);
    paint.setColorFilter(f);
    c.drawBitmap(bmpOriginal, 0, 0, paint);
    return bmpGrayscale;
}

public static Bitmap createContrast(Bitmap src, double value) {
    // image size
    int width = src.getWidth();
    int height = src.getHeight();
    // create output bitmap
    Bitmap bmOut = Bitmap.createBitmap(width, height, src.getConfig());
    // color information
    int A, R, G, B;
    int pixel;
    // get contrast value
    double contrast = Math.pow((100 + value) / 100, 2);

    // scan through all pixels
    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            // get pixel color
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            // apply filter contrast for every channel R, G, B
            R = Color.red(pixel);
            R = (int) (((((R / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if (R < 0) { R = 0; }
            else if (R > 255) { R = 255; }

            G = Color.green(pixel);
            G = (int) (((((G / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if (G < 0) { G = 0; }
            else if (G > 255) { G = 255; }

            B = Color.blue(pixel);
            B = (int) (((((B / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if (B < 0) { B = 0; }
            else if (B > 255) { B = 255; }

            // set new pixel color to output bitmap
            bmOut.setPixel(x, y, Color.argb(A, R, G, B));
        }
    }
    return bmOut;
}

Bitmap invertColor(Bitmap src) {
    Bitmap copy = src.copy(src.getConfig(), true);
    for (int x = 0; x < copy.getWidth(); ++x) {
        for (int y = 0; y < copy.getHeight(); ++y) {
            int color = copy.getPixel(x, y);
            int r = Color.red(color);
            int g = Color.green(color);
            int b = Color.blue(color);
            int avg = (r + g + b) / 3;
            int newColor = Color.argb(255, 255 - avg, 255 - avg, 255 - avg);
            copy.setPixel(x, y, newColor);
        }
    }
    return copy;
}
I already tried adding this to the Manifest:
android:largeHeap="true"
But the application still stops running at:
ReaderResultActivity.createContrast(ReaderResultActivity.java:123)
the same line that appears in the error without the largeHeap attribute.
I just don't know what to do, but I think it has something to do with all those Bitmap.createBitmap calls in every processing function.
But without doing this, the OCR reading fails with an error saying that the bitmap has a wrong format.
You are loading three bitmaps into different ImageViews without scaling them to the size at which they will be shown in your UI.
An Android device's camera captures pictures at a much higher resolution than the screen density of your device.
Given that you are working with limited memory, ideally you only want to load a lower-resolution version into memory. The lower-resolution version should match the size of the UI component that displays it. An image with a higher resolution does not provide any visible benefit, but it still takes up precious memory and incurs additional performance overhead due to on-the-fly scaling.
You can optimize this by following the developer documentation's suggestions: https://developer.android.com/topic/performance/graphics/load-bitmap.html
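The gist of that page, as a sketch (reqWidth/reqHeight stand for whatever size your ImageView actually displays):

// Decode only the bounds first, then decode a subsampled bitmap that roughly
// matches the target view size (the pattern described in the linked docs).
public static Bitmap decodeSampledBitmap(Resources res, int resId, int reqWidth, int reqHeight) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;  // read dimensions without allocating pixel memory
    BitmapFactory.decodeResource(res, resId, options);

    // Keep doubling the sample size while the halved dimensions still cover the request.
    int inSampleSize = 1;
    int halfHeight = options.outHeight / 2;
    int halfWidth = options.outWidth / 2;
    while ((halfHeight / inSampleSize) >= reqHeight && (halfWidth / inSampleSize) >= reqWidth) {
        inSampleSize *= 2;
    }

    options.inSampleSize = inSampleSize;
    options.inJustDecodeBounds = false; // now decode the actual, subsampled pixels
    return BitmapFactory.decodeResource(res, resId, options);
}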
I have encoded a String into a QR bitmap. The picture comes out like this:
What do I need to change so that there is no whitespace around the QR code? I tried to read the documentation about MultiFormatWriter() and setPixels(), but couldn't find out where it goes wrong.
Here is the code:
Bitmap encodeAsBitmap(String str) throws WriterException {
    BitMatrix result;
    try {
        result = new MultiFormatWriter().encode(str,
                BarcodeFormat.QR_CODE, 500, 500, null);
    } catch (IllegalArgumentException iae) {
        return null;
    }
    int w = result.getWidth();
    int h = result.getHeight();
    int[] pixels = new int[w * h];
    for (int i = 0; i < h; i++) {
        int offset = i * w;
        for (int j = 0; j < w; j++) {
            pixels[offset + j] = result.get(i, j) ? BLACK : WHITE;
        }
    }
    Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, 500, 0, 0, w, h);
    return bitmap;
}
You should use the hints param to set custom margins:
Map<EncodeHintType, Object> hints = new EnumMap<>(EncodeHintType.class);
hints.put(EncodeHintType.MARGIN, marginSize);
BitMatrix result = new MultiFormatWriter().encode(contentsToEncode, BarcodeFormat.QR_CODE, imageWidth, imageHeight, hints);
I think the problem is the way you set your pixels in the Bitmap.
According to the documentation:
stride int: The number of colors in pixels[] to skip between rows. Normally this value will be the same as the width of the bitmap, but it can be larger (or negative).
So I suggest the following:
bitmap.setPixels(pixels, 0, w, 0, 0, w, h);
Edit:
Just noticed you assume that the input's size is 500. You can try to compute it (assuming your string represents a square). If it is a rectangle, you have to be able to compute the size somehow so the MultiFormatWriter can use it.
So your code can be:
Bitmap encodeAsBitmap(String str, int size) throws WriterException {
    BitMatrix result;
    try {
        result = new MultiFormatWriter().encode(str,
                BarcodeFormat.QR_CODE, size, size, null);
    } catch (IllegalArgumentException iae) {
        return null;
    }
    int[] pixels = new int[size * size];
    for (int i = 0; i < size; i++) {
        int offset = i * size;
        for (int j = 0; j < size; j++) {
            pixels[offset + j] = result.get(i, j) ? BLACK : WHITE;
        }
    }
    Bitmap bitmap = Bitmap.createBitmap(size, size, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, size, 0, 0, size, size);
    return bitmap;
}
I'm rendering a skybox with a cube texture, but the color of the texture turns out to be wrong. The code below is used to load the images. Is that because of the GL_RGB format? Any ideas about that?
protected void loadImageData()
{
    String fileDir;
    for (int i = 0; i < 6; i++)
    {
        fileDir = "images/skybox/sky" + (i + 1) + ".jpg";
        try
        {
            BufferedImage image = ImageIO.read(new File(fileDir));
            byte[] data = ((DataBufferByte) image.getData().getDataBuffer()).getData();
            imageBuff[i] = GLBuffers.newDirectByteBuffer(data);
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
Setting up the texture:
loadImageData();
gl.glGenTextures(1, cubeTexBuff);
gl.glBindTexture(GL4.GL_TEXTURE_CUBE_MAP, cubeTexBuff.get(0));
gl.glTexStorage2D(GL4.GL_TEXTURE_CUBE_MAP, 0, GL4.GL_RGB8, imageSize, imageSize);
for (int i = 0; i < 6; i++)
{
    gl.glTexImage2D(GL4.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL4.GL_RGB8,
            imageSize, imageSize, 0, GL4.GL_RGB, GL4.GL_UNSIGNED_BYTE, imageBuff[i]);
}
gl.glTexParameteri(GL4.GL_TEXTURE_CUBE_MAP, GL4.GL_TEXTURE_MAG_FILTER, GL4.GL_LINEAR);
gl.glTexParameteri(GL4.GL_TEXTURE_CUBE_MAP, GL4.GL_TEXTURE_MIN_FILTER, GL4.GL_LINEAR);
Original image:
Mapping result:
Updated: with the alternative provided (enter link description here), the color finally turns out to be correct, yet there is still a small problem (the direction becomes opposite); see the result below:
String fileDir;
TextureData texData = null;
for (int i = 0; i < 6; i++)
{
    fileDir = "images/skybox/sky" + (i + 1) + ".jpg";
    try
    {
        texData = TextureIO.newTextureData(gl.getGLProfile(),
                new File(fileDir), false, TextureIO.JPG);
        texDataBuff[i] = texData.getBuffer();
        texDataBuff[i].rewind();
        // notice that the images used for the skybox must be square
        imageSize = texData.getHeight();
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
}

gl.glGenTextures(1, cubeTexBuff);
gl.glBindTexture(GL4.GL_TEXTURE_CUBE_MAP, cubeTexBuff.get(0));
gl.glTexStorage2D(GL4.GL_TEXTURE_CUBE_MAP, 0, GL4.GL_RGB8, imageSize, imageSize);
for (int i = 0; i < 6; i++)
{
    gl.glTexImage2D(GL4.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL4.GL_RGB8,
            imageSize, imageSize, 0, texData.getPixelFormat(), texData.getPixelType(), texDataBuff[i]);
}
gl.glTexParameteri(GL4.GL_TEXTURE_CUBE_MAP, GL4.GL_TEXTURE_MAG_FILTER, GL4.GL_LINEAR);
gl.glTexParameteri(GL4.GL_TEXTURE_CUBE_MAP, GL4.GL_TEXTURE_MIN_FILTER, GL4.GL_LINEAR);
original image:
mapping result:
You can do this, flipping included:
ImageUtil.flipImageVertically(imageBuff[i]);

{
    TextureData textureData = AWTTextureIO.newTextureData(gl3.getGLProfile(), textureBuffIm, mipmap);

    gl3.glBindTexture(GL3.GL_TEXTURE_2D, objects[Objects.texture.ordinal()]);
    {
        int[] alignment = new int[1];
        gl3.glGetIntegerv(GL3.GL_UNPACK_ALIGNMENT, alignment, 0);
        // System.out.println("alignment[0] " + alignment[0]);
        // System.out.println("textureData.getAlignment() " + textureData.getAlignment());
        if (alignment[0] != textureData.getAlignment()) {
            gl3.glPixelStorei(GL3.GL_UNPACK_ALIGNMENT, textureData.getAlignment());
        }
        {
            gl3.glTexImage2D(GL3.GL_TEXTURE_2D, 0, textureData.getInternalFormat(), textureData.getWidth(),
                    textureData.getHeight(), textureData.getBorder(), textureData.getPixelFormat(),
                    textureData.getPixelType(), textureData.getBuffer());
        }
        if (alignment[0] != textureData.getAlignment()) {
            gl3.glPixelStorei(GL3.GL_UNPACK_ALIGNMENT, alignment[0]);
        }
    }
}
Remember that AWTTextureIO always flips the texture, so if you want to have it right, you flip it beforehand; AWTTextureIO will then flip it back and you will get the original image.
You can flip it back afterwards if you still need to use that BufferedImage.
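A minimal sketch of that idea, assuming a BufferedImage source (ImageUtil here is JOGL's com.jogamp.opengl.util.awt.ImageUtil):

BufferedImage bi = ImageIO.read(new File("images/skybox/sky1.jpg"));
// Pre-flip, so that AWTTextureIO's unconditional flip restores the original orientation.
ImageUtil.flipImageVertically(bi);
TextureData texData = AWTTextureIO.newTextureData(gl.getGLProfile(), bi, false /* mipmap */);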
I'm having a problem that's been driving me crazy for days. Hopefully, someone here can help me understand what's happening. I'm trying to write a simple Java program that will take a directory of JPEGs, convert them to greyscale, and save them to the same folder.
My procedure is to set the red, green, and blue components of each pixel to that pixel's luminance value. The code runs fine and seems to do what I want. If I view the completed image in a JFrame, it shows up black and white. However, when I save the image (using ImageIO.write()), for some reason, it becomes colorized and looks rather red. I'd love to post the images but I guess my reputation is not good enough...
Since I can't post the images, I'll try to explain it as well as I can. Here's what I know:
If I view the newly created image using the Java program, it appears black and white as I desire.
If I save the image and try to view it using an external program, it does not appear black and white at all and just looks like a watered down version of the original image.
If I open that same saved image (the one that should be black and white but is not) using the Java program, it does indeed appear black and white.
If I save the file as a png instead, everything works fine.
Here's the relevant code I'm using if anyone would like to see it:
import java.io.*;
import javax.swing.*;
import javax.imageio.ImageIO;
import java.awt.*;
import java.awt.image.*;

public class ImageEZ {

    public static void displayImage(BufferedImage img) {
        class ImageFrame extends JFrame {
            ImageFrame(BufferedImage img) {
                super();
                class ImagePanel extends JPanel {
                    BufferedImage image;
                    ImagePanel(BufferedImage image) {
                        this.image = ImageEZ.duplicate(image);
                    }
                    protected void paintComponent(Graphics g) {
                        super.paintComponent(g);
                        g.drawImage(image, 0, 0, image.getWidth(), image.getHeight(), this);
                    }
                }
                ImagePanel panel = new ImagePanel(img);
                add(panel);
            }
        }
        JFrame frame = new ImageFrame(img);
        frame.setSize(img.getWidth(), img.getHeight());
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setLocationRelativeTo(null);
        frame.setVisible(true);
    }
    public static BufferedImage duplicate(BufferedImage img) {
        BufferedImage dup = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_ARGB);
        dup.setRGB(0, 0, img.getWidth(), img.getHeight(), ImageEZ.getRGB(img), 0, img.getWidth());
        return dup;
    }

    public static int[] getRedArray(BufferedImage img) {
        int[] tArray = ImageEZ.getRGB(img);
        for (int i = 0; i < tArray.length; i++) {
            tArray[i] = tArray[i] << 8;
            tArray[i] = tArray[i] >>> 24;
        }
        return tArray;
    }

    public static int[] getRedArray(int[] tArray) {
        int[] nArray = new int[tArray.length];
        for (int i = 0; i < tArray.length; i++) {
            nArray[i] = tArray[i] << 8;
            nArray[i] = nArray[i] >>> 24;
        }
        return nArray;
    }

    public static int[] getGreenArray(BufferedImage img) {
        int[] tArray = ImageEZ.getRGB(img);
        for (int i = 0; i < tArray.length; i++) {
            tArray[i] = tArray[i] << 16;
            tArray[i] = tArray[i] >>> 24;
        }
        return tArray;
    }

    public static int[] getGreenArray(int[] tArray) {
        int[] nArray = new int[tArray.length];
        for (int i = 0; i < tArray.length; i++) {
            nArray[i] = tArray[i] << 16;
            nArray[i] = nArray[i] >>> 24;
        }
        return nArray;
    }

    public static int[] getBlueArray(BufferedImage img) {
        int[] tArray = ImageEZ.getRGB(img);
        for (int i = 0; i < tArray.length; i++) {
            tArray[i] = tArray[i] << 24;
            tArray[i] = tArray[i] >>> 24;
        }
        return tArray;
    }

    public static int[] getBlueArray(int[] tArray) {
        int[] nArray = new int[tArray.length];
        for (int i = 0; i < tArray.length; i++) {
            nArray[i] = tArray[i] << 24;
            nArray[i] = nArray[i] >>> 24;
        }
        return nArray;
    }
    public static int[] YBRtoRGB(int[] ybr) {
        int[] y = getRedArray(ybr);
        int[] r = getBlueArray(ybr);
        int[] b = getGreenArray(ybr);
        int[] red = new int[y.length];
        int[] green = new int[y.length];
        int[] blue = new int[y.length];
        for (int i = 0; i < red.length; i++) {
            red[i] = (int) (y[i] + 1.402 * r[i]);
            green[i] = (int) (y[i] + -.344 * b[i] + -.714 * r[i]);
            blue[i] = (int) (y[i] + 1.772 * b[i]);
        }
        int[] RGB = new int[red.length];
        for (int i = 0; i < red.length; i++) {
            RGB[i] = red[i] << 16 | green[i] << 8 | blue[i] | 255 << 24;
        }
        return RGB;
    }

    public static int[] getLumArray(BufferedImage img) {
        int[] red = getRedArray(img); // Returns an array of the red values of the pixels
        int[] green = getGreenArray(img);
        int[] blue = getBlueArray(img);
        int[] Y = new int[red.length];
        for (int i = 0; i < red.length; i++) {
            Y[i] = (int) (.299 * red[i] + .587 * green[i] + .114 * blue[i]);
        }
        return Y;
    }

    // Converts an image to greyscale using the luminance of each pixel
    public static BufferedImage deSaturate(BufferedImage original) {
        BufferedImage deSaturated = new BufferedImage(original.getWidth(),
                original.getHeight(),
                BufferedImage.TYPE_INT_ARGB);
        int[] Y = ImageEZ.getLumArray(original); // Returns an array of the luminances
        for (int i = 0; i < Y.length; i++) {
            Y[i] = 255 << 24 | Y[i] << 16;
        }
        int[] rgb = ImageEZ.YBRtoRGB(Y); // Converts the YCbCr colorspace to RGB
        deSaturated.setRGB(0, 0, original.getWidth(), original.getHeight(),
                rgb, 0, original.getWidth());
        return deSaturated;
    }

    // Takes a folder of JPEGs and converts them to greyscale
    public static void main(String[] args) throws Exception {
        File root = new File(args[0]);
        File[] list = root.listFiles();
        for (int i = 0; i < list.length; i++) {
            BufferedImage a = ImageEZ.deSaturate(ImageIO.read(list[i]));
            displayImage(a); // Displays the converted images.
            boolean v = ImageIO.write(a, "jpg", new File(list[i].getParent() + "\\" + i + ".jpg"));
        }
        // Displays the first newly saved image
        displayImage(ImageIO.read(new File(list[0].getParent() + "\\" + 0 + ".png")));
    }
}
I just want to stress, this is not a question about alternative methods for making an image black and white. What I really want to know is why it works as a png but not as a jpg. Thanks a lot to all who read this far!
This is a known issue with ImageIO.
When saved/loaded as JPEG, the API doesn't know how to handle the alpha component (as I understand the problem).
The solution is to not write images with an alpha component to the jpg format, or to use a non-alpha image type, such as TYPE_INT_RGB, instead...
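A minimal sketch of that workaround for the posted main method (my own illustration: redraw the ARGB result into a TYPE_INT_RGB copy, then save the copy):

// The JPEG writer never sees an alpha channel this way.
BufferedImage rgbCopy = new BufferedImage(a.getWidth(), a.getHeight(), BufferedImage.TYPE_INT_RGB);
Graphics2D g2 = rgbCopy.createGraphics();
g2.drawImage(a, 0, 0, null);   // drops the alpha component during the draw
g2.dispose();
ImageIO.write(rgbCopy, "jpg", new File(list[i].getParent() + "\\" + i + ".jpg"));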
What I'm trying to do is to compute the 2D DCT of an image in Java and then save the result back to a file.
Read file:
coverImage = readImg(coverPath);

private BufferedImage readImg(String path) {
    BufferedImage destination = null;
    try {
        destination = ImageIO.read(new File(path));
    } catch (IOException e) {
        e.printStackTrace();
    }
    return destination;
}
Convert to float array:
cover = convertToFloatArray(coverImage);
private float[] convertToFloatArray(BufferedImage source) {
securedImage = (WritableRaster) source.getData();
float[] floatArray = new float[source.getHeight() * source.getWidth()];
floatArray = securedImage.getPixels(0, 0, source.getWidth(), source.getHeight(), floatArray);
return floatArray;
}
Run the DCT:
runDCT(cover, coverImage.getHeight(), coverImage.getWidth());
private void runDCT(float[] floatArray, int rows, int cols) {
dct = new FloatDCT_2D(rows, cols);
dct.forward(floatArray, false);
securedImage.setPixels(0, 0, cols, rows, floatArray);
}
And then save it as image:
convertDctToImage(securedImage, coverImage.getHeight(), coverImage.getWidth());
private void convertDctToImage(WritableRaster secured, int rows, int cols) {
coverImage.setData(secured);
File file = new File(securedPath);
try {
ImageIO.write(coverImage, "png", file);
} catch (IOException ex) {
Logger.getLogger(DCT2D.class.getName()).log(Level.SEVERE, null, ex);
}
}
But what I get is: http://kyle.pl/up/2012/05/29/dct_stack.png
Can anyone tell me what I'm doing wrong? Or maybe I don't understand something here?
This is a piece of code that works for me:
// reading image
BufferedImage image = javax.imageio.ImageIO.read(new File(filename));

// width * 2, because DoubleFFT_2D needs 2x more space - for Real and Imaginary parts of complex numbers
double[][] brightness = new double[image.getHeight()][image.getWidth() * 2];

// convert colored image to grayscale (brightness of each pixel)
Raster raster = image.getRaster();
int[] dataElements = new int[image.getWidth()]; // assumes an int-packed RGB image (e.g. TYPE_INT_RGB)
for (int y = 0; y < image.getHeight(); y++) {
    raster.getDataElements(0, y, image.getWidth(), 1, dataElements);
    for (int x = 0; x < image.getWidth(); x++) {
        // notice x and y swapped - it's JTransforms' format of arrays
        brightness[y][x] = brightnessRGB(dataElements[x]);
    }
}

// do FT (not FFT, because FFT is only* for images with width and height being 2**N)
// DoubleFFT_2D writes data to the same array - to brightness
new DoubleFFT_2D(image.getHeight(), image.getWidth()).realForwardFull(brightness);

// visualising frequency domain
BufferedImage fd = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        // we calculate the complex number vector length (sqrt(Re**2 + Im**2)). But these lengths are too big to
        // fit in the 0 - 255 scale of colors. So I divide it by 223. Instead of "223", you may want to choose
        // another factor, which would make your frequency domain look best
        int power = (int) (Math.sqrt(Math.pow(brightness[y][2 * x], 2) + Math.pow(brightness[y][2 * x + 1], 2)) / 223);
        power = power > 255 ? 255 : power;
        // draw a grayscale color on image "fd"
        fd.setRGB(x, y, new Color(power, power, power).getRGB());
    }
}
draw(fd);
The resulting image should look like a big black space in the middle with white spots in all four corners. Usually people visualise the FD so that zero frequency appears in the center of the image. So, if you need the classical FD (one that looks like a star for real-life images), you need to upgrade the "fd.setRGB(x, y..." part a bit:

int w2 = image.getWidth() / 2;
int h2 = image.getHeight() / 2;
int newX = x + w2 >= image.getWidth() ? x - w2 : x + w2;
int newY = y + h2 >= image.getHeight() ? y - h2 : y + h2;
fd.setRGB(newX, newY, new Color(power, power, power).getRGB());
brightnessRGB and draw methods for the lazy:
public static int brightnessRGB(int rgb) {
    int r = (rgb >> 16) & 0xff;
    int g = (rgb >> 8) & 0xff;
    int b = rgb & 0xff;
    return (r + g + b) / 3;
}

private static void draw(BufferedImage img) {
    JLabel picLabel = new JLabel(new ImageIcon(img));
    JPanel jPanelMain = new JPanel();
    jPanelMain.add(picLabel);
    JFrame jFrame = new JFrame();
    jFrame.add(jPanelMain);
    jFrame.pack();
    jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    jFrame.setVisible(true);
}
I know I'm a bit late, but I just did all that for my program. So let it be here for those who'll get here from googling.