Efficiently extracting RGBA buffer from BufferedImage - java

I've been trying to load BufferedImages in Java as IntBuffers. However, one problem I've come across is getting the pixel data from an image with partial or complete transparency. Java only seems to let you get the RGB value, which in my case is a problem because any pixels that should be transparent are rendered completely opaque. After a few hours of searching I came across this way of getting the RGBA values:
Color color = new Color(image.getRGB(x, y), true);
Although it does work, it can't possibly be the best way of doing this. Does anyone know of a more efficient way to accomplish the same task, one that does not require an instance of a Color object for EVERY pixel? You can see how this would be bad if you're trying to load a fairly large image. Here is my code in case you need a reference:
public static IntBuffer getImageBuffer(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int[] pixels = new int[width * height];
    for (int i = 0; i < pixels.length; i++) {
        Color color = new Color(image.getRGB(i % width, i / width), true);
        int a = color.getAlpha();
        int r = color.getRed();
        int g = color.getGreen();
        int b = color.getBlue();
        pixels[i] = a << 24 | b << 16 | g << 8 | r;
    }
    return BufferUtils.toIntBuffer(pixels);
}

public static IntBuffer toIntBuffer(int[] elements) {
    IntBuffer buffer = ByteBuffer.allocateDirect(elements.length << 2).order(ByteOrder.nativeOrder()).asIntBuffer();
    buffer.put(elements).flip();
    return buffer;
}
*Edit: The BufferedImage passed as the parameter is loaded from disk.
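*Edit 2: For reference, a bulk-read sketch (untested) that keeps the same packing as above but avoids creating a Color per pixel; a single getRGB call fetches the whole ARGB array:
public static IntBuffer getImageBufferBulk(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();
    // One bulk call returns packed ARGB ints with the alpha channel intact
    int[] argb = image.getRGB(0, 0, width, height, null, 0, width);
    int[] pixels = new int[argb.length];
    for (int i = 0; i < argb.length; i++) {
        int p = argb[i];
        int a = (p >> 24) & 0xff;
        int r = (p >> 16) & 0xff;
        int g = (p >> 8) & 0xff;
        int b = p & 0xff;
        pixels[i] = a << 24 | b << 16 | g << 8 | r; // same packing as the loop above
    }
    return BufferUtils.toIntBuffer(pixels);
}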

Here's some old code I have that converts images for OpenGL use with LWJGL. Since the byte order has to be swapped, it isn't useful (I think) to load the image as, for example, integers.
public static ByteBuffer decodePng( BufferedImage image )
        throws IOException
{
    int width = image.getWidth();
    int height = image.getHeight();
    // Load texture contents into a byte buffer
    ByteBuffer buf = ByteBuffer.allocateDirect( 4 * width * height );
    // decode image
    // ARGB format to -> RGBA
    for( int h = 0; h < height; h++ )
        for( int w = 0; w < width; w++ ) {
            int argb = image.getRGB( w, h );
            buf.put( (byte) ( 0xFF & ( argb >> 16 ) ) );
            buf.put( (byte) ( 0xFF & ( argb >> 8 ) ) );
            buf.put( (byte) ( 0xFF & ( argb ) ) );
            buf.put( (byte) ( 0xFF & ( argb >> 24 ) ) );
        }
    buf.flip();
    return buf;
}
Example usage:
BufferedImage image = ImageIO.read( getClass().getResourceAsStream(heightMapFile) );
int height = image.getHeight();
int width = image.getWidth();
ByteBuffer buf = TextureUtils.decodePng(image);
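The returned buffer is tightly packed RGBA bytes, so (assuming a texture object is already generated and bound to GL_TEXTURE_2D) it can go straight to OpenGL, for example:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // bytes are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, buf);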

If you're interested, I did a JVM port of gli that deals with this stuff so that you don't have to worry about it.
An example of texture loading:
public static int createTexture(String filename) {
    Texture texture = gli.load(filename);
    if (texture.empty())
        return 0;
    gli_.gli.gl.setProfile(gl.Profile.GL33);
    gl.Format format = gli_.gli.gl.translate(texture.getFormat(), texture.getSwizzles());
    gl.Target target = gli_.gli.gl.translate(texture.getTarget());
    assert (texture.getFormat().isCompressed() && target == gl.Target._2D);
    IntBuffer textureName = intBufferBig(1);
    glGenTextures(textureName);
    glBindTexture(target.getI(), textureName.get(0));
    glTexParameteri(target.getI(), GL12.GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(target.getI(), GL12.GL_TEXTURE_MAX_LEVEL, texture.levels() - 1);
    IntBuffer swizzles = intBufferBig(4);
    texture.getSwizzles().to(swizzles);
    glTexParameteriv(target.getI(), GL33.GL_TEXTURE_SWIZZLE_RGBA, swizzles);
    Vec3i extent = texture.extent(0);
    glTexStorage2D(target.getI(), texture.levels(), format.getInternal().getI(), extent.x, extent.y);
    for (int level = 0; level < texture.levels(); level++) {
        extent = texture.extent(level);
        glCompressedTexSubImage2D(
                target.getI(), level, 0, 0, extent.x, extent.y,
                format.getInternal().getI(), texture.data(0, 0, level));
    }
    return textureName.get(0);
}
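A minimal usage sketch (the file name is just an example; gli loads compressed DDS/KTX/KMG textures):
int textureId = createTexture("textures/diffuse.dds"); // hypothetical path to a compressed 2D texture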

Related

Draw Image using CMYK values

I want to convert a buffered image from RGBA format to CMYK format without using automatic conversion tools or libraries, so I tried to extract the RGBA values from individual pixels that I got using BufferedImage.getRGB(), and here is what I've done so far:
BufferedImage img = ImageIO.read(new File("image path")); // BufferedImage has no String constructor, so read the file instead
int R, G, B, pixel, A;
float Rc, Gc, Bc, K, C, M, Y;
int height = img.getHeight();
int width = img.getWidth();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        pixel = img.getRGB(x, y);
        // I shifted the int bytes to get RGBA values
        A = (pixel >> 24) & 0xff;
        R = (pixel >> 16) & 0xff;
        G = (pixel >> 8) & 0xff;
        B = (pixel) & 0xff;
        Rc = (float) ((float) R / 255.0);
        Gc = (float) ((float) G / 255.0);
        Bc = (float) ((float) B / 255.0);
        // Equations I found on the internet to get CMYK values
        K = 1 - Math.max(Bc, Math.max(Rc, Gc));
        C = (1 - Rc - K) / (1 - K);
        Y = (1 - Bc - K) / (1 - K);
        M = (1 - Gc - K) / (1 - K);
    }
}
Now that I've extracted them, I want to draw or construct an image using these values. Can you tell me of a method or a way to do this, because I don't think BufferedImage.setRGB() would work? Also, when I printed the values of C, Y, M, some of them had a NaN value; can someone tell me what that means and how to deal with it?
While it is possible, converting RGB to CMYK without a proper color profile will not produce the best results. For better performance and higher color fidelity, I really recommend using an ICC color profile (see ICC_Profile and ICC_ColorSpace classes) and ColorConvertOp. :-)
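For example, a minimal sketch of that profile-based route might look like the following (the profile file is an assumption here; you'd supply any CMYK .icc profile, and the OPAQUE/TYPE_BYTE choices are just reasonable defaults):
import java.awt.Transparency;
import java.awt.color.ICC_ColorSpace;
import java.awt.color.ICC_Profile;
import java.awt.image.*;
import java.io.*;
import javax.imageio.ImageIO;

public static BufferedImage convertToCmyk(File rgbFile, File cmykIccProfile) throws IOException {
    BufferedImage rgb = ImageIO.read(rgbFile);
    // Build a CMYK color space from an ICC profile on disk
    ICC_ColorSpace cmykCS = new ICC_ColorSpace(ICC_Profile.getInstance(new FileInputStream(cmykIccProfile)));
    ComponentColorModel cmykModel = new ComponentColorModel(
            cmykCS, false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
    BufferedImage cmyk = new BufferedImage(
            cmykModel,
            cmykModel.createCompatibleWritableRaster(rgb.getWidth(), rgb.getHeight()),
            cmykModel.isAlphaPremultiplied(), null);
    // ColorConvertOp picks up the source and destination color spaces from the images
    new ColorConvertOp(null).filter(rgb, cmyk);
    return cmyk;
}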
Anyway, here's how to do it using your own conversion. The important part is creating a CMYK color space, and a ColorModel and BufferedImage using that color space (you could also load a CMYK color space from an ICC profile as mentioned above, but the colors would probably look more off, as it uses different calculations than you do).
public static void main(String[] args) throws IOException {
    BufferedImage img = ImageIO.read(new File(args[0]));
    int height = img.getHeight();
    int width = img.getWidth();
    // Create a color model and image in CMYK color space (see custom class below)
    ComponentColorModel cmykModel = new ComponentColorModel(CMYKColorSpace.INSTANCE, false, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
    BufferedImage cmykImg = new BufferedImage(cmykModel, cmykModel.createCompatibleWritableRaster(width, height), cmykModel.isAlphaPremultiplied(), null);
    WritableRaster cmykRaster = cmykImg.getRaster();
    int R, G, B, pixel;
    float Rc, Gc, Bc, K, C, M, Y;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            pixel = img.getRGB(x, y);
            // Now, as cmykImg already is in CMYK color space, you could actually just invoke
            //cmykImg.setRGB(x, y, pixel);
            // and the method would perform automatic conversion to the dest color space (CMYK)
            // But, here you go... (I just cleaned up your code a little bit):
            R = (pixel >> 16) & 0xff;
            G = (pixel >> 8) & 0xff;
            B = (pixel) & 0xff;
            Rc = R / 255f;
            Gc = G / 255f;
            Bc = B / 255f;
            // Equations I found on the internet to get CMYK values
            K = 1 - Math.max(Bc, Math.max(Rc, Gc));
            if (K == 1f) {
                // All black (this is where you would get NaN values I think)
                C = M = Y = 0;
            }
            else {
                C = (1 - Rc - K) / (1 - K);
                M = (1 - Gc - K) / (1 - K);
                Y = (1 - Bc - K) / (1 - K);
            }
            // ...and store the CMYK values (as bytes in 0..255 range) in the raster
            cmykRaster.setDataElements(x, y, new byte[] {(byte) (C * 255), (byte) (M * 255), (byte) (Y * 255), (byte) (K * 255)});
        }
    }
    // You should now have a CMYK buffered image
    System.out.println("cmykImg: " + cmykImg);
}
// A simple and not very accurate CMYK color space
// Full source at https://github.com/haraldk/TwelveMonkeys/blob/master/imageio/imageio-core/src/main/java/com/twelvemonkeys/imageio/color/CMYKColorSpace.java
final static class CMYKColorSpace extends ColorSpace {
    static final ColorSpace INSTANCE = new CMYKColorSpace();
    final ColorSpace sRGB = getInstance(CS_sRGB);

    private CMYKColorSpace() {
        super(ColorSpace.TYPE_CMYK, 4);
    }

    public static ColorSpace getInstance() {
        return INSTANCE;
    }

    public float[] toRGB(float[] colorvalue) {
        return new float[]{
                (1 - colorvalue[0]) * (1 - colorvalue[3]),
                (1 - colorvalue[1]) * (1 - colorvalue[3]),
                (1 - colorvalue[2]) * (1 - colorvalue[3])
        };
    }

    public float[] fromRGB(float[] rgbvalue) {
        // NOTE: This is essentially the same equation you use, except
        // this is slightly optimized, and values are already in range [0..1]
        // Compute CMY
        float c = 1 - rgbvalue[0];
        float m = 1 - rgbvalue[1];
        float y = 1 - rgbvalue[2];
        // Find K
        float k = Math.min(c, Math.min(m, y));
        // Convert to CMYK values
        return new float[]{(c - k), (m - k), (y - k), k};
    }

    public float[] toCIEXYZ(float[] colorvalue) {
        return sRGB.toCIEXYZ(toRGB(colorvalue));
    }

    public float[] fromCIEXYZ(float[] colorvalue) {
        return sRGB.fromCIEXYZ(fromRGB(colorvalue));
    }
}
PS: Your question talks about RGBA and CMYK, but your code just ignores the alpha value, so I did the same. If you really wanted to, you could just keep the alpha value as-is and have a CMYK+A image, to allow alpha-compositing in CMYK color space. I'll leave that as an exercise. ;-)
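If you do want to try that, a rough sketch (untested, just to show the idea): give the color model an alpha channel and store five samples per pixel, C, M, Y, K plus A.
// CMYK + alpha: 5 components per pixel
ComponentColorModel cmykaModel = new ComponentColorModel(
        CMYKColorSpace.INSTANCE, true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
BufferedImage cmykaImg = new BufferedImage(
        cmykaModel, cmykaModel.createCompatibleWritableRaster(width, height),
        cmykaModel.isAlphaPremultiplied(), null);
// ...and inside the pixel loop, keep the alpha byte from the ARGB pixel and append it:
int A = (pixel >> 24) & 0xff;
cmykaImg.getRaster().setDataElements(x, y, new byte[] {
        (byte) (C * 255), (byte) (M * 255), (byte) (Y * 255), (byte) (K * 255), (byte) A});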

LWJGL Assimp: Loading Textures

I'm using the LWJGL 3 version of Assimp, and I've stumbled my way through loading a model. The issue I'm running into is loading the actual pixel data for textures. What is the process of loading these textures using an AIMaterial object?
When I did this for a quick demo test, I just used the regular Image IO from Java. It's not as fancy but it works and might get you going:
public static ByteBuffer decodePng( BufferedImage image )
        throws IOException
{
    int width = image.getWidth();
    int height = image.getHeight();
    // Load texture contents into a byte buffer
    ByteBuffer buf = ByteBuffer.allocateDirect( 4 * width * height );
    // decode image
    // ARGB format to -> RGBA
    for( int h = 0; h < height; h++ )
        for( int w = 0; w < width; w++ ) {
            int argb = image.getRGB( w, h );
            buf.put( (byte) ( 0xFF & ( argb >> 16 ) ) );
            buf.put( (byte) ( 0xFF & ( argb >> 8 ) ) );
            buf.put( (byte) ( 0xFF & ( argb ) ) );
            buf.put( (byte) ( 0xFF & ( argb >> 24 ) ) );
        }
    buf.flip();
    return buf;
}
Where the image was loaded as part of another routine:
public Texture(InputStream is) throws Exception {
    try {
        // Load Texture file
        BufferedImage image = ImageIO.read(is);
        this.width = image.getWidth();
        this.height = image.getHeight();
        // Load texture contents into a byte buffer
        ByteBuffer buf = xogl.utils.TextureUtils.decodePng(image);
        // Create a new OpenGL texture
        this.id = glGenTextures();
        // Bind the texture
        glBindTexture(GL_TEXTURE_2D, this.id);
        // Tell OpenGL how to unpack the RGBA bytes. Each component is 1 byte size
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        // Upload the texture data
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, this.width, this.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, buf);

java glReadpixels to OpenCV mat

I use LWJGL to render images in OpenGL, and now I want to store the content of the framebuffer as RGB in an OpenCV matrix. To make sure everything runs fine, I'm showing the captured image on a panel of a JFrame.
But here's the problem: while showing stored JPEGs everything looks fine, but if I try to show the captured framebuffer I only see stripes!
Here is the code for a screenshot:
public Mat takeMatScreenshot()
{
    int width = m_iResolutionX;
    int height = m_iResolutionY;
    int pixelCount = width * height;
    byte[] pixelValues = new byte[ pixelCount * 3 ];
    ByteBuffer pixelBuffer = BufferUtils.createByteBuffer( width * height * 3 );
    glBindFramebuffer( GL_FRAMEBUFFER, m_iFramebuffer );
    glReadPixels( 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixelBuffer );
    for( int i = 0; i < pixelCount; i++ )
    {
        int line = height - 1 - (i / width); // flipping the image upside down
        int column = i % width;
        int bufferIndex = ( line * width + column ) * 3;
        pixelValues[ bufferIndex + 0 ] = (byte) (pixelBuffer.get(bufferIndex + 0) & 0xFF);
        pixelValues[ bufferIndex + 1 ] = (byte) (pixelBuffer.get(bufferIndex + 1) & 0xFF);
        pixelValues[ bufferIndex + 2 ] = (byte) (pixelBuffer.get(bufferIndex + 2) & 0xFF);
    }
    Mat image = new Mat(width, height, CvType.CV_8UC3);
    image.put(0, 0, pixelValues);
    new ImageFrame(image);
    return image;
}
And here the code for displaying a Mat:
public static Image toBufferedImage(Mat m)
{
    int type = BufferedImage.TYPE_BYTE_GRAY;
    if ( m.channels() == 3 )
        type = BufferedImage.TYPE_3BYTE_BGR;
    if ( m.channels() == 4 )
        type = BufferedImage.TYPE_4BYTE_ABGR;
    int bufferSize = m.channels() * m.cols() * m.rows();
    byte[] b = new byte[bufferSize];
    m.get( 0, 0, b ); // get all the pixels
    BufferedImage image = new BufferedImage( m.cols(), m.rows(), type );
    final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(b, 0, targetPixels, 0, b.length);
    return image;
}
It would be great if anyone could help me!
Cheers!
Oh no! Facepalm!
The constructor of an OpenCV Mat object is: Mat(rows, cols)!
So the right solution is:
Mat image = new Mat(height, width, CvType.CV_8UC3);
image.put(0, 0, pixelValues);

Highlight differences between images

There is this image comparison code I am supposed to modify to highlight/point out the difference between two images. Is there a way to modify this code so that it highlights the differences in the images? If not, any suggestion on how to go about it would be greatly appreciated.
int width1 = img1.getWidth(null);
int width2 = img2.getWidth(null);
int height1 = img1.getHeight(null);
int height2 = img2.getHeight(null);
if ((width1 != width2) || (height1 != height2)) {
    System.err.println("Error: Images dimensions mismatch");
    System.exit(1);
}
long diff = 0;
for (int i = 0; i < height1; i++) {
    for (int j = 0; j < width1; j++) {
        int rgb1 = img1.getRGB(j, i);
        int rgb2 = img2.getRGB(j, i);
        int r1 = (rgb1 >> 16) & 0xff;
        int g1 = (rgb1 >> 8) & 0xff;
        int b1 = (rgb1) & 0xff;
        int r2 = (rgb2 >> 16) & 0xff;
        int g2 = (rgb2 >> 8) & 0xff;
        int b2 = (rgb2) & 0xff;
        diff += Math.abs(r1 - r2);
        diff += Math.abs(g1 - g2);
        diff += Math.abs(b1 - b2);
    }
}
double n = width1 * height1 * 3;
double p = diff / n / 255.0;
return (p * 100.0);
This solution did the trick for me. It highlights differences, and has the best performance out of the methods I've tried. (Assumptions: images are the same size. This method hasn't been tested with transparencies.)
Average time to compare a 1600x860 PNG image 50 times (on same machine):
JDK7 ~178 milliseconds
JDK8 ~139 milliseconds
Does anyone have a better/faster solution?
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
    // convert images to pixel arrays...
    final int w = img1.getWidth(),
            h = img1.getHeight(),
            highlight = Color.MAGENTA.getRGB();
    final int[] p1 = img1.getRGB(0, 0, w, h, null, 0, w);
    final int[] p2 = img2.getRGB(0, 0, w, h, null, 0, w);
    // compare img1 to img2, pixel by pixel. If different, highlight img1's pixel...
    for (int i = 0; i < p1.length; i++) {
        if (p1[i] != p2[i]) {
            p1[i] = highlight;
        }
    }
    // save img1's pixels to a new BufferedImage, and return it...
    // (May require TYPE_INT_ARGB)
    final BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    out.setRGB(0, 0, w, h, p1, 0, w);
    return out;
}
Usage:
import javax.imageio.ImageIO;
import java.io.File;

ImageIO.write(
        getDifferenceImage(
                ImageIO.read(new File("a.png")),
                ImageIO.read(new File("b.png"))),
        "png",
        new File("output.png"));
Some inspiration...
What I would do is set each pixel to be the difference between the pixel in one image and the corresponding pixel in the other image. The difference being calculated in your original code is based on the L1 norm, also called the sum of absolute differences. In any case, write a method that takes in your two images and returns an image of the same size, where each location is set to the difference between the pair of pixels that share that location. Basically, this will give you an indication as to which pixels are different: the whiter the pixel, the greater the difference between the two corresponding locations.
I'm also going to assume you're using a BufferedImage class, as getRGB() methods are used and you are bit-shifting to access individual channels. In other words, make a method that looks like this:
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
    int width1 = img1.getWidth(); // Change - getWidth() and getHeight() for BufferedImage
    int width2 = img2.getWidth(); // take no arguments
    int height1 = img1.getHeight();
    int height2 = img2.getHeight();
    if ((width1 != width2) || (height1 != height2)) {
        System.err.println("Error: Images dimensions mismatch");
        System.exit(1);
    }
    // NEW - Create output Buffered image of type RGB
    BufferedImage outImg = new BufferedImage(width1, height1, BufferedImage.TYPE_INT_RGB);
    // Modified - Changed to int as pixels are ints
    int diff;
    int result; // Stores output pixel
    for (int i = 0; i < height1; i++) {
        for (int j = 0; j < width1; j++) {
            int rgb1 = img1.getRGB(j, i);
            int rgb2 = img2.getRGB(j, i);
            int r1 = (rgb1 >> 16) & 0xff;
            int g1 = (rgb1 >> 8) & 0xff;
            int b1 = (rgb1) & 0xff;
            int r2 = (rgb2 >> 16) & 0xff;
            int g2 = (rgb2 >> 8) & 0xff;
            int b2 = (rgb2) & 0xff;
            diff = Math.abs(r1 - r2); // Change
            diff += Math.abs(g1 - g2);
            diff += Math.abs(b1 - b2);
            diff /= 3; // Change - Ensure result is between 0 - 255
            // Make the difference image gray scale
            // The RGB components are all the same
            result = (diff << 16) | (diff << 8) | diff;
            outImg.setRGB(j, i, result); // Set result
        }
    }
    // Now return
    return outImg;
}
To call this method, simply do:
outImg = getDifferenceImage(img1, img2);
This is assuming that you are calling this within a method of your class. Have fun and good luck!
Just to note that the answer from @NickGrealy can be made 10 times faster if you don't need to keep the first image and can modify it in place.
Example:
// img1 will be updated with the changes from img2
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
    byte[] magenta = {-1, 0, -1};
    // assumes both images are byte-backed with 4 bytes per pixel (e.g. TYPE_4BYTE_ABGR)
    byte[] buff1 = ((DataBufferByte) img1.getRaster().getDataBuffer()).getData();
    byte[] buff2 = ((DataBufferByte) img2.getRaster().getDataBuffer()).getData();
    // step through one sample per 4-byte pixel; paint the B, G, R bytes magenta where it differs
    for (int i = 1; i < buff1.length; i += 4) {
        if (buff1[i] != buff2[i]) {
            System.arraycopy(magenta, 0, buff1, i, 3);
        }
    }
    return img1;
}
I needed a fast approach to use on a potentially large number of images for visual regression checking.
It runs in < 2 ms on my machine, and in my case img1 is already saved on disk, so I don't need to preserve it; I'm just interested in having the differences written into the buffered image so I can save it to a new location for further inspection.
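A minimal usage sketch (assuming both inputs need to be byte-backed, e.g. TYPE_4BYTE_ABGR, for the DataBufferByte cast above to work; the file names are just examples):
// Redraw each input into TYPE_4BYTE_ABGR so the raster is byte-backed
BufferedImage a = toByteAbgr(ImageIO.read(new File("a.png")));
BufferedImage b = toByteAbgr(ImageIO.read(new File("b.png")));
ImageIO.write(getDifferenceImage(a, b), "png", new File("diff.png"));

static BufferedImage toByteAbgr(BufferedImage src) {
    BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
    dst.getGraphics().drawImage(src, 0, 0, null);
    return dst;
}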

DCT2 of png using jTransforms

What I'm trying to do is to compute 2D DCT of an image in Java and then save the result back to file.
Read file:
coverImage = readImg(coverPath);

private BufferedImage readImg(String path) {
    BufferedImage destination = null;
    try {
        destination = ImageIO.read(new File(path));
    } catch (IOException e) {
        e.printStackTrace();
    }
    return destination;
}
Convert to float array:
cover = convertToFloatArray(coverImage);

private float[] convertToFloatArray(BufferedImage source) {
    securedImage = (WritableRaster) source.getData();
    float[] floatArray = new float[source.getHeight() * source.getWidth()];
    floatArray = securedImage.getPixels(0, 0, source.getWidth(), source.getHeight(), floatArray);
    return floatArray;
}
Run the DCT:
runDCT(cover, coverImage.getHeight(), coverImage.getWidth());

private void runDCT(float[] floatArray, int rows, int cols) {
    dct = new FloatDCT_2D(rows, cols);
    dct.forward(floatArray, false);
    securedImage.setPixels(0, 0, cols, rows, floatArray);
}
And then save it as image:
convertDctToImage(securedImage, coverImage.getHeight(), coverImage.getWidth());

private void convertDctToImage(WritableRaster secured, int rows, int cols) {
    coverImage.setData(secured);
    File file = new File(securedPath);
    try {
        ImageIO.write(coverImage, "png", file);
    } catch (IOException ex) {
        Logger.getLogger(DCT2D.class.getName()).log(Level.SEVERE, null, ex);
    }
}
But what I get is: http://kyle.pl/up/2012/05/29/dct_stack.png
Can anyone tell me what I'm doing wrong? Or maybe I don't understand something here?
This is a piece of code that works for me:
//reading image
BufferedImage image = javax.imageio.ImageIO.read(new File(filename));
//width * 2, because DoubleFFT_2D needs 2x more space - for Real and Imaginary parts of complex numbers
double[][] brightness = new double[image.getHeight()][image.getWidth() * 2];
//raster and a row buffer for reading packed RGB pixels (assumes an int-packed image such as TYPE_INT_RGB)
WritableRaster raster = image.getRaster();
int[] dataElements = new int[image.getWidth()];
//convert colored image to grayscale (brightness of each pixel)
for ( int y = 0; y < image.getHeight(); y++ ) {
    raster.getDataElements( 0, y, image.getWidth(), 1, dataElements );
    for ( int x = 0; x < image.getWidth(); x++ ) {
        //notice x and y swapped - it's JTransforms format of arrays
        brightness[y][x] = brightnessRGB(dataElements[x]);
    }
}
//do FT (not FFT, because FFT is only for images with width and height being 2**N)
//DoubleFFT_2D writes data to the same array - to brightness
new DoubleFFT_2D(image.getHeight(), image.getWidth()).realForwardFull(brightness);
//visualising frequency domain
BufferedImage fd = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
for ( int y = 0; y < image.getHeight(); y++ ) {
    for ( int x = 0; x < image.getWidth(); x++ ) {
        //we calculate complex number vector length (sqrt(Re**2 + Im**2)). But these lengths are too big to
        //fit in the 0 - 255 scale of colors. So I divide by 223. Instead of "223", you may want to choose
        //another factor, which would make your frequency domain look best
        int power = (int) (Math.sqrt(Math.pow(brightness[y][2 * x], 2) + Math.pow(brightness[y][2 * x + 1], 2)) / 223);
        power = power > 255 ? 255 : power;
        //draw a grayscale color on image "fd"
        fd.setRGB(x, y, new Color(power, power, power).getRGB());
    }
}
draw(fd);
The resulting image should look like a big black space in the middle with white spots in all four corners. Usually people visualise the frequency domain so that the zero frequency appears in the center of the image. So, if you need the classical FD (the one that looks like a star for real-life images), you need to upgrade the "fd.setRGB(x, y..." call a bit:
int w2 = img.getWidth() / 2;
int h2 = img.getHeight() / 2;
int newX = x + w2 >= img.getWidth() ? x - w2 : x + w2;
int newY = y + h2 >= img.getHeight() ? y - h2 : y + h2;
fd.setRGB(newX, newY, new Color(power, power, power).getRGB());
brightnessRGB and draw methods for the lazy:
public static int brightnessRGB(int rgb) {
    int r = (rgb >> 16) & 0xff;
    int g = (rgb >> 8) & 0xff;
    int b = rgb & 0xff;
    return (r + g + b) / 3;
}

private static void draw(BufferedImage img) {
    JLabel picLabel = new JLabel(new ImageIcon(img));
    JPanel jPanelMain = new JPanel();
    jPanelMain.add(picLabel);
    JFrame jFrame = new JFrame();
    jFrame.add(jPanelMain);
    jFrame.pack();
    jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    jFrame.setVisible(true);
}
I know I'm a bit late, but I just did all that for my program, so let it be here for those who get here from googling.
