How to control fps on video without using JMF in Java

I'm about to write a program that takes a .rgb video file as input, lets me specify its fps, and plays it back. How can I do this without using the JMF package? I think the point of this program is to understand how frame-by-frame playback works, so I decided to use BufferedImage as my tool, but I still haven't figured out how to make it work on video (frame by frame).
I've tried lots of things, but the delay between two frames is very noticeable, and I don't know why. Is there an efficient way to play the video frame by frame without using any JMF package?
Here's the starter code that the TA gave us:
import java.awt.*;
import java.awt.image.*;
import java.io.*;
import javax.swing.*;
public class imageReader {
public static void main(String[] args) {
String fileName = args[0];
int width = Integer.parseInt(args[1]);
int height = Integer.parseInt(args[2]);
BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
try {
File file = new File(fileName+".rgb");
InputStream is = new FileInputStream(file);
//BufferedInputStream bis = new BufferedInputStream(is);
long len = file.length(); //bytes in one frame (consist of R,G,B frame)
byte[] bytes = new byte[10000000];
int offset = 0;
int numRead = 0;
while (offset < bytes.length && (numRead=is.read(bytes, offset, bytes.length-offset)) >= 0) {
offset += numRead;
}
System.out.println("\noffset = "+ offset); //7603200
System.out.println("numRead = "+ numRead); //7603200
System.out.println("file length: " + len); //7603200
System.out.println("frame #: "+ (offset/(width*height*3))); //100
JFrame frame = new JFrame();
frame.setVisible(true);
int ind = 0;
int f = 1;
while(ind < offset){
for(int y = 0; y < height; y++){
for(int x = 0; x < width; x++){
byte a = 0;
byte r = bytes[ind];
byte g = bytes[ind+height*width];
byte b = bytes[ind+height*width*2];
int pix = 0xff000000 | ((r & 0xff) << 16) | ((g & 0xff) << 8) | (b & 0xff);
//int pix = ((a << 24) + (r << 16) + (g << 8) + b);
img.setRGB(x,y,pix);
ind++;
}
}
JLabel label = new JLabel(new ImageIcon(img));
frame.getContentPane().add(label, BorderLayout.CENTER);
frame.pack();
System.out.println("frame #: "+ ind);
}
System.out.print("end");
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
// Use a label to display the image
/*JFrame frame = new JFrame();
*JLabel label = new JLabel(new ImageIcon(img));
*frame.getContentPane().add(label, BorderLayout.CENTER);
*frame.pack();
*frame.setVisible(true);
**/
}
}
What is going wrong with this code??
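For reference, here is a minimal sketch (not from the original post) of how a single frame could be decoded from the planar layout the starter code assumes, i.e. each frame stored as a full red plane, then a full green plane, then a full blue plane. The class and method names (RgbFrameDecoder, decodeFrame) and the frameIndex parameter are illustrative.
import java.awt.image.BufferedImage;

public class RgbFrameDecoder {
    // bytes: the whole .rgb file; frameIndex: which frame to decode (0-based).
    public static BufferedImage decodeFrame(byte[] bytes, int frameIndex, int width, int height) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        int frameSize = width * height * 3;      // one R plane + one G plane + one B plane
        int ind = frameIndex * frameSize;        // start of this frame's red plane
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                byte r = bytes[ind];
                byte g = bytes[ind + width * height];
                byte b = bytes[ind + width * height * 2];
                int pix = 0xff000000 | ((r & 0xff) << 16) | ((g & 0xff) << 8) | (b & 0xff);
                img.setRGB(x, y, pix);
                ind++;
            }
        }
        return img;
    }
}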

Related

How to recognize an image in another image?

I'm fairly new to Java, so I'm not sure my way of doing this is actually a good idea, but basically I am trying to check for an instance of one image inside another image. To test whether this would work, I took a 200x148 jpg, got its bytes, then took a screenshot of the window, got its bytes, and compared the two.
Now, since normally the first image wouldn't be in that screenshot, I opened it in my photos app and put it on screen while the program was sleeping (before it took the screenshot). And yes, I can confirm that the first image was in the screenshot by looking at the screenshot. However, when I compare the strings (to check whether a String with all of the byte data of image one is contained in a String with all of the byte data of image two), the check comes back negative.
Here's the code that I'm trying to use so far:
public static void main(String[] args) throws IOException, HeadlessException, AWTException, InterruptedException {
// Get the first image
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(ImageIO.read(new File("profile.jpg")), "jpg", baos);
byte[] bytes = baos.toByteArray();
String bytes1S = "";
for (int i = 0; i < bytes.length; i++) {
bytes1S += bytes[i];
}
// Give yourself enough time to open the other image
TimeUnit.SECONDS.sleep(6);
// Take the screenshot
BufferedImage image = new Robot().createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
ImageIO.write(image, "jpg", new File("screenshot.jpg"));
baos = new ByteArrayOutputStream();
ImageIO.write(ImageIO.read(new File("screenshot.jpg")), "jpg", baos);
byte[] bytes2 = baos.toByteArray();
String bytes2S = "";
for (int i = 0; i < bytes2.length; i++) {
bytes2S += bytes2[i];
}
// Check if the second String of bytes contains the first String of bytes.
if (bytes2S.contains(bytes1S))
System.out.println("Yes");
else
System.out.println("No");
}
And for reference, here's the first image, and the screenshot that it took:
What's the reason behind it not detecting the first image in the screenshot, and is there a better way to do this (preferably without another library)?
A brute-force approach is to simply load both images as BufferedImage objects, and then walk through the "main" image, pixel by pixel, and see if the "sub image" can be found there.
I implemented this a while ago, and will post the code below as an MCVE.
But note: when you save images as JPG files, they are compressed, and this compression is lossy. This means that the pixels will not have exactly the same colors, even if they were identical on screen. In the example below, this is handled pragmatically with a "threshold" that defines how different the pixels may be, but that is a bit arbitrary and not very reliable. (A more robust solution would require more effort.)
I'd strongly recommend saving the images as PNG files, which use lossless compression. For PNG files, you can set threshold=0 in the code below.
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.net.URL;
import java.util.function.IntBinaryOperator;
import javax.imageio.ImageIO;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;
public class FindImageInImage
{
public static void main(String[] args) throws Exception
{
BufferedImage mainImage =
ImageIO.read(new URL("https://i.stack.imgur.com/rEouF.jpg"));
BufferedImage subImage =
ImageIO.read(new URL("https://i.stack.imgur.com/wISyn.jpg"));
int threshold = 100;
Point location = findImageLocation(
mainImage, subImage, threshold);
System.out.println("At " + location);
SwingUtilities.invokeLater(() -> showIt(mainImage, subImage, location));
}
private static void showIt(
BufferedImage mainImage, BufferedImage subImage, Point location)
{
JFrame f = new JFrame();
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
f.getContentPane().add(new JPanel()
{
@Override
protected void paintComponent(Graphics g)
{
super.paintComponent(g);
g.drawImage(mainImage, 0, 0, null);
if (location != null)
{
g.setColor(Color.RED);
g.drawRect(location.x, location.y,
subImage.getWidth(), subImage.getHeight());
}
}
});
f.setSize(1500, 800);
f.setLocationRelativeTo(null);
f.setVisible(true);
}
static Point findImageLocation(
BufferedImage mainImage,
BufferedImage subImage,
int threshold)
{
return findImageLocation(mainImage, subImage, (rgb0, rgb1) ->
{
int difference = computeDifference(rgb0, rgb1);
if (difference > threshold)
{
return 1;
}
return 0;
});
}
private static int computeDifference(int rgb0, int rgb1)
{
int r0 = (rgb0 & 0x00FF0000) >> 16;
int g0 = (rgb0 & 0x0000FF00) >> 8;
int b0 = (rgb0 & 0x000000FF);
int r1 = (rgb1 & 0x00FF0000) >> 16;
int g1 = (rgb1 & 0x0000FF00) >> 8;
int b1 = (rgb1 & 0x000000FF);
int dr = Math.abs(r0 - r1);
int dg = Math.abs(g0 - g1);
int db = Math.abs(b0 - b1);
return dr + dg + db;
}
static Point findImageLocation(
BufferedImage mainImage,
BufferedImage subImage,
IntBinaryOperator rgbComparator)
{
int w = mainImage.getWidth();
int h = mainImage.getHeight();
for (int x=0; x < w; x++)
{
for (int y = 0; y < h; y++)
{
if (isSubImageAt(mainImage, x, y, subImage, rgbComparator))
{
return new Point(x, y);
}
}
}
return null;
}
static boolean isSubImageAt(
BufferedImage mainImage, int x, int y,
BufferedImage subImage,
IntBinaryOperator rgbComparator)
{
int w = subImage.getWidth();
int h = subImage.getHeight();
if (x + w > mainImage.getWidth())
{
return false;
}
if (y + h > mainImage.getHeight())
{
return false;
}
for (int ix=0; ix < w; ix++)
{
for (int iy = 0; iy < h; iy++)
{
int mainRgb = mainImage.getRGB(x + ix, y + iy);
int subRgb = subImage.getRGB(ix, iy);
if (rgbComparator.applyAsInt(mainRgb, subRgb) != 0)
{
return false;
}
}
}
return true;
}
}

How to assign new color values to ranges of color

I need my program to go through the pixels in an image and convert them to grayscale. Then I need to take ranges of gray values and colorize them using if-else and if-else-if statements. Can someone please help me figure this out?
Here's my code so far:
import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.ImageIO;
import javax.swing.JFrame;
public class Colorize {
BufferedImage image;
int width;
int height;
public Colorize() {
try {
File input = new File("Grayscale.jpg");
image = ImageIO.read(input);
width = image.getWidth();
height = image.getHeight();
for(int i=0; i<height; i++){
for(int j=0; j<width; j++){
int col = image.getRGB(i, j);
Color c = new Color(col, true);
int red = c.getRed();
int green = c.getGreen();
int blue = c.getBlue();
if ((red>= 1)&&(red<=30)) {
c = new Color(c.getRed() + 10, c.getGreen(), c.getBlue());
}
if ((red>= 31)&&(red<=60)) {
c = new Color(c.getRed(), c.getGreen() + 10, c.getBlue());
}
if ((red>= 61)&&(red<=90)) {
c = new Color(c.getRed(), c.getGreen(), c.getBlue() + 10);
}
if ((red>= 91)&&(red<=120)) {
c = new Color(c.getRed() + 10, c.getGreen() + 10, c.getBlue());
}
if ((red>= 121)&&(red<=150)) {
c = new Color(c.getRed() + 10, c.getGreen(), c.getBlue() + 10);
}
if ((red>= 151)&&(red<=180)) {
c = new Color(c.getRed(), c.getGreen() + 10, c.getBlue() + 10);
}
if ((red>= 181)&&(red<=210)) {
c = new Color(c.getRed() - 10, c.getGreen(), c.getBlue());
}
if ((red>= 211)&&(red<=240)) {
c = new Color(c.getRed(), c.getGreen() - 10, c.getBlue());
}
else {
c = new Color(c.getRed(), c.getGreen(), c.getBlue());
}
image.setRGB(j,i,c.getRGB());
}
}
File output = new File("Colorize.jpg");
ImageIO.write(image, "jpg", output);
} catch (Exception e) {}
}
static public void main(String args[]) throws Exception
{
Colorize obj = new Colorize();
}
}
Here's the image in case you guys want to try the code out. So far nothing is being written to the folder.
There must be an exception somewhere; unfortunately, you catch the exception and do nothing.
Replace
catch (Exception e) {}
with
catch (Exception e) {
e.printStackTrace();
}
That will help you find what's happening. My guess is you will get a FileNotFoundException because Grayscale.jpg is probably not in your working directory.
To convert your image to grayscale, you need a weighted average of the color channels: 0.2126 * r + 0.7152 * g + 0.0722 * b. Extracting the channels with bit shifts, as below, is equivalent to what getRed(), getGreen() and getBlue() do. Finally, combine the gray value back into a pixel and write it with setRGB. This will turn your image into a grayscale one.
for (int i = 0; i < image.getWidth(); i++) {
    for (int j = 0; j < image.getHeight(); j++) {
        int color = image.getRGB(i, j);
        int r = (color >> 16) & 0xFF;
        int g = (color >> 8) & 0xFF;
        int b = color & 0xFF;
        // Luminosity-weighted average of the channels, cast back to an int gray level
        int gray = (int) (0.2126 * r + 0.7152 * g + 0.0722 * b);
        int finalColor = (gray << 16) | (gray << 8) | gray;
        image.setRGB(i, j, finalColor);
    }
}
Edit your question with more information about your coloring so I could help more.
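For what it's worth, once the grayscale conversion above is done, the range-based colorizing the question asks about could look something like the sketch below, placed inside the same pixel loop; the ranges and the replacement colors are placeholders, since the post doesn't say which colors the ranges should map to.
int gray = (color >> 16) & 0xFF;                 // any channel works once the image is gray
int newColor;
if (gray <= 60) {
    newColor = new Color(0, 0, 128).getRGB();    // darkest grays -> dark blue (placeholder)
} else if (gray <= 120) {
    newColor = new Color(0, 128, 0).getRGB();    // mid grays -> green (placeholder)
} else if (gray <= 180) {
    newColor = new Color(255, 140, 0).getRGB();  // lighter grays -> orange (placeholder)
} else {
    newColor = new Color(255, 255, 224).getRGB();// brightest grays -> light yellow (placeholder)
}
image.setRGB(i, j, newColor);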

How can I turn a series of images into a video using JCodec?

I'm trying to use JCodec to turn a series of images into a video inside of a Java SE desktop application. The few methods I've tried all resulted in a video that Windows Media Player could not play.
It is unclear to me if this is a codec issue (doubtful) or if I'm not creating the video properly. When I try to play the video in Windows Media Player I get:
Windows Media Player cannot play the file. The Player might not support the file type or might not support the codec that was used to compress the file.
Here is the cobbled together sample that I've been using most recently. I really don't understand the internals of the video format, so I'm not even entirely sure what some of the code is doing.
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import javax.imageio.ImageIO;
import org.jcodec.codecs.h264.H264Encoder;
import org.jcodec.codecs.h264.H264Utils;
import org.jcodec.common.NIOUtils;
import org.jcodec.common.SeekableByteChannel;
import org.jcodec.common.model.ColorSpace;
import org.jcodec.common.model.Picture;
import org.jcodec.containers.mp4.Brand;
import org.jcodec.containers.mp4.MP4Packet;
import org.jcodec.containers.mp4.TrackType;
import org.jcodec.containers.mp4.muxer.FramesMP4MuxerTrack;
import org.jcodec.containers.mp4.muxer.MP4Muxer;
import org.jcodec.scale.AWTUtil;
import org.jcodec.scale.RgbToYuv420;
public class CreateVideo {
private SeekableByteChannel ch;
private Picture toEncode;
private RgbToYuv420 transform;
private H264Encoder encoder;
private ArrayList<ByteBuffer> spsList;
private ArrayList<ByteBuffer> ppsList;
private FramesMP4MuxerTrack outTrack;
private ByteBuffer _out;
private int frameNo;
private MP4Muxer muxer;
public CreateVideo(File out) throws IOException {
ch = NIOUtils.writableFileChannel(out);
// Transform to convert between RGB and YUV
transform = new RgbToYuv420(0, 0);
// Muxer that will store the encoded frames
muxer = new MP4Muxer(ch, Brand.MP4);
// Add video track to muxer
outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);
// Allocate a buffer big enough to hold output frames
_out = ByteBuffer.allocate(1920 * 1080 * 6);
// Create an instance of encoder
encoder = new H264Encoder();
// Encoder extra data ( SPS, PPS ) to be stored in a special place of
// MP4
spsList = new ArrayList<ByteBuffer>();
ppsList = new ArrayList<ByteBuffer>();
}
public void encodeImage(BufferedImage bi) throws IOException {
if (toEncode == null) {
toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
}
// Perform conversion
for (int i = 0; i < 3; i++) {
Arrays.fill(toEncode.getData()[i], 0);
}
transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);
// Encode image into H.264 frame, the result is stored in '_out' buffer
_out.clear();
ByteBuffer result = encoder.encodeFrame(_out, toEncode);
// Based on the frame above form correct MP4 packet
spsList.clear();
ppsList.clear();
H264Utils.encodeMOVPacket(result, spsList, ppsList);
// Add packet to video track
outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));
frameNo++;
}
public void finish() throws IOException {
// Push saved SPS/PPS to a special storage in MP4
outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
// Write MP4 header and finalize recording
muxer.writeHeader();
NIOUtils.closeQuietly(ch);
}
public static void main(String[] args) throws IOException {
CreateVideo encoder = new CreateVideo(new File("C:\\video\\video.mp4"));
for (int i = 10; i < 34; i++) {
String filename = String.format("C:\\video\\%02d.png", i);
BufferedImage bi = ImageIO.read(new File(filename));
encoder.encodeImage(bi);
}
encoder.finish();
}
}
I'm not tied to H264 or MP4 if there is some easier codec/container. The only requirement is that it should play on a baseline Windows 7 box with no additional software installed.
I had a lot of issues with jcodec's RgbToYuv420() transform.
I used my own function to transform my RGB BufferedImage to Yuv420.
One issue I had was that RgbToYuv420 averages the computed upix and vpix values, and it was causing the colors in my mp4 to be all wonky.
// make a YUV420J out of BufferedImage pixels
private Picture makeFrame(BufferedImage bi, ColorSpace cs)
{
DataBuffer imgdata = bi.getRaster().getDataBuffer();
int[] ypix = new int[imgdata.getSize()];
int[] upix = new int[ imgdata.getSize() >> 2 ];
int[] vpix = new int[ imgdata.getSize() >> 2 ];
int ipx = 0, uvoff = 0;
for (int h = 0; h < bi.getHeight(); h++) {
for (int w = 0; w < bi.getWidth(); w++) {
int elem = imgdata.getElem(ipx);
int r = 0x0ff & (elem >>> 16);
int g = 0x0ff & (elem >>> 8);
int b = 0x0ff & elem;
ypix[ipx] = ((66 * r + 129 * g + 25 * b) >> 8) + 16;
if ((0 != w % 2) && (0 != h % 2)) {
upix[uvoff] = (( -38 * r + -74 * g + 112 * b) >> 8) + 128;
vpix[uvoff] = (( 112 * r + -94 * g + -18 * b) >> 8) + 128;
uvoff++;
}
ipx++;
}
}
int[][] pix = { ypix, upix, vpix, null };
// old API
// return new Picture(bi.getWidth(), bi.getHeight(), pix, ColorSpace.YUV420J);
return Picture.createPicture(bi.getWidth(), bi.getHeight(), pix, ColorSpace.YUV420J);
}
With 0.2.0 there is now https://github.com/jcodec/jcodec/blob/master/javase/src/main/java/org/jcodec/api/awt/AWTSequenceEncoder8Bit.java
This example creates an MP4 with a fixed background and an overlaid animation. No sound.
AWTSequenceEncoder8Bit enc = AWTSequenceEncoder8Bit.create2997Fps(outputMovieFile);
int framesToEncode = (int) (29.97 * durationInSeconds);
java.awt.image.BufferedImage image = ...(some background image)
for (int frameIndx = 0, x = 0, y = 0, incX = speed, incY = speed; frameIndx < framesToEncode; frameIndx++, x += incX, y += incY) {
Graphics g = image.getGraphics();
g.setColor(Color.YELLOW);
if (x >= image.getWidth() - ballSize) incX = -speed;
if (y >= image.getHeight() - ballSize) incY = -speed;
if (x <= 0) incX = speed;
if (y <= 0) incY = speed;
g.fillOval(x, y, ballSize, ballSize);
enc.encodeImage(image);
}
enc.finish();

Jpeg DCT and IDCT not calculated properly

I'm trying to calculate the DCT and IDCT of an input image and display the IDCT output as the resulting image, but my IDCT values are going above 300. My input image is a '.rgb' image.
I am treating the height and width of the input image as constants, i.e., 352*288.
I am representing the red, green, and blue values of each pixel as rgb[3][64][1584], where [3] is the index of red/green/blue, [64] is the index of the pixel within an 8*8 block, and [1584] is the index of the block (one of the 1584 blocks).
Lastly, my quantization table has a uniform value of 2^N, where N is passed as a parameter; in the code, quantLevel is that N.
Following is my code:
import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.FlowLayout;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
public class Jpeg {
double rgb[][][]=new double[3][64][1584];
double rgbfinal[][][]=new double[3][64][1584];
double R[][]=new double[64][1584];
double G[][]=new double[64][1584];
double B[][]=new double[64][1584];
public void go(String fname, int quantLevel){
int numBlocks=(352*288)/64;
String fileName = fname;
int width=352,height=288;
BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
try {
File file = new File(fname);
InputStream is = new FileInputStream(file);
long len = file.length();
byte[] bytes = new byte[(int)len];
int offset = 0;
int numRead = 0;
while (offset < bytes.length && (numRead=is.read(bytes, offset, bytes.length-offset)) >= 0) {
offset += numRead;
}
int ind = 0;
for(int y = 0; y < height; y++){
for(int x = 0; x < width; x++){
byte r = bytes[ind];
byte g = bytes[ind+height*width];
byte b = bytes[ind+height*width*2];
int pix = 0xff000000 | ((r & 0xff) << 16) | ((g & 0xff) << 8) | (b & 0xff);
img.setRGB(x,y,pix);
ind++;
}
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
int indexRow=0,indexCol=0,indexBlock=0,indexBits=0,indexPixelBlock=0,indexPixel=0;
long count=0L;
String binary="";
indexPixel=0;
indexBlock=0;
int i=0;
while(count!=(long)(numBlocks*64)){
int pix = img.getRGB(indexCol, indexRow);
int red = (pix >> 16) & 0x000000FF;
int green = (pix >>8 ) & 0x000000FF;
int blue = (pix) & 0x000000FF;
rgb[0][indexPixel][indexBlock]=red;
rgb[1][indexPixel][indexBlock]=green;
rgb[2][indexPixel][indexBlock]=blue;
count++;
indexPixel++;
if(indexCol==width-1 && indexRow==height-1)
break;
if(indexCol%7==0 && indexCol!=0 && indexPixel%8==0)
{
indexPixel=indexPixelBlock;
indexBlock++;
}
if(indexPixel%8==0 && indexCol%7!=0 && indexBlock!=1583)
{
indexPixel=indexPixelBlock;
indexBlock++;
}
if(indexCol==width-1)
{
indexCol=0;
indexRow++;
}
else
indexCol++;
if((indexPixel)%8==0 && indexBlock==numBlocks-1 && indexCol%7==0)
{
indexBlock=0;
indexPixelBlock=indexPixel;
}
}
calcQuantizedDCT(quantLevel);
calcInverseDCT(quantLevel);
JFrame frame = new JFrame();
frame.setLocation(0,0);
frame.setSize(1024, 720);
frame.getContentPane().setBackground(Color.WHITE);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
JPanel p=new JPanel();
p.setLayout(new FlowLayout(FlowLayout.LEFT));
p.setLocation(100,100);
JLabel label = new JLabel(new ImageIcon(img));
label.setLocation(0,0);
label.setSize(352,288);
p.add(label);
frame.add(p);
frame.setVisible(true);
return;
}
void calcQuantizedDCT(int quantLevel)
{
String binary="";
int indexBlock=0,indexPixel=0,indexBits=0,red=0,green=0,blue=0,x=0,y=0,indexPixelTemp=0,u=0,v=0;
double sumRed=0,sumGreen=0,sumBlue=0;
String substr="";
int i=0;
for(indexBlock=0;indexBlock<1584;indexBlock++)
{
indexPixel=0;
//
while(indexPixel!=64 && u<8)
{
while(indexPixelTemp<64 && x<8)
{
red=(int)rgb[0][indexPixelTemp][indexBlock];
green=(int)rgb[1][indexPixelTemp][indexBlock];
blue=(int)rgb[2][indexPixelTemp][indexBlock];
// System.out.println(red);
sumRed+=red*Math.cos((Math.PI*(2*y+1)*u)/(2*8))*Math.cos((Math.PI*(2*x+1)*v)/(2*8));
sumGreen+=green*Math.cos((Math.PI*(2*x+1)*u)/(2*8))*Math.cos((Math.PI*(2*y+1)*v)/(2*8));
sumBlue+=blue*Math.cos((Math.PI*(2*x+1)*u)/(16))*Math.cos((Math.PI*(2*y+1)*v)/(16));
indexPixelTemp++;
y++;
if(y==8)
{
x++;
y=0;
}
}
//System.out.println("SumRed :"+sumRed);
if(u==0 && v==0)
{
//System.out.println("U & V & Pixel & Block "+u+" "+v+" "+indexPixel+" "+indexBlock);
R[indexPixel][indexBlock]=(Math.sqrt(1.0/64.0)*sumRed)/Math.pow(2,quantLevel);
G[indexPixel][indexBlock]=(Math.sqrt(1.0/64.0)*sumGreen)/Math.pow(2,quantLevel);
B[indexPixel][indexBlock]=(Math.sqrt(1.0/64.0)*sumBlue)/Math.pow(2,quantLevel);
}
else
{
//System.out.println("U & V & Pixel & Block "+u+" "+v+" "+indexPixel+" "+indexBlock);
R[indexPixel][indexBlock]=(Math.sqrt(2.0/32.0)*sumRed)/Math.pow(2,quantLevel);
G[indexPixel][indexBlock]=(Math.sqrt(2.0/32.0)*sumGreen)/Math.pow(2,quantLevel);
B[indexPixel][indexBlock]=(Math.sqrt(2.0/32.0)*sumBlue)/Math.pow(2,quantLevel);
}
indexPixel++;
if(indexPixel==64)
break;
indexPixelTemp=0;
v++;
if(v==8)
{
v=0;
u++;
}
x=0;y=0;sumGreen=0;sumRed=0;sumBlue=0;
}
u=0;v=0;
}
/* for(int j=0;j<64;j++)
{
System.out.print(R[j][0]+" ");
if(j%7==0 && j!=0)
System.out.println();
}
*/
}
void calcInverseDCT(int quantLevel)
{
String binary="";
int indexBlock=0,indexPixel=0,indexBits=0,u=0,v=0,x=0,y=0,indexPixelTemp=0,sumRed=0,sumGreen=0,sumBlue=0,red=0,green=0,blue=0;
for(indexBlock=0;indexBlock<1584;indexBlock++)
{
for(indexPixel=0;indexPixel<64;indexPixel++)
{
R[indexPixel][indexBlock]=R[indexPixel][indexBlock]*Math.pow(2,quantLevel);
G[indexPixel][indexBlock]=G[indexPixel][indexBlock]*Math.pow(2,quantLevel);
B[indexPixel][indexBlock]=B[indexPixel][indexBlock]*Math.pow(2,quantLevel);
}
}
int i=0;
indexPixelTemp=0;
indexPixel=0;
for(indexBlock=0;indexBlock<1584;indexBlock++)
{
indexPixel=0;
while(indexPixel<64 && x<8)
{
indexPixelTemp=0;
while(indexPixelTemp<64 && u<8)
{
red=(int)R[indexPixelTemp][indexBlock];
if(u==0 && v==0)
sumRed+=Math.sqrt(1.0/2.0)*red*Math.cos((Math.PI*(2*x+1)*u)/(2*8))*Math.cos((Math.PI*(2*y+1)*v)/(2*8));
else
sumRed+=red*Math.cos((Math.PI*(2*x+1)*u)/(2*8))*Math.cos((Math.PI*(2*y+1)*v)/(2*8));
green=(int)G[indexPixelTemp][indexBlock];
if(u==0 && v==0)
sumGreen+=Math.sqrt(1.0/2.0)*green*Math.cos((Math.PI*(2*x+1)*u)/(2*8))*Math.cos((Math.PI*(2*y+1)*v)/(2*8));
else
sumGreen+=green*Math.cos((Math.PI*(2*x+1)*u)/(2*8))*Math.cos((Math.PI*(2*y+1)*v)/(2*8));
blue=(int)B[indexPixelTemp][indexBlock];
if(u==0 && v==0)
sumBlue+=Math.sqrt(1.0/2.0)*blue*Math.cos((Math.PI*(2*x+1)*u)/(2*8))*Math.cos((Math.PI*(2*y+1)*v)/(2*8));
else
sumBlue+=blue*Math.cos((Math.PI*(2*x+1)*u)/(2*8))*Math.cos((Math.PI*(2*y+1)*v)/(2*8));
indexPixelTemp++;
v++;
if(v==8)
{
u++;
v=0;
}
}
rgbfinal[0][indexPixel][indexBlock]=sumRed;
rgbfinal[1][indexPixel][indexBlock]=sumGreen;
rgbfinal[2][indexPixel][indexBlock]=sumBlue;
indexPixel++;
indexPixelTemp=0;
y++;
if(y==8)
{
y=0;
x++;
}
u=0;v=0;sumGreen=0;sumRed=0;sumBlue=0;
}
if(i==3)
break;
x=0;y=0;
}
System.out.println();
/*for(i=0;i<64;i++)
{
System.out.print(rgbfinal[0][i][0]+" ");
if(i%7==0 && i!=0)
System.out.println();
}*/
}
public static void main(String args[]){
Jpeg a = new Jpeg();
a.go("src/image2.rgb",0);
}
}
I am not displaying the output image yet, because the output of the IDCT for red, which I checked, goes above 255.
Please help.
First, are you hoping to create a JPEG-compatible codec? Or are you just toying with multimedia coding concepts? If the latter, then more power to you. If the former, you have a lot of work ahead.
To directly answer your question: It's normal to transform an 8x8 block using a 2D forward DCT and get numbers that go above 255. Low-level implementations of the forward DCT usually take an 8x8 vector of unsigned 8-bit samples as input and output an 8x8 vector of signed 16-bit samples.
If you actually are hoping to create a JPEG-compatible codec, you still have a few topics to study. For starters, JPEG does not compress RGB. RGB data is converted to YUV and those blocks get transformed (then quantized, zigzagged, and entropy coded).
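To make the range point concrete, here is a small self-contained sketch of a textbook orthonormal 8x8 forward DCT (this is not the poster's code): even a flat block of 255s produces a DC coefficient of 2040, well above 255.
public class Dct8x8Demo {
    // Naive orthonormal 2D DCT-II of an 8x8 block; O(n^4), but fine for a demonstration.
    static double[][] forwardDct(int[][] block) {
        double[][] out = new double[8][8];
        for (int u = 0; u < 8; u++) {
            for (int v = 0; v < 8; v++) {
                double sum = 0;
                for (int x = 0; x < 8; x++) {
                    for (int y = 0; y < 8; y++) {
                        sum += block[x][y]
                             * Math.cos((2 * x + 1) * u * Math.PI / 16)
                             * Math.cos((2 * y + 1) * v * Math.PI / 16);
                    }
                }
                double cu = (u == 0) ? 1 / Math.sqrt(2) : 1;
                double cv = (v == 0) ? 1 / Math.sqrt(2) : 1;
                out[u][v] = 0.25 * cu * cv * sum;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] flat = new int[8][8];
        for (int[] row : flat) java.util.Arrays.fill(row, 255);
        // Prints roughly 2040.0: with this scaling the DC term is 8 * the average sample value.
        System.out.println("DC coefficient: " + forwardDct(flat)[0][0]);
    }
}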

Display images one after another on the panel with a time gap

import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.image.*;
import java.io.*;
import javax.swing.ImageIcon;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
public class Video implements ActionListener
{
static int width=480;
static int height=368;
static JFrame frame = new JFrame();
static JButton button = new JButton("Submit");
static BufferedImage img = new BufferedImage((int) (width), (int) (height), BufferedImage.TYPE_INT_RGB);
static BufferedImage img1[] = new BufferedImage[60];
static {
for (int i = 0; i < img1.length; i++) {
img1[i] = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
}
}
public static void main(String[] args)
{
Video V1 = new Video();
String fileName = args[0];
try {
File file = new File(fileName);
InputStream is = new FileInputStream(file);
long len = file.length();
byte[] bytes = new byte[(int)len];
int offset = 0;
int numRead = 0;
int ind =0;
int[][] pixarray=new int[height+100][width+100];
int[][] red=new int[width*2][height*2];
int[][] green=new int[width*2][height*2];
int[][] blue=new int[width*2][height*2];
while (offset < bytes.length && (numRead=is.read(bytes, offset, bytes.length-offset)) >= 0) {
offset += numRead;
}
for(int frames=0;frames<60;frames++)
{
ind=height*width*frames*3;
for(int y = 0; y < height; y++)
{
for(int x = 0; x < width; x++)
{
byte a = 0;
byte r = bytes[ind];
byte g = bytes[ind+height*width];
byte b = bytes[ind+height*width*2];
int pix = 0xff000000 | ((r & 0xff) << 16) | ((g & 0xff) << 8) | (b & 0xff);
pixarray[y][x]=pix;
red[y][x] = (pix >>> 16) & 0xff;
green[y][x] = (pix >>> 8) & 0xff;
blue[y][x] = pix & 0xff;
img1[frames].setRGB(x,y,pix);
ind++;
}
}
}
}
catch (IOException e) {
e.printStackTrace();
}
JLabel label = new JLabel(new ImageIcon(img1[50]));
frame.setLayout(new FlowLayout());
//frame.setSize(200,100);
frame.setVisible(true);
// Button button = new Button("Submit");
// frame.add(button);
frame.getContentPane().add(label, BorderLayout.CENTER);
frame.getContentPane().add(button, BorderLayout.CENTER);
frame.pack();
frame.setVisible(true);
button.addActionListener(V1);
}
public void actionPerformed(ActionEvent e) {
System.out.println("1");
for(int i=0;i<img1.length;i++)
{
try {
Thread.sleep(500);
} catch (InterruptedException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
System.out.println(img1.length);
}
}
}
The above is my code. My img1 is an array of frames; in this instance my video has 60 frames.
I want to add each frame to my panel with a time gap. I am unable to do it because every time I add a frame to the panel in my ActionEvent it acts weird. Please help me out.
Use a Swing Timer (not a TimerTask). When the Timer fires, the code executes on the EDT, so you can safely reset the icon of your JLabel. So I would first create ImageIcons from your BufferedImages and store the icons in your array.
Read the section from the Swing tutorial on How to Use Timers for more information. You may also want to check out the section on Concurrency to understand why executing code on the EDT is important.
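A minimal sketch of that approach, assuming the label and the img1 array from the question are in scope (the 500 ms delay simply mirrors the Thread.sleep(500) in the question):
// Build the icons once, then let a Swing Timer swap them on the EDT.
final ImageIcon[] icons = new ImageIcon[img1.length];
for (int i = 0; i < icons.length; i++) {
    icons[i] = new ImageIcon(img1[i]);
}
javax.swing.Timer timer = new javax.swing.Timer(500, new ActionListener() {
    private int frameIndex = 0;

    @Override
    public void actionPerformed(ActionEvent e) {
        label.setIcon(icons[frameIndex]);            // safe: Timer callbacks run on the EDT
        frameIndex = (frameIndex + 1) % icons.length;
    }
});
timer.start();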
You can create a TimerTask
class VideoTask extends TimerTask {
    private Frame frame;
    private int frameId;

    VideoTask(Frame frame) {
        this.frame = frame;
    }

    public void run() {
        frame.drawImage(....); // draw the frame for frameId here
        frameId++;
    }
}
And in the action listener for the button, schedule the task with a java.util.Timer:
VideoTask videoTask = new VideoTask(frame);
new java.util.Timer().schedule(videoTask, 0, 500); // start immediately, one frame every 500 ms
