Enlarging an image using GD results in dismal quality compared to Scalr (Java)

I have been dealing with very small images and have been trying to find the best way to enlarge them. I compared two different approaches: imagecopyresampled (PHP: http://www.php.net/manual/en/function.imagecopyresampled.php) versus Scalr (Java: How to resize images in a directory?). I am including the code I am using below for the sake of completeness, but it is strongly based on the threads and sites referenced above; if someone thinks I should remove it, I will do that! Both sources appear to use the same algorithm for the job: bicubic interpolation. However, with the Java source the quality of the resized image is MUCH, MUCH better in my implementation (they are not even comparable). Am I doing something wrong when using the PHP source? If not, can anyone explain the difference between them?
Java code:
import java.awt.image.*;
import java.io.*;
import javax.imageio.*;
import static org.imgscalr.Scalr.*;

public class App2 {
    public static void main(String[] args) throws IOException {
        for (File sourceImageFile : new File("imgs").listFiles()) {
            if (sourceImageFile.getName().endsWith(".jpg"))
                res(sourceImageFile.getAbsolutePath());
        }
    }

    public static void res(String arg) throws IOException {
        File sourceImageFile = new File(arg);
        BufferedImage img = ImageIO.read(sourceImageFile);
        BufferedImage thumbnail = resize(img, 500);
        thumbnail.createGraphics().drawImage(thumbnail, 0, 0, null);
        ImageIO.write(thumbnail, "jpg", new File("resized/" + sourceImageFile.getName()));
    }
}
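(For completeness: the resize call above lets imgscalr pick the scaling method automatically. The library also accepts an explicit method; the variant below is only a sketch of that overload, not the code used for the comparison in this question.)
import java.awt.image.BufferedImage;
import org.imgscalr.Scalr;

public class ScalrMethodSketch {
    // Same call as in res() above, but with the scaling method pinned explicitly
    // (ULTRA_QUALITY does an incremental, multi-step scale) instead of letting
    // imgscalr choose one automatically.
    public static BufferedImage upscale(BufferedImage img) {
        return Scalr.resize(img, Scalr.Method.ULTRA_QUALITY, 500);
    }
}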
PHP code:
<?php
// The file
$filename = 'photos/thePhoto.jpg';

// Set a maximum height and width
$width = 250;
$height = 250;

// Content type
header('Content-Type: image/jpeg');

// Get new dimensions
list($width_orig, $height_orig) = getimagesize($filename);
$ratio_orig = $width_orig / $height_orig;

if ($width / $height > $ratio_orig) {
    $width = $height * $ratio_orig;
} else {
    $height = $width / $ratio_orig;
}

// Resample
$image_p = imagecreatetruecolor($width, $height);
$image = imagecreatefromjpeg($filename);
imagecopyresampled($image_p, $image, 0, 0, 0, 0, $width, $height, $width_orig, $height_orig);

// Output
imagejpeg($image_p, null, 100);
?>
Just to be clear: in the Java code I keep the images in a folder, while in the PHP code I provide the name of the file directly. Of course, I compared exactly the same image, and both pieces of code run without any kind of problem.
Comparison of the results (PHP versus Java, in this order): http://www.facebook.com/media/set/?set=a.122440714606196.1073741825.100005208033163&type=1&l=55d93c4969

Related

Reading Binary Picture Data from exiftool?

I'm working on .opus music library software which converts audio/video files to .opus files and tags them with metadata automatically.
Previous versions of the program apparently saved the album art as binary data, as revealed by exiftool.
The thing is, when I run the command to output the data as binary using the -b option, the entire thing seems to be binary. I'm not sure how to get the program to parse it. I was kind of expecting an entry like Picture : 11010010101101101011....
The output looks similar to this though:
How can I parse the picture data so I can reconstruct the image for newer versions of the program? (I'm using Java8_171 on Kubuntu 18.04)
It looks like you're trying to open the raw bytes in a text editor, which will of course give you gobbledygook, since those raw bytes do not represent characters that any text editor can display. I can see from your exiftool output that you know the length of the image in bytes. Provided you also know the byte position where the image starts in the file, this makes your task relatively easy with a little bit of Java code. If you can get the starting position of the image inside your file, you should be able to do something like:
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.*;

public class SaveImage {
    public static void main(String[] args) throws IOException {
        byte[] imageBytes;
        try (RandomAccessFile binaryReader =
                     new RandomAccessFile("your-file.xxx", "r")) {
            int dataLength = 0;  // Assign this the byte length shown in your
                                 // post instead of zero
            long startPos = 0;   // I assume you can find this somehow.
                                 // If it's not at the beginning,
                                 // change it accordingly.
            imageBytes = new byte[dataLength];
            binaryReader.seek(startPos);        // jump to where the picture data starts
            binaryReader.readFully(imageBytes); // read exactly dataLength bytes
        }
        try (InputStream in = new ByteArrayInputStream(imageBytes)) {
            BufferedImage bImageFromConvert = ImageIO.read(in);
            ImageIO.write(bImageFromConvert,
                    "jpg", // or whatever file format is appropriate
                    new File("/path/to/your/file.jpg"));
        }
    }
}
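As a side note (a suggestion, not something from the code above): since exiftool already knows the tag, it may be even simpler to let it dump the raw bytes straight into a file and skip the offset bookkeeping entirely, e.g. exiftool -b -Picture your-file.opus > cover.jpg, assuming Picture is the tag name shown in your listing.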

Is there any way to make image compression and saving faster on Android?

The situation
I need to show a 200-350 frame animation in my application. The images have a resolution of roughly 500x300. If the user wants to share the animation, I have to convert it to video. For the conversion I am using an ffmpeg command:
ffmpeg -y -r 1 -i /sdcard/videokit/pic00%d.jpg -i /sdcard/videokit/in.mp3 -strict experimental -ar 44100 -ac 2 -ab 256k -b 2097152 -ar 22050 -vcodec mpeg4 -b 2097152 -s 320x240 /sdcard/videokit/out.mp4
To convert images to video, ffmpeg wants actual files, not Bitmap or byte[] objects.
Problem
Compressing bitmaps to image files takes too much time: converting 210 images takes about 1 minute on an average device (HTC One M7). Converting the image files to mp4 takes about 15 seconds on the same device. All together the user has to wait about 1.5 minutes.
What I have tried
I changed the compression format from PNG to JPEG (the 1.5-minute result is achieved with JPEG compression at quality=80; with PNG it takes about 2-2.5 minutes) - success.
I tried to find out how to pass a byte[] or Bitmap to ffmpeg - no success.
QUESTION
Is there any way (a library, even a native one) to make the saving process faster?
Is there any way to pass byte[] or Bitmap objects (I mean a PNG file decompressed into an Android Bitmap object) to the ffmpeg library's video-creation method?
Is there any other working library which will create an mp4 (or any format supported by the main social networks) from byte[] or Bitmap objects in about 30 seconds (for 200 frames)?
You can convert a Bitmap (or byte[]) to YUV format quickly using RenderScript (see https://stackoverflow.com/a/39877029/192373). You can pass these YUV frames to the ffmpeg library (as halfelf suggests), or use the built-in native MediaCodec, which uses dedicated hardware on most devices (but its compression options are less flexible than all-software ffmpeg).
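Just to sketch what the MediaCodec route mentioned above could look like (an illustrative configuration only, with arbitrary bitrate and frame-rate values; feeding YUV input buffers and draining encoded output into the muxer is omitted):
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.io.IOException;

public class HardwareEncoderSketch {
    // Configure a hardware H.264 encoder that accepts YUV420 input frames.
    public static MediaCodec createEncoder(int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 25);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
        return encoder;
    }

    // The MP4 container; the video track is added later from the encoder's
    // actual output format, once the first encoded buffer is available.
    public static MediaMuxer createMuxer(String outputPath) throws IOException {
        return new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    }
}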
There are two steps that slow us down: compressing the images to PNG/JPG and writing them to disk. Both can be skipped if we code directly against the ffmpeg libs instead of calling the ffmpeg command. (There are other improvements too, such as GPU encoding and multithreading, but they are much more complicated.)
Some approaches to coding this:
Use only the C/C++ NDK for the Android side. FFmpeg will happily work there, but I guess that's not an option here.
Build it from scratch with Java JNI. I don't have much experience here; I only know this can link Java to C/C++ libs.
Use a Java wrapper. Luckily I found javacpp-presets. (There are others too, but this one is good enough and up to date.)
This library includes a good example ported from dranger's famous ffmpeg tutorial, though it is a demuxing one.
We can try to write a muxing one, following ffmpeg's muxing.c example.
import java.io.*;
import org.bytedeco.javacpp.*;
import static org.bytedeco.javacpp.avcodec.*;
import static org.bytedeco.javacpp.avformat.*;
import static org.bytedeco.javacpp.avutil.*;
import static org.bytedeco.javacpp.swscale.*;
public class Muxer {
public class OutputStream {
public AVStream Stream;
public AVCodecContext Ctx;
public AVFrame Frame;
public SwsContext SwsCtx;
public void setStream(AVStream s) {
this.Stream = s;
}
public AVStream getStream() {
return this.Stream;
}
public void setCodecCtx(AVCodecContext c) {
this.Ctx = c;
}
public AVCodecContext getCodecCtx() {
return this.Ctx;
}
public void setFrame(AVFrame f) {
this.Frame = f;
}
public AVFrame getFrame() {
return this.Frame;
}
public OutputStream() {
Stream = null;
Ctx = null;
Frame = null;
SwsCtx = null;
}
}
public static void main(String[] args) throws IOException {
Muxer t = new Muxer();
OutputStream VideoSt = t.new OutputStream();
AVOutputFormat Fmt = null;
AVFormatContext FmtCtx = new AVFormatContext(null);
AVCodec VideoCodec = null;
AVDictionary Opt = null;
SwsContext SwsCtx = null;
AVPacket Pkt = new AVPacket();
int GotOutput;
int InLineSize[] = new int[1];
String FilePath = "/path/xxx.mp4";
avformat_alloc_output_context2(FmtCtx, null, null, FilePath);
Fmt = FmtCtx.oformat();
AVCodec codec = avcodec_find_encoder_by_name("libx264");
av_format_set_video_codec(FmtCtx, codec);
VideoCodec = avcodec_find_encoder(Fmt.video_codec());
VideoSt.setStream(avformat_new_stream(FmtCtx, null));
AVStream stream = VideoSt.getStream();
VideoSt.getStream().id(FmtCtx.nb_streams() - 1);
VideoSt.setCodecCtx(avcodec_alloc_context3(VideoCodec));
VideoSt.getCodecCtx().codec_id(Fmt.video_codec());
VideoSt.getCodecCtx().bit_rate(5120000);
VideoSt.getCodecCtx().width(1920);
VideoSt.getCodecCtx().height(1080);
AVRational fps = new AVRational();
fps.den(25); fps.num(1);
VideoSt.getStream().time_base(fps);
VideoSt.getCodecCtx().time_base(fps);
VideoSt.getCodecCtx().gop_size(10);
VideoSt.getCodecCtx().max_b_frames(1); // allow B-frames between I/P frames
VideoSt.getCodecCtx().pix_fmt(AV_PIX_FMT_YUV420P);
if ((FmtCtx.oformat().flags() & AVFMT_GLOBALHEADER) != 0)
VideoSt.getCodecCtx().flags(VideoSt.getCodecCtx().flags() | AV_CODEC_FLAG_GLOBAL_HEADER);
avcodec_open2(VideoSt.getCodecCtx(), VideoCodec, Opt);
VideoSt.setFrame(av_frame_alloc());
VideoSt.getFrame().format(VideoSt.getCodecCtx().pix_fmt());
VideoSt.getFrame().width(1920);
VideoSt.getFrame().height(1080);
av_frame_get_buffer(VideoSt.getFrame(), 32);
// should be at least long or even BigInteger,
// since it is an unsigned long in C
long nextpts = 0;
av_dump_format(FmtCtx, 0, FilePath, 1);
avio_open(FmtCtx.pb(), FilePath, AVIO_FLAG_WRITE);
avformat_write_header(FmtCtx, Opt);
int[] got_output = { 0 };
while (still_has_input) {
// convert or directly copy your Bytes[] into VideoSt.Frame here
// AVFrame structure has two important data fields:
// AVFrame.data (uint8_t*[]) and AVFrame.linesize (int[])
// data includes pixel values in some formats and linesize is size of each picture line.
// For example, for packed RGB there is a single plane: data[0] holds the interleaved pixel bytes and linesize[0] is image_width * 3.
// But I guess we'll need swscale() to convert pixel format here. From RGB to yuv420p (or other yuv family formats).
Pkt = new AVPacket();
av_init_packet(Pkt);
VideoSt.getFrame().pts(nextpts++);
avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, VideoSt.getFrame(), got_output);
av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
Pkt.stream_index(VideoSt.getStream().index());
av_interleaved_write_frame(FmtCtx, Pkt);
av_packet_unref(Pkt);
}
// get delayed frames
for (got_output[0] = 1; got_output[0] != 0;) {
Pkt = new AVPacket();
av_init_packet(Pkt);
avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, null, got_output);
if (got_output[0] > 0) {
av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
Pkt.stream_index(VideoSt.getStream().index());
av_interleaved_write_frame(FmtCtx, Pkt);
}
av_packet_unref(Pkt);
}
// free c structs
avcodec_free_context(VideoSt.getCodecCtx());
av_frame_free(VideoSt.getFrame());
avio_closep(FmtCtx.pb());
avformat_free_context(FmtCtx);
}
}
For porting C code, several kinds of changes are normally needed:
Most of the work is replacing every C struct member access (. and ->) with a Java getter/setter.
There are also many C address-of operators (&); just delete them.
Change the C NULL macro and C++ nullptr to the Java null object.
C code often tests an int directly as a bool in if/for/while; in Java you have to compare it with 0.
There may be other API changes, but as long as you refer to the javacpp-presets docs, it'll be OK.
Note that I omitted all error handling code here. It may be needed in real development/production.
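For instance, restating a few lines that already appear in the listing above next to their muxing.c-style counterparts (illustrative only; the C lines are paraphrased, not quoted from ffmpeg):
// C:    c->bit_rate = 5120000;                   (struct member assignment)
// Java: VideoSt.getCodecCtx().bit_rate(5120000);
// C:    avcodec_open2(c, codec, &opt);           (address-of operator dropped)
// Java: avcodec_open2(VideoSt.getCodecCtx(), VideoCodec, Opt);
// C:    if (got_output) { ... }                  (int used as bool)
// Java: if (got_output[0] > 0) { ... }           (compare with 0 explicitly)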
Really, I don't want to make publicity, but using pkzip and its SDK may be a good solution. Pkzip compresses files to 95%, as they say.
The Smartcrypt SDK is available in all major programming languages, including C++, Java, and C#, and can be used to encrypt both structured and unstructured data. Changes to existing applications typically consist of two or three lines of code.

OpenCV in Java for Image Filtering

I have Java code from Tutorials Point. This code is for a Robinson filter.
package improctry2;

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;

public class ImProcTry2 {
    public static void main(String[] args) {
        try {
            int kernelSize = 9;
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            Mat source = Highgui.imread("grayscale.jpg",
                    Highgui.CV_LOAD_IMAGE_GRAYSCALE);
            Mat destination = new Mat(source.rows(), source.cols(), source.type());
            Mat kernel = new Mat(kernelSize, kernelSize, CvType.CV_32F) {
                {
                    put(0, 0, -1);
                    put(0, 1, 0);
                    put(0, 2, 1);
                    put(1, 0, -2);
                    put(1, 1, 0);
                    put(1, 2, 2);
                    put(2, 0, -1);
                    put(2, 1, 0);
                    put(2, 2, 1);
                }
            };
            Imgproc.filter2D(source, destination, -1, kernel);
            Highgui.imwrite("output.jpg", destination);
        } catch (Exception e) {
            System.out.println("Error: " + e.getMessage());
        }
    }
}
This is my image input, and this is the output (I can't post the pictures because I'm new on Stack Overflow).
As you can see, the image turns completely black and nothing appears. I'm using NetBeans IDE 8.0 and I have already added the OpenCV library in NetBeans. I have also run other OpenCV Java code and it works very well. I also ran this code in Eclipse, but the result is the same.
Can anybody help me?
Thank you
You create a 9x9 kernel matrix, but then fill only a 3x3 submatrix of it, leaving the other elements uninitialized. To fix it, just change:
int kernelSize = 9;
to:
int kernelSize = 3;
Your code actually works in the newest OpenCV (3.0 beta), but those uninitialized elements break it in older versions (I checked 2.4.10). To print the elements of a matrix, use:
System.out.println(kernel.dump());
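(Not part of the original answer, just an aside: for a CV_32F matrix the Java binding also lets you fill the whole 3x3 kernel with a single put() call instead of nine separate ones, for example:)
Mat kernel = new Mat(3, 3, CvType.CV_32F);
kernel.put(0, 0, new float[] {
    -1, 0, 1,
    -2, 0, 2,
    -1, 0, 1
}); // fills the matrix row by row, starting at element (0, 0)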
PS.
Welcome to stackoverflow. :)

Image resizing Java

I have a weird problem with resizing images and can't figure out what I'm doing wrong. I've read lots of posts which basically have the same code as I do:
(I use the Java library Scalr.)
File image = new File("myimage.png");
File smallImage = new File("myimage_s");
try {
    BufferedImage bufimage = ImageIO.read(image);
    BufferedImage bISmallImage = Scalr.resize(bufimage, 30); // after this line my dimensions in bISmallImage are correct!
    ImageIO.write(bISmallImage, "png", smallImage); // but my smallImage has the same dimension as the original foto
} catch (Exception e) {}
Can someone tell me what I am doing wrong?
I do not see anything wrong with your code.
I pulled it into a quick test project in Eclipse targeting Java SE 7 and using imgscalr 4.2 on Windows 7 Pro 64-bit:
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.imgscalr.Scalr;

public class ScalrTest {
    public static void main(String[] args) {
        File image = new File("myimage.png");
        File smallImage = new File("myimage_s.png"); // FORNOW: added the file extension just to check the result a bit more easily

        // FORNOW: added print statements just to be doubly sure where we're reading from and writing to
        System.out.println(image.getAbsolutePath());
        System.out.println(smallImage.getAbsolutePath());

        try {
            BufferedImage bufimage = ImageIO.read(image);
            BufferedImage bISmallImage = Scalr.resize(bufimage, 30); // after this line my dimensions in bISmallImage are correct!
            ImageIO.write(bISmallImage, "png", smallImage); // but my smallImage has the same dimension as the original foto
        } catch (Exception e) {
            System.out.println(e.getMessage()); // FORNOW: added just to be sure
        }
    }
}
With the following myimage.png...
..., it produced the following myimage_s.png:
Maybe there is an environmental issue that's hamstringing your code, but the possibilities that come to mind would all produce a clear error; note that the empty catch block in your version would silently swallow any such exception.

I can't get any output from the sample code for face detection and recognition using JavaCV on Eclipse (Juno)

I was practicing with some face recognition and detection code using JavaCV on Eclipse Juno. The thing is, I was trying to run the sample code below, but I can't get the expected result or output. The sample code is as follows:
import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.*;
import com.googlecode.javacv.cpp.*;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_calib3d.*;
import static com.googlecode.javacv.cpp.opencv_objdetect.*;
public class Demo {
public static void main(String[] args) throws Exception {
String classifierName = null;
if (args.length > 0) {
classifierName = args[0];
} else {
System.err.println("C://opencv/data/haarcascades\"haarcascade_frontalface_alt.xml\".");
System.exit(1);
}
// Preload the opencv_objdetect module to work around a known bug.
Loader.load(opencv_objdetect.class);
// We can "cast" Pointer objects by instantiating a new object of the desired class.
CvHaarClassifierCascade classifier = new CvHaarClassifierCascade(cvLoad(classifierName));
if (classifier.isNull()) {
System.err.println("Error loading classifier file \"" + classifierName + "\".");
System.exit(1);
}
// CanvasFrame is a JFrame containing a Canvas component, which is hardware accelerated.
// It can also switch into full-screen mode when called with a screenNumber.
CanvasFrame frame = new CanvasFrame("Some Title");
// OpenCVFrameGrabber uses opencv_highgui, but other more versatile FrameGrabbers
// include DC1394FrameGrabber, FlyCaptureFrameGrabber, OpenKinectFrameGrabber,
// PS3EyeFrameGrabber, VideoInputFrameGrabber, and FFmpegFrameGrabber.
FrameGrabber grabber = new OpenCVFrameGrabber(0);
grabber.start();
// FAQ about IplImage:
// - For custom raw processing of data, getByteBuffer() returns an NIO direct
// buffer wrapped around the memory pointed by imageData.
// - To get a BufferedImage from an IplImage, you may call getBufferedImage().
// - The createFrom() factory method can construct an IplImage from a BufferedImage.
// - There are also a few copy*() methods for BufferedImage<->IplImage data transfers.
IplImage grabbedImage = grabber.grab();
int width = grabbedImage.width();
int height = grabbedImage.height();
IplImage grayImage = IplImage.create(width, height, IPL_DEPTH_8U, 1);
IplImage rotatedImage = grabbedImage.clone();
// Let's create some random 3D rotation...
CvMat randomR = CvMat.create(3, 3), randomAxis = CvMat.create(3, 1);
// We can easily and efficiently access the elements of CvMat objects
// with the set of get() and put() methods.
randomAxis.put((Math.random()-0.5)/4, (Math.random()-0.5)/4, (Math.random()-0.5)/4);
cvRodrigues2(randomAxis, randomR, null);
double f = (width + height)/2.0; randomR.put(0, 2, randomR.get(0, 2)*f);
randomR.put(1, 2, randomR.get(1, 2)*f);
randomR.put(2, 0, randomR.get(2, 0)/f); randomR.put(2, 1, randomR.get(2, 1)/f);
System.out.println(randomR);
// Objects allocated with a create*() or clone() factory method are automatically released
// by the garbage collector, but may still be explicitly released by calling release().
// You shall NOT call cvReleaseImage(), cvReleaseMemStorage(), etc.
//on objects allocated this way.
CvMemStorage storage = CvMemStorage.create();
// We can allocate native arrays using constructors taking an integer as argument.
CvPoint hatPoints = new CvPoint(3);
// Again, FFmpegFrameRecorder also exists as a more versatile alternative.
FrameRecorder recorder = new OpenCVFrameRecorder("output.avi", width, height);
recorder.start();
while (frame.isVisible() && (grabbedImage = grabber.grab()) != null) {
cvClearMemStorage(storage);
// Let's try to detect some faces! but we need a grayscale image...
cvCvtColor(grabbedImage, grayImage, CV_BGR2GRAY);
CvSeq faces = cvHaarDetectObjects(grayImage, classifier, storage,
1.1, 3, CV_HAAR_DO_CANNY_PRUNING);
int total = faces.total();
for (int i = 0; i < total; i++) {
CvRect r = new CvRect(cvGetSeqElem(faces, i));
int x = r.x(), y = r.y(), w = r.width(), h = r.height();
cvRectangle(grabbedImage, cvPoint(x, y), cvPoint(x+w, y+h), CvScalar.RED, 1, CV_AA, 0);
// To access the elements of a native array, use the position() method.
hatPoints.position(0).x(x-w/10) .y(y-h/10);
hatPoints.position(1).x(x+w*11/10).y(y-h/10);
hatPoints.position(2).x(x+w/2) .y(y-h/2);
cvFillConvexPoly(grabbedImage, hatPoints.position(0), 3, CvScalar.GREEN, CV_AA, 0);
}
// Let's find some contours! but first some thresholding...
cvThreshold(grayImage, grayImage, 64, 255, CV_THRESH_BINARY);
// To check if an output argument is null we may call either isNull() or equals(null).
CvSeq contour = new CvSeq(null);
cvFindContours(grayImage, storage, contour, Loader.sizeof(CvContour.class),
CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
while (contour != null && !contour.isNull()) {
if (contour.elem_size() > 0) {
CvSeq points = cvApproxPoly(contour, Loader.sizeof(CvContour.class),
storage, CV_POLY_APPROX_DP, cvContourPerimeter(contour)*0.02, 0);
cvDrawContours(grabbedImage, points, CvScalar.BLUE, CvScalar.BLUE, -1, 1, CV_AA);
}
contour = contour.h_next();
}
cvWarpPerspective(grabbedImage, rotatedImage, randomR);
frame.showImage(rotatedImage);
recorder.record(rotatedImage);
}
recorder.stop();
grabber.stop();
frame.dispose();
}
}
The output I am getting is a single line printed in red, like this:
C://opencv/data/haarcascades"haarcascade_frontalface_alt.xml".
Can anybody show me what I missed?
I am new to image processing, so could anyone also point me to good tutorials and sample source code that would teach me how to master the built-in functions in JavaCV and what they do? I am working on my final-year project and really need your help on this one.
With lots of respect
Sisay
haarcascade_frontalface_alt.xml is a trained classifier for detecting frontal faces. It is usually present in the opencv_installation_folder/opencv/data/haarcascades folder. You can give the direct path to your classifier instead of taking it from the command line, e.g.:
classifierName = opencv_installation_folder/opencv/data/haarcascades/haarcascade_frontalface_alt.xml
That demo expects you to give it the cascade file as an argument; it just stops if it does not get one.
Maybe you want to change the beginning like this:
public class Demo {
    public static void main(String[] args) throws Exception {
        String classifierName = "C:/opencv/data/haarcascades/haarcascade_frontalface_alt.xml";
        if (args.length > 0) {
            classifierName = args[0];
        }
That way it takes an argument from the command line if present; otherwise it uses the default value.
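For reference, assuming the cascade file actually exists at that default path and the JavaCV jars are on the classpath, the demo can then be launched either with no arguments or with an explicit path, e.g. java Demo C:/opencv/data/haarcascades/haarcascade_frontalface_alt.xml.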
