OpenCV taking weird pictures - Java

I have this headache of a problem and I can't seem to fix it. What I'm doing: I have a machine hooked up to a computer, and whenever a condition is true it takes a picture. The problem is that the captured image sometimes comes out garbled (see below). I've tried inverting the picture, but not everything is mirrored. I've looked everywhere and nothing helped. I've tried many different sample codes; they either don't work or show the same issue.
Normal picture:
http://imgur.com/ve4bp9M
Weird Picture:
http://imgur.com/5Z46oPz
public Mat getCapture() {
    if (camera == null || !camera.isOpened()) {
        camera = new VideoCapture(0);
        setCameraValues();
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    Mat m = null;
    if (!camera.isOpened()) {
        System.out.println("Error");
    } else {
        m = new Mat();
        while (true) {
            camera.read(m);
            if (!m.empty()) {
                // Mat is not empty
                break;
            }
        }
    }
    camera.release();
    return m;
}
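One frequent cause of garbled captures from LifeCam-class USB webcams (not confirmed in this thread, just a common finding) is keeping a frame that was delivered before the sensor's auto-exposure and the driver's buffers have settled. The usual workaround is to grab and discard the first few frames after opening the device. A minimal, self-contained sketch of the pattern; a stub stands in for OpenCV's VideoCapture, but with the real API you would call camera.read(m) in the same loop:

```java
import java.util.Arrays;

public class WarmupCapture {
    /** Stub standing in for VideoCapture: the first reads return garbage. */
    interface FrameSource {
        byte[] read();
    }

    /** Discard warmupFrames reads, then return the next frame. */
    static byte[] captureAfterWarmup(FrameSource camera, int warmupFrames) {
        for (int i = 0; i < warmupFrames; i++) {
            camera.read(); // throw away frames taken while exposure settles
        }
        return camera.read();
    }

    public static void main(String[] args) {
        // Simulated camera: the first 5 frames are corrupted, later ones clean.
        FrameSource fake = new FrameSource() {
            int n = 0;
            public byte[] read() {
                byte fill = (n++ < 5) ? (byte) 0x00 : (byte) 0x7F;
                byte[] frame = new byte[4];
                Arrays.fill(frame, fill);
                return frame;
            }
        };
        byte[] kept = captureAfterWarmup(fake, 5);
        System.out.println(kept[0] == 0x7F ? "clean frame" : "corrupted frame");
        // prints "clean frame"
    }
}
```

The warm-up count (5 here) is an assumption; in practice it is tuned per device.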
The method below sets the camera's settings: focus, zoom, brightness, etc.
public void setCameraValues()
{
    this.camera.set(28, ((Integer)this.values.get(0)).intValue());  // CAP_PROP_FOCUS
    this.camera.set(27, ((Integer)this.values.get(1)).intValue());  // CAP_PROP_ZOOM
    this.camera.set(10, ((Integer)this.values.get(2)).intValue());  // CAP_PROP_BRIGHTNESS
    this.camera.set(11, ((Integer)this.values.get(3)).intValue());  // CAP_PROP_CONTRAST
    this.camera.set(12, ((Integer)this.values.get(4)).intValue());  // CAP_PROP_SATURATION
    this.camera.set(15, ((Integer)this.values.get(5)).intValue());  // CAP_PROP_EXPOSURE
    this.camera.set(20, ((Integer)this.values.get(6)).intValue());  // CAP_PROP_SHARPNESS
    this.camera.set(33, ((Integer)this.values.get(7)).intValue());  // CAP_PROP_PAN
    this.camera.set(34, ((Integer)this.values.get(8)).intValue());  // CAP_PROP_TILT
    this.camera.set(3, ((Integer)this.values.get(9)).intValue());   // CAP_PROP_FRAME_WIDTH
    this.camera.set(4, ((Integer)this.values.get(10)).intValue());  // CAP_PROP_FRAME_HEIGHT
}
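For readability, the magic numbers above can be replaced by the names OpenCV gives them; the IDs match OpenCV's videoio property constants (with the OpenCV Java bindings on the classpath you would use Videoio.CAP_PROP_FOCUS etc. directly). A self-contained sketch of the mapping:

```java
public class CapProps {
    // Numeric property IDs as defined by OpenCV's videoio module
    public static final int CAP_PROP_FRAME_WIDTH  = 3;
    public static final int CAP_PROP_FRAME_HEIGHT = 4;
    public static final int CAP_PROP_BRIGHTNESS   = 10;
    public static final int CAP_PROP_CONTRAST     = 11;
    public static final int CAP_PROP_SATURATION   = 12;
    public static final int CAP_PROP_EXPOSURE     = 15;
    public static final int CAP_PROP_SHARPNESS    = 20;
    public static final int CAP_PROP_ZOOM         = 27;
    public static final int CAP_PROP_FOCUS        = 28;
    public static final int CAP_PROP_PAN          = 33;
    public static final int CAP_PROP_TILT         = 34;

    public static void main(String[] args) {
        // e.g. camera.set(CAP_PROP_FOCUS, value) instead of camera.set(28, value)
        System.out.println("focus id = " + CAP_PROP_FOCUS);
    }
}
```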
EDIT:
The webcam I am using is Microsoft LifeCam HD

Related

Android camera2 preview image disorder when saved using ImageReader

I am taking a series of pictures using the Android Camera2 API for real-time pose estimation and environment reconstruction (the SLAM problem). Currently I simply save all of these pictures to my SD card for offline processing.
I set up the processing pipeline according to Google's Camera2Basic, using a TextureView as well as an ImageReader, both set as target surfaces for a repeating preview request.
mButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        if (mIsShooting) {
            try {
                mCaptureSession.stopRepeating();
                mPreviewRequestBuilder.removeTarget(mImageReader.getSurface());
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(),
                        mCaptureCallback, mBackgroundHandler);
                mIsShooting = false;
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        } else {
            try {
                mCaptureSession.stopRepeating();
                mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(),
                        mCaptureCallback, mBackgroundHandler);
                mIsShooting = true;
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }
    }
});
The ImageReader is added/removed when pressing the button. The ImageReader's OnImageAvailableListener is implemented as follows:
private ImageReader.OnImageAvailableListener mOnImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image img = reader.acquireLatestImage();
        if (null == img) {
            return;
        }
        if (img.getTimestamp() <= mLatestFrameTime) {
            Log.i(Tag, "disorder detected!");
            return;
        }
        mLatestFrameTime = img.getTimestamp();
        ImageSaver saver = new ImageSaver(img, img.getTimestamp());
        saver.run();
    }
};
I use acquireLatestImage (with the buffer size set to 2) to discard old frames, and I have also checked the images' timestamps to make sure they are monotonically increasing.
The reader does receive images at an acceptable rate (about 25 fps). However, a closer look at the saved image sequence shows that they are not always saved in chronological order.
The following pictures come from a long sequence shot by the program (sorry for not being able to post pictures directly :( ):
Image 1:
Image 2:
Image 3:
Such disorder does not occur very often, but it can occur at any time and does not seem to be an initialization problem. I suspect it is related to the ImageReader's buffer size, since with a larger buffer fewer "flashbacks" occur. Does anyone have the same problem?
I finally found that the disorder disappears when setting the ImageReader's format to YUV_420_888 in its constructor. Originally I had set this field to JPEG.
Using the JPEG format incurs not only a large processing delay but also the disorder. I guess the conversion from image sensor data to the desired format uses other hardware, such as a DSP or GPU, which does not guarantee chronological order.
Are you using TEMPLATE_STILL_CAPTURE for the capture requests when you enable the ImageReader, or just TEMPLATE_PREVIEW? What devices are you seeing issues with?
If you're using STILL_CAPTURE, make sure you check if the device supports the ENABLE_ZSL flag, and set it to false. When it is set to true (generally the default on devices that support it, for the STILL_CAPTURE template), images may be returned out of order since there's a zero-shutter-lag queue in place within the camera device.
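If switching to YUV_420_888 or disabling ZSL is not an option, a small reordering buffer on the receiving side can also absorb occasional out-of-order delivery. A sketch in plain Java (timestamps stand in for Image objects; the buffer depth and flush policy are assumptions, not part of the answers above):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class FrameReorderer {
    private final PriorityQueue<Long> pending = new PriorityQueue<>();
    private final int depth;

    public FrameReorderer(int depth) { this.depth = depth; }

    /** Offer a frame timestamp; returns any frames now safe to emit, in order. */
    public List<Long> offer(long timestamp) {
        pending.add(timestamp);
        List<Long> ready = new ArrayList<>();
        // Hold back up to `depth` frames so a late frame can still slot in.
        while (pending.size() > depth) {
            ready.add(pending.poll());
        }
        return ready;
    }

    /** Drain whatever is left, in timestamp order. */
    public List<Long> flush() {
        List<Long> ready = new ArrayList<>();
        while (!pending.isEmpty()) ready.add(pending.poll());
        return ready;
    }

    public static void main(String[] args) {
        FrameReorderer r = new FrameReorderer(2);
        List<Long> out = new ArrayList<>();
        // Frames 4 and 3 arrive swapped, as in the question.
        for (long t : new long[]{1, 2, 4, 3, 5, 6}) out.addAll(r.offer(t));
        out.addAll(r.flush());
        System.out.println(out); // prints [1, 2, 3, 4, 5, 6]
    }
}
```

The trade-off is `depth` frames of extra latency; it cannot fix reordering deeper than the buffer.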

How to Detect Multiple Images with AR core

I'm trying out detecting multiple augmented images with ARCore, following
https://developers.google.com/ar/develop/java/augmented-images/guide
and other online tutorials. Currently I have the database set up and loaded with images. However,
Collection<AugmentedImage> augmentedImages = frame.getUpdatedTrackables(AugmentedImage.class);
did not seem to capture and match the feature points of my images in my database.
Can you advise me on what I need to do?
I have set up and loaded multiple images from the database. Previously, the app was able to detect a single image. However, after tweaking my code to detect multiple images, it no longer works properly.
I have tried researching and debugging, but I am still unable to solve it.
private void onUpdateFrame(FrameTime frameTime)
{
    Frame frame = arFragment.getArSceneView().getArFrame();
    Collection<AugmentedImage> augmentedImages = frame.getUpdatedTrackables(AugmentedImage.class);
    for (AugmentedImage augmentedImage : augmentedImages)
    {
        int i = augmentedImages.size();
        Log.d("NoImage", "" + i);
        if (augmentedImage.getTrackingState() == TrackingState.TRACKING)
        {
            if (augmentedImage.getName().contains("img1") && !modelAdded)
            {
                renderObject(arFragment, augmentedImage.createAnchor(augmentedImage.getCenterPose()), R.raw.car);
                modelAdded = true;
            }
            else if (augmentedImage.getName().contains("img2") && !modelAdded)
            {
                renderObject(arFragment, augmentedImage.createAnchor(augmentedImage.getCenterPose()), R.raw.car);
                modelAdded = true;
            }
            else if (augmentedImage.getName().contains("img3") && !modelAdded)
            {
                renderObject(arFragment, augmentedImage.createAnchor(augmentedImage.getCenterPose()), R.raw.car);
                modelAdded = true;
            }
        }
    }
}
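A likely culprit (hedged: the thread does not confirm it) is the single modelAdded flag: once the first image is rendered, the flag is true and the img2/img3 branches can never fire. Tracking rendered images per name avoids that. The selection logic, sketched in plain Java so it runs without ARCore (AugmentedImage and renderObject are replaced by stand-ins):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MultiImageTracker {
    private final Set<String> rendered = new HashSet<>();

    /** Returns true if a model should be rendered for this image now. */
    public boolean shouldRender(String imageName, boolean isTracking) {
        if (!isTracking || rendered.contains(imageName)) {
            return false;
        }
        rendered.add(imageName); // render each detected image exactly once
        return true;
    }

    public static void main(String[] args) {
        MultiImageTracker tracker = new MultiImageTracker();
        List<String> renderedNow = new ArrayList<>();
        // Simulated frames: img1 tracked twice, then img2 and img3 appear.
        String[][] frames = {{"img1"}, {"img1", "img2"}, {"img2", "img3"}};
        for (String[] frame : frames) {
            for (String name : frame) {
                if (tracker.shouldRender(name, true)) {
                    renderedNow.add(name); // would call renderObject(...) here
                }
            }
        }
        System.out.println(renderedNow); // prints [img1, img2, img3]
    }
}
```

In the real onUpdateFrame, the flag assignments would be replaced by a `Set<String>` membership check keyed on augmentedImage.getName().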

Problem taking a screenshot with windows with transparency from Java on Linux

I am facing a problem taking screenshots from Java on Linux when transparent windows are involved.
The problem is that the screenshot taken with Robot treats transparent windows as if they were opaque.
It is very similar to the problem stated at: Taking a screenshot in Java on Linux?
I wonder if there is any satisfactory solution to avoid this problem.
This is the code I use to take the screenshots:
protected BufferedImage getScreenShot()
{
    BufferedImage screenShotImage = null;
    try
    {
        screenShotImage = new Robot().createScreenCapture(
                new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
    }
    catch (Exception ex)
    {
        ex.printStackTrace();
    }
    return screenShotImage;
}
The code that triggers the screenshot is the following (in the derived JFrame class):
public void M_repaint()
{
    long timeStamp = System.currentTimeMillis();
    if ((timeStamp - _lastScreenShotTimeStamp) > 4000)
    {
        updateAlpha(0.0f);
        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run()
            {
                BufferedImage image = getScreenShot();
                try
                {
                    ImageIO.write(image, "png", new File("robotScreenshotBefore.png"));
                }
                catch (Exception ex)
                {
                    ex.printStackTrace();
                }
                try
                {
                    String[] cmd = { "./lens.screenshot.sh" };
                    Process script_exec = Runtime.getRuntime().exec(cmd);
                    script_exec.waitFor();
                }
                catch (Exception ex)
                {
                    ex.printStackTrace();
                }
                image = getScreenShot();
                try
                {
                    ImageIO.write(image, "png", new File("robotScreenshotAfter.png"));
                }
                catch (Exception ex)
                {
                    ex.printStackTrace();
                }
                _lensJPanel.setScreenShotImage(image);
                updateAlpha(1.0f);
            }
        });
        _lastScreenShotTimeStamp = timeStamp;
    }
    repaint();
}
The script ./lens.screenshot.sh has the following contents:
#!/bin/bash
rm gnome-screenshot.png
gnome-screenshot --file="gnome-screenshot.png"
The application is a magnifying lens.
Every time the window (the lens) changes its position on the screen, the function M_repaint() is called.
Inside that function there is a kind of timer: once 4 seconds have elapsed since the last screenshot, a new screenshot is taken in case the window's appearance has changed.
Before the screenshot is taken, the JFrame is made invisible so that it does not appear in the screenshot itself.
But once the window has been painted on the screen, it still appears in the screenshot even if it was made invisible beforehand.
I attach one of the sets of screenshots taken by the application with the previous code (robotScreenshotBefore.png, gnome-screenshot.png and robotScreenshotAfter.png).
I have updated the information in the question, and I also attach the screenshots taken from a Linux machine.
As we can see, the first screenshot (the one taken in a normal execution) shows the window just made transparent.
The following two screenshots show the window correctly made invisible (the first of them is taken directly from Linux, and the last one is taken with Robot after invoking the GNOME screenshot tool).
The problem is that the application cannot wait that long before taking the screenshot, as the waiting time shows up as an annoying flicker.
robotScreenshotBefore.png
gnome-screenshot.png
robotScreenshotAfter.png
Finally, the solution found was to wait a few milliseconds for the transparency to take effect before taking the screenshot, in case the OS was not Windows.
So, the final function used to paint is the following:
public void M_repaint()
{
    long timeStamp = System.currentTimeMillis();
    if ((timeStamp - _lastScreenShotTimeStamp) > 4000)
    {
        updateAlpha(0.0f);
        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run()
            {
                if (!OSValidator.isWindows())
                {
                    try
                    {
                        // On Mac or Linux (and possibly others), give the
                        // compositor enough time for transparency to take effect.
                        Thread.sleep(26);
                    }
                    catch (Exception ex)
                    {
                        ex.printStackTrace();
                    }
                }
                BufferedImage image = getScreenShot();
                _lensJPanel.setScreenShotImage(image);
                updateAlpha(1.0f);
            }
        });
        _lastScreenShotTimeStamp = timeStamp;
    }
    repaint();
}
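A fixed 26 ms sleep is fragile, since compositors take varying amounts of time to apply transparency. A more robust variant (an assumption on my part, not from the thread) is to poll for the wanted condition with a timeout, sleeping briefly between attempts; in the real code the condition would be something like "a probe pixel no longer shows the window". A self-contained sketch:

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    /** Poll `condition` every `intervalMs` until it is true or `timeoutMs` elapses. */
    public static boolean waitUntil(BooleanSupplier condition,
                                    long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true; // e.g. the window is really gone from the screen
            }
            Thread.sleep(intervalMs);
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulated compositor: "transparency applied" after 50 ms.
        boolean ok = waitUntil(
                () -> System.currentTimeMillis() - start >= 50, 500, 5);
        System.out.println(ok ? "ready" : "timed out");
    }
}
```

This keeps the wait as short as the compositor allows instead of hard-coding a worst-case delay.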

Sikuli - Click on element does not work with Java API but it works from IDE - WPF app

I am evaluating tools for testing a WPF-based app. I am currently trying Sikuli with the Java API. When I try to click on an object from Java code, the mouse cursor moves to the object and the object is highlighted, but the click does not take effect: the expected menu does not open. The click() method returns status 1, though.
If I do the click from the Sikuli IDE, it works fine.
I tried the 1.0.1 version and also the nightly build. Here's my code:
@Test
public void testLogin() {
    Screen s = new Screen();
    try {
        s.wait(Constants.overflowMenu);
        System.out.println(s.click(Constants.overflowMenu));
        s.wait(Constants.signInMenuOption, 5);
    } catch (FindFailed e) {
        Assert.fail(e.getMessage());
    }
}
What am I doing wrong?
Try this code; it worked for me. What it does: it checks for the image, clicks on it, and then checks for the image again on the screen; if it still exists, it clicks again.
Screen screen = new Screen();
Pattern pattern = null;
try
{
    pattern = new Pattern(imageLocation);
    screen.wait(pattern, 30);
    screen.click(pattern);
    System.out.println("First Attempt To Find Image.");
}
catch (FindFailed f)
{
    System.out.println("Exception In First Attempt: " + f.getMessage());
    System.out.println("FindFailed Exception Handled By Method: ClickObjectUsingSikuli. Please check image being used to identify the webelement. supplied image: " + imageLocation);
    Assert.fail("Image wasn't found. Please use correct image.");
}
Thread.sleep(1000);
// In case the image/object wasn't clicked in the first attempt and the cursor
// stays on the same screen, make a second attempt.
if (screen.exists(pattern) != null)
{
    try
    {
        screen.getLastMatch().click(pattern);
        System.out.println("Second Attempt To Find Image.");
        System.out.println("Object: " + imageLocation + " is clicked successfully.");
    }
    catch (FindFailed f)
    {
        System.out.println("Exception In Second Attempt: " + f.getMessage());
        System.out.println("FindFailed Exception Handled By Method: ClickObjectUsingSikuli. Please check image being used to identify the webelement. supplied image: " + imageLocation);
    }
}
In my case it turned out to be a problem with the fact that I have two monitors.

Compare image to actual screen

I'd like to make my Java program compare the actual screen with a picture (a screenshot).
I don't know if it's possible, but I have seen it in Jitbit (a macro recorder) and I would like to implement it myself. (Maybe with that example you'll understand what I mean.)
Thanks
----edit-----
In other words: is it possible to check whether an image is currently showing on the screen? To find and compare those pixels on the screen?
You may try aShot: documentation link
1) aShot can ignore areas you mark with a special color.
2) aShot can produce an image that shows the difference between the two images.
private void compareTwoImages(BufferedImage expectedImage, BufferedImage actualImage) {
    ImageDiffer imageDiffer = new ImageDiffer();
    ImageDiff diff = imageDiffer
            .withDiffMarkupPolicy(new PointsMarkupPolicy()
                    .withDiffColor(Color.YELLOW))
            .withIgnoredColor(Color.MAGENTA)
            .makeDiff(expectedImage, actualImage);
    // hasDiff() returns true if the images differ, false if they are the same.
    boolean areImagesDifferent = diff.hasDiff();
    if (areImagesDifferent) {
        // code in case of failure
    } else {
        // code in case of success
    }
}
To save the image with the differences:
private void saveImage(BufferedImage image, String imageName) {
    // Path where the image will be saved
    String outputFilePath = String.format("target/%s.png", imageName);
    File outputFile = new File(outputFilePath);
    try {
        ImageIO.write(image, "png", outputFile);
    } catch (IOException e) {
        // some code in case of failure
    }
}
You can do this in two steps:
Create a screenshot using awt.Robot
BufferedImage image = new Robot().createScreenCapture(
        new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
ImageIO.write(image, "png", new File("/screenshot.png"));
Compare the screenshots using something like this: How to check if two images are similar or not using openCV in java?
Have a look at the Sikuli project. Their automation engine is based on image comparison.
I guess internally they still use OpenCV to calculate image similarity, but there are plenty of OpenCV Java bindings like this one, which allow you to do the same from Java.
Project source code is located here: https://github.com/sikuli/sikuli
Ok then, so I found an answer after a few days.
This method takes the screenshot:
public static void takeScreenshot() {
    try {
        // The first two parameters are the initial X and Y coordinates;
        // the last two are the width and height of the captured region.
        BufferedImage image = new Robot().createScreenCapture(new Rectangle(490, 490, 30, 30));
        ImageIO.write(image, "png", new File("C:\\Example\\Folder\\capture.png"));
    } catch (IOException e) {
        e.printStackTrace();
    } catch (HeadlessException e) {
        e.printStackTrace();
    } catch (AWTException e) {
        e.printStackTrace();
    }
}
And this other one compares the images:
public static String compareImage() throws Exception {
    // savedImage is the image we want to look for in the new screenshot.
    // Both files must have the same width and height.
    String c1 = "savedImage";
    String c2 = "capture";
    BufferedInputStream in = new BufferedInputStream(new FileInputStream(c1 + ".png"));
    BufferedInputStream in1 = new BufferedInputStream(new FileInputStream(c2 + ".png"));
    int i, j;
    int k = 1;
    while (((i = in.read()) != -1) && ((j = in1.read()) != -1)) {
        if (i != j) {
            k = 0;
            break;
        }
    }
    in.close();
    in1.close();
    if (k == 1) {
        System.out.println("Ok...");
        return "Ok";
    } else {
        System.out.println("Fail ...");
        return "Fail";
    }
}
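A caveat about the comparison above: it compares the PNG files byte for byte, so two visually identical images saved by different encoders (different compression levels or metadata chunks) will be reported as different. Comparing decoded pixels is more robust. A sketch using only the JDK:

```java
import java.awt.image.BufferedImage;

public class PixelCompare {
    /** True if both images have the same size and identical ARGB pixels. */
    public static boolean sameImage(BufferedImage a, BufferedImage b) {
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return false;
        }
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    return false; // first differing pixel decides
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        BufferedImage img1 = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        BufferedImage img2 = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img1.setRGB(0, 0, 0xFF0000); // make them differ in one pixel
        System.out.println(sameImage(img1, img1)); // prints true
        System.out.println(sameImage(img1, img2)); // prints false
    }
}
```

To load the files from the question, you would read them with ImageIO.read(new File("savedImage.png")) and ImageIO.read(new File("capture.png")) before calling sameImage.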
