I know we can simulate the print screen with the following code:
robot.keyPress(KeyEvent.VK_PRINTSCREEN);
...but then how do I get a BufferedImage back from it?
I found a method called getClipboard() on Google, but NetBeans gives me an error on it (cannot find symbol).
I'm sorry to ask, but could someone show me working code that turns this key press into a BufferedImage I can then save?
This won't necessarily give you a BufferedImage, but it will be an Image. This utilizes Toolkit.getSystemClipboard.
final Clipboard clipboard = Toolkit.getDefaultToolkit().getSystemClipboard();
if (clipboard.isDataFlavorAvailable(DataFlavor.imageFlavor)) {
final Image screenshot = (Image) clipboard.getData(DataFlavor.imageFlavor);
...
}
If you really need a BufferedImage, try as follows...
final GraphicsConfiguration config
= GraphicsEnvironment.getLocalGraphicsEnvironment()
.getDefaultScreenDevice().getDefaultConfiguration();
final BufferedImage copy = config.createCompatibleImage(
screenshot.getWidth(null), screenshot.getHeight(null));
final Object monitor = new Object();
final ImageObserver observer = new ImageObserver() {
    public boolean imageUpdate(final Image img, final int flags,
            final int x, final int y, final int width, final int height) {
        if ((flags & ALLBITS) == ALLBITS) {
            synchronized (monitor) {
                monitor.notifyAll();
            }
            return false; // all bits are in; no more updates needed
        }
        return true; // keep listening for updates
    }
};
if (!copy.getGraphics().drawImage(screenshot, 0, 0, observer)) {
synchronized (monitor) {
try {
monitor.wait();
} catch (final InterruptedException ex) { }
}
}
Though, I'd really have to ask why you don't just use Robot.createScreenCapture.
final Robot robot = new Robot();
final GraphicsConfiguration config
= GraphicsEnvironment.getLocalGraphicsEnvironment()
.getDefaultScreenDevice().getDefaultConfiguration();
final BufferedImage screenshot = robot.createScreenCapture(config.getBounds());
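Since the goal was a BufferedImage you can save, writing the capture to disk is one more line with ImageIO (the file name here is just an example):
ImageIO.write(screenshot, "png", new File("screenshot.png"));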
I have a problem with detecting whether a barcode is inside a specified area. For testing purposes, the camera source preview and the surface view have the same size, 1440x1080, to prevent scaling between the camera and the view. I get positive checks even when I can see that the QR code isn't inside the box shown in the image. What's wrong?
False positive check
ScannerActivity
public class ScannerActivity extends AppCompatActivity {
private static final String TAG = "ScannerActivity";
private SurfaceView mSurfaceView; // Its size is forced to 1440x1080 in XML
private CameraSource mCameraSource;
private ScannerOverlay mScannerOverlay; // Its size is forced to 1440x1080 in XML
@Override
protected void onCreate(Bundle savedInstanceState) {
// .. create and init views
// ...
BarcodeDetector barcodeDetector = new BarcodeDetector.Builder(this)
.setBarcodeFormats(Barcode.ALL_FORMATS)
.build();
mCameraSource = new CameraSource.Builder(this, barcodeDetector)
.setRequestedPreviewSize(1440, 1080)
.setRequestedFps(20.0f)
.setFacing(CameraSource.CAMERA_FACING_BACK)
.setAutoFocusEnabled(true)
.build();
barcodeDetector.setProcessor(new Detector.Processor<Barcode>() {
@Override
public void release() {
}
@Override
public void receiveDetections(Detector.Detections<Barcode> detections) {
parseDetections(detections.getDetectedItems());
}
});
}
private void parseDetections(SparseArray<Barcode> barcodes) {
for (int i = 0; i < barcodes.size(); i++) {
Barcode barcode = barcodes.valueAt(i);
if (isInsideBox(barcode)) {
runOnUiThread(() -> {
Toast.makeText(this, "GOT DETECTION: " + barcode.displayValue, Toast.LENGTH_SHORT).show();
});
}
}
}
private boolean isInsideBox(Barcode barcode) {
Rect barcodeBoundingBox = barcode.getBoundingBox();
Rect scanBoundingBox = mScannerOverlay.getBox();
boolean checkResult = barcodeBoundingBox.left >= scanBoundingBox.left &&
barcodeBoundingBox.right <= scanBoundingBox.right &&
barcodeBoundingBox.top >= scanBoundingBox.top &&
barcodeBoundingBox.bottom <= scanBoundingBox.bottom;
Log.d(TAG, "isInsideBox: "+(checkResult ? "YES" : "NO"));
return checkResult;
}
}
The explanation for your issue is simple, but the solution is not trivial.
The coordinates of the box in your UI will generally not be the same as those of the imaginary box on each preview frame. You must transform the UI box's coordinates into the frame's coordinate space before comparing them against the barcode's bounding box.
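As a minimal sketch, assuming the preview frame and the overlay view differ only by a uniform scale (no letterboxing); the helper name here is illustrative:
private Rect viewBoxToFrameBox(Rect viewBox, int viewWidth, int viewHeight,
        int frameWidth, int frameHeight) {
    // Scale factors between the on-screen view and the camera frame.
    float scaleX = (float) frameWidth / viewWidth;
    float scaleY = (float) frameHeight / viewHeight;
    return new Rect(
            Math.round(viewBox.left * scaleX),
            Math.round(viewBox.top * scaleY),
            Math.round(viewBox.right * scaleX),
            Math.round(viewBox.bottom * scaleY));
}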
I open-sourced an example that implements the same use case you are trying to accomplish. In that example I took another approach: I cut the box out of each frame before feeding it to Google Vision. This is also more efficient, since Google Vision doesn't have to analyze the whole picture and waste tons of CPU...
I decided to crop the frame by wrapping the barcode detector. However, I don't know why, but the cropped frame is rotated by 90 degrees even though the smartphone is in upright orientation.
Box Detector class
public class BoxDetector extends Detector<Barcode> {
private Detector<Barcode> mDelegate;
private int mBoxWidth;
private int mBoxHeight;
// Debugging
private CroppedFrameListener mCroppedFrameListener;
public BoxDetector(Detector<Barcode> delegate, int boxWidth, int boxHeight) {
mDelegate = delegate;
mBoxWidth = boxWidth;
mBoxHeight = boxHeight;
}
public void setCroppedFrameListener(CroppedFrameListener croppedFrameListener) {
mCroppedFrameListener = croppedFrameListener;
}
@Override
public SparseArray<Barcode> detect(Frame frame) {
int frameWidth = frame.getMetadata().getWidth();
int frameHeight = frame.getMetadata().getHeight();
// I assume that box is centered.
int left = (frameWidth / 2) - (mBoxWidth / 2);
int top = (frameHeight / 2) - (mBoxHeight / 2);
int right = (frameWidth / 2) + (mBoxWidth / 2);
int bottom = (frameHeight / 2) + (mBoxHeight / 2);
YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(), ImageFormat.NV21, frameWidth, frameHeight, null);
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(left, top, right, bottom), 100, outputStream);
byte[] jpegArray = outputStream.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
Frame croppedFrame = new Frame.Builder()
.setBitmap(bitmap)
.setRotation(frame.getMetadata().getRotation())
.build();
if(mCroppedFrameListener != null) {
mCroppedFrameListener.onNewCroppedFrame(croppedFrame.getBitmap(), croppedFrame.getMetadata().getRotation());
}
return mDelegate.detect(croppedFrame);
}
public interface CroppedFrameListener {
void onNewCroppedFrame(Bitmap bitmap, int rotation);
}
}
Box Detector usage
BarcodeDetector barcodeDetector = new BarcodeDetector.Builder(this)
.setBarcodeFormats(Barcode.ALL_FORMATS)
.build();
BoxDetector boxDetector = new BoxDetector(
barcodeDetector,
mBoxSize.getWidth(),
mBoxSize.getHeight());
boxDetector.setCroppedFrameListener(new BoxDetector.CroppedFrameListener() {
@Override
public void onNewCroppedFrame(final Bitmap bitmap, int rotation) {
Log.d(TAG, "onNewCroppedFrame: new bitmap, rotation: "+rotation);
runOnUiThread(new Runnable() {
@Override
public void run() {
mPreview.setImageBitmap(bitmap);
}
});
}
});
Cropped frame is rotated
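A likely cause, though not confirmed in this thread: BitmapFactory.decodeByteArray() knows nothing about the Frame's rotation metadata, so the decoded bitmap stays in the sensor's landscape orientation. A minimal sketch of rotating the bitmap for display, assuming the metadata rotation maps to 0/90/180/270 degrees (rotationDegrees is illustrative):
Matrix matrix = new Matrix();
matrix.postRotate(rotationDegrees); // e.g. 90 for an upright portrait device
Bitmap upright = Bitmap.createBitmap(bitmap, 0, 0,
        bitmap.getWidth(), bitmap.getHeight(), matrix, true);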
I am attempting to capture a video recording through an external camera, a Logitech C922. Using Java, I can make this possible through the webcam-capture API.
<dependency>
<groupId>com.github.sarxos</groupId>
<artifactId>webcam-capture</artifactId>
<version>0.3.10</version>
</dependency>
<dependency>
<groupId>xuggle</groupId>
<artifactId>xuggle-xuggler</artifactId>
<version>5.4</version>
</dependency>
However, for the life of me, I cannot make it record at 60FPS. The video randomly stutters when stored, and is not smooth at all.
I can connect to the camera, using the following details.
final List<Webcam> webcams = Webcam.getWebcams();
for (final Webcam cam : webcams) {
if (cam.getName().contains("C922")) {
System.out.println("### Logitec C922 cam found");
webcam = cam;
break;
}
}
I set the size of the cam to the following:
final Dimension[] nonStandardResolutions = new Dimension[] { WebcamResolution.HD720.getSize(), };
webcam.setCustomViewSizes(nonStandardResolutions);
webcam.setViewSize(WebcamResolution.HD720.getSize());
webcam.open(true);
And then I capture the images:
while (continueRecording) {
// capture the webcam image
final BufferedImage webcamImage = ConverterFactory.convertToType(webcam.getImage(),
BufferedImage.TYPE_3BYTE_BGR);
final Date timeOfCapture = new Date();
// convert the image and store
final IConverter converter = ConverterFactory.createConverter(webcamImage, IPixelFormat.Type.YUV420P);
final IVideoPicture frame = converter.toPicture(webcamImage,
(System.currentTimeMillis() - start) * 1000);
frame.setKeyFrame(false);
frame.setQuality(0);
writer.encodeVideo(0, frame);
}
My writer is defined as follows:
final Dimension size = WebcamResolution.HD720.getSize();
final IMediaWriter writer = ToolFactory.makeWriter(videoFile.getName());
writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264, size.width, size.height);
I am honestly not sure what in my code could be causing this. If I lower the resolution (to 480p), I get no problems. Could the issue be with the codec I am using?
As some of the comments mentioned, introducing queues does solve the problem. Here is the general logic that performs the needed steps. Note that I've set up my code for a lower resolution, as that allows me to capture 100 frames per second. Adjust as needed.
The class that links the image capture to the image processing:
public class WebcamRecorder {
final Dimension size = WebcamResolution.QVGA.getSize();
final Stopper stopper = new Stopper();
public void startRecording() throws Exception {
final Webcam webcam = Webcam.getDefault();
webcam.setViewSize(size);
webcam.open(true);
final BlockingQueue<CapturedFrame> queue = new LinkedBlockingQueue<CapturedFrame>();
final Thread recordingThread = new Thread(new RecordingThread(queue, webcam, stopper));
final Thread imageProcessingThread = new Thread(new ImageProcessingThread(queue, size));
recordingThread.start();
imageProcessingThread.start();
}
public void stopRecording() {
stopper.setStop(true);
}
}
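The Stopper class isn't shown in the answer; a minimal sketch, assuming one writer thread and one reader thread (a volatile flag is enough for that):
public class Stopper {
    private volatile boolean stop;
    public boolean isStop() {
        return stop;
    }
    public void setStop(boolean stop) {
        this.stop = stop;
    }
}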
RecordingThread :
public void run() {
try {
System.out.println("## capturing images began");
while (true) {
final BufferedImage webcamImage = ConverterFactory.convertToType(webcam.getImage(),
BufferedImage.TYPE_3BYTE_BGR);
final Date timeOfCapture = new Date();
queue.put(new CapturedFrame(webcamImage, timeOfCapture, false));
if (stopper.isStop()) {
System.out.println("### signal to stop capturing images received");
queue.put(new CapturedFrame(null, null, true));
break;
}
}
} catch (InterruptedException e) {
System.out.println("### threading issues during recording:: " + e.getMessage());
} finally {
System.out.println("## capturing images end");
if (webcam.isOpen()) {
webcam.close();
}
}
}
ImageProcessingThread:
public void run() {
writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264, size.width, size.height);
try {
int frameIdx = 0;
final long start = System.currentTimeMillis();
while (true) {
final CapturedFrame capturedFrame = queue.take();
if (capturedFrame.isEnd()) {
break;
}
final BufferedImage webcamImage = capturedFrame.getImage();
// convert the image and store
final IConverter converter = ConverterFactory.createConverter(webcamImage, IPixelFormat.Type.YUV420P);
final long end = System.currentTimeMillis();
final IVideoPicture frame = converter.toPicture(webcamImage, (end - start) * 1000);
frame.setKeyFrame((frameIdx++ == 0));
frame.setQuality(0);
writer.encodeVideo(0, frame);
}
} catch (final InterruptedException e) {
System.out.println("### threading issues during image processing:: " + e.getMessage());
} finally {
if (writer != null) {
writer.close();
}
    }
}
The way it works is pretty simple. The WebcamRecorder class creates a queue that is shared between the video capture and the image processing. The RecordingThread sends BufferedImages to the queue (in my case, wrapped in a POJO called CapturedFrame, which holds a BufferedImage). The ImageProcessingThread listens and pulls data from the queue. Until it receives the signal that writing should end, the loop never dies.
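CapturedFrame isn't shown in the answer either; a minimal sketch of what such a POJO might look like:
public class CapturedFrame {
    private final BufferedImage image;
    private final Date timeOfCapture;
    private final boolean end; // true marks the poison pill that stops the consumer
    public CapturedFrame(BufferedImage image, Date timeOfCapture, boolean end) {
        this.image = image;
        this.timeOfCapture = timeOfCapture;
        this.end = end;
    }
    public BufferedImage getImage() {
        return image;
    }
    public Date getTimeOfCapture() {
        return timeOfCapture;
    }
    public boolean isEnd() {
        return end;
    }
}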
My code draws an image onto the screen with g.drawImage(img, x, y, null);. Is there a way to make the image cycle through different images instead of being static? I've tried .gif files but they don't work. Here is my code:
static BufferedImage img = null;
static {
    try {
        img = ImageIO.read(new File("assets/textures/bird.png"));
    } catch (IOException e) {
        System.out.println(e.getMessage());
    }
}
Is there a way to animate the textures?
You can create a small class to do this for you:
public class SimpleImageLoop {
private final BufferedImage[] frames;
private int currentFrame;
public SimpleImageLoop(BufferedImage[] frames) {
this.frames = frames;
this.currentFrame = 0;
}
/**
* Moves the loop to the next frame.
* If we are on the last frame, this loops back to the first
*/
public void nextFrame() {
this.currentFrame++;
if (this.currentFrame >= frames.length) {
this.currentFrame = 0;
}
}
/**
* Draws the current frame on the provided graphics context
*/
public void draw(Graphics g) {
    // Draw the current frame at the origin; pass an observer instead of null
    // if the images may still be loading asynchronously.
    g.drawImage(this.frames[this.currentFrame], 0, 0, null);
}
}
Then you need a simple animation loop that calls nextFrame() and draw():
final SimpleImageLoop imageLoop = new SimpleImageLoop(frames);
while (true) {
imageLoop.nextFrame();
imageLoop.draw(g);
}
If you need to smooth out the result, you can add parameters such as how many loops to perform, the duration of each frame, and so on.
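For example, a minimal sketch using javax.swing.Timer, assuming a Swing component (panel here is illustrative) whose paintComponent() calls imageLoop.draw(g); the 100 ms frame duration is also just an example:
javax.swing.Timer timer = new javax.swing.Timer(100, e -> {
    imageLoop.nextFrame();
    panel.repaint(); // triggers paintComponent, which calls imageLoop.draw(g)
});
timer.start();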
I'm trying to copy a PNG file to the clipboard from within a program and keep its alpha channel when it is pasted into another program (e.g. MS Office, Paint, Photoshop). The problem is that the alpha channel turns black in most programs. I've been searching the web for hours now and can't find a solution. The code I'm using:
setClipboard(Toolkit.getDefaultToolkit().getImage(parent.getSelectedPicturePath()));
public static void setClipboard(Image image) {
ImageSelection imgSel;
if (OSDetector.isWindows()) {
imgSel = new ImageSelection(image);
} else {
imgSel = new ImageSelection(getBufferedImage(image));
}
Toolkit.getDefaultToolkit().getSystemClipboard().setContents(imgSel, null);
}
Is there any way to maintain the alpha channel in Java? I've tried converting the PNG to BufferedImage, Image, etc., and then pasting it to the clipboard, but nothing works.
Assuming that OSDetector is working properly, I was able to get the OP's code to work out of the box on Windows Server 2008 R2 64-bit running Oracle JDK 1.8.0_131. The OP omitted the code for getBufferedImage(); however, I suspect it was some variant of the version from this blog.
When I tested the code using the blog's version of getBufferedImage() on Windows (ignoring the OSDetector check), I was able to reproduce a variant of the issue where the entire image was black, which turned out to be a timing issue with the asynchronous calls to Image.getWidth(), Image.getHeight(), and Graphics.drawImage(), all of which return immediately and take an observer for async updates. The blog code passes null (no observer) for all of these invocations, and expects results to be returned immediately, which was not the case when I tested.
Once I modified getBufferedImage() to use callbacks, I reproduced the exact issue: alpha channels appear black. The reason for this behavior is that the image with the transparency is drawn onto a graphics context that defaults to a black canvas. What you are seeing is exactly what you would see if you viewed the image on a web page with a black background.
To change this, I used a hint from this StackOverflow answer and painted the background white.
I used the ImageSelection implementation from this site, which simply wraps an Image instance in a Transferable using DataFlavor.imageFlavor.
Ultimately for my tests, both the original image and the buffered image variants worked on Windows. Below is the code:
public static void getBufferedImage(Image image, Consumer<Image> imageConsumer) {
image.getWidth((img, info, x, y, w, h) -> {
if ((info & ImageObserver.ALLBITS) != 0) { // test the flag bitwise; other bits may be set too
BufferedImage buffered = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
Graphics2D g2 = buffered.createGraphics();
g2.setColor(Color.WHITE); // You choose the background color
g2.fillRect(0, 0, w, h);
if (g2.drawImage(img, 0, 0, w, h, (img2, info2, x2, y2, w2, h2) -> {
    if ((info2 & ImageObserver.ALLBITS) != 0) {
        g2.dispose();
        // Hand the white-backed copy to the consumer, not the source image.
        imageConsumer.accept(buffered);
        return false;
    }
    return true;
})) {
g2.dispose();
imageConsumer.accept(buffered);
}
return false;
}
return true;
});
}
public static void setClipboard(Image image) {
boolean testBuffered = true; // Both buffered and non-buffered worked for me
if (!testBuffered) {
Toolkit.getDefaultToolkit().getSystemClipboard().setContents(new ImageSelection(image), null);
} else {
getBufferedImage(image, (buffered) -> {
ImageSelection imgSel = new ImageSelection(buffered);
Toolkit.getDefaultToolkit().getSystemClipboard().setContents(imgSel, null);
});
}
}
I hope this helps. Best of luck.
Is this the right answer? Have you tried this?
public void doCopyToClipboardAction()
{
// figure out which frame is in the foreground
MetaFrame activeMetaFrame = null;
for (MetaFrame mf : frames)
{
if (mf.isActive()) activeMetaFrame = mf;
}
// get the image from the current jframe
Image image = activeMetaFrame.getCurrentImage();
// place that image on the clipboard
setClipboard(image);
}
// code below from exampledepot.com
// This method writes an image to the system clipboard.
public static void setClipboard(Image image)
{
ImageSelection imgSel = new ImageSelection(image);
Toolkit.getDefaultToolkit().getSystemClipboard().setContents(imgSel, null);
}
// This class is used to hold an image while on the clipboard.
static class ImageSelection implements Transferable
{
private Image image;
public ImageSelection(Image image)
{
this.image = image;
}
// Returns supported flavors
public DataFlavor[] getTransferDataFlavors()
{
return new DataFlavor[] { DataFlavor.imageFlavor };
}
// Returns true if flavor is supported
public boolean isDataFlavorSupported(DataFlavor flavor)
{
return DataFlavor.imageFlavor.equals(flavor);
}
// Returns image
public Object getTransferData(DataFlavor flavor)
throws UnsupportedFlavorException, IOException
{
if (!DataFlavor.imageFlavor.equals(flavor))
{
throw new UnsupportedFlavorException(flavor);
}
return image;
}
}
Source : http://alvinalexander.com/java/java-copy-image-to-clipboard-example
I haven't tried this myself and I'm not sure about it. Hopefully you get the right answer.
Here is a very simple, self contained example that works. Reading or creating the image is up to you. This code just creates a red circle drawn on an alpha-type BufferedImage. When I paste it in any program that supports transparency, it shows correctly. Hope it helps.
import java.awt.*;
import java.awt.datatransfer.*;
import java.awt.image.BufferedImage;
import java.io.IOException;
public class CopyImageToClipboard {
public void createClipboardImageWithAlpha() {
//Create a buffered image of the correct type, with alpha.
BufferedImage image = new BufferedImage(600, 600, BufferedImage.TYPE_INT_ARGB);
Graphics2D g2d = image.createGraphics();
//Draw in the buffered image.
g2d.setColor(Color.red);
g2d.fillOval(10, 10, 580, 580);
//Add the BufferedImage to the clipboard with transferable image flavor.
Clipboard clipboard = Toolkit.getDefaultToolkit().getSystemClipboard();
Transferable transferableImage = getTransferableImage(image);
clipboard.setContents(transferableImage, null);
}
private Transferable getTransferableImage(final BufferedImage bufferedImage) {
return new Transferable() {
@Override
public DataFlavor[] getTransferDataFlavors() {
return new DataFlavor[] { DataFlavor.imageFlavor };
}
@Override
public boolean isDataFlavorSupported(DataFlavor flavor) {
return DataFlavor.imageFlavor.equals(flavor);
}
@Override
public Object getTransferData(DataFlavor flavor) throws UnsupportedFlavorException, IOException {
if (DataFlavor.imageFlavor.equals(flavor)) {
return bufferedImage;
}
return null;
}
};
}
}
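To try it out, a one-line usage of the class above should be enough:
new CopyImageToClipboard().createClipboardImageWithAlpha();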
I have code that encodes an image to a bitmap, but I am not sure how to display it on the screen. By the way, I am using BlackBerry Java ME. This is the code I use to encode the image. Is this the way to get an image from an SD card and display it on the screen?
FileConnection conn =
(FileConnection)Connector.open("image1.png",Connector.READ_WRITE);
if(conn.exists()) {
InputStream is = conn.openInputStream();
BitmapField bitmap = null;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
int ch;
while ((ch = is.read()) != -1)
{
baos.write(ch);
}
byte imageData[] = baos.toByteArray();
bitmap = new BitmapField(
EncodedImage.createEncodedImage(imageData, 0, imageData.length).getBitmap());
add(bitmap);
}
Secko is correct that you can override paint, but I think you are getting stuck because you have not created a UI application.
Here is a very simple ui application example for displaying a bitmap field, if you copy it exactly you will need an images folder under src with image.png inside of it.
This was modified from the HelloWorldDemo that comes with the SDK. I recommend that if you are just starting out you look at the samples in the folder.
\plugins\net.rim.ejde.componentpack5.0.0_5.0.0.25\components\samples\
good luck
Ray
public class DisplayBitmaps extends UiApplication
{
public static void main(String[] args)
{
DisplayBitmaps theApp = new DisplayBitmaps();
theApp.enterEventDispatcher();
}
public DisplayBitmaps()
{
pushScreen(new DisplayBitmapsScreen());
}
}
final class DisplayBitmapsScreen extends MainScreen
{
DisplayBitmapsScreen()
{
Bitmap bitmap = EncodedImage.getEncodedImageResource("images/image.png").getBitmap();
BitmapField bitmapField = new BitmapField(bitmap);
add(bitmapField);
}
public void close()
{
super.close();
}
}
Edit for when the image is on the sdcard
DisplayBitmapsScreen()
{
//Bitmap bitmap = EncodedImage.getEncodedImageResource("images/image.png").getBitmap();
try {
FileConnection fc = (FileConnection) Connector.open("file:///SDCard/BlackBerry/pictures/image.png");
if (fc.exists()) {
byte[] image = new byte[(int) fc.fileSize()];
InputStream inStream = fc.openInputStream();
// A single read() is not guaranteed to fill the buffer, so read fully.
DataInputStream din = new DataInputStream(inStream);
din.readFully(image);
din.close();
EncodedImage encodedImage = EncodedImage.createEncodedImage(image, 0, image.length);
BitmapField bitmapField = new BitmapField(encodedImage.getBitmap());
fc.close();
add(bitmapField);
}
} catch (Exception e) { System.out.println("EXCEPTION " + e); }
}
Overriding paint in Field, or in any class extending Field, can also display an image, but I didn't really understand from Secko's example where he would display the image, so I have included drawImage in the example below.
protected void paint(Graphics graphics) {
graphics.drawImage(x, y, width, height, image, frameIndex, left, top); // x, y, size and frame values are placeholders for your own
super.paint(graphics);
}
You can do it with:
paint(Graphics g);
Perhaps in a function:
protected void DrawStuff(Graphics g) {
this.paint(g);
}