Can someone tell me how I can use the ZXing library for an augmented reality app? I know the easiest way to use ZXing is via Intent, but I need the camera view, so I cannot use the Barcode Scanner app.
I have a SurfaceHolder.Callback which is added to the main activity and overrides the following method:
@Override
public void surfaceCreated(SurfaceHolder holder) {
    mCamera = Camera.open();
    try {
        mCamera.setPreviewDisplay(holder);
    } catch (IOException e) {
        Log.d(TAG, "Can not set surface holder");
    }
    mCamera.startPreview();
    Parameters parameters = mCamera.getParameters();
    parameters.setPreviewSize(1280, 720);
    parameters.setPictureSize(1280, 720);
    mCamera.setParameters(parameters);
    QrCodeReader reader = new QrCodeReader();
    mCamera.setPreviewCallback(reader);
}
The picture size I set should be available, since it appears in the list returned by parameters.getSupportedPictureSizes().
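For reference, a minimal sketch of how to check this, assuming mCamera is already open (note that it is the preview size, not the picture size, that determines the buffers delivered to onPreviewFrame):
Camera.Parameters params = mCamera.getParameters();
// Log every size the camera actually supports before choosing one.
for (Camera.Size s : params.getSupportedPictureSizes()) {
    Log.d(TAG, "supported picture size: " + s.width + "x" + s.height);
}
for (Camera.Size s : params.getSupportedPreviewSizes()) {
    Log.d(TAG, "supported preview size: " + s.width + "x" + s.height);
}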
And this method in the QrCodeReader class which implements PreviewCallback:
private Result result;
private MultiFormatReader reader = new MultiFormatReader();
private boolean init = false;

public QrCodeReader() {
    Hashtable<DecodeHintType, Object> hints = new Hashtable<DecodeHintType, Object>();
    hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);
    reader.setHints(hints);
}

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    PlanarYUVLuminanceSource source = new PlanarYUVLuminanceSource(data,
            1280, 720, 0, 0, 1280, 720, true);
    HybridBinarizer hybBin = new HybridBinarizer(source);
    BinaryBitmap bitmap = new BinaryBitmap(hybBin);
    try {
        result = reader.decodeWithState(bitmap);
        Log.d("Result", "Result found!");
    } catch (NotFoundException e) {
        Log.d(TAG, "NotFoundException");
    } finally {
        reader.reset();
    }
}
The Logcat only shows NotFoundException.
NotFoundException is normal. If the frame doesn't have a barcode, that's the result. This doesn't mean anything is wrong per se. Keep scanning.
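That said, one assumption worth double-checking (mine, not something stated above): the 1280x720 hard-coded into PlanarYUVLuminanceSource must match the preview size the camera actually delivers, and reverseHorizontal should normally be false for a back camera. A sketch of a more defensive callback:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // Ask the camera for the size it is actually using instead of assuming 1280x720;
    // if the requested size was rejected, data.length won't match and decoding fails.
    Camera.Size previewSize = camera.getParameters().getPreviewSize();
    PlanarYUVLuminanceSource source = new PlanarYUVLuminanceSource(data,
            previewSize.width, previewSize.height,
            0, 0, previewSize.width, previewSize.height,
            false); // reverseHorizontal is usually only needed for front cameras
    BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
    try {
        Result result = reader.decodeWithState(bitmap);
        Log.d(TAG, "Result found: " + result.getText());
    } catch (NotFoundException e) {
        // No barcode in this frame; keep scanning.
    } finally {
        reader.reset();
    }
}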
I have a problem with detecting whether a barcode is inside a specified area. For testing purposes, the camera source preview and the surface view have the same size, 1440x1080, to prevent scaling between camera and view. I get positive checks even when I can see that the QR code isn't inside the box shown in the image. What's wrong?
False positive check
ScannerActivity
public class ScannerActivity extends AppCompatActivity {
    private static final String TAG = "ScannerActivity";
    private SurfaceView mSurfaceView;       // Its size is forced to 1440x1080 in XML
    private CameraSource mCameraSource;
    private ScannerOverlay mScannerOverlay; // Its size is forced to 1440x1080 in XML

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        // .. create and init views
        // ...
        BarcodeDetector barcodeDetector = new BarcodeDetector.Builder(this)
                .setBarcodeFormats(Barcode.ALL_FORMATS)
                .build();
        mCameraSource = new CameraSource.Builder(this, barcodeDetector)
                .setRequestedPreviewSize(1440, 1080)
                .setRequestedFps(20.0f)
                .setFacing(CameraSource.CAMERA_FACING_BACK)
                .setAutoFocusEnabled(true)
                .build();
        barcodeDetector.setProcessor(new Detector.Processor<Barcode>() {
            @Override
            public void release() {
            }

            @Override
            public void receiveDetections(Detector.Detections<Barcode> detections) {
                parseDetections(detections.getDetectedItems());
            }
        });
    }

    private void parseDetections(SparseArray<Barcode> barcodes) {
        for (int i = 0; i < barcodes.size(); i++) {
            Barcode barcode = barcodes.valueAt(i);
            if (isInsideBox(barcode)) {
                runOnUiThread(() -> {
                    Toast.makeText(this, "GOT DETECTION: " + barcode.displayValue, Toast.LENGTH_SHORT).show();
                });
            }
        }
    }

    private boolean isInsideBox(Barcode barcode) {
        Rect barcodeBoundingBox = barcode.getBoundingBox();
        Rect scanBoundingBox = mScannerOverlay.getBox();
        boolean checkResult = barcodeBoundingBox.left >= scanBoundingBox.left &&
                barcodeBoundingBox.right <= scanBoundingBox.right &&
                barcodeBoundingBox.top >= scanBoundingBox.top &&
                barcodeBoundingBox.bottom <= scanBoundingBox.bottom;
        Log.d(TAG, "isInsideBox: " + (checkResult ? "YES" : "NO"));
        return checkResult;
    }
}
The explanation of your issue is simple, but the solution is not trivial.
The coordinates of the box in your UI will generally not be the same as those of the imaginary box on each preview frame. You must transform the coordinates of the UI box into the preview frame's coordinate space before comparing them with the barcode's bounding box.
I open-sourced an example that implements the same use case you are trying to accomplish. In that example I took another approach: I cut the box out of each frame before feeding it to Google Vision. This is also more efficient, since Google Vision doesn't have to analyze the whole picture and waste tons of CPU.
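For completeness, a minimal sketch of such a transform, assuming the preview frame is simply stretched to fill the view with no cropping or letterboxing (the method name and parameters are mine, for illustration):
// Map a Rect given in view (UI) coordinates into preview-frame coordinates.
// Assumes the view displays the whole frame scaled to its own size.
private Rect viewBoxToFrameBox(Rect viewBox, int viewWidth, int viewHeight,
                               int frameWidth, int frameHeight) {
    float scaleX = (float) frameWidth / viewWidth;
    float scaleY = (float) frameHeight / viewHeight;
    return new Rect(
            Math.round(viewBox.left * scaleX),
            Math.round(viewBox.top * scaleY),
            Math.round(viewBox.right * scaleX),
            Math.round(viewBox.bottom * scaleY));
}
isInsideBox() would then compare barcode.getBoundingBox() against the transformed rectangle instead of the raw UI box.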
I decided to crop the frame by wrapping the barcode detector. However, I don't know why, but the cropped frame is rotated by 90 degrees even though the smartphone is in upright (portrait) orientation.
Box Detector class
public class BoxDetector extends Detector<Barcode> {
    private Detector<Barcode> mDelegate;
    private int mBoxWidth;
    private int mBoxHeight;

    // Debugging
    private CroppedFrameListener mCroppedFrameListener;

    public BoxDetector(Detector<Barcode> delegate, int boxWidth, int boxHeight) {
        mDelegate = delegate;
        mBoxWidth = boxWidth;
        mBoxHeight = boxHeight;
    }

    public void setCroppedFrameListener(CroppedFrameListener croppedFrameListener) {
        mCroppedFrameListener = croppedFrameListener;
    }

    @Override
    public SparseArray<Barcode> detect(Frame frame) {
        int frameWidth = frame.getMetadata().getWidth();
        int frameHeight = frame.getMetadata().getHeight();

        // I assume that the box is centered.
        int left = (frameWidth / 2) - (mBoxWidth / 2);
        int top = (frameHeight / 2) - (mBoxHeight / 2);
        int right = (frameWidth / 2) + (mBoxWidth / 2);
        int bottom = (frameHeight / 2) + (mBoxHeight / 2);

        YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(), ImageFormat.NV21, frameWidth, frameHeight, null);
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(left, top, right, bottom), 100, outputStream);
        byte[] jpegArray = outputStream.toByteArray();
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

        Frame croppedFrame = new Frame.Builder()
                .setBitmap(bitmap)
                .setRotation(frame.getMetadata().getRotation())
                .build();

        if (mCroppedFrameListener != null) {
            mCroppedFrameListener.onNewCroppedFrame(croppedFrame.getBitmap(), croppedFrame.getMetadata().getRotation());
        }

        return mDelegate.detect(croppedFrame);
    }

    public interface CroppedFrameListener {
        void onNewCroppedFrame(Bitmap bitmap, int rotation);
    }
}
Box Detector usage
BarcodeDetector barcodeDetector = new BarcodeDetector.Builder(this)
        .setBarcodeFormats(Barcode.ALL_FORMATS)
        .build();

BoxDetector boxDetector = new BoxDetector(
        barcodeDetector,
        mBoxSize.getWidth(),
        mBoxSize.getHeight());

boxDetector.setCroppedFrameListener(new BoxDetector.CroppedFrameListener() {
    @Override
    public void onNewCroppedFrame(final Bitmap bitmap, int rotation) {
        Log.d(TAG, "onNewCroppedFrame: new bitmap, rotation: " + rotation);
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                mPreview.setImageBitmap(bitmap);
            }
        });
    }
});
Cropped frame is rotated
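A likely cause, though this is an assumption rather than a verified fix: the camera delivers the NV21 buffer in the sensor's landscape orientation, and the portrait rotation is only carried as metadata (frame.getMetadata().getRotation()). Decoding the cropped JPEG into a plain Bitmap loses that relationship, so the image appears rotated. One sketch of a workaround inside detect() is to rotate the decoded bitmap before rebuilding the frame:
// Sketch only: assumes the metadata reports ROTATION_90 on an upright phone.
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
Matrix matrix = new Matrix();
matrix.postRotate(90);
Bitmap upright = Bitmap.createBitmap(bitmap, 0, 0,
        bitmap.getWidth(), bitmap.getHeight(), matrix, true);

Frame croppedFrame = new Frame.Builder()
        .setBitmap(upright)
        .setRotation(Frame.ROTATION_0) // already rotated into display orientation
        .build();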
I am attempting to capture a video recording through an external camera, a Logitech C922. Using Java, I can make this possible through the webcam-capture API.
<dependency>
    <groupId>com.github.sarxos</groupId>
    <artifactId>webcam-capture</artifactId>
    <version>0.3.10</version>
</dependency>
<dependency>
    <groupId>xuggle</groupId>
    <artifactId>xuggle-xuggler</artifactId>
    <version>5.4</version>
</dependency>
However, for the life of me, I cannot make it record at 60 FPS. The video randomly stutters when stored and is not smooth at all.
I can connect to the camera, using the following details.
final List<Webcam> webcams = Webcam.getWebcams();
for (final Webcam cam : webcams) {
    if (cam.getName().contains("C922")) {
        System.out.println("### Logitec C922 cam found");
        webcam = cam;
        break;
    }
}
I set the size of the cam to the following:
final Dimension[] nonStandardResolutions = new Dimension[] { WebcamResolution.HD720.getSize(), };
webcam.setCustomViewSizes(nonStandardResolutions);
webcam.setViewSize(WebcamResolution.HD720.getSize());
webcam.open(true);
And then I capture the images:
while (continueRecording) {
    // capture the webcam image
    final BufferedImage webcamImage = ConverterFactory.convertToType(webcam.getImage(),
            BufferedImage.TYPE_3BYTE_BGR);
    final Date timeOfCapture = new Date();

    // convert the image and store
    final IConverter converter = ConverterFactory.createConverter(webcamImage, IPixelFormat.Type.YUV420P);
    final IVideoPicture frame = converter.toPicture(webcamImage,
            (System.currentTimeMillis() - start) * 1000);
    frame.setKeyFrame(false);
    frame.setQuality(0);
    writer.encodeVideo(0, frame);
}
My writer is defined as follows:
final Dimension size = WebcamResolution.HD720.getSize();
final IMediaWriter writer = ToolFactory.makeWriter(videoFile.getName());
writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264, size.width, size.height);
I am honestly not sure what in my code could be causing this. If I lower the resolution (480p), I get no problems. Could the issue be with the codec I am using?
As some of the comments mentioned, introducing queues does solve the problem. Here is the general logic to perform the needed steps. Note that I've set up my code for a lower resolution, which allows me to capture 100 FPS. Adjust as needed.
A class that links the image capture and the image processing:
public class WebcamRecorder {
    final Dimension size = WebcamResolution.QVGA.getSize();
    final Stopper stopper = new Stopper();

    public void startRecording() throws Exception {
        final Webcam webcam = Webcam.getDefault();
        webcam.setViewSize(size);
        webcam.open(true);
        final BlockingQueue<CapturedFrame> queue = new LinkedBlockingQueue<CapturedFrame>();
        final Thread recordingThread = new Thread(new RecordingThread(queue, webcam, stopper));
        final Thread imageProcessingThread = new Thread(new ImageProcessingThread(queue, size));
        recordingThread.start();
        imageProcessingThread.start();
    }

    public void stopRecording() {
        stopper.setStop(true);
    }
}
RecordingThread:
public void run() {
    try {
        System.out.println("## capturing images began");
        while (true) {
            final BufferedImage webcamImage = ConverterFactory.convertToType(webcam.getImage(),
                    BufferedImage.TYPE_3BYTE_BGR);
            final Date timeOfCapture = new Date();
            queue.put(new CapturedFrame(webcamImage, timeOfCapture, false));
            if (stopper.isStop()) {
                System.out.println("### signal to stop capturing images received");
                queue.put(new CapturedFrame(null, null, true));
                break;
            }
        }
    } catch (InterruptedException e) {
        System.out.println("### threading issues during recording:: " + e.getMessage());
    } finally {
        System.out.println("## capturing images end");
        if (webcam.isOpen()) {
            webcam.close();
        }
    }
}
ImageProcessingThread:
public void run() {
    writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264, size.width, size.height);
    try {
        int frameIdx = 0;
        final long start = System.currentTimeMillis();
        while (true) {
            final CapturedFrame capturedFrame = queue.take();
            if (capturedFrame.isEnd()) {
                break;
            }
            final BufferedImage webcamImage = capturedFrame.getImage();

            // convert the image and store
            final IConverter converter = ConverterFactory.createConverter(webcamImage, IPixelFormat.Type.YUV420P);
            final long end = System.currentTimeMillis();
            final IVideoPicture frame = converter.toPicture(webcamImage, (end - start) * 1000);
            frame.setKeyFrame((frameIdx++ == 0));
            frame.setQuality(0);
            writer.encodeVideo(0, frame);
        }
    } catch (final InterruptedException e) {
        System.out.println("### threading issues during image processing:: " + e.getMessage());
    } finally {
        if (writer != null) {
            writer.close();
        }
    }
}
The way it works is pretty simple. The WebcamRecorder class creates a queue that is shared between the video capture and the image processing. The RecordingThread sends BufferedImages to the queue (in my case wrapped in a POJO called CapturedFrame, which holds a BufferedImage). The ImageProcessingThread listens and pulls data from the queue. Until it receives the signal that writing should end, the loop never dies.
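The CapturedFrame and Stopper helpers are not shown above; minimal versions consistent with how they are used would look like this (a sketch, not the original classes):
import java.awt.image.BufferedImage;
import java.util.Date;

// Simple value object passed through the queue; the 'end' flag is the poison pill
// that tells the processing thread to stop.
public class CapturedFrame {
    private final BufferedImage image;
    private final Date timeOfCapture;
    private final boolean end;

    public CapturedFrame(BufferedImage image, Date timeOfCapture, boolean end) {
        this.image = image;
        this.timeOfCapture = timeOfCapture;
        this.end = end;
    }

    public BufferedImage getImage() { return image; }
    public Date getTimeOfCapture() { return timeOfCapture; }
    public boolean isEnd() { return end; }
}

// Thread-safe stop flag shared between the recorder and the capture loop.
public class Stopper {
    private volatile boolean stop;

    public boolean isStop() { return stop; }
    public void setStop(boolean stop) { this.stop = stop; }
}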
I have a program that uses OpenCV to take a picture using your webcam. It works like a charm on Windows, yet it doesn't work on OS X. The frame where the webcam view should appear stays empty, and when I take a picture it just shows a black void, as if it couldn't find the webcam.
public void run() {
    try {
        grabber = new VideoInputFrameGrabber(0);
        grabber.start();
        while (active) {
            IplImage originalImage = grabber.grab();
            Label.setIcon(new ImageIcon(originalImage.getBufferedImage()));
        }
        grabber.stop();
        grabber.flush();
    } catch (Exception ex) {
        //Logger.getLogger(ChPanel.class.getName()).log(Level.SEVERE, null, ex);
    }
}
public BufferedImage saveImage() {
    IplImage img;
    try {
        // capture image
        img = grabber.grab();
        // save to file
        File outputfile = new File(Project.getInstance().getFileURLStr() + " capture" + fotoCount++ + ".jpg");
        ImageIO.write(img.getBufferedImage(), "jpg", outputfile);
        // get the file and set it in the project library
        BufferedImage ImportFile = ImageIO.read(outputfile);
        Project p = Project.getInstance();
        MainScreen ms = MainScreen.getInstance();
        ImageIcon takenPhoto = new ImageIcon(ImportFile);
        p.setNextImage(takenPhoto);
        ms.setPanels();
        return ImportFile;
    } catch (com.googlecode.javacv.FrameGrabber.Exception e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
Does anyone know how to solve this? I suspect it is something about the rights to use the webcam, or something like that.
grabber = new VideoInputFrameGrabber(0);
Here, 0 specifies capture device number 0.
Maybe device number 0 is not available for video capture.
Use this code to get the list of devices and their numbers:
import com.googlecode.javacv.cpp.videoInputLib.videoInput;

class Main {
    public static void main(String[] args) {
        int n = videoInput.listDevices();
        for (int i = 0; i < n; i++) {
            System.out.println(i + " = " + videoInput.getDeviceName(i));
        }
    }
}
And then specify the number for that device
grabber = new VideoInputFrameGrabber(1); // 0 or 1 or 2
To interact with a webcam I use the webcam-capture library; you can easily add the OpenCV dependency with Maven. It is a great library.
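A minimal snapshot with webcam-capture looks like this (a sketch; as an aside, VideoInputFrameGrabber is backed by the Windows-only videoInput library, which would also explain the black frames on OS X, where a grabber such as OpenCVFrameGrabber is needed instead):
import com.github.sarxos.webcam.Webcam;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

public class Snapshot {
    public static void main(String[] args) throws Exception {
        Webcam webcam = Webcam.getDefault(); // picks the first available camera
        webcam.open();
        BufferedImage image = webcam.getImage();
        ImageIO.write(image, "jpg", new File("capture.jpg"));
        webcam.close();
    }
}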
I have a weird problem.
I have an activity, and within this activity I have a layout that I want to turn into an image in order to share it on social networks.
This layout contains various dynamic images and texts. That's why I can't just store it as a static image and share it on demand; I need to generate the image right when the user touches the share button.
The problem is that I need to adjust the layout before sharing. What do I do?
ImageView profilPic = (ImageView)dashboardView.findViewById(R.id.profilePic);
RelativeLayout.LayoutParams params = (RelativeLayout.LayoutParams)profilPic.getLayoutParams();
params.setMargins(10, 62, 0, 0);
profilPic.invalidate();
profilPic.requestLayout();
First I modify the margin of my layout, then I make an image of it
Bitmap bitmap = null;
try {
    bitmap = Bitmap.createBitmap(dashboardView.getWidth(),
            dashboardView.getHeight(), Bitmap.Config.ARGB_4444);
    dashboardView.draw(new Canvas(bitmap));
} catch (Exception e) {
    // Logger.e(e.toString());
}

FileOutputStream fileOutputStream = null;
File path = Environment.getExternalStorageDirectory();
File file = new File(path, "wayzupDashboard" + ".png");
try {
    fileOutputStream = new FileOutputStream(file);
} catch (FileNotFoundException e) {
    e.printStackTrace();
}
BufferedOutputStream bos = new BufferedOutputStream(fileOutputStream);
bitmap.compress(CompressFormat.PNG, 100, bos);
try {
    bos.flush();
    bos.close();
    fileOutputStream.flush();
    fileOutputStream.close();
} catch (IOException e) {
    e.printStackTrace();
}
And finally I share it.
Basically it works, except that it captures the layout BEFORE the layout is redrawn with the modified LayoutParams.
I need to capture this layout once, and only when the new layout parameters have been taken into account, i.e. after the layout has been redrawn.
If I remove the capture code, it works: when I touch the share button I see the layout moving. But with the capture code in place, it just captures the layout before the margins are modified.
How can I ensure that the layout is redrawn before capturing it?
For the record: to correctly capture the layout only once the view was set up, I needed to first set up the view and then capture the layout in a delayed thread. This is the only way I found to make it work.
public void onShareButton() {
    dashboardController.setupViewBeforeSharing();
    new Timer().schedule(new TimerTask() {
        @Override
        public void run() {
            Bitmap bitmap = null;
            try {
                bitmap = Bitmap.createBitmap(dashboardView.getWidth(),
                        dashboardView.getHeight(), Bitmap.Config.ARGB_4444);
                dashboardView.draw(new Canvas(bitmap));
            } catch (Exception e) {
                // Logger.e(e.toString());
            }
            FileOutputStream fileOutputStream = null;
            File path = Environment.getExternalStorageDirectory();
            File file = new File(path, "wayzupDashboard" + ".png");
            try {
                fileOutputStream = new FileOutputStream(file);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            }
            BufferedOutputStream bos = new BufferedOutputStream(fileOutputStream);
            bitmap.compress(CompressFormat.PNG, 100, bos);
            try {
                bos.flush();
                bos.close();
                fileOutputStream.flush();
                fileOutputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            Intent share = new Intent(Intent.ACTION_SEND);
            share.setType("image/png");
            share.putExtra(Intent.EXTRA_TEXT, R.string.addPassengerButton);
            share.putExtra(Intent.EXTRA_STREAM,
                    Uri.parse("file://" + file.getAbsolutePath()));
            startActivity(Intent.createChooser(share, "Share image"));
            MainActivity.this.runOnUiThread(new Runnable() {
                public void run() {
                    dashboardController.setupViewAfterSharing();
                }
            });
        }
    }, 300);
}
An easier way is to set up a listener for the layout pass on the dashboardView.
Set up the listener using View.addOnLayoutChangeListener().
Set the listener before changing the layout parameters and remove it after the image has been saved to persistence.
Hope this adds an improvement.
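A minimal sketch of that approach (captureAndShare() is a hypothetical helper wrapping the capture-and-share code above):
dashboardView.addOnLayoutChangeListener(new View.OnLayoutChangeListener() {
    @Override
    public void onLayoutChange(View v, int left, int top, int right, int bottom,
                               int oldLeft, int oldTop, int oldRight, int oldBottom) {
        // The new layout pass has completed; detach so we capture exactly once.
        v.removeOnLayoutChangeListener(this);
        captureAndShare(); // hypothetical helper wrapping the capture/share code above
    }
});

// Now trigger the layout change; onLayoutChange fires after the pass completes.
RelativeLayout.LayoutParams params =
        (RelativeLayout.LayoutParams) profilPic.getLayoutParams();
params.setMargins(10, 62, 0, 0);
profilPic.setLayoutParams(params);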
I have code that encodes an image to a bitmap, but I am not sure how to display it on the screen. By the way, I am using BlackBerry Java ME. This is the code I use to encode the image. Is this the way to get an image from an SD card and display it on the screen?
FileConnection conn =
        (FileConnection) Connector.open("image1.png", Connector.READ_WRITE);
if (conn.exists()) {
    InputStream is = conn.openInputStream();
    BitmapField bitmap = null;
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    int ch;
    while ((ch = is.read()) != -1) {
        baos.write(ch);
    }
    byte imageData[] = baos.toByteArray();
    bitmap = new BitmapField(
            EncodedImage.createEncodedImage(imageData, 0, imageData.length).getBitmap());
    add(bitmap);
}
Secko is correct that you can override paint, but I think you are getting stuck because you have not created a UI application.
Here is a very simple UI application example for displaying a bitmap field. If you copy it exactly, you will need an images folder under src with image.png inside it.
This was modified from the HelloWorldDemo that comes with the SDK. I recommend that, if you are just starting out, you look at the samples in the folder:
\plugins\net.rim.ejde.componentpack5.0.0_5.0.0.25\components\samples\
good luck
Ray
public class DisplayBitmaps extends UiApplication
{
    public static void main(String[] args)
    {
        DisplayBitmaps theApp = new DisplayBitmaps();
        theApp.enterEventDispatcher();
    }

    public DisplayBitmaps()
    {
        pushScreen(new DisplayBitmapsScreen());
    }
}

final class DisplayBitmapsScreen extends MainScreen
{
    DisplayBitmapsScreen()
    {
        Bitmap bitmap = EncodedImage.getEncodedImageResource("images/image.png").getBitmap();
        BitmapField bitmapField = new BitmapField(bitmap);
        add(bitmapField);
    }

    public void close()
    {
        super.close();
    }
}
Edit for when the image is on the SD card:
DisplayBitmapsScreen()
{
    //Bitmap bitmap = EncodedImage.getEncodedImageResource("images/image.png").getBitmap();
    try {
        FileConnection fc = (FileConnection) Connector.open("file:///SDCard/BlackBerry/pictures/image.png");
        if (fc.exists()) {
            byte[] image = new byte[(int) fc.fileSize()];
            InputStream inStream = fc.openInputStream();
            inStream.read(image);
            inStream.close();
            EncodedImage encodedImage = EncodedImage.createEncodedImage(image, 0, -1);
            BitmapField bitmapField = new BitmapField(encodedImage.getBitmap());
            fc.close();
            add(bitmapField);
        }
    } catch (Exception e) { System.out.println("EXCEPTION " + e); }
}
Overriding paint in Field, or in any class extending Field, could also display an image, but I didn't really understand from Secko's example where he would display the image, so I have included drawImage in the example below.
protected void paint(Graphics graphics) {
    graphics.drawImage(x, y, width, height, image, frameIndex, left, top);
    super.paint(graphics);
}
You can do it with:
paint(Graphics g);
Perhaps in a function:
protected void DrawStuff(Graphics g) {
    this.paint(g);
}