I've got a dental camera and I'm trying to get Windows to press space when the camera button is pressed.
I have the OEM software and driver installed, and they work perfectly: the software gets the feed and takes a snapshot when the camera button is pressed. However, I need to use different software for the feed and the snapshot. That software gets the feed but doesn't react to the camera button; it only reacts to a space key press (handled as part of the OEM driver). So my way of solving this was to find the device by product ID, listen for the button-press event, and remap it to a space press.
I am pretty much stuck at this point.
How can I listen for events coming from the device I've got?
public static Device findDCam() {
    // Create and initialize the libusb context
    Context context = new Context();
    int result = LibUsb.init(context);
    if (result < 0) {
        throw new LibUsbException("Unable to initialize libusb", result);
    }

    // Read the USB device list
    DeviceList list = new DeviceList();
    result = LibUsb.getDeviceList(context, list);
    if (result < 0) {
        throw new LibUsbException("Unable to get device list", result);
    }

    try {
        // Iterate over all devices and look for the camera's product ID
        for (Device device : list) {
            DeviceDescriptor descriptor = new DeviceDescriptor();
            result = LibUsb.getDeviceDescriptor(device, descriptor);
            if (result < 0) {
                throw new LibUsbException("Unable to read device descriptor", result);
            }
            // idProduct() returns a short, so product IDs above 0x7FFF appear negative
            if (descriptor.idProduct() == -3810) {
                System.out.println("D cam found");
                // Keep a reference so the device survives freeDeviceList() below
                LibUsb.refDevice(device);
                return device;
            }
        }
    } finally {
        // Ensure the allocated device list is freed
        LibUsb.freeDeviceList(list, true);
    }

    // Deinitialize the libusb context (only on the not-found path;
    // the context must stay alive while the returned device is in use)
    LibUsb.exit(context);
    return null;
}
I've also thought that maybe it's impossible using usb4java since, as far as I understand, if I want to listen on the USB port I need to take control away from the driver, and then it's pointless.
Maybe I'm going about this all wrong and should use the driver instead?
Or maybe there is an app that can read button presses from a specific device and remap them?
If the camera has a standard driver, this should work through this video capture SDK. To quickly test it, run the demo executable included in the package, select the camera in the list, check the "webcam snapshot button" checkbox, and start the camera. Then press the camera button to test the snapshot.
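Alternatively, for the usb4java approach from the question, the listening part could look roughly like the sketch below. Everything device-specific here is an assumption not taken from the post: the button is presumed to report on interrupt IN endpoint 0x81 of interface 0 with an 8-byte report (check with a descriptor dump), and java.awt.Robot is used to inject the space press. As the question itself notes, claiming the interface takes the device away from the OEM driver (detachKernelDriver only works on Linux; on Windows libusb needs a generic driver such as WinUSB installed), so this conflicts with keeping the vendor feed alive:
// Sketch only: endpoint/interface/report size are assumptions, not facts
// about this camera. Needs java.awt.Robot, java.awt.event.KeyEvent,
// java.nio.ByteBuffer, java.nio.IntBuffer and the org.usb4java classes.
public static void listenForButton(Device device) throws AWTException {
    DeviceHandle handle = new DeviceHandle();
    int result = LibUsb.open(device, handle);
    if (result < 0) {
        throw new LibUsbException("Unable to open device", result);
    }
    try {
        // Take interface 0 away from the OS driver (Linux only)
        if (LibUsb.kernelDriverActive(handle, 0) == 1) {
            LibUsb.detachKernelDriver(handle, 0);
        }
        result = LibUsb.claimInterface(handle, 0);
        if (result < 0) {
            throw new LibUsbException("Unable to claim interface", result);
        }
        ByteBuffer buffer = ByteBuffer.allocateDirect(8);
        IntBuffer transferred = IntBuffer.allocate(1);
        Robot robot = new Robot();
        while (true) { // loop forever; a real implementation needs a shutdown flag
            // Block for up to 5 s waiting for a report from the button endpoint
            result = LibUsb.interruptTransfer(handle, (byte) 0x81, buffer, transferred, 5000);
            if (result == LibUsb.SUCCESS && transferred.get(0) > 0) {
                // Treat any report as a button press and synthesize space
                robot.keyPress(KeyEvent.VK_SPACE);
                robot.keyRelease(KeyEvent.VK_SPACE);
            }
            transferred.clear();
        }
    } finally {
        LibUsb.releaseInterface(handle, 0);
        LibUsb.close(handle);
    }
}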
I need some help solving a problem with SpeechRecognizer.
Background
My task is to implement a voice memo feature: the user can record a short audio note, save it, and then listen to it. If the user is not able to listen to the audio, they can tap the special "Aa" button and get a transcript of the voice note as text.
Since I did not find a suitable way to recognize prerecorded audio, I decided to run speech recognition with SpeechRecognizer at the same time as recording the audio. The recognition results are accumulated in a string, and when the user taps the "Aa" button, this string is displayed on the screen.
Source
In the Activity, I declare a SpeechRecognizer and an Intent for it, as well as a string to store the recognized text and a special variable isStoppedByUser. The variable ensures that recognition stops only when the user stops the recording himself (if the user pauses while speaking, recognition may stop automatically, which I do not want).
private SpeechRecognizer speechRecognizer;
private Intent speechRecognizerIntent;
private String recognizedMessage = "";
private boolean isStoppedByUser = false;
I initialize the SpeechRecognizer in a separate method that is called from onCreate().
private void initSpeechRecognizer() {
    speechRecognizer = SpeechRecognizer.createSpeechRecognizer(this);
    speechRecognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    speechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    speechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);
    speechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getClass().getPackage().getName());
    speechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());

    boolean isRecognitionAvailable = SpeechRecognizer.isRecognitionAvailable(this);
    Toast.makeText(this, "isRecognitionAvailable = " + isRecognitionAvailable, Toast.LENGTH_SHORT).show();
    Log.i(TAG, "isRecognitionAvailable: " + isRecognitionAvailable);

    speechRecognizer.setRecognitionListener(new RecognitionListener() {
        @Override
        public void onRmsChanged(float rmsdB) {
            Log.d(TAG, "onRmsChanged() called with: rmsdB = [" + rmsdB + "]");
        }

        @Override
        public void onResults(Bundle results) {
            Log.d(TAG, "onResults() called with: results = [" + results + "]");
            ArrayList<String> data = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
            recognizedMessage += " " + data.get(0);
            Log.d(TAG, "onResults(): recognizedMessage = " + recognizedMessage);
            // If recognition stops by itself (as a result of a pause in speaking), start it again
            if (!isStoppedByUser) {
                speechRecognizer.startListening(speechRecognizerIntent);
            }
        }

        @Override
        public void onError(int error) {
            Log.d(TAG, "onError() called with: error = [" + error + "]");
            if (!isStoppedByUser) {
                speechRecognizer.startListening(speechRecognizerIntent);
            }
        }

        // The other callback methods contain nothing but logging
        // ...
    });
}
The user starts recording:
startRecording();
isStoppedByUser = false;
recognizedMessage = "";
speechRecognizer.startListening(speechRecognizerIntent);
The user stops recording:
isStoppedByUser = true;
speechRecognizer.stopListening();
// Further processing of recorded audio
// ...
Problem
I tested this functionality on two devices: Xiaomi 9T and Realme 8i.
Everything works fine on the Xiaomi: as I speak, the onRmsChanged() method is called several times per second with different rmsdB values, which I can clearly see in the logs. That is, the sound level changes. Then the other callback methods are called, and the string is built successfully.
But on Realme, the onRmsChanged() method is called only once, at the very beginning, with a value of -2.0. Nothing else happens while I'm speaking, and when I stop recording, the onError() method is called with code 7 (ERROR_NO_MATCH).
It's as if the SpeechRecognizer can't hear me, but there are no problems with the microphone, and the RECORD_AUDIO permission is also granted: the audio itself is successfully recorded and can be listened to.
If I open the Google app and enter a voice request, everything also works fine.
I would be very grateful for recommendations on what other parameters could be set to solve this problem. Thank you!
The problem turned out to be that it is impossible to use the microphone for both recording and speech recognition at the same time. So the fact that everything works fine on the Xiaomi turned out to be just a happy accident.
Mostly my Android WebView works fine when fullscreening videos, but there are some videos where it runs into problems. Some I've already fixed, some I can't:
1) The fullscreen button doesn't always trigger onShowCustomView. FIXED: updating Chrome from the Google Play Store fixes this bug.
2) The video goes fullscreen and you can hear the audio, but sometimes the whole screen is black; you cannot even see the video player UI. Example: https://www.engadget.com/2019/05/22/vader-immortal-review-oculus-quest-vr/
I have a hack fix for this at the moment (resizing the view after a 2s delay), but I'd obviously prefer a more robust solution.
3) The video goes fullscreen temporarily but then closes itself immediately. This only happens occasionally, but I've seen it happen when opening Vimeo videos from Twitter. It's not consistent, so you'll have to click through a few: https://mobile.twitter.com/search?q=Vimeo&src=typed_query
Any idea how I can make this work more consistently?
My custom web chrome client:
public void onShowCustomView(View paramView, CustomViewCallback paramCustomViewCallback) {
    if (this.mCustomView != null) {
        onHideCustomView(); // I verified this wasn't the cause of point 3
        return;
    }

    this.mCustomView = paramView;
    this.mOriginalSystemUiVisibility = browserUi.getBrowser().getWindow().getDecorView().getSystemUiVisibility();
    previousOrientation = browserUi.getBrowser().getRequestedOrientation();
    this.mCustomViewCallback = paramCustomViewCallback;

    FrameLayout decorView;
    if (browserUi.isPreviewWindow()) {
        decorView = browserUi.getBrowser().findViewById(R.id.prev_window_fullscreen_video_container);
    } else {
        decorView = browserUi.getBrowser().findViewById(R.id.fullscreen_video_container);
    }
    decorView.setVisibility(View.VISIBLE);
    decorView.addView(this.mCustomView, new FrameLayout.LayoutParams(-1, -1)); // MATCH_PARENT x MATCH_PARENT

    if (!browserUi.isPreviewWindow()) {
        this.mCustomView.setKeepScreenOn(true); // prevents sleep while viewing
        int flags = View.SYSTEM_UI_FLAG_LOW_PROFILE
                | View.SYSTEM_UI_FLAG_HIDE_NAVIGATION
                | View.SYSTEM_UI_FLAG_FULLSCREEN
                | View.SYSTEM_UI_FLAG_LAYOUT_STABLE
                | View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION
                | View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN
                | View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY;
        decorView.setSystemUiVisibility(flags);
        browserUi.getBrowser().setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
    }
}
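Not from the original post, but for context: a typical onHideCustomView() counterpart would undo each of these steps, and restoring the saved UI visibility and orientation is the part that matters for getting back to a usable browser. A sketch using the same field names as above:
@Override
public void onHideCustomView() {
    // Pick the same container that onShowCustomView() used
    FrameLayout decorView;
    if (browserUi.isPreviewWindow()) {
        decorView = browserUi.getBrowser().findViewById(R.id.prev_window_fullscreen_video_container);
    } else {
        decorView = browserUi.getBrowser().findViewById(R.id.fullscreen_video_container);
    }
    decorView.removeView(this.mCustomView);
    decorView.setVisibility(View.GONE);
    this.mCustomView.setKeepScreenOn(false);
    this.mCustomView = null;

    // Restore the system UI flags and orientation saved earlier
    browserUi.getBrowser().getWindow().getDecorView()
            .setSystemUiVisibility(this.mOriginalSystemUiVisibility);
    browserUi.getBrowser().setRequestedOrientation(previousOrientation);

    if (this.mCustomViewCallback != null) {
        this.mCustomViewCallback.onCustomViewHidden();
        this.mCustomViewCallback = null;
    }
}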
I am taking a series of pictures using the Android Camera2 API for real-time pose estimation and environment reconstruction (the SLAM problem). Currently I simply save all of these pictures to my SD card for offline processing.
I set up the processing pipeline following Google's Camera2Basic sample, using a TextureView as well as an ImageReader, both set as target surfaces for a repeating preview request.
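That repeating request can be built roughly as in this sketch (following Camera2Basic conventions; mCameraDevice, mTextureView, mCaptureSession, mImageReader and the handler fields are assumed, and session creation is elided). The button handler below then toggles the ImageReader target on this same request:
try {
    // Preview surface backed by the TextureView
    SurfaceTexture texture = mTextureView.getSurfaceTexture();
    Surface previewSurface = new Surface(texture);

    // One repeating request drives both the preview and the ImageReader
    mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    mPreviewRequestBuilder.addTarget(previewSurface);
    mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
    mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), mCaptureCallback, mBackgroundHandler);
} catch (CameraAccessException e) {
    e.printStackTrace();
}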
mButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        if (mIsShooting) {
            try {
                mCaptureSession.stopRepeating();
                mPreviewRequestBuilder.removeTarget(mImageReader.getSurface());
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), mCaptureCallback, mBackgroundHandler);
                mIsShooting = false;
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        } else {
            try {
                mCaptureSession.stopRepeating();
                mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), mCaptureCallback, mBackgroundHandler);
                mIsShooting = true;
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }
    }
});
The ImageReader target is added/removed when the button is pressed. The ImageReader's OnImageAvailableListener is implemented as follows:
private ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image img = reader.acquireLatestImage();
        if (null == img) {
            return;
        }
        if (img.getTimestamp() <= mLatestFrameTime) {
            Log.i(Tag, "disorder detected!");
            img.close(); // release the buffer, or the reader will run out of images
            return;
        }
        mLatestFrameTime = img.getTimestamp();
        ImageSaver saver = new ImageSaver(img, img.getTimestamp());
        saver.run();
    }
};
I use acquireLatestImage() (with the buffer size set to 2) to discard old frames, and I also check each image's timestamp to make sure timestamps increase monotonically. The reader does receive images at an acceptable rate (about 25 fps). However, a closer look at the saved image sequence shows they are not always saved in chronological order.
The following pictures come from a long sequence shot by the program (sorry for not being able to post pictures directly :( ):
Image 1:
Image 2:
Image 3:
Such disorder does not occur very often, but it can occur at any time and does not seem to be an initialization problem. I suspect it has something to do with the ImageReader's buffer size, as fewer of these "flashbacks" occur with a larger buffer. Does anyone have the same problem?
I finally found that the disorder disappears when setting the ImageReader's format to YUV_420_888 in its constructor. Originally I had set this field to JPEG.
Using the JPEG format incurs not only a large processing delay but also the disorder. I guess the conversion from image sensor data to the desired format uses other hardware, such as a DSP or GPU, which does not guarantee chronological order.
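For illustration, the fix amounts to choosing the format in the ImageReader constructor; the size and maxImages values below are placeholders, not the poster's actual configuration:
// YUV_420_888 instead of JPEG avoided the reordering described above
mImageReader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);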
Are you using TEMPLATE_STILL_CAPTURE for the capture requests when you enable the ImageReader, or just TEMPLATE_PREVIEW? What devices are you seeing issues with?
If you're using STILL_CAPTURE, make sure you check if the device supports the ENABLE_ZSL flag, and set it to false. When it is set to true (generally the default on devices that support it, for the STILL_CAPTURE template), images may be returned out of order since there's a zero-shutter-lag queue in place within the camera device.
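A sketch of that check (CONTROL_ENABLE_ZSL exists from API 26 on; mCameraDevice, mCameraCharacteristics and mImageReader are assumed fields):
try {
    CaptureRequest.Builder builder =
            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
    builder.addTarget(mImageReader.getSurface());
    // Only set the key if this device actually reports support for it
    if (mCameraCharacteristics.getAvailableCaptureRequestKeys()
            .contains(CaptureRequest.CONTROL_ENABLE_ZSL)) {
        builder.set(CaptureRequest.CONTROL_ENABLE_ZSL, false);
    }
} catch (CameraAccessException e) {
    e.printStackTrace();
}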
I implemented GCM for push notifications as described in the Android guide (https://developer.android.com/google/gcm/client.html) in one of my apps. The app and its notifications work fine on KitKat and Lollipop.
But lately I have received some mails from users who upgraded their phones to Lollipop. After the upgrade, the notifications are no longer displayed. The only solution so far is to remove the app and reinstall it from the app store.
Did someone face a similar problem, and if so, did you find a solution to fix it?
This is a GCM ID issue. Try using Thread.sleep and retrying a number of times until the GCM ID is received.
int noOfAttemptsAllowed = 5;   // Number of retries allowed
int noOfAttempts = 0;          // Number of tries done
boolean stopFetching = false;  // Flag to denote whether to stop retrying
String regId = "";

while (!stopFetching) {
    noOfAttempts++;
    GCMRegistrar.register(getApplicationContext(), "XXXX_SOME_KEY_XXXX");
    try {
        // Leave some time here for the registration to complete
        // before going to the next line
        Thread.sleep(2000); // Set this timing based on trial.
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    try {
        // Get the registration ID
        regId = GCMRegistrar.getRegistrationId(LoginActivity.this);
    } catch (Exception e) {}

    if (!regId.isEmpty() || noOfAttempts > noOfAttemptsAllowed) {
        // Stop if a registration ID was obtained or the number of tries is exceeded
        stopFetching = true;
    }
    if (!regId.isEmpty()) {
        // If a registration ID was obtained, save it to shared preferences
        saveRegIDToSharedPreferences();
    }
}
The Thread.sleep duration and noOfAttemptsAllowed can be tuned based on your design and other parameters. We used a sleep time of 7000 ms so that the probability of getting registered on the first attempt is higher. However, if that attempt fails, the next one consumes another 7000 ms, which might make users think your app is slow. So play around intelligently with those two values.
In my BlackBerry app I've implemented the camera and would like to replace the default shutter sound with my own. I figured I could do this either by silencing the default shutter sound with enableShutterFeedback(false) and then playing my own sound, or by playing my sound immediately before the camera is activated.
private void initializeCamera() {
    try {
        // Create a player for the BlackBerry's camera
        Player player = Manager.createPlayer("capture://video");
        // Set the player to the REALIZED state (see Player javadoc)
        player.realize();
        // Grab the video control and set it to the current display
        _videoControl = (VideoControl) player.getControl("VideoControl");
        if (_videoControl != null) {
            // Create the video field as a GUI primitive (as opposed to a
            // direct video, which can only be used on platforms with
            // LCDUI support.)
            _videoField = (Field) _videoControl.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, "net.rim.device.api.ui.Field");
            _videoControl.setDisplayFullScreen(true);
            _videoControl.setVisible(false);
        }
        cc = (CameraControl) player.getControl("CameraControl");
        cc.enableShutterFeedback(false);
        // Set the player to the STARTED state (see Player javadoc)
        player.start();
    } catch (Exception e) {
        MyApp.errorDialog("ERROR " + e.getClass() + ": " + e.getMessage());
    }
}
This results in a NullPointerException whose cause I can't figure out, and the camera's video doesn't get displayed. If I remove the CameraControl code (the two lines calling getControl("CameraControl") and enableShutterFeedback(false)), the camera's video is shown. Any ideas what I should try to get rid of the shutter sound? I tried VolumeControl in place of CameraControl, with the same result: a null pointer.
The CameraControl code throws the NPE because player.getControl returns null, and it does so because the string parameter is not correct. Try this one:
CameraControl control = (CameraControl) player.getControl("javax.microedition.amms.control.camera.CameraControl");
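And since the NPE came from using getControl()'s return value unchecked, it is worth guarding the call either way; a small sketch with the question's variable names:
// Request the control by its fully-qualified name and guard against null
cc = (CameraControl) player.getControl("javax.microedition.amms.control.camera.CameraControl");
if (cc != null) {
    cc.enableShutterFeedback(false);
} else {
    MyApp.errorDialog("CameraControl not available on this player");
}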