I want my app to perform some tasks regardless of whether it is currently active or in the background.
What I have done is:
private static void myCurrMethod() {
    boolean checkIn = false;
    if (1 == 1) { // placeholder condition (always true)
        checkIn = true;
    }

    // Sending message
    Time now = new Time();
    now.setToNow();
    final String res = "Time is " + now.hour + ":" + now.minute + ":"
            + now.second + " stat " + checkIn;
    new Thread(new Runnable() {
        public void run() {
            try {
                Socket clientSocket = new Socket("xx.xx.xx.xx", 6790);
                DataOutputStream outToServer = new DataOutputStream(
                        clientSocket.getOutputStream());
                outToServer.writeBytes(res + '\n');
                clientSocket.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }).start();
    // End of it

    Handler handler = new Handler();
    handler.postDelayed(new Runnable() {
        @Override
        public void run() {
            myCurrMethod();
        }
    }, (TIME_INTERVAL_WIFI_EN * MS_PER_MIN));
}
The socket part is there so I can check how often the message is being sent.
I have tried this while using another app, and it works completely fine as long as my phone is connected to my computer, which has ADT installed. The problem is that it does not work properly, or only works a few times, when my phone is not connected to my computer.
I found several approaches on the internet, but they all seem to run things in the background only while the app is active. There is also a lot of conflicting advice out there: some people suggest Services, some suggest AsyncTask, and others suggest an approach like mine. I am confused: what is the best way to do this?
Note: I don't want this to run unconditionally. For instance, if my app has not been started at all, it shouldn't run; if my app is removed from the app list, it shouldn't run either. Basically, what I want is the default behaviour we could have in old Symbian apps.
Apps do not "close" - they might be killed to free resources when they are not active, but the "recents" list does not mean "active" (it can be on the list and be killed already, or not on the list and still be in an active task stack).
If you want to run things off the UI thread, then use another thread, AsyncTask, etc. They will live as long as your app does and continue to run even if another activity is on the screen.
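For example, here is a minimal AsyncTask sketch of that pattern (all names are illustrative, not from the question); doInBackground runs off the UI thread and onPostExecute delivers the result back on it:

new AsyncTask<Void, Void, String>() {
    @Override
    protected String doInBackground(Void... params) {
        // heavy work (network, disk, ...) runs off the UI thread
        return "done";
    }

    @Override
    protected void onPostExecute(String result) {
        // runs back on the UI thread with the result
    }
}.execute();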
If you need your process to survive your activity lifecycle or start when your app is not active (or continue running the process even if the activity is killed) then you need a service.
There are many references here on SO to help implement any solution. In your particular case, the connection to the computer - or a charger - may make a difference in how many active tasks Android allows to continue running, so your inactive activity is being killed faster when not connected. You likely need a service to "finish up" whatever processing is expected to occur while the app is inactive or in the background.
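As a rough illustration, here is a minimal started-service sketch (the class name and the work inside run() are assumptions, not code from the question); it keeps running independently of any activity until it stops itself:

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class FinishUpService extends Service {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // do the "finish up" work on a worker thread, then stop
        new Thread(new Runnable() {
            public void run() {
                // ... finish the pending work here ...
                stopSelf();
            }
        }).start();
        return START_STICKY; // ask the system to recreate the service if it is killed
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started service, not a bound one
    }
}

It would be started with startService(new Intent(context, FinishUpService.class)) and declared in the manifest.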
I need some help solving a problem with SpeechRecognizer.
Background
My task is to implement a voice memo feature: the user can record a short audio, save it, and then listen to it. If the user does not have the opportunity to listen to the audio, he can tap on the special "Aa" button and get a transcript of his voice note as text.
Since I did not find a suitable way to recognize prerecorded audio, I decided to implement speech recognition using SpeechRecognizer at the same time as recording audio. The recognition results are stored in a string, and when the user taps the "Aa" button, this string is displayed on the screen.
Source
In the Activity, I declare a SpeechRecognizer and an Intent for it, as well as a string to store the recognized text and a special variable isStoppedByUser. The variable is needed so that recognition stops only when the user himself stops recording (if the user pauses while speaking, recognition may stop automatically, but I do not want that).
private SpeechRecognizer speechRecognizer;
private Intent speechRecognizerIntent;
private String recognizedMessage = "";
private boolean isStoppedByUser = false;
I initialize the SpeechRecognizer in a separate method that is called from onCreate().
private void initSpeechRecognizer() {
    speechRecognizer = SpeechRecognizer.createSpeechRecognizer(this);
    speechRecognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    speechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    speechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);
    speechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getClass().getPackage().getName());
    speechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());

    boolean isRecognitionAvailable = SpeechRecognizer.isRecognitionAvailable(this);
    Toast.makeText(this, "isRecognitionAvailable = " + isRecognitionAvailable, Toast.LENGTH_SHORT).show();
    Log.i(TAG, "isRecognitionAvailable: " + isRecognitionAvailable);

    speechRecognizer.setRecognitionListener(new RecognitionListener() {
        @Override
        public void onRmsChanged(float rmsdB) {
            Log.d(TAG, "onRmsChanged() called with: rmsdB = [" + rmsdB + "]");
        }

        @Override
        public void onResults(Bundle results) {
            Log.d(TAG, "onResults() called with: results = [" + results + "]");
            ArrayList<String> data = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
            recognizedMessage += " " + data.get(0);
            Log.d(TAG, "onResults(): recognizedMessage = " + recognizedMessage);
            // If recognition stops by itself (because of a pause in speaking), start it again
            if (!isStoppedByUser) {
                speechRecognizer.startListening(speechRecognizerIntent);
            }
        }

        @Override
        public void onError(int error) {
            Log.d(TAG, "onError() called with: error = [" + error + "]");
            if (!isStoppedByUser) {
                speechRecognizer.startListening(speechRecognizerIntent);
            }
        }

        // Other callback methods contain nothing but logging
        // ...
    });
}
The user starts recording:
startRecording();
isStoppedByUser = false;
recognizedMessage = "";
speechRecognizer.startListening(speechRecognizerIntent);
The user stops recording:
isStoppedByUser = true;
speechRecognizer.stopListening();
// Further processing of recorded audio
// ...
Problem
I tested this functionality on two devices: Xiaomi 9T and Realme 8i.
Everything works fine on the Xiaomi: as I speak, the onRmsChanged() method is called several times per second with different rmsdB values; I can see this clearly in the logs. That is, the sound level is changing. Then the other callback methods are called, and the string is successfully built.
But on the Realme, the onRmsChanged() method is called only once, at the very beginning, with a value of -2.0. Nothing else happens while I'm speaking, and when I stop recording, the onError() method is called with code 7 (ERROR_NO_MATCH).
It's as if the SpeechRecognizer can't hear me, but there are no problems with the microphone, and the RECORD_AUDIO permission is also granted: the audio itself is successfully recorded and can be listened to.
If I open the Google app and enter a voice request, everything also works fine.
I would be very grateful if you could recommend any other parameters I could set to solve this problem. Thank you!
The problem turned out to be that the microphone cannot be used for both recording and speech recognition at the same time. The fact that everything worked fine on the Xiaomi was just a happy accident.
I'm trying to implement a simple Websocket application in Java that is able to scale horizontally, using Redis and the Redisson library.
The Websocket server basically keeps track of connected clients and publishes the messages it receives to an RTopic; this works great.
To consume, I have code that adds a listener when a client is registered; it associates a ConnectedClient object with a listener:
private static RedissonClient redisson = RedissonRedisServer.createRedisConnectionWithConfig();
public static final RTopic subcriberTopic = redisson.getTopic("clientsMapTopic");

public static boolean sendToPubSub(ConnectedClient q, String message) {
    boolean[] success = {true};
    MessageListener<Message> listener = new MessageListener<Message>() {
        @Override
        public void onMessage(CharSequence channel, Message message) {
            logger.debug("The message is : " + message.getMediaId());
            try {
                logger.debug("ConnectedClient mediaid: " + q.getMediaid() + ", Message mediaid " + message.getMediaId());
                // we need to verify that the message goes to the right receiver
                if (q.getMediaid().equals(message.getMediaId())) {
                    logger.debug("MESSAGE from PUBSUB to (" + q.getId() + ") # " + q.getSession().getId() + " " + message);
                    // this is the actual message to the websocket client;
                    // this executes on the wrong connected client when the connection is closed and reopened
                    q.getSession().getBasicRemote().sendText(message.getMessage());
                }
            } catch (Exception e) {
                e.printStackTrace();
                success[0] = false;
            }
        }
    };
    int listenerId = subcriberTopic.addListener(Message.class, listener);
    return success[0]; // added so the boolean method compiles
}
The problem I am observing is as follows:
- the initial connection from a client registers a listener associated with that client object
- a message sent to the ws server gets picked up by the listener and forwarded properly
- I disconnect the websocket and create a new connection; a new listener gets created
- a message sent to the ws server gets picked up by the same original listener, which uses the old connected client instead of the newly registered one
- sending fails (because that client and ws connection no longer exist) and the message is not processed further
It seems I just need to remove the listener for the client when the client is removed, but I haven't found a good way to do that: although I can see in the debugger that the listener holds the associated connected client object, I'm unable to retrieve it without adding code for that.
Am I observing this correctly, and what is a good way to make this work properly?
While writing the question, I leaned toward an answer I already had in mind; I tried it, and it worked.
I added a ConcurrentHashMap to keep track of the relation between each connected client and its listener.
In the logic where I handle a websocket error that indicates a client was removed, I now remove the associated listener (and the entry from the map).
Now it works as expected.
small snippet:
int listenerId = subcriberTopic.addListener(Message.class, listener);
clientListeners.put(q, (Integer) listenerId);
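For completeness, the map this assumes would be declared something like the following; the declaration is mine, only the clientListeners name appears in the original code:

// Maps each connected client to the id of its Redisson topic listener.
public static final Map<ConnectedClient, Integer> clientListeners = new ConcurrentHashMap<>();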
And then in the websocket onError handler that triggers the cleanup:
// remove the associated listener
int listenerIdForClient = MessageContainer.clientListeners.get(cP);
MessageContainer.subcriberTopic.removeListener((Integer) listenerIdForClient);
// remove entry from map
MessageContainer.clientListeners.remove(cP);
Now the listener gets cleaned up properly and the next time a new listener gets created and handles the messages.
I've been coding around in circles trying to work this one out. I'm new to Java and have been lurking here for a while, but I really can't get past this one. I adapted some code by Desmond Shaw (http://www.codepool.biz/how-to-implement-a-java-websocket-server-for-image-transmission-with-jetty.html) to create a websocket that transfers jpg images from a server to remote clients. I want to send files from the server to the browser windows of connected clients when specific files on the server change (they are pages of music scores created in real time using Max/MSP), but I can't seem to cancel the timers I'm creating to watch these files in my home directory for changes.
More specifically, the remote browser clients send messages (via javascript buttons operated by the users) over a websocket connection to specify which files they wish to see updated on their screen (i.e. part one, which refers to a file on the server called "1.png" and is the violin part; part two, which is the server file "2.png" and is the cello part; etc.). The websocket handler running on my server uses this to send the right files to that client whenever a file watcher detects they have changed on the server. I can get everything going except stopping the timers running the file watchers when a client requests a different part (say the violin player wants to look at the cello part). Below is the method I have edited to respond to the messages from the clients:
@OnWebSocketMessage // part request from websocket client (remote browser)
public void onMessage(String message) {
    System.out.println("Message: '" + message + "' received");
    sFclient = message;
    // note: strings must be compared with equals(), not ==
    if (sFclient.equals("1") || sFclient.equals("2") || sFclient.equals("3") || sFclient.equals("4")) {
        System.out.println("Part " + sFclient + " joined");
    } else {
        sFclientOut = 0;
    }
}
// the FileWatcher task that pushes the updated image when the file changes
TimerTask task = new FileWatcher(new File("/Users/benedict/" + sFclient + ".png")) {
    @Override
    protected void onChange(File file) {
        System.out.println("File " + file.getName() + " has changed!");
        try {
            File f = new File("/Users/benedict/" + sFclient + ".png");
            BufferedImage bi = ImageIO.read(f);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(bi, "png", out);
            ByteBuffer byteBuffer = ByteBuffer.wrap(out.toByteArray());
            mSession.getRemote().sendBytes(byteBuffer);
            out.close();
            byteBuffer.clear();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
};

Timer timer1 = new Timer();
timer1.schedule(task, new Date(), 20); // check for changes every 20 ms
if (sFclientOut == 0) {
    task.cancel();
    timer1.cancel();
}
I had it mostly working using an if statement, which I've since abandoned, and after much editing it probably doesn't make as much sense any more. Any help at all would be appreciated, but my main question is: should I be trying to cancel the threads handling the TimerTasks, or should I use a completely different approach altogether, like a switch statement? I have tried sending a message ("0") before every new message from the browsers to cancel the old threads, but then the TimerTasks don't start at all, which I think is because that cancels the TimerTask and doesn't let it run again.
Thanks,
Benedict
OK, here's the final working solution. I just had to cancel the tasks based on a message from the clients (in this case a "0"):
else if (message.equals("0")) {
    zerocounter = zerocounter + 1;
    if (zerocounter >= 2) {
        task.cancel();
    }
}
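For reference, another way to make the cancellation reliable is to keep the current Timer and TimerTask in fields and cancel both before scheduling a replacement. A minimal sketch of that idea, using java.util.Timer/TimerTask (the field and method names are mine, not from the question):

// Keep references so the active watcher can always be cancelled later.
private Timer watchTimer;
private TimerTask watchTask;

private synchronized void restartWatcher(TimerTask newTask) {
    if (watchTimer != null) {
        watchTask.cancel();  // stop the old file-watching task
        watchTimer.cancel(); // release the old timer thread
    }
    watchTask = newTask;
    watchTimer = new Timer();
    watchTimer.schedule(watchTask, new Date(), 20); // repeat every 20 ms, as in the question
}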
I implemented GCM for push notifications as described in the Android guide (https://developer.android.com/google/gcm/client.html) in one of my apps. The app and its notifications work fine on KitKat and Lollipop.
But recently I received some emails from users who upgraded their phones to Lollipop. After the upgrade, notifications are no longer displayed. The only solution so far is to remove the app and reinstall it from the app store.
Has anyone faced a similar problem, and if so, did you find a way to fix it?
This is a GCM ID issue. Try using Thread.sleep and retrying a number of times until the GCM ID is received.
int noOfAttemptsAllowed = 5;   // number of retries allowed
int noOfAttempts = 0;          // number of tries done
boolean stopFetching = false;  // flag to denote whether to keep retrying
String regId = "";

while (!stopFetching) {
    noOfAttempts++;
    GCMRegistrar.register(getApplicationContext(), "XXXX_SOME_KEY_XXXX");
    try {
        // Leave some time here for the registration to complete
        // before going to the next line
        Thread.sleep(2000); // set this timing based on trial
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    try {
        // Get the registration ID
        regId = GCMRegistrar.getRegistrationId(LoginActivity.this);
    } catch (Exception e) {}

    if (!regId.isEmpty() || noOfAttempts > noOfAttemptsAllowed) {
        // If the registration ID was obtained or the number of tries was exceeded, stop fetching
        stopFetching = true;
    }
    if (!regId.isEmpty()) {
        // If the registration ID was obtained, save it to shared preferences
        saveRegIDToSharedPreferences();
    }
}
The Thread.sleep duration and noOfAttemptsAllowed can be tuned based on your design and other parameters. We used a sleep time of 7000 ms so that the probability of getting registered on the first attempt is higher. However, if that attempt fails, the next one consumes another 7000 ms, which might make users think your app is slow. So play around intelligently with those two values.
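If fixed sleeps feel too blunt, one common variation is exponential backoff: start with a short wait and double it after each failed attempt, so the first try stays fast and later tries give registration more time. A hedged sketch using the same GCMRegistrar calls as above (the backoff numbers are illustrative):

int maxAttempts = 5;
long waitMs = 1000; // initial wait; doubles on each retry: 1 s, 2 s, 4 s, ...
String regId = "";
for (int attempt = 1; attempt <= maxAttempts && regId.isEmpty(); attempt++) {
    GCMRegistrar.register(getApplicationContext(), "XXXX_SOME_KEY_XXXX");
    try {
        Thread.sleep(waitMs); // give registration time to complete
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
    regId = GCMRegistrar.getRegistrationId(getApplicationContext());
    waitMs *= 2;
}
if (!regId.isEmpty()) {
    saveRegIDToSharedPreferences(); // as in the snippet above
}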
I work on software for network-capable cameras used in retail environments. One of the pieces of software my team is developing is a webserver that retrieves various reports, generated in HTML by the camera itself (which has its own embedded webserver) and stored on the camera. Our software then GETs these reports from the cameras and stores them on a central webserver.
While we are fine with plugging the IPs of the cameras into our software, I am developing a simple Java class that queries the network and locates all the cameras on it.
The problem is that while it runs just fine on my PC and my coworker's PC, when we attempt to run it on the actual webserver PC that will host our software, it runs but reports every IP in the subnet as offline/unreachable EXCEPT the gateway IP.
For example, if I run it from my PC or my coworker's PC when plugged into the closed LAN, I get the following active IPs, along with a flag telling me whether each one is a camera.
(gateway is 192.168.0.1, subnet mask is 255.255.255.0, which means a range of 256 addresses, 254 usable hosts, to look through)
IP:/192.168.0.1 Active:true Camera:false
IP:/192.168.0.100 Active:true Camera:true <- this is camera 1
IP:/192.168.0.101 Active:true Camera:true <- this is camera 2
IP:/192.168.0.103 Active:true Camera:false <- my PC
IP:/192.168.0.104 Active:true Camera:false <- this is our webserver
But for some reason, when running the same program from the webserver PC, using the same JRE, I only get the following found
IP:/192.168.0.1 Active:true Camera:false
Now my code, instead of enumerating each IP in order on the main thread, creates a separate thread for each IP to be checked and runs them concurrently (otherwise it would take a little over 21 minutes to enumerate the entire IP range at a timeout of 5000 ms per IP). The main thread then re-runs these IP-scan threads every 15 seconds, over and over.
I have checked that all the threads run to completion on all the PCs and that no exceptions are thrown. I have even verified that none of the threads get stuck. Each thread takes about 5001 to 5050 ms from start to completion, and the threads with an active IP finish sooner (under 5000 ms), so I know it is correctly waiting the full 5000 ms in the ipAddr.isReachable(5000) method.
My coworker and I are stumped at this point: it reaches those active IPs fine when run on our PCs, yet gets no response when run from the webserver PC.
We have ruled out firewall issues, admin access issues, etc. The only difference is that our webserver runs Embedded Windows XP while our PCs run Windows 7.
This has us stumped. Any ideas why?
Below is the code that is running each IP Thread:
public void CheckIP() {
    new Thread() {
        @Override
        public void run() {
            try {
                isActive = ipAddr.isReachable(5000);
                if (isActive) {
                    if (!isCamera) {
                        isCamera = new IpHttpManager().GetResponse(ipAddr.toString());
                    }
                } else {
                    isCamera = false;
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }.start();
}
EDIT: Here is the code that builds each IP to check after determining the range based on gateway and subnet...
for (int i = subMin; i <= subMax; i++) {
    byte[] ip = new byte[] {(byte) oct[0], (byte) oct[1], (byte) oct[2], (byte) i};
    try {
        scanners[subCount] = new IpScan(InetAddress.getByAddress(ip));
        subCount++;
    } catch (UnknownHostException e) {
        e.printStackTrace();
    }
}
Thanks everyone, but I never did figure out or pinpoint why this oddity was happening. Everything I checked turned out not to be the cause, so this question can be closed.
In any case, I ended up working around it completely. Instead of using InetAddress, I went native and built my own ICMP ping class via JNA, invoking the Windows libraries IPHLPAPI.DLL and WSOCK32.DLL. Here is what I used...
public interface InetAddr extends StdCallLibrary {
    InetAddr INSTANCE = (InetAddr)
            Native.loadLibrary("wsock32.dll", InetAddr.class);

    ULONG inet_addr(String cp); // in_addr creator; creates the in_addr C struct used below
}

public interface IcmpEcho extends StdCallLibrary {
    IcmpEcho INSTANCE = (IcmpEcho)
            Native.loadLibrary("iphlpapi.dll", IcmpEcho.class);

    int IcmpSendEcho(
            HANDLE IcmpHandle,        // handle to the ICMP
            ULONG DestinationAddress, // destination address, as an in_addr C struct defaulted to ULONG
            Pointer RequestData,      // pointer to the buffer holding the message to be sent
            short RequestSize,        // size of the above buffer: sizeof(message)
            byte[] RequestOptions,    // OPTIONAL! Can be set to null
            Pointer ReplyBuffer,      // pointer to the buffer where the echo reply is written
            int ReplySize,            // size of the above buffer; normally sizeof(ICMP_ECHO_REPLY), but arbitrarily set to 256 bytes here
            int Timeout);             // timeout, as an int

    HANDLE IcmpCreateFile();                    // win32 ICMP handle creator
    boolean IcmpCloseHandle(HANDLE IcmpHandle); // win32 ICMP handle destroyer
}
And then using those to create the following method...
public void SendReply(String ipAddress) {
    final IcmpEcho icmpecho = IcmpEcho.INSTANCE;
    final InetAddr inetAddr = InetAddr.INSTANCE;
    HANDLE icmpHandle = icmpecho.IcmpCreateFile();
    byte[] message = new String("thisIsMyMessage!".toCharArray()).getBytes();
    Memory messageData = new Memory(32); // in C/C++ this would be: void* messageData = malloc(message.length);
    messageData.write(0, message, 0, message.length); // but the length is ignored and set to 32 bytes instead for now
    Pointer requestData = messageData;
    Pointer replyBuffer = new Memory(256);
    replyBuffer.clear(256);

    // HERE IS THE NATIVE CALL!!
    reply = icmpecho.IcmpSendEcho(icmpHandle,
            inetAddr.inet_addr(ipAddress),
            requestData,
            (short) 32,
            null,
            replyBuffer,
            256,
            timeout);
    // NATIVE CALL DONE, CHECK REPLY!!

    icmpecho.IcmpCloseHandle(icmpHandle);
}

public boolean IsReachable() {
    return (reply > 0);
}
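A hypothetical usage of the two methods above (the enclosing class name IcmpPinger and its reply/timeout fields are my assumptions; only SendReply and IsReachable come from the code):

IcmpPinger pinger = new IcmpPinger(); // assumed class wrapping 'reply' and 'timeout'
pinger.SendReply("192.168.0.100");    // one native ICMP echo round trip
if (pinger.IsReachable()) {
    System.out.println("192.168.0.100 responded to ping");
}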
My guess is that your iteration logic for determining the different IP addresses depends on the machine's network configuration, which would explain why your PCs find all the addresses but your webserver doesn't.
Try adding debug output in the logic where you build up the list of IP addresses to check.
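For example, inside the loop from the question's edit, something like this (the logging style is only a suggestion) would show exactly which candidate addresses each machine is queueing:

InetAddress candidate = InetAddress.getByAddress(ip);
System.out.println("Queueing scan for " + candidate.getHostAddress()
        + " (subMin=" + subMin + ", subMax=" + subMax + ")");
scanners[subCount] = new IpScan(candidate);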