Android Battery information issue - java

I'm using the following code to get the battery voltage at two different times (t1, t2): t1 is before a task executes and t2 is after it has finished. Since the task only drains the battery, the voltage read at t2 should be lower than the one read at t1.
However, in practice this is not what I see: the value at t2 is sometimes greater than, sometimes smaller than, and sometimes equal to the value at t1. How is this possible? Even in Android battery monitoring tools I have noticed that the total battery mAh value sometimes increases by a few points without the charger being plugged in.
public void onCreate() {
BroadcastReceiver batteryReceiver = new BroadcastReceiver() {
int scale = -1;
int level = -1;
int voltage = -1;
int temp = -1;
@Override
public void onReceive(Context context, Intent intent) {
level = intent.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
scale = intent.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
temp = intent.getIntExtra(BatteryManager.EXTRA_TEMPERATURE, -1);
voltage = intent.getIntExtra(BatteryManager.EXTRA_VOLTAGE, -1);
Log.e("BatteryManager", "level is "+level+"/"+scale+", temp is "+temp+", voltage is "+voltage);
}
};
IntentFilter filter = new IntentFilter(Intent.ACTION_BATTERY_CHANGED);
registerReceiver(batteryReceiver, filter);
}

The Android framework continuously gets information from its power_supply subsystem, and after each update it sends out an ACTION_BATTERY_CHANGED intent.
private void updateLocked() {
if (!mUpdatesStopped) {
// Update the values of mAcOnline, etc.
native_update();
// Process the new values. Send out the intent.
processValuesLocked();
}
}
But it only updates this information periodically, so the values may contain some degree of inaccuracy.
In the implementation of native_update, Android simply reads the contents of some files under /sys/class/power_supply/battery:
setBooleanField(env, obj, gPaths.acOnlinePath, gFieldIds.mAcOnline);
setBooleanField(env, obj, gPaths.usbOnlinePath, gFieldIds.mUsbOnline);
setBooleanField(env, obj, gPaths.wirelessOnlinePath, gFieldIds.mWirelessOnline);
setBooleanField(env, obj, gPaths.batteryPresentPath, gFieldIds.mBatteryPresent);
setIntField(env, obj, gPaths.batteryCapacityPath, gFieldIds.mBatteryLevel);
setVoltageField(env, obj, gPaths.batteryVoltagePath, gFieldIds.mBatteryVoltage);
setIntField(env, obj, gPaths.batteryTemperaturePath, gFieldIds.mBatteryTemperature);
So if you want the voltage information, you can simply:
$ cat /sys/class/power_supply/battery/voltage_now
$ cat /sys/class/power_supply/battery/batt_vol
Note that voltage_now is in microvolts, not millivolts.
You can also read it programmatically if you want.
However, it may still not be accurate enough, since the content of these files is updated by the operating system (probably by something like a power driver), so for really accurate stats you may have to try a hardware approach. :)
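For illustration, here is a minimal Java sketch of reading that file directly (the path and the microvolt unit are device-dependent assumptions; fall back to the ACTION_BATTERY_CHANGED extras if the read fails):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BatteryVoltageReader {
    // Device-dependent assumption: many kernels expose voltage_now, others batt_vol.
    private static final String VOLTAGE_PATH = "/sys/class/power_supply/battery/voltage_now";

    /** Returns the battery voltage in millivolts, or -1 if the file cannot be read. */
    public static long readVoltageMillivolts() {
        try (BufferedReader reader = new BufferedReader(new FileReader(VOLTAGE_PATH))) {
            // voltage_now is reported in microvolts; convert to millivolts.
            return Long.parseLong(reader.readLine().trim()) / 1000;
        } catch (IOException | NumberFormatException e) {
            return -1;
        }
    }
}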

How could this be possible?
The battery mAh value is derived from the battery's voltage, but this voltage varies from moment to moment. It drops a little when there is a high current draw and recovers when, for example, the processor is idle. It also changes with temperature. So overall it is a very inaccurate measure.

Related

Drawing and audio not working together in Java Android App

I am currently making a small game, but my application does not work properly. The program does not stop with an error; it first plays part of the track, and after that (around 5-10 seconds) it stops and the drawing begins.
Without the audio added, the code works absolutely fine.
I have also tested the app on several different devices and emulators, but the issue keeps occurring.
Here are some methods from the class Game.java, where I am trying to implement the audio.
public class Game {
Plane background;
Plane playfield;
Context context;
List<SingleNote> notes = new ArrayList<SingleNote>();
List<SingleNote> notes_to_compute = new ArrayList<SingleNote>();
Date date = new Date();
long time_initial;
float[] touch_coord = new float[2];
private int mStreamId;
public Game(Context context){
this.context = context;
}
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR1)
public void load(){
background = new Plane(context, R.raw.vertex_shader, R.raw.fragment_shader, R.drawable._map1);
playfield = new Plane(context, R.raw.vertex_shader, R.raw.fragment_shader, R.drawable.s_playfield);
InitNotes(R.raw._map1);
time_initial = date.getTime();
MyThread m = new MyThread();
m.context = context;
m.start();
}
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR1)
public void draw(){
background.draw();
playfield.draw();
update();
}
}
class MyThread extends Thread {
Context context;
public void run(){
SoundManagement.playSoundPool(context, R.raw._map1_a);
}
}
And the static playSoundPool method in the SoundManagement class:
public static void playSoundPool(Context context, int soundID) {
int MAX_STREAMS = 20;
int REPEAT = 0;
SoundPool soundPool = new SoundPool(MAX_STREAMS, AudioManager.STREAM_MUSIC, REPEAT);
soundPool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
@Override
public void onLoadComplete(SoundPool soundPool, int soundId, int status) {
int priority = 0;
int repeat = 0;
float rate = 1.f; // Frequency Rate can be from .5 to 2.0
// Set volume
AudioManager mgr = (AudioManager)context.getSystemService(Context.AUDIO_SERVICE);
float streamVolumeCurrent =
mgr.getStreamVolume(AudioManager.STREAM_MUSIC);
float streamVolumeMax =
mgr.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
float volume = streamVolumeCurrent / streamVolumeMax;
// Play it
soundPool.play(soundId, volume, volume, priority, repeat, rate);
}
});
soundPool.load(context, soundID, 1);
}
I thought using a separate thread would help, but the same issue occurs.
What should I do in order to avoid this? Maybe use another library, or play the sound from another class?
Whole code here: https://github.com/arthur100500/AndroidProject
Additional debug information from Android Studio:
Logcat: https://pastebin.com/MQSaWw8R
Run: https://pastebin.com/5D4AGuHZ
How it looks: https://youtu.be/X7IBquHs1jA
I think there is every likelihood that the choice of SoundPool for playback is the source of the problem you are experiencing. SoundPool is designed for handling audio files that are short (a couple of seconds) and can be held in memory. You've indicated that you are trying to use it to play a 5-minute-long file. According to the documentation in the API, only the first megabyte of the 5-minute file is being handled.
Soundpool sounds are expected to be short as they are predecoded into memory. Each decoded sound is internally limited to one megabyte storage, which represents approximately 5.6 seconds at 44.1kHz stereo (the duration is proportionally longer at lower sample rates or a channel mask of mono). A decoded audio sound will be truncated if it would exceed the per-sound one megabyte storage space.
Streaming via an AudioTrack should be a more fitting alternative for this sound file.
An AudioTrack instance can operate under two modes: static or streaming. In streaming mode, the application writes a continuous stream of data to the AudioTrack, using one of the write() methods. These are blocking and return when the data has been transferred from the Java layer to the native layer and queued for playback. The streaming mode is most useful when playing blocks of audio data that for instance are:
too big to fit in memory because of the duration of the sound to play,
too big to fit in memory because of the characteristics of the audio data (high sampling rate, bits per sample ...),
received or generated while previously queued audio is playing.
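As a reference point, here is a minimal streaming-mode sketch with AudioTrack (it assumes the audio has already been decoded to raw, headerless 16-bit PCM; the 44.1 kHz stereo parameters are assumptions that would have to match the actual data):
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

import java.io.IOException;
import java.io.InputStream;

public class PcmStreamer {

    /**
     * Streams raw 16-bit PCM (headerless, 44.1 kHz stereo assumed) to an AudioTrack.
     * Must be called from a background thread, because write() blocks.
     */
    public static void stream(InputStream pcmInput) throws IOException {
        int sampleRate = 44100; // assumption: must match how the data was produced
        int bufferSize = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);

        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                bufferSize, AudioTrack.MODE_STREAM);
        track.play();

        byte[] chunk = new byte[bufferSize];
        int read;
        while ((read = pcmInput.read(chunk)) > 0) {
            // Blocks until this chunk has been queued for playback.
            track.write(chunk, 0, read);
        }

        track.stop();
        track.release();
    }
}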

Increase/decrease audio play speed of AudioInputStream with Java

Getting into the complex world of audio in Java, I am using this library, which I basically improved and published on GitHub.
The main class of the library is StreamPlayer; the code has comments and is straightforward to understand.
The problem is that it supports many functionalities, but not increasing/decreasing the audio speed - say, like YouTube does when you change the video speed.
I have no clue how I can implement such functionality. I mean, what can I do when writing the audio with respect to the sample rate of targetFormat? Do I have to restart the audio again and again every time...?
AudioFormat targetFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, sourceFormat.getSampleRate()*2, nSampleSizeInBits, sourceFormat.getChannels(),
nSampleSizeInBits / 8 * sourceFormat.getChannels(), sourceFormat.getSampleRate(), false);
The code that plays the audio is:
/**
* Main loop.
*
* Player Status == STOPPED || SEEKING = End of Thread + Freeing Audio Resources.<br>
* Player Status == PLAYING = Audio stream data sent to Audio line.<br>
* Player Status == PAUSED = Waiting for another status.
*/
@Override
public Void call() {
// int readBytes = 1
// byte[] abData = new byte[EXTERNAL_BUFFER_SIZE]
int nBytesRead = 0;
int audioDataLength = EXTERNAL_BUFFER_SIZE;
ByteBuffer audioDataBuffer = ByteBuffer.allocate(audioDataLength);
audioDataBuffer.order(ByteOrder.LITTLE_ENDIAN);
// Lock stream while playing.
synchronized (audioLock) {
// Main play/pause loop.
while ( ( nBytesRead != -1 ) && status != Status.STOPPED && status != Status.SEEKING && status != Status.NOT_SPECIFIED) {
try {
//Playing?
if (status == Status.PLAYING) {
// System.out.println("Inside Stream Player Run method")
int toRead = audioDataLength;
int totalRead = 0;
// Reads up a specified maximum number of bytes from audio stream
//wtf i have written here xaxaxoaxoao omg //to fix! cause it is complicated
for (; toRead > 0
&& ( nBytesRead = audioInputStream.read(audioDataBuffer.array(), totalRead, toRead) ) != -1; toRead -= nBytesRead, totalRead += nBytesRead)
// Check for under run
if (sourceDataLine.available() >= sourceDataLine.getBufferSize())
logger.info(() -> "Underrun> Available=" + sourceDataLine.available() + " , SourceDataLineBuffer=" + sourceDataLine.getBufferSize());
//Check if anything has been read
if (totalRead > 0) {
trimBuffer = audioDataBuffer.array();
if (totalRead < trimBuffer.length) {
trimBuffer = new byte[totalRead];
//Copies an array from the specified source array, beginning at the specified position, to the specified position of the destination array
// The number of components copied is equal to the length argument.
System.arraycopy(audioDataBuffer.array(), 0, trimBuffer, 0, totalRead);
}
//Writes audio data to the mixer via this source data line
sourceDataLine.write(trimBuffer, 0, totalRead);
// Compute position in bytes in encoded stream.
int nEncodedBytes = getEncodedStreamPosition();
// Notify all registered Listeners
listeners.forEach(listener -> {
if (audioInputStream instanceof PropertiesContainer) {
// Pass audio parameters such as instant
// bit rate, ...
listener.progress(nEncodedBytes, sourceDataLine.getMicrosecondPosition(), trimBuffer, ( (PropertiesContainer) audioInputStream ).properties());
} else
// Pass audio parameters
listener.progress(nEncodedBytes, sourceDataLine.getMicrosecondPosition(), trimBuffer, emptyMap);
});
}
} else if (status == Status.PAUSED) {
//Flush and stop the source data line
if (sourceDataLine != null && sourceDataLine.isRunning()) {
sourceDataLine.flush();
sourceDataLine.stop();
}
try {
while (status == Status.PAUSED) {
Thread.sleep(50);
}
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
logger.warning("Thread cannot sleep.\n" + ex);
}
}
} catch (IOException ex) {
logger.log(Level.WARNING, "\"Decoder Exception: \" ", ex);
status = Status.STOPPED;
generateEvent(Status.STOPPED, getEncodedStreamPosition(), null);
}
}
// Free audio resources.
if (sourceDataLine != null) {
sourceDataLine.drain();
sourceDataLine.stop();
sourceDataLine.close();
sourceDataLine = null;
}
// Close stream.
closeStream();
// Notification of "End Of Media"
if (nBytesRead == -1)
generateEvent(Status.EOM, AudioSystem.NOT_SPECIFIED, null);
}
//Generate Event
status = Status.STOPPED;
generateEvent(Status.STOPPED, AudioSystem.NOT_SPECIFIED, null);
//Log
logger.info("Decoding thread completed");
return null;
}
Feel free to download and check out the library alone if you want. :) I need some help on this... Library link.
Short answer:
For speeding up a single person speaking, use my Sonic.java native Java implementation of my Sonic algorithm. An example of how to use it is in Main.Java. A C-language version of the same algorithm is used by Android's AudioTrack. For speeding up music or movies, find a WSOLA based library.
Bloated answer:
Speeding up speech is more complex than it sounds. Simply increasing the sample rate without adjusting the samples will cause speakers to sound like chipmunks. There are basically two good schemes for linearly speeding up speech that I have listened to: fixed-frame based schemes like WSOLA, and pitch-synchronous schemes like PICOLA, which is used by Sonic for speeds up to 2X. One other scheme I've listened to is FFT-based, and IMO those implementations should be avoided. I hear rumor that FFT-based can be done well, but no open-source version I am aware of was usable the last time I checked, probably in 2014.
I had to invent a new algorithm for speeds greater than 2X, since PICOLA simply drops entire pitch periods, which works well so long as you don't drop two pitch periods in a row. For faster than 2X, Sonic mixes in a portion of samples from each input pitch period, retaining some frequency information from each. This works well for most speech, though some languages such as Hungarian appear to have parts of speech so short that even PICOLA mangles some phonemes. However, the general rule that you can drop one pitch period without mangling phonemes seems to work well most of the time.
Pitch-synchronous schemes focus on one speaker, and will generally make that speaker clearer than fixed-frame schemes, at the expense of butchering non-speech sounds. However, the improvement of pitch synchronous schemes over fixed-frame schemes is hard to hear at speeds less than about 1.5X for most speakers. This is because fixed-frame algorithms like WSOLA basically emulate pitch synchronous schemes like PICOLA when there is only one speaker and no more than one pitch period needs to be dropped per frame. The math works out basically the same in this case if WSOLA is tuned well to the speaker. For example, if it is able to select a sound segment of +/- one frame in time, then a 50ms fixed frame will allow WSOLA to emulate PICOLA for most speakers who have a fundamental pitch > 100 Hz. However, a male with a deep voice of say 95 Hz would be butchered with WSOLA using those settings. Also, parts of speech, such as at the end of a sentence, where our fundamental pitch drops significantly can also be butchered by WSOLA when parameters are not optimally tuned. Also, WSOLA generally falls apart for speeds greater than 2X, where like PICOLA, it starts dropping multiple pitch periods in a row.
On the positive side, WSOLA will make most sounds including music understandable, if not high fidelity. Taking non-harmonic multi-voice sounds and changing the speed without introducing substantial distortion is impossible with overlap-and-add (OLA) schemes like WSOLA and PICOLA.
Doing this well would require separating the different voices, changing their speeds independently, and mixing the results together. However, most music is harmonic enough to sound OK with WSOLA.
It turns out that the poor quality of WSOLA at > 2X is one reason folks rarely listen at higher speeds than 2X. Folks simply don't like it. Once Audible.com switched from WSOLA to a Sonic-like algorithm on Android, they were able to increase the supported speed range from 2X to 3X. I haven't listened on iOS in the last few years, but as of 2014, Audible.com on iOS was miserable to listen to at 3X speed, since they used the built-in iOS WSOLA library. They've likely fixed it since then.
Looking at the library you linked, it doesn't seem like a good place to start specifically for this playback speed issue; is there any reason you aren't using AudioTrack? It seems to support everything you need.
EDIT 1: AudioTrack is Android-specific, but the OP's question is desktop Java SE based; I will leave it here only for future reference.
1. Using AudioTrack and adjusting playback speed (Android)
Thanks to an answer on another SO post (here), there is a class posted which uses the built in AudioTrack to handle speed adjustment during playback.
public class AudioActivity extends Activity {
AudioTrack audio = new AudioTrack(AudioManager.STREAM_MUSIC,
44100,
AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT,
intSize, //size of pcm file to read in bytes
AudioTrack.MODE_STATIC);
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//read track from file
File file = new File(getFilesDir(), fileName);
int size = (int) file.length();
byte[] data = new byte[size];
try {
FileInputStream fileInputStream = new FileInputStream(file);
fileInputStream.read(data, 0, size);
fileInputStream.close();
audio.write(data, 0, data.length);
} catch (IOException e) {}
}
//change playback speed by factor
void changeSpeed(double factor) {
audio.setPlaybackRate((int) (audio.getPlaybackRate() * factor));
}
}
This just uses a file to stream the whole file in one write command, but you could adjust it otherwise (the setPlaybackRate method is the main part you need).
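For instance, a hypothetical call to the method above to play back 25% faster (keep in mind that setPlaybackRate resamples the audio, so the pitch rises along with the speed):
// Hypothetical usage of the class above.
changeSpeed(1.25);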
2. Applying your own playback speed adjustment
In theory, there are two methods of adjusting playback speed:
Adjust the sample rate
Change the number of samples per unit time
Since you are using the initial sample rate (because I'm assuming you have to reset the library and stop the audio when you adjust the sample rate?), you will have to adjust the number of samples per unit time.
For example, to speed up an audio buffer's playback you can use this pseudo code (Python-style), found thanks to Coobird (here).
original_samples = [0, 0.1, 0.2, 0.3, 0.4, 0.5]

def faster(samples):
    # Average each neighbouring pair of samples, halving their number
    # and therefore doubling the playback speed (and raising the pitch).
    new_samples = []
    for i in range(0, len(samples) - 1, 2):
        new_samples.append(0.5 * (samples[i] + samples[i + 1]))
    return new_samples

faster_samples = faster(original_samples)
This is just one example of speeding up the playback and is not the only algorithm on how to do so, but one to get started on. Once you have calculated your sped up buffer you can then write this to your audio output and the data will playback at whatever scaling you choose to apply.
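Translated into Java for 16-bit little-endian PCM buffers like the ones the player loop in the question reads (a hypothetical helper; mono only, and a stereo stream would need the averaging applied per channel):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public final class NaiveSpeedUp {
    /**
     * Doubles the playback speed of 16-bit little-endian mono PCM by averaging
     * each pair of neighbouring samples, halving the buffer length.
     */
    public static byte[] doubleSpeed(byte[] pcm, int length) {
        ByteBuffer in = ByteBuffer.wrap(pcm, 0, length).order(ByteOrder.LITTLE_ENDIAN);
        ByteBuffer out = ByteBuffer.allocate(length / 2).order(ByteOrder.LITTLE_ENDIAN);
        while (in.remaining() >= 4) {
            short a = in.getShort();
            short b = in.getShort();
            out.putShort((short) ((a + b) / 2));
        }
        byte[] result = new byte[out.position()];
        out.rewind();
        out.get(result);
        return result;
    }
}
The shortened buffer would then be written to sourceDataLine in place of trimBuffer; as the first answer explains, this naive decimation also raises the pitch.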
To slow down the audio, apply the opposite by adding data points between the current buffer values with interpolation as desired.
Please note that when adjusting playback speed it is often worth low pass filtering at the maximum frequency desired to avoid unnecessary artifacts.
As you can see, the second approach is a much more challenging task, as it requires implementing such functionality yourself, so I would probably use the first, but I thought the second was worth mentioning.

Frequently repeated threads in Android

I'm building a barcode scanner which, unlike other implementations, does the scanning part continuously in the background rather than waiting for the user to trigger the process.
Now, the most (or what I think is the most) obvious way to achieve this is to do the scanning in another thread, to make sure that the main thread won't be interrupted and the user won't be bothered with UI lags, stutters, and whatnot.
I'm not the brightest guy when it comes to concurrency, but I've done my homework and some research about it, which in turn has led me to write this:
...
mScannerExecutor = Executors.newFixedThreadPool(3);
...
Camera.PreviewCallback previewCallback = new Camera.PreviewCallback() {
public void onPreviewFrame(byte[] data, Camera camera) {
Camera.Parameters parameters = camera.getParameters();
Camera.Size size = parameters.getPreviewSize();
final Image barcode = new Image(size.width, size.height, "Y800");
barcode.setData(data);
Runnable scan = new Runnable() {
@Override
public void run() {
int result = mBarcodeScanner.scanImage(barcode);
if (result != 0) {
if(isInPreview) {
isInPreview = false;
mCamera.stopPreview();
}
SymbolSet symbolSet = mBarcodeScanner.getResults();
mListener.onBarcodeScanned(symbolSet.iterator().next());
if (enableRepeatedScanning) {
new Handler().postDelayed(restartPreview, mRescanIntervalMillis);
}
}
}
};
mScannerExecutor.execute(scan);
}
};
But the above code has been causing a lot of errors when it runs. I can't even keep the app running for more than a mere couple of seconds. The error message varies from time to time, but the one below was shown the most:
Fatal signal 8 (SIGFPE), code -6, fault addr 0x17b8 in tid 6410 (pool-1-thread-1)
I have a strong feeling that this design in general is heavily flawed. Thus the constant crashing.
What can I do to make this right? Did I miss something really important here?
P.S. The previewCallback defined above will be called very frequently; once every 2000 ms (2 secs).

android: taking pictures in task or thread at regular interval?

I'm writing an Android app which should take pictures at a user-defined interval (20 sec - 1 min). It should take the pictures even while it is running in the background or while the device is sleeping. The app will run for a very long period of time. If it is necessary to wake up the device, it should be put back to sleep as soon as possible to save battery life. After taking a picture the app will do some additional work (a comparison of two pictures).
I have read some material about scheduling alarms (http://developer.android.com/training/scheduling/alarms.htm), creating services (also at Android training), and Android AsyncTasks vs. Java threads (http://www.mergeconflict.net/2012/05/java-threads-vs-android-asynctask-which.html)
... but I'm still not sure what the best way to achieve this is.
My questions are:
Should I use a thread or a task to take the pictures in the background? (The comparison of the two pictures might take longer than a few milliseconds, but I don't know anything about the CPU load of this operation.)
Should I use an alarm to wake the device up or are there any alternative solutions?
How can both (alarms and thread/task) work together? (Include the Alarm in the Task/Thread?)
Many thanks for your help in advance.
As to your question, I know I can help you get started with the aspect of repeating the picture-taking task at a user-defined time interval. For such a task you can use a Timer. The code would look something like this:
mTmr = new Timer();
mTsk = new TimerTask() {
@Override
public void run() {
//Take picture or do whatever you want
}
};
mTmr.schedule(mTsk, 0, USER_DEFINED_EXECUTION_INTERVAL);
schedule begins the timer. The first parameter of schedule used here is the task to run, which is mTsk. The second parameter is the delay until the first execution (in milliseconds), in this case no delay. The third parameter is the one you'll want to manipulate: the interval between executions. If the interval were 20 seconds you'd pass in 20,000; if it were a minute, 60,000. You can get this value from the user using any method you'd like.
To keep the timer running, make sure you don't call mTmr.cancel() in onPause, because in your case you want the timer to keep running while the user isn't in the app. Not calling cancel means the timer will hold its resources until the app is closed by the user.
Or you can look at this: How to schedule a periodic task in Java?, if you'd like to use a ScheduledExecutorService instead of a Timer.
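A rough equivalent with a ScheduledExecutorService (a minimal sketch; USER_DEFINED_EXECUTION_INTERVAL is the same millisecond value as in the Timer example above):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
// Runs the capture task immediately, then repeats it at the user-defined interval.
scheduler.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        // Take picture or do whatever you want
    }
}, 0, USER_DEFINED_EXECUTION_INTERVAL, TimeUnit.MILLISECONDS);
// Call scheduler.shutdown() when the periodic work is no longer needed.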
I have made this app - Lenx. It uses the Camera extensively and I process the image in the background. I used AsyncTask to process the image and it has never given me any problems. The app also has a timer which starts the process after a certain interval. The logic I have used is very simple.
I have not used the Camera2 API yet, so the code might be deprecated. I created a CameraPreview class which implements Camera.PreviewCallback.
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
if (data == null) {
return;
}
int expectedBytes = previewWidth * previewHeight *
ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
if (expectedBytes != data.length) {
Log.e(TAG, "Mismatched size of buffer! Expected ");
mState = STATE_NO_CALLBACKS;
mCamera.setPreviewCallbackWithBuffer(null);
return;
}
if (mProcessInProgress || mState == STATE_PROCESS_IN_PROGRESS) {
mCamera.addCallbackBuffer(data);
return;
}
if (mState == STATE_PROCESS) {
mProcessInProgress = true;
processDataTask = new ProcessDataTask();
processDataTask.execute(data);
}
}
public void startProcessing() {
mState = STATE_PROCESS;
}
And my AsyncTask is something like this
private class ProcessDataTask
extends
AsyncTask<byte[], Void, Boolean> {
@Override
protected Boolean doInBackground(byte[]... datas) {
mState = STATE_PROCESS_IN_PROGRESS;
Log.i(TAG, "background process started");
byte[] data = datas[0];
long t1 = java.lang.System.currentTimeMillis();
// process your data
long t2 = java.lang.System.currentTimeMillis();
Log.i(TAG, "processing time = " + String.valueOf(t2 - t1));
mCamera.addCallbackBuffer(data);
mProcessInProgress = false;
return true;
}
@Override
protected void onPostExecute(Boolean result) {
mState = STATE_PROCESS_WAIT;
}
}
onPreviewFrame() will always get called as long as the camera preview is running. You need to take the data and process it only when something triggers the processing. So simply change the state of a variable, in this case mState, and, based on that state, call your AsyncTask.
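For example, wiring this to the Timer idea from the other answer might look roughly like this (a hypothetical sketch; cameraPreview and intervalMillis are placeholder names, and Timer/TimerTask come from java.util):
Timer captureTimer = new Timer();
captureTimer.schedule(new TimerTask() {
    @Override
    public void run() {
        // Flip the state so the next onPreviewFrame() call kicks off ProcessDataTask.
        cameraPreview.startProcessing();
    }
}, 0, intervalMillis);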

Is there a system variable that holds the device-specific low-battery threshold for Android?

I am checking to see if the battery has reached critical level.
Android sends an intent to your app when the battery crosses the low-battery threshold in either direction. But this only works if the threshold is crossed while your app is running (the intent is not sticky, so it doesn't hang around). So if the battery is already low when the user opens the app, you're out of luck (or at least out of information).
There is also a sticky intent, ACTION_BATTERY_CHANGED, that has information about the battery level and a scale for calculating percentages, which is great. However, I have been unable to find the system variable that contains the low-battery threshold (it apparently varies across devices).
Doing a search, I found: When android fires ACTION_BATTERY_LOW, a source listing of Android system code, which uses the system variable com.android.internal.R.integer.config_lowBatteryWarningLevel. However, I have been unable to access this variable myself (my guess is that it is protected).
I would like to have a reasonable standard to compare my battery percentage to, so I know when to turn off battery-intensive functionality. That is all.
Here is my code:
private BroadcastReceiver powerListener = new BroadcastReceiver() {
public void onReceive(Context context, Intent intent) {
int batteryLevel = intent.getIntExtra(BatteryManager.EXTRA_LEVEL, 0);
int batteryScale = intent.getIntExtra(BatteryManager.EXTRA_SCALE, 1);
int batteryPercentLeft = (batteryLevel * 100) / batteryScale;
if (batteryPercentLeft <= com.android.internal.R.integer.config_lowBatteryWarningLevel) {
_thread.onBatteryStateReceived(DataModel.BatteryState.LOW);
}
}
};
I get a compile error for the system variable. Is there an alternative? It seems like this should be a straightforward thing to do. I just want to match system behavior, nothing fancy.
Note that this is possible, but uses reflection so should not be relied upon:
try {
Class clazz = Class.forName("com.android.internal.R$integer");
Field field = clazz.getDeclaredField("config_lowBatteryWarningLevel");
field.setAccessible(true);
int LowBatteryLevel = _context.getResources().getInteger(field.getInt(null));
Log.d("LowBattery","warninglevel " + LowBatteryLevel);
} catch (ClassNotFoundException | NoSuchFieldException | IllegalAccessException e) {
e.printStackTrace();
}
Source: https://stackoverflow.com/a/49424298/608312
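If you prefer to avoid reflection, a related sketch (my own suggestion, not part of the linked answer) is to look the resource up by name at runtime with Resources.getIdentifier(); like the reflection approach, it depends on an internal resource name and may break on some devices or releases:
// Assumption: config_lowBatteryWarningLevel exists as a framework integer resource on this device.
int resId = Resources.getSystem().getIdentifier("config_lowBatteryWarningLevel", "integer", "android");
if (resId != 0) {
    int lowBatteryWarningLevel = Resources.getSystem().getInteger(resId);
    Log.d("LowBattery", "warning level " + lowBatteryWarningLevel);
}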
