I am using SourceAFIS Fingerprint in an Android Java application to compare users' fingerprints, and I have the following problem: the application takes too long to convert the user's fingerprint bytes into a FingerprintTemplate; sometimes it is even killed. To my misfortune, I need to create this FingerprintTemplate object inside a loop in order to process the biometrics returned from the database, which slows things down even more.
Code snippet
// Returns the stored biometric records and assigns them to the list
listBiometria = conSql.selecionarBiometria();

// Extract the candidate template once from the captured image bytes
FingerprintTemplate candidate = new FingerprintTemplate();
candidate.dpi(500);
candidate.create(img);

double score;
for (Biometry biometry : listBiometria) {
    // Feature extraction runs on every iteration -- this is the slow part
    FingerprintTemplate probe = new FingerprintTemplate()
            .dpi(500)
            .create(biometry.getBiometria());
    score = new FingerprintMatcher()
            .index(probe)
            .match(candidate);
}
Well, in case anyone has a problem similar to this: I found the GitHub of the creator of SourceAFIS, asked this question there, and received the following answer: "Android feature extractor performance is indeed poor. Improvements are possible. Meantime it is recommended to use a recent device with solid floating-point performance.
In any case, you shouldn't be looping over images like that. Perform feature extraction after image acquisition and subsequently cache the template like the tutorial says."
Link: https://github.com/robertvazan/sourceafis-net/issues/2
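In practice that means extracting each template once, at enrollment time, and persisting the serialized form, so the matching loop only deserializes. Here is a minimal sketch of that caching, assuming the v3-era fluent API used above; serialize()/deserialize() are the round-trip methods I believe that version exposes (check your SourceAFIS release), and salvarTemplate() plus the variable names are illustrative:

import com.machinezoo.sourceafis.FingerprintMatcher;
import com.machinezoo.sourceafis.FingerprintTemplate;

// Enrollment time (once per finger): run the expensive extraction and
// persist the serialized template alongside the user record.
FingerprintTemplate enrolled = new FingerprintTemplate()
        .dpi(500)
        .create(imageBytes);               // the slow feature extraction
String serialized = enrolled.serialize(); // assumed round-trip method
conSql.salvarTemplate(userId, serialized); // hypothetical DAO method

// Match time: rebuild templates from the cache instead of re-extracting.
FingerprintTemplate probe = new FingerprintTemplate()
        .deserialize(serializedFromDb);    // cheap compared to create()
double score = new FingerprintMatcher()
        .index(probe)
        .match(candidate);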
We have identified in our user base that, since the last Google Fit app update, there has been a dramatic drop in data, and since it began we have tried to identify the issue in our code. Given the timing, we thought the SDK version we were using (18.0 at the time) was the problem.
Upgrading to SDK 20.0 did not improve the results, but it did stop the data from stalling. Currently we can assume 50-60% of the users connected to Google Fit through the SDK are no longer correctly retrieving data according to the (previously working) implementation. They are not lost, and they still send some bits here and there, but it's no longer what it used to be.
A graph of our metrics showcases the timeline of events that led us to the conclusion that one of the two sides must be doing something wrong.
The code examples below have been stripped of most data processing code for readability, but it is there.
Our Fitness client requests FitnessOptions.ACCESS_READ for all the types mentioned below, plus others depending on the app, every time it is initialised, either in the foreground or in the background, making sure we only request those the user has accepted.
We can confirm that the following data types no longer return any value when requesting the daily total or the local device daily total, but they do return data chunks for the same period in a non-aggregated read:
DataType.TYPE_STEP_COUNT_DELTA
DataType.TYPE_CALORIES_EXPENDED
DataType.TYPE_HEART_RATE_BPM
We also tried changing those where possible to their aggregate counterparts, to no avail:
DataType.AGGREGATE_CALORIES_EXPENDED
DataType.AGGREGATE_STEP_COUNT_DELTA
This is our current getDailyTotal implementation, which worked before the update and is written exactly as the examples on the developer site show:
Fitness.getHistoryClient(context, account)
    .readDailyTotal(type)
    .addOnSuccessListener {
        Logger.i("${type.name}::DailyTotal::Success")
        onResponse(it)
    }
This currently returns 0 no matter the time of day it is asked.
Then we have our complementary code, which emulates what getDailyTotal does under the hood, also as per the developer site examples:
from: day start at 00:00:00, UTC+1
to: day end at 23:59:59, UTC+1
type: any DataType.
val readRequest = DataReadRequest.Builder()
    .enableServerQueries()
    .aggregate(type)
    .bucketByTime(1, TimeUnit.DAYS)
    .setTimeRange(from.time, to.time, TimeUnit.MILLISECONDS)
    .build()
val account = GoogleSignIn
    .getAccountForExtension(context, fitnessOptions!!)
GFitClient.request(context, account, readRequest) {
    if (it == null) {
        aggregatedRequestError(type)
    } else {
        Logger.i(TAG, "Aggregated ${type.name} received.")
    }
}
The common result here is either (1) a null or empty result, (2) actually getting the result (in the case of DataType.TYPE_STEP_COUNT_DELTA it sometimes happens), or (3) an ApiException with code 5012, "this data type can't be aggregated".
We are using the single-argument aggregate, since the two-argument form that took (type, type.aggregate) was deprecated a couple of versions ago, although some developer site examples still use it.
The use (or not) of .enableServerQueries() does not change the final result.
Finally, we assume the worst, request everything for that day no matter what, and aggregate manually. This usually reports results where the others did not; sadly, those results are never conclusive enough to feel comfortable.
val readRequest = DataReadRequest.Builder()
    .enableServerQueries()
    .read(type)
    .bucketByTime(1, TimeUnit.DAYS)
    .setTimeRange(from.time, to.time, TimeUnit.MILLISECONDS)
    .build()
val account = GoogleSignIn
    .getAccountForExtension(context, fitnessOptions!!)
This tends to work, but the manual processing of the data is complex given the intricate nested structure of buckets, datasets and data points.
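For what it's worth, the traversal itself boils down to three nested loops. A minimal Java sketch for the step-count case (the Fit types below are the real API; how you obtain the DataReadResponse depends on your own task/callback wrapper):

import com.google.android.gms.fitness.data.Bucket;
import com.google.android.gms.fitness.data.DataPoint;
import com.google.android.gms.fitness.data.DataSet;
import com.google.android.gms.fitness.data.Field;
import com.google.android.gms.fitness.result.DataReadResponse;

int sumSteps(DataReadResponse response) {
    int total = 0;
    // bucketByTime() yields one Bucket per day, each holding one DataSet
    // per requested data type.
    for (Bucket bucket : response.getBuckets()) {
        for (DataSet dataSet : bucket.getDataSets()) {
            for (DataPoint point : dataSet.getDataPoints()) {
                total += point.getValue(Field.FIELD_STEPS).asInt();
            }
        }
    }
    return total;
}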
We have also noticed issues when retrieving data that is clearly visible in the Fit app but doesn't appear through the SDK, and the other way around. For example, Huawei Health activities appear in the app while the SDK returns only a subset of them; conversely, the SDK returns us a whole night's worth of sleep sessions (light, REM, deep...) while the Fit app shows that same sleep as a single Sleep block without any sessions.
(Screenshot: the sleep session as shown in a third-party app, built from the same data the SDK returns to us.)
(Screenshot: the same sleep session as shown in the Google Fit app.)
The documentation says:
For the Android APIs, read by data type and the Fit platform will
return the merged stream by default. This automatically includes all
data available to your app, including data written by other apps. You
won't be able to see a list of which apps or devices the data came
from with the Android APIs.
We believe the merged stream is not behaving properly: not in real time (which could be explained by a delay between the app showing the data straight from the backend and the SDK not having the data written yet), but also not within minutes or hours; sometimes the data never shows up at all.
To explain how we retrieve this data: we have a background WorkManager CoroutineWorker that runs periodically (whenever the system allows it, given Doze-mode restrictions; what we ask for via the WorkManager configuration is a run every hour or two, to keep the data in line with what the Fit app displays). On each run we request data from the last update up to the end of the previous day, and/or we request today's daily total (or the total up to the current time, depending on how far down the "doesn't work" funnel we go, and on the last update's date).
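For reference, the scheduling part of that worker looks roughly like this (a Java sketch; FitSyncWorker is our hypothetical worker class, the rest is standard androidx.work):

import java.util.concurrent.TimeUnit;
import androidx.work.Constraints;
import androidx.work.ExistingPeriodicWorkPolicy;
import androidx.work.NetworkType;
import androidx.work.PeriodicWorkRequest;
import androidx.work.WorkManager;

PeriodicWorkRequest syncRequest =
        new PeriodicWorkRequest.Builder(FitSyncWorker.class, 1, TimeUnit.HOURS)
                .setConstraints(new Constraints.Builder()
                        .setRequiredNetworkType(NetworkType.CONNECTED)
                        .build())
                .build();

// KEEP avoids rescheduling (and resetting the period) on every app start;
// Doze can still defer individual runs.
WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "google-fit-sync", ExistingPeriodicWorkPolicy.KEEP, syncRequest);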
Is there anything wrong in our implementation?
Has Google Fit changed the way it reports its data to connected apps?
Can we somehow get more truthful data?
Is there any way to request the same data differently, more efficiently? We are mostly interested in daily summaries, totals and averages rather than time buckets / sessions. We request both, but they feed different data funnels covering different use cases.
There is no answer yet.
Our solution has ended up being a rowdy succession of checks for data, where on every failure we try a different way.
I'm using the Android Google Places API to autocomplete streets and addresses. The problem is that it returns streets from the whole country. Of course I added bounds to limit the search area, but it doesn't work correctly: the bounds only set a priority, so the best results appear higher in the list, nothing more.
The code:
AutocompleteFilter typeFilter = new AutocompleteFilter.Builder()
        .setTypeFilter(AutocompleteFilter.TYPE_FILTER_ADDRESS)
        .setCountry("RU")
        .build();
Intent intent =
        new PlaceAutocomplete.IntentBuilder(PlaceAutocomplete.MODE_OVERLAY)
                .zzih(searchString) // passes the search string from the toolbar
                .setFilter(typeFilter)
                .setBoundsBias(city.getBounds())
                .build(this);
In short the problem is:
When I type something like "Lenina Street" into the search, I see a lot of useless results outside the bounds set in .setBoundsBias(city.getBounds()). Just imagine that something like "Lenina Street" exists in almost every locality!
How can I fix the problem and limit search results?
P.S.
I know I can use the Google Places Web API or GeoDataApi.getAutocompletePredictions() and filter the results manually, but that means I have to write the UI manually too, which I don't want to do.
It's even worse than I thought. Even if I get results from the Web API or through GeoDataApi, I only have predictions, which don't contain coordinates, only a placeId. So if I want to filter predictions by coordinates, I have to make a request for each placeId. In other words, if I get 20 places, I have to make 20 more requests just to find out their coordinates.
I can also add the city name to the search string, which makes the results better (but not perfect); it also makes entering the address unclear, and the city name takes up space, so it's not a good solution either.
I'm afraid Places API for Android doesn't support strict bounds yet. There is a feature request in Google Issue tracker to implement this:
https://issuetracker.google.com/issues/38188994
Feel free to star this feature request to add your vote and subscribe to notifications from Google.
In the meantime, the workaround might be to use the Places API web service, which supports strict bounds, and implement the UI manually.
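For anyone going that route, the web service request looks something like this (a Java sketch; strictbounds is the parameter that actually enforces the limit, and the coordinates and radius are made-up placeholders):

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLEncoder;

// Returns the JSON predictions; parse with your JSON library of choice.
InputStream queryAutocomplete(String input, String apiKey) throws IOException {
    String url = "https://maps.googleapis.com/maps/api/place/autocomplete/json"
            + "?input=" + URLEncoder.encode(input, "UTF-8")
            + "&types=address"
            + "&components=country:ru"
            + "&location=55.7558,37.6173" // center of the allowed area (placeholder)
            + "&radius=20000"             // meters (placeholder)
            + "&strictbounds"             // discard results outside location+radius
            + "&key=" + apiKey;
    return new URL(url).openStream();
}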
UPDATE
The feature request was marked as Fixed by Google. Have a look at https://stackoverflow.com/a/50134855/5140781, which shows how to apply strict bounds in the Places API for Android.
Let's say I want to calculate the cumulative estimate of my defects. I do
double estimate = 0.0;
Double tEstimate;
Collection<Defect> defects = project.getDefects(null);

for (Defect d : defects) {
    tEstimate = d.getEstimate(); // each call is a round trip to the server
    if (tEstimate != null) {
        estimate += tEstimate;
    }
}
Here each call to d.getEstimate() makes a callback to the server, meaning this code runs extremely slowly. I would like to take a one-time performance hit up front and download all the info along with the Defect objects, probably including some information I won't use, but avoid paying the latency of a server callback on every iteration of the loop.
You are using the VersionOne Object Model SDK. It does lack robustness, for the very reason you are complaining about. One of the inefficiencies is in how it handles a request for a list of assets: it first fetches all of the assets with a predetermined set of attributes, such as AssetState, and checks whether each one is a dead asset. After this, it makes another call to get the same list of assets again, but with your specified attributes. This could be remedied with a greedy strategy that fetches a set of attributes such that each member of the set is returned regardless of which attributes are requested in your .get_() method. Why? Something like this already (sort of) happens in the REST-based VersionOne API as it stands. If the query returned all attributes, it would probably be a little wasteful, especially for humongous backlogs.
In any case, VersionOne will be deprecating the Object Model in the near future, so if you plan to write a lot of code against the OM, take that into consideration.
Here are some ways to circumvent this problem:
1) Rewrite your code to use the VersionOne APIClient SDK. It has XML plumbing that will save you a lot of time writing your own. It is a little more verbose, but it is more powerful, fast and efficient. The Object Model is actually built on top of the APIClient (see the sketch after this list).
2) Rewrite your code using Java and the raw VersionOne REST API. This requires that you understand HTTP and the VersionOne REST API.
3) If you cannot move away from the Object Model, you can mix the two SDKs: when you need to read large amounts of data, use APIClient code for that segment. This is kind of pointless when you could just learn the APIClient and use it exclusively, unless you have a huge investment in the Object Model and can't change. The code gets mucky really fast. Not recommended.
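To illustrate option 1: with the APIClient you name the attributes you want up front, and they come back with the assets in a single retrieve instead of one round trip per defect. Treat this as a sketch against the classic Java APIClient; connection classes and exception handling vary by version, and the server URL and credentials are placeholders:

import com.versionone.apiclient.*;

double sumDefectEstimates() throws Exception {
    V1APIConnector metaConnector =
            new V1APIConnector("https://server/VersionOne/meta.v1/");
    V1APIConnector dataConnector =
            new V1APIConnector("https://server/VersionOne/rest-1.v1/", "user", "password");
    IMetaModel metaModel = new MetaModel(metaConnector);
    IServices services = new Services(metaModel, dataConnector);

    IAssetType defectType = metaModel.getAssetType("Defect");
    IAttributeDefinition estimateDef = defectType.getAttributeDefinition("Estimate");

    // Select Estimate up front so it arrives with the assets: one round trip.
    Query query = new Query(defectType);
    query.getSelection().add(estimateDef);

    double estimate = 0.0;
    QueryResult result = services.retrieve(query);
    for (Asset defect : result.getAssets()) {
        Object value = defect.getAttribute(estimateDef).getValue();
        if (value != null) {
            estimate += Double.parseDouble(value.toString());
        }
    }
    return estimate;
}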
The rest-1.v1 API endpoint exposes operations for assets, including DeepCopy. There is no client code that enumerates all of the operations, so you must first explore the asset using the meta.v1 API endpoint. Using the APIClient backdoor from the Object Model, you can get to the classes that allow you to call an operation once you know its name.
I'm working on a project which uses the Facebook Graph API to download information about users of our application and their Facebook friends. There is a need to update this data, but as I understand it, Real-Time Updates are not an option. For example, I would like updates to the profile feeds of friends of our app's users, and I don't see a way to do that with Real-Time Updates.
Could someone give me some advice on this update mechanism? I need to update app users, their friend connections, and the profile feeds of users and their friends. I understand I'll have to poll Facebook's servers to retrieve this data. What I'm trying to find out are some good practices for doing this: update frequency? Ways to recognize that data has changed? If anyone has experience with this kind of thing, any advice would mean a lot.
Thanks.
You can use the since= query string parameter of the Graph API call. Here's some pseudocode (in Java form) to help you along:

// Hypothetical helpers: getLastPostDateFromDataStore, graphApiGet,
// getNewestStreamItemDate and storeLastPostDateIntoDataStore stand in
// for your own persistence and HTTP code.
String usersLastPostDate = getLastPostDateFromDataStore(userId);
List<StreamItem> streamItems;
if (usersLastPostDate == null) {
    // First run: fetch the whole feed and remember the newest item's date.
    streamItems = graphApiGet(userId, "me/feed");
    String lastStreamItemDate = getNewestStreamItemDate(streamItems);
    storeLastPostDateIntoDataStore(userId, lastStreamItemDate);
} else {
    // Later runs: only fetch items posted since the stored date.
    streamItems = graphApiGet(userId, "me/feed?since=" + usersLastPostDate);
}
Not massively useful for your use case (as you're wanting to get data which changes frequently), but worth pointing out that the Graph API now supports ETags - https://developers.facebook.com/blog/post/627/.
ETags will tell you whether the data has changed since the last time you requested it. This won't stop you from hitting Facebook's API throttling limits, but it is a quick and easy way to tell whether anything has changed since you last asked.
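Using an ETag is a plain HTTP conditional GET: keep the ETag from each response and send it back in an If-None-Match header; a 304 answer means nothing changed. A minimal Java sketch (the endpoint shape is illustrative; persist the ETag however suits you):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Returns null when Facebook answers 304 Not Modified (nothing changed).
HttpURLConnection fetchFeedIfChanged(String userId, String accessToken,
                                     String cachedEtag) throws IOException {
    URL url = new URL("https://graph.facebook.com/" + userId
            + "/feed?access_token=" + accessToken);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    if (cachedEtag != null) {
        conn.setRequestProperty("If-None-Match", cachedEtag); // conditional GET
    }
    if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
        return null;
    }
    // Store conn.getHeaderField("ETag") for the next poll, then read the body.
    return conn;
}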
There is no one answer to your question, as it depends on what your application is doing. How often do you need the updated information? If your data is stale for 5 minutes, is that really a problem? Can you fetch the data from Facebook lazily, when some user action requires it?
If you do need to do a lot of polling, try to use non-blocking IO, especially if you expect to have a lot of open HTTP requests to Facebook while you're polling. Build a reliable queueing mechanism and HTTP poller to ensure requests are being made as expected. Without any idea of what technology stack you're using, it's hard to be more specific than that.
HTH
What about the Open Graph Subscription system?
Is there a generally-accepted way to return a large list of objects using Java EE?
For example, if you had a database ResultSet that had millions of objects how would you return those objects to a (remote) client application?
Another example -- that is closer to what I'm actually doing -- would be to aggregate data from hundreds of sources, normalize it, and incrementally transfer it to a client system as a single "list".
Since all the data cannot fit in memory, I was thinking that a combination of a stateful SessionBean and some sort of custom Iterator that called back to the server would do the trick.
So, in other words, if I have an API like Iterator<Data> getData() then what's a good way to implement getData() and Iterator<Data>?
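For concreteness, here's the rough shape I have in mind: the remote bean serves fixed-size pages, and a client-side iterator pulls the next page on demand (all names here are hypothetical):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Remote view of the stateful session bean: one page per call.
interface DataPager {
    List<Data> nextPage(int maxResults); // empty list once exhausted
}

// Client-side iterator that lazily refills from the remote pager.
class PagedIterator implements Iterator<Data> {
    private final DataPager pager;
    private final int pageSize;
    private Iterator<Data> current = new ArrayList<Data>().iterator();
    private boolean exhausted = false;

    PagedIterator(DataPager pager, int pageSize) {
        this.pager = pager;
        this.pageSize = pageSize;
    }

    public boolean hasNext() {
        while (!current.hasNext() && !exhausted) {
            List<Data> page = pager.nextPage(pageSize); // one remote call per page
            exhausted = page.isEmpty();
            current = page.iterator();
        }
        return current.hasNext();
    }

    public Data next() {
        if (!hasNext()) throw new NoSuchElementException();
        return current.next();
    }

    public void remove() { throw new UnsupportedOperationException(); }
}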
How have you successfully solved this problem in the past?
Definitely don't duplicate the entire DB into Java's memory. That makes no sense and only makes things unnecessarily slow and memory-hogging. Rather, introduce pagination at the database level: query only the data you actually need to display on the current page, like Google does.
If you have a hard time implementing this properly and/or figuring out the SQL query for your specific database, have a look at this answer. For the JPA/Hibernate equivalent, have a look at this answer.
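To make the idea concrete, here is a plain-JDBC sketch of fetching one page (LIMIT/OFFSET is the MySQL/PostgreSQL syntax; the Data constructor and table columns are assumed for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Fetches exactly one page; 'firstRow' is the zero-based offset.
List<Data> listPage(Connection connection, int firstRow, int rowCount)
        throws SQLException {
    String sql = "SELECT id, name FROM data ORDER BY id LIMIT ? OFFSET ?";
    List<Data> page = new ArrayList<Data>();
    PreparedStatement statement = connection.prepareStatement(sql);
    try {
        statement.setInt(1, rowCount);
        statement.setInt(2, firstRow);
        ResultSet resultSet = statement.executeQuery();
        while (resultSet.next()) {
            page.add(new Data(resultSet.getLong("id"), resultSet.getString("name")));
        }
    } finally {
        statement.close();
    }
    return page;
}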
Update as per the comments (which actually changes the entire question subject...), here's a basic (pseudo) kickoff example:
List<Source> inputSources = createItSomehow();
Source outputSource = createItSomehow();
for (Source inputSource : inputSources) {
    while (inputSource.next()) {
        outputSource.write(inputSource.read());
    }
}
This way you effectively end up with a single entry in Java's memory instead of the entire collection as in the following (inefficient) example:
List<Source> inputSources = createItSomehow();
List<Entry> entries = new ArrayList<Entry>();
for (Source inputSource : inputSources) {
    while (inputSource.next()) {
        entries.add(inputSource.read());
    }
}
Source outputSource = createItSomehow();
for (Entry entry : entries) {
    outputSource.write(entry);
}
Pagination is a good solution when working with a web-based UI. Sometimes, however, it is much more efficient to stream everything in one call. The rmiio library was written explicitly for this purpose and is already known to work in a variety of app servers.
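The basic rmiio pattern, roughly following the library's own examples (class names are from com.healthmarketscience.rmiio; double-check them against the version you use, and serializing your own objects onto the stream is up to you):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import com.healthmarketscience.rmiio.RemoteInputStream;
import com.healthmarketscience.rmiio.RemoteInputStreamClient;
import com.healthmarketscience.rmiio.RemoteInputStreamServer;
import com.healthmarketscience.rmiio.SimpleRemoteInputStream;

// Server side: export a stream over RMI instead of materializing a List.
RemoteInputStream exportData(String fileName) throws IOException {
    RemoteInputStreamServer server = new SimpleRemoteInputStream(
            new BufferedInputStream(new FileInputStream(fileName)));
    return server.export();
}

// Client side: wrap the remote handle back into a plain InputStream.
void consume(RemoteInputStream remote) throws IOException {
    InputStream in = RemoteInputStreamClient.wrap(remote);
    byte[] buffer = new byte[8192];
    for (int n; (n = in.read(buffer)) != -1; ) {
        // process the chunk; nothing is ever fully buffered in memory
    }
    in.close();
}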
If your list is huge, you must assume it can't fit in memory, or at least that if your server has to handle many such requests concurrently, you run a high risk of OutOfMemoryError.
So basically, what you do is paging with batch reads: say you load a thousand objects from your database, send them in the response, and loop until you have processed all the objects (see the response from BalusC).
The problem is the same on the client side, and you'll likely need to stream the data to the file system to prevent out-of-memory errors.
Please also note: it is okay to load millions of objects from a database as an administrative task, like performing a backup or an export for some exceptional case. But you should not offer it as a request any user could make. It would be slow and drain server resources.