This question may have been asked before, but I am struggling to frame a search query for it.
AWS Cognito works like this: you pass the IdToken plus the authentication provider to Cognito Identity Federation, and it provides temporary credentials valid for an hour. What happens after an hour is that I get an AuthenticationException.
Now, I observed that CognitoCachingCredentialsProvider tries to refresh before performing a given task, say executing a Lambda function or making a DynamoDB query. But what is a good way to handle expiry: intercept the refresh, fetch the token first, set it on the credentials provider, and then continue the refresh?
Whether it is a User Pool IdToken or Google's IdToken, all I need to know is how to tell that the credentials are expired, so I can fetch new IdTokens from the providers and refresh the credentials before processing the request.
I have tried an hourly task (55 minutes, actually), but sometimes it doesn't fire, and it has not been very reliable so far.
Thanks
It's a bit tricky to get just right, but there are two common ways to handle it.
One is to do what you suggested - track when the token was vended, and then refresh if it's within some threshold of expiring (e.g. refresh if it's < 5 minutes from expiry).
The other is to blindly try the call, then catch the exception that gets thrown when the token is expired and refresh/retry there. If you go this route, be careful to retry only once so you don't spam the service if the request isn't just right.
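The first approach can be sketched as a small expiry clock. The class and method names below (CredentialClock, shouldRefresh) are illustrative, not part of the AWS SDK:

```java
import java.time.Duration;
import java.time.Instant;

// Sketch: remember when credentials were vended and refresh proactively
// once we are within some threshold of the 1-hour expiry.
public class CredentialClock {
    private final Duration lifetime;   // e.g. 1 hour for Cognito temporary credentials
    private final Duration threshold;  // refresh when this close to expiry, e.g. 5 minutes
    private Instant vendedAt;

    public CredentialClock(Duration lifetime, Duration threshold) {
        this.lifetime = lifetime;
        this.threshold = threshold;
    }

    // Call this right after fetching fresh credentials.
    public void markVended(Instant now) {
        this.vendedAt = now;
    }

    // True if we have no credentials yet, or they expire within the threshold.
    public boolean shouldRefresh(Instant now) {
        if (vendedAt == null) return true;
        Instant expiry = vendedAt.plus(lifetime);
        return !now.isBefore(expiry.minus(threshold));
    }
}
```

Before each Lambda or DynamoDB call, check shouldRefresh(); if it returns true, fetch a fresh IdToken from the provider, hand it to the credentials provider, and call markVended() once the new credentials arrive.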
Related
Евгений Кравцов:
I am developing a tiny service with an HTTP backend and an Android app, and I recently felt a lack of knowledge about such systems.
Case:
- The client places an order in the app and sends a request to the server.
- The server successfully receives the data and creates a database row for the order.
- After the database work completes, the backend tries to respond to the app with a 200 success code.
- The app user has internet connection problems and cannot receive the server's response. The app gets a timeout exception and notifies the user that the order was not successful.
- After some time, the internet connection on the user's device is restored, and they send another request with the same order.
- The backend receives this again and creates a duplicate of the previous order.
So I got two orders in the database, but the user wanted to create only one.
Question: how can I prevent such behavior?
You should add a unique key to your table in the DB, for example defining it as a compound key of (user_id, order_type), so a specific user cannot make two orders with the same order type.
This will cause the second DB insert to fail.
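A minimal in-memory sketch of the compound-key idea (the OrderStore class is illustrative; in a real database the uniqueness comes from a UNIQUE constraint, and the duplicate shows up as a caught duplicate-key error):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: the (userId, orderType) pair acts as a unique key, so a retried
// request cannot create a second order.
public class OrderStore {
    private final Set<String> keys = ConcurrentHashMap.newKeySet();

    // Returns true if the order was created, false if it already existed
    // (i.e. this request is a duplicate of an earlier one).
    public boolean createOrder(String userId, String orderType) {
        return keys.add(userId + "|" + orderType);
    }
}
```

On the duplicate path the server can still respond with success, since the order already exists; that way the retrying client sees the outcome it expected instead of an error.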
I'm new to Shiro and got confused about the current subject concept:
Subject subject = SecurityUtils.getSubject(); // gets the current subject
subject.login(...); // do login
subject.logout(); // do logout
In my application I need to run work from different users concurrently, so multiple users (subjects) coexist, with new users logging in and old users logging out on the fly. Clients send work with [username, password] to the server, and the server checks the credentials with Shiro. If [username, password] does not exist in the database, the work is rejected; if it does, the user is logged in and the work is dispatched for processing. In the meantime, other clients send their work and log in. My question is: at a later time, when the work for a user is done and I need to log that user out, how do I get the subject for it?
SecurityUtils.getSubject() returns the subject bound to the current thread (the typical web-app pattern); the session typically comes from information in an HTTP session or HTTP request. (Shiro is NOT bound to the Servlet API; that is just a really common model.) So in the context of your application, the request might be just some method call carrying the current User/Subject (I'm not sure how your application makes this association or authenticates them, but that is a different question). This means you may not need to use SecurityUtils.getSubject().
Once you have a Subject and if you want to use SecurityUtils.getSubject() elsewhere in your code, you could wrap the call in a Callable: https://shiro.apache.org/subject.html#automatic-association (this is basically what Shiro's Servlet module does)
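To see what that thread binding amounts to, here is a minimal stand-in (not Shiro's actual classes) mimicking what wrapping a Callable via Subject.associateWith does with a ThreadLocal:

```java
import java.util.concurrent.Callable;

// Sketch of thread-bound subject association. A String stands in for the
// Subject; all names here are illustrative, not Shiro's API.
public class SubjectBinding {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    // Analogue of SecurityUtils.getSubject(): whatever is bound to this thread.
    public static String getSubject() {
        return CURRENT.get();
    }

    // Analogue of subject.associateWith(callable): returns a Callable that
    // binds the given subject for the duration of the call, then restores
    // the previous binding.
    public static <V> Callable<V> associateWith(String subject, Callable<V> task) {
        return () -> {
            String previous = CURRENT.get();
            CURRENT.set(subject);
            try {
                return task.call();
            } finally {
                CURRENT.set(previous);
            }
        };
    }

    // Convenience: run a task as the given subject and return its result.
    public static String runAs(String subject, Callable<String> task) {
        try {
            return associateWith(subject, task).call();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Inside the wrapped task, getSubject() sees the bound subject; outside it, the binding is gone. For your logout problem, it may be simpler to keep your own map from username to Subject at login time and look it up when the work finishes.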
I have a use case in which I am consuming the Twitter filter stream in a multithreaded environment using Twitter HBC.
On each request, I receive keywords to track from the user, process them, and save them to the database.
Example
Request 1: track keywords [apple, google]
Request 2: track keywords [yahoo, microsoft]
What I am currently doing is opening a new thread for each request and processing it there.
I am making a connection for every endpoint as below (I am following this official HBC example):
endpoint.trackTerms(Lists.newArrayList("twitterapi", "#yolo"));
Authentication auth = new OAuth1(consumerKey, consumerSecret, token, secret);
// Create a new BasicClient. By default gzip is enabled.
Client client = new ClientBuilder()
        .hosts(Constants.STREAM_HOST)
        .endpoint(endpoint)
        .authentication(auth)
        .processor(new StringDelimitedProcessor(queue))
        .build();
// Establish a connection
client.connect();
The problem I am facing is that Twitter warns me about having more than one connection.
But according to the above code and my use case, I have to make a new instance of the Client object for every request I receive, since my endpoints
(trackTerms) are different for each request.
What is the proper way to connect to Twitter for my use case (a multithreaded system), so as to avoid the too-many-connections and rate-limit warnings?
Current warnings from Twitter:
2017-02-01 13:47:33 INFO TwitterStreamImpl:62 - 420:Returned by the Search and Trends API when you are being rate limited (https://dev.twitter.com/docs/rate-limiting).
Returned by the Streaming API:
Too many login attempts in a short period of time.
Running too many copies of the same application authenticating with the same account name.
Exceeded connection limit for user
Thanks.
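One common remedy, sketched under the assumption that a single statuses/filter connection can track many terms at once, is to share one client and merge every request's keywords into a single term set, restarting the stream only when the set actually grows. The TermAggregator class below is illustrative, not part of HBC:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: aggregate every request's keywords into one shared term set so a
// single streaming connection can serve all requests.
public class TermAggregator {
    private final Set<String> terms = ConcurrentHashMap.newKeySet();

    // Returns true if new terms were added, meaning the single stream
    // connection should be restarted with the enlarged term list
    // (endpoint.trackTerms(currentTerms()) followed by a reconnect).
    public synchronized boolean addTerms(List<String> keywords) {
        return terms.addAll(keywords);
    }

    // Snapshot of all terms tracked so far.
    public synchronized List<String> currentTerms() {
        return List.copyOf(terms);
    }
}
```

Each request thread then calls addTerms() instead of building its own Client, and a single owner thread restarts the one stream when addTerms() reports a change; incoming tweets are routed to the right request by matching keywords.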
I am measuring the cost of requests to GAE by inspecting the x-appengine-estimated-cpm-us-dollars header. This works great, and in combination with x-appengine-resource-usage and
x-traceurl I can get even more detailed information.
However, a large part of my application runs in the context of task queues, so a huge part of the instance-hour costs is consumed by queues. Each time code is executed outside of a request, its costs are not included in the x-appengine-estimated-cpm-us-dollars header.
I am looking for a way to measure the full cost consumed by each request, i.e. the costs generated by the request itself plus the cost of the tasks that the request added.
It may be overkill, but there is a tool to download Google App Engine logs and convert them to SQLite:
http://code.google.com/p/google-app-engine-samples/source/browse/trunk/logparser/logparser.py
With this tool, the CPM USD for both task requests and normal requests is downloaded together. You can store each day's log in a separate SQLite file and do as much analysis as you want.
As for relating the cost of a task back to its original request: the log data downloaded by this tool includes the full output of the logging module. So you can simply:
- log a generated ID in the original request,
- pass the ID to the task,
- log the received ID again in the task request,
- find normal/task request pairs via the ID.
For example:
# in the original request
a_id = generate_a_random_id()
logging.info(a_id)  # the id will be included in the log output
taskqueue.add(url='/path_to_task', params={'id': a_id})
# in the task request
a_id = self.request.get('id')
logging.info(a_id)
EDIT1
I think there is another possible way to estimate the cost of a normal request plus its task request.
The trick is to change the async task into a sync call (assuming the cost would be the same).
I haven't tried it, but it is much easier to test.
# in the original request, add a parameter to identify debug mode
debug = self.request.get('DEBUG')
if debug:
    self.redirect('/path_to_task')
else:
    taskqueue.add(url='/path_to_task')
Thus, when testing, send the normal request with the DEBUG parameter. It will first process the normal request and return x-appengine-estimated-cpm-us-dollars for it. It will then redirect your test client to the corresponding task request (the task request can also be accessed and triggered via its URL, like a normal request) and return x-appengine-estimated-cpm-us-dollars for that. You can simply add the two together to get the total cost.
I access a Sybase database from a Java application. I can connect to it, execute statements, all of this works fine.
My issue is that I would like to correctly handle the cases when the connection fails.
From my understanding, it can fail for the following reasons:
Incorrect password
Password expired
Account is locked
So my question is: how can I properly handle these error cases, how can I recognize which one happened?
From several tests, I identified the different cases:
Expired password
In this case, the database allows a connection, but a very limited one. The only thing you can call is the procedure to change the password. The connection returned comes with a SQLWarning with error code 4022 and a description stating:
The password has expired, but you are
still allowed to log in. You must
change your password before you can
continue. If a login trigger is set,
it will not be executed.
Thanks to the specific error code, it is possible to recognize this case and offer to change the password in the client program.
Invalid password and Locked account
There is no difference between these two cases. When requesting a connection, it throws a SQLException that links to a SQLWarning as the next exception, with error code 4002 and a very terse description:
Login failed.
As such, there is not really a way to handle these cases specifically.
Bonus case: password expiring soon
When a password will expire soon, the connection returned will contain a SQLWarning with the code 4023, stating:
Your password will expire in %s days.
This allows the client program to show a warning and propose changing the password right away.
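A sketch of how these three codes could be recognized from the JDBC warning/exception chain (class and method names are illustrative; the codes 4022, 4023 and 4002 are the ones observed above):

```java
import java.sql.SQLException;

// Sketch: classify a Sybase login outcome by walking the chained
// warnings/exceptions and mapping the known vendor error codes.
public class SybaseLoginClassifier {

    public enum LoginState { OK, PASSWORD_EXPIRED, PASSWORD_EXPIRING, LOGIN_FAILED, UNKNOWN }

    // SQLWarning extends SQLException, so the same walk covers both the
    // warning attached to a successful connection and the exception thrown
    // on a failed one.
    public static LoginState classify(SQLException first) {
        for (SQLException e = first; e != null; e = e.getNextException()) {
            switch (e.getErrorCode()) {
                case 4022: return LoginState.PASSWORD_EXPIRED;
                case 4023: return LoginState.PASSWORD_EXPIRING;
                case 4002: return LoginState.LOGIN_FAILED;
            }
        }
        return first == null ? LoginState.OK : LoginState.UNKNOWN;
    }
}
```

On PASSWORD_EXPIRED the client can open the change-password dialog; on PASSWORD_EXPIRING it can show a reminder; LOGIN_FAILED covers both the wrong-password and locked-account cases, which cannot be told apart.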
Handle it like you would other messages from the database. Catch the exception and log the message (this will help you identify the cause). At the same time, display a message to the user saying that either a database error occurred or the database could not be accessed.