I am working with a time-based Google authentication code (20-second validity), and I need to check the remaining validity time before reading the 4-digit code. The flow is:
Collect the Google auth code using TOTP.
Apply the code automatically in our application.
Problem:
When the code is read right at the edge of the window (the 18th/19th second) and sent automatically to our text box, its validity has already expired and authentication fails. So I want to check the code along with its remaining validity time:
a. If the validity time is greater than 10 seconds, get the code and pass it to the text box.
b. If the validity time is less than 10 seconds, wait for 10 seconds.
code:
// Base32 and Hex come from commons-codec; TOTP comes from the totp jar.
public static String getTOTPCode(String secretKey) {
    // Normalize the Base32 secret: strip spaces and uppercase it.
    String normalizedBase32Key = secretKey.replace(" ", "").toUpperCase();
    Base32 base32 = new Base32();
    byte[] bytes = base32.decode(normalizedBase32Key);
    // The TOTP library expects the key as a hex string.
    String hexKey = Hex.encodeHexString(bytes);
    return TOTP.getOTP(hexKey);
}
Jar files:
commons-codec-1.8.jar
totp-1.0.jar
Referring to the above, how can I get the remaining validity time for the OTP?
See how TOTP works. The server should generally accept codes from ±1 time interval anyway, so most probably you don't need such artificial delays.
Other than that, depending on the TOTP parameters, you know exactly when the next time interval starts. So you can simply check how close you are to that rollover point based on the current time, for example:
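A minimal sketch of that check (assuming the standard 30-second TOTP time step used by Google Authenticator; change TIME_STEP_SECONDS if your setup really uses 20 seconds):
private static final long TIME_STEP_SECONDS = 30; // Google Authenticator default; adjust if yours differs

// Seconds left before the current TOTP code rolls over to the next window.
public static long secondsRemainingInWindow() {
    long nowSeconds = System.currentTimeMillis() / 1000L;
    return TIME_STEP_SECONDS - (nowSeconds % TIME_STEP_SECONDS);
}

// Your rule: only read the code if at least 10 seconds of validity remain,
// otherwise sleep into the next window and read a fresh code.
public static String getFreshTOTPCode(String secretKey) throws InterruptedException {
    long remaining = secondsRemainingInWindow();
    if (remaining < 10) {
        Thread.sleep((remaining + 1) * 1000L);
    }
    return getTOTPCode(secretKey); // the method from the question
}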
P.S. I've heard some servers adjust their time calculations based on the client's previous auth attempts so that client/server time shifts do not break auth, e.g. when the client machine doesn't use NTP and its clock has drifted.
Update: about generating the timestamp for TOTP, see: Do the seconds count?
I am new here :) I am developing a small Android app (Java) and I need a way to measure the seconds elapsed between two events.
The problem with System.currentTimeMillis() is that the user of the device can change the system date, for example after the first event, so when I take the value returned by System.currentTimeMillis() after the second event and compute the difference between the two values, the result is not valid at all.
Another option I tried was System.nanoTime(). Even if the user changes the system time, the seconds count stays valid. But the problem here is that if the user switches off the device after the first event, the value returned by System.nanoTime() after the second event is not valid either, because the System.nanoTime() counter restarts when the device restarts, and therefore the elapsed time is again wrong.
Does anybody know a method to count the seconds between two events that survives user date changes and device restarts?
Thanks in advance!
Since you want to avoid the errors introduced by the user messing with the system date, you cannot rely on that source of information. A reliable (and, if it matters, accurate) time can be obtained using NTP (Network Time Protocol). Take a look at this question for more details: Java NTP client.
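A rough sketch using Apache Commons Net (the commons-net library), assuming pool.ntp.org is reachable; on Android this has to run off the main thread and needs the INTERNET permission:
import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;
import java.io.IOException;
import java.net.InetAddress;

// Current time in epoch milliseconds as reported by an NTP server.
public static long currentTimeMillisFromNtp() throws IOException {
    NTPUDPClient client = new NTPUDPClient();
    client.setDefaultTimeout(3000); // fail fast if the server is unreachable
    try {
        TimeInfo info = client.getTime(InetAddress.getByName("pool.ntp.org"));
        return info.getMessage().getTransmitTimeStamp().getTime();
    } finally {
        client.close();
    }
}
Calling this at each event and subtracting the two values gives you the elapsed milliseconds independently of the device clock.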
An alternative you may consider: instead of finding a reliable clock to compute the time difference, you can check whether the user has changed the system clock. A simple way would be to store the timestamp and check periodically (every second or so) whether the new timestamp you get from the system clock is smaller than (i.e. before) the previous one. If so, you can take action.
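A minimal sketch of that periodic check (the one-second interval and the callback are illustrative choices, not a fixed API):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class ClockChangeWatcher {
    private volatile long lastSeen = System.currentTimeMillis();

    public void start(final Runnable onClockMovedBackwards) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                long now = System.currentTimeMillis();
                if (now < lastSeen) { // wall clock jumped backwards: the user changed the date
                    onClockMovedBackwards.run();
                }
                lastSeen = now;
            }
        }, 1, 1, TimeUnit.SECONDS);
    }
}
Note that this only detects backwards jumps; it will not catch the user moving the clock forward.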
From a servlet, how can I set a cookie that will never expire?
I have tried doing it like this:
Cookie cookie = new Cookie("xyz","");
cookie.setMaxAge(-1);
But when I do it this way, it expires as soon as the user closes the browser.
If you call setMaxAge() with a negative number (or don't call it at all), the cookie will expire at the end of the user's browser session. From the documentation:
A negative value means that the cookie is not stored persistently and will be deleted when the Web browser exits.
The typical approach for cookies that "never" expire is to set the expiration time far into the future. For example:
cookie.setMaxAge(60 * 60 * 24 * 365 * 10);
This sets the expiration time to 10 years in the future. It's highly probable that the user will clear their cookies (or just get a new computer) within the next 10 years, so this is effectively the same as specifying that it should never expire.
Keep in mind that cookies don't actually have a concept of a "maximum age"; the way this is actually implemented is that Java adds the specified number of seconds to the server's current system time and sends that time as the expiration time for the cookie. So large mismatches between server time and client time can complicate things a little. The best you can do (at least, easily) is ensure your server clock is set correctly and hope the clients' clocks are too.
Setting the maximum age: you use setMaxAge to specify how long (in seconds) the cookie should be valid. The following would set up a cookie for 24 hours:
cookie.setMaxAge(60*60*24);
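For completeness, a minimal sketch of the full flow in a servlet (the cookie name, value, and ten-year lifetime are just examples); note that the cookie must be added to the response or it is never sent:
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LongLivedCookieServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        Cookie cookie = new Cookie("xyz", "some-value");
        cookie.setMaxAge(60 * 60 * 24 * 365 * 10); // ~10 years, effectively "never expires"
        cookie.setPath("/");                       // make it visible to the whole app
        resp.addCookie(cookie);                    // without this the cookie is never sent
    }
}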
I recently started load testing my webapp.
I used the Apache access log sampler. I followed this tutorial:
https://jmeter.apache.org/usermanual/jmeter_accesslog_sampler_step_by_step.pdf
I am able to make it work. But now the problem is that it replayed all the GET requests in less than 10 minutes.
I want JMeter to run the GET requests based on the timestamp at which each request was originally made.
I am not able to find any such configuration online.
I could write a script that curls each GET request at its particular timestamp, but I want to use JMeter.
Is it possible?
EDIT
I created a sample CSV file with the lines below:
0,/myAPP/home
5000,/myAPP/home
5000,/myAPP/home
First I created a thread group as shown in the image:
Here I selected loop count "forever". If it is not selected, only the first line in the CSV file runs; the rest of the lines do not run.
Then I added a CSV Data Set Config as shown in the image:
Then I added a Constant Timer as shown in the image:
Then I added an HTTP Request as shown in the image:
I added a View Results Tree listener and hit the play button.
When I look at the sample start time in View Results Tree for each of the samples, the delay does not match the delay in the CSV file. What am I doing wrong?
EDIT2
I made the Constant Timer a child of the HTTP Request. Please find the timings of the requests in the screenshot below. Do you see anything wrong?
EDIT3
I followed the BeanShell Timer approach and it worked fine when the delay is greater than the previous response time. But it didn't work properly when the previous response time is greater than the delay.
I modified the CSV file as below (reduced the delay to 100 ms):
0,/myAPP/home
100,/myAPP/home
100,/myAPP/home
I deleted the Constant Timer and added the BeanShell Timer below.
This is the results table:
These are the log lines:
The out of the box access log sampler won't work for what you're trying to do.
You have two options:
1 - Do all of the processing work outside of JMeter and create a CSV file with two fields, URL and delay, and then use the CSV Data Set Config.
Youtube JMeter CSV tutorial:
http://www.youtube.com/watch?v=aEJNc3TW-g8
2 - Do the processing within JMeter. If you're a Java developer you can write a beanshell script to read the file and parse out the URL and timestamp and calculate the delay.
Here is an example:
How to read a string from .csv file and split it?
EDIT 1
Using the data from your question and screenshot everything looks like it's working fine.
A couple of things about JMeter delays (using timers):
- JMeter will add a delay AFTER the request (not before it).
- JMeter will start the delay AFTER the server is done responding.
In your case (I'm rounding to the nearest second):
Initial request at 12:59:53
+ Request took 24.5 seconds
+ 0 second delay
= next request should be at 13:00:18 which is indeed when the next request happened.
Second request at 13:00:18
+ Request took 1.8 seconds
+ 5 second delay
= next request should be at 13:00:25 which is indeed when the next request happened.
I think what you want is that the next request will NOT factor in the response time. Offhand, you'd need to create a delay of ${delay} - ${responseTime}
EDIT 2
In order to create a delay that will factor in the response time you need to use the beanshell timer and not the constant timer.
Here is the code (put this in the script section of the beanshell timer):
int rtime = Integer.parseInt(String.valueOf(prev.getTime())); // previous sampler's response time in ms (warns on the first run, when there is no previous sample)
int delay = Integer.parseInt(vars.get("delay"));              // desired delay from the CSV column
int sleep = delay - rtime;                                    // subtract the response time from the desired delay
log.info("Sleep for " + sleep + " milliseconds");
return sleep;
EDIT 3
What if response_time > desired_delay?
If the sleep calculation is less than zero, nothing will break. It will just sleep for zero milliseconds.
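If you prefer to make that explicit, a defensive variant of the same timer script (same CSV column and assumptions as above) would be:
int rtime = Integer.parseInt(String.valueOf(prev.getTime())); // previous response time in ms
int delay = Integer.parseInt(vars.get("delay"));              // desired delay from the CSV
return Math.max(0, delay - rtime);                            // never return a negative delay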
With just one thread there is no way to make an additional request if the existing request hasn't completed. Technically it should be possible to have a different thread start making requests when one thread isn't enough to keep the exact delays, but this would require inter-thread communication, which can get very messy very quickly and requires BeanShell scripting.
I know it's an old thread, but I encountered the same need: to re-simulate logged requests.
What I did was:
My CSV looks like {timestamp},{method}, for example:
2016-06-24T18:25:03.621Z,/myAPP/home
2016-06-24T18:25:04.621Z,/myAPP/home
I save the diff from now to the first timestamp in a setUp Thread Group as a property (not a variable, because variables are not shared between thread groups).
I use this diff to calculate the delay before every request is executed (see the sketch after this answer).
It works perfectly, is multithreaded, and requests are never repeated :)
*Note: I wanted to go through the CSV only once; if you want to repeat it more than once you'll have to recalculate the diff initialized in step 2.
*Note 2: make sure you have enough threads to run all requests asynchronously.
This approach will give you 2 main benefits over delays:
- It uses the real log data without converting it to delays.
- It solves your problem when the delay is smaller than the response time, because in that case the request will run from another thread.
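A rough BeanShell sketch of this scheme (the ISO-8601 date format matches the CSV above; the variable name timestamp, the property name replayOffset, and the hard-coded first timestamp are illustrative assumptions). In the setUp Thread Group (e.g. a BeanShell Sampler):
import java.text.SimpleDateFormat;
import java.util.TimeZone;

SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
long firstLogged = fmt.parse("2016-06-24T18:25:03.621Z").getTime(); // first timestamp in the CSV
props.put("replayOffset", String.valueOf(System.currentTimeMillis() - firstLogged));
Then in a BeanShell Timer attached to the HTTP Request:
import java.text.SimpleDateFormat;
import java.util.TimeZone;

SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
long logged = fmt.parse(vars.get("timestamp")).getTime();           // column from the CSV Data Set Config
long offset = Long.parseLong(props.get("replayOffset").toString());
long wait = (logged + offset) - System.currentTimeMillis();
return wait > 0 ? wait : 0;                                         // never return a negative delay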
I am working on Google App Engine, on something that requires me to act only if the time difference between the time sent by the client and the time at the server is less than 15 seconds. The code works perfectly when I try it on the test server (on my own machine), but fails when the code is deployed on App Engine. I am guessing that is probably because the timezone at the server is different, so a few hours get added or subtracted.
What I basically do is let the client send its timestamp along with some other data. When the server subsequently needs to calculate the time difference, I subtract the server's timestamp from the client's.
If the timezone is different, then clearly I will run into problems.
I know that one solution to avoid this timezone fiasco is to let the server timestamp both the initial request and the subsequent processing later on, but for my application it is essential that the timer starts ticking right when the client makes the request, and that if 15 seconds have passed with respect to the client, no action is taken by the server.
Since I am the developer of both client side and server side I can show you what I have done.
Sample client-side call:
new Register().sendToServer(username, location, new Timestamp(new Date().getTime()));
This is stored in the datastore and retrieved a little later, if some conditions are met.
Server-side sample to find the difference:
Date date = new Timestamp(new Date().getTime());
Date d1 = (Date) timeStampList[i];
long timedif = Math.abs(d1.getTime() - date.getTime());
if (timedif <= 15000) {
    // do something
} else {
    // do something else
}
So basically, my question is: how do I normalize the timezone variations?
Thanks.
Simply use absolute Unix time: System.currentTimeMillis().
This time is absolute (i.e. the number of milliseconds since Jan 1st, 1970, UTC midnight), so you can simply calculate the difference.
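A minimal sketch of this, assuming the client sends its timestamp as epoch milliseconds (e.g. the value of new Date().getTime() in the browser) and that the code runs inside a servlet handler where request is the HttpServletRequest; the parameter name is illustrative:
long clientEpochMillis = Long.parseLong(request.getParameter("clientTime")); // sent by the client
long serverEpochMillis = System.currentTimeMillis();                         // absolute, timezone-free
long timedif = Math.abs(serverEpochMillis - clientEpochMillis);
if (timedif <= 15000) {
    // within 15 seconds: do something
} else {
    // do something else
}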
As @Diego Basch pointed out, if the time difference is on the order of 30 minutes to full hours, you should deal with the timezone difference, because your client is not in the same timezone as the server.
You can determine the client timezone in JavaScript with new Date().getTimezoneOffset(). Note: the offset to UTC is returned in minutes.
On the server you should be able to determine the timezone with Calendar.getInstance().getTimeZone().
How do I measure how long a client has to wait for a request?
On the server side it is easy, through a filter for example.
But if we want to take into account the total time, including latency and data transfer, it gets difficult.
Is it possible to access the underlying socket to see when the request is finished?
Or is it necessary to do some JavaScript tricks, maybe through clock synchronisation between browser and server? Are there any premade JavaScript helpers for this task?
You could wrap the HttpServletResponse object and the OutputStream returned by the HttpServletResponse. When output starts being written you could record a start date, and when it stops (or when it's flushed, etc.) you can record a stop date.
This can be used to calculate the length of time it took to stream all the data back to the client.
We're using it in our application and the numbers look reasonable.
Edit: you can set the start date in a servlet Filter to get the length of time the client waited. What I described above gives you the length of time it took to write output to the client; a rough sketch follows.
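A sketch of that wrapping approach in a Filter (class and field names are made up for illustration; this assumes the pre-3.1 Servlet API, where an anonymous ServletOutputStream only needs to implement write(int)):
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

public class ResponseTimingFilter implements Filter {

    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();                        // request reached the server
        TimingResponseWrapper wrapped = new TimingResponseWrapper((HttpServletResponse) resp);
        chain.doFilter(req, wrapped);
        long stop = wrapped.lastWrite != 0 ? wrapped.lastWrite : System.currentTimeMillis();
        System.out.println("Wrote response in " + (stop - start) + " ms");
    }

    public void init(FilterConfig cfg) {}
    public void destroy() {}

    static class TimingResponseWrapper extends HttpServletResponseWrapper {
        volatile long lastWrite;

        TimingResponseWrapper(HttpServletResponse response) { super(response); }

        public ServletOutputStream getOutputStream() throws IOException {
            final ServletOutputStream out = super.getOutputStream();
            return new ServletOutputStream() {
                public void write(int b) throws IOException {
                    out.write(b);
                    lastWrite = System.currentTimeMillis();             // time the last byte was handed to the container
                }
            };
        }
    }
}
If the servlet uses getWriter() rather than getOutputStream(), you would wrap the writer in the same way. Keep in mind this still only measures server-side write time, not the network latency to the client.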
There's no way you can know how long the client had to wait purely from the server side. You'll need some JavaScript.
You don't want to synchronize the client and server clocks, that's overkill. Just measure the time between when the client makes the request, and when it finishes displaying its response.
If the client is AJAX, this can be pretty easy: call new Date().getTime() to get the time in milliseconds when the request is made, and compare it to the time after the result is parsed. Then send this timing info to the server in the background.
For a non-AJAX application, when the user clicks on a request, use JavaScript to send the current timestamp (from the client's point of view) to the server along with the query, and pass that same timestamp back through to the client when the resulting page is reloaded. In that page's onLoad handler, measure the total elapsed time, and then send it back to the server - either using an XmlHttpRequest or tacking on an extra argument to the next request made to the server.
If you want to measure it from your browser to simulate any client request, you can watch the Net tab in Firebug to see how long it takes each piece of the page to download, and the download order.
Check out Jiffy-web, developed by Netflix to give them a more accurate view of the total page -> page rendering time.
I had the same problem, but this JavaOne paper really helped me solve it. I would suggest you go through it; it basically uses JavaScript to calculate the time.
You could set a 0-byte socket send buffer (and I don't exactly recommend this) so that when your blocking call to HttpResponse.send() returns, you have a closer idea as to when the last byte left, but travel time is still not included. Ekk, I feel queasy even mentioning it. You can do this in Tomcat with connector-specific settings (Tomcat 6 Connector documentation).
Or you could come up with some sort of JavaScript timestamp approach, but I would not expect to set the client clock. Multiple calls to the web server would have to be made:
1 - a timestamp query
2 - the real request
3 - reporting the data
And this approach would cover latency, although you still have some jitter variance.
Hmmm...interesting problem you have there. :)