I am trying to chase down a problem with CAS that is causing the following exception to be thrown:
javax.naming.TimeLimitExceededException: [LDAP: error code 3 - Timelimit Exceeded]; remaining name ''
at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3097)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2987)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2794)
at com.sun.jndi.ldap.LdapNamingEnumeration.getNextBatch(LdapNamingEnumeration.java:129)
at com.sun.jndi.ldap.LdapNamingEnumeration.hasMoreImpl(LdapNamingEnumeration.java:198)
at com.sun.jndi.ldap.LdapNamingEnumeration.hasMore(LdapNamingEnumeration.java:171)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:295)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:361)...
The error is returned virtually instantly. The client-side timeout is set to 10 seconds, but that isn't what's occurring: based on reading through the com.sun.jndi.ldap code, it appears that the domain controller itself is returning a response with a status of 3, indicating that its time limit was exceeded.
We are hitting an Active Directory global catalog, and our filter and base are pretty broad: base = '', filter = (proxyAddresses=*:someone#somewhere.com)
However, the query succeeds some of the time, while at other times it returns an immediate status code 3.
Does anyone know what might be causing this kind of behavior? Or perhaps how to go about determining what exactly is occurring?
It turns out our search filter was too broad.
As you can see above, we were using a leading wildcard in the filter, and the query took a little less than 2 seconds when it succeeded.
However, 2 seconds is far shorter than the time limit configured in Active Directory, so I couldn't figure out why the error was occurring immediately (the failures didn't even take 2 seconds).
I assume AD must have been accruing the time taken by multiple requests from the same account, and at some point began returning the time limit exceeded error.
To solve it, we modified the search filter so that it no longer included a wildcard. The search now runs almost instantaneously, and the time limit exceeded error no longer occurs.
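For reference, a minimal sketch of the change, assuming a Spring LdapTemplate named ldapTemplate and an AttributesMapper of your own; the smtp: prefix in the fixed filter is an assumption about what the wildcard had been matching:

import javax.naming.directory.SearchControls;
import org.springframework.ldap.core.AttributesMapper;
import org.springframework.ldap.core.LdapTemplate;

SearchControls controls = new SearchControls();
controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
controls.setTimeLimit(10000); // client-side limit in ms; the server still enforces its own limit

// Before: the leading wildcard prevents AD from resolving the filter via the index
// String filter = "(proxyAddresses=*:someone#somewhere.com)";

// After: an exact value (the smtp: prefix here is illustrative) can use the index
String filter = "(proxyAddresses=smtp:someone#somewhere.com)";

// ldapTemplate and myAttributesMapper are assumed to exist elsewhere
ldapTemplate.search("", filter, controls, myAttributesMapper);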
I have a Java GAE instance that runs TaskQueue jobs. In the last 48 hours I have been getting lots of CancellationException errors (about 36 in a 24 hour period).
The error always occurs on a Datastore query against a particular table. The query is on an indexed column, projectId, and is a composite OR (i.e. projectId = -1 OR projectId = 23). The entire table in question has 96 records, and any one query will pull back 20 records at most.
Does anyone have any idea why I'm getting these problems?
Thanks,
Paul
Update: Sorry, my mistake; this isn't part of a Task Queue call, it is the code that sets up the Task Queue call. The time the call took is on average about 59.5 secs (thanks for pointing that out, Alex). Looking at the trace (thanks, Alex), it seems that the delay is in the Datastore call. It can be as quick as 21 ms or over 59,000 ms, which is way too long. I just checked the logs again and there haven't been any of those exceptions in the last 24 hours, so this might have been a 48-hour glitch.
Understanding "CancellationException: Task was cancelled" error while doing a Google Datastore query
Based on this question, it sounds like CancellationException means that you are hitting the 10-minute timeout that TaskQueue Jobs have to complete execution.
Do your logs show how long these requests are running for before they throw an exception (for example stackdriver shows that this deferred task ran for 902ms)?
Querying a table of 96 records should be fine, but it's hard to be certain without being able to see what the code of your task looks like.
You could split your task up into smaller 'sub-tasks' and have each sub-task launch the next sub-task before returning; a sketch of that pattern follows.
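A minimal sketch of the chaining idea, assuming a worker servlet mapped at /worker; the offset parameter, batch size, and the processBatch() helper are all hypothetical:

import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class WorkerServlet extends HttpServlet {
    private static final int BATCH_SIZE = 20; // keep each slice comfortably under the deadline

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        int offset = Integer.parseInt(req.getParameter("offset"));
        boolean moreWork = processBatch(offset, BATCH_SIZE); // hypothetical: does one bounded chunk

        if (moreWork) {
            // enqueue the next slice before returning, so no single task runs long
            QueueFactory.getDefaultQueue().add(
                    TaskOptions.Builder.withUrl("/worker")
                            .param("offset", String.valueOf(offset + BATCH_SIZE)));
        }
    }

    private boolean processBatch(int offset, int size) {
        // ... query and process up to 'size' records starting at 'offset' ...
        return false; // return true while there is more work to do
    }
}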
EDIT: you could also try going here https://console.cloud.google.com/traces/traces to see a breakdown of where your request is spending its time
I'm not sure why this is happening, and I've already searched the internet, but I can't find the answer I'm looking for.
Basically, this happens when I try to send a request while Wi-Fi is off and mobile data is on but there is no actual data connectivity. It takes 2 minutes for the exception to be thrown, and I want to know the proper reason why. The timeouts are these:
urlconn.setConnectTimeout(60000);
urlconn.setReadTimeout(60000);
Does this mean that both timeouts occurred, and that's why it took 2 minutes? Or is there another reason that I'm not aware of?
Note: I can only post a code snippet due to confidentiality reasons.
Both of them are occurring. There's no data, so the connection attempt fails after one minute. Then there's nothing to read from a stream that doesn't exist due to no connection, which accounts for the other minute.
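A minimal sketch of where each timeout applies, with a placeholder URL; the exact exception messages are an assumption based on typical JDK behavior:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        HttpURLConnection urlconn =
                (HttpURLConnection) new URL("https://example.com/api").openConnection();
        urlconn.setConnectTimeout(60000); // limit on establishing the TCP connection
        urlconn.setReadTimeout(60000);    // limit on each blocking read once connected

        try (InputStream in = urlconn.getInputStream()) {
            in.read(); // may block for up to the read timeout
        } catch (SocketTimeoutException e) {
            // the message usually distinguishes the phase:
            // "connect timed out" vs "Read timed out"
            System.out.println(e.getMessage());
        }
    }
}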
So I'm able to access the Facebook Graph API with my app, but after a certain point I get the dreaded "403 Forbidden #4" error returned in the response header. I get the error whether I call the URL through my app or enter it in the browser directly. I know for a fact that I'm not exceeding the 600-calls-per-600-seconds limit, because I utilize WAS7 caching (I make a new, direct call every 5 minutes, i.e. 2 calls every 600 seconds).
What's more frustrating is that the 403 error "locks me out" for a good long time (maybe 6-10 hours). Once I wait this amount of time, I'm able to fetch the data as if nothing happened both in my application, and through the browser. I guess my question is, does Facebook keep some sort of longer tally of Graph calls? Maybe over the span of a day? And does it kick you out from accessing the API if you go over this amount? Let me know if I'm being too ambiguous or confusing as I'd be happy to clarify. Thanks!
For reference, the URL I'm using is as follows:
https://graph.facebook.com/POSTID/?fields=caption,full_picture,description,id,message,name,type,link
I recently started load testing my webapp.
I used the Apache access log sampler, following this tutorial:
https://jmeter.apache.org/usermanual/jmeter_accesslog_sampler_step_by_step.pdf
I was able to make it work, but now the problem is that it replayed all the GET requests in less than 10 minutes.
I want JMeter to issue each GET request based on the timestamp at which it was originally logged.
I am not able to find any such configuration online.
I could write a script to curl each GET request at its particular timestamp, but I want to use JMeter. Is it possible?
EDIT
I created a sample CSV file with the lines below:
0,/myAPP/home
5000,/myAPP/home
5000,/myAPP/home
First I created a Thread Group as shown in the image:
Here I selected loop count 'forever'; if it is not selected, only the first line in the CSV file runs and the rest of the lines do not.
Next I added a CSV Data Set Config as shown in the image:
Then I added a Constant Timer as shown in the image:
Then I added an HTTP Request as shown in the image:
Finally, I added a View Results Tree listener and hit the play button.
When I look at the sample start time in the View Results Tree for each of the samples, the delay does not match the delay in the CSV file. What am I doing wrong?
EDIT2
I made the Constant Timer a child of the HTTP Request. Please find the timings of the requests in the screenshot below. Do you see anything wrong?
EDIT3
I followed the BeanShell timer approach, and it worked fine when the delay was greater than the previous response time. But it didn't work properly when the previous response time was greater than the delay.
I modified the CSV file as below (reduced the delay to 100 ms):
0,/myAPP/home
100,/myAPP/home
100,/myAPP/home
I deleted the Constant Timer and added the BeanShell timer below.
This is the results table:
These are the log lines:
The out of the box access log sampler won't work for what you're trying to do.
You have two options:
1 - Do all of the processing work outside of JMeter and create a CSV file with two fields, URL and delay, and then use the CSV Data Set Config.
YouTube JMeter CSV tutorial:
http://www.youtube.com/watch?v=aEJNc3TW-g8
2 - Do the processing within JMeter. If you're a Java developer you can write a BeanShell script to read the file, parse out the URL and timestamp, and calculate the delay.
Here is an example:
How to read a string from .csv file and split it?
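A minimal BeanShell sketch of that parsing step, assuming one log line per sample in the delay,URL format used elsewhere in this thread; the file path and variable names are illustrative:

import java.io.BufferedReader;
import java.io.FileReader;

// Read one line of the access log, e.g. "5000,/myAPP/home"
BufferedReader reader = new BufferedReader(new FileReader("/path/to/access_log.csv"));
String line = reader.readLine();
reader.close();

String[] parts = line.split(",");
vars.put("delay", parts[0]); // exposed to the timer as ${delay}
vars.put("url", parts[1]);   // exposed to the HTTP Request sampler as ${url}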
EDIT 1
Using the data from your question and screenshot, everything looks like it's working fine.
A couple of things about JMeter delays (using timers):
- JMeter will add the delay AFTER the request (not before it)
- JMeter will start the delay AFTER the server is done responding
In your case (I'm rounding to the nearest second):
Initial request at 12:59:53
+ Request took 24.5 seconds
+ 0 second delay
= next request should be at 13:00:18 which is indeed when the next request happened.
Second request at 13:00:18
+ Request took 1.8 seconds
+ 5 second delay
= next request should be at 13:00:25 which is indeed when the next request happened.
I think what you want is that the next request will NOT factor in the response time. Offhand, you'd need to create a delay of ${delay} - ${responseTime}
EDIT 2
In order to create a delay that takes the response time into account, you need to use the BeanShell Timer and not the Constant Timer.
Here is the code (put this in the script section of the BeanShell Timer):
// 'prev' is the previous SampleResult; on the very first run it may not be set yet, which can raise a warning
int rtime = (int) prev.getTime();                // response time of the previous request, in ms
int delay = Integer.parseInt(vars.get("delay")); // desired gap, read from the CSV file
int sleep = delay - rtime;                       // subtract the time already spent waiting for the response
log.info("Sleep for " + sleep + " milliseconds");
return sleep;
EDIT 3
What if response_time > desired_delay?
If the sleep calculation is less than zero, nothing will break. It will just sleep for zero milliseconds.
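If you'd rather make that floor explicit instead of relying on that behavior, a one-line variant of the timer's return (plain Java, using the same variables as above) is:

return Math.max(0, delay - rtime); // never a negative sleep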
With just one thread, there is no way to issue an additional request while the existing request hasn't completed. Technically it should be possible to have a different thread start making requests when one thread isn't enough to keep the exact delays, but this would require inter-thread communication, which can get very messy very quickly, and requires BeanShell scripting.
I know it's an old thread, but I encountered the same need - to re-simulate logged requests.
What I did was:
My CSV looks like {timestamp},{url}, for example:
2016-06-24T18:25:03.621Z,/myAPP/home
2016-06-24T18:25:04.621Z,/myAPP/home
I save the diff from now to the first timestamp in a setUp Thread Group as a property (not a variable, because variables are not shared between thread groups).
I use this diff to calculate the delay before every request is executed (see the sketch at the end of this answer).
It works perfectly, it's multithreaded, and requests are never repeated :)
*Note: I wanted to go through the CSV only once; if you want to repeat it more than once you'll have to recalculate the diff initialized in step 2.
*Note 2: make sure you have enough threads to run all requests asynchronously.
This approach gives you two main benefits over delays:
- using the real log data without converting it to delays
- it solves your problem when the delay is smaller than the response time, because in that case the next request will run from another thread
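A minimal BeanShell sketch of the two scripts described above, assuming ISO-8601 timestamps like the ones in the CSV sample; the property name, date format, and hard-coded first timestamp are illustrative:

// --- BeanShell Sampler placed in the setUp Thread Group ---
import java.text.SimpleDateFormat;

SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
// first timestamp of the log; could also be read from the CSV's first line
long first = fmt.parse("2016-06-24T18:25:03.621Z").getTime();
// a property (not a variable) is visible to the other thread group
props.put("diff", String.valueOf(System.currentTimeMillis() - first));

// --- BeanShell Timer placed in the main Thread Group ---
import java.text.SimpleDateFormat;

SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
// 'timestamp' comes from the CSV Data Set Config
long scheduled = fmt.parse(vars.get("timestamp")).getTime()
        + Long.parseLong(props.getProperty("diff"));
return scheduled - System.currentTimeMillis(); // sleep until the replayed moment arrives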
I'm using HBase 0.94.15 (no cluster setup).
I adapted the BulkDeleteEndpoint (see org.apache.hadoop.hbase.coprocessor.example.BulkDeleteEndpoint) and am calling it from the client.
It works fine for a limited amount of data (probably around 20,000 rows given our table design), but beyond that I get an error containing responseTooSlow and execCoprocessor.
I've read that this is due to a client disconnect, as it doesn't get any response within 60 seconds (default hbase.rpc.timeout) (https://groups.google.com/forum/#!topic/nosql-databases/FPeMLHrYkco).
My question now is, how do I prevent the client from closing the connection before the endpoint is done?
I do not want to set the default RPC timeout to some high value. (Setting the timeout for this specific call only would be an option, but would only be a workaround; see the sketch after this list.)
In some mailing list I found a comment that it is possible to poll for the status of the endpoint, but (1) I can't find the site anymore, and (2) I can't find any other information regarding this idea. Maybe it is available in some more recent version?
Any other ideas are appreciated.
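For the per-call workaround mentioned above, a minimal sketch (the table name is a placeholder) would be to raise hbase.rpc.timeout only on the Configuration used by this one client instance:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

Configuration conf = HBaseConfiguration.create();
conf.setInt("hbase.rpc.timeout", 600000); // 10 minutes, for this client only; the cluster default is untouched
HTable table = new HTable(conf, "my_table"); // "my_table" is a placeholder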
What we might try is limiting the scope of the Scan object (maybe by calling Scan.setMaxResultsPerColumnFamily()) and invoking the coprocessor in a loop, with the startKey updated in every iteration. We still need to figure out how to update the startKey at the end of every invocation.
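A minimal sketch of that loop, assuming the endpoint can be made to report the last row it deleted; invokeBulkDelete() is a hypothetical wrapper around the coprocessor call (the part that still needs figuring out), and 'table' is the HTable the endpoint is deployed on:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

byte[] startKey = HConstants.EMPTY_START_ROW;
while (true) {
    Scan scan = new Scan(startKey);
    scan.setMaxResultsPerColumnFamily(1000); // keep each invocation well under hbase.rpc.timeout

    // hypothetical wrapper around the coprocessor call; assumed to return the
    // last row key it deleted, or null when no more rows matched
    byte[] lastDeleted = invokeBulkDelete(table, scan);
    if (lastDeleted == null) {
        break; // range exhausted
    }
    // resume just past the last deleted row on the next iteration
    startKey = Bytes.add(lastDeleted, new byte[] { 0 });
}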