So I'm able to access the Facebook Graph API with my app, but after a certain point I get the dreaded "403 Forbidden #4" error in the response header. I get the error whether I call the URL through my app or enter it in the browser directly. I know for a fact that I'm not exceeding the 600-calls-per-600-seconds limit, because I cache responses in WAS7 and make a new, direct call only every 5 minutes (i.e. 2 calls every 600 seconds).
What's more frustrating is that the 403 error locks me out for a good long while (maybe 6-10 hours). Once I've waited that long, I'm able to fetch the data as if nothing happened, both in my application and through the browser. I guess my question is: does Facebook keep some sort of longer tally of Graph calls, maybe over the span of a day, and does it lock you out of the API if you go over that amount? I'd be happy to clarify if any of this is ambiguous or confusing. Thanks!
For reference, the URL I'm using is as follows:
https://graph.facebook.com/POSTID/?fields=caption,full_picture,description,id,message,name,type,link
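The caching approach described above can be sketched roughly like this: keep each post's JSON with a fetch timestamp and only make a real Graph API call once the 5-minute TTL has expired. The class and method names (`GraphCache`, `fetchFromGraphApi`) are hypothetical placeholders, not Facebook API names, and the real HTTP call is stubbed out.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Minimal sketch of the caching scheme described in the question: serve a
 * cached Graph API response and refresh it at most once every 5 minutes,
 * staying well under the 600-calls-per-600-seconds limit.
 */
public class GraphCache {
    private static final long TTL_MILLIS = 5 * 60 * 1000; // 5 minutes

    private static final Map<String, CachedEntry> cache = new ConcurrentHashMap<>();

    static final class CachedEntry {
        final String json;
        final long fetchedAt;
        CachedEntry(String json, long fetchedAt) {
            this.json = json;
            this.fetchedAt = fetchedAt;
        }
    }

    /** True when a cached entry is at least TTL_MILLIS old and must be refreshed. */
    static boolean isStale(long fetchedAt, long now) {
        return now - fetchedAt >= TTL_MILLIS;
    }

    /** Returns cached JSON for the post, refreshing it only when stale. */
    static String get(String postId, long now) {
        CachedEntry e = cache.get(postId);
        if (e == null || isStale(e.fetchedAt, now)) {
            String json = fetchFromGraphApi(postId); // the real HTTP call would go here
            cache.put(postId, new CachedEntry(json, now));
            return json;
        }
        return e.json;
    }

    // Hypothetical placeholder for the actual Graph API request.
    private static String fetchFromGraphApi(String postId) {
        return "{\"id\":\"" + postId + "\"}";
    }
}
```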
I'm not sure why this is happening. I've already searched the internet, but I can't find the answer I'm looking for.
Basically, this happens when I try to send a request while Wi-Fi is off and mobile data is ON, but there is no data connectivity. It takes 2 minutes for the exception to be thrown, and I want to know the exact reason why. The timeouts are these:
urlconn.setConnectTimeout(60000);
urlconn.setReadTimeout(60000);
Does this mean that both timeouts occurred, which is why it took 2 minutes, or is there another reason I'm not aware of?
Note: I can only post a code snippet due to confidentiality reasons.
Both of them are occurring. There's no data connectivity, so the connection attempt times out: that's one minute. Then there's nothing to read from a stream that doesn't exist because of the failed connection, so the read times out as well: that's another minute.
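The point of the answer is that the two timeouts are independent, so in the worst case they add up. A small sketch (the URL is a dummy; `openConnection` performs no network I/O, so this only shows the configuration and the arithmetic):

```java
import java.net.HttpURLConnection;
import java.net.URL;

/**
 * Sketch: connect timeout and read timeout are separate budgets on an
 * HttpURLConnection, so the worst-case wait is their sum.
 */
public class TimeoutDemo {

    /** Worst-case total wait: the connect budget plus the read budget. */
    static int worstCaseMillis(int connectTimeoutMs, int readTimeoutMs) {
        return connectTimeoutMs + readTimeoutMs;
    }

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.invalid/"); // dummy host, never contacted here
        HttpURLConnection conn = (HttpURLConnection) url.openConnection(); // no I/O yet
        conn.setConnectTimeout(60_000); // 1 minute to establish the connection
        conn.setReadTimeout(60_000);    // 1 minute of silence allowed on the stream
        System.out.println("worst case: "
                + worstCaseMillis(conn.getConnectTimeout(), conn.getReadTimeout()) + " ms");
    }
}
```

With both values at 60000 ms, the worst case is 120000 ms, which matches the 2 minutes observed in the question.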
My application uses Struts 1.x and runs on WebSphere Application Server (WAS).
All action classes work fine except one: when I click a button, an action (which is expected to take about an hour to complete) is called and starts executing. The issue is that the same action is then called again after a few minutes, without any button trigger or any code change. This repeats every few minutes, n number of times...
If anyone has any idea about this please let me know.
A request that takes 1 hour to complete is not normal: you should redesign this functionality.
Briefly, you have this problem because the request takes too much time to complete. For a technical explanation of the cause of your problem see Why does the user agent resubmit a request after server does a TCP reset?
Solution: create a separate thread (or a pool of parallel threads, if possible) to handle the long-running computation, and immediately send a response page saying "Request accepted". This page could also use JavaScript to periodically send an "is it completed?" request to the server. You should also provide a mechanism to inquire about pending requests, so that users who close the browser without waiting for the final "Yes, completed!" response can get the result whenever they want.
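A minimal sketch of that "accept now, compute later" pattern, assuming a job id is generated per request (the class name `JobRunner` and the pool size are illustrative choices, not part of Struts):

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Sketch: run long computations on a background pool so the HTTP request
 * can return immediately, and let a polling endpoint check completion.
 */
public class JobRunner {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    /** Called from the action: submit the work and return right away. */
    public String submit(String jobId, Callable<String> work) {
        jobs.put(jobId, pool.submit(work));
        return jobId; // the action can now render the "Request accepted" page
    }

    /** Called from the periodic "is it completed?" request. */
    public boolean isDone(String jobId) {
        Future<String> f = jobs.get(jobId);
        return f != null && f.isDone();
    }

    /** Fetch the result; blocks only if the job hasn't finished yet. */
    public String result(String jobId) throws Exception {
        return jobs.get(jobId).get();
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

Keeping the `jobs` map (or a database-backed equivalent) around is what lets users who closed the browser come back later and still retrieve their result.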
I am trying to chase down a problem with CAS that is causing the following exception to be thrown:
javax.naming.TimeLimitExceededException: [LDAP: error code 3 - Timelimit Exceeded]; remaining name ''
at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3097)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2987)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2794)
at com.sun.jndi.ldap.LdapNamingEnumeration.getNextBatch(LdapNamingEnumeration.java:129)
at com.sun.jndi.ldap.LdapNamingEnumeration.hasMoreImpl(LdapNamingEnumeration.java:198)
at com.sun.jndi.ldap.LdapNamingEnumeration.hasMore(LdapNamingEnumeration.java:171)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:295)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:361)...
The error is returned virtually instantly. The client-side timeout is set to 10 seconds, but that isn't what's firing: based on looking through the com.sun.jndi.ldap code, it appears that the domain controller is returning a response with a status of 3, indicating a time limit exceeded.
We are hitting an Active Directory global catalog, and our filter and base are pretty broad: base = '', filter = (proxyAddresses=*:someone#somewhere.com)
However, the query sometimes succeeds, while at other times it returns an immediate status code 3.
Does anyone know what might be causing this kind of behavior? Or perhaps how to go about determining what exactly is occurring?
Turns out our search filter was too broad.
As you can see in the question, we were using a wildcard in the filter, and the query took a little less than 2 seconds.
However, 2 seconds is far shorter than the Active Directory configured time limit, so I couldn't figure out why the error was occurring immediately (the failing queries didn't even take 2 seconds).
I assume AD must have been accruing the time taken by multiple requests from the same account, and at some point began returning the time limit exceeded error.
To solve it, we modified the search filter so that it no longer included a wildcard. The search now runs almost instantaneously, and the time limit exceeded error no longer occurs.
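For anyone tuning the client side of such a search, the JNDI `SearchControls` object is where the client-side limits live. A small sketch (the 10-second time limit matches the question; the count limit is an illustrative addition, and note the server can still enforce its own, shorter limit, as happened here):

```java
import javax.naming.directory.SearchControls;

/**
 * Sketch: client-side limits for a JNDI/Spring LDAP subtree search.
 * The server (here, an AD global catalog) may still return error code 3
 * if its own time limit is exceeded, regardless of these settings.
 */
public class LdapSearchConfig {

    static SearchControls buildControls() {
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        controls.setTimeLimit(10_000); // client-side limit in ms (10 s, as in the question)
        controls.setCountLimit(100);   // illustrative: cap results so broad filters fail fast
        return controls;
    }
}
```

The actual fix, though, was on the filter side: replacing the leading-wildcard `proxyAddresses=*:...` filter with an exact-value match lets AD use its index instead of scanning, which is why the search became nearly instantaneous.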
I have a couple of HTTP Requests set up in my Thread Group. I noticed that the first request always takes longer than any of the other requests. I reordered my requests, and the problem still persists.
This is making it hard to analyse the response times.
Is this a known problem with JMeter? Is there a workaround?
This is the setup that I have
Thread Group (org.apache.jmeter.threads.ThreadGroup)
    org.apache.jmeter.config.ConfigTestElement
    org.apache.jmeter.sampler.DebugSampler
    https: 1st request (Query Data:)
    https: 2nd request (Query Data:)
    6 x org.apache.jmeter.reporters.ResultCollector (listeners)
    1 x org.apache.jmeter.reporters.Summariser
This could well be because:

"Servers usually need a warm-up before they reach their full speed: this is particularly true for the Java platform, where you surely don't want to measure class loading time, JSP compilation time or native compilation time."

http://nico.vahlas.eu/2010/03/30/some-thoughts-on-stress-testing-web-applications-with-jmeter-part-2/
Are you allowing for some warm-up traffic to the servers under measurement first, to allow things to get in cache, JSP pages to compile, the database working set to be in memory, etc?
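The same effect is easy to see in miniature on the JVM itself: time a piece of work only after discarding warm-up iterations, so class loading and JIT compilation don't pollute the first sample. A rough sketch (the iteration counts are arbitrary choices for illustration):

```java
/**
 * Sketch: discard warm-up iterations before timing, mirroring the advice
 * above about letting servers warm up before measuring response times.
 */
public class WarmupTiming {

    /** Some deterministic busywork to time: sum of 0..99999. */
    static long work() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) {
            sum += i;
        }
        return sum;
    }

    /** Time a single invocation of work(), in nanoseconds. */
    static long timeOnce() {
        long start = System.nanoTime();
        work();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++) {
            work(); // warm-up: let the JIT compile the hot path, untimed
        }
        long measured = timeOnce(); // only this steady-state sample counts
        System.out.println("steady-state sample: " + measured + " ns");
    }
}
```

In JMeter terms, the equivalent is to run some untimed warm-up traffic first (or exclude the first iterations from the report) before drawing conclusions from response times.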
I am trying to perform some computations on a server. For this, the client initially inputs some data, which I capture through JavaScript. I would then make an XMLHttpRequest to the server to send this data. Let's say the computation takes an hour and the client leaves or switches off the system.
In practice, I would probably poll from the client side to determine whether the result is available. But is there some way I could implement this as a callback? For instance, the next time the client logs in, I would contact the client-side JavaScript to pass along the result... Any suggestions? I am thinking all this requires some kind of web server sitting on the client side, but I was wondering if there's a better approach.
Your best bet is to just poll when the user gets to the web page.
What I did in something similar was to gradually change my polling interval: I started with several seconds between polls, then gradually increased the interval. In your case, poll after 15 minutes, then increase the interval by 5 minutes each time a poll fails; if the user closes the browser, you can just start the polling again when they return.
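The schedule in that suggestion reduces to a simple formula: 15 minutes for the first poll, plus 5 minutes per failed attempt. A tiny sketch (the 60-minute cap is my own assumption, added so the interval doesn't grow without bound):

```java
/**
 * Sketch of the backoff schedule suggested above: first poll after
 * 15 minutes, then add 5 minutes after each failed check, capped at
 * an assumed maximum of 60 minutes between polls.
 */
public class PollSchedule {

    /** Minutes to wait before the next poll, given how many polls have failed so far. */
    static int delayMinutes(int failedAttempts) {
        return Math.min(15 + 5 * failedAttempts, 60); // cap is an assumption, not from the answer
    }
}
```

So the client waits 15, 20, 25, ... minutes between checks, settling at one check per hour for very long computations.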
If you want some callback, you could just send an email when it is finished, to let the user know.
Also, while you are doing the processing, try to give some feedback as to how far you have gone, how much longer it may be, anything to show that progress is being made, that the browser isn't locked up. If nothing else, show a time with how long the processing has been going on, to give the user some sense of progress.