Cassandra - Set write timeout with Java API - java

I am trying to set the write timeout in Cassandra with the Java driver. SocketOptions lets me set a read timeout and a connect timeout, but not a write timeout.
Does anyone know a way to do this without changing cassandra.yaml?
thanks
Altober

The name is misleading, but SocketOptions.getReadTimeoutMillis() applies to all requests from the driver to Cassandra. You can think of it as a client-level timeout. If a response hasn't been returned by a Cassandra node in that period of time, an OperationTimedOutException is raised and another node is tried. Refer to the SocketOptions javadoc for more nuanced information about when the exception is raised to the client. Generally, you will want this timeout to be greater than the timeouts in cassandra.yaml, which is why 12 seconds is the default.
If you want to effectively manage timeouts at the client level, you can control this on a per query basis by using executeAsync along with a timed get on the ResultSetFuture to give up on the request after a period of time, i.e.:
ResultSet result = session.executeAsync("your query").get(300, TimeUnit.MILLISECONDS);
This will throw a TimeoutException if the request hasn't been completed in 300 ms.
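The timed-get pattern can be illustrated with plain java.util.concurrent primitives; this is a sketch with a stand-in slow task rather than a real Cassandra query, so the shape of the code runs on its own:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimedGetSketch {
    // Wait at most timeoutMs for the future; return its value, or null on timeout.
    static String timedGet(CompletableFuture<String> future, long timeoutMs) {
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // give up on the request, as the answer suggests
            return null;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Stand-in for session.executeAsync(...): a task that takes ~1 s.
        CompletableFuture<String> slow = CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "rows";
        });
        // 300 ms budget against a ~1 s task: prints "timed out"
        System.out.println(timedGet(slow, 300) == null ? "timed out" : "completed");
    }
}
```

Note that, as with the driver, giving up on the future does not necessarily stop the work on the server side; it only frees the calling thread.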

Related

How to set connectTimeout in case of slow internet and if I don't know the size of file to download

private fun downloadAPKStream(): InputStream? {
    val url = URL(this.url)
    val connection = url.openConnection() as HttpURLConnection
    connection.requestMethod = "GET"
    connection.connect()
    connection.connectTimeout = 5000
    fileSize = connection.contentLength
    val inputStream = connection.inputStream
    return inputStream
}
I'm using this method to download an APK file. But if the internet is slow, the download gets stuck partway because of the 5000 ms timeout and never completes. If I comment that line out and don't set connection.connectTimeout at all, it usually runs fine but sometimes hangs indefinitely. What should I do so it can download files of any size, even on a slow connection?
You got the meaning of the timeout wrong. It is not the maximum allowed time for the whole (network, in this case) operation, but the maximum allowed time of inactivity, after which the operation is considered stalled and fails. So you should set the timeout to a sane value that makes sense in real life. The value is in milliseconds, so 5000 is not it: that's just 5 seconds, and any small network hiccup will get your connection axed. Set it to something higher, like 30 seconds, a minute, or more.
Also note that this is the connection timeout only. It means you should be able to establish a protocol connection to the remote server within that time, but it has nothing to do with the data transfer itself. Data transfer is the process that comes next, once the connection is established. For a data-transfer timeout (which should definitely be set higher) you need to use setReadTimeout().
Finally, you must set the connection timeout prior to calling connect(), otherwise it makes no sense because by then it is already too late - which is what your code does now.
PS: use Download Manager instead.
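Putting the advice together, a corrected version might look like this (sketched in Java; the URL is a placeholder, and note that openConnection() does no network I/O yet, so both timeouts can be configured before connect()):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class DownloadSketch {
    // Open a connection with sane timeouts; both must be set before connect().
    static HttpURLConnection openWithTimeouts(String fileUrl) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(fileUrl).openConnection();
        connection.setRequestMethod("GET");
        connection.setConnectTimeout(30_000); // time allowed to establish the connection
        connection.setReadTimeout(60_000);    // max inactivity during the transfer itself
        return connection;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder URL for illustration only.
        HttpURLConnection c = openWithTimeouts("https://example.com/app.apk");
        System.out.println(c.getConnectTimeout() + " / " + c.getReadTimeout());
        // prints: 30000 / 60000
    }
}
```

Only after this configuration would the caller invoke connect() and read the input stream.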

couchdb gen_server call timeout during purge

I'm running an analysis of how long a CouchDB purge takes, using a Java program. The CouchDB connections and calls are handled using ektorp. For a small number of documents the purge completes and I receive a success response.
But when I purge ~ 10000 or more, I get the following error:
org.ektorp.DbAccessException: 500:Internal Server Error
URI: /dbname/_purge
Response Body:
{
"error" : "timeout",
"reason" : "{gen_server,call,
....
On checking the db status with a curl command, I can see the purge has actually taken place. But the timeout does not allow me to measure the actual duration of the purge in my Java program, since it throws an exception.
From some research, I believe this is due to the default timeout value of an Erlang gen_server process. Is there any way for me to fix this?
I have tried changing the timeout values of the StdHttpClient to no avail.
HttpClient authenticatedHttpClient = new StdHttpClient.Builder()
.url(url)
.username(Conf.COUCH_USERNAME)
.password(Conf.COUCH_PASSWORD)
.connectionTimeout(600*1000)
.socketTimeout(600*1000)
.build();
CouchDB Dev here. You are not supposed to use purge with large numbers of documents. It exists to remove accidentally added data from the DB, like credit card or social security numbers; it isn't meant for general operations.
Consequently, you can’t raise that gen_server timeout :)

Is it possible to increase the timeout for Google Cloud Datastore requests?

We are developing an application that uses the Google Cloud Datastore. An important detail: it's not a GAE application!
Everything works fine under normal usage. We designed a test that fetches over 30000 records, but when we ran it we got the following error:
java.net.SocketTimeoutException: Read timed out
We found that a timeout exception occurs after 30 seconds, which explains the error.
I have two questions:
Is there a way to increase this timeout?
Is it possible to use pagination to query the Datastore? We found that in a GAE application you can use a cursor, but our application isn't one.
You can use cursors in the exact same way as a GAE app using Datastore. Take a look at this page for info.
In particular, the QueryResultBatch object has a .getEndCursor() method, which you can then use when you reissue a Query with setStartCursor(...). Here's a code snippet from the page above:
Query q = ...
if (response.getBatch().getMoreResults() == QueryResultBatch.MoreResultsType.NOT_FINISHED) {
    ByteString endCursor = response.getBatch().getEndCursor();
    q.setStartCursor(endCursor);
    // reissue the query to get more results...
}
You should definitely use cursors to ensure that you get all your results. The rpc has additional constraints to time like total rpc size, so you shouldn't depend on a single rpc answering your entire query.
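The overall loop looks roughly like this. In this sketch the Datastore RPC is replaced by a hypothetical in-memory fetchPage, and an integer offset plays the role of the cursor, so the shape of the loop is runnable on its own:

```java
import java.util.ArrayList;
import java.util.List;

public class CursorLoopSketch {
    // Hypothetical stand-in for one Datastore RPC: return up to pageSize
    // records starting at `cursor` (which plays the role of getEndCursor()).
    static List<Integer> fetchPage(List<Integer> all, int cursor, int pageSize) {
        return all.subList(cursor, Math.min(cursor + pageSize, all.size()));
    }

    // Keep reissuing the "query" from the last end cursor until no more results.
    static List<Integer> fetchAll(List<Integer> all, int pageSize) {
        List<Integer> results = new ArrayList<>();
        int cursor = 0;
        while (true) {
            List<Integer> page = fetchPage(all, cursor, pageSize);
            results.addAll(page);
            cursor += page.size();             // q.setStartCursor(endCursor)
            if (page.size() < pageSize) break; // no NOT_FINISHED batch left
        }
        return results;
    }

    public static void main(String[] args) {
        List<Integer> all = new ArrayList<>();
        for (int i = 0; i < 25; i++) all.add(i);
        // 25 records fetched in pages of 10: three "RPCs", no single huge request
        System.out.println(fetchAll(all, 10).size()); // prints: 25
    }
}
```

With the real API, each iteration issues one RPC, so no single request has to return all 30000 records within the 30-second limit.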

Camel JMS Proxy Timeout occurred after 20000 millis waiting for reply message

I am using a Camel Proxy which returns a result from another process.
public interface DataProcessingInterface {
public List<ResponseData> processPreview(ClientData criteria, Config config);
}
And it is configured like this:
<camel:proxy
id="processPreviewProxy"
serviceInterface="model.jms.DataProcessingInterface"
serviceUrl="jms:queue:processPreview"/>
But sometimes the other process takes a long time to return the results, and I get a timeout exception:
TemporaryQueueReplyManager - Timeout occurred after 20000 millis waiting for reply message with correlationID [Camel-ID-PC01-2661-1367403764103-0-15]. Setting ExchangeTimedOutException on (MessageId: ID-PC01-2661-1367403764103-0-17 on ExchangeId: ID-PC01-2661-1367403764103-0-16) and continue routing.
How do I tell Camel to wait until the response is ready? It should wait forever if that is how long it takes. The client is managed in a different thread, so the duration will not affect it.
Also is it possible to re-establish the connection if the TimeoutException is thrown so I can continue to wait?
"Forever"? No. You can't wait forever.
A (synchronous) request/reply typically has a timeout value set for a reason. If you don't get a reply within a given time, then try again or skip it. In the JMS case, you set both requestTimeout and timeToLive to achieve this; see the Camel JMS documentation. In Camel, you can achieve such things with redelivery and error handlers.
Anyway, even if you set the value to "forever" (or at least something very long, such as multiple hours), a server/application restart would still make the request fail.
You can set a very high request timeout on the jms endpoint.
jms:queue:processPreview?requestTimeout=xxxx
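The try-again approach the first answer suggests can be sketched generically; this is not Camel's API but a hypothetical sender modeled with a CompletableFuture, with the attempts bounded so a dead consumer can't stall the caller forever:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class RetryOnTimeoutSketch {
    // Issue a request/reply call up to maxAttempts times, waiting timeoutMs each time.
    static <T> T requestWithRetry(Supplier<CompletableFuture<T>> send,
                                  long timeoutMs, int maxAttempts) throws TimeoutException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return send.get().get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                // reissue the request, much like Camel's redelivery would
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
        throw new TimeoutException("no reply after " + maxAttempts + " attempts");
    }

    public static void main(String[] args) throws Exception {
        // A "service" that replies immediately, so the first attempt succeeds.
        String reply = requestWithRetry(() -> CompletableFuture.completedFuture("ok"), 100, 3);
        System.out.println(reply); // prints: ok
    }
}
```

In real Camel/JMS deployments the equivalent knobs are the endpoint's requestTimeout plus an error handler with a redelivery policy, rather than a hand-rolled loop.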

apache http client: connection timeout for "no route to host" case

I have a problem with Apache HttpClient (4.2.1) connection timeouts. If the host exists but does not respond in time, the connection is closed by the timeout (everything as expected), but if there is no such host, the client keeps waiting longer than expected (about 12 seconds instead of the 5 specified in the configuration). Eventually this results in a NoRouteToHostException, probably because of network-specific issues (when I tried to reproduce this on another network, I got a socket read timeout exception after 5 seconds of waiting, as expected).
I'm using the following timeout settings:
http.socket.timeout = 5 sec
http.connection.timeout = 5 sec
Any thoughts are appreciated.
Update
If someone ever has the same issue, it is probably caused by the connection retries, performed by the client. I'll update this post when I solve the problem
Update2
Eventually I was able to fix the issue. The problem was caused by the connection retries performed by DefaultHttpRequestRetryHandler, which AbstractHttpClient (the parent of DefaultHttpClient) uses when no request retry handler is specified explicitly. So, if you want to get rid of the extra waiting, just specify a request retry handler with a smaller number of retries.
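A back-of-the-envelope sketch of why retries inflate the observed delay: each attempt can consume the full per-attempt timeout, so the worst-case wall time is roughly attempts times that timeout (DefaultHttpRequestRetryHandler retries 3 times by default):

```java
public class RetryTimeoutMath {
    // Worst-case wall time when each attempt may run up to the connect timeout.
    static long worstCaseMillis(int attempts, long perAttemptTimeoutMs) {
        return attempts * perAttemptTimeoutMs;
    }

    public static void main(String[] args) {
        // With a 5 s connect timeout and 3 retried attempts, the caller can
        // wait far longer than the configured 5 s in total.
        System.out.println(worstCaseMillis(3, 5000)); // prints: 15000
    }
}
```

This is consistent with the ~12 seconds observed above: the configured 5-second timeout bounds each attempt, not the whole operation.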
