I recently changed my implementation from creating a new client for every request to one standard singleton client.
As I began reading the docs, I found that there is also a cache that can be used. Some of the content said that having multiple caches accessing the same cache directory may end up with them stomping on each other and causing crashes.
The docs I'm looking at are in their GitHub repo:
OkHttp recipes for regular issues faced
My questions:
I'm not specifically setting any cache or cache control for my client, just a few timeout values and a connection pool. Will there be any caching done by default if I use a new request and response object for each call I make?
client = new OkHttpClient.Builder()
    .connectTimeout(10000, TimeUnit.MILLISECONDS)
    .readTimeout(15000, TimeUnit.MILLISECONDS)
    .connectionPool(new ConnectionPool())
    .build();
The above client is set up as a singleton and returned to all calling servlets. The request is created as:
Request req = new Request.Builder()
.url(someurl)
.get()
.build();
Response res = client.newCall(req).execute();
If so, will there be issues with the stomping mentioned above? I don't need caching at all: mostly I'm just writing to another server, and when I'm reading I need the current values, not cached ones. So do I need to explicitly set Cache-Control to force-network, or will my default (nothing specified) behave the same way?
EDIT: this is the excerpt from the Response Caching part of the docs:
To cache responses, you'll need a cache directory that you can read
and write to, and a limit on the cache's size. The cache directory
should be private, and untrusted applications should not be able to
read its contents!
It is an error to have multiple caches accessing the same cache
directory simultaneously. Most applications should call new
OkHttpClient() exactly once, configure it with their cache, and use
that same instance everywhere. Otherwise the two cache instances will
stomp on each other, corrupt the response cache, and possibly crash
your program.
The OkHttp cache directory is set per client instance. What the doc is telling you is that you shouldn't configure multiple clients to use the same cache directory. Caching has to be enabled explicitly, and it isn't enabled in the code snippet in your question.
Having configured caching on the client instance, you can control response caching for each request. See Cache.
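For illustration, a minimal sketch of what enabling and then bypassing the cache would look like (the directory path and size here are made up; Cache, CacheControl.FORCE_NETWORK and Request.Builder.cacheControl() are the relevant parts of the OkHttp API):

// Caching is opt-in: only a client built with .cache(...) reads/writes a cache directory.
Cache cache = new Cache(new File("/var/tmp/okhttp-cache"), 10L * 1024 * 1024); // hypothetical dir, 10 MiB
OkHttpClient cachingClient = new OkHttpClient.Builder()
        .cache(cache)
        .build();

// Even with a cache configured, an individual request can be forced to the network:
Request req = new Request.Builder()
        .url(someurl)
        .cacheControl(CacheControl.FORCE_NETWORK)
        .get()
        .build();

Since your singleton client never calls .cache(...), there is no cache directory for two instances to stomp on, and responses are never served from disk, so you don't need to set force-network explicitly.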
Related
I am trying to learn how caching works in REST. I know all the headers like Cache-Control, Max-Age, Expires, etc. I was going through the example mentioned in this post.
What I know about HTTP caching is (I may be wrong): the browser sends an HTTP request to the server, and if the response has cache headers, the browser stores the response in its local cache. If the client makes another request for the same resource, the browser checks the cache, and if the cached response has not expired, it returns it from the cache instead of requesting it from the server.
In the example given in that link, the client hits the server every time and the server checks whether the client's copy has expired or not. In this case, we hit the server every time instead of retrieving the data from the cache.
Am I missing something here?
In the mentioned post a server-side cache is used.
In other words:
The RESTEasy cache can avoid calling UserDatabase if it already contains the requested User (keyed by EntityTag, based on the user ID).
Everything is done on the server side. It has no connection to the expiry date/time request/response headers.
This might be of some help :
It caches responses only for GET requests, and only when the response is 200 OK.
Test environment: JBoss 6.4 and Maven 3.0
Dependency:
<dependency>
    <groupId>org.jboss.resteasy</groupId>
    <artifactId>resteasy-cache-core</artifactId>
    <version>Any version after 3.0</version>
</dependency>
Code changes: register ServerCacheFeature as a singleton in your Application class:
singletons.add(new ServerCacheFeature());
Add this annotation to your resource method:
@Cache(maxAge = 15, mustRevalidate = false, noStore = false, proxyRevalidate = false, sMaxAge = 15)
noStore can be used to enable/disable caching of the response.
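Putting those pieces together, a rough sketch of the Application class (the class name here is made up; I'm assuming ServerCacheFeature lives in org.jboss.resteasy.plugins.cache.server, as it does in the RESTEasy 3.x versions I've seen):

import java.util.HashSet;
import java.util.Set;
import javax.ws.rs.core.Application;
import org.jboss.resteasy.plugins.cache.server.ServerCacheFeature;

public class MyRestApplication extends Application {
    private final Set<Object> singletons = new HashSet<Object>();

    public MyRestApplication() {
        // Registering the feature turns on RESTEasy's server-side response cache
        singletons.add(new ServerCacheFeature());
    }

    @Override
    public Set<Object> getSingletons() {
        return singletons;
    }
}

The @Cache annotation shown above (org.jboss.resteasy.annotations.cache.Cache) then goes on the @GET resource method whose 200 OK responses you want the server to cache.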
The architecture is a mid-tier Liberty server that receives HTTP requests and brokers them to various back ends, some REST, some just JSON. When I configure it for SSL (only through envVars, which is quite cool), it appears I get a full handshake with every request. Additionally, the server side uses a different thread for each request (may be related); this is Liberty, so it is multi-threaded. The servlet has a static reference to a POJO that does all the Apache HttpClient work. I'm not using HttpClientContext (in this case). The basic flow is at the end (struggling with formatting for post legality).
EnvVars are:
-Djavax.net.ssl.keyStore=/root/lWasServers/certs/zosConnKey.jks
-Djavax.net.ssl.trustStore=/root/lWasServers/certs/zosConnTrust.jks
-Djavax.net.ssl.keyStorePassword=fredpwd
-Dhttp.maxConnections=40
I've looked at many similar problems, but again, right now this flow does not use a client context. I'm hoping I'm missing something simple. The code is appended in the first response as I continue to struggle here with FF in RHEL.
private static PoolingHttpClientConnectionManager cm = null;
private static CloseableHttpClient httpClient = null;
// ....
cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(512);
cm.setDefaultMaxPerRoute(256);
httpClient = HttpClients.custom().setConnectionManager(cm).build();
// ...
responseBody = httpClient.execute(httpGet, responseHandler);
If a persistent HTTP connection is stateful and is associated with a particular security context or identity, such as SSL key or NTLM user name, HttpClient tries to make sure this connection cannot be accidentally re-used within a different security context or by a different user. Usually the most straight-forward way of letting HttpClient know that requests are logically related and belong to the same session is by executing those requests with the same HttpContext instance. See HttpClient tutorial for details. One can also disable connection state tracking if HttpClient can only be accessed by a single user or within the same security context. Use with caution.
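If your mid-tier really does run everything in one security context (one keystore for all back-end calls), the second option mentioned there is a one-line builder change. A sketch, assuming the same cm connection manager from the snippet above:

// Connections are no longer tied to a user/SSL identity, so the pool can
// reuse already-handshaken connections across worker threads. Use with caution.
httpClient = HttpClients.custom()
        .setConnectionManager(cm)
        .disableConnectionState()
        .build();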
OK, while I'm not exactly an expert at reading the SSL trace, I do believe I have resolved it. I am on a thread, but that is controlled by the server. I now pass the HttpSession in and keep a reference to the HttpClientContext that I create for each session. I pool these context objects (rudimentary pooling, basically just get/release), so all calls within an HTTP session use the same HttpClientContext. Now it appears that I am NOT handshaking all the time. There may have been a better way to do it, but this does indeed work. I have a few gremlins to look into (socket timeouts in < 1 millisecond?), but I'm confident that I'm no longer handshaking with each request (only each time I end up creating a new context), so this is all good. Thanks.
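For reference, the shape of what I ended up with, roughly (the session attribute name is just something I picked; the shared httpClient is the one built in the snippet above):

// One HttpClientContext per HttpSession, so every call in that session
// reuses the connection state (and SSL session) established for it.
HttpClientContext ctx = (HttpClientContext) httpSession.getAttribute("hcContext");
if (ctx == null) {
    ctx = HttpClientContext.create();
    httpSession.setAttribute("hcContext", ctx);
}
responseBody = httpClient.execute(httpGet, responseHandler, ctx);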
Is it possible to save the cookies to a text file and use them in later requests?
Or can we define a text file as CookieStore?
Is there a well-explained example somewhere?
How long are cookies normally kept by programs, and where? In memory, and for how long? Just for the time the program runs in the VM?
Do we have to fetch each cookie from the local CookieStore by iterating through the list, add them manually to the text file, and later add these cookies back to the CookieStore?
The BasicCookieStore class shipped with HttpClient is Serializable, so its instances can be written to and read from an object stream. If you want a more elegant persistence mechanism, you will have to implement it yourself by fetching individual cookies from the store and writing them to a persistent store.
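A minimal sketch of the serialization route (the file name and method names here are arbitrary; the only real requirement is that BasicCookieStore implements java.io.Serializable):

static void saveCookies(BasicCookieStore cookieStore, File file) throws IOException {
    // Write the whole cookie store to disk as a serialized object
    try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
        out.writeObject(cookieStore);
    }
}

static BasicCookieStore loadCookies(File file) throws IOException, ClassNotFoundException {
    // Read it back, e.g. in a later run of the program
    try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
        return (BasicCookieStore) in.readObject();
    }
}

// Reuse the persisted cookies in a new client:
CloseableHttpClient client = HttpClients.custom()
        .setDefaultCookieStore(loadCookies(new File("cookies.ser")))
        .build();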
For one of our requirements I am talking between two servers using the HTTP protocol. The interaction is long-running, and a user might not interact with the other site for pretty long intervals.
When they come to the page, they log into the remote site. Every time the user tries to interact with the remote site, I internally make an HTTP call (authentication is done based on the session ID).
I was wondering if there is a way to also refresh the session and ensure that it does not expire.
As per my limited understanding, the browser handles this by passing keep-alive in a header or cookie (which I don't understand completely). Can anyone suggest a programmatic way in Java to achieve keep-alive behavior?
1.
<session-config>
<session-timeout>-1</session-timeout>
</session-config>
Simply paste this piece of code in your deployment descriptor (DD).
If you want to keep your session alive for a particular duration of time, replace -1 with any positive numeric value.
The time specified here is in minutes.
2.
If you want to change session timeout value for a particular session instance without affecting the timeout length of any other session in the application :
session.setMaxInactiveInterval(30*60);
**********************
Note :
1. In the DD, the time specified is in minutes.
2. If you do it programmatically, the time specified is in seconds.
Hope this helps :)
I guess the code below can help you: if you pass the JSESSIONID cookie, your container will take responsibility for keeping the session alive, since it is treated as the same session even if it was originally created from some other browser.
The link below explains it in detail.
Click Here
Code snippet from the link above
BasicCookieStore cookieStore = new BasicCookieStore();
BasicClientCookie cookie = new BasicClientCookie("JSESSIONID", "97E3D1012B3802114FA0A844EDE7B2ED");
cookie.setDomain("localhost");
cookie.setPath("/CookieTest");
cookieStore.addCookie(cookie);
HttpClient client = HttpClientBuilder.create().setDefaultCookieStore(cookieStore).build();
final HttpGet request = new HttpGet("http://localhost:1234/CookieTest/CookieTester");
HttpResponse response = client.execute(request);
Take a look at Apache HttpClient and its tutorial. HttpClient supports keep-alive headers and other features that should enable you to make the HTTP call programmatically.
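If you want the session refreshed even while the user is idle, one option (my own sketch, not from the tutorial; the ping URL and the five-minute interval are made up) is to schedule a lightweight request that re-sends the session cookie before the server-side timeout fires:

// Reuse a client whose cookie store already holds JSESSIONID, as in the other answer
CloseableHttpClient client = HttpClients.custom()
        .setDefaultCookieStore(cookieStore)
        .build();

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(() -> {
    try {
        // Any request carrying the session cookie resets the server's inactivity timer
        client.execute(new HttpGet("http://remote.example.com/keepAlive"),
                response -> EntityUtils.toString(response.getEntity()));
    } catch (IOException e) {
        // log and try again on the next tick
    }
}, 5, 5, TimeUnit.MINUTES);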
I have to build an application that performs a login POST request to a certain host, then navigates some pages and finds and retrieves some data.
Because the website's resources are protected by a session, I have to log in to the website first before I can do any operation such as getting or posting data.
My question: since HttpClient is not thread-safe, how can I create only one HttpClient instance that threads can use safely?
Remember that the underlying connection must log in first before it can be used.
Here is an answer: http://pro-programmers.blogspot.com/2009/06/apache-httpclient-multi-threads.html
You can make HttpClient thread safe by specifying a thread safe client manager.
API : http://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/impl/conn/tsccm/ThreadSafeClientConnManager.html
http://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/impl/client/DefaultHttpClient.html#DefaultHttpClient%28org.apache.http.conn.ClientConnectionManager%29
Example : http://thinkandroid.wordpress.com/2009/12/31/creating-an-http-client-example/
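A rough sketch of what those links describe (this is the older HttpClient 4.0–4.2 style API; ThreadSafeClientConnManager and DefaultHttpClient are deprecated in 4.3+ in favour of PoolingHttpClientConnectionManager and HttpClients.custom(), as used in the earlier answer's code):

import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;

public class SharedHttpClientHolder {
    // Built once and shared by all threads; the thread-safe connection
    // manager hands each thread its own pooled connection, and the client's
    // default cookie store keeps the login session cookie for every caller.
    public static final HttpClient CLIENT =
            new DefaultHttpClient(new ThreadSafeClientConnManager());
}

Each worker thread then calls SharedHttpClientHolder.CLIENT.execute(...) directly; perform the login POST once so the session cookie ends up in the shared client's default cookie store before the other threads start issuing requests.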