I'm beginning an initial review of vert.x and comparing it to akka-http. One area where akka appears to shine is streaming of response bodies.
In akka-http it is possible to create a streaming entity that utilizes back-pressure, which allows the client to decide when it is ready to consume data.
As an example, it is possible to create a response with an entity consisting of 1 billion instances of "42" values:
//Iterator is "lazy", therefore this function returns immediately
val bodyData: () => Iterator[ChunkStreamPart] = () =>
  Iterator
    .continually("42")
    .take(1000000000)
    .map(ChunkStreamPart.apply)

val route =
  get {
    val entity: HttpEntity =
      Chunked(ContentTypes.`text/plain(UTF-8)`, Source.fromIterator(bodyData))
    complete(HttpResponse(entity = entity))
  }
The above code will not "blow up" the server's memory and will return the response to the client before the billion values have been generated.
The "42" values will get created on-the-fly as the client tries to consume the response body.
Question: is this streaming capability also present in vert.x?
A cursory review of the HttpServerResponse class suggests that it is not, since the write member function only accepts a String or a Vert.x Buffer. From my limited understanding, a Buffer is not lazy and holds its data in memory, which means the one-billion-"42" example would crash a server with just a few concurrent requests.
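For reference, here is a rough sketch of the naive approach I have in mind (plain Java Vert.x API; the server wiring and port are assumptions and I have not tested this). My worry is that every write gets queued in memory whenever the client reads more slowly than the loop produces data:

import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServerResponse;

public class NaiveStreamingServer {
    public static void main(String[] args) {
        Vertx.vertx().createHttpServer().requestHandler(req -> {
            HttpServerResponse response = req.response();
            response.setChunked(true);                  // chunked transfer encoding, no Content-Length needed
            for (int i = 0; i < 1_000_000_000; i++) {
                response.write("42");                   // eagerly queued in memory if the client is slower than this loop
            }
            response.end();
        }).listen(8080);
    }
}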
Thank you in advance for your consideration and response.
I am looking for a way to get the total amount of disk space used by an Elasticsearch cluster. I have found suggestions to use the following REST API endpoints to obtain this information, among other things:
GET /_cat/stats
GET /_nodes/stats
I was wondering whether the same information can be obtained using the Elasticsearch Java High Level REST Client, or the Transport Client in an older version of Elasticsearch?
Yes, you are correct that the disk stats can be obtained using the _nodes/stats API. The REST high-level client doesn't provide a direct API for node stats (you can see all the APIs it supports here), but you can use the low-level REST client, which is available from the high-level client. Below is a working example:
private void getDiskStats(RestHighLevelClient restHighLevelClient) throws IOException {
    RestClient lowLevelClient = restHighLevelClient.getLowLevelClient();
    Request request = new Request("GET", "/_nodes/stats");
    Response response = lowLevelClient.performRequest(request);
    if (response.getStatusLine().getStatusCode() == 200) {
        System.out.println("resp: \n" + EntityUtils.toString(response.getEntity()));
    }
}
As you can see, I am printing the output of the above API to the console, and I verified that it contains the disk usage stats, which come back in the following format:
"most_usage_estimate": {
"path": "/home/opster/runtime/elastic/elasticsearch-7.8.1/data/nodes/0",
"total_in_bytes": 124959473664,
"available_in_bytes": 6933352448,
"used_disk_percent": 94.45151916481107
},
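If you need the numbers programmatically rather than just printed, below is a rough sketch of parsing the same response with Jackson (this assumes Jackson is on your classpath; the fs.total layout is what 7.x node stats return):

// Requires com.fasterxml.jackson.databind (ObjectMapper, JsonNode) on the classpath.
private void printClusterDiskTotals(Response response) throws IOException {
    ObjectMapper objectMapper = new ObjectMapper();
    JsonNode root = objectMapper.readTree(EntityUtils.toString(response.getEntity()));
    long totalInBytes = 0;
    long availableInBytes = 0;
    // "nodes" is an object keyed by node id, so iterate over its values
    for (JsonNode node : root.path("nodes")) {
        JsonNode fsTotal = node.path("fs").path("total");
        totalInBytes += fsTotal.path("total_in_bytes").asLong();
        availableInBytes += fsTotal.path("available_in_bytes").asLong();
    }
    System.out.println("cluster total: " + totalInBytes + " bytes, available: " + availableInBytes + " bytes");
}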
Hi, I am trying to send a list of objects from a Java controller to Angular as JSON. With 20,000 records it works fine, but with 30,000 records Angular receives an empty body. Can anyone help with how to get these 30,000 records in one request? The Angular service gets the data and converts it from JSON.
The Java code returns:
List<**VO>
private extractData(res: Response) {
    let body = res.json(); // error: body comes back as ""
    let data = body || [];
    return data;
}
There's hardly ever a good reason to load all of the data into the front end with large datasets; it causes a lot of problems and generally makes your web application slow. You're doing pagination, but only in the front end. Pagination can also be done in the back end: instead of getting ALL items from the back end and handling pagination solely on the front end, fetch only the data for the current page. Extend your API with a page number and a page size, and make the API return the total number of results as well, either as part of the response body or in an HTTP header. Each time the user switches to a different page, you request the data for that particular page from the back end again.
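To illustrate, here is a minimal sketch of such a paged endpoint. It assumes Spring MVC and Spring Data purely for illustration; ItemController, ItemRepository and ItemVO are made-up names standing in for your own classes, so adapt it to whatever stack your controller actually uses:

import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.web.bind.annotation.*;

// Hypothetical Spring Data repository for the entity behind your List<**VO>.
interface ItemRepository extends PagingAndSortingRepository<ItemVO, Long> {
}

@RestController
@RequestMapping("/api/items")
public class ItemController {

    private final ItemRepository repository;

    public ItemController(ItemRepository repository) {
        this.repository = repository;
    }

    // GET /api/items?page=0&size=100 returns one page of data plus the total count,
    // so the front end can render its pager without ever loading all 30,000 records.
    @GetMapping
    public Page<ItemVO> getItems(@RequestParam(defaultValue = "0") int page,
                                 @RequestParam(defaultValue = "100") int size) {
        return repository.findAll(PageRequest.of(page, size));
    }
}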
As a workaround, you can fetch the records partially, for example 10,000 records per request, and then merge the records on the JavaScript side.
I'm a beginner and I'm a little bit lost with RESTEasy.
I'd like to send a POST request with a URL similar to this: http://myurl.com/options?value=3&name=picture
String myValue = "3";
String myName = "picture";
String key = "topsecret";
I'm not too sure about what comes next. I've seen several tutorials (not very clear to me) and another approach similar to this:
final MultivaluedMap<String, Object> queryParams = new MultivaluedMapImpl<>();
queryParams.add("value", myValue);
queryParams.add("name", myName);
final ResteasyClient client = new ResteasyClientBuilder().build();
final ResteasyWebTarget target = client.target(url).queryParams(queryParams);
final Builder builder = target.request();
When I write this I get loads of warnings. Is this the right way to do it? And what about the API key?
First of all, you must check the documentation of the API you want to consume regarding how the API key must be sent to the server. Not all APIs follow the same approach.
For example purposes, let's assume that the API key must be sent in the X-Api-Key header. It's non-standard, and I've made it up just to demonstrate how to use the client API.
So you can have the following:
// Create a client
Client client = ClientBuilder.newClient();

// Define a target
WebTarget target = client.target("http://myurl.com/options")
        .queryParam("value", "3")
        .queryParam("name", "picture");

// Perform a request to the target
Response response = target.request()
        .header("X-Api-Key", "topsecret")
        .post(Entity.text(""));

// Process the response
// This part is up to you

// Close the response
response.close();

// Close the client
client.close();
The above code uses the JAX-RS API, which is implemented by RESTEasy. You'd better use Client instead of ResteasyClient whenever possible to ensure portability with other implementations.
The above code also assumes that you want to send an empty text in the request payload. Modify it accordingly.
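For instance, if the API expects the parameters form-encoded in the request body instead of in the query string (that is just an assumption for illustration; check the API documentation), you would build the target without the queryParam calls and post a Form instead:

// Variant: send the parameters as an application/x-www-form-urlencoded body.
Form form = new Form()
        .param("value", "3")
        .param("name", "picture");
Response response = target.request()
        .header("X-Api-Key", "topsecret")
        .post(Entity.form(form));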
Response instances that contain an un-consumed entity input stream should be closed. This is typical for scenarios where only the response headers and the status code are processed, ignoring the response entity.
Going beyond the scope of the question, bear in mind that Client instances are heavy-weight objects that manage the underlying client-side communication infrastructure. Hence initialization as well as disposal of a Client instance may be a rather expensive operation.
The documentation advises to create only a small number of Client instances and reuse them when possible. It also states that Client instances must be properly closed before being disposed to avoid leaking resources.
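A minimal sketch of that advice could look like the following (the holder class name is made up; a dependency-injection container that manages the Client as a singleton works just as well):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

// Holds a single shared Client for the lifetime of the application.
public final class ApiClientHolder {

    private static final Client CLIENT = ClientBuilder.newClient();

    private ApiClientHolder() {
    }

    public static Client client() {
        return CLIENT;
    }

    // Call once on application shutdown to release the underlying connections.
    public static void shutdown() {
        CLIENT.close();
    }
}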
Suppose I have the following web service call using the @GET method:
@GET
@Path(value = "/user/{id}")
@Produces(MediaType.APPLICATION_JSON)
public Response getUserCache(@PathParam("id") String id, @Context HttpHeaders headers) throws Exception {
    HashMap<String, Object> map = new HashMap<String, Object>();
    map.put("id", id);
    SqlSession session = ConnectionFactory.getSqlSessionFactory().openSession();
    Cre8Mapper mapper = session.getMapper(Cre8Mapper.class);

    // slow it down 5 seconds
    Thread.sleep(5000);

    // get data from database
    User user = mapper.getUser(map);

    if (user == null) {
        return Response.ok().status(Status.NOT_FOUND).build();
    } else {
        CacheControl cc = new CacheControl();
        // save data for 60 seconds
        cc.setMaxAge(60);
        cc.setPrivate(true);
        return Response.ok(gson.toJson(user)).cacheControl(cc).status(Status.OK).build();
    }
}
To experiment, I slow the current thread down by 5 seconds before fetching data from my database.
When I call my web service using Firefox Poster, within 60 seconds it seemed much faster on the 2nd, 3rd calls and so forth, until it passed 60 seconds.
However, when I paste the URI into a browser (Chrome), it seems to slow down by 5 seconds every time. I'm really confused about how caching is actually done with this technique. Here are my questions:
1. Does POSTER actually look at the max-age header and decide when to fetch the data?
2. On the client side (web, Android, ...), when accessing my web service, do I need to check the header and then perform caching manually, or does the browser already cache the data itself?
3. Is there a way to avoid fetching data from the database every time? I guess I would have to store my data in memory somehow, but could it potentially run out of memory?
4. In this JAX-RS caching tutorial: how does caching actually work? The first line always fetches the data from the database:
Book myBook = getBookFromDB(id);
So how is it considered cached? Unless the code doesn't execute in top-down order.
@Path("/book/{id}")
@GET
public Response getBook(@PathParam("id") long id, @Context Request request) {
    Book myBook = getBookFromDB(id);

    CacheControl cc = new CacheControl();
    cc.setMaxAge(86400);

    EntityTag etag = new EntityTag(Integer.toString(myBook.hashCode()));
    ResponseBuilder builder = request.evaluatePreconditions(etag);

    // cached resource did change -> serve updated content
    if (builder == null) {
        builder = Response.ok(myBook);
        builder.tag(etag);
    }

    builder.cacheControl(cc);
    return builder.build();
}
From your questions I see that you're mixing client-side caching (HTTP) with server-side caching (database). I think the root cause is the different behavior you observed in Firefox and Chrome, so first I will try to clear that up:
When I call my web service using Firefox Poster, within 60 seconds it seemed much faster on the 2nd, 3rd calls and so forth, until it passed 60 seconds. However, when I paste the URI into a browser (Chrome), it seemed to slow down 5 seconds every time.
Example:
@Path("/book")
@GET
public Response getBook() throws InterruptedException {
    String book = " Sample Text Book";
    TimeUnit.SECONDS.sleep(5); // thanks @fge
    final CacheControl cacheControl = new CacheControl();
    cacheControl.setMaxAge((int) TimeUnit.MINUTES.toSeconds(1));
    return Response.ok(book).cacheControl(cacheControl).build();
}
I have a RESTful web service up and running, and the URL for it is:
http://localhost:8780/caching-1.0/api/cache/book - GET
Firefox:
The first time I accessed the URL, the browser sent a request to the server and got a response back with cache-control headers.
Second request within 60 seconds (using Enter):
This time Firefox didn't go to the server for the response; instead it loaded the data from its cache.
Third request after 60 seconds (using Enter):
This time Firefox made a request to the server and got a fresh response.
Fourth request using Refresh (F5 or Ctrl+F5):
If I refresh the page (instead of hitting Enter) within 60 seconds of the previous request, Firefox didn't load the data from the cache; instead it made a request to the server with a special header in the request.
Chrome:
Second request within 60 seconds (using Enter): This time Chrome sent the request to the server again instead of loading the data from the cache, and it added the header cache-control: max-age=0 to the request.
Aggregating the results:
Because Chrome responds differently to hitting Enter, you saw different behavior in Firefox and Chrome; it has nothing to do with JAX-RS or your HTTP response. To summarize: clients (Firefox/Chrome/Safari/Opera) will cache data for the time period specified in cache-control, and the client will not make a new request to the server until that time expires or you force a refresh.
I hope this clarifies your questions 1, 2 and 3.
4. In this JAX-RS caching tutorial: how does caching actually work? The first line always fetches the data from the database:
Book myBook = getBookFromDB(id);
So how is it considered cached? Unless the code doesn't execute in top-down order.
The example you are referring to is not about minimizing database calls; it is about saving bandwidth over the network. The client already has the data and is checking with the server (revalidating) whether that data has been updated: the actual entity is sent in the response only when the data has changed, otherwise the server answers with 304 Not Modified.
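For instance, here is a rough client-side sketch of that revalidation using plain HttpURLConnection (previousEtag is assumed to have been saved from the ETag header of an earlier 200 response):

HttpURLConnection connection = (HttpURLConnection)
        new URL("http://localhost:8780/caching-1.0/api/cache/book").openConnection();
// Ask the server to send the entity only if it changed since we cached it.
connection.setRequestProperty("If-None-Match", previousEtag);

if (connection.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
    // 304: only headers were transferred, reuse the locally cached entity.
} else {
    // 200: read the fresh entity from connection.getInputStream() and remember the new ETag.
    String refreshedEtag = connection.getHeaderField("ETag");
}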
1. Yes.
2. When using a browser like Firefox or Chrome, you don't need to worry about the HTTP cache, because modern browsers handle it; Firefox, for example, uses an in-memory cache. On Android it depends on how you interact with the origin server: a WebView is essentially a browser object, but if you use HttpClient you need to handle HTTP caching on your own.
3. That is not about HTTP caching but about your server-side logic. The common answer is to cache your database results so that you don't need to hit the database on every HTTP request (a minimal sketch follows this list).
4. Actually, JAX-RS just gives you ways to work with the HTTP cache headers. You need to use CacheControl and/or EntityTag to do time-based caching and conditional requests. For example, when using EntityTag, the builder handles the 304 response status code for you, so you never need to worry about it.
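As a minimal sketch of the in-process cache mentioned in point 3 (the map-based approach, the 60-second freshness window and the method name are assumptions for illustration; a real cache library such as Caffeine or Ehcache also bounds the cache size so it cannot run out of memory):

// Tiny in-process cache keyed by user id; an entry is considered fresh for 60 seconds.
private final ConcurrentMap<String, Map.Entry<Long, User>> userCache = new ConcurrentHashMap<>();

private User getUserCached(String id, Cre8Mapper mapper) {
    long now = System.currentTimeMillis();
    Map.Entry<Long, User> cached = userCache.get(id);
    if (cached != null && now - cached.getKey() < 60_000) {
        return cached.getValue();                        // still fresh: skip the database
    }
    HashMap<String, Object> params = new HashMap<String, Object>();
    params.put("id", id);
    User user = mapper.getUser(params);                  // cache miss or stale entry: hit the database
    userCache.put(id, new AbstractMap.SimpleEntry<Long, User>(now, user));
    return user;
}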
I am evaluating the performance of my transport library, and it would be helpful to get suggestions on the following.
I use a JUnit sampler to perform the following:
HTTP POST test: I send an HTTP POST request, which causes a DB write. I have to evaluate all the parameters (throughput, average response time) holistically for the POST + DB_WRITE operation. In response to this POST request I get a unique ID, so if I send 1000 successful POST requests, I will have 1000 unique IDs.
Now my question is: how can I use these unique IDs in my next test case, which performs an HTTP GET on each of the created unique IDs?
I could parse the HTTP POST response, write the unique ID into a file, and use that file for my HTTP GET test. But the problem is that if I create a thread group of 10 different threads, there will be synchronization issues when writing to the file.
Is there any PostProcessor I can use to record results in the filesystem?
As I see it, you can avoid using a file to store and then read the generated IDs.
The logic is the following:
execute your POST request;
parse the response returned from the POST, using a Regular Expression Extractor or any other post-processor attached to the request, to extract your ID;
store the extracted ID in a user-unique / thread-unique variable, in the same post-processor;
see below for how to do this with the Regular Expression Extractor: ${__javaScript('${username}'+'UnicID')} generates a unique variable name for each user/thread, to avoid interference in multi-user cases;
it seems you can also use the __threadNum function instead of the ${username} variable;
if the POST request completed successfully and the ID was extracted and stored in the variable, execute your GET request, where the extracted ID is used as a parameter;
use the ${__V(${username}UnicID)} construction to get back the previously saved ID.
You may also add a Debug PostProcessor to the POST request sampler, to monitor the generated variables and their values.
That seems to be all.
Thread Group
    Number of Threads = X
    Loop Count = N
    . . .
    HTTP Request POST
        checkingReturnCode    // Response Assertion
        extractUniqueID       // Regular Expression Extractor (e.g.)
            Reference Name = ${__javaScript('${username}'+'UnicID')}
            Regular Expression = ...
            Template = $1$
            Match No. = 1
            Default Value = NOTFOUND
    IF Controller             // execute GET only if POST was successful
        Condition = ${JMeterThread.last_sample_ok}    // you may change this to verify that variable with extracted ID is not empty
        HTTP Request GET
            param = ${__V(${username}UnicID)}
    . . .
Hope this will help.
There won't be any problems with synchronization (they are resolved by the file system). In every thread that is POST-ing, you should open your file for writing and append a new line to it. Again, don't worry about synchronization; the OS will take care of it.
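For example, a minimal sketch of that append from a JSR223 PostProcessor attached to the POST sampler (Groovy engine running Java-style code; the ids.csv path is an assumption, and the UnicID variable name comes from the other answer):

// vars is the JMeterVariables object JMeter exposes to JSR223 elements.
String id = vars.get(vars.get("username") + "UnicID");
if (id != null && !id.equals("NOTFOUND")) {
    java.nio.file.Files.write(
            java.nio.file.Paths.get("ids.csv"),
            (id + System.lineSeparator()).getBytes(java.nio.charset.StandardCharsets.UTF_8),
            java.nio.file.StandardOpenOption.CREATE,
            java.nio.file.StandardOpenOption.APPEND);
}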