I'm currently working on a Java application that reads from table A (which stores BLOBs), writes some data from table A to table B, and uploads the BLOB data to a file server. I tested the application on a test database (around 400 rows) and it works fine. I now need to implement it as a background service that reads table A and sends HTTP POST requests to a REST server, followed by an insertion into table B and an upload to the file server. After the POST request, the server should return HTTP 202 (Accepted). I tried something like this:
@POST
@Path("attachments")
public void moveToMinio() throws Exception {
    TiedostoDaoImpl tiedostoDao = new TiedostoDaoImpl();
    List<Integer> ids = tiedostoDao.getDistinctCustomerId();
    for (Integer userId : ids) {
        AttachmentService.insertAndUploadService(userId);
    }
}
tiedostoDao.getDistinctCustomerId() returns a list of distinct customer ids from table A, and each id is passed to AttachmentService.insertAndUploadService() inside the for loop. This somehow gets the job done, but I doubt it's the right way, as it returns HTTP 200 and not 202. Is this the right way to send a POST request? The production database may have millions of rows; what's the right way to process all those rows without affecting server efficiency? I've been stuck on this for a while, as I'm a Java newbie, and would really appreciate any help or suggestions.
If you want an HTTP response after every row is processed, you first need to split your method so that it processes one row at a time; then you can use a Response to carry your HTTP status code and entity, like this:
@POST
@Path("attachments")
public Response moveToMinio() throws Exception {
    TiedostoDaoImpl tiedostoDao = new TiedostoDaoImpl();
    Integer userId = tiedostoDao.getOneCustomerId();
    String uploadLink = AttachmentService.insertAndUploadService(userId);
    return Response.status(Response.Status.ACCEPTED).entity(uploadLink).build();
}
Please refer to this: How to run a background task in a servlet based web application?
Before you return the response, put the job into a global queue and let a background process do the work, as sketched below.
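As a rough sketch of that idea (the AttachmentResource wrapper and the single-thread executor are my assumptions, not code from the original post), the endpoint below only enqueues the work and immediately returns 202 Accepted, while one background thread works through the rows:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("attachments")
public class AttachmentResource {

    // one worker thread; in a real service, create this once (e.g. in a
    // ServletContextListener) and shut it down when the application stops
    private static final ExecutorService WORKER = Executors.newSingleThreadExecutor();

    @POST
    public Response moveToMinio() {
        WORKER.submit(() -> {
            try {
                TiedostoDaoImpl tiedostoDao = new TiedostoDaoImpl();
                List<Integer> ids = tiedostoDao.getDistinctCustomerId();
                for (Integer userId : ids) {
                    AttachmentService.insertAndUploadService(userId);
                }
            } catch (Exception e) {
                e.printStackTrace(); // log and decide whether to retry
            }
        });
        // 202 tells the client the job was accepted, not that it finished
        return Response.status(Response.Status.ACCEPTED).build();
    }
}

With millions of rows you would also want getDistinctCustomerId() to stream or page its results rather than load every id into memory at once.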
final StreamingOutput stream = new StreamingOutput() {
    @Override
    public void write(final OutputStream out) {
        dao.getData(
                query,
                new SdmxObserver(writerFactory.getDataWriter(sdmxFormat, out, properties),
                        request.getRemoteAddr(), request.getHeader("User-Agent"),
                        query.toString(), sdmxFormat.toString(), query.getlist()));
    }
};
res = Response.ok(stream, MediaType.valueOf("application/vnd.sdmx.genericdata+xml;version=2.1"))
        .cacheControl(cc).lastModified(lastModified).header("Vary", "Accept,Accept-Encoding").build();
return res;
The database call to retrieve the data takes a long time because the data is huge, so the download in the browser takes so long to start that users think the server is unreachable. How can we send the data in chunks, using multiple database queries within the same web service response, so that the download starts quickly and the response keeps growing as each subsequent query fetches more data from the database?
Paginate the response content into pages of up to X records. This allows the API to retrieve a single page of results at a time, one by one:
http://localhost/api/v1/resource?page=1&size=20
http://localhost/api/v1/resource?page=2&size=20
http://localhost/api/v1/resource?page=3&size=20
...
You can take a look at PageRequest and Pageable defined in Spring Data when defining your API.
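For instance, a minimal Spring Data sketch could look like the following (the User entity and UserRepository are hypothetical names for illustration): the page and size query parameters map directly onto a PageRequest.

import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@Entity
class User {
    @Id
    Long id;
    String name;
}

// hypothetical repository; Spring Data generates the paging query
interface UserRepository extends JpaRepository<User, Long> {}

@RestController
@RequestMapping("/api/v1/resource")
class ResourceController {

    private final UserRepository repository;

    ResourceController(UserRepository repository) {
        this.repository = repository;
    }

    // GET /api/v1/resource?page=1&size=20
    @GetMapping
    Page<User> getPage(@RequestParam(defaultValue = "1") int page,
                       @RequestParam(defaultValue = "20") int size) {
        // PageRequest is zero-based, while the URLs above start at page 1
        return repository.findAll(PageRequest.of(page - 1, size));
    }
}

The returned Page also carries the total element count, so the client knows how many pages exist.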
I am writing a REST API in JAX-RS 2.0 (JDK 8) for the requirement below:
A POST API /server/fileUpload/ (multipart form data) in which I need to send a big .AI (Adobe Illustrator) file.
The server takes the file and returns status 202 (Accepted), acknowledging that the file transfer from the endpoint to the server happened successfully.
Now, at the server, I use Java + ImageMagick to convert the .AI file (20-25 MB) into a small JPG thumbnail, place it on an Apache HTTP server, and share the location (like http://happyplace/thumbnail0987.jpg).
A second response should then come from the server, with status 200 OK and the thumbnail URL.
Is this feasible with one REST API (async or similar), or should I split it into 2 API calls? Please suggest.
No. In HTTP, one request gets one response. The client must send a second request to get a second response.
You can use WebSockets for that.
If you are calling from a script, the call will be async and you can handle the thumbnail URL when you get the response. If you are calling from a Java program, I suggest running it on a different thread, provided the execution is not sequential (i.e. the remaining lines can be executed without the URL). If the URL is needed by the remaining section of code, you can make one call, wait for the response, and then execute the remaining code.
You need to make different APIs for the two scenarios: one for reporting the file upload status, and another for the file conversion and manipulation.
On the client side, the second request must be a callback of the first request.
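To sketch that chaining on the client (the ApiClient class and its two methods are invented for illustration, not part of any real library), the second call fires only after the first has completed:

import java.util.concurrent.CompletableFuture;

public class UploadFlow {

    public static void main(String[] args) {
        ApiClient client = new ApiClient(); // hypothetical HTTP wrapper

        CompletableFuture
                // first call: POST the file, server answers 202 plus a job id
                .supplyAsync(() -> client.uploadFile("drawing.ai"))
                // second call runs as a callback of the first
                .thenApply(jobId -> client.getThumbnailUrl(jobId))
                .thenAccept(url -> System.out.println("Thumbnail at: " + url))
                .join();
    }
}

// stand-in for a real blocking HTTP client, for illustration only
class ApiClient {
    String uploadFile(String path) { return "job-42"; } // POST multipart, return a job id
    String getThumbnailUrl(String jobId) { return "http://happyplace/thumbnail0987.jpg"; } // poll until 200
}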
The best way to handle this kind of scenario is to use reactive Java (Project Reactor, WebFlux).
You can return two responses using custom middleware in ASP.NET (however, this is not recommended).
Return a response from one middleware, then invoke the next middleware and return a second response from it.
Hi, I am trying to send a list of objects from a Java controller to Angular as JSON. With 20000 results it works fine, but with 30000 Angular gets an empty body. Can anyone help with how to get these 30000 records in one go? The Angular service gets the data and converts it from JSON.
The Java code returns:
List<**VO>
private extractData(res: Response) {
    let body = res.json(); // error: body comes back as ""
    let data = body || [];
    return data;
}
There's hardly ever a good reason to load all the data into the front end with large datasets; it causes a lot of problems and generally makes your web application slow. You're doing pagination, but only in the front end. Pagination can also be done in the back end. Instead of getting ALL items from the back end and handling pagination solely on the front end, fetch only the data for the current page. Extend your API with a page number and page size, and make the API return the total number of results as well, either as part of the response body or in an HTTP header. Each time a user changes to a different page, request the data for that particular page from the back end again.
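A rough JAX-RS sketch of such an endpoint (UserDao, its two methods, and the User class are hypothetical): the total count travels in an X-Total-Count header, so the front end can render its pager without loading everything.

import java.util.Collections;
import java.util.List;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/users")
public class UserResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response getUsers(@QueryParam("page") @DefaultValue("1") int page,
                             @QueryParam("size") @DefaultValue("20") int size) {
        UserDao dao = new UserDao();
        long total = dao.countUsers();
        // e.g. SELECT ... LIMIT :size OFFSET :offset in the real query
        List<User> pageOfUsers = dao.getUsersPage((page - 1) * size, size);
        return Response.ok(pageOfUsers)
                .header("X-Total-Count", total) // client derives the page count
                .build();
    }
}

class User { public int id; public String name; }

// placeholder DAO so the sketch compiles; the real one queries the database
class UserDao {
    long countUsers() { return 0L; }
    List<User> getUsersPage(int offset, int limit) { return Collections.emptyList(); }
}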
As a workaround, you can fetch the records in parts, e.g. 10000 records per request, and then merge the records on the JavaScript side.
I've built a complex web app using AngularJS, Java and Hibernate.
I'm using an HTTP request to save the form data.
$http({
    method : 'POST',
    url : $scope.servlet_url,
    headers : {'Content-Type' : 'application/json'},
    data : $scope.deals
}).success(function(data) {
}).error(function(data) {
});
I'm using Angular version 1.2.19.
When the save button is hit, this request is triggered with the data available in the form and it goes to the server, where the data is saved in the database. Before saving into the database, many validations are done and some external data related to the form data is fetched, so the save takes a while (approximately 5 to 7 minutes, depending on the form data provided). After the data is saved, I redirect to another page based on the response.
But in my case the response takes time, and in the meantime the same request is triggered again for no apparent reason. The request isn't called anywhere else in the code, yet it is triggered, which is confusing.
The same code works fine if the save takes less than 5 minutes. If it exceeds 5 minutes, it goes into an infinite loop and the save happens as many times as the request is triggered. The response to the first request reaches the Angular controller, but the controller doesn't recognize it, so we can't handle the response in this case. When this happens the page gets stuck and we have to refresh or leave the page manually.
Is there any way to prevent the duplicate request in AngularJS? If there is, could anyone please help me achieve it?
Thanks in advance.
Suppose I have the following web service call using the @GET method:
@GET
@Path(value = "/user/{id}")
@Produces(MediaType.APPLICATION_JSON)
public Response getUserCache(@PathParam("id") String id, @Context HttpHeaders headers) throws Exception {
    HashMap<String, Object> map = new HashMap<String, Object>();
    map.put("id", id);
    SqlSession session = ConnectionFactory.getSqlSessionFactory().openSession();
    Cre8Mapper mapper = session.getMapper(Cre8Mapper.class);
    // slow it down 5 seconds
    Thread.sleep(5000);
    // get data from database
    User user = mapper.getUser(map);
    if (user == null) {
        return Response.ok().status(Status.NOT_FOUND).build();
    } else {
        CacheControl cc = new CacheControl();
        // cache data for 60 seconds
        cc.setMaxAge(60);
        cc.setPrivate(true);
        return Response.ok(gson.toJson(user)).cacheControl(cc).status(Status.OK).build();
    }
}
To experiment, I slow the current thread down by 5 seconds before fetching data from my database.
When I call my web service using Firefox Poster, within 60 seconds it seems much faster on the 2nd, 3rd calls and so forth, until 60 seconds have passed.
However, when I paste the URI into a browser (Chrome), it seems to slow down by 5s every time. And I'm really confused about how caching is actually done with this technique. Here are my questions:
1. Does Poster actually look at the max-age header and decide when to fetch the data?
2. On the client side (web, Android, ...), when accessing my web service, do I need to check the header and then perform caching manually, or does the browser already cache the data itself?
3. Is there a way to avoid fetching data from the database every time? I guess I would have to store my data in memory somehow, but could it potentially run out of memory?
4. In this tutorial (JAX-RS caching tutorial), how does caching actually work? The first line always fetches the data from the database:
Book myBook = getBookFromDB(id);
So how is it considered cached, unless the code doesn't execute in top-down order?
#Path("/book/{id}")
#GET
public Response getBook(#PathParam("id") long id, #Context Request request) {
Book myBook = getBookFromDB(id);
CacheControl cc = new CacheControl();
cc.setMaxAge(86400);
EntityTag etag = new EntityTag(Integer.toString(myBook.hashCode()));
ResponseBuilder builder = request.evaluatePreconditions(etag);
// cached resource did change -> serve updated content
if (builder == null){
builder = Response.ok(myBook);
builder.tag(etag);
}
builder.cacheControl(cc);
return builder.build();
}
From your questions I can see that you're mixing client-side caching (HTTP) with server-side caching (database). I think the root cause of this is the different behavior you observed in Firefox and Chrome, so first I will try to clear that up:
"When I call my web service using Firefox Poster, within 60 seconds it seemed much faster on the 2nd, 3rd calls and so forth, until it passed 60 seconds. However, when I paste the URI into a browser (Chrome), it seemed to slow down by 5s every time."
Example:
@GET
@Path("/book")
public Response getBook() throws InterruptedException {
    String book = " Sample Text Book";
    TimeUnit.SECONDS.sleep(5); // thanks @fge
    final CacheControl cacheControl = new CacheControl();
    cacheControl.setMaxAge((int) TimeUnit.MINUTES.toSeconds(1));
    return Response.ok(book).cacheControl(cacheControl).build();
}
I have a RESTful web service up and running, and the URL for it is:
http://localhost:8780/caching-1.0/api/cache/book - GET
Firefox:
First request: when I first accessed the URL, the browser sent a request to the server and got the response back with the cache-control headers.
Second request within 60 seconds (using Enter): this time Firefox didn't go to the server for the response; instead it loaded the data from its cache.
Third request after 60 seconds (using Enter): this time Firefox made a request to the server and got a response.
Fourth request using refresh (F5 or Ctrl+F5): if I refresh the page (instead of hitting Enter) within 60 seconds of the previous request, Firefox doesn't load the data from its cache; instead it makes a request to the server with a special header in the request.
Chrome:
Second request within 60 seconds (using Enter): this time Chrome sent the request to the server again instead of loading the data from its cache, and it added the header cache-control = "max-age=0" to the request.
Aggregating the results:
Since Chrome responds differently to hitting Enter, you saw different behavior in Firefox and Chrome; it has nothing to do with JAX-RS or your HTTP response. To summarize: clients (Firefox/Chrome/Safari/Opera) cache data for the time period specified in cache-control, and the client will not make a new request to the server unless the time expires or you force a refresh.
I hope this clarifies questions 1, 2 and 3.
4. "In this tutorial (JAX-RS caching tutorial), how does caching actually work? The first line always fetches the data from the database: Book myBook = getBookFromDB(id); So how is it considered cached, unless the code doesn't execute in top-down order?"
The example you're referring to is not about minimizing database calls; instead it's about saving bandwidth over the network. The client already has the data and is checking with the server (revalidating) whether the data has been updated; only if the data has changed do you send the actual entity in the response.
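To see that revalidation from the client's side, here is a minimal sketch with plain HttpURLConnection (the URL and the stored ETag value are placeholders): the client replays the ETag it remembered in an If-None-Match header, and a 304 answer means its cached copy is still valid and no entity body was sent.

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConditionalGet {

    public static void main(String[] args) throws IOException {
        URL url = new URL("http://localhost:8080/api/book/1"); // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // ETag remembered from a previous 200 response (placeholder value)
        conn.setRequestProperty("If-None-Match", "\"42\"");

        int status = conn.getResponseCode();
        if (status == HttpURLConnection.HTTP_NOT_MODIFIED) {
            // 304: no body, keep using the locally cached entity
            System.out.println("Cached copy is still valid");
        } else {
            // 200: read the fresh entity and remember the new ETag
            System.out.println("New ETag: " + conn.getHeaderField("ETag"));
        }
        conn.disconnect();
    }
}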
1. Yes.
2. When using a browser like Firefox or Chrome, you don't need to worry about the HTTP cache because modern browsers handle it; for example, Firefox uses an in-memory cache. On Android it depends on how you interact with the origin server: a WebView is essentially a browser object, but you need to handle HTTP caching on your own if you use HttpClient.
3. It's not about HTTP caching but about your server-side logic. The common answer is to use a database cache so that you don't have to hit the database on every HTTP request.
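As a minimal sketch of such a cache (the loadUserFromDb method and the 60-second TTL are assumptions for illustration): a small in-memory map in front of the database, with entries that expire. A bounded cache library such as Ehcache or Caffeine would be the production-grade choice, since an unbounded map can indeed run out of memory.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UserCache {

    // a cached value plus the time it was stored
    private static final class Entry {
        final Object value;
        final long storedAtMillis;
        Entry(Object value, long storedAtMillis) {
            this.value = value;
            this.storedAtMillis = storedAtMillis;
        }
    }

    private static final long TTL_MILLIS = 60_000; // 60s, mirroring max-age above

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public Object getUser(String id) {
        Entry e = cache.get(id);
        if (e != null && System.currentTimeMillis() - e.storedAtMillis < TTL_MILLIS) {
            return e.value; // fresh enough: skip the database entirely
        }
        Object user = loadUserFromDb(id); // hypothetical database call
        cache.put(id, new Entry(user, System.currentTimeMillis()));
        return user;
    }

    private Object loadUserFromDb(String id) {
        // placeholder for the real MyBatis/JDBC lookup
        return "user-" + id;
    }
}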
4. Actually, JAX-RS just gives you ways to work with HTTP cache headers: you use CacheControl and/or EntityTag for time-based caching and conditional requests. For example, when using EntityTag, the builder handles response status code 304 for you, so you never need to worry about it.