Jersey StreamingOutput of a large data set - Java

final StreamingOutput stream = new StreamingOutput() {
    @Override
    public void write(final OutputStream out) {
        dao.getData(
            query,
            new SdmxObserver(writerFactory.getDataWriter(sdmxFormat, out, properties),
                request.getRemoteAddr(), request.getHeader("User-Agent"),
                query.toString(), sdmxFormat.toString(), query.getlist()));
    }
};
res = Response.ok(stream, MediaType.valueOf("application/vnd.sdmx.genericdata+xml;version=2.1"))
        .cacheControl(cc).lastModified(lastModified).header("Vary", "Accept,Accept-Encoding").build();
return res;
The database call takes a long time because the data set is huge; as a result, the download does not start in the browser for a long time and the user thinks the server is unreachable. How can we send the data in chunks, using multiple database queries within the same web service response, so that the download starts quickly and the response keeps growing as each subsequent query fetches data from the database?

Paginate the response content into pages of up to X records. This allows the API client to retrieve the results one page at a time:
http://localhost/api/v1/resource?page=1&size=20
http://localhost/api/v1/resource?page=2&size=20
http://localhost/api/v1/resource?page=3&size=20
...
You can take a look at PageRequest and Pageable defined in Spring Data when defining your API.
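For illustration, the arithmetic behind those page/size URLs can be sketched like this (the class and method names are made up for this example; they are not part of Spring Data):

```java
// Hypothetical sketch of the paging arithmetic behind ?page=N&size=M.
// Each page maps to an SQL window, e.g.
// "... ORDER BY id OFFSET ? ROWS FETCH NEXT ? ROWS ONLY".
public class Paging {

    // Translate the 1-based page/size query parameters into a row offset.
    public static int offsetFor(int page, int size) {
        if (page < 1 || size < 1) {
            throw new IllegalArgumentException("page and size must be >= 1");
        }
        return (page - 1) * size;
    }

    // Number of pages needed for totalRows records.
    public static int pageCount(long totalRows, int size) {
        return (int) ((totalRows + size - 1) / size);
    }
}
```

With size=20, page=1 starts at row 0 and page=3 at row 40, so each request runs one bounded query instead of loading the whole table.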


How to get universe,classes,objects,report names, report fields from sap using java

I am able to connect to the SAP BO server via the Java SDK, but after that I don't know which queries retrieve all the BO metadata (universe names, classes, objects, report names, report variables) separately, like in Oracle, as I need to store all BO metadata from the BO server into my local MySQL database. I am new to SAP BO and am stuck on this; any suggestions are appreciated.
public static void main(String[] args) throws Exception {
    IEnterpriseSession enterpriseSession = null;
    try {
        // Establish connection
        System.out.println("Connecting...");
        ISessionMgr sessionMgr = CrystalEnterprise.getSessionMgr();
        enterpriseSession = sessionMgr.logon(user, pass, host, auth);
        IInfoStore infoStore = (IInfoStore) enterpriseSession.getService("InfoStore");
    } finally {
        if (enterpriseSession != null) {
            enterpriseSession.logoff();
        }
    }
}
My expected output is to retrieve all BO metadata (universe names, classes, objects, report names, report columns) in tabular form, like SQL tables.
Easiest way to retrieve metadata from the BO Server is to use CMS queries. You can use CMS queries with the REST API.
A simple example to retrieve metadata from your universes in your cms:
API URL: http://host:port/biprws/v1/cmsquery
HTTP Method: POST
Data Formats: application/json, application/xml
Headers: x-sap-logontoken (you can retrieve a logon token via the REST API as well)
If you use JSON, use the following request body:
{
  "query": "SELECT * FROM CI_APPOBJECTS WHERE SI_KIND='Universe' ORDER BY SI_NAME ASC"
}
This blog is a good place to start:
https://blogs.sap.com/2017/05/10/query-the-businessobjects-repository-using-bi-platform-rest-sdk-rws/
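As a hedged sketch, assembling that call from plain Java could look like this, using the JDK 11 HttpClient types (the base URL and token handling are placeholders; the request is built but not sent here):

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of building the CMS-query REST call, assuming you already hold a
// logon token. Host, port and the token value are placeholders.
public class CmsQueryRequest {

    public static HttpRequest build(String baseUrl, String logonToken, String query) {
        // The body is a JSON object with a single "query" field.
        String body = "{\"query\":\"" + query.replace("\"", "\\\"") + "\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/biprws/v1/cmsquery"))
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                .header("x-sap-logontoken", logonToken)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }
}
```

Sending it is then one `HttpClient.newHttpClient().send(request, ...)` call, and the JSON response holds the metadata rows you can insert into MySQL.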

How can I send POST requests from a background service in Java?

I'm currently working on a Java application that reads from table A (which has BLOBs stored), writes some data from table A to table B, and uploads the BLOB data to a file server. I tested the application on a test database (around 400 rows) and it works fine. I need to implement the application as a background service that reads table A and sends HTTP POST requests to a REST server, followed by an insertion into table B and the upload to the file server. After the POST request, the server needs to return HTTP 202 (Accepted). I tried something like this:
@POST
@Path("attachments")
public void moveToMinio() throws Exception {
    TiedostoDaoImpl tiedostoDao = new TiedostoDaoImpl();
    List<Integer> ids = tiedostoDao.getDistinctCustomerId();
    for (Integer userId : ids) {
        AttachmentService.insertAndUploadService(userId);
    }
}
tiedostoDao.getDistinctCustomerId() returns a list of distinct customer ids from table A, and each id is passed to AttachmentService.insertAndUploadService() inside a for loop. This somehow gets the job done, but I doubt this is the right way, as it returns HTTP 200 and not 202. Is this the right way to send a POST request? The production database may have millions of rows; what is the right way to process all those rows without hurting server efficiency? I've been stuck on this for a while, as I'm a Java newbie, and would really appreciate any help or suggestions.
If you want an HTTP response after every row is processed, first divide your method so it processes one row at a time; then you can use a Response to carry your HTTP status code and entity, like this:
@POST
@Path("attachments")
public Response moveToMinio() throws Exception {
    TiedostoDaoImpl tiedostoDao = new TiedostoDaoImpl();
    Integer userId = tiedostoDao.getOneCustomerId();
    String uploadLink = AttachmentService.insertAndUploadService(userId);
    return Response.status(Response.Status.ACCEPTED).entity(uploadLink).build();
}
Please refer to this: How to run a background task in a servlet based web application?
Before you return the response, put the job into a global queue and let a background process do the work.
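A minimal stdlib-only sketch of that queue-and-worker idea follows; the counter stands in for the real AttachmentService.insertAndUploadService call, and the class name is made up:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the "return 202 immediately, process in the background" pattern.
// The POST handler would call enqueueAll(...) and return Response.accepted()
// right away; only the queue/worker part is shown here.
public class BackgroundMover {

    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    public final AtomicInteger processed = new AtomicInteger();

    // Called from the POST handler: enqueue each customer id and return at once.
    public void enqueueAll(List<Integer> customerIds) {
        for (Integer id : customerIds) {
            worker.submit(() -> {
                // AttachmentService.insertAndUploadService(id) would run here.
                processed.incrementAndGet();
            });
        }
    }

    // For shutdown/testing: wait until the queued jobs have drained.
    public boolean awaitIdle(long seconds) throws InterruptedException {
        worker.shutdown();
        return worker.awaitTermination(seconds, TimeUnit.SECONDS);
    }
}
```

Because the endpoint only enqueues ids, millions of rows don't block the HTTP thread; the single worker drains the queue at its own pace.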

REST call to save large ResultSet to downloadable CSV

I am creating a REST service that loads data from an Oracle database table using JDBC and saves the result set as CSV.
Since the table is very large, the process is expected to take about an hour.
How can I let the client download the CSV while data is still being written to it
(so we get a cycle like this: load a chunk (some number of rows), append it to the CSV, and flush that part to the download)?
This is intended to:
prevent holding the whole file in server memory, by flushing it periodically to the client (assuming the JDBC driver does not fetch the whole table immediately);
show (almost) immediate progress to the user (so the user won't wait until the CSV is complete).
something like this is good enough:
@GET
@Produces("text/csv")
@Path("/get")
public Response getData(@Context HttpServletRequest request,
                        @Context HttpServletResponse response) throws IOException {
    response.setHeader("Content-Disposition", "attachment; filename=data_file.csv");
    ServletOutputStream outputStream = response.getOutputStream();
    // here you read from the ResultSet
    while (resultSet.next()) {
        byte b = (byte) resultSet.getByte("columnA");
        outputStream.write(b);
    }
    outputStream.flush();
    outputStream.close();
    return Response.ok().build();
}
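A stdlib-only sketch of the write-and-flush loop is below; the row source is a List standing in for the JDBC ResultSet, and the class and constant names are illustrative. The same loop body would sit inside a StreamingOutput or the servlet output stream above:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

// Rows are written as CSV and flushed every FLUSH_EVERY rows so the browser
// starts the download immediately instead of waiting for the whole table.
public class CsvStreamer {

    static final int FLUSH_EVERY = 1000;

    // Quote a field per RFC 4180: wrap in quotes, double any embedded quotes.
    public static String csvField(String value) {
        if (value.contains(",") || value.contains("\"") || value.contains("\n")) {
            return "\"" + value.replace("\"", "\"\"") + "\"";
        }
        return value;
    }

    public static void writeRows(List<String[]> rows, OutputStream out) throws IOException {
        int n = 0;
        for (String[] row : rows) {
            StringBuilder line = new StringBuilder();
            for (int i = 0; i < row.length; i++) {
                if (i > 0) line.append(',');
                line.append(csvField(row[i]));
            }
            line.append('\n');
            out.write(line.toString().getBytes(StandardCharsets.UTF_8));
            if (++n % FLUSH_EVERY == 0) {
                out.flush();  // push this chunk to the client; don't buffer the whole table
            }
        }
        out.flush();
    }
}
```

With a JDBC fetch size set on the statement, the driver streams rows in batches too, so neither side ever holds the full table in memory.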

How to send a large input stream to a Spring REST service?

I have a Spring REST application that runs inside an embedded Jetty container.
On the client I use (or try to use) RestTemplate.
Use case:
Given an InputStream (I don't have a File), I want to send it to the REST service.
The InputStream can be quite large (so no byte[]!).
What I've tried so far :
Added StandardServletMultipartResolver to the Dispatcher context;
During servlet registration I executed:
ServletRegistration.Dynamic dispatcher = ...
MultipartConfigElement multipartConfigElement = new MultipartConfigElement("D:/temp");
dispatcher.setMultipartConfig(multipartConfigElement);
On client :
restTemplate.getMessageConverters().add(new FormHttpMessageConverter());
MultiValueMap<String, Object> parts = new LinkedMultiValueMap<String, Object>();
parts.add("attachmentData", new InputStreamResource(data) {
    // hacks ...
    @Override
    public String getFilename() {
        // avoid null file name
        return "attachment.zip";
    }
    @Override
    public long contentLength() throws IOException {
        // avoid calling getInputStream() twice
        return -1L;
    }
});
ResponseEntity<Att> saved = restTemplate.postForEntity(url, parts, Att.class);
On server :
@RequestMapping("/attachment")
public ResponseEntity<Att> saveAttachment(@RequestParam("attachmentData") javax.servlet.http.Part part) throws IOException {
    InputStream is = part.getInputStream();
    // consume is
    is.close();
    part.delete();
    return new ResponseEntity<Att>(att, HttpStatus.CREATED);
}
What is happening:
The uploaded InputStream is stored successfully in the configured temp folder (MultiPart1970755229517315824), and the Part parameter is correctly injected into the handler method.
The delete() method does not delete the file (something still has an open handle on it).
Anyway, it looks very ugly.
Is there a smoother solution?
You want to use HTTP's Chunked Transfer Coding. You can enable that by setting SimpleClientHttpRequestFactory.setBufferRequestBody(false). See SPR-7909.
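Under the hood, SimpleClientHttpRequestFactory drives java.net.HttpURLConnection, so the effect of setBufferRequestBody(false) can be sketched with the stdlib alone; the URL here is a placeholder, and this is an illustration of the mechanism, not the Spring code path verbatim:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Chunked request streaming: with setChunkedStreamingMode the body is sent
// as it is produced instead of being buffered fully in memory first.
public class ChunkedUpload {

    public static HttpURLConnection prepare(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(8192);  // 8 KiB chunks, no full-body buffering
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        return conn;
    }

    public static void send(HttpURLConnection conn, InputStream data) throws Exception {
        try (OutputStream out = conn.getOutputStream()) {
            data.transferTo(out);  // streams straight through, chunk by chunk
        }
    }
}
```

This is why no byte[] of the whole stream is ever needed on the client: each 8 KiB chunk goes out on the wire as soon as it is read.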
You could instead use byte[] and write a wrapper around the web service to send the "large string" in chunks. Add a parameter to the web service indicating the "contentID" of the content, so the other side knows which half-filled "bucket" this part belongs to. Another parameter, "chunkID", helps with sequencing the chunks on the other side. Finally, a third parameter, "isFinalChunk", is set when whatever you are sending is the final piece. This is fairly plain functionality, achievable in less than 100 lines of code.
The only issue is that you end up making "n" calls to the web service rather than just one, which aggregates the connection delays, etc. For real-time use some more network QoS is required, but otherwise you should be fine.
I think this is much simpler, and once you have your own wrapper class to do this chopping and gluing, it scales well as long as your server can handle multiple web service calls.
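The chopping described above can be sketched as follows; the Chunk record and class name are made up for illustration, and the actual web service parameters are up to you:

```java
import java.util.ArrayList;
import java.util.List;

// Split a payload into numbered chunks tagged with a contentId, a chunkId
// and an isFinalChunk flag, as described in the answer above.
public class Chunker {

    public record Chunk(String contentId, int chunkId, boolean isFinalChunk, byte[] data) {}

    public static List<Chunk> split(String contentId, byte[] payload, int chunkSize) {
        List<Chunk> chunks = new ArrayList<>();
        for (int off = 0, id = 0; off < payload.length; off += chunkSize, id++) {
            int len = Math.min(chunkSize, payload.length - off);
            byte[] part = new byte[len];
            System.arraycopy(payload, off, part, 0, len);
            boolean last = off + len >= payload.length;
            chunks.add(new Chunk(contentId, id, last, part));
        }
        return chunks;
    }
}
```

The receiver collects chunks by contentId, orders them by chunkId, and closes the "bucket" when isFinalChunk arrives.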

How does caching work in JAX-RS?

Suppose I have the following web service call using #GET method:
@GET
@Path(value = "/user/{id}")
@Produces(MediaType.APPLICATION_JSON)
public Response getUserCache(@PathParam("id") String id, @Context HttpHeaders headers) throws Exception {
    HashMap<String, Object> map = new HashMap<String, Object>();
    map.put("id", id);
    SqlSession session = ConnectionFactory.getSqlSessionFactory().openSession();
    Cre8Mapper mapper = session.getMapper(Cre8Mapper.class);
    // slow it down 5 seconds
    Thread.sleep(5000);
    // get data from the database
    User user = mapper.getUser(map);
    if (user == null) {
        return Response.ok().status(Status.NOT_FOUND).build();
    } else {
        CacheControl cc = new CacheControl();
        // cache the data for 60 seconds
        cc.setMaxAge(60);
        cc.setPrivate(true);
        return Response.ok(gson.toJson(user)).cacheControl(cc).status(Status.OK).build();
    }
}
To experiment, I slow down the current thread 5 seconds before fetching data from my database.
When I call my web service using Firefox Poster, within 60 seconds it seemed much faster on the 2nd, 3rd calls and so forth, until it passed 60 seconds.
However, when I paste the URI into a browser (Chrome), it seems to slow down by 5s every time, and I'm really confused about how caching is actually done with this technique. Here are my questions:
1. Does Poster actually look at the max-age header and decide when to fetch the data?
2. On the client side (web, Android, ...), when accessing my web service, do I need to check the header and perform caching manually, or does the browser already cache the data itself?
3. Is there a way to avoid fetching data from the database every time? I guess I would have to store my data in memory somehow, but could it potentially run out of memory?
4. In this JAX-RS caching tutorial, how does caching actually work? The first line always fetches the data from the database:
Book myBook = getBookFromDB(id);
So how is it considered cached, unless the code doesn't execute in top-down order?
@Path("/book/{id}")
@GET
public Response getBook(@PathParam("id") long id, @Context Request request) {
    Book myBook = getBookFromDB(id);
    CacheControl cc = new CacheControl();
    cc.setMaxAge(86400);
    EntityTag etag = new EntityTag(Integer.toString(myBook.hashCode()));
    ResponseBuilder builder = request.evaluatePreconditions(etag);
    // cached resource did change -> serve updated content
    if (builder == null) {
        builder = Response.ok(myBook);
        builder.tag(etag);
    }
    builder.cacheControl(cc);
    return builder.build();
}
From your questions I see that you're mixing client-side caching (HTTP) with server-side caching (database). I think the root cause of this is the different behavior you observed in Firefox and Chrome, so first I will try to clear that up:
When I call my web service using Firefox Poster, within 60 seconds it seemed much faster on the 2nd, 3rd calls and so forth, until it passed 60 seconds. However, when I paste the URI into a browser (Chrome), it seemed to slow down 5s every time.
Example :
@GET
@Path("/book")
public Response getBook() throws InterruptedException {
    String book = " Sample Text Book";
    TimeUnit.SECONDS.sleep(5); // thanks @fge
    final CacheControl cacheControl = new CacheControl();
    cacheControl.setMaxAge((int) TimeUnit.MINUTES.toSeconds(1));
    return Response.ok(book).cacheControl(cacheControl).build();
}
I have a RESTful web service up and running; the URL for it is
http://localhost:8780/caching-1.0/api/cache/book - GET
Firefox:
The first time I accessed the URL, the browser sent a request to the server and got a response back with cache-control headers.
Second request within 60 seconds (using Enter): this time Firefox didn't go to the server for the response; instead it loaded the data from the cache.
Third request after 60 seconds (using Enter): this time Firefox made a request to the server and got a response.
Fourth request using Refresh (F5 or Ctrl+F5): if I refreshed the page (instead of hitting Enter) within 60 seconds of the previous request, Firefox didn't load data from the cache; instead it made a request to the server with a special header in the request.
Chrome:
Second request within 60 seconds (using Enter): this time Chrome sent the request to the server again instead of loading the data from the cache, adding the header cache-control: max-age=0 to the request.
Aggregating the results:
Because Chrome responds differently to an Enter press, you saw different behavior in Firefox and Chrome; it has nothing to do with JAX-RS or your HTTP response. To summarize: clients (Firefox/Chrome/Safari/Opera) will cache data for the time period specified in cache-control, and the client will not make a new request to the server unless the time expires or you do a force refresh.
I hope this clarifies your questions 1, 2 and 3.
4. In this JAX-RS caching tutorial, how does caching actually work? The first line always fetches the data from the database:
Book myBook = getBookFromDB(id);
So how is it considered cached?
The example you're referring to is not about minimizing database calls; it's about saving bandwidth over the network. The client already has the data and is checking with the server (revalidating) whether the data has been updated; only if the data has changed do you send the actual entity in the response.
1. Yes.
2. When using a browser like Firefox or Chrome, you don't need to worry about the HTTP cache because modern browsers handle it; Firefox, for example, uses an in-memory cache. On Android it depends on how you interact with the origin server: a WebView is essentially a browser object, but you need to handle HTTP caching yourself if you use HttpClient.
3. That's not about HTTP caching but about your server-side logic. The common answer is to use a database cache so that you don't hit the database on every HTTP request.
4. JAX-RS just gives you ways to work with HTTP cache headers. You need to use CacheControl and/or EntityTag for time-based caching and conditional requests; for example, when using EntityTag, the builder handles response status code 304 so you never need to worry about it.
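To make the conditional-request part concrete, here is a stdlib-only sketch of the comparison that request.evaluatePreconditions(etag) performs for you; the class and helper names are made up for illustration:

```java
import java.util.Objects;

// Compare the entity's current ETag with the If-None-Match value from the
// request and decide between 304 Not Modified and 200 with a body.
public class EtagCheck {

    // ETag derived from the entity's hashCode, as in the tutorial example.
    public static String etagFor(Object entity) {
        return "\"" + Integer.toString(entity.hashCode()) + "\"";
    }

    // Returns 304 when the client's cached copy is still current, else 200.
    public static int statusFor(String ifNoneMatch, Object entity) {
        if (ifNoneMatch != null && Objects.equals(ifNoneMatch, etagFor(entity))) {
            return 304;  // revalidated: no body sent, bandwidth saved
        }
        return 200;      // changed (or first request): send the entity
    }
}
```

The database is still queried to compute the current ETag; what the 304 saves is serializing and transferring the entity body.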
