We have a set of services in a Vert.x cluster. We serve the web front end through an API gateway, which is one service within the cluster. A client has asked for the ability to download some data as a CSV file. It would be transmitted as below:
Service A --(Event bus)---> API gateway ---(Web socket)---> Browser
My question is: is it wise to stream such a file over the event bus from Service A to the API gateway? (The file may be as large as 100 MB.)
You can, but the event bus is not designed for it. It will create congestion, because the entire file is kept in memory until the transfer is complete. Instead, set up an HTTP server, communicate the URL through an event bus consumer, and transfer the file over HTTP. That way you get all of HTTP's support for large transfers as well.
If you don't want a permanent HTTP server for this, just start one whenever a transfer request comes in.
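The "hand out a URL instead of streaming over the event bus" pattern can be sketched with the JDK's built-in HttpServer; it is used here only to keep the example self-contained — in Vert.x you would use vertx.createHttpServer() to serve the file and eventBus().send() to deliver the URL. Class and path names are illustrative.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileHandoff {
    // Start a short-lived HTTP server that serves one file and return its URL.
    // In the real system, this URL is what Service A would send over the event
    // bus; the gateway (or browser) then downloads the file over plain HTTP.
    public static String serve(Path file) throws java.io.IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/download", exchange -> {
            exchange.sendResponseHeaders(200, Files.size(file));
            try (OutputStream out = exchange.getResponseBody()) {
                Files.copy(file, out); // streams in chunks, no 100 MB buffer
            }
            // A real implementation would stop the server once the transfer
            // completes (server.stop), since it is a one-shot handoff.
        });
        server.start();
        return "http://localhost:" + server.getAddress().getPort() + "/download";
    }
}
```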
Related
I have two services: Ingress (the input node) and Storage.
The client sends requests to Ingress to get some data (large files).
Ingress sends a request to Storage to get the data the client needs.
Can somebody tell me what I can use to re-stream the response from Storage to the client without OutOfMemoryError issues?
Currently I've implemented it by saving the result to a file on Ingress, re-reading it, and sending it as the response to the client. But that is really slow, of course.
Thank you.
Spring Cloud Gateway (more documentation here) can help. Its primary purpose seems to be as a configuration-driven gateway, but it can be embedded into an application to serve just certain endpoints, so you may be able to configure it in your "Ingress" service to route certain requests to your Storage service.
If that doesn't work (or, as in my case, it's too much work), you can use some specific classes from Spring Cloud Gateway in your own service. Specifically, I've used the ProxyExchange class to proxy calls to another service and stream the results back to the original caller.
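A minimal sketch of that ProxyExchange usage, assuming an "Ingress" controller with spring-cloud-starter-gateway-mvc on the classpath; the controller name, the storage-service URL, and the /data path are all hypothetical. (It needs a running Spring container, so it is a fragment rather than a runnable program.)

```java
@RestController
public class IngressController {

    // ProxyExchange is injected by Spring Cloud Gateway's argument resolver;
    // .uri(...).get() forwards the call and relays the response to the caller.
    @GetMapping("/data/{id}")
    public ResponseEntity<byte[]> proxy(@PathVariable String id,
                                        ProxyExchange<byte[]> proxy) {
        return proxy.uri("http://storage-service/data/" + id).get();
    }
}
```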
I have a Spring Boot application running on a server; let's call it server1. I have written a download API on server1 that calls another Spring Boot application running on server2 to download a large file (in chunks). With this setup, I have observed the following behaviour: from the client side (a REST client that has made an HTTP call to server1), the data only starts to appear once all the chunks have been fetched by server1. For a large file, this means the client has to wait minutes before receiving any data from the API. The expectation is that the chunks should be available to the client in real time, as soon as server1 receives them. Is that possible or not?
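Whether this is possible comes down to whether server1 buffers the whole upstream body or relays each chunk as it arrives. A minimal stdlib relay sketch of the latter (the class and method names are mine; in Spring you would run a loop like this inside a StreamingResponseBody so each chunk is flushed to the client immediately):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamRelay {
    // Copy the upstream body to the downstream response in small chunks,
    // flushing after each write so the client sees data as it arrives
    // instead of waiting for the whole file to be buffered.
    public static long relay(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            out.flush(); // push each chunk to the client immediately
            total += n;
        }
        return total;
    }
}
```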
I have a Java web service that returns a large amount of data. Is there a standard way to stream a response rather than trying to return a huge chunk of data at once?
This problem is analogous to the older problem of serving large RSS feeds. You can handle it by parameterizing the request (http://host/myservice?start=0&count=100), or by including next/prev URLs in the response itself.
The latter approach has a lot of advantages. I'll search for a link that describes it and post it here if I find one.
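The next/prev-link variant can be as simple as computing the follow-up URL for each page and embedding it in the response; a tiny sketch (helper name is mine):

```java
public class Pager {
    // Build the "next" link for a start/count-parameterized endpoint, so the
    // client can keep following links instead of computing offsets itself.
    public static String nextPageUrl(String base, int start, int count) {
        return base + "?start=" + (start + count) + "&count=" + count;
    }
}
```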
I would look into a Comet-like approach:
From Wikipedia:
Comet is a web application model in which a long-held HTTP request
allows a web server to push data to a browser, without the browser
explicitly requesting it.
Basically, rather than sending the large data all at once, allow your web server to push data at its own pace and according to your needs.
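The server-push idea above can be sketched with a chunked HTTP response that the server writes to at its own pace (JDK built-in HttpServer; class, path, and chunk contents are illustrative):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class PushDemo {
    public static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/feed", exchange -> {
            // Response length 0 selects chunked transfer encoding, so the
            // server can keep writing pieces for as long as it likes.
            exchange.sendResponseHeaders(200, 0);
            try (OutputStream out = exchange.getResponseBody()) {
                for (int i = 0; i < 3; i++) {
                    out.write(("chunk " + i + "\n").getBytes());
                    out.flush(); // push each piece to the client immediately
                }
            }
        });
        server.start();
        return server;
    }
}
```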
A web service might not be a good method for large data transfer.
If I were you, I would set up another service, such as FTP or SFTP.
The server puts the data at a specific path on the FTP server and sends the path information to the client in the web service response.
We are trying to figure out if we can copy each HTTP request coming in to our Tomcat production server and send it to a development test server, to get a realistic simulation of production traffic.
The original request handling should not be impacted; the production server need not wait for a response to the copied request from the development server.
Is there a simple way to do this?
If you really want to do it live, what I'd recommend is to put an HTTP servlet Filter in front of your production webapp. In this filter, copy the request data into a new request and send it (asynchronously) to your development server. This way, at least you don't have to modify your application code.
But I think you should try to avoid doing that in a production environment. Instead, you could dump the request data (see Istvan's answer) and replay the requests from a development machine.
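The asynchronous-copy part of that filter can be sketched with the JDK's HttpClient; a real Filter would call something like this from doFilter() after wrapping the request so the application can still read its body. The class name, dev-server URL, and the idea of forwarding the body as a string are all assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class RequestMirror {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Fire-and-forget: POST a copy of the request data to the dev server
    // without blocking the production request thread. sendAsync returns
    // immediately; failures on the dev side are swallowed (status -1).
    public static CompletableFuture<Integer> mirror(String devUrl, String body) {
        HttpRequest copy = HttpRequest.newBuilder(URI.create(devUrl))
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return CLIENT.sendAsync(copy, HttpResponse.BodyHandlers.discarding())
                .thenApply(HttpResponse::statusCode)
                .exceptionally(e -> -1);
    }
}
```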
Not that I know of. Maybe you can set up the Request Dumper Filter (http://tomcat.apache.org/tomcat-7.0-doc/config/filter.html#Request_Dumper_Filter) so that it sends the logs to a remote server running a small app that replays the requests based on what it receives.
"Sends the logs" = configure log4j so that it stores the log on a network share, or use a SocketAppender.
Is it possible to monitor file uploads somehow with the Play! framework? Also, if the file is big (e.g. 500+ MB), is it possible to save the received bytes to a temporary file instead of keeping them in memory? (See update below.)
Note: there is no code to show, as I'm just wondering about these questions and cannot seem to find the answers with Google.
Thanks!
** Update **
(I almost forgot about this question.) Well, apparently, uploaded files are stored in temporary files, and are passed to the action controller not as a byte array (or similar) but as a Java File object.
But even in a RESTful environment, file monitoring can be achieved.
** Update 2 **
Is there a way to get early event listeners on incoming HTTP requests? This could allow for monitoring request data transfer.
Large requests and temp files
Play! already stores large HTTP requests in temp files named after UUIDs (thus reducing the server's memory footprint). Once the request is done, this file gets deleted.
Monitoring uploads in Play!
Play! uses (the awesome) Netty project for its HTTP server stack (and also on the client stack, if you're considering Async HTTP client).
Netty is:
asynchronous
event-driven
100% HTTP
Given Play!'s stack, you should be able to implement your "upload progress bar" or something similar. In fact, Async HTTP client already provides progress listeners for file uploads and resumable downloads (see the quick start guide).
But the play.server package doesn't seem to provide such functionality or an extension point.
Monitoring uploads anyway
I think Play! is meant to run behind a "real" HTTP server in reverse-proxy mode (like nginx or lighttpd).
So you'd be better off using an upload-progress module for one of those servers (like the HttpUploadProgressModule for nginx) than messing with Play!'s HTTP stack.