Efficient way to write to file in rest service - java

I have a Java REST service that receives around 1000 requests per second. Each request has a payload of 1 KB. I need to write this payload to a single file. Since there will be 1000 requests per second, should I synchronize the writes to the file made with FileWriter? I also need to acknowledge in the response that the write to the file succeeded for each request, which means I need to flush the write for each request.
If I synchronize the file writes, the REST service performance will be degraded. Is there a way to write to the file without synchronizing the writes?

Did you check how Log4j logs information to a text file? It performs well even with millions of log messages at a time. You can refer to the WriterAppender implementation for reference.
http://grepcode.com/file/repository.springsource.com/org.apache.log4j/com.springsource.org.apache.log4j/1.2.15/org/apache/log4j/WriterAppender.java#WriterAppender
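For reference, here is a minimal sketch in the same spirit as WriterAppender: a single shared appender that synchronizes writes and flushes before returning, so each request can be acknowledged. The class name and file path are made up for illustration, not taken from Log4j:

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Hypothetical appender, loosely modelled on Log4j's synchronized doAppend.
public class PayloadAppender implements AutoCloseable {

    private final BufferedWriter writer;

    public PayloadAppender(String path) throws IOException {
        this.writer = Files.newBufferedWriter(Paths.get(path),
                StandardCharsets.UTF_8,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // One writer at a time; flushed before returning so the caller can acknowledge the request.
    public synchronized void append(String payload) throws IOException {
        writer.write(payload);
        writer.newLine();
        writer.flush();
    }

    @Override
    public synchronized void close() throws IOException {
        writer.close();
    }
}

The synchronized method does serialize the writes, but the lock is only held for one small buffered write and flush; if even that becomes a bottleneck, the usual next step is a single writer thread draining a queue and acknowledging callers after each flush.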

Related

How to Implement Huawei's Chunked File Upload Using Java

I need to implement a deployment pipeline, and at the end of the pipeline we are uploading a file, in this case to Huawei's app store. But for a file larger than 5 megabytes, we have to use a chunked API. I'm not familiar with how chunked uploads work. Can someone give me an implementation guideline, preferably in Java, of how to implement such a mechanism? The API parameters are as follows:
Edit :
In response to the comment below, let me clarify my question. Looking for references on how to do a chunked request, libraries such as HttpClient and OkHttp simply set the chunked flag to true and seem to hide the details from the library's client:
https://www.java-tips.org/other-api-tips-100035/147-httpclient/1359-how-to-use-unbuffered-chunk-encoded-post-request.html
Yet, the input parameters of the API seem to expect that I manage the chunks manually, since it expects a ChunkSize and a sequence number. I'm thinking that I might need to use the plain Java HTTP interface to work with the API, yet I failed to find any good source to get me started. If anyone could give me a reference or implementation guidance, that would definitely help.
More updates :
I tried to manually chunk my file into several parts, each 1 megabyte in size. Then I thought I could try calling the API for every chunk, using multipart/form-data. But the server side always closes the connection before writing even begins, causing: Connection reset by peer: socket write error.
It shouldn't be a proxy issue, since I have set the proxy up, and I can get the token, URL and auth code without problems.
File segmentation: a file of more than a few gigabytes is uploaded to the server. If you rely on the simplest flow of upload, receive, process and succeed, your server must be very good indeed; and even if the server is good enough, this is not a reasonable operation. So we have to find another way.
First of all, we have to deal with the size of the file. The approach is to cut it into chunks of a few megabytes, send them to the server over many requests, and save them there. Then name these chunk files with the MD5 of the source file plus an index (some people use a UUID plus an index instead; the difference between the two is described below). When you upload these small files to the server separately, it is best to save a record of each one in the database.
(1) When the first chunk upload completes, write the name, type, MD5, upload date, address and an "unfinished" status of the source file to a table, and switch the status to "finished" once the chunks have been spliced back together. Call this the file table.
(2) After each chunk upload, save a record in the database: the MD5 + index name of the source file, the MD5 of the chunk (this is a key point), the upload time and the file address. Call this the file_tem table.
Instant upload ("second transmission"): many online storage services implement this. At the start of an upload, send an Ajax request to check whether the file to be uploaded already exists. HTML5 provides a way to compute the file's MD5, and Ajax can then ask the server whether that MD5 exists and whether its status is "completed". If it does, also verify that the stored file itself still exists. If both conditions hold, you can return the "already present" status to the front end and proudly tell the customer the upload finished in seconds.
Here is the link:
https://blog.csdn.net/weixin_42584752/article/details/80873376
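To make the chunking step above concrete, here is a rough sketch (Java 9+ for readNBytes) that cuts a file into 1 MB chunks and names each chunk with the source file's MD5 plus an index. The chunk size, directory layout and helper names are assumptions for illustration; the real ChunkSize and sequence-number fields must follow Huawei's API documentation:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

public class FileChunker {

    private static final int CHUNK_SIZE = 1024 * 1024; // 1 MB per chunk (assumption)

    // Splits the source file into chunks named <md5-of-source>_<index>.
    public static List<String> chunk(String sourcePath, String outputDir)
            throws IOException, NoSuchAlgorithmException {
        String fileMd5 = md5Of(sourcePath);
        List<String> chunkPaths = new ArrayList<>();
        try (InputStream in = new FileInputStream(sourcePath)) {
            int index = 0;
            byte[] chunk;
            while ((chunk = in.readNBytes(CHUNK_SIZE)).length > 0) { // Java 9+
                String chunkPath = outputDir + "/" + fileMd5 + "_" + index;
                try (FileOutputStream out = new FileOutputStream(chunkPath)) {
                    out.write(chunk);
                }
                chunkPaths.add(chunkPath);
                index++; // the index doubles as the sequence number sent to the API
            }
        }
        return chunkPaths;
    }

    private static String md5Of(String path) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] buffer = new byte[8192];
        try (InputStream in = new FileInputStream(path)) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                md.update(buffer, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }
}

Each chunk path, together with its index, can then be posted to the API one request at a time in whatever format the endpoint expects (multipart/form-data or otherwise).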

How to improve file IO performance in a rest service

The scenario I have at hand is: from my Spring Boot REST service, read a Word doc from the resources folder and pass the byte array to the client.
I read the Word doc into memory using FileInputStream, convert the input stream to a byte array using Apache Commons IO's IOUtils, and place it in the response body of the REST service.
The problem here is that I read the file into memory on every service request, which is detrimental to the local memory of the process the service is running in.
I can't read the file line by line and return it to the service caller in that fashion, as I need to return the whole byte array to the caller at once.
Another problem I foresee is with how the file is read. I want it to be non-blocking IO instead of blocking IO.
Wondering what would be an efficient way to solve this.
Do you actually need to read the file every time a request comes in?
Otherwise you could just read the file on server startup and keep it in memory, stored in a Spring bean, and fetch it from there on every call.
If you don't want to load the file every time, it's better to create a @Bean and do the loading in the init/@PostConstruct phase. You can also add some functionality to your retrieve() method that checks and stores the file's modification time via File.lastModified() to decide whether you have to reload the content or not.
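A minimal sketch of that idea, assuming Spring and a file-system path (the class name, path and reload policy are illustrative, not prescribed):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import javax.annotation.PostConstruct;
import org.springframework.stereotype.Component;

@Component
public class DocumentCache {

    // Assumed location of the document on disk; adjust to your deployment.
    private final File docFile = new File("/opt/app/docs/template.docx");

    private byte[] content;
    private long lastModified;

    @PostConstruct
    public synchronized void init() throws IOException {
        reload();
    }

    // Returns the cached bytes, reloading only if the file changed since the last read.
    public synchronized byte[] retrieve() throws IOException {
        if (docFile.lastModified() != lastModified) {
            reload();
        }
        return content;
    }

    private void reload() throws IOException {
        content = Files.readAllBytes(docFile.toPath());
        lastModified = docFile.lastModified();
    }
}

Since the bytes are cached, each request just returns the array instead of re-reading the file, and the lastModified() check gives you cheap hot reloading.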

Ways to buffer REST response

There's a REST endpoint, which serves large (tens of gigabytes) chunks of data to my application.
The application processes the data at its own pace, and as incoming data volumes grow, I'm starting to hit the REST endpoint's timeout.
That means the processing speed is less than the network throughput.
Unfortunately, there's no way to raise the processing speed enough, as there's no "enough": incoming data volumes may grow indefinitely.
I'm thinking of a way to store the incoming data locally before processing it, in order to release the REST endpoint connection before the timeout occurs.
What I've come up with so far is downloading the incoming data to a temporary file and reading (processing) that file simultaneously using an OutputStream/InputStream.
A sort of buffering, using a file.
This brings its own problems:
what if the processing speed becomes faster than the downloading speed for some time and I hit EOF?
the file parser operates with ObjectInputStream, and it behaves weirdly in the case of an empty file/EOF
and so on
Are there conventional ways to do such a thing?
Are there alternative solutions?
Please provide some guidance.
Upd:
I'd like to point out: the HTTP server is out of my control.
Consider it to be a vendor data provider. They have many consumers and refuse to alter anything for just one.
It looks like we're the only ones using all of their data, as our client app's processing speed is far greater than their sample client performance metrics. Still, we cannot match our app's performance with the network throughput.
The server does not support HTTP range requests or pagination.
There's no way to divide the data into chunks to load, as there's no filtering attribute to guarantee that every chunk will be small enough.
In short: we can download all the data in a given time before the timeout occurs, but we cannot process it.
Having an adapter between the InputStream and the OutputStream, to perform as a blocking queue, would help a ton.
You're using something like new ObjectInputStream(new FileInputStream(...)), and the solution for the EOF problem could be to wrap the FileInputStream first in a WriterAwareStream, which would block when hitting EOF as long as the writer is still writing.
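A rough sketch of what such a WriterAwareStream could look like (the class does not exist in the JDK; the flag and the polling interval are assumptions):

import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

// Treats EOF as "no data yet" while the downloader is still writing to the backing file.
public class WriterAwareStream extends InputStream {

    private final InputStream delegate;
    private final AtomicBoolean writerFinished;

    public WriterAwareStream(InputStream delegate, AtomicBoolean writerFinished) {
        this.delegate = delegate;
        this.writerFinished = writerFinished;
    }

    @Override
    public int read() throws IOException {
        while (true) {
            int b = delegate.read();
            if (b != -1 || writerFinished.get()) {
                return b; // real data, or a genuine EOF once the writer is done
            }
            try {
                Thread.sleep(50); // crude back-off while waiting for more data
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while waiting for the writer", e);
            }
        }
    }
}

The downloader flips the writerFinished flag when the HTTP transfer completes, so the ObjectInputStream on top only ever sees a true EOF after all the data has arrived.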
Anyway, if latency doesn't matter much, I would not bother starting to process before the download has finished. Oftentimes, there isn't much you can do with an incomplete list of objects.
Maybe some memory-mapped-file-based queue like Chronicle-Queue may help you. It's faster than dealing with files directly and may even be simpler to use.
You could also implement a HugeBufferingInputStream that internally uses a queue, reads from its input stream, and, when it has a lot of data, spills it out to disk. This may be a nice abstraction, completely hiding the buffering.
There's also FileBackedOutputStream in Guava, which automatically switches from using memory to using a file when the data gets big, but I'm afraid it's optimized for small sizes (with tens of gigabytes expected, there's no point in trying to use memory).
Are there alternative solutions?
If your consumer (the HTTP client) is having trouble keeping up with the stream of data, you might want to look at a design where the client manages its own work in progress, pulling data from the server on demand.
RFC 7233 describes Range Requests:
devices with limited local storage might benefit from being able to request only a subset of a larger representation, such as a single page of a very large document, or the dimensions of an embedded image
HTTP Range requests on the MDN Web Docs site might be a more approachable introduction.
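For completeness, issuing a range request from Java is just a matter of setting the Range header. A minimal sketch follows; the URL and the range are examples, and the vendor server in the question reportedly does not honour ranges:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeRequestExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/large-dataset");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Ask for the first 1 MiB only; the next request would continue at byte 1048576.
        conn.setRequestProperty("Range", "bytes=0-1048575");

        int status = conn.getResponseCode();
        // 206 Partial Content means the server honoured the range;
        // 200 OK means it ignored it and is sending the whole representation.
        System.out.println("Status: " + status);
        try (InputStream in = conn.getInputStream()) {
            byte[] part = in.readAllBytes(); // Java 9+
            System.out.println("Received " + part.length + " bytes");
        }
    }
}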
This is the sort of thing that queueing servers are made for. RabbitMQ, Kafka, Kinesis, any of those. Perhaps KStream would work.
With everything you get from the HTTP server (given your constraint that it cannot be broken up into units of work), you could partition it into chunks of bytes of some reasonable size, maybe 1024 kB. Your application would push/publish those records/messages to the topic/queue. They would all share some common series ID so you know which chunks match up, and each would need to carry an ordinal so they can be put back together in the right order; with a single Kafka partition you could probably rely upon offsets. You might publish a final record for that series with a "done" flag that would act as an EOF for whatever is consuming it.
Of course, you'd send an HTTP response as soon as all the data is queued, though it may not necessarily be processed yet.
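A rough sketch of that chunk-and-publish step using the Kafka producer API; the topic, chunk size, header name and the empty "done" record are assumptions for illustration:

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ChunkPublisher {

    private static final int CHUNK_SIZE = 1024 * 1024; // ~1 MB per record (assumption)

    public static void publish(InputStream httpBody, String topic) throws IOException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        String seriesId = UUID.randomUUID().toString(); // ties the chunks of one download together
        try (Producer<String, byte[]> producer = new KafkaProducer<>(props)) {
            long ordinal = 0;
            byte[] chunk;
            while ((chunk = httpBody.readNBytes(CHUNK_SIZE)).length > 0) { // Java 9+
                // Same key => same partition, so offsets also preserve the chunk order.
                ProducerRecord<String, byte[]> record = new ProducerRecord<>(topic, seriesId, chunk);
                record.headers().add("ordinal", Long.toString(ordinal).getBytes(StandardCharsets.UTF_8));
                producer.send(record);
                ordinal++;
            }
            // Empty record acts as the "done"/EOF marker for the consumer.
            producer.send(new ProducerRecord<>(topic, seriesId, new byte[0]));
        }
    }
}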
not sure if this would help in your case because you haven't mentioned what structure & format the data are coming to you in, however, i'll assume a beautifully normalised, deeply nested hierarchical xml (ie. pretty much the worst case for streaming, right? ... pega bix?)
i propose a partial solution that could allow you to sidestep the limitation of your not being able to control how your client interacts with the http data server -
deploy your own webserver, in whatever contemporary tech you please (which you do control) - your local server will sit in front of your locally cached copy of the data
periodically download the output of the webservice using a built-in http querying library, a command-line util such as aria2c, curl, wget et al., an etl (or whatever you please) directly onto a local device-backed .xml file - this happens as often as it needs to
point your rest client to your own-hosted 127.0.0.1/modern_gigabyte_large/get... 'smart' server, instead of the old api.vendor.com/last_tested_on_megabytes/get... server
some thoughts:
you might need to refactor your data model to indicate that the xml webservice data that you and your clients are consuming was dated at the last successful run^ (ie. update this date when the next ingest process completes)
it would be theoretically possible for you to transform the underlying xml on the way through to better yield records in a streaming fashion to your webservice client (if you're not already doing this) but this would take effort - i could discuss this more if a sample of the data structure was provided
all of this work can run in parallel to your existing application, which continues on your last version of the successfully processed 'old data' until the next version 'new data' are available
^
in trade you will now need to manage a 'sliding window' of data files, where each 'result' is a specific instance of your app downloading the webservice data and storing it on disc, then successfully ingesting it into your model:
last (two?) good result(s) compressed (in my experience, gigabytes of xml packs down a helluva lot)
next pending/ provisional result while you're streaming to disc/ doing an integrity check/ ingesting data - (this becomes the current 'good' result, and the last 'good' result becomes the 'previous good' result)
if we assume that you're ingesting into a relational db, the current (and maybe previous) tables with the webservice data loaded into your app, and the next pending table
switching these around becomes a metadata operation, but now your database must store at least webservice data x2 (or x3 - whatever fits in your limitations)
... yes you don't need to do this, but you'll wish you did after something goes wrong :)
Looks like we're the only ones to use all of their data
this implies that there is some way for you to partition or limit the webservice feed - how are the other clients discriminating so as not to receive the full monty?
You can use in-memory caching techniques OR you can use Java 8 streams. Please see the following link for more info:
https://www.conductor.com/nightlight/using-java-8-streams-to-process-large-amounts-of-data/
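As a tiny illustration of the Java 8 streams suggestion, you could process a locally buffered copy of the data lazily, line by line, instead of loading it all into memory; the file path here is an assumption:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class StreamProcessing {
    public static void main(String[] args) throws IOException {
        // Path to the locally buffered copy of the downloaded data (assumption).
        try (Stream<String> lines = Files.lines(Paths.get("/tmp/buffered-download.dat"))) {
            lines.forEach(line -> {
                // process each record lazily, without materializing the whole file
            });
        }
    }
}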
Camel could maybe help you regulate the network load between the REST producer and consumer.
You might for instance introduce a Camel endpoint acting as a proxy in front of the real REST endpoint and apply some throttling policy before forwarding to the real endpoint:
from("http://localhost:8081/mywebserviceproxy")
.throttle(...)
.to("http://myserver.com:8080/myrealwebservice);
http://camel.apache.org/throttler.html
http://camel.apache.org/route-throttling-example.html
My 2 cents,
Bernard.
If you have enough memory, maybe you can use an in-memory data store like Redis.
When you get data from your REST endpoint, you can save it into a Redis list (or any other data structure that is appropriate for you).
Your consumer will consume data from the list.
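A minimal sketch with the Jedis client; the key name and the chunk representation are assumptions:

import java.util.List;
import redis.clients.jedis.Jedis;

public class RedisBuffer {

    private static final String QUEUE_KEY = "rest-data-buffer"; // key name is an assumption

    // Producer side: append each chunk of incoming data to the tail of the list.
    public static void produce(Jedis jedis, String chunk) {
        jedis.rpush(QUEUE_KEY, chunk);
    }

    // Consumer side: BLPOP blocks until an element is available (0 = wait indefinitely);
    // the result is a [key, value] pair.
    public static String consume(Jedis jedis) {
        List<String> result = jedis.blpop(0, QUEUE_KEY);
        return result.get(1);
    }
}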

Processing data based on the metadata in the file using apache camel

I have to set up Camel to process data files where the first line of the file is metadata and it is followed by millions of lines of actual data. The metadata dictates how the data is to be processed. What I am looking for is something like this:
1. Read the first line (metadata) and populate a bean with the metadata, then
2. send the data 1000 lines at a time to the data processor, which will refer to the bean from step 1.
Is it possible in Apache Camel?
Yes.
An example architecture might look something like this:
You could set up a simple queue that could be populated with file names (or whatever identifier you are using to locate each individual file).
From the queue, you could route through a message translator bean, whose sole job is to translate a request for a filename into a POJO that contains the metadata from the first line of the file.
(You have a few options here)
Your approach to processing the 1000-line sets will depend on whether or not the output or resulting data created from those sets needs to be recomposed into a single message and processed again later. If so, you could implement a composed message processor made up of a message producer/consumer, a message aggregator and a router. The message producer/consumer would receive the POJO with the metadata created in step 2 and enqueue as many new requests as are necessary to process all of the lines in the file. The router would route from this queue through your processing pipeline and into the message aggregator. Once aggregated, a single unified message with all of your important data will be available for you to do with as you will.
If instead each 1000-line set can be processed independently and rejoining is not required, then it is not necessary to aggregate the messages. Instead, you can use a router to route from step 2 to a producer/consumer that will, like above, enqueue the necessary number of new requests for each file. Finally, the router will route from this final queue to a consumer that will do the processing.
Since you have a large quantity of data to deal with, it will likely be difficult to pass around 1000 line groups of data through messages, especially if they are being placed in a queue (you don't want to run out of memory). I recommend passing around some type of indicator that can be used to identify which line of the file a specific request was for, and then parse the 1000 lines when you need them. You could do this in a number of ways, like by calculating the number of bytes deep into a file a specific line is, and then using a file reader's skip() method to jump to that line when the request hits the bean that will be processing it.
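A small sketch of that offset-based approach in plain Java (the class and method names are illustrative, not part of any Camel API):

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ChunkReader {

    // Reads up to lineCount lines starting at a known byte offset.
    public static List<String> readBlock(String path, long byteOffset, int lineCount)
            throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            long skipped = 0;
            while (skipped < byteOffset) {
                long n = in.skip(byteOffset - skipped); // skip() may skip fewer bytes than asked
                if (n <= 0) {
                    break;
                }
                skipped += n;
            }
            BufferedReader reader =
                    new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
            List<String> lines = new ArrayList<>(lineCount);
            String line;
            while (lines.size() < lineCount && (line = reader.readLine()) != null) {
                lines.add(line);
            }
            return lines;
        }
    }
}

Each queued message then only needs to carry the file identifier, the byte offset and the line count, which keeps the messages tiny.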
Here are some resources provided on the Apache Camel website that describe the enterprise integration patterns that I mentioned above:
http://camel.apache.org/message-translator.html
http://camel.apache.org/composed-message-processor.html
http://camel.apache.org/pipes-and-filters.html
http://camel.apache.org/eip.html
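Pulling the pieces together, a rough Camel route sketch might look like the following. The bean names are hypothetical, and the grouped tokenizer with streaming() keeps the file from being loaded into memory all at once:

import org.apache.camel.builder.RouteBuilder;

public class MetadataFirstRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("file:data/inbox?noop=true")
            // metadataEnricher is a hypothetical bean that reads the first line and
            // stores the metadata (e.g. in a header or a shared bean), leaving the body intact.
            .to("bean:metadataEnricher?method=readFirstLine")
            // Stream the file in groups of 1000 lines so it is never fully in memory.
            // Note: the first group still contains the metadata line, which the processor can skip.
            .split().tokenize("\n", 1000).streaming()
                .to("seda:lineBlocks");

        from("seda:lineBlocks")
            // dataProcessor is a hypothetical bean that handles one 1000-line block,
            // consulting the metadata captured in the first step.
            .to("bean:dataProcessor?method=processBlock");
    }
}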

Specify InputStream for ServletResponse instead of copying InputStream into OutputStream

In short, I have a Servlet which retrieves pictures/videos etc. from an underlying data store.
In order to achieve this I need to copy the file's InputStream to the ServletResponse's OutputStream.
From my point of view this is not efficient, since I'll need to copy the file in memory before sending it; it would be more convenient to specify an InputStream from which the OutputStream would read data and send it straight away, after reading some data into the buffer.
I looked at the ServletResponse documentation and it has some buffer for the message data, so I have a few questions regarding it:
Is this the right mechanism?
What if I decide not to send the file at the end of Servlet processing?
For example:
If I have copied the InputStream into the OutputStream, and then find out that this is not an authorized request and the user has no right to see this object (a mistake in design, maybe), I would still have sent some data to the client, although this is not what I intended.
To address your first concern, you can easily copy an InputStream to an OutputStream using IOUtils from Apache Commons IO:
IOUtils.copy(fileInputStream, servletOutputStream);
It uses a 4K buffer, so memory consumption should not be a concern. In fact, you cannot just send data straight from an InputStream: at the lowest level the operating system still has to read the file contents into some memory location, and in order to send them to a socket you need to provide a memory location where the data to be sent resides. Streams are just a useful abstraction.
About your second question: this is how HTTP works. If you start streaming data to the client, the servlet container sends all response headers first. If you abort in the middle, from the client's perspective it looks like an interrupted download.
Is this the right mechanism?
Basically, it is the only mechanism provided by the Servlet APIs. You need to design your servlet with this in mind.
(It is hard to see how it could be done any other way. A read syscall reads data into memory from a device (the disk). A write syscall writes data from memory to a device (the network interface). There is no syscall to transfer data directly from one device to another. The best you can do is to reduce the amount of copying of data within the application. If you use something like IOUtils.copy, it should minimize that as far as possible. The only way you could avoid going through application memory would be to use some special-purpose hardware / operating system combination optimized for content delivery.)
However, this is probably moot anyway. In most cases, the performance bottleneck is likely to be movement of data over the network. Data can probably be read from disk to memory, copied, and written to the network interface orders of magnitude faster than it can move through the network to the user's web browser (or whatever).
(If it is NOT moot, then a practical way to do content delivery would be to use a separate web server implemented in native code that is optimized for delivering static content; e.g. something like nginx.)
What if I decide not to send the file at the end of Servlet processing? For example: if I have copied the InputStream into the OutputStream, and then find out that this is not an authorized request and the user has no right to see this object (a mistake in design, maybe), I would still have sent some data to the client, although this is not what I intended.
You should write your servlet to do the access checks BEFORE reading the content into memory. And ideally, before you "commit" the response by sending the response header.
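A minimal sketch of that ordering in a servlet; isAuthorized() and openContent() are placeholders for your own access-check and lookup logic:

import java.io.IOException;
import java.io.InputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.io.IOUtils;

public class MediaServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Access check first: nothing has been written yet, so we can still send a clean error.
        if (!isAuthorized(req)) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        resp.setContentType("application/octet-stream");
        try (InputStream in = openContent(req)) {
            // Streams through a small internal buffer; the whole file is never held in memory.
            IOUtils.copy(in, resp.getOutputStream());
        }
    }

    private boolean isAuthorized(HttpServletRequest req) {
        return true; // placeholder for your real access check
    }

    private InputStream openContent(HttpServletRequest req) throws IOException {
        throw new UnsupportedOperationException("placeholder: open the requested resource");
    }
}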
