limit size of requests with com.sun.net.httpserver.HttpExchange - java

Abusive users may attempt to send very large requests to my HttpServer, so a "maxRequestSize" config option would have been useful, but I have yet to find any way of dealing with this. I also thought there might be a timeout option somewhere, but couldn't find anything like that either.
httpExchange.getRequestBody() returns an InputStream, but from what I can tell there is no way to determine the length of an InputStream without reading it first.
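One direction I'm considering (just a sketch of the idea, with an arbitrary class name and limit) is wrapping the request body in a counting FilterInputStream that fails once a configured maximum has been read:

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative helper: rejects request bodies larger than maxBytes.
class LimitedInputStream extends FilterInputStream {
    private final long maxBytes;
    private long readSoFar;

    LimitedInputStream(InputStream in, long maxBytes) {
        super(in);
        this.maxBytes = maxBytes;
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b != -1 && ++readSoFar > maxBytes) {
            throw new IOException("Request body exceeds " + maxBytes + " bytes");
        }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0 && (readSoFar += n) > maxBytes) {
            throw new IOException("Request body exceeds " + maxBytes + " bytes");
        }
        return n;
    }
}

A handler could then read from new LimitedInputStream(httpExchange.getRequestBody(), 1_000_000) and answer with a 413 when the exception is thrown (the one-megabyte limit is arbitrary).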

Related

Handle HTTP POST multipart response through ServerSocket

Good afternoon everyone,
First of all, I'll say this is only for personal use: it's a small project to improve my Java knowledge. My idea is to build this kind of thing to better understand how developers work with sockets and bytes, since I really like understanding these things for my future ideas.
I'm currently writing a lightweight HTTP server in Java to understand how it works. I've been reading the documentation, but I still have some difficulty understanding parts of it. The main problem I'm facing (and I'd like to know whether it's related or not) is that the Content-Length header reports more data than I actually get from the BufferedReader. I don't know if the issue is that bytes are being decoded into chars by the BufferedReader, so it ends up with less data; probably what I have to do is treat that part as binary and read the bytes from the InputStream directly, but that leads to the real problem I'm facing.
A Reader reads a certain number of bytes ahead and keeps them in its own buffer, which means that data from the InputStream has already been consumed by the Reader and is no longer on the stream, so calling read() on the stream would return -1 even though there are logically more bytes to process. A multipart body is divided into multiple parts separated by a boundary, with a blank line separating each part's headers from its content. I still need to read the part headers as Strings to process them, but the content itself should be handled as binary data. Without changing the buffer length (which would require knowing in advance exactly how many bytes the headers take), the most probable result is that the content ends up inside the BufferedReader's buffer. Is it possible to recover that data from the BufferedReader, or should I find a way to read the content as binary without it being consumed first?
As I said, I'm new to working with sockets and services, so I don't know whether this is really a different kind of issue. Any help would be appreciated; thank you in advance.
Answer from Remy Lebeau, which can be found in the comments and which turned out to be useful for me:
since multipart data is both textual and binary, you are going to have to do your own buffering of the socket data so you have more control and know where the data switches back and forth. At the very least, since you can read binary data directly from a BufferedInputStream, and access its internal buffer, you can let it handle the actual buffering for you, and it is not difficult to write a custom readLine() method that can read a line of text from a BufferedInputStream without using BufferedReader
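To make sure I understood the suggestion, here is a minimal sketch of such a readLine() (my own attempt, not Remy Lebeau's code), which reads one header line directly from a BufferedInputStream so the remaining bytes stay available on the stream for binary reads:

import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

class LineReader {
    // Reads one CRLF-terminated header line as text, leaving the remaining bytes on the stream.
    static String readLine(BufferedInputStream in) throws IOException {
        ByteArrayOutputStream line = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            if (b == '\n') {
                break;             // end of line (LF)
            }
            if (b != '\r') {
                line.write(b);     // skip the CR of a CRLF pair
            }
        }
        return line.toString(StandardCharsets.ISO_8859_1.name());
    }
}

Header lines can then be read with readLine() until the blank line that ends them, and the part content that follows can be read as raw bytes from the same BufferedInputStream.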

Ways to buffer REST response

There's a REST endpoint which serves large (tens of gigabytes) chunks of data to my application.
The application processes the data at its own pace, and as incoming data volumes grow, I'm starting to hit the REST endpoint's timeout.
Meaning, processing speed is less than network throughput.
Unfortunately, there's no way to raise processing speed enough, because there is no "enough": incoming data volumes may grow indefinitely.
I'm thinking of storing incoming data locally before processing it, in order to release the REST endpoint connection before the timeout occurs.
What I've come up with so far is downloading incoming data to a temporary file and simultaneously reading (processing) that file, using an OutputStream/InputStream pair.
A sort of buffering, using a file.
This brings its own problems:
- what if processing speed becomes faster than download speed for some time and I get EOF?
- the file parser operates with ObjectInputStream, and it behaves strangely in cases of an empty file/EOF
- and so on
Are there conventional ways to do such a thing?
Are there alternative solutions?
Please provide some guidance.
Upd:
I'd like to point out that the HTTP server is out of my control.
Consider it to be a vendor data provider. They have many consumers and refuse to alter anything for just one.
It looks like we're the only ones using all of their data; our client app's processing speed is far greater than their sample client's performance metrics. Still, we cannot match our app's performance to the network throughput.
The server does not support HTTP range requests or pagination.
There's no way to divide the data into chunks to load, as there's no filtering attribute that would guarantee every chunk will be small enough.
In short: we can download all the data within the timeout, but we cannot process it in that time.
Having an adapter between the InputStream and OutputStream that behaves like a blocking queue would help a ton.
You're using something like new ObjectInputStream(new FileInputStream(...)), and the solution for EOF could be to wrap the FileInputStream first in a WriterAwareStream, which would block on EOF as long as the writer is still writing.
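A minimal sketch of what such a WriterAwareStream could look like (the class name comes from this answer's suggestion; the flag-based implementation below is only an illustrative assumption):

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch: blocks on EOF until the writer signals it has finished.
class WriterAwareStream extends FilterInputStream {
    private final AtomicBoolean writerDone;

    WriterAwareStream(InputStream in, AtomicBoolean writerDone) {
        super(in);
        this.writerDone = writerDone;
    }

    @Override
    public int read() throws IOException {
        byte[] one = new byte[1];
        int n = read(one, 0, 1);
        return n == -1 ? -1 : one[0] & 0xFF;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        while (true) {
            int n = super.read(buf, off, len);
            if (n != -1 || writerDone.get()) {
                return n;                  // real data, or a genuine EOF
            }
            try {
                Thread.sleep(100);         // writer still busy: wait and retry
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while waiting for writer", e);
            }
        }
    }
}

In this sketch the downloading thread would set the flag only after it has flushed and closed its OutputStream, so no trailing bytes are lost.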
Anyway, in case latency doesn't matter much, I would not bother starting to process before the download has finished. Oftentimes there isn't much you can do with an incomplete list of objects.
Maybe a memory-mapped-file-based queue like Chronicle-Queue could help you. It's faster than dealing with files directly and may even be simpler to use.
You could also implement a HugeBufferingInputStream backed internally by a queue, which reads from its input stream and, when it accumulates a lot of data, spills it out to disk. This could be a nice abstraction, completely hiding the buffering.
There's also FileBackedOutputStream in Guava, which automatically switches from memory to a file when the data gets big, but I'm afraid it's optimized for small sizes (with tens of gigabytes expected, there's no point in trying to use memory).
Are there alternative solutions?
If your consumer (the http client) is having trouble keeping up with the stream of data, you might want to look at a design where the client manages its own work in progress, pulling data from the server on demand.
RFC 7233 describes Range Requests:
devices with limited local storage might benefit from being able to request only a subset of a larger representation, such as a single page of a very large document, or the dimensions of an embedded image
HTTP Range requests on the MDN Web Docs site might be a more approachable introduction.
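For illustration only (the asker notes the vendor's server does not support ranges), a ranged GET with HttpURLConnection would look roughly like this; the URL and byte range are placeholder values:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeRequestDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/big-dataset");   // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Range", "bytes=0-1048575");    // ask for the first 1 MiB only
        int code = conn.getResponseCode();                      // 206 Partial Content if ranges are supported
        System.out.println("Status: " + code);
        try (InputStream in = conn.getInputStream()) {
            // read and process just this chunk, then request the next range
        }
        conn.disconnect();
    }
}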
This is the sort of thing that queueing servers are made for. RabbitMQ, Kafka, Kinesis, any of those. Perhaps KStream would work. With everything you get from the HTTP server (given your constraint that it cannot be broken up into units of work), you could partition it into chunks of bytes of some reasonable size, maybe 1024kB. Your application would push/publish those records/messages to the topic/queue. They would all share some common series ID so you know which chunks match up, and each would need to carry an ordinal so they can be put back together in the right order; with a single Kafka partition you could probably rely upon offsets. You might publish a final record for that series with a "done" flag that would act as an EOF for whatever is consuming it. Of course, you'd send an HTTP response as soon as all the data is queued, though it may not necessarily be processed yet.
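A rough sketch of that chunking idea against the Kafka producer API; the topic name, broker address, series-ID handling, and chunk size below are illustrative assumptions, not a definitive design:

import java.io.InputStream;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ChunkPublisher {
    // Reads the vendor's HTTP response stream and publishes fixed-size chunks, keyed by a series ID.
    static void publish(InputStream httpBody, String seriesId) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");              // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            byte[] buf = new byte[1024 * 1024];                         // 1 MiB chunks (arbitrary size)
            int n;
            while ((n = httpBody.read(buf)) != -1) {
                // Keying by seriesId keeps every chunk of one download on the same partition,
                // so the consumer can rely on offsets for ordering.
                producer.send(new ProducerRecord<>("vendor-data-chunks", seriesId, Arrays.copyOf(buf, n)));
            }
            // An empty payload marks the end of the series, acting as an EOF for the consumer.
            producer.send(new ProducerRecord<>("vendor-data-chunks", seriesId, new byte[0]));
        }
    }
}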
not sure if this would help in your case because you haven't mentioned what structure & format the data are coming to you in; however, i'll assume a beautifully normalised, deeply nested hierarchical xml (i.e. pretty much the worst case for streaming, right? ... pega bix?)
i propose a partial solution that could allow you to sidestep the limitation of your not being able to control how your client interacts with the http data server -
- deploy your own webserver, in whatever contemporary tech you please (which you do control) - your local server will sit in front of your locally cached copy of the data
- periodically download the output of the webservice using a built-in http querying library, a command-line util such as aria2c, curl, wget et al., an etl (or whatever you please) directly onto a local device-backed .xml file - this happens as often as it needs to
- point your rest client to your own-hosted 127.0.0.1/modern_gigabyte_large/get... 'smart' server, instead of the old api.vendor.com/last_tested_on_megabytes/get... server
some thoughts:
- you might need to refactor your data model to indicate that the xml webservice data that you and your clients are consuming was dated at the last successful run^ (i.e. update this date when the next ingest process completes)
- it would be theoretically possible for you to transform the underlying xml on the way through to better yield records in a streaming fashion to your webservice client (if you're not already doing this), but this would take effort - i could discuss this more if a sample of the data structure was provided
- all of this work can run in parallel to your existing application, which continues on your last version of the successfully processed 'old data' until the next version of 'new data' is available
^ in trade you will now need to manage a 'sliding window' of data files, where each 'result' is a specific instance of your app downloading the webservice data and storing it on disc, then successfully ingesting it into your model:
- the last (two?) good result(s), compressed (in my experience, gigabytes of xml packs down a helluva lot)
- the next pending/provisional result while you're streaming to disc/doing an integrity check/ingesting data (this becomes the current 'good' result, and the last 'good' result becomes the 'previous good' result)
- if we assume that you're ingesting into a relational db: the current (and maybe previous) tables with the webservice data loaded into your app, and the next pending table
switching these around becomes a metadata operation, but now your database must store the webservice data at least x2 (or x3 - whatever fits your limitations)
... yes, you don't need to do this, but you'll wish you did after something goes wrong :)
Looks like we're the only ones to use all of their data
this implies that there is some way for you to partition or limit the webservice feed - how are the other clients discriminating so as not to receive the full monty?
You can use in-memory caching techniques OR you can use Java 8 streams. Please see the following link for more info:
https://www.conductor.com/nightlight/using-java-8-streams-to-process-large-amounts-of-data/
Camel could maybe help you regulate the network load between the REST producer and the consumer?
You might for instance introduce a Camel endpoint acting as a proxy in front of the real REST endpoint, apply some throttling policy, and then forward to the real endpoint:
from("http://localhost:8081/mywebserviceproxy")
    .throttle(...)
    .to("http://myserver.com:8080/myrealwebservice");
http://camel.apache.org/throttler.html
http://camel.apache.org/route-throttling-example.html
My 2 cents,
Bernard.
If you have enough memory, maybe you can use an in-memory data store like Redis.
When you get data from your REST endpoint, you can save it into a Redis list (or any other data structure that is appropriate for you).
Your consumer will consume data from the list.
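A minimal sketch of that producer/consumer idea, assuming the Jedis client and a local Redis instance; the key name and chunk values are placeholders:

import java.util.List;
import redis.clients.jedis.Jedis;

public class RedisBufferDemo {
    public static void main(String[] args) {
        try (Jedis producer = new Jedis("localhost", 6379);    // assumed local Redis
             Jedis consumer = new Jedis("localhost", 6379)) {
            // Producer side: push each downloaded chunk onto a list.
            producer.rpush("rest-buffer", "chunk-1", "chunk-2");
            // Consumer side: block until a chunk is available, then process it.
            List<String> item = consumer.blpop(0, "rest-buffer");   // returns [key, value]
            System.out.println("Processing " + item.get(1));
        }
    }
}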

Most efficient java way to test 300,000+ URLs [duplicate]

This question already has answers here:
Preferred Java way to ping an HTTP URL for availability
(6 answers)
Closed 9 years ago.
I'm trying to find the most efficient way to test 300,000+ URLs in a database to basically check if the URLs are still valid.
Having looked around the site I've found many excellent answers and am now using something along the lines of:
Read URL from file....
Test URL:
final URL url = new URL("http://" + address);
final HttpURLConnection urlConn = (HttpURLConnection) url.openConnection();
urlConn.setConnectTimeout(1000 * 10);
urlConn.connect();
urlConn.getResponseCode(); // Do something with the code
urlConn.disconnect();
Write details back to file....
So a couple of questions:
1) Is there a more efficient way to test URLs and get response codes?
2) Initially I am able to test about 50 URLs per minute, but after 5 or so minutes things really slow down - I imagine there are some resources I'm not releasing, but I'm not sure what.
3) Certain URLs (e.g. www.bhs.org.au) will cause the above to hang for minutes (not good when I have so many URLs to test), even with the connect timeout set. Is there any way I can tighten this up?
Thanks in advance for any help; it's been quite a few years since I've written any code and I'm starting again from scratch :-)
By far the fastest way to do this would be to use java.nio to open a regular TCP connection to your target host on port 80. Then, simply send it a minimal HTTP request and process the result yourself.
The main advantage of this is that you can have a pool of 10 or 100 or even 1000 connections open and loading at the same time rather than having to do them one after the other. With this, for example, it won't matter much if one server (www.bhs.org.au) takes several minutes to respond. It'll simply hog one of your many connections in the pool, but others will keep running.
You could also achieve that same thing with a little more overhead but a lot less complex coding by using a Thread Pool to run many HttpURLConnections (the way you are doing it now) in parallel in multiple threads.
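A minimal sketch of the thread-pool variant; the pool size, timeouts, and use of HEAD below are illustrative choices, not requirements:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class UrlChecker {
    public static void main(String[] args) throws Exception {
        List<String> addresses = List.of("www.example.com", "www.example.org"); // would come from your file
        ExecutorService pool = Executors.newFixedThreadPool(100);               // 100 concurrent checks
        for (String address : addresses) {
            pool.submit(() -> {
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL("http://" + address).openConnection();
                    conn.setConnectTimeout(10_000);
                    conn.setReadTimeout(10_000);       // also bound the time spent waiting for the response
                    conn.setRequestMethod("HEAD");     // we only need the status code
                    int code = conn.getResponseCode();
                    System.out.println(address + " -> " + code);
                    conn.disconnect();
                } catch (Exception e) {
                    System.out.println(address + " -> failed: " + e.getMessage());
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}

A slow host like the one mentioned above only ties up one of the 100 workers, so the rest of the URLs keep being checked.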
This may or may not help, but you might want to change your request method to HEAD instead of using the default, which is GET:
urlConn.setRequestMethod("HEAD");
This tells the server that you do not really need a response back, other than the response code.
The article What Is a HTTP HEAD Request Good for describes some uses for HEAD, including link verification:
[Head] asks for the response identical to the one that would correspond to a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.... This can be used for example for creating a faster link verification service.

How can I make my android app receive/send data faster?

I have an app that needs to transfer data back and forth with a server, but the speed is not satisfactory right now. The main part is that I'm receiving and parsing JSON data (about 200 characters long) over 3G from a server, and the fastest it will ever do the task is about 5 seconds, but sometimes it will take long enough to time out (upwards of 30 seconds). My server is a Rackspace cloud server.
I thought I was following best practices, but it can't be so with these kinds of speeds. I am using AsyncTask and the same global HttpClient variable for everything.
Can you help me find a better way?
I've thought about these options:
using TCP instead of HTTP
encoding the data to try to reduce the size (not sure how this would work)
I don't know a lot about TCP, but it seems like it would have less overhead. What would be the pros and cons of using TCP instead of HTTP? Is it practical for a cell phone to do?
Thanks
FYI - once I solve the problem I'll accept the answer that's the most helpful. So far I've received some really great answers.
EDIT: I made it so that I can see the progress as it downloads, and I've noticed that it stays at 0% for a long time and then quickly jumps to 100% -- does anyone have any ideas in light of this new info? It may be relevant that I'm using a Samsung Epic with Froyo.
Try using GZIP to compress the data being sent. Not a code-complete example, but it should get you on the right path.
Rejinderi is right; GSON rocks.
HttpGet getRequest = new HttpGet(url);
getRequest.addHeader("Accept-Encoding", "gzip");           // tell the server we can handle gzip

HttpResponse response = httpClient.execute(getRequest);    // httpClient is your existing global client
InputStream instream = response.getEntity().getContent();

// Only decompress if the server actually gzipped the response
Header contentEncoding = response.getFirstHeader("Content-Encoding");
if (contentEncoding != null && contentEncoding.getValue().equalsIgnoreCase("gzip")) {
    instream = new GZIPInputStream(instream);
}
HTTP runs on top of TCP, so raw TCP is the lower-level option; if you really need performance, TCP is the one you should use. HTTP is easier to develop with, as there is more support, and it is easier to implement as a developer because it wraps a lot of things up so you don't have to implement them yourself. The overhead in your case shouldn't be that much.
As for the JSON data: check whether parsing it is taking a long time. The standard JSON library Java has is damn slow; take a look here:
http://www.cowtowncoder.com/blog/archives/2009/09/entry_326.html
Debug and see if that is the case. If it's the JSON parse speed, I suggest you use the Gson library. It's cleaner, easy to implement, and much, MUCH faster.
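For reference, parsing with Gson is a one-liner; the payload class below is a made-up example of what a ~200-character JSON response might map to:

import com.google.gson.Gson;

public class GsonDemo {
    // Hypothetical shape of the server's JSON payload.
    static class Payload {
        String id;
        String message;
        long timestamp;
    }

    public static void main(String[] args) {
        String json = "{\"id\":\"42\",\"message\":\"hello\",\"timestamp\":1300000000}";
        Payload p = new Gson().fromJson(json, Payload.class);   // deserialize straight into the POJO
        System.out.println(p.message + " @ " + p.timestamp);
    }
}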
Sounds like you need to profile the application to find out where your bottleneck is. You said you are sending about 200 characters of data. That is minuscule, and I don't see how compression or anything strictly data-related is going to make much of an impact on such a small data set.
I think it is more likely that you have some communication issues, perhaps attempting to establish a new connection for every transfer or something along those lines that is giving you all the overhead.
Profiling is the key to resolving your issues, anything else is a shot in the dark.

HttpURLConnection: What's the deal with having to read the whole response?

My current problem is very similar to this one.
I have a downloadFile(URL) function that creates a new HttpURLConnection, opens it, reads it, and returns the results. When I call this function on the same URL multiple times, the second time around it almost always returns a response code of -1 (but throws no exception!!!).
The top answer in that question is very helpful, but there are a few things I'm trying to understand.
So, if setting http.keepAlive to false solves the problem, what exactly does that indicate? That the server is responding in a way that violates the HTTP protocol? Or, more likely, that my code is violating the protocol in some way? What will the trace tell me? What should I look for?
And what's the deal with this:
You need to read everything from error stream. Otherwise, it's going to confuse next connection and that's the cause of -1.
Does this mean that if the response is some type of error (which response code(s) would that be?), the stream HAS to be fully read? Also, every time I attempt an HTTP request I am basically creating a new connection and then disconnect()ing it at the end.
However, in my case I'm not getting a 401 or whatever. It's always a 200. But my second connection almost always fails. Does this mean there's some other data I should be reading that I'm not (in the same way that the error stream must be fully read)?
Please help shed some light on this? I feel like there's some fundamental http protocol understanding I'm missing.
PS If I were just using the Apache HttpClient, would I not have to deal with all these protocol details? Does it take care of everything for me?
The support for keep-alive in the default HTTP URL handler is very buggy. We always turn it off.
Use Apache HttpClient with a pooled connection manager if you want keep-alive. If you don't want to change your code, you can use another handler like this one:
http://www.innovation.ch/java/HTTPClient/
If your second connection always fails, that means your server doesn't support keep-alive. With keep-alive, the HTTP handler simply leaves the connection open (even if you call disconnect()). The server closes the connection if keep-alive is not supported, but the handler doesn't know that until you make the next request on the connection, so the second connection fails.
Regarding reading the error stream: that only applies if you get non-200 responses.
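To make those two points concrete, here is a minimal sketch (the URL is a placeholder): disable keep-alive for the default handler, and fully drain the error stream on non-200 responses:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveWorkaround {
    public static void main(String[] args) throws Exception {
        // Workaround 1: disable keep-alive for the default URL handler (set before the first connection).
        System.setProperty("http.keepAlive", "false");

        HttpURLConnection conn =
                (HttpURLConnection) new URL("https://example.com/file").openConnection(); // placeholder URL
        int code = conn.getResponseCode();

        // Workaround 2: on errors, read the error stream to the end so the connection can be reused cleanly.
        InputStream stream = (code >= 400) ? conn.getErrorStream() : conn.getInputStream();
        byte[] buf = new byte[8192];
        while (stream != null && stream.read(buf) != -1) {
            // discard (or process) the body; the point is to consume it completely
        }
        if (stream != null) {
            stream.close();
        }
        conn.disconnect();
    }
}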
I think you're probably talking about this HttpURLConnection bug, fixed in Froyo:
http://code.google.com/p/android/issues/detail?id=2939
See that bug for other workarounds. If this isn't the bug you've hit, please raise a bug with a repeatable test case at http://code.google.com/p/android/issues/entry.
