I want to get the image size from the URL.
My code works locally, but on an AWS Ubuntu server it returns -1.
What's the problem?
URL url = new URL("https://www.bithumb.com/resources/img/comm/sp_coin.png?v=180510");
int image_size = url.openConnection().getContentLength();
System.out.println("image size:: " + image_size);
If the server doesn't respond with a valid Content-Length header, getContentLength returns -1. As an alternative, you can always get the size of the returned data:
int size = IOUtils.toByteArray(url.openStream()).length;
This uses Apache Commons IO's IOUtils.
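A combined sketch of the two approaches (method and class names are mine, not from the answers): trust getContentLength() when the server sends a usable header, otherwise read the stream to the end and count the bytes yourself, which is essentially what IOUtils.toByteArray does, minus the extra copy:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ContentLengthFallback {

    // Returns the reported length if valid; otherwise reads the stream
    // to the end and counts the bytes (what IOUtils.toByteArray does).
    static long sizeOf(int reportedLength, InputStream in) throws IOException {
        if (reportedLength >= 0) {
            return reportedLength; // server sent a usable Content-Length
        }
        long count = 0;
        byte[] buffer = new byte[8192];
        int n;
        while ((n = in.read(buffer)) != -1) {
            count += n;
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[12345];
        // Header present: no need to read the body at all.
        System.out.println(sizeOf(data.length, new ByteArrayInputStream(data)));
        // Header missing (-1): fall back to counting the bytes.
        System.out.println(sizeOf(-1, new ByteArrayInputStream(data)));
    }
}
```

In the real case you would pass `url.openConnection().getContentLength()` and `url.openStream()`; note that the fallback downloads the whole image just to learn its size.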
What's the problem?
Difficult to say. However, ultimately it is up to the implementation of the web service (not Ubuntu or AWS) to include a content length in the response.
So if you want to find out why (and fix it) you need to examine the implementation of the (your?) web service, and how it is creating the responses.
As @jspcal points out, you can work around the problem. But in order to do that you would need to (at least) read the entire image from the response data stream. Whether this is a viable / efficient solution depends on how / when you intend to use the content length.
Related
I am building a JSP application. I am trying to send a screenshot image from the page to the servlet using base64 encoding. The resulting string is super long, around 100k characters. So when I POST it to the servlet and read it with getParameter, I get null there.
Is there a way to send it in chunks, or am I missing something?
I found this, which may be useful for you:
Passing a huge String as post parameter to a servlet
Namit Rana:
We used GZip compression/decompression to lower the size of the string, and it worked effectively.
So, the .net service compressed the huge string and sent it to our Servlet. We then decompressed it on our server.
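The compress/decompress step that answer describes can be sketched with java.util.zip (class and method names here are mine; a real service would presumably also base64-encode the compressed bytes before posting them):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipStringCodec {

    static byte[] compress(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes(StandardCharsets.UTF_8));
        } // close() finishes the gzip stream and writes the trailer
        return bos.toByteArray();
    }

    static String decompress(byte[] compressed) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int n;
            while ((n = gz.read(buffer)) != -1) {
                bos.write(buffer, 0, n);
            }
            return new String(bos.toByteArray(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        String big = "screenshot-base64-".repeat(10_000); // ~180k characters
        byte[] packed = compress(big);
        System.out.println("original chars: " + big.length() + ", gzipped bytes: " + packed.length);
        System.out.println(decompress(packed).equals(big)); // true
    }
}
```

Base64 screenshot data is highly repetitive, so the ratio is usually dramatic; the decompressing side just runs the same streams in reverse.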
I need to implement a deployment pipeline, and at the end of the pipeline we upload a file, in this case to Huawei's app store. For a file of more than 5 megabytes we have to use a chunked API, and I'm not familiar with how chunked uploads work. Can someone give me an implementation guideline, preferably in Java, of how to implement such a mechanism? The API parameters are as follows:
Edit :
In response to the comment below, let me clarify my question. Looking for references on how to do a chunked request, I found that libraries such as httpclient and okhttp simply set the chunk flag to true and seem to hide the details from the library's client:
https://www.java-tips.org/other-api-tips-100035/147-httpclient/1359-how-to-use-unbuffered-chunk-encoded-post-request.html
Yet, the input parameters of the API seem to expect that I manage the chunks manually, since it expects a ChunkSize and a sequence number. I'm thinking I might need to use the plain Java HTTP interface to work with the API, yet I failed to find any good source to get me started. If anyone could give me a reference or implementation guidance, that would definitely help.
More updates :
I tried to manually chunk my file into several parts, each 1 megabyte in size. Then I thought I could call the API for every chunk, using multipart/form-data. But the server side always closes the connection before writing even begins, causing: Connection reset by peer: socket write error.
It shouldn't be a proxy issue, since I have set it up and I could get the token, URL and auth code without a problem.
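Since the HTTP libraries hide transfer-encoding details, an API that wants a ChunkSize plus a sequence number means slicing the file yourself and sending one request per slice. A sketch of just the slicing (the upload call itself is omitted; the endpoint and exact parameter names would come from the Huawei API documentation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FileChunker {

    // Splits the file's bytes into fixed-size chunks; the last may be shorter.
    static List<byte[]> split(byte[] fileBytes, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < fileBytes.length; offset += chunkSize) {
            int end = Math.min(offset + chunkSize, fileBytes.length);
            chunks.add(Arrays.copyOfRange(fileBytes, offset, end));
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] file = new byte[5 * 1024 * 1024 + 123]; // pretend 5 MB and change
        int chunkSize = 1024 * 1024;                   // 1 MB per chunk
        List<byte[]> chunks = split(file, chunkSize);
        for (int seq = 0; seq < chunks.size(); seq++) {
            // Here you would POST chunks.get(seq) as multipart/form-data,
            // passing chunkSize and the sequence number (seq or seq + 1,
            // depending on whether the API counts from 0 or 1).
            System.out.println("chunk " + seq + ": " + chunks.get(seq).length + " bytes");
        }
    }
}
```

For large files you would read each slice from a RandomAccessFile or FileChannel instead of holding the whole file in one byte array, but the offset arithmetic is the same.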
File segmentation: suppose a file of several gigabytes is uploaded to the server. If the simplest flow (upload, receive, process, succeed) works for you, I can only say your server is very good; but even with a good server, such an upload is usually not allowed. So we have to find another way.
First, we solve the large-file problem: cut the file into pieces of a few megabytes each, send them to the server one at a time, and save them. Name these pieces with the source file's MD5 plus an index (some people use a UUID plus an index instead; the difference between the two is described below). While uploading these small files to the server separately, it is best to record them in a database:
(1) When the first block finishes uploading, write the source file's name, type, MD5, upload date, address, and an "unfinished" status to a table; when the splicing completes, change the status to "finished". Call this the file table.
(2) After each block uploads, save a record to the database: the block's MD5 + index name derived from the source file, the MD5 of the block itself (this is a key point), the upload time, and the file address. Call this the file_tem table.
Instant-upload ("second transmission") function: many online storage services implement this. At the start of an upload, send an Ajax request asking whether the file to be uploaded already exists. HTML5 provides a way to compute the file's MD5; use Ajax to ask whether that MD5 exists on the server and whether its status is "finished". If it exists, also verify that the stored file is still present. If both conditions hold, return the "exists" status to the front end, and you can proudly tell the customer the upload completed in seconds.
Here is the link:
https://blog.csdn.net/weixin_42584752/article/details/80873376
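The MD5 + index naming scheme from the answer above can be sketched like this (a hash of the whole source file identifies the upload and the index orders the pieces; UUID + index would just replace the hash with a random ID, losing the duplicate-detection that makes instant upload possible):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ChunkNaming {

    // Hex-encoded MD5 of the whole source file.
    static String md5Hex(byte[] data) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Name for chunk number `index` of the given file, e.g. "<md5>_0".
    static String chunkName(byte[] fileBytes, int index) throws NoSuchAlgorithmException {
        return md5Hex(fileBytes) + "_" + index;
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] file = "example file contents".getBytes();
        System.out.println(chunkName(file, 0));
        System.out.println(chunkName(file, 1));
        // The same file always yields the same prefix, which is what lets
        // the server answer the "does this file already exist?" Ajax check.
    }
}
```

In practice the browser computes the hash (the "H5" method the answer mentions) so the server can be queried before any bytes are sent.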
I just realized that my base64-encoded "Authentication" header can't be read
with request.getHeader("Authentication").
I found this post explaining that it's a security feature in URLConnection:
getRequestProperty("Authorization") always returns null
I don't know why, but it seems to be true for request.getHeader as well.
How can I still get this header if I don't want to switch to other libraries?
I was searching through https://fossies.org/dox/apache-tomcat-6.0.45-src/catalina_2connector_2Request_8java_source.html#l01947 and found a section where the restricted-headers list is applied if Globals.IS_SECURITY_ENABLED is set.
Since I'm working on a reverse proxy and only need to pass requests/responses through, I simply set System.setSecurityManager(null);. For my case this might be a valid solution, but if you want to use authentication there is no reason to use this workaround.
My bad, it does work with https now.
The accepted solution did not work for me; it may have something to do with different runtime environments.
However, I managed to come up with a working snippet that accesses the underlying MessageHeader collection via reflection and extracts the "Authorization" header value.
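The reflection approach can be sketched generically like this. The concrete field name is an assumption: in Oracle/OpenJDK the headers live in internal sun.net.www.MessageHeader fields of HttpURLConnection (e.g. a field named "requests"), and those names are not public API and can differ between runtimes, which likely explains why such snippets work in some environments and not others. The Demo class below is a stand-in target so the helper can be shown without network code:

```java
import java.lang.reflect.Field;

public class PrivateFieldReader {

    // Reads a private field by name, walking up the class hierarchy.
    static Object readPrivateField(Object target, String fieldName) throws Exception {
        Class<?> cls = target.getClass();
        while (cls != null) {
            try {
                Field f = cls.getDeclaredField(fieldName);
                f.setAccessible(true); // may need --add-opens on Java 9+ for JDK classes
                return f.get(target);
            } catch (NoSuchFieldException e) {
                cls = cls.getSuperclass(); // keep looking in the parent class
            }
        }
        throw new NoSuchFieldException(fieldName);
    }

    // Stand-in for the connection object; against a real HttpURLConnection
    // you would pass the connection and an internal field name such as
    // "requests" (verify the name against your runtime's sources).
    static class Demo {
        private final String authorization = "Basic dXNlcjpwYXNz";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readPrivateField(new Demo(), "authorization"));
    }
}
```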
I'm handling a web service and need some help. The process is that a PDF will be encoded with base64 and sent to my web service. I will then decode it back into a PDF and place it in the appropriate folder. The issue is that the request needs to contain the actual giant base64 string. The first question is: is this even possible? Second, I am using Postman to make the requests and was wondering how to even copy the base64 string into it; there seems to be a string limit. Any help would be greatly appreciated.
I don't know about Postman, but I can suggest using JAX-RS and implementing a ReaderInterceptor and a WriterInterceptor using Base64.Decoder#wrap and Base64.Encoder#wrap respectively.
Otherwise, maybe Postman has similar features?
Use streams like these as much as possible to reduce memory usage.
Tutorial:
https://jersey.java.net/documentation/latest/filters-and-interceptors.html#d0e9806
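The stream wrapping at the core of such an interceptor can be shown without the JAX-RS plumbing. Base64.Decoder#wrap and Base64.Encoder#wrap encode and decode on the fly, so the full base64 string never has to sit in memory at once (class and method names below are mine):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Base64;

public class Base64Streams {

    // Encodes raw bytes to base64 while writing, as a WriterInterceptor would.
    static byte[] encode(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (OutputStream b64 = Base64.getEncoder().wrap(bos)) {
            b64.write(raw);
        } // close() flushes the final padding characters
        return bos.toByteArray();
    }

    // Decodes base64 while reading, as a ReaderInterceptor would.
    static byte[] decode(byte[] base64) throws IOException {
        try (InputStream b64 = Base64.getDecoder().wrap(new ByteArrayInputStream(base64))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int n;
            while ((n = b64.read(buffer)) != -1) {
                bos.write(buffer, 0, n);
            }
            return bos.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] pdfBytes = "%PDF-1.4 pretend this is a large PDF".getBytes();
        byte[] encoded = encode(pdfBytes);
        System.out.println(new String(encoded));
        System.out.println(new String(decode(encoded))); // round-trips
    }
}
```

Inside a real interceptor you would wrap the context's stream and hand it back, instead of buffering into byte arrays as this demo does.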
Alright, it just seems to be an issue with Postman. When you paste a string of that size, it shows errors and only displays a certain length per line, but it still sends the entire string. I was able to receive it and decode it. Thank you all for your help!
I am struggling with the maxPostSize parameter in Tomcat's server.xml.
I increased it from the default 2 MB to 6 MB to solve a problem we had; after one week the problem appeared again, and I increased it to 10 MB. I am not sure it is a good idea to use an unlimited size.
I am trying to find a way to check this size at runtime for specific POST requests (to find the source of the problem). Is there any way to check this parameter's size at runtime?
You can create a Filter, check whether it's a POST request, iterate over the request parameters, and sum their sizes. But maybe you should change your application so it does not send so much data in a single request.
You could use HttpServletRequest#getHeader to obtain the Content-Length header, whose value should be the size of the request body. It's not exactly what you want, but it's the easiest thing you can do for debugging purposes.
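The summing that such a Filter would do can be sketched over a plain parameter map (class and method names are mine; the real code would call request.getParameterMap() inside doFilter and log the request URI whenever the total approaches maxPostSize):

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class PostSizeEstimator {

    // Rough size of the posted form data: bytes of every name and value.
    static long estimateSize(Map<String, String[]> params) {
        long total = 0;
        for (Map.Entry<String, String[]> e : params.entrySet()) {
            total += e.getKey().getBytes(StandardCharsets.UTF_8).length;
            for (String value : e.getValue()) {
                total += value.getBytes(StandardCharsets.UTF_8).length;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, String[]> params = new LinkedHashMap<>();
        params.put("id", new String[] {"42"});
        params.put("payload", new String[] {"x".repeat(5000)});
        // In a Filter you would log the request URI alongside this number
        // to find the endpoint responsible for the oversized posts.
        System.out.println("approx. POST size: " + estimateSize(params) + " bytes");
    }
}
```

This undercounts slightly (it ignores the `=`/`&` separators and URL-encoding overhead), but it is enough to spot which requests are blowing past the limit.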