I have an 800 KB JPG file. I try to upload it to S3 and keep getting a timeout error.
Can you please help me figure out what is wrong? 800 KB is rather small for an upload.
Error Message: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
HTTP Status Code: 400
AWS Error Code: RequestTimeout
Long contentLength = null;
System.out.println("Uploading a new object to S3 from a file\n");
try {
    byte[] contentBytes = IOUtils.toByteArray(is);
    contentLength = Long.valueOf(contentBytes.length);
} catch (IOException e) {
    System.err.printf("Failed while reading bytes from %s", e.getMessage());
}
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(contentLength);
s3.putObject(new PutObjectRequest(bucketName, key, is, metadata));
Is it possible that IOUtils.toByteArray is draining your input stream, so that there is no more data left to read from it when the service call is made? In that case, a stream.reset() would fix the issue (assuming the stream supports mark/reset).
But if you're just uploading a file (as opposed to an arbitrary InputStream), you can use the simpler form of AmazonS3.putObject() that takes a File, and then you won't need to compute the content length at all.
http://docs.amazonwebservices.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3.html#putObject(java.lang.String, java.lang.String, java.io.File)
The S3 client will automatically retry such network errors several times. You can tweak how many retries it performs by instantiating it with a ClientConfiguration object.
http://docs.amazonwebservices.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html#setMaxErrorRetry(int)
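A minimal sketch of both suggestions with the v1 SDK, reusing bucketName and key from the question; the local file path is a placeholder:

import java.io.File;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

ClientConfiguration config = new ClientConfiguration();
config.setMaxErrorRetry(5); // retry transient network errors up to 5 times

AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withClientConfiguration(config)
        .build();

// The File overload computes the content length itself, so no metadata is needed.
s3.putObject(bucketName, key, new File("/path/to/photo.jpg"));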
If your endpoint is behind a VPC, it will also silently error out. You can add a new VPC endpoint for S3:
https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
Related
So, I have a server application that returns ZIP files, and I'm working with huge files (>= 5 GB). I am using the Jersey client to do a GET request against this application, after which I want to extract the ZIP and save it as a folder. This is the client configuration:
Client client = ClientBuilder.newClient();
client.register(JacksonJaxbJsonProvider.class);
client.register(MultiPartFeature.class);
return client;
And here's the code fetching the response from the server:
client.target(subMediumResponseLocation).path("download?delete=true").request()
.get().readEntity(InputStream.class)
My code then goes through a bunch of (unimportant for this question) steps and finally gets to the writing of data.
try (ZipInputStream zis = new ZipInputStream(inputStream)) {
    ZipEntry ze = zis.getNextEntry();
    while (ze != null) {
        String fileName = ze.getName();
        if (fileName.contains(".")) {
            size += saveDataInDirectory(folder, zis, fileName);
        }
        zis.closeEntry();
        ze = zis.getNextEntry();
    }
    zis.closeEntry();
} finally {
    inputStream.close();
}
Now the issue I'm getting is that the ZipInputStream refuses to work. I can debug the application and see that there are bytes in the InputStream, but when it gets to the while(ze != null) check, getNextEntry() returns null on the first entry, resulting in an empty directory.
I have also tried writing the InputStream from the client to a ByteArrayOutputStream using the transferTo method, but I get a Java heap space error saying the array length is too big (even though my heap space settings are Xmx=16gb and Xms=12gb).
My thoughts were that maybe, since the InputStream is lazily loaded by Jersey using the UrlConnector directly, it doesn't react well with the ZipInputStream. Another possible issue is that I'm not using a ByteArrayInputStream for the ZipInputStream.
What would a proper solution for this be (keeping in mind the heap issues)?
OK, so I solved it. Apparently my request was getting a 404 because I added the query parameter inside the path: .path("download?delete=true"). Query parameters need to be passed separately rather than appended to the path.
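A minimal sketch of the corrected call, reusing client and subMediumResponseLocation from above; in Jersey, query parameters go through WebTarget.queryParam() rather than being appended to path():

InputStream inputStream = client.target(subMediumResponseLocation)
        .path("download")
        .queryParam("delete", "true") // sets ?delete=true instead of treating it as part of the path
        .request()
        .get()
        .readEntity(InputStream.class);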
I'm trying to send a JSON object (serialized as a string) to an SQS queue that triggers a Lambda. The SQS message is exceeding the 256 KB maximum size that SQS allows, so I was trying to gzip-compress my message before sending it. Here is how I'm trying to do it:
public static String compress(String str) throws Exception {
    System.out.println("Original String Length : " + str.length());
    ByteArrayOutputStream obj = new ByteArrayOutputStream();
    GZIPOutputStream gzip = new GZIPOutputStream(obj);
    gzip.write(str.getBytes("UTF-8"));
    gzip.close();
    String base64Encoded = Base64.getEncoder().encodeToString(obj.toByteArray());
    System.out.println("Compressed String length : " + base64Encoded.length());
    return base64Encoded;
}
The Lambda that this SQS queue triggers is a Node.js Lambda in which I need to unzip and decode this message. I'm trying to use the zlib library in Node.js to unzip and decode my message like this:
exports.handler = async (event, context) => {
    let msg = null
    event.Records.forEach(record => {
        let { body } = record;
        var buffer = zlib.inflateSync(new Buffer(body, 'base64')).toString();
        msg = JSON.parse(JSON.parse(JSON.stringify(buffer.toString(), undefined, 4)))
    });
}
I'm getting the following error on execution:
{
"errorType": "Error",
"errorMessage": "incorrect header check",
"code": "Z_DATA_ERROR",
"errno": -3,
"stack": [
"Error: incorrect header check",
" at Zlib.zlibOnError [as onerror] (zlib.js:180:17)",
" at processChunkSync (zlib.js:429:12)",
" at zlibBufferSync (zlib.js:166:12)",
" at Object.syncBufferWrapper [as unzipSync] (zlib.js:764:14)",
" at /var/task/index.js:12:19",
" at Array.forEach (<anonymous>)",
" at Runtime.exports.handler (/var/task/index.js:10:17)",
" at Runtime.handleOnce (/var/runtime/Runtime.js:66:25)"
]
}
Can someone tell me how I can approach this problem in a better way? Is there a better way to compress the string in Java? Is there a better way to decompress, decode, and parse the JSON in Node.js?
256 KB per message is huge. If you send millions of messages like this, it will be extremely hard to process them all; think about the replication that SQS has to do internally.
SQS is not a database and it's not meant to store a lot of text.
I assume that your message contains a lot of business information in addition to some technical message-identification parameters.
Usually this points to a design problem in the system. So you can try the following:
Think about a storage layer for the business information. It should not be SQS; it can be anything: Mongo, Postgres/MySQL, maybe Elasticsearch, or even Redis in some cases. Since the application is in the cloud, AWS has many additional storage engines (S3, DynamoDB, Aurora, etc.), so find the one that suits your use case best. Probably S3 is the way to go if you only need a document by some key (path), but the decision is beyond the scope of this question.
The "sender" of the message stores the business-related information in this storage and sends a short message to SQS containing a pointer (URL, foreign key, application-specific document ID, whatever) to the document, so that the receiver can fetch the document from the storage once it gets the SQS message.
With this approach you don't need to zip anything; the messages will be short.
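A minimal sketch of that pattern with S3 as the document store, using the AWS SDK for Java 1.x; the bucket name and queue URL are placeholders, and largeJsonString stands for the serialized JSON from the question:

import java.util.UUID;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

// Store the large JSON document in S3 under a generated key.
String key = "payloads/" + UUID.randomUUID();
s3.putObject("my-payload-bucket", key, largeJsonString);

// Send only the pointer through SQS; the Lambda fetches the document by key.
sqs.sendMessage("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue", key);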
The problem is that you are sending a gzip stream and then trying to read a zlib stream. They are two different formats. Either send gzip and receive gzip, or send zlib and receive zlib, e.g. zlib.gunzipSync on the receive side.
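The one-line fix on the receive side is to call zlib.gunzipSync instead of zlib.inflateSync. If you would rather keep inflateSync in the Lambda, a minimal sketch of a zlib-producing variant of the Java compress method, using java.util.zip.DeflaterOutputStream (which writes the zlib wrapper that inflateSync expects), could look like this:

import java.io.ByteArrayOutputStream;
import java.util.Base64;
import java.util.zip.DeflaterOutputStream;

public static String compressZlib(String str) throws Exception {
    ByteArrayOutputStream obj = new ByteArrayOutputStream();
    DeflaterOutputStream deflater = new DeflaterOutputStream(obj); // zlib format, readable by zlib.inflateSync
    deflater.write(str.getBytes("UTF-8"));
    deflater.close();
    return Base64.getEncoder().encodeToString(obj.toByteArray());
}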
I am making a request to a server which I have no control over. It returns a downloadable response. I am downloading the file in the client as follows:
File backupFile = new File("Download.zip");
CloseableHttpResponse response = ...;
try (InputStream inputStream = response.getEntity().getContent()) {
    try (FileOutputStream fos = new FileOutputStream(backupFile)) {
        int inByte;
        while ((inByte = inputStream.read()) != -1) {
            fos.write(inByte);
        }
    }
}
I am getting the following exception:
Premature end of Content-Length delimited message body (expected: 548846; received: 536338)
at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:142)
at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:120)
Premature end of Content-Length delimited message body (expected:
I went through the above SO question, but that question and its answers address a more serious bug, where the server doesn't deliver what it promised. Also, I am not closing the client before the download is complete.
In my case, the file (a ZIP file) is perfectly fine; it's just that the server's estimate of its size is off by a small fraction.
I reported this to the server maintainer, but I was wondering whether there is a way to ignore this exception, assuming I verify the downloaded file myself.
Assuming the file is complete as is, you can simply catch the exception, flush and close the output stream, and the file will be written in its entirety as the server sent it. Of course, if the file is only partially transferred, you won't be able to open it as a ZIP file in any context, so do be sure that the content is correct as it is being sent and that the problem really is only the Content-Length value.
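A minimal sketch of that, reusing response and backupFile from the question and assuming the truncation surfaces as org.apache.http.ConnectionClosedException (what ContentLengthInputStream throws in recent HttpClient 4.x; if your version throws a different subclass, catch IOException around the loop instead):

import java.io.FileOutputStream;
import java.io.InputStream;
import org.apache.http.ConnectionClosedException;

try (InputStream inputStream = response.getEntity().getContent();
     FileOutputStream fos = new FileOutputStream(backupFile)) {
    int inByte;
    try {
        while ((inByte = inputStream.read()) != -1) {
            fos.write(inByte);
        }
    } catch (ConnectionClosedException e) {
        // The server's Content-Length was slightly off; keep what was received.
        System.err.println("Ignoring truncated body: " + e.getMessage());
    }
    fos.flush();
}
// Afterwards, verify the result yourself, e.g. by opening it with java.util.zip.ZipFile.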
I'm trying to extract specific files from an archive on Amazon S3 without having to read all the bytes, because the archives can be huge and I only need 2 or 3 files out of them.
I'm using the AWS Java SDK. Here's the code (exception handling skipped):
AWSCredentials credentials = new BasicAWSCredentials("accessKey", "secretKey");
AWSCredentialsProvider credentialsProvider = new AWSStaticCredentialsProvider(credentials);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion(Regions.US_EAST_1).withCredentials(credentialsProvider).build();
S3Object object = s3Client.getObject("bucketname", "file.tar.gz");
S3ObjectInputStream objectContent = object.getObjectContent();
TarArchiveInputStream tarInputStream = new TarArchiveInputStream(new GZIPInputStream(objectContent));
TarArchiveEntry currentEntry;
while ((currentEntry = tarInputStream.getNextTarEntry()) != null) {
    if (currentEntry.getName().equals("1/foo.bar") && currentEntry.isFile()) {
        FileOutputStream entryOs = new FileOutputStream("foo.bar");
        IOUtils.copy(tarInputStream, entryOs);
        entryOs.close();
        break;
    }
}
objectContent.abort(); // Warning at this line
tarInputStream.close(); // warning at this line
When I use this method, it gives a warning that not all the bytes from the stream were read, which is intentional on my part.
WARNING: Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
Is it necessary to drain the stream and what would be the downsides of not doing it? Can I just ignore the warning?
You don't have to worry about the warning; it only tells you that the HTTP connection will be aborted instead of being reused, and that there may be data left on it that you will miss. Since close() delegates to abort(), you get the warning with either call.
Note that you are not guaranteed to avoid reading most of the archive anyway: if the files you are interested in are located towards the end, you will have streamed almost everything before reaching them.
S3 supports ranged GET requests, so if you can influence the format of the archive, or generate some metadata (for example, per-entry byte offsets) when it is created, you could skip ahead or request only the bytes for the file you are interested in.
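A minimal sketch of such a ranged GET with the v1 SDK, reusing s3Client from the question; the byte offsets are hypothetical values you would have recorded as metadata when the archive was created, and this only pays off if entries are independently readable (a plain .tar.gz is one continuous gzip stream, so a range into the middle of it cannot be decompressed on its own):

import java.io.FileOutputStream;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;

long entryStart = 1_048_576L; // hypothetical offset of the entry's data, from your metadata
long entryEnd = 1_310_719L;   // hypothetical inclusive end byte

GetObjectRequest rangedRequest = new GetObjectRequest("bucketname", "file.tar") // e.g. an uncompressed tar
        .withRange(entryStart, entryEnd);
S3Object part = s3Client.getObject(rangedRequest);
try (FileOutputStream out = new FileOutputStream("foo.bar")) {
    IOUtils.copy(part.getObjectContent(), out); // downloads only the requested byte range
}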
I have an application that picks up a file from a dedicated path on my device and sends it to a server.
I am using the ksoap2 library to call a .NET web service that sends my file to the server, and I am using Base64 encoding.
I can send a file of at most 1 MB without encryption and 850 KB with encryption. The encryption algorithm I am using is 3DES.
If I try to send files larger than that, I get the following error: Caused by: java.lang.OutOfMemoryError at org.ksoap2.transport.HttpTransportSE.call(HttpTransportSE.java:121)
My test environment: Android emulator with API level 8, Android 2.2, and 512 MB of SD card memory.
Am I missing something? Would using a BLOB help in this scenario?
Is there any way to send larger files? I have heard of sending data in chunks but have no idea how to do that; any link or sample code would really help.
I get the file data using the following code (vURL is the path where the file is stored):
public byte[] getFileData(String vURL) throws IOException {
    File file = new File(vURL);
    int size = (int) file.length();      // size of the file itself, not of the path string
    byte[] fileContent = new byte[size];
    FileInputStream instream = new FileInputStream(file);
    instream.read(fileContent);          // reads the whole file into memory at once
    instream.close();
    return fileContent;
}
I encode the data using the following code:
byte[] res = Utilities.getFileData(file);
String mdata = android.util.Base64.encodeToString(res, android.util.Base64.DEFAULT);
Then I call the server-side web service and send the data to the server:
SoapObject request = new SoapObject(nameSpace, methodName);
if (fileData != null && !fileData.equals("")) {
    request.addProperty("vBLOBData", fileData);
}
SoapSerializationEnvelope envelope = getEnvelope(request);
HttpTransportSE ht = new HttpTransportSE(url); // ,3000
ht.debug = true;
ht.call(soapAction, envelope);
response = (envelope.getResponse()).toString();
I am not able to send file data larger than 1 MB.
Thanks in advance
I don't know exactly what you are trying to achieve, but why don't you divide your file into parts and send each part individually, either in a loop or from an Android background service with a timer that sends one part every x seconds?
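A minimal sketch of that chunking idea, reusing nameSpace, url, soapAction, getEnvelope(), and the Base64 string mdata from the question; the service method name "UploadChunk" and its chunkIndex/totalChunks properties are hypothetical and must match whatever the .NET web service actually exposes:

// Split the Base64-encoded file data into pieces and send one request per piece.
int chunkSize = 256 * 1024; // 256 KB of Base64 text per request (tune to taste)
int totalChunks = (mdata.length() + chunkSize - 1) / chunkSize;

for (int i = 0; i < totalChunks; i++) {
    String chunk = mdata.substring(i * chunkSize, Math.min((i + 1) * chunkSize, mdata.length()));

    SoapObject request = new SoapObject(nameSpace, "UploadChunk"); // hypothetical method name
    request.addProperty("vBLOBData", chunk);
    request.addProperty("chunkIndex", i);
    request.addProperty("totalChunks", totalChunks);

    SoapSerializationEnvelope envelope = getEnvelope(request);
    HttpTransportSE ht = new HttpTransportSE(url);
    ht.call(soapAction, envelope); // the server reassembles the chunks by index
}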
Try setting your buffer to 1024 before sending it; the issue is about the limit of your buffer size and your RAM.
Use the GZIP compression algorithm to compress the large file on the mobile side, and use the same algorithm to decompress it on the server.
Using MultipartEntity can also help with uploading large file content.
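A minimal sketch of the compression step, assuming it runs in place of the encoding step from the question (res is the byte array returned by Utilities.getFileData, and the server must gunzip after Base64-decoding):

import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

// Gzip the raw file bytes before Base64-encoding them for the SOAP request.
ByteArrayOutputStream compressed = new ByteArrayOutputStream();
GZIPOutputStream gzip = new GZIPOutputStream(compressed);
gzip.write(res);
gzip.close();

String mdata = android.util.Base64.encodeToString(compressed.toByteArray(), android.util.Base64.DEFAULT);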
If compression doesn't help, as mentioned in the previous post, you will probably have to segment the message yourself.