File download by flushing the response OutputStream - Java

I am trying to download an 800 MB file from Google Drive in a streamed fashion: I fetch a range of bytes from Google Drive, write it to my response output stream, and flush. Here is the code for it:
private static final long CHUNK_SIZE = 10_000_000L; // ~10 MB per range request

public void downloadFileAsStream(String accessToken, String fileId,
        HttpServletResponse response) throws Exception {
    Credential credential = new GoogleCredential().setAccessToken(accessToken);
    Drive service = new Drive.Builder(HTTP_TRANSPORT, JSON_FACTORY, credential).build();
    File file;
    try {
        file = service.files().get(fileId).setFields("name, size").execute();
    } catch (Exception ex) {
        logger.error("Exception occurred while getting file from google drive", ex);
        throw ex;
    }
    long fileSize = file.getSize();
    OutputStream ros = response.getOutputStream();
    for (long i = 0; i < fileSize; i += CHUNK_SIZE) {
        byte[] fileRangeBytes = getBytes(service, accessToken, fileId, i, CHUNK_SIZE);
        ros.write(fileRangeBytes);
        ros.flush();
    }
    ros.close();
}
private byte[] getBytes(Drive drive, String accessToken, String fileId, long position, long byteCount) throws Exception {
    byte[] receivedByteArray = null;
    String downloadUrl = "https://www.googleapis.com/drive/v3/files/" + fileId + "?alt=media&access_token="
            + accessToken;
    try {
        com.google.api.client.http.HttpRequest httpRequestGet =
                drive.getRequestFactory().buildGetRequest(new GenericUrl(downloadUrl));
        // request only the [position, position + byteCount - 1] byte range
        httpRequestGet.getHeaders().setRange("bytes=" + position + "-" + (position + byteCount - 1));
        com.google.api.client.http.HttpResponse response = httpRequestGet.execute();
        InputStream is = response.getContent();
        receivedByteArray = IOUtils.toByteArray(is);
        response.disconnect();
    } catch (IOException e) {
        e.printStackTrace();
        throw e;
    }
    return receivedByteArray;
}
The problem is that the file does not start downloading in the browser immediately, chunk by chunk. Instead, my application keeps waiting until the whole file has been written to the response's output stream. So why does nothing get flushed to the browser in my case, even though I call responseOutputStream.flush() inside the for loop, as in this question: Java file download hangs?
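A servlet-level note that may be relevant here: many containers buffer the response and send nothing until the buffer fills or the response is committed, particularly when no Content-Length has been set. Below is a minimal sketch of the same loop with the headers declared up front and response.flushBuffer() used instead of only flushing the stream; this is an assumption about the container's buffering behavior, not a verified fix.

response.setContentType("application/octet-stream");
response.setHeader("Content-Disposition", "attachment; filename=\"" + file.getName() + "\"");
// Without a Content-Length (or chunked transfer encoding), the container
// may buffer the whole body before sending the first byte to the client.
response.setHeader("Content-Length", String.valueOf(fileSize));
OutputStream ros = response.getOutputStream();
for (long i = 0; i < fileSize; i += CHUNK_SIZE) {
    ros.write(getBytes(service, accessToken, fileId, i, CHUNK_SIZE));
    response.flushBuffer(); // commits the response and flushes the container buffer
}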

Related

Read PDF files placed on one server from another server

I have two nodes in a production environment. I have placed PDF files on one server and want to read them from both servers. When I call the 'file' method directly, the PDF is displayed in the browser, but when I call 'pdfFile' nothing is displayed.
public Resolution file() {
    try {
        final HttpServletRequest request = getContext().getRequest();
        String fileName = (String) request.getParameter("file");
        File file = new File("pdf file directory ex /root/pdffiles/" + fileName);
        getContext().getResponse().setContentType("application/pdf");
        getContext().getResponse().addHeader("Content-Disposition",
                "inline; filename=" + fileName);
        FileInputStream streamIn = new FileInputStream(file);
        BufferedInputStream buf = new BufferedInputStream(streamIn);
        int readBytes = 0;
        ServletOutputStream stream = getContext().getResponse().getOutputStream();
        // read from the file; write to the ServletOutputStream
        while ((readBytes = buf.read()) != -1)
            stream.write(readBytes);
    } catch (Exception exc) {
        LOGGER.logError("reports", exc);
    }
    return null;
}
public Resolution pdfFile() {
    final HttpServletRequest request = getContext().getRequest();
    final HttpClient client = new HttpClient();
    try {
        String fileName = (String) request.getParameter("file");
        final String url = "http://" + serverNameNode1 // the node having the pdf files
                + "/test/sm.action?reports&file=" + fileName;
        final PostMethod method = new PostMethod(url);
        try {
            client.executeMethod(method);
        } finally {
            method.releaseConnection();
        }
    } catch (final Exception e) {
        LOGGER.logError("pdfReports", "error occured2 " + e.getMessage());
    }
    return null;
}
Including the code below after 'client.executeMethod(method);' in the 'pdfFile()' method made it work for me:
buf = new BufferedInputStream(method.getResponseBodyAsStream());
int readBytes = 0;
stream = getContext().getResponse().getOutputStream();
// write to the ServletOutputStream
while ((readBytes = buf.read()) != -1)
    stream.write(readBytes);
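For completeness, here is a consolidated sketch of pdfFile() under the same assumptions (Commons HttpClient 3.x). The key ordering detail is that the body must be copied before method.releaseConnection() runs, and the PDF headers must be set on this node's response as well.

getContext().getResponse().setContentType("application/pdf");
getContext().getResponse().addHeader("Content-Disposition", "inline; filename=" + fileName);
final PostMethod method = new PostMethod(url);
try {
    client.executeMethod(method);
    BufferedInputStream buf = new BufferedInputStream(method.getResponseBodyAsStream());
    ServletOutputStream stream = getContext().getResponse().getOutputStream();
    int readBytes;
    while ((readBytes = buf.read()) != -1)
        stream.write(readBytes);
} finally {
    method.releaseConnection(); // release only after the body has been fully copied
}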

Missing bytes when calling HttpServletRequest.getInputStream() method

I am creating a RESTful web service that accepts any file and saves it to the filesystem. I am using Dropwizard to implement the service and Postman/RESTClient to send the request with data. I am not creating a multipart (form-data) request.
Everything works fine, except that the saved file has its first character missing. Here is my code for calling the service method and saving the file to the filesystem:
Input request:
http://localhost:8080/test/request/Sample.txt
Sample.txt:
Test Content
REST controller:
@PUT
@Consumes(value = MediaType.WILDCARD)
@Path("/test/request/{fileName}")
public Response authenticateDevice(@PathParam("fileName") String fileName, @Context HttpServletRequest request) throws IOException {
    .......
    InputStream inputStream = request.getInputStream();
    writeFile(inputStream, fileName);
    ......
}
private void writeFile(InputStream inputStream, String fileName) {
    OutputStream os = null;
    try {
        File file = new File(this.directory);
        file.mkdirs();
        if (file.exists()) {
            os = new FileOutputStream(this.directory + fileName);
            logger.info("File Written Successfully.");
        } else {
            logger.info("Problem Creating directory. File can not be saved!");
        }
        byte[] buffer = new byte[inputStream.available()];
        int n;
        while ((n = inputStream.read(buffer)) != -1) {
            os.write(buffer, 0, n);
        }
    } catch (Exception e) {
        logger.error("Error in writing to File::" + e);
    } finally {
        try {
            os.close();
            inputStream.close();
        } catch (IOException e) {
            logger.error("Error in closing input/output stream::" + e);
        }
    }
}
The file is saved, but the first character of its content is missing.
Output:
Sample.txt:
est Content
In the output file above, the character 'T' is missing, and this happens for all file formats.
I don't know what I am missing here.
Please help me out on this.
Thank You.
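One observation: the copy loop itself cannot drop a byte, so whatever consumes that first character must run before writeFile. For reference, here is a more defensive sketch of writeFile using try-with-resources and a fixed buffer (inputStream.available() only reports what is readable without blocking, which makes it a fragile buffer size); if a character is still missing with this version, something upstream is reading from the request stream before it is handed over.

private void writeFile(InputStream inputStream, String fileName) {
    File dir = new File(this.directory);
    dir.mkdirs();
    // try-with-resources closes the output stream even on failure
    try (OutputStream os = new FileOutputStream(new File(dir, fileName))) {
        byte[] buffer = new byte[8192]; // fixed size instead of available()
        int n;
        while ((n = inputStream.read(buffer)) != -1) {
            os.write(buffer, 0, n);
        }
        logger.info("File written successfully.");
    } catch (IOException e) {
        logger.error("Error in writing to file::" + e);
    }
}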

How to upload a big file to Google Cloud with Wicket

I am trying to upload a big file to Google Cloud with Wicket. I use the FileUploadField and UploadFile methods. Nevertheless, I can only upload small files (less than 10 KB). If I upload a bigger file, I get an exception (java.security.AccessControlException: access denied): I do not have permission to create a buffer file and write to it.
final FileUploadField FiletoUpload = new FileUploadField("uploadfile", new Model());
form.add(FiletoUpload);
form.add(new Button("upload") {
    @Override
    public void onSubmit() {
        // here we upload
        getRequestCycle().scheduleRequestHandlerAfterCurrent(new IRequestHandler() {
            @Override
            public void respond(IRequestCycle irc) {
                FileUpload uploadedFile = FiletoUpload.getFileUpload();
                HttpServletResponse httpResponse = (HttpServletResponse) irc.getResponse().getContainerResponse();
                InputStream CORPUS = null;
                try {
                    CORPUS = uploadedFile.getInputStream();
                } catch (IOException ex) {
                    Logger.getLogger(Upload.class.getName()).log(Level.SEVERE, null, ex);
                }
                try {
                    doGet(null, httpResponse);
                } catch (IOException ex) {
                    Logger.getLogger(Upload.class.getName()).log(Level.SEVERE, null, ex);
                }
                uploadedFile.closeStreams();
            }

            @Override
            public void detach(IRequestCycle irc) {
            }
        });
    }
});
I do not use the blobstore; I use com.google.appengine.tools.cloudstorage.*. I open a channel to write to the cloud.
public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    resp.setContentType("text/plain");
    GcsService gcsservice = GcsServiceFactory.createGcsService();
    GcsFilename uploadfile = new GcsFilename(BUCKETNAME, FILENAME);
    GcsFileOptions optionsBuilder = new GcsFileOptions.Builder()
            .mimeType("text/plain")
            .acl("bucket-owner-full-control")
            .build();
    GcsOutputChannel writechannel = gcsservice.createOrReplace(uploadfile, optionsBuilder);
    buffer = new StringBuffer();
    int char_read = 0;
    long i;
    for (i = 0; i < CORPUSsize; i++) {
        char_read = CORPUS.read(); // read the corpus of the file
        char mychar = (char) char_read;
        buffer.append(mychar);
    }
    writechannel.write(ByteBuffer.wrap(buffer.toString().getBytes("UTF-8")));
    writechannel.close();
}
Is there a solution to upload a big file without obtaining a permission exception?
Wicket uses a DiskFileItemFactory, which stores files on disk once they exceed a 10240-byte threshold.
You have to change the sizeThreshold or use an alternative implementation of FileItemFactory; see the MultipartServletWebRequestImpl constructor and the sketch below.
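A hedged sketch of that approach, assuming Wicket 6.x with commons-fileupload on the classpath (the override points below are from memory, so verify them against your Wicket version): keeping uploads under a large in-memory threshold avoids the disk write that triggers the AccessControlException on App Engine.

import javax.servlet.http.HttpServletRequest;
import org.apache.commons.fileupload.FileUploadException;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.wicket.Page;
import org.apache.wicket.protocol.http.WebApplication;
import org.apache.wicket.protocol.http.servlet.MultipartServletWebRequest;
import org.apache.wicket.protocol.http.servlet.MultipartServletWebRequestImpl;
import org.apache.wicket.protocol.http.servlet.ServletWebRequest;
import org.apache.wicket.request.http.WebRequest;
import org.apache.wicket.util.lang.Bytes;

public class InMemoryUploadApplication extends WebApplication {
    @Override
    public Class<? extends Page> getHomePage() {
        return HomePage.class; // HomePage is hypothetical
    }

    @Override
    protected WebRequest newWebRequest(HttpServletRequest servletRequest, String filterPath) {
        return new ServletWebRequest(servletRequest, filterPath) {
            @Override
            public MultipartServletWebRequest newMultipartWebRequest(Bytes maxSize, String upload)
                    throws FileUploadException {
                // 50 MB in-memory threshold (illustrative); the null repository
                // is only consulted for items above the threshold
                DiskFileItemFactory factory = new DiskFileItemFactory(50 * 1024 * 1024, null);
                return new MultipartServletWebRequestImpl(getContainerRequest(), filterPath, maxSize, upload, factory);
            }
        };
    }
}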

Upload file via streaming using Jersey 2

I am trying to create a file upload API using Jersey. I would like to obtain details about the upload progress on the server side (is it possible?). Searching the web, the suggestion was to use a stream to transfer the file. But even with the code described below, the server only executes the "putFile" method after the file has arrived completely. Another problem is that this code only works for small files; it fails when I try a file bigger than 40 MB.
#Path("/file")
public class LargeUpload {
private static final String SERVER_UPLOAD_LOCATION_FOLDER = "/Users/diego/Documents/uploads/";
#PUT
#Path("/upload/{attachmentName}")
#Consumes(MediaType.APPLICATION_OCTET_STREAM)
public Response putFile(#PathParam("attachmentName") String attachmentName,
InputStream fileInputStream) throws Throwable {
String filePath = SERVER_UPLOAD_LOCATION_FOLDER + attachmentName;
saveFile(fileInputStream, filePath);
String output = "File saved to server location : ";
return Response.status(200).entity(output).build();
}
// save uploaded file to a defined location on the server
private void saveFile(InputStream uploadedInputStream, String serverLocation) {
try {
OutputStream outpuStream = new FileOutputStream(new File(
serverLocation));
int read = 0;
byte[] bytes = new byte[1024];
outpuStream = new FileOutputStream(new File(serverLocation));
while ((read = uploadedInputStream.read(bytes)) != -1) {
outpuStream.write(bytes, 0, read);
}
outpuStream.flush();
outpuStream.close();
} catch (IOException e) {
e.printStackTrace();
}
}
public static void main(String[] args) throws FileNotFoundException {
ClientConfig config = new ClientConfig();
config.property(ClientProperties.CHUNKED_ENCODING_SIZE, 1024);
Client client = ClientBuilder.newClient(config);
File fileName = new File("/Users/diego/Movies/ff.mp4");
InputStream fileInStream = new FileInputStream(fileName);
String sContentDisposition = "attachment; filename=\"" + fileName.getName()+"\"";
Response response = client.target("http://localhost:8080").path("upload-controller/webapi/file/upload/"+fileName.getName()).
request(MediaType.APPLICATION_OCTET_STREAM).header("Content-Disposition", sContentDisposition).
put(Entity.entity(fileInStream, MediaType.APPLICATION_OCTET_STREAM));
System.out.println(response);
}
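On the progress part of the question: since putFile receives the raw entity stream, the handler can observe progress itself by counting bytes as it copies. Below is a minimal sketch using only the JDK (the one-megabyte reporting cadence is arbitrary); it assumes the resource method is entered as soon as the request headers arrive, which is the case for chunked uploads that are not buffered by a filter in between.

private void saveFile(InputStream uploadedInputStream, String serverLocation) throws IOException {
    long total = 0;
    long nextReport = 1024 * 1024; // log roughly once per MB received
    try (OutputStream out = new FileOutputStream(serverLocation)) {
        byte[] buf = new byte[8192];
        int read;
        while ((read = uploadedInputStream.read(buf)) != -1) {
            out.write(buf, 0, read);
            total += read;
            if (total >= nextReport) {
                System.out.println("received " + total + " bytes so far");
                nextReport += 1024 * 1024;
            }
        }
    }
}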

What is wrong with this Threading example? [duplicate]

This is what I do to write to the InputStream:
public OutputStream getOutputStream(@Nonnull final String uniqueId) throws PersistenceException {
    final PipedOutputStream outputStream = new PipedOutputStream();
    final PipedInputStream inputStream;
    try {
        inputStream = new PipedInputStream(outputStream);
        new Thread(
            new Runnable() {
                @Override
                public void run() {
                    PutObjectRequest putObjectRequest = new PutObjectRequest("haritdev.sunrun", "sample.file.key", inputStream, new ObjectMetadata());
                    PutObjectResult result = amazonS3Client.putObject(putObjectRequest);
                    LOGGER.info("result - " + result.toString());
                    try {
                        inputStream.close();
                    } catch (IOException e) {
                    }
                }
            }
        ).start();
    } catch (AmazonS3Exception e) {
        throw new PersistenceException("could not generate output stream for " + uniqueId, e);
    } catch (IOException e) {
        throw new PersistenceException("could not generate input stream for S3 for " + uniqueId, e);
    }
    try {
        return new GZIPOutputStream(outputStream);
    } catch (IOException e) {
        LOGGER.error(e.getMessage(), e);
        throw new PersistenceException("Failed to get output stream for " + uniqueId + ": " + e.getMessage(), e);
    }
}
and in the following method, I see my process die
protected <X extends AmazonWebServiceRequest> Request<X> createRequest(String bucketName, String key, X originalRequest, HttpMethodName httpMethod) {
    Request<X> request = new DefaultRequest<X>(originalRequest, Constants.S3_SERVICE_NAME);
    request.setHttpMethod(httpMethod);
    if (bucketNameUtils.isDNSBucketName(bucketName)) {
        request.setEndpoint(convertToVirtualHostEndpoint(bucketName));
        request.setResourcePath(ServiceUtils.urlEncode(key));
    } else {
        request.setEndpoint(endpoint);
        if (bucketName != null) {
            /*
             * We don't URL encode the bucket name, since it shouldn't
             * contain any characters that need to be encoded based on
             * Amazon S3's naming restrictions.
             */
            request.setResourcePath(bucketName + "/"
                    + (key != null ? ServiceUtils.urlEncode(key) : ""));
        }
    }
    return request;
}
The process fails on request.setResourcePath(ServiceUtils.urlEncode(key)); and I can't even debug past that point, even though the key is a valid name and is not null.
Can someone please help?
This is how the request looks before dying
request = {com.amazonaws.DefaultRequest#1931}"PUT https://my.bucket.s3.amazonaws.com / "
resourcePath = null
parameters = {java.util.HashMap#1959} size = 0
headers = {java.util.HashMap#1963} size = 0
endpoint = {java.net.URI#1965}"https://my.bucket.s3.amazonaws.com"
serviceName = {java.lang.String#1910}"Amazon S3"
originalRequest = {com.amazonaws.services.s3.model.PutObjectRequest#1285}
httpMethod = {com.amazonaws.http.HttpMethodName#1286}"PUT"
content = null
I tried the same approach and it failed for me as well.
I ended up writing all my data to the output stream first, and then initiating the upload to S3 after copying the data from the output stream to the input stream:
...
// Data written to outputStream here
...
byte[] byteArray = outputStream.toByteArray();
amazonS3Client.uploadPart(new UploadPartRequest()
        .withBucketName(bucket)
        .withKey(key)
        .withInputStream(new ByteArrayInputStream(byteArray))
        .withPartSize(byteArray.length)
        .withUploadId(uploadId)
        .withPartNumber(partNumber));
It kind of defeats the purpose of writing to a stream if the entire data block has to be written and copied in memory before the upload to S3 can even begin, but it's the only way I could get it to work.
Here is what I tried, and it worked:
try (PipedOutputStream pipedOutputStream = new PipedOutputStream();
     // connect the two ends; an unconnected PipedInputStream throws on read
     PipedInputStream pipedInputStream = new PipedInputStream(pipedOutputStream)) {
    new Thread(new Runnable() {
        public void run() {
            try {
                // write some data to pipedOutputStream, then close it
                // so the reading side sees end-of-stream
            } catch (IOException e) {
                // handle exception
            }
        }
    }).start();
    PutObjectRequest putObjectRequest = new PutObjectRequest(BUCKET, FILE_NAME, pipedInputStream, new ObjectMetadata());
    s3Client.putObject(putObjectRequest);
}
This code worked, although S3 logged a warning that the content length is not set and the stream will be buffered, which could result in an OutOfMemoryError. I am not convinced there is any cheap way to set the content length in ObjectMetadata just to get rid of this message, and I hope the AWS SDK does not stream the whole stream into memory just to find the content length.
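For reference, ObjectMetadata in the AWS SDK for Java v1 does expose setContentLength(long). It only helps when the exact size is known before the upload starts, which is rarely the case with a piped stream, so this sketch applies to the known-size case only (knownSizeInBytes is hypothetical):

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(knownSizeInBytes); // must be the exact byte count of the stream
PutObjectRequest putObjectRequest = new PutObjectRequest(BUCKET, FILE_NAME, pipedInputStream, metadata);
s3Client.putObject(putObjectRequest);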
