Get size of files using Google Drive API - Java

I am trying to list files from Google Drive along with their sizes for my app. Below is how I tried getting the size:
File file = service.files().get(fileId).setFields("size").execute();
file.getSize();
I came to know that the size obtained from this call is not reliable, as Google Drive only populates the size field for binary files, not for Google Docs, Sheets, etc. (see Get size of file created on Google drive using Google drive api in android).
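For illustration, a minimal sketch (assuming the Drive v3 Java client and an existing service and fileId): getSize() comes back null for Google-native formats, so callers have to branch:

// Sketch: size is only populated for binary content stored in Drive.
File file = service.files().get(fileId).setFields("size, mimeType").execute();
if (file.getSize() != null) {
    System.out.println("Size in bytes: " + file.getSize());
} else {
    // Google Docs, Sheets, Slides etc. report no size.
    System.out.println("Google-native file (" + file.getMimeType() + "), no size reported");
}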
I also tried determining the file's size by making an HTTP HEAD request to the webContentLink and checking the Content-Length header, like below:
HttpURLConnection conn = null;
String url = "https://docs.google.com/a/document/d/1Gu7Q2Av2ZokZZyLjqBJHG7idE1dr35VE6rTuSii36_M/edit?usp=drivesdk";
try {
    URL urlObj = new URL(url);
    conn = (HttpURLConnection) urlObj.openConnection();
    conn.setRequestMethod("HEAD");
    conn.getInputStream();
    System.out.println(conn.getContentLength());
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (conn != null) { // guard against an NPE when openConnection() itself failed
        conn.disconnect();
    }
}
But in this case the file size is also not correct: it comes out far too large, likely because that URL points to the Docs editor page, so Content-Length reflects the editor's HTML rather than the document itself.
Is there any way I can determine the file size?

Found the solution to the problem. For files uploaded to Google Drive, the file size can be determined with the file.getSize() call. For Google Apps files such as Docs, Sheets, etc., file.getSize() will return null, as they don't consume any space in Drive. So, as a hacky way, I am exporting the file as a PDF and determining the size of that. Below is the code:
try {
    Drive service = new Drive.Builder(HTTP_TRANSPORT, JSON_FACTORY, credential)
            .setApplicationName("xyz")
            .build();
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    service.files().export(fileId, "application/pdf")
            .executeMediaAndDownloadTo(outputStream);
    int fileSize = outputStream.size(); // same value as toByteArray().length, without copying
} catch (Exception ex) {
    ex.printStackTrace(); // at minimum, don't swallow the exception silently
}
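Putting the two cases together, here is a minimal sketch under the same assumptions (Drive v3 Java client, an existing service and fileId); note the exported PDF size is only an approximation, since Google-native files have no canonical byte size:

// Sketch: prefer the reported size, fall back to the PDF-export workaround.
private long getFileSize(Drive service, String fileId) throws IOException {
    File file = service.files().get(fileId).setFields("size").execute();
    if (file.getSize() != null) {
        return file.getSize(); // binary file stored in Drive
    }
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    service.files().export(fileId, "application/pdf")
            .executeMediaAndDownloadTo(outputStream);
    return outputStream.size(); // approximate size via PDF export
}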

Related

Read a file using Java from an S3 bucket and HTTP PUT file to presigned AWS S3 URL of another bucket in a way that simulates an actual file upload

New to Java and HTTP requests.
Why this question is not a duplicate: I'm not using AWS SDK to generate any presigned URL. I get it from an external API.
Here is what I'm trying to accomplish:
Step 1: Read the source S3 bucket for a file (for now .xlsx)
Step 2: Parse this file by converting it to an InputStreamReader (I need help here)
Step 3: Do an HTTP PUT of this file by transferring the contents of the InputStreamReader to an OutputStreamWriter, on a presigned S3 URL that I have already obtained from an external team. The file must sit in the destination S3 bucket exactly the way a file appears when it is uploaded manually by dragging and dropping. (Also need help here)
Here is what I've tried:
Step 1: Read the S3 bucket for the file
public class LambdaMain implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(final S3Event event, final Context context) {
        System.out.println("Create object was called on the S3 bucket");
        S3EventNotification.S3EventNotificationRecord record = event.getRecords().get(0);
        String srcBucket = record.getS3().getBucket().getName();
        String srcKey = record.getS3().getObject().getUrlDecodedKey();
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
                .build();
        S3Object s3Object = s3Client.getObject(new GetObjectRequest(srcBucket, srcKey));
        String presignedS3Url = ...; // Assume that I have this by making an external API call
        InputStreamReader inputStreamReader = parseFileFromS3(s3Object); // Step 2
        int responseCode = putContentIntoS3URL(inputStreamReader, presignedS3Url); // Step 3
        return String.valueOf(responseCode); // RequestHandler<S3Event, String> must return a String
    }
Step 2: Parse the file into an InputStreamReader to copy it to an OutputStreamWriter:
private InputStreamReader parseFileFromS3(S3Object s3Object) {
    return new InputStreamReader(s3Object.getObjectContent(), StandardCharsets.UTF_8);
}
Step 3: Make an HTTP PUT call by copying the contents from the InputStreamReader to an OutputStreamWriter:
private int putContentIntoS3URL(InputStreamReader inputStreamReader, String presignedS3Url) {
    try {
        URL url = new URL(presignedS3Url);
        HttpURLConnection httpCon = (HttpURLConnection) url.openConnection();
        httpCon.setDoOutput(true);
        httpCon.setRequestMethod("PUT");
        OutputStreamWriter outputStreamWriter = new OutputStreamWriter(httpCon.getOutputStream());
        IOUtils.copy(inputStreamReader, outputStreamWriter);
        outputStreamWriter.close();
        return httpCon.getResponseCode();
    } catch (IOException e) {
        e.printStackTrace();
        return 0;
    }
}
The issue with the above approach is that when I read an .xlsx file via an S3 insert trigger and PUT it to the URL, the uploaded file downloads as gibberish.
When I try reading in a .png file and PUT it to the URL, the uploaded file downloads as a text file with some gibberish (I did see the word PNG in it, though).
It feels like I'm making mistakes with:
Incorrectly creating an OutputStreamWriter, since I don't understand how to send a file via an HTTP request
Assuming that every file type can be handled in a generic way
Not setting the Content-Type in the HTTP request
Expecting S3 to magically understand my file type after the PUT operation
I would like to know if my above 4 assumptions are correct or incorrect.
The intention is that I do the PUT on the file data correctly so that it sits in the S3 bucket with the correct file type/extension. I hope my effort is worthy of some help. I've done a lot of searching into HTTP PUT and file I/O, but I'm unable to link them together for my use-case, since I perform file I/O followed by an HTTP PUT.
UPDATE 1:
I've added the setRequestProperty("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"), but the file doesn't sit in the S3 bucket with the file extension. It simply sits there as an object.
UPDATE 2:
I think this also has something to do with setContentDisposition() header, although I'm not sure how I go about setting these headers for Excel files.
UPDATE 3:
This may simply have to do with how the Presigned S3 URL itself is vended out to us. As mentioned in the question, I said that we get the Presigned S3 URL from some other team. The question itself has multiple parts that need answering.
Does the default Presigned S3 URL ALLOW clients to set the content-type and content-disposition in the HTTP header?: I've set up another separate question here since it's quite unclear: Can a client set file name and extension programmatically when he PUTs file content to a presigned S3 URL that the service vends out?
If the answer to above question is TRUE, then and only then must we go into how to set the file contents and write it to the OutputStream
You are using InputStreamReader and OutputStreamWriter, which are both bridges between a byte stream and a character stream. However, you are using these with byte data, which means you first convert your bytes to characters, and then back to bytes. Since your data is not character data, this conversion might explain why you get gibberish as a result.
I'd start trying to get rid of the reader and writer, instead directly using the InputStream (which you already got from s3Object.getObjectContent()), and the OutputStream (which you got from httpCon.getOutputStream()). IOUtils.copy should also support this.
Also as a side note, when you construct the InputStreamReader you set StandardCharsets.UTF_8 as the charset to use, but when you construct the OutputStreamWriter you don't set the charset. Should the default charset not be UTF-8, this conversion would probably also result in gibberish.
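A minimal sketch of that suggestion (assuming Apache Commons IO's IOUtils, and changing the parameter from InputStreamReader to the raw InputStream obtained from s3Object.getObjectContent(); the Content-Type value is illustrative):

private int putContentIntoS3URL(InputStream inputStream, String presignedS3Url) {
    try {
        HttpURLConnection httpCon = (HttpURLConnection) new URL(presignedS3Url).openConnection();
        httpCon.setDoOutput(true);
        httpCon.setRequestMethod("PUT");
        // Illustrative Content-Type for .xlsx; see the caveat below.
        httpCon.setRequestProperty("Content-Type",
                "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
        try (OutputStream out = httpCon.getOutputStream()) {
            IOUtils.copy(inputStream, out); // byte-to-byte copy, no charset conversion
        }
        return httpCon.getResponseCode();
    } catch (IOException e) {
        e.printStackTrace();
        return 0;
    }
}

One caveat: if the presigned URL was generated with a specific Content-Type baked into the signature, the PUT must send exactly that value, or S3 will reject the request with a signature mismatch.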

Downloaded file with Google Drive API. Can't find it anywhere

I am trying to download a file with the Google Drive API. The code gives no error, but I can't find the file anywhere.
This is the code:
String fileId = "...";
OutputStream outputStream = new ByteArrayOutputStream();
try {
    service.files().export(fileId, "text/csv")
            .executeMediaAndDownloadTo(outputStream);
} catch (IOException e) {
    System.out.println("Something went wrong");
    e.printStackTrace();
}
System.out.println(outputStream == null);
Any idea?
Based on this example in the Google documentation, I don't see any error in your code.
String fileId = "1ZdR3L3qP4Bkq8noWLJHSr_iBau0DNT4Kli4SxNc2YEo";
OutputStream outputStream = new ByteArrayOutputStream();
driveService.files().export(fileId, "application/pdf")
.executeMediaAndDownloadTo(outputStream);
Make sure that you are using the correct fileId of the specific file you want to download.
You can check on these links: How to get Google Drive file ID, How to download a file from google drive using drive api java?
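One more observation (an assumption about what "can't find it anywhere" means): exporting into a ByteArrayOutputStream only holds the bytes in memory; nothing is written to disk. If a file on disk is the goal, a minimal sketch (the path is illustrative):

ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
service.files().export(fileId, "text/csv")
        .executeMediaAndDownloadTo(outputStream);
// Persist the in-memory bytes to a local file (illustrative path):
try (OutputStream fileOut = new FileOutputStream("/tmp/export.csv")) {
    outputStream.writeTo(fileOut);
}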

Download image in Spark Java

I followed the discussion on the Spark GitHub page as well as Stack Overflow to understand how to upload files using Spark and Apache Commons FileUpload.
Now I want the user to have an option to download the image on click.
For example, my uploaded files get stored in /tmp/imageName.jpg on the server.
On the client side, I want to give the user an option to download the file when the user clicks on the hyperlink:
click here
When the user clicks on the hyperlink, I will call the function with the file path, but I can't understand how to send the image in the response.
I do know that HTML5 has a download attribute, but that would require the files to be kept in a public folder on the server, which is not possible.
I went through the previous similar questions and tried to replicate them for my scenario, without success:
How can I send a PNG of a QR-code in a HTTP response body (with Spark)?
How download file using java spark?
Edit:
I did follow the link provided in the answer to force-download the image, but using response.raw() I'm not able to get the response.
response.type("application/force-download");
response.header("Content-Transfer-Encoding", "binary");
response.header("Content-Disposition", "attachment; filename=\"" + "xxx\""); //fileName);
try {
    HttpServletResponse raw = response.raw();
    PrintWriter out = raw.getWriter();
    File f = new File("/tmp/Tulips.jpg");
    InputStream in = new FileInputStream(f);
    BufferedInputStream bin = new BufferedInputStream(in);
    DataInputStream din = new DataInputStream(bin);
    while (din.available() > 0) {
        out.print(din.read());
        out.print("\n");
    }
} catch (Exception e1) {
    e1.printStackTrace();
}
response.status(200);
return response.raw();
Edit 2:
I'm not sure what the difference is between using response.body() vs response.raw().someFunction(). In either case I can't seem to send the data back in the response. Even if I write a simple response.body("hello"), it doesn't reflect in my response.
Is there a difference in how a file would be read as opposed to an image? For example, using the ImageIO class?
Below is the solution that works for me:
Service.java
get(API_CONTEXT + "/result/download", (request, response) -> {
    String key = request.queryParams("filepath");
    Path path = Paths.get("/tmp/" + key);
    byte[] data = null;
    try {
        data = Files.readAllBytes(path);
    } catch (Exception e1) {
        e1.printStackTrace();
    }
    HttpServletResponse raw = response.raw();
    response.header("Content-Disposition", "attachment; filename=image.jpg");
    response.type("application/force-download");
    try {
        raw.getOutputStream().write(data);
        raw.getOutputStream().flush();
        raw.getOutputStream().close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return raw;
});
Angular Code
$scope.downloadImage = function(filepath) {
    console.log(filepath);
    window.open('/api/v1/result/download?filepath=' + filepath, '_self', '');
};
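A side note on the design (an observation, not part of the original answer): Files.readAllBytes buffers the entire file in memory, which is fine for small images but wasteful for large downloads. java.nio.file.Files.copy can stream straight into the servlet output stream instead; a sketch of the same route:

get(API_CONTEXT + "/result/download", (request, response) -> {
    Path path = Paths.get("/tmp/" + request.queryParams("filepath"));
    HttpServletResponse raw = response.raw();
    response.header("Content-Disposition", "attachment; filename=image.jpg");
    response.type("application/force-download");
    // Stream the file straight to the client instead of buffering it all in memory:
    Files.copy(path, raw.getOutputStream());
    raw.getOutputStream().flush();
    return raw;
});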

Reading from URL - Java, C#, Android, Visual Studio

Is it possible to read from a URL that points to a text file? e.g.
https://dl.dropboxusercontent.com/u/53441658/read.txt
Instead of giving a file path that is located on the computer, I want to give a path to a text file via a URL. Something like this:
StreamWriter sw = new StreamWriter("https://dl.dropboxusercontent.com/u/53441658/read.txt", true);
In Java (Android)
URL url = new URL("ftp://mirror.csclub.uwaterloo.ca/index.html");
URLConnection urlConnection = url.openConnection();
InputStream in = new BufferedInputStream(urlConnection.getInputStream());
try {
    readStream(in);
} finally {
    in.close();
}
P.S. You must do this in a separate thread, like an AsyncTask. Read more in the official Android documentation.
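For completeness, a sketch of what a readStream-style helper might look like (an assumption, since readStream isn't shown in the answer), reading the whole response as UTF-8 text:

private static String readStream(InputStream in) throws IOException {
    BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
    StringBuilder sb = new StringBuilder();
    String line;
    while ((line = reader.readLine()) != null) {
        sb.append(line).append('\n'); // preserve line breaks from the source file
    }
    return sb.toString();
}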
In Dot.Net C#
using (WebClient client = new WebClient()) {
    string s = client.DownloadString(url);
}

Zip the files which are present at one FTP location and copy to another FTP location directly

I want to create a zip file of files which are present at one FTP location and copy this zip file to another FTP location without saving it locally.
I am able to handle this for small files; it works well for small files of 1 MB etc.
But if the file size is big, like 100 MB, 200 MB, 300 MB, then it gives an error:
java.io.FileNotFoundException: STOR myfile.zip : 550 The process cannot access the file because it is being used by another process.
    at sun.net.ftp.FtpClient.readReply(FtpClient.java:251)
    at sun.net.ftp.FtpClient.issueCommand(FtpClient.java:208)
    at sun.net.ftp.FtpClient.openDataConnection(FtpClient.java:398)
    at sun.net.ftp.FtpClient.put(FtpClient.java:609)
My code is:
URLConnection urlConnection = null;
ZipOutputStream zipOutputStream = null;
InputStream inputStream = null;
byte[] buf;
int ByteRead, ByteWritten = 0;
// Destination where the zipped file will be written (ftp://user:pass@host/path)
URL url = new URL("ftp://" + ftpuser + ":" + ftppass + "@" + ftphost + "/" + fileNameToStore + ";type=i");
urlConnection = url.openConnection();
OutputStream outputStream = urlConnection.getOutputStream();
zipOutputStream = new ZipOutputStream(outputStream);
buf = new byte[size];
for (int i = 0; i < li.size(); i++) {
    try {
        // Source from where the file will be read; li holds URLs like http://xyz.com/folder/myPDF.pdf
        URL u = new URL((String) li.get(i));
        URLConnection uCon = u.openConnection();
        inputStream = uCon.getInputStream();
        String name = (String) li.get(i);
        zipOutputStream.putNextEntry(new ZipEntry(name.substring(name.lastIndexOf("/") + 1).trim()));
        while ((ByteRead = inputStream.read(buf)) != -1) {
            zipOutputStream.write(buf, 0, ByteRead);
            ByteWritten += ByteRead;
        }
        zipOutputStream.closeEntry();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
if (inputStream != null) {
    try {
        inputStream.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
if (zipOutputStream != null) {
    try {
        zipOutputStream.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Can anybody let me know how I can avoid this error and handle large files?
This is unrelated to file sizes; as the error says, you can't replace the file because some other process is currently locking it.
The reason why you see it more often with large files is because these take longer to transfer hence the chance of concurrent accesses is higher.
So the only solution is to make sure that no one uses the file when you try to transfer it. Good luck with that.
Possible other solutions:
Don't use Windows on the server.
Transfer the file under a temporary name and rename it when it's complete. That way, other processes won't see incomplete files. Always a good thing. (See the sketch after this list.)
Use rsync instead of reinventing the wheel.
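A minimal sketch of the temporary-name approach, assuming Apache Commons Net's FTPClient (host, user, pass and the file names are illustrative):

// Upload under a temporary name, then rename once the transfer is complete.
FTPClient ftp = new FTPClient();
ftp.connect(host);
ftp.login(user, pass);
ftp.setFileType(FTP.BINARY_FILE_TYPE);
try (OutputStream out = ftp.storeFileStream("myfile.zip.part")) {
    // write the zip entries into 'out' here
}
if (ftp.completePendingCommand()) {
    ftp.rename("myfile.zip.part", "myfile.zip"); // readers never see a partial file
}
ftp.logout();
ftp.disconnect();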
Back in the day, before we had network security, there were FTP servers that allowed 3rd party transfers. You could use site specific commands and send a file to another FTP server directly. Those days are long gone. Sigh.
Ok, maybe not long gone. Some FTP servers support the proxy command. There is a discussion here: http://www.math.iitb.ac.in/resources/manuals/Unix_Unleashed/Vol_1/ch27.htm
