How to get spreadsheets from a specific Google Drive folder? - java

The code provided in this tutorial (snippet given below) retrieves a list of all the spreadsheets for the authenticated user.
public class MySpreadsheetIntegration {
  public static void main(String[] args) throws AuthenticationException,
      MalformedURLException, IOException, ServiceException {

    SpreadsheetService service = new SpreadsheetService("MySpreadsheetIntegration-v1");

    // TODO: Authorize the service object for a specific user (see other sections)

    // Define the URL to request. This should never change.
    URL SPREADSHEET_FEED_URL = new URL(
        "https://spreadsheets.google.com/feeds/spreadsheets/private/full");

    // Make a request to the API and get all spreadsheets.
    SpreadsheetFeed feed = service.getFeed(SPREADSHEET_FEED_URL,
        SpreadsheetFeed.class);
    List<SpreadsheetEntry> spreadsheets = feed.getEntries();

    // Iterate through all of the spreadsheets returned
    for (SpreadsheetEntry spreadsheet : spreadsheets) {
      // Print the title of this spreadsheet to the screen
      System.out.println(spreadsheet.getTitle().getPlainText());
    }
  }
}
But I don't want to get all the spreadsheets. I only want the spreadsheets that are in a particular folder (if the folder exists; otherwise the program should terminate). Is this possible using this API? If yes, how?
As far as I understand, the SpreadsheetFeed URL has to be changed, but I couldn't find any example snippet for it.

I worked out the solution as follows:
First, get the fileId of that particular folder. Use setQ() to pass a query that checks for the folder MIME type and the folder name. The following snippet will be useful:
result = driveService.files().list()
        .setQ("mimeType='application/vnd.google-apps.folder' "
                + "and title='" + folderName + "'")
        .setPageToken(pageToken)
        .execute();
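Since the program should terminate when the folder does not exist, check the query result before going further. A minimal sketch, assuming the Drive v2 client used above (result is the FileList returned by execute()):
// Stop if no folder matched the query; otherwise take the first match.
List<com.google.api.services.drive.model.File> folders = result.getItems();
if (folders == null || folders.isEmpty()) {
    System.err.println("Folder not found: " + folderName);
    return; // terminate: the folder does not exist
}
String folderId = folders.get(0).getId();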
Then, get the list of files in that particular folder. I found it in this tutorial. The snippet is as follows:
private static void printFilesInFolder(Drive service, String folderId) throws IOException {
  Children.List request = service.children().list(folderId);

  do {
    try {
      ChildList children = request.execute();
      for (ChildReference child : children.getItems()) {
        System.out.println("File Id: " + child.getId());
      }
      request.setPageToken(children.getNextPageToken());
    } catch (IOException e) {
      System.out.println("An error occurred: " + e);
      request.setPageToken(null);
    }
  } while (request.getPageToken() != null &&
           request.getPageToken().length() > 0);
}
Lastly, check for spreadsheets and get worksheet feeds for them. The following snippet might help.
URL WORKSHEET_FEED_URL = new URL(
        "https://spreadsheets.google.com/feeds/worksheets/" + fileId + "/private/full");
WorksheetFeed feed = service.getFeed(WORKSHEET_FEED_URL, WorksheetFeed.class);
List<WorksheetEntry> worksheets = feed.getEntries();
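To make the "check for spreadsheets" step concrete: the child list only carries IDs, so one option is to fetch each child's metadata and test its MIME type. A sketch, assuming the same Drive v2 service object (childId comes from the child loop above):
// Keep only children that are Google Sheets files.
com.google.api.services.drive.model.File child = driveService.files().get(childId).execute();
if ("application/vnd.google-apps.spreadsheet".equals(child.getMimeType())) {
    String fileId = child.getId(); // use this to build the worksheet feed URL above
}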

Related

Google drive api to get all children is not working if I dynamically pass fileId to query

I am trying to use the Google Drive API to search the parents of a folder. In the search query I have to pass the file ID dynamically instead of hard-coding it. I tried the code below, but I am getting a "file not found" JSON response.
It is not taking fileId as a value; I think it is being treated as a literal string.
If I hardcode the value, it works.
FileList result = service.files().list().setQ("name='testfile' ").execute();
for (com.google.api.services.drive.model.File file : result.getFiles()) {
    System.out.printf("Found file: %s (%s)\n",
            file.getName(), file.getId());
    String fileId = file.getId();
    FileList childern = service.files().list().setQ(" + \"file.getId()\" in parents").setFields("files(id, name, modifiedTime, mimeType)").execute();
This should help. Your query contains the literal text file.getId() instead of its value; build the query string by concatenating the actual ID:
String fileId = file.getId();
service.files().list().setQ("'" + fileId + "'" + " in parents").setFields("files(id, name, modifiedTime, mimeType)").execute();
Make sure you have a valid file.getId().
I know your question states Java, but the only sample I have of this working is in C#. Another issue is that, as far as I know, PageStreamer.cs does not have an equivalent in the Java client library.
I am hoping that C# and Java are close enough that this might give you some ideas of how to get it working in Java. My Java knowledge is quite basic, but I may be able to help you debug it if you want to try to convert this.
try
{
    // Initial validation.
    if (service == null)
        throw new ArgumentNullException("service");

    // Building the initial request.
    var request = service.Files.List();

    // Applying optional parameters to the request.
    request = (FilesResource.ListRequest)SampleHelpers.ApplyOptionalParms(request, optional);

    var pageStreamer = new Google.Apis.Requests.PageStreamer<Google.Apis.Drive.v3.Data.File, FilesResource.ListRequest, Google.Apis.Drive.v3.Data.FileList, string>(
        (req, token) => request.PageToken = token,
        response => response.NextPageToken,
        response => response.Files);

    var allFiles = new Google.Apis.Drive.v3.Data.FileList();
    allFiles.Files = new List<Google.Apis.Drive.v3.Data.File>();

    foreach (var result in pageStreamer.Fetch(request))
    {
        allFiles.Files.Add(result);
    }
    return allFiles;
}
catch (Exception Ex)
{
    throw new Exception("Request Files.List failed.", Ex);
}
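For what it's worth, the usual Java substitute for PageStreamer is a manual page-token loop. A minimal sketch, assuming the Drive v3 Java client (service) and a folderId you have already resolved (both placeholders here):
// Page through files.list manually: keep requesting until nextPageToken is null.
String pageToken = null;
do {
    FileList result = service.files().list()
            .setQ("'" + folderId + "' in parents")
            .setFields("nextPageToken, files(id, name)")
            .setPageToken(pageToken)
            .execute();
    for (com.google.api.services.drive.model.File file : result.getFiles()) {
        System.out.printf("Found file: %s (%s)%n", file.getName(), file.getId());
    }
    pageToken = result.getNextPageToken();
} while (pageToken != null);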

How to use Azure storage blob services [duplicate]

This question already has answers here:
Compiler error "archive for required library could not be read" - Spring Tool Suite
(24 answers)
Closed 4 years ago.
I need your help as I am new to this field. I want to use the Azure storage blob service to upload, list, and download images, but I am facing some problems.
I have imported a project from this repository, and as soon as I import I am getting errors:
Description Resource Path Location Type
Archive for required library: 'C:/Users/NUTRIP-DEVLP1/.m2/repository/org/apache/commons/commons-lang3/3.4/commons-lang3-3.4.jar' in project 'blobAzureApp' cannot be read or is not a valid ZIP file blobAzureApp Build path Build Path Problem
Description Resource Path Location Type
The project cannot be built until build path errors are resolved blobAzureApp Unknown Java Problem
Should I run this as a normal Java application or a Maven project? If Maven, how do I run it?
I suggest using the official Java SDK in your Maven project:
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-storage-blob</artifactId>
    <version>10.1.0</version>
</dependency>
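The samples below assume you already have a BlockBlobURL/ContainerURL to work with. A setup sketch for the v10 SDK, where accountName, accountKey, and the container/blob names are placeholders:
// Build the request pipeline and URL objects from account credentials.
SharedKeyCredentials creds = new SharedKeyCredentials(accountName, accountKey);
ServiceURL serviceURL = new ServiceURL(
        new URL("https://" + accountName + ".blob.core.windows.net"),
        StorageURL.createPipeline(creds, new PipelineOptions()));
ContainerURL containerURL = serviceURL.createContainerURL("mycontainer");
BlockBlobURL blobURL = containerURL.createBlockBlobURL("test.txt");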
sample upload code:
static void uploadFile(BlockBlobURL blob, File sourceFile) throws IOException {
    FileChannel fileChannel = FileChannel.open(sourceFile.toPath());

    // Uploading a file to the blobURL using the high-level methods available in TransferManager class
    // Alternatively call the Upload/StageBlock low-level methods from BlockBlobURL type
    TransferManager.uploadFileToBlockBlob(fileChannel, blob, 8 * 1024 * 1024, null)
            .subscribe(response -> {
                System.out.println("Completed upload request.");
                System.out.println(response.response().statusCode());
            });
}
sample list code:
static void listBlobs(ContainerURL containerURL) {
    // Each ContainerURL.listBlobsFlatSegment call returns up to maxResults
    // (maxResults=10 passed into ListBlobsOptions below).
    // To list all blobs, we create a helper static method called listAllBlobs,
    // and call it after the initial listBlobsFlatSegment call.
    ListBlobsOptions options = new ListBlobsOptions(null, null, 10);

    containerURL.listBlobsFlatSegment(null, options)
            .flatMap(containersListBlobFlatSegmentResponse ->
                    listAllBlobs(containerURL, containersListBlobFlatSegmentResponse))
            .subscribe(response -> {
                System.out.println("Completed list blobs request.");
                System.out.println(response.statusCode());
            });
}
private static Single<ContainersListBlobFlatSegmentResponse> listAllBlobs(ContainerURL url, ContainersListBlobFlatSegmentResponse response) {
    // Process the blobs returned in this result segment (if the segment is empty, blobs() will be null).
    if (response.body().blobs() != null) {
        for (Blob b : response.body().blobs().blob()) {
            String output = "Blob name: " + b.name();
            if (b.snapshot() != null) {
                output += ", Snapshot: " + b.snapshot();
            }
            System.out.println(output);
        }
    } else {
        System.out.println("There are no more blobs to list off.");
    }

    // If there is not another segment, return this response as the final response.
    if (response.body().nextMarker() == null) {
        return Single.just(response);
    } else {
        /*
         IMPORTANT: ListBlobsFlatSegment returns the start of the next segment; you MUST use this to get the next
         segment (after processing the current result segment).
        */
        String nextMarker = response.body().nextMarker();

        /*
         The presence of the marker indicates that there are more blobs to list, so we make another call to
         listBlobsFlatSegment and pass the result through this helper function.
        */
        return url.listBlobsFlatSegment(nextMarker, new ListBlobsOptions(null, null, 1))
                .flatMap(containersListBlobFlatSegmentResponse ->
                        listAllBlobs(url, containersListBlobFlatSegmentResponse));
    }
}
sample download code:
static void getBlob(BlockBlobURL blobURL, File sourceFile) {
    try {
        // Get the blob using the low-level download method in BlockBlobURL type
        // com.microsoft.rest.v2.util.FlowableUtil is a static class that contains helpers to work with Flowable
        blobURL.download(new BlobRange(0, Long.MAX_VALUE), null, false)
                .flatMapCompletable(response -> {
                    AsynchronousFileChannel channel = AsynchronousFileChannel.open(Paths
                            .get(sourceFile.getPath()), StandardOpenOption.CREATE, StandardOpenOption.WRITE);
                    return FlowableUtil.writeFile(response.body(), channel);
                })
                .doOnComplete(() -> System.out.println("The blob was downloaded to " + sourceFile.getAbsolutePath()))
                // To call it synchronously add .blockingAwait()
                .subscribe();
    } catch (Exception ex) {
        System.out.println(ex.toString());
    }
}
For more details, please refer to this doc. Hope it helps.

Batching multiple files to Amazon S3 using the Java SDK

I'm trying to upload multiple files to Amazon S3 all under the same key, by appending the files. I have a list of file names and want to upload/append the files in that order. I am pretty much exactly following this tutorial, but I am looping through each file and uploading it as a part. Because the files are on HDFS (the Path is actually org.apache.hadoop.fs.Path), I am using an input stream to send the file data. Some pseudocode is below (I have commented out the blocks that are word for word from the tutorial):
// Create a list of UploadPartResponse objects. You get one of these for
// each part upload.
List<PartETag> partETags = new ArrayList<PartETag>();

// Step 1: Initialize.
InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(
        bk.getBucket(), bk.getKey());
InitiateMultipartUploadResult initResponse =
        s3Client.initiateMultipartUpload(initRequest);

try {
    int i = 1; // part number
    for (String file : files) {
        Path filePath = new Path(file);

        // Get the input stream and content length
        long contentLength = fss.get(branch).getFileStatus(filePath).getLen();
        InputStream is = fss.get(branch).open(filePath);

        long filePosition = 0;
        while (filePosition < contentLength) {
            // create request
            // upload part and add response to our list
            i++;
        }
    }
    // Step 3: Complete.
    CompleteMultipartUploadRequest compRequest = new
            CompleteMultipartUploadRequest(bk.getBucket(),
                    bk.getKey(),
                    initResponse.getUploadId(),
                    partETags);
    s3Client.completeMultipartUpload(compRequest);
} catch (Exception e) {
    //...
}
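The commented-out loop body is the tutorial's standard part-upload pattern; roughly this (a sketch with the same SDK v1 classes, the 5 MB part size being the tutorial's value):
// Sketch of the elided block: upload one part and record its ETag.
long partSize = Math.min(5 * 1024 * 1024, contentLength - filePosition);
UploadPartRequest uploadRequest = new UploadPartRequest()
        .withBucketName(bk.getBucket())
        .withKey(bk.getKey())
        .withUploadId(initResponse.getUploadId())
        .withPartNumber(i)
        .withInputStream(is)
        .withPartSize(partSize);
partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
filePosition += partSize;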
However, I am getting the following error:
com.amazonaws.services.s3.model.AmazonS3Exception: The XML you provided was not well-formed or did not validate against our published schema (Service: Amazon S3; Status Code: 400; Error Code: MalformedXML; Request ID: 2C1126E838F65BB9), S3 Extended Request ID: QmpybmrqepaNtTVxWRM1g2w/fYW+8DPrDwUEK1XeorNKtnUKbnJeVM6qmeNcrPwc
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1109)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:741)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:461)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:296)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3743)
    at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:2617)
If anyone knows what the cause of this error might be, that would be greatly appreciated. Alternatively, if there is a better way to concatenate a bunch of files into one S3 key, that would be great as well. I tried using Java's built-in SequenceInputStream, but that did not work. For reference, the total size of all the files could be as large as 10-15 GB.
I know it's probably a bit late, but it's worth giving my contribution.
I managed to solve a similar problem using SequenceInputStream.
The trick is being able to calculate the total size of the resulting file and then feeding the SequenceInputStream with an Enumeration<InputStream>.
Here's some example code that might help:
public void combineFiles() {
    List<String> files = getFiles();
    long totalFileSize = files.stream()
            .map(this::getContentLength)
            .reduce(0L, (f, s) -> f + s);

    try {
        try (InputStream partialFile = new SequenceInputStream(getInputStreamEnumeration(files))) {
            ObjectMetadata resultFileMetadata = new ObjectMetadata();
            resultFileMetadata.setContentLength(totalFileSize);
            s3Client.putObject("bucketName", "resultFilePath", partialFile, resultFileMetadata);
        }
    } catch (IOException e) {
        LOG.error("An error occurred while combining files. {}", e);
    }
}

private Enumeration<? extends InputStream> getInputStreamEnumeration(List<String> files) {
    return new Enumeration<InputStream>() {
        private Iterator<String> fileNamesIterator = files.iterator();

        @Override
        public boolean hasMoreElements() {
            return fileNamesIterator.hasNext();
        }

        @Override
        public InputStream nextElement() {
            try {
                return new FileInputStream(Paths.get(fileNamesIterator.next()).toFile());
            } catch (FileNotFoundException e) {
                System.err.println(e.getMessage());
                throw new RuntimeException(e);
            }
        }
    };
}
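The snippet above relies on two helpers that are not shown, getFiles() and getContentLength(...). A hypothetical version of the latter for local files (the question's HDFS case would use getFileStatus(path).getLen() instead):
// Hypothetical helper for local files; for HDFS use FileSystem.getFileStatus(path).getLen().
private Long getContentLength(String file) {
    return Paths.get(file).toFile().length();
}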
Hope this helps!

Dropbox Core API JAVA Authorization Code

Using the Dropbox Core API tutorial I am able to upload a file.
However, my question is an exact replica of this SO post: once I have my authorization code and comment out the user-auth lines so that I don't have to manually re-authorize every time I use Dropbox, I get the following errors:
Exception in thread "main" com.dropbox.core.DbxException$BadRequest: {"error_description": "code has already been used", "error": "invalid_grant"}
OR
Exception in thread "main" com.dropbox.core.DbxException$BadRequest: {"error_description": "code has expired (within the last hour)", "error": "invalid_grant"}
I am positive I have the correct authorization code.
I hope that I'm missing something; otherwise, what's the point of an API if you have to induce manual intervention every time you use it?
Edit: My Exact Code (keys have been scrambled)
import com.dropbox.core.*;
import java.io.*;
import java.util.Locale;

public class DropboxUpload {
    public static void main(String[] args) throws IOException, DbxException {
        // Get your app key and secret from the Dropbox developers website.
        final String APP_KEY = "2po9b49whx74h67";
        final String APP_SECRET = "m98f734hnr92kmh";

        DbxAppInfo appInfo = new DbxAppInfo(APP_KEY, APP_SECRET);
        DbxRequestConfig config = new DbxRequestConfig("JavaTutorial/1.0",
                Locale.getDefault().toString());
        DbxWebAuthNoRedirect webAuth = new DbxWebAuthNoRedirect(config, appInfo);

        // Have the user sign in and authorize your app.
        //String authorizeUrl = webAuth.start();
        //System.out.println("1. Go to: " + authorizeUrl);
        //System.out.println("2. Click \"Allow\" (you might have to log in first)");
        //System.out.println("3. Copy the authorization code.");
        //String code = new BufferedReader(new InputStreamReader(System.in)).readLine().trim();
        DbxAuthFinish authFinish = webAuth.finish("VtwxzitUoI8DDDLx0PlLut5Gjpw3");
        String accessToken = authFinish.accessToken;

        DbxClient client = new DbxClient(config, accessToken);
        System.out.println("Linked account: " + client.getAccountInfo().displayName);

        File inputFile = new File("/home/dropboxuser/Documents/test.txt");
        FileInputStream inputStream = new FileInputStream(inputFile);
        try {
            DbxEntry.File uploadedFile = client.uploadFile("/Public/test.txt",
                    DbxWriteMode.add(), inputFile.length(), inputStream);
            System.out.println("Uploaded: " + uploadedFile.toString());
        } finally {
            inputStream.close();
        }

        DbxEntry.WithChildren listing = client.getMetadataWithChildren("/");
        System.out.println("Files in the root path:");
        for (DbxEntry child : listing.children) {
            System.out.println("  " + child.name + ": " + child.toString());
        }

        FileOutputStream outputStream = new FileOutputStream("test.txt");
        try {
            DbxEntry.File downloadedFile = client.getFile("/Public/test.txt", null,
                    outputStream);
            System.out.println("Metadata: " + downloadedFile.toString());
        } finally {
            outputStream.close();
        }
    }
}
You should be storing and reusing the access token, not the authorization code.
So after doing this once:
String accessToken = authFinish.accessToken;
You should just replace the whole thing with
String accessToken = "<the one you already got>";
BTW, if you just need an access token for your own account, you can generate one with the click of a button! See https://www.dropbox.com/developers/blog/94/generate-an-access-token-for-your-own-account.

Search in spreadsheets not working for new files created

I create copies of my spreadsheet template on Google Docs with the Document List API and I realised that:
1. Title queries work fine.
2. Content queries are not working (*) or only partially working (**).
(*) For the majority of spreadsheets: I searched for every word from the content of a spreadsheet and got no results.
(**) For a few spreadsheets I get results for some of the words copied from the template; queries for the other words do not work.
3. If I update the spreadsheet after a few minutes, all queries work fine.
(I run these searches from the UI.)
These are the steps for creating these files:
1. Copy the spreadsheet template to root
private String sendPostCopyRequest(String authorizationToken, String resourceID, String title, int noRetries) throws IOException {
    /*
     resourceID = resource id of the template that I want to copy
     title = the title of the new file created
    */
    String urlStr = "https://docs.google.com/feeds/default/private/full";
    URL url = new URL(urlStr);

    HttpURLConnection copyHttpUrlConn = (HttpURLConnection) url.openConnection();
    copyHttpUrlConn.setDoOutput(true);
    copyHttpUrlConn.setRequestMethod("POST");

    String outputString = "<?xml version='1.0' encoding='UTF-8'?>" +
            "<entry xmlns=\"http://www.w3.org/2005/Atom\"> " +
            "<id>https://docs.google.com/feeds/default/private/full/" + resourceID + "</id>" +
            " <title>" + title + "</title></entry>";

    copyHttpUrlConn.setRequestProperty("GData-Version", "3.0");
    copyHttpUrlConn.setRequestProperty("Content-Type", "application/atom+xml");
    copyHttpUrlConn.setRequestProperty("Content-Length", outputString.length() + "");
    copyHttpUrlConn.setRequestProperty("Authorization", "GoogleLogin auth=" + authorizationToken);

    OutputStream outputStream = copyHttpUrlConn.getOutputStream();
    outputStream.write(outputString.getBytes());
    copyHttpUrlConn.getResponseCode();
    return readIdFromResponse(copyHttpUrlConn.getInputStream());
}
2. I update some cells using this method:
public boolean setCellValue(SpreadsheetService spreadSheetService, SpreadsheetEntry entry, int worksheetNumber, String position, String value) throws IOException, ServiceException {
    List<WorksheetEntry> worksheets = entry.getWorksheets();
    WorksheetEntry worksheet = worksheets.get(worksheetNumber);

    URL cellFeedUrl = worksheet.getCellFeedUrl();
    CellQuery query = new CellQuery(cellFeedUrl);
    query.setReturnEmpty(true);
    query.setRange(position);

    CellFeed cellFeed = spreadSheetService.query(query, CellFeed.class);
    CellEntry cell = cellFeed.getEntries().get(0);
    cell.changeInputValueLocal(value);
    cell.update();
    return true;
}
3. I move the created file to a new folder (collection)
public DocumentListEntry moveSpreadSheet(DocsService docsService, String entryId, String destinationFolderDocId) throws MalformedURLException, IOException, ServiceException {
    DocumentListEntry newEntry = new com.google.gdata.data.docs.SpreadsheetEntry();
    newEntry.setId(entryId);
    String destFolderUri = "https://docs.google.com/feeds/default/private/full/folder%3A"
            + destinationFolderDocId + "/contents";
    return docsService.insert(new URL(destFolderUri), newEntry);
}
(The same results with GData Java SDK 1.4.5, 1.4.6, and 1.4.7.)
This has been happening since approximately 2011-12-23. For all spreadsheets created with the same code before this date, all queries work fine.
I can provide any other information on request.
Update:
This issue also seems to appear when uploading spreadsheets with conversion.
If I update the files a while after creation/upload (~2 hours), the queries return them in the results.
Your issue could be related to slowish Google indexing of spreadsheet contents.
https://groups.google.com/a/googleproductforums.com/d/msg/docs/vEhI_HkKX3I/MGKqkryrx90J
"at the moment it can take about 10 minutes to index the content you've written into your spreadsheet. So if you type something in, and then search for it right away, it might not show up yet in your list of document results. Give it a few more minutes (we are working on making this faster)"
