I currently have to integrate Google Cloud Platform services into my app, but I am receiving the following exception:
**W/System.err: java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
at com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentials(DefaultCredentialsProvider.java:134)
W/System.err: at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:119)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:91)
at com.google.api.gax.core.GoogleCredentialsProvider.getCredentials(GoogleCredentialsProvider.java:67)
at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:135)
at com.google.cloud.speech.v1.stub.GrpcSpeechStub.create(GrpcSpeechStub.java:94)
at com.google.cloud.speech.v1.stub.SpeechStubSettings.createStub(SpeechStubSettings.java:131)
at com.google.cloud.speech.v1.SpeechClient.<init>(SpeechClient.java:144)
at com.google.cloud.speech.v1.SpeechClient.create(SpeechClient.java:126)
at com.google.cloud.speech.v1.SpeechClient.create(SpeechClient.java:118)
at com.dno.app.ui.TranscriptFragment$1.onClick(TranscriptFragment.java:72)**
The environment variable is set:
The .json file is here:
The app crashes at authImplicit() in this code block (fragment):
transcriptBtn = getActivity().findViewById(R.id.transcript_button);
transcriptBtn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
try (SpeechClient speechClient = SpeechClient.create()) {
authImplicit(); // issue with Google Cloud Platform authentication
// - does not read the environment variable, and therefore cannot get access to the .json file.
// The path to the audio file to transcribe
String fileName = getFile().getName();
// Reads the audio file into memory
Path path = Paths.get(fileName);
byte[] data = Files.readAllBytes(path);
ByteString audioBytes = ByteString.copyFrom(data);
// Builds the sync recognize request
RecognitionConfig config =
RecognitionConfig.newBuilder()
.setEncoding(RecognitionConfig.AudioEncoding.FLAC)
.setSampleRateHertz(16000)
.setLanguageCode("en-US")
.build();
RecognitionAudio audio = RecognitionAudio.newBuilder().setContent(audioBytes).build();
// Performs speech recognition on the audio file
RecognizeResponse response = speechClient.recognize(config, audio);
List<SpeechRecognitionResult> results = response.getResultsList();
for (SpeechRecognitionResult result : results) {
// There can be several alternative transcripts for a given chunk of speech. Just use the
// first (most likely) one here.
SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
System.out.printf("Transcription: %s%n", alternative.getTranscript());
}
} catch (IOException e) {
e.printStackTrace();
}
}
});
Code for authImplicit():
private void authImplicit() {
// If you don't specify credentials when constructing the client, the client library will
// look for credentials via the environment variable GOOGLE_APPLICATION_CREDENTIALS.
Storage storage = StorageOptions.getDefaultInstance().getService();
System.out.println("Buckets:");
Page<Bucket> buckets = storage.list();
for (Bucket bucket : buckets.iterateAll()) {
System.out.println(bucket.toString());
}
}
I have selected a service account with the Owner role, so I shouldn't be lacking any permissions.
EDIT (Still not working):
I tried using this example but it still doesn't work: The Application Default Credentials are not available
EDIT #2 (Working on server-side):
As it turns out, Google does not currently support Android for this task. Because of this, I've moved the code to an ASP.NET back-end, and the code is now running smoothly.
Thank you for the assistance below.
I understand that you've read the documentation and implemented all the steps stated here: https://cloud.google.com/storage/docs/reference/libraries#windows
As @John Hanley mentioned, did you check by printing the environment variables?
It's definitely not an Owner-related permissions issue, as the exception says:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
Now, to solve this, do you want to use only environment variables, or are other approaches fine?
If other approaches are fine, then take a look at this code:
private void authImplicit() {
//please give exact file name for this credentials.json
Credentials credentials = GoogleCredentials
.fromStream(new FileInputStream("C:\\Users\\dbgno\\Keys\\credentials.json"));
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
.setProjectId("Some Project").build().getService();
System.out.println("Buckets:");
Page<Bucket> buckets = storage.list();
for (Bucket bucket : buckets.iterateAll()) {
System.out.println(bucket.toString());
}
}
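Note that on Android there is no such filesystem path, and there is no practical way to set GOOGLE_APPLICATION_CREDENTIALS for the app process. A hedged variant of the same idea for the Speech client loads the key from a bundled raw resource instead (the resource name `credentials` is an assumption) and passes it to the client explicitly. Be aware that bundling a service-account key inside an APK is insecure, which is consistent with the question's EDIT #2 moving this work to a back-end:

```java
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.SpeechSettings;
import java.io.IOException;
import java.io.InputStream;

// Sketch: bypass Application Default Credentials by loading the key from a
// bundled raw resource (R.raw.credentials is a hypothetical resource name).
try (InputStream keyStream = getResources().openRawResource(R.raw.credentials)) {
    GoogleCredentials credentials = GoogleCredentials.fromStream(keyStream);
    SpeechSettings settings = SpeechSettings.newBuilder()
            .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
            .build();
    try (SpeechClient speechClient = SpeechClient.create(settings)) {
        // same recognize() call as in the question
    }
} catch (IOException e) {
    e.printStackTrace();
}
```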
Edit 1: Working towards a solution
Try this and see if you can read the JSON file that you are giving as input. The print statement should show the service account with which you are aiming to authenticate:
public static Credential getDriveService(String fileLocation, String servAccAdmin)
throws IOException {
HttpTransport httpTransport = new NetHttpTransport();
JsonFactory jsonFactory = new JacksonFactory();
GoogleCredential googleCredential =
GoogleCredential.fromStream(new FileInputStream(fileLocation), httpTransport, jsonFactory)
.createScoped(SCOPES);
System.out.println("--------------- " + googleCredential.getServiceAccountId());
Credential credential = new GoogleCredential.Builder()
.setTransport(googleCredential.getTransport())
.setJsonFactory(googleCredential.getJsonFactory())
.setServiceAccountPrivateKeyId(googleCredential.getServiceAccountPrivateKeyId())
.setServiceAccountId(googleCredential.getServiceAccountId()).setServiceAccountScopes(SCOPES)
.setServiceAccountPrivateKey(googleCredential.getServiceAccountPrivateKey())
.setServiceAccountUser(servAccAdmin).build();
return credential;
}
Edit 2:
As we are seeing a credentials issue, I am trying the same approach I use to access other Google API services (Drive, Gmail, etc.), where we manually pass the key file, build the credentials, and use those credentials in further service calls. Try adding these dependencies; this is just to troubleshoot using the same way we access Google Drive, Gmail, and so on. We will find out whether the Google Cloud API is able to build the credentials or not:
<dependency>
<groupId>com.google.http-client</groupId>
<artifactId>google-http-client</artifactId>
<version>${project.http.version}</version>
</dependency>
<dependency>
<groupId>com.google.http-client</groupId>
<artifactId>google-http-client-jackson2</artifactId>
<version>${project.http.version}</version>
</dependency>
<dependency>
<groupId>com.google.oauth-client</groupId>
<artifactId>google-oauth-client-jetty</artifactId>
<version>${project.oauth.version}</version>
</dependency>
The Application Default Credentials are not available.
They are available if running in Google Compute Engine.
Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials
I keep getting the above error, even though I have set the environment variable on my local machine with the command below:
export GOOGLE_APPLICATION_CREDENTIALS="/Users/macbook/Downloads/fetebird-2b6fa8261292.json"
If I check the environment variable with the command below in the terminal, it does show the path:
echo $GOOGLE_APPLICATION_CREDENTIALS
In my Micronaut application, I am trying to create a storage bucket during startup:
@Singleton
public class StartUp implements ApplicationEventListener<StartupEvent> {
private final GoogleCloudStorageService googleCloudStorageService;
public StartUp(GoogleCloudStorageService googleCloudStorageService) {
this.googleCloudStorageService = googleCloudStorageService;
}
@Override
public void onApplicationEvent(StartupEvent event) {
try {
this.googleCloudStorageService.createBucketWithStorageClassAndLocation().subscribe();
} catch (IOException e) {
e.printStackTrace();
}
}
}
In the service:
@Singleton
public record GoogleCloudStorageService(GoogleCloudStorageConfiguration googleUploadObjectConfiguration, GoogleCredentialsConfiguration googleCredentialsConfiguration) {
private static final Logger LOG = LoggerFactory.getLogger(GoogleCloudStorageService.class);
public Observable<Void> createBucketWithStorageClassAndLocation() throws IOException {
GoogleCredentials credentials = GoogleCredentials.getApplicationDefault(); // fromStream(new FileInputStream(googleCredentialsConfiguration.getLocation()));
Storage storage = StorageOptions.newBuilder().setCredentials(credentials).setProjectId(googleUploadObjectConfiguration.projectId()).build().getService();
StorageClass storageClass = StorageClass.COLDLINE;
try {
Bucket bucket =
storage.create(
BucketInfo.newBuilder(googleUploadObjectConfiguration.bucketName())
.setStorageClass(storageClass)
.setLocation(googleUploadObjectConfiguration.locationName())
.build());
LOG.info(String.format("Created bucket %s in %s with storage class %s", bucket.getName(), bucket.getLocation(), bucket.getStorageClass()));
} catch (Exception ex) {
LOG.error(ex.getMessage());
}
return Observable.empty();
}
}
The environment variable is null while the application is running:
System.out.println(System.getenv("GOOGLE_APPLICATION_CREDENTIALS"));
The line GoogleCredentials credentials = GoogleCredentials.getApplicationDefault(); causes the following exception:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
at com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentials(DefaultCredentialsProvider.java:134)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:120)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:92)
at fete.bird.service.gcp.GoogleCloudStorageService.createBucketWithStorageClassAndLocation(GoogleCloudStorageService.java:24)
at fete.bird.core.StartUp.onApplicationEvent(StartUp.java:24)
at fete.bird.core.StartUp.onApplicationEvent(StartUp.java:11)
at io.micronaut.context.DefaultBeanContext.notifyEventListeners(DefaultBeanContext.java:1307)
at io.micronaut.context.DefaultBeanContext.publishEvent(DefaultBeanContext.java:1292)
at io.micronaut.context.DefaultBeanContext.start(DefaultBeanContext.java:248)
at io.micronaut.context.DefaultApplicationContext.start(DefaultApplicationContext.java:166)
at io.micronaut.runtime.Micronaut.start(Micronaut.java:71)
at io.micronaut.runtime.Micronaut.run(Micronaut.java:311)
at io.micronaut.runtime.Micronaut.run(Micronaut.java:297)
at fete.bird.ServiceApplication.main(ServiceApplication.java:8)
Is it that Micronaut cannot access the environment variable during a StartupEvent?
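One quick way to confirm what the JVM actually sees: environment variables are inherited from the launching process, so an IDE, Gradle/Maven daemon, or shell started before the `export` will not have the variable at all. A minimal standalone check:

```java
// Minimal check: the JVM only sees environment variables that the process
// which launched it (terminal, IDE, build daemon) passed down to it.
public class EnvCheck {
    public static void main(String[] args) {
        String path = System.getenv("GOOGLE_APPLICATION_CREDENTIALS");
        // Prints "null" if the launching process was started before the
        // variable was exported, or in a different shell session.
        System.out.println("GOOGLE_APPLICATION_CREDENTIALS = " + path);
    }
}
```

If this prints null when run the same way the application is run, the fix is to set the variable in the environment of whatever launches the app (IDE run configuration, service unit, shell profile), not just in one terminal.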
Well, I was missing the instruction below:
Local development/testing
If running locally for development/testing, you can use the Google Cloud SDK. Create Application Default Credentials with gcloud auth application-default login, and then google-cloud will automatically detect such credentials.
https://github.com/googleapis/google-cloud-java
However, this solution is not perfect, since it uses OAuth end-user authentication and produces this warning: Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a "quota exceeded" or "API not enabled" error. For more information, see the documentation on service accounts.
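A hedged alternative that avoids the end-user-credentials warning is to load a service-account key explicitly, as the commented-out fromStream call in the service hints at. The helper below is a sketch; the key path and project id are assumed to come from configuration:

```java
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.FileInputStream;
import java.io.IOException;

// Sketch: build a Storage client from an explicit service-account key
// instead of Application Default Credentials. Both arguments are assumed
// to be supplied by the application's configuration.
static Storage storageFromServiceAccount(String keyPath, String projectId) throws IOException {
    GoogleCredentials credentials;
    try (FileInputStream keyStream = new FileInputStream(keyPath)) {
        credentials = GoogleCredentials.fromStream(keyStream);
    }
    return StorageOptions.newBuilder()
            .setCredentials(credentials)
            .setProjectId(projectId)
            .build()
            .getService();
}
```

This removes the dependency on the environment variable entirely, which also sidesteps the StartupEvent question above.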
My Mule application writes JSON records to a Kinesis stream. I use the KPL producer library. When run locally, it picks up AWS credentials from .aws/credentials and writes records to Kinesis successfully.
However, when I deploy my application to Cloudhub, it throws AmazonClientException, obviously due to not having access to any of directories that DefaultAWSCredentialsProviderChain class supports. (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html)
This is how I attach credentials, so that locally it reads them from .aws/credentials:
config.setCredentialsProvider(new DefaultAWSCredentialsProviderChain());
I couldn't figure out a way to provide credentials explicitly using a my-app.properties file.
Then I tried to create a separate configuration class with getters/setters: I made the access key and secret key private fields and implemented a getter:
public AWSCredentialsProvider getCredentials() {
if(accessKey == null || secretKey == null) {
return new DefaultAWSCredentialsProviderChain();
}
return new StaticCredentialsProvider(new BasicAWSCredentials(getAccessKey(), getSecretKey()));
}
This was intended to be used instead of the DefaultAWSCredentialsProviderChain class, this way:
config.setCredentialsProvider(new AWSConfig().getCredentials());
It still throws the same error when deployed.
The following repo states that it is possible to provide explicit credentials, but I need help figuring out how, because I can't find proper documentation or an example:
https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/src/com/amazonaws/services/kinesis/producer/sample/SampleProducer.java
I faced the same issue and found this solution; I hope it will work for you too.
@Value("${s3_accessKey}")
private String s3_accessKey;
@Value("${s3_secretKey}")
private String s3_secretKey;
// The values above are read from the application.properties file.
BasicAWSCredentials creds = new BasicAWSCredentials(s3_accessKey,
s3_secretKey);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(creds))
        .withRegion(Regions.US_EAST_2)
        .build();
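For the KPL specifically, KinesisProducerConfiguration also accepts an explicit credentials provider, so the same property-injected keys can be wired in directly. A sketch under those assumptions (accessKey/secretKey injected as in the snippet above; the region is an example):

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.kinesis.producer.KinesisProducer;
import com.amazonaws.services.kinesis.producer.KinesisProducerConfiguration;

// Sketch: explicit credentials for the KPL instead of the default
// provider chain, so it also works on Cloudhub where no ~/.aws or
// instance profile is available. accessKey/secretKey are assumed to
// come from my-app.properties.
KinesisProducerConfiguration config = new KinesisProducerConfiguration()
        .setRegion("us-east-1")
        .setCredentialsProvider(new AWSStaticCredentialsProvider(
                new BasicAWSCredentials(accessKey, secretKey)));
KinesisProducer producer = new KinesisProducer(config);
```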
The Amazon Java SDK has marked the constructors for AmazonS3Client deprecated in favor of AmazonS3ClientBuilder.defaultClient(). Following the recommendation, though, does not result in an AmazonS3 client that works the same way. In particular, the builder-created client somehow fails to account for the region. If you run the tests below, the thisFails test demonstrates the problem.
public class S3HelperTest {
@Test
public void thisWorks() throws Exception {
AmazonS3 s3Client = new AmazonS3Client(); // this call is deprecated
s3Client.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
assertNotNull(s3Client);
}
@Test
public void thisFails() throws Exception {
AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
/*
* The following line throws like com.amazonaws.SdkClientException:
* Unable to find a region via the region provider chain. Must provide an explicit region in the builder or
* setup environment to supply a region.
*/
s3Client.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
}
}
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:371)
at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:337)
at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
at com.amazonaws.services.s3.AmazonS3ClientBuilder.defaultClient(AmazonS3ClientBuilder.java:54)
at com.climate.tenderfoot.service.S3HelperTest.thisFails(S3HelperTest.java:21)
...
Is this an AWS SDK Bug? Is there some "region default provider chain" or some mechanism to derive the region from the Environment and set it into the client? It seems really weak that the method to replace the deprecation doesn't result in the same capability.
Looks like a region is required for the builder.
This thread is probably related (though I would use .withRegion(Regions.US_EAST_1) in the third line):
To emulate the previous behavior (no region configured), you'll need
to also enable "forced global bucket access" in the client builder:
AmazonS3 client =
AmazonS3ClientBuilder.standard()
.withRegion("us-east-1") // The first region to try your request against
.withForceGlobalBucketAccess(true) // If a bucket is in a different region, try again in the correct region
.build();
This will suppress the exception you received and automatically retry
the request under the region in the exception. It is made explicit in
the builder so you are aware of this cross-region behavior. Note: The
SDK will cache the bucket region after the first failure, so that
every request against this bucket doesn't have to happen twice.
Also, from the AWS documentation: if you want to use AmazonS3ClientBuilder.defaultClient(), then you need to have
~/.aws/credentials and ~/.aws/config files
~/.aws/credentials contents:
[default]
aws_access_key_id = your_id
aws_secret_access_key = your_key
~/.aws/config contents:
[default]
region = us-west-1
From the same AWS documentation page, if you don't want to hardcode the region/credentials, you can have it as environment variables in your Linux machine the usual way:
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
export AWS_REGION=your_aws_region
BasicAWSCredentials creds = new BasicAWSCredentials("key_ID", "Access_Key");
AWSStaticCredentialsProvider provider = new AWSStaticCredentialsProvider(creds);
AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withCredentials(provider)
        .withRegion(Regions.US_EAST_2)
        .build();
Create a file named "config" under ~/.aws and place the content below in it.
~/.aws/config contents:
[default]
region = us-west-1
output = json
AmazonSQS sqsClient = AmazonSQSClientBuilder
.standard()
.withRegion(Regions.AP_SOUTH_1)
.build();
I'm trying to disable downloading/printing/copying content using the methods setViewersCanCopyContent(false) and setWritersCanShare(false) when creating a file with a service account, but if I open the file in a browser where I'm not logged in to a Google account, I'm still able to use those functions.
EDIT (added more info)
Here is how I am working: I have this service account and also what I've called a "service account owner", which is the email I used to create the service account in the developer console > IAM. When I call my application, my code creates a folder in the service account's Drive and then moves it to my service account owner's Drive, setting that account as owner (using setTransferOwnership(true)). I use this approach because, as far as I can tell, a service account's Drive is not accessible via the browser (only via the API).
Then, when I create a file, I call setParents({FOLDER_ID}), where FOLDER_ID is the ID of the folder in the service account owner's Drive. When I log in to the service account owner's Drive and select a file, I can see that the service account is the owner of the file and anyone with the link can view it, but everyone who can view the file can also download/print/copy its content.
Here is the code I'm using:
HttpTransport transport = GoogleNetHttpTransport.newTrustedTransport();
JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();
final Credential credential = new GoogleCredential.Builder()
.setTransport(transport)
.setJsonFactory(jsonFactory)
.setServiceAccountId({SERVICE_ACCOUNT_ID})
.setServiceAccountScopes(Arrays.asList(DriveScopes.DRIVE_METADATA_READONLY, DriveScopes.DRIVE, DriveScopes.DRIVE_FILE, DriveScopes.DRIVE_APPDATA))
.setServiceAccountPrivateKeyFromP12File(new File("{PATH_TO_P12_FILE}"))
.build();
Drive drive = new Drive.Builder(transport, jsonFactory, credential)
.setHttpRequestInitializer(new HttpRequestInitializer() {
@Override
public void initialize(HttpRequest httpRequest) throws IOException {
credential.initialize(httpRequest);
httpRequest.setConnectTimeout(2 * 60000); // 2 minutes connect timeout
httpRequest.setReadTimeout(2 * 60000); // 2 minutes read timeout
}
})
.setApplicationName("{MY_APP}")
.build();
File fileMetadata = new File();
fileMetadata.setName("test.doc");
fileMetadata.setMimeType("application/vnd.google-apps.document");
fileMetadata.setViewersCanCopyContent(false);
fileMetadata.setWritersCanShare(false);
fileMetadata.setParents(Arrays.asList("{SERVICE_ACCOUNT_OWNER_FOLDER_ID}"));
File file = null;
try {
FileContent mediaContent = new FileContent("application/vnd.openxmlformats-officedocument.wordprocessingml.document", new java.io.File("{PATH_TO_FILE.doc}"));
file = this.drive.files()
.create(fileMetadata, mediaContent)
.setFields("id, webViewLink")
.execute();
} catch (IOException e) {
e.printStackTrace();
}
Permission readPermission = new Permission();
readPermission.setType("anyone");
readPermission.setRole("reader");
drive.permissions().create(file.getId(), readPermission)
.execute();
Is it possible to disable these functionalities with a Service Account?
I am not sure I completely understand the problem here.
First off, a service account is not you. Think of it as a dummy user: it only has the access to your account that you have given it. I am going to assume that you shared the folder with the service account.
Now, if we look at your code, when you are authenticating your service account you are authenticating it with scopes; these define the access granted:
.setServiceAccountScopes(Arrays.asList(DriveScopes.DRIVE_METADATA_READONLY, DriveScopes.DRIVE, DriveScopes.DRIVE_FILE, DriveScopes.DRIVE_APPDATA))
Adding all of those is pretty much overkill, as DriveScopes.DRIVE gives you full access anyway.
So if you want to prevent the service account from writing to the files, then you should only use DriveScopes.DRIVE_METADATA_READONLY.
What I don't understand:
In my Google Drive, when I select the created file, I can see that I am the owner of the file and anyone with the link can see it. But everyone is able to download/print/copy content of the file.
When you log in to the Google Drive website as yourself, you can create a file and the service account will be able to see it. But I am not sure what you mean by "everyone else". No one can see your personal files in Google Drive that you haven't given access to. If you share a file with someone, there is no way to prevent them from downloading/printing/copying it. That has nothing to do with a service account.
Update:
Then, when I login service account owner's Drive and select a file, I can see that the service account is the owner of the file and anyone with the link can view file, but everyone that can view the file, can also download/print/copy content.
No. If you give someone a link to the file and they view it in their Google Drive, there is no way to prevent them from downloading/printing/copying the content.
You're doing it the correct way: fileMetadata.setViewersCanCopyContent(false); is how you should be able to prevent downloads. I'm wondering if somehow this is being lost during the transferOwnership.
If you want your file to be owned by a regular account (your "service account owner"), then you will find it much simpler to just create the file directly under that account, rather than go through the extra steps of first creating it under a service account and then transferring it. It will clean up your code and also avoid any subtle side effects of the transfer process.
I noted that if I update the file like this (after creation):
File updatedFile = new File();
updatedFile.setViewersCanCopyContent(false);
updatedFile.setWritersCanShare(false);
drive.files().update(file.getId(), updatedFile).execute();
Then it disables download/print/copy. I am still not able to disable it on file creation, but this is enough for my requirements.
I have a GAE application which creates some data in the Google Cloud Datastore and stores some binary files in Google Cloud Storage; let's call this application WebApp.
Now I have a different application running on Google Compute Engine; let's call this application ComputeApp.
ComputeApp is a backend process which processes data created by WebApp. I previously asked in another question which API I can use to communicate with Storage from ComputeApp, and I implemented the Storage communication using the Google Cloud Storage JSON API Client Library for Java.
Everything works fine as long as I'm communicating with Storage in the Google cloud. I'm using service account authentication.
Now I need to run my ComputeApp locally on my development PC, so it will take data created by my local WebApp and stored in the local debug Storage. I need this because I want a testing environment in which I can debug my app locally.
My WebApp running locally stores binary data in the local Datastore. I can see it through the local admin console (localhost:8080/_ah/admin): there is a list of GsFileInfo entities and a list of __ah_FakeCloudStorage... entities representing my storage data.
How should I modify my ComputeApp code to force it to connect to my local debug Storage and access the binary data stored locally, instead of connecting to the Google cloud?
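One hedged option is to point the client at a local emulator endpoint instead of the real service. The dev appserver's fake storage is not a drop-in emulator, so this sketch assumes a standalone local GCS emulator (e.g. fake-gcs-server on port 4443) and uses the newer google-cloud-storage library rather than the JSON API client library the question mentions:

```java
import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

// Sketch: point the google-cloud-storage client at a local emulator.
// The host URL and project id are assumptions for local testing only.
Storage localStorage = StorageOptions.newBuilder()
        .setHost("http://localhost:4443")   // local emulator endpoint
        .setProjectId("local-dev-project")  // any id the emulator accepts
        .build()
        .getService();

// Requests now go to the emulator, not to Google's servers.
for (Bucket bucket : localStorage.list().iterateAll()) {
    System.out.println(bucket.getName());
}
```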
Create an account in the Google Cloud Console with service account credentials.
Initialize:
private static final String APPLICATION_NAME = "your-webapp";
private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
GoogleCredential credential = GoogleCredential.getApplicationDefault();
if (credential.createScopedRequired()) {
credential = credential.createScoped(StorageScopes.all());
}
HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
storageService = new Storage.Builder(httpTransport, JSON_FACTORY, credential)
.setApplicationName(APPLICATION_NAME).build();
InputStreamContent mediaContent = new InputStreamContent(contentType, inputStream);
StorageObject objectMetadata = null;
objectMetadata = new StorageObject()
.setName(name)
.setMetadata(ImmutableMap.of("date", new Date().toString()))
.setAcl(ImmutableList.of(
new ObjectAccessControl().setEntity("allUsers").setRole("READER")
))
.setContentDisposition("attachment");
Storage.Objects.Insert insertObject = storageService.objects().insert("staging.your-app.appspot.com", objectMetadata,
mediaContent);
// For small files, you may wish to call setDirectUploadEnabled(true), to
// reduce the number of HTTP requests made to the server.
if (mediaContent.getLength() > 0 && mediaContent.getLength() <= 2 * 1000 * 1000 /* 2MB */) {
insertObject.getMediaHttpUploader().setDirectUploadEnabled(true);
}
insertObject.execute();