I wrote a file to Google Cloud Storage using the instructions given here:
https://developers.google.com/appengine/docs/java/googlestorage/overview
The code runs and executes successfully, however, after logging into my Storage account, I don't see the newly written file in my bucket.
Any ideas as to why?
This is the export code I am using:
try {
    // Get the file service
    FileService fileService = FileServiceFactory.getFileService();
    /**
     * Set up properties of your new object.
     * After finalizing objects, they are accessible
     * through Cloud Storage with the URL:
     * http://storage.googleapis.com/my_bucket/my_object
     */
    GSFileOptionsBuilder optionsBuilder = new GSFileOptionsBuilder()
        .setBucket(bucket)
        .setKey(key)
        .setAcl("public-read");
    // Create your object
    AppEngineFile writableFile = fileService
        .createNewGSFile(optionsBuilder.build());
    // Open a channel for writing
    boolean lockForWrite = false;
    FileWriteChannel writeChannel = fileService.openWriteChannel(
        writableFile, lockForWrite);
    // For this example, we write to the object using a PrintWriter
    PrintWriter out = new PrintWriter(Channels.newWriter(
        writeChannel, "UTF8"));
    Iterator<String> it = spRes.iterator();
    while (it.hasNext()) {
        out.println(it.next());
    }
    // Close without finalizing and save the file path for writing later
    out.close();
    String path = writableFile.getFullPath();
    // Write more to the file in a separate request:
    writableFile = new AppEngineFile(path);
    // Lock the file because we intend to finalize it and
    // no one else should be able to edit it
    lockForWrite = true;
    writeChannel = fileService.openWriteChannel(writableFile,
        lockForWrite);
    // Now finalize
    writeChannel.closeFinally();
} catch (IOException e) {
    result = "Failed to export";
    e.printStackTrace();
}
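The Channels.newWriter/PrintWriter pattern used above can be exercised against a local FileChannel, which is handy for checking the writing logic in isolation. This is my own sketch, not part of the original snippet; the temp-file path and the sample lines are made up for illustration:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ChannelWriteSketch {
    public static void main(String[] args) throws IOException {
        // Stand-in for the App Engine write channel: a plain FileChannel
        Path target = Files.createTempFile("export", ".txt");
        FileChannel channel = FileChannel.open(target, StandardOpenOption.WRITE);

        // Same pattern as the snippet: wrap the channel in a PrintWriter
        PrintWriter out = new PrintWriter(Channels.newWriter(channel, "UTF8"));
        List<String> spRes = Arrays.asList("line one", "line two");
        Iterator<String> it = spRes.iterator();
        while (it.hasNext()) {
            out.println(it.next());
        }
        out.close();

        System.out.println(Files.readAllLines(target).size()); // prints 2
    }
}
```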
I believe you have not added the email of your application, which you can find under Application Settings of your App Engine application. You then need to add this email to the Team under the Google APIs Console for Google Cloud Storage, with Owner privilege. Also make sure you are using the same bucket name that you created in the online browser tool for Cloud Storage.
It looks like I had two different projects set up in the Google Cloud Console and was updating the wrong one.
Works now. Thank you for your help.
As Ankur said, you have to deploy your code on App Engine to write to Cloud Storage. Otherwise the files will only be stored on your local hard disk.
We have the getFileSystemClient API - https://learn.microsoft.com/en-us/java/api/overview/azure/storage-file-datalake-readme?view=azure-java-stable#create-a-datalakefilesystemclient
and the create file system API - https://learn.microsoft.com/en-us/java/api/overview/azure/storage-file-datalake-readme?view=azure-java-stable#create-a-file-system
Before saving data into an Azure Data Lake file system, how do I check whether the file system already exists? If it does not exist, I want to invoke the "create a file system" API; otherwise, the getFileSystemClient API.
Unfortunately, there is no direct way to check whether a file system exists; you can use getFileName() to get the file name without the path, and wrap the call in a try/catch block.
Within a file system, this method creates a new file. If another file with the same name already exists, it will be overwritten:
DataLakeFileSystemClient.createFileWithResponse(String fileName, String permissions, String umask, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions, Duration timeout, Context context) Method | Microsoft Docs
DataLakeServiceClient does not provide an API to check if a file system exists. The following worked for me (maybe there is a better way to do it):
DataLakeServiceClient dataLakeServiceClient = new DataLakeServiceClientBuilder()
    .endpoint("https://XXXX.dfs.core.windows.net")
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();

// Look for a file system named "raw1" among the existing ones
Optional<FileSystemItem> raw1 = dataLakeServiceClient.listFileSystems().stream()
    .filter(x -> x.getName().equals("raw1"))
    .findFirst();
if (raw1.isPresent()) {
    // proceed
}
I am trying to transfer a file using Google's Nearby Connections API. I can get all components of the transfer to work, so that all of the file's data is transferred, but the file then ends up in Nearby's scoped storage, so I am unable to access it from my app to process the data into the appropriate file type and then save and rename it as necessary.
The current method I am using to process the payload is:
private void processFilePayload(long payloadId) {
    // BYTES and FILE could be received in any order, so we call this when either
    // the BYTES or the FILE payload is completely received. The file payload is
    // considered complete only when both have been received.
    Payload filePayload = completedFilePayloads.get(payloadId);
    String filename = filePayloadFilenames.get(payloadId);
    if (filePayload != null && filename != null) {
        completedFilePayloads.remove(payloadId);
        filePayloadFilenames.remove(payloadId);
        // Get the received file (which will be in the Downloads folder)
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
            File fromPayload = filePayload.asFile().asJavaFile();
            Uri uri = Uri.fromFile(fromPayload);
            try {
                // Copy the file to a new location.
                InputStream in = getApplicationContext().getContentResolver().openInputStream(uri);
                copyStream(in, new FileOutputStream(new File(getApplicationContext().getCacheDir(), filename)));
            } catch (IOException e) {
                // Log the error.
                Log.e("copy file", e.toString());
            } finally {
                // Delete the original file.
                getApplicationContext().getContentResolver().delete(uri, null, null);
            }
        } else {
            File payloadFile = filePayload.asFile().asJavaFile();
            // Rename the file.
            payloadFile.renameTo(new File(payloadFile.getParentFile(), filename));
        }
    }
}
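copyStream is a helper that is not shown above; a minimal version (my own sketch, not taken verbatim from the Nearby samples) might look like this:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
    /** Copies everything from in to out, closing both streams when done. */
    static void copyStream(InputStream in, OutputStream out) throws IOException {
        try {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            out.flush();
        } finally {
            in.close();
            out.close();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "payload bytes".getBytes("UTF-8");
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        copyStream(new ByteArrayInputStream(data), sink);
        System.out.println(sink.toString("UTF-8")); // prints "payload bytes"
    }
}
```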
Because of Android 11's scoped storage, to access the file, the file's Uri needs to be used to create an input stream with a content resolver.
According to the Nearby documentation there should be a method Payload.File.asUri, so I should be able to use the line Uri payloadUri = filePayload.asFile().asUri();, but this is not actually available in the API, despite using the most recent version of Nearby.
On top of this, the use of Payload.File.asJavaFile() is supposed to be deprecated according to the Google Nearby documentation.
I have seen some other answers for similar problems where the suggestion is to use MediaStore, but this is not possible, as the file does not have any extension yet, so it doesn't show up as any particular file type.
Note: I have requested read/write external storage permissions both in the manifest and at runtime.
===Update===
Payload.asFile().asUri() is available in com.google.android.gms:play-services-nearby:18.0.0
============
Sorry about that. We'll be releasing an update soon with #asUri properly exposed.
In the meantime, if you target API 29, you can use requestLegacyExternalStorage=true as a workaround. (See more: https://developer.android.com/about/versions/11/privacy/storage)
Is there any way to download a Google Drive file to a custom location? I am using this code to get the file:
courses.get(0).getCourseMaterialSets().get(0).getMaterials().get(0).getDriveFile()
This call returns a File object. How do I save it locally?
Or is there any way to download Google Drive files using the Classroom API?
I don't think there's a way to do this using the Classroom API. To download files from Google Drive, check the download-files tutorial for the Android API for Drive.
Downloading a file
Preferred method: using alt=media
To download files, you make an authorized HTTP GET request to the file's resource URL and include the query parameter alt=media. For example:
GET https://www.googleapis.com/drive/v2/files/0B9jNhSvVjoIVM3dKcGRKRmVIOVU?alt=media
Authorization: Bearer ya29.AHESVbXTUv5mHMo3RYfmS1YJonjzzdTOFZwvyOAUVhrs
Downloading the file requires the user to have at least read access. Additionally, your app must be authorized with a scope that allows reading of file content. For example, an app using the drive.readonly.metadata scope would not be authorized to download the file contents. Users with edit permission may restrict downloading by read-only users by setting the restricted label to true.
Here's a snippet from the guide:
/**
 * Download a file's content.
 *
 * @param service Drive API service instance.
 * @param file Drive File instance.
 * @return InputStream containing the file's content if successful,
 *         {@code null} otherwise.
 */
private static InputStream downloadFile(Drive service, File file) {
    if (file.getDownloadUrl() != null && file.getDownloadUrl().length() > 0) {
        try {
            // Uses the alt=media query parameter to request content
            return service.files().get(file.getId()).executeMediaAsInputStream();
        } catch (IOException e) {
            // An error occurred.
            e.printStackTrace();
            return null;
        }
    } else {
        // The file doesn't have any content stored on Drive.
        return null;
    }
}
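To answer the "save it locally" part: once you have the InputStream, you can write it to any path you like with java.nio. A generic sketch (the target path and sample content here are my own, for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SaveStream {
    /** Writes the given stream to the target path, replacing any existing file. */
    static void saveToFile(InputStream in, Path target) throws IOException {
        Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the InputStream returned by downloadFile(...)
        InputStream in = new ByteArrayInputStream("drive file content".getBytes("UTF-8"));
        Path target = Files.createTempDirectory("drive").resolve("myfile.bin");
        saveToFile(in, target);
        System.out.println(new String(Files.readAllBytes(target), "UTF-8")); // prints "drive file content"
    }
}
```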
How to rename or copy a file in Azure Storage using the Java file system SDK
Is there any way to rename or copy a file stored in Azure Storage via azurestorage.jar for Java? If so, please help us.
Assuming the file is on a file share mounted in the system, you can use Files.copy(...) to copy the file:
Path sourcePath = new File("path/to/source/file").toPath();
Path targetPath = new File("path/to/target/file").toPath();
Files.copy(sourcePath, targetPath);
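For completeness, here is a runnable local sketch (the paths are temp files of my own choosing): Files.copy duplicates a file, and Files.move is the closest thing to a rename on a mounted share:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class LocalCopyRename {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("share");
        Path source = dir.resolve("Readme.txt");
        Files.write(source, "hello".getBytes("UTF-8"));

        // Copy: both files exist afterwards
        Path copy = dir.resolve("Readme-copy.txt");
        Files.copy(source, copy, StandardCopyOption.REPLACE_EXISTING);

        // "Rename": move the file to a new name; the old name is gone
        Path renamed = dir.resolve("Readme2.txt");
        Files.move(copy, renamed, StandardCopyOption.REPLACE_EXISTING);

        System.out.println(Files.exists(copy) + " "
            + new String(Files.readAllBytes(renamed), "UTF-8")); // prints "false hello"
    }
}
```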
Note that this code downloads the source file to the local host, then uploads it to the Azure storage service.
If you want to avoid the download and upload, use the Azure Storage REST API to copy the file. If you don't want to deal with the REST API directly, use azure-sdk-for-java or the similar SDKs for Python and C#.
https://stackoverflow.com/a/66774796/12066108 shows how to copy a file with the azure-sdk-for-java library.
You could use CloudFile.startCopy(source) to copy and, in effect, rename files. Here is the complete code:
package nau.edu.cn.steven;

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.file.CopyStatus;
import com.microsoft.azure.storage.file.CloudFile;
import com.microsoft.azure.storage.file.CloudFileClient;
import com.microsoft.azure.storage.file.CloudFileDirectory;
import com.microsoft.azure.storage.file.CloudFileShare;

public class AzureCopyFile {
    // Connection string
    public static final String storageConnectionString =
        "DefaultEndpointsProtocol=http;"
        + "AccountName=your_account_name;"
        + "AccountKey=your_account_key";

    public static void main(String[] args) {
        try {
            CloudStorageAccount account = CloudStorageAccount.parse(storageConnectionString);
            CloudFileClient fileClient = account.createCloudFileClient();
            // Get a reference to the file share
            CloudFileShare share = fileClient.getShareReference("sampleshare");
            if (share.createIfNotExists()) {
                System.out.println("New share created");
            }
            // Get a reference to the root directory of the share
            CloudFileDirectory rootDir = share.getRootDirectoryReference();
            // Old file
            CloudFile oldCloudFile = rootDir.getFileReference("Readme.txt");
            // New file
            CloudFile newCloudFile = rootDir.getFileReference("Readme2.txt");
            // Start the copy
            newCloudFile.startCopy(oldCloudFile.getUri());
            // Poll until the copy has finished
            while (newCloudFile.getCopyState().getStatus() == CopyStatus.PENDING) {
                // Sleep for a second between polls
                Thread.sleep(1000);
            }
        } catch (Exception e) {
            System.out.print("Exception encountered: ");
            System.out.println(e.getMessage());
            System.exit(-1);
        }
    }
}
According to the javadocs for the class CloudFile of Azure File Storage, there is no rename operation supported natively for File Storage, nor for Blob Storage.
If you want to rename a file, you need to perform two steps: copy the file under a new name, then delete the file with the old name.
There are two threads below, from SO and MSDN respectively:
Programmatically (.NET) renaming an Azure File or Directory using File (not Blob) Storage - the same approach applies for Java.
https://social.msdn.microsoft.com/Forums/azure/en-US/04c415fb-cc1a-4270-986b-03a68b05aa81/renaming-files-in-blobs-storage?forum=windowsazuredata
As @Steven said, the copy operation is supported via the function startCopy for a new file reference.
I am working on an App Engine application where we want to make the content available for offline users. This means we need to collect all the blobstore files in use and save them off for the offline user. I am doing this on the server side so that it is only done once, and not for every end user, and I am running the process on the task queue since it can easily time out. Assume all of this code runs as a task.
Small collections work fine, but larger collections result in an App Engine error 202 and the task restarts again and again. The sample code below combines Writing Zip Files to GAE Blobstore with the advice for large zip files in Google Appengine JAVA - Zip lots of images saving in Blobstore (reopening the channel as needed); AppEngine Error Code 202 - Task Queue covers the error itself.
// Set up the zip file that will be saved to the blobstore
AppEngineFile assetFile = fileService.createNewBlobFile("application/zip", assetsZipName);
FileWriteChannel writeChannel = fileService.openWriteChannel(assetFile, true);
ZipOutputStream assetsZip = new ZipOutputStream(new BufferedOutputStream(Channels.newOutputStream(writeChannel)));
HashSet<String> blobsEntries = getAllBlobEntries(); // gets the blobs that I need
saveBlobAssetsToZip(blobsEntries);
writeChannel.closeFinally();
.....
private void saveBlobAssetsToZip(HashSet<String> blobsEntries) throws IOException {
    for (String blobId : blobsEntries) {
        // Gets the blobstore key that will resolve to the blobstore entry - ignore
        // the bsmd, as that is internal to our wrapper for the blobstore.
        BlobKey blobKey = new BlobKey(bsmd.getBlobId());
        // Gets the blob file as a byte array
        byte[] blobData = blobstoreService.fetchData(blobKey, 0, BlobstoreService.MAX_BLOB_FETCH_SIZE - 1);
        // Type of file saved from our metadata (i.e. .jpg, .png, .pdf)
        String extension = ...;
        assetsZip.putNextEntry(new ZipEntry(blobId + "." + extension));
        assetsZip.write(blobData);
        assetsZip.closeEntry();
        assetsZip.flush();
        // I have found that if I don't close the channel and reopen it, I can get an
        // IOException because the files in the blobstore are too large; thus, write a
        // file, then close and reopen.
        assetsZip.close();
        writeChannel.close();
        String assetsPath = assetFile.getFullPath();
        assetFile = new AppEngineFile(assetsPath);
        writeChannel = fileService.openWriteChannel(assetFile, true);
        assetsZip = new ZipOutputStream(new BufferedOutputStream(Channels.newOutputStream(writeChannel)));
    }
}
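Separated from the blobstore plumbing, the putNextEntry/write/closeEntry cycle in the loop above can be exercised locally; this sketch writes to an in-memory buffer, and the entry names and contents are made up for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipSketch {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        ZipOutputStream zip = new ZipOutputStream(buffer);

        // One putNextEntry/write/closeEntry cycle per asset, as in the loop above
        String[] names = {"asset1.jpg", "asset2.png"};
        for (String name : names) {
            zip.putNextEntry(new ZipEntry(name));
            zip.write(("data for " + name).getBytes("UTF-8"));
            zip.closeEntry();
        }
        zip.close();

        System.out.println("zip bytes written: " + (buffer.size() > 0)); // prints "zip bytes written: true"
    }
}
```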
What is the proper way to get this to run on App Engine? Again, small projects work fine and the zip saves, but larger projects with more blob files result in this error.
I bet the instance is running out of memory. Are you using Appstats? It can consume a large amount of memory. If that doesn't help, you will probably need to increase the instance size.