How can I read a mail inbox using the IMAP protocol and JavaMail, and then store the mails on the local disk? There is no documentation for mstor.
I tried the approach below, but it seems that MStorStore just reads an existing local mbox instead of creating and updating one from the external server passed as parameters to the connect() function. I get the error: Folder [Inbox] does not exist.
Session lSession = Session.getDefaultInstance(props);
MStorStore lStore = new MStorStore(lSession , new URLName("mstor:c:/some_path/" + _mailModel.account.login));
lStore.connect(_mailModel.account.imap, _mailModel.account.login, _mailModel.account.password);
Folder lInbox = lStore.getDefaultFolder().getFolder("Inbox");
The question is how to create an mbox file from a javax.mail.Store that I could read and update using mstor.
I don't know if I am answering the right question (or answering a question at all), but here is a method I wrote in a Scala program that takes an array of JavaMail Messages (acquired via IMAP) and writes them to a new mbox file in a directory named "mbox" in the root of my project, using MStorStore. The new file is named whatever is passed in the mboxName parameter.
def writeToMbox(messages: Array[Message], mboxName: String): Unit = {
  val mProps = System.getProperties
  // Skip mstor's metadata files; we only want the raw mbox
  mProps.setProperty("mstor.mbox.metadataStrategy", "none")
  val mSession = Session.getDefaultInstance(mProps)
  // Point mstor at the local "mbox" directory
  val mStore = new MStorStore(mSession, new URLName("mstor:mbox"))
  mStore.connect()
  val mFolder = mStore.getDefaultFolder
  // Create the backing file first; mstor will not create it for you
  new File("mbox", mboxName).createNewFile
  val mbox = mFolder.getFolder(mboxName)
  mbox.open(Folder.READ_WRITE)
  mbox.appendMessages(messages)
  mbox.close(false)
  mStore.close()
}
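For the original Java question, the same approach should translate directly: fetch the messages from the IMAP server with plain JavaMail, then append them to a local mstor folder. Here is a hedged sketch (the host, login, password, and local path are placeholders, not real values); note that the mbox file is created on disk before the folder is opened, which is the step the code in the question appears to be missing:

import java.io.File;
import java.util.Properties;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;
import javax.mail.URLName;
import net.fortuna.mstor.MStorStore;

public class ImapToMbox {
    public static void main(String[] args) throws Exception {
        Properties props = System.getProperties();
        props.setProperty("mstor.mbox.metadataStrategy", "none");
        Session session = Session.getDefaultInstance(props);

        // 1. Fetch the messages from the IMAP server (placeholder host/credentials)
        Store imapStore = session.getStore("imaps");
        imapStore.connect("imap.example.com", "login", "password");
        Folder remoteInbox = imapStore.getFolder("INBOX");
        remoteInbox.open(Folder.READ_ONLY);
        Message[] messages = remoteInbox.getMessages();

        // 2. Append them to a local mbox folder managed by mstor
        new File("c:/some_path/login").mkdirs();
        new File("c:/some_path/login", "Inbox").createNewFile(); // mstor will not create the file itself
        MStorStore localStore = new MStorStore(session, new URLName("mstor:c:/some_path/login"));
        localStore.connect(); // local store; no remote credentials involved
        Folder localInbox = localStore.getDefaultFolder().getFolder("Inbox");
        localInbox.open(Folder.READ_WRITE);
        localInbox.appendMessages(messages);
        localInbox.close(false);

        localStore.close();
        remoteInbox.close(false);
        imapStore.close();
    }
}

The key point is that MStorStore only ever manages local files, so the IMAP fetch has to happen through a regular JavaMail Store; the connect() call on the mstor store takes no remote credentials.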
I have a BlobServiceAsyncClient.
I used TenantID, ClientID, ClientSecret, and ContainerName to create the blobContainerAsyncClient.
I am uploading the file as:
blobContainerAsyncClient.getBlobAsyncClient(fileName).upload(.........);
You can use the code below. It creates a Shared Access Signature with read-only permission that is valid only for the next 10 minutes.
public string CreateSAS(string blobName)
{
    var container = blobClient.GetContainerReference(ContainerName);

    // Create the container if it doesn't already exist
    container.CreateIfNotExists();

    var blob = container.GetBlockBlobReference(blobName);

    // Read-only policy that expires 10 minutes from now
    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(10),
    });

    return sas;
}
Please refer to this document for more information: https://tech.trailmax.info/2013/07/upload-files-to-azure-blob-storage-with-using-shared-access-keys/
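Note that the snippet above is from the older .NET storage SDK, while the question uses the v12 Java async client with AAD credentials (TenantID/ClientID/ClientSecret). With AAD you cannot sign a SAS with an account key, so the equivalent there is a user delegation SAS. A minimal sketch, assuming the blobServiceAsyncClient and blobContainerAsyncClient from the question and that a user delegation SAS fits your setup:

// Read-only SAS for one blob, valid for the next 10 minutes (v12 Java SDK)
OffsetDateTime expiry = OffsetDateTime.now().plusMinutes(10);

// With AAD credentials, fetch a user delegation key to sign the SAS with
UserDelegationKey delegationKey = blobServiceAsyncClient
        .getUserDelegationKey(OffsetDateTime.now(), expiry)
        .block();

BlobSasPermission permission = new BlobSasPermission().setReadPermission(true);
BlobServiceSasSignatureValues sasValues = new BlobServiceSasSignatureValues(expiry, permission);

// Returns the SAS query string; append it to the blob URL to share it
String sas = blobContainerAsyncClient
        .getBlobAsyncClient(fileName)
        .generateUserDelegationSas(sasValues, delegationKey);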
I want to fetch files from SFTP that were created after a given timestamp (the time of the last pull) in Java. I am using j2ssh at the moment. Please let me know if some other API supports such a feature.
JSch supports the ls command, which brings back all the attributes of the remote files. From there, you can write a little code to filter down to the files you want to retrieve.
Java Doc: http://epaul.github.io/jsch-documentation/javadoc/
This example compares the remote file timestamps to find the oldest file; it wouldn't be much of a stretch to modify it to compare your last run date against each remote file date and do the download as part of the loop, as in the sketch after the code.
Code from Finding file size and last modified of SFTP file using Java
try {
    list = Main.chanSftp.ls("*.xml");
    if (list.isEmpty()) {
        fileFound = false;
    } else {
        // Start with the first entry as the current oldest
        lsEntry = (ChannelSftp.LsEntry) list.firstElement();
        oldestFile = lsEntry.getFilename();
        attrs = lsEntry.getAttrs();
        currentOldestTime = attrs.getMTime();
        for (Object sftpFile : list) {
            lsEntry = (ChannelSftp.LsEntry) sftpFile;
            nextName = lsEntry.getFilename();
            attrs = lsEntry.getAttrs();
            // getMTime() returns the modification time in seconds since the epoch
            nextTime = attrs.getMTime();
            if (nextTime < currentOldestTime) {
                oldestFile = nextName;
                currentOldestTime = nextTime;
            }
        }
    }
} catch (SftpException e) {
    e.printStackTrace();
}
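To adapt it to the original question (fetch only files created after the time of the last pull), compare each entry's mtime against the saved timestamp and download the matches in the same loop. A rough sketch, where lastPullTime (epoch seconds) and the "downloads" target directory are assumptions:

try {
    // Download every *.xml modified after the previous pull
    Vector list = Main.chanSftp.ls("*.xml");
    for (Object sftpFile : list) {
        ChannelSftp.LsEntry entry = (ChannelSftp.LsEntry) sftpFile;
        // getMTime() returns seconds since the epoch, same unit as lastPullTime
        if (entry.getAttrs().getMTime() > lastPullTime) {
            Main.chanSftp.get(entry.getFilename(), "downloads/" + entry.getFilename());
        }
    }
} catch (SftpException e) {
    e.printStackTrace();
}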
I am trying to send a file through multipart form data using Scala and Play 2.4.6.
def sendFile(file: FilePart[TemporaryFile]): Option[Future[Unit]] = {
  val asyncHttpClient: AsyncHttpClient = WS.client.underlying
  val postBuilder = asyncHttpClient.preparePost(s"${config.ocrProvider.host}")
  val multiPartPost = postBuilder
    .addBodyPart(new StringPart("access_token", s"${config.ocrProvider.accessToken}"))
    .addBodyPart(new StringPart("typename", s"${config.ocrProvider.typeName}"))
    .addBodyPart(new StringPart("action", s"${config.ocrProvider.actionUpload}"))
    .addBodyPart(new FilePart(???))
}
I'm new to Scala and Play, and I would like to send the file method parameter as a new FilePart. Is that possible?
Yes, just like this:
.addBodyPart(new FilePart("myFile", new File("app/controllers/Application.scala")))
You can find a full example of a multipart POST in Play/Scala in my answer here: Sending multi part form data in post method in play/scala
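For completeness, here is what the whole request could look like with the underlying AsyncHttpClient that Play 2.4 ships. The URL, part names, token, and file below are placeholders, not the asker's real config:

import java.io.File;
import java.util.concurrent.Future;
import com.ning.http.client.AsyncHttpClient;
import com.ning.http.client.Response;
import com.ning.http.client.multipart.FilePart;
import com.ning.http.client.multipart.StringPart;

public class MultipartUpload {
    public static void main(String[] args) throws Exception {
        AsyncHttpClient client = new AsyncHttpClient();
        // String fields first, then the file part itself
        Future<Response> whenResponse = client.preparePost("https://example.com/upload")
                .addBodyPart(new StringPart("access_token", "my-token"))
                .addBodyPart(new StringPart("action", "upload"))
                .addBodyPart(new FilePart("file", new File("document.pdf")))
                .execute();
        System.out.println(whenResponse.get().getStatusCode());
        client.close();
    }
}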
How do I set the app properties of a file using Google Drive v3 in Java?
The reference says: "files.update with {'appProperties':{'key':'value'}}", but I don't understand how to apply that to my Java code.
I've tried things like
file = service.files().create(body).setFields("id").execute();
Map<String, String> properties = new HashMap<>();
properties.put(DEVICE_ID_KEY, deviceId);
file.setAppProperties(properties);
service.files().update(file.getId(), file).setFields("appProperties").execute();
but then I get an error that "The resource body includes fields which are not directly writable."
And to get the data:
File fileProperty = service.files().get(sFileId).setFields("appProperties").execute();
So how to set and get the properties for the file?
Thanks! :)
Edit
I tried
file = service.files().create(body).setFields("id").execute();
Map<String, String> properties = new HashMap<>();
properties.put(DEVICE_ID_KEY, deviceId);
file.setAppProperties(properties);
service.files().update(file.getId(), file).execute();
but I still get the same error message.
To avoid the error "The resource body includes fields which are not directly writable." in v3 when calling update, you need to create a new, empty File containing only the changes you want, and pass that to the update function.
I wrote this and other notes as a v3 Migration Guide here.
The Drive API client for Java v3 indicates that File.setAppProperties requires a HashMap<String,String> parameter. Try removing the setFields("appProperties") call, since you are trying to overwrite appProperties itself (you're still calling update at this time).
When retrieving appProperties, you'll just need to call getAppProperties.
Hope this helps!
File fileMetadata = new File();
java.io.File filePath = new java.io.File(YOUR_LOCAL_FILE_PATH);
Map<String, String> map = new HashMap<String, String>();
map.put(YOUR_KEY, YOUR_VALUE); // both can be filled with custom Strings
fileMetadata.setAppProperties(map);
FileContent mediaContent = new FileContent(YOUR_IMPORT_FORMAT, filePath);
File file = service.files().create(fileMetadata, mediaContent)
        .setFields("id, appProperties")
        .execute();
For YOUR_IMPORT_FORMAT, fill in a value from this link: https://developers.google.com/drive/api/v3/manage-uploads (there is an explanation below the example code).
For setFields("id, appProperties"), see this link: https://developers.google.com/drive/api/v3/migration. This is the most important part, I think: if you don't set the value in the setFields method, your additional input will not be written.
With v3, to add or update appProperties for an existing file without getting the error "The resource body includes fields which are not directly writable.", you should do:
String fileId = "Your file id key here";
Map<String, String> appPropertiesMap = new HashMap<String, String>();
appPropertiesMap.put("MyKey", "MyValue");
appPropertiesMap.put("MySecondKey", "any value");
// Set only the metadata you want to change.
// Do not set "id", or you will get the "The resource body includes fields which are not directly writable." error.
File fileMetadata = new File();
fileMetadata.setAppProperties(appPropertiesMap);
File updatedFileMetadata = driveService.files().update(fileId, fileMetadata).setFields("id, appProperties").execute();
System.out.printf("Hey, I see my appProperties :-) %s \n", updatedFileMetadata.toPrettyString());
I'm looking to leverage Rackspace's Cloud Files platform for large object storage (Word docs, images, etc.). Following some of their guides, I found a useful code snippet that looks like it should work, but doesn't in my case.
Iterable<Module> modules = ImmutableSet.<Module> of(
        new Log4JLoggingModule());

Properties properties = new Properties();
properties.setProperty(LocationConstants.PROPERTY_ZONE, ZONE);
properties.setProperty(LocationConstants.PROPERTY_REGION, "ORD");

CloudFilesClient cloudFilesClient = ContextBuilder.newBuilder(PROVIDER)
        .credentials(username, apiKey)
        .overrides(properties)
        .modules(modules)
        .buildApi(CloudFilesClient.class);
The problem is that when this code executes, it tries to log me into the IAD (Virginia) instance of Cloud Files. My organization's goal is to use the ORD (Chicago) instance as the primary, colocated with our cloud, and to use DFW as a backup environment. The login response lists the IAD instance first, so I'm assuming jclouds is using that. Browsing around, it looks like the ZONE/REGION properties are ignored for Cloud Files. Is there any way to override the authentication handling so I can loop through the returned providers and choose which one to log in to?
Update:
The accepted answer is mostly good, with some more info available in this snippet:
RestContext<CommonSwiftClient, CommonSwiftAsyncClient> swift = cloudFilesClient.unwrap();
CommonSwiftClient client = swift.getApi();
SwiftObject object = client.newSwiftObject();
object.getInfo().setName(FILENAME + SUFFIX);
object.setPayload("This is my payload."); // the payload can also be an InputStream
String id = client.putObject(CONTAINER, object);
System.out.println(id);
SwiftObject obj2 = client.getObject(CONTAINER,FILENAME + SUFFIX);
System.out.println(obj2.getPayload());
We are working on the next version of jclouds (1.7.1), which should include multi-region support for Rackspace Cloud Files and OpenStack Swift. In the meantime, you might be able to use the following code as a workaround.
private void uploadToRackspaceRegion() {
    Iterable<Module> modules = ImmutableSet.<Module> of(new Log4JLoggingModule());

    // Region selection is limited to the swift-keystone provider
    String provider = "swift-keystone";
    String identity = "username";
    String credential = "password";
    String endpoint = "https://identity.api.rackspacecloud.com/v2.0/";
    String region = "ORD";

    Properties overrides = new Properties();
    overrides.setProperty(LocationConstants.PROPERTY_REGION, region);
    overrides.setProperty(Constants.PROPERTY_API_VERSION, "2");

    BlobStoreContext context = ContextBuilder.newBuilder(provider)
            .endpoint(endpoint)
            .credentials(identity, credential)
            .modules(modules)
            .overrides(overrides)
            .buildView(BlobStoreContext.class);

    RestContext<CommonSwiftClient, CommonSwiftAsyncClient> swift = context.unwrap();
    CommonSwiftClient client = swift.getApi();

    SwiftObject uploadObject = client.newSwiftObject();
    uploadObject.getInfo().setName("test.txt");
    uploadObject.setPayload("This is my payload."); // the payload can also be an InputStream

    String eTag = client.putObject("jclouds", uploadObject);
    System.out.println("eTag = " + eTag);

    SwiftObject downloadObject = client.getObject("jclouds", "test.txt");
    System.out.println("downloadObject = " + downloadObject.getPayload());

    context.close();
}
Use Swift just as you would use Cloud Files. Keep in mind that if you need the Cloud Files CDN features, the above won't work for them. Also, know that this way of doing things will eventually be deprecated.
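If you want to avoid the Swift-specific client entirely (since, as noted, it will eventually be deprecated), the same upload can go through jclouds' portable BlobStore view instead. A minimal sketch reusing the context from the code above, with the container name "jclouds" as a placeholder:

// Same upload via the portable BlobStore API, which outlives the Swift client
BlobStore blobStore = context.getBlobStore();
Blob blob = blobStore.blobBuilder("test.txt")
        .payload("This is my payload.")
        .build();
String eTag = blobStore.putBlob("jclouds", blob);
System.out.println("eTag = " + eTag);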