I want to capture a specific file name, and the easiest way I found was a library called JavaXT. Based on the examples on the official site (http://www.javaxt.com/javaxt-core/io/Directory/Recursive_Directory_Search) I tried to print the result in my console application with:
javaxt.io.Directory directory = new javaxt.io.Directory("/temp");
javaxt.io.File[] files;
//Return a list of PDF documents found in the current directory and in any subdirectories
files = directory.getFiles("*.pdf", true);
System.out.println(files);
But the returned value is always strange characters like [Ljavaxt.io.File;@5266db4e
Could someone help me print the correct file name(s)?
When you print an array directly, what you get is its type descriptor and hash code, not its contents. Try this to see it:
Integer[] a = { 1, 2, 3 };
System.out.println(a);
the output will be
[Ljava.lang.Integer;@3244331c
If you want to print element by element, you can iterate through the array. In this case, using a for-each:
for (javaxt.io.File f : files)
System.out.println(f);
Note that this will print the String returned by the method toString() of the object.
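If you just want the contents without writing a loop, java.util.Arrays.toString works on any object array, including javaxt.io.File[] (shown here with a plain Integer[] so the sketch is self-contained):

```java
import java.util.Arrays;

public class PrintArray {
    public static void main(String[] args) {
        Integer[] a = { 1, 2, 3 };
        // Printing the array itself gives the type descriptor and hash code,
        // something like [Ljava.lang.Integer;@3244331c
        System.out.println(a);
        // Arrays.toString calls toString() on each element instead.
        System.out.println(Arrays.toString(a)); // prints [1, 2, 3]
    }
}
```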
Your files variable is an array. You need
for(javaxt.io.File f:files) System.out.println(f);
Because files is an array, Java will print the array type and the hex hash code.
I'm passing an array of arrays to a Java method, and I need to write that data to a new file (which will be loaded into an S3 bucket).
How do I do this? I haven't been able to find an example.
Also, I'm sure "object" is not the correct data type for this attribute, and array doesn't seem to be the correct one either.
Java method -
public void uploadStreamToS3Bucket(String[][] locations) {
    try {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(String.valueOf(awsRegion))
                .build();
        String fileName = connectionRequestRepository.findStream() + ".json";
        String bucketName = "downloadable-cases";
        File locationData = new File(?????) // Convert locations attribute to a file and load it to putObject
        s3Client.putObject(new PutObjectRequest(bucketName, fileName, locationData));
    } catch (AmazonServiceException ex) {
        System.out.println("Error: " + ex.getMessage());
    }
}
You're trying to use PutObjectRequest(String, String, File), but you don't have a file. So you can either:
Write your object to a file and then pass that file,
or
Use the PutObjectRequest(String, String, InputStream, ObjectMetadata) overload instead.
The latter is better, as you skip the intermediate step.
As for how to write an object to a stream, check this: How can I convert an Object to an InputStream
Bear in mind that to read it back you have to use the same format.
It might be worth thinking about what format you want to save your information in. It may need to be read by another program, or by a human directly from the bucket, and some formats are easier to read (JSON, for instance) while other serializers are more efficient (taking less space).
As for the type, for an array of arrays you can use the [][] syntax. For instance, an array of arrays of Strings would be:
String[][] arrayOfStringArrays;
I hope this helps.
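As a sketch of the InputStream option: serialize the array to bytes yourself and wrap them in a ByteArrayInputStream. The toJson helper below is a hypothetical stand-in for a real JSON library like Jackson, and the commented-out lines show where the AWS SDK calls from the question would go:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class UploadSketch {
    // Hypothetical helper: turn a String[][] into a minimal JSON array-of-arrays.
    // A real project would use a JSON library and proper string escaping.
    static String toJson(String[][] locations) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < locations.length; i++) {
            if (i > 0) sb.append(",");
            sb.append("[");
            for (int j = 0; j < locations[i].length; j++) {
                if (j > 0) sb.append(",");
                sb.append("\"").append(locations[i][j]).append("\"");
            }
            sb.append("]");
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        String[][] locations = { { "40.7", "-74.0" }, { "34.0", "-118.2" } };
        byte[] bytes = toJson(locations).getBytes(StandardCharsets.UTF_8);
        InputStream stream = new ByteArrayInputStream(bytes);

        // With the AWS SDK you would then do something like:
        // ObjectMetadata metadata = new ObjectMetadata();
        // metadata.setContentLength(bytes.length);
        // metadata.setContentType("application/json");
        // s3Client.putObject(new PutObjectRequest(bucketName, fileName, stream, metadata));
        System.out.println(toJson(locations)); // prints [["40.7","-74.0"],["34.0","-118.2"]]
    }
}
```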
I'm trying to list all the so-called folders and sub-folders in an S3 bucket.
Since I am trying to list all the folders in a path recursively, I am not using the withDelimiter() function.
All the so-called folder names should end with /, and this is my logic to list all the folders and sub-folders.
Here's the scala code (Intentionally not pasting the catch code here):
val awsCredentials = new BasicAWSCredentials(awsKey, awsSecretKey)
val client = new AmazonS3Client(awsCredentials)

def listFoldersRecursively(bucketName: String, fullPath: String): List[String] = {
  try {
    val objects = client.listObjects(bucketName).getObjectSummaries
    val listObjectsRequest = new ListObjectsRequest()
      .withPrefix(fullPath)
      .withBucketName(bucketName)
    val folderPaths = client
      .listObjects(listObjectsRequest)
      .getObjectSummaries()
      .map(_.getKey)
    folderPaths.filter(_.endsWith("/")).toList
  }
}
Here's the structure of my bucket through an s3 client
Here's the list I am getting using this scala code
Without any apparent pattern, many folders are missing from the list of retrieved folders.
I did not use
client.listObjects(listObjectsRequest).getCommonPrefixes.toList
because it was returning empty list for some reason.
P.S: Couldn't add photos in post directly because of being a new user.
Without any apparent pattern, many folders are missing from the list of retrieved folders.
Here's your problem: you are assuming there should always be objects with keys ending in / to symbolize folders.
This is an incorrect assumption. They will only be there if you created them, either via the S3 console or the API. There's no reason to expect them, as S3 doesn't actually need them or use them for anything, and the S3 service does not create them spontaneously, itself.
If you use the API to upload an object with key foo/bar.txt, this does not create the foo/ folder as a distinct object. It will appear as a folder in the console for convenience, but it isn't there unless at some point you deliberately created it.
Of course, the only way to upload such an object with the console is to "create" the folder unless it already appears -- but appears in the console does not necessarily equate to exists as a distinct object.
Filtering on endsWith("/") is invalid logic.
This is why the underlying API includes CommonPrefixes with each ListObjects response if delimiter and prefix are specified. This is a list of the next level of "folders", which you have to recursively drill down into in order to find the next level.
If you specify a prefix, all keys that contain the same string between the prefix and the first occurrence of the delimiter after the prefix are grouped under a single result element called CommonPrefixes. If you don't specify the prefix parameter, the substring starts at the beginning of the key. The keys that are grouped under the CommonPrefixes result element are not returned elsewhere in the response.
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
You need to access this functionality with whatever library you are using, or you need to iterate the entire list of keys and discover the actual common prefixes on / boundaries using string splitting.
Well, in case someone faces the same problem in the future: the alternative logic I used is the one suggested by @Michael above. I iterated through all the keys and split each one at the last occurrence of /. The first part of the returned pair, plus /, was the key of a folder, which I appended to another list. At the end, I returned the unique entries of the list I was appending into. This gave me all the folders and sub-folders under a certain prefix location.
Note that I didn't use CommonPrefixes because I wasn't using any delimiter, and that's because I didn't want the list of folders at a certain level but instead wanted to recursively get all the folders and sub-folders.
def listFoldersRecursively(bucketName: String, fullPath: String): List[String] = {
  try {
    val listObjectsRequest = new ListObjectsRequest()
      .withPrefix(fullPath)
      .withBucketName(bucketName)
    val folderPaths = client.listObjects(listObjectsRequest)
      .getObjectSummaries()
      .map(_.getKey)
      .toList
    val foldersList: ArrayBuffer[String] = ArrayBuffer()
    for (folderPath <- folderPaths) {
      // Everything before the last "/" is a folder key.
      val split = folderPath.splitAt(folderPath.lastIndexOf("/"))
      if (!split._1.equals(""))
        foldersList += split._1 + "/"
    }
    foldersList.toList.distinct
  }
}
P.S.: The catch block is intentionally missing due to irrelevancy.
The listObjects function (and others) paginates, returning up to 1,000 entries per call.
From the doc:
Because buckets can contain a virtually unlimited number of keys, the
complete results of a list query can be extremely large. To manage
large result sets, Amazon S3 uses pagination to split them into
multiple responses. Always check the ObjectListing.isTruncated()
method to see if the returned listing is complete or if additional
calls are needed to get more results. Alternatively, use the
AmazonS3Client.listNextBatchOfObjects(ObjectListing) method as an easy
way to get the next page of object listings.
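The loop itself is simple. The sketch below uses a stand-in fetchPage function where real code would call client.listObjects and then listNextBatchOfObjects, but the shape of the loop on isTruncated() is the same:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntFunction;

public class PaginationSketch {
    // Stand-in for one page of an S3 listing: some keys plus a truncation flag,
    // mirroring ObjectListing.getObjectSummaries() and isTruncated().
    static class Page {
        final List<String> keys;
        final boolean truncated;
        Page(List<String> keys, boolean truncated) {
            this.keys = keys;
            this.truncated = truncated;
        }
    }

    // Keep fetching pages until the listing is no longer truncated,
    // accumulating all keys along the way.
    static List<String> listAllKeys(IntFunction<Page> fetchPage) {
        List<String> all = new ArrayList<>();
        int pageIndex = 0;
        Page page;
        do {
            page = fetchPage.apply(pageIndex++);
            all.addAll(page.keys);
        } while (page.truncated);
        return all;
    }

    public static void main(String[] args) {
        // Simulate a listing of 5 keys delivered 2 per page.
        List<String> keys = List.of("a/", "a/x.txt", "b/", "b/y.txt", "c/");
        listAllKeys(i -> {
            int from = i * 2, to = Math.min(from + 2, keys.size());
            return new Page(keys.subList(from, to), to < keys.size());
        }).forEach(System.out::println);
    }
}
```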
I want to write Java code that parses the last number from a URL. Then I need to check if that number is present in an Excel file; if it is not found, show an error, else return the row name it is present in.
Say the URL is: http://foxsd5174:3887/PD/outage/area/v1/device/40122480
From the Excel sheet below I need to know which category "40122480" falls under, City or County?

          City       County     Event Level
Device ID 40122480   277136436  268698851

To fetch the value from the URL I was thinking of using the code below.
Please help me out.
Use this post in case you don't know how to get the last number of the url:
How to obtain the last path segment of an uri
To read the excel file use this tutorial https://www.callicoder.com/java-read-excel-file-apache-poi/ which uses apache poi library to get the job done.
Just compare the cell value with the last number of the url.
I wrote the code below to get the last number of the URL.

public class smartoutage {
    public static void main(final String[] args) {
        System.out.println(getLastBitFromUrl(
            "http://goxsd5174:3807/PD/outage/areaEtr/v1/device/40122480?param=true"));
    }

    public static String getLastBitFromUrl(final String url) {
        return url.replaceFirst(".*/([^/?]+).*", "$1");
    }
}

The output is 40122480.
Now I need to find whether 40122480 is present in the Excel file and return the row name it belongs to.

          City       County     Event Level
Device ID 40122480   277136436  268698851
Depending on the number of digits at the end of the URL you could parse it in multiple ways, e.g. with a regex or with substrings. To compare it to an Excel file, you could first convert the Excel file into a CSV (they are compatible) and use a BufferedReader along with a FileReader to read in the file. Make an array of strings and, using a regex or any parsing method, split each line on the commas in the CSV file you created. Then simply loop to the end of the file and check whether any of the parsed strings equals "40122480", using the equals() method on String.
EDITED*
Rough and quick code I did in a few minutes to maybe help the parsing thing for you.
String url = "http://foxsd5174:3887/PD/outage/area/v1/device/40122480";
System.out.println(url);
String[] parsedUrl = url.split("/");
System.out.println(parsedUrl[parsedUrl.length - 1]); // prints 40122480
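Putting the two pieces together, here is a rough self-contained sketch that assumes the spreadsheet has been exported to CSV lines matching the table in the question (with the original .xlsx you would read the cells via Apache POI instead):

```java
import java.util.Arrays;
import java.util.List;

public class DeviceLookup {
    // Last path segment of the URL, ignoring any query string.
    static String lastSegment(String url) {
        String path = url.split("\\?")[0];
        String[] parts = path.split("/");
        return parts[parts.length - 1];
    }

    // Given CSV lines (a header row of category names, then a "Device ID" row
    // of values), return the category whose column holds the id, or null.
    static String categoryFor(List<String> csvLines, String id) {
        String[] header = csvLines.get(0).split(",");
        String[] values = csvLines.get(1).split(",");
        for (int i = 1; i < values.length; i++) {
            if (values[i].equals(id)) {
                return header[i];
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // CSV version of the table from the question (layout assumed).
        List<String> csv = Arrays.asList(
            ",City,County,Event Level",
            "Device ID,40122480,277136436,268698851");
        String id = lastSegment("http://foxsd5174:3887/PD/outage/area/v1/device/40122480");
        System.out.println(id + " -> " + categoryFor(csv, id)); // prints 40122480 -> City
    }
}
```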
I am new to Java and trying to figure out how to combine several TreeMaps into a table.
I have a Java program that reads a text file and creates a TreeMap indexing the words in the file. The output has individual words as the key and the list of pages each appears on as the value. An example looks like this:
a 1:4:7
b 1:7
d 2
Now, my program creates a thread for each of several text files and builds a TreeMap for each file. I would like to combine these TreeMaps into one output. So say we have a second text file that looks like this:
a 1:2:4
b 3
c 7
The final output I am trying to create is a csv table that looks like this:
key,file1,file2
a,1:4:7,1:2:4
b,1:7,3
c,,7
d,2,
Is there a method to combine maps like this? I am primarily a SQL developer, so my idea was to print each map to a text file along with the file name and then pivot this list based on the file name. That didn't seem like a very Java-like way to approach the problem, though.
I think you need to do it manually.
I didn't compile my solution and it doesn't write to a CSV file, but it should give you a hint:
public void writeCsv(List<TreeMap<String, String>> list) {
    // ids will store all unique keys; in your example: a, b, c, d
    Set<String> ids = new TreeSet<String>();
    for (TreeMap<String, String> m : list) {
        ids.addAll(m.keySet());
    }
    // iterate ids [a, b, c, d]
    for (String id : ids) {
        StringBuilder line = new StringBuilder();
        line.append(id);
        for (TreeMap<String, String> m : list) {
            // pages will contain "1:4:7" as in your example,
            // or "" if this file does not contain the key
            String pages = m.getOrDefault(id, "");
            line.append(",");
            line.append(pages);
        }
        System.out.println(line);
    }
}
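Applied to the two example files from the question, a self-contained version of that idea looks like this (the file1/file2 header names are assumptions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;
import java.util.TreeSet;

public class MergeMaps {
    // Merge per-file word->pages maps into CSV rows: key,file1,file2,...
    // Missing keys produce an empty column, as in the question's example.
    static List<String> toCsv(List<TreeMap<String, String>> maps) {
        TreeSet<String> keys = new TreeSet<>();
        for (TreeMap<String, String> m : maps) {
            keys.addAll(m.keySet());
        }
        List<String> rows = new ArrayList<>();
        for (String key : keys) {
            StringBuilder row = new StringBuilder(key);
            for (TreeMap<String, String> m : maps) {
                row.append(",").append(m.getOrDefault(key, ""));
            }
            rows.add(row.toString());
        }
        return rows;
    }

    public static void main(String[] args) {
        TreeMap<String, String> file1 = new TreeMap<>();
        file1.put("a", "1:4:7"); file1.put("b", "1:7"); file1.put("d", "2");
        TreeMap<String, String> file2 = new TreeMap<>();
        file2.put("a", "1:2:4"); file2.put("b", "3"); file2.put("c", "7");
        System.out.println("key,file1,file2");
        toCsv(List.of(file1, file2)).forEach(System.out::println);
        // prints:
        // a,1:4:7,1:2:4
        // b,1:7,3
        // c,,7
        // d,2,
    }
}
```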
I am very new to R and am looking for a possible solution to this problem.
Suppose I have a variables.txt file (or any other file for that matter) which contains a list of variable names, e.g.:
Product,
Ingredient,
Label,
Manufacturer,
Marketing,
This text file is generated in Java, and it has to be read in R, with variables named according to the names in the file.
My example code is :
list(Product=0,Ingredient=0,Label=0,Manufacturer=0,Marketing=0)
which is now manually hard coded.
I need a way to get these variable names from the variables.txt file and assign them dynamically in R. How can this be done? Is there a config-file concept in R that could also be a way out?
Maybe you can use:
data = read.table("file.txt", header=TRUE, sep=",")
The sep depends on the separator in the file. It could be a comma, tab, space, dot, or whatever.
With header=TRUE you take the original variable names from the file.
If you need the list structure described above, you can use any read.table or read.csv command to get the names into R, as mthbnd showed above.
Say your file.txt looks like: Product,Ingredient,Label,Manufacturer,Marketing
Read in the file and create a list from it. The elements will then be filled with logical(0). You can then easily set all elements to 0 by using [ ] in order to keep the list structure:
vars <- as.list(read.csv(file = "file.txt", header = T))
vars[] <- 0