How to set HTTP header in Apache JClouds? - java

I'm using Apache jclouds to connect to my OpenStack Swift installation. I've managed to upload and download objects from Swift. However, I can't see how to upload a dynamic large object to Swift.
To upload a dynamic large object, I need to upload all the segments first, which I can do as usual. Then I need to upload a manifest object that combines them logically. The problem is that to tell Swift this is a manifest object, I need to set a special header (X-Object-Manifest), and I don't know how to do that with the jclouds API.
Here's a dynamic large object example from the official OpenStack website.
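In essence, that example just PUTs a zero-byte object whose X-Object-Manifest header names the common segment prefix. A rough plain-Java equivalent of that request, as a sketch (the storage URL and auth token below are placeholders, not values from my setup):
import java.net.HttpURLConnection;
import java.net.URL;

public class ManifestPutSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your real storage URL and auth token.
        URL url = new URL("http://localhost:8080/v1/AUTH_test/container/foo/bar");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("X-Auth-Token", "<token>");
        // Prefix shared by all segments, e.g. container/foo/bar/1, container/foo/bar/2, ...
        conn.setRequestProperty("X-Object-Manifest", "container/foo/bar/");
        conn.setDoOutput(true);
        conn.setFixedLengthStreamingMode(0); // the manifest body is empty
        conn.getOutputStream().close();
        System.out.println(conn.getResponseCode()); // expect 201 Created
    }
}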
The code I'm using:
public static void main(String[] args) throws IOException {
    BlobStore blobStore = ContextBuilder.newBuilder("swift").endpoint("http://localhost:8080/auth/v1.0")
            .credentials("test:test", "test").buildView(BlobStoreContext.class).getBlobStore();
    blobStore.createContainerInLocation(null, "container");

    ByteSource segment1 = ByteSource.wrap("foo".getBytes(Charsets.UTF_8));
    Blob seg1Blob = blobStore.blobBuilder("/foo/bar/1").payload(segment1).contentLength(segment1.size()).build();
    System.out.println(blobStore.putBlob("container", seg1Blob));

    ByteSource segment2 = ByteSource.wrap("bar".getBytes(Charsets.UTF_8));
    Blob seg2Blob = blobStore.blobBuilder("/foo/bar/2").payload(segment2).contentLength(segment2.size()).build();
    System.out.println(blobStore.putBlob("container", seg2Blob));

    ByteSource manifest = ByteSource.wrap("".getBytes(Charsets.UTF_8));
    // TODO: set manifest header here
    Blob manifestBlob = blobStore.blobBuilder("/foo/bar").payload(manifest).contentLength(manifest.size()).build();
    System.out.println(blobStore.putBlob("container", manifestBlob));

    Blob dloBlob = blobStore.getBlob("container", "/foo/bar");
    InputStream input = dloBlob.getPayload().openStream();
    while (true) {
        int i = input.read();
        if (i < 0) {
            break;
        }
        System.out.print((char) i); // should print "foobar"
    }
}
The "TODO" part is my problem.
Edit:
It's been pointed out that jclouds handles large-object uploads automatically, but that isn't useful in our case. We don't know how large the file will be, or when the next segment will arrive, at the time we start uploading the first segment. Our API is designed so that clients can upload their files in chunks of whatever size they choose, whenever they choose, and then call a 'commit' to turn those chunks into a file. That's why we want to upload the manifest ourselves.

Following @Everett Toews's answer, I got my code running correctly:
public static void main(String[] args) throws IOException {
    CommonSwiftClient swift = ContextBuilder.newBuilder("swift").endpoint("http://localhost:8080/auth/v1.0")
            .credentials("test:test", "test").buildApi(CommonSwiftClient.class);

    SwiftObject segment1 = swift.newSwiftObject();
    segment1.getInfo().setName("foo/bar/1");
    segment1.setPayload("foo");
    swift.putObject("container", segment1);

    SwiftObject segment2 = swift.newSwiftObject();
    segment2.getInfo().setName("foo/bar/2");
    segment2.setPayload("bar");
    swift.putObject("container", segment2);

    // The manifest name must match the segment prefix ("foo/bar", not "foo/bar2"),
    // otherwise the GET below finds nothing.
    swift.putObjectManifest("container", "foo/bar");

    SwiftObject dlo = swift.getObject("container", "foo/bar", GetOptions.NONE);
    InputStream input = dlo.getPayload().openStream();
    while (true) {
        int i = input.read();
        if (i < 0) {
            break;
        }
        System.out.print((char) i); // prints "foobar"
    }
}

jclouds handles writing the manifest for you. Here are a couple of examples that might help: UploadLargeObject and largeblob.MainApp.
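If you can let jclouds do the splitting, the usual route is putBlob with the multipart option; a minimal sketch reusing the blobStore from the question (jclouds then picks the segment sizes and names and writes the manifest itself; the file path is a placeholder):
import static org.jclouds.blobstore.options.PutOptions.Builder.multipart;

ByteSource largePayload = Files.asByteSource(new File("some-large-file")); // placeholder file
Blob blob = blobStore.blobBuilder("big-object")
        .payload(largePayload)
        .contentLength(largePayload.size())
        .build();
// jclouds splits the payload into segments and writes the manifest for you.
blobStore.putBlob("container", blob, multipart());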

Try using:
Map<String, String> manifestMetadata = ImmutableMap.of(
        "X-Object-Manifest", "<container>/<prefix>");
BlobBuilder.userMetadata(manifestMetadata);
If that doesn't work, you might have to use the CommonSwiftClient like in CrossOriginResourceSharingContainer.java.
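Applied to the question's code, that suggestion would look roughly like the sketch below (the manifest value assumes the segments live under container/foo/bar/). Per the caveat above: if jclouds sends user metadata as X-Object-Meta-* headers rather than raw headers, Swift will never see X-Object-Manifest, and the CommonSwiftClient route is the reliable one.
Map<String, String> manifestMetadata = ImmutableMap.of(
        "X-Object-Manifest", "container/foo/bar/");
Blob manifestBlob = blobStore.blobBuilder("/foo/bar")
        .payload(ByteSource.empty())
        .contentLength(0)
        .userMetadata(manifestMetadata)
        .build();
System.out.println(blobStore.putBlob("container", manifestBlob));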

Related

How to concatenate clips getting from Kinesis Video Stream in Java

I'm using the AWS Kinesis Video Streams service to get my video recordings. Due to the Kinesis Video Streams fragment limitation, I can only retrieve up to ~30 minutes of video per request, and I intended to retrieve a 2-hour video.
So I loop the request, collect all 4 responses into a List of InputStream, and then chain them together with a SequenceInputStream.
However, when I successfully upload the result to an S3 bucket and try to download it from there, the file is corrupted. I researched SequenceInputStream, and my design seemed okay.
Furthermore, if I extend the video length, say to 24 InputStreams chained into a single SequenceInputStream, I hit an SSLException (Connection reset) when I call readAllBytes on the sequence input stream.
Is there any way to achieve what I want, or is something in my code causing this?
Here is my source code:
private String downloadMedia(Request request, JSONObject response, JSONObject metaData, Date startDate, Date endDate) throws Exception {
    long duration = endDate.getTime() - startDate.getTime();
    long durationInMinutes = TimeUnit.MILLISECONDS.toMinutes(duration);
    long intervalsCount = durationInMinutes / 30;
    ArrayList<GetClipResult> getClipResults = new ArrayList<>();
    for (int i = 0; i < intervalsCount; i++) {
        Media currentMedia = constructMediaAfterIntervalsBreakdown(metaData, request, startDate, endDate);
        String deviceName = metaData.getString("name") + "_" + request.getId();
        Stream stream = getStreamByName(deviceName, request.getId());
        String endPoint = getDataEndpoint(stream.getStreamName());
        GetClipResult clipResult = downloadMedia(currentMedia, endPoint, stream.getStreamName());
        if (clipResult != null) {
            getClipResults.add(clipResult);
        }
        startDate = currentMedia.getEndTime();
    }
    // Get presigned URL from S3 service response
    String url = response.getJSONArray("data").getJSONObject(0).getJSONArray("parts").getJSONObject(0).getString("url");
    if (getClipResults.size() > 0) {
        Vector<InputStream> inputStreams = new Vector<>();
        for (GetClipResult clipResult : getClipResults) {
            inputStreams.add(clipResult.getPayload());
        }
        Enumeration<InputStream> inputStreamEnumeration = inputStreams.elements();
        SequenceInputStream sequenceInputStream = new SequenceInputStream(inputStreamEnumeration);
        // Read the stream exactly once. The original code called readAllBytes()
        // twice: the first call consumed everything and the second returned an
        // empty array, so the uploaded object was never the full video.
        byte[] bytes = sequenceInputStream.readAllBytes();
        if (bytes.length > 0) {
            // Note: even a single read only concatenates the raw clip bytes;
            // that alone does not produce a playable MP4 (see the answer below).
            return uploadFileUsingSecureUrl(url, bytes, metaData);
        }
    }
    return "failed";
}
Edit: I came across a couple of packages, Xuggler and FFmpeg, but most of them expect a video file on disk (i.e. a path). In my case there is no file on disk: the clips only exist at runtime and are uploaded to S3 after concatenation.
Appreciate any help! Thank you!
In the end I just downloaded the clips, saved them to disk at runtime, merged them using mp4parser, and uploaded the result to S3. Afterwards I deleted the files from my disk.
If anyone curious about the code, it is taken from https://github.com/sannies/mp4parser/blob/master/examples/src/main/java/com/googlecode/mp4parser/AppendExample.java
Thank you.
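For completeness, here is a condensed sketch along the lines of that AppendExample (the clip paths are placeholders; in my flow they were temp files written at runtime):
import java.io.FileOutputStream;
import java.nio.channels.FileChannel;
import java.util.LinkedList;
import java.util.List;

import com.coremedia.iso.boxes.Container;
import com.googlecode.mp4parser.authoring.Movie;
import com.googlecode.mp4parser.authoring.Track;
import com.googlecode.mp4parser.authoring.builder.DefaultMp4Builder;
import com.googlecode.mp4parser.authoring.container.mp4.MovieCreator;
import com.googlecode.mp4parser.authoring.tracks.AppendTrack;

public class MergeClips {
    public static void main(String[] args) throws Exception {
        // Placeholder paths -- the downloaded Kinesis clips saved to disk.
        String[] clips = {"clip1.mp4", "clip2.mp4", "clip3.mp4", "clip4.mp4"};

        // Collect the audio and video tracks of every clip.
        List<Track> videoTracks = new LinkedList<>();
        List<Track> audioTracks = new LinkedList<>();
        for (String clip : clips) {
            Movie movie = MovieCreator.build(clip);
            for (Track track : movie.getTracks()) {
                if (track.getHandler().equals("vide")) videoTracks.add(track);
                if (track.getHandler().equals("soun")) audioTracks.add(track);
            }
        }

        // Append the per-clip tracks into a single movie and write it out.
        Movie merged = new Movie();
        if (!videoTracks.isEmpty()) merged.addTrack(new AppendTrack(videoTracks.toArray(new Track[0])));
        if (!audioTracks.isEmpty()) merged.addTrack(new AppendTrack(audioTracks.toArray(new Track[0])));
        Container out = new DefaultMp4Builder().build(merged);
        try (FileChannel channel = new FileOutputStream("merged.mp4").getChannel()) {
            out.writeContainer(channel);
        }
    }
}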

Reading and writing files using Java 7 nio

I have files which consist of JSON elements in an array (several files; each file contains one JSON array of elements).
I have a process that takes each JSON element as a line from a file and processes it.
So I created a small program that reads the JSON array and then writes the elements, one per line, to another file. The output of this utility will be the input of the other process.
I used Java 7 NIO (and Gson), and tried to use as much Java 7 NIO as possible.
Is there any improvement I can make?
What about the filter? Which approach is better?
Thanks,
public class TransformJsonsUsers {

    public TransformJsonsUsers() {
    }

    public static void main(String[] args) throws IOException {
        final Gson gson = new Gson();
        Path path = Paths.get("C:\\work\\data\\resources\\files");
        final Path outputDirectory = Paths.get("C:\\work\\data\\resources\\files\\output");

        DirectoryStream.Filter<Path> filter = new DirectoryStream.Filter<Path>() {
            @Override
            public boolean accept(Path entry) throws IOException {
                // which is better?
                // BasicFileAttributeView attView = Files.getFileAttributeView(entry, BasicFileAttributeView.class);
                // return attView.readAttributes().isRegularFile();
                return !Files.isDirectory(entry);
            }
        };

        // try-with-resources so the streams are closed even when a file fails
        try (DirectoryStream<Path> directoryStream = Files.newDirectoryStream(path, filter)) {
            directoryStream.forEach(new Consumer<Path>() {
                @Override
                public void accept(Path filePath) {
                    Path fileOutputPath = outputDirectory.resolve(filePath.getFileName());
                    try (BufferedReader br = Files.newBufferedReader(filePath);
                         BufferedWriter writer = Files.newBufferedWriter(fileOutputPath, Charset.defaultCharset())) {
                        User[] users = gson.fromJson(br, User[].class);
                        for (User user : users) {
                            writer.append(gson.toJson(user));
                            writer.newLine();
                        }
                    } catch (IOException e) {
                        throw new RuntimeException(filePath.toString(), e);
                    }
                }
            });
        }
    }
}
There is no point in using a Filter if you want to read all the files in the directory. A Filter is primarily designed to apply some criteria and read a subset of the files. Either way, there is unlikely to be any real difference in overall performance.
If you are looking to improve performance, you can try a couple of different approaches.
Multi-threading
Depending on how many files exist in the directory and how powerful your CPU is, you can apply multi-threading to process more than one file at a time (see the sketch below).
Queuing
Right now you are reading and writing to another file synchronously. You can queue the content of each file and hand it to an asynchronous writer.
You can combine both of these approaches to improve performance further.
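A minimal sketch of the multi-threading idea, assuming the same path/filter setup as in the question (transformFile is a hypothetical helper holding the read-convert-write body from main):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
try (DirectoryStream<Path> dir = Files.newDirectoryStream(path, filter)) {
    for (final Path filePath : dir) {
        pool.submit(new Runnable() {
            @Override
            public void run() {
                transformFile(filePath); // hypothetical helper, one file per task
            }
        });
    }
}
pool.shutdown();
pool.awaitTermination(1, TimeUnit.HOURS); // wait for all files to finish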
Don't put the I/O into the filter; that's not what it's for. You should get the complete list of files and then process it. For example, if the I/O creates another file in the directory, the behaviour is undefined: you might miss a file, or see the new file in the accept() method.
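In other words, snapshot the directory first and only then do the per-file work; a sketch:
// Snapshot the directory listing first...
List<Path> files = new ArrayList<>();
try (DirectoryStream<Path> dir = Files.newDirectoryStream(path)) {
    for (Path entry : dir) {
        if (!Files.isDirectory(entry)) {
            files.add(entry);
        }
    }
}
// ...then run the I/O-heavy processing against the stable snapshot.
for (Path filePath : files) {
    transformFile(filePath); // hypothetical helper, as above
}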

NFC with NFC-Tools, Creating NDEF Application

I am attempting to do what I would have guessed would be pretty simple, but as it turns out is not. I have an ACR122 NFC reader and a bunch of Mifare Classic and Mifare Ultralight tags, and all I want to do is read and write a mime-type and a short text string to each card from a Java application. Here's what I've got working so far:
I can connect to my reader and listen for tags
I can detect which type of tag is on the reader
On the Mifare Classic tags I can loop through all of the data on the tag (after programming the tag from my phone) and build an ASCII string, but most of it is junk data
I can determine whether or not there is an Application directory on the tag.
Here's my code for doing that:
Main:
public static void main(String[] args) {
    TerminalFactory factory = TerminalFactory.getDefault();
    List<CardTerminal> terminals;
    try {
        TerminalHandler handler = new TerminalHandler();
        terminals = factory.terminals().list();
        CardTerminal cardTerminal = terminals.get(0);
        AcsTerminal terminal = new AcsTerminal();
        terminal.setCardTerminal(cardTerminal);
        handler.addTerminal(terminal);
        NfcAdapter adapter = new NfcAdapter(handler.getAvailableTerminal(), TerminalMode.INITIATOR);
        adapter.registerTagListener(new CustomNDEFListener());
        adapter.startListening();
        System.in.read();
        adapter.stopListening();
    }
    catch (IOException e) {
        // ignored: only thrown by System.in.read()
    }
    catch (CardException e) {
        System.out.println("CardException: " + e.getMessage());
    }
}
CustomNDEFListener:
public class CustomNDEFListener extends AbstractCardTool {

    @Override
    public void doWithReaderWriter(MfClassicReaderWriter readerWriter) throws IOException {
        NdefMessageDecoder decoder = NdefContext.getNdefMessageDecoder();
        MadKeyConfig config = MfConstants.NDEF_KEY_CONFIG;
        if (readerWriter.hasApplicationDirectory()) {
            System.out.println("Application Directory Found!");
            ApplicationDirectory directory = readerWriter.getApplicationDirectory();
        }
        else {
            System.out.println("No Application Directory Found, creating one.");
            readerWriter.createApplicationDirectory(config);
        }
    }
}
From here, I seem to be at a loss as to how to actually create and interact with an application. Once I can create the application and write Record objects to it, I should be able to write the data I need using the TextMimeRecord type; I just don't know how to get there. Any thoughts?
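For what it's worth, if no library pans out, an NDEF record is simple enough to assemble by hand from the NDEF and RTD-Text specs. A sketch of building a short text record as raw bytes (spec-level layout, not the NFC-Tools API):
import java.nio.charset.StandardCharsets;

public static byte[] buildTextRecord(String text) {
    byte[] lang = "en".getBytes(StandardCharsets.US_ASCII);
    byte[] body = text.getBytes(StandardCharsets.UTF_8);

    // Payload: status byte (UTF-8 flag + language code length), language code, text.
    byte[] payload = new byte[1 + lang.length + body.length];
    payload[0] = (byte) lang.length;
    System.arraycopy(lang, 0, payload, 1, lang.length);
    System.arraycopy(body, 0, payload, 1 + lang.length, body.length);

    // Short record layout: header, type length, payload length, type ('T'), payload.
    byte[] record = new byte[4 + payload.length];
    record[0] = (byte) 0xD1;           // MB=1, ME=1, SR=1, TNF=0x01 (well-known)
    record[1] = 0x01;                  // type length
    record[2] = (byte) payload.length; // payload length (short record)
    record[3] = 'T';                   // type: Text
    System.arraycopy(payload, 0, record, 4, payload.length);
    return record;
}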
::Addendum::
Apparently there is no nfc-tools tag, and there probably should be. Would someone with enough rep be kind enough to create one and retag my question to include it?
::Second Addendum::
Also, I am willing to ditch NFC-Tools if someone can point me in the direction of a library that works for what I need, is well documented, and will run in a Windows environment.
Have you checked this library? It is well written, but has poor documentation; actually no more than the JavaDoc.

Get GPS data from an image Java code

I would like to get the metadata from an image file on my local system using Java code.
In the attached image you can see the desired data that I would like to pull out in Java code.
I wrote the code below, and it does not seem to pull the data shown in the "Details" tab. Its output is the following, which is not what I'm looking for:
Started ..
Format name: javax_imageio_jpeg_image_1.0
Format name: javax_imageio_1.0
Please give me your ideas. Thanks
try {
    ImageInputStream inStream = ImageIO.createImageInputStream(new File("D:\\codeTest\\arun.jpg"));
    Iterator<ImageReader> imgItr = ImageIO.getImageReaders(inStream);
    while (imgItr.hasNext()) {
        ImageReader reader = imgItr.next();
        reader.setInput(inStream, true);
        IIOMetadata metadata = reader.getImageMetadata(0);
        String[] names = metadata.getMetadataFormatNames();
        for (int i = 0; i < names.length; i++) {
            System.out.println("Format name: " + names[i]);
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
There's no easy way to do it with the Java Core API. You'd have to parse the image's metadata tree, and interpret the proper EXIF tags. Instead, you can pick up the required code from an existing library with EXIF-parsing capabilities, and use it in yours. For example, I have used the Image class of javaxt, which provides a very useful method to extract GPS metadata from an image. It is as simple as:
javaxt.io.Image image = new javaxt.io.Image("D:\\codeTest\\arun.jpg");
double[] gps = image.getGPSCoordinate();
Plus, javaxt.io.Image has no external dependencies, so you can just use that particular class if you don't want to add a dependency on the entire library.
I suggest you read the EXIF header of the image and then parse the tags to find the GPS information. In Java there is a great library (called metadata-extractor) for extracting and parsing the EXIF header. See the getting-started guide for this library here.
Once you do the first 2 steps in the tutorial, look for the tags starting with [GPS] ([GPS] GPS Longitude, [GPS] GPS Latitude, ...).
Based on @dan-d's answer, here is my code (Kotlin):
private fun readGps(file: String): Optional<GeoLocation> {
    // Read all metadata from the image
    val metadata: Metadata = ImageMetadataReader.readMetadata(File(file))
    // See whether it has GPS data
    val gpsDirectories = metadata.getDirectoriesOfType(GpsDirectory::class.java)
    for (gpsDirectory in gpsDirectories) {
        // Try to read out the location, making sure it's non-zero
        val geoLocation = gpsDirectory.geoLocation
        if (geoLocation != null && !geoLocation.isZero) {
            return Optional.of(geoLocation)
        }
    }
    return Optional.empty()
}

Compare file extension to file header

I'm starting to design an application that will, in part, run through a directory of files and compare their extensions to their file headers.
Does anyone have any advice as to the best way to approach this? I know I could simply have a lookup table containing each type's header signature, e.g. JPEG: \xFF\xD8\xFF\xE0.
I was hoping there might be a simpler way.
Thanks in advance for your help.
I'm afraid it'll have to be more complicated than that. Not every file type has a header at all, and some (such as RAR) have their characteristic data structures at the end rather than at the beginning.
You may want to take a look at the Unix file command, which does the same job:
http://linux.die.net/man/1/file
http://linux.die.net/man/5/magic
If you don't need to do dirty work on these values (and you don't have Linux), you could simply use an external program, like TrID, that is able to do this for you. Maybe you can just work on its output without having to do it yourself. In any case, if you only have around 20 kinds of files to manage, a simple lookup table (e.g. a HashMap<String, byte[]>) is not that bad; see the sketch below. Of course this will work only if the desired file format has a magic number; otherwise you are on your own (or back to an external program).
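A minimal sketch of that lookup-table idea (the signatures below are the well-known magic numbers for those formats; extend the map as needed):
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class MagicNumberCheck {
    private static final Map<String, byte[]> MAGIC_BY_EXTENSION = new HashMap<>();
    static {
        MAGIC_BY_EXTENSION.put("jpg", new byte[] {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF});
        MAGIC_BY_EXTENSION.put("png", new byte[] {(byte) 0x89, 'P', 'N', 'G'});
        MAGIC_BY_EXTENSION.put("gif", new byte[] {'G', 'I', 'F', '8'});
        MAGIC_BY_EXTENSION.put("pdf", new byte[] {'%', 'P', 'D', 'F'});
    }

    // True if the stream starts with the magic number registered for the extension.
    public static boolean matchesExtension(String extension, InputStream in) throws IOException {
        byte[] magic = MAGIC_BY_EXTENSION.get(extension.toLowerCase());
        if (magic == null) {
            return false; // no signature known for this extension
        }
        byte[] header = new byte[magic.length];
        return in.read(header) == magic.length && Arrays.equals(header, magic);
    }
}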
Because of the problem of the missing significant header for some file types (thanks @Michael), I would create a map from extension to a kind of type checker with a simple API like:
public interface TypeCheck {
    boolean isValid(InputStream data) throws IOException;
}
Now you can code something like:
File toBeTested = ...;
Map<String, TypeCheck> typeCheckByExtension = ...;
TypeCheck check = typeCheckByExtension.get(getExtension(toBeTested.getName()));
if (check != null) {
    InputStream in = new FileInputStream(toBeTested);
    try {
        if (check.isValid(in)) {
            // process valid file
        } else {
            // process invalid file
        }
    } finally {
        in.close(); // close even if the check throws
    }
} else {
    // process unknown file
}
The header check for JPEG, for example, may look like this:
public class JpegTypeCheck implements TypeCheck {
    // (byte) casts are required: 0xFF etc. do not fit in a signed byte literal.
    private static final byte[] HEADER = new byte[] {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF, (byte) 0xE0};

    public boolean isValid(InputStream data) throws IOException {
        byte[] header = new byte[4];
        return data.read(header) == 4 && Arrays.equals(header, HEADER);
    }
}
For other types with no significant header, you can implement completely different type checks.
You can extract the mime type of each file and compare it to a map of mime type to extensions (Map<String, List<String>>: the key is the mime type, the value is a list of valid extensions), as sketched after the resource list below.
Resources:
Get the Mime Type from a File
JMimeMagic
On the same topic :
Java - HowTo extract MimeType from a byte[]
Getting A File's Mime Type In Java
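As a sketch of that idea using only the JDK: Files.probeContentType is part of Java 7 NIO, though its detector coverage is platform-dependent and it may return null for unknown types.
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MimeVsExtension {
    public static void main(String[] args) throws Exception {
        // Mime type -> extensions considered valid for it.
        Map<String, List<String>> extensionsByMime = new HashMap<>();
        extensionsByMime.put("image/jpeg", Arrays.asList("jpg", "jpeg"));
        extensionsByMime.put("image/png", Arrays.asList("png"));

        Path file = Paths.get("example.jpg"); // placeholder path
        String mime = Files.probeContentType(file); // may be null
        String name = file.getFileName().toString();
        String ext = name.substring(name.lastIndexOf('.') + 1).toLowerCase();

        List<String> valid = mime == null ? null : extensionsByMime.get(mime);
        boolean matches = valid != null && valid.contains(ext);
        System.out.println(name + " -> " + mime + ", extension matches: " + matches);
    }
}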
You can determine the file type by reading the header with Apache Tika. The following code needs the Apache Tika jar:
InputStream is = MainApp.class.getResourceAsStream("/NetFx20SP1_x64.txt");
BufferedInputStream bis = new BufferedInputStream(is);
AutoDetectParser parser = new AutoDetectParser();
Detector detector = parser.getDetector();
Metadata md = new Metadata();
md.add(Metadata.RESOURCE_NAME_KEY, MainApp.class.getResource("/NetFx20SP1_x64.txt").getPath());
MediaType mediaType = detector.detect(bis, md);
System.out.println("MIME type of file: " + mediaType.toString());
