How to concatenate clips retrieved from Kinesis Video Stream in Java

I'm using the AWS Kinesis Video Streams service to retrieve my video recordings. Due to the Kinesis Video Streams fragment limitation, it turns out I can only retrieve up to ~30 minutes of video per request, and I intended to retrieve a 2 hour video.
So I loop the request and collect all 4 responses into a List of InputStream, then chain them together into a SequenceInputStream.
However, when I successfully upload the result to an S3 bucket and try to download it from there, the file is corrupted. I researched SequenceInputStream, and my design seems okay.
Furthermore, if I extend the video length, say to 24 InputStreams chained into a single SequenceInputStream, I hit an SSL socket exception (Connection reset) when I run readAllBytes on the sequence input stream.
Is there any way I can achieve what I want, or is there something wrong in my code that causes this?
Here is my source code:
private String downloadMedia(Request request, JSONObject response, JSONObject metaData, Date startDate, Date endDate) throws Exception {
    long duration = endDate.getTime() - startDate.getTime();
    long durationInMinutes = TimeUnit.MILLISECONDS.toMinutes(duration);
    long intervalsCount = durationInMinutes / 30;
    ArrayList<GetClipResult> getClipResults = new ArrayList<>();
    for (int i = 0; i < intervalsCount; i++) {
        Media currentMedia = constructMediaAfterIntervalsBreakdown(metaData, request, startDate, endDate);
        String deviceName = metaData.getString("name") + "_" + request.getId();
        Stream stream = getStreamByName(deviceName, request.getId());
        String endPoint = getDataEndpoint(stream.getStreamName());
        GetClipResult clipResult = downloadMedia(currentMedia, endPoint, stream.getStreamName());
        if (clipResult != null) {
            getClipResults.add(clipResult);
        }
        startDate = currentMedia.getEndTime();
    }
    // Get presigned URL from S3 service response
    String url = response.getJSONArray("data").getJSONObject(0).getJSONArray("parts").getJSONObject(0).getString("url");
    if (getClipResults.size() > 0) {
        Vector<InputStream> inputStreams = new Vector<>();
        for (GetClipResult clipResult : getClipResults) {
            inputStreams.add(clipResult.getPayload());
        }
        Enumeration<InputStream> inputStreamEnumeration = inputStreams.elements();
        SequenceInputStream sequenceInputStream = new SequenceInputStream(inputStreamEnumeration);
        // Read the stream exactly once; a second readAllBytes() would return an empty array
        byte[] bytes = sequenceInputStream.readAllBytes();
        if (bytes.length > 0) {
            return uploadFileUsingSecureUrl(url, bytes, metaData);
        }
    }
    return "failed";
}
Edited: I came across a couple of packages, such as Xuggler and FFmpeg, but most of them read the video from a file on disk (i.e. from a path). In my case there is no video file on disk, because I never download the clips locally; they only exist at runtime and are uploaded to S3 after concatenation.
Appreciate any help! Thank you!

So in the end I just downloaded the clips, saved them to disk at runtime, merged them using mp4parser, and uploaded the result to S3. Afterwards I deleted the files from my disk.
If anyone is curious about the code, it is taken from https://github.com/sannies/mp4parser/blob/master/examples/src/main/java/com/googlecode/mp4parser/AppendExample.java
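For reference, here is a compressed sketch of that example (assuming the clips were first written to temporary files; the class names come from the mp4parser library linked above):

import com.coremedia.iso.boxes.Container;
import com.googlecode.mp4parser.authoring.Movie;
import com.googlecode.mp4parser.authoring.Track;
import com.googlecode.mp4parser.authoring.builder.DefaultMp4Builder;
import com.googlecode.mp4parser.authoring.container.mp4.MovieCreator;
import com.googlecode.mp4parser.authoring.tracks.AppendTrack;
import java.io.FileOutputStream;
import java.nio.channels.FileChannel;
import java.util.LinkedList;
import java.util.List;

public class ClipMerger {
    public static void merge(List<String> clipPaths, String outputPath) throws Exception {
        List<Track> videoTracks = new LinkedList<>();
        List<Track> audioTracks = new LinkedList<>();
        // collect the video and audio tracks of every clip, in order
        for (String path : clipPaths) {
            Movie movie = MovieCreator.build(path);
            for (Track t : movie.getTracks()) {
                if (t.getHandler().equals("vide")) videoTracks.add(t);
                if (t.getHandler().equals("soun")) audioTracks.add(t);
            }
        }
        // append all clips' tracks into one movie and write it out
        Movie result = new Movie();
        if (!videoTracks.isEmpty()) result.addTrack(new AppendTrack(videoTracks.toArray(new Track[0])));
        if (!audioTracks.isEmpty()) result.addTrack(new AppendTrack(audioTracks.toArray(new Track[0])));
        Container out = new DefaultMp4Builder().build(result);
        try (FileOutputStream fos = new FileOutputStream(outputPath)) {
            FileChannel fc = fos.getChannel();
            out.writeContainer(fc);
        }
    }
}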
Thank you.

Importing csv data from Storage to Cloud SQL not working - status always "pending"

I am new to Java (I have experience with C#, though).
Sadly, I inherited a terrible project (the code is terrible), and what I need to accomplish is to import some CSV files into Cloud SQL.
There's a web service which runs this task; apparently the dev followed this guide to import data, but it is not working. Here's the code (the essential parts; the real thing is longer and uglier):
InstancesImportRequest requestBody = new InstancesImportRequest();
ImportContext ic = new ImportContext();
ic.setKind("sql#importContext");
ic.setFileType("csv");
ic.setUri(bucketPath);
ic.setDatabase(CLOUD_SQL_DATABASE);
CsvImportOptions csv = new CsvImportOptions();
csv.setTable(tablename);
List<String> list = new ArrayList<String>();
// here there is some code that populates the list with the columns
csv.setColumns(list);
ic.setCsvImportOptions(csv);
requestBody.setImportContext(ic);

SQLAdmin sqlAdminService = createSqlAdminService();
SQLAdmin.Instances.SQLAdminImport request = sqlAdminService.instances().sqladminImport(project, instance, requestBody);
Operation response = request.execute();
System.out.println("Executed : Going to sleep.>" + response.getStatus());
int c = 1;
while (!response.getStatus().equalsIgnoreCase("Done")) {
    Thread.sleep(10000);
    System.out.println("sleeped enough >" + response.getStatus());
    c++;
    if (c == 50) {
        System.out.println("timeout?");
        break;
    }
}

public static SQLAdmin createSqlAdminService() throws IOException, GeneralSecurityException {
    HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
    JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();
    GoogleCredential credential = GoogleCredential.getApplicationDefault();
    if (credential.createScopedRequired()) {
        credential = credential.createScoped(Arrays.asList("https://www.googleapis.com/auth/cloud-platform"));
    }
    return new SQLAdmin.Builder(httpTransport, jsonFactory, credential)
            .setApplicationName("Google-SQLAdminSample/0.1")
            .build();
}
I am not quite sure how the response should be treated; it seems to be an async request. Either way, I always get status Pending; it seems the import never even starts executing.
Of course it ends up timing out. What is wrong here? Why does the request never start? I couldn't find any actual example on the internet of using this Java SDK to import files, except the link I gave above.
Well, the thing is that the response object is a one-time snapshot: its status field is just the string that came back with the initial response, so it will always return "Pending"; it is not actually being updated.
To get the actual status, you have to request it from Google using the SDK. I did something like this (it would be better to use a smaller sleep time and make it grow as you retry more times):
SQLAdmin.Instances.SQLAdminImport request = sqlAdminService.instances().sqladminImport(CLOUD_PROJECT, CLOUD_SQL_INSTANCE, requestBody);
// execution of our import request
Operation response = request.execute();
int tried = 0;
Operation statusOperation;
do {
    // sleep one minute
    Thread.sleep(60000);
    // here we are requesting the status of our operation. Name is actually the unique identifier
    Get requestStatus = sqlAdminService.operations().get(CLOUD_PROJECT, response.getName());
    statusOperation = requestStatus.execute();
    tried++;
    System.out.println("status is: " + statusOperation.getStatus());
} while (!statusOperation.getStatus().equalsIgnoreCase("DONE") && tried < 10);
if (!statusOperation.getStatus().equalsIgnoreCase("DONE")) {
    throw new Exception("import failed: Timeout");
}
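If you want the growing sleep time mentioned above, a sketch of the same polling loop with exponential backoff (untested; it reuses the sqlAdminService and response objects from the code above):

// poll with exponential backoff: start at 5 s, double up to a 60 s cap
long delayMs = 5000;
final long maxDelayMs = 60000;
final int maxTries = 20;
int tries = 0;
Operation status;
do {
    Thread.sleep(delayMs);
    delayMs = Math.min(delayMs * 2, maxDelayMs); // double the wait each try
    status = sqlAdminService.operations().get(CLOUD_PROJECT, response.getName()).execute();
    tries++;
    System.out.println("status is: " + status.getStatus());
} while (!"DONE".equalsIgnoreCase(status.getStatus()) && tries < maxTries);
if (!"DONE".equalsIgnoreCase(status.getStatus())) {
    throw new Exception("import failed: timeout");
}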

Upload blob in Azure using BlobOutputStream

I'm trying to upload a blob directly from a stream; since I don't know the length of the stream, I decided to try this answer.
It doesn't work: even though it reads from the stream and doesn't throw any exceptions, the content isn't uploaded to my container.
I have no problem uploading from files; it only happens when uploading from a stream.
This is my code. I added a few prints to check whether it was reading anything, but that wasn't the problem:
try {
    CloudBlockBlob blob = PublicContainer.getBlockBlobReference(externalFileName);
    if (externalFileName.endsWith(".tmp")) {
        blob.getProperties().setContentType("image/jpeg");
    }
    BlobOutputStream blobOutputStream = blob.openOutputStream();
    int next = input.read();
    while (next != -1) {
        System.err.println("writes");
        blobOutputStream.write(next);
        next = input.read();
    }
    blobOutputStream.close();
    return blob.getUri().toString();
} catch (Exception usex) {
    System.err.println("ERROR " + usex.getMessage());
    return "";
}
It doesn't fail, but it doesn't work either.
Is there another way of doing this? Or am I missing something?
UPDATE: I've been checking, and I think the problem is with the InputStream itself, but I don't know why, since the same stream works just fine if I use it to upload to Amazon S3, for instance.
I tried to reproduce your issue, but failed. Looking at your code, the only obvious thing missing is a call to blobOutputStream.flush() before closing the output stream via blobOutputStream.close(); that said, in my testing it works even without the flush.
Here is my testing code:
String STORAGE_CONNECTION_STRING_TEMPLATE = "DefaultEndpointsProtocol=https;AccountName=%s;AccountKey=%s;";
String accountName = "xxxx";
String key = "XXXXXX";
CloudStorageAccount account = CloudStorageAccount.parse(String.format(STORAGE_CONNECTION_STRING_TEMPLATE, accountName, key));
CloudBlobClient client = account.createCloudBlobClient();
CloudBlobContainer container = client.getContainerReference("mycontainer");
container.createIfNotExists();
String externalFileName = "test.tmp";
CloudBlockBlob blob = container.getBlockBlobReference(externalFileName);
if (externalFileName.endsWith(".tmp")) {
    blob.getProperties().setContentType("image/jpeg");
}
BlobOutputStream blobOutputStream = blob.openOutputStream();
String fileName = "test.jpg";
InputStream input = new FileInputStream(fileName);
int next;
while ((next = input.read()) != -1) {
    blobOutputStream.write(next);
}
blobOutputStream.close(); // works in my test even without an explicit flush() before close()
input.close();
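As an aside, copying one byte per read/write call is very slow for anything beyond tiny files; a buffered variant of the same copy loop (a sketch, using the blob and input from the code above) would be:

byte[] buffer = new byte[8192]; // copy in 8 KB chunks instead of single bytes
int read;
while ((read = input.read(buffer)) != -1) {
    blobOutputStream.write(buffer, 0, read);
}
blobOutputStream.close();
input.close();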
If you can update with more details, I think it will help in analyzing the issue. If you have any concerns, please feel free to let me know.

How to set HTTP header in Apache JClouds?

I'm using Apache jclouds to connect to my OpenStack Swift installation. I managed to upload and download objects from Swift; however, I can't see how to upload a dynamic large object to Swift.
To upload a dynamic large object, I need to upload all the segments first, which I can do as usual. Then I need to upload a manifest object to combine them logically. The problem is that to tell Swift this is a manifest object, I need to set a special header, which I don't know how to do using the jclouds API.
Here's a dynamic large object example from the official OpenStack website.
The code I'm using:
public static void main(String[] args) throws IOException {
    BlobStore blobStore = ContextBuilder.newBuilder("swift").endpoint("http://localhost:8080/auth/v1.0")
            .credentials("test:test", "test").buildView(BlobStoreContext.class).getBlobStore();
    blobStore.createContainerInLocation(null, "container");

    ByteSource segment1 = ByteSource.wrap("foo".getBytes(Charsets.UTF_8));
    Blob seg1Blob = blobStore.blobBuilder("/foo/bar/1").payload(segment1).contentLength(segment1.size()).build();
    System.out.println(blobStore.putBlob("container", seg1Blob));

    ByteSource segment2 = ByteSource.wrap("bar".getBytes(Charsets.UTF_8));
    Blob seg2Blob = blobStore.blobBuilder("/foo/bar/2").payload(segment2).contentLength(segment2.size()).build();
    System.out.println(blobStore.putBlob("container", seg2Blob));

    ByteSource manifest = ByteSource.wrap("".getBytes(Charsets.UTF_8));
    // TODO: set manifest header here
    Blob manifestBlob = blobStore.blobBuilder("/foo/bar").payload(manifest).contentLength(manifest.size()).build();
    System.out.println(blobStore.putBlob("container", manifestBlob));

    Blob dloBlob = blobStore.getBlob("container", "/foo/bar");
    InputStream input = dloBlob.getPayload().openStream();
    int i;
    while ((i = input.read()) >= 0) {
        System.out.print((char) i); // should print "foobar"
    }
}
The "TODO" part is my problem.
Edited:
It's been pointed out that jclouds handles large object uploads automatically, which is not so useful in our case. In fact, we do not know how large the file will be, or when the next segment will arrive, at the time we start uploading the first segment. Our API is designed so that clients can upload their files in chunks of their own chosen size and at their own chosen time, and, when done, call a 'commit' that turns those chunks into a file. This is why we want to upload the manifest ourselves.
Following @Everett Toews's answer, I got my code running correctly:
public static void main(String[] args) throws IOException {
    CommonSwiftClient swift = ContextBuilder.newBuilder("swift").endpoint("http://localhost:8080/auth/v1.0")
            .credentials("test:test", "test").buildApi(CommonSwiftClient.class);

    SwiftObject segment1 = swift.newSwiftObject();
    segment1.getInfo().setName("foo/bar/1");
    segment1.setPayload("foo");
    swift.putObject("container", segment1);

    SwiftObject segment2 = swift.newSwiftObject();
    segment2.getInfo().setName("foo/bar/2");
    segment2.setPayload("bar");
    swift.putObject("container", segment2);

    swift.putObjectManifest("container", "foo/bar");

    SwiftObject dlo = swift.getObject("container", "foo/bar", GetOptions.NONE);
    InputStream input = dlo.getPayload().openStream();
    int i;
    while ((i = input.read()) >= 0) {
        System.out.print((char) i);
    }
}
jclouds handles writing the manifest for you. Here are a couple of examples that might help you: UploadLargeObject and largeblob.MainApp.
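In the simplest case, that boils down to passing the multipart option to putBlob (a sketch, assuming a Guava ByteSource payload like the ones in your code; jclouds then splits the payload into segments and writes the manifest itself):

import static org.jclouds.blobstore.options.PutOptions.Builder.multipart;

Blob largeBlob = blobStore.blobBuilder("big-object")
        .payload(byteSource)
        .contentLength(byteSource.size())
        .build();
// jclouds uploads the segments and the manifest for you
blobStore.putBlob("container", largeBlob, multipart());

If you need to write the manifest yourself instead, there is the metadata route below.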
Try using
Map<String, String> manifestMetadata = ImmutableMap.of(
"X-Object-Manifest", "<container>/<prefix>");
BlobBuilder.userMetadata(manifestMetadata)
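Wired into your existing code, that suggestion would look roughly like this (a sketch, untested; whether the provider passes the header through as-is is exactly what you would be testing):

// hypothetical manifest blob: zero bytes, with the DLO header as user metadata
Map<String, String> manifestMetadata = ImmutableMap.of(
        "X-Object-Manifest", "container/foo/bar/"); // <container>/<segment prefix>
Blob manifestBlob = blobStore.blobBuilder("/foo/bar")
        .payload(ByteSource.wrap(new byte[0]))
        .contentLength(0)
        .userMetadata(manifestMetadata)
        .build();
blobStore.putBlob("container", manifestBlob);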
If that doesn't work, you might have to use the CommonSwiftClient, like in CrossOriginResourceSharingContainer.java.

How to increase the speed of getting data from a web server for an Android app

I'm designing an Android application that has a client side and a server side. The main database is on the server, and every Android device's DB tries to mirror it. Whenever an Android device is online, the app syncs with the server database (the main database), because I can add, delete or update data in the server database (users can't change the database on the Android device, a.k.a. the SQLite DB). After synchronization, the SQLite DB has been brought up to date with those additions, deletions and updates.
There are a lot of images in the server database, and I'm sending all of the data as JSON from PHP.
$datas = array('cats' => $cats, 'news' => $news, 'products' => $products);
$datas_json = (object) array('datas' => $datas);
$json = json_encode($datas_json);
echo $json;
And the app reads it with a BufferedReader.
HttpResponse response = httpclient.execute(httppost);
rd = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
Afterwards, I'm trying to collect all the data coming from the server as a single string. Here there is a real speed problem.
public String parseFromBuffer(BufferedReader rd) throws IOException {
    String line = "";
    String full = "";
    while ((line = rd.readLine()) != null) {
        full += line;
    }
    return full;
}
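(For reference: each += above copies the entire accumulated string, so this loop is quadratic in the response size. A StringBuilder-based variant of the same method keeps it linear:)

public String parseFromBuffer(BufferedReader rd) throws IOException {
    StringBuilder full = new StringBuilder();
    String line;
    while ((line = rd.readLine()) != null) {
        full.append(line); // append is amortized O(1)
    }
    return full.toString();
}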
Consider that just one image may be 100,000 characters (on the PHP side I'm using base64 for the images I send to Java, and I decode them in Java again); now imagine 1000 images! So it sometimes takes 1-2 minutes to get 4-5 images this way. After getting the 'full' string (it could have 1,000,000 characters!), I use jsonParser to get the data.
public jsonParser(String json_string) throws JSONException {
    json = new JSONObject(json_string);
}
...
public void parseJson() throws JSONException {
    JSONObject datas = json.getJSONObject("datas");
    JSONArray cats = datas.getJSONArray("cats");
    for (int i = 0; i < cats.length(); i++) {
        JSONObject c = cats.getJSONObject(i);
        cat_id.add(c.getInt("cat_id"));
        cat_name_tr.add(c.getString("cat_name_tr"));
        cat_name_eng.add(c.getString("cat_name_eng"));
        cat_upper_id.add(c.getInt("cat_upper_id"));
        cat_order.add(c.getInt("cat_order"));
    }
    ...
And finally, I insert, update or delete the data that comes from jsonParser() in the SQLite DB.
In short, I have a problem with, and need a solution for, the speed of transferring images between server and client, and also with the parseFromBuffer() method. Any advice is welcome, and feel free to criticize my approach; maybe I should serialize something, or maybe I should somehow download all the server's images into the 'res' folder...
Thanks in advance.

Android to computer FTP resuming upload strange phenomenon

I have a strange phenomenon when resuming a file transfer.
Look at the picture below: you can see the bad section.
This happens apparently at random, maybe every 10th time.
I'm sending the picture from my Android phone to a Java server over FTP.
What is it that I forgot here?
I see the connection is killed due to a java.net.SocketTimeoutException.
The transfer resumes like this:
Resume at : 287609 Sending 976 bytes more
The byte count is always correct when the file is completely received, even for the picture below.
I don't know where to start debugging this, since it works most of the time.
Any suggestions or ideas would be great; I think I totally missed something here.
The device Sender code (only send loop):
int count = 1;
//Sending N files, looping N times
while(count <= max) {
String sPath = batchFiles.get(count-1);
fis = new FileInputStream(new File(sPath));
int fileSize = bis.available();
out.writeInt(fileSize); // size
String nextReply = in.readUTF();
// if the file exist,
if(nextReply.equals(Consts.SERVER_give_me_next)){
count++;
continue;
}
long resumeLong = 0; // skip this many bytes
int val = 0;
buffer = new byte[1024];
if(nextReply.equals(Consts.SERVER_file_exist)){
resumeLong = in.readLong();
}
//UPDATE FOR #Justin Breitfeller, Thanks
long skiip = bis.skip(resumeLong);
if(resumeLong != -1){
if(!(resumeLong == skiip)){
Log.d(TAG, "ERROR skip is not the same as resumeLong ");
skiip = bis.skip(resumeLong);
if(!(resumeLong == skiip)){
Log.d(TAG, "ERROR ABORTING skip is not the same as resumeLong);
return;
}
}
}
while ((val = bis.read(buffer, 0, 1024)) > 0) {
out.write(buffer, 0, val);
fileSize -= val;
if (fileSize < 1024) {
val = (int) fileSize;
}
}
reply = in.readUTF();
if (reply.equals(Consts.SERVER_file_receieved_ok)) {
// check if all files are sent
if(count == max){
break;
}
}
count++;
}
The receiver code (heavily truncated):
// receiving N files, looping N times
while (count < totalNrOfFiles) {
    int ii = in.readInt(); // file size
    fileSize = (long) ii;
    String filePath = Consts.SERVER_DRIVE + Consts.PTPP_FILETRANSFER;
    filePath = filePath.concat(theBatch.getFileName(count));
    File path = new File(filePath);
    boolean resume = false;
    // if the file exists: skip it if done, or resume it if not
    if (path.exists()) {
        if (path.length() == fileSize) { // does the file have the same size?
            logger.info("File size same skipping file:" + theBatch.getFileName(count));
            count++;
            out.writeUTF(Consts.SERVER_give_me_next);
            continue; // file is OK, don't upload it again
        } else {
            // resume the upload
            out.writeUTF(Consts.SERVER_file_exist);
            out.writeLong(path.length());
            resume = true;
            fileSize = fileSize - path.length();
            logger.info("Resume at : " + path.length() +
                    " Sending " + fileSize + " bytes more");
        }
    } else {
        out.writeUTF("lets go");
    }
    byte[] buffer = new byte[1024];
    // ***********************************
    // RECEIVE FROM PHONE
    // ***********************************
    int size = 1024;
    int val = 0;
    bos = new BufferedOutputStream(new FileOutputStream(path, resume));
    if (fileSize < size) {
        size = (int) fileSize;
    }
    while (fileSize > 0) {
        val = in.read(buffer, 0, size);
        bos.write(buffer, 0, val);
        fileSize -= val;
        if (fileSize < size) {
            size = (int) fileSize;
        }
    }
    bos.flush();
    bos.close();
    out.writeUTF("file received ok");
    count++;
}
Found the error, and the problem was bad logic on my part.
Say no more.
I was sending pictures that were being resized just before they were sent.
The problem was that when the resume kicked in after a failed transfer, the resized picture was not used; instead the code used the original picture, which had a larger size.
I have now set up a short-lived cache that holds the resized temporary pictures.
Given the complexity of the app I'm making, I simply forgot that the files during resume were not the same as the originals.
With a BufferedOutputStream and BufferedInputStream, you need to watch out for the following:
Create the BufferedOutputStream before the BufferedInputStream (on both client and server).
Flush just after creating it.
Flush after every write (not just before close).
That worked for me.
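A minimal sketch of that ordering, assuming an already-connected socket (my own illustration; it matters most with object streams, which is what I use, because ObjectOutputStream writes a stream header that the other side's ObjectInputStream blocks waiting for):

// on BOTH client and server: output side first, flushed immediately
ObjectOutputStream out = new ObjectOutputStream(new BufferedOutputStream(socket.getOutputStream()));
out.flush(); // push the stream header through the buffer right away
ObjectInputStream in = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));

out.writeObject(packet); // packet is whatever your protocol sends
out.flush();             // flush after every write, not just before close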
Edited
Add sentRequestTime, receivedRequestTime, sentResponseTime and receivedResponseTime to your packet payload. Use System.nanoTime() for these, run your server and client on the same host, use an ExecutorService to run multiple clients against that server, and plot the (received - sent) time delay for both request and response packets on an Excel chart (from some CSV output). Do this before and after introducing the buffered streams. You will be pleased to see that performance has improved by about 100%. Plotting that graph made me very happy; it took about 45 minutes.
I have also heard that using custom buffers further improves performance.
Edited again
In my case I am using object IO streams. I added a payload of 4 long variables to the object: I initialize sentRequestTime when the client sends the packet, receivedRequestTime when the server receives it, and so forth for the response from server back to client. I then take the difference between the received and sent times to find the delay for the request and for the response. Be careful to run this test on localhost: if you run it between different hardware/devices, the difference between their clocks may interfere with your results, since receivedRequestTime is stamped at the server end while sentRequestTime is stamped at the client end, each using its own local time, and two devices running exactly the same time to the nanosecond is not possible. If you must run it between different devices, at least make sure you have NTP running to keep them synchronized. That said, you are comparing performance before and after buffered I/O (you don't really care about the absolute delays), so clock drift should not really matter; comparing a set of results before buffering and after buffering is your actual interest.
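A sketch of such a timing payload (hypothetical class and field names; the stamps are filled in at the four points described above):

import java.io.Serializable;

class TimedPacket implements Serializable {
    long sentRequestTime;      // client: System.nanoTime() just before writing the request
    long receivedRequestTime;  // server: System.nanoTime() right after reading it
    long sentResponseTime;     // server: System.nanoTime() just before writing the reply
    long receivedResponseTime; // client: System.nanoTime() right after reading the reply
    byte[] payload;            // the actual data being measured
}

// request delay  = receivedRequestTime - sentRequestTime
// response delay = receivedResponseTime - sentResponseTime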
Enjoy!!!
