There are some video files (mostly .mp4) stored in S3, and they can be rather big. I need to get thumbnail images for the video files - say, the frame at 0.5 seconds (to skip a possible black screen, etc.).
I can create the thumbnail if I download the whole file, but that takes too long, so I am trying to avoid it and download only a minimal fragment.
I know how to download the first N bytes from AWS S3 - a request with a specified Range header - but the problem is that the resulting piece of the video file is corrupted and is not recognized as a valid video.
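For reference, this is the kind of ranged request I mean - a sketch using the AWS SDK for Java v1, where the bucket, key, and byte counts are placeholders:

GetObjectRequest request = new GetObjectRequest("my-bucket", "videos/1.mp4")
        .withRange(0, 999999); // inclusive byte range: the first ~1 MB
try (S3Object object = s3client.getObject(request);
     InputStream in = object.getObjectContent()) {
    Files.copy(in, Paths.get("D:/temp/1_cut.mp4"),
            StandardCopyOption.REPLACE_EXISTING);
}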
I tried to emulate retrieving the header bytes with the following code
import java.io.FileInputStream;
import java.io.FileOutputStream;

public class Test {
    public static void main(String[] args) throws Exception {
        try (FileInputStream fis = new FileInputStream("D://temp//1.mp4");
             FileOutputStream fos = new FileOutputStream("D://temp//1_cut.mp4")) {
            byte[] buf = new byte[1000000];
            int n = fis.read(buf); // read() may return fewer bytes than requested
            fos.write(buf, 0, n);
            fos.flush();
            System.out.println("Done");
        }
    }
}
to work with a static file, but the resulting 1_cut.mp4 is not valid: no player recognizes it, and neither does the avconv library.
Is there any way to download just a fragment of a video file and create an image from that fragment?
Not sure if you need a full Java implementation, but if your file is accessible through a direct or signed URL at S3 and you are OK with using ffmpeg, then the following should do the trick:
ffmpeg -i $amazon_s3_signed_url -ss 00:00:00.500 -vframes 1 thumbnail.png
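With -ss placed after -i, ffmpeg decodes from the start of the input up to the seek point; at 0.5 seconds that cost is negligible, but if you ever need a frame deeper into the file, the input-seeking form is faster:

ffmpeg -ss 00:00:00.500 -i $amazon_s3_signed_url -vframes 1 thumbnail.png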
You can use the AWS SDK for Java to create a pre-signed URL and then execute the command above to create a thumbnail. ffmpeg reads over HTTP and seeks with range requests, so it downloads only the fragments it needs rather than the whole file.
GeneratePresignedUrlRequest generatePresignedUrlRequest =
        new GeneratePresignedUrlRequest(bucketName, objectKey);
generatePresignedUrlRequest.setMethod(HttpMethod.GET);
generatePresignedUrlRequest.setExpiration(expiration); // a java.util.Date in the future

URL url = s3client.generatePresignedUrl(generatePresignedUrlRequest);
String urlString = url.toString();

Process proc = Runtime.getRuntime().exec(new String[] {
        "ffmpeg", "-i", urlString, "-ss", "00:00:00.500", "-vframes", "1", "thumbnail.png"});
proc.waitFor(); // wait until the thumbnail has been written
Your current approach of downloading some number of sequential bytes would require you to repair the partial file you downloaded... a huge amount of work.
An alternative solution to your question might look like this:
You would need to forward all disk I/O read requests from your decoder to the S3 bucket. If your decoder (avconv) supports reading from an InputStream, here is a good example of how to override the read method:
How InputStream's read() method is implemented?
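For illustration, a minimal sketch of that idea (mine, not the linked answer's code): an InputStream whose reads are forwarded to ranged S3 GETs, assuming the AWS SDK for Java v1. The class name is hypothetical, and the object length would come from getObjectMetadata().getContentLength().

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import java.io.IOException;
import java.io.InputStream;

public class S3RangeInputStream extends InputStream {
    private final AmazonS3 s3;
    private final String bucket;
    private final String key;
    private final long length; // total object size in bytes
    private long position = 0;

    public S3RangeInputStream(AmazonS3 s3, String bucket, String key, long length) {
        this.s3 = s3;
        this.bucket = bucket;
        this.key = key;
        this.length = length;
    }

    @Override
    public int read() throws IOException {
        byte[] one = new byte[1];
        return read(one, 0, 1) == -1 ? -1 : one[0] & 0xFF;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        if (position >= length) {
            return -1; // past the end of the object
        }
        long end = Math.min(position + len, length) - 1; // inclusive end index
        // Fetch only the requested byte range from S3.
        GetObjectRequest request = new GetObjectRequest(bucket, key).withRange(position, end);
        try (S3Object object = s3.getObject(request);
             InputStream content = object.getObjectContent()) {
            int n = content.read(buf, off, len);
            if (n > 0) {
                position += n;
            }
            return n;
        }
    }
}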
Another alternative is to use existing drivers that let you access the S3 bucket as if it were a local drive:
Windows: https://tntdrive.com/
Linux: https://tecadmin.net/mount-s3-bucket-centosrhel-ubuntu-using-s3fs/#
The problem I am facing in the title is very similar to a question previously raised here (Azure storage: Uploaded files with size zero bytes), but that one was for .NET; the context for my Java scenario is that I am uploading small CSV files on a daily basis (less than 5 KB per file). In addition, my code uses the latest version of the Azure API, in contrast to the 2010 version used in the other question.
I couldn't figure out what I have missed. An alternative would be to do it in File Storage, but of course the blob approach was recommended by a few of my peers.
So far, I have mostly based my code for uploading a file as a block blob on the sample shown on the Azure Samples GitHub page (https://github.com/Azure-Samples/storage-blob-java-getting-started/blob/master/src/BlobBasics.java). I have already done the container setup and file renaming steps, which aren't a problem, but after uploading, the size of the file in the blob storage container on my Azure domain shows 0 bytes.
I've also tried converting the file into a FileInputStream and uploading it as a stream, but it produces the same result.
fileName = event.getFilename(); // fileName is e.g. eod1234.csv
String tempdir = System.getProperty("java.io.tmpdir");
file = new File(tempdir + File.separator + fileName);
try {
    PipedOutputStream pos = new PipedOutputStream();
    stream = new PipedInputStream(pos);
    buffer = new byte[stream.available()];
    stream.read(buffer);

    FileInputStream fils = new FileInputStream(file);
    int content = 0;
    while ((content = fils.read()) != -1) {
        System.out.println((char) content);
    }

    // OutputStream was written as a test previously but didn't work
    OutputStream outStream = new FileOutputStream(file);
    outStream.write(buffer);
    outStream.close();

    // container name is "testing1"
    CloudBlockBlob blob = container.getBlockBlobReference(fileName);
    if (fileName.length() > 0) {
        blob.upload(fils, file.length()); // this is testing with FileInputStream
        blob.uploadFromFile(fileName);    // preferred, just upload from file
    }
}
There are no error messages shown; we just see that the file reaches blob storage but shows a size of 0 bytes. It's a one-way process that only uploads CSV files. In the blob container, each uploaded file should show a size of 1-5 KB.
Instead of blob.uploadFromFile(fileName); you should use blob.uploadFromFile(file.getAbsolutePath()); because the uploadFromFile method requires an absolute path. And you don't need the blob.upload(fils, file.length()); call.
Refer to Microsoft Docs: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java#upload-blobs-to-the-container
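Put together, the upload portion of your try block reduces to something like this (container and file set up as in your snippet):

CloudBlockBlob blob = container.getBlockBlobReference(fileName);
if (fileName.length() > 0) {
    // uploadFromFile wants a full path, not just the file name
    blob.uploadFromFile(file.getAbsolutePath());
}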
The Azure team replied to the same query I sent them by mail, and I have confirmed that the problem was not in the API but in the Upload component in Vaadin, which behaves differently from the usual upload handling (https://vaadin.com/blog/uploads-and-downloads-inputs-and-outputs). Either the CloudBlockBlob or the BlobContainerUrl approach works.
The out-of-the-box Upload component requires a manual implementation of the FileOutputStream to a temporary object, unlike the usual servlet objects seen everywhere. Since time was limited, I used one of their add-ons, EasyUpload, because it had Viritin's UploadFileHandler incorporated into it, rather than figuring out how to stream the object from scratch. Had there been more time, I would definitely have tried out the MultiFileUpload add-on, which has additional interesting features, in my sandbox workspace.
I had this same problem working with .png files (copied from multipart files). I was doing this:
File file = new File(multipartFile.getOriginalFilename());
and the blobs on Azure were 0 bytes, but when I changed it to this:
File file = new File("C://uploads//"+multipartFile.getOriginalFilename());
it started saving the files properly.
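My guess at the cause (not verified): the first version resolves against the server's working directory, where nothing had been written yet, so an empty file was uploaded. For what it's worth, with Spring's MultipartFile you can make the on-disk copy explicit before uploading:

File file = new File("C://uploads//" + multipartFile.getOriginalFilename());
multipartFile.transferTo(file); // actually writes the request payload to disk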
I was wondering if there is a way to access a file and its path from my assets folder in Android Studio. I need the file and its path because I am working with a method that REQUIRES the String path for a file and must access the file through it. However, I haven't found a way to access a file in assets directly via its String path. As a workaround I read the file from an InputStream and write it to an OutputStream, but the file is about 170MB, and writing it out is too memory-intensive: it takes my application about 10 minutes to copy the file with that strategy. I have searched all over this website and numerous other sources (books and documentation) but have been unable to find a viable solution. Here is an example of my code:
@Override
public Model doInBackground(String... params) {
    try {
        String filePath = context.getFilesDir() + File.separator + "my_turtle.ttl";
        File destinationFile = new File(filePath);
        FileOutputStream outputStream = new FileOutputStream(destinationFile);
        AssetManager assetManager = context.getAssets();
        InputStream inputStream = assetManager.open("sample_3.ttl");
        byte[] buffer = new byte[10000000];
        int length = 0;
        while ((length = inputStream.read(buffer)) != -1) {
            outputStream.write(buffer, 0, length);
        }
        outputStream.close();
        inputStream.close();
        model = ModelFactory.createDefaultModel();
        TDBLoader.loadModel(model, filePath, false); // THIS METHOD REQUIRES THE FILE PATH.
        MainActivity.presenter.setModel(model);
    } catch (FileNotFoundException e) {
        e.printStackTrace(System.out);
    } catch (IOException e) {
        e.printStackTrace(System.out);
    }
    return model;
}
As you can see, the TDBLoader.loadModel() method requires a String file URI as its second argument, so it would be convenient to access the file directly from my assets folder without going through an InputStream. The method's parameters are (Model model, String url, Boolean showProgress). As mentioned, my current strategy uses too much memory and either crashes the application entirely or takes 10 minutes to copy the file. I am using an AsyncTask to perform this operation, but the length of the task rather defeats the purpose of an AsyncTask in this scenario.
What further complicates things is that I have to use an old version of Apache Jena, because the official version is not compatible with Android. So I have to use a port that is 8 years old and lacks the newer classes Apache Jena offers. If I could use the RDFParser class, I could pass an InputStream, but that class does not exist in the old version I must use.
So I am stuck. The method must use the String path of the file in my assets folder, but the only way I know to obtain such a path is to copy the asset out via an InputStream, and that uses too much memory and forces the app to crash. If anyone has a solution, I will greatly appreciate it.
new byte[10000000] may fail, as you may not have a single contiguous block of memory that big. Plus, you might not have that much heap space to begin with. Use a smaller number, such as 65536.
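Applied to your copy loop, with the same stream variables:

byte[] buffer = new byte[65536]; // 64 KB is plenty for a stream copy
int length;
while ((length = inputStream.read(buffer)) != -1) {
    outputStream.write(buffer, 0, length);
}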
It takes my application about 10:00 Minutes to download the file when I implement that strategy
The time will vary by hardware. I would not expect it to be that slow on most devices, but it could be on some.
I was wondering if there is a way to access a file and its path from my assets folder in Android Studio?
You are running your app on Android; Android Studio is not running on Android. Assets are not files on the Android device - they are entries in the APK file, which is basically a ZIP archive. In effect, your code is unzipping 170MB of material and writing it out to a file.
If anyone has a solution I will greatly appreciate it.
Work with some people to port over an updated version of Jena that offers reading RDF from an InputStream.
Or switch to some other RDF library.
Or work with the RDF file format directly.
Or use a smaller RDF file, so the copy takes less time.
Or download the RDF file, if you think that will be preferable to copying over the asset.
Or do the asset-to-file copying in a foreground JobIntentService, updating the progress in its associated Notification, so that the user can do other things on their device while you complete the copy.
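A minimal sketch of that last option, assuming androidx.core.app.JobIntentService (the class name and job ID are hypothetical):

public class AssetCopyService extends JobIntentService {
    static final int JOB_ID = 1000;

    static void enqueue(Context context) {
        enqueueWork(context, AssetCopyService.class, JOB_ID, new Intent());
    }

    @Override
    protected void onHandleWork(@NonNull Intent intent) {
        // Copy the asset to getFilesDir() here, posting progress updates to the
        // service's associated Notification as the copy proceeds.
    }
}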
Currently, I am using ImageIO.write() to write an image to file. However, this method opens up a Java app on my computer, and closing that app forcefully aborts the Bootstrap process, thereby killing the 'server'. I'm testing locally, using IntelliJ, and the termination of the Bootstrap process means that we are unable to test the functionality without rebooting the server.
My method is below. It runs on an API call from our front-end.
/**
 * Saves image to disk, assuming that the input is not null or empty.
 * @param filename name of file.
 * @param fileext  extension of file.
 * @param uri      uri in string form.
 */
public static void saveImageToDisk(String filename, String fileext, String uri) {
    try {
        String[] components = uri.split(",");
        String img64 = components[1];
        byte[] decodedBytes = DatatypeConverter.parseBase64Binary(img64);
        BufferedImage bfi = ImageIO.read(new ByteArrayInputStream(decodedBytes));
        File outputfile = new File(IMAGESTORAGEFOLDER + filename + "." + fileext);
        ImageIO.write(bfi, fileext, outputfile);
        bfi.flush();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
My question is as follows: How can I save an image (from Raw Data) to file without the server aborting? If my code can be adapted with minimal rewrite, what other improvements can I make to robustify my existing code? I would like a solution with no external dependencies (relying entirely on standard Java libraries).
I am on MacOSX, running IntelliJ IDEA CE. Our server runs with Spark and uses Maven.
Thank you very much.
ImageIO.write() [...] method opens up a Java App on my computer
The issue here is that when you use the ImageIO class, it will also initialize the AWT because of some dependencies in the Java2D class hierarchy. This causes the Java launcher on OS X to also open up an icon in the dock and some other things, and I believe this is what you experience. There's really no new Java application being launched.
You can easily avoid this by passing a system property to the Java launcher at startup, telling it to run in "headless" mode. This is usually appropriate for a server process. Pass the following on the command line (or in the IntelliJ launch dialog):
-Djava.awt.headless=true
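The same property can also be set programmatically, as long as that happens before the first use of ImageIO or any other AWT-backed class:

// Do this early, e.g. first thing in main()
System.setProperty("java.awt.headless", "true");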
Read more about headless mode on Oracle's pages. Headless mode is the cross-platform way of doing this. There's also an OS X/macOS-specific way to hide the icon from the dock (-Dapple.awt.UIElement=true), but I don't recommend that here.
However, for your use case it's better to avoid the usage of ImageIO altogether. It's easier, more compatible, faster, and uses less memory as a bonus. Simply write the Base64 decoded bytes directly to disk. There's no need to treat a file containing an image differently from any other file in this case.
You can rewrite your method as follows:
public static void saveImageToDisk(String filename, String fileext, String uri) {
    try {
        String[] components = uri.split(",");
        String img64 = components[1];
        byte[] decodedBytes = DatatypeConverter.parseBase64Binary(img64);
        File outputfile = new File(IMAGESTORAGEFOLDER, filename + "." + fileext);
        Files.write(outputfile.toPath(), decodedBytes); // java.nio.file.Files
    } catch (Exception e) {
        // You really shouldn't swallow this exception, but I'll leave that to you...
        e.printStackTrace();
    }
}
After running multiple users at the same time and running the process multiple times, it seems to just be an artifact of either Java's ImageIO or IntelliJ. As long as the new process is not closed, Bootstrap continues to run properly, even when multiple browsers upload images.
I am working on an App Engine application whose content we want to make available to offline users. This means we need to gather all the blobstore files it uses and save them off for the offline user. I am doing this on the server side so that it is only done once, not once per end user. I am using the task queue to run the process, as it can easily time out; assume all the code below runs as a task.
Small collections work fine, but larger collections result in an App Engine error 202, and the task restarts again and again. The sample code below combines Writing Zip Files to GAE Blobstore with the advice for large zip files in Google Appengine JAVA - Zip lots of images saving in Blobstore (reopening the channel as needed). I also referenced AppEngine Error Code 202 - Task Queue regarding the error.
// Set up the zip file that will be saved to the blobstore
AppEngineFile assetFile = fileService.createNewBlobFile("application/zip", assetsZipName);
FileWriteChannel writeChannel = fileService.openWriteChannel(assetFile, true);
ZipOutputStream assetsZip = new ZipOutputStream(
        new BufferedOutputStream(Channels.newOutputStream(writeChannel)));

HashSet<String> blobsEntries = getAllBlobEntries(); // gets the blobs that I need
saveBlobAssetsToZip(blobsEntries);

writeChannel.closeFinally();
.....
private void saveBlobAssetsToZip(HashSet<String> blobsEntries) throws IOException {
    for (String blobId : blobsEntries) {
        /* Gets the blobstore key that will resolve to the blobstore entry - ignore
           the bsmd, as that is internal to our wrapper around the blobstore. */
        BlobKey blobKey = new BlobKey(bsmd.getBlobId());

        // Gets the blob file as a byte array.
        // Note: this fetches at most the first MAX_BLOB_FETCH_SIZE bytes of the blob.
        byte[] blobData = blobstoreService.fetchData(blobKey, 0,
                BlobstoreService.MAX_BLOB_FETCH_SIZE - 1);

        String extension = ...; // type of file from our metadata (e.g. .jpg, .png, .pdf)
        assetsZip.putNextEntry(new ZipEntry(blobId + "." + extension));
        assetsZip.write(blobData);
        assetsZip.closeEntry();
        assetsZip.flush();

        /* I have found that if I don't close the channel and reopen it, I can get an
           IOException because the files in the blobstore are too large, hence the
           close-and-reopen after each file. */
        assetsZip.close();
        writeChannel.close();
        String assetsPath = assetFile.getFullPath();
        assetFile = new AppEngineFile(assetsPath);
        writeChannel = fileService.openWriteChannel(assetFile, true);
        assetsZip = new ZipOutputStream(
                new BufferedOutputStream(Channels.newOutputStream(writeChannel)));
    }
}
What is the proper way to get this to run on App Engine? Again, small projects work fine and the zip saves, but larger projects with more blob files result in this error.
I bet the instance is running out of memory. Are you using Appstats? It can consume a large amount of memory. If that doesn't help, you will probably need to increase the instance size.
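If memory is indeed the culprit, one mitigation (my sketch, untested against your code) is to stream each blob into the zip in chunks rather than with a single fetchData call; this also avoids silently truncating blobs larger than MAX_BLOB_FETCH_SIZE:

// blobSize would come from the blob's metadata (e.g. BlobInfo)
long offset = 0;
while (offset < blobSize) {
    long end = Math.min(offset + BlobstoreService.MAX_BLOB_FETCH_SIZE, blobSize) - 1;
    byte[] chunk = blobstoreService.fetchData(blobKey, offset, end); // inclusive indices
    assetsZip.write(chunk);
    offset += chunk.length;
}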
I have a problem streaming video to a server in real time from my phone: I want my phone to act as an IP camera, so the server can watch live video from it. I have googled many solutions, but none of them solves my problem.
I use MediaRecorder to record, and it saves the video file to the SD card correctly. Then I referred to this page and used the following approach:
skt = new Socket(InetAddress.getByName(hostname), port);
pfd = ParcelFileDescriptor.fromSocket(skt);
mediaRecorder.setOutputFile(pfd.getFileDescriptor());
Now it seems I can send the video stream while recording. However, I wrote a receiver-side program to receive the video stream from Android, and it doesn't work - is there an error somewhere? I can receive a file, but I cannot open it as video. I guess the problem may be caused by the file format?
Here is an outline of my code. On the Android side:
Socket skt = new Socket(hostIP, port);
ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(skt);
....
....
mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mediaRecorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mediaRecorder.setOutputFile(pfd.getFileDescriptor());
.....
mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT);
mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP);
.....
mediaRecorder.start();
On the receiver side (my Acer notebook):
// anyway, I don't think the file extension will have any effect
File video = new File(strDate + ".3gpp");
FileOutputStream fos;
try {
    fos = new FileOutputStream(video);
    byte[] data = new byte[1024];
    int count = -1;
    // fin is the InputStream obtained from the accepted socket
    while ((count = fin.read(data, 0, 1024)) != -1) {
        fos.write(data, 0, count);
        fos.flush();
    }
    fos.close();
    fin.close();
I have been confused about this for a long time... Thanks in advance.
Poc,
The way MediaRecorder writes files is as follows:
1. Leave space for an empty header.
2. Write the file contents while recording.
3. When recording finishes, seek to the beginning of the file.
4. Write the header at the beginning of the file.
5. Then (I believe) there is another seek to the end of the file, where metadata is written.
Because there is no concept of "seeking" on a socket, you will have to figure out when the header comes, seek to the beginning of your file, and then write the header in the appropriate location.
The best place to start is to use a hex editor to determine the format of a valid 3gpp file, then compare that hex against your receiver program's output. You will also want to look into the 3gpp file format.
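As a rough sketch of that comparison (file name as in your receiver code):

// Print the first 64 bytes of the received file as hex, for comparison
// against the start of a known-good 3gpp file.
byte[] head = new byte[64];
try (FileInputStream in = new FileInputStream(strDate + ".3gpp")) {
    int n = in.read(head);
    for (int i = 0; i < n; i++) {
        System.out.printf("%02x ", head[i]);
    }
    System.out.println();
}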