I'm having problems with ffmpeg, probably due to my inexperience with this software.
My basic need is the following: I have a series of videos whose material I want to protect from plagiarism. For this I want to add a watermark, so that when a user views a video they also see some personal data that deters them from downloading and sharing it without permission.
What I would like is to create a small Angular + Java application that does this task (invoking ffmpeg via Runtime#exec).
I have seen that ffmpeg can stream to a server such as ffserver, but I wonder if there is a somewhat simpler way: something like launching the ffmpeg command from my Java application with the necessary configuration, and having ffmpeg serve the watermarked video over some port/protocol.
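Something like this is what I have in mind (a rough sketch; the input file, watermark text, and output location are placeholders, it assumes ffmpeg is on the PATH, and some ffmpeg builds also need a fontfile= option for drawtext):

```java
import java.io.IOException;

public class WatermarkStreamer {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Burn a text watermark into the video with ffmpeg's drawtext filter
        // and write the result as an HLS playlist plus segments, which any
        // plain HTTP server can then serve to a browser player.
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg",
                "-re",                        // read input at its native frame rate
                "-i", "input.mp4",            // placeholder input file
                "-vf", "drawtext=text='user@example.com':x=10:y=10:fontsize=24:fontcolor=white",
                "-c:v", "libx264",
                "-c:a", "aac",
                "-f", "hls",
                "-hls_time", "4",             // segment length in seconds
                "/var/www/stream/out.m3u8");  // placeholder output location
        pb.inheritIO();                       // show ffmpeg's console output
        Process ffmpeg = pb.start();
        System.exit(ffmpeg.waitFor());
    }
}
```

The .m3u8 playlist and its segments are just static files, so the same web server that hosts the Angular app could serve them, without a dedicated streaming server.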
EDIT
I have continued to investigate and have seen that ffmpeg can feed a WebRTC broadcast, but you need an adapter. What I would like, and I don't know if it is possible, is to launch ffmpeg so that it acts as a server that can be consumed directly from the web.
Related
I am stumped.
I need to stream a video from my own server to a player.
How do I stream video to a WebView player?
Server ====socket video stream=====> Client ====> WebView player
If you just want a simple solution that works, you can use a static video file on your server and the HTML5 video tag in your WebView.
On Android, at this time, you need hardware acceleration turned on to support HTML5 video - see the 'HTML5 Video Support' note at the link below. You also need to set a WebChromeClient on the WebView, which is called when anything affecting the UI happens - see the 'Full Screen Support' note at the same link:
http://developer.android.com/reference/android/webkit/WebView.html
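For reference, the setup is roughly this (a minimal sketch; hardware acceleration is enabled in the manifest via android:hardwareAccelerated="true" rather than in code, and the URL is a placeholder for a page containing an HTML5 video tag):

```java
import android.app.Activity;
import android.os.Bundle;
import android.webkit.WebChromeClient;
import android.webkit.WebView;

public class VideoActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WebView webView = new WebView(this);
        webView.getSettings().setJavaScriptEnabled(true);
        // HTML5 video needs a WebChromeClient to be set on the WebView.
        webView.setWebChromeClient(new WebChromeClient());
        // Placeholder URL for a page containing an HTML5 <video> tag.
        webView.loadUrl("http://example.com/player.html");
        setContentView(webView);
    }
}
```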
You can find a full working example which appears to be well maintained here:
https://github.com/cprcrack/VideoEnabledWebView
Your server then just has to serve the static video file in the same way it would any static content, and it will be delivered using HTTP progressive download, which looks and feels like streaming to your end user.
This does have limitations, however, as it is not 'real' streaming: you can't, for example, support adaptive bit rate streaming, which allows you to deliver different bit rates depending on the network connection. It is much simpler, however, and it does not require a dedicated streaming server.
If you do want to use a dedicated streaming server, it is worth being aware that this is a relatively complex domain, and that there are some open source streaming servers you might want to take a look at, for example:
http://gstreamer.freedesktop.org
http://www.videolan.org/vlc/streaming.html
I've been using Vuforia for a while now, and it has the limitation that I can't directly submit an image to the Natural Feature Tracking (NFT) processor to turn it into a trackable data file. Instead it is hard-wired to take the image directly from the camera, which gives me no control; see for example the UserDefinedTargets demo.
Does ARToolKit allow me to submit a JPEG to the NFT processor directly from my mobile device for processing? I want to achieve something like UserDefinedTargets in Vuforia, but with the ability to submit my own natural-feature images as JPEGs on the mobile device itself. I could then save images taken on the fly for later processing or, even better, save the processed NFT data for future use. I do not want to use a cloud service; there is a workaround with Vuforia, but I would have to use their cloud service, and that has its limitations too!
According to the documentation here: http://artoolkit.org/documentation/doku.php?id=3_Marker_Training:marker_nft_training there is a program that can be used to do the feature extraction. It works on a digital image, so without having looked into the code I foresee two options for you:
a) Check out the source code and see if you can get that tool running on an Android phone, most likely via the NDK.
b) Make a web service that receives an image, runs this program and returns the result, so you can use it as a normal REST API.
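As a rough sketch of option b), using only the JDK's built-in HTTP server. The tool name ("genTexData") and its arguments are assumptions here; replace them with the actual ARToolKit training binary and flags from the documentation linked above:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class NftTrainingService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/train", exchange -> {
            // Save the uploaded image to a temporary file.
            Path image = Files.createTempFile("nft", ".jpg");
            Files.copy(exchange.getRequestBody(), image,
                    StandardCopyOption.REPLACE_EXISTING);

            // Run the feature-extraction tool on it. The tool name and
            // arguments are placeholders; check the marker-training
            // documentation for the real invocation.
            int exit;
            try {
                exit = new ProcessBuilder("genTexData", image.toString())
                        .inheritIO().start().waitFor();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                exit = -1;
            }

            byte[] body = ("exit code: " + exit).getBytes();
            exchange.sendResponseHeaders(exit == 0 ? 200 : 500, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

A real service would of course return the generated feature data files rather than just the tool's exit code.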
Hope this helps.
My goal is to have a Java application running that can auto-detect a camera, whether it is plugged in while the program runs or already plugged in. After writing that sentence, it doesn't seem possible to distinguish a camera from the other USB drives on Windows.
Can someone point me in the right direction for letting the user specify the camera location instead? If the camera location is specified, I should be able to generate some sort of list of the JPEG files on it, correct?
My overall goal is to have a user enter a "job number"; then, from the camera (auto-detected or with its location specified by the user), the program automatically takes all the photos on it, dumps them into a folder named after the job number, and then erases the photos from the camera.
It's pretty much an automatic photo storage dump.
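For the copy-and-erase part, I imagine something like this would work once the camera's location is known (a sketch assuming the camera mounts as a removable drive with a DCIM folder; the paths are placeholders):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PhotoDump {
    public static void dump(Path cameraDir, Path jobDir) throws IOException {
        Files.createDirectories(jobDir);
        // Collect every JPEG under the camera folder first, so no directory
        // handles are held open while we copy and delete.
        List<Path> photos;
        try (Stream<Path> walk = Files.walk(cameraDir)) {
            photos = walk.filter(p -> p.toString().toLowerCase().endsWith(".jpg"))
                         .collect(Collectors.toList());
        }
        for (Path photo : photos) {
            Files.copy(photo, jobDir.resolve(photo.getFileName()),
                    StandardCopyOption.REPLACE_EXISTING);
            Files.delete(photo); // erase the original from the camera
        }
    }

    public static void main(String[] args) throws IOException {
        // Drive letter and job number are placeholders, e.g. taken from the UI.
        dump(Paths.get("E:\\DCIM"), Paths.get("C:\\jobs\\12345"));
    }
}
```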
I'm currently working in Eclipse with the JavaFX plug-in, using SceneBuilder.
libjitsi is an advanced Java media library for secure real-time audio/video communication. It allows applications to capture, play back, stream, encode/decode, and encrypt audio and video flows. It also allows for advanced features such as audio mixing, handling multiple streams, and participation in audio and video conferences.
Originally libjitsi was part of the Jitsi client source code, but we decided to spin it off so that other projects can also use it.
libjitsi is distributed under the terms of the LGPL.
Feature list
Video capture and rendering on Windows, Mac OS X and Linux.
Video codecs: H.264 and H.263 (VP8 coming in early 2013)
…
More Info
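For orientation, the basic usage pattern looks roughly like this (condensed from the AVTransmit example in the libjitsi source tree; exact package names and signatures vary between libjitsi versions):

```java
// Condensed from the AVTransmit example shipped with libjitsi;
// package names and signatures vary between libjitsi versions.
import org.jitsi.service.libjitsi.LibJitsi;
import org.jitsi.service.neomedia.*;
import org.jitsi.service.neomedia.device.MediaDevice;

public class AudioSendSketch {
    public static void main(String[] args) throws Exception {
        LibJitsi.start();
        try {
            MediaService mediaService = LibJitsi.getMediaService();

            // Pick the default audio capture device and build a stream on it.
            MediaDevice device =
                    mediaService.getDefaultDevice(MediaType.AUDIO, MediaUseCase.CALL);
            MediaStream stream = mediaService.createMediaStream(device);
            stream.setDirection(MediaDirection.SENDONLY);

            // In the full example, a StreamConnector and MediaStreamTarget
            // are configured here with local and remote RTP/RTCP addresses.
            // stream.setConnector(...);
            // stream.setTarget(...);

            stream.start();
            Thread.sleep(10_000); // transmit for ten seconds
            stream.close();
        } finally {
            LibJitsi.stop();
        }
    }
}
```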
I want to read in a live video stream (e.g. RTSP), run some basic processing on it, and display it on a website. What are some good ways to do this? I have used OpenCV for Python before but found it to be a hassle. I am also familiar with Java and C++ if there are better libraries available in those languages. I haven't done much web development before either.
What kind of live video source do you mean? If you don't intend to do this in code, you can use the free VLC player to act as a streaming service between any kind of media source (file, network, capture device, disc) and your web video client.
But if you intend to do this in code, you can use the VLCJ library. Other options are Xuggler or FMJ.
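With vlcj, for example, reading an RTSP source and re-publishing it over HTTP takes only a few lines. This is a sketch against the vlcj 3.x API; the camera URL, codecs, and port are placeholders:

```java
import uk.co.caprica.vlcj.player.MediaPlayer;
import uk.co.caprica.vlcj.player.MediaPlayerFactory;

public class RtspRelay {
    public static void main(String[] args) throws InterruptedException {
        // Assumes VLC's native libraries are installed and discoverable.
        MediaPlayerFactory factory = new MediaPlayerFactory();
        MediaPlayer player = factory.newHeadlessMediaPlayer();
        // The :sout option mirrors VLC's command-line streaming syntax:
        // transcode the RTSP input and re-serve it over HTTP on port 8080.
        player.playMedia("rtsp://camera.example.com/stream",
                ":sout=#transcode{vcodec=h264,acodec=mpga}:std{access=http,mux=ts,dst=:8080/live}",
                ":sout-keep");
        Thread.currentThread().join(); // keep the JVM alive while streaming
    }
}
```

Note that this just relays the stream; for per-frame processing you would need Xuggler/OpenCV-style frame access instead.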
I'm interested in doing image processing in Java with frames collected from a network video adapter. The first challenge is finding network video adapters/cameras that don't require an ActiveX control (and therefore IE) for PTZ control. Then the issue is how to grab still images from network video adapters that only make MP4 available.
Does anyone know of some Java friendly network video cameras and adapters?
Anyone know of some Java code to control PTZ on a network camera?
There are two ways in Java that I know of. The first (and the one I currently recommend) is the LTI-Civil project. The second is to use Xuggler, which uses FFmpeg's webcam code behind the scenes.
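Also, if the camera exposes an HTTP snapshot endpoint (many IP cameras do, though the URL path differs per vendor and the one below is made up), still-image grabs need nothing beyond the JDK:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.net.URL;
import javax.imageio.ImageIO;

public class SnapshotGrabber {
    public static void main(String[] args) throws Exception {
        // The snapshot URL is hypothetical; consult your camera's HTTP API.
        URL snapshot = new URL("http://192.168.1.50/snapshot.jpg");
        BufferedImage frame = ImageIO.read(snapshot);
        // Hand the frame to your image-processing code, or save it:
        ImageIO.write(frame, "jpg", new File("frame.jpg"));
    }
}
```

PTZ control on such cameras is often driven the same way, via vendor-specific HTTP CGI calls, which avoids the ActiveX/IE problem entirely.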