I am trying to simulate a live view using a Canon camera.
I am interacting with the camera using the Canon SDK, and I fetch an image at short intervals in order to simulate a video frame by frame. This works fine; I am using Java for the backend and sending the images through BlazeDS to Flex.
The problem is not getting the image; the problem is that when I load a new image using something like:
image.source=my_new_image;
the new image is loaded, but it produces a short white blink that ruins the video.
So I would like to know whether there is a way to update an image in Flex without the blinking problem, or whether I could stream video from Java and pick it up in Flex.
Thanks in advance!!!
The easy way is to use a technique called double buffering, using two Loaders - one for the image which is visible, and one for the image which is being loaded and is invisible. When the image has completed loading it becomes visible, and the other one becomes invisible and the process repeats.
In terms of efficiency, it would be better to at least use a socket connection to the server for transferring the image bytes, preferably in AMF format since it has little overhead. This is all fairly possible in BlazeDS with some scripting.
For better efficiency you may try using a real-time frame or video encoder on the server, however decoding the video on the client will be challenging. For best performance it will be better to use the built-in video decoder and a streaming server such as Flash Media Server.
UPDATE (example script):
This example loads images over HTTP. A more efficient approach would be to use an AMF socket (mentioned above) to transfer the image, then use Loader.loadBytes() to display it.
private var loaderA:Loader;
private var loaderB:Loader;
private var foregroundLoader:Loader;
private var backgroundLoader:Loader;

public function Main()
{
    loaderA = new Loader();
    loaderB = new Loader();
    foregroundLoader = loaderA;
    backgroundLoader = loaderB;
    loadNext();
}

private function loadNext():void
{
    trace("loading");
    backgroundLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, loaderCompleteHandler);
    backgroundLoader.load(new URLRequest("http://www.phpjunkyard.com/randim/randim.php?type=1"));
}

private function loaderCompleteHandler(event:Event):void
{
    trace("loaded");
    var loaderInfo:LoaderInfo = event.target as LoaderInfo;
    var loader:Loader = loaderInfo.loader;
    loader.contentLoaderInfo.removeEventListener(Event.COMPLETE, loaderCompleteHandler);
    if (contains(foregroundLoader))
        removeChild(foregroundLoader);
    var temp:Loader = foregroundLoader;
    foregroundLoader = backgroundLoader;
    backgroundLoader = temp;
    addChild(foregroundLoader);
    loadNext();
}
Overview
I would like to use a custom video source to live stream video via the WebRTC Android implementation. If I understand correctly, the existing implementation only supports the front and back facing cameras of Android phones. The following classes are relevant in this scenario:
Camera1Enumerator.java
VideoCapturer.java
PeerConnectionFactory
VideoSource.java
VideoTrack.java
Currently, to use the front facing camera on an Android phone, I'm doing the following steps:
CameraEnumerator enumerator = new Camera1Enumerator(false);
VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
VideoSource videoSource = peerConnectionFactory.createVideoSource(false);
videoCapturer.initialize(surfaceTextureHelper, this.getApplicationContext(), videoSource.getCapturerObserver());
VideoTrack localVideoTrack = peerConnectionFactory.createVideoTrack(VideoTrackID, videoSource);
My scenario
I have a callback handler that receives the video buffer as a byte array from the custom video source:
public void onReceive(byte[] videoBuffer, int size) {}
How would I be able to send this byte array buffer as a video frame? I'm not sure about the solution, but I think I would have to implement a custom VideoCapturer?
Existing questions
This question might be relevant, though I'm not using the libjingle library, only the native WebRTC Android package.
Similar questions/articles:
For the iOS platform, but unfortunately the answers did not help in my case.
For the native C++ platform.
An article about a native implementation.
There are two possible solutions to this problem:
Implement a custom VideoCapturer and create a VideoFrame using the byte[] stream data in the onReceive handler. There is actually a very good example, FileVideoCapturer, which implements VideoCapturer (a minimal sketch of this approach follows the example below).
Simply construct a VideoFrame from an NV21Buffer, which is created from our byte array stream data. Then we only need to use our previously created VideoSource to capture this frame. Example:
public void onReceive(byte[] videoBuffer, int size, int width, int height) {
    long timestampNS = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
    NV21Buffer buffer = new NV21Buffer(videoBuffer, width, height, null);
    VideoFrame videoFrame = new VideoFrame(buffer, 0, timestampNS);
    videoSource.getCapturerObserver().onFrameCaptured(videoFrame);
    videoFrame.release();
}
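For the first option, here is a minimal sketch of what such a capturer could look like. This is only a sketch under assumptions: the class name ByteArrayVideoCapturer and the pushFrame() method are illustrative (they are not part of the WebRTC API), and it assumes the incoming byte[] buffers are already NV21 frames; the org.webrtc types used are the same ones as in the example above.

import java.util.concurrent.TimeUnit;
import android.content.Context;
import android.os.SystemClock;
import org.webrtc.*;

// Hypothetical capturer that forwards externally received NV21 buffers into WebRTC.
public class ByteArrayVideoCapturer implements VideoCapturer {
    private CapturerObserver observer;

    @Override
    public void initialize(SurfaceTextureHelper helper, Context context, CapturerObserver observer) {
        this.observer = observer;
    }

    @Override
    public void startCapture(int width, int height, int framerate) {
        observer.onCapturerStarted(true);
    }

    // Call this from your onReceive(...) callback to feed frames into WebRTC.
    public void pushFrame(byte[] nv21, int width, int height) {
        long timestampNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
        VideoFrame frame = new VideoFrame(new NV21Buffer(nv21, width, height, null), 0, timestampNs);
        observer.onFrameCaptured(frame);
        frame.release();
    }

    @Override public void stopCapture() { observer.onCapturerStopped(); }
    @Override public void changeCaptureFormat(int width, int height, int framerate) { }
    @Override public void dispose() { }
    @Override public boolean isScreencast() { return false; }
}

You would then pass an instance to videoCapturer.initialize(...) exactly as in the question and call pushFrame() from onReceive().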
I am getting live video feedback from a Parrot AR.Drone 2.0. I am able to get the incoming video stream from the drone (using the command ffplay tcp://192.168.1.1:5555) and successfully display the live video. I notice that ffplay displays the video in its own window.
So, is it possible to redirect the video into our own Java frame in the application? How could I achieve that if I wish to trigger that function from my own JCheckBox? E.g. when I click the JCheckBox, it should automatically get the live video stream from the drone and display it in the application instead of in the ffplay window.
This question is old, but I just found it, so I decided to write a possible solution for other users.
There are two ways to solve this: wrap the FFmpeg CLI for Java yourself, or use a library that already wraps FFmpeg. The former will take quite some time and effort, while the latter is more practical to use.
For example, a nice library is JavaCV. You just need to add the Maven dependency to your project's pom.xml file:
<dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>javacv-platform</artifactId>
    <version>1.4.3</version>
</dependency>
Then you can create a SimplePlayer class that uses the FFmpegFrameGrabber class to decode each frame, which is converted into an image and displayed in your Java app.
import javafx.animation.AnimationTimer;
import javafx.application.Platform;
import javafx.embed.swing.SwingFXUtils;
import javafx.scene.image.Image;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.Java2DFrameConverter;

public class SimplePlayer
{
    private static volatile Thread playThread;
    private AnimationTimer timer;
    private int counter;

    public SimplePlayer(String source, GrabberListener grabberListener)
    {
        if (grabberListener == null) return;
        if (source.isEmpty()) return;
        counter = 0;
        playThread = new Thread(() -> {
            try {
                FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(source);
                grabber.start();
                grabberListener.onMediaGrabbed(grabber.getImageWidth(), grabber.getImageHeight());
                Java2DFrameConverter converter = new Java2DFrameConverter();
                while (!Thread.interrupted()) {
                    Frame frame = grabber.grab();
                    if (frame == null) {
                        break;
                    }
                    if (frame.image != null) {
                        Image image = SwingFXUtils.toFXImage(converter.convert(frame), null);
                        Platform.runLater(() -> {
                            grabberListener.onImageProcessed(image);
                        });
                    }
                }
                // Clean up once the stream ends or the thread is interrupted.
                grabber.stop();
                grabber.release();
                Platform.exit();
            } catch (Exception exception) {
                System.exit(1);
            }
        });
        playThread.start();
    }

    public void stop()
    {
        playThread.interrupt();
    }
}
You can find the full implementation in this GitHub repository.
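For completeness, a minimal usage sketch; it assumes GrabberListener is the two-method callback interface used above, and that imageView is a javafx.scene.image.ImageView you have already placed in your scene:

SimplePlayer player = new SimplePlayer("tcp://192.168.1.1:5555", new GrabberListener() {
    public void onMediaGrabbed(int width, int height) {
        // e.g. resize the view to the incoming video dimensions
    }
    public void onImageProcessed(Image image) {
        imageView.setImage(image); // imageView is an assumed JavaFX ImageView
    }
});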
You need to decode the almost-but-not-quite H264 video format that the AR.Drone uses. To do that, you need to do two things:
Handle the AR.Drone's custom video framing, which uses headers in their PaVE format. The format is documented in section 7.3 of the AR.Drone Developer Guide, but almost all of the existing AR.Drone libraries have code to handle PaVE headers (a hedged parsing sketch follows this list).
Decode the H264 video frames that remain. Xuggler wraps native libraries with Java, and is probably the best, fastest way to decode H264. An alternative is the h264j library, which is pure Java, but is slower and has some decoding glitches.
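As a hedged sketch of the first step, the following reads one PaVE-framed unit from the TCP stream on port 5555 and returns the raw H.264 payload. The field offsets assumed here (the 4-byte "PaVE" signature, one byte each for version and codec, then a little-endian uint16 header_size at offset 6 and uint32 payload_size at offset 8) should be verified against section 7.3 of the Developer Guide; the method name is illustrative.

import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Reads one PaVE-framed unit and returns the raw H.264 payload (assumed field layout).
static byte[] readPaveFrame(DataInputStream in) throws IOException {
    byte[] start = new byte[12];
    in.readFully(start);
    if (start[0] != 'P' || start[1] != 'a' || start[2] != 'V' || start[3] != 'E') {
        throw new IOException("Lost PaVE synchronisation");
    }
    ByteBuffer header = ByteBuffer.wrap(start).order(ByteOrder.LITTLE_ENDIAN);
    int headerSize = header.getShort(6) & 0xFFFF; // total header length (uint16, little-endian)
    int payloadSize = header.getInt(8);           // H.264 payload length (uint32, little-endian)
    in.skipBytes(headerSize - 12);                // skip the remaining header fields
    byte[] payload = new byte[payloadSize];
    in.readFully(payload);                        // hand this buffer to the H.264 decoder
    return payload;
}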
For more info, see these related questions:
Xuggler and playing from live stream
https://stackoverflow.com/questions/30307003/use-a-xuggler-videostream-in-javacv
I have built a simple app with Xcode 5, using very basic functions. As my app is going to have a large target audience, I want it to support different languages. I have done the translation part, but what I want is a view controller that displays the language selection only the first time the app is opened. I am new to developing apps, so please explain it to me in detail. Thanks in advance.
Use NSUserDefaults to store data between app launches. You will need something like:
static NSString * const kShowIntroductionKey = @"ShowIntroductionKey";

- (void)showOnFirstLaunch
{
    NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
    BOOL wasIntroductionShowed = [userDefaults boolForKey:kShowIntroductionKey];
    if (!wasIntroductionShowed) {
        // Show your screen here!
        [userDefaults setBool:YES forKey:kShowIntroductionKey];
        [userDefaults synchronize];
    }
}
Also, it may be better to use the native iOS localization mechanism.
I have a list that contains about 20 image URLs and some other things.
I want to display the other things (description) and allow the user to interact with the app while I load the 20 images.
What I noticed is that, no matter what I tried, I can't interact with the form until the images have finished loading, even though I am doing the loading in another thread.
This is the solution I am using now.
private Container createServerItems() throws Exception {
    Container list = new Container(new BoxLayout(BoxLayout.Y_AXIS));
    final int size = mediaList.size();
    final Button buttons[] = new Button[size];
    System.out.println("In here: " + size);
    for (int i = 0; i < size; i++) {
        Container mainContainer = new Container(new BorderLayout());
        final Media m = new Media();
        m.fromJSONString(mediaList.elementAt(i).toString());
        buttons[i] = new Button("please wait");
        final int whichButton = i;
        Display.getInstance().callSerially(new Runnable() {
            public void run() {
                try {
                    System.out.println(MStrings.replaceAll(m.getImgURL(), "\"", ""));
                    final StreamConnection streamConnection = (StreamConnection) Connector.open(MStrings.replaceAll(m.getImgURL(), "\"", ""));
                    Image image = Image.createImage(streamConnection.openInputStream());
                    streamConnection.close();
                    buttons[whichButton].setText("");
                    buttons[whichButton].setIcon(image.scaled(32, 32));
                } catch (Exception e) {
                }
            }
        });
        TextArea t = new TextArea(m.getDesc());
        t.setEditable(false);
        t.setFocusable(false);
        t.setGrowByContent(true);
        mainContainer.addComponent(BorderLayout.WEST, buttons[i]);
        mainContainer.addComponent(BorderLayout.CENTER, t);
        list.addComponent(mainContainer);
    }
    return list;
}
APPROACH I: LWUIT 1.5 has a powerful LWUIT4IO library to address your problem.
An excerpt from Shai's blog (link):
A feature in LWUIT4IO to which I didn't give enough spotlight is the cache map; it's effectively a lean hashtable which stores its data using weak/soft references (depending on the platform) and falls back to storage when not enough memory is available. It's a great way to cache data without going overboard. One of the cool things about it is the fact that we use it seamlessly for our storage abstraction (which hides RMS or equivalent services), in effect providing faster access to RMS storage, which is often slow on devices.
Another useful link is here
The idea is to delegate the Network IO functionality to a singleton to avoid any UI deadlocks, like the one you are facing.
A very good video demo here by vprise explains how to bind this functionality into your GUI using NetBeans. At around 7:00 minutes the video explains the use of the ImageDownloadService class, which binds a component to its thumbnail URL and will seamlessly fetch it from the network and populate the Image.
APPROACH II: the more difficult option, creating the custom logic yourself (a rough sketch follows the list below):
Create a singleton that will interface with the network to fetch the data.
Use a queue to handle the sequential image download services.
Create a new thread for this singleton and wait on the queue.
With each image download service, bind a listener to the invoking component so that it is easier to update the right component.
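A rough sketch of that approach in plain Java (the class and method names are illustrative, and the actual image fetch and component update are only indicated by comments):

import java.util.Vector;

// Illustrative singleton: one worker thread drains a FIFO of download jobs,
// so the Event Dispatch Thread is never blocked by network IO.
class ImageDownloadQueue implements Runnable {
    private static final ImageDownloadQueue INSTANCE = new ImageDownloadQueue();
    private final Vector jobs = new Vector();

    static ImageDownloadQueue getInstance() { return INSTANCE; }

    private ImageDownloadQueue() {
        new Thread(this).start();
    }

    synchronized void enqueue(Runnable job) {
        jobs.addElement(job);
        notify();
    }

    public void run() {
        while (true) {
            Runnable job;
            synchronized (this) {
                while (jobs.isEmpty()) {
                    try { wait(); } catch (InterruptedException e) { return; }
                }
                job = (Runnable) jobs.elementAt(0);
                jobs.removeElementAt(0);
            }
            // The job fetches the image over the network, then uses
            // Display.getInstance().callSerially() to update the listening component.
            job.run();
        }
    }
}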
According to the LWUIT spec, callSerially() executes on the Event Dispatch Thread, which means it will block other events until it completes. You need to move the code that loads the image out of that method and keep only the setText and setIcon calls inside callSerially().
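A sketch of that change, reusing only the APIs already present in the question's code (m, buttons and whichButton are assumed to be in scope as in the original loop):

// Do the network IO on a plain worker thread...
new Thread(new Runnable() {
    public void run() {
        try {
            StreamConnection conn = (StreamConnection) Connector.open(
                    MStrings.replaceAll(m.getImgURL(), "\"", ""));
            final Image image = Image.createImage(conn.openInputStream());
            conn.close();
            // ...and only touch the UI on the Event Dispatch Thread.
            Display.getInstance().callSerially(new Runnable() {
                public void run() {
                    buttons[whichButton].setText("");
                    buttons[whichButton].setIcon(image.scaled(32, 32));
                }
            });
        } catch (Exception e) {
            // log or show the failure
        }
    }
}).start();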
I am building a web application in Java where I want a full screenshot of a webpage, given the URL of the webpage as input.
The basic idea I have is to capture the display buffer of the rendering component. I have no idea how to do it.
Please help.
There's a little trick I used for this app:
(Demo screenshot: a Java application capturing the blog.stackoverflow.com page.)
The problem is you need to have a machine devoted to this.
So, the trick is quite easy.
Create an application that takes as argument the URL you want to fetch.
Then open it with Desktop.browse(uri), which will launch the current web browser.
And finally take the screenshot with java.awt.Robot and save it to disk.
Something like:
class WebScreenShot {
    public static void main( String [] args ) throws Exception {
        // browse() opens the URL in the default browser; open() only accepts files
        Desktop.getDesktop().browse( new URI( args[0] ) );
        Thread.sleep( 5000 ); // give the page some time to render
        Robot robot = new Robot();
        BufferedImage image = robot.createScreenCapture( new Rectangle( Toolkit.getDefaultToolkit().getScreenSize() ) );
        saveToDisk( image );
    }
}
This solution is far from perfect, because it needs the whole OS, but if you can have a VM devoted to this app, you can crawl the web and take screenshots of it quite easily.
The problem with making this a non-intrusive app is that, to date, there is no good HTML rendering engine for Java.
For a pure-Java solution that can scale to support concurrent rendering, you could use a Java HTML4/CSS2 browser, such as Cobra, that provides a Swing component for the GUI. When you instantiate this component, you can call its paint(Graphics g) method to draw itself into an off-screen image.
E.g.
Component c = ...; // the browser component
BufferedImage bi = new BufferedImage(c.getWidth(), c.getHeight(), BufferedImage.TYPE_INT_RGB);
Graphics2D g = bi.createGraphics();
c.paint(g);
You can then use the Java image API to save this as a JPG.
JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(new FileOutputStream("screen.jpg"));
encoder.encode(bi); // encode the buffered image
Java-based browsers typically pale in comparison with the established native browsers. However, as your goal is static images, and not an interactive browser, a java-based browser may be more than adequate in this regard.