I'm currently working on a remote desktop administration project. I'm using the Robot class to capture images and send them over the network. It works well, but it's a bit slow.
Because we have to capture and send the whole image every time, it's too costly. Is it possible to detect only the portion of the screen that has changed and send only that portion?
Can anyone guide me on this? Thank you!
The keyword you're looking for (in order to be able to look this up and figure out the solution yourself) is dirty rectangles.
You can look into some code here.
I looked into this a while back, and the image capture in Robot is implemented particularly inefficiently. I don't recall the specific details, but it was pretty bad the way they did it. I felt, at the time, that the only way to do it better would be to implement it in JNI, which you could use JNA to shortcut.
I don't know if any platform's screen capture routines will let you grab only the changed sections, but you could implement a decent image diff, although that could get expensive too. You would really need to measure what's going on to see if it works for you.
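A minimal sketch of that image-diff idea (class and method names here are just illustrative): compare the previous capture with the current one and compute the bounding box of the pixels that changed, i.e. a single dirty rectangle.

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class DirtyRegion {

    // Returns the bounding box of all pixels that differ between two
    // same-sized frames, or null if nothing changed.
    public static Rectangle changedBounds(BufferedImage prev, BufferedImage curr) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = -1, maxY = -1;
        for (int y = 0; y < curr.getHeight(); y++) {
            for (int x = 0; x < curr.getWidth(); x++) {
                if (prev.getRGB(x, y) != curr.getRGB(x, y)) {
                    if (x < minX) minX = x;
                    if (y < minY) minY = y;
                    if (x > maxX) maxX = x;
                    if (y > maxY) maxY = y;
                }
            }
        }
        if (maxX < 0) {
            return null; // frames are identical
        }
        return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}
```

You would then send only curr.getSubimage(r.x, r.y, r.width, r.height) together with the rectangle's coordinates, so the receiver knows where to paste the patch. Per-pixel getRGB is itself not cheap, which is why you need to measure whether the diff pays for itself.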
Related
I'm trying to stream a part of my screen to another computer using Java. I've already tried using a Robot to take screenshots at an interval controlled by a timer, which worked well. But the streaming doesn't work well with ImageIO and an image stream - the framerate is just too low. I've already searched around, but all I could find were similar problems.
My Questions are:
Is there a library to compress the images created by the Robot?
Has anyone done something like this before?
Am I doing this completely wrong and there is a better way?
You are trying to do what VNC does. See
https://de.wikipedia.org/wiki/Virtual_Network_Computing - you may look into a VNC implementation for actual code. The compression is not that easy, since it needs to be fast (real-time) and still offer a good compression ratio over slow networks.
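For the compression part of the question: ImageIO in the JDK can already write JPEG with an adjustable quality setting, which usually shrinks a screenshot far more than PNG. A minimal in-memory sketch (the 0.0-1.0 quality value is up to you; screenshots with an alpha channel should be converted to TYPE_INT_RGB first):

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class JpegEncoder {

    // Encodes one frame to JPEG bytes at the given quality (0.0 = smallest, 1.0 = best).
    public static byte[] toJpeg(BufferedImage frame, float quality) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality);

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ImageOutputStream out = ImageIO.createImageOutputStream(bytes);
        writer.setOutput(out);
        writer.write(null, new IIOImage(frame, null, null), param);
        out.close();
        writer.dispose();
        return bytes.toByteArray();
    }
}
```

That still encodes every frame from scratch; VNC-style encoders get their speed from only re-encoding the regions that changed.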
I am currently working on an RCP application in which I can draw an internal block diagram.
Maybe most of you know "Papyrus" from Eclipse. This modelling tool provides an internal block diagram, but I think it is a little bit overloaded, so I decided to do it on my own.
I found this awesome tutorial:
https://www.vainolo.com/tutorials/gef-tutorials/
It helped me a lot in understanding how GEF works, but one thing is not explained: how to draw ports. In the picture below you can see what I am capable of.
I am trying to modify my application so that a user is able to draw ports, like in the next picture:
Does somebody know how this is done in GEF?
As far as I understand it, it has something to do with the figure of a node. Every node has a figure which is displayed inside the diagram. A port is an extension of the edge of a node, yet normally it is not possible to extend beyond the edge of a node. So I think that Papyrus uses a different approach to make this happen.
I tried to look at the source code of Papyrus, but I found nothing, nor any documentation about it...
I am thankful for every opinion.
Papyrus uses GMF to create these ports.
You may check the classes whose names contain BorderItem, for example AbstractBorderItemEditPart.
Be careful: the tutorial you are following seems to have been written for GEF 3.
As far as I know, there is no "easy" way to manage ports in pure GEF 3.
There was a major change in GEF last year; you should be able to create ports easily with the new GEF4 API.
I am looking for an addition to our "livestream and podcast" solution, which uses a camera to film speeches in our house.
It has been requested that viewers be able to see the speaker's slides directly as an image in the web browser instead of the video stream. We can't / don't want to install software on the speaker's laptop, so I thought about a Java applet which the speaker can simply run via a web browser.
So what I need is technically this:
[speaker's laptop] -> [screen capture every N seconds via an applet on a webpage] -> [display of the speaker's screen on a different webpage for the external viewers]
I know there are Java applications which record the screen but save the output locally. I need something that does the same, but sends the image to a server. On the server side I thought about a websocket/Node.js script accepting and displaying the image (other suggestions are welcome).
It would be great if somebody could help me out here. By the way, I have never programmed in Java, so just telling me which frameworks I need won't really help me.
Thanks!!
I was recently asked to evaluate the possibilities for live screen-casting via an applet. Most Java video APIs do not support codecs with high enough compression (e.g. JMF). Some APIs can handle more advanced formats (JFFMPEG, Xuggle) but rely on natives. While natives are normally no problem for an app launched (free floating) using Java Web Start or a Plug-In 2 applet, the makers of Xuggle identify 'the order of loading natives' as a problem (i.e. it won't work) for both JWS and applets.
It is a pity that, more than a decade into its development, Java has no reasonable API for video capture/processing that can be deployed in a GUI for wide use (applet/JWS based, for the 'general public').
Perhaps you can find a solution using Flash.
Update 1
In fact, I do not need the screen to be recorded as a video.
In fact, you mentioned much of that in your initial question, but I focused on just a few keywords before drafting a reply. My bad. :P
OK.
Getting an image is relatively easy. An applet would need to be trusted in order to take a screenshot, but once trusted, it is just a few lines of code to get the image.
Encoding the image to a JPEG at a particular quality/compression setting (in memory) is also doable.
Sending the image to the server will depend on the size in bytes and the connection speed, but one highly compressed image every 10 seconds should be doable. The server would need to implement functionality to accept the image.
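As a rough sketch of those three steps on the client side (the upload URL is hypothetical, and the server still has to be configured to accept the POST):

```java
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.imageio.ImageIO;

public class SlideUploader {

    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();

        // 1. Grab the whole screen.
        BufferedImage shot = robot.createScreenCapture(new Rectangle(screen));

        // 2. Encode it as JPEG in memory (default quality here).
        ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
        ImageIO.write(shot, "jpg", jpeg);

        // 3. POST the bytes to a hypothetical upload endpoint on the server.
        URL url = new URL("http://example.com/upload-slide");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "image/jpeg");
        OutputStream out = conn.getOutputStream();
        jpeg.writeTo(out);
        out.close();
        System.out.println("Server responded: " + conn.getResponseCode());
        conn.disconnect();
    }
}
```

Run that every N seconds from a timer and you essentially have the whole client.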
As far as displaying the image on the client goes, it seems you already have some ideas based around JS. If you can make that work, that would be optimal, since it can then be viewed in browsers without Java.
I would still recommend you deploy the app to the 'speaker' using Java Web Start rather than embed an applet. A JWS app will give you fewer deployment and maintenance troubles, and the JWS launch is ..nicer. Further, a free-floating frame launched using JWS can minimize itself (or, in later JREs, become transparent) while the screen image is taken, thereby capturing everything on the screen except itself.
Update 2
I actually found this code here.
That is ..horrible. Not the code, the site. When I visited it I got a message saying a pop-up had been suppressed (fair enough). Then there was the irritating 'vibrating dialog' hovering in the middle of the page (and following the scroll). You click the little x and see - another tab opens with yet another floating dialog, saying some other rubbish about how "You've won.." - with sound loud enough to drown out my high-volume trance/dance playlist.
Then, after closing all that the hell out of my FF, I go back to the original page, close the damn 'dialog', scroll down and see.. a red background behind the code (shudder). That was as far as I could manage. I closed the page with the code.
Try this code instead, for a single screen-shot.
Would it be possible to use this on the client side..
Yes.
.. and receive it with javascript on the server side?
Not really - unless you mean an IIS-based server running Microsoft's JScript. JavaScript is a client-side technology.
For security reasons, servers need to protect themselves, e.g. from:
Someone creating a slave-bot that uploads all the thousands of documents on the slave machine to the site - to make it crash.
People hijacking your server for storing and serving bestiality porn (or worse).
Because of things like that (bad people have lots of imagination), while servers can easily accept uploads, they are generally not configured to allow them by default.
.. (I don't want Java on my server ;-)
It can be done using PHP, ASP, CGI, etc. It does not need Java specifically, but it does need some active involvement from the server, if only to check the size of what is being uploaded and abort if it gets too large!
..Will take a look at the link you posted, but as I said, I can't program in Java, though I can understand some of it. Thanks!
It sounds like you'll need some help getting the server-side of it ready, as well. It is trivial for someone that knows how (not me), but a potential security nightmare for the inexperienced.
Update 3
where do I add the function to send the picture?
Sorry, I've not tried to implement that. You'd want to encode the image to JPEG before sending it, to reduce the size. See this code for how to provide an adjustable compression/quality setting where the user can see the effect.
There are various ways to get an image to a server, e.g. sockets, HTTP, FTP.. AFAIU it would depend on how the server is accepting it. I am unfamiliar with the specific term 'websocket' or the Node.js script. Can you link to what you mean?
..the old code added to pastebin, so it's readable
Smart thinking. I notice it uses sockets; it was in the back of my mind that sockets would be best for this, since they have low overhead and short wait times.
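If the server side does end up being a plain socket, the sending half can stay very small - a sketch along these lines (host and port are placeholders), writing a length prefix so the receiver knows how many bytes belong to each frame:

```java
import java.io.DataOutputStream;
import java.net.Socket;

public class SocketSender {

    // Sends one already-encoded (e.g. JPEG) frame over a fresh connection.
    public static void sendFrame(String host, int port, byte[] frameBytes) throws Exception {
        Socket socket = new Socket(host, port);
        try {
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            out.writeInt(frameBytes.length);   // length prefix
            out.write(frameBytes);             // the frame itself
            out.flush();
        } finally {
            socket.close();
        }
    }
}
```

The receiving end reads the int, then exactly that many bytes, and hands them to whatever displays the image.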
I'm toying with an idea for a Java application to automate a process that I have to do regularly, and before I start any coding I thought I would seek advice on the best way to approach it.
Basically, the application I use has a large number of images present on the screen at any one time, and what I would like to know is whether there is a way to have Java identify if any two of these images are the same. If they are, I would like to automate mouse movement and button clicks.
After a bit of reading, I'm thinking that the PixelGrabber and Robot classes might be the right way to start, but as I said, I'm looking for any information on this that can be offered.
What are your suggestions?
I believe the Robot class and a PixelGrabber would be sufficient. If you are inclined to program the solution yourself, perhaps for educational purposes, by all means please do. If, however, you don't want to reinvent the wheel, you may take a look at this project:
http://sikuli.org/
I, for example, use it to do things that would be hard to achieve with Selenium alone. If you still can't achieve your goal after some scripting, Sikuli provides a nice API which you can use from inside your Java program.
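If you do roll it yourself, a minimal sketch of the idea (the screen regions and click coordinates below are placeholders - you would point them at wherever the two images actually sit):

```java
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.image.BufferedImage;

public class ImageMatcher {

    // True if the two captured regions are pixel-for-pixel identical.
    static boolean sameImage(BufferedImage a, BufferedImage b) {
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return false;
        }
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();

        // Placeholder regions for the two on-screen images.
        BufferedImage first = robot.createScreenCapture(new Rectangle(100, 100, 64, 64));
        BufferedImage second = robot.createScreenCapture(new Rectangle(300, 100, 64, 64));

        if (sameImage(first, second)) {
            // Click the centre of the first image.
            robot.mouseMove(132, 132);
            robot.mousePress(InputEvent.BUTTON1_MASK);
            robot.mouseRelease(InputEvent.BUTTON1_MASK);
        }
    }
}
```

Sikuli does essentially this, plus fuzzy matching, which matters if the images are scaled or anti-aliased differently.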
The Robot class would be sufficient to take images and inspect their pixels. But it might make more sense to recreate your desktop with the images inside a Java application (a very simple gallery application); then the operations become simpler. I do not see any other way of realizing these operations.
I am a cameraman and I want to make an app for my Moto Droid that will calculate my depth of field given four inputs.
I am literally brand new to JavaScript and to programming in general, so I was wondering if anyone could help me out.
I have a very basic GUI set up using DroidDraw which allows me to input my 4 variables, which are:
Focus (#+id/focust)
Focal Length (#+id/flt)
Aperture (#+id/apt)
Circle of Confusion (#+id/coct)
The equations for this calculation are located here.
For example...
To get the hyperfocal distance I need to compute: ((f^2) / (N * c)) + f
All of these variables will be drawn from the inputs in the GUI, but I don't know how to reference them, how to write the actual math, or how to get the results to appear in the "results area" at the bottom of the screen.
I've never done Java before, and I only want to make this app because the existing ones don't fit my needs.
Can someone help?
Thanks!
If I'm not mistaken, DroidDraw is a tool for building the XML user interface description used by the Java API. If you want to program for Android in JavaScript, something like PhoneGap might be a better choice. It lets you build real Android applications using HTML and JavaScript.
On the other hand, if you want to use the XML and Java APIs, then you should probably run through the Android tutorials. The first one is Hello, World.
Since you're just getting started with programming, I can't stress tutorials enough. It's true that your idea shouldn't be too hard to implement, but you need to understand the basics first.
I don't mean to give the impression that one style (PhoneGap vs. Java and XML) is better. For your purposes, either should be fine. It's more a question of what you prefer. Java/XML is the paradigm supported by Google, and provides access to more functionality. On the other hand, if you already know HTML or JavaScript (or are interested in learning them), PhoneGap will certainly provide everything you need. I think PhoneGap is also intended to make it easier for beginners, though I haven't used it, so I don't know how successful they have been.
The XML file that is generated by DroidDraw can't be used within PhoneGap. If you do choose to use PhoneGap, then you will need to build the interface in HTML. You might be able to use something like DreamWeaver or FrontPage or one of any number of HTML editors to help you with this step.
The XML file is just a description of an interface. When you start your application, the Android platform uses this description to build the user interface that you see. Once that has happened, you can move data from the interface to Java, or from Java to the interface, without any hassle. You certainly won't be limited by the XML interface description - it's pretty flexible.
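To make that concrete, here is a rough sketch of the Java/XML route using the IDs from the question (the layout name main, the button id calculate and the results TextView id resultt are made up for illustration):

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.EditText;
import android.widget.TextView;

public class DofActivity extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main); // the layout DroidDraw generated

        final EditText focalLength = (EditText) findViewById(R.id.flt);
        final EditText aperture = (EditText) findViewById(R.id.apt);
        final EditText coc = (EditText) findViewById(R.id.coct);
        final TextView results = (TextView) findViewById(R.id.resultt); // hypothetical results area

        findViewById(R.id.calculate).setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                // Read the GUI inputs as numbers.
                double f = Double.parseDouble(focalLength.getText().toString());
                double n = Double.parseDouble(aperture.getText().toString());
                double c = Double.parseDouble(coc.getText().toString());

                // Hyperfocal distance: ((f^2) / (N * c)) + f
                double hyperfocal = (f * f) / (n * c) + f;
                results.setText("Hyperfocal distance: " + hyperfocal);
            }
        });
    }
}
```

The focus-distance field (focust) would be read the same way once you add the near/far depth-of-field formulas.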
If you've been going through the Android tutorials, then it might be best to forget that I even mentioned PhoneGap. It's a wildly different alternative that is the right choice for some people and some applications. But the Android tutorials won't help you to understand it. I only brought it up because you mentioned JavaScript in your original post.