I am working on a GUI in JavaFX which needs to composite a large number of objects (often using alpha masks and similar) on the canvas.
For comparison, on the HTML5 canvas this can easily be done with the drawImage function and a temporary canvas object outside the DOM structure. For example, to draw an image on the canvas with an alpha mask, I first draw the image on the temporary canvas, draw (i.e. blit) the mask over it using globalCompositeOperation = "destination-in", then draw the temporary canvas onto the original one using the source-over composite mode. The temporary canvas can be re-used for each such operation. Easy as pie.
However, from what I can see so far, the recommended way of doing this in JavaFX is to use grouped layers, i.e. multiple overlaid canvas nodes which never get "flattened".
I could have done it like this in HTML5 too but in my most recent project this would have resulted in dozens or hundreds of visible layers which is obviously extremely silly. My approach gave me excellent performance.
That being said, is there a reasonable way to do the same thing on the JavaFX canvas? I consider manually performing pixel-by-pixel copying to be a clunky last-resort thing.
What am I missing? Am I thinking about JavaFX in a wrong way?
I have done this before in JavaFX and on Android; I didn't know the same technique is used on the HTML5 canvas.
Anyway, you can do the same thing as in HTML5: create what we could call a mainCanvas, which holds the finished version of another canvas, say tempCanvas. On the temp canvas you draw whatever you want and apply whatever masks you need, then you take a snapshot of it (since Canvas is a Node, you can use this code to take a snapshot):
// Snapshot tempCanvas into a WritableImage (canvas dimensions are doubles, so cast to int)
WritableImage writableImage = new WritableImage((int) tempCanvas.getWidth(), (int) tempCanvas.getHeight());
tempCanvas.snapshot(null, writableImage);

// Blit the snapshot onto mainCanvas at the origin
GraphicsContext context = mainCanvas.getGraphicsContext2D();
context.drawImage(writableImage, 0, 0);
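For a fuller picture, here is a rough sketch of the whole operation, with image and mask as placeholder Image variables and tempCanvas/mainCanvas as above. JavaFX's BlendMode has no direct equivalent of "destination-in", but drawing the mask first and then the image with SRC_ATOP clips the image to the mask's alpha, which covers this use case; snapshotting with a transparent fill keeps the transparency when blitting onto mainCanvas:

GraphicsContext tempGc = tempCanvas.getGraphicsContext2D();
tempGc.clearRect(0, 0, tempCanvas.getWidth(), tempCanvas.getHeight());

tempGc.setGlobalBlendMode(BlendMode.SRC_OVER);
tempGc.drawImage(mask, 0, 0);                 // the mask defines the visible region
tempGc.setGlobalBlendMode(BlendMode.SRC_ATOP);
tempGc.drawImage(image, 0, 0);                // the image only shows where the mask has alpha

SnapshotParameters params = new SnapshotParameters();
params.setFill(Color.TRANSPARENT);            // keep transparency instead of a white background
WritableImage flattened = tempCanvas.snapshot(params, null);

mainCanvas.getGraphicsContext2D().drawImage(flattened, 0, 0);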
I am working with a JavaFX Canvas animating the motion of Shape and Polyline objects over time.
Currently, every frame, the X and Y location of each Shape or Polyline in a list is updated as required and the object is moved.
This results in about 20-30 fps.
An earlier approach I have tried simply clears the canvas every frame and redraws each object again. No lists of objects are stored.
This results in 60 fps.
This second method seems to be a far messier approach yet results in a far better framerate.
Are there any best practices or recommended ways to animate on a JavaFX canvas? Is there anything clean and recommended that still gives a good framerate?
Many Thanks
I just gave a talk about these issues at the JavaLand conference. It is indeed true that for general animations with path based shapes (like Polyline and Polygon) using the Canvas is currently the fastest standard option. This is due to a bug in JavaFX which can make such animations via the scene graph slow. I have reported this issue and a bug fix is on the way.
https://bugs.openjdk.java.net/browse/JDK-8178521
In this JIRA issue I refer to hardware versus software rendering but it also affects scene graph versus canvas rendering because the canvas does not seem to be affected by this bug.
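For what it's worth, the clear-and-redraw approach from the question is only a few lines when driven by an AnimationTimer; drawScene here is a placeholder for whatever per-frame drawing code you have:

Canvas canvas = new Canvas(800, 600);
GraphicsContext gc = canvas.getGraphicsContext2D();

AnimationTimer timer = new AnimationTimer() {
    @Override
    public void handle(long nowNanos) {
        gc.clearRect(0, 0, canvas.getWidth(), canvas.getHeight());
        drawScene(gc, nowNanos); // placeholder: redraw every shape/polyline for this frame
    }
};
timer.start();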
I'm working on a painting program and I'd like to be able to scale (zoom) my JavaFX canvas without anti-aliasing.
After some research, I came across this: JavaFX ImageView without any smoothing which explains the different workarounds.
I decided to implement workaround #4, which is to read the pixels from a snapshot of the canvas, scale them up and draw the result to an ImageView. However, this is not practical, as performance is really bad even when drawing moderately fast strokes on a very small canvas (640x480).
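For reference, that workaround amounts to roughly the following sketch (canvas, imageView and the integer zoom factor are placeholders), and it has to run after every stroke change:

SnapshotParameters params = new SnapshotParameters();
params.setFill(Color.TRANSPARENT);
WritableImage snap = canvas.snapshot(params, null);

int scale = 4; // integer zoom factor
int w = (int) snap.getWidth();
int h = (int) snap.getHeight();
WritableImage zoomed = new WritableImage(w * scale, h * scale);
PixelReader reader = snap.getPixelReader();
PixelWriter writer = zoomed.getPixelWriter();

// Nearest-neighbour: copy each source pixel into a scale-by-scale block
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        int argb = reader.getArgb(x, y);
        for (int dy = 0; dy < scale; dy++) {
            for (int dx = 0; dx < scale; dx++) {
                writer.setArgb(x * scale + dx, y * scale + dy, argb);
            }
        }
    }
}
imageView.setImage(zoomed);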
I suppose I could implement a smoothing algorithm for the strokes, but I'm not sure how long it would take before I came to another stop because of this performance.
Will we ever get a: canvas.setInterpolation(Interpolation.NEAREST_NEIGHBOUR)? Is there another way to implement this with even better performance?
My last resort is to go back to Swing which actually can be set to disable interpolation.
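For comparison, the Swing/Java2D route can request nearest-neighbour scaling explicitly via a rendering hint; a minimal sketch inside a paintComponent override, with image and scale as placeholders:

Graphics2D g2 = (Graphics2D) g;
g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
        RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
g2.drawImage(image, 0, 0, image.getWidth() * scale, image.getHeight() * scale, null);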
I'm currently using a Canvas with an associated BufferStrategy. How can I copy the contents (image) of the Canvas and save it to a BufferedImage. So far I've only tried
BufferedImage image = new BufferedImage(canvas.getWidth(), canvas.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g = image.createGraphics();
canvas.printAll(g); // or paintAll
g.dispose();
However, it appears that only a 'blank' screen (filled with the background color) is being drawn; I assume that's because it is the default behavior of Canvas.paint(). Is there a way for me to retrieve the image currently on screen (the last one shown with BufferStrategy.show()), or do I need to draw to a BufferedImage and copy that image to the BufferStrategy's graphics? It seems like that would/could be a big slowdown; would it?
Why (because I know someone wants to ask): I already have a set of classes set up as a framework, using Canvas, for building games that can benefit from rapidly refreshing screens, and I'm looking to make an addon framework that ties one of these canvases into a remote server, sending its displayed image to other connected clients. (Along the lines of remotely hosting a game.) I'm sure there are better ways to do this in particular cases (e.g. why does the server need to display the game at all?), but I've already put in a lot of work and am wondering if this method is salvageable without modifying (too much of) the underlying Game framework code I already have.
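For what it's worth, the second option mentioned above (drawing each frame into a BufferedImage first and then copying it to the BufferStrategy) would look roughly like the following sketch, where renderFrame is a placeholder for the existing game drawing code:

BufferedImage frame = new BufferedImage(canvas.getWidth(), canvas.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D ig = frame.createGraphics();
renderFrame(ig);                  // placeholder: draw the current game frame off-screen
ig.dispose();

BufferStrategy strategy = canvas.getBufferStrategy();
Graphics g = strategy.getDrawGraphics();
g.drawImage(frame, 0, 0, null);   // copy the frame to the back buffer
g.dispose();
strategy.show();

// 'frame' is now also available to encode and send to connected clients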
Can graphics rendered using OpenGL work with graphics rendered not using OpenGL?
I am starting to learn OpenGL, but I am still shy when it comes to actually coding everything in OpenGL; I feel more comfortable drawing things out with JPanel or Canvas. I'm assuming that it wouldn't cause much of an issue code-wise, but could displaying it all at the same time cause issues? Or am I stuck with one or the other?
Integrating OpenGL graphics with another non-OpenGL image or rendering boils down to compositing images. You can take a 2D image and load it as a texture in OpenGL, such that you can then use that texture to paint a surface in OpenGL or, as your question suggests, paint a background. Alternatively, you can use framebuffers in OpenGL to render an OpenGL scene to a texture, which can then be converted to a 2D bitmap and combined with another image.
There are limitations to this approach of course. Once an OpenGL scene has been moved to a 2D image, generally you lose all depth (it's possible to preserve depth in an additional channel in the image if you want to do that, but it would involve additional work).
In addition, since presumably you want one image to not simply overwrite the other, you're going to have to include an alpha (transparency) channel in one of your images, so that when you combine them, areas which haven't been drawn will end up showing the underlying image.
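As a minimal Java2D illustration of that idea (background and glLayer are placeholder BufferedImages, with glLayer assumed to be ARGB and transparent where nothing was drawn):

BufferedImage composite = new BufferedImage(background.getWidth(), background.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g = composite.createGraphics();
g.drawImage(background, 0, 0, null);     // opaque base layer
g.setComposite(AlphaComposite.SrcOver);  // the default; shown for clarity
g.drawImage(glLayer, 0, 0, null);        // transparent areas let the background show through
g.dispose();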
However, I would suggest you undertake the effort to simply find one rendering API that serves all your needs. The extra work you do to combine rendering output from two APIs is probably going to be wasted effort in the long run. It's one thing to embed an OpenGL control into an enclosing application that renders many of its controls using a more conventional API like AWT. On the other hand, it's highly unusual to try to composite output from both OpenGL and another rendering API into the same output area.
Perhaps if you could provide a more concrete example of what kinds of rendering you're talking about, people could offer more helpful advice.
You're stuck with one or the other. You can't put them together.
I'm in the process of writing a custom heatmap generator. I'm wondering what the fastest way is to draw boxes (up to around 1 million) in Java. Most questions I've found have concentrated on dynamic images (like in games), and I'm wondering if there's a better way to go for static images. I've tried using Swing (via a GridLayout, adding a colored canvas to each box), drawing directly on the panel with Graphics2D, and also using the Processing libraries. While Processing is pretty fast and generates a clean image, the window seems to have problems keeping it: it regenerates different parts of the image whenever you minimize or move the window, etc.
I've heard of OpenGL, but I've never touched it, and I wanted some feedback as to whether that (or something else) would be a better approach before investing time in it.
For static images, I paint them to a BufferedImage (BI) and then draw that via Graphics2D.
I keep a boolean that tells me whether the BI is up to date. That way I only incur the expensive painting cost once. If you want to get fancy, you can scale the BI to handle minor resizing. For a major resizing you'll probably want to repaint the BI so as not to introduce artifacts. It's also useful for overlaying data (such as cross hairs, the value under the cursor, etc) as you're only painting the BI and the data.
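A rough sketch of that pattern (the names are illustrative, not from the original setup):

class HeatmapPanel extends JPanel {
    private BufferedImage cache;
    private boolean cacheValid = false;

    void invalidateCache() {          // call when the underlying data changes
        cacheValid = false;
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (!cacheValid || cache == null
                || cache.getWidth() != getWidth() || cache.getHeight() != getHeight()) {
            cache = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB);
            Graphics2D g2 = cache.createGraphics();
            paintHeatmap(g2);         // expensive: draw all the boxes once
            g2.dispose();
            cacheValid = true;
        }
        g.drawImage(cache, 0, 0, null);  // cheap: just blit the cached image
        // overlays (cross hairs, the value under the cursor, ...) go here, on top of the cache
    }

    private void paintHeatmap(Graphics2D g2) {
        // placeholder: draw the (up to ~1 million) boxes
    }
}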