I have been writing a game in Java2D with Canvas for a while, and want to port it to JavaFX. I took a look at the JavaFX Canvas, and I have a few questions about it:
Currently, I have the game running on a separate thread from the application. The thread updates and renders the game. But with the JavaFX Canvas, you can't draw to it unless you are on the JavaFX Application Thread. Do you know of any ways to get around this?
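Something like the sketch below is what I had in mind: the game update loop stays on its own thread, and only the actual drawing is handed back to the FX thread with Platform.runLater (the GameLoop class and its update/render stubs are just my placeholders). Is that the right direction, or would an AnimationTimer on the FX thread be the better option?

    import javafx.application.Platform;
    import javafx.scene.canvas.Canvas;
    import javafx.scene.canvas.GraphicsContext;

    public class GameLoop implements Runnable {

        private final Canvas canvas;              // the on-screen JavaFX Canvas
        private volatile boolean running = true;

        public GameLoop(Canvas canvas) {
            this.canvas = canvas;
        }

        @Override
        public void run() {
            while (running) {
                update();                            // game logic stays off the FX thread
                Platform.runLater(this::render);     // drawing is pushed to the FX thread
                try {
                    Thread.sleep(16);                // roughly 60 updates per second
                } catch (InterruptedException e) {
                    return;
                }
            }
        }

        private void update() {
            // advance the game state here
        }

        private void render() {
            GraphicsContext g = canvas.getGraphicsContext2D();
            g.clearRect(0, 0, canvas.getWidth(), canvas.getHeight());
            // draw the current game state here
        }

        public void stop() {
            running = false;
        }
    }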
The Java2D Canvas has a BufferStrategy that can be used for triple buffering, but the JavaFX Canvas only has a built-in double buffer. I am guessing it would be possible to write a triple buffer manually (I know how to do it with a double buffer). Can you explain how I could do this?
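The rough shape of the manual triple buffer I have in mind is sketched below: three WritableImages, with the render thread filling one buffer while the FX thread presents the most recently completed one. The class and method names are my own placeholders, and I'm glossing over whether writing to a WritableImage off the FX thread is actually safe:

    import java.util.concurrent.atomic.AtomicReference;
    import javafx.scene.canvas.Canvas;
    import javafx.scene.image.WritableImage;

    public class TripleBuffer {

        private final WritableImage[] buffers;
        private int backIndex = 0;                       // buffer currently being drawn into
        private final AtomicReference<WritableImage> ready = new AtomicReference<>();

        public TripleBuffer(int width, int height) {
            buffers = new WritableImage[] {
                new WritableImage(width, height),
                new WritableImage(width, height),
                new WritableImage(width, height)
            };
        }

        /** Called by the render thread: the buffer to draw the next frame into. */
        public WritableImage backBuffer() {
            return buffers[backIndex];
        }

        /** Called by the render thread once a frame is complete. */
        public void publish() {
            ready.set(buffers[backIndex]);
            backIndex = (backIndex + 1) % buffers.length;   // rotate to the next buffer
        }

        /** Called on the FX Application Thread, e.g. from an AnimationTimer. */
        public void present(Canvas canvas) {
            WritableImage frame = ready.get();
            if (frame != null) {
                canvas.getGraphicsContext2D().drawImage(frame, 0, 0);
            }
        }
    }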
Would it be best just to stick with Java2D instead of switching to JavaFX?
Thanks for your help. :)
So I need to make a simple game for a project, and the StdDraw library from Princeton was provided.
From looking through the source code, it seems like the library handles animation by drawing on two BufferedImages (for double buffering) and then displaying them on a JFrame. To increase the window size, the BufferedImage is simply increased in size and the drawings scaled up. But this causes severe lag and screen tearing when going above, say, 700x700 pixels.
I don't know enough about Java to know if this is the most efficient method of doing animation, but there must be a way to animate a simple triangle fullscreen without lag. I can easily edit the library, or even just write my own, if someone could give me some pointers on the best way to handle animation.
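For example, would active rendering with an AWT Canvas and a BufferStrategy be the right approach? Something roughly like this sketch (the moving triangle is just a placeholder):

    import java.awt.Canvas;
    import java.awt.Color;
    import java.awt.Dimension;
    import java.awt.Graphics;
    import java.awt.image.BufferStrategy;
    import javax.swing.JFrame;

    public class ActiveRenderDemo {

        public static void main(String[] args) {
            JFrame frame = new JFrame("Active rendering");
            Canvas canvas = new Canvas();
            canvas.setPreferredSize(new Dimension(1280, 720));
            canvas.setIgnoreRepaint(true);            // we paint the canvas ourselves
            frame.add(canvas);
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);

            canvas.createBufferStrategy(2);           // double buffering; pass 3 for triple
            BufferStrategy strategy = canvas.getBufferStrategy();

            int x = 0;
            while (true) {
                Graphics g = strategy.getDrawGraphics();
                g.setColor(Color.BLACK);
                g.fillRect(0, 0, canvas.getWidth(), canvas.getHeight());   // clear the frame
                g.setColor(Color.WHITE);
                g.fillPolygon(new int[] { x, x + 50, x + 25 },
                              new int[] { 400, 400, 300 }, 3);             // a simple moving triangle
                g.dispose();
                strategy.show();                      // flip the buffers
                x = (x + 2) % canvas.getWidth();
                try {
                    Thread.sleep(16);                 // roughly 60 frames per second
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }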
Google drive for StdDraw.java if anyone wants to look
I'm working on a painting program and I'd like to be able to scale (zoom) my JavaFX canvas without anti-aliasing.
After some research, I came across this: JavaFX ImageView without any smoothing, which explains the different workarounds.
I decided to implement workaround #4, which is to read the pixels from a snapshot of the canvas, scale them up, and draw to an ImageView. However, this is not practical, as performance is really bad, as demonstrated by drawing moderately fast strokes on a very small canvas (640x480).
I suppose I could implement a smoothing algorithm for the strokes, but I'm not sure how long it would take before I came to another stop because of this performance.
Will we ever get a canvas.setInterpolation(Interpolation.NEAREST_NEIGHBOUR)? Is there another way to implement this with even better performance?
My last resort is to go back to Swing, which can actually be set to disable interpolation.
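For reference, the snapshot-and-scale workaround I implemented looks roughly like the sketch below (the class and method names are mine). The per-pixel copy on every change is exactly where the time goes:

    import javafx.scene.SnapshotParameters;
    import javafx.scene.canvas.Canvas;
    import javafx.scene.image.ImageView;
    import javafx.scene.image.PixelReader;
    import javafx.scene.image.PixelWriter;
    import javafx.scene.image.WritableImage;

    public class NearestNeighbourZoom {

        /** Snapshots the canvas and scales it by an integer factor without smoothing. */
        public static void present(Canvas canvas, ImageView view, int scale) {
            int w = (int) canvas.getWidth();
            int h = (int) canvas.getHeight();

            WritableImage source = new WritableImage(w, h);
            canvas.snapshot(new SnapshotParameters(), source);   // read back the canvas pixels

            WritableImage scaled = new WritableImage(w * scale, h * scale);
            PixelReader reader = source.getPixelReader();
            PixelWriter writer = scaled.getPixelWriter();

            for (int y = 0; y < h * scale; y++) {
                for (int x = 0; x < w * scale; x++) {
                    // nearest-neighbour: every output pixel copies its source pixel unchanged
                    writer.setArgb(x, y, reader.getArgb(x / scale, y / scale));
                }
            }
            view.setImage(scaled);
        }
    }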
Can graphics rendered using OpenGL work with graphics rendered not using OpenGL?
I am starting to learn OpenGL, but I am still shy when it comes to actually coding everything in OpenGL; I feel more comfortable drawing things out with a JPanel or Canvas. I'm assuming it wouldn't cause much of an issue code-wise, but could displaying it all at the same time cause issues? Or am I stuck with one or the other?
Integrating OpenGL graphics with another non-OpenGL image or rendering boils down to compositing images. You can take a 2D image and load it as a texture in OpenGL, such that you can then use that texture to paint a surface in OpenGL, or, as is suggested by your question, paint a background. Alternatively, you can use framebuffers in OpenGL to render an OpenGL scene to a texture, which can then be converted to a 2D bitmap and combined with another image.
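As a rough illustration, loading a 2D image as an OpenGL texture can look something like the sketch below. This assumes LWJGL bindings and a current OpenGL context on the calling thread; the class name is just a placeholder:

    import java.awt.image.BufferedImage;
    import java.nio.ByteBuffer;
    import org.lwjgl.BufferUtils;
    import static org.lwjgl.opengl.GL11.*;

    public class TextureUpload {

        /** Uploads a BufferedImage as an OpenGL texture and returns its texture id. */
        public static int upload(BufferedImage image) {
            int width = image.getWidth();
            int height = image.getHeight();

            // Repack the pixels as tightly packed RGBA bytes, the layout glTexImage2D expects below.
            ByteBuffer pixels = BufferUtils.createByteBuffer(width * height * 4);
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int argb = image.getRGB(x, y);
                    pixels.put((byte) ((argb >> 16) & 0xFF));   // red
                    pixels.put((byte) ((argb >> 8) & 0xFF));    // green
                    pixels.put((byte) (argb & 0xFF));           // blue
                    pixels.put((byte) ((argb >> 24) & 0xFF));   // alpha
                }
            }
            pixels.flip();

            int textureId = glGenTextures();
            glBindTexture(GL_TEXTURE_2D, textureId);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            return textureId;
        }
    }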
There are limitations to this approach of course. Once an OpenGL scene has been moved to a 2D image, generally you lose all depth (it's possible to preserve depth in an additional channel in the image if you want to do that, but it would involve additional work).
In addition, since presumably you want one image to not simply overwrite the other, you're going to have to include an alpha (transparency) channel in one of your images, so that when you combine them, areas which haven't been drawn will end up showing the underlying image.
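On the Java2D side, the compositing step itself is simple once both layers are ordinary images. A minimal sketch, assuming the OpenGL output has already been read back into a BufferedImage with an alpha channel (the class name is a placeholder):

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    public class Compositor {

        /** Draws an overlay with an alpha channel on top of a background image. */
        public static BufferedImage composite(BufferedImage background, BufferedImage overlay) {
            BufferedImage result = new BufferedImage(
                    background.getWidth(), background.getHeight(), BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = result.createGraphics();
            g.drawImage(background, 0, 0, null);   // opaque base layer
            g.drawImage(overlay, 0, 0, null);      // transparent pixels let the base show through
            g.dispose();
            return result;
        }
    }

The default SRC_OVER composite respects the overlay's alpha, so areas that were never drawn (fully transparent pixels) end up showing the underlying image.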
However, I would suggest you undertake the effort to simply find one rendering API that serves all your needs. The extra work you do to combine rendering output from two APIs is probably going to be wasted effort in the long run. It's one thing to embed an OpenGL control into an enclosing application that renders many of its controls using a more conventional API like AWT. On the other hand, it's highly unusual to try to composite output from both OpenGL and another rendering API into the same output area.
Perhaps if you could provide a more concrete example of what kinds of rendering you're talking about, people could offer more helpful advice.
You're stuck with one or the other. You can't put them together.
Just a forewarning: I'm new to Java - I normally use UnrealScript and C#, but I'm branching out, so there are likely one or two things I've done incorrectly or against normal Java convention (and more in line with the conventions of those other two languages).
Feel free to point them out, and I'll mold myself to them accordingly
I'm making a JRPG-style game in Java using BlueJ. I'm not aiming massively high, and am doing it more as a proof of concept than a full-blown game.
So far it's going OK. I've got a sprite animation working using a sprite sheet, and the player can walk around with the sprite changing to the correct animation depending on the direction.
However, I'm having an issue when the player stops moving: there is sometimes an afterimage of the previous frame. In fairness, this may be happening all the time, except you can't see it if the player stops moving on the first or last frame of the walking animations, as those take up the same pixel space and are thus hidden (if that makes sense).
Here is an image of the issue in action:
This is after the player has moved, and then stopped, leaving the last frame of the "moveRight" sprite behind.
I have created a small version of my project that has just the character animation playing when you press a key and stopping when you release it, in which the issue appears.
The Skeleton class
The Character class
The GameManager class
The KeyManager class
To start the game run the Main method in GameManager
You'll need to save this image with the filename "James.png" and place it in the same folder as the Java project for it to work.
Thanks in advance for any help given.
paintComponent() passes its Graphics to drawCharactor(). drawCharactor() should not dispose that Graphics object unless you made a copy; this Graphics is shared.
Also, do not call repaint() from drawCharactor(); repaint() schedules another paint cycle, and you already call repaint() from a timer.
Do not use java.util.Timer; use javax.swing.Timer for Swing painting. See How to Use Swing Timers.
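A minimal sketch of that pattern (the sprite drawing itself is left as a comment, since it depends on your Skeleton/Character classes):

    import java.awt.Graphics;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import javax.swing.SwingUtilities;
    import javax.swing.Timer;

    public class AnimationPanel extends JPanel {

        private int frame;   // current animation frame index

        public AnimationPanel() {
            // A Swing timer fires on the EDT, so it is safe to touch components here.
            new Timer(100, e -> {
                frame = (frame + 1) % 4;   // advance the sprite frame
                repaint();                 // schedule a paint; never call repaint() from paintComponent
            }).start();
        }

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);       // clears the panel, preventing afterimages of old frames
            // draw the current sprite frame here, e.g. g.drawImage(frames[frame], x, y, null);
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame window = new JFrame("Swing timer animation");
                window.add(new AnimationPanel());
                window.setSize(400, 300);
                window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                window.setVisible(true);
            });
        }
    }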
For more information and examples, see Performing Custom Painting and Painting in AWT and Swing.
Consider posting a minimal example.
I have spent a bit of time researching whether it is possible to draw on top of a VLCJ movie within a Java application. I have found a few bits of conflicting advice, some saying it is not possible and some referencing articles which have since been moved on oracle.com.
Can someone clarify whether or not it is possible to draw Java2D graphics like rectangles/lines with transparent backgrounds, so the video stream underneath can be viewed whilst the shapes are present on screen?
If this is not possible with vlcj, what would be a good alternative for a Linux- and Windows-compatible media player allowing for annotation over a playing video stream? Please note I do not have to be limited to Java, but something where I can reuse the drawing routines I develop across multiple platforms would be ideal.
Yes, you can do it. For the normal hardware-rendered video player, you need at least Java 6u10 (preferably 7) and achieve this by overlaying a transparent JWindow on top of the VLC canvas (it's not too hard to add events to the canvas to check for updates in position/size and then move the overlaid window correspondingly).
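A rough sketch of that overlay approach, assuming Java 7 per-pixel translucency is supported on the platform (the painted annotation and the listener wiring are simplified placeholders):

    import java.awt.Canvas;
    import java.awt.Color;
    import java.awt.Graphics;
    import java.awt.Window;
    import java.awt.event.ComponentAdapter;
    import java.awt.event.ComponentEvent;
    import javax.swing.JPanel;
    import javax.swing.JWindow;

    public class OverlayWindow extends JWindow {

        public OverlayWindow(Window owner, Canvas videoCanvas) {
            super(owner);
            setBackground(new Color(0, 0, 0, 0));    // per-pixel translucency (Java 7+)

            JPanel panel = new JPanel() {
                @Override
                protected void paintComponent(Graphics g) {
                    super.paintComponent(g);
                    g.setColor(Color.RED);
                    g.drawRect(50, 50, 200, 100);    // annotation drawn over the video
                }
            };
            panel.setOpaque(false);
            setContentPane(panel);

            // Keep the overlay glued to the video canvas as it moves or resizes.
            videoCanvas.addComponentListener(new ComponentAdapter() {
                @Override
                public void componentResized(ComponentEvent e) { track(videoCanvas); }
                @Override
                public void componentMoved(ComponentEvent e) { track(videoCanvas); }
            });
            track(videoCanvas);
            setVisible(true);
        }

        private void track(Canvas videoCanvas) {
            if (videoCanvas.isShowing()) {
                setLocation(videoCanvas.getLocationOnScreen());
                setSize(videoCanvas.getSize());
            }
        }
    }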
The other way that doesn't involve using overlaid windows is to use a DirectMediaPlayer, where you have access to the framebuffer directly (and can therefore do what you like with the pixels, including wrapping them as textures round 3D objects and so on.) So with this approach, you could simply draw what you wanted onto the frame buffer before rendering it to screen in the way you chose. This is the most flexible approach, but comes with the downside that if you're not very careful about your implementation, you lose all the GPU acceleration and end up crippling the CPU, especially for HD video.
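If you go the DirectMediaPlayer route, the annotation step itself is plain Java2D once you have a decoded frame as a BufferedImage; how you obtain that image from the render callback depends on the vlcj version you use, so this sketch only shows the drawing part (the shapes are placeholders):

    import java.awt.BasicStroke;
    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    public class FrameAnnotator {

        /** Draws annotations directly onto a decoded video frame before it is painted. */
        public static void annotate(BufferedImage frame) {
            Graphics2D g = frame.createGraphics();
            g.setColor(Color.YELLOW);
            g.setStroke(new BasicStroke(3f));
            g.drawRect(100, 100, 320, 180);   // highlight a region of interest
            g.drawLine(0, frame.getHeight() / 2, frame.getWidth(), frame.getHeight() / 2);
            g.dispose();
        }
    }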
If a simple overlay would do the trick, I'd try that first, and just resort to a DirectMediaPlayer if you have to.