I am using a combination of PDFBox and Apache Batik to render PDF pages as SVG documents. Most of them work fine, but I have some issues when rendering specific images into SVG.
This is the code I use. It is mostly based on the approach from that post.
public void extractBookSvg(File pdfFile) throws Exception {
    // ... preliminary business actions
    SVGGeneratorContext ctx = createContext();
    SVGGraphics2D g = null;
    try (PDDocument document = PDDocument.load(pdfFile, MemoryUsageSetting.setupMixed(2147483648L))) {
        PDFRenderer renderer = new PDFRenderer(document);
        long startTime = System.currentTimeMillis();
        int pageNr = 0;
        for (PDPage page : document.getPages()) {
            long startTimeForPage = System.currentTimeMillis();
            g = createGraphics(ctx);
            renderer.renderPageToGraphics(pageNr, g, 3.47222f);
            pageNr++;
            try (OutputStream os = new ByteArrayOutputStream();
                 Writer out = new OutputStreamWriter(os)) {
                g.stream(out, true);
                // ... do other business actions
            }
        }
    } finally {
        pdfFile.delete();
        if (g != null) {
            g.finalize();
            g.dispose();
        }
    }
}

private SVGGraphics2D createGraphics(SVGGeneratorContext ctx) {
    SVGGraphics2D g2d = new CustomSVGGraphics2D(ctx, false);
    return g2d;
}

private SVGGeneratorContext createContext() {
    DOMImplementation impl = GenericDOMImplementation.getDOMImplementation();
    String svgNS = "http://www.w3.org/2000/svg";
    Document myFactory = impl.createDocument(svgNS, "svg", null);
    SVGGeneratorContext ctx = SVGGeneratorContext.createDefault(myFactory);
    return ctx;
}

public static class CustomSVGGraphics2D extends SVGGraphics2D {

    public CustomSVGGraphics2D(SVGGeneratorContext generatorCtx, boolean textAsShapes) {
        super(generatorCtx, textAsShapes);
    }

    @Override
    public GraphicsConfiguration getDeviceConfiguration() {
        return new CustomGraphicsConfiguration();
    }
}

private static final class CustomGraphicsConfiguration extends GraphicsConfiguration {

    @Override
    public AffineTransform getNormalizingTransform() {
        return null;
    }

    @Override
    public GraphicsDevice getDevice() {
        return new CustomGraphicsDevice();
    }

    @Override
    public AffineTransform getDefaultTransform() {
        return null;
    }

    @Override
    public ColorModel getColorModel(int transparency) {
        return null;
    }

    @Override
    public ColorModel getColorModel() {
        return null;
    }

    @Override
    public java.awt.Rectangle getBounds() {
        return null;
    }
}

private static final class CustomGraphicsDevice extends GraphicsDevice {

    @Override
    public int getType() {
        return 0;
    }

    @Override
    public String getIDstring() {
        return null;
    }

    @Override
    public GraphicsConfiguration[] getConfigurations() {
        return null;
    }

    @Override
    public GraphicsConfiguration getDefaultConfiguration() {
        return null;
    }
}
As mentioned above, the issue appears when rendering images: they either do not get rendered at all (they show up as black boxes), or, for images with opacity lower than 1, they are rendered with opacity 1.
Here is an example of both cases:
[Image: PDF render of the transparent image]
[Image: SVG render of the transparent image]
[Image: images that do not get rendered at all in the SVG]
[Image: how they actually show up in the SVG]
However, if I render those pages directly as images (using a BufferedImage instead of Graphics2D), they both render fine (with lower quality than the SVG, of course).
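For comparison, the bitmap path that does render these pages correctly looks roughly like this (a simplified sketch of what I do; the DPI value and file names are just examples):
// Sketch of the bitmap rendering path (PDFBox 2.x), which handles the images correctly.
try (PDDocument document = PDDocument.load(pdfFile)) {
    PDFRenderer renderer = new PDFRenderer(document);
    for (int pageNr = 0; pageNr < document.getNumberOfPages(); pageNr++) {
        BufferedImage image = renderer.renderImageWithDPI(pageNr, 250);
        ImageIO.write(image, "png", new File("page-" + pageNr + ".png"));
    }
}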
Also, I have tried debugging the PDF with the PDFDebugger utility, and those images do not appear in the page's XObject resources list, and I can't seem to find them anywhere else.
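A programmatic equivalent of that check would look roughly like this (a sketch, assuming the PDFBox 2.x resources API):
// List the XObjects declared in each page's resource dictionary.
for (PDPage page : document.getPages()) {
    PDResources resources = page.getResources();
    for (COSName name : resources.getXObjectNames()) {
        PDXObject xObject = resources.getXObject(name);
        System.out.println(name.getName() + " -> " + xObject.getClass().getSimpleName());
    }
}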
My questions are:
How can I find out whether the issue comes from the way PDFBox renders into the Graphics2D object or from the way Batik generates the SVG? (One way I am considering to isolate this is sketched right after this list.)
Is there a way to make sure those images are properly displayed at generation time? I see no errors in the logs.
If so, what could be the solution?
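For question 1, the isolation test I have in mind (a sketch only; it would reuse the renderer and pageNr from the loop above, and the target size is an arbitrary example that should really be derived from the page size and scale) is to point renderPageToGraphics at a plain raster Graphics2D instead of the Batik one:
// If the images are also wrong here, the problem is on the PDFBox side;
// if they are fine, the problem is in the Batik SVG generation.
BufferedImage target = new BufferedImage(2500, 3500, BufferedImage.TYPE_INT_ARGB);
Graphics2D rasterG = target.createGraphics();
renderer.renderPageToGraphics(pageNr, rasterG, 3.47222f);
rasterG.dispose();
ImageIO.write(target, "png", new File("raster-check-" + pageNr + ".png"));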
Thank you!
Related
I am fairly new to programming and have recently encountered a problem that I have not found a solution for online. I have created an ImageView in FXML and gave it an fx:id of "gif1". I am using code from this Stack Overflow post, but I tried modifying it so that it fits my needs. I put the entirety of the code in a separate Java file that I called "Gif.java". Originally, "Gif.java" uses an HBox and other methods that don't apply here to set up the ImageView. I removed those because I will be using an existing FXML document, so there is no need to create a new one. In my code, I call into "Gif.java" and expect a returned ImageView. However, when I set my ImageView (gif1) to the returned ImageView, it doesn't update on the screen.
Here is the link to the gif post: How I can stop an animated GIF in JavaFX?
Here is my code that I use to update gif1:
@FXML
private void setUp() {
    Gif newGif = new Gif("rightrock.gif"); // Creates object newGif and passes through path to gif
    gif1 = newGif.returnImage2();          // Sets gif1 to returned ImageView
}
Here is the code in Gif.java that returns the ImageView:
public class Gif {

    private Animation ani;
    private ImageView test;

    public Gif(String image) {
        // TODO: provide gif file, ie exchange banana.gif with your file
        ani = new AnimatedGif(getClass().getResource(image).toExternalForm(), 1000);
        ani.setCycleCount(1);
        ani.play();

        Button btPause = new Button("Pause");
        btPause.setOnAction(e -> ani.pause());

        Button btResume = new Button("Resume");
        btResume.setOnAction(e -> ani.play());
    }

    public void returnImage(ImageView x) {
        test = x; // Sets ImageView test to parameter x
    }

    public ImageView returnImage2() {
        return test; // Return instance field test
    }

    public static void main(String[] args) {
        launch(args);
    }

    public class AnimatedGif extends Animation {

        public AnimatedGif(String filename, double durationMs) {
            GifDecoder d = new GifDecoder();
            d.read(filename);

            Image[] sequence = new Image[d.getFrameCount()];
            for (int i = 0; i < d.getFrameCount(); i++) {
                WritableImage wimg = null;
                BufferedImage bimg = d.getFrame(i);
                sequence[i] = SwingFXUtils.toFXImage(bimg, wimg);
            }

            super.init(sequence, durationMs);
        }
    }

    public class Animation extends Transition {

        private ImageView imageView;
        private int count;
        private int lastIndex;
        private Image[] sequence;

        private Animation() {
        }

        public Animation(Image[] sequence, double durationMs) {
            init(sequence, durationMs);
        }

        private void init(Image[] sequence, double durationMs) {
            this.imageView = new ImageView(sequence[0]);
            this.sequence = sequence;
            this.count = sequence.length;

            setCycleCount(1);
            setCycleDuration(Duration.millis(durationMs));
            setInterpolator(Interpolator.LINEAR);
        }

        protected void interpolate(double k) {
            final int index = Math.min((int) Math.floor(k * count), count - 1);
            if (index != lastIndex) {
                imageView.setImage(sequence[index]);
                lastIndex = index;
            }
        }

        public ImageView getView() {
            returnImage(imageView); // This runs returnImage with the ImageView that I want to set
            return imageView;
        }
    }
}
Any help would be greatly appreciated!
I want to develop a simple interactive game (like Arkanoid). I have already implemented a menu and different views, and now I need to develop the actual game (draw a flying ball, a movable platform, and so on), and I don't know how to do this. I need something like a canvas where I can draw my graphics each frame.
I have tried to implement this with Canvas and Timer, but it does not update the graphics by itself, only when the user clicks on the screen or similar. I have also seen com.google.gwt.canvas.client.Canvas, but I cannot understand how to use it in a Vaadin application.
So my question is: is it possible to draw graphics each frame at a high framerate in any way? If so, how can I do this?
P.S. I use Vaadin 7.3.3.
ADDED LATER:
Here is a link to my educational project with the implementation described below.
I'll be glad if it helps someone.
ORIGINAL ANSWER:
Well... I found the solution myself. First of all, I created my own widget, a "client side" component (following this article).
Client side part:
public class GWTMyCanvasWidget extends Composite {

    public static final String CLASSNAME = "mycomponent";
    private static final int FRAMERATE = 30;

    private Canvas canvas;

    public GWTMyCanvasWidget() {
        canvas = Canvas.createIfSupported();
        initWidget(canvas);
        setStyleName(CLASSNAME);
    }
}
Connector:
@Connect(MyCanvas.class)
public class MyCanvasConnector extends AbstractComponentConnector {

    @Override
    public Widget getWidget() {
        return (GWTMyCanvasWidget) super.getWidget();
    }

    @Override
    protected Widget createWidget() {
        return GWT.create(GWTMyCanvasWidget.class);
    }
}
Server side part:
public class MyCanvas extends AbstractComponent {

    @Override
    public MyCanvasState getState() {
        return (MyCanvasState) super.getState();
    }
}
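The shared state class referenced by getState() is not shown here; a minimal version (assuming no custom state fields are needed) could be as simple as this:
// Shared state for MyCanvas; empty for now, nothing custom is carried over the wire.
public class MyCanvasState extends AbstractComponentState {
}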
Then I just add the MyCanvas component to my view:
private void createCanvas() {
    MyCanvas canvas = new MyCanvas();
    addComponent(canvas);
    canvas.setSizeFull();
}
And now I can draw anything on the Canvas (on the client side, in GWTMyCanvasWidget) with great performance =). Example:
public class GWTMyCanvasWidget extends Composite {

    public static final String CLASSNAME = "mycomponent";
    private static final int FRAMERATE = 30;

    private Canvas canvas;
    private Platform platform;
    private int textX;

    public GWTMyCanvasWidget() {
        canvas = Canvas.createIfSupported();
        canvas.addMouseMoveHandler(new MouseMoveHandler() {
            @Override
            public void onMouseMove(MouseMoveEvent event) {
                if (platform != null) {
                    platform.setCenterX(event.getX());
                }
            }
        });
        initWidget(canvas);
        Window.addResizeHandler(new ResizeHandler() {
            @Override
            public void onResize(ResizeEvent resizeEvent) {
                resizeCanvas(resizeEvent.getWidth(), resizeEvent.getHeight());
            }
        });
        initGameTimer();
        resizeCanvas(Window.getClientWidth(), Window.getClientHeight());
        setStyleName(CLASSNAME);
        platform = createPlatform();
    }

    private void resizeCanvas(int width, int height) {
        canvas.setWidth(width + "px");
        canvas.setCoordinateSpaceWidth(width);
        canvas.setHeight(height + "px");
        canvas.setCoordinateSpaceHeight(height);
    }

    private void initGameTimer() {
        Timer timer = new Timer() {
            @Override
            public void run() {
                drawCanvas();
            }
        };
        timer.scheduleRepeating(1000 / FRAMERATE);
    }

    private void drawCanvas() {
        canvas.getContext2d().clearRect(0, 0, canvas.getCoordinateSpaceWidth(), canvas.getCoordinateSpaceHeight());
        drawPlatform();
    }

    private Platform createPlatform() {
        Platform platform = new Platform();
        platform.setY(Window.getClientHeight());
        return platform;
    }

    private void drawPlatform() {
        canvas.getContext2d().fillRect(platform.getCenterX() - platform.getWidth() / 2, platform.getY() - 100, platform.getWidth(), platform.getHeight());
    }
}
I have a problem with the return value from my method. I created an Image, and I want to scale it and return it.
final Image img = new Image(src);
img.addLoadHandler(new LoadHandler() {
    @Override
    public void onLoad(LoadEvent arg0) {
        // resize img ...
    }
});
return img;
How do I return it after I have changed its size?
There is no need to return the image just to resize it.
The Image should be added to the DOM first; then you can perform operations on it.
You can do something like this:
final Image image = new Image();
image.addLoadHandler(new LoadHandler() {
    @Override
    public void onLoad(LoadEvent event) {
        // resize the image here
        image.getElement().getStyle().setVisibility(Style.Visibility.VISIBLE);
    }
});
image.getElement().getStyle().setVisibility(Style.Visibility.HIDDEN);
RootPanel.get().add(image);
image.setUrl(url);
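For the actual resize step inside onLoad(), one option (an assumption about what "resize" means here; halving the size is just an example) is to set the pixel size once the natural dimensions are available:
// Inside onLoad(): the image is loaded, so its natural size can be read.
int w = image.getWidth() / 2;
int h = image.getHeight() / 2;
image.setPixelSize(w, h);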
I'm trying to copy a PNG file to the clipboard from within a program and keep its alpha channel when it is pasted into another program (e.g. MS Office, Paint, Photoshop). The problem is that the alpha channel turns black in most programs. I've been searching the web for hours now and can't find a solution. The code I'm using:
setClipboard(Toolkit.getDefaultToolkit().getImage(parent.getSelectedPicturePath()));

public static void setClipboard(Image image) {
    ImageSelection imgSel;
    if (OSDetector.isWindows()) {
        imgSel = new ImageSelection(image);
    } else {
        imgSel = new ImageSelection(getBufferedImage(image));
    }
    Toolkit.getDefaultToolkit().getSystemClipboard().setContents(imgSel, null);
}
Is there any way to keep the alpha channel in Java? I've tried converting the PNG to a BufferedImage, Image, etc., and then pasting it to the clipboard, but nothing works.
Assuming that OSDetector is working properly, I was able to get the OP's code to work out of the box on Windows Server 2008 R2 64-bit running Oracle JDK 1.8.0_131. The OP omitted the code for getBufferedImage(); however, I suspect it was some variant of the version from this blog.
When I tested the code using the blog's version of getBufferedImage() on Windows (ignoring the OSDetector check), I was able to reproduce a variant of the issue where the entire image was black, which turned out to be a timing issue with the asynchronous calls to Image.getWidth(), Image.getHeight(), and Graphics.drawImage(), all of which return immediately and take an observer for async updates. The blog code passes null (no observer) for all of these invocations, and expects results to be returned immediately, which was not the case when I tested.
Once I modified getBufferedImage() to use callbacks, I reproduced the exact issue: alpha channels appear black. The reason for this behavior is that the image with the transparency is drawn onto a graphics context that defaults to a black canvas. What you are seeing is exactly what you would see if you viewed the image on a web page with a black background.
To change this, I used a hint from this StackOverflow answer and painted the background white.
I used the ImageSelection implementation from this site, which simply wraps an Image instance in a Transferable using DataFlavor.imageFlavor.
Ultimately for my tests, both the original image and the buffered image variants worked on Windows. Below is the code:
public static void getBufferedImage(Image image, Consumer<Image> imageConsumer) {
    image.getWidth((img, info, x, y, w, h) -> {
        if (info == ImageObserver.ALLBITS) {
            BufferedImage buffered = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g2 = buffered.createGraphics();
            g2.setColor(Color.WHITE); // You choose the background color
            g2.fillRect(0, 0, w, h);
            if (g2.drawImage(img, 0, 0, w, h, (img2, info2, x2, y2, w2, h2) -> {
                if (info2 == ImageObserver.ALLBITS) {
                    g2.dispose();
                    imageConsumer.accept(img2);
                    return false;
                }
                return true;
            })) {
                g2.dispose();
                imageConsumer.accept(buffered);
            }
            return false;
        }
        return true;
    });
}

public static void setClipboard(Image image) {
    boolean testBuffered = true; // Both buffered and non-buffered worked for me
    if (!testBuffered) {
        Toolkit.getDefaultToolkit().getSystemClipboard().setContents(new ImageSelection(image), null);
    } else {
        getBufferedImage(image, (buffered) -> {
            ImageSelection imgSel = new ImageSelection(buffered);
            Toolkit.getDefaultToolkit().getSystemClipboard().setContents(imgSel, null);
        });
    }
}
I hope this helps. Best of luck.
I'm not sure whether this is the right answer, but have you tried this?
public void doCopyToClipboardAction()
{
    // figure out which frame is in the foreground
    MetaFrame activeMetaFrame = null;
    for (MetaFrame mf : frames)
    {
        if (mf.isActive()) activeMetaFrame = mf;
    }

    // get the image from the current jframe
    Image image = activeMetaFrame.getCurrentImage();

    // place that image on the clipboard
    setClipboard(image);
}

// code below from exampledepot.com
// This method writes an image to the system clipboard.
public static void setClipboard(Image image)
{
    ImageSelection imgSel = new ImageSelection(image);
    Toolkit.getDefaultToolkit().getSystemClipboard().setContents(imgSel, null);
}

// This class is used to hold an image while on the clipboard.
static class ImageSelection implements Transferable
{
    private Image image;

    public ImageSelection(Image image)
    {
        this.image = image;
    }

    // Returns supported flavors
    public DataFlavor[] getTransferDataFlavors()
    {
        return new DataFlavor[] { DataFlavor.imageFlavor };
    }

    // Returns true if flavor is supported
    public boolean isDataFlavorSupported(DataFlavor flavor)
    {
        return DataFlavor.imageFlavor.equals(flavor);
    }

    // Returns image
    public Object getTransferData(DataFlavor flavor)
        throws UnsupportedFlavorException, IOException
    {
        if (!DataFlavor.imageFlavor.equals(flavor))
        {
            throw new UnsupportedFlavorException(flavor);
        }
        return image;
    }
}
Source : http://alvinalexander.com/java/java-copy-image-to-clipboard-example
I have not tried this myself and I'm not sure about it. Hopefully you get the right answer.
Here is a very simple, self-contained example that works. Reading or creating the image is up to you; this code just creates a red circle drawn on an alpha-enabled BufferedImage. When I paste it into any program that supports transparency, it shows up correctly. Hope it helps.
import java.awt.*;
import java.awt.datatransfer.*;
import java.awt.image.BufferedImage;
import java.io.IOException;

public class CopyImageToClipboard {

    public void createClipboardImageWithAlpha() {
        // Create a buffered image of the correct type, with alpha.
        BufferedImage image = new BufferedImage(600, 600, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2d = image.createGraphics();

        // Draw in the buffered image.
        g2d.setColor(Color.red);
        g2d.fillOval(10, 10, 580, 580);

        // Add the BufferedImage to the clipboard with transferable image flavor.
        Clipboard clipboard = Toolkit.getDefaultToolkit().getSystemClipboard();
        Transferable transferableImage = getTransferableImage(image);
        clipboard.setContents(transferableImage, null);
    }

    private Transferable getTransferableImage(final BufferedImage bufferedImage) {
        return new Transferable() {
            @Override
            public DataFlavor[] getTransferDataFlavors() {
                return new DataFlavor[] { DataFlavor.imageFlavor };
            }

            @Override
            public boolean isDataFlavorSupported(DataFlavor flavor) {
                return DataFlavor.imageFlavor.equals(flavor);
            }

            @Override
            public Object getTransferData(DataFlavor flavor) throws UnsupportedFlavorException, IOException {
                if (DataFlavor.imageFlavor.equals(flavor)) {
                    return bufferedImage;
                }
                return null;
            }
        };
    }
}
I'm looking for a way to load a page and save the rendering as an image, just as you would do with CutyCapt (a Qt + WebKit executable that does just that).
At the moment, without JavaFX, I do it by calling an external process from Java, rendering to a file, and then loading that file into a BufferedImage. That is neither very optimized nor practical, and even less cross-platform...
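Roughly, the current workaround looks like this (a sketch only; the executable name, flags and file name are examples, not my real setup):
// Shell out to CutyCapt, wait for it to finish, then read the rendered file back.
Process process = new ProcessBuilder("CutyCapt",
        "--url=http://www.google.com/", "--out=capture.png")
        .inheritIO()
        .start();
process.waitFor();
BufferedImage capture = ImageIO.read(new File("capture.png"));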
Using JavaFX 2+, I tried playing with WebView and WebEngine:
public class WebComponentTrial extends Application {

    private Scene scene;

    @Override
    public void start(final Stage primaryStage) throws Exception {
        primaryStage.setTitle("Web View");
        final Browser browser = new Browser();
        scene = new Scene(browser, 1180, 800, Color.web("#666970"));
        primaryStage.setScene(scene);
        scene.getStylesheets().add("webviewsample/BrowserToolbar.css");
        primaryStage.show();
    }

    public static void main(final String[] args) {
        launch(args);
    }
}

class Browser extends Region {

    static { // use system proxy settings when standalone application
        // System.setProperty("java.net.useSystemProxies", "true");
    }

    final WebView browser = new WebView();
    final WebEngine webEngine = browser.getEngine();

    public Browser() {
        getStyleClass().add("browser");
        webEngine.load("http://www.google.com/");
        getChildren().add(browser);
    }

    @Override
    protected void layoutChildren() {
        final double w = getWidth();
        final double h = getHeight();
        layoutInArea(browser, 0, 0, w, h, 0, HPos.CENTER, VPos.CENTER);
    }

    @Override
    protected double computePrefWidth(final double height) {
        return 800;
    }

    @Override
    protected double computePrefHeight(final double width) {
        return 600;
    }
}
There is a deprecated method, renderToImage, in Scene (see the links below) that comes close and that I might be able to work with, but it is deprecated...
Its being deprecated in JavaFX seems to mean that there is no Javadoc advertising a replacement method, and because I don't have access to the code, I cannot see how it was done...
Here are a couple of sites where I found some information, but nothing to render a webpage to an image:
http://tornorbye.blogspot.com/2010/02/how-to-render-javafx-node-into-image.html
canvasImage and saveImage(canvasImage, fc.getSelectedFile()) from this one:
http://javafx.com/samples/EffectsPlayground/src/Main.fx.html
Others:
http://download.oracle.com/javafx/2.0/webview/jfxpub-webview.htm
http://download.oracle.com/javafx/2.0/get_started/jfxpub-get_started.htm
http://fxexperience.com/2011/05/maps-in-javafx-2-0/
I have done this by putting a JavaFX WebView on a Swing JFrame through a JFXPanel, and then calling the paint() method on the JFXPanel once the WebEngine state is SUCCEEDED.
You may follow this tutorial to make a WebView: Integrating JavaFX into Swing Applications.
The code below demonstrates how I capture the rendered screen from the JFXPanel.
public static void main(String args[]) {
    jFrame = new JFrame("Demo Browser");
    jfxPanel = new JFXPanel();
    jFrame.add(jfxPanel);
    jFrame.setVisible(true);
    jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

    SwingUtilities.invokeLater(new Runnable() {
        @Override
        public void run() {
            Platform.runLater(new Runnable() {
                @Override
                public void run() {
                    browser = new FXBrowser();
                    jfxPanel.setScene(browser.getScene());
                    jFrame.setSize((int) browser.getWebView().getWidth(), (int) browser.getWebView().getHeight());
                    browser.getWebEngine().getLoadWorker().stateProperty().addListener(
                            new ChangeListener() {
                                @Override
                                public void changed(ObservableValue observable,
                                        Object oldValue, Object newValue) {
                                    State oldState = (State) oldValue;
                                    State newState = (State) newValue;
                                    if (State.SUCCEEDED == newValue) {
                                        captureView();
                                    }
                                }
                            });
                }
            });
        }
    });
}

private static void captureView() {
    BufferedImage bi = new BufferedImage(jfxPanel.getWidth(), jfxPanel.getHeight(), BufferedImage.TYPE_INT_ARGB);
    Graphics graphics = bi.createGraphics();
    jfxPanel.paint(graphics);
    try {
        ImageIO.write(bi, "PNG", new File("demo.png"));
    } catch (IOException e) {
        e.printStackTrace();
    }
    graphics.dispose();
    bi.flush();
}
For JavaFX 2.2 users there is a much cleaner and more elegant solution based on the JavaFX Node snapshot. You can take a look at the JavaFX Node documentation at:
http://docs.oracle.com/javafx/2/api/javafx/scene/Node.html
Here is an example of taking a capture from a WebView node (referenced as webView):
File destFile = new File("test.png");
WritableImage snapshot = webView.snapshot(new SnapshotParameters(), null);
RenderedImage renderedImage = SwingFXUtils.fromFXImage(snapshot, null);
try {
    ImageIO.write(renderedImage, "png", destFile);
} catch (IOException ex) {
    Logger.getLogger(GoogleMap.class.getName()).log(Level.SEVERE, null, ex);
}
I had no problems with this solution, and the WebView is rendered perfectly in the PNG according to the node size. Any JavaFX node can be rendered and saved to a file with this method.
Hope this helps!
Workaround posted by the JavaFX engineers for "Snapshot does not work with (invisible) WebView nodes":
When taking a snapshot of a scene that contains a WebView node, wait for at least two frames before issuing the snapshot command. This can be done by using a counter in an AnimationTimer to skip two pulses and take the snapshot on the third pulse.
Once you have your snapshot, you can convert the image to an AWT BufferedImage and encode it to a format like PNG or JPG using ImageIO.
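A sketch of that workaround (the webView variable and the output file name are assumptions; the scene itself could be snapshotted the same way):
// Skip two pulses, then take the snapshot on the third one.
new AnimationTimer() {
    private int pulseCounter;

    @Override
    public void handle(long now) {
        pulseCounter++;
        if (pulseCounter > 2) {
            stop();
            WritableImage snapshot = webView.snapshot(new SnapshotParameters(), null);
            BufferedImage buffered = SwingFXUtils.fromFXImage(snapshot, null);
            try {
                ImageIO.write(buffered, "png", new File("webview-snapshot.png"));
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}.start();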