I'm running PDFlib 9.x on a Linux server with PHP 5.4. I need to get a list of all layers of a certain input PDF and then apply changes to some of them (visibility, to be exact). I've been digging through the API reference for quite some time now, but I can only find functions which create new layers in the output document and modify those. Google doesn't turn up anything valuable either. I've found this example on their website, but it's in Java and I lack the expertise to apply this code to PHP.
https://www.pdflib.com/pcos-cookbook/special/layers/
Maybe someone could help me out?
I need to get a list of all layers of a certain input PDF and then apply changes to some of them (visibility to be exact).
This is not possible. When you import a PDF page with PDFlib+PDI, you can't change the content of the imported page, so it's not possible to change the layer properties.
The sample code you shared is just for retrieving the layer information of an imported document, not for manipulating it.
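For the retrieval part, here is a minimal sketch along the lines of the linked cookbook example (Java, since that's what the cookbook uses; the pCOS paths "length:layers" and "layers[i]/Name" should be checked against your PDFlib version). The PHP binding exposes the same calls as methods on the PDFlib object, e.g. $p->open_pdi_document(), $p->pcos_get_number() and $p->pcos_get_string().

    // Sketch: list the layer names of an input PDF via pCOS.
    import com.pdflib.PDFlibException;
    import com.pdflib.pdflib;

    public class ListLayers {
        public static void main(String[] args) throws PDFlibException {
            pdflib p = new pdflib();
            try {
                // Make open_pdi_document() return -1 on error instead of throwing.
                p.set_option("errorpolicy=return");
                int doc = p.open_pdi_document("input.pdf", "");
                if (doc == -1) {
                    System.err.println("Error: " + p.get_errmsg());
                    return;
                }
                int layerCount = (int) p.pcos_get_number(doc, "length:layers");
                for (int i = 0; i < layerCount; i++) {
                    String name = p.pcos_get_string(doc, "layers[" + i + "]/Name");
                    System.out.println("Layer " + i + ": " + name);
                }
                p.close_pdi_document(doc);
            } finally {
                p.delete();
            }
        }
    }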
Related
Good evening. I'm working on a project with a team; we have to make a browser without using JEditorPane or any other class that reads HTML.
How can we do that? Do we need to make a new class that does what JEditorPane does? Can I find JEditorPane's source code somewhere? Thanks!
Well, this is an answer:
If you need to display web content without using any pre-existing engine (such as JEditorPane or a Chromium binding), you need to read the HTML as an XML file and construct your own native view from it (without CSS and JS this is a fairly easy task) by building the screen from a one-to-one mapping of each HTML tag to a Java JComponent. A rough sketch of that idea is below.
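A very rough sketch of that mapping, assuming the page is already well-formed XHTML (the file name, the tag coverage, and the font sizes are just placeholders; a real renderer also needs text flow, attributes, links, images, and so on):

    // Parse the page with the JDK's XML parser and map a few tags to Swing components.
    import java.io.File;
    import javax.swing.*;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.*;

    public class TinyRenderer {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new File("page.xhtml"));
            JPanel panel = new JPanel();
            panel.setLayout(new BoxLayout(panel, BoxLayout.Y_AXIS));
            render(doc.getDocumentElement(), panel);

            JFrame frame = new JFrame("Tiny browser");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new JScrollPane(panel));
            frame.setSize(640, 480);
            frame.setVisible(true);
        }

        // Map each element to a component; recurse into anything we don't know.
        static void render(Element e, JPanel panel) {
            String tag = e.getTagName().toLowerCase();
            if (tag.matches("h[1-6]")) {
                JLabel label = new JLabel(e.getTextContent().trim());
                label.setFont(label.getFont().deriveFont(24f - 2 * (tag.charAt(1) - '1')));
                panel.add(label);
            } else if (tag.equals("p")) {
                JTextArea text = new JTextArea(e.getTextContent().trim());
                text.setLineWrap(true);
                text.setWrapStyleWord(true);
                text.setEditable(false);
                panel.add(text);
            } else {
                NodeList children = e.getChildNodes();
                for (int i = 0; i < children.getLength(); i++) {
                    if (children.item(i) instanceof Element) {
                        render((Element) children.item(i), panel);
                    }
                }
            }
        }
    }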
Modern web browsers are pretty complicated, so there are a lot of different pieces that come together to display a web page. In order to build a browser, you first need to understand what a browser is. For that, I recommend reading this tutorial.
Once you have an understanding of how a browser actually works, you need to determine which pieces you can reuse and which pieces you have to write from scratch. Do you have to write the entire rendering engine? Good luck! Can you use an existing engine like Gecko or WebKit? Or maybe you can get a little closer to done and use the Java port of WebKit?
Once you have a better understanding of the question, come back and ask more direct questions when you get stuck on a specific piece. As it is, your first step is to gain an understanding of the problem you are trying to solve.
I've searched and searched, coming across questions that address parts of the problem, but nothing comprehensive. I'm using GWT and Eclipse to develop a website that uses Highcharts to make some fancy plots.
The idea is that the user will be able to select one of their local CSV data files, and upon selection of the file, the plot will be rendered using their data and our fancy algorithms.
We don't want to send enormous amounts of data to the server, as this would become costly and time-consuming for the user. Is there a way to process, or at least pre-process, the user's data using Java code in a GWT/Eclipse project?
Any help is greatly appreciated!
This is a duplicate of GWT Toolkit: preprocessing files on client side
One of the answers points to these links:
http://code.google.com/p/gwt-nes-port/wiki/FileAPI - GWT wrapper for HTML5 File API
http://www.html5rocks.com/en/tutorials/file/dndfiles/ - HTML5 FileAPI
But, alas, the FileAPI is pretty new: http://caniuse.com/fileapi
The other alternative you have, to avoid the server, is a text area the user pastes the CSV file into, which you then read using GWT (see the sketch below). This is a common trick, and I think you can even copy and paste from certain spreadsheet programs this way.
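A minimal sketch of that trick (the class name and the plotting hand-off are placeholders); everything runs in the compiled JavaScript on the client, so nothing is uploaded:

    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.event.dom.client.ClickEvent;
    import com.google.gwt.event.dom.client.ClickHandler;
    import com.google.gwt.user.client.Window;
    import com.google.gwt.user.client.ui.Button;
    import com.google.gwt.user.client.ui.RootPanel;
    import com.google.gwt.user.client.ui.TextArea;

    public class CsvPasteEntryPoint implements EntryPoint {
        public void onModuleLoad() {
            final TextArea csvInput = new TextArea();
            csvInput.setVisibleLines(10);
            Button parseButton = new Button("Plot");

            parseButton.addClickHandler(new ClickHandler() {
                public void onClick(ClickEvent event) {
                    // Parse the pasted CSV entirely on the client.
                    String[] rows = csvInput.getText().split("\n");
                    double[][] data = new double[rows.length][];
                    for (int i = 0; i < rows.length; i++) {
                        String[] cells = rows[i].split(",");
                        data[i] = new double[cells.length];
                        for (int j = 0; j < cells.length; j++) {
                            data[i][j] = Double.parseDouble(cells[j].trim());
                        }
                    }
                    // Hand "data" to the Highcharts wrapper / plotting code here.
                    Window.alert("Parsed " + data.length + " rows client-side");
                }
            });

            RootPanel.get().add(csvInput);
            RootPanel.get().add(parseButton);
        }
    }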
You cannot currently do it in a universal way in GWT across all browsers. GWT translates to JavaScript, and JavaScript does not have the privileges required to read local files on the client side.
For a more detailed answer you can reference: How to retrieve file from GWT FileUpload component?
I have an existing Swing desktop application that I wish to convert to a web application. The first thing stopping me from doing so is that the desktop application deals with writing and reading PDF files. The user also fills out PDF forms, which need to be read by the application.
A typical use case in the desktop application: the user logs in, opens a PDF form, and fills it out. The Swing application manages where the file is stored, so it goes to the file, reads the form, extracts the data, and stores the data in the DB. The user might not fill out the form all in one go; he might save it, come back to it later, and continue.
All of this now needs to be done by the web app. My problem is I don't want the user to download and upload the form multiple times to the server. That would eat bandwidth, and asking the user to save the file locally and upload it back once he completes the form doesn't appeal to me either, since the desktop application also nicely managed the location of these files.
Would I need to implement something like a Dropbox kind of thing? A small daemon running continuously to check which files have been updated and upload them to the server? That would be difficult, since on the server I wouldn't know whether the file was the latest or not. Has anyone done something like this before?
I have another suggestion: why don't you show the user a web form with the same fields and transfer the values into the PDF after the user submits (a sketch of this is below)? This way the PDF never leaves the server and you transmit just the minimal amount of data.
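A minimal sketch of that server-side step with iText (5.x package names; the template path and the submitted field names are placeholders and must match the fields defined in your PDF form):

    import java.io.FileOutputStream;
    import java.util.Map;
    import com.itextpdf.text.pdf.AcroFields;
    import com.itextpdf.text.pdf.PdfReader;
    import com.itextpdf.text.pdf.PdfStamper;

    public class FormToPdf {
        // Write the values the user submitted in the HTML form into the PDF template,
        // so the PDF itself never travels to the browser.
        public static void fill(String templatePath, String outputPath,
                                Map<String, String> submittedValues) throws Exception {
            PdfReader reader = new PdfReader(templatePath);
            PdfStamper stamper = new PdfStamper(reader, new FileOutputStream(outputPath));
            AcroFields fields = stamper.getAcroFields();
            for (Map.Entry<String, String> entry : submittedValues.entrySet()) {
                fields.setField(entry.getKey(), entry.getValue());
            }
            stamper.close();   // also closes the output stream
            reader.close();
        }
    }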
Switching to a web version of the application may force you to re-think some of the ways you are doing things. Browsers are intentionally limited in their access to the local file system, which introduces a major hurdle for your current mode of operation.
Even if you could display the PDF within a browser, detect the completion of edits and send the result back to the server from within browser code (which is probably possible), you'll be subject to different browsers doing different (strange) things with whatever PDF plugin is installed.
As Vitaliy already mentioned, switching to a (web) form the user can populate in the browser means that the whole download/upload problem goes away. But then you have to take what the user has done in a web page and pump that into a PDF somehow. If you don't HAVE to start with a PDF, but could collect the data and produce a PDF at the end, then you might have more options. For example, you could use iText to create a PDF directly if you don't have too many styles of document to work with. You could use something like Docmosis, which you can give templates to and have it populate and render PDFs later. With the Docmosis option you can also ask Docmosis for the list of fields in the template, so you could build a web form based on the selected template, let the user fill it in, then push that data to Docmosis to produce the file.
Hopefully there's some options there that are useful to you.
Adobe documents how to do this here. Never underestimate the power of design-by-Google. I know this can be made to work because I've used PDF forms online.
I worked on a similar issue a few years ago, although I wasn't dealing with signed forms. The signature definitely makes it a little more difficult. However, I was able to use iText to create a PDF form with fields and only submit the field data back to the server. Offhand, I unfortunately do not remember exactly what/how we did it, but I can confirm it is doable (with limitations/caveats), e.g. the user had to have a PDF reader plugin installed and was forced to download the PDF every time.
Basically, what I did was use iText to create an FDF from a PDF (with existing form fields). The submit button in the FDF actually submits the form data to a URL of your choosing (not unlike an HTML form). Once you have that data, I believe I merged the form fields (from the FDF) with the PDF on the server side using iText, roughly as sketched below.
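The server-side merge could look roughly like this with iText (5.x class names; the file paths are placeholders):

    import java.io.FileOutputStream;
    import com.itextpdf.text.pdf.AcroFields;
    import com.itextpdf.text.pdf.FdfReader;
    import com.itextpdf.text.pdf.PdfReader;
    import com.itextpdf.text.pdf.PdfStamper;

    public class MergeFdf {
        // Merge the FDF submitted by the PDF reader plugin back into the original form.
        public static void merge(String pdfTemplate, String fdfFile, String outputPdf)
                throws Exception {
            PdfReader pdf = new PdfReader(pdfTemplate);
            FdfReader fdf = new FdfReader(fdfFile);
            PdfStamper stamper = new PdfStamper(pdf, new FileOutputStream(outputPdf));
            AcroFields fields = stamper.getAcroFields();
            fields.setFields(fdf);       // copy every field value from the FDF
            stamper.close();
            pdf.close();
        }
    }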
Once you use the server to maintain all the form data, the synchronization/locking process you use to ensure that a single user is updating the latest and greatest form data is up to you.
Your comment under jowierun's answer indicates that you want to deal with Word/Excel/etc. documents as well, so I am not entirely sure I understand your needs. Your initial post discussed the need to fill out PDF forms locally, but afterwards it sounds like you are looking for a file-sharing system instead.
Can you please clarify exactly what you are trying to accomplish?
I'm working on an existing Java web application (HTML/CSS/JS/JSP/Servlets and Java classes in this particular app) that currently uses an applet to print checks.
My boss recently came to me and informed me that errors are coming back on users' machines when testing the check printing against the latest versions of Java.
He is wondering how we could set up the application to print checks off without using an applet.
In the past, I've used Crystal Reports to lay out forms and print them but that was in asp.net.
I know there are Java PDF libraries available but I'm not at all familiar with any of them and not sure that they could be used to format and print checks in a Java web application.
So, I'm ultimately wanting to know about what has worked for those who have implemented check or form printing using Java/JSP/Servlets.
Edit (2012-02-24, 13:15 EST):
I mentioned "Java PDF libraries" above but have since found out that PDF cannot be used as end-users should not be able to save the check documents (unless PDF's can be made to not be saveable and just printable). All of the data is managed right on the database (Oracle in our case).
I've used iText to create PDF files before for things like this. PDF is your answer, since the whole point of the format is that it never really changes. Much better than an applet.
http://itextpdf.com/
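A minimal iText sketch (5.x class names; the payee and amount are placeholder data) of generating such a document on the server; a real check layout would position fields absolutely, e.g. with PdfContentByte or a form template:

    import java.io.FileOutputStream;
    import com.itextpdf.text.Document;
    import com.itextpdf.text.Paragraph;
    import com.itextpdf.text.pdf.PdfWriter;

    public class CheckPdf {
        public static void main(String[] args) throws Exception {
            // Create a one-page PDF and stream it straight to a file (or a servlet response).
            Document document = new Document();
            PdfWriter.getInstance(document, new FileOutputStream("check.pdf"));
            document.open();
            document.add(new Paragraph("Pay to the order of: Jane Doe"));
            document.add(new Paragraph("Amount: $1,234.56"));
            document.close();
        }
    }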
I ended up digging deeper into iText and came across Flying Saucer, which makes it super easy to render a PDF from XML or XHTML.
Check it out at http://code.google.com/p/flying-saucer/
I also found out how to partially hide the save functionality by rendering the PDF inside a hidden iframe: Create a "print-only" PDF with itext
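For reference, the Flying Saucer part boils down to something like this (the XHTML markup here is just a placeholder; you'd normally load your templated check markup instead):

    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import org.xhtmlrenderer.pdf.ITextRenderer;

    public class XhtmlToPdf {
        public static void main(String[] args) throws Exception {
            String xhtml =
                "<html><head><style>body { font-family: serif; }</style></head>" +
                "<body><h1>Check #1001</h1><p>Pay to the order of Jane Doe</p></body></html>";

            // Flying Saucer lays out the XHTML/CSS and hands it to iText for PDF output.
            OutputStream out = new FileOutputStream("check.pdf");
            ITextRenderer renderer = new ITextRenderer();
            renderer.setDocumentFromString(xhtml);  // or setDocument(url) for a file
            renderer.layout();
            renderer.createPDF(out);
            out.close();
        }
    }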
I have a project where they want me to embed a website into a Java application, and they want the website to have a color scheme similar to the rest of the application. I know a lot about CSS and building websites, but I am not aware of a way to change the look of a website on the fly as it comes in. Is there someone who can help?
Update:
I don't have access to the header because it is not my website. To give more info about the project: we have a browser embedded in a Java client application. The user needs to access a website that displays the contents of a database. I have no access to the original HTML or CSS of the site.
What I need is to change the background color and font sizes of the incoming web page to match the look and feel of the Java application.
One approach would be to replace their CSS with your own.
You could also take the approach used by the Stylish plugin, which involves a lot of !important declarations to override the site's CSS. Since this is a Java app, I assume the user will not have the opportunity to supply their own CSS, so using !important here doesn't precisely go against the standard.
In your particular situation, I'd look into data scraping: all you need to do is scrape the website for the data, and then re-style it to present it how you want.
Good luck
The Greasemonkey add-on for Firefox does just this. You can write a bit of JavaScript code and have it run when certain web pages load. One common thing to use it for is to make changes to the DOM to move page elements around, hide or resize elements, change colors, etc. There are a bunch of examples at userscripts.org if you want to get an idea of what I am talking about.
Your code would simply need to do something similar. Load the page (including the normal style sheets) and then use the DOM to make changes to style elements as desired. Browse through the source of the page to get the names/ids of important elements, and your code can key off of those. Loading an additional CSS file containing your changes is an option, but doing it programmatically will likely give you more flexibility in the event that the target website changes.
It depends on what you use to show the pages in Java. Most browser implementations support dynamic changes to the DOM, so you can simply add a CSS reference to the header as its last element, and it will be applied.
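For example, with a JavaFX WebView as the embedded browser (other embedded browsers expose similar hooks; the URL and colors here are placeholders), you can append a style element once the page has loaded, so your rules win over the site's own:

    import javafx.application.Application;
    import javafx.concurrent.Worker;
    import javafx.scene.Scene;
    import javafx.scene.web.WebEngine;
    import javafx.scene.web.WebView;
    import javafx.stage.Stage;

    public class SkinnedBrowser extends Application {
        @Override
        public void start(Stage stage) {
            WebView view = new WebView();
            WebEngine engine = view.getEngine();

            // When the page finishes loading, append our overrides as the last <head> child.
            engine.getLoadWorker().stateProperty().addListener((obs, old, state) -> {
                if (state == Worker.State.SUCCEEDED) {
                    engine.executeScript(
                        "var style = document.createElement('style');" +
                        "style.innerHTML = 'body { background: #2b2b2b !important;" +
                        " color: #e0e0e0 !important; font-size: 14px !important; }';" +
                        "document.head.appendChild(style);");
                }
            });

            engine.load("http://example.com/");
            stage.setScene(new Scene(view, 800, 600));
            stage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }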
You need to know the site's HTML/CSS markup so you can make the best skin.
You could theoretically do it by styling just the basic tags (h1-h6, p, etc.), but it would not be as good; it would probably fail to produce the best results at times and even produce horrible output at times.
If you KNOW the site markup, then you can make a skin and simply use CSS/images to style it the way you want.
Just include your CSS LAST so that it overrides the stylesheet already present on the site you want to skin differently.
It should not be a difficult thing per se; the skin itself is probably the bigger (more effort required) part of the job.
"On the fly" should mean changing the HTML as it is fetched, so parsing and replacing tokens seems to be the way to go.
You could change the locations of the stylesheet files by replacing the href value in a link that points to a CSS file, setting the value to your own stylesheet (a different URI):
<link type="text/css" href="mylocalURI" rel="stylesheet" />
(this should be the result of a process/replacement)
I think you understand what should happen for inline styles.
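A tiny sketch of that replacement, assuming the page's HTML has already been fetched into a String (the regex is deliberately naive and only handles the href-before-rel ordering shown above; a real implementation should use an HTML parser instead of string matching):

    import java.util.regex.Matcher;

    public class StylesheetRewriter {
        // Point every stylesheet <link> at our own CSS file.
        public static String rewrite(String html, String myLocalUri) {
            return html.replaceAll(
                "(<link\\b[^>]*href=[\"'])[^\"']*([\"'][^>]*rel=[\"']stylesheet[\"'])",
                "$1" + Matcher.quoteReplacement(myLocalUri) + "$2");
        }
    }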
I would use JTidy to normalize the original site's HTML to XHTML, then use XSLT to filter only the interesting/relevant information, obtaining plain XML; and finally (since I wouldn't want to convert the XML to objects), XSLT again to transform the "pure" XML into the HTML look & feel I need/want.
All of this can be assembled as streams, using more or less 4 KB of buffer per filter (12 KB total) per thread, which also means it will run fast enough. And it is all built on standard, openly available open-source components.
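A sketch of that pipeline (the two stylesheet names are placeholders: "filter.xsl" reduces the page to the interesting XML, "skin.xsl" renders that XML with the desired look & feel):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMResult;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import org.w3c.dom.Document;
    import org.w3c.tidy.Tidy;

    public class TidyXsltPipeline {
        public static void main(String[] args) throws Exception {
            // 1. Normalize the original HTML to XHTML with JTidy.
            Tidy tidy = new Tidy();
            tidy.setXHTML(true);
            tidy.setQuiet(true);
            tidy.setShowWarnings(false);
            Document xhtml = tidy.parseDOM(new FileInputStream("original.html"), null);

            TransformerFactory factory = TransformerFactory.newInstance();

            // 2. Filter the XHTML down to the interesting "pure" XML.
            Transformer filter = factory.newTransformer(new StreamSource("filter.xsl"));
            DOMResult pureXml = new DOMResult();
            filter.transform(new DOMSource(xhtml), pureXml);

            // 3. Transform the pure XML into HTML with the desired look & feel.
            Transformer skin = factory.newTransformer(new StreamSource("skin.xsl"));
            skin.transform(new DOMSource(pureXml.getNode()),
                           new StreamResult(new FileOutputStream("restyled.html")));
        }
    }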
Cheers.