I need a way to generate an HTML interface (a form), starting from a WSDL, for submitting web service requests. The request submission is done by server-side code: the user fills out the form and posts the data.
I am looking for a Java library that might help me write this code.
I'm not trying to generate Java classes for the web service; I have to generate form fields for any WSDL URL.
According to MikeC, http://www.soapclient.com/soaptest.html is a tool that creates HTML forms from WSDL documents. Unfortunately, it's not a Java library, and it has at least one limitation: no multidimensional array support.
But with a little effort you should be able to write your own parser/transformer for your specific use case. See also "How to parse WSDL in Java?" for more information about WSDL parsers for Java.
XSLT is also an option: http://www.ibm.com/developerworks/library/ws-xsltwsdl/.
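As a rough starting point, the sketch below uses only the JDK's DOM parser to pull the `xsd:element` declarations out of a WSDL's inline schema and emit one form field per element. It assumes a simplified WSDL with flat, inline schema types; real-world WSDLs (imports, complex types, arrays) need full schema resolution, which is where a dedicated WSDL parsing library earns its keep. The class and method names here are invented for illustration.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: read the <xsd:element> declarations inside a WSDL's
// inline schema and emit one HTML <input> per element. Real WSDLs need
// full schema resolution; a library such as wsdl4j would be more robust.
public class WsdlFormSketch {

    static String formFieldsFromWsdl(String wsdlXml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(wsdlXml.getBytes(StandardCharsets.UTF_8)));

        // Collect every schema element declaration, regardless of nesting.
        NodeList elements = doc.getElementsByTagNameNS(
                "http://www.w3.org/2001/XMLSchema", "element");

        StringBuilder html = new StringBuilder("<form method=\"post\">\n");
        for (int i = 0; i < elements.getLength(); i++) {
            Element e = (Element) elements.item(i);
            String name = e.getAttribute("name");
            if (!name.isEmpty()) {
                html.append("  <label>").append(name)
                    .append(" <input name=\"").append(name).append("\"/></label>\n");
            }
        }
        return html.append("</form>").toString();
    }

    public static void main(String[] args) throws Exception {
        String wsdl =
            "<definitions xmlns=\"http://schemas.xmlsoap.org/wsdl/\""
          + " xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\">"
          + "<types><xsd:schema>"
          + "<xsd:element name=\"city\" type=\"xsd:string\"/>"
          + "<xsd:element name=\"zip\" type=\"xsd:string\"/>"
          + "</xsd:schema></types></definitions>";
        System.out.println(formFieldsFromWsdl(wsdl));
    }
}
```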
What is the difference between web crawler and parser?
In Java there are several names for these fetching libraries. For example, Nutch is called a crawler and jsoup a parser.
Do they serve the same purpose?
Are they interchangeable for the job?
Thanks
The jsoup library is a Java library for working with real-world HTML. It is capable of fetching and parsing HTML. However, it is not a web crawler in general, as it only fetches one page at a time, unless you write a custom program (i.e. a crawler) around it that fetches a page, extracts new URLs, and fetches those in turn.
A web crawler uses an HTML parser to extract URLs from a previously fetched page and adds these newly discovered URLs to its frontier.
A general sequence diagram of a Web crawler can be found in this answer: What sequence of steps does crawler4j follow to fetch data?
To summarize it:
An HTML parser is a necessary component of a web crawler, used to parse and extract URLs from the fetched HTML. However, an HTML parser alone is not a web crawler, as it lacks necessary features such as keeping track of previously visited URLs, politeness, and so on.
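The fetch/parse/frontier loop described above can be sketched in a few lines. The "parse" step here is a stand-in (a lookup in an in-memory link graph) so the example stays self-contained; a real crawler would call jsoup or another HTML parser at that point, and would also need politeness delays, robots.txt handling, and a persistent frontier.

```java
import java.util.*;

// Minimal sketch of a crawler loop: a frontier of URLs to visit plus a
// visited set. The "fetch and parse" step is simulated with an in-memory
// link graph; a real crawler would plug in an HTML parser (e.g. jsoup)
// there and add politeness, robots.txt checks, etc.
public class CrawlerSketch {

    static List<String> crawl(String seed, Map<String, List<String>> linkGraph) {
        Set<String> visited = new LinkedHashSet<>();
        Deque<String> frontier = new ArrayDeque<>();
        frontier.add(seed);
        while (!frontier.isEmpty()) {
            String url = frontier.poll();
            if (!visited.add(url)) continue;           // skip already-visited URLs
            // "Fetch and parse" the page: here just a map lookup.
            for (String out : linkGraph.getOrDefault(url, List.of())) {
                if (!visited.contains(out)) frontier.add(out);
            }
        }
        return new ArrayList<>(visited);
    }

    public static void main(String[] args) {
        Map<String, List<String>> graph = Map.of(
            "a", List.of("b", "c"),
            "b", List.of("a", "c"),
            "c", List.of());
        System.out.println(crawl("a", graph));  // each page visited exactly once
    }
}
```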
This is easily answered by looking this up on Wikipedia:
A parser is a software component that takes input data (frequently text) and builds a data structure.
https://en.wikipedia.org/wiki/Parsing#Computer_languages
A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing (web spidering).
https://en.wikipedia.org/wiki/Web_crawler
I've been reading about some XPages functionality for wrapping a document with DominoDocument.wrap(...) that is supposed to do this for you. However, I am in a standalone Java program.
Is it possible to get the XPages jar files to import into my application somehow? I do have the basic Notes jars working.
It is unlikely you will get this working using the DominoDocument class. When you wrap the Document, it creates a DominoRichTextItem; during this process it extracts the attachments to the local filesystem so that the web server can easily serve the content during the user's browsing/editing session.
This attachment-extraction process relies on the user's session information, as well as the XPages application's persistence service.
You are better off emulating this idea by using the standard Notes API to extract the text/HTML MIME part, and then extracting the attachment/inline-image MIME parts for whatever your purpose is.
I would like to parse web content and get only the text from it. I am getting the content as HTML/JavaScript; now I need only the text.
Can someone help me with this? I am using an HTML parser to do it.
For example, I need the text content in the file below, which is in bold.
The URLConnection class contains many methods that let you communicate with the URL over the network. URLConnection is an HTTP-centric class; that is, many of its methods are useful only when you are working with HTTP URLs. However, most URL protocols allow you to read from and write to the connection. This section describes both functions.
Can someone suggest an approach or provide some sample code to do this?
Thanks in advance.
You could use an HTML parser. A safe choice would be HtmlParser.
An unorthodox method that I like is using tools such as HtmlUnit, which is basically meant for unit testing but has advanced XPath parsing capabilities and also provides automatic login, session handling, and similar features.
I recommend using HtmlUnit for downloading the pages and Jsoup as the HTML/XML parser.
I use them to extract information from websites (Google search results too).
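With jsoup, extracting the text is a one-liner: `Jsoup.parse(html).text()`. If you want to avoid an extra dependency, the sketch below does the same thing with the JDK's own (legacy) HTML parser: it walks the document and keeps only the text nodes. The class name is invented for illustration.

```java
import javax.swing.text.html.HTMLEditorKit;
import javax.swing.text.html.parser.ParserDelegator;
import java.io.StringReader;

// Dependency-free sketch using the JDK's built-in HTML parser to keep
// only the text nodes of an HTML document. With jsoup the equivalent
// is simply: Jsoup.parse(html).text()
public class HtmlTextExtractor {

    static String textOf(String html) throws Exception {
        StringBuilder text = new StringBuilder();
        HTMLEditorKit.ParserCallback collector = new HTMLEditorKit.ParserCallback() {
            @Override
            public void handleText(char[] data, int pos) {
                text.append(data).append(' ');   // collect text nodes only
            }
        };
        new ParserDelegator().parse(new StringReader(html), collector, true);
        return text.toString().trim();
    }

    public static void main(String[] args) throws Exception {
        String html = "<html><body><p>The <b>URLConnection</b> class "
                    + "contains many methods.</p></body></html>";
        System.out.println(textOf(html));
    }
}
```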
How do I pass whole PDF content as the response in a RESTful web service on the Java platform?
I tried converting the response to a String and to a byte array. In the first case I got a "registered expression" error; in the second case, unexpected results.
PDF data can be transferred as the response of a RESTful request just fine.
Set the correct content type and transfer the binary content of the PDF.
Nothing special about it.
What are you doing right now? Are you using a library?
Describe your "unexpected results".
Describe your "expression error".
Basically, you need to provide a lot more details.
Your responses from the Java platform are most definitely going to be byte arrays in order to deliver the PDF. On the server side you need to make sure that the MIME type for PDF is registered and that the server is providing and accepting the correct headers for PDF.
If you're serving a static PDF, Java needs to know where it is, hosted under the URL at which you defined your RESTful resource.
If it's dynamic, your PDF library (I've used iText in the past) needs to output the PDF binary so you can serve it via your defined RESTful resource.
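The byte-array route is the right one: converting PDF bytes to a String round-trips them through a character encoding and corrupts the binary, which is a common source of "unexpected results". Below is a framework-free sketch using the JDK's built-in HttpServer to show the two essentials (correct content type, raw bytes); in JAX-RS the equivalent is roughly `return Response.ok(pdfBytes, "application/pdf").build();`. The endpoint path and the stub `renderPdf()` method are invented for illustration.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Framework-free sketch of "set the content type, write the bytes".
// In a real service the byte[] would come from your PDF library
// (e.g. iText) or be read from a file.
public class PdfEndpointSketch {

    // Stand-in for real PDF output: a valid PDF always starts with "%PDF-".
    static byte[] renderPdf() {
        return "%PDF-1.4\n%%EOF".getBytes();
    }

    static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/report.pdf", exchange -> {
            byte[] pdf = renderPdf();
            exchange.getResponseHeaders().set("Content-Type", "application/pdf");
            exchange.sendResponseHeaders(200, pdf.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(pdf);   // raw bytes: never round-trip through String
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start();
        System.out.println("Serving on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```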
Not sure if this is related to your problem, but I have seen that Adobe Acrobat doesn't handle HTTP Range headers well: if you say you accept ranges, it will send some very weird Range requests and ignore the partial-content headers you send back. Just a warning.
I am looking into ways to make a small app using GWT for converting documents from one format to another, mainly .doc, .pdf, .odt, .rtf, and maybe a couple more.
Has anyone tried this before?
I came across the library JODConverter, but it needs OpenOffice to be already installed, and I don't really know how many people have used it with GWT in the past.
Please give me some starting pointers, or if anyone has experience with this kind of app, do share.
Thanks and regards,
Rohit
I was looking into implementing something like this a few months ago.
Since GWT compiles your code to JavaScript, there is no way to do the conversion on the client side; JavaScript can't access the file system.
So you would need to upload the file to the server first, do the conversion on the server side, and send the converted file back.
I had never heard of JODConverter before; the library I wanted to use was Apache POI. Unfortunately I can't tell you anything about it, because I haven't tried it yet.
It sounds like JOD Converter is precisely what you need since you're looking at multi format conversions from Java. You would install OpenOffice on your server and link it up with JOD Converter. When a document is uploaded, your application would call JOD Converter to perform the conversion and stream the converted document back to the caller. Alternatively you can put the file somewhere, and send a link (URL) back to the caller so they can fetch the document. You can also look at JOD Reports or Docmosis if you need to manipulate the documents.
GWT is mostly a client-side toolkit. Are you trying to make a tool that does all the conversion on the client side, with no help from the server? In that case, you should be looking for JavaScript libraries that can read/convert all those formats. If you are planning to have the user upload their files to the server, then you can use whatever technology you want on the server, and just use GWT for the UI.
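If the conversion happens on the server, the upload handler ultimately just hands the file to a converter. JODConverter drives a running OpenOffice instance through its API (faster for many documents); an even simpler route is shelling out to LibreOffice/OpenOffice's headless command-line converter. The sketch below only builds that command rather than running it; it assumes `soffice` is installed and on the PATH, and the file names are invented.

```java
import java.io.File;
import java.util.List;

// Sketch of the simplest server-side conversion route: invoke
// LibreOffice's headless converter for an uploaded file. Assumes
// `soffice` is on the PATH; JODConverter achieves the same through
// OpenOffice's API with a long-lived process instead of one launch
// per document.
public class OfficeConvertSketch {

    static List<String> convertCommand(File input, String targetFormat, File outDir) {
        return List.of("soffice", "--headless",
                       "--convert-to", targetFormat,
                       "--outdir", outDir.getPath(),
                       input.getPath());
    }

    public static void main(String[] args) {
        List<String> cmd = convertCommand(new File("upload.doc"), "pdf", new File("/tmp"));
        System.out.println(String.join(" ", cmd));
        // To actually run it on the server:
        // new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}
```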