At the moment, in my company, we have tons of REST endpoints working and we will create more soon. We want to publish a documentation page for the API calls that don't exist yet, so the UI developers can work even though we don't have the real API yet.
I know Swagger is fine for the existing ones, but for the ones that don't exist yet we can't create "empty" methods and publish them just so Swagger can pick them up and show the documentation.
Also, with Swagger I am not able to show multiple REAL payload examples for a particular request.
Doing some research, I found Postman. Even though I don't have the real API yet, with Postman I can document every element I need. But Postman does not have a "pretty print" feature yet.
So I decided to export a JSON file from Postman with all of its content about my API calls, including sample request payloads. After that, I want to write a Java program that receives that JSON file and produces the desired output in PDF or HTML format, according to my selection when invoking the program (something like: java -jar Convert.jar sample.json pdf).
At the moment, I have successfully loaded the JSON file into memory in my Java program, and it is a large, complex POJO.
My question: should I iterate over the object and its child objects (manually) in order to produce a generic HTML file with its information? Or is there a generic library or function that receives a big POJO and presents it in a generic (yet well-presented) HTML (or PDF) format?
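To illustrate the manual-iteration approach I have in mind, here is a rough sketch in plain Java, assuming the JSON has been bound generically (e.g. Jackson's readValue(json, Object.class) produces nested Maps and Lists); the class and field names are made up:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: render an arbitrary nested Map/List/scalar structure as generic HTML.
public class PojoToHtml {

    static String render(Object node) {
        StringBuilder sb = new StringBuilder();
        renderNode(node, sb);
        return sb.toString();
    }

    private static void renderNode(Object node, StringBuilder sb) {
        if (node instanceof Map) {
            // Objects become definition lists: key -> rendered value
            sb.append("<dl>");
            for (Map.Entry<?, ?> e : ((Map<?, ?>) node).entrySet()) {
                sb.append("<dt>").append(escape(String.valueOf(e.getKey()))).append("</dt><dd>");
                renderNode(e.getValue(), sb);
                sb.append("</dd>");
            }
            sb.append("</dl>");
        } else if (node instanceof List) {
            // Arrays become unordered lists
            sb.append("<ul>");
            for (Object item : (List<?>) node) {
                sb.append("<li>");
                renderNode(item, sb);
                sb.append("</li>");
            }
            sb.append("</ul>");
        } else {
            sb.append(escape(String.valueOf(node))); // scalar leaf
        }
    }

    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) {
        Map<String, Object> request = new LinkedHashMap<>();
        request.put("name", "Create user");
        request.put("samplePayloads", List.of("{\"id\": 1}", "{\"id\": 2}"));
        System.out.println(render(request));
    }
}
```

The HTML output could then be converted to PDF in a second step, which keeps the two output formats behind one traversal.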
Thanks.
There is a betting exchange website which offers its data in XML at the following link:
http://odds.smarkets.com/oddsfeed.xml
I would like to access this link to retrieve the latest data (in Java). Previously I have had to download the (very large) file, add it to my project, and get the data from there. What is the best way to achieve this without having to download the file every time I want to access the data?
I plan on storing the returned data into a database.
Thanks
Well, this seems to be a tricky question. I would suggest you create a simple client application (client/server architecture) to get the contents from this URL; you can use a REST call for that. Which contents you need to read depends on the functionality you want to achieve, so you need to write your own custom logic to parse the data. Here you act as the client, and the URL is your service.
You can refer to the following link:
https://community.atlassian.com/t5/Confluence-questions/Access-page-content-via-URL/qaq-p/163060
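As a plain-JDK sketch of such a client (my own suggestion, not taken from the link above): stream the feed directly from the URL and parse it with StAX, so the very large document never has to be saved to disk. The element name below is hypothetical — check the real oddsfeed.xml structure before using it:

```java
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

// Sketch: stream-parse an XML feed without writing it to disk first.
// For the real feed you would pass
//   new InputStreamReader(new URL("http://odds.smarkets.com/oddsfeed.xml").openStream())
// instead of the StringReader used in main() below.
public class FeedClient {

    // Collect the text content of every element with the given name.
    static List<String> extractElements(Reader source, String elementName) throws Exception {
        List<String> values = new ArrayList<>();
        XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(source);
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT
                    && r.getLocalName().equals(elementName)) {
                values.add(r.getElementText()); // reads up to the matching end tag
            }
        }
        r.close();
        return values;
    }

    public static void main(String[] args) throws Exception {
        // Tiny stand-in document; the element name "event" is made up.
        String sample = "<feed><event>Match A</event><event>Match B</event></feed>";
        List<String> events = extractElements(new StringReader(sample), "event");
        System.out.println(events); // values ready to be inserted into a database
    }
}
```

Because StAX pulls one event at a time, you can insert each record into the database as you go instead of holding the whole feed in memory.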
I'm running PDFlib 9.x on a Linux server with PHP 5.4. I need to get a list of all layers of a certain input PDF and then apply changes to some of them (visibility, to be exact). I've been digging through the API reference for quite some time now, but I can only find functions which create new layers in the output document and modify those; Google doesn't turn up anything useful either. I've found this example on their website, but it's in Java and I lack the expertise to apply this code to PHP.
https://www.pdflib.com/pcos-cookbook/special/layers/
Maybe someone could help me out?
I need to get a list of all layers of a certain input PDF and then apply changes to some of them (visibility to be exact).
This is not possible. When you import a PDF page with PDFlib+PDI, you can't change the content of the imported page, so it's not possible to change the layer properties.
The sample code you shared is just for retrieving the layer information of an imported document, not for manipulating it.
I am trying to do some web scraping of a site where the data sits in dynamically loaded containers. The loading appears to be done via JavaScript, so I would need to execute it in order to get the data. I have written two versions of the program, one in Java and one in C#, so help with either one would be great.
I am currently using WebResponse/WebRequest in C# and HttpURLConnection in Java. I have to log into the site first, which already works like a charm. Now I need to parse the content so the data gets filled in and the containers are loaded. Is there an easy way to run the HTML through a browser control or an already included library?
In C# you could just use a hidden WebBrowser control. With that you can execute everything you need.
I want to grab data from a website (for example, the names, identification number, and list of resources someone is using) and post it to another website.
What I was thinking of doing was using cURL to grab the information from an existing REST API on one website. Then I wanted to write a program or an API to post that information to another website.
After using cURL, how/where can I store that information so that I can use it from another program? Would it be easier to write one single program that extracts the information from the first website and posts it to the other? If so, would it be possible to do so using Java, and could you give an idea of how? I'm not asking for code, just a method to do this. I'm using the Eclipse IDE for Java EE Web Developers.
I'd write it as 2-3 programs. One that extracts the data, one that formats the data (if necessary), one that posts the data.
My gut tells me the easiest way to do this is a pure bash script. But if you want to use Java for this you can.
I would save the output in a file for the post-er to read from. This has the benefit of letting you write/test the post-er without the two other programs working. That said, I recommend you write the get-er program first. That way you know what data you're really dealing with.
Now, if you happen to write both the formatter and the post-er in Java, I would write this as one program instead of "piping" files between them. The formatter will read in the file, turn it into a data structure/class, and the post-er will read this data structure/class.
This is only superficially different from my previous paragraph. The point is each "part" is independent from each other. This allows you to test a part without running the whole thing. That's the important thing.
As for how/where to store the information from the get-er, just redirect it to a file. Here's a tutorial on how.
Truth be told, I can't tell if you're using the Linux cURL program or a Java implementation like this one. My answer would be very different depending on that.
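To make the "independent parts" idea concrete, here's a minimal sketch of the formatter step in Java, assuming the get-er wrote one semicolon-separated record per line (the record layout and class names are made up):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch of the "formatter" part: read the get-er's output file into a data
// structure the post-er can consume. The record layout (name;id) is made up.
public class Formatter {

    record Resource(String name, String id) {}

    static List<Resource> readRecords(Path file) throws IOException {
        return Files.readAllLines(file).stream()
                .filter(line -> !line.isBlank())
                .map(line -> {
                    String[] parts = line.split(";", 2);
                    return new Resource(parts[0].trim(), parts[1].trim());
                })
                .toList();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the file the get-er would produce.
        Path tmp = Files.createTempFile("geter-output", ".txt");
        Files.writeString(tmp, "Alice;1001\nBob;1002\n");
        // Because the formatter only depends on the file format, it can be
        // tested without the get-er or post-er running at all.
        System.out.println(readRecords(tmp));
        Files.deleteIfExists(tmp);
    }
}
```

The post-er would then take the List<Resource> and handle the HTTP POST, keeping each part testable on its own.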
Below I am pasting content from the Teambox API documentation:
https://teambox.com/api/upload
uploads#create POST
/api/1/projects/:project_id/uploads
/api/1/uploads
Creates a new upload. Note that you will need to post an upload using form encoding for this to work.
Parameters you should pass
{
"page_id": 456,
"project_id": 123,
"name": "Name",
"asset": "<File Data>"
}
Questions:
What do we mean by upload using form encoding?
What does asset: <File Data> represent?
Any code example would be great too. Thanks
It seems that you have to POST data to Teambox in your call.
POST is one of the methods commonly used by web forms when passing data to a server.
This "official" link tells you what is what.
The first manner described corresponds to a GET request.
The latter form takes the shape of a POST call.
You can build a POST call in many ways, depending on the language your client app uses.
If it is a JavaScript-based web app, you can use the XMLHttpRequest object that browsers provide.
If your app runs on a Linux-based system, you could use a library that lets you build up that call, such as the cURL library.
So, can you show the code or go deeper into your description?
It would be nice to know your environment and programming language.
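To expand on that with a hedged sketch: "form encoding" here most likely means multipart/form-data, the encoding browsers use for forms containing a file input, and "asset": &lt;File Data&gt; would then be the raw bytes of the file sent as one part of the body. A rough Java illustration of building such a body by hand (only the field names come from the documentation above; the boundary and file name are arbitrary):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Sketch: build a multipart/form-data body by hand. The "asset" part carries
// the raw file bytes; the other parts are plain text fields. To actually send
// it you would POST these bytes with the header
//   Content-Type: multipart/form-data; boundary=<boundary>
// to /api/1/projects/:project_id/uploads (e.g. via HttpURLConnection).
public class MultipartBody {

    static byte[] build(String boundary, Map<String, String> fields,
                        String fileField, String fileName, byte[] fileData) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            write(out, "--" + boundary + "\r\n"
                    + "Content-Disposition: form-data; name=\"" + e.getKey() + "\"\r\n\r\n"
                    + e.getValue() + "\r\n");
        }
        write(out, "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"" + fileField
                + "\"; filename=\"" + fileName + "\"\r\n"
                + "Content-Type: application/octet-stream\r\n\r\n");
        out.write(fileData);                    // the <File Data> part
        write(out, "\r\n--" + boundary + "--\r\n");
        return out.toByteArray();
    }

    private static void write(ByteArrayOutputStream out, String s) throws IOException {
        out.write(s.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        byte[] body = build("XyZ123", Map.of("project_id", "123", "name", "Name"),
                "asset", "report.pdf", "fake file bytes".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(body, StandardCharsets.UTF_8));
    }
}
```

In practice an HTTP client library (or cURL's -F option) builds this body for you; the sketch just shows what "form encoding" with a file part looks like on the wire.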