I'm new to the development scene, so please bear with me if I don't make complete sense.
I'm trying to access an XML file located in my EJB module (it has to stay there) and parse it into a JavaScript-accessible object, preferably JSON, so that I can manipulate it dynamically with JavaScript / Angular.
I'm using JBoss, and the file's location is something like
/FOO-ejb/src/main/resources/Config.xml, which is obviously not accessible over the web since it does not reside under a web server root directory.
Java is the back end, and I can't seem to find any other way to access this file and serve it to the front end.
I'm heading towards using a service within the EJB to access the file, parse it, and expose the result to the front end through a REST service, or alternatively writing a JSP that reads and parses the file.
Are there any better solutions for this?
Thank you everyone for your time!
I think what you want to do is not achievable directly, since it would mean using JavaScript to access the file system, which is not possible. HTML5 does offer a File API, but it cannot read arbitrary files on the file system.
So I'd say the direction you're heading in is the most appropriate, and probably the easier one: even if you found a way to do it in JavaScript, it would be browser-dependent or some fragile workaround that could break in a future browser version.
I used Apache Abdera in a servlet in the past to parse an XML RSS feed and convert it to JSON. Abdera is good at that and worked perfectly for me. After getting the JSON object I just wrote it to the response, and on the client side I used an AJAX call to the servlet to fetch it.
The code was something like this:
try {
    PrintWriter result = response.getWriter();

    // Create the Abdera object and a client to process the request.
    Abdera abderaObj = new Abdera();
    AbderaClient client = new AbderaClient(abderaObj);
    AbderaClient.registerTrustManager(); // For SSL connections.

    // Send the HTTP request for the Atom feed through AbderaClient.
    ClientResponse resp = client.get("http://url/to/your/feed");

    // If the response was OK...
    if (resp.getType() == ResponseType.SUCCESS) {
        // Get the document as a Feed.
        Document<Feed> doc = resp.getDocument();

        // Create a JSON writer to convert the Atom feed.
        Writer json = abderaObj.getWriterFactory().getWriter("json");

        // Convert the (XML) Atom feed into JSON and write it to the response.
        doc.writeTo(json, result);
    }
} catch (Exception ex) {
    ex.printStackTrace(System.out);
}
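For the original question (a config file packaged inside the EJB jar rather than a feed), a simpler route could be a small JAX-RS resource that reads the file from the classpath and converts it to JSON. The following is only a sketch, not the exact setup: it assumes Jackson's jackson-dataformat-xml module is available, and the resource path and class name are invented for the example.

import java.io.InputStream;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;

@Path("/config")
public class ConfigResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String getConfig() throws Exception {
        // Config.xml sits in src/main/resources, so it ends up on the EJB jar's classpath.
        try (InputStream in = getClass().getResourceAsStream("/Config.xml")) {
            JsonNode tree = new XmlMapper().readTree(in);       // parse the XML into a generic tree
            return new ObjectMapper().writeValueAsString(tree); // re-serialize the tree as JSON
        }
    }
}

The Angular side can then simply GET the endpoint and work with the returned JSON object.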
Related
I am working on a Java PDF generation microservice using Spring Boot. The PDF generation is meant to be a two-stage process:
Templating - an HTML template with some sort of expression language, which reads a JSON structure directly
HTML to PDF - this generates a PDF from the HTML produced in step 1
Note: I have Java and JavaScript (nunjucks/Node.js) solutions for steps 1 and 2, but I really need a more maintainable approach, as follows:
My RESTful endpoint will take two parameters: an HTML template and a JSON document.
The JSON structure maps to the HTML template one to one, and both files have been predefined to a strict contract.
The service endpoint should not do any object mapping of JSON data to HTML DOM elements (e.g. tables, rows, etc.).
The endpoint only embeds the JSON data into the HTML using logic in Java code.
The HTML gets executed/rendered and reads the JSON directly using some sort of expression language, since it contains the JSON structure.
The endpoint then responds with HTML containing the dynamic data, which can be sent to the PDF generator web service endpoint.
Below is some sample code:
@POST
@Path("createPdf")
public Response createPdf(String htmlTemplateParam, String json) {
    // Read the template from the endpoint param, or start off with a local HTML template from the resources folder
    String templateHtml = IOUtils.toString(getClass().getResourceAsStream(HTMLTemplateFiles.INCOMING_TEMPLATE));
    RequestJson requestJson = [prepare json before passing it to the HTML template];
    if (requestJson.isContentValid()) {
        LOG.info("incoming data successfully validated");
        // TODO
        // Pass the requestJson (endpoint param JSON) to templateHtml
        // Trigger the reading of the JSON data and populate the HTML DOM elements using some sort of expression predefined in the HTML
        // Get hold of the rendered HTML
        String resolvedHtml = [HTML with data from the json param];
        // The next bit is done
        String pdf = htmlToPdfaHandler.generatePdfFromHtml(resolvedHtml);
        return Response.ok().entity(Base64.decodeBase64(pdf)).build();
    }
    return Response.status(Response.Status.BAD_REQUEST).build();
}
The templating stage (step 1) is where I need help.
What is the best technical solution for this?
I am comfortable with Java and JavaScript frameworks and happy to learn any framework you suggest.
But my main design goal is to ensure that, as new templates arrive and templates or data change, a non-techie can edit the HTML/JSON and generate the PDF.
Also, no Java code changes should be required for template and data changes.
There are a few things in my head like JsonPath, Thymeleaf, JavaScript, etc., but I love best practice and would like to learn from someone with real-life experience of a similar use case.
After further research and the first answer, I am also considering the FreeMarker solution below.
But how would I create the FreeMarker template data automatically from the input JSON, i.e. without creating a POJO/DTO?
Based on the first answer:
Configuration cfg = new Configuration(new Version("2.3.23"));
cfg.setDefaultEncoding("UTF-8");

// Loading your HTML template (via file or input stream):
Template template = cfg.getTemplate("template.html");

// Will this suffice for all JSON structures, including deeply nested ones?
Type mapType = new TypeToken<Map<String, Object>>(){}.getType();
Map<String, Object> templateData = new Gson().fromJson(json, mapType);

try (StringWriter out = new StringWriter()) {
    // The output stream will contain the template with values from the map:
    template.process(templateData, out);
    System.out.println(out.getBuffer().toString());
    out.flush();
}
Thanks in advance.
NOTE: Code snippets, pseudocode, and references are welcome.
One of the options might be to use FreeMarker:
Configuration cfg = new Configuration(new Version("2.3.23"));
cfg.setDefaultEncoding("UTF-8");

// Loading your HTML template (via file or input stream):
Template template = cfg.getTemplate("template.html");

// You need to convert the JSON into a map of parameters (key-value):
Map<String, Object> templateData = new HashMap<>();
templateData.put("msg", "Today is a beautiful day");

try (StringWriter out = new StringWriter()) {
    // The output stream will contain the template with values from the map:
    template.process(templateData, out);
    System.out.println(out.getBuffer().toString());
    out.flush();
}
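Regarding the follow-up about nested JSON without POJOs/DTOs: Gson turns nested JSON objects into Maps and arrays into Lists, and FreeMarker can navigate those with its normal dot/bracket syntax. A small self-contained sketch; the JSON and the inline template are invented for illustration:

import java.io.StringReader;
import java.io.StringWriter;
import java.lang.reflect.Type;
import java.util.Map;

import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;

import freemarker.template.Configuration;
import freemarker.template.Template;
import freemarker.template.Version;

public class JsonTemplateDemo {

    public static void main(String[] args) throws Exception {
        String json = "{\"customer\":{\"name\":\"Jane\",\"orders\":[{\"id\":1},{\"id\":2}]}}";

        // Nested JSON objects become nested Maps, arrays become Lists.
        Type mapType = new TypeToken<Map<String, Object>>() {}.getType();
        Map<String, Object> model = new Gson().fromJson(json, mapType);

        Configuration cfg = new Configuration(new Version("2.3.23"));
        // Inline template for the demo; in the service this would be the uploaded template.html.
        Template template = new Template("demo",
                new StringReader("Hello ${customer.name}, you have ${customer.orders?size} orders."),
                cfg);

        StringWriter out = new StringWriter();
        template.process(model, out);
        System.out.println(out); // Hello Jane, you have 2 orders.
    }
}

Because the template alone decides which parts of the map it reads, new templates or new data fields need no Java code changes, which matches the design goal above.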
I have a JSON that looks more or less like this:
{"id":"id","date":"date","csvdata":"csvdata".....}
where the csvdata property is a large amount of data, also in JSON format.
I was trying to POST this JSON using AJAX in Play! Framework 1.4.x, so I sent it just like that, but when I receive the data on the server side, csvdata looks like [object Object] and is stored that way in my DB.
My first thought was to send the csvdata JSON as a string and store it as a longtext, but when I try that, my request fails with the following error:
413 (Request Entity Too Large)
And Play's console shows me this message:
Number of request parameters 3623 is higher than maximum of 1000, aborting. Can be configured using 'http.maxParams'
I also tried adding http.maxParams=5000 to application.conf, but the only result is that Play's console says nothing and the field is stored as null in my database.
Can anyone help me, or maybe suggest another solution to my problem?
Thank you so much in advance.
Is it possible that you sent csvdata as an array, not a string? Each element in the array would be a separate parameter. I have sent 100 KB strings using AJAX without running into the http.maxParams limit. You can check the contents of the request body using your browser's developer tools.
If your csvdata originates as a file on the client's machine, then the easiest way to send it is as a File. Your controller action would look like:
public static void upload(String id, Date date, File csv) {
...
}
When Play! binds a parameter to the File type, it writes the contents of the parameter to a temporary file which you can read in. (This avoids running out of memory if a large file is uploaded.) The File parameter type was designed for a normal form submit, but I have used it with AJAX when the browser supported the relevant HTML5 features (File API and FormData).
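For completeness, a rough sketch of reading the uploaded file back inside the action, assuming Play 1.x's play.libs.IO helper; the controller and field names are invented:

import java.io.File;
import java.util.Date;

import play.libs.IO;
import play.mvc.Controller;

public class Uploads extends Controller {

    public static void upload(String id, Date date, File csv) {
        // Play has already written the "csv" request part to a temporary file;
        // read it back as text so it can be stored in a longtext column.
        String csvData = IO.readContentAsString(csv);
        // ... persist id, date and csvData here ...
        renderJSON("{\"status\":\"ok\"}");
    }
}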
I am trying to write an ASP.NET Web API that sends XML files from the database, and to receive/read them on Android.
The XML that I send is something like this:
<ArrayOfMerchant xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/MvcApplication1.Models">
<Merchant>
<Address>ABC</Address>
<City>HHHH</City>
<Country>EEEE</Country>
<Id>1</Id>
<Latitude/>
<Longitude/>
<Name>Some store</Name>
</Merchant>
</ArrayOfMerchant>
Opened in a browser, it looks fine.
On the Android side I am trying to receive and read it with HttpURLConnection.
Everything works, but when I try to convert the input stream into a string, the string is something like:
String = [{"Id":1,"Name":"Some store","Address":"ABC","City":"EEEE","Country":"Canada","Longitude":"","Latitude":""}]
Questions:
1) Why does it display differently, with different markup and also a different ordering of the elements?
2) How can I receive/retrieve it as normal XML so I can parse it?
1) I don't know; it depends on your ASP.NET code and configuration. You can try changing the parameters of your HTTP request to see how your ASP.NET app responds when you change the Accept header or the User-Agent. There are tools for that.
2) Actually, you don't need XML to parse the data.
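If you still prefer XML, and the server is doing standard Web API content negotiation, the Android client can ask for XML explicitly via the Accept header. A rough sketch with HttpURLConnection; the URL is a placeholder:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MerchantClient {

    public static String fetchXml(String serviceUrl) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(serviceUrl).openConnection();
        // Ask the server's content negotiation for XML instead of its JSON default.
        conn.setRequestProperty("Accept", "application/xml");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line).append('\n');
            }
            return body.toString();
        } finally {
            conn.disconnect();
        }
    }
}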
Execution of the following code:
Jsoup.connect(baseURL + dataJSSrc).execute();
throws an exception:
org.jsoup.UnsupportedMimeTypeException: Unhandled content type. Must be text/*, application/xml, or application/xhtml+xml. Mimetype=application/x-javascript, URL=http://www.abc.com/playdata/206/8910.js?44613.77
but when I use
URLConnection conn = new URL(baseURL + dataJSSrc).openConnection();
it works fine, as the following code shows:
System.out.println(conn.getContentType()); // outputs 'application/x-javascript'
Can Jsoup only be used to download HTML or XML?
Whilst I don't disagree with BalusC's answer, you can use Jsoup to download anything you like. By default, Jsoup throws an exception if it retrieves content with a MIME type that it will not be able to parse as HTML, to avoid parsing e.g. images. However, you can disable that check with connection.ignoreContentType(true) if you just want the bytes or a string:
String script = Jsoup.connect(jsUrl).ignoreContentType(true).execute().body();
or
byte[] bytes = Jsoup.connect(imageUrl).ignoreContentType(true).execute().bodyAsBytes();
You will get more control with a full-fledged HTTP client, but this method can be useful in a pinch.
Jsoup is designed as an HTML/XML parser, not as a pure HTTP client. If you need to download non-HTML/XML files, then use a normal HTTP client rather than an HTML/XML parser.
It's a matter of using the right tool for the job.
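To make the "normal HTTP client" option concrete, a minimal sketch that downloads the script with a plain URLConnection, no Jsoup involved; the class and method names are invented:

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class ScriptDownloader {

    public static byte[] download(String url) throws Exception {
        URLConnection conn = new URL(url).openConnection();
        try (InputStream in = conn.getInputStream();
             ByteArrayOutputStream buffer = new ByteArrayOutputStream()) {
            byte[] chunk = new byte[8192];
            int read;
            while ((read = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, read);
            }
            return buffer.toByteArray(); // raw bytes of the .js file
        }
    }
}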
I am using the Selenium 2 Java API to interact with web pages. My question is: how can I detect the content type of link destinations?
Basically, this is the background: before clicking a link, I want to be sure that the response is an HTML file. If not, I need to handle it in another way. So, let's say there is a download link for a PDF file. The application should read the contents of that URL directly instead of opening it in the browser.
The goal is to have an application which automatically knows whether the current location is HTML, PDF, XML or whatever, so it can use the appropriate parser to extract useful information from the document.
Update
Added bounty: I will award it to the best solution that allows me to get the content type of a given URL.
As Jochen suggests, the way to get the Content-Type without also downloading the content is an HTTP HEAD request, and the Selenium WebDrivers do not seem to offer that functionality. You'll have to find another library to help you fetch the content type of a URL.
A Java library that can do this is Apache HttpComponents, especially HttpClient.
(The following code is untested)
HttpClient httpclient = new DefaultHttpClient();
HttpHead httphead = new HttpHead("http://foo/bar");
HttpResponse response = httpclient.execute(httphead);
Header contentTypeHeader = response.getFirstHeader("Content-Type"); // getFirstHeader returns a Header
System.out.println(contentTypeHeader);
The project publishes JavaDoc for HttpClient, the documentation for the HttpClient interface contains a nice example.
You can figure out the content type while processing the data coming in.
I'm not sure why you need to figure this out first.
If you do, use the HEAD method and look at the Content-Type header.
You can retrieve all the URLs from the DOM, and then parse the last few characters of each URL (using a Java regex) to determine the link type.
You can parse the characters following the last dot. For example, in the URL http://yoursite.com/whatever/test.pdf, extract pdf and apply your logic accordingly, as sketched below.
Am I oversimplifying your problem?
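A small sketch of the extension approach from the last answer; the regex also ignores a trailing query string or fragment, and the class and method names are invented:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkTypeSniffer {

    // Captures the extension after the last dot, ignoring any query string or fragment.
    private static final Pattern EXTENSION = Pattern.compile("\\.([A-Za-z0-9]+)(?:[?#].*)?$");

    public static String extensionOf(String url) {
        Matcher m = EXTENSION.matcher(url);
        return m.find() ? m.group(1).toLowerCase() : "";
    }

    public static void main(String[] args) {
        System.out.println(extensionOf("http://yoursite.com/whatever/test.pdf")); // pdf
        System.out.println(extensionOf("http://yoursite.com/page.html?x=1"));     // html
    }
}

Note that this only inspects the URL; it will not catch links without an extension or URLs served with a misleading one, so the HEAD request above remains the more reliable check.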