Which templating engine is this? (Java)

At my current place of work we have to use a web service that works with a text template we feed to it. We would now like to use those templates in other places in our code, so I wondered what template language this could possibly be and whether it's some off-the-shelf or off-the-net software. It's presumably something from the Java or .NET world, but it could essentially be anything.
It's got the following tokens:
$$variable$$
##function##
##function_call[$$parameter$$]##
Condition: ##IF[$$boolean$$]##THEN##Text##[ELSE##Text##]ENDIF##
Does someone recognize this?

It seems to be a home-grown templating system. At least it is not:
Velocity (Java)
StringTemplate (Java)
FreeMarker (Java)

##Looks like a really grim one##, $$made$$ by someone with ###no sense### of making things look attractive $when$ written: ######$$$$[][]
I would be surprised if it was the syntax of an actual product or open-source template engine.
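If the goal is simply to reuse these templates elsewhere in your own code, re-implementing the token substitution is not hard. Here is a minimal sketch, assuming plain Java, of a regex-based renderer for the $$variable$$ token alone (the ##function## and ##IF## forms would need analogous handling); the class and method names are illustrative, not from any known engine:

import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TemplateSketch {
    // Matches $$name$$ tokens and captures the variable name.
    private static final Pattern VAR = Pattern.compile("\\$\\$(\\w+)\\$\\$");

    // Replaces each $$name$$ token with its value from the map;
    // unknown variables are left in place untouched.
    static String render(String template, Map<String, String> vars) {
        Matcher m = VAR.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String value = vars.getOrDefault(m.group(1), m.group());
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // Prints "Hello world!"
        System.out.println(render("Hello $$name$$!", Map.of("name", "world")));
    }
}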

Related

How to create a simple Italian model for Named Entity Extraction of persons using OpenNLP?

I have to do a project with OpenNLP, strictly in the Italian language. Since it's almost impossible to find existing resources in this language, my idea is to create a simple model myself. Reading some posts on this platform, my plan is to try to do this using the model-builder addon.
First of all, is it possible to achieve my goal with this addon?
If so, referring to this other post, what kind of file is meant by "modelOutFile"? In my case I don't have an existing model.
N.B.: the addon uses some deprecated functions (such as nameFinderME.train()).
Naively, I tried to pass a simple empty file "model.bin" as the "modelOutFile", but of course I ran into an error:
Cannot invoke "java.util.Properties.getProperty(String)" because "manifest" is null
Furthermore, I used only a few names and sentences for the test (I only wanted to see whether it worked), not the large amount recommended (at least 15,000 sentences).
I'm open to other suggestions instead of using the model-builder addon.
I hope someone can help me.
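Since you say you're open to alternatives to the model-builder addon: you can train a person-name model directly with the regular OpenNLP training API, without the addon and without any pre-existing model. Below is a rough sketch assuming OpenNLP 1.8+; it-ner.train is a hypothetical training file in OpenNLP's annotation format (one tokenized sentence per line, names wrapped in <START:person> ... <END> tags):

import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.NameSample;
import opennlp.tools.namefind.NameSampleDataStream;
import opennlp.tools.namefind.TokenNameFinderFactory;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.util.MarkableFileInputStreamFactory;
import opennlp.tools.util.ObjectStream;
import opennlp.tools.util.PlainTextByLineStream;
import opennlp.tools.util.TrainingParameters;

public class TrainItalianPersonModel {
    public static void main(String[] args) throws IOException {
        // it-ner.train (hypothetical file), one sentence per line, e.g.
        // Ieri ho visto <START:person> Mario Rossi <END> a Roma .
        MarkableFileInputStreamFactory in =
                new MarkableFileInputStreamFactory(new File("it-ner.train"));
        try (ObjectStream<NameSample> samples = new NameSampleDataStream(
                new PlainTextByLineStream(in, StandardCharsets.UTF_8))) {
            // Train a fresh model; no existing "modelOutFile" is needed.
            TokenNameFinderModel model = NameFinderME.train(
                    "it", "person", samples,
                    TrainingParameters.defaultParams(),
                    new TokenNameFinderFactory());
            try (OutputStream out = Files.newOutputStream(Paths.get("it-person.bin"))) {
                model.serialize(out);
            }
        }
    }
}

The resulting it-person.bin can then be loaded into a NameFinderME for tagging. As you noted, with only a handful of sentences the model will perform poorly; the ~15,000-sentence recommendation exists for a reason.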

Translating logic into Kotlin (or Java) code

I have a use case where I want to enable users to write simple logic and, behind the scenes, convert this logic into a condition in the code.
For example, the user might write:
someFieldName > 10 AND otherFieldName is NULL
And I'd like that to generate the following code:
if (data["someFieldName"] > 10 && data["otherFieldName"] == null) {
// Do something
}
After doing some research, I saw that one option is using eval (by leveraging a JS engine), although it doesn't fit all use cases.
I also saw that it's possible to use tools like ANTLR, which seems a bit like overkill.
Are there any simple off-the-shelf products we can use for such purposes? Or would creating a simple parser ourselves be the best way to handle it?
Your use case can be adequately addressed by MVEL2.
There is no need to write a parser and AST with ANTLR or to convert the logic to Java code; just evaluate the expression with the appropriate parameters.
In fact, any Java expression-language library would do. You could also look at JUEL.
However, looking at the expression, I would say it aligns more closely with MVEL2.
Give both libraries a try.
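For illustration, here is a minimal sketch using MVEL2 (the org.mvel:mvel2 artifact). It assumes you first normalize the user-facing keywords (AND, is NULL) into MVEL syntax; the naive string replacement below is just a placeholder for whatever translation step you end up choosing:

import java.util.HashMap;
import java.util.Map;
import org.mvel2.MVEL;

public class RuleEval {
    public static void main(String[] args) {
        // Hypothetical user rule in the proposed mini-language.
        String userRule = "someFieldName > 10 AND otherFieldName is NULL";

        // Naive keyword translation into MVEL syntax (illustrative only).
        String expression = userRule
                .replace(" AND ", " && ")
                .replace(" OR ", " || ")
                .replace(" is NULL", " == null");

        Map<String, Object> data = new HashMap<>();
        data.put("someFieldName", 42);
        data.put("otherFieldName", null);

        // MVEL evaluates the expression directly against the variable map.
        Boolean matches = (Boolean) MVEL.eval(expression, data);
        if (matches) {
            System.out.println("Condition met, do something");
        }
    }
}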

'Search engine' on local database

I have a webapp with multiple objects containing multiple Strings I'd like to search through, and I would like to rank matches by how well they match. Example: searching for 'stackoverflow is great' against:
"Stackoverflow is a great website".
"This website has a great community".
"Stackoverflow".
"This website is good". // Here you could even consider 'is' as not being a match
Since I feel this would be reinventing the wheel, I'm looking for a library that's configurable but not monstrous. Since I don't know where this would sit in the application (using JPA or just a plain class), I think it's worth mentioning that I'm using JSF and JPA.
Do you know of any library for this, configurable to which fields to search et cetera?
I've personally never used a library for this before, but I recommend Lucene.
Friends of mine have used it and had no problems with it.
It ranks results by relevance out of the box, runs anywhere Java does, and is open source.
I hope this is what you need.
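To give an idea of the effort involved, here is a minimal sketch using an in-memory Lucene index (written against the Lucene 8/9 API; some names vary between versions). It indexes your four example strings and prints them ranked by relevance score for the query 'stackoverflow is great':

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class SearchSketch {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        Directory dir = new ByteBuffersDirectory();

        // Index the example strings.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            for (String text : new String[] {
                    "Stackoverflow is a great website",
                    "This website has a great community",
                    "Stackoverflow",
                    "This website is good" }) {
                Document doc = new Document();
                doc.add(new TextField("body", text, Field.Store.YES));
                writer.addDocument(doc);
            }
        }

        // Search and print hits, best match first.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query query = new QueryParser("body", analyzer).parse("stackoverflow is great");
            for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
                System.out.println(hit.score + "  " + searcher.doc(hit.doc).get("body"));
            }
        }
    }
}

Common words like 'is' contribute little to the score (and disappear entirely if you configure a stop-word list), which matches the behaviour you described.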

Java - Extracting plaintext from web-page source code (getting massive quantities of lyrics from website)

O community, I'm in the process of writing the pseudocode for an application that extracts song lyrics from a remote host (a web server, not my own) by reading the page's source code.
This is assuming that:
Lyrics are being displayed in plaintext
Portion of source code containing lyrics is readable by Java front-end application
I'm not looking for source code to answer the question, but what is the technical term used for querying a remote webpage for plaintext content?
If I can determine the site's page-naming scheme, I could point the URL object at the appropriate page, right? The only limitations would be irregular capitalization, and it would only work if the plaintext were found in exactly the same place on every page.
Do you have any suggestions?
I was thinking of something like this for "Buck 65" singing "I Look Good":
URL url = new URL("http://www.elyrics.net/read/b/buck-65-lyrics/i-look-good-lyrics.html");
Could I substitute "buck-65-lyrics" and "i-look-good-lyrics" to reflect user input?
Input redirected to a PostgreSQL table.
Current objective:
The user requests the name of a {song, artist, album}; the Java front end queries the remote webpage
The full source code (containing the plaintext) is extracted by the Java front end
The lyrics are extracted from the source code (somehow)
If the song is not currently indexed by the PostgreSQL server, it is added to the table
Operations are performed on the plaintext to suit the objectives of the program
I'm only looking for direction; if I'm headed completely in the wrong direction, please let me know. This is only for the pseudocode. I'm not looking for answers or hand-outs; I need help determining what I need to do. Are there external libraries for extracting plaintext that you know of? What technical names are there for what I'm trying to accomplish?
Thanks, Tyler
This approach is referred to as screen scraping or data scraping. Note that employing it often violates the target service's terms of service, and it is usually not robust, which is why API-like services with guarantees about how they operate are preferable.
Your approach sounds like it will work for the most part, but a few things to keep in mind.
If the web service you're interacting with requires a very precise URL scheme, you should not feed your user-provided data directly into it, since it is likely to be muddied by missing words, abbreviations, or misspellings. You might be better off doing some sort of search, first, and using that search's best result.
Reading HTML data is more complicated than you think. Use an existing library like jsoup to assist you.
The technical term for extracting content from a site is web scraping; you can Google that. There are plenty of libraries for it; for Java there is jsoup, though it's also easy to write your own regex.
The first thing I would do is use curl to fetch the content from the site, just for testing; this will give you a fair idea of what to do.
You will have to use an HTML parser. One of the most popular is jsoup.
Take care about the legal aspects of what you do ;)
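As the answers above say, jsoup handles both fetching the page and parsing the HTML. A rough sketch of the idea, using the URL scheme from your example; note that "div.lyrics" is a hypothetical selector, and you would inspect the actual page source to find the element that wraps the lyrics:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class LyricsScraper {
    public static void main(String[] args) throws Exception {
        // Hypothetical slugs built from user input, normalized to lower case
        // with hyphens, matching the site's apparent naming scheme.
        String artist = "buck-65";
        String song = "i-look-good";
        String url = "http://www.elyrics.net/read/b/" + artist
                + "-lyrics/" + song + "-lyrics.html";

        // Fetch and parse the page.
        Document page = Jsoup.connect(url).userAgent("Mozilla/5.0").get();

        // "div.lyrics" is a placeholder CSS selector; inspect the real page
        // to find the container that actually holds the lyrics text.
        String lyrics = page.select("div.lyrics").text();
        System.out.println(lyrics);
    }
}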

How to write LALR parser for some grammar in Java?

I want to write Java code to build a LALR parser for my grammar. Can someone please suggest some books or some links where I can learn how to write Java code for a LALR parser?
Writing a LALR parser by hand is difficult, but it can be done. If you want to learn the theory behind constructing LALR parsers by hand, consider looking into "Parsing Techniques: A Practical Guide" by Grune and Jacobs. It's an excellent book on general parsing techniques, and the chapter on LR parsing is particularly good.
If you're more interested in just getting a LALR parser that is written in Java, consider looking into Java CUP, which is a general purpose parser generator for Java.
Hope this helps!
You can split the LALR functionality in two parts: preparation of the tables and parsing the input.
The first part is complex and error-prone, so even if you like knowing how it works, I suggest using a proven, working table generator for the LALR states (and for the tokenizer DFA as well).
The second part consists of consuming those tables, using some fairly simple algorithms to tokenize the input and process it into a parse tree / concrete syntax tree. This is easier to implement yourself if you'd like to, and you still keep full control over how it works and what it does.
When doing parsing tasks, I personally use the free GOLD Parsing System, which has a nice UI for creating and debugging the grammar, and which also generates table files that can then be loaded and processed by an existing engine or by your own implementation (the format of these CGT files is well documented).
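To make the second part concrete, here is a skeleton of the classic table-driven LR parse loop. The table encoding below is purely illustrative; a real generator such as GOLD or CUP defines its own format, which you would load instead of hand-writing:

import java.util.ArrayDeque;
import java.util.Deque;

public class LrDriver {
    // Illustrative encoding:
    //   action[state][terminal] > 0   => shift and go to that state
    //   action[state][terminal] < 0   => reduce by rule (-value - 1)
    //   action[state][terminal] == 0  => syntax error
    // Reducing by rule 0 (the augmented start rule) means accept.
    private final int[][] action;     // indexed by [state][terminal id]
    private final int[][] gotoTable;  // indexed by [state][nonterminal id]
    private final int[] ruleLength;   // number of symbols on each rule's RHS
    private final int[] ruleLhs;      // nonterminal id each rule produces

    LrDriver(int[][] action, int[][] gotoTable, int[] ruleLength, int[] ruleLhs) {
        this.action = action;
        this.gotoTable = gotoTable;
        this.ruleLength = ruleLength;
        this.ruleLhs = ruleLhs;
    }

    // tokens is the tokenizer's output and must end with the
    // end-of-input terminal id.
    boolean parse(int[] tokens) {
        Deque<Integer> states = new ArrayDeque<>();
        states.push(0);  // start state
        int pos = 0;
        while (true) {
            int act = action[states.peek()][tokens[pos]];
            if (act > 0) {                  // shift: push state, consume token
                states.push(act);
                pos++;
            } else if (act < 0) {           // reduce: pop RHS, follow GOTO
                int rule = -act - 1;
                if (rule == 0) return true; // augmented start rule => accept
                for (int i = 0; i < ruleLength[rule]; i++) states.pop();
                states.push(gotoTable[states.peek()][ruleLhs[rule]]);
            } else {
                return false;               // error entry in the table
            }
        }
    }
}

A real implementation would also build the parse tree during reductions and report errors with position information, but the shift/reduce loop itself really is this small.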
As previously stated, you would normally use a parser generator to produce a LALR parser. A few such tools for Java are:
SableCC (my personal favourite)
CUP
Beaver
SJPT
Gold
I just want to mention that my project CookCC (http://coconut2015.github.io/cookcc/) is a LALR(1) parser generator + lexer (much like flex).
The unique feature of CookCC is that you can write your lexer and parser in Java using Java annotations. See the calculator example here: https://github.com/coconut2015/cookcc/blob/master/tests/javaap/calc/Calculator.java
