Drools fusion: generate rules automatically - java

I'm working with Drools Fusion and I want to test the performance of this CEP system based on the number of rules implemented. I currently have a simple rule file with the .drl extension. I would like to dynamically generate about 1000 rules. How can this be done automatically, without having to create them one by one in the .drl file?

Have you ever heard about template engines? After all, DRL files are just plain text files. Here are some of them you can use:
StringTemplate: http://www.stringtemplate.org/
Velocity: http://velocity.apache.org/
FreeMarker: http://freemarker.org/
Even Drools comes with some support for templates: http://docs.jboss.org/drools/release/6.3.0.Final/drools-docs/html_single/#d0e5930
If you don't like fancy stuff, you can always fall back on the good old StringBuffer class.
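For instance, a minimal sketch of the plain StringBuilder approach (TemperatureEvent, its value field and the time window are just placeholders; swap in whatever your events and constraints actually look like):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Writes a .drl file containing N nearly identical rules.
public class DrlGenerator {

    public static void main(String[] args) throws IOException {
        int ruleCount = 1000;
        StringBuilder drl = new StringBuilder();
        drl.append("package com.example.generated\n\n");
        drl.append("import com.example.model.TemperatureEvent\n\n");          // placeholder event class
        drl.append("declare TemperatureEvent\n    @role( event )\nend\n\n");  // needed for Fusion/CEP windows

        for (int i = 0; i < ruleCount; i++) {
            drl.append("rule \"generated rule ").append(i).append("\"\n")
               .append("when\n")
               .append("    $e : TemperatureEvent( value > ").append(i)
               .append(" ) over window:time( 10s )\n")
               .append("then\n")
               .append("    // react to the matched event\n")
               .append("end\n\n");
        }

        Files.write(Paths.get("generated.drl"),
                    drl.toString().getBytes(StandardCharsets.UTF_8));
    }
}

Then load generated.drl into your knowledge base exactly as you load your hand-written file.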
Hope it helps.

Related

How to enter input data from an Excel file into a Selenium project?

I would like your support by providing information (scripts, videos, or books) on how to enter input data (for example, a username and password) into a Selenium project from an Excel file, using Cucumber and Serenity BDD.
Is it possible?
Thanks for all.
On principle, Cucumber doesn't support data from external files. Instead it encourages providing examples with the scenario. However, there are a few non-standard ways available with Cucumber to use examples from an external file. For one of them, you can refer to grasshopper's post.
Another alternative is using Gherkin with QAF, which provides lots of inbuilt data-provider features including XML/CSV/JSON/Excel/DB. Here is the step-by-step tutorial to start with.
From the FAQ:
"We advise you not to use Excel or csv files to define your test cases; using Excel or csv files is considered an anti-pattern.
One of the goals of Cucumber is to have executable specifications. This means your feature files should contain just the right level of information to document the expected behaviour of the system. If your test cases are kept in separate files, how would you be able to read the documentation?
This also means you shouldn’t have too many details in your feature file. If you do, you might consider moving them to your step definitions or helper methods. For instance, if you have a form where you need to populate lots of different fields, you might use the Builder pattern to do so."
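As a hedged illustration of that last point about the Builder pattern (the User model, the defaults and the step wording below are invented for the example):

import io.cucumber.java.en.Given;

public class RegistrationSteps {

    @Given("a registered user {string} with password {string}")
    public void aRegisteredUser(String username, String password) {
        // Only the fields the scenario cares about are set explicitly;
        // the builder supplies sensible defaults for everything else.
        User user = new UserBuilder()
                .withUsername(username)
                .withPassword(password)
                .build();
        // hand the fully populated user to your page object or API client here
    }
}

// Minimal model and builder, just to show the shape of the pattern.
class User {
    final String username, password, country;
    User(String username, String password, String country) {
        this.username = username;
        this.password = password;
        this.country = country;
    }
}

class UserBuilder {
    private String username = "default-user";
    private String password = "default-password";
    private String country  = "UK";   // a field the feature file never needs to mention

    UserBuilder withUsername(String username) { this.username = username; return this; }
    UserBuilder withPassword(String password) { this.password = password; return this; }
    UserBuilder withCountry(String country)   { this.country = country;   return this; }

    User build() { return new User(username, password, country); }
}

This keeps the feature file readable while the incidental details live in code.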

OpenAPI - generate server code for a changing api?

I am maintaining a Java application where we're constantly adding new features (changes in the API). I want to move towards using OpenAPI as a way to document the API. I see two schools of thought:
Write the code, use some annotations to generate the OpenAPI spec.
Write the OpenAPI, use it to generate some server code.
While both seem fine and dandy, the server code is simply stubbed out and would then require a lot of manual plugging-in of services. While that seems fine as a one-time cost, the next time I update the interface it seems to me the only two options are:
Generate them all again, re-do all the manual wiring.
Hand edit the previously generated classes to match the new spec file (potentially introducing errors).
Am I correct about those options? If so, it seems that using the code to generate the API spec file is really the only sane choice.
I would recommend an API-first approach where you describe your API in the YAML file and regenerate with each new addition.
Now, how do you deal with the generator overwriting manual work?
You could use inheritance to create models and controllers based on the code that is generated.
You can also use the .ignore file provided with the generator if you want to be sure certain files are not overwritten.
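As a rough sketch of the inheritance/implementation idea (it assumes you generate Spring server stubs as interfaces only, e.g. with openapi-generator's spring generator and interfaceOnly=true; UsersApi, UserDto and UserService are made-up names standing in for your own artifacts):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RestController;

// UsersApi and UserDto are regenerated whenever the spec changes;
// this controller is hand-written and never touched by the generator.
@RestController
public class UsersApiController implements UsersApi {

    private final UserService userService; // your own service layer

    public UsersApiController(UserService userService) {
        this.userService = userService;
    }

    @Override
    public ResponseEntity<UserDto> getUser(Long id) {
        return ResponseEntity.ok(userService.findById(id));
    }
}

When the spec changes you regenerate only the interfaces and models; the compiler then points you at every controller method that no longer matches, which is usually far less work than redoing the wiring by hand.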

how to format i18n files?

I have three files for internationalization: messages_es.properties, messages_en.properties and messages_pt.properties. Those files follow the pattern:
message1=value
message2=value2
and the values change according to the file. For example:
messages_en.properties:
hello=welcome
messages_pt.properties:
hello=bem vindo
The problem is that, over the course of the project, those files became inconsistent: lines that exist in one file don't exist in the others, and the lines are not ordered the same way in each file. I want to know if there is an easy way to rearrange and format these i18n files so that lines existing in one file but missing from another are copied over, and the lines end up in the same order in every file.
Interesting question. You are dealing with text files, so there are a lot of possible options to manage this situation, but it depends on your setup (source control, IDE, etc.).
If you are using Eclipse, check: http://marketplace.eclipse.org/content/eclipse-resourcebundle-editor
And for IntelliJ: https://www.jetbrains.com/idea/features/i18n_support.html
Yes, the messages should usually appear in each file, unless there's a default message for some key that doesn't need translating (perhaps technical terms). Different IDEs have different support for managing message files.
As far as ordering the messages goes, there's no technical need to do so, but it can help the human maintainers. Any text editor's sort routine will work just fine.
The NetBeans IDE has a properties editor that works across languages, displaying them side by side in a matrix. Similarly, there are stand-alone editors that let you do this. One would assume that such an editor keeps the source text synchronized and in one consistent layout.
First, go looking for a translator's editor that can maintain a fixed layout. A format like gettext (.po/.pot), which is similar to .properties, might be a better choice, depending on the tool.
For more than three languages it would make sense to use a source format aimed more at translators, like the XML format XLIFF (though .properties files are well known), and to generate the several .properties files, or even ListResourceBundles, from that source (via XSLT, perhaps).
The effort for i18n should not stop at providing a list of phrases to translate; it should also include some info where needed (a disambiguating note), and maybe even a glossary for consistent use of the same terms. The text presented to the user is a very significant part of the product's quality and appeal. Using different synonyms may make the user interface fuzzy, needlessly unclear, tangled.
The problem you are facing is an invalid localization process. It has nothing to do with properties files, and it is likely that you shouldn't even compare these files now (that is, until you fix the process).
To compare properties files, you can use a very simple trick: sort each one of them and use a standard diff tool to show the differences. Sure, you'll miss the comments and logical arrangement of the English file, but at least you can see what's going on. That could be done, but it is a lot of manual work.
Instead of manually fixing the files, you should fix the broken process. A successful localization process is basically similar to this one:
Once the English file is modified, send the English file for translation. By that I mean all the translations should be based on the English file and the localization files should be recreated (stay tuned).
Use a Translation Memory to fill in the translations you already have. This could be done by your translation service provider, or by yourself if you really know how to do it (guess what? It is difficult).
Have the translators translate strings that are missing.
Put the localized files back.
Before releasing the software to the public, have somebody walk the Linguistic Reviewer through the UI and correct mistranslations.
I intentionally skipped a few steps (like localization testing, using pseudo-translations, searching for i18n defects, etc.), but if you use this kind of process, your properties files should always be in sync.
And now your question could be reduced to the one that was already asked (and answered):
Managing the localization of Java properties files.
Look at java.util.PropertyResourceBundle. It is a convenience class for reading a properties file, and you can obtain a Set<String> of the keys. This should help you compare the contents of several resource files.
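A minimal sketch of that idea, assuming the three files from the question sit in the working directory:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.HashSet;
import java.util.PropertyResourceBundle;
import java.util.Set;

// Reports keys present in the English bundle but missing from the others.
public class BundleDiff {

    public static void main(String[] args) throws IOException {
        Set<String> en = keys("messages_en.properties");
        for (String file : new String[] {"messages_es.properties", "messages_pt.properties"}) {
            Set<String> missing = new HashSet<String>(en);
            missing.removeAll(keys(file));
            System.out.println("Missing from " + file + ": " + missing);
        }
    }

    private static Set<String> keys(String file) throws IOException {
        FileInputStream in = new FileInputStream(file);
        try {
            return new PropertyResourceBundle(in).keySet();
        } finally {
            in.close();
        }
    }
}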
But I think a better approach is to maintain the n languages in a single file, e.g. using XML, and to generate the resource files from that single source:
<entry>
    <key>somekey</key>
    <value lang="en">good bye</value>
    <value lang="es">hasta luego</value>
</entry>

What is the most efficient way of repeatedly writing to an XML file in Android?

I am writing an application which needs to add nodes to an existing XML file repeatedly, throughout the day. Here is a sample of the list of nodes that I am trying to append:
<gx:Track>
<when>2012-01-21T14:37:18Z</when>
<gx:coord>-0.12345 52.12345 274.700</gx:coord>
<when>2012-01-21T14:38:18Z</when>
<gx:coord>-0.12346 52.12346 274.700</gx:coord>
<when>2012-01-21T14:39:18Z</when>
<gx:coord>-0.12347 52.12347 274.700</gx:coord>
....
This can happen up to several times a second over a long period, and I would like to know the best or most efficient way of doing this.
Here is what I am doing right now: Use a DocumentBuilderFactory to parse the XML file, look for the container node, append the child nodes and then use the TransformerFactory to write it back to the SD card. However, I have noticed that as the file grows larger, this is taking more and more time.
I have tried to think of a better way, and this is the only thing I can come up with: use a RandomAccessFile and .seek() to a specific position in the file. I would calculate that position from the file length, subtracting what I 'know' is the length of the part of the file that comes after the point where I'm appending.
I'm pretty sure this method will work, but it feels a bit blind compared to the ease of using a DocumentBuilderFactory.
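Roughly what I have in mind, as a sketch (it assumes the file is plain UTF-8 and always ends with exactly the closing tags in CLOSING; the surrounding KML structure below is made up):

import java.io.IOException;
import java.io.RandomAccessFile;

public class TrackAppender {

    // Whatever the file really ends with; invented here for the sketch.
    private static final String CLOSING =
            "</gx:Track>\n</Placemark>\n</Document>\n</kml>\n";

    public static void append(String path, String when, String coord) throws IOException {
        String entry = "  <when>" + when + "</when>\n"
                     + "  <gx:coord>" + coord + "</gx:coord>\n";
        RandomAccessFile file = new RandomAccessFile(path, "rw");
        try {
            byte[] closing = CLOSING.getBytes("UTF-8");
            file.seek(file.length() - closing.length); // back up over the closing tags
            file.write(entry.getBytes("UTF-8"));       // write the new nodes
            file.write(closing);                       // restore the closing tags
        } finally {
            file.close();
        }
    }
}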
Is there a better way of doing this?
You should try using JAXB. It's the Java XML Binding library that ships with most Java 1.6 JDKs. JAXB lets you specify an XML Schema Definition file (and has some experimental support for DTDs). The library will then generate Java classes for you to use in your code, and these translate back into an XML document.
It's very quick and useful, with optional support for validation. This would be a good starting point. This would be another good one to look at. Eclipse also has some great tools for generating the Java classes for you, and it provides a nice GUI tool for XSD creation. The Eclipse plugins are called Davi, I believe.
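As a rough, simplified illustration of what working with JAXB looks like (the Track class here is a hand-written stand-in for what xjc would generate, and it ignores the gx: namespace and the rest of the KML document):

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

public class TrackWriter {

    @XmlRootElement(name = "Track")
    public static class Track {
        @XmlElement(name = "when")
        public List<String> when = new ArrayList<String>();

        @XmlElement(name = "coord")
        public List<String> coord = new ArrayList<String>();
    }

    public static void main(String[] args) throws Exception {
        Track track = new Track();
        track.when.add("2012-01-21T14:37:18Z");
        track.coord.add("-0.12345 52.12345 274.700");

        Marshaller marshaller = JAXBContext.newInstance(Track.class).createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        marshaller.marshal(track, new File("track.xml"));
    }
}

Note that on Android you would need to bundle a JAXB implementation yourself, since javax.xml.bind is not part of the platform.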

FreeMarker how to find corresponding java classes

I am investigating a large project that uses FreeMarker. I am a newbie to FreeMarker. How can I find which Java classes are used to supply values to the templates? Investigating the whole project seems like enormous work.
Thanks.
Maybe I need some plugins for Eclipse?
FreeMarker is a typical "dynamic language", which means refactoring/changing is hard. The templates don't declare what they expect to be in the data-model. Furthermore, when a template tries to read a value from the data-model, as with ${foo.bar}, it could mean foo.get("bar") or foo.getBar() or whatever the ObjectWrapper used makes possible, and that's only decided when the template is executed. Certainly you will need to fall back on good old search-and-replace and a lot of testing (a good test suite is essential...) if you change something. And of course, you can look at the place in the program where the data-model is built and see what was put into it. Or dump the data-model somehow at runtime.
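If it helps while tracing, here is a tiny self-contained example (assuming FreeMarker 2.3.23 or later) of the resolution behaviour described above; searching the codebase for where such a data-model map is populated is usually the quickest way to find the Java classes behind a template:

import java.io.StringReader;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;

import freemarker.template.Configuration;
import freemarker.template.Template;

public class DataModelDemo {

    // A plain bean: with the default ObjectWrapper, ${foo.bar} ends up calling getBar().
    public static class Foo {
        public String getBar() { return "from getBar()"; }
    }

    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_23);

        Map<String, Object> model = new HashMap<String, Object>();
        model.put("foo", new Foo());   // put a Map here instead and ${foo.bar} would call get("bar")

        Template template = new Template("demo", new StringReader("${foo.bar}"), cfg);
        StringWriter out = new StringWriter();
        template.process(model, out);
        System.out.println(out);       // prints: from getBar()
    }
}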
