Which JS Script Engine will be chosen by Java?

ScriptEngineManager.getEngineByName looks up and creates a ScriptEngine for a given name.
Rhino registers itself as "js", "rhino", "JavaScript", "javascript", "ECMAScript", and "ecmascript"
Nashorn registers itself as "nashorn", "Nashorn", "js", "JS", "JavaScript", "javascript", "ECMAScript", and "ecmascript"
If I use a name like "js" which both Nashorn and Rhino have registered with, which script engine will be used? Will it use Nashorn on Java 8 and Rhino otherwise?

Looking at the JavaDoc for registerEngineName:
Registers a ScriptEngineFactory to handle a language name. Overrides
any such association found using the Discovery mechanism.
And also at the registerEngineName source code (note that nameAssociations is a hash map):
public void registerEngineName(String name, ScriptEngineFactory factory) {
    if (name == null || factory == null) throw new NullPointerException();
    nameAssociations.put(name, factory);
}
So, it seems that, for a given name, getEngineByName will return the script engine factory that was the last to be registered for that name.
As script engine factories are loaded through the ServiceLoader mechanism, the loading order will depend on the order that the service configuration files are enumerated by the relevant class loaders' getResources method.
For a default installation, all of this does not matter too much, as Java 8 only includes Nashorn, and Java 7 and earlier only include Rhino. If you add an additional engine through the system class path, it will be loaded after the one provided by the bootstrap/extension class loader, and will thus take precedence.

Reading the code, registerEngineName is indeed deterministic. However, the discovery mechanism is a separate thing (as implied by the JavaDoc), and it is non-deterministic: it adds all discovered engines to a HashSet, and when asked for an engine by name it simply returns the first match it finds.
You can run into this if you install an updated Rhino ScriptEngine on Java 7 and request it by any of the usual names (js, rhino, etc.).
But unless you do that, both Java 7 and Java 8 come with exactly one implementation, which answers to js, javascript, ecmascript, and so on. As long as you don't ask for rhino or nashorn explicitly, it should work in both cases.
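A quick way to see what a given JVM will actually pick is to enumerate the registered factories and then ask by name; a minimal sketch using only the standard javax.script API:
import javax.script.ScriptEngine;
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class EngineProbe {
    public static void main(String[] args) {
        ScriptEngineManager manager = new ScriptEngineManager();
        // List every discovered factory and the names it registers under
        for (ScriptEngineFactory factory : manager.getEngineFactories()) {
            System.out.println(factory.getEngineName() + " -> " + factory.getNames());
        }
        // Ask for "js" and report which implementation actually answered
        ScriptEngine engine = manager.getEngineByName("js");
        if (engine != null) {
            System.out.println("Selected: " + engine.getFactory().getEngineName());
        }
    }
}
On a stock Java 8 this reports the Nashorn factory, and on a stock Java 7 the Rhino-based one, since each ships with exactly one engine registered under those names.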

Related

Interfaces in Lua (mocking dependencies)

Preamble
I am writing Lua code to control virtual devices attached to a virtual computer. I do not have control over the "component" library/dependency.
-- Component is used to access other devices which are attached to the computer on which this is running
local component = require "component"
-- ...
-- ge_terminal is a device which is connected.
-- Said device is found when the computer starts up
-- So the ge_terminal field/property is not hard-coded, but added at runtime
component.ge_terminal.get_heat()
Such calls are very common in my program. My issue is that I do not have access to the external component to which component.ge_terminal refers within my development/testing environment.
The Question
I would like to know how to abstract calls to external components/devices so that I can test the business logic without access to said external components/devices.
Solutions in other languages
In other languages such as Java or C#, this might be achieved using an interface, or by mocking the external dependency during testing. I would ideally like to find a solution which does not involve additional libraries (which I do know exist for mocking in Lua).
interface IComponentManager {
    IDevice getComponent(String name);
}
interface IDevice {
    String getProperty(String name);
}
// By abstracting how components are accessed, the production versions can be replaced with mock versions.
// The production version calls the live equivalent of "component.ge_terminal" to return the real device.
// The mock version returns custom IDevices which can be configured for testing.
IComponentManager manager = new ProductionComponentManager();
// IComponentManager manager = new MockComponentManager(); // swapped in for tests
IDevice device = manager.getComponent("ge_terminal");
String heat = device.getProperty("heat");
(I am aware that this is not prime Java code, but it's just for demonstration)
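To make the mock half of that sketch concrete, a minimal in-memory version could look like this (class and method names are illustrative and match the interfaces sketched above):
import java.util.HashMap;
import java.util.Map;

// Illustrative mock implementations of the interfaces sketched above.
class MockDevice implements IDevice {
    private final Map<String, String> properties = new HashMap<>();

    void setProperty(String name, String value) {
        properties.put(name, value);
    }

    @Override
    public String getProperty(String name) {
        return properties.get(name);
    }
}

class MockComponentManager implements IComponentManager {
    private final Map<String, IDevice> devices = new HashMap<>();

    void addDevice(String name, IDevice device) {
        devices.put(name, device);
    }

    @Override
    public IDevice getComponent(String name) {
        return devices.get(name);
    }
}
A test would register a MockDevice under "ge_terminal" with a canned "heat" value and hand the MockComponentManager to the business logic, which never needs the real component library.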

Java, how to use expression language in pure Java

I have a Java EE web application. As it will be deployed to application servers such as WebLogic, WebSphere, Tomcat, etc., the EL API will be available to me, but I shouldn't have to care about which EL implementation is used.
I want to use EL in my Java code, not in JSP pages, to translate simple expressions into value constants, like this:
public static void main(String[] args) {
    ExpressionFactory ef = new ExpressionFactoryImpl();
    SimpleContext ec = new SimpleContext();
    ec.setVariable("value", ef.createValueExpression("0", String.class));
    ValueExpression ve = ef.createValueExpression(ec, "${value == \"1\" ? \"enabled\" : \"disabled\"}", String.class);
    System.out.println(ve.getValue(ec));
}
However, in this example I used the JUEL library, so I need to access JUEL implementation classes. My question is: how can I do the same thing using only the classes in the javax.el package?
Do I need to implement my own ELContext and VariableMapper and other abstract classes ?
I believe you don't need to hack EL together with your code; I think in some cases that might be problematic.
There are quite a lot of libraries offering similar functionality (known as template engines); try Velocity or FreeMarker, for example. They are extremely easy to use and have quite a lot of options.
MVEL solved my problem; it's flexible and powerful!
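For completeness: if the target containers ship EL 3.0 or newer, the standard javax.el.ELProcessor covers this use case without touching any implementation classes. A minimal sketch, assuming an EL 3.0+ API and implementation are on the classpath:
import javax.el.ELProcessor;

public class ElDemo {
    public static void main(String[] args) {
        ELProcessor elp = new ELProcessor();
        // Bind a variable, then evaluate the conditional expression (no ${} delimiters needed here)
        elp.defineBean("value", "0");
        Object result = elp.eval("value == '1' ? 'enabled' : 'disabled'");
        System.out.println(result); // prints "disabled"
    }
}
On older containers that only provide EL 2.x, you are back to either bundling an implementation such as JUEL or using a separate expression library as suggested above.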

How to start with UIMA and simple NLP tasks?

I've recently found out about UIMA (http://uima.apache.org/). It looks promising for simple NLP tasks, such as tokenizing, sentence splitting, part-of-speech tagging etc.
I've managed to get my hands on an already configured minimal Java sample that uses OpenNLP components for its pipeline.
The code looks like this:
public void ApplyPipeline() throws IOException, InvalidXMLException,
        ResourceInitializationException, AnalysisEngineProcessException {
    XMLInputSource in = new XMLInputSource("opennlp/OpenNlpTextAnalyzer.xml");
    ResourceSpecifier specifier = UIMAFramework.getXMLParser().parseResourceSpecifier(in);
    AnalysisEngine ae = UIMAFramework.produceAnalysisEngine(specifier);
    JCas jcas = ae.newJCas();
    jcas.setDocumentText("This is my text.");
    ae.process(jcas);
    this.doSomethingWithResults(jcas);
    jcas.reset();
    ae.destroy();
}

private void doSomethingWithResults(JCas jcas) {
    AnnotationIndex<Annotation> idx = jcas.getAnnotationIndex();
    FSIterator<Annotation> it = idx.iterator();
    while (it.hasNext()) {
        System.out.println(it.next().toString());
    }
}
Excerpt from OpenNlpTextAnalyzer.xml:
<delegateAnalysisEngine key="SentenceDetector">
<import location="SentenceDetector.xml" />
</delegateAnalysisEngine>
<delegateAnalysisEngine key="Tokenizer">
<import location="Tokenizer.xml" />
</delegateAnalysisEngine>
The java code produces output like this:
Token
sofa: _InitialView
begin: 426
end: 435
pos: "NNP"
I'm trying to get the same information from each Annotation object that the toString() method uses. I've already looked into UIMA's source code to understand where the values come from. My attempts to retrieve them sort of work, but they aren't smart in any way.
I'm struggling to find easy examples that extract information from the JCas objects.
I'm looking for a way to get, for instance, all Annotations produced by my PosTagger or by the SentenceSplitter for further use.
I guess
List<Feature> feats = it.next().getType().getFeatures();
is a start for getting values, but because UIMA has its own classes for primitive types, even the source code of the toString method in the Annotation class reads like a slap in the face.
Where do I find java code that uses basic UIMA stuff and where are good tutorials (except javadoc from the framework itself)?
Generate JCas wrapper classes for your annotation types (you can do this using the type system editor UIMA plugin for Eclipse that comes with UIMA). This will provide you with Java classes that you can use to access the annotations - these offer getters and setters for features.
You should have a look at uimaFIT, which provides a more convenient API including convenience methods to retrieve annotations from the JCas, e.g. select(jcas, Token.class) (where Token.class is one of the classes you generated with the type system editor).
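For example, once a Token JCas class has been generated, pulling out its annotations and features could look roughly like this. This is a sketch: the Token class and its getPos() getter depend on your generated type system, and the JCasUtil package name differs between the legacy org.uimafit releases and the Apache org.apache.uima.fit ones.
import org.apache.uima.fit.util.JCasUtil;
import org.apache.uima.jcas.JCas;

// Iterate all Token annotations and read their features via the generated getters.
void printTokens(JCas jcas) {
    for (Token token : JCasUtil.select(jcas, Token.class)) {
        System.out.println(token.getCoveredText() + " / " + token.getPos());
    }
}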
You could find some quick-starting Groovy scripts and a collection of UIMA components on the DKPro Core page.
There is material from the UIMA#GSCL 2013 tutorial (slides and sample code) which might be useful for you. Go here and scroll down to "Tutorial".
Disclosure: I'm developer on UIMA, uimaFIT, DKPro Core and co-organizer on the UIMA#GSCL 2013 workshop.

How to dynamically differentiate the memcached instances in Java code?

Can anyone suggest a design pattern to dynamically differentiate the memcached instances in Java code?
Previously in my application there was only one memcached instance, configured this way:
Step-1:
dev.memcached.location=33.10.77.88:11211
dev.memcached.poolsize=5
Step-2:
Then I access that memcached instance in code as follows:
private MemcachedInterface() throws IOException {
    String location = stringParam("memcached.location", "33.10.77.88:11211");
    MemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses(location));
}
Then I invoke that memcached instance in code as follows, using the MemcachedInterface() above:
Step-3:
MemcachedInterface.getSoleInstance();
And then I use that MemcachedInterface() to get/set data as follows:
MemcachedInterface.set(MEMCACHED_CUSTS, "{}");
resp = MemcachedInterface.gets(MEMCACHED_CUSTS);
My question is: if I introduce a new memcached instance into our architecture, the configuration is done as follows,
Step-1:
dev.memcached.location=33.10.77.89:11211
dev.memcached.poolsize=5
So the first memcached instance is at 33.10.77.88:11211 and the second is at 33.10.77.89:11211.
Up to this point it's OK, but...
how do I handle Step-2 and Step-3 in this case, to get the MemcachedInterface dynamically?
1) Should I use one more interface called MemcachedInterface2() in Step-2?
Now the actual problem comes in.
I have 4 web servers in my application. Previously all of them wrote to MemcachedInterface(), but now, as I introduce one more memcached instance (e.g. MemcachedInterface2()), WS1 and WS2 should write to MemcachedInterface() and WS3 and WS4 should write to MemcachedInterface2().
So, if I use one more interface called MemcachedInterface2() as mentioned above,
this is a code burden, as I would have to change all the classes used by WS3 and WS4 to MemcachedInterface2().
Can anyone suggest an approach with limited code changes?
xmemcached supports consistent hashing, which will allow your client to choose the right memcached server instance from the pool. You can refer to this answer for a bit more detail: Do clients need to worry about multiple memcached servers?
So, if I understood correctly, you'll have to
use only one memcached client in all your webapps
since you have your own wrapper class around the memcached client (MemcachedInterface), you'll have to add a method to this interface that enables adding/removing servers on an existing client. See the user guide (scroll down a little): https://code.google.com/p/xmemcached/wiki/User_Guide#JMX_Support
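A minimal sketch of that setup, assuming the standard xmemcached builder API (class names from the net.rubyeye.xmemcached packages); KetamaMemcachedSessionLocator enables the consistent hashing mentioned above:
import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.MemcachedClientBuilder;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.impl.KetamaMemcachedSessionLocator;
import net.rubyeye.xmemcached.utils.AddrUtil;

static MemcachedClient buildClient() throws Exception {
    // One client over both instances; consistent hashing picks the server per key.
    MemcachedClientBuilder builder =
            new XMemcachedClientBuilder(AddrUtil.getAddresses("33.10.77.88:11211 33.10.77.89:11211"));
    builder.setSessionLocator(new KetamaMemcachedSessionLocator());
    return builder.build();
}
With that in place the single MemcachedInterface wrapper can stay as it is on all four web servers; keys are distributed across both instances, and adding a third instance becomes a configuration change rather than a code change.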
As far as I can see, you have duplicate code running on different machines as parallel web services. Thus, I recommend the following to differentiate each:
Use a singleton facade service for wrapping your memcached client (I think you are already doing this).
Use encapsulation. Encapsulate your memcached client to decouple it from your code: interface L2Cache.
Give each server a name in a global variable. Assign those values via JVM arguments or your own configuration files: -Dcom.projectname.servername=server-1
Use this global variable as a parameter to configure your service's getCache method.
public static L2Cache getCache() {
    if (System.getProperty("com.projectname.servername").equals("server-1"))
        return new L2CacheImpl(SERVER_1_L2_REACHIBILITY_ADDRESSES, POOL_SIZE);
    // otherwise fall through to the other server's configuration
    return new L2CacheImpl(SERVER_2_L2_REACHIBILITY_ADDRESSES, POOL_SIZE);
}
good luck with your design!
You should list all memcached server instances, space-separated, in your config.
e.g.
33.10.77.88:11211 33.10.77.89:11211
So, in your code (Step2):
private MemcachedInterface() throws IOException {
    String location = stringParam("memcached.location", "33.10.77.88:11211 33.10.77.89:11211");
    MemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses(location));
}
Then in Step-3 you don't need to change anything, e.g. MemcachedInterface.getSoleInstance() keeps working.
You can read more in these memcached tutorial articles:
Use Memcached for Java enterprise performance, Part 1: Architecture and setup
http://www.javaworld.com/javaworld/jw-04-2012/120418-memcached-for-java-enterprise-performance.html
Use Memcached for Java enterprise performance, Part 2: Database-driven web apps
http://www.javaworld.com/javaworld/jw-05-2012/120515-memcached-for-java-enterprise-performance-2.html

Methods for getting annotation metadata in Java

I'm working on a JSR-303 validation framework for GWT. Some of you may have heard of it even though it is a small project. Here is gwt-validation.
In the old days (v1.0) it used a marker interface for each class and each class had metadata generated separately. This was bad because it was not part of the JSR-303 standard and we moved on to the next idea.
In version 2.0 it scans the classpath at runtime using Reflections. This is great. The downside is that it doesn't seem to be able to work inside of containerized environments or those with special restrictions.
This is probably my fault, look at the following code:
// This little snippet goes through the classpath URLs and omits jars that are on the forbidden list.
// This is intended to remove jars from the classpath that we know are not ones that will contain patterns.
Set<URL> classPathUrls = ClasspathHelper.forJavaClassPath();
Set<URL> useableUrls = new HashSet<URL>();
for (URL url : classPathUrls) {
    boolean use = true;
    for (String jar : this.doNotScanJarsInThisList) {
        if (url.toString().contains(jar)) {
            use = false;
            break;
        }
    }
    if (use) {
        useableUrls.add(url);
    }
}
ConfigurationBuilder builder = new ConfigurationBuilder()
        .setUrls(useableUrls)
        .setScanners(new TypeAnnotationsScanner(),
                new FieldAnnotationsScanner(),
                new MethodAnnotationsScanner(),
                new SubTypesScanner())
        .useParallelExecutor();
this.reflections = new Reflections(builder);
I'm using the filter to remove jars that I know can't contain annotations I'm interested in. As I mentioned, this gives a huge speed boost (especially on large classpaths), but the ClasspathHelper.forJavaClassPath() call that I'm basing this on probably isn't the best way to go in container environments (e.g. Tomcat, JBoss).
Is there a better way or at least a way that will work with a container environment and still let my users filter out classes they don't want?
I've looked somewhat into how the Hibernate Validator project (the reference implementation for JSR-303) does it, and it appears to be using, at least in part, the annotation processing introduced in Java 6. That can't be the whole story, because that didn't show up until JDK 6 and Hibernate Validator is JDK 5 compatible. (See: Hibernate documentation)
So, as always, there's more to the story.
I've read these threads, for reference:
About Scannotation which has been pretty much replaced by Reflections.
This one but it uses File and I'm not sure what the implications are of that in things like GAE (Google App Engine) or Tomcat.
Another that goes over a lot of the things I've talked about already.
These threads have only helped so much.
I've also read about the annotation processing framework and I must be missing something. It appears to do what I want but then again it appears to only work at compile time which I know isn't what is done by Hibernate Validator. (Can anyone explain how it does scanning? It works on GAE which means it can't use any of the IO packages.)
Further, would this code work better than what I have above?
Set<URL> classPathUrls = ClasspathHelper.forClassLoader(Thread.currentThread().getContextClassLoader());
Could that correctly get the class loader inside of a Tomcat or JBoss container? It seems to scan a smaller set of classes and still finishes okay.
So, in any case, can anyone help me get pointed in the right direction? Or am I just stuck with what I've got?
You could take a look at Spring's annotation support.
Spring can scan annotations in class files (using ASM, IIRC), and works both in and out of a container.
It may not be easy because it goes through Spring's Resource abstraction, but it should be doable to reuse (or extract) the relevant code.
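A sketch of that approach using Spring's ClassPathScanningCandidateComponentProvider; the @ValidatedBean marker annotation and the com.example base package below are made up purely for the demonstration:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider;
import org.springframework.core.type.filter.AnnotationTypeFilter;

public class AnnotationScanDemo {

    // Hypothetical type-level marker, only for this demonstration.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface ValidatedBean {}

    public static void main(String[] args) {
        // ASM-based scan: matching classes are reported without being loaded.
        ClassPathScanningCandidateComponentProvider scanner =
                new ClassPathScanningCandidateComponentProvider(false);
        scanner.addIncludeFilter(new AnnotationTypeFilter(ValidatedBean.class));
        for (BeanDefinition bd : scanner.findCandidateComponents("com.example")) {
            System.out.println(bd.getBeanClassName());
        }
    }
}
Because the scan resolves candidates through Spring's resource abstraction (backed by the ClassLoader) rather than the java.class.path system property, it behaves better inside containers like Tomcat or JBoss than the forJavaClassPath() approach shown in the question.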
