I am new to Java and DLLs, and I need to access a DLL's methods from Java, so go easy on me.
I have tried using JNA to access the DLL. Here is what I have done:
import com.sun.jna.Library;
import com.sun.jna.Native;

public class mapper {

    public interface mtApi extends Library {
        boolean IsStopped();
    }

    public static void main(String[] args) {
        // load MtApi.dll from the library search path and map it to the interface
        mtApi lib = (mtApi) Native.loadLibrary("MtApi", mtApi.class);
        boolean test = lib.IsStopped();
        System.out.println(test);
    }
}
When I run the code, I am getting the following error:
Exception in thread "main" java.lang.UnsatisfiedLinkError: Error looking up function 'IsStopped': The specified procedure could not be found.
I understand that this error is saying it cannot find the function, but I have no idea how to fix it.
I am trying to use this API: mt4api, and here is the method I am attempting to access: MQL4.
Can anyone tell me what I am doing wrong?
I have looked at other alternatives, like jni4net, but I cannot get this working either.
If anyone can link me to a tutorial that shows how to set this up, or knows how to do it, I would be grateful.
Trading? Hunting for milliseconds to shave off? Go rather into distributed processing... It is definitely safer than relying on the API!
While your OP was directed at how to bend Java to call .NET DLL functions,
let me sketch a much more future-safe solution.
Using AI/ML regression-based predictors for FOREX trading, I was hunting in the same forest. The best solution found over roughly the last 12 years, with a few hundred man-years of experience spent, was set up in the following manner:
Host A executes trades: it operates a MetaTrader Terminal 4 with both a Script and an EA; the distributed-processing system communicates with it using the ZeroMQ low-latency messaging/signalling framework (about a few tens of microseconds needed).
Host B executes AI/ML processing of predictions for a traded instrument (a few hundred microseconds apply).
Cluster C executes continuous AI/ML predictor re-training and hyperparameter-space model selection (many CPU-hours are indeed needed; the continuous model self-adaptation process runs 24/7).
The signalling/messaging layer with ZeroMQ has ports and/or bindings available and ready for most mainstream and many niche programming languages, including Java.
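For illustration only, a minimal sketch of the Java side of such a signalling link, assuming the JeroMQ binding (org.zeromq); the endpoint tcp://hostA:5555 and the "IsStopped?" payload are hypothetical placeholders for whatever the MetaTrader-side Script/EA actually listens for:

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class SignalClient {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            // REQ/REP is the simplest pattern; PUB/SUB or PUSH/PULL work just as well
            ZMQ.Socket socket = ctx.createSocket(SocketType.REQ);
            socket.connect("tcp://hostA:5555");   // hypothetical Host A endpoint

            socket.send("IsStopped?");            // hypothetical request payload
            String reply = socket.recvStr();      // blocking wait for the reply
            System.out.println("Host A replied: " + reply);
        }
    }
}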
Hidden dangers of going just against a published API:
While the efforts for system integration and testing are immense, published API specifications are always in danger of specification creep.
That said, add the countless man-months consumed on debugging after a silent change in the MT4 language specification derails your previous tools and libraries. Why? Just imagine: some time ago, MQL4 stopped being MQL4 and was silently shifted towards MQL5, under the name New-MQL4. Among other changes in compilation there were many small and big nails in the coffin; string surprisingly ceased to be a string and was hidden as an internal struct, and one can guess what that does to all DLL calls.
So, beware of API creep.
Does it hurt a distributed processing solution?
No.
With a wise message-layout design, there are no adverse effects from MetaTrader Terminal 4 behaviour, and all the logic (incl. the strategy decision) is put outside this creeping platform.
Doable. Fast and smart. It could also use remote GPU-cluster processing, if your budget allows.
Does it work even in Strategy Tester?
Yes, it does.
If anyone has the guts to rely on the built-in Strategy Tester, the distributed-processing model still works there. Performance depends on the preferred style of modelling: a full one-year, tick-by-tick simulation with quite complex AI/ML components took a few days on common COTS desktop PC systems (after years of Quant R&D we do not use the Strategy Tester internally at all, but the request was to batch-test the year's tick data, so it can be commented on here).
Related
As simple as this operation seems, I can't find any documentation regarding how to receive a multipart message using ZMQ (JeroMQ). I checked The Guide, but it only covers this with C code, and it seems that I'm supposed to receive messages the same way no matter what kind of message I'm receiving.
In reality, what happens is that I receive the multipart message as two separate messages with this code:
while (running.get()) {
    items.poll();
    if (items.pollin(0)) {
        byte[] message = receiver.recv(0);
        System.out.println("Received " + new String(message, Charset.forName("UTF-8")));
    }
}
The "Received" part will get printed twice if I send a multipart message like this:
publisher.sendMore(message.key);
publisher.send(objectMapper.writeValueAsString(message.data));
What am I doing wrong?
Edit: I know there is a language selector below the examples, but this particular problem is not present in any of the examples; it is only explained inline with C code.
Edit
I tried to explore the API and found the hasReceiveMore() method. I tried using it, but it didn't work; I ended up in an infinite loop with this code:
List<String> parts = new ArrayList<>();
while (receiver.hasReceiveMore()) {
    parts.add(receiver.recvStr());
}
Q : "What am I doing wrong?"
Your code has to actively assume that each message might have been composed as a multipart message (there are zero warranties on this, the less so a priori) and actively check for the presence of the ZMQ_RECVMORE flag after each .recv()-method call, until the .getsockopt( ZMQ_RECVMORE ) method says otherwise.
JeroMQ might have translated this published native API into some other utility methods, so it is best to re-read the JeroMQ source code to find where this native-API multipart-message handling "protocol" gets wrapped into the JeroMQ tooling.
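For illustration, a minimal sketch of the receive loop this implies, reusing the receiver socket from your snippet and assuming JeroMQ's ZMQ.Socket exposes recvStr() and hasReceiveMore() as in the current org.zeromq API (check your version's source if in doubt):

// read the first frame, then keep reading while the socket reports more parts pending
List<String> parts = new ArrayList<>();
do {
    parts.add(receiver.recvStr());        // returns one frame of the message
} while (receiver.hasReceiveMore());      // JeroMQ's wrapper around ZMQ_RCVMORE
System.out.println("Got " + parts.size() + " part(s): " + parts);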
EPILOGUE : Verba docent, exempla trahunt...

Having helped more than 1.3 M Community members and countless anonymous site visitors, I got punished and censored for helping. The censorship of deleting comments continues. The spirit of StackOverflow turns to digital totalitarianism: delete, delete, and punish those who keep thinking and who present help and advice to those who ask for sponsored help...

Let's review the facts:

"I couldn't find any docs regarding how I should receive the parts, but I tried something that looks like what you mentioned...it didn't work either. – Adam Arold 20 hours ago"

Was either the not finding of "any docs" or the "not working" (again with no published, reproducible MCVE) my fault or omission? ( My answer to these false claims was administratively deleted a few minutes after being posted... Self-explanatory. )

"This is not an answer and it doesn't contain a solution. I'm not sure why you're surprised. What you cited is the C API that has nothing to do with the JeroMQ API. In the end the solution was that I have to recv before I try to check the RECVMORE flag. This was not in your answer. Alternatively ZMsg can be used. – Adam Arold 11 hours ago"

ANALYSIS OF THE CLAIMS :

Sentence #1 : "This is not an answer and it doesn't contain a solution."
This IS an answer, in spite of the "claim". It contains several important pieces of information that she/he/anyone would otherwise have to seek for hours (days or weeks?) and then study and comprehend, architecture-wise, so as not to produce ill-formulated code designs or get principally trapped by one's own misconceived decisions, had they not been advised and warned about these possible shortcomings, which I have personally met (and help others not to repeat) throughout my last 13+ years spent with fabulous Martin Sustrik's masterpiece, ZeroMQ, since v2.1+. So this "claim" is both wrong and unsupported by facts. The minor "claim" that the answer did not contain a solution is absurd; StackOverflow Community members are not employees to shout at, still less do we bear a commitment to program code that will snap in and fit all the needs of the (unpublished) use case.

Sentence #2 (an expressed feeling) : rather skipped, as it is more a case of insult than a fair argument, isn't it?

Sentence #3 : "What you cited is the C API that has nothing to do with the JeroMQ API."
Oh sure, YES, the C API (and the ZeroMQ RFC documents on the mandatory "wire-line" protocol properties that any peer implementation has to obey...) is the starting point and a cardinal reference in all of this. And NO, both the published ZeroMQ RFC documents and the API are the rock-solid reference for anyone to start with, so as to best understand how the internal engines and all the protocol pumps that obey the mandatory "wire-line" properties work (and must work) in order to declare themselves ZeroMQ-compatible. The JeroMQ authors did their work based on these documented properties, didn't they? If they did not, or if they "cut some corners" in doing so, the story is lost, and it was not my fault that they did not meet and/or cover all the ZMTP / ZeroMQ-RFC / API properties and requirements, was it?

That said, any wrapper/binding, including any version of JeroMQ, must also conform to these inner working rules, which is sufficiently self-documented and demonstrated, if nowhere else, in the JeroMQ source code (which warning was also part of the Answer provided, wasn't it?), if it aspires to be a ZeroMQ-compatible tool. Again, should your currently used JeroMQ implementation fail to come with the well-documented JeroMQ API documentation you would like to read through (to find both the description and code examples for the use cases), as was claimed, or should the will to seek and find any such (source-coded) information be missing, it is not the Community-sponsoring member who is to be punished for the lack of the former, still less of the latter.

Sentences #4 + #5 : this needs to get highlighted:
"In the end the solution was that I have to recv before I try to check the RECVMORE flag. This was not in your answer."
First of all, it WAS in the Answer, in the very first sentence: "...code has to actively assume each message might have been composed as a multipart message (zero warranties on this, the less so a priori) and actively check for the presence of the ZMQ_RECVMORE flag after each .recv()-method call, until the .getsockopt( ZMQ_RECVMORE ) method says otherwise."
My generation grew up in the deep belief that if we have made an error, or a poor decision based on an unsupported assumption, we never punish anyone else for our having made that error or bad decision. Surprisingly, that does not seem to work here. Why would anyone ever punish a person who reached out to help solve their problem and sponsored their personal need to get a step further? No one would, unless the culture of asking for sponsored help and then punishing whoever provided it keeps growing. Isn't this called an arrogant or dictator-like style of person-to-person behaviour? Be it the former or the latter, it is neither fair, still less a style to be promoted, still less rewarded, as Community-preferred behaviour. The "argument" per se is empty, void: without a .recv()-method call, nothing ever gets out from inside the ZeroMQ-API abstract horizon, still less the indication, promised to be learned via the .getsockopt()-method, of a RECVMORE flag (sure, only after some .recv() has been confirmed to have gotten a substance, The Message). That is both elementary and does not need to be "included" in any text about ZeroMQ/JeroMQ messaging, as it is self-explanatory. Would anyone claim that it was unfair not to explain that asking for an email attachment makes no sense if no email has been delivered so far? No one fair ever would. So the Answer did the very opposite: it warned that for every .recv()-ed message, a professional designer ought always to assume { 0+ } RECVMORE-flagged multipart-message components, which follow the first one .recv()-ed and need to be dug out of the API.

The last sentence : "Alternatively ZMsg can be used."
This claim remains an undecidable problem, as the O/P contains zero information about the version used. The native ZeroMQ API has evolved since its premiere release via v2.0-v2.1-..-v2.11, via v3.0-v3.1-v3.2, refactored and extended via v4.0-v4.1-v4.2-v4.3 and still counting, and the "claimed" ZMsg abstraction is sure not to be present in earlier implementations, so the version number is cardinal here (it is also part of the StackOverflow best practices for how to ask good questions, with a problem-reproducing MCVE / MWE code and all relevant details, the version number being one of them, isn't it?).
I've been using Nashorn for awk-like bulk data processing. The idea is that there is a lot of incoming data, coming row by row, one after another, and each row consists of named fields. These data are processed by user-defined scripts stored somewhere externally and editable by users. The scripts are simple, like if (c > 10) a = b + 3, where a, b and c are fields in the incoming data rows. The amount of data is really huge. The code is like this (an example to show the use case):
ScriptEngine engine = new NashornScriptEngineFactory().getScriptEngine(
        new String[]{"-strict", "--no-java", "--no-syntax-extensions", "--optimistic-types=true"},
        null,
        scr -> false);

Invocable inv = (Invocable) engine;

// strip the dangerous globals from the engine scope
Bindings bd = engine.getBindings(ScriptContext.ENGINE_SCOPE);
bd.remove("load");
bd.remove("loadWithNewGlobal");
bd.remove("exit");
bd.remove("eval");
bd.remove("quit");

// wrap the user script in a function and compile it once
String scriptText = readScriptText();
CompiledScript cs = ((Compilable) engine).compile("function foo() {\n" + scriptText + "\n}");
cs.eval();

// stream the incoming rows through the compiled function
Map params = readIncomingData();
while (params != null) {
    Map<String, Object> res = (Map) inv.invokeFunction("foo", params);
    writeProcessedData(res);
    params = readIncomingData();
}
Now Nashorn is obsolete and I'm looking for alternatives. I have been googling for a few days but didn't find an exact match for my needs. The requirements are:
Speed. There's a lot of data, so it has to be really fast. I assume as well that precompilation is a must.
Has to work under Linux/OpenJDK.
Supports sandboxing, at least for data access/code execution.
Nice to have:
Simple, C-like syntax (not Lua ;)
Supports sandboxing of CPU usage.
So far I have found that Rhino is still alive (last release dated 13 Jan 2020), but I'm not sure whether it is still supported and how fast it is; as I remember, one of the reasons Java switched to Nashorn was speed, and speed is very important in my case. I also found J2V8, but Linux is not supported. GraalVM looks like a bit of an overkill, and I haven't figured out how to use it for such a task yet; maybe I need to explore further whether it is suitable, but it looks like a complete JVM replacement that cannot be used as a library.
It doesn't necessarily have to be JavaScript; maybe there are other alternatives.
Thank you.
GraalVM's JavaScript can be used as a library, with the dependencies obtained like any Maven artifact. While the recommended way to run it is to use the GraalVM distribution, there are explanations of how to run it on OpenJDK as well.
You can restrict what the script has access to, like Java classes, creating threads, etc. (a minimal sketch follows the list below):
From the documentation:
The following access parameters may be configured:
* Allow access to other languages using allowPolyglotAccess.
* Allow and customize access to host objects using allowHostAccess.
* Allow and customize host lookup to host types using allowHostLookup.
* Allow host class loading using allowHostClassLoading.
* Allow the creation of threads using allowCreateThread.
* Allow access to native APIs using allowNativeAccess.
* Allow access to IO using allowIO and proxy file accesses using fileSystem.
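For illustration, a minimal sketch of a locked-down, compile-once/call-many setup, assuming the org.graalvm.polyglot API; the exact builder flags, the artifact coordinates, and the toy script are assumptions you would adapt to your version and use case:

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.HostAccess;
import org.graalvm.polyglot.Source;
import org.graalvm.polyglot.Value;
import org.graalvm.polyglot.proxy.ProxyObject;

import java.util.HashMap;
import java.util.Map;

public class GraalJsSandbox {
    public static void main(String[] args) {
        // everything not explicitly allowed below stays forbidden for the guest code
        try (Context context = Context.newBuilder("js")
                .allowHostAccess(HostAccess.NONE)
                .allowHostClassLoading(false)
                .allowCreateThread(false)
                .allowNativeAccess(false)
                .allowIO(false)
                .build()) {

            // parse once, call many times -- analogous to CompiledScript + invokeFunction
            Source src = Source.create("js", "function foo(row) { return (row.c > 10) ? row.b + 3 : 0; }");
            context.eval(src);
            Value foo = context.getBindings("js").getMember("foo");

            Map<String, Object> row = new HashMap<>();
            row.put("b", 7);
            row.put("c", 42);
            // ProxyObject exposes the Map's entries to the guest without granting host access
            Value result = foo.execute(ProxyObject.fromMap(row));
            System.out.println(result.asInt()); // 10
        }
    }
}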
And it is several times faster than Nashorn. Some measurements can be found for example in this article:
GraalVM CE provides performance comparable or superior to Nashorn, with the composite score being 4 times higher. GraalVM EE is even faster.
Scenario
I'm working with a Java model built from scratch in Eclipse. What's important in this model is that we save our output to MATLAB (.mat) files. I constantly add new features, which require new fields that in turn will have to be exported to the .mat file at every iteration. Upon restarting a crashed simulation, I might have to import the .mat file. To export or import my .mat file I use JMatIO.
For example, if I add a new field rho_m (a simple double) to my class CModel, I have to add this to my Save() method:
mlModel.setField("rho_m", new MLDouble(null, new double[] {rho_m}, 1));
And to my Load() method:
rho_m = ((MLDouble)mlModel.getField("rho_m")).getReal(0);
Note that even though rho_m is a double, it needs to be treated as a double[] in JMatIO. This probably has something to do with MATLAB being orientated towards matrices and matrix operations.
Problem
Instead of doing this manually (prone to errors, annoying to maintain) I would like to automate this procedure. Ideally, I would like my IDE to detect all the fields in CModel and write the code based on the field's name and type. Is there any way to do this in Java/Eclipse?
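To make the goal concrete: what I imagine the automated version would look like is something along these lines, a rough reflection-based sketch I have not actually written (it assumes mlModel is a JMatIO MLStructure, as in my snippets above, and only handles plain double fields):

import com.jmatio.types.MLDouble;
import com.jmatio.types.MLStructure;
import java.lang.reflect.Field;

// hypothetical helper: mirrors the manual rho_m lines for every double field of CModel
public class MatFieldMapper {

    public static void saveDoubleFields(CModel model, MLStructure mlModel) throws IllegalAccessException {
        for (Field f : CModel.class.getDeclaredFields()) {
            if (f.getType() == double.class) {
                f.setAccessible(true);
                // same pattern as the manual rho_m line, generalized by field name
                mlModel.setField(f.getName(), new MLDouble(null, new double[]{ f.getDouble(model) }, 1));
            }
        }
    }

    public static void loadDoubleFields(CModel model, MLStructure mlModel) throws IllegalAccessException {
        for (Field f : CModel.class.getDeclaredFields()) {
            if (f.getType() == double.class) {
                f.setAccessible(true);
                f.setDouble(model, ((MLDouble) mlModel.getField(f.getName())).getReal(0));
            }
        }
    }
}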
Ideas so far
I have no formal training in low-level programming languages (yes, Java is low-level to me) and am still relatively new to Java. I do have some experience with MATLAB. In MATLAB I think I could use eval() and fieldnames() in a for loop to do what I mentioned. My last resort is to copy-paste the Java code to MATLAB and from there generate the code using a huge, ugly script. Every time I want to make changes to the model I'd rerun the MATLAB script.
Besides that idea, I've come across terms like UML, but I do not have the background knowledge to figure out whether this is what I'm looking for or not.
Any help, even if it's just a small push in the right direction, is greatly appreciated. Let me know if I need to further clarify anything.
Looking at your scenario, you are doing model-driven code generation, that is, you have a model and want to get some code generated according to your current model. Therefore, you need a model-driven code generator.
I lead the ABSE/AtomWeaver project, so I'll outline what you can do to get what you want using AtomWeaver (there are, however, other solutions like MetaEdit+, Xtext, or Eclipse's own GMT/EMF sub-system).
AtomWeaver is an IDE where you can build a model and generate code from that model. You can change your model as many times as you want and hit the "Generate" button to get an updated version of your code. ABSE is the name of the modeling method.
We don't need to go into details, but essentially ABSE follows a "building-block" approach. You create a Template that represents a feature or concept of your model. Then you can associate a mini code generator with just that concept. You can then "instantiate" and combine those building blocks to quickly build your models. Variables increase the flexibility of your models.
You can also change your models, or add new features ("blocks"), and generate again. The generators are built using the Lua programming language, a very simple language with C-like syntax.
The best way to understand the ABSE development method and the AtomWeaver IDE is to download the IDE and see the samples or try the tutorials. And yes, you can use AtomWeaver for free.
Are there any tools (free/commercial) that can audit an application for internationalization? (or localization-readiness, if you prefer)
Primarily interested in:
Multilingual Implementation tests
Examples:
* [javascript] alert('Oops wrong choice!');
* [java] String msg = resourcebundle.getString("key.x").concat("4");
* [jdbc] String query=".. order by abc"; //should be NLS_SORT or equiv.
Date Implementation tests
Examples:
* SimpleDateFormat used without Locale
* Apache's DateFormatUtils used
Numeric Implementation tests
Examples:
* NumberFormat used without Locale
javascript-validation tests
Examples:
* [javascript] checkIsDecimal { //decimal point checked against "." }
* [javascript] hardcoded character range [A-z]
Cheers.
Have a look at Globalyzer - http://lingoport.com/globalyzer - as it is just that, a tool for performing static analysis on code specifically for internationalization. It works with a variety of programming languages too. Supports detection and correction for embedded strings (string externalization capabilities too), potential locale-limiting methods/functions/classes depending upon the programming language and requirements, as well as other issues like programming patterns and embedded images. There are default "rule sets" which get you a good start, and then you can customize your rules for both detection and filtering of issues. Plus there's an underlying database that helps you tag or keep track of i18n issues as you work with them. There's a server component, where you create and share your rule sets with your team members, then desktop and command line clients which run locally on your machine to analyze your source, so you're not sending any code or reporting off your local machine.
Based on your examples, you mostly want to diagnose functions that produce output whose input isn't somehow internationalized.

So for the alert case, you want to find any print call that acquires a string not produced by one of possibly several well-known translation routines.

For the jdbc case, you want to identify ordering constraints that are not locale-specific.

For the various date cases, you want date routines that are known to produce locale-specific answers.

The javascript validation is harder to guess at intent; presumably you want to diagnose functions that are known to be wired to a particular locale, which seems a lot like the date case. For range checks, you want to capture anything that compares a character to another for less-than or greater-than.

For the wired-locale functions, it seems just knowing their name would be enough (although perhaps there has to be some overload resolution, e.g., by number of arguments), so NumberFormat(?,?) is bad, and NumberFormat(?,?,?) is OK.

Why can't you write a regular expression to look (heuristically) for the bad cases?

For the range case, you just need to recognize expressions of the form [exp] < [literal-char] or [exp] < [literal-string]. A regexp looking for just "< '.+" would seem adequate.

Are there common cases that these would miss?
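For illustration, a minimal sketch of that heuristic in Java, assuming you simply scan source lines with java.util.regex; the two patterns are deliberately rough, are my own invention rather than anything standard, and will produce false positives and negatives:

import java.util.List;
import java.util.regex.Pattern;

public class I18nLint {
    // comparison against a character/string literal, e.g.  c < 'z'  or  s > "abc"
    private static final Pattern RANGE_CHECK = Pattern.compile("[<>]=?\\s*('.+?'|\".+?\")");
    // NumberFormat.getInstance() with an empty argument list, i.e. no explicit Locale
    private static final Pattern NO_LOCALE_NUMBERFORMAT = Pattern.compile("NumberFormat\\.getInstance\\(\\s*\\)");

    public static void scan(List<String> lines) {
        for (int i = 0; i < lines.size(); i++) {
            String line = lines.get(i);
            if (RANGE_CHECK.matcher(line).find()) {
                System.out.println("line " + (i + 1) + ": literal range/char comparison: " + line.trim());
            }
            if (NO_LOCALE_NUMBERFORMAT.matcher(line).find()) {
                System.out.println("line " + (i + 1) + ": NumberFormat without Locale: " + line.trim());
            }
        }
    }
}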
EDIT (from the comment below: "I've been using regexp but...")
If you want a tool that is deeper than regexps, you pretty much have to go to language parsing and name/type resolution, and having data-flow analysis would be helpful. Since you want to process multiple (computer) languages, the tool has to be multi-lingual capable. And it appears you want to be able to customize it to check for the specific cases relevant to your application.
The DMS Software Reengineering Toolkit has all these properties, including parsers for Java, JavaScript and SQL. It is designed to be customized, so you have to do that in advance of using it.
I have studied IntelliJ IDEA's code analyzers, and it does have the ones you requested. It's a commercial IDE, specialized in Java, but it knows other languages as well.
http://www.jetbrains.com/idea/
I want to write a Java function grabTopResults(String f) such that grabTopResults("automata theory") returns a list of the top 100 cited papers on scholar.google.com for "automata theory".
Does anyone have suggestions for what libraries will make my life easy?
Thanks!
As I'm sure Google can afford the bandwidth, I'll ignore the question of whether this is immoral/illegal/prohibited by Google's T&C.
The first thing you need to do is figure out what HTTP request (or requests) you need to issue in order to obtain the page with the data you need. Once you've figured this out, use HttpClient to issue the same request from Java code. The previous link shows example code that explains how to do this.
Once you've downloaded the content of the relevant page, you'll need to use an HTML parser to extract the data you're interested in. The Jericho parser suggested by peperg is a good choice.
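For illustration only, a rough sketch of those two steps, assuming Apache HttpClient 4.x and Jericho, and assuming the result blocks still carry the gs_r class used in the example further below; the query URL and the selectors are guesses that will likely need adjusting (and, as noted, scraping may be against Google's terms):

import net.htmlparser.jericho.Element;
import net.htmlparser.jericho.Source;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class ScholarScraper {
    public static void main(String[] args) throws Exception {
        String url = "http://scholar.google.com/scholar?q=automata+theory"; // hypothetical query URL
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            // step 1: issue the HTTP request and download the result page
            String html = EntityUtils.toString(client.execute(new HttpGet(url)).getEntity());

            // step 2: parse the HTML and pull out each result block's title
            Source source = new Source(html);
            for (Element result : source.getAllElementsByClass("gs_r")) {
                Element title = result.getFirstElement("h3");
                if (title != null) {
                    System.out.println(title.getTextExtractor().toString());
                }
            }
        }
    }
}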
If the Google police come knocking, you've never heard of me, OK?
I use http://jericho.htmlparser.net/docs/index.html . Google Scholar doesn't have an API ( http://code.google.com/p/google-ajax-apis/issues/detail?id=109 ). Of course, it is not allowed by Google (read the terms of use; automated requests are forbidden).
Below is a bit of example code which gets the titles on the first page using the open source product TestPlan. It is a standalone product, but if you really need it, I could help you integrate it into your Java code (it is written in Java itself).
GotoURL http://scholar.google.com/
SubmitForm with
%Params:q% automate theory
end
set %Items% as response //div[#class='gs_r']
foreach %Item% in %Items%
set %Title% as selectIn %Item% h3
Notice %Title%
end
This produces output like the below (my IP is in Germany, hence a German response). Obviously you could format it however you like, or write it to a file; this is just a rough test.
00000000-00 GOTOURL http://scholar.google.com/
00000001-00 SUBMITFORM default
00000002-00 NOTICE [ZITATION] Stochastic complexity in statistical inquiry theory
00000003-00 NOTICE AUTOMATED THEORY FORMATION IN MATHEMATICS1
00000004-00 NOTICE Constraint generation via automated theory formation
00000005-00 NOTICE [BUCH] Automated theorem proving: after 25 years
00000006-00 NOTICE [BUCH] Introduction to the Theory of Computation
00000007-00 NOTICE [ZITATION] Computer-controlled systems: theory and design
00000008-00 NOTICE [BUCH] … , randomness & incompleteness: papers on algorithmic information theory
00000009-00 NOTICE [BUCH] Automatic control systems
00000010-00 NOTICE [BUCH] VLSI physical design automation: theory and practice
00000011-00 NOTICE Singular Control Systems.