PHP API vs Java traversal framework for Neo4j - java

Does anyone know whether it is better to use the core Java Traversal API provided by Neo4j or a PHP API for Neo4j? Would there be any limitations in terms of distribution and scalability on a large dataset if I use PHP? Would one really be faster than the other at, say, more than 1000 requests per second?
Are there any design issues I may run into while using either of them?
I am trying to build a friends-of-friends query up to depth 6.
Thanks for any help!

The Java API will be faster, as the PHP Neo4j library relies on REST to call Neo4j, and there is overhead in the REST traversal framework compared with the Java Traversal framework.
In terms of the actual traversal, though, there shouldn't be much of a difference, because at the end of the day the traversal itself is done in Java, either by the native API or by the REST endpoint translating into Java (Groovy, I believe).
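For reference, a minimal sketch of such a depth-6 traversal with the embedded Java Traversal API (assuming a 3.x-era embedded database; the KNOWS relationship type and the name property are placeholders):

```java
import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Path;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.traversal.Evaluators;
import org.neo4j.graphdb.traversal.TraversalDescription;
import org.neo4j.graphdb.traversal.Uniqueness;

public class FriendsOfFriends {
    // Prints everyone reachable from 'start' over KNOWS, up to depth 6.
    public static void printFriends(GraphDatabaseService db, Node start) {
        try (Transaction tx = db.beginTx()) {
            TraversalDescription td = db.traversalDescription()
                    .breadthFirst()
                    .relationships(RelationshipType.withName("KNOWS"), Direction.OUTGOING)
                    .evaluator(Evaluators.toDepth(6))
                    .uniqueness(Uniqueness.NODE_GLOBAL); // visit each person once
            for (Path path : td.traverse(start)) {
                System.out.println(path.endNode().getProperty("name", "<unnamed>")
                        + " at depth " + path.length());
            }
            tx.success();
        }
    }
}
```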

Related

What is the best way to build and expose a Machine Learning model REST API?

I have been working on designing REST APIs using the Spring Framework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning models and using them to make predictions with sklearn in Python.
Now I have a use case wherein I want to expose one REST API that builds a Machine Learning model and another REST API that makes predictions with it. What architecture would help me achieve this? (An example of this is Amazon Machine Learning, which exposes REST APIs for generating models and making predictions.)
I searched around the internet and found the following approaches:
Write the whole thing in Java - ML model + REST API
Write the whole thing in Python - ML model + REST API
But building Machine Learning models and making predictions is much easier and better supported in Python, with libraries like sklearn, than in Java. I would really like to use Python for the Machine Learning part.
I was thinking about an approach wherein I write the REST API in Java but use a subprocess to make the Python ML calls. Will that work?
Can someone advise on the possible architectural approaches I can take, and suggest the most feasible one?
Thanks in advance.
As others mentioned, using AzureML is an easy solution for deploying an ML model as a web service / REST service. However, you need to build the model on the Azure platform using its graphical interface (drag and drop, configure). People may not like this approach if they have built their model in Python/sklearn code. AzureML does have an option to include R and Python scripts, but I did not like it much.
Another option is to store the Python ML model as a .pkl file and deploy it using Flask or the Django REST framework; client apps can then consume the REST service. Here is an excellent tutorial on YouTube:
https://www.youtube.com/watch?v=s-i6nzXQF3g
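Once such a service is up, consuming it from Java (or anything else) is a plain HTTP call. A hedged sketch with java.net.http (Java 11+); the /predict endpoint and the JSON payload are assumptions about how you wire up the Flask side:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PredictClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical endpoint and feature payload; match them to your Flask app.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:5000/predict"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"features\": [5.1, 3.5, 1.4, 0.2]}"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. {"prediction": 0}
    }
}
```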
From what I've done in the past, I suggest two options (maybe there are more, but these are the ones I have implemented):
If you have access and budget for cloud services, Azure ML is an excellent choice: a great ML framework and environment, and to create your REST API you only need a couple of clicks to expose it, then consume it using JSON from any language.
Use scikit-learn and code your REST API in Python; it can still be consumed from any language. This option is not as easy and user-friendly as Azure ML, because you will have to code everything by hand and work with scikit's model persistence functions, but once exposed you can use it from Java (or anything else). I used this as a reference: https://loads.pickle.me.uk/2016/04/04/deploying-a-scikit-learn-classifier-to-production/
Spark MLlib: I haven't tried this option, but I asked a question here on Stack Overflow and got some interesting answers: How to serve a Spark MLlib model?
Well, it depends on the situation in which you use Python for ML.
For classification models like random forest, use your training dataset to build the tree structures and export them as a nested dict. Whatever language you use, if you transform the model object into a plain data structure, you can then use it anywhere.
BUT if your situation involves large-scale, real-time, distributed datasets, then as far as I know the best way is probably to deploy the whole ML process on servers.
I'm using Node.js for my REST service, and I just call out to the system to interact with the Python process that holds the stored model. You could always do the same if you are more comfortable writing your services in Java: just make a call with Runtime.exec or use ProcessBuilder to run the Python script and get the reply back.
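A minimal sketch of that subprocess route from Java; the script name and its one-line-of-JSON stdout contract are assumptions:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.List;

public class PythonBridge {
    // Runs a hypothetical predict.py that loads the stored model,
    // reads the features from argv, and prints one line of JSON.
    public static String predict(String featuresJson) throws Exception {
        Process p = new ProcessBuilder(List.of("python", "predict.py", featuresJson))
                .redirectErrorStream(true) // merge stderr into stdout for simplicity
                .start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String reply = reader.readLine(); // the script's single JSON line
            p.waitFor();
            return reply;
        }
    }
}
```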
By far the fastest way to get your sklearn model into an API is FlashAI.io; the service was made specifically for this purpose. I came across it when facing the same dilemma recently: I had trained a scikit-learn model on my local PC using Python, and I wanted to quickly expose it through an API that could be called via an HTTP POST request.
The other options that were mentioned all involve some learning curve and cost time and effort simply to expose your model. FlashAI lets you expose your model within a couple of minutes: just save your .pkl file and upload it. Your model gets assigned a unique model ID, and you use that to make API requests without any limit. Done and done :)
I have been experimenting with this same task and would like to add another option, one not using a REST API: the format of Apache Spark models is compatible between the Python and Java implementations of the framework. So you could train and build your model in Python (using PySpark), export it, and import it on the Java side for serving/predictions. This works well; a sketch follows.
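A sketch of the Java serving side under that approach; the model path and input source are placeholders, and the input schema must match what the PySpark pipeline was trained on:

```java
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkServing {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("model-serving")
                .master("local[*]")
                .getOrCreate();
        // Load a PipelineModel that was trained and saved with PySpark.
        PipelineModel model = PipelineModel.load("/models/my_pipeline"); // hypothetical path
        Dataset<Row> input = spark.read().json("/data/incoming.json");   // hypothetical input
        Dataset<Row> predictions = model.transform(input);
        predictions.select("prediction").show();
        spark.stop();
    }
}
```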
There are, however, some downsides to this approach:
Spark has two separate ML packages (MLlib and ML) for different data representations (RDDs and DataFrames, respectively)
The algorithms for training models in each of these packages are not the same (no model parity)
The models and training classes don't have uniform interfaces. So, you have to be aware of what the expected format is and might have to transform your data accordingly for both training and inference.
Pre-processing for both training and inference has to be the same, so you either need to do this on the Python side for both stages or somehow replicate the pre-processing on the Java side.
So, if you can live with the downsides above and want to avoid those of a REST API solution (availability, network latency), this might be the preferable option.

How to send/receive data to/from MetaTrader Terminal 4 with Java (or anything!)

I have been working on an algorithm (not mine, I am just modifying it) that predicts when to buy and sell on the FOREX market. I need to be able to open and close orders, dynamically update order parameters (such as stop-loss, maximum stop, etc.), and receive real-time tick data.
I have been researching for well over a week and have had no success.
The closest I have gotten is using JavoNet with the Mt4 Api.
I managed to import the DLL into Java and use an MQL4 function, AccountBalance(); however, it returned 0.0, which was not the account balance. I messed around with the code and the settings on the MT4 client, but still no luck.
Q0: Can anyone please point me in the right direction?
I am new to automated FOREX trading, but from what I understand there is a broker somewhere with an MT4 server, and I connect to that server with my MT4 client on my Windows machine.
Q1: If this is the case, do I need to make an API work with the server side instead of my client side?
All the DLLs I have tried so far have been used with the MT4 client software on my machine.
I have also been doing some reading on the FIX-Protocol and ZeroMQ.
Q2: Can these help me achieve my goal in any way (instead of creating some bridges between JAVA and MT4 DLL's)?
A0: Yes. Forget straight away about REST and synchronous, blocking chains in the FX-trading domain.
A1: Well, not in a typical way. MetaTrader Server is a proprietary suite of systems on the broker side, and its APIs are not disclosed to allow third-party integrations against them.
A2: FIX-Protocol is the industry-standard LP-interfacing lingua franca. If you have a contracted relationship with your institutional trading provider, incl. a FIX-Protocol GWY-port, this may provide you A-level access to the Market to integrate your trading tools against. If this is the case, forget about MT4 instrumentation, as prime-time cadences are far beyond the MT4 Terminal localhost processing architecture (multiple events with sub-millisecond TimeDOMAIN resolution are common, whereas MQL4 does not provide any direct support for multithreaded-concurrent, let alone parallel, programme scheduling designs). FIX-Protocol events are simply off the picture above, far to the left, "before" the graph would even start from its first [ms] column.
ZeroMQ may help liberate your further designs from MQL4 limitations. You may like to read my other posts on distributed systems, where MQL4 / ZeroMQ / ML-AI-predictor / GPU-processing infrastructures appear.
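For a flavor of what the Java side of such a ZeroMQ bridge can look like, a minimal sketch with JeroMQ; the endpoint and the text message format are assumptions, and the MQL4 side would connect with a matching ZeroMQ binding:

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class Mt4Bridge {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            // REP socket: the EA sends a request, we reply. Endpoint is hypothetical.
            ZMQ.Socket socket = context.createSocket(SocketType.REP);
            socket.bind("tcp://127.0.0.1:5555");
            while (!Thread.currentThread().isInterrupted()) {
                String request = socket.recvStr(); // e.g. "TICK EURUSD 1.0921"
                // ... feed the tick into the trading logic, decide on an action ...
                socket.send("ACK");                // reply the EA is waiting for
            }
        }
    }
}
```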
Anyway:
Enjoy the Wild Worlds of MQL4/MQL5
Interested? You may also like reading other MQL4, ZeroMQ, distributed-processing, and low-latency trading posts.
You can try the MetaApi cloud service (https://metaapi.cloud), which provides REST API and WebSocket API access to both MetaTrader 4 and MetaTrader 5 accounts.
Official REST API documentation: https://metaapi.cloud/docs/client
SDKs: https://metaapi.cloud/sdks (JavaScript, Python, and Java SDKs are provided as of April 2021)
It supports reading account information, positions, orders, trade history, receiving quotes, and accessing market data.
The service also provides a copy-trading API (https://metaapi.cloud/docs/copyfactory) and an API to calculate forex trading metrics on a MetaTrader account (https://metaapi.cloud/docs/metastats).
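If you prefer raw REST over the SDKs, calling the service from Java is an ordinary authenticated HTTP request. A hedged sketch; the exact path and the auth-token header name are assumptions, so check the official docs above for the real contract:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetaApiCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                // Hypothetical accounts endpoint; see the official REST docs.
                .uri(URI.create("https://metaapi.cloud/users/current/accounts"))
                .header("auth-token", System.getenv("METAAPI_TOKEN")) // assumed header
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```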
I started to code an expert advisor with MQL5, naturally on the MT5 platform, and I must admit that the difficulty of managing the application grows sharply with its complexity. It's not only the missing garbage collector, which of course imposes manual deletion of new instances, but also that Java offers a set of powerful data structures and syntax that MQL5 naturally doesn't have. Last but not least, talking about the community and the third-party libraries available, there's a light-year of distance between Java and MQL5. For example, if I need a library for JSON conversion, on the Java side I find dozens of official, stable versions; in the MQL5 community I have found only rubbish that I had to modify myself.
So, after numerous failed tries at coding my expert in MQL5 (not a simple one, of course), I decided to adopt a radical approach: code an application, client-side in MQL5 and server-side in Java, that provides a Java facade for the MT5 platform: same API, same basic events, and so on. Even though I thought more than once that I was getting stuck on a blind path, I kept coding and eventually made it, obtaining a really solid result.
Naturally, the REST interface drastically reduces performance: each request, even with Tomcat and MT5 running on the same localhost, takes on the order of milliseconds, not micros. But on the other hand this only narrows the range of strategies the architecture suits; it doesn't make it useless at all.
Strategies like scalping and every kind of high-frequency trading are not a good fit for this kind of scenario; vice versa, every other strategy on a longer timeframe, even intraday ones, can be implemented successfully without any cons.
Last but not least, it isn't necessary to use the MQL5 WebRequest() method to call the servlet container: it is possible to import wininet.dll from the OS (talking about Windows), and the Strategy Tester will then work as if the strategy had been coded purely in MQL5, maybe just a little bit slower.
To sum up, I wouldn't be so sarcastic about the Java facade approach for FX trading platforms; citing only the raw performance numbers without contextualizing the overall scenario is a naive way to frame the argument.
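To make the facade idea concrete, a minimal sketch of the server side: a servlet running in Tomcat that the MQL5 client calls over localhost. The endpoint and payload are illustrative assumptions, not my actual API:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class OrderFacadeServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String event = req.getParameter("event"); // e.g. "onTick", "onTrade" (assumed)
        // ... dispatch the event to the Java-side strategy logic ...
        resp.setContentType("application/json");
        resp.getWriter().write("{\"status\":\"ok\"}"); // reply the MQL5 client parses
    }
}
```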
If you need to send/receive synchronous messages between MT4 and a Java application, REST would be the best approach, because fast response matters in this scenario. Message-queue solutions like ZeroMQ fit asynchronous designs better, so they won't help you here. Once you choose the REST approach, you can use MQL4's WebRequest() to call your Java application.
WebRequest isn't the end of the world; you can submit HTTP requests from your EA using the API, and it works even with the Strategy Tester.
To collect tick information and open, update, or close orders, you can use the MT4 Server API.
Please check this URL:
http://mtapi.online/#overlappable-4
Maybe you will find what you want.
I also work with the MT4 Server API, so if you have any questions, please let me know.

Node.js or Java for Data Analytics?

I have to analyze data on the order of 50 TB. I have been searching for some time but am still confused about which to use.
While searching, I came across claims that Node offers less support for heavy computation and numerical analysis and is better suited to smaller data sets.
Is that true?
We have to design complex statistical algorithms and display the results in a web browser.
We will be using Logstash and Elasticsearch for filtering and storing data.
So which would be the better choice: Java or Node?
Definitely Java. This kind of thing is Java's domain. Hadoop, Elasticsearch, Lucene, Cassandra, and Solr are all written in Java. Spark and Storm also run on the Java Virtual Machine. If you intend to use any of these tools, Java would be a first-class language.
Node may be useful for implementing the server side of the front end, enabling designers to use JavaScript on both the client and the server. But as far as web server speed and scalability are concerned, according to the TechEmpower benchmarks, Java is faster too.

Java vs Ruby for SOAP handling

I need to make a decision between using Ruby vs. Java for SOAP integration. My entire web application is built on Ruby on Rails, and there is a significant back-end component that has to integrate with legacy systems using SOAP.
Java has extensive SOAP libraries like Apache Axis and seems to integrate very well with this type of "legacy" web service, while Ruby has some gems like Savon and handSOAP.
I'm biased towards using the Ruby libraries but am concerned about performance and scalability. What performance/scalability issues come with using Ruby here?
For more context, the integration with the legacy system has two components: a daily process, whose performance is less important, and a real-time query engine, whose performance is very important because users are waiting while the query is handled.
I faced the same challenge recently. I originally went with Java but wound up porting everything over to Ruby, using Builder to construct the requests and Nokogiri to parse the responses. I also use SoapUI to help with developing and debugging requests.
Why did I wind up going with Ruby over Java?
Simpler infrastructure. Why have two different paradigms in your architecture if you don't need them? If your site is Ruby on Rails, why introduce Java unless you have to?
Java has some nice libraries like Axis to convert SOAP requests to objects, but that isn't much of a win when most of my logic is in Ruby. It was much easier for me to work with the DOM through Nokogiri than through intermediary Java objects.
All of my logic (ActiveRecord model objects, validation, etc.) lived on the Rails side, so with Java I wound up having to replicate things like database persistence to communicate between the Java code and the Rails code... boo.
The performance concern seems like a red herring. If you are making SOAP requests, the network overhead will likely be your bottleneck, not the language parsing/executing.
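For balance, the Java side does not have to be heavyweight either. A minimal JAX-WS dynamic client sketch; the WSDL URL, namespace, and service/port/operation names are all hypothetical:

```java
import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPMessage;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;

public class LegacySoapClient {
    public static void main(String[] args) throws Exception {
        String ns = "http://example.com/legacy"; // hypothetical namespace
        Service service = Service.create(
                new URL("http://legacy.example.com/QueryService?wsdl"),
                new QName(ns, "QueryService"));
        Dispatch<SOAPMessage> dispatch = service.createDispatch(
                new QName(ns, "QueryPort"), SOAPMessage.class, Service.Mode.MESSAGE);

        // Build a bare request envelope with one hypothetical operation element.
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        request.getSOAPBody().addChildElement(new QName(ns, "getAccountStatus"));

        SOAPMessage response = dispatch.invoke(request); // synchronous call
        response.writeTo(System.out);
    }
}
```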

Pros and Cons of DFC and DFS?

I am new to Documentum. I have to migrate code from Documentum Foundation Classes (DFC) to Documentum Foundation Services (DFS). Can someone give the pros and cons of each, and a good source of information to get started?
By the way, I am writing code in Java to get information from Documentum.
DFS is an abstraction layer on top of DFC.
SourceRebels is partially right, except for the detail that EMC now treats DFS as the primary integration model (API) for external applications. You no longer need to use a compiled language (Java or .NET), since you can do everything via SOAP web-service calls. DFC remains available for low-level interaction, but with every Documentum release more services are added to DFS.
One of the key differences is the object model. In DFS, you can create a batch of operations to send to the server for execution (for instance, create 10 objects). There are also some complex operations in DFS that would take much more code to accomplish using DFC. DFS also allows you to deploy your code to machines that do not have DFC installed.
Your best resource for Documentum-related questions is http://developer.emc.com.
IMHO they are not comparable, because they do not focus on the same thing. DFC is an API for accessing Documentum, while DFS is a service framework with some predefined services providing functionality to interact with Documentum.
That's important: I have never used DFS :-)
DFC = do-it-yourself. Traditional client-server programming. Faster.
DFS = use predefined services, or do it yourself for non-trivial tasks. SOA. You will probably need to deploy your services on a new server or purchase more Documentum licenses (not sure about that). Slower, but I would feel more comfortable using it to access Documentum from legacy systems.
That's my grain of salt; I hope you find it useful.
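To illustrate the DFC "do-it-yourself" style mentioned above, a minimal session sketch in Java; the repository name, credentials, and query are placeholders:

```java
import com.documentum.com.DfClientX;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.IDfLoginInfo;

public class DfcExample {
    public static void main(String[] args) throws Exception {
        DfClientX clientX = new DfClientX();
        IDfClient client = clientX.getLocalClient();
        IDfSessionManager sm = client.newSessionManager();

        IDfLoginInfo login = clientX.getLoginInfo();
        login.setUser("dmadmin");            // placeholder credentials
        login.setPassword("secret");
        sm.setIdentity("my_repository", login);

        IDfSession session = sm.getSession("my_repository");
        try {
            // Fetch a single document by qualification (placeholder query).
            IDfSysObject doc = (IDfSysObject) session.getObjectByQualification(
                    "dm_document where object_name = 'some_document'");
            System.out.println(doc.getObjectId() + " " + doc.getObjectName());
        } finally {
            sm.release(session);             // always hand the session back
        }
    }
}
```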
DFS is the new-age API for Documentum, built on the web-services concept. You need to read the DFS documentation, which is pretty self-explanatory. In addition, you need a basic understanding of web-service calls (exposing a service, WSDL, building remote clients).
