How to capture save or update events in Couchbase - java

I would like to be able to do some data manipulation when documents are updated or created in Couchbase.
Documents can arrive in our database either via Sync Gateway or our own code which streams data in from an http service. It would be great to have one place where I can intercept all updates.
We are running a Spring Boot REST API against this data, so that would be a good place to have the interceptor/listener. Either way, my preference would be for a Java solution.
The data is written as JSON rather than as Spring entities, so I can't use ApplicationListener, which only listens to events on entity classes. Correct me if I'm wrong; I can find precious few examples of setting up ApplicationListeners, so I may be mistaken here, but I can't seem to get it working.
I see that there is an Eventing service where you write JavaScript, but for a number of reasons I'm not keen to go that way: I'm not keen on fragmenting our API code across platforms and languages, I'm not sure I can run the Eventing service on our systems, and so on. Again, I'm open to debate though.
That leaves only DCP, as far as I can tell. It seems very low level (https://blog.couchbase.com/couchbases-history-everything-dcp/ gives the history) but looks like the tool for the job.
The QUESTION: Is there an alternative, less low-level way to catch update events in Couchbase for JSON objects (NOT entities), other than DCP?

Disclaimer: I work for Couchbase and develop the Java DCP client.
If you've already evaluated the Eventing service and decided it doesn't meet your requirements, the Java DCP client might be worth looking into even though it's not officially supported. It's used by the official Couchbase connectors for Kafka, Spark, and Elasticsearch (all of which are open source) and is actively maintained.
If you only care about events that happened since your app started up, usage can be as simple as registering a callback and starting the event stream. Things get a bit more complicated if you need to remember your place in the stream and resume later (to process events that occurred while you were offline, for example), but there's example code for that case too.
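To give a sense of scale, here is a minimal sketch modeled on the examples in the java-dcp-client repository. Because the library is unsupported, the exact method names vary between versions (newer releases use a Client.builder()/Reactor style rather than the configure()/await() style shown here), so treat this as illustrative, not definitive:

    import com.couchbase.client.dcp.Client;
    import com.couchbase.client.dcp.StreamFrom;
    import com.couchbase.client.dcp.StreamTo;
    import com.couchbase.client.dcp.message.DcpDeletionMessage;
    import com.couchbase.client.dcp.message.DcpMutationMessage;
    import java.nio.charset.StandardCharsets;

    public class CouchbaseChangeListener {
        public static void main(String[] args) throws Exception {
            // Hostname and bucket are placeholders; point these at your cluster.
            Client client = Client.configure()
                    .hostnames("localhost")
                    .bucket("my-bucket")
                    .build();

            // Control events (stream lifecycle, rollbacks) must always be released.
            client.controlEventHandler((flowController, event) -> event.release());

            // Data events carry the actual document mutations and deletions.
            client.dataEventHandler((flowController, event) -> {
                if (DcpMutationMessage.is(event)) {
                    String json = DcpMutationMessage.content(event)
                            .toString(StandardCharsets.UTF_8);
                    System.out.println("Mutated: " + json);
                } else if (DcpDeletionMessage.is(event)) {
                    System.out.println("Deleted: " + DcpDeletionMessage.toString(event));
                }
                event.release();
            });

            client.connect().await();
            // StreamFrom.NOW: only events after startup, as described above.
            client.initializeState(StreamFrom.NOW, StreamTo.INFINITY).await();
            client.startStreaming().await();
        }
    }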
The DCP protocol itself is well documented. If you decide to go this route, it might be good to read at least the Architecture section of that documentation. Also be aware that because the Java DCP Client is unsupported, the API can change without notice. (Officially supporting the library and providing a friendlier API are among our long-term goals, but we haven't committed to anything yet.)

Like David, I also work for Couchbase, as a product manager for the Eventing service. You stated:
I would like to be able to do some data manipulation when documents are updated or created in Couchbase.
Eventing certainly allows anyone to respond to and perform data manipulation on mutations (inserts or upserts) via tiny JavaScript fragments. Just take a look at couchbase-eventing-small-scripts-that-solve-big-problems for a quick introduction and also the eventing-examples from the documentation.
If you do go the Eventing service route on an SGW-enabled bucket, you will need to suppress duplicate mutations via the crc64() function built into Eventing (for details, go to eventing-language-constructs and search for: Sync Gateway). In addition, if you want Eventing to directly update the source bucket when SGW is enabled on that bucket, there is a more involved workaround (just reach out to me and I will be happy to provide it).
Next you stated:
not sure I can run the Eventing service on our systems
The Eventing service is bundled with the Couchbase Enterprise offering; it provides scalable infrastructure to run simple JavaScript fragments on data or documents as they change or mutate, without the overhead of an SDK. You either add standalone Eventing node(s) to your Couchbase cluster or colocate the Eventing service with other existing nodes.

Related

How to send/receive data to/from MetaTrader Terminal 4 with JAVA (or anything!)

I have been working on an algorithm (not mine, I am just modifying it) that predicts when to buy and sell on the FOREX market. I need to be able to open and close orders, dynamically update order parameters (such as stop-loss, maximum stop, etc.), and receive real-time tick data.
I have been researching for well over a week with no success.
The closest I have gotten is using JavoNet and the Mt4 Api.
I managed to import the DLL into Java and use an MQL4 function, AccountBalance(); however, this returned 0.0, which was not the account balance. I messed around with the code and the settings on the MT4 client, but still no luck.
Q0: Can anyone please point me in the right direction?
I am new to automated FOREX trading, but from what I understand there is a broker somewhere with an MT4 server, and I connect to that server with the MT4 client on my Windows machine.
Q1: If this is the case, do I need to make an API work with the server side instead of my client side?
All the DLLs I have tried so far have been used with the MT4 client software on my machine.
I have also been doing some reading on the FIX-Protocol and ZeroMQ.
Q2: Can these help me achieve my goal in any way (instead of creating some bridges between JAVA and MT4 DLL's)?
A0: Yes. Forget about REST and synchronous, blocking chains in the FX-trading domain.
A1: Well, not in a typical way. MetaTrader Server is a proprietary suite of systems on the broker side, and its APIs are not disclosed to allow third-party integrations against them.
A2: FIX-Protocol is the industry-standard LP-interfacing lingua franca. If you have a contracted relationship with your institutional trading provider, including a FIX-Protocol gateway port, this may give you A-level access to the market to integrate your trading tools against. If this is the case, forget about MT4 instrumentation, as prime-time cadences are far beyond the MT4 Terminal's localhost processing architecture (multiple events with sub-millisecond time-domain resolution are common, whereas MQL4 provides no direct support for multithreaded-concurrent, let alone parallel, program scheduling designs). FIX-Protocol event latencies sit well below even the first-millisecond column of any such latency comparison.
ZeroMQ may help liberate your further designs from MQL4 limitations. You may like to read my other posts on distributed systems, where MQL4 / ZeroMQ / ML-AI-predictor / GPU-processing infrastructures appear.
Anyway:
Enjoy the Wild Worlds of MQL4/MQL5
Interested? You may also like reading other MQL4, ZeroMQ, distributed-processing, and low-latency trading posts.
You can try the MetaApi cloud service (https://metaapi.cloud), which provides REST API and WebSocket API access to both MetaTrader 4 and MetaTrader 5 accounts.
Official REST API documentation: https://metaapi.cloud/docs/client
SDKs: https://metaapi.cloud/sdks (JavaScript, Python, and Java SDKs are provided as of April 2021)
It supports reading account information, positions, orders, trade history, receiving quotes, and accessing market data.
The service also provides a copy trading API (https://metaapi.cloud/docs/copyfactory) and an API to calculate forex trading metrics on a MetaTrader account (https://metaapi.cloud/docs/metastats).
I started to code an expert advisor with MQL5, naturally on the MT5 platform, and I must admit that the difficulty of managing the application grows sharply with its complexity. It's not only the missing garbage collector, which of course forces you to delete the instances you create, but also that Java offers a set of powerful data structures and syntax that MQL5 simply doesn't have. Last but not least, regarding the community and the third-party libraries available, there is a light-year of distance between Java and MQL5. For example, if I need a library for JSON conversion on the Java side, I find dozens of official and stable versions; in the MQL5 community I have found only rubbish that I had to modify myself.
So, after numerous failed attempts at coding my expert in MQL5 (not a simple one, of course), I decided on a radical approach: coding an application, client-side in MQL5 and server-side in Java, that provides a Java facade for the MT5 platform: same API, same basic events, and so on. Even though I thought more than once that I was heading down a blind alley, I kept coding, and eventually I made it, obtaining a really solid result.
Naturally, the REST interface drastically reduces performance: each request, even with Tomcat and MT5 running on the same localhost, takes on the order of milliseconds, not microseconds. On the other hand, this only narrows the suitability of the architecture; it doesn't make it useless at all.
Strategies like scalping and every kind of high-frequency trading are not a good fit for this scenario; conversely, any longer-horizon strategy, even an intraday one, can be implemented successfully without drawbacks.
Last but not least, it isn't necessary to use the MQL5 WebRequest() method to call the servlet container; it is possible to import wininet.dll from the OS (on Windows), and the Strategy Tester will then work as if the strategy had been coded entirely in MQL5, perhaps just a little slower.
To sum up, I wouldn't be so sarcastic about the Java facade approach for FX trading platforms; citing only raw performance numbers without contextualizing the overall scenario is a naive way to frame the argument.
If you need to send/receive synchronous messages between MT4 and a Java application, REST would be the best approach, because fast response matters in this scenario. Message-queue solutions like ZeroMQ fit better in asynchronous designs, so they won't help you here. Once you choose the REST approach, you can use MQL4's WebRequest() to call your Java application, as sketched below.
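To make that concrete, here is a minimal sketch of the Java side using only the JDK's built-in HTTP server; the /tick endpoint and the CSV payload format are invented for illustration. Note that MT4 will only let WebRequest() reach the URL after you add it to the allowed list under Tools > Options > Expert Advisors.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class Mt4Bridge {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            // The EA would call WebRequest("POST", "http://127.0.0.1:8080/tick", ...).
            server.createContext("/tick", exchange -> {
                // Hypothetical payload, e.g. "EURUSD,1.0921,1.0923".
                String body = new String(exchange.getRequestBody().readAllBytes(),
                        StandardCharsets.UTF_8);
                System.out.println("Tick from MT4: " + body);
                byte[] response = "OK".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, response.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(response);
                }
            });
            server.start();
        }
    }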
WebRequest isn't the end of the world; you can submit HTTP requests from your EA with it, and it works even with the Strategy Tester.
In order to collect tick information and open, update, or close orders, you can use an MT4 server API.
Please check this URL:
http://mtapi.online/#overlappable-4
Maybe you will find what you want.
I also have an MT4 server API; if you have any questions, please contact me.

Realtime Application with Node.js, MongoDB and Java as a backend

I am working on a web application that uses MongoDB as the database. Data is inserted into this DB via a Java application, and I want to somehow detect from a Node.js application that data has been inserted, so that I can push some information to the clients via socket.io.
I know this is quite easy when we remove the Java part from the equation and carry out the insertion via Node.js, but that is not the case for me, so I need pointers on a MongoDB-to-Node.js push kind of thing.
It would be very nice if the solution involved only Java, Node.js, and MongoDB, but if some other third-party framework or technology (like an MQ) must be included, I would be happy to hear that.
I would suggest that you look at Apache Kafka Connect, which provides source connectors that can keep watch on your MongoDB. Confluent has created a MongoDB connector that provides this functionality; you can go through their documentation for further research. Apache Kafka is a message queue system.
I'd suggest having the Java app let your front-end (Node) app know when it has changed something. That keeps the responsibility for knowing what has changed, and how, with the system making the changes. Watching the DB for changes sounds like a good idea and will likely work, but it's far more likely to cause you issues. Consider:
What data to watch?
What changes do you consider worthy of action?
What happens when you change how your data is updated?
How do you know when a change is atomic?
All these issues are at least mitigated if your front end is simply told what has changed and how, where to get the resource, and when it happened.
How you tell your front end is up to you. Plain HTTP calls from Java to your front end are simple, if a little unreliable and unpredictable load-wise. A queue/notification service like Amazon SNS might be a little more robust.
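To make the "plain HTTP calls" option concrete, here is a hedged sketch of the Java side; the Node endpoint (/events) and the JSON shape are assumptions for illustration.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // After the Java app writes to MongoDB, it tells the Node front end what
    // changed, so Node can push the update to clients over socket.io.
    public class ChangeNotifier {
        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        public static void notifyFrontEnd(String collection, String documentId) throws Exception {
            // Hypothetical notification payload; adapt to whatever Node expects.
            String json = String.format(
                    "{\"collection\":\"%s\",\"id\":\"%s\",\"action\":\"insert\"}",
                    collection, documentId);
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:3000/events"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();
            CLIENT.send(request, HttpResponse.BodyHandlers.discarding());
        }
    }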

GAE/GWT server side data inconsistent / not persisting between instances

I'm writing a game app on GAE with GWT/Java and am having issues with server-side persistent data.
Players poll via RPC for active games and game states, all stored on the server. Sometimes client polling fails to find game instances that I know should exist. This only happens when I deploy to Google appspot; locally everything is fine.
I understand this could be because appspot is a cloud service that can spawn and use a new instance of my servlet at any point, and the existing data does not persist between instances.
Single games only last a minute or two and the data changes rapidly (multiple times a second), so what is the best way to ensure that RPC calls to different instances will use the same server-side data?
I have had a look at the DataStore API and it seems to be database-like storage, which I'm guessing will be way too slow for what I need. Also, Memcache can be flushed at any point, so that's not useful.
What am I missing here?
You have two issues here: persisting data between requests and polling data from clients.
When you have a distributed servlet environment (such as GAE) you cannot make a request to one instance, save data to memory, and expect that data to be available on other instances. This is true for GAE and any other servlet environment where you have multiple servers.
So you need to save the data to some shared storage: the Datastore is costly, persistent, reliable, and slow; memcache is fast and free, but unreliable. Usually we use a combination of both. Some libraries even transparently combine both: NDB, Objectify.
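As a minimal read-through sketch of that combination, using the classic App Engine Java APIs (the Game kind and key layout here are placeholders):

    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;
    import com.google.appengine.api.datastore.EntityNotFoundException;
    import com.google.appengine.api.datastore.Key;
    import com.google.appengine.api.datastore.KeyFactory;
    import com.google.appengine.api.memcache.MemcacheService;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;

    // Try memcache first (fast, unreliable); fall back to the Datastore
    // (slow, durable) on a miss and repopulate the cache.
    public class GameStateStore {
        private final MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
        private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

        public Entity loadGame(String gameId) throws EntityNotFoundException {
            Entity cached = (Entity) cache.get(gameId);
            if (cached != null) {
                return cached;
            }
            Key key = KeyFactory.createKey("Game", gameId);
            Entity game = datastore.get(key);
            cache.put(gameId, game); // warm the cache for the next poll
            return game;
        }

        public void saveGame(String gameId, Entity game) {
            datastore.put(game);     // durable write first
            cache.put(gameId, game); // then refresh the cache
        }
    }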
On GAE there is also a third option for semi-persistent shared data: backends. Those are always-on instances where you control startup/shutdown.
Data polling: if you have multiple clients waiting for updates, it's best not to use polling. Polling makes a lot of unnecessary requests (when the data has not changed on the server) and there is still a minimum delay (since you poll at some interval). Instead of polling, push updates via the Channel API, as sketched below. There are even GWT libs for it: gwt-gae-channel, gwt-channel-api.
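For illustration, a sketch of the server side of that push flow with the Channel API (using the player ID as the channel key is an assumption):

    import com.google.appengine.api.channel.ChannelMessage;
    import com.google.appengine.api.channel.ChannelService;
    import com.google.appengine.api.channel.ChannelServiceFactory;

    // Push game-state changes to clients instead of having them poll.
    public class GamePushService {
        private final ChannelService channelService = ChannelServiceFactory.getChannelService();

        // Called when a player joins; the returned token goes to the client,
        // which uses it to open a channel and listen for messages.
        public String registerPlayer(String playerId) {
            return channelService.createChannel(playerId);
        }

        // Called whenever the game state changes; delivery is immediate,
        // so there is no polling-interval delay.
        public void pushGameState(String playerId, String gameStateJson) {
            channelService.sendMessage(new ChannelMessage(playerId, gameStateJson));
        }
    }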
Short answer: You did not design your game to run on App Engine.
You sound like you've already answered your own question. You understand that data is not persisted across instances. The two mechanisms for persisting data on the server side are memcache and the Datastore, but you also understand their limitations. You need to architect your game around this.
If you're not using memcache or the Datastore, how are you persisting your data? (My best guess is that you aren't actually persisting it.) From the vague details, you have not architected your game to run across multiple instances, which is essential for any app running on App Engine. It's a basic design principle that you don't know which instance any HTTP request will hit. You have to rearchitect to use the Datastore plus memcache.
If you want to use a single server, you can use backends, which behave like single servers that stick around (if you limit them to one instance). Frankly, though, because of the cost you're better off with Amazon or Rackspace if you go this route. You will also have to deal with scaling on your own; i.e., if a game is running on a particular server instance, you need a way for requests for that game to consistently hit that instance.
Remember you can deploy GWT applications without GAE, see this explanation:
https://developers.google.com/web-toolkit/doc/latest/DevGuideServerCommunication#DevGuideRPCDeployment
You may want to ask yourself: Will your application ever NEED multiple server instances or GAE-specific features?
If so, then I agree with Peter Knego's reply regarding memcache etc.
If not, then you might be able to work around your problem by choosing a different hosting option (other than GAE), particularly one that lets you work with just a single instance. You could then indeed simply manage all your game data in server memory, as I understand you have been doing so far.
If this solution suits your purpose, then all you need to do is find a suitable hosting provider. This may well be a cloud-based PaaS offer, provided that they let you put a hard limit (unlike with GAE) on the number of server instances, and that it goes as low as one. For example, Heroku (currently) lets you do that, as far as I understand, and apparently it's suitable for GWT applications, according to this thread:
https://stackoverflow.com/a/8583493/2237986
Note that the above solution involves a bit of fiddling and I don't know your needs well enough to make a strong recommendation. There may be easier and better solutions for what you're trying to do. In particular, have a look at non-cloud-based hosting options and server architectures that are optimized for highly time-critical, real-time multiplayer gaming.
Hope this helps! Keep us posted on your progress.

Proper way to cache values from remote database in an Android application

I'm well aware of how to communicate with an outside server from Android. Currently I'm using the relatively new App Engine Connected Android project to do it, and everything works fairly well. The only thing I'm concerned with is handling downtime for the server (if any) and loss of internet on the client side.
With that in mind, I have the following questions on an implementation:
What is the standard technique for caching values in a SQLite database on Android while still trying to constantly receive data from the web application?
How can I be sure that I have the most up-to-date information when that information is available?
How would I wrap up this logic (of determining which one to pull from and whether or not the data is recent) into a ContentProvider?
Is caching the data even a good idea? Or should I simply assume that if the user isn't connected to the internet, the information isn't readily available?
Good news! There's an Android construct built just for doing this kind of thing: it's called a SyncAdapter. This is how all the Google apps do database syncing, and there's a great Google I/O video all about using it! It's actually one of my favorites. It gives you a nice, really high-level overview of how to keep remote resources synced with your server using something called REST.
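For orientation, here is the skeleton of a SyncAdapter; a real one also needs a Service wrapper plus account and sync-adapter XML metadata, which are omitted here.

    import android.accounts.Account;
    import android.content.AbstractThreadedSyncAdapter;
    import android.content.ContentProviderClient;
    import android.content.Context;
    import android.content.SyncResult;
    import android.os.Bundle;

    // The framework calls onPerformSync on a background thread whenever a
    // sync is requested or scheduled.
    public class AppSyncAdapter extends AbstractThreadedSyncAdapter {
        public AppSyncAdapter(Context context, boolean autoInitialize) {
            super(context, autoInitialize);
        }

        @Override
        public void onPerformSync(Account account, Bundle extras, String authority,
                                  ContentProviderClient provider, SyncResult syncResult) {
            // Hypothetical flow: fetch fresh rows from the server, then upsert
            // them into SQLite through the ContentProvider, so the UI always
            // queries local data whether or not the network is available.
        }
    }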

Opinions/headers for splitting an application into front-end/back-end (not user/admin, but ui/logic)

I am building a new web application, playing around with the architecture, and I would like some opinions about splitting the UI and business logic and running them on separate servers.
This means that if someone requests a page, the front end will itself request the data from a back-end server, perform no calculations/logic of its own, and just use the data to populate a template and respond with that.
Back-End: Java + JAX-WS
Front-End: Kohana 3.1 (PHP)
Data Interchange Format: JSON
Advantages:
clear separation of logic and UI
ability to choose language/framework best suited for either end
possibility to add logic/UI servers depending on which one is the bottleneck in case of performance issues
possibility to make the API publicly available without any extra work (pseudo-internal requests will go to the same API as requests from third-party applications)
ability to change (if need be) the framework/language of either side without having to edit the other
ability to specify different server hardware according to the needs of the logic/UI application
better security (if the API is private) (??)
Disadvantages:
latency (??)
more servers
So what do you think? Is this a good idea? I haven't been able to find much information so far, but my guess is that many big sites do it this way, right? How will performance be affected (I am thinking of running it on EC2)? What are further advantages/disadvantages? Any thoughts on the language/framework choices?
A similar architectural pattern is often employed, though generally the UI part is moved to the client. So you have a back end that responds with JSON, a quick HTTP server with full-blown caching (which can use HTML5 app caching as well), and a rich JavaScript client that requests the JSON from the back end and builds the UI.
More on this pattern: http://www.metaskills.net/2008/05/24/the-ajax-head-design-pattern/
The main negative of the approach is that it is generally more work in the beginning; if you don't need an external API, then using a simpler architecture will be easier to program.
You also might want to employ the idea of keeping your servers stateless and let the client side handle any state.
This simplifies the whole load-balancing and fail-over stuff and makes you think about a more resource-oriented architecture.
And if you are set on JSON already, you might want to explore the idea of NOT mapping POJOs to your data, and instead use a document store like MongoDB or CouchDB to access the JSON data directly.
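For instance, with the MongoDB Java driver you can work with JSON documents directly and never define an entity class (a sketch; the connection string and database/collection names are placeholders):

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    public class JsonStore {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> pages =
                        client.getDatabase("app").getCollection("pages");
                // Store incoming JSON as-is; no POJO mapping involved.
                pages.insertOne(Document.parse("{\"slug\": \"home\", \"title\": \"Welcome\"}"));
                Document home = pages.find(new Document("slug", "home")).first();
                System.out.println(home.toJson()); // hand the JSON straight to the UI layer
            }
        }
    }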
