In our company we are developing a web-based brokerage application in which customers log in from a client product and place order requests with the server.
The server exposes both WebSocket and REST endpoints and is deployed in a JBoss container. Client applications push orders to the server via WebSocket at a high rate (approximately 5,000 requests per second).
Other inquiry services, such as login and buying-power validation, are provided through the REST services.
The complication arose during load tests in different environments. We logged into the server via one client product and pushed order requests at a high rate over WebSocket,
while at the same time trying to log into the server with another client product. All of a sudden the REST API became overwhelmed and unresponsive.
The server kept processing order requests but failed to respond to the login request. Recommendations for this scenario are highly appreciated.
Is there any throttling mechanism available in the JBoss container, or should this precaution be taken at the application level?
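For reference, the kind of application-level throttling I have in mind would look roughly like the sketch below, assuming Guava's RateLimiter and the standard Java WebSocket API; the endpoint path and the permitted rate are made-up values that would need tuning from our load tests.

    import java.io.IOException;

    import javax.websocket.OnMessage;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    import com.google.common.util.concurrent.RateLimiter;

    // Hypothetical order endpoint: cap order intake in the application so that
    // REST calls (login, buying-power checks) still get worker threads.
    @ServerEndpoint("/orders")
    public class OrderEndpoint {

        // Allow roughly 4000 order messages per second across the endpoint.
        private static final RateLimiter ORDER_LIMITER = RateLimiter.create(4000.0);

        @OnMessage
        public void onOrder(String orderJson, Session session) throws IOException {
            if (!ORDER_LIMITER.tryAcquire()) {
                // Shed load instead of queueing; the client should back off and retry.
                session.getBasicRemote().sendText("{\"status\":\"THROTTLED\"}");
                return;
            }
            // validate and process the order here ...
        }
    }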
Related
We are currently planning to migrate our production application to microservices. The application is implemented in Scala and Akka and relies on high concurrency to process items delivered via MQ; it is a messaging application that delivers mails and SMS. A few services are exposed over REST, but the main processing is driven by MQ, and some other tasks are done as batch processing. The question is whether the MQ part can be migrated to a RESTful approach, and how to achieve that degree of high concurrency. If it cannot be turned into a REST resource, can it be kept as a microservice that continuously listens to an MQ (JMSListener), as it does now? Thanks.
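For example, if it cannot become a REST resource, the MQ part could stay as a small listener service along the lines of the sketch below. This is only a sketch using Spring's @JmsListener rather than our actual Akka code; the queue name and concurrency range are made up.

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.jms.annotation.EnableJms;
    import org.springframework.jms.annotation.JmsListener;
    import org.springframework.stereotype.Component;

    // A self-contained microservice whose only job is to consume the queue.
    @SpringBootApplication
    @EnableJms
    public class MessagingConsumerService {
        public static void main(String[] args) {
            SpringApplication.run(MessagingConsumerService.class, args);
        }
    }

    @Component
    class NotificationListener {

        // "notification.queue" is a hypothetical destination; the concurrency range
        // gives a pool of consumers so messages are processed in parallel.
        @JmsListener(destination = "notification.queue", concurrency = "5-20")
        public void onMessage(String payload) {
            // parse the payload and send the mail / SMS ...
        }
    }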
I want to host a Neo4j web service for a Wikipedia graph of pages and categories and basically get some recommendations out via cypher queries.
I have already created and populated the database.
How do I “ideally” set up such a service?
Should I keep one dedicated instance for the Neo4j server and separate instances running Tomcat or Jetty, which receive the clients' requests and then forward them to the Neo4j server instance via the REST API?
Or should I send requests (Cypher via REST) directly from the client to the one Neo4j instance?
Or should I choose the unmanaged extensions provided by Neo4j?
Or is there any other way to set it up keeping scaling in mind?
I do plan to run load balancing and HA clusters in the future.
The web service will be accessed by browsers and mobile apps.
I have never hosted such a web service before so it would be great if someone helps me out :)
I would recommend that you create an API app that sits between your clients and Neo4j. Your clients would make requests to the API server, which would then make a Cypher request to Neo4j (could be one instance or an HA cluster).
The benefits of this include being able to cache at the API layer, authenticate requests before they hit your database server, update Cypher queries instantly by deploying to the API server (imagine if the Cypher logic lived in your mobile app - you would be at the mercy of app store / user upgrades), and scale your API easily by deploying more API servers.
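As a rough illustration, the API layer could be as small as the following sketch, assuming a Spring Boot controller in front of Neo4j's transactional Cypher HTTP endpoint; the recommendation query and paths are made up, and authentication is omitted.

    import java.util.Collections;
    import java.util.Map;

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.client.RestTemplate;

    // The API layer that browsers / mobile apps talk to; only this layer knows Cypher.
    @RestController
    public class RecommendationController {

        // Neo4j's transactional Cypher endpoint (authentication omitted for brevity).
        private static final String NEO4J_TX_ENDPOINT =
                "http://localhost:7474/db/data/transaction/commit";

        private final RestTemplate neo4j = new RestTemplate();

        @GetMapping("/recommendations/{page}")
        public String recommend(@PathVariable String page) {
            // Hypothetical recommendation query; it lives here, not in the clients,
            // so it can be changed by redeploying only the API server.
            Map<String, Object> statement = Map.of(
                    "statement",
                    "MATCH (p:Page {title: $title})-[:IN_CATEGORY]->()<-[:IN_CATEGORY]-(rec) "
                            + "RETURN rec.title LIMIT 10",
                    "parameters", Map.of("title", page));
            Map<String, Object> body =
                    Map.of("statements", Collections.singletonList(statement));
            return neo4j.postForObject(NEO4J_TX_ENDPOINT, body, String.class);
        }
    }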
I've skipped this layer and just used unmanaged extensions to extend the Neo4j REST API, with clients accessing Neo4j directly. That works OK for rapidly implementing a prototype, but you lose many of the benefits listed above, with the additional downside that you have to restart your database to deploy a new version of the unmanaged extension.
I have a Spring-Cloud based application.
I have two gateway servers behind an HTTP Load balancer. The gateway servers redirect calls to 3 types of backend servers (let's call them UI1, UI2, REST) by querying a Eureka server for an endpoint.
However, if I am understanding the Spring documentation correctly, once a connection has been made between the client and the endpoint, Eureka is no longer needed until a disaster occurs. Client-side load balancing means that the client now has knowledge of an endpoint, and as long as it is working, it will not fetch a new endpoint from Eureka.
This is good in general, but in my setup, the client is actually the Gateway server, not the client browser. The client browser connects to the HTTP load balancer. Everything else is pretty much managed by the gateway.
So, it appears that if I have 2 Gateways and 6 backend servers of each type - I'm not getting any benefits of scalability. Each gateway will take ownership of the first two backend servers per server type, and that's it. The other 4 servers will just hang there, waiting for some timeout or crash to occur on the first 2 servers, so they could serve the next requests.
This doesn't sound like the correct architecture to me. I would have liked Eureka/Client side load balancing to have the ability to do some sort of round-robin or other approach, that would distribute my calls evenly between all servers as much as possible.
Am I wrong?
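For reference, this is roughly how the gateways call the backends today: a Ribbon-backed, load-balanced RestTemplate resolved against Eureka. It is only a sketch; the service id REST is one of ours, and the path is made up.

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.cloud.client.loadbalancer.LoadBalanced;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.client.RestTemplate;

    @Configuration
    public class GatewayClientConfig {

        // Marking the RestTemplate @LoadBalanced lets Ribbon intercept calls and
        // pick an instance registered in Eureka for the requested service id.
        @Bean
        @LoadBalanced
        public RestTemplate loadBalancedRestTemplate() {
            return new RestTemplate();
        }
    }

    class BackendClient {

        @Autowired
        private RestTemplate restTemplate;

        public String callRestBackend() {
            // "REST" is the backend type from the question; "/api/ping" is made up.
            return restTemplate.getForObject("http://REST/api/ping", String.class);
        }
    }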
I am looking for the simplest solution to create a client-server network architecture using the Spring 3 framework. The architecture will have many clients and multiple servers. Each client can connect to each server. Each client can define a set of services that would have to be generated at runtime by the server.
Communication protocol:
1. The client says hello to one of the 5 servers.
2. The server gathers its local metadata about the stored data and sends it to the client.
3. The client picks some of this information and sends the chosen metadata subset back to the server, deciding which data it will need later.
4. Based on the metadata choice picked by the client, the server dynamically generates services that will be made available to the client, supplying it with the data pointed to by the config requested in step 3 (e.g. as serialized JSON).
5. The client gets the information about the generated services and uses it for future calls to those services.
The biggest issue is that the client knows nothing about the server resources to be served until it receives the answer, and the server has no services defined until it gets the request from the client.
I considered Spring 3:
HTTP Invokers
JMS
Netty (combined with Spring)
But as far as I have tried them, the above either make the dynamic service generation requirement hard to meet, or the amount of code (Netty) is large.
I have rejected SOAP due to its heavy nature.
On the other hand, as far as I know, REST does not bring any benefit here. It is just a way of serving data, and it requires some kind of servlet container like Tomcat since it uses HTTP. See #Timmmm's great and simple answer on the REST approach.
What I am after:
as simple as possible
dynamic generation of services based on client choice
keep the server lightweight, i.e. no additional server instance (it would be nice to eliminate Tomcat, but it's not crucial)
spring based
What technology would you recommend?
It is quite hard to accomplish this task with the requirement of configuration-based service generation at runtime.
I do NOT want to rely on properties files; services must be generated on the fly based on the client request.
Thank you in advance for answers and tips.
I would look at RESTful architecture. Some of its principles are what you are after, including discovery.
Spring provides easy integration with REST.
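A rough sketch of how that discovery could look: the client asks what metadata is available, posts its choice, and gets back links to resources the server has just made available for that choice. Note that this uses annotations from a newer Spring MVC than the Spring 3 you mention, and every name and path below is made up.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.stream.Collectors;

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical discovery controller: no data service is pre-defined; the
    // server answers a metadata selection with links the client can follow.
    @RestController
    public class DiscoveryController {

        // selection id -> the metadata subset the client picked (step 3 of the protocol)
        private final Map<String, List<String>> selections = new ConcurrentHashMap<>();

        @GetMapping("/metadata")
        public List<String> metadata() {
            // step 2: describe what this server can offer (made-up data sets)
            return List.of("orders", "customers", "invoices");
        }

        @PostMapping("/selections")
        public Map<String, Object> select(@RequestBody List<String> chosen) {
            // step 4: "generate" services by registering the choice and returning
            // links to parameterised resources that will serve it
            String id = Integer.toHexString(chosen.hashCode());
            selections.put(id, chosen);
            return Map.of(
                    "selection", id,
                    "links", chosen.stream()
                            .map(name -> "/selections/" + id + "/data/" + name)
                            .collect(Collectors.toList()));
        }

        @GetMapping("/selections/{id}/data/{name}")
        public Map<String, Object> data(@PathVariable String id, @PathVariable String name) {
            // step 5: serve the data the selection pointed at (stubbed out here)
            return Map.of("dataset", name, "rows", List.of());
        }
    }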
Well, for the past few days I have been trying to put the plans for my chess server project on the table. It is planned to consist of several web services that interact with each other. The web services will be written in Java and deployed on Apache Tomcat 7.0.20 with the Axis2 1.6.0 web service engine.
So far, there will be an authentication web service, a player pool web service, a validation web service, and a 'business logic' web service, which will be the only web service known to the client application. All client requests will go through this service and be forwarded accordingly.
All players' moves must go through the web services because of move validation, history, and so on. The problem occurs when the other player needs to be informed of the opponent's played move. Persistent client requests towards the service (to discover whether the opponent has moved) are out of the question, because the turn change must be immediate when the opponent plays his move. How can this be achieved using Java web services and the technologies mentioned earlier? Is it possible for the web service to contact the opposing player and inform him about the opponent's move? Is there another way to do this within this scope of technologies?
Edit: The client application is planned to be a desktop application, possibly Java or C#.
One option for Tomcat-to-web-browser push communication is Comet (sometimes called CometD or Bayeux).
From the wiki article:
Comet is a web application model in which a long-held HTTP request allows a web server to push data to a browser, without the browser explicitly requesting it.
(Note: Emphasis Mine)
What this means is that the server can notify the client of pending changes without the browser specifically polling. With a good JavaScript framework (such as Dojo or this jQuery plugin), you can seamlessly work with older browsers by polling.
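For example, on Tomcat 7 the same long-held-request idea can be sketched with the Servlet 3.0 async API; everything below (names, the single shared queue, the JSON payload) is made up just to show the shape of it.

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical long-polling endpoint: the waiting player parks a request here;
    // when the opponent's move is validated, the parked request is completed.
    @WebServlet(urlPatterns = "/moves/poll", asyncSupported = true)
    public class MovePollServlet extends HttpServlet {

        // In practice you would keep one queue per game; one queue keeps the sketch short.
        private static final Queue<AsyncContext> WAITING = new ConcurrentLinkedQueue<>();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();
            ctx.setTimeout(30_000);   // let the client re-poll if nothing happens for 30 s
            WAITING.add(ctx);
        }

        // Called by the move-validation logic once a move is accepted.
        public static void publishMove(String moveJson) {
            AsyncContext ctx;
            while ((ctx = WAITING.poll()) != null) {
                try {
                    ctx.getResponse().setContentType("application/json");
                    ctx.getResponse().getWriter().write(moveJson);
                    ctx.complete();
                } catch (Exception e) {
                    // the request already timed out or the client went away; skip it
                }
            }
        }
    }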
Some good links for learning more about Comet:
RESTful Web Services and Comet
Comet Slideshow Example on Grizzly
Advanced IO and Tomcat
Dojo Foundation CometD
Hopefully this helps.
You do not state your client technology (browser? desktop app?)
Anyway, there is no direct solution if you are addressing home users. To push to them directly, you would need them to open or NAT the needed port to their computer so that you can reach their PC, which is far too complicated for the average home user.
For browsers it is even more complicated, as they are clients, not servers. Some framework might be used to simulate a server inside a browser (I think there is at least one, but I cannot recall its name), but that would effectively work by repeating AJAX calls to the server until a change is made.
And last, if you poll the server every second, the latency would be low enough even for "fast-chess".
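A once-a-second poller on the desktop client side could be as small as the sketch below (Java 11's HttpClient; the endpoint and game id are made up).

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical desktop-client poller: asks the 'business logic' service once a
    // second whether the opponent has moved.
    public class MovePoller {

        public static void main(String[] args) {
            HttpClient http = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://localhost:8080/chess/games/42/last-move")).build();

            ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
            poller.scheduleAtFixedRate(() -> {
                try {
                    HttpResponse<String> response =
                            http.send(request, HttpResponse.BodyHandlers.ofString());
                    if (response.statusCode() == 200) {
                        System.out.println("Opponent moved: " + response.body());
                    }
                } catch (Exception e) {
                    // transient network error: just try again on the next tick
                }
            }, 0, 1, TimeUnit.SECONDS);
        }
    }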