I have a Spring Boot application running on a server; let's call it server1. I have written a download API on server1 which calls another Spring Boot application running on server2 to download a large file (as chunks). With this setup I have observed the following behaviour: from the client side (a REST client that has made an HTTP call to server1), the data only starts to appear once all the chunks have been fetched by server1. For a large file, this means the client has to wait minutes before it starts receiving data from the API.

My expectation is that these chunks should be available to the client in real time, as soon as server1 gets them. Is this possible or not?
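The behaviour described usually means server1 buffers the entire upstream body before writing anything to its own response. A minimal sketch of the alternative, assuming plain `InputStream`/`OutputStream` handles (this is the copy loop you would run inside, say, a Spring `StreamingResponseBody`; the class and method names here are hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkRelay {

    // Copies the upstream body to the downstream response in small chunks,
    // flushing after each chunk so bytes reach the client immediately
    // instead of accumulating on server1 until the transfer completes.
    static long relay(InputStream from, OutputStream to) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = from.read(buf)) != -1) {
            to.write(buf, 0, n);
            to.flush(); // push this chunk out now, not at end of request
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a large upstream body with random bytes.
        byte[] data = new byte[100_000];
        new java.util.Random(42).nextBytes(data);

        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = relay(new ByteArrayInputStream(data), sink);

        // Verify nothing was lost or reordered along the way.
        System.out.println(copied == data.length
                && java.util.Arrays.equals(data, sink.toByteArray()));
    }
}
```

The essential point is that server1's HTTP client must expose the response as a stream (not as a fully buffered byte array), and server1 must write and flush as it reads.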
We are deploying a Spring Boot app to Kubernetes.
When the first user request comes in, it takes more than 10s to respond.
Subsequent requests take 200ms.
I have created a warmup procedure that exercises the key services in @PostConstruct. It reduces the time to process the first query to 4s.
So I wanted to simulate this first call. I know that a Kubernetes readiness probe can make a POST request, but I need authorization and other things. Can I make a real HTTP call to the controller from the app itself?
Sure, you can always make an HTTP call to localhost.
The solution isn't specific to k8s, Spring, or Java; it works for any web server.
You could also try making your readiness probe just check the TCP port, or run some internal script.
Try RestTemplate; you can consume any web service with it.
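A loopback warmup call along the lines suggested above can be sketched with the JDK's own HttpClient, no extra dependency needed. The endpoint path and auth header below are hypothetical stand-ins, and the embedded HttpServer only simulates the controller so the snippet is self-contained:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Warmup {
    public static void main(String[] args) throws Exception {
        // Stand-in for the real controller endpoint (hypothetical /api/warm path);
        // in the actual app this is your running Spring controller.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/warm", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        // The warmup call itself: a real loopback HTTP request, headers and all,
        // so filters, serialization, and JIT-compiled paths are exercised
        // before the first real user request arrives.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + port + "/api/warm"))
                .header("Authorization", "Bearer warmup-token") // hypothetical token
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());

        server.stop(0);
    }
}
```

In a Spring Boot app you would trigger this from an ApplicationReadyEvent listener (so the server is actually listening), then flip readiness to "up" afterwards.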
We have a set of services in a Vert.x cluster. We serve the web front end through an API gateway, which is one service within the cluster. The client has a requirement to download some data as a CSV file. It should be transmitted as below.
Service A --(Event bus)---> API gateway ---(Web socket)---> Browser
My question is, is it wise to stream such file over event bus from Service A to API gateway? (File may get as large as 100 MB)
You can, but it's not designed for that. It will create congestion, because the entire file is kept in memory until the transfer is complete. Just set up an HTTP server, communicate the URL through a consumer, and transfer the file over HTTP. Then you get all of HTTP's support as well.
If you don't want a permanent HTTP server for it, just start one whenever a download request comes in.
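The "start an HTTP server on demand and hand out the URL" idea can be sketched in plain Java; in Vert.x you would use `vertx.createHttpServer()` and send the URL over the event bus instead. The path and CSV content below are made up for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class CsvOverHttp {
    public static void main(String[] args) throws Exception {
        byte[] csv = "id,name\n1,alice\n2,bob\n".getBytes(StandardCharsets.UTF_8);

        // Service A: start a throwaway HTTP server just for this download...
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/downloads/report.csv", exchange -> {
            exchange.getResponseHeaders().add("Content-Type", "text/csv");
            exchange.sendResponseHeaders(200, csv.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(csv); // a 100 MB file would be streamed in chunks instead
            }
        });
        server.start();

        // ...and publish only the small URL message (over the event bus in
        // Vert.x), never the 100 MB payload itself.
        String url = "http://localhost:" + server.getAddress().getPort()
                + "/downloads/report.csv";

        // Gateway / browser side: fetch the file over plain HTTP.
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body().startsWith("id,name"));

        server.stop(0);
    }
}
```

The event bus then only ever carries a tiny message (the URL), while the bulk transfer gets HTTP's backpressure, range requests, and resumability for free.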
I am working on a desktop application that connects to a repository and performs CRUD operations using the CMIS API. The CMIS API uses HTTP communication internally. How can I find out the number of server calls made by the desktop application?
If you are using OpenCMIS, set the log level of org.apache.chemistry.opencmis.client.bindings.spi.http.DefaultHttpInvoker to debug. It logs every URL that is called by the client.
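Assuming Logback as the logging backend (adjust accordingly for Log4j or java.util.logging), the suggested log level can be set with a snippet like:

```xml
<!-- logback.xml: log every URL that OpenCMIS calls -->
<logger name="org.apache.chemistry.opencmis.client.bindings.spi.http.DefaultHttpInvoker"
        level="DEBUG"/>
```

Counting the DEBUG lines from that logger in the log output then gives you the number of server calls.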
The documentation says that when the Spring Cloud Config server detects configuration changes, it fires a RefreshRemoteApplicationEvent. But the documentation says nothing about how that event is handled. Is it true that each application which receives such an event should handle it by itself? E.g., is it not required to refresh the entire Spring context when such an event is received?
I think the documentation only talks about the server side, i.e. the Spring application that talks to the git repository and exposes the condensed information to interested clients. In this process, for example using webhooks, the server can be informed about changes in the git repository, and in turn sends out events to applications that might need to be re-configured.
Your question seems to be concerned about the client side. If your application uses Spring Cloud Config, it should automatically request the new configuration data as soon as the event described above arrives at the client. This in turn should mean that the new configuration values are available or some configured behaviour (log level?) changes.
To actually make the server fire an event that arrives at the client, the documentation suggests Spring Cloud Bus. If you create (for example) a RabbitMQ instance, and make this available to both your clients and your server, Spring automatically attaches to this system and is able to process messages. Additionally, the Spring Cloud Config server automatically sends the desired events using this system, and the clients automatically process these.
In short, if you add Spring Cloud Bus to all involved applications (and make the system used by it, e.g. RabbitMQ, available to them), everything works as expected.
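A minimal sketch of that wiring, assuming the AMQP bus starter (`spring-cloud-starter-bus-amqp` on the classpath of the server and every client) and standard Spring Boot RabbitMQ properties; the host name here is made up:

```yaml
# application.yml on the config server and all client applications
spring:
  rabbitmq:
    host: rabbitmq.example.internal   # hypothetical broker address
    port: 5672
```

With the bus starter present, the config server publishes RefreshRemoteApplicationEvent over the broker and the clients react to it automatically; no hand-written event handler is needed for plain configuration refreshes.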
I have a Spring-Cloud based application.
I have two gateway servers behind an HTTP load balancer. The gateway servers route calls to 3 types of backend servers (let's call them UI1, UI2, REST) by querying a Eureka server for an endpoint.
However, if I am understanding the Spring documentation correctly, once a connection has been made between the client and the endpoint, Eureka is no longer needed until a disaster occurs. Client-side load balancing means that the client now has knowledge of an endpoint, and as long as it is working, it will not fetch a new endpoint from Eureka.
This is good in general, but in my setup, the client is actually the Gateway server, not the client browser. The client browser connects to the HTTP load balancer. Everything else is pretty much managed by the gateway.
So, it appears that if I have 2 Gateways and 6 backend servers of each type - I'm not getting any benefits of scalability. Each gateway will take ownership of the first two backend servers per server type, and that's it. The other 4 servers will just hang there, waiting for some timeout or crash to occur on the first 2 servers, so they could serve the next requests.
This doesn't sound like the correct architecture to me. I would have liked Eureka/client-side load balancing to be able to do some sort of round-robin or similar approach that distributes my calls as evenly as possible across all servers.
Am I wrong?