After about 6 minutes the client (aka the browser) doesn't receive any new updates from the subscribed queue when the device is in sleep mode. If I look into RabbitMQ Management, the related queues have disappeared (named e.g. stomp-subscription-MBSsZ9XB0XCScXbSc3bCcg). After the device wakes up, new queues are created and messaging only works for newly created messages. The old ones never reach the device.
Here is my setup:
Backend: Java application with Spring and RabbitTemplate
Frontend: Angular application, which subscribes via RxStompService
Use case: WebView running in a Xamarin.Forms app on an Android tablet, which opens the URL to the frontend application
This is how the message is sent from backend to frontend:
AMQPMessage<CustomNotificationMessage> msg = new AMQPMessage<CustomNotificationMessage>(
        1, SOME_ROUTING_KEY, mand, trigger, new CustomNotificationMessage());

rabbitTemplate.setRoutingKey(SOME_ROUTING_KEY);
rabbitTemplate.convertAndSend(msg);
RabbitMqConfig.java:
@Bean
public RabbitTemplate rabbitTemplate() {
    CachingConnectionFactory connectionFactoryFrontend = new CachingConnectionFactory("some.url.com");
    connectionFactoryFrontend.setUsername("usr");
    connectionFactoryFrontend.setPassword("pass");
    connectionFactoryFrontend.setVirtualHost("frontend");

    RabbitTemplate template = new RabbitTemplate(connectionFactoryFrontend);
    template.setMessageConverter(jsonMessageConverter());
    template.setChannelTransacted(true);
    template.setExchange("client-notification");

    return template;
}
My idea now is to use a TTL for the frontend queues. But how do I do that when I haven't created a queue at all?
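Something like the following is what I have in mind - just a sketch on my side, assuming I declare the queue explicitly with Spring AMQP and that client-notification is a direct exchange (the queue name and the 30-minute expiry are made-up values):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.amqp.rabbit.core.RabbitAdmin;

// Declares a named queue that survives short client outages and is only
// removed after 30 minutes without any consumer (x-expires is in milliseconds).
public void declareClientQueue(RabbitAdmin rabbitAdmin) {
    Queue queue = QueueBuilder.durable("client-notification.SOME_ROUTING_KEY") // hypothetical name
            .withArgument("x-expires", 30 * 60 * 1000)                         // queue expiry: 30 minutes
            .build();
    rabbitAdmin.declareQueue(queue);

    Binding binding = BindingBuilder.bind(queue)
            .to(new DirectExchange("client-notification"))
            .with("SOME_ROUTING_KEY");
    rabbitAdmin.declareBinding(binding);
}

If I understand the STOMP plugin correctly, the client would then have to subscribe to that named queue (e.g. an /amq/queue/... destination) instead of the auto-generated stomp-subscription-* queue, so messages could pile up there while the device sleeps.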
On the other side, I see methods like setReceiveTimeout(), setReplyTimeout() or setDefaultReceiveQueue() on the RabbitTemplate, but I don't know whether those would be the right ones, or whether this is more of a client-side thing. The subscription on the client side looks like the following:
this.someSubscription = this.rxStompService.watch('/exchange/client-notification/SOME_ROUTING_KEY')
    .subscribe(async (message: Message) => {
        // do something with the message
    });
This is the corresponding my-stomp-config.ts:
export const MyStompConfig: InjectableRxStompConfig = {
  // Which server?
  brokerURL: `${environment.BrokerURL}`,

  // Headers
  // Typical keys: login, passcode, host
  connectHeaders: {
    login: 'usr',
    passcode: 'pass',
    host: 'frontend'
  },

  // How often to heartbeat?
  // Interval in milliseconds, set to 0 to disable
  heartbeatIncoming: 0, // Typical value 0 - disabled
  heartbeatOutgoing: 20000, // Typical value 20000 - every 20 seconds

  // Wait in milliseconds before attempting auto reconnect
  // Set to 0 to disable
  // Typical value 500 (500 milliseconds)
  reconnectDelay: 500,

  // Will log diagnostics on console
  // It can be quite verbose, not recommended in production
  // Skip this key to stop logging to console
  debug: (msg: string): void => {
    //console.log(new Date(), msg);
  }
};
In the documentation I see a connectionTimeout parameter, but the default value should be ok.
Default 0, which implies wait for ever.
Some words about power management: I excluded the app from energy saving, but that doesn't change anything. It also happens with the default browser.
How can I make the frontend queues live longer than six minutes?
The main problem is that you can't do anything if the operating system cuts the resources (WiFi, CPU, ...). When you wake the device, the queues get created again, but all messages sent while the device was sleeping are lost.
So the workaround is to reload the data when the device wakes up. Because this is application specific, code samples are not that useful. I use Xamarin.Forms OnResume() and MessagingCenter, where I subscribe within my browser control. From there I execute JavaScript code with postMessage() and a custom message.
The web application itself has a listener for these messages and reloads the data accordingly.
I'm very new to Spring Boot Admin, Hazelcast and Docker Swarm...
What I'm trying to do is to run 2 instances of SpringBoot Admin Server, in Docker Swarm.
It works fine with one instance. I have every feature of SBA working well.
If I set the number of replicas to "2" in swarm, with the following deploy settings, then the login page doesn't work (it shows up, but I can't log in, and there is no error in the console):
mode: replicated
replicas: 2
update_config:
  parallelism: 1
  delay: 60s
  failure_action: rollback
  order: start-first
  monitor: 60s
rollback_config:
  parallelism: 1
  delay: 60s
  failure_action: pause
  order: start-first
  monitor: 60s
restart_policy:
  condition: any
  delay: 60s
  max_attempts: 3
  window: 3600s
My current Hazelcast config is the following (as specified in the Spring Boot Admin docs):
@Bean
public Config hazelcast() {
    // This map is used to store the events.
    // It should be configured to reliably hold all the data;
    // Spring Boot Admin will compact the events if there are too many.
    MapConfig eventStoreMap = new MapConfig(DEFAULT_NAME_EVENT_STORE_MAP)
            .setInMemoryFormat(InMemoryFormat.OBJECT)
            .setBackupCount(1)
            .setEvictionPolicy(EvictionPolicy.NONE)
            .setMergePolicyConfig(new MergePolicyConfig(PutIfAbsentMapMergePolicy.class.getName(), 100));

    // This map is used to deduplicate the notifications.
    // If data in this map gets lost, it is not a big issue, as it will at most
    // lead to the same notification being sent by multiple instances.
    MapConfig sentNotificationsMap = new MapConfig(DEFAULT_NAME_SENT_NOTIFICATIONS_MAP)
            .setInMemoryFormat(InMemoryFormat.OBJECT)
            .setBackupCount(1)
            .setEvictionPolicy(EvictionPolicy.LRU)
            .setMergePolicyConfig(new MergePolicyConfig(PutIfAbsentMapMergePolicy.class.getName(), 100));

    Config config = new Config();
    config.addMapConfig(eventStoreMap);
    config.addMapConfig(sentNotificationsMap);
    config.setProperty("hazelcast.jmx", "true");

    // WARNING: This sets up a local cluster; change it to fit your needs.
    config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(true);
    TcpIpConfig tcpIpConfig = config.getNetworkConfig().getJoin().getTcpIpConfig();
    tcpIpConfig.setEnabled(true);
    // NetworkConfig network = config.getNetworkConfig();
    // InterfacesConfig interfaceConfig = network.getInterfaces();
    // interfaceConfig.setEnabled(true)
    //         .addInterface("192.168.1.3");
    // tcpIpConfig.setMembers(singletonList("127.0.0.1"));

    return config;
}
I guess these inputs are not enough for you to properly help, but since I don't really understand well how Hazelcast works, I don't really know what is useful and what is not. So please don't hesitate to ask for whatever is needed to help! :)
Do you guys have any idea of what I'm doing wrong?
Many thanks!
Multicast does not work in Docker Swarm with the default overlay driver (at least that is what is stated here).
I tried to make it run with the weave network plugin, but without luck.
In my case it was enough to switch Hazelcast to TCP mode and provide the network in which it should search for the other replicas.
Something like that:
-Dhz.discovery.method=TCP
-Dhz.network.interfaces=10.0.251.*
-Dhz.discovery.tcp.members=10.0.251.*
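If you prefer to keep this inside the hazelcast() bean from the question instead of using system properties, a rough equivalent could look like the sketch below. Treat the wildcard member entry as an assumption about your overlay network; if wildcards are not accepted there, list concrete IPs or a range such as 10.0.251.0-255 instead:

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.config.TcpIpConfig;

// Sketch: disable multicast (which the default overlay driver does not carry)
// and use TCP/IP discovery restricted to the swarm overlay network instead.
public Config swarmFriendlyNetwork(Config config) {
    JoinConfig join = config.getNetworkConfig().getJoin();
    join.getMulticastConfig().setEnabled(false);

    TcpIpConfig tcpIpConfig = join.getTcpIpConfig();
    tcpIpConfig.setEnabled(true);
    tcpIpConfig.addMember("10.0.251.*"); // candidate members on the overlay network

    // Only bind to the interface of the overlay network.
    config.getNetworkConfig().getInterfaces()
            .setEnabled(true)
            .addInterface("10.0.251.*");

    return config;
}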
I'm trying to execute an application under (reasonable) load. What is happening under load is that when trying to place a message onto a queue, the application stalls for about 4 seconds before completing the send. The strange part is that immediately after doing this, the next message takes a matter of milliseconds to place onto the queue. The message is in fact the same message - so the message size isn't a factor.
The application is using Spring Boot 2.1.6, Apache Qpid 0.43.0 as the JMS/AMQP provider.
The message bus being used is Azure ServiceBus, but I have observed the same behaviour using Artemis.
On the Apache Qpid JmsConnectionFactory I've tried fiddling with the "forceSyncSend" property.
I've tried using Spring's CachingConnectionFactory to cache message producers only, and I have increased the default cache size from 1 to 20, without any success.
I've looked at the JmsTemplate parameters but can't find anything regarding message producers (plenty for listeners, but that's another story).
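For reference, the producer-caching attempt looks roughly like this - not my exact configuration; the host, credentials and cache size are placeholders, and forceSyncSend is toggled via the URI option:

import javax.jms.ConnectionFactory;

import org.apache.qpid.jms.JmsConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class QpidJmsConfig {

    // Qpid JMS over AMQP 1.0 against the Service Bus namespace.
    @Bean
    public ConnectionFactory connectionFactory() {
        JmsConnectionFactory qpid =
                new JmsConnectionFactory("amqps://my-bus.servicebus.windows.net:5671?jms.forceSyncSend=false");
        qpid.setUsername("policy-name");
        qpid.setPassword("policy-key");

        CachingConnectionFactory caching = new CachingConnectionFactory(qpid);
        caching.setSessionCacheSize(20);   // the "1 -> 20" experiment mentioned above
        caching.setCacheProducers(true);   // cache MessageProducers across sends
        caching.setCacheConsumers(false);
        return caching;
    }

    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        return new JmsTemplate(connectionFactory);
    }
}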
The code doing the sending is quite simple:
private void sendToQueue(Object message, String queueName) {
    jmsTemplate.convertAndSend(queueName, message, (Message jmsMessage) -> {
        jmsMessage.setStringProperty(OBJECT_TYPE_PARAMETER, message.getClass().getSimpleName());
        return jmsMessage;
    });
}
Is there anything obvious to try? Are there any tuning parameters to stop this stalling happening?
The load on the system is not trivial, but it is not excessive (it needs to go a lot higher than where it is at the moment!)
Any ideas?
In my application.properties file I have...
server.port=8086
server.connection-timeout=15000
I know that the file is being loaded correctly because the server is running on port 8086.
In the application I have a RestController
@RestController
class TestController {

    @GetMapping()
    fun getValues(): ResponseEntity<*> {
        return someLongRunningProcessPossiblyHanging()
    }
}
When I call the endpoint, the request never times out, it just hangs indefinitely.
Am I missing something?
NOTE: I've also been informed that Tomcat uses this field in minutes, not milliseconds (rather unusual choice IMO). I've tried setting this to server.connection-timeout=1 denoting 1 minute, but this didn't work either.
NOTE: I don't want another HTTP request to cause the previous request to time out; I want each HTTP request to time out of its own accord, should too much time elapse while serving the request.
connection-timeout does not apply to long running requests. It does apply to the initial connection, when the server waits for the client to say something.
Tomcat docs (not Spring Boot) define it as The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented [...]
To test the setting server.connection-timeout=4000 I connect using netcat and I don't send any HTTP request/headers. I get:
$ time nc -vv localhost 1234
Connection to localhost 1234 port [tcp/*] succeeded!
real 0m4.015s
user 0m0.000s
sys 0m0.000s
Alternatives
1) Async
From brightinventions.pl - Spring MVC Thread Pool Timeouts:
In Spring MVC there is no way to configure a timeout unless you use async method. With async method one can use spring.mvc.async.request-timeout= to set amount of time (in milliseconds) before asynchronous request handling times out.
I've set spring.mvc.async.request-timeout=4000 and I get a timeout in the browser with this:
@GetMapping("/test-async")
public Callable<String> getFoobar() {
    return () -> {
        Thread.sleep(12000); // this will cause a timeout
        return "foobar";
    };
}
See Spring Boot REST API - request timeout?
2) Servlet filter
Another solution would be to use a servlet filter, as described in brightinventions.pl - Request timeouts in Spring MVC (Kotlin):
override fun doFilterInternal(request: HttpServletRequest, response: HttpServletResponse, filterChain: FilterChain) {
    val completed = AtomicBoolean(false)
    val requestHandlingThread = Thread.currentThread()
    val timeout = timeoutsPool.schedule({
        if (completed.compareAndSet(false, true)) {
            requestHandlingThread.interrupt()
        }
    }, 5, TimeUnit.SECONDS)

    try {
        filterChain.doFilter(request, response)
        timeout.cancel(false)
    } finally {
        completed.set(true)
    }
}
3) Tomcat Stuck Thread Detection Valve?
Tomcat has a Stuck Thread Detection Valve but I don't know if this can be configured programmatically using Spring Boot.
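I haven't tried it with Spring Boot myself, but registering the valve programmatically should be possible along these lines. This is a sketch: the 10-minute threshold is an arbitrary example, and the valve only logs (and optionally interrupts) stuck request threads, it does not time out the HTTP response:

import org.apache.catalina.valves.StuckThreadDetectionValve;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StuckThreadDetectionConfig {

    // Adds Tomcat's StuckThreadDetectionValve to the embedded container.
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> stuckThreadValveCustomizer() {
        StuckThreadDetectionValve valve = new StuckThreadDetectionValve();
        valve.setThreshold(600); // flag requests running longer than 600 seconds
        return factory -> factory.addContextValves(valve);
    }
}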
From the official docs:
server.connection-timeout= # Time that connectors wait for another HTTP request before closing the connection. When not set, the connector's container-specific default is used. Use a value of -1 to indicate no (that is, an infinite) timeout.
Another reference also mentions the same; it should work for you.
When I call the endpoint, the request never times out, it just hangs indefinitely.
server.connection-timeout isn't a request timeout. It is a timeout for idle connections, i.e. those that have already had a request/response pair and on which the server is now awaiting a second request. It is essentially a server-side read timeout.
I have created a host with the name dev002-All-Series, added a trapper item to it with the key test.ping.count, and added the host and IP address to the allowed hosts. Then I try to send data with the metrics-zabbix library, with code like this:
private MetricRegistry metricRegistry;
private Meter pingMeter;

private void init() {
    metricRegistry = new MetricRegistry();
    metricRegistry.register("jvm.attribute.guage.set", new JvmAttributeGaugeSet());

    ZabbixSender zabbixSender = new ZabbixSender("zabbixHost", 10051);
    ZabbixReporter zabbixReporter = ZabbixReporter.forRegistry(metricRegistry)
            .hostName(HostUtil.getHostName()).prefix("test.").build(zabbixSender);
    // FIXME use the right time unit and amount
    zabbixReporter.start(10, TimeUnit.SECONDS);

    pingMeter = metricRegistry.meter("ping");
}
Note that the metrics-zabbix library surrounds the ping meter with a test. prefix and a .count suffix.
So why do I get a response saying that sending my data failed? The response is:
{"response":"success","info":"processed: 0; failed: 8; total: 8; seconds spent: 0.000013"}
What do I need to configure in Zabbix in addition to be able to send data? Also, is there a way to find out why Zabbix does not accept the data - does it log such requests?
Possible popular reasons:
incorrect host name; make sure to match the "Host name" field (not "Visible name", not IP, not DNS...); note that it is case sensitive
incorrect item key; make sure it matches the one in the item key properties exactly - also case sensitive
incorrect allowed hosts field contents, or data coming from a different host than expected - check that field for syntax errors, remember that in older Zabbix versions spaces are not supported in that field, and tcpdump your incoming connection - does it arrive from the host you expected?
host/item not in the configuration cache - if you just added or changed host/item, it might not be in the config cache yet. The config cache is updated every 60 seconds by default
if the host is monitored by a Zabbix proxy, you must send data to that proxy
In general, forget your application for a moment and test with zabbix_sender. If that works, check what is your application doing differently. If that fails, check all the items above.
As for logging, currently Zabbix does not log failures or their reasons.
I have found the problem. It turns out that the metrics-zabbix library does not convert the data well (for version 0.0.1). It sends the clock value as a long in milliseconds, while Zabbix needs to receive it in seconds. After converting it manually I got:
{"response":"success","info":"processed: 2; failed: 0; total: 2; seconds spent: 0.000016"}
It is very funny that even when I got 2 successfully processed elements, Zabbix did not show any values in the graph.
UPDATED
To get everything working, you should check not only the clock in the data object but also the clock in the request. By default, metrics-zabbix uses zabbix-sender version 0.0.1, which sends clocks in milliseconds. To make metrics-zabbix work with Zabbix 3.0, which expects the clock in seconds, you should change the zabbix-sender version to 0.0.3. Here is a Maven sample:
<dependency>
    <groupId>io.github.hengyunabc</groupId>
    <artifactId>metrics-zabbix</artifactId>
    <version>0.0.1</version>
</dependency>

<dependency>
    <groupId>io.github.hengyunabc</groupId>
    <artifactId>zabbix-sender</artifactId>
    <version>0.0.3</version>
</dependency>
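To verify the trapper setup independently of the reporter, a single value can also be sent directly with zabbix-sender, roughly like below. This is my sketch, assuming the usual io.github.hengyunabc.zabbix.sender package layout; the host name and item key are the ones from the question, and the clock is given in seconds:

import java.io.IOException;

import io.github.hengyunabc.zabbix.sender.DataObject;
import io.github.hengyunabc.zabbix.sender.SenderResult;
import io.github.hengyunabc.zabbix.sender.ZabbixSender;

public class TrapperSmokeTest {
    public static void main(String[] args) throws IOException {
        ZabbixSender zabbixSender = new ZabbixSender("zabbixHost", 10051);

        DataObject dataObject = new DataObject();
        dataObject.setHost("dev002-All-Series");  // must match the Zabbix "Host name" exactly
        dataObject.setKey("test.ping.count");     // the trapper item key
        dataObject.setValue("1");
        dataObject.setClock(System.currentTimeMillis() / 1000); // seconds, not milliseconds

        SenderResult result = zabbixSender.send(dataObject);
        System.out.println(result);
        if (!result.success()) {
            System.err.println("Zabbix did not accept the value");
        }
    }
}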
I'm trying to implement an Excel export for a certain amount of data. After 5 minutes I receive a 504 Gateway Timeout. In the backend the process continues with its work.
For the whole service to finish, I need approximately 15 minutes. Is there anything I can do to prevent this? I don't have access to the servers in production.
The app is Spring Boot with an Oracle database. I'm using Apache POI for this export.
One common way to handle these kinds of problems is to have the first request start the process in the background, and when the file has been generated, download the results from another place. The first request finishes immediately, and the user can then check another view to see if the file has been generated, and download the results.
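A rough sketch of that pattern with Spring follows; the endpoints, the in-memory job map and the status strings are all invented for illustration, and @EnableAsync is assumed to be configured somewhere:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ExportController {

    private final ExportService exportService;

    ExportController(ExportService exportService) {
        this.exportService = exportService;
    }

    // Kicks off the export and returns immediately with a job id.
    @PostMapping("/exports")
    ResponseEntity<String> startExport() {
        String jobId = UUID.randomUUID().toString();
        exportService.runExport(jobId);
        return ResponseEntity.accepted().body(jobId);
    }

    // The client polls this endpoint; once DONE, it fetches the file from wherever it was stored.
    @GetMapping("/exports/{jobId}")
    ResponseEntity<String> status(@PathVariable String jobId) {
        return ResponseEntity.ok(exportService.status(jobId));
    }
}

@Service
class ExportService {

    private final Map<String, String> jobs = new ConcurrentHashMap<>();

    // Runs on a separate thread, so the HTTP request is not held open for 15 minutes.
    @Async
    public void runExport(String jobId) {
        jobs.put(jobId, "RUNNING");
        // ... generate the workbook with POI and store it (file system, object storage, DB blob, ...)
        jobs.put(jobId, "DONE");
    }

    public String status(String jobId) {
        return jobs.getOrDefault(jobId, "UNKNOWN");
    }
}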
You can export the data in smaller chunks. Run a test with say 10K records, make a note of the id of the last record and repeat the export starting at the next record. If 10K finishes quickly, then try 50K. If you have a timer that might come in handy. Good luck.
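A sketch of such chunked reads using keyset pagination with JdbcTemplate could look like this; the table and column names are invented, and FETCH FIRST needs Oracle 12c or newer:

import java.util.List;
import java.util.Map;

import org.springframework.jdbc.core.JdbcTemplate;

// Reads the export data in id-ordered chunks so each query stays small and
// the id of the last processed record can be noted between chunks.
public void exportInChunks(JdbcTemplate jdbcTemplate, int chunkSize) {
    long lastId = 0;
    while (true) {
        List<Map<String, Object>> rows = jdbcTemplate.queryForList(
                "SELECT id, col_a, col_b FROM export_data "
                        + "WHERE id > ? ORDER BY id FETCH FIRST ? ROWS ONLY",
                lastId, chunkSize);
        if (rows.isEmpty()) {
            break;
        }
        // ... append the rows to the POI sheet here ...
        lastId = ((Number) rows.get(rows.size() - 1).get("id")).longValue();
    }
}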
I had the same situation, where the timeout of the network calls wasn't in our hands, so I guess you have something where the gateway allows 5 minutes to receive the first byte, and after that the timeout no longer applies.
My solution was: let's assume you have a controller and a query layer to talk to the database. In this case, make your process asynchronous. The call to this controller should just trigger that async execution and return a success status immediately, without waiting; the execution then happens in the background. Futures can be used here, as they are async, and you can also handle the result once it completes by using the callback methods of the future.
You can implement this using futures and callback methods in Java 8, like below:
Futures.addCallback(
    exportData,
    new FutureCallback<String>() {
        public void onSuccess(String message) {
            System.out.println(message);
        }

        public void onFailure(Throwable thrown) {
            thrown.getCause();
        }
    },
    service);
and in Scala like:
val result = Future {
  exportData(data)
}

result.onComplete {
  case Success(message) => println(s"Got the callback result: $message")
  case Failure(e) => e.printStackTrace
}