Hazelcast in Spring Boot Admin to run 2 instances via Docker Swarm - java

I'm very new to Spring Boot Admin, Hazelcast and Docker Swarm...
What I'm trying to do is run 2 instances of Spring Boot Admin Server in Docker Swarm.
It works fine with one instance; every feature of SBA works well.
If I set the number of replicas to "2" with the following deploy settings in the swarm stack, the login page no longer works (it shows up, but I can't log in, and there is no error in the console):
```yaml
mode: replicated
replicas: 2
update_config:
  parallelism: 1
  delay: 60s
  failure_action: rollback
  order: start-first
  monitor: 60s
rollback_config:
  parallelism: 1
  delay: 60s
  failure_action: pause
  order: start-first
  monitor: 60s
restart_policy:
  condition: any
  delay: 60s
  max_attempts: 3
  window: 3600s
```
My current Hazelcast configuration is the following (as given in the Spring Boot Admin documentation):
```java
@Bean
public Config hazelcast() {
    // This map is used to store the events.
    // It should be configured to reliably hold all the data;
    // Spring Boot Admin will compact the events if there are too many.
    MapConfig eventStoreMap = new MapConfig(DEFAULT_NAME_EVENT_STORE_MAP)
            .setInMemoryFormat(InMemoryFormat.OBJECT)
            .setBackupCount(1)
            .setEvictionPolicy(EvictionPolicy.NONE)
            .setMergePolicyConfig(new MergePolicyConfig(PutIfAbsentMapMergePolicy.class.getName(), 100));

    // This map is used to deduplicate the notifications.
    // If data in this map gets lost, it is not a big issue: at most
    // the same notification will be sent by multiple instances.
    MapConfig sentNotificationsMap = new MapConfig(DEFAULT_NAME_SENT_NOTIFICATIONS_MAP)
            .setInMemoryFormat(InMemoryFormat.OBJECT)
            .setBackupCount(1)
            .setEvictionPolicy(EvictionPolicy.LRU)
            .setMergePolicyConfig(new MergePolicyConfig(PutIfAbsentMapMergePolicy.class.getName(), 100));

    Config config = new Config();
    config.addMapConfig(eventStoreMap);
    config.addMapConfig(sentNotificationsMap);
    config.setProperty("hazelcast.jmx", "true");

    // WARNING: This sets up a local cluster; change it to fit your needs.
    config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(true);
    TcpIpConfig tcpIpConfig = config.getNetworkConfig().getJoin().getTcpIpConfig();
    tcpIpConfig.setEnabled(true);
    // NetworkConfig network = config.getNetworkConfig();
    // InterfacesConfig interfaceConfig = network.getInterfaces();
    // interfaceConfig.setEnabled(true)
    //         .addInterface("192.168.1.3");
    // tcpIpConfig.setMembers(singletonList("127.0.0.1"));
    return config;
}
```
I guess these inputs are not enough for you to properly help, but since I don't really understand yet how Hazelcast works, I don't really know what is useful and what is not. So please don't hesitate to ask for whatever is needed! :)
Do you have any idea of what I'm doing wrong?
Many thanks!

Multicast does not work in Docker Swarm with the default overlay driver (at least that is what is stated here).
I have tried to make it run with the weave network plugin, but without luck.
In my case, it was enough to switch Hazelcast to TCP mode and provide the network in which to search for the other replicas.
Something like that:
```
-Dhz.discovery.method=TCP
-Dhz.network.interfaces=10.0.251.*
-Dhz.discovery.tcp.members=10.0.251.*
```
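If you prefer to keep everything in the Spring Boot Admin hazelcast() bean instead of using system properties, the rough programmatic equivalent is to disable multicast and enable the TCP/IP join. This is only a sketch: the 10.0.251.* subnet is the placeholder from the example above, and the map configs from the question are omitted.

```java
@Bean
public Config hazelcast() {
    Config config = new Config();
    // ... add the eventStoreMap and sentNotificationsMap configs from the question here ...

    JoinConfig join = config.getNetworkConfig().getJoin();

    // Multicast does not work on the default Swarm overlay network, so turn it off.
    join.getMulticastConfig().setEnabled(false);

    // Enable TCP/IP discovery and tell Hazelcast where to look for the other replicas.
    join.getTcpIpConfig()
            .setEnabled(true)
            .addMember("10.0.251.*"); // placeholder: the overlay subnet of the SBA service

    // Optionally restrict the interfaces Hazelcast binds to.
    config.getNetworkConfig().getInterfaces()
            .setEnabled(true)
            .addInterface("10.0.251.*"); // placeholder

    return config;
}
```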

Related

Queues are deleted after 6 minutes when Android tablet goes into standby

After about 6 minutes the client (aka the browser) doesn't receive any new updates from the subscribed queue when the device is in sleep mode. If I look into RabbitMQ Management, the related queues have disappeared (named e.g. stomp-subscription-MBSsZ9XB0XCScXbSc3bCcg). After the device wakes up, new queues are created and messaging works again, but only for newly created messages. The old ones never reach the device.
Here is my setup:
Backend: Java application with Spring and RabbitTemplate
Frontend: Angular application, which subscribes via RxStompService
Use case: WebView running in a Xamarin.Forms app on an Android tablet, which opens the URL to the frontend application
This is how the message is sent from backend to frontend:
```java
AMQPMessage<CustomNotificationMessage> msg = new AMQPMessage<CustomNotificationMessage>(
        1, SOME_ROUTING_KEY, mand, trigger, new CustomNotificationMessage());
rabbitTemplate.setRoutingKey(SOME_ROUTING_KEY);
rabbitTemplate.convertAndSend(msg);
```
RabbitMqConfig.java:
```java
@Bean
public RabbitTemplate rabbitTemplate() {
    CachingConnectionFactory connectionFactoryFrontend = new CachingConnectionFactory("some.url.com");
    connectionFactoryFrontend.setUsername("usr");
    connectionFactoryFrontend.setPassword("pass");
    connectionFactoryFrontend.setVirtualHost("frontend");
    RabbitTemplate template = new RabbitTemplate(connectionFactoryFrontend);
    template.setMessageConverter(jsonMessageConverter());
    template.setChannelTransacted(true);
    template.setExchange("client-notification");
    return template;
}
```
My idea now is to use a TTL for the frontend queues. But how do I do that when I haven't declared a queue at all?
On the other side I see methods like setReceiveTimeout(), setReplyTimeout() or setDefaultReceiveQueue() on the RabbitTemplate, but I don't know if that is the right direction. Is it more of a client-side thing? The subscription on the client side looks like the following:
```typescript
this.someSubscription = this.rxStompService.watch('/exchange/client-notification/SOME_ROUTING_KEY')
  .subscribe(async (message: Message) => {
    // do something with the message
  });
```
This is the corresponding my-stomp-config.ts:
```typescript
export const MyStompConfig: InjectableRxStompConfig = {
  // Which server?
  brokerURL: `${environment.BrokerURL}`,

  // Headers
  // Typical keys: login, passcode, host
  connectHeaders: {
    login: 'usr',
    passcode: 'pass',
    host: 'frontend'
  },

  // How often to heartbeat?
  // Interval in milliseconds, set to 0 to disable
  heartbeatIncoming: 0, // Typical value 0 - disabled
  heartbeatOutgoing: 20000, // Typical value 20000 - every 20 seconds

  // Wait in milliseconds before attempting auto reconnect
  // Set to 0 to disable
  // Typical value 500 (500 milliseconds)
  reconnectDelay: 500,

  // Will log diagnostics on console
  // It can be quite verbose, not recommended in production
  // Skip this key to stop logging to console
  debug: (msg: string): void => {
    //console.log(new Date(), msg);
  }
};
```
In the documentation I see a connectionTimeout parameter, but the default value should be ok.
Default 0, which implies wait for ever.
Some words about power management: I excluded the app from energy saving, but that doesn't change anything. It also happens with the default browser.
How can I make the frontend queues live longer than six minutes?
The main problem is that you can't do anything if the operating system cuts the resources (WiFi, CPU, ...). If you wake the device up, the queues are created again, but all messages from while the device was sleeping are lost.
So the workaround is to reload the data when the device wakes up. Because this is application-specific, code samples are not that useful here. I use Xamarin.Forms OnResume() and the MessagingCenter, where I subscribe within my browser control. From there I execute JavaScript code with postMessage() and a custom message.
The web application itself has a listener for these messages and reloads the data accordingly.
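As for the TTL idea from the question: the stomp-subscription-* queues are auto-created by the RabbitMQ STOMP plugin, so there is nothing in the backend code to attach arguments to. If you declared a named queue yourself (and subscribed to that queue from the frontend), you could control its lifetime via queue arguments. A minimal Spring AMQP sketch; the queue name, expiry values and exchange type are made-up assumptions:

```java
@Bean
public Queue clientNotificationQueue() {
    // "client-notification-q" and the 30-minute values are made-up examples.
    // x-expires deletes the queue after it has been unused for the given time;
    // x-message-ttl limits how long individual messages are kept.
    return QueueBuilder.durable("client-notification-q")
            .withArgument("x-expires", 30 * 60 * 1000)
            .withArgument("x-message-ttl", 30 * 60 * 1000)
            .build();
}

@Bean
public Binding clientNotificationBinding(Queue clientNotificationQueue) {
    // Assumes "client-notification" is a topic exchange; adjust if it is direct/fanout.
    return BindingBuilder.bind(clientNotificationQueue)
            .to(new TopicExchange("client-notification"))
            .with("SOME_ROUTING_KEY");
}
```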

How to listen for files from GCS? Is it possible to leverage GcsInboundFileSynchronizer and GcsStreamingMessageSource in multi-node applications?

I am reading the Spring Cloud GCP storage documentation,
and it says there that I can listen for new files using GcsInboundFileSynchronizer or GcsStreamingMessageSource just by configuring a Spring bean like this:
```java
@Bean
@InboundChannelAdapter(channel = "streaming-channel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<InputStream> streamingAdapter(Storage gcs) {
    GcsStreamingMessageSource adapter =
            new GcsStreamingMessageSource(new GcsRemoteFileTemplate(new GcsSessionFactory(gcs)));
    adapter.setRemoteDirectory("your-gcs-bucket");
    return adapter;
}
```
I have a couple of questions:
What if my application is started on 2+ nodes? How will the files be distributed? Round robin? Is there any way to configure batching? Is it possible to get repeated notifications (like in Pub/Sub and other MQ systems)?
What does "new files" mean? Let's say my bucket contains 2 files (1.txt and 2.txt) and I start the application for the first time. Will GcsStreamingMessageSource pick up these files? Or say the application crashed for some reason, then I put a new file into the bucket and start the application again — which files will be picked up?
Are there any recovery abilities? Let's say the application crashed for some reason while processing a file. Will it be redelivered?
P.S.
For now we use bucket notifications which are sent to Pub/Sub. The application listens to the Pub/Sub topic and downloads the file based on the notification headers. Is that a more reliable way?
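For reference, the notification-driven approach from the P.S. can be sketched roughly like this with Spring Cloud GCP. The subscription and channel names are made up; bucketId and objectId are the attributes GCS sets on its Pub/Sub notification messages, which the adapter maps into message headers.

```java
@Autowired
private Storage storage;

@Bean
public MessageChannel gcsNotificationChannel() {
    return new DirectChannel();
}

@Bean
public PubSubInboundChannelAdapter gcsNotificationAdapter(PubSubTemplate pubSubTemplate) {
    // "gcs-notifications-sub" is a made-up subscription fed by the bucket notifications.
    PubSubInboundChannelAdapter adapter =
            new PubSubInboundChannelAdapter(pubSubTemplate, "gcs-notifications-sub");
    adapter.setOutputChannel(gcsNotificationChannel());
    return adapter;
}

@ServiceActivator(inputChannel = "gcsNotificationChannel")
public void handleGcsNotification(Message<byte[]> message) {
    // The bucket and object name arrive as notification attributes / message headers.
    String bucket = (String) message.getHeaders().get("bucketId");
    String object = (String) message.getHeaders().get("objectId");
    byte[] content = storage.readAllBytes(BlobId.of(bucket, object));
    // ... process the file content ...
}
```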

Cassandra behavior on contact point based on data center

Cassandra is set up in 3 data centers (dc1, dc2 & dc3) forming one cluster.
A Java application is running in dc1.
The dc1 application has its Cassandra contact points set to dc1 only (only the IPs of the Cassandra nodes in dc1 are given to the application).
When the dc1 Cassandra nodes are turned off, the application throws an exception like:
All host(s) tried for query failed (no host was tried)
More Info:
cassandra-driver-core-3.0.8.jar
netty-3.10.5.Final.jar
netty-buffer-4.0.37.Final.jar
netty-codec-4.0.37.Final.jar
netty-common-4.0.37.Final.jar
netty-handler-4.0.37.Final.jar
netty-transport-4.0.37.Final.jar
Keyspace : Network topology
Replication : dc1:2, dc2:2, dc3:2
Cassandra Version : 3.11.4
Here are some things I have found out about connections and Cassandra (and BTW, I believe Cassandra has one of the best HA configurations of any database I've worked with over the past 25 years).
1) Ensure you have all of the components specified in your connection builder. Here is an example of some of the connection components, but there are others as well (maybe you've already done this):
```java
cluster = Cluster.builder()
        .addContactPoints(nodes.split(","))
        .withCredentials(username, password)
        .withPoolingOptions(poolingOptions)
        .withLoadBalancingPolicy(
                new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder()
                        .withLocalDc("MYLOCALDC")
                        .withUsedHostsPerRemoteDc(1)
                        .allowRemoteDCsForLocalConsistencyLevel()
                        .build()
                )
        ).build();
```
2) Unless the entire DC you're working in is down, you can still receive errors. The driver doesn't fail over to an alternate DC unless every node in the local DC is down. If only some nodes are down and your client can't satisfy the client consistency level (CL), you will receive errors. When I tested this a while back, I was actually hoping that if the client CL couldn't be achieved in the LOCAL DC (even with some local nodes up) but an alternate DC could satisfy it, the driver would automatically fail over, but that is not the case (as of the last time I tested).
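Since the behavior above depends on the client consistency level, it can help to pin the CL explicitly to a LOCAL_* level, either cluster-wide or per statement. A rough sketch for driver 3.x; the contact points and query are made-up examples:

```java
// Cluster-wide default consistency level.
QueryOptions queryOptions = new QueryOptions()
        .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);

Cluster cluster = Cluster.builder()
        .addContactPoints("10.1.0.1", "10.1.0.2") // made-up dc1 contact points
        .withQueryOptions(queryOptions)
        .build();
Session session = cluster.connect();

// Or per statement:
Statement stmt = new SimpleStatement("SELECT * FROM my_ks.my_table WHERE id = ?", 42)
        .setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
session.execute(stmt);
```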
Maybe that helps?
-Jim

Redis: Spring Boot application requests keep failing when one of the Redis master nodes is shut down

Blocked by a Redis issue these days, thanks for any suggestions in advance. Below are some details:
Env: Spring Boot 2.0.3.RELEASE, Redis 3.0.6 cluster (3 masters, 3 slaves)
Starter: spring-boot-starter-data-redis (default version with Spring Boot), which means the application uses Lettuce as the Redis client
Error scenario:
Start the application and send some requests; everything goes fine.
Stop one master node; the corresponding slave takes about 20s to fail over and become a master, which also goes fine.
If requests keep being sent during this ~20s failover period:
In the 20s, requests fail; this is expected.
After the 20s (the slave has become a master), requests still fail; this is unexpected.
If no requests are sent during the ~20s failover period, then after the slave becomes a master, later requests go fine.
No write operations happened during the steps above.
config:
```yaml
cache:
  type: redis
redis:
  cluster: ip1:port(m),ip1:port(s),ip2:port(m),ip2:port(s),ip3:port(m),ip3:port(s)
  max-redirects: 3
  password: xxxx
  timeout: 1000
  pool:
    max-active: 500
    max-wait: 1500
```
Code: just create a simple CacheManager bean:
```java
@Bean
public CacheManager cacheManager(RedisConnectionFactory redisConnectionFactory) {
    return new RedisCacheManager(RedisCacheWriter.nonLockingRedisCacheWriter(redisConnectionFactory),
            redisCacheConfig); // sets the serializer and timeout
}
```
We use Spring Cache in the code with annotations like @CachePut etc. The total data in Redis is less than 10M and the total volume of Redis is 2G.
NEEDS YOUR HELP :)
The issue goes away when we change from Lettuce to Jedis, but no root cause has been found.
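For anyone hitting the same symptom: a commonly suggested Lettuce knob for this scenario is to enable cluster topology refresh, so that the client notices the promoted slave instead of keeping the stale topology. We did not verify this against our setup (we switched to Jedis instead), but a rough sketch, with placeholder node addresses, would be:

```java
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration(
            Arrays.asList("ip1:port", "ip2:port", "ip3:port")); // placeholders
    clusterConfig.setPassword(RedisPassword.of("xxxx"));

    // Refresh the cluster topology periodically and on adaptive triggers
    // (MOVED/ASK redirects, connection problems).
    ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
            .enablePeriodicRefresh(Duration.ofSeconds(30))
            .enableAllAdaptiveRefreshTriggers()
            .build();

    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .clientOptions(ClusterClientOptions.builder()
                    .topologyRefreshOptions(refreshOptions)
                    .build())
            .commandTimeout(Duration.ofSeconds(1))
            .build();

    return new LettuceConnectionFactory(clusterConfig, clientConfig);
}
```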

Hazelcast performance test with Spring Batch throws TargetDisconnectedException

I have a Hazelcast server setup with 2 nodes, and the server has been healthy and has never shown any issues. My clients are multiple instances of Spring Batch; each Spring Batch instance has 120 threads, and there are five instances, which means there can be around 120*5 = 600 threads trying to access the IMap to set/get values.
When I start around 3 instances of the batch, the time taken to set values in the IMap goes beyond 50 seconds and eventually the following exception is thrown:
```
com.hazelcast.spi.exception.TargetDisconnectedException
    at com.hazelcast.client.spi.impl.ClientCallFuture.get(ClientCallFuture.java:128)
    at com.hazelcast.client.spi.impl.ClientCallFuture.get(ClientCallFuture.java:111)
    at com.hazelcast.client.spi.ClientProxy.invoke(ClientProxy.java:110)
    at com.hazelcast.client.proxy.ClientMapProxy.set(ClientMapProxy.java:380)
    at com.ebay.app.raptor.dfmailbat.components.cache.client.processor.CacheClient.setDealsResponseInCache(CacheClient.java:101)
    at com.ebay.app.raptor.dfmailbat.components.deal.finder.service.manager.DealFinderManager.getDeals(DealFinderManager.java:59)
```
Each Spring Batch instance has one static IMap reference and uses the same instance to set/get values. Something like this:
```java
static {
    ClientConfig clientConfig = new ClientConfig();
    List<String> addresses = getClusterAddresses();
    if (addresses != null) {
        clientConfig.getNetworkConfig().setAddresses(addresses);
        s_client = HazelcastClient.newHazelcastClient(clientConfig);
    }
    else {
        s_logger.log(LogLevel.ERROR, "No host in Database for hazelcast client set up");
    }
    if (s_client != null) {
        s_map = s_client.getMap("ItemDealsMap");
    }
}
```
The s_map is used by all the threads to set/get entries from the IMap. I set an eviction time (TTL) of 12 hours when I put the entries into the IMap. I am using Hazelcast 3.3 on both the server and the client. The issue is consistently reproducible when the number of concurrent threads is increased. When I shut down the Spring Batch instances and start them again, it works well. Could you please help me with this?
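For context, this is roughly what the client setup with the per-entry TTL looks like on the 3.x API, along with a couple of client network knobs that are sometimes tuned under heavy load (the timeout value is an arbitrary example, getClusterAddresses() is the helper from the question, and whether these settings help here is untested):

```java
ClientConfig clientConfig = new ClientConfig();
ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig();
networkConfig.setAddresses(getClusterAddresses());
networkConfig.setConnectionTimeout(30000); // ms; arbitrary example value
networkConfig.setSmartRouting(true);       // route operations directly to the owning member
networkConfig.setRedoOperation(true);      // retry operations when a connection drops

HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
IMap<String, String> map = client.getMap("ItemDealsMap");

// Per-entry TTL of 12 hours, as described above.
map.set("someKey", "someValue", 12, TimeUnit.HOURS);
```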
