I'm getting into RxJava and am looking for a good way to share a number of BehaviorSubjects with multiple subscribers. Each BehaviorSubject is identified by a unique subject name, and only one subscription should be made to the back end for each subject.
If there are no current subscribers for a BehaviorSubject, it should unsubscribe from the back end.
The following code does what I want, but the MyFakeService class lacks the elegance that RxJava promises.
package au.play;
import io.reactivex.Observable;
import io.reactivex.disposables.Disposable;
import io.reactivex.functions.Consumer;
import io.reactivex.observers.DisposableObserver;
import io.reactivex.subjects.BehaviorSubject;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
public class Demo {
public static class MyFakeBackEnd {
private final Observable<Long> FAKE_SOURCE = Observable.interval(30, 10, TimeUnit.MILLISECONDS);
public Observable<Long> getObservable(String subject) {
return FAKE_SOURCE;
}
}
public static class MyFakeService {
private final MyFakeBackEnd myFakeBackEnd = new MyFakeBackEnd();
private final Map<String, Observable<Long>> subjectMap = new ConcurrentHashMap<>();
public Observable<Long> getObservable(String subject) {
return subjectMap.computeIfAbsent(subject, (String key) -> {
BehaviorSubject<Long> behaviourSubject = BehaviorSubject.createDefault(-1L);
AtomicReference<Disposable> atomicDisposable = new AtomicReference<>();
return behaviourSubject
.doOnSubscribe(disposable -> {
System.out.println("First subscriber for <" + key + ">");
final DisposableObserver<Long> disposableObserver = new DisposableObserver<Long>() {
@Override
public void onNext(Long value) {
behaviourSubject.onNext(value);
}
@Override
public void onError(Throwable e) {
e.printStackTrace();
}
@Override
public void onComplete() {
System.out.println("Why complete?");
}
};
myFakeBackEnd.getObservable(subject).subscribeWith(disposableObserver);
atomicDisposable.set(disposableObserver);
})
.doOnDispose(() -> {
System.out.println("Last observer unsubscribed : <" + key + ">");
atomicDisposable.get().dispose();
behaviourSubject.onNext(-2L);
}).share();
});
}
}
public static void main(String[] args) throws InterruptedException {
MyFakeService service = new MyFakeService();
System.out.println("C-1 subscription, should trigger 'First subscriber for <firstSubject>' and then start receiving updates. Initial value should be -1");
Disposable firstDisposable = service.getObservable("firstSubject").subscribe(createConsumer("C-1"));
Thread.sleep(45);
System.out.println("C-2 subscription, should not trigger 'First subscriber for <firstSubject>'. Should receive same updates as C-1.");
Disposable secondDisposable = service.getObservable("firstSubject").subscribe(createConsumer("C-2"));
System.out.println("C-3 subscription, should trigger 'First subscriber for <secondSubject>' and then start receiving updates. Initial value should be -1");
Disposable thirdDisposable = service.getObservable("secondSubject").subscribe(createConsumer("C-3"));
Thread.sleep(45);
System.out.println("Dispose of C-1 subscription. C-2 should continue getting updates.");
firstDisposable.dispose();
Thread.sleep(45);
System.out.println("Dispose of C-2 subscription. Should trigger 'Last observer unsubscribed : <firstSubject>'.");
secondDisposable.dispose();
Thread.sleep(45);
System.out.println("Dispose of C-3 subscription. Should trigger 'Last observer unsubscribed : <secondSubject>'.");
thirdDisposable.dispose();
Thread.sleep(45);
System.out.println("C-4 subscription, should trigger 'First subscriber for <secondSubject>' and then start receiving updates. Initial value should be -2 as this subject has been subscribed to before.");
Disposable fourthDisposable = service.getObservable("secondSubject").subscribe(createConsumer("C-4"));
Thread.sleep(45);
fourthDisposable.dispose();
}
private static Consumer<Long> createConsumer(final String id) {
return (data) -> System.out.println(id + " : <" + data + ">");
}
}
It seems very likely that there is a better solution to this that I can't spot because I'm new to the framework. Any ideas?
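One direction I have been eyeing, but have not verified, is replacing the manually wired BehaviorSubject with replay(1).refCount(): refCount() subscribes to the back end when the first subscriber arrives and disposes the upstream when the last one leaves, while replay(1) hands the latest value to late subscribers. In this sketch, startWith(-1L) stands in for the createDefault(-1L) seed; the -2 "seen before" marker has no direct equivalent.
public Observable<Long> getObservable(String subject) {
    return subjectMap.computeIfAbsent(subject, key ->
            myFakeBackEnd.getObservable(key)
                    // seed value, mirroring BehaviorSubject.createDefault(-1L)
                    .startWith(-1L)
                    .doOnSubscribe(d -> System.out.println("First subscriber for <" + key + ">"))
                    .doOnDispose(() -> System.out.println("Last observer unsubscribed : <" + key + ">"))
                    // replay(1) caches the latest element for late subscribers;
                    // refCount() connects on the first subscription and disposes
                    // the back-end subscription when the last subscriber is gone
                    .replay(1)
                    .refCount());
}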
I am evaluating Ignite as a caching layer for our architecture. When trying out the Ignite Java thin client for the use case mentioned below, I could not find any pointers in the Ignite docs or forums as to how this is tackled by the Ignite community. Any pointers will be helpful before I go ahead and use my custom solution.
Use case: all nodes in an Ignite cluster go down and come back up. Basically, the thin client loses its connection to all cluster nodes for some time.
What I was expecting
I am using a continuous query and register for disconnect events. Hence, I was expecting some disconnect event, which I never got. Reference code below.
public static QueryCursor<Cache.Entry<String, String>> subscribeForDataUpdates(ClientCache<String, String> entityCache,
AtomicLong totalUpdatesTracker) {
ClientDisconnectListener disconnectListener = reason ->
System.out.printf("Client: %s received disconnect event with reason:%s %n",
getClientIpAddr(),
reason.getMessage());
ContinuousQuery<String, String> continuousQuery = new ContinuousQuery<>();
continuousQuery.setLocalListener(new CacheUpdateListener(entityCache.getName(), totalUpdatesTracker));
QueryCursor<Cache.Entry<String, String>> queryCursor = entityCache.query(continuousQuery, disconnectListener);
System.out.printf("Client: %s - subscribed for change notification(s) for entity cache: %s %n",
getClientIpAddr(),
entityCache.getName());
return queryCursor;
}
What I ended up doing
Writing my own checker to re-initialize the thin client connection to the Ignite cluster and re-subscribe for continuous query updates.
import io.vavr.control.Try;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.client.IgniteClient;
import javax.cache.Cache;
import javax.inject.Inject;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import static com.cisco.ignite.consumer.CacheChangeSubscriber.subscribeForDataUpdates;
import static com.cisco.ignite.consumer.Utils.addShutDownHookToCloseCacheUpdates;
import static com.cisco.ignite.consumer.Utils.getClientIpAddr;
public class ClusterConnectionChecker implements Runnable {
private static final List<QueryCursor<Cache.Entry<String, String>>> querySubscriptions = new ArrayList<>();
@Inject
private CacheChangeSubscriber cacheChangeSubscriber;
private IgniteClient thinClientInstance;
private final long secondsDelayBetweenChecks;
private final List<String> cacheNames;
private final AtomicLong totalUpdatesTracker;
private boolean needsReSubscription = false;
public ClusterConnectionChecker(IgniteClient client, long delayBetweenChecks,
List<String> cacheNames, AtomicLong totalUpdatesTracker) {
this.thinClientInstance = client;
this.secondsDelayBetweenChecks = delayBetweenChecks;
this.cacheNames = cacheNames;
this.totalUpdatesTracker = totalUpdatesTracker;
}
@Override
public void run() {
while(!Thread.interrupted()) {
try {
Thread.sleep(TimeUnit.SECONDS.toMillis(secondsDelayBetweenChecks));
boolean isClusterConnectionActive = isConnectionToClusterActive();
if (!isClusterConnectionActive) {
needsReSubscription = true;
System.out.printf("Time: %s | Connection to ignite cluster is not active !!! %n",
LocalDateTime.now());
reInitializeThinClient();
reSubscribeForUpdates();
} else {
// we only need to conditionally re-subscribe
if (needsReSubscription) {
reSubscribeForUpdates();
}
}
} catch (InterruptedException ie) {
// do nothing - just reset the interrupt flag.
Thread.currentThread().interrupt();
}
}
}
private boolean isConnectionToClusterActive() {
return Try.of(() -> {
return thinClientInstance.cluster().state().active();
}).recover(ex -> {
return false;
}).getOrElse(false);
}
private void reInitializeThinClient() {
Try.of(() -> {
thinClientInstance = cacheChangeSubscriber.createThinClientInstance();
if (thinClientInstance.cluster().state().active()) {
System.out.printf("Client: %s | Thin client instance was re-initialized since it was not active %n",
getClientIpAddr());
}
return thinClientInstance;
}).onFailure(th -> System.out.printf("Client: %s | Failed to re-initialize ignite cluster connection. " +
"Will re-try after:%d seconds %n", getClientIpAddr(),secondsDelayBetweenChecks));
}
private void reSubscribeForUpdates() {
if (isConnectionToClusterActive()) {
System.out.printf("Client: %s | Re-subscribing for cache updates after cluster connection re-init... %n",
getClientIpAddr());
// re-set the counter to 0 since we are re-subscribing fresh
totalUpdatesTracker.set(0);
cacheNames.forEach(name -> querySubscriptions.add(subscribeForDataUpdates(
thinClientInstance.getOrCreateCache(name),
totalUpdatesTracker)));
addShutDownHookToCloseCacheUpdates(querySubscriptions, thinClientInstance);
needsReSubscription = false;
}
}
}
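For completeness, this is roughly how I wire the checker up (a sketch only: the address, cache name, and 10-second delay are placeholder values; in the real application the thin client comes from CacheChangeSubscriber.createThinClientInstance() as above):
IgniteClient thinClient = Ignition.startClient(
        new ClientConfiguration().setAddresses("127.0.0.1:10800")); // placeholder address
AtomicLong totalUpdatesTracker = new AtomicLong();
ClusterConnectionChecker checker = new ClusterConnectionChecker(
        thinClient, 10L, List.of("entityCache"), totalUpdatesTracker); // check every 10 seconds
Thread checkerThread = new Thread(checker, "ignite-connection-checker");
checkerThread.setDaemon(true); // do not keep the JVM alive for the checker alone
checkerThread.start();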
I created an example client/server application to get familiar with Spring WebFlux/Reactor Netty. Now I am a little bit confused about the behaviour on the client side when the response contains a Flux and the media type is "text/event-stream". What I can see is that each element produced on the server is sent immediately to the client but not yet delivered to the subscriber. The first delivery to the subscriber happens after the producer on the server side has completed the Flux.
This means for me that all the elements are first collected somewhere in reactor-netty on the client side until it gets a complete/error event.
Are my conclusions correct, or am I doing something wrong?
If they are correct, will this be changed in the near future? With the behaviour I currently observe, most of the benefits of using Spring WebFlux are negated, because, as with Spring MVC, the consumer has to wait until the whole element collection has been created and transferred before it can start working on the elements.
My server app is:
@SpringBootApplication
public class ServerApp {
public static void main(String[] args) {
new SpringApplicationBuilder().sources(ServerApp.class).run(args);
}
@RestController
public static class TestController {
@GetMapping(value = "/test", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> testFlux() {
class AsyncSink implements Consumer<SynchronousSink<String>> {
private List<String> allStrings = List.of(
"Hello Flux1!",
"Hello Flux2!",
"Hello Flux3!",
"Hello Flux4!",
"Hello Flux5!");
private int index = 0;
@Override
public void accept(SynchronousSink<String> sink) {
if (index == allStrings.size()) {
sink.complete();
}
else {
sink.next(allStrings.get(index++));
}
}
}
return Flux.generate(new AsyncSink());
}
}
}
and my client app is:
#SpringBootApplication
public class ClientApp {
public static void main(String[] args) throws IOException {
ConfigurableApplicationContext aContext = new SpringApplicationBuilder().web(WebApplicationType.NONE).sources(ClientApp.class).run(args);
Flux<String> aTestFlux = aContext.getBean(TestProxy.class).getFlux();
aTestFlux.subscribe(new TestSubscriber());
System.out.println("Press ENTER to exit.");
System.in.read();
}
@Bean
public WebClient webClient() {
return WebClient.builder().baseUrl("http://localhost:8080").build();
}
@Component
public static class TestProxy {
@Autowired
private WebClient webClient;
public Flux<String> getFlux() {
return webClient.get().uri("/test").accept(MediaType.TEXT_EVENT_STREAM).exchange().flatMapMany(theResponse -> theResponse.bodyToFlux(String.class));
}
}
private static class TestSubscriber extends BaseSubscriber<String> {
@Override
public void hookOnSubscribe(Subscription subscription) {
System.out.println("Subscribed");
request(Long.MAX_VALUE);
}
@Override
public void hookOnNext(String theValue) {
System.out.println(" - " + theValue);
request(1);
}
@Override
protected void hookOnComplete() {
System.out.println(" done");
}
@Override
protected void hookOnCancel() {
System.out.println(" cancelled");
}
@Override
protected void hookOnError(Throwable theThrowable) {
theThrowable.printStackTrace(System.err);
}
}
}
When I access the URL http://localhost:8080/test with the Chrome browser I see:
data:Hello Flux1!
data:Hello Flux2!
data:Hello Flux3!
data:Hello Flux4!
data:Hello Flux5!
which to me looks like 5 HTTP events have been sent.
Taken from the reactive documentation and rewritten to fit your needs.
My guess is that in the example you have given, you pass the generate function a consumer, and its items are only emitted once the consumer has finished.
By instead using the method Flux#generate(Callable<S> stateSupplier, BiFunction<S,SynchronousSink<T>,S> generator), you supply a state object that holds the items you want emitted; then, in the supplied BiFunction, you emit each item one by one.
Flux<String> flux = Flux.generate(
() -> List.of("1!", "2!", "3!", "4!", "5!").iterator(),
(iterator, sink) -> {
if (iterator.hasNext()) {
sink.next(iterator.next());
} else {
sink.complete();
}
return iterator; // hand the state on to the next invocation
});
I have not tested the code; it was written on mobile.
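As a quick sanity check of the generator (equally untested; I would expect it to print the five items followed by "done"):
flux.subscribe(
        System.out::println,                   // one line per emitted item
        Throwable::printStackTrace,            // surface any error
        () -> System.out.println("done"));     // completion signal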
I want to parallelize work on parent and child entities, in a process that must return the childEntities quickly, and I cannot decide which approach is suitable. Each of those parallel threads makes an HTTP call and invokes a Spring Data repository's save method once (I will manage the thread count because of the JDBC connection pool size).
By the way, I have only tried the RxJava 2 library so far.
I expected that if a parallel flow throws an exception, the onErrorResumeNext method (or something similar) would carry on and complete all the remaining work after the exception. But it suspends the flow completely.
So what I need is completely non-blocking parallel flows: if one of them throws an exception, just catch it and continue the rest of the parallel work.
Any ideas? Any other solution is acceptable (like manual thread management).
This is what I tried, but it is not working as expected.
package com.mypackage;
import io.reactivex.Flowable;
import io.reactivex.schedulers.Schedulers;
import lombok.extern.slf4j.Slf4j;
import java.util.ArrayList;
import java.util.List;
@Slf4j
public class TestApp {
public static void main(String[] args) {
long start = System.currentTimeMillis();
List<String> createdParentEntities = new ArrayList<>();
List<String> erroredResponses = new ArrayList<>();
List<String> childEntities = new ArrayList<>();
Flowable.range(1, 100) // the count of 100 is not fixed normally; it is just an example
.parallel(100) // It will be changed according to size
.runOn(Schedulers.io())
.map(integer -> createParentEntity(String.valueOf(integer)))
.sequential()
.onErrorResumeNext(t -> {
System.out.println(t.getMessage());
if (t instanceof Exception) {
erroredResponses.add(t.getMessage());
return Flowable.empty();
} else {
return Flowable.error(t);
}
})
.blockingSubscribe(createdParentEntities::add);
if (!createdParentEntities.isEmpty()) {
Flowable.fromIterable(createdParentEntities)
.parallel(createdParentEntities.size())
.runOn(Schedulers.io())
.doOnNext(TestApp::createChildEntity)
.sequential()
.blockingSubscribe(childEntities::add);
}
System.out.println("====================");
long time = System.currentTimeMillis() - start;
log.info("Total Time : " + time);
log.info("TOTAL CREATED ENTITIES : " + createdParentEntities.size());
log.info("CREATED ENTITIES " + createdParentEntities.toString());
log.info("ERRORED RESPONSES " + erroredResponses.toString());
log.info("TOTAL ENTITIES : " + childEntities.size());
}
public static String createParentEntity(String id) throws Exception {
Thread.sleep(1000); // Simulated for creation call
if (id.equals("35") || id.equals("75")) {
throw new Exception("ENTITIY SAVE ERROR " + id);
}
log.info("Parent entity saved : " + id);
return id;
}
public static String createChildEntity(String parentId) throws Exception {
Thread.sleep(1000);// Simulated for creation call
log.info("Incoming entity: " + parentId);
return "Child Entity: " + parentId + " parentId";
}
}
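One variant I have been considering, but have not verified, moves the error handling inside a flatMap so that only the failing element is dropped and the remaining work keeps running (erroredResponses becomes a synchronized list because it is now touched from io() threads; requires java.util.Collections):
List<String> erroredResponses = Collections.synchronizedList(new ArrayList<>());
List<String> createdParentEntities = new ArrayList<>();
Flowable.range(1, 100)
        .flatMap(i -> Flowable.fromCallable(() -> createParentEntity(String.valueOf(i)))
                        .subscribeOn(Schedulers.io())
                        .doOnError(t -> erroredResponses.add(t.getMessage()))
                        // swallow only this element's failure
                        .onErrorResumeNext(Flowable.<String>empty()),
                false, 100) // delayErrors=false, maxConcurrency=100
        .blockingSubscribe(createdParentEntities::add);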
I need to guarantee consumer exclusivity with a variable number of consumer threads in different runtimes consuming from a fixed number of queues (where the number of queues is much greater than that of consumers).
My general thought was that I'd have each consumer thread attempt to establish an exclusive connection to clear a queue, and, if it went a given period without receiving a message from that queue, redirect it to another queue.
Even if a queue is temporarily cleared, it's liable to receive messages again in the future, so that queue cannot simply be forgotten about -- instead, a consumer should return to it later. To achieve that rotation, I thought I'd use a queue-of-queues. The danger would be losing references to queues within the queue-of-queues when consumers fail; I thought that seemed solvable with acknowledgements, as follows.
Essentially, each consumer thread waits to get a message (A) with a reference to a queue (1) from the queue-of-queues; message (A) remains initially unacknowledged. The consumer happily attempts to clear queue (1), and once queue (1) remains empty for a given amount of time, the consumer requests a new queue name from the queue-of-queues. Upon receiving a second message (B) and a reference to a new queue (2), the reference to queue (1) is put back on the end of the queue-of-queues as a new message (C), and finally message (A) is acknowledged.
In fact, the queue-of-queues' delivered-at-least-and-probably-only-once guarantee almost gets me exclusivity for the normal queues (1, 2) here, but in order to make sure I absolutely don't lose references to queues, I need to republish queue (1) as message (C) before I acknowledge message (A). That means that if a server fails after republishing queue (1) as message (C) but before acknowledging (A), two references to queue (1) could exist in the queue-of-queues, and exclusivity is no longer guaranteed.
Therefore, I'd need to use AMQP's exclusive consumer flag, which is great, but as it stands, I'd also like NOT to republish a reference to a queue if I received a "403 ACCESS REFUSED" for it, so that duplicate references do not proliferate.
However, I'm using Spring's excellent AMQP library, and I don't see how I can hook in with an error handler. The setErrorHandler method exposed on the container doesn't seem to be called for the "403 ACCESS REFUSED" errors.
Is there a way that I can act on the 403s with the frameworks I'm currently using? Alternatively, is there another way I can achieve the guarantees that I need? My code is below.
The "monitoring service":
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.Period;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.AmqpAuthenticationException;
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Optional;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
public class ListenerMonitoringService {
private static final Logger log = LoggerFactory.getLogger(ListenerMonitoringService.class);
private static final Period EXPIRATION_PERIOD = Period.millis(5000);
private static final long MONITORING_POLL_INTERVAL = 5000;
private static final long MONITORING_INITIAL_DELAY = 5000;
private final Supplier<AbstractMessageListenerContainer> messageListenerContainerSupplier;
private final QueueCoordinator queueCoordinator;
private final ScheduledExecutorService executorService;
private final Collection<Record> records;
public ListenerMonitoringService(Supplier<AbstractMessageListenerContainer> messageListenerContainerSupplier,
QueueCoordinator queueCoordinator, ScheduledExecutorService executorService) {
this.messageListenerContainerSupplier = messageListenerContainerSupplier;
this.queueCoordinator = queueCoordinator;
this.executorService = executorService;
records = new ArrayList<>();
}
public void registerAndStart(MessageListener messageListener) {
Record record = new Record(messageListenerContainerSupplier.get());
// wrap with listener that updates record
record.container.setMessageListener((MessageListener) (m -> {
log.trace("{} consumed a message from {}", record.container, Arrays.toString(record.container.getQueueNames()));
record.freshen(DateTime.now(DateTimeZone.UTC));
messageListener.onMessage(m);
}));
record.container.setErrorHandler(e -> {
log.error("{} received an {}", record.container, e);
// this doesn't get called for 403s
});
// initial start up
executorService.execute(() -> {
String queueName = queueCoordinator.getQueueName();
log.debug("Received queue name {}", queueName);
record.container.setQueueNames(queueName);
log.debug("Starting container {}", record.container);
record.container.start();
// background monitoring thread
executorService.scheduleAtFixedRate(() -> {
log.debug("Checking container {}", record.container);
if (record.isStale(DateTime.now(DateTimeZone.UTC))) {
String newQueue = queueCoordinator.getQueueName();
String oldQueue = record.container.getQueueNames()[0];
log.debug("Switching queues for {} from {} to {}", record.container, oldQueue, newQueue);
record.container.setQueueNames(newQueue);
queueCoordinator.markSuccessful(oldQueue); // requeue the queue we are switching away from
}
}, MONITORING_INITIAL_DELAY, MONITORING_POLL_INTERVAL, TimeUnit.MILLISECONDS);
});
records.add(record);
}
private static class Record {
private static final DateTime DATE_TIME_MIN = new DateTime(0);
private final AbstractMessageListenerContainer container;
private Optional<DateTime> lastListened;
private Record(AbstractMessageListenerContainer container) {
this.container = container;
lastListened = Optional.empty();
}
public synchronized boolean isStale(DateTime now) {
log.trace("Comparing now {} to {} for {}", now, lastListened, container);
return lastListened.orElse(DATE_TIME_MIN).plus(EXPIRATION_PERIOD).isBefore(now);
}
public synchronized void freshen(DateTime now) {
log.trace("Updating last listened to {} for {}", now, container);
lastListened = Optional.of(now);
}
}
}
The "queue-of-queues" handler:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Envelope;
import com.rabbitmq.client.GetResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
public class MetaQueueCoordinator implements QueueCoordinator {
private static final Logger log = LoggerFactory.getLogger(MetaQueueCoordinator.class);
private final Channel channel;
private final Map<String, Envelope> envelopeMap;
private final RabbitTemplate rabbitTemplate;
public MetaQueueCoordinator(ConnectionFactory connectionFactory) {
Connection connection = connectionFactory.createConnection();
channel = connection.createChannel(false);
envelopeMap = new ConcurrentHashMap<>();
rabbitTemplate = new RabbitTemplate(connectionFactory);
rabbitTemplate.setExchange("");
rabbitTemplate.setRoutingKey("queue_of_queues");
}
#Override
public String getQueueName() {
GetResponse response;
try {
response = channel.basicGet("queue_of_queues", false);
} catch (IOException e) {
log.error("Unable to get from channel");
throw new RuntimeException(e);
}
String queueName = new String(response.getBody());
envelopeMap.put(queueName, response.getEnvelope());
return queueName;
}
#Override
public void markSuccessful(String queueName) {
Envelope envelope = envelopeMap.remove(queueName);
if (envelope == null) {
return;
}
log.debug("Putting {} at the end of the line...", queueName);
rabbitTemplate.convertAndSend(queueName);
try {
channel.basicAck(envelope.getDeliveryTag(), false);
} catch (IOException e) {
log.error("Unable to acknowledge {}", queueName);
}
}
#Override
public void markUnsuccessful(String queueName) {
Envelope envelope = envelopeMap.remove(queueName);
if (envelope == null) {
return;
}
try {
channel.basicAck(envelope.getDeliveryTag(), false);
} catch (IOException e) {
log.error("Unable to acknowledge {}", queueName);
}
}
}
The ErrorHandler is for handling errors during message delivery, not errors setting up the listener itself.
The upcoming 1.5 release publishes application events when exceptions such as this occur.
It will be released later this summer; this feature is currently only available in the 1.5.0.BUILD-SNAPSHOT; a release candidate should be available in the next few weeks.
The project page shows how to get the snapshot from the snapshots repo.
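For anyone wanting to experiment with the snapshot, a minimal sketch of consuming those events might look like this (assuming the ListenerContainerConsumerFailedEvent type currently in 1.5.0.BUILD-SNAPSHOT; register the listener as a bean so the context delivers events to it):
import org.springframework.amqp.rabbit.listener.ListenerContainerConsumerFailedEvent;
import org.springframework.context.ApplicationListener;
public class ConsumerFailedListener implements ApplicationListener<ListenerContainerConsumerFailedEvent> {
    @Override
    public void onApplicationEvent(ListenerContainerConsumerFailedEvent event) {
        // inspect the cause; a 403 ACCESS_REFUSED from an exclusive consumer
        // should surface here, so the queue reference can be handled accordingly
        System.out.println("Consumer failed (fatal=" + event.isFatal() + "): " + event.getThrowable());
    }
}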
I'm currently experimenting with websockets using the Pusher library for Java.
Pusher automatically changes its connection state from CONNECTED to DISCONNECTED if the internet connection is lost. However, this only seems to happen after 150 seconds of being disconnected. This is very unfortunate, as in those 150s a lot of messages can get lost, and an effectively stale message can still be seen as the most up-to-date.
How can I know if the last received message is the most up-to-date? Or is there any way to decrease the timeout for the connection state?
Here is the Pusher code I'm using:
import com.pusher.client.Pusher;
import com.pusher.client.channel.Channel;
import com.pusher.client.channel.ChannelEventListener;
import com.pusher.client.channel.SubscriptionEventListener;
import com.pusher.client.connection.ConnectionEventListener;
import com.pusher.client.connection.ConnectionState;
import com.pusher.client.connection.ConnectionStateChange;
public class Testing {
public static void main(String[] args) throws Exception {
// Create a new Pusher instance
Pusher pusher = new Pusher("PusherKey");
pusher.connect(new ConnectionEventListener() {
@Override
public void onConnectionStateChange(ConnectionStateChange change) {
System.out.println("State changed to " + change.getCurrentState() +
" from " + change.getPreviousState());
}
@Override
public void onError(String message, String code, Exception e) {
System.out.println("There was a problem connecting!");
}
}, ConnectionState.ALL);
// Subscribe to a channel
Channel channel = pusher.subscribe("channel", new ChannelEventListener() {
@Override
public void onSubscriptionSucceeded(String channelName) {
System.out.println("Subscribed!");
}
@Override
public void onEvent(String channelName, String eventName, String data) {
System.out.println("desilo se");
}
});
// Bind to listen for events called "my-event" sent to "my-channel"
channel.bind("my-event", new SubscriptionEventListener() {
@Override
public void onEvent(String channel, String event, String data) {
System.out.println("Received event with data: " + data);
}
});
while(true){
try {
Thread.sleep(1000);
} catch(InterruptedException ex) {
Thread.currentThread().interrupt();
}
}
}
}
Just found the answer: initialize the Pusher object with a PusherOptions object.
Here is the PusherOptions class: http://pusher.github.io/pusher-java-client/src-html/com/pusher/client/PusherOptions.html
Here is a simple example of how I decreased my connection timeout from 150s to 15s:
// Define timeout parameters
PusherOptions opt = new PusherOptions();
opt.setActivityTimeout(10000L);
opt.setPongTimeout(5000L);
// Create a new Pusher instance
Pusher pusher = new Pusher(PUSHER_KEY, opt);
ActivityTimeout defines how often a ping is sent out to check the connectivity; PongTimeout defines how long to wait for a response to that ping before the connection is considered lost.
The minimum ActivityTimeout is 1000ms; however, such a low value is strongly discouraged by Pusher, probably to reduce server traffic.