How to mock result from KafkaTemplate - java

I have a method for sending kafka message like this:
@Async
public void sendMessage(String topicName, Message message) {
ListenableFuture<SendResult<String, Message>> future = kafkaTemplate.send(topicName, message);
future.addCallback(new ListenableFutureCallback<>() {
@Override
public void onSuccess(SendResult<String, Message> result) {
// do nothing
}
@Override
public void onFailure(Throwable ex) {
log.error("something wrong happened!");
}
});
}
Now I am writing unit tests for it. I would also like to test the two callback methods, onSuccess and onFailure, so my idea is to mock the KafkaTemplate, something like:
KafkaTemplate kafkaTemplate = Mockito.mock(KafkaTemplate.class);
But now I am stuck on mocking the result for these two cases:
when(kafkaTemplate.send(anyString(), any(Message.class))).thenReturn(????);
What should I put in thenReturn for the success case and for the failure case? Does anyone have an idea? Thank you very much!

You can mock the template, but it's better to mock the interface (KafkaOperations).
Sender sender = new Sender();
KafkaOperations template = mock(KafkaOperations.class);
SettableListenableFuture<SendResult<String, String>> future = new SettableListenableFuture<>();
when(template.send(anyString(), any(Message.class))).thenReturn(future);
sender.setTemplate(template);
sender.send(...);
future.set(new SendResult<>(...));
...or...
future.setException(...);
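For example, a complete test of the question's ListenableFuture-based sendMessage could look like the following sketch. It only illustrates the pattern above; the MessageSender wrapper name, its constructor, and the final assertion are assumptions, and it relies on org.springframework.util.concurrent.SettableListenableFuture plus Mockito's static mock/when/any:
@Test
public void sendMessageRunsTheFailureCallback() {
    KafkaTemplate<String, Message> kafkaTemplate = mock(KafkaTemplate.class);
    SettableListenableFuture<SendResult<String, Message>> future = new SettableListenableFuture<>();
    when(kafkaTemplate.send(anyString(), any(Message.class))).thenReturn(future);

    MessageSender sender = new MessageSender(kafkaTemplate); // hypothetical class containing sendMessage()
    sender.sendMessage("some-topic", mock(Message.class));

    // Completing the future exceptionally drives the onFailure callback;
    // future.set(new SendResult<>(producerRecord, recordMetadata)) would drive onSuccess instead.
    future.setException(new RuntimeException("boom"));

    // Assert on whatever onFailure does, e.g. captured log output (see the OutputCaptureExtension test below).
}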
EDIT
Updated to CompletableFuture (Spring for Apache Kafka 3.0.x and later)...
public class Sender {
private KafkaOperations<String, String> template;
public void setTemplate(KafkaOperations<String, String> template) {
this.template = template;
}
public void send(String topic, Message<?> data) {
CompletableFuture<SendResult<String, String>> future = this.template.send(data);
future.whenComplete((result, ex) -> {
if (ex == null) {
System.out.println(result);
}
else {
System.out.println(ex.getClass().getSimpleName() + "(" + ex.getMessage() + ")");
}
});
}
}
@ExtendWith(OutputCaptureExtension.class)
public class So57475464ApplicationTests {
@Test
public void test(CapturedOutput captureOutput) {
Message message = new GenericMessage<>("foo");
Sender sender = new Sender();
KafkaOperations template = mock(KafkaOperations.class);
CompletableFuture<SendResult<String, String>> future = new CompletableFuture<>();
given(template.send(any(Message.class))).willReturn(future);
sender.setTemplate(template);
sender.send("foo", message);
future.completeExceptionally(new RuntimeException("foo"));
assertThat(captureOutput).contains("RuntimeException(foo)");
}
}

Related

Unit testing Spring Cloud Stream Producer-Processor-Consumer Scenario

I have created a sample app for a producer-processor-consumer scenario using Spring Cloud Stream. Here, I have used the legacy annotation-based approach.
In unit tests, I wanted to test the simple scenario of producing a message and asserting the consumed message after it undergoes transformation. But I am not receiving the message at the consumer binding end. Please let me know what could be missing here.
Producer.java
@EnableBinding(MyProcessor.class)
public class Producer {
@Bean
@InboundChannelAdapter(value = MyProcessor.OUTPUT, poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<String> produceMessage() {
return () -> new GenericMessage<>("Hello Spring Cloud World >>> " + Instant.now());
}
}
TransformProcessor.java
@EnableBinding(MyProcessor.class)
public class TransformProcessor {
@Transformer(inputChannel = MyProcessor.OUTPUT, outputChannel = MyProcessor.INPUT)
public String transform(String message) {
System.out.println("Transforming the message: " + message);
return message.toUpperCase();
}
}
Consumer.java
@EnableBinding(MyProcessor.class)
public class Consumer {
@StreamListener(MyProcessor.INPUT)
public void consume(String message) {
System.out.println("Consuming transformed message: " + message);
}
}
MyProcessor.java
public interface MyProcessor {
String INPUT = "my-input";
final static String OUTPUT = "my-output";
@Input(INPUT)
SubscribableChannel anInput();
@Output(OUTPUT)
MessageChannel anOutput();
}
SpringCloudStreamLegacyApplicationTests.java
@SpringBootTest
class SpringCloudStreamLegacyApplicationTests {
@Autowired
private MyProcessor myProcessor;
@Autowired
private MessageCollector messageCollector;
@Test
public void testConsumer() {
myProcessor.anOutput().send(new GenericMessage<byte[]>("hello".getBytes()));
Message<?> poll = messageCollector.forChannel(myProcessor.anInput()).poll();
System.out.println("Received: " + poll.getPayload());
}
}
Here, I am expecting a message to be received in the handleMessage method.
Note that I am using the following dependency for tests:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-test-support</artifactId>
<scope>test</scope>
</dependency>

Spring WebFlux reactive WebSocket prevent connection closing

I'm working on a simple chat module for my application, using Spring WebFlux with ReactiveMongoRepository on the backend and Angular 4 on the front end. I'm able to receive data through the WebSocketSession, but after streaming all messages from the db I want to keep the connection open so I can update the message list. Can anyone give me clues on how to achieve that, or am I following the wrong assumptions?
Java backend responsible for the WebSocket; my subscriber only logs the current state, nothing relevant there:
WebFluxConfiguration:
@Configuration
@EnableWebFlux
public class WebSocketConfig {
private final WebSocketHandler webSocketHandler;
@Autowired
public WebSocketConfig(WebSocketHandler webSocketHandler) {
this.webSocketHandler = webSocketHandler;
}
@Bean
@Primary
public HandlerMapping webSocketMapping() {
Map<String, Object> map = new HashMap<>();
map.put("/websocket-messages", webSocketHandler);
SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
mapping.setOrder(10);
mapping.setUrlMap(map);
return mapping;
}
@Bean
public WebSocketHandlerAdapter handlerAdapter() {
return new WebSocketHandlerAdapter();
}
}
WebSocketHandler Implementation
@Component
public class MessageWebSocketHandler implements WebSocketHandler {
private final MessageRepository messageRepository;
private ObjectMapper mapper = new ObjectMapper();
private MessageSubscriber subscriber = new MessageSubscriber();
@Autowired
public MessageWebSocketHandler(MessageRepository messageRepository) {
this.messageRepository = messageRepository;
}
@Override
public Mono<Void> handle(WebSocketSession session) {
session.receive()
.map(WebSocketMessage::getPayloadAsText)
.map(this::toMessage)
.subscribe(subscriber::onNext, subscriber::onError, subscriber::onComplete);
return session.send(
messageRepository.findAll()
.map(this::toJSON)
.map(session::textMessage));
}
private String toJSON(Message message) {
try {
return mapper.writeValueAsString(message);
} catch (JsonProcessingException e) {
throw new RuntimeException(e);
}
}
private Message toMessage(String json) {
try {
return mapper.readValue(json, Message.class);
} catch (IOException e) {
throw new RuntimeException("Invalid JSON:" + json, e);
}
}
}
and MongoRepo
@Repository
public interface MessageRepository extends ReactiveMongoRepository<Message, String> {
}
FrontEnd Handling:
#Injectable()
export class WebSocketService {
private subject: Rx.Subject<MessageEvent>;
constructor() {
}
public connect(url): Rx.Subject<MessageEvent> {
if (!this.subject) {
this.subject = this.create(url);
console.log('Successfully connected: ' + url);
}
return this.subject;
}
private create(url): Rx.Subject<MessageEvent> {
const ws = new WebSocket(url);
const observable = Rx.Observable.create(
(obs: Rx.Observer<MessageEvent>) => {
ws.onmessage = obs.next.bind(obs);
ws.onerror = obs.error.bind(obs);
ws.onclose = obs.complete.bind(obs);
return ws.close.bind(ws);
});
const observer = {
next: (data: Object) => {
if (ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify(data));
}
}
};
return Rx.Subject.create(observer, observable);
}
}
In another service, I'm mapping the observable from the response to my type:
constructor(private wsService: WebSocketService) {
this.messages = <Subject<MessageEntity>>this.wsService
.connect('ws://localhost:8081/websocket-messages')
.map((response: MessageEvent): MessageEntity => {
const data = JSON.parse(response.data);
return new MessageEntity(data.id, data.user_id, data.username, data.message, data.links);
});
}
And finally the subscription, with the send function which I can't use because of the closed connection:
ngOnInit() {
this.messages = [];
this._ws_subscription = this.chatService.messages.subscribe(
(message: MessageEntity) => {
this.messages.push(message);
},
error2 => {
console.log(error2.json());
},
() => {
console.log('Closed');
}
);
}
sendTestMessage() {
this.chatService.messages.next(new MessageEntity(null, '59ca30ac87e77d0f38237739', 'mickl', 'test message angular', null));
}
Assuming your chat messages are being persisted to the datastore as they're being received, you could use the tailable cursors feature in Spring Data MongoDB Reactive (see reference documentation).
So you could create a new method on your repository like:
public interface MessageRepository extends ReactiveSortingRepository<Message, String> {
@Tailable
Flux<Message> findWithTailableCursor();
}
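As an illustration (not part of the original answer), the handler from the question could then send from the tailable flux instead of findAll(), assuming incoming messages are persisted as described above so the capped collection keeps growing:
@Override
public Mono<Void> handle(WebSocketSession session) {
    // incoming messages handled as before (and assumed to be saved so the tailable cursor sees them)
    session.receive()
            .map(WebSocketMessage::getPayloadAsText)
            .map(this::toMessage)
            .subscribe(subscriber::onNext, subscriber::onError, subscriber::onComplete);
    // the tailable cursor keeps emitting as new documents are appended to the capped collection,
    // so the outbound stream (and therefore the WebSocket session) is not completed after the initial batch
    return session.send(
            messageRepository.findWithTailableCursor()
                    .map(this::toJSON)
                    .map(session::textMessage));
}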
Note that tailable cursors have some limitations: your mongo collection needs to be capped, and entries are streamed in their order of insertion.
Spring WebFlux WebSocket support does not yet support STOMP or message brokers, though those might be a better choice for such a use case.

How can I reduce the amount of boilerplate code in Vert.x

I've read several tutorials on vertx.io, but I still can't understand how I can minimise repeated code.
For example, I need to implement a RESTful service which gets data from a DB. I've prepared 2 bean classes for the tables (Customer, Administrator) and implemented the service classes:
AdministratorService.java:
public void getAll(RoutingContext routingContext) {
jdbc.getConnection(ar -> {
SQLConnection connection = ar.result();
connection.query(Queries.SELECT_ALL_ADMINS, result -> {
List<Administrator> admins = result.result().getRows().stream().map(Administrator::new).collect(Collectors.toList());
routingContext.response()
.putHeader("content-type", "application/json; charset=utf-8")
.end(Json.encodePrettily(admins));
connection.close();
});
});
}
public void getOneById(RoutingContext routingContext) {
final String id = routingContext.request().getParam("id");
if (id == null) {
routingContext.response().setStatusCode(400).end();
} else {
jdbc.getConnection(ar -> {
// Read the request's content and create an instance of Administrator.
SQLConnection connection = ar.result();
select(id, connection, Queries.SELECT_ONE_ADMIN_BY_ID, result -> {
if (result.succeeded()) {
routingContext.response()
.setStatusCode(200)
.putHeader("content-type", "application/json; charset=utf-8")
.end(Json.encodePrettily(result.result()));
} else {
routingContext.response()
.setStatusCode(404).end();
}
connection.close();
});
});
}
}
CustomerService.java:
public void getAll(RoutingContext routingContext) {
jdbc.getConnection(ar -> {
SQLConnection connection = ar.result();
connection.query(Queries.SELECT_ALL_CUSTOMERS, result -> {
List<Customer> customers = result.result().getRows().stream().map(Customer::new).collect(Collectors.toList());
routingContext.response()
.putHeader("content-type", "application/json; charset=utf-8")
.end(Json.encodePrettily(customers));
connection.close();
});
});
}
public void getOneById(RoutingContext routingContext) {
final String id = routingContext.request().getParam("id");
if (id == null) {
routingContext.response().setStatusCode(400).end();
} else {
jdbc.getConnection(ar -> {
// Read the request's content and create an instance of Administrator.
SQLConnection connection = ar.result();
select(id, connection, Queries.SELECT_ONE_CUSTOMER_BY_ID, result -> {
if (result.succeeded()) {
routingContext.response()
.setStatusCode(200)
.putHeader("content-type", "application/json; charset=utf-8")
.end(Json.encodePrettily(result.result()));
} else {
routingContext.response()
.setStatusCode(404).end();
}
connection.close();
});
});
}
}
It's not hard to see that the part
routingContext.response()
.putHeader("content-type", "application/json; charset=utf-8")
is repeated in each method. And generally speaking, the only difference between these classes is the SQL queries and the bean classes.
Could you share your example or show how to change my methods?
Vert.x is not a framework, which makes it easy for some developers to design their own structure, but for others it becomes a nightmare. What you are looking for is a pre-designed framework that comes ready with a router, controllers, and DB connections. That's not what Vert.x is; it's more like a library that you extend the way you want.
I see in your code that you are getting a SQL connection in every service function. If you have worked with other frameworks like Spring, the connection is already available through DI.
You need to implement DI and some MVC-style design, and then your boilerplate code will go away.
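For instance, here is a rough sketch (not a full framework; the helper names are illustrative, assuming the question's JDBCClient field and JsonObject-based bean constructors) of pulling the repeated query-and-respond logic from the two services into a shared base class:
public abstract class AbstractJsonService {
    protected final JDBCClient jdbc;

    protected AbstractJsonService(JDBCClient jdbc) {
        this.jdbc = jdbc;
    }

    // one generic "select all" handler, parameterised by the SQL and the row-to-bean constructor reference
    protected <T> void getAll(RoutingContext routingContext, String sql, Function<JsonObject, T> rowMapper) {
        jdbc.getConnection(ar -> {
            SQLConnection connection = ar.result();
            connection.query(sql, result -> {
                List<T> items = result.result().getRows().stream().map(rowMapper).collect(Collectors.toList());
                json(routingContext, 200, Json.encodePrettily(items));
                connection.close();
            });
        });
    }

    // the repeated response boilerplate lives in exactly one place
    protected void json(RoutingContext routingContext, int status, String body) {
        routingContext.response()
                .setStatusCode(status)
                .putHeader("content-type", "application/json; charset=utf-8")
                .end(body);
    }
}
With that in place, AdministratorService.getAll collapses to something like getAll(routingContext, Queries.SELECT_ALL_ADMINS, Administrator::new), and CustomerService differs only in the query constant and the constructor reference.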
I have done something similar, but for MongoDB.
VertX with MongoDB
This is an example:
1. First, deploy the verticle
/**
* deploy verticle
*/
@PostConstruct
public void deployVerticle() {
Vertx vertx = Vertx.vertx();
log.info("deply vertx start...... ");
vertx.deployVerticle(dbVerticle);
DeploymentOptions options = new DeploymentOptions();
options.setInstances(4);
StaticServer.setApplicationContext(context);
vertx.deployVerticle(StaticServer.class.getName(), options);
log.info("deply vertx end...... ");
}
2. Second, the StaticServer injects the routers
@Override
public void start() throws Exception {
Map<String, Api> apis = applicationContext.getBeansOfType(Api.class);
JavaConfig javaConfig = applicationContext.getBean(JavaConfig.class);
Router router = Router.router(vertx);
apis.forEach((k, v) -> RouterUtils.injectRouter(v, router));
vertx.createHttpServer().requestHandler(router).listen(javaConfig.httpPort());
}
public static void injectRouter(Api api, Router router) {
Map<Method, RequestMapping> annotatedMethods = MethodIntrospector.selectMethods(api.getClass(), (MetadataLookup<RequestMapping>)
method -> AnnotatedElementUtils.findMergedAnnotation(method, RequestMapping.class));
RequestMapping annotatedClass = api.getClass().getDeclaredAnnotation(RequestMapping.class);
annotatedMethods.forEach((method, request) -> {
Class<?>[] params = method.getParameterTypes();
Assert.isAssignable(RoutingContext.class, params[0]);
router.route(request.method(), annotatedClass.value() + request.path()).handler(context -> {
try {
context.response().putHeader("content-type", "application/json; charset=utf-8");
method.invoke(api, context);
} catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
log.error("e :", e.getCause());
}
});
});
}
3. Last, publish the query event
#RequestMapping("/")
public void root(RoutingContext context) {
JsonObject query = new JsonObject();
query.put("sql", "select * from user where username = ?");
query.put("params", (new JsonArray()).add("zhengfc"));
context.vertx().eventBus().request("db.query", query, ar -> {
if (ar.succeeded()) {
context.response().end(ar.result().body().toString());
} else {
log.error("db.query failed: {}", ar.cause());
}
});
}

Spring TCP socket: authorizing clients and handling pending responses

The Spring framework supports TCP connections as well. I wrote the code below to set up a simple socket server, and I am confused about how to add the features below to my socket server:
authorizing clients based on a unique identifier (for example a client secret received from the client, maybe using TCP connection events)
send a message directly to a specific client (based on its identifier)
broadcast a message
UPDATE:
Config.sendMessage added to send a message to a single client
Config.broadCast added to broadcast a message
authorizeIncomingConnection added to authorize clients and accept or reject connections
tcpConnections static field added to keep the TcpConnection sources from TCP events
Questions!
Is using the tcpConnections HashMap a good idea?
Is the authorization method I implemented a good one?
Main.java
@SpringBootApplication
public class Main {
public static void main(final String[] args) {
SpringApplication.run(Main.class, args);
}
}
Config.java
@EnableIntegration
@IntegrationComponentScan
@Configuration
public class Config implements ApplicationListener<TcpConnectionEvent> {
private static final Logger LOGGER = Logger.getLogger(Config.class.getName());
@Bean
public AbstractServerConnectionFactory AbstractServerConnectionFactory() {
return new TcpNetServerConnectionFactory(8181);
}
@Bean
public TcpInboundGateway TcpInboundGateway(AbstractServerConnectionFactory connectionFactory) {
TcpInboundGateway inGate = new TcpInboundGateway();
inGate.setConnectionFactory(connectionFactory);
inGate.setRequestChannel(getMessageChannel());
return inGate;
}
@Bean
public MessageChannel getMessageChannel() {
return new DirectChannel();
}
@MessageEndpoint
public class Echo {
@Transformer(inputChannel = "getMessageChannel")
public String convert(byte[] bytes) throws Exception {
return new String(bytes);
}
}
private static ConcurrentHashMap<String, TcpConnection> tcpConnections = new ConcurrentHashMap<>();
@Override
public void onApplicationEvent(TcpConnectionEvent tcpEvent) {
TcpConnection source = (TcpConnection) tcpEvent.getSource();
if (tcpEvent instanceof TcpConnectionOpenEvent) {
LOGGER.info("Socket Opened " + source.getConnectionId());
tcpConnections.put(tcpEvent.getConnectionId(), source);
if (!authorizeIncomingConnection(source.getSocketInfo())) {
LOGGER.warn("Socket Rejected " + source.getConnectionId());
source.close();
}
} else if (tcpEvent instanceof TcpConnectionCloseEvent) {
LOGGER.info("Socket Closed " + source.getConnectionId());
tcpConnections.remove(source.getConnectionId());
}
}
private boolean authorizeIncomingConnection(SocketInfo socketInfo) {
//Authorization Logic , Like Ip,Mac Address WhiteList or anyThing else !
return (System.currentTimeMillis() / 1000) % 2 == 0;
}
public static String broadCast(String message) {
Set<String> connectionIds = tcpConnections.keySet();
int successCounter = 0;
int FailureCounter = 0;
for (String connectionId : connectionIds) {
try {
sendMessage(connectionId, message);
successCounter++;
} catch (Exception e) {
FailureCounter++;
}
}
return "BroadCast Result , Success : " + successCounter + " Failure : " + FailureCounter;
}
public static void sendMessage(String connectionId, final String message) throws Exception {
tcpConnections.get(connectionId).send(new Message<String>() {
@Override
public String getPayload() {
return message;
}
@Override
public MessageHeaders getHeaders() {
return null;
}
});
}
}
MainController.java
@Controller
public class MainController {
#RequestMapping("/notify/{connectionId}/{message}")
#ResponseBody
public String home(#PathVariable String connectionId, #PathVariable String message) {
try {
Config.sendMessage(connectionId, message);
return "Client Notified !";
} catch (Exception e) {
return "Failed To Notify Client , cause : \n " + e.toString();
}
}
#RequestMapping("/broadCast/{message}")
#ResponseBody
public String home(#PathVariable String message) {
return Config.broadCast(message);
}
}
Usage:
Socket request/response mode
Notify a single client:
http://localhost:8080/notify/{connectionId}/{message}
Broadcast:
http://localhost:8080/broadCast/{message}
The TcpConnectionOpenEvent contains a connectionId property. Each message coming from that client will have the same property in the IpHeaders.CONNECTION_ID message header.
Add a custom router that keeps track of the logged-on state of each connection.
Lookup the connection id and if not authenticated, route to a challenge/response subflow.
When authenticated, route to the normal flow.
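A minimal sketch of that routing idea (the channel names and the authenticatedConnections set are assumptions, not from the original answer):
@Router(inputChannel = "fromTcpChannel")
public String route(Message<?> message) {
    String connectionId = (String) message.getHeaders().get(IpHeaders.CONNECTION_ID);
    // already-authenticated connections go to the normal flow, everything else to the challenge/response subflow
    return authenticatedConnections.contains(connectionId) ? "normalFlowChannel" : "challengeChannel";
}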
To use arbitrary messaging (rather than request/response) use a TcpReceivingChannelAdapter and TcpSendingMessageHandler instead of an inbound gateway. Both configured to use the same connection factory. For each message sent to the message handler, add the IpHeaders.CONNECTION_ID header to target the specific client.
To broadcast, send a message for each connection id.
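And a sketch of the arbitrary-messaging side (again only illustrative; the bean wiring and the fromTcpChannel bean are assumptions):
@Bean
public TcpReceivingChannelAdapter receivingAdapter(AbstractServerConnectionFactory connectionFactory) {
    TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
    adapter.setConnectionFactory(connectionFactory);
    adapter.setOutputChannel(fromTcpChannel()); // inbound messages carry IpHeaders.CONNECTION_ID
    return adapter;
}

@Bean
public TcpSendingMessageHandler sendingHandler(AbstractServerConnectionFactory connectionFactory) {
    TcpSendingMessageHandler handler = new TcpSendingMessageHandler();
    handler.setConnectionFactory(connectionFactory);
    return handler;
}

// targeting a single client: the connection id selects the socket the handler writes to
public void sendTo(TcpSendingMessageHandler handler, String connectionId, String payload) {
    handler.handleMessage(MessageBuilder.withPayload(payload)
            .setHeader(IpHeaders.CONNECTION_ID, connectionId)
            .build());
}
Broadcasting is then just a loop over the known connection ids, calling sendTo for each one.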

RxJava onErrorResumeNext()

I have two observables (named A and B for simplicity) and one subscriber. The subscriber subscribes to A, and if there's an error on A then B (which is the fallback) kicks in. Now, whenever A hits an error, B gets called fine; however, A calls onComplete() on the subscriber, so B's response never reaches the subscriber even if B executes successfully.
Is this the normal behaviour? I thought onErrorResumeNext() should continue the stream and notify the subscriber once completed as noted in the documentation (https://github.com/ReactiveX/RxJava/wiki/Error-Handling-Operators#onerrorresumenext).
This is the overall structure of what I'm doing (several "boring" parts omitted):
public Observable<ModelA> observeGetAPI(){
return retrofitAPI.getObservableAPI1()
.flatMap(observableApi1Response -> {
ModelA model = new ModelA();
model.setApi1Response(observableApi1Response);
return retrofitAPI.getObservableAPI2()
.map(observableApi2Response -> {
// Blah blah blah...
return model;
})
.onErrorResumeNext(observeGetAPIFallback(model))
.subscribeOn(Schedulers.newThread());
})
.onErrorReturn(throwable -> {
// Blah blah blah...
return model;
})
.subscribeOn(Schedulers.newThread());
}
private Observable<ModelA> observeGetAPIFallback(ModelA model){
return retrofitAPI.getObservableAPI3().map(observableApi3Response -> {
// Blah blah blah...
return model;
}).onErrorReturn(throwable -> {
// Blah blah blah...
return model;
})
.subscribeOn(Schedulers.immediate());
}
Subscription subscription;
subscription = observeGetAPI().subscribe(modelA -> {
// IF THERE'S AN ERROR WE NEVER GET B RESPONSE HERE...
}, throwable ->{
// WE NEVER GET HERE... onErrorResumeNext()
},
() -> {
// IN CASE OF AN ERROR WE GET STRAIGHT HERE; MEANWHILE, B GETS EXECUTED
}
);
Any ideas what I'm doing wrong?
Thanks!
EDIT:
Here's a rough timeline of what's happening:
---> HTTP GET REQUEST B
<--- HTTP 200 REQUEST B RESPONSE (SUCCESS)
---> HTTP GET REQUEST A
<--- HTTP 200 REQUEST A RESPONSE (FAILURE!)
---> HTTP GET FALLBACK A
** onComplete() called! ---> Subscriber never gets fallback response since onComplete() gets called before time.
<--- HTTP 200 FALLBACK A RESPONSE (SUCCESS)
And here's a link to a simple diagram I made which represents what I want to happen:
Diagram
The Rx calls used in the following should simulate what you are doing with Retrofit.
fallbackObservable =
Observable
.create(new Observable.OnSubscribe<String>() {
@Override
public void call(Subscriber<? super String> subscriber) {
logger.v("emitting A Fallback");
subscriber.onNext("A Fallback");
subscriber.onCompleted();
}
})
.delay(1, TimeUnit.SECONDS)
.onErrorReturn(new Func1<Throwable, String>() {
@Override
public String call(Throwable throwable) {
logger.v("emitting Fallback Error");
return "Fallback Error";
}
})
.subscribeOn(Schedulers.immediate());
stringObservable =
Observable
.create(new Observable.OnSubscribe<String>() {
@Override
public void call(Subscriber<? super String> subscriber) {
logger.v("emitting B");
subscriber.onNext("B");
subscriber.onCompleted();
}
})
.delay(1, TimeUnit.SECONDS)
.flatMap(new Func1<String, Observable<String>>() {
@Override
public Observable<String> call(String s) {
logger.v("flatMapping B");
return Observable
.create(new Observable.OnSubscribe<String>() {
@Override
public void call(Subscriber<? super String> subscriber) {
logger.v("emitting A");
subscriber.onNext("A");
subscriber.onCompleted();
}
})
.delay(1, TimeUnit.SECONDS)
.map(new Func1<String, String>() {
@Override
public String call(String s) {
logger.v("A completes but contains invalid data - throwing error");
throw new NotImplementedException("YUCK!");
}
})
.onErrorResumeNext(fallbackObservable)
.subscribeOn(Schedulers.newThread());
}
})
.onErrorReturn(new Func1<Throwable, String>() {
@Override
public String call(Throwable throwable) {
logger.v("emitting Return Error");
return "Return Error";
}
})
.subscribeOn(Schedulers.newThread());
subscription = stringObservable.subscribe(
new Action1<String>() {
@Override
public void call(String s) {
logger.v("onNext " + s);
}
},
new Action1<Throwable>() {
@Override
public void call(Throwable throwable) {
logger.v("onError");
}
},
new Action0() {
@Override
public void call() {
logger.v("onCompleted");
}
});
The output from the log statements is:
RxNewThreadScheduler-1 emitting B
RxComputationThreadPool-1 flatMapping B
RxNewThreadScheduler-2 emitting A
RxComputationThreadPool-2 A completes but contains invalid data - throwing error
RxComputationThreadPool-2 emitting A Fallback
RxComputationThreadPool-1 onNext A Fallback
RxComputationThreadPool-1 onCompleted
This seems like what you are looking for but maybe I'm missing something.
