I've read several tutorials on vertx.io, but I still can't understand how to minimise repeated code.
For example, I need to implement a RESTful service which gets data from a DB. I've prepared two bean classes for tables (Customer, Administrator) and implemented the service classes:
AdministratorService.java:
public void getAll(RoutingContext routingContext) {
    jdbc.getConnection(ar -> {
        SQLConnection connection = ar.result();
        connection.query(Queries.SELECT_ALL_ADMINS, result -> {
            List<Administrator> admins = result.result().getRows().stream()
                    .map(Administrator::new)
                    .collect(Collectors.toList());
            routingContext.response()
                    .putHeader("content-type", "application/json; charset=utf-8")
                    .end(Json.encodePrettily(admins));
            connection.close();
        });
    });
}
public void getOneById(RoutingContext routingContext) {
    final String id = routingContext.request().getParam("id");
    if (id == null) {
        routingContext.response().setStatusCode(400).end();
    } else {
        jdbc.getConnection(ar -> {
            // Read the request's content and create an instance of Administrator.
            SQLConnection connection = ar.result();
            select(id, connection, Queries.SELECT_ONE_ADMIN_BY_ID, result -> {
                if (result.succeeded()) {
                    routingContext.response()
                            .setStatusCode(200)
                            .putHeader("content-type", "application/json; charset=utf-8")
                            .end(Json.encodePrettily(result.result()));
                } else {
                    routingContext.response().setStatusCode(404).end();
                }
                connection.close();
            });
        });
    }
}
CustomerService.java:
public void getAll(RoutingContext routingContext) {
    jdbc.getConnection(ar -> {
        SQLConnection connection = ar.result();
        connection.query(Queries.SELECT_ALL_CUSTOMERS, result -> {
            List<Customer> customers = result.result().getRows().stream()
                    .map(Customer::new)
                    .collect(Collectors.toList());
            routingContext.response()
                    .putHeader("content-type", "application/json; charset=utf-8")
                    .end(Json.encodePrettily(customers));
            connection.close();
        });
    });
}
public void getOneById(RoutingContext routingContext) {
    final String id = routingContext.request().getParam("id");
    if (id == null) {
        routingContext.response().setStatusCode(400).end();
    } else {
        jdbc.getConnection(ar -> {
            // Read the request's content and create an instance of Customer.
            SQLConnection connection = ar.result();
            select(id, connection, Queries.SELECT_ONE_CUSTOMER_BY_ID, result -> {
                if (result.succeeded()) {
                    routingContext.response()
                            .setStatusCode(200)
                            .putHeader("content-type", "application/json; charset=utf-8")
                            .end(Json.encodePrettily(result.result()));
                } else {
                    routingContext.response().setStatusCode(404).end();
                }
                connection.close();
            });
        });
    }
}
It's not hard to see that the part
routingContext.response()
    .putHeader("content-type", "application/json; charset=utf-8")
is repeated in each method. And generally speaking, the only difference between these classes is the SQL queries and the bean classes.
Could you share your example or show how to change my methods?
Vert.x is not a framework; that makes it easy for some developers to design their own structure, but for others it becomes a nightmare. What you are looking for is a pre-designed framework that comes ready with a router, controllers, and DB connections. That's not what Vert.x is; it's more like a library that you extend the way you want.
I see in your code that you are getting an SQL connection in every service function. If you have worked with other frameworks like Spring, the connection is already available using DI.
You need to implement DI and some MVC design, and then your boilerplate code will be removed.
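To make that concrete for the code in the question: the only variation between the two services is the SQL string and the bean constructor, so both can become parameters of a small generic base class. This is only a minimal sketch under assumed names (AbstractCrudService is mine, not from the original post), and it keeps the original's lack of error handling for brevity:
// Generic base: one getAll implementation shared by all bean types.
public abstract class AbstractCrudService<T> {

    private final JDBCClient jdbc;
    private final String selectAllQuery;
    private final Function<JsonObject, T> beanMapper;

    protected AbstractCrudService(JDBCClient jdbc, String selectAllQuery,
                                  Function<JsonObject, T> beanMapper) {
        this.jdbc = jdbc;
        this.selectAllQuery = selectAllQuery;
        this.beanMapper = beanMapper;
    }

    public void getAll(RoutingContext routingContext) {
        jdbc.getConnection(ar -> {
            SQLConnection connection = ar.result();
            connection.query(selectAllQuery, result -> {
                List<T> items = result.result().getRows().stream()
                        .map(beanMapper)
                        .collect(Collectors.toList());
                routingContext.response()
                        .putHeader("content-type", "application/json; charset=utf-8")
                        .end(Json.encodePrettily(items));
                connection.close();
            });
        });
    }
}

// Each concrete service then shrinks to its two varying parts:
public class AdministratorService extends AbstractCrudService<Administrator> {
    public AdministratorService(JDBCClient jdbc) {
        super(jdbc, Queries.SELECT_ALL_ADMINS, Administrator::new);
    }
}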
I have done something similar, but for MongoDB.
VertX with MongoDB
Here is an example.
1. First, deploy the verticles:
/**
 * Deploy verticles.
 */
@PostConstruct
public void deployVerticle() {
    Vertx vertx = Vertx.vertx();
    log.info("deploy vertx start...... ");
    vertx.deployVerticle(dbVerticle);
    DeploymentOptions options = new DeploymentOptions();
    options.setInstances(4);
    StaticServer.setApplicationContext(context);
    vertx.deployVerticle(StaticServer.class.getName(), options);
    log.info("deploy vertx end...... ");
}
2. Second, StaticServer injects the routes:
@Override
public void start() throws Exception {
    Map<String, Api> apis = applicationContext.getBeansOfType(Api.class);
    JavaConfig javaConfig = applicationContext.getBean(JavaConfig.class);
    Router router = Router.router(vertx);
    apis.forEach((k, v) -> RouterUtils.injectRouter(v, router));
    vertx.createHttpServer().requestHandler(router).listen(javaConfig.httpPort());
}
public static void injectRouter(Api api, Router router) {
    Map<Method, RequestMapping> annotatedMethods = MethodIntrospector.selectMethods(api.getClass(),
            (MetadataLookup<RequestMapping>) method -> AnnotatedElementUtils.findMergedAnnotation(method, RequestMapping.class));
    RequestMapping annotatedClass = api.getClass().getDeclaredAnnotation(RequestMapping.class);
    annotatedMethods.forEach((method, request) -> {
        Class<?>[] params = method.getParameterTypes();
        Assert.isAssignable(RoutingContext.class, params[0]);
        router.route(request.method(), annotatedClass.value() + request.path()).handler(context -> {
            try {
                context.response().putHeader("content-type", "application/json; charset=utf-8");
                method.invoke(api, context);
            } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
                log.error("e :", e.getCause());
            }
        });
    });
}
3. Last, publish a query event:
@RequestMapping("/")
public void root(RoutingContext context) {
    JsonObject query = new JsonObject();
    query.put("sql", "select * from user where username = ?");
    query.put("params", new JsonArray().add("zhengfc"));
    context.vertx().eventBus().request("db.query", query, ar -> {
        if (ar.succeeded()) {
            context.response().end(ar.result().body().toString());
        } else {
            log.error("db.query failed: {}", ar.cause());
        }
    });
}
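For completeness, the dbVerticle referenced in step 1 is not shown in the original. A minimal sketch of what its "db.query" consumer might look like with a SQL backend (the JDBCClient configuration and the class name are my assumptions):
public class DbVerticle extends AbstractVerticle {

    private JDBCClient jdbc;

    @Override
    public void start() {
        // The JDBC configuration is assumed to arrive via the deployment config().
        jdbc = JDBCClient.createShared(vertx, config());
        vertx.eventBus().<JsonObject>consumer("db.query", msg -> {
            JsonObject body = msg.body();
            jdbc.queryWithParams(body.getString("sql"), body.getJsonArray("params"), res -> {
                if (res.succeeded()) {
                    // Reply with the rows serialized as a JSON array.
                    msg.reply(new JsonArray(res.result().getRows()).encode());
                } else {
                    msg.fail(500, res.cause().getMessage());
                }
            });
        });
    }
}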
I am developing a prototype for a new project. The idea is to provide a reactive Spring Boot microservice to bulk index documents in Elasticsearch. Elasticsearch provides a High Level REST Client which offers an async method to bulk process indexing requests. Async delivers callbacks using listeners, as mentioned here. The callbacks receive index responses (per request) in batches. I am trying to send these responses back to the client as a Flux. I have come up with something based on this blog post.
Controller
@RestController
public class AppController {

    @SuppressWarnings("unchecked")
    @RequestMapping(value = "/test3", method = RequestMethod.GET)
    public Flux<String> index3() {
        ElasticAdapter es = new ElasticAdapter();
        JSONObject json = new JSONObject();
        json.put("TestDoc", "Stack123");
        Flux<String> fluxResponse = es.bulkIndex(json);
        return fluxResponse;
    }
}
ElasticAdapter
@Component
class ElasticAdapter {

    String indexName = "test2";
    private final RestHighLevelClient client;
    private final ObjectMapper mapper;
    private int processed = 1;

    Flux<String> bulkIndex(JSONObject doc) {
        return bulkIndexDoc(doc)
                .doOnError(e -> System.out.print("Unable to index {}" + doc + e));
    }

    private Flux<String> bulkIndexDoc(JSONObject doc) {
        return Flux.create(sink -> {
            try {
                doBulkIndex(doc, bulkListenerToSink(sink));
            } catch (JsonProcessingException e) {
                sink.error(e);
            }
        });
    }
    private void doBulkIndex(JSONObject doc, BulkProcessor.Listener listener) throws JsonProcessingException {
        System.out.println("Going to submit index request");
        BiConsumer<BulkRequest, ActionListener<BulkResponse>> bulkConsumer =
                (request, bulkListener) ->
                        client.bulkAsync(request, RequestOptions.DEFAULT, bulkListener);
        BulkProcessor.Builder builder = BulkProcessor.builder(bulkConsumer, listener);
        builder.setBulkActions(10);
        BulkProcessor bulkProcessor = builder.build();
        // Submitting 5,000 index requests (repeating the same JSON)
        for (int i = 0; i < 5000; i++) {
            IndexRequest indexRequest = new IndexRequest(indexName, "person", i + 1 + "");
            String json = doc.toJSONString();
            indexRequest.source(json, XContentType.JSON);
            bulkProcessor.add(indexRequest);
        }
        System.out.println("Submitted all docs");
    }
    private BulkProcessor.Listener bulkListenerToSink(FluxSink<String> sink) {
        return new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId, BulkRequest request) {
            }

            @SuppressWarnings("unchecked")
            @Override
            public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
                for (BulkItemResponse bulkItemResponse : response) {
                    JSONObject json = new JSONObject();
                    json.put("id", bulkItemResponse.getResponse().getId());
                    json.put("status", bulkItemResponse.getResponse().getResult().name());
                    sink.next(json.toJSONString());
                    processed++;
                }
                if (processed >= 5000) {
                    sink.complete();
                }
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
                failure.printStackTrace();
                sink.error(failure);
            }
        };
    }
    public ElasticAdapter() {
        // Logic to initialize Elasticsearch Rest Client
    }
}
I used FluxSink to create the Flux of responses to send back to the client. At this point, I have no idea whether this is correct or not.
My expectation is that the calling client should receive the responses in batches of 10 (because the bulk processor processes them in batches of 10: builder.setBulkActions(10);). I tried to consume the endpoint using a Spring WebFlux client, but was unable to work it out. This is what I tried:
WebClient
public class FluxClient {
    public static void main(String[] args) {
        WebClient client = WebClient.create("http://localhost:8080");
        Flux<String> responseFlux = client.get()
                .uri("/test3")
                .retrieve()
                .bodyToFlux(String.class);
        responseFlux.subscribe(System.out::println);
    }
}
Nothing is printed to the console as I expected. I tried System.out.println(responseFlux.blockFirst()); it prints all the responses as a single batch at the end, not in batches of 10.
If my approach is correct, what is the correct way to consume it? For the solution I have in mind, this client will reside in another web app.
Notes: my understanding of the Reactor API is limited. The version of Elasticsearch used is 6.8.
So I made the following changes to your code.
In ElasticAdapter,
public Flux<Object> bulkIndex(JSONObject doc) {
    return bulkIndexDoc(doc)
            .subscribeOn(Schedulers.elastic(), true)
            .doOnError(e -> System.out.print("Unable to index {}" + doc + e));
}
Invoked subscribeOn(Scheduler, requestOnSeparateThread) on the Flux; I learned about it from https://github.com/spring-projects/spring-framework/issues/21507.
In FluxClient,
Flux<String> responseFlux = client.get()
        .uri("/test3")
        .headers(httpHeaders -> httpHeaders.set("Accept", "text/event-stream"))
        .retrieve()
        .bodyToFlux(String.class);
responseFlux.delayElements(Duration.ofSeconds(1)).subscribe(System.out::println);
Added "Accept" header as "text/event-stream" and delayed Flux elements.
With the above changes, was able to get the response in real time from the server.
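For reference, the server can also advertise the stream media type itself instead of relying on the client's Accept header. A sketch of what the controller mapping might look like; the produces attribute is my addition, not part of the original code:
// Declaring the endpoint as a server-sent event stream on the server side.
@RequestMapping(value = "/test3", method = RequestMethod.GET,
        produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> index3() {
    ElasticAdapter es = new ElasticAdapter();
    JSONObject json = new JSONObject();
    json.put("TestDoc", "Stack123");
    return es.bulkIndex(json);
}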
How can I control the IndexResponse when using the Elasticsearch async API with the HighLevelRestClient v7.5?
Maybe I need to mock the Low Level REST Client and use that mock for my High Level REST Client? 🤔
@Test
void whenIndexResponseHasFailuresDoItShouldReturnFalse() {
    // arrange
    var indexResponse = mock(IndexResponse.class);
    when(indexResponse.getResult()).thenReturn(Result.UPDATED);
    var restHighLevelClient = mock(RestHighLevelClient.class);
    when(restHighLevelClient.indexAsync())
        // do something here??
    var indexRequest = new IndexRequest(...);
    // act
    var myHelper = new MyHelper(restHighLevelClient);
    var result = myHelper.doIt(indexRequest)
            .get();
    // assert
    assertThat(result).isFalse();
}
class MyHelper {
    // injected RestHighLevelClient

    CompletableFuture<Boolean> doIt(Customer customer) {
        var result = new CompletableFuture<Boolean>();
        var indexRequest = new IndexRequest(...);
        restHighLevelClient.indexAsync(indexRequest, RequestOptions.DEFAULT,
                new ActionListener<IndexResponse>() {
                    @Override
                    public void onResponse(IndexResponse indexResponse) { // want to control indexResponse
                        if (indexResponse.getResult() == Result.UPDATED) {
                            result.complete(false);
                        } else {
                            result.complete(true);
                        }
                    }

                    @Override
                    public void onFailure(Exception e) {
                        ...
                    }
                });
        return result;
    }
}
Update
Sample project using Oleg's answer
Mock RestHighLevelClient, then inside indexAsync mock the IndexResponse and pass it to the ActionListener.
RestHighLevelClient restHighLevelClient = mock(RestHighLevelClient.class);
when(restHighLevelClient.indexAsync(any(), any(), any())).then(a -> {
    ActionListener<IndexResponse> listener = a.getArgument(2);
    IndexResponse response = mock(IndexResponse.class);
    when(response.getResult()).then(b -> Result.UPDATED);
    listener.onResponse(response);
    return null;
});

MyHelper myHelper = new MyHelper(restHighLevelClient);
Boolean result = myHelper.doIt(null).get();
assertFalse(result);
Also, configure Mockito to support mocking final methods; otherwise an NPE will be thrown when mocking indexAsync.
Option 1
Instead of using the mockito-core artifact, include the mockito-inline artifact in your project
Option 2
Create a file src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker with mock-maker-inline as the content
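For Option 2, the entire content of that plugin file is the single line:
mock-maker-inline
(For Option 1, the artifact coordinates are org.mockito:mockito-inline, used in place of org.mockito:mockito-core in your build file.)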
I have a method for sending a Kafka message like this:
@Async
public void sendMessage(String topicName, Message message) {
    ListenableFuture<SendResult<String, Message>> future = kafkaTemplate.send(topicName, message);
    future.addCallback(new ListenableFutureCallback<>() {
        @Override
        public void onSuccess(SendResult<String, Message> result) {
            // do nothing
        }

        @Override
        public void onFailure(Throwable ex) {
            log.error("something wrong happened!");
        }
    });
}
And now I am writing unit tests for it. I would also like to test the two callback methods onSuccess and onFailure, so my idea is to mock the KafkaTemplate, something like:
KafkaTemplate kafkaTemplate = Mockito.mock(KafkaTemplate.class);
But now I am stuck on mocking the result for these two cases:
when(kafkaTemplate.send(anyString(), any(Message.class))).thenReturn(????);
What should I put in thenReturn for the success case and for the failure case? Does anyone have an idea? Thank you very much!
You can mock the template but it's better to mock the interface.
Sender sender = new Sender();
KafkaOperations template = mock(KafkaOperations.class);
SettableListenableFuture<SendResult<String, String>> future = new SettableListenableFuture<>();
when(template.send(anyString(), any(Message.class))).thenReturn(future);
sender.setTemplate(template);
sender.send(...);
future.set(new SendResult<>(...));
...or...
future.setException(...);
EDIT
Updated to CompletableFuture (Spring for Apache Kafka 3.0.x and later)...
public class Sender {

    private KafkaOperations<String, String> template;

    public void setTemplate(KafkaOperations<String, String> template) {
        this.template = template;
    }

    public void send(String topic, Message<?> data) {
        CompletableFuture<SendResult<String, String>> future = this.template.send(data);
        future.whenComplete((result, ex) -> {
            if (ex == null) {
                System.out.println(result);
            } else {
                System.out.println(ex.getClass().getSimpleName() + "(" + ex.getMessage() + ")");
            }
        });
    }
}
@ExtendWith(OutputCaptureExtension.class)
public class So57475464ApplicationTests {

    @Test
    public void test(CapturedOutput captureOutput) {
        Message message = new GenericMessage<>("foo");
        Sender sender = new Sender();
        KafkaOperations template = mock(KafkaOperations.class);
        CompletableFuture<SendResult<String, String>> future = new CompletableFuture<>();
        given(template.send(any(Message.class))).willReturn(future);
        sender.setTemplate(template);
        sender.send("foo", message);
        future.completeExceptionally(new RuntimeException("foo"));
        assertThat(captureOutput).contains("RuntimeException(foo)");
    }
}
I'm working on a simple chat module for my application using Spring WebFlux with ReactiveMongoRepository on the backend and Angular 4 on the front end. I'm able to receive data through the WebSocketSession, but after streaming all messages from the DB I want to keep the connection open so I can update the message list. Can anyone give me clues on how to achieve that, or am I working from the wrong assumptions?
Java backend responsible for the WebSocket; my subscriber only logs the current state, nothing relevant there.
WebFluxConfiguration:
@Configuration
@EnableWebFlux
public class WebSocketConfig {

    private final WebSocketHandler webSocketHandler;

    @Autowired
    public WebSocketConfig(WebSocketHandler webSocketHandler) {
        this.webSocketHandler = webSocketHandler;
    }

    @Bean
    @Primary
    public HandlerMapping webSocketMapping() {
        Map<String, Object> map = new HashMap<>();
        map.put("/websocket-messages", webSocketHandler);
        SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
        mapping.setOrder(10);
        mapping.setUrlMap(map);
        return mapping;
    }

    @Bean
    public WebSocketHandlerAdapter handlerAdapter() {
        return new WebSocketHandlerAdapter();
    }
}
WebSocketHandler Implementation
@Component
public class MessageWebSocketHandler implements WebSocketHandler {

    private final MessageRepository messageRepository;
    private ObjectMapper mapper = new ObjectMapper();
    private MessageSubscriber subscriber = new MessageSubscriber();

    @Autowired
    public MessageWebSocketHandler(MessageRepository messageRepository) {
        this.messageRepository = messageRepository;
    }

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        session.receive()
                .map(WebSocketMessage::getPayloadAsText)
                .map(this::toMessage)
                .subscribe(subscriber::onNext, subscriber::onError, subscriber::onComplete);
        return session.send(
                messageRepository.findAll()
                        .map(this::toJSON)
                        .map(session::textMessage));
    }

    private String toJSON(Message message) {
        try {
            return mapper.writeValueAsString(message);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }

    private Message toMessage(String json) {
        try {
            return mapper.readValue(json, Message.class);
        } catch (IOException e) {
            throw new RuntimeException("Invalid JSON:" + json, e);
        }
    }
}
and the Mongo repository:
@Repository
public interface MessageRepository extends ReactiveMongoRepository<Message, String> {
}
FrontEnd Handling:
@Injectable()
export class WebSocketService {
    private subject: Rx.Subject<MessageEvent>;

    constructor() {
    }

    public connect(url): Rx.Subject<MessageEvent> {
        if (!this.subject) {
            this.subject = this.create(url);
            console.log('Successfully connected: ' + url);
        }
        return this.subject;
    }

    private create(url): Rx.Subject<MessageEvent> {
        const ws = new WebSocket(url);
        const observable = Rx.Observable.create(
            (obs: Rx.Observer<MessageEvent>) => {
                ws.onmessage = obs.next.bind(obs);
                ws.onerror = obs.error.bind(obs);
                ws.onclose = obs.complete.bind(obs);
                return ws.close.bind(ws);
            });
        const observer = {
            next: (data: Object) => {
                if (ws.readyState === WebSocket.OPEN) {
                    ws.send(JSON.stringify(data));
                }
            }
        };
        return Rx.Subject.create(observer, observable);
    }
}
In another service I'm mapping the observable from the response to my type:
constructor(private wsService: WebSocketService) {
    this.messages = <Subject<MessageEntity>>this.wsService
        .connect('ws://localhost:8081/websocket-messages')
        .map((response: MessageEvent): MessageEntity => {
            const data = JSON.parse(response.data);
            return new MessageEntity(data.id, data.user_id, data.username, data.message, data.links);
        });
}
and finally the subscription with the send function, which I can't use because of the closed connection:
ngOnInit() {
    this.messages = [];
    this._ws_subscription = this.chatService.messages.subscribe(
        (message: MessageEntity) => {
            this.messages.push(message);
        },
        error2 => {
            console.log(error2.json());
        },
        () => {
            console.log('Closed');
        }
    );
}

sendTestMessage() {
    this.chatService.messages.next(new MessageEntity(null, '59ca30ac87e77d0f38237739', 'mickl', 'test message angular', null));
}
Assuming your chat messages are being persisted to the datastore as they're being received, you could use the tailable cursors feature in Spring Data MongoDB Reactive (see reference documentation).
So you could create a new method on your repository like:
public interface MessageRepository extends ReactiveSortingRepository<Message, String> {

    @Tailable
    Flux<Message> findWithTailableCursor();
}
Note that tailable cursors have some limitations: your mongo collection needs to be capped, and entries are streamed in their order of insertion.
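If the collection does not exist yet, it can also be created as capped programmatically. A minimal sketch assuming a ReactiveMongoTemplate is available; the collection name and size are illustrative:
// Create a capped collection so a @Tailable query can stream from it.
// 10_000_000 bytes is an arbitrary example size.
reactiveMongoTemplate.createCollection("message",
        CollectionOptions.empty().capped().size(10_000_000)).subscribe();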
Spring WebFlux websocket support does not yet support STOMP nor message brokers, but this might be a better choice for such a use case.
My aim is to connect browser clients having the proper headers with the server. I pass these headers from the StompClient.
My UI code, in which I pass a token in the header, is:
function connect() {
    var socket = new SockJS('/websocket/api/add');
    stompClient = Stomp.over(socket);
    stompClient.connect({"token": "12345"}, function(frame) {
        setConnected(true);
        console.log('Connected: ' + frame);
    });
}
In the backend I am able to read the headers in the preSend() method of ChannelInterceptorAdapter:
@Override
public Message<?> preSend(Message<?> message, MessageChannel channel) {
    MessageHeaders headers = message.getHeaders();
    System.out.println("preSend : HEADERS : {}" + headers);
    return super.preSend(message, channel);
}
But here I am not able to close the WebSocket session. How can we do that?
Also, I was able to close the WebSocket session, but I couldn't receive the headers in the afterConnectionEstablished() method of WebSocketHandlerDecorator:
public void configureWebSocketTransport(final WebSocketTransportRegistration registration) {
    registration.addDecoratorFactory(new WebSocketHandlerDecoratorFactory() {
        @Override
        public WebSocketHandler decorate(final WebSocketHandler handler) {
            return new WebSocketHandlerDecorator(handler) {
                @Override
                public void afterConnectionEstablished(final WebSocketSession session) throws Exception {
                    session.close(CloseStatus.NOT_ACCEPTABLE);
                    super.afterConnectionEstablished(session);
                }
            };
        }
    });
    super.configureWebSocketTransport(registration);
}
Can someone guide me on how to close the WebSocket session on the server side, based on the header passed from the UI?
You can try sending the client's token to the server through a message when the connection is established, then have the server save that session into a map whose key is the corresponding token.
When you want to close a session by its token, you can then look the session up in that map using the token.
Sample Code:
Save the session with its token:
@Override
public void handleMessage(WebSocketSession session, WebSocketMessage<?> message) throws Exception {
    String messageToString = message.getPayload().toString();
    if (messageToString.startsWith("token=")) {
        tokenToSessionMapping.put(messageToString.substring("token=".length()), session);
    }
    // Other handling message code...
}
Close the session by token:
WebSocketSession sessionByToken = tokenToSessionMapping.get(token);
if (sessionByToken != null && sessionByToken.isOpen()) {
    sessionByToken.close(CloseStatus.NOT_ACCEPTABLE);
}
And other things to note:
Since tokenToSessionMapping is static and shared among sessions, you should use a thread-safe implementation such as ConcurrentHashMap (see the declaration sketch at the end of this answer).
When the session is closed, you'd better remove the corresponding entry from tokenToSessionMapping; otherwise the map size will just keep growing. You can do this by overriding afterConnectionClosed():
@Override
public void afterConnectionClosed(WebSocketSession session, CloseStatus status) throws Exception {
    Log.info("Socket session closed: {}", status.toString());
    String foundKey = null;
    for (Map.Entry<String, WebSocketSession> entry : tokenToSessionMapping.entrySet()) {
        if (Objects.equals(entry.getValue(), session)) {
            foundKey = entry.getKey();
        }
    }
    if (foundKey != null) {
        tokenToSessionMapping.remove(foundKey);
    }
}
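Putting the two notes above together, the map declaration (which the original snippets leave out) might look like this; the field name matches the snippets, the rest is an assumption:
// Shared across all sessions; ConcurrentHashMap makes put/get/remove thread-safe.
private static final Map<String, WebSocketSession> tokenToSessionMapping = new ConcurrentHashMap<>();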