How can I run Netty on multiple ports in a Spring Boot application?

I have a Spring Boot application using Netty and I want it to run on multiple ports: 8080, 8082, and 8084.
I tried using a NettyServerCustomizer with the code below, but the application only listens on the last configured port (8084 in this example).
@Component
public class NettyWebServerFactoryPortCustomizer
        implements WebServerFactoryCustomizer<NettyReactiveWebServerFactory> {

    @Override
    public void customize(NettyReactiveWebServerFactory serverFactory) {
        //serverFactory.setPort(8088);
        serverFactory.addServerCustomizers(new PortCustomizer(8080));
        serverFactory.addServerCustomizers(new PortCustomizer(8082));
        serverFactory.addServerCustomizers(new PortCustomizer(8084));
    }

    private static class PortCustomizer implements NettyServerCustomizer {
        private final int port;

        private PortCustomizer(int port) {
            this.port = port;
        }

        @Override
        public HttpServer apply(HttpServer httpServer) {
            return httpServer.port(port);
        }
    }
}
Any tips would be useful.
Thanks

Spring Boot only manages a single Netty server, so each customizer you add just changes the port of that one server and the last one wins.
Adding more Netty servers is fairly simple, e.g. here; a minimal sketch also follows below.
However, plugging all the Spring Boot facilities (handlers, controllers) into them so you keep their ease of use is a serious exercise.
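For illustration only, here is a minimal sketch (not from the original answer) of what starting extra Reactor Netty servers could look like in a WebFlux application. It assumes Spring Boot's HttpHandler bean is available and reuses it so the same routes answer on every port; the class name and the extra port numbers (8082 and 8084, with the main server staying on 8080) are made up for the example:

import java.util.ArrayList;
import java.util.List;

import javax.annotation.PreDestroy; // jakarta.annotation.PreDestroy on Spring Boot 3+

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.http.server.reactive.HttpHandler;
import org.springframework.http.server.reactive.ReactorHttpHandlerAdapter;
import org.springframework.stereotype.Component;

import reactor.netty.DisposableServer;
import reactor.netty.http.server.HttpServer;

// Hypothetical sketch: bind additional Reactor Netty servers to the HttpHandler
// that Spring Boot already built, so the extra ports serve the same application.
@Component
public class AdditionalPortsRunner implements ApplicationListener<ApplicationReadyEvent> {

    private final HttpHandler httpHandler;
    private final List<DisposableServer> servers = new ArrayList<>();

    public AdditionalPortsRunner(HttpHandler httpHandler) {
        this.httpHandler = httpHandler;
    }

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        // illustrative extra ports; the main server keeps the port it was configured with
        for (int port : new int[] {8082, 8084}) {
            servers.add(HttpServer.create()
                    .port(port)
                    .handle(new ReactorHttpHandlerAdapter(httpHandler))
                    .bindNow());
        }
    }

    @PreDestroy
    public void shutdown() {
        // release the extra ports when the application context closes
        servers.forEach(DisposableServer::disposeNow);
    }
}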


Spring boot - task on startup preventing application interface exposure

Is it possible in a Spring Boot application to run some startup procedure that blocks every exposure of endpoints (and possibly other public application interfaces) until the startup procedure is completed?
I mean something like this:
@Component
public class MyBlockingStartupRunner implements ApplicationRunner {

    @Override
    public void run(ApplicationArguments args) {
        // doing some task, calling external API, processing return values, ...
        startTask();
        // at this point app should be available for rest calls, scheduled tasks etc.
        someObject.appIsReadyToGo(); // alternatively app would be ready at the end of the method
    }
}
The problem with this ApplicationRunner approach is that there might be API calls to the server that I am not yet able to serve, so I would need to add a check to every API endpoint to prevent this. Alternatively, I could create an interceptor that "blocks" all public communication and reads a property from some service telling it whether the app is ready. But that is not the approach I would like, and I wonder whether Spring has implemented this somehow.
If it is acceptable to run your start-up tasks before the web server instance has started at all, you could use SmartLifecycle to add a start task.
@Component
class MyStartup implements SmartLifecycle {

    private final ServletWebServerApplicationContext ctx;
    private final Log logger = LogFactory.getLog(MyStartup.class);

    @Autowired
    MyStartup(ServletWebServerApplicationContext ctx) {
        this.ctx = ctx;
    }

    @Override
    public void start() {
        logger.info("doing start stuff: " + ctx.getWebServer());
        startTask();
    }

    @Override
    public void stop() {}

    @Override
    public boolean isRunning() {
        return false;
    }

    @Override
    public int getPhase() {
        return 100;
    }
}
Because the task runs before the web server has started (rather than blocking access), this is admittedly a somewhat different approach.

How can I subscribe to a websocket user queue from a Python client?

I want to create a service where a Python client can subscribe to a user queue on a websocket served by Spring Boot. There are several resources available online; however, these all focus on
connecting with a JavaScript (SockJS) client instead of a Python client, or
connecting to a topic instead of a user queue.
I found the following resources:
Spring Boot 2.0.2 allows you to create a simple websocket server. The article shows how to publish to a topic (the @SendTo annotation). Spring Websockets
This article from Baeldung describes how to create a subscription to a user queue (the @SendToUser annotation). Baeldung Websockets
I found websocket-client and websockets as actively maintained Python modules, however their manuals do not explain how to subscribe to user queues.
Python websocket-client Pypi Website
Python websockets Pypi Website
Is there an example how to connect to a user queue from a Python client?
Example websocket server
Spring configuration:
@Configuration
@EnableWebSocketMessageBroker
public class ClientWebsocketConfiguration extends AbstractWebSocketMessageBrokerConfigurer
{
    @Override
    public void configureMessageBroker(@NotNull MessageBrokerRegistry aMessageBrokerRegistry)
    {
        aMessageBrokerRegistry.enableSimpleBroker("/queue", "/user");
        aMessageBrokerRegistry.setApplicationDestinationPrefixes("/app");
        aMessageBrokerRegistry.setUserDestinationPrefix("/user");
    }

    @Override
    public void registerStompEndpoints(@NotNull StompEndpointRegistry aStompEndpointRegistry)
    {
        aStompEndpointRegistry.addEndpoint("/websocket").setAllowedOrigins("*").withSockJS();
    }
}
Controller:
@Controller
public class SubscriptionController
{
    @NotNull
    @MessageMapping("/subscribe")
    @SendToUser("/queue/reply")
    public ReplyMessage processSubscribeFromClient(
        @Payload Object object,
        Principal principal)
    {
        return new ReplyMessage("Hello World");
    }
}
Message:
class ReplyMessage
{
    @Nullable private String content;

    public ReplyMessage()
    {
    }

    public ReplyMessage(@Nullable String content)
    {
        this.content = content;
    }

    @Nullable
    public String getContent()
    {
        return content;
    }
}

How to have DropWizard JUnit App Rule definition use startup information from a docker rule?

The general problem I am trying to solve is this. I have a solution, but it's very clunky, and I'm hoping someone knows of a more orderly one.
Dropwizard offers a JUnit TestRule called DropwizardAppRule, which is used for integration tests. You use it like this:
@ClassRule
public static final DropwizardAppRule<MyConfiguration> APP_RULE =
        new DropwizardAppRule<>(MyApplication.class, myYmlResourceFilePath,
                ConfigOverride.config("mydatabase.url", myJdbcUrl));
It will start up your application, configuring it with your yml resource file, with overrides that you specified in the constructor. Note, however, that your overrides are bound at construction time.
There are also JUnit rules out there to start up a Docker container, and I'm using one to start up MySql, and a JUnit RuleChain to enforce the fact that the container must start up before I launch my Dropwizard application that depends on it.
All that works great, if I'm willing to specify in advance what port I want the MySql container to expose. I'm not. I want these integration tests to run on a build machine, quite possibly in parallel for branch builds of the same project, and I would strongly prefer to use the mechanism where you ask Docker to pick any available port, and use that.
The problem I run into with that, is that the exposed container port is not known at the time that the DropwizardAppRule is constructed, which is the only time you can bind configuration overrides.
The solution I adopted was to make a wrapper JUnit Rule, like so:
public class CreateWhenRunRuleWrapper<T extends ExternalResource> extends ExternalResource {

    private final Supplier<T> wrappedRuleFactory;
    private T wrappedRule;

    public CreateWhenRunRuleWrapper(Supplier<T> wrappedRuleFactory) {
        this.wrappedRuleFactory = wrappedRuleFactory;
    }

    public T getWrappedRule() {
        return wrappedRule;
    }

    @Override
    protected void before() throws Throwable {
        wrappedRule = wrappedRuleFactory.get();
        wrappedRule.before();
    }

    @Override
    protected void after() {
        wrappedRule.after();
    }
}
This works, allowing me to construct the DropwizardAppRule in the before() method, but it is quite obviously outside JUnit's design intent, as shown by the fact that I had to place the class in the org.junit.rules package in order to be able to call the before() and after() methods of the late-created rule.
What would be a more orderly, best practice way to accomplish the same objective?
Two options we came up with:
The hacky solution is to use a static initializer block (static {}), which executes after the container has been spun up but before the Dropwizard instance is initialised:
public static final GenericContainer mongodb = new GenericContainer("mongo:latest").withExposedPorts(27017);

static {
    mongodb.start();
    System.setProperty("dw.mongoConfig.uri", "mongodb://" + mongodb.getContainerIpAddress() + ":" + mongodb.getMappedPort(27017));
}

@ClassRule
public static final DropwizardIntegrationAppRule<Config> app1 = new DropwizardIntegrationAppRule<>(Service.class);
The second option is cleaner and much like yours.
private static final MongoDContainerRule mongo = new MongoDContainerRule();
private static final DropwizardIntegrationAppRule<Config> app = new DropwizardIntegrationAppRule<>(Service.class);

@ClassRule
public static final RuleChain chain = RuleChain
        .outerRule(mongo)
        .around(app);
MongoDContainerRule is like your wrapper but it also sets the right port through system properties.
public class MongoDContainerRule extends MongoDBContainerBase {

    private static final GenericContainer mongodb = new GenericContainer("mongo:latest").withExposedPorts(27017);

    @Override
    protected void before() throws Throwable {
        mongodb.start();
        System.setProperty("dw.mongoConfig.uri", "mongodb://" + mongodb.getContainerIpAddress() + ":" + mongodb.getMappedPort(27017));
        System.setProperty("dw.mongoConfig.tls", "false");
        System.setProperty("dw.mongoConfig.dbName", DB_NAME);
    }

    @Override
    protected void after() {
        mongodb.stop();
    }
}
The container exposes MongoDB on a free port; mongodb.getMappedPort(internalPort) returns it. System.setProperty("dw.*") injects the values into the Dropwizard config.
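To make the override mechanism concrete, a hypothetical Config class matching those keys could look like the sketch below. The class and field names are assumptions for illustration; the point is that a system property such as dw.mongoConfig.uri is applied to mongoConfig.uri when the app rule builds the configuration:

import javax.validation.Valid;
import javax.validation.constraints.NotNull;

import com.fasterxml.jackson.annotation.JsonProperty;

import io.dropwizard.Configuration;

// Hypothetical configuration class: each "dw.<path>" system property set above
// is applied as a config override on the field with the matching path.
public class Config extends Configuration {

    @Valid
    @NotNull
    private MongoConfig mongoConfig = new MongoConfig();

    @JsonProperty
    public MongoConfig getMongoConfig() {
        return mongoConfig;
    }

    @JsonProperty
    public void setMongoConfig(MongoConfig mongoConfig) {
        this.mongoConfig = mongoConfig;
    }

    public static class MongoConfig {
        @JsonProperty
        public String uri;

        @JsonProperty
        public boolean tls;

        @JsonProperty
        public String dbName;
    }
}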

Guice service dependencies

I have three Guava Services which get started asynchronously by a Guava ServiceManager.
The first service is the database connection pool, which needs to start fully before the second and third services can successfully process incoming messages. Since the services are started asynchronously, the database may not have fully started before the second or third service starts to process a message, which will lead to exceptions.
What is the desired pattern here?
I can inject the database service into the other services and call its awaitRunning() method during their startup, but then I will suffer the same issue when the ServiceManager is shut down.
I believe Guice does not have an out-of-the-box mechanism for this. Spring, for example, has a depends-on attribute that can define some ordering, and there are frameworks that give you this with Guice as well (e.g. dropwizard-guicey implements an order annotation). This is, however, fairly simple to solve.
The approach is to use multibindings to define a manager for all dependency classes, which I will call Managed (a name adopted from Jetty). The interface defines an ordering, and a manager then starts all the services one by one in a well-defined order (the same mechanism can be used for shutdown if wanted).
See my code example here:
public class ExecutionOrder {

    public static void main(String[] args) {
        Injector createInjector = Guice.createInjector(new AbstractModule() {
            @Override
            protected void configure() {
                Multibinder<Managed> multiBinder = Multibinder.newSetBinder(binder(), Managed.class);
                multiBinder.addBinding().to(Service1.class);
                multiBinder.addBinding().to(Service2.class);
                bind(ManagedManager.class).in(Singleton.class);
            }
        });
        createInjector.getInstance(ManagedManager.class); // start it
    }

    public interface Managed extends Comparable<Managed> {
        public default void start() {}

        public default int getOrder() { return 0; }

        @Override
        default int compareTo(Managed o) {
            return Integer.compare(getOrder(), o.getOrder());
        }
    }

    public static class ManagedManager {
        @Inject
        public ManagedManager(final Set<Managed> managed) {
            managed.stream().sorted().forEach(Managed::start);
        }
    }

    public static class Service1 implements Managed {
        @Override
        public void start() {
            System.out.println("Started Service 1");
        }

        @Override
        public int getOrder() {
            return 1;
        }
    }

    public static class Service2 implements Managed {
        @Override
        public void start() {
            System.out.println("Started Service 2");
        }

        @Override
        public int getOrder() {
            return 2;
        }
    }
}
My - admittedly stupidly named - ManagedManager is injected by Guice with all Managed implementations, using Guice's multibindings (see the module I initialise). I then sort that set and call start().
The start method is where you would initialise your services (e.g. your database connection). By overriding the getOrder() method you define which service is started at which point.
That way you get well-defined startup behaviour, and you can adapt the interface to give you well-defined shutdown behaviour as well, as in the sketch below.
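For completeness, a minimal sketch of that shutdown extension (not part of the original answer; the shutdown() method and how it gets invoked, e.g. from a JVM shutdown hook, are assumptions):

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

import com.google.inject.Inject;

// Variant of the Managed interface and ManagedManager above with a stop() hook,
// so services can be shut down in the reverse of the startup order.
public interface Managed extends Comparable<Managed> {
    default void start() {}
    default void stop() {}
    default int getOrder() { return 0; }

    @Override
    default int compareTo(Managed o) {
        return Integer.compare(getOrder(), o.getOrder());
    }
}

class ManagedManager {
    private final List<Managed> ordered;

    @Inject
    public ManagedManager(final Set<Managed> managed) {
        this.ordered = managed.stream().sorted().collect(Collectors.toList());
        this.ordered.forEach(Managed::start);
    }

    public void shutdown() {
        // stop services in reverse startup order
        for (int i = ordered.size() - 1; i >= 0; i--) {
            ordered.get(i).stop();
        }
    }
}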
I hope this helps,
Artur

DropWizard/Jersey API Clients

DropWizard uses Jersey under the hood for REST. I am trying to figure out how to write a client for the RESTful endpoints my DropWizard app will expose.
For the sake of this example, let's say my DropWizard app has a CarResource, which exposes a few simple RESTful endpoints for CRUDding cars:
#Path("/cars")
public class CarResource extends Resource {
// CRUDs car instances to some database (DAO).
public CardDao carDao = new CarDao();
#POST
public Car createCar(String make, String model, String rgbColor) {
Car car = new Car(make, model, rgbColor);
carDao.saveCar(car);
return car;
}
#GET
#Path("/make/{make}")
public List<Car> getCarsByMake(String make) {
List<Car> cars = carDao.getCarsByMake(make);
return cars;
}
}
So I would imagine that a structured API client would be something like a CarServiceClient:
// Packaged up in a JAR library. Can be used by any Java executable to hit the Car Service
// endpoints.
public class CarServiceClient {
public HttpClient httpClient;
public Car createCar(String make, String model, String rgbColor) {
// Use 'httpClient' to make an HTTP POST to the /cars endpoint.
// Needs to deserialize JSON returned from server into a `Car` instance.
// But also needs to handle if the server threw a `WebApplicationException` or
// returned a NULL.
}
public List<Car> getCarsByMake(String make) {
// Use 'httpClient' to make an HTTP GET to the /cars/make/{make} endpoint.
// Needs to deserialize JSON returned from server into a list of `Car` instances.
// But also needs to handle if the server threw a `WebApplicationException` or
// returned a NULL.
}
}
But the only two official references to DropWizard clients I can find contradict one another:
DropWizard recommended project structure - which claims I should put my client code in a car-client project under car.service.client package; but then...
DropWizard Client manual - which makes it seem like a "DropWizard Client" is meant for integrating my DropWizard app with other RESTful web services (thus acting as a middleman).
So I ask, what is the standard way of writing Java API clients for your DropWizard web services? Does DropWizard have a client-library I can utilize for this type of use case? Am I supposed to be implementing the client via some Jersey client API? Can someone add pseudo-code to my CarServiceClient so I can understand how this would work?
Here is a pattern you can use with the JAX-RS client.
To get the client:
javax.ws.rs.client.Client init(JerseyClientConfiguration config, Environment environment) {
    return new JerseyClientBuilder(environment).using(config).build("my-client");
}
You can then make calls the following way:
javax.ws.rs.core.Response post = client
        .target("http://...")
        .request(MediaType.APPLICATION_JSON)
        .header("key", value)
        .accept(MediaType.APPLICATION_JSON)
        .post(Entity.json(myObj));
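Tying that back to the CarServiceClient from the question, a hedged sketch of how it could wrap the JAX-RS client built above (assumptions: the base URL is passed in, Car is Jackson-serializable, and error handling is reduced to a status check):

import java.util.List;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.GenericType;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Sketch only: wraps the JAX-RS client created earlier; the base URL and the
// error handling are illustrative assumptions, not Dropwizard-provided behaviour.
public class CarServiceClient {

    private final Client client;
    private final String baseUrl;

    public CarServiceClient(Client client, String baseUrl) {
        this.client = client;
        this.baseUrl = baseUrl;
    }

    public Car createCar(Car car) {
        Response response = client.target(baseUrl + "/cars")
                .request(MediaType.APPLICATION_JSON)
                .post(Entity.json(car));
        if (response.getStatus() >= 400) {
            // surface server-side errors to the caller
            throw new WebApplicationException(response);
        }
        return response.readEntity(Car.class);
    }

    public List<Car> getCarsByMake(String make) {
        // deserializes the JSON array returned by /cars/make/{make}
        return client.target(baseUrl + "/cars/make/" + make)
                .request(MediaType.APPLICATION_JSON)
                .get(new GenericType<List<Car>>() {});
    }
}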
Yes, what dropwizard-client provides is only meant to be used by the service itself, most likely to communicate with other services. It doesn't provide anything for client applications directly.
It doesn't do much magic with HttpClients anyway. It simply configures the client according to the yml file, assigns the application's existing Jackson object mapper and validator to the Jersey client, and I think reuses the thread pool of the application. You can check all of that at https://github.com/dropwizard/dropwizard/blob/master/dropwizard-client/src/main/java/io/dropwizard/client/JerseyClientBuilder.java
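For reference, the yml-driven part usually looks roughly like the following sketch, assuming a configuration class named MyConfiguration with a jerseyClient section in the service yml (the names are illustrative):

import javax.validation.Valid;
import javax.validation.constraints.NotNull;

import com.fasterxml.jackson.annotation.JsonProperty;

import io.dropwizard.Configuration;
import io.dropwizard.client.JerseyClientConfiguration;

// Illustrative configuration class: the "jerseyClient" section of the service
// yml is bound here and then handed to JerseyClientBuilder.using(...).
public class MyConfiguration extends Configuration {

    @Valid
    @NotNull
    private JerseyClientConfiguration jerseyClient = new JerseyClientConfiguration();

    @JsonProperty("jerseyClient")
    public JerseyClientConfiguration getJerseyClientConfiguration() {
        return jerseyClient;
    }

    @JsonProperty("jerseyClient")
    public void setJerseyClientConfiguration(JerseyClientConfiguration jerseyClient) {
        this.jerseyClient = jerseyClient;
    }
}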
I'd structure my classes the way you did, using the Jersey client. The following is an abstract class I've been using for client services:
public abstract class HttpRemoteService {

    private static final String AUTHORIZATION_HEADER = "Authorization";
    private static final String TOKEN_PREFIX = "Bearer ";

    private Client client;

    protected HttpRemoteService(Client client) {
        this.client = client;
    }

    protected abstract String getServiceUrl();

    protected WebResource.Builder getSynchronousResource(String resourceUri) {
        return client.resource(getServiceUrl() + resourceUri).type(MediaType.APPLICATION_JSON_TYPE);
    }

    protected WebResource.Builder getSynchronousResource(String resourceUri, String authToken) {
        return getSynchronousResource(resourceUri).header(AUTHORIZATION_HEADER, TOKEN_PREFIX + authToken);
    }

    protected AsyncWebResource.Builder getAsynchronousResource(String resourceUri) {
        return client.asyncResource(getServiceUrl() + resourceUri).type(MediaType.APPLICATION_JSON_TYPE);
    }

    protected AsyncWebResource.Builder getAsynchronousResource(String resourceUri, String authToken) {
        return getAsynchronousResource(resourceUri).header(AUTHORIZATION_HEADER, TOKEN_PREFIX + authToken);
    }

    protected void isAlive() {
        client.resource(getServiceUrl()).get(ClientResponse.class);
    }
}
and here is how I make it concrete:
private class TestRemoteService extends HttpRemoteService {

    protected TestRemoteService(Client client) {
        super(client);
    }

    @Override
    protected String getServiceUrl() {
        return "http://localhost:8080";
    }

    public Future<TestDTO> get() {
        return getAsynchronousResource("/get").get(TestDTO.class);
    }

    public void post(Object object) {
        getSynchronousResource("/post").post(object);
    }

    public void unavailable() {
        getSynchronousResource("/unavailable").get(Object.class);
    }

    public void authorize() {
        getSynchronousResource("/authorize", "ma token").put();
    }
}
If anyone is trying to use DW 0.8.2 when building a client and gets the following error:
cannot access org.apache.http.config.Registry
class file for org.apache.http.config.Registry not found
at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:858)
at org.apache.maven.plugin.compiler.CompilerMojo.execute(CompilerMojo.java:129)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 19 more
update your dropwizard-client dependency in your pom.xml from 0.8.2 to 0.8.4 and you should be good. I believe a Jetty sub-dependency was updated, which fixed it.
<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-client</artifactId>
    <version>0.8.4</version>
    <scope>compile</scope>
</dependency>
You can also integrate with the Spring Framework to implement this.
