I am trying to do JUnit 5 E2E functional testing using the Micronaut declarative HTTP client.
public interface IProductOperation {

    @Get(value = "/search/{text}")
    @Secured(SecurityRule.IS_ANONYMOUS)
    Maybe<?> freeTextSearch(@NotBlank String text);
}
Declarative Micronaut HTTP client:
@Client(
        id = "feteBirdProduct",
        path = "/product"
)
public interface IProductClient extends IProductOperation {
}
JUnit 5 test:
@MicronautTest
public record ProductControllerTest(IProductClient iProductClient) {

    @Test
    @DisplayName("Should search the item based on the name")
    void shouldSearchTheItemBasedOnTheName() {
        var value = iProductClient.freeTextSearch("test").blockingGet();
        System.out.println(value);
    }
}
Controller:
@Controller("/product")
public class ProductController implements IProductOperation {

    // Logger declaration added so the class compiles; the original snippet omitted it.
    private static final Logger LOG = LoggerFactory.getLogger(ProductController.class);

    private final IProductManager iProductManager;

    public ProductController(IProductManager iProductManager) {
        this.iProductManager = iProductManager;
    }

    @Override
    public Maybe<List> freeTextSearch(String text) {
        LOG.info("Controller --> Finding all the products");
        return iProductManager.findFreeText(text);
    }
}
When I run the test, I get a 500 internal server error. I think the application is also running when I run the test. I am not sure what the reason for the 500 internal server error is.
Any help will be appreciated.
Is @Get(value = "/search/{text}") causing the issue? If yes, how can I solve it with the declarative client?
Service discovery
application.yml:
consul:
  client:
    defaultZone: ${CONSUL_HOST:localhost}:${CONSUL_PORT:8500}
    registration:
      enabled: true
application-test.yml:
micronaut:
  server:
    port: -1
  http:
    services:
      feteBirdProduct:
        urls:
          - http://product
consul:
  client:
    registration:
      enabled: false
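To narrow down whether the 500 comes from the controller itself or from the declarative client's service resolution, a low-level call against the embedded server can help. Below is a minimal sketch; the class name and injection style are illustrative and not part of the original setup:

// Smoke test against the embedded server (assumes the route is /product/search/{text}).
// If this also returns 500, the problem is in the controller/manager rather than in the
// declarative client or the service-discovery setup.
import io.micronaut.http.HttpRequest;
import io.micronaut.http.client.HttpClient;
import io.micronaut.http.client.annotation.Client;
import io.micronaut.test.extensions.junit5.annotation.MicronautTest;
import org.junit.jupiter.api.Test;

import javax.inject.Inject; // jakarta.inject.Inject on newer Micronaut versions

@MicronautTest
class ProductEndpointSmokeTest {

    @Inject
    @Client("/")
    HttpClient httpClient;

    @Test
    void searchEndpointResponds() {
        String body = httpClient.toBlocking()
                .retrieve(HttpRequest.GET("/product/search/test"));
        System.out.println(body);
    }
}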
I have several microservices in my architecture. I want to implement an API gateway to route requests to services. To achieve that, I implemented spring-cloud-gateway, and this is my application.yml:
server:
  port: 9090
spring:
  application:
    name: "API-GATEWAY"
  cloud:
    gateway:
      routes:
        - id: task-service
          uri: 'http://localhost:8083'
          predicates:
            - Path=/task/**
So far everything works as expected: a request to localhost:9090/task/123 is routed to localhost:8083/task/123. Here comes the second part.
I want some users to have access to only some endpoints. In my JWT token, I have a roles field.
{
  "accountName": "erdem.ontas",
  "surname": "Öntaş",
  "roles": [
    "ADMIN",
    "USER"
  ]
}
I don't want to specify authorization in every service separately; is there any way to specify role-based access in spring-cloud-gateway? For example, I want the USER role to be able to access GET http://localhost:9090/task/ but not GET http://localhost:9090/dashboard/.
If you do not want or need to create a full OAuth 2 server/client infrastructure and want to keep it simple, just create a custom GatewayFilter that checks whether the JWT token extracted from the header has the preconfigured roles.
So start with a simple GatewayFilter:
@Component
public class RoleAuthGatewayFilterFactory extends
        AbstractGatewayFilterFactory<RoleAuthGatewayFilterFactory.Config> {

    public RoleAuthGatewayFilterFactory() {
        super(Config.class);
    }

    @Override
    public GatewayFilter apply(Config config) {
        return (exchange, chain) -> {
            var request = exchange.getRequest();
            // JWTUtil can extract the token from the request, parse it and verify if the given role is available
            if (!JWTUtil.hasRole(request, config.getRole())) {
                // seems we miss the auth token
                var response = exchange.getResponse();
                response.setStatusCode(HttpStatus.UNAUTHORIZED);
                return response.setComplete();
            }
            return chain.filter(exchange);
        };
    }

    @Data
    public static class Config {
        private String role;
    }

    @Override
    public List<String> shortcutFieldOrder() {
        // we need this to use shortcuts in the application.yml
        return Arrays.asList("role");
    }
}
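The JWTUtil helper referenced in the comment above is not shown in the original answer. A minimal sketch, assuming the token arrives as a Bearer header and using the com.auth0:java-jwt library, could look like the following (it only decodes the token; a real implementation should also verify the signature):

// Hypothetical JWTUtil sketch -- not part of the original answer.
import com.auth0.jwt.JWT;
import com.auth0.jwt.exceptions.JWTDecodeException;
import com.auth0.jwt.interfaces.DecodedJWT;
import org.springframework.http.HttpHeaders;
import org.springframework.http.server.reactive.ServerHttpRequest;

import java.util.List;

public final class JWTUtil {

    private JWTUtil() {
    }

    public static boolean hasRole(ServerHttpRequest request, String role) {
        String header = request.getHeaders().getFirst(HttpHeaders.AUTHORIZATION);
        if (header == null || !header.startsWith("Bearer ")) {
            return false;
        }
        try {
            // Decode (not verify) the token and read the "roles" claim.
            DecodedJWT jwt = JWT.decode(header.substring("Bearer ".length()));
            List<String> roles = jwt.getClaim("roles").asList(String.class);
            return roles != null && roles.contains(role);
        } catch (JWTDecodeException e) {
            return false;
        }
    }
}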
Here we just create a simple filter which receives the required role from the config (application.yml) and checks whether the request is authorized to continue.
To use the filter, just add filters to your route config.
server:
  port: 9090
spring:
  application:
    name: "API-GATEWAY"
  cloud:
    gateway:
      routes:
        - id: task-service
          uri: 'http://localhost:8083'
          filters:
            - RoleAuth=ADMIN
          predicates:
            - Path=/task/**
This way the RoleAuth filter can be reused across several routes.
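For reference, the same reuse can also be expressed with the Java route DSL instead of YAML. This is only a sketch; the route ids, paths, and target URIs below are assumptions, not part of the original answer:

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RoleProtectedRoutesConfig {

    // The same RoleAuthGatewayFilterFactory applied to two routes with different roles.
    @Bean
    public RouteLocator roleProtectedRoutes(RouteLocatorBuilder builder,
                                            RoleAuthGatewayFilterFactory roleAuth) {
        RoleAuthGatewayFilterFactory.Config adminOnly = new RoleAuthGatewayFilterFactory.Config();
        adminOnly.setRole("ADMIN");
        RoleAuthGatewayFilterFactory.Config userOnly = new RoleAuthGatewayFilterFactory.Config();
        userOnly.setRole("USER");

        return builder.routes()
                .route("task-service", r -> r.path("/task/**")
                        .filters(f -> f.filter(roleAuth.apply(adminOnly)))
                        .uri("http://localhost:8083"))
                .route("dashboard-service", r -> r.path("/dashboard/**")
                        .filters(f -> f.filter(roleAuth.apply(userOnly)))
                        .uri("http://localhost:8084"))
                .build();
    }
}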
I am trying to combine Spring Cloud Gateway with Discovery Client, Spring Security, and OAuth. I got most of it working, except that I cannot use both OAuth and the Discovery Client at the same time.
When I use the Discovery Client, it correctly resolves to the service: say /v1/whoami goes to the whoami service requesting /. When I enable security, I get a 404 when it tries to request /oauth2/authorization/google, as it should be /v1/oauth2/authorization/google.
To fix the above I added this:
@Bean
public ForwardedHeaderTransformer forwardedHeaderTransformer() {
    return new ForwardedHeaderTransformer();
}
However, when I do that, it looks up /v1/whoami as /v1/whoami, which does not exist.
I tried creating and registering this class, but it does not work either:
public class ForwardedHeaderTransformerForOAuthOnly extends ForwardedHeaderTransformer {

    @Override
    public ServerHttpRequest apply(ServerHttpRequest request) {
        System.out.println(">>>> " + request.getPath().value());
        if (isOauth(request)) {
            System.out.println(">>>> IS OAUTH");
            return super.apply(request);
        }
        return request;
        //return super.apply(request);
    }

    private boolean isOauth(ServerHttpRequest request) {
        return request.getPath().value().startsWith("/oauth2/authorization/")
                || request.getPath().value().startsWith("/login/oauth2/code/");
    }
}
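For context, registering that custom class would presumably have looked something like the following sketch; the bean method is an assumption, as the original post does not show it:

// Sketch: exposing the custom transformer so it replaces the default ForwardedHeaderTransformer.
@Bean
public ForwardedHeaderTransformer forwardedHeaderTransformer() {
    return new ForwardedHeaderTransformerForOAuthOnly();
}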
I got it working by adding the following to eat the prefix before the service ID:
spring:
  cloud:
    gateway:
      discovery:
        locator:
          predicates:
            - Path='/*/'+serviceId+'/**'
          filters:
            - StripPrefix=2
Combined with adding:
@Bean
public ForwardedHeaderTransformer forwardedHeaderTransformer() {
    return new ForwardedHeaderTransformer();
}
I would like to call a REST web service from my client application using Feign and SeedStack. The web service, which is developed with SeedStack too, is configured with the following authentication method: filters: [ authcBasic ].
How do I configure or program the client to get the authentication right? How do I pass the user and password information?
Client @FeignApi interface:
@FeignApi
public interface neosdServer {

    @RequestLine("GET /file/getfilesprop")
    List<NeosdFile> getfilesprop();

    @RequestLine("GET /file/getfiles")
    List<String> getfiles();
}
Client application.yaml:
feign:
  endpoints:
    neosdClient.interfaces.rest.neosdServer:
      baseUrl: http://localhost:8080
      encoder: feign.jackson.JacksonEncoder
      decoder: feign.jackson.JacksonDecoder
Server application.yaml:
web:
  urls:
    - pattern: /file/getfiles
      filters: [ authcBasic, 'roles[read]' ]
The current SeedStack integration doesn't support configuring interceptors on Feign builders for now. Instead, to do authentication you can specify a header in your Feign interface with the @Headers annotation (example for basic authentication):
@FeignApi
@Headers({"Authorization: Basic {credentials}"})
public interface neosdServer {

    @RequestLine("GET /file/getfilesprop")
    List<NeosdFile> getfilesprop(@Param("credentials") String credentials);

    @RequestLine("GET /file/getfiles")
    List<String> getfiles(@Param("credentials") String credentials);
}
Note that @Headers can also be used on individual methods.
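A hypothetical per-method variant (same interface as above, with the annotation moved onto a single method) might look like this:

@FeignApi
public interface neosdServer {

    // Only this endpoint sends the Authorization header.
    @RequestLine("GET /file/getfiles")
    @Headers({"Authorization: Basic {credentials}"})
    List<String> getfiles(@Param("credentials") String credentials);
}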
You will then have to pass the credentials as a method parameter. An example implementation, with credentials coming from your application configuration, could be:
public class MyClass {

    @Configuration("myApp.credentials.user")
    private String username;

    @Configuration("myApp.credentials.password")
    private String password;

    // NeoSdClient stands for the injected Feign API interface (neosdServer above).
    @Inject
    private NeoSdClient client;

    public void myMethod() {
        List<String> files = client.getFiles(encodeCredentials());
    }

    // BaseEncoding and Charsets come from Guava.
    private String encodeCredentials() {
        return BaseEncoding
                .base64()
                .encode((username + ":" + password)
                        .getBytes(Charsets.UTF_8));
    }
}
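If you prefer to avoid the Guava dependency, an equivalent encoding with plain JDK classes could look like this (a sketch with the same behavior):

// Same Basic-auth credential encoding using only the JDK, no Guava required.
private String encodeCredentials() {
    return java.util.Base64.getEncoder()
            .encodeToString((username + ":" + password)
                    .getBytes(java.nio.charset.StandardCharsets.UTF_8));
}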
I created an issue on the Feign add-on repository to track the implementation of interceptor support: https://github.com/seedstack/feign-addon/issues/4.
I'm developing an application in an event-driven architecture.
I'm trying to model the following flow of events:
UserAccountCreated (user-management-events) -> sending an e-mail -> MailNotificationSent (notification-service-events)
The notification-service application executes the whole flow. It waits for the UserAccountCreated event by listening to user-management-events topic. When the event is received, the application sends the email and publishes a new event - MailNotificationSent to the notification-service-events topic.
I have no problems with listening to the first event (UserAccountCreated) - the application receives it and performs the rest of the flow. I also have no problem with publishing the MailNotificationSent event. Unfortunately, for development purposes, I want to listen to the MailNotificationSent event in the notification service, so the application has to listen to both UserAccountCreated and MailNotificationSent. Here I'm not able to make it work.
Let's take a look at the implementation:
NotificationStreams:
public interface NotificationStreams {

    String INPUT = "notification-service-events-in";
    String OUTPUT = "notification-service-events-out";

    @Input(INPUT)
    SubscribableChannel inboundEvents();

    @Output(OUTPUT)
    MessageChannel outboundEvents();
}
NotificationEventsListener:
@Slf4j
@Component
@RequiredArgsConstructor
public class NotificationEventsListener {

    @StreamListener(NotificationStreams.INPUT)
    public void notificationServiceEventsIn(Flux<ActivationLinkSent> input) {
        input.subscribe(event -> {
            log.info("Received event ActivationLinkSent: " + event.toString());
        });
    }
}
UserManagementEvents:
public interface UserManagementEvents {

    String INPUT = "user-management-events";

    @Input(INPUT)
    SubscribableChannel inboundEvents();
}
UserManagementEventsListener:
@Slf4j
@Component
@RequiredArgsConstructor
public class UserManagementEventsListener {

    private final Gate gate;

    @StreamListener(UserManagementEvents.INPUT)
    public void userManagementEvents(Flux<UserAccountCreated> input) {
        input.subscribe(event -> {
            log.info("Received event UserAccountCreated: " + event.toString());
            gate.dispatch(SendActivationLink.builder()
                    .email(event.getEmail())
                    .username(event.getUsername())
                    .build()
            );
        });
    }
}
KafkaStreamsConfig:
@EnableBinding(value = {NotificationStreams.class, UserManagementEvents.class})
public class KafkaStreamsConfig {
}
EventPublisher:
@Slf4j
@RequiredArgsConstructor
@Component
public class EventPublisher {

    private final NotificationStreams eventsStreams;
    private final AvroMessageBuilder messageBuilder;

    public void publish(Event event) {
        MessageChannel messageChannel = eventsStreams.outboundEvents();
        AvroActivationLinkSent activationLinkSent = new AvroActivationLinkSent();
        activationLinkSent.setEmail(((ActivationLinkSent) event).getEmail());
        activationLinkSent.setUsername(((ActivationLinkSent) event).getUsername() + "-domain");
        activationLinkSent.setTimestamp(System.currentTimeMillis());
        messageChannel.send(messageBuilder.buildMessage(activationLinkSent));
    }
}
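The AvroMessageBuilder helper is not shown in the question. A minimal hypothetical version could simply wrap the Avro payload in a Spring Message; the real one may additionally set content-type or schema headers:

import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

// Hypothetical sketch of the helper used by EventPublisher above.
@Component
public class AvroMessageBuilder {

    public <T> Message<T> buildMessage(T payload) {
        return MessageBuilder.withPayload(payload).build();
    }
}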
application config:
spring:
  devtools:
    restart:
      enabled: true
  cloud:
    stream:
      default:
        contentType: application/*+avro
      kafka:
        binder:
          brokers: localhost:9092
      schemaRegistryClient:
        endpoint: http://localhost:8990
  kafka:
    consumer:
      group-id: notification-group
      auto-offset-reset: earliest
kafka:
  bootstrap:
    servers: localhost:9092
The application seems to ignore the notification-service-events listener. It works when listening to only one stream.
I'm almost 100% sure that this is not an issue with publishing the event, because I've connected manually to Kafka and verified that messages are published properly:
kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic notification-service-events-out --from-beginning
Do you have any ideas what else I should check? Is there any additional configuration on the Spring side?
I've found where the problem was.
I was missing the bindings configuration. In the application properties, I should have added the following lines:
cloud:
  stream:
    bindings:
      notification-service-events-in:
        destination: notification-service-events
      notification-service-events-out:
        destination: notification-service-events
      user-management-events-in:
        destination: user-management-events
In the user-management-service I didn't have such a problem because I used a different property:
cloud:
  stream:
    default:
      contentType: application/*+avro
      destination: user-management-events
I am using Spring Cloud Contract to verify the contract between my producer and consumer. In my consumer controller, I am using a Feign client to call another microservice's method to get some data. But now, in Spring Cloud Contract, making a stub call for this microservice seems impossible.
I am using Spring Cloud with Netflix OSS.
The config service and Eureka are up. I installed my producer locally at port 8090. The consumer uses Feign clients to call the producer to get some data. Now I am getting a 500 error showing that the URL is not found; the closest match is /ping. I believe the Feign client is unable to use the stub; it is somehow trying to connect to Eureka instead of the locally installed producer. Can you help me with this?
Any example or idea would be great.
Thanks
It is possible; here is my JUnit test that does it (ParticipantsService uses a Feign client):
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
// ids follow groupId:artifactId:version:classifier; "+" picks the latest stub version,
// and workOffline = true resolves the stubs from the local Maven repository.
@AutoConfigureStubRunner(ids = {"com.ryanjbaxter.spring.cloud:ocr-participants:+:stubs"}, workOffline = true)
@DirtiesContext
@ActiveProfiles("test")
public class OcrRacesApplicationTestsBase {

    @Autowired
    protected ParticipantsService participantsService;

    private List<Participant> participants = new ArrayList<>();

    // Hack to work around https://github.com/spring-cloud/spring-cloud-commons/issues/156
    static {
        System.setProperty("eureka.client.enabled", "false");
        System.setProperty("spring.cloud.config.failFast", "false");
    }

    @Before
    public void setup() {
        this.participants = new ArrayList<>();
        this.participants.add(new Participant("Ryan", "Baxter", "MA", "S", Arrays.asList("123", "456")));
        this.participants.add(new Participant("Stephanie", "Baxter", "MA", "S", Arrays.asList("456")));
    }

    @After
    public void tearDown() {
        this.participants = new ArrayList<>();
    }

    @Test
    public void contextLoads() {
        List<Participant> participantList = participantsService.getAllParticipants();
        assertEquals(participants, participantList);
    }
}