Flux returns empty objects - java

I started with Flux today, as it is pretty powerful. I have now set up a simple Spring Boot 2 project to work with it, but the returned objects are empty.
I started a very simple Spring Boot project with some dependencies:
Reactive Web (embedded Netty + Spring WebFlux)
Reactive MongoDB (Spring Data MongoDB)
Thymeleaf template engine
Lombok (to simplify writing POJOs)
And added some code:
Controller:
@RestController
public class ChapterController {

    @Autowired
    private ChapterRepository repository;

    @GetMapping("/chapters")
    public Flux<Chapter> listing() {
        return repository.findAll();
    }
}
Repository:
public interface ChapterRepository extends ReactiveCrudRepository<Chapter, String> {}
Configuration: (to load some data into the embedded MongoDB)
@Configuration
public class LoadDatabase {

    @Bean
    CommandLineRunner init(ChapterRepository repository) {
        return args -> {
            Flux.just(
                new Chapter("The life of Batman"),
                new Chapter("Batmans most glorious' code"),
                new Chapter("The hero we needed but didn't deserve, Batman."))
                .flatMap(repository::save)
                .subscribe(System.out::println);
        };
    }
}
Data class:
@Data
@Document
public class Chapter {

    @Id
    private String id;
    private String name;

    public Chapter(String name) {
        this.name = name;
    }
}
OK, now when I start the application and access the endpoint http://localhost:8080/chapters, it returns this:
[
{},
{},
{}
]
It is displaying the same amount of objects that I created in the LoadDatabase class. When I change the amount of objects created, it shows that amount on the endpoint.
I don't know what I did wrong. I tried debugging the returned Flux object, but I can't make anything out of it.
I hope someone can spot my mistake!

You are getting empty objects because the fields of your objects are not being serialized, most likely because no getters were generated.
You are using Lombok's @Data annotation, which is like having implicit @Getter, @Setter, @ToString, @EqualsAndHashCode and @RequiredArgsConstructor annotations on the class (except that no constructor will be generated if any explicitly written constructor exists). However, it sometimes doesn't work if it is not configured correctly in the IDE, so try once with manual getters and setters for the properties.
If the manual getters/setters work, troubleshoot Lombok as follows.
Ensure that your IDE is aware of Lombok:
IntelliJ: Lombok added but getters and setters not recognized in Intellij IDEA
Eclipse: Lombok is not generating getter and setter
If the problem still exists, follow one of the comments in this similar thread here.
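As a quick sanity check, here is a sketch of the Chapter class with hand-written accessors. The Spring Data @Document/@Id annotations are left out so the snippet stays self-contained; add them back in the real project.

```java
// Chapter without Lombok: hand-written getters/setters so that Jackson
// can serialize the fields even when Lombok is not set up in the IDE/build.
public class Chapter {
    private String id;   // annotated with @Id/@Document in the real project
    private String name;

    public Chapter(String name) {
        this.name = name;
    }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @Override
    public String toString() {
        return "Chapter{id=" + id + ", name=" + name + "}";
    }
}
```

If the endpoint returns populated JSON with this version of the class, the problem is the Lombok setup, not Spring or MongoDB.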


Springboot Get Api Endpoint returns 404 not found

I'm trying to create a Spring Boot GET API endpoint, but I get a 404 Not Found error. Here is my code:
Profile.java
@Getter
@Setter
public class Profile {
    private String slackUsername;
    private Boolean backend;
    private Integer age;
    private String bio;
}
ProfileController
@RestController
public class ProfileController {

    @Autowired
    private Profile profile;

    @GetMapping(path = "/profile")
    private ResponseEntity<String> userInfo() {
        profile.setSlackUsername("Ajava");
        profile.setBackend(true);
        profile.setAge(00);
        profile.setBio("My name is Anakhe Ajayi, I'm learning Java everyday and I love Jesus");
        return ResponseEntity.ok(profile.toString());
    }
}
Main
@SpringBootApplication
@ComponentScan("com/ajavacode/HNGBackendStage1/api.profile")
public class HngBackendStage1Application {
    public static void main(String[] args) {
        SpringApplication.run(HngBackendStage1Application.class, args);
    }
}
porm.xml
Please fix the value in the @ComponentScan annotation like this and try again:
@SpringBootApplication
@ComponentScan(basePackages = "com.ajavacode.HNGBackendStage1")
public class HngBackendStage1Application {
    public static void main(String[] args) {
        SpringApplication.run(HngBackendStage1Application.class, args);
    }
}
There is also an issue in the ProfileController class: you are autowiring the Profile class inside it, but Profile is not a bean, so this is incorrect. Additionally, the userInfo() method is private; it should be public. Here is the fixed version:
@RestController
public class ProfileController {

    @GetMapping(path = "/profile")
    public ResponseEntity<String> userInfo() {
        Profile profile = new Profile();
        profile.setSlackUsername("Ajava");
        profile.setBackend(true);
        profile.setAge(0);
        profile.setBio("My name is Anakhe Ajayi, I'm learning Java everyday and I love Jesus");
        return ResponseEntity.ok(profile.toString());
    }
}
You have to check some items in your code:
Make sure the endpoint you are sending the request to is correct.
Add @RequestMapping("Profile") to the controller to avoid repeating the endpoint and to reduce ambiguity.
Make sure your pom.xml is correct.
Looking at your code, there are a few things to mention.
First of all, your package structure looks good (besides the fact that I, personally, would keep everything lowercase).
With the package structure that you have in place, you actually don't need any #ComponentScan annotation at all. The annotation #SpringBootApplication at your main class by default scans for components with the package of that class as the base package. So you only need to set something if you want to explicitly scan for components in some other package, e.g., either at a higher level or if you want to skip packages in the hierarchy.
Next thing is the controller. Question here is: What do you actually want to achieve?
I assume that you want to build an application that provides a GET /profile endpoint that returns a response object like the example below:
{
"slackUsername": "alice",
"backend": false,
"age": 42,
"bio": "I'm just an example"
}
If my understanding is correct, there is at least one thing that is a bit odd: currently, you defined a controller that returns the String representation of the Profile object. That isn't necessarily the same as the example shown above. If you do not override the toString() method, the result would be something like com.ajavacode.HNGBackendStage1.api.Profile@6d06d69c (see this Baeldung article, for instance). And even if you use Lombok's @Data or @ToString annotations, the result will not be a JSON or XML representation but something that is suitable for logging, for instance.
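A minimal, framework-free sketch of this behaviour; the Profile class here is a stripped-down stand-in for the one in the question:

```java
// Shows why returning profile.toString() is not JSON: without an override,
// Object#toString yields "ClassName@hexHash", not the field values.
public class ToStringDemo {
    static class Profile {
        private String slackUsername;
        void setSlackUsername(String s) { this.slackUsername = s; }
    }

    public static void main(String[] args) {
        Profile p = new Profile();
        p.setSlackUsername("Ajava");
        // Prints something like "ToStringDemo$Profile@6d06d69c" -- the field
        // slackUsername does not appear anywhere in the output.
        System.out.println(p);
    }
}
```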
Spring will already take care of the serialization into JSON (or XML) format. In your controller you can just return the Profile object or, alternatively, a ResponseEntity<Profile>:
@GetMapping(path = "/profile")
public Profile userInfo() {
    Profile profile = new Profile();
    profile.setSlackUsername("Ajava");
    profile.setBackend(true);
    profile.setAge(0);
    profile.setBio("My name is Anakhe Ajayi, I'm learning Java everyday and I love Jesus");
    return profile;
}
The above example would create a response with the profile as the response body and HTTP status code 200 OK. In case you use ResponseEntity, you could also adjust the HTTP status code but in your case that probably is not necessary (yet).
Autowiring the Profile class also is not correct, as already mentioned. You only need to autowire beans, i.e., classes that are annotated with @Component, @Service, or @Repository. The class you "autowired" is just a POJO representing some "data object", nothing that provides any kind of business logic.

Mapstruct value from application.properties

Is it possible to set a value from a field from the application.properties file?
I am looking for something like
@Mapping(target = "version", expression = "${application.version}")
StateDto stateToStateDto(State state);
where application.version=v1 is from the application.properties file.
Consider a "util service" like:
@Service
public class PropertyService {

    @org.springframework.beans.factory.annotation.Value("${application.version}")
    private String appVersion;

    // accessors, more properties/stuff..
}
Then you can define your Mapping like:
@Mapper(// ..., ...
    componentModel = "spring")
public abstract class StateMapper {

    @Autowired
    protected PropertyService myService;

    @org.mapstruct.Mapping(target = "version", expression = "java(myService.getAppVersion())")
    public abstract StateDto stateToStateDto(State state);

    // ...
}
See also:
Mapstruct - How can I inject a spring dependency in the Generated Mapper class
Mapstruct Expressions
My minimal solution on GitHub
As far as my knowledge goes, this is not possible. MapStruct analyzes the @Mapping annotation at compile time, and the annotation parameters require constants, so getting them from a file is not possible.
You can always implement something in MapStruct that fulfills your needs. But I would go with a simple self-implemented mapper where you take the value for your version field at runtime from the environment.
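A minimal sketch of such a self-implemented mapper. The State/StateDto classes, with assumed name and version fields, are illustrations only; in a Spring application the version string passed to the constructor would come from @Value("${application.version}") or the Environment:

```java
// Simple State/StateDto stand-ins for illustration (field names assumed).
class State {
    private final String name;
    State(String name) { this.name = name; }
    String getName() { return name; }
}

class StateDto {
    private String name;
    private String version;
    void setName(String name) { this.name = name; }
    void setVersion(String version) { this.version = version; }
    String getName() { return name; }
    String getVersion() { return version; }
}

// Hand-written mapper (no MapStruct): the version is supplied at runtime,
// so it does not need to be a compile-time constant.
public class StateMapper {
    private final String version;

    // In Spring, inject this via @Value("${application.version}").
    public StateMapper(String version) {
        this.version = version;
    }

    public StateDto stateToStateDto(State state) {
        StateDto dto = new StateDto();
        dto.setName(state.getName());
        dto.setVersion(version); // runtime value from the environment
        return dto;
    }
}
```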
This is not possible through MapStruct. However, a feature could be raised that would support some custom expression language that would use Spring @Value and inject that.
e.g.
@Mapping(target = "version", expression = "springValue(${application.version})")
StateDto stateToStateDto(State state);
and then MapStruct will generate something like:
@Component
public class StateMapperImpl {

    @Value("${application.version}")
    private String version;

    // ...
}

Mapping multiple graphQL schema files to separate resolvers - Spring Boot

I'm finding it really difficult to separate queries from one schema file. I want to have something like this:
car.graphqls
type Query {
car(id: ID!): Car
}
type Car {
id: ID!,
name: String!
}
house.graphqls
type Query {
house(id: ID!): House
}
type House {
id: ID!,
owner: String,
street: String
}
I searched a lot but I can't find a way to write two java classes and implement getHouse() in one of them and getCar() in other.
@Component
public class CarQuery implements GraphQLQueryResolver {

    @Autowired
    private CarService carService;

    public List<Car> getCar(final int id) {
        return this.carService.getCar(id);
    }
}
public class HouseQuery implements GraphQLQueryResolver {

    @Autowired
    private HouseService houseService;

    public List<House> getHouse(final int id) {
        return this.houseService.getHouse(id);
    }
}
I found out that the graphql-java-tools package which I'm using will search through the project and finds all schema files (that end with .graphqls), but the code which I showed above gives me this error:
Caused by: com.coxautodev.graphql.tools.FieldResolverError: No method found with any of the following signatures (with or without one of [interface graphql.schema.DataFetchingEnvironment] as the last argument), in priority order:
com.example.polls.resolvers.CarQuery.house(~count)
com.example.polls.resolvers.CarQuery.getHouse(~count)
I also found some advice that I need to have only one root Query in the schema files, and to extend all other Query types in the schema files. I tried writing something like this in house.graphqls, but it failed:
extend Type Query {
house(id: ID!): House
}
Is there a way to tell graphql and java what schema file I want to be mapped to which java resolver file?
Thanks AllirionX. Your answer was helpful.
I would just like to summarize the final solution for everyone looking for an answer on how to create multiple schema files with separate query types in each of them, and how to map those query types to different Java components using GraphQLQueryResolver.
My Spring Boot project structure
I have two schema files A.graphqls and B.graphqls.
A.graphqls
---------------
type Person {
id: ID!,
name: String
}
type Query {
getPerson(id: Int):Person
}
type Mutation {
createPerson(name: String):Int
}
B.graphqls
---------------
type Book {
id: ID!,
title: String,
owner: Person
}
extend type Query {
getBooks(count: Int):[Book]
}
extend type Mutation {
deleteBook(id: Int):Int
}
schema {
query: Query,
mutation: Mutation
}
I will explain the rules I learned that we need to follow on this topic (I don't guarantee that all of this is necessary, but this is how I managed to get it to work the way I wanted).
The key here is to have only one schema definition. It doesn't matter in which file (A.graphqls or B.graphqls or C.graphqls...) - in this example, I added it at the bottom of the B.graphqls file.
Also, you can have only one "type Query" definition, in ONE file. In all other schema files you will need to extend that type with "extend type Query" (yeah, I know, it makes sense now...). Which schema file holds the main Query definition is not relevant. Everything in this paragraph applies to mutations as well.
You can use type defined in one .graphqls file in other .graphqls file. It will get recognized. So, in this example, you can use Person type reference in B.graphqls.
Java resolvers:
import com.coxautodev.graphql.tools.GraphQLQueryResolver;
import graphql.demo.model.Person;
import graphql.demo.service.PersonService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.List;
@Component
public class AQuery implements GraphQLQueryResolver {

    @Autowired
    private PersonService personService;

    public Person getPerson(final int id) {
        return this.personService.getPerson(id);
    }
}
And second one...
@Component
public class BQuery implements GraphQLQueryResolver {

    @Autowired
    private BookService bookService;

    public List<Book> getBooks(final int count) {
        return this.bookService.getBooks(count);
    }
}
The names of these classes are not important. We could also have only one class that implements GraphQLQueryResolver and implement all the query methods from both A.graphqls and B.graphqls (the getBooks() and getPerson() methods) in it. As long as we implement all the methods, it doesn't matter in which resolver class each one is implemented; graphql-java will find it.
Same applies to mutations using GraphQLMutationResolver.
I have a full working example (MySQL, Spring Boot, React with Apollo client) on my GitHub, so you can check it out. There is also a MySQL script for generating the database used in the project. There are plenty of tables, but they are just for testing purposes; what is important is the file structure and the files I explained above. If you are not interested in the client app, you can test it using GraphiQL, of course.
https://github.com/dusko-dime/spring-react-graphql
Hope this can be helpful to someone and thanks for helping me once again :)
Multiple schema files
Graphql-java-tools will find all the .graphqls files to build a schema instance. Multiple schema files work out of the box.
Multiple resolvers
Each resolver component must implement GraphQLQueryResolver or GraphQLMutationResolver and be scannable by spring boot. Make sure it has a #Component annotation and is in one of the spring #ComponentScan basePackages.
The error
"No method found with any of the following signatures" means that graphql-tools was not able to find a resolver component with a method matching the signatures. In this case, the HouseQuery resolver is missing the @Component annotation.
There can also be an issue with your resolver method name.
Please ensure that your resolver method name matches the one defined in your GraphQL schema.

How to cleanly test Spring Controllers that retrieve parameters with DomainClassConverter?

I am big on clean, well-isolated unit tests. But I am stumbling on the "clean" part here for testing a controller that uses the DomainClassConverter feature to get entities as parameters for its mapped methods.
@Entity
class MyEntity {

    @Id
    private Integer id;

    // rest of properties goes here.
}
The controller is defined like this
@RequestMapping("/api/v1/myentities")
class MyEntitiesController {

    @Autowired
    private DoSomethingService aService;

    @PostMapping("/{id}")
    public ResponseEntity<MyEntity> update(@PathVariable("id") Optional<MyEntity> myEntity) {
        // do what is needed here
    }
}
So from the DomainClassConverter small documentation I know that it uses CrudRepository#findById to find entities. What I would like to know is how can I mock that cleanly in a test.
I have had some success with these steps:
Create a custom Converter/Formatter that I can mock
Instantiate my own MockMvc with the above converter
Reset the mock and change its behaviour in each test
The problem is that the setup code is complex and thus hard to debug and explain (my team is 99% junior guys coming from rails or uni so we have to keep things simple). I was wondering if there is a way to inject the desired MyEntity instances from my unit test while keep on testing using the #Autowired MockMvc.
Currently I am trying to see if I can inject a mock of the CrudRepository for MyEntity but no success. I have not worked in Spring/Java in a few years (4) so my knowledge of the tools available might not be up to date.
So from the DomainClassConverter small documentation I know that it uses CrudRepository#findById to find entities. What I would like to know is how can I mock that cleanly in a test.
You will need to mock 2 methods that are called prior the CrudRepository#findById in order to return the entity you want. The example below is using RestAssuredMockMvc, but you can do the same thing with MockMvc if you inject the WebApplicationContext as well.
@RunWith(SpringRunner.class)
@SpringBootTest(classes = SomeApplication.class)
public class SomeControllerTest {

    @Autowired
    private WebApplicationContext context;

    @MockBean(name = "mvcConversionService")
    private WebConversionService webConversionService;

    @Before
    public void setup() {
        RestAssuredMockMvc.webAppContextSetup(context);

        SomeEntity someEntity = new SomeEntity();

        when(webConversionService.canConvert(any(TypeDescriptor.class), any(TypeDescriptor.class)))
            .thenReturn(true);

        when(webConversionService.convert(eq("1"), any(TypeDescriptor.class), any(TypeDescriptor.class)))
            .thenReturn(someEntity);
    }
}
At some point Spring Boot will execute the WebConversionService::convert, which will later call DomainClassConverter::convert and then something like invoker.invokeFindById, which will use the entity repository to find the entity.
So why mock WebConversionService instead of DomainClassConverter? Because DomainClassConverter is instantiated during application startup without injection:
DomainClassConverter<FormattingConversionService> converter =
new DomainClassConverter<>(conversionService);
Meanwhile, WebConversionService is a bean which will allow us to mock it:
@Bean
@Override
public FormattingConversionService mvcConversionService() {
    WebConversionService conversionService = new WebConversionService(this.mvcProperties.getDateFormat());
    addFormatters(conversionService);
    return conversionService;
}
It is important to name the mock bean as mvcConversionService, otherwise it won't replace the original bean.
Regarding the stubs, you will need to mock 2 methods. First you must tell that your mock can convert anything:
when(webConversionService.canConvert(any(TypeDescriptor.class), any(TypeDescriptor.class)))
.thenReturn(true);
And then the main method, which will match the desired entity ID defined in the URL path:
when(webConversionService.convert(eq("1"), any(TypeDescriptor.class), any(TypeDescriptor.class)))
.thenReturn(someEntity);
So far so good. But wouldn't it be better to match the destination type as well? Something like eq(TypeDescriptor.valueOf(SomeEntity.class))? It would, but this creates a new instance of a TypeDescriptor, which will not match when this stub is called during the domain conversion.
This was the cleanest solution I've put to work, but I know that it could be a lot better if Spring would allow it.

Springfox / Swagger does not resolve polymorphic field

I'm having a simple Spring Boot application with one REST endpoint to return a "Job" object, which contains a list of polymorphics, next to other stuff.
We follow a Code First approach and try to create the API models to fit our needs. But the generated API doc does not represent our model in its full complexity, as it does not resolve the list of polymorphics.
The Job object looks like
@Data // Lombok getters and setters
public final class Job {
    private String foo;
    private String bar;
    private List<Condition> conditionList;
}
Condition is a parent object for a set of different conditions
public abstract class Condition {
}
Two example implementations of a Condition would be
@Data
public final class Internal extends Condition {
    private String nodeId;
}
and
@Data
public final class Timed extends Condition {
    private ZonedDateTime timestamp;
}
The REST controller is stupidly simple:
@RestController
@RequestMapping("/hello")
public class MyController {

    @GetMapping
    public ResponseEntity<Job> getJob() {
        return new ResponseEntity<>(new Job(), HttpStatus.OK);
    }
}
Now, when I open the Swagger UI and look at the generated definition, the element conditionList is an empty object {}
I tried to use @JsonSubTypes and @ApiModel on the classes, but there was no difference in the output. I might not have used them correctly, or maybe Swagger is just not able to fulfill the job, or maybe I'm just blind or stupid.
How can I get Swagger to include the Subtypes into the generated api doc?
We "fixed" the problem by changing the structure, so it's more of a workaround.
Instead of using a list of polymorphics, we now use a "container" class, which contains each type as its own field.
The Condition object became a "container" or "manager" class, instead of a list.
In the Job class, the field is now defined as:
private Condition condition;
The Condition class itself is now
public final class Condition {
    private List<Internal> internalConditions;
    // etc...
}
And, as an example, the Internal class lost its parent type and is now just:
public final class Internal {
    // Logic...
}
The Swagger generated JSON now looks like this (excerpt):
"Job": {
    "Condition": {
        "Internal": {},
        "External": {},
        // etc...
    }
}
A useful display of polymorphic responses in Swagger UI with Springfox 2.9.2 seems hard (impossible?), so the workaround feels reasonable.
OpenAPI 3.0 appears to improve support for polymorphism. To achieve your original goal, I would either
Wait for Springfox to get Open API 3.0 support (issue 2022 in Springfox Github). Unfortunately, the issue has been open since Sept 2017 and there is no indication of Open API 3.0 support being added soon (in Aug 2019).
Change to Spring REST Docs, perhaps adding the restdocs-api-spec extension to generate Open API 3.0.
We have run into similar problems with polymorphism but have not yet attempted to implement a solution based on Spring REST Docs + restdocs-api-spec.
