From a design perspective, before development starts, which is the better way to share the contract with the consumer: creating the RAML first and then generating the DTOs, or vice versa?
From my understanding there is no real difference, because both need to be done.
RAML is used to document and define the API, and DTOs play an important part in that definition. Resources defined without proper DTOs make for an incomplete API. At the same time, resource names, query parameters, and other API details are the first things to define, so I would start with the RAML. Furthermore, when the consumer does more than just fetch data and has business logic of its own, changing how the API is used can be quite expensive if the DTOs are not defined. I would therefore advise agreeing on them beforehand.
It is not possible to see every detail before the API implementation starts, but it is better to spend a couple of extra hours during the technical design phase than a couple of days or weeks during the implementation phase when the API requirements change.
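As a minimal illustration of the RAML-first approach (the API title, resource, and type names below are hypothetical), the contract can define the DTO shape right next to the resource that returns it, so both are agreed with the consumer at once:

```yaml
#%RAML 1.0
title: Orders API
types:
  OrderDto:                # the DTO agreed with the consumer up front
    type: object
    properties:
      id: integer
      status: string
/orders:
  /{orderId}:
    get:
      responses:
        200:
          body:
            application/json:
              type: OrderDto
```

The Java DTO classes can then be generated from this contract, so the RAML stays the single source of truth.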
I know this question has already been asked, but I have a different scenario in which I have the following structure:
As you can see the front-end (UI) will always call a GraphQL API Gateway microservice that will redirect the request to different microservices depending on the functionality or data required. I simplified the graphic by drawing 3 microservices, but we have more than 30.
When I started working on this project, I noticed that the GraphQL API Gateway microservice repeats many DTOs that are already defined in the other microservices. As a consequence, the gateway is full of Java classes that serve as DTOs (with the same fields and datatypes, and no modifications): developers have simply been copying a DTO from, say, the user microservice and pasting it into the GraphQL API Gateway microservice. The same pattern was followed for the DTOs of all 30 microservices, and DTOs are also copy-pasted between microservices as needed. You might say this shouldn't be a problem, but whenever a microservice requires a change to a DTO's structure, we need to change that DTO in all the other microservices as well.
Some approaches that my team and I evaluated were to:
Either create a common library exclusively for DTOs that could be reused by many projects. The only concern is that this adds a "dependency" and works against the "microservices" idea. Additionally, I don't think it would fully solve our issue, since we would need to keep the dependency up to date in each microservice.
Or use some sort of utility, such as JSON Schema, that generates the required DTOs on demand.
This is something I have been discussing with the team; most of the team would like to go with the JSON Schema approach. However, I would like to know if there is something more efficient from your point of view. I took a look at some approaches such as exposing domain objects (https://www.baeldung.com/java-microservices-share-dto), but I do not think they apply to the particular case I am describing in this post.
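For reference, a shared-schema setup might keep one JSON Schema file per DTO in a single repository and generate the Java classes in each service at build time (the schema and field names below are hypothetical; tooling such as jsonschema2pojo can do the generation):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "UserDto",
  "type": "object",
  "properties": {
    "id":    { "type": "integer" },
    "email": { "type": "string" }
  },
  "required": ["id", "email"]
}
```

A structural change then happens in one place, and each microservice picks it up on its next build rather than by copy-paste.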
So, any suggestion or feedback will be highly appreciated. Thanks in advance.
I have a POJO class and I need to call a RESTful web service using some properties from the POJO as parameters to the service. The caveat is that I won't know the endpoint and its parameters until runtime. Basically, the user will configure the endpoint, the input/output schemas, and the mappings between those schemas and the POJO class at runtime. Then I have to call the API with the appropriate values.
This is going to be a really broad answer.
It sounds like a problem that would benefit from treating "code as data".
What I mean by this is that the range of possibilities you have to handle at runtime approaches the complexity of a programming language itself.
When this happens, there are generally a few choices, which people either stumble into by accident or consciously make depending on who the user is:
1. Limit the scope of the problem, and make your configuration so complex that it may as well be a programming language itself.
2. Embed a scripting language, or create some runtime loading of plugins in the native language.
3. Use an off-the-shelf library / solution.
I'd recommend 2 or 3 over 1 if your user is yourself or the configuration can be provided by another programmer.
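To make the "code as data" idea concrete, here is a minimal sketch in which the endpoint and the parameter-to-getter mapping are plain data supplied at runtime, and reflection pulls the mapped values out of the POJO. All names here (`Person`, `buildUrl`, the config keys) are hypothetical, and real code would also URL-encode the values and then issue the request with an HTTP client:

```java
import java.lang.reflect.Method;
import java.util.LinkedHashMap;
import java.util.Map;

public class DynamicCall {

    public static class Person {                 // example POJO
        private final String name;
        private final int age;
        public Person(String name, int age) { this.name = name; this.age = age; }
        public String getName() { return name; }
        public int getAge() { return age; }
    }

    /** Builds the request URL from a runtime mapping of query parameter -> getter name. */
    public static String buildUrl(String endpoint, Map<String, String> paramToGetter,
                                  Object pojo) throws Exception {
        StringBuilder url = new StringBuilder(endpoint).append('?');
        boolean first = true;
        for (Map.Entry<String, String> e : paramToGetter.entrySet()) {
            Method getter = pojo.getClass().getMethod(e.getValue());  // reflective lookup
            if (!first) url.append('&');
            url.append(e.getKey()).append('=').append(getter.invoke(pojo));
            first = false;
        }
        return url.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> mapping = new LinkedHashMap<>();  // loaded from user config at runtime
        mapping.put("fullName", "getName");
        mapping.put("years", "getAge");
        String url = buildUrl("https://api.example.com/search", mapping, new Person("Ada", 36));
        System.out.println(url);  // https://api.example.com/search?fullName=Ada&years=36
    }
}
```

The point is that nothing about the endpoint or the mapping is compiled in; the same code handles whatever configuration the user provides.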
I am new to Spring and Hibernate. I am having trouble defining the layers of my application, which is a movie site where one can search for movies and theaters, search movies by theater name, and theaters by movie name. I'll sum up my questions as follows:
What should the entities in my application be? I have created MovieEntity and TheaterEntity so far; how do I proceed with the mapping between the two?
My project structure should be something like this:
Entities, repositories, and services. I am not sure where my service layer fits, as all the methods I need to implement are defined on the entities.
Thanks in advance.
There are many ways to do this, and therefore you will not find one definitive answer to your question. (I didn't downvote you, but I suspect that this is the reason for it.)
I would recommend looking at various open source projects (check GitHub) to see how this is done by convention.
One popular way is to create DAO interfaces as a point of access to your data layer and create implementations of those DAOs that are specific to Hibernate. Your services would contain business logic and can use Spring autowiring to link to these interfaces. Your controllers shouldn't contain business logic and should really just route requests. Keep your validation code separate whenever possible too. Doing so makes it particularly easy to unit test.
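A minimal, framework-free sketch of that layering (all class names are hypothetical; in a real project the DAO implementation would use Hibernate, and the service would receive the DAO via Spring autowiring rather than a constructor call in `main`):

```java
import java.util.ArrayList;
import java.util.List;

public class LayeringSketch {

    public static class Movie {                      // entity: data only, no business logic
        final String title;
        public Movie(String title) { this.title = title; }
    }

    public interface MovieDao {                      // data-access contract
        void save(Movie movie);
        List<Movie> findByTitle(String title);
    }

    // Hibernate-backed in real code; in-memory here so the sketch runs standalone.
    public static class InMemoryMovieDao implements MovieDao {
        private final List<Movie> store = new ArrayList<>();
        public void save(Movie movie) { store.add(movie); }
        public List<Movie> findByTitle(String title) {
            List<Movie> out = new ArrayList<>();
            for (Movie m : store) if (m.title.equals(title)) out.add(m);
            return out;
        }
    }

    public static class MovieService {               // business logic lives here
        private final MovieDao dao;                  // injected (e.g. by Spring @Autowired)
        public MovieService(MovieDao dao) { this.dao = dao; }
        public boolean isShowing(String title) {     // example business rule
            return !dao.findByTitle(title).isEmpty();
        }
    }

    public static void main(String[] args) {
        MovieDao dao = new InMemoryMovieDao();
        dao.save(new Movie("Heat"));
        MovieService service = new MovieService(dao);
        System.out.println(service.isShowing("Heat"));   // true
    }
}
```

Because the service depends only on the `MovieDao` interface, unit tests can hand it a fake DAO without touching Hibernate or a database.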
I have one module written in Java: a web service module that accepts a request, processes it (applying some business rules), saves (modifies or deletes) values in the database (using Hibernate), and then sends a status response. Is it reasonable to refactor that module so that in the end there are two modules: (1) a web service module and (2) a processing module where the business rules are applied and the database work is done? And if so, what are good practices for information exchange between the modules?
Thanks!
Remember "KISS": keep it simple, stupid.
It's more important to have clean and maintainable code, focused on the domain model, than to break it up based on technical considerations.
Yes, database storage is one aspect, and handling webservice calls is another, but it is too easy to spend a lot of time making a "clean" separation, with the only result being that it takes longer to change things later. (As anyone who has worked on a 14-layered "enterprise" application can tell you.)
Ideally, the "business logic" is the one module you write, and the webservice adaptation and the data storage should just work, "magically". Since that is not the case, you obviously have to deal with them too, but they are not the primary focus.
I strongly recommend this: the business rules are your data model. The webservice methods should be as thin as possible and expose the model as cleanly as possible.
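A minimal sketch of that "thin webservice" idea (the `Account` and `WithdrawEndpoint` names are hypothetical): the rule lives in the domain model, and the webservice layer only translates the call and the result.

```java
public class ThinService {

    public static class Account {                 // domain model owns the business rule
        private int balance;
        public Account(int balance) { this.balance = balance; }
        public boolean withdraw(int amount) {     // the rule: no overdrafts, positive amounts
            if (amount <= 0 || amount > balance) return false;
            balance -= amount;
            return true;
        }
        public int getBalance() { return balance; }
    }

    public static class WithdrawEndpoint {        // webservice layer: just delegates
        public String withdraw(Account account, int amount) {
            return account.withdraw(amount) ? "OK" : "REJECTED";
        }
    }

    public static void main(String[] args) {
        Account acc = new Account(100);
        WithdrawEndpoint ws = new WithdrawEndpoint();
        System.out.println(ws.withdraw(acc, 30));   // OK
        System.out.println(acc.getBalance());       // 70
    }
}
```

If the webservice layer ever grows its own `if` statements about balances, the rule has leaked out of the model.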
This is a rather insightful article about the "business layer": http://thedailywtf.com/Articles/The-Mythical-Business-Layer.aspx
Also remember that "layers" are abstract concepts, and it is not a fundamental requirement that they be "physically" separated into different Eclipse projects etc. Really, it's not.
I am working on an existing Java project with a typical services/DAO setup, for which only a web application was available. My job is to add webservices on top of the services layer, but the webservices have their own functional analysis and data model. The functional analysis of course focuses on what is possible in the different service methods.
As good practice demands, we used the WSDL-first strategy and generated JAXB-bound Java classes and an SEI for the webservices. After partially implementing the webservices, we noticed a 70% match between the data models. This resulted in writing converters that take the webservice JAXB classes and map them to the service layer classes:
Customer customer = new Customer();
customer.setName(wsCustomer.getName());
customer.setFirstName(wsCustomer.getFirstName());
// ...
This is a very obvious example; some of the other mappings were a little more complicated.
Can anyone give his best practices, experiences, solutions to this kind of situations?
Are any of these frameworks useful?
http://transmorph.sourceforge.net/wiki/index.php/Main_Page
http://ezmorph.sourceforge.net/
Please don't start a discussion about WSDL first vs code first.
I am experiencing the same issue on my project. I created a factory for the generated objects and use it to create them:
Customer customer = factory.createCustomer(wsCustomer);
This isolates the construction code without altering the generated code.
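A minimal version of that factory (the two customer classes below stand in for the generated JAXB class and the service-layer class; all names are hypothetical):

```java
public class DtoFactorySketch {

    public static class WsCustomer {              // stands in for the generated JAXB class
        private final String name;
        public WsCustomer(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static class Customer {                // service-layer class
        private String name;
        public void setName(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static class CustomerFactory {         // all conversion code lives here
        public Customer createCustomer(WsCustomer ws) {
            Customer c = new Customer();
            c.setName(ws.getName());
            return c;
        }
    }

    public static void main(String[] args) {
        Customer c = new CustomerFactory().createCustomer(new WsCustomer("Alice"));
        System.out.println(c.getName());          // Alice
    }
}
```

When the WSDL is regenerated, only the factory needs to be revisited; the rest of the codebase keeps using the service-layer classes.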
I think the real question is: how much of the code generators do you want to use in the future, and can you get them to generate what you're doing now?
Converting everything to your current data model is a good idea if you don't care about the code-generation capabilities of your tools, or if they can adapt to what you want.