I have a doubt about the architecture of the app I am working on.
It is built from the following modules:
module-app
module-domain
module-rest
module-rest-api
module-rest-client
In module-rest-api I keep the DTOs for my controllers. But now I have to add other DTOs in order to call an external client.
So the question is: where should I put those external DTOs, in module-rest-api or inside the external client package in module-app?
I would appreciate some help, thank you.
There's no one answer here, because where you put those DTOs depends very much on personal preference.
For example, you could have a module-dto. In module-dto you could have a number of packages divided by purpose, e.g.
com.mycompany.project.dto.outbound // for external requests
com.mycompany.project.dto.inbound // for incoming API requests
This way you can just import your DTO module anywhere. I've always kept my DTOs in a standalone module for this very reason: I can then use them from anywhere.
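For illustration, here is roughly what a DTO in such a module could look like (the package and field names below are hypothetical):

// In module-dto, under the outbound package
package com.mycompany.project.dto.outbound;

// Hypothetical payload for a request to an external service
public class ExternalPaymentRequest {

    private String accountId;
    private long amountInCents;

    public String getAccountId() { return accountId; }
    public void setAccountId(String accountId) { this.accountId = accountId; }

    public long getAmountInCents() { return amountInCents; }
    public void setAmountInCents(long amountInCents) { this.amountInCents = amountInCents; }
}

Both module-rest-client and module-app can then declare a dependency on module-dto and share the same class.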
I know this question has already been asked, but I have a different scenario, with the following structure:
As you can see, the front-end (UI) always calls a GraphQL API Gateway microservice that redirects each request to different microservices depending on the functionality or data required. I simplified the diagram by drawing 3 microservices, but we have more than 30.
When I started working on this project, I noticed that the GraphQL API Gateway microservice repeats many DTOs that are already defined in the other microservices. As a consequence, the gateway is full of Java classes that serve as DTOs (same fields, same datatypes, no modifications): developers have simply been copying a DTO from, say, the user microservice and pasting it into the gateway. The same pattern was followed for the DTOs of all 30 microservices, and DTOs are also copy-pasted between microservices as needed. That may not sound like an issue, but whenever a microservice requires a DTO structure change, we need to change the DTO in all the other microservices.
Some approaches that my team and I evaluated were to:
Either create a common library exclusively for DTOs that could be reused by many projects (a sketch of what it might contain follows this list). The only concern is that this adds a "dependency" and breaks the "microservices" idea. Additionally, I think it would not fully solve our issue, since we would need to keep that dependency up to date in each microservice.
Use some sort of utility, such as JSON Schema, that generates the required DTOs based on our needs.
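For reference, the shared library in the first option would contain nothing but plain DTO classes. A minimal sketch, with hypothetical names; the gateway and the owning microservice would both depend on the same versioned artifact (say com.mycompany:common-dto):

package com.mycompany.common.dto;

// Lives only in the shared, versioned DTO artifact; no business logic here
public class UserDto {

    private String id;
    private String email;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}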
This is something I have been discussing with the team; most of the team would like to go with the JSON Schema approach. However, I would like to know if there could be something more efficient from your point of view. I took a look at some approaches like exposing domain objects (https://www.baeldung.com/java-microservices-share-dto), but I do not think that applies to the particular case described in this post.
So, any suggestion or feedback will be highly appreciated. Thanks in advance.
In client-server applications with Spring Boot and Angular, most resources I find explain how to expose a REST endpoint from Spring Boot and consume it from Angular with an HTTP client.
Most of the time, communicating in JSON is recommended, which means maintaining DTOs (Data Transfer Objects) on both the Angular and Spring Boot sides.
I wonder if people with full-stack experience know of an alternative that avoids maintaining DTOs on both the front end and the back end, perhaps by sharing models between the two ends of the application?
Swagger would be a good tool to use here.
You can take a code-first approach, which generates a Swagger spec from your Java controllers and TOs, or a spec-first approach, which generates your Java controllers and TOs from a Swagger spec.
Either way, you can then use the Swagger spec to generate a set of TypeScript interfaces for the client side.
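As a rough sketch of the code-first direction, assuming a Spring Boot project with springdoc-openapi on the classpath (the controller and DTO names here are invented for the example):

import io.swagger.v3.oas.annotations.Operation;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {

    // springdoc-openapi scans this controller and publishes the spec at /v3/api-docs
    @Operation(summary = "Fetch a user by id")
    @GetMapping("/users/{id}")
    public UserDto getUser(@PathVariable String id) {
        return new UserDto(id, "user@example.com"); // placeholder lookup
    }

    // The transfer object's schema is included in the generated spec
    public record UserDto(String id, String email) {}
}

The resulting spec file is then the single input for a TypeScript generator on the client side.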
As Pace said, Swagger would be a great fit here. On top of giving you great documentation for the API endpoints, it lets you keep object models in sync between front end and back end. You just use Swagger's .json or .yaml file to generate the services and object models on the front-end side using ng-swagger-gen.
Then put the command for generating the services and object models into package.json so it runs when you build or serve your application, for instance:
...
"scripts": {
  ...
  "start": "ng-swagger-gen && ng serve",
  "build": "ng-swagger-gen && ng build --prod"
  ...
},
...
After running one of these commands you will have up-to-date object models, and if an object property's name or type has changed, or a property has been added or removed, you will get an error that you have to fix before moving forward.
Note: keep in mind that the services and object models are generated from the Swagger file, so that file must always be kept up to date.
PS: I once worked on a project where even the backend code was generated from the Swagger file ;) so they just changed the Swagger file and that was it.
This is a difficult topic, since we are dealing with two different technology stacks. The only way I see is to generate those objects from a common data model.
Sounds good, doesn't work. It is not just about maintaining DTOs.
Let's say the API changes a String to a List. It is not enough to update your TypeScript DTO from string to string[]: there is logic behind the string manipulation that now needs to handle a list of strings. Personally, I do not find it troublesome to maintain DTOs on both sides. It is a small trade-off for flexibility and cleaner code (you will have different methods on different DTOs).
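To make the ripple effect concrete, here is the shape of the problem sketched in Java with made-up names; a generator can swap the field's type, but it cannot rewrite the logic around it:

import java.util.List;
import java.util.stream.Collectors;

class TitleRenderer {

    // Before the API change: the DTO field was a single String
    String render(String title) {
        return title.toUpperCase();
    }

    // After the change to List<String>: regenerating the DTO updates the
    // type, but this surrounding logic still has to be rewritten by hand
    String render(List<String> titles) {
        return titles.stream()
                .map(String::toUpperCase)
                .collect(Collectors.joining(", "));
    }
}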
Consider the following example.
There is a server application that has three layers: the service layer that is exposed to the clients, the database layer that is used for persistence, and the business layer that calculates something.
The application serves three different types of data. Users, Payments and Worktimes.
How should I package my application?
Package by layer:
foo.bar.service
    UserService.class
    PaymentService.class
    WorktimeService.class
foo.bar.persistence
    UserPersistenceService.class
    PaymentPersistenceService.class
    WorktimePersistenceService.class
foo.bar.business
    PaymentCalculator.class
Or package by type:
foo.bar.user
    UserService.class
    UserPersistenceService.class
foo.bar.payment
    PaymentService.class
    PaymentPersistenceService.class
    PaymentCalculator.class
foo.bar.worktime
    WorktimeService.class
    WorktimePersistenceService.class
I guess the first layout could become confusing as the application grows and more and more logic is implemented. However, it appears to make it easier to find the right place when the task is to "extend the persistence service layer to do something fancy".
The second layout can be extended more easily without flooding packages with millions of classes.
Is there any best practice for choosing between them? Or do you have another idea for a package structure?
As far as I am concerned, I would package by layer AND type:
foo.bar.service.user.UserService
foo.bar.persistence.user.UserPersistence
And so on
For most of my projects I use Maven and a multi-module build:
Domain Model (Model Objects)
Persistence
Service
Applications
This way you can produce separate jars (or just one, if you have a simple project) and it is easier to find the right place to extend or modify the existing sources.
What is more important for your application?
Do you deploy the layers/types to different machines (or might want to do so in the future)?
Do different people work on different layers/types?
If so, use layers/types to separate packages.
If your application is a little larger, you probably want to use both, resulting in packages like
foo.bar.<layer>.<type>
or
foo.bar.<type>.<layer>
Usually there is a model package in which you put the types (Users, Payments and Worktimes in your example).
Next to the model package there are layer packages like presentation and service (I normally nest data access layers within the service package to encourage these layers to only be used from service implementations, but that's up to you).
If the project is a bit bigger I split out the service layer into a separate module. If it is bigger still (or the domain model is large and complex) I also move the model package into a separate module, resulting in a three-module project: model, service and presentation.
If there are also multiple presentation channels (e.g. web application, REST service and desktop client app), these can also be split out into separate modules.
I have a JAX-RS service (I use Jersey), and now I have to write the client. I was wondering how you usually deal with the model objects.
Do you put your model classes in a separate jar in order to share them between the client and the server? Do you always use DTOs, or do you sometimes (always?) return JPA entities?
The service I have to use (I haven't created it, but I can modify it) often returns entities, so I was wondering whether it would be a bit weird to externalize those classes.
What do you think? What do you usually do?
It depends on the complexity of the project and on the purpose you use JAX-RS in it:
for very simple projects I wouldn't create one more DTO layer whatsoever
for a project like yours, which seems to use JAX-RS just to move data from a Java client to a Java server, I wouldn't create one more layer either. That's because you are in charge of both ends (client and server) and because you reuse the same objects in both places (putting them in a separate jar and Maven module is a good idea)
for projects that use JAX-RS to expose an API to external clients, it's a good idea to separate the model from the API with DTOs so they can evolve independently. For example, you don't always want to expose all the fields via the API, or to break your clients when changing something in the model.
LATER EDIT
for a project that transfers only a subset of its model's data fields to the client, a DTO layer is useful for efficiency reasons
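A minimal sketch of that separation, with entity and field names invented for the example; the DTO exposes only the subset of fields the API promises, so the model can change without breaking clients:

// JPA entity: the full model, including fields that must never leave the server
class User {
    Long id;
    String name;
    String passwordHash;
}

// DTO: only what the API contract promises
class UserDto {
    Long id;
    String name;

    static UserDto from(User user) {
        UserDto dto = new UserDto();
        dto.id = user.id;
        dto.name = user.name;
        return dto;
    }
}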
Let me explain the complete situation I am currently stuck in.
We are developing a very complex application in GWT and Hibernate, and we are trying to host the client and server code on different servers because of the customer's requirements. I am able to achieve this using JNDI.
Here comes the tricky part: the customer needs the application on other platforms too, with the same database and the same methods, say an iPhone or .NET version of our application. We don't want to write the server code again because it will be the same for all of them.
I have tried putting a web-services wrapper on top of my server code, but because of the complexity of the architecture and the class dependencies I am not able to do so. For example, consider the code below.
import java.util.List;

class Document {
    private List<User> users;
    private List<AccessLevel> accessLevels;
    // ... and many more lists of other classes
}
The Document class has a list of users, a list of access levels, and many more lists of other classes, and those classes contain further lists in turn. Some important server methods take a class (Document or another) as input and return some other class as output. And we shouldn't expose such a complex structure through web services.
So I need to stick with JNDI. But I don't know how to make a JNDI call from another application.
Please suggest ways to overcome this situation. I am open to technology changes, i.e. JNDI, web services, or any other technology that serves me well.
Thank you.
I have never seen JNDI used as a mechanism for request/response inter-process communication. I don't believe that this will be a productive line of attack.
You believe that web services are inappropriate when the payloads are complex. I disagree: I have seen many successful projects using quite large payloads with many nested classes. A trivial example: Customers with Orders with Order Lines with Products with ... and so on.
It is clearly desirable to keep payload sizes small, since there are serialization and network costs and big objects are more expensive. But it is by far preferable to have one big request than lots of little ones; a "busy" interface will not perform well across a network.
I suspect that the one problem you may have is that certain server-side classes are not pure data: they refer to classes that only make sense on the server, and you don't want those classes in your client.
In this case you need to build an "adapter" layer. This is dull work, but no matter which inter-process communication technique you use, you will need it. You need what I refer to as Data Transfer Objects (DTOs): payloads that are understood by the client, use only classes reasonable for the client, and can be consumed and created by the server.
Let's suppose that you use technology XXX (JNDI, web services, direct socket calls, JMS):
Client --- sends Document DTO ---XXX---> Adapter transforms DTO into server's Document
and similarly in reverse. My claim is that no matter which XXX you choose, you have the same problem: the client needs to work with "cut-down" objects that reveal none of the server's implementation details.
The adapter has responsibility for creating and understanding DTOs.
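A bare-bones sketch of such an adapter, with all names hypothetical:

import java.util.List;
import java.util.stream.Collectors;

// Client-facing payload: plain data, no server-only types
class DocumentDto {
    List<String> userNames;
    List<String> accessLevels;
}

// Server-side classes (stand-ins for the real ones)
class User {
    String name;
    User(String name) { this.name = name; }
}

class Document {
    List<User> users;
    List<String> accessLevels;
}

// The adapter is the only place that knows both representations
class DocumentAdapter {

    Document fromDto(DocumentDto dto) {
        Document doc = new Document();
        doc.users = dto.userNames.stream().map(User::new).collect(Collectors.toList());
        doc.accessLevels = dto.accessLevels;
        return doc;
    }

    DocumentDto toDto(Document doc) {
        DocumentDto dto = new DocumentDto();
        dto.userNames = doc.users.stream().map(u -> u.name).collect(Collectors.toList());
        dto.accessLevels = doc.accessLevels;
        return dto;
    }
}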
I find that working with RESTful web services using JAX-RS is very easy: once you have a set of DTOs, it is the work of minutes to create the web services.