Currently we're using JAX-WS to create and consume SOAP web services, generating the Java classes through the wsimport goal.
We then create a Service class to call this web service. The code in the Service class looks like this:
public WebServiceRS webServiceCall() {
    Security security = SecurityFactory.getTokenSecurity();
    MessageHeader header = MessageHeaderFactory.getMessageHeader(SERVICE_ACTION);

    LOGGER.info("Calling CreatePassengerNameRecordRQ webService ...");

    Holder<Security> securityHolder = new Holder<>(security);
    Holder<MessageHeader> headerHolder = new Holder<>(header);

    addHandlers(port);

    WebServiceRS webServiceRS = port.webServiceRQ(
        headerHolder,
        securityHolder,
        getRequestBody()
    );
    // ...
    return webServiceRS;
}
What we noticed, and what could be problematic, is that this Service class depends on the classes generated from the .wsdl, such as Security, MessageHeader, WebServiceRS and so on. So when we update the .wsdl file to a newer version, the code will eventually break because of this dependency.
What we're trying to achieve, then, is a way to invert this dependency so that we can update the web service (i.e. the .wsdl-generated classes) without changing our Service class. We're having trouble because we haven't figured out a pattern that solves this issue. The main difficulty seems to arise from the fact that we can't change these generated classes, for example to make them implement an interface.
We're interested in patterns or best practices that can loosen this coupling.
Solution 1. Use Adapter pattern
In your case, the contract of the service will change and you need to keep two different objects in sync. An Adapter provides a different interface to its subject.
Create an interface for your client model
public interface ClientModel {
String getValue();
}
Create Adapter which will convert DTO to ClientModel
public class ClientModelAdapter implements ClientModel {

    private final DTO dto;

    public ClientModelAdapter(DTO dto) {
        this.dto = dto;
    }

    @Override
    public String getValue() {
        return dto.getValue();
    }
}

public class DTO {

    private String value;

    public String getValue() {
        return value;
    }
}
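A short usage sketch of the adapter in the Service class (the callWebService method standing in for the generated-port call is illustrative, not from the original code):
public ClientModel webServiceCall() {
    DTO dto = callWebService(); // returns the .wsdl-generated type
    // callers only ever see the ClientModel interface
    return new ClientModelAdapter(dto);
}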
See also: What is the exact difference between the Adapter and Proxy patterns?
Solution 2. Use Mapper Pattern
You can simply create a mapper between the DTO and the client model (note that for this to work, ClientModel here is a plain class with a settable value property rather than the interface from Solution 1):
public class ClientModelMapper {

    public ClientModel map(DTO dto) {
        ClientModel clientModel = new ClientModel();
        clientModel.setValue(dto.getValue());
        return clientModel;
    }
}
The mapper is the component that transfers the data, making sure that the DTO and the client model don't need to know about each other.
In this case, you can use the MapStruct framework to avoid writing the mapping by hand.
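A minimal MapStruct sketch (assuming the same ClientModel class with a value property; MapStruct generates the implementation at compile time):
import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

@Mapper
public interface ClientModelMapper {

    ClientModelMapper INSTANCE = Mappers.getMapper(ClientModelMapper.class);

    // Fields with matching names (value -> value) are mapped automatically
    ClientModel map(DTO dto);
}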
Solution 3. Make the service API backward compatible
Probably you can avoid breaking changes in the SOAP service and make the API backward compatible. See: Backwards compatibility and Web Services.
You could put it behind a proxy class. That way, the only thing you would need to adapt when the .wsdl file changes is the proxy class.
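A minimal sketch of that idea, reusing the factories from the question (the ReservationService and ReservationResult names and the WebServicePort type are illustrative): only the proxy imports the generated classes, and the rest of the application codes against the stable interface.
// Stable, hand-written interface; no generated types appear here
public interface ReservationService {
    ReservationResult createReservation();
}

// The only class that imports the .wsdl-generated classes
public class ReservationServiceProxy implements ReservationService {

    private final WebServicePort port; // generated JAX-WS port

    public ReservationServiceProxy(WebServicePort port) {
        this.port = port;
    }

    @Override
    public ReservationResult createReservation() {
        Holder<Security> securityHolder = new Holder<>(SecurityFactory.getTokenSecurity());
        Holder<MessageHeader> headerHolder = new Holder<>(MessageHeaderFactory.getMessageHeader(SERVICE_ACTION));
        WebServiceRS rs = port.webServiceRQ(headerHolder, securityHolder, getRequestBody());
        // Map the generated response onto an application-owned type
        return new ReservationResult(/* copy the fields you need out of rs */);
    }
}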
Related
Maybe I'm trying to do something strange. My vision:
I need a standard architecture with controllers, services and other layers.
I need clients for my controllers that I can distribute separately.
So I wrote something like this:
module-api
module-impl
Module-api contains only OpenFeign clients, like:
@FeignClient(path = TopLevelClient.ROOT_PATH)
public interface TopLevelClient extends CommonInterfaceForClient<EntityDto> {

    String ROOT_PATH = "/entity";

    @GetMapping
    ResponseEntity<List<EntityDto>> getAll();
}
Module-impl contains the controllers and other logic, like:
@RestController
@RequestMapping(TopLevelClient.ROOT_PATH)
public class TopLevelController implements TopLevelClient {

    @Override
    @GetMapping
    public ResponseEntity<List<EntityDto>> getAll() {...}
}
And of course I have some strange common parametrized interface like this:
CommonInterfaceForClient
Okay, that's all right. But it only works for a one-level REST path, I mean:
/entity/{id}
But I also need to request a sub-level:
/entity/{id}/sub-entity/{subId}
And I can't imagine how to do this. I can't implement two interfaces at once because the first interface is parametrized with another DTO. I can't use a method to calculate the mapping path because the annotation value must be a constant.
I mean I want to do something like this:
@FeignClient(path = SubLevelClient.ROOT_PATH)
public interface SubLevelClient extends ... {

    String ROOT_PATH = "/entity/{entityId}/sub-entity";

    @GetMapping
    ResponseEntity<List<SubEntityDto>> getAll(@PathVariable("entityId") long entityId);
}
Maybe you can tell me something useful, or maybe something from best practice?
I just want to have a few common interfaces, services and controllers/clients for the common actions, but I want to write them with my own hands. Or maybe it's a stupid idea and you can point me to a turnkey solution.
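For what it's worth, one sketch that stays within the constant-value restriction is to put the whole template into the method-level mapping and let @PathVariable fill it per call (assuming Spring Cloud OpenFeign's default Spring MVC contract; the names are the ones from the question, and how this fits the common-interface scheme is a separate matter):
@FeignClient(name = "sub-entity-client", path = SubLevelClient.ROOT_PATH)
public interface SubLevelClient {

    String ROOT_PATH = "/entity";

    // {entityId} is resolved per call from the method argument
    @GetMapping("/{entityId}/sub-entity")
    ResponseEntity<List<SubEntityDto>> getAll(@PathVariable("entityId") long entityId);

    @GetMapping("/{entityId}/sub-entity/{subId}")
    ResponseEntity<SubEntityDto> getOne(@PathVariable("entityId") long entityId,
                                        @PathVariable("subId") long subId);
}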
I have a simple Spring Boot application with one REST endpoint that returns a "Job" object, which, among other things, contains a list of polymorphics.
We follow a code-first approach and try to create the API models to fit our needs. But the generated API doc does not represent our model in its full complexity, as it does not resolve the list of polymorphics.
The Job object looks like this:
@Data // Lombok getters and setters
public final class Job {
    private String foo;
    private String bar;
    private List<Condition> conditionList;
}
Condition is the parent object for a set of different conditions:
public abstract class Condition {
}
Two example implementations of a Condition would be
@Data
public final class Internal extends Condition {
    private String nodeId;
}
and
@Data
public final class Timed extends Condition {
    private ZonedDateTime timestamp;
}
The REST controller is stupidly simple:
@RestController
@RequestMapping("/hello")
public class MyController {

    @GetMapping
    public ResponseEntity<Job> getJob() {
        return new ResponseEntity<>(new Job(), HttpStatus.OK);
    }
}
Now, when I open the Swagger UI and look at the generated definition, the element conditionList is an empty object {}.
I tried to use @JsonSubTypes and @ApiModel on the classes, but there was no difference in the output. I might not have used them correctly, or maybe Swagger is just not able to fulfill the job, or maybe I'm just blind or stupid.
How can I get Swagger to include the subtypes into the generated API doc?
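For reference, the way those annotations are usually combined looks roughly like this (a sketch of the attempt; Jackson's @JsonTypeInfo/@JsonSubTypes drive the serialization, and Springfox's @ApiModel declares the subtypes, though Springfox 2.x may still not render them):
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import io.swagger.annotations.ApiModel;

// A "type" property tells Jackson which concrete Condition to (de)serialize
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "type")
@JsonSubTypes({
    @JsonSubTypes.Type(value = Internal.class, name = "internal"),
    @JsonSubTypes.Type(value = Timed.class, name = "timed")
})
@ApiModel(discriminator = "type", subTypes = {Internal.class, Timed.class})
public abstract class Condition {
}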
We "fixed" the problem by changing the structure. So it's more of a workaround.
Instead of using a List of polymorphics, we now use a "container" class, which contains each type as it's own type.
The Condition object became a "container" or "manager" class, instead of a List.
In the Job class, the field is now defined as:
private Condition condition;
The Condition class itself is now
public final class Condition {
    private List<Internal> internalConditions;
    // etc...
}
And, as an example, Internal lost its parent type and is now just
public final class Internal {
    // Logic...
}
The Swagger generated JSON now looks like this (excerpt):
"Job": {
"Condition": {
"Internal": {
}
"External": {
}
//etc...
}
}
Usefully displaying polymorphic responses in Swagger UI with Springfox 2.9.2 seems hard (impossible?), so your workaround feels reasonable.
OpenAPI 3.0 appears to improve support for polymorphism. To achieve your original goal, I would either:
Wait for Springfox to get OpenAPI 3.0 support (issue 2022 in the Springfox GitHub). Unfortunately, the issue has been open since Sept 2017 and there is no indication of OpenAPI 3.0 support being added soon (as of Aug 2019).
Change to Spring REST Docs, perhaps adding the restdocs-api-spec extension to generate OpenAPI 3.0.
We have run into similar problems with polymorphism but have not yet attempted to implement a solution based on Spring REST Docs + restdocs-api-spec.
I'm trying to develop a simple application using the OSGi framework. My question involves a "utility bundle" available in the framework; let me explain with a pretty verbose example. At the moment I'm trying to build an event my bundle will send.
From what I understood, what I need is to do something like the following (Event Admin, Felix):
public void reportGenerated(Report report, BundleContext context)
{
    ServiceReference ref = context.getServiceReference(EventAdmin.class.getName());
    if (ref != null)
    {
        EventAdmin eventAdmin = (EventAdmin) context.getService(ref);

        Dictionary<String, Object> properties = new Hashtable<>();
        properties.put("title", report.getTitle());
        properties.put("path", report.getAbsolutePath());
        properties.put("time", System.currentTimeMillis());

        Event reportGeneratedEvent = new Event("com/acme/reportgenerator/GENERATED", properties);
        eventAdmin.sendEvent(reportGeneratedEvent);
    }
}
Now, since an OSGi application may have lots of bundles, I thought of creating a subclass of Event for every bundle (e.g. if I have a bundle named "BundleExample", its exported classes will include a "BundleExampleEvent"). I know this doesn't add any information, since you can tell which event you received by looking at the topic, but please bear with me for the moment.
Now, the Event constructor needs a topic and a Map<String, Object>. However, to "simplify" the event constructor, I would like to pass only the topic and the list of parameters to put inside the map. For example, here's what a BundleExampleEvent class might look like:
public class BundleExampleEvent extends Event {

    private int importantVariable;

    public BundleExampleEvent(String topic, int importantVariable) {
        super(topic, Utils.toMap("importantVariable", importantVariable)); // here toMap is static
        this.importantVariable = importantVariable;
    }

    public int getImportantVariable() {
        return this.importantVariable;
    }
}
Ok, please note Utils.toMap: it's a function that converts a sequence of String, Object pairs into a Map (see the sketch below). Utils is an example of a utility class (stupid and useless, but a utility class nonetheless). In the spirit of OSGi I want to make this utility class a bundle as well: my thought would be to start this Utils bundle at framework boot and then, whenever I need one of its utilities, fetch a reference via the @Reference annotation.
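For concreteness, a minimal sketch of what such a toMap helper might look like (purely illustrative, not from the original code):
import java.util.HashMap;
import java.util.Map;

public final class Utils {

    // Converts ("key1", value1, "key2", value2, ...) into a Map
    public static Map<String, Object> toMap(Object... pairs) {
        Map<String, Object> map = new HashMap<>();
        for (int i = 0; i < pairs.length; i += 2) {
            map.put((String) pairs[i], pairs[i + 1]);
        }
        return map;
    }
}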
Fetching the utility via @Reference works great in any bundle interface implementation, like this:
@Component
public class BundleExampleImpl implements BundleExample {

    @Reference
    private Utils utils;

    @Override
    public String sayHello() {
        // another useless utility function, but hopefully it conveys what I'm trying to do
        return this.utils.fetchHello();
    }
}
But what about other classes (i.e. classes called by BundleExampleImpl during its work)? For example, what about BundleExampleEvent? I need to create it from the sayHello method and I want to use this utility inside that class as well, in order to compute the Map. In the previous example I used a static function, but I would like to use the reference to Utils that OSGi gave me.
Of course I could add a parameter to the constructor of BundleExampleEvent to satisfy the link, but I'd rather not, because it's pretty silly for something to depend on a "utility class". My questions are:
Is this the only method available if I want a "utility bundle"?
Or can I do something weird like adding a reference to Utils also in my BundleExampleEvent, i.e. something like this:
public class BundleExampleEvent extends Event {

    @Reference
    private Utils utils;

    private int importantVariable;

    public BundleExampleEvent(String topic, int importantVariable) {
        super(topic, Utils.toMap("importantVariable", importantVariable)); // here toMap is static
        this.importantVariable = importantVariable;
    }

    public int getImportantVariable() {
        return this.importantVariable;
    }
}
Or maybe the whole idea of having a "utility bundle" is just pure trash?
Thanks for any reply. I hope I conveyed my problem in the clearest way.
I don't think there is any point in Utils being a service. Things should only be a service if they can conceivably have multiple implementations. In your case, the consumer of the Util functionality only ever wants a single implementation... the implementation is the contract.
I don't even think the utils code should be in a bundle. Just make it into a library that is statically linked into the bundles that need it.
In your case Utils would be an OSGi service. You then want to use this service inside an object that is not a service, like BundleExampleEvent.
What you could do is create a service that creates BundleExampleEvent instances and feeds them with an OSGi service; kind of a factory as a service. The problem with this is that services in OSGi are dynamic: if the service needed by the BundleExampleEvent instance goes away, the object has to be discarded. So this only works for short-lived objects.
In the EventAdmin example, a different solution would be to not use a special event class but instead create a service that has a method to send such an event. Then all the magic happens inside this method and the result is an event without further logic. You could also inject EventAdmin into that service using DS.
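A minimal sketch of that second approach, assuming Declarative Services annotations (the ReportEvents name is illustrative; the topic string is the one from the question):
import java.util.HashMap;
import java.util.Map;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

@Component(service = ReportEvents.class)
public class ReportEvents {

    @Reference
    private EventAdmin eventAdmin;

    // All the event-building "magic" lives here; callers just pass plain values
    public void reportGenerated(String title, String path) {
        Map<String, Object> properties = new HashMap<>();
        properties.put("title", title);
        properties.put("path", path);
        properties.put("time", System.currentTimeMillis());
        eventAdmin.sendEvent(new Event("com/acme/reportgenerator/GENERATED", properties));
    }
}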
This works very well in OSGi but has the disadvantage of an anemic domain model (http://www.martinfowler.com/bliki/AnemicDomainModel.html).
I am not sure which variant to prefer.
I am working on a GWT project with JDK 7. It has two entry points (two clients) that are located in separate packages of the project. The clients share some code that is located in a /common package, which is universal and accessible to both by having the following line in their respective XML build files:
<source path='ui/common' />
Both clients have their own specific implementation of the Callback class, which serves their running environments and performs various actions in case of failure or success. I have the following abstract class that implements the AsyncCallback interface and then gets extended by its respective client:
public abstract class AbstractCallback<T> implements AsyncCallback<T> {
public void handleSuccess( T result ) {}
...
}
Here are the clients' classes:
public class Client1Callback<T> extends AbstractCallback<T> {...}
and
public class Client2Callback<T> extends AbstractCallback<T> {...}
In the common package, which also contains these callback classes, I am working on implementing the service layer that serves both clients. The clients use the same back-end services, they just handle the results differently. Based on the type of the client I want to build a corresponding instance of an AbstractCallback child without duplicating the anonymous class creation for each call. I am going to have many declarations that look like the following:
AsyncCallback<MyVO> nextCallback = isClient1 ?
new Client1Callback<MyVO>("ABC") {
public void handleSuccess(MyVO result) {
doThatSameAction(result);
}
}
:
new Client2Callback<MyVO>("DEF") {
public void handleSuccess(MyVO result) {
doThatSameAction(result);
}
};
That will result in very verbose code.
The intent (in pseudo-code) is to have the below instead:
AsyncCallback<MyVO> nextCallback = new CallbackTypeResolver.ACallback<MyVO>(clientType, "ABC"){
public void handleSuccess(MyVO result) {
doThatSameAction(result);
}
};
I was playing with the factory pattern to get the right child instance, but quickly realized that I am not able to override the handleSuccess() method after the instance is created.
I think the solution may come from one of two sources:
A different GWT way of dealing with custom Callback implementations; let's call it an alternative existing solution.
Java generics/types juggling magic
I may be missing something obvious, and would appreciate any advice.
I've read some articles here and on Oracle's site about type erasure for generics, so I understand that my question may have no direct answer.
Refactor out the handleSuccess behavior into its own class.
The handleSuccess behavior is a separate concern from what else is going on in the AsyncCallback classes; therefore, separate it out into a more useful form. See Why should I prefer composition over inheritance?
Essentially, by doing this refactoring, you are transforming an overridden method into injected behavior that you have more control over. Specifically, you would have instead:
public interface SuccessHandler<T> {
public void handleSuccess(T result);
}
Your callback would look something like this:
public abstract class AbstractCallback<T> implements AsyncCallback<T> {

    private final SuccessHandler<T> handler;

    protected AbstractCallback(SuccessHandler<T> handler) {
        this.handler = handler; // the success behavior is injected here
    }

    // etc.

    // not abstract anymore
    public void handleSuccess(T result) {
        handler.handleSuccess(result);
    }
}
Then your pseudocode callback creation statement would be something like:
AsyncCallback<MyVO> nextCallback = new CallbackTypeResolver.ACallback<MyVO>(
clientType,
"ABC",
new SuccessHandler<MyVO>() {
public void handleSuccess(MyVO result) {
doThatSameMethod(result);
}
});
The implementations of SuccessHandler don't have to be anonymous; they can be top-level classes or even inner classes, based on your needs. There's a lot more you can do once you're using this injection-based approach, including creating these handlers with automatically injected dependencies using Gin and Guice Providers. (Gin is a project that integrates Guice, a dependency injection framework, with GWT.)
I am writing a web service and one of the operations in the service is getShortURL(String longURL). In this method I first check whether the longURL exists in the database; if yes, I return it, otherwise I create a shortURL, insert it into the database and return it to the client.
My confusion is how to organize and name my classes. Apart from the web service class, right now I have 3 classes:
URLData: It just has URL attributes with getters and setters.
MongoDB: It connects to the database (right now the connection attributes are hard-coded in it), inserts into the database, and retrieves raw strings from the database.
MongoDBUtil: This class again has an insert(URLData) method, which calls MongoDB.insert() to do the database insert. It also has retrieveURLData, which in turn calls the equivalent MongoDB method to do the actual job.
The web service method calls the URLData setters and then MongoDBUtil.retrieve or insert.
I am thinking that the URLData class should be named URLDataBusinessObject, and along with setters and getters it could have insert, update and delete methods.
MongoDBUtil could be renamed to UrlDAO and have different kinds of retrieve methods.
MongoDB is more of a select-query class; I'm not sure how to design and name it.
Please advise.
URLData is fine. Don't bloat your class name with long irrelevant words. If you want to make it clear that this is a business object, create a package like com.yourcompany.yourproject.bo, for example, and put your URLData class in there.
Yes, UrlDAO is more specific than MongoDBUtil. You can create a com.yourcompany.yourproject.dao package for it.
Looks fine to me. However, if you use some kind of framework (e.g. Spring), you don't have to create your own class to hold the database connection configuration.
I suggest you google for some tutorials on the topic; you will learn both how to use the technology and how to name/organize your classes.
This question might be suited more for http://programmers.stackexchange.com.
Nevertheless: yes, I would change the naming.
1) URLDataBusinessObject: No, never. You're adding 14 characters to a class name without adding any value. URLData was just fine.
2) You should change the naming of your DAO classes to be non-DB-specific, unless you explicitly have an architecture aiming at multiple databases where the DB-specific classes perform DB-specific tasks.
I'm assuming this isn't the case, and thus you should give it a more general name.
Persistence can be just fine, DAO as well; anything that shows the intended usage without going into specifics is eligible.
3) MongoDBUtil is your interface to the persistence layer; it's not a utility class in heart and soul. What's the purpose of this class? If all you do is chain the method call to MongoDB, you might as well drop it and go straight to the latter.
To create a simple layered design, build interfaces for all the persistence-specific operations and interfaces for all the domain objects. Then code against those rather than their concrete implementations. That way it's easy to swap out the Mongo persistence layer for a different one, functionality is organised so that others can easily understand it, and you can also test against interfaces rather than concrete implementations. You'd have something like:
URLData interface
URLDataDTO class (used in the business layer)
Persistence interface
MongoPersistence class (used in the persistence layer)
My current project does something similar and also works with Mongo. The persistence layer interface has methods like "void put(URLData)". When called, the Mongo implementation constructs a new MongoURLData from the URLData passed in, extracts the DBObject, then persists it. Methods like "URLData get(String id);" work the other way around: the Mongo layer queries the database and creates new URLDataDTO objects from Mongo DBObjects. The web service is then responsible for serialising/deserialising the DTO objects that are sent to or received from client applications.
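A minimal sketch of that Persistence interface, using the two method signatures mentioned above:
public interface Persistence {

    // Store a URLData; the Mongo implementation converts it to a DBObject first
    void put(URLData data);

    // Load by id; the Mongo implementation maps the stored DBObject back to a URLDataDTO
    URLData get(String id);
}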
My Mongo domain objects all inherit from something like this:
public abstract class MongoDO<T extends Object> {

    DBObject dbobject = null;

    public MongoDO(T dto) {
        this.dbobject = new BasicDBObject();
    }

    public MongoDO(DBObject obj) {
        this.setDBObject(obj);
    }

    public abstract T toDTO() throws StorageException;

    public DBObject getDBObject() {
        return dbobject;
    }

    public void setDBObject(DBObject obj) {
        this.dbobject = obj;
    }

    public ObjectId getIdObject() {
        return (ObjectId) this.getDBObject().get("_id");
    }

    public void setIdObject(ObjectId id) {
        this.getDBObject().put("_id", id);
    }

    protected String getField(String field) {
        if (dbobject.containsField(field) && dbobject.get(field) != null) {
            return dbobject.get(field).toString();
        } else {
            return null;
        }
    }

    protected void setField(String field, String value) {
        dbobject.put(field, value);
    }
}
An example Mongo implementation would be:
public class MongoURLData extends MongoDO<URLData> implements URLData {

    private static final String FIELD_SHORT_URL = "surl";

    public String getShortUrl() {
        return getField(FIELD_SHORT_URL);
    }

    public void setShortUrl(String shortUrl) {
        setField(FIELD_SHORT_URL, shortUrl);
    }

    public URLData toDTO() {
        URLDataDTO dto = new URLDataDTO();
        dto.setShortURL(getShortUrl());
        return dto;
    }
}
}