Java multi-schema generator using annotations

I have a series of inter-related Java classes which form a superset of possible schemas. I'm looking for some way to annotate/tag individual fields so that I can create a separate JSON schema for each 'namespace'.
A simple example:
class SupersetClass {
    @BelongsToSchema(schema = {"alice"}, description = "foo")
    Integer a;

    @BelongsToSchema(schema = {"alice", "bob"}, description = "bar")
    String b;

    @BelongsToSchema(schema = {"bob"}, description = "")
    Long c;
}
Output would have separate Alice and Bob schemas, where Alice has a and b, and Bob has b and c.
My current idea is to iterate over the schemas I'd like to generate, then use reflection to create a custom derived class and pass that into jackson-mapper, but this seems over the top if there's already a good way to do this.

Disclaimer: I'm the maintainer of the victools/jsonschema-generator library mentioned below.
If you're not dead-set on using the (slightly outdated) FasterXML/jackson-module-jsonSchema, you could use the victools/jsonschema-generator library. The latter supports the newer JSON Schema Draft versions and gives you a lot of flexibility in terms of what ends up in your generated schema. However, it does not (at least as of today) support the same range of Jackson-specific annotations out-of-the-box.
That being said, there are multiple ways to go about what you're asking.
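For any of the following to work at runtime, the annotation itself needs runtime retention. A minimal sketch of BelongsToSchema, inferred from the example in the question (this is an assumption, not an existing library annotation):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// must be retained at runtime so the generator can see it via reflection
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface BelongsToSchema {
    String[] schema();
    String description() default "";
}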
1. Simply ignore properties that belong to a different context
SchemaGeneratorConfigBuilder configBuilder = new SchemaGeneratorConfigBuilder(
        SchemaVersion.DRAFT_2019_09, OptionPreset.PLAIN_JSON);
configBuilder.forFields()
        .withIgnoreCheck(field -> {
            // omit fields that are not tagged for the "alice" schema
            BelongsToSchema annotation = field.getAnnotation(BelongsToSchema.class);
            return annotation != null
                    && !Arrays.asList(annotation.schema()).contains("alice");
        });
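With a config like that, you'd then generate one schema per namespace (a sketch; in practice you'd build one configBuilder per namespace instead of hard-coding "alice"):
SchemaGenerator generator = new SchemaGenerator(configBuilder.build());
JsonNode aliceSchema = generator.generateSchema(SupersetClass.class);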
2. Exclude the schema of properties from a different context while mentioning them
configBuilder.forFields()
        .withTargetTypeOverridesResolver(field -> {
            BelongsToSchema annotation = field.getAnnotation(BelongsToSchema.class);
            if (annotation == null || Arrays.asList(annotation.schema()).contains("alice")) {
                return null;
            }
            // represent out-of-context properties as a plain "object" without details
            return Collections.singletonList(field.getContext().resolve(Object.class));
        });
3. Include an external $ref instead of the actual subschema
configBuilder.forFields()
        .withCustomDefinitionProvider((field, context) -> {
            BelongsToSchema annotation = field.getAnnotation(BelongsToSchema.class);
            if (annotation == null || Arrays.asList(annotation.schema()).contains("alice")) {
                return null;
            }
            ObjectNode customSubschema = context.getGeneratorConfig().createObjectNode()
                    .put(SchemaKeyword.TAG_REF.forVersion(SchemaVersion.DRAFT_2019_09),
                            "https://your-external.ref/" + field.getSimpleTypeDescription());
            return new CustomPropertyDefinition(customSubschema);
        });
There are probably a few more possibilities depending on what it is you want exactly.
I encourage you to play around with it a bit and have a look at the documentation. If you have project-specific questions, feel free to raise those as Issues on GitHub.

Related

Serialize Java Thrift Object to JSON while keeping enum names

My Java project uses the Apache Thrift framework and has a Thrift object structure similar to the following:
struct MyStruct {
    1: required string something;
    2: optional OptionEnum option;
}
enum OptionEnum {
    VALUE_A = 0,
    VALUE_B = 1
}
So when my project compiles, it builds a Java class for this structure (i.e. class MyStruct).
What I am trying to do is serialize this into a JSON string.
I've tried using TSerializer:
TSerializer serializer = new TSerializer(new TSimpleJSONProtocol.Factory());
return serializer.toString(instanceOfMyStruct);
This generates JSON but loses the string name of the enum (it is converted into a numeric value instead):
{
    "something": "value",
    "option": 1
}
Is there a way to keep the enum name (I mean option being VALUE_B instead of 1 in the above example)?
The issue here is that the conversion is baked into the code parts generated by the Thrift compiler. The protocol-level classes only know about a few very basic data types; by the time the data reaches that level, it is already too late.
So, unless you want to fork and implement your own (incompatible) version of the code generator, I'm afraid there is no way.
PS: I might add that the main purpose driving the design is efficiency, not human readability.
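If post-processing the serializer's output is acceptable in your setup, a workaround sketch (outside Thrift itself; this assumes Jackson is on the classpath and relies on the findByValue method that the Thrift compiler generates for Java enums):
ObjectMapper mapper = new ObjectMapper();
ObjectNode node = (ObjectNode) mapper.readTree(json);
OptionEnum option = OptionEnum.findByValue(node.get("option").asInt());
if (option != null) {
    node.put("option", option.name()); // replaces 1 with "VALUE_B"
}
String jsonWithEnumNames = mapper.writeValueAsString(node);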

How do I add a custom directive to a query resolved through a singleton

I have managed to add custom directives to the GraphQL schema but I am struggling to work out how to add a custom directive to a field definition. Any hints on the correct implementation would be very helpful.
I am using GraphQL SPQR 0.9.6 to generate my schema
ORIGINAL ANSWER: (now outdated, see the 2 updates below)
It's currently not possible to do this. GraphQL SPQR v0.9.9 will be the first to support custom directives.
Still, in 0.9.8 there's a possible work-around, depending on what you're trying to achieve. SPQR's own meta-data about a field or a type is kept inside custom directives. Knowing that, you can get a hold of the Java method/field underlying the GraphQL field definition. If what you want is e.g. an instrumentation that does something based on a directive, you could instead obtain any annotations on the underlying element, having the full power of Java at your disposal.
The way to get the method would be something like:
Operation operation = Directives.getMappedOperation(env.getField()).get();
Resolver resolver = operation.getApplicableResolver(env.getArguments().keySet());
Member underlyingElement = resolver.getExecutable().getDelegate();
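Since the underlying element is a Method or Field, both of which implement AnnotatedElement, you can then read any annotation off it (Auth here being a hypothetical custom annotation):
Auth auth = ((AnnotatedElement) underlyingElement).getAnnotation(Auth.class);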
UPDATE:
I posted a huge answer on this GitHub issue. Pasting it here as well.
You can register an additional directive as such:
generator.withSchemaProcessors(
(schemaBuilder, buildContext) -> schemaBuilder.additionalDirective(...));
But (according to my current understanding), this only makes sense for query directives (something the client sends as part of the query, like @skip or @defer).
Directives like @dateFormat simply make no sense in SPQR: they're there to help you when parsing SDL and mapping it to your code. In SPQR, there's no SDL and you start from your code.
E.g. @dateFormat is used to tell you that you need to apply date formatting to a specific field when mapping it to Java. In SPQR you start from the Java part, and the GraphQL field is generated from a Java method, so the method must already know what format it should return, or it already has an appropriate annotation. In SPQR, Java is the source of truth. You use annotations to provide extra mapping info. Directives are basically annotations in SDL.
Still, field or type level directives (or annotations) are very useful in instrumentations. E.g. if you want to intercept field resolution and inspect the authentication directives.
In that case, I'd suggest you simply use annotations for the same purpose.
public class BookService {
    @Auth(roles = {"Admin"}) // example custom annotation
    public Book addBook(Book book) { /* insert a Book into the DB */ }
}
As each GraphQLFieldDefinition is backed by a Java method (or field), you can get the underlying objects in your interceptor or wherever:
GraphQLFieldDefinition field = ...;
Operation operation = Directives.getMappedOperation(field).get();
// Multiple methods can be hooked up to a single GraphQL operation.
// This gets the @Auth annotations from all of them.
Set<Auth> allAuthAnnotations = operation.getResolvers().stream()
        .map(res -> res.getExecutable().getDelegate()) // get the underlying method
        .filter(method -> method.isAnnotationPresent(Auth.class))
        .map(method -> method.getAnnotation(Auth.class))
        .collect(Collectors.toSet());
Or, to inspect only the method that can handle the current request:
DataFetchingEnvironment env = ...; //get it from the instrumentation params
Auth auth = operation.getApplicableResolver(env.getArguments().keySet()).getExecutable().getDelegate().getAnnotation(Auth.class);
Then you can inspect your annotations as you wish, e.g.
Set<String> allNeededRoles = allAuthAnnotations.stream()
        .flatMap(auth -> Arrays.stream(auth.roles()))
        .collect(Collectors.toSet());
if (!currentUser.getRoles().containsAll(allNeededRoles)) {
    throw new AccessDeniedException(); // or whatever is appropriate
}
Of course, there's no real need to actually implement authentication this way, as you're probably using a framework like Spring or Guice (maybe even Jersey has the needed security features) that already has a way to intercept all methods and implement security, so you can just use that instead. Much simpler and safer. E.g. for Spring Security, just keep using it as normal:
public class BookService {
    @PreAuthorize(...) // standard Spring Security
    public Book addBook(Book book) { /* insert a Book into the DB */ }
}
Make sure you also read my answer on implementing security in GraphQL if that's what you're after.
You can use instrumentations to dynamically filter the results in the same way: add an annotation on a method, access it from the instrumentation, and process the result dynamically:
public class BookService {
    @Filter("title ~ 'Monkey'") // example custom annotation
    public List<Book> findBooks(...) { /* get books from the DB */ }
}
new SimpleInstrumentation() {
    // You can also use beginFieldFetch and then onCompleted instead of instrumentDataFetcher
    @Override
    public DataFetcher<?> instrumentDataFetcher(DataFetcher<?> dataFetcher, InstrumentationFieldFetchParameters parameters) {
        GraphQLFieldDefinition field = parameters.getEnvironment().getFieldDefinition();
        Optional<String> filterExpression = Directives.getMappedOperation(field)
                .map(operation -> operation
                        .getApplicableResolver(parameters.getEnvironment().getArguments().keySet())
                        .getExecutable().getDelegate()
                        .getAnnotation(Filter.class).value()); // get the filtering expression from the annotation
        return filterExpression.isPresent()
                ? env -> filterResultBasedOnExpression(dataFetcher.get(parameters.getEnvironment()), filterExpression)
                : dataFetcher;
    }
}
For directives on types, again, just use Java annotations. You have access to the underlying types via:
Directives.getMappedType(graphQLType).getAnnotation(...);
This, again, probably only makes sense in instrumentations. I say that because directives normally provide extra info for mapping SDL to a GraphQL type. In SPQR you map a Java type to a GraphQL type, so a directive makes no sense in that context in most cases.
Of course, if you still need actual GraphQL directives on a type, you can always provide a custom TypeMapper that puts them there.
For directives on a field, it is currently not possible in 0.9.8.
0.9.9 will have full custom directive support on any element, in case you still need them.
UPDATE 2: GraphQL SPQR 0.9.9 is out.
Custom directives are now supported. See issue #200 for details.
Any custom annotation meta-annotated with @GraphQLDirective will be mapped as a directive on the annotated element.
E.g. imagine a custom annotation @Auth(requiredRole = "Admin") used to denote access restrictions:
@GraphQLDirective // should be mapped as a GraphQLDirective
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD}) // applicable to methods
public @interface Auth {
    String requiredRole();
}
If a resolver method is then annotated with @Auth:
@GraphQLMutation
@Auth(requiredRole = "Admin")
public Book addBook(Book newBook) { ... }
The resulting GraphQL field will look like:
type Mutation {
    addBook(newBook: BookInput): Book @auth(requiredRole : "Admin")
}
That is to say, the @Auth annotation got mapped to a directive, due to the presence of the @GraphQLDirective meta-annotation.
Client directives can be added via: GraphQLSchemaGenerator#withAdditionalDirectives(java.lang.reflect.Type...).
SPQR 0.9.9 also comes with ResolverInterceptors which can intercept the resolver method invocation and inspect the annotations/directives. They are much more convenient to use than Instrumentations, but are not as general (have a much more limited scope). See issue #180 for details, and the related tests for usage examples.
E.g. to make use of the @Auth annotation from above (note that @Auth does not need to be a directive for this to work):
public class AuthInterceptor implements ResolverInterceptor {

    @Override
    public Object aroundInvoke(InvocationContext context, Continuation continuation) throws Exception {
        Auth auth = context.getResolver().getExecutable().getDelegate().getAnnotation(Auth.class);
        User currentUser = context.getResolutionEnvironment().dataFetchingEnvironment.getContext();
        if (auth != null && !currentUser.getRoles().contains(auth.requiredRole())) {
            throw new IllegalAccessException("Access denied"); // or return null
        }
        return continuation.proceed(context);
    }
}
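Registering the interceptor would then look something like this (a sketch, assuming the withResolverInterceptors registration method on GraphQLSchemaGenerator):
GraphQLSchema schema = new GraphQLSchemaGenerator()
        .withOperationsFromSingleton(new BookService())
        .withResolverInterceptors(new AuthInterceptor())
        .generate();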
If @Auth is a directive, you can also get it via the regular API, e.g.
List<GraphQLDirective> directives = dataFetchingEnvironment.getFieldDefinition().getDirectives();
DirectivesUtil.directivesByName(directives);

XML base data validation rule in java

Hi, I am completely new to XML in Java. In my recent project I need to create validation rules in XML, but the problem is that different user groups may have different rules.
For example
<root>
    <user-group type="sale">
        <parameter name="loginName">
            <max-length>10</max-length>
            <min-length>4</min-length>
        </parameter>
        <parameter name="password">
            <max-length>10</max-length>
            <min-length>4</min-length>
        </parameter>
    </user-group>
    <user-group type="clerk">
        <parameter name="loginName">
            <max-length>16</max-length>
            <min-length>4</min-length>
        </parameter>
        <parameter name="password">
            <max-length>12</max-length>
            <min-length>8</min-length>
        </parameter>
    </user-group>
</root>
So how do I write the Java code to implement the above rules?
Thanks in advance.
Read the XML using one of the known XML parsers; refer to:
XML Parsing for Java
As you read through the XML, you can create a data structure to store the rules. This is explained below.
Loop through each of the "user-group" XML nodes in your Java program and create a map implementation. You can use a HashMap whose key is the group type (e.g. "clerk") and whose value is a POJO bean defining a "rule".
For example here is your "Rules" class -
public class Rules {

    private String ruleName;
    private int maxLength;
    private int minLength;

    public String getRuleName() {
        return ruleName;
    }

    public void setRuleName(String ruleName) {
        this.ruleName = ruleName;
    }

    public int getMinLength() {
        return minLength;
    }

    public void setMinLength(int minLength) {
        this.minLength = minLength;
    }

    public void setMaxLength(int maxLength) {
        this.maxLength = maxLength;
    }

    public int getMaxLength() {
        return maxLength;
    }
}
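To illustrate the parsing step, here is a minimal DOM-based sketch (the element and attribute names assume the corrected XML above; error handling is omitted):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class RuleLoader {

    public static Map<String, List<Rules>> load(String path) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(path);
        Map<String, List<Rules>> rulesByGroup = new HashMap<>();
        NodeList groups = doc.getElementsByTagName("user-group");
        for (int i = 0; i < groups.getLength(); i++) {
            Element group = (Element) groups.item(i);
            List<Rules> rules = new ArrayList<>();
            NodeList params = group.getElementsByTagName("parameter");
            for (int j = 0; j < params.getLength(); j++) {
                Element param = (Element) params.item(j);
                Rules rule = new Rules();
                rule.setRuleName(param.getAttribute("name"));
                rule.setMaxLength(Integer.parseInt(param
                        .getElementsByTagName("max-length").item(0).getTextContent()));
                rule.setMinLength(Integer.parseInt(param
                        .getElementsByTagName("min-length").item(0).getTextContent()));
                rules.add(rule);
            }
            rulesByGroup.put(group.getAttribute("type"), rules);
        }
        return rulesByGroup;
    }
}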
Now you can use this HashMap anywhere in your program to implement the rules. It seems like you would need to enforce the rules in the UI; in that case, I would recommend using an established framework like Struts, Spring, or an equivalent.
Hope this gives you a headstart ;)
The simple answer: use XML schemas with defined namespaces. This way, each user-group type can define the structure of its node. Setting the type as an attribute is not really the most effective way to do this. I can elaborate later tonight on how to use XSD with namespaces so that you could create a document with "different" user-group nodes, specified in different namespaces, that each entity could validate and use without any problems. I don't have time to show an example, but I found this: Creating an XML document using namespaces in Java
The most simplistic explanation I can come up with is the definition of "table". For a furniture store, a "table" entity has maybe a round or square surface with most likely 4 legs, etc. But a "table" could mean something completely different for some other group. Using your XML as an example, it would be something like this:
<root>
    <sale:user-group xmlns:sale="SOME_URL">
        <some structure and rules>
    </sale:user-group>
    <clerk:user-group xmlns:clerk="SOME_OTHER_URL">
        <different structure and rules>
    </clerk:user-group>
</root>
The link I provided should answer your question. If not, I will come back tonight and show you a simple XSD that might fit your case.
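In the meantime, the Java side of validating against several namespace-aware XSDs could look roughly like this (a sketch; the XSD and XML file names are placeholders):
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class GroupRulesValidator {

    public static void main(String[] args) throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        // one XSD per namespace; each group owns its own structure definition
        Schema schema = factory.newSchema(new Source[] {
                new StreamSource(new File("sale-user-group.xsd")),
                new StreamSource(new File("clerk-user-group.xsd"))
        });
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new File("rules.xml"))); // throws on violation
    }
}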

Enterprise Architect - MDA converts Map to List

I am using EA for the creation of a PIM. When I generate Java code directly, I get the correct data type where I want a Map: I set qualifiers on the association properties, which as I understand it means the property is going to be a map. That works as expected. However, when I do the MDA transformation and generate code, the properties are converted to List (which is bad), but the setters and getters keep using Map, as in the following example:
public class Check {

    private List<Comp> comps;
    private List<Gratuity> gratuities;

    public Check() {
    }

    public Map<String, Comp> getcomps() {
        return comps;
    }

    public Map<String, Gratuity> getgratuities() {
        return gratuities;
    }
}
I am using the default transformation package for Java. I tried to add the following to the Java transformation for the connector's Source section:
%connectorType%
%PI="\n "%
{
%TRANSFORM_CURRENT()%
%TRANSFORM_REFERENCE("Connector",connectorGUID)%
Source
{
%TRANSFORM_REFERENCE("Class",connectorSourceElemGUID)%
access=%qt%%connectorSourceAccess == "Public" ? "Private" : value%%qt%
qualifier=%connectorSourceQualifier%
%TRANSFORM_CURRENT("Source","access")%
}
Target
{
%TRANSFORM_REFERENCE("Class",connectorDestElemGUID)%
access=%qt%%connectorDestAccess == "Public" ? "Private" : value%%qt%
%TRANSFORM_CURRENT("Target","access")%
%PI="\n"%
}
}
but that doesn't seem to help
This is an incomplete answer, but it's too long to go in a comment.
I'm not convinced that the connector source qualifier determines which collection class (Map, List) is used. There are three things involved here: the MDA transform template, the code generation template and the collection class options.
Check Tools -- Options -- Source Code Engineering -- Java. There you'll find settings for Default Collection Class and Additional Collection Classes (these are used for attributes), and (by clicking the Collection Classes button) collection class settings for associations. Check these.
Also, check the Linked Attribute Declaration template for Java code generation. It seems to me that this does not check the qualifier, but it does check %linkAttCollectionClass%.
I got a reply from Enterprise Architect support which says it is a bug. Original message:
I am sorry it does not work because there's an issue with regard to transformation of the Connector 'qualifier'.
The transformation template '%TRANSFORM_CURRENT()%' (and your new added 'qualifier="tr: String') is all correct, but the issue makes it fail to transform that qualifier value across.
We are going to resolve this issue in a future release of EA. Unfortunately I cannot provide a timeframe for release.
Issue ID: 13106266

Good practice to validate immutable values objects

Suppose a MailConfiguration class specifying settings for sending mails :
public class MailConfiguration {

    private AddressesPart addressesPart;
    private String subject;
    private FilesAttachments filesAttachments;
    private String bodyPart;

    public MailConfiguration(AddressesPart addressesPart, String subject,
            FilesAttachments filesAttachments, String bodyPart) {
        Validate.notNull(addressesPart, "addressesPart must not be null");
        Validate.notNull(subject, "subject must not be null");
        Validate.notNull(filesAttachments, "filesAttachments must not be null");
        Validate.notNull(bodyPart, "bodyPart must not be null");
        this.addressesPart = addressesPart;
        this.subject = subject;
        this.filesAttachments = filesAttachments;
        this.bodyPart = bodyPart;
    }

    // ... some useful getters ......
}
So, I'm using two value objects: AddressesPart and FilesAttachments.
These two value objects have similar structures, so I'm only going to show AddressesPart here:
public class AddressesPart {

    private final String senderAddress;
    private final Set recipientToMailAddresses;
    private final Set recipientCCMailAdresses;

    public AddressesPart(String senderAddress, Set recipientToMailAddresses, Set recipientCCMailAdresses) {
        validate(senderAddress, recipientToMailAddresses, recipientCCMailAdresses);
        this.senderAddress = senderAddress;
        this.recipientToMailAddresses = recipientToMailAddresses;
        this.recipientCCMailAdresses = recipientCCMailAdresses;
    }

    private void validate(String senderAddress, Set recipientToMailAddresses, Set recipientCCMailAdresses) {
        AddressValidator addressValidator = new AddressValidator();
        addressValidator.validate(senderAddress);
        addressValidator.validate(recipientToMailAddresses);
        addressValidator.validate(recipientCCMailAdresses);
    }

    public String getSenderAddress() {
        return senderAddress;
    }

    public Set getRecipientToMailAddresses() {
        return recipientToMailAddresses;
    }

    public Set getRecipientCCMailAdresses() {
        return recipientCCMailAdresses;
    }
}
And the associated validator : AddressValidator
public class AddressValidator {

    private static final String EMAIL_PATTERN
            = "^[_A-Za-z0-9-]+(\\.[_A-Za-z0-9-]+)*@[A-Za-z0-9]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$";

    public void validate(String address) {
        validate(Collections.singleton(address));
    }

    public void validate(Set addresses) {
        Validate.notNull(addresses, "List of mail addresses must not be null");
        for (Iterator it = addresses.iterator(); it.hasNext(); ) {
            String address = (String) it.next();
            Validate.isTrue(address != null && isAddressWellFormed(address), "Invalid Mail address " + address);
        }
    }

    private boolean isAddressWellFormed(String address) {
        Pattern emailPattern = Pattern.compile(EMAIL_PATTERN);
        Matcher matcher = emailPattern.matcher(address);
        return matcher.matches();
    }
}
Thus, I have two questions :
1) If for some reason we later want to validate an email address differently (for instance, to include/exclude some aliases matching existing mailing lists), should I expose a kind of IValidator as a constructor parameter, like the following, rather than hard-wiring the concrete dependency (as I did)?
public AddressValidator(IValidator myValidator) {
    this.validator = myValidator;
}
Indeed, this would respect the D of the SOLID principles: dependency inversion, achieved through dependency injection.
However, if we follow this logic, would a majority of value objects end up owning an abstract validator, or is that just overkill most of the time (thinking of YAGNI)?
2) I've read in some articles that, with respect to DDD, all validation must be present, and only present, in the aggregate root, which in this case means MailConfiguration.
Am I right to consider that immutable objects should never be in an incohesive state? Thus, would validation in the constructor, as I did it, be preferable in the concerned entity (thereby sparing the aggregate from worrying about the validation of its 'children')?
There's a basic pattern in DDD that perfectly does the job of checking and assembling objects to create a new one: the Factory.
I've read in some articles that, with respect to DDD, all validation
must be present, and only present, in the aggregate root
I strongly disagree with that. There can be validation logic in a wide range of places in DDD:
Validation upon creation, performed by a Factory
Enforcement of an aggregate's invariants, usually done in the Aggregate Root
Validation spanning across several objects, which can be found in Domain Services
etc.
Also, I find it funny that you bothered to create an AddressesPart value object (which is a good thing) without considering making EMailAddress a value object in the first place. I think it complicates your code quite a bit, because there's no encapsulated notion of what an email address is, so AddressesPart (and any object that manipulates addresses, for that matter) is forced to deal with the AddressValidator to perform validation of its addresses. I think that shouldn't be its responsibility, but that of an AddressFactory.
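A rough sketch of that idea (EMailAddress and AddressFactory are hypothetical classes, not taken from the question's code):
class EMailAddress {

    private final String value;

    EMailAddress(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }
}

public class AddressFactory {

    private final AddressValidator validator = new AddressValidator();

    // validation happens once, upon creation; the value object is valid by construction
    public EMailAddress create(String rawAddress) {
        validator.validate(rawAddress);
        return new EMailAddress(rawAddress);
    }
}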
I'm not quite sure if I follow you 100%, but one way to ensure immutable objects can only be created when they are valid is to use the Essence pattern.
In a nutshell, the idea is that the parent class contains a static factory that creates immutable instances of itself based on instances of an inner "essence" class. The inner essence is mutable and allows the object to be built up, so you can put the pieces together as you go, and it can be validated along the way as well.
The SOLID principles and good DDD are still abided by, since the parent immutable class is doing only one thing, but it allows others to build it up through its "essence".
For an example of this, check out the LDAP extension to the Spring Security library.
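A minimal sketch of the shape of the pattern (all names are illustrative, not taken from Spring Security):
public final class MailAddress {

    private final String value;

    private MailAddress(Essence essence) {
        this.value = essence.value;
    }

    public String getValue() {
        return value;
    }

    // The mutable inner "essence" accumulates state and validates on creation.
    public static final class Essence {

        private String value;

        public Essence value(String value) {
            this.value = value;
            return this;
        }

        public MailAddress create() {
            if (value == null || !value.contains("@")) {
                throw new IllegalArgumentException("Invalid mail address: " + value);
            }
            return new MailAddress(this);
        }
    }
}
Usage would be new MailAddress.Essence().value("a@b.com").create(); by the time an immutable instance exists, the validation is guaranteed to have run.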
Some observations first.
Why no generics? J2SE5.0 came out in 2004.
The current version of Java SE has Objects.requireNonNull as standard. It's a bit of a mouthful and the capitalisation is arguably wrong, but it returns the passed object, so it doesn't need a separate line:
this.senderAddress = requireNonNull(senderAddress);
Your classes are not quite immutable: they are subclassable, and they don't make a safe copy of their mutable arguments (the Sets; a shame there aren't immutable collection types in the Java library yet). Note: copy before validation.
this.recipientToMailAddresses = validate(new HashSet<String>(
        recipientToMailAddresses
));
The use of ^ and $ in the regex is a little misleading.
If the validation varies, then there are two obvious (sane) choices:
Only do the widest validation in this class, and validate more specifically in the context where the value is going to be used.
Pass in the validator used and keep it as a property. To be useful, client code would have to check it and do something reasonable with that information, which is unlikely.
It doesn't make a lot of sense to pass the validator into the constructor and then discard it; that just makes the constructor overcomplicated. Put it in a static method, if you must.
The enclosing instance should check that its arguments are valid for that particular use, but should not overlap with classes ensuring that they are generally valid. Where would it end?
Although this is an old question, for anyone stumbling upon the subject matter: please keep it simple with POJOs (Plain Old Java Objects).
As for validation, there is no single truth, because in pure DDD you always need to keep the context in mind.
For example, a user with no credit card data can and should be allowed to create an account, but credit card data is needed when checking out a shopping basket.
DDD solves this beautifully by moving the bits and pieces of code to the entities and value objects where they naturally belong.
As a second example, if an address should never be empty in the context of a domain-level task, then the Address value object should enforce this assertion inside the object, instead of asking a third-party library to check whether a certain value object is null or not.
Moreover, Address as a standalone value object doesn't convey much on its own when compared with ShippingAddress, HomeAddress, or CurrentResidentialAddress; this is the ubiquitous language at work, in other words names convey their intent.
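As a sketch of that last point (ShippingAddress is named above; the single street field and its invariant are illustrative):
public final class ShippingAddress {

    private final String street;

    public ShippingAddress(String street) {
        // the value object enforces its own invariant; no external validator needed
        if (street == null || street.trim().isEmpty()) {
            throw new IllegalArgumentException("street must not be empty");
        }
        this.street = street;
    }

    public String getStreet() {
        return street;
    }
}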
