Reference annotation on bundle field - java

I'm developing an application with OSGi. Looking inside the OSGi Compendium 6.0 (section 112.8.1) I came across Declarative Services; in particular I looked at the following paragraph:
For a field, the defaults for the Reference annotation are:
The name of the bind method or field is used for the name of the reference.
1:1 cardinality if the field is not a collection; 0..n cardinality if the field is a collection.
Static reluctant policy if the field is not declared volatile; dynamic reluctant policy if the field is declared volatile.
The requested service is the type of the field.
For example:
@Reference
volatile Collection<LogService> log;
Now, I read in Neil Bartlett's OSGi in Practice (section 11.10.2) that synchronization and concurrency of the bind and unbind methods of the Reference annotation are a bit tricky (especially in dynamic policy scenarios). In particular, a thread-safe example of referencing a service via annotations might be:
@Component(provide = MailboxListener.class, properties = { "service.ranking = 10" })
public class LogMailboxListener implements MailboxListener {

    private final AtomicReference<Log> logRef = new AtomicReference<Log>();

    public void messagesArrived(String mboxName, Mailbox mbox, long[] ids) {
        Log log = logRef.get();
        if (log != null)
            log.log(Log.INFO, ids.length + " message(s) arrived in mailbox " + mboxName, null);
        else
            System.err.println("No log available!");
    }

    @Reference(service = Log.class, dynamic = true, optional = true)
    public void setLog(Log log) {
        logRef.set(log);
    }

    public void unsetLog(Log log) {
        logRef.compareAndSet(log, null);
    }
}
I think I grasped from the book why the dynamic policy needs these adjustments in a multi-threaded scenario. My question is: if the reference annotation were on a field (Declarative Services 1.3), how could I achieve thread safety? Only by declaring the field "volatile" (as the compendium suggests)? Or is there some tricky part that will create problems in the application?
Thanks for any kind reply.

When you use a dynamic policy reference on a field, the field must be volatile. In your example, each time the set of LogServices changes, a new collection is injected into the field. So this is safe: if your code is iterating over the old collection, the old collection is unaltered, and when your code goes back to the log field it will see the new collection.
So all you need to do is declare the field volatile and avoid storing the field value somewhere else, since the field will be updated with a new collection whenever the set of bound services changes.
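For completeness, a minimal sketch of what the DS 1.3 field-injection variant could look like (component and method names here are illustrative); the volatile modifier gives you the dynamic reluctant policy, and the field should be read into a local variable before use:
import java.util.Collection;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.log.LogService;

@Component
public class LogConsumer {

    // Dynamic reluctant policy because the field is volatile; SCR injects a
    // new collection whenever the set of bound LogService instances changes.
    @Reference
    private volatile Collection<LogService> logs;

    public void logSomething(String message) {
        // Read the field once into a local variable and work only with that copy.
        Collection<LogService> current = this.logs;
        for (LogService log : current) {
            log.log(LogService.LOG_INFO, message);
        }
    }
}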


How can my Freemarker ObjectWrapper access a template setting

Use case: a system administrator stores a Freemarker template in a database, which is used (by a Spring Boot REST API) to present information stored by system users (respondents) in a locale-aware way to a different user type (reviewers).
A respondent's response might be stored in this sort of object (or in lists of this sort of object, in the event a question posed to the respondent is expected to have multiple answers):
// snip
import com.fasterxml.jackson.databind.node.ObjectNode;
// more imports snipped
public class LanguageStringMap {

    private Map<Language, String> languageStringMap;

    public LanguageStringMap(ObjectNode languageMapNode) {
        // snip of code instantiating a LanguageStringMap from JSON
    }

    public void put(Language language, String value) {
        if (value.length() == 0)
            throw new IllegalArgumentException(String.format(
                    "value for language '%s' of zero length", language.getCode()));
        languageStringMap.put(language, value);
    }

    public String get(Language language) { return languageStringMap.get(language); }
}
What I think I want to do is write an ObjectWrapper that maps instances of LanguageStringMap to a string (obtained by calling the get() method with a language derived from the Locale requested by the reviewer's browser and set in the template's settings). This presents a cleaner user experience to the system administrator than making the uploaded template contain a bunch of template method calls would.
To do this, my object wrapper needs to access a template setting. I have perused the pertinent Freemarker documentation, but I am still unclear on how to do this or if it is even possible.
I think it would be a mistake to try to implement this with resource bundles uploaded to the database alongside the templates, but that is a consideration.
Typically you simply put the locale specific string into the data-model before the template is processed, along with all the other variables. In that case no ObjectWrapper customization is needed. But if you have to use an ObjectWrapper-based solution, then you can get the locale inside an ObjectWrapper method (like in the override of DefaultObjectWrapper.handleUnknownType) with Environment.getCurrentEnvironment().getLocale().
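If you do go the ObjectWrapper route, a rough sketch could look like the following (Language.fromLocale is a hypothetical helper on your side that maps a Locale to your Language type, and the FreeMarker version constant is just an example):
import java.util.Locale;
import freemarker.core.Environment;
import freemarker.template.Configuration;
import freemarker.template.DefaultObjectWrapper;
import freemarker.template.SimpleScalar;
import freemarker.template.TemplateModel;
import freemarker.template.TemplateModelException;

public class LanguageStringMapWrapper extends DefaultObjectWrapper {

    public LanguageStringMapWrapper() {
        super(Configuration.VERSION_2_3_28);
    }

    @Override
    protected TemplateModel handleUnknownType(Object obj) throws TemplateModelException {
        if (obj instanceof LanguageStringMap) {
            // The locale of the current template processing run
            Locale locale = Environment.getCurrentEnvironment().getLocale();
            Language language = Language.fromLocale(locale); // hypothetical mapping helper
            return new SimpleScalar(((LanguageStringMap) obj).get(language));
        }
        return super.handleUnknownType(obj);
    }
}
You would then install the wrapper on your Configuration with setObjectWrapper(new LanguageStringMapWrapper()) before processing templates.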

is there a Cacheable in C# similar to Java?

In Java Spring Boot, I can easily enable caching using the annotation #EnableCaching and make methods cache the result using #Cacheable, this way, any input to my method with the exact same parameters will NOT call the method, but return immediately using the cached result.
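For reference, the Java feature being described looks roughly like this (a minimal Spring sketch; the cache name and service are illustrative):
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching
class CachingConfig {
}

@Service
class SquareService {

    // First call with a given argument computes the value; subsequent calls with
    // the same argument return the cached result without invoking the method body.
    @Cacheable("squares")
    public long square(long n) {
        return n * n;
    }
}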
Is there something similar in C#?
What I did in the past was implement my own caching class and my own data structures; it's a big hassle. I just want an easy way for the program to cache the result and return the exact result if the input parameters are the same.
EDIT: I don't want to use any third-party stuff, so no MemCached, no Redis, no RabbitMQ, etc... Just looking for a very simple and elegant solution like Java's @Cacheable.
Caches
A cache is a relatively small store of data that can be accessed very quickly. It holds information that is likely to be used again; for example, web browsers typically use a cache to make web pages load faster by storing a copy of a page's files locally on your computer.
Caching
Caching is the process of storing data in a cache. Caching in C# is very easy: System.Runtime.Caching.dll provides the classes for working with caches. In this illustration I am using the following classes:
ObjectCache
MemoryCache
CacheItemPolicy
ObjectCache: the abstract base class that represents an object cache; it lives in System.Runtime.Caching. (The related CacheItem class provides a logical representation of a single cache entry, which can belong to a region via its RegionName property.)
MemoryCache: this class also belongs to System.Runtime.Caching and is the built-in implementation of an in-memory cache.
CacheItemPolicy: represents a set of eviction and expiration details for a specific cache entry.
.NET provides:
System.Web.Caching.Cache - the default caching mechanism in ASP.NET. You can get an instance of this class via the property Controller.HttpContext.Cache, or via the singleton HttpContext.Current.Cache. This class is not expected to be created explicitly because under the hood it uses another caching engine that is assigned internally. The simplest way to make your code work is the following:
public class DataController : System.Web.Mvc.Controller
{
    public System.Web.Mvc.ActionResult Index()
    {
        List<object> list = new List<object>();
        HttpContext.Cache["ObjectList"] = list;               // add
        list = (List<object>)HttpContext.Cache["ObjectList"]; // retrieve
        HttpContext.Cache.Remove("ObjectList");               // remove
        return new System.Web.Mvc.EmptyResult();
    }
}
System.Runtime.Caching.MemoryCache - this class can be constructed in user code. It has a different interface and more features, such as update/remove callbacks, regions, monitors, etc. To use it you need to reference the System.Runtime.Caching library. It can also be used in an ASP.NET application, but you will have to manage its lifetime yourself.
var cache = new System.Runtime.Caching.MemoryCache("MyTestCache");
cache["ObjectList"] = list; // add
list = (List<object>)cache["ObjectList"]; // retrieve
cache.Remove("ObjectList"); // remove
You can write a decorator with get-or-create functionality: first try to get the value from the cache; if it doesn't exist, compute it and store it in the cache:
public static class CacheExtensions
{
    public static async Task<T> GetOrSetValueAsync<T>(this ICacheClient cache, string key, Func<Task<T>> function)
        where T : class
    {
        // try to get value from cache
        var result = await cache.JsonGet<T>(key);
        if (result != null)
        {
            return result;
        }

        // cache miss, run function and store result in cache
        result = await function();
        await cache.JsonSet(key, result);
        return result;
    }
}
ICacheClient is the interface you're extending. Now you can use:
await _cacheClient.GetOrSetValueAsync(key, () => Task.FromResult(value));

How do I add a custom directive to a query resolved through a singleton

I have managed to add custom directives to the GraphQL schema but I am struggling to work out how to add a custom directive to a field definition. Any hints on the correct implementation would be very helpful.
I am using GraphQL SPQR 0.9.6 to generate my schema
ORIGINAL ANSWER: (now outdated, see the 2 updates below)
It's currently not possible to do this. GraphQL SPQR v0.9.9 will be the first to support custom directives.
Still, in 0.9.8 there's a possible work-around, depending on what you're trying to achieve. SPQR's own meta-data about a field or a type is kept inside custom directives. Knowing that, you can get a hold of the Java method/field underlying the GraphQL field definition. If what you want is e.g. an instrumentation that does something based on a directive, you could instead obtain any annotations on the underlying element, having the full power of Java at your disposal.
The way to get the method would be something like:
Operation operation = Directives.getMappedOperation(env.getField()).get();
Resolver resolver = operation.getApplicableResolver(env.getArguments().keySet());
Member underlyingElement = resolver.getExecutable().getDelegate();
UPDATE:
I posted a huge answer on this GitHub issue. Pasting it here as well.
You can register an additional directive as such:
generator.withSchemaProcessors(
(schemaBuilder, buildContext) -> schemaBuilder.additionalDirective(...));
But (according to my current understanding), this only makes sense for query directives (something the client sends as a part of the query, like @skip or @deferred).
Directives like @dateFormat simply make no sense in SPQR: they're there to help you when parsing SDL and mapping it to your code. In SPQR, there's no SDL and you start from your code.
E.g. @dateFormat is used to tell you that you need to provide date formatting to a specific field when mapping it to Java. In SPQR you start from the Java part and the GraphQL field is generated from a Java method, so the method must already know what format it should return, or it already has an appropriate annotation. In SPQR, Java is the source of truth. You use annotations to provide extra mapping info. Directives are basically annotations in SDL.
Still, field or type level directives (or annotations) are very useful in instrumentations. E.g. if you want to intercept field resolution and inspect the authentication directives.
In that case, I'd suggest you simply use annotations for the same purpose.
public class BookService {
    @Auth(roles = {"Admin"}) //example custom annotation
    public Book addBook(Book book) { /*insert a Book into the DB */ }
}
As each GraphQLFieldDefinition is backed by a Java method (or a field), you can get the underlying objects in your interceptor or wherever:
GraphQLFieldDefinition field = ...;
Operation operation = Directives.getMappedOperation(field).get();
//Multiple methods can be hooked up to a single GraphQL operation. This gets the @Auth annotations from all of them
Set<Auth> allAuthAnnotations = operation.getResolvers().stream()
.map(res -> res.getExecutable().getDelegate()) //get the underlying method
.filter(method -> method.isAnnotationPresent(Auth.class))
.map(method -> method.getAnnotation(Auth.class))
.collect(Collectors.toSet());
Or, to inspect only the method that can handle the current request:
DataFetchingEnvironment env = ...; //get it from the instrumentation params
Auth auth = operation.getApplicableResolver(env.getArguments().keySet()).getExecutable().getDelegate().getAnnotation(Auth.class);
Then you can inspect your annotations as you wish, e.g.
Set<String> allNeededRoles = allAuthAnnotations.stream()
.flatMap(auth -> Arrays.stream(auth.roles))
.collect(Collectors.toSet());
if (!currentUser.getRoles().containsAll(allNeededRoles)) {
throw new AccessDeniedException(); //or whatever is appropriate
}
Of course, there's no real need to actually implement authentication this way, as you're probably using a framework like Spring or Guice (maybe even Jersey has the needed security features), that already has a way to intercept all methods and implement security. So you can just use that instead. Much simpler and safer. E.g. for Spring Security, just keep using it as normal:
public class BookService {
    @PreAuthorize(...) //standard Spring Security
    public Book addBook(Book book) { /*insert a Book into the DB */ }
}
Make sure you also read my answer on implementing security in GraphQL if that's what you're after.
You can use instrumentations to dynamically filter the results in the same way: add an annotation on a method, access it from the instrumentation, and process the result dynamically:
public class BookService {
    @Filter("title ~ 'Monkey'") //example custom annotation
    public List<Book> findBooks(...) { /*get books from the DB */ }
}
new SimpleInstrumentation() {
    // You can also use beginFieldFetch and then onCompleted instead of instrumentDataFetcher
    @Override
    public DataFetcher<?> instrumentDataFetcher(DataFetcher<?> dataFetcher, InstrumentationFieldFetchParameters parameters) {
        GraphQLFieldDefinition field = parameters.getEnvironment().getFieldDefinition();
        Optional<String> filterExpression = Directives.getMappedOperation(field)
                .map(operation -> operation.getApplicableResolver(parameters.getEnvironment().getArguments().keySet())
                        .getExecutable().getDelegate()
                        .getAnnotation(Filter.class).value()); //get the filtering expression from the annotation
        return filterExpression.isPresent()
                ? env -> filterResultBasedOnExpression(dataFetcher.get(parameters.getEnvironment()), filterExpression)
                : dataFetcher;
    }
}
For directives on types, again, just use Java annotations. You have access to the underlying types via:
Directives.getMappedType(graphQLType).getAnnotation(...);
This, again, probably makes sense only in instrumentations. I say that because normally the directives provide extra info for mapping SDL to a GraphQL type. In SPQR you map a Java type to a GraphQL type, so a directive makes no sense in that context in most cases.
Of course, if you still need actual GraphQL directives on a type, you can always provide a custom TypeMapper that puts them there.
For directives on a field, it is currently not possible in 0.9.8.
0.9.9 will have full custom directive support on any element, in case you still need them.
UPDATE 2: GraphQL SPQR 0.9.9 is out.
Custom directives are now supported. See issue #200 for details.
Any custom annotation meta-annotated with @GraphQLDirective will be mapped as a directive on the annotated element.
E.g. imagine a custom annotation @Auth(requiredRole = "Admin") used to denote access restrictions:
@GraphQLDirective //Should be mapped as a GraphQLDirective
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD}) //Applicable to methods
public @interface Auth {
    String requiredRole();
}
If a resolver method is then annotated with @Auth:
@GraphQLMutation
@Auth(requiredRole = "Admin")
public Book addBook(Book newBook) { ... }
The resulting GraphQL field will look like:
type Mutation {
    addBook(newBook: BookInput): Book @auth(requiredRole : "Admin")
}
That is to say, the @Auth annotation got mapped to a directive due to the presence of the @GraphQLDirective meta-annotation.
Client directives can be added via: GraphQLSchemaGenerator#withAdditionalDirectives(java.lang.reflect.Type...).
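For example, a rough usage sketch (assuming the BookService from the examples above and a hypothetical client-directive annotation Deferred):
GraphQLSchema schema = new GraphQLSchemaGenerator()
        .withOperationsFromSingleton(new BookService())
        .withAdditionalDirectives(Deferred.class) // hypothetical client directive type
        .generate();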
SPQR 0.9.9 also comes with ResolverInterceptors which can intercept the resolver method invocation and inspect the annotations/directives. They are much more convenient to use than Instrumentations, but are not as general (have a much more limited scope). See issue #180 for details, and the related tests for usage examples.
E.g. to make use of the @Auth annotation from above (note that @Auth does not need to be a directive for this to work):
public class AuthInterceptor implements ResolverInterceptor {

    @Override
    public Object aroundInvoke(InvocationContext context, Continuation continuation) throws Exception {
        Auth auth = context.getResolver().getExecutable().getDelegate().getAnnotation(Auth.class);
        User currentUser = context.getResolutionEnvironment().dataFetchingEnvironment.getContext();
        if (auth != null && !currentUser.getRoles().containsAll(Arrays.asList(auth.requiredRole()))) {
            throw new IllegalAccessException("Access denied"); // or return null
        }
        return continuation.proceed(context);
    }
}
If @Auth is a directive, you can also get it via the regular API, e.g.
List<GraphQLDirective> directives = dataFetchingEnvironment.getFieldDefinition().getDirectives();
DirectivesUtil.directivesByName(directives);

Ways to pass additional data to Custom RevisionEntity in Hibernate Envers?

It's a RESTful web app. I am using Hibernate Envers to store historical data. Along with the revision number and timestamp, I also need to store other details (for example: IP address and authenticated user). Envers provides multiple ways to have a custom revision entity, which is awesome. I am facing a problem in setting the custom data on the revision entity.
@RevisionEntity(MyCustomRevisionListener.class)
public class MyCustomRevisionEntity extends DefaultRevisionEntity {
    private String userName;
    private String ip;
    //Accessors
}

public class MyCustomRevisionListener implements RevisionListener {
    public void newRevision(Object revisionEntity) {
        MyCustomRevisionEntity customRevisionEntity = (MyCustomRevisionEntity) revisionEntity;
        //Here I need userName and IP address passed as arguments somehow, so that I can set them on the revision entity.
    }
}
Since the newRevision() method does not allow any additional arguments, I cannot pass my custom data (like username and IP) to it. How can I do that?
Envers also provides another approach as:
An alternative method to using the org.hibernate.envers.RevisionListener is to instead call the getCurrentRevision( Class revisionEntityClass, boolean persist ) method of the org.hibernate.envers.AuditReader interface to obtain the current revision, and fill it with desired information.
So using the above approach, I'll have to do something like this:
Change my current dao method like:
public void persist(SomeEntity entity) {
    ...
    entityManager.persist(entity);
    ...
}
to
public void persist(SomeEntity entity, String userName, String ip) {
    ...
    //Do the intended work
    entityManager.persist(entity);
    //Do the additional work
    AuditReader reader = AuditReaderFactory.get(entityManager);
    MyCustomRevisionEntity revision = reader.getCurrentRevision(MyCustomRevisionEntity.class, false);
    revision.setUserName(userName);
    revision.setIp(ip);
}
I don't feel very comfortable with this approach, as keeping audit data seems like a cross-cutting concern to me. Also, I obtain the userName, IP, and other data from the HTTP request object, so all that data would need to flow down from the entry point of the application (the controller) to the lowest layer (the DAO layer).
Is there any other way in which I can achieve this? I am using Spring.
I am imagining something like Spring keeping information about the 'stack' to which a particular method invocation belongs, so that when newRevision() is invoked, I know which particular invocation at the entry point led to it. And also, I can somehow obtain the arguments passed to the first method of the call stack.
One good way to do this would be to leverage a ThreadLocal variable.
As an example, Spring Security has a filter that initializes a thread-local variable stored in SecurityContextHolder, and then you can access this data from that specific thread simply by doing something like:
SecurityContext ctx = SecurityContextHolder.getContext();
Authentication authentication = ctx.getAuthentication();
So imagine an additional interceptor that your web framework calls which either adds the additional information to the Spring Security context (perhaps in a custom user details object, if you're using Spring Security) or populates your own holder & context object with the information the listener needs; a sketch of the holder-based variant follows.
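A minimal sketch of the holder-based variant, assuming plain servlets (UserContext, UserContextHolder and UserContextFilter are hypothetical names; each class would live in its own file and the filter is registered like any other servlet filter):
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class UserContext {
    private final String userName;
    private final String ipAddress;

    public UserContext(String userName, String ipAddress) {
        this.userName = userName;
        this.ipAddress = ipAddress;
    }

    public String getUserName() { return userName; }
    public String getIpAddress() { return ipAddress; }
}

public class UserContextHolder {
    private static final ThreadLocal<UserContext> CONTEXT = new ThreadLocal<>();

    public static void setUserContext(UserContext context) { CONTEXT.set(context); }
    public static UserContext getUserContext() { return CONTEXT.get(); }
    public static void clear() { CONTEXT.remove(); }
}

public class UserContextFilter implements Filter {
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) request;
        try {
            // Populate the thread-local for the duration of this request
            UserContextHolder.setUserContext(new UserContext(http.getRemoteUser(), http.getRemoteAddr()));
            chain.doFilter(request, response);
        } finally {
            UserContextHolder.clear(); // don't leak state across pooled threads
        }
    }

    @Override public void init(FilterConfig filterConfig) {}
    @Override public void destroy() {}
}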
Then it becomes a simple:
public class MyRevisionEntityListener implements RevisionListener {
    @Override
    public void newRevision(Object revisionEntity) {
        // If you use Spring Security, you could read from SecurityContextHolder instead.
        final UserContext userContext = UserContextHolder.getUserContext();
        MyRevisionEntity mre = MyRevisionEntity.class.cast(revisionEntity);
        mre.setIpAddress(userContext.getIpAddress());
        mre.setUserName(userContext.getUserName());
    }
}
This feels like the cleanest approach to me.
It is worth noting that the other API, getCurrentRevision(Class, boolean), was deprecated as of Hibernate 5.2 and is scheduled for removal in 6.0. While an alternative means may be introduced, the intended way to perform this type of logic is using a RevisionListener.

Good practice to validate immutable values objects

Suppose a MailConfiguration class specifying settings for sending mails:
public class MailConfiguration {
    private AddressesPart addressesPart;
    private String subject;
    private FilesAttachments filesAttachments;
    private String bodyPart;

    public MailConfiguration(AddressesPart addressesPart, String subject, FilesAttachments filesAttachments,
            String bodyPart) {
        Validate.notNull(addressesPart, "addressesPart must not be null");
        Validate.notNull(subject, "subject must not be null");
        Validate.notNull(filesAttachments, "filesAttachments must not be null");
        Validate.notNull(bodyPart, "bodyPart must not be null");
        this.addressesPart = addressesPart;
        this.subject = subject;
        this.filesAttachments = filesAttachments;
        this.bodyPart = bodyPart;
    }

    // ... some useful getters ......
}
So, I'm using two value objects: AddressesPart and FilesAttachments.
These two value objects have similar structures, so I'm only going to show AddressesPart here:
public class AddressesPart {
    private final String senderAddress;
    private final Set recipientToMailAddresses;
    private final Set recipientCCMailAdresses;

    public AddressesPart(String senderAddress, Set recipientToMailAddresses, Set recipientCCMailAdresses) {
        validate(senderAddress, recipientToMailAddresses, recipientCCMailAdresses);
        this.senderAddress = senderAddress;
        this.recipientToMailAddresses = recipientToMailAddresses;
        this.recipientCCMailAdresses = recipientCCMailAdresses;
    }

    private void validate(String senderAddress, Set recipientToMailAddresses, Set recipientCCMailAdresses) {
        AddressValidator addressValidator = new AddressValidator();
        addressValidator.validate(senderAddress);
        addressValidator.validate(recipientToMailAddresses);
        addressValidator.validate(recipientCCMailAdresses);
    }

    public String getSenderAddress() {
        return senderAddress;
    }

    public Set getRecipientToMailAddresses() {
        return recipientToMailAddresses;
    }

    public Set getRecipientCCMailAdresses() {
        return recipientCCMailAdresses;
    }
}
And the associated validator, AddressValidator:
public class AddressValidator {
    private static final String EMAIL_PATTERN
            = "^[_A-Za-z0-9-]+(\\.[_A-Za-z0-9-]+)*@[A-Za-z0-9]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$";

    public void validate(String address) {
        validate(Collections.singleton(address));
    }

    public void validate(Set addresses) {
        Validate.notNull(addresses, "List of mail addresses must not be null");
        for (Iterator it = addresses.iterator(); it.hasNext(); ) {
            String address = (String) it.next();
            Validate.isTrue(address != null && isAddressWellFormed(address), "Invalid Mail address " + address);
        }
    }

    private boolean isAddressWellFormed(String address) {
        Pattern emailPattern = Pattern.compile(EMAIL_PATTERN);
        Matcher matcher = emailPattern.matcher(address);
        return matcher.matches();
    }
}
Thus, I have two questions:
1) If, for some reason, we later want to validate an email address differently (for instance to include/exclude some aliases matching existing mailing lists), should I expose a kind of IValidator as a constructor parameter, like the following, rather than hard-wiring a concrete dependency (as I did)?
public AddressValidator(IValidator myValidator) {
    this.validator = myValidator;
}
Indeed, this would respect the D of the SOLID principles (dependency inversion, via dependency injection).
However, if we follow this logic, would the majority of value objects end up owning an abstract validator, or is that overkill most of the time (thinking of YAGNI)?
2) I've read in some articles that, with respect to DDD, all validations must be present, and only present, in the aggregate root, which in this case means MailConfiguration.
Am I right to consider that immutable objects should never be in an inconsistent state? Thus, is validation in the constructor, as I did it, preferable in the concerned entity (so the aggregate doesn't have to worry about validating its "children")?
There's a basic pattern in DDD that perfectly does the job of checking and assembling objects to create a new one: the Factory.
I've read in some articles that, with respect to DDD, all validations must be present and only present in the Aggregate Root
I strongly disagree with that. There can be validation logic in a wide range of places in DDD:
Validation upon creation, performed by a Factory
Enforcement of an aggregate's invariants, usually done in the Aggregate Root
Validation spanning across several objects can be found in Domain Services.
etc.
Also, I find it funny that you bothered to create an AddressesPart value object (which is a good thing) without considering making EmailAddress a value object in the first place. I think this complicates your code quite a bit, because there's no encapsulated notion of what an email address is, so AddressesPart (and any object that manipulates addresses, for that matter) is forced to deal with the AddressValidator to perform validation of its addresses. I think that shouldn't be its responsibility but that of an AddressFactory.
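A rough sketch of that direction (EmailAddress and AddressFactory are illustrative names; the regex is the one from the question):
import java.util.regex.Pattern;

public final class EmailAddress {
    private final String value;

    // Package-private so that creation goes through the factory, which validates first
    EmailAddress(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }
}

public class AddressFactory {
    private static final Pattern EMAIL_PATTERN = Pattern.compile(
            "^[_A-Za-z0-9-]+(\\.[_A-Za-z0-9-]+)*@[A-Za-z0-9]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$");

    public EmailAddress create(String address) {
        if (address == null || !EMAIL_PATTERN.matcher(address).matches()) {
            throw new IllegalArgumentException("Invalid mail address " + address);
        }
        return new EmailAddress(address);
    }
}
AddressesPart could then hold sets of EmailAddress instances and would no longer need to know anything about validation.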
I'm not quite sure if I follow you 100%, but one way to handle ensuring immutable objects are only allowed to be created if they are valid is to use the Essence Pattern.
In a nutshell, the idea is that the parent class contains a static factory that creates immutable instances of itself based on instances of an inner "essence" class. The inner essence is mutable and allows objects to be built up, so you can put the pieces together as you go, and can be validated along the way as well.
The SOLID principles and good DDD are abided by, since the parent immutable class is still doing only one thing, but it allows others to build it up through its "essence".
For an example of this, check out the Ldap extension to the Spring Security library.
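For illustration, a stripped-down sketch of the Essence pattern (names are made up; the Spring Security LDAP classes are considerably more elaborate). The outer class is immutable and validates when it is created, while the mutable inner "essence" accumulates the pieces:
public final class MailSubject {

    private final String subject;

    private MailSubject(Essence essence) {
        // Validation happens exactly once, when the immutable instance is created
        if (essence.subject == null || essence.subject.isEmpty()) {
            throw new IllegalArgumentException("subject must not be empty");
        }
        this.subject = essence.subject;
    }

    public String getSubject() {
        return subject;
    }

    // Mutable "essence": collects the pieces, then builds the immutable parent
    public static final class Essence {
        private String subject;

        public Essence setSubject(String subject) {
            this.subject = subject;
            return this;
        }

        public MailSubject create() {
            return new MailSubject(this);
        }
    }
}
Usage would then be something like: MailSubject subject = new MailSubject.Essence().setSubject("Hello").create();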
Some observations first.
Why no generics? J2SE5.0 came out in 2004.
Current versions of Java SE have Objects.requireNonNull as standard. Bit of a mouthful and the capitalisation is wrong. It also returns the passed object, so it doesn't need a separate line.
this.senderAddress = requireNonNull(senderAddress);
Your classes are not quite immutable. They are subclassable. Also they don't make a safe copy of their mutable arguments (Sets - shame there aren't immutable collection types in the Java library yet). Note, copy before validation.
this.recipientToMailAddresses = validate(new HashSet<String>(
recipientToMailAddresses
));
The use of ^ and $ in the regex is a little misleading.
If the validation varies, then there are two obvious (sane) choices:
Only do the widest variation in this class. Validate more specifically in the context it is going to be used.
Pass in the validator used and have this as a property. To be useful, client code would have to check and do something reasonable with this information, which is unlikely.
It doesn't make a lot of sense to pass the validator into the constructor and then discard it. That's making the constructor overcomplicated. Put it in a static method, if you must.
The enclosing instance should check that its argument are valid for that particular use, but should not overlap with classes ensuring that they are generally valid. Where would it end?
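For instance, a static factory along those lines (a sketch, reusing the IValidator idea from the question and simplified to a single field) uses the validator once up front instead of keeping it as state:
public final class ValidatedAddress {
    private final String address;

    private ValidatedAddress(String address) {
        this.address = address;
    }

    // Validate up front in a static factory; the validator is not retained as a field
    public static ValidatedAddress of(String address, IValidator validator) {
        validator.validate(address);
        return new ValidatedAddress(address);
    }

    public String getAddress() {
        return address;
    }
}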
Although this is an old question, for anyone stumbling upon the subject matter: please keep it simple with POJOs (Plain Old Java Objects).
As for validations, there is no single truth, because in pure DDD you always need to keep the context in mind.
For example, a user with no credit card data can and should be allowed to create an account, but credit card data is needed when checking out on some shopping basket page.
DDD solves this beautifully by moving the bits and pieces of code to the entities and value objects where they naturally belong.
As a second example, if an address should never be empty in the context of a domain-level task, then the Address value object should enforce this assertion inside the object, instead of asking a third-party library to check whether a certain value object is null or not.
Moreover, Address as a standalone value object doesn't convey much on its own when compared with ShippingAddress, HomeAddress or CurrentResidentialAddress... that is the ubiquitous language at work; in other words, names convey their intent.
