How to store UUID fields in Java classes?

I work on an Android project and have performance problems with UUID fields. Here is the project's sample code (not written by me):
public class OrderVo {
    @Expose
    @SerializedName("UId")
    private UUID mGuId;

    public String getGuId() {
        return UUIDHelper.toString(mGuId);
    }

    public void setGuId(String guId) {
        mGuId = UUIDHelper.fromString(guId);
    }
    ...
This object is used in a RecyclerView and its methods are called in the adapter's bindData. This causes a performance issue: on every scroll getGuId() is called, String objects are created from the UUID, and the GC works constantly to clean those objects up (which causes lag when scrolling fast). Is there any benefit to keeping the mGuId field as a UUID rather than a simple String?

If the callers never convert the string back into a UUID, you can indeed store the value as a String.
One drawback of this solution is the missing type checking: the string is not guaranteed to contain a UUID and can contain anything else.
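If you want to keep the type safety of UUID, another option is to cache the string form so the UUID-to-String conversion (and the resulting allocation) happens once per object rather than on every bindData call. A rough sketch, using java.util.UUID directly instead of the question's UUIDHelper, with a cache field name invented here:

import java.util.UUID;

public class OrderVo {
    private UUID mGuId;

    // Cached string form; built lazily, so repeated adapter binds reuse the same String.
    private transient String mGuIdString;

    public String getGuId() {
        if (mGuIdString == null && mGuId != null) {
            mGuIdString = mGuId.toString();
        }
        return mGuIdString;
    }

    public void setGuId(String guId) {
        mGuId = UUID.fromString(guId);
        mGuIdString = guId;
    }
}

This keeps the UUID type for equality and serialization purposes while avoiding the per-scroll garbage.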

Related

Best way to update an object with another object of the same type in Java in REST

This question came up while I was designing a REST API.
To update a resource, the client must submit the entire resource with updated fields. The thing is that not all fields are editable. What if the client sends a resource with updated fields that they were not supposed to update? Should I throw some kind of exception, or just ignore them and update all editable fields?
I came up with the following solution:
class User {
    private String firstName;  // non-editable
    private String lastName;   // non-editable
    private String occupation; // editable
    private String address;    // editable
    // getters and setters
}

public void updateUser(User original, User update) {
    original.setAddress(update.getAddress());
    original.setOccupation(update.getOccupation());
}
The issue I see here is that the updateUser method is tightly coupled with the User class. If I add or remove editable/non-editable fields, I will always have to change the method. Are there any best practices or patterns that address this kind of problem?
A static method on User would encapsulate this functionality correctly. That way you can keep all your user-specific code together.
class User {
    private String firstName;  // non-editable
    private String lastName;   // non-editable
    private String occupation; // editable
    private String address;    // editable

    public static void updateUser(User original, User update) {
        original.setAddress(update.getAddress());
        original.setOccupation(update.getOccupation());
    }
}
Also, if you care so much about firstName/lastName not being changed after creation, I'd make sure to not include setters for them at all.
As a user of the API I would want to know when I'm editing values that aren't editable, because that probably means that I either have a bug in my code or that I misunderstood the functionality of the API. In either case I'd like to find that out as early in the process as possible, so I'd prefer to see an exception via an error code in the response, rather than what appears to be a success to me but doesn't actually do what I thought it did.
That said, it depends on the specific use-case. It's hard to say without more details on how the API is being used. It may make more sense to only update the editable fields and just make that really clear in the API documentation.
As coderkevin has already called out, using a static method on the User class would be an appropriate place to put the code that actually does the updates.
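If you do want the client to find out when it tries to change a non-editable field, a rough sketch of that guard could look like the following (the exception type is just one choice; the REST layer would translate it into an error response such as 400):

public static void updateUser(User original, User update) {
    // Reject the request if a non-editable field differs, so the caller
    // learns about the problem instead of getting a silent partial update.
    if (!java.util.Objects.equals(original.getFirstName(), update.getFirstName())
            || !java.util.Objects.equals(original.getLastName(), update.getLastName())) {
        throw new IllegalArgumentException("firstName and lastName are not editable");
    }
    original.setAddress(update.getAddress());
    original.setOccupation(update.getOccupation());
}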

Obfuscate source code using ProGuard

Recently I've had some problems with people cheating using an app for root users called Gamecih. Gamecih lets users pause games and change variables at runtime.
I thought that if I obfuscate my code it will be hard for cheaters to know which variables to change at runtime, but I'm also worried it might cause some other problems.
I serialize game objects using Java's Serializable interface and then write them out to a file. Now let's say I'm serializing an object of the class "Player". It gets serialized and saved to a file. Then a user downloads the update with the ProGuard implementation. ProGuard will rename classes and class member names. Won't that cause major errors when trying to read in an already saved Player object?
If I had not yet launched my game, this wouldn't be a problem. But now some players have been playing the same saved game (it's an RPG) for months. They would be pretty pissed off if they downloaded an update and had to start over from scratch.
I know I can instruct ProGuard not to obfuscate certain classes, but it's the Player class I really need to obfuscate.
Clarification: Let's say I have the following simple unobfuscated class:
public class Player {
    private int gold;
    private String name;
    // Lots more.

    public Player(String name) {
        this.name = name;
    }

    public int getGold() {
        return gold;
    }

    public void setGold(int gold) {
        this.gold = gold;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
A player is created, serialized and saved to a file. After I apply the obfuscator, it might look like this:
public class Axynasf {
    private int akdmakn;
    private String anxcmjna;

    public Axynasf(String adna) {
        anxcmjna = adna;
    }

    public int getAkdmakn() {
        return akdmakn;
    }

    public void setAkdmakn(int akdmakn) {
        this.akdmakn = akdmakn;
    }

    public String getAnxcmjna() {
        return anxcmjna;
    }

    public void setAnxcmjna(String anxcmjna) {
        this.anxcmjna = anxcmjna;
    }
}
Imagine that I post an update now and a player who has an unobfuscated version of Player downloads that update. When trying to read that object there will be different member names and a different class name. I'll most likely get a ClassCastException or something of the sort.
No expert in Proguard, but I think you're right to assume it is going to break serialisation.
One possible way of solving this might be to implement a layer over your current save structure. You can tell ProGuard which classes you don't want to obfuscate: leave Player (and similar objects) as they are for now and don't obfuscate them. Once the object has been de-serialised, pass it up to your new layer (which is obfuscated) that the rest of the game deals with. If you don't retain the non-obfuscated object, that will make it hard for cheaters to tweak values during game play (although not at load time). At the same time, you could look at moving your players' game files over to another save option that doesn't depend on serialisation, which will probably make such issues easier to deal with in the future.
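A rough sketch of that layer, with an invented class name (PlayerState); ProGuard would be told to keep Player as-is, while PlayerState and the rest of the game are free to be renamed:

// Obfuscated layer the rest of the game works with. The unobfuscated Player
// is only touched at load/save time and not kept around, so runtime memory
// editing only ever sees renamed classes and fields.
public class PlayerState {
    private int gold;
    private String name;

    public static PlayerState fromSaved(Player p) {
        PlayerState s = new PlayerState();
        s.gold = p.getGold();
        s.name = p.getName();
        return s;
    }

    // Only used when writing the save file in the old, compatible format.
    public Player toSaved() {
        Player p = new Player(name);
        p.setGold(gold);
        return p;
    }

    public int getGold() { return gold; }
    public void setGold(int gold) { this.gold = gold; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}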
For ensuring compatible serialization in ProGuard:
ProGuard manual > Examples > Processing serializable classes
For upgrading a serialized class to a different class in Java:
JDK documentation > Serialization > Object Input Classes > readResolve
JDK documentation > Serialization > Object Serialization Examples > Evolution/Substitution
I understand people can update variables at runtime with the app you named.
If you only change the member names, the values will still give hints.
If you obfuscate, the class name will change, but the new name will end up on a forum anyway.
So this alone is not enough.
What you could do in your update is: at startup, load the serialized data into the old object, transfer it to the "new" obfuscated class, and use a custom serialization (with an XOR using the device ID or the Gmail address, to make it less obvious).
Try to split your player data across several classes too.
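A minimal sketch of the XOR idea (the key would be derived from the device ID or account address; note this is obfuscation, not real encryption):

import java.util.Arrays;

public final class XorCodec {
    // XOR every byte with a repeating key. Applying the same method twice
    // restores the original, so it is used for both saving and loading.
    public static byte[] apply(byte[] data, byte[] key) {
        byte[] out = Arrays.copyOf(data, data.length);
        for (int i = 0; i < out.length; i++) {
            out[i] ^= key[i % key.length];
        }
        return out;
    }
}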
What I would do in your situation:
Release an update that contains both the obfuscated and the non-obfuscated class. When the player gets loaded, try both classes. If the player was loaded with the non-obfuscated class, map it onto your obfuscated class.
When the player gets saved, save with the obfuscated class.
After a suitable amount of time, release an update with only the obfuscated classes.
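A rough sketch of that flow, reusing the class names from the question (this assumes Axynasf also implements Serializable, that Player is kept out of obfuscation during the transition, and that the obfuscation mapping stays stable between releases):

import java.io.*;

public class SaveMigration {
    // Try the new (obfuscated) class first; fall back to the legacy Player and map it.
    public static Axynasf load(File file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            Object saved = in.readObject();
            if (saved instanceof Axynasf) {
                return (Axynasf) saved;              // already in the new format
            }
            Player legacy = (Player) saved;          // old, unobfuscated format
            Axynasf migrated = new Axynasf(legacy.getName());
            migrated.setAkdmakn(legacy.getGold());
            return migrated;
        }
    }

    // Always save in the new format, so the fallback above is only needed once per save file.
    public static void save(File file, Axynasf player) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(player);
        }
    }
}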

Programming idiom for reading a one-record CSV file into class fields

Suppose I want to read the following file:
TestFile;100
into the fields of the class:
public class MyReader {
    String field1;
    Integer field2;
}
There are two slightly different ways to read the content:
public class MyReader {
    public void loadValues(File file) throws IOException {
        // a generic method that reads the file content into a string, throwing an IOException
        String line = readFileToString(file);
        String[] values = line.split(";");
        field1 = values[0];
        field2 = Integer.parseInt(values[1]);
    }

    // then I'll have the well-known getters:
    public String getField1() {
        return field1;
    }

    public Integer getField2() {
        return field2;
    }
Now the second idiom:
    private String[] values; // made the values a class field

    public void loadValues(File file) throws IOException {
        // a generic method that reads the file content into a string, throwing an IOException
        String line = readFileToString(file);
        values = line.split(";");
    }

    public String getField1() {
        return values[0];
    }

    public Integer getField2() {
        return Integer.parseInt(values[1]);
    }
The big difference is in exception management.
I deliberately did not catch two runtime exceptions that may occur in both cases:
ArrayIndexOutOfBoundsException if the file contains less than two fields
NumberFormatException if the Integer parsing fails
First approach
All fields are loaded at startup. It is sufficient for one of them to be unparsable and I get the NumberFormatException. If there are fewer fields than required, I'll get the out-of-bounds exception. That sounds good, especially if I want to ensure the correctness of all the field values before I use a specific one: the fail-fast paradigm.
Suppose that I have one hundred fields and that the record contains an error, which in turn is logged to a file for troubleshooting. As is, this code gives something like this:
Exception in thread "main" java.lang.NumberFormatException: For input string: ""
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
that is: I get the error, the value that caused the error, and the line number where the error occurred, but NOT the field name. Good for the developer, not so good for the systems engineer, who normally has no access to the source code.
Second approach
The fields are "extracted" and parsed from the data string only when they are accessed through getters.
Each getter may throw the two exceptions described above, in the very same circumstances.
There is a certain degree of "fault tolerance", as opposed to the "fail fast" paradigm. If the 100th field of the record contains an incorrect value but the client of the class never calls its getter, the exception will never be thrown.
On the other hand, when the getter of an incorrect value is called, the logged exception will show which field caused the problem (in the stack trace):
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Integer.parseInt(Integer.java:449)
at java.lang.Integer.parseInt(Integer.java:499)
at com.test.MyReader.getField2(MyReader.java:39)
at com.test.MyReader.test(MyReader.java:33)
at com.test.MyReader.main(MyReader.java:16)
Question(s)
Both approaches have pros and cons, and one could say that the decision about which one to adopt is context-dependent. The questions are:
Getters and setters are a JavaBeans subject. Is it "acceptable" for them to throw exceptions?
The class in the two approaches may appear to have the same interface, but since they have completely different exception behaviour, the client is supposed to use them in completely different ways. So how do I represent this fact to the client? Maybe the getter semantics are misleading; perhaps there is a more eloquent way?
First of all, the second approach only works if you really store the values[] array as a class field; as a local variable it would not be accessible to the other methods (the getters in your case).
Second, don't throw exceptions from your getters, as the API you expose would be misleading and wouldn't conform to any convention.
Third, instead of parsing the string yourself, consider using a ready-made CSV parsing library rather than reinventing the wheel, e.g. http://opencsv.sourceforge.net/
Fourth, create a simple POJO for the CSV records; you can check for errors while assigning the values to each field and then throw an exception or not, provide default values, etc.
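For example, a small record POJO along those lines, parsing each field individually so the error message names the offending field (a sketch without opencsv, assuming the same semicolon-separated line as in the question):

public class MyRecord {
    private final String field1;
    private final Integer field2;

    public MyRecord(String line) {
        String[] values = line.split(";");
        if (values.length < 2) {
            throw new IllegalArgumentException("Expected 2 fields but got " + values.length);
        }
        this.field1 = values[0];
        this.field2 = parseIntField("field2", values[1]);
    }

    // Wrap the parse so the log shows which field was malformed, not just the raw value.
    private static Integer parseIntField(String fieldName, String raw) {
        try {
            return Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("Field '" + fieldName + "' is not a number: '" + raw + "'", e);
        }
    }

    public String getField1() { return field1; }
    public Integer getField2() { return field2; }
}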
Hope that helps

Good practice to validate immutable value objects

Suppose a MailConfiguration class specifying settings for sending mails:
public class MailConfiguration {
    private AddressesPart addressesPart;
    private String subject;
    private FilesAttachments filesAttachments;
    private String bodyPart;

    public MailConfiguration(AddressesPart addressesPart, String subject, FilesAttachments filesAttachments,
                             String bodyPart) {
        Validate.notNull(addressesPart, "addressesPart must not be null");
        Validate.notNull(subject, "subject must not be null");
        Validate.notNull(filesAttachments, "filesAttachments must not be null");
        Validate.notNull(bodyPart, "bodyPart must not be null");

        this.addressesPart = addressesPart;
        this.subject = subject;
        this.filesAttachments = filesAttachments;
        this.bodyPart = bodyPart;
    }

    // ... some useful getters ......
}
So, I'm using two value objects: AddressesPart and FilesAttachments.
These two value objects have similar structures, so I'm only going to show AddressesPart here:
public class AddressesPart {
    private final String senderAddress;
    private final Set recipientToMailAddresses;
    private final Set recipientCCMailAdresses;

    public AddressesPart(String senderAddress, Set recipientToMailAddresses, Set recipientCCMailAdresses) {
        validate(senderAddress, recipientToMailAddresses, recipientCCMailAdresses);
        this.senderAddress = senderAddress;
        this.recipientToMailAddresses = recipientToMailAddresses;
        this.recipientCCMailAdresses = recipientCCMailAdresses;
    }

    private void validate(String senderAddress, Set recipientToMailAddresses, Set recipientCCMailAdresses) {
        AddressValidator addressValidator = new AddressValidator();
        addressValidator.validate(senderAddress);
        addressValidator.validate(recipientToMailAddresses);
        addressValidator.validate(recipientCCMailAdresses);
    }

    public String getSenderAddress() {
        return senderAddress;
    }

    public Set getRecipientToMailAddresses() {
        return recipientToMailAddresses;
    }

    public Set getRecipientCCMailAdresses() {
        return recipientCCMailAdresses;
    }
}
And the associated validator, AddressValidator:
public class AddressValidator {
    private static final String EMAIL_PATTERN
            = "^[_A-Za-z0-9-]+(\\.[_A-Za-z0-9-]+)*@[A-Za-z0-9]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$";

    public void validate(String address) {
        validate(Collections.singleton(address));
    }

    public void validate(Set addresses) {
        Validate.notNull(addresses, "List of mail addresses must not be null");
        for (Iterator it = addresses.iterator(); it.hasNext(); ) {
            String address = (String) it.next();
            Validate.isTrue(address != null && isAddressWellFormed(address), "Invalid Mail address " + address);
        }
    }

    private boolean isAddressWellFormed(String address) {
        Pattern emailPattern = Pattern.compile(EMAIL_PATTERN);
        Matcher matcher = emailPattern.matcher(address);
        return matcher.matches();
    }
}
Thus, I have two questions:
1) If for some reason we later want to validate an email address differently (for instance to include/exclude some aliases matching an existing mailing list), should I expose a kind of IValidator as a constructor parameter, like the following, rather than hard-wiring a concrete dependency (as I did)?
public AddressValidator(IValidator myValidator) {
    this.validator = myValidator;
}
Indeed, this would respect the D of the SOLID principles, Dependency Inversion, by using dependency injection.
However, if we follow this logic, should the majority of value objects own an abstract validator, or is that just overkill most of the time (thinking of YAGNI)?
2) I've read in some articles that, with respect to DDD, all validation must be present, and only present, in the aggregate root, meaning in this case: MailConfiguration.
Am I right to consider that immutable objects should never be in an inconsistent state? Thus, would validation in the constructor, as I did it, be preferable in the entity concerned (thereby sparing the aggregate from worrying about validation of its "children")?
There's a basic pattern in DDD that perfectly does the job of checking and assembling objects to create a new one: the Factory.
I've read in some articles that in respect of DDD, all validations
must be present and only present in the Aggregate Root
I strongly disagree with that. There can be validation logic in a wide range of places in DDD:
Validation upon creation, performed by a Factory
Enforcement of an aggregate's invariants, usually done in the Aggregate Root
Validation spanning across several objects can be found in Domain Services.
etc.
Also, I find it funny that you bothered to create an AddressesPart value object (which is a good thing) without considering making EmailAddress a value object in the first place. I think it complicates your code quite a bit, because there's no encapsulated notion of what an email address is, so AddressesPart (and any object that manipulates addresses, for that matter) is forced to deal with the AddressValidator to perform validation of its addresses. I think that shouldn't be its responsibility but that of an AddressFactory.
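A minimal sketch of such a value object, reusing the question's regex, just to show the shape (a factory could then hand these out instead of raw strings):

import java.util.regex.Pattern;

public final class EmailAddress {
    private static final Pattern PATTERN = Pattern.compile(
            "^[_A-Za-z0-9-]+(\\.[_A-Za-z0-9-]+)*@[A-Za-z0-9]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$");

    private final String value;

    public EmailAddress(String value) {
        // The notion of "what an email address is" lives here, once.
        if (value == null || !PATTERN.matcher(value).matches()) {
            throw new IllegalArgumentException("Invalid mail address " + value);
        }
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof EmailAddress && value.equals(((EmailAddress) o).value);
    }

    @Override
    public int hashCode() {
        return value.hashCode();
    }
}

With this in place, AddressesPart could hold sets of EmailAddress and drop its dependency on AddressValidator entirely.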
I'm not quite sure if I follow you 100%, but one way to handle ensuring immutable objects are only allowed to be created if they are valid is to use the Essence Pattern.
In a nutshell, the idea is that the parent class contains a static factory that creates immutable instances of itself based on instances of an inner "essence" class. The inner essence is mutable and allows objects to be built up, so you can put the pieces together as you go, and can be validated along the way as well.
The SOLID principles and good DDD are still respected, since the parent immutable class still does only one thing but allows others to build it up through its "essence".
For an example of this, check out the Ldap extension to the Spring Security library.
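A rough sketch of the general idea (not the exact shape used in the Spring Security LDAP code, and the names here are invented): the outer class is immutable, while the nested mutable "essence" accumulates values and validates them before building.

public final class MailAddress {
    private final String value;

    private MailAddress(Essence essence) {
        this.value = essence.value;
    }

    public String getValue() {
        return value;
    }

    // Mutable, builder-like essence; the immutable instance can only be
    // created through create(), which is where validation happens.
    public static final class Essence {
        private String value;

        public Essence setValue(String value) {
            this.value = value;
            return this;
        }

        public MailAddress create() {
            if (value == null || !value.contains("@")) {  // simplistic check, for the sketch only
                throw new IllegalStateException("Invalid mail address " + value);
            }
            return new MailAddress(this);
        }
    }
}

Usage would look like new MailAddress.Essence().setValue("user@example.com").create().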
Some observations first.
Why no generics? J2SE5.0 came out in 2004.
Current versions of Java SE have Objects.requireNonNull as standard. Bit of a mouthful and the capitalisation is wrong. It also returns the passed object, so it doesn't need a separate line:
this.senderAddress = requireNonNull(senderAddress);
Your classes are not quite immutable. They are subclassable. Also they don't make a safe copy of their mutable arguments (Sets - shame there aren't immutable collection types in the Java library yet). Note, copy before validation.
this.recipientToMailAddresses = validate(new HashSet<String>(
recipientToMailAddresses
));
The use of ^ and $ in the regex is a little misleading.
If the validation varies, then there are two obvious (sane) choices:
Only do the widest variation in this class. Validate more specifically in the context it is going to be used.
Pass in the validator used and have this as a property. To be useful, client code would have to check and do something reasonable with this information, which is unlikely.
It doesn't make a lot of sense to pass the validator into the constructor and then discard it. That's making the constructor overcomplicated. Put it in a static method, if you must.
The enclosing instance should check that its arguments are valid for that particular use, but should not overlap with classes ensuring that they are generally valid. Where would it end?
Although this is an old question, for anyone stumbling upon the subject matter: please keep it simple with POJOs (Plain Old Java Objects).
As for validation, there is no single truth, because for pure DDD you always need to keep the context in mind.
For example, a user with no credit card data can and should be allowed to create an account. But credit card data is needed when checking out on some shopping basket page.
How this is beautifully solved by DDD is by moving the bits and pieces of code to the Entities and Value Objects where they naturally belong.
As a second example, if an address should never be empty in the context of a domain-level task, then the Address value object should enforce this assertion inside the object instead of asking a third-party library to check whether a certain value object is null or not.
Moreover, Address as a standalone value object doesn't convey much on its own when compared with ShippingAddress, HomeAddress or CurrentResidentialAddress ... the ubiquitous language, in other words: names convey their intent.

Alternatives to com.google.appengine.api.datastore.Text?

In a Google App Engine JPA entity, a java.lang.String is limited to 500 characters, while a com.google.appengine.api.datastore.Text has unlimited size (with the caveat that entityManager.persist doesn't work with entities bigger than 1 MB, I believe, but I can't find the reference for it).
However, if I use the Google specific Text type, I would tightly couple my application with Google App Engine. I was wondering if there is a more lightweight way to achieve the same result, such as through some standard annotation.
Maybe by annotating a String with a JSR-303 annotation: if the size is above 500, it would know to use the non-indexed Text type. Example: @Size(max = 3000)
Obviously I'm just dreaming, but maybe there is some standard way to avoid App Engine-specific data types. Does anybody know?
UPDATE: found issue 10, Support @Lob JPA annotation, in datanucleus-appengine, the DataNucleus plugin for Google App Engine.
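If that issue gets implemented (or on another JPA provider), the standard-annotation version would look roughly like this; whether the provider maps the field to an unindexed Text property is up to the datanucleus-appengine plugin, so treat this as a sketch:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Lob;

@Entity
public class BlogPost {
    @Id
    private Long id;

    // Standard JPA way to mark large content; no App Engine specific type needed.
    @Lob
    private String post;

    public String getPost() { return post; }
    public void setPost(String post) { this.post = post; }
}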
I'm guessing you might have thought of this already, but I'm posting it anyway since it adds a point not covered in your question. My solution here was to make the Text field private and never return a Text object from the class. I admit this doesn't accomplish your goal of avoiding the App Engine-specific data type, but it at least means you don't have to rewrite code outside of the one class that relies on the Text object should you decide to port your application off App Engine.
So, for example, assuming text is a blogPost variable...
public class BlogPost {
    private Text post;
    // ...

    public String getPost() {
        return post.getValue();
    }

    public void setPost(String val) {
        this.post = new Text(val);
    }
}
If that code feels a little sloppy, it might be slightly more efficient (I don't know how the Text#getValue method works internally) to do this...
public class BlogPost {
    private Text post;
    private String postValue;
    // ...

    public String getPost() {
        return postValue;
    }

    public void setPost(String val) {
        this.post = new Text(val);
        this.postValue = val;
    }
}
