In my model class, I have overridden the toString() method; the contents of the method are given below.
getQueryName() + " -" + (getCriticalities().isEmpty() ? "" : " <b>Criticalities:</b> " + getCriticalities())
+ (getCompletions().isEmpty() ? "" : " <b>Completions:</b> " + getCompletions()) + (getPriorities().isEmpty() ? "" : " <b>Priorities:</b> " + getPriorities())
+ (getUrgencies().isEmpty() ? "" : " <b>Urgencies:</b> " + getUrgencies())
+ (getHistorical().isEmpty() || getHistorical().equals("None") ? "" : " <b>Historical:</b> " + getHistorical())
+ (getPlannerGroups().isEmpty() ? "" : " <b>Planner Groups:</b> " + getPlannerGroups())
+ (getWorkOrderNumbers().isEmpty() ? "" : " <b>Work Order Number:</b> " + getWorkOrderNumbers())
+ (getWorkCenters().isEmpty() ? "" : " <b>Work Centers:</b> " + getWorkCenters());
I have to change words like criticalities and completions into the language that the user has selected. I already have property files for the languages. But as this is an entity class, I am unsure of how to change it.
I think you have 2 options:
Return language keys instead of literals in the toString method, and then use bundle.getString(object.toString())
Or assign the key to a variable and then use bundle.getString(variable) inside the toString method.
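A minimal sketch of the second option. The bundle is defined inline as a ListResourceBundle so the snippet is self-contained; in a real app the same keys would live in your existing .properties files, and the key name "label.criticalities" is purely illustrative.

```java
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

// Inline stand-in for a .properties file (assumption: your real bundles
// define keys like "label.criticalities" per language).
class Labels extends ListResourceBundle {
    @Override
    protected Object[][] getContents() {
        return new Object[][] {
            { "label.criticalities", "Criticalities" },
        };
    }
}

ResourceBundle bundle = new Labels();

// Keep the key in a variable/constant and resolve it where the string is built:
String criticalitiesKey = "label.criticalities";
String label = bundle.getString(criticalitiesKey);
```

With per-locale .properties files you would obtain the bundle via ResourceBundle.getBundle("Labels", userLocale) instead of constructing it directly.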
I am afraid you are running into a "separation of concerns" problem.
In an ideal world, it is a view that is responsible for I18n, i.e. resolving translations. Is there any specific reason why your model should be language aware?
Personally, I don't think so. You should keep your models as language-independent as possible and format all the numbers, as well as translate all the messages in your views.
One thing to note here: if things like Criticalities need translation and there are not too many items, it usually makes sense to use enums rather than strings (using strings in program logic is considered bad practice, by the way).
If you have to use a specific object, it may make sense to add a specific identifier that you can later use to resolve the translations (in ResourceBundle look-ups). Sometimes you can use object values, but this will decrease the translation quality, as translators need context (and the easiest way to provide context is to bring it into a translation key).
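The enum-with-identifier idea above could be sketched like this. The enum name, key strings, and the Map standing in for a ResourceBundle lookup are all illustrative assumptions, not part of the original code.

```java
import java.util.Map;

// Sketch: each enum constant carries a stable translation key that the
// view layer resolves later (names are illustrative).
enum Criticality {
    HIGH("criticality.high"),
    LOW("criticality.low");

    private final String key;
    Criticality(String key) { this.key = key; }
    public String key() { return key; }
}

// Stand-in for a ResourceBundle lookup performed in the view:
Map<String, String> messages = Map.of(
    "criticality.high", "High",
    "criticality.low", "Low");
String shown = messages.get(Criticality.HIGH.key());
```

The model stays language-independent: it only knows the key, and each locale's bundle maps that key to display text.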
I have been using JSF, JPA, and MySQL with EclipseLink for 5 years. I found that I want to shift to ObjectDB as it is very fast, especially with very large datasets. During migration, I found this error.
In JPA with EclipseLink, I passed objects as parameters. But in ObjectDB, I need to pass the ids of objects to get the results. I have to change this in several places. Can anyone help me overcome this issue?
This code worked fine with EclipseLink and MySQL. Here I pass the object "salesRep" as the parameter.
String j = "select b from "
+ " Bill b "
+ " where b.billCategory=:cat "
+ " and b.billType=:type "
+ " and b.salesRep=:rep ";
Map<String, Object> m = new HashMap<>();
m.put("cat", BillCategory.Loading);
m.put("type", BillType.Billed_Bill);
m.put("rep", getWebUserController().getLoggedUser());
I have to change it like this to make it work in ObjectDB. Here I have to pass the id (type long) of the object "salesRep" as the parameter.
String j = "select b from "
+ " Bill b "
+ " where b.billCategory=:cat "
+ " and b.billType=:type "
+ " and b.salesRep.id=:rep ";
Map<String, Object> m = new HashMap<>();
m.put("cat", BillCategory.Loading);
m.put("type", BillType.Billed_Bill);
m.put("rep", getWebUserController().getLoggedUser().getId());
There is a difference between EclipseLink and ObjectDB in handling detached entity objects. The default behaviour of ObjectDB is to follow the JPA specification and stop loading referenced objects by field access (transparent navigation) once an object becomes detached. EclipseLink does not treat detached objects this way.
This could make a difference in situations such as in a JSF application, where an object becomes detached before loading all necessary referenced data.
One solution (the JPA portable way) is to make sure that all the required data is loaded before objects become detached.
Another possible solution is to enable loading referenced objects by access (transparent navigation) for detached objects, by setting the objectdb.temp.no-detach system property. See #3 in this forum thread.
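The JPA-portable solution could look like the sketch below: a fetch join makes the provider load the association together with the query results, so it is available even after the entities detach. This is only a JPQL string built from the entity names in the question; it is not the asker's actual code.

```java
// Portable sketch: load the salesRep association eagerly in the query,
// so it is already populated when the Bill entities become detached.
String jpql = "select b from Bill b "
    + "join fetch b.salesRep "
    + "where b.billCategory = :cat and b.billType = :type";
```

With the association pre-loaded this way, the original `b.salesRep = :rep` comparison (passing the object, not its id) keeps working across providers that follow the JPA detachment rules.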
I'm quite new to Android and recently I've faced the following issue:
I'm using the Android Contacts database, specifically the Data table. I'm putting some info there with a new MIME type and trying to look for this info during search. The problem is that I'm using the SQLite LIKE operator, which is case-sensitive for non-Latin characters. Another problem is that I can't change the database in any way, because it's the Android built-in database.
Builder builder = Data.CONTENT_URI.buildUpon();
loader.setSelection(getIndividualsSelection());
if (query != null && !query.trim().isEmpty()) {
query = query.trim();
loader.setSelection(Data.MIMETYPE + "='" +
MY_MIMETYPE + "' " + " AND ( " +
MY_DATA_COLUMN +
" LIKE '"+ query + "%' " +
specialCharsEscape + " COLLATE NOCASE)");
loader.setSelectionArgs(null);
loader.setUri(builder.build());
loader.setProjection(MY_PROJECTION);
loader.setSortOrder(MY_SORT_ORDER);
}
This is all inside the onCreateLoader function of LoaderCallbacks, where loader is of type CursorLoader. Do you have any idea how to force SQLite not to be case-sensitive?
I've tried, of course, using the SQLite functions UCASE and LCASE, but they don't work. Using REGEXP results in an exception for this database, as does using MATCH... I will appreciate any help.
Android has localized collations.
To actually be able to use a collation for comparisons, use BETWEEN instead of LIKE; for example:
MyDataColumn BETWEEN 'mąka' AND 'mąka&#1114111;' COLLATE UNICODE
U+10FFFF is the last UTF-8-encodable Unicode character; a Java string probably would encode it with surrogates as "\uDBFF\uDFFF". Appending it to the search term makes the upper bound sort after every string that begins with the term.
Please note that the UNICODE collation is broken in some Android versions.
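Building that selection in Java could look like the sketch below. The column name is illustrative, and the snippet uses `?` placeholders with selection args rather than concatenating the query into the SQL, which also avoids the injection risk in the original concatenated selection.

```java
// Build a prefix range for BETWEEN: append U+10FFFF (surrogate pair
// \uDBFF\uDFFF in Java) so the upper bound sorts after every string
// that starts with the query. Column name is illustrative.
String column = "data1";
String query = "mąka";
String upperBound = query + "\uDBFF\uDFFF";
// Use selection args instead of concatenating user input into the SQL:
String selection = column + " BETWEEN ? AND ? COLLATE UNICODE";
String[] selectionArgs = { query, upperBound };
```

You would then pass `selection` and `selectionArgs` to `loader.setSelection(...)` and `loader.setSelectionArgs(...)` in place of the concatenated LIKE clause.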
I have the result of a db query in a java.sql.ResultSet that needs to be converted to a hierarchical data structure. It looks a bit like so:
name|version|pname|code|count
n1|1.1|p1|c1|3
n1|1.1|p1|c2|2
n1|1.1|p2|c1|1
n1|1.2|p1|c1|0
n2|1.0|p1|c1|5
I need that converted into a hierarchical data structure:
N1
+ 1.1
+ p1
+ c1(3)
+ c2(2)
+ p2
+ c1(1)
+ 1.2
+ p1
+ c1(0)
N2
+ 1.0
+ p1
+ c1(5)
So my data structure can look something like this
Name {
String name
List<Version> versions
}
Version {
String version
List<PName> pnames
}
PName {
String pName
List<CodeCount> codeCounts
}
CodeCount {
String code
Integer count
}
Anyone have suggestions/code snippets on the best way to do this?
There are a few ways, and how you do it depends on how robust your solution needs to be.
One would be to just write a couple of objects that have the attributes from the database. Then you could get the result set and iterate over it, creating a new object each time the key field (for example, "name") changes, and adding it to a list of that object type. Then you'd set the attributes appropriately. That is the "quick and dirty" solution.
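The quick-and-dirty iteration might look like the sketch below. Since a live ResultSet needs a database connection, the rows from the question stand in for `rs.next()` / `rs.getString(...)` calls; the nested-map shape mirrors the Name/Version/PName/CodeCount hierarchy.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Rows stand in for ResultSet iteration: name|version|pname|code|count.
String[][] rows = {
    {"n1", "1.1", "p1", "c1", "3"},
    {"n1", "1.1", "p1", "c2", "2"},
    {"n1", "1.1", "p2", "c1", "1"},
    {"n1", "1.2", "p1", "c1", "0"},
    {"n2", "1.0", "p1", "c1", "5"},
};

// name -> version -> pname -> (code -> count); LinkedHashMap keeps row order.
Map<String, Map<String, Map<String, Map<String, Integer>>>> tree = new LinkedHashMap<>();
for (String[] r : rows) {
    tree.computeIfAbsent(r[0], k -> new LinkedHashMap<>())
        .computeIfAbsent(r[1], k -> new LinkedHashMap<>())
        .computeIfAbsent(r[2], k -> new LinkedHashMap<>())
        .put(r[3], Integer.valueOf(r[4]));
}
```

With a real ResultSet you would replace the array with `while (rs.next())` and `rs.getString("name")` etc.; the populated maps can then be copied into the Name/Version/PName/CodeCount classes, or those classes can be built directly inside the loop.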
A slightly more robust way would be to use something like Hibernate to do the mapping.
If you do decide to do that, I would also suggest redoing your tables so that they accurately reflect your object structure. It may not be needed if you just want a fast solution. But if you are seeking a robust solution for commercial or enterprise software, it's probably a good idea.
I have a db column whose datatype is NUMBER(15), and the corresponding field in my Java class is a long. The question is how I would map it using java.sql.Types.
Would Types.BIGINT work?
Or shall I use something else?
P.S:
I can't afford to change the datatype in either the Java class or the DB.
This link says that java.sql.Types.BIGINT should be used to map a Java long to a NUMBER in SQL (Oracle).
I'm attaching a screenshot of the table in case the link ever dies.
A good place to find reliable size mappings between Java and Oracle types is the Hibernate ORM tool. As documented in the code here, Hibernate uses an Oracle NUMBER(19,0) to represent a java.sql.Types.BIGINT, which should map to a long primitive.
I always use wrapper types, because wrapper types can express null values.
In this case I would use the Long wrapper type.
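The point about null values can be shown in a couple of lines; the variable names are illustrative, not from any of the code above.

```java
// A primitive long cannot hold SQL NULL; the Long wrapper can.
Long fromDb = null;              // e.g. the NUMBER(15) column was NULL
// long broken = fromDb;         // would throw NullPointerException when unboxing
long safe = (fromDb == null) ? 0L : fromDb;
```

This matters whenever the column is nullable: `ResultSet.getLong` silently returns 0 for NULL (you must check `wasNull()`), while mapping to a Long keeps the distinction.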
I had a similar problem where I couldn't modify the Java type or the database type. In my situation I needed to execute a native SQL query (to be able to use Oracle's recursive query abilities) and map the result set to a non-managed entity (essentially a simple POJO class).
I found a combination of addScalar and setResultTransformer worked wonders.
hibernateSes.createSQLQuery("SELECT \n"
+ " c.notify_state_id as \"notifyStateId\", \n"
+ " c.parent_id as \"parentId\",\n"
+ " c.source_table as \"sourceTbl\", \n"
+ " c.source_id as \"sourceId\", \n"
+ " c.msg_type as \"msgType\", \n"
+ " c.last_updt_dtm as \"lastUpdatedDateAndTime\"\n"
+ " FROM my_state c\n"
+ "LEFT JOIN my_state p ON p.notify_state_id = c.parent_id\n"
+ "START WITH c.notify_state_id = :stateId\n"
+ "CONNECT BY PRIOR c.notify_state_id = c.parent_id")
.addScalar("notifyStateId", Hibernate.LONG)
.addScalar("parentId", Hibernate.LONG)
.addScalar("sourceTbl",Hibernate.STRING)
.addScalar("sourceId",Hibernate.STRING)
.addScalar("msgType",Hibernate.STRING)
.addScalar("lastUpdatedDateAndTime", Hibernate.DATE)
.setParameter("stateId", notifyStateId)
.setResultTransformer(Transformers.aliasToBean(MyState.class))
.list();
Where notifyStateId, parentId, sourceTbl, sourceId, msgType, and lastUpdatedDateAndTime are all properties of MyState.
Without the addScalar calls, I would get java.lang.IllegalArgumentException: argument type mismatch, because Hibernate was turning Oracle's NUMBER type into a BigDecimal, but notifyStateId and parentId are Long types on MyState.
I have run into an interesting problem which I'm pretty sure is the fault of HashMap. Consider the following debug code (AMap is a HashMap; key is a value passed to this method):
System.out.println("getBValues - Given: " + key);
System.out.println("getBValues - Contains Key: " + AMap.containsKey(key));
System.out.println("getBValues - Value: " + AMap.get(key));
for(Map.Entry<A,HashSet<B>> entry : AMap.entrySet()) {
System.out.println("getBValues(key) - Equal: " + (key.equals(entry.getKey())));
System.out.println("getBValues(key) - HashCode Equal: "+(key.hashCode() == entry.getKey().hashCode()));
System.out.println("getBValues(key) - Key: " + entry.getKey());
System.out.println("getBValues(key) - Value: " + entry.getValue());
}
Now in this Map I insert a single key (Channel) and value. Later I try to get the value back with get() and run this debug code, which in my case gives this output:
getBValues - Given: Channel(...)
getBValues - Contains Key: false <--- Doesn't contain key?!
getBValues - Value: null <--- Null (bad)
getBValues(key) - Equal: true <--- Given key and AMap key is equal
getBValues(key) - HashCode Equal: true
getBValues(key) - Key: Channel(Same...)
getBValues(key) - Value: [] <--- Not null (This is the expected result)
As you can see, fetching the key from the HashMap directly doesn't work, but looping through I find the exact same key, meaning it's there; it just can't be found with get(). My question is: what would cause this? How can get() not find a key that exists?
I would provide some example code of this, but I can't seem to reproduce it independently.
Any suggestions on what might be causing this?
I'll bet you didn't override equals and hashCode properly in your key Channel class. That would explain it.
Joshua Bloch tells you how to do it correctly in his "Effective Java" Chapter 3.
http://java.sun.com/developer/Books/effectivejava/Chapter3.pdf
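A minimal sketch of what a HashMap-safe key class looks like. The real Channel fields are unknown, so a single "name" field stands in for them; the important part is that equals and hashCode are overridden together and use the same fields.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Minimal HashMap-safe key class; the "name" field is illustrative.
final class Channel {
    private final String name;
    Channel(String name) { this.name = name; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Channel)) return false;
        return Objects.equals(name, ((Channel) o).name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name);   // must use the same fields as equals
    }
}

Map<Channel, String> map = new HashMap<>();
map.put(new Channel("news"), "value");
// With consistent equals/hashCode, an equal-but-distinct key instance is found:
String found = map.get(new Channel("news"));
```

If either method is missing (or they use different fields), get() with an equal-but-distinct key returns null, which matches the symptoms in the question.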
From what I can see, we still haven't ruled out whether it has to do with the key being mutated.
If you do:
aMap.put(key, value);
key.setFieldIncludedInHashCodeAndEquals(25);
then you would get the result from above.
To rule this out, either show us more of your code or, in the for loop in your example above, add
System.out.println(aMap.get(entry.getKey()));
Also, use a debugger. That way, you can see if your object is in the correct bucket.
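The mutation scenario described above can be reproduced in a few lines. The key class here is deliberately mutable and purely illustrative; it is not the asker's Channel class.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Deliberately mutable key: changing 'id' after insertion changes the hash,
// so the entry sits in the wrong bucket for later lookups.
class MutableKey {
    int id;
    MutableKey(int id) { this.id = id; }
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).id == id;
    }
    @Override public int hashCode() { return Objects.hash(id); }
}

Map<MutableKey, String> map = new HashMap<>();
MutableKey key = new MutableKey(1);
map.put(key, "value");
key.id = 25;                     // mutate a field used by hashCode

String direct = map.get(key);    // null: get() probes the bucket for the NEW hash
boolean foundByIteration =       // but iteration still sees an equal key
    map.keySet().iterator().next().equals(key);
```

This reproduces the question's symptoms exactly: containsKey/get fail while entrySet iteration finds an equal key with an equal hash code.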