I am writing an Android app, in Java, which uses an SQLite database containing dozens of tables. I have a few Datasource classes set up to pull data from these tables and turn them into their respective objects. My problem is that I do not know the most efficient way to structure code that accesses the database in Java.
The Datasource classes are getting very repetitive and taking a long time to write. I would like to refactor the repetition into a parent class that will abstract away most of the work of accessing the database and creating objects.
The problem is, I am a PHP (loosely-typed) programmer and I'm having a very hard time solving this problem in a strictly-typed way.
Thinking in PHP, I'd do something like this:
public abstract class Datasource {
    protected String table_name;
    protected String entity_class_name;

    public function get_all () {
        // pseudo code -- assume db is a connection to our database, please.
        Cursor cursor = db.query( "select * from {this.table_name}" );
        class_name = this.entity_class_name;
        entity = new $class_name;
        // loops through data in columns and populates the corresponding fields on each entity -- also dynamic
        entity = this.populate_entity_with_db_hash( entity, cursor );
        return entity;
    }
}

public class ColonyDatasource extends Datasource {
    public function ColonyDataSource( ) {
        this.table_name = 'colony';
        this.entity_class_name = 'Colony';
    }
}
Then new ColonyDatasource.get_all() would get all the rows in table colony and return a bunch of Colony objects, and creating the data source for each table would be as easy as creating a class that has little more than a mapping of table information to class information.
Of course, the problem with this approach is that I have to declare my return types and can't use variable class names in Java. So now I'm stuck.
What should one do instead?
(I am aware that I could use a third-party ORM, but my question is how someone might solve this without one.)
First: you don't want to write lines like these in your Java code:
class_name = this.entity_class_name;
entity = new $class_name;
It is possible to do what you are suggesting, and in languages such as Java it is called reflection.
https://en.wikipedia.org/wiki/Reflection_(computer_programming)
In this case (and many others), using reflection to do what you want is a bad idea, for many reasons. To list a few:
It is VERY expensive
You want the compiler to catch any mistakes, eliminating as many runtime errors as possible.
Java isn't really designed to quack like a duck: What's an example of duck typing in Java?
Your code should be structured in a different way to avoid this type of approach.
Sadly, because Java is strictly typed, I don't believe you can automate this part of your code:
// loops through data in columns and populates the corresponding fields on each entity -- also dynamic
entity = this.populate_entity_with_db_hash( entity, cursor );
Unless you do it by means of reflection. Or you could shift approaches entirely and begin serializing your objects (not recommending it, just saying it's an option!). Or do something similar to Gson (https://code.google.com/p/google-gson/), i.e. turn the db hash into a JSON representation and then use Gson to turn that into an object.
What you could do is automate the "get_all" portion in the abstract class, since that would otherwise be repeated in nearly every subclass, and use an interface so that the abstract class can rest assured that it can call a method of its extending object. This will get you most of the way toward your "automated" approach, reducing the amount of code you must retype.
To do this we must consider the fact that Java has:
Generics (https://en.wikipedia.org/wiki/Generics_in_Java)
Function overloading.
Every Object in Java extends from the Object class, always.
Very Liskov-like https://en.wikipedia.org/wiki/Liskov_substitution_principle
Package scope: What is the default scope of a method in Java?
Try something like this sketch (untested):
// Notice default scoping
interface DataSourceInterface {
    // This is to allow our GenericDataSource to call a method that isn't defined yet.
    Object cursorToMe(Cursor cursor);
}

// Notice how we implement here, but with no implemented function declarations!
public abstract class GenericDataSource implements DataSourceInterface {

    protected SQLiteDatabase database;

    // And here we see generics and Object being friends to do what we want.
    // This basically says the ? (wildcard) will give us a list of random things,
    // but we do know that these random things extend Object.
    protected List<? extends Object> getAll(String table, String[] columns) {
        List<Object> items = new ArrayList<Object>();
        Cursor cursor = database.query(table, columns, null, null, null, null, null);
        cursor.moveToFirst();
        while (!cursor.isAfterLast()) {
            // And see how we can call "cursorToMe" without error!
            // Depending on the extending class, cursorToMe will return
            // all sorts of different objects, but each will be an Object nonetheless.
            Object object = this.cursorToMe(cursor);
            items.add(object);
            cursor.moveToNext();
        }
        // Make sure to close the cursor
        cursor.close();
        return items;
    }
}
// Here we extend the abstract class, which also carries the implements clause,
// therefore we must implement the function "cursorToMe".
public class ColonyDataSource extends GenericDataSource {

    protected String[] allColumns = {
        ColonyOpenHelper.COLONY_COLUMN_ID,
        ColonyOpenHelper.COLONY_COLUMN_TITLE,
        ColonyOpenHelper.COLONY_COLUMN_URL
    };

    // Notice our function overloading!
    // This getAll also widens the access modifier to allow more access.
    // The cast is unchecked, hence the suppression; we know the list holds Colonies.
    @SuppressWarnings("unchecked")
    public List<Colony> getAll() {
        return (List<Colony>) super.getAll(ColonyOpenHelper.COLONY_TABLE_NAME, allColumns);
    }

    // Notice: here we actually implement our db-hash-to-object conversion.
    // This is the part that could only be automated through reflection or the like,
    // so it is better to just have your DataSource object do what it knows how to do.
    @Override
    public Colony cursorToMe(Cursor cursor) {
        Colony colony = new Colony();
        colony.setId(cursor.getLong(0));
        colony.setTitle(cursor.getString(1));
        colony.setUrl(cursor.getString(2));
        return colony;
    }
}
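An alternative sketch of the same idea uses a type parameter instead of Object plus a cast, so subclasses need no unchecked conversions at all. To keep this runnable outside Android, Cursor and the database are replaced here with a minimal Row interface and a plain list of rows, and Colony is re-stubbed with just two fields; all of those stand-ins are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a database row, so the sketch is self-contained.
interface Row {
    Object get(String column);
}

// The type parameter T replaces Object, so no cast is needed in subclasses.
abstract class TypedDataSource<T> {
    // Subclasses say how one row becomes one T.
    protected abstract T fromRow(Row row);

    // The shared iteration logic lives here once, for every table.
    public List<T> getAll(List<Row> rows) {
        List<T> items = new ArrayList<>();
        for (Row row : rows) {
            items.add(fromRow(row));
        }
        return items;
    }
}

class Colony {
    final long id;
    final String title;
    Colony(long id, String title) { this.id = id; this.title = title; }
}

class ColonyDataSource extends TypedDataSource<Colony> {
    @Override
    protected Colony fromRow(Row row) {
        return new Colony((Long) row.get("id"), (String) row.get("title"));
    }
}
```

The caller then gets a List<Colony> directly from getAll(), with the compiler checking the element type instead of a runtime cast.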
If your queries are virtually identical except for certain parameters, consider using prepared statements and parameter binding:
In SQLite, do prepared statements really improve performance?
Another option that I have yet to explore fully is the Java Persistence API; there are projects that implement very similar annotations. Most of these take the form of an ORM that provides you with data access objects (http://en.wikipedia.org/wiki/Data_access_object).
An open source project called Hibernate seems to be one of the go-to solutions for ORM in Java, but I have also heard that it is a very heavyweight solution, especially when you start considering a mobile app.
An Android-specific ORM solution is OrmLite (http://ormlite.com/sqlite_java_android_orm.shtml); it is based on Hibernate but very much stripped down, without as many dependencies, for the very purpose of running on an Android phone.
I have read that people using one transition to the other very nicely.
Related
I am implementing an API and I keep running into this issue, I think there is something wrong with my core design but I'm not sure what and I'm feeling overwhelmed by design principles.
Basically I'll have an object with a bunch of related fields. Populating the fields with the proper info is non-trivial and relies on one or more external API calls using a client I pass in to the constructor. These API calls also provide information related to more than one field, so it is desirable to populate many fields at the same time. I want to keep the constructor simple/fast and keep my object testable, so I don't put any logic there, just assignments. What I end up doing instead is creating a single method to populate all the fields and calling it in all of my getters after a null check, i.e. lazily populating the object fields.
I think that this is bad because it violates the "fail-fast" principle, especially because I am using a client to call an external service, which may fail if, say, the client credentials are invalid. However, I am having trouble restructuring my code.
I've thought about extracting the client logic into a service/connector, ClothingConnector for example, however I'm not sure this would solve my problem as I still wouldn't want to call this in the constructor, and it would still be beneficial to populate many fields at once.
class Person {
    ClientToGetClothing clothingClient;
    Pants pants;
    Shirt shirt;
    Fabric shirtFabric;
    Fabric pantsFabric;

    public Person(ClientToGetClothing clothingClient) {
        this.clothingClient = clothingClient;
    }

    private void populateClothing() {
        PantsResponse pantsInfo = clothingClient.retrievePantsInfo();
        this.pants = pantsInfo.getPants();
        ShirtResponse shirtInfo = clothingClient.retrieveShirtInfo();
        this.shirt = shirtInfo.getShirt();
        // do some more things with my pants + shirt and assign the results to
        // more fields -- calculate the fabric in this example
    }

    public Shirt getShirt() {
        if (shirt == null) {
            populateClothing();
        }
        return this.shirt;
    }

    // ...
}
Firstly, I would decouple the ClothingClient from the Person object: have a factory do the attribute population and then return the Person instance.
https://en.wikipedia.org/wiki/Factory_method_pattern#Java
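A minimal sketch of that idea, with the clothing types stubbed out so it stands on its own (the class and method names here are simplified assumptions, not the question's real API):

```java
// Stub domain types so the sketch compiles on its own.
class Shirt {}
class Pants {}

class ClothingClient {
    Shirt retrieveShirt() { return new Shirt(); }
    Pants retrievePants() { return new Pants(); }
}

// Person is now a plain value object: fully populated, no client, trivially testable.
class Person {
    final Shirt shirt;
    final Pants pants;

    Person(Shirt shirt, Pants pants) {
        this.shirt = shirt;
        this.pants = pants;
    }
}

// The factory owns the client and fails fast: any API error surfaces here,
// at creation time, instead of inside a getter much later.
class PersonFactory {
    private final ClothingClient client;

    PersonFactory(ClothingClient client) { this.client = client; }

    Person create() {
        Shirt shirt = client.retrieveShirt();
        Pants pants = client.retrievePants();
        return new Person(shirt, pants);
    }
}
```

Because both API calls happen in create(), the factory can still populate many fields from one response; only the fully built Person escapes.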
If there are updates for some entity being sent to my backend in the form of a DTO and I have a class that needs access to both the previous values of this entity and the new values, what is a good way to set this up?
The current code looks a bit like this:
//temporarily store old values
SomeReferencedEntity previousReferencedEntity = existingObject.getSomeReferencedEntity();
boolean previouslyEnabled = existingObject.isEnabled();
int previousNumber = existingObject.getNumber();
//update entity with new values
existingObject.setSomeReferencedEntity(newSomeReferencedEntity); //from dto conversion
existingObject.setEnabled(dto.isEnabled());
existingObject.setNumber(dto.getNumber());
//call other class with new and old values
someExternalSystemAdministrator.update(previousReferencedEntity, previouslyEnabled, previousNumber, existingObject);
This doesn't seem like a great way to do this though. So what are my options?
I have thought about instantiating entities from my DTO just so I can pass two full entities (including references to some other entities I would have to instantiate) to my externalSystemAdministrator class, so we can just compare object by object. However, that felt like abuse of the entity class, since I would never actually persist the new values.
I considered passing the new DTO to the externalSystemAdministrator class, but there are quite a few steps that need to be taken before the new values are understandable to that class. It also doesn't feel right to let a class like that see any DTOs.
I could also create a simple value class for the externalSystemAdministrator class to deal with and pass two instances of that, instead of passing it my domain entities. This is my favorite option right now, but it does mean I need to be able to set some values on my actual entities later, based on the response of the external system.
Is there a smarter way to set this up? Preferably in a common/standard/accepted way.
Create an event that notifies listeners of the change. For instance, you would have an event class holding the old and new versions of your DTO:
public class ObjectChangedEvent {
    private MyDto oldValue;
    private MyDto newValue;
    ...
}
Then the code you posted above would publish the event after updating the entity:
applicationEventPublisher.publishEvent(objectChangedEvent);
And finally your component that needs to know both versions would be notified of the change:
public class MyComponent implements ApplicationListener<ObjectChangedEvent> {
    @Override
    public void onApplicationEvent(ObjectChangedEvent event) {
        //your logic goes here
    }
}
You can find more details in the Spring documentation on application events.
By default Spring makes synchronous calls, but I highly recommend switching to asynchronous event handling. In this specific case it doesn't seem to change much, but synchronous handlers that take a long time to execute might make your application stop answering new requests.
Context
Suppose you have a component with a great many options to modify its behavior. Think of a table of data with some sorting, filtering, paging, etc. The options could then be isFilterable, isSortable, defaultSortingKey, etc. Of course there will be a parameter object to encapsulate all of these; let's call it TableConfiguration. Of course we don't want a huge constructor, or a set of telescoping constructors, so we use a builder, TableConfigurationBuilder. The example usage could be:
TableConfiguration config = new TableConfigurationBuilder().sortable().filterable().build();
So far so good, a ton of SO questions deals with this already.
Moving forward
There is now a ton of Tables and each of them uses its own TableConfiguration. However, not all of the "configuration space" is used uniformly: say most of the tables are filterable, and most of those are paginated. Let's say there are only 20 different combinations of configuration options that make sense and are actually used. In line with the DRY principle, these 20 combinations live in methods like these:
public TableConfiguration createFilterable() {
    return new TableConfigurationBuilder().filterable().build();
}

public TableConfiguration createFilterableSortable() {
    return new TableConfigurationBuilder().filterable().sortable().build();
}
Question
How to manage these 20 methods, so that developers adding new tables can easily find the configuration combination they need, or add a new one if it does not exist yet?
All of the above I use already, and it works reasonably well if I have an existing table to copy-paste ("it's exactly like Customers"). However, every time something out of the ordinary is required, it's hard to figure out:
Is there a method doing exactly what I want? (problem A)
If not, which one is the closest one to start from? (problem B)
I tried to give the methods some very descriptive names to express what configuration options are being built in inside, but it does not scale really well...
Edit
While thinking about the great answers below, one more thing occurred to me:
Bonus points for grouping tables with the same configuration in a type-safe way. In other words, while looking at a table, it should be possible to find all its "twins" by something like go to definition and find all references.
I think that if you are already using the builder pattern, then sticking to the builder pattern would be the best approach. There's nothing to gain in having methods or an enum to build the most frequently used TableConfiguration.
You have a valid point regarding DRY, though. Why set the most common flags on almost every builder, in many different places?
So, you need to encapsulate the setting of the most common flags (to avoid repeating yourself), while still allowing extra flags to be set over this common base. Besides, you also need to support special cases. In your example, you mention that most tables are filterable and paginated.
So, while the builder pattern gives you flexibility, it makes you repeat the most common settings. Why not make specialized default builders that set the most common flags for you? These would still allow you to set extra flags. And for special cases, you could use the builder pattern the old-fashioned way.
Code for an abstract builder that defines all settings and builds the actual object could look something like this:
public abstract class AbstractTableConfigurationBuilder
        <T extends AbstractTableConfigurationBuilder<T>> {

    public T filterable() {
        // set filterable flag
        return (T) this;
    }

    public T paginated() {
        // set paginated flag
        return (T) this;
    }

    public T sortable() {
        // set sortable flag
        return (T) this;
    }

    public T withVeryStrangeSetting() {
        // set very strange setting flag
        return (T) this;
    }

    // TODO add all possible settings here

    public TableConfiguration build() {
        // build object with all settings and return it
    }
}
And this would be the base builder, which does nothing:
public class BaseTableConfigurationBuilder
        extends AbstractTableConfigurationBuilder<BaseTableConfigurationBuilder> {
}
Inclusion of a BaseTableConfigurationBuilder is meant to avoid using generics in the code that uses the builder.
Then, you could have specialized builders:
public class FilterableTableConfigurationBuilder
        extends AbstractTableConfigurationBuilder<FilterableTableConfigurationBuilder> {

    public FilterableTableConfigurationBuilder() {
        super();
        this.filterable();
    }
}

public class FilterablePaginatedTableConfigurationBuilder
        extends FilterableTableConfigurationBuilder {

    public FilterablePaginatedTableConfigurationBuilder() {
        super();
        this.paginated();
    }
}

public class SortablePaginatedTableConfigurationBuilder
        extends AbstractTableConfigurationBuilder<SortablePaginatedTableConfigurationBuilder> {

    public SortablePaginatedTableConfigurationBuilder() {
        super();
        this.sortable().paginated();
    }
}
The idea is that you have builders that set the most common combinations of flags. You could create a hierarchy or have no inheritance relation between them, your call.
Then, you could use your builders to create all combinations, without repeating yourself. For example, this would create a filterable and paginated table configuration:
TableConfiguration config =
        new FilterablePaginatedTableConfigurationBuilder()
                .build();

And if you want your TableConfiguration to be filterable, paginated and also sortable:

TableConfiguration config =
        new FilterablePaginatedTableConfigurationBuilder()
                .sortable()
                .build();

And a special table configuration with a very strange setting that is also sortable:

TableConfiguration config =
        new BaseTableConfigurationBuilder()
                .withVeryStrangeSetting()
                .sortable()
                .build();
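As an aside, the unchecked `(T) this` casts in the abstract builder can be avoided by giving it an abstract `self()` method that each leaf builder implements. A minimal self-contained sketch of that refinement, with class names simplified and TableConfiguration stubbed down to two flags:

```java
// Stripped-down configuration object, just enough to demonstrate the pattern.
class TableConfiguration {
    boolean filterable;
    boolean sortable;
}

abstract class AbstractBuilder<T extends AbstractBuilder<T>> {
    protected final TableConfiguration config = new TableConfiguration();

    // Each concrete builder returns itself, so no unchecked cast is needed.
    protected abstract T self();

    public T filterable() {
        config.filterable = true;
        return self();
    }

    public T sortable() {
        config.sortable = true;
        return self();
    }

    public TableConfiguration build() {
        return config;
    }
}

class BaseBuilder extends AbstractBuilder<BaseBuilder> {
    @Override
    protected BaseBuilder self() { return this; }
}
```

The cost is one trivial `self()` override per leaf builder; the benefit is that the compiler, not a cast, guarantees the fluent chain keeps its concrete type.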
I would remove your convenience methods that call several methods of the builder. The whole point of a fluent builder like this is that you don't need to create 20 something methods for all acceptable combinations.
Is there a method doing exactly what I want? (problem A)
Yes, the method that does what you want is the new TableConfigurationBuilder(). Btw, I think it's cleaner to make the builder constructor package private and make it accessible via a static method in TableConfiguration, then you can simply call TableConfiguration.builder().
If not, which one is the closest one to start from? (problem B)
If you already have an instance of TableConfiguration or TableConfigurationBuilder it may be nice to pass it into the builder such that it becomes preconfigured based on the existing instance. This allows you to do something like:
TableConfiguration.builder(existingTableConfig).sortable(false).build()
If almost all configuration options are booleans, then you may OR them together:
public static final int SORTABLE = 0x1;
public static final int FILTERABLE = 0x2;
public static final int PAGEABLE = 0x4;

public TableConfiguration createTable(int options, String sortingKey) {
    TableConfigurationBuilder builder = new TableConfigurationBuilder();
    // Note the parentheses: & has lower precedence than != in Java.
    if ((options & SORTABLE) != 0) {
        builder.sortable();
    }
    if ((options & FILTERABLE) != 0) {
        builder.filterable();
    }
    if ((options & PAGEABLE) != 0) {
        builder.pageable();
    }
    if (sortingKey != null) {
        builder.sortable();
        builder.setSortingKey(sortingKey);
    }
    return builder.build();
}
Now table creation doesn't look so ugly:
TableConfiguration conf1 = createTable(SORTABLE|FILTERABLE, "PhoneNumber");
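An EnumSet is a more type-safe spin on the same idea; it is backed by a bit vector internally, so it stays about as cheap as raw int flags. This sketch uses simplified stand-in names, not the question's real classes:

```java
import java.util.EnumSet;
import java.util.Set;

enum TableOption { SORTABLE, FILTERABLE, PAGEABLE }

// Stripped-down configuration holder for illustration.
class TableConfiguration {
    final Set<TableOption> options;
    final String sortingKey;

    TableConfiguration(Set<TableOption> options, String sortingKey) {
        this.options = options;
        this.sortingKey = sortingKey;
    }
}

class Tables {
    static TableConfiguration create(Set<TableOption> options, String sortingKey) {
        // A sorting key only makes sense on a sortable table,
        // mirroring the int-flags version above.
        if (sortingKey != null) {
            options.add(TableOption.SORTABLE);
        }
        return new TableConfiguration(options, sortingKey);
    }
}
```

Usage reads much like the bitwise version, but typos such as a misspelled flag become compile errors: Tables.create(EnumSet.of(TableOption.SORTABLE, TableOption.FILTERABLE), "PhoneNumber").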
How about having a configuration string? This way, you could encode the table settings in a succinct, yet still readable way.
As an example, that sets the table to be sortable and read-only:
defaultTable().set("sr");
In a way, these strings resemble the command-line interface.
This could be applicable to other scenarios that support the table re-use. Having a method that creates the Customers table, we can alter it in a consistent way:
customersTable().unset("asd").set("qwe");
Possibly, this DSL could be even improved by providing a delimiter character, that would separate the set and unset operations. The previous sample would then look as follows:
customersTable().alter("asd|qwe");
Furthermore, these configuration strings could be loaded from files, allowing the application to be configurable without recompilation.
As for helping a new developer, I can see the benefit in a nicely separated subproblem that can be easily documented.
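The flag-string idea above can be sketched in a few lines. The set/unset/alter names follow the examples; the character codes and everything else are made up for illustration:

```java
import java.util.HashSet;
import java.util.Set;

// Each character encodes one option, e.g. 's' = sortable, 'r' = read-only.
class TableSettings {
    private final Set<Character> flags = new HashSet<>();

    TableSettings set(String codes) {
        for (char c : codes.toCharArray()) flags.add(c);
        return this;
    }

    TableSettings unset(String codes) {
        for (char c : codes.toCharArray()) flags.remove(c);
        return this;
    }

    // Applies an "unset|set" string, mirroring the alter("asd|qwe") idea above.
    TableSettings alter(String spec) {
        String[] parts = spec.split("\\|", 2);
        unset(parts[0]);
        if (parts.length > 1) set(parts[1]);
        return this;
    }

    boolean has(char code) { return flags.contains(code); }
}
```

A real version would validate unknown codes and map each character to a builder call, but the parsing core is this small.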
What I would have done, if I didn't know SO
Assumption: there are probably way fewer than 2^(number of config flags) reasonable configurations for the table.
Figure out all the configuration combinations that are currently used.
Draw a chart or whatever, find clusters.
Find outliers and think very hard why they don't fit into those clusters: is that really a special case, or an omission, or just laziness (no one implemented full text search for this table yet)?
Pick the clusters, think hard about them, package them as methods with descriptive names and use them from now on.
This solves problem A: which one to use? Well, there is only a handful of options now. And problem B as well: if I want something special? No, you most probably don't.
I've been developing a massive role-playing game. The problem is that I'm having trouble engineering how I will manage the Item and Inventory system. Currently I have something similar to this:
public abstract class Item has 5 nested classes, all abstract and static, that represent the types of items. Every nested class has its own use(), delete() (which finalizes the class instance), and sell() (which triggers delete()) methods. They also have optional getter and setter methods, like the setAll() method, which fills all necessary fields.
Default: Has base price, tradeability boolean, String name, etc... Very flexible
Weapon: Beyond what the Default type has, it has integers for stat bonuses on being equipped (used in the equip() and unequip() methods). Interacts with public class Hero.
Equipment: Similar to Weapon, except it has an Enum field called 'EquipSlot' that determines where it is equipped.
Consumable: Similar to Default, except it has a consume() method that lets the player apply certain effects to a Hero when using it. Consuming usually means triggering the delete() method.
Special: Usually quest-related items where the 'Tradeable' boolean is static, final, and always false.
Now, the way that I make customized items is this.
First, I make a new class (Not abstract)
Then, I make it extend Item.ItemType
Then, I make a constructor which calls the setAll(info) method inside.
Then, I can use this class in other classes.
It all looks like this:
package com.ep1ccraft.Classes.Items.Defaults;

import com.ep1ccraft.apis.Item.*;

public class ItemExample extends Item.Default {
    public ItemExample() { // Constructor
        this.setAll(lots of arguments here);
    }
}
then I can do:
ItemExample something = new ItemExample();
And I have a perfect ItemExample with all the properties that I want, so I can make various instances of it and use methods like 'getName()' and that kind of stuff.
The problems come with naming the instances, as I do not know how to automatically give each instance a name different from every other instance so they don't collide. I also want to implement an inventory system that uses slots as containers and can keep stacks (stackable items only). Its main feature is that you can drag and drop items into other slots (for example, to reorganize, to move them to another inventory instance like a bank, or to place them in a hero's weapon or equipment slots, if allowed), and that you can click on an item to display a screen showing the name, description, and possible actions of the item (which trigger the previously mentioned delete() and use() methods).
Thank you for reading all that! I know that maybe I'm asking for too much, but I'll appreciate any answers anyway!
So basically, you're asking for a unique identifier for your object. There are probably countless approaches to this, but the three different approaches that immediately come to mind are:
1: A UUID; something like:
java.util.UUID.randomUUID()
Pros: A very, very simple solution...
Cons: It does generate a large number of bytes (16 plus the object itself), taking memory / disk storage, which might be an issue in an MMO
2: A global running number; something like:
class ID {
    private static volatile long id = 0;

    // Note: synchronized on a static method, since id is static;
    // an instance-level lock would not protect a static counter.
    public static synchronized long nextId() {
        return id++;
    }
}
Pros: Again, a simple solution...
Cons: Even though this is a very simple class, it does contain "volatile" and "synchronized", which might be an issue for an MMO, especially if it is used heavily. Also, what happens after X years of running time if you run out of numbers? A 64-bit long requires quite a lot of named objects to be created before it wraps, so it may not be an issue after all... you'll have to do the math yourself.
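For what it's worth, java.util.concurrent.atomic.AtomicLong gives the same running number without any synchronized blocks at all:

```java
import java.util.concurrent.atomic.AtomicLong;

class Id {
    private static final AtomicLong NEXT = new AtomicLong();

    // Lock-free: getAndIncrement uses a compare-and-swap under the hood,
    // so heavy concurrent use never blocks a thread on a monitor.
    static long nextId() {
        return NEXT.getAndIncrement();
    }
}
```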
3: Named global running numbers; something like:
class NamedID {
    private static final Map<String, Long> idMap = new HashMap<String, Long>();

    // As above: static synchronized, since the map is static.
    public static synchronized long nextId(String name) {
        Long id = idMap.get(name);
        if (id == null) {
            id = 0L;
        } else {
            id++;
        }
        idMap.put(name, id);
        return id;
    }
}
Pros: You get id's "localized" to whatever name you're using for it.
Cons: A bit more complex solution, and worse than "2" in terms of speed, since the synchronization lasts longer.
Note: I couldn't figure out how to make this last suggestion faster; I thought of using a ConcurrentHashMap, but that alone won't work, since it will not guarantee that two threads do not interfere with each other between the idMap.get and idMap.put statements.
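For the record, the per-name version can be made lock-free by combining ConcurrentHashMap.computeIfAbsent with one AtomicLong per name; computeIfAbsent is atomic, which closes exactly the get/put race mentioned above (sketch, names assumed):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

class NamedId {
    private static final ConcurrentMap<String, AtomicLong> COUNTERS =
            new ConcurrentHashMap<>();

    // computeIfAbsent atomically installs the counter for a new name,
    // and getAndIncrement is atomic too, so no two threads can ever
    // hand out the same id for the same name.
    static long nextId(String name) {
        return COUNTERS
                .computeIfAbsent(name, k -> new AtomicLong())
                .getAndIncrement();
    }
}
```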
I'm not sure where to start or what information is relevant please let me know what additional information may be useful in solving this problem.
I am developing a simple cometd application and I'm using mongodb as my storage backend. I obtain a single mongodb instance when the application starts and I use this instance for all queries. This is in fact recommended by the mongo java driver documentation as stated here: http://www.mongodb.org/display/DOCS/Java+Driver+Concurrency. I was grasping at straws thinking that the issue had something to do with thread safety but according to that link mongodb is completely thread safe.
Here's where it gets interesting. I have a class that extends BasicDBObject.
public class MyBasicDBObject extends BasicDBObject {

    private static final String MAP = "map";

    @SuppressWarnings("unchecked")
    public boolean updateMapAnd(String submap, String key, byte[] value) {
        Map<String, Map<String, byte[]>> topMap =
                (Map<String, Map<String, byte[]>>) this.get(MAP);
        Map<String, byte[]> embeddedMap = topMap.get(submap);
        byte[] oldValue = embeddedMap.get(key);
        byte[] newValue = UtilityClass.binaryAnd(oldValue, value);
        embeddedMap.put(key, newValue);
        topMap.put(submap, embeddedMap);
        this.put(MAP, topMap);
        return true;
    }

    @SuppressWarnings("unchecked")
    public boolean updateMapXor(String submap, String key, byte[] value) {
        Map<String, Map<String, byte[]>> topMap =
                (Map<String, Map<String, byte[]>>) this.get(MAP);
        Map<String, byte[]> embeddedMap = topMap.get(submap);
        byte[] oldValue = embeddedMap.get(key);
        byte[] newValue = UtilityClass.binaryXor(oldValue, value);
        embeddedMap.put(key, newValue);
        topMap.put(submap, embeddedMap);
        this.put(MAP, topMap);
        return true;
    }
}
Next two skeleton classes that extend MyBasicDBObject.
public class FirstDBObject extends MyBasicDBObject { //no code }
public class SecondDBObject extends MyBasicDBObject { //no code }
The only reason I've set up my classes this way is to improve code readability in dealing with these two objects within the same scope. This lets me do the following...
//a cometd service callback
public void updateMapObjectsFoo(ServerSession remote, Message message) {
    //locate the objects to update...
    FirstDBObject first = (FirstDBObject) firstCollection.findOne({ ... });
    SecondDBObject second = (SecondDBObject) secondCollection.findOne({ ... });

    //update them as follows
    first.updateMapAnd("default", "someKey1", newBinaryData1);
    second.updateMapAnd("default", "someKey2", newBinaryData2);

    //save (update) them to their respective collections
    firstCollection.save(first);
    secondCollection.save(second);
}
public void updateMapObjectsBar(ServerSession remote, Message message) {
    //locate the objects to update...
    FirstDBObject first = (FirstDBObject) firstCollection.findOne({ ... });
    SecondDBObject second = (SecondDBObject) secondCollection.findOne({ ... });

    /**
     * the only difference is these two calls
     */
    first.updateMapXor("default", "someKey1", newBinaryData1);
    second.updateMapXor("default", "someKey2", newBinaryData2);

    //save (update) them to their respective collections
    firstCollection.save(first);
    secondCollection.save(second);
}
The UtilityClass does exactly as the methods are named, bitwise & and bitwise ^ by iterating over the passed byte arrays.
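For reference, a minimal sketch of what such byte-array helpers typically look like (this is illustrative, not the actual UtilityClass; it assumes both arrays have equal length):

```java
class UtilityClass {

    // Bitwise AND of two equal-length byte arrays, element by element.
    static byte[] binaryAnd(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = (byte) (a[i] & b[i]);
        }
        return out;
    }

    // Bitwise XOR of two equal-length byte arrays, element by element.
    static byte[] binaryXor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = (byte) (a[i] ^ b[i]);
        }
        return out;
    }
}
```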
This is where I'm totally lost. updateMapObjectsFoo() works exactly as expected, both first and second reflect the changes in the database. updateMapObjectsBar() on the other hand only manages to properly update first.
Inspecting updateMapObjectsBar() in the debugger shows that the binary objects are in fact updated properly on both objects, but when I head over to the mongo shell to investigate, I see that first is updated in the DB and second is not. Where did I get the idea that thread safety had anything to do with it? The only difference that bugs me is that secondCollection is used by other cometd services while firstCollection is not. That seems relevant on one hand, but not on the other, since Foo works and Bar does not.
I have torn the code apart and put it back together and I keep coming back to this same problem. What in the world is going on here?
It seems I left out the most relevant part of all which is the nightmare of java generics and the mongodb driver's reliance on this feature of the language. BasicDBObject is essentially a wrapper for a Map<String, Object>. The problem is that once you store an object in that map, you must cast it back to what it was when you put it in there. Yes that may seem completely obvious, and I knew that full well before posting this question.
I cannot pinpoint what happened exactly, but I will offer this advice to Java + MongoDB users: you will be casting, A LOT, and the more complicated your data structures, the more casts you will need. Long story short, don't do this:
DBObject obj = (DBObject) collection.findOne(new BasicDBObject("_id", new ObjectId((String)anotherObj.get("objId"))));
One-liners are tempting when you are doing rapid prototypes, but when you do that over and over you are bound to make mistakes. Write more code now, and suffer less frustration later:
// DBObject is an interface, so instantiate a BasicDBObject.
DBObject query = new BasicDBObject();
String objId = (String) anotherObj.get("objId");
query.put("_id", new ObjectId(objId));
DBObject obj = (DBObject) collection.findOne(query);
I think this is annoyingly verbose, but I should expect as much when interacting directly with Mongo instead of using some kind of library to make my life easier. I have made a fool of myself on this one, but hopefully someone will learn from my mistake and save themselves a lot of frustration.
Thanks to all for your help.
It could very easily be a multi-threading issue. While you are correct that the Mongo, DB, and DBCollection objects are threadsafe if there is only one Mongo instance, DBObjects are not threadsafe. But even if they were threadsafe, your updateMapObjectsFoo/Bar methods do nothing to ensure that they are atomic operations on the database.
Unfortunately, the changes you would need to make to your code are more intense than just sprinkling a few "synchronized" keywords around. See if http://www.mongodb.org/display/DOCS/Atomic+Operations doesn't help you understand the scope of the problem and some potential solutions.