Java Collection For Ensuring Uniqueness While Providing References

I have a structure like this:
public class Foo
{
    public int A;
    public int B;
    public int C;
}
I need to add these to a collection one-by-one in such a way that I end up with no more than one copy where A, B, and C are all equal. I also need references to the objects for another class, like this:
public class Bar
{
    public Foo A;
    public Foo B;
    public Foo C;
}
I tried using a TreeSet<Foo>, which worked to ensure uniqueness, but I cannot get a reference back out of a TreeSet (only a boolean of whether or not it is/was in the set), so I can't pass that reference on to Bar. I tried using a TreeMap<Foo, Integer> along with an ArrayList<Foo>, and that works to ensure uniqueness and to allow me to get references to the objects, but it wastes a massive amount of time and memory to maintain the ArrayList and the Integers.
I need a way to say "If this Foo is not yet in the collection, add it; otherwise, give me the Foo already in the collection instead of the one I created to check for its presence."
(It just occurred to me that I could do something like TreeMap<Foo, Foo>, and that would do what I want, but it still seems like a waste, even though it's nowhere near as much of one, so I'll continue with this question in hope of enlightenment.)
(And yes, I did implement Comparable to do the uniqueness check in the trees; that part works already.)

I would use e.g. a TreeMap<Foo, Foo> object. When you put a new Foo in the map, specify it as both the key and the value. This lets you use get to return the Foo already in the collection. Note that you have to handle the case of a Foo already being in the map yourself.
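A minimal sketch of that idiom, assuming (as the question says) that Foo implements Comparable<Foo> over A, B and C; the class and method names here are just illustrative:

import java.util.TreeMap;

class FooPool {
    private final TreeMap<Foo, Foo> canonical = new TreeMap<>();

    // Returns the Foo already stored if an equal one exists; otherwise stores and returns the argument.
    Foo intern(Foo candidate) {
        Foo existing = canonical.get(candidate);
        if (existing != null) {
            return existing;
        }
        canonical.put(candidate, candidate); // key and value are the same instance
        return candidate;
    }
}

On Java 8+ the check-and-insert pair can be collapsed into canonical.putIfAbsent(candidate, candidate), which returns the previously mapped value or null.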

A solution in "Sorted collection in Java" by Neil Coffey gave me what I need: use an ArrayList<Foo> and always call Collections.binarySearch to get either the index of the element already in the list, or the point at which the element should be inserted into the list.
This keeps the list permanently sorted, so lookups are O(log n) as with a tree, while also allowing retrieval of the existing instance. Unfortunately, insertion is O(n) because elements must be shifted, but that isn't the end of the world in this case, though it's still suboptimal.
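For reference, a sketch of that binary-search approach as I understand it (again assuming Foo implements Comparable<Foo>; the class and method names are illustrative):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SortedFooList {
    private final List<Foo> sorted = new ArrayList<>();

    // Returns the instance already in the list if an equal one exists; otherwise inserts
    // the candidate at the insertion point reported by binarySearch and returns it.
    Foo getOrInsert(Foo candidate) {
        int i = Collections.binarySearch(sorted, candidate);
        if (i >= 0) {
            return sorted.get(i);      // O(log n) lookup of the existing instance
        }
        sorted.add(-i - 1, candidate); // O(n) insertion, keeping the list sorted
        return candidate;
    }
}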

To ensure uniqueness in a Set, you need to override equals() and hashCode() so that two instances of Foo with the same A, B and C are .equals().
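For example, a minimal sketch of such an override for the Foo from the question (the constructor and final fields are added for illustration):

import java.util.Objects;

public class Foo {
    public final int A, B, C; // final, per the immutability advice below

    public Foo(int a, int b, int c) { A = a; B = b; C = c; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Foo)) return false;
        Foo f = (Foo) o;
        return A == f.A && B == f.B && C == f.C;
    }

    @Override
    public int hashCode() {
        return Objects.hash(A, B, C);
    }
}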
Ideally, anything you put in a Set should be immutable (i.e. your three ints should be final). From the documentation:
Great care must be exercised if mutable objects are used as set
elements. The behavior of a set is not specified if the value of an
object is changed in a manner that affects equals comparisons while
the object is an element in the set.
Unfortunately, Set doesn't provide any method that allows you to get the actual instance - you would need a Map or another collection as you have already tried.
Update: another approach would be to create your own modified version of TreeSet based on the JDK source code, adding a method to obtain the instance you need (extending the standard TreeSet won't do what you need because the relevant fields are private, unless you use reflection to access them).

Apparently a TreeSet is based on a TreeMap, thus making this approach redundant, but I thought I'd comment on it anyway for completeness.
If a copy of a Foo object exists in the TreeSet (e.g. as reported by contains), then you can retrieve that copy using the tailSet and first methods.
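A small sketch of that retrieval, assuming the Foo/TreeSet setup from the question:

Foo lookup(TreeSet<Foo> set, Foo candidate) {
    if (!set.contains(candidate)) {
        return null;
    }
    // tailSet(candidate) starts at the smallest element >= candidate; since contains()
    // just succeeded, its first element is the one that compares equal to candidate.
    return set.tailSet(candidate).first();
}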

Related

What is the benefit for Collections.singleton() to return a Set instead of a Collection?

The Collections.singleton() method returns a Set with that single argument instead of a Collection.
Why is that? Apart from Set being a subtype of Collection, I can see no advantage... Is it only because Set extends Collection anyway, so there is no reason not to?
And yes, there is also Collections.singletonList() but this is another matter since you can access random elements from a List with .get()...
Immutable
The benefit is found in the first adjective read in that JavaDoc documentation: immutable.
There are times when you are working with code that demands a Set (or List, etc.). In your own context you may have a strict need for only a single item. To accomplish your own goal of enforcing the rule of single-item-only while needing to present that item in a set, use a Set implementation that forbids you from adding more than one item.
“Immutable” on Collections::singleton means that, once created, the resulting Set object is guaranteed to have one, and only one item. Not zero, and not more than one. No more can be added. The one item cannot be removed.
For example, imagine your code is working with an Employee object representing the CEO (Chief Executive Officer) of your company. Your code is explicitly dealing with the CEO only, so you know there can be only one such Employee object at a time, always exactly one CEO. Yet you want to leverage some existing code that creates a report for a specified collection of Employee objects. By using Collections.singleton you are guaranteed that your own code does not mistakenly hold more than one single employee, while still being able to pass a Set.
Set< Employee > ceo = Collections.singleton( new Employee( "Tim Cook" ) ) ; // Always exactly one item in this context, only one CEO is possible.
ceo.add( … ) ; // Fails, as the collection is immutable.
ceo.clear() ; // Fails, as the collection is immutable.
ceo.remove( … ) ; // Fails, as the collection is immutable.
someReport.processEmployees( ceo ) ;
Java 9: Set.of & List.of
Java 9 and later offers new interface methods Set.of and List.of to get the same effect, an immutable collection of a single element.
Set< Pet > pet = Set.of( someDog ) ;
Sibling of(…) methods are overloaded to accept any number of elements to be in the immutable collection, not just one element.
Set< Pet > pets = Set.of( someDog , someOtherDog , someCat ) ;
I'm not sure there's a "benefit" or "advantage" per se? It's just the method that returns a singleton Set, and happens to be the default implementation when you want a singleton Collection as well, since a singleton Collection happens to be a mathematical set as well.
I wondered the same thing and came across your question in my research. Here is my conclusion:
Returning a Set keeps the Collections API clean.
Here are the methods for getting a singleton Collection:
public static <T> Set<T> singleton(T o)
public static <T> List<T> singletonList(T o)
public static <K,V> Map<K,V> singletonMap(K key, V value)
What if the API designers decided on having a singletonSet method and singleton method? It would look like this:
public static <T> Collection<T> singleton(T o)
public static <T> Set<T> singletonSet(T o)
public static <T> List<T> singletonList(T o)
public static <K,V> Map<K,V> singletonMap(K key, V value)
Is the singleton method really necessary? Let's think about why we would need some of these methods.
Think about when you would call singletonList? You probably have an API that requires List instead of Collection or Set. I will use this poor example:
public void needsList(List<?> list);
You can only pass a List. needsList hopefully needs the data indexed and is not arbitrarily requesting a List instead of a Collection.
However, you could also pass a List to a method that required any Collection:
public void needsAnyCollection(Collection<?> collection);
But if that is the case, then why use a List? A List has a more complicated API and involves storing indexes. Do you really need the indexes? Would a Set not suffice? I argue that you should use a Set, because needsAnyCollection does not care about the order.
This is where singletonSet really shines. You know that if the collection is of size 1 (a singleton), then its data must be unique. A collection of size 1 is trivially a Set.
There is no need for a method which returns a singleton of type Collection, because such a collection is already a Set.
The reason singleton Collections exist is to provide a low memory collection if you know you are going to store 1 element and it is not going to be mutated. Especially in high volume services this can have significant impact due to garbage collection latency.
This applies for both Set.of("1"); and Collections.singleton("1");
Since a Set is already a Collection, returning the more constrained contract is a good thing for users of the library.
You get additional functionality without paying anything for it.
And as a user you should do the same as with any other API or library: declare the least specific contract you actually need.
So if the only thing you'll ever need to do with the structure is iterate over it in a loop, I'd suggest choosing Iterable instead of List, Set or Collection. Since a Collection is an Iterable, this works out of the box.
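For example, a trivial sketch of depending only on Iterable (the method name is illustrative):

// Callers may pass a List, a Set, or any other Collection.
static void printAll(Iterable<String> items) {
    for (String item : items) {
        System.out.println(item);
    }
}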
Not all lists are random access; take LinkedList (random access in the API, not in the implementation) as a counter-example. By the way, I agree with Louis Wasserman: a Set simply makes sense because it is closer to the mathematical definition; it just feels natural.

JVM optimisation of hashCode() on List

Imagine a simple case:
class B {
    public final String text;
    public B(String text) {
        this.text = text;
    }
}

class A {
    private List<B> bs = new ArrayList<B>();

    public B getB(String text) {
        for (B b : bs) {
            if (b.text.equals(text)) {
                return b;
            }
        }
        return null;
    }

    [getter/setter]
}
Imagine that for each instance of A, the List<B> is large and we need to call getB(String) often. However assume that it is also possible for the list to change (add/remove element, or even being reassigned).
At this stage, the average complexity for getB(String) is O(n). In order to improve that, I was wondering if we could use some clever caching.
Imagine we cache the List<B> in a Map<String, B> where the key is B.text. That would improve the performance but it won't work if the list is changed (new element or deleted element) or reassigned (A.bs points to a new reference).
To go around that I thought that, along with the Map<String, B>, we could store a hash of the list bs. When we call getB(String) method, we compute the hash of the list bs. If the hash hasn't changed, we fetch the result from the map, if it has we reload the map.
The problem is that computing the hash for a java.util.List goes through all the elements of the list and computes their hashes, which is at least O(n).
Question
What I'd like to know is whether the JVM will be faster at computing the hash for the List than going through my loop in the getB(String) method. Maybe that depends on the implementation of hashCode for B. If so, what kind of things could work? In a nutshell, I'd like to know whether this is stupid or could bring some performance improvement.
Without actually explaining why, you seem for some reason to believe that it is essential to keep the list structure as well. The only reasonable reason for this is that you need the order of the collection to be preserved. If you switch to a "plain" HashMap, the order of the values is no longer defined; in particular, it is not the order in which you added the items to the map.
If you need both to keep the order (list behaviour) and access individual items using a key, you can use a LinkedHashMap, which essentially joins the behaviour of a LinkedList and a HashMap. Even if LinkedHashMap.values() returns a collection and not a list, the list behaviour is guaranteed within the collection.
Another issue with your question is that you cannot use the list's hash code to reliably detect changes. If the hash code has changed, you can indeed be sure that the list has changed as well. But if two hash codes are identical, you still cannot be sure that the lists are actually identical. E.g. if the hash code implementation is based on strings, the hash codes for "1a" and "2B" are identical.
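For example, a small self-contained demo; the collision between these two particular strings follows from how String.hashCode is defined:

import java.util.Arrays;
import java.util.List;

public class HashCollisionDemo {
    public static void main(String[] args) {
        // Two different strings with identical hash codes:
        System.out.println("1a".hashCode() == "2B".hashCode()); // true
        List<String> l1 = Arrays.asList("1a");
        List<String> l2 = Arrays.asList("2B");
        System.out.println(l1.hashCode() == l2.hashCode());     // true, yet...
        System.out.println(l1.equals(l2));                      // ...false: the lists differ
    }
}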
If so what kind of things could work?
Simply put: don't let anything else mutate your list without you knowing about it. I suspect you currently have something like:
public List<B> getAllBs() {
    return bs;
}
... and a similar setter. If you stop doing that, and instead just have appropriate mutation methods, then you can make sure that your code is the only code to mutate the list... which means you can either remember that your map is "dirty" or just mutate the map at the same time that you mutate the list.
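A rough sketch of that encapsulation, assuming B.text values are unique (the method names are illustrative):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class A {
    private final List<B> bs = new ArrayList<>();
    private final Map<String, B> byText = new HashMap<>();

    public void addB(B b) {
        bs.add(b);
        byText.put(b.text, b);   // list and index are mutated together
    }

    public void removeB(B b) {
        bs.remove(b);
        byText.remove(b.text);
    }

    public B getB(String text) {
        return byText.get(text); // O(1) instead of scanning the list
    }
}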
You could implement your own class IndexedBArrayList which extends ArrayList<B>.
Then you add this functionality to it:
A private HashMap<String, B> index
All mutator methods of ArrayList are overridden to keep this index hash map updated in addition to calling the corresponding super-method.
A new public B getByString(String) method which uses the hash map
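A partial sketch of that idea, showing only two of the mutators that would need to be overridden (every other mutator of ArrayList, such as set, clear, addAll and the iterator's remove, needs the same treatment):

import java.util.ArrayList;
import java.util.HashMap;

class IndexedBArrayList extends ArrayList<B> {
    private final HashMap<String, B> index = new HashMap<>();

    @Override
    public boolean add(B b) {
        index.put(b.text, b);
        return super.add(b);
    }

    @Override
    public boolean remove(Object o) {
        boolean removed = super.remove(o);
        if (removed && o instanceof B) {
            index.remove(((B) o).text);
        }
        return removed;
    }

    public B getByString(String text) {
        return index.get(text);
    }
}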
From your description it does not seem that you need a List<B>.
Replace the List with a HashMap. If you need to search for Bs, the best data structure is a HashMap, not a list.

Getting the reference to a duplicate in a Set

I have a Set object and I use this set to ensure that when I add an element to it that already exists in the set, it's not added. This is the easy part: just use Set.add(). But after this is done I need the reference to the object in the Set.
What I essentially mean is having a .add() that doesn't return a boolean, but the actual object you tried to add (or, if it wasn't added, the one already in the set). Is there already a Set implementation that does this, or do I have to write my own?
At the moment I use Set.add(), and if it returns false I use an iterator to look for the one in the set. Although this works, I find it ugly, especially when using the HashSet implementation, which should be able to find the object a lot faster using hash codes. Any ideas?
EDIT: Wow, lots of answers in a relatively short time, thanks. Ok, so what I'm trying to do is create a certain datastructure that loads data from some place and creates objects from it. This data might contain duplicates, and this wouldn't be a problem if I used a set and just needed this one set, but the datastructure needs to add references to these unique objects to other objects in the datastructure, therefore I need the references to the (unique) objects in the set. Also, I can't just not load the data that is already contained in the set, because there is more (unique) data linked to it, which is also added, together with a reference to that data that was already contained in the set. For illustration purposes (because the above explanation is far from clear) I'll give an example here:
Data:
foo bar
1 3
1 4
2 5
Datastructure:
Set<Foo> totalFooSet
Set<Bar> totalBarSet
Foo:
sometype data
Set<Bar> barSet
Bar:
sometype data
Set<Foo> fooSet
This is sort of like a many-to-many relation.
I'm not sure if there is some major design flaw here, I've looked it over with some other people and we can't figure out how to do this differently. I like the idea of using the HashMap, so I'll create a subclass and add an addAndReturn() function to it.
(As @AlexR says, I'm assuming that you want a reference to the previous object equal to the one you are trying to add now)
Instead of using a Set, try using a HashMap with the same object as a key and a value. Then you can do the following:
Foo objectToAdd = // obtained the normal way
Map<Foo, Foo> pseudoSet = // this is stored somewhere
Foo result = pseudoSet.get(objectToAdd);
if (result == null) {
    pseudoSet.put(objectToAdd, objectToAdd);
    result = objectToAdd;
}
return result;
Similar to Sean's answer (which I upvoted), but possibly more reusable.
public class HashMapBackedSet<T> extends HashMap<T, T> {
    public T add(T toAdd) {
        T existing = get(toAdd);
        if (existing != null) {
            return existing;
        }
        put(toAdd, toAdd);
        return toAdd;
    }
}
If I understand you correctly, if the element you just tried to add is already contained in the set, you want the instance which is already in the set (which is equal to the one added, but not necessarily identical)?
This behavior is provided by the interners of the Google Guava library:
Interner<Object> interner = Interners.newStrongInterner();
Object objectInSet = interner.intern(otherObject);
Unfortunately, interners do not provide any other methods like iterating over their contained values, so using them as a set replacement may not be possible for you.
Another option would be a HashMap<T, T> where you store a mapping from each object to itself. Then you can get the reference to the already contained object easily by calling get(). If you don't mind that the object is always overridden, just call put() which returns exactly the object you want (the previously stored object).
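A small sketch of that put()-based variant (the method name is illustrative); note that it replaces the stored value with the new object:

Foo addOrGetPrevious(Map<Foo, Foo> canonical, Foo candidate) {
    Foo previous = canonical.put(candidate, candidate); // returns the previously stored instance, or null
    return (previous != null) ? previous : candidate;
}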
A Set cannot contain duplicate entries; the whole purpose of a set is to prevent this.
As far as I understand, you want to get a reference to the previous object equal to the one that you are trying to add now.
You do not have to iterate the set to find this object. Keep a Map with the object as both key and value and just use oldObject = map.get(newObject).
This operation is about as fast as getting an array element by index.
Wrap your set in a class that returns the object when you call add?

Java sets: why there is no T get(Object o)? [duplicate]

I understand that only one instance of any object according to .equals() is allowed in a Set and that you shouldn't "need to" get an object from the Set if you already have an equivalent object, but I would still like to have a .get() method that returns the actual instance of the object in the Set (or null) given an equivalent object as a parameter.
Any ideas/theories as to why it was designed like this?
I usually have to hack around this by using a Map and making the key and the value the same, or something like that.
EDIT: I don't think people understand my question so far. I want the exact object instance that is already in the set, not a possibly different object instance where .equals() returns true.
As to why I would want this behavior, typically .equals() does not take into account all the properties of the object. I want to provide some dummy lookup object and get back the actual object instance in the Set.
While the purity argument does make the method get(Object) suspect, the underlying intent is not moot.
There are various class and interface families that slightly redefine equals(Object). One need look no further than the collections interfaces. For example, an ArrayList and a LinkedList can be equal; their respective contents merely need to be the same and in the same order.
Consequently, there are very good reasons for finding the matching element in a set. Perhaps a clearer way of indicating intent is to have a method like
public interface Collection<E> extends ... {
    ...
    public E findMatch(Object o) throws UnsupportedOperationException;
    ...
}
Note that this API has value broader than just within Set.
As to the question itself, I don't have any theory as to why such an operation was omitted. I will say that the minimal spanning set argument does not hold, because many operations defined in the collections APIs are motivated by convenience and efficiency.
The problem is: Set is not for "getting" objects; it is for adding and testing for presence.
I understand what you are looking for; I had a similar situation and ended up using a map with the same object as key and value.
EDIT: Just to clarify: http://en.wikipedia.org/wiki/Set_(abstract_data_type)
I had the same question on a Java forum years ago. They told me that the Set interface is already defined and cannot be changed because that would break the current implementations of the Set interface. Then they started to claim bullshit, like you see here: "Set does not need the get method", and started to drill me that a Map must always be used to get elements from a set.
If you use the set only for mathematical operations, like intersection or union, then maybe contains() is sufficient. However, Set is defined in the collections framework to store data. I explained the need for get() in Set using the relational data model.
In what follows, an SQL table is like a class. The columns define attributes (known as fields in Java) and records represent instances of the class. So that an object is a vector of fields. Some of the fields are primary keys. They define uniqueness of the object. This is what you do for contains() in Java:
class Element {
    public int hashCode() { return sumOfKeyFields(); }
    public boolean equals(Object o) {
        Element e = (Element) o;
        return keyField1.equals(e.keyField1) && keyField2.equals(e.keyField2);
    }
}
I'm not aware of DB internals. But you specify key fields only once, when you define a table: you just annotate the key fields with @primary. You do not specify the keys a second time when you add a record to the table. You do not separate keys from data, as you do in mapping. SQL tables are sets. They are not maps. Yet they provide get() in addition to maintaining uniqueness and the contains() check.
In "Art of Computer Programming", introducing the search, D. Knuth says the same:
Most of this chapter is devoted to the study of a very simple search
problem: how to find the data that has been stored with a given
identification.
You see, data is stored with identification. Not identification pointing to data, but data with identification. He continues:
For example, in a numerical application we might want
to find f(x), given x and a table of the values of f; in a
nonnumerical application, we might want to find the English
translation of a given Russian word.
It looks like he starts to speak about mapping. However,
In general, we shall suppose that a set of N records has been stored,
and the problem is to locate the appropriate one. We generally require
the N keys to be distinct, so that each key uniquely identifies its
record. The collection of all records is called a table or file,
where the word "table" is usually used to indicate a small file, and
"file" is usually used to indicate a large table. A large file or a
group of files is frequently called a database.
Algorithms for searching are presented with a so-called argument, K,
and the problem is to find which record has K as its key. Although the
goal of searching is to find the information stored in the record
associated with K, the algorithms in this chapter generally ignore
everything but the keys themselves. In practice we can find the
associated data once we have located K; for example, if K appears in
location TABLE + i, the associated data (or a pointer to it) might be
in location TABLE + i + 1
That is, the search locates the key field of the record; it does not "map" the key to the data. Both are located in the same record, as fields of a Java object. That is, the search algorithm examines the key fields of the record, as it does in a set, rather than some remote key, as it does in a map.
We are given N items to be sorted; we shall call them records, and
the entire collection of N records will be called a file. Each
record Rj has a key Kj, which governs the sorting process. Additional
data, besides the key, is usually also present; this extra "satellite
information" has no effect on sorting except that it must be carried
along as part of each record.
Likewise, I see no need to duplicate the keys in an extra "key set" in his discussion of sorting.
... ["The Art of Computer Programming", Chapter 6, Introduction]
An entity set is a collection or set of all entities of a particular entity type.
[http://wiki.answers.com/Q/What_is_entity_and_entity_set_in_dbms]
The objects of single class share their class attributes. Similarly, do records in DB. They share column attributes.
A special case of a collection is a class extent, which is the
collection of all objects belonging to the class. Class extents allow
classes to be treated like relations
... ["Database System Concepts", 6th Edition]
Basically, class describes the attributes common to all its instances. A table in relational DB does the same. "The easiest mapping you will ever have is a property mapping of a single attribute to a single column." This is the case I'm talking about.
I'm so verbose on proving the analogy (isomorphism) between objects and DB records because there are stupid people who do not accept it (to prove that their Set must not have the get method)
You see in the replies how people who do not understand this say that a Set with get would be redundant? It is because their abused map, which they insist on using in place of a set, introduces the redundancy. Their call to put(obj.getKey(), obj) stores two keys: the original key as part of the object and a copy of it in the key set of the map. The duplication is the redundancy. It also involves more bloat in the code and wastes memory at runtime. I do not know about DB internals, but principles of good design and database normalization say that such duplication is a bad idea: there must be only one source of truth. Redundancy means that inconsistency may happen: the key may map to an object that has a different key. Inconsistency is a manifestation of redundancy. Edgar F. Codd proposed DB normalization precisely to get rid of redundancies and the inconsistencies they imply. The teachers are explicit about normalization: normalization will never generate two tables with a one-to-one relationship between them. There is no theoretical reason to split a single entity like this, with some fields in a single record of one table and others in a single record of another table.
So, we have 4 arguments, why using a map for implementing get in set is bad:
the map is unnecessary when we have a set of unique objects
map introduces redundancy in Runtime storage
map introduces code bloat in the DB (in the Collections)
using map contradicts the data storage normalization
Even if you are not aware of the record-set idea and data normalization, by playing with collections you may discover this data structure and algorithm yourself, as we did, and as the org.eclipse KeyedHashSet and C++ STL designers did.
I was banned from the Sun forum for pointing out these ideas. Bigotry is the only argument against reason, and this world is dominated by bigots. They do not want to see concepts and how things can be different/improved. They see only the actual world and cannot imagine that the design of the Java Collections may have deficiencies and could be improved. It is dangerous to point out rational things to such people. They teach you their blindness and punish you if you do not obey.
Added Dec 2013: SICP also says that a DB is a set of keyed records rather than a map:
A typical data-management system spends a large amount of time
accessing or modifying the data in the records and therefore requires
an efficient method for accessing records. This is done by identifying
a part of each record to serve as an identifying key. Now we represent
the data base as a set of records.
Well, if you've already "got" the thing from the set, you don't need to get() it, do you? ;-)
Your approach of using a Map is The Right Thing, I think. It sounds like you're trying to "canonicalize" objects via their equals() method, which I've always accomplished using a Map as you suggest.
I'm not sure if you're looking for an explanation of why Sets behave this way, or for a simple solution to the problem it poses. Other answers dealt with the former, so here's a suggestion for the latter.
You can iterate over the Set's elements and test each one of them for equality using the equals() method. It's easy to implement and hardly error-prone. Obviously if you're not sure if the element is in the set or not, check with the contains() method beforehand.
This isn't efficient compared to, for example, HashSet's contains() method, which does "find" the stored element, but won't return it. If your sets may contain many elements it might even be a reason to use a "heavier" workaround like the map implementation you mentioned. However, if it's that important for you (and I do see the benefit of having this ability), it's probably worth it.
So I understand that you may have two equal objects but they are not the same instance.
Such as
Integer a = new Integer(3);
Integer b = new Integer(3);
In which case a.equals(b) because they refer to the same intrinsic value but a != b because they are two different objects.
There are other Set implementations, such as an identity-based set (for example one built on IdentityHashMap), which perform a different comparison between items.
However, I think that you are trying to apply a different philosophy to Java. If your objects are equal (a.equals(b)) although a and b have a different state or meaning, there is something wrong here. You may want to split that class into two or more semantic classes which implement a common interface - or maybe reconsider .equals and .hashCode.
If you have Joshua Bloch's Effective Java, have a look at the chapters called "Obey the general contract when overriding equals" and "Minimize mutability".
Just use the Map solution... a TreeSet and a HashSet do it too, since they are backed by a TreeMap and a HashMap, so there is no penalty in doing so (actually it should be a minimal gain).
You may also extend your favorite Set to add the get() method.
I think your only solution, given some Set implementation, is to iterate over its elements to find one that is equals() -- then you have the actual object in the Set that matched.
K target = ...;
Set<K> set = ...;
for (K element : set) {
    if (target.equals(element)) {
        return element;
    }
}
If you think about it as a mathematical set, you can derive a way to find the object.
Intersect the set with a collection containing only the object you want to find. If the intersection is not empty, the only item left in the set is the one you were looking for.
public <T> T findInSet(T findMe, Set<T> inHere) {
    inHere.retainAll(Arrays.asList(findMe)); // note: this mutates the passed-in set
    if (!inHere.isEmpty()) {
        return inHere.iterator().next();
    }
    return null;
}
It's not the most efficient use of memory (and it discards the rest of the set), but it's functionally and mathematically correct.
"I want the exact object instance that is already in the set, not a possibly different object instance where .equals() returns true."
This doesn't make sense. Say you do:
Set<Foo> s = new HashSet<Foo>();
s.add(new Foo(...));
...
Foo newFoo = ...;
You now do:
s.contains(newFoo)
If you want that to only be true if an object in the set is == newFoo, implement Foo's equals and hashCode with object identity. Or, if you're trying to map multiple equal objects to a canonical original, then a Map may be the right choice.
I think the expectation is that equals truly represents some equality, not simply that the two objects have the same primary key, for example. And if equals represents two really equal objects, then a get would be redundant. The use case you want suggests a Map, and perhaps a different value for the key, something that represents a primary key rather than the whole object, and then properly implementing equals and hashCode accordingly.
Functional Java has an implementation of a persistent Set (backed by a red/black tree) that incidentally includes a split method that seems to do kind of what you want. It returns a triplet of:
The set of all elements that appear before the found object.
An object of type Option that is either empty or contains the found object if it exists in the set.
The set of all elements that appear after the found object.
You would do something like this:
MyElementType found = hayStack.split(needle)._2().orSome(hay);
Object fromSet = set.tailSet(obj).first();
if (!obj.equals(fromSet)) fromSet = null;
does what you are looking for (on a SortedSet such as TreeSet). I don't know why Java hides it.
Say, I have a User POJO with ID and name.
ID keeps the contract between equals and hashcode.
name is not part of object equality.
I want to update the name of the user based on the input from somewhere say, UI.
As the Java Set doesn't provide a get method, I have to iterate over the set in my code and update the name when I find the equal object (i.e. when the ID matches).
If there were a get method, this code could have been shorter.
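A sketch of the workaround loop described here; the User accessors are assumed from this example:

void updateName(Set<User> users, long id, String newName) {
    for (User u : users) {
        if (u.getId() == id) {   // equals/hashCode are based on the ID only
            u.setName(newName);  // safe: the name is not part of equals/hashCode
            return;
        }
    }
}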
Java now comes with all kinds of stupid things like JavaDB and the enhanced for loop; I don't understand why in this particular case they are being purists.
I had the same problem. I fixed it by converting my set to a Map, and then getting them from the map. I used this method:
public Map<MyObject, MyObject> convertSetToMap(Set<MyObject> set)
{
    Map<MyObject, MyObject> myObjectMap = new HashMap<MyObject, MyObject>();
    for (MyObject myObject : set) {
        myObjectMap.put(myObject, myObject);
    }
    return myObjectMap;
}
Now you can get items from your set by calling this method like this:
convertSetToMap(myset).get(myobject);
You can override equals in your class so that it checks only certain properties, like the ID or the name.
If you have made a request for this in the Java bug parade, list it here and we can vote it up. I think we at least need a convenience method in java.util.Collections that just takes a set and an object
and is implemented something like
Object searchSet(Set ss, Object searchFor) {
    Iterator it = ss.iterator();
    while (it.hasNext()) {
        Object s = it.next();
        if (s != null && s.equals(searchFor)) {
            return s;
        }
    }
    return null;
}
This is obviously a shortcoming of the Set API.
Simply, I want to lookup an object in my Set and update its property.
And I HAVE TO loop through my (Hash)Set to get to my object... Sigh...
I agree that I'd like to see Set implementations provide a get() method.
As one option, in the case where your Objects implement (or can implement) java.lang.Comparable, you can use a TreeSet. Then the get() type function can be obtained by calling ceiling() or floor(), followed by a check for the result being non-null and equal to the comparison Object, such as:
TreeSet<MyObject> myTreeSet = new TreeSet<>();
:
:
// Equivalent of a get() and a null-check, except for the incorrect value sitting in
// returnedMyObject in the not-equal case.
MyObject returnedMyObject = myTreeSet.ceiling(comparisonMyObject);
if ((null != returnedMyObject) && returnedMyObject.equals(comparisonMyObject)) {
:
:
}
The reason why there is no get is simple:
If you need to get the object X from the set, it is because you need something from X and you don't have the object.
If you do not have the object, then you need some means (a key) to locate it: its name, a number, whatever. That's what maps are for, right?
map.get( "key" ) -> X!
Sets do not have keys; you need to traverse them to get the objects.
So, why not add a handy get( X ) -> X?
That makes no sense, right? Because you have X already, the purists will say.
But now look at it as a non-purist, and see if you really want this:
Say I make an object Y which matches the equals of X, so that set.get(Y) -> X. Voila, then I can access the data of X that I didn't have. Say, for example, X has a method called flag() and I want the result of that.
Now look at this code:
X = set.get(Y);
So Y.equals(X) is true!
but...
Y.flag() == X.flag() is false. (Weren't they equal?)
So, you see, if a set allowed you to get the objects like that, it would surely break the basic semantics of equals. Later you are going to live with little clones of X, all claiming that they are the same when they are not.
You need a map, to store stuff and use a key to retrieve it.
I understand that only one instance of any object according to .equals() is allowed in a Set and that you shouldn't "need to" get an object from the Set if you already have an equivalent object, but I would still like to have a .get() method that returns the actual instance of the object in the Set (or null) given an equivalent object as a parameter.
The simple interface/API gives more freedom during implementation. For example, if the Set interface were reduced to just a single contains() method, we would get a set definition typical of functional programming: it is just a predicate; no objects are actually stored. The same is true for java.util.EnumSet: it contains only a bitmap, one bit per possible value.
It's just an opinion. I believe we need to understand that we have several Java classes without fields/properties, i.e. only methods. In that case equality cannot be measured by comparing fields; one such example is request handlers. See the example of a JAX-RS application below. In this context a Set makes more sense than any other data structure.
@ApplicationPath("/")
public class GlobalEventCollectorApplication extends Application {
    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<Class<?>>();
        classes.add(EventReceiverService.class);
        classes.add(VirtualNetworkEventSerializer.class);
        return classes;
    }
}
To answer your question: if you have a shallow Employee object (i.e. only the EMPID, which is used in the equals method to determine uniqueness), and you want to get the deep object by doing a lookup in the set, a Set is not the right data structure, as its purpose is different.
A List is an ordered data structure, so it follows insertion order. Hence the data you put in will be available at the exact position at which you inserted it.
List<Integer> list = new ArrayList<>();
list.add(1);
list.add(2);
list.add(3);
list.get(0); // will return value 1
Think of it as a simple array.
A Set is an unordered data structure, so it follows no order. The data you insert is not kept at any particular position.
Set<Integer> set = new HashSet<>();
set.add(1);
set.add(2);
set.add(3);
//assume it has get method
set.get(0); // what are you expecting this to return? 1?
But it could return something else entirely. Hence it does not make sense to have a get method on Set.
Note: for the explanation I used the int type; the same applies to any object type as well.
I think you've answered your own question: it is redundant.
Set provides Set#contains(Object o), which performs the equivalent equality test to your desired Set#get(Object o) and returns a boolean, as would be expected.

Java best practices, add to collection before or after object has been modified?

Say you are adding x number of objects to a collection, and before or after adding them you are modifying the objects' attributes. Would you add the element to the collection before or after the object has been modified?
Option A)
public static void addToCollection(List<MyObject> objects) {
    MyObject newObject = new MyObject();
    objects.add(newObject);
    newObject.setMyAttr("ok");
}
Option B)
public static void addToCollection(List<MyObject> objects) {
    MyObject newObject = new MyObject();
    newObject.setMyAttr("ok");
    objects.add(newObject);
}
To be on the safe side, you should modify before adding, unless there is a specific reason you cannot do this, and you know the collection can handle the modification. The example can reasonably be assumed to be safe, since the general List contract does not depend upon object attributes - but that says nothing about specific implementations, which may have additional behavior that depends upon the object's value.
TreeSet, and Maps in general, do not tolerate modifying objects after they have been inserted, because the structure of the collection depends on the attributes of the object. For trees, any attributes used by the comparator cannot be changed once the item has been added. For hash-based maps and sets, it's the hashCode that must remain constant.
So, in general, modify first and then add. This becomes even more important with concurrent collections, since adding first can lead to other collection users seeing an object before it has been assigned its final state.
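A small self-contained illustration of that hazard, using a TreeSet and a hypothetical mutable Task class:

import java.util.Comparator;
import java.util.TreeSet;

class Task {
    int priority;
    Task(int p) { priority = p; }
}

public class ModifyAfterInsert {
    public static void main(String[] args) {
        TreeSet<Task> tasks = new TreeSet<>(Comparator.comparingInt((Task t) -> t.priority));
        Task t = new Task(1);
        tasks.add(t);
        tasks.add(new Task(3));
        tasks.add(new Task(7));

        t.priority = 5;                        // mutating a field the comparator depends on...
        System.out.println(tasks.contains(t)); // ...false: the tree searches the wrong branch
        System.out.println(tasks.remove(t));   // ...and the element cannot even be removed any more
    }
}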
The example you provided won't have any issues because you're using a List collection, which doesn't care about the objects' contents.
If you were using something like a TreeMap, which internally sorts the Object keys it stores, modifying a key could put the Collection into an unexpected state. Again, this depends on whether the comparison method uses the attribute you're changing.
The safest way is to modify the object before placing it into the collection.
One of the good design rules to follow, is not to expose half-constructed object to a 3rd party subsystem.
So, according to this rule, initialize your object to the best of your abilities and then add it to the list.
If objects is an ArrayList then the net result is probably the same; however, imagine objects is a special flavor of List that fires some kind of notification event every time a new object is added to it: then the order will matter greatly.
In my opinion it depends on the attribute being set and the type of collection. If the collection is a Set and the attribute influences equals() or hashCode(), then I would definitely set the property first (this also applies to sorted lists, etc.); in other cases it is irrelevant. But for this example, where the object is being created, I would first set the attributes and then add it to the collection, because the code is better organized that way.
I think either way it's the same, personally I like B, :)
It really does boil down to what the situation requires. Functionally there's no difference.
One thing you should be careful with, is being sure you have the correct handle to the object you want to modify.
Certainly in this instance, modifying the object is part of the "create the object" thought, and so should be grouped with the constructor as such. After you "create the object" you "add it to the collection". Thus, I would do B, and maybe even add a blank line after the modification to give more emphasis on the two separate thoughts.
