Is there any way, or any Java library, that would let me traverse only selected list items (not all of them) that satisfy a condition?
For example: I have a list of employees, and I want to traverse only the employees who are managers. I don't want to apply a condition or filter at traversal time; I just want to traverse the list of managers.
For this, I can define the criteria while creating the list. Then, every time I add an item to the list, a pointer to that item is saved in another list if it satisfies the criteria.
It's like providing another view of the original list.
It can be done using a filter, but then I would have to access each list item, compare it, and then process it.
It has a memory overhead, since it maintains an extra list for each criterion, but I believe it will reduce processing time.
I expect my list to contain no more than 30 items on average.
Update
After some brainstorming, I have come up with the solution below.
View<T>
    List<T> items;
    boolean checkCondition(T item);
    boolean updateView(T item);

managerView
    boolean checkCondition(T item) {
        return item.getDesignation() == Designation.MANAGER;
    }

salaryView
    boolean checkCondition(T item) {
        :
    }

ViewableList<T> list
list.addView(managerView)
list.addView(salaryView)

ViewableList<T>
    List<View<T>> views;

    add(T item) {
        originalList.add(item);
        for (View<T> view : views) {
            if (view.checkCondition(item)) {
                view.add(item);
            }
        }
    }

    addView(View<T> view) {
        views.add(view);
    }
I can implement the insert, search, and delete operations easily. But I am still having difficulty updating a view when a field that the view depends on is updated.
Possible solutions
I could annotate the list item's field and write an aspect, so that whenever the value of the annotated field changes, it calls updateView() on the corresponding view.
Employee {
    @View(type = DesignationView.class)
    Designation designation;
}
But there is a chance that a field is used in constructing multiple views, so I would have to pass a list of view classes in the @View annotation, which looks pretty odd. Moreover, I want to avoid reflection and aspects because of their performance cost; otherwise there would be no point in all this effort.
Please let me know if you have an idea how I can implement it.
Wouldn't it be better to use a Map? The key could be the category and the value the corresponding list. When traversing, just get the entry for that key.
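A minimal sketch of what that Map-based suggestion might look like, reusing the Employee and Designation types from the question (the getDesignation() accessor is an assumption):

Map<Designation, List<Employee>> byDesignation = new HashMap<>();

// Maintain the buckets as employees are added (or build them once from an existing list).
for (Employee e : employees) {
    byDesignation.computeIfAbsent(e.getDesignation(), d -> new ArrayList<>()).add(e);
}

// Traverse managers only, without touching any other employees.
List<Employee> managers =
        byDesignation.getOrDefault(Designation.MANAGER, Collections.emptyList());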
And how do you expect such a thing to be coded in such a way that it's generic enough to be useful as a general-purpose library, rather than specific to your very narrow requirements?
That would, of course, be the only kind of implementation it makes sense to release as a standalone library.
So no, something like that isn't going to exist. You're going to have to create some of your own code.
Related
I've got loads of the following to implement.
validateParameter(field_name, field_type, field_validationMessage, visibleBoolean);
Instead of having 50-60 of these in a row, is there some form of nested hashmap/4D array I can use to build them up and loop through them?
What's the best approach for doing something like that?
Thanks!
EDIT: Was 4 items.
What you could do is create a new class that holds three values (the type, the boolean, and the name; or the fourth value, which you didn't list). Then, when creating the HashMap, all you have to do is call the method with your three values. It may seem like more work, but all you would have to do is write a simple loop to go through all of the values you need. Since I don't know exactly what it is that you're trying to do, all I can do is provide an example of what I mean. Hope it applies to your problem.
Anyway, create the class to hold the three (or four) values you need.
For example,
class Fields {
    String field_name;
    Integer field_type;
    Boolean validationMessageVisible;

    Fields(String name, Integer type, Boolean mv) {
        this.field_name = name;
        this.field_type = type;
        this.validationMessageVisible = mv;
    }
}
Then put them in a HashMap, somewhat like this:
HashMap<String, Fields> map = new HashMap<String, Fields>();
map.put(LOCAL_STRING_FOR_NAME_OF_FIELD, new Fields(LOCAL_STRING_FOR_NAME_OF_FIELD, YOUR_INTEGER, YOUR_BOOLEAN));
NOTE: This is only going to work as long as these three or four values can all be stored together. If, for whatever reason, you need the values to be stored separately, this won't work; it only works if they can be grouped together without affecting the function of the program.
This was a quick brainstorm. Not sure if it will work, but think along these lines and I believe it should work out for you.
You may have to make a few edits, but this should get you in the right direction
P.S. Sorry for it being so wordy, just tried to get as many details out as possible.
The other answer is close, but you don't need a key in this case.
Just define a class to contain your three fields. Create a List or an array of that class, then loop over it, calling the method for each combination.
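A minimal sketch of that idea; the field names, the TYPE_* constants, and the sample rows are made up for illustration:

class ValidationField {
    final String name;
    final Integer type;
    final String message;
    final boolean visible;

    ValidationField(String name, Integer type, String message, boolean visible) {
        this.name = name;
        this.type = type;
        this.message = message;
        this.visible = visible;
    }
}

// Declare the combinations once...
List<ValidationField> fields = Arrays.asList(
        new ValidationField("firstName", TYPE_STRING, "First name is required", true),
        new ValidationField("age", TYPE_NUMBER, "Age must be a number", true));

// ...and loop instead of writing 50-60 individual calls.
for (ValidationField f : fields) {
    validateParameter(f.name, f.type, f.message, f.visible);
}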
The approach I'd use is to create a POJO (or some POJOs) to store the values as attributes and validate attribute by attribute.
Since many times you're going to have the same validation per attribute type (e.g. dates and numbers can be validated by range, strings can be validated to ensure they're not null or empty, etc.), you could just iterate over these attributes using reflection (or even better, using annotations).
If you need to validate at the POJO level, you can still reuse these attribute-level validators via composition, while you add more specific validations as you're going up in the abstraction level (going up means basic attributes -> POJOs -> POJOs that contain other POJOs -> etc.).
Passing several basic types as parameters of the same method is not good because the parameters themselves don't tell much and you can easily exchange two parameters of the same type by accident in the method call.
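A rough sketch of the reflection-based, attribute-level idea; the Rule interface and ReflectiveValidator class are inventions for illustration, not an existing library:

interface Rule {
    // Returns an error message, or null if the value passes.
    String check(String fieldName, Object value);
}

class ReflectiveValidator {
    private final List<Rule> rules = new ArrayList<>();

    ReflectiveValidator addRule(Rule rule) {
        rules.add(rule);
        return this;
    }

    // Runs every rule against every declared field of the POJO.
    List<String> validate(Object pojo) throws IllegalAccessException {
        List<String> errors = new ArrayList<>();
        for (java.lang.reflect.Field field : pojo.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            Object value = field.get(pojo);
            for (Rule rule : rules) {
                String error = rule.check(field.getName(), value);
                if (error != null) {
                    errors.add(error);
                }
            }
        }
        return errors;
    }
}

Attribute-type-specific rules (not-null for strings, range checks for numbers and dates) then live in one place and get reused across every POJO.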
The user is going to choose a few items to be removed from a list.
I have two options: either pass the IDs of the selected items or their objects' addresses in memory.
The first question is: is it a correct approach to send the objects of the selected items rather than their IDs?
<input type="checkbox" name="selectedItems" value="${item}"/>
rather than
<input type="checkbox" name="selectedItems" value="${item.id}"/>
If I should send the IDs of the items: when I pass their IDs, create an object, and set the IDs, I am not able to remove them from the list. What's the best approach?
Item item = new Item();
item.setID(selectedItems.get(0));
Basket basket = (Basket) session.get(Basket.class, Long.parseLong(basket_id));
basket.getItems().remove(item); // << I can't remove them by just setting their ids!!
session.update(basket);
The List.remove method removes the first occurrence of the specified element from the list. The block of code below is adapted from the ArrayList source code:
for (int index = 0; index < size; index++) {
    if (o.equals(elementData[index])) {
        // REMOVE ITEM FROM THE LIST
    }
}
It uses the Object.equals method to check the equality of the object to be removed from the list. So you need to override the equals method in the Item class to define which items count as equal. And whenever you override equals(), you also need to override hashCode() so that the two methods stay consistent.
Now, when passing an instance of Item to remove from the list, you need to set the values of all the properties of Item that are used in the equals implementation.
But you should not use the database identifier (id) to implement equals(). Hibernate doesn't assign identifier values until an entity is saved. So, if the object is added to a Set before being saved, its hash code changes (on save) while it's contained by the Set, contrary to the contract of java.util.Set. You could use a combination of properties that is unique for each instance of Item.
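A sketch of what that might look like; the code property standing in for a unique business key is an assumption made for illustration:

public class Item {
    private Long id;        // database identifier: deliberately NOT used in equals()
    private String code;    // assumed natural/business key, unique per item

    public String getCode() { return code; }
    public void setCode(String code) { this.code = code; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Item)) return false;
        Item other = (Item) o;
        return code != null && code.equals(other.getCode());
    }

    @Override
    public int hashCode() {
        return code != null ? code.hashCode() : 0;
    }
}

With this in place, basket.getItems().remove(item) works as long as the probe object has its business key populated.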
Load the item object before deleting:
Item item = (Item) session.load(Item.class, selectedItems.get(0));
then call
basket.getItems().remove(item);
There are several approaches. If you are comfortable with HQL or SQL, I would prefer doing it with a query.
Approach one
delete from table where id in (x, y, z, ...);
Advantages: you need not load anything from the DB. You just run one SQL statement and get the response back.
Approach two
Eagerly fetch the objects into memory (of course, send the IDs from the client), iterate over the list of items, match the IDs and remove the matching items. Flush the session.
Inside the class of the object to be removed, you need to override the equals() method. Code it in such a way that if all member variables are equal, then the objects are equal. Template implementations are available in the NetBeans IDE, which add a few more checks such as the instanceof check.
Otherwise the objects' member variables could all be the same, but the objects still would not be equal, since they refer to different locations in the memory heap.
I almost invariably override the equals() and toString() methods of my model classes.
Simple and straightforward.
Do not try to remove a copy; search inside the original collection instead:
for(Iterator<Item> iterator = basket.getItems().iterator(); iterator.hasNext();)
{
Item currentItem = iterator.next();
if(selectedItems.contains(currentItem.getId()))
{
iterator.remove();
}
}
However, NEVER use value="${item}" if you have not overridden item.toString().
Could you not change your ArrayList to a HashMap and store the objects against their IDs? Then you can remove them with a single line of code.
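Roughly along these lines, assuming the basket exposed its items as a Map keyed by id instead of a List, and that selectedItems holds the ids as strings the way request parameters usually arrive (the getItemsById() accessor is hypothetical):

Map<Long, Item> itemsById = basket.getItemsById();   // hypothetical Map-based accessor

// Remove each selected item directly by its key: no equals()/hashCode() needed.
for (String selectedId : selectedItems) {
    itemsById.remove(Long.parseLong(selectedId));
}
session.update(basket);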
HQL:
session.createQuery("delete from Item i where i.id in (:iids)").setParameter("iids", item_ids).executeUpdate();
Don't render objects directly. Don't do it. That way lies pain and madness. Suppose you do this, and haven't overridden toString():
<input type="checkbox" name="selectedItems" value="${item}"/>
Then the following happens:
You fetch items into a List, rendering each using the default class.name#address implementation of toString().
The user chooses some items to remove and submits the form, which makes a new HTTP request. The original List of items is gone, so those addresses point to...well, you have no idea what they point to.
(There are other problems: if a malicious attacker checks the page source, they now know you're running a Java-based server. No reason to volunteer that information.)
Now, you might think something like "hey, I'll just put those items in my session." Now you have the same problem: most session-handling implementations will serialize the items when storing them in the session, and deserialize them into a new copy upon retrieval. Your object references now mean nothing.
You could override toString(), but why? Just pass the IDs and send them to your DB in the removal query. That way, there are no surprises if someone decides to change the toString() implementation.
I got something like this:
Criteria crit = session.createCriteria(Parent.class,"p");
parentsList = crit.createCriteria(
"childSet","c",
JoinType.LEFT_OUTER_JOIN,
Restrictions.eq("jt.2ndParentDto.pk2ndParentDto", pk2ndParent))
.list();
My query returns a list of parents with one child each or none; I already tested the logged query directly, so I am pretty sure of it.
I have to retrieve a list of children, so I am adding the parent and creating the ones that are missing.
List<ChildDto> list=new ArrayList<ChildDto>();
for(ParentDto item:parentsList){
Iterator<ChildDto> it=item.getChildSet().iterator();
if(it.hasNext()){
ChildDto dto = it.next();
dto.setParentDto(item);
list.add(dto);
}
else{
ChildDto dto = new ChildDto();
dto.setParentDto(item);
list.add(dto);
}
}
return list;
By calling item.getChildSet().iterator(), Hibernate loads the entire collection, so I cannot call item.getChildSet().iterator().hasNext() to check if there is something in the set, and I cannot call item.getChildSet().size() either, for the exact same reason...
Then how? What else is there? I am currently out of ideas. How can I get the only item of the set, if there is one?
Update: I just tried extra lazy loading, but it doesn't change things for better or worse...
item.getChildSet().iterator() still causes the entire collection to be loaded.
And when I do item.getChildSet().size(), Hibernate triggers a count... so I always get the size of the entire collection (no use).
And that's pretty much it =/
Update: I got it working with a projection, by getting a list of Object[] items and manually creating the classes.
I don't like doing this because, with a change to the HBM, you're forced to maintain queries of this kind, so I try to avoid it as much as possible.
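For readers wondering what that can look like, here is a rough guess at the projection approach; the property paths and DTO setters are assumptions, not the poster's actual code:

List<Object[]> rows = session.createCriteria(ParentDto.class, "p")
        .createAlias("p.childSet", "c", JoinType.LEFT_OUTER_JOIN)
        // the Restrictions.eq(...) filter from the original query would go here as well
        .setProjection(Projections.projectionList()
                .add(Projections.property("p.id"))
                .add(Projections.property("c.id")))
        .list();

List<ChildDto> children = new ArrayList<ChildDto>();
for (Object[] row : rows) {
    ParentDto parent = new ParentDto();
    parent.setId((Long) row[0]);          // assumed setter
    ChildDto child = new ChildDto();
    if (row[1] != null) {
        child.setId((Long) row[1]);       // assumed setter
    }
    child.setParentDto(parent);
    children.add(child);
}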
I'm not sure I understand exactly what you're asking, but I think you're looking for Hibernate "extra lazy" collections, which allow you to get some information about a collection, including the size, without initializing the entire collection, as well as load large collections into memory in batches, rather than all at once.
Can you change the query to return the child records instead? That way you won't get the whole collection. Maybe I am not being clear: can you just get the child objects from the database, then call something like getParent() to get the parents you need?
I need to implement an n:m relation in Java.
The use case is a catalog.
a product can be in multiple categories
a category can hold multiple products
My current solution is to have a mapping class that has two hashmaps.
The key of the first hashmap is the product id and the value is a list of category ids
The key to the second hashmap is the category id and the value is a list of product ids
This is totally redundant, and I need a managing class that always takes care that the data is stored/deleted in both hashmaps.
But this is the only way I found to make the following performant in O(1):
what products holds a category?
what categories is a product in?
I want to avoid full array scans or anything like that at all costs.
But there must be another, more elegant solution where I don't need to index the data twice.
Please enlighten me. I have only plain Java; no database, SQLite, or anything similar is available. I also don't really want to implement a B-tree structure if possible.
If you associate Categories with Products via a member collection, and vice versa, then you can accomplish the same thing:
public class Product {
private Set<Category> categories = new HashSet<Category>();
//implement hashCode and equals, potentially by id for extra performance
}
public class Category {
private Set<Product> contents = new HashSet<Product>();
//implement hashCode and equals, potentially by id for extra performance
}
The only difficult part is populating such a structure, where some intermediate maps might be needed.
But the approach of using auxiliary hashmaps/trees for indexing is not a bad one. After all, most indices placed on databases for example are auxiliary data structures: they coexist with the table of rows; the rows aren't necessarily organized in the structure of the index itself.
Using an external structure like this lets you keep optimizations and data separate from each other; that's not a bad thing, especially if tomorrow you want to add, say, O(1) look-ups for Products given a Vendor.
Edit: By the way, it looks like what you want is an implementation of a Multimap optimized to do reverse lookups in O(1) as well. I don't think Guava has something to do that, but you could implement the Multimap interface so at least you don't have to deal with maintaining the HashMaps separately. Actually it's more like a BiMap that is also a Multimap which is contradictory given their definitions. I agree with MStodd that you probably want to roll your own layer of abstraction to encapsulate the two maps.
Your solution is perfectly good. Remember that putting an object into a HashMap doesn't make a copy of the Object, it just stores a reference to it, so the cost in time and memory is quite small.
I would go with your first solution. Have a layer of abstraction around two hashmaps. If you're worried about concurrency, implement appropriate locking for CRUD.
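A minimal sketch of that abstraction layer: one class owns both maps so callers can never update one side without the other (the id types and method names are arbitrary choices here, and locking is omitted for brevity):

public class CatalogIndex {
    private final Map<Long, Set<Long>> categoriesByProduct = new HashMap<>();
    private final Map<Long, Set<Long>> productsByCategory = new HashMap<>();

    public void link(Long productId, Long categoryId) {
        categoriesByProduct.computeIfAbsent(productId, k -> new HashSet<>()).add(categoryId);
        productsByCategory.computeIfAbsent(categoryId, k -> new HashSet<>()).add(productId);
    }

    public void unlink(Long productId, Long categoryId) {
        Set<Long> categories = categoriesByProduct.get(productId);
        if (categories != null) categories.remove(categoryId);
        Set<Long> products = productsByCategory.get(categoryId);
        if (products != null) products.remove(productId);
    }

    // Both lookups are single O(1) map gets.
    public Set<Long> categoriesOf(Long productId) {
        return categoriesByProduct.getOrDefault(productId, Collections.emptySet());
    }

    public Set<Long> productsIn(Long categoryId) {
        return productsByCategory.getOrDefault(categoryId, Collections.emptySet());
    }
}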
If you're able to use an immutable data structure, Guava's ImmutableMultimap offers an inverse() method, which enables you to get a collection of keys by value.
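For the immutable case, a small example of how that inverse() call might be used (the string keys and values are just placeholders):

ImmutableMultimap<String, String> categoriesByProduct = ImmutableMultimap.of(
        "product-1", "category-A",
        "product-1", "category-B",
        "product-2", "category-A");

// inverse() gives the reverse view: products keyed by category.
ImmutableMultimap<String, String> productsByCategory = categoriesByProduct.inverse();
// productsByCategory.get("category-A") -> [product-1, product-2]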
Suppose you have a collection of a few hundred in-memory objects and you need to query this List to return objects matching some SQL or Criteria like query. For example, you might have a List of Car objects and you want to return all cars made during the 1960s, with a license plate that starts with AZ, ordered by the name of the car model.
I know about JoSQL; has anyone used it, or do you have experience with other/homegrown solutions?
Filtering is one way to do this, as discussed in other answers.
Filtering is not scalable, though. On the surface, its time complexity would appear to be O(n) (i.e. already not scalable if the number of objects in the collection grows), but actually, because one or more tests need to be applied to each object depending on the query, the time complexity is more accurately O(n t), where t is the number of tests applied to each object.
So performance will degrade as additional objects are added to the collection, and/or as the number of tests in the query increases.
There is another way to do this, using indexing and set theory.
One approach is to build indexes on the fields within the objects stored in your collection and which you will subsequently test in your query.
Say you have a collection of Car objects and every Car object has a field color. Say your query is the equivalent of "SELECT * FROM cars WHERE Car.color = 'blue'". You could build an index on Car.color, which would basically look like this:
'blue' -> {Car{name=blue_car_1, color='blue'}, Car{name=blue_car_2, color='blue'}}
'red' -> {Car{name=red_car_1, color='red'}, Car{name=red_car_2, color='red'}}
Then given a query WHERE Car.color = 'blue', the set of blue cars could be retrieved in O(1) time complexity. If there were additional tests in your query, you could then test each car in that candidate set to check if it matched the remaining tests in your query. Since the candidate set is likely to be significantly smaller than the entire collection, time complexity is less than O(n) (in the engineering sense, see comments below). Performance does not degrade as much, when additional objects are added to the collection. But this is still not perfect, read on.
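A hand-rolled version of that kind of index is only a few lines; the Car class and getColor() accessor are taken from the example above:

Map<String, List<Car>> carsByColor = new HashMap<>();
for (Car car : cars) {
    carsByColor.computeIfAbsent(car.getColor(), c -> new ArrayList<>()).add(car);
}

// O(1) retrieval of the candidate set; any remaining tests are applied to this
// (hopefully much smaller) set only, not to the whole collection.
List<Car> blueCars = carsByColor.getOrDefault("blue", Collections.emptyList());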
Another approach, is what I would refer to as a standing query index. To explain: with conventional iteration and filtering, the collection is iterated and every object is tested to see if it matches the query. So filtering is like running a query over a collection. A standing query index would be the other way around, where the collection is instead run over the query, but only once for each object in the collection, even though the collection could be queried any number of times.
A standing query index would be like registering a query with some sort of intelligent collection, such that as objects are added to and removed from the collection, the collection would automatically test each object against all of the standing queries which have been registered with it. If an object matches a standing query then the collection could add/remove it to/from a set dedicated to storing objects matching that query. Subsequently, objects matching any of the registered queries could be retrieved in O(1) time complexity.
The information above is taken from CQEngine (Collection Query Engine). This basically is a NoSQL query engine for retrieving objects from Java collections using SQL-like queries, without the overhead of iterating through the collection. It is built around the ideas above, plus some more. Disclaimer: I am the author. It's open source and in maven central. If you find it helpful please upvote this answer!
I have used Apache Commons JXPath in a production application. It allows you to apply XPath expressions to graphs of objects in Java.
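A hedged sketch of what that can look like with org.apache.commons.jxpath.JXPathContext; the Garage wrapper bean and its cars property are assumptions, and the exact predicate syntax should be checked against the JXPath documentation:

Garage garage = new Garage(cars);   // assumed bean exposing a getCars() collection
JXPathContext context = JXPathContext.newContext(garage);

// Iterate over 1960s cars whose license starts with "AZ".
Iterator<?> matches = context.iterate(
        "cars[year >= 1960 and year < 1970 and starts-with(license, 'AZ')]");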
Yes, I know it's an old post, but technologies appear every day and the answer changes over time.
I think this is a good problem to solve it with LambdaJ. You can find it here:
http://code.google.com/p/lambdaj/
Here you have an example:
Look for active customers (iterative version):
List<Customer> activeCustomers = new ArrayList<Customer>();
for (Customer customer : customers) {
if (customer.isActive()) {
activeCustomers.add(customer);
}
}
LambdaJ version
List<Customer> activeCustomers = select(customers,
having(on(Customer.class).isActive()));
Of course, having this kind of beauty has a performance impact (a small one... about 2x on average), but can you find more readable code?
It has many, many features; another example is sorting:
Sort Iterative
List<Person> sortedByAgePersons = new ArrayList<Person>(persons);
Collections.sort(sortedByAgePersons, new Comparator<Person>() {
public int compare(Person p1, Person p2) {
return Integer.valueOf(p1.getAge()).compareTo(p2.getAge());
}
});
Sort with LambdaJ
List<Person> sortedByAgePersons = sort(persons, on(Person.class).getAge());
Update: since Java 8 you can use lambda expressions out of the box, like:
List<Customer> activeCustomers = customers.stream()
.filter(Customer::isActive)
.collect(Collectors.toList());
Continuing the Comparator theme, you may also want to take a look at the Google Collections API. In particular, they have an interface called Predicate, which serves a similar role to Comparator, in that it is a simple interface that can be used by a filtering method, like Sets.filter. They include a whole bunch of composite predicate implementations, to do ANDs, ORs, etc.
Depending on the size of your data set, it may make more sense to use this approach than a SQL or external relational database approach.
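For instance, filtering with a Predicate via Collections2.filter (a close cousin of the Sets.filter mentioned above); the Car accessors are taken from the original example:

Collection<Car> azCars = Collections2.filter(cars, new Predicate<Car>() {
    @Override
    public boolean apply(Car car) {
        return car.getYear() >= 1960 && car.getYear() < 1970
                && car.getLicense().startsWith("AZ");
    }
});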
If you need a single concrete match, you can have the class implement Comparator, then create a standalone object with all the hashed fields included and use it to return the index of the match. When you want to find more than one (potentially) object in the collection, you'll have to turn to a library like JoSQL (which has worked well in the trivial cases I've used it for).
In general, I tend to embed Derby into even my small applications, use Hibernate annotations to define my model classes and let Hibernate deal with caching schemes to keep everything fast.
I would use a Comparator that takes a range of years and license plate pattern as input parameters. Then just iterate through your collection and copy the objects that match. You'd likely end up making a whole package of custom Comparators with this approach.
The Comparator option is not bad, especially if you use anonymous classes (so as not to create redundant classes in the project), but eventually when you look at the flow of comparisons, it's pretty much just like looping over the entire collection yourself, specifying exactly the conditions for matching items:
for (Car car : cars) {
if (1959 < car.getYear() && 1970 > car.getYear() &&
car.getLicense().startsWith("AZ")) {
result.add(car);
}
}
Then there's the sorting... that might be a pain in the backside, but luckily there's the Collections class and its sort methods, one of which receives a Comparator...
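For example, sorting the filtered result by model name with an anonymous Comparator (getModel() is an assumed accessor, in the spirit of the Car example above):

Collections.sort(result, new Comparator<Car>() {
    public int compare(Car a, Car b) {
        return a.getModel().compareTo(b.getModel());
    }
});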