I have a Redis set. This set can store 20 members maximum (added with the SADD command).
My problem is: I need to update those members when needed. Each member needs that modification at most 10 times. The set members are JSON strings. The only solution on my mind right now is to get all members, update them, and recreate the set. But that seems idiotic.
I know there is ZADD for sorted sets, with its score support, which seems suitable since I also need to update score-like data in the JSON. But I just wonder: is there any method to update members efficiently, or is updating a member simply not acceptable the Redis way?
Note: the Redis datastore is used by both Node.js and Java clients.
Set members themselves are immutable - you can add new members or remove existing ones. That's it.
Given that a set is an unordered collection of unique members, consider the possible outcomes if set members were theoretically modifiable, when the new value for a member:
1. is identical to the old value - no change to the set
2. already exists in the set - equivalent to deleting that member
3. is neither 1 nor 2 - equivalent to deleting the old member and adding a new one
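So the usual way to "update" a member is a remove-then-add, ideally wrapped in MULTI/EXEC or a Lua script so readers never see the intermediate state. A minimal sketch of the semantics in plain Java (no Redis client needed; the Redis commands are shown in the comments, and the JSON values are made up for illustration):

```java
import java.util.HashSet;
import java.util.Set;

public class SetMemberUpdate {
    // Redis-side equivalent (ideally inside MULTI/EXEC or a Lua script):
    //   SREM mykey <oldJson>
    //   SADD mykey <newJson>
    // This plain-Java HashSet mirrors the three outcomes listed above.
    static void update(Set<String> set, String oldValue, String newValue) {
        set.remove(oldValue); // SREM
        set.add(newValue);    // SADD
    }

    public static void main(String[] args) {
        Set<String> s = new HashSet<>();
        s.add("{\"id\":1,\"score\":10}");
        s.add("{\"id\":2,\"score\":20}");

        // Outcome 3: new value not in the set -> old member replaced by new one
        update(s, "{\"id\":1,\"score\":10}", "{\"id\":1,\"score\":11}");

        // Outcome 2: new value already exists -> net effect is a deletion
        update(s, "{\"id\":1,\"score\":11}", "{\"id\":2,\"score\":20}");

        System.out.println(s); // one member left
    }
}
```

With 20 members at most, the cost of the remove-and-add round trip is negligible either way.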
I just implemented the integration of Hibernate Search with Elasticsearch using hibernate search 5.8 and ES 5.5.
I have several fields created specifically for sorting, and they are all called [field]Sort.
When I was testing it locally, the first time I let Hibernate create the indexes, it created the String sort fields like this:
nameSort -> text
nameSort.keyword -> keyword
I realized that I should use the suffixed field for sorting.
But then, when I destroyed my Elasticsearch cluster to start over, it didn't create the suffixed fields; it just created the sort fields directly as keyword.
I recreated the cluster 5 or more times again and it never created the suffixed fields again.
When I finally sent my changes to our staging environment, it created the suffixed fields again, causing my queries to fail, because they are trying to sort by a text field, instead of a keyword field.
Now, I'm really not sure of why it sometimes creates the suffix and sometimes doesn't.
Is there any rule?
Is there a way to avoid it creating 2 fields and making it always create only one keyword field with exactly the name I gave it?
Here's an example of a sort field:
@Field(name = "nameSort", analyze = Analyze.NO, store = Store.YES, index = Index.NO)
@SortableField(forField = "nameSort")
public String getNameSort() {
    return name != null ? name.toLowerCase(Locale.ENGLISH) : null;
}
Thanks in advance for any help.
Hibernate Search does no such thing as creating a separate keyword field for text fields. It creates either a text field or a keyword field, depending on whether the field should be analyzed. In your case, the field is not analyzed, so it should create a keyword field.
Now, Hibernate Search is not alone here, and this behavior could stem from the Elasticsearch cluster itself. Did you check whether you have particular index templates on your Elasticsearch cluster? It could lead to Elasticsearch creating a keyword field whenever Hibernate Search creates a text property.
On a side note, you may be interested to know that Hibernate Search 5.8 allows defining normalizers (the same thing as Elasticsearch normalizers), which would allow you to annotate the getName() getter directly and avoid doing the lowercase conversion yourself. See this blog post for more information.
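A normalizer-based mapping might look roughly like this (a sketch against the Hibernate Search 5.8 annotations; the entity and field names are taken from the question, but double-check the exact factory and annotation names against the blog post):

```java
@Entity
@Indexed
@NormalizerDef(name = "lowercase",
        filters = @TokenFilterDef(factory = LowerCaseFilterFactory.class))
public class Book {

    // Normalized (lowercased) but not tokenized, so Hibernate Search
    // maps this as a single keyword field with exactly this name.
    @Field(name = "nameSort", normalizer = @Normalizer(definition = "lowercase"))
    @SortableField(forField = "nameSort")
    private String name;
}
```

This removes the need for the extra getNameSort() getter entirely.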
I'm attempting to limit the planning variables that can be associated with a particular entity. In the OptaPlanner manual in section 4.3.4.2.2, an example is shown, but it isn't clear how the list of variables should be generated. What should the list contain? Are these planning variables themselves? Can they be copies? If copies are allowed, then how are they compared? If not, the planning variable is not in scope when defining the planning entity - I realize that this is a Java question, but it isn't apparent how to access the list of planning variables from the planning entity definition.
Is this a 6.1 feature that was not supported in earlier versions?
Will the Working Memory size be constrained by using this feature? That is my goal.
Your assistance is greatly appreciated!
Here's the example from the manual:
@PlanningVariable
@ValueRange(type = ValueRangeType.FROM_PLANNING_ENTITY_PROPERTY, planningEntityProperty = "possibleRoomList")
public Room getRoom() {
    return room;
}

public List<Room> getPossibleRoomList() {
    return getCourse().getTeacher().getPossibleRoomList();
}
Let's set the terminology straight first: the planning variable (for example getRoom() in the example) has a value range (which is a list of planning values) that can differ from entity instance to entity instance.
About such a List of planning values:
Each entity has its own List instance, although multiple entities can share the same List instance if they have the exact same value range.
No copies: a planning value instance should only exist once in a Solution. So 2 entities with different value ranges, but with the same planning value in both ranges, should use the same planning value instance.
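The "no copies" rule can be illustrated in plain Java (Room and Lecture here are hypothetical stand-ins for your planning classes, with the OptaPlanner annotations left out):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-ins for the planning classes in the question,
// with the OptaPlanner annotations left out.
class Room {
    final String name;
    Room(String name) { this.name = name; }
}

class Lecture {
    private final List<Room> possibleRoomList; // this entity's value range
    Lecture(List<Room> possibleRoomList) { this.possibleRoomList = possibleRoomList; }
    List<Room> getPossibleRoomList() { return possibleRoomList; }
}

public class ValueRangeDemo {
    public static void main(String[] args) {
        Room a = new Room("A");
        Room b = new Room("B");
        // Two entities with the exact same value range may share one List instance...
        List<Room> shared = Arrays.asList(a, b);
        Lecture l1 = new Lecture(shared);
        Lecture l2 = new Lecture(shared);
        // ...and either way, the planning value instances themselves are never copied.
        System.out.println(l1.getPossibleRoomList().get(0) == l2.getPossibleRoomList().get(0)); // true
    }
}
```

Sharing the List instance (rather than duplicating it per entity) is also what keeps working memory small, which is the goal stated in the question.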
Does TopLink reflect the changes if the ordering of the elements has changed? I have the mapping below:
ManyToManyMapping dummyMapping = new ManyToManyMapping();
dummyMapping.setAttributeName("dummy");
dummyMapping.setReferenceClass(Dummy.class);
dummyMapping.useBasicIndirection();
//aggregationProvidersMapping.useCollectionClass(java.util.ArrayList.class);
dummyMapping.setRelationTableName("DUMMY");
dummyMapping.addSourceRelationKeyFieldName("dummy1.ID", "dummy2.ID");
dummyMapping.addTargetRelationKeyFieldName("dummy1.ORGID", "dummy2.id");
descriptor.addMapping(dummyMapping);
What is the default collection class used if I don't specify any class via the "useCollectionClass"?
"dummy" is using an ArrayList, so the ordering of the elements is maintained in memory. If only the ordering of the elements within the "dummy" attribute has changed (no additions or deletions), does TopLink reflect these changes to the DB?
For a ManyToManyMapping, there is no way in which the order can be stored. I resolved this by changing to a OneToManyMapping.
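For completeness, the OneToMany variant looks roughly like this (a sketch only; the field and table names are placeholders, and the method names should be double-checked against your TopLink version):

```java
OneToManyMapping dummyMapping = new OneToManyMapping();
dummyMapping.setAttributeName("dummy");
dummyMapping.setReferenceClass(Dummy.class);
dummyMapping.useBasicIndirection();
// With a OneToMany there is no relation table; the target rows carry the
// foreign key back to the source, so an order column on the target table
// can be maintained alongside it by the application.
dummyMapping.addTargetForeignKeyFieldName("DUMMY.ORGID", "SOURCE.ID");
descriptor.addMapping(dummyMapping);
```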
The legacy code I'm working on uses 21 numerically-named attributes in a class, for 3 different things (let's call them "firstThing", "secondThing", and "thirdThing").
So I have the firstThing1, firstThing2, ... firstThing7 attributes in my class, and the same for secondThing and thirdThing.
Everywhere objects of that class are used, the code is just copied 7 times, each copy doing the same thing apart from using the correct numerically-named attribute. Not so great.
Instead of changing the whole picture and redesigning the class, I wanted to at least improve the functions I'm working on: replace the redundant code with a loop, adding the values to a specific ArrayList where they were previously assigned. Now I would like to know: is there a way to take all those values from my ArrayList and assign them to the corresponding numerically-named attributes? Or a way to test the length of the different ArrayLists and assign values to that many attributes? Or should I just copy
if (listOfFirstThings.size() >= 1)
    myObject.setFirstThing1(listOfFirstThings.get(0));
if (listOfFirstThings.size() >= 2)
    myObject.setFirstThing2(listOfFirstThings.get(1));
...
21 times to assign everything I need ?
Redesigning the class is the way to go. You've effectively got three collections - which should quite possibly be one collection, with each element having three properties.
Java just isn't designed to use execution-time-generated variable names. You can do it with reflection, but I would strongly encourage you to fix it properly right now. (I'd actually do this as a refactoring step before trying to add whatever new feature you're working on.)
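If you really are stuck with the current class for now, the reflection route mentioned above looks roughly like this (a sketch assuming the setters are public, named setFirstThing1..setFirstThing7, and take a String, as in the question):

```java
import java.lang.reflect.Method;
import java.util.List;

// Hypothetical stand-in for the legacy class from the question.
class Legacy {
    String firstThing1, firstThing2;
    public void setFirstThing1(String v) { firstThing1 = v; }
    public void setFirstThing2(String v) { firstThing2 = v; }
}

public class NumberedSetters {
    // Invokes setFirstThing1(values.get(0)), setFirstThing2(values.get(1)), ...
    // on the target. Brittle by design: a typo in the prefix or a missing
    // setter only fails at runtime, which is why refactoring is the better fix.
    static void assignAll(Object target, String setterPrefix, List<String> values)
            throws Exception {
        for (int i = 0; i < values.size(); i++) {
            Method setter = target.getClass()
                    .getMethod(setterPrefix + (i + 1), String.class);
            setter.invoke(target, values.get(i));
        }
    }

    public static void main(String[] args) throws Exception {
        Legacy legacy = new Legacy();
        assignAll(legacy, "setFirstThing", java.util.Arrays.asList("one", "two"));
        System.out.println(legacy.firstThing1 + " " + legacy.firstThing2); // one two
    }
}
```

The same call with "setSecondThing" and "setThirdThing" covers the other two groups, so the 21 copies collapse into three loop calls. But again: this trades compile-time safety for brevity, so treat it as a stopgap, not the fix.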
I am trying to count the number of outgoing relationships of a particular type a node has. My code currently looks like this:
int count = 0;
for (Relationship r : node.getRelationships(RelationshipTypes.MODIFIES, Direction.OUTGOING)) {
    count++;
}
return count;
The return type of getRelationships is Iterable so I can't use size() or equivalent. I am trying to avoid having to pull every relationship out of the database because some nodes have lots of relationships ( > 5 million). Is there a faster way of doing this?
No. The way Neo4j stores relationships on disk for a node is a linked list, and it does not keep any kind of statistics for nodes or relationships. In order to get a count, you will have to go through all relationships of that type for the node.
Even with a cache, which stores them more efficiently, the system still does not keep a per-type count. Your method is the best method.
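The loop can at least be factored into a tiny helper; it is still O(n) over that node's relationships, which is the best the store allows. A sketch in plain Java (nothing Neo4j-specific, so this hypothetical `count` works for any Iterable):

```java
public class IterableCount {
    // Same cost as the loop in the question: walks the Iterable once.
    static <T> int count(Iterable<T> iterable) {
        int count = 0;
        for (T ignored : iterable) {
            count++;
        }
        return count;
    }

    // Usage with the Neo4j API from the question:
    // int n = IterableCount.count(
    //         node.getRelationships(RelationshipTypes.MODIFIES, Direction.OUTGOING));
}
```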
I would try to store the outgoing relationships in a data structure and get the size of that structure. This may take more time when the objects are initialized, but it seems like the easiest way to get the size quickly afterwards.
If node.getRelationships(RelationshipTypes.MODIFIES, Direction.OUTGOING) returned a type of Collection, then to know the number of outgoing relationships of a particular type a node has, you could simply use the following:
int count = node.getRelationships(RelationshipTypes.MODIFIES, Direction.OUTGOING).size();
I see you are using the Neo4j API. The other way would be to go with the TinkerPop Gremlin query language, which is available for both Groovy and Scala, but it will do the same thing internally. As far as I know, Neo4j gives you access through an iterator for performance reasons. For instance, you could have a million relationships but want to paginate through the results on the fly. It would be really slow if Neo4j always returned a collection of relationships. That's why it returns an iterator and gives you access to the relationships on the fly; they are not retrieved from the DB until you need them.
So I would say no. I hope I could help you.