The spring-data-redis module contains the RedisAtomicLong class. In this class you can see:
public boolean compareAndSet(final long expect, final long update) {
    return generalOps.execute(new SessionCallback<Boolean>() {
        @Override
        @SuppressWarnings("unchecked")
        public Boolean execute(RedisOperations operations) {
            for (;;) {
                operations.watch(Collections.singleton(key));
                if (expect == get()) {
                    generalOps.multi();
                    set(update);
                    if (operations.exec() != null) {
                        return true;
                    }
                }
                {
                    return false;
                }
            }
        }
    });
}
My question is: why does this work?
generalOps.multi() starts the transaction after get() is invoked. That means there is a possibility that two different threads (or even clients) could change the value, and both of them would succeed.
Does operations.watch prevent this somehow? The JavaDoc doesn't explain the purpose of this method.
PS: A minor question: why for (;;)? There is always exactly one iteration.
Q: Does operations.watch prevent it somehow?
YES.
Quoting from the Redis documentation about transactions:
WATCH is used to provide a check-and-set (CAS) behavior to Redis transactions.
WATCHed keys are monitored in order to detect changes against them. If at least one watched key is modified before the EXEC command, the whole transaction aborts, and EXEC returns a Null reply to notify that the transaction failed.
You can learn more about Redis transactions from that documentation.
Q: Why for (;;)? There is always one iteration.
It seems the code you've posted is very old. From Google's cache of this URL, I can see the code you provided, which dates back to Oct 15th, 2012!
The latest code looks much different:
compareAndSet method
CompareAndSet class
Does operations.watch prevent it somehow?
YES. After watching a key, if the key has been modified before the transaction finishes, EXEC will fail. So if EXEC succeeds, the value is guaranteed to be unchanged by others.
Why for (;;)? There is always one iteration.
In your case, it seems the infinite loop is redundant.
However, if you want to implement a check-and-set operation that modifies the value based on the old value, the infinite loop is necessary. Check this example from the Redis docs:
WATCH mykey
val = GET mykey
val = val + 1
MULTI
SET mykey $val
EXEC
Since EXEC might fail, you need to retry the whole process in a loop until it succeeds.
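For illustration, here is a rough Java sketch of that retry loop using the Jedis client (the key name and increment logic are assumptions for the example, and it presumes the key already holds a numeric value). In the Jedis versions I've used, exec() returns null when a watched key was modified, which is the signal to retry:

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class WatchRetryExample {
    // Check-and-set increment: keep retrying until EXEC succeeds.
    static long incrementWithCas(Jedis jedis, String key) {
        for (;;) {
            jedis.watch(key);                          // WATCH mykey
            long val = Long.parseLong(jedis.get(key)); // val = GET mykey
            long newVal = val + 1;                     // val = val + 1
            Transaction tx = jedis.multi();            // MULTI
            tx.set(key, Long.toString(newVal));        // SET mykey $val
            List<Object> result = tx.exec();           // EXEC
            if (result != null) {                      // null => aborted by WATCH
                return newVal;
            }
            // Another client changed the key between WATCH and EXEC; retry.
        }
    }
}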
The RedisAtomicLong.compareAndSet implementation is not optimal, since it requires 5 requests to Redis.
Redisson, a Redis Java client, provides a more efficient implementation.
The org.redisson.RedissonAtomicLong#compareAndSetAsync method is implemented using an atomic EVAL script:
"local currValue = redis.call('get', KEYS[1]); "
+ "if currValue == ARGV[1] "
+ "or (tonumber(ARGV[1]) == 0 and currValue == false) then "
+ "redis.call('set', KEYS[1], ARGV[2]); "
+ "return 1 "
+ "else "
+ "return 0 "
+ "end",
This script requires only a single request to Redis.
Usage example:
RAtomicLong atomicLong = redisson.getAtomicLong("myAtomicLong");
atomicLong.compareAndSet(1L, 2L);
Related
I'm trying to use OptaPlanner to replace myself in scheduling our work planning.
The system has a MySQL database containing the necessary information and relationships. For this issue I'll only use the three tables I need:
Employees --> Have Skills
Jobs --> Have Skills
Skills
In Drools I have the rule:
rule 'required Skills'
when
Job(employee != null, missingSkillCount > 0, $missingSkillCount : missingSkillCount)
then
scoreHolder.addHardConstraintMatch(kcontext, -10 * $missingSkillCount);
end
In the Job class I have a function getMissingSkillCount():
public int getMissingSkillCount() {
if (this.employee == null) {
return 0;
}
int count = 0;
for (Skill skill : this.reqskills) {
if(!this.employee.getSkills().contains(skill)) {
count++;
}
}
return count;
}
When I run my program, OptaPlanner returns that none of my workers have any skills...
However, when I manually use this function (adapted to accept an Employee as a parameter): public int getMissingSkillCount(Employee employee), it does return the correct values.
I'm puzzled! I somehow understand that contains is checking for the same object, instead of the content of the object. But then I don't understand how to do this efficiently...
1) Are your Jobs in the Drools working memory? I presume they are your @PlanningEntity and the instances are in a @PlanningEntityCollectionProperty on your @PlanningSolution, so they will be. You can verify this by just matching a rule on Job() and doing a System.out.println.
2) Try writing the constraint as a ConstraintStream (see docs, and the sketch after this list) and putting a debug breakpoint in the getMissingSkillCount() > 0 lambda to see what's going on.
3) Temporarily turn on FULL_ASSERT to validate there is no score corruption.
4) Turn on DEBUG and then TRACE logging for optaplanner, to see what's going on inside.
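For reference, a rough sketch of the ConstraintStream version of that rule, assuming a recent OptaPlanner 8.x API, a HardSoftScore, and the Job getters implied by your description (all of these are assumptions, not verified against your project):

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.ConstraintProvider;

public class SchedulingConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory factory) {
        return new Constraint[] { requiredSkills(factory) };
    }

    // Mirrors the Drools rule: -10 hard per missing skill on each
    // assigned job. A breakpoint in the filter lambda shows exactly
    // what getMissingSkillCount() returns during solving.
    private Constraint requiredSkills(ConstraintFactory factory) {
        return factory.forEach(Job.class)
                .filter(job -> job.getEmployee() != null
                        && job.getMissingSkillCount() > 0)
                .penalize("required Skills",
                        HardSoftScore.ofHard(10),
                        Job::getMissingSkillCount);
    }
}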
Still wondering what makes the difference between letting OptaPlanner run getMissingSkillCount() and using it "manually".
I fixed it by overriding equals(); that should have been my first clue!
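For anyone hitting the same issue, here is a minimal sketch of such an override on the Skill class; the id field is an assumption, so base equality on whatever actually identifies a skill in your model, and remember that hashCode() must be overridden consistently with equals():

import java.util.Objects;

public class Skill {
    private Long id; // hypothetical identifier field

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Skill)) return false;
        return Objects.equals(id, ((Skill) o).id);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(id);
    }
}

With this in place, employee.getSkills().contains(skill) compares skills by content rather than by object identity.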
I'm new to Webflux and I'm trying to implement this scenario:
client asks for data
if the data is already present in the Redis cache => return the cached data
otherwise, query a remote service for the data
I've written this code:
ReactiveRedisOperations<String, Foo> redisOps;
private Mono<Foo> getFoo(String k) {
return this.redisOps.opsForValue().get(k)
.map(f -> this.logCache(k, f))
.switchIfEmpty(this.queryRemoteService(k));
}
private void logCache(String k, Foo f) {
this.logger.info("Foo # {} # {} present in cache. {}",
k,
null != f ? "" : "NOT",
null != f ? "" : "Querying remote");
}
private Mono<Foo> queryRemoteService(String k) {
this.logger.info("Querying remote service");
// query code
}
It prints:
"Querying remote service"
"Foo # test_key # present in cache"
How can I ensure that switchIfEmpty is called only if cached data is not present?
Edit
Following Michael Berry's answer, I refactored my code as follows:
private Mono<Foo> getFoo(String k) {
this.logger.info("Trying to get cached {} data", k);
this.logger.info(this.redisOps.hasKey(k).block() ? "PRESENT" : "NOT present");
return this.redisOps.opsForValue().get(k)
.switchIfEmpty(this.queryRemoteService(k));
}
private Mono<Foo> queryRemoteService(String k) {
this.logger.info("Querying remote service");
// query code
}
Now my output is this:
Trying to get cached test_key data
PRESENT
Querying provider
So it seems that it is executed only once, but I still can't prevent switchIfEmpty from being executed. I'm sure that Redis contains data for that key.
This line:
.map(f -> this.logCache(k, f))
...is rather odd, as you're not mapping anything to anything; you're instead performing a side effect (logging the value). doOnNext() (rather than map()) would be a much more sensible choice here.
However, I digress, that's not the primary issue here.
How can I ensure that switchIfEmpty is called only if cached data is not present?
It's that way already, but your logging isn't doing what you think. The key concept likely causing you an issue here is that null can never be propagated through a reactive stream. If the stream is empty as it is in this example, then nothing is propagated at all.
Therefore, in your code, neither map() (nor doOnNext() if that were used) will be called, so your "present in cache" line won't be written to the logs (since there's no value to map, and no value to call a side effect against.) Checking whether the value is null in logCache() is therefore pointless - it will never be null.
As far as I see it, your log output here therefore must be the result of two invocations of getFoo():
The first was not present in the cache, so map() wasn't called, switchIfEmpty() switched, and "Querying remote service" was printed;
The second was present in the cache, so your "present in cache" line was printed, switchIfEmpty() wasn't called, and therefore "Querying remote service" wasn't printed.
To make your logging make sense, you should remove the conditional logic from logCache(), and add a "not present in cache" line to the queryRemoteService() method.
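A minimal sketch of that restructuring, reusing the names from the question. One extra wrinkle worth noting (and a likely cause of the edited output above): switchIfEmpty() takes an already-built Mono, so queryRemoteService(k) is invoked eagerly while the pipeline is assembled, and its log line prints even on a cache hit. Wrapping the fallback in Mono.defer() postpones that work until the cache lookup has actually completed empty:

private Mono<Foo> getFoo(String k) {
    return this.redisOps.opsForValue().get(k)
            // Runs only when a cached value is actually emitted.
            .doOnNext(f -> this.logger.info("Foo # {} # present in cache", k))
            // Mono.defer builds the fallback lazily, only on empty.
            .switchIfEmpty(Mono.defer(() -> this.queryRemoteService(k)));
}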
I am currently working on a Java project in which I am trying to learn the ins and outs. In previous projects, I have used Java reflection in order to create a toString() by calling each getter in an object and parsing it to display the value. This method has been a helpful, clean, and dynamic way to display the data.
Below is a heavily simplified version of my code:
private static String objectToString(Object o) {
    LOGGER.debug("entering ObjectStringUtils::objectToString()");
    ....
    Class<?> oClass = o.getClass();
    String className = oClass.getName();
    Method[] methods = oClass.getMethods();
    for (Method m : methods) {
        if ([method is a getter]) {
            String methodName = null;
            Object value;
            try {
                methodName = m.getName();
                LOGGER.debug("Invoking " + className + "::" + methodName);
                value = m.invoke(o);
                LOGGER.debug("Invoked " + className + "::" + methodName);
            } catch (Exception e) {
                e.printStackTrace();
                value = null;
            }
            LOGGER.debug(methodName + " -> " + value);
        }
    }
}
This produces logger output which looks like this:
14:47:49,478 [] DEBUG ObjectStringUtils:? - Invoking org.hibernate.impl.SessionImpl::isOpen
14:47:49,613 [] DEBUG ObjectStringUtils:? - Invoked org.hibernate.impl.SessionImpl::isOpen
14:47:49,613 [] DEBUG ObjectStringUtils:? - isOpen -> true
Notice that it took Java 135 milliseconds (49,613 − 49,478) to call the function. It takes this long to perform the reflection on any method in any class, even if the method is only a standard getter that performs no logic other than returning a value. This means it takes far too long to perform the operation when multiple nested values are involved. When I used reflection previously on WebSphere 7, it took a tiny fraction of this time.
So my question is: why is it taking so long to process? I understand that reflection is slower, but it shouldn't be on the order of 140 milliseconds to call a getter. Is this an artifact of WebLogic taking a long time to invoke functions, or of the fact that line numbers appear to be stripped from the .class files? So far, I don't have any idea.
When you benchmark a piece of code, you must time the same operation several times; otherwise the test is meaningless - the result could have been caused by garbage collection or by another process running on the same computer.
When Methods are cached - e.g. used in a framework where oClass.getMethods() is called only once - a reflective call to a method is only ~2-3x slower than a direct method call. I think oClass.getMethods() must be the slowest part of your reflection, not the method invocation.
So maybe it's SessionImpl::isOpen which is slow by itself? I.e. it checks whether it's still connected, or does some slow interaction with the database. 135 ms is very slow even for a DB transaction, so this could also be caused by errors occurring during the call.
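To separate those effects, one could time a cached Method over many iterations after a JIT warm-up. A rough sketch, purely illustrative and not a rigorous benchmark:

import java.lang.reflect.Method;

public class ReflectionTiming {
    public static void main(String[] args) throws Exception {
        Object target = "hello";
        // Look up the Method once and reuse it, instead of calling
        // getMethods() on every pass.
        Method m = target.getClass().getMethod("length");

        // Warm up so the JIT compiles the reflective path first.
        for (int i = 0; i < 100_000; i++) {
            m.invoke(target);
        }

        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            m.invoke(target);
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("avg ns per reflective call: " + elapsed / 1_000_000);
    }
}

If the per-call average here is tens of nanoseconds while your getter takes 135 ms, the cost is in the target method or the surrounding lookups, not in the reflective dispatch itself.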
I have a small problem with creating threads in EJB. OK, I understand why I cannot use them in EJB, but I don't know how to replace them with the same functionality. I am trying to download 30-40 web pages/files, and I need to start downloading all of them at (approximately) the same time. This is needed because if I run them in one thread in a queue, it will take more than 3 minutes to execute.
I tried the @Asynchronous annotation, but nothing happened.
public void execute(String lang2, String lang1, int number) {
    Stopwatch timer = new Stopwatch().start();
    htmlCodes.add(URL2String(URLs.get(number)));
    timer.stop();
    System.out.println(number + ":" + Thread.currentThread().getName() + " " + timer.elapsedMillis() + " milliseconds");
}

private void findMatches(String searchedWord, String lang1, String lang2) {
    articles = search(searchedWord);
    for (int i = 0; i < articles.size(); i++) {
        execute(lang1, lang2, i);
    }
}
Here are two really good SO answers that can help. This one gives you your options, and this one explains why you shouldn't spawn threads in an EJB. The problem with the first answer is that it doesn't contain a lot of detail about EJB 3.0 options. So, here's a tutorial on using @Asynchronous.
No offense, but I don't see any evidence in your code that you've read this tutorial yet. Your asynchronous method should return a Future. As the tutorial says:
The client may retrieve the result using one of the Future.get methods. If processing hasn’t been completed by the session bean handling the invocation, calling one of the get methods will result in the client halting execution until the invocation completes. Use the Future.isDone method to determine whether processing has completed before calling one of the get methods.
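As a rough illustration of that pattern (the bean, method, and helper names below are made up, not taken from the question or the tutorial): the download work goes into an @Asynchronous method returning an AsyncResult, so each call returns a Future immediately and the container runs the downloads concurrently:

import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class PageDownloader {

    // Runs on a container-managed thread; the caller gets a
    // Future back immediately instead of waiting for the download.
    @Asynchronous
    public Future<String> download(String url) {
        String html = fetch(url); // stand-in for the poster's URL2String
        return new AsyncResult<>(html);
    }

    private String fetch(String url) {
        // ... actual download logic ...
        return "";
    }
}

The caller would invoke download(...) for all 30-40 URLs, collect the returned Futures in a list, and only then call get() on each, so the downloads overlap instead of running one after another in a queue.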
Hi guys,
I am implementing a simple example of a 2-level cache in Java:
1st level is memory
2nd - filesystem
I am new to Java and I am doing this just to understand caching in Java.
And sorry for my English, it is not my native language :)
I have completed the 1st level by using the LinkedHashMap class and the removeEldestEntry method, and it looks like this:
import java.util.*;

public class level1 {
    private static final int max_cache = 50;
    private Map<String, String> cache = new LinkedHashMap<String, String>(max_cache, .75F, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
            return size() > max_cache;
        }
    };

    public level1() {
        for (int i = 1; i < 52; i++) {
            String string = String.valueOf(i);
            cache.put(string, string);
            System.out.println("\rCache size = " + cache.size() +
                    "\tRecent value = " + i +
                    " \tLast value = " +
                    cache.get(string) + "\tValues in cache=" +
                    cache.values());
        }
    }
}
Now I am going to code my 2nd level. What code and methods should I write to implement these tasks:
1) When the 1st level cache is full, the value shouldn't be removed by removeEldestEntry but should instead be moved to the 2nd level (to a file).
2) When a new value is added to the 1st level, it should first be checked for in the file (2nd level); if it exists there, it should be moved from the 2nd level to the 1st.
Also, I tried to use LRUMap to upgrade my 1st level, but the compiler couldn't find the LRUMap class in the library. What's the problem? Is some special syntax needed?
You can either use the built-in Java serialization mechanism and just send your stuff to a file by wrapping a FileOutputStream with an ObjectOutputStream and then calling writeObject().
This method is simple but not flexible enough; for example, you will fail to read the old cache from the file if your classes have changed.
Or you can use serialization to XML, e.g. JAXB or XStream. I used XStream in the past and it worked just fine. You can easily store any collection in a file and then restore it.
Obviously you can store stuff in a DB, but that is more complicated.
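A minimal sketch of the plain-serialization approach for the cache above (the file handling is illustrative; LinkedHashMap and String are both Serializable, so the whole map can be written in one call):

import java.io.*;
import java.util.LinkedHashMap;
import java.util.Map;

public class Level2Store {
    static void save(Map<String, String> cache, File file) throws IOException {
        try (ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream(file))) {
            // Copy into a plain LinkedHashMap first: the anonymous
            // subclass from the question holds a reference to its
            // enclosing instance and would not serialize cleanly.
            out.writeObject(new LinkedHashMap<>(cache));
        }
    }

    // Reads it back; fails if the serialized classes have changed.
    @SuppressWarnings("unchecked")
    static Map<String, String> load(File file)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                new ObjectInputStream(new FileInputStream(file))) {
            return (Map<String, String>) in.readObject();
        }
    }
}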
One remark: you are not taking thread safety into consideration for your cache! By default, LinkedHashMap is not thread-safe, and you would need to synchronize your access to it. Even better, you could use ConcurrentHashMap, which deals with synchronization internally and by default can handle 16 concurrent threads (you can increase this number via one of its constructors).
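For the access-ordered LinkedHashMap above specifically, a simple option is a synchronized wrapper, since ConcurrentHashMap has no removeEldestEntry hook; a sketch reusing the question's max_cache value:

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SynchronizedLruCache {
    private static final int max_cache = 50;

    // Every operation (including the get() calls that update access
    // order) goes through one lock, so LRU eviction stays intact.
    private final Map<String, String> cache = Collections.synchronizedMap(
            new LinkedHashMap<String, String>(max_cache, .75F, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                    return size() > max_cache;
                }
            });
}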
I don't know your exact requirements or how complicated you want this to be, but have you looked at existing cache implementations like the Ehcache library?