JSR 107 - Caching (JCache) vs CPU caching - java

I read about JSR 107 Caching (JCache).
I'm confused:
From what I know, every CPU manages its own cache memory (without any help from the OS).
So why do we need a Java caching handler, if the CPU manages its own cache?
What am I missing here?
Thanks

This is about caching Java objects, like objects that are expensive to create, or need to be shared between multiple Java VMs. See https://jcp.org/en/jsr/detail?id=107
A cache is generally used to temporarily keep data between uses because it takes too much time or is plain impossible to recreate if you just throw it away between uses.
The CPU cache keeps data and instructions in case it has to access it again, because reading it from memory takes more time.
The JSR 107 cache works on a completely different level.

There is a difference between CPU caching and memory caching. This JCache would cache things in memory so you don't have to get it from an expensive resource like disk or over the network.
So CPUs have caches built into them so that they can avoid going to memory. CPUs commonly have three levels of cache and store around 8MB. CPU caching is not something you have to worry about because it is taken care of for you. If something isn't in the CPU cache then it has to go fetch it out of memory.
Caching in memory is to avoid going to disk or even slower resources, as I mentioned earlier. Programs do have control over this mechanism. So if you want to avoid continuously asking your DB for some object, you can store it in memory and keep returning the same object. This can save quite a bit of time. As Thomas mentioned, JCache adds the ability to provide caching across JVMs. From what I understand, this means that different Java programs can share the same cache.
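To make the "keep returning the same object" idea concrete, here is a minimal sketch using a plain ConcurrentHashMap. The `loadFromDb` method is a hypothetical stand-in for an expensive database query; JCache standardizes this pattern behind a provider-independent API and adds features like expiry and cross-JVM providers on top:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QueryCache {
    private final Map<Integer, String> cache = new ConcurrentHashMap<>();
    private int dbCalls = 0; // counts how often the "expensive" lookup actually runs

    // hypothetical stand-in for a slow database query
    private String loadFromDb(int id) {
        dbCalls++;
        return "user-" + id;
    }

    public String get(int id) {
        // computeIfAbsent runs the loader only on a cache miss
        return cache.computeIfAbsent(id, this::loadFromDb);
    }

    public static void main(String[] args) {
        QueryCache c = new QueryCache();
        c.get(7);
        c.get(7); // second call is served from memory
        System.out.println(c.dbCalls); // 1 -- the DB was only hit once
    }
}
```

The trade-off is the one discussed in the rest of this page: cached objects occupy heap until they are evicted, so a real cache also needs a size bound or expiry policy.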

Related

Using memcached with Java and ScheduledFuture objects

I've been playing around with caching objects (first by creating my own cache, which turned out to be a stable but very inefficient implementation) and then trying my hand at using Memcached.
Although memcached works beautifully, I've run into a problem.
How I'm using my objects is as follows:
I read data from a database into an object, then store the object in memcached.
Every couple of minutes I retrieve the object from memcached, retrieve any additional data from either the database or other objects in memcached, update the object with any new / relevant data, then store the object back into memcached.
Objects that need to be viewed are pulled from memcached, packaged and sent onto a client-side application for display.
This works very well, except when the number of objects I'm creating-storing-updating-viewing in memcached becomes high. Java/Tomcat-jvm doesn't seem to be garbage-collecting "fast enough" on the objects I pulled out of memcached, and the vm runs out of memory.
I'm limited to 8GB of memory (and would preferably like to bring that down to 4 if I can, using memcached), so my question is: is there a solution for preventing the JVM's memory usage from expanding so fast (or for tuning the garbage collector)?
(PS I have considered using Guava cache from Google, but this limits my options in concurrency, e.g. if I have to restart Tomcat, and using both Guava and memcached seems like a duplication of sorts which I'd like to avoid if possible.)
--
Hein.
The garbage collector can't be "too slow" and run out of memory. Before throwing an OutOfMemoryError, the garbage collector is guaranteed to run. Only if it cannot free enough memory will the error be thrown.
You should use a profiler to see whether you have memory leaks, or if you're just hanging on to too many objects.
Afterwards you may want to tune the GC to improve performance, see for example here: GC tuning

cache and memory handling on a java server

My Java app is running out of memory after a while, so I am trying to find the most suitable way to manage the cache size. My problem is that my cube grows big if I don't clear memory according to the LRU principle. How do you manage the cache size of your Java VM?
At the moment I can set the parameters CACHE_SIZE and FREE_MEMORY_PERCENTAGE, but this doesn't seem to be the right way.
I actually never solved the problem completely and I am not 100% satisfied with the current status, but anyway, here is my current solution.
The system uses shared cache between sessions, so the memory which got allocated in the sessions gets cleared after session-end but the memory in the shared cache stays allocated.
A combination of the different parameters mentioned before, plus a quicker iteration of the garbage collector after each memory operation, works at least reasonably well.
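For the LRU principle mentioned in the question, a common plain-Java sketch (not the poster's actual implementation) is a LinkedHashMap in access order, overriding removeEldestEntry to cap the size:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true: iteration order is least- to most-recently used
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // called after every put; returning true evicts the least-recently-used entry
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a" so that "b" becomes the eldest entry
        cache.put("c", 3); // evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Note this class is not thread-safe on its own; for concurrent use it would need external synchronization or a library cache.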

Best Practice with Garbage collection when application has Global Cache

I am working on an application which has a global cache and a data publisher. The problem is that we frequently face out-of-memory issues. I did some tuning, but it is not sufficient. We have a large number of young-generation and tenured-generation objects. Most of the time objects end up in the tenured generation, and hence the app throws OutOfMemoryError.
As we have a 2-CPU host we could apply the Throughput Collector, but it mainly collects the young generation, so to avoid pause time we are using the Concurrent Low Pause collector.
What is the best possible way to tune the VM for this application?
How can I increase minor GC activity, which in turn would keep our tenured generation under control?
Thanks in advance.
My first idea was to use a WeakHashMap, but then I found this article: WeakHashMap is not a cache. See the links in that article; maybe you'll also find Apache Commons suitable.
In my opinion, a best practice is to not implement your own cache, especially if it is a central and important component. It's better to use a library. Otherwise you will always have to 'tune' your implementation, and new problems will keep coming up. So even if you already have a custom cache in your application, the effort of switching to a solid library could pay off.
If you have a cache, you need a mechanism to remove stale data from time to time. That's your task, not the VM's.
Available cache solutions like http://java-source.net/open-source/cache-solutions/oscache offer different strategies for expiring cache entries, which you can use and extend if necessary.
Edit: if (as you indicate in the comment) you can't change the application's code, you can add more memory to the server and adjust the heap the JVM is allowed to use. This won't solve the problem, but it might make it appear less often.
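As a sketch of the kind of expiry mechanism such libraries provide (illustrative only, not OSCache's actual implementation), each entry can carry a time-to-live and be dropped lazily when accessed after its deadline:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ExpiringCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiresAt; // wall-clock deadline in millis
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public ExpiringCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAt) {
            map.remove(key); // lazily drop the stale entry on access
            return null;
        }
        return e.value;
    }
}
```

Real libraries combine this with a background reaper thread so stale entries that are never read again still get reclaimed.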

java - can we do our own memory management?

Is it possible to perform memory management yourself? E.g. allocate a chunk of memory outside the heap space so that it is not subject to GC, and take care of allocation/deallocation of objects from this chunk ourselves.
Some people have pointed to frameworks like jmalloc/Ehcache. What I actually want to understand is how they do it.
I am fine with a direct approach or even an indirect one (e.g. serializing Java objects first).
You cannot allocate Java objects in a foreign memory location, but you can map memory that is, e.g., allocated in a native library into a direct ByteBuffer and use it from Java code.
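A minimal sketch of the direct-ByteBuffer approach (here the memory is allocated by the JVM itself via `allocateDirect` rather than mapped from a native library, but the buffer's contents live outside the garbage-collected heap):

```java
import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // allocateDirect reserves memory outside the Java heap;
        // the buffer's contents are not objects and are not traced by the GC
        ByteBuffer buf = ByteBuffer.allocateDirect(64);

        buf.putLong(0, 42L);     // absolute put: writes at offset 0, position unchanged
        buf.putDouble(8, 3.14);  // writes at offset 8

        System.out.println(buf.getLong(0));    // 42
        System.out.println(buf.getDouble(8));  // 3.14
    }
}
```

This is the serialization-style indirect approach the question mentions: you lay out primitive fields at fixed offsets yourself, and only small wrapper objects (the buffer reference) live on the heap.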
You can use the off-heap memory approach.
Look, for example, at jmalloc,
and this is also a useful link: Difference between on and off the heap
I have a library which does this sort of thing. You create excerpts which can be used as rewritable objects or queued events. It keeps track of where objects start and end. The downside is that the library assumes you will cycle all the objects once per day or week, i.e. there is no cleanup as such. On the plus side it's very efficient, can be used in an event-driven manner, can be persisted, and can be shared between processes. You can have hundreds of GB of data while using only a few MB of heap.
https://github.com/peter-lawrey/Java-Chronicle
BTW: It supports ByteBuffers or using Unsafe for extra performance.
If you mean Java objects, then no, this isn't possible with standard VMs. Although you can always modify the VM if you want to experiment (Jikes RVM for example was made for this very purpose), but bear in mind that the result won't really be Java any more.
As for memory allocation for non-java data structures, that is possible and is being done regularly by native libraries and there is even some Java support for it (mentioned in the other answers), with the general caveat that you can very easily self-destruct with it.

Memory consumption for java web app (300MB too high?)

Can I pick your brains about a memory issue?
My java app, which isn't huge (like 14000 LOC) is using about 300MB of memory. It's running on Tomcat with a MySQL database. I'm using Hibernate, Spring and Velocity.
It doesn't seem to have any leaks, because it stabilizes at 300MB without growing further. (Also, I've done some profiling.) There's been some concern from my team, however, about the amount of space it's using. Does this seem high? Do you have any suggestions for ways to shrink it?
Any thoughts are appreciated.
Joe
The number of LOC is not an indicator of how much heap a Java app is going to use; there is no correlation from one to the other.
300MB is not particularly large for a server application that is caching data, but it is somewhat large for an application that is not holding any type of cached or session data (but since this includes the webserver itself, 300MB is generally reasonable).
The amount of code (LOC) rarely has much impact on the memory usage of your application; after all, it's the variables and objects stored that take most of the memory. To me, 300 megabytes doesn't sound like much, but of course it depends on your specific usage scenario:
How much memory does the production server have?
How many users are there with this amount of memory used?
How much does the memory usage grow per user session?
How many users are you expecting to be concurrently accessing the application in production use?
Based on these, you can do some calculations, eg. is your production environment ready to handle the amount of users you expect, do you need more hardware, do you perhaps need to serialize some data to disk/db etc.
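As a sketch of such a back-of-the-envelope calculation (all numbers here are hypothetical placeholders, to be replaced with your own measurements):

```java
public class CapacityEstimate {
    public static void main(String[] args) {
        // hypothetical figures -- substitute the answers to the questions above
        long baseFootprintMb = 300;   // app at rest, as measured
        long perSessionKb    = 150;   // memory growth per user session
        int  concurrentUsers = 2000;  // expected peak concurrent users

        // total heap needed = base footprint + per-session cost for all users
        long totalMb = baseFootprintMb + (perSessionKb * concurrentUsers) / 1024;
        System.out.println("Estimated heap: ~" + totalMb + " MB");
    }
}
```

If the estimate lands near your -Xmx ceiling, that is the point where serializing session data to disk/DB, or adding hardware, becomes worth considering.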
I can't make any promises, but I don't think you need to worry. We run a couple of web apps at work through Glassfish, using Hibernate as well; each uses about 800-900MB in dev, and we'll often have two domains running, each of that size.
If you do need to reduce your footprint, at least make sure you are using Velocity 1.6 or higher. 1.5 wasted a fair bit of memory.
