This question is to do with FatWire Content Server version 7.6.
I have a FatWire template which goes through some assets and returns some markup. It takes about 2 minutes to complete. The result gets cached successfully in FatWire ContentServer cache and I can see it in the CacheManager tool. The ExpiryTime on the template is 10 years in the future. After a short while (usually 1-2 mins), the ExpiryTime changes to a past date (1980-02-01 01:01:01) and hence is expired. The item then disappears from the cache.
Has anyone experienced this before? It is only happening with this template. Any ideas as to the reason the item expires after first being cached successfully?
If you are using the old-style page cache implementation (the SystemPageCache / SystemItemCache tables), I'd suggest enabling some debug logging to see whether a particular page/element or event runs after the caching and causes the change to the table.
Enable these loggers (restart afterwards):
com.futuretense.cs.db=DEBUG
com.futuretense.cs.request=DEBUG
com.futuretense.cs.event=DEBUG
com.futuretense.cs=DEBUG
Tail futuretense.txt/sites.log and reproduce the issue. You should be able to see the point where the new pagelet is cached with a future expiration, and then something subsequently changes it. That may tell you whether the change occurred as a result of a system event or of another page request. If this is occurring in a clustered environment, you will need to set the same logging and tail the logs on the other cluster nodes to spot whether the change is coming from them.
If you are using the new-style page cache ("InCache", cs-cache.xml etc.), then it may be that another node is unexpectedly interacting with this one. You could temporarily isolate this node from the cache cluster by adjusting the multicast settings in cs-cache.xml; e.g. timeToLive=0 will prevent nodes on other physical servers from seeing this one.
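For reference, cs-cache.xml follows the standard ehcache configuration format that InCache is built on, so the TTL usually lives in the peer-provider factory's properties. This is a sketch only: keep the class, group address and port your file already contains, and change just timeToLive.

```xml
<!-- Sketch: address/port are placeholders for your existing values.
     timeToLive=0 keeps multicast traffic on this host only. -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic,
                multicastGroupAddress=230.0.0.1,
                multicastGroupPort=4446,
                timeToLive=0"/>
```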
I successfully enabled Redis in a Spring Boot + Vaadin application and it runs fine on my computer. The application is on a test run in a slower environment, and an error occurs multiple times:
WARN c.v.s.communication.ServerRpcHandler [ServerRpcHandler.java : 266] - Unexpected message id from the client. Expected: 248, got: 249
It seems to happen when the serialization/deserialization of the VaadinSession takes too long. For example, I have a page with multiple checkboxes. I click the first, then the second and third. After this, the warning above is thrown and a previous state of the page appears: it might have no checked checkboxes, or one or two checked. In rare cases it works properly.
I can't think of a solution to the problem. One thing I tried was showing the loading indicator immediately (after 100ms; the default is after 300ms of loading), but it doesn't solve the problem.
Can I somehow configure when serialization/deserialization occurs, instead of on every UI change, or make it faster by leaving parts of the VaadinSession out of it? (I need the data on the current page, so I can't make the UI components transient.)
We had a discussion about the problem at my workplace, and we think the components are working properly. The problem occurs when a serialization is slower than the next request's deserialization. (Every UI change begins with a deserialization to get the latest state, then serializes the modified state.) My solution was to create an aspect that stores the latest VaadinSession sent for serialization and compares every deserialized VaadinSession to the stored one, keeping the one with the higher lastProcessedClientToServerId. This solves the issue almost every time.
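As a rough sketch of that aspect's core logic (all names here are made up; SessionState stands in for the real VaadinSession, and the AspectJ wiring around the Redis (de)serialization calls is omitted), assuming lastProcessedClientToServerId is the freshness criterion:

```java
import java.util.concurrent.atomic.AtomicReference;

// Stand-in for the deserialized VaadinSession (hypothetical, for illustration).
class SessionState {
    final int lastProcessedClientToServerId;
    SessionState(int id) { this.lastProcessedClientToServerId = id; }
}

// Keeps the newest session state seen, so a slow serialization can never
// resurrect an older UI state.
class LatestSessionKeeper {
    private final AtomicReference<SessionState> latest = new AtomicReference<>();

    // Advice before serialization: remember the state being written out.
    void onSerialize(SessionState s) {
        latest.accumulateAndGet(s, LatestSessionKeeper::newer);
    }

    // Advice after deserialization: prefer whichever state is newer.
    SessionState onDeserialize(SessionState s) {
        return newer(latest.get(), s);
    }

    private static SessionState newer(SessionState a, SessionState b) {
        if (a == null) return b;
        if (b == null) return a;
        return a.lastProcessedClientToServerId >= b.lastProcessedClientToServerId ? a : b;
    }
}
```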
I am using an S3 Lifecycle Rule to move objects to Glacier. Since objects will be moved to Glacier storage, I need to make sure my application's RDS is also updated with similar details.
As per my discussion in the thread AWS Lambda for objects moved to glacier, there is currently no way to generate an SQS notification when an object is moved to Glacier.
Also, as I understand it, the lifecycle rule is evaluated once a day, but there is no specific time of day at which this happens. If there were, I was planning to have a scheduler that runs after it and updates the status of archived objects in RDS.
Is there an approach you can suggest that will come close to keeping these status changes in sync between AWS and RDS?
Let me know your feedback, or if you need more information to understand the use case.
=== My current approach ===
Below is the exact flow I have implemented; please review and let me know if anything could be done in a better way.
When an object is uploaded to the system, I mark it with status Tagged and capture its creation date. My lifecycle rule is configured for 30 days from creation. So I have a scheduler which, for all objects with status Tagged, calculates the difference between today's date and the object's creation date and checks whether the difference is greater than or equal to 30. If so, it updates the status to Archived.
If a user performs any operation on an object with status Archived, we explicitly check in S3 whether the object has actually been moved to Glacier. If not, we perform the requested operation. If it has been moved to Glacier, we initiate the restore process and wait for the restore to finish before performing the requested operation.
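The date arithmetic in that scheduler can be sketched as follows (class and method names are mine, not from any AWS SDK):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

class ArchiveStatusScheduler {
    // Must match the transition window configured in the S3 lifecycle rule.
    static final long LIFECYCLE_DAYS = 30;

    // True when an object with status Tagged, created on creationDate,
    // should be assumed transitioned to Glacier and be marked Archived.
    static boolean shouldMarkArchived(LocalDate creationDate, LocalDate today) {
        return ChronoUnit.DAYS.between(creationDate, today) >= LIFECYCLE_DAYS;
    }
}
```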
I appreciate your thoughts and would like to hear your input on the approach above.
Regards.
If I wanted to implement this, I would set the storage class of the object inside my database to "Glacier/Archived" at the beginning of the day it is supposed to transition.
You already know your lifecycle policies, and, as part of object metadata, you also know the creation time of each object. Then it becomes a simple query, which can be scheduled to run every night at 12:00 AM.
You could further enhance your application with an algorithm that handles the day an object is due to transition to Glacier: at the moment access to the object is requested, it would go and explicitly check whether the object has actually transitioned. Once an object has been marked Glacier/Archived for more than a day, checking is no longer required.
Of course, if for any reason the above solution doesn't work for you, it is possible to write a scanner application that continuously checks the status of objects that are supposed to transition at "DateTime.Today" and are not yet marked Glacier/Archived.
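The "only check around the transition day" idea can be sketched like this (hypothetical names; the actual S3 call, e.g. a HeadObject request, is left out):

```java
import java.time.LocalDate;

class GlacierCheckPolicy {
    // An explicit S3 check is only worthwhile on the expected transition day
    // or the day after; once the object has been marked Glacier/Archived for
    // more than a day, the database record can be trusted.
    static boolean needsExplicitS3Check(LocalDate expectedTransition, LocalDate today) {
        return !today.isBefore(expectedTransition)
            && !today.isAfter(expectedTransition.plusDays(1));
    }
}
```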
Background: in Java, I'm memory-mapping a file (shared).
I'm writing some value at address 0 of that file. I understand the corresponding page in the page cache is flagged as dirty and will be written back later, depending on dirty_ratio and similar settings.
So far so good.
But I'm wondering what happens when I write once more at address 0 while the kernel is writing the dirty page back to the file. Is my process somehow blocked, waiting for the writeback to complete?
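For readers who want to reproduce the setup, a minimal version of the write/writeback/write-again sequence looks like this (whether the second put() stalls depends on the kernel and filesystem, as the answer explains):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class MmapWriteTwice {
    static byte writeForceWrite(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.CREATE,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1);
            buf.put(0, (byte) 1); // first write: the page becomes dirty
            buf.force();          // request writeback of the mapped page
            buf.put(0, (byte) 2); // second write: may stall if the kernel
                                  // enforces stable page writes during writeback
            return buf.get(0);
        }
    }
}
```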
It may be. Blocking is only necessary when the device-level I/O requests include a checksum alongside the written data. Otherwise, the first write may be torn, but it can then be corrected by the second write.
As always, carefully consider your safety against power-failure, kernel crashes etc.
The waiting is allegedly avoided in btrfs. (Also, by happenstance, in the legacy ext3 filesystem. But not ext4 or ext2).
This looks like it is a bit of a moving target. The above (as far as I could tell) describes the first optimization of this "stable page write" code, following the complaints when it was first introduced. The commit description mentions several possibilities for future changes.
bdi: allow block devices to say that they require stable page writes
mm: only enforce stable page writes if the backing device requires it
Does my device currently use "stable page writes"?
There is a sysfs attribute you can look at, called stable_pages_required, typically exposed at /sys/block/<device>/bdi/stable_pages_required.
I'm one of the developers of the Hawk model indexing tool. Our tool indexes XMI models into graphs in order to speed up later queries, and it needs to toggle back and forth between "batch insert" and "transactional update" modes. "batch insert" is used the first time we notice a new file in a directory, and from then on we use "transactional update" mode to keep its graph in sync.
Our recently added OrientDB 2.1.4 backend uses the getTx()/getNoTx() methods in OrientGraphFactory to get the appropriate OrientGraph/OrientGraphNoTx instances. However, we aren't getting very good throughput when compared to Neo4j. Indexing set0.xmi takes 90s when placing the WAL in a Linux ramdisk with OrientDB, while it takes 22s with our Neo4j backend in the same conditions (machine + OS + JDK). We're using these additional settings to try and reduce times:
Increased WAL cache size to 10000
Disable sync on page flush
Save only dirty objects
Use massive insert intent
Disable transactional log
Disable MVCC
Disable validation
Use lightweight edges when possible
We've thought of disabling the WAL when entering "batch insert" mode, but there doesn't seem to be an easy way to toggle that on and off. It appears it can only be set once at program startup and that's it. We've tried explicitly closing the underlying storage so the USE_WAL flag will be read once more while reopening the storage, but that only results in NullPointerExceptions and other random errors.
Any pointers on how we could toggle the WAL, or improve performance beyond that would be greatly appreciated.
Update: We've switched to using the raw document API and marking dirty nodes/edges ourselves and we're now hitting 55 seconds, but the WAL problem still persists. Also tried 2.2.0-beta, but it actually took longer.
We solved this ourselves. Leaving this in case it might help someone :-). We've hit 30 seconds after many internal improvements in our backend (still using the raw doc API) and switching to OrientDB 2.0.15, and we found the way to toggle the Write Ahead Log ourselves. This works for us (db is our ODatabaseDocumentTx instance):
private void reopenWithWALSetTo(final boolean useWAL) {
    // Close the underlying storage first (the two-arg close), then the DB handle.
    db.getStorage().close(true, false);
    db.close();
    // USE_WAL is only consulted when the storage is (re)opened.
    OGlobalConfiguration.USE_WAL.setValue(useWAL);
    // Reopen against the same plocal:// URL; the storage picks up the new setting.
    db = new ODatabaseDocumentTx(dbURL);
    db.open("admin", "admin");
}
I was being silly and thought I had to close the DB first and then the storage, but it turns out that wasn't the case :-). It's necessary to use the two-arg version of the ODatabaseDocumentTx#close method, as the no-arg version basically does nothing for the OAbstractPaginatedStorage implementation used through plocal:// URLs.
I have seen that one of the main differences between POST and GET is that POST responses are not cached but GET responses are.
Could you explain what "cache" means here?
Also, whether I use POST or GET, the server sends me a response. Is there any difference? In both cases I have request data and a response, don't I?
Thanks
To cache (in the context of HTTP) means to store a page/response either on the client or on some intermediate host, perhaps in a content delivery network. When the client requests a page, the page can then be served from the client's cache (if the client requested it before) or from the intermediate host. This is faster and requires fewer resources than getting the page from the server that generated it.
One downside is that if the request changes some state on the server, that change won't happen if the page is served from a cache. This is why POST requests are usually not served from a cache.
Another downside to caching is that the cached copy may be out of date. The HTTP caching mechanisms try to prevent this.
The basic idea behind the GET and POST methods is that a GET message only retrieves information and never changes the state of the server (hence the name). As a result, just about any caching system will assume that it can remember the last GET response returned and that the next one will look the same.
A POST, on the other hand, is a request that sends new information to the server. Not only can these responses not be cached (there's no guarantee that the next POST won't modify things even further; think of +1/like buttons, for example), but they actually have to invalidate parts of the cache, because they might modify pages.
As a result, your browser will, for example, warn you when you try to refresh a page to which you POSTed information, because by doing so you might make changes you did not want. When GETting a page, it will not warn you, because you cannot change anything on the site that way.
(Or rather; it's your job as a programmer to make sure that nothing changes when GETting a page.)
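To make that concrete, here is a sketch of the headers involved (paths and values are illustrative): the GET response explicitly allows caches to reuse it, while the POST response forbids storing it.

```http
GET /items/42 HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Cache-Control: public, max-age=3600

POST /items/42/like HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Cache-Control: no-store
```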
GET is supposed to return the same result and not change anything on the server side; it is therefore safe and idempotent.
POST, on the other hand, may modify something on the server (make an entry in the DB, delete something, etc.), and is not guaranteed to be idempotent.
And with regard to caching, the caching of HTTP requests and responses is addressed nicely here:
http://www.ebaytechblog.com/2012/08/20/caching-http-post-requests-and-responses/#.VGy9ovmUeeQ