I would like to clear the Eclipse e4 cache (the .metadata directory) at runtime.
There are lots of posts about clearing the cache by setting the checkbox in the run configuration, but I can't find anything on clearing the cache from code.
I'd prefer to use a method that has already been written (if there is one) rather than writing my own.
If I have to do this myself, I'll do it during @PostContextCreate in the life-cycle manager.
Is there a method that will do this for me or should I just delete the cache directory?
Update
Here is the issue I'm trying to work around.
https://bugs.eclipse.org/bugs/show_bug.cgi?id=430090#add_comment
To clear the cache at runtime I've overridden the ResourceHandler and added this to loadMostRecentModel:
// getWorkbenchSaveLocation() is private in the parent ResourceHandler, so reflection is needed
final Method m = getClass().getSuperclass().getDeclaredMethod("getWorkbenchSaveLocation");
m.setAccessible(true);
final File workbenchSaveLocation = (File) m.invoke(this);
workbenchSaveLocation.delete(); // remove the persisted workbench state file
I use reflection because the parent method is private. Doing it this way, rather than writing code to locate the file myself, ensures I always get the correct location.
First, deleting the .metadata folder can damage user data: preferences, launch configurations, and who knows what else; it depends on the particular plug-in implementation.
Also, your updates may contain new bundles and fragments with new services and extensions.
And the user may rearrange views and do other things that are persisted in the workbench model.
So deleting the workbench model will not resolve all the issues. Please consider the following instead:
- restart after the update to ensure all the new bundles/extensions/services are applied (see "Howto restart an e4 RCP application", and the sketch below)
- use Model Processors to manipulate the model after it is loaded: http://blog.vogella.com/2010/10/26/processors-e4-model/
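For the restart option, here is a minimal sketch of an e4 handler; the handler class name is illustrative, but IWorkbench.restart() is the e4 API for this:
import org.eclipse.e4.core.di.annotations.Execute;
import org.eclipse.e4.ui.workbench.IWorkbench;

public class RestartHandler {
    @Execute
    public void execute(IWorkbench workbench) {
        // shuts down and relaunches the application so that new
        // bundles/extensions/services are picked up on the next start
        workbench.restart();
    }
}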
Related
I want to move a few assets by creating a new folder, using only a workflow in Java. I don't want to create the folders manually and then move the assets, as there are tens of thousands of assets to be moved to different folders.
If you are looking at creating the folder using a workflow: a folder in AEM is nothing but a node of jcr:primaryType sling:Folder or sling:OrderedFolder. If you have com.day.cq.commons.jcr in your classpath, the createPath method will help you create a node if it does not exist.
You could also use the addNode method followed by the setProperty method from the javax.jcr.Node API to create this folder with the appropriate primary type.
Moving assets to this newly created node (folder) can proceed after this. You could use the clone method from javax.jcr.Workspace, which has an option to remove the existing node.
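To illustrate, a minimal sketch of the create-folder-then-move flow; it uses Workspace.move rather than the clone call mentioned above, for simplicity, and all paths are just examples:
import javax.jcr.Node;
import javax.jcr.Session;

public class AssetMoveSketch {
    public static void moveAsset(Session session) throws Exception {
        Node damRoot = session.getNode("/content/dam");
        if (!damRoot.hasNode("folderB")) {
            // sling:OrderedFolder works here as well, if ordering matters
            damRoot.addNode("folderB", "sling:Folder");
            session.save();
        }
        // Workspace operations are dispatched immediately; no session.save() needed
        session.getWorkspace().move("/content/dam/folderA/asset1.jpg",
                                    "/content/dam/folderB/asset1.jpg");
    }
}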
There is another, more straightforward way to move assets.
I would recommend using the built-in com.adobe.granite.asset.api.AssetManager API to perform CRUD operations on DAM assets.
Session session = resourceResolver.adaptTo(Session.class);
AssetManager assetManager = resourceResolver.adaptTo(AssetManager.class);
String assetPath = "/content/dam/folderA/asset1.jpg";
String movePath = "/content/dam/folderB/asset1.jpg";
assetManager.moveAsset(assetPath, movePath);
session.save();
session.logout();
Further references for the AssetManager API:
- HelpX Article
- API Details
Moving a large number of assets might cause the move operation to fail if there are no appropriate indexes in place. Monitor the logs for warning messages like "The query read or traversed more than X nodes." You might have to add Oak property definitions to the out-of-the-box /oak:index/ntBaseLucene index to fix this.
More details here.
Hey, all! I have a class method whose primary function is to get a Map object, which works fine; however, it's an expensive operation that doesn't need to be done every time, so I'd like to have the results stored in an XML file using JAXB, to be read from for the majority of calls and updated infrequently.
When I run a class that calls it from NetBeans, the file is created, no problem, with exactly what I want -- but when I have my JSP call the method, nothing happens whatsoever, even though the rest of the information is passed normally. I have the feeling it's somehow lacking write privileges, but the file is just in the root directory, so I'm not sure what I'm missing. Thanks for the help!
The code looks roughly like this:
public class DataHandler {
    ...
    public void config() {
        MapHolder bucket = new MapHolder();
        MapExporter exp = new MapExporter();
        Map map = makeMap();
        bucket.setMap(map);
        exp.exportMap(bucket);
    }
}
And then the JSP has a JavaBean of DataHandler, and this line:
databean.config();
It's probably a tad more fragmented than it needs to be; the whole bucket rigamarole was because I was stumbling while learning how to write a map to an XML file. MapHolder is just a class that I wrap around the map, MapExporter just uses a JAXB marshaller, and it all does work properly when run from NetBeans.
OK, turns out I'm just dumb; everything was working fine, the file was just being stored in a folder at the localhost location. Whoops! That'd be my inexperience with web development at work.
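For anyone hitting the same thing: a bare relative path resolves against the server's working directory. A minimal sketch of marshalling to an explicit absolute path instead (MapHolder is the wrapper class from the question; the target path is just an example):
import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;

public class MapExporter {
    public void exportMap(MapHolder bucket, File target) throws Exception {
        Marshaller m = JAXBContext.newInstance(MapHolder.class).createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        // e.g. target = new File(servletContext.getRealPath("/WEB-INF/map.xml"))
        m.marshal(bucket, target);
    }
}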
I am currently developing an Eclipse-RCP application that stores per-project preferences using the EclipsePreference mechanism through ProjectScope. At first this seemed to work very well, but we have run into trouble when (read-) accessing these preferences in multithreaded scenarios while at the same time changes are being made to the workspace. What appears to be particularly problematic is accessing such a preference node (ProjectScope.getNode()) while the project is being deleted by an asynchronous user action (right click on Project -> Delete Project). In such cases we get a pretty mix of
org.osgi.service.prefs.BackingStoreException
java.io.FileNotFoundException
org.eclipse.core.runtime.CoreException
Essentially they all complain that the underlying file is no longer there.
Initial attempts to fix this using checks like IProject.exists() or isAccessible(), and even going so far as checking for the presence of the actual .prefs file, were as futile as expected: they only made the exceptions less likely but did not really prevent them.
So my question is: how are you supposed to safely access things like ProjectScope.getNode()? Do you need to go so far as to put every read into a WorkspaceJob, or is there some other, clever way to prevent the above problems, like putting the read access in Display.asyncExec()?
Although I tried, I did not really find answers to the above question in the Eclipse documentation.
Usually, scheduling rules are used to coordinate concurrent access to resources in the workspace.
I've never worked with ProjectScope'd preferences, but if they are stored within a project or its metadata, then a scheduling rule should help coordinate access. If you are running the preference-access code in a Job, then setting an appropriate scheduling rule should do:
For example:
final IProject project = getProjectForPreferences( projectPreferences );
ISchedulingRule rule = project.getWorkspace().getRuleFactory().modifyRule( project );
Job job = new Job( "Access Project Preferences" ) {
  @Override
  protected IStatus run( IProgressMonitor monitor ) {
    if( project.exists() ) {
      // read or write project preferences
    }
    return Status.OK_STATUS;
  }
};
job.setRule( rule );
job.schedule();
The code acquires a rule to modify the project, and the Job is guaranteed to run only when no other job with a conflicting rule is running.
If your code isn't running within a job, you can also manually acquire a lock with IJobManager.beginRule() and endRule().
For example:
IJobManager jobManager = Job.getJobManager();
ISchedulingRule rule = ...;
try {
  jobManager.beginRule( rule, monitor );
  if( project.exists() ) {
    // read or write project preferences
  }
} finally {
  jobManager.endRule( rule );
}
As awkward as it looks, the call to beginRule must be within the try block, see the JavaDoc for more details.
Background:
I have a requirement that messages displayed to the user must vary both by language and by company division. Thus, I can't use out-of-the-box resource bundles, so I'm essentially writing my own version of resource bundles using PropertiesConfiguration files.
In addition, I have a requirement that messages must be modifiable dynamically in production w/o doing restarts.
I'm loading up three different iterations of property files:
- basename_division.properties
- basename_2CharLanguageCode.properties
- basename.properties
These files exist in the classpath. This code is going into a tag library to be used by multiple portlets in a Portal.
I construct the possible .properties files, and then try to load each of them via the following:
PropertiesConfiguration configurationProperties;
try {
    configurationProperties = new PropertiesConfiguration(propertyFileName);
    configurationProperties.setReloadingStrategy(new FileChangedReloadingStrategy());
} catch (ConfigurationException e) {
    /* This is OK -- it just means that the specific configuration file doesn't
       exist right now, which will often be true. */
    return null;
}
If it did successfully locate a file, it saves the created PropertiesConfiguration into a HashMap for reuse and then tries to find the key. (Unlike regular resource bundles, if it doesn't find the key, it then tries the more general file to see if the key exists there -- so that only the overrides need to be put into language/division-specific property files.)
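For illustration, a sketch of that fallback lookup, assuming the cached configurations are passed from most specific to most general (the method and names are hypothetical):
private String lookup(String key, PropertiesConfiguration... chain) {
    // chain is ordered: basename_division, basename_language, basename
    for (PropertiesConfiguration config : chain) {
        if (config != null && config.containsKey(key)) {
            return config.getString(key);
        }
    }
    return null; // key not found in any of the files
}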
The Problem:
If a file did not exist the first time it was checked, the expected exception is thrown. However, if a file is later dropped into the classpath and this code is re-run, the exception is still thrown. Restarting the portal obviously clears the problem, but that's not useful to me -- I need to be able to let them drop new messages in place for language/companyDivision overrides w/o a restart. And I'm not that interested in creating blank files for all possible divisions, since there are quite a few divisions.
I'm assuming this is a classLoader issue, in that it determines that the file did not exist in the classpath the first time, and caches that result when trying to reload the same file. I'm not interested in doing anything too fancy w/ the classLoader. (I'd be the only one who would be able to understand/maintain that code.) The specific environment is WebSphere Portal.
Any ways around this or am I stuck?
I am not sure whether Apache's FileChangedReloadingStrategy also reports ENTRY_CREATE events on a file-system directory; my guess is that it does not.
If you're using Java 7, I propose trying the following: simply implement a new ReloadingStrategy using the Java 7 WatchService. That way, every time a file is changed in one of your target directories or a new property file is placed there, you poll for the event and are able to add the properties to your application.
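A minimal sketch of the WatchService part, leaving out the wiring into a custom ReloadingStrategy; the directory path is an assumption:
import java.io.IOException;
import java.nio.file.*;

public class PropertyDirectoryWatcher implements Runnable {
    private final Path dir = Paths.get("/path/to/properties"); // hypothetical

    @Override
    public void run() {
        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_MODIFY);
            while (true) {
                WatchKey key = watcher.take(); // blocks until events arrive
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path file = dir.resolve((Path) event.context());
                    // create or reload the PropertiesConfiguration for this file
                }
                if (!key.reset()) {
                    break; // directory is no longer accessible
                }
            }
        } catch (IOException | InterruptedException e) {
            // stop watching; handle or log as appropriate
        }
    }
}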
If you are not on Java 7, using a library such as JNotify might be a better way to get the event of a new entry in a directory. But again, you need to implement the ReloadingStrategy yourself.
UPDATE for Java 6:
PropertiesConfiguration configurationProperties;
try {
    configurationProperties = new PropertiesConfiguration(propertyFileName);
    configurationProperties.setReloadingStrategy(new FileChangedReloadingStrategy());
} catch (ConfigurationException e) {
    // file not there yet: watch the directory for its creation
    // (JNotify.addWatch throws JNotifyException; handling omitted here)
    JNotify.addWatch(propertyFileDirectory, JNotify.FILE_CREATED, false, new FileCreatedListener());
}
where
class FileCreatedListener implements JNotifyListener {
    // other JNotifyListener methods omitted
    public void fileCreated(int watchId, String rootPath, String fileName) {
        try {
            configurationProperties = new PropertiesConfiguration(rootPath + "/" + fileName);
            configurationProperties.setReloadingStrategy(new FileChangedReloadingStrategy());
            // or any other business with configurationProperties
        } catch (ConfigurationException e) {
            // the newly created file could not be read; handle as appropriate
        }
    }
}
I am using Apache JCI's FAM (FilesystemAlterationMonitor) in a Java OSGi service to monitor and handle changes in the file system. Everything seems to be working fairly well, except that whenever I start the service (which starts FAM using the code below), FAM picks up on ALL the changes that exist in the directory.
Currently I am watching /tmp
/tmp includes a subtree: /tmp/foo/bar/cat/dog
Every time I start the service, which starts FAM, it reports DirectoryCreate events for:
/tmp/foo
/tmp/foo/bar
/tmp/foo/bar/cat
/tmp/foo/bar/cat/dog
Even if no changes have been made to any part of that subtree.
Code run on service activation:
File watchFolder = new File("/tmp");
watchFolder.mkdirs();
fam = new FilesystemAlterationMonitor();
fam.setInterval(1000);
fam.addListener(watchFolder, listener);
fam.start();
// I've already tried adding:
listener.waitForFirstCheck();
Listener example:
private FileChangeListener listener = new FileChangeListener() {
    public void onDirectoryChange(File pDir) { System.out.println(pDir.getAbsolutePath()); }
    public void onDirectoryCreate(File pDir) { System.out.println(pDir.getAbsolutePath()); }
    ...
};
Yes, that's one very annoying feature of JCI. When monitoring is started, it will notify you of all the files and directories it finds, with calls to onXxxCreate(). I think you have the following options:
- After starting the monitoring, wait for some time (a couple of seconds) in your FileChangeListener callback implementation before you actually process the events coming from JCI (see the sketch after this list). That's what I did in a project, and it works fairly well, although there is the possibility that you miss an actual file creation that happens within the "grace period".
- Take the sources of JCI and modify them to use two new event methods, onDirectoryFound(File) and onFileFound(File), that are only fired when files and directories are found on startup of the monitoring.
- Take a look at java.nio.file.WatchService that comes with Java 7. IMO the best option, as it uses native methods internally in order to be notified of changes by the OS, instead of starting a thread and checking periodically. With JCI, you may see delays in the range of several seconds until changes are propagated to your callbacks.
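For the first option, a rough sketch of the grace-period filter inside the FileChangeListener shown above; the 5-second window is an arbitrary choice:
private final long monitoringStart = System.currentTimeMillis();
private static final long GRACE_PERIOD_MS = 5000;

public void onFileCreate(File pFile) {
    if (System.currentTimeMillis() - monitoringStart < GRACE_PERIOD_MS) {
        return; // most likely JCI reporting a pre-existing file on startup
    }
    // handle a genuinely new file
}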
Forget about WatchService. It is not intuitive, and there are issues with it when trying to see whether it can detect that the folder it is monitoring has been deleted or changed. I would stay far away from it. I have worked with it but prefer Apache IO much more. I believe Camel uses it as well.