Thread-Safe way to access EclipsePreferences (Project) - java

I am currently developing an Eclipse-RCP application that stores per-project preferences using the EclipsePreferences mechanism through ProjectScope. At first this seemed to work very well, but we have run into trouble when (read-)accessing these preferences in multithreaded scenarios while changes are being made to the workspace at the same time. What appears to be particularly problematic is accessing such a preference node (ProjectScope.getNode()) while the project is being deleted by an asynchronous user action (right click on Project -> Delete Project). In such cases we get a colorful mix of
org.osgi.service.prefs.BackingStoreException
java.io.FileNotFoundException
org.eclipse.core.runtime.CoreException
Essentially they all complain that the underlying file is no longer there.
Initial attempts to fix this using checks like IProject.exists() or isAccessible() and even going so far as checking the presence of the actual .prefs file were as futile as expected: They only make the exceptions less likely but do not really prevent them.
So my question is: How are you supposed to safely access things like ProjectScope.getNode()? Do you need to go so far as to put every read into a WorkspaceJob, or is there some other, clever way to prevent the above problems, like putting the read access in Display.asyncExec()?
Although I tried, I did not really find answers to the above question in the Eclipse documentation.

Scheduling rules are the usual way to coordinate concurrent access to resources in the workspace.
I've never worked with ProjectScope'd preferences, but if they are stored within a project or its metadata, then a scheduling rule should help to coordinate access. If you are running the preference-access code in a Job, then setting an appropriate scheduling rule should do:
For example:
final IProject project = getProjectForPreferences( projectPreferences );
ISchedulingRule rule = project.getWorkspace().getRuleFactory().modifyRule( project );
Job job = new Job( "Access Project Preferences" ) {
  @Override
  protected IStatus run( IProgressMonitor monitor ) {
    if( project.exists() ) {
      // read or write project preferences
    }
    return Status.OK_STATUS;
  }
};
job.setRule( rule );
job.schedule();
The code acquires a rule to modify the project, and the Job is guaranteed to run only when no other job with a conflicting rule is running.
If your code isn't running within a job, you can also manually acquire a lock with IJobManager.beginRule() and endRule().
For example:
ISchedulingRule rule = ...;
try {
  jobManager.beginRule( rule, monitor );
  if( project.exists() ) {
    // read or write project preferences
  }
} finally {
  jobManager.endRule( rule );
}
As awkward as it looks, the call to beginRule must be within the try block; see the JavaDoc for more details.
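To tie this back to the original question, a rule-protected read of a ProjectScope node might look roughly like this (the qualifier "com.example.myplugin" and the preference key are made-up placeholders):
IJobManager jobManager = Job.getJobManager();
ISchedulingRule rule = project.getWorkspace().getRuleFactory().modifyRule( project );
try {
  jobManager.beginRule( rule, monitor );
  // the rule prevents the project from being deleted concurrently
  if( project.isAccessible() ) {
    IEclipsePreferences node = new ProjectScope( project ).getNode( "com.example.myplugin" );
    String value = node.get( "someKey", "default" );
    // ... use value ...
  }
} finally {
  jobManager.endRule( rule );
}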

Related

jenkins plugin - Starting and stopping a Stage from inside a plugin

First, some background on why I want this crazy thing. I'm building a plugin in Jenkins that provides an API for scripts that are started from a pipeline script to independently communicate with Jenkins.
For example, a shell script can then tell Jenkins to start a new stage from the running script.
I've got the communication between the script and Jenkins working, but the problem is that I now want to start a stage from a callback in my code and I can't seem to figure out how to do it.
Stuff I've tried and failed at:
Start a new StageStep.java
I can't seem to find a way to correctly instantiate and inject the step into the lifecycle. I've looked into DSL.java, but can't seem to get an instance to call invokeStep() on, nor was I able to find out how to instantiate DSL.java with the right environment.
Look at StageStepExecution.java and do what it does.
It seems to either invoke the body with an Environment Variable and nothing else, or set some actions and save the state in a config file when it has no body. I could not find out how the Pipeline: Stage View Plugin hooks into this, but it doesn't seem to read the config file. I've tried setting the Actions (even the inner class through reflection) but that did not seem to do anything.
Inject a custom string as Groovy body and call it with csc.newBodyInvoker()
A hacky solution I came up with was just generating the Groovy script and running it like the ParallelStep does. But the sandbox does not allow me to call new GroovyShell().evaluate(""), and if I approve that call, the 'stage' step throws a MissingMethodException. So I also do not instantiate the script with the right environment. Providing the EnvironmentExpander does not make any difference.
Referencing and modifying workflow/{n}.xml
Changing the name of a stage in the relevant workflow/{n}.xml and rebooting the server updates the name of the stage, but modifying my custom stage to look like a regular one does not seem to add the step as a stage.
Stuff I've researched:
Whether some other plugin does something like this, but I couldn't find any example of plugins starting other steps.
How Jenkins handles the scripts and starts the steps, but it seems as though every step is called directly through its method name after the script is parsed, and I found no way to hook into this.
Other plugins using the StageView through other methods, but I could not find any.
Adding an AtomNode as a head onto the running thread, but I couldn't find how to replace/add the head and am hesitant to mess with Jenkins' threading.
I've spent multiple days on this seemingly trivial call, but I can't seem to figure it out.
So the latest thing I tried actually worked, and is displayed correctly, but it ain't pretty.
I basically reimplemented what DSL.invokeStep() does, which required me to use reflection A LOT. This is not safe and will of course break with future changes, so I'll open an issue in Jenkins' ticket system in the hope that they add a public interface for doing this. I'm just hoping this won't give me any weird side effects.
// First, get some environment stuff
CpsThread cpsThread = CpsThread.current();
CpsFlowExecution currentFlowExecution = (CpsFlowExecution) getContext().get(FlowExecution.class);
// instantiate the stage's descriptor
StageStep.DescriptorImpl stageStepDescriptor = new StageStep.DescriptorImpl();
// now we need to put a new FlowNode as the head of the step-stack. This is of course not possible directly,
// but everything is also outside of the sandbox, so putting the class in the same package doesn't work
// get the 'head' field
Field cpsHeadField = CpsThread.class.getDeclaredField("head");
cpsHeadField.setAccessible(true);
Object headValue = cpsHeadField.get(cpsThread);
// get its value
Method head_get = headValue.getClass().getDeclaredMethod("get");
head_get.setAccessible(true);
FlowNode currentHead = (FlowNode) head_get.invoke(headValue);
// create a new StepAtomNode starting at the current value of 'head'.
FlowNode an = new StepAtomNode(currentFlowExecution, stageStepDescriptor, currentHead);
// now set this as the new head.
Method head_setNewHead = headValue.getClass().getDeclaredMethod("setNewHead", FlowNode.class);
head_setNewHead.setAccessible(true);
head_setNewHead.invoke(headValue, an);
// Create a new CpsStepContext, and as the constructor is protected, use reflection again
Constructor<?> declaredConstructor = CpsStepContext.class.getDeclaredConstructors()[0];
declaredConstructor.setAccessible(true);
CpsStepContext context = (CpsStepContext) declaredConstructor.newInstance(stageStepDescriptor,cpsThread,currentFlowExecution.getOwner(),an,null);
stageStepDescriptor.checkContextAvailability(context); // Good to check stuff I guess
// Create a new instance of the step, passing in arguments as a Map
Map<String, Object> stageArguments = new HashMap<>();
stageArguments.put("name", "mynutest");
Step stageStep = stageStepDescriptor.newInstance(stageArguments);
// so start the damn thing
StepExecution execution = stageStep.start(context);
// now that we have a callable instance, we set the step on the Cps Thread. Reflection to the rescue
Method mSetStep = cpsThread.getClass().getDeclaredMethod("setStep", StepExecution.class);
mSetStep.setAccessible(true);
mSetStep.invoke(cpsThread, execution);
// Finally. Start running the step
execution.start();

Clear the cache during runtime - Eclipse e4

I would like to clear the Eclipse e4 cache (The .metadata directory) during runtime.
There are lots of posts for clearing the cache by setting the checkbox in the run configurations but I can't find anything on clearing the cache in the code.
I'd prefer to use a method that has already been written (if there is one) rather than writing my own.
If I were to do this myself, I would do it during @PostContextCreate in the life cycle manager.
Is there a method that will do this for me or should I just delete the cache directory?
Update
Here is the issue I'm trying to work around.
https://bugs.eclipse.org/bugs/show_bug.cgi?id=430090#add_comment
To clear the cache at runtime I've overridden the ResourceHandler and added this to loadMostRecentModel.
final Method m = getClass().getSuperclass().getDeclaredMethod("getWorkbenchSaveLocation", new Class<?>[] {});
m.setAccessible(true);
final File workbenchSaveLocation = (File) m.invoke(this, (Object[]) null);
workbenchSaveLocation.delete();
I use reflection as the parent method is private. It would be better to do this instead of writing code to get the file myself as it ensures I always get the correct location.
First, deleting the .metadata folder can damage user data: preferences, launch configurations, who knows what else - it depends on the particular plug-in implementation.
Also, your updates may contain new bundles and fragments with new services and extensions.
And the user may rearrange views and do other things persisted with the workbench model.
Therefore, deleting the workbench model will not resolve all the issues; please consider the following:
restart after update to ensure all the new bundles/extensions/services are applied Howto restart an e4 RCP application
use Model Processors to manipulate the model after load http://blog.vogella.com/2010/10/26/processors-e4-model/
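For the Model Processor route, a minimal sketch could look like the following (the class name is made up; it would be registered via a processor element of the org.eclipse.e4.workbench.model extension point):
public class WorkbenchModelCleanupProcessor {
  @Execute
  public void cleanUp( MApplication application, EModelService modelService ) {
    // adjust or reset the parts of the persisted workbench model that cause trouble,
    // instead of deleting the whole .metadata directory on disk
  }
}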

Play Framework await() makes the application act weird

I am having some strange trouble with the method await(Future future) of the Controller.
Whenever I add an await line anywhere in my code, some GenericModels that have nothing to do with where I placed await start loading incorrectly, and I cannot access any of their attributes.
The weirdest thing is that if I change something in a completely different Java file anywhere in the project, Play will recompile, I guess, and from that moment it works perfectly, until I clean tmp again.
When you use await in a controller, Play does bytecode enhancement to split a single method across two threads. This is pretty cool, but definitely one of the 'black magic' tricks of Play1. And this is one place where Play often acts weird and requires a restart (or, as you found, some code change) - the other place it can act strange is when you change a Model class.
http://www.playframework.com/documentation/1.2.5/asynchronous#SuspendingHTTPrequests
To make it easier to deal with asynchronous code we have introduced
continuations. Continuations allow your code to be suspended and
resumed transparently. So you write your code in a very imperative
way, as:
public static void computeSomething() {
  Promise delayedResult = veryLongComputation(…);
  String result = await(delayedResult);
  render(result);
}
In fact, your code here will be executed in 2 steps, in 2 different threads. But as you see it, it's very transparent for your application code.
Using await(…) and continuations, you could write a loop:
public static void loopWithoutBlocking() {
  for(int i=0; i<=10; i++) {
    Logger.info(String.valueOf(i));
    await("1s");
  }
  renderText("Loop finished");
}
And using only 1 thread (which is the default in development mode) to process requests, Play is able to run these loops concurrently for several requests at the same time.
To respond to your comment:
public static void generatePDF(Long reportId) {
  Report report = Report.findById(reportId); // assumed lookup; the original snippet did not show where 'report' comes from
  Promise<InputStream> pdf = new ReportAsPDFJob(report).now();
  InputStream pdfStream = await(pdf);
  renderBinary(pdfStream);
}
and ReportAsPDFJob is simply a Play Job class with doJobWithResult overridden - so it returns the object. See http://www.playframework.com/documentation/1.2.5/jobs for more on jobs.
Calling job.now() returns a future/promise, which you can use like this: await(job.now())
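For completeness, a rough sketch of what such a job could look like (the body is a placeholder; the real implementation renders the report to a PDF stream):
public class ReportAsPDFJob extends Job<InputStream> {
  private final Report report;

  public ReportAsPDFJob(Report report) {
    this.report = report;
  }

  @Override
  public InputStream doJobWithResult() throws Exception {
    // the expensive PDF rendering happens here on a background thread;
    // this placeholder just returns an empty stream
    return new ByteArrayInputStream(new byte[0]);
  }
}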

Apache JCI FilesystemAlterationMonitor processes changes for existing folder contents on startup

I am using Apache JCI's FAM (FilesystemAlterationMonitor) in a Java OSGi service to monitor and handle changes in the file system. Everything seems to be working fairly well, except that whenever I start the service (which starts FAM using the code below), FAM reports events for ALL the contents that already exist in the directory.
Currently I am watching /tmp
/tmp includes a subtree: /tmp/foo/bar/cat/dog
Every time I start the service, which starts FAM, it reports DirectoryCreate events for:
/tmp/foo
/tmp/foo/bar
/tmp/foo/bar/cat
/tmp/foo/bar/cat/dog
Even if no changes have been made to any part of that subtree.
Code run on service activation:
File watchFolder = new File("/tmp");
watchFolder.mkdirs();
fam = new FilesystemAlterationMonitor();
fam.setInterval(1000);
fam.addListener(watchFolder, listener);
fam.start();
// I've already tried adding:
listener.waitForFirstCheck();
Listener example:
private FileChangeListener listener = new FileChangeListener() {
  public void onDirectoryChange(File pDir) { System.out.println(pDir.getAbsolutePath()); }
  public void onDirectoryCreate(File pDir) { System.out.println(pDir.getAbsolutePath()); }
  ...
};
Yes, that's one very annoying feature of JCI. When monitoring is started, it will notify you of all the files and directories it finds with calls to onXxxCreate(). I think you have the following options:
After starting the monitoring, wait for some time (couple of seconds) in your FileChangeListener callback implementation before you actually process the events coming from JCI. That's what I did in a project and it works fairly well, although there is the possibility that you miss an actual file creation that just happens within the "grace period"
Take the sources of JCI and modify them to use two new event methods onDirectoryFound(File) and onFileFound(File) that will only be fired when files and directories are found on startup of the monitoring
Take a look at java.nio.file.WatchService that comes with Java 7. IMO the best option, as it uses native methods internally in order to be notified of changes by the OS, instead of starting a thread and checking periodically. With JCI, you may get delays in the range of several seconds until changes are propagated to your callbacks
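A rough sketch of the WatchService approach from the last option (note that, unlike FAM, it only reports events for the registered directory itself; sub-directories would have to be registered separately):
static void watchForChanges() throws IOException, InterruptedException {
  Path watchFolder = Paths.get("/tmp");
  WatchService watchService = FileSystems.getDefault().newWatchService();
  watchFolder.register(watchService, StandardWatchEventKinds.ENTRY_CREATE,
      StandardWatchEventKinds.ENTRY_MODIFY, StandardWatchEventKinds.ENTRY_DELETE);
  while (true) {
    WatchKey key = watchService.take(); // blocks until the OS reports events
    for (WatchEvent<?> event : key.pollEvents()) {
      System.out.println(event.kind() + ": " + watchFolder.resolve((Path) event.context()));
    }
    if (!key.reset()) {
      break; // the watched directory is no longer accessible
    }
  }
}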
Forget about WatchService. It is not intuitive and there are issues with it when trying to see if it can detect that the folder it is monitoring is deleted or changed. I would stay far away from it. I have worked with Watcher but prefer Apache IO much more. I believe Camel uses it as well.

How to disable Java security manager?

Is there any way to completely disable the Java security manager?
I'm experimenting with the source code of db4o. It uses reflection to persist objects, and it seems that the security manager doesn't allow reflection to read and write private or protected fields.
My code:
public static void main(String[] args) throws IOException {
  System.out.println("start");
  new File( DB_FILE_NAME ).delete();
  ObjectContainer container = Db4o.openFile( DB_FILE_NAME );
  String ob = new String( "test" );
  container.store( ob );
  ObjectSet result = container.queryByExample( String.class );
  System.out.println( "retrieved (" + result.size() + "):" );
  while( result.hasNext() ) {
    System.out.println( result.next() );
  }
  container.close();
  System.out.println("finish");
}
Output:
start
[db4o 7.4.68.12069 2009-04-18 00:21:30]
AccessibleObject#setAccessible() is not available. Private fields can not be stored.
retrieved (0):
finish
This thread suggests modifying the java.policy file to allow reflection, but it doesn't seem to work for me.
I'm starting JVM with arguments
-Djava.security.manager -Djava.security.policy==/home/pablo/.java.policy
so that the specified policy file will be the only policy file used.
The file looks like this:
grant {
  permission java.security.AllPermission;
  permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
};
I have spent the last 3 hours on this and have no idea how to make it work.
Any help appreciated.
You could try adding this to the main() of your program:
System.setSecurityManager(null);
Worked for me for a "trusted" WebStart application when I was having security manager issues. Not sure if it will work for your db4o case, but it might be worth a try.
EDIT: I'm not suggesting that this is a general solution to security manager problems. I was just proposing it as a way to help debug the original poster's problem. Clearly, if you want to benefit from a security manager then you should not disable it.
Do you really have two '=' signs in your java.security.policy command line option? That won't work. Make sure you are setting the property as
-Djava.security.policy=/home/pablo/.java.policy
To actually disable the SecurityManager, simply leaving off the java.security.manager system property altogether should be enough.
Update: As I was reading the documentation for policy files to learn more about the "==" syntax, I noticed that unless the policy file is in the current working directory, it needs to be specified as a URL (including scheme). Have you tried prefixing the policy path with the "file:" scheme?
I was also puzzled because (assuming you are running as user "pablo"), it looks like that policy should be loaded by default from your home directory, so you shouldn't need to specify it at all. On the other hand, if you are not running as the user "pablo", maybe the file is not readable.
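For example, with the path from the question, the relevant options with the policy file specified as a URL would look something like this:
-Djava.security.manager -Djava.security.policy=file:/home/pablo/.java.policy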
I found this example of how to make private fields and methods accessible to your code. Basically, it boils down to the use of Field.setAccessible(true) and Method.setAccessible(true).
Field example:
Field privateStringField = PrivateObject.class.getDeclaredField("privateString");
privateStringField.setAccessible(true);
Method example:
Method privateStringMethod = PrivateObject.class.getDeclaredMethod("getPrivateString");
privateStringMethod.setAccessible(true);
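Once accessible, the members can be read or invoked as usual, for example (privateObject being a hypothetical instance of PrivateObject):
String fieldValue = (String) privateStringField.get(privateObject);
String returnValue = (String) privateStringMethod.invoke(privateObject);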
You could also look at using Groovy with your Java code, as it (currently) circumvents many of the access-level restrictions of Java code. However, this message board posting seems to suggest that this 'feature' may change in future versions of Groovy.
