I wrote some Hudson plugins whose fields and class names have changed a bit over time (mostly refactoring to clean up the code). Generally speaking everything works fine, but on startup Hudson will unmarshal the previous plugin data stored in the build.xml files.
As most of us who have written Hudson plugins know, Hudson uses XStream (or a similar tool) to persist Java objects by marshalling each object's class, state, and fields to the build.xml file. If the object changes and a field is renamed or removed, an exception similar to "Cannot find field x because it does not exist" is thrown.
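To make the failure concrete, here is roughly the kind of change that triggers it (class and field names are just illustrative):

// Before the refactoring, XStream persisted the field into build.xml:
public class MyAction {
    private String oldName; // stored as <oldName>...</oldName>
}

// After the refactoring, XStream finds <oldName> in an old build.xml but no
// matching field on the class, and unmarshalling fails:
public class MyAction {
    private String newName;
}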
My question is: is there a way to clean up that plugin data so that when I install a new version of the plugin, old information will not be persisted and will not cause these unmarshalling exceptions?
Thank you
After researching the problem I was trying to resolve, I found that Hudson actually has a built-in resolution for this.
To resolve the startup issues, you can do the following:
1. Go to the Manage Hudson link.
2. If Hudson failed to unmarshal objects, a link appears at the top of the page that reads 'You have data stored in an older format and/or unreadable data.'
3. Click the Manage button.
4. A list of all failures will be shown.
5. Press the Discard Unreadable Data button.
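For the plugin code itself, a pattern I've seen in Hudson/Jenkins plugins (a sketch only, with illustrative names) is to keep the old field around and migrate its value in readResolve(), which XStream invokes after unmarshalling:

public class MyAction {
    private String oldName; // deprecated: kept only so old build.xml files still load
    private String newName;

    // Called by XStream after unmarshalling; move the old value over and clear
    // it so it is not written back on the next save.
    private Object readResolve() {
        if (oldName != null) {
            newName = oldName;
            oldName = null;
        }
        return this;
    }
}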
Background
I'm adding database migrations to an existing project using the open source project mongobee. The actual migrations are all handled by mongobee and its concepts of changelogs and changesets. Part of this enhancement involves checking the current MongoDB database migration version at runtime and comparing it against the version expected by the Java application. The reasoning behind this is that we'd like to have an installation of our product download code updates (new *.wars), and, upon logging in to the new version of the application, the admin user would be prompted to update the database if their database version is lower than expected.
We're currently using Maven to package and build our software.
Problem
The one area that's nagging me is how to handle tagging the database version the Java source code expects. I'd like to avoid manually entering this each time we do a build and add a migration.
My proposed solution may not be ideal. My initial thought is to use a convention for the changelog file and class names like "v0001_first_migration" and then, at build time, perhaps use the Maven AntRun plugin to call a separately compiled Java class that traverses the migration changelog directory, looks for the latest migration number, and stores that result in a resource file, probably XML. The application can then read that XML file at runtime to get the database version it expects.
1 - Is this feasible?
2 - Is there a way to do something like this in pure Maven without using AntRun?
3 - Is there another option to accomplish this easier?
As an alternative to my proposed solution above, I used the Reflections project found here: https://github.com/ronmamo/reflections and iterated through all of the class names in my migrations directory that follow the aforementioned convention (v0001_first_migration, v0002_second_migration). I parse those using a regex to get an Integer and do comparisons to determine the migration version expected by the app. The database side was a lot easier, so I won't go over that.
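A sketch of that scan (the package name and helper class are my own, hypothetical, names):

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.reflections.Reflections;
import org.reflections.scanners.SubTypesScanner;

public final class MigrationVersionScanner {

    // Matches the naming convention above, e.g. "v0001_first_migration".
    private static final Pattern CHANGELOG = Pattern.compile("^v(\\d+)_.*");

    // Returns the highest migration number found in the (hypothetical)
    // com.example.migrations package.
    public static int expectedVersion() {
        Reflections reflections =
                new Reflections("com.example.migrations", new SubTypesScanner(false));
        int max = 0;
        for (String fqcn : reflections.getAllTypes()) {
            String simpleName = fqcn.substring(fqcn.lastIndexOf('.') + 1);
            Matcher m = CHANGELOG.matcher(simpleName);
            if (m.matches()) {
                max = Math.max(max, Integer.parseInt(m.group(1)));
            }
        }
        return max;
    }
}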
Now, instead of using Ant tasks I'm just popping the expected app migration version into a singleton (gross I know) or alternatively just calling the function that finds the expected app migration depending on where it's used.
WHY a singleton? The parsing process is expensive and I expect to use this data on each REST call that wants to touch our database. I created the singleton in the REST layer because of some limitations in our current project. The better way here, in the case of Tomcat, would be to create a ServletListener and assign the migration version as an attribute of the ServletContext. Due to the way our REST layer works, I'd be modifying a TON of function signatures to pass in the @Context ServletContext. We don't have Dependency Injection containers either, so my options were limited if I didn't want to touch almost every action in the REST layer. The singleton gets the expected app migration version at startup and that's it, so it's still easy to test with mocks and there are no concurrency issues that I can see.
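A minimal sketch of that startup-only singleton, using the initialization-on-demand holder idiom so the expensive scan runs exactly once (names reuse the hypothetical scanner above):

public final class ExpectedDbVersion {

    private ExpectedDbVersion() {}

    // The classpath scan runs once, when Holder is first loaded; the JVM's
    // class-loading guarantees make this thread-safe without explicit locking.
    private static final class Holder {
        static final int VERSION = MigrationVersionScanner.expectedVersion();
    }

    public static int get() {
        return Holder.VERSION;
    }
}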
I'm thinking of developing a Maven plugin that will make your Maven build output messages at INFO level and above only if the build fails.
The context is that I'd like to configure Maven to log at WARN level by default and disable all of my company's logs (this will be done through logback configuration), and I'd like to have a plugin that talks to another, in-memory logback appender to get the entire log and present it to the user in case the build fails, since at that point all the data is relevant.
My question is whether, and how, I can get that "notification" that the build failed.
For those interested, my intention, which I still need to validate, is then to programmatically change the consoleAppender back to INFO and write everything that was accumulated to it.
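A sketch of that flush, assuming the accumulated log sits in a logback CyclicBufferAppender registered on the root logger under the name "BUFFER" (the appender name and setup are my assumptions):

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.read.CyclicBufferAppender;
import org.slf4j.LoggerFactory;

public class BufferedLogFlusher {

    // Drops the root logger back to INFO and replays everything accumulated
    // in the in-memory appender to the console.
    public static void flushOnFailure() {
        Logger root = (Logger) LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        root.setLevel(Level.INFO);
        @SuppressWarnings("unchecked")
        CyclicBufferAppender<ILoggingEvent> buffer =
                (CyclicBufferAppender<ILoggingEvent>) root.getAppender("BUFFER");
        for (int i = 0; i < buffer.getLength(); i++) {
            System.out.println(buffer.get(i).getFormattedMessage());
        }
    }
}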
I was asked about my motivations, so here are the two.
The first is that I think (I'm still crunching data to see if I'm right) that our build logs are so verbose that it's affecting our build times.
The second is that some of our tests cause exceptions to be thrown as part of their normal operation, which confuses the logs. I'd still like the entire log in case the build fails, so that developers have all the info they need to debug their failure.
First, I don't understand your intention. Why not use a continuous integration solution, which records the whole output and can store it for a period of time? If you need to analyze a build, you can take a look there. Apart from that, I don't understand your need to do what you described, or what the advantage would be...
Furthermore, a Maven plugin will simply not work for your intention, because a Maven plugin is bound to the lifecycle.
If you really need something outside the Maven lifecycle, you could take a look at the EventSpy, which could be used in the way you described, but it's an extension that must be put into the lib/ext folder of your Maven installation. It's best to use AbstractEventSpy as the parent for your own implementation.
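A minimal sketch of such a spy; the failure handling in the middle is the hypothetical part, the event types come from Maven core:

import javax.inject.Named;
import org.apache.maven.eventspy.AbstractEventSpy;
import org.apache.maven.execution.ExecutionEvent;

// Must be made visible to Maven core, e.g. packaged and dropped into lib/ext.
@Named("failure-spy")
public class FailureSpy extends AbstractEventSpy {

    @Override
    public void onEvent(Object event) throws Exception {
        if (event instanceof ExecutionEvent) {
            ExecutionEvent ee = (ExecutionEvent) event;
            if (ee.getType() == ExecutionEvent.Type.ProjectFailed
                    || ee.getType() == ExecutionEvent.Type.MojoFailed) {
                // This is where the buffered log from the question could be
                // replayed back to the console.
                System.err.println("Build failed: " + ee.getProject().getArtifactId());
            }
        }
    }
}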
I am generating the Javadoc of my project using maven (with the javadoc:javadoc goal).
I have also configured the Javadoc Location property of my project to the folder where maven generates the Javadoc. Then I can easily see the full Javadoc of a class from the Eclipse Javadoc view by selecting "Open Attached Javadoc in a Browser".
However, every time I do some changes in the documentation I need to explicitly recreate the documentation with maven, before I can see the documentation updates in the browser.
Is there a way I can instruct Eclipse to automatically generate the Javadoc files when a file is saved?
I know this is probably not a good idea when not focused on writing documentation, since it may slow Eclipse down a bit. However, when my main task is writing documentation, a bit of automation in this sense would be appreciated. I guess the right solution would be to update the documentation of only the files that were saved (rather than triggering the whole Javadoc generation process), but I do not know if such a thing is possible.
If you're using the Maven Integration for Eclipse (m2e), you can set up a plugin execution filter so that m2e knows you want a particular plugin execution to also be executed in Eclipse. You would want to have the plugin run in the background:
<execute>
  <runOnIncremental>true</runOnIncremental>
</execute>
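For context, that flag lives inside an m2e lifecycle-mapping entry in the pom. A sketch for the Javadoc plugin (the version range is illustrative):

<!-- Goes under build > pluginManagement > plugins. -->
<plugin>
  <groupId>org.eclipse.m2e</groupId>
  <artifactId>lifecycle-mapping</artifactId>
  <version>1.0.0</version>
  <configuration>
    <lifecycleMappingMetadata>
      <pluginExecutions>
        <pluginExecution>
          <pluginExecutionFilter>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-javadoc-plugin</artifactId>
            <versionRange>[2.0,)</versionRange>
            <goals>
              <goal>javadoc</goal>
            </goals>
          </pluginExecutionFilter>
          <action>
            <execute>
              <runOnIncremental>true</runOnIncremental>
            </execute>
          </action>
        </pluginExecution>
      </pluginExecutions>
    </lifecycleMappingMetadata>
  </configuration>
</plugin>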
The flip side of all this is that it will run your entire Javadoc execution whenever you save something; "incremental" is misleading in that sense. It may clog up your Eclipse, and not just "a bit", as you say in the updated question. Every plugin execution that does more than the absolutely trivial should be heavily scrutinized.
A truly incremental solution will not come from Maven, since it has no notion of building only parts of a project. Rather, you would need Eclipse to do this directly. I think the same thing happens for Java compilation: it is done by Eclipse itself, incrementally. However, according to the Javadoc FAQ:
A9. Can I incrementally build a document from different runs of Javadoc?
Basically no (...)
We call this incremental build and are considering it for a future release.
But nothing is impossible :)
I have a big Java project with thousands of compilation warnings. I would like to find a way to prevent developers from committing files with warnings, so that gradually all warnings would disappear. If I commit a file with a compilation error, Eclipse displays an error message, but I couldn't find any way to do the same with warnings. The closest thing I found was the Commit warning checker (http://commitwarncheck.sourceforge.net/), but that is not really integrated into the commit process, it just provides a view. Is there any better solution?
I see 2 options. First, at least with Subclipse, there's an Eclipse preference for this: Window / Preferences / Team / SVN / "Commit resources with warnings". There's one for "errors" as well. Both can be set to "Yes", "No", or "Prompt". However, this will require you to make sure that your entire team keeps these options set as you'd expect - as well as making sure that they have all of the other Eclipse preferences set to generate the same errors / warnings.
(Using Subclipse 1.6.18.)
Another option is to make use of SVN commit hooks, essentially the beginnings of a Continuous Integration (CI) process. You could actually check for a limited set of things, and allow/deny the commit at that time, but I'd start worrying about commit performance. A better option might be a true CI process that runs a build (either scheduled, or potentially even per-commit) - and emails or otherwise alerts the developer if an issue is detected.
The complication with this latter option is repeating the Eclipse build - including detection of all Eclipse-configured errors and warnings - in a scripted manner. (If anyone finds a great way of doing this, please let me know! Eclipse provides a scriptable option for using its code formatter, but I've not seen a similar option for checking for errors / warnings using the checks provided by Eclipse.)
Otherwise, you'll probably be better off starting to migrate over to tools such as Checkstyle and FindBugs that work equally well both inside and outside of Eclipse. (However, relating back to my own interest above, I've not found any combination of a few tools - including these - that can at least match the same checks that Eclipse provides.) Combine this with Maven / m2e, providing a common build configuration / process that can be shared by both Eclipse and your CI system, and you should be in good shape.
I'm writing a Java application that needs a lot of static data that is stored in many enum types. I would like a user-friendly way to customize this data using, for example, XML or JSON files, but since I'm not allowed to do that directly with enums, I was looking for a way to do it elegantly.
Maybe a good solution would be to have a separate Java program that reads the XML files and produces the Java sources, which are then compiled with the rest of the sources. My doubt is how to automate this process in a standalone way (e.g. with Ant?) and how to integrate it seamlessly with Eclipse so that it is done automatically when I'm working on the project. Does anything similar to what I'm looking for already exist? Any suggestions for solving my problem?
Thanks!
If the items and the overall structure are somehow fixed (and what varies most is the values of the attributes), you could consider defining the enum with one entry for each of your items and let the enum populate its own constants with data read from an external source (XML / JSON) -- at load time or on demand.
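A sketch of that approach; a properties file is used here only to keep the example self-contained (the same idea works with an XML or JSON parser), and all names and attributes are illustrative:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public enum Monster {
    GOBLIN, ORC, DRAGON;

    private int hitPoints;

    // Runs once at load time, after the constants above have been created,
    // and populates each constant from the bundled data file.
    static {
        Properties data = new Properties();
        try (InputStream in = Monster.class.getResourceAsStream("/monsters.properties")) {
            data.load(in);
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
        for (Monster m : values()) {
            m.hitPoints = Integer.parseInt(data.getProperty(m.name() + ".hitPoints", "0"));
        }
    }

    public int getHitPoints() { return hitPoints; }
}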
Create a project whose sole job is to generate Java from your sources.
Make sure that the generation phase is done by Ant.
Now wrap this project into Eclipse and use a custom Ant builder that calls the target in your already existing build.xml.
This is a standard part of our dev infrastructure, so this definitely works.
You can write a maven plugin that generates the code. There are some plugins that do that. It won't work automatically, but you can connect it to the standard maven lifecycle so it gets executed just before compile.
I just did something like that recently.
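For illustration, binding a (hypothetical) generator goal so it runs just before compile would look roughly like this in the pom:

<!-- Hypothetical plugin coordinates; the point is the phase binding. -->
<plugin>
  <groupId>com.example</groupId>
  <artifactId>enum-source-generator-maven-plugin</artifactId>
  <version>1.0.0</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>generate</goal>
      </goals>
    </execution>
  </executions>
</plugin>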
You can have Ant integrate seamlessly with Eclipse to achieve that:
In Eclipse open project properties, go to "Builders", click "New...", select "Ant Builder", select a build file, go to "Targets" tab and click "Set Targets..." for "Auto Build". Select the desired target and you are done. The target will run every time you save a source file (if "Build Automatically" is selected).
Have you considered including the XML files in your jar, and loading them on startup into maps that use the enum as a key?
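A sketch of that variant, assuming a plain enum Monster { GOBLIN, ORC, DRAGON } with no data of its own; the file name and property layout are assumptions:

import java.io.IOException;
import java.io.InputStream;
import java.util.EnumMap;
import java.util.Map;
import java.util.Properties;

public final class MonsterData {

    // Loads per-constant attributes from a file bundled in the jar into a map
    // keyed by the enum.
    public static Map<Monster, Integer> loadHitPoints() throws IOException {
        Properties props = new Properties();
        try (InputStream in = MonsterData.class.getResourceAsStream("/monsters.properties")) {
            props.load(in);
        }
        Map<Monster, Integer> hitPoints = new EnumMap<>(Monster.class);
        for (Monster m : Monster.values()) {
            hitPoints.put(m, Integer.valueOf(props.getProperty(m.name() + ".hitPoints", "0")));
        }
        return hitPoints;
    }
}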