How do you version your projects and manage releases?

Our situation is as follows, but I'm curious about this problem in any situation.
We have a framework consisting of 4 projects:
beans
util
framework
web
We also have modules that need a version and depend on a version of beans and util.
Finally we have a customer project that consists of a specific version of the core projects and one or more modules.
Is there a standard way to version these projects?
What seems simple to me is becoming really complicated as we try to deliver releases to QA and then manage ongoing development alongside maintenance of the release (release = tag and possible branch).
I kind of prefer the following:
1.2.0 - major and minor versions + release.
1.2.1 - next release
1.2.0_01 - bug fix in 1.2.0 release (branch)
etc.
Any ideas?

We use major.minor.bugfix. A major release only happens for huge changes. A minor release is called for when there is an API change. All other releases are bugfix releases. There's definitely utility in having a build or revision number there too for troubleshooting, although if you've got really rigorous CM you might not need to include it.
Coordinating among the versions of all these projects can be done really well with help from tools like Apache Ivy or Maven. The build of one project, with its own version number, can involve the aggregation of specific versions of (the products of) other projects, and so your build files provide a strict mapping of versions from the bottom up. Save it all in [insert favorite version control tool here] and you have a nice history recorded.
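If you want the precedence to be mechanical rather than just a convention, the comparison is easy to encode; a minimal sketch (hypothetical class, not tied to any of the projects above) of a comparable major.minor.bugfix version type:

```java
// Minimal sketch of a comparable major.minor.bugfix version (hypothetical, for illustration).
public final class Version implements Comparable<Version> {
    private final int major, minor, bugfix;

    public Version(int major, int minor, int bugfix) {
        this.major = major;
        this.minor = minor;
        this.bugfix = bugfix;
    }

    // Parses strings such as "1.2.0"; rejects anything else.
    public static Version parse(String s) {
        String[] parts = s.split("\\.");
        if (parts.length != 3) {
            throw new IllegalArgumentException("Expected major.minor.bugfix, got: " + s);
        }
        return new Version(Integer.parseInt(parts[0]),
                           Integer.parseInt(parts[1]),
                           Integer.parseInt(parts[2]));
    }

    @Override
    public int compareTo(Version o) {
        if (major != o.major) return Integer.compare(major, o.major);
        if (minor != o.minor) return Integer.compare(minor, o.minor);
        return Integer.compare(bugfix, o.bugfix);
    }

    @Override
    public String toString() {
        return major + "." + minor + "." + bugfix;
    }
}
```

With something like this, deciding whether a customer project is running an older framework than a module requires is a simple compareTo call.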

I use {major}.{minor}.{buildday}.{sequential}. For Windows, we use the utilities stampver.exe and UpdateVersion.exe for .NET projects, which handle that mostly automatically.

There is no standard version numbering system. Common themes are to have a major, minor and build number, and occasionally a point number as well (1.2.2.1, for example, for version 1.2 point release 2 build 1). The meaning of the version numbers is highly flexible, though a frequent choice is to guarantee backwards compatibility between minor versions or point releases.
Releases are probably best done by labeling a set of source controlled files as long as your source control allows this. Recreating a release is then as simple as syncing to the label and building, which is very useful :)

In the automated build system I'm currently using, I version with Major.Minor.Build.X, where Build is incremented every time we hit system test, and X is the last Subversion revision number of the repo the code is being built from. It seems to work quite nicely with Subversion, as we can easily get back to the codebase of a particular build if the need arises.
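If the build stamps that number into the JAR, reading it back for troubleshooting is cheap; a minimal sketch, assuming the build writes the standard Implementation-Version attribute into META-INF/MANIFEST.MF:

```java
// Reads the version the build stamped into the JAR manifest.
// Assumes the build sets Implementation-Version; returns null when run outside a packaged JAR.
public final class BuildInfo {
    public static String version() {
        Package pkg = BuildInfo.class.getPackage();
        return pkg == null ? null : pkg.getImplementationVersion();
    }

    public static void main(String[] args) {
        System.out.println("Running build: " + version());
    }
}
```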

I use a variation on the Linux kernel version numbering system:
major.minor.bugfix
where even minor numbers indicate a somewhat stable release that may be distributed at least for testing, and odd minor numbers indicate an unstable/untested release that shouldn't be distributed beyond developers.
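A trivial helper makes the convention mechanical rather than tribal knowledge; a sketch (hypothetical class name):

```java
// Hypothetical helper encoding the Linux-kernel-style convention described above:
// even minor = stable/distributable, odd minor = developer-only.
public final class Stability {
    public static boolean isStableRelease(int minor) {
        return minor % 2 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isStableRelease(2)); // true  -> 1.2.x may be distributed
        System.out.println(isStableRelease(3)); // false -> 1.3.x stays with developers
    }
}
```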

Where possible, I prefer to have projects versioned with the same build numbering, unless they are shared. It allows for more consistency between moving parts and it's easier to identify which components constitute a product release.
As workmad3 has stated, there's really no common rule for build numbers. My advice is to use something that makes sense for your team/company.
Some places I've worked at have aligned build numbering with project milestones and iterations,
e.g.: Major = Release or Milestone, Minor = Iteration, Build = Build number (from the project start or from the start of the iteration), Revision = If the build has to be rebuilt (or branched).

One of the most common conventions is major.minor.bugfix, with an additional suffix indicating a build number or pre-release designation (e.g. alpha, beta, etc.).
My team numbers builds according to project milestones - a build is handed over to our QA group at the end of a development iteration (every few weeks). Interim CI builds are not given release numbers; because we use Maven, those builds carry a SNAPSHOT suffix instead.
Whatever you decide, be sure to document it and make sure that everyone understands it. I also suggest you document and consistently apply the release branching policy or it can quickly get confusing for everyone. Although with only 4 projects it should be pretty easy to keep track of what's going on.

You didn't mention if any of the projects access a database, but if any do, that might be another factor to consider. We use a major.minor.bugfix.buildnumber scheme similar to others described in answers to this question, with approximately the same logic, but with the added requirement that any database schema changes require at least a minor increment. This also provides a naming scheme for your database schemas. For example, versions 1.2.3 and 1.2.4 can both run against the "1.2" database schema, but version 1.3.0 requires the "1.3" database schema.
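Deriving the schema name then becomes a one-liner; a minimal sketch (hypothetical helper) of the major.minor truncation described above:

```java
// Hypothetical helper: derives the database schema name from the release version,
// relying on the convention that any schema change forces at least a minor increment.
public final class SchemaNames {
    public static String schemaFor(String version) {
        String[] parts = version.split("\\.");
        return parts[0] + "." + parts[1]; // keep major.minor only
    }

    public static void main(String[] args) {
        System.out.println(schemaFor("1.2.3")); // "1.2"
        System.out.println(schemaFor("1.2.4")); // "1.2" - same schema as 1.2.3
        System.out.println(schemaFor("1.3.0")); // "1.3" - requires the new schema
    }
}
```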

Currently we have no real versioning. We use the SVN build number and the release date
(a tag name looks like release_081010_microsoft, for example).
Older products use major.minor.sub version numbering:
Major never changes.
Minor changes with every release/feature release, every 6 months.
Sub covers everything that doesn't affect the feature set - mostly bugfixes.

SonarQube: Saving the number of incoming dependencies as metric

SonarQube provides a great Dependencies view that lists all known versions of a project and the projects each version is used by.
I want to save the number of projects a project is used by as a metric. It's useful to see that a project is used by X projects, and the differences between versions.
Overall usage across ALL versions would be enough, but detailed information about every known version would be useful as well.
Is there a way to access this information during analysis and save it as a metric? Sensor/Decorator?
We found DecoratorContext and the methods getIncomingDependencies and getOutgoingDependencies, but getIncomingDependencies returns nothing.
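The shape of what we tried looks roughly like this (a sketch against the old batch-side Decorator extension point; the metric itself is hypothetical and its registration in a Metrics extension is omitted):

```java
import org.sonar.api.batch.Decorator;
import org.sonar.api.batch.DecoratorContext;
import org.sonar.api.resources.Project;
import org.sonar.api.resources.Resource;

// Sketch of the decorator we experimented with. getIncomingDependencies()
// is the call that comes back empty for us; MyMetrics.USED_BY is hypothetical.
public class UsedByDecorator implements Decorator {

    @Override
    public boolean shouldExecuteOnProject(Project project) {
        return true;
    }

    @Override
    public void decorate(Resource resource, DecoratorContext context) {
        int usedBy = context.getIncomingDependencies().size(); // always 0 for us
        // context.saveMeasure(MyMetrics.USED_BY, (double) usedBy);
    }
}
```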
Due to the following Jira ticket (http://jira.sonarsource.com/browse/SONAR-6553), I would discourage you from investing time in developing this feature.

FHIR Migration and Backward Compatibility

As system implementors, we face a dilemma when we migrate from one version of FHIR to the next. We started off using FHIR 0.0.81 and then moved to SVN revision 2833 on Sept. 10, 2014, to incorporate a bug fix. As suggested, we downloaded the Java code from SVN trunk and followed the directions on the FHIR Build Process page.
FHIR 0.0.82 Incompatibilities
Now that FHIR 0.0.82 is available, we want to upgrade to a released version. After downloading 0.0.82, however, we noticed that several resources such as Appointment that were in trunk rev2833 are not in release 0.0.82. This leads to our first questions:
What does trunk contain if it does not contain the latest code destined for the next release?
Should anyone ever use what's in trunk?
Is there a release branch from which 0.0.82 was created?
Trunk Incompatibilities
Since our code has dependencies on resources introduced on trunk but not included in 0.0.82, we have to continue to check out FHIR directly from SVN. On Oct. 21, 2014, we downloaded the SVN revision 3218 Java code. When we integrated that code into our system, we discovered numerous compatibility issues. Here are some of them:
Various enum values changed from lowercase to uppercase, including Patient.AdministrativeGender and HumanName.NameUse. Though conforming to the Java naming convention is a good idea, changing fundamental data types breaks compilation.
Method names have changed, also resulting in compilation errors. We also discovered simultaneous name swaps. For example, in the HumanName class the old setTextSimple(String) is now setText(String), and the old setText(StringType) is now setTextElement(StringType). Both the name and parameter type of setText() have changed, making migration error prone because one has to decide at each use whether to change the method or its parameter (see the before/after sketch following this list).
The ResourceReference resource type has changed its class name. In the FHIR model package alone, 859 occurrences of ResourceReference in 61 files were affected. This does not include changes that rippled through other FHIR packages, or changes that will ripple through our application code and our database schemas.
We notice several new resources in the rev3218 trunk code, including NewBundle. Previously, we had suggested that bundles should be resources, so it's great to see this change. Since trunk is not backward compatible with the 0.0.8x releases, however, I'm not sure if we will have to support both the old and new ways of parsing and composing JSON and XML bundles.
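To illustrate the setText hazard, here is a before/after sketch of a single call site; the method names are as reported above, while the surrounding code is hypothetical and assumes the FHIR model classes (HumanName, StringType) are on the classpath:

```java
// Illustration only - this cannot compile against both codebases at once,
// so the old API calls are shown as comments.
HumanName name = new HumanName();

// 0.0.8x (DSTU1-era) API:
//   name.setTextSimple("John Smith");           // took a plain String
//   name.setText(new StringType("John Smith")); // took a StringType

// Trunk API after the rename:
name.setText("John Smith");                        // setText now takes a String
name.setTextElement(new StringType("John Smith")); // old setText(StringType) became setTextElement
```

At each such call site, a migrating developer has to decide whether the fix is renaming the method or changing the argument type, which is exactly where subtle mistakes creep in.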
To put a finer point on things, it's important to recognize that some of the above FHIR changes not only affect compilation, but could easily introduce subtle bugs at runtime. In addition, the FHIR changes could require database schema changes and data migration in some applications. For example, our application saves JSON resource streams in a database. Something as simple as changing an enum value from "male" to "MALE" requires migration utilities that update existing database content.
Going Forward
We are investing heavily in FHIR; we want it to succeed and to be adopted widely as a standard. In order for that to occur, issues of backward compatibility and version migration need to be addressed. In that vein, any light that can be shed on the following questions will move us all forward:
What is the purpose of the 0.0.8x line of code? Who are its target users?
What is the purpose of the code in trunk? Who are its target users?
Will users of 0.0.8x ever be expected to migrate to the trunk codebase?
If so, what migration strategy will be used to address the many incompatibilities between the codebases?
What is the deprecation policy for code in each codebase?
What level of backward compatibility can be expected from revision to revision in the code in trunk?
Is there a FHIR roadmap that system developers can use to plan their own development cycles?
Thanks,
Rich C
My apologies for not documenting the way versioning affects the Java reference implementation more. I'll do so. I will assume that you are familiar with the versioning policy here: http://hl7-fhir.github.io/history.html
There are two versions of FHIR extant at the moment. The first is DSTU 1. This is a fork in SVN ("dstu1"), and is only changed for significant bug reports. The reference implementation there is maintained and kept backwards compatible. The second version is the trunk version, in which we are preparing for the second DSTU release. Trunk is highly unstable at the moment - changing constantly, and we sometimes reverse changes several times as we consider various options in committee. Further, there are several large breaking changes between DSTU1 and trunk, and more are coming. So you should not expect that switching between DSTU1 and trunk will be painless. Nor should implementers be using trunk unless they're really bleeding edge (and tightly connected, e.g. on the implementers' Skype channel). When trunk is stable, and we think it's worth releasing an implementers beta, we update the versions and version history, and make a release here: http://hl7.org/implement/standards/FHIR-Develop/ and release a Maven package for that version.
In the trunk, since there are many changes being made, we also changed the constants to uppercase, and flipped the way that get/set properties were generated. Agree that this has a price, but there was already a price to pay for switching from DSTU1 to trunk. And when I do a beta release (soon, actually), I'll update the Java reference implementation number and so forth. Note that the Java constants went to uppercase, but the wire format constants did not change, so stored json streams are fine (though they are broken by many other changes in the specification)
Given the scope of the changes between DSTU 1 and trunk (there is no list of these yet; I will have to prepare one when I update the beta), you should expect extensive rework for the transition. Presently, I maintain a single source base that implements a server for both (in Pascal, http://github.com/grahamegrieve/fhirserver), but I suspect that this approach is about to be broken by the change that NewBundle represents.
So, specific answers:
What is the purpose of the 0.0.8x line of code? Who are its target users?
Supporting users of the existing DSTU1 specification
What is the purpose of the code in trunk? Who are its target users?
Preparing to be DSTU 2. It should start to be more stable in a few weeks' time - having started making backwards-incompatible changes, we are trying to get as many of them done as possible now
Will users of 0.0.8x ever be expected to migrate to the trunk codebase?
Yes, when DSTU 2 is released - or at least, when we start having connectathons on the trunk version preparing for DSTU 2 (the first is planned for January)
If so, what migration strategy will used to address the many incompatibilities between the codebases?
There's going to be a lot of rewriting of code. We may release XML transforms for migrating resources from DSTU1 to DSTU2 when it is finalised, but that may not even be possible.
What is the deprecation policy for code in each codebase?
DSTU 1 is extremely conservative. Trunk will settle, though we will never guarantee stability. The beta releases will be point-in-time releases of it.
What level of backward compatibility can be expected from revision to revision in the code in trunk?
None, really, at the moment.
Is there a FHIR roadmap that system developers can use to plan their own development cycles?
Well, in addition to the version policy referenced above, there's this: http://www.healthintersections.com.au/?p=2234 (which was for you, no?)
As a supplement to Grahame's response: on the Documentation tab of the spec, there's only one bolded link - Read Prior to Use. That page tries to make clear that the DSTU release promises neither forward nor backward compatibility. It can't - the whole purpose of DSTU is to gather implementation feedback about what sort of substantive changes are needed to make the standard ready to be locked in stone when we go normative. If we promised forward and backward compatibility in DSTU, then we'd be stuck with whatever decisions we'd made during the initial draft, whether they turned out to be good ones or not.

Organizing Java projects [closed]

I'm a junior developer and recently started working for a very small office where they do a lot of in-house development. I have never worked on a project that involved more than one developer or was as big and complex as these.
The problem is that they don't use all the tools available (version control, automated building, continuous integration, etc.) to their full extent: mainly, a project is one big project in Eclipse/NetBeans using CVS for version control, with everything checked in (including library JARs). They started using branching for the first time when I began creating branches for small tasks and merging them back. As the projects get bigger and more complex, problems start to arise with dependencies, project structure tied to an IDE, builds that can be a PITA sometimes, etc. It's hectic at best.
What I want is to set up a development environment where most of these problems will go away and I will save time and effort. I would like to set up projects in a manner independent of IDE used using version control (I'm leaning towards SVN right now), avoid dependency messes and automate building as much as possible.
I know that there are multiple approaches and tools for this and do not want to see a holy war started, I would really appreciate practical recommendations based on experience and what you have found to be useful when facing similar problems. All projects are Java projects and range from web applications to "generic" ones, and I use Eclipse most of the time, but also use Netbeans if needed. Thanks in advance.
You seem to be almost exactly at the point where the place I work at was when I started there 1.5 years ago, the only difference being that you've started toying with branches, which is actually something we still don't do at my work - but more about that later in this answer.
Anyway, you're listing a very good set of tools that can help a small company, and they work really nicely as subtopics, so without further ado:
Version control systems
Small companies most commonly use CVS or SVN at the moment, and there's nothing bad in that; in fact I'd be really worried if no version control was used at all. However, you have to use version control right - just having it won't make your life easier. We currently use CVS and are looking into Mercurial, but we've found that the following works as a good set of conventions when working with CVS (and I'd suspect SVN too):
Have separate users for all committers. It's beyond valuable to know who committed what.
Don't allow empty commit messages. In fact, if possible, configure the repository to reject any commits without comments and/or with the default comment. "Initial commit for FooBarizer" is better than "Empty log message".
Use tags to mark milestones, prototypes, alphas, betas, release candidates and final versions. Don't use tags for experimental work or as footnotes/Post-It notes.
Don't use branches, since they really don't work if you're continuing to develop the application. This is mainly because in CVS and SVN branching just doesn't work as expected, and it becomes an exercise in futility to maintain more than two living branches (head and any secondary branch) over time.
Always remember that for a software company the source code is your source of income and contains all your business value, so treat it that way. Also, if you have a spare 70 minutes, I really recommend watching this talk Linus Torvalds gave at Google about git and (d)VCS in general; it's really insightful.
Automated builds and Continuous Integration environments
These are actually about the same thing. Daily builds are a PR joke and bear little to no resemblance to the state of the actual software beyond some very rudimentary "Does it compile?" issues. You can compile a lot of awful code noise that doesn't do anything; keeping software quality up has nothing to do with getting the code to compile.
On the other hand, unit tests are a great way to maintain software quality, and I can say with a bit of personal pride that rigorous unit testing helps even the worst of programmers to improve a lot and catch stupid errors. In fact, so far only a total of three bugs in code I have written have reached production environments, and I'd say that in 18 months that's a pretty damn good achievement. In our new production code we usually have an instruction code coverage of 80%+, mostly 90%+, and in one special case reaching all the way to 98%. This is a very lively field and you're better off Googling for the following: TDD, BDD, unit tests, integration tests, acceptance tests, xUnit, mock objects.
That's a bit of a lengthy preface, I know. The actual meat of all the above is this: if you want to have automated builds, have them occur every time someone commits, and make sure there's a constantly increasing and improving body of unit tests for production code. Have the continuous integration system of your choice (we use Hudson CI) run all the unit tests related to the project and only accept builds if all the tests pass. Do not make any compromises! If unit tests show that the software is broken, fix the software.
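To make "run all the unit tests on every commit" concrete, here is a minimal JUnit 4 sketch of the kind of test such a gate runs; the class and method names are hypothetical:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical example of the kind of small, fast, deterministic unit test
// the CI gate runs on every commit; a red build then always means broken software.
public class VersionParserTest {

    // Trivial stand-in for production code, kept here so the example is self-contained.
    static int majorOf(String version) {
        return Integer.parseInt(version.split("\\.")[0]);
    }

    @Test
    public void parsesMajorNumber() {
        assertEquals(1, majorOf("1.2.3"));
    }

    @Test(expected = NumberFormatException.class)
    public void rejectsGarbage() {
        majorOf("not-a-version");
    }
}
```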
Additionally, Continuous Integration systems aren't just for compiling code; they should also be used for tracking the state of the software project's metrics. For Hudson CI I can recommend all these plugins:
Checkstyle - Checks whether the actual source code is written the way you define. A big part of writing maintainable code is using common conventions.
Cobertura - Code coverage metrics, very useful to see how the coverage develops over time. Also, in keeping with the "source is God" mentality, it allows you to discard builds if coverage falls below a certain level.
Task Scanner - Simple but sweet: scans for specific tags such as BUG, TODO, NOTE etc. in your code and creates a list of them for everyone to read. A simple way to track short notes or known bugs which need fixing, or whatever else you can come up with.
Project structure and Dependency Management
This is a controversial one. Basically everyone agrees that having a unified structure is great, but since there are several camps with different requirements, habits and views on the issue, they tend to disagree. For example, Maven people really believe that there's only one way - the Maven way - to do things and that's it, while Ivy supporters believe that the project structure shouldn't be hammered down your throat by external parties; only the dependencies need to be managed properly and in a unified manner. Just so it's not left unclear: our company simply loves Ivy.
So, since we don't use a project structure imposed by external parties, I'm going to tell you a bit about how we arrived at our current project structure.
In the beginning we used individual projects for the actual software and the related tests (usually named Product and Product_TEST). This is very close to what you have: one huge directory for everything, with the dependencies as JARs directly included in the directory. What we did was check out both projects from CVS and then link the actual project to the test project in Eclipse as a runtime dependency. A bit clunky, but it worked.
We soon came to realize that these extra steps were completely useless, since by using Ant - by the way, you can invoke Ant tasks directly in Hudson - we could tell the JAR/WAR building step to ignore everything either by file name (say, everything that ends with Test or TestCase) or by source folder. Pretty soon we converted our software projects to use a simple structure with two root folders, src and test. We haven't looked back since. The only debate we currently have is whether we should allow a third folder called spikes to exist in our standard project structure, and that's not a very heated debate at all.
This has worked tremendously well and doesn't require any additional support or plugins from any of the IDEs out there, which is a great plus - the number two reason we didn't choose Maven was seeing how M2Eclipse basically took over Eclipse. And since you must be wondering: the number one reason for rejecting Maven was the clunkiness of Maven itself; the endless lengthy XML declarations for configuration and the related learning curve were considered too big a cost compared to what we would get from using it.
Rather interestingly, committing to Ivy instead of Maven later on allowed us a smooth shift into some Grails development, which uses folder and class names as conventions for just about everything when structuring the web application.
Also, a final note about Maven: while it claims to promote convention over configuration, if you don't want to do things exactly the way Maven's structure says you should, you're in a world of pain for the aforementioned reasons. Certainly that's an expected side effect of having conventions, but no convention should be final; there always has to be at least some room for changing things, bending the rules, or choosing the appropriate option from a certain set.
In short, my opinion is that Maven is a bazooka, you work in a house, and your ultimate goal is to have it bug-free. Each of these is good on its own, and they work even if you pick any two of them, but all three together just don't work.
Final words
As long as you have fewer than 10 code-centric people, you have all the flexibility needed to make the important decisions. When you go beyond that, you have to live with whatever choices you've made, no matter how good or bad they are. Don't just believe things you hear on the Internet; sit down and test everything rigorously - heck, our senior tech guy even wrote his bachelor's thesis about Java web frameworks just to figure out which one we should use - and really figure out what you actually need. Don't commit to anything just because you may need some of the functionality it provides in the distant future; pick the things that have the lowest possible negative impact on the whole company. Being the 10th person hired at the company I work for, I can sign everything in this paragraph with my own blood; we currently have 16+ people working here, and changing certain conventions would actually be a bit scary at this point.
Our development stack (team of 10+ developers)
Eclipse IDE with M2Eclipse and Subclipse/Subversive
Subversion for source control, some developers also use TortoiseSVN where Subclipse fails
Maven 2 for project configuration (dependencies, build plugins) and release mgmt (automatic tagging of releases)
Hudson for Continuous Integration (creates also snapshot releases with source attachments and reports)
Archiva for artifact repository (multiple repositories, e.g. releases and snapshots are separated)
Sonar for code quality tracking (e.g. hotspots, coverage, coding guidelines adherence)
JIRA for bug tracking
Confluence for developer wiki and communication of tech docs with other departments
Docbook for manuals (integrated into build)
JMeter for stress testing and long-term performance monitoring
Selenium/WebDriver for automated browser integration tests
Jetty, Tomcat, Weblogic and Websphere as test environments for web apps. Products are deployed every night and automated tests are run on distributed Hudsons.
Mailinglist with all developers for announcements, general info mails
Daily stand-up meetings where everybody tells what he's currently doing
This setup is considered standard for our company as many departments are using those tools and there is a lot of experience and community support for those.
You are absolutely right about trying to automate as much as possible. If your colleagues start to see the benefits when aspects of the development phases are automated, they will be encouraged to improve on their own. Of course, every new technology gimmick ("tool") is a new burden and has to be managed and maintained; this is where the effort moves. You save time, e.g., when Maven automatically performs your releases, but you will spend time managing Maven itself. My experience is that every time I introduced a new tool (one of the above), it took time to be adopted and cared about, but in the end it brought advantages to the whole team once real value was experienced - especially in times of stress, when the tools take over much of the work you would otherwise have to do manually.
A fine, admirable instinct. Kudos to you.
Part of your problem might not be solvable with tools. I'd say that source code management needs some work, because it doesn't sound like branching, tagging, and merging are done properly. You'll need some training and communication to solve that.
I haven't used CVS myself, so I can't say how well it supports those practices. I will point out that Subversion and Git would be better choices. At worst, you should be reading the Subversion "red bean" book to get some generic advice on how to manage source code.
Personally, I'm not a Maven fan. I believe it's too heavyweight, especially when compared to Ant and Ivy. I'd say that using those with Cruise Control could be the solution to a lot of your problems.
You didn't mention unit testing. Start building TestNG and Fit tests into your build cycle.
Look into IntelliJ - I think it's a better IDE than either Eclipse or NetBeans, but that's just me.
Best of luck.
Maven is great; however, it can have a fair bit of a learning curve, and it requires that the project fit a very specific file structure. If you have a big legacy project, it may be difficult to mavenize it. In that case, Ant+Ivy would do the same without the stringent requirements that Maven has.
For build automation, Hudson is beyond awesome. I've used a couple different systems, but that is unquestionably the easiest to get set up and administer.
I recommend using Maven for building your projects. Using Maven brings value to the project, because:
Maven promotes convention over configuration, which amounts to a good project structure
thanks to Maven plugins, it eases generating projects for IDEs (Eclipse, NetBeans, IDEA)
it handles all dependencies and the complete build lifecycle
it facilitates project modularization (via multi-module projects)
it helps with the release/version burden
it improves code quality - easy integration with continuous integration servers and lots of code quality plugins
Maven can be a bit daunting given its initial learning curve, but it would nicely address many of your concerns. I also recommend you take a look at Git for version control.
For project and repository management, I use trac with subversion.
Here's what I'm using right now, but I will probably switch a few parts (see the end of this post).
Eclipse as IDE with a few plugins: JADClipse (to decompile .class files on the fly, pretty useful), DBViewer for quick access to the database through Eclipse, WTP (Web Tools Platform) integrated into Eclipse for running Tomcat 6 as a development web server (pretty fast), and Mylyn (linked with the JIRA bug tracker).
I too am wondering about "IDE-independent projects"; right now we are all stuck on Eclipse - the Eclipse project files (.project, .classpath, .settings) are even committed to the CVS repository (in order to have a project fully ready once checked out) - but with NetBeans, supported by Sun and running faster and faster with each release (and each new JRE version), the question isn't closed.
CVS for storing projects, with nearly no branches (only for patches).
I work against an Oracle RDBMS in production, but I use HSQLDB on my development computer to make tests, builds and the development process way faster (with the help of the open-source DDLUtils tool to ease database creation and data injection). Otherwise I use SQLWorkbench for quick DB tasks (including schema comparison) or the free Oracle SQLDeveloper for some Oracle-specific tasks (like investigating session locks and so on).
Tests are only JUnit tests (either simple unit test cases or more complex, nearly "integration" test cases), almost always running on HSQLDB so they run faster.
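For reference, the in-memory pattern looks roughly like this - a minimal sketch using HSQLDB's standard jdbc:hsqldb:mem: URL; the table and data are hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch of a test running against an in-memory HSQLDB database.
// jdbc:hsqldb:mem: is the standard in-memory URL; SA with an empty password is the default account.
public class InMemoryDbExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.hsqldb.jdbcDriver"); // needed on older HSQLDB versions
        try (Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "");
             Statement st = con.createStatement()) {
            st.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
            st.execute("INSERT INTO customer VALUES (1, 'Acme')");
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM customer")) {
                rs.next();
                System.out.println("rows = " + rs.getInt(1)); // prints: rows = 1
            }
        }
    }
}
```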
My build system is Ant (launched from Eclipse) for various small tasks (uploading a WAR to a remote server, for example) and (mainly) Maven 2 for:
the build process
the publishing of the released artefacts
the publishing of the project's web site (including reports)
launching test campaigns (run every night)
The continuous integration front-end is Luntbuild, and the front-end for the Maven repository is Archiva.
All this works. But I'm pretty disappointed by a few elements of this ecosystem.
Mainly Maven: it's just too time-consuming, and I have a lot of grievances with this tool. Dependency conflict resolution is a joke. There are lots of XML lines in every POM.xml, redundant across projects (even with the help of a few root POMs). Plugins are way too inconsistent and buggy, and it's really difficult to find clear documentation explaining what has to be configured, and so on.
So I'm wondering about switching from Maven to Ant+Ivy. From what I've seen so far, it seems pretty cool (there are various conflict managers for dependency conflict resolution, and you can even write your own conflict manager), and there is no need to have an additional tool installed and configured (as Ant runs natively under Eclipse, whereas Maven needs a separate plugin - I've tried the 3 Maven plugins, by the way, and found all three of them buggy).
However, Maven 3 is on its way; I'll give it a try, but I don't expect it to be fundamentally different from Maven 2.
Hudson would seem a better choice than Luntbuild, too, but this part won't be changed for now.
And Subversion will probably replace CVS in the near future (even though I have hardly any trouble with CVS).
Lots of good advice here. I have just a few additions:
I think that, unlike the rest, an IDE is a personal tool, and each developer should have some freedom to select the one that works best for him. (For example, many love Eclipse, while I ditched it for NetBeans because Maven integration was, uh, problematic in Eclipse.)
I thought I was going to hate Maven, but now I get along with it fairly well. The main problem I have these days is finding out where the conventions are documented.
I would advise introducing tools one at a time. If you try to mechanize all aspects of software development at a by-hand shop in one stroke, there will likely be massive resistance. Make your business case and get agreement on one good common tool, or just get permission to set it up for your use but in a commonly-accessible way and let people see what it does for you. After a few of these, people will develop a habit of wondering how aspect X could be automated, so additional tools should be easier to introduce.
The single best thing you can do without disrupting other people and their way of working is setting up Hudson to watch the CVS repository for each of your projects. Just doing that will give you a central place to see CVS commit messages.
The next step is getting these projects to compile under Hudson. For Eclipse this typically means either switching to Ant or - as we did - using ant4eclipse to model the existing Eclipse build process. Not easy, but very worthwhile. Remember to send out mails when the build breaks - this is extremely important. Ant4eclipse requires team project sets - introducing them in your organization will make your colleagues happy the next time they need to set up a fresh workspace.
When you have a situation where your stuff builds properly whenever somebody commits changes then consider making that automatically built code the code to actually go to the customer. As it was built on the build server and not on a developers machine, you know that you can reproduce the build. That is invaluable in a "hey fix this ancient version" situation.

How do you decide when to upgrade a library in your project?

I work on a project that uses multiple open source Java libraries. When upgrades to those libraries come out, we tend to follow a conservative strategy:
if it ain't broke, don't fix it
if it doesn't have new features we want, ignore it
We follow this strategy because we usually don't have time to put in the new library and thoroughly test the overall application. (Like many software development teams we're always behind schedule on features we promised months ago.)
But, I sometimes wonder if this strategy is wise given that some performance improvements and a large number of bug fixes usually come with library upgrades. (i.e. "Who knows, maybe things will work better in a way we don't foresee...")
What criteria do you use when you make these types of decisions in your project?
Important: Avoid Technical Debt.
"If it ain't broke, don't upgrade" is a crazy policy that leads to software so broken that no one can fix it.
Rash, untested changes are a bad idea, but not as bad as accumulating technical debt because it appears cheaper in the short run.
Get a "nightly build" process going so you can continuously test all changes -- yours as well as the packages on which you depend.
Until you have a continuous integration process, you can do quarterly major releases that include infrastructure upgrades.
Avoid Technical Debt.
I've learned enough lessons to do the following:
Check the library's change list. What did they fix? Do I care? If there isn't a change list, then the library isn't used in my project.
What are people posting about on the library's forum? Is there a rash of posts starting shortly after release pointing out obvious problems?
Along the same vein as number 2, don't upgrade immediately. EVERYONE has a bad release. I don't intend to be the first to get bitten by that little bug (anymore, that is). This doesn't mean wait 6 months either; within the first month of release you should know the downsides.
When I decide to go ahead with an upgrade: test, test, test. Here automated testing is extremely important.
EDIT: I wanted to add one more item which is at least as important, and maybe more so than the others.
What breaking changes were introduced in this release? In other words, is the library going off in a different direction? If the library is deprecating or replacing functionality you will want to stay on top of that.
One approach is to bring the open source libraries that you use under your own source code control. Then periodically merge the upstream changes into your next release branch, or sooner if they are security fixes, and run your automated tests.
In other words, use the same criteria to decide whether to use upstream changes as you do for release cycles on code you write in house. Consider the open source developers to be part of your virtual development team. This is really the case anyway, it's just a matter of whether you choose to recognise it as part of your development practices.
While you don't want to upgrade just because there's a new version, there's another consideration, which is availability of the old version. I've run into that problem trying to build open source projects.
I usually assume that ignoring a new version of a library (because it doesn't have any interesting features or improvements) is a mistake, because one day you'll find out that this version is necessary for migrating to the next version you do want to upgrade to.
So my advice is to review carefully what has changed in the new version, and consider whether the changes require a lot of testing, or little.
If a lot of testing is required, it is best to upgrade to the newer library at the next release (major version) of your software (like when moving from v8.0 to v8.5). When this happens, I guess there are other major modifications as well, so a lot of testing is done anyway.
I prefer not to let the versions of the libraries we depend on lag too far behind.
Up to a year is OK for most libraries unless security or performance issues are known.
Libraries with known security issues are a must for refreshing.
I periodically download the latest version of each library and run my apps unit tests using them.
If they pass, I use them in our development and integration environments for a while and push to QA when I'm satisfied they don't suck.
The above procedure assumes the API hasn't changed significantly. All bets are off if I need to refactor existing code just to use a newer library version (e.g. Axis 1.x vs. 2.x). Then I would need to get management involved to make the decision to allocate resources. Such a change would typically be deferred until a major revision of the legacy code is planned.
Some important questions:
How widely used is the library? (If it's widely used, bugs will be found and eliminated more quickly)
How actively developed is it?
Is the documentation very clear?
Have there been major changes, minor ones, or just internal changes?
Does the upgrade break backwards compatibility? (Will you have to change any of your code?)
Unless the upgrade looks bad according to the above criteria, it's better to go with it, and if you have any problems, revert to the old version.

Is there a way to make a previous version latest without doing another commit in Teamprise?

The title more or less sums it up. We use MS TFS as our version control, which integrates with Eclipse via the Teamprise plug-in (corporate standard; we're primarily an MS shop. I wish we could just use SVN, because frankly the Teamprise plug-in is rather atrocious). Suppose that someone commits a file with changes that we want to keep, just not yet. If this commit is version 3, is there a way to tag version 2 as the latest version without checking version 2 back in as version 4, thus meaning I will later have to check version 3 in as version 5, rather than just re-tagging it as the latest at the point when I want to use it?
In TFS the "latest" for a particular path is determined as the last file that was checked in for that item (with any merge conflicts resolved). I'd suggest a few things that might give you the workflow that you would like:
Would working in a branch help you at all? Then you could decide when to merge changes in from the mainline of code into the area that you are working on.
Teamprise supports the Sync feature in Eclipse, so you can right click on a project and select "Synchronize". That will show you your changes compared to the latest version on the repository. From there you can compare those files with the latest version and see which ones you would like to update and which ones you would like to leave at your local (workspace) version.
If you do a "View History" on any file or folder you will get a history of changes that have occured with that file. You can then get any particular version you would like by selecting "Get this version" from the history view.
If branching doesn't work for you then you might want to try labels. In TFS labels are like "tags" and are editable. Using the mechanisms (such as sync and get from history) you can decide which versions of files that you would like to have and label these with your particular label. You can then do a "Get Specific" and add your label name if you so wish.
If what you are asking is "is there a way to easily rollback changes" so that you can say that version 4 or a file had a change in that you didn't want so you would like to rollback the code for everyone to version 3, then I'm afraid that the only way to do this now is to check-out the file, get the older version (from the history view) and then check that file in. When released, TFS 2010 is due to have native support for Rollback as a version control operation and Teamprise should be supporting that as soon as it is available in the server.
Additionally, I would add that I personally get a lot out of working with tools like continuous integration (and TFS 2008 has some excellent support for CI out of the box, but other open source CI servers such as CruiseControl and Hudson also support TFS). This, along with the fact that check-ins to TFS are atomic, means that developers can learn to trust that the "latest" version of code is always good. It encourages developers to do a get-latest frequently and to regularly check in changes.
It is perhaps because of these ways of working that we may have missed out on some functionality in Teamprise that could help you more; we could well be just assuming that people want to Get Latest, so that is what we are making easiest. If you do not feel that Teamprise is adequately supporting you in accessing the features of Team Foundation Server, then I would love to hear from you. My email address is martin@teamprise.com. Alternatively you can contact our support hotline at support@teamprise.com, visit the Teamprise forums at http://support.teamprise.com or give our support team a call on (217) 356-8515, ext. 2. We love to get feedback from our customers to make the product better, and it is customers who do not feel well served by the current tools that often give the best feedback.
If Rollback is a feature that is highly desired by you, then please let us know.
