As system implementors, we face a dilemma when we migrate from one version of FHIR to the next. We started off using FHIR 0.0.81 and then moved to SVN revision 2833 on Sept. 10, 2014, to incorporate a bug fix. As suggested, we downloaded the Java code from SVN trunk and followed the directions on the FHIR Build Process page.
FHIR 0.0.82 Incompatibilities
Now that FHIR 0.0.82 is available, we want to upgrade to a released version. After downloading 0.0.82, however, we noticed that several resources such as Appointment that were in trunk rev2833 are not in release 0.0.82. This leads to our first questions:
What does trunk contain if it does not contain the latest code destined for the next release?
Should anyone ever use what's in trunk?
Is there a release branch from which 0.0.82 was created?
Trunk Incompatibilities
Since our code has dependencies on resources introduced on trunk but not included in 0.0.82, we have to continue to check out FHIR directly from SVN. On Oct. 21, 2014, we downloaded SVN revision 3218 Java code. When we integrated that code into our system, we discovered numerous compatibility issues. Here are some of them:
Various Enum values changed from lowercase to uppercase, including Patient.AdministrativeGender and HumanName.NameUse. Though conforming to the Java naming convention is a good idea, changing fundamental data types breaks compilation.
Method names have changed, also resulting in compilation errors. We also discovered simultaneous name changes. For example, in the HumanName class the old setTextSimple(String) is now setText(String), and the old setText(StringType) is now setTextElement(StringType). Both the name and parameter type of setText() have changed, making migration error-prone because one has to decide at each use whether to change the method or its parameter (see the sketch below).
The ResourceReference resource type has changed its class name. In the FHIR model package alone, 859 occurrences of ResourceReference in 61 files were affected. This does not include changes that rippled through other FHIR packages, or changes that will ripple through our application code and our database schemas.
We notice several new resources in the rev3218 trunk code, including NewBundle. Previously, we had suggested that bundles should be resources, so it's great to see this change. Since trunk is not backward compatible with the 0.0.8x releases, however, I'm not sure whether we will have to support both the old and new ways of parsing and composing JSON and XML bundles.
To put a finer point on things, it's important to recognize that some of the above FHIR changes not only affect compilation, but could easily introduce subtle bugs at runtime. In addition, the FHIR changes could require database schema changes and data migration in some applications. For example, our application saves JSON resource streams in a database. Something as simple as changing an enum value from "male" to "MALE" requires migration utilities that update existing database content.
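To make the setText() rename hazard concrete, here is a rough before/after sketch. The imports are an assumption about the reference implementation's package layout at the time, and the surrounding class is purely illustrative:

```java
// Hypothetical call site, sketched to show why a blind rename is unsafe.
import org.hl7.fhir.instance.model.HumanName;    // package name is an assumption
import org.hl7.fhir.instance.model.StringType;

public class HumanNameMigrationExample {

    public static void populate(HumanName name) {
        // Old (rev2833-era) calls:
        //   name.setTextSimple("John Smith");           // took a raw String
        //   name.setText(new StringType("John Smith")); // took the wrapped StringType

        // New (rev3218-era) equivalents:
        name.setText("John Smith");                        // raw String now goes to setText(String)
        name.setTextElement(new StringType("John Smith")); // wrapped type now goes to setTextElement

        // A mechanical search-and-replace on setText() is therefore unsafe; each
        // call site has to be inspected to see which overload was intended.
    }
}
```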
Going Forward
We are investing heavily in FHIR; we want it to succeed and to be adopted widely as a standard. In order for that to occur, issues of backward compatibility and version migration need to be addressed. In that vein, any light that can be shed on the following questions will move us all forward:
What is the purpose of the 0.0.8x line of code? Who are its target users?
What is the purpose of the code in trunk? Who are its target users?
Will users of 0.0.8x ever be expected to migrate to the trunk codebase?
If so, what migration strategy will be used to address the many incompatibilities between the codebases?
What is the deprecation policy for code in each codebase?
What level of backward compatibility can be expected from revision to revision in the code in trunk?
Is there a FHIR roadmap that system developers can use to plan their own development cycles?
Thanks,
Rich C
My apologies for not documenting more fully how versioning affects the Java reference implementation. I'll do so. I will assume that you are familiar with the versioning policy here: http://hl7-fhir.github.io/history.html
There are two versions of FHIR extant at the moment. The first is DSTU 1. This is a fork in SVN ("dstu1"), and is only changed for significant bug reports. The reference implementation there is maintained and backwards compatible. The second version is the trunk version, in which we are preparing for the second DSTU release. The SVN trunk is highly unstable at the moment - changing constantly, and we are sometimes reversing changes several times as we consider various options in committee. Further, there are several large breaking changes between DSTU1 and trunk, and more are coming. So you should not expect that switching between DSTU1 and trunk will be painless. Nor should implementers be using trunk unless they're really bleeding edge (and tightly connected, e.g. via the implementers' Skype channel). When trunk is stable, and we think it's worth releasing an implementers beta, we update the versions and version history, and make a release here: http://hl7.org/implement/standards/FHIR-Develop/ and release a maven package for that version.
In the trunk, since there are many changes being made, we also changed the constants to uppercase, and flipped the way that get/set properties were generated. Agree that this has a price, but there was already a price to pay for switching from DSTU1 to trunk. And when I do a beta release (soon, actually), I'll update the Java reference implementation number and so forth. Note that the Java constants went to uppercase, but the wire format constants did not change, so stored JSON streams are fine (though they are broken by many other changes in the specification).
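To illustrate the distinction between the Java constant names and the wire-format codes, here is a minimal hand-written sketch of the pattern; it is not the actual generated code, and the helper method names are illustrative:

```java
// Sketch only: the Java constant is now uppercase, but the code written to
// (and parsed from) JSON/XML stays lowercase, so stored streams containing
// "male" are unaffected by the rename of the Java constant.
public enum AdministrativeGender {
    MALE, FEMALE, OTHER, UNKNOWN;   // were male, female, other, unknown in the earlier Java code

    public String toCode() {                 // value used on the wire
        return name().toLowerCase();
    }

    public static AdministrativeGender fromCode(String code) {
        return valueOf(code.toUpperCase());  // still accepts the stored lowercase value
    }
}
```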
Given the scope of the changes between DSTU 1 and trunk (there is no list of these yet, I will have to prepare that when I update the beta), you should expect extensive rework for the transition. Presently, I maintain a single source that implements a server for both (in Pascal, http://github.com/grahamegrieve/fhirserver) but I suspect that this approach is about to get broken by the change the NewBundle represents.
So, specific answers:
What is the purpose of the 0.0.8x line of code? Who are its target users?
Supporting users of the existing DSTU1 specification
What is the purpose of the code in trunk? Who are its target users?
Preparing to be DSTU 2. It should start to become more stable in a few weeks' time - now that we have started making backwards-incompatible changes, we are trying to get as many of them done as possible.
Will users of 0.0.8x ever be expected to migrate to the trunk codebase?
Yes, when DSTU 2 is released, or at least when we start having connectathons on the trunk version preparing for DSTU 2 (the first one is planned for January)
If so, what migration strategy will be used to address the many incompatibilities between the codebases?
There's going to be a lot of rewriting of code. We may release XML transforms for migrating resources from DSTU1 to DSTU2 when it is finalised, but that may not even be possible
What is the deprecation policy for code in each codebase?
DSTU 1 is extremely conservative. Trunk will settle, though we will never guarantee stability. The beta releases will be point-in-time releases of these.
What level of backward compatibility can be expected from revision to revision in the code in trunk?
None, really, at the moment.
Is there a FHIR roadmap that system developers can use to plan their own development cycles?
Well, in addition to the version policy referenced above, there's this: http://www.healthintersections.com.au/?p=2234 (which was for you, no?)
As a supplement to Grahame's response: On the Documentation tab of the spec, there's only one bolded link - Read Prior to Use. That page tries to make clear that the DSTU release promises neither forward nor backward compatibility. It can't - the whole purpose of DSTU is to gather implementation feedback about what sort of substantive changes are needed to make the standard ready to be locked in stone when we go normative. If we promised forward and backward compatibility in DSTU, then we'd be stuck with whatever decisions we'd made during the initial draft, whether they turned out to be good ones or not.
Related
I've added a legend to the picture to make it self-explanatory.
Initially, the code in trunk for my project is at version 1.0.
I would create 4 branches with this version of the code: Vendor-A, Vendor-B, 1.1 and 1.2. The red lines represent these parallel development branches. Vendor-specific development and releases are carried out on vendor branches, and code in vendor branches will never be merged with trunk. When releases are made to a vendor, those releases are tagged.
Now, my questions are these:
How sound is this methodology for product development?
Say the trunk is at 1.1 and the 1.1 branch ends (expires) after merging the 1.1 code into trunk, after which I find a bug in the 1.1 code. Now, I would immediately create a bugfix branch and commit the fix into trunk. So, should this bugfix be pushed into the 1.2 branch and the vendor branches? Or should it not be pushed, because these branches are based on a different version of the trunk (1.0)?
How do I tackle development under a vendor branch? Say I need to fix bugs in a vendor branch - should I just commit the changes directly into that vendor branch?
I would appreciate your suggestions on restructuring or redesigning the process as well.
Seems okay to me. I'd simplify it a bit however - if I am right in thinking that vendor branches get periodic refreshes from trunk, then you don't need to do an explicit merge from bugfix branches - just merge bugfixes (e.g. 1.1 bugfix) back to trunk, and then do a merge from trunk to all vendor branches.
The trick when merging from trunk to vendors is to keep accurate track of what has already been merged. Ideally you'll merge everything, and do it in blocks in chronological order. (I find marking commits with a ticket/feature number useful, so I can see from svn log what needs to be merged at a particular time. This ensures that I don't send half a feature to another branch.)
When I commit a merge, I'll add in the merge string (e.g. "(merge -r1234:2345 -r2667:3123 ../../trunk)") together with a description for the merge. This really helps when looking through logs (say on a vendor branch) to discover the earliest unmerged trunk revision.
I would, however, also be inclined to maintain 1.0 and 1.1 on different branches. So if the 1.0 trunk becomes 1.1 once the 1.1 branch is merged in, it may be appropriate to take a 1.0 branch copy from trunk just prior to this. Initially, bugfixes will be made to the trunk (1.1) and then merged directly to any vendors who are derived from the 1.1 branch. However, a fix may not apply cleanly (or may not be relevant) to vendors who are derived from 1.0. In that case, apply it to the 1.0 branch first, and then merge from there to all vendors on the earlier version.
Of course, you may find bugs that relate only to 1.0, and are not relevant or do not exist in 1.1 - so this separate branch will assist there too.
With this approach in mind, it is therefore a good idea to upgrade each vendor away from very old versions when you can, so that the number of concurrent versions you need to maintain is minimised. Whether you do that as a matter of course, or as part of a new license/contract, is a matter for your business.
I recently read the blog posts on Pushing Pixels that describe how to achieve native transparency and translucency with pure Java. The needed classes reside in com.sun.awt, namely com.sun.awt.AWTUtilities.
I was wondering how I could include the needed classes (not just this one) in my distribution, since the classes are available only when you have a JDK installed and you start the JVM from there. So the users of my program will not have the needed classes to run it.
Any ideas?
AWTUtilities, as the package name implies, is part of an implementation package and is subject to change. I don't think it is a good idea to distribute the class, from both technical and legal viewpoints.
Technically, com.sun.awt.AWTUtilities is bound to have possibly unknown dependencies on other classes and internal implementation details specific to Java 6u10 and above (the release in which it appeared). This unknown-dependency concern is likely to hold, since painting and graphics will require changes in some of the native implementations as well.
Secondly, this is bound to change in Java 7: the only reason Sun hasn't made a release of java.awt.AWTUtilities with support for transparency is that they do not make changes to public APIs except in major releases.
IANAL, but I do not think it is wise to engage in the act of redistributing software without having run it past a lawyer. Besides, customers do not like the idea of having an unsupported deployment of any software in their systems.
Update
All Sun JREs (not just JDKs) from 6u10 onwards come with com.sun.awt.AWTUtilities, so the simplest course of action would be to get your users to have Java 6u10 or later on their systems, and for your application to handle any resulting exception gracefully.
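For example, one defensive approach (a sketch only, assuming window opacity is the feature you need) is to look the class up reflectively and fall back to an opaque window when it is not available:

```java
import java.awt.Window;
import java.lang.reflect.Method;

public final class Translucency {

    /** Tries com.sun.awt.AWTUtilities.setWindowOpacity; returns false if it is unavailable. */
    public static boolean trySetOpacity(Window window, float opacity) {
        try {
            Class<?> utils = Class.forName("com.sun.awt.AWTUtilities");
            Method m = utils.getMethod("setWindowOpacity", Window.class, float.class);
            m.invoke(null, window, opacity);
            return true;
        } catch (Exception e) {
            // Pre-6u10 JRE, non-Sun JRE, or unsupported platform: keep the window opaque.
            return false;
        } catch (LinkageError e) {
            return false;
        }
    }

    private Translucency() {
    }
}
```

The reflective lookup keeps the class out of your compile-time dependencies, so the same jar still loads on JREs that do not ship com.sun.awt.AWTUtilities.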
I work on a project that uses multiple open source Java libraries. When upgrades to those libraries come out, we tend to follow a conservative strategy:
if it ain't broke, don't fix it
if it doesn't have new features we want, ignore it
We follow this strategy because we usually don't have time to put in the new library and thoroughly test the overall application. (Like many software development teams we're always behind schedule on features we promised months ago.)
But, I sometimes wonder if this strategy is wise given that some performance improvements and a large number of bug fixes usually come with library upgrades. (i.e. "Who knows, maybe things will work better in a way we don't foresee...")
What criteria do you use when you make these types of decisions in your project?
Important: Avoid Technical Debt.
"If it ain't broke, don't upgrade" is a crazy policy that leads to software so broken that no one can fix it.
Rash, untested changes are a bad idea, but not as bad as accumulating technical debt because it appears cheaper in the short run.
Get a "nightly build" process going so you can continuously test all changes -- yours as well as the packages on which you depend.
Until you have a continuous integration process, you can do quarterly major releases that include infrastructure upgrades.
Avoid Technical Debt.
I've learned enough lessons to do the following:
Check the library's change list. What did they fix? Do I care? If there isn't a change list, then the library isn't used in my project.
What are people posting about on the library's forum? Is there a rash of posts starting shortly after release pointing out obvious problems?
In the same vein as number 2, don't upgrade immediately. EVERYONE has a bad release. I don't intend to be the first to get bitten by that little bug (anymore, that is). This doesn't mean wait 6 months either. Within the first month of release you should know the downsides.
When I decide to go ahead with an upgrade: test, test, test. Here automated testing is extremely important.
EDIT: I wanted to add one more item which is at least as important as, and maybe more so than, the others.
What breaking changes were introduced in this release? In other words, is the library going off in a different direction? If the library is deprecating or replacing functionality you will want to stay on top of that.
One approach is to bring the open source libraries that you use under your own source code control. Then periodically merge the upstream changes into your next release branch, or sooner if they are security fixes, and run your automated tests.
In other words, use the same criteria to decide whether to use upstream changes as you do for release cycles on code you write in house. Consider the open source developers to be part of your virtual development team. This is really the case anyway, it's just a matter of whether you choose to recognise it as part of your development practices.
While you don't want to upgrade just because there's a new version, there's another consideration, which is availability of the old version. I've run into that problem trying to build open source projects.
I usually assume that ignoring a new version of a library (because it doesn't have any interesting features or improvements) is a mistake, because one day you'll find out that this version is necessary for migrating to the next version, which you might want to upgrade to.
So my advice is to review carefully what has changed in the new version, and consider whether the changes require a lot of testing or only a little.
If a lot of testing is required, it is best to upgrade to the newer library at the next release (major version) of your software (like when moving from v8.0 to v8.5). When this happens, I guess there are other major modifications as well, so a lot of testing is done anyway.
I prefer not to let the versions lag too far behind on dependent libraries.
Up to a year is ok for most libraries unless security or performance issues are known.
Libraries with known security issues must be refreshed.
I periodically download the latest version of each library and run my apps unit tests using them.
If they pass, I use them in our development and integration environments for a while and push to QA when I'm satisfied they don't suck.
The above procedure assumes the API hasn't changed significantly. All bets are off if I need to refactor existing code just to use a newer library version (e.g. Axis 1.x vs. 2.x). Then I would need to get management involved to make the decision to allocate resources. Such a change would typically be deferred until a major revision of the legacy code is planned.
Some important questions:
How widely used is the library? (If it's widely used, bugs will be found and eliminated more quickly)
How actively developed is it?
Is the documentation very clear?
Have there been major changes, minor ones, or just internal changes?
Does the upgrade break backwards compatibility? (Will you have to change any of your code?)
Unless the upgrade looks bad according to the above criteria, it's better to go with it, and if you have any problems, revert to the old version.
Our situation is as follows, but I'm curious about this problem in any situation.
We have a framework consisting of 4 projects:
beans
util
framework
web
We also have modules that need a version and depend on a version of beans and util.
Finally we have a customer project that consists of a specific version of the core projects and one or more modules.
Is there a standard way to version these projects?
What seems simple to me is becoming really complicated as we try to deliver releases to QA and then manage our ongoing development with the maintenance of the release (release = tag and possible branch).
I kind of prefer the following:
1.2.0 - major and minor versions + release.
1.2.1 - next release
1.2.0_01 - bug fix in 1.2.0 release (branch)
etc.
Any ideas?
We use major.minor.bugfix. A major release only happens for huge changes. A minor release is called for when there is an API change. All other releases are bugfix releases. There's definitely utility in having a build or revision number there too for troubleshooting, although if you've got really rigorous CM you might not need to include it.
Coordinating among the versions of all these projects can be done really well with help from tools like Apache Ivy or Maven. The build of one project, with its own version number, can involve the aggregation of specific versions of (the products of) other projects, and so your build files provide a strict mapping of versions from the bottom up. Save it all in [insert favorite version control tool here] and you have a nice history recorded.
I use {major}.{minor}.{buildday}.{sequential}. For Windows, we use the utilities stampver.exe and UpdateVersion.exe for .NET projects that handle that mostly automatically.
There are no standard version number systems. Common themes are to have a major, minor and build number, and occasionally a point number as well (1.2.2.1 for example, for version 1.2 point release 2 build 1). The meaning of the version numbers is highly flexible. A frequent choice is to have backwards compatibility between minor versions or point releases though.
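As a toy illustration of comparing such dotted version numbers (not tied to any particular product; the class is hypothetical and ignores suffixes like alpha/beta):

```java
import java.util.Comparator;

/** Illustrative comparator for dotted version strings such as "1.2.2.1". */
public class VersionComparator implements Comparator<String> {

    @Override
    public int compare(String a, String b) {
        String[] pa = a.split("\\.");
        String[] pb = b.split("\\.");
        int len = Math.max(pa.length, pb.length);
        for (int i = 0; i < len; i++) {
            int na = i < pa.length ? Integer.parseInt(pa[i]) : 0; // missing parts count as 0,
            int nb = i < pb.length ? Integer.parseInt(pb[i]) : 0; // so "1.2" equals "1.2.0"
            if (na != nb) {
                return Integer.compare(na, nb);
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        VersionComparator c = new VersionComparator();
        System.out.println(c.compare("1.2.2.1", "1.2.2") > 0); // true: the point release is newer
        System.out.println(c.compare("1.10", "1.9") > 0);      // true: numeric, not lexical, ordering
        System.out.println(c.compare("1.2", "1.2.0") == 0);    // true: trailing zeros are ignored
    }
}
```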
Releases are probably best done by labeling a set of source controlled files as long as your source control allows this. Recreating a release is then as simple as syncing to the label and building, which is very useful :)
In the automated build system I'm currently using, I version with Major.Minor.Build.X, where Build is incremented every time we hit system test, and X is the last Subversion revision number from the repo the code is being built from. Seems to work quite nicely for Subversion as we can easily get back to the codebase of a particular build if the need arises.
I use a variation on the linux kernel version numbering system:
major.minor.bugfix
where even minor numbers indicate a somewhat stable release that may be distributed at least for testing, and odd minor numbers indicate an unstable/untested release that shouldn't be distributed beyond developers.
Where possible, I prefer to have projects versioned with the same build numbering, unless they are shared. It allows for more consistency between moving parts and it's easier to identify which components constitute a product release.
As workmad3 has stated, there's really no common rule for build numbers. My advice is to use something that makes sense for your team/company.
Some places I've worked at have aligned build numbering with project milestones and iterations,
e.g. Major = Release or Milestone, Minor = Iteration, Build = Build number (from the project start or from the start of the iteration), Revision = If the build has to be rebuilt (or branched).
One of the most common conventions is major.minor.bugfix, with an additional suffix indicating a build number or pre-release designation (e.g. alpha, beta, etc.).
My team numbers builds according to project milestones - a build is handed over to our QA group at the end of a development iteration (every few weeks). Interim CI builds don't get their own numbers; because we use Maven, those builds are identified with a SNAPSHOT suffix.
Whatever you decide, be sure to document it and make sure that everyone understands it. I also suggest you document and consistently apply the release branching policy or it can quickly get confusing for everyone. Although with only 4 projects it should be pretty easy to keep track of what's going on.
You didn't mention if any of the projects access a database, but if any do, that might be another factor to consider. We use a major.minor.bugfix.buildnumber scheme similar to others described in answers to this question, with approximately the same logic, but with the added requirement that any database schema changes require at least a minor increment. This also provides a naming scheme for your database schemas. For example, versions 1.2.3 and 1.2.4 can both run against the "1.2" database schema, but version 1.3.0 requires the "1.3" database schema.
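A small hedged sketch of how that schema-naming rule could be expressed in code (the class and method names are hypothetical):

```java
/** Derives the "major.minor" schema name from a full application version string. */
public final class SchemaVersion {

    public static String schemaFor(String appVersion) {
        String[] parts = appVersion.split("\\.");
        if (parts.length < 2) {
            throw new IllegalArgumentException("expected at least major.minor: " + appVersion);
        }
        return parts[0] + "." + parts[1];   // e.g. "1.2.4" -> "1.2"
    }

    public static boolean isCompatible(String appVersion, String deployedSchema) {
        // 1.2.3 and 1.2.4 both run against the "1.2" schema; 1.3.0 requires "1.3".
        return schemaFor(appVersion).equals(deployedSchema);
    }

    private SchemaVersion() {
    }
}
```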
Currently we have no real versioning. We use the svn build number and the release date.
(a tag name looks like release_081010_microsoft, for example)
Older products use major.minor.sub version numbering:
Major never changes.
Minor changes with every release/feature release, every 6 months.
Sub is everything which doesn't affect the feature set - mostly bugfixes.
The title more or less sums it up. We use MS TFS as our version control, which integrates with Eclipse via the Teamprise plug-in (corp standard, primarily an MS shop; I wish we could just use SVN, because frankly the Teamprise plug-in is rather atrocious). Suppose that someone commits a file with changes that we want to keep, just not yet. If this commit is version 3, is there a way to tag version 2 as the latest version without checking version 2 back in as version 4, thus meaning I will later have to check version 3 in as version 5, rather than just re-tagging 4 as the latest at the point when I want to use it?
In TFS the "latest" for a particular path is determined as the last file that was checked in for that item (with any merge conflicts resolved). I'd suggest a few things that might give you the workflow that you would like:
Would working in a branch help you at all? Then you could decide when to merge changes in from the mainline of code into the area that you are working on.
Teamprise supports the Sync feature in Eclipse, so you can right click on a project and select "Synchronize". That will show you your changes compared to the latest version on the repository. From there you can compare those files with the latest version and see which ones you would like to update and which ones you would like to leave at your local (workspace) version.
If you do a "View History" on any file or folder you will get a history of changes that have occurred to that file. You can then get any particular version you would like by selecting "Get this version" from the history view.
If branching doesn't work for you then you might want to try labels. In TFS labels are like "tags" and are editable. Using these mechanisms (such as sync and get-from-history) you can decide which versions of files you would like to have and label those with your particular label. You can then do a "Get Specific" and add your label name if you so wish.
If what you are asking is "is there a way to easily roll back changes" - so that you can say that version 4 of a file had a change that you didn't want, and you would like to roll the code back for everyone to version 3 - then I'm afraid that the only way to do this now is to check out the file, get the older version (from the history view) and then check that file in. When released, TFS 2010 is due to have native support for Rollback as a version control operation, and Teamprise should be supporting that as soon as it is available in the server.
Additionally, I would add that I personally get a lot out of working with tools like continuous integration (and TFS 2008 has some excellent support for CI out of the box, but other open source CI servers such as CruiseControl and Hudson also support TFS). This, along with the fact that check-ins to TFS are atomic, means that developers can learn to trust that the "latest" version of code is always good. It encourages developers to do a get-latest frequently and regularly check in changes.
It is perhaps because of these ways of working that we may have missed out on some functionality in Teamprise that could help you more; we could well be just assuming that people want to Get Latest, so that is what we make easiest. If you do not feel that Teamprise is adequately supporting you in accessing the features of Team Foundation Server then I would love to hear from you. My email address is martin#teamprise.com. Alternatively you can contact our support hotline at support#teamprise.com, visit the Teamprise forums at http://support.teamprise.com or give our support team a call on (217) 356-8515, ext. 2. We love to get feedback from our customers to make the product better, and it is customers who do not feel well served by the current tools that often give the best feedback.
If Rollback is a feature that is highly desired by you, then please let us know.