Managing local files with Maven and SVN

I'm looking for a best practice for injecting local files into a project that are not being tracked with source control in such a way that the source-controlled version of the file is blind to the changes.
In particular, I have a context file with database credentials in it. I want to keep the raw "put your credentials here" file in source control, but I need that file filled out with the appropriate credentials for my development setup (or the production server, or what have you) without those credentials being pushed back into source control. Obviously, I can just edit the file locally and not check it back in. However, that becomes tedious over time: every time I need to check in a change, I have to be careful not to accidentally commit the file with the credentials to the central code repository. An alternative approach would be to check in a "-dist" type file that each user would have to rename and edit just to get the project to build at all.
I looked into Maven Overlays, but that looked like it would require me to build a whole separate project for just my local credentials, with its own pom.xml and war file. That seems like a lot of overhead for just a couple of files. What I'm really after is a way to tell Maven: "if file X (which isn't in source control at all) exists locally, use it; if not, use file Y (which does exist in source control)." It seems like there should be a fairly automatic way to handle this.

Simple
I have done this in the past, and it is very simple. Have a single file, for example default.config, that gets checked into version control, and another file called local.default.config that is covered by svn:ignore. Have Maven copy local.default.config over default.config if it exists, or copy both and have your application look for local.default.config first, falling back to default.config if it doesn't exist.
You can even keep the same default.config name and have the application look in multiple places, with your home directory as the highest priority, then somewhere else.
An ideal version of this reads all the files in priority order and uses the last-found value for each property; then default.config can hold all your properties, and local.default.config only the few that need to change for your local configuration.
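A minimal sketch of that layered loading in Java (the file names and the override order are just the ones suggested above, not a standard API):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class Config {
    /** Loads default.config, then overlays local.default.config if present. */
    public static Properties load() throws IOException {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream("default.config")) {
            props.load(in); // baseline, under version control
        }
        File local = new File("local.default.config"); // svn:ignore'd override
        if (local.exists()) {
            try (InputStream in = new FileInputStream(local)) {
                props.load(in); // values loaded later win, per property
            }
        }
        return props;
    }
}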
More Sophisticated Maven Oriented
Maven has multiple ways to get where you want to be:
Use Maven profiles to enable and disable a property that holds the name of the file you want to use, and use the maven-resources-plugin to copy the file you specify in the profile.
Use Maven's filtering feature with profile-driven properties.
Use the maven-replacer-plugin to manipulate the file directly, based on profile-driven properties.
Use the maven-dependency-plugin, store your files in your local Maven repository, and pull them down from there during the package phase.
Profiles are very powerful and a perfect fit for configuring Maven for different environments. I have a local, dev, qa, and release profile in every pom.xml. I set the local profile to be active by default, and pick the others as I need them with mvn [goal] -P dev, which will automatically disable local and use the properties specified in the dev profile.
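A sketch of that profile setup (the config.file property name and the profile ids are illustrative):

<profiles>
  <profile>
    <id>local</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <config.file>local.default.config</config.file>
    </properties>
  </profile>
  <profile>
    <id>dev</id>
    <properties>
      <config.file>dev.config</config.file>
    </properties>
  </profile>
</profiles>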
More Sophisticated SVN Oriented
You could work off a local development feature branch and keep your local configuration only on that branch; when you merge your code changes back to the trunk, exclude your changes to the configuration file from the merge. This is actually how I would do it, since we use Git, and branching isn't so painful in SVN that this isn't an option.
I am sure there are other Maven solutions as well. Either way you solve it, svn:ignore is your friend. And Maven profile usage can be very powerful.

Is the maven-replacer-plugin a solution for your needs?

We use jasypt to encrypt our passwords within properties files read by Spring. The tool can be used without Spring as well. This makes it very simple to keep your properties files in source control.
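As a minimal sketch of jasypt used standalone (the MASTER_KEY environment variable is just an assumption for where the master password might live):

import org.jasypt.util.text.BasicTextEncryptor;

public class EncryptDemo {
    public static void main(String[] args) {
        BasicTextEncryptor encryptor = new BasicTextEncryptor();
        encryptor.setPassword(System.getenv("MASTER_KEY")); // keep the master key out of source control
        String cipher = encryptor.encrypt("dbPassword123");  // this ciphertext is safe to commit
        System.out.println(cipher);
        System.out.println(encryptor.decrypt(cipher));       // what the app does at startup
    }
}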
If your issue is user credentials, then I would suggest that you use a test account for any automated tests that you run.

I think filtering may suit your needs. You can have a local.filter that is not checked in and prod.filter that is. You can use the prod.filter by default and substitute the local.filter based on a command-line flag or local profile that developers would need to use, but deployers would not.
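A sketch of that filtering setup (the filter.file property and the file locations are illustrative):

<properties>
  <!-- default; developers override with -Dfilter.file=local.filter or a local profile -->
  <filter.file>prod.filter</filter.file>
</properties>
<build>
  <filters>
    <filter>${filter.file}</filter>
  </filters>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
</build>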

Related

Distribution of settings.xml for Maven

Every developer in our company needs a settings.xml file to use Maven. This settings.xml file is identical for every developer (we do not have any passwords etc. in it). Occasionally, it is edited by the build manager.
What is an easy way to distribute new versions of settings.xml?
Theoretically, I can think of three possibilities:
Write a newsletter and tell everyone to copy the new settings.xml from a central source.
Put the settings.xml on a network drive and tell Maven to grab it from the globally valid path (how?)
Use the Nexus mechanism of settings.xml templates. I understood that the user can grab a new version by using a Maven goal (so again, writing a newsletter "Please update!") but I am not sure whether it can be run in batch mode inside (each) build process.
Which approach is the most practical one?
You could create a symlink to some network share containing your settings.xml.
For example, in Windows:
mklink C:\Users\username\.m2\settings.xml \\server\share\settings.xml
This needs to be done one time on all developer machines.
Then Maven and all IDEs should treat this file as local.
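On Linux or macOS the equivalent would be a plain symlink (the share path is illustrative):

ln -s /mnt/share/settings.xml ~/.m2/settings.xml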
P.S. A newsletter and wiki could be the better approach, though. Some devs may still need to modify their settings.xml occasionally for a special project or so. It is better to leave them in control of their dev environment.
Nexus has a feature for managing and distributing settings files.
https://books.sonatype.com/nexus-book/reference/maven-settings.html
I think this is the third option you've listed.
Looks like something that could be automated.
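If I recall correctly, the download is exposed as an ordinary Maven goal, something along these lines (the plugin coordinates and flag are from memory and should be checked against the book linked above for your Nexus version):

mvn org.sonatype.plugins:nexus-m2settings-maven-plugin:download -DnexusUrl=https://nexus.example.com

Since it is a plain Maven goal, it could be wired into a wrapper script that every build runs, which would cover the batch-mode requirement.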

hybris - copy data folder

Scenario: we have a hybris developer who has his local environment set up, and a new developer coming on board the team.
1. Place the projects under the custom folder.
2. Run ant all.
3. Can we copy the data folder from the machine of the developer who has the fully loaded database with all the necessary data? If yes, what needs to be done after copying?
Ensure your development environment can be built from script. You do not want to be copying things around randomly.
Check your workspace in to your repository, whatever you are using. Ignore most things, but obviously not your custom code.
Use a commonly available location for large resources such as the hybris suite zip.
Have your new developer checkout the workspace and run the script. This should pull in and extract resources, run ant build, build/deploy any other tools or applications in your project.
Do not check in data directories and do not copy these around. You can either:
use a shared MySQL development instance
init from ImpEx every time and ensure all data is in ImpEx.
I highly recommend that your development data can be built out entirely from ImpEx and that you can reinit often. This will be needed for development.
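For reference, a typical reinitialize-from-ImpEx sequence, assuming a standard hybris layout (note this wipes the local database):

cd hybris/bin/platform
. ./setantenv.sh
ant initialize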

Enforce different settings depending on build stage

I'm adding unit tests to an existing codebase, and the application itself retrieves data from a server through REST. The URL of the server is hard-coded in the application.
However, developers are obviously not testing new features, bugs, etc. on a live environment, but rather on a development server. To accomplish this, the development build has a different server-url string than the production build.
During development a non-production URL should be enforced, and when creating a production build, a production URL should be enforced instead.
I'm looking for advice on how to implement a neat solution for this, since forgetting to change the URL can currently have devastating outcomes.
A Maven build script only tests the production value, not both, and I haven't found any way to make build-specific unit tests. (Technologies used: Java, Git, Git-flow, Maven, JUnit)
Application configuration is an interesting topic. What you've pointed out here is definitely a very practical need, but even more so: if you have to rebuild and repackage for each environment, how do you truly know that what you deploy is the same thing that was actually tested and verified?
So load the configuration from a resource outside of the application package. A Java system property pointing to a file on the filesystem or a JNDI resource are both good options. You can also provide defaults for development by committing a config file and reading from it when the system property is not specified.
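A sketch of that pattern (the config.path property and the default resource name are illustrative):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AppConfig {
    /** Loads from -Dconfig.path=<file> if given, else falls back to the committed default. */
    static Properties load() throws IOException {
        Properties props = new Properties();
        String path = System.getProperty("config.path"); // e.g. -Dconfig.path=/etc/myapp/app.properties
        try (InputStream in = (path != null)
                ? new FileInputStream(path)
                // assumes the default file is packaged inside the application jar/war
                : AppConfig.class.getResourceAsStream("/app-defaults.properties")) {
            props.load(in);
        }
        return props;
    }
}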

looking for the proper handling of config and database schema files in a subversion project structure

In SVN we have a project that contains all the database logic, using Hibernate etc. However, that project depends on the database schema being in a certain state that matches the code.
As well, we have config scripts for the server it runs on, kept in a Config directory.
How does one properly set up the project structure in SVN to overcome this?
The structure could be like this:
--DBHibernateProject
------trunk
------branches
------tags
--DatabaseScriptsProject
------trunk
------branches
------tags
--ConfigProject
------trunk
------branches
------tags
But how do we tie the database scripts project to, say, Release-1.0 of DBHibernateProject? The hibernate project has a deployable asset (jar) in the Maven repo, but the db scripts one doesn't. I want to ensure the correct db scripts are tied to the correct release of the application.
If "project depends on ..." means, that "for each and every revision of DBHibernateProject we must to use predefined and fixed revisions of DatabaseScriptsProject and ConfigProject (they used/referenced inside DBHibernateProject tree)" you can always use pure Subversion-side solution: externals with PEG-revisions
Can't say more without knowledge about source-tree structure: "depends on" and "also have config scripts" aren't translatable (easy) into formal dependences, like (my poor reconstruction)
Each revision of DBHibernateProject must have related
DatabaseScriptsProject (for correct schema for this code) and
ConfigProject (for scripts, which produce database-schema, which is
used by DBHibernateProject)
If my reconstruction is correct, in Subversion-style (without Maven, it can be my mistake) I'll create in DBHibernateProject tree two directory-type externals, which referenced to "some tree in some state" in DatabaseScriptsProject and ConfigProject trees respectively
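A sketch of pinning such externals (the paths and the revision number are illustrative):

svn propset svn:externals "^/DatabaseScriptsProject/trunk@1234 db-scripts
^/ConfigProject/trunk@1234 config" .
svn commit -m "Pin db scripts and config to r1234"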
You either do it through process or by making one SVN project.
You could make a rule for the team that when you finish a set of database work, you tag it with the same tag as the code it works with. This can be tedious, but it is workable if the changes to the database and the code are typically in sync.
The other way is to make a single project in SVN with one trunk and one set of tags and branches. You then accomplish the same thing by having top-level folders in the repo that hold the code, the scripts, and the hibernate stuff. It is possible to manage permissions on your SVN repo so that different people have write permission on specific folders, but that creates a cost of modifying the permissions every time you branch (and maybe tag, if you foolishly allow modifying tags).
I would strongly recommend using Liquibase to manage your database migrations. The change files are read from the classpath, which means they can be deployed in the same jar alongside their matching Hibernate class files.
I've never used it, but Liquibase also has some support for Hibernate which might prove useful.
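For a flavor of what those change files look like, a minimal Liquibase changelog (table name, id, and author are illustrative):

<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
  <changeSet id="1" author="dev">
    <createTable tableName="customer">
      <column name="id" type="bigint"/>
      <column name="name" type="varchar(255)"/>
    </createTable>
  </changeSet>
</databaseChangeLog>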
For a Maven example see:
Lock oracle database before running the Delete/Load data scripts
For some more theoretical reading I recommend:
http://martinfowler.com/articles/evodb.html
http://www.codinghorror.com/blog/2008/02/get-your-database-under-version-control.html
In the interest of fairness there are a few other tools in the same functional space:
http://flywaydb.org/
http://code.google.com/p/dbmigrate/
Couldn't you have a single project layout for the Database project that looks something like this:
----DatabaseProject
-------trunk
--------DBHibernateProject
--------DatabaseScriptsProject
-------branches
-------tags
----OtherProject
-------trunk
-------branches
-------tags
----ConfigProject
-------trunk
-------branches
-------tags
If the config project is not tied to the other projects and just holds scripts for the server, then my guess is you could give it the same layout as OtherProject.
Keep all your SQL scripts in SVN.
Do not allow them to be modified (if you want to change something, create a new SQL file that contains the proper SQL statements).
Configure the maven dbpatch plugin: https://github.com/m-szalik/dbpatch-maven-plugin

How do you maintain java webapps in different staging environments?

You might have a set of properties that is used on the developer machine, which varies from developer to developer, another set for a staging environment, and yet another for the production environment.
In a Spring application you may also have beans that you want to load in a local environment but not in a production environment, and vice versa.
How do you handle this? Do you use separate files, ant/maven resource filtering or other approaches?
I just put the various properties in JNDI. This way each of the servers can be configured and I can have ONE war file.
If the list of properties is large, then I'll host the properties (or XML) files on another server. I'll use JNDI to specify the URL of the file to use.
If you are creating different app files (war/ear) for each environment, then you aren't deploying the same war/ear that you are testing.
In one of my apps, we use several REST services. I just put the root url in JNDI. Then in each environment, the server can be configured to communicate with the proper REST service for that environment.
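A sketch of the lookup side (the JNDI name is illustrative; the value itself is configured per server, e.g. in Tomcat's context.xml):

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ServiceLocator {
    /** Reads the environment-specific REST root URL configured on the server. */
    static String restRootUrl() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (String) ctx.lookup("java:comp/env/restRootUrl");
    }
}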
I just use different Spring XML configuration files for each machine, and make sure that all the bits of configuration data that vary between machines are referenced by beans that load from those Spring configuration files.
For example, I have a webapp that connects to a Java RMI interface of another app. My app gets the address of this other app's RMI interface via a bean that's configured in the Spring XML config file. Both my app and the other app have dev, test, and production instances, so I have three configuration files for my app -- one that corresponds to the configuration appropriate for the production instance, one for the test instance, and one for the dev instance.
Then, the only thing that I need to keep straight is which configuration file gets deployed to which machine. So far, I haven't had any problems with the strategy of creating Ant tasks that handle copying the correct configuration file into place before generating my WAR file; thus, in the above example, I have three Ant tasks, one that generates the production WAR, one that generates the dev WAR, and one that generates the test WAR. All three tasks handle copying the right config file into the right place, and then call the same next step, which is compiling the app and creating the WAR.
Hope this makes some sense...
We use properties files specific to the environments and have the ant build select the correct set when building the jars/wars.
Environment-specific things can also be handled through the directory service (JNDI), depending on your app server. We use Tomcat, and our DataSource is defined in Tomcat's read-only JNDI implementation. Spring makes the lookup very easy.
We also use the ant strategy for building different sites (differing content, security roles, etc.) from the same source project.
There is one thing that causes us a little trouble with this build strategy: often files and directories don't exist until the build is run, which can make it difficult to write true integration tests (using the same Spring setup as when deployed) that are runnable from within the IDE. You also miss out on some of the IDE's ability to check for the existence of files, etc.
I use Maven to filter out the resources under src/main/resources in my project. I use this in combination with property files to pull in customized attributes in my Spring-based projects.
For default builds, I have a properties file in my home directory that Maven then uses as overrides (so things like my local Tomcat install are found correctly). Test server and production server are my other profiles. A simple -Pproduction is all it then takes to build an application for my production server.
Use different properties files and Ant replace filters, which will do the replacement based on the environment the build is for.
See http://www.devrecipes.com/2009/08/14/environment-specific-configuration-for-java-applications/
Separate configuration files, stored in the source control repository and updated by hand. Typically configuration does not change radically between one version and the next so synchronization (even by hand) isn't really a major issue.
For highly scalable systems in production environments I would seriously recommend a scheme in which configuration files are kept in templates, and as part of the build script these templates are used to render "final" configuration files (all environments should use the same process).
I also recently used Maven to produce alternative configurations for live or staging environments, selecting the production configuration with Maven profiles. Hope it helps.
I use Ant's copy task with a filter file.
In the directory with the config file that contains the variables, I have a directory with a file for each environment. The build script knows the environment and uses the correct variable file.
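A sketch of that copy step (directory names and the env property are illustrative; the templates would contain @token@ placeholders):

<copy todir="build/conf" overwrite="true">
  <fileset dir="conf/templates"/>
  <filterset>
    <filtersfile file="conf/env/${env}.properties"/>
  </filterset>
</copy>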
I have different configuration folders holding the configurations for the target deployment, and I use ANT to select the one to use during the file copy stage.
We use different ant targets for different environments. The way we do it may be a bit inelegant but it works. We will just tell certain ant targets to filter out different resource files (which is how you could exclude certain beans from being loaded), load different database properties, and load different seed data into the database. We don't really have an ant 'expert' running around but we're able to run our builds with different configurations from a single command.
One solution I have seen used is to configure the staging environment so that it is identical to the production environment. This means each environment has a VLAN with the same IP range, and machine roles on the same IP addresses (e.g. the db cluster IP is always 192.168.1.101 in each environment). The firewalls mapped external facing addresses to the web servers, so by swapping host files on your PC the same URL could be used - http://www.myapp.com/webapp/file.jsp would go to either staging or production, depending on which hosts file you had swapped in.
I'm not sure this is an ideal solution, it's quite fiddly to maintain, but it's an interesting one to note.
Caleb P and JeeBee probably have your fastest solution. Plus, you don't have to set up different services or point to files on different machines. You can specify your environment either by using a ${user.name} variable or by specifying the profile in a -D argument for Ant or Maven.
Additionally in this setup, you can have a generic properties file, and overriding properties files for the specific environments. Both Ant and Maven support these capabilities.
Don't forget to investigate PropertyPlaceholderConfigurer - this is especially useful in environments where JNDI is not available.
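A sketch of the Spring wiring (the properties file name, placeholder key, and consuming bean are illustrative):

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="location" value="classpath:app.properties"/>
  <!-- lets -Drest.root.url=... override the file at launch time -->
  <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
</bean>

<bean id="restClient" class="com.example.RestClient">
  <property name="rootUrl" value="${rest.root.url}"/>
</bean>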
