I have a Java application with function tests that use a huge amount of data. The tests are run from TeamCity, with several agents running them. I'd like to separate the data into another project that basically only does an update from version control and stores the data on the local machine running the agent.
Then I need every agent to know where on its local machine the data is located and to pass that location as a parameter to the main build.
Is there a way to configure the builds this way?
The motivation is that cleaning the working directory removes this data when the two are not separated. Cleaning is sometimes necessary, but never because of the test data.
You could create a separate build that the other builds depend on, and probably use artifact dependencies to get the files you need. Alternatively, in the data build you could use a custom checkout location so that you know where the agent puts the repository, and pass that path as a parameter to your other builds.
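For example (the property name and path below are just placeholders), each agent could declare the data location in its buildAgent.properties, and both builds could reference it:

    # buildAgent.properties on every agent
    system.testdata.dir=/opt/teamcity/test-data

    # data build: check the repository out into %system.testdata.dir%
    # main build: hand the same value to the tests, e.g. as a JVM argument
    -Dtestdata.dir=%system.testdata.dir%

Because the property is defined per agent, each machine can point it at its own disk, and the main build never has to hard-code the path.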
Scenario: We have a hybris developer who has their local environment set up. A new developer joins the team.
1. Place the projects under the custom folder
2. Run ant all
3. Can we copy the data folder from the developer machine that has the fully loaded database with all the necessary data? If yes, what needs to be done after copying?
Ensure your development environment can be built from a script. You do not want to be copying things around randomly.
Check your workspace into your repository, whatever you are using. Ignore most things, but obviously not your custom code.
Use a commonly available location for large resources such as the hybris suite zip.
Have your new developer check out the workspace and run the script. This should pull in and extract resources, run the ant build, and build/deploy any other tools or applications in your project.
Do not check in data directories and do not copy these around. You can either:
use a shared MySQL development instance, or
initialize from ImpEx every time and ensure all data is in ImpEx.
I highly recommend that your development data can be built entirely from ImpEx so that you can reinitialize often; you will need this during development.
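As a rough illustration (the file name and items below are made up; use whatever your data model actually needs), essential data kept in ImpEx can simply be re-imported on every initialization:

    # essentialdata-myproject.impex (hypothetical file name)
    INSERT_UPDATE Title; code[unique = true]
    ; dr
    ; prof

    INSERT_UPDATE Language; isocode[unique = true]; active
    ; de ; true

With all required data in files like this, any developer can rebuild a working database from scratch instead of copying the data folder around.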
I'm coming from a .Net background cutting my teeth on a Java project that is using Maven, Spring and Liquibase. Needless to say, this is a new bag of concepts and frameworks to deal with.
Tests won't complete:
My tests won't complete successfully because they fail when attempting to access a table within my database. They fail because that table doesn't exist. I see that I have many migration files in Liquibase XML format within my project, but I am looking at how to run them.
liquibase-maven-plugin not an option:
I see that others might use the liquibase-maven-plugin, but in my case the project does not have that plugin referenced in any of the pom.xml files, only liquibase-core. A handful of other developers who knew what they were doing worked on this project in the past; given that they never referenced this plugin in the pom.xml file, I assume it was for good reason, and I won't be stirring that pot.
SpringLiquibase?
They have a reference to a bean that looks like this: <bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">, which after further research appears to do automatic data migrations,
GREAT!
....but how do I go about invoking it? Must my project already pass my tests and actually be "run" before this logic gets hit? If that is the case and my project must successfully build / test, then I apparently must run my migrations outside of this SpringLiquibase bean.
Should I be using the liquibase command line and if so, can I safely assume this is what the previous developers were doing to initially establish their database?
You are right that the SpringLiquibase setup should do the database update automatically, but it will only do it when the Spring framework is started.
My guess is that your normal application startup fires Liquibase through Spring but the test framework does not. Perhaps they had not noticed it because they would tend to make the database change in the Liquibase changelog files, then start the normal application for initial testing (which updated the database), and then build and run the tests. Now that you are running the tests first, the database is not yet there.
Are you able to tell if your tests are trying to start Spring?
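For reference, a typical SpringLiquibase definition looks roughly like this (the dataSource ref and changelog path below are assumptions, not your actual values):

    <bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
        <property name="dataSource" ref="dataSource"/>
        <property name="changeLog" value="classpath:db/changelog-master.xml"/>
    </bean>

If the Spring context your tests load does not include the file containing this bean, Liquibase never runs for them and the tables never get created.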
Even in cases where an application is using SpringLiquibase, I usually recommend configuring your project to allow manual updates using the liquibase-maven-plugin, the Ant plugin, or the command line, because it tends to make for a more efficient process. With that setup, you can add changesets and then run liquibase update without going through an entire application startup or even running your tests. You could set it to run automatically on test execution, but updates are usually infrequent enough that it is better to avoid the liquibase update overhead on every test run. It is still very helpful to include it in your application's Spring setup so that in QA and production you don't have to remember to manually update the database; it is just automatically kept up to date.
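A minimal sketch of that plugin setup (the version, changelog path, and JDBC settings are placeholders for your own):

    <plugin>
        <groupId>org.liquibase</groupId>
        <artifactId>liquibase-maven-plugin</artifactId>
        <version>3.4.2</version>
        <configuration>
            <changeLogFile>src/main/resources/db/changelog-master.xml</changeLogFile>
            <driver>org.postgresql.Driver</driver>
            <url>jdbc:postgresql://localhost:5432/mydb</url>
            <username>dev</username>
            <password>dev</password>
        </configuration>
    </plugin>

With that in place, mvn liquibase:update applies any pending changesets without starting the application or running the tests.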
I have been working on three Java Spring and Hibernate projects at a time. Every time I make a change to any of these three projects, I have to create the build again and manually upload the three builds to three different servers.
Is there any mechanism to avoid this manual process? Is there a tool or script that can detect when I change and save files in Eclipse, then commit the code and automatically build and upload the WAR file to the appropriate server?
This would save me a lot of time.
You need continuous integration. Maven is a build tool and won't deploy on change.
A CI tool such as Jenkins, on the other hand, will listen to your code repository, and every time a file is committed it will run whatever Maven command you wish.
However, re-reading your question, it looks like all you really need is a hot-deploy development environment, which is quite easy to set up.
If the projects are on Maven, there are plugins for all the major application servers.
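For instance, a quick hot-deploy loop can be had by running the webapp under the jetty-maven-plugin (a sketch; the version and scan interval are just examples):

    <plugin>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-maven-plugin</artifactId>
        <version>9.2.10.v20150310</version>
        <configuration>
            <!-- rescan compiled classes every few seconds and redeploy on change -->
            <scanIntervalSeconds>5</scanIntervalSeconds>
        </configuration>
    </plugin>

mvn jetty:run then picks up recompiled classes without a manual build-and-upload cycle; for the remote servers, a CI job doing the deploy is still the better fit.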
I'm looking for a best practice for injecting local files into a project that are not being tracked with source control in such a way that the source-controlled version of the file is blind to the changes.
In particular, I have a context file with database credentials in it. I want to keep the raw "put your credentials here" file in source control, but I need to have that file filled out with the appropriate credentials for my development setup (or the production server, or what have you) without those credentials being pushed back into source control. Obviously, I can just edit the file locally and not check it back in. However, that becomes tedious over time, being careful that I don't accidentally check in the file with the credentials to the central code repository every time that I need to check a change in. An alternative approach would be to check in a "-dist" type file that each user would have to rename and edit to get the project to build at all.
I tried looking into Maven Overlays, but it looks like that would require me to build a whole separate project just for my local credentials, with a pom.xml and a war file. That seems like a lot of overhead for just a couple of files. What I'm really after is a way to tell Maven "if file X (which isn't in source control at all) exists locally, use it; if not, use file Y (which does exist in source control)." It seems like there should be a fairly automatic way to handle it.
Simple
I have done this in the past and it is very simple: have a single file, for example default.config, that gets checked into version control, and have another file called local.default.config that is covered by svn:ignore. Have Maven copy local.default.config over default.config if it exists, or have it copy both and make your application look for local.default.config first, falling back to default.config if the first doesn't exist.
You can even use the same default.config name and have the application look in multiple places, with your home dir as the highest priority, then somewhere else.
An ideal version of this reads all the files in some priority order and uses the last-found value for each property; then you could have default.config with all your properties and local.default.config with only the few that need to change for your local configuration.
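A minimal sketch of that application-side lookup (the file names follow the example above; adjust them to your project):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class Config {
        /** Loads default.config first, then overlays local.default.config if it exists. */
        public static Properties load() throws IOException {
            Properties props = new Properties();
            loadIfPresent(props, new File("default.config"));        // checked-in defaults
            loadIfPresent(props, new File("local.default.config"));  // local overrides, svn:ignore'd
            return props;
        }

        private static void loadIfPresent(Properties props, File file) throws IOException {
            if (!file.exists()) {
                return;
            }
            try (InputStream in = new FileInputStream(file)) {
                props.load(in); // later files win: load() overwrites keys that are already present
            }
        }
    }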
More sophisticated, Maven-oriented
Maven has multiple ways to get where you want to be:
Use Maven profiles to enable and disable a property that holds the file name you want to use, and use the maven-resources-plugin to copy the file you specify in the profile.
Use the filter feature in Maven with profile-driven properties.
Use the maven-replacer-plugin to manipulate the file directly based on profile-driven properties.
Use the maven-dependency-plugin, store your files in your local Maven repository, and pull them down from there during the package phase.
Profiles are very powerful and a perfect fit for configuring Maven for different environments. I have a local, dev, qa, and release profile in every pom.xml. I set the local profile to active by default, and pick the others as I need them with mvn [goal] -P dev, which will automatically disable local and use the properties specified in the dev profile.
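A sketch of that profile block (the property name config.file is an assumption; point it at whatever your resources or filter configuration consumes):

    <profiles>
        <profile>
            <id>local</id>
            <activation>
                <activeByDefault>true</activeByDefault>
            </activation>
            <properties>
                <config.file>local.default.config</config.file>
            </properties>
        </profile>
        <profile>
            <id>dev</id>
            <properties>
                <config.file>default.config</config.file>
            </properties>
        </profile>
    </profiles>

Running mvn package -P dev then switches the value of ${config.file} without touching your working copy.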
More sophisticated, SVN-oriented
You could work off a local development feature branch and only have your local configuration on that branch, and when you merge your code changes back to the trunk, exclude your changes to the configuration file from the merge. This is actually how I would do it, since we use Git. Branching isn't so painful in SVN that this isn't an option.
I am sure there are other Maven solutions as well. Either way you solve it, svn:ignore is your friend. And Maven profile usage can be very powerful.
Is the Maven replacer plugin a solution for your need?
We use jasypt to encrypt our passwords within properties files read by Spring. The tool can be used without Spring as well. This makes it very simple to keep your properties files in source control.
If your issue is user credentials, then I would suggest that you use a test account for any automated tests that you run.
I think filtering may suit your needs. You can have a local.filter that is not checked in and a prod.filter that is. You can use prod.filter by default and substitute local.filter based on a command-line flag or a local profile that developers would need to use, but deployers would not.
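Roughly, assuming a profile-driven property named filter.file that defaults to prod.filter and is overridden to local.filter in a developer profile:

    <build>
        <filters>
            <filter>${filter.file}</filter>
        </filters>
        <resources>
            <resource>
                <directory>src/main/resources</directory>
                <!-- replace ${...} placeholders in the context file with values from the filter -->
                <filtering>true</filtering>
            </resource>
        </resources>
    </build>

The default build keeps using prod.filter; a developer just activates their profile to swap the property.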
I work at a software company where our primary development language is Java. Naturally, we use Hudson for continuous builds, which it works brilliantly for. However, Hudson is not so good at some of the other things we ask it to do. We also use Hudson jobs to deploy binaries, refresh databases, run load testing, run regressions, etc. We really run into trouble when there are build dependencies (i.e. load testings requires DB refresh).
Here's the one thing that Hudson doesn't do that we really need:
Build dependency: It supports build dependencies for Ant builds, but not for Hudson jobs. We're using the URL invocation feature to cause a Hudson job to invoke another Hudson job. The problem is that Hudson always returns a 200 and does not block until the job is done. This means that the calling job doesn't know a) if the build failed and b) if it didn't fail, how long it took.
It would be nice to not have to use shell scripting to specify the behavior of a build, but that's not totally necessary.
Any direction would be nice. Perhaps we're not using Hudson the right way (i.e. should all builds be Ant builds?) or perhaps we need another product for our one-click deployment, load testing, migration, DB refresh, etc.
Edit:
To clarify, we have parameters in our builds that can cause different dependencies depending on the parameters. I.e. sometimes we want load testing with a DB refresh, sometimes without a DB refresh. Unfortunately, creating a Hudson job for each combination of parameters (as the Join plugin requires) won't work because sometimes the different combinations could lead to dozens of jobs.
I don't think I understand your "build dependency" requirements. Any Hudson job can be configured to trigger another (downstream) job, or be triggered by another (upstream) job.
The Downstream-Ext plugin and Join plugin allow for more complex definition of build dependencies.
There is a CLI for Hudson which allows you to issue commands to a Hudson instance. Use "help" to get precise details. I believe there is a command which allows you to invoke a build and await its finish.
http://wiki.hudson-ci.org/display/HUDSON/Hudson+CLI
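If I remember correctly, the build command takes a -s option that blocks until the build finishes and reflects the result in the exit code, roughly (the server URL and job name are placeholders; check "help build" on your instance for the exact options):

    java -jar hudson-cli.jar -s http://hudson.example.com/ build db-refresh -s

A calling job can then check the exit code to know whether the dependency succeeded, and, since the call blocks, how long it took.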
Do you need an extra job for your 'dependencies'?
Your dependencies sound to me like an extra build step. The script that refreshes the DB can be stored in your SCM, and every build that needs this step will check it out. You can invoke that script if your "db refresh" parameter is true. This can be done with more than just one of your modules.
What is the advantage? Your script logic is in your SCM (it's always good to have a history of the changes). You can still update the script once for all your test jobs (since they all check out the same script). In addition, you don't need to look at several jobs to find out whether your test ran successfully or not. Especially if you have one job that is part of several execution lines, it becomes difficult to find out which job triggered which run. Another advantage is that you have fewer jobs on your Hudson, and therefore it is easier to maintain.
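As a sketch, in an Ant-based job that step could be a target that only runs when the parameter is passed (the target, property, and script names here are made up):

    <!-- runs only when the job passes -Ddb.refresh=true; Ant's "if" checks that the property is set -->
    <target name="db-refresh" if="db.refresh">
        <exec executable="bash" failonerror="true">
            <arg value="scripts/refresh_db.sh"/>
        </exec>
    </target>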
I think what you are looking for is http://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin. This plugin lets you execute other jobs based on the status of previous jobs. You can even call a shell script from the downstream project to determine any additional conditions, which can in turn call the API for more info.
For example, we have a post-build step to notify us; it calls back to the JSON API to build a nice topic in our IRC channel that says "All builds ok" or "X, Y failed", etc.
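The call itself is just an HTTP GET against the job's JSON endpoint, something like (the host and job name are examples):

    curl http://hudson.example.com/job/my-build/lastBuild/api/json

The returned document includes a result field (SUCCESS, FAILURE, ...) that the notification script inspects.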