Scenario: we have a hybris developer whose local environment is already set up, and a new developer is onboarding to the team.
1. Place the projects under the custom folder.
2. Run ant all.
3. Can we copy the data folder from the machine of the developer who has the fully loaded database with all the necessary data? If yes, what needs to be done after copying?
Ensure your development environment can be built from a script. You do not want to be copying things around randomly.
Check your workspace into your repository, whatever you are using. Ignore most things, but obviously not your custom code.
Use a commonly available location for large resources such as the hybris suite zip.
Have your new developer check out the workspace and run the script. This should pull in and extract resources, run the ant build, and build/deploy any other tools or applications in your project.
Do not check in data directories and do not copy them around. You can either:
use a shared MySQL development instance, or
init from ImpEx every time and ensure all data is in ImpEx.
I highly recommend that your development data can be built out entirely from ImpEx so that you can reinit often. You will need this during development.
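To make the "run the script" step concrete, here is a minimal sketch of such a bootstrap as an Ant target; the download URL, directory layout and property names are assumptions for illustration, not part of the answer above:

<!-- Sketch of a workspace bootstrap; URL and paths are hypothetical. -->
<project name="workspace-bootstrap" default="bootstrap">
    <property name="hybris.zip.url" value="http://fileserver.example.com/hybris/hybris-suite.zip"/>
    <property name="hybris.home" location="hybris"/>

    <target name="bootstrap">
        <!-- Pull the large suite zip from the commonly available location. -->
        <get src="${hybris.zip.url}" dest="hybris-suite.zip" usetimestamp="true"/>
        <!-- Extract it next to the checked-out custom code. -->
        <unzip src="hybris-suite.zip" dest="${hybris.home}"/>
        <!-- Run the platform build ('ant all' as in step 2 above). -->
        <exec executable="ant" dir="${hybris.home}/bin/platform" failonerror="true">
            <arg value="all"/>
        </exec>
    </target>
</project>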
Related
I have been working on three Java Spring and Hibernate projects at a time. Every time I make a change to any of these three projects, I have to create a build again and manually upload the three builds to three different servers.
Is there any mechanism to avoid this manual process? Is there a tool or script that can detect when I change my code in Eclipse and save the files, and will then commit the code and automatically build and upload the war file to the appropriate server?
This would save me a lot of time.
You need continuous integration. Maven is a build tool and won't deploy on change.
A CI tool such as Jenkins, on the other hand, will listen to your code repository, and every time a file is committed it will run whatever Maven command you wish.
However, re-reading your question, it looks like all you really need is a hot-deploy development environment, which is quite easy.
If the projects are on Maven, there are plugins for all major application servers.
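For Tomcat, for example, the tomcat7-maven-plugin can redeploy a war to a running server with a single command; here is a sketch, where the manager URL, server id and context path are assumptions for your setup:

<!-- Sketch: remote (re)deploy to Tomcat; URL, server id and path are hypothetical. -->
<plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <version>2.2</version>
    <configuration>
        <url>http://dev-server:8080/manager/text</url>
        <server>tomcat-dev</server> <!-- credentials live in ~/.m2/settings.xml -->
        <path>/myapp</path>
    </configuration>
</plugin>

With that in place, mvn tomcat7:redeploy after a change replaces the manual upload.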
I have a typical web application deployed in Tomcat. The requirement is to provide an incremental update mechanism instead of full-package delivery (a war file) when updating the application.
For example, suppose a bug fix changes a jar file, an XML file and a jpg file. I call these 3 files a patch, and I am supposed to deliver that patch file. And when customers want to roll back to the original version, I have to provide a way to roll back the patch.
The whole process is supposed to be automatic.
From my perspective, the requirement doesn't make sense. Full-package delivery is an easy and reliable way to update a web application; I don't want to introduce a complex and error-prone alternative.
Do you have any ideas on how to implement the incremental update requirement? Thanks!
When you deploy the .war or .ear, the application server usually unpacks it into an internal directory. You can change files in this directory directly, at a finer granularity. However, for the changes to take effect consistently, you will need to restart the server.
Your perspective is indeed fully correct. Nowadays file sizes don't play a significant role, so I don't see the problem with whole updates. Why isn't the customer happy with them?
Note: if what the customer wants is dynamic updates, i.e. without restarting the server, then that is a completely different problem, and mostly impossible for production systems in Java (though doable during development, with solutions like JRebel).
You can create a Java program that uses a delta-sync protocol, i.e. only the files that were updated need to be uploaded. If you have used Dropbox you will understand this pretty well: Dropbox uses delta sync to update files and sync data.
Alternatively, for the time being you can use Dropbox itself by installing it on the client's machine (mapped to the server's WAR folder) and on your local machine and sharing that folder. Then whenever you change files on your local machine, it will automatically upload and sync those CHANGED (patch) files to the client's machine.
I'm looking for a best practice for injecting local files into a project that are not being tracked with source control in such a way that the source-controlled version of the file is blind to the changes.
In particular, I have a context file with database credentials in it. I want to keep the raw "put your credentials here" file in source control, but I need to have that file filled out with the appropriate credentials for my development setup (or the production server, or what have you) without those credentials being pushed back into source control. Obviously, I can just edit the file locally and not check it back in. However, that becomes tedious over time, being careful that I don't accidentally check in the file with the credentials to the central code repository every time that I need to check a change in. An alternative approach would be to check in a "-dist" type file that each user would have to rename and edit to get the project to build at all.
I tried looking into Maven overlays, but that looked like it would require me to build a whole separate project just for my local credentials, with its own pom.xml and war file. That seems like a lot of overhead for just a couple of files. What I'm really after is a way to tell Maven: "if file X (which isn't in source control at all) exists locally, use it; if not, use file Y (which is in source control)." It seems like there should be a fairly automatic way to handle this.
Simple
I have done this in the past, and it is very simple. Have a single file, for example default.config, that gets checked into version control, and another file called local.default.config that is in your svn.ignore file. Have Maven copy local.default.config over default.config if it exists (a sketch follows below), or have it copy both and make your application look for local.default.config first, falling back to default.config if the first doesn't exist.
You can even use the same default.config name and have the application look in multiple places, with your home dir as the highest priority, then somewhere else.
An ideal version of this reads all the files in priority order and uses the last-found value of each property; then you could have default.config with all your properties, and local.default.config with only the few that need to change for your local configuration.
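Here is a sketch of the copy step using the maven-antrun-plugin; the file locations are assumptions, and failonerror="false" makes the copy a no-op when the local file does not exist:

<!-- Sketch: overwrite default.config with local.default.config when present. -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.8</version>
    <executions>
        <execution>
            <phase>process-resources</phase>
            <goals><goal>run</goal></goals>
            <configuration>
                <target>
                    <copy file="${basedir}/local.default.config"
                          tofile="${project.build.outputDirectory}/default.config"
                          overwrite="true" failonerror="false"/>
                </target>
            </configuration>
        </execution>
    </executions>
</plugin>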
More sophisticated, Maven-oriented
Maven has multiple ways to get where you want to be:
Use Maven profiles to enable and disable a property that holds the name of the file you want to use, and use the maven-resources-plugin to copy the file you specify in the profile.
Use the filter feature in Maven with profile driven properties.
Use the maven-replacer-plugin to manipulate the file directly based on profile-driven properties.
Use the maven-dependency-plugin, store your files in your local Maven repository, and pull them down from there during the package phase.
Profiles are very powerful and a perfect fit for configuring Maven for different environments. I have a local, dev, qa and release profile in every pom.xml. I set the local profile to be active by default, and pick the others as I need them with mvn [goal] -P dev, which will automatically disable local and use the properties specified in the dev profile.
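As a sketch of that setup (the property name config.file and the file names are illustrative):

<!-- 'local' is active unless another profile is selected with -P. -->
<profiles>
    <profile>
        <id>local</id>
        <activation><activeByDefault>true</activeByDefault></activation>
        <properties><config.file>local.default.config</config.file></properties>
    </profile>
    <profile>
        <id>dev</id>
        <properties><config.file>dev.config</config.file></properties>
    </profile>
</profiles>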
More sophisticated, SVN-oriented
You could work off a local development feature branch and keep your local configuration only on that branch; when you merge your code changes back to the trunk, exclude your changes to the configuration file from the merge. This is actually how I would do it, since we use Git, and branching isn't so painful in SVN that this isn't an option.
I am sure there are other Maven solutions as well. However you solve it, svn.ignore is your friend, and Maven profile usage can be very powerful.
Is the Maven replacer plugin a solution for your need?
We use jasypt to encrypt our passwords within properties files read by Spring. The tool can be used without Spring as well. This makes it very simple to keep your properties files in source control.
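For reference, the usual Spring wiring looks something like the sketch below, assuming the jasypt-spring31 integration; the bean names, environment variable and properties file location are illustrative:

<!-- Sketch: decrypt ENC(...) values in properties files at startup. -->
<bean id="encryptorConfig" class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
    <property name="algorithm" value="PBEWithMD5AndDES"/>
    <!-- Key is read from an environment variable, never from source control. -->
    <property name="passwordEnvName" value="APP_ENCRYPTION_PASSWORD"/>
</bean>
<bean id="configurationEncryptor" class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
    <property name="config" ref="encryptorConfig"/>
</bean>
<bean class="org.jasypt.spring31.properties.EncryptablePropertyPlaceholderConfigurer">
    <constructor-arg ref="configurationEncryptor"/>
    <property name="location" value="classpath:application.properties"/>
</bean>

The properties file then stores db.password=ENC(...) instead of the plain-text value.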
If your issue is user credentials, then I would suggest that you use a test account for any automated tests that you run.
I think filtering may suit your needs. You can have a local.filter that is not checked in and a prod.filter that is. You can use prod.filter by default and substitute local.filter based on a command-line flag or a local profile that developers would need to use, but deployers would not.
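A sketch of that arrangement (the property name env.filter is illustrative; local.filter and prod.filter are the files from the answer above):

<!-- Sketch: prod.filter is used unless 'mvn ... -P local' is run. -->
<build>
    <filters><filter>${env.filter}</filter></filters>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <filtering>true</filtering>
        </resource>
    </resources>
</build>
<profiles>
    <profile>
        <id>prod</id>
        <activation><activeByDefault>true</activeByDefault></activation>
        <properties><env.filter>prod.filter</env.filter></properties>
    </profile>
    <profile>
        <id>local</id>
        <properties><env.filter>local.filter</env.filter></properties>
    </profile>
</profiles>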
Our build/deploy process is tedious, highly manual and error-prone. Could you propose some improvements?
So let me describe our deployment strategy and build process.
We are developing a system called Application Server (AS for short). It is essentially a servlet-based web application hosted on a JBoss Web server. AS can be installed in two "environments". Each environment is a directory with the webapp's code. This directory is placed on network storage, which is mounted to several production servers where JBoss instances are installed, and the directory is linked into each JBoss' webapps directory. Thus all JBoss instances use the same code for an environment. The JBoss configuration is separate from the environment and is updated on a per-instance basis.
So we have two types of patches: webapp patches (for the different environments) and configuration patches (for per-instance configuration).
A patch is an executable file; in fact it is a bash script with an embedded binary rpm package. Installation is pretty straightforward: you just execute the file and optionally answer some questions. The important point is that the patch is not the system as a whole: it contains only some classes with fixes and/or scripts that modify configuration files. The classes are copied into WEB-INF/classes (AS is deployed as an exploded directory).
The way we build those patches is:
We take some previous patch files and copy them.
We modify the content of the patch. The most important part is the RPM spec: there we change the name of the patch, change its prerequisite rpm packages, and write down the actual bash commands for backing up, copying and modifying files. This is one of the most annoying parts, because we cannot always get the actual change-set. That is especially true for complex new features that span multiple change requests and commits. Writing those commands for a change-set is also tedious and error-prone.
For webapp patches we also modify the spec for the other environment. Usually they are identical except for the rpm package name.
We put all rpm-related files into VCS.
We modify build.xml by adding a couple of targets for building the new patch. The modification is done by copy-pasting and editing.
We modify CruiseControl's config by copy-pasting the project and changing the ant targets in it.
At last, we build the system.
Also, I'm interested in any references on patch preparation and deployment practices, preferably for Java applications. I haven't succeeded in googling for that.
The place I work had similar problems, but perhaps not as complex.
We responded by eliminating the concept of a patch altogether. We stopped patching and started simply installing the whole app (even if we made just a small change).
We now have CruiseControl build complete install kits whose names happen to contain the build timestamp. Each kit is a CruiseControl build artifact.
CruiseControl auto-installs the kits on a test server and runs some automated smoke tests. We then run manual tests on the test server, and finally install the artifact on a staging server and then the production server.
Getting rid of patching caused some people to splutter, "isn't that wasteful if you're just changing a couple of things?" and "why would you overwrite all the software just to patch something?"
But the truth is that good source control, automated install-kit building, and one-step installation have saved us tons of time. It takes a few seconds longer to install, but we can do it far more repeatably and with less developer labor.
I am working on a small team of web application developers. We edit JSPs in Eclipse on our own machines and then move them over to a shared application server to test the changes. I have an Ant script that will take ALL the JSPs on my machine and move them over to the application server, but will only overwrite JSPs if the ones on my machine are "newer". This works well most of the time, but not all of the time. Our update method doesn't preserve file change dates/times, so it is possible that an update on my machine will set the file date/time to now instead of when the file was actually last changed. If someone else worked on that file an hour ago (but hasn't committed the changes yet), then the older file on my PC will actually have a newer date, so when I run the Ant script it will overwrite their changes with an older file.
What I am looking for is an easy way to move just the file I am currently working on. Is there a way to specify the "current" file in an Ant script? Or an easy way to move the current file from within Eclipse? Perhaps a good plugin for this kind of thing? I could go out to Windows Explorer and move the file separately, but I would much prefer to do it from within Eclipse.
Add a target to your Ant build file that copies a single JSP, using a command-line property definition as @matt b described.
Create a new external tool launch profile and use the String Substitution preferences to pass in a reference to the active file in the editor (resource_name).
See Eclipse Help | Java Development User Guide | Reference | Preferences | Run/Debug | Launching | String Substitution
How would Ant know what file was "current"? It has no way of knowing.
You could pass the name of the file into your Ant script (taking advantage of the fact that any argument you pass to Ant with -D automatically becomes a property in your script)...
ant -Dfile=myfile.jsp update
and have your script look something like...
<copy file="${file}" todir="blah"/>
...but it would probably be a pain to constantly type the name of the file on the command line.
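For completeness, here is a sketch of a full update target along those lines; the deployment directory is an assumption:

<!-- Sketch: copy one JSP named via -Dfile=... to the shared server. -->
<project name="jsp-update" default="update">
    <property name="deploy.dir" location="/mnt/appserver/webapps/myapp"/>

    <target name="update">
        <fail unless="file" message="Usage: ant -Dfile=myfile.jsp update"/>
        <copy file="${file}" todir="${deploy.dir}" overwrite="true"/>
    </target>
</project>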
Honestly, the type of problem you've described is inevitable when multiple developers share an environment. I think a better long-term approach for you and your team is to have each developer work and test on a local application server instance before the code is promoted. This removes all the headaches, bottlenecks, and scheduling trouble of sharing an app server with other people.
You should really use source control any time you have multiple people working on the same thing (well, you should use it all the time regardless, but that's a different conversation). This way, when conflicts like this occur, the tool will notice and make someone perform a merge so that no one's changes get lost. Then the test server can run from a clean checkout of source, and each developer can also test the full application locally, because everyone's changes are instantly available to them through the source repository.
I suggest you use source control. I prefer Subversion. You can use CruiseControl to run the build automatically whenever someone commits new code.
The antrunner4e plugin is exactly what you are looking for -- see http://sourceforge.net/projects/antrunner4e/