We are implementing Maven + Jenkins and are trying out artifact repositories like Artifactory. We currently deploy with SVN (yes, it's awful), and we get a lot of client requests that should be handled instantly, like "please add this button to my JS form", so for now it is enough to commit the JS file and update it on the client's deployed Tomcat.
I need to know if there is a way in Maven to install on a server just an incremental part of our deployable files, because the complete war file is about 600 MB and it is not viable to download the whole war for a two-line change in a JavaScript file.
We can create a new artifact on each commit, but we cannot work out whether it is possible to achieve a continuous delivery setup that lets us ship an instant fix to a client server.
All the examples we have seen refer to complete deployable artifacts; we haven't found a case where someone runs something like mvn install -mySpecificVersion and only the changed files are downloaded, or anything similar.
Thanks.
After months of thinking about solutions, we figured out that the answer is to split our project into modules, so the dynamic, instant-fix requirements belong either to the core or to a specific small module. This approach gives us a reproducible pipeline and cheap data transfer.
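A sketch of what such a split can look like in the parent pom (the module names here are hypothetical, not from our project):

```xml
<modules>
  <module>core</module>            <!-- big, stable, released rarely -->
  <module>webapp-frontend</module> <!-- small module holding the frequently-patched JS -->
</modules>
```

Releasing only the small module keeps each fix transfer to a few megabytes instead of the full 600 MB war.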
I'm working on Windows 10 Pro, with Java 1.8 or Java 15. No corporate proxy;
connectivity to the internet works just fine.
I'm trying to build Spring Boot from source. I cloned the GitHub repo, then checked out the tag that I needed.
git clone https://github.com/spring-projects/spring-boot.git
git checkout tags/v2.3.9.RELEASE
It failed to switch to that tag. It's actually the first time I've seen an error while trying to switch to a tag. That's OK. I deleted the directory, switched to the tag in the browser, and downloaded the zip.
I unzipped it and then attempted to build from source with Gradle.
gradlew build
I got the following error
A build scan was not published as you have not authenticated with server 'ge.spring.io'
I originally tried with JDK 1.8 then switched to JDK 15. Same error both times.
I noticed that the Spring Boot page suggested the Gradle wrapper. So I googled how to do that and found that I needed to run
gradle wrapper
That produced the same error message.
So I thought maybe I should register with ge.spring.io. I had actually looked into that when I first got the error and didn't see any way to register. Now I looked even deeper and still could not find out how to register. I googled how to register: nothing. I cannot be the only one this has ever happened to. I also find it extremely odd that I cannot register, unless this repo is only for paying customers. If that is the case, it's not really open source; putting the source out there with no way to build the actual binaries would be absurd. So I know this cannot be a paid site that I need to register with just to build the source.
I looked through the configuration files for where it might be referencing this site, but I could not find it.
I thought maybe the jars had been built and it just could not push them to the site. That's fine; I just need the jar files. However, I cannot find the jar files either.
By now I'm sure you've surmised that I'm not familiar with Gradle; you would be correct. I can follow instructions on how to run it, but I have never used it in a project. I've used Maven, Ant, make, and others.
If this error can be ignored, and I should look somewhere for the built jars, that would be the solution I'm looking for. If I need to register for the site in order to finish the build, then how do I do that?
The message about the build scan can be ignored. A build scan describes what happened during the build and isn’t needed to access the build’s output.
When you run build, each module’s jar is written to its build/libs directory. For example, you’ll find the jar for the spring-boot module in spring-boot-project/spring-boot/build/libs. Alternatively, you may want to run publishToMavenLocal. It will publish each module to your local Maven cache from where you can consume it in a Maven build, or a Gradle build configured with mavenLocal() as a repository.
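Concretely, from the root of the checkout, the two options look like this (the second path follows the standard Maven local-cache layout for the org.springframework.boot group):

```shell
# Option 1: build, then grab the jar from the module's build/libs directory
./gradlew build
ls spring-boot-project/spring-boot/build/libs/

# Option 2: publish every module into the local Maven cache instead
./gradlew publishToMavenLocal
ls ~/.m2/repository/org/springframework/boot/spring-boot/
```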
Presently, I have many Jenkins jobs deploying war files to multiple servers using the JBoss AS Maven plugin. I have created multiple post-build Maven goals to deploy to multiple instances. To enhance the setup, I need to read data from a SQL table that holds the mapping between each project and its instances. Based on that data, I need to dynamically change the instances (undeploy from the previous instance and deploy on the new one) and do a hot deployment. Any thoughts?
Instead of using a DB, I would suggest using a simple source repo, e.g. a git repo. You can keep such mapping data in a file there.
So when you do a deployment, Jenkins checks out the repo, and your script reads the mapping and does the deployment. The script could be bash, Ruby, or anything you find easy. The deployment script can live in the same source repo as well.
Later on, if you need to make a change, you just edit the file and push the change.
The whole idea is infrastructure as code: put everything into source control. Jenkins is then a single tool that invokes the script from your source repo.
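A minimal sketch of that setup. The file name, its format, and the jboss-as.hostname property in the comment are my assumptions, not something from the question:

```shell
#!/bin/sh
# deploy-mappings.txt lives in the same git repo that Jenkins checks out.
# Assumed format: one "project=instance" pair per line.
MAPPING_FILE="deploy-mappings.txt"

# Print the instance a given project should be deployed to.
instance_for() {
  grep "^$1=" "$MAPPING_FILE" | cut -d= -f2
}

# The Jenkins job would then use the lookup roughly like this
# (undeploying would use the hostname recorded before the mapping changed):
# mvn jboss-as:deploy -Djboss-as.hostname="$(instance_for myproject)"
```

Changing a project's target instance is then a one-line edit to the mapping file plus a push.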
This question was raised because our main war file is about 40 MB (and it's not the only war file in the whole project), while all the remaining jar files together are about 20 MB; thus every release takes three times more space than it would if we did not deploy the wars.
So maybe there is an option to deploy not the whole war file but only the resources, so the deployment team could build it from Nexus? But if I skip deploy for the module where we build the war file, the deployment team will not be able to build it without access to the source code.
Is it common practice to deploy war files to a company-local repository?
There are two camps. One camp says: deploy (actually upload) every artifact to Nexus. This way the process is the same for every artifact and everybody knows where to find it.
The other camp says: use Nexus only as a dependency repository. If your war file is the end product and it isn't a dependency of another project (the way it would be for an ear file), then there's no reason to upload it to Nexus. In this case you could upload it to a file share or an SCM location and distribute it from there.
For every change, push to Nexus as a snapshot!
In the continuous integration practice, with parallel development, updating Nexus with the latest development changes is a never-ending process, regardless of size.
It is best practice to have the deploy goal run so that the updated artifacts are available
to the teams working on them: the deployment team, the testing team, and other development teams can start consuming the change immediately.
In general I would suggest just uploading the war to Nexus and getting your operations team to fetch it from Nexus for deployment. After all, Nexus is already running as the server that manages your binaries. Why split things up and use different storage for your build outputs just because they are war files rather than jars or something else? That just makes things more complicated for no good reason.
The maven-deploy-plugin has a skip property: http://maven.apache.org/plugins/maven-deploy-plugin/deploy-mojo.html
Setting this property in your Maven pom file fixes the problem:
<maven.deploy.skip>true</maven.deploy.skip>
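For reference, the property goes in a standard Maven properties block, in the pom of the module whose artifact you don't want uploaded:

```xml
<properties>
  <!-- tells the maven-deploy-plugin to skip uploading this module's artifact -->
  <maven.deploy.skip>true</maven.deploy.skip>
</properties>
```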
I have almost no experience using NetBeans and SVN, so please correct me gently if I am wrong. I come from Python/Vim/Git, so this workflow is foreign to me.
Currently, I used NetBeans' SVN plugin to check out my project from a remote repository. The project has several components, like web services, a Swing client, and the business logic.
Assuming that I need to work with the web services and the Swing client, do I create separate projects for each and import the project as references?
Finally, I'm currently using NetBeans to test the webapp on a local GlassFish server. How do I deploy to a remote test server so that my teammates can use and test the web app?
If all of the separate components are part of the same project in SVN then, no, you should create only one project in NetBeans. Check out the project from SVN; once NetBeans has checked it out, it should prompt you to create a NetBeans project. When it does, select "project from existing sources", follow the wizard steps, and your project will be created. You will have to import any external jars needed by the checked-out project into the newly created project in NetBeans. NOTE: NetBeans will create a build.xml file for the newly created project, so be sure not to commit that build.xml into the repository unless it's needed.
To set up a remote server in NetBeans, go to Tools -> Servers, select Add Server, select the server type (GlassFish, Tomcat, etc.), then enter the pertinent information for the server using the wizard.
First question: "Assuming that I need to work with the web services and the swing client, do I create separate projects for each, and import the project as references?"
It really depends on your architecture: how coupled those components are. It also matters what packaging system you use. For example, with Maven you can easily split the project into modules and define dependencies between them. Check out this for details. I am referring to Maven as it's built into NetBeans.
Regarding your testing, you basically have two options:
a) expose your GlassFish server to your buddies
b) package and deploy your project remotely - this really can't be given a straight answer; it depends on your infrastructure, where the remote server is, how big the deployment package is, how the application server behaves regarding hot deployments, etc.
I suggest trying the first option.
SVN helps you keep several versions of the project in the repository.
You can always check out from the repository.
Once you have checked out, you will be working on the project from a directory somewhere on your local machine.
Now you can keep developing the project, and your colleagues can do the same; whenever you think you have made enough changes, and after verifying your own copy of the project, you can commit the changes to the repository.
After committing, the changes are visible to everyone in the repository, so anyone can access the updated version of the project.
Note that the earlier version is not deleted unless you delete it yourself.
Your friends can also check out a copy of the project from the repository, merge it with their existing copies, and commit their own changes.
At the same time, you can ask a friend to review your code by checking out the project and deploying it on their local server.
I hope this gives you a better picture.
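On the command line, that cycle boils down to a few commands (the URL is a placeholder; NetBeans runs the same operations behind its menus):

```shell
svn checkout https://svn.example.com/repo/trunk myproject  # get your own working copy
cd myproject
# ...edit, build, verify locally...
svn update                               # pull in changes your teammates committed meanwhile
svn commit -m "add button to JS form"    # publish your changes as a new revision
```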
If I were using Python, I would probably use pip as a nice installer for continuous delivery, with its nice repository integration and scripting capabilities.
Is there anything similar in Java that would be useful for continuous deployment?
Can someone recommend how they do full continuous deployment in Java?
I'm going to have multiple servers with complex configurations and huge clusters with databases and NoSQL stores (some projects use Maven while others are just downloaded packages), etc. Does anyone have recommendations for that?
Again, I think pip is a very nice installer and could help me; does anyone have experience with Ubuntu Juju?
However, using Ubuntu Juju would mean I'd have to use Ubuntu-based servers rather than CentOS.
There's a kind of bright line between Java app build and Java app deployment. Build CI in Java is pretty straightforward with a variety of tools available - build scripting (Ant, Maven, Gradle, etc), continuous builds (Jenkins, Go, Anthill, etc), and repositories (Nexus, Artifactory, etc). Dependency management for libraries is a hairball for Java, so definitely use Maven or Ivy for it.
Deployment is a much wilder and less mature world. The environments are potentially far more complex, and often include messy non-Java things like relational databases. You can hand-roll scripts, or use ControlTier or Capistrano or something like that (which will still involve some hand-rolling).
I'm not completely clear on what pip does, but here is my toolchain for CI/CD.
You need a build tool:
Maven (does a lot of stuff, including downloading dependencies and driving you crazy)
Ant (will poke you until you die with XML brackets)
Gradle and others (pretty much everything, including Ant, uses or can use Ivy for downloading dependencies from repositories)
You need a CI server:
Jenkins
various commercial options (TeamCity, Bamboo, ...)
For the deployment part you need something to deploy your apps.
This really depends on the build tool you use (which should be able to do the deployment). Maven has some plugins for this AFAIK, but you will probably have to google your app server together with your build tool to find a solution for your specific need.
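As one concrete example of such a plugin, the Tomcat Maven plugin can push a war to a running server; this assumes the plugin is configured in your pom with the server's manager URL and credentials:

```shell
mvn tomcat7:deploy     # upload the packaged war to the configured Tomcat
mvn tomcat7:redeploy   # replace a war that is already deployed
```

Other app servers (JBoss, GlassFish) have their own equivalents.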
Probably what you are looking for is to build a deployment pipeline. Check out a video example here: http://www.youtube.com/watch?v=6CEQOuHM86Y
There are multiple ways to achieve it. I'll tell you my preferred one.
Components you will need:
VCS server (SVN, Git)
CI Server (Jenkins, Hudson, TeamCity)
Build Tool (Maven, Ant, Gradle)
Artifact Repository (Artifactory, Nexus)
Deployment Tool (Rundeck, Puppet, Deployinator, Capistrano)
Target Environment/s (Application Server like Tomcat, JBoss)
Workflow:
1) The CI Server polls the VCS Server for changes
2) When a change (i.e., a commit) is found, it starts a job execution and produces an artifact (the CI Server compiles and runs the tests). Internally, the CI Server uses a Build Tool like Maven.
3) The CI Server uploads the artifact to an Artifact Repository
4) The Deployment Tool reads the Artifact Repository and offers the list of artifacts that can be deployed, plus the list of Target Environments; a Developer/Ops engineer selects a combination of both and deploys the artifact to the selected server
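Steps 3 and 4 work because both the CI server and the deployment tool agree on the standard Maven repository layout. A small sketch (the coordinates and URL below are made-up examples):

```shell
# Compute the path an artifact lives at inside a Maven-layout repository:
# groupId dots become slashes, then artifactId/version/artifactId-version.ext
artifact_path() {
  printf '%s/%s/%s/%s-%s.war\n' "$(echo "$1" | tr . /)" "$2" "$3" "$2" "$3"
}

artifact_path com.example myapp 1.0.0
# prints: com/example/myapp/1.0.0/myapp-1.0.0.war

# Step 3 is just `mvn deploy` on the CI server; step 4 boils down to the
# deployment tool fetching that path from the repository, e.g.:
# curl -u deployer:***** -O "https://nexus.example.com/repository/releases/$(artifact_path com.example myapp 1.0.0)"
```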
Take some criteria into consideration when picking the tools. If you have a really big infrastructure (200+ Target Environments), robust solutions like Puppet make sense. If you have a modest one (let's say, 10 Target Environments), then Rundeck may be enough. Also take into consideration that some of the listed tools are not free (Puppet Enterprise, for example, is not free beyond ten nodes).