I'm working on a brand-new volume plugin, and I'm required to have all of the vol-test tests pass. All tests pass (on an environment with the plugin installed) except the first one, which is docker plugin install. The thing is that there are three possible ways to install a custom plugin:
.sock files are UNIX domain sockets.
.spec files are text files containing a URL, such as unix:///other.sock or tcp://localhost:8080.
.json files are text files containing a full json specification for the plugin.
and we use json, which is simply a REST server implementing the Docker plugin API (written in Java with Spring). The installation process for it is straightforward: just copy the json file into /etc/docker/plugins and dockerd automatically discovers it.
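For context, a minimal sketch of such a .json discovery file (the plugin name and address here are made up for illustration; a TLSConfig section can be added if needed):

{
  "Name": "myvolumedriver",
  "Addr": "http://localhost:8080"
}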
The problem comes when I try to integrate the plugin with the docker plugin install command. As stated here:
Docker looks first for the plugin on your Docker host. If the plugin does not exist locally, then the plugin is pulled from the registry.
Our installation process doesn't assume a connection to a private or public registry, so we first need the docker plugin create command in order to create the plugin locally. And this is where I'm having a hard time wrapping my head around how to do that with a json-based plugin. As per this doc, I need to specify a path to the plugin. If I use a directory name, it expects config.json and rootfs to be present in the directory.
BUT
1. config.json - this is a config that describes .sock format plugins, and not the .json format (please correct me if I'm wrong)
2. how do I create the rootfs, and why do I need it if my plugin is just a standalone REST service that doesn't even run in a container?
Appreciate any help.
config.json - this is a config that describes .sock format plugins, and not the .json format (please correct me if I'm wrong)
I've verified this works with .spec files; I'm not very sure how it works with json files, though. For .spec files, you don't mention the .spec file in config.json. That file is used only for unix socket plugins (option 1). In fact, there is no need to have a config.json for TCP socket plugins.
how do I create the rootfs, and why do I need it if my plugin is just a standalone REST service that doesn't even run in a container?
In my understanding, rootfs is only for unix socket plugins. Plugin discovery works out of the box if the .spec file exists in the right folder. In a nutshell, you just create the spec file, put it in the right discovery folder, and try to bootstrap a container with that plugin name. You don't have to run commands like "docker plugin create/install/enable". You run the server, put the file in the right folder, and let new containers use that plugin.
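A minimal sketch of that flow (the driver name, port, and volume name are assumptions for illustration):

# assumes your REST service is already listening on localhost:8080
echo "tcp://localhost:8080" | sudo tee /etc/docker/plugins/myvolumedriver.spec
# no docker plugin create/install/enable needed; just reference the driver by name
docker volume create -d myvolumedriver myvolume
docker run --rm -v myvolume:/data alpine ls /data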
So I am not really sure how to ask this question, so I am going to try my best.
I currently have a file within my project that is used for SSO OIDC configuration. For the most part we do not use it; most of the configuration comes from the dev database. The only value that we do use is the callback URL, which calls back to localhost instead of the dev environment. When my application starts up, I check to see if that file exists and pretty much override the dev configuration with anything in that file, mostly so we can just return back to localhost. I also do development work and need to add or change additional values locally, so the ability to override is needed for me specifically. The issue I am trying to find a solution for is that when we jar the application, that OIDC configuration file also gets included and deployed to the server, which then makes the dev environment point to localhost. I tried excluding that OIDC configuration file in Gradle, but then when I run the application locally it is also excluded, and the file is missing locally. I am trying to figure out a way to exclude that OIDC configuration file only when deploying to dev/test/prod but keep it locally. Or maybe even a different approach would work too.
For this case, you can create a local directory in your resources folder; then in Gradle you can specifically exclude this directory from being bundled when the jar is created, using the following:
jar {
    exclude("DIRECTORY-TO-EXCLUDE/**")
}
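With that in place, a startup check along these lines will see the file when running locally (from src/main/resources) but not in the deployed jar. This is just a sketch; the local/oidc-override.properties resource name is an assumption:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class OidcOverrideLoader {
    // Returns the local override if it is on the classpath (running locally),
    // or null when running from the jar, where the directory was excluded.
    public static Properties loadLocalOverride() throws IOException {
        try (InputStream in = OidcOverrideLoader.class
                .getResourceAsStream("/local/oidc-override.properties")) {
            if (in == null) {
                return null; // excluded from the jar: keep the dev configuration
            }
            Properties props = new Properties();
            props.load(in);
            return props;
        }
    }
}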
I have just started using Openfire. I have created a sample plugin and it works in my IDE. I want to deploy this server on a machine but don't know how.
The README explains how I can start it in the IDE. I know how a war/jar file is deployed, but I don't know how to deploy Openfire. It does not seem to have a jar. Can anyone help?
You can build Openfire by issuing the following command in the root of the source code:
mvn clean package
After you do this, a directory will be created that holds a fully functional Openfire server. You can find it in distribution/target/distribution-base
That folder holds a fully functional copy of your build. You can copy it elsewhere, and run Openfire from there.
Note that the folder does not include installers. Those are (mostly) generated by a proprietary bit of software (Install4J). Its configuration is part of the source tree, but to make use of it, you'll have to obtain a copy of the Install4J application.
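Putting it together, something like this (the /opt/openfire target path is just an example):

mvn clean package
cp -r distribution/target/distribution-base /opt/openfire
# start the server (bin also contains equivalent .bat scripts for Windows)
/opt/openfire/bin/openfire.sh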
I'm working with a dynamic web project in Eclipse and I'm planning on a Java JAX-RS RESTful back-end with a JavaScript single-page app front-end using a framework of the Angular/Durandal/Aurelia flavor. With that said, the typical way to deploy in the Java world is to bundle things up as a WAR file - which is essentially a JAR file. The trouble is, including the node_modules blows up the size of the WAR file considerably. On the other hand, I can execute 'npm install' after deployment. However, on my development machine, where I'm constantly deploying, that will take too much time. I would prefer if I can prepare the install directory on the web server with the 'npm install' modules and then deploy the WAR file on top of it. The trouble is, it seems the WAR file deployment enjoys wiping out folders if they are not contained in the WAR file.
I'm using the GlassFish 4.1 application server. The ideal solution for me would be a way to 'cloak' directories in the WAR file by modifying the MANIFEST.MF file, such that when the WAR is expanded the cloaked directories are not overwritten. This would be the most parsimonious solution to my problem. However, I know of no such cloaking entries for the JAR/WAR manifest.
There may also be creative solutions arrived at using the 'npm link' command. Any suggestions are welcome.
Perhaps this, among other reasons, speaks to why, once people get started with npm on the client side, they start looking at Node and Express on the server side. However, I'm not convinced they can't play nicely together, and I would like to keep the option of all the old-school open source Java libraries at my disposal.
I know this question is almost two years old, but perhaps someone will still need an answer.
Put simply, you need to bundle your JavaScript. You should never be wrapping up your node_modules folder in a war, or even deploying it as-is to the server. Mainly because of exactly the issue you were having. It's... not the smallest.
In front-end development, you're expected to use a tool like webpack to gather up all your JS files into a single app.js file. This process will only take the actual files you directly require or import in your own JavaScript (and the files that those files require, etc), leaving out all the rest. Most importantly for this discussion, leaving out all your devDependencies!
Webpack will also bundle up files other than js. Importing your css files will tell webpack to also bundle those up, creating an app.css file alongside your app.js (though you will need to use an appropriate loader to tell Webpack what it means to import 'main.css').
Getting started is a fairly straightforward matter of adding a config file to your project, adding a new devDependency, and figuring out how to get your Java-based build tool to trigger the bundler. The frontend-maven-plugin, for instance, or the gradle-node-plugin.
These days, webpack and its ilk are even smarter. If your node_modules contains ES6 native modules, bundlers can perform tree-shaking on these files to only bundle the exports that are actually imported. This reduces the bundle size even more.
They can also pull out parts of the bundle into a separate file in order to create, say, a vendor.js file that contains the code for Angular, jQuery, etc. Or you can tell the bundler to treat those imports as external, meaning that they are assumed to have been included elsewhere in the web app. But this is all getting into more advanced features than you need at first. Just give webpack's getting started guide a go, see the difference it immediately makes to your war size, and go from there.
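For a concrete starting point, a minimal webpack.config.js might look like this (the entry and output paths are assumptions for a typical Maven-style layout; adjust them to your project):

// webpack.config.js
const path = require('path');

module.exports = {
  mode: 'production',
  // your own application entry point; webpack follows its imports from here
  entry: './src/main/js/index.js',
  output: {
    filename: 'app.js',
    path: path.resolve(__dirname, 'src/main/webapp/dist'),
  },
};

Running npx webpack then emits a single dist/app.js to reference from your page, and only that output needs to end up in the WAR.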
If you are using a Node.js build tool like Grunt (but probably not), then it's likely the devDependencies that are taking up so much space. If so, just copy your runtime dependencies out of node_modules.
If not: you don't have to deploy a .war; you can also deploy an 'exploded' directory. You could copy only the changed files and touch .reload, as in the sketch below.
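For example (a sketch: the source and domain paths are assumptions, and dynamic reloading must be enabled in GlassFish):

# copy only the changed files into the exploded application directory
rsync -av --exclude node_modules build/exploded/ /glassfish4/glassfish/domains/domain1/applications/myapp/
# touching .reload tells GlassFish to reload the application
touch /glassfish4/glassfish/domains/domain1/applications/myapp/.reload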
In addition to the tools mentioned above for packing NPM resources, let me also mention JNPM: https://github.com/OrienteerBAP/JNPM
It provides a Maven plugin (jnpm-maven-plugin) to download, filter, and pack the required NPM packages into your JAR/WAR. So in your case, you should publish your client code as an NPM package and then pack it into your WAR through this plugin.
I have a Java project which compares data in two Excel files and displays the output. In my Eclipse project I created a folder, data, and in the code I read from root/data, and it works fine as well. But my manager asked me to move this job to Jenkins. So my question is: how do I specify the input folder path in Jenkins? Should it be on the same server where Jenkins is installed, or can Jenkins read data from another location on another server?
By default, Jenkins will work in the job's workspace location; if you provide a path in the job (be it via a parameter, an environment variable, etc.), it will be relative to that location.
However, you can specify an absolute path for anywhere on the Jenkins Server, which will also work.
If you wish to read data from another server, you will need to make it available to the job's runtime/access level.
One example would be to put the file on IIS, a network share, or some other form of sharing, and download it during your Jenkins job into the workspace.
PowerShell example for downloading a file from an IIS site:
$source = "http://my-web-server-ip/download/mycsvfile.csv"
$destination = "c:\my-jenkins-job-workspace\mycsvfile.csv"
Invoke-WebRequest $source -OutFile $destination
Please consider that the above is just a basic implementation, and this can be accomplished in a number of ways, some of which may be better than others.
I read some comments about building Dropwizard applications: [1] "Dropwizard is designed to run as a JAR, not as a WAR file." and [2] "You can't do this. Dropwizard embeds Jetty. You should look into just using Jersey as a standard web application." So, my questions are:
1 - How do I deploy a jar file in a production environment?
2 - How will I manage the service? For example, is there a way to monitor the health of the application? If the application goes down, how can I restart it automatically?
[1] How to create a war from dropwizard app?
[2] Dropwizard in tomcat container
You can use tools like runit or systemd to manage your dropwizard app on Linux. They can do things like make sure it starts when the system starts up, and can help with detecting failures. There is a bit of scripting involved.
You can point a monitoring tool at the healthcheck URL of your app to send alerts when it's down.
For deployment, I prefer to package apps using the system packaging format, .deb (Debian-based systems, including Ubuntu), or .rpm (RedHat based systems). Use the fpm package builder to create it, and include your runit files (or whatever), and scripts to copy the jar file somewhere on the target system. If you have a private package repository, you can put builds of your app into it, and installation becomes a matter of "apt-get install myapp" or "yum install myapp". Otherwise, drop the package onto your target server and run "rpm -i myapp.rpm" or similar.
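A sketch of such an fpm invocation (the package name, version, and paths are all assumptions; this example ships a systemd unit rather than runit files):

# package the app jar plus a systemd unit file into a .deb
fpm -s dir -t deb -n myapp -v 1.0.0 \
  --deb-systemd myapp.service \
  myapp.jar=/opt/myapp/myapp.jar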
After running mvn package in your source directory, the jar file is created in the target directory by Maven.
Just upload this jar file to a directory of your liking on the server, say /opt/myapplication/.
The jar file can be executed on the server with java -jar JARFILE; make sure you have Java installed there. That's it, basically.
Now when you run this in production, you want the process to be supervised (and restarted if it fails) and started automatically on boot. For this, look into your server's startup system (systemd was mentioned before for those Linux distributions that support it, but current Debian/Ubuntu versions still have other boot mechanisms at the moment, so you may need to write a start script for /etc/init.d/myapplication).
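For systemd-based distributions, a minimal unit file could look like this (a sketch; the user, paths, and file names are assumptions; Dropwizard apps take the server command plus a config file):

# /etc/systemd/system/myapplication.service
[Unit]
Description=My Dropwizard application
After=network.target

[Service]
User=myapp
ExecStart=/usr/bin/java -jar /opt/myapplication/myapplication.jar server /opt/myapplication/config.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable and start it with systemctl enable --now myapplication.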
Health checks are - as mentioned before - integrated into the Dropwizard app; you simply request the health check URL on a regular basis. In professional environments, you should have a tool like Nagios that you can point at the URL.
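Dropwizard exposes its health checks on the admin port (8081 by default, unless you changed it), so the probe can be as simple as:

curl http://localhost:8081/healthcheck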
If your server runs Unix, you can build fpm packages to install your service on the server. Just build the package with fpm, copy it to the server, and install it.
Or use fabric (http://www.fabfile.org/).