When I edit a property from within Gradle, it re-formats my entire properties file and removes the comments. I assume this is because of the way Gradle reads and writes the properties file. I would like to change just one property and leave the rest of the file untouched, including the existing comments and the order of the values. Is this possible using Gradle 5.2.1?
I have tried just using setProperty (which does not write to the file), using a different writer (versionPropsFile.withWriter { versionProps.store(it, null) }),
and a different way of reading in the properties file: versionProps.load(versionPropsFile.newDataInputStream())
Here is my current Gradle code:
File versionPropsFile = file("default.properties")
def versionProps = new Properties()
versionProps.load(versionPropsFile.newDataInputStream())
int version_minor = versionProps.getProperty("VERSION_MINOR") as Integer
int version_build = versionProps.getProperty("VERSION_BUILD") as Integer
versionProps.setProperty("VERSION_MINOR", "1")
versionProps.setProperty("VERSION_BUILD", "2")
versionPropsFile.withWriter { versionProps.store(it, null) }
Here is a piece of what the properties file looks like before Gradle touches it:
# Show splash screen at startup (yes* | no)
SHOW_SPLASH = yes
# Start in minimized mode (yes | no*)
START_MINIMIZED = no
# First day of week (mon | sun*)
# FIRST_DAY_OF_WEEK = sun
# Version number
# Format: MAJOR.MINOR.BUILD
VERSION_MAJOR = 1
VERSION_MINOR = 0
VERSION_BUILD = 0
# Build value is the date
BUILD = 4-3-2019
Here is what Gradle does to it:
#Wed Apr 03 11:49:09 CDT 2019
DISABLE_L10N=no
LOOK_AND_FEEL=default
ON_MINIMIZE=normal
CHECK_IF_ALREADY_STARTED=YES
VERSION_BUILD=0
ASK_ON_EXIT=yes
SHOW_SPLASH=yes
VERSION_MAJOR=1
VERSION_MINOR=0
VERSION_BUILD=0
BUILD=04-03-2019
START_MINIMIZED=no
ON_CLOSE=minimize
PORT_NUMBER=19432
DISABLE_SYSTRAY=no
This is not a Gradle issue per se. The default Properties object of Java does not preserve any layout/comment information of properties files. You can use Apache Commons Configuration, for example, to get layout-preserving properties files.
Here’s a self-contained sample build.gradle file that loads, changes and saves a properties file, preserving comments and layout information (at least to the degree that is required by your example file):
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'org.apache.commons:commons-configuration2:2.4'
    }
}

import org.apache.commons.configuration2.io.FileHandler
import org.apache.commons.configuration2.PropertiesConfiguration

task propUpdater {
    doLast {
        def versionPropsFile = file('default.properties')
        def config = new PropertiesConfiguration()
        def fileHandler = new FileHandler(config)
        fileHandler.file = versionPropsFile
        fileHandler.load()
        // TODO change the properties in whatever way you like; as an example,
        // we’re simply incrementing the major version here:
        config.setProperty('VERSION_MAJOR',
                (config.getProperty('VERSION_MAJOR') as Integer) + 1)
        fileHandler.save()
    }
}
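Run it with gradle propUpdater (or ./gradlew propUpdater if the project uses the wrapper); the comments and the ordering in default.properties should survive the update.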
Related
I want to code my own Minecraft mod and I am following a YouTube tutorial.
I did everything right, but in IntelliJ IDEA the build.gradle file says "Cannot resolve symbol 'Date'".
Here is the full code of the build.gradle file:
buildscript {
    repositories {
        maven { url = 'https://files.minecraftforge.net/maven' }
        jcenter()
        mavenCentral()
    }
    dependencies {
        classpath group: 'net.minecraftforge.gradle', name: 'ForgeGradle', version: '4.1.+', changing: true
    }
}
apply plugin: 'net.minecraftforge.gradle'
// Only edit below this line, the above code adds and enables the necessary things for Forge to be setup.
apply plugin: 'eclipse'
apply plugin: 'maven-publish'
version = '1.0'
group = 'netjackboi03.forestportal' // http://maven.apache.org/guides/mini/guide-naming-conventions.html
archivesBaseName = 'forestportal'
java.toolchain.languageVersion = JavaLanguageVersion.of(8) // Mojang ships Java 8 to end users, so your mod should target Java 8.
println('Java: ' + System.getProperty('java.version') + ' JVM: ' + System.getProperty('java.vm.version') + '(' + System.getProperty('java.vendor') + ') Arch: ' + System.getProperty('os.arch'))
minecraft {
    // The mappings can be changed at any time, and must be in the following format.
    // Channel:   Version:
    //   snapshot   YYYYMMDD   Snapshots are built nightly.
    //   stable     #          Stables are built at the discretion of the MCP team.
    //   official   MCVersion  Official field/method names from Mojang mapping files
    //
    // You must be aware of the Mojang license when using the 'official' mappings.
    // See more information here: https://github.com/MinecraftForge/MCPConfig/blob/master/Mojang.md
    //
    // Use non-default mappings at your own risk. They may not always work.
    // Simply re-run your setup task after changing the mappings to update your workspace.
    mappings channel: 'official', version: '1.16.5'

    // makeObfSourceJar = false // an Srg named sources jar is made by default. Uncomment this to disable.
    // accessTransformer = file('src/main/resources/META-INF/accesstransformer.cfg')

    // Default run configurations.
    // These can be tweaked, removed, or duplicated as needed.
    runs {
        client {
            workingDirectory project.file('run')

            // Recommended logging data for a userdev environment
            // The markers can be changed as needed.
            // "SCAN": For mods scan.
            // "REGISTRIES": For firing of registry events.
            // "REGISTRYDUMP": For getting the contents of all registries.
            property 'forge.logging.markers', 'REGISTRIES'

            // Recommended logging level for the console
            // You can set various levels here.
            // Please read: https://stackoverflow.com/questions/2031163/when-to-use-the-different-log-levels
            property 'forge.logging.console.level', 'debug'

            mods {
                forestportal {
                    source sourceSets.main
                }
            }
        }

        server {
            workingDirectory project.file('run')

            // Recommended logging data for a userdev environment
            // The markers can be changed as needed.
            // "SCAN": For mods scan.
            // "REGISTRIES": For firing of registry events.
            // "REGISTRYDUMP": For getting the contents of all registries.
            property 'forge.logging.markers', 'REGISTRIES'

            // Recommended logging level for the console
            // You can set various levels here.
            // Please read: https://stackoverflow.com/questions/2031163/when-to-use-the-different-log-levels
            property 'forge.logging.console.level', 'debug'

            mods {
                forestportal {
                    source sourceSets.main
                }
            }
        }

        data {
            workingDirectory project.file('run')

            // Recommended logging data for a userdev environment
            // The markers can be changed as needed.
            // "SCAN": For mods scan.
            // "REGISTRIES": For firing of registry events.
            // "REGISTRYDUMP": For getting the contents of all registries.
            property 'forge.logging.markers', 'REGISTRIES'

            // Recommended logging level for the console
            // You can set various levels here.
            // Please read: https://stackoverflow.com/questions/2031163/when-to-use-the-different-log-levels
            property 'forge.logging.console.level', 'debug'

            // Specify the modid for data generation, where to output the resulting resource, and where to look for existing resources.
            args '--mod', 'forestportal', '--all',
                    '--existing', file('src/main/resources').toString(),
                    '--existing', file('src/generated/resources').toString(),
                    '--output', file('src/generated/resources/')

            mods {
                forestportal {
                    source sourceSets.main
                }
            }
        }
    }
}
// Include resources generated by data generators.
sourceSets.main.resources { srcDir 'src/generated/resources' }
dependencies {
    // Specify the version of Minecraft to use. If this is any group other than 'net.minecraft' it is assumed
    // that the dep is a ForgeGradle 'patcher' dependency, and its patches will be applied.
    // The userdev artifact is a special name and will get all sorts of transformations applied to it.
    minecraft 'net.minecraftforge:forge:1.16.5-36.0.58'

    // You may put jars on which you depend in ./libs, or you may define them like so..
    // compile "some.group:artifact:version:classifier"
    // compile "some.group:artifact:version"

    // Real examples
    // compile 'com.mod-buildcraft:buildcraft:6.0.8:dev'  // adds buildcraft to the dev env
    // compile 'com.googlecode.efficient-java-matrix-library:ejml:0.24' // adds ejml to the dev env

    // The 'provided' configuration is for optional dependencies that exist at compile-time but might not at runtime.
    // provided 'com.mod-buildcraft:buildcraft:6.0.8:dev'

    // These dependencies get remapped to your current MCP mappings
    // deobf 'com.mod-buildcraft:buildcraft:6.0.8:dev'

    // For more info...
    // http://www.gradle.org/docs/current/userguide/artifact_dependencies_tutorial.html
    // http://www.gradle.org/docs/current/userguide/dependency_management.html
}
// Example for how to get properties into the manifest for reading by the runtime..
jar {
    manifest {
        attributes([
                "Specification-Title"     : "forestportal",
                "Specification-Vendor"    : "examplemodsareus",
                "Specification-Version"   : "1", // We are version 1 of ourselves
                "Implementation-Title"    : project.name,
                "Implementation-Version"  : "${version}",
                "Implementation-Vendor"   : "examplemodsareus",
                "Implementation-Timestamp": new Date().format("yyyy-MM-dd'T'HH:mm:ssZ")
        ])
    }
}
// Example configuration to allow publishing using the maven-publish task
// This is the preferred method to reobfuscate your jar file
jar.finalizedBy('reobfJar')
// However if you are in a multi-project build, dev time needs unobfed jar files, so you can delay the obfuscation until publishing by doing
//publish.dependsOn('reobfJar')
publishing {
    publications {
        mavenJava(MavenPublication) {
            artifact jar
        }
    }
    repositories {
        maven {
            url "file:///${project.projectDir}/mcmodsrepo"
        }
    }
}
The problem lies in this line of code:
"Implementation-Timestamp": new Date().format("yyyy-MM-dd'T'HH:mm:ssZ")
I don't know if this line is significant for running the mod, but I wanted to make sure it isn't.
Thanks for your help :)
I have to say that I don't know. On 1.16.4 it works perfectly, but I have no idea why it doesn't work on 1.16.5. Maybe try deleting this line and see if everything works well without it.
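If you'd rather keep the timestamp than delete the line, one workaround worth trying (untested on 1.16.5) is to fully qualify the JDK classes so the build script does not rely on the Groovy Date extension that the IDE fails to resolve. A sketch of the manifest entry, producing the same timestamp format:
"Implementation-Timestamp": new java.text.SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ").format(new java.util.Date())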
If I had 20 directories under trunk/ with lots of files in each and only needed 3 of those directories, would it be possible to do a Subversion checkout with only those 3 directories under trunk?
Indeed, thanks to the comments on my post, it looks like sparse directories are the way to go. I believe the following should do it:
svn checkout --depth empty http://svnserver/trunk/proj
svn update --set-depth infinity proj/foo
svn update --set-depth infinity proj/bar
svn update --set-depth infinity proj/baz
Alternatively, --depth immediates instead of empty checks out files and directories in trunk/proj without their contents. That way you can see which directories exist in the repository.
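For example:
svn checkout --depth immediates http://svnserver/trunk/proj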
As mentioned in zigdon's answer, you can also do a non-recursive checkout. This is an older and less flexible way to achieve a similar effect:
svn checkout --non-recursive http://svnserver/trunk/proj
svn update proj/foo
svn update proj/bar
svn update proj/baz
Subversion 1.5 introduces sparse checkouts, which you may find useful. From the documentation:
... sparse directories (or shallow checkouts) ... allows you to easily check out a working copy—or a portion of a working copy—more shallowly than full recursion, with the freedom to bring in previously ignored files and subdirectories at a later time.
I wrote a script to automate complex sparse checkouts.
#!/usr/bin/env python
'''
This script makes a sparse checkout of an SVN tree in the current working directory.
Given a list of paths in an SVN repository, it will:
1. Checkout the common root directory
2. Update with depth=empty for intermediate directories
3. Update with depth=infinity for the leaf directories
'''
import os
import getpass
import pysvn
__author__ = "Karl Ostmo"
__date__ = "July 13, 2011"
# =============================================================================
# XXX The os.path.commonprefix() function does not behave as expected!
# See here: http://mail.python.org/pipermail/python-dev/2002-December/030947.html
# and here: http://nedbatchelder.com/blog/201003/whats_the_point_of_ospathcommonprefix.html
# and here (what ever happened?): http://bugs.python.org/issue400788
from itertools import takewhile
def allnamesequal(name):
    return all(n==name[0] for n in name[1:])

def commonprefix(paths, sep='/'):
    bydirectorylevels = zip(*[p.split(sep) for p in paths])
    return sep.join(x[0] for x in takewhile(allnamesequal, bydirectorylevels))
# =============================================================================
def getSvnClient(options):
    password = options.svn_password
    if not password:
        password = getpass.getpass('Enter SVN password for user "%s": ' % options.svn_username)

    client = pysvn.Client()
    client.callback_get_login = lambda realm, username, may_save: (True, options.svn_username, password, True)
    return client
# =============================================================================
def sparse_update_with_feedback(client, new_update_path):
    revision_list = client.update(new_update_path, depth=pysvn.depth.empty)
# =============================================================================
def sparse_checkout(options, client, repo_url, sparse_path, local_checkout_root):
    path_segments = sparse_path.split(os.sep)
    path_segments.reverse()

    # Update the middle path segments
    new_update_path = local_checkout_root
    while len(path_segments) > 1:
        path_segment = path_segments.pop()
        new_update_path = os.path.join(new_update_path, path_segment)
        sparse_update_with_feedback(client, new_update_path)
        if options.verbose:
            print "Added internal node:", path_segment

    # Update the leaf path segment, fully-recursive
    leaf_segment = path_segments.pop()
    new_update_path = os.path.join(new_update_path, leaf_segment)
    if options.verbose:
        print "Will now update with 'recursive':", new_update_path
    update_revision_list = client.update(new_update_path)

    if options.verbose:
        for revision in update_revision_list:
            print "- Finished updating %s to revision: %d" % (new_update_path, revision.number)
# =============================================================================
def group_sparse_checkout(options, client, repo_url, sparse_path_list, local_checkout_root):
    if not sparse_path_list:
        print "Nothing to do!"
        return

    checkout_path = None
    if len(sparse_path_list) > 1:
        checkout_path = commonprefix(sparse_path_list)
    else:
        checkout_path = sparse_path_list[0].split(os.sep)[0]

    root_checkout_url = os.path.join(repo_url, checkout_path).replace("\\", "/")
    revision = client.checkout(root_checkout_url, local_checkout_root, depth=pysvn.depth.empty)
    checkout_path_segments = checkout_path.split(os.sep)

    for sparse_path in sparse_path_list:

        # Remove the leading path segments
        path_segments = sparse_path.split(os.sep)
        start_segment_index = 0
        for i, segment in enumerate(checkout_path_segments):
            if segment == path_segments[i]:
                start_segment_index += 1
            else:
                break

        pruned_path = os.sep.join(path_segments[start_segment_index:])
        sparse_checkout(options, client, repo_url, pruned_path, local_checkout_root)
# =============================================================================
if __name__ == "__main__":

    from optparse import OptionParser
    usage = """%prog [path2] [more paths...]"""
    default_repo_url = "http://svn.example.com/MyRepository"
    default_checkout_path = "sparse_trunk"

    parser = OptionParser(usage)
    parser.add_option("-r", "--repo_url", type="str", default=default_repo_url, dest="repo_url", help='Repository URL (default: "%s")' % default_repo_url)
    parser.add_option("-l", "--local_path", type="str", default=default_checkout_path, dest="local_path", help='Local checkout path (default: "%s")' % default_checkout_path)

    default_username = getpass.getuser()
    parser.add_option("-u", "--username", type="str", default=default_username, dest="svn_username", help='SVN login username (default: "%s")' % default_username)
    parser.add_option("-p", "--password", type="str", dest="svn_password", help="SVN login password")
    parser.add_option("-v", "--verbose", action="store_true", default=False, dest="verbose", help="Verbose output")

    (options, args) = parser.parse_args()
    client = getSvnClient(options)

    group_sparse_checkout(
        options,
        client,
        options.repo_url,
        map(os.path.relpath, args),
        options.local_path)
Or do a non-recursive checkout of /trunk, then just do a manual update on the 3 directories you need.
If you already have the full local copy, you can remove unwanted subfolders by using the --set-depth command.
svn update --set-depth=exclude www
See: http://blogs.collab.net/subversion/sparse-directories-now-with-exclusion
The set-depth command supports multiple paths.
Updating the root of the working copy will not change the depth of the modified folder.
To restore a folder to a fully recursive checkout, use --set-depth again with the infinity parameter.
svn update --set-depth=infinity www
I'm adding this information for those using the TortoiseSVN tool: to get the same functionality the OP asked for, you can use the Choose items... button in the Checkout Depth section of the Checkout dialog.
Sort of. As Bobby says:
svn co file:///.../trunk/foo file:///.../trunk/bar file:///.../trunk/hum
will get the folders, but they will be separate folders from a Subversion perspective. You will have to do separate commits and updates on each subfolder.
I don't believe you can checkout a partial tree and then work with the partial tree as a single entity.
Not in any especially useful way, no. You can check out subtrees (as in Bobby Jack's suggestion), but then you lose the ability to update/commit them atomically; to do that, they need to be placed under their common parent, and as soon as you check out the common parent, you'll download everything under that parent. Non-recursive isn't a good option, because you want updates and commits to be recursive.
I have an issue where Maven + frontend-maven-plugin and webpack don't go well together when I install an entire Maven module. Simply put, webpack's HtmlWebpackPlugin will not inject the bundled js/css files the first time I install a Maven module, for some reason, even though a template is provided as such:
new HtmlWebpackPlugin({
    template : '../resources/public/index.html',
    filename : 'index.html',
    inject   : 'body',
})
However, if I manually run the frontend-maven-plugin after installing the entire Maven module, it does inject the correct files, which is rather strange behavior.
To work around this, I wanted to know if there's a manual way to inject these bundled files (I only have three: 1 CSS and 2 JS files) with a chunkhash into my own index.html template? That would make the build much more consistent.
A snippet of my webpack.config.js; as you can see, we add the chunkhash to the filenames if we are not in dev.
"use strict";
const ExtractTextPlugin = require("extract-text-webpack-plugin");
const HtmlWebpackPlugin = require('html-webpack-plugin');
let path = require('path');
let webpack = require("webpack");
const PATHS = {
build: path.join(__dirname, '../../../target', 'classes', 'public'),
};
const env = process.env.NODE_ENV;
let isDev = false;
if(env == "dev"){
isDev = true;
}
console.log(`Dev environment: ${isDev}`);
module.exports = {
entry: {
main: './js/index.jsx',
vendor: [
"react","react-dom","axios",
"react-table", "mobx", "mobx-react", "mobx-utils", "lodash"],
},
output: {
path: PATHS.build,
filename: `bundle.${isDev ? '' : '[chunkhash]'}.js`
},
plugins: [
new webpack.optimize.CommonsChunkPlugin({name: "vendor", filename: `/static/js/vendor.bundle.${isDev ? '' : '[chunkhash]'}.js`}),
new ExtractTextPlugin(`/static/css/[name].${isDev ? '' : '[chunkhash]'}.css`),
new HtmlWebpackPlugin({
template : '../resources/public/index.html',
filename : 'index.html',
inject : 'body',
})
],
module: {
loaders: [
// Bunch of loaders
]
},
};
I solved it. The issue was basically that Maven/Spring would take the index.html (which I used as a template) from resources/public outside my target folder and copy it into the target folder, overwriting the output of HtmlWebpackPlugin, which makes logical sense in this context.
I solved it by not having any index.html in resources/public, but just a template.html in the src folder where webpack lives. That way Maven/Spring doesn't overwrite the output with the empty template; the plugin configuration is sketched below.
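For illustration, the HtmlWebpackPlugin entry then points at the relocated template. template.html is the name from my setup; adjust the relative path to wherever your webpack.config.js sits:
new HtmlWebpackPlugin({
    template : './template.html', // next to webpack.config.js, not in resources/public
    filename : 'index.html',
    inject   : 'body',
})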
Total newbie to java/groovy/grails/shiro/you-name-it, so bear with me. I have exhausted tutorials and all the "Shiro LDAP" searches available and still cannot get my project working.
I am running all of this on GGTS with jdk1.7.0_80, Grails 2.3.0, and Shiro 1.2.1.
I have a working project and have successfully run quick-start-shiro, which built the domains ShiroRole and ShiroUser, the controller AuthController, the view login.gsp, and the realm ShiroDbRealm. I created a faux user in BootStrap with
def user = new ShiroUser(username: "user123", passwordHash: new Sha256Hash("password").toHex())
user.addToPermissions("*:*")
user.save()
and can successfully log into my homepage, and for all intents and purposes, that is as far as I have gotten. I cannot find a top-down tutorial of how to now log in with my username and password (authenticated through an LDAP server that I have available). From what I understand, I need to create a shiro.ini file and include something along the lines of
[main]
ldapRealm = org.apache.shiro.realm.activedirectory.ActiveDirectoryRealm
ldapRealm.url = ldap://MYURLHERE/
However, I don't even know where to put this shiro.ini file. I've seen /src/main/resources mentioned, but there is no such directory. Do I create it manually, or is it generated by some script?
The next step seems to be creating the SecurityManager which reads the shiro.ini somehow with code along the lines of
Factory<org.apache.shiro.mgt.SecurityManager> factory = new IniSecurityManagerFactory("actived.ini");
// Setting up the SecurityManager...
org.apache.shiro.mgt.SecurityManager securityManager = factory.getInstance();
SecurityUtils.setSecurityManager(securityManager);
However this always appears in some Java file in tutorials, but my project is a Groovy project inside of GGTS. Do I need to create a Java file and put it in src/java or something like that?
I've recently found that I may need a ShiroLdapRealm file (similar to ShiroDbRealm) with information like
def appConfig = grailsApplication.config
def ldapUrls = appConfig.ldap.server.url ?: [ "ldap://MYURLHERE/" ]
def searchBase = appConfig.ldap.search.base ?: ""
def searchUser = appConfig.ldap.search.user ?: ""
def searchPass = appConfig.ldap.search.pass ?: ""
def usernameAttribute = appConfig.ldap.username.attribute ?: "uid"
def skipAuthc = appConfig.ldap.skip.authentication ?: false
def skipCredChk = appConfig.ldap.skip.credentialsCheck ?: false
def allowEmptyPass = appConfig.ldap.allowEmptyPasswords != [:] ? appConfig.ldap.allowEmptyPasswords : true
and the corresponding info in Config along the lines of
ldap.server.url = ["ldap://MYRULHERE/"]
ldap.search.base = 'dc=COMPANYNAME,dc=com'
ldap.search.user = '' // if empty or null --> anonymous user lookup
ldap.search.pass = 'password' // only used with non-anonymous lookup
ldap.username.attribute = 'AccountName'
ldap.referral = "follow"
ldap.skip.credentialsCheck = false
ldap.allowEmptyPasswords = false
ldap.skip.authentication = false
But putting all these pieces together hasn't gotten me anywhere! Am I at least on the right track? Any help would be greatly appreciated!
As for /src/main/resources: it is created automatically if you use Maven for your project; otherwise, you can simply create the directory manually.
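As for bootstrapping the SecurityManager: you don't need a Java file. Grails compiles Groovy to the same bytecode, so the same calls can live in your existing BootStrap.groovy. A minimal, untested sketch, assuming shiro.ini ends up on the runtime classpath (and noting that the Grails Shiro plugin normally wires realms for you, so this plain-Shiro approach bypasses the plugin):
import org.apache.shiro.SecurityUtils
import org.apache.shiro.config.IniSecurityManagerFactory

class BootStrap {
    def init = { servletContext ->
        // Build the SecurityManager from a shiro.ini found on the classpath
        def factory = new IniSecurityManagerFactory('classpath:shiro.ini')
        SecurityUtils.securityManager = factory.instance
    }
}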
I have a java project that is built with buildr and that has some external dependencies:
repositories.remote << "http://www.ibiblio.org/maven2"
repositories.remote << "http://packages.example/"
define "myproject" do
compile.options.target = '1.5'
project.version = "1.0.0"
compile.with 'dependency:dependency-xy:jar:1.2.3'
compile.with 'dependency2:dependency2:jar:4.5.6'
package(:jar)
end
I want this to build a single standalone jar file that includes all these dependencies.
How do I do that?
(there's a logical followup question: How can I strip all the unused code from the included dependencies and only package the classes I actually use?)
This is what I'm doing right now. This uses autojar to pull only the necessary dependencies:
def add_dependencies(pkg)
  tempfile = pkg.to_s.sub(/\.jar$/, "-without-dependencies.jar")
  mv pkg.to_s, tempfile
  dependencies = compile.dependencies.map { |d| "-c #{d}" }.join(" ")
  sh "java -jar tools/autojar.jar -baev -o #{pkg} #{dependencies} #{tempfile}"
end
and later:
package(:jar)
package(:jar).enhance { |pkg| pkg.enhance { |pkg| add_dependencies(pkg) }}
(caveat: I know little about buildr, this could be totally the wrong approach. It works for me, though)
I'm also learning Buildr, and currently I'm packaging the Scala runtime with my application this way:
package(:jar).with(:manifest => _('src/MANIFEST.MF')).exclude('.scala-deps')
.merge('/var/local/scala/lib/scala-library.jar')
No idea if this is inferior to autojar (comments are welcome), but it seems to work for a simple example. It takes 4.5 minutes to package that scala-library.jar, though.
I'm going to use Cascading for my example:
cascading_dev_jars = Dir[_("#{ENV["CASCADING_HOME"]}/build/cascading-{core,xml}-*.jar")]
#...
package(:jar).include cascading_dev_jars, :path => "lib"
Here is how I create an uberjar with Buildr; this customizes what is put into the jar and how the manifest is created:
assembly_dir = 'target/assembly'
main_class = 'com.something.something.Blah'

artifacts = compile.dependencies
artifacts.each do |artifact|
  Unzip.new( _(assembly_dir) => artifact ).extract
end

# remove dirs from assembly that should not be in uberjar
FileUtils.rm_rf( "#{_(assembly_dir)}/example/package" )
FileUtils.rm_rf( "#{_(assembly_dir)}/example/dir" )

# create manifest file
File.open( _("#{assembly_dir}/META-INF/MANIFEST.MF"), 'w') do |f|
  f.write("Implementation-Title: Uberjar Example\n")
  f.write("Implementation-Version: #{project_version}\n")
  f.write("Main-Class: #{main_class}\n")
  f.write("Created-By: Buildr\n")
end

present_dir = Dir.pwd
Dir.chdir _(assembly_dir)
puts "Creating #{_("target/#{project.name}-#{project.version}.jar")}"
`jar -cfm #{_("target/#{project.name}-#{project.version}.jar")} #{_(assembly_dir)}/META-INF/MANIFEST.MF .`
Dir.chdir present_dir
There is also a version that supports Spring, by concatenating all the spring.schemas files from the dependencies; a sketch of the idea follows.
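I don't have that variant at hand; as a rough, unverified sketch of the idea (assuming the unzip command is available on the PATH, and building on the assembly_dir recipe above), you would append every dependency's spring.schemas to the assembly copy before the jar step:
# Hypothetical addition to the uberjar recipe above: merge the
# META-INF/spring.schemas files so Spring can resolve all namespaces
# after the jars have been flattened into one.
schemas_file = _("#{assembly_dir}/META-INF/spring.schemas")
compile.dependencies.each do |artifact|
  content = `unzip -p #{artifact} META-INF/spring.schemas 2>/dev/null`
  File.open(schemas_file, 'a') { |f| f.write(content) } unless content.empty?
end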