Updating module version when updating version in dependencies (multi-module Maven) - java

My problem: the versions-maven-plugin helps me bump the version of some module (let's call it A) in my multi-module Maven project.
Some modules in this project (let's call them B and C) have module A in their dependencies. I need to bump the versions of these modules (B and C) too. Sometimes I also need to bump the version of another module (B-parent) that has B (or C) in its dependencies (A version up -> B version up -> B-parent version up). Another problem is that the modules can be at different levels of nesting.
Example:
root:
- B-parent:
  - B (A in dependencies)
- C-parent:
  - C (A in dependencies)
- A-parent:
  - A
Process: A version up -> A-parent version up; C version up -> C-parent version up; B version up -> B-parent version up.
The versions-maven-plugin can't do this by itself.
Is there any idea how this can be done?
Or is my strategy of updating versions not good enough?

I've made a script that increases version numbers in all dependent modules recursively with the versions-maven-plugin.
The algorithm is as follows:
1. Run versions:set in the target module (the exact command is sketched below).
2. Run versions:set in all modules which were updated by versions:set in the previous step. If a module has already been processed, skip it.
3. Repeat step 2.
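For reference, the per-module command the script below shells out to is just the standard versions-maven-plugin invocation (the version number here is illustrative):

mvn versions:set -DnewVersion=1.0.1 -DgenerateBackupPoms=false

versions:set rewrites the module's pom.xml (and possibly other poms in the reactor); the script detects the modified pom.xml files through git in step 2.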
Python 2.7 code
#!/usr/bin/env python
# -*- coding: utf-8 -*- #
# How To
#
# Run script and pass module path as a first argument.
# Or run it without arguments in module dir.
#
# Script will request the new version number for each module.
# If no version provided - last digit will be incremented (1.0.0 -> 1.0.1).
# cd <module-path>
# <project-dir>/increment-version.py
# ...
# review changes and commit
from subprocess import call, Popen, PIPE, check_output
import os
import re
import sys

getVersionCommand = "mvn org.apache.maven.plugins:maven-help-plugin:2.1.1:evaluate " \
                    "-Dexpression=project.version 2>/dev/null | grep -v '\['"

def getCurrentModuleVersion():
    return check_output(getVersionCommand, shell=True).decode("utf-8").split("\n")[0]

def incrementLastDigit(version):
    digits = version.split(".")
    lastDigit = int(digits[-1])
    digits[-1] = str(lastDigit + 1)
    return ".".join(digits)

def isUpdatedVersionInFile(version, file):
    return "<version>" + version + "</version>" in \
        check_output("git diff HEAD --no-ext-diff --unified=0 --exit-code -a --no-prefix {} "
                     "| egrep \"^\\+\"".format(file), shell=True).decode("utf-8")

def runVersionSet(version):
    process = Popen(["mvn", "versions:set", "-DnewVersion=" + version, "-DgenerateBackupPoms=false"], stdout=PIPE)
    (output, err) = process.communicate()
    exitCode = process.wait()
    if exitCode != 0:
        print "Error setting the version"
        exit(1)
    return output, err, exitCode

def addChangedPoms(version, dirsToVisit, visitedDirs):
    changedFiles = check_output(["git", "ls-files", "-m"]) \
        .decode("utf-8").split("\n")
    changedPoms = [f for f in changedFiles if f.endswith("pom.xml")]
    changedDirs = [os.path.dirname(os.path.abspath(f)) for f in changedPoms if isUpdatedVersionInFile(version, f)]
    changedDirs = [d for d in changedDirs if d not in visitedDirs and d not in dirsToVisit]
    print "New dirs to visit:", changedDirs
    return changedDirs

if __name__ == "__main__":
    visitedDirs = []
    dirsToVisit = []
    if len(sys.argv) > 1:
        if os.path.exists(os.path.join(sys.argv[1], "pom.xml")):
            dirsToVisit.append(os.path.abspath(sys.argv[1]))
        else:
            print "Error. No pom.xml file in dir", sys.argv[1]
            exit(1)
    else:
        dirsToVisit.append(os.path.abspath(os.getcwd()))
    pattern = re.compile("aggregation root: (.*)")
    while len(dirsToVisit) > 0:
        dirToVisit = dirsToVisit.pop()
        print "Visiting dir", dirToVisit
        os.chdir(dirToVisit)
        currentVersion = getCurrentModuleVersion()
        defaultVersion = incrementLastDigit(currentVersion)
        version = raw_input("New version for {}:{} ({}):".format(dirToVisit, currentVersion, defaultVersion))
        if not version.strip():
            version = defaultVersion
        print "New version:", version
        output, err, exitcode = runVersionSet(version)
        rootDir = pattern.search(output).group(1)
        visitedDirs = visitedDirs + [dirToVisit]
        os.chdir(rootDir)
        print "Adding new dirs to visit"
        dirsToVisit = dirsToVisit + addChangedPoms(version, dirsToVisit, visitedDirs)

sbt-assembly and Lucene "An SPI class of type org.apache.lucene.codecs.Codec with name 'Lucene94' does not exist" exception

OS: Ubuntu 22.10
java: openjdk version "19.0.1" 2022-10-18
scala: 2.13.10
Apache Lucene: 9.4.2
I took the Lucene documentation example and converted it to a Scala program:
package test

import org.apache.lucene.analysis.standard.StandardAnalyzer
import org.apache.lucene.document.{Document, Field, TextField}
import org.apache.lucene.index.{DirectoryReader, IndexWriter, IndexWriterConfig}
import org.apache.lucene.queryparser.classic.QueryParser
import org.apache.lucene.search.{IndexSearcher, Query, ScoreDoc}
import org.apache.lucene.store.FSDirectory

import java.nio.file.{Files, Path}

object Test extends App {
  val analyzer: StandardAnalyzer = new StandardAnalyzer()
  val indexPath: Path = Files.createTempDirectory("tempIndex")
  val directory: FSDirectory = FSDirectory.open(indexPath)
  val config: IndexWriterConfig = new IndexWriterConfig(analyzer)
  val iwriter: IndexWriter = new IndexWriter(directory, config)
  val doc: Document = new Document()
  val text: String = "This is the text to be indexed."
  doc.add(new Field("fieldname", text, TextField.TYPE_STORED))
  iwriter.addDocument(doc)
  iwriter.close()
  // Now search the index:
  val ireader: DirectoryReader = DirectoryReader.open(directory)
  val isearcher: IndexSearcher = new IndexSearcher(ireader)
  // Parse a simple query that searches for "text":
  val parser: QueryParser = new QueryParser("fieldname", analyzer)
  val query: Query = parser.parse("text")
  val hits: Array[ScoreDoc] = isearcher.search(query, 10).scoreDocs
  assert(hits.length == 1)
  // Iterate through the results:
  for (i <- hits.indices) {
    val hitDoc = isearcher.doc(hits(i).doc)
    assert("This is the text to be indexed.".equals(hitDoc.get("fieldname")))
  }
  ireader.close()
  directory.close()
  println("The end!")
}
If I use the following sbt file:
ThisBuild / version := "0.1.0-SNAPSHOT"
ThisBuild / scalaVersion := "2.13.10"

lazy val root = (project in file("."))
  .settings(
    name := "Test"
  )

val luceneVersion = "9.4.2"

libraryDependencies ++= Seq(
  "org.apache.lucene" % "lucene-core" % luceneVersion,
  "org.apache.lucene" % "lucene-queryparser" % luceneVersion
)
The compilation gives me the error:
[error] Deduplicate found different file contents in the following:
[error] Jar name = lucene-core-9.4.2.jar, jar org = org.apache.lucene, entry target = module-info.class
[error] Jar name = lucene-queries-9.4.2.jar, jar org = org.apache.lucene, entry target = module-info.class
[error] Jar name = lucene-queryparser-9.4.2.jar, jar org = org.apache.lucene, entry target = module-info.class
[error] Jar name = lucene-sandbox-9.4.2.jar, jar org = org.apache.lucene, entry target = module-info.class
So I included in the sbt file:
assembly / assemblyMergeStrategy := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _ => MergeStrategy.first
}
After that the compilation and execution of the program were ok:
sbt "runMain test.Test"
But if I want to create a fat jar file and execute it, I get the following exception.
plugins.sbt:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "2.1.0")
java -cp target/scala-2.13/Test-assembly-0.1.0-SNAPSHOT.jar test.Test
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.lucene.codecs.Codec.getDefault(Codec.java:141)
at org.apache.lucene.index.LiveIndexWriterConfig.<init>(LiveIndexWriterConfig.java:128)
at org.apache.lucene.index.IndexWriterConfig.<init>(IndexWriterConfig.java:145)
at test.Test$.delayedEndpoint$test$Test$1(Test.scala:17)
at test.Test$delayedInit$body.apply(Test.scala:12)
at scala.Function0.apply$mcV$sp(Function0.scala:42)
at scala.Function0.apply$mcV$sp$(Function0.scala:42)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
at scala.App.$anonfun$main$1(App.scala:98)
at scala.App.$anonfun$main$1$adapted(App.scala:98)
at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:575)
at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:573)
at scala.collection.AbstractIterable.foreach(Iterable.scala:933)
at scala.App.main(App.scala:98)
at scala.App.main$(App.scala:96)
at test.Test$.main(Test.scala:12)
at test.Test.main(Test.scala)
Caused by: java.lang.IllegalArgumentException: An SPI class of type org.apache.lucene.codecs.Codec with name 'Lucene94' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath. The current classpath supports the following names: []
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:113)
at org.apache.lucene.codecs.Codec$Holder.<clinit>(Codec.java:58)
... 17 more
So, what did I do wrong?
Thanks.
case PathList("META-INF", xs @ _*) => MergeStrategy.discard means that you're discarding all META-INF directories (their whole content). This is dangerous: the dependencies lucene-core and lucene-sandbox have service files in their META-INF. You should be more selective in what you ignore. Try to ignore only the Java 9+ module-info.class files:
assembly / assemblyMergeStrategy := {
  case x if x.endsWith("module-info.class") => MergeStrategy.discard
  case x =>
    val oldStrategy = (assembly / assemblyMergeStrategy).value
    oldStrategy(x)
}
or at least un-ignore the META-INF/services subdirectories:
assembly / assemblyMergeStrategy := {
  case PathList("META-INF", "services", xs @ _*) => MergeStrategy.concat
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _ => MergeStrategy.first
}

Changing Java version in Bazel

I am using Bazel as the build tool for my Java project. I have JDK 11 installed on my Mac, but Bazel uses Java 8 to build the binaries. Does anyone know how I could change this?
BUILD.bazel
java_binary(
    name = 'JavaBinary',
    srcs = ['JavaBinary.java'],
    main_class = 'JavaBinary',
)

load(
    "@bazel_tools//tools/jdk:default_java_toolchain.bzl",
    "default_java_toolchain",
)

default_java_toolchain(
    name = "default_toolchain",
    visibility = ["//visibility:public"],
)
JavaBinary.java
public class JavaBinary {
    public static void main(String[] args) {
        System.out.println("Successfully executed JavaBinary!");
        System.out.println("Version: " + System.getProperty("java.version"));
    }
}
WORKSPACE.bazel
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_java",
    sha256 = "220b87d8cfabd22d1c6d8e3cdb4249abd4c93dcc152e0667db061fb1b957ee68",
    url = "https://github.com/bazelbuild/rules_java/releases/download/0.1.1/rules_java-0.1.1.tar.gz",
)

load("@rules_java//java:repositories.bzl", "rules_java_dependencies", "rules_java_toolchains")

rules_java_dependencies()

rules_java_toolchains()
Run it this way:
bazel run :JavaBinary \
  --java_toolchain=:default_toolchain \
  --javabase=@bazel_tools//tools/jdk:remote_jdk11
You can also create a .bazelrc file and then execute bazel run :JavaBinary:
.bazelrc
build --java_toolchain=:default_toolchain
build --javabase=@bazel_tools//tools/jdk:remote_jdk11
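Note that --java_toolchain and --javabase are tied to the older Bazel used here and were removed in later releases. Assuming a current Bazel (roughly 5.0 or newer), the equivalent .bazelrc would be:

build --java_language_version=11
build --java_runtime_version=remotejdk_11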

Could not load main class from .java class file

I'm using Python to visualize a graph through a tool named wot in a Jupyter notebook. It uses Gephi, a Java-based graph utility. I try to run a function that writes coordinate output files, as below:
import os
import subprocess

import pkg_resources
import psutil

def run_gephi(input_graph_file, output_coord_file, n_steps):
    layout = 'fa'
    memory = int(0.5 * psutil.virtual_memory()[0] * 1e-9)
    classpath = os.path.dirname(
        pkg_resources.resource_filename('wot', 'commands/resources/graph_layout/GraphLayout.class')) + ':' + \
        pkg_resources.resource_filename('wot', 'commands/resources/graph_layout/gephi-toolkit-0.9.2-all.jar')
    subprocess.check_call(['java', '-Djava.awt.headless=true', '-Xmx{memory}g'.format(memory=memory), '-cp', classpath,
                           'GraphLayout', input_graph_file, output_coord_file, layout, str(n_steps), str(os.cpu_count())])
Then it returns the following error in my Jupyter notebook:
CalledProcessError Traceback (most recent call last)
<ipython-input-18-5fc832689b87> in <module>
----> 1 df, adata = compute_force_layout(ds)
<ipython-input-7-6cb84b9e0fa0> in compute_force_layout(ds, n_neighbors, n_comps, neighbors_diff, n_steps)
24 writer.write("{u} {v} {w:.6g}\n".format(u=i + 1, v=j + 1, w=W[i, j]))
25
---> 26 run_gephi(input_graph_file, output_coord_file, n_steps)
27 # replace numbers with cids
28 df = pd.read_table(output_coord_file, header=0, index_col='id')
<ipython-input-16-28772d0d10cc> in run_gephi(input_graph_file, output_coord_file, n_steps)
7 pkg_resources.resource_filename('wot', 'commands/resources/graph_layout/gephi-toolkit-0.9.2-all.jar')
8 subprocess.check_call(['java', '-Djava.awt.headless=true', '-Xmx{memory}g'.format(memory=memory), '-cp', classpath, \
----> 9 'GraphLayout', input_graph_file, output_coord_file, layout, str(n_steps), str(os.cpu_count())])
~/anaconda3/lib/python3.7/subprocess.py in check_call(*popenargs, **kwargs)
339 if cmd is None:
340 cmd = popenargs[0]
--> 341 raise CalledProcessError(retcode, cmd)
342 return 0
343
CalledProcessError: Command '['java', '-Djava.awt.headless=true', '-Xmx25g', '-cp', '/home/iik/.local/lib/python3.7/site-packages/wot/commands/resources/graph_layout:/home/iik/.local/lib/python3.7/site-packages/wot/commands/resources/graph_layout/gephi-toolkit-0.9.2-all.jar', 'GraphLayout', '/tmp/gephiznxedn32.net', '/tmp/coordsd64x05ww.txt', 'fa', '10000', '8']' returned non-zero exit status 1.
and the following message was found in the terminal:
Error: Could not find or load main class GraphLayout
I can find the "GraphLayout.java" and "gephi-toolkit-0.9.2-all.jar" files in the path, so I really don't know why the class can't be loaded.
Do you have any suggestions?
The class GraphLayout is not contained in Gephi but defined by GraphLayout.java. What you found in that directory is the source file; the JVM can only load the compiled GraphLayout.class, so the source has to be compiled first (with the Gephi toolkit jar on the classpath).
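A minimal sketch of that missing step, using the paths from the error message above:

cd /home/iik/.local/lib/python3.7/site-packages/wot/commands/resources/graph_layout
javac -cp gephi-toolkit-0.9.2-all.jar GraphLayout.java

This produces GraphLayout.class next to the jar, after which the java -cp ... GraphLayout invocation can resolve the main class.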

Unable to resolve class com.cloudbees.hudson.plugins.folder.Folder

I am trying to gather data from Jenkins using a Groovy script and am getting an error:
unable to resolve class com.cloudbees.hudson.plugins.folder.Folder
Below is the code:
import jenkins.model.*
import hudson.model.*
import groovy.time.TimeCategory

use (TimeCategory) {
    // e.g. find jobs not run in last 1 year
    sometimeago = (new Date() - 1.year)
}

jobs = Jenkins.instance.getAllItems()
lastabort = null
jobs.each { j ->
    if (j instanceof com.cloudbees.hudson.plugins.folder.Folder) { return }
    numbuilds = j.builds.size()
    if (numbuilds == 0) {
        println 'JOB: ' + j.fullName
        println ' -> no build'
        return
    }
    lastbuild = j.builds[numbuilds - 1]
    if (lastbuild.timestamp.getTime() < sometimeago) {
        println 'JOB: ' + j.fullName
        println ' -> lastbuild: ' + lastbuild.displayName + ' = ' + lastbuild.result + ', time: ' + lastbuild.timestampString2
    }
}
The error is:
org.codehaus.groovy.control.MultipleCompilationErrorsException:
startup failed: Script1.groovy: 12: unable to resolve class
com.cloudbees.hudson.plugins.folder.Folder @ line 12, column 20.
   if (j instanceof com.cloudbees.hudson.plugins.folder.Folder) { return }
   ^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:302)
I see Folder.java in jenkinsci/cloudbees-folder-plugin.
That means you need to:
1. Check that you have the JENKINS/CloudBees Folders Plugin installed, or your Groovy script will not be able to resolve that dependency.
2. Add "import com.cloudbees.hudson.plugins.folder.*" to be sure the script is able to make the instanceof check work.
3. When running Groovy scripts that import libraries in Jenkins, check that your Jenkins build step is an "Execute system Groovy script", not a plain old "Execute Groovy script". The 'system' scripts run on the existing JVM, as opposed to spawning a new one and therefore losing access to the shared libraries available to the original Jenkins JVM instance.
Groovy Script vs System Groovy Script - https://plugins.jenkins.io/groovy/

How can I run DataNucleus Bytecode Enhancer from SBT?

I've put together a proof of concept which aims to provide a skeleton SBT multimodule project which utilizes DataNucleus JDO Enhancer with mixed Java and Scala sources.
The difficulty appears when I try to enhance persistence classes from SBT. Apparently, I'm not passing the correct classpath when calling Fork.java.fork(...) from SBT.
See also this question:
How can SBT generate metamodel classes from model classes using DataNucleus?
Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class org.datanucleus.util.Localiser
at org.datanucleus.metadata.MetaDataManagerImpl.loadPersistenceUnit(MetaDataManagerImpl.java:1104)
at org.datanucleus.enhancer.DataNucleusEnhancer.getFileMetadataForInput(DataNucleusEnhancer.java:768)
at org.datanucleus.enhancer.DataNucleusEnhancer.enhance(DataNucleusEnhancer.java:488)
at org.datanucleus.api.jdo.JDOEnhancer.enhance(JDOEnhancer.java:125)
at javax.jdo.Enhancer.run(Enhancer.java:196)
at javax.jdo.Enhancer.main(Enhancer.java:130)
[info] Compiling 2 Java sources to /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/klasses...
java.lang.IllegalStateException: errno = 1
at $54321831a5683ffa07b5$.runner(build.sbt:230)
at $54321831a5683ffa07b5$$anonfun$model$7.apply(build.sbt:259)
at $54321831a5683ffa07b5$$anonfun$model$7.apply(build.sbt:258)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:235)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
For the sake of completeness, below you can see the java command line generated by SBT; it can be executed by hand in a separate window, for example, and it just works fine:
$ java -cp /home/rgomes/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.6.jar:/home/rgomes/.ivy2/cache/com.google.code.gson/gson/jars/gson-2.3.1.jar:/home/rgomes/.ivy2/cache/javax.jdo/jdo-api/jars/jdo-api-3.0.jar:/home/rgomes/.ivy2/cache/javax.transaction/transaction-api/jars/transaction-api-1.1.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-core/jars/datanucleus-core-4.0.4.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-api-jdo/jars/datanucleus-api-jdo-4.0.4.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-jdo-query/jars/datanucleus-jdo-query-4.0.4.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-rdbms/jars/datanucleus-rdbms-4.0.4.jar:/home/rgomes/.ivy2/cache/com.h2database/h2/jars/h2-1.4.185.jar:/home/rgomes/.ivy2/cache/org.postgresql/postgresql/jars/postgresql-9.4-1200-jdbc41.jar:/home/rgomes/.ivy2/cache/com.github.dblock.waffle/waffle-jna/jars/waffle-jna-1.7.jar:/home/rgomes/.ivy2/cache/net.java.dev.jna/jna/jars/jna-4.1.0.jar:/home/rgomes/.ivy2/cache/net.java.dev.jna/jna-platform/jars/jna-platform-4.1.0.jar:/home/rgomes/.ivy2/cache/org.slf4j/slf4j-simple/jars/slf4j-simple-1.7.7.jar:/home/rgomes/.ivy2/cache/org.slf4j/slf4j-api/jars/slf4j-api-1.7.7.jar:/home/rgomes/workspace/poc-scala-datanucleus/model/src/main/resources:/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/klasses javax.jdo.Enhancer -v -pu persistence-h2 -d /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.ClassEnhancerImpl save
INFO: Writing class file "/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes/model/AbstractModel.class" with enhanced definition
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
INFO: ENHANCED (Persistable) : model.AbstractModel
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.ClassEnhancerImpl save
INFO: Writing class file "/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes/model/Identifier.class" with enhanced definition
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
INFO: ENHANCED (Persistable) : model.Identifier
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
INFO: DataNucleus Enhancer completed with success for 2 classes. Timings : input=112 ms, enhance=102 ms, total=214 ms. Consult the log for full details
Enhancer Processing -v.
Enhancer adding Persistence Unit persistence-h2.
Enhancer processing output directory /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes.
Enhancer found JDOEnhancer of class org.datanucleus.api.jdo.JDOEnhancer.
Enhancer property key:VendorName value:DataNucleus.
Enhancer property key:VersionNumber value:4.0.4.
Enhancer property key:API value:JDO.
Enhancer enhanced 2 classes.
Below you can see some debugging information which is passed to Fork.java.fork(...):
=============================================================
mainClass=javax.jdo.Enhancer
args=-v -pu persistence-h2 -d /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes
javaHome=None
cwd=/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes
runJVMOptions=
bootJars ---------------------------------------------
/home/rgomes/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.6.jar
/home/rgomes/.ivy2/cache/com.google.code.gson/gson/jars/gson-2.3.1.jar
/home/rgomes/.ivy2/cache/javax.jdo/jdo-api/jars/jdo-api-3.0.jar
/home/rgomes/.ivy2/cache/javax.transaction/transaction-api/jars/transaction-api-1.1.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-core/jars/datanucleus-core-4.0.4.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-api-jdo/jars/datanucleus-api-jdo-4.0.4.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-jdo-query/jars/datanucleus-jdo-query-4.0.4.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-rdbms/jars/datanucleus-rdbms-4.0.4.jar
/home/rgomes/.ivy2/cache/com.h2database/h2/jars/h2-1.4.185.jar
/home/rgomes/.ivy2/cache/org.postgresql/postgresql/jars/postgresql-9.4-1200-jdbc41.jar
/home/rgomes/.ivy2/cache/com.github.dblock.waffle/waffle-jna/jars/waffle-jna-1.7.jar
/home/rgomes/.ivy2/cache/net.java.dev.jna/jna/jars/jna-4.1.0.jar
/home/rgomes/.ivy2/cache/net.java.dev.jna/jna-platform/jars/jna-platform-4.1.0.jar
/home/rgomes/.ivy2/cache/org.slf4j/slf4j-simple/jars/slf4j-simple-1.7.7.jar
/home/rgomes/.ivy2/cache/org.slf4j/slf4j-api/jars/slf4j-api-1.7.7.jar
/home/rgomes/workspace/poc-scala-datanucleus/model/src/main/resources
/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/klasses
envVars ----------------------------------------------
=============================================================
The project is available on GitHub for your convenience at
https://github.com/frgomes/poc-scala-datanucleus
Just download it and type
./sbt compile
Any help is immensely appreciated. Thanks.
You can either use java.lang.ProcessBuilder or sbt.Fork.
See below a generic javaRunner you can add to your build.sbt which employs java.lang.ProcessBuilder.
See also a generic sbtRunner you can add to your build.sbt which employs sbt.Fork. Thanks to @dwijnand for providing insightful information for making sbtRunner work as expected.
def javaRunner(mainClass: String,
               args: Seq[String],
               classpath: Seq[File],
               cwd: File,
               javaHome: Option[File] = None,
               runJVMOptions: Seq[String] = Nil,
               envVars: Map[String, String] = Map.empty,
               connectInput: Boolean = false,
               outputStrategy: Option[OutputStrategy] = Some(StdoutOutput)): Seq[File] = {
  val java_ : String = javaHome.fold("") { p => p.absolutePath + "/bin/" } + "java"
  val jvm_ : Seq[String] = runJVMOptions.map(p => p.toString)
  val cp_ : Seq[String] = classpath.map(p => p.absolutePath)
  val env_ = envVars.map({ case (k, v) => s"${k}=${v}" })
  val xcmd_ : Seq[String] = Seq(java_) ++ jvm_ ++ Seq("-cp", cp_.mkString(java.io.File.pathSeparator), mainClass) ++ args

  println("=============================================================")
  println(xcmd_.mkString(" "))
  println("=============================================================")
  println("")

  IO.createDirectory(cwd)
  import scala.collection.JavaConverters._
  val cmd = xcmd_.asJava
  val pb = new java.lang.ProcessBuilder(cmd)
  pb.directory(cwd)
  pb.inheritIO
  val process = pb.start()

  def cancel() = {
    println("Run canceled.")
    process.destroy()
    1
  }

  val errno = try process.waitFor catch { case e: InterruptedException => cancel() }
  if (errno == 0) {
    if (args.contains("-v")) cwd.list.foreach(f => println(f))
    cwd.listFiles
  } else {
    throw new IllegalStateException(s"errno = ${errno}")
  }
}
def sbtRunner(mainClass: String,
              args: Seq[String],
              classpath: Seq[File],
              cwd: File,
              javaHome: Option[File] = None,
              runJVMOptions: Seq[String] = Nil,
              envVars: Map[String, String] = Map.empty,
              connectInput: Boolean = false,
              outputStrategy: Option[OutputStrategy] = Some(StdoutOutput)): Seq[File] = {
  val args_ = args.map(p => p.toString)
  val java_ = javaHome.fold("None") { p => p.absolutePath }
  val cp_ = classpath.map(p => p.absolutePath)
  val jvm_ = runJVMOptions.map(p => p.toString) ++ Seq("-cp", cp_.mkString(java.io.File.pathSeparator))
  val env_ = envVars.map({ case (k, v) => s"${k}=${v}" })

  def dump: String =
    s"""
       |mainClass=${mainClass}
       |args=${args_.mkString(" ")}
       |javaHome=${java_}
       |cwd=${cwd.absolutePath}
       |runJVMOptions=${jvm_.mkString(" ")}
       |classpath --------------------------------------------
       |${cp_.mkString("\n")}
       |envVars ----------------------------------------------
       |${env_.mkString("\n")}
     """.stripMargin

  def cmd: String =
    s"""java ${jvm_.mkString(" ")} ${mainClass} ${args_.mkString(" ")}"""

  println("=============================================================")
  println(dump)
  println("=============================================================")
  println(cmd)
  println("=============================================================")
  println("")

  IO.createDirectory(cwd)
  val options =
    ForkOptions(
      javaHome = javaHome,
      outputStrategy = outputStrategy,
      bootJars = Seq.empty,
      workingDirectory = Option(cwd),
      runJVMOptions = jvm_,
      connectInput = connectInput,
      envVars = envVars)
  val process = new Fork("java", Option(mainClass)).fork(options, args)

  def cancel() = {
    println("Run canceled.")
    process.destroy()
    1
  }

  val errno = try process.exitValue() catch { case e: InterruptedException => cancel() }
  if (errno == 0) {
    if (args.contains("-v")) cwd.list.foreach(f => println(f))
    cwd.listFiles
  } else {
    throw new IllegalStateException(s"errno = ${errno}")
  }
}
Then you need to wire the DataNucleus Enhancer into your build process. This is done via the manipulateBytecode sub-task, as demonstrated below:
lazy val model =
  project.in(file("model"))
    // .settings(publishSettings:_*)
    .settings(librarySettings:_*)
    .settings(paranoidOptions:_*)
    .settings(otestFramework: _*)
    .settings(deps_tagging:_*)
    //-- .settings(deps_stream:_*)
    .settings(deps_database:_*)
    .settings(
      Seq(
        // This trick requires SBT 0.13.8
        manipulateBytecode in Compile := {
          val previous = (manipulateBytecode in Compile).value
          sbtRunner( // javaRunner also works!
            mainClass = "javax.jdo.Enhancer",
            args =
              Seq(
                "-v",
                "-pu", "persistence-h2",
                "-d", (classDirectory in Compile).value.absolutePath),
            classpath =
              (managedClasspath in Compile).value.files ++
                (unmanagedResourceDirectories in Compile).value :+
                (classDirectory in Compile).value,
            cwd = (classDirectory in Compile).value,
            javaHome = javaHome.value,
            envVars = (envVars in Compile).value
          )
          previous
        }
      ):_*)
    .dependsOn(util)
For a complete example, including a few JDO annotated persistence classes and some rudimentary test cases, please have a look at
http://github.com/frgomes/poc-scala-datanucleus
I think the issue is that you're passing your dependency jars as boot jars, not as the classpath.
From your poc project, perhaps something like:
val jvm_ = runJVMOptions.map(p => p.toString) ++
           Seq("-cp", cp_ mkString java.io.File.pathSeparator)
