I am trying to compile an Actor that handles a DynamicMessage, using the Scala reflection Toolbox.
The actor code looks like this:
import scala.reflect.runtime.universe
import scala.tools.reflect.ToolBox
val toolbox = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()
val actorCode =
  """
    |import akka.actor._
    |import com.google.protobuf._
    |
    |class SimpleActor extends Actor {
    |  override def receive: Receive = {
    |    case dynamicMessage: DynamicMessage => println("Dynamic message received!")
    |    case _ => println("Whatever!") // the default, catch-all
    |  }
    |}
    |
    |object SimpleActor {
    |  def props() : Props = Props(new SimpleActor())
    |}
    |
    |return SimpleActor.props()
    |""".stripMargin
val tree = toolbox.parse(actorCode)
toolbox.compile(tree)().asInstanceOf[Props]
I get the error
reflective compilation has failed:
illegal cyclic reference involving type T
scala.tools.reflect.ToolBoxError: reflective compilation has failed:
illegal cyclic reference involving type T
If I run the code outside of the Toolbox it compiles and works fine.
The error is reported for the line
case dynamicMessage: DynamicMessage => println("Dynamic message received!")
Does anyone know the nature of this error and how to fix it?
In Scala, even without reflective compilation, there are known bugs in the combination of Scala-Java interop and F-bounded polymorphism, for example:
scalac reports error on valid Java class: illegal cyclic reference involving type T
and others.
And the parents of com.google.protobuf.DynamicMessage make heavy use of F-bounds:
DynamicMessage
  <: AbstractMessage
       <: AbstractMessageLite[_, _]  (such inheritance is allowed in Java but not in Scala)
            where AbstractMessageLite[M <: AbstractMessageLite[M, B],
                                      B <: AbstractMessageLite.Builder[M, B]]
              and AbstractMessageLite.Builder[M <: AbstractMessageLite[M, B],
                                              B <: AbstractMessageLite.Builder[M, B]]
                    <: MessageLite.Builder
                         <: MessageLiteOrBuilder
                         <: Cloneable
            <: MessageLite
                 <: MessageLiteOrBuilder
       <: Message
            <: MessageLite ...
            <: MessageOrBuilder
                 <: MessageLiteOrBuilder
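To illustrate what such mutually F-bounded type parameters look like, here is a minimal Scala sketch with the same shape (the trait names are made up, not the real protobuf API; it only mirrors AbstractMessageLite and its Builder):

// Illustrative only: mirrors the shape of the protobuf F-bounds, not the actual classes.
trait MessageLike[M <: MessageLike[M, B], B <: BuilderLike[M, B]] {
  def toBuilder: B
}
trait BuilderLike[M <: MessageLike[M, B], B <: BuilderLike[M, B]] {
  def build(): M
}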
But without reflective compilation your code compiles, so this looks like a bug in the combination of reflective compilation, Scala-Java interop, and F-bounded polymorphism.
A workaround is to use the real compiler (scala.tools.nsc.Global) instead of the toolbox:
import akka.actor.{ActorSystem, Props}
// libraryDependencies += "com.github.os72" % "protobuf-dynamic" % "1.0.1"
import com.github.os72.protobuf.dynamic.{DynamicSchema, MessageDefinition}
import com.google.protobuf.DynamicMessage
import scala.reflect.internal.util.{AbstractFileClassLoader, BatchSourceFile}
import scala.reflect.io.{AbstractFile, VirtualDirectory}
import scala.reflect.runtime
import scala.reflect.runtime.universe
import scala.reflect.runtime.universe._
import scala.tools.nsc.{Global, Settings}
val actorCode = """
|import akka.actor._
|import com.google.protobuf._
|
|class SimpleActor extends Actor {
| override def receive: Receive = {
| case dynamicMessage: DynamicMessage => println("Dynamic message received!")
| case _ => println("Whatever!") // the default, catch-all
| }
|}
|
|object SimpleActor {
| def props() : Props = Props(new SimpleActor())
|}
|""".stripMargin
val directory = new VirtualDirectory("(memory)", None)
val runtimeMirror = createRuntimeMirror(directory, runtime.currentMirror)
compileCode(actorCode, List(), directory)
val props = runObjectMethod("SimpleActor", runtimeMirror, "props")
.asInstanceOf[Props]
val actorSystem = ActorSystem("actorSystem")
val actor = actorSystem.actorOf(props, "helloActor")
val msg = makeDynamicMessage()
actor ! "hello" // Whatever!
actor ! msg // Dynamic message received!
actorSystem.terminate()
// see (*)
def makeDynamicMessage(): DynamicMessage = {
  // Builds a DynamicMessage for a "Person" schema defined at runtime (see (*)).
  val schemaBuilder = DynamicSchema.newBuilder
  schemaBuilder.setName("PersonSchemaDynamic.proto")

  val msgDef = MessageDefinition.newBuilder("Person")
    .addField("required", "int32", "id", 1)
    .build
  schemaBuilder.addMessageDefinition(msgDef)
  val schema = schemaBuilder.build

  val msgBuilder = schema.newMessageBuilder("Person")
  val msgDesc = msgBuilder.getDescriptorForType
  msgBuilder
    .setField(msgDesc.findFieldByName("id"), 1)
    .build
}

// Compiles the source string with the real compiler, writing class files to outputDirectory.
def compileCode(
    code: String,
    classpathDirectories: List[AbstractFile],
    outputDirectory: AbstractFile
): Unit = {
  val settings = new Settings
  classpathDirectories.foreach(dir => settings.classpath.prepend(dir.toString))
  settings.outputDirs.setSingleOutput(outputDirectory)
  settings.usejavacp.value = true
  val global = new Global(settings)
  (new global.Run).compileSources(List(new BatchSourceFile("(inline)", code)))
}

// Invokes a method on a top-level object via runtime reflection.
def runObjectMethod(
    objectName: String,
    runtimeMirror: Mirror,
    methodName: String,
    arguments: Any*
): Any = {
  val objectSymbol = runtimeMirror.staticModule(objectName)
  val objectModuleMirror = runtimeMirror.reflectModule(objectSymbol)
  val objectInstance = objectModuleMirror.instance
  val objectType = objectSymbol.typeSignature
  val methodSymbol = objectType.decl(TermName(methodName)).asMethod
  val objectInstanceMirror = runtimeMirror.reflect(objectInstance)
  val methodMirror = objectInstanceMirror.reflectMethod(methodSymbol)
  methodMirror(arguments: _*)
}

// A runtime mirror whose class loader also sees the classes compiled into `directory`.
def createRuntimeMirror(directory: AbstractFile, parentMirror: Mirror): Mirror = {
  val classLoader = new AbstractFileClassLoader(directory, parentMirror.classLoader)
  universe.runtimeMirror(classLoader)
}
See also:
Tensorflow in Scala reflection (a similar situation: a bug in the combination of reflective compilation, Scala-Java interop, and path-dependent types)
Dynamic compilation of multiple Scala classes at runtime
How to eval code that uses InterfaceStability annotation (that fails with "illegal cyclic reference involving class InterfaceStability")? (also "illegal cyclic reference" during reflective compilation)
Scala Presentation Compiler - Minimal Example
(*) Protocol buffer objects generated at runtime
Related
I want to parameterize whether I have a header, and also the separator, when I read a CSV with Spark. I've written this:
DataFrameReader dataFrameReader = spark.read();
dataFrameReader = "csv".equalsIgnoreCase(params.getReadFileType()) ?
dataFrameReader
.option("sep",params.getDelimiter())
.option("header",params.isHeader())
:dataFrameReader;
I'm new to Groovy and I can't get dataFrameReader.option correctly mocked.
DataFrameReader dfReaderLoader = Mock(DataFrameReader)
DataFrameReader dfReaderOptionString = Mock(DataFrameReader)
DataFrameReader dfReaderOptionBoolean = Mock(DataFrameReader)
SparkSession sparkSession = Mock(SparkSession)
sparkSession.read() >> dfReaderLoader
dfReaderLoader.option(_ as String, _ as String) >> dfReaderOptionString
dfReaderOptionString.option(_ as String, _ as Boolean) >> dfReaderOptionBoolean
And it gives me a null pointer exception.
java.lang.NullPointerException: Cannot invoke
"org.apache.spark.sql.DataFrameReader.option(String, boolean)" because
the return value of
"org.apache.spark.sql.DataFrameReader.option(String, String)" is null
I do not know what your problem is, but my guess is that you create the mocks but then do not inject them into your class under test. If you do inject them, both your own version and Leonard's suggested improved version with a default response work:
Class under test + helper class:
class UnderTest {
SparkSession spark
Parameters params
DataFrameReader produce() {
DataFrameReader dataFrameReader = spark.read()
dataFrameReader = "csv".equalsIgnoreCase(params.getReadFileType()) ?
dataFrameReader
.option("sep", params.getDelimiter())
.option("header", params.isHeader())
: dataFrameReader
}
}
class Parameters {
String readFileType
String delimiter
boolean header
}
Spock specification:
package de.scrum_master.stackoverflow.q74923254
import org.apache.spark.sql.DataFrameReader
import org.apache.spark.sql.SparkSession
import org.spockframework.mock.MockUtil
import spock.lang.Specification
class DataFrameReaderTest extends Specification {
def 'read #readFileType data'() {
given:
DataFrameReader dfReaderLoader = Mock(DataFrameReader)
DataFrameReader dfReaderOptionString = Mock(DataFrameReader)
DataFrameReader dfReaderOptionBoolean = Mock(DataFrameReader)
SparkSession sparkSession = Mock(SparkSession)
sparkSession.read() >> dfReaderLoader
dfReaderLoader.option(_ as String, _ as String) >> dfReaderOptionString
dfReaderOptionString.option(_ as String, _ as Boolean) >> dfReaderOptionBoolean
def underTest = new UnderTest(spark: sparkSession, params: parameters)
expect:
underTest.produce().toString().contains(returnedMockName)
where:
readFileType | parameters | returnedMockName
'CSV' | new Parameters(readFileType: readFileType, delimiter: ';', header: true) | 'dfReaderOptionBoolean'
'XLS' | new Parameters(readFileType: readFileType) | 'dfReaderLoader'
}
def 'read #readFileType data (improved)'() {
given:
SparkSession sparkSession = Mock() {
read() >> Mock(DataFrameReader) {
_ >> _
}
}
def parameters = new Parameters(readFileType: readFileType, delimiter: ';', header: true)
def underTest = new UnderTest(spark: sparkSession, params: parameters)
expect:
new MockUtil().isMock(underTest.produce())
where:
readFileType << ['CSV', 'XLS']
}
}
Try it in the Groovy Web Console.
The result should look similar to this in your IDE:
DataFrameReaderTest ✔
├─ read #readFileType data ✔
│ ├─ read CSV data ✔
│ └─ read XLS data ✔
└─ read #readFileType data (improved) ✔
├─ read CSV data (improved) ✔
└─ read XLS data (improved) ✔
If you don't really care about the intermediate invocations of a builder pattern, i.e. an object that returns itself, I'd suggest using a Stub, which will return itself if the method return type matches its type; alternatively, you can use the declaration _ >> _ to achieve the same for Mocks.
given:
ThingBuilder builder = Mock() {
_ >> _
}
when:
Thing thing = builder
.id("id-42")
.name("spock")
.weight(100)
.build()
then:
1 * builder.build() >> new Thing(id: 'id-1337') // <-- only assert the last call you actually care about
thing.id == 'id-1337'
Try it in the Groovy Web Console.
That being said, the error would probably go away if you just removed the as String cast on the second argument of option, or changed it to as Boolean, as the error suggests.
The error was in the params. I was not sending the delimiter or the header, so it gave an error.
I have code that looks like the following:
object ErrorTest {
case class APIResults(status:String, col_1:Long, col_2:Double, ...)
def funcA(rows:ArrayBuffer[Row])(implicit defaultFormats:DefaultFormats):ArrayBuffer[APIResults] = {
//call some API and get results and return APIResults
...
}
// MARK: load properties
val props = loadProperties()
private def loadProperties(): Properties = {
val configFile = new File("config.properties")
val reader = new FileReader(configFile)
val props = new Properties()
props.load(reader)
props
}
def main(args: Array[String]): Unit = {
val prop_a = props.getProperty("prop_a")
val session = Context.initialSparkSession();
import session.implicits._
val initialSet = ArrayBuffer.empty[Row]
val addToSet = (s: ArrayBuffer[Row], v: Row) => (s += v)
val mergePartitionSets = (p1: ArrayBuffer[Row], p2: ArrayBuffer[Row]) => (p1 ++= p2)
val sql1 =
s"""
select * from tbl_a where ...
"""
session.sql(sql1)
.rdd.map{row => {implicit val formats = DefaultFormats; (row.getLong(6), row)}}
.aggregateByKey(initialSet)(addToSet,mergePartitionSets)
.repartition(40)
.map{case (rowNumber,rows) => {implicit val formats = DefaultFormats; funcA(rows)}}
.flatMap(x => x)
.toDF()
.write.mode(SaveMode.Overwrite).saveAsTable("tbl_b")
}
}
When I run it via spark-submit, it throws the error Caused by: java.lang.NoClassDefFoundError: Could not initialize class staging_jobs.ErrorTest$. But if I move val props = loadProperties() to the first line of the main method, there is no error anymore. Could anyone give me an explanation of this phenomenon? Thanks a lot!
Caused by: java.lang.NoClassDefFoundError: Could not initialize class staging_jobs.ErrorTest$
at staging_jobs.ErrorTest$$anonfun$main$1.apply(ErrorTest.scala:208)
at staging_jobs.ErrorTest$$anonfun$main$1.apply(ErrorTest.scala:208)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1341)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
... 8 more
I've met the same problem as you. I defined a method convert outside the main method. When I used it with dataframe.rdd.map{x => convert(x)} in main, NoClassDefFoundError: Could not initialize class Test$ happened.
But when I used a function object convertor, with the same code as the convert method, inside the main method, no error happened.
I used Spark 2.1.0 and Scala 2.11; it seems like a bug in Spark?
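A minimal sketch of the difference being described (the names are illustrative, not the original job): closing over a method of the enclosing object drags a reference to the object itself into the task, so the object's initializer runs on the executors, while a function value is shipped on its own:

object Test {
  // rdd.map(x => convert(x)) captures Test$ itself, so Test$'s initializer
  // (including things like reading local config files) runs on the executors.
  def convert(x: Int): Int = x + 1

  // rdd.map(convertor) ships only this Function1 instance; the executors never
  // need to initialize Test$ to call it.
  val convertor: Int => Int = x => x + 1
}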
I guess the problem is that val props = loadProperties() defines a member of the outer object (the one enclosing main). This member is then initialized (run) on the executors, which do not have the same environment as the driver.
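Following that explanation, a minimal sketch of the fix (hypothetical names, trimmed to the relevant part) is to load the properties inside main, on the driver, and close over only the plain values the job needs:

import java.io.{File, FileReader}
import java.util.Properties

object ErrorTest {
  def main(args: Array[String]): Unit = {
    // Loaded on the driver, inside main, so initializing ErrorTest$ on an
    // executor no longer depends on config.properties being present there.
    val props = loadProperties()
    val propA: String = props.getProperty("prop_a") // close over this plain String, not props
    // ... build the Spark job here, referencing propA inside the closures
  }

  private def loadProperties(): Properties = {
    val reader = new FileReader(new File("config.properties"))
    try { val p = new Properties(); p.load(reader); p }
    finally reader.close()
  }
}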
I am trying to compile a sample Spark Scala file through sbt, and have built a Maven project in the Eclipse IDE.
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
object simpleSpark {
  def main(args: Array[String]) {
    val logFile = "C:\\spark-1.6.1-bin-hadoop2.6\\spark-1.6.1-bin-hadoop2.6\\README.md"
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local[2]").set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numHadoops = logData.filter(line => line.contains("Hadoop")).count()
    val numSparks = logData.filter(line => line.contains("Spark")).count()
    println("Lines with Hadoop: %s, Lines with Spark: %s".format(numHadoops, numSparks))
  }
}
The error says you have an illegal start of expression here: set("spark.executor.memory",). Are you sure you set spark.executor.memory correctly in your actual code?
If yes, can you show what you wrote in your .sbt file?
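In case it helps, a minimal build.sbt for a Spark 1.6.1 project usually looks something like the following (the project name and exact versions here are assumptions; adjust them to your setup):

name := "simple-spark"

version := "0.1"

scalaVersion := "2.10.6"

// Spark core for 1.6.1; mark it "provided" if you run via spark-submit
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1" % "provided"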
I've put together a proof of concept which aims to provide a skeleton SBT multi-module project that uses the DataNucleus JDO Enhancer with mixed Java and Scala sources.
The difficulty appears when I try to enhance persistence classes from SBT. Apparently, I'm not passing the correct classpath when calling Fork.java.fork(...) from SBT.
See also this question:
How can SBT generate metamodel classes from model classes using DataNucleus?
Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class org.datanucleus.util.Localiser
at org.datanucleus.metadata.MetaDataManagerImpl.loadPersistenceUnit(MetaDataManagerImpl.java:1104)
at org.datanucleus.enhancer.DataNucleusEnhancer.getFileMetadataForInput(DataNucleusEnhancer.java:768)
at org.datanucleus.enhancer.DataNucleusEnhancer.enhance(DataNucleusEnhancer.java:488)
at org.datanucleus.api.jdo.JDOEnhancer.enhance(JDOEnhancer.java:125)
at javax.jdo.Enhancer.run(Enhancer.java:196)
at javax.jdo.Enhancer.main(Enhancer.java:130)
[info] Compiling 2 Java sources to /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/klasses...
java.lang.IllegalStateException: errno = 1
at $54321831a5683ffa07b5$.runner(build.sbt:230)
at $54321831a5683ffa07b5$$anonfun$model$7.apply(build.sbt:259)
at $54321831a5683ffa07b5$$anonfun$model$7.apply(build.sbt:258)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:235)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
For the sake of completeness, below is the java command line generated by SBT; it can be executed by hand in a separate window, for example, and it just works fine.
$ java -cp /home/rgomes/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.6.jar:/home/rgomes/.ivy2/cache/com.google.code.gson/gson/jars/gson-2.3.1.jar:/home/rgomes/.ivy2/cache/javax.jdo/jdo-api/jars/jdo-api-3.0.jar:/home/rgomes/.ivy2/cache/javax.transaction/transaction-api/jars/transaction-api-1.1.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-core/jars/datanucleus-core-4.0.4.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-api-jdo/jars/datanucleus-api-jdo-4.0.4.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-jdo-query/jars/datanucleus-jdo-query-4.0.4.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-rdbms/jars/datanucleus-rdbms-4.0.4.jar:/home/rgomes/.ivy2/cache/com.h2database/h2/jars/h2-1.4.185.jar:/home/rgomes/.ivy2/cache/org.postgresql/postgresql/jars/postgresql-9.4-1200-jdbc41.jar:/home/rgomes/.ivy2/cache/com.github.dblock.waffle/waffle-jna/jars/waffle-jna-1.7.jar:/home/rgomes/.ivy2/cache/net.java.dev.jna/jna/jars/jna-4.1.0.jar:/home/rgomes/.ivy2/cache/net.java.dev.jna/jna-platform/jars/jna-platform-4.1.0.jar:/home/rgomes/.ivy2/cache/org.slf4j/slf4j-simple/jars/slf4j-simple-1.7.7.jar:/home/rgomes/.ivy2/cache/org.slf4j/slf4j-api/jars/slf4j-api-1.7.7.jar:/home/rgomes/workspace/poc-scala-datanucleus/model/src/main/resources:/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/klasses javax.jdo.Enhancer -v -pu persistence-h2 -d /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.ClassEnhancerImpl save
INFO: Writing class file "/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes/model/AbstractModel.class" with enhanced definition
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
INFO: ENHANCED (Persistable) : model.AbstractModel
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.ClassEnhancerImpl save
INFO: Writing class file "/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes/model/Identifier.class" with enhanced definition
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
INFO: ENHANCED (Persistable) : model.Identifier
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
INFO: DataNucleus Enhancer completed with success for 2 classes. Timings : input=112 ms, enhance=102 ms, total=214 ms. Consult the log for full details
Enhancer Processing -v.
Enhancer adding Persistence Unit persistence-h2.
Enhancer processing output directory /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes.
Enhancer found JDOEnhancer of class org.datanucleus.api.jdo.JDOEnhancer.
Enhancer property key:VendorName value:DataNucleus.
Enhancer property key:VersionNumber value:4.0.4.
Enhancer property key:API value:JDO.
Enhancer enhanced 2 classes.
Below you can see some debugging information which is passed to Fork.java.fork(...):
=============================================================
mainClass=javax.jdo.Enhancer
args=-v -pu persistence-h2 -d /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes
javaHome=None
cwd=/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes
runJVMOptions=
bootJars ---------------------------------------------
/home/rgomes/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.6.jar
/home/rgomes/.ivy2/cache/com.google.code.gson/gson/jars/gson-2.3.1.jar
/home/rgomes/.ivy2/cache/javax.jdo/jdo-api/jars/jdo-api-3.0.jar
/home/rgomes/.ivy2/cache/javax.transaction/transaction-api/jars/transaction-api-1.1.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-core/jars/datanucleus-core-4.0.4.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-api-jdo/jars/datanucleus-api-jdo-4.0.4.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-jdo-query/jars/datanucleus-jdo-query-4.0.4.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-rdbms/jars/datanucleus-rdbms-4.0.4.jar
/home/rgomes/.ivy2/cache/com.h2database/h2/jars/h2-1.4.185.jar
/home/rgomes/.ivy2/cache/org.postgresql/postgresql/jars/postgresql-9.4-1200-jdbc41.jar
/home/rgomes/.ivy2/cache/com.github.dblock.waffle/waffle-jna/jars/waffle-jna-1.7.jar
/home/rgomes/.ivy2/cache/net.java.dev.jna/jna/jars/jna-4.1.0.jar
/home/rgomes/.ivy2/cache/net.java.dev.jna/jna-platform/jars/jna-platform-4.1.0.jar
/home/rgomes/.ivy2/cache/org.slf4j/slf4j-simple/jars/slf4j-simple-1.7.7.jar
/home/rgomes/.ivy2/cache/org.slf4j/slf4j-api/jars/slf4j-api-1.7.7.jar
/home/rgomes/workspace/poc-scala-datanucleus/model/src/main/resources
/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/klasses
envVars ----------------------------------------------
=============================================================
The project is available on GitHub for your convenience at
https://github.com/frgomes/poc-scala-datanucleus
Just download it and type
./sbt compile
Any help is immensely appreciated. Thanks
You can use either java.lang.ProcessBuilder or sbt.Fork.
See below a generic javaRunner you can add to your build.sbt, which employs java.lang.ProcessBuilder.
See also a generic sbtRunner you can add to your build.sbt, which employs sbt.Fork. Thanks to @dwijnand for providing insightful information for making sbtRunner work as expected.
def javaRunner(mainClass: String,
args: Seq[String],
classpath: Seq[File],
cwd: File,
javaHome: Option[File] = None,
runJVMOptions: Seq[String] = Nil,
envVars: Map[String, String] = Map.empty,
connectInput: Boolean = false,
outputStrategy: Option[OutputStrategy] = Some(StdoutOutput)): Seq[File] = {
val java_ : String = javaHome.fold("") { p => p.absolutePath + "/bin/" } + "java"
val jvm_ : Seq[String] = runJVMOptions.map(p => p.toString)
val cp_ : Seq[String] = classpath.map(p => p.absolutePath)
val env_ = envVars.map({ case (k,v) => s"${k}=${v}" })
val xcmd_ : Seq[String] = Seq(java_) ++ jvm_ ++ Seq("-cp", cp_.mkString(java.io.File.pathSeparator), mainClass) ++ args
println("=============================================================")
println(xcmd_.mkString(" "))
println("=============================================================")
println("")
IO.createDirectory(cwd)
import scala.collection.JavaConverters._
val cmd = xcmd_.asJava
val pb = new java.lang.ProcessBuilder(cmd)
pb.directory(cwd)
pb.inheritIO
val process = pb.start()
def cancel() = {
println("Run canceled.")
process.destroy()
1
}
val errno = try process.waitFor catch { case e: InterruptedException => cancel() }
if(errno==0) {
if (args.contains("-v")) cwd.list.foreach(f => println(f))
cwd.listFiles
} else {
throw new IllegalStateException(s"errno = ${errno}")
}
}
def sbtRunner(mainClass: String,
args: Seq[String],
classpath: Seq[File],
cwd: File,
javaHome: Option[File] = None,
runJVMOptions: Seq[String] = Nil,
envVars: Map[String, String] = Map.empty,
connectInput: Boolean = false,
outputStrategy: Option[OutputStrategy] = Some(StdoutOutput)): Seq[File] = {
val args_ = args.map(p => p.toString)
val java_ = javaHome.fold("None") { p => p.absolutePath }
val cp_ = classpath.map(p => p.absolutePath)
val jvm_ = runJVMOptions.map(p => p.toString) ++ Seq("-cp", cp_.mkString(java.io.File.pathSeparator))
val env_ = envVars.map({ case (k,v) => s"${k}=${v}" })
def dump: String =
s"""
|mainClass=${mainClass}
|args=${args_.mkString(" ")}
|javaHome=${java_}
|cwd=${cwd.absolutePath}
|runJVMOptions=${jvm_.mkString(" ")}
|classpath --------------------------------------------
|${cp_.mkString("\n")}
|envVars ----------------------------------------------
|${env_.mkString("\n")}
""".stripMargin
def cmd: String =
s"""java ${jvm_.mkString(" ")} ${mainClass} ${args_.mkString(" ")}"""
println("=============================================================")
println(dump)
println("=============================================================")
println(cmd)
println("=============================================================")
println("")
IO.createDirectory(cwd)
val options =
ForkOptions(
javaHome = javaHome,
outputStrategy = outputStrategy,
bootJars = Seq.empty,
workingDirectory = Option(cwd),
runJVMOptions = jvm_,
connectInput = connectInput,
envVars = envVars)
val process = new Fork("java", Option(mainClass)).fork(options, args)
def cancel() = {
println("Run canceled.")
process.destroy()
1
}
val errno = try process.exitValue() catch { case e: InterruptedException => cancel() }
if(errno==0) {
if (args.contains("-v")) cwd.list.foreach(f => println(f))
cwd.listFiles
} else {
throw new IllegalStateException(s"errno = ${errno}")
}
}
Then you need to wire the DataNucleus Enhancer into your build process. This is done via the manipulateBytecode sub-task, as demonstrated below:
lazy val model =
project.in(file("model"))
// .settings(publishSettings:_*)
.settings(librarySettings:_*)
.settings(paranoidOptions:_*)
.settings(otestFramework: _*)
.settings(deps_tagging:_*)
//-- .settings(deps_stream:_*)
.settings(deps_database:_*)
.settings(
Seq(
// This trick requires SBT 0.13.8
manipulateBytecode in Compile := {
val previous = (manipulateBytecode in Compile).value
sbtRunner( // javaRunner also works!
mainClass = "javax.jdo.Enhancer",
args =
Seq(
"-v",
"-pu", "persistence-h2",
"-d", (classDirectory in Compile).value.absolutePath),
classpath =
(managedClasspath in Compile).value.files ++
(unmanagedResourceDirectories in Compile).value :+
(classDirectory in Compile).value,
cwd = (classDirectory in Compile).value,
javaHome = javaHome.value,
envVars = (envVars in Compile).value
)
previous
}
):_*)
.dependsOn(util)
For a complete example, including a few JDO annotated persistence classes and some rudimentary test cases, please have a look at
http://github.com/frgomes/poc-scala-datanucleus
I think the issue is that you're passing your dependency jars as boot jars, not as the classpath.
From your poc project, perhaps something like:
val jvm_ = runJVMOptions.map(p => p.toString) ++
Seq("-cp", cp_ mkString java.io.File.pathSeparator)
Now I'm writing a sample to learn Scala Slick, using some GitHub projects and Stack Overflow Q&As as references. Below is my sample code:
import scala.slick.driver.PostgresDriver.simple._
import Database.threadLocalSession
object TestApp extends App{
case class MyTable(id: Option[Int], foo: String, bar: String)
object MyTables extends Table[MyTable]("mytable") {
def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
def foo = column[String]("foo", O.NotNull)
def bar = column[String]("bar", O.NotNull)
def * = id.? ~ foo ~ bar <> (MyTable, MyTable.unapply _)
def forInsert = foo ~ bar <>
({ (f, l) => MyTable (None, f, l) }, { ep:MyTable => Some((ep.foo, ep.bar)) })
val findByID = createFinderBy(_.id)
}
implicit val session = Database.forURL("jdbc:postgresql://localhost:5432/myserver",
driver="org.postgresql.Driver",
user="myadmin",
password="myadmin")
session withTransaction {
MyTables.ddl.create
(MyTables.foo ~ MyTables.bar).insert("Homer", "Simpson")
MyTables.forInsert.insertAll(
MyTable(None, "Marge", "Bouvier"),
MyTable(None, "Carl", "Carlson"),
MyTable(None, "Lenny", "Leonard")
)
}
}
EXCEPTION:
Exception in thread "main" java.lang.NoClassDefFoundError: scala/Right
at scala.slick.driver.BasicProfile$class.createQueryTemplate(BasicProfile.scala:12)
at scala.slick.driver.PostgresDriver$.createQueryTemplate(PostgresDriver.scala:69)
at scala.slick.ql.Parameters.flatMap(Parameters.scala:9)
at scala.slick.driver.BasicTableComponent$Table.createFinderBy(BasicTableComponent.scala:30)
at TestApp$MyTables$.(TestApp.scala:16)
Create table in PostgreSQL:
CREATE TABLE mytable
(
  id serial primary key,
  foo VARCHAR(40) not null,
  bar VARCHAR(40) not null
);
I'm using these tools and libraries:
Scala IDE - 2.10
Java version - 1.7.0_11
slick_2.10.0-M4-0.10.0-M2.jar
postgresql-9.2-1003-jdbc4.jar
Database - PostgreSQL 8.3
What's wrong here? Thanks in advance.