I'm using Java + Groovy scripts. Is it possible to change the class names Groovy generates (Script1.groovy, Script777.groovy, etc.)? It's hard to find the correct script in case of an exception:
Caused by: org.json.JSONException: JSONObject["value14"] not found.
at org.json.JSONObject.get(JSONObject.java:498)
at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
at org.codehaus.groovy.runtime.metaclass.MethodMetaProperty$GetMethodMetaProperty.getProperty(MethodMetaProperty.java:59)
at org.codehaus.groovy.runtime.callsite.GetEffectivePojoPropertySite.getProperty(GetEffectivePojoPropertySite.java:61)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:227)
at Script4.run(Script4.groovy:23)
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:346)
... 13 more
Yes, it's possible to set a custom name for a Groovy script. Use groovy.lang.GroovyClassLoader and pass the desired name to parseClass. For example:
GroovyClassLoader groovyClassLoader = new GroovyClassLoader();
// The second argument becomes the generated class/file name in stack traces.
Class<?> parsedClass = groovyClassLoader.parseClass(scriptText, name);
Script script = (Script) parsedClass.newInstance();
script.run();
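Alternatively, if you run scripts through GroovyShell, you can pass the name directly when parsing. A minimal sketch; the script text and the name MyJsonMapping.groovy are placeholders:

import groovy.lang.GroovyShell;
import groovy.lang.Script;

public class NamedScriptRunner {
    public static void main(String[] args) {
        String scriptText = "println 'hello'"; // placeholder script source
        GroovyShell shell = new GroovyShell();
        // The second argument is the file name reported in stack traces,
        // replacing the generated ScriptNNN.groovy.
        Script script = shell.parse(scriptText, "MyJsonMapping.groovy");
        script.run();
    }
}

Since your stack trace goes through the JSR-223 GroovyScriptEngineImpl, note that some Groovy versions also honor the standard javax.script.ScriptEngine.FILENAME attribute on the script context; whether yours does depends on the version, so verify against your GroovyScriptEngineImpl source.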
I am using Spotify's Scio library for writing Apache Beam pipelines in Scala. I want to search recursively for files under a directory on a filesystem, which can be HDFS, Alluxio, or GCS. For example, *.jar should find all matching files under the provided directory and its sub-directories.
The Apache Beam SDK provides the org.apache.beam.sdk.io.FileIO class for this purpose, with which I can find files at one directory level using pipeline.apply(FileIO.match().filepattern(filesPattern)).
How can I make it recursive, so it matches all files under the provided pattern?
Currently I am trying another approach: I create a resourceId from the provided pattern, get the pattern's current directory, and then try to resolve all sub-directories of the current directory using the resourceId.resolve() method. But it throws an exception:
val currentDir = FileSystems.matchNewResource(filesPattern, false).getCurrentDirectory
val childDir = currentDir.resolve("{#literal *}", StandardResolveOptions.RESOLVE_DIRECTORY)
For currentDir.resolve I get the following exception:
------------------------------------------------------------
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
Caused by: java.lang.IllegalArgumentException: Illegal character in path at index 0: {#literal *}/
at java.net.URI.create(URI.java:852)
at java.net.URI.resolve(URI.java:1036)
at org.apache.beam.sdk.io.hdfs.HadoopResourceId.resolve(HadoopResourceId.java:46)
at com.sparkcognition.foundation.ingest.jobs.copyjob.FileOperations$.findFiles(BinaryFilesSink.scala:110)
at com.sparkcognition.foundation.ingest.jobs.copyjob.BinaryFilesSink$.main(BinaryFilesSink.scala:39)
at com.sparkcognition.foundation.ingest.jobs.copyjob.BinaryFilesSink.main(BinaryFilesSink.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
... 12 more
Caused by: java.net.URISyntaxException: Illegal character in path at index 0: {#literal *}/
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parseHierarchical(URI.java:3105)
at java.net.URI$Parser.parse(URI.java:3063)
at java.net.URI.<init>(URI.java:588)
at java.net.URI.create(URI.java:850)
... 22 more
What is the right way to search for files recursively using Apache Beam?
References:
https://beam.apache.org/releases/javadoc/2.11.0/index.html?org/apache/beam/sdk/io/fs/ResourceId.html
It looks like you copied code from a faulty Javadoc: some old versions of the example code were published with {@literal} markup leaking around the asterisks.
To find all files inside currentDir:
val childDir = currentDir.resolve("**", StandardResolveOptions.RESOLVE_FILE)
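For reference, here is a minimal Java sketch of the same idea against the Beam FileSystems API (the Scio call is analogous). The hdfs:// path is a placeholder, and whether "**" descends into sub-directories ultimately depends on the underlying filesystem's glob support:

import java.io.IOException;

import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.io.fs.MatchResult;
import org.apache.beam.sdk.io.fs.ResolveOptions.StandardResolveOptions;
import org.apache.beam.sdk.io.fs.ResourceId;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class RecursiveFileMatch {
    public static void main(String[] args) throws IOException {
        // Register the available filesystems (HDFS, GCS, ...) with Beam.
        FileSystems.setDefaultPipelineOptions(PipelineOptionsFactory.create());

        ResourceId dir = FileSystems
                .matchNewResource("hdfs://namenode/data/", true /* isDirectory */)
                .getCurrentDirectory();
        // RESOLVE_FILE (not RESOLVE_DIRECTORY), so the glob matches files.
        ResourceId pattern = dir.resolve("**", StandardResolveOptions.RESOLVE_FILE);

        MatchResult result = FileSystems.match(pattern.toString());
        for (MatchResult.Metadata metadata : result.metadata()) {
            System.out.println(metadata.resourceId());
        }
    }
}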
For reference, the quickstart guide for installing SampleClean is here:
http://sampleclean.org/release.html
I have followed the steps provided in the quick start and have the correct versions:
$ javac -version
javac 1.7.0_95
$ scala
Welcome to Scala version 2.10.5 (OpenJDK 64-Bit Server VM, Java 1.7.0_95)
I wasn't able to install spark-1.2.2, so I got spark-1.6.1 instead.
After building spark, updating the hive conf file and running the scala commands, I get a java error:
~/sampleclean/spark-1.6.1$ ./bin/spark-shell --jars sampleclean-v0.1.jar
scala> import sampleclean.api.SampleCleanContext
import sampleclean.api.SampleCleanContext
scala> val scc = new SampleCleanContext(sc)
scc: sampleclean.api.SampleCleanContext = sampleclean.api.SampleCleanContext@2d15f9f9
scala> scc.hql("CREATE TABLE restaurant(id String, entity String, name String, category String, city String) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'")
java.lang.NoSuchMethodError: sampleclean.api.SampleCleanContext.hql(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame;
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:31)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:40)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
at $iwC$$iwC$$iwC.<init>(<console>:44)
at $iwC$$iwC.<init>(<console>:46)
at $iwC.<init>(<console>:48)
at <init>(<console>:50)
at .<init>(<console>:54)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I think I've followed all of the steps correctly. In addition, I can see the hql function defined in sampleclean/api/SampleCleanContext.scala.
Can somebody help resolve this error?
Thanks!
I have a java project containing a class MyFileLoader (among others) that successfully loads a file from resources using:
public static List<String> loadFile() throws IOException {
    Path path = new File(System.class.getResource("/my/path/model.bin").getFile()).toPath();
    return Files.readAllLines(path, UTF_8);
}
and then does some processing.
After adding this project/jar as a dependency in my Scala project, I tried to access MyFileLoader.loadFile. Unfortunately, this throws a java.lang.NullPointerException, as the resource isn't found.
To debug, I ran this command in spark-shell, showing that this resource indeed exists:
scala> getClass.getResource("/my/path/model.bin").getFile
res32: String = file:/some-local-path/my-jar-with-dependencies.jar!/my/path/model.bin
I then tried:
scala> Files.readAllLines(new File(getClass.getResource("/my/path/model.bin").getPath).toPath)
java.nio.file.NoSuchFileException: file:/some-local-path/my-jar-with-dependencies.jar!/my/path/model.bin
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at java.nio.file.Files.newBufferedReader(Files.java:2784)
at java.nio.file.Files.readAllLines(Files.java:3202)
at java.nio.file.Files.readAllLines(Files.java:3242)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:20)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:27)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:29)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:31)
at $iwC$$iwC$$iwC.<init>(<console>:33)
at $iwC$$iwC.<init>(<console>:35)
at $iwC.<init>(<console>:37)
at <init>(<console>:39)
at .<init>(<console>:43)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:656)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:664)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:669)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:996)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Why am I not able to load the resource in these ways?
Since your file is now packaged inside a jar, you will need to use Class.getResourceAsStream(). You are trying to read the URL as a regular file, which isn't supported; it likely worked before because the resource wasn't packaged inside the jar and could be loaded as a regular file.
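A minimal sketch of the stream-based approach, reusing the MyFileLoader class and resource path from the question (assumes Java 8 for BufferedReader.lines()):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.stream.Collectors;

public class MyFileLoader {
    public static List<String> loadFile() throws IOException {
        // getResourceAsStream works both from the filesystem and from
        // inside a jar, unlike converting the resource URL to a Path.
        // (Null check for a missing resource is omitted for brevity.)
        InputStream in = MyFileLoader.class.getResourceAsStream("/my/path/model.bin");
        try (BufferedReader reader =
                new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.toList());
        }
    }
}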
I've been stuck on this for several hours.
My code is simple: I just want to add several nodes using a labeled index.
Without the index code (not calling createDeferredSchemaIndex), it works without a problem.
My exception is:
groovy.lang.MissingMethodException: No signature of method: org.neo4j.kernel.impl.coreapi.schema.IndexDefinitionImpl.createNode() is applicable for argument types: (java.util.LinkedHashMap, [Lorg.neo4j.graphdb.Label;) values: [[tag:START], [MacMorphoTag]]
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:56)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:46)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
at NextGenBatchInserterWithId$_run_closure1.doCall(NextGenBatchInserterWithId.groovy:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:278)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1016)
at groovy.lang.Closure.call(Closure.java:423)
at groovy.lang.Closure.call(Closure.java:439)
at org.codehaus.groovy.runtime.DefaultGroovyMethods.callClosureForLine(DefaultGroovyMethods.java:4281)
at org.codehaus.groovy.runtime.IOGroovyMethods.eachLine(IOGroovyMethods.java:466)
at org.codehaus.groovy.runtime.ResourceGroovyMethods.eachLine(ResourceGroovyMethods.java:288)
at org.codehaus.groovy.runtime.ResourceGroovyMethods.eachLine(ResourceGroovyMethods.java:253)
at org.codehaus.groovy.runtime.dgm$797.invoke(Unknown Source)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:271)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
at NextGenBatchInserterWithId.run(NextGenBatchInserterWithId.groovy:71)
at groovy.lang.GroovyShell.runScriptOrMainOrTestOrRunnable(GroovyShell.java:258)
at groovy.lang.GroovyShell.run(GroovyShell.java:502)
at groovy.lang.GroovyShell.run(GroovyShell.java:491)
at groovy.ui.GroovyMain.processOnce(GroovyMain.java:650)
at groovy.ui.GroovyMain.run(GroovyMain.java:381)
at groovy.ui.GroovyMain.process(GroovyMain.java:367)
at groovy.ui.GroovyMain.processArgs(GroovyMain.java:126)
at groovy.ui.GroovyMain.main(GroovyMain.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.codehaus.groovy.tools.GroovyStarter.rootLoader(GroovyStarter.java:106)
at org.codehaus.groovy.tools.GroovyStarter.main(GroovyStarter.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caught: groovy.lang.MissingMethodException: No signature of method: org.neo4j.kernel.impl.coreapi.schema.IndexDefinitionImpl.shutdown() is applicable for argument types: () values: []
groovy.lang.MissingMethodException: No signature of method: org.neo4j.kernel.impl.coreapi.schema.IndexDefinitionImpl.shutdown() is applicable for argument types: () values: []
at NextGenBatchInserterWithId.run(NextGenBatchInserterWithId.groovy:182)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
I know the method signature doesn't match my parameters, but according to the Neo4j code here: https://github.com/neo4j/neo4j/blob/master/community/kernel/src/main/java/org/neo4j/unsafe/batchinsert/BatchInserterImpl.java
On line 578:
public long createNode( Map properties, Label... labels )
That's exactly what I pass from my Groovy code.
batch = org.neo4j.unsafe.batchinsert.BatchInserters.inserter(store,config)
batch = batch.createDeferredSchemaIndex( macMorphoTagLabel ).on( 'tag' ).create();
...
Label macMorphoTagLabel = DynamicLabel.label( 'MacMorphoTag' );
def Label[] macMorphoTagArray = [macMorphoTagLabel]
if (!tags[previousWordTag.macMorphoTag]) {
tags[previousWordTag.macMorphoTag] = batch.createNode([tag: previousWordTag.macMorphoTag], macMorphoTagArray)
}
What am I missing here? I'm using Groovy 2.3.7 + Neo4j 2.1.5.
My full code: http://pastebin.com/Z98tMDYi (the problem happens on line 110).
My JSON file: http://pastebin.com/xSxZd4ke (as a UTF-8 file).
On line 66 you're reassigning the batch variable: create() returns an IndexDefinition, not the BatchInserter, which is why the later createNode and shutdown calls fail with MissingMethodException on IndexDefinitionImpl. Replace
batch = batch.createDeferredSchemaIndex( macMorphoTagLabel ).on( 'tag' ).create();
with
batch.createDeferredSchemaIndex( macMorphoTagLabel ).on( 'tag' ).create();
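For context, here is a minimal Java sketch of the intended flow against the Neo4j 2.x batch-inserter API; the store directory and property values are placeholders. The key point is that create() returns an IndexDefinition, so its result must not overwrite the inserter variable:

import java.util.Collections;
import java.util.Map;

import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.Label;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        BatchInserter batch = BatchInserters.inserter("target/graph.db");
        try {
            Label macMorphoTag = DynamicLabel.label("MacMorphoTag");
            // Schedule the deferred index; do NOT assign the returned
            // IndexDefinition back to the batch variable.
            batch.createDeferredSchemaIndex(macMorphoTag).on("tag").create();

            Map<String, Object> props =
                    Collections.<String, Object>singletonMap("tag", "START");
            long nodeId = batch.createNode(props, macMorphoTag);
            System.out.println("created node " + nodeId);
        } finally {
            // Now this correctly calls BatchInserter.shutdown().
            batch.shutdown();
        }
    }
}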
I am using the Groovy ConfigSlurper to load a large Groovy config file (741 KB) from a Groovy script, and I consistently receive a RuntimeException when it tries to compile it.
Groovy 2.1.1, Java 1.6 (Apple/MacOSX)
I call it like so:
conf = new ConfigSlurper().parse(new File('conf.groovy').toURL())
and I always get the exception below. Is there a known limitation on the size of file that ConfigSlurper can compile?
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during class generation: Class file too large!
java.lang.RuntimeException: Class file too large!
at org.objectweb.asm.ClassWriter.toByteArray(Unknown Source)
at org.codehaus.groovy.control.CompilationUnit$15.call(CompilationUnit.java:797)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1036)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:573)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:551)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:528)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:279)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:258)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:244)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:202)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:212)
at groovy.lang.GroovyClassLoader$parseClass.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at groovy.util.ConfigSlurper.parse(ConfigSlurper.groovy:146)
at groovy.util.ConfigSlurper$parse.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at write_users.run(write_users.groovy:19)
at groovy.lang.GroovyShell.runScriptOrMainOrTestOrRunnable(GroovyShell.java:257)
at groovy.lang.GroovyShell.run(GroovyShell.java:220)
at groovy.lang.GroovyShell.run(GroovyShell.java:150)
at groovy.ui.GroovyMain.processOnce(GroovyMain.java:588)
at groovy.ui.GroovyMain.run(GroovyMain.java:375)
at groovy.ui.GroovyMain.process(GroovyMain.java:361)
at groovy.ui.GroovyMain.processArgs(GroovyMain.java:120)
at groovy.ui.GroovyMain.main(GroovyMain.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.groovy.tools.GroovyStarter.rootLoader(GroovyStarter.java:106)
at org.codehaus.groovy.tools.GroovyStarter.main(GroovyStarter.java:128)
1 error
There is a known limitation, but it is arguably not in Groovy itself: it comes from the ASM library and ultimately from the JVM class-file format.
As Stuart Halloway has mentioned in talks, it is often insightful to know what is happening one layer below your level of abstraction.
For example, the ASM ClassWriter source shows that this code:
public byte[] toByteArray() {
if (index > Short.MAX_VALUE) {
throw new RuntimeException("Class file too large!");
}
... is likely the exception shown here:
java.lang.RuntimeException: Class file too large!
at org.objectweb.asm.ClassWriter.toByteArray(Unknown Source)
Why does the ASM method throw this exception? One write-up on the JVM's limits states:
It turns out that the magic number for the "code too large" error is
65535 bytes (compiled byte code, not source code).
It is probable that your 741 KB config file compiles to a generated class (or a single generated method) that exceeds these JVM limits. Splitting the configuration across several smaller files avoids the problem.
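Given that limit, a common workaround is to split the configuration into several smaller files and merge the resulting ConfigObjects. A minimal sketch, with placeholder file names:

import java.io.File;

import groovy.util.ConfigObject;
import groovy.util.ConfigSlurper;

public class LoadSplitConfig {
    public static void main(String[] args) throws Exception {
        ConfigSlurper slurper = new ConfigSlurper();
        // Each part compiles to its own (smaller) class, staying under
        // the JVM's per-class limits; the parts are then merged.
        ConfigObject conf = slurper.parse(new File("conf-part1.groovy").toURI().toURL());
        conf.merge(slurper.parse(new File("conf-part2.groovy").toURI().toURL()));
        System.out.println(conf.keySet());
    }
}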