Groovy project won't build in GGTS

I have some Groovy code which runs fine from the command line, but when I try to run it in SpringSource's GGTS it fails:
Caught: java.lang.NoClassDefFoundError: org/apache/commons/cli/ParseException
java.lang.NoClassDefFoundError: org/apache/commons/cli/ParseException
at empyrean.Empyrean.run(Empyrean.groovy:20)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.cli.ParseException
... 1 more
I know this is because I don't have the paths (or something similar) set correctly in GGTS, but I cannot work out how to fix it (I used to use STS without a problem; this is the first time I have tried GGTS).
The failing code is this (the first line here is line 20 in the file):
def empyreanCli = new CliBuilder(usage: 'empyrean [-d] <empyrean script>')
empyreanCli.d(longOpt: 'debug',
    'output debug data as we go')
empyreanCli.f(longOpt: 'file',
    'Empyrean script to run')
empyreanCli.u(longOpt: 'usage',
    'show this usage message')
def empyreanParse = empyreanCli.parse(args)
if (empyreanParse.u || args.size() == 0)
    empyreanCli.usage()
else {
    def engine = new EmpyreanEngine()
    if (empyreanParse.d)
        engine.debug = true
    if (empyreanParse.f)
        engine.process(empyreanParse.f)
    else
        engine.process(args[args.size() - 1])
}
Which, as I say, runs fine from the command line...

To get the project to build, I had to explicitly add the necessary .jar files to the build path:
project -> properties -> java build path -> libraries -> add JARs
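For context, Groovy's CliBuilder is a thin wrapper over Apache Commons CLI, which is where the missing org.apache.commons.cli.ParseException comes from. As a sanity check that the jar really is on the GGTS build path, here is a minimal Java probe using the same classes (the class name and options are illustrative, not part of the original project):

import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class CliProbe {
    public static void main(String[] args) throws ParseException {
        // If this compiles and runs inside GGTS, commons-cli is on the
        // build path; CliBuilder uses these same classes under the hood.
        Options opts = new Options();
        opts.addOption("d", "debug", false, "output debug data as we go");
        CommandLine cmd = new GnuParser().parse(opts, args);
        System.out.println("debug flag set: " + cmd.hasOption("d"));
    }
}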

Related

How to fix org.eclipse.jgit.api.errors.JGitInternalException: Entry not found by path Exception while doing commit using Egit

I am trying to integrate Git into my existing RCP application. For that I have imported the egit.core and egit.io plugins as source projects into my application, and when I try to commit a file using EGit's commit() from the CommitOperation class I get the exception below:
Caused by: org.eclipse.jgit.api.errors.JGitInternalException: Entry not found by path: D:\Test\file.txt
at org.eclipse.jgit.api.CommitCommand.createTemporaryIndex(CommitCommand.java:414)
at org.eclipse.jgit.api.CommitCommand.call(CommitCommand.java:194)
at org.eclipse.egit.core.op.CommitOperation.commit(CommitOperation.java:255)
... 39 more
When I debugged through the code to find out where the exception happens, it is in CommitCommand.class at the following point:
// there must be no unprocessed paths left at this point; otherwise an
// untracked or unknown path has been specified
for (int i = 0; i < onlyProcessed.length; i++)
    if (!onlyProcessed[i])
        throw new JGitInternalException(MessageFormat.format(
            JGitText.get().entryNotFoundByPath, only.get(i)));
I have no idea what "unprocessed path" is supposed to mean here. I am able to access the path. Can anyone help me with how to proceed further?
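For what it's worth, the JGit comment quoted above says this exception is thrown when the path handed to the commit is untracked, i.e. not in the index yet. Here is a minimal JGit sketch of a path-restricted commit that stages the file first (the repository location and file name are taken from the stack trace; everything else is illustrative):

import java.io.File;
import org.eclipse.jgit.api.Git;

public class CommitOnlyExample {
    public static void main(String[] args) throws Exception {
        // Open the repository whose working tree contains the file.
        try (Git git = Git.open(new File("D:/Test"))) {
            // Stage the file first: setOnly() rejects paths that are
            // neither in the index nor in HEAD ("Entry not found by path").
            git.add().addFilepattern("file.txt").call();
            git.commit()
                .setOnly("file.txt")            // commit just this path
                .setMessage("commit file.txt")
                .call();
        }
    }
}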

RuntimeException: Could not extract key occurs only on runtime environment

I am running Flink locally on my machine, and I am getting the exception below when reading from a Kafka topic. When running from the IDE (IntelliJ) it runs perfectly; however, when I deploy my jar to the Flink runtime environment (locally) using
/bin/flink run ~MyApp-1.0-SNAPSHOT.jar
My class looks like this:
case class Foo(id: String, value: String, timestamp: Long, counter: Int)
I am getting this exception:
java.lang.RuntimeException: Could not extract key from Foo(some-uuid,some-text,1540348398,1)
at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:110)
at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
at org.apache.flink.streaming.api.operators.StreamFilter.processElement(StreamFilter.java:40)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
at org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:89)
at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:154)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:738)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:56)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Could not extract key from Foo("some-uuid","text",1540348398,1)
at org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.selectChannels(KeyGroupStreamPartitioner.java:61)
at org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.selectChannels(KeyGroupStreamPartitioner.java:32)
at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:106)
at org.apache.flink.streaming.runtime.io.StreamRecordWriter.emit(StreamRecordWriter.java:81)
at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
... 22 more
Caused by: java.lang.NullPointerException
at com.bluevoyant.StreamingJob$$anonfun$3.apply(StreamingJob.scala:41)
at com.bluevoyant.StreamingJob$$anonfun$3.apply(StreamingJob.scala:40)
at org.apache.flink.streaming.api.scala.DataStream$$anon$2.getKey(DataStream.scala:411)
at org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.selectChannels(KeyGroupStreamPartitioner.java:59)
... 26 more
My key partitioning is simple (partitionFactor = some number):
env.addSource(kafkaConsumer)
  .filter(_.id != null)
  .keyBy { r =>
    val h = fastHash(r.id) % partitionFactor
    math.abs(h)
  }
  .map(...)
Again, this happens only at runtime, not when I run it from IntelliJ. This is so frustrating; any advice?

BeanShell command line interpreter features

I'm trying to test how BeanShell's command line interpreter processes basic Java commands and syntax on my machine, and to see if I can customise its behavior in any way. I've installed version 2.0b4 on my machine running OS X 10.10.1 (the JAR file is in /Library/Java/Extensions as per the instructions).
It's the closest thing to what I've been looking for, an interactive Java interpreter, but it doesn't have some standard features which a good interpreter should have.
I'd like to be able to use the Up arrow key to reuse a previous command, but at the moment it doesn't recognise it; it just shows a control sequence. Is there a way to customise this for BeanShell?
Is there a way to get BeanShell to print out the value of a variable if I've created it beforehand, just by naming it, like
String s = new String( "Hello World!" );
s;
Hello World!
This is possible in Python, for example.
According to the documentation on importing Java classes, which(<java class>); should return the classpath location of the specified Java class. But which( java.lang.String ); does not work for me; I get a NullPointerException:
bsh % which(java.lang.String);
Start ClassPath Mapping
Mapping: Directory /Users/srm
// Error: // Uncaught Exception: Method Invocation cp.getClassSource : at Line: 42 : in file: /bsh/commands/which.bsh : cp .getClassSource ( className )
Called from method: which : at Line: 8 : in file: : which ( java .lang .String )
Target exception: java.lang.NullPointerException
java.lang.NullPointerException
Any pointers or help would be appreciated.
Run BeanShell with jline.
Download the jline jar from http://jline.sourceforge.net/index.html and then you can do:
java -cp jline-1.0.jar:bsh-2.0b4.jar jline.ConsoleRunner bsh.Interpreter
Line editing capability will be provided by jline. I found this hint here.
There are issues running with jline2. First, you'll get:
$ java -cp jline-2.12.jar:bsh-2.0b4.jar jline.ConsoleRunner bsh.Interpreter
Exception in thread "main" java.lang.NoClassDefFoundError: jline/ConsoleRunner
That is due to this issue, which has been fixed. But then, using the new class, you still get:
$ java -cp jline-2.12.jar:bsh-2.0b4.jar jline.console.internal.ConsoleRunner bsh.Interpreter
Exception in thread "main" java.lang.IllegalArgumentException: wrong number of arguments
due to this issue, which is not fixed yet.
Use the show() command, which will turn on echoing of values:
bsh % show();
bsh % String s = new String("Hello World");
bsh % s;
<Hello World>
bsh %
It is mentioned in the Useful BeanShell Commands section of the documentation.
Doesn't work for me either
It doesn't fail in my case, but it doesn't find the class either.
bsh % which(java.lang.String);
Start ClassPath Mapping
Mapping: Archive: file:/Users/me/beanshell/jline-1.0.jar
Mapping: Archive: file:/Users/me/beanshell/bsh-2.0b4.jar
Mapping: Archive: file:/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Classes/classes.jar
End ClassPath Mapping
null
bsh %

Hudson failing to start

I had Hudson running successfully on Windows Server and needed to restart the Hudson service. After the restart I am getting the error below. Any ideas, or has anybody experienced this issue?
org.jvnet.hudson.reactor.ReactorException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException
at org.jvnet.hudson.reactor.Reactor.execute(Reactor.java:246)
at hudson.model.Hudson.executeReactor(Hudson.java:719)
at hudson.model.Hudson.<init>(Hudson.java:616)
at org.eclipse.hudson.init.InitialRunnable.run(InitialRunnable.java:51)
at java.lang.Thread.run(Thread.java:619)
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2263)
at com.google.common.cache.LocalCache.get(LocalCache.java:4000)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4004)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
at hudson.model.TopLevelItemsCache.get(TopLevelItemsCache.java:78)
at hudson.model.LazyTopLevelItem.item(LazyTopLevelItem.java:144)
at hudson.model.LazyTopLevelItem.hasPermission(LazyTopLevelItem.java:271)
at hudson.model.Hudson.getItems(Hudson.java:1303)
at hudson.model.Hudson.getItems(Hudson.java:223)
at hudson.model.Hudson.getAllItems(Hudson.java:1367)
at hudson.model.DependencyGraph.<init>(DependencyGraph.java:78)
at hudson.model.Hudson.rebuildDependencyGraph(Hudson.java:3626)
at hudson.model.Hudson$12.run(Hudson.java:2415)
at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:146)
at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:259)
at hudson.model.Hudson$4.runTask(Hudson.java:699)
at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:187)
at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:94)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
... 1 more
Caused by: java.lang.NullPointerException
at hudson.model.RunMap.recalcLastStable(RunMap.java:469)
at hudson.model.RunMap.recalcMarkers(RunMap.java:209)
at hudson.model.RunMap.setBuilds(RunMap.java:199)
at hudson.model.RunMap.putAllRunValues(RunMap.java:225)
at hudson.model.RunMap.reset(RunMap.java:292)
at hudson.model.RunMap.load(RunMap.java:640)
at hudson.model.AbstractProject.onLoad(AbstractProject.java:329)
at hudson.model.BaseBuildableProject.onLoad(BaseBuildableProject.java:91)
at hudson.model.TopLevelItemsCache$1.load(TopLevelItemsCache.java:64)
at hudson.model.TopLevelItemsCache$1.load(TopLevelItemsCache.java:57)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2257)
... 20 more
Any help is greatly appreciated!
I had a similar problem. There was a power outage and some files got corrupted. It was possible to start Hudson without any jobs (I moved all jobs to a different directory).
So I went through the most recently modified jobs in $HUDSON_HOME/jobs and deleted:
the empty nextBuildNumber file
the empty builds/_runmap.xml file
any whole build with empty build.xml and/or changelog.xml files (the builds/yyyy-MM-dd_HH-mm-ss directory and the builds/xxx link)
Here's a quick hack we used when we ran into the same situation:
#!/usr/bin/env python
"""Remove references to missing builds from all _runmap.xml files."""
from xml.dom.minidom import parse
import os
import glob

for buildRoot in glob.glob("/var/lib/hudson/jobs/*/builds/"):
    runmapFilename = buildRoot + "/_runmap.xml"
    if not os.path.exists(runmapFilename):
        continue
    dom = parse(runmapFilename)
    builds = dom.getElementsByTagName("builds")[0]
    changed = False
    for build in builds.getElementsByTagName("build"):
        buildDir = build.getElementsByTagName("buildDir")[0].childNodes[0].data
        if not os.path.exists(buildRoot + "/" + buildDir + "/build.xml"):
            # build directory no longer exists; drop its entry from the runmap
            changed = True
            print "missing", buildRoot, buildDir
            builds.removeChild(build)
    if changed:
        # keep the original as a backup, then write the cleaned runmap
        os.rename(runmapFilename, runmapFilename + ".org")
        f = open(runmapFilename, "w")
        f.write(dom.toxml())
        f.close()
Unfortunately, it seems that the history is lost for the jobs that are corrupted.
In order to get Hudson back online I did the following (a scripted sketch of these steps is shown below):
removed the builds folder entirely
removed the lastS* links
set nextBuildNumber to 1
I hope this helps.
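If you have many jobs to clean up, here is a minimal Java sketch of those same three steps (the job path is hypothetical; point it at one job directory under $HUDSON_HOME/jobs):

import java.io.IOException;
import java.nio.file.*;
import java.util.Comparator;
import java.util.stream.Stream;

public class ResetHudsonJob {
    public static void main(String[] args) throws IOException {
        Path job = Paths.get("/var/lib/hudson/jobs/myjob"); // hypothetical job dir

        // 1. remove the builds folder entirely
        Path builds = job.resolve("builds");
        if (Files.exists(builds)) {
            try (Stream<Path> walk = Files.walk(builds)) {
                walk.sorted(Comparator.reverseOrder())
                    .forEach(p -> p.toFile().delete()); // children before parents
            }
        }

        // 2. remove the lastS* links (lastStable, lastSuccessful, ...)
        try (DirectoryStream<Path> links = Files.newDirectoryStream(job, "lastS*")) {
            for (Path link : links) {
                Files.deleteIfExists(link);
            }
        }

        // 3. reset nextBuildNumber to 1
        Files.write(job.resolve("nextBuildNumber"), "1\n".getBytes());
    }
}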

buildr: package dependencies into a single jar

I have a java project that is built with buildr and that has some external dependencies:
repositories.remote << "http://www.ibiblio.org/maven2"
repositories.remote << "http://packages.example/"

define "myproject" do
  compile.options.target = '1.5'
  project.version = "1.0.0"
  compile.with 'dependency:dependency-xy:jar:1.2.3'
  compile.with 'dependency2:dependency2:jar:4.5.6'
  package(:jar)
end
I want this to build a single standalone jar file that includes all these dependencies.
How do I do that?
(there's a logical followup question: How can I strip all the unused code from the included dependencies and only package the classes I actually use?)
This is what I'm doing right now. This uses autojar to pull only the necessary dependencies:
def add_dependencies(pkg)
  tempfile = pkg.to_s.sub(/\.jar$/, "-without-dependencies.jar")
  mv pkg.to_s, tempfile
  dependencies = compile.dependencies.map { |d| "-c #{d}" }.join(" ")
  sh "java -jar tools/autojar.jar -baev -o #{pkg} #{dependencies} #{tempfile}"
end
and later:
package(:jar)
package(:jar).enhance { |pkg| pkg.enhance { |pkg| add_dependencies(pkg) }}
(caveat: I know little about buildr, this could be totally the wrong approach. It works for me, though)
I'm also learning Buildr, and currently I'm packing the Scala runtime with my application this way:
package(:jar).with(:manifest => _('src/MANIFEST.MF')).exclude('.scala-deps')
  .merge('/var/local/scala/lib/scala-library.jar')
No idea if this is inferior to autojar (comments are welcome), but it seems to work with a simple example. It takes 4.5 minutes to package that scala-library.jar though.
I'm going to use Cascading for my example:
cascading_dev_jars = Dir[_("#{ENV["CASCADING_HOME"]}/build/cascading-{core,xml}-*.jar")]
#...
package(:jar).include cascading_dev_jars, :path => "lib"
Here is how I create an uberjar with Buildr; this customizes what is put into the jar and how the manifest is created:
assembly_dir = 'target/assembly'
main_class = 'com.something.something.Blah'

artifacts = compile.dependencies
artifacts.each do |artifact|
  Unzip.new( _(assembly_dir) => artifact ).extract
end

# remove dirs from assembly that should not be in uberjar
FileUtils.rm_rf( "#{_(assembly_dir)}/example/package" )
FileUtils.rm_rf( "#{_(assembly_dir)}/example/dir" )

# create manifest file
File.open( _("#{assembly_dir}/META-INF/MANIFEST.MF"), 'w') do |f|
  f.write("Implementation-Title: Uberjar Example\n")
  f.write("Implementation-Version: #{project_version}\n")
  f.write("Main-Class: #{main_class}\n")
  f.write("Created-By: Buildr\n")
end

present_dir = Dir.pwd
Dir.chdir _(assembly_dir)
puts "Creating #{_("target/#{project.name}-#{project.version}.jar")}"
`jar -cfm #{_("target/#{project.name}-#{project.version}.jar")} #{_(assembly_dir)}/META-INF/MANIFEST.MF .`
Dir.chdir present_dir
There is also a version that supports Spring, by concatenating all the spring.schemas files.
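The spring.schemas trick matters because every Spring jar ships its own META-INF/spring.schemas, and a naive uberjar keeps only one copy, which breaks XML namespace resolution for the others. I don't have that Buildr version to hand, but as a rough illustration of the idea, here is a small Java sketch that gathers every copy visible on the classpath and concatenates them into one file (the output path is illustrative):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Enumeration;

public class MergeSpringSchemas {
    public static void main(String[] args) throws IOException {
        // Each jar contributes its own META-INF/spring.schemas;
        // collect all of them from the classpath.
        Enumeration<URL> urls = MergeSpringSchemas.class.getClassLoader()
                .getResources("META-INF/spring.schemas");
        try (PrintWriter out = new PrintWriter(
                Files.newBufferedWriter(Paths.get("spring.schemas")))) {
            while (urls.hasMoreElements()) {
                URL url = urls.nextElement();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(url.openStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        out.println(line); // append this jar's mappings
                    }
                }
            }
        }
    }
}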
