I am sending JSON data from a Java server via TCP to Logstash (which forwards it to Elasticsearch), and the JSON data ends up escaped in Elasticsearch.
Java serialization:
Map<String, Object> jsonMap = new HashMap<>();
jsonMap.put("age", event.getAge());
for (Entry<String, Serializable> attribute : event.getAttributes().entrySet()) {
    jsonMap.put("attribute_" + attribute.getKey(), attribute.getValue());
}
jsonMap.put("message", event.getMessage());
jsonMap.put("cause", event.getCause());
jsonMap.put("timestamp", event.getTimestamp());
jsonMap.put("eventid", event.getEventId());
jsonMap.put("instanceid", event.getInstanceId());
jsonMap.put("origin", event.getOrigin());
jsonMap.put("severity", event.getSeverity());
jsonMap.put("durability", event.getDurability());
jsonMap.put("detail", event.getDetail());
int i = 0;
for (String tag : event.getTags()) {
    jsonMap.put("tag_" + String.valueOf(i), tag);
    i++;
}
return new JSONObject(jsonMap).toString();
Java Socket:
try (Socket clientSocket = new Socket(url, port);
     OutputStreamWriter out = new OutputStreamWriter(
             clientSocket.getOutputStream(), "UTF-8")) {
    out.write(content.toString());
    out.flush();
}
Example data in Elastic:
"message": "{\"detail\":null,\"cause\":null,\"attribute_start\":\"Mon Jan 11 16:15:28 CET 2016\",\"durability\":\"MOMENTARY\",\"attribute_login\":\"\",\"origin\":\"fortuna.ws.navipro\",\"severity\":\"ERROR\",\"attribute_type\":null,\"attribute_methodName\":\"Logout\",\"eventid\":\"ws.navipro.call\",\"attribute_call\":\"[57,7256538816272415441,,OK]{0 connections} CZ() Calling method 'Logout' at 1452525329029(Mon Jan 11 16:15:28 CET 2016-Mon Jan 11 16:15:29 CET 2016; roundtrip=36ms):\\n\\tRequest\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ClientLogoutRequest\\n\\t\\tkeep: true\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.RequestBody\\n\\t\\tcountry: CZ\\n\\t\\tsessionCC: NULL\\n\\t\\tsessionID:\\n\\t\\tsessionIP:\\n\\t\\tdebug: NULL\\n\\t\\tns: NULL\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.RequestCorpus\\n\\tResponse\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ClientLogoutResponse\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ResponseBody\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ResponseCorpus\\n\\t\\tmessage: \\n\\t\\t[1] \\n\\t\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.Message\\n\\t\\t\\tparam: \\n\\t\\t\\t[1] \\n\\t\\t\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.Message$Param\\n\\t\\t\\t\\tindex: 0\\n\\t\\t\\t\\ttype: NULL\\n\\t\\t\\t\\tvalue: 3\\n\\t\\t\\tid: 104\\n\\t\\t\\tseverity: NOTIFICATION\\n\\t\\t\\tlink: NULL\\n\\t\\tentryLink: NULL\\n\\t\\thint: NULL\\n\\t\\thintType: NULL\\n\\t\\tstatus: OK\\nEND\",\"timestamp\":1452525329030,\"message\":\"NaviPro method Logoutcalled.\",\"tag_1\":\"NaviPro\",\"attribute_end\":\"Mon Jan 11 16:15:29 CET 2016\",\"attribute_sessionId\":\"\",\"age\":0,\"tag_0\":\"Logout\",\"instanceid\":\"Logout\",\"attribute_address\":\""}"
Logstash config:
input {
  syslog {
    port => 1514
  }
  tcp {
    port => 3333
  }
}
filter {
  if [type] == "docker" {
    json {
      source => "message"
    }
    mutate {
      rename => [ "log", "message" ]
    }
    date {
      match => [ "time", "ISO8601" ]
    }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
I want to have the data in Elasticsearch as parsed JSON fields so I can filter on them in Kibana.
EDIT:
If I try to change the configuration to this:
input {
  tcp {
    port => 3333
    codec => json
  }
}
Logstash refuses to start, with this line in the log:
logstash_1 | {:timestamp=>"2016-01-13T10:13:58.583000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
The problem was an incorrect Logstash configuration. Logstash was running but not sending anything through the filter. This is the correct configuration:
input {
  tcp {
    port => 3333
    type => "java"
  }
}
filter {
  if [type] == "java" {
    json {
      source => "message"
    }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
I created a simple test, and all you should need to add to your Logstash configuration is codec => json on the tcp input. The default codec is "line", which treats the payload as plain text, so the JSON ends up escaped inside the message string.
input {
  tcp {
    port => 3333
    codec => json
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
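A side note on framing: the sender in the question opens a socket, writes one JSON document and closes the connection. If you instead keep the connection open and push several events over it, they need to be newline-delimited so Logstash can split them (that is what the json_lines codec expects). A minimal sketch of the sender side under that assumption, with one JSON document per line (the class and method names are illustrative, not from the original code):
import java.io.OutputStreamWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class LogstashSender {
    // Writes one JSON document per line so a line-oriented codec (e.g. json_lines)
    // on the Logstash tcp input can frame each event.
    public static void send(String host, int port, String jsonDocument) throws Exception {
        try (Socket socket = new Socket(host, port);
             OutputStreamWriter out = new OutputStreamWriter(
                     socket.getOutputStream(), StandardCharsets.UTF_8)) {
            out.write(jsonDocument);
            out.write("\n"); // delimiter between events
            out.flush();
        }
    }
}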
I'm trying to set up Logstash in Docker.
I'm using the logstash:8.0.0 image.
This is my logstash.yml:
http.host: "0.0.0.0"
xpack.monitoring.enabled: false
This is my pipeline.conf:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://10.135.95.164:9200"]
    index => "instameister"
    username => "elastic"
    password => ""
  }
  stdout { codec => rubydebug }
}
And this is the error I'm getting:
Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:120)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:85)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1169)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1156)", "org.jruby.ir.targets.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:39)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:333)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:87)", "org.jruby.RubyClass.newInstance(RubyClass.java:939)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:50)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:376)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:138)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52)", "org.jruby.runtime.Block.call(Block.java:139)", "org.jruby.RubyProc.call(RubyProc.java:318)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:829)"]}
All I see is Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration, but I have no idea what's wrong.
Instead of modifying logstash.yml, you can override the variables in the environment. Your pipeline.conf seems to be OK; the rubydebug codec is enabled by default for stdout.
So, assuming that you have a Docker Compose file, the configuration would be something like this:
logstash:
  image: docker.elastic.co/logstash/logstash
  container_name: logstash
  restart: always
  user: root
  volumes:
    - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    - ./logstash/logs/:/logstash/logs/:rw
  environment:
    - xpack.monitoring.enabled=false
    - outputs.elasticsearch=http://elasticuser:elasticuserpassword@elasticsearch:9200
  depends_on:
    - elasticsearch
In the ./logstash/pipeline directory, put a logstash.conf file with:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "${outputs.elasticsearch}"
  }
  stdout {
  }
}
Adapt to your needs.
The problem was that the key 'username' should be 'user'.
This is the working config:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://10.135.95.164:9200"]
    user => "elastic"
    password => ""
    index => "instameister"
    manage_template => false
  }
  stdout { codec => json_lines }
}
I want to connect two applications via RSocket. One is written in Go and the other in Kotlin.
I want to set up a connection where the client sends a stream of data and the server sends a confirmation response.
The problem is with waiting for all the elements: if the server does not call BlockLast(ctx), the whole stream is read, but the response is sent before all entries arrive. If BlockLast(ctx) is added, the Go server gets stuck.
I also wrote a client in Kotlin, and in that case the whole communication works perfectly fine.
Can anyone help?
Go server:
package main

import (
    "context"
    "github.com/golang/protobuf/proto"
    "github.com/rsocket/rsocket-go"
    "github.com/rsocket/rsocket-go/payload"
    "github.com/rsocket/rsocket-go/rx"
    "github.com/rsocket/rsocket-go/rx/flux"
    "rsocket-go-rpc-test/proto"
)

func main() {
    addr := "tcp://127.0.0.1:8081"
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    err := rsocket.Receive().
        Fragment(1024).
        Resume().
        Acceptor(func(setup payload.SetupPayload, sendingSocket rsocket.CloseableRSocket) (rsocket.RSocket, error) {
            return rsocket.NewAbstractSocket(
                rsocket.RequestChannel(func(payloads rx.Publisher) flux.Flux {
                    println("START")
                    payloads.(flux.Flux).
                        DoOnNext(func(input payload.Payload) {
                            chunk := &pl_dwojciechowski_proto.Chunk{}
                            proto.Unmarshal(input.Data(), chunk)
                            println(string(chunk.Content))
                        }).BlockLast(ctx)
                    return flux.Create(func(i context.Context, sink flux.Sink) {
                        status, _ := proto.Marshal(&pl_dwojciechowski_proto.UploadStatus{
                            Message: "OK",
                            Code:    0,
                        })
                        sink.Next(payload.New(status, make([]byte, 1)))
                        sink.Complete()
                        println("SENT")
                    })
                }),
            ), nil
        }).
        Transport(addr).
        Serve(ctx)
    panic(err)
}
Kotlin Client:
private fun clientCall() {
    val rSocket = RSocketFactory.connect().transport(TcpClientTransport.create(8081)).start().block()
    val client = FileServiceClient(rSocket)
    val requests: Flux<Chunk> = Flux.range(1, 10)
        .map { i: Int -> "sending -> $i" }
        .map<Chunk> {
            Chunk.newBuilder()
                .setContent(ByteString.copyFrom(it.toByteArray())).build()
        }
    val response = client.send(requests).block() ?: throw Exception("")
    rSocket.dispose()
    System.out.println(response.message)
}
And the equivalent of the Go server, written in Kotlin:
val serviceServer = FileServiceServer(DefaultService(), Optional.empty(), Optional.empty())
val closeableChannel = RSocketFactory.receive()
    .acceptor { setup: ConnectionSetupPayload?, sendingSocket: RSocket? ->
        Mono.just(
            RequestHandlingRSocket(serviceServer)
        )
    }
    .transport(TcpServerTransport.create(8081))
    .start()
    .block()
closeableChannel.onClose().block()

class DefaultService : FileService {
    override fun send(messages: Publisher<Service.Chunk>?, metadata: ByteBuf?): Mono<Service.UploadStatus> {
        return Flux.from(messages)
            .windowTimeout(10, Duration.ofSeconds(500))
            .flatMap(Function.identity())
            .doOnNext { println(it.content.toStringUtf8()) }
            .then(Mono.just(Service.UploadStatus.newBuilder().setCode(Service.UploadStatusCode.Ok).setMessage("test").build()))
    }
}
Server Output:
START
sending -> 1
Solution below. Instead of blocking on the incoming stream inside the handler, it drains the stream through ToChan and signals completion (or error) over a Go channel, and only then emits the response:
package main

import (
    "context"
    "github.com/golang/protobuf/proto"
    "github.com/rsocket/rsocket-go"
    "github.com/rsocket/rsocket-go/payload"
    "github.com/rsocket/rsocket-go/rx"
    "github.com/rsocket/rsocket-go/rx/flux"
    "rsocket-go-rpc-test/proto"
)

type TestService struct {
    totals int
    pl_dwojciechowski_proto.FileService
}

var statusOK = &pl_dwojciechowski_proto.UploadStatus{
    Message: "code",
    Code:    pl_dwojciechowski_proto.UploadStatusCode_Ok,
}

var statusErr = &pl_dwojciechowski_proto.UploadStatus{
    Message: "code",
    Code:    pl_dwojciechowski_proto.UploadStatusCode_Failed,
}

func main() {
    addr := "tcp://127.0.0.1:8081"
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    err := rsocket.Receive().
        Fragment(1024).
        Acceptor(func(setup payload.SetupPayload, sendingSocket rsocket.CloseableRSocket) (rsocket.RSocket, error) {
            return rsocket.NewAbstractSocket(
                rsocket.RequestChannel(func(msgs rx.Publisher) flux.Flux {
                    dataReceivedChan := make(chan bool, 1)
                    toChan, _ := flux.Clone(msgs).
                        DoOnError(func(e error) {
                            dataReceivedChan <- false
                        }).
                        DoOnComplete(func() {
                            dataReceivedChan <- true
                        }).
                        ToChan(ctx, 1)
                    fluxResponse := flux.Create(func(ctx context.Context, s flux.Sink) {
                        gluedContent := make([]byte, 1024)
                        for c := range toChan {
                            chunk := pl_dwojciechowski_proto.Chunk{}
                            _ = chunk.XXX_Unmarshal(c.Data())
                            gluedContent = append(gluedContent, chunk.Content...)
                        }
                        if <-dataReceivedChan {
                            marshal, _ := proto.Marshal(statusOK)
                            s.Next(payload.New(marshal, nil))
                            s.Complete()
                        } else {
                            marshal, _ := proto.Marshal(statusErr)
                            s.Next(payload.New(marshal, nil))
                            s.Complete()
                        }
                    })
                    return fluxResponse
                }),
            ), nil
        }).
        Transport(addr).
        Serve(ctx)
    panic(err)
}
I am trying to understand Akka clustering for parallel computation using nodes. I wrote a factorial program and want to run it on a cluster of 3 nodes (including the master).
I am using a configuration file to provide the seed nodes and the cluster provider, and I read the file in my code.
cluster {
  akka {
    actor {
      provider = "cluster"
    }
    remote {
      log-remote-lifecycle-events = off
      netty.tcp {
        hostname = "127.0.0.1"
        port = 0
      }
    }
    cluster {
      seed-nodes = [
        "akka.tcp://ClusterSystem@127.0.0.1:9876",
        "akka.tcp://ClusterSystem@127.0.0.1:6789"]
      # auto downing is NOT safe for production deployments.
      # you may want to use it during development, read more about it in the docs.
      #
      # auto-down-unreachable-after = 10s
    }
  }
}
Following is the Scala code:
package test

import java.io.File
import akka.actor.{Actor, ActorSystem, Props}
import akka.stream.ActorMaterializer
import com.typesafe.config.ConfigFactory
import scala.concurrent.ExecutionContextExecutor

class Factorial extends Actor {
  override def receive = {
    case (n: Int) => fact(n)
  }

  def fact(n: Int): Int = {
    if (n <= 1) {
      return 1
    }
    else {
      return n * fact(n - 1)
    }
  }
}

object ClusterActor {
  def main(args: Array[String]): Unit = {
    val configFile = "E:/Scala/StatsRuleEngine/Resources/local_configuration.conf"
    val config = ConfigFactory.parseFile(new File(configFile))
    implicit val system: ActorSystem = ActorSystem("ClusterSystem", config.getConfig("cluster"))
    implicit val materializer: ActorMaterializer = ActorMaterializer()
    implicit val executionContext: ExecutionContextExecutor = system.dispatcher

    val FacActor = system.actorOf(Props[Factorial], "Factorial")
    FacActor ! (5)
  }
}
On running the program, I am getting the error below:
Remote connection to [null] failed with java.net.ConnectException: Connection refused: no further information: /127.0.0.1:6789
[WARN] [01/21/2019 16:31:15.979] [New I/O boss #3] [NettyTransport(akka://ClusterSystem)] Remote connection to [null] failed with java.net.ConnectException: Connection refused: no further information: /127.0.0.1:9876
I tried to search, but I don't know why this error occurs.
When you boot your nodes, you need to specify in the config the exact ports that will be open:
netty.tcp {
  hostname = "127.0.0.1"
  port = 0 // THE EXACT PORT
}
So, if your seed nodes say 9876 and 6789, two of the nodes have to specify
netty.tcp {
  hostname = "127.0.0.1"
  port = 9876
}
and
netty.tcp {
  hostname = "127.0.0.1"
  port = 6789
}
Note that the node listed first in the seed-nodes list must start first.
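For completeness, one way to do that without editing the file for every node is to parse the shared configuration and override only the port at startup. This is a minimal sketch based on the question's setup; the object name, file path, and the way the port is passed in are illustrative assumptions:
import java.io.File
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object SeedNodeLauncher {
  def main(args: Array[String]): Unit = {
    // Pass 9876 for the first seed node, 6789 for the second, 0 for ordinary workers.
    val port = if (args.nonEmpty) args(0) else "0"
    val base = ConfigFactory.parseFile(new File("local_configuration.conf")).getConfig("cluster")
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port").withFallback(base)
    ActorSystem("ClusterSystem", config)
  }
}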
I'm using the Kafka JDK client version 0.10.2.1. I am able to produce simple messages to Kafka for a "heartbeat" test, but I cannot consume a message from that same topic using the SDK. I am able to consume that message when I go into the Kafka CLI, so I have confirmed the message is there. Here's the function I'm using to consume from my Kafka server, with the props. I pass in the message I produced to the topic only after I have confirmed the produce() was successful; I can post that function later if requested:
private def consumeFromKafka(topic: String, expectedMessage: String): Boolean = {
  val props: Properties = initProps("consumer")
  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(List(topic).asJava)
  var readExpectedRecord = false
  try {
    val records = {
      val firstPollRecs = consumer.poll(MAX_POLLTIME_MS)
      // increase timeout and try again if nothing comes back the first time in case system is busy
      if (firstPollRecs.count() == 0) firstPollRecs else {
        logger.info("KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to "
          + (MAX_POLLTIME_MS * 2)/1000 + " sec.")
        consumer.poll(MAX_POLLTIME_MS * 2)
      }
    }
    records.forEach(rec => {
      if (rec.value() == expectedMessage) readExpectedRecord = true
    })
  } catch {
    case e: Throwable => //log error
  } finally {
    consumer.close()
  }
  readExpectedRecord
}
private def initProps(propsType: String): Properties = {
  val prop = new Properties()
  prop.put("bootstrap.servers", kafkaServer + ":" + kafkaPort)
  propsType match {
    case "producer" => {
      prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("acks", "1")
      prop.put("producer.type", "sync")
      prop.put("retries", "3")
      prop.put("linger.ms", "5")
    }
    case "consumer" => {
      prop.put("group.id", groupId)
      prop.put("enable.auto.commit", "false")
      prop.put("auto.commit.interval.ms", "1000")
      prop.put("session.timeout.ms", "30000")
      prop.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
      // poll just once, should only be one record for the heartbeat
      prop.put("max.poll.records", "1")
    }
  }
  prop
}
Now when I run the code, here's what it outputs in the console:
13:04:21 - Discovered coordinator serverName:9092 (id: 2147483647 rack: null) for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e.
13:04:23 INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:24 INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:25 INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e with generation 1
13:04:26 INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [HeartBeat_Topic.Service_5.2018-08-03.13_04_10.377-0] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:27 INFO c.p.p.l.util.KafkaHeartBeatUtil - KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to 60 sec.
And then nothing else; no errors are thrown, so no records are polled. Does anyone have any idea what's preventing the consume from happening? The subscription seems to be successful, as I'm able to call listTopics and list partitions with no problem.
Your code has a bug. It seems your line:
if (firstPollRecs.count() == 0)
should say this instead:
if (firstPollRecs.count() > 0)
Otherwise, you're keeping the empty firstPollRecs and then iterating over that, which obviously yields nothing.
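Applied to the block in the question, the fixed version keeps the first poll's records when it actually got some, and only re-polls with the longer timeout when it came back empty:
val records = {
  val firstPollRecs = consumer.poll(MAX_POLLTIME_MS)
  // increase timeout and try again if nothing comes back the first time in case system is busy
  if (firstPollRecs.count() > 0) firstPollRecs else {
    logger.info("KafkaHeartBeat: First poll had 0 records - trying again - doubling timeout to "
      + (MAX_POLLTIME_MS * 2) / 1000 + " sec.")
    consumer.poll(MAX_POLLTIME_MS * 2)
  }
}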
I am writing a Jenkins plugin. I have set up a pipeline script; when I execute the script, it calls some shell scripts and sets up a pipeline. That works fine.
Example of my code:
node('master') {
    try {
        def appWorkspace = './app/'
        def testWorkspace = './tests/'

        stage('Clean up') {
            // cleanWs()
        }
        stage('Build') {
            parallel (
                app: {
                    dir(appWorkspace) {
                        git changelog: false, credentialsId: 'jenkins.git', poll: false, url: 'https://src.url/to/our/repo'
                        dir('./App') {
                            sh "#!/bin/bash -lx \n ./gradlew assembleRelease"
                        }
                    }
                },
                tests: {
                    dir(testWorkspace) {
                        git changelog: false, credentialsId: 'jenkins.git', poll: false, url: 'https://src.url/to/our/repo'
                        sh "#!/bin/bash -lx \n nuget restore ./Tests/MyProject/MyProject.sln"
                        sh "#!/bin/bash -lx \n msbuild ./Tests/MyProject/MyProject.Core/ /p:Configuration=Debug"
                    }
                }
            )
        }
        stage('Prepare') {
            parallel (
                'install-apk': {
                    sh '''#!/bin/bash -lx
                    result="$(adbExtendedVersion shell pm list packages packagename.app)"
                    if [ ! -z "$result" ]
                    then
                        adbExtendedVersion uninstall packagename.app
                    fi
                    adbExtendedVersion install ''' + appWorkspace + '''/path/to/app-release.apk'''
                },
                'start-appium': {
                    sh "#!/bin/bash -lx \n GetAllAttachedDevices.sh"
                    sh "sleep 20s"
                }
            )
        }
        stage('Test') {
            // Reading content of the file
            def portsFileContent = readFile 'file.txt'
            // Split the file by next line
            def ports = portsFileContent.split('\n')
            // Getting device IDs to get properties of device
            def deviceIDFileContent = readFile 'IDs.txt'
            def deviceIDs = deviceIDFileContent.split('\n')
            // Define port and id as an pair
            def pairs = (0..<Math.min(ports.size(), deviceIDs.size())).collect { i -> [id: deviceIDs[i], port: ports[i]] }
            def steps = pairs.collectEntries { pair ->
                ["UI Test on ${pair.id}", {
                    sh "#!/bin/bash -lx \n mono $testWorkspace/Tests/packages/NUnit.ConsoleRunner.3.7.0/tools/nunit3-console.exe $testWorkspace/Tests/bin/Debug/MyProject.Core.dll --params=port=${pair.port}"
                }]
            }
            parallel steps
        }
    }
    catch (Exception e) {
        println(e);
    }
    finally {
        stage('Clean') {
            archiveArtifacts 'TestResult.xml'
            sh "#!/bin/bash -lx \n KillInstance.sh"
        }
    }
}
This is a Groovy script defining my pipeline. What I am trying to achieve with my plugin is that the user just supplies some path variables, e.g. the path to his solution or the path to his GitHub source, and my plugin then executes the script listed above automatically with the given parameters.
My problem is that I can't find any documentation on how to build such a pipeline construct in Java. If someone could point me in the right direction, I would appreciate it.
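One pattern that may help while you look for the plugin-side documentation: the script itself can declare the paths as build parameters, so whatever triggers the job (your plugin included) only has to supply values for them. This is a minimal sketch using standard Pipeline steps; the parameter names and defaults are made up for illustration and not part of the original setup:
// Hypothetical parameterization of the script above: expose the paths as
// build parameters so a caller (e.g. the plugin) can inject them per run.
properties([
    parameters([
        string(name: 'GIT_URL', defaultValue: 'https://src.url/to/our/repo', description: 'Repository to build'),
        string(name: 'SOLUTION_PATH', defaultValue: './Tests/MyProject/MyProject.sln', description: 'Path to the test solution')
    ])
])

node('master') {
    stage('Build') {
        git changelog: false, credentialsId: 'jenkins.git', poll: false, url: params.GIT_URL
        sh "#!/bin/bash -lx \n nuget restore ${params.SOLUTION_PATH}"
    }
}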