I'm trying to set up Logstash in Docker.
I'm using the logstash:8.0.0 image.
This is my logstash.yml:
http.host: "0.0.0.0"
xpack.monitoring.enabled: false
This is my pipeline.conf:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://10.135.95.164:9200"]
    index => "instameister"
    username => "elastic"
    password => ""
  }
  stdout { codec => rubydebug }
}
And this is the error I'm getting:
Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:120)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:85)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1169)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1156)", "org.jruby.ir.targets.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:39)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:333)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:87)", "org.jruby.RubyClass.newInstance(RubyClass.java:939)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:50)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:376)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:138)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52)", "org.jruby.runtime.Block.call(Block.java:139)", "org.jruby.RubyProc.call(RubyProc.java:318)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:829)"]}
All I see is "Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", but I have no idea what's wrong.
Instead of modifying logstash.yml, you can override the variables via the environment. Your pipeline.conf seems to be OK; the rubydebug codec is enabled by default for stdout.
So, assuming that you have a docker compose file, the configuration would be something like this:
logstash:
  image: docker.elastic.co/logstash/logstash
  container_name: logstash
  restart: always
  user: root
  volumes:
    - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    - ./logstash/logs/:/logstash/logs/:rw
  environment:
    - xpack.monitoring.enabled=false
    - outputs.elasticsearch=http://elasticuser:elasticuserpassword@elasticsearch:9200
  depends_on:
    - elasticsearch
In the ./logstash/pipeline directory, put a logstash.conf file with:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "${outputs.elasticsearch}"
  }
  stdout {
  }
}
Adapt to your needs.
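Assuming a service defined as above, something like the following should bring it up and let you verify that the pipeline starts (commands for a recent Docker Compose; adjust the service name to your file):
docker compose up -d logstash
docker compose logs -f logstash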
The problem was that the key 'username' should be 'user'.
This is the working config:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://10.135.95.164:9200"]
    user => "elastic"
    password => ""
    index => "instameister"
    manage_template => false
  }
  stdout { codec => json_lines }
}
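For reference, a minimal way to run this with the plain logstash:8.0.0 image is to mount the two files over the image's defaults; this is a sketch that assumes both files sit in the current directory and uses the official image's standard paths:
docker run --rm -p 5044:5044 \
  -v "$PWD/logstash.yml:/usr/share/logstash/config/logstash.yml:ro" \
  -v "$PWD/pipeline.conf:/usr/share/logstash/pipeline/pipeline.conf:ro" \
  logstash:8.0.0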
Related
I am using Corda 4.5 with Gradle plugin version 5.0.10 and PostgreSQL as my DB.
When I try to run the deployNodes task, I get the error below:
[ERROR] 15:51:47+0530 [main] internal.NodeStartupLogging. - Could not find the database driver class. Please add it to the drivers directory. [Error Code: database-missing-driver For further information, please go to https://docs.corda.net/docs/corda-os/4.5/error-codes.html] - Could not find the database driver class. Please add it to the 'drivers' folder. [errorCode=1oswgkz, moreInformationAt=https://errors.corda.net/OS/4.5/1oswgkz]
Following is the deployNodes task code in the build.gradle file:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    nodeDefaults {
        projectCordapp {
            deploy = false
        }
        cordapp "$confidential_id_release_group:ci-workflows:$confidential_id_release_version"
        cordapp "$accounts_release_group:accounts-contracts:$accounts_release_version"
        cordapp "$accounts_release_group:accounts-workflows:$accounts_release_version"
        cordapp project(':cordapp-contracts-states')
        cordapp project(':workflows')
        //ext.drivers = ['${rootProject.projectDir}/lib/postgresql-42.2.8.jar']
    }
    //NOTARY NODE
    node {
        name "O=Notary,L=London,C=GB"
        notary = [validating: true]
        p2pAddress("localhost:10002")
        rpcSettings {
            address("localhost:10003")
            adminAddress("localhost:10043")
        }
    }
    // NODEA
    node {
        name "O=NODEA,L=Lucknow,C=IN"
        p2pAddress("localhost:10010")
        rpcSettings {
            address("localhost:10011")
            adminAddress("localhost:10052")
        }
        rpcUsers = [[user: "userA", "password": "user123", "permissions": ["ALL"]]]
        //new DB config
        //DB
        extraConfig = [
            'dataSourceProperties.dataSource.url' : 'jdbc:postgresql://localhost:5432/egdb?currentSchema=nodeA_schema',
            'dataSourceProperties.dataSourceClassName' : 'org.postgresql.ds.PGSimpleDataSource',
            'dataSourceProperties.dataSource.user' : 'postgres',
            'dataSourceProperties.dataSource.password' : 'postgres',
            //'dataSourceProperties.driverClassName' : 'org.postgresql.ds.PGSimpleDataSource'
            //jarDirs = ['${rootProject.projectDir}/lib/postgresql-42.2.8.jar']
            //'drivers' : 'org.postgresql.Driver'
            'jarDirs' : ['${rootProject.projectDir}/lib/jdbc/driver/postgresql-42.2.8.jar']
        ]
        //jarDirs = ['${rootProject.projectDir}/lib/postgresql-42.2.8.jar']
        //drivers = ext.drivers
    }
    // NODEB
    node {
        name "O=NODEB,L=Delhi,C=IN"
        p2pAddress("localhost:10010")
        rpcSettings {
            address("localhost:10011")
            adminAddress("localhost:10052")
        }
        rpcUsers = [[user: "userB", "password": "user123", "permissions": ["ALL"]]]
        //new DB config
        //DB
        extraConfig = [
            'dataSourceProperties.dataSource.url' : 'jdbc:postgresql://localhost:5432/egdb?currentSchema=nodeB_schema',
            'dataSourceProperties.dataSourceClassName' : 'org.postgresql.ds.PGSimpleDataSource',
            'dataSourceProperties.dataSource.user' : 'postgres',
            'dataSourceProperties.dataSource.password' : 'postgres',
            //'dataSourceProperties.driverClassName' : 'org.postgresql.ds.PGSimpleDataSource'
            //jarDirs = ['${rootProject.projectDir}/lib/postgresql-42.2.8.jar']
            //'drivers' : 'org.postgresql.Driver'
            'jarDirs' : ['${rootProject.projectDir}/lib/jdbc/driver/postgresql-42.2.8.jar']
        ]
        //jarDirs = ['${rootProject.projectDir}/lib/postgresql-42.2.8.jar']
        //drivers = ext.drivers
    }
}
How do I add the PostgreSQL JDBC driver path in the build.gradle file? Which PostgreSQL version is compatible with Corda 4.5?
The issue was resolved for me by adding the following line to the dependencies section of the build.gradle file:
cordaDriver "org.postgresql:postgresql:42.2.8"
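As I understand it, cordaDriver is a dependency configuration provided by the Cordformation plugin that copies the artifact into each generated node's drivers directory, so the line sits next to your other dependencies, roughly like this (a sketch; the version is the one from this thread):
dependencies {
    // copies the PostgreSQL JDBC driver into each node's drivers/ folder when deployNodes runs
    cordaDriver "org.postgresql:postgresql:42.2.8"
}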
According to this article:
PostgreSQL 9.6 is the lowest acceptable version; the article uses PostgreSQL 11.
The driver version is postgresql-42.1.4.jar.
In order to point your Gradle task to the driver, create a folder (call it drivers), put the driver's JAR file inside it, then inside extraConfig of your node add drivers = ['absolute_path_to_directory_with_jdbc_driver'] (note that it's the absolute path to the directory you created, not to the driver file as in your snippet).
Your node configuration is missing database.transactionIsolationLevel, database.schema, and database.runMigration.
Remove jarDirs and add drivers like I mentioned earlier.
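Put together, one node block would look roughly like this (a sketch based on the advice above; the isolation level, schema name, and driver directory are placeholders to adapt):
node {
    name "O=NODEA,L=Lucknow,C=IN"
    p2pAddress("localhost:10010")
    rpcSettings {
        address("localhost:10011")
        adminAddress("localhost:10052")
    }
    rpcUsers = [[user: "userA", "password": "user123", "permissions": ["ALL"]]]
    extraConfig = [
        'dataSourceProperties.dataSource.url'      : 'jdbc:postgresql://localhost:5432/egdb?currentSchema=nodeA_schema',
        'dataSourceProperties.dataSourceClassName' : 'org.postgresql.ds.PGSimpleDataSource',
        'dataSourceProperties.dataSource.user'     : 'postgres',
        'dataSourceProperties.dataSource.password' : 'postgres',
        'database.transactionIsolationLevel'       : 'READ_COMMITTED',
        'database.schema'                          : 'nodeA_schema',
        'database.runMigration'                    : 'true',
        // absolute path to the directory that contains the PostgreSQL jar, not to the jar itself
        'drivers'                                  : ['/absolute/path/to/drivers']
    ]
}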
I am trying to understand Akka clustering for parallel computation across nodes, so I wrote a factorial program and want to run it on a cluster of 3 nodes (including the master).
I am using a configuration file to provide the seed nodes and the cluster provider, and I read the file in my code.
cluster {
  akka {
    actor {
      provider = "cluster"
    }
    remote {
      log-remote-lifecycle-events = off
      netty.tcp {
        hostname = "127.0.0.1"
        port = 0
      }
    }
    cluster {
      seed-nodes = [
        "akka.tcp://ClusterSystem@127.0.0.1:9876",
        "akka.tcp://ClusterSystem@127.0.0.1:6789"]
      # auto downing is NOT safe for production deployments.
      # you may want to use it during development, read more about it in the docs.
      #
      # auto-down-unreachable-after = 10s
    }
  }
}
Following is the Scala code:
package test

import java.io.File

import akka.actor.{Actor, ActorSystem, Props}
import akka.stream.ActorMaterializer
import com.typesafe.config.ConfigFactory

import scala.concurrent.ExecutionContextExecutor

class Factorial extends Actor {
  override def receive = {
    case (n: Int) => fact(n)
  }

  def fact(n: Int): Int = {
    if (n <= 1) {
      return 1
    } else {
      return n * fact(n - 1)
    }
  }
}

object ClusterActor {
  def main(args: Array[String]): Unit = {
    val configFile = "E:/Scala/StatsRuleEngine/Resources/local_configuration.conf"
    val config = ConfigFactory.parseFile(new File(configFile))
    implicit val system: ActorSystem = ActorSystem("ClusterSystem", config.getConfig("cluster"))
    implicit val materializer: ActorMaterializer = ActorMaterializer()
    implicit val executionContext: ExecutionContextExecutor = system.dispatcher

    val FacActor = system.actorOf(Props[Factorial], "Factorial")
    FacActor ! (5)
  }
}
On running the program, I get the error below:
Remote connection to [null] failed with java.net.ConnectException: Connection refused: no further information: /127.0.0.1:6789
[WARN] [01/21/2019 16:31:15.979] [New I/O boss #3] [NettyTransport(akka://ClusterSystem)] Remote connection to [null] failed with java.net.ConnectException: Connection refused: no further information: /127.0.0.1:9876
I tried to search, but I don't know why this error occurs.
When you boot your nodes, you need to specify the exact ports that will be open in the config:
netty.tcp {
  hostname = "127.0.0.1"
  port = 0 // THE EXACT PORT
}
So, if your seed nodes use 9876 and 6789, two of the nodes have to specify
netty.tcp {
  hostname = "127.0.0.1"
  port = 9876
}
and
netty.tcp {
  hostname = "127.0.0.1"
  port = 6789
}
Note that the node listed first in the seed-nodes list must be started first.
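For example, a minimal sketch (assuming the local_configuration.conf from the question): keep port = 0 in the file and override the port per node when the ActorSystem is created, passing 9876 to the first seed node, 6789 to the second, and 0 (any free port) to the remaining worker:
import java.io.File

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object ClusterNode {
  def main(args: Array[String]): Unit = {
    // hypothetical launcher: the port to bind comes from the command line
    val port = if (args.nonEmpty) args(0) else "0"
    val config = ConfigFactory
      .parseString(s"akka.remote.netty.tcp.port = $port")
      .withFallback(ConfigFactory.parseFile(new File("local_configuration.conf")).getConfig("cluster"))
    val system = ActorSystem("ClusterSystem", config)
  }
}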
I am writing a Jenkins plugin. I have set up a pipeline script; when I execute the script, it calls some shell scripts and sets up a pipeline. That works fine.
Example of my code:
node('master') {
    try {
        def appWorkspace = './app/'
        def testWorkspace = './tests/'

        stage('Clean up') {
            // cleanWs()
        }
        stage('Build') {
            parallel (
                app: {
                    dir(appWorkspace) {
                        git changelog: false, credentialsId: 'jenkins.git', poll: false, url: 'https://src.url/to/our/repo'
                        dir('./App') {
                            sh "#!/bin/bash -lx \n ./gradlew assembleRelease"
                        }
                    }
                },
                tests: {
                    dir(testWorkspace) {
                        git changelog: false, credentialsId: 'jenkins.git', poll: false, url: 'https://src.url/to/our/repo'
                        sh "#!/bin/bash -lx \n nuget restore ./Tests/MyProject/MyProject.sln"
                        sh "#!/bin/bash -lx \n msbuild ./Tests/MyProject/MyProject.Core/ /p:Configuration=Debug"
                    }
                }
            )
        }
        stage('Prepare') {
            parallel (
                'install-apk': {
                    sh '''#!/bin/bash -lx
                    result="$(adbExtendedVersion shell pm list packages packagename.app)"
                    if [ ! -z "$result" ]
                    then
                        adbExtendedVersion uninstall packagename.app
                    fi
                    adbExtendedVersion install ''' + appWorkspace + '''/path/to/app-release.apk'''
                },
                'start-appium': {
                    sh "#!/bin/bash -lx \n GetAllAttachedDevices.sh"
                    sh "sleep 20s"
                }
            )
        }
        stage('Test') {
            // Reading content of the file
            def portsFileContent = readFile 'file.txt'
            // Split the file by newlines
            def ports = portsFileContent.split('\n')
            // Getting device IDs to get properties of device
            def deviceIDFileContent = readFile 'IDs.txt'
            def deviceIDs = deviceIDFileContent.split('\n')

            // Define port and id as a pair
            def pairs = (0..<Math.min(ports.size(), deviceIDs.size())).collect { i -> [id: deviceIDs[i], port: ports[i]] }
            def steps = pairs.collectEntries { pair ->
                ["UI Test on ${pair.id}", {
                    sh "#!/bin/bash -lx \n mono $testWorkspace/Tests/packages/NUnit.ConsoleRunner.3.7.0/tools/nunit3-console.exe $testWorkspace/Tests/bin/Debug/MyProject.Core.dll --params=port=${pair.port}"
                }]
            }
            parallel steps
        }
    }
    catch (Exception e) {
        println(e);
    }
    finally {
        stage('Clean') {
            archiveArtifacts 'TestResult.xml'
            sh "#!/bin/bash -lx \n KillInstance.sh"
        }
    }
}
This is a Groovy script defining my pipeline. What I am trying to achieve with my plugin is that the user just inserts some path variables, e.g. the path to his solution or the path to his GitHub source. My plugin then executes the above script automatically with the given parameters.
My problem is that I can't find any documentation on how to write such a pipeline construct in Java. If someone could point me in the right direction, I would appreciate it.
I'm trying to send JSON through Kafka (JHipster/Spring Boot producer and Python consumer). I've been able to receive the message in bytes, but when I try to decode it, I always run into problems before being able to extract the message.
My Producer is like this:
@RestController
@RequestMapping("/api")
public class ProducerResource {

    private MessageChannel channel;

    public ProducerResource(ProducerChannel channel) {
        this.channel = channel.messageChannel();
    }

    @GetMapping("/subscribableChannel/{count}")
    @Timed
    @JsonInclude(Include.NON_NULL)
    public void produce(@PathVariable int count) {
        Map<String, String> json = new HashMap<String, String>();
        json.put("message", "HELL YEAH");
        while (count > 0) {
            channel.send(MessageBuilder.withPayload(json.toString().getBytes()).build());
            count--;
        }
    }
}
And my Python Consumer:
def stop_handler(signal, frame, consumer):
    print('Arrêt...')
    consumer.close()
    sys.exit(0)


def read_messages(consumer):
    meta = consumer.partitions_for_topic(TOPIC)
    for msg in consumer:
        print(msg.value)
        lol = json.load(msg.value.decode("utf-8"))
        print(lol)


def main():
    consumer = KafkaConsumer(TOPIC, auto_offset_reset='earliest',
                             group_id='read', bootstrap_servers=['localhost:9092'])
    signal.signal(signal.SIGINT, lambda signal, frame: stop_handler(signal, frame, consumer))
    read_messages(consumer)
    return 0


if __name__ == "__main__":
    exit(main())
And the error that I get from Python is:
python3 scripts/simple_consumer.py
b'\xff\x01\x0bcontentType\x00\x00\x00\x0c"text/plain"{message=HELL YEAH}'
Traceback (most recent call last):
File "scripts/simple_consumer.py", line 36, in <module>
exit(main())
File "scripts/simple_consumer.py", line 31, in main
read_messages(consumer)
File "scripts/simple_consumer.py", line 23, in read_messages
lol = json.load(msg.value.decode("utf-8"))
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
In my cloud configuration I only have Consumer/Producer headerMode = raw with Kafka. Any suggestion would be really appreciated!
UPDATE:
Here is my cloud configuration:
spring:
  profiles:
    active: dev
    include: swagger
  devtools:
    restart:
      enabled: true
    livereload:
      enabled: false # we use gulp + BrowserSync for livereload
  jackson:
    serialization.indent_output: true
  cloud:
    stream:
      default:
        consumer:
          headerMode: raw
        producer:
          headerMode: raw
      kafka:
        binder:
          brokers: localhost
          zk-nodes: localhost
      bindings:
        messageChannel:
          destination: messageChannel
          content-type: application/json
        subscribableChannel:
          destination: subscribableChannel
I am sending some JSON data from a Java server via TCP to Logstash (Logstash sends them to Elasticsearch), and the JSON data ends up escaped in Elasticsearch.
Java serialization:
Map<String, Object> jsonMap = new HashMap<>();
jsonMap.put("age", event.getAge());
for (Entry<String, Serializable> attribute : event.getAttributes().entrySet()) {
    jsonMap.put("attribute_" + attribute.getKey(), attribute.getValue());
}
jsonMap.put("message", event.getMessage());
jsonMap.put("cause", event.getCause());
jsonMap.put("timestamp", event.getTimestamp());
jsonMap.put("eventid", event.getEventId());
jsonMap.put("instanceid", event.getInstanceId());
jsonMap.put("origin", event.getOrigin());
jsonMap.put("severity", event.getSeverity());
jsonMap.put("durability", event.getDurability());
jsonMap.put("detail", event.getDetail());

int i = 0;
for (String tag : event.getTags()) {
    jsonMap.put("tag_" + String.valueOf(i), tag);
    i++;
}
return new JSONObject(jsonMap).toString();
Java Socket:
try (Socket clientSocket = new Socket(url, port);
     OutputStreamWriter out = new OutputStreamWriter(
             clientSocket.getOutputStream(), "UTF-8")) {
    out.write(content.toString());
    out.flush();
}
Example data in Elastic:
"message": "{\"detail\":null,\"cause\":null,\"attribute_start\":\"Mon Jan 11 16:15:28 CET 2016\",\"durability\":\"MOMENTARY\",\"attribute_login\":\"\",\"origin\":\"fortuna.ws.navipro\",\"severity\":\"ERROR\",\"attribute_type\":null,\"attribute_methodName\":\"Logout\",\"eventid\":\"ws.navipro.call\",\"attribute_call\":\"[57,7256538816272415441,,OK]{0 connections} CZ() Calling method 'Logout' at 1452525329029(Mon Jan 11 16:15:28 CET 2016-Mon Jan 11 16:15:29 CET 2016; roundtrip=36ms):\\n\\tRequest\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ClientLogoutRequest\\n\\t\\tkeep: true\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.RequestBody\\n\\t\\tcountry: CZ\\n\\t\\tsessionCC: NULL\\n\\t\\tsessionID:\\n\\t\\tsessionIP:\\n\\t\\tdebug: NULL\\n\\t\\tns: NULL\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.RequestCorpus\\n\\tResponse\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ClientLogoutResponse\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ResponseBody\\n\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.ResponseCorpus\\n\\t\\tmessage: \\n\\t\\t[1] \\n\\t\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.Message\\n\\t\\t\\tparam: \\n\\t\\t\\t[1] \\n\\t\\t\\t\\tCLASS com.etnetera.projects.jnp.fortuna.navipro.ws.Message$Param\\n\\t\\t\\t\\tindex: 0\\n\\t\\t\\t\\ttype: NULL\\n\\t\\t\\t\\tvalue: 3\\n\\t\\t\\tid: 104\\n\\t\\t\\tseverity: NOTIFICATION\\n\\t\\t\\tlink: NULL\\n\\t\\tentryLink: NULL\\n\\t\\thint: NULL\\n\\t\\thintType: NULL\\n\\t\\tstatus: OK\\nEND\",\"timestamp\":1452525329030,\"message\":\"NaviPro method Logoutcalled.\",\"tag_1\":\"NaviPro\",\"attribute_end\":\"Mon Jan 11 16:15:29 CET 2016\",\"attribute_sessionId\":\"\",\"age\":0,\"tag_0\":\"Logout\",\"instanceid\":\"Logout\",\"attribute_address\":\""}"
Logstash config:
input {
  syslog {
    port => 1514
  }
  tcp {
    port => 3333
  }
}
filter {
  if [type] == "docker" {
    json {
      source => "message"
    }
    mutate {
      rename => [ "log", "message" ]
    }
    date {
      match => [ "time", "ISO8601" ]
    }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
I want to have data in Elastic as JSON so I can filter fields in Kibana.
EDIT:
If I try to change the configuration to this:
input {
  tcp {
    port => 3333
    codec => json
  }
}
Logstash refuses to launch, with this line in the log:
logstash_1 | {:timestamp=>"2016-01-13T10:13:58.583000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
The problem was an incorrect Logstash configuration. Logstash was running but not sending anything through the filter. This is the correct configuration:
input {
  tcp {
    port => 3333
    type => "java"
  }
}
filter {
  if [type] == "java" {
    json {
      source => "message"
    }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
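A quick way to check it (a sketch, assuming Logstash listens on localhost and netcat is installed) is to push one JSON line at the tcp input and confirm the fields arrive in Elasticsearch unescaped:
echo '{"severity":"ERROR","origin":"test","message":"hello"}' | nc localhost 3333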
I created a simple test, and all you should need to add to your Logstash configuration is codec => json. The default value is "line", which will escape the characters in the string.
input {
  tcp {
    port => 3333
    codec => json
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
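For what it's worth, with the json codec in place the rubydebug output on stdout should show the individual fields at the top level rather than one long escaped string, roughly along these lines (illustrative only; field names taken from the question's data):
{
      "severity" => "ERROR",
        "origin" => "fortuna.ws.navipro",
       "eventid" => "ws.navipro.call",
    ...
}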