I'd like to add some additional information/attributes, such as an "application" name, to my log4j2 events (for some reason this attribute is no longer available in log4j2...)
The logs are sent over the LAN to a Logstash instance.
I've worked out a solution consisting of:
A custom appender layout (https://github.com/majikthys/log4j2-logstash-jsonevent-layout): the layout extracts all attributes from the Log4jLogEvent, plus any additional attributes provided through the log4j2 configuration, and produces a JSON string.
Logstash configuration:
input {
  tcp {
    codec => json_lines { charset => "UTF-8" }
    port => 4560
    type => "log4j2-json"
    mode => "server"
  }
}
...
The solution above works, but it requires the layout to be built and added/maintained as a jar in every application.
So the question is: are there any better solutions that I've missed?
Ideally the solution wouldn't require adding any new jars/classes or using third-party software. Something like the RewriteAppender, but without using MapMessage.
You can send logs to Logstash over GELF with https://github.com/mp911de/logstash-gelf for your use case. The application name is more related to the client than to a central Logstash server. The config looks like:
<Configuration>
  <Appenders>
    <Gelf name="gelf" host="udp:localhost" port="12201" originHost="%host{fqdn}">
      <Field name="timestamp" pattern="%d{dd MMM yyyy HH:mm:ss,SSS}" />
      <Field name="level" pattern="%level" />
      <Field name="simpleClassName" pattern="%C{1}" />
      <Field name="applicationName" literal="MyApplicationName" />
      <!-- This is a field using MDC -->
      <Field name="mdcField2" mdc="mdcField2" />
      <DynamicMdcFields regex="mdc.*" />
    </Gelf>
  </Appenders>
  ...
</Configuration>
I ended up using a log4j 2 TcpSocketServer to receive remote logs and then forward them locally to Logstash over a TCP socket (you could use a file too...).
http://logging.apache.org/log4j/2.x/log4j-core/apidocs/org/apache/logging/log4j/core/net/server/TcpSocketServer.html
I don't think this solution is ideal, but it doesn't require the custom appender.
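For reference, the client side of this setup can stay pure log4j2 configuration. A minimal sketch of a log4j2.xml that ships events to such a TcpSocketServer might look like this (the host name and port below are placeholders, not values from the question):

```xml
<Configuration>
  <Appenders>
    <!-- Sends serialized LogEvents to the TcpSocketServer on the log host -->
    <Socket name="remote" host="logserver.example.com" port="4712">
      <SerializedLayout />
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="debug">
      <AppenderRef ref="remote" />
    </Root>
  </Loggers>
</Configuration>
```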
We are trying to make our project which currently runs on WebSphere also work on Liberty.
In trying to get an MDB to work I get the following error: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2085' ('MQRC_UNKNOWN_OBJECT_NAME')
The relevant portion of the server.xml:
<jmsQueue id="jms/incomingRequestQueue" jndiName="jms/incomingRequestQueue">
<properties.mqJms baseQueueName="QUEUEIN" />
</jmsQueue>
<jmsActivationSpec id="application-ear/application-war/InboundMDB"
authDataRef="mqJms.auth">
<properties.mqJms destinationRef="jms/incomingRequestQueue" destinationType="javax.jms.Queue"
transportType="CLIENT"
hostName="${mqconnection.hostName}" port="${mqconnection.port}"
channel="${mqconnection.channel}"
messageCompression="NONE"
rescanInterval="5000"
sslCipherSuite="${mqconnection.sslCipherSuite}"
brokerControlQueue="${mqconnection.brokerControlQueue}" brokerSubQueue="${mqconnection.brokerSubQueue}"
brokerCCSubQueue="${mqconnection.brokerCCSubQueue}" brokerCCDurSubQueue="${mqconnection.brokerCCDurSubQueue}"/>
</jmsActivationSpec>
The values in the Liberty configuration were taken from WebSphere.
My question is whether the only possible reason for this error is an incorrect queue name, or whether something could be missing from the configuration.
Update: the solution turned out to be to change destinationRef to destination and add useJNDI="true"
If you look at the logs on MQ and it appears to be trying to open an MQ object called jms/incomingRequestQueue, try replacing destinationRef with destinationLookup. Some methods of specifying the destination for an activation spec just pass the value straight to MQ instead of looking up an admin object in the JNDI context and fetching the right property.
See the notes in this table about the relationship between destination and destinationLookup. destinationRef is a property that Liberty adds, and I'm not sure how it relates to the properties the resource adapter actually exposes, but it may make this switch unnecessary. It all depends on what string you're trying to look up as a queue on the queue manager.
Additionally, for those who might have this issue and are using the destination property (likely in conjunction with JMS 1.1/Java EE 6), where destinationLookup doesn't exist, you can specify useJNDI="true" as a property on the activation spec to resolve this; see the table linked above.
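For illustration, a version of the activation spec from the question using destinationLookup instead of destinationRef might look like this (ids and property values are carried over from the question, not verified against your environment):

```xml
<jmsActivationSpec id="application-ear/application-war/InboundMDB"
                   authDataRef="mqJms.auth">
    <properties.mqJms destinationLookup="jms/incomingRequestQueue"
                      destinationType="javax.jms.Queue"
                      transportType="CLIENT"
                      hostName="${mqconnection.hostName}" port="${mqconnection.port}"
                      channel="${mqconnection.channel}" />
</jmsActivationSpec>
```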
I was doing a migration to Open Liberty some time ago and also ran into some trouble. I managed to make it work, but I cannot guarantee this will work for you, as your case might be a bit different.
First, check carefully whether baseQueueName="QUEUEIN" is correct (it may be case-sensitive and not match, or a prefix may be missing, for example).
Setting a correct queueManager may also help.
Here is my setup, which works and is almost the same as yours.
<resourceAdapter id="mqJMS" location="..../wmq.jmsra-9.1.4.0.rar"/>
<authData id="mqAlias" password="${env.MQ_PWD}" user="${env.MQ_USER}"/>
<jmsActivationSpec authDataRef="mqAlias" id="app-name/MyMessageBean">
<properties.mqJms destinationRef="jms/MyQ"
destinationType="javax.jms.Queue"
sslCipherSuite="${env.MQ_SSL_CIPHER_SUITE}"
channel="${env.MQ_CHANNEL}"
queueManager="${env.MQ_QUEUE_MANAGER}"
hostName="${env.MQ_HOST}" port="${env.MQ_PORT}"
transportType="CLIENT" />
</jmsActivationSpec>
<jmsQueue id="jms/MyQ" jndiName="jms/MyQ">
<properties.mqJms baseQueueName="${env.MY_QUEUE}"
baseQueueManagerName="${env.MQ_QUEUE_MANAGER}" />
</jmsQueue>
</server>
In general, reason code 2085 means that the referenced queue could not be found on the queue manager.
There is this IBM article that can be useful, especially the Resolving The Problem section, which gives a short description of what IBM recommends doing in this case.
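If you have access to the queue manager, you can confirm the queue actually exists with runmqsc. A quick check might look like this (the queue manager name is a placeholder; the queue name is the one from the question):

```
runmqsc QMGR1
DISPLAY QUEUE('QUEUEIN')
END
```

If the DISPLAY command reports that the object is not found, reason 2085 on the client side is expected.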
I'm trying to send the logs from a basic Java Maven project to fluent-bit configured on a remote machine. Fluent-bit then writes them to a file. This is my basic Java setup.
Java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class App {
    private final static Logger logger = LoggerFactory.getLogger(App.class);

    public static void main(String[] args) {
        for (int i = 0; ; i++) {
            logger.debug("Warn msg");
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                // do nothing for now
            }
        }
    }
}
And the logback.xml
<appender name="fluentd" class="ch.qos.logback.more.appenders.DataFluentAppender">
  <remoteHost>xx.xxx.xxx.xxx</remoteHost>
  <port>7777</port>
  <encoder>
    <pattern>%message%n</pattern>
  </encoder>
</appender>

<root level="DEBUG">
  <appender-ref ref="fluentd" />
</root>
Fluent-bit configuration :
td-agent-bit.conf
[INPUT]
    Name         tcp
    Listen       xx.xxx.xxx.xxx
    Port         7777
    Parsers_File /etc/td-agent-bit/parsers.conf
    Parser       custom_parser

[OUTPUT]
    Name   file
    Match  *
    Path   /home/td-agent-bit/output.txt
parsers.conf
[PARSER]
    Name   custom_parser
    Format regex
    Regex  .*
I keep getting the following exception when the app runs
[2018/09/27 08:29:13] [trace] [in_tcp] read()=74 pre_len=370 now_len=444
[2018/09/27 08:29:13] [debug] [in_serial] invalid JSON message, skipping
But when I test the configuration via the command line, it works:
echo '{"key 1": 10, "key 2": "YYY"}' | nc xx.xxx.xxx.xxx 7777
I don't get any exception, and the output file has all permissions. Also, the remote machine is a Photon OS based system.
Any ideas would be much appreciated.
So after some research and a ticket I opened here, I found out that I was using the wrong plugin.
All the Java configuration was correct. I just needed to make the following change to td-agent-bit.conf:
[INPUT]
    Name   forward
    Listen xx.xxx.xxx.xxx
    Port   7777
We need to use the forward plugin instead of the tcp plugin. This plugin listens for incoming messages on port 7777 and redirects them to the file.
Note that TCP Input plugin only accept JSON maps as records and not msgpack as forward protocol does.
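To make that difference concrete, here is a small stdlib-only sketch of what the tcp input expects on the wire: one JSON map per line over a plain TCP connection. The local server in the sketch just stands in for fluent-bit so the example is self-contained; host, port, and keys are taken from the question's nc test.

```java
import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

public class JsonLineSender {
    // Sends one newline-delimited JSON record, which is what the tcp input
    // (with a JSON parser) accepts; the forward input instead speaks
    // fluentd's msgpack-based protocol, which is what DataFluentAppender emits.
    static void sendJsonLine(String host, int port, String json) throws IOException {
        try (Socket s = new Socket(host, port);
             Writer w = new OutputStreamWriter(s.getOutputStream(), StandardCharsets.UTF_8)) {
            w.write(json);
            w.write('\n'); // records are split on newlines
        }
    }

    // Round-trips a record through a local stand-in server and returns what arrived.
    static String roundTrip(String json) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            final String[] received = new String[1];
            Thread reader = new Thread(() -> {
                try (Socket c = server.accept();
                     BufferedReader r = new BufferedReader(
                             new InputStreamReader(c.getInputStream(), StandardCharsets.UTF_8))) {
                    received[0] = r.readLine();
                } catch (IOException ignored) {
                }
            });
            reader.start();
            sendJsonLine("127.0.0.1", server.getLocalPort(), json);
            reader.join(5000);
            return received[0];
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("{\"key 1\": 10, \"key 2\": \"YYY\"}"));
    }
}
```

This is exactly why the echo/nc test succeeded while the appender's traffic was rejected as "invalid JSON".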
I'm using HttpEventCollectorLogbackAppender to write my Java application logs to the Splunk server. I've been trying this for a very long time and still haven't been able to get my logs into Splunk.
Can someone please explain what the source tag refers to in the HttpEventCollectorLogbackAppender?
Below is the HttpEventLogbackAppender in my logback.xml file:
<appender name="splunk-httpeventcollector-appender"
          class="com.splunk.logging.appenders.logback.HttpEventCollectorLogbackAppender">
  <url>${SPLUNK_HOST_URL}</url>
  <host>${CFG_DC}_${APP_ENV}_${CONTAINER_ID}</host>
  <token>${SPLUNK_TOKEN}</token>
  <source></source> <!-- what does this refer to? -->
  <index>${SPLUNK_INDEX}</index>
  <disableCertificateValidation>true</disableCertificateValidation>
  <layout class="ch.qos.logback.classic.PatternLayout">
    <Pattern>%d{ISO8601} [%thread] loglevel=%-5level %logger{36} - remotehost=%mdc{req.remoteHost} forwardedfor=%mdc{req.xForwardedFor} requestmethod=%mdc{req.method} requesturi=%mdc{req.requestURI}</Pattern>
  </layout>
  <batch_size_count>500</batch_size_count>
  <send_mode>parallel</send_mode>
</appender>
From the Splunk documentation, I found the following; hope it helps you.
Link - http://docs.splunk.com/Documentation/Splunk/7.1.2/Data/Aboutdefaultfields
source - The source of an event is the name of the file, stream, or other input from which the event originates.
For data monitored from files and directories, the value of source is the full path, such as /archive/server1/var/log/messages.0 or /var/log/.
The value of source for network-based data sources is the protocol and port, such as UDP:514.
This topic focuses on three key default fields:
host
source
sourcetype
Defining host, source, and sourcetype
The host, source, and sourcetype fields are defined as follows:
host - An event host value is typically the hostname, IP address, or fully qualified domain name of the network host from which the event originated. The host value lets you locate data originating from a specific device. For more information on hosts, see About hosts.
sourcetype - The source type of an event is the format of the data input from which it originates, such as access_combined or cisco_syslog. The source type determines how your data is to be formatted. For more information on source types, see Why source types matter.
Source vs sourcetype
Source and source type are both default fields, but they are entirely different otherwise, and can be easily confused.
The source is the name of the file, stream, or other input from which a particular event originates.
The sourcetype determines how Splunk software processes the incoming data stream into individual events according to the nature of the data.
Events with the same source type can come from different sources, for example, if you monitor source=/var/log/messages and receive direct syslog input from udp:514. If you search sourcetype=linux_syslog, events from both of those sources are returned.
The Logback configuration looks like:
<!-- Splunk HTTP Appender -->
<appender name="splunkHttpAppender" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
  <url>${splunk.http.url}</url>
  <token>${splunk.http.token}</token>
  <source>${splunk.source}</source>
  <host>${splunk.httpevent.listener.host}</host>
  <messageFormat>${splunk.event.message.format}</messageFormat>
  <disableCertificateValidation>${splunk.cert.disable-validation}</disableCertificateValidation>
  <layout class="ch.qos.logback.classic.PatternLayout">
    <pattern>%date{ISO8601} [%thread] %level: %msg%n</pattern>
  </layout>
</appender>

<logger name="com.example.app" additivity="false" level="INFO">
  <appender-ref ref="splunkHttpAppender"/>
</logger>

<root level="INFO">
  <appender-ref ref="splunkHttpAppender"/>
</root>
Alternatively, you can send your application logs to S3 (AWS), and from there configure the path in Splunk's inputs.conf and specify the indexer in outputs.conf.
I am an absolute novice in this entire stack so I apologize in advance if this is a very dumb question.
I'm working on setting up a local (mock) CAS service so we're able to test our apps against an auth system which at least remotely resembles something we have on our staging/production environments.
I'm using https://github.com/ubc/vagrant-cas as a starting point. I've managed to set this up by modifying cas.properties and deployerConfigContext.xml so that I can actually pass custom attributes when a user signs in, i.e.:
<bean id="attributeRepository" class="org.jasig.services.persondir.support.StubPersonAttributeDao">
    <property name="backingMap">
        <map>
            <entry key="uid" value="uid" />
            <entry key="eduPersonAffiliation" value="eduPersonAffiliation" />
            <entry key="groupMembership" value="groupMembership" />
            <entry key="puid" value="12345678910" />
        </map>
    </property>
</bean>
Combined with the default org.jasig.cas.authentication.handler.support.SimpleTestUsernamePasswordAuthenticationHandler, this means that whenever I sign in with a username and password that are identical (i.e. username 'admin', password 'admin'), that user is signed in and the attribute puid is returned with the value '12345678910' (the same puid is returned for every username/password combination).
(I had to enable the attributes to be sent back in the 'Services Management' app.)
What I actually need is to be able to have multiple users, all with different puid values. i.e.
username:password:1234
username2:password2:5678
etc.
I've noticed there is an org.jasig.cas.adaptors.generic.FileAuthenticationHandler, but that only allows username::password and no custom attributes (so near, yet so far).
I'm way out of my depth; I'm not a Java programmer and have hit the limit of my google-fu. Any help pointing me in the right direction would be greatly appreciated.
File-based authn does not support custom attributes. You may be interested in this: https://github.com/Unicon/cas-addons/wiki/Configuring-JSON-ComplexStubPersonAttributeDao
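For a rough idea of the shape, the ComplexStubPersonAttributeDao from the Person Directory library (which that wiki page builds on) keys its backing map by username, so each user can carry their own attributes. A hedged Spring sketch, with usernames and puid values taken from the question (check the wiki for the exact JSON-backed wiring):

```xml
<bean id="attributeRepository" class="org.jasig.services.persondir.support.ComplexStubPersonAttributeDao">
    <property name="backingMap">
        <map>
            <entry key="username">
                <map>
                    <entry key="puid" value="1234" />
                </map>
            </entry>
            <entry key="username2">
                <map>
                    <entry key="puid" value="5678" />
                </map>
            </entry>
        </map>
    </property>
</bean>
```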
I am a complete novice in Flex, so please bear with me. I want to access data from a database residing at a particular IP address, and I am not sure how to do it. Please let me know how it can be done through the Flex framework.
Flex being a client-side technology, allowing direct access to the database would be a real problem. What you need is a server application to mediate access to the database. This could be written in many different ways, but the majority of developers would use PHP/.NET/Java.
There are many ways to access your data. For simple cases, you could use a servlet that fetches data from the DB and provides it to the Flex client.
Instead of servlets, you could also use web services. On the Flex side, you have three ways to access data: HTTPService, WebService, and RemoteObject.
It's up to you to select one of them (as I don't know what your requirements are or how familiar you are with these).
There are many different options. Check out a screencast I did on Flex and Java basics that walks through the various options.
Your Flex frontend
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
layout="absolute" backgroundColor="#FFFFFF" viewSourceURL="srcview/index.html">
<mx:RemoteObject id="myservice" fault="faultHandler(event)"
        showBusyCursor="true" destination="yourDest">
    <mx:method name="JavaMethodName" result="resultHandler(event)" />
</mx:RemoteObject>
<mx:Script>
<![CDATA[
import mx.rpc.events.ResultEvent;
import mx.rpc.events.FaultEvent;
private function faultHandler(evt:FaultEvent):void
{
trace(evt.fault);
}
private function resultHandler(evt:ResultEvent):void
{
trace(evt.result);
}
]]>
</mx:Script>
<mx:Button x="250" y="157" label="Click" width="79" click="myservice.getOperation('JavaMethodName').send();"/>
</mx:Application>
remoting-config.xml
<?xml version="1.0" encoding="UTF-8"?>
<service id="remoting-service"
class="flex.messaging.services.RemotingService">
<adapters>
<adapter-definition id="java-object" class="flex.messaging.services.remoting.adapters.JavaAdapter" default="true"/>
</adapters>
<destination id="yourDest">
<properties>
<source>YourClassName</source>
</properties>
</destination>
<default-channels>
<channel ref="my-amf"/>
</default-channels>
</service>
Your Java Class
import java.util.Date;
public class YourClassName{
public String JavaMethodName() {
Date now = new Date();
return "Yourname " + now;
}
}
Now, in your Java class, you need to write your JDBC connection code and call the database; you can return the result to Flex as an Object and display it in the frontend in whatever format you like.
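As a minimal sketch of that JDBC step (the table, columns, and connection details are placeholders, and a real setup also needs the JDBC driver jar on the classpath):

```java
import java.sql.*;
import java.util.*;

public class YourClassName {
    // Hypothetical query; adjust the table and columns to your schema.
    static final String QUERY = "SELECT id, name FROM users";

    // Maps one row into a Map, which BlazeDS can serialize back to Flex.
    static Map<String, Object> toRow(int id, String name) {
        Map<String, Object> row = new HashMap<>();
        row.put("id", id);
        row.put("name", name);
        return row;
    }

    // Called from Flex via RemoteObject; jdbcUrl/user/pass are placeholders.
    public List<Map<String, Object>> fetchUsers(String jdbcUrl, String user, String pass)
            throws SQLException {
        List<Map<String, Object>> rows = new ArrayList<>();
        try (Connection con = DriverManager.getConnection(jdbcUrl, user, pass);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(QUERY)) {
            while (rs.next()) {
                rows.add(toRow(rs.getInt("id"), rs.getString("name")));
            }
        }
        return rows;
    }
}
```

A List of Maps comes across to Flex as an Array of plain Objects, which is convenient to bind to a DataGrid.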
Look at the documentation for Adobe BlazeDS. It will show you how to do what you want and how to implement, for example, what Vinothababu suggested. Here's the link: http://opensource.adobe.com/wiki/display/blazeds/BlazeDS/