I want to send a mail notification from a Java application via Log4j. My first try with a configured log4j.properties file worked like a charm. But since I want a dynamic subject, generated at runtime, I tried the following code, with no success:
final static Logger logger = Logger.getRootLogger();
...
public static void mail(String msg, String subj) {
    SMTPAppender mailAppend = new SMTPAppender();
    mailAppend.setBufferSize(3);
    mailAppend.setSMTPHost("smtphostname");
    mailAppend.setTo("ex#mple.com");
    mailAppend.setSubject(subj);
    logger.addAppender(mailAppend);
    logger.error(msg);
}
Output:
log4j:ERROR Message object not configured.
So did I miss a necessary setter?
An SMTPAppender can be configured either through an XML or properties file, or manually through its setters. When you use the setters, you need to activate the options by calling activateOptions, or else you get the "ERROR Message object not configured" message. This ensures that the options only become effective once all the related options have been set (e.g. you would not want the host setting to become effective before the port is set).
FROM : https://community.oracle.com/thread/1758275?start=0&tstart=0
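Applied to the method from the question, a minimal sketch could look like this (the layout pattern is an assumption; SMTPAppender also needs a layout to format the mail body):

import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.net.SMTPAppender;

public static void mail(String msg, String subj) {
    SMTPAppender mailAppend = new SMTPAppender();
    mailAppend.setLayout(new PatternLayout("%d %-5p %c - %m%n")); // assumed pattern
    mailAppend.setBufferSize(3);
    mailAppend.setSMTPHost("smtphostname");
    mailAppend.setTo("ex#mple.com");
    mailAppend.setSubject(subj);
    mailAppend.activateOptions(); // makes the settings (and the mail Message) effective
    Logger.getRootLogger().addAppender(mailAppend);
    Logger.getRootLogger().error(msg);
}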
Currently, I am testing QuestDB in an Apache Camel / Spring Boot scenario for our project. I set up a custom Camel component and a configuration bean holding the connection properties. As far as I can see, my custom Camel component connects properly to the server where a test instance of QuestDB is running. But when sending data over the Camel route, I get error messages:
io.questdb.cairo.CairoException: [2] could not open read-write [file=<dir>/_tab_index.d]
The exception is thrown when creating the CairoEngine like this (taken from the QuestDB API documentation):
try (CairoEngine engine = new CairoEngine(this.configuration)) {
    // ... other code ...
} catch (Exception e) {
    e.printStackTrace();
    // ...
}
where this.configuration is of type CairoConfiguration and contains the "data_dir" and is instantiated like this:
configuration = new DefaultCairoConfiguration(<quest db directory (String)>);
Currently, I am passing the fully qualified path to my database directory: /srv/questdb/db. I confirmed that the file _tab_index.d is available at this location.
What am I doing wrong? Maybe I should mention that I set the access rights on the questdb directory to 777, and the owner was set with chown root:questdb ...
Indeed, the embedded API is not suitable for what I want to do. I need to use one of the other APIs. I tested my scenario with the InfluxDB line protocol (see the Line protocol documentation) and the data gets written to the server without problems.
The doInsert method in my custom component looks like this (just for testing); it is called when building a route with the custom QuestDB "to" endpoint:
public class QuestDbProducer extends DefaultProducer {

    // ... other code ...

    private void doInsert(Exchange exchange, String tableName) throws InvalidPayloadException {
        try (Sender sender = Sender.builder().address("lxyrpc01.gsi.de:9009").build()) {
            sender.table("inventors")
                    .symbol("born", "Austrian Empire")
                    .longColumn("id", 0)
                    .stringColumn("name", "Nicola Tesla")
                    .atNow();
            sender.table("inventors")
                    .symbol("born", "USA")
                    .longColumn("id", 1)
                    .stringColumn("name", "Thomas Alva Edison")
                    .atNow();
        }
    }
}
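For context, doInsert is invoked from a route whose "to" endpoint uses the custom component; a hypothetical route sketch (the "questdb:" URI scheme is an assumption here and depends on how the custom component is registered):

import org.apache.camel.builder.RouteBuilder;

public class QuestDbRoute extends RouteBuilder {
    @Override
    public void configure() {
        // "questdb:inventors" is an illustrative endpoint URI for the custom component
        from("direct:inventors").to("questdb:inventors");
    }
}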
WARNING: JGRP000014: Discovery.timeout has been deprecated: GMS.join_timeout should be used instead
Why am I getting this if it's not defined directly by me? At least I don't think it is; it looks like we're already using GMS.join_timeout.
Here's how this one is configured:
log().info(
    "Starting JChannel for Distributable Sessions config:{} with channel name of {}",
    configString,
    channelName
);
jChannel = new JChannel(new PlainConfigurator(configString));
jChannel.connect(channelName);
replicatedSessionIds = new ReplicatedHashMap<>(jChannel);
sessionIds = replicatedSessionIds;
if (!sessionDistributedTest) {
    replicatedSessionIds.start(TIME_OUT);
}
and here is the output of that log message:
Starting JChannel for Distributable Sessions config:TCP(bind_addr=172.20.0.4;bind_port=7800;max_bundle_size=200000):TCPPING(timeout=3000;initial_hosts=dex.master[7800],dex.slave[7800];port_range=1):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK2(use_mcast_xmit=false;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=50000;max_bytes=400000):pbcast.GMS(print_local_addr=true;join_timeout=2000;view_bundling=true):pbcast.STATE_SOCK with channel name of Dex_SpringSecurity_Cluster_Dev
jgroups 3.6.13
You actually do define a timeout in the configString passed to the channel constructor: TCPPING.timeout (TCPPING is a subclass of Discovery, hence the warning).
I have two suggestions for you:
Switch to XML-based configuration; plain-text configuration will no longer be supported in 4.0 (a sketch follows below).
Use the tcp.xml shipped with 3.6.13 and modify it to your liking. Your config looks a bit dated.
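A minimal sketch of the switch, assuming a tcp.xml on the classpath (e.g. the one shipped in the jgroups-3.6.13 jar, adapted to your hosts):

// JChannel accepts an XML resource name instead of a PlainConfigurator
JChannel channel = new JChannel("tcp.xml");
channel.connect("Dex_SpringSecurity_Cluster_Dev");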
How can I configure the level of the JSch logger?
Is it configurable via XML, like Log4j?
JSch doesn't seem to use any known logging framework (I use JSch v0.1.49, but the latest version is v0.1.51), or any XML configuration file. So here is what I did:
private class JSCHLogger implements com.jcraft.jsch.Logger {
    private final Map<Integer, MyLevel> levels = new HashMap<Integer, MyLevel>();
    private final MyLogger LOGGER;

    public JSCHLogger() {
        // Mapping between JSch levels and our own levels
        levels.put(DEBUG, MyLevel.FINE);
        levels.put(INFO, MyLevel.INFO);
        levels.put(WARN, MyLevel.WARNING);
        levels.put(ERROR, MyLevel.SEVERE);
        levels.put(FATAL, MyLevel.SEVERE);

        LOGGER = MyLogger.getLogger(...); // Anything you want here, depending on your logging framework
    }

    @Override
    public boolean isEnabled(int pLevel) {
        return true; // here, all levels enabled
    }

    @Override
    public void log(int pLevel, String pMessage) {
        MyLevel level = levels.get(pLevel);
        if (level == null) {
            level = MyLevel.SEVERE;
        }
        LOGGER.log(level, pMessage); // logging-framework dependent...
    }
}
Then before using JSch:
JSch.setLogger(new JSCHLogger());
Note that instead of MyLevel and MyLogger, you can use any logging framework classes you want (Log4j, Logback, ...)
You can get a complete example here: http://www.jcraft.com/jsch/examples/Logger.java.html
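For instance, a minimal self-contained variant backed by java.util.logging (the class and logger names here are illustrative, not part of JSch):

import java.util.logging.Level;
import java.util.logging.Logger;

public class JulJschLogger implements com.jcraft.jsch.Logger {
    private static final Logger LOGGER = Logger.getLogger("jsch");

    @Override
    public boolean isEnabled(int level) {
        return true; // defer filtering to the java.util.logging configuration
    }

    @Override
    public void log(int level, String message) {
        switch (level) {
            case DEBUG: LOGGER.log(Level.FINE, message); break;
            case INFO:  LOGGER.log(Level.INFO, message); break;
            case WARN:  LOGGER.log(Level.WARNING, message); break;
            default:    LOGGER.log(Level.SEVERE, message); break; // ERROR, FATAL
        }
    }
}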
Just wanted to add a small comment to the accepted answer, but my reputation doesn't allow it. Sorry if this way via another answer is evil, but I really want to mention the following.
The log activation works this way, and it can get you a lot of info about the connection process (key exchange and such). But there is practically no debug output for the core functionality after authentication, at least for SFTP. A look at the source confirms there is no logging in ChannelSftp (and most other classes).
So if you want to activate this in order to inspect communication problems after authentication, that's wasted effort - or you need to add suitable statements to the source yourself (I have not yet).
We encounter complete hangs (job threads get stuck for days, indefinitely) in put, get and even ls - and of course the server provider claims not to be the problem (and indeed the Unix sftp command-line client works, but not from the appserver host, which we have no access to, so we would have to check the network communication). If someone has an idea, thanks.
I'm working with the DFS Java API and was wondering whether anyone knows a simple way to configure a client-side timeout for service calls - one that can be set on the service context, for example?
I have experienced some rare occasions where a Documentum repository was not responding, that's why I am considering a general timeout for all DFS calls.
For testing a hanging service call, I created a dummy TBO implementation that simply blocks the thread for 10 minutes when updating the document:
@Override
public void saveEx(boolean keepLock, String versionLabels) throws DfException {
    if (!isNew()) {
        try {
            Thread.sleep(1000 * 60 * 10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    super.saveEx(keepLock, versionLabels);
}
I'm not sure if this behaves exactly like a hanging service call, but at least in my tests it worked as expected - my invocations of the update method of the Object Service took about 10 minutes.
Is there any configuration I have not yet found, or maybe a runtime-property to pass to the service context to configure the timeout?
I would prefer using existing features of DFS for this instead of implementing my own mechanism.
Have you tried editing the timeout value in dfs-runtime.properties? I don't think the timeout can be context-specific, but you should be able to change it for the client as a whole.
Reposted from https://community.emc.com/message/3249#3249
"Please see the Server runtime startup settings section of the Deployment guide.
The following list describes the precedence that dfs-runtime.properties files take depending on their location:
local-dfs-runtime.properties file in the local classpath
runtime properties file specified with -Ddfs.runtime.properties.file
dfs-runtime.properties packaged with emc-dfs-rt.jar
For example, settings in the local-dfs-runtime.properties file on the local classpath take precedence over identical settings in the dfs-runtime.properties file that is located in emc-dfs-rt.jar or the one specified with the -D parameter. The DFS application must be restarted after any changes to the configuration. As a best practice, use the provided configuration file that is deployed in the emc-dfs-rt.jar file for your base settings and use an external file to override settings that you specifically wish to change."
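Following that precedence, overriding the packaged defaults from the command line might look like this (the path and jar name are illustrative; the actual timeout property key is listed in the Deployment guide):

java -Ddfs.runtime.properties.file=/opt/dfs/dfs-runtime.properties -jar my-dfs-client.jar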
I have a JBoss batch application that sometimes sends hundreds of emails in a minute to the same email address with Log4j errors. This causes problems with Gmail, because it says we are sending emails too quickly for that Gmail account.
So I was wondering if there is a way to create a "digest" or "aggregate" email that puts all the error logs in one email and sends it every 5 minutes. That way we may get a large email every 5 minutes, but at least we actually get the email instead of it being delayed for hours by Gmail servers rejecting it.
I read a post that suggested using an evaluator to do that, but I couldn't see how that is configured in the Log4j XML configuration file. It also seemed like it might not be able to "digest" all the logs into one email anyway.
Has anyone done this before? Or know if it's possible?
From (the archived) SMTPAppender Usage page:
set this property
log4j.appender.myMail.evaluatorClass = com.mydomain.example.MyEvaluator
Now you have to create the evaluator class, implement the org.apache.log4j.spi.TriggeringEventEvaluator interface, and place the class on the classpath where log4j can access it.
// Example TriggeringEventEvaluator impl
package com.mydomain.example;

import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.spi.TriggeringEventEvaluator;

public class MyEvaluator implements TriggeringEventEvaluator {
    public boolean isTriggeringEvent(LoggingEvent event) {
        return true;
    }
}
You have to write the evaluator logic within this method.
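For the 5-minute digest asked about, the logic might track the time of the last triggered mail; a minimal sketch (class name and interval are illustrative; note the mail only goes out when a new event arrives after the interval, and the appender's bufferSize caps how many buffered events end up in one mail):

package com.mydomain.example;

import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.spi.TriggeringEventEvaluator;

public class DigestEvaluator implements TriggeringEventEvaluator {
    private static final long INTERVAL_MS = 5 * 60 * 1000L;
    private long lastTriggered = 0L;

    public synchronized boolean isTriggeringEvent(LoggingEvent event) {
        long now = System.currentTimeMillis();
        if (now - lastTriggered >= INTERVAL_MS) {
            lastTriggered = now;
            return true; // flush the buffered events as one digest mail
        }
        return false; // keep buffering
    }
}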
I created a freely usable solution for Log4j 2 with an ExtendedSmtpAppender.
(If you still use log4j 1.x, simply replace your log4j-1.x.jar with log4j-1.2-api-2.x.jar - and log4j-core-2.x.jar + log4j-api-2.x.jar, of course.)
You get it from Maven Central as de.it-tw:log4j2-extras (this requires Java 7+ and log4j 2.8+).
If you are restricted to Java 6 (and thus log4j 2.3), use de.it-tw:log4j2-Java6-extras instead.
Additionally, see the GitLab project: https://gitlab.com/thiesw/log4j2-extras (or https://gitlab.com/thiesw/log4j2-Java6-extras)
[OLD text:
If you use log4j2, see the answer to another Stack Overflow question: https://stackoverflow.com/a/34072704/5074004
Or go directly to my external but publicly available solution presented in https://issues.apache.org/jira/browse/LOG4J2-1192
]