Group Java log entries per servlet invocation - java

In the application I'm currently working on, we have an in-house logging setup where log messages from the business methods are buffered in a ThreadLocal variable. The actual logging is deferred until the end of the servlet invocation, where a Filter picks up all the messages, concatenates them, tags them with the information from the mapped diagnostic context, and then saves them to a SQL database as a single log entry.
We want to get rid of this in-house logger because the code quality is not that good and it's getting a bit hard to maintain. Is the above use case possible to achieve with any publicly available Java logging framework? I looked around in Logback and Log4j documentation a bit but wasn't able to find anything similar.
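For concreteness, here is a minimal sketch of the kind of appender we have in mind, written against Logback's AppenderBase (the class and method names are hypothetical, not our in-house code):

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;

import java.util.ArrayList;
import java.util.List;

// Hypothetical appender: buffers formatted messages in a ThreadLocal instead
// of writing them out immediately.
public class RequestBufferAppender extends AppenderBase<ILoggingEvent> {

    private static final ThreadLocal<List<String>> BUFFER =
            ThreadLocal.withInitial(ArrayList::new);

    @Override
    protected void append(ILoggingEvent event) {
        BUFFER.get().add(event.getFormattedMessage());
    }

    // Called by the servlet Filter at the end of the invocation; returns the
    // concatenated messages and clears the buffer for the next request.
    public static String drain() {
        List<String> messages = BUFFER.get();
        BUFFER.remove();
        return String.join("\n", messages);
    }
}

The Filter would call RequestBufferAppender.drain() in a finally block, tag the result with the MDC values, and write it to the database.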

You can use Logstash.

Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.”

We are doing something similar with log4j: we have an engine that processes requests in the background and records log4j messages into a temporary file; at the end of processing, if something was logged, the content of the temporary file is sent by e-mail.
To start recording to the temp file:
String root = props.getProperty("gina.nas.log-capture.root", "gina.nas");
String thresholdLogLevel = props.getProperty("gina.nas.log-capture.threshold", "WARN");
String fullLogLevel = props.getProperty("gina.nas.log-capture.level", "INFO");
String pattern = props.getProperty("gina.nas.log-capture.pattern",
        "%d * %-5p * %m [%c]%n");
includeFullLog = Boolean.parseBoolean(props.getProperty(
        "gina.nas.log-capture.full-log", "true"));

this.layout = new PatternLayout(pattern);
this.logger = root == null ? Logger.getRootLogger() : Logger.getLogger(root);

// Threshold log
this.thresholdLogFile = File.createTempFile("thcap", ".log");
thresholdLogAppender = initWriterAppender(thresholdLogFile, layout,
        Level.toLevel(thresholdLogLevel));
logger.addAppender(thresholdLogAppender);
To stop recording:
logger.removeAppender(thresholdLogAppender);
thresholdLogAppender.close();
if (thresholdLogFile.isFile()) {
    if (sendMail && thresholdLogFile.length() > 0) {
        LOG.info("Error(s) detected, sending log by email...");
        MailService mail = new MailService(props);
        Map<String, Object> vars = new HashMap<String, Object>();
        vars.put("username", username);
        vars.put("taskid", taskID);
        vars.put("instance", instance);
        vars.put("exception", exception);
        vars.put("thresholdLogFile", getFileContent(thresholdLogFile));
        mail.sendMessage(LOGCAPTURE_TEMPLATE, null, null, vars);
    }
    thresholdLogFile.delete();
}
Method initWriterAppender:
private WriterAppender initWriterAppender(File file, PatternLayout layout,
        Level level) throws IOException {
    OutputStream stream = new FileOutputStream(file);
    boolean ok = false;
    try {
        Writer writer = new OutputStreamWriter(stream, "UTF-8");
        try {
            WriterAppender result = new WriterAppender(layout, writer);
            ok = true;
            result.setThreshold(level);
            return result;
        } finally {
            if (!ok) {
                writer.close();
            }
        }
    } finally {
        if (!ok) {
            stream.close();
        }
    }
}
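For illustration, the start/stop snippets above would typically be wrapped around the unit of work in a try/finally; startCapture and stopCapture here are hypothetical wrappers for the two blocks shown:

// Hypothetical usage: capture is always stopped (and the temp file mailed if
// non-empty, then deleted) even if processing fails.
startCapture();           // adds the WriterAppender to the logger
try {
    processRequest();     // business logic that logs via log4j
} finally {
    stopCapture();        // removes the appender and sends the mail
}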


Is there a way to get java.util.logging.LogManager to report all loggers for which properties are specified or to access the properties?

In LogManager, the method getLoggerNames appears to return only loggers that have actually been instantiated already. However, logging properties can be held "in reserve" until a logger with a given name is instantiated.
Is there a way to get the full list of loggers for which we have settings, or to at least get the current properties set/map, without reading the original file from my own code?
JDK-8033661: readConfiguration does not cleanly reinitialize the logging system was fixed in Java 9, which added the LogManager.updateConfiguration(Function<String,BiFunction<String,String,String>>) method. Per the documentation, the mapper function is invoked for each configuration key and returns a remapping function whose result is applied to the resulting configuration. By supplying an identity-like function you can iterate over the existing configuration keys instead of the actually created loggers, with something like the following:
Function<String, BiFunction<String, String, String>> consume =
        new Function<String, BiFunction<String, String, String>>() {
    @Override
    public BiFunction<String, String, String> apply(final String k) {
        return new BiFunction<String, String, String>() {
            @Override
            public String apply(String o, String n) {
                System.out.println(k + "=" + o);
                return o;
            }
        };
    }
};
LogManager.getLogManager().updateConfiguration(consume);
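Since LogManager.updateConfiguration requires Java 9 or later, lambdas are always available there, and the same consumer can be written more compactly:

// Print every configured key with its current value, leaving it unchanged.
LogManager.getLogManager().updateConfiguration(
        k -> (o, n) -> {
            System.out.println(k + "=" + o);
            return o;   // keep the existing value
        });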
For JDK 8 and older, you have to do one of the following:
Read the logging.properties file yourself.
Override LogManager.readConfiguration(InputStream) to capture the bytes from the stream and create your own Properties object from them. The no-arg readConfiguration will call this method, so the given stream is the properties file as bytes (a sketch of this follows the Properties example below).
Resort to reflection (yuck!).
The easy way to read the properties file is by using the java.util.Properties class.
final Properties props = new Properties();
try {
    String v = System.getProperty("java.util.logging.config.file");
    if (v == null) {
        v = System.getProperty("java.home") + "/lib/logging.properties";
    }
    final File f = new File(v).getCanonicalFile();
    final InputStream in = new FileInputStream(f);
    try {
        props.load(in);
    } finally {
        in.close();
    }
} catch (final RuntimeException permissionsOrMalformed) {
    // Ignore: no permission to read the file, or the path is malformed.
} catch (final Exception ioe) {
    // Ignore: the file is missing or unreadable.
}
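And as a minimal sketch of the override option (the class name is hypothetical; it would be installed with -Djava.util.logging.manager=com.example.CapturingLogManager):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import java.util.logging.LogManager;

// Hypothetical LogManager that keeps a copy of the last configuration it read.
public class CapturingLogManager extends LogManager {

    private volatile Properties lastConfig = new Properties();

    @Override
    public void readConfiguration(InputStream ins) throws IOException, SecurityException {
        // Buffer the stream so it can be read twice: once to capture the
        // properties, once for the superclass to apply the configuration.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        for (int n; (n = ins.read(chunk)) != -1; ) {
            buf.write(chunk, 0, n);
        }
        Properties props = new Properties();
        props.load(new ByteArrayInputStream(buf.toByteArray()));
        lastConfig = props;
        super.readConfiguration(new ByteArrayInputStream(buf.toByteArray()));
    }

    public Properties getLastConfig() {
        return lastConfig;
    }
}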

JDK log rotation

I am using JDK logging and I am trying to set up size-based log rotation. The configuration file logging.properties is set as shown below:
# Specify the handlers to create in the root logger
# (all loggers are children of the root logger)
# The following creates two handlers
handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler
#handlers = java.util.logging.ConsoleHandler
#handlers = java.util.logging.FileHandler
# Set the default logging level for the root logger
.level = INFO
# Set the default logging level for new ConsoleHandler instances
java.util.logging.ConsoleHandler.level = INFO
# Set the default logging level for new FileHandler instances, INFO, FINEST...
java.util.logging.FileHandler.level = INFO
# Set the default formatter for new ConsoleHandler instances
#java.util.logging.ConsoleHandler.formatter = com.hazelcast.impl.LogFormatter
#java.util.logging.FileHandler.formatter = com.hazelcast.impl.LogFormatter
#java.util.logging.FileHandler.pattern = ./hazelcast%u.log
# Set the default logging level for the logger named com.hazelcast
com.hazelcast.level = INFO
# Limiting size of output file in bytes:
java.util.logging.FileHandler.limit=1024
# Number of output files to cycle through, by appending an
# integer to the base file name:
java.util.logging.FileHandler.count=10
I see the log being written to the file, but I don't see any size-based rotation taking effect. I am using the Hazelcast API, which uses this logger, and I am trying to configure the properties through this file. Help would be appreciated.
According to the Hazelcast Logging Configuration documentation, the default logging framework is JDK logging, so all of the configuration is set up in logging.properties.
The size of the log file has to exceed the limit (1024 bytes) before rotation occurs. You have not set a pattern in your logging properties, so the FileHandler will default to files in your home folder with names starting with 'java0.log.0'.
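If you want the files in a predictable location, set an explicit pattern; this line is just an illustrative example, where %h is the user's home directory and %u is a unique number used to resolve conflicts (the generation number is appended automatically because count is greater than one):
java.util.logging.FileHandler.pattern = %h/hazelcast%u.log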
The following code is a self-contained conversion of your logging properties that does not require any system properties to be set at launch.
public class JdkLogRotation {

    private static final String LOGGER_NAME = "com.hazelcast";
    private static final int LIMIT = 1024;
    private static final int COUNT = 10;
    private static final Logger logger = Logger.getLogger(LOGGER_NAME);

    public static void main(String[] args) throws Exception {
        Properties props = create();
        read(LogManager.getLogManager(), props);
        String msg = message();
        for (int i = 0; i <= COUNT; ++i) {
            logger.log(Level.INFO, msg);
        }
        /*
        try (FileOutputStream out = new FileOutputStream(
                new File(System.getProperty("user.home"),
                        "jdklogrotation.properties"))) {
            props.store(out, "JDK log rotation");
            out.flush();
        }*/
    }

    private static String message() {
        char[] c = new char[LIMIT + 1];
        Arrays.fill(c, 'A');
        return String.valueOf(c);
    }

    private static Properties create() {
        Properties props = new Properties();
        props.setProperty("handlers",
                "java.util.logging.ConsoleHandler, java.util.logging.FileHandler");
        props.setProperty(".level", "INFO");
        props.setProperty("java.util.logging.ConsoleHandler.level", "INFO");
        props.setProperty("java.util.logging.FileHandler.level", "INFO");
        props.setProperty("java.util.logging.FileHandler.limit", String.valueOf(LIMIT));
        props.setProperty("java.util.logging.FileHandler.count", String.valueOf(COUNT));
        props.setProperty(LOGGER_NAME + ".level", "INFO");
        return props;
    }

    private static void read(LogManager manager, Properties props) throws IOException {
        final ByteArrayOutputStream out = new ByteArrayOutputStream(512);
        props.store(out, "No comment");
        manager.readConfiguration(new ByteArrayInputStream(out.toByteArray()));
    }
}
The following log file names are created in your user home folder:
java0.log.0
java0.log.1
java0.log.2
java0.log.3
java0.log.4
java0.log.5
java0.log.6
java0.log.7
java0.log.8
java0.log.9
Assuming your logging.properties is loaded and nothing has reset your LogManager, you should expect to see the same output.
Hazelcast logging (it's more of a general logging problem, really) can sometimes be a big PITA.
First, make sure you have logging configured through a system property; don't rely on the properties inside the hazelcast.xml file, because a logger may be triggered before that file is loaded, and then things get funky.
So something like -Dhazelcast.logging.type=jdk.
And remove hazelcast.logging.type from your hazelcast.xml file if you have it there.
Second step: do you see hazelcast.log being created? To be totally sure that your log configuration is picked up, give the file a completely different name, e.g. monty-python.log. If you don't see this file appearing, then you know your log configuration isn't being picked up.
If the file is picked up, then I would try to figure out why your config doesn't work. I have wasted too much time on logging configuration issues that were not actually related to the actual configuration file.

Reading and writing files using Java 7 nio

I have files which consist of JSON elements in an array (several files; each file contains a JSON array of elements).
I have a process that takes each JSON element as a line from a file and processes it.
So I created a small program that reads the JSON array and then writes the elements, one per line, to another file.
The output of this utility will be the input of the other process.
I used Java 7 NIO (and Gson), and tried to use as much Java 7 NIO as possible.
Is there any improvement I can make?
What about the filter? Which approach is better?
public class TransformJsonsUsers {

    public TransformJsonsUsers() {
    }

    public static void main(String[] args) throws IOException {
        final Gson gson = new Gson();
        Path path = Paths.get("C:\\work\\data\\resources\\files");
        final Path outputDirectory = Paths
                .get("C:\\work\\data\\resources\\files\\output");
        DirectoryStream.Filter<Path> filter = new DirectoryStream.Filter<Path>() {
            @Override
            public boolean accept(Path entry) throws IOException {
                // which is better?
                // BasicFileAttributeView attView = Files.getFileAttributeView(entry, BasicFileAttributeView.class);
                // return attView.readAttributes().isRegularFile();
                return !Files.isDirectory(entry);
            }
        };
        DirectoryStream<Path> directoryStream = Files.newDirectoryStream(path, filter);
        directoryStream.forEach(new Consumer<Path>() {
            @Override
            public void accept(Path filePath) {
                String fileOutput = outputDirectory.toString() + File.separator + filePath.getFileName();
                Path fileOutputPath = Paths.get(fileOutput);
                try {
                    BufferedReader br = Files.newBufferedReader(filePath);
                    User[] users = gson.fromJson(br, User[].class);
                    BufferedWriter writer = Files.newBufferedWriter(fileOutputPath, Charset.defaultCharset());
                    for (User user : users) {
                        writer.append(gson.toJson(user));
                        writer.newLine();
                    }
                    writer.flush();
                } catch (IOException e) {
                    throw new RuntimeException(filePath.toString(), e);
                }
            }
        });
    }
}
There is no point in using a Filter if you want to read all the files from the directory. A Filter is primarily designed to apply some criteria and read a subset of files; the two variants you ask about are unlikely to make any real difference in overall performance.
If you are looking to improve performance, you can try a couple of different approaches.
Multi-threading
Depending on how many files exist in the directory and how powerful your CPU is, you can apply multi-threading to process more than one file at a time; see the sketch below.
Queuing
Right now you are reading and writing to another file synchronously. You could instead queue the content of each file using a Queue and create an asynchronous writer.
You can also combine both of these approaches to improve performance further.
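As a sketch of the multi-threading suggestion (run from a method that declares throws Exception; the pool size is illustrative, and transform is a hypothetical helper standing in for the read-JSON/write-lines body of your Consumer, wrapping its own IOException as your code already does):

// One task per file on a bounded pool; path, filter and outputDirectory are
// the variables from the code above.
ExecutorService pool = Executors.newFixedThreadPool(
        Runtime.getRuntime().availableProcessors());
try (DirectoryStream<Path> files = Files.newDirectoryStream(path, filter)) {
    for (Path file : files) {
        pool.submit(() -> transform(file, outputDirectory));
    }
}
pool.shutdown();
pool.awaitTermination(1, TimeUnit.HOURS);  // wait for all files to finish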
Don't put the I/O into the filter. That's not what it's for. You should get the complete list of files and then process it. For example, if the I/O creates another file in the directory, the behaviour is undefined: you might miss a file, or see the new file in the accept() method.

Mark mail as unread after polling - Apache Camel

I am using Apache Camel with a polling consumer; when it polls, my mail is marked as read.
Options: delete=false&peek=false&unseen=true
After polling, when I am processing the attachment, if any error occurs I want to mark the mail as "unread" so that I can poll it again later.
public void process(Exchange exchange) throws Exception {
    Map<String, DataHandler> attachments = exchange.getIn().getAttachments();
    Message messageCopy = exchange.getIn().copy();
    if (messageCopy.getAttachments().size() > 0) {
        for (Map.Entry<String, DataHandler> entry : messageCopy.getAttachments().entrySet()) {
            DataHandler dHandler = entry.getValue();
            // get the file name
            String filename = dHandler.getName();
            // get the content and convert it to byte[]
            byte[] data =
                    exchange.getContext().getTypeConverter().convertTo(byte[].class, dHandler.getInputStream());
            log.info("Downloading attachment, file name : " + filename);
            InputStream fileInputStream = new ByteArrayInputStream(data);
            try {
                // Processing attachments
                // if any error occurs here, I want to mark the mail as unread
            } catch (Exception e) {
                log.info(e.getMessage());
            }
        }
    }
}
I noticed the option peek; by setting it to true, the mail is not marked as read during polling. In that case, is there any option to mark it as read after processing?
To get the result that you want, you should use the options
peek=true&unseen=true
The peek=true option is supposed to ensure that messages remain in the exact state on the mail server as they were before polling, even if there is an exception. However, it currently doesn't work. This is actually a bug in the Camel Mail component. I've submitted a patch to https://issues.apache.org/jira/browse/CAMEL-9106 and this will probably be fixed in a future release.
As a workaround you can set mapMailMessages=false, but then you will have to work with the email message content yourself. From Camel 2.15 onward you also have the postProcessAction option, and with that you could probably remove the SEEN flag from messages with processing errors. Still, I would recommend waiting for the fix.
We can set the mail's unread flag with the following code:
public void process(Exchange exchange) throws Exception {
    final Message mailMessage = exchange.getIn(MailMessage.class).getMessage();
    mailMessage.setFlag(Flag.SEEN, false);
}
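For completeness, here is a sketch of wiring that processor into a route (the endpoint URI and the AttachmentProcessor class are illustrative, and the peek caveat discussed above still applies):

// On a processing failure, clear the SEEN flag so the message is picked up
// again on a later poll, then mark the exception as handled.
from("imaps://mail.example.com?username=foo&password=secret&peek=true&unseen=true")
    .onException(Exception.class)
        .process(exchange -> {
            Message mail = exchange.getIn(MailMessage.class).getMessage();
            mail.setFlag(Flags.Flag.SEEN, false);
        })
        .handled(true)
    .end()
    .process(new AttachmentProcessor());  // the attachment-handling process() shown earlier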

Path manipulation issue Fortify

Hi, I have scanned an application using the Fortify tool, and in the generated report I got a path manipulation issue in the following method.
Note: the report does not show the error line number. Can anyone suggest how to resolve it?
private MimeMessage prepareMessage(EmailMessage req) throws EmailProviderException {
    long start = System.currentTimeMillis(), finish = 0;
    try {
        MimeMessage message = emailSender.createMimeMessage();
        // create a multipart message
        MimeMessageHelper helper = new MimeMessageHelper(message, true);
        // set email addresses
        helper.setFrom(convertAddress(req.getFromAddress()));
        helper.setTo(convertAddress(req.getToAddress()));
        helper.setCc(convertAddress(req.getCcAddress()));
        helper.setBcc(convertAddress(req.getBccAddress()));
        // set subject and body
        helper.setSubject(req.getEmailSubject());
        String emailBody = req.getEmailBody();
        String emailMime = req.getEmailMimeType();
        MimeBodyPart messagePart = new MimeBodyPart();
        DataSource bodyDataSource = new ByteArrayDataSource(emailBody, emailMime);
        messagePart.setDataHandler(new DataHandler(bodyDataSource));
        helper.getMimeMultipart().addBodyPart(messagePart);
        // add attachments
        List<EmailAttachment> lAttach = req.getEmailAttachment();
        if (lAttach != null) {
            for (EmailAttachment attachMnt : lAttach) {
                DataSource dSource = new ByteArrayDataSource(attachMnt.getContent(),
                        attachMnt.getMimeType());
                helper.addAttachment(attachMnt.getFileName(), dSource);
            }
        }
        finish = System.currentTimeMillis();
        statsLogger.info(new FedExLogEntry("prepareMessage took {0}ms", new Object[]{finish - start}));
        return message;
    } catch (Exception e) {
        // covers MessagingException, IllegalStateException, IOException, MailException
        String emsg = new StringBuilder("Unable to prepare smtp message.")
                .append("\n").append(req.toString()).toString();
        logger.warn(emsg, e);
        throw new EmailProviderException(emsg, e);
    }
}
Hmm. If Fortify is having trouble showing you the correct line where the issue exists, then it's possible that Fortify ran into a parsing error when it was scanning and rendering the results to your FPR. One thing you could try is to rescan your application under a different build ID and generate a new FPR. Beyond that, I don't know. Sorry.
Something else I'd recommend is to inspect your log file for any errors or warnings during translation/scan.
But after looking at your code sample, I'm thinking that Fortify is tainting the parameter req and flagging the operation that occurs when the file is added as an attachment. Most likely your sink is going to be at
helper.addAttachment(attachMnt.getFileName(), dSource);
You'd want to validate the file names of the attachments themselves prior to trying to save them to disk, as in the sketch below.
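A minimal sketch of that validation (the rules here are illustrative, not a complete defense):

// Reduce the client-supplied name to a base name and reject anything that
// still looks like path traversal, before calling helper.addAttachment(...).
private static String safeFileName(String name) {
    if (name == null || name.isEmpty()) {
        throw new IllegalArgumentException("Missing attachment file name");
    }
    // Strip any directory components the client may have supplied.
    String base = new java.io.File(name).getName();
    if (base.isEmpty() || base.contains("..")) {
        throw new IllegalArgumentException("Illegal attachment file name: " + name);
    }
    return base;
}

The attachment loop would then call helper.addAttachment(safeFileName(attachMnt.getFileName()), dSource);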
