Is it possible to intercept calls to System.out.print* and System.err.print* (in Java) and prepend a time stamp to them? Don't worry, we use the usual logging frameworks, but occasionally some sys.out leaks out and it would be nice to know when it happens so we can tie it to the proper log files.
You can do it.
See the docs
Reassigns the "standard" output stream.
First, if there is a security manager, its checkPermission method is called with a RuntimePermission("setIO") permission to see if it's ok to reassign the "standard" output stream.
public class CustomPrintStream extends PrintStream {
    public CustomPrintStream(OutputStream out) {
        super(out);
    }
    // override the print/println methods here to prepend a timestamp
}

System.setOut(new CustomPrintStream(new FileOutputStream(FileDescriptor.out)));
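A fleshed-out sketch of that override, assuming you mainly care about println(String) (a complete version would override the other print/println overloads too):

import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.io.PrintStream;
import java.time.LocalDateTime;

public class TimestampPrintStream extends PrintStream {
    public TimestampPrintStream(OutputStream target) {
        super(target, true); // autoflush, like the default System.out
    }

    @Override
    public void println(String s) {
        // prepend the current time to every line written through this stream
        super.println(LocalDateTime.now() + " " + s);
    }
}

// early in startup:
System.setOut(new TimestampPrintStream(new FileOutputStream(FileDescriptor.out)));
System.setErr(new TimestampPrintStream(new FileOutputStream(FileDescriptor.err)));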
It should be possible.
System.out is a PrintStream.
You can extend the stream to prepend the date and time in the print methods and use System.setOut() to install it.
As an afterthought, if you want to identify where the print statements are coming from, you can use:
Thread.currentThread().getStackTrace()[1].getClassName();
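For example, inside the overridden println of such a custom stream you could tag each line with the calling class (the stack index 2 is an assumption and may need adjusting for your call depth):

@Override
public void println(String s) {
    // [0] is getStackTrace, [1] is this println override, [2] is (roughly) the code that called println
    String caller = Thread.currentThread().getStackTrace()[2].getClassName();
    super.println("[" + caller + "] " + s);
}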
You could use Aspect Oriented Programming to achieve this - in particular the AspectJ tool.
The idea is that you define pointcuts that match at points in your code and then write advice that is executed at those points. The AspectJ compiler will then weave in your advice at those points.
So for your problem you would first define a pointcut that picked up every time you called a print method on a PrintStream
pointcut callPrint(PrintStream ps, String s) :
    call(* java.io.PrintStream.print*(..)) && target(ps) && args(s);
You would then write advice that would go around this call to replace the argument if the PrintStream is System.out (you could do the same with System.err).
void around(PrintStream ps, String s) : callPrint(ps, s) {
    if (ps.equals(System.out)) {
        String new_string = ... // e.g. prepend a timestamp to s
        proceed(ps, new_string);
    } else {
        proceed(ps, s);
    }
}
You then need to put this all in an aspect and weave it into your code - there are lots of tutorials online on how to do that.
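Assembled into a single aspect, it might look roughly like this (the timestamp prefix is only an illustration):

import java.io.PrintStream;
import java.util.Date;

public aspect TimestampSystemOut {

    pointcut callPrint(PrintStream ps, String s) :
        call(* java.io.PrintStream.print*(..)) && target(ps) && args(s);

    void around(PrintStream ps, String s) : callPrint(ps, s) {
        if (ps.equals(System.out) || ps.equals(System.err)) {
            // prepend a timestamp so stray println output can be matched to the logs
            proceed(ps, new Date() + " " + s);
        } else {
            proceed(ps, s);
        }
    }
}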
With Java 11, I could initialize an InputStream as:
InputStream inputStream = InputStream.nullInputStream();
But I am unable to understand a potential use case of InputStream.nullInputStream, or of the similar API for OutputStream, i.e. OutputStream.nullOutputStream.
From the API Javadocs, I could figure out that it
Returns a new InputStream that reads no bytes. The returned stream is
initially open. The stream is closed by calling the close() method.
Subsequent calls to close() have no effect. While the stream is open,
the available(), read(), read(byte[]), ...
skip(long), and transferTo() methods all behave as if end of stream
has been reached.
I went through the detailed release notes further, which state:
There are various times where I would like to use methods that require
as a parameter a target OutputStream/Writer for sending output, but
would like to execute those methods silently for their other effects.
This corresponds to the ability in Unix to redirect command output to
/dev/null, or in DOS to append command output to NUL.
Yet I fail to understand which methods the statement refers to when it says "... execute those methods silently for their other effects" (blame my lack of hands-on experience with the APIs).
Can someone help me understand what is the usefulness of having such an input or output stream with a help of an example if possible?
Edit: One similar implementation I could find on browsing further is Apache Commons' NullInputStream, which does justify the testing use case much better.
Sometimes you want to have a parameter of InputStream type, but also to be able to choose not to feed your code with any data. In tests it's probably easier to mock it but in production you may choose to bind null input instead of scattering your code with ifs and flags.
compare:
class ComposableReprinter {
    void reprint(InputStream is) throws IOException {
        System.out.println(is.read());
    }

    void bla() {
        reprint(InputStream.nullInputStream());
    }
}
with this:
class ControllableReprinter {
    void reprint(InputStream is, boolean for_real) throws IOException {
        if (for_real) {
            System.out.println(is.read());
        }
    }

    void bla() {
        reprint(new ByteArrayInputStream(new byte[0]), false); // some throwaway stream; it is never read
    }
}
or this:
class NullableReprinter {
    void reprint(InputStream is) throws IOException {
        if (is != null) {
            System.out.println(is.read());
        }
    }

    void bla() {
        reprint(null);
    }
}
It makes more sense with output IMHO. Input is probably more for consistency.
This approach is called Null Object: https://en.wikipedia.org/wiki/Null_object_pattern
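To illustrate the output-side variant of the same idea (writeReport and dryRun are made-up names):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

void writeReport(OutputStream sink) throws IOException {
    sink.write("report contents".getBytes(StandardCharsets.UTF_8));
}

// the caller decides whether the report really goes anywhere
OutputStream sink = dryRun ? OutputStream.nullOutputStream() : new FileOutputStream("report.txt");
writeReport(sink);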
I see it as a safer (1) and more expressive (2) alternative to initialising a stream variable with null.
1. No worries about NPEs.
2. [Output|Input]Stream is an abstraction. To return a null/empty/mock stream, you previously had to step down from the core concept to a specific implementation.
I think nullOutputStream is very easy and clear: just to discard output (similar to > /dev/null) and/or for testing (no need to invent an OutputStream).
An (obviously basic) example:
OutputStream out = ... // either System.out or OutputStream.nullOutputStream(): print everything or discard it all
new PrintStream(out).println("yeah... or not");
exporter.exportTo(out); // discard or real export?
Regarding nullInputStream, it's probably more for testing (I don't like mocks) and for APIs that require an input stream, or (more probably) that deliver an input stream which contains no data, or where you can't deliver one and null is not a viable option:
importer.importDocument("name", /* input stream... */);
InputStream inputStream = content.getInputStream(); // better to have no data to read than to get null
When you test that importer, you can just use a nullInputStream there, again instead of inventing your own InputStream or instead of using a mock. Other use cases here rather look like a workaround or misuse of the API ;-)
Regarding the return of an InputStream: that rather makes sense. If you don't have any data, you may want to return that nullInputStream instead of null, so that callers do not have to deal with null and can just read as they would if there were data.
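A small sketch of that, where loadBytesOrNull is a made-up lookup that may find nothing:

InputStream getContentStream() {
    byte[] data = loadBytesOrNull();
    return data == null
            ? InputStream.nullInputStream()     // callers simply see end of stream
            : new ByteArrayInputStream(data);
}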
Finally, these are just convenience methods to make our lives easier without adding another dependency ;-) and as others already stated (comments/answers), it's basically an implementation of the null object pattern.
Using the null*Stream might also have the benefit that tests are executed faster... if you stream real data (of course... depending on size, etc.) you may just slow down your tests unnecessarily and we all want tests to complete fast, right? (some will put in mocks here... well...)
I'm having great fun with the method delegation described here:
http://www.javacodegeeks.com/2015/01/make-agents-not-frameworks.html
This works nicely:
.intercept(MethodDelegation.to(LogInterceptor.class)
        .andThen(SuperMethodCall.INSTANCE))
I can intercept calls and capture arguments passed to methods, which is half of what I want to achieve. However, I haven't found an equally succinct way of capturing the return value. I know I can get a Callable passed to the interceptor which performs the call, but going down that road seems like a sure way to mess up my stacktraces.
It seems to me there should be an easy and canonical-ish way to implement the "around-method" pattern.
Before I start digging into the APIs for reals: Am I missing something?
No, you are not missing anything.
Whenever you manipulate code with Byte Buddy, this manipulation will be reflected by the stack traces of your application. This is intentional, as it makes debugging much easier in case something goes wrong. Think of your log interceptor throwing a runtime exception; if the interceptor's code were somehow merged into your original method, this would be quite confusing for other developers to figure out. With Byte Buddy's approach, you can simply navigate to the causing source, as your interceptor is in fact reachable from there. With Byte Buddy, no exception is ever thrown from generated code, such that any problem can be traced back to source code.
Also, merging stack frames can have strange side effects on caller-sensitive code. For example, a security manager might give higher permissions to an interceptor than to the intercepted code. Merging stack frames would revert these permissions.
Writing an interceptor with a @SuperCall Callable injected is the canonical way of implementing around advice. Do not worry about performance either. Byte Buddy is written in a way that makes it very easy for the JIT compiler to inline code, such that the super method call is most likely executed with zero overhead. (There is even a benchmark demonstrating that.) For your example, generic around advice would look like the following:
public class TimingInterceptor {

    @RuntimeType
    public static Object intercept(@SuperCall Callable<?> zuper)
            throws Exception {
        long before = System.currentTimeMillis();
        try {
            return zuper.call();
        } finally {
            System.out.println("Took: " + (System.currentTimeMillis() - before));
        }
    }
}
For every method, the time it takes to execute is now printed to the console. You delegate to this code using MethodDelegation.to(TimingInterceptor.class).
Make sure that you use the @RuntimeType annotation. This way, Byte Buddy attempts a cast at runtime, making this generic interception possible.
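A hedged sketch of wiring the delegation up with Byte Buddy's fluent API (Foo and doWork are placeholders for your own class and method):

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.implementation.MethodDelegation;
import net.bytebuddy.matcher.ElementMatchers;

Foo timed = new ByteBuddy()
        .subclass(Foo.class)
        .method(ElementMatchers.isDeclaredBy(Foo.class))
        .intercept(MethodDelegation.to(TimingInterceptor.class))
        .make()
        .load(Foo.class.getClassLoader())
        .getLoaded()
        .getDeclaredConstructor()
        .newInstance();

timed.doWork(); // prints "Took: ..." around the real doWork() implementation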
I have a simple question on logging
Why is it common to use this syntax for logging:
LOG.debug("invalidate {}",_clusterId);
not this:
LOG.debug("invalidate" + _clusterId);
In your example, say you have the logging level set to INFO. You'd like to ignore debug-level messages entirely. But the log method can't check the log level until the method is entered, after it gets the parameters. So if you don't know if you're going to need a parameter it's better to avoid having to evaluate it.
With your second example, even though logging is set to INFO, toString gets called on _clusterId, and the resulting string is concatenated with the preceding string. Then once the method is entered, the logger figures out that the debug level doesn't need logging, throws away the newly-created string, and exits the method.
With the first example if debug-level logging is not enabled then _clusterId doesn't get toString called on it and the log message doesn't get built. Calling toString may be slow or create garbage, it's better to avoid it for cases where nothing is going to be logged anyway.
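In other words, the parameterized call does internally what you would otherwise write by hand (assuming an SLF4J-style Logger named LOG):

// manual guard: the message is only built when debug is actually enabled
if (LOG.isDebugEnabled()) {
    LOG.debug("invalidate " + _clusterId);
}

// parameterized form: same effect, the check and formatting happen inside the logger
LOG.debug("invalidate {}", _clusterId);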
Here's the source code for the debug method on log4j's org.apache.log4j.Category (which is the superclass of Logger):
public void debug(Object message, Throwable t) {
    if(repository.isDisabled(Level.DEBUG_INT))
        return;
    if(Level.DEBUG.isGreaterOrEqual(this.getEffectiveLevel()))
        forcedLog(FQCN, Level.DEBUG, message, t);
}
When you have a statement with several parameters, writing the pattern as a string followed by the parameters makes the code more readable. It may also be more efficient, avoiding the needless creation of many temporary string objects, but that depends on how the logging framework implements interpolation internally.
To see the first point, compare this with the equivalent line that uses string concatenation.
LOG.debug("{}: Error {} while processing {} at stage {}", currentFile,
exception.getMessage(), operation.getName(), operation.getStage())
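For comparison, the same statement written with string concatenation:

LOG.debug(currentFile + ": Error " + exception.getMessage() + " while processing "
        + operation.getName() + " at stage " + operation.getStage());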
When there's only one parameter it doesn't really matter which one you use, apart from being consistent with the general case.
I am trying to find the method signature of the caller method. I need to do this because the code I'm writing gets obfuscated and a lot of methods end up overloaded. I'm trying to ignore calls from one particular method with a specific signature. At the moment my code looks like this:
StackTraceElement caller = Thread.currentThread().getStackTrace()[2];
String cn = caller.getClassName();
String mn = caller.getMethodName();
// displayGuiScreen is the non-obfuscated name, a is the obfuscated one. This doesn't work
// because two other methods that call it are also named a when obfuscated.
if (cn.equals("net.minecraft.client.Minecraft") && (mn.equals("displayGuiScreen") || mn.equals("a"))) {
    System.err.println("Skipped");
    return;
}
Can anyone help me with this? Thanks
Try the ASM library. I've just found a post that I think is related to yours; read through it to find an example.
You might be able to use AspectJ with compile-time weaving. You can use around advice to do nothing if the caller is the displayGuiScreen method. This will only work if you compile all the code that calls your class. Compile-time weaving is necessary because you must do it before obfuscation and for all callers of your method. You should be able to do something like this:
aspect IgnoreCallsFromDisplayGuiScreen {
    void around(): call(void MyClass.myMethod()) && withincode(void Minecraft.displayGuiScreen()) {
        return;
    }
}
First, verify that your code works without obfuscation.
Second, find the obfuscation map, which is an output of the obfuscation program.
Third, verify that your obfuscator updated the string to match the new method signature as detailed in the map. Odds are only the class and method names have changed.
If you do not have a match, either look to your obfuscator for an option to rewrite strings that look like reflection calls, or use ASM to rewrite the string within the compiled class that didn't get the update.
In C++, if we do not want some statements (such as assert calls) compiled into the code that ships, we control their compilation through #ifndef preprocessor directives.
How do we do this in Java?
I have some System.out.println() statements for debugging which I would like to remove for the final code.
One way is to make them execute conditionally, controlled by a boolean variable. Is there a better way of doing this?
Since this is a Java Swing application, I can turn off the System.out.println statements without affecting the output. What is the best way to do this?
Use a logging framework like slf4j. You can print to the console in debugging and omit everything in production without recompiling the application.
Use logging. See log4j or commons logging.
Generally, each log entry has a severity level (like debug, info, warning, error) and you can configure which are printed from the application. You can print all of them for debug, but only some (e.g. info and higher) in production. The configuration is usually done with a single plain text file.
Logging frameworks can do more than that: Add more detail automatically (e.g. timestamp or thread ID), log to console, file and/or database, rotate files, and more.
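For instance, a debugging println could become a debug-level log call and then be filtered by configuration alone (MyPanel is just an example class name):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyPanel {
    private static final Logger LOG = LoggerFactory.getLogger(MyPanel.class);

    void onButtonClick() {
        // shows up when the level is DEBUG in development,
        // disappears when the production config sets the level to INFO or higher
        LOG.debug("button clicked");
    }
}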
Use AspectJ for those pieces of code which you want added or removed at compile time and compile without AspectJ when you don't want to use them.
In general (not only for your example), you can create multiple implementations of an interface and choose which instance is used at runtime. This is called polymorphism; the advantage over if/else is that you choose the implementation once, instead of every time it's used.
In Java, polymorphism typically adds no measurable performance overhead, since the JIT compiler can usually devirtualize such calls.
public interface MyInterface {
    void trashTheCPU();
}

public class MyRealImpl implements MyInterface {
    @Override
    public void trashTheCPU() {
        // actually trash the CPU with heavy tasks
    }
}

public class MyEmptyImpl implements MyInterface {
    @Override
    public void trashTheCPU() {
        // do nothing
    }
}

// ... somewhere else:
MyInterface mi = null;

public void initEverything() {
    if (trashTheCPUconditionIsMet) {
        mi = new MyRealImpl();
    } else {
        mi = new MyEmptyImpl();
    }
}
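After initialisation, the call site stays the same either way:

mi.trashTheCPU(); // does the heavy work or nothing, depending on which implementation was chosen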
For what you're doing, assertions may be the way to go. The assert keyword was added to Java in version 1.4, and is conceptually similar to C++'s assert(), which I see you're familiar with.
assert statements in Java do nothing if they evaluate to true and throw an AssertionError otherwise, which is perfect for debugging. They are also disabled by default, which means the JVM ignores them at runtime unless you explicitly enable them (with the -ea flag). The end result is a debugging tool that "evaporates" when you ship your production code.
Here's Sun's tutorial on Java asserts.
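A minimal illustration (compile normally, then run with java -ea Demo to actually enable the check):

public class Demo {
    public static void main(String[] args) {
        int x = args.length > 0 ? Integer.parseInt(args[0]) : 42;
        // ignored by default; throws AssertionError only when assertions are enabled with -ea
        assert x > 0 : "x must be positive, was " + x;
        System.out.println("x = " + x);
    }
}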