How to stream out logs via API on Dropwizard?

I would like to stream out logs via an API endpoint, /logs, in Dropwizard.
It is harder than I thought. Dropwizard reads in the configuration via config.yml and keeps that information private, and I have no idea where I would be able to find the logger that logs everything.
Am I missing something?
Is there another way to do this?

Here is a streaming example. You can also read up on this in:
calling flush() on Jersey StreamingOutput has no effect
and
Example of using StreamingOutput as Response entity in Jersey
The code:
import java.io.IOException;
import java.io.OutputStream;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.StreamingOutput;

import org.glassfish.jersey.server.ServerProperties;

import io.dropwizard.Configuration;
import io.dropwizard.setup.Environment;

public class StreamingTest extends io.dropwizard.Application<Configuration> {

    @Override
    public void run(Configuration configuration, Environment environment) throws Exception {
        // Disable output buffering so each write reaches the client immediately.
        environment.jersey().property(ServerProperties.OUTBOUND_CONTENT_LENGTH_BUFFER, 0);
        environment.jersey().register(Streamer.class);
    }

    public static void main(String[] args) throws Exception {
        new StreamingTest().run("server", "/home/artur/dev/repo/sandbox/src/main/resources/config/test.yaml");
    }

    @Path("/log")
    public static class Streamer {

        @GET
        @Produces("application/octet-stream")
        public Response test() {
            return Response.ok(new StreamingOutput() {
                @Override
                public void write(OutputStream output) throws IOException, WebApplicationException {
                    while (true) {
                        output.write("Test \n".getBytes());
                        output.flush();
                        try {
                            Thread.sleep(1000); // simulate waiting for the stream
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).build();
        }
    }
}
This does what you want.
A few points: the line environment.jersey().property(ServerProperties.OUTBOUND_CONTENT_LENGTH_BUFFER, 0); is important. It tells the server not to buffer the output before returning it to the caller.
Additionally, you need to explicitly flush the output by calling output.flush().
However, this seems to be the wrong way to ship logs. Have you ever looked into, for example, logstash? Or network-based appenders that stream the logs directly to where you need them?
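For illustration, here is a minimal sketch of attaching a network appender programmatically with Logback (Dropwizard's logging backend). The host and port are placeholders, and the snippet assumes logback-classic is on the classpath:
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.net.SocketAppender;

import org.slf4j.LoggerFactory;

public class NetworkLogging {

    public static void attachSocketAppender() {
        // Grab the Logback context behind SLF4J (Dropwizard wires this up).
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // SocketAppender serializes logging events to a remote receiver.
        SocketAppender appender = new SocketAppender();
        appender.setContext(context);
        appender.setRemoteHost("logs.example.com"); // placeholder host
        appender.setPort(4560);                     // placeholder port
        appender.start();

        // Attach to the root logger so every log statement is streamed out.
        Logger root = context.getLogger(Logger.ROOT_LOGGER_NAME);
        root.addAppender(appender);
    }
}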
Hope that helps,
--artur

Related

What is the CDI equivalent of EJB's SessionSynchronization#afterCompletion method?

I've read the CDI 2.0 specification (JSR 365) and found the @Observes(during=AFTER_SUCCESS) annotation, but it actually requires a custom event to be defined in order to work.
This is what I've got:
//simple """transactional""" file system manager using the command pattern
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

import javax.ejb.EJBException;
import javax.ejb.SessionSynchronization;
import javax.ejb.Stateful;
import javax.transaction.TransactionScoped;
import javax.transaction.Transactional;

@Transactional(value = Transactional.TxType.REQUIRED)
@TransactionScoped
@Stateful
public class TransactionalFileSystemManager implements SessionSynchronization {

    private final Deque<Command> commands = new ArrayDeque<>();

    public void createFile(InputStream content, Path path, String name) throws IOException {
        CreateFile command = CreateFile.execute(content, path, name);
        commands.addLast(command);
    }

    public void deleteFile(Path path) throws IOException {
        DeleteFile command = DeleteFile.execute(path);
        commands.addLast(command);
    }

    private void commit() throws IOException {
        for (Command c : commands) {
            c.confirm();
        }
    }

    private void rollback() throws IOException {
        // undo in reverse order
        Iterator<Command> it = commands.descendingIterator();
        while (it.hasNext()) {
            Command c = it.next();
            c.undo();
        }
    }

    @Override
    public void afterBegin() throws EJBException {
    }

    @Override
    public void beforeCompletion() throws EJBException {
    }

    @Override
    public void afterCompletion(boolean commitSucceeded) throws EJBException {
        if (commitSucceeded) {
            try {
                commit();
            } catch (IOException e) {
                throw new EJBException(e);
            }
        } else {
            try {
                rollback();
            } catch (IOException e) {
                throw new EJBException(e);
            }
        }
    }
}
However, I want to adopt a CDI-only solution, so I need to remove anything EJB-related (including the SessionSynchronization interface). How can I achieve the same result using CDI?
First, the facts: the authoritative source for this topic is the Java Transaction API (JTA) specification; search for it online.
Then the bad news: in order to truly participate in a JTA transaction, you either have to implement a connector according to the Java Connector Architecture (JCA) specification or an XAResource according to JTA. I have never done either, and I am afraid both are going to be hard. Nevertheless, if you search, you may find an existing implementation of a file system connector.
Your code above will never accomplish a true two-phase commit: if your code fails, the transaction has already been committed, so the application state is inconsistent. And there is a small time window in which the real transaction has committed but the file system changes have not yet been executed; again, the state is inconsistent.
Some workarounds I can think of, neither of which solves the consistency problem:
Persist the file system commands in a database. This ensures they are enqueued transactionally; a scheduled job then wakes up and actually executes the queued FS commands.
Register a Synchronization with the current transaction and fire an appropriate event from there; your TransactionalFileSystemManager observes this event (no during attribute needed, I guess). A sketch of this approach follows.
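For illustration, a minimal sketch of the second workaround, assuming a JTA TransactionSynchronizationRegistry is injectable in your environment; the event class FileSystemTxCompleted is hypothetical:
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.inject.Inject;
import javax.transaction.Status;
import javax.transaction.Synchronization;
import javax.transaction.TransactionSynchronizationRegistry;

@ApplicationScoped
public class TxCompletionNotifier {

    // Hypothetical event payload carrying the transaction outcome.
    public static class FileSystemTxCompleted {
        public final boolean committed;

        public FileSystemTxCompleted(boolean committed) {
            this.committed = committed;
        }
    }

    @Inject
    TransactionSynchronizationRegistry registry;

    @Inject
    Event<FileSystemTxCompleted> event;

    // Call this once per transaction, e.g. when the manager enqueues its first command.
    public void register() {
        registry.registerInterposedSynchronization(new Synchronization() {
            @Override
            public void beforeCompletion() {
            }

            @Override
            public void afterCompletion(int status) {
                // Fire a CDI event that TransactionalFileSystemManager can observe
                // to run its commit() or rollback() logic.
                event.fire(new FileSystemTxCompleted(status == Status.STATUS_COMMITTED));
            }
        });
    }
}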

How to block SFTP remove operations with Apache MINA SSHD

I am trying to create a custom SFTP server using Apache MINA SSHD. My code so far:
SshServer sshd = SshServer.setUpDefaultServer();
sshd.setPort(PORT_NUMBER);
sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("keys/private_key.ppk")));

SftpSubsystemFactory factory = new SftpSubsystemFactory.Builder().build();
factory.addSftpEventListener(new BasicSftpEventListener());
sshd.setSubsystemFactories(Collections.singletonList(factory));

sshd.setShellFactory(new ProcessShellFactory("/bin/sh", "-i", "-l"));
sshd.start();
As you can see, I implemented my own SftpEventListener:
public class BasicSftpEventListener implements SftpEventListener {

    @Override
    public void removing(ServerSession session, Path path) throws IOException {
        System.out.println("removing");
    }

    @Override
    public void removed(ServerSession session, Path path, Throwable thrown) throws IOException {
        System.out.println("removed");
    }
}
When I remove a file, my removing and removed listeners execute, BUT the remove operation still proceeds and the file is deleted.
Is there a way to stop this from happening?
Thanks for the help!
If you want to block delete actions, you will need to interrupt the flow of the removing method with an exception. This will tell MINA to stop and not remove the file. I would recommend using java.lang.UnsupportedOperationException for this:
@Override
public void removing(ServerSession session, Path path) throws IOException {
    throw new UnsupportedOperationException("Removing files is not permitted.");
}

Prevent twitter4j from printing output when displaying a Twitter stream

How would I go about getting the live Twitter feed (or whatever part of it Twitter provides, 1% from what I hear) using twitter4j without having twitter4j print everything to the console output? Although I initially thought that this would mean that one had to turn off logging, trying to turn off logging has not helped. I would prefer an "in-code" solution to one which would require me to pass a VM Argument.
An SSCCE for the problem I'm facing would be:
import twitter4j.*;

public class StreamTweets {

    public static void main(String[] args) {
        StatusListener listener = new StatusListener() {
            public void onStatus(Status status) {
                // do whatever I want here
            }

            public void onDeletionNotice(StatusDeletionNotice statusDeletionNotice) {}

            public void onTrackLimitationNotice(int numberOfLimitedStatuses) {}

            public void onException(Exception ex) {
                ex.printStackTrace();
            }

            public void onScrubGeo(long arg0, long arg1) {}

            public void onStallWarning(StallWarning arg0) {}
        };

        TwitterStream twitterStream = new TwitterStreamFactory().getInstance();
        twitterStream.addListener(listener);
        twitterStream.sample();
    }
}
NB: whatever I need done (which would entail storing everything in a database) I will do in the onStatus() method of the StatusListener. Not sure this is very relevant, but I thought I'd put it out there just in case.
If you are providing Twitter credentials through twitter4j.properties as mentioned here, then change the first line of the twitter4j.properties file to
debug=false
This will stop twitter4j from printing the stream to the console!
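If you would rather keep everything in code, as the question asks, a sketch using twitter4j's ConfigurationBuilder should achieve the same thing (the OAuth values are placeholders):
import twitter4j.TwitterStream;
import twitter4j.TwitterStreamFactory;
import twitter4j.conf.ConfigurationBuilder;

public class QuietStream {

    public static void main(String[] args) {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.setDebugEnabled(false)          // suppress twitter4j's debug output
          .setOAuthConsumerKey("...")      // placeholder credentials
          .setOAuthConsumerSecret("...")
          .setOAuthAccessToken("...")
          .setOAuthAccessTokenSecret("...");

        TwitterStream twitterStream = new TwitterStreamFactory(cb.build()).getInstance();
        // add the StatusListener and call sample() as in the question
    }
}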

auto log java exceptions

I have written a huge web application and 'forgot' to include logging (I only print errors with the standard e.printStackTrace() method).
My question is: is there any way to auto-log (getLogger().log(SEVERE, "...")) every thrown exception?
Maybe with a custom exception factory, like in JSF?
I want to log every thrown exception with my logger, i.e. by the time the program enters the catch block, the exception must already have been logged:
try {
    ...
} catch (Exception1 e) {
    // the exception must already have been logged here (without adding getLogger().log(...) every time)
    e.printStackTrace();
} catch (Exception2 e) {
    // the exception must already have been logged here (without adding getLogger().log(...) every time)
    e.printStackTrace();
}
Take a look at aspect-oriented programming, which can insert logging code at runtime for your favorite logging framework. The JDK includes the java.lang.instrument package, which can insert bytecodes during class loading to perform your logging.
Otherwise, you can install a servlet Filter as the topmost filter in the call chain, which will catch most of your exceptions.
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class LogFilter implements Filter {

    private static final String CLASS_NAME = LogFilter.class.getName();
    private static final Logger logger = Logger.getLogger(CLASS_NAME);

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        logger.entering(CLASS_NAME, "doFilter", new Object[]{request, response});
        try {
            chain.doFilter(request, response);
        } catch (IOException | ServletException | RuntimeException | Error ioe) {
            logger.log(Level.SEVERE, "", ioe);
            throw ioe; // keep forwarding
        } catch (Throwable t) {
            logger.log(Level.SEVERE, "", t);
            throw new ServletException(t);
        }
        logger.exiting(CLASS_NAME, "doFilter");
    }

    @Override
    public void destroy() {
    }
}
You can set an uncaught exception handler for the main thread, and for every other thread you create, using the Thread.setUncaughtExceptionHandler() method, and do all the required logging there.
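A minimal sketch of that approach, using java.util.logging:
import java.util.logging.Level;
import java.util.logging.Logger;

public class UncaughtLogging {

    private static final Logger LOGGER = Logger.getLogger(UncaughtLogging.class.getName());

    public static void main(String[] args) {
        // Fallback handler for every thread that has no handler of its own.
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) ->
                LOGGER.log(Level.SEVERE, "Uncaught exception in " + thread.getName(), throwable));

        throw new IllegalStateException("boom"); // logged by the handler above
    }
}
Note that this only sees exceptions that actually escape a thread; exceptions you catch yourself, as in the question's try/catch blocks, never reach the handler.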
I am now also starting a larger project and am interested in eliminating unnecessary code. At first I wanted to log every entry to and exit from a method, including input and output data; in my event-driven architecture I push these data to Elasticsearch and continuously analyse method processing timeouts, and that takes a lot of code lines. So I handled it with AspectJ. A very nice example of this is here:
http://www.baeldung.com/spring-performance-logging
The same applies to automatic error logging. Here is a dummy example, which I will extend to work with slf4j, but those are details:
import java.util.Map;
import java.util.WeakHashMap;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.aspectj.lang.JoinPoint.StaticPart;
import org.aspectj.lang.Signature;

public aspect ExceptionLoggingAspect {

    private final Log log = LogFactory.getLog(this.getClass());
    private final Map<Throwable, Object> loggedThrowables = new WeakHashMap<>();

    // Limit the aspect to our own packages.
    public pointcut scope(): within(nl.boplicity..*);

    // Log exceptions thrown out of any method in scope.
    after() throwing(Throwable t): scope() {
        logThrowable(t, thisJoinPointStaticPart, thisEnclosingJoinPointStaticPart);
    }

    // Log exceptions as they arrive at a catch block in scope.
    before(Throwable t): handler(Exception+) && args(t) && scope() {
        logThrowable(t, thisJoinPointStaticPart, thisEnclosingJoinPointStaticPart);
    }

    // Log each Throwable only once; the WeakHashMap lets entries be garbage-collected.
    protected synchronized void logThrowable(Throwable t, StaticPart location, StaticPart enclosing) {
        if (!loggedThrowables.containsKey(t)) {
            loggedThrowables.put(t, null);
            Signature signature = location.getSignature();
            String source = signature.getDeclaringTypeName() + ":"
                    + enclosing.getSourceLocation().getLine();
            log.error("(a) " + source + " - " + t.toString(), t);
        }
    }
}
I would be happy to hear about other good examples of boilerplate code reduction. I of course use Lombok, which does a superior job...
NOTE: do not reinvent the wheel. Look here, as other people have collected useful AOP aspects that can be reused in your project out of the box :-)) open source is a great community: https://github.com/jcabi/jcabi-aspects
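For instance, a small sketch using jcabi-aspects' @Loggable and @LogExceptions annotations; this assumes the jcabi-aspects dependency and AspectJ weaving are configured:
import java.io.IOException;

import com.jcabi.aspects.LogExceptions;
import com.jcabi.aspects.Loggable;

public class Orders {

    // Logs entry, exit and execution time through SLF4J.
    @Loggable(Loggable.DEBUG)
    public int count() {
        return 42;
    }

    // Logs any exception thrown by the method before it propagates.
    @LogExceptions
    public void save(String name) throws IOException {
        throw new IOException("cannot save " + name); // demo failure
    }
}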

Restlet streaming with Thread.sleep()

This example is based on an example from the book Restlet in Action.
If I try
import java.io.IOException;
import java.io.Writer;

import org.restlet.data.MediaType;
import org.restlet.representation.Representation;
import org.restlet.representation.WriterRepresentation;
import org.restlet.resource.Get;
import org.restlet.resource.ResourceException;
import org.restlet.resource.ServerResource;

public class StreamResource extends ServerResource
{
    @Get
    public Representation getStream() throws ResourceException, IOException
    {
        Representation representation = new WriterRepresentation(MediaType.TEXT_PLAIN)
        {
            @Override
            public void write(Writer writer) throws IOException
            {
                String json = "{\"foo\" : \"bar\"}";
                while (true)
                {
                    writer.write(json);
                }
            }
        };
        return representation;
    }
}
it works and continuously sends the JSON string to the client.
If I introduce a delay in the while loop like this
String json = "{\"foo\" : \"bar\"}\r\n";
while (true)
{
    writer.write(json);
    try
    {
        Thread.sleep(250);
    }
    catch (InterruptedException e)
    {}
}
I was hoping the client would get data 4 times a second, BUT nothing seems to reach the client.
Can anyone explain why introducing Thread.sleep() does that? What is a good way to introduce a delay when streaming data to a client?
You should try the Jetty connector instead of the internal Restlet connector. The internal connector isn't ready for production, even though we are working on fixing it.
You can also try the Simple extension, which has fewer dependent JARs than the Jetty extension.
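For illustration, a minimal sketch of serving the resource through a Component, assuming the org.restlet.ext.jetty JAR is on the classpath so that Restlet picks the Jetty connector instead of the internal one (the port is a placeholder):
import org.restlet.Application;
import org.restlet.Component;
import org.restlet.Restlet;
import org.restlet.data.Protocol;
import org.restlet.routing.Router;

public class StreamServer extends Application
{
    @Override
    public Restlet createInboundRoot()
    {
        Router router = new Router(getContext());
        router.attach("/stream", StreamResource.class);
        return router;
    }

    public static void main(String[] args) throws Exception
    {
        Component component = new Component();
        // With org.restlet.ext.jetty on the classpath, this HTTP server
        // is backed by Jetty rather than the internal connector.
        component.getServers().add(Protocol.HTTP, 8182); // placeholder port
        component.getDefaultHost().attach(new StreamServer());
        component.start();
    }
}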
You can try to flush the buffer, like this:
String json = "{\"foo\" : \"bar\"}\r\n";
while (true)
{
    writer.write(json);
    writer.flush(); // flush the buffer
    try
    {
        Thread.sleep(250);
    }
    catch (InterruptedException e)
    {}
}
Without writer.flush(), the writer waits until its internal buffer is full before writing to the socket. Thread.sleep(250) reduces the output produced each second, so far more time is required to fill the buffer.
