EC2 Instance - Sending STDOUT logs to CloudWatch - Java

The logging chapter of the 12-factor app suggests that application logs should be sent to STDOUT.
I found plenty of documentation on how to take the logs from STDOUT and send them to CloudWatch when the application runs in a container.
But is it possible (or even recommended) to do the same when running the application on an EC2 instance (no container/Docker involved)?
The way I managed to get my logs into CloudWatch was what I assume is the standard approach:
1) Configure my logback-spring.xml to log to a file (Java application).
2) Install the CloudWatch agent on the instance and configure it to monitor the file above (both pieces are sketched below).
Happy life, all works.
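Roughly, the two pieces look like this (file paths, log group and stream names are just examples):

    <!-- logback-spring.xml: write the application log to a file -->
    <configuration>
      <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/var/log/myapp/application.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>/var/log/myapp/application.%d{yyyy-MM-dd}.log</fileNamePattern>
          <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder>
          <pattern>%d{ISO8601} %-5level [%thread] %logger{36} - %msg%n</pattern>
        </encoder>
      </appender>
      <root level="INFO">
        <appender-ref ref="FILE"/>
      </root>
    </configuration>

and the CloudWatch agent configuration that monitors that file:

    {
      "logs": {
        "logs_collected": {
          "files": {
            "collect_list": [
              {
                "file_path": "/var/log/myapp/application.log",
                "log_group_name": "myapp",
                "log_stream_name": "{instance_id}"
              }
            ]
          }
        }
      }
    }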
I found this post on the AWS forum where it's suggested to create a symbolic link from stdout to a file, and I would assume that this file would then be monitored by the agent. The benefit I can see in this approach is that whoever develops the application doesn't need to worry about log configuration (just write to STDOUT), and whoever deploys it can configure things however they want with a script at startup.
But as a drawback, I can't see a way to have the application's logs sent to different streams and/or groups.
Thank you.

Related

How to properly keep logs of Spring Boot JMS app deployed in a Linux server

I have created a Spring Boot JMS application. Its function is to act as middleware: it consumes/listens for messages (XML) from a SOURCE system, transforms the XML, and sends the transformed XML to a DESTINATION system.
I have already deployed the jar file on a Linux server. This is the first time I have deployed an application, and I am not sure of the correct way to keep a history of logs in a file should any error occur while the Spring Boot app is consuming and processing XML messages.
Some of the XML messages contain account numbers and if anything fails, I need to have some way of knowing which account failed.
I'm unsure because when running the Spring Boot application in the IDE, we normally see a log of what is happening in the console. But after deploying the jar to the Linux server, I no longer have an IDE console to see what's happening; I just see the jar application running on port 8080.
In the IDE, normally we output messages using LOGGER.info(), LOGGER.error()...
private static final Logger LOGGER = LoggerFactory.getLogger(SomeClassFile.class);
What would be the best approach to keep a history of logs?
Possible scenarios would be a connection failure while consuming messages from the SOURCE system or while sending messages to the DESTINATION system.
Another possible failure would be a failure to transform the XML messages.
All of that needs to be captured.
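To make it concrete, the processing code looks roughly like this (class and method names here are only an illustration):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class MessageProcessor {

        private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessor.class);

        public void onMessage(String accountNumber, String sourceXml) {
            try {
                String transformed = transform(sourceXml);
                sendToDestination(transformed);
                LOGGER.info("Processed message for account {}", accountNumber);
            } catch (Exception e) {
                // the account number is logged so a failed message can be traced later
                LOGGER.error("Failed to process message for account {}", accountNumber, e);
            }
        }

        private String transform(String xml) { /* XSLT or similar */ return xml; }

        private void sendToDestination(String xml) { /* web service / JMS call */ }
    }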
I deployed the app by creating a simple Jenkins task which copies the jar to the Linux server after building.
I'd appreciate any comment or suggestion.
Thank you.

Java core application sending Application Insights data (logs) to the Azure portal when debugging but not on a normal application run

I have a Java application (not web) that periodically logs data to the file system as well as the console. The application is built in Java and uses log4j-1.2.17.jar for logging.
I have configured the Java application to send log data to an Application Insights resource in Azure.
The configuration involved adding applicationinsights-core-2.6.1.jar and applicationinsights-logging-log4j1_2-2.6.1.jar to the build path of the Java project.
When I execute the code in debug mode, the application sends the log data to the Azure portal.
When I execute the code in non-debug mode, the application fails to send the log data to the Azure portal.
Can someone please let me know what I am missing so that the application starts sending data to the portal in normal (non-debug) mode?
According to this documentation
Telemetry is not sent instantly. Telemetry items are batched and sent by the ApplicationInsights SDK. In Console apps, which exits right after calling Track() methods, telemetry may not be sent unless Flush() and Sleep/Delay is done before the app exits as shown in full example later in this article.
You can add a flush() and a sleep() and give it a try. It may be that your application exits before the telemetry has been sent.
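A minimal sketch of that idea, assuming the TelemetryClient from applicationinsights-core is on the classpath (the 5-second delay is an arbitrary value):

    import com.microsoft.applicationinsights.TelemetryClient;

    public class Main {
        public static void main(String[] args) throws InterruptedException {
            // ... application work; log4j entries are forwarded to Application Insights
            // by the applicationinsights-logging-log4j1_2 appender ...

            // flush whatever the SDK has batched but not yet sent,
            // then give the background sender a moment before the JVM exits
            new TelemetryClient().flush();
            Thread.sleep(5000);
        }
    }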
Hope this can help you.

Logging from client swing Java app to ELK

We have a fat Java Swing client that runs in multiple instances on a Citrix farm, and we would like to send the client logs to an Elasticsearch server. The preferred way, as I understand it, is to set up Logstash and point it at the client logs. But our app runs on Citrix, so it is not desirable to have another app running besides ours. Other answers, like Logging from Java app to ELK without need for parsing logs, discourage building custom Java log appenders for sending logs to Elasticsearch.
Degrading application responsiveness is not an option, and the solution should be asynchronous. What are our options?
Have a look at my Log4j2 Elasticsearch Appenders. Out of the box, they give you async log delivery directly from the application to ES clusters, plus failover, rolling indices, index templates and security config.

Live web log viewer for tomcat catalina.out

With a standard webapp running in Tomcat with the Spring Framework and Log4j logging to catalina.out, I need better access to the logs than manually SSHing in and running tail -f catalina.out.
I already know of some solutions like Logstash, ... but they require sending the logs to a centralized server. I went through a lot of answers on various websites but none satisfies my needs. I just want to have access to the logs in a web browser on the same web server.
Is there any simple and straightforward way to do that?
Update
I want to do that because I cannot always SSH in and tail -f the logs because of firewall IP restrictions. I need to be able to see these logs from anywhere, as long as I have internet access, through such a secure live web console.
Give logsniffer a try. It's a simple standalone Java web application which can run on the same host. The log4j log format is supported out of the box: just type in the conversion pattern and the logs will be parsed properly. You can tail, search and monitor the logs in real time. Last but not least, logsniffer is open source.
Disclaimer: This is my own project.

Logging from 3 different web applications on a tomcat cluster

Our project consists of 3 web applications that communicate with each other via web services.
All 3 web apps run on 3 different web servers that form a cluster behind a load balancer (Spring, Tomcat, MySQL).
Our CTO mentioned that in production it can be very helpful to investigate errors in a single unified log file that consists of all the web application log files combined.
That way it is very easy to see the whole flow across the web apps in one log, rather than skipping from one log file to another (one per web app).
After a quick bit of research, we found that combining all the logs into a single file may corrupt the log file itself (we are using SLF4J with a Log4j configuration).
So basically we have 3 questions:
1) Is it a good practice to combine all of the web apps' logs into one?
2) What's the best way to achieve that (a non-corrupted log file would be nice)?
3) Is it possible/relevant to apply the same log unification to the Tomcat logs (unify the logs of all Tomcats in the same cluster)?
Logging to the same file from multiple servers can get very messy. You inevitably end up with multiple servers attempting to update files simultaneously, which has a habit of causing problems such as weirdly intermingled output and locking.
Given that you're using Log4j, you should check out the JMS queue appender:
http://logging.apache.org/log4j/2.x/manual/appenders.html#JMSQueueAppender
Using this, every server logs to a JMS queue, and you can set up a listener which logs to file on a separate server.
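A rough sketch of what such a configuration could look like in Log4j 2 (the ActiveMQ factory class, broker URL and binding names are assumptions, not taken from the question):

    <Configuration status="warn">
      <Appenders>
        <!-- each web app publishes its log events to a shared JMS queue -->
        <JMS name="jmsQueue"
             destinationBindingName="loggingQueue"
             factoryBindingName="ConnectionFactory"
             factoryName="org.apache.activemq.jndi.ActiveMQInitialContextFactory"
             providerURL="tcp://mq-host:61616"/>
      </Appenders>
      <Loggers>
        <Root level="info">
          <AppenderRef ref="jmsQueue"/>
        </Root>
      </Loggers>
    </Configuration>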
A reasonable alternative would be to do a little bit of Perl scripting to grab the files and merge them periodically, or on demand.
You will probably find messages which are out of step with each other. That's because each server will be buffering up log output to avoid blocking your application processes.
Logging just the errors to a common place is useful. You can continue to log to each application's log, but you can add remote logging for selected entries (e.g. anything with Level=ERROR).
The easiest way to set this up is to run a SocketServer on one of your machines.
Here's an example that shows how to configure it.
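A rough sketch of the idea with Log4j 1.x (not the linked example; host, port and file names below are placeholders): each application keeps its local appenders and additionally ships ERROR-level events to a central host.

    # fragment of each web app's log4j.properties
    # ("file" stands for the app's existing local appender, defined elsewhere)
    log4j.rootLogger=INFO, file, central

    # ship only ERROR-level events to the central log host
    log4j.appender.central=org.apache.log4j.net.SocketAppender
    log4j.appender.central.RemoteHost=logs.internal.example.com
    log4j.appender.central.Port=4712
    log4j.appender.central.Threshold=ERROR

On the central machine, log4j's bundled SimpleSocketServer can receive those events and write them to a file using its own configuration:

    java -cp log4j-1.2.17.jar org.apache.log4j.net.SimpleSocketServer 4712 server-log4j.properties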
