Logging from a client Swing Java app to ELK

We have a fat Java Swing client that runs in multiple instances on a Citrix farm, and we would like to send the client logs to an Elasticsearch server. The preferred way, as I understand it, is to set up Logstash and point it at the client logs. But our app runs on Citrix, so it is not desirable to have another application running alongside it. Other answers, such as "Logging from Java app to ELK without need for parsing logs", discourage building custom Java log appenders for sending logs to Elasticsearch.
Degrading application responsiveness is not an option, so the solution should be asynchronous. What are our options?

Have a look at my Log4j2 Elasticsearch Appenders. Out of the box it gives you asynchronous log delivery directly from the application to ES clusters, plus failover, rolling indices, index templates and security configuration.
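If you prefer to stay with stock Log4j 2 components rather than a third-party appender, a rough illustration of the asynchronous idea is to wrap an Http appender pointed at an Elasticsearch index endpoint in an Async appender, so the Swing threads never wait on the network. This is only a sketch (URL, index name and buffer size are placeholders), and since the built-in Http appender posts one event per request, a dedicated Elasticsearch appender or Logstash remains the more robust choice:

<!-- log4j2.xml - minimal sketch, not a production config -->
<Configuration status="warn">
  <Appenders>
    <!-- Posts each log event as a JSON document; URL and index are placeholders -->
    <Http name="Elastic" url="http://elasticsearch.example.internal:9200/app-logs/_doc">
      <JsonLayout compact="true" eventEol="true" properties="true"/>
    </Http>
    <!-- The async wrapper keeps logging off the Swing EDT and worker threads -->
    <Async name="AsyncElastic" bufferSize="1024">
      <AppenderRef ref="Elastic"/>
    </Async>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="AsyncElastic"/>
    </Root>
  </Loggers>
</Configuration>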

Related

How to properly keep logs of a Spring Boot JMS app deployed on a Linux server

I have created a Spring Boot JMS application. Its function is to act as middleware that consumes/listens to messages (XML) from a SOURCE system, transforms the XML, and sends the transformed XML to a DESTINATION system.
I have already deployed the jar file on a Linux server. This is the first time I have deployed an application, and I am not sure of the correct way to keep a history of logs in a file should any error occur while the Spring Boot app is consuming and processing XML messages.
Some of the XML messages contain account numbers, and if anything fails, I need some way of knowing which account failed.
I'm unsure because when working in the IDE and running the Spring Boot application, we normally see a log of what is happening in the console. But after deploying the jar to the Linux server, I no longer have an IDE console to see what's happening. I just see the jar application running on port 8080.
In the IDE, normally we output messages using LOGGER.info(), LOGGER.error()...
private static final Logger LOGGER = LoggerFactory.getLogger(SomeClassFile.class);
What would be the best approach to keep a history of logs?
Possible scenarios would be a connection failure while consuming messages from the SOURCE system or while sending messages to the DESTINATION system.
Another possible failure would be a failure to transform the XML messages.
All of that needs to be captured.
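To make the "which account failed" part traceable, one option is to put the account number into the SLF4J MDC before processing, so every log line written while that message is handled carries it. A minimal sketch (the onMessage method and the extractAccountNumber helper are hypothetical, not part of your code):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MessageProcessor {

    private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessor.class);

    // Hypothetical handler; in a real app this would be the @JmsListener method.
    public void onMessage(String xml) {
        String accountNumber = extractAccountNumber(xml); // assumed helper
        MDC.put("accountNumber", accountNumber);
        try {
            LOGGER.info("Received message, starting transformation");
            // transform the XML and send it to the DESTINATION system ...
        } catch (Exception e) {
            // the account number is both in the MDC and in the message text
            LOGGER.error("Failed to process message for account {}", accountNumber, e);
        } finally {
            MDC.remove("accountNumber");
        }
    }

    private String extractAccountNumber(String xml) {
        return "unknown"; // placeholder for real XML parsing
    }
}

With Logback you can then add %X{accountNumber} to the file pattern so the account appears on every line logged while that message is processed.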
I deployed the app by creating a simple Jenkins job that copies the jar to the Linux server after building.
I'd appreciate any comment or suggestion.
Thank you.
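For the "history of logs in a file" part, one common minimal setup (a sketch assuming Spring Boot 2.3+ with the default Logback starter; the path and limits are placeholders) is to let Spring Boot write and roll the file via application.properties:

# application.properties - illustrative values only
logging.file.name=/var/log/myapp/app.log
# keep rolled files around so past errors can still be investigated
logging.logback.rollingpolicy.max-file-size=10MB
logging.logback.rollingpolicy.max-history=30
logging.level.root=INFO

For anything more elaborate (a separate error file, custom patterns), a logback-spring.xml with a RollingFileAppender does the same job.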

What are the proper ways to centralize the logs of spring boot microservices deployed in Kubernetes?

I would like to have a way to centralize the logs of multiple applications running in a Kubernetes cluster.
The cherry on top would be if there could be a way to query them so I can display them on a UI.
Any ideas / suggestions?
As suggested by Jordanm, you can use fluentd + ES and Kibana (to display the logs).
I would also suggest checking out the similar stack of Graylog, Elasticsearch and MongoDB.
You are on Kubernetes, so it would be easy to integrate Graylog as well, and it's an open-source tool. You can use the official Helm chart and deploy it for storing the central logs.
Helm chart to deploy Graylog: https://charts.kong-z.com/
Graylog official site: https://www.graylog.org/
I would also suggest checking out the GELF UDP option with Graylog: your application sends logs over the UDP protocol, so even under immense traffic it can keep writing logs without waiting for an ack.
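To illustrate why the GELF UDP route cannot stall the application: a GELF message is just a small JSON payload fired at the Graylog input over UDP with no acknowledgement. A bare-bones sketch using only the JDK (host, port and message fields are placeholders; in practice you would use a GELF appender for Logback/Log4j instead of hand-rolling this):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class GelfUdpSketch {
    public static void main(String[] args) throws Exception {
        // Minimal GELF 1.1 payload: version, host and short_message are the required fields.
        String gelf = "{\"version\":\"1.1\",\"host\":\"my-service\","
                + "\"short_message\":\"something went wrong\",\"level\":3}";
        byte[] data = gelf.getBytes(StandardCharsets.UTF_8);

        try (DatagramSocket socket = new DatagramSocket()) {
            // Fire-and-forget: send() returns immediately and never waits for an ack,
            // so a slow or unreachable Graylog node cannot block the caller.
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName("graylog.example.internal"), 12201));
        }
    }
}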

EC2 Instance - Sending STDOUT logs to CloudWatch

The logging chapter of the 12-factor app suggests that application logs should be sent to STDOUT.
I found plenty of documentation on how to take the logs from STDOUT and send them to CloudWatch when running the application in a container.
But, is it possible (or even recommended) to do the same when running the application in an EC2 instance (no container/docker involved)?
The way I managed to have my logs sent to CloudWatch was what I assume to be the standard way:
Configure my logback-spring.xml to log to a file (Java application).
Install the CloudWatch agent on the instance and configure it to monitor the file above (see the agent configuration sketch below).
Happy life, all works.
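For reference, the file-monitoring part of that setup is just a logs section in the CloudWatch agent configuration. A minimal sketch with placeholder path and names:

{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp/app.log",
            "log_group_name": "myapp",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}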
I found this post on the AWS forum where it's suggested to create a symbolic link from stdout to a file, and I assume that file would then have to be monitored by the agent. The benefit I can see in this approach is that whoever develops the application doesn't need to worry about log configuration and just writes to stdout, while whoever deploys the application can configure things however they want with a startup script.
But as a drawback, I can't see a way to have the application's logs sent to different streams and/or groups.
Thank you.

I have a Splunk Cloud account and I want my Java application's logs to be stored in Splunk Cloud. How can I do that?

I have a Splunk Cloud account, and now I want my Java application's logs to be stored in Splunk Cloud. I am wondering how I can integrate Splunk Cloud with my Java application's logging.
There are a number of different options for ingesting log data with Splunk.
Two of the most common solutions for Java-based applications are:
Configure a Splunk forwarder on the instance hosting the application that reads the logs and forwards them to your Splunk Cloud instance (sketched below).
Use the custom logging appenders provided by Splunk.
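For the forwarder option, the monitoring side boils down to an inputs.conf stanza on the universal forwarder; a minimal sketch (path, index and sourcetype are placeholders, and the Splunk Cloud destination is normally configured through the forwarder credentials app downloaded from your Splunk Cloud instance):

# $SPLUNK_HOME/etc/system/local/inputs.conf - illustrative values only
[monitor:///var/log/myapp/app.log]
sourcetype = myapp:log
index = main
disabled = false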

Logging from 3 different web applications on a Tomcat cluster

Our project consists of 3 web applications that communicate with each other via web services.
All 3 web apps run on 3 different web servers that form a cluster behind a load balancer (Spring, Tomcat, MySQL).
Our CTO mentioned that in production it can be very helpful to investigate errors in a single unified log file that consists of all the web application log files combined.
That way it is very easy to follow the whole flow across the web apps in one log, rather than jumping from one log file to another (one per web app).
After some quick research we found that combining all the logs into a single file may corrupt the log file itself. (We are using slf4j with a log4j configuration.)
So basically we have 3 questions:
1) Is it a good practice to combine all of the web app logs into one?
2) What's the best way to achieve that (a non-corrupted log file would be nice)?
3) Is it possible/relevant to apply the same log-unification concept to the Tomcat logs (unify the unified logs of all Tomcats in the same cluster)?
Logging to the same file from multiple servers can get very messy. You inevitably end up with multiple servers attempting to update files simultaneously, which has a habit of causing problems such as weirdly intermingled output and locking.
Given that you're using Log4J, then you should check out JMS queue appenders:
http://logging.apache.org/log4j/2.x/manual/appenders.html#JMSQueueAppender
Using this, every server logs to a JMS queue, and you can set up a listener which logs to file on a separate server.
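A rough sketch of that layout with the Log4j 2 JMS appender (the broker URL, JNDI names and queue are placeholders, and ActiveMQ is only an assumption here; any JMS provider works the same way):

<!-- log4j2.xml on each web app - sketch only -->
<Appenders>
  <JMS name="jmsQueue"
       factoryName="org.apache.activemq.jndi.ActiveMQInitialContextFactory"
       providerURL="tcp://broker.example.internal:61616"
       factoryBindingName="ConnectionFactory"
       destinationBindingName="dynamicQueues/appLogs">
    <JsonLayout properties="true"/>
  </JMS>
</Appenders>

A small standalone consumer on the logging host then reads from that queue and appends to the unified file, so only one process ever writes to it.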
A reasonable alternative would be to do a little bit of Perl scripting to grab the files and merge them periodically, or on demand.
You will probably find messages which are out of step with each other. That's because each server will be buffering up log output to avoid blocking your application processes.
Logging just the errors to a common place is useful. You can continue to log to each application's log, but you can add remote logging for selected entries (e.g. anything with Level=ERROR).
The easiest way to set this up is to run a SocketServer on one of your machines.
Here's an example that shows how to configure it.
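As a rough sketch of what such a setup looks like with log4j 1.x, since the question mentions slf4j over log4j (host, port and file names are placeholders): each web app keeps its own file appender and additionally forwards ERROR-level events to the central machine with a SocketAppender.

# log4j.properties on each web app - sketch only
log4j.rootLogger=INFO, app, remote

# the existing per-app file appender
log4j.appender.app=org.apache.log4j.RollingFileAppender
log4j.appender.app.File=webapp.log
log4j.appender.app.layout=org.apache.log4j.PatternLayout
log4j.appender.app.layout.ConversionPattern=%d %-5p [%c] %m%n

# forward only ERROR-level events to the central log server
log4j.appender.remote=org.apache.log4j.net.SocketAppender
log4j.appender.remote.RemoteHost=loghost.example.internal
log4j.appender.remote.Port=4712
log4j.appender.remote.Threshold=ERROR

On the central machine, something like java -cp log4j-1.2.17.jar org.apache.log4j.net.SimpleSocketServer 4712 server-log4j.properties receives the events and writes them to one common file using whatever appenders server-log4j.properties defines.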
