Design pattern for 2 varieties of the same feature - Java

All,
I am developing a feature that, on execution of an operation, writes the logs to a file server using FTP. Note that the write to the file server happens only if a file server is configured; if no server is configured, the operation exits and returns its status. The flow is something like this:
1. Execute operation
2. If a file server is connected (check in the DB and ping it), write the logs
3. Return the status
Now I would like to know whether there are design patterns for this: the same feature, but whose scope varies depending on whether or not some configuration is done. I would much appreciate help with 2 scenarios:
Static - the DB configuration is read once during boot-up, so after boot-up the system can "assume" the file server is or is not there based on that read.
Dynamic - while the system is up and running, I might bring up a file server and configure it in the DB. Ideally the system should detect the file server and start writing logs to it, rather than having to be rebooted.
Requesting help in this regard.
Thanks

Your design looks like a breach of the Single Responsibility Principle. You are entangling two different concerns: the first concern is the operation itself, and the second is shipping the logs to a central location.
Think about separating your component into two simpler, independent components. One of them performs the business operation and writes logs, say to a local file, and that's it. The other component checks for the existence of new logs on the local file system and copies them to the central location.

You didn't mention whether or not you are using an existing logging framework, such as Log4J. If not, it would probably be a good idea - if you try to roll your own logging framework, you can end up having to deal with unforeseen complexities, such as handling log levels (INFO, DEBUG, ERROR, etc.).
With regard to your original question - I'd consider using the Factory pattern: create a factory class that internally checks whether the file server is available and returns one of two different logger types - something like a ConsoleLogger and an FTPLogger. Make sure that both of these implement the same interface so that your calling code doesn't have to care what type of logger it's using. Alternatively, you can use a Decorator that wraps the object performing your operation - once it completes a request, have the decorator do the logging.
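A minimal sketch of that factory idea, using the illustrative names above and stubbing out the DB lookup and the ping:

    // Common interface so calling code doesn't care which logger it gets.
    interface OperationLogger {
        void log(String message);
    }

    class ConsoleLogger implements OperationLogger {
        public void log(String message) {
            System.out.println(message);
        }
    }

    class FTPLogger implements OperationLogger {
        private final String server;

        FTPLogger(String server) {
            this.server = server;
        }

        public void log(String message) {
            // Append the message to a file on 'server' via FTP (details omitted).
        }
    }

    class OperationLoggerFactory {
        static OperationLogger create() {
            String server = readFileServerFromDb();   // stub: read configuration from the DB
            if (server != null && ping(server)) {     // stub: verify the server is reachable
                return new FTPLogger(server);
            }
            return new ConsoleLogger();
        }

        private static String readFileServerFromDb() { return null; }    // placeholder
        private static boolean ping(String server)   { return false; }   // placeholder
    }

For the "static" scenario the factory can run the check once at boot and cache the result; for the "dynamic" scenario it can re-check on a timer (or on each call, throttled) and start handing out an FTPLogger as soon as the server appears.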
A final comment - try to avoid checking whether the file server is available every time that you log. A database hit on every log call could result in horrible performance, not to mention that you'll have to ensure that errors in the logging method (such as DB locks) don't result in your entire operation failing.

Related

Centralized application properties for multiple systems

I am looking for open-source solutions that allow hosting different properties for different applications and allow changes. On any change of a property, it should notify or push the change to the appropriate application.
So, instead of every application managing its properties in a physical file and deploying them physically, these properties can be kept in a single system. Users will have a GUI to load and change the properties according to their rights. It should allow pushing changes as mentioned.
If you already have similar open-source solutions in mind, please advise.
Is this something that Puppet can manage for you?
I don't think what you've described (as nice as it would be) is likely to exist in an app server. If a program is looking for a file, it's either going to load it with a FileReader (or similar), or it will use ClassLoader.getResourceAsStream(). It might be looking for data in properties format, XML properties format, or even something completely different like RDF with which to configure itself. Also, many programs read their config on start-up and then hold the values in memory, so you would still need to restart them to pick up changes.
To get something like this to work there would need to be a standard for configuration provisioning and live updates. Once that existed the webapp authors and server vendors would each need to add support for the standard.
If you are the one writing the programs to be managed, however, then you can write them to request configuration from a service and support a configuration push feature. There may be packages out there that can speed up adding that to your code, but I get the impression you are looking to manage programs written by others.
Have you considered using JMX? I think it could be a good starting point for implementing your requirements.
An MBean's attributes can store your application's properties, the MBeanServer allows you to make them available remotely, and JConsole offers you a GUI to update property values.
You can also write code within the MBean that notifies the corresponding application when a user changes a property through the GUI.
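A rough sketch of that idea, assuming a hypothetical AppConfig MBean that exposes one property and broadcasts an AttributeChangeNotification whenever it is changed (for example from JConsole):

    // AppConfigMBean.java - the management interface (must be public for a standard MBean)
    public interface AppConfigMBean {
        String getFileServerUrl();
        void setFileServerUrl(String url);
    }

    // AppConfig.java - the MBean implementation, which also registers itself
    import java.lang.management.ManagementFactory;
    import javax.management.AttributeChangeNotification;
    import javax.management.NotificationBroadcasterSupport;
    import javax.management.ObjectName;

    public class AppConfig extends NotificationBroadcasterSupport implements AppConfigMBean {
        private String fileServerUrl = "ftp://example/logs";
        private long sequence;

        public synchronized String getFileServerUrl() {
            return fileServerUrl;
        }

        public synchronized void setFileServerUrl(String url) {
            String old = fileServerUrl;
            fileServerUrl = url;
            // Tell listeners (e.g. the managed application) that the property changed.
            sendNotification(new AttributeChangeNotification(this, ++sequence,
                    System.currentTimeMillis(), "fileServerUrl changed",
                    "FileServerUrl", "java.lang.String", old, url));
        }

        public static void main(String[] args) throws Exception {
            AppConfig config = new AppConfig();
            ManagementFactory.getPlatformMBeanServer()
                    .registerMBean(config, new ObjectName("myapp:type=AppConfig"));
            // The application can addNotificationListener(...) on this MBean to react
            // immediately when a user edits the value in JConsole.
            Thread.sleep(Long.MAX_VALUE);   // keep the JVM alive so JConsole can connect
        }
    }

The MBean name, attribute and ObjectName above are purely illustrative; the point is that the setter is the single place where a change is applied and broadcast.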

What is the difference between logging and a normal file write?

I was just curious to know the difference between a normal file write and logging. Of course logging is used to record exceptions, errors, installation details and other important data, but this can be done using a normal file write as well. I've seen logging use locks for resource sharing (in Java). Other than that, is there any particular or very important reason for using logging?
Logging is writing data to some stream to keep a record of events that occur in an application. Note that you don't necessarily have to log to a file. You can log to a console, for example.
Some applications require an "Audit Log" of user activity in the system. This is a case where logging is fulfilling a very specific business requirement.
Note you can write to a file and NOT be logging. If you use the presence of a file to create a lock for a process, for example, you have written to a file, but you are not logging.
In general, though, logging is just writing event data somewhere. "Started up", "entered method x" and "exception occurred" are all events. I think that is really what distinguishes a "log" from a file with different semantics.
Writing to a file is one way of doing logging. Logging is a more general term for something like "save important events for later use". If you look at logging frameworks, you see that they allow you to write to a file as one option, but they also provide more configuration options, such as logging levels and different logging sinks. One could of course implement this oneself by writing the relevant information to a file.
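To illustrate the difference, here is a plain file write next to a small java.util.logging setup (file names are made up); the framework gives you levels, timestamps and pluggable destinations for free:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.logging.FileHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    public class WriteVsLog {
        public static void main(String[] args) throws IOException {
            // Plain file write: you decide the format, the destination, everything.
            try (FileWriter out = new FileWriter("events.txt", true)) {
                out.write("application started\n");
            }

            // Logging framework: levels, formatting and sinks come from configuration.
            Logger log = Logger.getLogger("myapp");
            FileHandler file = new FileHandler("myapp.log", true);
            file.setFormatter(new SimpleFormatter());
            log.addHandler(file);
            log.setLevel(Level.INFO);

            log.info("application started");                // recorded
            log.fine("cache warmed in 120 ms");              // dropped: below INFO
            log.log(Level.SEVERE, "startup failed", new RuntimeException("example"));
        }
    }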
Logging means appending to a file. With a plain write you can overwrite previous data; by appending you can't. It's just my way of thinking.

Implementing logging

I was just wondering whether the following is possible.
I have a TCP communicator which keeps communicating with thousands of devices.
Currently, the TCP communicator logs all the events in one log file.
Now, is it possible to log the communication with every device in a separate file? The IMEI number of every device is different, so the logger would check whether a file whose name equals the device's IMEI number exists. If the file exists, the logger would start logging the device's events in that file; otherwise it would create a new file with the IMEI as the file name and start logging the events there.
(We are developing our application in Java.)
LogBack is the future, and it's here!
Created as a successor to log4j and fully compliant with the SLF4J framework, logback might be the easy (and clean) way to fulfill your need.
I'm not an expert, but I guess that SiftingAppender might be the right answer. There should be a discriminator option for you. Maybe you can build your own discriminator, extend the SiftingAppender, or get some extra help from the Janino library.
Hope this helps!
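The usual way to drive a SiftingAppender is to put the IMEI into the MDC around each exchange and let the appender's discriminator key on it. Roughly like this on the Java side (the logback.xml would then declare a SiftingAppender whose discriminator uses the "imei" key and whose nested FileAppender builds the file name from that value):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class DeviceHandler {
        private static final Logger log = LoggerFactory.getLogger(DeviceHandler.class);

        public void handleMessage(String imei, byte[] payload) {
            MDC.put("imei", imei);           // discriminator key used by the SiftingAppender
            try {
                log.info("received {} bytes", payload.length);
                // ... process the message ...
            } finally {
                MDC.remove("imei");          // always clean up the thread's MDC
            }
        }
    }

The "imei" key and class names here are illustrative; the essential part is that the MDC value is set on the thread that does the logging.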
If you are implementing the logger yourself, there's nothing to stop you from doing this.
For example, pass the log function the IMEI of the device you're currently communicating with as a parameter, and implement it the way you described.
If you're using Apache log4j, which I highly recommend, create a custom logging appender by extending AppenderSkeleton; writing separate files for individual connections is then as simple as doing standard file I/O with a variable filename.
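A bare-bones sketch of such an appender (log4j 1.x API; it assumes the IMEI was placed in the MDC by the communication code, and it naively opens and closes the file on every event):

    import java.io.FileWriter;
    import java.io.IOException;
    import org.apache.log4j.AppenderSkeleton;
    import org.apache.log4j.spi.LoggingEvent;

    public class ImeiFileAppender extends AppenderSkeleton {

        @Override
        protected void append(LoggingEvent event) {
            Object imei = event.getMDC("imei");   // set via MDC.put("imei", ...) by the caller
            String file = (imei != null ? imei.toString() : "unknown") + ".log";
            try (FileWriter out = new FileWriter(file, true)) {   // append; creates file if absent
                out.write(layout.format(event));
            } catch (IOException e) {
                errorHandler.error("Cannot write to " + file, e, 0);
            }
        }

        @Override
        public boolean requiresLayout() {
            return true;
        }

        @Override
        public void close() {
            closed = true;
        }
    }

In production you would cache writers rather than reopening the file per event; see the note further down about running out of file handles.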
If you are using java.util.logging, look at the Handler base class, if you are using log4j, look at Appender. In both cases, you need to somehow get the IMEI associated with the message, so the code writing the log message can pick the appropriate file.
There are two approaches to doing this.
First is to extend the log event class (LogRecord or LoggingEvent respectively). This would allow you to log using your event, which contains the IMEI. However, this does not account for logging performed by other libraries etc while performing the conversation with a device.
The other alternative is to use a ThreadLocal. Set the IMEI associated with a socket whenever you receive a message or are formulating a message. Make sure that the logging happens in the same thread, and that any queuing is done at the log handler/appender. Look for / ask questions about ThreadLocals if you are unfamiliar with this approach. I believe that Log4J's NDC and MDC implement this sort of strategy, but I've not tried to do specialized processing of context at the appender.
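One way the ThreadLocal idea could look with a java.util.logging Handler (hypothetical names, no caching of open files):

    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.logging.Handler;
    import java.util.logging.LogRecord;
    import java.util.logging.SimpleFormatter;

    public class ImeiHandler extends Handler {
        // Set by the communication code before it logs anything for a device.
        public static final ThreadLocal<String> CURRENT_IMEI = new ThreadLocal<>();

        public ImeiHandler() {
            setFormatter(new SimpleFormatter());
        }

        @Override
        public void publish(LogRecord record) {
            if (!isLoggable(record)) {
                return;
            }
            String imei = CURRENT_IMEI.get();
            String file = (imei != null ? imei : "unknown") + ".log";
            try (FileWriter out = new FileWriter(file, true)) {
                out.write(getFormatter().format(record));
            } catch (IOException e) {
                reportError("Cannot write " + file, e, 0);
            }
        }

        @Override public void flush() { }
        @Override public void close() { }
    }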
Finally, be aware some operating systems will run out of file handles if you are indeed thinking of keeping "thousands" of log files open. Depending on just how many files, you may want to consider writing log messages (with IMEI) to a database, or doing some sort of LRU-based file closing. In the latter, you would basically not have file handles for log files that haven't been touched in a while.

How to make multiple instances of Java program share the same logging FileHandler?

I have a Java program which runs as 3 separate processes on the same server. I would like all of the processes to share a single log file, is there a way to specify that in a logging.properties file? I am using java.util.logging to handle logging.
Currently, this is how I define my FileHandler in my logging.properties file:
java.util.logging.FileHandler.pattern=%h/log/logfile.log
This works fine for one instance of the program; however, if I attempt to start 3 separate instances of the program, the result is:
logfile.log
logfile.log.1
logfile.log.2
Any advice on this?
Thank you
Logback is another logger, but it supports your case.
From the docs: http://logback.qos.ch/manual/appenders.html
Check out prudent mode for FileAppender.
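In prudent mode the FileAppender locks the file around every write, so several JVMs can append to the same file safely. Normally you would just set <prudent>true</prudent> on the appender in logback.xml; configured programmatically, a sketch might look like this (appender setup and pattern are illustrative):

    import ch.qos.logback.classic.Logger;
    import ch.qos.logback.classic.LoggerContext;
    import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
    import ch.qos.logback.classic.spi.ILoggingEvent;
    import ch.qos.logback.core.FileAppender;
    import org.slf4j.LoggerFactory;

    public class PrudentLogging {
        public static void main(String[] args) {
            LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();

            PatternLayoutEncoder encoder = new PatternLayoutEncoder();
            encoder.setContext(ctx);
            encoder.setPattern("%d %-5level [%thread] %logger - %msg%n");
            encoder.start();

            FileAppender<ILoggingEvent> appender = new FileAppender<>();
            appender.setContext(ctx);
            appender.setFile("logfile.log");
            appender.setPrudent(true);      // serialize writes via a file lock, safe across JVMs
            appender.setEncoder(encoder);
            appender.start();

            Logger root = ctx.getLogger(Logger.ROOT_LOGGER_NAME);
            root.addAppender(appender);

            LoggerFactory.getLogger(PrudentLogging.class)
                    .info("logged from a shared, prudent-mode file");
        }
    }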
Writing to the same file from different processes (the different JVMs) is not recommended.
The only safe way to do it is to somehow lock the file, open it, write to it and then close it. This considerably slows down each write, which is generally deemed unacceptable for a logger. If you really want to go this way, you can always write your own handler.
I would write a 2nd Java program - a logger. Have the other processes send log messages to the logging program, which would then write to the single log file. You can communicate between the programs using sockets. Here's an example of how to do that.
Paul
Elaborating on Paul's answer, you can use a SocketHandler to direct the log events from all processes to a single process, which actually writes to a file.
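A sketch of that setup with java.util.logging (port number and class names are made up; the collector is deliberately minimal and a real one would frame and parse the incoming records properly):

    // Each of the 3 processes points a SocketHandler at the collector process.
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;
    import java.util.logging.SocketHandler;

    public class ClientLogging {
        public static void main(String[] args) throws Exception {
            Logger log = Logger.getLogger("myapp");
            SocketHandler socket = new SocketHandler("localhost", 9020);
            socket.setFormatter(new SimpleFormatter());   // default would be XMLFormatter
            log.addHandler(socket);
            log.info("hello from one of the processes");
        }
    }

    // LogCollector.java - the single process that actually owns the log file.
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class LogCollector {
        public static void main(String[] args) throws Exception {
            final FileOutputStream out = new FileOutputStream("logfile.log", true);
            try (ServerSocket server = new ServerSocket(9020)) {
                while (true) {
                    Socket client = server.accept();
                    new Thread(() -> {
                        byte[] buf = new byte[8192];
                        try (InputStream in = client.getInputStream()) {
                            int n;
                            while ((n = in.read(buf)) != -1) {
                                synchronized (out) {      // one writer at a time
                                    out.write(buf, 0, n);
                                    out.flush();
                                }
                            }
                        } catch (Exception ignored) {
                        }
                    }).start();
                }
            }
        }
    }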
Most log packages provide a simple implementation of this functionality. Another widely supported option is integration with the system's logging facility (the Windows Event Log or syslogd).

Use of AspectJ for debugging Enterprise Java applications

The idea is to utilize AOP for designing applications/tools to debug/view the execution flow of an application at runtime. To begin with, a simple data (state) dump at the start and end of each method invocation will do the necessary data collection.
The target is not application developers but high-level business analysts or high-level support people, for whom an execution-flow view could prove helpful. The runtime application flow can also be useful in reducing the learning curve of an application for new developers, especially in configuration-heavy systems.
I wanted to know whether such tools/applications already exist and could be used. Or better, if this makes sense, is there a better way to achieve it?
You could start with Spring Insight (http://www.springsource.org/insight) and add your own plugins to collect data appropriate for business analysts/support staff. If that doesn't meet needs, you can write your own custom aspects. It is not that hard.
You could write your own aspects, as suggested by ramnivas, but to prepare for requests from the users you may want to have the aspects compiled into the application, so that you don't take a hit at run-time. The users could then select which execution flows or method groups they are interested in, and you would just call the server and set a variable to give them the desired information.
Writing the aspects is easy, but to limit recompiling you may want to get an idea of what the users will want. For example, if they want a log of every call made from the time a web service is invoked until it reaches the database, you can build that in; it is just easier to know this up front.
If the variable is not set, the aspect does nothing; you could also unset the variable once the traced flow has finished.
You could also let them pick which type of logging to enable and for which user, which may lead to more useful information.
