I'm required to create a part of a Java program that "secretly" logs all activity the user performs in the program, purely to catch people trying to "cheat" the system. The thing is, multiple people will be using the same program on multiple computers.
All the information needs to be written using PrintWriter to a single text file that will, later on, be used by an administrative part of the program.
PrintWriter printer = new PrintWriter(new FileWriter(serverFolderLocation + "\\LogUserInfo.txt", true));
It's expected that around 50 computers will be using this program, and every time a specific button is pressed on one of them, an array of 7+ lines of text is written to that file, potentially every few seconds.
I know that writing text to a text file is extremely quick and this is unlikely to happen, but if 2 or more computers happen to write to the text file at the same time, while append is set to true, will data go missing, or will it append normally?
Is this even possible: 2+ devices writing data to a text file at different times?
Do note that it is important that all the data from all 50+ computers arrives in the destination file.
If problems are likely to occur, what other methods could be used to do something like this, other than setting up a dedicated database?
Setting append to true in the FileWriter:
append - boolean if true, then data will be written to the end of the file rather than the beginning.
If you've got 50 machines doing this all at the same time, there are going to be conflicts. Your computer complains when you try to modify a file that's being used by another process; don't go throwing in 50 more contenders.
You could try using Sockets.
Whichever machine holds that file, designate that as the 'server' and make the other machines 'clients'. Your clients send messages to the server, your server appends those messages to the file in a synchronised manner.
You could then prevent your ~50 clients from directly changing the log file with some network & server security.
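A minimal sketch of that idea, assuming one log entry per line over a hand-rolled protocol (the class name, port number and file name are placeholders, not from the question):

import java.io.*;
import java.net.*;

// Sketch of the "one server appends, clients send" approach.
// LOG_PATH and port 9090 are illustrative placeholders.
public class LogServer {
    private static final String LOG_PATH = "LogUserInfo.txt";

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9090)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                append(line);
            }
        } catch (IOException ignored) {
        }
    }

    // Only this one JVM ever touches the file, so a synchronized method is enough.
    private static synchronized void append(String line) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(LOG_PATH, true))) {
            out.println(line);
        }
    }
}

Each client would then open a Socket to this server and write its lines to the socket's output stream with a PrintWriter, instead of opening the shared file itself.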
You are simply re-inventing the wheel here, but in a very wrong way. The other answer is correct: using a simple file, and having multiple distributed users write to the same file just screams for failure.
But I disagree with the idea to use sockets instead. That is really low level, and requires you to implement a lot of (complicated) things yourself. You will have to think about network issues, multi threading, locking, buffering, ...
Sure, if this is for education, then building something like that is a challenge. But if the goal here is to come to a robust solution that just works, you should rather think about using some 3rd-party off-the-shelf solution.
You could start reading here. And then pick a framework such as logback. Alternatively, you could look into messaging services, such as ActiveMQ, or RabbitMQ.
Again: creating and collecting logs in distributed environments is A) hard to get right but B) a solved problem.
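To give an idea of how little application code that leaves you with, here is a minimal sketch using the SLF4J API (which logback implements); the class and method names are made up, and the decision about where the events actually go (a file, a socket to a central collector, a message broker) lives entirely in the logback configuration rather than in this code:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: the class name is illustrative. Transport and destination of
// the log events are configured in logback.xml, not here.
public class AuditLog {
    private static final Logger LOG = LoggerFactory.getLogger(AuditLog.class);

    public static void buttonPressed(String user, String details) {
        LOG.info("button pressed by {}: {}", user, details);
    }
}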
The title actually tells the issue. And don't get me wrong: I DO NOT want to know how this can be done, but how I can prevent it.
I want to write a file uploader (in Java with JPA and a MySQL database). Since I'm not yet 100% sure about the internal management, there is the possibility that at some point the file could be executed/opened internally.
So I'd be glad to know what an attacker can do to harm, infect or manipulate my system by uploading any type of file, be it a media file, a binary or whatever.
For instance:
What about special characters in the file name?
What about manipulating meta data like EXIF?
What about "embedded viruses" like in an MP3 file?
I hope this is not too vague and I'd be glad to read your tips and hints.
Best regards,
Stacky
It's really very application specific. If you're using a particular web app like phpBB, there are completely different security needs than if you're running a news group. If you want tailored security recommendations, you'll need to search for them based on the context of what you're doing. It could range from sanitizing input to limiting upload size and format.
For example, an MP3 file virus probably only works on a few specific MP3 players, not on all of them.
At any rate, if you want broad coverage from viruses, then scan the files with a virus scanner, but that probably won't protect you from things like script injection.
If your server doesn't do something inherently stupid, there should be no problem. But...
Since I'm not yet 100% sure about the internal management, there is the possibility that at some point the file could be executed/opened internally.
... this qualifies as inherently stupid. You have to make sure you don't accidentally execute uploaded files (permissions on the upload directory are a starting point; limit uploads to specific directories, etc.).
Aside from executing, if the server attempts any file type specific processing (e.g. make thumbnails of images) there is always the possibility that the processing can be attacked through buffer overflow exploits (these are specific for each type of software/library though).
A pure file server (e.g. FTP) that just stores/serves files is safe (when there are no other holes).
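A common mitigation for the "accidentally executed/opened" risk is to never trust the client-supplied name or extension at all. A hedged sketch (directory, allow-list and method name are all made up for illustration; requires Java 9+ for Set.of):

import java.io.InputStream;
import java.nio.file.*;
import java.util.Set;
import java.util.UUID;

// Sketch only. The stored name is generated server-side, so special characters
// or double extensions in the client-supplied name never reach the filesystem,
// and the upload directory sits outside the web root.
public class UploadStore {
    private static final Path UPLOAD_DIR = Paths.get("/var/app/uploads");
    private static final Set<String> ALLOWED = Set.of("jpg", "png", "pdf", "mp3");

    public static Path store(InputStream data, String clientFileName) throws Exception {
        String ext = clientFileName.contains(".")
                ? clientFileName.substring(clientFileName.lastIndexOf('.') + 1).toLowerCase()
                : "";
        if (!ALLOWED.contains(ext)) {
            throw new IllegalArgumentException("file type not allowed");
        }
        Path target = UPLOAD_DIR.resolve(UUID.randomUUID() + "." + ext);
        Files.copy(data, target);
        return target;
    }
}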
I need to monitor a log file for a pattern. The log file continually gets written by an application.
The application can add new log statements while my program is reading it.
The log gets rolled over when it’s >200 MB or at end of the day, so my program should handle change in filename dynamically.
If my program crashes for any reason, it has to resume from where it left off.
I do not want to re-invent the wheel. I am looking for a Java API. I wrote a program that reads the file in a loop with a 30-second sleep, but that does not meet all the criteria.
You might consider looking at the Apache Commons IO classes, in particular the Tailer/TailerListener classes. See http://www.devdaily.com/java/jwarehouse/commons-io-2.0/src/main/java/org/apache/commons/io/input/Tailer.java.shtml.
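A minimal sketch of the Tailer approach (the file path, pattern and delay are placeholders); the listener's fileRotated callback is what lets you survive the daily/200 MB rollover:

import java.io.File;
import org.apache.commons.io.input.Tailer;
import org.apache.commons.io.input.TailerListenerAdapter;

// Sketch: path, pattern and delay are illustrative.
public class LogWatcher {
    public static void main(String[] args) {
        TailerListenerAdapter listener = new TailerListenerAdapter() {
            @Override
            public void handle(String line) {
                if (line.contains("ERROR")) {           // the pattern you monitor for
                    System.out.println("match: " + line);
                }
            }
            @Override
            public void fileRotated() {
                System.out.println("log rolled over");   // new file under the same name
            }
        };
        // poll every second, starting from the beginning of the file
        Tailer tailer = new Tailer(new File("/var/log/app.log"), listener, 1000, false);
        new Thread(tailer).start();                       // non-daemon thread keeps the JVM alive
    }
}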
These two APIs can be helpful:
1. JxFileWatcher (Official Site) - read here what it is capable of
2. JNotify - a Java library that allows a Java application to listen to file system events (a rough usage sketch follows below), such as:
File created
File modified
File renamed
File deleted
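A rough sketch of how JNotify is typically used; the path and mask are illustrative, and the method names are paraphrased from the project's sample code, so treat the exact signatures as an assumption and check the JNotify documentation:

import net.contentobjects.jnotify.JNotify;
import net.contentobjects.jnotify.JNotifyListener;

// Sketch only: path and watch duration are placeholders.
public class DirWatcher {
    public static void main(String[] args) throws Exception {
        int mask = JNotify.FILE_CREATED | JNotify.FILE_MODIFIED
                 | JNotify.FILE_RENAMED | JNotify.FILE_DELETED;
        int watchId = JNotify.addWatch("/var/log", mask, false, new JNotifyListener() {
            public void fileCreated(int wd, String root, String name)  { System.out.println("created " + name); }
            public void fileModified(int wd, String root, String name) { System.out.println("modified " + name); }
            public void fileDeleted(int wd, String root, String name)  { System.out.println("deleted " + name); }
            public void fileRenamed(int wd, String root, String oldName, String newName) {
                System.out.println("renamed " + oldName + " -> " + newName);
            }
        });
        Thread.sleep(60_000);            // keep the JVM alive while watching
        JNotify.removeWatch(watchId);
    }
}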
If you are using Log4j, or can integrate it, it is possible to append log outputs to a convenient object, such as a StringBuffer, as it has been discussed in this related question: Custom logging to gather messages at runtime
This looks similar: Implementation of Java Tail
Essentially you use a BufferedReader. Tracking where you left off will be something you'll have to add, perhaps capture the last line read?
That same question references JLogTailer which looks interesting and may do most of what you want already.
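For the "resume where it left off" requirement, a hedged sketch of the usual trick: remember the byte offset you have already processed, persist it somewhere (here a small side file whose name is made up), and seek back to it on restart.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

// Sketch: file names are placeholders. Reads any lines appended since the
// saved offset, then stores the new offset so a crash can resume from there.
public class ResumableTail {
    public static void poll(Path log, Path offsetFile) throws IOException {
        long offset = Files.exists(offsetFile)
                ? Long.parseLong(new String(Files.readAllBytes(offsetFile), StandardCharsets.UTF_8).trim())
                : 0L;
        try (RandomAccessFile raf = new RandomAccessFile(log.toFile(), "r")) {
            if (offset > raf.length()) {
                offset = 0;                        // file was rolled over, start again
            }
            raf.seek(offset);
            String line;
            while ((line = raf.readLine()) != null) {
                System.out.println(line);          // apply your pattern match here
            }
            offset = raf.getFilePointer();
        }
        Files.write(offsetFile, Long.toString(offset).getBytes(StandardCharsets.UTF_8));
    }
}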
I have two java processes which I want completely decoupled from each other.
I figure that the best way to do this is for one to write out its data to a file and the other to read it from that file (the second might also have to write to the file to say it has processed the line).
Problems I envisage are to do with simultaneous access to the file. Is there a good simple pattern I can use to get around this problem? Is there a library that handles this sort of functionality?
Best way to describe it is as a simple direct message passing mechanism I could implement using files. (Simpler than JMS).
Thanks Dan
If you want a simple solution and you can assume that "rename file" is an atomic operation (this is not completely true), each one of the processes can rename the file when reading it or writing to it and rename back when it finishes. The other one will not find the file and will wait until the file appears.
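A hedged sketch of that rename-as-lock idea (the file names and class are made up); each process only touches the file while it holds it under its own temporary name:

import java.io.File;

// Sketch: names are illustrative. renameTo returns false if the file is
// missing or already claimed, which doubles as the "wait" signal.
public class RenameLock {
    public static File claim(File shared, File mine) throws InterruptedException {
        while (!shared.renameTo(mine)) {
            Thread.sleep(200);             // other process has it; wait and retry
        }
        return mine;
    }

    public static void release(File mine, File shared) {
        if (!mine.renameTo(shared)) {
            throw new IllegalStateException("could not hand the file back");
        }
    }
}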
You mean like a named pipe? It's possible, but Java doesn't allow pipe creation unless you use non-portable processes.
You are asking for functionality that is exactly what JMS does. JMS is an API which has many implementations. Can you not just use a lightweight implementation? I don't see why you think this is "complicated". By the time you've managed to reliably implement your solution you'll have found that it's not trivial to deal with all the edge cases.
Correct me if I don't understand your problem...
Why don't you look at file locks? When a program acquires the lock, the other waits until the lock is released.
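A minimal sketch of an inter-process lock with NIO (the lock-file name and class are made up); FileChannel.lock() blocks until the other process releases its lock:

import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

// Sketch: "shared.lock" is a placeholder. The OS-level lock is released
// automatically when the channel is closed, even if the process dies.
public class CrossProcessLock {
    public static void withLock(Runnable work) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("shared.lock", "rw");
             FileChannel channel = raf.getChannel();
             FileLock lock = channel.lock()) {        // blocks until available
            work.run();
        }
    }
}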
If you are not locked into a file-based solution, a database can solve your problem.
Each record will be a line written by the writing process. A single column in the record will be left untouched, and the reading process will use it to indicate that it has read the record.
Naturally you will have to deal with cleanup of the table before it becomes too large, or with partitioning it so it will be easy for the reading process to find information inside it.
If you must use a file - you can think of another file that just has the ID of the record that the reader process read - that way you don't need to have concurrently writing processes on the same file.
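A hedged sketch of that table-based handoff (the table and column names are invented, and connection setup is omitted): the writer inserts rows, the reader picks up unprocessed ones and flags them.

import java.sql.*;

// Sketch: table and column names are illustrative.
public class MessageTable {
    // writer process
    public static void write(Connection con, String line) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO messages (payload, processed) VALUES (?, FALSE)")) {
            ps.setString(1, line);
            ps.executeUpdate();
        }
    }

    // reader process: fetch unprocessed rows, then mark them as read
    public static void readNew(Connection con) throws SQLException {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                "SELECT id, payload FROM messages WHERE processed = FALSE ORDER BY id")) {
            while (rs.next()) {
                long id = rs.getLong("id");
                System.out.println(rs.getString("payload"));
                try (PreparedStatement upd = con.prepareStatement(
                        "UPDATE messages SET processed = TRUE WHERE id = ?")) {
                    upd.setLong(1, id);
                    upd.executeUpdate();
                }
            }
        }
    }
}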
I have looked at the source code of the Apache Commons FileUtils class to see how it implements Unix-like touch functionality. But I wanted to confirm with the community here whether my use case would be met by that implementation, since it opens and closes a FileOutputStream to provide the touch behaviour.
We have two webservers and one common server between them where a file resides.
For our application we need to use the last modified time of this file to make some decisions. We actually don't want to modify the file, only change its last modified date when some particular activity happens on one of the webservers.
It's important that the last modified time set for the file is taken from the central server, to avoid worrying about time differences between the two webservers. Therefore calling file.setLastModified is not a good option, as the webserver would send its own time.
But I am wondering: even if I use the Apache Commons FileUtils touch method to do this, would closing the stream on one webserver set the last modified time of the file using the time of the webserver or of the central server?
Sorry for so much detail, but I could not see any other way to explain the issue.
If you "touch" a file in the filesystem of one webserver, then the timestamp of the file will be set using the clock of that server. I don't think you can solve your problem that way.
I think you've got three options:
configure the servers to synchronize their clocks to the common timebase; e.g. using NTP,
put all files whose timestamps must be accurate to the common timebase on one server, or
change your system design so that it is immune to problems with different servers' clocks being out of sync.
It would be much better to make use of a shared database if you have one so that you can avoid issues of concurrency and synchronisation. I can't recommend any simple and safe distributed file flag system.
I have a set of files. The set of files is read-only off an NTFS share, and thus can have many readers. Each file is updated occasionally by one writer that has write access.
How do I ensure that:
If the write fails, that the previous file is still readable
Readers cannot hold up the single writer
I am using Java and my current solution is for the writer to write to a temporary file, then swap it out with the existing file using File.renameTo(). The problem is that on NTFS, renameTo fails if the target file already exists, so you have to delete it yourself. But if the writer deletes the target file and then fails (computer crash), I don't have a readable file.
NIO's FileLock only works within the same JVM, so it is useless to me.
How do I safely update a file with many readers using Java?
According to the JavaDoc:
This file-locking API is intended to map directly to the native locking facility of the underlying operating system. Thus the locks held on a file should be visible to all programs that have access to the file, regardless of the language in which those programs are written.
I don't know if this is applicable, but if you are running in a pure Vista/Windows Server 2008 solution, I would use TxF (transactional NTFS) and then make sure you open the file handle and perform the file operations by calling the appropriate file APIs through JNI.
If that is not an option, then I think you need to have some sort of service that all clients access, which is responsible for coordinating the reading/writing of the file.
On a Unix system, I'd remove the file and then open it for writing. Anybody who had it open for reading would still see the old one, and once they'd all closed it, it would vanish from the file system. I don't know if NTFS has similar semantics, although I've heard that it's loosely based on BSD's file system, so maybe it does.
Something that should always work, no matter what OS etc, is changing your client software.
If this is an option, then you could have a file "settings1.ini" and if you want to change it, you create a file "settings2.ini.wait", then write your stuff to it and then rename it to "settings2.ini" and then delete "settings1.ini".
Your changed client software would simply always check for settings2.ini if it has read settings1.ini last, and vice versa.
This way you always have a working copy.
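A hedged sketch of that alternation on the writer side (the file names follow the example above, the class and method are made up, and the ".wait" suffix is as described):

import java.io.IOException;
import java.nio.file.*;

// Sketch of the settings1.ini / settings2.ini alternation described above.
// On the next update the roles of settings1 and settings2 simply swap.
public class SettingsSwap {
    public static void publishNew(byte[] newContents) throws IOException {
        Path current = Paths.get("settings1.ini");
        Path next    = Paths.get("settings2.ini");
        Path staging = Paths.get("settings2.ini.wait");

        Files.write(staging, newContents);       // write the new version fully
        Files.move(staging, next);                // rename it into place
        Files.deleteIfExists(current);            // retire the old version
    }
}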
There might be no need for locking. I am not too familiar with the FS API on Windows, but as NTFS supports both hard links and soft links, AFAIK, you can try this if your setup allows it:
Use a hard or soft link to point to the actual file, and name the file differently. Let everyone access the file using the link's name.
Write the new file under a different name, in the same folder.
Once it is finished, have the link point to the new file. Optimally, Windows would allow you to create the new link, replacing the existing link, in one atomic operation. Then you'd effectively have the link always identify a valid file, either the old or the new one. At worst, you'd have to delete the old link first, then create the link to the new file. In that case, there'd be a short time span in which a program would not be able to locate the file. (Also, Mac OS X offers an "ExchangeObjects" function that allows you to swap two items atomically - maybe Windows offers something similar.)
This way, any program that has the old file already opened will continue to access the old one, and you won't get in its way creating the new one. Once an app notices the existence of the new version, it can close the current file and open it again, thereby getting access to the new version.
I don't know, however, how to create links in Java. Maybe you have to use some native API for that.
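For reference, java.nio.file (Java 7 and later) can create the link and perform the replace without native code; a minimal sketch with made-up names:

import java.nio.file.*;

// Sketch: names are illustrative. A new symlink is created under a temporary
// name and then moved over the public name; on POSIX filesystems that rename
// replaces the old link atomically.
public class LinkSwap {
    public static void repoint(Path publicName, Path newVersion) throws Exception {
        Path tempLink = publicName.resolveSibling(publicName.getFileName() + ".tmp");
        Files.deleteIfExists(tempLink);
        Files.createSymbolicLink(tempLink, newVersion);
        Files.move(tempLink, publicName, StandardCopyOption.REPLACE_EXISTING);
    }
}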
I hope this helps anyways.
I have been dealing with something similar recently. If you are running Java 5, perhaps you could consider using NIO file locks in conjunction with a ReentrantReadWriteLock? Make sure all code referencing the FileChannel object ALSO references the ReentrantReadWriteLock. This way the NIO locks it at a per-VM level while the reentrant lock locks it at a per-thread level.
FileLock fileLock = fileChannel.lock(position, size, shared);
reentrantReadWriteLock.lock();
try {
    // do stuff with the locked region
} finally {
    fileLock.release();
    reentrantReadWriteLock.unlock();
}
Of course, some exception handling would be required.