Cassandra terminates if no space on disk - java

I am using Cassandra DB in my Java application, with a Thrift client to connect to it. If the Cassandra disk gets full, Cassandra automatically terminates, so from my Java program I cannot determine the actual error, i.e. why Cassandra is down.
So how can I avoid the automatic termination of Cassandra, or is there any way to identify the disk-full error?
Also, I don't have physical access to the Cassandra drive; it's running on some other remote machine.

Disk errors and, in general, hardware/system errors are not usually handled gracefully in any application. In such scenarios the database should simply provide as much durability as possible, and shutting down while breaking as little as possible is the correct behavior.
As for your application: if you cannot connect to the database, it makes no difference what caused the error. Your app will not work anyway.
There are dedicated tools that can monitor your machine, e.g. Nagios. If you are the administrator of that server, use such applications; when the disk is filling up you will receive an email or a text message. Use such tools rather than reinventing the wheel with several hundred lines of code to handle random and very rare situations.

Set up SSH access to the Cassandra machine and use an SSH client library such as JSch to run df /cassandra/drive (on Linux) or fsutil volume diskfree c:\cassandra\drive (on Windows) from your Java client. Capture the output, which is simple to parse, to obtain the free disk space. That way your application can monitor what is happening on that machine, and it should probably alert the user and refuse to add data when it is in danger of running out of disk space.
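A rough sketch of that approach with JSch, assuming password authentication; the host, the credentials, and the /var/lib/cassandra data directory below are placeholders:

    import com.jcraft.jsch.ChannelExec;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class DiskSpaceCheck {
        public static void main(String[] args) throws Exception {
            JSch jsch = new JSch();
            // Placeholder credentials and host - replace with your own.
            Session session = jsch.getSession("user", "cassandra-host", 22);
            session.setPassword("secret");
            session.setConfig("StrictHostKeyChecking", "no");
            session.connect();

            ChannelExec channel = (ChannelExec) session.openChannel("exec");
            channel.setCommand("df -P /var/lib/cassandra"); // -P: one line per filesystem
            BufferedReader out = new BufferedReader(
                    new InputStreamReader(channel.getInputStream()));
            channel.connect();

            out.readLine();                                 // skip the header line
            String[] fields = out.readLine().trim().split("\\s+");
            long availableKb = Long.parseLong(fields[3]);   // the "Available" column
            System.out.println("Free space: " + availableKb + " KB");

            channel.disconnect();
            session.disconnect();
        }
    }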
You can also use standard monitoring tools, or set up a server-side script that sends a message when disk space is low. However, this alone will not stop your application from crashing; you still need to take action once you see that disk space is running out.

Related

Is there any way to divide server resources between users of a Java application?

I wonder if there is any way to partition server resources between the users of a Java application running under Tomcat?
Problem description
We have an application written in Java and running under the control of a Tomcat server. Users can sometimes trigger actions that put the server under 100% load for a long period of time. This calls for some per-user limit on server resources, to prevent a single user from crashing the server.
For the moment the only idea I've come up with is to containerize the whole application in Docker and launch a separate resource-limited container for each user. It feels like I'm missing an easier solution.
How do you intend to split server resources?
Memory is shared across the JVM, and you cannot set a memory limit for a particular thread. Spawning a new process is the only way to limit memory further in an easy and maintainable way.
If you want to avoid that, you will probably have to rework your memory-intensive code.
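If you do go the per-process route, here is a hedged sketch of the Docker idea from the question, launching each user's work in a memory- and CPU-capped container via ProcessBuilder (the image name, the limits, and the worker flags are made up for illustration):

    import java.io.IOException;

    public class LimitedJob {
        // Launches a hypothetical per-user worker inside a resource-capped container.
        static Process launchForUser(String userId) throws IOException {
            return new ProcessBuilder(
                    "docker", "run", "--rm",
                    "--memory=512m",           // hard memory cap for this user's work
                    "--cpus=1.0",              // at most one CPU core
                    "--name", "job-" + userId,
                    "my-app-worker:latest",    // placeholder image containing the worker
                    "--user", userId)
                .inheritIO()
                .start();
        }
    }

Because the job runs in a child process, an OutOfMemoryError or OOM kill there cannot take down the main Tomcat JVM.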

How to find out which application spawned a strange java process?

Good day everybody!
I'm developing a Java GWT web application. Yesterday it was working fine: Task Manager was showing the NetBeans process and ONE java process, which was definitely Tomcat. But today I'm observing the NetBeans process, Tomcat's java process, and some unknown java process that causes a Java heap space error. This strange process eats a lot of memory, and its memory consumption grows dramatically over time.
Possibly useful information: the only thing I changed in my app is that I dropped the database and recreated it from a backup. I suspected the JDBC driver couldn't connect to the DB because of incorrect user privileges, but that is not the problem, since queries are performing successfully; yet the strange java process is still there.
Question: how can I find out where this unknown java process comes from? Which application creates it: NetBeans, Tomcat, or something else?
On a Unix platform, ps has several options that show more than just the process name ("java"). For example, on Linux try ps ax | grep java and you'll see the whole command line that was used to start each java process. From there it's easy to determine which process is which and what it's supposed to do.
On Windows you'll have to find an equivalent. If you're lucky, the user executing the process will give you a hint as well, e.g. whether it's you or SYSTEM (for services), but the full command line definitely beats it.
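As a side note, on Java 9 and later you can get the same information from within Java itself via the ProcessHandle API; a small sketch:

    public class ListJavaProcesses {
        public static void main(String[] args) {
            // Print the pid and full command line of every visible java process.
            ProcessHandle.allProcesses()
                .filter(p -> p.info().command().orElse("").contains("java"))
                .forEach(p -> System.out.println(p.pid() + " -> "
                        + p.info().commandLine().orElse("(not visible)")));
        }
    }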
OK, I found the reason: I select a lot of data from my DB. It seems the JDBC driver kept loading the incoming data into memory until memory ran out.
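For anyone who hits the same thing: MySQL Connector/J can stream rows one at a time instead of buffering the entire result set, if you create a forward-only, read-only statement with a fetch size of Integer.MIN_VALUE. A sketch (the table name is made up):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    class StreamingQuery {
        // Reads a large table row by row instead of buffering everything in memory.
        static void dump(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement(
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                stmt.setFetchSize(Integer.MIN_VALUE); // MySQL-specific streaming mode
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                        // handle one row at a time here
                    }
                }
            }
        }
    }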

Java multithreaded socket server hangs after ~50 simultaneous connections

So basically the problem is described in the title.
The server works in the following way:
Listens for a new connection
Once a connection is requested, adds the request to the queue,
Continues listening for new connections
A separate process takes care of the queue and spawns a new thread to deal with each client's request.
The server code is similar to this tutorial (everything is wrapped in try/catch; unfortunately I can't show the source code due to company policy).
It seems to work very well until the number of clients exceeds ~50; then it just hangs with no exceptions, warnings, etc. There is a kernel thread limit of 32k and there are no limits on the number of open files, open sockets, etc. The OS is CentOS 5.5 (the same seems to happen on Ubuntu, though). The server logs data to MySQL using ODBC. Separate stress tests of both showed that I can have up to 32k Java threads (limited by /proc/sys/kernel/threads-max) and that MySQL can perform up to 20k simple operations per second, so I'm assuming the problem is with the sockets.
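For context, the accept/queue/worker pattern described above commonly looks something like this (a generic sketch, not the poster's code; the port and pool size are arbitrary):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class QueuedServer {
        public static void main(String[] args) throws IOException {
            // A fixed pool bounds the number of handler threads; the pool's
            // internal queue plays the role of the "Q" described above.
            ExecutorService workers = Executors.newFixedThreadPool(64);
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket client = server.accept();      // blocks for the next client
                    workers.submit(() -> handle(client)); // hand off, keep listening
                }
            }
        }

        private static void handle(Socket client) {
            try (Socket c = client) {
                // read the request, log to MySQL, write the response ...
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

A bounded pool like this also makes it easy to rule the thread count out as the limiting factor.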
So the question really is:
What is the limiting factor in socket connections and how can I make it bigger?
OR am I looking in the wrong place?
The chances are that you have induced a deadlock somewhere in the code. The key indicator here is if by 'hang' you mean that the server's CPU usage drops to nothing and no further activity is seen in the server.
When the server hangs, run the JDK tool jstack against its process. This should show you what is waiting on which lock. Also in the toolkit is jvisualvm, and on a Unix box a simple kill -3 pid will print a thread dump to stderr.
Without the code, or at least a reproducible sample, I'm afraid I can't help much more. One thing you might want to look at is using Jetty as your embedded server instead of a hand-rolled one; the Jetty developers have already been through the deadlock/threading pain, so you don't have to.
I don't know if this will help you, or whether you are already using it, but try running your socket server with the Java switch -server, which selects the Java HotSpot Server VM. The -server flag turns on the optimizing JIT along with a few other "server-class" settings; generally you get the best performance out of this setting. The default VM is -client.
Also check your other parameters so that your socket server doesn't run with minimal resources.
Have a nice day

How to monitor exceptions or errors generated by other Java applications?

I want to find or develop an application that can run as a daemon and notify the administrator by email or SMS when the Java applications running on a host throw any exceptions or errors. I know JVMTI can achieve part of my goal, but it will impact the performance of the monitored applications (I don't know by how much; it would be acceptable if it's slight). Besides, developing a JVMTI agent seems to be a troublesome job, and I'm not sure what would happen if several applications ran at the same time using the same agent. Is there any better solution? Thanks in advance.
One way would be to use a logging framework like log4j that publishes all errors occurring on system A to a logging server on system B, from which you can monitor the errors that occurred. This isn't a completely generic solution, however, since only exceptions propagated to log4j (or any other logging framework) would be handled, but it may be a good start.
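With log4j 1.x, for example, this can be a SocketAppender attached to the root logger, with a SimpleSocketServer receiving the events on system B. A sketch; the host name and port are placeholders:

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;
    import org.apache.log4j.net.SocketAppender;

    public class RemoteErrorLogging {
        public static void main(String[] args) {
            // Ship every ERROR (and worse) event to the log server on system B.
            SocketAppender remote = new SocketAppender("logserver.example.com", 4712);
            remote.setThreshold(Level.ERROR);
            Logger.getRootLogger().addAppender(remote);

            Logger.getLogger(RemoteErrorLogging.class)
                  .error("Something broke", new IllegalStateException("example"));
        }
    }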
The best solution is to have the Java application send its errors via email/SMS. The problem is that programs generate exceptions and handle them correctly in normal operation; you are only interested in particular exceptions.
Failing this, you could write a log reader that reads the application's logs. This is tricky to get right, but it can be done.
An application can generate 1,000+ exceptions per day and still be behaving normally, because the application knows how to handle those exceptions; e.g. an exception can be thrown every time a socket connection is closed.
IMO, the best approach is to deploy an external monitoring system. This can:
monitor multiple applications
monitor infrastructure services
monitor network availability and machine accessibility,
monitor resources such as processor and file system usage.
Applications can be monitored in a variety of ways, including:
by processing log events,
by watching for application restarts,
by "pinging" the application's web apis to check service liveness, and
by using the application's JMX interfaces.
This information can be filtered and prioritized in an intelligent fashion, and critical events can be reported by whatever means is most appropriate.
You don't want individual applications sending emails, because they don't have enough information to do a decent job. Furthermore, putting the reporting logic into individual applications is likely to lead to inconsistent implementations, poor configurability, and so on.
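To illustrate the JMX option above: a monitoring daemon can poll another JVM's platform MBeans remotely, assuming the target JVM was started with the com.sun.management.jmxremote.* options (the host and port below are placeholders):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxPoller {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://app-host:9999/jmxrmi");
            try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                        conn, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
                long used = memory.getHeapMemoryUsage().getUsed();
                System.out.println("Heap used: " + used / (1024 * 1024) + " MB");
                // Send an email/SMS here if 'used' crosses a threshold.
            }
        }
    }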
There is a nearby alternative to JVMTI: JPDA. This infrastructure allows you to create a remote "debugger" (yes, that's what you're planning to do) in Java code and connect it to the target VM using either a local or a remote connection.
There will be, as with JVMTI, an overhead to program execution. However, as the Trace.java example shows, it's quite simple to both implement the agent and connect it to the target VM.
Finally, note that if you want to instrument code run by an application server (JBoss, Glassfish, Tomcat, you name it), various other means are available.
I follow the pattern where every exception gets logged to a table.
Then an RSS feed selects from that table.
I subscribe to the RSS feed in MS Outlook at work and also on my Android phone with a program called NewsRob, which lets me set the phone to alert me when there is something new.
I blog about how to do this HERE. It is in .NET, but you get the idea.
As a related step, I found a way to notify myself when something DIDN'T happen. That blog post is HERE.
There are plenty of applications out there that do what you are looking for without impacting performance. Have you had a look at Kibana/Elasticsearch, Splunk, or Logscape for enterprise solutions? (They also have free versions.)
I'm going to echo what has already been said and highlight what Java already provides versus what you can do with an external monitoring system. Java already provides:
log4j - log ERROR, WARN and FATAL events, along with exceptions, to a file
JMX - create custom application metrics; you also have access to java.lang/*, which gives you heap memory usage, garbage collection, thread counters, etc.
JVM GC logging - you can log all your garbage collection events to a file and watch for long full GC pauses.
An external monitoring system will allow you to set alerts triggered by different operational scenarios. You will also get visualisation of your system's performance through charts. I've used Logscape's Java app in the past to monitor 30 Java processes spread over 3 hosts.

Reliable non-network IPC in Java

Is there a reliable, cross-platform way to do IPC (between two JVMs running on the same host) in Java (J2SE) that doesn't rely on the network stack?
To be more specific, I have a server application that I'd like to provide a small "monitoring" GUI app for. The monitor app would simply talk to the server process and display simple status information. The server app has a web interface for most of its interaction, but sometimes things go wrong (port conflict, user forgot password) that require a local control app.
In the past I've done this by having the server listen on 127.0.0.1 on a specific port and having the client communicate that way. However, this isn't as reliable as I'd like. Certain things can break it (Windows' network stack can be bizarre with VPN adapters, MediaSense, laptop lids closing, and power-saving modes). You can imagine the user's confusion when the tool they use to diagnose the server doesn't even think the server is running.
Named pipes seem plausible, but Java doesn't seem to have an API for them, unless I'm mistaken. Ideas? Third-party libraries that support this? My performance requirements are obviously extremely lax, in case that helps.
One of my specialties is really low-tech solutions, especially if your performance requirements aren't critical:
The low-low-tech alternative to named pipes is named FILES. Think up a protocol where one app writes a file and another reads it. If need be, you can do semaphoring between them.
Remember that a rename is pretty much an atomic operation, so you can calmly write a file in one process and then make it magically appear in its entirety by renaming/moving it from somewhere that wasn't previously visible.
You can poll for data by checking for appearance of a file (in a loop with a SLEEP in it), and you can signal completion by deleting the file.
An added benefit is that you can debug your app using the DIR command :)
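A sketch of that write-then-rename handoff with java.nio.file (the directory and file names are made up):

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class FileHandoff {
        static final Path DIR = Paths.get("/tmp/app-ipc"); // placeholder directory

        // Writer: stage the data invisibly, then publish it atomically via rename.
        static void publish(String status) throws Exception {
            Files.createDirectories(DIR);
            Path tmp = DIR.resolve("status.tmp");
            Files.write(tmp, status.getBytes(StandardCharsets.UTF_8));
            Files.move(tmp, DIR.resolve("status"), StandardCopyOption.ATOMIC_MOVE);
        }

        // Reader: poll for the file, consume it, delete it to signal completion.
        static String poll() throws Exception {
            Path status = DIR.resolve("status");
            while (!Files.exists(status)) {
                Thread.sleep(500);                          // the SLEEP in the loop
            }
            String data = new String(Files.readAllBytes(status), StandardCharsets.UTF_8);
            Files.delete(status);
            return data;
        }
    }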
Depending on how much data you need to pass between the server and the diagnostic tool you could:
go low-tech and have a background thread check a file in the file system, fetch commands from it, and write output into a second file to be picked up by the diagnostic tool;
build a component that manages an input/output queue in shared memory, connecting to it via JNI.
Consider JMX. I do not know whether any of the Windows JVMs allow JMX over shared memory.
Does Windows even have named pipes? I was going to suggest them. You'd just have to use an exec() to create one.
Map a read-write byte buffer into memory from a FileChannel. Write status information into the byte buffer, then call force() to get it written out. On the monitor side, open the same file and map it into memory too, then poll it periodically to find out the status.
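A minimal sketch of that idea (the file path and the buffer layout are arbitrary):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class SharedStatus {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile file = new RandomAccessFile("/tmp/server.status", "rw");
                 FileChannel channel = file.getChannel()) {
                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, 64);

                // Server side: write a status code and a heartbeat, flush to disk.
                buf.putInt(0, 1);                           // 1 = running
                buf.putLong(4, System.currentTimeMillis()); // heartbeat timestamp
                buf.force();

                // Monitor side (in a separate process, mapping the same file):
                int status = buf.getInt(0);
                long lastBeat = buf.getLong(4);
                System.out.println("status=" + status + " lastBeat=" + lastBeat);
            }
        }
    }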
