I am building a project using Hudson. I have a few jar files which I want to sign with a timestamp using the Ant SignJar task. It works like a charm when there is no timestamp - it takes about 2-3 seconds per file. The problem appears when I add the 'tsaurl' attribute to the SignJar task. Then signing takes a few MINUTES per file. I tried different timestamp servers and it did not help. Does anybody know why it is taking so much time? And, more importantly, is there any way to fix this issue?
The main problem I had with jarsigner taking too long (on Linux, at least) was the kernel entropy pool drying up. At that point, the process blocks until more entropy arrives. This leads to the symptoms you are seeing: the jarsigner process sitting there, not eating CPU time but not doing much else either.
At some point (from 1.5 to 1.6, AFAIK), Java went from using /dev/urandom to /dev/random. Real entropy is actually a scarce resource on modern computers: lots of RAM decreases disk activity, and smart programs that cache things decrease network activity. I'm told that on a virtual machine (which many build servers live on) entropy collection rates can be even lower.
You can either
(A) Reconfigure Java back to using /dev/urandom (if you're not paranoid)
(B) Deploy a means of injecting additional entropy into /dev/random
I went for option B: I installed the randomsound package for my distro (Ubuntu). It samples your microphone for white noise and uses it to inject entropy into /dev/random when it runs dry. The main downside is that it blocks the microphone for other uses. There are other ways of generating extra entropy, such as copying a disk to /dev/null or running a package update (lots of disk and network traffic). You may want to kit one or more of your servers out with a hardware RNG and install something that can serve entropy-as-a-service to the others. Even a USB sound card and randomsound would work (there's plenty of white noise in a server room...).
For option A, you can set the property
-Djava.security.egd=file:/dev/./urandom
(Note the extra dot: it works around a "smart" bit of code that assumes you want /dev/random even if you don't say so; see https://bugs.openjdk.java.net/browse/JDK-6202721.)
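To check whether entropy starvation is actually your problem, a small diagnostic like the following (the class and method names are mine, not part of any standard API) times how long it takes to pull a seed from the JVM's configured entropy source. Run it once with and once without the flag above and compare:

```java
import java.security.SecureRandom;

public class EntropyCheck {
    // Times how long pulling a 20-byte seed from the JVM's configured
    // entropy source takes - the same operation jarsigner can block on.
    static long timeSeedMillis() throws Exception {
        SecureRandom sr = SecureRandom.getInstance("SHA1PRNG");
        long start = System.nanoTime();
        sr.generateSeed(20);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("generateSeed(20) took " + timeSeedMillis() + " ms");
    }
}
```

On an entropy-starved machine the default source can take seconds or minutes; with the urandom flag it should return almost instantly.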
Related
I currently have an OSB project with a set of 21 modules that takes roughly 4 minutes to build on my local 2-core/12GB RAM laptop running Windows, using no threading, just a simple build install. It takes 10-20 seconds per module.
When building this exact same project on my CI server running on Ubuntu, with 8 cores/16GB RAM, the build time is closer to 110 minutes, at around 4 minutes per module.
Some details on the Linux build:
Most of these 4 minutes per module are spent sitting idle at 0% CPU utilization.
MAVEN_OPTS are "-Xmx512m -Xms512m"
Same build time on Java 7 and 8
When running with the -X flag it spends most of its time at "-- end configuration --"
I have tried increasing the file descriptor limit, thinking this was the problem. This did not do anything to the build time.
After profiling maven with VisualVM both on Windows and Linux I found that on Linux it spent abnormal amounts of time generating a random seed.
So by changing to the (slightly less secure) /dev/./urandom, build time went from 110 minutes down to 1 minute 47 seconds.
One way to do this is to pass the setting in as a flag:
-Djava.security.egd=file:/dev/./urandom
If you would like to set this permanently, this can be done in the file jdk1.7.0_75/jre/lib/security/java.security by changing:
securerandom.source=file:/dev/urandom
to
securerandom.source=file:/dev/./urandom
This may have security implications, so you should do some research first if you need to do this.
There are a lot of variables here. I can't provide an answer, but in general I try to pare the problem down as small as possible. You said it's a set of 21 modules. Is it equally slow with 1? I know you said you get 4 min/module, but that's not the same as a project with 1 module in it. The sheer scope of the file descriptors (ulimit) can be really troublesome, even if you're only looking at one module at a time during the build.
Second, ensure your own laptop's environment variables are similar. Windows to Linux is not exactly the easiest comparison, but you should be able to determine whether JAVA_OPTS, MAVEN_OPTS, and the various -X/-D flags are the same, whether -Xms/-Xmx are set the same, etc.
Further, have you reviewed any of the Google-able results I found?
Why is my Maven so slow on Ubuntu?
http://zeroturnaround.com/rebellabs/your-maven-build-is-slow-speed-it-up/
The difficult part of your problem is that we're not looking at even remotely similar environments. We don't know:
environment variables
settings.xml (and /etc/.../.settings.xml)
Is the CI server software running the build, or are you just running the same mvn clean install on both your local machine and the remote?
etc. etc.
And I can't say that this site would even be the best place to have someone troubleshoot this. If you're building an OSB set of projects, you might have better luck filing an SR with Oracle Support and asking them to help you out per your support plan. At least in an SR there's a bit more back-and-forth in the communication. Here, you're expected to provide all the possible information up front, and then people spitball the answer to you. Without any data in your question we have nowhere to go, and only wild assumptions/guesses to make.
I am looking for a Java API that will allow registering for file system mount events, i.e. when a file system is mounted or dismounted. Specifically I want to know when a file system on removable USB devices is available, and also know exactly what type of USB device it was.
The udev subsystem provides notifications on USB plug and unplug events by default but not specifically when the file system on the device is available. It is possible to create udev rules that can do this in pieces, e.g. create a directory and execute a program when devices are added and removed. But my experience with udev rules is that the syntax is arcane and they are fragile and not simple to debug.
I've installed usbmount per this post:
https://serverfault.com/questions/414120/how-to-get-usb-devices-to-automount-in-ubuntu-12-04-server
though I believe the devices were automounting by default.
As an alternative I constructed a JDK 7 WatchService on /media which can detect changes corresponding to entries in /etc/mtab. This works, but I have seen cases where the file systems on some USB devices are still not ready - meaning that attempts to read the directory throw an Exception - even after the entry in /etc/mtab is made. I added a timer to sleep for a configurable number of milliseconds; in most cases a 100 ms wait works, but not 100% of the time. This means that increasing the wait time is neither an absolute guarantee nor deterministic.
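For reference, a simplified sketch of that WatchService approach (class and method names are mine; a real version would loop rather than wait for a single batch of events):

```java
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class MountWatcher {
    // Waits up to timeoutMs for new entries to appear in dir and returns them.
    // Pointing dir at /media (or /media/<user>) means each returned path is,
    // on most desktop Linux setups, a freshly mounted volume.
    public static List<Path> awaitNewEntries(Path dir, long timeoutMs) throws Exception {
        List<Path> created = new ArrayList<>();
        try (WatchService ws = dir.getFileSystem().newWatchService()) {
            dir.register(ws, StandardWatchEventKinds.ENTRY_CREATE);
            WatchKey key = ws.poll(timeoutMs, TimeUnit.MILLISECONDS); // blocks until event or timeout
            if (key != null) {
                for (WatchEvent<?> ev : key.pollEvents()) {
                    created.add(dir.resolve((Path) ev.context()));
                }
                key.reset(); // re-arm the key for further events
            }
        }
        return created;
    }

    public static void main(String[] args) throws Exception {
        Path dir = Paths.get(args.length > 0 ? args[0] : "/media");
        System.out.println("New entries: " + awaitNewEntries(dir, 60_000));
    }
}
```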
Clearly at some low level the mount event is being generated because the Nautilus pop-up window gets displayed. I had a case of one flash drive that would put the Nautilus icon in the launch pad menu but it would not mount until the icon was clicked open.
I've also looked at these options:
tailing /var/log/syslog; this may be the next best option. I see lines like the following:
Dec 2 08:58:07 fred-Inspiron-530 udisksd[1759]: Mounted /dev/sdk1 at /media/fred/USB DISK1 on behalf of uid 1000
I am going to try a WatchService here to see if the same timing issue exists, i.e. whether the directory is readable once this message is written.
jlibudev [github.com/nigelb/jlibudev] is a much better Java API to the udev subsystem than writing rules, but it still falls short in that you have to piece a number of different events together. NB: jlibudev depends on JNA [https://github.com/twall/jna] and purejavacomm [github.com/nyholku/purejavacomm, sparetimelabs.com/purejavacomm/purejavacomm.php], both of which are pretty useful in their own right.
lsusb provides details on the USB device but nothing about where it is mounted.
Ideally I would like a simple API that allows registering for file system mount/unmount events using the standard Java event listening pattern. I want to believe that such an API exists, or is at least possible, given that at a macro level the net effect is occurring. I am still scouring the JDK 7 and JDK 8 APIs for other options.
Any and all pointers and assistance would be greatly appreciated.
Since there's no OS-agnostic way to deal with mounting filesystems, there's definitely no JDK API for this. I'm guessing this problem is not dealt with much (not a lot of programs deal with mounting filesystems directly), so it's unlikely that there's any prebuilt library out there waiting for you.
Of the approaches you mentioned, they all sound roughly equal in terms of how platform-specific they are (all Linux-only), so that just leaves performance and ease of coding as open questions. Regarding performance, running lsusb more than once a second is (a) a giant hack :-) and (b) fork+exec is slow compared to running something in-process; tailing the event log will create a lot of (unpredictable) work for your program that is unrelated to USB mounts, as well as making your implementation more fragile (what if the message strings change when you upgrade your OS?). Regarding ease of programming, using JNA or JNI to call into libudev and a WatchService on /media sound about equal; using libudev seems like the most portable option across Linux distros / user configurations (I'm guessing that's what Nautilus uses).
However, for simplicity of implementation that will work for 99% of users, it's hard to do better than a WatchService on /media. To help ensure that the filesystem is available before use, I would use a loop with some kind of randomized exponential backoff in the wait between attempts to read the directory: that way you never wait much longer than necessary for the filesystem to mount, you aren't burning tons of CPU waking up and trying to read, and you don't have to pick a single timeout number that won't work everywhere. If you care enough to ensure you don't tie down a single thread sleeping forever, I'd use a ScheduledExecutorService to issue Runnables that try to access the filesystem; if it isn't available, they schedule themselves to run again in a bit, otherwise they alert your main thread via a queue of some kind that a new filesystem is available for use.
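A minimal sketch of that backoff loop (class and method names are mine; the ScheduledExecutorService variant would replace the Thread.sleep with rescheduling):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Random;

public class MountReadyProbe {
    // Retries listing a directory with randomized exponential backoff until
    // it becomes readable or the retry budget runs out; true on success.
    public static boolean awaitReadable(Path dir, int maxAttempts) throws InterruptedException {
        Random rnd = new Random();
        long delay = 50;                        // initial backoff in ms
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
                return true;                    // mount is usable
            } catch (IOException notReadyYet) {
                Thread.sleep(delay + rnd.nextInt((int) delay)); // jittered wait
                delay = Math.min(delay * 2, 2_000);             // cap the backoff
            }
        }
        return false;
    }
}
```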
Edit: I just learned that you could also watch for updates to the /proc/mounts file. Hopefully, since the kernel is responsible for updating this file, things only show up when they're fully mounted, although I don't know that for certain. For more details, "How to interpret /proc/mounts?" and the Red Hat docs were useful.
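If you go that route, parsing /proc/mounts is straightforward; here's a rough sketch (class name is mine, and octal escapes like \040 for spaces in mount paths are deliberately left unhandled):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ProcMounts {
    // Parses /proc/mounts-style lines: "device mountpoint fstype options dump pass".
    static Map<String, String> parse(List<String> lines) {
        Map<String, String> deviceToMountPoint = new LinkedHashMap<>();
        for (String line : lines) {
            String[] f = line.trim().split("\\s+");
            if (f.length >= 2) deviceToMountPoint.put(f[0], f[1]);
        }
        return deviceToMountPoint;
    }

    public static void main(String[] args) throws IOException {
        parse(Files.readAllLines(Paths.get("/proc/mounts")))
            .forEach((dev, mp) -> System.out.println(dev + " -> " + mp));
    }
}
```

Combined with a WatchService (or a periodic re-read), diffing two snapshots of this map tells you which devices appeared or disappeared.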
I am trying to develop a framework that will compile and execute (mostly random) C++ and Java packages.
However, given their random nature, I want to check the source (or the executable, pre-execution) for any Linux system calls before execution. If there is such a system call, I don't want to execute the program.
It is safe to assume that these packages wouldn't need to make any system calls to fulfill their functional purpose (they're not complex packages).
Edit: A bash command/script would be simplest, but any answer is fine.
In short, you cannot reliably detect all malicious syscalls by static analysis of source code; read about the halting problem and Rice's theorem... BTW, MELT would be slightly better than grep, since it works on GCC's GIMPLE representation.
Think of (on Linux)
dlopen(3)-ing the libc (or the main executable) then dlsym-ing "system" to get a pointer to the system function
knowing the libc layout and version, then computing system's address by adding some known offset to the address of malloc
using some JIT library, e.g. the header-only GNU lightning
coding the equivalent of system with fork and execve...
etc....
Of course, you might be trusting your users (I wouldn't for a web application). If you trust all your users and just want to detect mistakes, you might be able to filter some of them.
You need some container, e.g. Docker.
Look into resource limits (setrlimit if you are on a POSIX system) as opposed to trying to find the malicious code.
You can limit the number of processes, memory, open files, CPU time and others. I would suggest limiting basically everything. And run in a chroot jail (even an empty one if you link statically).
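As an illustration of that approach from Java (a hypothetical helper, not a hardened sandbox; the specific limits are arbitrary and should be tuned), one can apply rlimits via a POSIX shell's ulimit builtin before exec'ing the untrusted binary:

```java
import java.io.IOException;

public class LimitedRunner {
    // Runs an untrusted executable under a POSIX shell with resource limits:
    // 5s of CPU time, ~256MB of address space (ulimit -v is in KB), and at
    // most 16 open files. exec replaces the shell, so the limits apply
    // directly to the target process. Returns the child's exit code.
    public static int runLimited(String executable) throws IOException, InterruptedException {
        String cmd = "ulimit -t 5 -v 262144 -n 16; exec " + executable;
        Process p = new ProcessBuilder("sh", "-c", cmd)
                .inheritIO()
                .start();
        return p.waitFor();
    }
}
```

This only limits resources; combine it with chroot (or a container) for actual isolation, since the child can still make any syscall it likes within those limits.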
I have a threaded network application that I run under Eclipse (Indigo) and Java 1.7.x. For quite a while I have noticed that the first run of the application produces degradation in performance at the start and end of the run; for example, if I load up the application and then hit it (using a test harness) with, say, 100 network packets, the first few responses are heavily erratic, as are the last few. [edit] Without unloading the application, just running the test harness again, the application performs normally. [end edit]
I decided to try to get to the bottom of it and loaded up VisualVM 1.3.5 to profile the behaviour. The CPU usage has a distinct spike, going from 10% to over 50%, at the beginning of the run. After the spikes everything appears normal and, as stated above, subsequent runs do not have the leading spikes in CPU utilisation; the profile of subsequent runs is identical to the profile between the spikes of the first run. There doesn't appear to be any evidence that the number of threads is causing it, though there is a small rise. Heap usage increases from 100MB to 200MB, but other than that everything appears normal.
Any thoughts would be welcome.
Thanks
It's fairly typical for system performance to be erratic the first time you run a test. This is due to the operating system reading libraries, JAR files, and other data off disk and storing them in cache. Once this has been done that first time, all subsequent runs will be much faster and more consistent.
Also, keep in mind that the JVM will tend to be slower right after it starts up. Due to its HotSpot analysis and just-in-time compiling, the code needs to run for a little while before the JVM optimizes the bytecode for your particular workload.
This is typical for OSGi environments, where bundles may be initialized lazily upon first access of a bundle's class or services.
You can figure out if this is the case in your scenario by starting Eclipse with the -console and -consolelog arguments.
When the console opens and the application was loaded, issue the ss command and note which bundles are marked LAZY. Then, run your test, issue ss again, and see if one of the LAZY bundles now became ACTIVE. If so, you can force eager start of your bundles via the configuration/config.ini file. This can also be accomplished via the IStartup extension point.
I am having trouble determining what is wrong with my software.
The situation is;
-The program always runs in the background and performs some actions every X minutes.
-Right now it is set to check a certain directory every minute and see if there are new files in it.
-If there are new files, they are processed and moved somewhere else.
-If not, it simply logs the event and goes idle again.
I assume that when new files appear, CPU usage can be somewhat high.
The problem is that even if I don't put new files in the directory for many days, the CPU usage will rise to ~90% every minute when it checks for new entries, then after a few seconds return to <1% usage.
The same process under Windows seems stable, staying at low CPU usage throughout.
If I monitor the CPU activity over a month, I can see that the average CPU usage for my Java process keeps growing (without new files arriving to 'activate' the rest of the process), and I have to restart the process for it to return to lower CPU usage levels.
I really don't understand this behaviour, so I don't know what may be causing it.
If the log file is somewhat 'big', say 10-20 MB, would it require that much CPU to log a new entry every minute?
If there are many libraries loaded in the classpath for this process, will the CPU usage be increased even though most of these libraries won't be used most of the time?
Excuse me if I haven't been very clear on my question, I am somewhat new to this.
Thanks every one in advance, regards.
--edit--
I note your advice; I will do some monitoring and will post some code / results to share with you and see what you can come up with!
I am really lost right now!
If your custom monitoring code is causing the problem, you could always use something standard like Apache Commons IO's FileAlterationMonitor. It's simple to implement and it might be faster than fixing your current code.
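A rough sketch of how that could look (assuming commons-io is on the classpath; the class name and the use of a shared list are mine, purely for illustration - a real application would process the files in the callback):

```java
import java.io.File;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import org.apache.commons.io.monitor.FileAlterationListenerAdaptor;
import org.apache.commons.io.monitor.FileAlterationMonitor;
import org.apache.commons.io.monitor.FileAlterationObserver;

public class DirWatcher {
    // Files detected so far; a real application would process and move them.
    static final List<File> seen = new CopyOnWriteArrayList<>();

    // Polls dir every intervalMs on a background thread and records new files.
    static FileAlterationMonitor watch(File dir, long intervalMs) throws Exception {
        FileAlterationObserver observer = new FileAlterationObserver(dir);
        observer.addListener(new FileAlterationListenerAdaptor() {
            @Override public void onFileCreate(File file) { seen.add(file); }
        });
        FileAlterationMonitor monitor = new FileAlterationMonitor(intervalMs, observer);
        monitor.start();   // spawns the polling thread
        return monitor;    // call monitor.stop() on shutdown
    }
}
```

This replaces a hand-rolled check-every-minute loop with a library that only diffs directory snapshots, which tends to keep the per-poll cost low and predictable.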
Are you talking about a simple console application or a Swing/AWT app?
Is the application run every minute via the underlying OS scheduler, or is it a single long-running server process?
If the process is run as a server, how do you launch the VM? (server VM or client VM; the -server switch on the command line)
You may also want to check your garbage collector; sometimes logging frameworks create too many objects without releasing their references.
Regards
M.