It's pretty easy to minify scripts with YUI Compressor. Unfortunately, the process is very slow when executing the JAR via exec() in PHP.
Example (PHP):
// start with basic command
$cmd = 'java -Xmx32m -jar /bin/yuicompressor-2.4.8pre.jar -o \'/var/www/myscript.min.js\' \'/var/www/myscript.min.temp.js\'';
// execute the command
exec($cmd . ' 2>&1', $ok);
The execution time for ~20 files is up to 30 seconds(!) on a quad-core server with 8 GB RAM.
Does anybody know a faster way to minify a bunch of scripts?
The execution time mainly depends on the file size(s). Give Google Closure Compiler a try.
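Since exec() pays the JVM startup cost once per file, batching all files into a single JVM run helps regardless of which compressor you pick. A minimal sketch using the Closure Compiler Java API (class names are from recent Closure releases and may differ in yours; the .min.js output naming is made up):
Example (Java):
import com.google.javascript.jscomp.CompilationLevel;
import com.google.javascript.jscomp.Compiler;
import com.google.javascript.jscomp.CompilerOptions;
import com.google.javascript.jscomp.SourceFile;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class MinifyAll {
    public static void main(String[] args) throws Exception {
        // One JVM startup for the whole batch instead of one exec() per file.
        for (String name : args) {
            Compiler compiler = new Compiler();
            CompilerOptions options = new CompilerOptions();
            CompilationLevel.SIMPLE_OPTIMIZATIONS.setOptionsForCompilationLevel(options);

            // No externs are needed for plain minification.
            compiler.compile(List.<SourceFile>of(),
                    List.of(SourceFile.fromFile(name)), options);

            // Write "name.min.js" next to the input file.
            Files.write(Paths.get(name.replaceFirst("\\.js$", ".min.js")),
                    compiler.toSource().getBytes());
        }
    }
}
Called once with all 20 files as arguments, this pays the JVM startup cost only once.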
It is also a good idea to cache the result in a file or via an extension (APC, Memcached), combined with client-side caching headers. If you check the last modification time with filemtime(), you will know whether you need to minify again or not.
I often cache each file separately, to avoid re-minifying a large body of content, then create an MD5 checksum of the whole; if it has changed since the last request, I save the new checksum and print out the content, otherwise I just send:
header($_SERVER['SERVER_PROTOCOL'] . ' 304 Not Modified', true, 304);
This way, very little computation happens per request, even in development. I'm using ExtJS 4 on my current project, which is 1.2 MB raw plus a lot of project code, without any problems and with response times under 1 second.
Related
I have a Java application that needs to run several times. Every time it runs, it checks if there's data to process and if so, it processes the data.
I'm trying to figure out what's the best approach (performance, resource consumption, etc.) to do this:
1. Launch it once, and if there's nothing to process, make it sleep (all in Java).
2. Use a bash script to launch the Java app; when it finishes, sleep (in the script) and then relaunch the Java app.
I was wondering whether it is better to keep the Java app alive (sleeping) or to relaunch it every time.
It's hard to answer your question without the specific context. On the face of it, your question sounds like it could be a premature optimization.
Generally, I suggest you do what's easier for you to do (and to maintain), unless you have good reasons not to. Here are some possible good reasons, pick the ones appropriate to your situation:
For sleeping in Java (see the sketch after this list):
The check of whether there's new data is easier in Java
Starting the Java program takes time or other resources, for example if on startup, your program needs to load a bunch of data
Starting the Java process from bash is complex for some reason - maybe it requires you to fiddle with a bunch of environment variables, files or something else.
For re-launching the Java program from bash:
The check of whether there's new data is easier in bash
Getting the Java process to sleep is complex - maybe your Java process is a complex multi-threaded beast, and stopping and then re-starting the various threads is complicated.
You need the memory back in between Java jobs - killing the Java process entirely would free all of its memory.
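If you do go with sleeping in Java, a minimal sketch using ScheduledExecutorService (hasNewData() and processData() are placeholders for your own logic):
Example (Java):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Poller {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Wake up every 5 minutes; do nothing if there is no new data.
        scheduler.scheduleWithFixedDelay(() -> {
            if (hasNewData()) {
                processData();
            }
        }, 0, 5, TimeUnit.MINUTES);
    }

    private static boolean hasNewData() { /* placeholder: your own check */ return false; }
    private static void processData()   { /* placeholder: your own processing */ }
}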
I would not keep it alive.
Instead, you can use a job which runs at defined intervals: use Jenkins, or the Windows Task Scheduler configured to run every 5 minutes (or whatever interval you wish).
Run a batch file with Windows task scheduler
And from your batch file you can do the following:
REM compile once, ahead of time
javac JavaFileName.java
REM run the program
java JavaFileName
See here for how to run a Java program from cmd:
How do I run a Java program from the command line on Windows?
I personally would decide based on where the application runs.
If it were my personal computer, I would use the second option with the bash script (as resources on my local machine can change a lot due to extensive use of other programs, and it can happen that at some point I am running out of memory, for example).
If it goes to the cloud (Amazon, Google, whatever), I know exactly what kind of processes are running there (it should not change as dynamically as on my local PC), and a long-running Java process with some scheduler would be fine for me.
I have a simple question, I've read up online but couldn't find a simple solution:
I'm running a java program on the command line as follows which accesses a database:
java -jar myProgram.jar
I would like a simple mechanism to see the number of disk I/Os performed by this program (on OSX).
So far I've come across iotop but how do I get iotop to measure the disk I/O of myProgram.jar?
Do I need a profiler like JProfiler to get this information?
iotop is a utility which gives you the top n processes in descending order of IO consumption/utilization.
Most importantly, it is a live monitoring utility, which means its output changes every n seconds (or whatever interval you specify). Though you can redirect its output to a file, you then need to parse that file and extract meaningful data, for example by plotting a graph.
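That said, for a quick live view of a single process on Linux you can pin iotop to one PID (the pgrep pattern is just an example; note that on OS X, iotop is a different, DTrace-based tool):
sudo iotop -b -p $(pgrep -f myProgram.jar)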
I would recommend using sar instead.
It is the lowest-level monitoring utility in Linux/Unix and will give you much more data than iotop.
The best thing about sar is that you can collect the data with a daemon while your program is running and then analyze it later using ksar.
The approach I would follow (a small wrapper sketch follows these steps):
Start sar monitoring, collecting sar data every n seconds. The value of n depends on the approximate execution time of your program.
Example: if your program takes 10 seconds to execute, then sampling once per second is good, but if it takes an hour, then sample every 30-60 seconds. This minimizes the overhead of the sar process while keeping the data meaningful.
Wait for some time (so that you get data from before your program starts), then start your program.
Let your program run to completion.
Wait for some time again (so that you get data from after your program finishes).
Stop sar.
Monitor/visualize the sar data using ksar. To start with, check disk utilization and then IOPS for each disk.
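A minimal Java wrapper that automates those steps, assuming a Linux host with sysstat installed (OS X ships a different sar with different options, so treat this as a sketch):
Example (Java):
import java.io.File;

public class MeasureIo {
    public static void main(String[] args) throws Exception {
        // Start sar sampling disk activity once per second, logging to a text file.
        Process sar = new ProcessBuilder("sar", "-d", "1")
                .redirectErrorStream(true)
                .redirectOutput(new File("sar-disk.log"))
                .start();

        Thread.sleep(5_000); // collect baseline samples before the run

        // Run the program under test and wait for it to finish.
        Process app = new ProcessBuilder("java", "-jar", "myProgram.jar")
                .inheritIO()
                .start();
        app.waitFor();

        Thread.sleep(5_000); // collect trailing samples after the run
        sar.destroy();       // stop sar; inspect sar-disk.log or load it into ksar
    }
}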
You can use profilers for the same thing, but they have a few drawbacks:
They need their own agents (and agents have their own overhead).
Some of them are not free.
Some of them are not easy to set up.
They may or may not provide enough of the required data.
Besides this, IMHO, using built-in/system-level utilities is always beneficial.
I hope this was helpful.
Your Java program will ultimately just be a process to the host system, so you need to filter the monitoring tool's output for your own process id (refer to the Scripts section of the blog post this answer links to).
Also, even though you have tagged the question with OS X, do mention in the question body itself that you are using OS X.
If you are looking for offline data: on Unix-based systems it is provided by the proc filesystem, but unfortunately that is missing on OS X; see "Where is the /proc folder on Mac OS X?" and "/proc on Mac OS X".
You might choose to write a small script that dumps data from disk- and process-monitoring tools for your process id. The script can look up the process id by process name: put the script in a loop that watches for that process name, and start the script before you execute your Java program. When the script finds the process, it keeps dumping the relevant data from the commands you chose, at the intervals you decided on. Once your program ends, the log-dumping script terminates as well.
I have a few executable tools. In my Java application I need to launch each of them a few hundred times and measure their memory consumption for different inputs. I am using
Runtime.getRuntime().exec(externalToolCommand);
to execute external tools. But I don't know how to measure the max memory usage of the external tools.
To make this clearer, here is an example:
Let's say I have prism.exe, mrmc.exe, and plasma.exe, three external executable tools. I want to know, when I launch one of them, e.g. prism.exe, how much memory it consumes. I don't need to measure my Java application's memory consumption; I only need the external tools' consumption.
Thanks.
I don't know the exact code, but here is what I can think of. On Windows, you can launch a batch script, say p.bat, from your Java application, and launch a PowerShell script, say q.ps1, from p.bat; now you have access to PowerShell. There you can run a process-monitoring tool (perfmon, maybe) to take a one-time measurement of the tool's memory consumption, log it to a text file, and then terminate the process from the script. Do the whole thing in a loop in your Java application, and in the end you have a file containing the process's memory consumption n times.
But beware! It is really expensive, as it involves file I/O, context switches back and forth, and pipelining on top.
With PowerShell the possibilities are endless, but I am no PowerShell expert, so pardon me, I can't write the exact code for you. This answer requires a fair amount of research on your side for the various steps involved.
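If the tools can also run on Linux, a script-free alternative is to poll the kernel's peak-RSS counter (VmHWM in /proc/<pid>/status) for the child process. A hedged sketch, assuming Linux and Java 9+ for Process.pid(); the prism command line is made up:
Example (Java):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PeakMemory {
    public static void main(String[] args) throws Exception {
        Process tool = new ProcessBuilder("./prism", "model.pm").start(); // hypothetical invocation
        Path status = Paths.get("/proc/" + tool.pid() + "/status");
        long peakKb = 0;
        while (tool.isAlive()) {
            peakKb = Math.max(peakKb, readVmHwm(status)); // VmHWM only ever grows
            Thread.sleep(100);
        }
        // Growth between the last sample and process exit is missed,
        // since /proc/<pid> disappears once the process is gone.
        System.out.println("Peak RSS: ~" + peakKb + " kB");
    }

    // VmHWM ("high water mark") is the peak resident set size the kernel has seen.
    private static long readVmHwm(Path status) {
        try {
            for (String line : Files.readAllLines(status)) {
                if (line.startsWith("VmHWM:")) {
                    return Long.parseLong(line.replaceAll("\\D+", ""));
                }
            }
        } catch (IOException ignored) {
            // the process may exit between isAlive() and the read
        }
        return 0;
    }
}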
Try "jvisualvm", you can find it at /bin/jvisualvm.exe
I have a Java program which is launched through command-line by a Bash script, which is in turn called at various intervals by cron.
There are several operations performed by this program, the first being the copy of a possibly large number of more or less large files. (Anything from 10000 files of 30 KB to 1 big 1 GB file, but both of these are edge cases.)
I am curious about how this step should be accomplished to ensure performance (as in speed).
I can use either the cp command from Bash or Java 7's Files.copy(). I will run my own tests, but I'm wondering if someone has any comparison data I could take into account before deciding on an implementation.
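For reference, a minimal sketch of the Files.copy() side of the comparison, with timing included (the directory paths are made up):
Example (Java):
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import static java.nio.file.StandardCopyOption.COPY_ATTRIBUTES;
import static java.nio.file.StandardCopyOption.REPLACE_EXISTING;

public class CopyBench {
    public static void main(String[] args) throws Exception {
        Path src = Paths.get("/data/in");  // hypothetical source directory
        Path dst = Paths.get("/data/out"); // hypothetical target directory
        long start = System.nanoTime();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(src)) {
            for (Path f : files) {
                Files.copy(f, dst.resolve(f.getFileName()), REPLACE_EXISTING, COPY_ATTRIBUTES);
            }
        }
        System.out.printf("copied in %d ms%n", (System.nanoTime() - start) / 1_000_000);
    }
}
Whichever way wins, a timing harness like this lets you compare against a time cp run over the same file set.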
I am building a project using Hudson. I have a few jar files which I want to sign with a timestamp using the Ant SignJar task. It works like a charm when there is no timestamp - it takes about 2-3 seconds per file. The problem appears when I add the 'tsaurl' attribute to the SignJar task: then timestamping takes a few MINUTES per file. I tried different timestamp servers and it did not help. Does anybody know why it takes so much time? And above all, is there any way to fix this issue?
The main problem I had with jarsigner taking too long (on Linux, at least) was the kernel entropy pool drying up. At that point, the process blocks until more entropy arrives. This leads to the symptoms you are seeing: the jarsigner process sitting there, not eating CPU time but not doing much either.
At some point (from 1.5 to 1.6 AFAIK), Java went from using /dev/urandom to /dev/random. Real entropy is actually a scarce resource on modern computers - lots of RAM decreases disk activity, smart programs that cache things decrease network activity. I'm told that on a virtual machine (like many build servers live on) that entropy collection rates can be even lower.
You can either
Reconfigure Java back to using /dev/urandom (if you're not paranoid)
Deploy a means of injecting additional entropy into /dev/random
I went for the second option: I installed the randomsound package for my chosen distro (Ubuntu). It samples your mic for white noise and uses it to inject entropy into /dev/random when it runs dry. The main downside is that it blocks the mic for other uses. There are other ways of getting extra entropy, like copying a disk to /dev/null or doing a package update (lots of disk and network traffic). You may want to kit one or more of your servers out with a hardware RNG and install something that can serve entropy-as-a-service to the others. Or even a USB soundcard and randomsound would work (lots of white noise in a server room...)
For the first option, you can set the property
-Djava.security.egd=file:/dev/./urandom
(note the extra dot - this works around a "smart" bit of code that assumes you want /dev/random even if you don't say so; see https://bugs.openjdk.java.net/browse/JDK-6202721)
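If you would rather not change this JVM-wide, jarsigner should also accept the property through its -J pass-through flag (hedged; behaviour may vary by JDK version):
jarsigner -J-Djava.security.egd=file:/dev/./urandom ...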