I am trying to debug a few slow responses served by an app deployed on Tomcat.
Right now I am focussing on SecureRandom and /dev/random (some of the other probable causes have been investigated and ruled out).
The pattern is as follows:
The first call takes exactly 30.0xy seconds after a Tomcat restart (even if the request arrives 4 minutes after startup).
Later, some calls take exactly 15.0pq seconds (there was no specific pattern that I could establish; pq is roughly the TP99 time).
The service call involves encryption and decryption (AES/ECB/PKCS5Padding).
Is it possible that SecureRandom init/repopulating is leading to this?
(Although there is a log line in catalina.log that says "Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [28,760] milliseconds.")
Also, in order to check whether /dev/random or /dev/urandom is being used, I used the test from this question. To my surprise, I didn't see reads from either of them unlike the way it happens in the linked question.
These are the last few lines from the strace log:
3561 lstat("/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/jsse.jar", {st_mode=S_IFREG|0644, st_size=258525, ...}) = 0
3561 open("/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/jsse.jar", O_RDONLY) = 6
3561 stat("/dev/random", {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 8), ...}) = 0
3561 stat("/dev/urandom", {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 9), ...}) = 0
3561 open("/dev/random", O_RDONLY) = 7
3561 open("/dev/urandom", O_RDONLY) = 8
3561 unlink("/tmp/hsperfdata_xxxx/3560") = 0
What is then being used for seeding SecureRandom?
fyi, java -version
java version "1.6.0_32"
OpenJDK Runtime Environment (IcedTea6 1.13.4) (rhel-7.1.13.4.el6_5-x86_64)
OpenJDK 64-Bit Server VM (build 23.25-b01, mixed mode)
I could not check your exact OpenJDK version, but I did check jdk6-b33.
SecureRandom uses SeedGenerator to get the seed bytes:
public byte[] engineGenerateSeed(int numBytes) {
byte[] b = new byte[numBytes];
SeedGenerator.generateSeed(b);
return b;
}
SeedGenerator gets the seedSource (String) from SunEntries
String egdSource = SunEntries.getSeedSource();
SunEntries tries to get the source from the system property java.security.egd first; if that is not set, it tries the securerandom.source property from the java.security properties file; if neither is found, it returns an empty string.
// name of the *System* property, takes precedence over PROP_RNDSOURCE
private final static String PROP_EGD = "java.security.egd";
// name of the *Security* property
private final static String PROP_RNDSOURCE = "securerandom.source";
final static String URL_DEV_RANDOM = "file:/dev/random";
final static String URL_DEV_URANDOM = "file:/dev/urandom";
private static final String seedSource;
static {
seedSource = AccessController.doPrivileged(
new PrivilegedAction<String>() {
public String run() {
String egdSource = System.getProperty(PROP_EGD, "");
if (egdSource.length() != 0) {
return egdSource;
}
egdSource = Security.getProperty(PROP_RNDSOURCE);
if (egdSource == null) {
return "";
}
return egdSource;
}
});
}
SeedGenerator checks this value to initialize the instance:
// Static instance is created at link time
private static SeedGenerator instance;
private static final Debug debug = Debug.getInstance("provider");
final static String URL_DEV_RANDOM = SunEntries.URL_DEV_RANDOM;
final static String URL_DEV_URANDOM = SunEntries.URL_DEV_URANDOM;
// Static initializer to hook in selected or best performing generator
static {
String egdSource = SunEntries.getSeedSource();
// Try the URL specifying the source
// e.g. file:/dev/random
//
// The URL file:/dev/random or file:/dev/urandom is used to indicate
// the SeedGenerator using OS support, if available.
// On Windows, the causes MS CryptoAPI to be used.
// On Solaris and Linux, this is the identical to using
// URLSeedGenerator to read from /dev/random
if (egdSource.equals(URL_DEV_RANDOM) || egdSource.equals(URL_DEV_URANDOM)) {
try {
instance = new NativeSeedGenerator();
if (debug != null) {
debug.println("Using operating system seed generator");
}
} catch (IOException e) {
if (debug != null) {
debug.println("Failed to use operating system seed "
+ "generator: " + e.toString());
}
}
} else if (egdSource.length() != 0) {
try {
instance = new URLSeedGenerator(egdSource);
if (debug != null) {
debug.println("Using URL seed generator reading from "
+ egdSource);
}
} catch (IOException e) {
if (debug != null)
debug.println("Failed to create seed generator with "
+ egdSource + ": " + e.toString());
}
}
// Fall back to ThreadedSeedGenerator
if (instance == null) {
if (debug != null) {
debug.println("Using default threaded seed generator");
}
instance = new ThreadedSeedGenerator();
}
}
if the source is
final static String URL_DEV_RANDOM = "file:/dev/random";
or
final static String URL_DEV_URANDOM = "file:/dev/urandom"
it uses the NativeSeedGenerator; on Windows this tries to use the native CryptoAPI, while on Linux the class simply extends SeedGenerator.URLSeedGenerator:
package sun.security.provider;
import java.io.IOException;
/**
* Native seed generator for Unix systems. Inherit everything from
* URLSeedGenerator.
*
*/
class NativeSeedGenerator extends SeedGenerator.URLSeedGenerator {
NativeSeedGenerator() throws IOException {
super();
}
}
and calls the superclass constructor, which loads /dev/random by default:
URLSeedGenerator() throws IOException {
this(SeedGenerator.URL_DEV_RANDOM);
}
So OpenJDK uses /dev/random by default unless you set another value in the java.security.egd system property or in the securerandom.source property of the security properties file.
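Based on the code above, one way to force a non-blocking source is to set java.security.egd to file:/dev/./urandom; the extra /./ matters because, as the static initializer shows, a value exactly equal to file:/dev/urandom still selects NativeSeedGenerator, which reads /dev/random. The following is a minimal sketch (my own illustration, not part of the original question) that prints the configured source and times the seeding call:

import java.security.SecureRandom;
import java.security.Security;

public class SeedSourceCheck {

    public static void main(String[] args) throws Exception {
        // The two settings SunEntries looks at, in order of precedence
        System.out.println("java.security.egd   = " + System.getProperty("java.security.egd"));
        System.out.println("securerandom.source = " + Security.getProperty("securerandom.source"));

        long start = System.nanoTime();
        SecureRandom sr = SecureRandom.getInstance("SHA1PRNG");
        byte[] seed = sr.generateSeed(8); // this is the call that can block on /dev/random
        long elapsedMs = (System.nanoTime() - start) / 1000000L;
        System.out.println("generateSeed(" + seed.length + " bytes) took " + elapsedMs + " ms");
    }
}

Run it once as-is and once with java -Djava.security.egd=file:/dev/./urandom SeedSourceCheck to compare the timings.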
If you want to see the reads using strace, you can change the command line and add the trace=open,read expression:
sudo strace -o a.strace -f -e trace=open,read java class
Then you can see something like this (I did the test with Oracle JDK 6):
13225 open("/dev/random", O_RDONLY) = 8
13225 read(8, "#", 1) = 1
13225 read(3, "PK\3\4\n\0\0\0\0\0RyzB\36\320\267\325u\4\0\0u\4\0\0 \0\0\0", 30) = 30
....
....
The Tomcat Wiki section on faster startup suggests using a non-blocking entropy source like /dev/urandom if you are experiencing delays during startup.
More info: https://wiki.apache.org/tomcat/HowTo/FasterStartUp#Entropy_Source
Hope this helps.
The problem is not SecureRandom per se, but that /dev/random blocks when there is not enough entropy available. You can use /dev/urandom instead, but that might not be a good idea if you need cryptographically strong random seeds.
On headless Linux systems you can install the haveged daemon. This keeps /dev/random topped up with enough data so that calls don't have to wait for the required entropy to be generated.
I've done this on a Debian AWS instance and watched SecureRandom generateBytes calls drop from 25 seconds to sub-millisecond (OpenJDK 1.7 something, I can't remember the specific version).
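If you want to check whether the entropy pool is the bottleneck before and after installing haveged, here is a small sketch (my own illustration, Linux-specific) that reads the kernel's entropy estimate; values in the low hundreds of bits mean readers of /dev/random are likely to block:

import java.io.BufferedReader;
import java.io.FileReader;

public class EntropyCheck {

    public static void main(String[] args) throws Exception {
        // Linux-specific: the kernel exposes its current entropy estimate in this file
        BufferedReader r = new BufferedReader(new FileReader("/proc/sys/kernel/random/entropy_avail"));
        try {
            System.out.println("entropy_avail = " + r.readLine().trim() + " bits");
        } finally {
            r.close();
        }
    }
}

With haveged running, the value should stay high enough that readers of /dev/random do not block.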
Related
I am getting the strangest problem that I just can't wrap my head around. My web API, which uses Spring Boot and postgresql/postgis, is getting inconsistent errors when trying to read geometries from the database. I have been using this code (with occasional modifications, of course) for many, many years, and this just started happening with my last release.
I am using openjdk 11.0.4 2019-07-16 on Ubuntu 18.04. Relevant pom.xml entries:
<dependency>
    <groupId>org.locationtech.jts</groupId>
    <artifactId>jts-core</artifactId>
    <version>1.16.1</version>
</dependency>
I am getting various errors from API calls, of the following types:
e.g. hexstring: 0101000020E6100000795C548B88184FC0206118B0E42750C0
org.locationtech.jts.io.ParseException: Unknown WKB type 0
at org.locationtech.jts.io.WKBReader.readGeometry(WKBReader.java:235)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:156)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:137)
at net.crowmagnumb.database.RecordSet.getGeom(RecordSet.java:1073)
e.g. hexstring: 0101000020E61000000080FB3F354F5AC0F3D30EF2C0773540
java.lang.ArrayIndexOutOfBoundsException: arraycopy: length -1 is negative
at java.base/java.lang.System.arraycopy(Native Method)
at org.locationtech.jts.io.ByteArrayInStream.read(ByteArrayInStream.java:59)
at org.locationtech.jts.io.ByteOrderDataInStream.readDouble(ByteOrderDataInStream.java:83)
at org.locationtech.jts.io.WKBReader.readCoordinate(WKBReader.java:378)
at org.locationtech.jts.io.WKBReader.readCoordinateSequence(WKBReader.java:345)
at org.locationtech.jts.io.WKBReader.readPoint(WKBReader.java:256)
at org.locationtech.jts.io.WKBReader.readGeometry(WKBReader.java:214)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:156)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:137)
at net.crowmagnumb.database.RecordSet.getGeom(RecordSet.java:1073)
e.g. hexstring: 0101000020E610000066666666669663C00D96D7371DD63440
org.locationtech.jts.io.ParseException: Unknown WKB type 326
at org.locationtech.jts.io.WKBReader.readGeometry(WKBReader.java:235)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:156)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:137)
at net.crowmagnumb.database.RecordSet.getGeom(RecordSet.java:1073)
The relevant parts of my RecordSet code are below (so line numbers will not match the stack traces above).
public class RecordSet {
private static final Logger logger = LoggerFactory.getLogger(RecordSet.class);
private static WKBReader wkbReader;
private static WKBReader getWKBReader() {
if (wkbReader == null) {
wkbReader = new WKBReader();
}
return wkbReader;
}
private static byte[] hexStringToByteArray(final String hex) {
if (StringUtils.isBlank(hex)) {
return null;
}
int len = hex.length();
byte[] data = new byte[len / 2];
for (int i = 0; i < len; i += 2) {
data[i / 2] = (byte) ((Character.digit(hex.charAt(i), 16) << 4) + Character.digit(hex.charAt(i + 1), 16));
}
return data;
}
public static Geometry getGeom(final String geomStr) {
byte[] byteArray = hexStringToByteArray(geomStr);
if (byteArray == null) {
return null;
}
try {
return getWKBReader().read(byteArray);
} catch (Throwable ex) {
logger.error(String.format("Error parsing geometry [%s]", geomStr), ex);
return null;
}
}
}
So the extreme weirdness is that:
It doesn't happen consistently. The exact same API call works fine when I repeat it.
The hex strings reported in the exception messages are perfectly correct! If I run them through the same code in a test program, they give the correct answer and no exception.
Again, all of the hex strings reported above that led to errors in production API calls are valid representations of POINT geometries.
Is this some weird potential memory leak issue?
Maybe this should have been obvious, but in my defense I have been using the above code for many, many years (as I said) without issue, so I think I just overlooked the obvious. Anyway, it suddenly dawned on me: should I be reusing the same WKBReader over and over again in a multi-threaded environment? Well, it turns out no!
If I just create a new WKBReader() with each call (instead of reusing a single static WKBReader) it works fine. Well, there is the source of my "memory leak": self-caused!
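For reference, a sketch of the per-call variant of getGeom (same class as above; only the reader creation changes). A ThreadLocal<WKBReader> would also work if you want to avoid allocating a new reader on every call:

public static Geometry getGeom(final String geomStr) {
    byte[] byteArray = hexStringToByteArray(geomStr);
    if (byteArray == null) {
        return null;
    }
    try {
        // WKBReader keeps per-parse state, so it must not be shared between threads;
        // create a fresh instance for every call instead of caching a static one.
        return new WKBReader().read(byteArray);
    } catch (Throwable ex) {
        logger.error(String.format("Error parsing geometry [%s]", geomStr), ex);
        return null;
    }
}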
The following is deep inside a library I use. In 2015 this worked with Groovy 2.3 and early versions of 2.4, probably with Java 6 or 7! I wanted to update to Java 8 before trying to modify it for Java 9+.
final class DynamicClassLoader extends ClassLoader {
final NodeID originatingNode;
NetChannelOutput requestClassData;
NetChannelInput classDataResponse = NetChannel.net2one();
final Hashtable classes = new Hashtable();
DynamicClassLoader(NodeID originator, NetChannelLocation requestLocation) {
super(ClassLoader.getSystemClassLoader());
this.originatingNode = originator;
this.requestClassData = NetChannel.one2net(requestLocation);
}
...
}
When I try to invoke the code from Groovy I get the following error:
org.codehaus.groovy.tools.RootLoader cannot be cast to jcsp.net2.mobile.DynamicClassLoader
The point where this is called from is shown in the following code, at the line marked with **:
public byte[] filterTX(Object obj)
throws IOException
{
ClassLoader loader = obj.getClass().getClassLoader();
byte[] bytes = this.internalFilter.filterTX(obj);
if (loader == ClassLoader.getSystemClassLoader() || loader == null)
{
DynamicClassLoaderMessage message = new DynamicClassLoaderMessage(Node.getInstance().getNodeID(),
(NetChannelLocation) ClassManager.in.getLocation(), bytes);
byte[] wrappedData = this.internalFilter.filterTX(message);
return wrappedData;
}
**DynamicClassLoader dcl = (DynamicClassLoader)loader;**
DynamicClassLoaderMessage message = new DynamicClassLoaderMessage(dcl.originatingNode,
(NetChannelLocation) ClassManager.in.getLocation(), bytes);
byte[] wrappedData = this.internalFilter.filterTX(message);
return wrappedData;
}
After discussion with the Groovy community I discovered that the problem lay in the way IntelliJ invokes Groovy scripts. The code works in Eclipse without any problem. In IntelliJ it was necessary to create a jar artifact for each of the scripts I wanted to run in parallel, which I could then run from a command-line interface. I recoded the application in Java 8 and it worked with no problem. Hope that helps.
I have built an application connecting R and Java using the Rserve package.
In it, I am getting the error "evaluation successful but object is too big to transport". I have also tried increasing the send buffer size value in the Rconnection class, but that doesn't seem to work.
The object being transported is 4 MB.
Here is the code from the Rconnection file:
public void setSendBufferSize(long sbs) throws RserveException {
if (!connected || rt == null) {
throw new RserveException(this, "Not connected");
}
try {
RPacket rp = rt.request(RTalk.CMD_setBufferSize, (int) sbs);
System.out.println("rp is send buffer "+rp);
if (rp != null && rp.isOk()) {
System.out.println("in if " + rp);
return;
}
} catch (Exception e) {
e.printStackTrace();
LogOut.log.error("Exception caught" + e);
}
//throw new RserveException(this,"setSendBufferSize failed",rp);
}
The full Java class is available here: Rconnection.java
Instead of Rserve, you can use JRI, which is shipped with the rJava package.
In my opinion JRI is better than Rserve because, instead of creating a separate process, it uses native calls to integrate Java and R.
With JRI you don't have to worry about ports, connections, watchdogs, etc. The calls to R are done using an operating system library (libjri).
The methods are pretty similar to Rserve, and you can still use REXP objects.
Here is an example:
public void testMeanFunction() {
// just making sure we have the right version of everything
if (!Rengine.versionCheck()) {
System.err.println("** Version mismatch - Java files don't match library version.");
fail(String.format("Invalid versions. Rengine must have the same version of native library. Rengine version: %d. RNI library version: %d", Rengine.getVersion(), Rengine.rniGetVersion()));
}
// Enables debug traces
Rengine.DEBUG = 1;
System.out.println("Creating Rengine (with arguments)");
// 1) we pass the arguments from the command line
// 2) we won't use the main loop at first, we'll start it later
// (that's the "false" as second argument)
// 3) no callback class will be used
engine = REngine.engineForClass("org.rosuda.REngine.JRI.JRIEngine", new String[] { "--no-save" }, null, false);
System.out.println("Rengine created...");
engine.parseAndEval("rVector=c(1,2,3,4,5)");
REXP result = engine.parseAndEval("meanVal=mean(rVector)");
// generic vectors are RVector to accommodate names
assertThat(result.asDouble()).isEqualTo(3.0);
}
I have a demo project that exposes a REST API and calls R functions using this package.
Take a look at: https://github.com/jfcorugedo/RJavaServer
I have the following piece of test code:
try {
InputStream is;
Stopwatch.start("FileInputStream");
is = new FileInputStream(imageFile.toFile());
is.skip(1024*1024*1024);
is.close();
Stopwatch.stop();
Stopwatch.start("Files.newInputStream");
is = Files.newInputStream(imageFile);
is.skip(1024*1024*1024);
is.close();
Stopwatch.stop();
}
catch(Exception e)
{
}
and I get the following output:
Start: FileInputStream
FileInputStream : 0 ms
Start: Files.newInputStream
Files.newInputStream : 3469 ms
Do you have any idea what is going on? Why is skip so slow in the second case?
I need to use InputStreams acquired from channels because my tests have shown that the best approach for my task is to have two threads reading from the file simultaneously (and I only notice an improvement when I am using streams obtained from channels).
During tests I figured out that I can do something like this:
SeekableByteChannel sbc = Files.newByteChannel(imageFile);
sbc.position(1024*1024*1024);
is = Channels.newInputStream(sbc);
which takes only 28 ms on average, but that does not help me much, because to use it I would have to make major API changes.
My platform:
Linux galileo 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 07:38:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
Looking at the source, it appears that the default implementation of skip() is actually reading through (and discarding) the stream content until it reaches the target position:
public long skip(long n) throws IOException {
long remaining = n;
int nr;
if (n <= 0) {
return 0;
}
int size = (int)Math.min(MAX_SKIP_BUFFER_SIZE, remaining);
byte[] skipBuffer = new byte[size];
while (remaining > 0) {
nr = read(skipBuffer, 0, (int)Math.min(size, remaining));
if (nr < 0) {
break;
}
remaining -= nr;
}
return n - remaining;
}
The SeekableByteChannel#position() method probably just updates an offset pointer, which doesn't actually require any I/O. Presumably, FileInputStream overrides the skip() method with a similar optimization. The documentation supports this theory:
This method may skip more bytes than are remaining in the backing file. This produces no exception and the number of bytes skipped may include some number of bytes that were beyond the EOF of the backing file. Attempting to read from the stream after skipping past the end will result in -1 indicating the end of the file.
On platter disks or network storage, this could have a significant impact.
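To make that concrete, here is a sketch (building on the channel approach already shown in the question, not a drop-in replacement for the existing API) that opens an InputStream starting at a given offset by positioning the underlying channel, so no bytes have to be read and discarded:

import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.Channels;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ChannelSkipDemo {

    // Returns an InputStream whose first byte is the one at `offset` in the file.
    static InputStream openAt(Path file, long offset) throws IOException {
        SeekableByteChannel ch = Files.newByteChannel(file);
        ch.position(offset);                // just moves the file pointer, no data is read
        return Channels.newInputStream(ch); // closing the stream also closes the channel
    }

    public static void main(String[] args) throws IOException {
        InputStream in = openAt(Paths.get(args[0]), 1024L * 1024L * 1024L);
        try {
            System.out.println("First byte after the 1 GiB offset: " + in.read());
        } finally {
            in.close();
        }
    }
}

Whether this is usable depends, of course, on the API-change constraint mentioned in the question.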
Try setting the range with GetObjectRequest.setRange to get the same behavior as skip.
GetObjectRequest req = new GetObjectRequest(BUCKET_NAME, "myfile.zip");
req.setRange(1024); // start the download, skipping the first 1024 bytes
S3ObjectInputStream in = client.getObject(req).getObjectContent();
// read "in" while not eof
I used this to avoid SocketTimeoutException in my implementation.
Each time I got a SocketTimeoutException, I restarted the download, using setRange to skip the bytes I had already downloaded.
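A sketch of that resume loop (the client, bucket name, and key are placeholders based on the snippet above; treat this as an outline rather than verified code):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.SocketTimeoutException;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3ObjectInputStream;

public class ResumingS3Download {

    static void download(AmazonS3 client, String bucket, String key, String targetFile) throws IOException {
        long bytesRead = 0;
        OutputStream out = new FileOutputStream(targetFile);
        try {
            while (true) {
                GetObjectRequest req = new GetObjectRequest(bucket, key);
                req.setRange(bytesRead); // resume: only request the bytes we do not have yet
                S3ObjectInputStream in = client.getObject(req).getObjectContent();
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                        bytesRead += n;
                    }
                    in.close();
                    return; // reached the end of the object, download complete
                } catch (SocketTimeoutException e) {
                    in.abort(); // drop this connection; the next iteration resumes at bytesRead
                }
            }
        } finally {
            out.close();
        }
    }
}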
I have two Java classes that run commands on the local system. My dev system is a Mac, my QA system is Windows, and the prod system is UNIX, so there are different commands for each one; at the moment I have to go in and comment/uncomment the differences. Both classes are structured the same way, with an executable and a command. Here is what I have:
// Linux (QA/Prod)
final String executable = "/user1/Project/Manufacturer/CommandCLI";
// final String executable = "cat"; // Mac Dev
// final String executable = "cmd"; // Windows QA
final String command = "getarray model=" + model + " serialnum=" + serialnum;
// Windows QA(local laptop)
//final String command = "/C c:/Manufacturer/CommandCLI.bat getarray model=" + model + " serialnum=" + serialnum;
//Mac Dev
// final String command = "/TestData/" + computer.getId() + ".xml"
So, as you can see, I am commenting and uncommenting depending on the environment. One of my main concerns is that I am relying on the model and serialnum variables, and I don't know whether those can somehow be inserted into a property (model and serialnum are given in the method call).
We are using Maven, so during "mvn clean package" we add the -P flag to specify a properties file.
What is an elegant way to handle this?
I suggest creating three different methods, one for each OS, containing the OS-specific commands. You can determine the current OS using system properties (check this question) and call the appropriate method based on that property. Example:
private void runOnLinux(int model, int serialNum) { ... }
private void runOnWindows(int model, int serialNum) { ... }
private void runOnMac(int model, int serialNum) { ... }
// Somewhere in source code...
String os = System.getProperty("os.name").toLowerCase();
if (os.contains("windows")) {
runOnWindows(model, serialNum);
} else if (os.contains("linux") || os.contains("unix")) {
runOnLinux(model, serialNum);
} else {
// Mac!
runOnMac(model, serialNum);
}
Of course, I'm not sure all these checks are correct; better to check the answers to the question I mentioned at the beginning, which contains much more useful information.
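To show how model and serialNum end up in the command, here is a sketch of what the Linux variant of one of those methods could look like, using ProcessBuilder; the executable path and argument values come from the question, while the argument splitting and parameter types are my assumptions:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class CommandRunner {

    // Linux/UNIX variant; runOnWindows and runOnMac would differ only in executable and arguments.
    private void runOnLinux(int model, int serialNum) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "/user1/Project/Manufacturer/CommandCLI",
                "getarray",
                "model=" + model,
                "serialnum=" + serialNum);
        pb.redirectErrorStream(true); // merge stderr into stdout so a single reader is enough

        Process p = pb.start();
        BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line); // or collect/parse the output as the application needs
        }
        p.waitFor();
    }
}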