Java connection via Globals API causes a StackOverflowError

I am trying to connect a Java application to an InterSystems Caché database via the Globals API:
import com.intersys.globals.*;

public class Assignment {
    public static void main(String[] args) {
        final String user = "Andrew";
        final String password = "Tobilko";

        Connection connection = ConnectionContext.getConnection();
        connection.connect("USER", user, password);
    }
}
The stacktrace:
Exception in thread "main" java.lang.StackOverflowError
at com.intersys.globals.internal.GlobalsConnectionJNI.connectImpl(Native Method)
at com.intersys.globals.internal.GlobalsConnectionJNI.connect(GlobalsConnectionJNI.java:107)
at com.tobilko.a3.Assignment.main(Assignment.java:12)
The credentials and the namespace are correct.
The Caché instance has been initialised correctly, following the instructions.
All the required environment variables, including GLOBALS_HOME and DYLD_LIBRARY_PATH, have been set.
The following libraries have been soft-linked:
ln -s $GLOBALS_HOME/bin/libisccache.dylib /usr/local/lib
ln -s $GLOBALS_HOME/bin/liblcbjni.dylib /usr/local/lib
ln -s $GLOBALS_HOME/bin/liblcbindnt.dylib /usr/local/lib
ln -s $GLOBALS_HOME/bin/liblcbclientnt.dylib /usr/local/lib
ln -s $GLOBALS_HOME/bin/libmdsjni.dylib /usr/local/lib
-Djava.library.path=/usr/local/lib has been specified.
The jars have been included.
These steps led me to a StackOverflowError exception.
I have no idea where I could have made a mistake.
Any help would be appreciated.

Andrew, I'm not very familiar with the Globals API, but I did some research and found that it used to ship in previous versions of the Java Caché eXTreme library (cacheextreme.jar, in the Caché lib folder). In the version you are trying to use, the Globals API has already been removed, and only Event Persistence is still there. With IRIS this old library will disappear entirely, and the IRIS documentation says nothing more about the Globals API. I think it would be better to ask about the future of the Globals API on the Developer Community portal.

I had skipped the Windows configuration part because Windows isn't my OS.
Apparently, this configuration is required on all systems:
Configuration for Windows
The default stack size of the Java Virtual Machine on Windows is too small for running eXTreme applications (running them with the default stack size causes Java to report EXCEPTION_STACK_OVERFLOW). To optimize performance, heap size should also be increased. To temporarily modify the stack size and heap size when running an eXTreme application, add the following command line arguments:
-Xss1024k -Xms2500m -Xmx2500m
Increasing the stack size has resolved the issue.
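For reference, a sketch of what the full launch command might look like with both the library path and the larger stack/heap sizes applied (<globals-jars> is a placeholder for whichever Globals API jars you already have on the classpath; the class name is taken from the stack trace above):

java -Xss1024k -Xms2500m -Xmx2500m \
    -Djava.library.path=/usr/local/lib \
    -cp <globals-jars>:. com.tobilko.a3.Assignment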

Related

Quarkus Native Application with DioZero on Raspberry Pi using Docker containers (multi-arch)

Yoooo!
Scope
I am trying to deploy a Quarkus-based application to a Raspberry Pi using some fancy technologies. My goal is to figure out an easy way to develop an application with the Quarkus framework and then deploy it as a native executable to a Raspberry Pi with full GPIO pin access. Below are the requirements I set for myself and my environment settings, to give a better picture of the problem I faced.
Acceptance Criteria
Java 17
Build native executable using GraalVM
Execute native executable in a micro image on raspberry's docker
Target platform can vary
Be able to use the GPIO, SPI, I2C, etc. interfaces of the Raspberry Pi
Environment
Development PC: Ubuntu 22.04.1 LTS, x86_64 (linux/amd64)
Raspberry Pi Model 3 B+: DietPi, aarch64 (linux/arm64/v8)
Prerequisites
Java: diozero a device I/O library
Docker: working with buildx
Quarkus: build a native executable
How I built ARM based Docker Images for Raspberry Pi using buildx CLI Plugin on Docker Desktop?
Building Multi-Architecture Docker Images With Buildx
Application
source code on github
As the project base I used the getting-started application from
https://github.com/quarkusio/quarkus-quickstarts
Adding diozero library to pom.xml
<dependency>
    <groupId>com.diozero</groupId>
    <artifactId>diozero-core</artifactId>
    <version>1.3.3</version>
</dependency>
Creating a simple resource to test GPIO pins:

package org.acme.getting.started;

import com.diozero.devices.LED;

import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

@Path("led")
public class LedResource {

    @Path("on")
    public String turnOn(final @QueryParam("gpio") Integer gpio) {
        try (final LED led = new LED(gpio)) {
            led.on();
        } catch (final Throwable e) {
            return e.getMessage();
        }
        return "turn on led on gpio " + gpio;
    }

    @Path("off")
    public String turnOff(final @QueryParam("gpio") Integer gpio) {
        try (final LED led = new LED(gpio)) {
            led.off();
        } catch (final Throwable e) {
            return e.getMessage();
        }
        return "turn off led on gpio " + gpio;
    }
}
Dockerfile
```
# Stage 1 : build with maven builder image with native capabilities
FROM quay.io/quarkus/ubi-quarkus-native-image:22.0.0-java17-arm64 AS build
COPY --chown=quarkus:quarkus mvnw /code/mvnw
COPY --chown=quarkus:quarkus .mvn /code/.mvn
COPY --chown=quarkus:quarkus pom.xml /code/
USER quarkus
WORKDIR /code
RUN ./mvnw -B org.apache.maven.plugins:maven-dependency-plugin:3.1.2:go-offline
COPY src /code/src
RUN ./mvnw package -Pnative
# Stage 2 : create the docker final image
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6-902
WORKDIR /work/
COPY --from=build /code/target/*-runner /work/application
# set up permissions for user 1001
RUN chmod 775 /work /work/application \
&& chown -R 1001 /work \
&& chmod -R "g+rwX" /work \
&& chown -R 1001:root /work
EXPOSE 8080
USER 1001
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
```
Building image with native executable
The Dockerfile is based on the Quarkus docs; I changed the image of the build container to quay.io/quarkus/ubi-quarkus-native-image:22.0.0-java17-arm64 and the executor container to registry.access.redhat.com/ubi8/ubi-minimal:8.6-902, both of which are linux/arm64 compliant.
Since I am developing and building on linux/amd64 and I want to target linux/arm64/v8, my executable must be created in a target-like environment. I can achieve that with the buildx feature, which enables cross-arch builds for Docker images.
Installing QEMU
sudo apt-get install -y qemu-user-static
sudo apt-get install -y binfmt-support
Initializing buildx for linux/arm64/v8 builds
sudo docker buildx create --platform linux/arm64/v8 --name arm64-v8
Use new driver
sudo docker buildx use arm64-v8
Bootstrap driver
sudo docker buildx inspect --bootstrap
Verify
sudo docker buildx inspect
Name: arm64-v8
Driver: docker-container
Nodes:
Name: arm64-v80
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/arm64*, linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
Now it looks like we're ready to run the build. I ended up with the following command:
sudo docker buildx build --push --progress plain --platform linux/arm64/v8 -f Dockerfile -t nanobreaker/agus:arm64 .
--push - since I need to push the final image somewhere
--platform linux/arm64/v8 - Docker requires the target platform to be defined
-t nanobreaker/agus:arm64 - my target repository for the final image
It took ~16 minutes to complete the build and push the image.
The target platform is linux/arm64, as needed.
The image size is 59.75 MB, which is already good enough (with a micro image I could get to ~10 MB).
After that I connected to the Raspberry, downloaded the image and ran it:
docker run -p 8080:8080 nanobreaker/agus:arm64
Pretty nice. Let's try to execute an HTTP request to test out the GPIO pins:
curl 192.168.0.20:8080/led/on?gpio=3
Okay, so I see here that there are permission problems and the diozero library is not on java.library.path.
We can fix the permission problems by adding an additional parameter to the docker run command:
docker run --privileged -p 8080:8080 nanobreaker/agus:arm64
PROBLEM
From this point I do not know how to resolve the library load error in the native executable.
I've tried:
Pulling the native executable out of the final container and executing it on the Raspberry host OS - same result, which makes me think the library was not included at GraalVM compile time?
Learning how the library gets loaded: https://github.com/mattjlewis/diozero/blob/main/diozero-core/src/main/java/com/diozero/util/LibraryLoader.java
UPDATE I
It looks like I have two options here:
Figure out a way to create configuration for the diozero library so it is properly resolved by GraalVM during native image compilation.
Add library to the native image and pass it to the native executable.
UPDATE II
Further reading of the Quarkus docs landed me here: https://quarkus.io/guides/writing-native-applications-tips
By default, when building a native executable, GraalVM will not include any of the resources that are on the classpath into the native executable it creates. Resources that are meant to be part of the native executable need to be configured explicitly. Quarkus automatically includes the resources present in META-INF/resources (the web resources) but, outside this directory, you are on your own.
I reached out to @Matt Lewis (the creator of diozero) and he was kind enough to share the configs he used to compile into GraalVM. Thank you Matt!
Here’s the documentation on my initial tests: https://www.diozero.com/performance/graalvm.html
I stashed the GraalVM config here: https://github.com/mattjlewis/diozero/tree/main/src/main/graalvm/config
So, combining this knowledge, we can enrich pom.xml with additional settings to tell GraalVM how to process our library:
<quarkus.native.additional-build-args>
-H:ResourceConfigurationFiles=resource-config.json,
-H:ReflectionConfigurationFiles=reflection-config.json,
-H:JNIConfigurationFiles=jni-config.json,
-H:+TraceServiceLoaderFeature,
-H:+ReportExceptionStackTraces
</quarkus.native.additional-build-args>
I also added resource-config.json, reflection-config.json and jni-config.json to the resource folder of the project (src/main/resources).
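For illustration only, a rough sketch of what a single entry in jni-config.json could look like, using the GraalVM JNI config format and the diozero class that later shows up in the UnsatisfiedLinkError (the real files should be the ones generated for the library, not hand-written like this):

[
  {
    "name": "com.diozero.internal.provider.builtin.gpio.NativeGpioDevice",
    "methods": [
      { "name": "openChip", "parameterTypes": ["java.lang.String"] }
    ]
  }
]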
First, I will try to create a native executable on my native OS: ./mvnw package -Dnative
Fatal error: org.graalvm.compiler.debug.GraalError: com.oracle.graal.pointsto.constraints.UnsupportedFeatureException: No instances of java.lang.ProcessHandleImpl are allowed in the image heap as this class should be initialized at image runtime. To see how this object got instantiated use --trace-object-instantiation=java.lang.ProcessHandleImpl.
Okay, so it failed, but let's trace the object instantiation as recommended; maybe we can do something in the configs to get around this. I added --trace-object-instantiation=java.lang.ProcessHandleImpl to the additional build args.
Fatal error: org.graalvm.compiler.debug.GraalError: com.oracle.graal.pointsto.constraints.UnsupportedFeatureException: No instances of java.lang.ProcessHandleImpl are allowed in the image heap as this class should be initialized at image runtime. Object has been initialized by the java.lang.ProcessHandleImpl class initializer with a trace:
at java.lang.ProcessHandleImpl.<init>(ProcessHandleImpl.java:227)
at java.lang.ProcessHandleImpl.<clinit>(ProcessHandleImpl.java:77)
. To fix the issue mark java.lang.ProcessHandleImpl for build-time initialization with --initialize-at-build-time=java.lang.ProcessHandleImpl or use the the information from the trace to find the culprit and --initialize-at-run-time=<culprit> to prevent its instantiation.
Something new at least; let's try to initialize it at build time with --initialize-at-build-time=java.lang.ProcessHandleImpl
Error: Incompatible change of initialization policy for java.lang.ProcessHandleImpl: trying to change BUILD_TIME from command line with 'java.lang.ProcessHandleImpl' to RERUN for JDK native code support via JNI
com.oracle.svm.core.util.UserError$UserException: Incompatible change of initialization policy for java.lang.ProcessHandleImpl: trying to change BUILD_TIME from command line with 'java.lang.ProcessHandleImpl' to RERUN for JDK native code support via JNI
Okay, we're not able to change the initialization kind, and it looks like this won't get us anywhere.
I found out that with -H:+PrintClassInitialization we can generate a CSV file with class initialization info.
Here we have two lines for java.lang.ProcessHandleImpl:
java.lang.ProcessHandleImpl, RERUN, for JDK native code support via JNI
java.lang.ProcessHandleImpl$Info, RERUN, for JDK native code support via JNI
So it says the class is marked as RERUN, but isn't this exactly what we're looking for? It makes no sense to me right now.
UPDATE III
With the configs for GraalVM provided by @Matt I was able to compile a native image, but it still fails at runtime with java.lang.UnsatisfiedLinkError, which makes me feel the library was not injected properly.
So it looks like we just need to build a proper configuration file. To do this, let's build our application without native for now, run it on the Raspberry, trigger the code related to diozero and collect the output configs.
./mvnw clean package -Dquarkus.package.type=uber-jar
Deploying to the Raspberry; I will run it with the GraalVM agent to generate the configs (https://www.graalvm.org/22.1/reference-manual/native-image/Agent/):
/$GRAALVM_HOME/bin/java -agentlib:native-image-agent=config-output-dir=config -jar ags-gateway-1.0.0-SNAPSHOT-runner.jar
Running simple requests to trigger the diozero code (I've connected an LED to the Raspberry on GPIO 4, and could actually see it turn off/on):
curl -X POST 192.168.0.20:8080/blink/off?gpio=4
curl -X POST 192.168.0.20:8080/blink/on?gpio=4
I've published the project with the output configs.
One thing I noticed: the entry "pattern":"\\Qlib/linux-aarch64/libdiozero-system-utils.so\\E" shows that the aarch64 library gets pulled in while running on the Pi, which is correct, but when I build on my native OS I should specify the 'amd64' platform instead.
Let's try to build a native executable with the new configs:
./mvnw package -Dnative
It compiled successfully; let's run and test:
./target/ags-gateway-1.0.0-SNAPSHOT-runner
curl -X POST localhost:8080/led/on?gpio=4
And here we have an error again:
ERROR [io.qua.ver.htt.run.QuarkusErrorHandler] (executor-thread-0) HTTP Request to /led/on?gpio=4 failed, error id: b0ef3f8a-6813-4ea8-886f-83f626eea3b5-1: java.lang.UnsatisfiedLinkError: com.diozero.internal.provider.builtin.gpio.NativeGpioDevice.openChip(Ljava/lang/String;)Lcom/diozero/internal/provider/builtin/gpio/GpioChip; [symbol: Java_com_diozero_internal_provider_builtin_gpio_NativeGpioDevice_openChip or Java_com_diozero_internal_provider_builtin_gpio_NativeGpioDevice_openChip__Ljava_lang_String_2]
So I finally managed to build a native image, but for some reason it didn't resolve the JNI symbols for the native library.
Any thoughts on how to properly inject the diozero library into the native executable?
UPDATE IV
With the help of @matthew-lewis we managed to build an aarch64 native executable on an amd64 OS. I updated the source project with the final configurations, but I must point out that this is not a final solution: it doesn't cover all the library code, and according to Matt's comments this might not be the only way to configure the GraalVM build.
I've created a very simple Quarkus app that exposes a single REST API to list the available GPIOs. Note that it currently uses the mock provider that will be introduced in v1.3.4 so that I can test and run locally without deploying to a Raspberry Pi.
Running on a Pi would be as simple as removing the dependency to diozero-provider-mock in the pom.xml - you would also currently need to change the dependency to 1.3.3 until 1.3.4 is released.
Basically you need to add this to the application.properties file:
quarkus.native.additional-build-args=\
-H:ResourceConfigurationFiles=resource-config.json,\
-H:JNIConfigurationFiles=jni-config.json,\
-H:ReflectionConfigurationFiles=reflect-config.json
These files were generated by running com.diozero.sampleapps.LEDTest with the GraalVM Java executable (with a few minor tweaks), i.e.:
$GRAALVM_HOME/bin/java -agentlib:native-image-agent=config-output-dir=config \
-cp diozero-sampleapps-1.3.4.jar:diozero-core-1.3.4.jar:tinylog-api-2.4.1.jar:tinylog-impl-2.4.1.jar \
com.diozero.sampleapps.LEDTest 18
Note that a lot of this was based on my prior experiments with GraalVM, as documented here and here.
The ProcessHandleImpl error appears to be related to the tinylog reflect config that I have edited out.
Update 1
To make life easy for users of diozero, the library does a bit of static initialisation for things like detecting the local board. This causes issues with the logic that loads the most appropriate native library at most once (see LibraryLoader - you will notice it has a static Map of libraries that have been loaded, which prevents the library from being loaded at runtime). To get around this I recommend adding this build property:
--initialize-at-run-time=com.diozero.sbc\\,com.diozero.util
Next, I have been unable to resolve the java.lang.ProcessHandleImpl issue, which prevents reenabling the service loader (diozero uses service loader quite a bit to enable flexibility and extensibility). It would be nice to be able to add this flag:
quarkus.native.auto-service-loader-registration=true
Instead I have specified relevant classes in resource-config.json.
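Putting the pieces from this answer together, the application.properties entry might end up looking something like this (a sketch based on the snippets above; the file names and the escaped comma follow the examples given, so adjust to your project):

quarkus.native.additional-build-args=\
    -H:ResourceConfigurationFiles=resource-config.json,\
    -H:JNIConfigurationFiles=jni-config.json,\
    -H:ReflectionConfigurationFiles=reflect-config.json,\
    --initialize-at-run-time=com.diozero.sbc\\,com.diozero.util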

Ensuring files are available to the JVM

I'm trying to install TensorFlow for Java on Windows 10 using this article. I followed the steps carefully, but the Windows commands didn't work for me, so I decided to do it manually.
The first command is to make the .jar part of the classpath, and I did that manually.
The second step was to ensure that the following two files are available to the JVM: the .jar file and the extracted JNI library.
But I don't know how to do that manually.
The code:
package securityapplication;
import org.tensorflow.TensorFlow;
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
public class SecurityApplication {

    public static void main(String[] args) throws Exception {
        try (Graph g = new Graph()) {
            final String value = "Hello from " + TensorFlow.version();

            // Construct the computation graph with a single operation, a constant
            // named "MyConst" with a value "value".
            try (Tensor t = Tensor.create(value.getBytes("UTF-8"))) {
                // The Java API doesn't yet include convenience functions for adding operations.
                g.opBuilder("Const", "MyConst").setAttr("dtype", t.dataType()).setAttr("value", t).build();
            }

            // Execute the "MyConst" operation in a Session.
            try (Session s = new Session(g);
                 Tensor output = s.runner().fetch("MyConst").run().get(0)) {
                System.out.println(new String(output.bytesValue(), "UTF-8"));
            }
        }
    }
}
Could someone help? My program that uses TensorFlow still fails with the following error.
The text in the image is:
Exception in thread "main" java.lang.UnsatisfiedLinkError: Cannot find TensorFlow native library for OS: windows, architecture: x86. See https://github.com/tensorflow/tensorflow/tree/master/tensorflow/java/README.md for possible solutions (such as building the library from source). Additional information on attempts to find the native library can be obtained by adding org.tensorflow.NativeLibrary.DEBUG=1 to the system properties of the JVM.
at org.tensorflow.NativeLibrary.load(NativeLibrary.java:66)
at org.tensorflow.NativeLibrary.load(NativeLibrary.java:66)
at org.tensorflow.TensorFlow.init(TensorFlow.java:36)
at org.tensorflow.TensorFlow.<clinit>(TensorFlow.java:40)
at org.tensorflow.Graph.<clinit>(Graph.java:194)
at securityapplication.SecurityApplication.main(SecurityApplication.java:15) Java Result: 1 BUILD SUCCESSFUL (total time: 4 seconds)
The result after running the first command in cmd:
The result after running the second command in Windows PowerShell:
Any suggestions?!
Thank you
The first command failure (javac) suggests that the javac command is not in your PATH environment variable. See, for example, this StackOverflow question.
For the second command failure, I believe the space after -D is what is causing you trouble as Holger suggested.
IDEs like Eclipse and others also provide a means to set the java.library.path property for the JVM (see this StackOverflow answer for example).
Background: TensorFlow for Java consists of a Java library (packaged in a .jar file) and a native library (a .dll on Windows, distributed in a .zip file). You need to ensure that the .jar file is on the classpath and that the directory containing the .dll is included in the java.library.path of the JVM when executing a program.
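For example, a sketch of a launch command run from the directory containing the securityapplication package folder (the jar name and the DLL directory are placeholders for whatever you downloaded and extracted):

java -cp libtensorflow-<version>.jar;. -Djava.library.path=<directory containing the extracted .dll> securityapplication.SecurityApplication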
Hope that helps.

PySpark: java.lang.OutofMemoryError: Java heap space

I have been using PySpark with IPython lately on my server with 24 CPUs and 32 GB RAM. It's running on only one machine. In my process, I want to collect a huge amount of data, as in the code below:
train_dataRDD = (train.map(lambda x:getTagsAndText(x))
.filter(lambda x:x[-1]!=[])
.flatMap(lambda (x,text,tags): [(tag,(x,text)) for tag in tags])
.groupByKey()
.mapValues(list))
When I do
training_data = train_dataRDD.collectAsMap()
it gives me an OutOfMemoryError: Java heap space. Also, I cannot perform any operations on Spark after this error, as it loses the connection with Java. It gives Py4JNetworkError: Cannot connect to the java server.
It looks like the heap space is too small. How can I set it to a bigger limit?
EDIT:
Things that I tried before running:
sc._conf.set('spark.executor.memory','32g').set('spark.driver.memory','32g').set('spark.driver.maxResultsSize','0')
I changed the Spark options as per the documentation here (if you Ctrl-F and search for spark.executor.extraJavaOptions): http://spark.apache.org/docs/1.2.1/configuration.html
It says that I can avoid OOMs by setting the spark.executor.memory option. I did that, but it does not seem to be working.
After trying out loads of configuration parameters, I found that only one needed to be changed to enable more heap space: spark.driver.memory.
sudo vim $SPARK_HOME/conf/spark-defaults.conf
#uncomment the spark.driver.memory and change it according to your use. I changed it to below
spark.driver.memory 15g
# press : and then wq! to exit vim editor
Close your existing Spark application and re-run it. You will not encounter this error again. :)
If you're looking for the way to set this from within the script or a jupyter notebook, you can do:
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.master('local[*]') \
.config("spark.driver.memory", "15g") \
.appName('my-cool-app') \
.getOrCreate()
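If you launch the script with spark-submit instead, the same setting can also be passed on the command line (a sketch; your_script.py is a placeholder for your actual script):

spark-submit --driver-memory 15g your_script.py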
I had the same problem with pyspark (installed with brew). In my case it was installed on the path /usr/local/Cellar/apache-spark.
The only configuration file I had was in apache-spark/2.4.0/libexec/python//test_coverage/conf/spark-defaults.conf.
As suggested here I created the file spark-defaults.conf in the path /usr/local/Cellar/apache-spark/2.4.0/libexec/conf/spark-defaults.conf and appended to it the line spark.driver.memory 12g.
I got the same error, and I just assigned more memory to Spark while creating the session:
spark = SparkSession.builder.master("local[10]").config("spark.driver.memory", "10g").getOrCreate()
or
SparkSession.builder.appName('test').config("spark.driver.memory", "10g").getOrCreate()

Mallet: java.lang.OutOfMemoryError with 1024GB Memory allocation

I am trying to use Mallet to run topic modeling on a ~1GB text file, with 11403956 rows. From the mallet directory, I cd to bin and upgrade the memory requirement to 1024GB:
set MALLET_MEMORY=1024G
I then try to run the command:
bin/mallet import-file --input combined_bios.txt --output dh_size.mallet --keep-sequence --remove-stopwords
However, this throws a memory error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at gnu.trove.TObjectIntHashMap.rehash(TObjectIntHashMap.java:170)
at gnu.trove.THash.postInsertHook(THash.java:359)
at gnu.trove.TObjectIntHashMap.put(TObjectIntHashMap.java:155)
at cc.mallet.types.Alphabet.lookupIndex(Alphabet.java:115)
at cc.mallet.types.Alphabet.lookupIndex(Alphabet.java:123)
at cc.mallet.types.FeatureSequence.add(FeatureSequence.java:131)
at cc.mallet.pipe.TokenSequence2FeatureSequence.pipe(TokenSequence2FeatureSequence.java:44)
at cc.mallet.pipe.Pipe$SimplePipeInstanceIterator.next(Pipe.java:294)
at cc.mallet.pipe.Pipe$SimplePipeInstanceIterator.next(Pipe.java:282)
at cc.mallet.types.InstanceList.addThruPipe(InstanceList.java:267)
at cc.mallet.classify.tui.Csv2Vectors.main(Csv2Vectors.java:290)
Is there a workaround for such situations? Any help others can offer would be greatly appreciated!
If you are on Linux or OS X, I think you might be altering the wrong variable. The one you are changing is found in bin/mallet.bat, but you want to change the one in the executable at bin/mallet (i.e. without the .bat file extension):
MEMORY=1g
This is also described under "Issues with Big Data" in this Mallet tutorial:
http://programminghistorian.org/lessons/topic-modeling-and-mallet
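For example (the value here is only an illustration; pick whatever fits your machine's actual RAM), the edit in bin/mallet would look like:

# in bin/mallet (the shell script, not mallet.bat)
MEMORY=16g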

Java virtual machine launch issue

Hi All,
I have an issue: all of a sudden Java stopped working completely. I started getting errors like "Could not create the virtual machine". There is no issue with memory (the machine has 3 GB RAM) and everything was working fine for over 6 months on this system without any issue.
Here are some peculiar behaviors -
When I start Eclipse I see a Java Virtual Machine dialog box with error messages like "Could not find main class org.eclipse......support.legacysystemproperties"
Eclipse is able to start (with the above error), but while running a program I get an error like "Could not create Java Virtual Machine" in a dialog box, and after I click OK on that dialog box I see an error like "unrecognized option -dfile.encoding=cp1252"
I used a text editor, wrote a class Test.java (without any package) and compiled it (Edit #1: javac Test.java). But when I execute the program (Edit #1: java Test), I get the following error -
Exception in thread "main" java.lang.NoClassDefFoundError: test (wrong name: Test).
Edit #1:
Note: the compiled file, Test.class, is successfully created in the directory. I did recheck the path and classpath environment variables; all seem to be correct.
Please note that there seems to be some issue with letter case which is affecting Java.
I uninstalled Java (all versions) and re-installed, but nothing helped. I also ran CCleaner to clean the registry and Malwarebytes' Anti-Malware, but none of it has helped so far.
Appreciate if someone could help me to resolve the issue.
I googled for this and found that others have experienced similar issues, but none of them have found a solution yet, other than the suggestion to re-install the Windows OS itself, which I want to avoid. I did a system restore, but that failed for some other reason.
Please note that I have been using Java for over 10 years; this is the first time I am having such an issue. It is something to do with the Windows Registry or some other system configuration, but I am not able to find out the exact problem.
Anyway, awaiting some good suggestions.
EDIT: Okay, so it looks like the Java executable is getting the command line arguments lower-cased.
Step 1: Verify
You can double-check whether this affects all command line arguments by creating a class with a lower-case name which just dumps its arguments:
public class test {
    public static void main(String[] args) {
        for (String arg : args) {
            System.out.println(arg);
        }
    }
}
Compile and run this with a variety of inputs.
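For example, assuming the file is saved as test.java in the current directory, something like:

javac test.java
java test Foo BAR Baz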
Step 2: Check scope
Assuming step 1 confirms the problem, if you've got .NET installed you can see whether it affects that as well. Create a file Test.cs:
using System;

class Test
{
    static void Main(string[] args)
    {
        foreach (string arg in args)
        {
            Console.WriteLine(arg);
        }
    }
}
Compile this with "csc Test.cs", having found csc in the .NET Framework directory (e.g. c:\Windows\Microsoft.NET\Framework\v4.0.30319 for .NET 4).
Run it like this:
Test foo BAR Baz
and see what happens
Step 3: If step 2 showed that the issue is limited to java.exe:
Check your path, and work out where you're actually running java.exe from
Try explicitly running java.exe from your JRE or JDK directory
