I am developing a Java program in Eclipse using a proprietary API, and it throws the following exception at runtime:
java.io.UnsupportedEncodingException:
at java.lang.StringCoding.encode(StringCoding.java:287)
at java.lang.String.getBytes(String.java:954)...
My code:

private static String SERVER = "localhost";
private static int PORT = 80;
private static String DFT = "";
private static String USER = "xx";
private static String pwd = "xx";

public static void main(String[] args) {
    LLValue entInfo = new LLValue();
    LLSession session = new LLSession(SERVER, PORT, DFT, USER, pwd);
    try {
        LAPI_DOCUMENTS doc = new LAPI_DOCUMENTS(session);
        doc.AccessPersonalWS(entInfo);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
The session appears to open without errors, but the encoding exception is thrown at the doc.AccessPersonalWS(entInfo) call.
While researching this error I have tried using the -encoding option of the compiler, changing the encoding of my editor, and so on.
My questions are:
How can I find out the encoding of the .class files I am trying to use?
Should I be matching the encoding of my new program to the encoding of the API?
If Java is machine-independent, why isn't there a standard encoding?
I have read this stack trace and this guide already --
Any suggestions will be appreciated!
Cheers
Run it in your debugger with a breakpoint on String.getBytes() or StringCoding.encode(). Both live in JDK classes, so you have access to them and should be able to see what the third-party code is passing in.
The character encoding specifies how to interpret raw binary data. The default encoding on English Windows systems is CP1252; other languages and systems may use a different default encoding. As a quick test, you might try specifying UTF-8 to see if the problem magically disappears.
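A minimal sketch of that quick test, using plain JDK calls only (nothing here is specific to the proprietary API):

public class EncodingProbe {
    public static void main(String[] args) throws Exception {
        // An explicit, well-known charset name should always succeed:
        byte[] ok = "test".getBytes("UTF-8");
        System.out.println("UTF-8 encoded to " + ok.length + " bytes");
        // An empty charset name throws java.io.UnsupportedEncodingException,
        // matching the stack trace in the question:
        byte[] bad = "test".getBytes("");
    }
}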
As noted in this question, the JVM uses the default encoding of the OS, although you can override this default.
Without knowing more about the third-party API you are trying to use, it's hard to say what encoding they might be using. Unfortunately, looking at the implementation of StringCoding.encode(), it appears there are a couple of different ways you could get an UnsupportedEncodingException. Stepping through with a debugger should help narrow things down.
It looks to me as if something in the proprietary API is calling String.getBytes with an empty string for the character set.
I compiled the following class
public class Test2 {
    public static void main(String[] args) throws Exception {
        "test".getBytes("");
    }
}
and when I ran it, I got the following stacktrace:
Exception in thread "main" java.io.UnsupportedEncodingException:
at java.lang.StringCoding.encode(StringCoding.java:286)
at java.lang.String.getBytes(String.java:954)
at Test2.main(Test2.java:3)
I would be surprised if this has anything to do with the encoding in which the class files are written. It looks to me like a problem with the code, not one you can fix by changing file encodings or compiler/JVM switches.
I don't know anything about what this proprietary API is supposed to do or how it works. Perhaps it is expecting to be run inside a Java EE or web application container? Perhaps it has a bug? Perhaps it needs more configuration before it can run without throwing exceptions? Given that it's proprietary, can you get any support from the vendor?
So we want to use the bog-standard keytool utility that ships with a JRE. But rather than going through the trouble of finding the correct path and executable extension, spawning a subprocess, and running the executable, we collectively had the bright idea ("remember, none of us is as dumb as all of us!") to just call KeyTool's main() directly. It's implemented in Java code and also shipped with the JRE, and contains the standard "classpath" exception to the GPL so we can link against it.
Looking at the KeyTool source, there's even some provision made for this sort of thing: there are comments like "if you're calling KeyTool.main() directly in your own Java program, then [helpful reminder]" and the top-level main() is capable of propagating exceptions to calling code instead of just dying with System.exit(). Being able to just build the same command-line argument array and run KeyTool.main(stuff) instead of having to mess with platform differences seems like a very Java-esque thing to do, right?
In practice, weird things happen and we don't know why.
We want to capture any output from running KeyTool, which starts off like this:
// jdk/src/share/classes/sun/security/tools/KeyTool.java, line 331:
public static void main(String[] args) throws Exception {
    KeyTool kt = new KeyTool();
    kt.run(args, System.out);
}

private void run(String[] args, PrintStream out) throws Exception {
    // real code here, sends to 'out'
}
The KeyTool entry points don't let us pass a PrintStream; it's hardcoded to use System.out. That should be okay thanks to System.setOut. We have an OutputStream subclass that feeds a JTextComponent, but for initial coding, redirecting to a text file is fine. So our code does
PrintStream orig = System.out;
try {
    System.out.println("This is the last visible console line");
    System.setOut(new PrintStream("redirect_test.txt"));
    System.out.println("This is now redirected!");
    KeyTool.main(keytool_argv); // "-help" and "-debug" for now
} catch (Exception e) {
    // ... all the myriad ways things might go wrong ...
} finally {
    System.setOut(orig);
    System.out.println("Back to normal console output");
}
But when we run the code, the redirect_test.txt file contains only "This is now redirected!". The output from keytool's "-help" still shows up on the console, along with the before-and-after println calls.
There are some other oddities in calling KeyTool directly, like the package and class names having changed between Java 7 and Java 8, but that's easy to deal with via reflection. (The comments in the KeyTool source in Java 8 still refer to the Java 7 name, heh.) The only truly freaky thing is how its System.out is strangely unaffected by the same redirection that works everywhere else. (No, there are no weird import statements bringing in a special System replacement.)
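For reference, a sketch of that reflection dance, assuming the two class names I know of (sun.security.tools.KeyTool in Java 7, sun.security.tools.keytool.Main in Java 8):

// Locate whichever keytool entry point this JRE ships, then call its main().
// (The enclosing method must declare throws Exception for the checked exceptions.)
Class<?> ktClass;
try {
    ktClass = Class.forName("sun.security.tools.KeyTool");      // Java 7 name
} catch (ClassNotFoundException e) {
    ktClass = Class.forName("sun.security.tools.keytool.Main"); // Java 8 name
}
java.lang.reflect.Method ktMain = ktClass.getMethod("main", String[].class);
ktMain.invoke(null, (Object) keytool_argv); // same argument array as before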
Here's an online copy of Java 7's KeyTool.java if you don't happen to have OpenJDK sitting around.
You just need to redirect both System.out and System.err, since the usage instructions get printed to the standard error stream instead of the standard output stream. Try this:
PrintStream originalOut = System.out;
PrintStream originalErr = System.err;
PrintStream redirected = new PrintStream("redirect_test.txt");
try {
    System.out.println("This is the last visible console line");
    System.setOut(redirected);
    System.setErr(redirected);
    System.out.println("This is now redirected!");
    KeyTool.main(keytool_argv); // "-help" and "-debug" for now
} catch (Exception e) {
    // ... all the myriad ways things might go wrong ...
} finally {
    System.setOut(originalOut);
    System.setErr(originalErr);
    redirected.close();
    System.out.println("Back to normal console output");
}
I'm working on a Java application that runs sub-processes on files via Runtime. For some files I get an error that causes the "Send error report to Microsoft" window to appear. I need to handle this error programmatically, without showing this window to the user. Can anyone help?
To suppress Windows Error Reporting, the .exe being invoked must not terminate with an unhandled exception. This only works if you have access to the source of the application.
Based on the WER Reference, you should use the Win32 API call WerAddExcludedApplication to add the specific .exe files you intend to ignore to the per-user exclusion list. You could create a simple stub application that adds applications by name to that list; once an application is excluded, invoking it no longer triggers the error dialog.
Similarly, you could create another application that removes them using WerRemoveExcludedApplication.
Alternatively, use JNI/JNA to encapsulate this functionality in a class, rather than going through Runtime.exec.
Here is a simple example using Java Native Access (JNA), which is a simpler version of JNI (no C++ needed for the most part). Download the jna.jar and make it part of your project.
import com.sun.jna.Native;
import com.sun.jna.WString;
import com.sun.jna.win32.StdCallLibrary;

public class JNATest {

    public interface CLibrary extends StdCallLibrary {
        CLibrary INSTANCE = (CLibrary) Native.loadLibrary("wer.dll", CLibrary.class);

        int WerAddExcludedApplication(WString name, boolean global);
        int WerRemoveExcludedApplication(WString name, boolean global);
    }

    public static void main(String[] args) {
        CLibrary.INSTANCE.WerAddExcludedApplication(new WString("C:\\foo.exe"), false);
        CLibrary.INSTANCE.WerRemoveExcludedApplication(new WString("C:\\foo.exe"), false);
    }
}
Basically, replace the new WString(...) value with the name of the application you intend to ignore; from that point it is excluded for the purposes of Windows Error Reporting.
Bear in mind that wer.dll is only present on Windows Vista and newer, so if that is a problem, you may need to edit the registry entries manually.
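If you do end up touching the registry, here is a hedged sketch; I believe the per-user exclusion list lives under HKCU\Software\Microsoft\Windows\Windows Error Reporting\ExcludedApplications, and shelling out to reg.exe is one way to write it from Java:

// Add foo.exe to the per-user WER exclusion list via reg.exe (ships with Windows).
Process p = Runtime.getRuntime().exec(new String[] {
        "reg", "add",
        "HKCU\\Software\\Microsoft\\Windows\\Windows Error Reporting\\ExcludedApplications",
        "/v", "foo.exe", "/t", "REG_DWORD", "/d", "1", "/f"
});
p.waitFor(); // exit code 0 means the value was written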
You can always use a try-catch statement:
try {
    // some code here (the code that is causing the error)
} catch (Exception x) {
    // handle the exception here
}
It works for me...
EDIT: Here is a link that can help you a little bit more:
http://www.exampledepot.com/egs/Java%20Language/TryCatch.html
I am trying to grab filesystem events at the OS/kernel level on OS X.
There are two requirements I have to follow. The first is to do this in Java, as the whole project I'm developing for is written in Java. The second is that I have to find out when a document is opened.
For Linux I used inotify-java, but I can't find a good equivalent for OS X, and JNA doesn't provide a helpful binding either. Currently I'm avoiding event capture by frequently calling the lsof program; this, however, is a bad solution.
Thanks for the help.
You can use dtrace on OS X, but since it needs root privileges it's not something you'd want to put into the runtime of a system.
In any case, you won't be able to do this in pure Java (any Java API would be a wrapper around some lower level C introspection, and if you're doing it kernel-wide, would need to be done as root).
If you just want to track when your program is opening files (as opposed to other files on the same system), you can install your own SecurityManager and implement the checkRead() family of methods, which should give you an idea of when accesses are happening.
import java.io.*;

public class Demo {
    public static void main(String[] args) throws Exception {
        // Every file read from here on goes through Sniffer.checkRead():
        System.setSecurityManager(new Sniffer());
        File f = new File("/tmp/file");
        new FileInputStream(f);
    }
}

class Sniffer extends SecurityManager {
    public void checkRead(String name) {
        System.out.println("Opening " + name);
    }
}
Is there a way to use externally stored source code by loading it into a Java program at runtime?
I would like a program that can be altered without editing the complete source code, and without recompiling the whole program every time. Another advantage is that I could change parts of the code however I want.
Of course there have to be interfaces, so that data can be passed into the external code and returned to the fixed part of the program again.
And of course it should be faster than a purely interpreting system.
So is there a way to do this, something like an additional compilation step for these external source code parts, with the program starting once that is done?
Thank you in advance, Andreas :)
You need the javax.tools API for this. Thus, you need to have at least the JDK installed to get it to work (and let your IDE point to it instead of the JRE). Here's a basic kickoff example (without proper exception and encoding handling just to make the basic example less opaque, cough):
import java.io.*;
import java.net.*;
import javax.tools.*;

public class CompilerKickoff { // enclosing class added so the snippet compiles standalone
    public static void main(String... args) throws Exception {
        String source = "public class Test { static { System.out.println(\"test\"); } }";
        File root = new File("/test");
        File sourceFile = new File(root, "Test.java");
        Writer writer = new FileWriter(sourceFile);
        writer.write(source);
        writer.close();
        // Compile with the system Java compiler (requires a JDK):
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        compiler.run(null, null, null, sourceFile.getPath());
        // Load the compiled class; its static initializer prints "test":
        URLClassLoader classLoader = URLClassLoader.newInstance(new URL[] { root.toURI().toURL() });
        Class<?> cls = Class.forName("Test", true, classLoader);
    }
}
This should print test in stdout, as done by the static initializer in the test source code. Further use would be easier if those classes implemented a certain interface that is already in the classpath. Otherwise you need to involve the Reflection API to access and invoke the methods/fields.
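As a sketch of that reflection route (the no-arg method run here is purely hypothetical; substitute whatever the loaded class actually declares):

// Continuing from the snippet above: instantiate the compiled class and call into it.
Object instance = cls.newInstance();
java.lang.reflect.Method method = cls.getMethod("run"); // hypothetical method name
method.invoke(instance);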
In Java 6 or later, you can get access to the compiler through the javax.tools package. ToolProvider.getSystemJavaCompiler() will get you a javax.tools.JavaCompiler, which you can configure to compile your source. If you are using earlier versions of Java, you can still get at it through the internal com.sun.tools.javac.Main interface, although it's a lot less flexible.
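As one example of that configuration, here's a sketch that collects compiler diagnostics instead of letting them go to stderr (standard javax.tools API; the source path is an assumption carried over from the example above):

import java.io.File;
import javax.tools.*;

public class CompileWithDiagnostics {
    public static void main(String[] args) throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        DiagnosticCollector<JavaFileObject> diagnostics = new DiagnosticCollector<JavaFileObject>();
        StandardJavaFileManager fileManager = compiler.getStandardFileManager(diagnostics, null, null);
        Iterable<? extends JavaFileObject> units =
                fileManager.getJavaFileObjects(new File("/test/Test.java")); // assumed path
        // call() returns true on success; diagnostics capture any errors/warnings.
        boolean ok = compiler.getTask(null, fileManager, diagnostics, null, null, units).call();
        for (Diagnostic<? extends JavaFileObject> d : diagnostics.getDiagnostics()) {
            System.out.println(d.getKind() + " at line " + d.getLineNumber() + ": " + d.getMessage(null));
        }
        System.out.println("Compilation " + (ok ? "succeeded" : "failed"));
        fileManager.close();
    }
}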
Java 6 has a scripting API. I've used it with JavaScript, but I believe you can have it compile external Java code as well.
http://java.sun.com/developer/technicalArticles/J2SE/Desktop/scripting/
Edit: Here is a more relevant link:
"Dynamic source" code in Java applications
The obvious answer is to use Charset.defaultCharset(), but we recently found out that this might not be the right answer. I was told the result is different from the real default charset used by java.io classes on several occasions. It looks like Java keeps two sets of default charsets. Does anyone have any insights on this issue?
We were able to reproduce one failing case. It's kind of a user error, but it may still expose the root cause of all the other problems. Here is the code:
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.nio.charset.Charset;

public class CharSetTest {
    public static void main(String[] args) {
        System.out.println("Default Charset=" + Charset.defaultCharset());
        System.setProperty("file.encoding", "Latin-1");
        System.out.println("file.encoding=" + System.getProperty("file.encoding"));
        System.out.println("Default Charset=" + Charset.defaultCharset());
        System.out.println("Default Charset in Use=" + getDefaultCharSet());
    }

    private static String getDefaultCharSet() {
        OutputStreamWriter writer = new OutputStreamWriter(new ByteArrayOutputStream());
        return writer.getEncoding();
    }
}
Our server requires the default charset to be Latin-1 to deal with some mixed encodings (ANSI/Latin-1/UTF-8) in a legacy protocol, so all our servers run with this JVM parameter:
-Dfile.encoding=ISO-8859-1
Here is the result on Java 5,
Default Charset=ISO-8859-1
file.encoding=Latin-1
Default Charset=UTF-8
Default Charset in Use=ISO8859_1
Someone tried to change the encoding at runtime by setting file.encoding in the code. We all know that doesn't work. However, it apparently throws off defaultCharset() while not affecting the real default charset used by OutputStreamWriter.
Is this a bug or feature?
EDIT: The accepted answer shows the root cause of the issue. Basically, in Java 5 you can't trust defaultCharset(); it is not the default encoding used by the I/O classes. It looks like Java 6 corrects this issue.
This is really strange... Once set, the default Charset is cached and it isn't changed while the class is in memory. Setting the "file.encoding" property with System.setProperty("file.encoding", "Latin-1"); does nothing. Every time Charset.defaultCharset() is called it returns the cached charset.
Here are my results:
Default Charset=ISO-8859-1
file.encoding=Latin-1
Default Charset=ISO-8859-1
Default Charset in Use=ISO8859_1
I'm using JVM 1.6 though.
(update)
Ok. I did reproduce your bug with JVM 1.5.
Looking at the source code of 1.5, the cached default charset isn't being set. I don't know whether this is a bug or not, but 1.6 changes this implementation and uses the cached charset:
JVM 1.5:
public static Charset defaultCharset() {
    synchronized (Charset.class) {
        if (defaultCharset == null) {
            java.security.PrivilegedAction pa =
                new GetPropertyAction("file.encoding");
            String csn = (String) AccessController.doPrivileged(pa);
            Charset cs = lookup(csn);
            if (cs != null)
                return cs;
            return forName("UTF-8");
        }
        return defaultCharset;
    }
}
JVM 1.6:
public static Charset defaultCharset() {
    if (defaultCharset == null) {
        synchronized (Charset.class) {
            java.security.PrivilegedAction pa =
                new GetPropertyAction("file.encoding");
            String csn = (String) AccessController.doPrivileged(pa);
            Charset cs = lookup(csn);
            if (cs != null)
                defaultCharset = cs;
            else
                defaultCharset = forName("UTF-8");
        }
    }
    return defaultCharset;
}
When you set the file encoding to file.encoding=Latin-1 and then call Charset.defaultCharset(), what happens is this: because the cached default charset isn't set, the JVM tries to find a charset for the name Latin-1. That name isn't found, because it's not a recognized charset name, so defaultCharset() falls back to UTF-8.
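You can verify the alias lookup directly; a small sketch (results from a typical Sun JDK, where lookup is case-insensitive but hyphens matter):

import java.nio.charset.Charset;

public class AliasCheck {
    public static void main(String[] args) {
        // "latin1" is a registered alias of ISO-8859-1; "Latin-1" is not.
        System.out.println(Charset.isSupported("Latin-1"));    // false
        System.out.println(Charset.isSupported("latin1"));     // true
        System.out.println(Charset.isSupported("ISO-8859-1")); // true
    }
}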
As for why the IO classes such as OutputStreamWriter return an unexpected result:
the implementation of sun.nio.cs.StreamEncoder (which is used by these IO classes) also differs between JVM 1.5 and JVM 1.6. The JVM 1.6 implementation is based on the Charset.defaultCharset() method to get the default encoding if one is not provided to the IO classes. The JVM 1.5 implementation uses a different method, Converters.getDefaultEncodingName(), to get the default charset; this method uses its own cache of the default charset, set upon JVM initialization:
JVM 1.6:
public static StreamEncoder forOutputStreamWriter(OutputStream out,
                                                  Object lock,
                                                  String charsetName)
    throws UnsupportedEncodingException
{
    String csn = charsetName;
    if (csn == null)
        csn = Charset.defaultCharset().name();
    try {
        if (Charset.isSupported(csn))
            return new StreamEncoder(out, lock, Charset.forName(csn));
    } catch (IllegalCharsetNameException x) { }
    throw new UnsupportedEncodingException (csn);
}
JVM 1.5:
public static StreamEncoder forOutputStreamWriter(OutputStream out,
                                                  Object lock,
                                                  String charsetName)
    throws UnsupportedEncodingException
{
    String csn = charsetName;
    if (csn == null)
        csn = Converters.getDefaultEncodingName();
    if (!Converters.isCached(Converters.CHAR_TO_BYTE, csn)) {
        try {
            if (Charset.isSupported(csn))
                return new CharsetSE(out, lock, Charset.forName(csn));
        } catch (IllegalCharsetNameException x) { }
    }
    return new ConverterSE(out, lock, csn);
}
But I agree with the comments. You shouldn't rely on this property. It's an implementation detail.
Is this a bug or feature?
Looks like undefined behaviour. I know that, in practice, you can change the default encoding using a command-line property, but I don't think what happens when you do this is defined.
Bug ID: 4153515 on problems setting this property:
This is not a bug. The "file.encoding" property is not required by the J2SE
platform specification; it's an internal detail of Sun's implementations and
should not be examined or modified by user code. It's also intended to be
read-only; it's technically impossible to support the setting of this property
to arbitrary values on the command line or at any other time during program
execution.
The preferred way to change the default encoding used by the VM and the runtime
system is to change the locale of the underlying platform before starting your
Java program.
I cringe when I see people setting the encoding on the command line - you don't know what code that is going to affect.
If you do not want to use the default encoding, set the encoding you do want explicitly via the appropriate method/constructor.
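A minimal sketch of being explicit at every boundary (the file name and charset choices here are just for illustration):

import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class ExplicitEncoding {
    public static void main(String[] args) throws Exception {
        // Name the charset instead of inheriting the platform default:
        Writer out = new OutputStreamWriter(new FileOutputStream("out.txt"), "UTF-8");
        out.write("explicit beats implicit");
        out.close();

        // Same rule for byte/String conversions:
        byte[] bytes = "explicit beats implicit".getBytes("ISO-8859-1");
        System.out.println(new String(bytes, "ISO-8859-1"));
    }
}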
The behaviour is not really that strange. Looking into the implementation of the classes, it is caused by the following:
Charset.defaultCharset() does not cache the determined character set in Java 5.
Setting the system property "file.encoding" and invoking Charset.defaultCharset() again causes a second evaluation of the system property; no character set with the name "Latin-1" is found, so Charset.defaultCharset() defaults to "UTF-8".
OutputStreamWriter, however, caches the default character set and is probably used already during VM initialization, so its default character set diverges from Charset.defaultCharset() if the system property "file.encoding" is changed at runtime.
As already pointed out, it is not documented how the VM must behave in such a situation. The Charset.defaultCharset() API documentation is not very precise on how the default character set is determined, only mentioning that it is usually done on VM startup, based on factors like the OS default character set or default locale.
First, Latin-1 is the same as ISO-8859-1, so the default was already OK for you, right?
You successfully set the encoding to ISO-8859-1 with your command line parameter. You also set it programmatically to "Latin-1", but, that's not a recognized value of a file encoding for Java. See http://java.sun.com/javase/6/docs/technotes/guides/intl/encoding.doc.html
When you do that, it looks like Charset resets to UTF-8, judging from the source. That at least explains most of the behavior.
I don't know why OutputStreamWriter shows ISO8859_1. It delegates to closed-source sun.misc.* classes. I'm guessing it isn't quite dealing with encoding via the same mechanism, which is weird.
But of course you should always be specifying what encoding you mean in this code. I'd never rely on the platform default.
I have set the VM argument -Dfile.encoding=UTF-8 in the WAS server to change the server's default character set.
Also check
System.getProperty("sun.jnu.encoding")
It seems to be the same encoding as the one used on your system's command line.
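A trivial sketch to compare the two properties side by side (both are internal Sun properties, so treat the output as informational only):

public class EncodingProps {
    public static void main(String[] args) {
        // file.encoding drives the default charset for byte/char I/O;
        // sun.jnu.encoding is used for things like file names and process arguments.
        System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
        System.out.println("sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding"));
    }
}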