Read special characters ( æ ø å ) with Java from Oracle database

I have a problem when reading special characters from an Oracle database (using the JDBC driver and GlassFish TopLink).
I store the name "GRØNLÅEN KJÆTIL" in the database through a web service and, in the database, the data is stored correctly.
But when I read this String, print it to the log file and convert it to a byte array with this code:
int pos = 0;
byte[] msg=new byte[1024];
String F = "F" + passenger.getName();
logger.debug("Add " + F + " " + F.length());
msg = addStringToArrayBytePlusSeparator(msg, F,pos);
..............
private byte[] addStringToArrayBytePlusSeparator(byte[] arrDest, String strToAdd, int destPosition)
{
    System.arraycopy(strToAdd.getBytes(Charset.forName("ISO-8859-1")), 0, arrDest, destPosition, strToAdd.getBytes().length);
    arrDest = addSeparator(arrDest, destPosition + strToAdd.getBytes().length, 1);
    return arrDest;
}
1) In the log file there is: "Add FGRÃNLÃ " (the name isn't correct and the F.length() isn't printed).
2) The code throw:
java.lang.ArrayIndexOutOfBoundsException
at java.lang.System.arraycopy(Native Method)
at it.edea.ebooking.business.chi.control.VingCardImpl.addStringToArrayBytePlusSeparator(Test.java:225).
Any solution?
Thanks

You're calling strToAdd.getBytes() without specifying the character encoding, within the System.arraycopy call - that will be using the system default encoding, which may well not be ISO-8859-1. You should be consistent in which encoding you use. Frankly I'd also suggest that you use UTF-8 rather than ISO-8859-1 if you have the choice, but that's a different matter.
Why are you dealing with byte arrays anyway at this point? Why not just use strings?
Also note that your addStringToArrayBytePlusSeparator method doesn't give any indication of how many bytes it's copied, which means the caller won't have any idea what to do with it afterwards. If you must use byte arrays like this, I'd suggest making addStringToArrayBytePlusSeparator return either the new "end of logical array" or the number of bytes copied. For example:
private static final Charset ISO_8859_1 = Charset.forName("ISO-8859-1");
/**
* (Insert fuller description here.)
* Returns the number of bytes written to the array
*/
private static int addStringToArrayBytePlusSeparator(byte[] arrDest,
                                                     String strToAdd,
                                                     int destPosition)
{
    byte[] encodedText = strToAdd.getBytes(ISO_8859_1);
    // TODO: Verify that there's enough space in the array
    System.arraycopy(encodedText, 0, arrDest, destPosition, encodedText.length);
    return encodedText.length;
}
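The caller can then keep track of the logical end of the array itself; for example (a rough sketch reusing the question's data, with a hypothetical separator byte and a hypothetical next field):

int pos = 0;
byte[] msg = new byte[1024];

pos += addStringToArrayBytePlusSeparator(msg, "F" + "GRØNLÅEN KJÆTIL", pos);
msg[pos++] = 0x1C;  // hypothetical field separator, standing in for whatever addSeparator wrote
pos += addStringToArrayBytePlusSeparator(msg, "FNEXTFIELD", pos);  // hypothetical next field
// pos is now the logical end of msg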

Encoding/decoding problems are hard. In every process step you have to do the correct encoding/decoding. So:
- Familiarize yourself with the difference between bytes (InputStream) and characters (Reader, String).
- Choose the character encoding in which you want to store your data in the database, and the character encoding in which you want to expose your web service. Make sure that when you load initial data into the database it's in the right encoding.
- Connect with the right database properties. MySQL requires an addition to the connection URL, ?useUnicode=true&characterEncoding=UTF-8, when using UTF-8; I don't know about Oracle (see the sketch after this list).
- If you print/debug at a certain step and it looks OK, you can't be sure you did it right. The logger can write with the wrong encoding (sometimes making something look OK while in fact it's broken). Your terminal might not handle strange byte encodings correctly. The same holds for command-line database clients. Your data might be stored wrongly, but your wrongly configured terminal interprets/shows the data as correct.
- In XML, it's not only the stream encoding that matters, but also the encoding attribute of the XML declaration.
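For example, a minimal connection sketch (host, schema and credentials are placeholders; as far as I know the Oracle thin driver does the character-set conversion itself, so it has no equivalent URL parameter):

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectionEncodingExample {
    public static void main(String[] args) throws Exception {
        // MySQL: the encoding is declared as part of the JDBC URL
        String mysqlUrl = "jdbc:mysql://localhost:3306/mydb?useUnicode=true&characterEncoding=UTF-8";
        try (Connection mysql = DriverManager.getConnection(mysqlUrl, "user", "password")) {
            System.out.println("MySQL connected: " + !mysql.isClosed());
        }

        // Oracle: no encoding parameter in the URL; the driver converts to/from the database character set
        String oracleUrl = "jdbc:oracle:thin:@localhost:1521:XE";
        try (Connection oracle = DriverManager.getConnection(oracleUrl, "user", "password")) {
            System.out.println("Oracle connected: " + !oracle.isClosed());
        }
    }
}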

Related

Shortest String encoding for a byte array

I have this code that generates a UBJSON byte array:
UBObject obj = UBValueFactory.createObject();
obj.put("appId", UBValueFactory.createString("70cce8adb93c4c968a7b1483f2edf5c1"));
obj.put("apiKey", UBValueFactory.createString("a65d8f147fa741b0a6d7fc43e18363c9"));
obj.put("entityType", UBValueFactory.createString("Todo"));
obj.put("entityId", UBValueFactory.createString("2-0"));
obj.put("blobName", UBValueFactory.createString("blobName"));
ByteArrayOutputStream out = new ByteArrayOutputStream();
UBWriter writer = new UBWriter(out);
try {
    writer.write(obj);
    writer.close();
} catch (IOException e) {
    e.printStackTrace();
}
// Byte array of UBJSON
byte[] ubjsonBytes = out.toByteArray();
The question is, what is the shortest String encoding of this byte array that can be used and transmitted in an HTTP URL? Using Base64 works perfectly as a URL path or query parameter, but yields a quite long String.
Depending on the input length and other properties you might want to try compressing the input with gzip before encoding the byte[] with Base64. Often a URL friendly variant of Base64 is used:
For this reason, modified Base64 for URL variants exist (such as base64url in RFC 4648), where the + and / characters of standard Base64 are respectively replaced by - and _, so that using URL encoders/decoders is no longer necessary and have no impact on the length of the encoded value, leaving the same encoded form intact for use in relational databases, web forms, and object identifiers in general.
Some variants allow or require omitting the padding = signs to avoid them being confused with field separators, or require that any such padding be percent-encoded. Some libraries will encode = to ., potentially exposing applications to relative path attacks when a folder name is encoded from user data.
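With the JDK alone, the gzip-then-Base64url idea could look roughly like this (a sketch; whether gzip actually shortens the result depends on the payload, and for very small inputs it can make it longer):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Base64;
import java.util.zip.GZIPOutputStream;

public class UrlSafeEncoding {
    // Gzips the payload and encodes it with the URL-safe Base64 alphabet, without padding
    static String encodeForUrl(byte[] payload) throws IOException {
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(compressed)) {
            gzip.write(payload);
        }
        return Base64.getUrlEncoder().withoutPadding().encodeToString(compressed.toByteArray());
    }
}

Called as encodeForUrl(ubjsonBytes), the result uses only - and _ outside the alphanumeric range, so it can go into a path or query parameter without further percent-encoding.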
You could attempt to use Base85; however, it encodes with characters that can change the meaning of a URL, e.g. &. This might or might not work with your setup and might depend on things like the reverse proxy configuration. Because of that it's often better to use a safe encoding like Base64.
All in all, long data should go into the request body and not the URL.

Issues in Converting values to UTF-8

I am encountering issues displaying names in reporting. My application uses different technologies: PHP, Perl and, for BI, Pentaho.
We are using MySQL as the DB and my table is CHARSET=utf8.
My table stores values in rows as below, which is wrong:
Row1 = Ãx—350
Row2 = Ñz–401
PHP and Perl use different built-in functions to convert the above values stored in the DB, and they display in the UI as below, which is correct:
Expected Row1 = Áx—350
Expected Row2 = Ñz–401
Coming to the reports, which use Pentaho, I am using ETL to transform the data before showing it in the reports. In order to convert the above DB-stored values I am trying to convert the data through a Java step as below:
new java.lang.String(new java.lang.String(CODE).getBytes("Windows-1252"), "UTF-8")
But it is not converting the values properly; among the above 2 wrong values only the Row2 value is converted properly, while Row1 is wrongly converted, as below:
Converted Row1 = �?x—350
Converted Row2 = Ñz–401
Please suggest how I can convert the values properly so that, for example, the Row1 value is converted to Áx—350.
I wrote a small Java program as below to convert the Ãx—350 string to Áx—350:
String input = "Ãx—350";
byte[] b1 = input.getBytes("Windows-1252");
System.out.println("Input Get Bytes = "+b1.toString());
String szUT8 = new String(b1, "UTF-8");
System.out.println("Input Encoded = " + szUT8);
The output from the above code is as below
Input Get Bytes = [B@157ee3e5
Input Encoded = �?x—350-350—É1
Looking at the output, the string is wrong; the actual expected output is Áx—350.
To confirm the encoding/decoding schemes I tried testing the string online: I tested with the string Ãx—350 and the output is Áx—350 as expected, which is correct.
So can anyone please point out why the Java code is not able to convert properly although I am using the proper encoding/decoding schemes? Is there anything else I am missing, or is my approach wrong?
The CHARSET setting in your db being set to utf-8 doesn't necessarily mean that the data there is properly encoded in utf-8 (or even in utf-8 at all), as we can see. It looks like you are dealing with mojibake - characters that were at one time decoded using the wrong encoding scheme and then, in turn, encoded wrong. Fixing that is usually a tedious process of figuring out the past decode/encode errors and then undoing them.
Long story short: if you have mojibake, there aren't any automatic conversions you can do unless you know (or can figure out) what conversions were made in the past.
Converting is a matter of first decoding, then encoding. To convert in Perl:
my $string = "some windows-1252 string";
use Encode;
my $raw = decode('windows-1252',$string);
my $encoded = encode('utf-8',$raw);
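The same decode-then-encode step in Java, assuming the raw bytes really are Windows-1252 (a minimal sketch with example bytes):

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class Windows1252ToUtf8 {
    public static void main(String[] args) {
        // 0xC1 is "Á" in Windows-1252
        byte[] windows1252Bytes = { (byte) 0xC1, (byte) 'x' };
        String decoded = new String(windows1252Bytes, Charset.forName("windows-1252")); // decode
        byte[] utf8Bytes = decoded.getBytes(StandardCharsets.UTF_8);                    // encode
        System.out.println(decoded + " -> " + utf8Bytes.length + " UTF-8 bytes");       // Áx -> 3 UTF-8 bytes
    }
}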

Converting string to byte[] returns wrong value (encoding?)

I read a byte[] from a file and convert it to a String:
byte[] bytesFromFile = Files.readAllBytes(...);
String stringFromFile = new String(bytesFromFile, "UTF-8");
I want to compare this to another byte[] I get from a web service:
String stringFromWebService = webService.getMyByteString();
byte[] bytesFromWebService = stringFromWebService.getBytes("UTF-8");
So I read a byte[] from a file and convert it to a String and I get a String from my web service and convert it to a byte[]. Then I do the following tests:
// works!
org.junit.Assert.assertEquals(stringFromFile, stringFromWebService);
// fails!
org.junit.Assert.assertArrayEquals(bytesFromFile, bytesFromWebService);
Why does the second assertion fail?
Other answers have covered the likely fact that the file is not UTF-8 encoded giving rise to the symptoms described.
However, I think the most interesting aspect of this is not that the byte[] assert fails, but that the assert that the string values are the same passes. I'm not 100% sure why this is, but I think the following trawl through the source code might give us the answer:
Looking at how new String(bytesFromFile, "UTF-8"); works - we see that the constructor calls through to StringCoding.decode()
This in turn, if supplied with the UTF-8 character set, calls through to StringDecoder.decode()
This calls through to CharsetDecoder.decode() which decides what to do if the character is unmappable (which I guess will be the case if a non-UTF-8 character is presented)
In this case it uses an action defined by
private CodingErrorAction unmappableCharacterAction
= CodingErrorAction.REPORT;
which means that it still reports the character it has decoded, even though it's technically unmappable.
I think this means that even when the code gets an unmappable character, it substitutes its best guess - so I'm guessing that its best guess is correct and hence the String representations are the same under comparison, but the byte[] are no longer the same.
This hypothesis is kind of supported by the fact that the catch block for CharacterCodingException in StringCoding.decode() says:
} catch (CharacterCodingException x) {
// Substitution is always enabled,
// so this shouldn't happen
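That substitution behaviour is easy to observe directly: the String constructor silently substitutes U+FFFD for bad input instead of throwing, while a bare CharsetDecoder (whose default error action is REPORT) does throw. A small sketch with a deliberately invalid UTF-8 byte pair:

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

public class DecodeBehaviour {
    public static void main(String[] args) {
        byte[] notUtf8 = { (byte) 0xC3, (byte) 0x28 };  // 0xC3 starts a 2-byte sequence, 0x28 is not a continuation byte

        // The String constructor never throws; it substitutes the replacement character
        System.out.println(new String(notUtf8, StandardCharsets.UTF_8));  // prints "�("

        // A bare CharsetDecoder reports the error instead
        try {
            StandardCharsets.UTF_8.newDecoder().decode(ByteBuffer.wrap(notUtf8));
        } catch (CharacterCodingException e) {
            System.out.println("decoder threw: " + e);
        }
    }
}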
I don't understand it fully, but here's what I get so far:
The problem is that the data contains some bytes which are not valid UTF-8 bytes as I know by the following check:
// returns false for my data!
public static boolean isValidUTF8(byte[] input) {
    CharsetDecoder cs = Charset.forName("UTF-8").newDecoder();
    try {
        cs.decode(ByteBuffer.wrap(input));
        return true;
    } catch (CharacterCodingException e) {
        return false;
    }
}
When I change the encoding to ISO-8859-1 everything works fine. The strange thing (which I don't understand yet) is why my conversion (new String(bytesFromFile, "UTF-8");) doesn't throw any exception (like my isValidUTF8 method), although the data is not valid UTF-8.
However, I think I will go another way and encode my byte[] as a Base64 string, as I don't want more trouble with encoding.
The real problem in your code is that you don't know the real file encoding.
When you read the string from the web service you get a sequence of chars; when you convert the string from chars to bytes the conversion is done right because you specify how to transform chars into bytes with a specific encoding ("UTF-8"). When you read a text file you face a different problem: you have a sequence of bytes that needs to be converted to chars. In order to do it properly you must know how the chars were converted to bytes, i.e. what the file encoding is. For files (unless specified) it's a platform constant; on Windows files are encoded in windows-1252 (which is very close to ISO-8859-1); on Linux/Unix it depends, I think UTF-8 is the default.
By the way, the web service call did a decode operation under the hood; the HTTP call uses a header that defines how chars are encoded, i.e. how to read the bytes from the socket and transform them into chars. So calling a SOAP web service gives you back XML (which can be unmarshalled into a Java object) with all the encoding operations done properly.
So if you must read chars from a File you must face the encoding issue; you can use Base64 as you stated, but you lose one of the main benefits of text files: they are human-readable, which eases debugging and development.
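In code, facing the encoding issue mostly means naming the charset explicitly every time bytes become chars, for example (a sketch; the file name is a placeholder):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReadWithExplicitCharset {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("data.txt");
        // Decode with a stated charset instead of the platform default
        String asUtf8 = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
        String asLatin1 = new String(Files.readAllBytes(file), StandardCharsets.ISO_8859_1);
        System.out.println("UTF-8: " + asUtf8.length() + " chars, ISO-8859-1: " + asLatin1.length() + " chars");
    }
}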

How to process a string with 823237 characters

I have a string that has 823237 characters in it. It's actually an XML file and for testing purposes I want to return it as a response from a servlet.
I have tried everything I can possibly think of:
1) creating a constant with the whole string... in this case Eclipse complains (with a red line under the servlet class name):
The type generates a string that requires more than 65535 bytes to encode in Utf8 format in the constant pool
2) breaking the whole string into 20 string constants and writing them to the out object directly, something like:
out.println( CONSTANT_STRING_PART_1 + CONSTANT_STRING_PART_2 +
CONSTANT_STRING_PART_3 + CONSTANT_STRING_PART_4 +
CONSTANT_STRING_PART_5 + CONSTANT_STRING_PART_6 +
// add all the string constants till .... CONSTANT_STRING_PART_20);
in this case the build fails, complaining:
[javac] D:\xx\xxx\xxx.java:87: constant string too long
[javac] CONSTANT_STRING_PART_19 + CONSTANT_STRING_PART_20);
^
3) reading the XML file as a string and writing it to the out object; in this case I get:
SEVERE: Allocate exception for servlet MyServlet
Caused by: org.apache.xmlbeans.XmlException: error: Content is not allowed in prolog.
Finally, my question is: how can I return such a big string (as a response) from the servlet?
You can avoid loading all the text into memory by using streams:
InputStream is = new FileInputStream("path/to/your/file");
// or the following line if the file is in the classpath
InputStream is = MyServlet.class.getResourceAsStream("path/to/file/in/classpath");

byte[] buff = new byte[4 * 1024];
int read;
while ((read = is.read(buff)) != -1) {
    out.write(buff, 0, read);
}
is.close();
The second approach might work the following way:
out.print(CONSTANT_STRING_PART_1);
out.print(CONSTANT_STRING_PART_2);
out.print(CONSTANT_STRING_PART_3);
out.print(CONSTANT_STRING_PART_4);
// ...
out.print(CONSTANT_STRING_PART_N);
out.println();
You can do this in a loop of course (which is highly recommended ;)).
The way you do it, you just temporarily create the large string again to then pass it to println(), which is the same problem as the first one.
Ropes: Theory and practice
Why and when to use Ropes for Java for string manipulations
You can read a 823K file into a String. Maybe not the most elegant method, but totally doable. Method 3 should have worked. There was an XML error, but that has nothing to do with reading from a file into a String, or the length of the data.
It has to be an external file, though, because it is too big to be inlined into a class file (there are size limits for those).
I recommend Commons IO FileUtils#readFileToString.
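In a servlet that could look roughly like this (a sketch; the path is a placeholder and commons-io 2.x is assumed to be on the classpath):

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.io.FileUtils;

public class BigXmlServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Read the whole file with an explicit charset, then write it out
        String xml = FileUtils.readFileToString(new File("/path/to/big.xml"), StandardCharsets.UTF_8);
        resp.setContentType("text/xml;charset=UTF-8");
        resp.getWriter().write(xml);
    }
}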
You have to deal with a ByteArrayOutputStream and not with the String itself. If you want to send your String in the HTTP response, all you have to do is read from that byte array stream and write to the response stream like this:
ByteArrayOutputStream baos = new ByteArrayOutputStream(823237);
baos.write(constant1.getBytes());
baos.write(constant2.getBytes());
...
baos.writeTo(response.getOutputStream());
Both problem 1) and 2) are due to the same fundamental issue. A String literal (or constant String expression) cannot take more than 65535 bytes to encode in (modified) UTF-8, because there is a hard limit on the size of string constants in the class file format.
The third problem sounds like a bug in the way you've implemented it rather than a fundamental problem. In fact, it sounds like you are trying to load the XML as a DOM and then unparse it (which is unnecessary), and that somehow you have managed to mangle the XML in the process. (Or maybe it is mangled in the file you are trying to read ...)
The simple and elegant solution is to save the stuff in a file, and then read it as plain text.
Or ... less elegant, but just as effective:
String[] strings = new String[] {
    "longString1",
    "longString2",
    ...
    "longStringN"};
for (String str : strings) {
    out.write(str);
}
Of course, the problem with embedding test data as string literals is that you have to escape certain characters in the string to keep the compiler happy. That's tedious if you have to do it by hand.

How to check whether the file is binary?

I wrote the following method to see whether a particular file contains ASCII text characters only, or control characters in addition to that. Could you glance at this code, suggest improvements and point out oversights?
The logic is as follows: "If the first 500 bytes of a file contain 5 or more control characters, report it as a binary file."
Thank you.
public boolean isAsciiText(String fileName) throws IOException {
    InputStream in = new FileInputStream(fileName);
    byte[] bytes = new byte[500];
    in.read(bytes, 0, bytes.length);
    int x = 0;
    short bin = 0;
    for (byte thisByte : bytes) {
        char it = (char) thisByte;
        if (!Character.isWhitespace(it) && Character.isISOControl(it)) {
            bin++;
        }
        if (bin >= 5) {
            return false;
        }
        x++;
    }
    in.close();
    return true;
}
Since you call this method "isAsciiText", you know exactly what you're looking for. In other words, it's not "isTextInCurrentLocaleEncoding". Thus you can be more accurate with:
if (thisByte < 32 || thisByte > 127) bin++;
edit, a long time later — it's pointed out in a comment that this simple check would be tripped up by a text file that started with a lot of newlines. It'd probably be better to use a table of "ok" bytes, and include printable characters (including carriage return, newline, and tab, and possibly form feed though I don't think many modern documents use those), and then check the table.
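That table-based check might look something like this (a sketch that keeps the question's threshold of five suspicious bytes):

public class AsciiTable {
    // Table of "ok" bytes: printable ASCII plus tab, LF, CR and form feed
    private static final boolean[] OK = new boolean[256];
    static {
        for (int b = 32; b <= 126; b++) OK[b] = true;
        OK['\t'] = OK['\n'] = OK['\r'] = OK['\f'] = true;
    }

    // Five or more "not ok" bytes means we treat the data as binary
    static boolean looksLikeAsciiText(byte[] bytes, int length) {
        int suspicious = 0;
        for (int i = 0; i < length; i++) {
            if (!OK[bytes[i] & 0xFF] && ++suspicious >= 5) {
                return false;
            }
        }
        return true;
    }
}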
x doesn't appear to do anything.
What if the file is less than 500 bytes?
Some binary files have a situation where you can have a header for the first N bytes of the file which contains some data that is useful for an application but that the library the binary is for doesn't care about. You could easily have 500+ bytes of ASCII in a preamble like this followed by binary data in the following gigabyte.
Should handle exception if the file can't be opened or read, etc.
Fails badly if file size is less than 500 bytes
The line char it = (char) thisByte; is conceptually dubious: it mixes the byte and char concepts, i.e. it implicitly assumes that the encoding is one byte = one character (which excludes multi-byte Unicode encodings). In particular, it fails if the file is UTF-16 encoded.
The return inside the loop (slightly bad practice IMO) forgets to close the file.
The first thing I noticed - unrelated to your actual question, but you should be closing your input stream in a finally block to ensure it's always done. Usually this merely handles exceptions, but in your case you won't even close the streams of files when returning false.
Aside from that, why the comparison to ISO control characters? That's not a "binary" file, that's a "file that contains 5 or more control characters". A better way to approach the situation, in my opinion, would be to invert the check: write an isAsciiText function instead which asserts that all the characters in the file (or in the first 500 bytes if you so wish) are in a set of bytes that are known good.
Theoretically, only checking the first few hundred bytes of a file could get you into trouble if it was a composite file of sorts (e.g. text with embedded pictures), but in practice I suspect every such file will have binary header data at the start so you're probably OK.
This would not work with the JDK install packages for Linux or Solaris: they have a shell-script start and then a binary data blob.
Why not check the MIME type using some library like jMimeMagic (http://sourceforge.net/projects/jmimemagic/) and decide, based on the MIME type, how to handle the file?
One could parse and compare against a list of known binary file header bytes, like the one provided here.
Problem is, one needs to have a sorted list of binary-only headers, and the list might not be complete at all. For example, reading and parsing binary files contained in some Equinox framework jar. If one needs to identify the specific file types though, this should work.
If you're on Linux, for existing files on the disk, native file command execution should work well:
String command = "file -i [ZIP FILE...]";
Process process = Runtime.getRuntime().exec(command);
...
It will output information on the files:
...: application/zip; charset=binary
which you can further filter with grep, or in Java, depending on whether you simply need an estimation of the files' binary character or you need to find out their MIME types.
If parsing InputStreams, like the content of nested files inside archives, this unfortunately doesn't work, unless you resort to shell-only programs like unzip - if you want to avoid creating temporary unzipped files.
For this, a rough estimation of examining the first 500 bytes worked out OK for me so far, as was hinted in the examples above; instead of Character.isWhitespace/isISOControl(char), I used Character.isIdentifierIgnorable(codePoint), assuming the UTF-8 default encoding:
private static boolean isBinaryFileHeader(byte[] headerBytes) {
    return new String(headerBytes).codePoints()
                                  .filter(Character::isIdentifierIgnorable)
                                  .count() >= 5;
}

public void printNestedZipContent(String zipPath) {
    try (ZipFile zipFile = new ZipFile(zipPath)) {
        int zipHeaderBytesLen = 500;
        zipFile.entries().asIterator().forEachRemaining(entry -> {
            String entryName = entry.getName();
            if (entry.isDirectory()) {
                System.out.println("FOLDER_NAME: " + entryName);
                return;
            }
            // Get content bytes from ZipFile for ZipEntry
            try (InputStream zipEntryStream = new BufferedInputStream(zipFile.getInputStream(entry))) {
                // Read and store header bytes (may be fewer than zipHeaderBytesLen for small entries)
                byte[] headerBytes = zipEntryStream.readNBytes(zipHeaderBytesLen);
                // Skip entry, if nested binary file
                if (isBinaryFileHeader(headerBytes)) {
                    return;
                }
                // Continue reading zipEntryStream bytes, if non-binary
                byte[] zipContentBytes = zipEntryStream.readAllBytes();
                // Join the already-read header bytes and the rest of the content bytes, header first
                byte[] joinedZipEntryContent = Arrays.copyOf(headerBytes, headerBytes.length + zipContentBytes.length);
                System.arraycopy(zipContentBytes, 0, joinedZipEntryContent, headerBytes.length, zipContentBytes.length);
                // Output (default/UTF-8) encoded text file content
                System.out.println(new String(joinedZipEntryContent));
            } catch (IOException e) {
                System.out.println("ERROR getting ZipEntry content: " + entry.getName());
            }
        });
    } catch (IOException e) {
        System.out.println("ERROR opening ZipFile: " + zipPath);
        e.printStackTrace();
    }
}
You ignore what read() returns; what if the file is shorter than 500 bytes?
When you return false, you don't close the file.
When converting byte to char, you assume your file is 7-bit ASCII.
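Putting those three points together, a corrected sketch of the method could look like this (it also treats only printable ASCII plus tab, LF, CR and form feed as text, as suggested further up):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class AsciiCheck {
    public static boolean isAsciiText(String fileName) throws IOException {
        try (InputStream in = new FileInputStream(fileName)) {   // closed on every exit path
            byte[] bytes = new byte[500];
            int read = in.read(bytes, 0, bytes.length);          // may be fewer than 500, or -1 for an empty file
            int control = 0;
            for (int i = 0; i < read; i++) {
                int b = bytes[i] & 0xFF;                         // unsigned byte, no byte-to-char conversion
                boolean printable = (b >= 32 && b <= 126) || b == '\t' || b == '\n' || b == '\r' || b == '\f';
                if (!printable && ++control >= 5) {
                    return false;
                }
            }
            return true;
        }
    }
}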
