Let's say, for example, I have a URL containing the following percent-encoded character: %80
It is obviously not an ASCII character.
How would it be possible to convert this value to the corresponding hex string in Java?
I tried the following with no luck. The result should be 80.
public static void main(String[] args) {
    System.out.print(byteArrayToHexString(URLDecoder.decode("%80", "UTF-8").getBytes()));
}

public static String byteArrayToHexString(byte[] bytes) {
    StringBuffer buffer = new StringBuffer();
    for (int i = 0; i < bytes.length; i++) {
        if (((int) bytes[i] & 0xff) < 0x10)
            buffer.append("0");
        buffer.append(Long.toString((int) bytes[i] & 0xff, 16));
    }
    return buffer.toString();
}
The best way to deal with this is to parse the url using either java.net.URL or java.net.URI, and then use the relevant getters to extract the components that you require. These will take care of decoding any %-encoded portions in the appropriate fashion.
The problem with your current idea is that %80 does not represent "80", or 80. Rather it represents a byte that further needs to be interpreted in the context of the character encoding of the URL. And if the encoding is UTF-8, then %80 on its own is malformed: 0x80 is a continuation byte, so it is only valid inside a multi-byte sequence that starts with a lead byte (for example %C3%A8 for è).
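For example, a minimal sketch (the URL below is made up for illustration) showing how java.net.URI hands you both the raw and the decoded form of each component:

import java.net.URI;

public class UriParts {
    public static void main(String[] args) throws Exception {
        URI uri = new URI("http://example.com/caf%C3%A8?q=a%20b");
        System.out.println(uri.getRawPath()); // /caf%C3%A8  (still %-encoded)
        System.out.println(uri.getPath());    // /cafè       (decoded for you)
        System.out.println(uri.getQuery());   // q=a b
    }
}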
I don't really see what you are trying to do. However, I'll give it a try.
When you have the String "%80" and you want to get the string "80", you can use this:
String str = "%80";
String hex = str.substring(1); // Cut off the '%'
If you are trying to extract the value 0x80 (which is 128 in decimal) out of it:
String str = "%80";
String hex = str.substring(1); // Cut off the '%'
int value = Integer.parseInt(hex, 16);
If you are trying to convert an int to its hexadecimal representation, use this:
String hexRepresentation = Integer.toString(value, 16);
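Putting the pieces together for the "%80" example (a small sketch):

String str = "%80";
int value = Integer.parseInt(str.substring(1), 16);      // 128
String hexRepresentation = Integer.toString(value, 16);  // "80"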
Related
I'm using this code to convert a UTF-8 String to binary:
public String toBinary(String str) {
    byte[] buf = str.getBytes(StandardCharsets.UTF_8);
    StringBuilder result = new StringBuilder();
    for (int i = 0; i < buf.length; i++) {
        int ch = (int) buf[i];
        String binary = Integer.toBinaryString(ch);
        result.append(("00000000" + binary).substring(binary.length()));
        result.append(' ');
    }
    return result.toString().trim();
}
Before I was using this code:
private String toBinary2(String str) {
    StringBuilder result = new StringBuilder();
    for (int i = 0; i < str.length(); i++) {
        int ch = (int) str.charAt(i);
        String binary = Integer.toBinaryString(ch);
        if (ch < 256)
            result.append(("00000000" + binary).substring(binary.length()));
        else {
            binary = ("0000000000000000" + binary).substring(binary.length());
            result.append(binary.substring(0, 8));
            result.append(' ');
            result.append(binary.substring(8));
        }
        result.append(' ');
    }
    return result.toString().trim();
}
These two methods can return different results; for example:
toBinary("è") = "11000011 10101000"
toBinary2("è") = "11101000"
I think that is because the bytes of è are negative while the corresponding char is not (char is a 2-byte unsigned integer).
What I want to know is: which of the two approaches is the correct one and why?
Thanks in advance.
Whenever you want to convert text into binary data (or into text representing binary data, as you do here) you have to use some encoding.
Your toBinary uses UTF-8 for that encoding.
Your toBinary2 uses something that's not a standard encoding: it encodes every UTF-16 code unit* below 256 in a single byte and all others in 2 bytes. Unfortunately that is not a useful encoding, since to decode it you would have to know whether a given byte stands on its own or is part of a 2-byte sequence (UTF-8 and UTF-16 solve this by marking each unit with its highest-order bits).
tl;dr toBinary seems correct, toBinary2 will produce output that can't uniquely be decoded back to the original string.
* You might be wondering where the mention of UTF-16 comes from: that's because all String objects in Java are implicitly encoded in UTF-16. So charAt gives you UTF-16 code units (which just happen to be equal to the Unicode code point for all characters in the Basic Multilingual Plane).
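To see the difference concretely (a small sketch reusing the è example): the UTF-8 bytes produced by getBytes can be decoded back to the original string, which is what makes toBinary the reversible variant:

String s = "è";
byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);                        // 0xC3, 0xA8 -> "11000011 10101000"
System.out.println(new String(utf8, StandardCharsets.UTF_8).equals(s));  // true: UTF-8 round-trips

char c = s.charAt(0);                                                    // UTF-16 code unit 0x00E8
System.out.println(Integer.toBinaryString(c));                           // 11101000, the toBinary2 output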
This code snippet might help.
String s = "Some String";
byte[] bytes = s.getBytes();
StringBuilder binary = new StringBuilder();
for(byte b:bytes){
int val =b;
for(int i=;i<=s.length;i++){
binary.append((val & 128) == 0 ? 0 : 1);
val<<=1;
}
}
System.out.println(" "+s+ "to binary" +binary);
A string-"gACA" encoded in PHP using base64. Now I'm trying to decode in java using base64. But getting absurd value after decoding. I have tried like this:
public class DecodeString {
    public static void main(String args[]) {
        String strEncode = "gACA"; //gACA is encoded string in PHP
        byte byteEncode[] = com.sun.org.apache.xerces.internal.impl.dv.util.Base64.decode(strEncode);
        System.out.println("Decoded String" + new String(k, "UTF-8"));
    }
}
Output:
??
Please help me out
Java has a built-in Base64 encoder/decoder; no extra libraries are needed to decode it:
byte[] data = javax.xml.bind.DatatypeConverter.parseBase64Binary("gACA");
for (byte b : data)
System.out.printf("%02x ", b);
Output:
80 00 80
It's 3 bytes with hexadecimal codes: 80 00 80
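On Java 8 and newer you can also use java.util.Base64, which avoids the javax.xml.bind dependency (that package was removed from the JDK in Java 11):

byte[] data = java.util.Base64.getDecoder().decode("gACA");
for (byte b : data)
    System.out.printf("%02x ", b);   // prints: 80 00 80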
public static void main(String args[]) throws Exception {
    String strEncode = "gACA"; //gACA is encoded string in PHP
    byte byteEncode[] = Base64.decode(strEncode);
    String result = new String(byteEncode, "UTF-8");
    char[] resultChar = result.toCharArray();
    for (int i = 0; i < resultChar.length; i++) {
        System.out.println((int) resultChar[i]);
    }
    System.out.println("Decoded String: " + result);
}
I suspect it's an encoding problem. The post "Issue about 65533 � in C# text file reading" suggests the first and last characters are the replacement character \uFFFD (65533). In the middle there is a char 0. Your result is probably a replacement character, a NUL, and another replacement character, displayed with the wrong encoding.
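A quick way to see what that decode actually produces (a small sketch): 0x80 is not a valid stand-alone UTF-8 byte, so each one becomes the replacement character, with the NUL in between:

byte[] data = { (byte) 0x80, 0x00, (byte) 0x80 };
String s = new String(data, java.nio.charset.StandardCharsets.UTF_8);
for (char c : s.toCharArray())
    System.out.printf("U+%04X ", (int) c);   // U+FFFD U+0000 U+FFFD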
Try this; it worked fine for me (however, I was decoding files):
Base64.decodeBase64(IOUtils.toByteArray(strEncode));
So it would look like this:
public class DecodeString {
    public static void main(String args[]) throws Exception {
        String strEncode = "gACA"; //gACA is encoded string in PHP
        byte[] byteEncode = Base64.decodeBase64(IOUtils.toByteArray(strEncode));
        System.out.println("Decoded String: " + new String(byteEncode, "UTF-8"));
    }
}
Note that you will need extra libraries:
Commons Codec
Commons FileUpload
Commons IO
First things first, the code you posted should not compile: new String(k, "UTF-8") refers to a variable k that was never declared.
And yeah, "gACA" is a valid base64 string as far as the format goes, but it doesn't decode to any meaningful UTF-8 text. I suppose you're using the wrong encoding, or the string got mangled somewhere along the way...
RFC 4648 defines two alphabets: the standard Base 64 alphabet and the URL- and filename-safe alphabet.
PHP's base64_encode uses the standard Base 64 Encoding.
On the Java side, java.util.Base64 supports both: Base64.getDecoder() expects the standard alphabet and Base64.getUrlDecoder() expects the URL and Filename Safe Alphabet, so the decoder has to match whatever produced the string.
They are very close but not exactly the same. If you need to translate between them on the PHP side:
const REPLACE_PAIRS = [
'-' => '+',
'_' => '/'
];
public static function base64FromUrlSafeToPHP($base64_url_encoded) {
    return strtr($base64_url_encoded, self::REPLACE_PAIRS);
}

public static function base64FromPHPToUrlSafe($base64_encoded) {
    return strtr($base64_encoded, array_flip(self::REPLACE_PAIRS));
}
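On the Java side (Java 8 and newer), java.util.Base64 exposes both alphabets directly, so you can simply pick the decoder that matches the producer (a sketch; "gACA" contains none of the characters that differ, so both decoders return the same three bytes here):

byte[] fromStandard = java.util.Base64.getDecoder().decode("gACA");     // standard alphabet (+ and /)
byte[] fromUrlSafe  = java.util.Base64.getUrlDecoder().decode("gACA");  // URL- and filename-safe (- and _)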
I was trying to print encrypted text as a String, but perhaps I went wrong somewhere. I am doing a simple XOR on plain text. The resulting encrypted text/string is then fed to a C program, which does the same XOR again to get the plain text back.
But in between, I am not able to get a proper string of the encrypted text to pass to C.
String xorencrypt(byte[] passwd, int pass_len) {
    char[] st = new char[pass_len];
    byte[] crypted = new byte[pass_len];
    for (int i = 0; i < pass_len; i++) {
        crypted[i] = (byte) (passwd[i] ^ (i + 1));
        st[i] = (char) crypted[i];
        // characters print fine here; the problem is when I convert the array into a String
        System.out.println((char) passwd[i] + " " + passwd[i] + "= " + (char) crypted[i] + " " + crypted[i]);
    }
    return st.toString();
}
I don't know if some kind of encoding is also needed, because if I used one, how would I decode and decrypt it from the C program?
For example, suppose passwd = bond007;
then the Java program should return akkb78>
and the C program will then decrypt akkb78> back to bond007.
Use
return new String(crypted);
In that case you don't need the st[] array at all.
By the way, the encoded value for bond007 is cmm`560 and not what you posted.
EDIT
While the solution above would most likely work in most Java environments, to be safe about encoding,
as suggested by Alex, provide an encoding parameter to the String constructor.
For example, if you want your string to carry 8-bit bytes:
return new String(crypted, "ISO-8859-1");
You would need the same parameter when getting bytes from your string:
byte[] bytes = myString.getBytes("ISO-8859-1");
Alternatively, use the solution provided by Alex:
return new String(st);
But convert bytes to chars properly:
st[i] = (char) (crypted[i] & 0xff);
Otherwise, any negative byte (crypted[i] < 0) will not be converted to a char properly and you will get surprising results.
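A minimal round-trip sketch of that idea (ISO-8859-1 maps each of the 256 byte values to exactly one character, so no information is lost going into the String and back; the bytes below are the cmm`560 value mentioned above):

byte[] crypted = { 0x63, 0x6D, 0x6D, 0x60, 0x35, 0x36, 0x30 };                // "cmm`560"
String carrier = new String(crypted, java.nio.charset.StandardCharsets.ISO_8859_1);
byte[] back = carrier.getBytes(java.nio.charset.StandardCharsets.ISO_8859_1);
System.out.println(java.util.Arrays.equals(crypted, back));                   // true: every byte value survives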
Change this line:
return st.toString();
to this:
return new String(st);
I have a hex string (sA) converted from a UTF-8 string.
When I convert the hex string sA back to a UTF-8 string, it does not display correctly in the form UI when I run the built .jar file, but it displays fine when I run or debug from the IDE.
I am using NetBeans IDE 7.3.1.
My code is below:
public String hexToString(String txtInHex) {
    byte[] txtInByte = new byte[txtInHex.length() / 2];
    int j = 0;
    for (int i = 0; i < txtInHex.length(); i += 2) {
        txtInByte[j++] = Byte.parseByte(txtInHex.substring(i, i + 2), 16);
    }
    return new String(txtInByte);
}

private String asHex(byte[] buf) {
    char[] chars = new char[2 * buf.length];
    for (int i = 0; i < buf.length; ++i) {
        chars[2 * i] = HEX_CHARS[(buf[i] & 0xF0) >>> 4];
        chars[2 * i + 1] = HEX_CHARS[buf[i] & 0x0F];
    }
    return new String(chars);
}
There are multiple problems with this code.
The valid range for byte values is -128 to 127 (-0x80 to 0x7F in hex), and Byte.parseByte enforces this. If your asHex method has to process a byte whose unsigned value is greater than 127, it will produce a pair of hex digits that hexToString cannot decode, because Byte.parseByte("c3", 16) throws a NumberFormatException.
The asHex method processes only the second byte of the input characters, so it will work correctly only for the first 256 Unicode characters and produce bogus output for the rest of them.
The hexToString method decodes the byte array using the platform-specific default encoding (new String(txtInByte) with no charset), which will give incorrect results if the data was actually encoded in UTF-8 and the default encoding is something else.
Why are you trying to create your own methods for encoding and decoding hex strings instead of using a well known and tested library?
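For example, the hex helpers that ship with the JDK (up to Java 10) in javax.xml.bind.DatatypeConverter already handle both directions correctly (a sketch, assuming DatatypeConverter and java.nio.charset.StandardCharsets are imported):

String hex  = DatatypeConverter.printHexBinary("héllo".getBytes(StandardCharsets.UTF_8));
String back = new String(DatatypeConverter.parseHexBinary(hex), StandardCharsets.UTF_8);
System.out.println(hex + " -> " + back);   // 68C3A96C6C6F -> héllo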
new String(txtInByte, "UTF-8");
Without the encoding, the platform default encoding is used, for instance Windows-1252. The same holds for its inverse, String.getBytes:
String s = "....";
byte[] b = s.getBytes("UTF-8");
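Putting both fixes together, a corrected hexToString could look like this (a sketch: Integer.parseInt instead of Byte.parseByte, plus an explicit charset):

public static String hexToString(String txtInHex) {
    byte[] txtInByte = new byte[txtInHex.length() / 2];
    for (int i = 0; i < txtInByte.length; i++) {
        // Integer.parseInt accepts values above 0x7F, which Byte.parseByte rejects.
        txtInByte[i] = (byte) Integer.parseInt(txtInHex.substring(2 * i, 2 * i + 2), 16);
    }
    return new String(txtInByte, java.nio.charset.StandardCharsets.UTF_8);
}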
Hello, I am looking for a way to detect whether a string has been encoded.
For example:
String name = "Hellä world";
String encoded = new String(name.getBytes("utf-8"), "iso8859-1");
The output of this encoded variable is:
Hellä world
As you can see, there is an A with a tilde (Ã) and another symbol. Is there a way to check if the output contains encoded characters?
Sounds like you want to check if a string that was decoded from bytes in latin1 could have been decoded in UTF-8, too. That's easy because illegal byte sequences are replaced by the character \ufffd:
String recoded = new String(encoded.getBytes("iso-8859-1"), "UTF-8");
return recoded.indexOf('\uFFFD') == -1; // No replacement character found
Your question doesn't quite make sense as stated. A Java String is a sequence of characters. Strings don't have an encoding until you convert them into bytes, at which point you need to specify one (although you will see a lot of code that uses the platform default, which is what e.g. String.getBytes() with no argument does).
I suggest you read this http://kunststube.net/encoding/.
String name = "Hellä world";
String encoded = new String(name.getBytes("utf-8"), "iso8859-1");
This code is just a character corruption bug. You take a UTF-16 string, transcode it to UTF-8, pretend it is ISO-8859-1 and transcode it back to UTF-16, resulting in incorrectly encoded characters.
If I understood your question correctly, this code may help you. The function isEncoded checks whether its parameter can be encoded as ASCII or whether it contains non-ASCII chars.
public boolean isEncoded(String text) {
    Charset charset = Charset.forName("US-ASCII");
    String checked = new String(text.getBytes(charset), charset);
    return !checked.equals(text);
}

@Test
public void testAscii() throws Exception {
    Assert.assertFalse(isEncoded("Hello world"));
}

@Test
public void testNonAscii() throws Exception {
    Assert.assertTrue(isEncoded("Hellä world"));
}
You can also check for other charsets by changing the charset variable or by moving it to a parameter, as sketched below.
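A parameterized variant could look like this (a sketch keeping the same semantics as the method above):

public boolean isEncoded(String text, Charset charset) {
    // Round-trip through the given charset; characters it cannot represent are replaced,
    // so the result differs from the input exactly when such characters are present.
    String checked = new String(text.getBytes(charset), charset);
    return !checked.equals(text);
}

For example, isEncoded("Hellä world", StandardCharsets.ISO_8859_1) returns false because Latin-1 can represent ä, while the US-ASCII check above returns true.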
I'm not really sure what you are trying to do or what your problem is.
This line doesn't make any sense:
String encoded = new String(name.getBytes("utf-8"), "iso8859-1");
You are encoding name as UTF-8 bytes and then decoding those bytes as ISO-8859-1.
If you want to encode name as ISO-8859-1, just do name.getBytes("iso8859-1").
Please tell us what problem you encountered so that we can help more.
You can check whether your string is encoded or not with this code:
public boolean isEncoded(String input) {
    char[] charArray = input.toCharArray();
    for (int i = 0, charArrayLength = charArray.length; i < charArrayLength; i++) {
        Character c = charArray[i];
        if (Character.getType(c) == Character.OTHER_LETTER) {
            return true;
        }
    }
    return false;
}