Error with bencode and torrent file - java

I am using this bencode library https://github.com/dampcake/bencode to decode a torrent file, and I am having an issue:
the encoded torrent file looks something like this:
d8:announce21:http://127.0.0.1: ....etc..... piece lengthi65536e6:pieces28300:a�ډ|E���� ���#-14 .....etc........
The thing is that when I pass this string to the decoder, I get an error because of the � symbols.
Here is my question: should I stop decoding just before those symbols, or is the whole string necessary to properly decode the .torrent file?
From what I've read, I need to stop decoding at the end of the dictionary, i.e. when I encounter the final 'e', but I don't know how to identify it properly.
Thanks
UPDATE:
Here is my code :
byte[] to_decode = null;
try {
    Path path = Paths.get("/user/.../file.torrent");
    to_decode = Files.readAllBytes(path);
} catch (IOException e) {
    System.out.println(e.toString());
}
// System.out.println(to_decode.toString());
Bencode bencode = new Bencode();
Map<String, Object> dict = bencode.decode(to_decode, Type.DICTIONARY);
System.out.println(dict);
When I run it, I get no errors, but output of this kind:
f<�>�0�1FT���n" ......etc...... 4'}$�Q�3�� Җk�, private=0}}
So, judging by the braces, I think the output is a dictionary, but not in a usable format, and I can't seem to make it work.
Any advice?

According to the specification (https://en.wikipedia.org/wiki/Bencode), 6:pieces28300:a... means that the key pieces is followed by a 28300-byte string, so that string has to be parsed too. You should stop at the end of the dictionary, but that end is not inside 6:pieces28300:a...; it comes after it, at the very end.
Both the length and the � symbols indicate that you are dealing with binary data. You did not originally specify the error or the source code you were using, but you are most likely using the wrong character encoding. Check the character encoding of the encoded torrent file data and make sure to use the same encoding in your Bencode constructor.
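As a rough sketch of that advice (assuming the package com.dampcake.bencode and a Bencode(Charset) constructor, which the project README suggests), decoding with ISO-8859-1 preserves the binary pieces value, because ISO-8859-1 maps every byte 1:1 to a char:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Map;
import com.dampcake.bencode.Bencode;
import com.dampcake.bencode.Type;

public class TorrentDecode {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("/user/.../file.torrent"); // placeholder path from the question
        byte[] toDecode = Files.readAllBytes(path);
        // ISO-8859-1 round-trips all 256 byte values losslessly, so the
        // binary "pieces" string survives decoding intact.
        Bencode bencode = new Bencode(StandardCharsets.ISO_8859_1);
        Map<String, Object> dict = bencode.decode(toDecode, Type.DICTIONARY);
        System.out.println(dict.keySet()); // print the keys only; values may be binary
    }
}

The library finds the final 'e' of the dictionary for you; there is no need to truncate the input yourself.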

Related

How to make a frequency table from file content using FileInputStream

My assignment is to create a program that does compression using the Huffman algorithm. My program must be able to compress any type of file, which is why I'm not using a Reader, since that works with characters.
I don't understand how to build a frequency table when encoding a binary file.
EDIT!! Problem solved.
import java.io.FileInputStream;
import java.io.IOException;

public class FrequencyTable {
    public static void main(String[] args) {
        int[] freq = new int[256]; // one counter per possible byte value
        try (FileInputStream in = new FileInputStream("./src/hello.jpg")) {
            int currentByte;
            while ((currentByte = in.read()) != -1) {
                // count each byte value read from the stream to build the frequency table
                freq[currentByte]++;
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
I'm not sure what you mean by "reading from an image and looking at the characters", but for text files (as you're reading one in your code example) this mostly works by casting the read byte to char:
char charVal = (char) currentByte;
It mostly works because most data is ASCII and most charsets contain ASCII. It gets more complicated with non-ASCII characters, because a simple cast is equivalent to using the charset ISO-8859-1. That will still produce correct results most of the time, because e.g. Windows' cp1252 (on German systems) differs from ISO-8859-1 mainly at the Euro sign.
Things start to run havoc with charsets like UTF-8, where non-ASCII characters are encoded with multiple bytes, so you will see things like Ã¤ instead of an ä. The same happens with files encoded as UTF-16, where every second byte is most likely a binary zero.
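A minimal sketch of that effect, assuming a UTF-8 encoded "ä" (two bytes, 0xC3 0xA4): casting each byte to char, as above, produces "Ã¤":

import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        byte[] utf8 = "ä".getBytes(StandardCharsets.UTF_8); // { (byte) 0xC3, (byte) 0xA4 }
        StringBuilder sb = new StringBuilder();
        for (byte b : utf8) {
            sb.append((char) (b & 0xFF)); // mask first: a bare cast of a byte would sign-extend
        }
        System.out.println(sb); // prints "Ã¤", not "ä"
    }
}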
You could use Files.readAllBytes and then iterate over this array.
Path path = Paths.get("hello.txt");
try {
    byte[] array = Files.readAllBytes(path);
    // iterate over array here to build the frequency table
} catch (IOException e) {
    e.printStackTrace();
}

Converting string to byte[] returns wrong value (encoding?)

I read a byte[] from a file and convert it to a String:
byte[] bytesFromFile = Files.readAllBytes(...);
String stringFromFile = new String(bytesFromFile, "UTF-8");
I want to compare this to another byte[] I get from a web service:
String stringFromWebService = webService.getMyByteString();
byte[] bytesFromWebService = stringFromWebService.getBytes("UTF-8");
So I read a byte[] from a file and convert it to a String, and I get a String from my web service and convert it to a byte[]. Then I run the following tests:
// works!
org.junit.Assert.assertEquals(stringFromFile, stringFromWebService);
// fails!
org.junit.Assert.assertArrayEquals(bytesFromFile, bytesFromWebService);
Why does the second assertion fail?
Other answers have covered the likely fact that the file is not UTF-8 encoded, giving rise to the symptoms described.
However, I think the most interesting aspect of this is not that the byte[] assert fails, but that the assert that the string values are the same passes. I'm not 100% sure why this is, but I think the following trawl through the source code might give us the answer:
Looking at how new String(bytesFromFile, "UTF-8"); works - we see that the constructor calls through to StringCoding.decode()
This in turn, if supplied with the UTF-8 character set, calls through to StringDecoder.decode()
This calls through to CharsetDecoder.decode() which decides what to do if the character is unmappable (which I guess will be the case if a non-UTF-8 character is presented)
In this case it uses an action defined by
private CodingErrorAction unmappableCharacterAction
    = CodingErrorAction.REPORT;
Which means that it still reports the character it has decoded, even though it's technically unmappable.
I think this means that even when the code encounters an unmappable character, it substitutes its best guess - so I'm guessing that this best guess is correct, and hence the String representations are the same under comparison, while the byte[] are no longer the same.
This hypothesis is kind of supported by the fact that the catch block for CharacterCodingException in StringCoding.decode() says:
} catch (CharacterCodingException x) {
    // Substitution is always enabled,
    // so this shouldn't happen
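A small demonstration of how the String comparison can pass while the byte[] comparison fails (the byte values are made up for the demo): two different invalid-UTF-8 arrays both decode to the replacement character, so the Strings compare equal while the bytes do not:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ReplacementDemo {
    public static void main(String[] args) {
        byte[] fromFile = { (byte) 0xC3 }; // truncated UTF-8 sequence
        byte[] fromWeb  = { (byte) 0xFF }; // byte that is never valid UTF-8

        String s1 = new String(fromFile, StandardCharsets.UTF_8);
        String s2 = new String(fromWeb, StandardCharsets.UTF_8);

        System.out.println(s1.equals(s2));                    // true: both decode to "\uFFFD"
        System.out.println(Arrays.equals(fromFile, fromWeb)); // false: the bytes differ
    }
}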
I don't understand it fully, but here's what I've got so far:
The problem is that the data contains some bytes which are not valid UTF-8, as the following check tells me:
// returns false for my data!
public static boolean isValidUTF8(byte[] input) {
    CharsetDecoder cs = Charset.forName("UTF-8").newDecoder();
    try {
        cs.decode(ByteBuffer.wrap(input));
        return true;
    } catch (CharacterCodingException e) {
        return false;
    }
}
When I change the encoding to ISO-8859-1, everything works fine. The strange thing (which I don't understand yet) is why my conversion (new String(bytesFromFile, "UTF-8");) doesn't throw any exception (unlike my isValidUTF8 method), although the data is not valid UTF-8.
However, I think I will go another way and encode my byte[] as a Base64 string, as I don't want any more trouble with encoding.
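A minimal sketch of that route, using java.util.Base64 (in the JDK since Java 8); the bytes travel as plain ASCII and come back bit-for-bit identical:

import java.util.Arrays;
import java.util.Base64;

public class Base64RoundTrip {
    public static void main(String[] args) {
        byte[] original = { (byte) 0xC3, (byte) 0xFF, 0x41 }; // arbitrary binary data

        String transport = Base64.getEncoder().encodeToString(original); // ASCII-safe text
        byte[] restored = Base64.getDecoder().decode(transport);

        System.out.println(transport);                         // "w/9B"
        System.out.println(Arrays.equals(original, restored)); // true
    }
}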
The real problem in your code is that you don't know what the real file encoding is.
When you read the string from the web service you get a sequence of chars; when you convert the string from chars to bytes, the conversion is done correctly because you specify how to transform chars into bytes with a specific encoding ("UTF-8"). When you read a text file you face a different problem: you have a sequence of bytes that needs to be converted to chars. In order to do that properly you must know how the chars were converted to bytes, i.e. what the file encoding is. For files it is (unless specified) a platform default; on Windows, files are encoded in win1252 (which is very close to ISO-8859-1); on Linux/Unix it depends, but I think UTF-8 is the default.
By the way, the web service call did a second operation under the hood: the HTTP call uses a header that defines how chars are encoded, i.e. how to read the bytes from the socket and transform them into chars. So calling a SOAP web service gives you back XML (which can be unmarshalled into a Java object) with all the encoding operations done properly.
So if you must read chars from a file, you must face the encoding issue; you can use Base64 as you stated, but you lose one of the main benefits of text files: they are human-readable, which eases debugging and development.

How can I tell whether the text I receive from a network/read from a file uses a given encoding?

I have a file, or I read from a socket; the data I read is supposed to be text encoded with a given character coding.
But even if I specify a coding and it turns out to be wrong in the end, the operation succeeds; instead of an exception of any sort, I get a lot of � in my text :/
Is there a way I can trigger a failure instead?
Yes there is.
First, some information: what is that pesky � character, really? Well, it is Unicode's "replacement character", code point U+FFFD.
Now, why do you get this? In order to explain this, we need to delve a little deeper into what happens...
First, a "formal" definition: a character coding is a process which defines a bijection between a stream of bytes and a stream of characters; as it is a bijection, it means that two operations are defined: encoding (turning a stream of characters into a stream of bytes) and decoding (turning a stream of bytes into a stream of characters).
In Java, a character coding is encompassed in a Charset; you can obtain an encoder using Charset.newEncoder(), and a decoder using Charset.newDecoder().
And of course, it can happen that in the decoding process, which is what is of interest here, a certain sequence of bytes turns out to be malformed, in which case the CharsetDecoder must decide what to do... And this behavior depends on CodingErrorAction, which has three values:
REPLACE (the default!!): replace any unmappable sequence with Unicode's replacement character!
IGNORE: scrap all unmappable sequences, don't output anything;
REPORT: throw an exception on an unmappable sequence...
Now, what we want in order to detect malformed inputs and throw an error is to REPORT them!
So, how do we do this given an InputStream? The solution is to use an InputStreamReader; it has a constructor allowing you to specify a CharsetDecoder as an argument. All you have to do is to create your decoder!
For instance, if you want to ensure correct UTF-8, you would do this:
final CharsetDecoder decoder = StandardCharsets.UTF_8
    .newDecoder().onMalformedInput(CodingErrorAction.REPORT);

try (
    final InputStreamReader reader = new InputStreamReader(in, decoder);
) {
    // read from the reader here
}
The exception you want to catch here is a CharacterCodingException. Note that it inherits IOException, so you want to:
try (
    ...
) {
    ...
} catch (CharacterCodingException e) {
    ...
} catch (IOException e) {
    ...
}
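Putting the pieces together as a runnable sketch (the input bytes are made up for the demo): the REPORT action makes the reader throw MalformedInputException, a subclass of CharacterCodingException, instead of silently producing �:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class StrictDecodeDemo {
    public static void main(String[] args) {
        byte[] bogus = { 'a', (byte) 0xFF, 'b' }; // 0xFF is never valid UTF-8

        final CharsetDecoder decoder = StandardCharsets.UTF_8
            .newDecoder().onMalformedInput(CodingErrorAction.REPORT);

        try (InputStreamReader reader =
                new InputStreamReader(new ByteArrayInputStream(bogus), decoder)) {
            while (reader.read() != -1) {
                // consume the stream; the bad byte triggers the exception
            }
        } catch (CharacterCodingException e) {
            System.out.println("Malformed input detected: " + e); // this branch runs
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}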

Base64 String to Windows1251 (cyrillic symbols)

I have trouble converting an email attachment (a simple text file in windows-1251 encoding with Latin and Cyrillic symbols) to a String; i.e. I have a problem converting the Cyrillic.
I got the attachment as a Base64-encoded String like this:
(screenshot: Base64-encoded email attachment)
(screenshot: original file)
So when I try to decode it, I get "?" instead of the Cyrillic symbols.
How can I get the right Cyrillic (Russian) symbols instead of "?"?
I've already tried this code with all available encodings, but nothing helps to get correct Russian symbols.
BASE64Decoder dec = new BASE64Decoder();
for (String key : Charset.availableCharsets().keySet()) {
    System.out.println("K=" + key + " Value:" + Charset.availableCharsets().get(key));
    try {
        System.out.println(new String(dec.decodeBuffer(encoded), key));
    } catch (Exception e) {
        continue;
    }
}
Thank you in advance.
I am not very familiar with BPEL and the protocols it uses. If you communicate between nodes using some binary protocol, then you must 1) ensure the client and the receiver use the same charset and 2) convert the Java string into proper bytes in that encoding. Java stores strings internally in UTF-16 format. So when you execute String correct = new String(commonName.getBytes("ISO-8859-1"), "ISO-8859-5") you get a correct string in UTF-16. Then you need to export it to bytes in the requested encoding, e.g. byte[] buff = correct.getBytes("UTF-8"), assuming the encoding you use between nodes is UTF-8. If the encoding happens to be different, you must make sure it actually supports Cyrillic characters (e.g. ISO-8859-1 does not).
If you use XML for data exchange, make sure it uses a suitable encoding in <?xml encoding="UTF-8"?>. You don't need to play with bytes then; you just need to "import" the string correctly (see the correct variable). Writing to XML converts characters automatically, but the encoding must support the characters you want to write. So if you set encoding="ISO-8859-1", you will get those question marks again.
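For the original question, a minimal sketch (the Base64 payload is a made-up sample containing the windows-1251 bytes of "Привет"): decode the Base64 to raw bytes first, then decode those bytes as windows-1251:

import java.nio.charset.Charset;
import java.util.Base64;

public class Cp1251Demo {
    public static void main(String[] args) {
        String encoded = "z/Do4uXy"; // sample payload: "Привет" in windows-1251, Base64-encoded

        byte[] raw = Base64.getDecoder().decode(encoded); // Base64 -> raw bytes
        String text = new String(raw, Charset.forName("windows-1251")); // bytes -> chars

        System.out.println(text); // prints the Cyrillic text instead of "?"
    }
}

The "?" appears when the bytes are interpreted with (or re-encoded to) a charset that has no mapping for the Cyrillic characters.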

Reading from UTF-16 encoded text file, þÿ is prepended on the front

I'm outputting a byte array to a text file using the following method:
try {
    FileOutputStream fos = new FileOutputStream(filePath + ".8102");
    fos.write(concatenatedIVCipherMAC);
    fos.close();
} catch (Exception e) {
    e.printStackTrace();
}
which outputs UTF-16 encoded data to the file, for example:
¢¬6î)ªÈP~m˜LïiƟê•Àe»/#Ó ö¹¥‘þ²XhÃ&¼lG:Öé )GU3«´DÃ{+í—Ã]íò
However when I'm reading it back in I get þÿ prepended to the front of the data, e.g:
þÿ¢¬6î)ªÈP~m˜LïiƟê•Àe»/?#Ó ö¹¥‘þ²XhÃ&¼lG:Öé )GU3«´DÃ{+í—Ã]íò
This is the method I'm using to read in the file:
private String getFilesContents()
{
    String fileContents = "";
    Scanner sc = null;
    try {
        sc = new Scanner(file, "UTF-16");
        System.out.println("Can read file: " + file.canRead());
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
    while (sc.hasNextLine()) {
        fileContents += sc.nextLine();
    }
    sc.close();
    return fileContents;
}
and then byte[] contentsOfFile = fileContents.getBytes("UTF-16"); to convert the String into a byte array.
A quick Google told me that þÿ represents the byte order mark, but is it Java putting it there or Windows? How can I avoid having þÿ prepended to the data I read in? I was thinking of just ignoring the first two bytes, but if it is Windows doing this, it will obviously break the program on other platforms.
edit: changed appended to prepended.
The file is the IV+data+MAC. It's not meant to be readable text? Should I be doing something differently?
Yes. You shouldn't be trying to treat it as text anywhere.
If you really need to convert arbitrary binary data into text, use Base64 to convert it. Other than that, stick to byte arrays, InputStream and OutputStream.
I don't know exactly why you're supposedly getting extra characters, but the fact that you haven't got real text to start with suggests that it's not really worth diagnosing that side. Just start handling binary data as binary data instead.
EDIT: Have a look at Guava's IO helpers for simplicity...
þÿ is the Unicode byte order mark (BOM) character, saved as UTF-16BE and interpreted as ISO-8859-1.
You shouldn't treat binary data as text (in whatever encoding), if you want to avoid such errors.
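A minimal sketch of the binary-safe route (file name and payload are placeholders): write and read the raw bytes with no charset involved, so no BOM and no replacement characters can appear:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;

public class BinaryRoundTrip {
    public static void main(String[] args) throws IOException {
        byte[] concatenatedIVCipherMAC = { 0x01, (byte) 0xFE, (byte) 0xFF }; // placeholder payload
        Path path = Paths.get("output.8102"); // extension taken from the question

        Files.write(path, concatenatedIVCipherMAC);       // bytes out: no encoding step
        byte[] contentsOfFile = Files.readAllBytes(path); // bytes in: byte-for-byte

        System.out.println(Arrays.equals(concatenatedIVCipherMAC, contentsOfFile)); // true
    }
}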
