Problems parsing ISO 8583 message - Java

I've downloaded code from https://github.com/vikrantlabde/iso8583-Java and, after some modifications, I'm parsing my fields almost fine...
I defined the schema like this:
ISOSCHEMA.put("1","BITMAP");
ISOSCHEMA.put("2","NUM-2-19-0_0");
ISOSCHEMA.put("3","NUMERIC-0-6-0_0");
ISOSCHEMA.put("4","NUMERIC-0-12-0_0");
ISOSCHEMA.put("7","NUMERIC-0-10-0_0");
ISOSCHEMA.put("11","NUMERIC-0-6-0_0");
ISOSCHEMA.put("12","NUMERIC-0-6-0_0");
ISOSCHEMA.put("13","NUMERIC-0-4-0_0");
ISOSCHEMA.put("22","NUMERIC-0-3-0_0");
ISOSCHEMA.put("23","NUMERIC-0-3-0_0");
ISOSCHEMA.put("35","NUM-2-37-0_0");
ISOSCHEMA.put("41","FCHAR-0-8-0_0");
ISOSCHEMA.put("49","FCHAR-0-3-0_0");
ISOSCHEMA.put("55","NUM-3-999-0_0");
The problem is field 55, which is a binary field. The standard documentation describes it as:
55 Reserved ISO B 255 LLLVAR (ISO DOCUMENTATION)
I'm getting an error parsing a string that has the bitmap bit for field 55 turned on.
The output I get is:
820200409F36020004950500000000009A031409039C01005F2A0209789F02060000000005009F03060000000000009F10201F430200200000000000000000045895000000000000000000000000000000009F260840D26C4BA5577CFB9F2701809F370443DD7E879F1A0202509F3303E0B0C8
But I expect:
820200409F36020004950500000000009A031409039C01005F2A0201249F02060000000005009F03060000000000009F10201F430200200000000000000000045895000000000000000000000000000000009F260840D26C4BA5577CFB9F2701809F370443DD7E879F1A0202509F3303E0B0C8
The length of the converted ISO payload is very different too...
The program output is:
303130307238060020C280C28200313636353433323131313232333334343535303030303030303030303030303030303031313031363138333432363030323339343133333433303130313630373130303133373635343332313131323233333434353564333131303232303030393238333030313031303238343031373430393132343233303832303230303430394633363032303030343935303530303030303030303030394130333134303930333943303130303546324130323039373839463032303630303030303030303035303039463033303630303030303030303030303039463130323031463433303230303230303030303030303030303030303030303034353839353030303030303030303030303030303030303030303030303030303030303030394632363038343044323643344241353537374346423946323730313830394633373034343344443745383739463141303230323530394633333033453042304338
What I expect is:
30313030723806002080820031363635343332313131323233333434353530303030303030303030303030303030303131303136313833343236303032333934313333343330313031363037313030313337363534333231313132323333343435353D33313130323230303039323833303031303130323834303137343039313234313135820200409F36020004950500000000009A031409039C01005F2A0201249F02060000000005009F03060000000000009F10201F430200200000000000000000045895000000000000000000000000000000009F260840D26C4BA5577CFB9F2701809F370443DD7E879F1A0202509F3303E0B0C8
One piece of advice I received is that I have to explicitly convert the resulting byte[] to hex and vice versa, i.e.:
String isoMessage = ISOUtil.hexString(packIsoMsg("0100", isofields).getBytes());
And:
unpackIsoMsg(new String(ISOUtil.hex2byte(isoMessage), "UTF-8"));
How should this kind of field be defined in this class? I'm really a newbie with the standard, but I ended up here because jPOS doesn't work in an Android environment. I'm also confused by the hex conversion mentioned above.
Any help is really appreciated...
Kind regards.

DE55 is defined as a Tag-Length-Value (TLV) field. It is not in the normal binary, text, or packed-numeric format you typically see in the rest of an ISO-8583 message, but in ASN.1 BER-TLV (X.690) format.
Unless you account for the BER-TLV encoding, you will not successfully unpack DE55, except when it is used for non-EMV/tokenization purposes. It threw me at first as well, as I was expecting something more straightforward. Be aware that the transport format can actually be longer than the original plain-text or binary output, so it is not the most efficient.
Depending on the ISO specification, a couple of other fields may also use BER-TLV, but DE55 is the industry-standard field for BER-TLV-encoded EMV data, replacing its earlier use as a generic and rarely used 'fee field'.
The ISO-7816 specification covers its use within ISO-8583 in detail for EMV and tokenization, and there are other references and quick guides out there if you are looking for something less in-depth. All volumes of the ISO-7816 specification can be found openly on the internet for free, or can be purchased (at a price) directly from the ISO organization if you want them in the official ISO format.
I am not familiar with the specific Java repository you referenced, but this one has a help page on how to use BER-TLV. Oracle also has a page on dealing with BER-TLV here. BinaryFoo has a repository available as well.
For the purposes of initial testing, if your data is just test data (DO NOT USE PRODUCTION DATA!) you can use http://www.emvlab.org/tlvutils/ to verify your results. When I feed it your input, it produces your expected output.
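If you want to look inside DE55 yourself, a bare-bones BER-TLV walker is not much code. Here is a minimal, hypothetical sketch in plain Java (not tied to any of the libraries mentioned above); it handles multi-byte tags and the short and long length forms, which is enough to split your expected DE55 value into its EMV tags:

import java.util.LinkedHashMap;
import java.util.Map;

public class TlvReader {
    // Parses a flat sequence of BER-TLV tag/length/value triplets.
    public static Map<String, byte[]> parse(byte[] data) {
        Map<String, byte[]> tlvs = new LinkedHashMap<>();
        int i = 0;
        while (i < data.length) {
            // Tag: if the low 5 bits of the first byte are all 1s,
            // further tag bytes follow while bit 8 is set.
            int tagStart = i;
            if ((data[i] & 0x1F) == 0x1F) {
                do { i++; } while ((data[i] & 0x80) == 0x80);
            }
            i++;
            String tag = hex(data, tagStart, i - tagStart);
            // Length: short form (< 0x80) or long form (0x81, 0x82, ...);
            // the indefinite form (0x80) is not handled in this sketch.
            int len = data[i++] & 0xFF;
            if (len > 0x80) {
                int numBytes = len & 0x7F;
                len = 0;
                for (int j = 0; j < numBytes; j++) {
                    len = (len << 8) | (data[i++] & 0xFF);
                }
            }
            byte[] value = new byte[len];
            System.arraycopy(data, i, value, 0, len);
            i += len;
            tlvs.put(tag, value);
        }
        return tlvs;
    }

    private static String hex(byte[] b, int off, int len) {
        StringBuilder sb = new StringBuilder();
        for (int k = off; k < off + len; k++) sb.append(String.format("%02X", b[k]));
        return sb.toString();
    }
}

Running this over your expected DE55 bytes should yield the tags 82, 9F36, 95, 9A, 9C, 5F2A, 9F02, 9F03, 9F10, 9F26, 9F27, 9F37, 9F1A and 9F33, in order.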

Check what format the sender defined for field 55 and use the same definition when unpacking. If they are sending a string, it should be LLLVAR.
When sending an ISO message, the header must be in hex format; hence they convert the bytes to hex.

From the looks of it, you consider 5F2A020978 wrong, whereas you expect 5F2A020124. EMV tag 5F2A (with length 02) is the transaction currency code. This means your transaction is being performed in euros (code 978) instead of Canadian dollars (code 124) as you expect. You can find a currency code list here.
Hope this helps.


HexBinary byte order

The first Google result about HexBinary for XML links to this article.
It gives an example of a HexBinary representation:
"<?xml version="1.0" encoding="UTF-8"?>"
"3f3c6d78206c657673726f693d6e3122302e20226e656f636964676e223d54552d4622383e3f"
But when I implement the encoding myself, or use an online converter, I get this string instead:
"3C3F786D6C2076657273696F6E3D22312E302220656E636F64696E673D225554462D38223F3E"
The difference is in the byte order: 3f3c vs 3c3f. It can't be a UTF-16 problem, because that would produce something like 3c003f00 or 003c003f.
So my question is: is this a problem in the article (a book from O'Reilly), or are there special rules for xsd:hexBinary that require inverting the byte order?
There are no rules that tell you to invert the byte order.
The XSD specification for hexBinary data type is just:
hexBinary represents arbitrary hex-encoded binary data.
Note the word arbitrary - it doesn't tell you how to encode the data, or that you need to swap every two bytes after the encoding is done.
This is an error in the book. It seems likely the author of the book just copied and pasted text into a hex editor or converter, and didn't notice that with default settings it displays data in units of 2 bytes in little-endian byte order. This is an easy mistake to make if you're not used to working with binary data.
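You can confirm the byte order in a few lines of Java; encoding the bytes in sequence reproduces your converter's string, not the book's:

import java.nio.charset.StandardCharsets;

public class HexBinaryDemo {
    public static void main(String[] args) {
        String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>";
        StringBuilder sb = new StringBuilder();
        // Encode each byte in sequence; no byte swapping is involved.
        for (byte b : xml.getBytes(StandardCharsets.UTF_8)) {
            sb.append(String.format("%02X", b));
        }
        // Prints 3C3F786D6C20... ('<' = 0x3C, '?' = 0x3F).
        System.out.println(sb);
    }
}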
It doesn't look like this erratum has been reported yet. If you want to, you can submit it at https://www.oreilly.com/catalog/errata.csp?isbn=9780596004217

What is the definition of "lexically-ordered" base64 and why is RFCC-1940 apparently the canonical reference?

Today I was reading the documentation for Netty's Base64Dialect class.
It includes a dialect called ORDERED, of which it says, somewhat briefly:
Special "ordered" dialect of Base64 described in RFC1940.
To cut to the chase, I can't find any definition of what this is, and the documentation includes an erroneous reference that seems to be replicated all over the internet.
Instead of RFC-1940, the document actually links to RFCC-1940, which apparently is a "reader comment", and a nonsensical one at that:
RFC 920: whkpiy clujzis brkyh dwojfmz jydwq hrnwcgklt fsltaiu
Comment by lsnxkrjo sxavymwpg
Submitted on 10/26/2006
Related RFC: RFC-920
Now RFC-920 appears to have nothing to do with base 64:
Domain requirements
This memo restates and refines the requirements on establishing a
Domain first described in RFC-881. It adds considerable detail
to that discussion, and introduces the limited set of top level
domains.
Is RFC-1940 relevant? Skimming it: no, I can't see any base64 encoding definitions here:
Source Demand Routing: Packet Format and Forwarding Specification (Version 1).
The purpose of SDRP is to support source-initiated selection of
routes to complement the route selection provided by existing routing
protocols for both inter-domain and intra-domain routes. [...]
In fact, searching the web for "rfcc 1940 ordered base64" finds this same URL in lots of other documentation, but sadly no explanation of "lexically ordered base 64".
Is there a legitimate definition of this anywhere? And why hasn't anyone else noticed this URL refers to nonsense?
I have not found a "legitimate definition" of ordered Base64. (At time of writing this, it is not even mentioned in the Wikipedia page on Base64.)
If you treat the code as a specification(!), ordered Base64 is a variant in which the alphabet has been reordered into ascending ASCII order. This means that the natural ordering for ordered Base64 is the same as the natural ordering for the corresponding byte sequence.
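To make that concrete, here is a small demo; it assumes the ordered alphabet is the URL-safe Base64 character set sorted into ascending ASCII order, which is what the legacy implementation appears to use:

public class OrderedBase64Demo {
    // Assumed ordered alphabet: '-' < '0'-'9' < 'A'-'Z' < '_' < 'a'-'z'.
    static final char[] ALPHABET =
        "-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz"
            .toCharArray();

    // Minimal encoder for inputs whose length is a multiple of 3, just to
    // illustrate the ordering property (no padding handling).
    static String encode(byte[] in) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < in.length; i += 3) {
            int n = ((in[i] & 0xFF) << 16) | ((in[i + 1] & 0xFF) << 8) | (in[i + 2] & 0xFF);
            sb.append(ALPHABET[(n >>> 18) & 0x3F]).append(ALPHABET[(n >>> 12) & 0x3F])
              .append(ALPHABET[(n >>> 6) & 0x3F]).append(ALPHABET[n & 0x3F]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String a = encode(new byte[] {0x00, 0x01, 0x02});
        String b = encode(new byte[] {0x00, 0x01, 0x03});
        // Because the alphabet is in ascending ASCII order, the encoded
        // strings sort the same way as the underlying byte sequences.
        System.out.println(a + " < " + b + " : " + (a.compareTo(b) < 0));
    }
}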
Is it a problem that there isn't a specification for ordered Base64?
Probably not.
In reality the RFCs that "specify" the different variants of Base64 (and Base32 / Base16) are actually more of an attempt to describe the variants rather than specify them. And the same applies to the Wikipedia article.
From what I can tell (google searches), the ordered Base64 variant is rarely used.
The Base64 implementation that introduced the ordered variant is legacy code (it hasn't been changed in the last 8 years). New Java code that requires Base64 encoding / decoding capability should use the standard java.util.Base64 class introduced in Java 8.
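For reference, the standard class covers the basic, URL-safe and MIME variants out of the box (though not the ordered dialect):

import java.util.Base64;

public class StdBase64Demo {
    public static void main(String[] args) {
        byte[] data = {0x00, 0x01, 0x02};
        String basic = Base64.getEncoder().encodeToString(data);      // "AAEC"
        String urlSafe = Base64.getUrlEncoder().encodeToString(data); // URL-safe alphabet
        byte[] roundTrip = Base64.getDecoder().decode(basic);         // back to 3 bytes
        System.out.println(basic + " / " + urlSafe + " / " + roundTrip.length);
    }
}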
But it is concerning that the javadocs you linked to (and others!) all refer to a nonsense page. That page probably had a legitimate description at some point, but it looks like it has been vandalized.

How to parse freedict files (*.dict and *.index)

I was searching for free translation dictionaries. FreeDict (freedict.org) provides the ones I need, but I don't know how to parse the *.index and *.dict files. I also don't really know what to google to find useful information about these formats.
The *.index files look like this:
00databasealphabet QdGI l
00databasedictfmt1121 B b
00databaseinfo c 5o
00databaseshort 6E u
00databaseurl 6y c
00databaseutf8 A B
a BHO M
a bad risc BHa u
a bag of nerves BII 2
[...]
and the *.dict files:
[Lot of info stuff]
German-English FreeDict Dictionary ver. 0.3.4
Pipi machen /piːpiːmaxən/
to pee; to piss
(Aktien) zusammenlegen /aktsiːəntsuːzamənleːgən/
to merge (with)
[...]
I would be glad to see some example projects (preferably in Python, but Java, C, C++ are also fine) to understand how to handle these files.
This answer comes late; however, I hope it can be useful for others like me.
JGoerzen wrote the dictdlib library. You can see in more detail how he parses the .index and .dict files:
https://github.com/jgoerzen/dictdlib/blob/master/dictdlib.py
dictd considers its format of .index and .dict[.dz] as private, to reserve itself the right to change it in the future.
If you want to process it directly anyway, the index contains the headwords and the .dict[.dz] contains definitions. It is optionally compressed with a special modified gzip algorithm providing almost random access, which gzip normally does not. The index contains 3 columns per line, tab separated:
The headword for looking up the definition.
The absolute byte position of the definition in the .dict[.dz] file, base64 encoded.
The length of the definition in bytes, base64 encoded (both numbers are decoded in the sketch below).
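Here is a small Java sketch of that decoding, assuming the usual dictd convention that offsets and lengths are numbers written with the standard Base64 digits (A=0 ... /=63, most significant digit first) and that the .dict file is uncompressed:

import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class DictIndexDemo {
    static final String DIGITS =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    // Decodes a dictd base64 number, e.g. "BHO" -> 1*64^2 + 7*64 + 14 = 4558.
    static long decodeB64Number(String s) {
        long n = 0;
        for (char c : s.toCharArray()) {
            n = n * 64 + DIGITS.indexOf(c);
        }
        return n;
    }

    // Looks up one tab-separated index line in an uncompressed .dict file.
    static String lookup(String indexLine, String dictPath) throws Exception {
        String[] cols = indexLine.split("\t");   // headword, offset, length
        long offset = decodeB64Number(cols[1]);
        int length = (int) decodeB64Number(cols[2]);
        try (RandomAccessFile f = new RandomAccessFile(dictPath, "r")) {
            byte[] buf = new byte[length];
            f.seek(offset);
            f.readFully(buf);
            return new String(buf, StandardCharsets.UTF_8);
        }
    }
}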
For more details, see the dict(8) man page (section Database Format), which you should have found in your research before asking your question. For processing the headwords correctly, you'd have to consider encoding and character collation.
It might be better to use an existing library to read dictd databases, but that really depends on whether the library is good (no experience here).
Finally, as you noted yourself, XML is made exactly for easy processing. You could extract the headwords and translations using XPath, leaving out all the grammatical stuff and no need to bother parsing anything.
After getting this far, the next problem would be that there is no one-to-one mapping between words in different languages...

Byte length difference when retrieving from WebSphere MQ Message

In Java, I am polling a WebSphere MQ message queue, expecting a message of String format that is composed entirely of XML. Part of this XML will contain the bytes of a file attachment (any format: pdf, image, etc.) which will then be converted to a blob for storage in an Oracle Db and later retrieval.
The issue I am having is that example files of known size end up in my Db with a different size. I am not adding anything to the bytes (as far as I know), and the size appears to be larger directly after I get the message. I cannot determine whether I am somehow adding information at retrieval, during the conversion from bytes to String, or whether this is happening on the front end when the sender populates the message.
My code at retrieval of the message:
inboundmsg = new MQMessage();
inboundmsg = getMQMsg(FrontIncomingQueue, gmo);
strLen = inboundmsg.getMessageLength();
strData = new byte[strLen];
ibm_id = inboundmsg.messageId;
inboundmsg.readFully(strData);
inboundmsgContents = new String(strData);
I see a file known to have size 21K go to 28K. A coworker has suggested that charset/encoding may be the issue. I do not specify a charset in the constructor call to String above, nor in any of the calls to getBytes when converting back from a string (for other unrelated uses). My default charset is ISO-8859-1. When speaking with the vendor who is initiating the message transfer, I asked her what charset she is using. Her reply:
"I am using the File.WriteAllBytes method in C# - I pass it the path of my file and it writes it to a byte[]. I could not find any documentation on the MSDN about what encoding the function uses. The method creates a byte array and from what I have read online this morning there is no encoding, its just a sequence of 8bit unsigned binary data with no encoding."
Another coworker has suggested that perhaps the MQ charset is the culprit, but my reading of the documentation suggests that MQ charset only affects the behavior of readString, readLine, & writeString.
If I circumvent MQ totally, and populate a byte array using a File Input Stream and a local file, the file size is preserved all the way to Db store, so this definitely appears to be happening at or during message transfer.
The problem is evident in the wording of the question. You describe a payload that contains arbitrary binary data and also describe trying to process it as a string. These two things are mutually exclusive.
This appears to be complicated by the vendor not providing valid XML. For example, consider the attachment:
<PdfBytes>iVBORw0KGgoAAAANS … AAAAASUVORK5CYII=</PdfBytes>
If the attachment legitimately contains any XML special character such as < or > then the result is invalid XML. If it contains null bytes, some parsers assume they have reached the end of the text and stop parsing there. That is why you normally see any attachment in XML either converted to Base64 for transport or else converted to hexadecimal.
The vendor describes writing raw binary data which suggests that what you are receiving contains non-string characters and therefore should not be sent as string data. If she had described some sort of conversion that would make the attachment XML compliant then string would be appropriate.
Interestingly, a Base64 encoding results in a payload that is 1.33 times larger than the original. Coincidence that 21K × 1.33 ≈ 28K? One would think that what is received is actually the binary payload in Base64 format. That actually would be parseable as a string and would account for the difference in file sizes. But it isn't at all what the vendor described doing. She said she's writing "8bit unsigned binary data with no encoding" and not Base64.
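The arithmetic is easy to check: Base64 emits 4 output characters for every 3 input bytes, and 21,000 × 4/3 = 28,000. A quick sanity check with made-up sizes:

import java.util.Base64;

public class ExpansionCheck {
    public static void main(String[] args) {
        byte[] pdf = new byte[21_000];               // stand-in for the 21K file
        int encodedLength = Base64.getEncoder().encode(pdf).length;
        System.out.println(encodedLength);           // 28000 - the observed 28K
    }
}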
So we expect it to fail but not necessarily to result in a larger payload. Consider that WebSphere MQ receiving a message in String format will attempt to convert it. If the CCSID of the message differs from that requested on the GET then MQ will attempt a conversion. If the inbound CCSID is UTF-16 or any double-byte character set, certain characters will be expanded from one to two bytes - assuming the conversion doesn't hit invalid binary characters that cause it to fail.
If the two CCSIDs are the same then no conversion is attempted in the MQ classes but there is still an issue in that something has to parse an XML payload that is by definition not valid and therefore subject to unexpected results. If it happens that the binary payload does not contain any XML special characters and the parser doesn't choke on any embedded null bytes, then the parser is going to rather heroic lengths to forgive the non-compliant payload. If it gets to the </PdfBytes> tag without choking, it may assume that the payload is valid and convert everything between the <PdfBytes>...</PdfBytes> tags itself. Presumably to Base64.
All of this is conjecture, of course. But in a situation where the payload is unambiguously not string data any attempt to parse it as string data will either fail outright or produce unexpected and potentially bizarre results. You are actually unfortunate that it doesn't fail outright because now there's an expectation that the problem is on your end when it clearly appears to be the vendor's fault.
Assuming that the content of the payload remains unchanged, the vendor should be sending bytes messages and you should be receiving them as bytes. That would at least fix the problems MQ is having reconciling the expected format with the actual received format, but it would still be invalid XML. If it works that the vendor sends binary data in a message set to type String with you processing it as bytes then count your blessings and use it that way but don't count on it being reliable. Eventually you'll get a payload with an embedded XML special character and then you will have a very bad day.
Ideally, the vendor should know better than to send binary data in an XML payload without converting it first to string and it is up to them to fix it so that it is compliant with the XML spec and reliable.
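In the meantime, a defensive approach on the receiving side is to keep the payload as bytes end to end and decode only the attachment span explicitly. A sketch, assuming the attachment really is Base64 text sitting between literal <PdfBytes> tags as shown above:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AttachmentExtract {
    static byte[] extractPdf(byte[] payload) {
        // ISO-8859-1 maps bytes 1:1 to chars, so the envelope can be
        // searched as text without corrupting the attachment span.
        String xml = new String(payload, StandardCharsets.ISO_8859_1);
        int start = xml.indexOf("<PdfBytes>") + "<PdfBytes>".length();
        int end = xml.indexOf("</PdfBytes>");
        return Base64.getDecoder().decode(xml.substring(start, end));
    }
}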
Please see this MSDN page: XML, SOAP, and Binary Data

Can all tag numbers be extracted from a given protobuf serialization?

Given a protobuf serialization, is it possible to get a list of all tag numbers that are in the message? More generally, is it possible to view the structure of the message without the defining .proto files?
Most APIs will indeed have some form of reader-based API that allows you to enumerate a raw protobuf stream. However, that by itself is not enough to fully understand the data, since without the schema the interpretation is ambiguous:
a varint could be zig-zag encoded (sint32/sint64), or not (int32/int64/uint32/uint64) - radically changing the meaning, or a boolean, or an enum
a fixed-32/fixed-64 could be a signed or unsigned integer, or could be an IEEE754 float/double
a length-prefixed chunk could be a UTF-8 string, a BLOB, a sub-message, or a "packed" repeated set of primitives; if it is a sub-message, you'll have to repeat recursively
So... yes and no. Certainly you can get the field numbers of the outermost message.
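As a concrete illustration, here is a minimal scanner over the wire format using protobuf-java's CodedInputStream; it recovers each key's field number and wire type, which is all the stream itself guarantees:

import com.google.protobuf.CodedInputStream;
import com.google.protobuf.WireFormat;

public class TagScanner {
    // Prints the field number and wire type of every top-level field.
    public static void scan(byte[] message) throws Exception {
        CodedInputStream in = CodedInputStream.newInstance(message);
        int tag;
        while ((tag = in.readTag()) != 0) {      // 0 signals end of stream
            int fieldNumber = WireFormat.getTagFieldNumber(tag);
            int wireType = WireFormat.getTagWireType(tag);
            System.out.println("field " + fieldNumber + ", wire type " + wireType);
            in.skipField(tag);                   // keys only; values are skipped
        }
    }
}

Wire type 2 is the ambiguous length-delimited case: string, byte array, sub-message or packed repeated field.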
Another approach would be to use the regular API against a type with no members (message Naked {}), and then query the unexpected data (i.e. all of it) via the "extension" API that many implementations provide.
You can get all the tag numbers which appear in one particular message, but you won't get any nested messages - and you won't know the types of those fields, only some subset of possible types.
If you look at the wire encoding, you can see that (for example) byte arrays, strings and nested messages are all encoded the same way - so you may know that "field 3 is a length-prefixed binary field" but you won't know whether that means it's a nested message, a string or a byte array.
