Use APDU commands to get some information from a card - Java

I have a terminal that has its own API to establish a connection and exchange commands between the chip and the terminal. There is a function that transmits an APDU command and returns the answer in a byte array.
For example, if I want to read tag 5A (Application PAN), I send the following command:
byte[] byteArrayAPDU = new byte[]{(byte)0x00, (byte)0xCA, (byte)0x00, (byte)0x5A};
int nResult = SmartCardInterface.transmit(nCardHandle, byteArrayAPDU, byteArrayResponse);
The variable byteArrayResponse gets the response to the APDU command.
When I translate the value of byteArrayAPDU to a string of hexadecimal digits, it gives me: 00 CA 00 5A. And the response to that command is 6E 00 (class not supported).
My device works according to the ISO 7816 technical specification. Is the way in which I am sending APDU commands correct? I ask this because I have read that an APDU command must have at least 5 values, but I don't know what to send in the fifth parameter, nor what the length of the response is.
Can you give an example of how to get tag 5A, or something else, with APDU commands?
If the command were correct, would I see the information as plain text when casting the response to a string, in place of the 6E 00 I see at the moment?

The input and output values that you showed in your question suggest that your use of the method transmit() is correct, i.e. the second argument is a command APDU and the third argument is filled with the response APDU:
resultCode = SmartCardInterface.transmit(cardHandle, commandAPDU, responseAPDU);
Your question regarding the format and validity of APDU commands is rather broad. In general, the format of APDUs and a basic set of commands is defined in ISO/IEC 7816-4. Since you tagged the question with emv and mention the application primary account number, you are probably interacting with some form of EMV payment card (e.g. a credit or debit card from one of the major schemes). In that case, you would probably want to study the various specifications for EMV payment systems which define the data structures and application-specific commands for those cards.
Regarding your specific questions:
Do APDUs always consist of at least 5 bytes?
No, certainly not. Command APDUs consist of at least 4 bytes (the header bytes). These are
+-----+-----+-----+-----+
| CLA | INS | P1  | P2  |
+-----+-----+-----+-----+
Such a 4-byte APDU is called "case 1". This means that the command APDU does not contain a data field sent to the card and that the card is not expected to generate a response data field. So the response APDU is expected to only contain a response status word:
+-----+-----+
| SW1 | SW2 |
+-----+-----+
What is the 5th byte of a command APDU?
The 5th byte is a length field (or part of a length field in case of extended length APDUs, which I won't further explain in this post). Depending on the case, this length field may have two meanings:
If the command APDU does not have a data field, that length field indicates the expected length (Ne) of the response data field:
+-----+-----+-----+-----+-----+
| CLA | INS | P1  | P2  | Le  |
+-----+-----+-----+-----+-----+
Le = 0x01 .. 0xFF: This means that the expected response data length Ne is 1, 2, ... 255 bytes (i.e. exactly the value of Le).
Le = 0x00: This means that the expected response data length Ne is 256 bytes. This is typically used to instruct the card to give you as many bytes as it has available (up to 256 bytes). So even if Le is set to 0x00, you won't always get exactly 256 bytes from the card.
If the command APDU itself has a data field, that length field indicates the length (Nc) of the command data field:
+-----+-----+-----+-----+-----+-----------------+
| CLA | INS | P1  | P2  | Lc  | DATA (Nc bytes) |
+-----+-----+-----+-----+-----+-----------------+
Lc = 0x01 .. 0xFF: This means that the command data length Nc is 1, 2, ... 255 bytes (i.e. exactly the value of Lc).
Lc = 0x00: This is used to indicate an extended length APDU.
If there is a command data field and the command is expected to generate response data, that command APDU may again be followed by an Le field:
+-----+-----+-----+-----+-----+-----------------+-----+
| CLA | INS | P1  | P2  | Lc  | DATA (Nc bytes) | Le  |
+-----+-----+-----+-----+-----+-----------------+-----+
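For illustration, the short (non-extended) forms above can be assembled as plain byte arrays in Java. This is only a sketch with made-up helper names, not part of your terminal API:

// Sketch only: helpers (made-up names) that build short-form command APDUs.

// Case 2: header + Le, no command data, response data expected.
static byte[] caseTwoApdu(byte cla, byte ins, byte p1, byte p2, byte le) {
    return new byte[] { cla, ins, p1, p2, le };
}

// Case 4: header + Lc + command data + Le.
static byte[] caseFourApdu(byte cla, byte ins, byte p1, byte p2, byte[] data, byte le) {
    byte[] apdu = new byte[5 + data.length + 1];
    apdu[0] = cla;
    apdu[1] = ins;
    apdu[2] = p1;
    apdu[3] = p2;
    apdu[4] = (byte) data.length;              // Lc (short form, 1..255 bytes of data)
    System.arraycopy(data, 0, apdu, 5, data.length);
    apdu[apdu.length - 1] = le;                // Le
    return apdu;
}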
Is the command 00 CA 00 5A correct?
Probably not, for several reasons:
Since you expect the card to deliver a response data field (i.e. the data object 0x5A), you need to specify an Le field. Hence, a valid format would be
+------+------+------+------+------+
| CLA  | INS  | P1   | P2   | Le   |
+------+------+------+------+------+
| 0x00 | 0xCA | 0x00 | 0x5A | 0x00 |
+------+------+------+------+------+
You receive the status word 6E 00 in response to the command. The meaning of this status word is "class not supported". This indicates that commands with the CLA byte set to 0x00 are not supported in the current state. With some cards this also simply means that this combination of CLA and INS (00 CA) is not supported, even though this contradicts the definition in ISO/IEC 7816-4.
Overall, you can assume that your card does not support this command in its current execution state.
Assuming you are interacting with an EMV payment card, you typically need to select an application first. Your question does not indicate if you do this already, so I assume you don't do this right now. Selecting an application is done by sending a SELECT (by AID) command:
+------+------+------+------+------+-----------------+------+
| CLA  | INS  | P1   | P2   | Lc   | DATA            | Le   |
+------+------+------+------+------+-----------------+------+
| 0x00 | 0xA4 | 0x04 | 0x00 | 0xXX | Application AID | 0x00 |
+------+------+------+------+------+-----------------+------+
The value of the application AID, of course, depends on the card application and may be obtained by following the discovery procedures defined in the EMV specifications.
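As a sketch of how this maps onto the transmit() call from your question (the AID bytes below are placeholders, not a real application identifier; the real AID comes from the discovery procedure):

// Placeholder AID -- replace with the AID of the card application you discovered.
byte[] aid = new byte[] { (byte) 0xA0, (byte) 0x00, (byte) 0x00, (byte) 0x00, (byte) 0x00 };

byte[] selectApdu = new byte[5 + aid.length + 1];
selectApdu[0] = (byte) 0x00;                 // CLA
selectApdu[1] = (byte) 0xA4;                 // INS: SELECT
selectApdu[2] = (byte) 0x04;                 // P1: select by DF name (AID)
selectApdu[3] = (byte) 0x00;                 // P2: first or only occurrence
selectApdu[4] = (byte) aid.length;           // Lc
System.arraycopy(aid, 0, selectApdu, 5, aid.length);
selectApdu[selectApdu.length - 1] = (byte) 0x00;   // Le

int nResult = SmartCardInterface.transmit(nCardHandle, selectApdu, byteArrayResponse);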
Even after application selection, the GET DATA APDU command for EMV applications is defined in the proprietary class. Consequently, the CLA byte must be set to 0x80:
+------+------+------+------+------+
| CLA  | INS  | P1   | P2   | Le   |
+------+------+------+------+------+
| 0x80 | 0xCA | 0x00 | 0x5A | 0x00 |
+------+------+------+------+------+
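Translated back into the call from your question, that command would be sent like this (a sketch only; whether the card actually answers depends on the selected application):

// GET DATA for tag 5A with the proprietary class byte 0x80 and Le = 0x00
byte[] getDataApdu = new byte[] { (byte) 0x80, (byte) 0xCA, (byte) 0x00, (byte) 0x5A, (byte) 0x00 };
int nResult = SmartCardInterface.transmit(nCardHandle, getDataApdu, byteArrayResponse);
// The last two bytes of the response APDU are the status word (0x90 0x00 on success).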
Finally, even then, I'm not aware of any schemes where cards would allow you to retrieve the PAN through a GET DATA command. Usually, the PAN is only accessible through file/record based access. Since you did not reveal the specific type/brand of your card, it's impossible to tell what your card may or may not actually support.

To start:
The ISO 7816 standard consists of several parts.
When terminal device vendors refer to ISO 7816, they usually only confirm that the common physical characteristics (Part 1), dimensions and contacts (Part 2) and transmission protocol (Part 3) apply to the device reader.
The APDU commands and responses defined in ISO 7816 Part 4 (and a few other parts) are generic definitions and might not be fully supported by your smart card.
You need to learn about the card-terminal interaction layers related to your card type:
EMV is the customized version of ISO 7816 for payment cards.
The global card brands use their own customized specifications based on EMV and ISO 7816, for example Amex "AEIPS", Diners "D-PAS", MasterCard "M/Chip", Visa "VIS", etc. They are almost the same, with small differences in the supported commands, flows and lists of tags.
Unfortunately, most payment cards are not supposed to return the tag 0x5A value in response to a GET DATA APDU command. Usually you need to follow the payment procedure: at least SELECT the card application and READ the tag values from the SFI card records.
According to EMV, the GET DATA P1 P2 values should only be used for tags 0x9F36, 0x9F13, 0x9F17, or 0x9F4F.
Answering your questions:
What to send in the fifth parameter? What is the length of the response?
The fifth byte is known as "Le", the length of the expected data. You can try Le = 0x00.
If the APDU command is supported by the card, you may get SW1 SW2 = 0x6Cxx, where xx is the hexadecimal length of the requested data. You can then repeat the same command with the correct Le value.
For example, to read the PIN Try Counter:
Get Data (Tag = '9F 17')
Request : 80 CA 9F 17 00
Response: 6C 04
SW1 SW2: 6C 04 (SW_Warning Wrong length(Le))
Get Data (Tag = '9F 17')
Request : 80 CA 9F 17 04
Response: 9F 17 01 03 90 00
Data : 9F 17 01 03 // Tag + Length + Value
Tag 9F 17: Personal Identification Number (PIN) Try Counter : 03
SW1 SW2 : 90 00 (SW_OK)
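A minimal sketch of that 6C xx retry, reusing the transmit() call from the question. It assumes, purely for the sketch, that the return value is the number of response bytes written into the buffer; check what your terminal API actually returns:

byte[] cmd = new byte[] { (byte) 0x80, (byte) 0xCA, (byte) 0x9F, (byte) 0x17, (byte) 0x00 };
byte[] response = new byte[258];
int len = SmartCardInterface.transmit(nCardHandle, cmd, response);

// If SW1 is 0x6C, SW2 carries the exact length the card wants to return:
// repeat the same command with Le taken from SW2.
if (len >= 2 && (response[len - 2] & 0xFF) == 0x6C) {
    cmd[4] = response[len - 1];
    len = SmartCardInterface.transmit(nCardHandle, cmd, response);
}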
If the command were correct, would I see the information as plain text when casting the answer to a string, in place of the 6E 00 I see at the moment?
APDU commands and responses use byte encoding. According to the terminal API example you provided, you will get an array of bytes.
As a developer, you can transform the bytes into the desired format or use them as-is. Keep in mind that, according to the EMV specifications, the formats of tag data vary:
HEX (or binary), for example for numeric tags like amounts;
BCD, for example for date/time or some numbers like currency codes; the PAN is also BCD encoded;
strings in different charsets (ASCII, Unicode, ...), for example for Cardholder Name or Application Name;
etc.
Tag 0x5A, the Application Primary Account Number (PAN), is encoded as BCD and can be padded with 0xF in case of an odd PAN length.
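So, to answer the "plain text" part of the question: you would not cast the tag 0x5A value to a String directly; you would decode the BCD nibbles. A sketch (helper name made up; the input is the value part of the 5A TLV, without tag and length):

// Decode a BCD-encoded PAN value into a digit string, stopping at the 0xF padding nibble.
static String decodeBcdPan(byte[] value) {
    StringBuilder pan = new StringBuilder(value.length * 2);
    for (byte b : value) {
        int high = (b >> 4) & 0x0F;
        int low = b & 0x0F;
        if (high == 0x0F) break;      // padding
        pan.append((char) ('0' + high));
        if (low == 0x0F) break;       // padding
        pan.append((char) ('0' + low));
    }
    return pan.toString();
}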

Just answering how to READ your specific tag data, since the APDU format and application state behaviour are already covered above.
After you SELECT the application, you can issue GET PROCESSING OPTIONS. This is the actual start of the transaction. Here you will be returned a tag named AFL (Application File Locator). You need to parse this element and do multiple READ RECORD commands until you find the data.
The AFL is a set of four-byte entries (if you have two SFIs, there will be eight bytes of data).
The first byte denotes the SFI (its 5 most significant bits form the input to P2 of READ RECORD). The second byte denotes the first record to read (the input to P1 of READ RECORD). The third byte denotes the last record to read (you need to loop READ RECORD that many times). The fourth byte denotes the number of records involved in offline data authentication.
As you parse through the records, you will find your required data. In case you are not sure how to parse, copy the hex data and try it here.
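A rough Java sketch of that loop (the AFL bytes are assumed to have already been extracted from the GET PROCESSING OPTIONS response; sending the command and parsing the returned TLV data are left to your terminal API):

// afl holds the concatenated 4-byte AFL entries taken from the GPO response.
static void readAflRecords(byte[] afl) {
    for (int i = 0; i + 3 < afl.length; i += 4) {
        int sfi = (afl[i] & 0xF8) >> 3;          // SFI: 5 most significant bits of the first byte
        int firstRecord = afl[i + 1] & 0xFF;     // first record number (P1 of READ RECORD)
        int lastRecord = afl[i + 2] & 0xFF;      // last record number
        for (int rec = firstRecord; rec <= lastRecord; rec++) {
            byte p2 = (byte) ((sfi << 3) | 0x04);    // "SFI in high bits, record number in P1"
            byte[] readRecord = new byte[] { (byte) 0x00, (byte) 0xB2, (byte) rec, p2, (byte) 0x00 };
            // send readRecord via your transmit() call and search the returned TLV data for tag 5A
        }
    }
}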

Related

Talend : capture value from row1 and replace it in the entire column

I want to take UK from the 1st row and replace it in the entire COUNTRY column, without changing the values in ZONE. I have tried a regex expression in the expression builder but failed.
COUNTRY | ZONE
--------+-----
UK      | 12
AU      | 44
FR      | 21
GER     | 20
FR      | 02
First, your job design will chain the components together as in the original screenshot.
Second, using a tSampleRow, keep the range of lines you want (in your case the first line).
Third, store your wanted value in a global variable (a sketch follows below).
Finally, in the tMap, just read that global variable back for every row.
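In Talend the "global variable" step usually boils down to one line of Java, for example in a tJavaRow (or tSetGlobalVar) placed after the tSampleRow, and the tMap expression reads it back via globalMap. A sketch with example names (the flow name row2 and the key firstCountry are assumptions):

// In a tJavaRow after the tSampleRow that keeps only the first line:
globalMap.put("firstCountry", row2.COUNTRY);

// In the tMap expression for the output COUNTRY column:
// (String) globalMap.get("firstCountry")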
Here is the output (since I have 201 lines, I will have 201 UK values printed):
.---------.
|tLogRow_1|
|=-------=|
|mystring |
|=-------=|
|UK       |
|UK       |
|UK       |
|UK       |
|UK       |
'---------'
[statistics] disconnected
Job operation ended at 14:00 21/02/2022. [exit code = 0]

Java Parsing a String to Extract Data

I have to write a program but I have no idea where to start. Can anyone help me with an outline of how I should go about it? Please excuse my novice level of programming. I have provided the input and output of the program.
The trouble I'm facing is how to handle the input text. How should I store the input text to extract the data that I need to produce the output commands? Any guidance would be very helpful.
A little explanation of the input:
The output will start with APPLE1: CT= (whatever number is there for CT in line 4)
The following lines of the output will begin with "APPLES:"
I must include and extract the values for CR, PLANTING and RW in the output.
Wherever there is a non-zero or non-null value in the DATA portion, it will appear in the output.
When the program reads END, "APP;" and "APPLER:CT=(whatever number);" will be the last two commands.
INPUT:
<apple:ct=12;
FARM DATA
INPUT DATA
CT CH CR PLANTING RW DATA
12 YES PG -0 FA=1 R=CODE1 MM2 COA COB CI COC COD
0 0 1 0
COE RN COF COG COH
4 00 0
COI COJ D
0
FA=2 R=CODE2 112 COA COB CI COC COD
0 0 0 0
COE RN COF COG COH
4 00 0
COI COJ D
7
END
OUTPUT:
APPLE1:CT=12;
APPLES:CR=PG-0,FA=1,R=CODE1,RW=MM2,COC=1,COE=4;
APPLES:FA=2,R=CODE2,RW=112,COE=4,COI=7;
APP;
APPLER:CT=12;

DB2 UTF-8 XML C2 85 to new line conversion

We have a problem when saving UTF-8 encoded XML data to a table in DB2 9.7 LUW.
Table DDL:
CREATE TABLE DB2ADMIN.TABLE_FOR_XML
(
ID INTEGER NOT NULL,
XML_FIELD XML NOT NULL
)
The problem occurs in some rare examples with rare Unicode characters; we are using the Java JDBC DB2 driver.
For example, looking in an editor in normal mode rather than hex view (Notepad++), this strange character below (after "16.") is represented as NEL in a black square.
The input XML is in UTF-8 encoding and, when viewed in a hex editor, has these values:
00000010h: 31 36 2E 20 C2 85 42 ; 16. Â…B
After inserting into DB2, I presume that some kind of conversion occurs, because when selecting the data back these same characters are now:
00000010h: 31 36 2E 20 0D 0A 42 ; 16. ..B
C2 85 is transformed into 0D 0A, that is, a new line.
Another thing I noticed: although the XML content saved into the table had a header starting with
<?xml version="1.0" encoding="UTF-8"?>
after fetching the XML from DB2 the content started with
<?xml version="1.0" encoding="UTF-16"?>
Is there a way to force DB2 to store XML in UTF-8 without conversions? Fetching with XMLSERIALIZE didn't help:
SELECT XML_FIELD AS CONTENT1, XMLSERIALIZE(XML_FIELD AS CLOB(1M)) AS CONTENT2 FROM DB2ADMIN.TABLE_FOR_XML
In CONTENT2 there is no XML header, but the new line is still there.
This behaviour is standard for XML 1.1 processors. From XML 1.1, section 2.11:
the XML processor must behave as if it normalized all line breaks in external parsed entities (including the document entity) on input, before parsing, by translating [the single character #x85] to a single #xA character
Line ending type is one of the many details of a document that will be lost over a parse-and-serialise cycle (e.g. attribute order, whitespace in tags, numeric character references, ...).
It's slightly surprising that DB2's XML fields are using XML 1.1 since not much uses that revision of XML, but not super-surprising in that support for NEL (ancient, useless mainframe line ending character) is something only IBM ever wanted.
Is there a way to force DB2 to store XML in UTF-8 without conversions?
Use a BLOB?
If you need both native-XML-field functionality and to retain the exact original serialised form of a document then you'll need two columns.
(Are you sure you need to retain NEL line endings? Nobody usually cares about line endings, and these are pretty bogus.)
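A sketch of the BLOB approach (the table name, column name, and the variables xmlString, connection and id are made up for the example): store the exact UTF-8 bytes yourself, so DB2 never parses the document and the C2 85 bytes survive the round trip.

// Assumed table: ID INTEGER NOT NULL, XML_RAW BLOB(1M) NOT NULL
byte[] utf8Bytes = xmlString.getBytes(java.nio.charset.StandardCharsets.UTF_8);

try (java.sql.PreparedStatement ps = connection.prepareStatement(
        "INSERT INTO DB2ADMIN.TABLE_FOR_XML_RAW (ID, XML_RAW) VALUES (?, ?)")) {
    ps.setInt(1, id);
    ps.setBytes(2, utf8Bytes);    // no XML parsing, so no line-ending normalisation
    ps.executeUpdate();
}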
As I don't generally need non-readable characters, before saving the XML string to DB2 I decided to clean the string of x'C285' (code point 133) and of 4-byte UTF-8 characters, just in case.
I found a similar example (How to replace/remove 4(+)-byte characters from a UTF-8 string in Java?) and adjusted it:
public static final String LAST_3_BYTE_UTF_CHAR = "\uFFFF";
public static final String REPLACEMENT_CHAR = "\uFFFD";

public static String toValid3ByteUTF8String(String line) {
    final int length = line.length();
    StringBuilder b = new StringBuilder(length);
    for (int offset = 0; offset < length; ) {
        final int codepoint = line.codePointAt(offset);
        if (codepoint > LAST_3_BYTE_UTF_CHAR.codePointAt(0)) {
            // replace characters outside the BMP (4-byte UTF-8)
            b.append(REPLACEMENT_CHAR);
        } else if (codepoint == 133) {
            // NEL, i.e. x'C285' in UTF-8
            b.append(REPLACEMENT_CHAR);
        } else {
            if (Character.isValidCodePoint(codepoint)) {
                b.appendCodePoint(codepoint);
            } else {
                b.append(REPLACEMENT_CHAR);
            }
        }
        offset += Character.charCount(codepoint);
    }
    return b.toString();
}

Parse data from non structured file

Hi, I would like to know how I can parse data from a non-structured file. The data is laid out like a table, but the columns are separated only by spaces. Here is an example:
DEST ULD ULD XXX NEXT NEXT XXX XXX
XXX/ XXX TYPE/ XXX XXX PCS WGT
BULK NBR SUBTYPE NBR DEST
XXX BULK BF 0
XXX BULK BB 39
XXX BULK BB 1
XXX BULK BF 0
XXX BULK BB 0
I can't use a delimiter such as useDelimiter("\\s{2,9}") because the number of spaces changes between columns...
Any idea?
What you have is called fixed-width (fixed-length) format. In some ways it is easier to parse. See: What's the best way of parsing a fixed-width formatted file in Java?
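A minimal sketch of the fixed-width approach; the file name and column offsets below are invented for illustration, and you would measure the real ones from the header lines of your report:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class FixedWidthParser {

    // Extract a column, tolerating lines shorter than the declared layout.
    static String field(String line, int start, int end) {
        if (line.length() <= start) return "";
        return line.substring(start, Math.min(end, line.length())).trim();
    }

    public static void main(String[] args) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader("uld_report.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Example offsets only -- adjust to the real column positions in your file.
                String dest = field(line, 0, 5);
                String uldType = field(line, 5, 13);
                String subtype = field(line, 13, 18);
                String pcs = field(line, 18, 24);
                System.out.println(dest + " | " + uldType + " | " + subtype + " | " + pcs);
            }
        }
    }
}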

How to parse custom text file for analysis

I have a logfile like this:
1352711989.822313 SENDING
SR packet
SSRC 3760482201
NTP timestamp: 1352711989.822293
RTP timestamp: 30163617
Packets sent: 17
Octets sent: 85
RR block 1
SSRC 2520738017
Fraction lost: 0
Packets lost: 0
Ext. high. seq. nr: 64175
Jitter: 2947
LSR: 1035041236
DLSR: 284839
RR block 2
SSRC 2158728709
Fraction lost: 14
Packets lost: 43
Ext. high. seq. nr: 54178
Jitter: 394
LSR: 1035176766
DLSR: 149303
RR block 3
SSRC 100700967
Fraction lost: 36
Packets lost: 120
Ext. high. seq. nr: 45647
Jitter: 2365
LSR: 1035002733
DLSR: 323342
SDES Chunk:
SSRC: 3760482201
And I would like to parse every block into an object, so I made some classes in Java.
Now, is there a way to smoothly go through this text, put everything in the right variable, and do that for the whole text file?
So at the end I would have a list of objects that contains this information.
Since it looks like everything here is name-value pairs, the easiest thing to do would be to iterate over the lines of the file and add each line to a map. If the block names are important (e.g. RR block 2), you can keep a list that you push the current name onto when a line has no value, then pop it off once you get to the next piece with the same indentation.
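A rough sketch of the simple map-per-block idea (the file name and the header-detection rule are assumptions you would adapt to your real log format):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RtcpLogParser {
    public static void main(String[] args) throws IOException {
        List<Map<String, String>> blocks = new ArrayList<>();
        Map<String, String> current = null;

        try (BufferedReader reader = new BufferedReader(new FileReader("rtcp.log"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String trimmed = line.trim();
                if (trimmed.isEmpty()) continue;
                if (!trimmed.contains(":")) {
                    // Lines without "name: value" (e.g. "SR packet", "RR block 1") start a new block.
                    current = new LinkedHashMap<>();
                    current.put("block", trimmed);
                    blocks.add(current);
                } else if (current != null) {
                    // "name: value" lines belong to the current block.
                    String[] parts = trimmed.split(":", 2);
                    current.put(parts[0].trim(), parts[1].trim());
                }
            }
        }
        // Map each block to your own classes from here.
        blocks.forEach(System.out::println);
    }
}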
