How do I use a User Data Header (UDH) in SMPP? I want to send an SMS to a specific port.
I am using OpenSMPP as my project library.
Follow these steps to send UDH over SMPP:
Set the UDHI bit to 1 in the esm_class field. The simplest way to do this is esm_class = esm_class | 0x40.
Put the UDH at the beginning of the short_message field. Read on for a quick summary, and see the references for the details of how to encode a UDH.
Here is how to encode UDH:
The first byte of the UDH is the length (in bytes) of the remaining part of the UDH. Since you may not know this up front, you may have to fill it in after building the rest of the header.
Then follow one or more IEs (Information Elements). Each IE has 3 parts:
First byte: IEI (Information Element Identifier). Identifies the element you want to encode. There is an established set of IEIs.
Second byte: IEIDL (IEI Data Length). The number of bytes that hold the data part. Each established IEI has a fixed value for this field.
Third byte onward: IEID (IEI Data). Holds the data itself. Each established IEI has a fixed format for its data.
Count the total number of bytes consumed by all IEs and put the result in the first byte of the UDH.
For sending an SMS to a port, you can use IEI 0x04 (8-bit port addressing) or 0x05 (16-bit port addressing). I have only seen 0x05 being used. A sketch of what this looks like in code follows below.
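Here is a minimal sketch in plain Java of the two steps above, assuming 16-bit application port addressing (IEI 0x05) and an 8-bit-safe text payload. The class name, port number, and payload are purely illustrative, and the OpenSMPP-specific setters for putting esm_class and short_message onto a SubmitSM PDU are not shown because they vary by version:

import java.nio.charset.StandardCharsets;

public class UdhPortExample {

    // Build the short_message bytes: UDH for 16-bit port addressing followed by the payload.
    static byte[] buildShortMessage(int destPort, int srcPort, String body) {
        byte[] udh = new byte[] {
            0x06,                                     // UDHL: 6 more UDH bytes follow
            0x05,                                     // IEI: application port addressing, 16-bit
            0x04,                                     // IEIDL: 4 bytes of IE data
            (byte) (destPort >> 8), (byte) destPort,  // destination port, big-endian
            (byte) (srcPort >> 8), (byte) srcPort     // originator port, big-endian
        };
        byte[] text = body.getBytes(StandardCharsets.ISO_8859_1);
        byte[] shortMessage = new byte[udh.length + text.length];
        System.arraycopy(udh, 0, shortMessage, 0, udh.length);
        System.arraycopy(text, 0, shortMessage, udh.length, text.length);
        return shortMessage;
    }

    public static void main(String[] args) {
        byte esmClass = 0;
        esmClass |= 0x40;                                       // set the UDHI bit
        byte[] shortMessage = buildShortMessage(2948, 0, "hi"); // 2948 is just an example port
        // Set esmClass and shortMessage on your OpenSMPP SubmitSM PDU here;
        // the exact setter names depend on your OpenSMPP version, so they are not shown.
    }
}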
References
3GPP specification: 23.040
User Data Header
I'm new to VoIP. I want to create an application that streams audio using a client/server architecture over RTP. Different APIs are available, but I need to understand it at the core level. I have studied the RFC. Can anybody suggest how to build an audio RTP packet in Java, send it to the server, and unpack it on the receiving side?
Thanks in advance.
Create an empty Java class.
Add members for all the fields in the RTP header. Use boolean for the single-bit fields. For the numerical fields, be mindful of how many bits you need, i.e. use int for SSRC and timestamp, short for the sequence number, byte for the payload type, etc. CSRC should be an array (or ArrayList or whatever) of ints. The audio payload should be a byte array.
Packets are just arrays of bytes, so you need a ToBytes() method that will output a packet byte[], and a constructor that takes a byte[] as a parameter. To send a packet, call ToBytes() and put the result in a UDP packet.
In your ToBytes() method, create a byte array that's 12 bytes, plus an additional 4 bytes for each CSRC, plus however many bytes are in your audio payload.
Single-bit values need to be set using the bitwise OR operator. For example, the marker bit is the first bit of the second byte, so to set it:
if (marker)
{
    bytes[1] = (byte) (bytes[1] | 0x80); // 0x80 is 1000 0000; the cast is needed because | promotes to int
}
To set the values that are ints or shorts, you'll need to convert the value to a network-order (big-endian) byte array and then set it in the buffer using System.arraycopy. I'll leave it up to you to find out how to create the network-order byte array.
For the constructor that takes a byte[], you'll need to do the above process in reverse. To check the value of a single bit, use the AND operator, e.g.:
marker = (bytes[1] & 0x80) != 0; // parenthesize the mask and test against 0, not 1
Either in this class or a helper class, you'll probably need methods to help set the timestamp based on the packet count and sample rate. For example, if the payload is G.711, it's 8000 samples/second, which means a packet is sent every 20 ms with a 160-byte payload, so the timestamp will increase by 160 with every packet.
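Putting the pieces above together, here is a minimal sketch of such a class, assuming version 2, no padding, no extension and no CSRCs (add the 4-bytes-per-CSRC handling from the earlier step if you need it). The field and method names are illustrative, not from any particular library; ByteBuffer is big-endian by default, which takes care of the network byte order:

import java.nio.ByteBuffer;

public class RtpPacket {
    boolean marker;
    byte payloadType;   // 7 bits
    short sequence;
    int timestamp;
    int ssrc;
    byte[] payload;

    // The ToBytes() method described above (here toBytes, following Java naming).
    byte[] toBytes() {
        ByteBuffer buf = ByteBuffer.allocate(12 + payload.length); // big-endian by default
        buf.put((byte) 0x80);                                      // V=2, P=0, X=0, CC=0 (no CSRCs in this sketch)
        byte second = (byte) (payloadType & 0x7F);
        if (marker) {
            second |= 0x80;                                        // marker is the top bit of the second byte
        }
        buf.put(second);
        buf.putShort(sequence);                                    // network byte order
        buf.putInt(timestamp);
        buf.putInt(ssrc);
        buf.put(payload);
        return buf.array();
    }

    // The reverse of toBytes(): parse a received byte[] back into fields.
    static RtpPacket fromBytes(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        RtpPacket p = new RtpPacket();
        buf.get();                                                 // first byte (V/P/X/CC) ignored in this sketch
        byte second = buf.get();
        p.marker = (second & 0x80) != 0;
        p.payloadType = (byte) (second & 0x7F);
        p.sequence = buf.getShort();
        p.timestamp = buf.getInt();
        p.ssrc = buf.getInt();
        p.payload = new byte[buf.remaining()];
        buf.get(p.payload);
        return p;
    }
}

To send, wrap the result of toBytes() in a DatagramPacket; to receive, pass the datagram's data to fromBytes().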
I'm learning about Flash (AMF) and Java (BlazeDS) using a project I found on the internet, and I noticed that the server is receiving the data below via a socket:
When I tried to use Amf0Input/Amf3Input to return the object, I got an error saying it does not recognize this type of packet. Does anyone know which library I should use to decode this message?
The packet you got seems to be a length-prefixed AMF3 AmfObject.
In general, whenever you see a string that follows the usual naming convention of fully qualified class names (i.e. like reverse domains), chances are you're dealing with an object instance [1].
Looking at the first few bytes, you see 0x00 repeated three times. If we assume AMF3, this would be 3 undefineds, followed by an object with type marker 0x3e - which does not exist. If we instead assume AMF0, we would first have a number (0x00 type marker, followed by 8 bytes of data), followed by an object with type marker 0x6d - which again does not exist.
Thus, the data you got there can't be AMF payload alone. However, if we interpret the first 4 bytes as network byte order (i.e. big endian) integer, we get 0x3E = 62 - which is exactly the length of the remaining data.
Assuming then that the first 4 bytes are just a length prefix, the next byte must be a type marker. In AMF3, 0x0a indicates an object instance. So let's just try to decode the remaining data (section 3.12 of the AMF3 spec, if you want to follow along [2]): the next byte must indicate the object traits. 0x23 means we have a direct encoding of the traits in that byte - as opposed to a reference to previously transmitted traits.
Since the fourth bit (counted from least significant first) is 0, the object is not dynamic - as in, an instance of some class, not just a plain object instance. The remaining bits, shifted to the right by 4, indicate the number of sealed properties this instance has, which is 2.
Next, we expect the classname, encoded as UTF-8-vr - i.e. length prefixed (when shifted right by 1), UTF-8 encoded string. The next byte is 0x1d, which means the length is 0x1d >> 1 = 14. The next 14 bytes encode common.net.APC, so that's the instance's class name.
Afterwards, we have the two sealed property names, also encoded as UTF-8-vr. The first one has a prefix of 0x15, so a length of 10 - giving us parameters, followed by a prefix of 0x19 (length 12) and payload functionName.
After this, you have the values corresponding to these sealed properties, in the same order. The first one has a type marker of 0x09, which corresponds to an array. The length marker is 0x03, which means the array contains one element, and the next byte is 0x01, indicating we have no associative members. The only element itself has a type marker of 0x04, meaning it's an integer - in this case with value 0.
This is followed by a type marker of 0x06 - a string, with length 14. That string - you probably have guessed it by now - is syncServerTime.
So, in summary, your packet is a length-prefixed instance of common.net.APC, with its parameters attribute set to [0] and the functionName attribute set to "syncServerTime".
[1]: The only other alternatives are a vector of object instances - which would require a type marker of 0x10 somewhere - or an AMF0 packet. In the case of an AMF0 packet, you'd also have to have a URI-style path somewhere in the packet, which is not the case here.
[2]: Note that the EBNF given at the end of the section is not exactly correct - neither syntactically nor semantically ...
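If you want to decode such packets programmatically rather than by hand, something along these lines should work with the BlazeDS classes you already mentioned, assuming the message really is just a 4-byte length prefix followed by an AMF3 body. The Amf3Input usage below follows the pattern commonly shown for BlazeDS; double-check it against the version you have:

import flex.messaging.io.SerializationContext;
import flex.messaging.io.amf.Amf3Input;

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;

public class Amf3Decode {

    // rawBytes is the full message as read from the socket: 4-byte length prefix + AMF3 body.
    static Object decode(byte[] rawBytes) throws Exception {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(rawBytes));
        int length = in.readInt();                 // the big-endian length prefix (0x3E above)
        byte[] amfBody = new byte[length];
        in.readFully(amfBody);

        Amf3Input amf = new Amf3Input(SerializationContext.getSerializationContext());
        amf.setInputStream(new ByteArrayInputStream(amfBody));
        // If common.net.APC is not on the classpath or registered as a class alias,
        // BlazeDS will typically hand back a generic ASObject with the same properties.
        return amf.readObject();
    }
}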
In Java, I am polling a WebSphere MQ message queue, expecting a message of String format that is composed entirely of XML. Part of this XML contains the bytes of a file attachment (any format: PDF, image, etc.), which is then converted to a BLOB for storage in an Oracle DB and later retrieval.
The issue I am having is that example files with a known size end up in my DB with a different size. I am not adding anything to the bytes (as far as I know), and the size appears to be larger directly after I get the message. I cannot determine whether I am somehow adding information at retrieval, during the conversion from bytes to String, or whether this is happening on the front end when the sender populates the message.
My code for retrieving the message:
inboundmsg = new MQMessage();
inboundmsg = getMQMsg(FrontIncomingQueue, gmo);
strLen = inboundmsg.getMessageLength();
strData = new byte[strLen];
ibm_id = inboundmsg.messageId;
inboundmsg.readFully(strData);
inboundmsgContents = new String(strData);
I see a file known to have size 21K go to 28K. A coworker has suggested that charset/encoding may be the issue. I do not specify a charset in the constructor call to String above, nor in any of the calls to getBytes when converting back from a string (for other unrelated uses). My default charset is ISO-8859-1. When speaking with the vendor who is initiating the message transfer, I asked her what charset she is using. Her reply:
"I am using the File.WriteAllBytes method in C# - I pass it the path of my file and it writes it to a byte[]. I could not find any documentation on the MSDN about what encoding the function uses. The method creates a byte array and from what I have read online this morning there is no encoding, its just a sequence of 8bit unsigned binary data with no encoding."
Another coworker has suggested that perhaps the MQ charset is the culprit, but my reading of the documentation suggests that MQ charset only affects the behavior of readString, readLine, & writeString.
If I circumvent MQ totally, and populate a byte array using a File Input Stream and a local file, the file size is preserved all the way to Db store, so this definitely appears to be happening at or during message transfer.
The problem is evident in the wording of the question. You describe a payload that contains arbitrary binary data, and you are also trying to process it as a string. These two things are mutually exclusive.
This appears to be complicated by the vendor not providing valid XML. For example, consider the attachment:
<PdfBytes>iVBORw0KGgoAAAANS … AAAAASUVORK5CYII=</PdfBytes>
If the attachment legitimately contains any XML special character such as < or > then the result is invalid XML. If it contains null bytes, some parsers assume they have reached the end of the text and stop parsing there. That is why you normally see any attachment in XML either converted to Base64 for transport or else converted to hexadecimal.
The vendor describes writing raw binary data which suggests that what you are receiving contains non-string characters and therefore should not be sent as string data. If she had described some sort of conversion that would make the attachment XML compliant then string would be appropriate.
Interestingly, a Base64 encoding results in a payload that is about 1.33 times larger than the original. Coincidence that 21k * 1.33 ≈ 28k? One would think that what is received is actually the binary payload in Base64 format. That actually would be parseable as a string and accounts for the difference in file sizes. But it isn't at all what the vendor described doing. She said she's writing "8bit unsigned binary data with no encoding" and not Base64.
So we expect it to fail but not necessarily to result in a larger payload. Consider that WebSphere MQ receiving a message in String format will attempt to convert it. If the CCSID of the message differs from that requested on the GET then MQ will attempt a conversion. If the inbound CCSID is UTF-16 or any double-byte character set, certain characters will be expanded from one to two bytes - assuming the conversion doesn't hit invalid binary characters that cause it to fail.
If the two CCSIDs are the same then no conversion is attempted in the MQ classes but there is still an issue in that something has to parse an XML payload that is by definition not valid and therefore subject to unexpected results. If it happens that the binary payload does not contain any XML special characters and the parser doesn't choke on any embedded null bytes, then the parser is going to rather heroic lengths to forgive the non-compliant payload. If it gets to the </PdfBytes> tag without choking, it may assume that the payload is valid and convert everything between the <PdfBytes>...</PdfBytes> tags itself. Presumably to Base64.
All of this is conjecture, of course. But in a situation where the payload is unambiguously not string data any attempt to parse it as string data will either fail outright or produce unexpected and potentially bizarre results. You are actually unfortunate that it doesn't fail outright because now there's an expectation that the problem is on your end when it clearly appears to be the vendor's fault.
Assuming that the content of the payload remains unchanged, the vendor should be sending bytes messages and you should be receiving them as bytes. That would at least fix the problems MQ is having reconciling the expected format with the actual received format, but it would still be invalid XML. If it happens to work with the vendor sending binary data in a message typed as String while you process it as bytes, then count your blessings and use it that way, but don't count on it being reliable. Eventually you'll get a payload with an embedded XML special character and then you will have a very bad day.
Ideally, the vendor should know better than to send binary data in an XML payload without first converting it to a string representation, and it is up to them to fix it so that it is compliant with the XML spec and reliable.
Please see this MSDN page: XML, SOAP, and Binary Data
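For reference, if the vendor does switch to an XML-compliant payload, the Base64 round trip on both sides is short (a sketch assuming Java 8's java.util.Base64; XML parsing and the MQ calls are not shown):

import java.util.Base64;

public class AttachmentCodec {

    // Sender side (what the vendor would do): make the binary attachment XML-safe
    // by Base64-encoding it before it is placed inside <PdfBytes>...</PdfBytes>.
    static String encodeAttachment(byte[] fileBytes) {
        return Base64.getEncoder().encodeToString(fileBytes);
    }

    // Receiver side: take the text content of the <PdfBytes> element (XML parsing
    // not shown) and decode it back to the original bytes before storing the BLOB.
    static byte[] decodeAttachment(String pdfBytesText) {
        return Base64.getDecoder().decode(pdfBytesText);
    }
}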
I have a multi-threaded client-server application that uses Vector<String> as a queue of messages to send.
I need, however, to send a file using this application. In C++ I would not really worry, but in Java I'm a little confused about converting anything to a string.
Java has 2-byte characters. When you see a Java string in hex, it usually looks like:
00XX 00XX 00XX 00XX
Unless some Unicode characters are present.
Java also uses Big endian.
These facts make me unsure whether - and if so, how - to add the file to the queue. The preferred format of the message would be:
-- Headers --
2 bytes Size of the block (excluding header, which means first four bytes)
2 bytes Data type (text message/file)
-- End of headers --
2 bytes Internal file ID (to avoid referring by filenames)
2 bytes Length of filename
X bytes Filename
X bytes Data
You can see I'm already using 2 bytes for all numbers to avoid some horrible operations required when getting 2 numbers out of one char.
But I have really no idea how to add the file data correctly. For numbers, I assume this would do:
StringBuilder packetData = new StringBuilder();
packetData.append((char) packetSize);
packetData.append((char) PacketType.BINARY.ordinal()); //Just convert enum constant to number
But the file is really a problem. If I have described anything wrongly regarding the Java data types, please correct me - I'm a beginner.
Does it have to send only Strings? If it does, then you really need to encode the file using Base64 or similar. The best approach overall would probably be to send it as raw bytes. Depending on how difficult it would be to refactor your code to support byte arrays instead of just Strings, that may be worth doing.
To answer the String question I just saw pop up in the comments: there's a getBytes method on String. A sketch of the Base64 round trip is below.
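If the queue really has to stay Vector<String>, the Base64 round trip looks roughly like this (a sketch assuming Java 8+ for java.util.Base64; the file paths are just examples):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class FileInQueueExample {
    public static void main(String[] args) throws Exception {
        // Sender: read the file and turn its bytes into a queue-safe String.
        byte[] fileBytes = Files.readAllBytes(Paths.get("example.pdf"));  // hypothetical path
        String encoded = Base64.getEncoder().encodeToString(fileBytes);
        // messageQueue.add(encoded);  // goes into the existing Vector<String>

        // Receiver: decode the String back to the original bytes.
        byte[] decoded = Base64.getDecoder().decode(encoded);
        Files.write(Paths.get("received.pdf"), decoded);                  // hypothetical path
    }
}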
For the socket question, see:
Java sending and receiving file (byte[]) over sockets
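If you do refactor toward raw bytes, the frame layout from the question can be built with a DataOutputStream, which writes big-endian just like Java's chars (a sketch; the type value 1 for "file" is only an example, not something defined in the question):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class FrameBuilder {

    // Builds one frame in the layout described in the question:
    // [2 bytes size][2 bytes type][2 bytes file ID][2 bytes filename length][filename][data]
    static byte[] buildFileFrame(int fileId, String filename, byte[] data) throws IOException {
        byte[] nameBytes = filename.getBytes(StandardCharsets.UTF_8);
        int blockSize = 2 + 2 + nameBytes.length + data.length;  // everything after the first four bytes

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);      // DataOutputStream writes big-endian
        out.writeShort(blockSize);         // size of the block, excluding the 4-byte header
        out.writeShort(1);                 // data type; 1 = file here, purely an example value
        out.writeShort(fileId);            // internal file ID
        out.writeShort(nameBytes.length);  // length of filename
        out.write(nameBytes);              // filename
        out.write(data);                   // raw file data
        return bytes.toByteArray();
    }
}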
When I read this link: http://docs.oracle.com/javase/7/docs/api/java/util/zip/CRC32.html
There is a method to get a CRC32 value from an array of bytes. However, when I send this CRC32 value to the receiver, how can the receiver check whether the value is valid?
The idea is that the receiver will run the same algorithm on the data you sent through, and it should get the same CRC32 value. If not, the data (or CRC) has been corrupted somehow.
In other words, let's say your payload is "Pax is an incredibly handsome guy" and you calculate the CRC as 42.
You send through both that payload and the CRC to the receiving end and it performs the same actions on the payload, also coming up with 42.
That means the content of that payload is correct. But not necessarily the veracity :-)
That's what CRC stands for: cyclic redundancy check.
Keep in mind that CRCs tend to be effective against accidental damage to the payload, but unless some secret information is used to generate the check, they can be faked.
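A minimal sketch of that round trip with java.util.zip.CRC32 (the class from the link in the question); here the "received" bytes are simply the same array, so the check passes:

import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class Crc32Check {

    static long crcOf(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] payload = "Pax is an incredibly handsome guy".getBytes(StandardCharsets.UTF_8);

        long sentCrc = crcOf(payload);      // sender computes this and transmits it with the payload

        long receivedCrc = crcOf(payload);  // receiver recomputes over the bytes it received
        boolean intact = (receivedCrc == sentCrc);
        System.out.println("Payload intact: " + intact);  // false would indicate corruption
    }
}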