Downloading XML data on very slow 2G internet - Java

I have successfully programmed an app that takes traces of a lot of system services (GPS location, network location, WiFi, neighbouring cell info, sensors, ...) every 10 seconds, and it works very well. But as soon as I restrict the internet on my phone to 2G only and turn off WiFi, the app still works but starts to lag.
I have tried to find out where the problem comes from, and I have noticed that it comes from these lines of code:
XmlPullParser receivedData = XmlPullParserFactory.newInstance().newPullParser();
receivedData.setInput(xmlUrl.openStream(), null);
return receivedData;
As soon as I delete this couple of lines in my activity, the app works without lagging, but since they are essential for my app I would very much like to keep them (they already work) without causing lags.
Can anyone please help me?
I have printed the parsed result from the XML file and it is correct, so my only problem here is the lagging of the app.
A typical XML file that I would be dealing with looks like this:
<rsp stat="ok">
<cell lat="49.88415658974359" lon="8.637537076923078" mcc="262" mnc="7" lac="41146"
cellid="42404" averageSignalStrength="-79" samples="39" changeable="1"/>
</rsp>

2G is a really slow connection. Even worse is the "warm up" of the antenna: it may take up to 30 seconds before the first bit is received. (And there is not really anything you can do against this, because it is all about physics.)
So the only thing you can do is load the file in a background thread. This will keep the app responsive (if you don't need the data immediately).
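For example, on Android something like this might work (an untested sketch, assuming xmlUrl is a URL field of your activity and that you only need the lat/lon attributes from the sample XML; adapt the extraction to whatever you actually parse):
// Sketch only: download and parse entirely off the UI thread with an AsyncTask.
// Needs android.os.AsyncTask, org.xmlpull.v1.*, java.io.BufferedInputStream.
new AsyncTask<Void, Void, String>() {
    @Override
    protected String doInBackground(Void... ignored) {
        try {
            XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
            parser.setInput(new BufferedInputStream(xmlUrl.openStream()), null);
            int event;
            while ((event = parser.next()) != XmlPullParser.END_DOCUMENT) {
                if (event == XmlPullParser.START_TAG && "cell".equals(parser.getName())) {
                    return parser.getAttributeValue(null, "lat") + ","
                            + parser.getAttributeValue(null, "lon");
                }
            }
        } catch (Exception e) {
            // a slow or dropped 2G connection ends up here; decide whether to retry
        }
        return null;
    }

    @Override
    protected void onPostExecute(String latLon) {
        // back on the UI thread: store the trace / update the UI here
    }
}.execute();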

Maybe explicitly use a BufferedInputStream. Instead of
XmlPullParser receivedData = XmlPullParserFactory.newInstance().newPullParser();
receivedData.setInput(xmlUrl.openStream(), null);
try
XmlPullParser receivedData = XmlPullParserFactory.newInstance().newPullParser();
receivedData.setInput(new BufferedInputStream(xmlUrl.openStream()), null);
Maybe, maybe compression
As you know, in HTTP a browser may declare in its request headers that it can decompress compressed data, and the server may then send a compressed version of the content. This puts less load on the server side and may speed up the transfer, depending on the data.
You can do the same yourself.
For an external site you do not control, you can try it. Send the request header
Accept-Encoding: gzip
and you are lucky if you receive the response header:
Content-Encoding: gzip
Doing both sides yourself means wrapping the streams:
outputStream = new GZIPOutputStream(outputStream);
inputStream = new GZIPInputStream(inputStream);
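For plain HTTP that could look roughly like this (a sketch; whether it helps depends entirely on whether the server honours the header):
// Ask for gzip and only unwrap if the server actually compressed the response.
// Uses java.net.HttpURLConnection and java.util.zip.GZIPInputStream.
HttpURLConnection conn = (HttpURLConnection) xmlUrl.openConnection();
conn.setRequestProperty("Accept-Encoding", "gzip");
InputStream in = conn.getInputStream();
if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
    in = new GZIPInputStream(in);
}
XmlPullParser receivedData = XmlPullParserFactory.newInstance().newPullParser();
receivedData.setInput(new BufferedInputStream(in), null);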
Saving memory
Making equal string instances unique reduces memory and might help, even though it costs considerable time itself. String.intern() is a bad idea here, because on older JVMs interned strings go into the permanent generation (PermGen), which is hard to reclaim. One might use a
private Map<String, String> identityMap = new HashMap<>();

public String unique(String s) {
    if (s.length() >= 30) {
        return s; // long strings are rarely repeated; don't pool them
    }
    String t = identityMap.get(s);
    if (t == null) {
        t = s;
        identityMap.put(s, t); // the first occurrence becomes the canonical instance
    }
    return t;
}
The hope is that processing becomes faster.

Related

URL Encoding Issue - Special Characters Cause Webserver Crash

I have been searching for information about this, but since I'm new to web development the answers I'm getting only confuse me more.
Basically, I have a web server running on a Java modem (which uses a Java 1.3 IDE) which will handle requests. These requests were processed fine as long as I kept them simple.
http://87.103.87.59/teste.html?a=10&b=10
This request is normally processed.
However, with the real request, my web server crashes.
http://5.43.52.4/api.html?ATCOMMAND=AT%5EMTXTUNNEL=SMS,0035111111111,string sending test
The problem seems to be due to two things: the "%" character and the unencoded spaces in string sending test.
To make everything clear, the handlers I'm using are these:
public InputStream is = null;
private OutputStream os = null;
private byte buffer[] = new byte[1024]; // size assumed; the original snippet did not show one
String streamAux = "";

is = socketX.openInputStream();
os = socketX.openOutputStream();
if ((is.available() > 0) || (blockX == true))
{
    // Read data sent from remote client
    numDadosLidos = is.read(buffer);
    for (int i = 0; i < numDadosLidos; i++)
        streamAux = streamAux + (char) buffer[i]; // where the url will be stored
Basically I will need those parameters so I can use them to operate my Java device, so I think I'll need to do some sort of encoding, but there's a lot of information that I can't comprehend, and my 1.3 IDE is kind of keeping me stuck.
I apologize in advance for any newbie behaviour.
Hope you can lend me a hand,
Thanks
For those who are interested, I basically worked around the issue by forcing the message to be sent using the '-' character. It doesn't solve the issue; it simply answers the question with a "not-ideal" method.
Still totally interested if someone figures this one out.
Thanks.
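If java.net.URLDecoder isn't available on that runtime (a guess; a Java 1.3 / J2ME-style modem stack may not ship it), a minimal hand-rolled percent decoder for the query string could look like this sketch (single-byte characters only, no UTF-8 handling):
// Decodes "%5E" -> '^' and '+' -> ' '; everything else is copied through unchanged.
public static String percentDecode(String s) {
    StringBuffer out = new StringBuffer(s.length());
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        if (c == '+') {
            out.append(' ');
        } else if (c == '%' && i + 2 < s.length()) {
            out.append((char) Integer.parseInt(s.substring(i + 1, i + 3), 16));
            i += 2;
        } else {
            out.append(c);
        }
    }
    return out.toString();
}
Note too that the unencoded spaces in string sending test need to be sent as %20 (or '+') by the client, otherwise the request line itself is malformed HTTP.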

Reading from JavaMail takes a long time

I use JavaMail to read mail from an Exchange account using the IMAP protocol. The mails are in plain format and their contents are XML.
Almost all of these mails are small (usually under 100Kb). However, sometimes I have to deal with large mails (about 10Mb-15Mb). For example, yesterday I received an email of 13Mb. It took more than 50 minutes just to read it. Is that normal? Is there a way to improve its performance?
The code is:
Session session = Session.getInstance(System.getProperties());
Store store = session.getStore("imap");
store.connect(host, user, passwd);
Folder inbox = store.getFolder("INBOX");
inbox.open(Folder.READ_WRITE);
Message[] messages = inbox.search(new FlagTerm(new Flags(Flags.Flag.SEEN), false));
for (int i = 0; i < messages.length; i++) {
    Object contents = messages[i].getContent(); // Here it takes 50 min on a 13Mb mail
    // ...
}
The method that takes such a long time is messages[i].getContent(). What am I doing wrong? Any hint?
Thanks a lot and sorry for my English! ;)
I finally solved this issue and wanted to share.
The solution, at least the one that worked for me, was found on this site: http://www.oracle.com/technetwork/java/faq-135477.html#imapserverbug
So my original code from the first post becomes this:
Session session = Session.getInstance(System.getProperties());
Store store = session.getStore("imap");
store.connect(host, user, passwd);
Folder inbox = store.getFolder("INBOX");
inbox.open(Folder.READ_WRITE);
// Convert to MimeMessage after the search
MimeMessage[] messages = (MimeMessage[]) inbox.search(new FlagTerm(new Flags(Flags.Flag.SEEN), false));
for (int i = 0; i < messages.length; i++) {
    // Create a new message using the MimeMessage copy constructor
    MimeMessage cmsg = new MimeMessage(messages[i]);
    // Use this copy to read the contents
    Object obj = cmsg.getContent();
    // ...
}
The trick is to create a new MimeMessage with the MimeMessage copy constructor and read the contents of that copy instead of the original message.
Note that such an object is not really connected to the server, so any changes you make to it, like setting flags, won't take effect. Any change to the message has to be done on the original message.
To sum up: this solution works for reading large plain text mails (up to 15Mb) from an Exchange server over the IMAP protocol. The time to read a 13Mb mail dropped from 51-55 minutes to 9 seconds. Unbelievable.
Hope this helps someone and sorry for English mistakes ;)
It will always be messages[i].getContent() that is the slowest part of the code, because an IMAP server does not normally cache this part of the message data. Nevertheless, you can try this:
FetchProfile fp = new FetchProfile();
fp.add(FetchProfile.Item.ENVELOPE);
fp.add(FetchProfile.Item.FLAGS);
fp.add(FetchProfile.Item.CONTENT_INFO);
fp.add("X-mailer");
inbox.fetch(messages, fp); // after the search, prefetch these items for all messages in one go
So after you have specified the fetch profile, you do your search and then the fetch of the messages.
Basically the concept is that the IMAP provider fetches the data for a message from the server only when necessary. (The javax.mail.FetchProfile is used to optimize this). The header and body structure information, once fetched, is always cached within the Message object. However, the content of a bodypart is not cached. So each time the content is requested by the client (either using getContent() or using getInputStream()), a new FETCH request is issued to the server. The reason for this is that the content of a message could be potentially large, and if we cache this content for a large number of messages, there is the possibility that the system may run out of memory soon since the garbage collector cannot free the referenced objects. Clients should be aware of this and must hold on to the retrieved content themselves if needed.
By using the above code snippet you can hope for some speed improvement, but it depends entirely on your IMAP server whether it works. Many of the big IMAP servers do not support this behaviour because of the load issue mentioned in the previous paragraph, so you may not gain any speed.
Using the Folder.fetch method you can prefetch in one operation the metadata for multiple messages. That will reduce the time to process each message, but won't help that much with a huge message.
To handle huge message parts efficiently, you'll generally want to use the getInputStream method to process the data incrementally, rather than using the getContent method to read all the data in and create a huge String object with all the data.
You can also tune the fetching by specifying the "mail.imap.fetchsize" property, which defaults to 16384. If most of your messages are less than 100K and you always need to read all of the data in a message, you might set the fetchsize to 100K. That will make small messages much faster and larger messages more efficient.
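A rough sketch of both ideas combined (the 100K value is only an example, and the surrounding connect/search code is assumed to be the same as in the question):
// Tune the IMAP fetch size and stream the body instead of materializing one huge object.
Properties props = System.getProperties();
props.put("mail.imap.fetchsize", "102400");     // ~100K chunks instead of the 16K default
Session session = Session.getInstance(props);
// ...connect, open the folder and search as in the question, then:
InputStream in = messages[i].getInputStream();  // decoded content, fetched incrementally
byte[] buf = new byte[8192];
for (int n = in.read(buf); n != -1; n = in.read(buf)) {
    // process each chunk here instead of keeping the whole 13Mb body in memory
}
in.close();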
I had a similar issue. Fetching mail via IMAP was very slow. Furthermore, I had another issue downloading large attachments. After a look at the JavaMail FAQ I found the solution for the latter issue in this question, which advises setting mail.imap.partialfetch (respectively mail.imaps.partialfetch) to false. This not only fixes the download issue but also the slow reading of messages.
The referenced JavaMail notes.txt says:
Due to a problem in the Microsoft Exchange IMAP server, insufficient
number of bytes may be retrieved when reading big messages. There
are two ways to workaround this Exchange bug:
(a) The Exchange IMAP server provides a configuration option called
"fast message retrieval" to the UI. Simply go to the site, server
or recipient, click on the "IMAP4" tab, and one of the check boxes
is "enable fast message retrieval". Turn it off and the octet
counts will be exact. This is fully described at
http://support.microsoft.com/default.aspx?scid=kb;EN-US;Q191504
(b) Set the "mail.imap.partialfetch" property to false. You'll have
to set this property in the Properties object that you provide to
your Session.
Certain IMAP servers do not implement the IMAP Partial FETCH
functionality properly. This problem typically manifests as corrupt
email attachments when downloading large messages from the IMAP
server. To workaround this server bug, set the
"mail.imap.partialfetch"
property to false. You'll have to set this property in the Properties
object that you provide to your Session.
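Applied to the code from the question, option (b) is just a property on the Session (sketch):
// Disable partial FETCH to work around the Exchange octet-count bug described above.
Properties props = System.getProperties();
props.put("mail.imap.partialfetch", "false");   // use "mail.imaps.partialfetch" for imaps
Session session = Session.getInstance(props);
Store store = session.getStore("imap");
store.connect(host, user, passwd);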

ReadableByteChannel hangs on read(ByteBuffer)

I'm working on an instant messenger using Java 1.6. The IM uses multithreading: a main thread, a receiving thread, and a ping thread. For TCP/IP communication I used SocketChannel, and there seems to be a problem with receiving bigger packets from the server. Instead of one packet the server sends a couple of packets, and that's where the problem begins. The first 8 bytes of every packet tell the type of the packet and how big it is. This is how I handle reading:
public void run() {
    while (true) {
        try {
            Headbuffer.clear();
            bytes = readChannel.read(Headbuffer); // ReadableByteChannel
            Headbuffer.flip();
            if (bytes != -1) {
                int head = Headbuffer.getInt();
                int size = Headbuffer.getInt();
                System.out.println("received pkg: 0x" + Integer.toHexString(head) + " with size " + size + " bytes");
                switch (head) {
                    case incoming.Pkg1: ReadWelcome(); break;
                    case incoming.Pkg2: ReadLoginFail(); break;
                    case incoming.Pkg3: ReadLoginOk(); break;
                    case incoming.Pkg4: ReadUserList(); break;
                    case incoming.Pkg5: ReadUserData(); break;
                    case incoming.Pkg6: ReadMessage(); break;
                    case incoming.Pkg7: ReadTypingNotify(); break;
                    case incoming.Pkg8: ReadListStatus(); break;
                    case incoming.Pkg9: ChangeStatus(); break;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
During the tests everything was fine until I logged in to my account and imported my buddy list. I sent a request to the server for statuses and it sent me back only about 10 out of 80 contacts. So I came up with something like this:
public synchronized void readInStatus(ByteBuffer headBuffer) {
    byteArray.add(headBuffer); // store every buffer in an ArrayList
    int buddies = MainController.controler.getContacts().getSize();
    while (buddies > 0) {
        readStuff();
        readDescription();
        --buddies;
    }
}
and each of readStuff() and readDescription() checks each parameter's size against the remaining bytes in the buffer:
if (byteArray.get(current).remaining() >= 4) {
    uin = byteArray.get(current).getInt();
} else {
    byteArray.add(Receiver.receiver.read());
    current = current + 1;
    uin = byteArray.get(current).getInt();
}
and Receiver.receiver.read() is:
public ByteBuffer read() {
    try {
        ByteBuffer bb = ByteBuffer.allocate(40000);
        bb.order(ByteOrder.LITTLE_ENDIAN);
        bytes = readChannel.read(bb);
        bb.flip();
        return bb;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
}
So the application is launched, logs in and then sends the contact list. The server sends me back just a piece of my list, and in the method readInStatus(ByteBuffer headBuffer) I try to force-read the rest of it. And now the fun part: after some time it gets to Receiver.receiver.read(), and on bytes = readChannel.read(bb) it just stops, and I don't know why. No errors, nothing, even after some time, and I'm out of ideas. I've been fighting with this for a whole week and I'm not getting anywhere near a solution. I will appreciate any suggestions. Thanks.
Thanks for the response. Yes, I'm using a blocking SocketChannel; I tried non-blocking but it went wild and out of control, so I dropped the idea. About the bytes I expect: this is kind of weird, because the size is given only once, in the header, and it is the size of the first part, not of the whole packet; the other parts contain no header bytes at all. I can't predict how many bytes there will be; the reason is the descriptions with a capacity of 255 bytes. This is exactly why I created the variable buddies in public synchronized void readInStatus(ByteBuffer headBuffer), which is basically the length of my buddy list, and before reading each field I check whether there are enough bytes left; if not, I do a read(). The last field before the description is an integer with the length of the incoming description, but it's impossible to determine how long the packet is until some processing is done. @robert, do you think I should try switching to a non-blocking SocketChannel again in that situation?
The problem is most likely that you are sending fewer bytes than you are trying to read. You might have missed writing something, written things in the wrong order, misread a size field, or something like that.
I think I'd attack this problem by adding tracing code to count and log the number of bytes read and written, the notional packet sizes and so on. Then run, and compare the traces to see where things start to get out of sync.
If you are using a blocking SocketChannel, then read will block until the buffer is filled or the server delivers end of stream. For a server with connection keep-alive, the server does not send end of stream; it simply stops sending data, and the read hangs indefinitely or until timeout.
You could either:
(i) try using a non-blocking SocketChannel, repeatedly reading until the read delivers 0 bytes (but beware: 0 bytes does not necessarily mean end of stream, it could mean an interruption), or
(ii) if you have to use the blocking version and you know how many bytes to expect from the server, e.g. from a header, then when the number of bytes left to read is less than buffer.capacity(), move the position and/or limit on the buffer so as to leave only the required space in the buffer before the read. I am working on this solution now. If it works for you, please let me know!
As far as I can work out, if you have to use a blocking SocketChannel, you do not know how many bytes to expect, and the server does not send end of stream, there is no solution.
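For option (ii), a sketch of a helper that reads an exact number of bytes from a blocking channel (here the body size would come from your 8-byte header) might look like this:
// Blocks until exactly 'size' bytes have arrived, or fails if the peer closes the connection.
private ByteBuffer readExactly(ReadableByteChannel channel, int size) throws IOException {
    ByteBuffer buf = ByteBuffer.allocate(size);   // limit == size, so read() cannot overshoot
    buf.order(ByteOrder.LITTLE_ENDIAN);
    while (buf.hasRemaining()) {
        if (channel.read(buf) == -1) {
            throw new EOFException("connection closed mid-packet");
        }
    }
    buf.flip();
    return buf;
}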

Downloading files in Java and common errors

I wrote a simple downloader as a Java applet. During testing I discovered that my way of downloading files is not even half as robust as, for example, Firefox's.
My code:
InputStream is = null;
FileOutputStream os = null;
os = new FileOutputStream(...);
URL u = new URL(...);
URLConnection uc = u.openConnection();
is = uc.getInputStream();
final byte[] buf = new byte[1024];
for(int count = is.read(buf);count != -1;count = is.read(buf)) {
os.write(buf, 0, count);
}
Sometimes my applet works fine, sometimes unexpected things happen. E.g. from time to time, in the middle of a download, the applet throws an IOException or just loses the connection for a while, with no possibility of returning to the current download and finishing it.
I know that the really advanced way is too complicated for a single inexperienced Java programmer, but maybe you know some techniques to minimize the risk of these problems appearing.
So you want to resume your download.
If you get an IOException while reading from the URL, there was a problem with the connection.
This happens. Now you must note how much you have already downloaded, and open a new connection which starts from there.
To do this, use setRequestProperty() on the second connection and send the right header field for "I want only the range of the resource starting with ...". See section 14.35.2 Range Retrieval Requests in the HTTP 1.1 specification. You should check the header fields of the response to see whether you really got back a range, though.
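A sketch of what the retry could look like (alreadyDownloaded and targetFile are placeholders for whatever your applet tracks):
// Resume from the byte we stopped at; 206 means the server honoured the Range header.
HttpURLConnection conn = (HttpURLConnection) u.openConnection();
conn.setRequestProperty("Range", "bytes=" + alreadyDownloaded + "-");
if (conn.getResponseCode() == HttpURLConnection.HTTP_PARTIAL) {
    InputStream in = conn.getInputStream();
    FileOutputStream out = new FileOutputStream(targetFile, true);   // append to the partial file
    final byte[] buf = new byte[1024];
    for (int count = in.read(buf); count != -1; count = in.read(buf)) {
        out.write(buf, 0, count);
    }
    out.close();
} else {
    // server ignored the Range header: start over from byte 0
}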

How do I get google protocol buffer messages over a socket connection without disconnecting the client?

I'm attempting to send a .proto message from an iPhone application to a Java server via a socket connection. However, so far I'm running into an issue on the receiving side: the server only seems to process the data after the client connection has been terminated. This suggests the data is getting sent, but the server keeps its input stream open and waits for more data. Would anyone know how I might go about solving this? The current code (or at least the relevant parts) is as follows:
iPhone:
Person *person = [[[[Person builder] setId:1] setName:@"Bob"] build];
RequestWrapper *request = [[[RequestWrapper builder] setPerson:person] build];
NSData *data = [request data];
AsyncSocket *socket = [[AsyncSocket alloc] initWithDelegate:self];
if (![socket connectToHost:@"192.168.0.6" onPort:6666 error:nil]) {
    [self updateLabel:@"Problem connecting to socket!"];
} else {
    [self updateLabel:@"Sending data to server..."];
    [socket writeData:data withTimeout:-1 tag:0];
    [self updateLabel:@"Data sent, disconnecting"];
    //[socket disconnect];
}
Java:
try {
    RequestWrapper wrapper = RequestWrapper.parseFrom(socket.getInputStream());
    Person person = wrapper.getPerson();
    if (person != null) {
        System.out.println("Persons name is " + person.getName());
        socket.close();
    }
} catch (IOException e) {
    e.printStackTrace();
}
On running this, it seems to hang on the line where the RequestWrapper parses the input stream. I did try replacing the socket writeData call with:
[request writeToOutputStream:[socket getCFWriteStream]];
(here I'm asking the GPB message to write itself to the output stream, instead of writing the generated NSData to it)
which I thought might work; however, I get an error claiming that the "Protocol message contained an invalid tag (zero)". I'm fairly certain it doesn't contain an invalid tag, as the message works when sent via the writeData method.
Any help on the matter would be greatly appreciated!
Cheers!
Dan
(EDIT: I should mention that I am using the metasyntactic GPB code and the CocoaAsyncSocket implementation.)
Maybe you need to flush the output socket on the iPhone side; it's possible that the data is sitting in an OS/library buffer and is not being written until the connection is closed (which causes an implicit flush).
EDIT
It looks like the API doesn't support flushing (I guess flushing isn't a very async thing to do), but you can subscribe to the didWriteDataWithTag event. From the headers:
/**
* Called when a socket has completed writing the requested data. Not called if there is an error.
**/
- (void)onSocket:(AsyncSocket *)sock didWriteDataWithTag:(long)tag;
I would subscribe to this event and then, in the event handler, call
[self updateLabel:@"Data sent, disconnecting"];
[socket disconnect];
This way you only show the "Data sent" label when the data has actually been sent.
(In the interest of full disclosure, I have no idea how to program in Objective-C :) good luck.)
Finally I managed to solve it!! It's embarrassing how simple it was once I stopped caring about the GPB way of doing it. The Objective-C code I used is:
NSData *data = [wrapper data];
int s = [wrapper serializedSize];
NSData *size = [NSData dataWithBytes:&s length:1];
[sock writeData:size withTimeout:-1 tag:1];
[sock writeData:data withTimeout:-1 tag:1];
and then on the Java end I just kept the
RequestWrapper.parseDelimitedFrom(socket.getInputStream())
line and it works a treat! All I end up doing is sending the size of the data before the data itself and the GPB method on the Java end works out the rest!
One major problem I had was converting the size of the data from an int to NSData and having it sent across the network correctly. The way I was advised to do it was
NSData *size = [NSData dataWithBytes:&s length:sizeof(s)];
However, whenever I sent that across, it would send the first byte along with three "0" bytes. This caused havoc with GPB, because if it receives a 0 byte at any point it throws an exception, thinking the data is corrupt (my guess). Seeing as I never looked at the actual bytes coming across until trying a different approach today, I'm a bit gutted, as I could have figured out the issue a while ago. After some experimenting I gathered that sizeof was the problem, so I removed it. Currently I have just put a '1' instead of the actual size, which
seems to send only one byte for the length across the network, although I'm not sure that's going to be an 'ideal' solution (although the message size should only be one byte anyway). If anyone could advise me why this sizeof() is causing an issue, it would be appreciated :)
Points should really go to Luke Steffen, who helped me with this on the CocoaAsyncSocket Google group despite my idiocy, so again, thanks Luke!
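For what it's worth, sizeof(s) most likely broke things because parseDelimitedFrom() expects a varint length prefix, not a fixed 4-byte integer: for a small message the varint is a single byte, so the three trailing zero bytes of the little-endian int get read as part of the message body, and the first zero byte shows up as the "invalid tag (zero)" error. On a pure Java client the matching pair would look like this (a sketch, assuming the generated Java classes for the same .proto):
// writeDelimitedTo emits exactly the varint length prefix that parseDelimitedFrom expects.
RequestWrapper wrapper = RequestWrapper.newBuilder()
        .setPerson(Person.newBuilder().setId(1).setName("Bob").build())
        .build();
wrapper.writeDelimitedTo(socket.getOutputStream());

// The server loop can then read any number of messages back to back:
RequestWrapper received;
while ((received = RequestWrapper.parseDelimitedFrom(socket.getInputStream())) != null) {
    System.out.println("Persons name is " + received.getPerson().getName());
}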
