I have been searching for information about this, but since I'm new to web development the answers I find only confuse me further.
Basically, I have a web server running on a Java modem (which uses a 1.3 IDE) that handles requests. Requests are processed fine as long as I keep them simple.
http://87.103.87.59/teste.html?a=10&b=10
That request is processed normally.
However, with the real request, my web server crashes.
http://5.43.52.4/api.html?ATCOMMAND=AT%5EMTXTUNNEL=SMS,0035111111111,string sending test
The problem comes down to two things: the "%" character and the spaces in "string sending test".
To make everything clear, these are the handlers I'm using:
public InputStream is = null;
private OutputStream os = null;
private byte buffer[] = new byte[1024]; // buffer size assumed; the original had no size
String streamAux = "";

is = socketX.openInputStream();
os = socketX.openOutputStream();

if ((is.available() > 0) || (blockX == true))
{
    // Read data sent from the remote client
    int numDadosLidos = is.read(buffer);
    for (int i = 0; i < numDadosLidos; i++)
        streamAux = streamAux + (char) buffer[i]; // where the url will be stored
}
Basically I will need those parameters so I can use them to operate my Java device, so I think I'll need to do some sort of decoding, but there's a lot of information I can't make sense of, and my 1.3 IDE is kind of keeping me stuck.
I apologize in advance for any newbie behaviour.
Hope you can lend me a hand,
Thanks
For those who are interested, I basically worked around the issue by forcing the message to be sent with the '-' character instead of spaces. It doesn't solve the underlying issue; it just answers the question in a "not-ideal" way.
Still totally interested if someone figures this one out.
Thanks.
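For anyone who wants to attack the decoding itself: below is a minimal sketch of percent-decoding a query string by hand, on the assumption that java.net.URLDecoder is not available on the modem's restricted runtime and that the raw request line is already sitting in streamAux (the method name is just illustrative).

// Decodes "%XX" escapes and '+' in a query string.
// Minimal sketch for an old/restricted Java runtime: assumes ASCII data and well-formed escapes.
public static String percentDecode(String s) {
    StringBuffer sb = new StringBuffer();
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        if (c == '+') {
            sb.append(' ');                            // '+' encodes a space in query strings
        } else if (c == '%' && i + 2 < s.length()) {
            int hi = Character.digit(s.charAt(i + 1), 16);
            int lo = Character.digit(s.charAt(i + 2), 16);
            if (hi >= 0 && lo >= 0) {
                sb.append((char) (hi * 16 + lo));      // e.g. "%5E" decodes to '^'
                i += 2;
            } else {
                sb.append(c);                          // malformed escape, keep literally
            }
        } else {
            sb.append(c);
        }
    }
    return sb.toString();
}

Note that the raw spaces in the original URL are themselves a problem: in an HTTP request line a space terminates the URI, so the client really needs to send them as %20 or '+', which a decoder like the sketch above can then turn back into spaces on the modem.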
We are downloading a very large file (~70 GB), but on one occasion the code completed without throwing an exception and the downloaded file was incomplete, just under 50 GB.
The code is:
public void download(String url, String filename) throws Exception {
URL dumpUrl = new URL(url);
try (InputStream input = dumpUrl.openStream()) {
Files.copy(input, Paths.get(filename));
}
}
The url is a presigned Google Cloud Storage URL.
Is this just the libraries not detecting a connection reset, or something else?
Are there better libraries I could use? Or do I need to do a HEAD call first and then match the downloaded size against the Content-Length?
I don't care that it didn't work; that happens and we have retry logic. My issue is that the code thought it did work.
UPDATE: So it seems it failed at exactly 2 hours after starting the download. This makes me suspect it may be a netops/firewall issue. Not sure at which end; I'll hassle my ops team for starters. Anybody know of time limits at Google's end?
Ignore this update - have more instances now, no set time. Anywhere between 20 minutes and 2 hours.
Never resolved the core issue, but was able to work around it by comparing the bytes downloaded to the Content-Length header, working in a loop that resumes an incomplete download using the Range header (similar to curl -C -), as in the sketch below.
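A minimal sketch of that workaround (names are illustrative; it assumes the URL accepts HEAD requests and Range headers, and real code would also cap the number of retries):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ResumingDownloader {

    public void download(String url, String filename) throws Exception {
        Path target = Paths.get(filename);
        long expected = contentLength(url);                         // size reported by the server
        long have = Files.exists(target) ? Files.size(target) : 0;

        while (have < expected) {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            if (have > 0) {
                conn.setRequestProperty("Range", "bytes=" + have + "-"); // resume where we stopped
            }
            try (InputStream in = conn.getInputStream();
                 OutputStream out = Files.newOutputStream(target,
                         StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);                           // may still stop early; outer loop re-checks
                }
            }
            have = Files.size(target);
        }
    }

    private long contentLength(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("HEAD");
        return conn.getContentLengthLong();
    }
}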
I have successfully programmed an app that takes traces of a lot of system services (GPS location, network location, Wi-Fi, neighbouring cell info, sensors, ...) every 10 seconds, and it works very well. But as soon as I restrict the internet on my phone to 2G only and turn off Wi-Fi, the app still works but starts to lag.
I have tried to find out where the problem is coming from and have noticed that it comes from these lines:
XmlPullParser receivedData = XmlPullParserFactory.newInstance().newPullParser();
receivedData.setInput(xmlUrl.openStream(), null);
return receivedData;
As soon as I delete these couple of lines from my activity the app works without lagging, but since they are essential for my app I would very much like to keep them (they already work) but without causing lags.
Can anyone please help me?
I have printed the parsed result from the XML file and it is correct, so my only problem here is the lagging of the app.
A typical XML file that I would be dealing with looks like this:
<rsp stat="ok">
<cell lat="49.88415658974359" lon="8.637537076923078" mcc="262" mnc="7" lac="41146"
cellid="42404" averageSignalStrength="-79" samples="39" changeable="1"/>
</rsp>
2G is a really slow connection. Even worse is the "warm-up" of the antenna: it may take up to 30 seconds before the first bit is received. (And there is not really anything you can do about this, because it is all about physics.)
So the only thing you can do is load the file in a background thread, as in the sketch below. This will keep the app responsive (if you don't need the data immediately).
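A minimal sketch of moving the fetch off the UI thread with a plain Thread (written as if inside the activity; onCellDataParsed is a hypothetical callback, and an AsyncTask or Handler would work just as well):

// Fetch and parse the XML off the main thread, then hand the result back to the UI thread.
// Assumes imports of org.xmlpull.v1.* and java.io.BufferedInputStream.
new Thread(new Runnable() {
    public void run() {
        try {
            XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
            parser.setInput(new BufferedInputStream(xmlUrl.openStream()), null);
            final XmlPullParser result = parser;
            runOnUiThread(new Runnable() {    // Activity method; post the result back to the UI thread
                public void run() {
                    onCellDataParsed(result); // hypothetical callback that updates the UI/trace
                }
            });
        } catch (Exception e) {
            e.printStackTrace();              // real code should report the failure
        }
    }
}).start();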
Maybe explicitly use a BufferedInputStream. Instead of

XmlPullParser receivedData = XmlPullParserFactory.newInstance().newPullParser();
receivedData.setInput(xmlUrl.openStream(), null);

use

XmlPullParser receivedData = XmlPullParserFactory.newInstance().newPullParser();
receivedData.setInput(new BufferedInputStream(xmlUrl.openStream()), null);
Maybe, maybe compression
As you know, in HTTP a browser may declare in its headers that it can handle compressed data, and then the server may send a compressed version of the content. This puts less data on the wire and may speed up communication, depending.
The same thing one can do oneself.
For an external site one does not control, one can only try it: send the header
Accept-Encoding: gzip
and one is lucky if the response comes back with the header
Content-Encoding: gzip
Doing both sides oneself means wrapping the streams:
outputStream = new GZIPOutputStream(outputStream);
inputStream = new GZIPInputStream(inputStream);
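A sketch of the client side of that negotiation with a plain HttpURLConnection (assuming the server actually supports gzip; if it does not, the response simply comes back unencoded and is returned as-is):

// Ask the server for gzip and transparently inflate the response if it agreed.
// Assumes imports of java.net.*, java.io.* and java.util.zip.GZIPInputStream.
public static InputStream openPossiblyGzipped(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept-Encoding", "gzip");       // "I can decompress"
    InputStream in = new BufferedInputStream(conn.getInputStream());
    if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) { // "I did compress"
        in = new GZIPInputStream(in);
    }
    return in;
}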
Saving memory
Making equal string instances unique reduces memory and might help, even if it costs considerable time itself. String.intern() is a bad idea, as prior to Java 7 interned strings go into the permanent (hard to reclaim) memory space. One might use a small pool of one's own instead:
private Map<String, String> identityMap = new HashMap<>();
public String unique(String s) {
if (s.length() >= 30) {
return s;
}
String t = identityMap.get(s);
if (t == null) {
t = s;
identityMap.put(s, t);
}
return t;
}
The hope is that processing becomes faster.
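For example, applied to the attribute values pulled from the sample XML above (the parser variable and attribute names are taken from that example):

// Repeated values such as mcc="262" or mnc="7" collapse to a single String instance each.
String mcc = unique(parser.getAttributeValue(null, "mcc"));
String mnc = unique(parser.getAttributeValue(null, "mnc"));
String lac = unique(parser.getAttributeValue(null, "lac"));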
I'm attempting to send a .proto message from an iPhone application to a Java server via a socket connection. However, so far I'm running into an issue when it comes to the server receiving the data: it only seems to process it after the client connection has been terminated. This suggests to me that the data is getting sent, but the server is keeping its input stream open and waiting for more data. Would anyone know how I might go about solving this? The current code (or at least the relevant parts) is as follows:
iPhone:
Person *person = [[[[Person builder] setId:1] setName:@"Bob"] build];
RequestWrapper *request = [[[RequestWrapper builder] setPerson:person] build];
NSData *data = [request data];

AsyncSocket *socket = [[AsyncSocket alloc] initWithDelegate:self];
if (![socket connectToHost:@"192.168.0.6" onPort:6666 error:nil]){
    [self updateLabel:@"Problem connecting to socket!"];
} else {
    [self updateLabel:@"Sending data to server..."];
    [socket writeData:data withTimeout:-1 tag:0];
    [self updateLabel:@"Data sent, disconnecting"];
    //[socket disconnect];
}
Java:
try {
    RequestWrapper wrapper = RequestWrapper.parseFrom(socket.getInputStream());
    Person person = wrapper.getPerson();
    if (person != null) {
        System.out.println("Persons name is " + person.getName());
        socket.close();
    }
} catch (IOException e) {
    e.printStackTrace();
}
On running this, it seems to hang on the line where the RequestWrapper is processing the inputStream. I did try replacing the socket writedata method with:
[request writeToOutputStream:[socket getCFWriteStream]];
(here I'm asking GPB to write directly to the output stream, instead of writing the generated data to it)
I thought that might work; however, I get an error claiming that the "Protocol message contained an invalid tag (zero)". I'm fairly certain it doesn't contain an invalid tag, as the message works when sent via the writeData method.
Any help on the matter would be greatly appreciated!
Cheers!
Dan
(EDIT: I should mention, I am using the metasyntactic gpb code; and the cocoaasyncsocket implementation)
Maybe you need to flush the output socket on the iPhone side; it's possible that the data is sitting in an OS/library buffer and is not being written until the connection is closed (which causes an implicit flush).
EDIT
It looks like the API doesn't support flushing (I guess flushing isn't a very async thing to do), but you can subscribe to the didWriteDataWithTag event. From the headers:
/**
* Called when a socket has completed writing the requested data. Not called if there is an error.
**/
- (void)onSocket:(AsyncSocket *)sock didWriteDataWithTag:(long)tag;
I would subscribe to this event and then in the event handler call
[self updateLabel:@"Data sent, disconnecting"];
[socket disconnect];
This way you only show the "Data sent" label when it is actually sent.
(In the interest of full disclosure, I have no idea how to program in Objective-C :) good luck.)
Finally I managed to solve it!! It's embarrassing how simple it was once I stopped caring about the GPB way of doing it. The Objective-C code I used is:
NSData *data = [wrapper data];
int s = [wrapper serializedSize];
NSData *size = [NSData dataWithBytes:&s length:1];
[sock writeData:size withTimeout:-1 tag:1];
[sock writeData:data withTimeout:-1 tag:1];
and then on the Java end I just kept the
RequestWrapper.parseDelimitedFrom(socket.getInputStream())
line and it works a treat! All I end up doing is sending the size of the data before the data itself and the GPB method on the Java end works out the rest!
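For completeness, a minimal sketch of the Java end under that scheme (socket setup is assumed; RequestWrapper is the generated protobuf class):

// parseDelimitedFrom reads a varint length prefix and then exactly that many bytes,
// so it returns as soon as one complete message has arrived instead of waiting for EOF.
try {
    RequestWrapper wrapper = RequestWrapper.parseDelimitedFrom(socket.getInputStream());
    if (wrapper != null && wrapper.hasPerson()) {   // null means the stream ended first
        System.out.println("Persons name is " + wrapper.getPerson().getName());
    }
} catch (IOException e) {
    e.printStackTrace();
}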
One major problem I had was converting the size of the data from an int to NSData and having it sent across the network correctly. The way I was advised to do it was
NSData *size = [NSData dataWithBytes:&s length:sizeof(s)];
However, whenever I sent that across, it would send the first byte along with three zero bytes. This caused havoc with GPB, because if it receives a zero byte at that point it throws an exception thinking the message is corrupt (my guess). Since I never looked at the actual bytes coming across until I tried a different approach today, I'm a bit gutted, as I could have figured out that this was the issue a while ago. After some experimenting I gathered that sizeof() was the problem, so I removed it. Currently I just pass '1' instead of the actual size, which sends only one byte across the network. I'm not sure that's an 'ideal' solution (although the message size should fit in one byte anyway) - if anyone could advise me why sizeof() is causing an issue, it would be appreciated :)
Points should really go to Luke Steffen who helped me with this on the cocoaasyncsocket google group despite my idiocy - so again, thanks Luke!
I'm working on software that does extensive queries to a database that has an HTTP interface, so my program parses and handles queries that come in the form of long http:// addresses.
I have realized that the bottleneck of the whole system is the querying: the data transfer barely goes above 20 KB/s, even though I am on the university network with a gigabit connection. Recently a friend of mine mentioned that I might have written my code in an inefficient way and that this might be the reason for the lack of speed. So my question is: what is the fastest/most efficient way of getting data from a web source in Java?
Here's the code I have right now:
private void handleQuery(String urlQuery,int qNumber, BufferedWriter out){
BufferedReader reader;
try{
// IO - routines: read from the webservice and print to a log file
reader = new BufferedReader(new InputStreamReader(openURL(urlQuery)));
....
}
}
private InputStream openURL(String urlName)
throws IOException
{
URL url = new URL(urlName);
URLConnection urlConnection = url.openConnection();
return urlConnection.getInputStream();
}
Your code looks good to me. The code snippet doesn't explain the slow read.
Possible problems are:
Network issues. Do an end-to-end network test to make sure the network is as fast as you think.
Server issues. Maybe the server is too slow.
Thread contention. Check whether you have any threading issues.
A profiler and network trace will pin-point the problem.
There is nothing in the code that you have provided that should be a bottleneck. The problem is probably somewhere else; e.g. what you are doing with the characters after you read them, how the remote server is writing them, network or webproxy issues, etc.
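If you want to rule out the read path itself, one cheap check is to time a raw bulk copy of the same URL with a plain byte buffer and no per-character work (a rough sketch; the buffer size is arbitrary):

// Measure raw download speed: count bytes, do nothing else with them.
// Assumes imports of java.io.* and java.net.*.
public static void timeRawDownload(String urlName) throws IOException {
    URLConnection conn = new URL(urlName).openConnection();
    long start = System.currentTimeMillis();
    long total = 0;
    try (InputStream in = conn.getInputStream()) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) > 0) {
            total += n;
        }
    }
    long ms = System.currentTimeMillis() - start;
    System.out.println(total + " bytes in " + ms + " ms");
}

If this raw copy is also stuck around 20 KB/s, the problem is the network or the server rather than the Java code.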
I am sending newsletters from a Java server and one of the hyperlinks is arriving missing a period, rendering it useless:
Please print your <a href=3D"http://xxxxxxx.xxx.xx.edu=
au//newsletter2/3/InnovExpoInviteVIP.pdf"> VIP invitation</a> for future re=
ference and check the Innovation Expo website <a href=3D"http://xxxxxxx.xx=
xx.xx.edu.au/2008/"> xxxxxxx.xxxx.xx.edu.au</a> for updates.
In the example above the period was lost between edu and au on the first hyperlink.
We have determined that the mail body is being line wrapped and the wrapping splits the line at the period, and that it is illegal to start a line with a period in an SMTP email:
https://www.rfc-editor.org/rfc/rfc2821#section-4.5.2
My question is this - what settings should I be using to ensure that the wrapping is period friendly and/or not performed in the first place?
UPDATE: After a lot of testing and debugging it turned out that our code was fine - the client's Linux server had shipped with a very old Java version and the old Mail classes were still in one of the lib folders and getting picked up in preference to ours.
JDKs prior to 1.2 have this bug.
From an SMTP perspective, you can start a line with a period but you have to send two periods instead. If the SMTP client you're using doesn't do this, you may encounter the problem you describe.
It might be worth trying an IP sniffer to see where the problem really is. There are likely at least two separate SMTP transactions involved in sending that email.
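For reference, the transparency rule from RFC 2821/5321 section 4.5.2 is simple to apply if you are writing the DATA section yourself rather than relying on a mail library (a sketch):

// A line of the message body that starts with '.' must be sent with the dot doubled;
// the receiving SMTP server strips the extra dot back off.
static String dotStuff(String line) {
    return line.startsWith(".") ? "." + line : line;
}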
I had a similar problem in HTML emails: mysterious missing periods, and in one case a strangely truncated message. JavaMail sends HTML email using the quoted-printable encoding, which wraps lines at any point (i.e. not only on whitespace) so that no line exceeds 76 characters. (It uses an '=' at the end of the line as a soft carriage return, so the receiver can reassemble the lines.) This can easily result in a line beginning with a period, which should then be doubled. (This is called 'dot-stuffing'.) If it is not, the period will be eaten by the receiving SMTP server, or worse, if the period is the only character on the line, it will be interpreted by the SMTP server as the end of the message.
I tracked it down to the GNU JavaMail 1.1.2 implementation (aka classpathx javamail). There is no newer version of this implementation and it hasn't been updated for 4 or 5 years. Looking at the source, it partially implements dot-stuffing -- it tries to handle the period on a line by itself case, but there's a bug that prevents even that case from working.
Unfortunately, this was the default implementation on our platform (Centos 5), so I imagine it is also the default on RedHat.
The fix on Centos is to install Sun's (or should I now say Oracle's?) JavaMail implementation (I used 1.4.4), and use the Centos alternatives command to install it in place of the default implementation. (Using alternatives ensures that installing Centos patches won't cause a reversion to the GNU implementation.)
Make sure all your content is RFC 2045 friendly by encoding it as quoted-printable.
Use the MimeUtility class in a method like this:
private String mimeEncode (String input)
{
ByteArrayOutputStream bOut = new ByteArrayOutputStream();
OutputStream out;
try
{
out = MimeUtility.encode( bOut, "quoted-printable" );
out.write( input.getBytes( ) );
out.flush( );
out.close( );
bOut.close( );
} catch (MessagingException e)
{
log.error( "Encoding error occured:",e );
return input;
} catch (IOException e)
{
log.error( "Encoding error occured:",e );
return input;
}
return bOut.toString( );
}
I am having a similar problem, but using ASP.NET 2.0. Per the application logs, the link in the email is correct ('http://www.3rdmilclassrooms.com/'), however when the email is received by the client the link is missing a period ('http://www3rdmilclassrooms.com').
I have done all I can to prove that the email is being sent with the correct link. My suspicion is that the email client or spam filter software is modifying the hyperlink. Is it possible that email spam filtering software would do that?
Are you setting the MIME type to "text/html"? You should have something like this:
BodyPart bp = new MimeBodyPart();
bp.setContent(message,"text/html");
I am not sure, but it looks a bit as if your email is getting encoded. 0x3D is hexadecimal for decimal 61, which is the equals character ('=').
What classes/library are you using to send the emails? Check the settings regarding encoding.
I had a similar problem sending email programmatically to a Yahoo account. They would get one very long line of text and add their own line breaks in the HTML email, thinking that wouldn't cause a problem, but of course it would.
The trick was not to send such a long line in the first place. Because HTML emails don't care about line breaks, you should add your own every few blocks, or just before the offending line, to ensure that your URL doesn't get split at a period like that.
I had to change my ASP VB from
var html;
html = "Blah Blah Blah Blah ";
html = html & " More Text Here....";
to
var html;
html = "Blah Blah Blah Blah " & VbCrLf;
html = html & " More Text Here....";
And that's all it took to clean up the output as was being processed on their end.
As Greg pointed out, the problem is with your SMTP client, which does not do dot-stuffing (doubling the leading dot).
It appears that the e-mail is being encoded as quoted-printable. Switching to base64 (I assume you can do it with the current Java MIME implementation) will fix the problem.
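A sketch of asking JavaMail for base64 by setting the transfer-encoding header on the body part; whether this is honoured can depend on the JavaMail implementation and on when saveChanges() runs, so treat it as something to verify rather than a guaranteed fix:

BodyPart bp = new MimeBodyPart();
bp.setContent(message, "text/html");
// Request base64 instead of quoted-printable so no wrapped line can start with a bare '.'
bp.setHeader("Content-Transfer-Encoding", "base64");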