I am trying to get HTTP codes from websites.
When I parse sites without threads, one-by-one, everything is fine.
But if I use threads, sometimes I receive
java.sql.SQLException: After end of result set
at
URL url = new URL(rset.getString("url"));
I think the problem is with timeouts, so I tried to break out of the loop if the timeout is greater than I want.
if (connection.getConnectTimeout() > 10) {
    System.out.println("timeout");
    break;
}
But it seems that it never works. What am I doing wrong? Thank you. The relevant part of the code is below.
static class JThread extends Thread {

    JThread(String name) {
        super(name);
    }

    public void run() {
        try {
            while (rset.next()) {
                System.out.println("hello");
                URL url = new URL(rset.getString("url"));
                HttpURLConnection connection = (HttpURLConnection) url.openConnection();
                connection.setRequestMethod("GET");
                connection.connect();
                if (connection.getConnectTimeout() > 10) {
                    System.out.println("timeout");
                    break;
                }
                //Thread.sleep(1000);
                int code = connection.getResponseCode();
                System.out.println(code);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (ProtocolException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("Thread stopped");
    }
}
This has nothing to do with HTTP timeouts. The exception you are getting is because you are trying to get the "url" column from a database row after you have already read all the results (and there are no more results to read).
You say this only happens when you use multiple threads. It looks like a thread enters the while loop (rset.next() is true). Then another thread calls rset.next() (moving past the end of the result set), gets false, and doesn't enter the loop. Then the first thread tries to get the URL, but you've already gone past the last result.
You should synchronize access to the ResultSet (or any object) if it is shared between multiple threads, but it would probably be better to just not share the ResultSet between threads. A better way to split the work would be to have a single thread get the URLs from the database, and 1 or more threads do the HTTP connection (e.g. thread pool, new thread per URL, etc.).
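For example, here is a minimal sketch of that split. The JDBC URL, credentials and the urls table/column are placeholders (your real schema and driver may differ): one thread (the main thread here) owns the ResultSet and only hands plain URL strings to a small thread pool that does the HTTP checks.

import java.net.HttpURLConnection;
import java.net.URL;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class UrlChecker {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // One thread reads the ResultSet; the workers never touch it.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost/mydb", "user", "password"); // placeholder URL/credentials
             Statement stmt = conn.createStatement();
             ResultSet rset = stmt.executeQuery("SELECT url FROM urls")) { // assumed table/column
            while (rset.next()) {
                final String spec = rset.getString("url");
                pool.submit(() -> checkUrl(spec));
            }
        }

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }

    private static void checkUrl(String spec) {
        try {
            HttpURLConnection connection = (HttpURLConnection) new URL(spec).openConnection();
            connection.setConnectTimeout(10_000); // give up connecting after 10 seconds
            connection.setReadTimeout(10_000);
            connection.setRequestMethod("GET");
            int code = connection.getResponseCode();
            System.out.println(spec + " -> " + code);
        } catch (Exception e) {
            System.out.println(spec + " failed: " + e);
        }
    }
}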
You need to synchronize the calls to ResultSet.next() and check beforehand whether the result set has been exhausted.
Right now two threads can call it at the same time: the first one gets the last row, and the second tries to fetch a row when the result set is already exhausted.
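If you do keep sharing the ResultSet, a minimal sketch of that synchronization (assuming rset is a shared static field, as your snippet implies) is to make the next()/getString() pair one atomic step and hand each worker thread a plain String:

// Called from each worker thread. Synchronizing makes next() and getString()
// one atomic step, so no other thread can advance the cursor in between.
private static synchronized String nextUrl() throws SQLException {
    if (rset.next()) {
        return rset.getString("url");
    }
    return null; // result set exhausted
}

// In run():
// String spec;
// while ((spec = nextUrl()) != null) {
//     ... open the HttpURLConnection for spec ...
// }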
We use a connection pool in our application, and I understand that we should get and close connections as needed since we are using a pool. I implemented a cache update mechanism by receiving Postgres LISTEN notifications. The code is very similar to the canonical example given in the documentation.
As you can see in the code, the LISTEN statement is issued in the constructor and the connection is reused. This may pose a problem if the connection is closed out of band for any reason. One solution is to get a connection before every use, but the statement is only executed once in the constructor and I still receive the notifications in the polling loop. So if I get a fresh connection every time, I will be forced to re-issue the LISTEN statement on every iteration (after the delay), and I'm not sure whether that is an expensive operation.
What is the middle ground here?
class Listener extends Thread {

    private Connection conn;
    private org.postgresql.PGConnection pgconn;

    Listener(Connection conn) throws SQLException {
        this.conn = conn;
        this.pgconn = conn.unwrap(org.postgresql.PGConnection.class);
        Statement stmt = conn.createStatement();
        stmt.execute("LISTEN mymessage");
        stmt.close();
    }

    public void run() {
        try {
            while (true) {
                org.postgresql.PGNotification notifications[] = pgconn.getNotifications();
                if (notifications != null) {
                    for (int i = 0; i < notifications.length; i++) {
                        // use notification
                    }
                }
                Thread.sleep(delay);
            }
        } catch (SQLException sqle) {
            // handle
        } catch (InterruptedException ie) {
            // handle
        }
    }
}
In addition to this, there is another, similar document whose example has an extra query inside the run method as well as the one in the constructor. I'm wondering if someone could enlighten me on the purpose of that extra query within the method.
public void run() {
    while (true) {
        try {
            // this query is additional to the one in the constructor
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT 1");
            rs.close();
            stmt.close();

            org.postgresql.PGNotification notifications[] = pgconn.getNotifications();
            if (notifications != null) {
                for (int i = 0; i < notifications.length; i++) {
                    System.out.println("Got notification: " + notifications[i].getName());
                }
            }

            // wait a while before checking again for new notifications
            Thread.sleep(delay);
        } catch (SQLException sqle) {
            // handle
        } catch (InterruptedException ie) {
            // handle
        }
    }
}
I experimented with closing the connection in every iteration (without getting another one), and it still works. Perhaps that is due to the unwrap that was done earlier.
Stack:
Spring Boot, JPA, Hikari, Postgres JDBC Driver (not pgjdbc-ng)
The connection pool is the servant, not the master. Keep the connection for as long as you are using it to LISTEN on, i.e. ideally forever. If the connection ever does close, then you will miss whatever notices were sent while it was closed. So to keep the cache in good shape, you would need to discard the whole thing and start over. Obviously not something you would want to do on a regular basis, or what would be the point of having it in the first place?
The other doc you show is just an ancient version of the first one. The dummy query just before polling is there to poke the underlying socket code to make sure it has absorbed all the messages. This is no longer necessary. I don't know if it ever was necessary, it might have just been some cargo cult that found its way into the docs.
You would probably be better off with the blocking version of this code, by using getNotifications(0) and getting rid of sleep(delay). This will block until a notice becomes available, rather than waking up twice a second and consuming some (small) amount of resources before sleeping again. Also, once a notice does arrive it will be processed almost immediately, instead of waiting for what is left of a half-second timeout to expire (so, on average, about a quarter second).
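A minimal sketch of that blocking variant, assuming the same conn/pgconn fields as the Listener above (error handling kept deliberately thin):

public void run() {
    try {
        while (true) {
            // getNotifications(0) blocks until at least one notification arrives,
            // so there is no need for Thread.sleep(delay) any more.
            org.postgresql.PGNotification[] notifications = pgconn.getNotifications(0);
            if (notifications != null) {
                for (org.postgresql.PGNotification n : notifications) {
                    // use n.getName() / n.getParameter() to refresh the cache
                }
            }
        }
    } catch (SQLException sqle) {
        // if the connection died, rebuild the cache and re-issue LISTEN on a new connection
    }
}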
The following Java code runs in an IntentService; it works and uploads an image to Twitter.
I have coded it with a separate try-catch at each call that throws an IOException.
I have removed code at the ... spots to make it quicker to read.
try {
    con = (HttpURLConnection) new URL(Constants.TWITTER_ENDPOINT_UPLOAD_MEDIA).openConnection();
} catch (IOException e) {
    e.printStackTrace();
}
...
con.setDoOutput(true);
try {
    os = con.getOutputStream();
} catch (IOException e) {
    e.printStackTrace();
}
DataOutputStream out = new DataOutputStream(os);
try {
    write(out, boundary + "\r\n");
    ...
} catch (IOException e) {
    e.printStackTrace();
}
try {
    statusCode = con.getResponseCode();
} catch (IOException e) {
    e.printStackTrace();
}
My question is:
Does it make sense to put one or more of these calls inside a re-try loop, like this:
for (int i = 0; ; i++) {
    try {
        httpURLConnection = (HttpURLConnection) new URL(Constants.TWITTER_ENDPOINT_UPLOAD_MEDIA).openConnection();
        break;
    } catch (IOException e) {
        if (i < 3) {
            continue;
        } else {
            reportErrorToCallingActivity();
            return;
        }
    }
}
And, if it does make sense, at which ones?
And, if for example I were to need to re-try at:
statusCode = con.getResponseCode();
how much code should I re-run: all the way back to openConnection?
I just can't figure this out from the documentation,
and I don't want to just guess at what to do. Hope you can help!
I appreciate this is probably greatly cut-down, but I think the giveaway here is that when you catch your IOExceptions all you do is log them to the console. In other words, your code doesn't know what to do, what state it (or the connection) is in, or how to proceed. Given that checking exception message text is bad practice, the exception type just doesn't give good enough information. In that case, the simplest and most honest thing is to rethrow the original exception in the hope that the calling code knows more / can handle the error better.
If, at any stage, you (or your code) are confident you can fully/largely deal with an exception, then have an explicit try/catch the way your current code does. But if you don't, your method should 'fess up' and declare that it throws IOException without trying to handle them at all. (Make sure you have a finally block to clean up any resources you've created in the process!)
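For instance, a rough sketch of that shape (the method name is made up here, and the elided multipart-writing code from your snippet is represented by a comment):

// Sketch only: declares IOException instead of swallowing it, and cleans up in finally.
private int uploadMedia() throws IOException {
    HttpURLConnection con = null;
    try {
        con = (HttpURLConnection) new URL(Constants.TWITTER_ENDPOINT_UPLOAD_MEDIA).openConnection();
        con.setDoOutput(true);
        try (DataOutputStream out = new DataOutputStream(con.getOutputStream())) {
            // ... write the multipart body as before ...
        }
        return con.getResponseCode(); // the caller inspects the status and decides what to do
    } finally {
        if (con != null) {
            con.disconnect(); // release the connection even if something above threw
        }
    }
}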
Provided your method is honest and cleans up after itself, the calling method can potentially retry in a loop - as per your second code example. However, without any controls on retrying (number of attempts, backoff, etc.) and without any good guidance from the connection API, it's probably unwise to keep trying, potentially wasting resources, getting rate-limited, breaching T&Cs etc.
A better solution would be to obtain a Twitter access library that returns meaningful status messages, and that can potentially handle its own retries.
Edit:
By far the most useful line is statusCode = con.getResponseCode(); which gives you a value you can check against Twitter's Error Codes list. In general, if you get 2xx back, your request has succeeded, 4xx means your code has done something wrong, 5xx means Twitter has done something wrong. At least that lets you adjust your error message if you - as you probably should - bail out rather than try to work around it.
It looks like 420 and 429 best indicate that you can retry. However, you will need to do so much more carefully than just repeating your request in an endless loop. You should definitely read the Rate Limits doc for guidance.
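Purely as an illustration, a bounded retry keyed off the response code might look roughly like this; buildAndSendRequest() is a hypothetical helper that opens the connection and writes the body, and the attempt count and back-off delays are arbitrary:

int statusCode = -1;
for (int attempt = 1; attempt <= 3; attempt++) {
    try {
        HttpURLConnection con = buildAndSendRequest(); // hypothetical helper: open connection, write body
        statusCode = con.getResponseCode();
        if (statusCode != 420 && statusCode != 429) {
            break; // success, or an error that blind retrying won't fix
        }
        // Rate limited: back off before the next attempt (see Twitter's Rate Limits doc).
        Thread.sleep(attempt * 5000L);
    } catch (IOException e) {
        if (attempt == 3) {
            throw e; // out of attempts: let the caller decide what to do
        }
    } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        break;
    }
}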
I am testing a chat application for the number of users it can handle. What I am trying is as follows:
I am trying to run my chat application by logging in a single user 1000 times in a for loop. Here is the relevant part of the code.
public void LoginChatConnect() {
    try {
        // while(true){
        for (int i = 0; i < 1000; i++) {
            System.out.println("inside loginChatLogin");
            String userId = "Rahul";
            String password = "rahul";
            sockChatListen = new Socket("localhost", 5004);
            // /sockChatListen.
            dosChatListen = new DataOutputStream(sockChatListen.getOutputStream());
            disChatListen = new DataInputStream(sockChatListen.getInputStream());
            dosChatListen.writeUTF(userId);
            dosChatListen.writeUTF(password);
            // System.out.println(dosChatListen.toString());
            dosChatListen.flush();
            // sockChatListen.close();
            boolean b = sockChatListen.isClosed();
            System.out.println("connection open**********************" + b);
            sockChatListen.close();
            System.out.println("connection closed**********************" + b);
            count++;
            System.out.println("count" + count);
        }
    } catch (UnknownHostException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
In the above code I am just trying to log in one user 1000 times, but after a certain number of logins it gives me this socket error:
java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
Here I am creating each connection to the single port 5004. Why am I getting the error after 100+ successful connections (logins)?
How should I recover from this problem?
Any suggestions will be helpful.
What I understand from your post is that you want to simulate 1000 users logging in to the chat server concurrently; I believe you are trying to load-test your chat server.
However, from your code I see that you establish and close the socket connection each time around the loop. That is like 1000 users waiting in a queue and logging in to the server one after the other: it does not simulate concurrent load but 1000 sequential calls to the server, which is not appropriate for load-testing. Submitting the logins to a thread pool would be closer to what you want, as sketched below.
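For what it's worth, a minimal sketch of such a concurrent test, reusing the host, port and writeUTF login handshake from your code (the pool size and the await timeout are arbitrary):

import java.io.DataOutputStream;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentLoginTest {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(50); // 50 logins in flight at a time
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> {
                try (Socket socket = new Socket("localhost", 5004);
                     DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
                    out.writeUTF("Rahul");
                    out.writeUTF("rahul");
                    out.flush();
                } catch (Exception e) {
                    System.out.println("login failed: " + e);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}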
My comments are based on the above understanding; please clarify if that is not the case.
Regarding the exception you get, I have no idea why it should stop working after 100+ attempts. Maybe you need to check your server-side code to figure out the problem.
OK, my issue is simple. I am trying to make a simple chat, but I feel that detecting a disconnected client on the server is mandatory. The chat works fine (without such detection) when I use simply:
if (this.in.ready()) // pre-initialized BufferedReader
    this.dataIn = this.in.readLine();
I have browsed lots of websites/questions posted here and read that ready() should be omitted since it blocks everything. That may be true, because when I delete this ready() my chat no longer works; however, it does enable detection of a disconnected client.
In order to reach my goal I need to test whether the BufferedReader receives null from readLine(), but this does not work as it should either.
if (this.in.readLine() != null) { // 1. check if client disconnected
    if (this.in.ready())          // 2.
        this.dataIn = this.in.readLine(); // 3.
}
else
    this.notice = "Client disconnected!";
Now, what happens when I apply the code presented above? The initial if (1) blocks before the inner ready() (2), which is required to read the actual message sent over the socket (3).
The other "order" does not work:
if (this.in.ready()) {
    this.dataIn = this.in.readLine();
}
else if (this.in.readLine() != null)
    this.notice = "Client disconnected!";
This one also does not allow messages to be sent through the socket.
* Yes, sending/receiving is done in a separate thread.
* The BufferedReader is initialized only once.
Thread source code (in case anyone needs to take a look):
@Override
public void run() {
    try {
        if (this.isServer) {
            this.initServer();
            this.initProcessData(sSocket);
        } else if (!this.isServer) {
            this.initClient();
            this.initProcessData(clientSocket);
        }
        this.initDone = true;
    } catch (IOException ex) {
        Logger.getLogger(NetClass.class.getName()).log(Level.SEVERE, null, ex);
    }
    while (this.initDone) {
        try {
            Thread.sleep(100);
            if ((!this.dataOut.isEmpty()) && (this.dataOut != "")) {
                this.out.println(this.dataOut);
                this.out.flush();
                this.dataOut = "";
            }
            if (this.in.ready())
                this.dataIn = this.in.readLine();
        } catch (IOException ex) {
            Logger.getLogger(NetClass.class.getName()).log(Level.SEVERE, null, ex);
        } catch (InterruptedException ex) {
            this.initDone = false;
            Logger.getLogger(NetClass.class.getName()).log(Level.SEVERE, null, ex);
        }
        // System.out.println(this.notice);
    }
}
The worst thing is that I can have either proper detection of a disconnected client or a working chat, but not both.
Can anyone enlighten me on what I should do in order to combine the two? Any help is greatly appreciated.
Consider using java.io.ObjectInputStream and java.io.ObjectOutputStream. Use the blocking read() method in a separate worker thread, and loop until it returns -1 or throws an exception. Send String objects back and forth. In that way, you can also send messages with line feeds.
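A minimal sketch of that reader thread, assuming an already-connected Socket and plain String messages (handleMessage and handleDisconnect are placeholders for your own chat logic):

import java.io.IOException;
import java.io.ObjectInputStream;
import java.net.Socket;

class ReaderThread extends Thread {
    private final Socket socket;

    ReaderThread(Socket socket) {
        this.socket = socket;
    }

    @Override
    public void run() {
        try (ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
            while (true) {
                // readObject() blocks until a whole object arrives; no ready() polling needed
                String message = (String) in.readObject();
                handleMessage(message); // placeholder: hand the message to the chat logic/UI
            }
        } catch (IOException e) {
            // EOFException or another IOException: the peer disconnected or the stream broke
            handleDisconnect();         // placeholder: update UI, clean up
        } catch (ClassNotFoundException e) {
            handleDisconnect();         // unexpected payload type
        }
    }

    private void handleMessage(String message) { /* ... */ }
    private void handleDisconnect() { /* ... */ }
}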
In my application I have implemented a method to get the favourites of a particular user. If the user is new, there will not be an entry in the table; in that case I add default favourites to the table. The code is shown below.
public String getUserFavourits(String username) {
    String s = "SELECT FAVOURITS FROM USERFAVOURITS WHERE USERID='" + username.trim() + "'";
    String a = "";
    Statement stm = null;
    ResultSet reset = null;
    DatabaseConnectionHandler handler = null;
    Connection conn = null;
    try {
        handler = DatabaseConnectionHandler.getInstance();
        conn = handler.getConnection();
        stm = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
        reset = stm.executeQuery(s);
        if (reset.next()) {
            a = reset.getString("FAVOURITS").toString();
        }
        reset.close();
        stm.close();
    } catch (SQLException ex) {
        ex.printStackTrace();
    } catch (Exception ex) {
        ex.printStackTrace();
    } finally {
        try {
            handler.returnConnectionToPool(conn);
            if (stm != null) {
                stm.close();
            }
            if (reset != null) {
                reset.close();
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
    if (a.equalsIgnoreCase("")) {
        a = updateNewUserFav(username);
    }
    return a;
}
You can see that after the finally block, the updateNewUserFav(username) method is used to insert default favourites into the table. Normally users are forced to change these at their first login.
My problem is that many users have complained to me that they have lost their customized favourites and the defaults have been loaded at their login. When I go through the code I notice that this can only happen if an exception occurs in the try block. When I debug, the code works fine. Could this be caused when the DB is busy?
Normally there are more than 1000 concurrent users in the system. Since it is a real-time application, there is a huge number of requests coming to the database (the DB is Oracle).
Can someone please explain?
Firstly, use jonearles' suggestion about bind variables. If a lot of your code is like this, with 1000 concurrent users, I'd hate to think what performance is like.
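A minimal sketch of that change, using a PreparedStatement with the same table and column names as your query (conn, username and a are the variables from your method):

String sql = "SELECT FAVOURITS FROM USERFAVOURITS WHERE USERID = ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, username.trim()); // the driver handles quoting/escaping
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            a = rs.getString("FAVOURITS");
        }
    }
}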
Secondly, if the database is busy then there is a chance of time-outs. As you say, if an exception is encountered, the code falls back to updateNewUserFav.
Really, it should only call that if NO exception was raised.
If an exception is raised, the function should fail. The current code is similar to
"TURN THE IGNITION KEY TO START THE CAR"
"IF THERE IS A PROBLEM, RING GARAGE AND BOOK APPOINTMENT"
"PUT CAR INTO GEAR AND RELEASE HAND_BRAKE"
You really only want to release the hand-brake once the car has successfully started, otherwise you'll end up rolling down the hill until the sudden stop at the end (often involving an expensive CRUNCH sound).
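In code terms, that ordering might be sketched like this: only a query that succeeded but returned no row should trigger the defaults. Rethrowing as a RuntimeException is just one option here; you could equally declare SQLException on the method.

boolean found = false;
String a = "";
try {
    // ... run the query as before ...
    if (reset.next()) {
        a = reset.getString("FAVOURITS");
        found = true;
    }
} catch (SQLException ex) {
    // Don't silently fall through to the defaults: fail instead.
    throw new RuntimeException("Could not load favourites for " + username, ex);
} finally {
    // close ResultSet/Statement, return the connection to the pool
}

if (!found) {
    // Only a genuinely new user (query succeeded but returned no row) gets the defaults.
    a = updateNewUserFav(username);
}
return a;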