Update a string every N seconds in a while loop - Java

I just started learning Java and I'm stuck on this problem: I have an infinite while-loop which builds a message to send over a socket; currently the message is not sent until a certain number of elements has been polled from a queue and appended.
String msg = null;
String toSend = "";
int currentNumOfMsg = 0;
final int MAX_MSG_TO_SEND = 200;
while (true) {
    if ((msg = messageQueue.poll()) != null) { // if there is an element in the queue
        toSend += (msg + "#");
        currentNumOfMsg++;
        if (currentNumOfMsg == MAX_MSG_TO_SEND) {
            try {
                sendMessage(toSend); // send to socket
            } finally {
                msg = null;
                toSend = "";
                currentNumOfMsg = 0;
            }
        }
    }
}
My goal is to send the message after N seconds, without waiting to reach MAX_MSG_TO_SEND... Is that possible, or should I continue with this approach?

While the other answer is perfectly valid, I thought it may be valuable to point out that ScheduledExecutorService (documentation found here) lets you call a function foo() every n seconds using the method scheduleAtFixedRate().
Basically, setting up the executor is as easy as:
ScheduledExecutorService ses = Executors.newScheduledThreadPool(1);
ses.scheduleAtFixedRate(foo, 0, n, TimeUnit.SECONDS);
I think putting any more code here is a bit unnecessary, but to see how to do this in more detail, look here, here, or here. These links give some basic examples. I would really recommend doing it this way, as this class is part of the standard library (java.util.concurrent, so no extra dependencies) and you don't actually have to worry very much about the multithreading/scheduling part of it; it takes care of all that for you. But that's just my $.02.
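Applied to the batching problem in the question, a minimal sketch could look like this (the queue, the "#" separator and sendMessage() are taken from the question; the surrounding class and method names are assumptions):
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class BatchSender {
    private final Queue<String> messageQueue = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService ses = Executors.newScheduledThreadPool(1);

    void start(int n) {
        // every n seconds, drain whatever has been queued so far and send it as one batch
        ses.scheduleAtFixedRate(this::flush, 0, n, TimeUnit.SECONDS);
    }

    private void flush() {
        StringBuilder toSend = new StringBuilder();
        String msg;
        while ((msg = messageQueue.poll()) != null) {
            toSend.append(msg).append("#");
        }
        if (toSend.length() > 0) {
            sendMessage(toSend.toString()); // send to socket, as in the question
        }
    }

    private void sendMessage(String s) { /* socket write goes here */ }
}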
Leave a question/comment if you have one, I'll try to answer it.

Yeah, you can definitely do such a thing. But first you should store your received messages in a data structure, and when you want to send the data over the socket, send what is in that data structure.
Also, you can use Guava's Stopwatch to send the message exactly on time. For further information, see https://dzone.com/articles/guava-stopwatch
Otherwise, you can use a long variable which stores System.currentTimeMillis() and each time check whether the expected amount of time has elapsed, as in the sample code below:
long lastSend = System.currentTimeMillis();
// inside the loop:
if (System.currentTimeMillis() - lastSend >= 10000) {
    // send data
    lastSend = System.currentTimeMillis(); // reset the timer
}
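For completeness, a minimal sketch of how this check could be folded into the loop from the question, so the batch is also flushed after N seconds even when MAX_MSG_TO_SEND has not been reached (the 10-second interval is an assumption):
String toSend = "";
int currentNumOfMsg = 0;
final int MAX_MSG_TO_SEND = 200;
final long sendIntervalMillis = 10_000; // "N seconds"
long lastSend = System.currentTimeMillis();

while (true) {
    String msg = messageQueue.poll();
    if (msg != null) {
        toSend += (msg + "#");
        currentNumOfMsg++;
    }
    boolean batchFull = currentNumOfMsg == MAX_MSG_TO_SEND;
    boolean timeUp = System.currentTimeMillis() - lastSend >= sendIntervalMillis;
    if ((batchFull || timeUp) && !toSend.isEmpty()) {
        try {
            sendMessage(toSend); // send to socket
        } finally {
            toSend = "";
            currentNumOfMsg = 0;
            lastSend = System.currentTimeMillis();
        }
    }
}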

Related

Read multiple characteristics from BLE and reading a string

So I have two questions.
Let's start with the first one: how do you make two readCharacteristic calls one after the other? The code I've shown is how I thought you could do it, but because onCharacteristicRead hasn't been called yet for the first readCharacteristic, the second readCharacteristic isn't triggered. I solved it by calling the second readCharacteristic inside the if-statement for the first characteristic in onCharacteristicRead, but I don't know if that is a normal or a stupid solution?
public void onServicesDiscovered(final BluetoothGatt gatt, int status) {
if (status == BluetoothGatt.GATT_SUCCESS) {
BluetoothGattService mBluetoothGattService = gatt.getService(UUID.fromString(CSUuid));
if (mBluetoothGattService != null) {
Log.i(TAG, "Connection State: Service characteristic UUID found: " + mBluetoothGattService.getUuid().toString());
mCharacterisitc = mBluetoothGattService.getCharacteristic(UUID.fromString(UuidRead));
mCharacterisitc2 = mBluetoothGattService.getCharacteristic(UUID.fromString(UuidRead2));
Log.w(TAG, "Connection State 1: mCharacterisitc " + mCharacterisitc + " " + mCharacterisitc2);
readCharacteristic(gatt, mCharacterisitc);
//I know I have to wait for the above is done, but can I do it here instead of
//calling the line under in onCharacteristicRead?
readCharacteristic(gatt, mCharacterisitc2);
} else {
Log.i(TAG, "Connection State: Service characteristic not found for UUID: " + UuidRead);
}
}
}
The next question is a bit harder, I think.
The code is made in PSoC Creator 4.3.
So at the moment I read a single int from my PSoC 6 BLE device, plus a letter 'M' converted to an integer and back to an 'M' on the app side. The reason I only read a SINGLE 'M' is that I don't know how to send a whole string like 'Made it'. I think the issue I'm having is on the PSoC side, where I don't know how to send out a whole string.
for(;;)
{
/* Place your application code here. https://www.youtube.com/watch?v=Aeip0hkc4YE*/
cy_stc_ble_gatt_handle_value_pair_t serviceHandle;
cy_stc_ble_gatt_value_t serviceData;
//this is the variables I've declared earlier in the code
//static uint8 data[1] = {0};
//static char * ValStr;
//here I just have a simple Integer which count up every sec
serviceData.val = (uint8*)data;
serviceData.len = 1;
serviceHandle.attrHandle = CY_BLE_CUSTOM_SERVICE_DEVICE_OUTBOUND_CHAR_HANDLE;
serviceHandle.value = serviceData;
Cy_BLE_GATTS_WriteAttributeValueLocal(&serviceHandle); //sending the data to -> OUTBOUND
//this part should probably not be in a for-loop, but for now it is.
ValStr = "Mads Sander Hoegstrup"; //I want read whole string on my android APP
serviceData.val = (uint8*) ValStr; //this only takes the 'M' and thats the only variable I can read from my APP not the rest of the string
serviceData.len = 1; //Does not help to increase, if it's more than 1 I read 0 and not a letter
serviceHandle.attrHandle = CY_BLE_CUSTOM_SERVICE_DEVICE_OUTBOUND_2_CHAR_HANDLE;
serviceHandle.value = serviceData;
Cy_BLE_GATTS_WriteAttributeValueLocal(&serviceHandle); //sending the data to -> OUTBOUND_2
data[0]++;
CyDelay(1000);
}
Here you can see that I receive the right values, an integer and a string, but only the letter 'M' and not the whole string 'Mads Sander Hoegstrup'.
Just ask if you want more information
You'd better ask two separate questions, since they have nothing to do with each other.
I'll answer the first question. You cannot wait inside the onServicesDiscovered method between the two reads. Even if you wait for 30 seconds it will not work. The reason is that only one thread can run a callback on each BluetoothGatt object at the same time, and it's the caller of onCharacteristicRead that clears the internal gatt busy flag which otherwise prevents you from submitting another request. You'd better implement some queue mechanism to keep the code cleaner if you like.
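A minimal sketch of such a queue, assuming it lives inside the BluetoothGattCallback from the question (the field and method names here are illustrative, not part of the original code):
private final Queue<BluetoothGattCharacteristic> readQueue = new ArrayDeque<>();

// called from onServicesDiscovered: enqueue all characteristics, then start the first read
private void queueReads(BluetoothGatt gatt, BluetoothGattCharacteristic... characteristics) {
    readQueue.addAll(Arrays.asList(characteristics));
    nextRead(gatt);
}

private void nextRead(BluetoothGatt gatt) {
    BluetoothGattCharacteristic next = readQueue.poll();
    if (next != null) {
        gatt.readCharacteristic(next); // the result arrives in onCharacteristicRead
    }
}

@Override
public void onCharacteristicRead(BluetoothGatt gatt, BluetoothGattCharacteristic characteristic, int status) {
    // handle the value of 'characteristic' here, then submit the next queued request
    nextRead(gatt);
}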

How to improve performance of deserializing objects from HttpsURLConnection.getInputStream()?

I have a client-server application where the server sends some binary data to the client and the client has to deserialize objects from that byte stream according to a custom binary format. The data is sent via an HTTPS connection and the client uses HttpsURLConnection.getInputStream() to read it.
I implemented a DataDeserializer that takes an InputStream and deserializes it completely. It works by performing multiple inputStream.read(buffer) calls with small buffers (usually less than 100 bytes). On the way to better overall performance I also tried different implementations here. One change did improve this class's performance significantly (I'm now using a ByteBuffer to read primitive types rather than doing it manually with byte shifting), but in combination with the network stream no difference shows up. See the section below for more details.
Quick summary of my issue
Deserializing from the network stream takes way too long even though I proved that the network and the deserializer themselves are fast. Are there any common performance tricks that I could try? I am already wrapping the network stream with a BufferedInputStream. Also, I tried double buffering with some success (see code below). Any solution to achieve better performance is welcome.
The performance test scenario
In my test scenario server and client are located on the same machine and the server sends ~174 MB of data. The code snippets can be found at the end of this post. All numbers you see here are averages of 5 test runs.
First I wanted to know how fast the InputStream of the HttpsURLConnection can be read. Wrapped into a BufferedInputStream, it took 26.250s to write the entire data into a ByteArrayOutputStream (snippet #1).
Then I tested the performance of my deserializer, passing it all of that 174 MB as a ByteArrayInputStream. Before I improved the deserializer's implementation, it took 38.151s. After the improvement it took only 23.466s (snippet #2).
So this is going to be it, I thought... but no.
What I actually want to do, somehow, is pass connection.getInputStream() to the deserializer. And here comes the strange thing: before the deserializer improvement, deserializing took 61.413s, and after the improvement, 60.100s (snippet #3)!
How can that happen? Almost no improvement here, even though the deserializer improved significantly. Also, unrelated to that improvement, I was surprised that this takes longer than the separate performances summed up (60.100 > 26.250 + 23.466). Why? Don't get me wrong, I didn't expect this to be the best solution, but I didn't expect it to be that bad either.
So, three things to notice:
1. The overall speed is bound by the network, which takes at least 26.250s. Maybe there are some HTTP settings that I could tweak or I could further optimize the server, but for now this is likely not what I should focus on.
2. My deserializer implementation is very likely still not perfect, but on its own it is faster than the network, so I don't think there is a need to improve it further.
3. Based on 1. and 2. I'm assuming that it should be somehow possible to do the entire job in a combined way (reading from the network + deserializing) which should take not much more than 26.250s. Any suggestions on how to achieve this are welcome.
I was looking for some kind of double buffer allowing two threads to read from it and write to it in parallel.
Is there something like that in standard Java? Preferably some class inheriting from InputStream that allows writing to it in parallel? If there is something similar but not inheriting from InputStream, I may be able to change my DataDeserializer to consume from that one as well.
As I haven't found any such DoubleBufferInputStream, I implemented it myself.
The code is quite long and likely not perfect, and I don't want to bother you with doing a code review for me. It has two 16 kB buffers. Using it I was able to improve the overall performance to 39.885s (snippet #4).
That is much better than 60.100s but still much worse than 26.250s. Choosing different buffer sizes didn't change much. So, I hope someone can lead me to some good double buffer implementation.
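For what it's worth, the standard library's PipedOutputStream/PipedInputStream pair offers similar semantics (one thread writes while another reads through an internal buffer). A rough sketch of using it in place of a hand-rolled double buffer, under the same setup as snippet #4 below:
PipedOutputStream pipeOut = new PipedOutputStream();
PipedInputStream pipeIn = new PipedInputStream(pipeOut, 64 * 1024); // 64 kB internal buffer

new Thread(() -> {
    try (InputStream in = new BufferedInputStream(connection.getInputStream());
         OutputStream out = pipeOut) {
        byte[] buffer = new byte[16 * 1024];
        int count;
        while ((count = in.read(buffer)) >= 0) {
            out.write(buffer, 0, count);
        }
    } catch (IOException e) {
        // ignored for brevity; closing the pipe lets the reader see EOF
    }
}).start();

DataDeserializer deserializer = new DataDeserializer(pipeIn);
deserializer.Deserialize();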
The test code
1 (26.250s)
InputStream inputStream = new BufferedInputStream(connection.getInputStream());
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
byte[] buffer = new byte[16 * 1024];
int count = 0;
long start = System.nanoTime();
while ((count = inputStream.read(buffer)) >= 0) {
outputStream.write(buffer, 0, count);
}
long end = System.nanoTime();
2 (23.466s)
InputStream inputStream = new ByteArrayInputStream(entire174MBbuffer);
DataDeserializer deserializer = new DataDeserializer(inputStream);
long start = System.nanoTime();
deserializer.Deserialize();
long end = System.nanoTime();
3 (60.100s)
InputStream inputStream = new BufferedInputStream(connection.getInputStream());
DataDeserializer deserializer = new DataDeserializer(inputStream);
long start = System.nanoTime();
deserializer.Deserialize();
long end = System.nanoTime();
4 (39.885s)
MyDoubleBufferInputStream doubleBufferInputStream = new MyDoubleBufferInputStream();
new Thread(new Runnable() {
@Override
public void run() {
try (InputStream inputStream = new BufferedInputStream(connection.getInputStream())) {
byte[] buffer = new byte[16 * 1024];
int count = 0;
while ((count = inputStream.read(buffer)) >= 0) {
doubleBufferInputStream.write(buffer, 0, count);
}
} catch (IOException e) {
} finally {
doubleBufferInputStream.closeWriting(); // read() may return -1 now
}
}
}).start();
DataDeserializer deserializer = new DataDeserializer(doubleBufferInputStream);
long start = System.nanoTime();
deserializer.deserialize();
long end = System.nanoTime();
Update
As requested, here is the core of my deserializer. I think the most important method is prepareForRead() which performs the actual reading of the stream.
class DataDeserializer {
private InputStream _stream;
private ByteBuffer _buffer;
public DataDeserializer(InputStream stream) {
_stream = stream;
_buffer = ByteBuffer.allocate(256 * 1024);
_buffer.order(ByteOrder.LITTLE_ENDIAN);
_buffer.flip();
}
private int readInt() throws IOException {
prepareForRead(4);
return _buffer.getInt();
}
private long readLong() throws IOException {
prepareForRead(8);
return _buffer.getLong();
}
private CustomObject readCustomObject() throws IOException {
prepareForRead(/*size of CustomObject*/);
int customMember1 = _buffer.getInt();
long customMember2 = _buffer.getLong();
// ...
return new CustomObject(customMember1, customMember2, ...);
}
// several other built-in and custom object read methods
private void prepareForRead(int count) throws IOException {
while (_buffer.remaining() < count) {
if (_buffer.capacity() - _buffer.limit() < count) {
_buffer.compact();
_buffer.flip();
}
int read = _stream.read(_buffer.array(), _buffer.limit(), _buffer.capacity() - _buffer.limit());
if (read < 0)
throw new EOFException("Unexpected end of stream.");
_buffer.limit(_buffer.limit() + read);
}
}
public HugeCustomObject Deserialize() throws IOException {
while (...) {
// call several of the above methods
}
return new HugeCustomObject(/* deserialized members */);
}
}
Update 2
I modified my code snippet #1 a little bit to see more precisely where time is being spent:
InputStream inputStream = new BufferedInputStream(connection.getInputStream());
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
byte[] buffer = new byte[16 * 1024];
long read = 0;
long write = 0;
while (true) {
long t1 = System.nanoTime();
int count = inputStream.read(buffer);
long t2 = System.nanoTime();
read += t2 - t1;
if (count < 0)
break;
t1 = System.nanoTime();
outputStream.write(buffer, 0, count);
t2 = System.nanoTime();
write += t2 - t1;
}
System.out.println(read + " " + write);
This tells me that reading from the network stream takes 25.756s while writing to the ByteArrayOutputStream only takes 0.817s. This makes sense as these two numbers almost perfectly sum up to the previously measured 26.250s (plus some additional measuring overhead).
In the very same way I modified code snippet #4:
MyDoubleBufferInputStream doubleBufferInputStream = new MyDoubleBufferInputStream();
new Thread(new Runnable() {
@Override
public void run() {
try (InputStream inputStream = new BufferedInputStream(httpChannelOutputStream.getConnection().getInputStream(), 256 * 1024)) {
byte[] buffer = new byte[16 * 1024];
long read = 0;
long write = 0;
while (true) {
long t1 = System.nanoTime();
int count = inputStream.read(buffer);
long t2 = System.nanoTime();
read += t2 - t1;
if (count < 0)
break;
t1 = System.nanoTime();
doubleBufferInputStream.write(buffer, 0, count);
t2 = System.nanoTime();
write += t2 - t1;
}
System.out.println(read + " " + write);
} catch (IOException e) {
} finally {
doubleBufferInputStream.closeWriting();
}
}
}).start();
DataDeserializer deserializer = new DataDeserializer(doubleBufferInputStream);
deserializer.deserialize();
Now I would expect that the measured reading time is exactly the same as in the previous example. But instead, the read variable holds a value of 39.294s (How is that possible?? It's the exact same code being measured as in the previous example with 25.756s!)* while writing to my double buffer only takes 0.096s. Again, these numbers almost perfectly sum up to the measured time of code snippet #4.
Additionally, I profiled this very same code using Java VisualVM. It tells me that 40s were spent in this thread's run() method, and 100% of these 40s are CPU time. On the other hand, it also spends 40s inside the deserializer, but there only 26s are CPU time and 14s are spent waiting. This matches the time of reading from the network into the ByteArrayOutputStream almost perfectly. So I guess I have to improve my double buffer's "buffer-switching algorithm".
*) Is there any explanation for this strange observation? I could only imagine that this way of measuring is very inaccurate. However, the read- and write-times of the latest measurements perfectly sum up to the original measurement, so it cannot be that inaccurate... Could someone please shed some light on this?
I was not able to find these read and write performances in the profiler... I will try to find some settings that allow me to observe the profiling results for these two methods.
Apparently, my "mistake" was to use a 32-bit JVM (jre1.8.0_172, to be precise).
Running the very same code snippets on a 64-bit JVM, and tadaaa... it is fast and it all makes sense there.
In particular see these new numbers for the corresponding code snippets:
snippet #1: 4.667s (vs. 26.250s)
snippet #2: 11.568s (vs. 23.466s)
snippet #3: 17.185s (vs. 60.100s)
snippet #4: 12.336s (vs. 39.885s)
So apparently, the answers given to "Does Java 64 bit perform better than the 32-bit version?" are simply not true anymore. Or there is a serious bug in this particular 32-bit JRE version. I haven't tested any others yet.
As you can see, #4 is only slightly slower than #2 which perfectly matches my original assumption that
Based on 1. and 2. I'm assuming that it should be somehow possible to
do the entire job in a combined way (reading from the network +
deserializing) which should take not much more than 26.250s.
Also, the very weird results of my profiling approach described in Update 2 of my question do not occur anymore. I haven't repeated every single test in 64 bit yet, but all the profiling results I did obtain are plausible now, i.e. the same code takes the same time no matter which code snippet it is in. So maybe it's really a bug, or does anybody have a reasonable explanation?
The most certain way to improve any of these is to change
connection.getInputStream()
to
new BufferedInputStream(connection.getInputStream())
If that doesn't help, the input stream isn't your problem.

How to correctly communicate with 3D Printer

I have to write a Java program that receives G-code commands via the network and sends them to a 3D printer via serial communication. In principle everything seems to be okay, as long as the printer needs more than 300ms to execute a command. If the execution time is shorter than that, it takes too long for the printer to receive the next command, and that results in a delay between command executions (the printer nozzle standing still for about 100-200ms). This can become a problem in 3D printing, so I have to eliminate that delay.
For comparison: software like Repetier Host or Cura can send the same commands via serial without any delay between command executions, so it has to be possible somehow.
I use jSerialComm library for serial communication.
This is the Thread that sends commands to the printer:
@Override
public void run() {
if(printer == null) return;
log("Printer Thread started!");
//wait just in case
Main.sleep(3000);
long last = 0;
while(true) {
String cmd = printer.cmdQueue.poll();
if (cmd != null && !cmd.equals("") && !cmd.equals("\n")) {
log(cmd+" last: "+(System.currentTimeMillis()-last)+"ms");
last = System.currentTimeMillis();
send(cmd + "\n", 0);
}
}
}
private void send(String cmd, int timeout) {
printer.serialWrite(cmd);
waitForBuffer(timeout);
}
private void waitForBuffer(int timeout) {
if(!blockForOK(timeout))
log("OK Timeout ("+timeout+"ms)");
}
public boolean blockForOK(int timeoutMillis) {
long millis = System.currentTimeMillis();
while(!printer.bufferAvailable) {
if(timeoutMillis != 0)
if(millis + timeoutMillis < System.currentTimeMillis()) return false;
try {
sleep(1);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
printer.bufferAvailable = false;
return true;
}
This is printer.serialWrite ("inspired" by the Arduino Java Lib):
public void serialWrite(String s){
comPort.setComPortTimeouts(SerialPort.TIMEOUT_SCANNER, 0, 500);
try{Thread.sleep(5);} catch(Exception e){}
PrintWriter pout = new PrintWriter(comPort.getOutputStream());
pout.print(s);
pout.flush();
}
printer is an object of class Printer, which implements com.fazecast.jSerialComm.SerialPortDataListener.
The relevant functions of Printer:
@Override
public int getListeningEvents() {
return SerialPort.LISTENING_EVENT_DATA_AVAILABLE;
}
@Override
public void serialEvent(SerialPortEvent serialPortEvent) {
byte[] newData = new byte[comPort.bytesAvailable()];
int numRead = comPort.readBytes(newData, newData.length);
handleData(new String(newData));
}
private void handleData(String line) {
//log("RX: "+line);
if(line.contains("ok")) {
bufferAvailable = true;
}
if(line.contains("T:")) {
printerThread.printer.temperature[0] = Utils.readFloat(line.substring(line.indexOf("T:")+2));
}
if(line.contains("T0:")) {
printerThread.printer.temperature[0] = Utils.readFloat(line.substring(line.indexOf("T0:")+3));
}
if(line.contains("T1:")) {
printerThread.printer.temperature[1] = Utils.readFloat(line.substring(line.indexOf("T1:")+3));
}
if(line.contains("T2:")) {
printerThread.printer.temperature[2] = Utils.readFloat(line.substring(line.indexOf("T2:")+3));
}
}
Printer.bufferAvailable is declared volatile
I also tried the blocking functions of jSerialComm in another thread, with the same result.
Where is my bottleneck? Is there a bottleneck in my code at all or does jserialcomm produce too much overhead?
For those who do not have experience in 3d-printing:
When the printer receives a valid command, it will put that command into an internal buffer to minimize delay. As long as there is free space in the internal buffer it replies with ok. When the buffer is full, the ok is delayed until there is free space again.
So basically you just have to send a command, wait for the ok, and immediately send another one.
@Override
public void serialEvent(SerialPortEvent serialPortEvent) {
byte[] newData = new byte[comPort.bytesAvailable()];
int numRead = comPort.readBytes(newData, newData.length);
handleData(new String(newData));
}
This part is problematic: the event may have been triggered before a full line was received, so potentially only half an "ok" has arrived so far. You need to buffer (over multiple events) and reassemble the data into messages before attempting to parse them as full messages.
Worst case, this may have resulted in entirely losing temperature readings or ok messages because they were ripped in half.
See the InputStream example and wrap it in a BufferedReader to get access to BufferedReader::readLine(). With the BufferedReader in place, you can then just use it to poll directly in the main thread and process the responses synchronously.
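A rough sketch of that approach, assuming jSerialComm's getInputStream() together with the TIMEOUT_READ_SEMI_BLOCKING mode discussed below (configured once when the port is opened):
// configure once, right after opening the port (not on every write)
comPort.setComPortTimeouts(SerialPort.TIMEOUT_READ_SEMI_BLOCKING, 0, 0);
BufferedReader reader = new BufferedReader(new InputStreamReader(comPort.getInputStream()));
PrintWriter writer = new PrintWriter(comPort.getOutputStream(), true);

// send a command, then block until the printer acknowledges it with "ok"
void sendAndWaitForOk(String cmd) throws IOException {
    writer.print(cmd + "\n");
    writer.flush();
    String line;
    while ((line = reader.readLine()) != null) {
        // temperature reports ("T:", "T0:", ...) can still be parsed here
        if (line.contains("ok")) {
            return;
        }
    }
}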
try{Thread.sleep(5);} catch(Exception e){}
sleep(1);
You don't want to sleep. Depending on your system environment (and I strongly assume that this isn't running on Windows on x86, but rather Linux on an embedded platform), a sleep can be much longer than anticipated: up to 30ms or 100ms, depending on the kernel configuration.
The sleep before the write doesn't make much sense in the first place: you know the serial port is ready for writing, because you already received an ok confirming reception of the previously sent command.
The sleep during receive becomes pointless when using the BufferedReader.
comPort.setComPortTimeouts(SerialPort.TIMEOUT_SCANNER, 0, 500);
And this is actually what is causing your problems. SerialPort.TIMEOUT_SCANNER activates a wait period on read: after receiving the first byte it will wait at least another 100ms to see if it will become part of a message. So after it has seen the ok, it waits another 100ms internally on the OS side before it assumes that this was all there is.
You need SerialPort.TIMEOUT_READ_SEMI_BLOCKING for low latency, but then the problem predicted in the first paragraph will occur unless buffered.
Setting the timeouts repeatedly also causes yet another problem, because there is a 200ms sleep inside SerialPort::setComPortTimeouts. Set it once per serial connection, no more than that.
Check the manual of the printer (or tell us the model); I'm not sure you actually need to wait for the ok, in which case you can read and write concurrently. Sometimes there is hardware flow control handling this for you, with large enough buffers. Try just sending the commands without waiting for ok and see what happens.
If you just want to pipe commands from the network to serial port, you can use ready-made solution like socat. For example running the following:
socat TCP-LISTEN:8888,fork,reuseaddr FILE:/dev/ttyUSB0,b115200,raw
would pipe all bytes coming from clients connected to the 8888 port directly to the /dev/ttyUSB0 at baud rate of 115200 (and vice-versa).

Using final variable in lambda expression Java

I have an app that fetches a lot of data, so I would like to paginate the data into chunks and process those chunks individually rather than dealing with all the data at once. So I wrote a function, called every n seconds, that checks whether a chunk is done and then processes it. My problem is that I have no way of keeping track of the fact that I just processed a chunk and should move on to the next chunk when it is available. I was thinking of something along the lines of the code below; however, I cannot call multiplier++;, as the compiler complains that multiplier no longer behaves like a final variable. I would like to use something like multiplier so that once the code processes a chunk it 1) doesn't process the same chunk again and 2) moves on to the next chunk. Is it possible to do this? Is there a modifier one can put on multiplier to help avoid race conditions?
int multiplier = 1;
CompletableFuture<String> completionFuture = new CompletableFuture<>();
final ScheduledFuture<?> checkFuture = executor.scheduleAtFixedRate(() -> {
// parse json response
String response = getJSONResponse();
JsonObject jsonObject = ConverterUtils.parseJson(response, true)
.getAsJsonObject();
int pages = jsonObject.get("stats").getAsJsonObject().get("pages").getAsInt();
// if we have a chunk of n pages records then process them with dataHandler function
if (pages > multiplier * bucketSize) {
dataHandler.apply(getResponsePaginated((multiplier - 1) * bucketSize, bucketSize));
multiplier++;
}
if (jsonObject.has("finishedAt") && !jsonObject.get("finishedAt").isJsonNull()) {
// we are done!
completionFuture.complete("");
}
}, 0, sleep, TimeUnit.SECONDS);
You can use an AtomicInteger. Since this is a mutable type, you can assign it to a final variable while still being able to change its value. This also addresses the synchronization issue between the callbacks:
final AtomicInteger multiplier = new AtomicInteger(1);
executor.scheduleAtFixedRate(() -> {
//...
multiplier.incrementAndGet();
}, 0, sleep, TimeUnit.SECONDS);
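Applied to the snippet from the question, this could look roughly as follows (getJSONResponse(), dataHandler, bucketSize and the pagination logic are taken from the question as-is):
final AtomicInteger multiplier = new AtomicInteger(1);
CompletableFuture<String> completionFuture = new CompletableFuture<>();
final ScheduledFuture<?> checkFuture = executor.scheduleAtFixedRate(() -> {
    String response = getJSONResponse();
    JsonObject jsonObject = ConverterUtils.parseJson(response, true).getAsJsonObject();
    int pages = jsonObject.get("stats").getAsJsonObject().get("pages").getAsInt();
    // process the next chunk once enough pages have accumulated
    if (pages > multiplier.get() * bucketSize) {
        dataHandler.apply(getResponsePaginated((multiplier.get() - 1) * bucketSize, bucketSize));
        multiplier.incrementAndGet();
    }
    if (jsonObject.has("finishedAt") && !jsonObject.get("finishedAt").isJsonNull()) {
        completionFuture.complete("");
    }
}, 0, sleep, TimeUnit.SECONDS);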

Check servers for active Webserver fast (multithreaded)

I want to check a huge number (thousands) of websites to see if they are still running, because I want to get rid of unnecessary entries in my HostFile wiki page about hosts files.
I want to do it in a 2-stage process:
1. Check if something is running on port 80
2. Check the HTTP response code (if it's not 200 I have to check the site)
I want to multithread, because if I want to check thousands of addresses, I can't wait for timeouts.
This question is just about Step one.
I have the problem that ~1/4 of my connection attempts don't work. If I retry the ones that didn't work, about ~3/4 of those then succeed. Am I not closing the sockets correctly? Am I running into a limit of open sockets?
By default I run 16 threads, but I have the same problems with 8 or 4.
Is there something I'm missing?
I have simplified the code a little.
Here is the code of the Thread
public class SocketThread extends Thread{
int tn;
int n;
String[] s;
private ArrayList<String> good;
private ArrayList<String> bad;
public SocketThread(int tn, int n, String[] s) {
this.tn = tn;
this.n = n;
this.s = s;
good = new ArrayList<String>();
bad = new ArrayList<String>();
}
@Override
public void run() {
int answer;
for (int i = tn * (s.length / n); i < ((tn + 1) * (s.length / n)) - 1; i++) {
answer = checkPort80(s[i]);
if (answer == 1) {
good.add(s[i]);
} else {
bad.add(s[i]);
}
System.out.println(s[i] + " | " + answer);
}
}
}
And here is the checkPort80 method:
public static int checkPort80(String host) {
Socket socket = null;
int reachable = -1;
try {
//One way of doing it
//socket = new Socket(host, 80);
//socket.close();
//Another way I've tried
socket = new Socket();
InetSocketAddress ina = new InetSocketAddress(host, 80);
socket.connect(ina, 30000);
socket.close();
return reachable = 1;
} catch (Exception e) {
} finally {
if (socket != null) {
if (socket.isBound()) {
try {
socket.close();
return reachable;
} catch (Exception e) {
e.getMessage();
return reachable;
}
}
}
}
}
About the threads: I make an ArrayList of threads, create them and .start() them, and right afterwards I .join() them, get the "Good" and the "Bad" lists, and save them to files.
Help is appreciated.
PS: I rename the Hosts-file first so that it doesn't affect the process, so this is not an issue.
Edit:
Thanks to Marcelo Hernández Rishr I discovered that HttpURLConnection seems to be the better solution. It works faster and I can also get the HTTP response code, which I was interested in anyway (I just thought it would be much slower than just checking port 80). After a while I still suddenly get errors; I guess this has to do with the DNS server thinking this is a DoS attack ^^ (but I should examine further whether the error lies somewhere else). Also, FYI, I use OpenDNS, so maybe they just don't like me ^^.
x4u suggested adding a sleep() to the threads, which seems to make things a little better, but whether it will help me raise entries/second I don't know.
Still, I can't (by far) get to the speed I wanted (10+ entries/second); even 6 entries per second doesn't seem to work.
Here are a few scenarios I tested (so far all without any sleep()).
number of threads | time until the first round of errors | entries processed until then | entries/second
10                | 1 minute 17 seconds                  | ~770 entries                 | 10
8                 | 3 minutes 55 seconds                 | ~2000 entries                | 8.51
6                 | 6 minutes 30 seconds                 | ~2270 entries                | 5.82
I will try to find a sweet spot between threads and sleep (or maybe simply pause everything for one minute whenever I get many errors).
The problem is that there are hosts files with one million entries, which at one entry per second would take 11 days, which I guess everyone understands is not acceptable.
Are there ways to switch DNS servers on the fly?
Any other suggestions?
Should I post the new questions as separate questions?
Thanks for the help until now.
I'll post new results in about a week.
I have 3 suggestions that may help you in your task.
1. Maybe you can use the class HttpURLConnection (see the sketch after this list).
2. Use a maximum of 10 threads, because you are still limited by CPU, bandwidth, etc.
3. The good and bad lists shouldn't be part of your thread class; maybe they can be static members of the class where you have your main method, with static synchronized methods to add entries to both lists from any thread.
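A minimal sketch of suggestion 1, assuming a plain HTTP probe with a HEAD request and explicit timeouts (host names and thresholds are up to you):
import java.net.HttpURLConnection;
import java.net.URL;

public static int checkHttp(String host) {
    try {
        HttpURLConnection conn = (HttpURLConnection) new URL("http://" + host + "/").openConnection();
        conn.setRequestMethod("HEAD");     // only the status code is of interest
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        int code = conn.getResponseCode(); // e.g. 200 means the site is up
        conn.disconnect();
        return code;
    } catch (Exception e) {
        return -1;                         // unreachable or not an HTTP server
    }
}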
Sockets usually try to shut down gracefully and wait for a response from the destination port. While they are waiting, they still block resources, which can make successive connection attempts fail if they are executed while there are still too many open sockets.
To avoid this you can turn off the lingering before you connect the socket:
socket.setSoLinger(false, 0);
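In the context of the checkPort80 method above, that would mean something like this (a sketch; whether disabling lingering actually helps depends on the platform's default socket behaviour):
Socket socket = new Socket();
socket.setSoLinger(false, 0); // turn off lingering before connecting, as suggested above
socket.connect(new InetSocketAddress(host, 80), 30000);
socket.close();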
