I'm trying to read a string from a file, do an HTTP request with that string, and if the request returns a 200 then do another HTTP request with it.
I thought a good model for this would be the producer consumer model, but for some reason I'm totally stuck. The whole process just stops at a certain point for some reason and I have no idea why.
public static void main(String[] args) throws InterruptedException, IOException {
ArrayBlockingQueue<String> subQueue = new ArrayBlockingQueue<>(3000000);
ThreadPoolExecutor consumers = new ThreadPoolExecutor(100, 100, 10000, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(10));
ThreadPoolExecutor producers = new ThreadPoolExecutor(100, 100, 10000, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(10000000));
consumers.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
String fileName = "test";
try (BufferedReader br = new BufferedReader(new FileReader(fileName))) {
String line;
while ((line = br.readLine()) != null) {
String address = new JSONObject(line).getString("Address");
producers.submit(new Thread(() -> {
if (requestReturn200(address)) {
try {
subQueue.put(address);
} catch (InterruptedException e) {
System.out.println("Error producing.");
}
}
}));
}
producers.shutdown();
}
while (subQueue.size() != 0 || !producers.isShutdown()) {
String address = subQueue.poll(1, TimeUnit.SECONDS);
if (address != null) {
consumers.submit(new Thread(() -> {
try {
System.out.println("Doing..." + address);
doOtherHTTPReqeust(address);
} catch (Exception e) {
System.out.println("Fatal error consuming);
}
}));
} else {
System.out.println("Null");
}
}
consumers.shutdown();
}
Any and all help would be greatly appreciated.
while (subQueue.size() != 0 || !producers.isShutdown()) {
First of all, !producers.isShutdown() will always return false because it is checked after producers.shutdown(). isShutdown() does not tell you whether tasks in the pool are still running; it only tells you whether the pool has been shut down and can no longer accept new tasks. In your case it will always have been shut down, so the negated check is always false.
Second, subQueue.size() != 0: the consumers take data from the queue much faster than the producers can provide it, so in the middle of the "producing" process the consumers may have already drained the queue, making subQueue.size() != 0 false. As you can see, that breaks the loop and stops any further consumer tasks from being submitted.
You should stop relying on queue.size() and instead use the blocking properties of BlockingQueue: queue.take() will block until a new element is available.
So the overall flow should look like this:
Start some pool of producer tasks, like you are doing right now.
Let the producers put data into the blocking queue - yep, you are here.
Start some (I would say fixed) number of consumers
Let the consumers queue.take() data from the queue. This forces the consumers to automatically wait for new data and take it as soon as it becomes available.
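Here is a minimal, self-contained sketch of that flow under a few assumptions: the pool sizes, the POISON sentinel, and the helper methods (readAddresses, firstRequestReturns200, doSecondHttpRequest) are illustrative stand-ins for your own code, not the methods from the question.

import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ProducerConsumerSketch {

    private static final String POISON = "__DONE__"; // sentinel telling consumers to stop (hypothetical)
    private static final int CONSUMER_COUNT = 8;     // illustrative sizes

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        ExecutorService producers = Executors.newFixedThreadPool(8);
        ExecutorService consumers = Executors.newFixedThreadPool(CONSUMER_COUNT);

        // Consumers block on take() and exit when they see the poison pill.
        for (int i = 0; i < CONSUMER_COUNT; i++) {
            consumers.submit(() -> {
                try {
                    while (true) {
                        String address = queue.take();
                        if (POISON.equals(address)) {
                            break;
                        }
                        doSecondHttpRequest(address); // your second request goes here
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // Producers: one task per input line, as in your code (file reading omitted).
        for (String address : readAddresses()) {
            producers.submit(() -> {
                if (firstRequestReturns200(address)) {
                    queue.add(address);
                }
            });
        }

        producers.shutdown();
        producers.awaitTermination(1, TimeUnit.HOURS);
        // Only now is it certain no more items will arrive; wake each consumer once.
        for (int i = 0; i < CONSUMER_COUNT; i++) {
            queue.add(POISON);
        }
        consumers.shutdown();
    }

    // Placeholders standing in for the methods in the question.
    private static List<String> readAddresses() { return List.of(); }
    private static boolean firstRequestReturns200(String address) { return true; }
    private static void doSecondHttpRequest(String address) { }
}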
I will leave aside the fact that creating 200 threads is insane and misses the whole point of multithreaded consumer/producer task pools, at least in your case IMHO. The idea is to use a small number of threads, as they are heavyweight, to work through plenty of queued tasks. But that is a discussion for another time.
Related
I have written a piece of software in Java that checks if proxies are working by sending a HTTP request using the proxy.
It takes around 30,000 proxies from a database, then attempts to check if they are operational. The proxies received from the database used to be returned as an ArrayList<String>, but have been changed to Deque<String> for reasons stated below.
The way the program works is there is a ProxyRequest object that stores the IP & Port as a String and int respectively. The ProxyRequest object has a method isWorkingProxy() which attempts to send a request using a proxy and returns a boolean on whether it was successful.
This ProxyRequest object is wrapped by a RunnableProxyRequest object that calls super.isWorkingProxy() in the overridden run() method. Based on the response from super.isWorkingProxy(), the RunnableProxyRequest object updates a MySQL database.
Do note that the updating of the MySQL database is synchronized().
It runs on 750 threads using a FixedThreadPool (on a VPS), but towards
the end, it becomes very slow (stuck on ~50 threads), which obviously
implies the garbage collector is working. This is the problem.
I have attempted the following to improve the lag, but it does not seem to work:
1) Using a Deque<String> proxies and Deque.pop() to obtain the String containing the proxy. This (I believe) continuously makes the Deque<String> smaller, which should reduce lag caused by the GC.
2) Setting con.setConnectTimeout(this.timeout);, where this.timeout = 5000;. This way, the connection should return a result within 5 seconds. If not, the thread completes and should no longer be active in the thread pool.
Besides this, I don't know any other way I can improve performance.
Can anyone recommend a way for me to improve performance to avoid / stop lagging towards the end of the threads by the GC? I know there is a Stackoverflow question about this (Java threads slow down towards the end of processing), but I have tried everything in the answer and it has not worked for me.
Thank you for your time.
Code snippets:
Loop adding threads to the FixedThreadPool:
//This code is executed recursively (at the end, main(args) is called again)
//Create the threadpool for requests
//Threads is an argument that is set to 750.
ThreadPoolExecutor executor = (ThreadPoolExecutor)Executors.newFixedThreadPool(threads);
Deque<String> proxies = DB.getProxiesToCheck();
while(proxies.isEmpty() == false) {
try {
String[] split = proxies.pop().split(":");
Runnable[] checks = new Runnable[] {
//HTTP check
new RunnableProxyRequest(split[0], split[1], Proxy.Type.HTTP, false),
//SSL check
new RunnableProxyRequest(split[0], split[1], Proxy.Type.HTTP, true),
//SOCKS check
new RunnableProxyRequest(split[0], split[1], Proxy.Type.SOCKS, false)
//Add more checks to this list as time goes...
};
for(Runnable check : checks) {
executor.submit(check);
}
} catch(IndexOutOfBoundsException e) {
continue;
}
}
ProxyRequest class:
//Proxy details
private String proxyIp;
private int proxyPort;
private Proxy.Type testingType;
//Request details
private boolean useSsl;
public ProxyRequest(String proxyIp, String proxyPort, Proxy.Type testingType, boolean useSsl) {
this.proxyIp = proxyIp;
try {
this.proxyPort = Integer.parseInt(proxyPort);
} catch(NumberFormatException e) {
this.proxyPort = -1;
}
this.testingType = testingType;
this.useSsl = useSsl;
}
public boolean isWorkingProxy() {
//Case of an invalid proxy
if(proxyPort == -1) {
return false;
}
HttpURLConnection con = null;
//Perform checks on URL
//IF any exception occurs here, the proxy is obviously bad.
try {
URL url = new URL(this.getTestingUrl());
//Create proxy
Proxy p = new Proxy(this.testingType, new InetSocketAddress(this.proxyIp, this.proxyPort));
//No redirect
HttpURLConnection.setFollowRedirects(false);
//Open connection with proxy
con = (HttpURLConnection)url.openConnection(p);
//Set the request method
con.setRequestMethod("GET");
//Set max timeout for a request.
con.setConnectTimeout(this.timeout);
} catch(MalformedURLException e) {
System.out.println("The testing URL is bad. Please fix this.");
return false;
} catch(Exception e) {
return false;
}
try(
BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
) {
String inputLine = null; StringBuilder response = new StringBuilder();
while((inputLine = in.readLine()) != null) {
response.append(inputLine);
}
//A valid proxy!
return con.getResponseCode() > 0;
} catch(Exception e) {
return false;
}
}
RunnableProxyRequest class:
public class RunnableProxyRequest extends ProxyRequest implements Runnable {
public RunnableProxyRequest(String proxyIp, String proxyPort, Proxy.Type testingType, boolean useSsl) {
super(proxyIp, proxyPort, testingType, useSsl);
}
@Override
public void run() {
String test = super.getTest();
if(super.isWorkingProxy()) {
System.out.println("-- Working proxy: " + super.getProxy() + " | Test: " + test);
this.updateDB(true, test);
} else {
System.out.println("-- Not working: " + super.getProxy() + " | Test: " + test);
this.updateDB(false, test);
}
}
private void updateDB(boolean success, String testingType) {
switch(testingType) {
case "SSL":
DB.updateSsl(super.getProxyIp(), super.getProxyPort(), success);
break;
case "HTTP":
DB.updateHttp(super.getProxyIp(), super.getProxyPort(), success);
break;
case "SOCKS":
DB.updateSocks(super.getProxyIp(), super.getProxyPort(), success);
break;
default:
break;
}
}
}
DB class:
//Locker for async
private static Object locker = new Object();
private static void executeUpdateQuery(String query, String proxy, int port, boolean toSet) {
synchronized(locker) {
//Some prepared statements here.
}
}
Thanks to Peter Lawrey for guiding me to the solution! :)
His comment:
@ILoveKali I have found network libraries are not aggressive enough in
shutting down a connection when things go really wrong. Timeouts tend
to work best when the connection is fine. YMMV
So I did some research, and found that I had to also use the method setReadTimeout(this.timeout);. Previously, I was only using setConnectTimeout(this.timeout);!
Thanks to this post (HttpURLConnection timeout defaults) that explained the following:
Unfortunately, in my experience, it appears using these defaults can
lead to an unstable state, depending on what happens with your
connection to the server. If you use an HttpURLConnection and don't
explicitly set (at least read) timeouts, your connection can get into
a permanent stale state. By default. So always set setReadTimeout to
"something" or you might orphan connections (and possibly threads
depending on how your app runs).
So the final answer is: The GC was doing just fine, it was not responsible for the lag. The threads were simply stuck FOREVER at a single number because I did not set the read timeout, and so the isWorkingProxy() method never got a result and kept reading.
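For reference, here is a minimal sketch of the fix, assuming the same timeout value as in the snippets above; the class name and the testing URL are purely illustrative:

import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class ProxyProbeSketch {

    private static final int TIMEOUT_MS = 5000; // same value used for this.timeout above

    static boolean isWorkingProxy(String proxyIp, int proxyPort, Proxy.Type type) {
        try {
            URL url = new URL("http://example.com/"); // illustrative testing URL
            Proxy proxy = new Proxy(type, new InetSocketAddress(proxyIp, proxyPort));
            HttpURLConnection con = (HttpURLConnection) url.openConnection(proxy);
            con.setRequestMethod("GET");
            con.setConnectTimeout(TIMEOUT_MS); // bounds the connect/handshake phase
            con.setReadTimeout(TIMEOUT_MS);    // bounds each blocking read; without it a read can hang forever
            return con.getResponseCode() > 0;
        } catch (Exception e) {
            return false;
        }
    }
}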
This question already has an answer here: ExecutorService Future::get very slow.
I am trying to search a list of words and find the total count of all the words across multiple files.
My logic is to have separate threads for each file and get the count. Finally I can aggregate the total count got from each of the threads.
Say, I have 50 files each of 1MB. The performance does not improve when I am using multiple threads. My total execution time does not improve with FILE_THREAD_COUNT. I am getting almost the same execution time when my thread count is either 1 or 50.
Am I doing something wrong in using the executor service?
Here is my code.
public void searchText(List<File> filesInPath, Set<String> searchWords) {
try {
BlockingQueue<File> filesBlockingQueue = new ArrayBlockingQueue<>(filesInPath.size());
filesBlockingQueue.addAll(filesInPath);
ExecutorService executorService = Executors.newFixedThreadPool(FILE_THREAD_COUNT);
int totalWordCount = 0;
while (!filesBlockingQueue.isEmpty()) {
Callable<Integer> task = () -> {
int wordCount = 0;
try {
File file = filesBlockingQueue.take();
try (BufferedReader bufferedReader = new BufferedReader(new FileReader(file))) {
String currentLine;
while ((currentLine = bufferedReader.readLine()) != null) {
String[] words = currentLine.split("\\s+");
for (String word : words) {
for (String searchWord : searchWords) {
if (word.contains(searchWord)) {
wordCount++;
}
}
}
}
} catch (Exception e) {
// Handle error
}
} catch (Exception e) {
// Handle error
}
return wordCount;
};
totalWordCount += executorService.submit(task).get();
}
System.out.println("Final word count=" + totalWordCount);
executorService.shutdown();
} catch (Exception e) {
// Handle error
}
}
Yes, you're doing something wrong.
The problem is here:
executorService.submit(task).get()
Your code submits a task then waits for it to finish, which achieves nothing in parallel; the tasks run sequentially. And your BlockingQueue adds no value whatsoever.
The way to run tasks in parallel is to first submit all tasks, collect the Futures returned, then call get() on all of them. Like this:
List<Future<Integer>> futures = filesInPath.stream()
.map(<create your Callable>)
.map(executorService::submit)
.collect(toList());
for (Future<Integer> future : futures) {
totalWordCount += future.get();
}
You can actually do it in one stream expression, by collecting to the intermediate list (as above) and then immediately streaming it again, but you have to wrap the call to Future#get in code that catches the checked exception - I leave that as an exercise for the reader.
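For completeness, here is a minimal self-contained sketch of the submit-all-then-collect pattern described above; the pool size and the countInFile helper are illustrative, not the exact code from the question:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelWordCount {

    // Counts occurrences of any search word across the given files, in parallel.
    static int searchText(List<File> files, Set<String> searchWords) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8); // stand-in for FILE_THREAD_COUNT

        // 1) Submit every task first so they can all run concurrently...
        List<Future<Integer>> futures = new ArrayList<>();
        for (File file : files) {
            futures.add(pool.submit(() -> countInFile(file, searchWords)));
        }

        // 2) ...then block on the results.
        int total = 0;
        for (Future<Integer> f : futures) {
            total += f.get();
        }
        pool.shutdown();
        return total;
    }

    private static int countInFile(File file, Set<String> searchWords) {
        int count = 0;
        try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = reader.readLine()) != null) {
                for (String word : line.split("\\s+")) {
                    for (String searchWord : searchWords) {
                        if (word.contains(searchWord)) {
                            count++;
                        }
                    }
                }
            }
        } catch (Exception e) {
            // ignore in this sketch
        }
        return count;
    }
}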
I have a Runnable that watches for data to send out over UDP, as well as sending a keep-alive every 10 seconds. The process is taking 100% CPU. I tried setting the thread to low priority, but it didn't seem to make any difference.
private Runnable keepAliveRunnable = new Runnable() {
long nextSend = 0;
byte[] sendData;
@Override
public void run() {
if(DEBUG)
System.out.println("Starting keepAlive.");
while (socket != null) {
synchronized (socketLock) {
try {
sendData = sendQueue.poll();
if (sendData != null) {
socket.send(new DatagramPacket(sendData, sendData.length,
InetAddress.getByName(Main.ipAddress), 10024));
} else if (nextSend < System.currentTimeMillis()) {
if(DEBUG && nextSend < System.currentTimeMillis())
System.out.println("Update keepAlive.");
// Send /xremote
socket.send(new DatagramPacket(("/xremote").getBytes(),
("/xremote").getBytes().length,
InetAddress.getByName(Main.ipAddress), 10024));
nextSend = System.currentTimeMillis() + keepAliveTimeout;
// Send /info
socket.send(new DatagramPacket(("/info").getBytes(),
("/info").getBytes().length,
InetAddress.getByName(Main.ipAddress), 10024));
}
} catch (IOException e) {
e.printStackTrace();
if(!e.getMessage().contains("Socket closed")) {
e.printStackTrace();
}
}
}
}
System.out.println("keepAliveRunnable ended.");
}
};
Make sendQueue a LinkedBlockingQueue, and use poll timeouts.
You are busy waiting, which essentially forces your app to keep running the same logic over and over instead of giving the CPU back to the system.
Don't count on your own implementation of checking the time; that is unreliable and can result in exactly what you're seeing. Instead, use blockingQueue.poll(10, TimeUnit.SECONDS), which automatically handles returning the CPU to the system.
I made a few other changes to your code: I pulled the duplicated packet construction code into a separate method, and I narrowed the synchronization so the socket is only locked while it is actually being used. Notice how much cleaner it is when you let the queue do the work for you.
while(socket != null) {
try {
sendData = sendQueue.poll(10, TimeUnit.SECONDS);
if (sendData != null) {
sendPacket(sendData);
} else {
sendPacket("/xremote".getBytes());
sendPacket("/info".getBytes());
}
} catch (IOException e) {
e.printStackTrace();
if (!e.getMessage().contains("Socket closed")) {
e.printStackTrace();
}
}
}
And here's sendPacket:
private static void sendPacket(byte[] data) throws UnknownHostException, IOException {
// Note, you probably only have to do this once, rather than looking it up every time.
InetAddress address = InetAddress.getByName(Main.ipAddress);
DatagramPacket p = new DatagramPacket(data, data.length, address, 10024);
synchronized(socketLock) {
socket.send(p);
}
}
You should add a Thread.sleep() at the bottom of your while loop, to slow down your loop. As is, you're busy-waiting and churning the CPU while you wait for the nextSend time to be reached. Thread.sleep() will actually pause the thread, allowing other threads and processes to use the CPU while this one sleeps.
Sleeping for a 10th of a second (100 milliseconds) should be a good amount of time to sleep between iterations of your loop, if your goal is to actually do work every 10 seconds.
There are more advanced techniques for dispatching work every so often, like ScheduledExecutorService, which you could also consider using. But for a small application the pattern you're using is fine, just avoid busy waiting.
I think that rather than polling your sendQueue, it's better to use semaphore signal and wait.
When a packet is inserted into the sendQueue, call the semaphore's signal (release).
Use the semaphore's wait (acquire) instead of the call to sendQueue.poll().
I assume you have separate threads for pushing and popping data from the sendQueue.
This is standard consumer producer problem. https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem
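A rough sketch of that idea follows; the class and method names are made up for illustration, and in practice LinkedBlockingQueue.poll(timeout, unit), as suggested above, gives you the same behaviour in a single call:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class SemaphoreSendQueue {

    private final Queue<byte[]> sendQueue = new ConcurrentLinkedQueue<>();
    private final Semaphore available = new Semaphore(0); // one permit per queued packet

    // Producer side: enqueue a packet and signal.
    public void enqueue(byte[] packet) {
        sendQueue.add(packet);
        available.release();
    }

    // Consumer side: wait up to 10 seconds for a packet; null means "time to send a keep-alive".
    public byte[] nextPacket() throws InterruptedException {
        if (available.tryAcquire(10, TimeUnit.SECONDS)) {
            return sendQueue.poll();
        }
        return null;
    }
}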
After digging through my code, I realized that over time I had whittled down the number of processes sending data to 1 (duh), so I really didn't need the runnable; I could just send the data directly. I also set up a separate runnable and used a ScheduledExecutor. I thought I would just put that here for others to see. Durron597's code is a little prettier, but since I'm only sending two packets now I decided to just put the code together.
// In main
pingXAir();
private void pingXAir() {
System.out.println("Start keepAlive");
ScheduledExecutorService executorService = Executors.newScheduledThreadPool(1);
executorService.scheduleAtFixedRate(keepAliveRunnable, 0, 5, TimeUnit.SECONDS);
}
private Runnable keepAliveRunnable = new Runnable() {
@Override
public void run() {
synchronized (socketLock) {
try {
if (DEBUG)
System.out.println("Update keepAlive.");
// Send /xremote
socket.send(new DatagramPacket(("/xremote").getBytes(),
("/xremote").getBytes().length,
InetAddress.getByName(Main.ipAddress), 10024));
// Send /info
socket.send(new DatagramPacket(("/info").getBytes(),
("/info").getBytes().length,
InetAddress.getByName(Main.ipAddress), 10024));
} catch (IOException e) {
e.printStackTrace();
if (!e.getMessage().contains("Socket closed")) {
e.printStackTrace();
}
}
}
}
};
I have my multithreaded web server and now I wish to implement a thread pool; however, even after reading about it, I don't get how I can do it in my code :(
Could someone help me understand it better?
I really need to understand how what I read can be used here, because I don't see the connection and how it works.
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;
public class WebServer {
static class RequisicaoRunnable implements Runnable {
private Socket socket;
RequisicaoRunnable(Socket socket) {
this.socket = socket;
}
@Override
public void run() {
try {
//System.out.println("connection from " + socket.getInetAddress().getHostName());
BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
//System.out.println("READING SOCKET...");
String str = in.readLine();
String[] arr = str.split(" ");
if (arr != null && arr.length > 2) {
while(!str.equals("")) {
//System.out.println(str);
str = in.readLine();
}
if (arr[0].equals("GET")) {
//System.out.println("REQUESTED RESOURCE: " + arr[1]);
String nomeArquivo = arr[1];
if (arr[1].startsWith("/")) {
nomeArquivo = nomeArquivo.substring(1);
}
if (nomeArquivo.equals("")) {
nomeArquivo = "index.html";
}
File f = new File(nomeArquivo);
if (f.exists()) {
FileInputStream fin = new FileInputStream(f);
socket.getOutputStream().write("HTTP/1.0 200 OK\n\n".getBytes());
byte[] buffer = new byte[1024];
int lidos;
do {
lidos = fin.read(buffer);
if (lidos > 0) {
socket.getOutputStream().write(buffer, 0, lidos);
}
} while (lidos > 0);
fin.close();
} else {
socket.getOutputStream().write("HTTP/1.0 404 Not Found\n\n".getBytes());
socket.getOutputStream().write("<html><body>HTTP/1.0 404 File Not Found</body></html>\n\n".getBytes());
}
} else {
socket.getOutputStream().write("HTTP/1.0 501 Not Implemented\n\n".getBytes());
}
}
socket.close();
} catch (IOException e) { }
}
}
public static void main(String[] args) throws IOException {
ServerSocket serverSocket = new ServerSocket(8080);
System.out.println("waiting connections....");
while (true) {
Socket socket = serverSocket.accept();
RequisicaoRunnable req = new RequisicaoRunnable(socket);
new Thread(req).start();
}
}
}
The idea behind a thread pool is to create a specified number of threads at start-up and then assign tasks to them, removing the headache of creating a new thread each time.
I implemented one a little while ago; here is what I did.
Create some threads at start; they share a request queue.
The threads constantly watch the queue; when a request comes in, one of them dispatches it and performs the action.
Access to the queue is synchronized.
Here are some queue methods
Queue#add(); //add the socket at the end
Queue#removeFront();//remove socket
Queue#isEmpty();//boolean if queue is empty
Queue#size(); //return size of queue
Queue#getMaxSize();//get maximum allowed size for queue
Your Request processing runnable
public class Processor implements Runnable {
private Queue<Socket> requests;
private boolean shut;
Processor(Queue<Socket> requests) {
this.requests = requests;
shut = false;
}
@Override
public void run() {
while(!shut) {
if(requests.isEmpty()) {
try{
Thread.sleep(100); // small back-off while the queue is empty
} catch(InterruptedException e){}
}else {
Socket skt = requests.removeFront();
try {
//System.out.println("processing request from " + socket.getInetAddress().getHostName());
//do you want
} catch (Exception e) {
} finally {
if(skt != null) {
try{ skt.close(); skt = null; } catch(IOException ex){}
}
}
}
}
}
public void stopNow() {
shut = true;
// also interrupt the worker's Thread here if you keep a reference to it
}
}
in your main thread
create a queue to put requests
//start your server socket
Queue<Socket> requests = new Queue<Socket>();
Start worker thread pool
Processor[] worker = new Processor[NUM_WORKER];
for(int i=0;i<NUM_WORKER; i++) {
worker[i] = new Processor(requests);
Thread th = new Thread(worker[i]);
th.start();
}
in request listening
//while loop that runs forever
// accept socket
if(requests.size() == requests.getMaxSize()) {
socket.getOutputStream().write("HTTP/1.0 505 Error\n\n".getBytes());
socket.getOutputStream().write("<html><body>Try again</body></html>\n\n".getBytes());
socket.close();
} else {
requests.add(socket);
}
when you want to shut down the server
for(int i=0;i<NUM_WORKER; i++) {
worker[i].stopNow();
}
Note: My concern here was not the HTTP headers, so I am not being specific about them, but you must implement the complete HTTP headers, e.g. Content-Type, Content-Length, etc.
The JDK might be a good place to start. An Executor or ExecutorService is what you're looking for. Reading material:
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ExecutorService.html
The examples in there are pretty complete I think, but here's an example using the code you posted:
public static void main(String[] args) throws IOException {
ServerSocket serverSocket = new ServerSocket(8080);
System.out.println("waiting connections....");
ExecutorService pool = Executors.newCachedThreadPool();
while (true) {
Socket socket = serverSocket.accept();
RequisicaoRunnable req = new RequisicaoRunnable(socket);
pool.execute(req);
}
}
We create an executor service that is backed by a cached thread pool. You can swap this out for any type of pool you like by changing the type of executor service you get from Executors:
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executors.html
In the example I've given we use a cached thread pool, which should create new threads as needed but reuse old ones as they become available (i.e. as they finish whatever they were executing). If you look through the methods provided in that class, you can create Executor services that are backed by various types of thread pool, e.g. single thread, fixed number of threads, etc.
The example above should work as is, but if you want to change how the thread pool works try another thread pool type.
The cached thread pool means each connection will be serviced immediately; however, it can create an unbounded number of threads.
On the other hand, if you wanted the executor to use a blocking queue as suggested by fge, you could try a fixed thread pool instead:
Executors.newFixedThreadPool(x)
You get the blocking queue for free with that.
You can use, for instance, a BlockingQueue. This is the basis for a producer/consumer scenario.
In your case:
the producer holds the server socket; it accepts new client sockets and pushes the client sockets onto the queue;
the consumers grab client sockets from the queue and process requests.
On top of all that, you can also use a bounded queue; you can try and push a new client socket to the queue; if the queue is full you can then default to a "no can't do" consumer.
Scenarios are many. There is not one answer.
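For illustration, a minimal sketch of the bounded-queue variant built on the standard ThreadPoolExecutor; the pool size, queue capacity, and the 503 response are assumptions, and RequisicaoRunnable is the Runnable from the question:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolServer {
    public static void main(String[] args) throws IOException {
        // 10 worker threads, at most 100 queued connections.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 10, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));

        try (ServerSocket serverSocket = new ServerSocket(8080)) {
            while (true) {
                Socket socket = serverSocket.accept();
                try {
                    pool.execute(new WebServer.RequisicaoRunnable(socket)); // the Runnable from the question
                } catch (RejectedExecutionException e) {
                    // Queue full: the "no can't do" consumer.
                    socket.getOutputStream().write(
                            "HTTP/1.0 503 Service Unavailable\r\n\r\n".getBytes());
                    socket.close();
                }
            }
        }
    }
}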
OK, the idea is simple enough. Your main loop currently creates a new RequisicaoRunnable object and a new Thread to run it each time it gets a connection from a client. The idea behind a thread pool is to avoid creating new Threads each time.
In the simplest version of a thread pool, you create a blocking queue, and you create and start a fixed number of worker threads before you enter your main loop. The main loop will look almost exactly the same as what you have now, but instead of starting a Thread to run each new RequisicaoRunnable, it will simply add the new object to the queue.
Your worker threads are all the same:
while (! shutdownHasBeenRequested()) {
RequisicaoRunnable requisicaoRunnable = getATaskFromTheQueue();
requisicaoRunnable.run();
}
That way, each new task (client) will be executed (handled) by the next available thread from your pool.
If this is a homework assignment then you'll pretty much want to implement what I described, filling in some details as needed.
If it's not homework, then consider using a java.util.concurrent.ThreadPoolExecutor instead. No point in re-inventing the wheel when there's a perfectly good wheel right there waiting to be used.
Edit: as fge said, one improvement would be to send back a quick "sorry, try again later" response when new connections are coming in faster than you can handle them. When the queue has too many pending connections in it (i.e., when you hit the limit of a bounded queue), that's when you know to bail out and send the "try again later" response.
I have a Java game server that uses 1 thread per TCP connection. (I know it's bad, but I'll have to keep it this way for now.) It runs on a 3.2 GHz, 2 x 6-core machine with 24 GB RAM and Windows Server 2003 64-bit. Here is a piece of the code:
public void run()
{
try
{
String packet = "";
char charCur[] = new char[1];
while(_in.read(charCur, 0, 1)!=-1 && Server.isRunning)
{
if (charCur[0] != '\u0000' && charCur[0] != '\n' && charCur[0] != '\r')
{
packet += charCur[0];
}else if(!packet.isEmpty())
{
parsePlayerPacket(packet);
packet = "";
}
}
}catch(Exception e)
{
e.printStackTrace();
}
finally
{
try{
kickPlayer();
}catch(Exception e){e.printStackTrace();};
Server.removeIp(_ip);
}
}
After about 12 hours or more of server uptime (and about 3.000 players connected), the server starts eating 100% of all 12 CPUs forever, until I manually reboot the Java application. So the game starts lagging very badly and my players start complaining.
I have tried profiling the application and here is what I came up with:
So I am guessing that the problem is coming from here:
while(_in.read(charCur, 0, 1)!=-1 && Server.isRunning)
knowing that the variable "_in" is a reader of the socket input : (_in = new BufferedReader(new InputStreamReader(_socket.getInputStream()))).
Why on earth does _in.read() take so much CPU after a long server uptime?
I have tried putting a Thread.sleep(1); (and longer) inside the while loop, but it doesn't do anything. I guess the problem is inside the BufferedReader.read() method.
Does anyone have any idea of what can cause this?? And how to fix it?
This is a duplicate of your previous question: An infinite loop somewhere in my code. Please do not open up a new question, but instead use the editing functions.
That being said, 3000 threads is definitely a lot and would most likely cause excessive amounts of context switching. Instead of starting a new thread for each connection, consider using non-blocking IO facilities in Java. Examples can be found here: http://download.oracle.com/javase/1.4.2/docs/guide/nio/example/index.html
I don't know why the call is slow but I would never read one byte at a time in a tight loop. Who knows what kind of overhead the internal function has.
I would read all the data that is available currently in the stream and parse that.
This would require a buffer and some extra bookkeeping, but it would be much faster than reading byte by byte from a stream.
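As a rough sketch of that approach (it reuses _in, Server.isRunning, and parsePlayerPacket from the question, so it is a fragment rather than a drop-in class), the loop could read whatever is available into a buffer per call instead of one char at a time:

// Replaces the one-char-at-a-time loop: read as much as is available per call.
char[] buffer = new char[4096];
StringBuilder packet = new StringBuilder();
int read;
while ((read = _in.read(buffer, 0, buffer.length)) != -1 && Server.isRunning) {
    for (int i = 0; i < read; i++) {
        char c = buffer[i];
        if (c != '\u0000' && c != '\n' && c != '\r') {
            packet.append(c);
        } else if (packet.length() > 0) {
            parsePlayerPacket(packet.toString());
            packet.setLength(0);
        }
    }
}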
'1 thread per TCP connection'
'about 3.000 players connected'
= 3.000 threads?!
My guess: the maximum number of threads that can repeatedly copy one byte at a time is around 3.000. That doesn't sound so weird.
Solution: less threads and read more bytes in one go.
You could use an ExecutorService. There is a simplistic example in the javadoc: http://download.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorService.html
It doesn't look like you ever close the BufferedReader either, unless you are attempting it in the kickPlayer() method.
Each reader may be living a lot longer than you realise.
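A sketch of what closing it could look like with try-with-resources, reusing the fields and methods from the question (_socket, _ip, kickPlayer, Server.isRunning are assumed from that code), with the parsing itself elided:

@Override
public void run() {
    // try-with-resources guarantees the reader (and the socket's input stream)
    // is closed even if parsing throws, so readers do not outlive the connection.
    try (BufferedReader in = new BufferedReader(new InputStreamReader(_socket.getInputStream()))) {
        char[] charCur = new char[1];
        while (in.read(charCur, 0, 1) != -1 && Server.isRunning) {
            // ... same parsing as in the question ...
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            kickPlayer();
        } catch (Exception e) {
            e.printStackTrace();
        }
        Server.removeIp(_ip);
    }
}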
I'm also stuck on this same problem. I have tried many solutions but had no luck with read(byte). But when I tried readLine(), it works well. @Reacen, did you find any other answer? Please let me know too.
public void run() {
try {
InputStream input = clientSocket.getInputStream();
BufferedReader bf = new BufferedReader(new InputStreamReader(input));
while (isRunning) {
if (mainServer.isStopped()) {
disconnect();
}
if (clientSocket.isClosed()) {
isRunning = false;
break;
}
// New Code Receive commands from device
String result = null;
try {
result = bf.readLine();
if (result == null) {
disconnect();
} else {
Pattern pattern = Pattern.compile("(?<=\\[).*(?=\\])");
Matcher matcher = pattern.matcher(result);
if (matcher.find()) {
result = matcher.group(0);
}
}
} catch (SocketTimeoutException e) {
logger.debug("Socket Read Timeout: " + remoteAddress);
} catch (SocketException e) {
isRunning = false;
break;
}
if (result == null || result.trim().length() == 0) {
continue;
}