I have tested a socket connection program designed so that the socket connection runs in one thread by itself and enqueues incoming messages, while a separate DatabaseProcessor thread picks them off the queue and runs a number of SQL statements. What I notice is that the bottleneck is the DB processing. I would like some feedback: is this the right architecture, or should I change or improve my design?
The requirement is to capture data via socket connections, run it through a DB process, and store it accordingly.
public class cServer
{
    private final LinkedBlockingQueue<String> databaseQueue = new LinkedBlockingQueue<String>();

    class ConnectionHandler implements Runnable {
        private final Socket receivedSocketConn1;

        ConnectionHandler(Socket receivedSocketConn1) {
            this.receivedSocketConn1 = receivedSocketConn1;
        }

        // gets data from an inbound connection and queues it for database update
        public void run() {
            // ... read the message from receivedSocketConn1 (elided) ...
            databaseQueue.add(message); // put to db queue
        }
    }

    class DatabaseProcessor implements Runnable {
        private Connection dbconn;
        private Date connCreated;

        public void run() {
            // open database connection
            createConnection();
            while (true) {
                try {
                    // keep taking messages added to the queue by ConnectionHandler;
                    // for each one I run a number of selects, inserts and updates
                    String message = databaseQueue.take();
                    // ... run the queries for this message (elided) ...
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }

        void createConnection() {
            System.out.println("Create Connection");
            connCreated = new Date();
            try {
                dbconn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test1?" + "user=user1&password=*******");
                dbconn.setAutoCommit(false);
            }
            catch (Throwable ex) {
                ex.printStackTrace(System.out);
            }
        }
    }

    public static void main(String[] args)
    {
        cServer server = new cServer();
        new Thread(server.new DatabaseProcessor()).start(); // starts the DatabaseProcessor
        try
        {
            final ServerSocket serverSocketConn = new ServerSocket(8000);
            while (true) {
                try {
                    Socket socketConn1 = serverSocketConn.accept();
                    new Thread(server.new ConnectionHandler(socketConn1)).start();
                }
                catch (Exception e) {
                    e.printStackTrace(System.out);
                }
            }
        }
        catch (Exception e) {
            e.printStackTrace(System.out);
        }
    }
}
It's hard (read: impossible) to judge an architecture without the requirements, so I will just make some up:
Maximum throughput:
Don't use a database; write to a flat file, possibly stored on something fast like a solid-state disk.
Guaranteed persistence (if the user gets an answer that isn't an error, the data must be stored securely):
Make the whole thing single-threaded and save everything in a database with redundant disks. Make sure you have a competent DBA who knows about backup and recovery, and test those at regular intervals.
Minimum time for finishing the user request:
Your approach seems reasonable.
Minimum time for finishing the user request + maximum throughput + good persistence (whatever that means):
Your approach seems good. You might plan for multiple threads processing the DB requests, but test how much (more) throughput you actually get and where precisely the bottleneck is (network, DB CPU, IO, lock contention, ...). Make sure you don't introduce bugs with the concurrent approach.
Generally, your architecture sounds correct. You need to make sure your two threads are synchronized correctly when reading/writing from/to the queue.
I am not sure what you mean by "bottleneck that the db processing". If DB processing takes a long time and you end up with a long queue, there is not much you can do apart from having multiple threads perform the DB processing (assuming the processing can be parallelised, of course) or doing some performance tuning in the DB thread.
If you post the specific code that you believe is causing the problem, we can have another look.
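To illustrate the "multiple threads performing the DB processing" suggestion, here is a minimal sketch of several consumers draining the same LinkedBlockingQueue. The class and method names are invented for illustration, and the actual SQL work is replaced by a counter so the example runs without a database:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class MultiConsumerSketch {
    static final BlockingQueue<String> databaseQueue = new LinkedBlockingQueue<>();
    static final AtomicInteger processed = new AtomicInteger();

    public static int drainWith(int workers, int messages) {
        for (int i = 0; i < messages; i++) databaseQueue.add("msg-" + i);
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.execute(() -> {
                String msg;
                try {
                    // poll with a timeout so the workers stop once the queue is empty
                    while ((msg = databaseQueue.poll(100, TimeUnit.MILLISECONDS)) != null) {
                        processed.incrementAndGet(); // stand-in for the SQL work
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }
}
```

Because each message is taken by exactly one worker, this only helps when individual messages can be processed independently, which matches the "assuming the processing can be parallelised" caveat above.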
You don't need two threads for this simple task. Just read the socket and execute the statements.
I have a question for you.
I have multiple threads running instances of a class called ServerThread. When a specific event happens on ANY of those threads, I want to call a method on every other thread running in parallel.
public class ServerThread implements Runnable {
private TCPsocket clientSocket;
public ServerThread(Socket comSocket){
clientSocket = new TCPsocket(comSocket);
}
@Override
public void run(){
boolean waiting = true;
Message msg;
try{
while(waiting){
msg = clientSocket.getMessage();
shareMessage(msg);
}
}catch(Exception e){
ErrorLogger.toFile("EndConnection", e.toString());
}
}
public void shareMessage(Message msg){
clientSocket.sendMessage(msg);
}
}
I am talking about this specific line:
shareMessage(msg);
which I would like to be called on every thread/instance,
so that a message is sent to every client (on all TCP connections).
I've tried synchronized, but either I'm not using it well or it is not what I'm looking for.
Another thing that might work is a class with a static member holding a list of those TCP connection objects, and then looping over all of them every time.
Thanks for your help and time.
Edited with one possible solution:
*Add a static list as a member of the class, and add/remove instances of the same class (TCP sockets would also work):
private static ArrayList<ServerThread> handler;
...
handler.add(this);
...
handler.remove(this); // when the client exits and the thread stops
*Then create a method that iterates over each connection, and make it synchronized so that two threads won't interfere with each other. You may want to make your message-sending methods synchronized as well.
public void shareMessage(Message msg){
//this.clientSocket.sendMessage(msg);
synchronized (handler){
for(ServerThread connection: handler){
try{
connection.clientSocket.sendMessage(msg);
} catch(Exception e){
connection.clientSocket.closeConnection();
}
}
}
}
First: synchronized is required to prevent race conditions when multiple threads want to call the same method and that method accesses/modifies shared data. So you will probably need it somewhere, but by itself it does not provide the functionality you require.
Second: You cannot command another thread to call a method directly. It is not possible, e.g., for ThreadA to call methodX in ThreadB.
I guess you have one thread per client. Probably each thread will block at clientSocket.getMessage() until the client sends a message. I don't know the implementation of TCPsocket, but maybe it is possible to interrupt the thread. In that case you may need to catch an InterruptedException, ask some central data structure whether the interrupt was caused by a new shared message, and if so, return the shared message.
Maybe it is also possible for TCPsocket.getMessage() to return if no message was received for some time, in which case you would again have to ask a central data structure whether there is a new shared message.
Maybe it is also possible to store all client connections in such a data structure and loop over them every time, as you suggested. But keep in mind that a client might send a message at any time, maybe even at the exact moment you try to send it the shared message received from another client. This might be no problem, but that depends on your application. Also consider that the message will be shared with the client that sent it to your server in the first place…
Also take a look at java.util.concurrent and its subpackages, it is likely you find something useful there… ;-)
To summarize: There are many possibilities. Which one is the best depends on what you need. Please add some more detail to your question if you need more specific help.
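One concrete version of the "central data structure" idea, sketched with an invented MessageSink interface so it runs without sockets: every handler registers itself in a shared registry, and a shared message is delivered by looping over that registry. A CopyOnWriteArrayList makes iteration safe against concurrent add/remove without an explicit synchronized block:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class BroadcastSketch {
    // stand-in for "something that can send a message to one client"
    interface MessageSink { void send(String msg); }

    private static final List<MessageSink> handlers = new CopyOnWriteArrayList<>();

    public static void register(MessageSink sink)   { handlers.add(sink); }
    public static void unregister(MessageSink sink) { handlers.remove(sink); }

    // deliver the message to every registered handler; iteration over a
    // CopyOnWriteArrayList sees a consistent snapshot of the registry
    public static void shareMessage(String msg) {
        for (MessageSink sink : handlers) {
            sink.send(msg);
        }
    }
}
```

In the real ServerThread code, `this` (or its TCPsocket) would be registered in run() and unregistered when the connection closes, much like the handler list in the edited solution above.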
I made a Bukkit plugin that uses MySQL, and I need to know why I am getting lag whenever this code runs. I run the game server on my own system and the MySQL server with HostGator. Here's my code:
openConnection();
try{
int level1 = 0;
if(playerDataContainsPlayer(p)){
PreparedStatement sql = connection.prepareStatement("SELECT level FROM `player_data` WHERE player=?;");
sql.setString(1, p.getName());
ResultSet result = sql.executeQuery();
result.next();
level1 = result.getInt("level");
PreparedStatement levelUpdate = connection.prepareStatement("UPDATE `player_data` SET level=? WHERE player=?;");
levelUpdate.setInt(1, level1+1);
levelUpdate.setString(2, p.getName());
levelUpdate.executeUpdate();
result.close();
sql.close();
levelUpdate.close();
}else{
PreparedStatement newPlayer = connection.prepareStatement("INSERT INTO `player_data` values(?,0,1,0);");
newPlayer.setString(1, p.getName());
newPlayer.execute();
newPlayer.close();
}
}catch(Exception e1){
e1.printStackTrace();
}finally{
closeConnection();
}
Here is my openConnection method:
public synchronized static void openConnection(){
try{
connection = DriverManager.getConnection(""); // URL left blank on purpose; I don't want to give that info out
}catch(Exception e){
e.printStackTrace();
}
}
Here's my closeConnection:
public synchronized static void closeConnection(){
try{
connection.close();
}catch(Exception e){
e.printStackTrace();
}
}
There are a few things you can do to reduce your query latency:
If your app is query intensive use persistent connections and keep them open instead of opening a new connection every time you need to access the database.
Run the MySQL server locally to speed up connection times.
Index the search fields of your tables (e.g. player on player_data) to have the search run faster.
Run the MySQL server on a powerful, dedicated machine with SSD drives and lots of RAM, and set the proper parameters on my.cnf (worker threads, max processes, max number of connections, memory limit, buffer sizes) to make use of that RAM and processing power and speed up search and processing times. Things like this question and answers may help you with the memory settings, but the best you can do is your own, exhaustive, online research and testing. Do your homework!
Use some kind of caching system to speed up reading (like memcached).
If your app is data intensive and has to support a huge number of connections, get a higher bandwidth or even consider setting up a cluster to balance the load.
Reduce the number of queries! You don't need to query the database twice to increase the level!
Try:
if (playerDataContainsPlayer(p)){
    PreparedStatement levelUpdate = connection.prepareStatement(
        "UPDATE player_data SET level=level+1 WHERE player=?;"
    );
    levelUpdate.setString(1, p.getName());
    levelUpdate.executeUpdate();
    levelUpdate.close();
} else {
    ...
}
Sounds like you are running your query on the main server thread. You really shouldn't do this, especially if your SQL server isn't on the local machine.
Have a read of the Bukkit scheduler tutorial on how to run CPU-intensive or long-running tasks in the background to avoid this type of performance loss.
What you need to do is put your code into a BukkitRunnable:
public class ExampleTask extends BukkitRunnable {
private final JavaPlugin plugin;
public ExampleTask(JavaPlugin plugin) {
this.plugin = plugin;
}
@Override
public void run() {
// Put your task's code here
}
}
And then, to run the code in its own thread, leaving the main server thread to take care of the game uninterrupted, schedule your task asynchronously (note: runTask() would still run it on the main thread; runTaskAsynchronously() is what moves it off):
BukkitTask task = new ExampleTask(this.plugin).runTaskAsynchronously(this.plugin);
This should avoid the lag you mention. Just be careful about concurrency issues and note that Bukkit's docs specify that no Bukkit API interactions should happen inside asynchronous tasks. So just perform your query and any validation/parsing in the Task and pass the results back to the server thread for use in-game if needed.
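The "pass the results back to the server thread" step can be sketched without Bukkit at all. In this stand-alone sketch a single-threaded executor plays the role of the main server thread, and the JDBC work is replaced by a string so it runs anywhere; in a real plugin the hand-back call would be Bukkit.getScheduler().runTask(plugin, ...) instead of mainThread.execute:

```java
import java.util.concurrent.*;

public class AsyncQuerySketch {
    // daemon threads so the JVM can exit when the demo finishes
    static final ExecutorService mainThread = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r); t.setDaemon(true); return t;
    });
    static final ExecutorService background = Executors.newCachedThreadPool(r -> {
        Thread t = new Thread(r); t.setDaemon(true); return t;
    });

    public static Future<String> fetchLevelAsync(String player) {
        CompletableFuture<String> handedBack = new CompletableFuture<>();
        background.execute(() -> {
            // stand-in for the slow JDBC SELECT/UPDATE work
            String result = player + ":level=2";
            // hand the parsed result back to the "main" thread for in-game use
            mainThread.execute(() -> handedBack.complete(result));
        });
        return handedBack;
    }

    // convenience wrapper so callers don't deal with checked exceptions
    public static String fetchLevelBlocking(String player) {
        try {
            return fetchLevelAsync(player).get(5, TimeUnit.SECONDS);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```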
I have a main thread that runs periodically. It opens a connection with setAutoCommit(false) and passes it by reference to a few child threads, which perform various database read/write operations. A reasonably large number of operations are performed in the child threads. After all the child threads have completed their DB operations, the main thread commits the transaction on the opened connection. Note that I run the threads inside an ExecutorService. My question: is it advisable to share a connection across threads? If "yes", check whether the code below implements it correctly. If "no", what are the other ways to perform a transaction in a multi-threaded scenario? Comments/advice/new ideas are welcome. Pseudo code...
Connection con = getPrimaryDatabaseConnection();
// let me decide whether to commit or rollback
con.setAutoCommit(false);
ExecutorService executorService = getExecutor();
// the connection is passed as a param to each job's constructor/set-method;
// the jobs use the provided connection to do their db operations
Callable jobs[] = getJobs(con);
List futures = new ArrayList();
// note: generics are omitted just to keep this simple
for (Callable job : jobs) {
    futures.add(executorService.submit(job));
}
executorService.shutdown();
// wait till the jobs complete (avoid busy-waiting on isTerminated())
executorService.awaitTermination(1, TimeUnit.HOURS);
List results = new ArrayList();
try {
    for (Future future : futures) {
        results.add(future.get());
    }
    // some other checks
    con.commit();
    return results;
} catch (InterruptedException | ExecutionException e) {
    // a job has failed; roll back the transaction and rethrow
    con.rollback();
    throw new SomeException(e);
} finally {
    try {
        con.close();
    } catch (Exception e) { /* nothing to do */ }
}
I wouldn't recommend sharing a connection between threads, as operations on a connection are quite slow and the overall performance of your application may suffer.
I would rather suggest using an Apache connection pool (e.g. Commons DBCP) and providing a separate connection to each thread.
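To show the pooling idea in miniature, here is a toy sketch: a fixed set of pre-created resources handed out and returned through a BlockingQueue. Plain strings stand in for JDBC connections, and the class name is invented; a real application would use a library such as Apache Commons DBCP or HikariCP rather than rolling its own pool:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PoolSketch {
    private final BlockingQueue<String> pool;

    public PoolSketch(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) pool.add("conn-" + i); // pre-create "connections"
    }

    // hand out a resource, or null if none is free (a real pool would block or grow)
    public String borrow() { return pool.poll(); }

    // return the resource so another thread can use it
    public void release(String conn) { pool.add(conn); }

    public int available() { return pool.size(); }
}
```

Each worker thread borrows its own "connection", uses it, and releases it, so no two threads ever touch the same connection at the same time.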
You could create a proxy class that holds the JDBC connection and gives synchronized access to it. The threads should never access the connection directly.
Depending on the operations you provide, you could use synchronized methods, or lock on objects if the proxy needs to stay locked until it leaves a certain state.
For those not familiar with the proxy design pattern, here is the wiki article. The basic idea is that the proxy instance hides another object but offers the same functionality.
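A minimal sketch of that proxy idea, with an invented FakeConnection type so it runs without JDBC: threads call the proxy, never the underlying connection, and the synchronized methods serialize access:

```java
public class ConnectionProxy {
    // stand-in for a java.sql.Connection
    static class FakeConnection {
        private int statementsRun = 0;
        void execute(String sql) { statementsRun++; }
        int statementsRun() { return statementsRun; }
    }

    private final FakeConnection target = new FakeConnection();

    // synchronized: only one thread at a time reaches the real connection
    public synchronized void execute(String sql) { target.execute(sql); }
    public synchronized int statementsRun() { return target.statementsRun(); }

    // demo: several threads hammer the proxy; no updates are lost
    public static int demo(int threads, int perThread) {
        ConnectionProxy proxy = new ConnectionProxy();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) proxy.execute("UPDATE ...");
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return proxy.statementsRun();
    }
}
```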
In this case, consider creating a separate connection for each worker. If any one worker fails, roll back all the connections. If all pass, commit all connections.
If you're going to have hundreds of workers, then you'll need to provide synchronized access to the Connection objects, or use a connection pool as @mike and @NKukhar suggested.
I have the start of a very basic multi-threaded web server; it can receive all GET requests as long as they come one at a time.
However, when multiple GET requests come in at the same time, sometimes they are all received, and other times some are missing.
I tested this by creating an HTML page with multiple image tags pointing to my web server and opening the page in Firefox. I always use Shift+Refresh.
Here is my code; I must be doing something fundamentally wrong.
public final class WebServer
{
public static void main(String argv[]) throws Exception
{
int port = 6789;
ServerSocket serverSocket = null;
try
{
serverSocket = new ServerSocket(port);
}
catch(IOException e)
{
System.err.println("Could not listen on port: " + port);
System.exit(1);
}
while(true)
{
try
{
Socket clientSocket = serverSocket.accept();
new Thread(new ServerThread(clientSocket)).start();
}
catch(IOException e)
{
}
}
}
}
public class ServerThread implements Runnable
{
static Socket clientSocket = null;
public ServerThread(Socket clientSocket)
{
this.clientSocket = clientSocket;
}
public void run()
{
String headerline = null;
DataOutputStream out = null;
BufferedReader in = null;
int i;
try
{
out = new DataOutputStream(clientSocket.getOutputStream());
in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
while((headerline = in.readLine()).length() != 0)
{
System.out.println(headerline);
}
}
catch(Exception e)
{
}
}
First, @skaffman's comment is spot on. You should not catch-and-ignore exceptions like your code is currently doing. In general, it is a terrible practice. In this case, you could well be throwing away the evidence that would tell you what the real problem is.
Second, I think you might be suffering from a misapprehension of what a server is capable of. No matter how you implement it, a server can only handle a certain number of requests per second. If you throw more requests at it than that, some have to be dropped.
What I suspect is happening is that you are sending too many requests in a short period of time, and overwhelming the operating system's request buffer.
When your code binds to a server socket, the operating system sets up a request queue to hold incoming requests on the bound IP address/port. This queue has a finite size, and if the queue is full when a new request comes, the operating system will drop requests. This means that if your application is not able to accept requests fast enough, some will be dropped.
What can you do about it?
There is an overload of ServerSocket.bind(...) that allows you to specify the backlog of requests to be held in the OS-level queue. You could use this ... or use a larger backlog.
You could change your main loop to pull requests from the queue faster. One issue with your current code is that you are creating a new Thread for each request. Thread creation is expensive, and you can reduce the cost by using a thread pool to recycle threads used for previous requests.
CAVEATS
You need to be a bit careful. It is highly likely that you can modify your application to accept (not drop) more requests in the short term. But in the long term, you should only accept requests as fast as you can actually process them. If it accepts them faster than you can process them, a number of bad things can happen:
You will use a lot of memory with all of the threads trying to process requests. This will increase CPU overheads in various ways.
You may increase contention for internal Java data structures, databases and so on, tending to reduce throughput.
You will increase the time taken to process and reply to individual GET requests. If the delay is too long, the client may timeout the request ... and send it again. If this happens, the work done by the server will be wasted.
To defend yourself against this, it is actually best to NOT eagerly accept as many requests as you can. Instead, use a bounded thread pool, and tune the pool size (etc) to optimize the throughput rate while keeping the time to process individual requests within reasonable limits.
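The "bounded thread pool" advice can be sketched with a ThreadPoolExecutor: a fixed number of workers, a bounded request queue, and CallerRunsPolicy so the accept loop slows down instead of dropping requests or buffering them without limit. All the sizes below are illustrative, and the request handling is replaced by a counter so the example runs anywhere:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPoolSketch {
    public static int handle(int requests) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4,                          // fixed pool of 4 worker threads
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(16),  // bounded request queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure: the
                // submitting thread runs the task itself when the queue is full

        AtomicInteger served = new AtomicInteger();
        for (int i = 0; i < requests; i++) {
            pool.execute(served::incrementAndGet); // stand-in for request handling
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return served.get();
    }
}
```

In the web server, `pool.execute(...)` would replace `new Thread(new ServerThread(clientSocket)).start()` in the accept loop, and the pool and queue sizes would be tuned against measured throughput as described above.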
I actually discovered the problem was this:
static Socket clientSocket = null;
Once I removed the static, it works perfectly now.
Hi, I have a webapp, and in one method I need to encrypt part of the data from the request, store it on disk, and return the response.
The response is in no way related to the encryption.
The encryption is quite time-demanding, however. How do I set up the threads properly for this problem?
I tried something like
Thread thread ...
thread.start();
or
JobDetail job = encryptionScheduler.getJobDetail(jobDetail.getName(), jobDetail.getGroup());
encryptionScheduler.scheduleJob(jobDetail,TriggerUtils.makeImmediateTrigger("encryptionTrigger",1,1)
I tried a servlet where I close the output stream before the encryption.
or: Executors.newFixedThreadPool(1);
But whatever I tried, the client has to wait longer.
By the way: why is that so? Can it be faster?
I haven't tried starting a thread after context initialization and somehow handing it the method that needs encryption.
How can I speed this up?
Thank you.
EDIT:
// I use Axis 1.4, where I have a Handler whose invoke method encrypts a value:
try {
    try {
        LogFile logFile = new LogFile(strategy, nodeValue, path, new Date());
        LogQueue.queue.add(logFile);
    }
    catch (Exception e) {
        log.error(e.getMessage(), e);
    }
    EExecutor.executorService.execute(new Runnable() {
        public void run() {
            try {
                LogFile poll = LogQueue.queue.poll();
                String strategy = poll.getStrategy();
                String value = poll.getNodeValue();
                value = encrypt(strategy, value);
                PrintWriter writer = new PrintWriter(new OutputStreamWriter(
                        new BufferedOutputStream(new FileOutputStream(poll.getPath(), true)), "UTF-8"));
                writer.print(value);
                writer.close();
            } catch (IOException e) {
                log.error(e.getMessage(), e);
            }
        }
    });
} catch (Throwable e) {
    log.error(e.getMessage(), e);
}
// Besides, I have an executor service:
public class EExecutor { public static ExecutorService executorService = Executors.newCachedThreadPool(); }
// And what's really interesting: when I move the encryption away from this handler into another handler that is called last, when I send the response, it's faster. But when I leave it in one of the first handlers, where I receive the request, it's slower, even without using threads/servlets etc.
Threads only help you if parts of your task can be done in parallel. It sounds like you're waiting for the encryption to finish before returning the result. If it's necessary for you to do that (e.g., because the encrypted data is the result), then doing the encryption on a separate thread won't help you here: all it will do is introduce the overhead of creating and switching to a different thread.
Edit: If you're starting a new thread for each encryption you do, then that might be part of your problem. Creating new threads is relatively expensive. A better way is to use an ExecutorService with an unbounded queue. If you don't care about the order in which the encryption steps happen (i.e., if it's ok that the encryption started for a request at time t finishes later than one started at time t', with t < t'), then you can let the ExecutorService have more than a single thread. That gives you greater concurrency and saves you the overhead of recreating threads all the time, since an ExecutorService pools and reuses threads.
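A small sketch of that advice: one shared pool created at startup, encryption jobs submitted to it, and the request thread returning immediately. The class name is invented, and the encrypt-and-write-to-disk step is replaced by a counter so the example runs anywhere:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class OffloadSketch {
    // one pool for the whole app, created once (daemon threads so the JVM can exit)
    private static final ExecutorService pool = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r); t.setDaemon(true); return t;
    });
    private static final AtomicInteger encrypted = new AtomicInteger();

    // called from the request thread; returns without waiting for the work
    public static Future<?> submitEncryption(String payload) {
        return pool.submit(() -> {
            // stand-in for the real encrypt-and-write-to-disk step
            encrypted.incrementAndGet();
        });
    }

    public static int encryptedSoFar() { return encrypted.get(); }

    // demo helper: submit n jobs and wait for them all
    public static int demoSubmit(int n) {
        java.util.List<Future<?>> fs = new java.util.ArrayList<>();
        for (int i = 0; i < n; i++) fs.add(submitEncryption("payload-" + i));
        for (Future<?> f : fs) {
            try { f.get(); } catch (Exception e) { throw new RuntimeException(e); }
        }
        return encryptedSoFar();
    }
}
```

The returned Future is only there in case the caller later needs to confirm completion; the response itself can be sent without touching it, which is what removes the client's wait.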
The proper way to do something like this is to use a message queue, such as the standard Java EE JMS API.
In a message queue, you have one software component whose job it is to receive messages (such as requests to encrypt some resource, as in your case), and make the request "durable" in a transactional way. Then some independent process polls the message queue for new messages, takes action on them, and transactionally marks the messages as received.
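An in-process analogue of that pattern, to show the shape: a producer enqueues work, an independent worker polls the queue, acts on each message, and then marks it done. A real deployment would use a JMS provider so the queue is durable and transactional across crashes; this sketch (with invented names) only illustrates the decoupling:

```java
import java.util.List;
import java.util.concurrent.*;

public class QueueWorkerSketch {
    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    static final List<String> done = new CopyOnWriteArrayList<>();

    public static int processAll(int messages) {
        // producer side: the request thread just enqueues and returns
        for (int i = 0; i < messages; i++) queue.add("encrypt:resource-" + i);

        // consumer side: an independent worker drains the queue
        Thread worker = new Thread(() -> {
            String msg;
            while ((msg = queue.poll()) != null) {
                done.add(msg); // stand-in for "encrypt, then ack the message"
            }
        });
        worker.start();
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.size();
    }
}
```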