StreamCorruptedException | OptionalDataException in objectInputStream.readObject() in multithreaded environment - java

I have a Client-Server architecture. The server can have any number of clients, and for each of them two threads (input & output) are created.
I have one master class that coordinates all actions on the server side. It has (among others) a method like this:
public static synchronized void sendMessageToUser(Message message, String username) {
    clientOutputThreadPool.submit(new ObjectStreamOutputCallable(userObjectOutputStreams.get(username), message));
}
The ObjectStreamOutputCallable is passed the ObjectOutputStream for the specific user (I keep the streams in a HashMap so I can reuse them). The callable is executed in a thread pool.
The callable looks like this:
@Override
public Object call() throws Exception {
    writeObjectToStream();
    return null;
}

private synchronized void writeObjectToStream() throws IOException {
    oos.reset();
    oos.writeObject(message);
    oos.flush();
}
Now I sometimes get the above-mentioned errors (on the client side). The fact that this only happens about 30-40% of the time leads me to believe it has something to do with concurrency. Could it be, for example, that the message object being serialized is manipulated somewhere else in the code at the same time, and that this causes the error? I have read many times that one must not use more than one ObjectOutputStream or ObjectInputStream per socket, but I cannot find any place in my code where I use different ObjectOutputStreams for the same client. Each client has one oos that is created at socket creation time and then kept in a HashMap for later use. I also reset the stream before each message, but that still has no effect...
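For context, here is a minimal sketch of the setup described above: exactly one ObjectOutputStream per client, created when the socket is accepted and reused for every later send. The class and field names here are assumptions for illustration, not the asker's actual code.

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration of the described setup:
// one ObjectOutputStream per client, created once and reused.
public class ClientRegistry {
    private static final Map<String, ObjectOutputStream> userObjectOutputStreams =
            new ConcurrentHashMap<>();

    // Called once, right after the client's socket has been accepted.
    public static void registerClient(String username, Socket socket) throws IOException {
        ObjectOutputStream oos = new ObjectOutputStream(socket.getOutputStream());
        oos.flush(); // push the stream header so the client's ObjectInputStream can start
        userObjectOutputStreams.put(username, oos);
    }

    public static ObjectOutputStream streamFor(String username) {
        return userObjectOutputStreams.get(username);
    }
}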

Related

Call a method of all parallel Class Threads

I have a question for you.
I have multiple threads running of a class called ServerThread. When a specific event happens on ANY of those threads, I want to call a method of every other thread running in parallel.
public class ServerThread implements Runnable {

    private TCPsocket clientSocket;

    public ServerThread(Socket comSocket) {
        clientSocket = new TCPsocket(comSocket);
    }

    @Override
    public void run() {
        boolean waiting = true;
        Message msg;
        try {
            while (waiting) {
                msg = clientSocket.getMessage();
                shareMessage(msg);
            }
        } catch (Exception e) {
            ErrorLogger.toFile("EndConnection", e.toString());
        }
    }

    public void shareMessage(Message msg) {
        clientSocket.sendMessage(msg);
    }
}
I am talking about this specific line
shareMessage(msg);
which I would like to be called on every thread/instance, so that a message is sent to every client (over all TCP connections).
I've tried with synchronized but either I'm not using it well or that is not what I am looking for.
Another thing that might work is keeping a class with a static member which is a list of those TCP connection objects and then looping over all of them every time.
Thanks for your help and time.
Edited with one possible solution
*Add a static list as a member of the class and add/remove objects of the same class (or TCP sockets would also work)
private static ArrayList<ServerThread> handler;
...
handler.add(this);
...
handler.remove(this); //when client exits and thread stops
*Then create a method that iterates over each connection, and make it synchronized so that two threads won't interfere with each other. You may want to make your message-sending methods synchronized as well.
public void shareMessage(Message msg) {
    //this.clientSocket.sendMessage(msg);
    synchronized (handler) {
        for (ServerThread connection : handler) {
            try {
                connection.clientSocket.sendMessage(msg);
            } catch (Exception e) {
                connection.clientSocket.closeConnection();
            }
        }
    }
}
First: synchronized is required to prevent race conditions when multiple threads want to call the same method and this method accesses/modifies shared data. So maybe (probably) you will need it somewhere but it does not provide you the functionality you require.
Second: You cannot command another thread to call a method directly. It is not possible, e.g., for ThreadA to make ThreadB execute methodX.
I guess you have one thread per client. Probably each thread will block at clientSocket.getMessage() until the client sends a message. I don't know the implementation of TCPsocket, but maybe it is possible to interrupt the thread. In this case you may need to catch an InterruptedException, ask some central data structure whether the interrupt was caused by a new shared message, and then retrieve that shared message.
Maybe it is also possible for TCPsocket.getMessage() to return, if no message was received for some time, in which case you would again have to ask a central data structure if there is a new shared message.
Maybe it is also possible to store all client connections in such a data structure and loop them every time, as you suggested. But keep in mind that the client might send a message at any time, maybe even at the exact same time when you try to send it the shared message received from another client. This might be no problem but this depends on your application. Also you have to consider that the message will also be shared with the client that sent it to your server in the first place…
Also take a look at java.util.concurrent and its subpackages, it is likely you find something useful there… ;-)
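For example, a CopyOnWriteArrayList from java.util.concurrent could replace the manually synchronized handler list, since its iterators work on a snapshot and need no external locking. Below is a minimal sketch reusing the TCPsocket and Message types from the question; the registration details are assumptions, not the asker's actual code.

import java.net.Socket;
import java.util.concurrent.CopyOnWriteArrayList;

public class ServerThread implements Runnable {
    // Thread-safe registry of all live connections; iteration needs no explicit lock.
    private static final CopyOnWriteArrayList<ServerThread> handlers = new CopyOnWriteArrayList<>();

    private final TCPsocket clientSocket;

    public ServerThread(Socket comSocket) {
        clientSocket = new TCPsocket(comSocket);
        handlers.add(this); // register this connection on creation
    }

    public void shareMessage(Message msg) {
        for (ServerThread connection : handlers) {
            try {
                connection.clientSocket.sendMessage(msg);
            } catch (Exception e) {
                handlers.remove(connection); // drop connections that failed
                connection.clientSocket.closeConnection();
            }
        }
    }

    // run() would stay as in the question and should call handlers.remove(this)
    // when the client disconnects.
}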
To summarize: There are many possibilities. Which one is the best depends on what you need. Please add some more detail to your question if you need more specific help.

java: Single socket on read write operation. Full duplex

I have to implement sending data from a specific source port while at the same time listening on that port. Full duplex. Does anybody know how to implement this in Java? I tried to create a separate thread for listening on the socket input stream, but it doesn't work. I cannot bind a ServerSocket and a client socket to the same source port, and the same goes for Netty.
Is there any solution for full duplex?
init() {
    socket = new Socket(InetAddress.getByName(Target.getHost()), Target.getPort(),
                        InetAddress.getByName("localhost"), 250);
    in = new DataInputStream(socket.getInputStream());
    out = new DataOutputStream(socket.getOutputStream());
}

private static void writeAndFlush(OutputStream out, byte[] b) throws IOException {
    out.write(b);
    out.flush();
}
public class MessageReader implements Runnable {

    @Override
    public void run() {
        // this method throws an EOF exception
        read(in);
    }

    private void read(DataInputStream in) {
        while (isConnectionAlive()) {
            StringBuffer strBuf = new StringBuffer();
            byte[] b = new byte[1000];
            while ((b[0] = bufferedInputStream.read(b)) != 3) {
                strBuf.append(new String(b));
            }
            log.debug(strBuf.toString());
        }
    }
}
What you're trying to do is quite strange: A ServerSocket is a fully implemented socket that accepts connections, it handles its own messages and you definitely cannot piggy-back another socket on top of it.
Full duplex is fairly simple to do with NIO:
1. Create a Channel for your Socket in non-blocking mode
2. Add read to the interest OPs
3. Sleep with a Selector's select() method
4. Read any readable bytes, write any writable bytes
5. If writing is done, remove write from interest OPs
6. GOTO 3.
If you need to write, add bytes to a buffer, add write to interest OPs and wake up selector. (slightly simplified, but I'm sure you can find your way around the Javadoc)
This way you will be completely loading the outgoing buffer every time there is space and reading from the incoming one at the same time (well, single thread, but you don't have to finish writing to start reading etc).
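A minimal sketch of that selector loop might look like the following. The host, port, and outgoing data are placeholders, and error handling is omitted; it is a sketch of the steps above, not a definitive implementation.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class FullDuplexClient {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);                           // step 1: non-blocking mode
        if (channel.connect(new InetSocketAddress("localhost", 250))) { // placeholder target
            channel.register(selector, SelectionKey.OP_READ | SelectionKey.OP_WRITE); // step 2
        } else {
            channel.register(selector, SelectionKey.OP_CONNECT);    // finish the connect first
        }

        ByteBuffer readBuf = ByteBuffer.allocate(1024);
        ByteBuffer writeBuf = ByteBuffer.wrap("hello".getBytes());  // placeholder outgoing data

        while (true) {
            selector.select();                                      // step 3: sleep until ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isConnectable()) {
                    channel.finishConnect();
                    key.interestOps(SelectionKey.OP_READ | SelectionKey.OP_WRITE);
                }
                if (key.isReadable()) {                             // step 4: read what is readable
                    readBuf.clear();
                    if (channel.read(readBuf) == -1) return;        // peer closed the connection
                }
                if (key.isWritable()) {                             // step 4: write what is writable
                    channel.write(writeBuf);
                    if (!writeBuf.hasRemaining()) {                 // step 5: nothing left to write
                        key.interestOps(SelectionKey.OP_READ);
                    }
                }
            }
            selector.selectedKeys().clear();                        // step 6: loop back to select()
        }
    }
}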
I ran into the same question and decided to answer it myself. I would like to share the code repo with you. It is really simple; you can get the idea and make your own stuff work. It is an elaborate example. The steps happen to look like Ordous's solution.
https://github.com/khanhhua/full-duplex-chat
Feel free to clone! It's my weekend homework.
Main thread:
Create background thread(s) that will connect to any target machine(s).
These threads will connect to target machines and transmit data and die
Create an infinite loop
Listen for incoming connections.
Thread off any connection to handle I/O
Classes:
Server
Listens for incoming connections and threads off a Client object
Client
This class is created when the server accepts the incoming connection; the TcpClient or NetClient (I forget what Java calls it) is used to send data. Upon completion it dies.
Target
Is created during start-up, connects to a specific target, and sends data. Once complete it dies.

Java concurrency with read/write

I'm making a client-server application in Java. In short, the Server has some files. The Client can send a file to the Server, and the Client can request to download all the files from the Server. I'm using RMI for the Server and Client to communicate, and I'm using the RMI IO library to send files between Client and Server.
Some example code:
Server:
class Server implements ServerService {

    // private Map<String, File> files;
    private ConcurrentHashMap<String, File> files; // Solution

    // adding a file to the server
    public synchronized void addFile(RemoteInputStream inFile, String filename)
            throws RemoteException, IOException {
        // From RMI IO library
        InputStream istream = RemoteInputStreamClient.wrap(inFile);
        File f = new File(dir, filename);
        FileOutputStream ostream = new FileOutputStream(f);
        while (istream.available() > 0) {
            ostream.write(istream.read());
        }
        istream.close();
        ostream.close();
        files.put(filename, f);
    }

    // requesting all files
    public void requestFiles(ClientService stub)
            throws RemoteException, IOException {
        for (File f : files.values()) {
            // Open a stream to this file and give it to the Client
            RemoteInputStreamServer istream = null;
            istream = new SimpleRemoteInputStream(new BufferedInputStream(
                    new FileInputStream(f)));
            stub.receiveFile(istream.export());
        }
    }
}
Please note that this is just some example code to demonstrate.
My questions concerns concurrent access to the files on the Server. As you can see, I've made the addFile method synchronized because it modifies the resources on my Server. My requestFiles method is not synchronized.
I am wondering if this can cause some trouble. When Client A is adding a File and Client B is at the same time requesting all files, or vice versa, will this cause trouble? Or will the addFile method wait (or make the other method wait) because it is synchronized?
Thanks in advance!
Yes, this could cause trouble. Other threads could access requestFiles() while a single thread is performing the addFile() method.
"It is not possible for two invocations of synchronized methods on the same object to interleave. When one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object."
[Source] http://docs.oracle.com/javase/tutorial/essential/concurrency/syncmeth.html
So methods that are declared synchronized lock the instance against all other synchronized methods on that instance (in your case the instance of Server). If you had the requestFiles() method synchronized as well, you would essentially be synchronizing all access to the Server instance, so you wouldn't have this problem.
You could also use synchronized blocks on the files map. See this Stack Overflow question:
Java synchronized block vs. Collections.synchronizedMap
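For illustration, a rough sketch of the synchronized-block variant, guarding only the operations that touch the shared files map rather than the whole method; this is hypothetical code loosely based on the question's addFile(), not the asker's actual implementation.

// Hypothetical variant of addFile(): only the map update is synchronized,
// so the (slow) file copy does not hold a lock on the Server instance.
public void addFile(RemoteInputStream inFile, String filename)
        throws RemoteException, IOException {
    InputStream istream = RemoteInputStreamClient.wrap(inFile);
    File f = new File(dir, filename);
    try (FileOutputStream ostream = new FileOutputStream(f)) {
        int b;
        while ((b = istream.read()) != -1) {
            ostream.write(b);
        }
    } finally {
        istream.close();
    }
    synchronized (files) {
        files.put(filename, f); // shared state is touched only inside the block
    }
}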
That being said, a model that essentially locks the entire Server object whenever a file is being written or read is hampering a concurrent design.
Depending on the rest of your design, and assuming each file you write with the addFile() method has a different name and you are not overwriting files, I would explore something like the following:
Remove the map completely, and have each method interact with the file system separately.
I would use a temporary (.tmp) extension for files being written by 'addFile()', and then (once the file has been written) perform an atomic file rename to convert the extension to a '.txt' file.
Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
Then restrict the entire 'requestFiles()' method to just '.txt' files. This way file writes and file reads could happen in parallel.
Obviously use whatever extensions you require.
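A rough sketch of that approach could look like this. The dir field, the extensions, and the way bytes reach the file are assumptions taken from the question; the key point is the atomic rename on the writer side and the extension filter on the reader side.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical file-system-backed variant: no shared map, writers and readers
// are isolated by the .tmp -> .txt atomic rename.
public class FileStore {
    private final File dir;

    public FileStore(File dir) {
        this.dir = dir;
    }

    // Writer side: stream into a .tmp file, then atomically rename it to .txt.
    public void addFile(byte[] contents, String name) throws IOException {
        Path tmp = new File(dir, name + ".tmp").toPath();
        Path txt = new File(dir, name + ".txt").toPath();
        Files.write(tmp, contents);                        // an incomplete file is only ever .tmp
        Files.move(tmp, txt, StandardCopyOption.ATOMIC_MOVE);
    }

    // Reader side: only ever sees completed .txt files.
    public File[] requestFiles() {
        return dir.listFiles((d, fileName) -> fileName.endsWith(".txt"));
    }
}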

Synchronizing methods in RXTX

Situation:
I'm trying to make the incoming data from a SerialPort useful for my purposes. In one class, Processor.java, I've implemented several methods; one of them (serialEvent) implements gnu.io.SerialPortEventListener. It stores the information read from the inputStream in a buffer, which is a byte array. There is also a method which writes data to the outputStream.
Problem:
I want to implement a method (in the same class) which will write something to outputStream depending on the messages read from the inputStream.
Pseudo code:
@Override
public void serialEvent(SerialPortEvent event) {
    // get data
}

public void writeData(String dataToWrite) {
    // write data
}

public void respond() {
    // write data
    // wait for appropriate response (read data)
    // write data
    // ...
}
How can I do this?
The only thing that comes to mind is a background thread that waits for an input-buffer-full condition, processes the received message, and responds to it.
If you are communicating in fixed-length packets or start/stop-marked packets, you should create a thread that monitors the serial port, buffers the received data and, once a "packet/message complete" condition is met, fires an event to a registered listener (in another thread if possible). That listener would then process the message and respond (in its own thread).
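A minimal sketch of that pattern, using RXTX's SerialPortEvent callback and a BlockingQueue to hand completed messages from the serial event thread to a worker thread. The framing byte, the response format, and the constructor wiring are placeholders, not part of the question's code.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import gnu.io.SerialPortEvent;
import gnu.io.SerialPortEventListener;

public class Processor implements SerialPortEventListener {
    private static final byte END_OF_MESSAGE = 0x03;            // placeholder framing byte

    private final BlockingQueue<String> completeMessages = new LinkedBlockingQueue<>();
    private final StringBuilder buffer = new StringBuilder();
    private final java.io.InputStream in;
    private final java.io.OutputStream out;

    public Processor(java.io.InputStream in, java.io.OutputStream out) {
        this.in = in;
        this.out = out;
        // Worker thread: consumes complete messages and writes the responses.
        new Thread(this::respond).start();
    }

    @Override
    public void serialEvent(SerialPortEvent event) {
        if (event.getEventType() != SerialPortEvent.DATA_AVAILABLE) {
            return;
        }
        try {
            int b;
            while (in.available() > 0 && (b = in.read()) != -1) {
                if (b == END_OF_MESSAGE) {                        // "packet complete" condition
                    completeMessages.add(buffer.toString());
                    buffer.setLength(0);
                } else {
                    buffer.append((char) b);
                }
            }
        } catch (java.io.IOException e) {
            e.printStackTrace();
        }
    }

    private void respond() {
        try {
            while (true) {
                String msg = completeMessages.take();             // blocks until a full message arrives
                out.write(("ACK:" + msg + (char) END_OF_MESSAGE).getBytes());
                out.flush();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}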

Java Multithreaded Web Server - Not recieving multiple GET requests

I have the starts of a very basic multi-threaded web server. It can receive all GET requests as long as they come one at a time.
However, when multiple GET requests come in at the same time, sometimes they are all received, and other times some are missing.
I tested this by creating an HTML page with multiple image tags pointing to my web server and opening the page in Firefox. I always use shift+refresh.
Here is my code, I must be doing something fundamentally wrong.
public final class WebServer
{
    public static void main(String argv[]) throws Exception
    {
        int port = 6789;
        ServerSocket serverSocket = null;
        try
        {
            serverSocket = new ServerSocket(port);
        }
        catch (IOException e)
        {
            System.err.println("Could not listen on port: " + port);
            System.exit(1);
        }
        while (true)
        {
            try
            {
                Socket clientSocket = serverSocket.accept();
                new Thread(new ServerThread(clientSocket)).start();
            }
            catch (IOException e)
            {
            }
        }
    }
}
public class ServerThread implements Runnable
{
    static Socket clientSocket = null;

    public ServerThread(Socket clientSocket)
    {
        this.clientSocket = clientSocket;
    }

    public void run()
    {
        String headerline = null;
        DataOutputStream out = null;
        BufferedReader in = null;
        int i;
        try
        {
            out = new DataOutputStream(clientSocket.getOutputStream());
            in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
            while ((headerline = in.readLine()).length() != 0)
            {
                System.out.println(headerline);
            }
        }
        catch (Exception e)
        {
        }
    }
}
First, @skaffman's comment is spot on. You should not catch-and-ignore exceptions like your code is currently doing. In general, it is a terrible practice. In this case, you could well be throwing away the evidence that would tell you what the real problem is.
Second, I think you might be suffering from a misapprehension of what a server is capable of. No matter how you implement it, a server can only handle a certain number of requests per second. If you throw more requests at it than that, some have to be dropped.
What I suspect is happening is that you are sending too many requests in a short period of time, and overwhelming the operating system's request buffer.
When your code binds to a server socket, the operating system sets up a request queue to hold incoming requests on the bound IP address/port. This queue has a finite size, and if the queue is full when a new request comes, the operating system will drop requests. This means that if your application is not able to accept requests fast enough, some will be dropped.
What can you do about it?
There is an overload of ServerSocket.bind(...) that allows you to specify the backlog of requests to be held in the OS-level queue. You could use this ... or use a larger backlog.
You could change your main loop to pull requests from the queue faster. One issue with your current code is that you are creating a new Thread for each request. Thread creation is expensive, and you can reduce the cost by using a thread pool to recycle threads used for previous requests.
CAVEATS
You need to be a bit careful. It is highly likely that you can modify your application to accept (not drop) more requests in the short term. But in the long term, you should only accept requests as fast as you can actually process them. If it accepts them faster than you can process them, a number of bad things can happen:
You will use a lot of memory with all of the threads trying to process requests. This will increase CPU overheads in various ways.
You may increase contention for internal Java data structures, databases and so on, tending to reduce throughput.
You will increase the time taken to process and reply to individual GET requests. If the delay is too long, the client may time out the request ... and send it again. If this happens, the work done by the server will be wasted.
To defend yourself against this, it is actually best to NOT eagerly accept as many requests as you can. Instead, use a bounded thread pool, and tune the pool size (etc) to optimize the throughput rate while keeping the time to process individual requests within reasonable limits.
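For illustration, here is a sketch of the accept loop using an explicit backlog and a bounded thread pool instead of a new Thread per request. The pool sizes and backlog value are arbitrary placeholders to tune, not recommendations.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class WebServer
{
    public static void main(String[] argv) throws IOException
    {
        int port = 6789;
        int backlog = 200;  // OS-level queue of not-yet-accepted connections
        ServerSocket serverSocket = new ServerSocket(port, backlog);

        // Bounded pool: at most 16 worker threads and 100 queued connections,
        // so the server degrades gracefully instead of spawning unlimited threads.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 16, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(100),
                new ThreadPoolExecutor.CallerRunsPolicy());

        while (true)
        {
            Socket clientSocket = serverSocket.accept();
            pool.execute(new ServerThread(clientSocket));
        }
    }
}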
I actually discovered the problem was this:
static Socket clientSocket = null;
Once I removed the static, it works perfectly now.
