I have generated a client and a server.
I need to add one more field to the message on the client side and read it on the server side.
I'm thinking about decorating TProtocol on the client side and TProcessor on the server side, e.g.:
// Client:
TTransport transport = new TSocket("localhost", 8888);
transport.open();
TProtocol protocol = new DecoratedProtocol(new TBinaryProtocol(transport));
// Server:
TServerTransport transport = new TServerSocket(8888);
TServer server = new TSimpleServer(new Args(transport).processor(new DecoratedProcessor(...)));
But I'm not sure what to do inside DecoratedProtocol and DecoratedProcessor.
There is a solution for adding message metadata called THeaderProtocol, but AFAIK right now this is only implemented for C++. That could be a starting point for implementing the same in Java.
EDIT: I just noticed that fbthrift seems to implement THeaderProtocol for Java already.
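In the meantime, one way to fill in DecoratedProtocol and DecoratedProcessor is the trick Thrift's own TMultiplexedProtocol/TMultiplexedProcessor pair uses: smuggle the extra value through the message name on the way out and strip it off again on the way in. A minimal sketch in two classes, assuming an "extra#method" naming convention (the separator and what you do with the extracted value are assumptions):

import org.apache.thrift.TException;
import org.apache.thrift.TProcessor;
import org.apache.thrift.protocol.TMessage;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.protocol.TProtocolDecorator;

// Client side: prepend the extra value to every outgoing method name.
public class DecoratedProtocol extends TProtocolDecorator {
    private final String extra;

    public DecoratedProtocol(TProtocol protocol, String extra) {
        super(protocol);
        this.extra = extra;
    }

    @Override
    public void writeMessageBegin(TMessage msg) throws TException {
        super.writeMessageBegin(new TMessage(extra + "#" + msg.name, msg.type, msg.seqid));
    }
}

// Server side: read the header once, split off the extra value, then replay
// the restored header to the real processor (the same replay trick
// TMultiplexedProcessor uses internally).
public class DecoratedProcessor implements TProcessor {
    private final TProcessor inner;

    public DecoratedProcessor(TProcessor inner) {
        this.inner = inner;
    }

    @Override
    public boolean process(TProtocol in, TProtocol out) throws TException {
        TMessage msg = in.readMessageBegin();
        String[] parts = msg.name.split("#", 2); // assumes the client always decorates
        String extra = parts[0];                 // your added field; use it as needed
        TMessage restored = new TMessage(parts[1], msg.type, msg.seqid);
        return inner.process(new StoredMessageProtocol(in, restored), out);
    }

    // Replays the already-consumed message header, delegates everything else.
    private static final class StoredMessageProtocol extends TProtocolDecorator {
        private final TMessage msg;

        StoredMessageProtocol(TProtocol protocol, TMessage msg) {
            super(protocol);
            this.msg = msg;
        }

        @Override
        public TMessage readMessageBegin() {
            return msg;
        }
    }
}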
I have an application consisting of a Java Thrift Asynchronous client communicating with a Python/Twisted Thrift Asynchronous server over TCP using the loopback interface (localhost).
I want to use a UNIX domain socket instead of a TCP socket for increased throughput, but I cannot find a suitable Asynchronous (Non-blocking) UNIX Domain Socket implementation for Java to use with Thrift.
I had this Python/Twisted TCP Thrift server:
handler = ThriftServiceHandler(am)
processor = ThriftServiceHandler.Processor(handler)
pfactory = TBinaryProtocol.TBinaryProtocolFactory()
reactor.listenTCP(PORT, TTwisted.ThriftServerFactory(processor, pfactory), interface="127.0.0.1")
And I managed to create an Asynchronous UNIX Domain Socket Thrift server with:
handler = ThriftServiceHandler(am)
processor = ThriftServiceHandler.Processor(handler)
pfactory = TBinaryProtocol.TBinaryProtocolFactory()
reactor.listenUNIX("thrift.sock", TTwisted.ThriftServerFactory(processor, pfactory))
Currently I have this Java client:
clientManager = new TAsyncClientManager();
factory = new TBinaryProtocol.Factory();
socket = new TNonblockingSocket("127.0.0.1", thriftConstants.PORT);
client = new ThriftService.AsyncClient(this.factory, this.clientManager, this.socket);
But I can't figure out how to do the Java client implementation. I looked at using https://github.com/jnr/jnr-unixsocket and https://github.com/kohlschutter/junixsocket, but I couldn't get them to work with Thrift's Java AsyncClient. Any help would be appreciated.
I am using the latest version of Thrift (0.9.2).
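For what it's worth, gluing junixsocket into a blocking Thrift client is straightforward via TIOStreamTransport. This doesn't solve the asynchronous part of the question, but it may serve as a fallback. A sketch, assuming junixsocket's AFUNIXSocket/AFUNIXSocketAddress API:

import java.io.File;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TIOStreamTransport;
import org.apache.thrift.transport.TTransport;
import org.newsclub.net.unix.AFUNIXSocket;
import org.newsclub.net.unix.AFUNIXSocketAddress;

public class UnixSocketClient {
    public static void main(String[] args) throws Exception {
        // Connect a UNIX domain socket to the Twisted server's "thrift.sock"
        // and hand its streams to Thrift as a plain stream transport.
        AFUNIXSocket socket = AFUNIXSocket.newInstance();
        socket.connect(new AFUNIXSocketAddress(new File("thrift.sock")));
        TTransport transport = new TIOStreamTransport(
                socket.getInputStream(), socket.getOutputStream());
        TProtocol protocol = new TBinaryProtocol(transport);
        // Synchronous generated client, not the AsyncClient from the question.
        ThriftService.Client client = new ThriftService.Client(protocol);
    }
}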
I'm running the Apache Thrift tutorial for Java.
When I run two client processes at the same time, the server doesn't accept the second client; only after the first client finishes is the second one accepted.
Can anyone explain what's going on?
How can I make the server accept several connections in several threads?
Can anyone explain what's going on?
You already found it out yourself: TSimpleServer allows only one connection at a time. The server becomes available again once the first client disconnects.
How can I make the server accept several connections in several threads?
Use one of the threading servers, whichever fits your use case best.
TThreadPoolServer
TThreadedSelectorServer
TNonblockingServer
THsHaServer (the half-sync/half-async server)
Please note, that some of the servers require the client to use TFramedTransport.
Based on the other answers, below is code that allows multiple clients to execute simultaneously.
Server (simple):
CalculatorHandler handler = new CalculatorHandler();
Calculator.Processor processor = new Calculator.Processor(handler);
TNonblockingServerSocket serverTransport = new TNonblockingServerSocket(9090);
THsHaServer.Args args = new THsHaServer.Args(serverTransport);
args.processor(processor);
args.transportFactory(new TFramedTransport.Factory());
TServer server = new THsHaServer(args);
server.serve();
Client:
TTransport transport = new TSocket("localhost", 9090);
transport.open();
TProtocol protocol = new TBinaryProtocol(new TFramedTransport(transport));
Calculator.Client client = new Calculator.Client(protocol);
perform(client);
I am making a chat application in Java that uses TCP.
I have a client and a server side.
To send a message to another user, I have to send the message to the server through my client, and the server has to send it to another client.
The server holds the addresses of both online users. When I send a private message, the server finds the IP and port and creates a socket from them.
The problem is that it doesn't work correctly.
Here’s the code:
int portNumber = 4444;
String host = "192.168.0.100";
Socket link;
try {
    link = new Socket(host, portNumber);
    // Then I set the output stream on the already created PrintWriter
    out = new PrintWriter(link.getOutputStream(), true);
} catch (Exception e) {
    e.printStackTrace(); // don't swallow the exception silently
}
// Unfortunately the server freezes here (it doesn't show anything).
How can I solve this problem? Where did I make a mistake?
Thank you in advance.
You shouldn't create a new Socket to send a message. Instead, use the socket of the existing connection.
The sequence should be the following:
Client A connects to the server (server stores the connection as SocketA).
Client B connects to the server (server stores the connection as SocketB).
Server reads a private message from SocketA. The message is addressed to client B.
Server finds the existing socket for client B. It's SocketB.
Server sends the message into SocketB.
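A minimal sketch of this sequence, with one thread per accepted client (the first-line login and the "recipient:message" wire format are assumptions, not from the question):

import java.io.*;
import java.net.*;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ChatServer {
    // Accepted sockets are kept here so the server can route private
    // messages over connections that already exist.
    private static final Map<String, Socket> clients = new ConcurrentHashMap<>();

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(4444)) {
            while (true) {
                Socket socket = server.accept();
                new Thread(() -> handle(socket)).start();
            }
        }
    }

    private static void handle(Socket socket) {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()))) {
            String name = in.readLine(); // first line sent by a client: its user name
            clients.put(name, socket);
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split(":", 2); // "recipient:message"
                Socket target = parts.length == 2 ? clients.get(parts[0]) : null;
                if (target != null) {
                    // Reuse the recipient's existing connection; never open a new one.
                    new PrintWriter(target.getOutputStream(), true)
                            .println(name + ": " + parts[1]);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}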
I'm pretty puzzled by this issue. I have an Apache Thrift 0.9.0 client and server. The client code goes like this:
this.transport = new TSocket(this.server, this.port);
final TProtocol protocol = new TBinaryProtocol(this.transport);
this.client = new ZKProtoService.Client(protocol);
This works fine. However, if I try to wrap the transport in a TFramedTransport:
this.transport = new TSocket(this.server, this.port);
final TProtocol protocol = new TBinaryProtocol(new TFramedTransport(this.transport));
this.client = new ZKProtoService.Client(protocol);
I get the following obscure exception (no explanation message whatsoever) on the client side. The server side shows no error.
org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at com.blablabla.android.core.device.proto.ProtoService$Client.recv_open(ProtoService.java:108)
at com.blablabla.android.core.device.proto.ProtoService$Client.open(ProtoService.java:95)
at com.blablabla.simpleprotoclient.proto.ProtoClient.initializeCommunication(ProtoClient.java:411)
at com.blablabla.simpleprotoclient.proto.ProtoClient.doWork(ProtoClient.java:269)
at com.blablabla.simpleprotoclient.proto.ProtoClient.run(ProtoClient.java:499)
at java.lang.Thread.run(Thread.java:724)
It also fails if I use TCompactProtocol instead of TBinaryProtocol.
On the server side I have extended TProcessor with my own class, since I need to reuse the existing service handler (the server-side Iface implementation) for this client:
@Override
public boolean process(final TProtocol in, final TProtocol out) throws TException {
    final TTransport t = in.getTransport();
    final TSocket socket = (TSocket) t;
    socket.setTimeout(ProtoServer.SOCKET_TIMEOUT);
    final String clientAddress = socket.getSocket().getInetAddress().getHostAddress();
    final int clientPort = socket.getSocket().getPort();
    final String clientRemote = clientAddress + ":" + clientPort;
    ProtoService.Processor<ProtoServiceHandler> processor = PROCESSORS.get(clientRemote);
    if (processor == null) {
        final ProtoServiceHandler handler = new ProtoServiceHandler(clientRemote);
        processor = new ProtoService.Processor<ProtoServiceHandler>(handler);
        PROCESSORS.put(clientRemote, processor);
        HANDLERS.put(clientRemote, handler);
        ProtoClientConnectionChecker.addNewConnection(clientRemote, socket);
    }
    return processor.process(in, out);
}
And this is how I start the server side:
TServerTransport serverTransport = new TServerSocket(DEFAULT_CONTROL_PORT);
TServer server = new TThreadPoolServer(
        new TThreadPoolServer.Args(serverTransport).processor(new ControlProcessor()));
Thread thControlServer = new Thread(new StartServer("Control", server));
thControlServer.start();
I have some questions:
Is it correct to reuse service handler instances, or shouldn't I be doing this?
Why does it fail when I use TFramedTransport or TCompactProtocol? How to fix this?
Any help on this issue is welcome. Thanks in advance!
I was having the same problem and finally found the answer: it is possible to set the transport type on the server, though this is not clear from most tutorials and examples on the web. Have a look at all the methods of the TServer.Args class (or the args classes for other servers, which extend TServer.Args). There are inputTransportFactory and outputTransportFactory methods; pass new TFramedTransport.Factory() to each of them to declare which transport the server should use. In Scala:
val handler = new ServiceStatusHandler
val processor = new ServiceStatus.Processor(handler)
val serverTransport = new TServerSocket(9090)
val args = new TServer.Args(serverTransport)
.processor(processor)
.inputTransportFactory(new TFramedTransport.Factory)
.outputTransportFactory(new TFramedTransport.Factory)
val server = new TSimpleServer(args)
println("Starting the simple server...")
server.serve()
Note that if you are using a TAsyncClient, you have no choice about the transport: you must use TNonblockingTransport, whose only standard implementation is TNonblockingSocket. It doesn't literally wrap your chosen transport in a TFramedTransport, but it does prepend the frame length to everything it writes, and it expects the server to prepend the length of its response as well. This wasn't documented anywhere I found, but if you look at the source code and experiment with different combinations, you will find that with TSimpleServer you must use TFramedTransport to get it to work with an async client.
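For illustration, a Java construction sketch for such an async client against the framed server above (ServiceStatus is the generated service from the Scala example; the callback passed to the generated methods is omitted because its generic signature changed between Thrift versions):

// TNonblockingSocket frames every message it writes, so this client only
// works against a server configured with TFramedTransport factories.
TAsyncClientManager clientManager = new TAsyncClientManager();
TNonblockingTransport transport = new TNonblockingSocket("localhost", 9090);
ServiceStatus.AsyncClient client = new ServiceStatus.AsyncClient(
        new TBinaryProtocol.Factory(), clientManager, transport);
// client.ping(callback); // AsyncMethodCallback's generics vary by Thrift version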
By the way, it's also worth noting that the docs say a TNonblockingServer must use TFramedTransport as the outermost layer of the transport. However, the examples don't show this being set in TNonblockingServer.Args, yet you still find that you must use TFramedTransport on the client side to successfully execute an RPC on the server. This is because TNonblockingServer.Args has its input and output transport factories set to TFramedTransport by default (you can see this using reflection to inspect the fields of the superclass hierarchy, or in the source code of the AbstractNonblockingServerArgs constructor). You can override the input and output transports, but the server will likely fail for the reasons discussed in the documentation.
If the issue happens with framed but everything works without it, then you have an incompatible protocol stack on the two ends. Choose one of the following:
either modify the server code to use framed as well
or do not use framed on the client
A good rule of thumb is to always use the exact same protocol/transport stack on both ends. In this particular case it blows up because framed adds a four-byte header holding the size of the message that follows. If the server does not use framed, those additional four bytes sent by the client are (wrongly) interpreted as part of the message.
Although the sample code in the answer to TNonblockingServer in thrift crashes when TFramedTransport opens is for C++, adding framed on the server side should be very similar in Java.
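Based on the server code in the question, the fix might look like the sketch below. Note that with framing enabled, in.getTransport() inside the custom process() will be a TFramedTransport rather than a TSocket, so that cast would need adjusting as well.

// Configure the TThreadPoolServer to frame both directions so it matches
// the client's TFramedTransport.
TServerTransport serverTransport = new TServerSocket(DEFAULT_CONTROL_PORT);
TServer server = new TThreadPoolServer(
        new TThreadPoolServer.Args(serverTransport)
                .processor(new ControlProcessor())
                .transportFactory(new TFramedTransport.Factory()));
server.serve();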
PS: Yes, it is perfectly ok to re-use your handler. A typical handler is a stateless thing.
I've recently tried to connect Python to Java using Thrift.
I've written a server in Python (PyPy). I've also written a reference client, which works.
Then I wrote a Java client, which produces only a 'Connection refused' exception.
What's wrong with this? (Recently I've also found a closed issue featuring this problem https://issues.apache.org/jira/browse/THRIFT-1888)
PS: I used the Thrift 0.9 release, PyPy 2.0 beta 2, and Java 1.7.0_11.
test.thrift
namespace java com.test
namespace python test
service TestPing {
void ping()
}
Python server code
class TestPingHandler:
    def ping(self):
        pass
handler = TestPingHandler()
processor = TestPing.Processor(handler)
transport = TSocket.TServerSocket(port=9091)
tfactory = TTransport.TBufferedTransportFactory()
pfactory = TBinaryProtocol.TBinaryProtocolFactory()
server = TServer.TThreadedServer(processor, transport, tfactory, pfactory)
print 'Starting the server...'
server.serve()
print 'done.'
Java client code
TTransport transport;
transport = new TSocket("localhost", 9091);
transport.open();
TProtocol protocol = new TBinaryProtocol(transport);
client = new TestPing.Client(protocol);
client.ping();
Reference Python client code
transport = TSocket.TSocket('localhost', 9091)
transport = TTransport.TBufferedTransport(transport)
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = TestPing.Client(protocol)
transport.open()
client.ping()
transport.close()
I had the same issue.
Replacing "localhost" with the IP address fixed it.
The reason: Python used TCPv6, whereas Java used TCPv4.
Python:
transport = TSocket.TServerSocket(host="127.0.0.1", port=9091)
Java:
transport = new TSocket("127.0.0.1", 9091);
transport = new TSocket("localhost", 9091);
TProtocol protocol = new TBinaryProtocol(transport);
transport.open();
This should work...