If I want to kick off a thread that will be sending a text message with twilio,
is it better to do
TwilioRestClient client = new TwilioRestClient(ACCOUNT_SID, AUTH_TOKEN);
for each thread or should I make one client and share it with the threads?
You can see the source code for the TwilioRestClient class of the twilio-java helper library here: https://github.com/twilio/twilio-java/blob/master/src/main/java/com/twilio/sdk/TwilioRestClient.java
I don't see anything that is obviously not thread-safe. My only concern would be this part of the code in the constructor:
// Grab the proper connection manager, based on runtime environment
ClientConnectionManager mgr = null;
try {
    Class.forName("com.google.appengine.api.urlfetch.HTTPRequest");
    mgr = new AppEngineClientConnectionManager();
} catch (ClassNotFoundException e) {
    // Not GAE
    mgr = new ThreadSafeClientConnManager();
    ((ThreadSafeClientConnManager) mgr).setDefaultMaxPerRoute(10);
}
Each initialization creates a new connection manager with its own connection pool, so I'd say share the resource. On the other hand, will a single shared pool have enough connections available to handle your load efficiently?
You can read up more about ThreadSafeClientConnManager here: https://hc.apache.org/httpcomponents-client-4.3.x/httpclient/apidocs/org/apache/http/impl/conn/tsccm/ThreadSafeClientConnManager.html#setDefaultMaxPerRoute%28int%29
Bottom line, try load testing it with your expected usage and tweak the source to meet your needs.
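For illustration, here is a minimal sketch of sharing one client across worker threads. SmsSender, queueSms, sendSms, the placeholder credentials and the pool size are all mine, not Twilio's, and the actual send call depends on which twilio-java version you use:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import com.twilio.sdk.TwilioRestClient;

public class SmsSender {
    private static final String ACCOUNT_SID = "ACXXXXXXXXXXXXXXXX"; // placeholder
    private static final String AUTH_TOKEN  = "your_auth_token";    // placeholder

    // One shared client; it holds a single pooled, thread-safe connection manager.
    private static final TwilioRestClient CLIENT =
            new TwilioRestClient(ACCOUNT_SID, AUTH_TOKEN);

    private static final ExecutorService POOL = Executors.newFixedThreadPool(10);

    public static void queueSms(final String to, final String body) {
        POOL.submit(new Runnable() {
            public void run() {
                sendSms(CLIENT, to, body);
            }
        });
    }

    // Hypothetical helper standing in for the SDK-specific send logic
    // (e.g. building the parameter list and calling the account's SMS/message factory).
    private static void sendSms(TwilioRestClient client, String to, String body) {
        // ...
    }
}

The point is simply that every task reuses the same CLIENT instance instead of constructing its own.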
Related
I have a Jersey client up and running, using the Apache Client 4 library, like this:
private Client createClient() {
    ApacheHttpClient4Config cc = new DefaultApacheHttpClient4Config();
    // boring stuff here
    return ApacheHttpClient4.create(cc);
}
But this by default uses a BasicClientConnManager, which doesn't allow multi-threaded connections.
The ApacheHttpClient4Config Javadoc says that I need to set the PROPERTY_CONNECTION_MANAGER to a ThreadSafeClientConnManager instance if I want multi-threaded operation. I can do this, and it works OK:
private Client createClient() {
    ApacheHttpClient4Config cc = new DefaultApacheHttpClient4Config();
    cc.getProperties().put(ApacheHttpClient4Config.PROPERTY_CONNECTION_MANAGER,
            new ThreadSafeClientConnManager());
    // boring stuff here
    return ApacheHttpClient4.create(cc);
}
But ThreadSafeClientConnManager is deprecated. This is annoying.
The more modern version is PoolingHttpClientConnectionManager. Unfortunately, though, the ApacheHttpClient4.create() method requires the connection manager to be an implementation of ClientConnectionManager (itself deprecated), and PoolingHttpClientConnectionManager doesn't implement that interface. So if I try to use it, my connection manager gets ignored and we're back to a BasicClientConnManager.
How can I end up with a thread-safe client without using anything that's deprecated?
You can create the client as follows (see https://github.com/phillbarber/connection-leak-test/blob/master/src/test/java/com/github/phillbarber/connectionleak/IntegrationTestThatExaminesConnectionPoolBeforeAndAfterRun.java#L30-L33):
client = new ApacheHttpClient4(new ApacheHttpClient4Handler(
        HttpClients.custom()
                .setConnectionManager(new PoolingHttpClientConnectionManager())
                .build(),
        null,    // no cookie store
        false)); // no preemptive basic auth
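If the defaults don't give you enough concurrency, you can tune the pool before handing it over; a small sketch in the same style as the question's createClient() (the numbers are arbitrary; setMaxTotal and setDefaultMaxPerRoute are the standard HttpClient 4.x pool settings):

private Client createPooledClient() {
    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
    cm.setMaxTotal(100);          // total connections across all routes (arbitrary)
    cm.setDefaultMaxPerRoute(20); // connections per route/host (arbitrary)

    return new ApacheHttpClient4(new ApacheHttpClient4Handler(
            HttpClients.custom().setConnectionManager(cm).build(),
            null,    // no cookie store
            false)); // no preemptive basic auth
}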
I am using the Oracle Jersey Client, and am trying to cancel a long running get or put operation.
The Client is constructed as:
JacksonJsonProvider provider = new JacksonJsonProvider(new ObjectMapper());
ClientConfig clientConfig = new DefaultClientConfig();
clientConfig.getSingletons().add(provider);
Client client = Client.create(clientConfig);
The following code is executed on a worker thread:
File bigZipFile = new File("/home/me/everything.zip");
WebResource resource = client.resource("https://putfileshere.com");
Builder builder = resource.getRequestBuilder();
builder.type("application/zip").put(bigZipFile); //This will take a while!
I want to cancel this long-running put. When I try to interrupt the worker thread, the put operation continues to run. From what I can see, the Jersey Client makes no attempt to check for Thread.interrupted().
I see the same behavior when using an AsyncWebResource instead of WebResource and using Future.cancel(true) on the Builder.put(..) call.
So far, the only solution I have come up with to interrupt this is throwing a RuntimeException in a ContainerListener:
client.addFilter(new ConnectionListenerFilter(
        new OnStartConnectionListener() {
            public ContainerListener onStart(ClientRequest cr) {
                return new ContainerListener() {
                    public void onSent(long delta, long bytes) {
                        // If the thread has been interrupted, stop the operation
                        if (Thread.interrupted()) {
                            throw new RuntimeException("Upload or Download canceled");
                        }
                        // Report progress otherwise
                    }
                }...
I am wondering if there is a better solution (perhaps when creating the Client) that correctly handles interruptible I/O without using a RuntimeException.
Yeah, interrupting the thread will only work if the code is watching for the interrupts or calling other methods (such as Thread.sleep(...)) that watch for it.
Throwing an exception out of the listener doesn't sound like a bad idea. I would create your own RuntimeException subclass, such as a TimeoutRuntimeException, so you can specifically catch and handle it.
Another option would be to close the underlying IO stream being written to, which would cause an IOException, but I'm not familiar enough with Jersey to know whether you can get access to the connection.
Ah, here's an idea: instead of putting the File directly, how about putting some extension of a BufferedInputStream that reads from the File but can also time out or be canceled? Jersey would read from the stream, and the stream would throw an IOException once the timeout expires (or the thread is interrupted). A rough sketch follows.
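Here is one way that idea could look. The class is mine, not part of Jersey, and it cancels via thread interruption; you could track a deadline instead:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InterruptedIOException;

// Wraps the file in a stream that fails the next read once the worker
// thread has been interrupted, which aborts the upload with an IOException.
public class CancellableFileInputStream extends BufferedInputStream {

    public CancellableFileInputStream(String path) throws IOException {
        super(new FileInputStream(path));
    }

    private void checkCancelled() throws IOException {
        if (Thread.currentThread().isInterrupted()) {
            throw new InterruptedIOException("Upload canceled");
        }
    }

    @Override
    public int read() throws IOException {
        checkCancelled();
        return super.read();
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        checkCancelled();
        return super.read(b, off, len);
    }
}

The worker thread would then call builder.type("application/zip").put(new CancellableFileInputStream("/home/me/everything.zip")), and interrupting it should surface as an IOException from the stream.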
As of Jersey 2.35, the above API has changed. A timeout has been introduced in the client builder, which can set a read timeout. If the server takes too long to respond, the underlying socket will time out. However, once the server starts sending the response, it will not time out. This can be used if the server does not start sending a partial response, which depends on the server implementation.
client = (JerseyClient) JerseyClientBuilder
        .newBuilder()
        .connectTimeout(1 * 1000, TimeUnit.MILLISECONDS)
        .readTimeout(5 * 1000, TimeUnit.MILLISECONDS)
        .build();
The current filters and interceptors operate on the data only, so the solution posted in the original question will not work with filters and interceptors (though I admit I may have missed something there).
Another way is to get hold of the underlying HttpURLConnection (for the standard Jersey client configuration), which seems to be possible with org.glassfish.jersey.client.HttpUrlConnectorProvider:
HttpUrlConnectorProvider httpConProvider = new HttpUrlConnectorProvider();
httpConProvider.connectionFactory(new CustomHttpUrlConnectionfactory());

public static class CustomHttpUrlConnectionfactory implements
        HttpUrlConnectorProvider.ConnectionFactory {

    @Override
    public HttpURLConnection getConnection(URL url) throws IOException {
        System.out.println("CustomHttpUrlConnectionfactory ..... called");
        return (HttpURLConnection) url.openConnection();
    } // getConnection closing
} // inner-class closing
I did try the connection provider approach; however, I could not get it working. The idea would be to keep a reference to the connection by some means (thread id, etc.) and close it if the communication takes too long. The primary problem was that I could not find a way to register the provider with the client. The standard
.register(httpConProvider)
mechanism does not seem to work (or perhaps it is not supposed to work like that), and the documentation is a bit sketchy in that direction.
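For what it's worth, my understanding of the Jersey 2.x API (an assumption on my part, not something from the answer above) is that a connector provider is supplied through ClientConfig rather than register():

// Sketch: reuses the CustomHttpUrlConnectionfactory shown above.
HttpUrlConnectorProvider httpConProvider = new HttpUrlConnectorProvider();
httpConProvider.connectionFactory(new CustomHttpUrlConnectionfactory());

ClientConfig config = new ClientConfig();
config.connectorProvider(httpConProvider); // set the connector provider here
Client client = ClientBuilder.newClient(config);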
I'm pretty puzzled with this issue. I have an Apache Thrift 0.9.0 client and server. The client code goes like this:
this.transport = new TSocket(this.server, this.port);
final TProtocol protocol = new TBinaryProtocol(this.transport);
this.client = new ZKProtoService.Client(protocol);
This works fine. However, if I try to wrap the transport in a TFramedTransport
this.transport = new TSocket(this.server, this.port);
final TProtocol protocol = new TBinaryProtocol(new TFramedTransport(this.transport));
this.client = new ZKProtoService.Client(protocol);
I get the following obscure exception (no explanation message whatsoever) on the client side. The server side shows no error.
org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at com.blablabla.android.core.device.proto.ProtoService$Client.recv_open(ProtoService.java:108)
at com.blablabla.android.core.device.proto.ProtoService$Client.open(ProtoService.java:95)
at com.blablabla.simpleprotoclient.proto.ProtoClient.initializeCommunication(ProtoClient.java:411)
at com.blablabla.simpleprotoclient.proto.ProtoClient.doWork(ProtoClient.java:269)
at com.blablabla.simpleprotoclient.proto.ProtoClient.run(ProtoClient.java:499)
at java.lang.Thread.run(Thread.java:724)
It also fails if I use TCompactProtocol instead of TBinaryProtocol.
On the server side I have extended TProcessor with my own class, since I need to reuse the existing service handler (the service's server-side Iface implementation) for this client:
@Override
public boolean process(final TProtocol in, final TProtocol out)
        throws TException {
    final TTransport t = in.getTransport();
    final TSocket socket = (TSocket) t;
    socket.setTimeout(ProtoServer.SOCKET_TIMEOUT);
    final String clientAddress = socket.getSocket().getInetAddress()
            .getHostAddress();
    final int clientPort = socket.getSocket().getPort();
    final String clientRemote = clientAddress + ":" + clientPort;
    ProtoService.Processor<ProtoServiceHandler> processor = PROCESSORS
            .get(clientRemote);
    if (processor == null) {
        final ProtoServiceHandler handler = new ProtoServiceHandler(clientRemote);
        processor = new ProtoService.Processor<ProtoServiceHandler>(handler);
        PROCESSORS.put(clientRemote, processor);
        HANDLERS.put(clientRemote, handler);
        ProtoClientConnectionChecker.addNewConnection(clientRemote, socket);
    }
    return processor.process(in, out);
}
And this is how I start the server side:
TServerTransport serverTransport = new TServerSocket(DEFAULT_CONTROL_PORT);
TServer server = new TThreadPoolServer(new TThreadPoolServer.Args(serverTransport)
        .processor(new ControlProcessor()));
Thread thControlServer = new Thread(new StartServer("Control", server));
thControlServer.start();
I have some questions:
Is it correct to reuse service handler instances or I shouldn't be doing this?
Why does it fail when I use TFramedTransport or TCompactProtocol? How to fix this?
Any help on this issue is welcome. Thanks in advance!
I was having the same problem and finally found the answer: it is possible to set the transport type on the server, though this is not clear from most tutorials and examples I've found on the web. Have a look at all of the methods of the TServer.Args class (or the args classes for other servers, which extend TServer.Args). There are inputTransportFactory and outputTransportFactory methods. You can pass new TFramedTransport.Factory() to each of these methods to declare which transport the server should use. In Scala:
val handler = new ServiceStatusHandler
val processor = new ServiceStatus.Processor(handler)
val serverTransport = new TServerSocket(9090)
val args = new TServer.Args(serverTransport)
.processor(processor)
.inputTransportFactory(new TFramedTransport.Factory)
.outputTransportFactory(new TFramedTransport.Factory)
val server = new TSimpleServer(args)
println("Starting the simple server...")
server.serve()
Note that if you are using a TAsyncClient, you have no choice about the transport that you use. You must use TNonblockingTransport, which has only one standard implementation, TNonblockingSocket, which internally wraps whatever protocol you are using in a framed transport. It doesn't actually wrap your chosen protocol in a TFramedTransport, but it does prepend the length of the frame to the content that it writes, and expects the server to prepend the length of the response as well. This wasn't documented anywhere I found, but if you look at the source code and experiment with different combinations, you will find that with TSimpleServer you must use TFramedTransport to get it to work with an async client.
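For reference, a minimal sketch of that async client setup, reusing the question's generated ZKProtoService (host and port are placeholders):

public static ZKProtoService.AsyncClient createAsyncClient(String host, int port)
        throws IOException {
    // TNonblockingSocket frames messages itself, so the server must be
    // configured with a framed transport factory for this to work.
    TAsyncClientManager clientManager = new TAsyncClientManager();
    TNonblockingSocket transport = new TNonblockingSocket(host, port);
    return new ZKProtoService.AsyncClient(
            new TBinaryProtocol.Factory(), clientManager, transport);
}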
By the way, it's also worth noting that the docs say a TNonblockingServer must use TFramedTransport in the outermost layer of the transport. However, the examples don't show this being set in TNonblockingServer.Args, yet you still find that you must use TFramedTransport on the client side to successfully execute an RPC on the server. This is because TNonblockingServer.Args has its input and output transports set to TFramedTransport by default (you can see this by using reflection to inspect the fields of the superclass hierarchy, or in the source code for the constructor of AbstractNonblockingServerArgs). You can override the input and output transports, but the server will likely fail for the reasons discussed in the documentation.
When the issue happens with framed transport but everything works without it, then you have an incompatible protocol stack on the two ends. Choose one of the following:
either modify the server code to use framed as well
or do not use framed on the client
A good rule of thumb is to always use the exact same protocol/transport stack on both ends. In this particular case it blows up because framed transport adds a four-byte header holding the size of the message that follows. If the server does not use framed transport, these additional four bytes sent by the client will be (wrongly) interpreted as part of the message.
Although the sample code in the answer to "TNonblockingServer in thrift crashes when TFramedTransport opens" is for C++, adding framed transport on the server should be very similar in Java; a rough sketch for the question's server follows.
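For the question's Java server, the change would look roughly like this (a sketch; ControlProcessor and DEFAULT_CONTROL_PORT are the question's own names, and TBinaryProtocol is assumed to match the client):

public static TServer createFramedControlServer() throws TTransportException {
    TServerTransport serverTransport = new TServerSocket(DEFAULT_CONTROL_PORT);
    return new TThreadPoolServer(new TThreadPoolServer.Args(serverTransport)
            .processor(new ControlProcessor())
            // Wrap both directions in framed transport to match the client.
            .transportFactory(new TFramedTransport.Factory())
            .protocolFactory(new TBinaryProtocol.Factory()));
}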
PS: Yes, it is perfectly ok to re-use your handler. A typical handler is a stateless thing.
I am trying to set up my MessageServer class so that it services each client in a separate request (you'll see below that it's pretty linear right now).
How should I go about it?
import java.net.*;
import java.io.*;

public class MessageServer {
    public static final int PORT = 6100;

    public static void main(String[] args) {
        Socket client = null;
        ServerSocket sock = null;
        BufferedReader reader = null;
        try {
            sock = new ServerSocket(PORT);
            // now listen for connections
            while (true) {
                client = sock.accept();
                reader = new BufferedReader(new InputStreamReader(client.getInputStream()));
                Message message = new MessageImpl(reader.readLine());
                // set the appropriate character counts
                message.setCounts();
                // now serialize the object and write it to the socket
                ObjectOutputStream soos = new ObjectOutputStream(client.getOutputStream());
                soos.writeObject(message);
                System.out.println("wrote message to the socket");
                client.close();
            }
        }
        catch (IOException ioe) {
            System.err.println(ioe);
        }
    }
}
Sorry, but your question doesn't make much sense.
If we are using the term "request" in the normal way, a client sends a request to the server and the server processes each request. It simply makes no sense for a server to not service the requests separately (in some sense).
Perhaps you are asking something different. (Do you mean, "service each client request in a separate thread"?) Whatever you mean, please review your terminology.
Given that you are talking about executing requests in different threads, the ExecutorService API is a good choice. Use an implementation class that allows you to put an upper bound on the number of worker threads. If you don't, you open yourself up to problems where overload results in the allocation of large numbers of threads, which only makes the server slower. (Besides, creating new threads is not cheap. It pays to recycle them.)
You should also consider configuring your executor so that it doesn't have a request queue. You want the executor service to block the thread that is trying to submit the job if there isn't a worker available. Let the operating system queue incoming connections / requests at the ServerSocket level. If you queue requests internally, you can run into the situation where you are wasting time by processing requests that the client-side has already timed out / abandoned.
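A minimal sketch of that shape, based on the question's MessageServer. The pool size and the handleClient helper are mine; CallerRunsPolicy is one way to approximate blocking the submitter, since the accepting thread ends up running the task itself when every worker is busy:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadedMessageServer {
    public static final int PORT = 6100;
    private static final int MAX_WORKERS = 20; // assumed upper bound

    public static void main(String[] args) throws IOException {
        // No internal request queue: SynchronousQueue hands tasks straight to idle
        // workers, and CallerRunsPolicy makes the accepting thread run the task
        // itself when all workers are busy, so it stops accepting in the meantime.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                MAX_WORKERS, MAX_WORKERS, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>(),
                new ThreadPoolExecutor.CallerRunsPolicy());

        ServerSocket sock = new ServerSocket(PORT);
        while (true) {
            final Socket client = sock.accept();
            executor.execute(new Runnable() {
                public void run() {
                    handleClient(client); // the per-client logic from the original main()
                }
            });
        }
    }

    private static void handleClient(Socket client) {
        try {
            // Move the readLine / setCounts / writeObject logic here.
        } finally {
            try {
                client.close();
            } catch (IOException ignored) {
            }
        }
    }
}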
I intend to use Netty in an upcoming project. This project will act as both client and server. Especially it will establish and maintain many connections to various servers while at the same time serving its own clients.
Now, the documentation for NioServerSocketChannelFactory specifies the threading model for the server side of things fairly well: each bound listen port will require a dedicated boss thread throughout the process, while connected clients will be handled in a non-blocking fashion on worker threads. Specifically, one worker thread will be able to handle multiple connected clients.
However, the documentation for NioClientSocketChannelFactory is less specific. This also seems to utilize both boss and worker threads. However, the documentation states:
One NioClientSocketChannelFactory has one boss thread. It makes a connection attempt on request. Once a connection attempt succeeds, the boss thread passes the connected Channel to one of the worker threads that the NioClientSocketChannelFactory manages.
Worker threads seem to function in the same way as for the server case too.
My question is, does this mean that there will be one dedicated boss thread for each connection from my program to an external server? How will this scale if I establish hundreds, or thousands of such connections?
As a side note. Are there any adverse side effects for re-using a single Executor (cached thread pool) as both the bossExecutor and workerExecutor for a ChannelFactory? What about also re-using between different client and/or server ChannelFactory instances? This is somewhat discussed here, but I do not find those answers specific enough. Could anyone elaborate on this?
This is not a real answer to your question about how the Netty client thread model works. But you can use the same NioClientSocketChannelFactory to create a single ClientBootstrap with multiple ChannelPipelineFactorys, and in turn make a large number of connections. Take a look at the example below.
public static void main(String[] args)
{
    String host = "localhost";
    int port = 8090;
    ChannelFactory factory = new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(), Executors.newCachedThreadPool());

    MyHandler handler1 = new MyHandler();
    PipelineFactory factory1 = new PipelineFactory(handler1);

    AnotherHandler handler2 = new AnotherHandler();
    PipelineFactory factory2 = new PipelineFactory(handler2);

    ClientBootstrap bootstrap = new ClientBootstrap(factory);
    // At the client side the option is tcpNoDelay; at the server it is child.tcpNoDelay
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);

    for (int i = 1; i <= 50; i++) {
        if (i % 2 == 0) {
            bootstrap.setPipelineFactory(factory1);
        } else {
            bootstrap.setPipelineFactory(factory2);
        }

        ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));

        future.addListener(new ChannelFutureListener()
        {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception
            {
                future.getChannel().write("SUCCESS");
            }
        });
    }
}
It also shows how different pipeline factories can be set for different connections, so based on the connection you make you can tweak your encoders/decoders in the channel pipeline.
I am not sure your question has been answered. Here's my answer: there is a single boss thread that manages all the pending CONNECTs in your app simultaneously. It uses NIO to process all the current connects in a single (boss) thread, and then hands each successfully connected channel off to one of the workers.
Your question mainly concerns performance. Single threads scale very well on the client.
Oh, and Nabble has been closed. You can still browse the archive there.