How can I implement an AWS S3 client in a thread-safe manner? - java

Hi, I have a method that is executed by multiple threads concurrently to connect to S3 bucket objects and read their metadata. All of those calls use a single S3 client object. Based on the Amazon Java SDK documentation I found that S3 clients are thread-safe objects. Can the following implementation cause any deadlock or performance issue? Is this the correct way to use an S3 client from multiple threads?
public class S3Client {

    // static method that returns the s3 client for all the requests
    public static AmazonS3 getS3Client() {
        return AmazonS3ClientBuilder.standard().withRegion(Regions.DEFAULT_REGION).build();
    }
}
And there is another class (RequestHandler -> readObject method) that will be executed by multiple threads concurrently, hence it will be executed for each and every request.
public class RequestHandler {

    // Multiple concurrent threads are accessing this method
    public void readObject() {
        AmazonS3 s3Client = S3Client.getS3Client();
        ListObjectsV2Result result = s3Client.listObjectsV2("bucket_name");
    }
}
Please advise. Thanks in advance!

Let's go one by one:
The builders in the Java AWS S3 SDK are generally not thread safe, so try not to use S3Client#getS3Client() as written in a multi-threaded environment.
AmazonS3Client is annotated with @ThreadSafe. This is an annotation in the Java AWS SDK that marks a class as thread-safe. So there is no need to create an object factory like the one you wrote; you can have a single AmazonS3Client singleton per application. In the example above you clearly create a new instance on each and every RequestHandler#readObject() invocation. That is not only unsafe, it will likely cause a performance issue as well, since you will create a lot of AmazonS3Client instances, which puts extra pressure on your Java app's garbage collection.
You can solve pretty much all of this by just using the singleton pattern, i.e. create the AmazonS3Client as a singleton object, either via Spring or any other IoC framework, or by yourself, for example with double-checked locking. This way you will achieve thread safety along with relatively good performance (in comparison to the code in the question).
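A minimal sketch of that double-checked locking variant might look like the following (the holder class name is just for illustration; the region setup is copied from the question):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public final class S3ClientHolder {

    // volatile is required for double-checked locking to be safe
    private static volatile AmazonS3 s3Client;

    private S3ClientHolder() {
    }

    public static AmazonS3 getS3Client() {
        AmazonS3 client = s3Client;
        if (client == null) {
            synchronized (S3ClientHolder.class) {
                client = s3Client;
                if (client == null) {
                    // the builder is confined to this synchronized block,
                    // so its lack of thread safety does not matter here
                    client = AmazonS3ClientBuilder.standard()
                            .withRegion(Regions.DEFAULT_REGION)
                            .build();
                    s3Client = client;
                }
            }
        }
        return client;
    }
}

Every thread in RequestHandler#readObject() can then call S3ClientHolder.getS3Client() and share the same client and its connection pool.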
Hope it helped, have a nice day!)

Related

How can I know if an AmazonS3Client has been shutdown?

Is there a way I can tell if an AmazonS3Client has been shutdown?
Below is the scenario I'm trying to achieve, to avoid creating new clients every time and to make sure that if some other component shuts it down (by mistake) it won't break the next requests.
private AmazonS3ClientBuilder createBuilder() {
    return AmazonS3ClientBuilder.standard()
            .withCredentials(InstanceProfileCredentialsProvider.getInstance())
            .withRegion(clientRegion);
}

public synchronized AmazonS3 buildAmazonClient() {
    if (s3Client == null || s3Client.*IS_SHUTDOWN?*)
        s3Client = createBuilder().build();
    return s3Client;
}
According to Amazon SDK Developer Guide:
Service clients in the SDK are thread-safe and, for best performance, you should treat them as long-lived objects. Each client has its own connection pool resource
and Amazon SDK Reference:
This is an optional method, and callers are not expected to call it, but can if they want to explicitly release any open resources
So although there doesn't seem to be any method to check whether it's already shut down (i.e. whether the client can no longer be used to make requests), it seems you could manage it yourself, but you don't really need to:
Since it's a long-lived object, you shouldn't be creating too many instances, and you can optionally invoke shutdown once you no longer plan on accessing them.
If you really have a use case in which you need to instantiate and kill different instances throughout the lifetime of your application, I'd suggest you keep tabs on your shutdown invocations so you can tell if a client has been shut down already (although once the resources are freed, there shouldn't be a real need to keep a reference to a shut-down client any longer...)
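If you do end up managing it yourself, a rough sketch of "keeping tabs" could be a small wrapper like the one below (ManagedS3Client is a made-up name, and this only works if every shutdown call goes through the wrapper rather than directly through the raw client):

import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ManagedS3Client {

    private final String clientRegion;
    private AmazonS3 s3Client;
    private boolean shutDown;

    public ManagedS3Client(String clientRegion) {
        this.clientRegion = clientRegion;
    }

    public synchronized AmazonS3 get() {
        if (s3Client == null || shutDown) {
            s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(InstanceProfileCredentialsProvider.getInstance())
                    .withRegion(clientRegion)
                    .build();
            shutDown = false;
        }
        return s3Client;
    }

    // route all shutdowns through here so the flag stays accurate
    public synchronized void shutdown() {
        if (s3Client != null && !shutDown) {
            s3Client.shutdown();
            shutDown = true;
        }
    }
}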

Multiple HTTP request safe Multithreaded Spring boot server

In a Spring Boot server, my REST controller class is multithreaded and many threads share (read and write) a variable for the same HTTP request. The program is thread-safe for a single HTTP request.
But I have some static variables in the program which I use to share information between the Java threads of the same HTTP request.
I understand that this would create an issue when multiple HTTP requests arrive together.
How should I make the program/server accept multiple HTTP requests simultaneously and yet not mess up the variables 'logger', 'abc' and 'xyz' for two different HTTP requests?
How must one declare and use these variables?
@RestController
public class ABC {

    private static final Logger logger = LoggerFactory.getLogger(ABC.class);
    private static volatile byte[] abc = null;
    static volatile boolean xyz = true;

    // Multithreaded program where multiple threads read and write
    // both 'abc' and 'xyz'
The standard HTTP model is one thread per request.
The new reactive model uses Netty and an event loop.
You know full well that shared, mutable data is a problem for multi-thread access. You'll have to write your code so it's thread safe. Use the java.util.concurrent package.
It's a mistake to deviate from the model of choice; even smart people have a hard time writing multi-threaded code that's correct.
Spring has some features to help deal with that, but it's hard to make a suggestion as you haven't shared your use case...
To solve this particular problem you can also use AtomicReference, which does not use locks.
Here's a good tutorial on how to use it:
http://tutorials.jenkov.com/java-util-concurrent/atomicreference.html
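For illustration, a minimal sketch of that idea applied to fields like abc and xyz (the class name SharedState is hypothetical):

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

public class SharedState {

    // atomic holders give safe reads/writes between threads without explicit locks
    private static final AtomicReference<byte[]> abc = new AtomicReference<>();
    private static final AtomicBoolean xyz = new AtomicBoolean(true);

    public static void update(byte[] newAbc) {
        abc.set(newAbc);                 // atomic write, visible to all threads
        xyz.compareAndSet(true, false);  // flips only if it is still true
    }

    public static byte[] read() {
        return abc.get();
    }
}

Note that atomics make the individual reads and writes safe, but static fields are still shared by every HTTP request; state that must differ per request belongs in method-local variables or otherwise request-scoped objects rather than in statics, in line with the previous answer's point about shared mutable data.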

is hadoop DistributedFileSystem thread safe?

I am using Hadoop for writing data I scrape.
I have a spring service that is called from multiple threads to write some content to the HDFS.
@Service
public class WriteService
{
    public void write(String path, String content)
    {
        FileSystem fs = FileSystem.get(conf);
    }
}
I am not sure whether the FileSystem object can be a member of the WriteService, and I can't find out whether it is thread safe or not.
I am using the DistributedFileSystem object.
Do you know if it is thread-safe and I can use it as a member to my service?
Thank you
Hadoop DFS uses a so-called WORM (write once, read many) model. This makes it more robust when it comes to concurrency issues.
But, to answer the question, it is not safe in general. You still need to think about concurrency control requirements.
If config.setBoolean("fs.hdfs.impl.disable.cache", true); is set first, FileSystem.get(config) can be used in multiple threads, because each call then returns its own FileSystem instance instead of a shared cached one.
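As a sketch of that suggestion (the default filesystem URI and other settings are assumed to come from your environment; note that creating a FileSystem per call is not free, so you may prefer one instance per thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteService {

    public void write(String path, String content) throws Exception {
        Configuration conf = new Configuration();
        // disable the shared FileSystem cache so this call gets its own instance
        conf.setBoolean("fs.hdfs.impl.disable.cache", true);

        // with the cache disabled, fs is confined to the current thread
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path(path))) {
            out.writeUTF(content);
        }
    }
}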

Does a thread-safe statsd client exist?

I need to use a thread-safe statsd client in a web application to monitor user threads for the statistics. Please suggest a solution that is both thread safe and does not compromise performance.
What about the Java client StatsdClient.java in the examples directory? They use synchronization in their Java client implementation. In the class comment is a usage example:
StatsdClient client = new StatsdClient("statsd.example.com", 8125);
// increment by 1
client.increment("foo.bar.baz");
increment(String) delegates to
increment(String, int) which delegates to
increment(String, int, double) which generates a string to invoke
send(double, String...) which finally does a call to
sendTo(String) which uses the modifier synchronized
This applies to the other methods in this class, too.
Granted, declaring all your relevant methods as synchronized does not by itself make a class thread-safe, but I believe Etsy took that into account here and designed it accordingly.
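Assuming that example StatsdClient class is on your classpath, sharing one instance across threads could look roughly like this:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StatsdExample {

    public static void main(String[] args) throws Exception {
        // one shared client; its send path is synchronized internally
        StatsdClient client = new StatsdClient("statsd.example.com", 8125);

        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++) {
            pool.execute(() -> client.increment("foo.bar.baz"));
        }
        pool.shutdown();
    }
}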

Play! Framework: Reuse of instances over multiple requests

I'm developing an application using the Play! Framework which makes heavy use of the javax.script package, including ScriptEngine. ScriptEngines are expensive to create, so it would make sense to reuse them across multiple requests (I wouldn't mind having multiple ScriptEngines, e.g. one per thread - at least I won't be creating ScriptEngines for each request over and over).
I think this case is not restricted to ScriptEngines; there might be something in the framework I'm not aware of that handles such cases.
Thank you for any ideas you have!
Malax
Play is stateless, so there is no "session-like" mechanism to link an object to a user. You have two alternatives:
Use the Cache. Store the ScriptEngine in the cache with a unique ID, and add a method that checks if it's still there. Something like:
public static ScriptEngine getScriptEngine(Long userId) {
    String key = "MY_ENGINE" + userId;
    ScriptEngine eng = (ScriptEngine) Cache.get(key);
    if (eng == null) {
        // create the engine however your application does it,
        // e.g. via javax.script.ScriptEngineManager
        eng = new ScriptEngineManager().getEngineByName("JavaScript");
        Cache.put(key, eng);
    }
    return eng;
}
Or create a singleton object that contains a static instance of the ScriptEngine so it's always there once the server starts.
I would say the Cache one is the best approach.
EDIT: regarding your comment, this will depend on the situation:
If you want to reuse an Engine across multiple requests of a single user (that is, each user has their own ScriptEngine to work with), the cache method works, as the cache links the Engine to the user id. This would solve any threading issue too.
Otherwise, if you want to reuse it across multiple requests of multiple users, the static method is a better approach (a sketch of such a static holder follows below). But as you mention, the access won't be thread safe, in Play or in any system.
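A rough sketch of that static-instance option; the engine name "JavaScript" is only an assumption, and evaluation is synchronized here because most engines are not thread safe themselves:

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public final class SharedScriptEngine {

    // one engine for the whole application, created when the class is first loaded
    private static final ScriptEngine ENGINE =
            new ScriptEngineManager().getEngineByName("JavaScript");

    private SharedScriptEngine() {
    }

    // serialise evaluation, since most engines are not thread safe
    public static synchronized Object eval(String script) throws ScriptException {
        return ENGINE.eval(script);
    }
}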
I'm thinking your best bet is to work asynchronously with them. I don't know how you will use the ScriptEngines, but try to do something like this:
On request, store an entry in a table from the db marking a ScriptEngine processing request
In the same request, launch an asynchronous job (or have on running every 30 seconds)
The job will read the first entry of the table, remove it, do the task, return answer to the user. The job may have a pool of ScriptEngine to work with.
As jobs are not launched again while a current job is working, if you have enough requests the job will never cease working. If it does stop, it means that you don't need engines at that time, and they will be recreated on demand.
This way you work linearly with a pool, ignoring threading issues. If you can't do this, then you need to fix the thread-safety of your ScriptEngine, as you can't expect to share an object that is not thread safe in a server environment which spawns multiple threads :)
Why don't you implement a script pool, so that each request gets an instance from the pool, the same way as with a JDBC connection pool?
But make sure the ScriptEngine is stateless.
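A minimal sketch of such a pool built on a BlockingQueue (ScriptEnginePool and its engine-name parameter are hypothetical):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class ScriptEnginePool {

    private final BlockingQueue<ScriptEngine> pool;

    public ScriptEnginePool(int size, String engineName) {
        this.pool = new ArrayBlockingQueue<>(size);
        ScriptEngineManager manager = new ScriptEngineManager();
        for (int i = 0; i < size; i++) {
            pool.add(manager.getEngineByName(engineName));
        }
    }

    // borrow an engine, evaluate, and always give it back
    public Object eval(String script) throws InterruptedException, ScriptException {
        ScriptEngine engine = pool.take();   // blocks until an engine is free
        try {
            return engine.eval(script);
        } finally {
            pool.put(engine);
        }
    }
}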
