The use of threads and the Java Future interface in AWS Lambda - Java

I want to create an AWS Lambda function in Java that writes to a database in Firestore. The short story is that, while the code does what it should when I execute it on my own computer using NetBeans (truth be told, it works most of the time, but not always, maybe due to problems with my internet connection), nothing at all happens when I deploy it as a Lambda function and invoke it. I suspect that this has less to do with Firestore itself than with how AWS Lambda handles asynchronous operations.
Now to the details!
As a simple example, the method that writes to the Firestore object db reads
public static void writeFirestore(Firestore db){
try{
DateTime now = DateTime.now();
String time = now.toString();
Map<String, String> data = new HashMap<>();
data.put("time", time);
String collTitle = "Notebook";
String docTitle = "Document: "+time;
db.collection(collTitle).document(docTitle).set(data);
System.out.println("wrote to Firestore");
}
catch(Exception e){
System.out.println("Could not write to db: "+e.toString());
}
}
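As a side note, in the Cloud Firestore Java client set() does not write synchronously: it returns an ApiFuture<WriteResult> that completes once the server acknowledges the write. A blocking variant of the method would look roughly like this (the 30-second timeout is just an example value; I have not verified whether this changes the behaviour on Lambda):
public static void writeFirestoreBlocking(Firestore db){
    try{
        Map<String, String> data = new HashMap<>();
        String time = DateTime.now().toString();
        data.put("time", time);
        // set() returns an ApiFuture<WriteResult>; get() blocks until the write is acknowledged
        ApiFuture<WriteResult> result =
                db.collection("Notebook").document("Document: " + time).set(data);
        System.out.println("write acknowledged at " + result.get(30, TimeUnit.SECONDS).getUpdateTime());
    }
    catch(Exception e){
        System.out.println("Could not write to db: " + e);
    }
}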
Now, as it takes some time to connect to Firestore and initialize db, I want to make sure that db is not passed as an argument into writeFirestore() before it has been properly retrieved. So, I define a version of db in the form of a Future object, using an ExecutorService, and then retrieve the object db with the get() method. For this, I define the class TaskRunner:
public class TaskRunner {
ExecutorService executor;
public TaskRunner(){
executor = Executors.newSingleThreadExecutor();
}
public static interface Callback<T>{
public void onCallback(T result);
}
public <T> void executeAsync(Callable<T> callable, Callback<T> callback) throws Exception{
try{
Future<T> future = executor.submit(callable);
T result = future.get();
if(result != null){
System.out.println("result is not null; applying callback...");
callback.onCallback(result);
}
else{
System.out.println("result is null");
}
}
catch(Exception e){
System.out.println("Problem running executeAsync: "+e.toString());
}
}
}
Writing the example document to my Firestore database db now goes as follows:
I define the class FirestoreCreator, which implements Callable, with the purpose of retrieving the Firestore object db:
public static class FirestoreCreator implements Callable<Firestore>{
@Override
public Firestore call() throws Exception {
String projectId = "myProjectId";
GoogleCredentials credentials =
GoogleCredentials.fromStream(new FileInputStream("myCredentialsFile.json"));
FirestoreOptions firestoreOptions = FirestoreOptions.getDefaultInstance()
.toBuilder()
.setProjectId(projectId)
.setCredentials(credentials)
.build();
Firestore db = firestoreOptions.getService();
return db;
}
}
I implement the TaskRunner.Callback interface using writeFirestore().
I create a TaskRunner object, taskRunner, and call its executeAsync() method with the above two objects as parameters.
These three steps are collected in the final method testUpdateFirestoreInterface() that does the job:
public static void testUpdateFirestoreInterface(){
FirestoreCreator fsCreator = new FirestoreCreator();
TaskRunner.Callback<Firestore> updateCallback = new TaskRunner.Callback<Firestore>() {
@Override
public void onCallback(Firestore result) {
writeFirestore(result);
}
};
TaskRunner taskRunner = new TaskRunner();
try {
taskRunner.executeAsync(fsCreator, updateCallback);
} catch (Exception ex) {
System.out.println("Failed to run executeAsync");
}
}
As I already mentioned in the introduction, the code works (most of the time) when I run it on my computer, but not at all in AWS Lambda: no exception is thrown, and yet no document is written to Firestore.
The discussion about threads in AWS Lambda (https://dzone.com/articles/multi-threaded-programming-with-aws-lambda) made me suspect that the reason is that the thread started when the ExecutorService is used is not being handled properly.
Does anyone know what goes wrong and what a solution could look like?
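One idea I have considered, but not yet verified on Lambda, is to do all the waiting inside the handler itself, so that nothing is still running on a background thread when handleRequest() returns. A rough sketch (the handler class name and the timeout are placeholders of mine):
public class FirestoreLambdaHandler implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object input, Context context) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            // block until the Firestore client is ready, then write before returning
            Future<Firestore> dbFuture = executor.submit(new FirestoreCreator());
            Firestore db = dbFuture.get(30, TimeUnit.SECONDS);
            writeFirestore(db);
            return "done";
        } catch (Exception e) {
            System.out.println("Handler failed: " + e);
            return "failed";
        } finally {
            executor.shutdown();
        }
    }
}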

Related

Implementing Spring + Apache Flink project with Postgres

I have a Spring Boot Gradle project using Apache Flink to process datastream signals. When a new signal comes through the datastream, I would like to look it up (i.e. findById()) by an ID in a Postgres database table which is already created, in order to get additional information about the signal and enrich the data. I would like to avoid using Spring dependencies to perform the lookup (i.e. autowiring a repository) and want to stick with a Flink implementation for the lookup.
Where can I specify the Postgres connection config information such as port, database, URL, username, password etc.? (For simplicity, you can assume the Postgres DB is local on my machine.) Is it as simple as adding the configuration to the application.properties file? If so, how can I write the query method to look up the record in the Postgres table when searching by a non-primary-key value?
Some online sources suggest using this skeleton code, but I am not sure how/if it fits my use case. (I have an EventEntity model created which contains all the params/columns from the table which I'm looking up.)
like so
public class DatabaseMapper extends RichFlatMapFunction<String, EventEntity> {
// Declare DB connection & query statements
public void open(Configuration parameters) throws Exception {
//Initialize DB connection
//prepare query statements
}
@Override
public void flatMap(String value, Collector<EventEntity> out) throws Exception {
}
}
Your sample code is correct. You can set up all your custom initialization and preparation code for PostgreSQL in the open() method. Then you can use your pre-configured fields in your flatMap() function.
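For the PostgreSQL case, that skeleton could be filled in along these lines. The connection string, table, column names and the EventEntity constructor are placeholders here; in a real job you would pass them in through the constructor or job parameters rather than hard-code them:
public class DatabaseMapper extends RichFlatMapFunction<String, EventEntity> {
    private transient Connection connection;
    private transient PreparedStatement lookupStatement;

    @Override
    public void open(Configuration parameters) throws Exception {
        // initialize the DB connection once per task instance
        connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
        // prepare the lookup query (searching by a non-primary-key column)
        lookupStatement = connection.prepareStatement(
                "SELECT id, name, type FROM events WHERE signal_id = ?");
    }

    @Override
    public void flatMap(String value, Collector<EventEntity> out) throws Exception {
        lookupStatement.setString(1, value);
        try (ResultSet rs = lookupStatement.executeQuery()) {
            while (rs.next()) {
                // map the row onto your EventEntity and emit the enriched record
                out.collect(new EventEntity(rs.getLong("id"), rs.getString("name"), rs.getString("type")));
            }
        }
    }

    @Override
    public void close() throws Exception {
        if (lookupStatement != null) lookupStatement.close();
        if (connection != null) connection.close();
    }
}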
Here is one sample for Redis operations.
I have used RichAsyncFunction here and I suggest you do the same, as it is the suggested best practice. Read here for more: https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/stream/operators/asyncio.html
You can pass configuration parameters in your constructor and use them in your initialization process:
public static class AsyncRedisOperations extends RichAsyncFunction<Object,Object> {
private JedisPool jedisPool;
private Configuration redisConf;
public AsyncRedisOperations(Configuration redisConf) {
this.redisConf = redisConf;
}
@Override
public void open(Configuration parameters) {
JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
jedisPoolConfig.setMaxTotal(this.redisConf.getInteger("pool", 8));
jedisPoolConfig.setMaxIdle(this.redisConf.getInteger("pool", 8));
jedisPoolConfig.setMaxWaitMillis(this.redisConf.getInteger("maxWait", 0));
JedisPool jedisPool = new JedisPool(jedisPoolConfig,
this.redisConf.getString("host", "192.168.10.10"),
this.redisConf.getInteger("port", 6379), 5000);
try {
this.jedisPool = jedisPool;
this.logger.info("Redis connected: " + jedisPool.getResource().isConnected());
} catch (Exception e) {
this.logger.error(BaseUtil.append("Exception while connecting Redis"));
}
}
@Override
public void asyncInvoke(Object in, ResultFuture<Object> out) {
try (Jedis jedis = this.jedisPool.getResource()) {
// look up the value for the incoming key and complete the async result
String value = jedis.get(in.toString());
this.logger.info("Redis value: " + value);
out.complete(Collections.<Object>singleton(value));
}
}
}

Manipulate with cache as with collection in Spring

I looked at a lot of stuff on the internet but I haven't found any solution for my needs.
Here is sample code which doesn't work, but shows my requirements for better understanding.
@Service
public class FooCachedService {
@Autowired
private MyDataRepository dataRepository;
private static ConcurrentHashMap<Long, Object> cache = new ConcurrentHashMap<>();
public void save(Data data) {
Data savedData = dataRepository.save(data);
if (savedData.getId() != null) {
cache.put(data.getRecipient(), null);
}
}
public Data load(Long recipient) {
Data result = null;
if (!cache.containsKey(recipient)) {
result = dataRepository.findDataByRecipient(recipient);
if (result != null) {
cache.remove(recipient);
return result;
}
}
while (true) {
try {
if (cache.containsKey(recipient)) {
result = dataRepository.findDataByRecipient(recipient);
break;
}
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
return result;
}
}
and data object:
public class Data {
private Long id;
private Long recipient;
private String payload;
// getters and setters
}
As you can see in the code above, I need to implement a service which stores new data into the database and into the cache as well.
The whole algorithm should look something like this:
Some user A creates a POST request to my controller to store data, and it fires the save method of my service.
Another user B, logged in to the system, sends a GET request to my controller, which fires the load method of my service. In this method, the logged-in user's ID sent with the request is compared with the recipients' IDs in the map. If the map contains data for this user, they are fetched with the repository; otherwise the algorithm checks every second whether there is new data for that user (this checking will have some timeout, for example 30 s; after 30 s the request returns empty data, the user creates a new GET request, and so on...).
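To make the intended flow a bit more concrete, here is a rough sketch of what I imagine the load method should do (the 30-second deadline is the example value from above; this only illustrates the requirement):
public Data load(Long recipient) throws InterruptedException {
    long deadline = System.currentTimeMillis() + 30_000; // give up after 30 s
    while (System.currentTimeMillis() < deadline) {
        if (cache.containsKey(recipient)) {
            // new data has arrived for this recipient: clear the marker and fetch it
            cache.remove(recipient);
            return dataRepository.findDataByRecipient(recipient);
        }
        Thread.sleep(1000); // poll once per second
    }
    return null; // nothing new within the timeout; the client sends a new GET later
}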
Can you tell me if it is possible to do this in some elegant way, and how? How should the cache be used for this, or what is the best practice here? I am new to this area, so I will be grateful for any advice.

Mongo Connection is created multiple times in RESTful API and never released

I have written a RESTful API using Apache Jersey. I am using MongoDB as my backend. I used Morphia (v1.3.4) to map and persist POJOs to the database. I tried to follow "1 application, 1 connection" in my API as recommended everywhere, but I am not sure I was successful. I run my API in Tomcat 8. I also ran mongostat to see the details and connections. At the start, mongostat showed 1 connection to the MongoDB server. I tested my API using Postman and it was working fine. I then created a load test in SoapUI where I simulated 100 users per second. I saw the update in mongostat: there were 103 connections. Here is the gif which shows this behaviour.
I am not sure why there are so many connections. The interesting fact is that the number of Mongo connections is directly proportional to the number of users I create in SoapUI. Why is that? I found other similar questions, but I think I have already implemented their suggestions:
Mongo connection leak with morphia
Spring data mongodb not closing mongodb connections
My code looks like this.
DatabaseConnection.java
// Some imports
public class DatabaseConnection {
private static volatile MongoClient instance;
private static String cloudhost="localhost";
private DatabaseConnection() { }
public synchronized static MongoClient getMongoClient() {
if (instance == null ) {
synchronized (DatabaseConnection.class) {
if (instance == null) {
ServerAddress addr = new ServerAddress(cloudhost, 27017);
List<MongoCredential> credentialsList = new ArrayList<MongoCredential>();
MongoCredential credentia = MongoCredential.createCredential(
"test", "test", "test".toCharArray());
credentialsList.add(credentia);
instance = new MongoClient(addr, credentialsList);
}
}
}
return instance;
}
}
PourService.java
@Secured
@Path("pours")
public class PourService {
final static Logger logger = Logger.getLogger(Pour.class);
private static final int POUR_SIZE = 30;
@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createPour(String request)
{
WebApiResponse response = new WebApiResponse();
Gson gson = new GsonBuilder().setDateFormat("dd/MM/yyyy HH:mm:ss").create();
String message = "Pour was not created.";
HashMap<String, Object> data = null;
try
{
Pour pour = gson.fromJson(request, Pour.class);
// Storing the pour to
PourRepository pourRepository = new PourRepository();
String id = pourRepository.createPour(pour);
data = new HashMap<String, Object>();
if ("" != id && null != id)
{
data.put("id", id);
message = "Pour was created successfully.";
logger.debug(message);
return response.build(true, message, data, 200);
}
logger.debug(message);
return response.build(false, message, data, 500);
}
catch (Exception e)
{
message = "Error while creating Pour.";
logger.error(message, e);
return response.build(false, message, new Object(),500);
}
}
}
PourDao.java
public class PourDao extends BasicDAO<Pour, String>{
public PourDao(Class<Pour> entityClass, Datastore ds) {
super(entityClass, ds);
}
}
PourRepository.java
public class PourRepository {
private PourDao pourDao;
final static Logger logger = Logger.getLogger(PourRepository.class);
public PourRepository ()
{
try
{
MongoClient mongoClient = DatabaseConnection.getMongoClient();
Datastore ds = new Morphia().map(Pour.class)
.createDatastore(mongoClient, "tilt45");
pourDao = new PourDao(Pour.class,ds);
}
catch (Exception e)
{
logger.error("Error while creating PourDao", e);
}
}
public String createPour (Pour pour)
{
try
{
return pourDao.save(pour).getId().toString();
}
catch (Exception e)
{
logger.error("Error while creating Pour.", e);
return null;
}
}
}
When I work with Mongo + Morphia I get better results using a Factory pattern for the Datastore and not for the MongoClient. For instance, check the following class:
public class DatastoreFactory {
private final Datastore datastore;
public DatastoreFactory(String dbHost, int dbPort, String dbName) {
final Morphia morphia = new Morphia();
MongoClientOptions.Builder options = MongoClientOptions.builder().socketKeepAlive(true);
morphia.getMapper().getOptions().setStoreEmpties(true);
final Datastore store = morphia.createDatastore(new MongoClient(new ServerAddress(dbHost, dbPort), options.build()), dbName);
store.ensureIndexes();
this.datastore = store;
}
public Datastore getDatastore() {
return datastore;
}
}
With that approach, every time you need a Datastore you can use the one provided by the factory. Of course, this can be implemented better if you use a framework/library that supports the factory pattern (e.g. HK2 with org.glassfish.hk2.api.Factory), and also singleton binding.
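With HK2 (as used by Jersey), for example, that could look roughly like the sketch below; the provider class name and the binder line are my own, so treat them as an illustration rather than a drop-in solution:
public class DatastoreProvider implements org.glassfish.hk2.api.Factory<Datastore> {
    @Override
    public Datastore provide() {
        // host, port and database name are placeholders
        return new DatastoreFactory("localhost", 27017, "tilt45").getDatastore();
    }

    @Override
    public void dispose(Datastore instance) {
        // nothing to release here; the MongoClient owns the connection pool
    }
}

// in your AbstractBinder / ResourceConfig:
// bindFactory(DatastoreProvider.class).to(Datastore.class).in(Singleton.class);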
Besides, you can check the documentation of the MongoClientOptions builder; perhaps you can find better connection control options there.
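For instance, something along these lines; the numbers are arbitrary and should be tuned to your load:
MongoClientOptions.Builder options = MongoClientOptions.builder()
        .socketKeepAlive(true)
        .connectionsPerHost(20)         // cap the pool size per host
        .minConnectionsPerHost(2)       // keep a few connections warm
        .maxConnectionIdleTime(60000);  // close idle connections after a minute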

Concurrency on Vertx

I have joined the ranks of Vert.x lovers. However, the single-threaded main frame may not work for me, because my server might get 50 file download requests at any moment. As a workaround I have created this class:
public abstract class BackgroundExecutor<T> {
public abstract T onRun() throws Exception;
public abstract void onSuccess(T result);
public abstract void onException();
private static final int poolSize = Runtime.getRuntime().availableProcessors();
private static final long maxExecuteTime = 120000;
private static WorkerExecutor mExecutor;
private static final String BG_THREAD_TAG = "BG_THREAD";
protected RoutingContext ctx;
private boolean isThreadInBackground(){
return Thread.currentThread().getName() != null && Thread.currentThread().getName().equals(BG_THREAD_TAG);
}
//on success will not be called if exception be thrown
public BackgroundExecutor(RoutingContext ctx){
this.ctx = ctx;
if(mExecutor == null){
mExecutor = MyVertxServer.vertx.createSharedWorkerExecutor("my-worker-pool",poolSize,maxExecuteTime);
}
if(!isThreadInBackground()){
/** we are unlocking the lock before res.succeeded , because it might take long and keeps any thread waiting */
mExecutor.executeBlocking(future -> {
try{
Thread.currentThread().setName(BG_THREAD_TAG);
T result = onRun();
future.complete(result);
}catch (Exception e) {
GUI.display(e);
e.printStackTrace();
onException();
future.fail(e);
}
/** false here means they should not be parallel , and will run without order multiple times on same context*/
},false, res -> {
if(res.succeeded()){
onSuccess((T)res.result());
}
});
}else{
GUI.display("AVOIDED DUPLICATE BACKGROUND THREADING");
System.out.println("AVOIDED DUPLICATE BACKGROUND THREADING");
try{
T result = onRun();
onSuccess((T)result);
}catch (Exception e) {
GUI.display(e);
e.printStackTrace();
onException();
}
}
}
}
allowing the handlers to extend it and use it like this
public abstract class DefaultFileHandler implements MyHttpHandler{
public abstract File getFile(String suffix);
@Override
public void Handle(RoutingContext ctx, VertxUtils utils, String suffix) {
new BackgroundExecutor<Void>(ctx) {
@Override
public Void onRun() throws Exception {
File file = getFile(URLDecoder.decode(suffix, "UTF-8"));
if(file == null || !file.exists()){
utils.sendResponseAndEnd(ctx.response(),404);
return null;
}else{
utils.sendFile(ctx, file);
}
return null;
}
@Override
public void onSuccess(Void result) {}
@Override
public void onException() {
utils.sendResponseAndEnd(ctx.response(),404);
}
};
}
And here is how I initialize my Vert.x server:
vertx.deployVerticle(MainDeployment.class.getCanonicalName(),res -> {
if (res.succeeded()) {
GUI.display("Deployed");
} else {
res.cause().printStackTrace();
}
});
server.requestHandler(router::accept).listen(port);
And here is my MainDeployment class:
public class MainDeployment extends AbstractVerticle{
@Override
public void start() throws Exception {
// Different ways of deploying verticles
// Deploy a verticle and don't wait for it to start
for(Entry<String, MyHttpHandler> entry : MyVertxServer.map.entrySet()){
MyVertxServer.router.route(entry.getKey()).handler(new Handler<RoutingContext>() {
@Override
public void handle(RoutingContext ctx) {
String[] handlerID = ctx.request().uri().split(ctx.currentRoute().getPath());
String suffix = handlerID.length > 1 ? handlerID[1] : null;
entry.getValue().Handle(ctx, new VertxUtils(), suffix);
}
});
}
}
}
This is working just fine when and where I need it, but I still wonder if there is a better way to handle concurrency like this on Vert.x. If so, an example would be really appreciated. Thanks a lot.
I don't fully understand your problem and reasons for your solution. Why don't you implement one verticle to handle your http uploads and deploy it multiple times? I think that handling 50 concurrent uploads should be a piece of cake for vert.x.
When deploying a verticle using a verticle name, you can specify the number of verticle instances that you want to deploy:
DeploymentOptions options = new DeploymentOptions().setInstances(16);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
This is useful for scaling easily across multiple cores. For example, you might have a web-server verticle to deploy and multiple cores on your machine, so you want to deploy multiple instances to utilise all the cores.
http://vertx.io/docs/vertx-core/java/#_specifying_number_of_verticle_instances
Vert.x is designed so that concurrency issues do not occur.
Generally, Vert.x does not recommend a multi-threaded model (because handling it is not easy).
If you choose a multi-threaded model, you have to think about shared data.
Simply put, if you only want to split the event-loop area, first check the number of CPU cores on your machine and then set up the instance count:
DeploymentOptions options = new DeploymentOptions().setInstances(4);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
But if you have 4 CPU cores, don't set up more than 4 instances.
If you set the number higher than that, performance won't improve.
vertx concurrency reference
http://vertx.io/docs/vertx-core/java/

Rxjava2 + Retrofit2 + Android. Best way to do hundreds of network calls

I have an app with a big button that allows the user to sync all their data at once to the cloud, plus a re-sync feature that allows them to send all their data again (300+ entries).
I am using RxJava2 and Retrofit2. I have my unit test working with a single call. However, I need to make N network calls.
What I want to avoid is having the observable call the next item in a queue. I am at the point where I need to implement my Runnable. I have seen a bit about Maps, but I have not seen anyone use one as a queue. Also, I want to avoid having one item fail and have it reported as if ALL items failed, like the zip operator would do. Should I just write the nasty manager class that keeps track of a queue, or is there a cleaner way to send several hundred items?
NOTE: SOLUTION CANNOT DEPEND ON JAVA8 / LAMBDAS. That has proved to be way more work than is justified.
Note all items are the same object.
@Test
public void test_Upload() {
TestSubscriber<Record> testSubscriber = new TestSubscriber<>();
ClientSecureDataToolKit clientSecureDataToolKit = ClientSecureDataToolKit.getClientSecureDataKit();
clientSecureDataToolKit.putUserDataToSDK(mPayloadSecureDataToolKit).subscribe(testSubscriber);
testSubscriber.awaitTerminalEvent();
testSubscriber.assertNoErrors();
testSubscriber.assertValueCount(1);
testSubscriber.assertCompleted();
}
My helper to gather and send all my items
public class SecureDataToolKitHelper {
private final static String TAG = "SecureDataToolKitHelper";
private final static SimpleDateFormat timeStampSimpleDateFormat =
new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
public static void uploadAll(Context context, RuntimeExceptionDao<EventModel, UUID> eventDao) {
List<EventModel> eventModels = eventDao.queryForAll();
QueryBuilder<EventModel, UUID> eventsQuery = eventDao.queryBuilder();
String[] columns = {...};
eventsQuery.selectColumns(columns);
try {
List<EventModel> models;
models = eventsQuery.orderBy("timeStamp", false).query();
if (models == null || models.size() == 0) {
return;
}
ArrayList<PayloadSecureDataToolKit> toSendList = new ArrayList<>();
for (EventModel eventModel : models) {
try {
PayloadSecureDataToolKit payloadSecureDataToolKit = new PayloadSecureDataToolKit();
if (eventModel != null) {
// map my items ... not shown
toSendList.add(payloadSecureDataToolKit);
}
} catch (Exception e) {
Log.e(TAG, "Error adding payload! " + e + " ..... Skipping entry");
}
}
doAllNetworkCalls(toSendList);
} catch (SQLException e) {
e.printStackTrace();
}
}
my Retrofit stuff
public class ClientSecureDataToolKit {
private static ClientSecureDataToolKit mClientSecureDataToolKit;
private static Retrofit mRetrofit;
private ClientSecureDataToolKit(){
mRetrofit = new Retrofit.Builder()
.baseUrl(Utilities.getSecureDataToolkitURL())
.addCallAdapterFactory(RxJavaCallAdapterFactory.create())
.addConverterFactory(GsonConverterFactory.create())
.build();
}
public static ClientSecureDataToolKit getClientSecureDataKit(){
if(mClientSecureDataToolKit == null){
mClientSecureDataToolKit = new ClientSecureDataToolKit();
}
return mClientSecureDataToolKit;
}
public Observable<Record> putUserDataToSDK(PayloadSecureDataToolKit payloadSecureDataToolKit){
InterfaceSecureDataToolKit interfaceSecureDataToolKit = mRetrofit.create(InterfaceSecureDataToolKit.class);
Observable<Record> observable = interfaceSecureDataToolKit.putRecord(NetworkUtils.SECURE_DATA_TOOL_KIT_AUTH, payloadSecureDataToolKit);
return observable;
}
}
public interface InterfaceSecureDataToolKit {
@Headers({
"Content-Type: application/json"
})
#POST("/api/create")
Observable<Record> putRecord(#Query("api_token") String api_token, #Body PayloadSecureDataToolKit payloadSecureDataToolKit);
}
Update: I have been trying to apply this answer without much luck, and I am running out of steam for tonight. I am trying to implement it as a unit test, like I did for the original single-item call. It looks like something is not right with the use of lambdas, maybe:
public class RxJavaBatchTest {
Context context;
final static List<EventModel> models = new ArrayList<>();
@Before
public void before() throws Exception {
context = new MockContext();
EventModel eventModel = new EventModel();
//manually set all my eventmodel data here.. not shown
eventModel.setSampleId("SAMPLE0");
models.add(eventModel);
// create a fresh object per entry; reusing one instance would add the same object three times
eventModel = new EventModel();
eventModel.setSampleId("SAMPLE1");
models.add(eventModel);
eventModel = new EventModel();
eventModel.setSampleId("SAMPLE3");
models.add(eventModel);
}
@Test
public void testSetupData() {
Assert.assertEquals(3, models.size());
}
@Test
public void testBatchSDK_Upload() {
Callable<List<EventModel> > callable = new Callable<List<EventModel> >() {
@Override
public List<EventModel> call() throws Exception {
return models;
}
};
Observable.fromCallable(callable)
.flatMapIterable(models -> models)
.flatMap(eventModel -> {
PayloadSecureDataToolKit payloadSecureDataToolKit = new PayloadSecureDataToolKit(eventModel);
return doNetworkCall(payloadSecureDataToolKit) // I assume this is just my normal network call.. I am getting incompatibility errors when I apply a testsubscriber...
.subscribeOn(Schedulers.io());
}, true, 1);
}
private Observable<Record> doNetworkCall(PayloadSecureDataToolKit payloadSecureDataToolKit) {
ClientSecureDataToolKit clientSecureDataToolKit = ClientSecureDataToolKit.getClientSecureDataKit();
Observable<Record> observable = clientSecureDataToolKit.putUserDataToSDK(payloadSecureDataToolKit);//.subscribe((Observer<? super Record>) testSubscriber);
return observable;
}
Result is..
An exception has occurred in the compiler (1.8.0_112-release). Please file a bug against the Java compiler via the Java bug reporting page (http://bugreport.java.com) after checking the Bug Database (http://bugs.java.com) for duplicates. Include your program and the following diagnostic in your report. Thank you.
com.sun.tools.javac.code.Symbol$CompletionFailure: class file for java.lang.invoke.MethodType not found
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:compile<MyBuildFlavorhere>UnitTestJavaWithJavac'.
> Compilation failed; see the compiler error output for details.
Edit: No longer trying lambdas. Even after setting up the path on my Mac, pointing JAVA_HOME to 1.8, etc., I could not get it to work. If this were a newer project I would push harder, but as this is an inherited Android application written by web developers trying Android, it is just not a great option, nor is it worth the time sink to get it working. I am already days into this assignment instead of the half day it should have taken.
I could not find a good non-lambda flatMap example. I tried writing one myself and it was getting messy.
If I understand you correctly, you want to make your calls in parallel?
So the rx-y way of doing this would be something like:
Observable.fromCallable(() -> eventsQuery.orderBy("timeStamp", false).query())
.flatMapIterable(models -> models)
.flatMap(model -> {
// map your model
//avoid throwing exceptions in a chain, just return Observable.error(e) if you really need to
//try to wrap your methods that throw exceptions in an Observable via Observable.fromCallable()
return doNetworkCall(someParameter)
.subscribeOn(Schedulers.io());
}, true /*because you don't want to terminate a stream if error occurs*/, maxConcurrent /* specify number of concurrent calls, typically available processors + 1 */)
.subscribe(result -> {/* handle result */}, error -> {/* handle error */});
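Since your update mentions that lambdas are not an option, the same chain can also be written with anonymous classes. This assumes the RxJava 2 io.reactivex types (Function, Consumer); with the RxJava 1 call adapter you would use rx.functions.Func1/Action1 instead:
Observable.fromCallable(new Callable<List<EventModel>>() {
    @Override
    public List<EventModel> call() throws Exception {
        return eventsQuery.orderBy("timeStamp", false).query();
    }
})
.flatMapIterable(new Function<List<EventModel>, Iterable<EventModel>>() {
    @Override
    public Iterable<EventModel> apply(List<EventModel> models) throws Exception {
        return models;
    }
})
.flatMap(new Function<EventModel, Observable<Record>>() {
    @Override
    public Observable<Record> apply(EventModel model) throws Exception {
        return doNetworkCall(new PayloadSecureDataToolKit(model))
                .subscribeOn(Schedulers.io());
    }
}, true, 2) // delayErrors = true, maxConcurrent = 2
.subscribe(new Consumer<Record>() {
    @Override
    public void accept(Record record) throws Exception {
        // handle each successful result
    }
}, new Consumer<Throwable>() {
    @Override
    public void accept(Throwable error) throws Exception {
        // handle errors (reported after all items have been attempted)
    }
});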
In your ClientSecureDataToolKit, move this part into the constructor:
InterfaceSecureDataToolKit interfaceSecureDataToolKit = mRetrofit.create(InterfaceSecureDataToolKit.class);
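That is, something like the following, so the Retrofit proxy is created once and reused for every call (the field name is my own):
private final InterfaceSecureDataToolKit interfaceSecureDataToolKit;

private ClientSecureDataToolKit(){
    mRetrofit = new Retrofit.Builder()
            .baseUrl(Utilities.getSecureDataToolkitURL())
            .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
            .addConverterFactory(GsonConverterFactory.create())
            .build();
    // create the API proxy once instead of on every putUserDataToSDK() call
    interfaceSecureDataToolKit = mRetrofit.create(InterfaceSecureDataToolKit.class);
}

public Observable<Record> putUserDataToSDK(PayloadSecureDataToolKit payloadSecureDataToolKit){
    return interfaceSecureDataToolKit.putRecord(NetworkUtils.SECURE_DATA_TOOL_KIT_AUTH, payloadSecureDataToolKit);
}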
