We have a Spring application deployed on a WebLogic server. The project exposes a REST web service with a POST endpoint; the request body of this web service contains a name. When our web service is called, it internally calls another web service using RestTemplate.
Now we need to pass multiple names (10 or more) in each request, and in turn our web service should call the other web service once per name. We need to make these calls multi-threaded.
So we are using ThreadPoolTaskExecutor. The following is the code for the same.
<bean id="threadPoolTaskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="3"></property>
    <property name="maxPoolSize" value="4"></property>
    <property name="waitForTasksToCompleteOnShutdown" value="true"></property>
</bean>
public class ApplicationDirectorImpl {
    @Autowired
    private ThreadPoolTaskExecutor threadPoolTaskExecutor;
    ....
    ....
    public CustomResponseObject method() throws Exception {
        ....
        ....
        List<Future<FutureCustomResponse>> fList = new ArrayList<Future<FutureCustomResponse>>();
        for (String name : nameList) {
            Future<FutureCustomResponse> fut = threadPoolTaskExecutor.submit(new Task(name));
            fList.add(fut);
        }
        for (Future<FutureCustomResponse> f : fList) {
            // get() blocks until the task completes and may throw
            // InterruptedException or ExecutionException
            FutureCustomResponse fuResponse = f.get();
        }
        // ... build and return the CustomResponseObject from the collected responses
    }
}
public class Task implements Callable<FutureCustomResponse> {
    private String name;

    public Task(String name) {
        this.name = name;
    }

    @Override
    public FutureCustomResponse call() throws Exception {
        System.out.println("name " + name + " performed by " + Thread.currentThread().getName());
        return NameBuild.getPersonalInfo(this.name);
    }
}
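Since the downstream call is made with RestTemplate, the body of each task might look roughly like the sketch below. The endpoint URL and the String response type are assumptions for illustration; NameBuild.getPersonalInfo presumably wraps something similar.
import java.util.concurrent.Callable;
import org.springframework.web.client.RestTemplate;

public class RestCallTask implements Callable<String> {
    private final RestTemplate restTemplate;
    private final String name;

    public RestCallTask(RestTemplate restTemplate, String name) {
        this.restTemplate = restTemplate;
        this.name = name;
    }

    @Override
    public String call() {
        // POSTs the name as the request body and returns the response body.
        // The URL is hypothetical; substitute the real downstream endpoint.
        return restTemplate.postForObject("http://downstream/service/personalInfo", name, String.class);
    }
}
RestTemplate is thread-safe once constructed, so a single shared instance can serve all tasks submitted to the pool.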
Now when a request comes with 5 names, it should be serviced by these 3 threads.
We have 2 servers, so there will be 2 instances of our application. Will the thread pool be created per server, or shared between the 2 servers?
So in total, how many threads will be created: 3 per server or 3 per application?
What is the right way to deploy Hazelcast on a cluster of one REST server and 5 worker machines? Should I start 5 Hazelcast server instances (one on each worker) and 1 HazelcastClient on the REST server?
I have
One REST server machine, which handles all user requests;
Five worker machines in a cluster; each machine keeps some data in its local file system. That data is definitely too big to keep in RAM, so I need Hazelcast only to distribute my search query through the cluster.
I want
On a user request, to search through the data on each of the 5 worker machines and return the result to the user. The user request will be accepted by the REST-server machine, then the REST server will send a search MultiTask to each worker in the cluster. Something like:
public MySearchResult handleUserSearchRequest(String query) {
    MultiTask<String> task = new MultiTask<String>(query, Hazelcast.getCluster().getMembers());
    ExecutorService executorService = Hazelcast.getExecutorService();
    executorService.execute(task);
    Collection<String> results = task.get();
    return results.stream().reduce(/*some logic*/);
}
P.S.
How can I launch all 6 Hazelcast instances from a single place (a Spring Boot application)?
You can simply have a script that runs your main class containing the node startup code the required number of times.
Given your use case, here is sample code for creating a cluster and submitting a task to all the nodes from a Driver class, in your case the REST client.
Run the class below 5 times to create a cluster of 5 nodes under the TCP/IP configuration.
public class WorkerNode {
    public static void main(String[] args) {
        /*
         Create a new Hazelcast node.
         Get the configuration from hazelcast.xml on the classpath, or the default one from the jar.
        */
        HazelcastInstance workerNode = Hazelcast.newHazelcastInstance();
        System.out.println("*********** Started a WorkerNode ***********");
    }
}
Here is the NodeTask containing your business logic to do the IO operations.
public class NodeTask implements Callable<Object>, HazelcastInstanceAware, Serializable {
    private transient HazelcastInstance hazelcastInstance;

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
        this.hazelcastInstance = hazelcastInstance;
    }

    @Override
    public Object call() throws Exception {
        Object returnableObject = "testData";
        // Do all the IO operations here and set the returnable object
        System.out.println("Running the NodeTask on a Hazelcast Node: " + hazelcastInstance.getName());
        return returnableObject;
    }
}
Here is the driver class from your REST client:
public class Driver {
    public static void main(String[] args) throws Exception {
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        IExecutorService executor = client.getExecutorService("executor");
        Map<Member, Future<Object>> result = executor.submitToAllMembers(new NodeTask());
        for (Future<Object> future : result.values()) {
            /*
             Aggregation logic goes here.
            */
            System.out.println("Returned data from node: " + future.get());
        }
        client.shutdown();
        System.exit(0);
    }
}
Sample hazelcast.xml configuration:
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config http://www.hazelcast.com/schema/config/hazelcast-config-3.8.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<network>
<port auto-increment="true" port-count="100">5701</port>
<join>
<multicast enabled="false">
<multicast-group>224.2.2.3</multicast-group>
<multicast-port>54327</multicast-port>
</multicast>
<tcp-ip enabled="true">
<!--Replace this with the IP addresses of the servers -->
<interface>127.0.0.1</interface>
</tcp-ip>
<aws enabled="false"/>
</join>
<interfaces enabled="false">
<interface>127.0.0.1</interface>
</interfaces>
</network>
</hazelcast>
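Regarding the P.S.: for local testing you can also start several member instances from a single main class (or a Spring Boot CommandLineRunner) instead of running the WorkerNode class 5 times. A minimal sketch, assuming the same hazelcast.xml is on the classpath:
import com.hazelcast.core.Hazelcast;

public class LocalCluster {
    public static void main(String[] args) {
        // Starts 5 member instances in one JVM; with auto-increment enabled
        // they bind to ports 5701..5705 and discover each other over TCP/IP.
        for (int i = 0; i < 5; i++) {
            Hazelcast.newHazelcastInstance();
        }
        System.out.println("*********** Started 5 WorkerNodes ***********");
    }
}
In production you would still run one instance per worker machine, as described above.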
I want to handle multiple threads in the Spring MVC model. I have written this code:
@RequestMapping("/indialCall")
@ResponseBody
public String indialCall(HttpServletRequest request) {
    String result = "FAIL";
    try {
        Map<String, String> paramList = commonUtilities.getParamList(request);
        logger.info("indialCall paramList :::" + paramList);
        result = inDialHandler.processIndialWork(paramList);
        logger.info(result);
    } catch (Exception e) {
        logger.error("Error :" + e);
    }
    return result;
}
public String processIndialWork(final Map<String, String> paramList) {
    final Boolean sendSms = Boolean.parseBoolean(paramList.get(constantService.getSendSms()));
    // assume it is always true
    if (true) {
        Thread thread = new Thread(new Runnable() {
            @Override
            public void run() {
                String sessionId = paramList.get(constantService.getInDialSession());
                String msisdn = paramList.get(constantService.getInDialMsisdn());
                // This method will save the entry into the database
                saveMissedCall(callStartDate, sessionId, msisdn, vmnNo, advId, enterpriseId, sendSms, advertiser);
            }
        });
        thread.start();
        return "1";
    }
    return "0"; // unreachable while the condition is hard-coded, but required by the compiler
}
In this code I am creating a new thread on every HTTP request, which is not good for my case, because the system gets 50 requests/sec, and when I look at the CPU usage it is too high.
I am using this thread for async communication so that the calling party gets a response instantly, and the application does the further processing later.
I want to use the Executor service but do not know how to do this. Can someone guide me, or write a few lines of code for me, to implement a correct thread pool executor?
First define a simple taskExecutor in your config file.
<bean id="taskExecutor"
    class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="5" />
    <property name="maxPoolSize" value="10" />
    <property name="waitForTasksToCompleteOnShutdown" value="true" />
</bean>
Create a Spring bean with prototype scope (prototype is important, as you want to give each thread different data); instances of it will run simultaneously.
This bean will implement Runnable, with a run method and a class-level paramList field for passing in the values.
public class MyRunnableBean implements Runnable {
    private Map<String, String> paramList;
    // add setter

    @Override
    public void run() {
        // your logic
    }
}
Inject the task executor (a singleton) into your existing bean; then, in that bean, get an instance of the runnable bean, set its paramList, and hand it to the executor:
MyRunnableBean myRunnableBean = (MyRunnableBean) applicationContext.getBean("myRunnable");
myRunnableBean.setParamList(/* your paramList */);
taskExecutor.execute(myRunnableBean);
(This sample was written in Notepad, so correct any remaining compilation or syntax errors.)
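Putting it together with the question's code, the handler method could then delegate to the injected pool instead of spawning threads. A minimal sketch, assuming the taskExecutor bean defined above is autowired into the same class that holds saveMissedCall (constantService and the saveMissedCall arguments come from the question's surrounding class; the rest is illustrative):
public class InDialHandler {

    @Autowired
    private ThreadPoolTaskExecutor taskExecutor;

    public String processIndialWork(final Map<String, String> paramList) {
        final Boolean sendSms = Boolean.parseBoolean(paramList.get(constantService.getSendSms()));
        // Hand the work to the shared pool instead of new Thread(...).start()
        taskExecutor.execute(new Runnable() {
            @Override
            public void run() {
                String sessionId = paramList.get(constantService.getInDialSession());
                String msisdn = paramList.get(constantService.getInDialMsisdn());
                saveMissedCall(callStartDate, sessionId, msisdn, vmnNo, advId, enterpriseId, sendSms, advertiser);
            }
        });
        return "1"; // respond immediately; the pool processes the task asynchronously
    }
}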
This is how my existing system works.
I have a batch, written using Spring Batch, which writes messages to queues ASYNCHRONOUSLY. Once the writer has sent a certain number of messages to the queue, it starts listening on a LinkedBlockingQueue for the same number of responses.
I have Spring AMQP listeners which consume the messages and process them. Once processed, each consumer replies on a reply queue. A reply listener listens on the reply queue to check whether messages were successfully processed or not. The reply listener retrieves each response and adds it to the LinkedBlockingQueue, which is then drained by the writer. Once the writer has fetched all responses, it finishes the batch. If there is an exception, it stops the batch.
This is my job configuration:
<beans:bean id="computeListener" class="com.st.symfony.Foundation"
p:symfony-ref="symfony" p:replyTimeout="${compute.reply.timeout}" />
<rabbit:queue name="${compute.queue}" />
<rabbit:queue name="${compute.reply.queue}" />
<rabbit:direct-exchange name="${compute.exchange}">
<rabbit:bindings>
<rabbit:binding queue="${compute.queue}" key="${compute.routing.key}" />
</rabbit:bindings>
</rabbit:direct-exchange>
<rabbit:listener-container
connection-factory="rabbitConnectionFactory" concurrency="${compute.listener.concurrency}"
requeue-rejected="false" prefetch="1">
<rabbit:listener queues="${compute.queue}" ref="computeListener"
method="run" />
</rabbit:listener-container>
<beans:beans profile="master">
<beans:bean id="computeLbq" class="java.util.concurrent.LinkedBlockingQueue" />
<beans:bean id="computeReplyHandler" p:blockingQueue-ref="computeLbq"
class="com.st.batch.foundation.ReplyHandler" />
<rabbit:listener-container
connection-factory="rabbitConnectionFactory" concurrency="1"
requeue-rejected="false">
<rabbit:listener queues="${compute.reply.queue}" ref="computeReplyHandler"
method="onMessage" />
</rabbit:listener-container>
<beans:bean id="computeItemWriter"
class="com.st.batch.foundation.AmqpAsynchItemWriter"
p:template-ref="amqpTemplate" p:queue="${compute.queue}"
p:replyQueue="${compute.reply.queue}" p:exchange="${compute.exchange}"
p:replyTimeout="${compute.reply.timeout}" p:routingKey="${compute.routing.key}"
p:blockingQueue-ref="computeLbq"
p:logFilePath="${spring.tmp.batch.dir}/#{jobParameters[batch_id]}/log.txt"
p:admin-ref="rabbitmqAdmin" scope="step" />
<job id="computeJob" restartable="true">
<step id="computeStep">
<tasklet transaction-manager="transactionManager">
<chunk reader="computeFileItemReader" processor="computeItemProcessor"
writer="computeItemWriter" commit-interval="${compute.commit.interval}" />
</tasklet>
</step>
</job>
</beans:beans>
This is my writer code:
public class AmqpAsynchRpcItemWriter<T> implements ItemWriter<T> {
    protected String exchange;
    protected String routingKey;
    protected String queue;
    protected String replyQueue;
    protected RabbitTemplate template;
    protected AmqpAdmin admin;
    BlockingQueue<Object> blockingQueue;
    String logFilePath;
    long replyTimeout;

    // Getters and Setters

    @Override
    public void write(List<? extends T> items) throws Exception {
        for (T item : items) {
            Message message = MessageBuilder
                    .withBody(item.toString().getBytes())
                    .setContentType(MessageProperties.CONTENT_TYPE_TEXT_PLAIN)
                    .setReplyTo(this.replyQueue)
                    .setCorrelationId(item.toString().getBytes()).build();
            template.send(this.exchange, this.routingKey, message);
        }
        for (T item : items) {
            Object msg = blockingQueue.poll(this.replyTimeout, TimeUnit.MILLISECONDS);
            if (msg instanceof Exception) {
                admin.purgeQueue(this.queue, true);
                throw (Exception) msg;
            } else if (msg == null) {
                throw new Exception("reply timeout...");
            }
        }
        System.out.println("All items are processed.. Command completed. ");
    }
}
Listener POJO:
public class Foundation {
    Symfony symfony;
    long replyTimeout;

    // Getters and Setters

    public Object run(String command) {
        System.out.println("Running:" + command);
        try {
            symfony.run(command, this.replyTimeout);
        } catch (Exception e) {
            return e;
        }
        return "Completed : " + command;
    }
}
This is the reply handler:
public class ReplyHandler {
    BlockingQueue<Object> blockingQueue;

    public void onMessage(Object msgContent) {
        try {
            blockingQueue.put(msgContent);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Now, the problem is, I want to run multiple batches with unique batch ids simultaneously, each processing different data (of the same type).
As the number of batches is going to increase in future, I don't want to keep adding separate queues and reply queues for each batch.
Also, to process messages simultaneously, I have multiple listeners (set with listener concurrency) listening to the queue. If I add a different queue for each batch, the number of running listeners will increase, which may overload the servers (CPU/memory usage goes high).
So I don't want to replicate the same infrastructure for each type of batch I am going to add. I want to use the same infrastructure, but the writer of a specific batch should get only its own responses, not the responses of other batches running simultaneously.
Can we use the same item writer instances, sharing the same blocking queue instances, for multiple batch instances running in parallel?
You may want to look into JMS Message Selectors.
As per the docs:
The createConsumer and createDurableSubscriber methods allow you to specify a message selector as an argument when you create a message consumer.
The message consumer then receives only messages whose headers and properties match the selector.
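For illustration, with plain JMS the selector is supplied when the consumer is created. The queue name and the batchId property below are hypothetical:
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class BatchReplyConsumers {
    // Returns a consumer that receives only replies stamped with this batch's id.
    public MessageConsumer createBatchReplyConsumer(Session session, String batchId) throws JMSException {
        Queue replyQueue = session.createQueue("compute.reply"); // illustrative queue name
        return session.createConsumer(replyQueue, "batchId = '" + batchId + "'");
    }
}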
There is no equivalent of a JMS message selector expression in the AMQP (RabbitMQ) world.
Each consumer has to have its own queue, and you use an exchange to route to the appropriate queue, using a routing key set by the sender.
It is not as burdensome as you might think; you don't have to statically configure the broker; the consumers can use a RabbitAdmin to declare/delete exchanges, queues, bindings on demand.
See Configuring the Broker in the Spring AMQP documentation.
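A minimal sketch of that approach, assuming one reply queue per batch id (all names are illustrative, not from your configuration):
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;

public class BatchReplyQueueManager {
    private final RabbitAdmin admin;
    private final DirectExchange exchange = new DirectExchange("compute.exchange"); // illustrative name

    public BatchReplyQueueManager(ConnectionFactory connectionFactory) {
        this.admin = new RabbitAdmin(connectionFactory);
        admin.declareExchange(exchange);
    }

    // Declare a reply queue and binding for this batch on demand.
    public String declareReplyQueueFor(String batchId) {
        String queueName = "compute.reply." + batchId;
        Queue queue = new Queue(queueName);
        admin.declareQueue(queue);
        admin.declareBinding(BindingBuilder.bind(queue).to(exchange).with(queueName));
        return queueName;
    }

    // Clean up once the batch has consumed all its replies.
    public void deleteReplyQueueFor(String batchId) {
        admin.deleteQueue("compute.reply." + batchId);
    }
}
The writer for a given batch would then send with the per-batch routing key and set replyTo to the per-batch queue, so its reply listener sees only its own responses, and the queue can be deleted when the batch completes.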
I have a service bean, accessible by the identifier someSpecificService, which I need to modify.
Beans are defined in different XML files and are collected together at runtime, so one big XML file is created into which all these XMLs are imported:
context.xml
....
<import resource="spring1.xml" />
<import resource="spring2.xml" />
...
So there is the following configuration:
<!-- definitions from spring1.xml -->
<alias name="defaultSomeSpecificService" alias="someSpecificService" />
<bean id="defaultSomeSpecificService" class="..."/>
....
<!-- definitions from spring2.xml -->
<alias name="myOwnSomeSpecificService" alias="someSpecificService" />
<bean id="myOwnSomeSpecificService" class="..." /> <!-- how to inject the previously defined someSpecificService into this new bean? -->
I would like to override someSpecificService from spring1.xml in spring2.xml; however, I need to inject the previously defined bean defaultSomeSpecificService into it, and all I know is its alias name someSpecificService, which I am redefining to point at the new bean myOwnSomeSpecificService.
Is it possible to implement?
One solution would be to avoid trying to override the definition, by creating a proxy for the service implementation to intercept all calls towards it.
1) For the sake of the example, suppose the service would be something like:
public interface Service {
    public String run();
}

public class ExistingServiceImpl implements Service {
    @Override
    public String run() {
        throw new IllegalStateException("Muahahahaha!");
    }
}
2) Implement an interceptor instead of myOwnSomeSpecificService:
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

public class SomeSpecificServiceInterceptor implements MethodInterceptor {
    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        String status;
        try {
            // allow the original invocation to actually execute
            status = String.valueOf(invocation.proceed());
        } catch (IllegalStateException e) {
            System.out.println("Existing service threw the following exception [" + e.getMessage() + "]");
            status = "FAIL";
        }
        return status;
    }
}
3) In spring2.xml define the proxy creator and the interceptor:
<bean id="serviceInterceptor" class="com.nsn.SomeSpecificServiceInterceptor" />
<bean id="proxyCreator" class="org.springframework.aop.framework.autoproxy.BeanNameAutoProxyCreator">
<property name="beanNames" value="someSpecificService"/>
<property name="interceptorNames">
<list>
<value>serviceInterceptor</value>
</list>
</property>
</bean>
4) Running a small example such as:
public class Main {
    public static void main(String[] args) {
        Service service = new ClassPathXmlApplicationContext("context.xml").getBean("someSpecificService", Service.class);
        System.out.println("Service execution status [" + service.run() + "]");
    }
}
... instead of the IllegalStateException stacktrace you'd normally expect, it will print:
Existing service threw the following exception [Muahahahaha!]
Service execution status [FAIL]
Please note that in this example the service instance is not injected into the interceptor as you asked, because I had no use for it. However, should you really need it, you can easily inject it via constructor/property/etc., because the interceptor is a Spring bean itself.
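For completeness, a sketch of that constructor-injection variant; the body of invoke is purely illustrative, and the bean would be wired in spring2.xml with a constructor-arg referencing defaultSomeSpecificService:
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

public class SomeSpecificServiceInterceptor implements MethodInterceptor {

    private final Service target; // the original defaultSomeSpecificService

    public SomeSpecificServiceInterceptor(Service target) {
        this.target = target;
    }

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        // The injected reference is available if the interception logic needs
        // it, e.g. to call some other method on the original service.
        System.out.println("Intercepting a call on [" + target + "]");
        return invocation.proceed();
    }
}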
I'm trying to create a Spring Batch job using a ListItemReader<String>, ItemProcessor<String, String> and ItemWriter<String>.
The XML looks like the following,
<job id="sourceJob" xmlns="http://www.springframework.org/schema/batch">
<step id="step1" next="step2">
<tasklet>
<chunk reader="svnSourceItemReader"
processor="metadataItemProcessor"
writer="metadataItemWriter"
commit-interval="1" />
</tasklet>
</step>
<step id="step2">
<tasklet ref="lastRevisionLoggerTasklet"></tasklet>
</step>
</job>
<bean id="svnSourceItemReader"
class="com.example.repository.batch.SvnSourceItemReader"
scope="prototype">
<constructor-arg index="0">
<list>
<value>doc1.xkbml</value>
<value>doc2.xkbml</value>
<value>doc3.xkbml</value>
</list>
</constructor-arg>
</bean>
<bean id="metadataItemProcessor"
class="com.example.repository.batch.MetadataItemProcessor"
scope="prototype" />
<bean id="metadataItemWriter"
class="com.example.repository.batch.MetadataItemWriter"
scope="prototype" />
The reader, processor and writer are vanilla,
public class SvnSourceItemReader extends ListItemReader<String> {
    public SvnSourceItemReader(List<String> list) {
        super(list);
        System.out.println("Reading data list " + list);
    }

    @Override
    public String read() {
        String out = super.read();
        System.out.println("Reading data " + out);
        return out;
    }
}
public class MetadataItemProcessor implements ItemProcessor<String, String> {
    private String documentId; // derived from the item in the real class

    @Override
    public String process(String i) throws Exception {
        System.out.println("Processing " + i + " : documentId " + documentId);
        return i;
    }
}
public class MetadataItemWriter implements ItemWriter<String> {
    @Override
    public void write(List<? extends String> list) throws Exception {
        System.out.println("Writing " + list);
    }
}
The job is started like this, but on a schedule of every 10 seconds.
long nanoBits = System.nanoTime() % 1000000L;
if (nanoBits < 0) {
    nanoBits *= -1;
}
String dateParam = new Date().toString() + System.currentTimeMillis() + "." + nanoBits;
JobParameters param = new JobParametersBuilder().addString("date", dateParam)
        .toJobParameters();
JobExecution execution = jobLauncher.run(job, param);
When the application starts, I see it read, process and write each of the three items in the list passed to the reader.
Reading data doc1.xkbml
Processing doc1.xkbml : documentId doc1
Writing [doc1.xkbml]
Reading data doc2.xkbml
Processing doc2.xkbml : documentId doc2
Writing [doc2.xkbml]
Reading data doc3.xkbml
Processing doc3.xkbml : documentId doc3
Writing [doc3.xkbml]
Because this sourceJob is on a scheduled timer, I expected to see that list processed every 10 seconds, but instead on all subsequent runs I see:
Reading data null
Does anyone know why this is happening? I'm new to Spring Batch and just can't get my head around the issue.
Thanks /w
The problem is that you marked your reader as scope="prototype". It should be scope="step".
In Spring Batch configurations you normally use only two scopes: singleton (the default) and the special step scope.
From the javadoc:
StepScope: Scope for step context. Objects in this scope use the Spring container as an object factory, so there is only one instance of such a bean per executing step. All objects in this scope are […] (no need to decorate the bean definitions).
and
Using a scope of Step is required in order to use late binding, since the bean cannot actually be instantiated until the Step starts, which allows the attributes to be found.
During Spring context startup, look at your log and you will see this line:
INFO: Done executing SQL script from class path resource [org/springframework/batch/core/schema-hsqldb.sql] in 9 ms.
Reading data list [doc1.xkbml, doc2.xkbml, doc3.xkbml]
As you can see, your reader has already been created and is managed as a singleton; dynamic beans in a Spring Batch context should be managed with the special step scope, so that Spring creates a fresh copy of the bean every time a step is executed.
In your reader, ListItemReader.read() is written as:
public T read() {
    if (!list.isEmpty()) {
        return list.remove(0);
    }
    return null;
}
On each read an item is removed from the original list! The reader was constructed once, so on the second job execution the list is already empty!
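Concretely, with the XML from the question, the fix is just the scope attribute on the reader bean definition:
<bean id="svnSourceItemReader"
    class="com.example.repository.batch.SvnSourceItemReader"
    scope="step">
    <constructor-arg index="0">
        <list>
            <value>doc1.xkbml</value>
            <value>doc2.xkbml</value>
            <value>doc3.xkbml</value>
        </list>
    </constructor-arg>
</bean>
With step scope, the reader and the list backing it are re-created for every step execution, so each scheduled run starts from the full document list.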
Just as additional information: you can also use JavaConfig instead of the XML config file and annotate the reader bean declaration with @StepScope. For example:
@Configuration
@EnableBatchProcessing
public class MyConfig {
    ...
    @Bean
    @StepScope
    public ItemReader<Person> personItemReader() {
        ItemReader<Person> itemReader = new ListItemReader<Person>(myList);
        return itemReader;
    }
}