I am trying to get the FacesContext by calling FacesContext.getCurrentInstance() in the run() method of a Runnable class, but it returns null.
public class Task implements Runnable {

    @Override
    public void run() {
        FacesContext context = FacesContext.getCurrentInstance(); // null!
        // ...
    }

}
How is this caused and how can I solve it?
The FacesContext is stored as a ThreadLocal variable in the thread responsible for the HTTP request which invoked the FacesServlet, the one responsible for creating the FacesContext. This thread usually goes through the JSF managed bean methods only. The FacesContext is not available in other threads spawned by that thread.
Actually, you should not need it in the other threads either. Moreover, once your thread starts and runs independently, the servlet container will simply finish processing the HTTP response and the underlying HTTP request will disappear. You won't be able to do anything with the HTTP response from that thread anyway.
You need to solve your problem differently. Ask yourself: what do you need it for? To obtain some information? Just pass that information to the Runnable during its construction instead.
The below example assumes that you'd like to access some session scoped object in the thread.
public class Task implements Runnable {

    private Work work;

    public Task(Work work) {
        this.work = work;
    }

    @Override
    public void run() {
        // Just use work.
    }

}
Work work = (Work) FacesContext.getCurrentInstance().getExternalContext().getSessionMap().get("work");
Task task = new Task(work);
// ...
If, however, you ultimately need to notify the client, e.g. that the thread's work is finished, then you need a different solution than adding a faces message: the answer is "push". This can be achieved with SSE or WebSockets. A concrete WebSockets example can be found in this related question: Real time updates from database using JSF/Java EE. In case you happen to use PrimeFaces, look at <p:push>. In case you happen to use OmniFaces, look at <o:socket>.
Unrelated to the concrete problem, manually creating Runnables and manually spawning threads in a Java EE web application is alarming. Head to the following Q&A to learn about all caveats and how it should actually be done:
Spawning threads in a JSF managed bean for scheduled tasks using a timer
Is it safe to start a new thread in a JSF managed bean?
Related
I have a JAX-RS/Jersey REST API which gets a request and needs to do an additional job in a separate thread, but I am not sure whether it would be advisable to use a thread pool or not. I expect a lot of requests to this API (a few thousand a day), but there is only a single additional job to run in the background.
Would it be bad to just create a new Thread each time, like this? Any advice would be appreciated. I have not used a ThreadPool before.
@GET
@Path("/myAPI")
public Response myCall() {
    // call load() in the background
    load();
    ...
    // do main job here
    mainJob();
    ...
}

private void load() {
    new Thread(new Runnable() {
        @Override
        public void run() {
            doSomethingInTheBackground();
        }
    }).start();
}
Edit:
Just to clarify. I only need a single additional job to run in the background. This job will call another API to log some info and that's it. But it has to do this for every request and I do not need to wait for a response. That's why I thought of just doing this in a new background thread.
Edit2:
So this is what I came up with now. Could anyone please tell me if this seems OK (it works locally) and if I need to shutdown the executor (see my comment in the code)?
// Configuration class
@Bean(name = "executorService")
public ExecutorService executorService() {
    return Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);
}

// Some other class
@Qualifier("executorService")
@Autowired
private ExecutorService executorService;

....

private void load() {
    executorService.submit(new Runnable() {
        @Override
        public void run() {
            doSomethingInTheBackground();
        }
    });
    // If I enable this I will get a RejectedExecutionException
    // for a next request.
    // executorService.shutdown();
}
A thread pool is a good way of dealing with this, for two reasons:
1) you will reuse existing threads in the pool, so there is less overhead;
2) more importantly, your system will not get bogged down if it comes under attack and some party tries to start zillions of sessions at once, because the size of the pool is preset.
Using thread pools is not complicated at all. See here for more about thread pools, and also take a look at the Oracle documentation.
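On your Edit2: the setup looks fine. A minimal sketch of the intended lifecycle (the class and method names here are mine, not from your code): create the pool once, submit to it per request, and shut it down only when the whole application stops — which is exactly why calling `shutdown()` inside `load()` made the next request fail with `RejectedExecutionException`.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BackgroundJobs {

    // Created once for the whole application; a fixed size caps the
    // number of threads even under a flood of requests.
    private static final ExecutorService backgroundPool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);

    // Call this from each request; it returns immediately.
    public static void submit(Runnable job) {
        backgroundPool.submit(job);
    }

    // Call this once, on application shutdown (e.g. from a ServletContextListener),
    // never per request: a shut-down executor rejects all further tasks.
    public static void stop() throws InterruptedException {
        backgroundPool.shutdown();
        backgroundPool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```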
It sounds to me like you don't need to create multiple threads at all (although I might be wrong; I don't know the specifics of your task).
Could you perhaps create exactly 1 thread that does background work, and give that thread a LinkedBlockingQueue to store the parameters of the doSomethingInTheBackground call?
This solution wouldn't work if it is of the utmost importance that the background task starts right away, even when the server is under heavy load. But, for example, for my most recent task (retrieve text externally, return it to the API caller, then delayed-add the text to the SOLR layer) this was a perfectly fine solution.
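A sketch of that idea (the class name and the `String` job parameter are my stand-ins): one long-lived worker thread drains a `LinkedBlockingQueue`, which also preserves FIFO order of the jobs, while request threads only enqueue and return immediately.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

public class BackgroundWorker {

    private final BlockingQueue<String> jobs = new LinkedBlockingQueue<>();

    public BackgroundWorker(Consumer<String> handler) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    handler.accept(jobs.take()); // blocks until a job arrives
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // treated as the shutdown signal
            }
        });
        worker.setDaemon(true); // don't keep the JVM alive for this thread
        worker.start();
    }

    // Called from the request thread; returns immediately.
    public void enqueue(String param) {
        jobs.add(param);
    }
}
```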
I suggest using neither of the approaches you mention, but to use a JMS queue. You can easily embed an ActiveMQ instance in your application. First create one or more separate consumer threads in the background to pick up jobs from the queue.
Then when a request is received just push a message with the job details on the JMS queue. This is a much better architecture and more scalable than fiddling with low level threads or thread pools.
See also this answer and the ActiveMQ site.
Alright, I've already asked one question regarding this, but needed a bit more info. I'll try to be as coherent with my question as I can (since I am not sure of the concepts).
Background
I have a java web project(dynamic). I am writing Restful webservices. Below is a snippet from my class
@Path("/services")
class Services {

    static DataSource ds;

    static {
        ds = createNewDataSource;
    }

    @Path("/serviceFirst")
    @Consumes(Something)
    @Produces(Something)
    public List<Data> doFirst() {
        Connection con = ds.getConnection();
        ResultSet res = con.execute(preparedStatement);
        // iterate over res, and create list of Data.
        return list;
    }
}
This is a very basic functionality that I have stated here.
I've got a Tomcat server where I've deployed this. I've heard that Tomcat has a thread pool, of size 200 by default. Now my question is: how exactly does the thread pool work here?
Say I have two requests coming in at the same time. That means that two of the threads from the thread pool will get to work. Does this mean that both threads will have an instance of my class Services? Because below is how I understand threads and concurrency.
public class MyThread extends Thread {
    public void run() {
        // do whatever you want to do here;
    }
}
In the above, when I call start() on my thread, it will execute the code in the run() method, and all the objects it creates there will belong to it.
Now, coming back to Tomcat: is there somewhere a run() method written that instantiates the Services class, and is that how the thread pool handles 200 concurrent requests? (Obviously, I understand they would require 200 cores to execute truly concurrently, so ignore that.)
Because otherwise, if Tomcat does not have 200 different threads with the same path of execution (i.e. my Services class), how exactly will it handle 200 concurrent requests?
Thanks
Tomcat's thread pool works, more or less, like what you would get from an ExecutorService (see Executors).
The details vary (YMMV), but roughly: Tomcat listens for requests. When it receives one, it puts it in a queue. In parallel, it maintains X threads which continuously attempt to take from this queue. They prepare the ServletRequest and ServletResponse objects, as well as the FilterChain and the appropriate Servlet to invoke.
In pseudocode, this would look like
public void run() {
    while (true) {
        socket = queue.take();
        ServletRequest request = getRequest(socket.getInputStream());
        ServletResponse response = generateResponse(socket.getOutputStream());
        Servlet servletInstance = determineServletInstance(request);
        FilterChain chain = determineFilterChainWithServlet(request, servletInstance);
        chain.doFilter(request, response); // down the line invokes the servlet#service method
        // do some cleanup, close streams, etc.
    }
}
Determining the appropriate Servlet and Filter instances depends on the URL path in the request and the url-mappings you've configured. Tomcat (and every Servlet container) will only ever manage a single instance of a Servlet or Filter for each declared <servlet> or <filter> declaration in your deployment descriptor.
As such, every thread is potentially executing the service(..) method on the same Servlet instance.
That's what the Servlet Specification and the Servlet API, and therefore Tomcat, guarantee.
As for your Restful webservices, read this. It describes how a resource is typically an application scoped singleton, similar to the Servlet instance managed by a Servlet container. That is, every thread is using the same instance to handle requests.
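The practical consequence can be sketched in plain Java (no real servlet API here; `Handler` is my stand-in for your Services class, and the pool stands in for Tomcat's): many threads invoke the same single instance, so any instance field is shared state and must be thread-safe, while local variables inside the method are naturally per-thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedHandlerDemo {

    // Stand-in for the single Services/Servlet instance the container manages.
    static class Handler {
        private final AtomicInteger hits = new AtomicInteger(); // shared field: must be thread-safe

        void service() {
            hits.incrementAndGet(); // local variables here would be private to each thread
        }

        int hits() {
            return hits.get();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Handler theOnlyInstance = new Handler();                // one instance, like Tomcat's servlet
        ExecutorService pool = Executors.newFixedThreadPool(8); // stand-in for Tomcat's thread pool
        for (int i = 0; i < 200; i++) {
            pool.submit(theOnlyInstance::service);              // 200 "requests", all on the same instance
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(theOnlyInstance.hits()); // 200
    }
}
```

With a plain `int hits` field instead of the `AtomicInteger`, some increments could be lost under concurrency; that is exactly the kind of bug shared servlet instance fields invite.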
Google App Engine allows you to create threads if you use their ThreadManager.currentRequestThreadFactory() in conjunction with an ExecutorService. So, in order to allow the same frontend instance to handle multiple servlet requests at the same time, I am planning on writing code that looks like the following:
public class MyServlet extends HttpServlet {

    private RequestDispatcher dispatcher;
    // Getter and setter for 'dispatcher'.

    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        MyResponse resp = dispatcher.dispatch(request);
        PrintWriter writer = response.getWriter();
        // Use the 'resp' and 'writer' objects to produce the resultant HTML
        // to send back to the client. Omitted for brevity.
    }
}
public class RequestDispatcher {

    private ThreadFactory threadFactory =
            ThreadManager.currentRequestThreadFactory();
    private ExecutorService executor; // must be an ExecutorService (not a plain Executor) to call submit()
    // Getters and setters for both properties.

    public MyResponse dispatch(HttpServletRequest request) throws Exception {
        if (executor == null) {
            executor = Executors.newCachedThreadPool(threadFactory);
        }
        // MyTask implements Callable<MyResponse>.
        MyTask task = TaskFactory.newTask(request);
        Future<MyResponse> myResponse = executor.submit(task);
        return myResponse.get(); // blocks until the task completes
    }
}
So now I believe we have a setup where each frontend will have a servlet that can accept up to 10 (I believe that's the max for what GAE allows) requests at the same time, and process all of them concurrently without blocking. So first off, if I've mistaken the use of ThreadManager and am not using it correctly, or if my setup for this type of concurrent behavior is incorrect, please begin by correcting me!
Assuming I'm more or less on track, I have some concurrency-related concerns with how Google App Engine threads utilize the object tree underneath the MyTask object.
The MyTask callable is responsible for actually processing the HTTP request. In EJB land, this would be the "business logic" code that does stuff like: (1) placing messages on a queue, (2) hitting the Google Datastore for data, (3) saving stuff to a cache, etc. The point is, it spawns a big "object tree" (lots of subsequent child objects) when its call() method is executed by the Executor.
Do I have to make each and every object that gets created from inside MyTask#call thread-safe? Why or why not? Thanks in advance!
You don't need all this to enable an instance to process multiple requests concurrently. GAE allows you to spawn threads if you need to perform multiple tasks in parallel when handling a single given request.
It could be useful, for example, if you need to contact several external URLs in parallel to gather the information needed to respond to a given request; this is more efficient than contacting all the URLs in sequence.
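As a sketch of that pattern (the "fetches" here are simulated rather than real URL calls; on App Engine you would presumably hand `ThreadManager.currentRequestThreadFactory()` to the pool, as in your own snippet): `invokeAll` runs the lookups in parallel, so the total wait is roughly the slowest fetch rather than the sum of all of them.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFetch {

    // Runs all given "fetches" in parallel and waits for every result.
    public static List<String> fetchAll(List<Callable<String>> fetches)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(fetches.size());
        try {
            List<Future<String>> futures = pool.invokeAll(fetches); // parallel execution
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // futures come back in submission order
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```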
To describe it simply: I have a servlet whose response time is long, so I decided to divide the work into two parts: one composes the response to the client, and the second performs some business logic and stores the result in the DB. To decrease response time I execute the business logic asynchronously, using a ThreadPoolExecutor in combination with an ArrayBlockingQueue. Using the ArrayBlockingQueue I can preserve the original FIFO ordering of sequential requests from the same client. This is an important prerequisite.
Here is a snippet:
Servlet
public class HelloServlet extends HttpServlet {

    AsyncExecutor exe = new AsyncExecutor();

    protected void doGet(HttpServletRequest req,
            HttpServletResponse resp) throws ServletException, IOException {
        PrintWriter w = resp.getWriter();
        exe.executeAsync(exe.new Task(this));
        w.print("HELLO TO CLIENT");
    }

    protected void someBusinessMethod() {
        // long time execution here
    }
}
and Executor
public class AsyncExecutor {

    static final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(10, true);
    static final Executor executor = new ThreadPoolExecutor(3, 5, 20L, TimeUnit.SECONDS, queue);

    public void executeAsync(Task t) {
        boolean isTaskAccepted = false;
        while (!isTaskAccepted) {
            try {
                executor.execute(t);
                isTaskAccepted = true;
            } catch (RejectedExecutionException e) {
                // retry until the queue has room
            }
        }
    }

    class Task implements Runnable {

        private HelloServlet servlet;

        Task(HelloServlet servlet) {
            this.servlet = servlet;
        }

        @Override
        public void run() {
            // just call back to the servlet's business method
            servlet.someBusinessMethod();
        }
    }
}
This implementation works fine if I deploy it to only one Tomcat node, since I have only one ArrayBlockingQueue in the application. But if I have several nodes and a load balancer in front, then I cannot guarantee FIFO ordering of requests for async execution for the same client, since I then have several queues.
My question is: how is it possible to guarantee the same order of asynchronous execution for requests from the same client in a clustered (multi-node) deployment? I think ActiveMQ is probably a solution (not preferable for me), or perhaps load balancer configuration; or can it be implemented in code?
Thanks, Sam, for your prompt suggestions.
In the first post I described the problem in a very simplified way, so to clarify it better: I have a legacy web app deployed to Tomcat which serves an old licensing model. Then we got a new licensing model (a GlassFish app), and we need to use it alongside the old one and keep the two in sync. For the end user this integration must be transparent and non-intrusive. A user request is served like this:
1. the caller sends a request (create subscription, for example)
2. execute the business logic of the new licensing model
3. execute the business logic of the old licensing model
4. regardless of the result of p.3, return the response of p.2, in the format of the old licensing model, back to the caller
5. (optional) handle failure of p.3, if any
This was implemented with an Aspect which intercepts requests at p.1 and executes the rest sequentially. And, as I said in the previous post, the execution time of p.3 can be long, which is why I want to make it asynchronous. Let's look at a snippet of the Aspect (instead of the Servlet from the first post):
@Aspect
@Component
public class MyAspect {

    @Autowired
    private ApplicationContext ctx;

    @Autowired
    private AsyncExecutor asyncExecutor;

    @Around("@annotation(executeOpi)")
    public Object around(ProceedingJoinPoint jp, ExecuteOpi executeOpi) throws Throwable {
        LegacyMapper newModelExecutor = ctx.getBean(executeOpi.legacyMapper());
        // executes the new model logic and returns the result in the format of the old model
        Object result = newModelExecutor.executeNewModelLogic(jp.getArgs());
        // executes the old model logic asynchronously
        asyncExecutor.executeAsync(asyncExecutor.new Task(this, jp));
        return result;
    }

    public void executeOldModelLogic(ProceedingJoinPoint jp) throws Throwable {
        // long time execution here
        jp.proceed();
    }
}
With this implementation, as in the first post, I can guarantee FIFO order of the executeOldModelLogic calls if the requests come to the same Tomcat node. But with a multi-node deployment and a round-robin LB in front, I can end up with a case where, for the same caller, "update subscription in the old model" arrives in the ArrayBlockingQueue before "create subscription in the old model", which is of course a bad logical bug.
As for the points you suggested:
p.1, p.2 and p.4: I probably can't use these as a solution, since I don't have object state as such. You can see that I pass references to the Aspect and the JoinPoint into the Runnable task, to call back executeOldModelLogic from the Runnable to the Aspect.
p.3: I don't know about this one; it might be worthwhile to investigate.
p.5: This is the direction I want to go for further investigation; I have a gut feeling it is the only way to solve my problem under the given conditions.
There are some solutions that come to mind off hand.
1. Use the database: post the jobs to be run in a database table, have a secondary server process run the jobs, and place the results in an output table. Then when the user calls back to the web page, it can pick up any results waiting for them from the output table.
2. Use a JMS service: this is a pretty lightweight messaging service, which would integrate with your Tomcat application reasonably well. The downside here is that you have to run another server-side component and build the integration layer with your app. But that's not a big disadvantage.
3. Switch to a full J2EE container (Java app server) and use an EJB Singleton. I have to admit, I don't have any experience with running a Singleton across separate server instances, but I believe some of them may be able to handle it.
4. Use EHCache or some other distributed cache: I built a Queue wrapper around EHCache to enable it to be used like a FIFO queue, and it also has RMI (or JMS) replication, so multiple nodes will see the same data.
5. Use the load balancer: if your load balancer supports session-level balancing, then all requests for a single user session can be directed to the same node. In a big web environment where I worked, we were unable to share user session state across multiple servers, so we set up load balancing to ensure that a user's session was always directed to the same web server.
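Expanding on the cache and load-balancer points: within a single node you can keep per-client FIFO without any external broker by routing every task for the same client to the same single-threaded lane (a "striped" executor — my own sketch, not code from this thread). Across nodes you would still combine it with session-sticky load balancing, so all of one client's requests land on the same node's lanes.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class KeyOrderedExecutor {

    private final ExecutorService[] lanes;

    public KeyOrderedExecutor(int laneCount) {
        lanes = new ExecutorService[laneCount];
        for (int i = 0; i < laneCount; i++) {
            // Each lane is single-threaded, so tasks submitted to it run in FIFO order.
            lanes[i] = Executors.newSingleThreadExecutor();
        }
    }

    // All tasks for the same clientId hash to the same lane, preserving their order;
    // different clients can still run in parallel on other lanes.
    public void submit(String clientId, Runnable task) {
        lanes[Math.floorMod(clientId.hashCode(), lanes.length)].submit(task);
    }

    public void shutdownAndWait() throws InterruptedException {
        for (ExecutorService lane : lanes) {
            lane.shutdown();
        }
        for (ExecutorService lane : lanes) {
            lane.awaitTermination(10, TimeUnit.SECONDS);
        }
    }
}
```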
Hope some of these ideas help.
I have a JSF application running on Tomcat 6.0, and somewhere in the app I send e-mails to some users. But sending mail is slower than I thought, and it causes lags between the related pages.
So my question is: would it be a good (or doable) way to hand this process to another thread which I create — a thread that takes mail sending requests, puts them in a queue, and processes them apart from the main application? The mail sending process would then be out of the main flow and wouldn't affect the app's speed.
Yes, that's definitely a good idea, but you should do it with extreme care. Here's some food for thought:
Is it safe to start a new thread in a JSF managed bean?
Spawning threads in a JSF managed bean for scheduled tasks using a timer
As you're using Tomcat, which does not support EJB out of the box (so an @Asynchronous method on a @Singleton is out of the question), I'd create an application scoped bean which holds an ExecutorService to process the mail tasks. Here's a kickoff example:
@ManagedBean(eager=true)
@ApplicationScoped
public class TaskManager {

    private ExecutorService executor;

    @PostConstruct
    public void init() {
        executor = Executors.newSingleThreadExecutor();
    }

    public <T> Future<T> submit(Callable<T> task) {
        return executor.submit(task);
    }

    // Or just void submit(Runnable task) if you want fire-and-forget.

    @PreDestroy
    public void destroy() {
        executor.shutdown();
    }
}
This creates a single thread and puts the tasks in a queue. You can use it in normal beans as follows:
@ManagedBean
@RequestScoped
public class Register {

    @ManagedProperty("#{taskManager}")
    private TaskManager taskManager;

    public void submit() {
        // ...
        taskManager.submit(new MailTask(mail));
        // You might want to hold the return value in some Future<Result>, but
        // you should store it in the view or session scope in order to get the
        // result later. Note that the calling thread will block whenever you
        // call get() on it. You can also just ignore it altogether (as the
        // current example is doing).
    }
}
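The MailTask used above is not shown; a minimal hypothetical shape could look like this (the String stands in for your actual mail object, and the body of call() is a placeholder for the real sending code, e.g. JavaMail's Transport.send()):

```java
import java.util.concurrent.Callable;

// Hypothetical task wrapping one mail. Returning a result lets callers
// inspect the Future if they care, or ignore it for fire-and-forget.
public class MailTask implements Callable<String> {

    private final String mail; // stand-in for whatever your mail object is

    public MailTask(String mail) {
        this.mail = mail;
    }

    @Override
    public String call() throws Exception {
        // Placeholder for the real sending code (e.g. JavaMail's Transport.send()).
        return "sent: " + mail;
    }
}
```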
To learn more about the java.util.concurrent API, refer to the official tutorial.