Java queue with dynamic priority flag

I need to build a queue where elements are added and removed in chronological order by default. But if the client sets a priority flag on the queue, I need to be able to pull elements in priority order instead.
I am thinking of creating a priority queue backed by a map that keeps track of the queue index in priority order; based on the priority flag I can pull items from the map and pop the item at that index from the queue.
With this approach, the question becomes whether I create the map by default or only when the flag is set (considering that the cost of creating the map on the fly is high, I am inclined towards having it by default).
Please let me know if there is a better way of doing this, or if an implementation already exists.
Here is what I currently have:
import javax.naming.OperationNotSupportedException;
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
public class DynamicPriorityQueue<ComparableQueueElement> implements IQueue<ComparableQueueElement> {
    private static final int CONSTANT_HUNDRED = 100;

    private boolean fetchByCustomPriority = false;
    private final ReentrantLock lock;
    private final PriorityQueue<ComparableQueueElement> queue;
    private final PriorityQueue<ComparableQueueElement> customPriorityQueue;

    public DynamicPriorityQueue() {
        this(null);
    }

    public DynamicPriorityQueue(Comparator<ComparableQueueElement> comparator) {
        this.lock = new ReentrantLock();
        this.queue = new PriorityQueue<>(CONSTANT_HUNDRED);
        if (comparator != null)
            this.customPriorityQueue = new PriorityQueue<ComparableQueueElement>(CONSTANT_HUNDRED, comparator);
        else
            this.customPriorityQueue = null;
    }

    public void setFetchByCustomPriority(boolean fetchByCustomPriority) throws OperationNotSupportedException {
        if (this.customPriorityQueue == null)
            throw new OperationNotSupportedException("Object was created without a custom comparator.");
        this.fetchByCustomPriority = fetchByCustomPriority;
    }

    public void push(ComparableQueueElement t) throws InterruptedException {
        if (this.lock.tryLock(CONSTANT_HUNDRED, TimeUnit.MILLISECONDS)) {
            try {
                this.queue.offer(t);
                if (this.customPriorityQueue != null)
                    this.customPriorityQueue.offer(t);
            } finally {
                this.lock.unlock();
            }
        }
    }

    public ComparableQueueElement peek() {
        return this.fetchByCustomPriority ? this.queue.peek()
                : (this.customPriorityQueue != null ? this.customPriorityQueue.peek() : null);
    }

    public ComparableQueueElement pop() throws InterruptedException {
        ComparableQueueElement returnElement = null;
        if (this.lock.tryLock(CONSTANT_HUNDRED, TimeUnit.MILLISECONDS)) {
            try {
                if (this.fetchByCustomPriority && this.customPriorityQueue != null) {
                    returnElement = this.customPriorityQueue.poll();
                    this.queue.remove(returnElement);
                } else {
                    returnElement = this.queue.poll();
                    if (this.customPriorityQueue != null) {
                        this.customPriorityQueue.remove(returnElement);
                    }
                }
            } finally {
                this.lock.unlock();
            }
        }
        return returnElement;
    }
}

I deleted my comments after rereading the question; this may get complicated. You need to turn a FIFO (chronological) queue into a priority queue with a flag. Your map would need to be ordered and able to hold repeated values. Otherwise you would need to search the map to find the highest priority, or search the queue. I wouldn't do it.
EDIT
What about using a wrapping class:
class Pointer<T> {
    T element;
}
And two queues of Pointers, where the queues share the Pointers but return them in different orders? The only thing you would need to do is check that "element" is not null (you set it to null when it leaves one of the queues).
The Pointer reference remains in the other queue, but you check for null before returning.
EDIT
Your code doesn't have a map.
public ComparableQueueElement peek() {
    return this.fetchByCustomPriority ? this.queue.peek()
            : (this.customPriorityQueue != null ? this.customPriorityQueue.peek() : null);
}
is not correct: the branches are swapped. When the flag is not set you should peek from this.queue, and only peek from customPriorityQueue when fetching by custom priority.
EDIT
Note that by using a wrapping class you save yourself the remove calls on the other queue. The only overhead added is that you need to check for null when fetching.
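Here is a minimal, hypothetical sketch of that idea (the TwoViewQueue name and shape are mine, not from the question; locking is left out for brevity and would still be needed for concurrent use):
import java.util.ArrayDeque;
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Queue;

class Pointer<T> {
    T element;

    Pointer(T element) {
        this.element = element;
    }
}

class TwoViewQueue<T> {
    private final Queue<Pointer<T>> fifo = new ArrayDeque<>();
    private final PriorityQueue<Pointer<T>> byPriority;

    TwoViewQueue(Comparator<T> comparator) {
        // nullsFirst keeps the comparator safe for Pointers already consumed via the other view
        this.byPriority = new PriorityQueue<>(
                Comparator.comparing((Pointer<T> p) -> p.element, Comparator.nullsFirst(comparator)));
    }

    void push(T t) {
        Pointer<T> p = new Pointer<>(t);
        fifo.add(p);       // both queues share the same Pointer instance
        byPriority.add(p);
    }

    T popFifo() { return drain(fifo); }

    T popPriority() { return drain(byPriority); }

    // No remove() on the other queue: consumed Pointers are simply skipped later.
    private T drain(Queue<Pointer<T>> q) {
        Pointer<T> p;
        while ((p = q.poll()) != null) {
            if (p.element != null) {
                T t = p.element;
                p.element = null; // mark consumed so the other view skips it
                return t;
            }
        }
        return null;
    }
}
Popping from one view lazily invalidates the element for the other, so neither pop pays the O(n) remove() cost of the original code.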

The implementation looks fine if your application has the requirement that the flag changes very frequently. In that case you have both queues ready to offer to or poll from at any moment; it makes your add operation heavier, but retrieval is fast.
If the changes are not frequent, you can think of re-initializing the custom priority queue from the FIFO queue only when the flag changes, and perform all your peek/offer operations on one queue instead of two. That will be more efficient while FIFO ordering is in use.
If you do that, modify your push operation as below:
public void push(ComparableQueueElement t) throws InterruptedException {
    if (this.lock.tryLock(CONSTANT_HUNDRED, TimeUnit.MILLISECONDS)) {
        try {
            this.queue.offer(t);
            if (this.fetchByCustomPriority) // add to customPriorityQueue only when the flag is enabled
                this.customPriorityQueue.offer(t);
        } finally {
            this.lock.unlock();
        }
    }
}
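The rebuild itself could live in setFetchByCustomPriority. A hedged sketch, assuming the fields from the class above (a real version should take the same lock as push/pop):
public void setFetchByCustomPriority(boolean fetchByCustomPriority) throws OperationNotSupportedException {
    if (this.customPriorityQueue == null)
        throw new OperationNotSupportedException("Object was created without a custom comparator.");
    if (fetchByCustomPriority && !this.fetchByCustomPriority) {
        // Flag switched on: rebuild the priority view from the FIFO queue, paid once per flip
        this.customPriorityQueue.clear();
        this.customPriorityQueue.addAll(this.queue);
    } else if (!fetchByCustomPriority && this.fetchByCustomPriority) {
        this.customPriorityQueue.clear(); // flag switched off: drop the stale priority view
    }
    this.fetchByCustomPriority = fetchByCustomPriority;
}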

Related

Preserve call stack of Timer.schedule in Java

I have a daemon thread running which calls a function (prepareOrder) whenever the cook is not busy and there are orders to be delivered. prepareOrder calls the completeOrder function after a certain interval of time, depending on the time required to complete the order. The problem I am facing is that only the last call to prepareOrder gets displayed on standard out.
The daemon
package ui;

import Model.takeOrderModel;

public class daemonThread extends Thread {
    // call this method in the main method of the driving function
    private takeOrderModel orderModel;

    daemonThread(takeOrderModel orderModel) {
        this.orderModel = orderModel;
    }

    public void assignCook() {
        while (true) { // busy-wait: spins until an order is ready and the cook is free
            int toComplete = orderModel.toCompleteOrders.size();
            if (!orderModel.cookBusy && toComplete > 0) orderModel.prepareOrder();
        }
    }
}
The prepareOrder function:
public void prepareOrder() {
    // pick the last element from the list
    if (toCompleteOrders.size() > 0) {
        String nextPrepare = toCompleteOrders.get(toCompleteOrders.size() - 1);
        order orderToComplete = allOrdersPlaced.get(nextPrepare);
        completeOrder(orderToComplete);
        toCompleteOrders.remove(nextPrepare);
    }
}

// Helper function to prepareOrder; moves an order from toComplete to prepared
private void completeOrder(order orderToComplete) {
    changeCookState();
    new java.util.Timer().schedule(
        new java.util.TimerTask() {
            @Override
            public void run() {
                changeCookState();
                preparedOrders.add(orderToComplete.id);
                deliverOrder(orderToComplete.id);
            }
        }, (long) (orderToComplete.timeToComplete * 60)
    );
}

public void changeCookState() {
    this.cookBusy = !cookBusy;
}

// MODIFIES: removes an order from the prepared list and puts it in the delivered list
public String deliverOrder(String completedOrder) {
    preparedOrders.remove(completedOrder);
    deliveredOrders.add(completedOrder);
    System.out.println(String.format("The order of %s is here", allOrdersPlaced.get(completedOrder).customerName));
    return String.format("The order of %s is here", allOrdersPlaced.get(completedOrder).customerName);
}
The main function driving code:
orderMachine.takeNewOrder(fullMeal, "Tom");
orderMachine.takeNewOrder(halfMeal, "Bob");
daemonThread backThread = new daemonThread(orderMachine);
backThread.setDaemon(true);
backThread.assignCook();
Now, for me only the last placed order ("Bob") gets printed to standard out. How can I make sure every call scheduled with Timer.schedule gets to run?
Edits
The takeNewOrder function:
public boolean takeNewOrder(List<item> itemsInOrder, String customerName) {
    try {
        order newOrder = new order(itemsInOrder, customerName);
        allOrdersPlaced.put(newOrder.id, newOrder);
        toCompleteOrders.add(newOrder.id);
        return true;
    } catch (Exception e) {
        e.printStackTrace();
        return false;
    }
}
Edit 2
Here is the public repo containing the complete code:
https://github.com/oreanroy/Share_code_samples/tree/master/takeOrder
The problem in this code is a concurrency bug: the cookBusy variable is written from two different threads. To fix this, use an AtomicBoolean instead of a boolean, as it is thread safe.
AtomicBoolean cookBusy = new AtomicBoolean(false);
Use compareAndSet to ensure the shared variable holds the expected value before updating it:
public void changeCookState(boolean busy) {
    if (!this.cookBusy.compareAndSet(!busy, busy)) {
        throw new RuntimeException("shared variable set to unexpected value");
    }
}
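Applied to the snippets above, the call sites then name each transition explicitly. A hypothetical rewrite of completeOrder (not code from the repo):
private void completeOrder(order orderToComplete) {
    changeCookState(true); // free -> busy; fails fast if the cook was already busy
    new java.util.Timer().schedule(
        new java.util.TimerTask() {
            @Override
            public void run() {
                changeCookState(false); // busy -> free
                preparedOrders.add(orderToComplete.id);
                deliverOrder(orderToComplete.id);
            }
        }, (long) (orderToComplete.timeToComplete * 60)
    );
}
The daemon's check would correspondingly become !orderModel.cookBusy.get().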

Cannot create hot stream from Queue

I have the following REST controller, which receives requests, transforms them into JSON strings and puts them into a concurrent queue.
I would like to make a Flux out of this queue and subscribe to it.
Unfortunately, it doesn't work.
What am I doing wrong here?
@RestController
public class EventController {
    private final ObjectMapper mapper = new ObjectMapper();
    private final FirehosePutService firehosePutService;
    private ConcurrentLinkedQueue<String> events = new ConcurrentLinkedQueue<>();
    private int batchSize = 10;

    @Autowired
    public EventController(FirehosePutService firehosePutService) {
        this.firehosePutService = firehosePutService;
        Flux<String> eventFlux = Flux.create((FluxSink<String> sink) -> {
            String next;
            while ((next = events.poll()) != null) {
                sink.next(next);
            }
        });
        eventFlux.publish().autoConnect().subscribe(new BaseSubscriber<String>() {
            int consumed;
            List<String> batchOfEvents = new ArrayList<>(batchSize);

            @Override
            protected void hookOnSubscribe(Subscription subscription) {
                request(batchSize);
            }

            @Override
            protected void hookOnNext(String value) {
                batchOfEvents.add(value);
                consumed++;
                if (consumed == batchSize) {
                    batchOfEvents.addAll(events);
                    log.info("Consume {} elements. Size of batchOfEvents={}", consumed, batchOfEvents.size());
                    firehosePutService.saveBulk(batchOfEvents);
                    consumed = 0;
                    batchOfEvents.clear();
                    events.clear();
                    request(batchSize);
                }
            }
        });
    }

    @GetMapping(value = "/saveMany", produces = "text/html")
    public ResponseEntity<Void> saveMany(@RequestParam MultiValueMap<String, String> allRequestParams) throws JsonProcessingException {
        Map<String, String> paramValues = allRequestParams.toSingleValueMap();
        String reignnEvent = mapper.writeValueAsString(paramValues);
        events.add(reignnEvent);
        return new ResponseEntity<>(HttpStatus.OK);
    }
}
First of all, you use the poll method. It is not blocking and returns null if the queue is empty. You loop until the first null (i.e. while (next != null)), so your code exits the loop almost immediately because the queue is empty on start. You must replace poll with take, which blocks until an element is available. Note that take is defined on BlockingQueue, so events would need to be e.g. a LinkedBlockingQueue rather than a ConcurrentLinkedQueue.
Secondly, hookOnNext is invoked when an event is removed from the queue. However, you then read the events again using batchOfEvents.addAll(events);, and you also clear all pending events with events.clear();.
I advise you to remove all direct access to the events collection from the hookOnNext method.
Why do you use Flux here at all? It seems overcomplicated. You can use a plain thread:
@Autowired
public EventController(FirehosePutService firehosePutService) {
    this.firehosePutService = firehosePutService;
    // events is assumed to be a BlockingQueue<String> here (e.g. LinkedBlockingQueue),
    // because ConcurrentLinkedQueue has no blocking take() method.
    Thread persister = new Thread(() -> {
        List<String> batchOfEvents = new ArrayList<>(batchSize);
        try {
            while (true) {
                String next = events.take(); // blocks until an element is available
                batchOfEvents.add(next);
                if (batchOfEvents.size() == batchSize) {
                    log.info("Consumed {} elements.", batchOfEvents.size());
                    firehosePutService.saveBulk(batchOfEvents);
                    batchOfEvents.clear();
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // stop on interrupt
        }
    });
    persister.start();
}
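If you do want to stay with Reactor, a more idiomatic route (a hypothetical sketch using the Reactor 3.4+ Sinks API) is to skip the intermediate queue entirely: emit each request straight into a sink and let buffer(batchSize) do the batching:
private final Sinks.Many<String> sink = Sinks.many().unicast().onBackpressureBuffer();

@Autowired
public EventController(FirehosePutService firehosePutService) {
    this.firehosePutService = firehosePutService;
    sink.asFlux()
        .buffer(batchSize) // emits a List<String> every batchSize elements
        .subscribe(firehosePutService::saveBulk);
}

@GetMapping(value = "/saveMany", produces = "text/html")
public ResponseEntity<Void> saveMany(@RequestParam MultiValueMap<String, String> allRequestParams) throws JsonProcessingException {
    String event = mapper.writeValueAsString(allRequestParams.toSingleValueMap());
    // Note: under heavily concurrent requests, tryEmitNext can return a failure
    // result (emissions are not serialized); a production version would check the
    // EmitResult or use emitNext with an EmitFailureHandler.
    sink.tryEmitNext(event);
    return new ResponseEntity<>(HttpStatus.OK);
}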

How to implement fair usage policy of threads in API backend?

I am building a REST API in Java which I will be exposing to the outside world. People who invoke the API have to be registered and send their user id in the request.
There will be a maximum of, say, 10 concurrent threads available for executing API requests. I am maintaining a queue which holds all the request ids to be serviced (the primary key of the DB entry).
I need to implement a fair usage policy as follows:
If there are more than 10 jobs in the queue (i.e. more than the maximum number of threads), a user is allowed to execute only one request at a time; the other requests submitted by that user, if any, remain in the queue and are taken up only once their previous request has completed execution. If there are free threads even after allotting threads to requests submitted by different users, the remaining threads in the pool can be distributed among the remaining requests (even if the user who submitted the request is already holding one thread at that moment).
The current implementation is as follows:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.Semaphore;

public class APIJobExecutor implements Runnable {

    private static PriorityBlockingQueue<Integer> jobQueue = new PriorityBlockingQueue<Integer>();
    private static ExecutorService jobExecutor = Executors.newCachedThreadPool();
    private static final int MAX_THREADS = 10;
    private static Semaphore sem = new Semaphore(MAX_THREADS, true);

    private APIJobExecutor() {
    }

    public static void addJob(int jobId) {
        if (!jobQueue.contains(jobId)) {
            jobQueue.add(jobId);
        }
    }

    public void run() {
        while (true) {
            try {
                sem.acquire();
            } catch (InterruptedException e1) {
                e1.printStackTrace();
                // unable to acquire a permit. retry.
                continue;
            }
            try {
                Integer jobItem = jobQueue.take();
                jobExecutor.submit(new APIJobService(jobItem));
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                sem.release();
            }
        }
    }
}
Edit:
Is there any out-of-the-box Java data structure that gives me this functionality? If not, how do I go about implementing it?
This is a fairly common "quality of service" pattern and can be solved using the bucket idea within a job queue. I do not know of a standard Java implementation and/or data structure for this pattern (maybe the PriorityQueue?), but there should be at least a couple of implementations available (let us know if you find a good one).
I did once create my own implementation, and I've tried to de-couple it from the project so that you may modify and use it (add unit tests!). A couple of notes:
- a default queue is used in case QoS is not needed (e.g. if fewer than 10 jobs are executing).
- the basic idea is to store tasks in lists per QoS key (e.g. the username), and maintain a separate "who is next" list.
- it is intended to be used within a job queue (e.g. as part of the APIJobExecutor, not a replacement); see the usage sketch after the code. Part of the job queue's responsibility is to always call remove(taskId) after a task is executed.
- there should be no memory leaks: if there are no tasks/jobs in the queue, all internal maps and lists should be empty.
The code:
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;
import java.util.*;
import org.slf4j.*;

/** A FIFO task queue. */
public class QosTaskQueue<TASKTYPE, TASKIDTYPE> {

    private static final Logger log = LoggerFactory.getLogger(QosTaskQueue.class);

    public static final String EMPTY_STRING = "";

    /** Work tasks queued which have no (relevant) QoS key. */
    private final ConcurrentLinkedQueue<TASKIDTYPE> defaultQ = new ConcurrentLinkedQueue<TASKIDTYPE>();
    private final AtomicInteger taskQSize = new AtomicInteger();
    private final Map<TASKIDTYPE, TASKTYPE> queuedTasks = new ConcurrentHashMap<TASKIDTYPE, TASKTYPE>();

    /** Amount of tasks in queue before "quality of service" distribution kicks in. */
    private int qosThreshold = 10;

    /** Indicates if "quality of service" distribution is in effect. */
    private volatile boolean usingQos;

    /**
     * Lock for all modifications to QoS queues.
     * <br>Must be "fair" to ensure adding does not block polling threads forever and vice versa.
     */
    private final ReentrantLock qosKeyLock = new ReentrantLock(true);

    /*
     * Since all QoS modifications can be done by multiple threads simultaneously,
     * there is never a good time to add or remove a QoS key with its associated queue.
     * There is always a chance that a key is added while being removed and vice versa.
     * The simplest solution is to make everything synchronized, which is what qosKeyLock is used for.
     */
    private final Map<String, Queue<TASKIDTYPE>> qosQueues = new HashMap<String, Queue<TASKIDTYPE>>();
    private final Queue<String> qosTurn = new LinkedList<String>();

    public boolean add(TASKTYPE wt, TASKIDTYPE taskId, String qosKey) {
        if (queuedTasks.containsKey(taskId)) {
            throw new IllegalStateException("Task with ID [" + taskId + "] already enqueued.");
        }
        queuedTasks.put(taskId, wt);
        return addToQ(taskId, qosKey);
    }

    public TASKTYPE poll() {
        TASKIDTYPE taskId = pollQos();
        return (taskId == null ? null : queuedTasks.get(taskId));
    }

    /**
     * This method must be called after a task is taken from the queue
     * using {@link #poll()} and executed.
     */
    public TASKTYPE remove(TASKIDTYPE taskId) {
        TASKTYPE wt = queuedTasks.remove(taskId);
        if (wt != null) {
            taskQSize.decrementAndGet();
        }
        return wt;
    }

    private boolean addToQ(TASKIDTYPE taskId, String qosKey) {
        if (qosKey == null || qosKey.equals(EMPTY_STRING) || size() < getQosThreshold()) {
            defaultQ.add(taskId);
        } else {
            addSynced(taskId, qosKey);
        }
        taskQSize.incrementAndGet();
        return true;
    }

    private void addSynced(TASKIDTYPE taskId, String qosKey) {
        qosKeyLock.lock();
        try {
            Queue<TASKIDTYPE> qosQ = qosQueues.get(qosKey);
            if (qosQ == null) {
                if (!isUsingQos()) {
                    // Set up QoS mechanics
                    qosTurn.clear();
                    qosTurn.add(EMPTY_STRING);
                    usingQos = true;
                }
                qosQ = new LinkedList<TASKIDTYPE>();
                qosQ.add(taskId);
                qosQueues.put(qosKey, qosQ);
                qosTurn.add(qosKey);
                log.trace("Created QoS queue for {}", qosKey);
            } else {
                qosQ.add(taskId);
                if (log.isTraceEnabled()) {
                    log.trace("Added task to QoS queue {}, size: " + qosQ.size(), qosKey);
                }
            }
        } finally {
            qosKeyLock.unlock();
        }
    }

    private TASKIDTYPE pollQos() {
        TASKIDTYPE taskId = null;
        qosKeyLock.lock();
        try {
            taskId = pollQosRecursive();
        } finally {
            qosKeyLock.unlock();
        }
        return taskId;
    }

    /**
     * Poll the work task queues according to qosTurn.
     * Recursive in case empty QoS queues are removed or defaultQ is empty.
     */
    private TASKIDTYPE pollQosRecursive() {
        if (!isUsingQos()) {
            // QoS might have been disabled before the lock was released or by this recursive method.
            return defaultQ.poll();
        }
        String qosKey = qosTurn.poll();
        Queue<TASKIDTYPE> qosQ = (qosKey.equals(EMPTY_STRING) ? defaultQ : qosQueues.get(qosKey));
        TASKIDTYPE taskId = qosQ.poll();
        if (qosQ == defaultQ) {
            // defaultQ should always be checked, even if it was empty
            qosTurn.add(EMPTY_STRING);
            if (taskId == null) {
                taskId = pollQosRecursive();
            } else {
                log.trace("Removed task from defaultQ.");
            }
        } else {
            if (taskId == null) {
                qosQueues.remove(qosKey);
                if (qosQueues.isEmpty()) {
                    usingQos = false;
                }
                taskId = pollQosRecursive();
            } else {
                qosTurn.add(qosKey);
                if (log.isTraceEnabled()) {
                    log.trace("Removed task from QoS queue {}, size: " + qosQ.size(), qosKey);
                }
            }
        }
        return taskId;
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder(this.getClass().getName());
        sb.append(", size: ").append(size());
        sb.append(", number of QoS queues: ").append(qosQueues.size());
        return sb.toString();
    }

    public boolean containsTaskId(TASKIDTYPE wid) {
        return queuedTasks.containsKey(wid);
    }

    public int size() {
        return taskQSize.get();
    }

    public void setQosThreshold(int size) {
        this.qosThreshold = size;
    }

    public int getQosThreshold() {
        return qosThreshold;
    }

    public boolean isUsingQos() {
        return usingQos;
    }
}
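For illustration, here is a hypothetical wiring of this queue into a job loop along the lines of the asker's APIJobExecutor. It assumes APIJobService is a Runnable, and uses the job id as both the task and the task id so the remove(taskId) contract can be honored after execution:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class QosJobExecutor implements Runnable {

    private final QosTaskQueue<Integer, Integer> jobQueue = new QosTaskQueue<Integer, Integer>();
    private final ExecutorService jobExecutor = Executors.newFixedThreadPool(10);

    public void addJob(int jobId, String userId) {
        if (!jobQueue.containsTaskId(jobId)) {
            jobQueue.add(jobId, jobId, userId); // the user id is the QoS key
        }
    }

    public void run() {
        while (true) {
            final Integer jobId = jobQueue.poll(); // non-blocking, returns null when empty
            if (jobId == null) {
                try {
                    Thread.sleep(50); // back off instead of busy-spinning
                } catch (InterruptedException e) {
                    return;
                }
                continue;
            }
            jobExecutor.submit(() -> {
                try {
                    new APIJobService(jobId).run(); // assumption: the job task is a Runnable
                } finally {
                    jobQueue.remove(jobId); // contract: always remove after execution
                }
            });
        }
    }
}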

Java logging API MemoryHandler dumping log instantly

I want to log my application's messages only after a certain threshold, say after 10 messages. I read about MemoryHandler and used it. However, I found that it logs the messages instantly instead of buffering them as described in the documentation. Here's the code:
Handler h = new FileHandler("/var/tmp/process.log");
h.setLevel(Level.INFO);
Handler h2 = new MemoryHandler(h, 10, Level.ALL);
logger.addHandler(h2);
for (int i = 0; i < 10; i++) {
    logger.log(Level.INFO, "Sample message");
    Thread.sleep(1000);
}
This code writes each message to the file instantly. What am I missing? My purpose is to keep disk I/O down. Please help.
The third constructor argument in
Handler h2 = new MemoryHandler(h, 10, Level.ALL);
defines the push level: if a message of the given level or above is logged, the MemoryHandler pushes it straight to the configured downstream handler (see the JDK documentation). With Level.ALL, every record meets the push level, so every record is written immediately, which is exactly what you are seeing.
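To see the push level do what it was designed for, compare with a buffer that only flushes when a WARNING arrives (a small self-contained sketch; the file path is borrowed from your snippet):
import java.util.logging.*;

public class PushLevelDemo {
    public static void main(String[] args) throws Exception {
        Handler file = new FileHandler("/var/tmp/process.log");
        // Keep the last 10 records in memory; write nothing until a WARNING (or above) arrives.
        Handler buffered = new MemoryHandler(file, 10, Level.WARNING);
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false);
        logger.addHandler(buffered);
        for (int i = 0; i < 9; i++) {
            logger.info("buffered message " + i); // held in the ring buffer, no disk I/O yet
        }
        logger.warning("boom"); // push: the buffered INFO records and this WARNING hit the file
    }
}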
I don't think that the MemoryHandler is suitable for the purpose you'd like to achieve. You could create your own implementation of a MemoryHandler with a fixed-size buffer that flushes whenever the buffer is full. But consider the drawbacks of this approach: log messages can get lost when the application terminates, flushing may involve blocking I/O, and you cannot determine which thread will have to execute that I/O.
Alternatively, you could think about using another proven logging framework, like logback or log4j2. These generally offer more advanced functionality. I suggest looking into asynchronous logging.
You have to extend MemoryHandler to provide custom push behavior. You can do this by setting the push level to ALL and overriding the push method, or by setting the push level to OFF and then manually issuing a push from the publish method.
If you want to only start logging after a number of log records are seen, then you want to create something like:
public class PopoffHandler extends MemoryHandler {

    private long count;
    private final long size;

    public PopoffHandler(Handler target, int size) {
        super(target, size, Level.ALL);
        this.size = size;
    }

    @Override
    public synchronized void push() {
        if (count == size) {
            super.push();
        } else {
            ++count;
        }
    }

    @Override
    public void setPushLevel(Level newLevel) {
        if (newLevel == null) {
            throw new NullPointerException();
        }
        super.setPushLevel(Level.ALL);
    }
}
If you want to log records in groups then you want to do something like:
public class ChunkedHandler extends MemoryHandler {

    private long count;
    private final long size;

    public ChunkedHandler(Handler target, int size) {
        super(target, size, Level.OFF);
        this.size = size;
    }

    @Override
    public synchronized void publish(LogRecord record) {
        super.publish(record);
        if (++count % size == 0L) {
            super.push();
        }
    }

    @Override
    public void setPushLevel(Level newLevel) {
        if (newLevel == null) {
            throw new NullPointerException();
        }
        super.setPushLevel(Level.OFF);
    }
}
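Wiring the second variant up would look something like this (hypothetical usage of the ChunkedHandler above):
import java.util.logging.*;

public class ChunkedHandlerDemo {
    public static void main(String[] args) throws Exception {
        Handler file = new FileHandler("/var/tmp/process.log");
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false);
        logger.addHandler(new ChunkedHandler(file, 10)); // flush in groups of 10
        for (int i = 0; i < 25; i++) {
            logger.info("Sample message " + i); // two pushes occur (after 10 and 20); 5 records stay buffered
        }
    }
}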

ExtendedDataTable in RichFaces 4: DataModel handling

I have another question, somewhat related to the one I posted in January. I have a list, which is a rich:extendedDataTable component, and it gets updated on the fly as the user types his search criteria in a separate text box (i.e. the user types in the first 4 characters, and as he keeps typing, the results list changes). It works fine when I use RichFaces 3, but when I upgraded to RichFaces 4, I got all sorts of compilation problems. The following classes are no longer accessible and there seems to be no suitable replacement for them:
org.richfaces.model.DataProvider
org.richfaces.model.ExtendedTableDataModel
org.richfaces.model.selection.Selection
org.richfaces.model.selection.SimpleSelection
Here is what it was before:
This is the input text that should trigger the search logic:
<h:inputText id="firmname" value="#{ExtendedTableBean.searchValue}">
    <a4j:support ajaxSingle="true" eventsQueue="firmListUpdate"
        reRender="resultsTable"
        actionListener="#{ExtendedTableBean.searchForResults}" event="onkeyup" />
</h:inputText>
The action listener is what should update the list. Here is the extendedDataTable, right below the inputText:
<rich:extendedDataTable tableState="#{ExtendedTableBean.tableState}" var="item"
    id="resultsTable" value="#{ExtendedTableBean.dataModel}">
    ... <%-- I'm listing columns here --%>
</rich:extendedDataTable>
And here's the back-end code, where I use my data model handling:
/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package com.beans;

import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.CopyOnWriteArrayList;
import javax.faces.context.FacesContext;
import javax.faces.event.ActionEvent;
import org.richfaces.model.DataProvider;
import org.richfaces.model.ExtendedTableDataModel;

public class ExtendedTableBean {

    private String sortMode = "single";
    private ExtendedTableDataModel<ResultObject> dataModel;
    // ResultObject is a simple pojo and getResultsPerValue is a method that
    // reads the data from the properties file, assigns it to this pojo, and
    // adds the pojo to the list
    private Object tableState;
    private List<ResultObject> results = new CopyOnWriteArrayList<ResultObject>();
    private List<ResultObject> selectedResults =
            new CopyOnWriteArrayList<ResultObject>();
    private String searchValue;

    /**
     * This is the action listener that the user triggers, by typing the search value
     */
    public void searchForResults(ActionEvent e) {
        synchronized (results) {
            results.clear();
        }
        // I don't think it's necessary to clear the results list all the time, but here
        // I also make sure that we only start searching if the value is at least 4
        // characters long
        if (this.searchValue.length() > 3) {
            results.clear();
            updateTableList();
        } else {
            results.clear();
        }
        dataModel = null; // to force the dataModel to be updated.
    }

    public List<ResultObject> getResultsPerValue(String searchValue) {
        List<ResultObject> resultsList = new CopyOnWriteArrayList<ResultObject>();
        // Logic for reading data from the properties file, populating ResultObject
        // and adding it to the list
        return resultsList;
    }

    /**
     * This method updates the results list, based on a search value
     */
    public void updateTableList() {
        try {
            List<ResultObject> searchedResults = getResultsPerValue(searchValue);
            // Once the results have been retrieved from the properties, empty the
            // current list and replace it with what was found.
            synchronized (results) {
                results.clear();
                results.addAll(searchedResults);
            }
        } catch (Throwable xcpt) {
            // Exception handling
        }
    }

    /**
     * Lazily builds the data model the table is bound to; setting dataModel
     * to null forces a rebuild on the next call.
     */
    public synchronized ExtendedTableDataModel<ResultObject> getDataModel() {
        try {
            if (dataModel == null) {
                dataModel = new ExtendedTableDataModel<ResultObject>(
                        new DataProvider<ResultObject>() {

                            public ResultObject getItemByKey(Object key) {
                                try {
                                    for (ResultObject c : results) {
                                        if (key.equals(getKey(c))) {
                                            return c;
                                        }
                                    }
                                } catch (Exception ex) {
                                    // Exception handling
                                }
                                return null;
                            }

                            public List<ResultObject> getItemsByRange(int firstRow, int endRow) {
                                return Collections.unmodifiableList(results.subList(firstRow, endRow));
                            }

                            public Object getKey(ResultObject item) {
                                return item.getResultName();
                            }

                            public int getRowCount() {
                                return results.size();
                            }
                        });
            }
        } catch (Exception ex) {
            // Exception handling
        }
        return dataModel;
    }

    // Getters and setters
}
Now that the classes ExtendedTableDataModel and DataProvider are no longer available, what should I be using instead? The RichFaces forum claims there's really nothing and that developers are pretty much on their own (meaning they have to do their own implementation). Does anyone have any other idea or suggestion?
Thanks again for all your help, and again, sorry for the lengthy question.
You could convert your data model to extend the abstract org.ajax4jsf.model.ExtendedDataModel instead, which is actually a more robust and performant data model for use with <rich:extendedDataTable/>. A rough translation of your existing model to the new one is below (I've decided to use your existing ExtendedTableDataModel<ResultObject> as the underlying data source instead of the results list, to demonstrate the translation):
public class MyDataModel extends ExtendedDataModel<ResultObject> {

    String currentKey; // key of the current row in the model
    Map<String, ResultObject> cachedResults = new HashMap<String, ResultObject>(); // a local cache of search/pagination results
    List<String> cachedRowKeys; // a local cache of key values for cached items
    int rowCount;
    ExtendedTableDataModel<ResultObject> dataModel; // the underlying data source. can be anything

    public void setRowKey(Object key) {
        this.currentKey = (String) key; // keys are the result names, see getKey() in your provider
    }

    public void walk(FacesContext context, DataVisitor visitor, Range range, Object argument) throws IOException {
        int firstRow = ((SequenceRange) range).getFirstRow();
        int numberOfRows = ((SequenceRange) range).getRows();
        cachedRowKeys = new ArrayList<String>();
        // end row = firstRow + numberOfRows, matching the DataProvider contract
        for (ResultObject result : dataModel.getItemsByRange(firstRow, firstRow + numberOfRows)) {
            cachedRowKeys.add(result.getResultName());
            cachedResults.put(result.getResultName(), result); // populate cache. This is strongly advised, as you'll see below.
            visitor.process(context, result.getResultName(), argument);
        }
    }

    public Object getRowData() {
        if (currentKey == null) {
            return null;
        }
        ResultObject selectedRowObject = cachedResults.get(currentKey); // return the result from the internal cache without making the trip to the underlying data source
        if (selectedRowObject == null) { // the desired row is not within the range of the cache
            selectedRowObject = dataModel.getItemByKey(currentKey);
            cachedResults.put(currentKey, selectedRowObject);
        }
        return selectedRowObject;
    }

    public int getRowCount() {
        if (rowCount == 0) {
            rowCount = dataModel.getRowCount(); // cache the row count
        }
        return rowCount;
    }
}
Those are the three most important methods in that class. There are a bunch of other methods, basically carried over from legacy versions, that you don't need to worry about. If you're saving JSF state to the client, you might be interested in org.ajax4jsf.model.SerializableDataModel for serialization purposes. See an example for that here. It's an old blog, but the logic is still applicable.
Unrelated to this, your current implementation of getItemByKey will perform poorly in a production-grade app. Having to iterate through every element to return a result? Try a better search algorithm.
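For example, keeping a map alongside the list makes the lookup constant-time (a hypothetical change to your bean; resultsByKey would have to be populated wherever results is filled, e.g. in updateTableList):
private final Map<Object, ResultObject> resultsByKey = new ConcurrentHashMap<Object, ResultObject>();

public ResultObject getItemByKey(Object key) {
    return resultsByKey.get(key); // O(1) instead of scanning the whole results list
}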
