I want to log my application's messages after a certain threshold, say after 10 messages. I read about MemoryHandler and used it. However, I found that it logs the messages instantly instead of buffering them as described in the documentation. Here's the code:
Handler h = new FileHandler("/var/tmp/process.log");
h.setLevel(Level.INFO);
Handler h2 = new MemoryHandler(h, 10, Level.ALL);
logger.addHandler(h2);
for (int i = 0; i < 10; i++) {
    logger.log(Level.INFO, "Sample message");
    Thread.sleep(1000);
}
This code writes each message to the file instantly. What am I missing? My purpose is to avoid too much disk I/O. Please help.
The third constructor argument in
Handler h2 = new MemoryHandler(h, 10, Level.ALL);
defines the push level, i.e. if a message of the given level or above is logged, the MemoryHandler will push it (together with the buffered records) to the configured downstream handler (see the JDK documentation). Since you pass Level.ALL, every single record meets the push level and is forwarded to the FileHandler immediately.
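A minimal sketch of how the push level is meant to work (the path and logger name are just for illustration): INFO records stay in the buffer, and only a record at or above the push level, or an explicit push(), flushes them downstream.

import java.util.logging.*;

public class PushLevelDemo {
    public static void main(String[] args) throws Exception {
        Handler file = new FileHandler("/var/tmp/process.log"); // illustrative path
        // buffer up to 10 records; only SEVERE (or push()) triggers a flush
        MemoryHandler buffered = new MemoryHandler(file, 10, Level.SEVERE);

        Logger logger = Logger.getLogger("demo");
        logger.addHandler(buffered);

        logger.info("buffered in memory, not yet written to disk");
        logger.info("still buffered");
        logger.severe("at the push level: this flushes the whole buffer");
    }
}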
I don't think that the MemoryHandler is suitable for the purpose you'd like to achieve. You could create your own implementation of a memory handler with a fixed-size buffer that flushes whenever the buffer is full. But consider the drawbacks of this approach: log messages can get lost when the application terminates, flushing may involve blocking I/O, and you cannot determine which thread will have to execute that I/O.
Alternatively, you could think about using another proven logging framework, like Logback or Log4j 2. These generally offer more advanced functionality. I suggest looking into asynchronous logging.
You have to extend the MemoryHandler to provide custom push behavior. You can do this either by setting the push level to ALL and overriding the push method, or by setting the push level to OFF and manually issuing a push from the publish method.
If you want to start logging only after a number of log records have been seen, then create something like:
public class PopoffHandler extends MemoryHandler {

    private long count;
    private final long size;

    public PopoffHandler(Handler target, int size) {
        super(target, size, Level.ALL);
        this.size = size;
    }

    @Override
    public synchronized void push() {
        if (count == size) {
            super.push();
        } else {
            ++count;
        }
    }

    @Override
    public void setPushLevel(Level newLevel) {
        if (newLevel == null) {
            throw new NullPointerException();
        }
        super.setPushLevel(Level.ALL);
    }
}
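Usage would be (a sketch; the target handler and path are illustrative):

// Records are buffered until `size` records have been counted;
// from then on every record, plus the buffered backlog, reaches the target.
Handler target = new FileHandler("/var/tmp/process.log");
PopoffHandler h = new PopoffHandler(target, 10);
Logger.getLogger("demo").addHandler(h);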
If you want to log records in groups then you want to do something like:
public class ChunkedHandler extends MemoryHandler {

    private long count;
    private final long size;

    public ChunkedHandler(Handler target, int size) {
        super(target, size, Level.OFF);
        this.size = size;
    }

    @Override
    public synchronized void publish(LogRecord record) {
        super.publish(record);
        if (++count % size == 0L) {
            super.push();
        }
    }

    @Override
    public void setPushLevel(Level newLevel) {
        if (newLevel == null) {
            throw new NullPointerException();
        }
        super.setPushLevel(Level.OFF);
    }
}
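Here the push level OFF means nothing is pushed automatically; the overridden publish counts records and flushes the buffer after every size-th one. Usage would look like this (sketch; names are illustrative):

Handler target = new FileHandler("/var/tmp/process.log");
ChunkedHandler h = new ChunkedHandler(target, 10);
Logger logger = Logger.getLogger("demo");
logger.addHandler(h);
for (int i = 0; i < 30; i++) {
    logger.info("message " + i); // disk I/O happens only after records 10, 20, and 30
}

This matches the original goal: the FileHandler sees the records in batches of ten instead of one write per record.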
I have joined the ranks of Vert.x fans. However, the single-threaded event loop may not work for me, because my server might get 50 file download requests at a moment. As a workaround I have created this class:
public abstract class BackgroundExecutor<T> {

    public abstract T onRun() throws Exception;
    public abstract void onSuccess(T result);
    public abstract void onException();

    private static final int poolSize = Runtime.getRuntime().availableProcessors();
    private static final long maxExecuteTime = 120000;
    private static WorkerExecutor mExecutor;
    private static final String BG_THREAD_TAG = "BG_THREAD";
    protected RoutingContext ctx;

    private boolean isThreadInBackground() {
        return Thread.currentThread().getName() != null && Thread.currentThread().getName().equals(BG_THREAD_TAG);
    }

    // onSuccess will not be called if an exception is thrown
    public BackgroundExecutor(RoutingContext ctx) {
        this.ctx = ctx;
        if (mExecutor == null) {
            mExecutor = MyVertxServer.vertx.createSharedWorkerExecutor("my-worker-pool", poolSize, maxExecuteTime);
        }
        if (!isThreadInBackground()) {
            /** we are unlocking the lock before res.succeeded, because it might take long and keeps any thread waiting */
            mExecutor.executeBlocking(future -> {
                try {
                    Thread.currentThread().setName(BG_THREAD_TAG);
                    T result = onRun();
                    future.complete(result);
                } catch (Exception e) {
                    GUI.display(e);
                    e.printStackTrace();
                    onException();
                    future.fail(e);
                }
                /** false here means they should not be parallel, and will run without order multiple times on same context */
            }, false, res -> {
                if (res.succeeded()) {
                    onSuccess((T) res.result());
                }
            });
        } else {
            GUI.display("AVOIDED DUPLICATE BACKGROUND THREADING");
            System.out.println("AVOIDED DUPLICATE BACKGROUND THREADING");
            try {
                T result = onRun();
                onSuccess(result);
            } catch (Exception e) {
                GUI.display(e);
                e.printStackTrace();
                onException();
            }
        }
    }
}
allowing the handlers to extend it and use it like this:
public abstract class DefaultFileHandler implements MyHttpHandler {

    public abstract File getFile(String suffix);

    @Override
    public void Handle(RoutingContext ctx, VertxUtils utils, String suffix) {
        new BackgroundExecutor<Void>(ctx) {
            @Override
            public Void onRun() throws Exception {
                File file = getFile(URLDecoder.decode(suffix, "UTF-8"));
                if (file == null || !file.exists()) {
                    utils.sendResponseAndEnd(ctx.response(), 404);
                    return null;
                } else {
                    utils.sendFile(ctx, file);
                }
                return null;
            }

            @Override
            public void onSuccess(Void result) {}

            @Override
            public void onException() {
                utils.sendResponseAndEnd(ctx.response(), 404);
            }
        };
    }
}
And here is how I initialize my Vert.x server:
vertx.deployVerticle(MainDeployment.class.getCanonicalName(), res -> {
    if (res.succeeded()) {
        GUI.display("Deployed");
    } else {
        res.cause().printStackTrace();
    }
});
server.requestHandler(router::accept).listen(port);
And here is my MainDeployment class:
public class MainDeployment extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        // Different ways of deploying verticles
        // Deploy a verticle and don't wait for it to start
        for (Entry<String, MyHttpHandler> entry : MyVertxServer.map.entrySet()) {
            MyVertxServer.router.route(entry.getKey()).handler(new Handler<RoutingContext>() {
                @Override
                public void handle(RoutingContext ctx) {
                    String[] handlerID = ctx.request().uri().split(ctx.currentRoute().getPath());
                    String suffix = handlerID.length > 1 ? handlerID[1] : null;
                    entry.getValue().Handle(ctx, new VertxUtils(), suffix);
                }
            });
        }
    }
}
This is working just fine where I need it, but I still wonder whether there is a better way to handle concurrency like this in Vert.x. If so, an example would be really appreciated. Thanks a lot.
I don't fully understand your problem and the reasons for your solution. Why don't you implement one verticle to handle your HTTP uploads and deploy it multiple times? I think handling 50 concurrent uploads should be a piece of cake for Vert.x.
When deploying a verticle using a verticle name, you can specify the number of verticle instances that you want to deploy:
DeploymentOptions options = new DeploymentOptions().setInstances(16);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
This is useful for scaling easily across multiple cores. For example, you might have a web-server verticle to deploy and multiple cores on your machine, so you want to deploy multiple instances to utilise all the cores.
http://vertx.io/docs/vertx-core/java/#_specifying_number_of_verticle_instances
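A common variant is to size the instance count from the available cores (a sketch; the verticle name is the placeholder from the docs above):

int cores = Runtime.getRuntime().availableProcessors();
DeploymentOptions options = new DeploymentOptions().setInstances(cores);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);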
Vert.x has a well-designed concurrency model, so concurrency issues generally do not occur. Vert.x discourages a hand-rolled multi-thread model, because it is not easy to handle: if you choose a multi-thread model, you have to think about shared data. If you simply want to scale out the event loop, first check the number of CPU cores on your machine, and then set the instance count accordingly:
DeploymentOptions options = new DeploymentOptions().setInstances(4);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
But if you have 4 CPU cores, don't set up more than 4 instances; beyond the number of cores, performance won't improve. Vert.x concurrency reference: http://vertx.io/docs/vertx-core/java/
I need to build a queue where the elements will be added and removed in chronological order by default. But if the client sets the priority flag for the queue, I need to be able to pull the elements based on their priority order.
I am thinking of creating a priority queue backed by a map that keeps track of the queue index in priority order; based on the priority flag I can pull the items from the map and pop the item at that index from the queue.
However, with this approach the question is whether I create the map by default or only if the flag is set (considering the cost of creating the map on the fly is high, I am inclined towards having it by default).
Please let me know if there is a better way of doing this, or if an existing implementation exists.
Here is what I currently have:
import javax.naming.OperationNotSupportedException;
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DynamicPriorityQueue<ComparableQueueElement> implements IQueue<ComparableQueueElement> {

    private static final int CONSTANT_HUNDRED = 100;

    private boolean fetchByCustomPriority = false;
    private final ReentrantLock lock;
    private final PriorityQueue<ComparableQueueElement> queue;
    private final PriorityQueue<ComparableQueueElement> customPriorityQueue;

    public DynamicPriorityQueue() {
        this(null);
    }

    public DynamicPriorityQueue(Comparator<ComparableQueueElement> comparator) {
        this.lock = new ReentrantLock();
        this.queue = new PriorityQueue<>(CONSTANT_HUNDRED);
        if (comparator != null)
            this.customPriorityQueue = new PriorityQueue<ComparableQueueElement>(CONSTANT_HUNDRED, comparator);
        else
            this.customPriorityQueue = null;
    }

    public void setFetchByCustomPriority(boolean fetchByCustomPriority) throws OperationNotSupportedException {
        if (this.customPriorityQueue == null)
            throw new OperationNotSupportedException("Object was created without a custom comparator.");
        this.fetchByCustomPriority = fetchByCustomPriority;
    }

    public void push(ComparableQueueElement t) throws InterruptedException {
        if (this.lock.tryLock(CONSTANT_HUNDRED, TimeUnit.MILLISECONDS)) {
            try {
                this.queue.offer(t);
                if (this.customPriorityQueue != null)
                    this.customPriorityQueue.offer(t);
            } finally {
                this.lock.unlock();
            }
        }
    }

    public ComparableQueueElement peek() {
        return this.fetchByCustomPriority ? this.queue.peek()
                : (this.customPriorityQueue != null ? this.customPriorityQueue.peek() : null);
    }

    public ComparableQueueElement pop() throws InterruptedException {
        ComparableQueueElement returnElement = null;
        if (this.lock.tryLock(CONSTANT_HUNDRED, TimeUnit.MILLISECONDS)) {
            try {
                if (this.fetchByCustomPriority && this.customPriorityQueue != null) {
                    returnElement = this.customPriorityQueue.poll();
                    this.queue.remove(returnElement);
                } else {
                    returnElement = this.queue.poll();
                    if (this.customPriorityQueue != null) {
                        this.customPriorityQueue.remove(returnElement);
                    }
                }
            } finally {
                this.lock.unlock();
            }
        }
        return returnElement;
    }
}
I deleted my comments after rereading the question; it may get complicated. You need to turn a FIFO (chronological) queue into a priority queue with a flag. Your map would need to be ordered and be able to hold repeated values. Otherwise you would need to search the map to find the highest priority, or search the queue. I wouldn't do it.
EDIT
What about using a wrapping class:
class Pointer<T> {
    T element;
}
And two queues of Pointers, where the queues share the Pointer instances but return them in different orders? The only thing you would need to do is check that element is not null (you set it to null when it leaves one of the queues). The Pointer reference remains in the other queue, but you check for null before returning.
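A minimal sketch of that idea (class and method names are hypothetical, not from the question): both queues hold the same Pointer objects; whichever queue an element actually leaves through sets the pointer's element to null, and the other queue lazily skips dead pointers.

import java.util.ArrayDeque;
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Queue;

class Pointer<T> {
    T element; // set to null once the element has left either queue
    Pointer(T element) { this.element = element; }
}

class DualQueue<T> {
    private final Queue<Pointer<T>> fifo = new ArrayDeque<>();
    private final Queue<Pointer<T>> prio;

    DualQueue(Comparator<T> cmp) {
        // nullsFirst keeps comparisons safe after elements are nulled out
        this.prio = new PriorityQueue<>(
                Comparator.comparing((Pointer<T> p) -> p.element, Comparator.nullsFirst(cmp)));
    }

    public void push(T t) {
        Pointer<T> p = new Pointer<>(t);
        fifo.offer(p); // both queues share the same Pointer instance
        prio.offer(p);
    }

    public T pop(boolean byPriority) {
        Queue<Pointer<T>> q = byPriority ? prio : fifo;
        while (!q.isEmpty()) {
            Pointer<T> p = q.poll();
            if (p.element != null) {
                T t = p.element;
                p.element = null; // mark it consumed for the other queue
                return t;
            }
            // dead pointer: the element already left through the other queue
        }
        return null;
    }
}

This is the classic lazy-deletion trick: neither pop has to call remove() on the other queue, at the cost of occasionally draining dead pointers.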
EDIT
Your code doesn't have a map.
public ComparableQueueElement peek() {
return this.fetchByCustomPriority ? this.queue.peek()
: (this.customPriorityQueue != null ? this.customPriorityQueue.peek() : null);
}
is not correct: if it is not custom, you should peek from this.queue.
EDIT
Note that by using a wrapping class you save yourself the remove calls on the other queue. The only overhead added is that you need to check for null when fetching.
To me the implementation looks fine if your application has a requirement where the flag changes very frequently. In that case you have both queues ready to offer or poll objects; it makes your add operation heavy, but retrieval is fast.
But if these changes are not frequent, then you can think of re-initializing the custom priority queue from the FIFO queue only when the flag changes, and perform all your peek/offer operations on only one queue instead of two, which will be more efficient when FIFO priority is being used.
And if you do that, modify your push operation as below:
public void push(ComparableQueueElement t) throws InterruptedException {
    if (this.lock.tryLock(CONSTANT_HUNDRED, TimeUnit.MILLISECONDS)) {
        try {
            this.queue.offer(t);
            if (this.fetchByCustomPriority) // add to customPriorityQueue only when the flag is enabled
                this.customPriorityQueue.offer(t);
        } finally {
            this.lock.unlock();
        }
    }
}
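The flag setter would then rebuild the custom queue on demand, for example (a sketch based on the fields from the question):

public void setFetchByCustomPriority(boolean fetchByCustomPriority) throws OperationNotSupportedException {
    if (this.customPriorityQueue == null)
        throw new OperationNotSupportedException("Object was created without a custom comparator.");
    this.fetchByCustomPriority = fetchByCustomPriority;
    this.customPriorityQueue.clear();
    if (fetchByCustomPriority) {
        // pay the re-ordering cost once, at the moment the flag flips
        this.customPriorityQueue.addAll(this.queue);
    }
}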
I have written a large-scale HTTP server using Vert.x, but I'm getting this error when the number of concurrent requests increases:
WARNING: Thread Thread[vert.x-eventloop-thread-1,5,main] has been blocked for 8458 ms, time limit is 1000
io.vertx.core.VertxException: Thread blocked
Here is my full code:
public class MyVertxServer {

    public Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(100));
    private HttpServer server = vertx.createHttpServer();
    private Router router = Router.router(vertx);

    public void bind(int port) {
        server.requestHandler(router::accept).listen(port);
    }

    public void createContext(String path, MyHttpHandler handler) {
        if (!path.endsWith("/")) {
            path += "/";
        }
        path += "*";
        router.route(path).handler(new Handler<RoutingContext>() {
            @Override
            public void handle(RoutingContext ctx) {
                String[] handlerID = ctx.request().uri().split(ctx.currentRoute().getPath());
                String suffix = handlerID.length > 1 ? handlerID[1] : null;
                handler.Handle(ctx, new VertxUtils(), suffix);
            }
        });
    }
}
And how I call it:
ver.createContext("/getRegisterManager",new ProfilesManager.RegisterHandler());
ver.createContext("/getLoginManager", new ProfilesManager.LoginHandler());
ver.createContext("/getMapcomCreator",new ItemsManager.MapcomCreator());
ver.createContext("/getImagesManager", new ItemsManager.ImagesHandler());
ver.bind(PORT);
However, I don't find the eventBus() useful for HTTP servers that send/receive files, because you need to send the RoutingContext in the message, which is not possible. Could you please point me in the right direction? Thanks.
Added a little bit of the handler's code:
class ProfileGetter implements MyHttpHandler {

    @Override
    public void Handle(RoutingContext ctx, VertxUtils utils, String suffix) {
        String username = utils.Decode(ctx.request().headers().get("username"));
        String lang = utils.Decode(ctx.request().headers().get("lang"));
        display("profile requested : " + username);
        Profile profile = ProfileManager.FindProfile(username, lang);
        if (profile == null) {
            ctx.request().response().putHeader("available", "false");
            utils.sendResponseAndEnd(ctx.response(), 400);
            return;
        } else {
            ctx.request().response().putHeader("available", "true");
            utils.writeStringAndEnd(ctx, new Gson().toJson(profile));
        }
    }
}
Here ProfileManager.FindProfile(username, lang) does a long-running database job on the same thread.
...
Basically all of my processing happens on the main thread, because if I use an executor I get strange exceptions and NullPointerExceptions in Vert.x, making me feel like the request processors in Vert.x are parallel.
Given the small amount of code in the question, let's agree that the problem is on the line:
Profile profile = ProfileManager.FindProfile(username,lang);
Assuming that this is internally doing some blocking JDBC call, which is an anti-pattern in Vert.x, you can solve this in several ways.
Say you can totally refactor the ProfileManager class (which IMO is the best option); then you can update it to be reactive, so your code would be like:
ProfileManager.FindProfile(username, lang, res -> {
    if (res.failed()) {
        // handle error, send 500 back, etc...
    } else {
        Profile profile = res.result();
        if (profile == null) {
            ctx.request().response().putHeader("available", "false");
            utils.sendResponseAndEnd(ctx.response(), 400);
            return;
        } else {
            ctx.request().response().putHeader("available", "true");
            utils.writeStringAndEnd(ctx, new Gson().toJson(profile));
        }
    }
});
Now what would be happening behind the scenes is that your JDBC call would not block (which is tricky, because JDBC is blocking by nature). To fix this, if you're lucky enough to use MySQL or Postgres, then you could code your JDBC against the async-client; if you're stuck with another RDBMS, then you need to use the jdbc-client, which in turn uses a thread pool to offload the work from the event loop thread.
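For illustration, a hedged sketch of the jdbc-client route (the JDBC URL, driver, table, and query are placeholders, and username comes from the handler above):

JsonObject config = new JsonObject()
        .put("url", "jdbc:mysql://localhost:3306/mydb") // placeholder
        .put("driver_class", "com.mysql.jdbc.Driver")   // placeholder
        .put("user", "user")
        .put("password", "secret");

JDBCClient client = JDBCClient.createShared(vertx, config);

client.getConnection(connRes -> {
    if (connRes.failed()) {
        // handle error, send 500 back, etc...
        return;
    }
    SQLConnection conn = connRes.result();
    conn.queryWithParams("SELECT * FROM profiles WHERE username = ?",
            new JsonArray().add(username), queryRes -> {
        conn.close();
        if (queryRes.succeeded()) {
            ResultSet rs = queryRes.result();
            // map the rows to a Profile and write the HTTP response here
        }
    });
});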
Now say that you cannot change the ProfileManager code; you can still offload it to the thread pool by wrapping the code in an executeBlocking block:
vertx.executeBlocking(future -> {
    Profile profile = ProfileManager.FindProfile(username, lang);
    future.complete(profile);
}, false, res -> {
    if (res.failed()) {
        // handle error, send 500 back, etc...
    } else {
        Profile profile = res.result();
        if (profile == null) {
            ctx.request().response().putHeader("available", "false");
            utils.sendResponseAndEnd(ctx.response(), 400);
            return;
        } else {
            ctx.request().response().putHeader("available", "true");
            utils.writeStringAndEnd(ctx, new Gson().toJson(profile));
        }
    }
});
I'm a newbie to Apache Camel. On HP NonStop there is a Receiver that receives events generated by the event manager, assume like a stream. My goal is to set up a consumer endpoint which receives the incoming messages and processes them through Camel.
As another endpoint, I simply need to write them to the logs. From my study I understood that for the consumer endpoint I need to create my own component, and the configuration would be like:
from("myComp:receive").to("log:net.javaforge.blog.camel?level=INFO")
Here is the code snippet which receives messages from the event system:
Receive receive = com.tandem.ext.guardian.Receive.getInstance();
byte[] maxMsg = new byte[500]; // holds largest possible request
short errorReturn = 0;
int countRead;                 // number of bytes read
boolean moreOpeners = true;
do { // read messages from $receive until last close
    try {
        countRead = receive.read(maxMsg, maxMsg.length);
        String receivedMessage = new String(maxMsg, "UTF-8");
        // Here I need to hand receivedMessage over to Camel
    } catch (ReceiveNoOpeners ex) {
        moreOpeners = false;
    } catch (Exception e) {
        moreOpeners = false;
    }
} while (moreOpeners);
Can someone give me some hints on how to make this a Consumer?
The 10,000-foot view is this:
You need to start out with implementing a component. The easiest way to get started is to extend org.apache.camel.impl.DefaultComponent. The only thing you have to do is override DefaultComponent::createEndpoint(..). Quite obviously what it does is create your endpoint.
So the next thing you need is to implement your endpoint. Extend org.apache.camel.impl.DefaultEndpoint for this. Override at the minimum DefaultEndpoint::createConsumer(Processor) to create your own consumer.
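A minimal sketch of those first two steps (class names are hypothetical; they match the consumer shown further down):

import java.util.Map;
import org.apache.camel.Consumer;
import org.apache.camel.Endpoint;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.impl.DefaultComponent;
import org.apache.camel.impl.DefaultEndpoint;

public class MessageComponent extends DefaultComponent {
    @Override
    protected Endpoint createEndpoint(String uri, String remaining, Map<String, Object> parameters) {
        return new MessageEndpoint(uri, this);
    }
}

class MessageEndpoint extends DefaultEndpoint {
    MessageEndpoint(String uri, MessageComponent component) {
        super(uri, component);
    }

    @Override
    public Consumer createConsumer(Processor processor) {
        return new MessageConsumer(this, processor); // the consumer shown below
    }

    @Override
    public Producer createProducer() {
        throw new UnsupportedOperationException("consumer-only endpoint");
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}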
Last but not least you need to implement the consumer. Again, best is to extend org.apache.camel.impl.DefaultConsumer. The consumer is where the code that generates your messages has to go. Through the constructor you receive a reference to your endpoint. Use the endpoint reference to create a new Exchange, populate it, and send it on its way along the route. Something along the lines of:
Exchange ex = endpoint.createExchange(ExchangePattern.InOnly);
setMyMessageHeaders(ex.getIn(), myMessagemetaData);
setMyMessageBody(ex.getIn(), myMessage);
getAsyncProcessor().process(ex, new AsyncCallback() {
    @Override
    public void done(boolean doneSync) {
        LOG.debug("Message was processed " + (doneSync ? "synchronously" : "asynchronously"));
    }
});
I recommend you pick a simple component (DirectComponent ?) as an example to follow.
Here is my own consumer component; it may help someone.
public class MessageConsumer extends DefaultConsumer {

    private final MessageEndpoint endpoint;
    private boolean moreOpeners = true;

    public MessageConsumer(MessageEndpoint endpoint, Processor processor) {
        super(endpoint, processor);
        this.endpoint = endpoint;
    }

    @Override
    protected void doStart() throws Exception {
        int countRead = 0; // number of bytes read
        do {
            countRead++;
            String msg = String.valueOf(countRead) + " " + System.currentTimeMillis();
            Exchange ex = endpoint.createExchange(ExchangePattern.InOnly);
            ex.getIn().setBody(msg);
            getAsyncProcessor().process(ex, new AsyncCallback() {
                @Override
                public void done(boolean doneSync) {
                    log.info("Message was processed " + (doneSync ? "synchronously" : "asynchronously"));
                }
            });
            // This is an echo server so echo request back to requester
        } while (moreOpeners);
    }

    @Override
    protected void doStop() throws Exception {
        moreOpeners = false;
        log.debug("Message processor is shutdown");
    }
}
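To try it out, the component can be registered under the scheme used in the route at the top of the question (a sketch, assuming the hypothetical MessageComponent that creates this MessageEndpoint):

CamelContext context = new DefaultCamelContext();
context.addComponent("myComp", new MessageComponent());
context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        from("myComp:receive").to("log:net.javaforge.blog.camel?level=INFO");
    }
});
context.start();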
When logging to a MemoryHandler, the MemoryHandler removes older entries once the number of entries exceeds its size.
I want to avoid this behavior, or at least note in the log that older entries were suppressed.
Little test case:
import java.util.logging.*;

public class SSCE01 {
    public static void main(String[] args) {
        Logger rootLogger = Logger.getLogger("");
        rootLogger.removeHandler(rootLogger.getHandlers()[0]); // remove default ConsoleHandler
        ConsoleHandler ch = new ConsoleHandler();
        Logger l = Logger.getLogger("test");
        MemoryHandler mh = new MemoryHandler(ch, 3, Level.OFF);
        l.addHandler(mh);
        l.severe("this shouldnt be logged");
        l.severe("this shouldnt be logged");
        l.severe("this shouldnt be logged");
        l.severe("this should be logged");
        l.severe("this should be logged");
        l.severe("this should be logged");
        mh.push();
    }
}
This is a bad idea: you'll fill the machine's memory with traces. MemoryHandler is a circular buffer, so when it gets full, older entries are removed. If you want entries not to be removed, construct it with Integer.MAX_VALUE as the size, which is again a bad idea: it will hurt your app's performance, and people tend to avoid that.
Consider using a handler that dumps traces to secondary storage with a timestamp, and build whatever logic you need using the traces from there.
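For example, java.util.logging can already do this with a rotating FileHandler, whose formatter prefixes each record with a timestamp (a sketch; the path and limits are illustrative):

// rotate across 5 files of ~1 MB each; SimpleFormatter adds a timestamp per record
FileHandler fh = new FileHandler("/var/tmp/app-%g.log", 1000000, 5, true);
fh.setFormatter(new SimpleFormatter());
Logger.getLogger("test").addHandler(fh);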
Edit
From the code you posted, you could encapsulate your logging functionality in another class that records the number of entries in the MemoryHandler. Something like:
class MyMemoryConsoleHandler {

    private Logger rootLogger;
    private MemoryHandler mh;
    private Logger l;
    private int size = 3;
    private int entries = 0;

    public MyMemoryConsoleHandler() {
        this.rootLogger = Logger.getLogger("");
        this.rootLogger.removeHandler(rootLogger.getHandlers()[0]);
        ConsoleHandler ch = new ConsoleHandler();
        this.l = Logger.getLogger("test");
        this.mh = new MemoryHandler(ch, this.size, Level.OFF);
        this.l.addHandler(this.mh); // wire the MemoryHandler to the logger
    }

    public synchronized void push() {
        this.mh.push();
        if (this.entries > this.size) {
            this.l.severe("Entries in log discarded !!!");
            this.mh.push();
        }
        this.entries = 0;
    }

    public synchronized void addMessage(String m) {
        this.entries++;
        this.l.severe(m);
    }
}
Instead of using the Java logging API calls directly, use your MyMemoryConsoleHandler so that you have control over what is pushed to the console.
Pay attention to the synchronized methods; they are needed in case you have a multi-threaded application. Otherwise you could end up with race conditions.
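Usage would then look like this (a sketch; with the buffer size of 3 from above, the fourth message triggers the discard warning on push):

MyMemoryConsoleHandler h = new MyMemoryConsoleHandler();
h.addMessage("one");
h.addMessage("two");
h.addMessage("three");
h.addMessage("four"); // now entries (4) > size (3): the oldest record was dropped
h.push();             // flushes, then logs "Entries in log discarded !!!"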