I want to change my logger statements to use key/value pairs so that they are searchable in Splunk. How can I retain the text "ProcessOne completed" in the output log? Please advise.
log.info("ProcessOne completed for Run id " + runId + "...Time Taken(ms) " + (System.currentTimeMillis() - tempTime));
I tried changing it as shown below, but I also want to embed ProcessOne somewhere in the message. How can I achieve that?
log.info("runId={}, TimeTaken={}", runId, (System.currentTimeMillis() - sTime_tds));
Why not just add a message key?
Message="processone completed"
Or break up further into process and status e.g.
Process=processone
Status=completed
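Putting the keys together, a minimal stand-alone sketch (the kvMessage helper and the exact key names are illustrative, not a fixed Splunk convention; in real code the result would go to log.info rather than System.out):

```java
public class SplunkLogFormat {
    // Illustrative helper: builds a Splunk-searchable key=value message
    // while keeping the process name and status as their own keys.
    static String kvMessage(String process, String status, long runId, long timeTakenMs) {
        return String.format("Process=%s Status=%s runId=%d TimeTaken=%d",
                process, status, runId, timeTakenMs);
    }

    public static void main(String[] args) {
        // prints: Process=processOne Status=completed runId=42 TimeTaken=1500
        System.out.println(kvMessage("processOne", "completed", 42L, 1500L));
    }
}
```

In Splunk you can then search on any of the keys, e.g. Process=processOne Status=completed.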
I need the trace id and span id to be available in all my logs. However, I am observing that after the first splitter in my Camel route, I can no longer see the trace id and span id in my logs.
[traceId: spanId:] INFO ---
Is there any way to re-enable the tracing information?
From the Camel documentation, I have tried to start tracing after the split by using
context.setTracing(true)
But this does not seem to work. Am I missing anything? Please help.
You probably have the traceId and spanId stored in the exchange message headers which are lost after the split.
A solution is to store them in the exchange properties (before the split), which are kept for the entire processing of the exchange (see Passing values between processors in Apache Camel).
If you are using the Java DSL you can use:
.setProperty("traceId", constant("traceIdValue"))
.setProperty("spanId", constant("spanIdValue"))
You can use the Simple Expression Language (https://camel.apache.org/manual/latest/simple-language.html) to access the properties after the split via exchangeProperty.property_name.
Example:
.log(LoggingLevel.INFO, "[traceId:${exchangeProperty.traceId} spanId:${exchangeProperty.spanId}]")
When you use split, new exchanges are created from the old one, and to pass exchange properties downstream you need to use an aggregation strategy.
Example:
.split().tokenize(System.lineSeparator()).aggregationStrategy(new YourAggregationStrategyClass())
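A minimal sketch of such an aggregation strategy (assuming Camel 3.x; the class name is illustrative, and the property names mirror the traceId/spanId example above):

```java
import org.apache.camel.AggregationStrategy;
import org.apache.camel.Exchange;

// Copies the tracing properties onto each aggregated exchange so they
// stay visible to processors that run after the split.
public class PropagateTracePropertiesStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            return newExchange; // first exchange in the batch
        }
        newExchange.setProperty("traceId", oldExchange.getProperty("traceId"));
        newExchange.setProperty("spanId", oldExchange.getProperty("spanId"));
        return newExchange;
    }
}
```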
My Java application, which uses SLF4J, is running on a Unix box, and my code has entries like:
if (LOG.isDebugEnabled())
LOG.debug("There is nothing in the cache for " + quote.getObject().getObjectId() + " - using the init quote");
I try to enable logging at ALL or DEBUG or TRACE via JMX, but the above log statement still doesn't get printed. All statements without "isDebugEnabled()" get printed.
Any ideas what can be done to enable these log statements? I cannot change logback.xml; I can only use JMX to change log levels.
If I've understood you correctly, when you say "All statements without isDebugEnabled() get printed", you mean this gets printed:
LOG.debug("There is nothing in the cache for " + quote.getObject().getObjectId() + " - using the init quote");
but this doesn't:
if (LOG.isDebugEnabled())
LOG.debug("There is nothing in the cache for " + quote.getObject().getObjectId() + " - using the init quote");
That doesn't make sense, since LOG.debug() internally does the equivalent of if (isDebugEnabled()). Therefore, you should run your code through a debugger to find out what is going on.
If you mean that LOG.debug(...) does NOT get printed but other levels do, you should follow the link in #a_a's comment to set the level programmatically to DEBUG.
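For completeness, a sketch of the programmatic route, assuming logback is the SLF4J backend (the cast fails for other bindings, and the logger name here is just a placeholder):

```java
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

public class EnableDebug {
    // Cast the SLF4J logger to logback's implementation to change its
    // effective level at runtime, without touching logback.xml.
    public static void enable(String loggerName) {
        Logger logger = (Logger) LoggerFactory.getLogger(loggerName);
        logger.setLevel(Level.DEBUG);
    }
}
```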
I'm creating an alert which should display a variable inside its display message.
So I've created an R.string.dialog_message1 and an R.string.dialog_message2 to put before and after this variable. This is what I have:
builder.setMessage(R.string.dialog_message1 + lastOnes + R.string.dialog_message2)
.setTitle(R.string.dialog_title );
I get no errors until runtime, when it displays:
"Submit query" instead of my own mixed message. Any ideas?
Your code should look like:
builder.setMessage(getString(R.string.dialog_message1) + lastOnes + getString(R.string.dialog_message2))
       .setTitle(getString(R.string.dialog_title));
Your R.string.stringname1 references are int resource IDs, not the Strings themselves; adding them together produces another int, which setMessage(int) then interprets as a (wrong) resource ID. This example from Android will show you how to properly use String resources.
I am using JMeter to test HLS playback from a streaming server. So, the first HTTP request is for a master manifest file (m3u8). Say,
http://myserver/application1/subpath1/file1.m3u8
The reply to this will result in a playlist something like,
subsubFolder/360p/file1.m3u8
subsubFolder/480p/file1.m3u8
subsubFolder/720p/file1.m3u8
So, next set of URLs become
http://myserver/application1/subpath1/subsubFolder/360p/file1.m3u8
http://myserver/application1/subpath1/subsubFolder/480p/file1.m3u8
http://myserver/application1/subpath1/subsubFolder/720p/file1.m3u8
Now, individual reply to these further will be an index of chunks, like
0/file1.ts
1/file1.ts
2/file2.ts
3/file3.ts
Again, we have next set of URLs as
http://myserver/application1/subpath1/subsubFolder/360p/0/file1.ts
http://myserver/application1/subpath1/subsubFolder/360p/1/file1.ts
http://myserver/application1/subpath1/subsubFolder/360p/2/file1.ts
http://myserver/application1/subpath1/subsubFolder/360p/3/file1.ts
This is just the case of one set(360p). There will be 2 more sets like these(for 480p, 720p).
I hope the requirement statement is clear up to this point.
Now, the problem statement.
Using http://myserver/application1 as the static part, the regex (.+?)\.m3u8 is applied to the first reply, which gives subpath1/subsubFolder/360p/file1. This is then appended to the static part again to get http://myserver/application1/subpath1/subsubFolder/360p/file1 + .m3u8
The problem comes at the next stage. As you can see, with the parts extracted previously, all I'm getting is
http://myserver/application1/subpath1/subsubFolder/360p/file1/0/file1.ts
The problem is obvious: an extra file1, i.e. 360p/file1 in place of 360p/0.
Any suggestions, inputs or alternate approaches appreciated.
If I understood the problem correctly, all you need is the file name as the other URLs can be constructed with it. Rather than using http://myserver/application1 as static part of your regex, I would try to get the filename directly:
([^\/.]+)\.m3u8$
# match one or more characters that are not a forward slash or a period
# followed by a period
# followed by the file extension (m3u8)
# anchor the whole match to the end
Now consider your urls, e.g. http://myserver/application1/subpath1/subsubFolder/360p/file1.m3u8, the above regex will capture file1, see a working demo here. Now you can construct the other URLs, e.g. (pseudo code):
http://myserver/application1/subpath1/subsubFolder/360p/ + filename + .m3u8
http://myserver/application1/subpath1/subsubFolder/360p/ + filename + /0/ + filename + .ts
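As a quick stand-alone check of what the regex captures (plain Java, outside JMeter, using the sample URL from the question):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HlsFilenameExtract {
    // The regex from above: capture the file name (no slashes or dots)
    // immediately before ".m3u8" at the end of the URL.
    static String extract(String url) {
        Matcher m = Pattern.compile("([^/.]+)\\.m3u8$").matcher(url);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // prints: file1
        System.out.println(extract(
            "http://myserver/application1/subpath1/subsubFolder/360p/file1.m3u8"));
    }
}
```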
Is this what you were after?
Make sure you use:
(.*?) - as the Regular Expression (change the plus to an asterisk in your regex)
-1 - as the Match No.
$1$ - as the Template
See How to Load Test HTTP Live Media Streaming (HLS) with JMeter article for detailed instructions.
If you are ready to pay for a commercial plugin, there is an easy and much more realistic solution: a plugin for Apache JMeter provided by UbikLoadPack.
Besides doing this job for you, it will simulate the way a player would read the file. It will also scale much better than any custom script or player solution.
It supports VOD and Live which are quite difficult to script.
See:
http://www.ubik-ingenierie.com/blog/easy-and-realistic-load-testing-of-http-live-streaming-hls-with-apache-jmeter/
http://www.ubik-ingenierie.com/blog/ubikloadpack-http-live-streaming-plugin-jmeter-videostreaming-mpegdash/
Disclaimer: we are the providers of this solution.
I would like to diagnose an error. I believe I don't need to describe the whole scenario to get a good answer to my question. So, I would like to create some debug information on the workers and display it on the driver, ideally in real time.
I read somewhere that issuing a System.out.println("DEBUG: ...") on a worker would produce a line in the executor log, but currently I'm having trouble retrieving those logs. Aside from that it would be still useful if I could see some debug noise on the driver as the calculation runs.
(I also figured out a workaround, but I don't know if I should apply it or not. At the end of each worker task I could append elements to a sequence file and I could monitor that, or check it at the end.)
One way I could think of doing this is (ab)using a custom accumulator to send messages from the workers to the driver. This will get whatever String messages the workers produce to the driver, where you can print the contents to collect the info. It's not as real-time as wished for, since it depends on the program execution.
import org.apache.spark.AccumulatorParam

// Accumulator that concatenates String messages sent from the workers
object DebugInfoAccumulatorParam extends AccumulatorParam[String] {
  def zero(value: String): String = value
  def addInPlace(s1: String, s2: String): String = s1 + "\n" + s2
}

val debugInfo = sparkContext.accumulator("", "debug info")(DebugInfoAccumulatorParam)

rdd.map { element =>
  ...
  // this happens on each worker
  debugInfo += "something happened here"
}

// this happens on the driver
println(debugInfo)
Not sure why you cannot access the worker logs - that would be the most straightforward solution BTW.