I am trying to find which .jar this error came from so I can figure out the issue. This is running on a Hyperion server.
[2015-03-15T15:18:35.352+08:00] [Planning0] [WARNING] [] [oracle.EPMHSP.calcmgr_execution] [tid: 144] [userId: <anonymous>] [ecid: 00iRyJJB65hDOd5LzQL6iW000ly40016YL,0:1] [APP: PLANNING#11.1.2.0] [URI: /HyperionPlanning/faces/RunTimePromptTF/BgImage] [SRC_CLASS: com.hyperion.planning.adf.artifact.datacontrol.HspManageArtifactsDC] [SRC_METHOD: executeCalcScript] Error detected while attempting to run job Test_Rule [[
com.hyperion.planning.HspRuntimeException: Error detected while attempting to run job: Test_Rule.
at com.hyperion.planning.HspAsyncJobsManager.completeJobExceution(HspAsyncJobsManager.java:101)
at com.hyperion.planning.db.HspFMDBImpl$CalcMgrWrapper.runRule(HspFMDBImpl.java:10411)
at com.hyperion.planning.db.HspFMDBImpl.runHBRRule(HspFMDBImpl.java:2254)
at com.hyperion.planning.db.HspFMDBImpl.runCalcScript(HspFMDBImpl.java:2218)
at com.hyperion.planning.HyperionPlanningBean.runCalcScript(HyperionPlanningBean.java:4028)
at com.hyperion.planning.adf.artifact.datacontrol.HspManageArtifactsDC.executeCalcScript(HspManageArtifactsDC.java:3518)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at oracle.adf.model.binding.DCInvokeMethod.invokeMethod(DCInvokeMethod.java:677)
at oracle.adf.model.bean.DCBeanDataControl.invokeMethod(DCBeanDataControl.java:445)
If you are running a Linux/Unix flavor, I usually find jars via something like the following bash script:
for i in $( find LIB_FOLDERS -iname '*.jar' ); do
    ( zipinfo "$i" | grep -i PATTERN ) && echo "$i"
done
Where LIB_FOLDERS is the place where your jars are found, and PATTERN is a characteristic part of the name of the class you are looking for. This will print the names of all jar files whose contents match the pattern. Most IDEs let you search for a class in the classpath without all that command-line hassle, but I don't know whether you have everything loaded up in one project.
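For the class in the question's stack trace, a hypothetical run could look like this (/opt/Oracle/Middleware is an assumed install root; point it at your actual EPM home):

for i in $( find /opt/Oracle/Middleware -iname '*.jar' ); do
    ( zipinfo "$i" | grep -i HspAsyncJobsManager ) && echo "$i"
done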
Use JarScan. It's one of my favorite tools for finding a class buried in some jar in some directory. It works on any platform and is simple and easy to use: https://java.net/projects/jarscan/pages/Tutorial/text
On Linux systems I create ~/bin/findjar with the following, then chmod 700 it and add ~/bin to my PATH:
#!/bin/bash
# Usage: findjar <classname or string to search for> [path to search under]
#
class=$1
path=${2:-.}   # default to the current directory

echo "searching for $class in $path"
for f in $(find "$path" -name '*.jar'); do
    # List the jar's contents and keep the entries matching the class name.
    match=$(jar tf "$f" | grep "$class")
    if [[ -n "$match" ]]; then
        echo
        echo "$f"
        echo "$match"
    fi
done
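A hypothetical invocation against the question's class (the Middleware path is an assumption):

findjar HspAsyncJobsManager /opt/Oracle/Middleware
# prints each matching jar path, followed by the matching entries inside it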
Foreword
Hi, I am new to Stack Overflow. If anything is unclear, please point it out. Thank you!
Question
I just started to study hyperledger-fabric. As a Java programmer, I chose to use the fabric-java-sdk.
After I got the test case End2endIT.java to run, I wanted to change the chaincode. I found example_cc.go at fabric-sdk-java/src/test/fixture/sdkintegration/gocc/sample1/src/github.com/example_cc/example_cc.go . However, after I changed the chaincode, it didn't work. Even after I deleted this code, the test case could still run.
Therefore, I guess I found the wrong place. Can anyone tell me where to change the chaincode? Thx!
Additional
The code to load chaincode
if (isFooChain) {
    // On foo chain, install from a directory.
    // For the GO language, serving just a single user, chaincodeSource is most likely the user's GOPATH.
    installProposalRequest.setChaincodeSourceLocation(new File(TEST_FIXTURES_PATH + "/sdkintegration/gocc/sample1"));
    // [output]: src/test/fixture/sdkintegration/gocc/sample1
    System.out.println(TEST_FIXTURES_PATH + "/sdkintegration/gocc/sample1");
} else {
    // On bar chain, install from an input stream.
    installProposalRequest.setChaincodeInputStream(Util.generateTarGzInputStream(
            (Paths.get(TEST_FIXTURES_PATH, "/sdkintegration/gocc/sample1", "src", CHAIN_CODE_PATH).toFile()),
            Paths.get("src", CHAIN_CODE_PATH).toString()));
}
I solved this in the end when I noticed fabric.sh in the fabric-sdk-java project.
./fabric.sh up to force-recreate the docker containers
./fabric.sh clean to clean the peers
The reason I could run the invoke request without the chaincode is that I hadn't cleaned the peers' volumes.
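So the full redeploy cycle after editing example_cc.go is, roughly (assuming you run from the directory that holds fabric.sh and docker-compose.yaml):

./fabric.sh clean    # remove peer volumes and dev-peer containers/images
./fabric.sh up       # recreate a fresh fabric environment
# then rerun End2endIT so the modified chaincode gets installed again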
And the source code is as follows:
#!/usr/bin/env bash
#
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# simple batch script making it easier to cleanup and start a relatively fresh fabric env.
if [ ! -e "docker-compose.yaml" ]; then
    echo "docker-compose.yaml not found."
    exit 8
fi
ORG_HYPERLEDGER_FABRIC_SDKTEST_VERSION=${ORG_HYPERLEDGER_FABRIC_SDKTEST_VERSION:-}
function clean(){
    rm -rf /var/hyperledger/*
    if [ -e "/tmp/HFCSampletest.properties" ]; then
        rm -f "/tmp/HFCSampletest.properties"
    fi
    lines=`docker ps -a | grep 'dev-peer' | wc -l`
    if [ "$lines" -gt 0 ]; then
        docker ps -a | grep 'dev-peer' | awk '{print $1}' | xargs docker rm -f
    fi
    lines=`docker images | grep 'dev-peer' | wc -l`
    if [ "$lines" -gt 0 ]; then
        docker images | grep 'dev-peer' | awk '{print $1}' | xargs docker rmi -f
    fi
}
function up(){
    if [ "$ORG_HYPERLEDGER_FABRIC_SDKTEST_VERSION" == "1.0.0" ]; then
        docker-compose up --force-recreate ca0 ca1 peer1.org1.example.com peer1.org2.example.com ccenv
    else
        docker-compose up --force-recreate
    fi
}
function down(){
    docker-compose down;
}
function stop (){
    docker-compose stop;
}
function start (){
    docker-compose start;
}
for opt in "$@"
do
    case "$opt" in
        up)
            up
            ;;
        down)
            down
            ;;
        stop)
            stop
            ;;
        start)
            start
            ;;
        clean)
            clean
            ;;
        restart)
            down
            clean
            up
            ;;
        *)
            echo $"Usage: $0 {up|down|start|stop|clean|restart}"
            exit 1
            ;;
    esac
done
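For example, a single restart call after editing the chaincode runs down, clean, and up in sequence:

./fabric.sh restart   # same as running: ./fabric.sh down, then clean, then up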
I am trying to simulate Hadoop YARN SLS (Scheduling Load Simulator) with the sources given in Hadoop's GitHub; the SLS source files are located in [REF-1].
Here are the steps I have done:
Using VMware as the host.
Using Ubuntu 14.04.
Installed Hadoop v2.6.0 [REF-2].
User: hduser | group: hadoop
Installed any needed packages (e.g. maven).
Cloned Hadoop from GitHub [REF-1].
Syntax: git clone https://git.apache.org/hadoop.git
Result: hduser@ubuntu:~/hadoop$
I made the changes inside the directory hduser@ubuntu:~/hadoop/hadoop-tools$.
FYI: I used the code from MaxiNetSLS [REF-3] as the way I compile the source files. The SLS source files can be downloaded with: git clone https://github.com/wette/netSLS.git. By default, I can run this program with no error; the SLS simulator works perfectly.
From MaxiNetSLS's source files, I copied the files below into my work in hduser@ubuntu:~/hadoop/hadoop-tools$ :
netSLS/generator > hduser@ubuntu:~/hadoop/hadoop-tools$
netSLS/html > hduser@ubuntu:~/hadoop/hadoop-tools$
netSLS/sls.sh > hduser@ubuntu:~/hadoop/hadoop-tools$
netSLS/sls/hadoop/ > hduser@ubuntu:~/hadoop/hadoop-tools/hadoop-sls$
Then, I modified some files as follows.
netSLS/sls.sh
#!/usr/bin/env bash

function print_usage {
    echo -e "usage: sls.sh TraceFile"
    echo -e
    echo -e "Starts SLS with the given trace file."
}

if [[ -z $1 ]]; then
    print_usage
    exit 1
fi

TRACE_FILE=$(realpath $1)
if [[ ! -f ${TRACE_FILE} ]]; then
    echo "File not found: ${TRACE_FILE}"
    print_usage
    exit 1
fi

cd hadoop-sls

OUTPUT_DIRECTORY="/tmp/sls"
mkdir -p ${OUTPUT_DIRECTORY}

ARGS="-inputsls ${TRACE_FILE}"
ARGS+=" -output ${OUTPUT_DIRECTORY}"
ARGS+=" -printsimulation"

mvn exec:java -Dexec.args="${ARGS}"
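The flags in ARGS are SLS's own; assuming the exec plugin in pom.xml points at SLS's runner class (an assumption, check the pom.xml from [REF-4]), the last line is equivalent to something like:

mvn exec:java -Dexec.mainClass=org.apache.hadoop.yarn.sls.SLSRunner \
    -Dexec.args="-inputsls /path/to/trace.json -output /tmp/sls -printsimulation"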
hduser@ubuntu:~/hadoop/hadoop-tools/hadoop-sls/pom.xml$
[REF-4]
hduser@ubuntu:~/hadoop/hadoop-tools$ nano hadoop-sls/hadoop/etc/hadoop/sls-runner.xml
[REF-5]
Next, I tried to:
Compile the script using hduser@ubuntu:~/hadoop/hadoop-tools/hadoop-sls$ mvn compile
It compiled with no error (mvn_compile_perfect.jpg).
Run the program using hduser@ubuntu:~/hadoop/hadoop-tools$ ./sls.sh generator/small.json
I got an error here (error_json_compile.jpg). :(
So far I have gone through some information on similar problems [REF-6] and tried it, but I still get the same error. I suspect the problem is in the ~/hadoop/hadoop-tools/hadoop-sls/pom.xml I modified; I lack experience with the Linux environment. :(
References : http://1drv.ms/21zcJIH (txt file)
*Cannot post more than 2 links in my post. :(
I want to add a pre-commit hook for jshint in SVN. As I am new to SVN, I need some help. I have already done some work, which follows. Below is my local repo folder structure:
svn-repo
    branches
    hooks
        pre-commit.sh
    tags
    trunk
        scripts
            script.js
        index.html
        .jshintrc
Here is my pre-commit hook code
ROOT_DIR=$(git rev-parse --show-toplevel) # gets the path of this repo
CONF="--config=${ROOT_DIR}/build/config/jshint.json" # path of your jshint config
JSHINT=$(which jshint) # jshint path

if git rev-parse --verify HEAD >/dev/null 2>&1; then
    against=HEAD
else
    # Initial commit: diff against an empty tree object
    against=4b825dc642cb6eb9a060e54bf8d69288fbee4904
fi

for file in $(git diff-index --name-only ${against} -- | egrep \.js); do
    if $JSHINT $file ${CONF} 2>&1 | grep 'No errors found' ; then
        echo "jslint passed ${file}"
        exit 0
    else
        $JSHINT $file
        exit 1
    fi
done
I installed jshint globally; it is located at:
/usr/local/bin/jshint
Now when I commit incorrect JavaScript, it still gets committed and does not throw any errors, even though the JS contains errors according to .jshintrc.
How can I make this pre-commit work?
Unrelated
I can't understand why you used Git voodoo here, in and for an SVN transaction.
Related (partially)
If you want to monitor the hook from the client side and see any output, you must redirect it to stderr (which you didn't do); that is what the hook returns to the client.
Question
Does this script work as expected in standalone mode?
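For reference, a minimal sketch of what an SVN-native pre-commit hook could look like, using svnlook to inspect the incoming transaction instead of git. The jshint path and the awk/grep filtering are assumptions; treat it as a starting point, not a drop-in:

#!/bin/bash
# SVN invokes a pre-commit hook as: pre-commit REPOS TXN
REPOS="$1"
TXN="$2"
SVNLOOK=/usr/bin/svnlook
JSHINT=/usr/local/bin/jshint   # assumed install location from the question

# Lint every .js file added (A) or updated (U) in this transaction.
"$SVNLOOK" changed -t "$TXN" "$REPOS" | awk '/^[AU]/ {print $2}' | grep '\.js$' |
while read -r file; do
    TMP=$(mktemp)
    "$SVNLOOK" cat -t "$TXN" "$REPOS" "$file" > "$TMP"
    if ! OUT=$("$JSHINT" "$TMP"); then
        # Only stderr is relayed back to the committing client.
        echo "jshint failed for $file:" 1>&2
        echo "$OUT" 1>&2
        rm -f "$TMP"
        exit 1     # exits the while-loop subshell with status 1
    fi
    rm -f "$TMP"
done || exit 1     # propagate the subshell's failure to reject the commit
exit 0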
I am trying to enable JMX monitoring in Munin.
I have followed the guide at:
https://github.com/munin-monitoring/contrib/tree/master/plugins/java/jmx
It tells me:
1) Files from "plugin" folder must be copied to /usr/share/munin/plugins (or another - where your munin plugins located)
2) Make sure that jmx_ executable : chmod a+x /usr/share/munin/plugins/jmx_
3) Copy configuration files that you want to use, from "examples" folder, into /usr/share/munin/plugins folder
4) create links from the /etc/munin/plugins folder to the /usr/share/munin/plugins/jmx_
The name of the link must follow wildcard pattern:
jmx_<configname>,
where configname is the name of the configuration (config filename without extension), for example:
ln -s /usr/share/munin/plugins/jmx_ /etc/munin/plugins/jmx_process_memory
I have done exactly this, but when I run ./jmx_process_memory, I just get:
Error: Could not find or load main class org.munin.plugin.jmx.memory
The actual config file is called java_process_memory.conf, so I have also tried naming the symlink jmx_java_process_memory, but I get the same error.
I have had success by naming the symlink jmx_Threads as described here:
http://blog.johannes-beck.name/?p=160
I can see that org.munin.plugin.jmx.Threads is the name of a class within munin-jmx-plugins.jar, and the other classes seem to work too. But this is not what the Munin guide tells me to do, so is the documentation wrong? What is the purpose of the config files? They must be there for a reason. There are example config files for Tomcat, which is where my real interest lies, so I need to understand this. Without being able to get it working as per the guide, though, I'm a bit stuck!
Can anyone put me right on this?
Cheers
NFV
I was stuck with much the same issue.
Here is what I did to get something working a little better, though still not perfect.
I'm on RHEL:
[root@bus|in plugins]# cat /etc/munin/plugin-conf.d/munin-node
[diskstats]
user munin
[iostat_ios]
user munin
[jmx_*]
env.ip 192.168.1.101
env.port 5054 <- being the port configured for your jmx
Then:
[root@bus|in plugins]# ls -l /etc/munin/plugins/jmx_MultigraphAll
lrwxrwxrwx 1 root root 29 14 mars 15:36 /etc/munin/plugins/jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
and I modified /usr/share/munin/plugins/jmx_ as follows:
#!/bin/sh
# -*- sh -*-
: << =cut
=head1 NAME
jmx_ - Wildcard plugin to monitor Java application servers via JMX
=head1 APPLICABLE SYSTEMS
Tested with Tomcat 4.1/5.0/5.5/6.0 on Sun JVM 5/6 and OpenJDK.
Any JVM that supports JMX should in theory do.
Needs nc in path for autoconf.
=head1 CONFIGURATION
[jmx_*]
env.ip 127.0.0.1
env.port 5400
env.category jvm
env.username monitorRole
env.password SomethingSecret
env.JRE_HOME /usr/lib/jvm/java-6-sun/jre
env.JAVA_OPTS -Xmx128m
Needed configuration on the Tomcat side: add
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=5400 \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false
to CATALINA_OPTS in your startup scripts.
Replace authenticate=false with
-Dcom.sun.management.jmxremote.password.file=/etc/tomcat/jmxremote.password \
-Dcom.sun.management.jmxremote.access.file=/etc/tomcat/jmxremote.access
...if you want authentication.
jmxremote.password:
monitorRole SomethingSecret
jmxremote.access:
monitorRole readonly
You may need higher access levels for some counters, notably ThreadsDeadlocked.
=head1 BUGS
No encryption supported in the JMX connection.
The plugins available reflect the most interesting aspects of a
JVM runtime. This should be extended to cover things specific to
Tomcat, JBoss, Glassfish and so on. Patches welcome.
=head1 AUTHORS
=encoding UTF-8
Mo Amini, Diyar Amin and Younes Hajji, Høgskolen i Oslo/Oslo
University College.
Shell script wrapper and integration by Erik Inge Bolsø, Redpill
Linpro AS.
Previous work on JMX plugin by Aleksey Studnev. Support for
authentication added by Ingvar Hagelund, Redpill Linpro AS.
=head1 LICENSE
GPLv2
=head1 MAGIC MARKERS
#%# family=auto
#%# capabilities=autoconf suggest
=cut
MUNIN_JAR="/usr/share/java/munin-jmx-plugins.jar"

if [ "x$JRE_HOME" != "x" ] ; then
    JRE=$JRE_HOME/bin/java
    export JRE_HOME=$JRE_HOME
fi
JAVA_BIN=${JRE:-/opt/jdk/jre/bin/java}

ip=${ip:-192.168.1.101}
port=${port:-5054}

if [ "x$1" = "xsuggest" ] ; then
    echo MultigraphAll
    exit 0
fi

if [ "x$1" = "xautoconf" ] ; then
    NC=`which nc 2>/dev/null`
    if [ "x$NC" = "x" ] ; then
        echo "no (nc not found)"
        exit 0
    fi
    $NC -n -z $ip $port >/dev/null 2>&1
    CONNECT=$?
    $JAVA_BIN -? >/dev/null 2>&1
    JAVA=$?
    if [ $JAVA -ne 0 ] ; then
        echo "no (java runtime not found at $JAVA_BIN)"
        exit 0
    fi
    if [ ! -e $MUNIN_JAR ] ; then
        echo "no (munin jmx classes not found at $MUNIN_JAR)"
        exit 0
    fi
    if [ $CONNECT -eq 0 ] ; then
        echo "yes"
        exit 0
    else
        echo "no (connection to $ip:$port failed)"
        exit 0
    fi
fi

if [ "x$1" = "xconfig" ] ; then
    param=config
else
    param=Tomcat
fi

# The symlink name determines what runs: everything after the last '_'
# is the Java class, the part before it is the config prefix.
scriptname=${0##*/}
jmxfunc=${scriptname##*_}
prefix=${scriptname%_*}

if [ "x$jmxfunc" = "x" ] ; then
    echo "error, plugin must be symlinked in order to run"
    exit 1
fi

ip=$ip port=$port $JAVA_BIN -cp $MUNIN_JAR $JAVA_OPTS org.munin.plugin.jmx.$jmxfunc $param $prefix
And you have to add the right permissions and owner:group to whatever you define as the JRE, for example:
[root@bus|in plugins]# ls -ld /opt/jdk
drwxrwxr-x 8 nobody nobody 4096 8 oct. 15:03 /opt/jdk
Now I can run the following (I can see it's using nobody:nobody as user:group; maybe something to play with in the conf):
[root@bus|in plugins]# munin-run jmx_MultigraphAll -d
# Processing plugin configuration from /etc/munin/plugin-conf.d/df
# Processing plugin configuration from /etc/munin/plugin-conf.d/fw_
# Processing plugin configuration from /etc/munin/plugin-conf.d/hddtemp_smartctl
# Processing plugin configuration from /etc/munin/plugin-conf.d/munin-node
# Processing plugin configuration from /etc/munin/plugin-conf.d/postfix
# Processing plugin configuration from /etc/munin/plugin-conf.d/sendmail
# Setting /rgid/ruid/ to /99/99/
# Setting /egid/euid/ to /99 99/99/
# Setting up environment
# Environment ip = 192.168.1.101
# Environment port = 5054
# About to run '/etc/munin/plugins/jmx_MultigraphAll'
multigraph jmx_memory
Max.value 2162032640
Committed.value 1584332800
Init.value 1613168640
Used.value 473134248
multigraph jmx_MemoryAllocatedHeap
Max.value 1037959168
Committed.value 1037959168
Init.value 1073741824
Used.value 275414584
multigraph jmx_MemoryAllocatedNonHeap
Max.value 1124073472
Committed.value 546373632
Init.value 539426816
Used.value 197986088
[...]
multigraph jmx_ProcessorsAvailable
ProcessorsAvailable.value 1
Now I'm trying to get it to work for several JVMs on the same host, because this setup covers only one.
I hope that helps.
Edit:
I have since modified this to work with several Java processes, each with its own JMX port.
What you have to add is this:
[root@bus|in plugins]# cat /etc/munin/plugin-conf.d/munin-node
[diskstats]
user munin
[iostat_ios]
user munin
[admin_jmx_*]
env.ip 192.168.1.101
env.port 5054
[managed_jmx_*]
env.ip 192.168.1.101
env.port 5055
[jboss_jmx_*]
env.ip 192.168.1.101
env.port 1616
and then create the links:
[root@bus|in plugins]# ls -l /etc/munin/plugins/*_jmx_*
lrwxrwxrwx 1 root root 29 14 mars 15:36 /etc/munin/plugins/admin_jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
lrwxrwxrwx 1 root root 29 14 mars 16:51 /etc/munin/plugins/jboss_jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
lrwxrwxrwx 1 root root 29 14 mars 16:03 /etc/munin/plugins/managed_jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
and I commented out the ip and port in the /usr/share/munin/plugins/jmx_ file, but I'm not sure that plays a role.
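This naming scheme is also why the jmx_process_memory link in the question fails: the part after the last underscore must be a class inside munin-jmx-plugins.jar (MultigraphAll, Threads, ...), not a config-file name. The plugin's own parameter expansions show the split:

scriptname="admin_jmx_MultigraphAll"   # what ${0##*/} yields when run via the symlink
jmxfunc=${scriptname##*_}              # MultigraphAll -> class org.munin.plugin.jmx.MultigraphAll
prefix=${scriptname%_*}                # admin_jmx -> matched by the [admin_jmx_*] config section
echo "$jmxfunc $prefix"                # prints: MultigraphAll admin_jmx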
Can someone help me check my bash script? I'm trying to feed a directory of .txt files to the Stanford parser (http://nlp.stanford.edu/software/pos-tagger-faq.shtml), but I can't get it to work. I'm working on Ubuntu 10.10.
The loop works and reads the right files with:
#!/bin/bash -x
cd $HOME/path/to
for file in 'dir -d *'
do
# $HOME/chinesesegmenter-2006-05-11/segment.sh ctb $file UTF-8
echo $file
done
but with
#!/bin/bash -x
cd $HOME/yoursing/sentseg_zh
for file in 'dir -d *'
do
# echo $file
$HOME/chinesesegmenter-2006-05-11/segment.sh ctb $file UTF-8
done
I'm getting this error:
alvas@ikoma:~/chinesesegmenter-2006-05-11$ bash segchi.sh
Standard: CTB
File: dir
Encoding: -d
-------------------------------
Exception in thread "main" java.lang.NoClassDefFoundError: edu/stanford/nlp/ie/crf/CRFClassifier
Caused by: java.lang.ClassNotFoundException: edu.stanford.nlp.ie.crf.CRFClassifier
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
Could not find the main class: edu.stanford.nlp.ie.crf.CRFClassifier. Program will exit.
The following command works:
~/chinesesegmenter-2006-05-11/segment.sh ctb ~/path/to/input.txt UTF-8
and outputs this:
alvas@ikoma:~/chinesesegmenter-2006-05-11$ ./segment.sh ctb ~/path/to/input.txt UTF-8
Standard: CTB
File: /home/alvas/path/to/input.txt
Encoding: UTF-8
-------------------------------
Loading classifier from data/ctb.gz...done [1.5 sec].
Using ChineseSegmenterFeatureFactory
Reading data using CTBSegDocumentReader
Sequence tagging 7 documents
如果 您 在 新加坡 只 能 前往 一 间 俱乐部 , 祖卡 酒吧 必然 是 您 的 不二 选择 。
作为 或许 是 新加坡 唯一 一 家 国际 知名 的 夜店 , 祖卡 既 是 一 个 公共 机构 , 也 是 狮城 年轻人 选择 进行 成人 礼等 庆祝 的 不二场所 。
As well as the : (colon), which should be a ; or a newline, the 'dir -d *' doesn't do what you think it does: the loop will just have one iteration, where file is a long string beginning with dir -d and with all your files afterwards. Also, you initially change to a path based on $file but then reuse the variable file in your loop, which is suspect. I'm having to guess somewhat about your intent, but it can be much simpler, e.g.:
#!/bin/bash
cd ~/path/to/whereever
for file in *
do
~/chinesesegmenter-2006-05-11/segment.sh ctb "$file" UTF-8
done
Even if you used the (more correct) version with backticks:
for file in `dir -d *`
... it would still qualify for a Useless Use of ls * Award ;)
Update: originally I forgot to quote $file, as pointed out in another answer
You could try:
for file in *
do
$HOME/segment.sh ctb "$file" UTF-8
done
So there were a couple of things to correct:
Don't use : after the for statement; use ; or a newline.
Put quotation marks around "$file" to allow whitespace in file names.
If you want to run a command where you put 'dir -d *', use $(dir -d *) or backquotes `` instead (see the sketch below).
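A quick illustration of the difference between the three forms:

for file in 'dir -d *'; do echo "$file"; done   # one iteration: the literal string dir -d *
for file in $(dir -d *); do echo "$file"; done  # command substitution; output gets word-split
for file in *; do echo "$file"; done            # plain glob: one iteration per file, whitespace-safe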
for file in 'dir -d *': do
You've put a colon instead of a semicolon.
If you want easy debugging, you can add -x as an option to your shebang:
#!/bin/bash -x
The errors will be easier to spot.
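You can also limit tracing to the suspect region with set -x / set +x; a minimal sketch:

#!/bin/bash
set -x                           # from here on, bash prints each command (after expansion) prefixed with '+'
cd "$HOME/yoursing/sentseg_zh"
for file in *; do
    echo "$file"
done
set +x                           # tracing off again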