Unable to Download Code from AppEngine - Java

Below is the command I am trying to execute to download the source code from AppEngine:
/Users/sridhar/Desktop/backupdata/appengine-java-sdk-1.7.4/bin/appcfg.sh download_app -A maharasims2 -V 23 download_app .
I am getting a "Bad argument" error:
Bad argument: Expected download directory as an argument after download_app.
AppCfg [options] -A app_id [ -V version ] download_app <out-dir>
Download a previously-uploaded app to the specified directory. The app
ID is specified by the "-A" option. The optional version is specified
by the "-V" option.
Can anyone help me out?
I tried the following and it worked.
Step 1: appengine-java-sdk/bin/appcfg.sh -A <app_id> -V <version> download_app <directory>
Example:
/Users/Desktop/appengine-java-sdk-1.7.4/bin/appcfg.sh -A testapp -V 23 download_app ~/Desktop/backupdata/downloads/
Step 2: http://architecturalatrocities.com/post/19073788679/fixing-the-trustanchors-problem-when-running-openjdk-7
Note:
Use step 2 if you encounter the following error:
"java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty"

The options should go before the command:
appcfg.sh -A maharasims2 -V 23 download_app .
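With the path from the question, the corrected invocation is the same command with the first, stray download_app removed and the options placed in front:
/Users/sridhar/Desktop/backupdata/appengine-java-sdk-1.7.4/bin/appcfg.sh -A maharasims2 -V 23 download_app .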

Error building jpy `gcc: error: : No such file or directory`

I am trying to build jpy in order to use the SNAP API of the European Space Agency on my Ubuntu 16.04 machine with Anaconda. After setting all my Java, JDK, and JVM paths correctly, I executed
python setup.py build
and got the following error:
src/main/c/jni/org_jpy_PyLib.c:254:26: warning: unused variable ‘state’ [-Wunused-variable]
     PyGILState_STATE state = PyGILState_Ensure();
                      ^~~~~
gcc -pthread -shared -B /home/delgado/local/anaconda3/compiler_compat -L/home/delgado/local/anaconda3/lib -Wl,-rpath=/home/delgado/local/anaconda3/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/src/main/c/jpy_module.o build/temp.linux-x86_64-3.6/src/main/c/jpy_diag.o build/temp.linux-x86_64-3.6/src/main/c/jpy_conv.o build/temp.linux-x86_64-3.6/src/main/c/jpy_compat.o build/temp.linux-x86_64-3.6/src/main/c/jpy_jtype.o build/temp.linux-x86_64-3.6/src/main/c/jpy_jarray.o build/temp.linux-x86_64-3.6/src/main/c/jpy_jobj.o build/temp.linux-x86_64-3.6/src/main/c/jpy_jmethod.o build/temp.linux-x86_64-3.6/src/main/c/jpy_jfield.o build/temp.linux-x86_64-3.6/src/main/c/jni/org_jpy_PyLib.o -L -L/home/delgado/local/anaconda3/lib -ljvm -ldl -lpython3.6m -o build/lib.linux-x86_64-3.6/jpy.cpython-36m-x86_64-linux-gnu.so -Xlinker -rpath
gcc: error: : No such file or directory
error: command 'gcc' failed with exit status 1
I do not know precisely which file is missing and why it is missing.
Using a pre-built version of jpy from conda solved the issue. I suggest, e.g.: conda install -c terradue jpy
Hope this still helps somebody.
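A quick way to confirm the conda package is picked up afterwards (a minimal check; nothing here is jpy-specific beyond the import):
python -c "import jpy; print(jpy.__file__)"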

PIVOTAL GPDB- External table gphdfs protocol command ended with error. sh: java: command not found

We have a small Greenplum database cluster.
When trying to read an external table in it, we get this error:
proddb=# select count(*) from ext_table;
ERROR: external table gphdfs protocol command ended with error. sh: java: command not found (seg0 slice1 sdw:40000 pid=8675)
DETAIL:
Command: 'gphdfs://path/to/hdfs External table revenuereport_stg0, file gphdfs://Path/to/hdfs
What we tried:
Checked the Java environment on the Greenplum master host.
Also checked setting the parameters for GPDB:
[gpadmin@admin ~]$ gpconfig -c gp_hadoop_home -v "'/usr/lib/gphd'"
[gpadmin@admin ~]$ gpconfig -c gp_hadoop_target_version -v "'gphd-2.0'"
But it is failing with this error
[gpadmin@mdw ~]$ gpconfig -c gp_hadoop_home -v "'/usr/lib/gphd'"
20170123:02:02:04:017762 gpconfig:mdw:gpadmin-[ERROR]:-failed updating the postgresql.conf files on host: sdw
20170123:02:02:04:017762 gpconfig:mdw:gpadmin-[ERROR]:-failed updating the postgresql.conf files on host: mdw
20170123:02:02:09:017762 gpconfig:mdw:gpadmin-[ERROR]:-finished with errors
Therefore, the test for HDFS access from the Greenplum host is not working.
We checked whether HDFS is accessible from any of the segment servers:
[gpadmin@sdw1 ~]$ hdfs dfs -ls hdfs://hdm2:8020/
Any help on this would be much appreciated!
It looks like a path issue to me. Please set the right JAVA_HOME in the hadoop-env.sh file.
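For example (a minimal sketch; the hadoop-env.sh location and the JDK path are assumptions, adjust them to your hosts):
# in hadoop-env.sh on the Greenplum master and every segment host
export JAVA_HOME=/usr/java/default
export PATH=$JAVA_HOME/bin:$PATH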
Also, please have a look at the following articles for a better understanding of configuring gphdfs with GPDB:
https://discuss.pivotal.io/hc/en-us/articles/202635496-How-to-access-HDFS-data-via-GPDB-external-table-with-gphdfs-protocol
https://discuss.pivotal.io/hc/en-us/articles/203083906-Understanding-GPHDFS-Configurations
https://discuss.pivotal.io/hc/en-us/articles/221492507-One-time-HDFS-Protocol-Installation-for-GPHDFS-access-to-HDP-2-x-cluster
Thanks
Pratheesh Nair
export JAVA_HOME=/usr/local/jdk18
export HADOOP_HOME=/opt/apps/hadoop
export GP_JAVA_OPT='-Xmx1000m -XX:+DisplayVMOutputToStderr'
export PATH=$JAVA_HOME/bin:$PATH
export KRB5CCNAME=$GP_SEG_DATADIR/gpdb-gphdfs.krb5cc
JAVA=$JAVA_HOME/bin/java
JAVA_HOME and HADOOP_HOME must be given concrete values and placed at the very top; if you write JAVA_HOME=$JAVA_HOME to take the value from the environment, GP will get an empty value when it processes it.
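In other words, a sketch of the difference, using the value from the exports above:
# works: a concrete value, placed at the top of the script
export JAVA_HOME=/usr/local/jdk18
# fails: when GP processes the script, $JAVA_HOME is empty, so this sets nothing
export JAVA_HOME=$JAVA_HOME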

JCEF ICU Check Failed

I seem to keep coming up against a wall in getting Chromium running with JCEF in Eclipse. I was able to get to the point where the native functions are discovered, but I am still unable to complete initialization. I set the LD_PRELOAD variable. I am running both the MainFrame.java class and custom Scala code, and I run into the same problem in each. Is there a way to resolve this?
System:
OS: Ubuntu 16.04
JCEF version 3
CEF version 3
Java JDK 8
Structure and Configuration:
Everything is under the binary distribution structure. I imported the JARs as a library, added the native library path to the jcef JAR, and imported it into my project.
I set up the run configuration with the environment variables:
DISPLAY = :0.0
LD_PRELOAD = /path/to/libcef.so
All of my libraries and *.pak files are in the same directory as libcef.so and in a subdirectory of it (the binary distribution layout), as are the Chrome sandbox and helpers.
Code and Error
The code fails after the following:
println("Generating Handlers")
CefApp.addAppHandler(Handlers.getHandlerAdapter)
private var settings = new CefSettings
settings.windowless_rendering_enabled = useOSR
println("Starting App")
private final val cefApp : CefApp = if(commandLineArgs != null && commandLineArgs.size > 0) CefApp.getInstance(ChromeCommandLineParser.parse(commandLineArgs)) else CefApp.getInstance(settings)
println("Creating Client")
private final val client : CefClient = cefApp.createClient()
The following output results:
Starting
Generating Handlers
Starting App
Creating Client
initialize on Thread[AWT-EventQueue-0,6,main] with library path /home/XXXXX/jcef/src/binary_distrib/linux64/bin/lib/linux64
[0413/135633:ERROR:icu_util.cc(157)] Invalid file descriptor to ICU data received.
[0413/135633:FATAL:content_main_runner.cc(700)] Check failed: base::i18n::InitializeICU().
#0 0x7ff8fa94a62e base::debug::StackTrace::StackTrace()
#1 0x7ff8fa95f88b logging::LogMessage::~LogMessage()
#2 0x7ff8fd7588d4 content::ContentMainRunnerImpl::Initialize()
#3 0x7ff8fa857962 CefContext::Initialize()
#4 0x7ff8fa85775b CefInitialize()
#5 0x7ff8fa80a9b8 cef_initialize
#6 0x7ff8d6946914 CefInitialize()
#7 0x7ff8d690200f Java_org_cef_CefApp_N_1Initialize
#8 0x7ff8de207994 <unknown>
All help is appreciated. Thanks
I had a lot of problems with this too, until I created the symlinks to "icudtl.dat", "natives_blob.bin" and "snapshot_blob.bin" under the $jdk/bin directory, instead of $jdk/jre/bin.
Now I don't get this error any more.
Using the example in https://bitbucket.org/chromiumembedded/java-cef/wiki/BranchesAndBuilding
I changed this...
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Resources/icudtl.dat /usr/lib/jvm/java-8-oracle/jre/bin/icudtl.dat
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Debug/natives_blob.bin /usr/lib/jvm/java-8-oracle/jre/bin/natives_blob.bin
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Debug/snapshot_blob.bin /usr/lib/jvm/java-8-oracle/jre/bin/snapshot_blob.bin
To this...
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Resources/icudtl.dat /usr/lib/jvm/java-8-oracle/bin/icudtl.dat
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Debug/natives_blob.bin /usr/lib/jvm/java-8-oracle/bin/natives_blob.bin
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Debug/snapshot_blob.bin /usr/lib/jvm/java-8-oracle/bin/snapshot_blob.bin
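If you rebuild or switch JDKs often, a tiny wrapper can recreate those links before launching. A sketch under the same assumptions as the commands above (the java-cef checkout path and the Oracle JDK path are placeholders):
#!/usr/bin/env bash
CEF=/path/to/java-cef/src/third_party/cef/linux64
JDK_BIN=/usr/lib/jvm/java-8-oracle/bin
for f in Resources/icudtl.dat Debug/natives_blob.bin Debug/snapshot_blob.bin; do
    name=$(basename "$f")
    # create the link only if it does not already exist
    [ -e "$JDK_BIN/$name" ] || sudo ln -s "$CEF/$f" "$JDK_BIN/$name"
done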
The solution that @dvlcube gave works, but it's not convenient. You can add some extra logic to detect the user's environment, and if it's Linux you can copy the required files - example:
GitHub - PandomiumLoadWorker [:53]
Instead of copying you can also create symlinks:
Java SE Tutorial: Links
If you don't want to set the Linux-specific environment variables before launching, you can also inject those variables (like LD_LIBRARY_PATH and LD_PRELOAD) at runtime:
GitHub - LinuxEnv

class: Configuration not found when simulating Hadoop YARN SLS

I am trying to simulate Hadoop YARN SLS (Scheduler Load Simulator) with the sources given in Hadoop's GitHub repository; the SLS source files are located in [REF-1].
Here are the steps I have done:
Using VMware as the host.
Using Ubuntu 14.04
Installing Hadoop v2.6.0 [REF-2]
User: hduser | group: hadoop
Installing any needed packages (e.g. Maven)
Cloning Hadoop's GitHub repository [REF-1]
Syntax: git clone https://git.apache.org/hadoop.git
Result: hduser@ubuntu:~/hadoop$
I made the changes inside the directory hduser@ubuntu:~/hadoop/hadoop-tools$
FYI: I used the code from MaxiNetSLS [REF-3] as the way I compile the source files. The SLS source files can be downloaded by using this syntax on Linux: git clone https://github.com/wette/netSLS.git. By default, I can run this program with no error; the SLS simulator works perfectly.
From MaxiNetSLS's source files, I copied the files below into my work in hduser@ubuntu:~/hadoop/hadoop-tools$ :
netSLS/generator > hduser@ubuntu:~/hadoop/hadoop-tools$
netSLS/html > hduser@ubuntu:~/hadoop/hadoop-tools$
netSLS/sls.sh > hduser@ubuntu:~/hadoop/hadoop-tools$
netSLS/sls/hadoop/ > hduser@ubuntu:~/hadoop/hadoop-tools/hadoop-sls$
Then, I modified some files as follows.
netSLS/sls.sh
#!/usr/bin/env bash
function print_usage {
    echo -e "usage: sls.sh TraceFile"
    echo -e
    echo -e "Starts SLS with the given trace file."
}
if [[ -z $1 ]]; then
    print_usage
    exit 1
fi
TRACE_FILE=$(realpath $1)
if [[ ! -f ${TRACE_FILE} ]]; then
    echo "File not found: ${TRACE_FILE}"
    print_usage
    exit 1
fi
cd hadoop-sls
OUTPUT_DIRECTORY="/tmp/sls"
mkdir -p ${OUTPUT_DIRECTORY}
ARGS="-inputsls ${TRACE_FILE}"
ARGS+=" -output ${OUTPUT_DIRECTORY}"
ARGS+=" -printsimulation"
mvn exec:java -Dexec.args="${ARGS}"
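A note on the last line of that script: mvn exec:java only finds a main class if one is configured for the exec-maven-plugin in the pom.xml; otherwise you can pass it explicitly (a sketch; org.apache.hadoop.yarn.sls.SLSRunner is the SLS entry point in the Hadoop source tree):
mvn exec:java -Dexec.mainClass=org.apache.hadoop.yarn.sls.SLSRunner -Dexec.args="${ARGS}"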
hduser@ubuntu:~/hadoop/hadoop-tools/hadoop-sls/pom.xml$
[REF-4]
hduser@ubuntu:~/hadoop/hadoop-tools$ nano hadoop-sls/hadoop/etc/hadoop/sls-runner.xml
[REF-5]
Next, I tried to:
Compile the script using hduser@ubuntu:~/hadoop/hadoop-tools/hadoop-sls$ mvn compile
It compiled with no error (mvn_compile_perfect.jpg).
Run the program using hduser@ubuntu:~/hadoop/hadoop-tools$ ./sls.sh generator/small.json
I got the error here (error_json_compile.jpg). :(
So far, I have gone through some information related to similar problems [REF-6] and tried it, but I still get the same problem. I suspect the problem is in the ~/hadoop/hadoop-tools/hadoop-sls/pom.xml that I may have modified incorrectly. I lack knowledge of the Linux environment. :(
References: http://1drv.ms/21zcJIH (txt file)
*Cannot post more than 2 links in my post. :(

Munin jmx configuration

I am trying to enable JMX monitoring on Munin
I have followed the guide at:
https://github.com/munin-monitoring/contrib/tree/master/plugins/java/jmx
It tells me:
1) Files from the "plugin" folder must be copied to /usr/share/munin/plugins (or wherever your munin plugins are located)
2) Make sure that jmx_ is executable: chmod a+x /usr/share/munin/plugins/jmx_
3) Copy configuration files that you want to use, from "examples" folder, into /usr/share/munin/plugins folder
4) create links from the /etc/munin/plugins folder to the /usr/share/munin/plugins/jmx_
The name of the link must follow wildcard pattern:
jmx_<configname>,
where configname is the name of the configuration (config filename without extension), for example:
ln -s /usr/share/munin/plugins/jmx_ /etc/munin/plugins/jmx_process_memory
I have done exactly this, but when I run ./jmx_process_memory, I just get:
Error: Could not find or load main class org.munin.plugin.jmx.memory
The actual config file is called java_process_memory.conf, so I have also tried naming the symlink jmx_java_process_memory, but I get the same error.
I have had success by naming the symlink jmx_Threads as described here:
http://blog.johannes-beck.name/?p=160
I can see that org.munin.plugin.jmx.Threads is the name of a class within munin-jmx-plugins.jar, and the other classes seem to work also. But this is not what the Munin guide tells me to do, so is the documentation wrong? What is the purpose of the config files? They must be there for a reason. There are example config files for Tomcat, which is where my real interest lies, so I need to understand this. Without being able to get it working as per the guide, though, I'm a bit stuck!
Can anyone put me right on this?
Cheers
NFV
I was stuck with much the same issue.
Here is what I did to get something working a little better, though still not perfectly.
I'm on RHEL:
[root@bus|in plugins]# cat /etc/munin/plugin-conf.d/munin-node
[diskstats]
user munin
[iostat_ios]
user munin
[jmx_*]
env.ip 192.168.1.101
env.port 5054 <- the port configured for your JMX
then
[root@bus|in plugins]# ls -l /etc/munin/plugins/jmx_MultigraphAll
lrwxrwxrwx 1 root root 29 14 mars 15:36 /etc/munin/plugins/jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
and I modified /usr/share/munin/plugins/jmx_ as follows:
#!/bin/sh
# -*- sh -*-
: << =cut
=head1 NAME
jmx_ - Wildcard plugin to monitor Java application servers via JMX
=head1 APPLICABLE SYSTEMS
Tested with Tomcat 4.1/5.0/5.5/6.0 on Sun JVM 5/6 and OpenJDK.
Any JVM that supports JMX should in theory do.
Needs nc in path for autoconf.
=head1 CONFIGURATION
[jmx_*]
env.ip 127.0.0.1
env.port 5400
env.category jvm
env.username monitorRole
env.password SomethingSecret
env.JRE_HOME /usr/lib/jvm/java-6-sun/jre
env.JAVA_OPTS -Xmx128m
Needed configuration on the Tomcat side: add
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=5400 \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false
to CATALINA_OPTS in your startup scripts.
Replace authenticate=false with
-Dcom.sun.management.jmxremote.password.file=/etc/tomcat/jmxremote.password \
-Dcom.sun.management.jmxremote.access.file=/etc/tomcat/jmxremote.access
...if you want authentication.
jmxremote.password:
monitorRole SomethingSecret
jmxremote.access:
monitorRole readonly
You may need higher access levels for some counters, notably ThreadsDeadlocked.
=head1 BUGS
No encryption supported in the JMX connection.
The plugins available reflect the most interesting aspects of a
JVM runtime. This should be extended to cover things specific to
Tomcat, JBoss, Glassfish and so on. Patches welcome.
=head1 AUTHORS
=encoding UTF-8
Mo Amini, Diyar Amin and Younes Hajji, Høgskolen i Oslo/Oslo
University College.
Shell script wrapper and integration by Erik Inge Bolsø, Redpill
Linpro AS.
Previous work on JMX plugin by Aleksey Studnev. Support for
authentication added by Ingvar Hagelund, Redpill Linpro AS.
=head1 LICENSE
GPLv2
=head1 MAGIC MARKERS
#%# family=auto
#%# capabilities=autoconf suggest
=cut
MUNIN_JAR="/usr/share/java/munin-jmx-plugins.jar"
if [ "x$JRE_HOME" != "x" ] ; then
JRE=$JRE_HOME/bin/java
export JRE_HOME=$JRE_HOME
fi
JAVA_BIN=${JRE:-/opt/jdk/jre/bin/java}
ip=${ip:-192.168.1.101}
port=${port:-5054}
if [ "x$1" = "xsuggest" ] ; then
echo MultigraphAll
exit 0
fi
if [ "x$1" = "xautoconf" ] ; then
NC=`which nc 2>/dev/null`
if [ "x$NC" = "x" ] ; then
echo "no (nc not found)"
exit 0
fi
$NC -n -z $ip $port >/dev/null 2>&1
CONNECT=$?
$JAVA_BIN -? >/dev/null 2>&1
JAVA=$?
if [ $JAVA -ne 0 ] ; then
echo "no (java runtime not found at $JAVA_BIN)"
exit 0
fi
if [ ! -e $MUNIN_JAR ] ; then
echo "no (munin jmx classes not found at $MUNIN_JAR)"
exit 0
fi
if [ $CONNECT -eq 0 ] ; then
echo "yes"
exit 0
else
echo "no (connection to $ip:$port failed)"
exit 0
fi
fi
if [ "x$1" = "xconfig" ] ; then
param=config
else
param=Tomcat
fi
scriptname=${0##*/}
jmxfunc=${scriptname##*_}
prefix=${scriptname%_*}
if [ "x$jmxfunc" = "x" ] ; then
echo "error, plugin must be symlinked in order to run"
exit 1
fi
ip=$ip port=$port $JAVA_BIN -cp $MUNIN_JAR $JAVA_OPTS org.munin.plugin.jmx.$jmxfunc $param $prefix
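Incidentally, those last lines also explain the error in the original question: the class name is whatever follows the final underscore of the link name. A sketch of the parsing, using names from this page:
# scriptname=admin_jmx_MultigraphAll -> jmxfunc=MultigraphAll, prefix=admin_jmx
#   runs: org.munin.plugin.jmx.MultigraphAll
# scriptname=jmx_process_memory -> jmxfunc=memory, prefix=jmx_process
#   tries: org.munin.plugin.jmx.memory, which does not exist in munin-jmx-plugins.jar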
And you have to set the right permissions and owner:group on what you define as the JRE, for example:
[root@bus|in plugins]# ls -ld /opt/jdk
drwxrwxr-x 8 nobody nobody 4096 8 oct. 15:03 /opt/jdk
Now I can run the following (and I can see it's using nobody:nobody as user:group, maybe something to play with in the conf):
[root@bus|in plugins]# munin-run jmx_MultigraphAll -d
# Processing plugin configuration from /etc/munin/plugin-conf.d/df
# Processing plugin configuration from /etc/munin/plugin-conf.d/fw_
# Processing plugin configuration from /etc/munin/plugin-conf.d/hddtemp_smartctl
# Processing plugin configuration from /etc/munin/plugin-conf.d/munin-node
# Processing plugin configuration from /etc/munin/plugin-conf.d/postfix
# Processing plugin configuration from /etc/munin/plugin-conf.d/sendmail
# Setting /rgid/ruid/ to /99/99/
# Setting /egid/euid/ to /99 99/99/
# Setting up environment
# Environment ip = 192.168.1.101
# Environment port = 5054
# About to run '/etc/munin/plugins/jmx_MultigraphAll'
multigraph jmx_memory
Max.value 2162032640
Committed.value 1584332800
Init.value 1613168640
Used.value 473134248
multigraph jmx_MemoryAllocatedHeap
Max.value 1037959168
Committed.value 1037959168
Init.value 1073741824
Used.value 275414584
multigraph jmx_MemoryAllocatedNonHeap
Max.value 1124073472
Committed.value 546373632
Init.value 539426816
Used.value 197986088
[...]
multigraph jmx_ProcessorsAvailable
ProcessorsAvailable.value 1
Now I'm trying to get it to work for several different JVMs on the same host, because this setup covers only a single one.
I hope that can help you.
Edit:
I have actually now made the modifications to use this with several Java processes, each with its own JMX port.
Here is what you have to add:
[root@bus|in plugins]# cat /etc/munin/plugin-conf.d/munin-node
[diskstats]
user munin
[iostat_ios]
user munin
[admin_jmx_*]
env.ip 192.168.1.101
env.port 5054
[managed_jmx_*]
env.ip 192.168.1.101
env.port 5055
[jboss_jmx_*]
env.ip 192.168.1.101
env.port 1616
and then create the links:
[root@bus|in plugins]# ls -l /etc/munin/plugins/*_jmx_*
lrwxrwxrwx 1 root root 29 14 mars 15:36 /etc/munin/plugins/admin_jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
lrwxrwxrwx 1 root root 29 14 mars 16:51 /etc/munin/plugins/jboss_jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
lrwxrwxrwx 1 root root 29 14 mars 16:03 /etc/munin/plugins/managed_jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
and I commented out the ip and port defaults in the /usr/share/munin/plugins/jmx_ file, but I'm not sure that plays a role.
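With the prefixed links in place, each JVM can then be queried the same way as before, for example (reusing the link names above):
[root@bus|in plugins]# munin-run admin_jmx_MultigraphAll
[root@bus|in plugins]# munin-run jboss_jmx_MultigraphAll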
