Why does Zabbix not show a value received from Java code? - java

Consider the following Java code:
String host = "zabbixHost";
int port = 10051;
ZabbixSender zabbixSender = new ZabbixSender(host, port);

DataObject dataObject = new DataObject();
dataObject.setHost("testHost");
dataObject.setKey("test.ping.count");
dataObject.setValue("10");
// TimeUnit is SECONDS.
dataObject.setClock(System.currentTimeMillis() / 1000);

SenderResult result = zabbixSender.send(dataObject);
System.out.println("result:" + result);
if (result.success()) {
    System.out.println("send success.");
} else {
    System.err.println("send fail!");
}
The result is {"failed":0,"processed":1,"spentSeconds":0.001715,"total":1}
Then I send a value using the zabbix_sender tool from the command line:
zabbix_sender -z zabbixHost -p 10051 -s testHost -k test.ping.count -o 8 -v
The output is:
info from server: "processed: 1; failed: 0; total: 1; seconds spent: 0.002052"
sent: 1; skipped: 0; total: 1
Now two values have been sent to Zabbix. But when I go to the monitoring graph for test.ping.count, only the value 8 is shown, i.e. the value sent from the Java code was not stored even though the response reported success.
What is going on? How can I fix this?
Note
The library is - io.github.hengyunabc:zabbix-sender:0.0.3
Zabbix version is 3.0

The problem was with timestamps: zabbix-sender version 0.0.1 set the request clock (not the DataObject clock) in milliseconds, while version 0.0.3 sets it in seconds. Using the right version fixes the issue.
Maven sample (source):
<dependency>
    <groupId>io.github.hengyunabc</groupId>
    <artifactId>zabbix-sender</artifactId>
    <version>0.0.3</version>
</dependency>
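For context, here is a rough sketch (not the library's actual code) of the JSON body that the sender protocol carries; the binary ZBXD header is omitted. Version 0.0.1 filled the outer, request-level clock with milliseconds, so the server ends up placing the value at a timestamp far from "now" and it never appears on the graph, even though processed: 1 is reported. Version 0.0.3 fills both clocks in seconds:
public class SenderPayloadSketch {
    public static void main(String[] args) {
        // Illustrative only: the two clock fields the Zabbix sender protocol carries.
        // Both must be Unix time in SECONDS.
        long nowSeconds = System.currentTimeMillis() / 1000;
        String payload =
              "{"
            + "\"request\":\"sender data\","
            + "\"data\":[{"
            +   "\"host\":\"testHost\","
            +   "\"key\":\"test.ping.count\","
            +   "\"value\":\"10\","
            +   "\"clock\":" + nowSeconds    // per-item clock (DataObject.setClock)
            + "}],"
            + "\"clock\":" + nowSeconds      // request-level clock (set internally by the library)
            + "}";
        System.out.println(payload);
    }
}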

Related

ASE is terminating this process when trying to install the jar file (Msg 5702, Level 10, State 1)

I have an SAP ASE 16 server on a Windows OS.
I have enabled the java service:
sp_configure 'enable java'
Parameter Name   Default  Memory Used  Config Value  Run Value  Unit    Type
---------------  -------  -----------  ------------  ---------  ------  ------
enable java      0        0            1             1          switch  static

Rows affected (1)  Time (0.094 s)
I have created a basic class to test the service (JDBCExamples.java):
import java.sql.*;  // JDBC

public class JDBCExamples {

    public static void main(String args[]) {
        if (args.length != 2) {
            System.out.println("\n Usage: " + "name secondName \n");
            return;
        }
        try {
            String name = args[0];
            String secondName = args[1].toLowerCase();
            System.out.println("\n HOLA " + name + " " + secondName + " FUNCIONO!!!\n");
        } catch (Exception e) {
            System.out.println("\n Exception: ");
            e.printStackTrace();
        }
    }
}
I have the class file JDBCExamples.class and I packaged it into JDBCExamples.jar.
When I try to install the jar file it shows the error message:
instjava -f JDBCExamples.jar -SDEFAULT -Uuser -Ppassword -Ddatabase -new
Server Message: - Msg 5702, Level 10, State 1:
ASE is terminating this process.
I don't see anything in the database log.
Any idea what the problem is?
Update:
I posted the same problem in https://answers.sap.com/questions/13241081/ase-is-terminating-this-process-when-trying-to-ins.html
In that post the suspicion is that the issue is caused by an ASE bug fixed in PL06:
2687973 - NTPCI__exit(1); Native Thread failed to unwind - SAP ASE http://service.sap.com/sap/support/notes/2687973
I have a trial version and cannot download a newer patch (PL06 at minimum, though PL09 is recommended as the most recent).
Does anyone have this patch?

How to get actual RAM usage of app in android?

Quite simply, how can I get the amount of memory (in MB) that my android app is currently using? This would need to be done in Java so I can display this information to the user.
I've looked at other stackoverflow posts but none give a simple or accurate answer to this problem.
adb shell dumpsys meminfo packagename
Try to execute this command with Java.
You can use ActivityManager for that purpose.
It's answered in this post.
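A minimal sketch of that ActivityManager approach (the Context parameter and the helper name getAppPssMb are just illustrative):
import android.app.ActivityManager;
import android.content.Context;
import android.os.Debug;
import android.os.Process;

// Rough sketch: total PSS of this app's process, as reported by ActivityManager.
static double getAppPssMb(Context context) {
    ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    Debug.MemoryInfo[] infos = am.getProcessMemoryInfo(new int[] { Process.myPid() });
    int totalPssKb = infos[0].getTotalPss();  // PSS is reported in KB
    return totalPssKb / 1024.0;               // convert to MB for display
}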
Try this code if performance is not critical:
Debug.MemoryInfo memInfo = new Debug.MemoryInfo();
Debug.getMemoryInfo(memInfo);
long res = memInfo.getTotalPrivateDirty();  // reported in KB
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) {
    res += memInfo.getTotalPrivateClean();  // getTotalPrivateClean() exists on KitKat and above
}
return res * 1024L;  // KB -> bytes
If performance is critical check this answer
1. dumpsys meminfo PKG_NAME
...
 App Summary
                   Pss(KB)
                    ------
       Java Heap:    42868   <--
     Native Heap:    52268   <--
            Code:    23608
           Stack:       96
        Graphics:     5084
   Private Other:     5620
          System:    14900

           TOTAL:   144444   <--      TOTAL SWAP PSS:      130
2. In Java code
https://android.googlesource.com/platform/frameworks/base/+/master/core/java/android/os/Debug.java#640
// this is the value source of dumpsys meminfo.
Debug.MemoryInfo memInfo = new Debug.MemoryInfo();
Debug.getMemoryInfo(memInfo);
if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.M) {
    String javaMem  = memInfo.getMemoryStat("summary.java-heap");
    String natiMem  = memInfo.getMemoryStat("summary.native-heap");
    String totalMem = memInfo.getMemoryStat("summary.total-pss");
    //msg2 += "\n" + String.format("%s %s %s", javaMem, natiMem, totalMem);
    msg2 += "\n\n" + String.format("java %8s\nnati %8s\ntotal %8s",
        formatMB(Integer.parseInt(javaMem)), formatMB(Integer.parseInt(natiMem)), formatMB(Integer.parseInt(totalMem))
    );
}
...

String formatMB(double KB) {
    return String.format("%.1f MB", KB / 1024);
}
The other APIs need a lot of calculation (you can read about them in the Debug.java source file). This approach is the closest to dumpsys meminfo or the Android Studio memory monitor.
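If you only need the managed (Java) heap figure rather than the full PSS breakdown, the Runtime API is a cheap alternative; a minimal sketch (it ignores native and graphics memory, so the number will be smaller than the dumpsys totals above):
// Rough sketch: current Java heap usage of this process, in MB.
Runtime rt = Runtime.getRuntime();
long usedHeapBytes = rt.totalMemory() - rt.freeMemory();
System.out.println(String.format("java heap: %.1f MB", usedHeapBytes / (1024.0 * 1024.0)));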

RMI Client invokes Graceful shutdown. Server Stops. Manually started again. But somehow gets autokilled. Why?

Everything is in Java.
Environment: RedHat Linux 12.3
Let's get into the details of the communication flow (fig1.):
NOTE:
1. Old model: there was NO A.java
   "script.sh" starts/stops B.java as a process
2. New model: there IS A.java
   "script.sh" never uses A
   "script.sh" starts B.java as a process.
   "myGraceful.sh" stops the process gracefully
   "script.sh" is NEVER used for stopping
Server (B.java in server.jar):
Java process triggered as: ./script.sh {start|stop}
It's a legacy class that has existed for 10 years or more.
It has a RemoteB interface and the following methods:
graceFul(){ // handles all DB, user states, connections, etc.
            // works perfectly from Admin
            // invoked via RMI from a JSP
            // never invoked by the script until now
}
initServer(){...}
getUsers(){...}
Requirement & My Effort:
Everyone knows what RMI code or a .sh that invokes Java looks like, so I don't think pasting proprietary code should be expected here.
The graceful shutdown needs to be done from a shell script on the same server node. On the server everything runs via Spring. I will die if I try to inject a bean, as 100x1000 dependencies would come queuing in. Hence I created:
RMI Client (com/common/task/A.java in the same server.jar):
It can be triggered by: ./myGraceful.sh stop (e.g. java -cp... com.A 2>&1)
It lives in the same server.jar, hence it is inevitably loaded (note: not running) on the same server node.
It has a public static void main(String args[]).
It forks a thread; the thread calls the RMI shutdown on B and is expected to die on its own.
Problem:
The server shuts down perfectly. Then if I issue the following command AGAIN and AGAIN:
./script.sh start
The server starts up, but within a minute it stops automatically. I don't have any clue what is stopping the server. I observed:
Prior to any of my new modifications:
"./script.sh stop" [used to work flawlessly calling kill -9 $pid ]
"ps - aefwww | grep java" used to show:
pid ppid.. /usr/java/jdk/bin/java ........java -D.... -Djava.timeout=.. -D....
pid ppid.. .../abc/ ....java...
pid ppid.. .../xyz/ .................java...
But now
"./myGraceful.sh stop" triggers modified server.jar(which now has A.java):
"ps - aefwww | grep java" shows:
pid ppid.. .../abc/ ....java...
pid ppid.. .../xyz/ .................java...
Here goes some code:
myGraceful.sh:
----------------
#!/bin/bash
CLASSPATH=$COMMON_CLASS_PATH:$LIB_INHOUSE/server.jar
rmiIp=x.y.zz.www [hidden]
rmiPort=xxxx [hidden]
peerId=1
period=5
function kill_server(){
    echo -n "Shutting down Server ($pid): "
    echo "executing Arnab"
    echo "arg0 : $0 pid : $pid"
    java -Djava.rmi.server.hostname=localhost \
        com.common.task.GracefulRunner $rmiIp $rmiPort $peerId $period 2>&1
    echo Done
}

case "$1" in
    start)
        get_pids "CustomBootstrap" $2
        if [ "$pid" != "" ] ; then
            get_processname "CustomBootstrap" $2
            if [ "$server" != "" ] ; then
                echo "Server already running. pid = $pid"
                exit 1
            fi
            if [ "$ctserver" != "" ] ; then
                echo "Shutting down CT Server($pid): "
                kill -SIGQUIT $pid
                kill -9 $pid
                echo Done
            fi
        fi
        $0 run $2 1>&2 &
        sleep 2
        $0 status $2
        # $0 err
        ;;
    stop)
        $0 kill $2
        ;;
    kill)
        get_pids "CustomBootstrap" $2
        if [ "$pid" != "" ] ; then
            kill_server
            echo "Server ended at `date`"
        else
            get_pids "Launcher" $2
            if [ "$pid" != "" ] ; then
                kill_server
            else
                echo "Server is not running !"
            fi
        fi
        ;;
esac
A.java
import java.rmi.Naming;

public class A {

    class GracefulStopperThread implements Runnable {
        private String serverRMIIp = null;
        private int serverRmiPort = 0;
        private String serverPeerId = null;
        private int shutDownPeriod = 0;

        public GracefulStopperThread(String rmiIp, String rmiPort, String peerId, String period) {
            serverRMIIp = rmiIp;
            serverRmiPort = Integer.parseInt(rmiPort);
            serverPeerId = peerId;
            shutDownPeriod = Integer.parseInt(period);
        }

        public void run() {
            System.out.println("***************************************** GracefulStopper is running *******************************************");
            System.out.println("serverPeerId :=" + serverPeerId + " , shutDownPeriod :=" + shutDownPeriod);
            try {
                IRemoteServer serverRef = null;
                String rmiUrl = getURL(serverRMIIp, serverRmiPort, serverPeerId);
                System.out.println("THE RMI URL : " + rmiUrl);
                serverRef = (IRemoteServer) Naming.lookup(rmiUrl);
                com.server.ds.IRemoteServer pcServerRef = (com.server.ds.IRemoteServer) serverRef;
                // graceful() is the legacy RMI method on B; SHUTDOWN_TYPE_SERVER_NOTSYSTEM comes from the proprietary server code.
                pcServerRef.graceful(SHUTDOWN_TYPE_SERVER_NOTSYSTEM, "Gracefully Shutting down withing 10 mins", shutDownPeriod);
                System.out.println("GracefulStopperThread completed ");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        private String getURL(String rmiIp, int rmiPort, String peerId) {
            return new StringBuffer(32).append("rmi://").append(serverRMIIp).append(':').append(serverRmiPort)
                    .append('/').append(serverPeerId).toString();
        }
    }

    public static void main(String args[]) throws InterruptedException {
        A agent = new A();
        Runnable stopper = agent.new GracefulStopperThread(args[0], args[1], args[2], args[3]);
        Thread t = new Thread(stopper);
        t.start();
        t.join();
        System.out.println("MainThread completed ");
    }
}
From the Catalina and Tomcat logs it became clear that there was a wrong/missing JMX entry in the JMX config file, which has nothing to do with any of the above. This caused Tomcat to stop at 85% of its startup, so it actually never started. The question can be closed and marked solved.

Access FreePastry program that is behind NAT

I'm trying to connect to my program that uses FreePastry behind a NAT, but I'm getting nowhere. mIP is my public IP; mBootport and mBindport are 50001. I have forwarded these ports in my router to my computer, but it still does not work. I disabled the firewall: nothing. I disconnected the router and connected directly to the internet, and it still does not work. The only time it does work is on my local network. So something must be wrong in either the code or the config file, but I cannot see what.
Environment env = new Environment();
InetSocketAddress bootaddress = new InetSocketAddress(mIP, mBootport);
NodeIdFactory nidFactory = new RandomNodeIdFactory(env);
PastryNodeFactory factory = new SocketPastryNodeFactory(nidFactory, mBindport, env);

for (int curNode = 0; curNode < mNumNodes; curNode++) {
    PastryNode node = factory.newNode();
    NetworkHandler app = new NetworkHandler(node, mLog);
    apps.add(app);
    node.boot(bootaddress);

    synchronized (node) {
        while (!node.isReady() && !node.joinFailed()) {
            node.wait(500);
            if (node.joinFailed()) {
                throw new IOException("Could not join the FreePastry ring. Reason:" + node.joinFailedReason());
            }
        }
    }
    System.out.println("Finished creating new node: " + node);
    mLog.append("Finished creating new node: " + node + "\n");
}

Iterator<NetworkHandler> i = apps.iterator();
NetworkHandler app = (NetworkHandler) i.next();
app.subscribe();
public class NetworkHandler implements ScribeClient, Application {

    int seqNum = 0;
    CancellableTask publishTask;
    Scribe myScribe;
    Topic myTopic;
    JTextArea mLog;
    protected Endpoint endpoint;

    public NetworkHandler(Node node, JTextArea log) {
        this.endpoint = node.buildEndpoint(this, "myinstance");
        mLog = log;
        myScribe = new ScribeImpl(node, "myScribeInstance");
        myTopic = new Topic(new PastryIdFactory(node.getEnvironment()), "example topic");
        System.out.println("myTopic = " + myTopic);
        mLog.append("myTopic = " + myTopic + "\n");
        endpoint.register();
    }

    public void subscribe() {
        myScribe.subscribe(myTopic, this);
    }
}
freepastry.params
# this file holds the default values for pastry and it's applications
# you do not need to modify the default.params file to override these values
# instead you can use your own params file to set values to override the
# defaults. You can specify this file by constructing your
# rice.environment.Environment() with the filename you wish to use
# typically, you will want to be able to pass this file name from the command
# line
# max number of handles stored per routing table entry
pastry_rtMax = 1
pastry_rtBaseBitLength = 4
# leafset size
pastry_lSetSize = 24
# maintenance frequencies
pastry_leafSetMaintFreq = 60
pastry_routeSetMaintFreq = 900
# drop the message if pastry is not ready
pastry_messageDispatch_bufferIfNotReady = false
# number of messages to buffer while an app hasn't yet been registered
pastry_messageDispatch_bufferSize = 32
# FP 2.1 uses the new transport layer
transport_wire_datagram_receive_buffer_size = 131072
transport_wire_datagram_send_buffer_size = 65536
transport_epoch_max_num_addresses = 2
transport_sr_max_num_hops = 5
# proximity neighbor selection
transport_use_pns = true
# number of rows in the routing table to consider during PNS
# valid values are ALL, or a number
pns_num_rows_to_use = 10
# commonapi testing parameters
# direct or socket
commonapi_testing_exit_on_failure = true
commonapi_testing_protocol = direct
commonapi_testing_startPort = 5009
commonapi_testing_num_nodes = 10
# set this to specify the bootstrap node
#commonapi_testing_bootstrap = localhost:5009
# random number generator's seed, "CLOCK" uses the current clock time
random_seed = CLOCK
# sphere, euclidean or gt-itm
direct_simulator_topology = sphere
# -1 starts the simulation with the current time
direct_simulator_start_time = -1
#pastry_direct_use_own_random = true
#pastry_periodic_leafset_protocol_use_own_random = true
pastry_direct_gtitm_matrix_file=GNPINPUT
# the number of stubs in your network
pastry_direct_gtitm_max_overlay_size=1000
# the number of virtual nodes at each stub: this allows you to simulate multiple "LANs" and allows cheeper scaling
pastry_direct_gtitm_nodes_per_stub=1
# the factor to multiply your file by to reach millis. Set this to 0.001 if your file is in microseconds. Set this to 1000 if your file is in seconds.
pastry_direct_gtitm_delay_factor=1.0
#millis of the maximum network delay for the generated network topologies
pastry_direct_max_diameter=200
pastry_direct_min_delay=2
#setting this to false will use the old protocols which are about 200 times as fast, but may cause routing inconsistency in a real network. Probably won't in a simulator because it will never be incorrect about liveness
pastry_direct_guarantee_consistency=true
# rice.pastry.socket parameters
# tells the factory you intend to use multiple nodes
# this causes the logger to prepend all entries with the nodeid
pastry_factory_multipleNodes = true
pastry_factory_selectorPerNode = false
pastry_factory_processorPerNode = false
# number of bootstap nodehandles to fetch in parallel
pastry_factory_bootsInParallel = 1
# the maximum size of a message
pastry_socket_reader_selector_deserialization_max_size = 1000000
# the maximum number of outgoing messages to queue when a socket is slower than the number of messages you are queuing
pastry_socket_writer_max_queue_length = 30
pastry_socket_writer_max_msg_size = 20480
pastry_socket_repeater_buffer_size = 65536
pastry_socket_pingmanager_smallPings=true
pastry_socket_pingmanager_datagram_receive_buffer_size = 131072
pastry_socket_pingmanager_datagram_send_buffer_size = 65536
# the time before it will retry a route that was already found dead
pastry_socket_srm_check_dead_throttle = 300000
pastry_socket_srm_proximity_timeout = 3600000
pastry_socket_srm_ping_throttle = 30000
pastry_socket_srm_default_rto = 3000
pastry_socket_srm_rto_ubound = 10000
pastry_socket_srm_rto_lbound = 50
pastry_socket_srm_gain_h = 0.25
pastry_socket_srm_gain_g = 0.125
pastry_socket_scm_max_open_sockets = 300
pastry_socket_scm_max_open_source_routes = 30
# the maximum number of source routes to attempt, setting this to 0 will
# effectively eliminate source route attempts
# setting higher than the leafset does no good, it will be bounded by the leafset
# a larger number tries more source routes, which could give you a more accurate
# determination, however, is more likely to lead to congestion collapse
pastry_socket_srm_num_source_route_attempts = 8
pastry_socket_scm_socket_buffer_size = 32768
# this parameter is multiplied by the exponential backoff when doing a liveness check so the first will be 800, then 1600, then 3200 etc...
pastry_socket_scm_ping_delay = 800
# adds some fuzziness to the pings to help prevent congestion collapse, so this will make the ping be advanced or delayed by this factor
pastry_socket_scm_ping_jitter = 0.1
# how many pings until we call the node faulty
pastry_socket_scm_num_ping_tries = 5
pastry_socket_scm_write_wait_time = 30000
pastry_socket_scm_backoff_initial = 250
pastry_socket_scm_backoff_limit = 5
pastry_socket_pingmanager_testSourceRouting = false
pastry_socket_increment_port_after_construction = true
# if you want to allow connection to 127.0.0.1, set this to true
pastry_socket_allow_loopback = false
# these params will be used if the computer attempts to bind to the loopback address, they will open a socket to this address/port to identify which network adapter to bind to
pastry_socket_known_network_address = yahoo.com
pastry_socket_known_network_address_port = 80
pastry_socket_use_own_random = true
pastry_socket_random_seed = clock
# force the node to be a seed node
rice_socket_seed = false
# the parameter simulates some nodes being firewalled, base on rendezvous_test_num_firewalled
rendezvous_test_firewall = false
# probabilistic fraction of firewalled nodes
rendezvous_test_num_firewalled = 0.3
# don't firewall the first node, useful for testing
rendezvous_test_makes_bootstrap = false
# FP 2.1 uses the new transport layer
transport_wire_datagram_receive_buffer_size = 131072
transport_wire_datagram_send_buffer_size = 65536
# NAT/UPnP settings
nat_network_prefixes = 127.0.0.1;10.;192.168.
# Enable and set this if you have already set up port forwarding and know the external address
#external_address = 123.45.67.89:1234
#enable this if you set up port forwarding (on the same port), but you don't
#know the external address and you don't have UPnP enabled
#this is useful for a firwall w/o UPnP support, and your IP address isn't static
probe_for_external_address = true
# values how to probe
pastry_proxy_connectivity_timeout = 15000
pastry_proxy_connectivity_tries = 3
# possible values: always, never, prefix (prefix is if the localAddress matches any of the nat_network_prefixes
# whether to search for a nat using UPnP (default: prefix)
nat_search_policy = prefix
# whether to verify connectivity (default: boot)
firewall_test_policy = never
# policy for setting port forwarding the state of the firewall if there is already a conflicting rule: overwrite, fail (throw exception), change (use different port)
# you may want to set this to overwrite or fail on the bootstrap nodes, but most freepastry applications can run on any available port, so the default is change
nat_state_policy = change
# the name of the application in the firewall, set this if you want your application to have a more specific name
nat_app_name = freepastry
# how long to wait for responses from the firewall, in millis
nat_discovery_timeout = 5000
# how many searches to try to find a free firewall port
nat_find_port_max_tries = 10
# uncomment this to use UPnP NAT port forwarding, you need to include in the classpath: commons-jxpath-1.1.jar:commons-logging.jar:sbbi-upnplib-xxx.jar
nat_handler_class = rice.pastry.socket.nat.sbbi.SBBINatHandler
# hairpinning:
# default "prefix" requires more bandwidth if you are behind a NAT. It enables multiple IP
# addresses in the NodeHandle if you are behind a NAT. These are usually the internet routable address,
# and the LAN address (usually 192.168.x.x)
# you can set this to never if any of the following conditions hold:
# a) you are the only FreePastry node behind this address
# b) you firewall supports hairpinning see
# http://scm.sipfoundry.org/rep/ietf-drafts/behave/draft-ietf-behave-nat-udp-03.html#rfc.section.6
nat_nodehandle_multiaddress = prefix
# if we are not scheduled for time on cpu in this time, we setReady(false)
# otherwise there could be message inconsistency, because
# neighbors may believe us to be dead. Note that it is critical
# to consider the amount of time it takes the transport layer to find a
# node faulty before setting this parameter, this parameter should be
# less than the minimum time required to find a node faulty
pastry_protocol_consistentJoin_max_time_to_be_scheduled = 15000
# in case messages are dropped or something, how often it will retry to
# send the consistent join message, to get verification from the entire
# leafset
pastry_protocol_consistentJoin_retry_interval = 30000
# parameter to control how long dead nodes are retained in the "failed set" in
# CJP (see ConsistentJoinProtocol ctor) (15 minutes)
pastry_protocol_consistentJoin_failedRetentionTime = 900000
# how often to cleanup the failed set (5 mins) (see ConsistentJoinProtocol ctor)
pastry_protocol_consistentJoin_cleanup_interval = 300000
# the maximum number of entries to send in the failed set, only sends the most
# recent detected failures (see ConsistentJoinProtocol ctor)
pastry_protocol_consistentJoin_maxFailedToSend = 20
# how often we send/expect to be sent updates
pastry_protocol_periodicLeafSet_ping_neighbor_period = 20000
pastry_protocol_periodicLeafSet_lease_period = 30000
# what the grace period is to receive a periodic update, before checking
# liveness
pastry_protocol_periodicLeafSet_request_lease_throttle = 10000
# how many entries are kept in the partition handler's table
partition_handler_max_history_size=20
# how long entries in the partition handler's table are kept
# 90 minutes
partition_handler_max_history_age=5400000
# what fraction of the time a bootstrap host is checked
partition_handler_bootstrap_check_rate=0.05
# how often to run the partition handler
# 5 minutes
partition_handler_check_interval=300000
# the version number of the RouteMessage to transmit (it can receive anything that it knows how to)
# this is useful if you need to migrate an older ring
# you can change this value in realtime, so, you can start at 0 and issue a command to update it to 1
pastry_protocol_router_routeMsgVersion = 1
# should usually be equal to the pastry_rtBaseBitLength
p2p_splitStream_stripeBaseBitLength = 4
p2p_splitStream_policy_default_maximum_children = 24
p2p_splitStream_stripe_max_failed_subscription = 5
p2p_splitStream_stripe_max_failed_subscription_retry_delay = 1000
#multiring
p2p_multiring_base = 2
#past
p2p_past_messageTimeout = 30000
p2p_past_successfulInsertThreshold = 0.5
#replication
# fetch delay is the delay between fetching successive keys
p2p_replication_manager_fetch_delay = 500
# the timeout delay is how long we take before we time out fetching a key
p2p_replication_manager_timeout_delay = 20000
# this is the number of keys to delete when we detect a change in the replica set
p2p_replication_manager_num_delete_at_once = 100
# this is how often replication will wake up and do maintainence; 10 mins
p2p_replication_maintenance_interval = 600000
# the maximum number of keys replication will try to exchange in a maintainence message
p2p_replication_max_keys_in_message = 1000
#scribe
p2p_scribe_maintenance_interval = 180000
#time for a subscribe fail to be thrown (in millis)
p2p_scribe_message_timeout = 15000
#util
p2p_util_encryptedOutputStream_buffer = 32678
#aggregation
p2p_aggregation_logStatistics = true
p2p_aggregation_flushDelayAfterJoin = 30000
#5 MINS
p2p_aggregation_flushStressInterval = 300000
#5 MINS
p2p_aggregation_flushInterval = 300000
#1024*1024
p2p_aggregation_maxAggregateSize = 1048576
p2p_aggregation_maxObjectsInAggregate = 25
p2p_aggregation_maxAggregatesPerRun = 2
p2p_aggregation_addMissingAfterRefresh = true
p2p_aggregation_maxReaggregationPerRefresh = 100
p2p_aggregation_nominalReferenceCount = 2
p2p_aggregation_maxPointersPerAggregate = 100
#14 DAYS
p2p_aggregation_pointerArrayLifetime = 1209600000
#1 DAY
p2p_aggregation_aggregateGracePeriod = 86400000
#15 MINS
p2p_aggregation_aggrRefreshInterval = 900000
p2p_aggregation_aggrRefreshDelayAfterJoin = 70000
#3 DAYS
p2p_aggregation_expirationRenewThreshold = 259200000
p2p_aggregation_monitorEnabled = false
#15 MINS
p2p_aggregation_monitorRefreshInterval = 900000
#5 MINS
p2p_aggregation_consolidationDelayAfterJoin = 300000
#15 MINS
p2p_aggregation_consolidationInterval = 900000
#14 DAYS
p2p_aggregation_consolidationThreshold = 1209600000
p2p_aggregation_consolidationMinObjectsInAggregate = 20
p2p_aggregation_consolidationMinComponentsAlive = 0.8
p2p_aggregation_reconstructionMaxConcurrentLookups = 10
p2p_aggregation_aggregateLogEnabled = true
#1 HOUR
p2p_aggregation_statsGranularity = 3600000
#3 WEEKS
p2p_aggregation_statsRange = 1814400000
p2p_aggregation_statsInterval = 60000
p2p_aggregation_jitterRange = 0.1
# glacier
p2p_glacier_logStatistics = true
p2p_glacier_faultInjectionEnabled = false
p2p_glacier_insertTimeout = 30000
p2p_glacier_minFragmentsAfterInsert = 3.0
p2p_glacier_refreshTimeout = 30000
p2p_glacier_expireNeighborsDelayAfterJoin = 30000
#5 MINS
p2p_glacier_expireNeighborsInterval = 300000
#5 DAYS
p2p_glacier_neighborTimeout = 432000000
p2p_glacier_syncDelayAfterJoin = 30000
#5 MINS
p2p_glacier_syncMinRemainingLifetime = 300000
#insertTimeout
p2p_glacier_syncMinQuietTime = 30000
p2p_glacier_syncBloomFilterNumHashes = 3
p2p_glacier_syncBloomFilterBitsPerKey = 4
p2p_glacier_syncPartnersPerTrial = 1
#1 HOUR
p2p_glacier_syncInterval = 3600000
#3 MINUTES
p2p_glacier_syncRetryInterval = 180000
p2p_glacier_syncMaxFragments = 100
p2p_glacier_fragmentRequestMaxAttempts = 0
p2p_glacier_fragmentRequestTimeoutDefault = 10000
p2p_glacier_fragmentRequestTimeoutMin = 10000
p2p_glacier_fragmentRequestTimeoutMax = 60000
p2p_glacier_fragmentRequestTimeoutDecrement = 1000
p2p_glacier_manifestRequestTimeout = 10000
p2p_glacier_manifestRequestInitialBurst = 3
p2p_glacier_manifestRequestRetryBurst = 5
p2p_glacier_manifestAggregationFactor = 5
#3 MINUTES
p2p_glacier_overallRestoreTimeout = 180000
p2p_glacier_handoffDelayAfterJoin = 45000
#4 MINUTES
p2p_glacier_handoffInterval = 240000
p2p_glacier_handoffMaxFragments = 10
#10 MINUTES
p2p_glacier_garbageCollectionInterval = 600000
p2p_glacier_garbageCollectionMaxFragmentsPerRun = 100
#10 MINUTES
p2p_glacier_localScanInterval = 600000
p2p_glacier_localScanMaxFragmentsPerRun = 20
p2p_glacier_restoreMaxRequestFactor = 4.0
p2p_glacier_restoreMaxBoosts = 2
p2p_glacier_rateLimitedCheckInterval = 30000
p2p_glacier_rateLimitedRequestsPerSecond = 3
p2p_glacier_enableBulkRefresh = true
p2p_glacier_bulkRefreshProbeInterval = 3000
p2p_glacier_bulkRefreshMaxProbeFactor = 3.0
p2p_glacier_bulkRefreshManifestInterval = 30000
p2p_glacier_bulkRefreshManifestAggregationFactor = 20
p2p_glacier_bulkRefreshPatchAggregationFactor = 50
#3 MINUTES
p2p_glacier_bulkRefreshPatchInterval = 180000
p2p_glacier_bulkRefreshPatchRetries = 2
p2p_glacier_bucketTokensPerSecond = 100000
p2p_glacier_bucketMaxBurstSize = 200000
p2p_glacier_jitterRange = 0.1
#1 MINUTE
p2p_glacier_statisticsReportInterval = 60000
p2p_glacier_maxActiveRestores = 3
#transport layer testing params
org.mpisws.p2p.testing.transportlayer.replay.Recorder_printlog = true
# logging
#default log level
loglevel = WARNING
#example of enabling logging on the endpoint:
#rice.p2p.scribe#ScribeRegrTest-endpoint_loglevel = INFO
logging_packageOnly = true
logging_date_format = yyyyMMdd.HHmmss.SSS
logging_enable=true
# 24 hours
log_rotate_interval = 86400000
# the name of the active log file, and the filename prefix of rotated log
log_rotate_filename = freepastry.log
# the format of the date for the rotating log
log_rotating_date_format = yyyyMMdd.HHmmss.SSS
# true will tell the environment to ues the FileLogManager
environment_logToFile = false
# the prefix for the log files (otherwise will be named after the nodeId)
fileLogManager_filePrefix =
# the suffix for the log files
fileLogManager_fileSuffix = .log
# wether to keep the line prefix (declaring the node id) for each line of the log
fileLogManager_keepLinePrefix = false
fileLogManager_multipleFiles = true
fileLogManager_defaultFileName = main
# false = append true = overwrite
fileLogManager_overwrite_existing_log_file = false
# the amount of time the LookupService tutorial app will wait before timing out
# in milliseconds, default is 30 seconds
lookup_service.timeout = 30000
# how long to wait before the first retry
lookup_service.firstTimeout = 500
Edit: Confirmed with Wireshark that the messages do indeed reach the computer; FreePastry just doesn't accept the connection.
Not sure what you mean by "not work". To test the connectivity between your client and your server (sitting behind NAT), you just need to do something like "telnet mIP mBindport" on your client side, assuming you have a telnet utility (available by default on Linux and Mac; on Windows you can install one, like nc ("netcat")).
If the port forwarding is set up correctly, you should see something like the following when the TCP connection is established with your server:
Connected to localhost.
Escape character is '^]'.
Once the TCP session sets up correctly, you can stop the "telnet" program and use your real client (in Java) to talk to your server; it should work fine.
If the TCP session didn't set up, you may want to check on the server side. Use either Wireshark or tcpdump to capture packets with the filter "tcp port 50001", and run the telnet command above to check whether a TCP packet comes in.
If nothing shows up in Wireshark or tcpdump, then your firewall (or port forwarding) is not set up correctly.
If the TCP packet does show up in Wireshark or tcpdump, then your server program may be at fault. Check the IP address it binds to using the command (Linux):
netstat -antp | grep 50001
(on Windows, the command is slightly different).
Typically it should bind to IP address 0.0.0.0 (all IPs); if it doesn't, you should check whether the IP it binds to has connectivity/a route to the outside world (outside the NAT).
Good luck.
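If telnet/netcat isn't handy, the same connectivity check can be done with a few lines of plain Java (host and port below are placeholders for your public IP and forwarded port):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        String host = "mIP";  // placeholder: your public IP
        int port = 50001;     // placeholder: the forwarded port
        try (Socket socket = new Socket()) {
            // Attempt a plain TCP connect with a 5 second timeout.
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("TCP connect OK - port forwarding works");
        } catch (IOException e) {
            System.out.println("TCP connect failed: " + e.getMessage());
        }
    }
}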
I would try setting the IP to the local address of the computer FreePastry is running on. It sounds like the computer is getting the packets but FreePastry is looking for them on a different address. If you set mIP to the local address, I think it would work. This applies if the machine is behind the router/NAT.
Port forwarding forwards packets from your public IP on port 50001 to your internal computer's IP on whatever port you set, normally the same 50001. If you set your program to listen on the public IP, it has no access to that address, so it will not accept any packets/messages. Set it to listen on the computer's own IP, or on 0.0.0.0, and it should accept any packets/messages on that port.
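To illustrate that point with plain java.net (this is a generic sketch, not the FreePastry API; 203.0.113.7 is a placeholder public address):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindExample {
    public static void main(String[] args) throws IOException {
        int port = 50001;

        // Binding to the wildcard address accepts traffic on every local interface,
        // including packets forwarded by the router.
        ServerSocket ok = new ServerSocket();
        ok.bind(new InetSocketAddress("0.0.0.0", port));
        System.out.println("listening on " + ok.getLocalSocketAddress());
        ok.close();

        // Binding to the public IP fails behind NAT, because that address
        // does not exist on any local network interface.
        ServerSocket bad = new ServerSocket();
        try {
            bad.bind(new InetSocketAddress("203.0.113.7", port)); // placeholder public IP
        } catch (IOException e) {
            System.out.println("bind to public IP failed: " + e.getMessage());
        } finally {
            bad.close();
        }
    }
}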

Python script launched from Jenkins hangs

I have a Python script which launches a Java tool to run regression tests. If I run these commands directly on the command line they work fine. If I run the script from Jenkins, the Java tool hangs until I close the Jenkins job; once I close the Jenkins job, I see the tool running.
Following is the command generated from the python script:
export DISPLAY=:0;/home/tools/executor/bin/executor -J-Xms128m -J-Xmx1024m -a/home/ExecutorSuites/FWVerification/Platform/update_firmware_polaris.e.xml -r744 <uname> <passwrd>
Is there any issue with running Python scripts from Jenkins? I really appreciate any help on this. I am a FW engineer and do not know much about Java and Jenkins.
Here is the python script:
import paramiko
import sys
import os

class Unbuffered:
    def __init__(self, stream):
        self.stream = stream
    def write(self, data):
        self.stream.write(data)
        self.stream.flush()
    def __getattr__(self, attr):
        return getattr(self.stream, attr)

sys.stdout = Unbuffered(sys.stdout)

hostname = os.environ['J_TESTMACHINE_IP']
username = os.environ['J_TESTMACHINE_UNAME']
password = os.environ['J_TESTMACHINE_PASSWD']
vapor_username = os.environ['J_VAPOR_UNAME']
vapor_password = os.environ['J_VAPOR_PASSWD']
executor_path = os.environ['J_EXECUTOR_PATH']
vapor_revid = os.environ['J_VAPOR_REVID']
test_results_path = os.environ['J_VAPOR_TESTRESULTS_FILEPATH']
es_home_dir = os.environ['J_ES_HOME_DIR']
testsuites_string = os.environ['J_TEST_SUITE']
testsuites = [x.strip() for x in testsuites_string.split(',')]
localfile = "C:\Jenkins\localfile.txt"

def create_client(hostname, username, password):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(hostname, username=username, password=password)
    except BadHostKeyException:
        print "The server's host key could not be verified"
    except AuthenticationException:
        print "if authentication failed"
    except SSHException:
        print "There were other errors connecting or establishing an SSH session"
    except socket.error:
        print "a socket error occurred while connecting"
    return client

def destroy_client(client):
    client.close()

def form_executor_cmd(testsuite):
    executor_cmd = "export DISPLAY=:0;";
    executor_cmd += executor_path;
    executor_cmd += " -J-Xms128m -J-Xmx1024m";
    executor_cmd += " -a" + es_home_dir + testsuite;
    executor_cmd += " -r" + vapor_revid;
    executor_cmd += " -u" + vapor_username + " -p" + vapor_password;
    return executor_cmd

def run_remote_cmd(client, remote_cmd):
    try:
        stdin, stdout, stderr = client.exec_command(remote_cmd)
    except SSHException:
        print "When executing command "+remote_cmd+", there were other errors connecting or establishing an SSH session"
    channel = stdout.channel
    status = channel.recv_exit_status()
    print "\n Executing remote command "+remote_cmd+" ...\n";
    print "Exit Status " + str(status)
    return status, stdin, stdout, stderr

def clear_testresults(client, filepath):
    print "clearing test results at " + filepath
    status, stdin, stdout, stderr = run_remote_cmd(client, "rm "+filepath)

def did_it_pass(client):
    ftp = client.open_sftp()
    ftp.get(test_results_path, localfile)
    f = open(localfile,"a+")
    lines = f.readlines()
    print lines
    f.close()
    ftp.close()
    os.remove(localfile)
    if "Failed" in lines:
        return 1
    else:
        return 0
    os.remove('localfile.txt')

###########################################################
# hostname = 'sclab-sfmfv-avatar'
# username = 'root'
# password = 'pmcsfm'

# ssh connection created
client = create_client(hostname, username, password)
#executor_cmd = "export DISPLAY=:0;/home/tools/executor/bin/executor -a/home/ExecutorSuites/FWVerification/Platform/update_firmware_avatar.e.xml -r342 -uSfmFwTest1 -pSfmFWbot01"
failed = 0
for testsuite in testsuites:
    print testsuite
    clear_testresults(client, test_results_path)
    print "\n Executing suite "+testsuite+" ...\n";
    executor_cmd = form_executor_cmd(testsuite)
    status, stdin, stdout, stderr = run_remote_cmd(client, executor_cmd)
    if did_it_pass(client):
        print "Testsuite "+ testsuite +"failed"
        data = stderr.read();
        print data
        failed = 1
    else:
        print "Testsuite "+ testsuite +"succeeded"
        data = stdout.read();
        print data

if failed == 1 :
    sys.exit (1)

# ssh connection destroyed
destroy_client(client)
