Unable to pull JMX data using jolokia from Kafka - java

I have installed Jolokia in centos 7 machine and trying to pull Kafka metrics using Jolokia agent and integrate with Icinga monitoring tool using Nagios plugin check_jmx4perl. Below are the configuration steps I have followed
Step 1: Downloaded jolokia-jvm-1.3.4-agent.jar
Step 2: Copied to /home/usr/
Step 3: Provided permissions by issuing the command chmod a+x /home/usr/jolokia-jvm-1.3.4-agent.jar
Step 4: Added the agent to KAFKA_OPTS by issuing the command export KAFKA_OPTS="$KAFKA_OPTS -javaagent:/home/usr/jolokia-jvm-1.3.4-agent.jar=host=*"
Step 5: Started Zookeeper and Kafka in standalone mode and tried to fetch list of topics which works fine by displaying the message
INFO: No access restrictor found, access to all MBean is allowed
Jolokia: Agent started with URL http://0:0:0:0:0:0:0:0:8778/jolokia/
Step 6: Tested the Jolokia agent by issuing the command j4psh http://localhost:8778, which fails with:
Connection refused
I have also tried providing the IP address, but the issue remains the same. Do I need to make an entry for the host in the /etc/hosts file?

Not sure if you are the same OP as this question, but:
Perhaps you need to fully qualify the path of the jar. Mine looks like this and works:
export JOLOKIA_HOME=/libs/java/jolokia/1.3.7
export JOLOKIA_JAR=$JOLOKIA_HOME/jolokia-jvm-1.3.7-agent.jar
export KAFKA_OPTS="-javaagent:$JOLOKIA_JAR=port=7778,host=* $KAFKA_OPTS"
When I start Kafka in non-daemon mode, it prints this:
I> No access restrictor found, access to any MBean is allowed
Jolokia: Agent started with URL http://10.8.36.121:7778/jolokia/
Then I point my browser to http://localhost:7778/jolokia/search/*:* and I get:
{
  "request": {
    "mbean": "*:*",
    "type": "search"
  },
  "value": [
    "kafka.network:name=ResponseQueueTimeMs,request=ListGroups,type=RequestMetrics",
    "kafka.server:delayedOperation=topic,name=PurgatorySize,type=DelayedOperationPurgatory",
    "kafka.server:delayedOperation=Fetch,name=NumDelayedOperations,type=DelayedOperationPurgatory",
    "kafka.network:name=RemoteTimeMs,request=Heartbeat,type=RequestMetrics",
    <-- SNIP -->
    "kafka.network:name=LocalTimeMs,request=Offsets,type=RequestMetrics"
  ],
  "timestamp": 1504188793,
  "status": 200
}
j4psh also connects with:
j4psh http://localhost:7778/jolokia
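Beyond search, individual MBeans can be read over plain HTTP with Jolokia's read operation. A minimal sketch, assuming the broker exposes the standard kafka.server metrics and the agent listens on port 7778:
curl http://localhost:7778/jolokia/read/kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec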

Add to KAFKA_OPTS:
-javaagent:/usr/share/java/kafka/jolokia-jvm-1.6.0-agent.jar -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=localhost -Dcom.sun.management.jmxremote.rmi.port=9999 -Djava.security.auth.login.config=/var/private/sasl_acl/kafka.server.jaas.config
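To verify the agent is actually listening before wiring up check_jmx4perl, you can hit Jolokia's version endpoint. A quick sketch, assuming the agent's default port 8778:
curl http://localhost:8778/jolokia/version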

Related

Cannot connect to Wildfly in Dockerfile

I'm creating a custom Dockerfile with extensions for official keycloak docker image. I want to change web-context and add some custom providers.
Here's my Dockerfile:
FROM jboss/keycloak:7.0.0
COPY startup-config.cli /opt/jboss/tools/cli/startup-config.cli
RUN /opt/jboss/keycloak/bin/jboss-cli.sh --connect --controller=localhost:9990 --file="/opt/jboss/tools/cli/startup-config.cli"
ENV KEYCLOAK_USER=admin
ENV KEYCLOAK_PASSWORD=admin
and startup-config.cli file:
/subsystem=keycloak-server/:write-attribute(name=web-context,value="keycloak/auth")
/subsystem=keycloak-server/:add(name=providers,value="module:module:x.y.z.some-custom-provider")
But unfortunately I receive this error:
The controller is not available at localhost:9990: java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed: Connection refused
The command '/bin/sh -c /opt/jboss/keycloak/bin/jboss-cli.sh --connect --controller=localhost:9990 --file="/opt/jboss/tools/cli/startup-config.cli"' returned a non-zero code: 1
Is it a matter of invalid localhost? How should I refer to the management API?
Edit: I also tried with ENTRYPOINT instead of RUN, but the same error occurred during container initialization.
You are trying to have WildFly load your custom config file at build time here. The trouble is that the WildFly server is not running while the Dockerfile is building.
WildFly actually already has you covered regarding automatically loading custom config; there is built-in support for what you want to do. You simply need to put your config file in a "magic location" inside the image.
You need to drop your config file here:
/opt/jboss/startup-scripts/
So that your Dockerfile looks like this:
FROM jboss/keycloak:7.0.0
COPY startup-config.cli /opt/jboss/startup-scripts/startup-config.cli
ENV KEYCLOAK_USER=admin
ENV KEYCLOAK_PASSWORD=admin
Excerpt from the keycloak documentation:
Adding custom script using Dockerfile
A custom script can be added by
creating your own Dockerfile:
FROM keycloak
COPY custom-scripts/ /opt/jboss/startup-scripts/
Now you can simply start the image, and the built-in feature in Keycloak (a WildFly feature, really) will look for a config in that specific directory and attempt to load it up.
Edit from comment with final solution:
While the original answer solved the issue with being able to pass configuration to the server at all, an issue remained with the content of the script. The following error was received when starting the container:
=========================================================================
Executing cli script: /opt/jboss/startup-scripts/startup-config.cli
No connection to the controller.
=========================================================================
The issue turned out to be in the startup-config.cli script, which was missing the JBoss command embed-server, needed to start an embedded server for the commands to run against. Also missing was the closing stop-embedded-server command. More about configuring JBoss in this manner in the docs here: CHAPTER 8. EMBEDDING A SERVER FOR OFFLINE CONFIGURATION
The final script:
embed-server --std-out=echo
/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheThemes,value=false)
/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheTemplates,value=false)
stop-embedded-server
WildFly management interfaces are not available when building the Docker image. Your only option is to start the CLI in embedded mode, as discussed in Running CLI commands in WildFly Dockerfile.
A more advanced approach is to use the S2I installation scripts to trigger CLI commands.
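For illustration, a minimal build-time Dockerfile sketch of the embedded-mode approach, assuming the CLI script wraps its commands in embed-server/stop-embedded-server as shown above (the /tmp path is arbitrary):
FROM jboss/keycloak:7.0.0
COPY startup-config.cli /tmp/startup-config.cli
RUN /opt/jboss/keycloak/bin/jboss-cli.sh --file=/tmp/startup-config.cli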

What is a simple, effective way to debug custom Kafka connectors?

I'm working on a couple of Kafka connectors, and I don't see any errors in their creation/deployment in the console output; however, I am not getting the results I'm looking for (no results whatsoever, for that matter, desired or otherwise). I made these connectors based on Kafka's example FileStream connectors, so my debug technique was based on the use of the SLF4J Logger that is used in the example. I've searched for the log messages that I thought would be produced in the console output, but to no avail. Am I looking in the wrong place for these messages? Or perhaps is there a better way of going about debugging these connectors?
Example uses of the SLF4J Logger that I referenced for my implementation:
Kafka FileStreamSinkTask
Kafka FileStreamSourceTask
I will try to reply to your question in a broad way. A simple way to do Connector development could be as follows:
Structure and build your connector source code by looking at one of the many Kafka Connectors available publicly (you'll find an extensive list available here: https://www.confluent.io/product/connectors/ )
Download the latest Confluent Open Source edition (>= 3.3.0) from https://www.confluent.io/download/
Make your connector package available to Kafka Connect in one of the following ways:
Store all your connector jar files (the connector jar plus dependency jars, excluding Connect API jars) in a location in your filesystem and enable plugin isolation by adding this location to the
plugin.path property in the Connect worker properties. For instance, if your connector jars are stored in /opt/connectors/my-first-connector, you will set plugin.path=/opt/connectors in your worker's properties (see below).
Store all your connector jar files in a folder under ${CONFLUENT_HOME}/share/java. For example: ${CONFLUENT_HOME}/share/java/kafka-connect-my-first-connector. (Needs to start with kafka-connect- prefix to be picked up by the startup scripts). $CONFLUENT_HOME is where you've installed Confluent Platform.
Optionally, increase your logging by changing the log level for Connect in ${CONFLUENT_HOME}/etc/kafka/connect-log4j.properties to DEBUG or even TRACE.
Use Confluent CLI to start all the services, including Kafka Connect. Details here: http://docs.confluent.io/current/connect/quickstart.html
Briefly: confluent start
Note: The Connect worker's properties file currently loaded by the CLI is ${CONFLUENT_HOME}/etc/schema-registry/connect-avro-distributed.properties. That's the file you should edit if you choose to enable classloading isolation but also if you need to change your Connect worker's properties.
Once you have Connect worker running, start your connector by running:
confluent load <connector_name> -d <connector_config.properties>
or
confluent load <connector_name> -d <connector_config.json>
The connector configuration can be in either Java properties or JSON format.
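For reference, a minimal connector configuration sketch in properties format (the name, class, and topic here are hypothetical placeholders):
name=my-first-connector
connector.class=com.example.MyFirstSourceConnector
tasks.max=1
topic=my-first-topic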
Run
confluent log connect to open the Connect worker's log file, or navigate directly to where your logs and data are stored by running
cd "$( confluent current )"
Note: change where your logs and data are stored during a session of the Confluent CLI by setting the environment variable CONFLUENT_CURRENT appropriately. E.g. given that /opt/confluent exists and is where you want to store your data, run:
export CONFLUENT_CURRENT=/opt/confluent
confluent current
Finally, to interactively debug your connector, a possible way is to apply the following before starting Connect with the Confluent CLI:
confluent stop connect
export CONNECT_DEBUG=y; export DEBUG_SUSPEND_FLAG=y;
confluent start connect
and then connect with your debugger (for instance, remotely to the Connect worker on its default port, 5005). To stop running Connect in debug mode, just run unset CONNECT_DEBUG; unset DEBUG_SUSPEND_FLAG; when you are done.
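For a quick attach test without a full IDE, the JDK's command-line debugger can connect to the same port. A minimal sketch, assuming the worker runs locally with the default debug port:
jdb -attach localhost:5005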
I hope the above will make your connector development easier and ... more fun!
I love the accepted answer. One thing: the environment variables didn't work for me... I'm using Confluent Community Edition 5.3.1...
Here's what I did that worked...
I installed the Confluent CLI from here:
https://docs.confluent.io/current/cli/installing.html#tarball-installation
I ran Confluent using the command confluent local start
I got the Connect app details using the command ps -ef | grep connect
I copied the resulting command to an editor and added the arg (right after java):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
Then I stopped Connect using the command confluent local stop connect
Then I ran the Connect command with the arg.
Brief intermission ---
VS Code development is led by Erich Gamma, of Gang of Four fame, who also worked on Eclipse. VS Code is becoming a first-class Java IDE; see https://en.wikipedia.org/wiki/Erich_Gamma
Intermission over ---
Next I launched VS Code and opened the Debezium Oracle connector folder (cloned from here): https://github.com/debezium/debezium-incubator
Then I chose Debug - Open Configurations
and entered the highlighted debugging configuration
and then ran the debugger - it will hit your breakpoints!!
The connect command should look something like this:
/Library/Java/JavaVirtualMachines/jdk1.8.0_221.jdk/Contents/Home/bin/java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 -Xms256M -Xmx2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/logs -Dlog4j.configuration=file:/Users/myuserid/confluent-5.3.1/bin/../etc/kafka/connect-log4j.properties -cp /Users/myuserid/confluent-5.3.1/share/java/kafka/*:/Users/myuserid/confluent-5.3.1/share/java/confluent-common/*:/Users/myuserid/confluent-5.3.1/share/java/kafka-serde-tools/*:/Users/myuserid/confluent-5.3.1/bin/../share/java/kafka/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/dependant-libs-2.12.8/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/libs/*:/usr/share/java/support-metrics-client/* org.apache.kafka.connect.cli.ConnectDistributed /var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/connect.properties
A connector module is executed by the Kafka Connect framework. For debugging, we can use standalone mode: we can configure the IDE to use the ConnectStandalone main function as the entry point.
Create a debug configuration as follows. Remember to tick "Include dependencies with 'Provided' scope" if it is a Maven project.
The connector properties file needs to specify the connector class name via "connector.class" for debugging.
The worker properties file can be copied from the Kafka folder, /usr/local/etc/kafka/connect-standalone.properties.
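As an alternative to launching from the IDE, the same two files can be passed to the stock standalone launcher. An illustrative sketch, assuming a Homebrew-style layout and a hypothetical connector config file name:
connect-standalone /usr/local/etc/kafka/connect-standalone.properties my-connector.properties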

CHECK_NRPE: Error - Could not complete SSL handshake (web)

I have a local Nagios Server and I'm trying to configure it to monitor my tomcat8 server with check_jvm, so I can control the memory and classes used by Java.
To do so I installed the check_nrpe plugin on the client and configured it, but I'm having an 'odd' error.
If I try to call the plugin on the client from my server, it answers correctly, even when using check_jvm commands as a parameter.
But when I configure it so Nagios does the check on its own, the web interface returns a "CHECK_NRPE: Error - Could not complete SSL handshake" for that service specifically.
This is what I have:
From my nagios server
# /usr/local/nagios/libexec/check_nrpe -H <client.ip>
NRPE v2.12
# /usr/local/nagios/libexec/check_nrpe -H <client.ip> -c tomcat_heap
OK 799998504 |max=2101870592;;; commited=2101870592;;; used=799998504;;;
Where tomcat_heap is the name of a command defined in nrpe.cfg at the client in order to use the check_jvm plugin.
command[tomcat_heap]=sudo /usr/local/nagios/libexec/check_jvm -n org.apache.catalina.startup.Bootstrap -p heap -w 1700000000 -c 2000000000
Now, back again on my Nagios server, this is the service definition
define service{
        use                     generic-service
        host_name               lin-des
        service_description     Tomcat heap
        check_command           check_nrpe!tomcat_heap
}
Now, this returns a 'CHECK_NRPE: Error - Could not complete SSL handshake' on the web app.
I've checked the allowed_hosts on the nrpe.cfg file, as well as on /etc/xinetd.d/nrpe, so it includes my Nagios server IP.
I've also checked the SELinux and iptables configuration.
I've also checked that both my Nagios server and the client share the same version of the SSL libraries.
Lastly, I've checked all the permissions on /usr/local/nagios/libexec on both the server and the client, so the user nagios has ownership of them.
At this point, I ran out of ideas, and that's why I'm asking you. Any ideas on where the problem may be?
Found it.
It seems when I defined the check_nrpe command in the command.cfg, I made a mistake on the command line.
define command{
        command_name    check_nrpe
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -p 5656 -t 30 -c $ARG1$
}
As you can see, I defined the command to work on port 5656, which isn't the port used by the NRPE service (it is actually 5666).
After fixing this error, everything runs properly.
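A quick way to confirm a port mismatch like this is to run the check manually from the Nagios server with an explicit port via check_nrpe's -p flag, for example:
/usr/local/nagios/libexec/check_nrpe -H <client.ip> -p 5666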
I hope this helps anyone with similar problems.

Unable to open debugger port in IntelliJ

Unable to open debugger port in IntelliJ.
The port number 9009 matches the one which has been set in the configuration file for the application.
<java-config debug-options="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=9009" system-classpath="" native-library-path-prefix="D:\Project\lib\windows\64bit" classpath-suffix="">
<jvm-options>-XX:MaxPermSize=192m</jvm-options>
<jvm-options>-client</jvm-options>
<jvm-options>-XX:+UnlockDiagnosticVMOptions</jvm-options>
<jvm-options>-XX:+LogVMOutput</jvm-options>
<jvm-options>-XX:LogFile=${com.sun.aas.instanceRoot}/logs/jvm.log</jvm-options>
<jvm-options>-Djava.endorsed.dirs=${com.sun.aas.installRoot}/modules/endorsed${path.separator}${com.sun.aas.installRoot}/lib/endorsed</jvm-options>
<jvm-options>-Djava.security.policy=${com.sun.aas.instanceRoot}/config/server.policy</jvm-options>
<jvm-options>-Djava.security.auth.login.config=${com.sun.aas.instanceRoot}/config/login.conf</jvm-options>
<jvm-options>-Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as</jvm-options>
<jvm-options>-Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks</jvm-options>
<jvm-options>-Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks</jvm-options>
<jvm-options>-Djava.ext.dirs=${com.sun.aas.javaRoot}/lib/ext${path.separator}${com.sun.aas.javaRoot}/jre/lib/ext${path.separator}${com.sun.aas.instanceRoot}/lib/ext</jvm-options>
<jvm-options>-Djdbc.drivers=org.apache.derby.jdbc.ClientDriver</jvm-options>
<jvm-options>-DANTLR_USE_DIRECT_CLASS_LOADING=true</jvm-options>
<jvm-options>-Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory</jvm-options>
<jvm-options>-Dosgi.shell.telnet.port=4766</jvm-options>
<jvm-options>-Dosgi.shell.telnet.maxconn=1</jvm-options>
<jvm-options>-Dosgi.shell.telnet.ip=127.0.0.1</jvm-options>
<jvm-options>-Dfelix.fileinstall.dir=${com.sun.aas.installRoot}/modules/autostart/</jvm-options>
<jvm-options>-Dfelix.fileinstall.poll=5000</jvm-options>
<jvm-options>-Dfelix.fileinstall.debug=1</jvm-options>
<jvm-options>-Dfelix.fileinstall.bundles.new.start=true</jvm-options>
<jvm-options>-Dorg.glassfish.web.rfc2109_cookie_names_enforced=false</jvm-options>
<jvm-options>-XX:NewRatio=2</jvm-options>
<jvm-options>-Xmx2048m</jvm-options>
</java-config>
Configuration in IntelliJ:
When I try to enable remote debugging for this application, it comes up with the following error:
You may have to change the debugger port if your port is already used by another program. To do so:
Run
Edit Configurations
Startup/Connection tab
Debug
Change the port here
Or, maybe in other versions:
Run
Edit Configurations
Remote > Remote debug in the list on the left
Configuration tab, Settings section
Port: change the port here
Add the following parameter debug-enabled="true" to this line in the glassfish configuration.
Example:
<java-config debug-options="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=9009" debug-enabled="true"
system-classpath="" native-library-path-prefix="D:\Project\lib\windows\64bit" classpath-suffix="">
Stop and start the GlassFish domain or service that was using this configuration.
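For example, assuming the default domain name, the restart can be done with the asadmin tool:
asadmin restart-domain domain1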
I had the same problem, and this solution also did the trick for me: provide the IP 127.0.0.1 in the IntelliJ debug configuration instead of the host name "localhost", in case you're using this hostname.
You must chmod +x (add execute permission to) the *.sh or *.bat files. For example, I am using macOS:
cd /Users/donhuvy/Documents/tools/apache-tomcat-9.0.12/bin
sudo chmod +x *.sh
Then IntelliJ IDEA and Apache Tomcat run or debug just fine.
In glassfish\domains\domain1\config\domain.xml, set this before starting the server:
<java-config classpath-suffix="" debug-options="-agentlib:jdwp=transport=dt_socket,address=9009,server=y,suspend=n" java-home="C:\Program Files\Java\jdk1.8.0_162" debug-enabled="true" system-classpath="">
or set debug-enabled="true" and server=y,suspend=n in the admin console at http://localhost:4848/common/index.jsf
In the current IntelliJ IDEA 2018: Server Run Configuration - Debug - Port - address.
I'm hoping your problem has been solved by now. If not, try this... It looks like you have server=y for both your app and IDEA. IDEA should probably be server=n. Also, the (IDEA) client should have an address that includes both the host name and the port, e.g., address=127.0.0.1:9009.
This one worked for me:
If the issue still persists (in case you are not using a GlassFish server), then close your IntelliJ IDEA and stop the server. This will drop the port connections. Then start your server and IntelliJ IDEA; this will establish fresh connections on the ports, resolving the issue.
For me, the problem was that catalina.sh didn't have execute permissions. The "Unable to open debugger port in IntelliJ" message appeared in IntelliJ, but it sort of masked the 'could not execute catalina.sh' error that appeared in the logs immediately prior.
This error can happen if Tomcat is already running, so make sure Tomcat isn't running in the background if you've asked IntelliJ to start it up (the default).
Also, check the full output window for more errors, as a more useful error may have preceded this one (as was the case with my configuration just now).
The answer is pretty simple.
I also faced the problem; finally I got a working solution:
Create a Debug configuration.
Create a Remote debug configuration with the following settings.
First, run the Debug configuration.
It gives you "waiting for socket 5005".
Then run the Remote debug configuration.
Try to connect with telnet; if it connects, then it shows the below:
$telnet 10.238.136.165 9999
Trying 10.238.136.165...
Connected to 10.238.136.165.
Escape character is '^]'.
Connection closed by foreign host.
If the port is not available (either because someone else is already connected to it, or the port is not open, etc.), then it shows something like the below:
$telnet 10.238.136.165 9999
Trying 10.238.136.165...
telnet: connect to address 10.238.136.165: Connection refused
telnet: Unable to connect to remote host
So I think one needs to see whether:
the application is properly listening on the port or not
or someone else has already connected to it
Also try to connect on that machine itself first, like:
$telnet localhost 9999
Set MAVEN_OPTS. It should work!
export MAVEN_OPTS="-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n"
mvn spring-boot:run -Dserver.port=8090
Run your Spring Boot application with the given command to enable debugging on port 6006 while the server is up on port 8090:
mvn spring-boot:run -Drun.jvmArguments='-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=6006' -Dserver.port=8090
Your service/application might already be running. In the system search, enter Services and you will get the list of services. Stop yours and then try again.
I had the same issue; I just had to remove the HTTP protocol from the URL. That's it.
I hope it works for you.
I once had this problem too.
My solution was to work around it by killing the application that is using the port.
Here is an article that teaches how to check which application is using which port, find it, and kill/close it.
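A minimal sketch of that check on Linux/macOS, assuming lsof is available and 9009 is the debug port (replace <PID> with the process ID that lsof reports):
sudo lsof -i :9009
kill <PID>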
In my case, I was not setting the debug port while starting the application.
I am using Tomcat to deploy 3 WAR files, and I forgot to configure the debug port.
Tomcat allows us to configure this via setenv.sh.
Here are the commands to create the setenv.sh file in the bin directory of my Tomcat installation and provide the debug arguments/port. The here-doc delimiter is quoted so that $CATALINA_OPTS is written literally into the file rather than expanded when it is created:
tee /usr/share/tomcat9/bin/setenv.sh << 'EOF'
export CATALINA_OPTS="$CATALINA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
EOF
service tomcat9 restart
Merely hitting the debug icon again fixed my problem in a few seconds.
Make sure to specify an SDK and Project SDK for your app under File --> Project Structure (Project | SDKs)

Running Single instance AppScale in a virtual machine

I was trying to run AppScale on a single-instance node installed in a VMware box, running the appscale-tools in the same server virtual machine, and got this error:
root@appscale:~/appscale-tools/bin# ./appscale-run-instances --ips ips.yaml
About to start AppScale over a non-cloud environment.
Head node successfully created at 127.0.0.1. It is now starting up cassandra via the command line arguments given.
Generating certificate and private key
Starting server at 127.0.0.1
Please wait for the controller to finish pre-processing tasks.
Warning: Permanently added '127.0.0.1' (RSA) to the list of known hosts.
Error: Couldn't find me in the node map
The solution I was advised to try was to change the code in this source file:
appscale/AppController/lib/helperfunctions.rb
Look for self.local_ip() and change it to:
def self.local_ip()
return "127.0.0.1"
end
But when I run
./appscale-run-instances --ips ips.yaml
I am not sure why, but it just keeps on saying
"Please wait for the controller to finish pre-processing tasks." for several minutes.
So I decided to terminate it, and here is what I got:
"...common_functions.rb:399:in `sleep_until_port_is_open'"
In this case, it seems I need to open a port. I am running AppScale from within Ubuntu; what port should I open on my server?
Here's the complete command line:
./appscale-run-instances --ips ips.yaml
About to start AppScale over a non-cloud environment.
Head node successfully created at 127.0.0.1.
It is now starting up cassandra via the command line arguments given.
Generating certificate and private key.
Starting server at 127.0.0.1
Please wait for the controller to finish pre-processing tasks.
^C./../lib/../lib/common_functions.rb:399:in `sleep': Interrupt
from ./../lib/../lib/common_functions.rb:399:in `sleep_until_port_is_open'
from ./../lib/../lib/common_functions.rb:397:in `loop'
from ./../lib/../lib/common_functions.rb:397:in `sleep_until_port_is_open'
from ./../lib/../lib/common_functions.rb:1359:in `start_appcontroller'
from /usr/lib/ruby/1.8/timeout.rb:62:in `timeout'
from ./../lib/../lib/common_functions.rb:1351:in `start_appcontroller'
from ./../lib/../lib/common_functions.rb:548:in `start_head_node'
from ./../lib/appscale_tools.rb:284:in `run_instances'
from ./appscale-run-instances:14
Using the pre-built AppScale images here, or following the steps listed here https://github.com/AppScale/appscale/wiki/Virtualized-Cluster, will solve this problem.
It also has the build script:
$ sudo su
# cd /root
# wget -O - http://bootstrap.appscale.com | sh
