We are running an Apache Storm LocalCluster as a standalone Java process (started via nohup).
We are running a simple topology with the following configuration:
Config config = new Config();
config.setMessageTimeoutSecs(120);
config.setNumWorkers(1);
config.setDebug(false);
config.setMaxSpoutPending(1);
We are submitting the topology to the LocalCluster. Our shutdown hook is the standard one found in most examples:
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        cluster.killTopology(TOPOLOGY_NAME);
        cluster.shutdown();
    }
});
Lately we were facing Java heap issues, which we may have solved by increasing -Xms and -Xmx and switching to the mark-sweep garbage collector.
However, we are now running into a new problem: after some time, the spout's logs stop being written to, with no trace of any Storm-related Exception or Error.
The main problem is that the Java process started via nohup still shows up in ps -ef. What could be going wrong?
You can try enabling debug logging with config.setDebug(true), which might help you see what is happening.
Also next time your topology hangs, you should be able to tell what it's doing by either using jstack or sending the Java process a SIGQUIT (kill -3). This will cause the process to dump stack traces for each thread in the JVM, which should let you figure out why it's hanging.
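For example, assuming $PID holds the Java process id (with nohup, the kill -3 thread dump goes to the process's stdout, i.e. nohup.out):

jstack $PID > threaddump.txt
# or, equivalently
kill -3 $PID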
As an aside in case you're doing it, please don't use LocalCluster in production. It's intended for testing.
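For a real cluster you would keep the same topology code and submit it with StormSubmitter instead. A minimal sketch (package names are for Storm 1.x; builder is the TopologyBuilder you already construct):

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;

Config config = new Config();
config.setNumWorkers(1);
// throws AlreadyAliveException/InvalidTopologyException, which must be handled
StormSubmitter.submitTopology(TOPOLOGY_NAME, config, builder.createTopology());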
Related
I am able to run an Apache Beam job successfully using the DirectRunner, with the following arguments:
java -jar my-jar.jar --commonConfigFile=comJobConfig.yml
--configFile=relJobConfig.yml
--jobName=my-job
--stagingLocation=gs://my-bucket/staging/
--gcpTempLocation=gs://my-bucket/tmp/
--tempLocation=gs://my-bucket/tmp/
--runner=DirectRunner
--bucket=my-bucket
--project=my-project
--region=us-west1
--subnetwork=my-subnetwork
--serviceAccount=my-svc-account@my-project.iam.gserviceaccount.com
--usePublicIps=false
--workerMachineType=e2-standard-2
--maxNumWorkers=20 --numWorkers=2
--autoscalingAlgorithm=THROUGHPUT_BASED
However, while trying to run on Google Dataflow (simply changing --runner=DataflowRunner) I receive the following message (GetWork timed out, retrying) in the workers.
I have checked the logs generated by the Dataflow process and found
[2023-01-28 20:49:41,600] [main] INFO org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler:91 2023-01-28T20:49:39.386Z: Autoscaling: Raised the number of workers to 2 so that the pipeline can catch up with its backlog and keep up with its input rate.
[2023-01-28 20:50:26,911] [main] INFO org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler:91 2023-01-28T20:50:26.779Z: Workers have started successfully.
and I see no indication that the workers have failed. Moreover I do not see any relevant logs which indicate that the process is working (in my case, reading from the appropriate Pub/Sub topic for notifications). Let me know if there is any further documentation on this log, as I have not been able to find any.
Turns out I forgot to include the --enableStreamingEngine flag. This solved my problem.
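In other words, the working run was the same command as above with the runner switched and the flag added:

java -jar my-jar.jar --commonConfigFile=comJobConfig.yml \
  ... \
  --runner=DataflowRunner \
  --enableStreamingEngine

(all other flags unchanged)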
Does anybody know a way to let Gradle run the application in debug mode but not wait until the debugger attaches? I know it is a nice feature to have the debugger attached when the application starts, but my Google research was not fruitful.
The command I execute to start the application in debug mode:
./gradlew appRunDebug
(which is equivalent to ./gradlew appRun --debug-jvm)
What I see then on the console after several seconds:
Listening for transport dt_socket at address: 5005
<============-> 97% EXECUTING [1m 2s]
At this point I have to attach my debugger to the process so that the startup routine continues. But I just want the debug port open and the application fully running, without attaching my debugger.
How can this be achieved? Thanks for any help, even confirmation that this is not possible.
According to https://akhikhl.github.io/gretty-doc/Debugger-support.html, from Gretty 1.1.8 onwards, you should be able to set the debugSuspend property to false, and appRunDebug won't start the application in suspended mode.
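In build.gradle that would look something like this (a sketch based on the linked Gretty docs; 5005 is Gretty's default debug port):

gretty {
    debugPort = 5005
    debugSuspend = false  // open the debug port but do not wait for a debugger
}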
I'm trying to run JMeter tests directly from my Java code. The load should be produced via a JMeter server. In the code, I use a ClientJMeterEngine instance that connects to the JMeter daemon running on the server. This works so far; I can start and run my test. What I currently don't understand is how to get the results from the test run after it has completed. My current code looks like this:
import java.io.File;

import org.apache.jmeter.engine.ClientJMeterEngine;
import org.apache.jmeter.save.SaveService;
import org.apache.jorphan.collections.HashTree;

// connect to the JMeter daemon on the server
ClientJMeterEngine jmeter = new ClientJMeterEngine("myHost:4000");
// load the test plan
File file = new File("C:/myTest.jmx");
HashTree testPlanTree = SaveService.loadTree(file);
// configure and run the test
jmeter.configure(testPlanTree);
jmeter.runTest();
If I run the test using the StandardJMeterEngine object, I use a ResultCollector to get the results from my test runs. This works as expected. When I try to run the tests using the described server-based approach, I get the following exception on the machine where the server is running:
2016/03/15 09:24:35 ERROR - jmeter.samplers.BatchSampleSender: testEnded(host) java.rmi.ConnectException: Connection refused to host: myHost; nested exception is:
java.net.ConnectException: Connection refused
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:631)
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:228)
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:214)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142)
at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:238)
at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:190)
at com.sun.proxy.$Proxy1.processBatch(Unknown Source)
The server thus wants to call back to the client from which I start the test run. It fails because I don't have anything listening for the remote call. I've worked my way through the JMeter source, but haven't found an answer as to what I need to do on my Java side to make this work.
EDIT
In the meantime I've made some progress. What I've found is the following:
The client thread must be kept running so that the ResultCollector on the client can receive the results from the server. I now do this by "busy waiting" in my client Java code.
The issue now is knowing when the test execution on the server is finished, so I can also finish the client thread. The ResultCollector is able to recognize that the test is finished, but exposes no public method for this.
The solution would be to extend ResultCollector and override the testEnded method so that it sets a public boolean flag "testFinished" to true.
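A minimal sketch of that workaround (the class name and flag are illustrative):

import org.apache.jmeter.reporters.ResultCollector;

public class FinishAwareResultCollector extends ResultCollector {
    // polled by the client's busy-wait loop
    public volatile boolean testFinished = false;

    @Override
    public void testEnded(String host) {
        super.testEnded(host);
        testFinished = true;
    }
}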
Though this feels a little bit "hacky". Does anybody have a better solution here?
I had the same problem and tried your solution of overriding the testEnded() method of the ResultCollector class. My test was waiting endlessly for a response from the remote JMeter slave server.
The actual problem is that the remote server (the slave) successfully replies to the master's requests at an early stage, but the connection drops at some point and the test finishes without ever receiving a report from the slave. The slave cannot open a new TCP connection to send its results when its report is ready, because its hostname (IP) was never set/sent to the master correctly.
Therefore, the solution is to start the slave with the correct hostname parameter, either on the command line or via a properties file.
I started my slave like this:
c:\JMeter_new\apache-jmeter-3.1\bin>jmeter-server -Djava.rmi.server.hostname=IP
where IP is the slave's IP address.
P.S. If you run tests from the GUI, start the master with the same parameter as well:
D:\JMeter_new\apache-jmeter-3.1\bin>jmeterw -Djava.rmi.server.hostname=IP_of_master
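Note that java.rmi.server.hostname is a JVM system property, not a regular JMeter property, so if you prefer the file-based route it belongs in bin/system.properties (which JMeter loads into the JVM at startup) rather than jmeter.properties:

# bin/system.properties on the slave
java.rmi.server.hostname=IP_of_slave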
I have a little problem figuring this out the right way.
I have a Java application reading a log file continuously using threads. While that application is still reading the log file, a client should be able to query the current status (i.e. a certain key was found in the log file) through a java servlet.
My current issue is that I am having problems getting that status from the doGet method of the servlet. While running, the thread is supposed to update a single boolean variable.
My question is:
How do I get the log reader thread to start running when I deploy the servlet on my Tomcat? When idle, the log reader listens for new files in a folder and starts reading them once they appear.
You can use a ServletContextListener and start your logger thread in contextInitialized (the class name below is illustrative):
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// register this class in web.xml (<listener>) or annotate it with @WebListener
public class LogReaderContextListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent servletContextEvent) {
        System.out.println("ServletContextListener started");
        // start the log reader thread here
    }

    public void contextDestroyed(ServletContextEvent servletContextEvent) {
        // stop the thread here
    }
}
My current issue is that I am having problems getting that status from the doGet method of the servlet. While running, the thread is supposed to update a single boolean variable.
That's probably caused by a concurrent update of a non-thread-safe boolean value. For more details on this topic you can read a tutorial on Java concurrency.
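A minimal sketch of a flag that is safe to share between the reader thread and doGet() (class and field names are illustrative):

import java.util.concurrent.atomic.AtomicBoolean;

public class LogStatus {
    // writes by the reader thread are immediately visible to doGet();
    // a plain boolean field gives no such visibility guarantee
    public static final AtomicBoolean keyFound = new AtomicBoolean(false);
}

// reader thread:  LogStatus.keyFound.set(true);
// doGet():        boolean found = LogStatus.keyFound.get();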
How do I get the log reader thread to start running when I deploy the servlet on my Tomcat? When idle, the log reader listens for new files in a folder and starts reading them once they appear.
You can start threads from a ServletContextListener using Executors, which are high-level abstractions over threads.
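A sketch of that approach, where LogReaderTask stands in for your folder-watching Runnable and @WebListener (Servlet 3.0+) replaces web.xml registration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class LogReaderListener implements ServletContextListener {

    private ExecutorService executor;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        executor = Executors.newSingleThreadExecutor();
        executor.submit(new LogReaderTask()); // your Runnable that watches the folder
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        executor.shutdownNow(); // interrupt the reader thread on undeploy
    }
}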
Hope this helps...
I am writing a Quartz application that runs on Windows and calls Lucene and Solr to run indexing jobs. It runs a sequence of jobs, and each job consists of these steps:
Make sure Tomcat is stopped (Solr running under Tomcat prevents index dir from being deleted or copied)
Delete old index directory if necessary
Start Tomcat
Make sure Tomcat and Solr app are running
Run the indexing job
Stop Tomcat
Make sure Tomcat is stopped
Copy index directory to an archive
I decided to have the code that starts and stops Tomcat set the system properties that are set in Startup.bat, Shutdown.bat and Catalina.bat, and just call Bootstrap.main with "start" and "stop" parameters. This worked for one iteration, but not when I tried a Quartz run in which I set up two iterations.
When my code shut down Tomcat at the end of the first iteration, all of the usual messages were displayed, including
INFO: Stopping ProtocolHandler ["http-bio-5918"]
(I am using port 5918) but when it tried to start Tomcat at the beginning of the second iteration, it got these errors:
SEVERE: Failed to initialize end point associated with ProtocolHandler ["http-bio-5918"]
java.net.BindException: Address already in use: JVM_Bind <null>:5918
and
SEVERE: Failed to initialize connector [Connector[HTTP/1.1-5918]]
org.apache.catalina.LifecycleException: Failed to initialize component [Connector[HTTP/1.1-5918]]
I ran netstat -an in a command prompt window, and it confirmed that port 5918 was in use. There's nothing special about the code I am using to check if Tomcat is running; I've seen it in various places on the internet.
import java.io.IOException;
import java.net.URL;

public boolean isTomcatRunning(String url) {
    try {
        // attempt an HTTP connection; if it succeeds, Tomcat is accepting connections
        new URL(url).openConnection().connect();
        return true;
    } catch (IOException e) {
        return false;
    }
}
but it apparently tells me that Tomcat is not running when it is.
As I said, I am starting and stopping Tomcat by calling Bootstrap.main(new String[]{"start"}) and Bootstrap.main(new String[]{"stop"}). The only peculiar thing is that when I simply call Bootstrap.main(new String[]{"start"}), it doesn't seem to return (I haven't waited long enough yet to tell whether it is hanging or just taking a long time), so I have been running it inside a thread.
Maybe that is causing the problem, as it looks like Catalina.bat isn't doing anything special and it returns from startup just fine. I wonder if there is additional setup I need to do so that startup can run in the main thread without hanging.
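For reference, the threaded start described above looks roughly like this (a sketch; Bootstrap.main with "start" blocks because Catalina awaits on its shutdown port, while "stop" just connects to that port and returns):

import org.apache.catalina.startup.Bootstrap;

// run the blocking "start" call off the main thread
Thread tomcatThread = new Thread(new Runnable() {
    public void run() {
        Bootstrap.main(new String[] { "start" });
    }
});
tomcatThread.setDaemon(true);
tomcatThread.start();

// ... later ...
Bootstrap.main(new String[] { "stop" });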
In any case, this is what I am puzzled about with starting and stopping Tomcat from within my Quartz application, and I would appreciate any help and suggestions you can offer.
I strongly suggest that you wrap your Tomcat instance with a wrapper that controls its lifecycle, such as the Java Service Wrapper. An older version (3.2.3, I believe) is "free" and works fine with newer Tomcat instances.
Your controlling application then "talks" to the wrapper to start/stop the Tomcat application. There are multiple benefits to this approach; one of them is that you are no longer exposed to your Tomcat application hanging and the port you are testing failing to reply.