How to find the WebSphere 7 server start/stop history - java

How can I find the history of WebSphere 7 server starts and stops, either from the command line or from the logs?

The startServer.log and stopServer.log files contain this information. The startup can also be seen in the SystemOut.log file. Finally, if verbose GC logging is enabled, the native_stderr.log file will have entries for each JVM restart as well.
These can all be found in {WebSphere Install Dir}/profiles/{Profile Name}/logs/{server name}.
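To scan that history from the command line, something like the following should work; the profile path and the WSVR message ids are typical WebSphere 7 defaults, so verify them against your own installation:

    # assumed default install/profile paths; adjust for your environment
    LOGS=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1

    # start events: WSVR0001I "Server ... open for e-business"
    grep WSVR0001I $LOGS/SystemOut.log

    # stop events: WSVR0024I "Server ... stopped"
    grep WSVR0024I $LOGS/SystemOut.log

    # each startServer/stopServer invocation is also timestamped here
    ls -l $LOGS/startServer.log $LOGS/stopServer.log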

Related

How to troubleshoot Microsoft Azure WebApp on java? How to access logs?

I have a Java application (standard Spring Boot from the default tutorial: https://spring.io/guides/gs/spring-boot-for-azure/ ) that I "successfully" deploy to my WebApp (created during deployment) via the Eclipse/Maven plugin azure-webapp:deploy.
Once deployed, the files are inside the WebApp; I can see them. If start-up is successful I get a running application, but if it is not, I do not know how to troubleshoot: I don't know where to find the error logs, what caused the problem, and consequently how to solve it.
As an example of how to make it fail, add this line:
throw new RuntimeException("Doomed to fail");
I tried enabling logs from "diagnostic logs tab" and expected to see them under LogFiles/Applications but that folder remains empty.
How do I troubleshoot java-application that fails to start in WebApps of Azure?
edit: additional example of Exception to troubleshoot:
public static void main(String[] args) {
    throw new RuntimeException("start failure #21");
    // SpringApplication.run(Application.class, args);
}
It sounds like you followed the Spring Boot tutorial Deploying a Spring Boot app to Azure to build the GitHub project microsoft/gs-spring-boot and deploy it to Azure, but it does not work.
Here are my steps. I also followed the tutorial, but deployed in my own way.
I created a directory SpringBoot on my local machine, ran cd SpringBoot, and then git clone https://github.com/microsoft/gs-spring-boot.
Then I built it with the commands cd gs-spring-boot/complete and mvnw clean package.
Note: I reviewed the sections of the tutorial under Create a sample Spring Boot web app, which seem to be written for Linux, yet the web.config file in microsoft/gs-spring-boot/complete is meant for an Azure WebApp on Windows. However, there are no comments describing whether the deployment target should be an Azure WebApp for Windows or for Linux.
So I used my existing WebApp for Windows to test my deployment. I opened my Kudu console in the browser via the URL https://<webapp name>.scm.azurewebsites.net/DebugConsole and dragged the files complete/web.config and complete/target/gs-spring-boot-0.1.0.jar to site/wwwroot. Then I started my webapp, and it works fine.
Note: Please check the JAVA_HOME environment variable that has been configured on Azure, via the Kudu console command echo %JAVA_HOME%.
If it is not set, you need to set the Java runtime in the Application settings tab of the Azure portal.
Or you can configure the web.config file to replace the reference to %JAVA_HOME% with an existing Java runtime installed under the path D:\Program Files\Java of the Azure WebApp, as below.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="httpPlatformHandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified"/>
    </handlers>
    <!-- <httpPlatform processPath="%JAVA_HOME%\bin\java.exe" -->
    <httpPlatform processPath="D:\Program Files\Java\jre1.8.0_181\bin\java.exe"
                  arguments="-Djava.net.preferIPv4Stack=true -Dserver.port=%HTTP_PLATFORM_PORT% -jar &quot;%HOME%\site\wwwroot\gs-spring-boot-0.1.0.jar&quot;">
    </httpPlatform>
  </system.webServer>
</configuration>
I didn't manage to find the logs on a Windows-based machine, but if you enable logs on a Linux-based machine you will see them in the "Diagnostic logs" output. There is a catch, though.
There is a 230-second timeout. It waits the full timeout before producing anything in the log file, after which you can access the logs via the log file or through "Diagnostic logs". Make sure to enable logging before you start the application. This applies to Linux-based machines; I don't know whether it also applies to Windows-based machines.
Then it waits for an answer trigger. The answer trigger, as it turns out, is a phrase in the console output: "Application is started in X seconds". I increased the timeout to 500 seconds, because although the app starts in 60 seconds on my local machine, it takes 430 seconds on the remote Linux-based machine of Microsoft Azure.*
Second, I changed the name of my main class from "GameStart" to "Application", and after that it actually caught the trigger and the application started. Nowhere in the manuals did I find the "until timeout - no logs" and "trigger phrase" behaviour mentioned.
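If you prefer the command line, a minimal sketch of enabling and streaming those logs with the Azure CLI follows; the app and resource-group names are placeholders, and flag spellings can vary between CLI versions:

    # enable filesystem application logging
    az webapp log config --name <webapp-name> --resource-group <resource-group> --application-logging filesystem --level information

    # stream the log output live while the app starts
    az webapp log tail --name <webapp-name> --resource-group <resource-group>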
PS: For reference, it took me 20 minutes to upload the application and 5 minutes to start it. I was using a Central US server, because Central EU did not work out for me (it was even slower), although I'm in central EU myself.
-
*Using a test account. On a paid account the numbers might be different or similar.

Tomcat catalina.out is 40GB

I wonder why my Spring project with a Tomcat server produced a catalina.out file with a size of 40 GB. Any solutions, please?
catalina.out reaches such a large size because:
1. there might be many logging messages sent to the console handler, and
2. there is no rotation of catalina.out (and no policy to remove older files).
First, since the messages in catalina.out may be duplicated in the other log files, I'd check whether the contents of the log files (catalina.[DATE].log) are the same as those of catalina.out; if so, you can edit conf/logging.properties and remove the console handler.
I'd also check the level of the log messages and set a higher level if possible. Look for this line in conf/logging.properties:
java.util.logging.ConsoleHandler.level = ....
The possible levels, in increasing order of verbosity, are SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST and ALL. I'd try replacing ALL, FINEST, FINER or FINE with CONFIG or even INFO. For instance, by setting it to INFO, all SEVERE, WARNING and INFO messages will be logged, but none with a level further to the right in that list.
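For instance, a trimmed conf/logging.properties along these lines raises the console threshold to INFO; the handler class names follow the stock Tomcat file, but check them against your own copy:

    # trimmed sketch of conf/logging.properties; other handlers omitted
    handlers = 1catalina.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler

    # only INFO and above reaches catalina.out; FINE/FINER/FINEST are dropped
    java.util.logging.ConsoleHandler.level = INFO
    java.util.logging.ConsoleHandler.formatter = org.apache.juli.OneLineFormatter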
Another option is to set a limit on the console handler by adding this line to conf/logging.properties:
java.util.logging.ConsoleHandler.limit = 1024000
and to rotate catalina.out by configuring an automatic task to remove older files.
If you are a Linux user, handling this at the system level is pretty easy: you can configure log rotation with logrotate.
Step 1: Create the logrotate file
root#c2dapp01-usea1e# vim /etc/logrotate.d/tomcat
Step 2: Add the rotation instructions for the Linux log rotator
/opt/tomcat/latest/logs/catalina.out {
copytruncate
daily
rotate 7
compress
missingok
size 100M
}
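Before wiring up cron, you can test the new rule; logrotate's -d flag does a dry run that only prints the planned actions, and -f forces an immediate rotation:

    logrotate -d /etc/logrotate.d/tomcat
    logrotate -f /etc/logrotate.d/tomcat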
Step 3: Add a cron job to run daily in cron.daily, or create a custom cron entry (this file is usually already there; create it only if it is missing)
root#c2dapp01-usea1e:# vim /etc/cron.daily/logrotate
# Clean non existent log file entries from status file
cd /var/lib/logrotate
test -e status || touch status
head -1 status > status.clean
sed 's/"//g' status | while read logfile date
do
[ -e "$logfile" ] && echo "\"$logfile\" $date"
done >> status.clean
mv status.clean status
test -x /usr/sbin/logrotate || exit 0
/usr/sbin/logrotate /etc/logrotate.conf
This script can be run manually.
/usr/sbin/logrotate /etc/logrotate.conf
Most probably the huge size of the logs is due to the DEBUG level setting of log4j. Change this setting to WARN and the amount of logging will be reduced.
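With log4j 1.x that is typically a one-line change in log4j.properties; the appender name here is illustrative:

    # was: log4j.rootLogger=DEBUG, file
    log4j.rootLogger=WARN, file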
It is quite common to see the catalina.out file expanding.
The command below can be used to clear catalina.out without stopping Tomcat. Note that a plain sudo cat /dev/null > file does not work as intended, because the redirection is performed by your unprivileged shell rather than by sudo; truncating the file as root avoids that:
sudo truncate -s 0 /opt/tomcat/apache-tomcat-9.0.37/logs/catalina.out
To keep catalina.out smaller in size, either of the following approaches can be used (a crontab sketch follows the list):
1. Use the Linux command above and add it to a cron schedule to run daily, weekly, monthly or yearly as preferred.
2. Use the logrotate tool in Linux. It is a log-management command-line tool that can rotate log files under different conditions; in particular, on a fixed schedule or once a file has grown to a certain size.
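A minimal crontab sketch for the first approach; the schedule and the Tomcat path are assumptions to adapt:

    # root crontab entry: empty catalina.out every Sunday at 03:00
    0 3 * * 0 truncate -s 0 /opt/tomcat/apache-tomcat-9.0.37/logs/catalina.out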

What is a simple, effective way to debug custom Kafka connectors?

I'm working on a couple of Kafka connectors, and I don't see any errors in their creation/deployment in the console output; however, I am not getting the results I'm looking for (no results whatsoever, desired or otherwise). I based these connectors on Kafka's example FileStream connectors, so my debugging technique was based on the SLF4J Logger used in the example. I've searched for the log messages that I thought would be produced in the console output, but to no avail. Am I looking in the wrong place for these messages? Or is there perhaps a better way to go about debugging these connectors?
Example uses of the SLF4J Logger that I referenced for my implementation:
Kafka FileStreamSinkTask
Kafka FileStreamSourceTask
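For reference, a minimal sketch of how those examples use the logger inside a custom sink task; the class name is hypothetical, and note that messages below the Connect worker's configured log level (INFO by default) will not show up in the output:

    import java.util.Collection;
    import java.util.Map;

    import org.apache.kafka.connect.sink.SinkRecord;
    import org.apache.kafka.connect.sink.SinkTask;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // hypothetical task, modeled on Kafka's FileStreamSinkTask
    public class MySinkTask extends SinkTask {
        private static final Logger log = LoggerFactory.getLogger(MySinkTask.class);

        @Override
        public void start(Map<String, String> props) {
            log.info("Starting MySinkTask with props {}", props); // visible at INFO
        }

        @Override
        public void put(Collection<SinkRecord> records) {
            log.debug("Received {} records", records.size()); // hidden unless level is DEBUG
        }

        @Override
        public void stop() {
            log.info("Stopping MySinkTask");
        }

        @Override
        public String version() {
            return "0.1.0";
        }
    }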
I will try to reply to your question in a broad way. A simple way to do Connector development could be as follows:
Structure and build your connector source code by looking at one of the many Kafka Connectors available publicly (you'll find an extensive list available here: https://www.confluent.io/product/connectors/ )
Download the latest Confluent Open Source edition (>= 3.3.0) from https://www.confluent.io/download/
Make your connector package available to Kafka Connect in one of the following ways:
Store all your connector jar files (connector jar plus dependency jars excluding Connect API jars) to a location in your filesystem and enable plugin isolation by adding this location to the
plugin.path property in the Connect worker properties. For instance, if your connector jars are stored in /opt/connectors/my-first-connector, you will set plugin.path=/opt/connectors in your worker's properties (see below).
Store all your connector jar files in a folder under ${CONFLUENT_HOME}/share/java. For example: ${CONFLUENT_HOME}/share/java/kafka-connect-my-first-connector. (Needs to start with kafka-connect- prefix to be picked up by the startup scripts). $CONFLUENT_HOME is where you've installed Confluent Platform.
Optionally, increase your logging by changing the log level for Connect in ${CONFLUENT_HOME}/etc/kafka/connect-log4j.properties to DEBUG or even TRACE.
Use Confluent CLI to start all the services, including Kafka Connect. Details here: http://docs.confluent.io/current/connect/quickstart.html
Briefly: confluent start
Note: The Connect worker's properties file currently loaded by the CLI is ${CONFLUENT_HOME}/etc/schema-registry/connect-avro-distributed.properties. That's the file you should edit if you choose to enable classloading isolation but also if you need to change your Connect worker's properties.
Once you have Connect worker running, start your connector by running:
confluent load <connector_name> -d <connector_config.properties>
or
confluent load <connector_name> -d <connector_config.json>
The connector configuration can be either in java properties or JSON format.
Run
confluent log connect to open the Connect worker's log file, or navigate directly to where your logs and data are stored by running
cd "$( confluent current )"
Note: change where your logs and data are stored during a session of the Confluent CLI by setting the environment variable CONFLUENT_CURRENT appropriately. E.g. given that /opt/confluent exists and is where you want to store your data, run:
export CONFLUENT_CURRENT=/opt/confluent
confluent current
Finally, to interactively debug your connector a possible way is to apply the following before starting Connect with Confluent CLI :
confluent stop connect
export CONNECT_DEBUG=y; export DEBUG_SUSPEND_FLAG=y;
confluent start connect
and then connect with your debugger (for instance, remotely to the Connect worker on the default port 5005). To stop running Connect in debug mode, just run unset CONNECT_DEBUG; unset DEBUG_SUSPEND_FLAG; when you are done.
I hope the above will make your connector development easier and ... more fun!
I love the accepted answer. One thing: the environment variables didn't work for me... I'm using Confluent Community Edition 5.3.1...
Here's what I did that worked...
I installed the Confluent CLI from here:
https://docs.confluent.io/current/cli/installing.html#tarball-installation
I ran Confluent using the command confluent local start
I got the Connect app details using the command ps -ef | grep connect
I copied the resulting command to an editor and added the arg (right after java):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
Then I stopped Connect using the command confluent local stop connect
Then I ran the Connect command with the arg
Brief intermission ---
VS Code development is led by Erich Gamma, of Gang of Four fame, who also led the development of Eclipse. VS Code is becoming a first-class Java IDE; see https://en.wikipedia.org/wiki/Erich_Gamma
Intermission over ---
Next I launched VS Code and opened the Debezium Oracle connector folder (cloned from https://github.com/debezium/debezium-incubator ).
Then I chose Debug - Open Configurations,
entered the debugging configuration (a sketch follows below),
and then ran the debugger - it hits your breakpoints!!
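The original answer showed the configuration as a screenshot; a launch.json entry along these lines should be equivalent (the name is arbitrary, and the port must match the -agentlib address above):

    {
        "version": "0.2.0",
        "configurations": [
            {
                "type": "java",
                "name": "Attach to Kafka Connect",
                "request": "attach",
                "hostName": "localhost",
                "port": 5005
            }
        ]
    }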
The Connect command should look something like this:
/Library/Java/JavaVirtualMachines/jdk1.8.0_221.jdk/Contents/Home/bin/java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 -Xms256M -Xmx2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/logs -Dlog4j.configuration=file:/Users/myuserid/confluent-5.3.1/bin/../etc/kafka/connect-log4j.properties -cp /Users/myuserid/confluent-5.3.1/share/java/kafka/*:/Users/myuserid/confluent-5.3.1/share/java/confluent-common/*:/Users/myuserid/confluent-5.3.1/share/java/kafka-serde-tools/*:/Users/myuserid/confluent-5.3.1/bin/../share/java/kafka/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/dependant-libs-2.12.8/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/libs/*:/usr/share/java/support-metrics-client/* org.apache.kafka.connect.cli.ConnectDistributed /var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/connect.properties
The connector module is executed by the Kafka Connect framework. For debugging, we can use standalone mode: we can configure the IDE to use the ConnectStandalone main function as the entry point.
Create a debug configuration as follows (a sketch is given below). Remember to tick "Include dependencies with 'Provided' scope" if it is a Maven project.
The connector properties file needs to specify the connector class name in "connector.class" for debugging.
The worker properties file can be copied from the Kafka folder, e.g. /usr/local/etc/kafka/connect-standalone.properties.
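For reference, a minimal sketch of that setup; the main class is the real Kafka entry point, while the file names and connector class are hypothetical placeholders:

    Main class:   org.apache.kafka.connect.cli.ConnectStandalone
    Program args: connect-standalone.properties my-connector.properties

    # my-connector.properties (hypothetical sink connector)
    name=my-connector
    connector.class=com.example.MySinkConnector
    tasks.max=1
    topics=my-topic

Outside the IDE, the same pair of files can be passed to bin/connect-standalone.sh.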

How to remote debug an enterprise application running on WebLogic Server in Eclipse IDE (same machine)

I am new to the WebLogic application server and remote debugging, and have gone through several posts on setting up remote debugging. Some posts suggest editing the setDomainEnv.cmd file, while others suggest editing the startWebLogic.cmd file in my WEBLOGIC_HOME\user_projects\domains\my_domain\bin.
But neither solution worked for me. Listed below are the solutions I tried:
1) Edit the setDomainEnv.cmd file
set JAVA_DEBUG=-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=%DEBUG_PORT%,server=y,suspend=n -Djava.compiler=NONE
set JAVA_OPTIONS=%JAVA_OPTIONS% %enableHotswapFlag% -ea -da:com.bea... -da:javelin... -da:weblogic... -ea:com.bea.wli... -ea:com.bea.broker... -ea:com.bea.sbconsole...
The port number is set to 8543 in the file
if "%DEBUG_PORT%"=="" (
set DEBUG_PORT=8453
)
2) Edit the startWebLogic.cmd file
I added the following line at the top of the file:
-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=8543,server=y,suspend=n
Then in Eclipse, when I run the debug configuration (port number: 8543), I get: Failed to connect to remote VM. Connection refused.
Connection refused: connect
Please let me know:
1) How does remote debugging work?
2) How do I set up remote debugging in Eclipse with WebLogic Server?
3) What is the difference between the above 2 methods?
4) Where do I need to add the debug command (-Xdebug....) in the startWebLogic.cmd file (at the top)?
5) What is the purpose of the setDomainEnv.cmd file in WebLogic Server?
Thanks in advance
I think it should be sufficient to just set an environment variable before starting WebLogic: start cmd and, before starting WebLogic, set debugFlag=true. This should make WebLogic open the debug port, which defaults to something like 8453. You can also set DEBUG_PORT=8888 or any other free port if you want to change it.
Start WebLogic and verify that the port has been opened. You can use tools like CurrPorts or Process Explorer for that (or even netstat).
In case the debug port isn't open, check whether you are running WebLogic in development mode, because debugFlag can be ignored in production mode.
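Putting that together, a minimal sketch of the sequence in a Windows cmd session; the domain path is a placeholder:

    cd C:\Oracle\Middleware\user_projects\domains\my_domain\bin
    set debugFlag=true
    set DEBUG_PORT=8888
    startWebLogic.cmd

    rem in a second console, verify the debug port is listening
    netstat -an | findstr 8888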
In Eclipse, create a Remote Java Application debug configuration (and remember that this option is not available in the Run configurations next to it on the toolbar):
In the configuration, select the project you deploy to WebLogic that you want to debug, and set the host and port to match the debug port above.
If that does not work, post the problems you have.

Starting tomcat: Error code 4: Failed

I'm creating multiple instances of Tomcat using an Opscode Chef cookbook. I see that tomcat.conf was not written into my instance of Tomcat, only into the base instance, so I created a soft link to the base instance's tomcat.conf file. When I try to start the server, I get the following error with no logs; there are no logs in /var/log or in the Tomcat folder. Please provide hints on how to debug this.
[root#centosclient2 ~]# service tomcat6-obi_sandbox_tomcat start
Starting tomcat6-obi_sandbox_tomcat: Error code 4 [FAILED]
I saw the following in /var/log/tomcat6-obi_sandbox_tomcat-initd.log:
-sh: /usr/sbin/tomcat6-obi_sandbox_tomcat: No such file or directory
Apparently there is no such file or directory.
I have run into error code 4 a few times, and the problem was that the disk was full.
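Both causes are quick to check from a shell; the script path comes from the error message above:

    # does the wrapper the init script calls actually exist?
    ls -l /usr/sbin/tomcat6-obi_sandbox_tomcat

    # is any filesystem out of space or out of inodes?
    df -h
    df -i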
