I have created a Spring Boot application with OpenTelemetry. I have used Spring Cloud Sleuth for exporting the traces to an OpenTelemetry Collector, which ultimately exports these traces to Datadog. I can see the exported traces in Datadog.
Now, I also have to add some logging to the application, and OpenTelemetry does not support logging directly. So, I have used opentelemetry-logback-appender to export the logs to Datadog as well. I can see in the console that the log has the same trace ID and span ID as the exported traces. However, the logs are not getting forwarded to Datadog.
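For context, the Logback side of that appender is wired in logback(-spring).xml roughly like this (a minimal sketch; the appender class name is taken from the opentelemetry-logback-appender artifact, so verify it against the version you actually use):

  <!-- sketch: forward log events to the OpenTelemetry SDK alongside the console -->
  <appender name="OpenTelemetry"
            class="io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender"/>
  <root level="INFO">
    <appender-ref ref="OpenTelemetry"/>
  </root>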
My code:
otel-collector-config.yaml:
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  datadog:
    api:
      site: datadoghq.com
      key: ${DD_API_KEY}
  file:
    path: /tmp/signals.json
  logging:
    loglevel: debug

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog, logging, file]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog, logging, file]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, file]
Log written to the console with SLF4J (Logback):
spring-cloud-sleuth-otel-slf4j-spring-cloud-sleuth-otel-slf4j-1 | 09:25:45.835 [http-nio-8181-exec-1] ERROR com.uplight.web.MyController traceId: c9c54856c474a11e22e3716b6e97ec4b spanId: 569063cd0411d3a6 - Logging error using SLF4J LOGGER--------------------------------------------------------------------
The log is not visible under the trace in Datadog. Can someone please suggest if I am missing anything?
Logs support was added to the datadogexporter in collector version 0.61.0 (#2651). If you are running an older version of the collector, update it and add datadog to the exporters of the logs pipeline; logs should then appear in Datadog.
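With a new enough collector, the logs pipeline from the config above becomes, e.g.:

    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog, logging, file]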
I used the step-by-step guide below:
https://phoenixnap.com/kb/install-hadoop-ubuntu
Then I tried to run the MapReduce word count example on a text file.
The problem is that the program is not running, and I am getting the "AM Container for appattempt ... exited" and "Exception from container-launch" errors.
Is there any solution to this?
All nodes are working:
6544 Jps
3041 NameNode
3842 NodeManager
3219 DataNode
3494 SecondaryNameNode
3706 ResourceManager
Below is the yarn status output for my application.
hdoop@contactkarim-VirtualBox:~/hadoop-3.3.1$ yarn app -status application_1667981786519_0006
2022-11-09 11:35:22,184 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /127.0.0.1:8032
2022-11-09 11:35:22,522 INFO conf.Configuration: resource-types.xml not found
2022-11-09 11:35:22,522 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
Application Report :
Application-Id : application_1667981786519_0006
Application-Name : word count
Application-Type : MAPREDUCE
User : hdoop
Queue : default
Application Priority : 0
Start-Time : 1667982679380
Finish-Time : 1667982691120
Progress : 0%
State : FAILED
Final-State : FAILED
Tracking-URL : http://contactkm-VirtualBox:8088/cluster/app/application_1667981786519_0006
RPC Port : -1
AM Host : N/A
Aggregate Resource Allocation : 20250 MB-seconds, 8 vcore-seconds
Aggregate Resource Preempted : 0 MB-seconds, 0 vcore-seconds
Log Aggregation Status : DISABLED
Diagnostics : Application application_1667981786519_0006 failed 2 times due to AM Container for appattempt_1667981786519_0006_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2022-11-09 11:31:31.113]Exception from container-launch.
Container id: container_1667981786519_0006_02_000001
Exit code: 1
[2022-11-09 11:31:31.116]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[2022-11-09 11:31:31.116]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
For more detailed output, check the application tracking page: http://contactkarim-VirtualBox:8088/cluster/app/application_1667981786519_0006 Then click on links to logs of each attempt.
. Failing the application.
Unmanaged Application : false
Application Node Label Expression : <Not set>
AM container Node Label Expression : <DEFAULT_PARTITION>
TimeoutType : LIFETIME ExpiryTime : UNLIMITED RemainingTime : -1seconds
Thanks
I have troubleshooted many things, e.g. I checked the site settings and the resources, and I went through the configurations multiple times; permissions etc. are given.
My only remaining suspicion is the Java version.
The linked blog says nothing about MapReduce, only cluster setup (for which I always recommend following the official Apache Hadoop site, not third-party blogs).
"No appenders could be found" means you're missing a log4j.properties file submitted with your job. See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
You won't be able to see the real runtime error/log output until you add that, e.g. to src/main/resources if you've submitted your own JAR built by Maven/Gradle. A minimal starting point is sketched below.
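A minimal log4j.properties (a sketch; the pattern and levels are illustrative), placed in src/main/resources so it is packaged onto the job's classpath:

  # send everything to the console so the container logs show the real error
  log4j.rootLogger=INFO, console
  log4j.appender.console=org.apache.log4j.ConsoleAppender
  log4j.appender.console.layout=org.apache.log4j.PatternLayout
  log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c: %m%n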
I am trying to instrument my Spring Boot app with OpenTelemetry and send the telemetry data (logs, metrics, traces) to Elastic APM.
I added the OTel agent v1.18.0 to the classpath.
Elastic APM and the apps are running on Kubernetes.
I have followed the docs: https://www.elastic.co/guide/en/apm/guide/current/open-telemetry.html
exec java -XX:MinRAMPercentage=70 -XX:MaxRAMPercentage=70 --add-opens java.base/java.math=ALL-UNNAMED --add-opens java.base/java.time=ALL-UNNAMED -javaagent:/app/opentelemetry-javaagent-all.jar -Dotel.service.name=sync-data -Dotel.exporter.otlp.endpoint=http://ip:8200 -Delastic.apm.verify_server_cert=false '-Dotel.exporter.otlp.headers=Authorization=Bearer secret_token' -jar sync-data.jar
INFO io.opentelemetry.javaagent.tooling.VersionLogger - opentelemetry-javaagent - version: 1.18.0
[OkHttp http://ip:8200/...] ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export metrics. The request could not be executed. Full error message: Canceled
[otel.javaagent 2022-10-09 12:54:48:576 +0000] [OkHttp http://ip:8200/...] ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export metrics. The request could not be executed. Full error message: Required SETTINGS preface not received
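One thing worth checking (my assumption, not something confirmed in this thread): "Required SETTINGS preface not received" usually means the gRPC exporter reached an endpoint that is not speaking gRPC/HTTP2. The OpenTelemetry Java agent can be switched to OTLP over HTTP instead, e.g.:

  # assumption: the APM server's OTLP intake accepts OTLP/HTTP; verify for your Elastic version
  -Dotel.exporter.otlp.protocol=http/protobuf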
I have a Java web application deployed in App Engine, and the source code is in Bitbucket under the master branch.
I heard about Bitbucket Pipelines and found it a helpful, fast way of auto-deploying.
My master branch has this list of 4 projects:
master --
|- project1
|- project2
|- project3
|- project4
|- bitbucket-pipelines.yml
And I followed exactly what is written in this link to provide the pipeline functionality:
https://confluence.atlassian.com/bitbucket/deploy-to-google-cloud-900820342.html
Here is my bitbucket-pipelines.yml content; it is located directly under my master branch:
image: maven:3.3.9

pipelines:
  branches:
    master:
      - step:
          caches:
            - maven
          script:
            # Download the Google Cloud SDK
            - curl -o /tmp/google-cloud-sdk.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-155.0.0-linux-x86_64.tar.gz
            - tar -xvf /tmp/google-cloud-sdk.tar.gz -C /tmp/
            - /tmp/google-cloud-sdk/install.sh -q
            - source /tmp/google-cloud-sdk/path.bash.inc
            # Authenticate with the service account key file
            - echo $GOOGLE_CLIENT_SECRET | base64 --decode --ignore-garbage > ./client-secret.json
            - gcloud config set project $CLOUDSDK_CORE_PROJECT
            - gcloud components install app-engine-java
            - gcloud auth activate-service-account --key-file client-secret.json
            - cd project1
            - mvn clean install package
            - 'mvn appengine:update'
CLOUDSDK_CORE_PROJECT: a pipeline variable containing the project ID.
GOOGLE_CLIENT_SECRET: a pipeline variable containing the base64-encoded service account JSON file, as explained in the attached link.
And here is my App Engine plugin in the pom.xml:
<plugin>
  <groupId>com.google.appengine</groupId>
  <artifactId>appengine-maven-plugin</artifactId>
  <version>${appengine.target.version}</version>
  <configuration>
    <enableJarClasses>false</enableJarClasses>
    <oauth2>false</oauth2>
  </configuration>
</plugin>
After I run my pipeline, I get this error when executing "mvn appengine:update":
Please visit https://developers.google.com/appengine/downloads for the latest SDK.
********************************************************
The following URL can be used to authenticate:
https://accounts.google.com/o/oauth2/auth?access_type=offline&approval_prompt=force&client_id=550516889912.apps.googleusercontent.com&redirect_uri=urn:ietf:wg:oauth:2.0:oob&response_type=code&scope=https://www.googleapis.com/auth/appengine.admin%20https://www.googleapis.com/auth/cloud-platform
Attempting to open it in your browser now.
Unable to open browser. Please open the URL above and copy the resulting code.
Please enter code: Encountered a problem: No line found
Please see the logs [/tmp/appcfg3177766291803906341.log] for further information.
Then the pipeline result is failed. I have looked into this error for 2 days with no luck; I hope someone here can help me out.
Thanks in advance!
I fixed it. It turned out that the Bitbucket docs are misleading. Here is the correct pipeline script; you just need these 3 lines to build and deploy to Google Cloud:
- mvn install package
- echo $GOOGLE_CLIENT_SECRET > /tmp/client-secret.json
- mvn appengine:update -Dappengine.additionalParams="--service_account_json_key_file=/tmp/client-secret.json"
$GOOGLE_CLIENT_SECRET is an environment variable holding the service account JSON of the App Engine default service account, or you can create a new one with project editor privileges.
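Put together, the step from the question would end roughly like this (a sketch assuming the same maven image and project1 layout as above):

  - step:
      caches:
        - maven
      script:
        - cd project1
        - mvn install package
        - echo $GOOGLE_CLIENT_SECRET > /tmp/client-secret.json
        - mvn appengine:update -Dappengine.additionalParams="--service_account_json_key_file=/tmp/client-secret.json"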
It helped me get past the authentication error, but now I see a 403 in my logs. Surprisingly, the version still gets pushed to App Engine, but with 0% traffic.
Beginning interaction for module default...
0% Created staging directory at: '/var/folders/ny/z92xw4ps0j71v43mnvjzjyd80000gn/T/appcfg16663468200304338426.tmp'
5% Scanning for jsp files.
8% Generated git repository information file.
20% Scanning files on local disk.
25% Initiating update.
28% Cloning 34 application files.
40% Uploading 3 files.
52% Uploaded 1 files.
61% Uploaded 2 files.
68% Uploaded 3 files.
73% Sending batch containing 3 file(s) totaling 41KB.
77% Initializing precompilation...
90% Deploying new version.
95% Closing update: new version is ready to start serving.
98% Uploading index definitions.
Feb. 19, 2018 1:21:24 AM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/datastore/index/add?app_id=clean-aleph-191303&version=beta-001&
403 Forbidden
You do not have permission to modify this app (app_id=u'f~clean-aleph-191303').
This is try #0
Feb. 19, 2018 1:21:25 AM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/datastore/index/add?app_id=clean-aleph-191303&version=beta-001&
403 Forbidden
You do not have permission to modify this app (app_id=u'f~clean-aleph-191303').
This is try #1
Feb. 19, 2018 1:21:25 AM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/datastore/index/add?app_id=clean-aleph-191303&version=beta-001&
403 Forbidden
You do not have permission to modify this app (app_id=u'f~clean-aleph-191303').
This is try #2
Feb. 19, 2018 1:21:25 AM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/datastore/index/add?app_id=clean-aleph-191303&version=beta-001&
403 Forbidden
You do not have permission to modify this app (app_id=u'f~clean-aleph-191303').
This is try #3
Error Details:
2018-02-19 01:20:57.438:INFO::main: Logging initialized #378ms
2018-02-19 01:20:57.575:INFO:oejs.Server:main: jetty-9.3.18.v20170406
2018-02-19 01:20:58.829:INFO:oeja.AnnotationConfiguration:main: Scanning elapsed time=711ms
2018-02-19 01:20:58.843:INFO:oejq.QuickStartDescriptorGenerator:main: Quickstart generating
2018-02-19 01:20:58.859:INFO:oejsh.ContextHandler:main: Started o.e.j.q.QuickStartWebApp@2aceadd4{/,file:///private/var/folders/ny/z92xw4ps0j71v43mnvjzjyd80000gn/T/appcfg16663468200304338426.tmp/,AVAILABLE}
2018-02-19 01:20:58.861:INFO:oejs.Server:main: Started @1808ms
2018-02-19 01:20:58.863:INFO:oejsh.ContextHandler:main: Stopped o.e.j.q.QuickStartWebApp@2aceadd4{/,file:///private/var/folders/ny/z92xw4ps0j71v43mnvjzjyd80000gn/T/appcfg16663468200304338426.tmp/,UNAVAILABLE}
I was following the tutorial to run WordCount.java mentioned here, and when I run the following line from the tutorial:
hadoop jar wordcount.jar org.myorg.WordCount /user/cloudera/wordcount/input /user/cloudera/wordcount/output
I get the following error:
17/09/04 01:57:29 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/09/04 01:57:30 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
The Docker image that I used was from docker pull cloudera/quickstart.
There were no setup tutorials for Hadoop with Docker, so it would be helpful if you could tell me the configuration changes needed to overcome these issues.
That tutorial assumes you are inside the cluster, with the hadoop client command available and the Hadoop services started and properly configured.
0.0.0.0:8032 is the default YARN ResourceManager address, so you need to configure your HADOOP_CONF_DIR XML files (specifically yarn-site.xml for this error) to point at the Docker container for the correct YARN addresses; core-site.xml and hdfs-site.xml will need to be configured to point at HDFS as well. A client-side sketch follows.
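For example, the client's yarn-site.xml might look like this (a minimal sketch; quickstart.cloudera is the quickstart container's default hostname, so adjust it to however your container is reachable from the client):

  <configuration>
    <property>
      <!-- point the client at the container running the ResourceManager -->
      <name>yarn.resourcemanager.hostname</name>
      <value>quickstart.cloudera</value>
    </property>
  </configuration>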
I'm trying to debug some Hibernate functionality in a Spring app with a JUnit test and Commons Logging, but I can't seem to get anything other than the default INFO messages to appear. I'm running these JUnit tests from Eclipse.
I've had no luck on the Spring forums either.
I'm particularly interested in the debug logging output by Hibernate (to try to figure out why it takes 23 seconds to run this test).
Current output shows the default setting of INFO:
Mar 29, 2011 4:44:35 PM org.springframework.test.AbstractTransactionalSpringContextTests onSetUp
INFO: Began transaction: transaction manager [org.springframework.orm.hibernate3.HibernateTransactionManager@5f873eb2]; defaultRollback true
testGetSubjectsForSite time: [00:00:00:068]
Mar 29, 2011 4:44:58 PM org.springframework.test.AbstractTransactionalSpringContextTests endTransaction
INFO: Rolled back transaction after test execution
I've tried adding a commons-logging.properties file to the classpath (the same location as hibernate.properties and test-components.xml), but still only the default INFO messages appear.
Here's the commons-logging.properties file:
org.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger
# handlers
handlers=java.util.logging.ConsoleHandler
# default log level
.level=FINE
org.springframework.level=FINE
org.hibernate.level=FINE
# level for the console logger
java.util.logging.ConsoleHandler.level=FINE
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
Is anyone able to shed any light on why I can't get the DEBUG messages to print out? Is there a logging setting I'm missing?
Edit: I've tried FINEST and DEBUG to no avail.
Unfortunately, it seems the logging configuration file used by Jdk14Logger has to be specified at runtime.
See the following file in your JDK directory: JDK_HOME/jre/lib/logging.properties (it's the default one used if no config file is found).
Moreover, the file path should be absolute; otherwise it is resolved relative to the folder where the JRE is executed. See the code of java.util.logging.LogManager.readConfiguration().
Also see:
http://www.javapractices.com/topic/TopicAction.do?Id=143
http://cyntech.wordpress.com/2009/01/09/how-to-use-commons-logging/
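For example (a sketch; the path is hypothetical), the config file can be passed as a JVM argument in the Eclipse run configuration:

  -Djava.util.logging.config.file=/absolute/path/to/logging.properties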
Your default and Hibernate logging is at level FINE, which is more of an INFO in log4j terms.
You need to set the DEBUG level for org.hibernate, which in JDK logging is equal to FINEST.
Set
org.hibernate.level=FINEST (in the above configuration; this should enable debug logs)
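Note also (my addition, not from the answer above): in java.util.logging the handler level filters records too, so the ConsoleHandler has to be raised along with the logger, e.g.:

  # both the logger and the handler must let FINEST records through
  org.hibernate.level=FINEST
  java.util.logging.ConsoleHandler.level=FINEST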