Sentry doesn't capture exceptions from Spark job - java

The code below throws an exception and captures it with Sentry, but the exception never shows up in the Sentry UI. I want to find a way to use Sentry across the Spark driver and the executors as well. Any ideas?
Also, I'm not sure what additional information is required, so feel free to let me know and I'll provide it.
Versions:
Spark: 3.0.0 (Scala 2.12)
Sentry: 3.1.1
import io.sentry.Sentry
import org.apache.spark.api.java.JavaSparkContext
import org.apache.spark.sql.SparkSession

fun main(args: Array<String>) {
    SparkSession.builder()
        .appName("myApp")
        .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
        .config("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
        .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .getOrCreate()
        .use { sparkSession ->
            JavaSparkContext.fromSparkContext(sparkSession.sparkContext()).let { sc ->
                Sentry.init { options ->
                    options.dsn = "********"
                }
                try {
                    throw Exception("exception before executors start working")
                } catch (e: Exception) {
                    Sentry.captureException(e)
                }
                // start some executors and see if Sentry receives exceptions from them:
                sc.parallelize((0..100).toList(), 10).map { i ->
                    try {
                        throw Exception("exception $i from worker")
                    } catch (e: Exception) {
                        Sentry.captureException(e)
                    }
                }
            }
        }
}
I also tried taking Sentry's inner HubAdapter and shipping it to the executors as a Broadcast variable, but no luck.
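For reference, the direction I've been exploring (a minimal Java sketch, not a confirmed fix) is to initialize the SDK inside each partition, since every executor is a separate JVM that never sees the driver-side Sentry.init, and since, as far as I understand, the lazy map above never even runs without an action such as foreachPartition:

import io.sentry.Sentry;
import org.apache.spark.api.java.JavaSparkContext;
import java.util.ArrayList;
import java.util.List;

public class SentryOnExecutors {
    public static void run(JavaSparkContext sc, String dsn) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i <= 100; i++) data.add(i);
        // foreachPartition is an action, so it actually executes on the
        // executors (map alone is a lazy transformation). This assumes the
        // sentry jar is on the executors' classpath.
        sc.parallelize(data, 10).foreachPartition(it -> {
            // Each executor JVM needs its own init; the driver's doesn't carry over.
            Sentry.init(options -> options.setDsn(dsn));
            while (it.hasNext()) {
                int i = it.next();
                try {
                    throw new Exception("exception " + i + " from worker");
                } catch (Exception e) {
                    Sentry.captureException(e);
                }
            }
        });
    }
}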
UPDATE #1
Added the debug option to the Sentry init and tried starting and closing a session:
import io.sentry.Sentry
import org.apache.spark.api.java.JavaSparkContext
import org.apache.spark.sql.SparkSession

fun main(args: Array<String>) {
    SparkSession.builder()
        .appName("myApp")
        .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
        .config("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
        .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .getOrCreate()
        .use { sparkSession ->
            JavaSparkContext.fromSparkContext(sparkSession.sparkContext()).let { sc ->
                Sentry.init { options ->
                    options.dsn = "********"
                    options.isDebug = true
                }
                Sentry.startSession()
                try {
                    throw Exception("exception before executors start working")
                } catch (e: Exception) {
                    Sentry.captureException(e)
                }
                // start some executors and see if Sentry receives exceptions from them:
                sc.parallelize((0..100).toList(), 10).map { i ->
                    try {
                        throw Exception("exception $i from worker")
                    } catch (e: Exception) {
                        Sentry.captureException(e)
                    }
                }
                Sentry.endSession()
                Sentry.close()
            }
        }
}
Here's what I see in the logs:
INFO: Initializing SDK with DSN: '**********'
INFO: No outbox dir path is defined in options.
INFO: GlobalHubMode: 'false'
20/11/03 18:42:01 INFO BlockManagerMasterEndpoint: Registering block manager 10.36.62.15:42123 with 4.6 GiB RAM, BlockManagerId(1, 10.36.62.15, 42123, None)
DEBUG: UncaughtExceptionHandlerIntegration enabled: true
DEBUG: UncaughtExceptionHandlerIntegration installed.
WARNING: Sessions can't be captured without setting a release.
DEBUG: Capturing event: 9b17170bdaf841cbb764969f653f99b5
ERROR: Request failed, API returned 400
ERROR: {"detail":"invalid event envelope","causes":["invalid item header","EOF while parsing an object at line 1 column 49"]}
WARNING: Sessions can't be captured without setting a release.
INFO: Closing SentryClient.
DEBUG: Shutting down
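One thing I notice in these logs is the release warning. The session part, at least, presumably needs a release set at init time; a sketch in Java syntax (the release name here is made up):

Sentry.init(options -> {
    options.setDsn("********");
    options.setDebug(true);
    // "Sessions can't be captured without setting a release" suggests
    // session tracking requires an explicit release:
    options.setRelease("myApp@1.0.0"); // illustrative value
});

This doesn't explain the 400 "invalid event envelope" response, though.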
UPDATE #2:
Created an issue on GitHub for better visibility.

Related

AWS XRay service map components are disconnected

I'm using OpenTelemetry to export trace information from the following application:
A Node.js Kafka producer sends messages to input-topic. It uses kafkajs instrumented with the opentelemetry-instrumentation-kafkajs library, following the AWS OTel for NodeJS example. Here is my tracer.js:
// Import lines were missing from the snippet; these are the packages the
// AWS OTel NodeJS example used at the time (package names assumed):
const { diag, DiagConsoleLogger, DiagLogLevel, trace } = require('@opentelemetry/api');
const { Resource } = require('@opentelemetry/resources');
const { NodeTracerProvider } = require('@opentelemetry/node');
const { SimpleSpanProcessor, ConsoleSpanExporter } = require('@opentelemetry/tracing');
const { CollectorTraceExporter } = require('@opentelemetry/exporter-collector-grpc');
const { registerInstrumentations } = require('@opentelemetry/instrumentation');
const { KafkaJsInstrumentation } = require('opentelemetry-instrumentation-kafkajs');
const { AWSXRayIdGenerator } = require('@aws/otel-aws-xray-id-generator');
const { AWSXRayPropagator } = require('@aws/otel-aws-xray-propagator');

module.exports = () => {
  diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ERROR);
  // create a provider for activating and tracking with AWS IdGenerator
  const attributes = {
    'service.name': 'nodejs-producer',
    'service.namespace': 'axel'
  };
  const resource = new Resource(attributes);
  const tracerConfig = {
    idGenerator: new AWSXRayIdGenerator(),
    plugins: {
      kafkajs: { enabled: false, path: 'opentelemetry-plugin-kafkajs' }
    },
    resource: resource
  };
  const tracerProvider = new NodeTracerProvider(tracerConfig);
  // add OTLP exporter
  const otlpExporter = new CollectorTraceExporter({
    url: (process.env.OTEL_EXPORTER_OTLP_ENDPOINT) ? process.env.OTEL_EXPORTER_OTLP_ENDPOINT : 'localhost:55680'
  });
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(otlpExporter));
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
  // Register the tracer with X-Ray propagator
  tracerProvider.register({
    propagator: new AWSXRayPropagator()
  });
  registerInstrumentations({
    tracerProvider,
    instrumentations: [new KafkaJsInstrumentation({})],
  });
  // Return a tracer instance
  return trace.getTracer('awsxray-tests');
};
A Java application reads from input-topic and produces to final-topic. It is instrumented with the AWS OTel Java agent and launched like below:
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_PROPAGATORS=xray
export OTEL_RESOURCE_ATTRIBUTES="service.name=otlp-consumer-producer,service.namespace=axel"
export OTEL_METRICS_EXPORTER=none
java -javaagent:"${PWD}/aws-opentelemetry-agent.jar" -jar "${PWD}/target/otlp-consumer-producer.jar"
I'm using the otel/opentelemetry-collector-contrib image, which has the AWS X-Ray exporter:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  awsxray:
    region: 'eu-central-1'
    max_retries: 10
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsxray]
I can see from the log messages and from the X-Ray console that traces are being published (with correct parent trace ids). NodeJS log message:
{
  traceId: '60d0c7d4cfc2d86b2df8624cb4bccead',
  parentId: undefined,
  name: 'input-topic',
  id: '3e289f00c4499ae8',
  kind: 3,
  timestamp: 1624295380734468,
  duration: 3787,
  attributes: {
    'messaging.system': 'kafka',
    'messaging.destination': 'input-topic',
    'messaging.destination_kind': 'topic'
  },
  status: { code: 0 },
  events: []
}
and Java consumer with headers:
Headers([x-amzn-trace-id:Root=1-60d0c7d4-cfc2d86b2df8624cb4bccead;Parent=3e289f00c4499ae8;Sampled=1])
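The two id formats line up once you account for the X-Ray encoding: the Root field is the same 32-hex-char OTel trace id with the first 8 chars (the epoch seconds) split off. A throwaway helper I used to sanity-check this (toXrayRoot is my own, illustrative name):

// Reshuffles an OTel/W3C trace id into the X-Amzn-Trace-Id Root format:
// "1-<8 hex epoch seconds>-<24 hex random>".
static String toXrayRoot(String otelTraceId) {
    return "1-" + otelTraceId.substring(0, 8) + "-" + otelTraceId.substring(8);
}
// toXrayRoot("60d0c7d4cfc2d86b2df8624cb4bccead")
//   -> "1-60d0c7d4-cfc2d86b2df8624cb4bccead" (matches the header above)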
As you can see, the parent and root ids match each other. However, the service map comes out disconnected (screenshot not reproduced here).
What other configuration am I missing to get a correct service map?

Why blocking on thenApplyAsync works but not with thenApply

We saw some interesting behavior in our application. The following Spock spec captures it. I am trying to understand why the second test passes but the first one throws a TimeoutException.
Summary:
There is a mock server with a mock endpoint that responds with a success after a 10ms delay.
We use AsyncHttpClient (AHC) to make a non-blocking call to this mock endpoint. The first call is chained with a second, blocking call to the same endpoint. The first call succeeds, but the second fails with a timeout when thenApply is used, and succeeds when thenApplyAsync is used. In both cases the mock server seems to respond within 10ms.
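My working theory is the standard CompletableFuture threading rule: a non-async thenApply continuation runs on whichever thread completes the future (here one of AHC's Netty IO threads), so the blocking join() inside callExternalBlocking parks that IO thread, while thenApplyAsync hands the continuation to ForkJoinPool.commonPool(). A minimal, self-contained sketch of that rule (class and thread names are illustrative, unrelated to the spec below):

import java.util.concurrent.CompletableFuture;

public class CompletionThreadDemo {
    public static void main(String[] args) throws Exception {
        CompletableFuture<String> cf = new CompletableFuture<>();
        cf.thenApply(v -> {
            System.out.println("thenApply runs on: " + Thread.currentThread().getName());
            return v;
        });
        cf.thenApplyAsync(v -> {
            System.out.println("thenApplyAsync runs on: " + Thread.currentThread().getName());
            return v;
        });
        // The non-async dependent above executes right here on the completing
        // thread (the stand-in for the Netty IO thread); the async variant is
        // handed off to ForkJoinPool.commonPool().
        Thread completer = new Thread(() -> cf.complete("done"), "completer-thread");
        completer.start();
        completer.join();
        Thread.sleep(200); // give the common pool a moment to print
    }
}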
Dependencies:
implementation 'com.google.guava:guava:29.0-jre'
implementation 'org.asynchttpclient:async-http-client:2.12.1'
// Use the latest Groovy version for Spock testing
testImplementation 'org.codehaus.groovy:groovy-all:2.5.11'
// Use the awesome Spock testing and specification framework even with Java
testImplementation 'org.spockframework:spock-core:1.3-groovy-2.5'
testImplementation 'org.objenesis:objenesis:1.4'
testImplementation "cglib:cglib:2.2"
testImplementation 'junit:junit:4.13'
testImplementation 'org.mock-server:mockserver-netty:5.11.1'
Spock Spec:
package com.switchcase.asyncthroughput

import com.google.common.base.Charsets
import org.asynchttpclient.DefaultAsyncHttpClient
import org.asynchttpclient.RequestBuilder
import org.mockserver.integration.ClientAndServer
import org.mockserver.model.HttpResponse
import spock.lang.Shared
import spock.lang.Specification

import java.util.concurrent.CompletableFuture
import java.util.concurrent.CompletionException
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeoutException

import static org.mockserver.integration.ClientAndServer.startClientAndServer
import static org.mockserver.model.HttpRequest.request

class CompletableFutureThreadsTest extends Specification {

    @Shared
    ClientAndServer mockServer

    def asyncHttpClient = new DefaultAsyncHttpClient();

    def setupSpec() {
        mockServer = startClientAndServer(9192);
        // create a mock server which responds with "done" after 10 ms.
        mockServer.when(request()
                .withMethod("POST")
                .withPath("/validate"))
                .respond(HttpResponse.response().withBody("done")
                        .withStatusCode(200)
                        .withDelay(TimeUnit.MILLISECONDS, 10));
    }

    def "Calls external using AHC with a blocking call with 1sec timeout results in TimeoutException."() {
        when:
        callExternal().thenApply({ resp -> callExternalBlocking() }).join()

        then:
        def exception = thrown(CompletionException)
        exception instanceof CompletionException
        exception.getCause() instanceof TimeoutException
        exception.printStackTrace()
    }

    def "Calls external using AHC with a blocking call on ForkJoinPool with 1sec timeout results in success."() {
        when:
        def value = callExternal().thenApplyAsync({ resp -> callExternalBlocking() }).join()

        then:
        value == "done"
    }

    def cleanupSpec() {
        mockServer.stop(true)
    }

    private CompletableFuture<String> callExternal(def timeout = 1000) {
        RequestBuilder requestBuilder = RequestBuilder.newInstance();
        requestBuilder.setMethod("POST").setUrl("http://localhost:9192/validate").setRequestTimeout(timeout)
        def cf = asyncHttpClient.executeRequest(requestBuilder).toCompletableFuture()
        return cf.thenApply({ response ->
            println("CallExternal Succeeded.")
            return response.getResponseBody(Charsets.UTF_8)
        })
    }

    private String callExternalBlocking(def timeout = 1000) {
        RequestBuilder requestBuilder = RequestBuilder.newInstance();
        requestBuilder.setMethod("POST").setUrl("http://localhost:9192/validate").setRequestTimeout(timeout)
        def cf = asyncHttpClient.executeRequest(requestBuilder).toCompletableFuture()
        return cf.thenApply({ response ->
            println("CallExternalBlocking Succeeded.")
            return response.getResponseBody(Charsets.UTF_8)
        }).join()
    }
}
EDIT:
Debug log and stack trace for timeout: (The timeout happens on the remote call in callExternalBlocking)
17:37:38.885 [AsyncHttpClient-timer-2-1] DEBUG org.asynchttpclient.netty.timeout.TimeoutTimerTask - Request timeout to localhost/127.0.0.1:9192 after 1000 ms for NettyResponseFuture{currentRetry=0,
isDone=0,
isCancelled=0,
asyncHandler=org.asynchttpclient.AsyncCompletionHandlerBase@478251c9,
nettyRequest=org.asynchttpclient.netty.request.NettyRequest@4945b749,
future=java.util.concurrent.CompletableFuture@4d7a3ab9[Not completed, 1 dependents],
uri=http://localhost:9192/validate,
keepAlive=true,
redirectCount=0,
timeoutsHolder=org.asynchttpclient.netty.timeout.TimeoutsHolder@878bd72,
inAuth=0,
touch=1622248657866} after 1019 ms
17:37:38.886 [AsyncHttpClient-timer-2-1] DEBUG org.asynchttpclient.netty.channel.ChannelManager - Closing Channel [id: 0x5485056c, L:/127.0.0.1:58076 - R:localhost/127.0.0.1:9192]
17:37:38.886 [AsyncHttpClient-timer-2-1] DEBUG org.asynchttpclient.netty.request.NettyRequestSender - Aborting Future NettyResponseFuture{currentRetry=0,
isDone=0,
isCancelled=0,
asyncHandler=org.asynchttpclient.AsyncCompletionHandlerBase@478251c9,
nettyRequest=org.asynchttpclient.netty.request.NettyRequest@4945b749,
future=java.util.concurrent.CompletableFuture@4d7a3ab9[Not completed, 1 dependents],
uri=http://localhost:9192/validate,
keepAlive=true,
redirectCount=0,
timeoutsHolder=org.asynchttpclient.netty.timeout.TimeoutsHolder@878bd72,
inAuth=0,
touch=1622248657866}
java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException: Request timeout to localhost/127.0.0.1:9192 after 1000 ms
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
at org.asynchttpclient.netty.NettyResponseFuture.abort(NettyResponseFuture.java:273)
at org.asynchttpclient.netty.request.NettyRequestSender.abort(NettyRequestSender.java:473)
at org.asynchttpclient.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43)
at org.asynchttpclient.netty.timeout.RequestTimeoutTimerTask.run(RequestTimeoutTimerTask.java:50)
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:672)
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:747)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:472)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Request timeout to localhost/127.0.0.1:9192 after 1000 ms
... 7 more

Broadleaf Commerce Embedded Solr cannot run with root user

I downloaded a fresh 6.1 broadleaf-commerce and ran it on my MacBook via java -javaagent:./admin/target/agents/spring-instrument.jar -jar admin/target/admin.jar successfully. But on my CentOS 7 box, running sudo java -javaagent:./admin/target/agents/spring-instrument.jar -jar admin/target/admin.jar fails with the following error:
2020-10-12 13:20:10.838 INFO 2481 --- [ main] c.b.solr.autoconfigure.SolrServer : Syncing solr config file: jar:file:/home/mynewuser/seafood-broadleaf/admin/target/admin.jar!/BOOT-INF/lib/broadleaf-boot-starter-solr-2.2.1-GA.jar!/solr/standalone/solrhome/configsets/fulfillment_order/conf/solrconfig.xml to: /tmp/solr-7.7.2/solr-7.7.2/server/solr/configsets/fulfillment_order/conf/solrconfig.xml
*** [WARN] *** Your Max Processes Limit is currently 62383.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
WARNING: Starting Solr as the root user is a security risk and not considered best practice. Exiting.
Please consult the Reference Guide. To override this check, start with argument '-force'
2020-10-12 13:20:11.021 ERROR 2481 --- [ main] c.b.solr.autoconfigure.SolrServer : Problem starting Solr
Here is the source code of my Solr configuration; I believe this is the place to change the configuration so it runs with the -force argument programmatically.
package com.community.core.config;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.broadleafcommerce.core.search.service.SearchService;
import org.broadleafcommerce.core.search.service.solr.SolrConfiguration;
import org.broadleafcommerce.core.search.service.solr.SolrSearchServiceImpl;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

/**
 * @author Phillip Verheyden (phillipuniverse)
 */
@Component
public class ApplicationSolrConfiguration {

    @Value("${solr.url.primary}")
    protected String primaryCatalogSolrUrl;

    @Value("${solr.url.reindex}")
    protected String reindexCatalogSolrUrl;

    @Value("${solr.url.admin}")
    protected String adminCatalogSolrUrl;

    @Bean
    public SolrClient primaryCatalogSolrClient() {
        return new HttpSolrClient.Builder(primaryCatalogSolrUrl).build();
    }

    @Bean
    public SolrClient reindexCatalogSolrClient() {
        return new HttpSolrClient.Builder(reindexCatalogSolrUrl).build();
    }

    @Bean
    public SolrClient adminCatalogSolrClient() {
        return new HttpSolrClient.Builder(adminCatalogSolrUrl).build();
    }

    @Bean
    public SolrConfiguration blCatalogSolrConfiguration() throws IllegalStateException {
        return new SolrConfiguration(primaryCatalogSolrClient(), reindexCatalogSolrClient(), adminCatalogSolrClient());
    }

    @Bean
    protected SearchService blSearchService() {
        return new SolrSearchServiceImpl();
    }
}
Let me preface this by saying you would be better off simply not starting the application as root. If you are in Docker, you can use the USER command to switch to a non-root user.
The Solr server startup in Broadleaf Community is done programmatically via the broadleaf-boot-starter-solr dependency. This is the wrapper around Solr that ties it to the Spring lifecycle. All of the real magic happens in the com.broadleafcommerce.solr.autoconfigure.SolrServer class.
In that class, you will see a startSolr() method. This method is what adds startup arguments to Solr.
In your case, you will need to mostly copy this method wholesale and use cmdLine.addArgument(...) to add additional arguments. Example:
class ForceStartupSolrServer extends SolrServer {

    public ForceStartupSolrServer(SolrProperties props) {
        super(props);
    }

    protected void startSolr() {
        if (!isRunning()) {
            if (!downloadSolrIfApplicable()) {
                throw new IllegalStateException("Could not download or expand Solr, see previous logs for more information");
            }
            stopSolr();
            synchConfig();
            {
                CommandLine cmdLine = new CommandLine(getSolrCommand());
                cmdLine.addArgument("start");
                cmdLine.addArgument("-p");
                cmdLine.addArgument(Integer.toString(props.getPort()));
                // START MODIFICATION
                cmdLine.addArgument("-force");
                // END MODIFICATION
                Executor executor = new DefaultExecutor();
                PumpStreamHandler streamHandler = new PumpStreamHandler(System.out);
                streamHandler.setStopTimeout(1000);
                executor.setStreamHandler(streamHandler);
                try {
                    executor.execute(cmdLine);
                    created = true;
                    checkCoreStatus();
                } catch (IOException e) {
                    LOG.error("Problem starting Solr", e);
                }
            }
        }
    }
}
Then create an @Configuration class to override the blAutoSolrServer bean created by SolrAutoConfiguration (note the specific package requirement for org.broadleafoverrides.config):
package org.broadleafoverrides.config;

import com.broadleafcommerce.solr.autoconfigure.SolrProperties; // package assumed, alongside SolrServer
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OverrideConfiguration {

    @Bean
    public ForceStartupSolrServer blAutoSolrServer(SolrProperties props) {
        return new ForceStartupSolrServer(props);
    }
}
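With both classes in place, the overridden blAutoSolrServer bean starts Solr with the extra -force argument, so it will come up even as root (with the security caveat Solr itself prints still applying).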

Exception: org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel

I have a sandbox for exploring the newly added functions in Spring Cloud Stream, but I've run into a problem using a Function and a Supplier in one Spring Cloud Stream application.
The code uses the examples described in the docs.
First I added a Function<String, String> to the project, with corresponding spring.cloud.stream.bindings and spring.cloud.stream.function.definition properties in application.yml. Everything worked fine: I posted a message to the my-fun-in Kafka topic, and the application executed the function and sent the result to the my-fun-out topic.
Then I added a Supplier<Flux<String>> to the same project, with corresponding spring.cloud.stream.bindings, and updated the spring.cloud.stream.function.definition value to fun;sup. And here the weird things start to happen. When I try to start the application I receive the following error:
2020-01-15 01:45:16.608 ERROR 10128 --- [oundedElastic-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'application.sup-out-0'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[20], headers={contentType=application/json, id=89301e00-b285-56e0-cb4d-8133555c8905, timestamp=1579045516603}], failedMessage=GenericMessage [payload=byte[20], headers={contentType=application/json, id=89301e00-b285-56e0-cb4d-8133555c8905, timestamp=1579045516603}]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:453)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:403)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
at org.springframework.integration.router.AbstractMessageRouter.doSend(AbstractMessageRouter.java:206)
at org.springframework.integration.router.AbstractMessageRouter.handleMessageInternal(AbstractMessageRouter.java:188)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:170)
at org.springframework.integration.handler.AbstractMessageHandler.onNext(AbstractMessageHandler.java:219)
at org.springframework.integration.handler.AbstractMessageHandler.onNext(AbstractMessageHandler.java:57)
at org.springframework.integration.endpoint.ReactiveStreamsConsumer$DelegatingSubscriber.hookOnNext(ReactiveStreamsConsumer.java:165)
at org.springframework.integration.endpoint.ReactiveStreamsConsumer$DelegatingSubscriber.hookOnNext(ReactiveStreamsConsumer.java:148)
at reactor.core.publisher.BaseSubscriber.onNext(BaseSubscriber.java:160)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onNext(FluxDoFinally.java:123)
at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:426)
at reactor.core.publisher.EmitterProcessor.onNext(EmitterProcessor.java:268)
at reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:793)
at reactor.core.publisher.FluxCreate$BufferAsyncSink.next(FluxCreate.java:718)
at reactor.core.publisher.FluxCreate$SerializedSink.next(FluxCreate.java:153)
at org.springframework.integration.channel.FluxMessageChannel.doSend(FluxMessageChannel.java:63)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:453)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:403)
at org.springframework.integration.channel.FluxMessageChannel.lambda$subscribeTo$2(FluxMessageChannel.java:83)
at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:189)
at reactor.core.publisher.FluxPublishOn$PublishOnSubscriber.runAsync(FluxPublishOn.java:398)
at reactor.core.publisher.FluxPublishOn$PublishOnSubscriber.run(FluxPublishOn.java:484)
at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84)
at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[20], headers={contentType=application/json, id=89301e00-b285-56e0-cb4d-8133555c8905, timestamp=1579045516603}]
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:139)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:73)
... 34 more
After that I tried several things:
Reverted spring.cloud.stream.function.definition to fun (disabling the sup bean's binding to the external destination). The application started, the function worked, the supplier didn't work. Everything as expected.
Changed spring.cloud.stream.function.definition to sup (disabling the fun bean's binding to the external destination). The application started, the function didn't work, the supplier worked (produced a message to the my-sup-out topic every second). Everything as expected as well.
Updated spring.cloud.stream.function.definition back to fun;sup. The application didn't start, with the same MessageDeliveryException.
Swapped the spring.cloud.stream.function.definition value to sup;fun. The application started, the supplier worked, but the function didn't work (didn't send messages to the my-fun-out topic).
The last one confused me even more than the error itself. So now I need someone's help to sort things out.
Did I miss something in the configuration? Why does changing the order of the beans separated by ; in spring.cloud.stream.function.definition lead to different results?
Full project is uploaded to GitHub and added below:
StreamApplication.java:
package com.kaine;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import reactor.core.publisher.Flux;

import java.util.function.Function;
import java.util.function.Supplier;

@SpringBootApplication
public class StreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(StreamApplication.class);
    }

    @Bean
    public Function<String, String> fun() {
        return value -> value.toUpperCase();
    }

    @Bean
    public Supplier<Flux<String>> sup() {
        return () -> Flux.from(emitter -> {
            while (true) {
                try {
                    emitter.onNext("Hello from Supplier!");
                    Thread.sleep(1000);
                } catch (Exception e) {
                    // ignore
                }
            }
        });
    }
}
application.yml
spring:
  cloud:
    stream:
      function:
        definition: fun;sup
      bindings:
        fun-in-0:
          destination: my-fun-in
        fun-out-0:
          destination: my-fun-out
        sup-out-0:
          destination: my-sup-out
build.gradle.kts:
plugins {
    java
}

group = "com.kaine"
version = "1.0-SNAPSHOT"

repositories {
    mavenCentral()
}

dependencies {
    implementation(platform("org.springframework.cloud:spring-cloud-dependencies:Hoxton.SR1"))
    implementation("org.springframework.cloud:spring-cloud-starter-stream-kafka")
    implementation(platform("org.springframework.boot:spring-boot-dependencies:2.2.2.RELEASE"))
}

configure<JavaPluginConvention> {
    sourceCompatibility = JavaVersion.VERSION_11
}
Actually, this is a problem with our documentation: I believe we provide a bad example of a reactive Supplier for this case. The issue is that your Supplier is in an infinite blocking loop; it basically never returns.
So please change it to something like:
@Bean
public Supplier<Flux<String>> sup() {
    return () -> Flux.fromStream(Stream.generate(new Supplier<String>() {
        @Override
        public String get() {
            try {
                Thread.sleep(1000);
                return "Hello from Supplier";
            } catch (Exception e) {
                // the original sample ignored the exception here, but get()
                // must still return a value for the code to compile
                return "Hello from Supplier";
            }
        }
    })).subscribeOn(Schedulers.elastic()).share();
}
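A note on why this shape works: Stream.generate is lazy and pulled on demand, subscribeOn(Schedulers.elastic()) moves the blocking Thread.sleep onto a scheduler intended for blocking work rather than the binder's thread, and share() turns the flux into a shared hot source so the binder's subscription doesn't spin up a second generator loop.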

IONIC 3 : UnhandledPromiseRejectionWarning when generating APK File

I'm facing problems when generating the APK file. I get the following error.
Command : ionic cordova build android
Output :
> cordova build android
Android Studio project detected
ANDROID_HOME=C:\Users\****\AppData\Local\Android\Sdk
JAVA_HOME=C:\Program Files\Java\jdk-9.0.4
(node:17504) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): CordovaError: Requirements check failed for JDK 1.8 or greater
(node:17504) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
[13:47:49] lint finished in 8.47 s
This is the content of my REST provider, rest.ts:
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/catch';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/toPromise';

@Injectable()
export class RestProvider {

  private baseUrl = 'http://localhost/project/web/rest/mobile/v1/';
  private nomsvillesUrl = this.baseUrl + 'ville/nomsvilles/1';

  constructor(public http: HttpClient) {
    console.log('Hello RestProvider Provider');
  }

  getNomvilles(): Observable<string[]> {
    return this.http.get(this.nomsvillesUrl)
      .map(this.extractData)
      .catch(this.handleError);
  }

  private extractData(res: Response) {
    let body = res;
    return body || { };
  }

  private handleError(error: Response | any) {
    let errMsg: string;
    if (error instanceof Response) {
      const err = error || '';
      errMsg = `${error.status} - ${error.statusText || ''} ${err}`;
    } else {
      errMsg = error.message ? error.message : error.toString();
    }
    console.error(errMsg);
    return Observable.throw(errMsg);
  }
}
This is the content of my main page, main.ts:
import { Component } from '@angular/core';
import { NavController, NavParams } from 'ionic-angular';
import { RestProvider } from '../../providers/rest/rest';

@Component({
  selector: 'page-main',
  templateUrl: 'main.html',
})
export class MainPage {

  villes: string[];
  errorMessage: string;

  constructor(public navCtrl: NavController, public navParams: NavParams, public rest: RestProvider) {
  }

  ionViewDidLoad() {
    this.getVilles();
  }

  getVilles() {
    this.rest.getNomvilles().subscribe(
      villes => this.villes = villes,
      error => this.errorMessage = <any>error
    );
  }
}
Please help me! I want to know how to handle the Promise in order to prevent the promise rejection.
Thank you.
The error is not related to your code. It's asking for JDK 1.8 or greater; you can download it from this link. But first uninstall the JDK 9 you're using, because it's not compatible with the Android build tooling.
Then set the JAVA_HOME environment variable to the JDK install directory (not its bin folder), e.g. JAVA_HOME=C:\path\to\jdk, and add C:\path\to\jdk\bin to your PATH.
