I would like to make my Apache Camel application more resilient so that it starts even when the MQTT broker is not reachable. We are running Camel on IoT devices with a possibly unstable internet connection, and I want our application to start even without internet access.
An example route looks like this:
from("timer:heartbeat?period=5000")
.routeId("send heartbeat")
.setBody(simple("Hello World!"))
.to("paho:myhostnome/heartbeat?brokerUrl={{broker.url}}")
This works fine as long as the MQTT broker is available. However, when the broker is not reachable, the CamelContext fails while warming up the PahoEndpoint:
Caused by: org.apache.camel.FailedToCreateRouteException: Failed to create route send heartbeat: Route(send heartbeat)[[From[timer:heartbeat?period={{heartbe... because of Unable to connect to server
at org.apache.camel.impl.RouteService.warmUp(RouteService.java:147) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.impl.DefaultCamelContext.doWarmUpRoutes(DefaultCamelContext.java:3758) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.impl.DefaultCamelContext.safelyStartRouteServices(DefaultCamelContext.java:3665) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.impl.DefaultCamelContext.doStartOrResumeRoutes(DefaultCamelContext.java:3451) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.impl.DefaultCamelContext.doStartCamel(DefaultCamelContext.java:3305) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.impl.DefaultCamelContext.access$000(DefaultCamelContext.java:202) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:3089) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:3085) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.impl.DefaultCamelContext.doWithDefinedClassLoader(DefaultCamelContext.java:3108) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.impl.DefaultCamelContext.doStart(DefaultCamelContext.java:3085) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:3022) ~[camel-core-2.19.3.jar:2.19.3]
at org.apache.camel.spring.boot.RoutesCollector.maybeStart(RoutesCollector.java:242) ~[camel-spring-boot-2.19.3.jar:2.19.3]
at org.apache.camel.spring.boot.RoutesCollector.onApplicationEvent(RoutesCollector.java:217) ~[camel-spring-boot-2.19.3.jar:2.19.3]
... 13 common frames omitted
Caused by: org.eclipse.paho.client.mqttv3.MqttException: Unable to connect to server
at org.eclipse.paho.client.mqttv3.internal.TCPNetworkModule.start(TCPNetworkModule.java:79) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:na]
at org.eclipse.paho.client.mqttv3.internal.ClientComms$ConnectBG.run(ClientComms.java:650) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:na]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_121]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_121]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[na:1.8.0_121]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_121]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_121]
at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_121]
at org.eclipse.paho.client.mqttv3.internal.TCPNetworkModule.start(TCPNetworkModule.java:70) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:na]
My first idea was to disable auto startup on all routes involving Paho and start them manually. This does not work, as the PahoEndpoint is started even when the route itself is not started.
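For reference, a minimal sketch of that attempt; the class name, the main method and the hard-coded tcp://localhost:1883 broker URL are placeholders standing in for the real setup with {{broker.url}}:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class ManualStartAttempt {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("timer:heartbeat?period=5000")
                    .routeId("send heartbeat")
                    // exclude the route from automatic startup...
                    .autoStartup(false)
                    .setBody(simple("Hello World!"))
                    .to("paho:myhostnome/heartbeat?brokerUrl=tcp://localhost:1883");
            }
        });
        // ...but the PahoEndpoint is still warmed up (and tries to connect)
        // when the context starts, which is why this approach did not help
        context.start();
        // the idea was to start the route manually later, e.g. once connectivity is confirmed
        context.startRoute("send heartbeat");
    }
}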
I'm now looking for an alternative approach to handle this problem.
Apache Camel 2.20 onwards comes with new functionality that lets the startup procedure run via a supervising route controller, which takes over starting the routes from the CamelContext itself. This allows more advanced configuration so that the controller handles errors, attempts to retry starting failed routes, and so on.
There is an example where it's configured via Spring Boot, but you can configure it from the Java API as well: https://github.com/apache/camel/tree/master/examples/camel-example-spring-boot-supervising-route-controller
In the upcoming releases we will improve on this new functionality.
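As a rough sketch of the Spring Boot flavour, the linked example boils down to a handful of application.properties entries. The property names below are recalled from that example and may differ between Camel versions, so treat them as an assumption and check the example/docs for the version you use:

# let a supervising controller start the routes so the context comes up even if a route fails
camel.supervising.controller.enabled = true
# small delay before the controller starts the routes
camel.supervising.controller.initial-delay = 2s
# retry starting failed routes with a back-off
camel.supervising.controller.default-back-off.delay = 5s
camel.supervising.controller.default-back-off.max-attempts = 10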
For older versions of Camel, it's often the component itself that may or may not offer a retry mechanism you can configure instead of failing fast.
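To illustrate what such a component-level mechanism looks like for Paho: the underlying Eclipse Paho client (1.1.x) offers automatic reconnect on MqttConnectOptions. Note that it only kicks in after a first successful connection, so on its own it does not cover the broker-being-down-at-startup case, and whether the Camel Paho endpoint exposes it depends on the component version. A minimal sketch using the plain Paho client, with the broker URL and topic as placeholders taken from the question:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class PahoReconnectSketch {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", MqttClient.generateClientId());
        MqttConnectOptions options = new MqttConnectOptions();
        // reconnect automatically if an established connection drops;
        // the very first connect() below still fails if the broker is unreachable
        options.setAutomaticReconnect(true);
        client.connect(options);
        client.publish("myhostnome/heartbeat", new MqttMessage("Hello World!".getBytes()));
        client.disconnect();
    }
}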
Camel has doTry()/doCatch() functionality.
Why not add a doTry() before calling the to(), catch the exception, and just log it? Something like:
from("timer:heartbeat?period=5000")
.routeId("send heartbeat")
.setBody(simple("Hello World!"))
.doTry()
.to("paho:myhostnome/heartbeat?brokerUrl={{broker.url}}")
.doCatch( java.net.ConnectException.class).log("failed")
.end()
Related
I am developing a Flink application with a Kafka source. I have completed all the required code changes, but it fails with the following error when run on an EMR cluster. Could you let me know a workaround if anyone has already experienced this?
Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:302)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$1(RestClient.java:342)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:500)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:493)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:472)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:413)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:538)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:531)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:111)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:323)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:339)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:685)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:632)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:549)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.util.concurrent.CompletionException: org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: ip-xxx-xxx-xxx.dqa.capitalone.com/xx.xxx.xxx.xxx:38257
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:957)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
... 19 more
Caused by: org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: ip-xxx-xxx-xxx.dqa.capitalone.com/xx.xxx.xxx.xxx:38257
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:336)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:685)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:632)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:549)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:750)
EMR Cluster-id:
It's failing with a connection failure while trying to talk to ip-xxx-xxx-xxx.dqa.capitalone.com/xx.xxx.xxx.xxx:38257. It seems likely that this internal corporate address is not reachable from inside AWS.
Connecting to an Azure database through the OpenTextSocket proxy client works in the HeidiSQL DB connector tool, but it does not work from Java code (a Spring Boot application connecting to the database).
Screenshot (address shown): hostXXXX/1.0.0.1
HeidiSQL uses libpq-12.dll to connect to the Postgres database, while the Java code uses the JDBC driver class, so maybe that is where the issue lies?
Connection URL: jdbc:postgresql://XXX.postgres.database.azure.com:5432/dev
Error:
at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:158)
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:116)
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:79)
... 24 common frames omitted
Caused by: java.net.SocketException: Bad address: connect
at java.base/java.net.PlainSocketImpl.connect0(Native Method)
at java.base/java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:105)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.base/java.net.Socket.connect(Socket.java:609)
at org.postgresql.core.PGStream.createSocket(PGStream.java:231)
at org.postgresql.core.PGStream.<init>(PGStream.java:95)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
... 38 common frames omitted
I am trying to connect to BigQuery on a machine that requires all traffic to go through a proxy. I've set the http.proxyHost and http.proxyPort system properties both in Java and on the command line, but I always get the following error:
com.google.cloud.bigquery.BigQueryException: Error getting access token for service account: Read timed out
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.translate(HttpBigQueryRpc.java:106)
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.create(HttpBigQueryRpc.java:206)
at com.google.cloud.bigquery.BigQueryImpl$5.call(BigQueryImpl.java:319)
at com.google.cloud.bigquery.BigQueryImpl$5.call(BigQueryImpl.java:316)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.bigquery.BigQueryImpl.create(BigQueryImpl.java:315)
at com.google.cloud.bigquery.BigQueryImpl.create(BigQueryImpl.java:290)
at com.ad.google.bigquery.GoogleBigQuery.runGBQQuery_CCPA(GoogleBigQuery.java:172)
at com.ad.gogbq.test.Tester_GBQ.runQuery(Tester_GBQ.java:62)
at com.ad.gogbq.test.Tester_GBQ.main(Tester_GBQ.java:274)
Caused by: java.io.IOException: Error getting access token for service account: Read timed out
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:432)
at com.google.auth.oauth2.OAuth2Credentials.refresh(OAuth2Credentials.java:157)
at com.google.auth.oauth2.OAuth2Credentials.getRequestMetadata(OAuth2Credentials.java:145)
at com.google.auth.http.HttpCredentialsAdapter.initialize(HttpCredentialsAdapter.java:91)
at com.google.cloud.http.HttpTransportOptions$1.initialize(HttpTransportOptions.java:159)
at com.google.api.client.http.HttpRequestFactory.buildRequest(HttpRequestFactory.java:88)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.buildHttpRequest(AbstractGoogleClientRequest.java:430)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:549)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:482)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:599)
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.create(HttpBigQueryRpc.java:204)
... 10 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1334)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1309)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:259)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:108)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:79)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:995)
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:429)
I have successfully used the above system properties to open and read a web page on the machines in question. It seems like BigQuery isn't using these properties; has anyone else seen this?
In your question it seems you were setting http.* properties. Since all Google API access goes over HTTPS, please make sure you set the two properties below:
System.setProperty("https.proxyHost", "localhost");
System.setProperty("https.proxyPort", "3128");
See more on this doc.
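For completeness, a minimal sketch that sets the HTTPS proxy properties before the BigQuery client is built; host and port are placeholders, and on the command line the equivalent would be -Dhttps.proxyHost=... and -Dhttps.proxyPort=...:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;

public class BigQueryProxySketch {
    public static void main(String[] args) {
        // set the HTTPS proxy before any Google client is created, since the
        // service-account token refresh also goes over HTTPS
        System.setProperty("https.proxyHost", "myproxy.example.com"); // placeholder
        System.setProperty("https.proxyPort", "3128");                // placeholder

        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        System.out.println("Project: " + bigquery.getOptions().getProjectId());
    }
}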
I have 2 Spring Boot projects running on the same Tomcat:
REST-API
Background Service (BS)
Purpose
REST-API: The UI application uses this to get things done.
BS: Based on UI activity, this service is executed and gets things done in real time.
Details
BS is written as a while(1) loop, given its purpose.
Tomcat Error
1st Attempt:
Using CATALINA_PID: /home/user/bin/pid/tomcat_pid
Tomcat did not stop in time.
PID file was not removed.
To aid diagnostics a thread dump has been written to standard out
2nd Attempt:
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.<init>(Socket.java:434)
at java.net.Socket.<init>(Socket.java:211)
at org.apache.catalina.startup.Catalina.stopServer(Catalina.java:492)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.stopServer(Bootstrap.java:406)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:495)
The stop command failed. Attempting to signal the process to stop through OS signal.
Tomcat stopped.
Problems:
1. When I restart Tomcat, only the BS project ends up running and REST-API does not get started (I guess this happens because the while(1) loop starts executing and never gives the REST-API project a chance to start).
2. While stopping Tomcat, it does not stop in a single attempt; only when we try to stop it a second time does it stop.
This exception usually indicates that there is no service listening on the IP/port you are trying to connect to, so either you are trying to connect to the wrong IP/port, or the server is not started.
I'm seeing the error below in WSO2 API Manager 1.6 after running wso2server.sh for some time under average load. The error appears repeatedly in the error log.
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
... 13 more
[2015-03-03 22:50:48,527] INFO - ReceiverGroup Resending the failed events....
[2015-03-03 22:50:48,527] ERROR - EventPublisher Cannot send events to TCP,localhost:7612,TCP,localhost:7712
org.wso2.carbon.databridge.agent.thrift.exception.EventPublisherException: Cannot send Events
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.client.ThriftEventPublisher.publish(ThriftEventPublisher.java:93)
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.client.EventPublisher.publishEvent(EventPublisher.java:130)
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.client.EventPublisher.run(EventPublisher.java:117)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.wso2.carbon.databridge.commons.thrift.service.general.ThriftEventTransmissionService$Client.recv_publish(ThriftEventTransmissionService.java:146)
at org.wso2.carbon.databridge.commons.thrift.service.general.ThriftEventTransmissionService$Client.publish(ThriftEventTransmissionService.java:133)
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.client.ThriftEventPublisher.publish(ThriftEventPublisher.java:86)
... 5 more
What could be the reason and how can I fix this?
It seems that you have enabled publishing runtime statistics to BAM. From the log I can see that the server is trying to publish to "TCP,localhost:7612,TCP,localhost:7712", which are the Thrift ports. So if you don't have a BAM server running with Thrift port 7612, this error will occur. You can find more information on publishing runtime statistics to BAM here [1].
[1] https://docs.wso2.com/display/AM160/Publishing+API+Runtime+Statistics