My upload token expired and upon executing
./fortifyclient token -getoken AnalysisUploadToken -url "http://<localhost>/ssc" -user ssc_upload
I receive
An internal error has occurred.
A JAXB unmarshalling exception;
nested exception is javax.xml.bind.UnmarshalException: unexpected element
I would show the rest, but it is approximately 200 lines.
The last time this happened (90 days ago), I used the 4.00 version of ./fortifyclient and it worked.
Any suggestions?
Is the time synchronized between your client and server? I think any operation with fortifyclient will fail if the time on the client and the server differs by more than 5 or 10 minutes.
This includes checking the date and the time zone as well.
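If you want to verify this quickly, it is usually enough to compare the clocks and the NTP status on both machines. A minimal sketch for Linux hosts (the exact commands depend on your distribution):

# run on both the client and the SSC server, then compare the output
date -u
# on systemd-based systems this also shows whether NTP synchronization is active
timedatectl status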
Related
I recently tried to switch my subscriptions in GCP Pub/Sub to the "exactly-once" delivery strategy. However, I started seeing the following warnings ~10 times every 30 minutes in my application logs:
com.google.api.gax.rpc.InvalidArgumentException: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Some acknowledgement ids in the request were invalid. This could be because the acknowledgement ids have expired or the acknowledgement ids were malformed.
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:92)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:98)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:66)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:67)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:574)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:544)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.api.gax.grpc.ChannelPool$ReleasingClientCall$1.onClose(ChannelPool.java:535)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:563)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:744)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:723)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Some acknowledgement ids in the request were invalid. This could be because the acknowledgement ids have expired or the acknowledgement ids were malformed.
at io.grpc.Status.asRuntimeException(Status.java:535)
... 17 more
They're immediately followed by the following INFO log messages in the same thread:
Permanent error invalid ack id message, will not resend
I didn't see any problems caused by these warnings, but it's a bit hard to tell because my application is processing a decent number of messages (~1000/hour). I initially thought that these warnings were just short-term "aftershocks" from switching to the "exactly-once" strategy. However, I waited for about 2 hours and they kept occurring with the same frequency, showing no sign of disappearing. I then disabled the "exactly-once" strategy and they went away immediately after. Can anyone tell me whether these warnings are dangerous, what side effects I can expect, and most importantly how I can get rid of them?
I'm using version 3.4.0 of spring-cloud-gcp-dependencies and spring-cloud-gcp-starter-pubsub. I'm also using Spring Cloud Stream to process the incoming messages and I rely on it to automatically acknowledge the messages.
I have the following configuration set in my application.yaml file:
spring:
  cloud:
    gcp:
      pubsub:
        subscriber:
          executor-threads: 15
          max-ack-extension-period: 23400 # 6 hours and 30 minutes
          acknowledgement-deadline: 600 # Maximum value
For context: The messages in my application represent jobs for execution, and they could take quite a while to finish - hence the 6h30m maximum acknowledgement extension period.
I also saw the following StackOverflow question:
How to handle errors during message acknowledgement using google pubsub java library?
From what I understand, the consequence of these warnings is that the messages will be redelivered to my application, but this is exactly what I want to avoid.
Thanks for the question, Alexander.
The errors you are seeing happen when a modifyAckDeadline or acknowledgement request to the service fails because the acknowledgement ID has already expired. In that case, the service treats the expired acknowledgement ID as invalid, since a newer delivery might already be in flight. This is in line with the guarantees of exactly-once delivery. There are a few possible reasons for it:
The request was delayed on the network, and by the time it arrived at the server the acknowledgement ID lease had already expired.
The client issuing the modifyAckDeadline or acknowledgement requests is overwhelmed (high CPU/memory/network usage), which delays issuing those requests.
I suggest setting min-duration-per-ack-extension to a high value to reduce the issues mentioned above; this helps mitigate cases where the acknowledgement ID lease expires. The highest value you can set for this field is 600 seconds.
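With the spring-cloud-gcp starter, that setting should slot into the same subscriber block as the configuration shown above. A minimal sketch, assuming your starter version exposes it as min-duration-per-ack-extension (check the property list of your release) and that the value is in seconds, matching the 600-second maximum mentioned above:

spring:
  cloud:
    gcp:
      pubsub:
        subscriber:
          min-duration-per-ack-extension: 600 # assumed to be seconds; 600 is the maximum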
Additionally, as mentioned in the other Stack Overflow question, you should consider checking the response of your acknowledgement operations; this can tell your application whether it should expect a redelivery. Sample.
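For reference, here is a hedged sketch of what checking acknowledgement results can look like when using the underlying google-cloud-pubsub client directly (spring-cloud-gcp wraps this client). The project and subscription names are placeholders, and a recent client version with exactly-once support is assumed; with Spring Cloud Stream's automatic acknowledgement you would instead need to hook into however your binder exposes the ack outcome.

import com.google.cloud.pubsub.v1.AckReplyConsumerWithResponse;
import com.google.cloud.pubsub.v1.AckResponse;
import com.google.cloud.pubsub.v1.MessageReceiverWithAckResponse;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;
import java.util.concurrent.Future;

public class ExactlyOnceAckCheck {
    public static void main(String[] args) {
        // placeholder project and subscription ids
        ProjectSubscriptionName subscription =
            ProjectSubscriptionName.of("my-project", "my-subscription");

        // receiver variant that exposes the outcome of each ack as a Future<AckResponse>
        MessageReceiverWithAckResponse receiver =
            (PubsubMessage message, AckReplyConsumerWithResponse consumer) -> {
                // ... process the message ...
                Future<AckResponse> ackResult = consumer.ack();
                try {
                    AckResponse response = ackResult.get();
                    if (response == AckResponse.SUCCESSFUL) {
                        // the ack was accepted; with exactly-once delivery this message will not be redelivered
                    } else {
                        // INVALID (e.g. expired ack id), PERMISSION_DENIED, FAILED_PRECONDITION or OTHER:
                        // expect a redelivery and make sure processing is idempotent
                    }
                } catch (Exception e) {
                    // if the ack outcome cannot be confirmed, treat it as a possible redelivery
                }
            };

        Subscriber subscriber = Subscriber.newBuilder(subscription, receiver).build();
        subscriber.startAsync().awaitRunning();
    }
}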
I'm having trouble with the ThingsBoard (IoT) platform when simulating 7.5K devices sending data to it. I get the following error in the logs as soon as I start sending data (over MQTT):
2020-08-01 01:17:06,946 [ForkJoinPool-12-worker-0] ERROR c.g.c.u.concurrent.AggregateFuture - Got more than one input Future failure. Logging failures after the first
java.lang.IllegalStateException: Deque full
at java.util.concurrent.LinkedBlockingDeque.addLast(LinkedBlockingDeque.java:335)
at java.util.concurrent.LinkedBlockingDeque.add(LinkedBlockingDeque.java:633)
at org.thingsboard.server.dao.util.AbstractBufferedRateExecutor.submit(AbstractBufferedRateExecutor.java:109)
at org.thingsboard.server.dao.nosql.CassandraAbstractDao.executeAsync(CassandraAbstractDao.java:93)
at org.thingsboard.server.dao.nosql.CassandraAbstractDao.executeAsyncWrite(CassandraAbstractDao.java:76)
at org.thingsboard.server.dao.timeseries.CassandraBaseTimeseriesDao.savePartition(CassandraBaseTimeseriesDao.java:434)
at org.thingsboard.server.dao.timeseries.BaseTimeseriesService.saveAndRegisterFutures(BaseTimeseriesService.java:153)
at org.thingsboard.server.dao.timeseries.BaseTimeseriesService.save(BaseTimeseriesService.java:144)
at org.thingsboard.server.service.telemetry.DefaultTelemetrySubscriptionService.saveAndNotify(DefaultTelemetrySubscriptionService.java:124)
at org.thingsboard.rule.engine.telemetry.TbMsgTimeseriesNode.onMsg(TbMsgTimeseriesNode.java:89)
at org.thingsboard.server.actors.ruleChain.RuleNodeActorMessageProcessor.onRuleChainToRuleNodeMsg(RuleNodeActorMessageProcessor.java:107)
at org.thingsboard.server.actors.ruleChain.RuleNodeActor.onRuleChainToRuleNodeMsg(RuleNodeActor.java:97)
at org.thingsboard.server.actors.ruleChain.RuleNodeActor.doProcess(RuleNodeActor.java:60)
at org.thingsboard.server.actors.service.ContextAwareActor.process(ContextAwareActor.java:45)
at org.thingsboard.server.actors.TbActorMailbox.processMailbox(TbActorMailbox.java:121)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
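The top of the trace already shows the mechanics: AbstractBufferedRateExecutor.submit adds each pending Cassandra query to a bounded LinkedBlockingDeque, and LinkedBlockingDeque.add/addLast throw IllegalStateException("Deque full") once the capacity is reached. A minimal, standalone sketch of that failure mode (the capacity of 2 is arbitrary):

import java.util.concurrent.LinkedBlockingDeque;

public class DequeFullDemo {
    public static void main(String[] args) {
        // a bounded deque, like the buffered rate executor's internal queue
        LinkedBlockingDeque<String> queue = new LinkedBlockingDeque<>(2);
        queue.add("query-1");
        queue.add("query-2");
        // offer() would simply return false, but add()/addLast() throw once the deque is full
        queue.add("query-3"); // throws java.lang.IllegalStateException: Deque full
    }
}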
I have tried to Google the reason behind it, but I haven't found anything.
While simulating 5K devices, this error appeared about 3 times a day (over a 4-day period), but it eventually stopped showing up. However, when I increase the number of devices, the error is constant. I'm using Kafka as the broker, but I don't see any Kafka-related errors. I just want to know why the error appears: is it related to memory, or to some other limit?
Thanks in advance
Francisco P
I have created a microservice and it runs fine in production for 3-4 weeks. It's a low-volume service with a few calls every couple of days. After 3-4 weeks, all requests start failing with the message below in the logs:
2019-04-15T22:33:17.628901587Z 2019-04-15 22:33:17.627 WARN 1 --- [nio-8080-exec-5] o.s.jdbc.support.SQLErrorCodesFactory : Error while extracting database name - falling back to empty error codes
2019-04-15T22:33:17.628928571Z org.springframework.jdbc.support.MetaDataAccessException: Error while extracting DatabaseMetaData; nested exception is java.sql.SQLException: JZ0C0: Connection is already closed.
Upon restart, the app works normally for another few weeks.
Any suggestions are appreciated.
My application properties file looks like this:
spring.datasource.max-active=6
spring.datasource.max-idle=0
spring.datasource.min-idle=0
spring.datasource.initial-size=1
spring.datasource.time-between-eviction-runs-millis=30000
spring.datasource.min-evictable-idle-time-millis=60000
spring.datasource.remove-abandoned=true
spring.datasource.remove-abandoned-timeout=120
spring.datasource.validation-query= select 1
spring.datasource.test-while-idle=true
spring.datasource.validation-interval=30
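For reference, the validation settings above only run against connections sitting idle in the pool; a borrow-time check additionally catches connections that were closed on the server side before they are handed back to the application. A hedged sketch, assuming Spring Boot 1.x property binding onto the Tomcat JDBC pool (under Spring Boot 2 the equivalent options live under spring.datasource.tomcat.*):

# hypothetical addition: also validate a connection each time it is borrowed from the pool
spring.datasource.test-on-borrow=true
spring.datasource.validation-query=select 1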
We have a Ruby on Rails application that has been running for some time now, but for the past couple of days we've been getting an HTTP 500 error whenever we try to update any documents. The system is running Ruby 1.9.3 with Unicorn 4.8.2 and nginx 1.2.1, and uses the sunspot_solr 2.1.1 gem for the search index (among others).
The production log tells me that every time someone tries to create or update a document, the Solr server throws a bunch of errors. The most prominent, I think, is this one:
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:164)\n\t... 30 more\nCaused by: java.lang.OutOfMemoryError: Java heap space\n\
Unfortunately, nothing changes if I edit config/sunspot.yml to include "max_memory: 1G":
production:
  solr:
    hostname: localhost
    port: 8080
    log_level: WARNING
    min_memory: 32M
    max_memory: 1G
    path: /solr-4.10.2/default #production #ollection1 #production
Here is the complete production.log entry:
[a086bc878daa0abd367379daade96682] Completed 500 Internal Server Error in 195ms
[a086bc878daa0abd367379daade96682]
RSolr::Error::Http (RSolr::Error::Http - 500 Internal Server Error
Error: {"responseHeader":{"status":500,"QTime":2},"error":{"msg":"Exception writing document id Neuigkeit 393 to the index; possible analysis error.","trace":"org.apache.solr.common.SolrException: Exception writing document id Neuigkeit 393 to the index; possible analysis error.\n\
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:168)\n\
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)\n\tat
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)\n\tat
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:926)\n\tat
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1080)\n\tat
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:692)\n\tat
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)\n\
org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:247)\n\tat
org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:174)\n\
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:99)\n\tat
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)\n\
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\
org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)\n\
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)\n\
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)\n\
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)\n\
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)\n\
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)\n\
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)\n\
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)\n\
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)\n\
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)\n\
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)\n\
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)\n\
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1003)\n\
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:579)\n\
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)\n\
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\
java.lang.Thread.run(Thread.java:745)\nCaused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed\n\
org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:698)\n\
org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:712)\n\
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1507)\n\
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:240)\n\
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:164)\n\t... 30 more\nCaused by: java.lang.OutOfMemoryError: Java heap space\n\
org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:207)\n\
org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.newInstance(FreqProxTermsWriterPerField.java:230)\n\
org.apache.lucene.index.ParallelPostingsArray.grow(ParallelPostingsArray.java:48)\n\
org.apache.lucene.index.TermsHashPerField$PostingsBytesStartArray.grow(TermsHashPerField.java:252)\n\
org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:292)\n\
org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)\n\
org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:659)\n\
org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:359)\n\
org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:318)\n\
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:239)\n\
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:454)\n\
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1511)\n\
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:240)\n\
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:164)\n\
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)\n\
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)\n\
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:926)\n\
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1080)\n\
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:692)\n\
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)\n\
org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:247)\n\
org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:174)\n\
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:99)\n\
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)\n\
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\
org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)\n\
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)\n\
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)\n\
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)\n\
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)\n\
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)\n\
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)\n","code":500}}
URI: http://localhost:8080/solr-4.10.2/default/update?wt=json
EDIT: I'm an idiot. The Solr server isn't contained in the Unicorn service. Restarting it via 'service tomcat7 restart' loaded the updated config/sunspot.yml and solved the problem.
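One side note on the heap-space part: for a Solr instance hosted in Tomcat, the JVM heap is normally controlled by the container's own options (CATALINA_OPTS / JAVA_OPTS) rather than by sunspot.yml; as I understand it, min_memory/max_memory there only apply when the sunspot_solr gem starts its own bundled Solr. A hedged sketch of where such a setting usually lives on a Debian-style tomcat7 install (file locations and variable names vary):

# e.g. in $CATALINA_HOME/bin/setenv.sh, or via JAVA_OPTS in /etc/default/tomcat7
CATALINA_OPTS="$CATALINA_OPTS -Xms32m -Xmx1g"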
We have been facing an issue where a simple EJB-QL query runs out of transaction time, yet when the same query (the WebLogic-generated SQL version of the EJB-QL) is run from a SQL command prompt it takes far less time than the configured JTA timeout (it executes in less than 5% of the JTA time).
Errors: A few times the error thrown is:
javax.ejb.FinderException: Exception in 'finderMethodName' while using result set: 'weblogic.jdbc.wrapper.ResultSet_oracle_jdbc_driver_OracleResultSetImpl#9c18f'
java.sql.SQLException: Unexpected exception while enlisting XAConnection java.sql.SQLException: Transaction rolled back: Transaction timed out after 301 seconds
Note: JTA is configured to 300 seconds.
Most of the time the error thrown is:
javax.ejb.FinderException: Exception in 'finderMethodName' while using result set: 'weblogic.jdbc.wrapper.ResultSet_oracle_jdbc_driver_OracleResultSetImpl#a5af'
java.sql.SQLException: Result set already closed
You should increase the timeout in the container.
Go to Service Configurations -> Other Services.
Click JTA Configuration (under Other Services).
You will then see the Timeout Seconds setting at the top of the page.
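If raising the container-wide timeout is too coarse, WebLogic also allows a per-bean override of the transaction timeout in weblogic-ejb-jar.xml. A minimal sketch (the bean name is a placeholder, and the root element/namespace depends on your WebLogic version; older releases use a DTD instead):

<weblogic-ejb-jar xmlns="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar">
  <weblogic-enterprise-bean>
    <ejb-name>MyEntityBean</ejb-name> <!-- placeholder bean name -->
    <transaction-descriptor>
      <!-- per-bean transaction timeout, in seconds -->
      <trans-timeout-seconds>600</trans-timeout-seconds>
    </transaction-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>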