I have several serviceTasks in my sequential subprocess. Any of these tasks can throw an exception, and I need to catch it, do some logic, and go back to the iterative process.
I have tried a bpmn:boundaryEvent like this:
but it generates:
Caused by: org.activiti.engine.ActivitiException: Errors while parsing:
[Validation set: 'activiti-executable-process' | Problem: 'activiti-seq-flow-invalid-target'] : Invalid target for sequenceflow, the target isn't defined in the same scope as the source - [Extra info : processDefinitionId = notificationsNewsSendProc | id = sid-db0a2e21-f460-4e32-84f4-f2ca88481434 | ] ( line: 100, column: 223)
The id sid-db0a2e21-f460-4e32-84f4-f2ca88481434 is the sequenceFlow from the error event to the external task.
I read at https://www.activiti.org/userguide/#exceptionMapping that I can route an exception from a service task, but that is not what I want, because I don't want to stop the subprocess. I just need to catch the exception and go back.
Please help me with the XML for a diagram that catches these exceptions.
I am using the Drools BPMN rule engine for business-rule validation and am getting the errors below intermittently in my application.
Exception while executing rules. [determine_flow_name:1 - Determine Flow Name Setup:3] -- [poareqflow:6 - POA Flow Setup:17] -- [poareqflow:6 - App Request:43] -- [poareqflow:6 - SDM Mapping:28] -- [poareqflow:6 - SDMEnrichment:41] -- null
I am looking for the following information:
How can I find out more about what this null is? I changed the log level to TRACE, but there is no more information in the logs.
What are these numbers after the colons, e.g. SDMEnrichment:41? (They are not line numbers, for sure.)
Any help is much appreciated.
I am developing a simple Kafka Streams application which extracts messages from a topic and puts them into another topic after transformation. I am using IntelliJ for my development.
When I debug/run this application, it works perfectly if my IDE and the Kafka server are sitting on the SAME machine
(i.e. with the BOOTSTRAP_SERVERS_CONFIG = localhost:9092 and
SCHEMA_REGISTRY_URL_CONFIG = localhost:8081)
However, when I try to use another machine to do the development
(i.e. with the BOOTSTRAP_SERVERS_CONFIG = XXX.XXX.XXX:9092 and
SCHEMA_REGISTRY_URL_CONFIG = XXX.XXX.XXX:8081 where XXX.XXX.XXX is the
IP address of my Kafka server),
the debug process runs without problems the first time. However, when I run it a second time after resetting the offset, I receive the following error:
ERROR stream-thread [main] Failed to delete the state directory. (org.apache.kafka.streams.processor.internals.StateDirectory:297)
java.nio.file.DirectoryNotEmptyException: \tmp\kafka-streams\my_application_id\0_0
Exception in thread "main" org.apache.kafka.streams.errors.StreamsException: java.nio.file.DirectoryNotEmptyException:
If I change my_application_id to my_application_id2 and run it, it works again the first time, but I receive the error again when I run it once more.
I have the following code as the last statement in my application:
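// Close the Kafka Streams client cleanly when the JVM shuts down.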
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
Any advice on how to solve this problem?
UPDATE:
I have reviewed the state directory created on my development machine (Windows platform): if I delete these directories manually before running the second time, no error occurs. I have tried running my IDE as Administrator, because I thought this could be something about permissions on the folder; however, this doesn't help.
Full stack trace for reference:
INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser:109)
INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser:110)
INFO stream-thread [main] Deleting state directory 0_0 for task 0_0 as user calling cleanup. (org.apache.kafka.streams.processor.internals.StateDirectory:281)
Disconnected from the target VM, address: '127.0.0.1:16552', transport: 'socket'
Exception in thread "main" org.apache.kafka.streams.errors.StreamsException: java.nio.file.DirectoryNotEmptyException: C:\workspace\bennychan\kafka-streams\my_application_001\0_0
at org.apache.kafka.streams.processor.internals.StateDirectory.clean(StateDirectory.java:231)
at org.apache.kafka.streams.KafkaStreams.cleanUp(KafkaStreams.java:931)
at com.macroviewhk.financialreport.simpleStream.start(simpleStream.java:60)
at com.macroviewhk.financialreport.simpleStream.main(simpleStream.java:45)
Caused by: java.nio.file.DirectoryNotEmptyException: C:\workspace\bennychan\kafka-streams\my_application_001\0_0
at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:266)
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:651)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:634)
at java.nio.file.Files.walkFileTree(Files.java:2688)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at org.apache.kafka.common.utils.Utils.delete(Utils.java:634)
at org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:287)
at org.apache.kafka.streams.processor.internals.StateDirectory.clean(StateDirectory.java:228)
... 3 more
ERROR stream-thread [main] Failed to delete the state directory. (org.apache.kafka.streams.processor.internals.StateDirectory:297)
java.nio.file.DirectoryNotEmptyException: C:\workspace\bennychan\kafka-streams\my_application_001\0_0
at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:266)
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:651)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:634)
at java.nio.file.Files.walkFileTree(Files.java:2688)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at org.apache.kafka.common.utils.Utils.delete(Utils.java:634)
at org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:287)
at org.apache.kafka.streams.processor.internals.StateDirectory.clean(StateDirectory.java:228)
at org.apache.kafka.streams.KafkaStreams.cleanUp(KafkaStreams.java:931)
at com.macroviewhk.financialreport.simpleStream.start(simpleStream.java:60)
at com.macroviewhk.financialreport.simpleStream.main(simpleStream.java:45)
UPDATE 2:
After another detailed check, the line below is the one throwing the IOException:
Files.walkFileTree(file.toPath(), new SimpleFileVisitor<Path>() {
This line is located in kafka-clients-1.1.0.jar, in org.apache.kafka.common.utils.Utils.
Maybe this is a problem with the Windows system (sorry, I am not an experienced Java programmer).
For googlers:
I'm currently using this Scala code to help Windows users handle deletion of the state store.
import java.io.File
import org.apache.commons.io.FileUtils // from Apache Commons IO

// On Windows, wipe and recreate the state directory manually instead of calling
// streams.cleanUp(), which fails there with DirectoryNotEmptyException.
if (System.getProperty("os.name").toLowerCase.contains("windows")) {
  logger.info("WINDOWS OS MODE - Cleanup state store.")
  try {
    FileUtils.deleteDirectory(new File("/tmp/kafka-streams/" + config.getProperty("application.id")))
    FileUtils.forceMkdir(new File("/tmp/kafka-streams/" + config.getProperty("application.id")))
  } catch {
    case e: Exception => logger.error(e.toString)
  }
} else {
  streams.cleanUp()
}
I agree with @ideano1 that it seems to be related to https://issues.apache.org/jira/browse/KAFKA-6647 -- what you can try is to explicitly call KafkaStreams#cleanUp() between tests. It's unclear why there are issues on Windows; at the moment, all testing happens on Linux.
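As a minimal sketch of that suggestion (the bootstrap server, topic names, and pass-through topology here are illustrative assumptions, not code from the question), you can call cleanUp() while the instance is stopped, so every run starts from a fresh state directory:
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class CleanRunExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my_application_id");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // placeholder pass-through topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.cleanUp(); // must be called while the instance is not running; wipes local state
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}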
This is what we've implemented that works on Windows. This is written in Kotlin.
Version used: kafka-streams-test-utils:2.3.0.
The key is to catch the exception. The tests will pass as long as you catch the exception raised by testDriver.close(), even if you don't delete the directory. However, cleaning up the directory makes your unit tests independent and repeatable.
import java.io.File
import org.apache.commons.io.FileUtils
import org.apache.kafka.streams.StreamsConfig
import org.junit.jupiter.api.AfterEach
import org.junit.jupiter.api.BeforeEach

val directory = "test"

@BeforeEach
fun setup() {
    // other code omitted for setting the props
    props.setProperty(StreamsConfig.STATE_DIR_CONFIG, directory)
}

@AfterEach
fun tearDown() {
    try {
        testDriver.close()
    } catch (exception: Exception) {
        // There is a bug on Windows that prevents the state directory from being
        // deleted properly; for the test to pass, delete it manually here.
        FileUtils.deleteDirectory(File(directory))
    }
}
For tests (but not only, if you can afford it), one could use an IN_MEMORY("in-memory") store for each KTable created (directly or indirectly, e.g. by aggregations); this avoids the creation of any state directory, so the error no longer occurs.
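A minimal sketch of that approach (the topic and store names are made-up examples, and a word-count-style aggregation stands in for whatever KTable you build): materializing the table with Stores.inMemoryKeyValueStore keeps its state off disk.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.state.Stores;

public class InMemoryStoreExample {
    public static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();

        // Back the KTable with an in-memory store instead of RocksDB,
        // so no state directory is written for it.
        KTable<String, Long> counts = builder
                .stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey()
                .count(Materialized.<String, Long>as(Stores.inMemoryKeyValueStore("counts-store"))
                        .withKeySerde(Serdes.String())
                        .withValueSerde(Serdes.Long()));

        counts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));
        return builder;
    }
}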
My web application runs on Spring (MVC) 4.2.9.RELEASE, Hibernate 5.1.3.Final, Spring Data 1.8.2.RELEASE, and MS SQL Server (2014).
In the Spring context, I have the following exception handler:
<bean id="simpleMappingExceptionResolver" class="myproject.CustomMappingExceptionResolver">
...
</bean>
to catch and save stack traces. I am able to see the following deep in a long stack trace printed in the logs:
......
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136)
... 113 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 73) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1535)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:467)
How can I find the following exception class in the above exception handler (given an Exception instance):
com.microsoft.sqlserver.jdbc.SQLServerException
AND the corresponding message:
Transaction (Process ID 73) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
If I understood you correctly, you need to catch a nested exception. That's a bit tricky, but doable. You need a try-catch block for the top-level exception that you expect. In the catch clause, you can use exception.getCause() to step down one nesting level at a time and check whether that level is an instanceof your SQL exception class. You can also check the message, if necessary, using getMessage(). If the exception fits your criteria, congratulations, you caught it. If not, simply rethrow it.
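Here is a minimal sketch of that cause-chain walk (the class and method names, and the "deadlock" message check, are illustrative assumptions, not code from your application; it also assumes the SQL Server JDBC driver is on the classpath):
public class DeadlockDetector {

    /**
     * Walks the cause chain of the given throwable and returns the message of the
     * first nested SQLServerException mentioning a deadlock, or null if none is found.
     */
    static String findSqlServerDeadlock(Throwable top) {
        for (Throwable t = top; t != null; t = t.getCause()) {
            if (t instanceof com.microsoft.sqlserver.jdbc.SQLServerException
                    && t.getMessage() != null
                    && t.getMessage().contains("deadlock")) {
                return t.getMessage();
            }
            if (t.getCause() == t) {
                break; // defensive: stop on self-referential causes
            }
        }
        return null;
    }
}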
Two things to keep in mind:
this approach may lead to poor performance if many exceptions occur and only a small fraction of them actually match your criteria.
if an exception has no cause, e.getCause() returns null, so make sure your loop terminates; also guard against cycles in the cause chain to avoid infinite loops.
I'm using GraphHopper in the following way:
GraphHopper hopper = new GraphHopper().forServer();
hopper.setCHEnable(false);
hopper.setGraphHopperLocation(GraphHoperMasterFile);
hopper.setOSMFile(OSMFile);
hopper.setEncodingManager(new EncodingManager("car,bike"));
hopper.importOrLoad();
GHRequest req = new GHRequest()
        .addPoint(new GHPoint(latFrom, lonFrom))
        .addPoint(new GHPoint(latTo, lonTo))
        .setVehicle("car")
        .setWeighting("fastest")
        .setAlgorithm(AlgorithmOptions.ASTAR_BI);
req.getHints().put("pass_through", true);
GHResponse res = hopper.route(req);
I obtained the GraphHoperMasterFile by downloading the zip from https://github.com/graphhopper/graphhopper/blob/0.5/docs/core/routing.md.
I obtained the .osm file from http://download.geofabrik.de/europe/great-britain/england/greater-london.html.
I also added the Maven dependency from http://mvnrepository.com/artifact/com.graphhopper/graphhopper-web/0.5.0. I get the sense that it's wrong to have the Maven dependency and also reference the graphHopperLocation, but I'm not sure.
When I run this code I sometimes (not all the time) get the following errors:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: To avoid reading partial data we need to obtain the read lock but it failed.
Caused by: java.lang.RuntimeException: To avoid reading partial data we need to obtain the read lock but it failed.
Caused by: java.nio.channels.OverlappingFileLockException
When it works I get the following:
2016-01-28 08:48:14,551 [pool-1-thread-8] INFO com.graphhopper.GraphHopper - version 0.5.0|2015-08-12T12:33:51+0000 (4,12,3,2,2,1)
2016-01-28 08:48:14,551 [pool-1-thread-8] INFO com.graphhopper.GraphHopper - graph car,bike|RAM_STORE|2D|NoExt|4,12,3,2,2, details:edges:387 339(12MB), nodes:291 068(4MB), name:(2MB), geo:960 828(4MB), bounds:-0.5177850019436703,0.33744369456418666,51.28324388600686,51.69833101402963
I can see where the error is thrown here: https://github.com/graphhopper/graphhopper/blob/master/core/src/main/java/com/graphhopper/GraphHopper.java
How can I stop this error from happening?
My CF9 application, running on a Windows server, pops mail. When I attempt to retrieve the entire body of a message, I sometimes get the following error...
Error:
An exception occurred while retrieving mail.
The cause of this exception was: java.lang.ClassCastException: javax.mail.internet.MimeMessage cannot be cast to javax.mail.internet.MimeBodyPart.
Location:
Line 335 in controllers\Submissions.cfc
Not sure if this is pertinent, but FYI every message will have an image attached and the whole process usually works fine. This problem is intermittent.
My Questions
Any idea what causes this?
Any idea how to catch and resolve this issue?
I suspect I'll need to drop down into Java, but I'm not sure where to start.
Code Fragments
<cfscript>
    // set up an attributes struct shared by all cfpop calls
    CFPopAttributes = {
        server = request.pop.server,
        port = request.pop.port,
        username = request.pop.username,
        password = request.pop.password,
        timeout = 300
    };
</cfscript>
<cfpop
action="getall"
name="entireEmail"
uid="#uid#"
attachmentpath="#originalsPath#"
attributecollection="#CFPopAttributes#" // Line 335
generateuniquefilenames="true"
/>
NOTE: I added the comment "Line 335" above to show exactly where in the code the template breaks. If I move attributecollection up or down (before/after other attributes), the error always points to the attributecollection line.
Stack Trace
struct [Filtered - 1 of 8 keys hidden]
Detail: An exception occurred while invoking an event handler method from Application.cfc. The method name is: onRequest.
Message: Event handler exception.
RootCause:
[struct]
Detail: The cause of this exception was: java.lang.ClassCastException: javax.mail.internet.MimeMessage cannot be cast to javax.mail.internet.MimeBodyPart.
Message: An exception occurred while retrieving mail.
RootCause:
[struct]
Message: javax.mail.internet.MimeMessage cannot be cast to javax.mail.internet.MimeBodyPart
StackTrace: java.lang.ClassCastException: javax.mail.internet.MimeMessage cannot be cast to javax.mail.internet.MimeBodyPart
at coldfusion.mail.EmailTable.getAttachmentName(EmailTable.java:819)
at coldfusion.mail.EmailTable.populate(EmailTable.java:283)
at coldfusion.mail.PopImpl.getMails(PopImpl.java:241)
at coldfusion.tagext.net.PopTag$1.run(PopTag.java:433)
at java.security.AccessController.doPrivileged(Native Method)
at coldfusion.tagext.net.PopTag.doStartTag(PopTag.java:429)
at coldfusion.runtime.CfJspPage._emptyTcfTag(CfJspPage.java:2799)
at cfSubmissions2ecfc1952269377$funcGETEMAIL.runFunction(D:\home\wwwroot\controllers\Submissions.cfc:335)