When I try to start a task using taskService.start(task.getId(), "krisv");, I get No query defined for that name [getAuditTaskById]. The BPMN file is very similar to the Evaluation.bpmn sample file. My current version of jBPM is 6.2.
The code snippet is the following:
List<TaskSummary> tasks = taskService.getTasksAssignedAsPotentialOwner("krisv", "en-UK");
if (tasks.size() > 0) {
    TaskSummary task = tasks.get(0);
    System.out.println("Task id: " + task.getId());
    System.out.println("'krisv' completing task " + task.getName() + ": " + task.getDescription());
    System.out.println("Task status: " + task.getStatus().name());
    System.out.println("Actual owner: " + task.getActualOwner().getId());
    taskService.start(task.getId(), "krisv");
    Map<String, Object> results = new HashMap<String, Object>();
    results.put("performance", "exceeding");
    taskService.complete(task.getId(), "krisv", results);
    System.out.println("Completed task");
} else {
    System.out.println("No tasks!");
}
The code above is almost a replica of the ProcessTest.java file in the samples folder. ProcessTest.java completes the tasks fine, but the exact same code doesn't work in my custom Java file.
Also, the current task's status is "Reserved", if that is of any help. Thanks!
The query is defined in the jbpm-human-task-audit jar; you need that on your classpath:
https://github.com/droolsjbpm/jbpm/blob/6.2.0.Final/jbpm-human-task/jbpm-human-task-audit/src/main/resources/META-INF/TaskAuditorm.xml#L40
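If you build with Maven, the dependency would be something like this (a sketch; match the version to your jBPM version):

<dependency>
    <groupId>org.jbpm</groupId>
    <artifactId>jbpm-human-task-audit</artifactId>
    <version>6.2.0.Final</version>
</dependency>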
And you need to make sure the TaskAuditorm.xml file is referenced in your persistence.xml, like for example here:
https://github.com/droolsjbpm/jbpm/blob/6.2.0.Final/jbpm-test/src/main/resources/META-INF/persistence.xml#L15
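The relevant entry would look something like this (a minimal sketch; the persistence-unit name and the rest of the mapping files depend on your own setup):

<persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
    ...
    <mapping-file>META-INF/TaskAuditorm.xml</mapping-file>
    ...
</persistence-unit>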
I have the following task:
Create a job with SQL request to Hive table;
Run this job on remote Flink cluster;
Collect the result of this job in file (HDFS is preferable).
Note
Because it is necessary to run this job on a remote Flink cluster, I cannot use TableEnvironment in a simple way. This problem is mentioned in this ticket: https://issues.apache.org/jira/browse/FLINK-18095. For my current solution I use the advice from http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Table-Environment-for-Remote-Execution-td35691.html.
Code
EnvironmentSettings batchSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();

// create remote env
StreamExecutionEnvironment streamExecutionEnvironment = StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081, "/path/to/my/jar");

// create StreamTableEnvironment
TableConfig tableConfig = new TableConfig();
ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
CatalogManager catalogManager = CatalogManager.newBuilder()
        .classLoader(classLoader)
        .config(tableConfig.getConfiguration())
        .defaultCatalog(
                batchSettings.getBuiltInCatalogName(),
                new GenericInMemoryCatalog(
                        batchSettings.getBuiltInCatalogName(),
                        batchSettings.getBuiltInDatabaseName()))
        .executionConfig(streamExecutionEnvironment.getConfig())
        .build();
ModuleManager moduleManager = new ModuleManager();
BatchExecutor batchExecutor = new BatchExecutor(streamExecutionEnvironment);
FunctionCatalog functionCatalog = new FunctionCatalog(tableConfig, catalogManager, moduleManager);
StreamTableEnvironmentImpl tableEnv = new StreamTableEnvironmentImpl(
        catalogManager,
        moduleManager,
        functionCatalog,
        tableConfig,
        streamExecutionEnvironment,
        new BatchPlanner(batchExecutor, tableConfig, functionCatalog, catalogManager),
        batchExecutor,
        false);

// configure HiveCatalog
String name = "myhive";
String defaultDatabase = "default";
String hiveConfDir = "/path/to/hive/conf"; // a local path
HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir);
tableEnv.registerCatalog("myhive", hive);
tableEnv.useCatalog("myhive");

// request to Hive
Table table = tableEnv.sqlQuery("select * from myhive.`default`.test");
Question
At this step I can call the table.execute() method and then obtain a CloseableIterator via the collect() method. But in my case the request may return a large number of rows, and it would be ideal to collect them into a file (ORC in HDFS).
How can I reach my goal?
Table.execute().collect() returns the result of the view to your client side for interactive purposes. In your case, you can use the filesystem connector and INSERT INTO to write the view to a file. For example:
// create a filesystem table
tableEnvironment.executeSql("CREATE TABLE MyUserTable (\n" +
" column_name1 INT,\n" +
" column_name2 STRING,\n" +
" ..." +
" \n" +
") WITH (\n" +
" 'connector' = 'filesystem',\n" +
" 'path' = 'hdfs://path/to/your/file',\n" +
" 'format' = 'orc' \n" +
")");
// submit the job
tableEnvironment.executeSql("insert into MyUserTable select * from myhive.`default`.test");
See more about the filesystem connector: https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/filesystem.html
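One caveat: executeSql() submits the INSERT job asynchronously and returns right away. If your program needs to block until the files are written, you can wait on the JobClient of the returned TableResult. A sketch against the Flink 1.11 API (the exact call differs in later versions):

// submit the job and block until it finishes
TableResult result = tableEnvironment.executeSql("insert into MyUserTable select * from myhive.`default`.test");
result.getJobClient().get()
        .getJobExecutionResult(Thread.currentThread().getContextClassLoader())
        .get();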
When calling
twitter.list().getUserListMemberships(userId, 1000,1,false);
I get this error:
java.lang.NoSuchMethodError: twitter4j.api.ListsResources.getUserListMemberships(JIJ)Ltwitter4j/PagableResponseList;
I read the javadoc for this method (see here), and I don't see what I am doing wrong. I did verify that my dependencies are OK. Any clue?
I just tried Twitter4j 4.0.2 and that method doesn't exist, but on Twitter4j 4.0.4 it does. So, are you sure you are using 4.0.4?
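One quick way to verify which version is actually on your classpath is to print the library's own version constant at runtime (a small sketch; twitter4j.Version ships inside twitter4j-core):

// prints the Twitter4J version that is actually loaded at runtime
System.out.println("Twitter4J version: " + twitter4j.Version.getVersion());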
Keep in mind too that you are using cursor 1 on the first call, but you have to use -1. I just ran this code and it works:
User user = twitter.showUser("lt_deportes");
long cursor = -1;
PagableResponseList<UserList> lists;
do {
    lists = twitter.list().getUserListMemberships(user.getId(), 1000, cursor, false);
    for (UserList list : lists) {
        System.out.println("id:" + list.getId() + ", name:" + list.getName() + ", description:"
                + list.getDescription() + ", slug:" + list.getSlug());
    }
} while ((cursor = lists.getNextCursor()) != 0);
I'm sure this question will be silly or annoying on multiple levels....
I am using SVNKit in Java.
I want to get the list of files committed in a particular commit. I have the revision ID. Normally I would run something like
svn log url/to/repository -qv -r12345
and I would get the list of changed paths as normal.
I can't puzzle out how to do a similar thing in SVNKit. Any tips? :)
final SvnOperationFactory svnOperationFactory = new SvnOperationFactory();
final SvnLog log = svnOperationFactory.createLog();
log.setSingleTarget(SvnTarget.fromURL(url));
log.addRange(SvnRevisionRange.create(SVNRevision.create(12345), SVNRevision.create(12345)));
log.setDiscoverChangedPaths(true);
final SVNLogEntry logEntry = log.run();

final Map<String, SVNLogEntryPath> changedPaths = logEntry.getChangedPaths();
for (Map.Entry<String, SVNLogEntryPath> entry : changedPaths.entrySet()) {
    final SVNLogEntryPath svnLogEntryPath = entry.getValue();
    System.out.println(svnLogEntryPath.getType() + " " + svnLogEntryPath.getPath() +
            (svnLogEntryPath.getCopyPath() == null ?
                    "" : (" from " + svnLogEntryPath.getCopyPath() + ":" + svnLogEntryPath.getCopyRevision())));
}
If you want to run one log request for a whole revision range, you should use the log.setReceiver() call with your own receiver implementation.
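For example, a receiver-based variant might look like this (a sketch reusing the url and svnOperationFactory from above; the revision range is made up for illustration):

final SvnLog log = svnOperationFactory.createLog();
log.setSingleTarget(SvnTarget.fromURL(url));
log.addRange(SvnRevisionRange.create(SVNRevision.create(12340), SVNRevision.create(12345)));
log.setDiscoverChangedPaths(true);
// the receiver is invoked once per log entry in the range
log.setReceiver(new ISvnObjectReceiver<SVNLogEntry>() {
    @Override
    public void receive(SvnTarget target, SVNLogEntry logEntry) throws SVNException {
        System.out.println("r" + logEntry.getRevision() + ": " + logEntry.getChangedPaths().keySet());
    }
});
log.run();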
Does anyone know how to modify the Jenkins/Hudson node labels in a non-manual way? I mean, through an API like the CLI API that this tool offers (without restarting Jenkins/Hudson, of course).
My guess is that the best option is using a Groovy script to enter into the Jenkins/Hudson guts. Executing something like:
java -jar jenkins-cli.jar -s HUDSON_URL:8080 groovy /path/to/groovy.groovy
Being the content of that script something like:
for (aSlave in hudson.model.Hudson.instance.slaves) {
    labels = aSlave.getAssignedLabels()
    println labels
    aSlave.setLabel("blabla") // this method doesn't exist, is there any other way???
}
Thanks in advance!
Victor
Note: the other answers are a bit old, so it could be that the API has appeared since then.
Node labels are accessed in the API as a single string, just like in the Configure screen.
To read and write labels: Node.getLabelString() and Node.setLabelString(String).
Note that you can get the effective labels as well via: Node.getAssignedLabels(), which returns a Collection of LabelAtom that includes dynamically computed labels such as the 'self-label' (representing the node name itself).
Last, these methods on the Node class are directly accessible from the slave objects too, e.g. in a System Groovy Script:
hudson = hudson.model.Hudson.instance
hudson.slaves.findAll { it.nodeName.equals("slave4") }.each { slave ->
    print "Slave $slave.nodeName : Labels: $slave.labelString"
    slave.labelString = slave.labelString + " " + "offline"
    println " --> New labels: $slave.labelString"
}
hudson.save()
I've found a way to do this using the Groovy Postbuild Plugin.
I have a Jenkins job that takes a few parameters (NodeToUpdate, LabelName, DesiredState) and executes this content in the Groovy Postbuild Plugin:
nodeName = manager.envVars['NodeToUpdate']
labelName = manager.envVars['LabelName']
set = manager.envVars['DesiredState']

for (node in jenkins.model.Jenkins.instance.nodes) {
    if (node.getNodeName().equals(nodeName)) {
        manager.listener.logger.println("Found node to update: " + nodeName)
        oldLabelString = node.getLabelString()
        if (set.equals('true')) {
            if (!oldLabelString.contains(labelName)) {
                manager.listener.logger.println("Adding label '" + labelName + "' to node " + nodeName)
                newLabelString = oldLabelString + " " + labelName
                node.setLabelString(newLabelString)
                node.save()
            } else {
                manager.listener.logger.println("Label '" + labelName + "' already exists on node " + nodeName)
            }
        } else {
            if (oldLabelString.contains(labelName)) {
                manager.listener.logger.println("Removing label '" + labelName + "' from node " + nodeName)
                newLabelString = oldLabelString.replaceAll(labelName, "")
                node.setLabelString(newLabelString)
                node.save()
            } else {
                manager.listener.logger.println("Label '" + labelName + "' doesn't exist on node " + nodeName)
            }
        }
    }
}
I've not seen a way yet to change the slave label either.
I've taken to editing the main config.xml file and issuing a reload from the CLI.
This has its own problems though: any jobs currently running are lost until the next Jenkins restart. See https://issues.jenkins-ci.org/browse/JENKINS-3265
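For reference, the reload itself can be issued from the CLI like this (a sketch; adjust the URL to your own instance):

java -jar jenkins-cli.jar -s http://localhost:8080 reload-configuration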
Using the "Network Updates API" example at the following link I am able to post network updates with no problem using client.postNetworkUpdate(updateText).
http://code.google.com/p/linkedin-j/wiki/GettingStarted
So posting works great. However, posting an update does not return an "UpdateKey", which is used to retrieve stats for the post itself such as comments, likes, etc. Without the UpdateKey I cannot retrieve stats. So what I would like to do is post, then retrieve the last post using the getNetworkUpdates() function; that retrieval will include the UpdateKey that I need later to retrieve stats. Here's a sample script in Java showing how to get network updates, but I need to do this in ColdFusion instead of Java.
Network network = client.getNetworkUpdates(EnumSet.of(NetworkUpdateType.STATUS_UPDATE));
System.out.println("Total updates fetched:" + network.getUpdates().getTotal());
for (Update update : network.getUpdates().getUpdateList()) {
    System.out.println("-------------------------------");
    System.out.println(update.getUpdateKey() + ":" + update.getUpdateContent().getPerson().getFirstName()
            + " " + update.getUpdateContent().getPerson().getLastName()
            + "->" + update.getUpdateContent().getPerson().getCurrentStatus());
    if (update.getUpdateComments() != null) {
        System.out.println("Total comments fetched:" + update.getUpdateComments().getTotal());
        for (UpdateComment comment : update.getUpdateComments().getUpdateCommentList()) {
            System.out.println(comment.getPerson().getFirstName() + " " + comment.getPerson().getLastName()
                    + "->" + comment.getComment());
        }
    }
}
Anyone have any thoughts on how to accomplish this using ColdFusion?
Thanks
I have not used that api, but I am guessing you could use the first two lines to grab the number of updates. Then use the overloaded client.getNetworkUpdates(start, end) method to retrieve the last update and obtain its key.
Totally untested, but something along these lines:
<cfscript>
    ...
    // not sure about accessing the STATUS_UPDATE enum. One of these should work:
    // method 1
    STATUS_UPDATE = createObject("java", "com.google.code.linkedinapi.client.enumeration.NetworkUpdateType$STATUS_UPDATE");
    // method 2
    NetworkUpdateType = createObject("java", "com.google.code.linkedinapi.client.enumeration.NetworkUpdateType");
    STATUS_UPDATE = NetworkUpdateType.valueOf("STATUS_UPDATE");

    enumSet = createObject("java", "java.util.EnumSet");
    network = yourClientObject.getNetworkUpdates(enumSet.of(STATUS_UPDATE));
    numOfUpdates = network.getUpdates().getTotal();

    // Add error handling in case numOfUpdates = 0
    result = yourClientObject.getNetworkUpdates(numOfUpdates, numOfUpdates);
    lastUpdate = result.getUpdates().getUpdateList().get(0);
    key = lastUpdate.getUpdateKey();
</cfscript>
You can also use the socialauth library to retrieve updates and post status updates on LinkedIn.
http://code.google.com/p/socialauth