I'm using JNA to read some event logs written by my application. I'm mostly interested in the description strings data.
I'm using the code below:
private static void readLog() {
    Advapi32Util.EventLogIterator iter = new Advapi32Util.EventLogIterator("Application");
    while (iter.hasNext()) {
        Advapi32Util.EventLogRecord record = iter.next();
        System.out.println("------------------------------------------------------------");
        System.out.println(record.getRecordNumber()
                + ": Event ID: " + record.getInstanceId()
                + ", Event Type: " + record.getType()
                + ", Event Strings: " + Arrays.toString(record.getStrings())
                + ", Data: " + record.getRecord().toString());
        System.out.println();
    }
}
An example event my application produces:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-MyApp" Guid="{4d5ae6a1-c7c8-4e6d-b840-4d8080b42e1b}" />
    <EventID>201</EventID>
    <Version>0</Version>
    <Level>2</Level>
    <Task>2</Task>
    <Opcode>30</Opcode>
    <Keywords>0x4010000001000000</Keywords>
    <TimeCreated SystemTime="2021-02-19T15:16:03.675690900Z" />
    <EventRecordID>3622</EventRecordID>
    <Correlation ActivityID="{e6ee2b3b-9b9a-4c9d-b39b-6c2bf2550000}" />
    <Execution ProcessID="2108" ThreadID="8908" />
    <Channel>Microsoft-Windows-MyApp/Operational</Channel>
    <Computer>computer</Computer>
    <Security UserID="S-1-5-20" />
  </System>
  <UserData>
    <EventInfo xmlns="aag">
      <Username>username</Username>
      <IpAddress>127.0.0.1</IpAddress>
      <AuthType>NTLM</AuthType>
      <Resource />
      <ConnectionProtocol>HTTP</ConnectionProtocol>
      <ErrorCode>23003</ErrorCode>
    </EventInfo>
  </UserData>
</Event>
Another event's UserData:
<UserData>
  <EventInfo xmlns="aag">
    <Username>otherUserName</Username>
    <IpAddress>10.235.163.52:50427</IpAddress>
  </EventInfo>
</UserData>
JNA exposes event log records through the EVENTLOGRECORD class, which only provides methods for reading the values of the description strings. If I could get the record in XML format, my problem would be gone.
The content of UserData is not always the same; it contains different values depending on the event type. I want to parse the data from the UserData section into a POJO (it can be a single POJO containing all available fields). I don't want to rely on field order, because some events have different fields than others (as shown in the example).
Is there any way to do this using the XML tag names? I would even consider switching to another language.
As I pointed out in my comment, you need to render the event to get to the XML. Matthias Bläsing also pointed out that some sample code is available in the WevtapiTest test class in JNA.
Using that test class, I created the program below, which reads the XML of the latest 50 events from the System log. Filtering the events down to the ones you want is left as an exercise for the reader.
public static void main(String[] args) {
    EVT_HANDLE queryHandle = null;
    // Requires elevation or shared access to the log.
    String path = "C:\\Windows\\System32\\Winevt\\Logs\\System.evtx";
    try {
        queryHandle = Wevtapi.INSTANCE.EvtQuery(null, path, null, Winevt.EVT_QUERY_FLAGS.EvtQueryFilePath);
        // Read 10 events at a time
        int eventArraySize = 10;
        int evtNextTimeout = 1000;
        int arrayIndex = 0;
        EVT_HANDLE[] eventArray = new EVT_HANDLE[eventArraySize];
        IntByReference returned = new IntByReference();
        while (Wevtapi.INSTANCE.EvtNext(queryHandle, eventArraySize, eventArray, evtNextTimeout, 0, returned)) {
            Memory buff;
            // This just needs to be 0. Kept same name from test sample
            IntByReference propertyCount = new IntByReference();
            Winevt.EVT_VARIANT evtVariant = new Winevt.EVT_VARIANT();
            for (int i = 0; i < returned.getValue(); i++) {
                buff = WevtapiUtil.EvtRender(null, eventArray[i], Winevt.EVT_RENDER_FLAGS.EvtRenderEventXml,
                        propertyCount);
                // Output the XML!
                System.out.println(buff.getWideString(0));
            }
            arrayIndex++;
            // Quit after 5 x 10 events
            if (arrayIndex >= 5) {
                break;
            }
        }
        if (Kernel32.INSTANCE.GetLastError() != WinError.ERROR_SUCCESS
                && Kernel32.INSTANCE.GetLastError() != WinError.ERROR_NO_MORE_ITEMS) {
            throw new Win32Exception(Kernel32.INSTANCE.GetLastError());
        }
    } finally {
        if (queryHandle != null) {
            Wevtapi.INSTANCE.EvtClose(queryHandle);
        }
    }
}
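As for mapping UserData to a POJO by tag name: once you have the rendered XML, the JDK's namespace-aware DOM parser is enough. Below is a minimal sketch; the EventInfo POJO and its fields are assumptions based on the sample events above, and tags missing from a given event simply stay null:
// Hypothetical POJO holding the union of all UserData fields seen so far.
class EventInfo {
    public String username;
    public String ipAddress;
    public String authType;
    public String resource;
    public String connectionProtocol;
    public String errorCode;
}

static EventInfo parseUserData(String eventXml) throws Exception {
    javax.xml.parsers.DocumentBuilderFactory dbf = javax.xml.parsers.DocumentBuilderFactory.newInstance();
    dbf.setNamespaceAware(true);
    org.w3c.dom.Document doc = dbf.newDocumentBuilder()
            .parse(new org.xml.sax.InputSource(new java.io.StringReader(eventXml)));
    EventInfo info = new EventInfo();
    // Look elements up by local tag name, ignoring the namespace, so field
    // order and missing fields don't matter.
    info.username = textOf(doc, "Username");
    info.ipAddress = textOf(doc, "IpAddress");
    info.authType = textOf(doc, "AuthType");
    info.resource = textOf(doc, "Resource");
    info.connectionProtocol = textOf(doc, "ConnectionProtocol");
    info.errorCode = textOf(doc, "ErrorCode");
    return info;
}

static String textOf(org.w3c.dom.Document doc, String tag) {
    org.w3c.dom.NodeList nodes = doc.getElementsByTagNameNS("*", tag);
    return nodes.getLength() > 0 ? nodes.item(0).getTextContent() : null;
}
Feeding parseUserData the string printed by EvtRender above would then give you one populated POJO per event.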
I am trying to export repeat grid data to Excel. To do this, I have provided a button which runs the "MyCustomActivity" activity when clicked. The button is placed above the grid in the same layout. It is also worth pointing out that I am using an article as a configuration guide. According to the guide, my "MyCustomActivity" activity contains two steps:
Method: Property-Set, Method Parameters: Param.exportmode = "excel"
Method: Call pzRDExportWrapper, passing the current parameters (there is only one, from the 1st step).
But after I ran into an issue, I changed the 2nd step to Call Rule-Obj-Report-Definition.pzRDExportWrapper.
But as you have already understood, the solution doesn't work. I checked the log files and found an interesting error:
2017-04-11 21:08:27,992 [ WebContainer : 4] [OpenPortal] [ ] [ MyFW:01.01.02] (ctionWrapper._baseclass.Action) ERROR as1|172.22.254.110 bar - Activity 'MyCustomActivity' failed to execute; Failed to find a 'RULE-OBJ-ACTIVITY' with the name 'PZRESOLVECOPYFILTERS' that applies to 'COM-FW-MyFW-Work'. There were 3 rules with this name in the rulebase, but none matched this request. The 3 rules named 'PZRESOLVECOPYFILTERS' defined in the rulebase are:
2017-04-11 21:08:42,807 [ WebContainer : 4] [TABTHREAD1] [ ] [ MyFW:01.01.02] (fileSetup.Code_Security.Action) ERROR as1|172.22.254.110 bar - External authentication failed:
If someone has any suggestions to share, I would appreciate it.
Thank you.
I wanted to provide the functionality of exporting retrieved work items to a CSV file. The functionality should allow choosing the fields to retrieve, return all results in Ukrainian, and work with any SearchFilter pages and Report Definition rules.
On a User Portal I have two sections: the first section contains text fields and a Search button, and the second contains a Repeat Grid to display results. The text fields are used to filter results, and they use a page Org-Div-Work-SearchFilter.
I made a custom CSV parser. I created two activities and wrote some Java code. I should mention that I took some code from pzRDExportWrapper.
The activities are:
ExportToCSV - takes parameters from users, gets data, invokes the ConvertResultsToCSV;
ConvertResultsToCSV - converts retrieved data to a .CSV file.
Configurations of the ExportToCSV activity:
The Pages And Classes tab:
ReportDefinition is an object of a certain Report Definition.
SearchFilter is a page with the values entered by the user.
ReportDefinitionResults is a list of retrieved work items to export.
ReportDefinitionResults.pxResults denotes the type of a certain work item.
The Parameters tab:
FileName is the name of the generated file.
ColumnsNames are the names of the columns, separated by commas. If this parameter is empty, CSVProperties is exported instead.
CSVProperties are the properties to display in the spreadsheet, separated by commas.
SearchPageName is the name of the page used to filter results.
ReportDefinitionName is the name of the Report Definition used to retrieve results.
ReportDefinitionClass is the class of the Report Definition used.
The Steps tab:
Let's look through the steps:
1. Get a SearchFilter page, with the name taken from a parameter, with populated fields:
2. If SearchFilter is not empty, call a Data Transform to convert SearchFilter's properties to parameter properties:
A fragment of the Data Transform:
3. Get an object of the Report Definition:
4. Set parameters for the Report Definition:
5. Invoke the Report Definition and save the results to ReportDefinitionResults:
6. Invoke the ConvertResultsToCSV activity:
7. Delete the result page:
An overview of the ConvertResultsToCSV activity.
The Parameters tab of the ConvertResultsToCSV activity:
CSVProperties are the properties to retrieve and export.
ColumnsNames are the names of the columns to display.
PageListProperty is the name of the property to be read on the primary page.
FileName is the name of the generated file. It can be empty.
AppendTimeStampToFileName - if true, the time of file generation is appended to the file name.
CSVString is the string of generated CSV content to be saved to a file.
listSeperator is always a semicolon, used to separate fields.
Let's walk through the steps of the activity:
Get the localization from the user settings (commented out):
In theory it could support localization into many languages.
Set the localization to always be "uk" (Ukrainian).
Get a separator according to the localization. It is always a semicolon in Ukrainian, English and Russian; other languages would need to be checked.
This step contains the Java code which forms the CSV string:
StringBuffer csvContent = new StringBuffer(); // the buffer holding the CSV content
String pageListProp = tools.getParamValue("PageListProperty");
ClipboardProperty resultsProp = myStepPage.getProperty(pageListProp);

// fill the list of names of properties whose values are displayed in the CSV
java.util.List<String> propertiesNames = new java.util.LinkedList<String>();
String csvProps = tools.getParamValue("CSVProperties");
propertiesNames = java.util.Arrays.asList(csvProps.split(","));

// get the user's column names
java.util.List<String> columnsNames = new java.util.LinkedList<String>();
String CSVDisplayProps = tools.getParamValue("ColumnsNames");
if (!CSVDisplayProps.isEmpty()) {
    columnsNames = java.util.Arrays.asList(CSVDisplayProps.split(","));
} else {
    columnsNames.addAll(propertiesNames);
}

// add the column headers to the CSV content
Iterator columnsIter = columnsNames.iterator();
while (columnsIter.hasNext()) {
    csvContent.append(columnsIter.next().toString());
    if (columnsIter.hasNext()) {
        csvContent.append(listSeperator); // listSeperator - local variable
    }
}
csvContent.append("\r");

for (int i = 1; i <= resultsProp.size(); i++) {
    ClipboardPage propPage = resultsProp.getPageValue(i);
    Iterator iterator = propertiesNames.iterator();
    int propTypeIndex = 0;
    while (iterator.hasNext()) {
        ClipboardProperty clipProp = propPage.getIfPresent((iterator.next()).toString());
        String propValue = "";
        if (clipProp != null && !clipProp.isEmpty()) {
            char propType = clipProp.getType();
            propValue = clipProp.getStringValue();
            if (propType == ImmutablePropertyInfo.TYPE_DATE) {
                DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
                long mills = dtu.parseDateString(propValue);
                java.util.Date date = new Date(mills);
                String sdate = dtu.formatDateTimeStamp(date);
                propValue = dtu.formatDateTime(sdate, "dd.MM.yyyy", "", "");
            } else if (propType == ImmutablePropertyInfo.TYPE_DATETIME) {
                DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
                propValue = dtu.formatDateTime(propValue, "dd.MM.yyyy HH:mm", "", "");
            } else if (propType == ImmutablePropertyInfo.TYPE_DECIMAL) {
                propValue = PRNumberFormat.format(localeCode, PRNumberFormat.DEFAULT_DECIMAL, false, null, new BigDecimal(propValue));
            } else if (propType == ImmutablePropertyInfo.TYPE_DOUBLE) {
                propValue = PRNumberFormat.format(localeCode, PRNumberFormat.DEFAULT_DECIMAL, false, null, Double.parseDouble(propValue));
            } else if (propType == ImmutablePropertyInfo.TYPE_TEXT) {
                propValue = clipProp.getLocalizedText();
            } else if (propType == ImmutablePropertyInfo.TYPE_INTEGER) {
                Integer intPropValue = Integer.parseInt(propValue);
                if (intPropValue < 0) {
                    propValue = new String();
                }
            }
        }
        if (propValue.contains(listSeperator)) {
            csvContent.append("\"" + propValue + "\"");
        } else {
            csvContent.append(propValue);
        }
        if (iterator.hasNext()) {
            csvContent.append(listSeperator);
        }
        propTypeIndex++;
    }
    csvContent.append("\r");
}
CSVString = csvContent.toString();
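A caveat on the CSV assembly above: it only quotes values that contain the separator, so values with embedded quotes or line breaks would still produce a malformed file. A small helper along these lines (hypothetical, not part of the original activity) would cover those cases as well:
// Hypothetical helper: RFC 4180-style escaping for a single CSV field.
public static String escapeCsvField(String value, String listSeperator) {
    if (value == null) {
        return "";
    }
    boolean needsQuoting = value.contains(listSeperator)
            || value.contains("\"") || value.contains("\r") || value.contains("\n");
    if (needsQuoting) {
        // Double any embedded quotes, then wrap the whole field in quotes.
        return "\"" + value.replace("\"", "\"\"") + "\"";
    }
    return value;
}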
5. This step forms and saves a file in the server's directory tree:
char sep = PRFile.separatorChar;
String exportPath = tools.getProperty("pxProcess.pxServiceExportPath").getStringValue();
DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
String fileNameParam = tools.getParamValue("FileName");
if (fileNameParam.equals("")) {
    fileNameParam = "RecordsToCSV";
}

// append a time stamp
Boolean appendTimeStamp = tools.getParamAsBoolean(ImmutablePropertyInfo.TYPE_TRUEFALSE, "AppendTimeStampToFileName");
FileName += fileNameParam;
if (appendTimeStamp) {
    FileName += "_";
    String currentDateTime = dtu.getCurrentTimeStamp();
    currentDateTime = dtu.formatDateTime(currentDateTime, "HH-mm-ss_dd.MM.yyyy", "", "");
    FileName += currentDateTime;
}
// append the file format
FileName += ".csv";

String strSQLfullPath = exportPath + sep + FileName;
PRFile f = new PRFile(strSQLfullPath);
PROutputStream stream = null;
PRWriter out = null;
try {
    // Create the file
    stream = new PROutputStream(f);
    out = new PRWriter(stream, "UTF-8");
    // Excel misreads a file starting with "ID" as a SYLK file, so if the CSV
    // starts with "ID", prepend a space.
    if (CSVString.startsWith("ID")) {
        CSVString = " " + CSVString;
    }
    out.write(CSVString);
} catch (Exception e) {
    oLog.error("Error writing csv file: " + e.getMessage());
} finally {
    try {
        // Close the output stream
        out.close();
    } catch (Exception e) {
        oLog.error("Error closing the file stream: " + e.getMessage());
    }
}
The last step calls #baseclass.DownloadFile to download the file:
Finally, we can put a button on a section (or anywhere else) and set up its Actions tab like this:
It also works fine inside a "Refresh Section" action.
A possible result could be:
Thanks for reading.
I have 2 classes doing a similar task in Apache Spark, but the one using data frames is many times slower (30x) than the "regular" one using RDDs.
I would like to use data frames, since that would eliminate a lot of code and classes we have, but obviously I can't have it be that much slower.
The data set is nothing big. We have 30-some files with JSON data in each, about events triggered from activities in another piece of software. There are between 0 and 100 events in each file.
A data set with 82 events takes about 5 minutes to be processed with data frames.
Sample code:
public static void main(String[] args) throws ParseException, IOException {
    SparkConf sc = new SparkConf().setAppName("POC");
    JavaSparkContext jsc = new JavaSparkContext(sc);
    SQLContext sqlContext = new SQLContext(jsc);
    conf = new ConfImpl();
    HashSet<String> siteSet = new HashSet<>();

    // last month
    Date yesterday = monthDate(DateUtils.addDays(new Date(), -1)); // method that returns the date of the first of the month
    Date startTime = startofYear(new Date(yesterday.getTime())); // method that returns the date of the first of the year

    // list all the sites with a metric file
    JavaPairRDD<String, String> allMetricFiles = jsc.wholeTextFiles("hdfs:///somePath/*/poc.json");
    for (Tuple2<String, String> each : allMetricFiles.toArray()) {
        logger.info("Reading from " + each._1);
        DataFrame metric = sqlContext.read().format("json").load(each._1).cache();
        metric.count();
        boolean siteNameDisplayed = false;
        boolean dateDisplayed = false;
        do {
            Date endTime = DateUtils.addMonths(startTime, 1);
            HashSet<Row> totalUsersForThisMonth = new HashSet<>();
            for (String dataPoint : Conf.DataPoints) { // a String[] with 4 elements for this specific case
                try {
                    if (siteNameDisplayed == false) {
                        String siteName = parseSiteFromPath(each._1); // method returning a parsed String
                        logger.info("Data for site: " + siteName);
                        siteSet.add(siteName);
                        siteNameDisplayed = true;
                    }
                    if (dateDisplayed == false) {
                        logger.info("Month: " + formatDate(startTime)); // SimpleDateFormat("yyyy-MM-dd")
                        dateDisplayed = true;
                    }
                    DataFrame lastMonth = metric.filter("event.eventId=\"" + dataPoint + "\"")
                            .filter("creationDate >= " + startTime.getTime())
                            .filter("creationDate < " + endTime.getTime())
                            .select("event.data.UserId").distinct();
                    logger.info("Distinct for last month for " + dataPoint + ": " + lastMonth.count());
                    totalUsersForThisMonth.addAll(lastMonth.collectAsList());
                } catch (Exception e) {
                    // data does not fit the expected model so there is nothing to print
                }
            }
            logger.info("Total Unique for the month: " + totalUsersForThisMonth.size());
            startTime = DateUtils.addMonths(startTime, 1);
            dateDisplayed = false;
        } while (startTime.getTime() < monthDate(yesterday).getTime());
        // reset startTime for the next site
        startTime = startofYear(new Date(yesterday.getTime()));
    }
}
There are a few things that are not efficient in this code, but when I look at the logs they only add a few seconds to the whole processing time.
I must be missing something big.
I have run this with 2 executors and with 1 executor, and the difference is 20 seconds out of 5 minutes.
This is running with Java 1.7 and Spark 1.4.1 on Hadoop 2.5.0.
Thank you!
So there are a few things here, but it's hard to say without seeing the breakdown of the different tasks and their times. The short version is that you are doing way too much work in the driver and not taking advantage of Spark's distributed capabilities.
For example, you are collecting all of the data back to the driver program (toArray() and your for loop). Instead, you should just point Spark SQL at the files it needs to load.
Also, it seems like you're doing many aggregations in the driver; instead, you could use the driver to generate the aggregation queries and have Spark SQL execute them.
Another big difference between your in-house code and the DataFrame code is going to be schema inference. Since you've already created classes to represent your data, it seems likely that you know the schema of your JSON data. You can likely speed up your code by adding the schema information at read time so Spark SQL can skip inference.
I'd suggest re-visiting this approach and trying to build something using Spark SQL's distributed operators.
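For illustration, here is a minimal sketch of those last two suggestions combined, using the Spark 1.4-era Java API. The field types are assumptions inferred from the filters in the question, and the column list is not complete:
// Explicit schema (org.apache.spark.sql.types.*) so Spark SQL can skip inference.
StructType eventType = DataTypes.createStructType(java.util.Arrays.asList(
        DataTypes.createStructField("eventId", DataTypes.StringType, true),
        DataTypes.createStructField("data", DataTypes.createStructType(java.util.Arrays.asList(
                DataTypes.createStructField("UserId", DataTypes.StringType, true))), true)));
StructType schema = DataTypes.createStructType(java.util.Arrays.asList(
        DataTypes.createStructField("creationDate", DataTypes.LongType, true),
        DataTypes.createStructField("event", eventType, true)));

// One distributed load for every file, instead of a per-file loop in the driver.
DataFrame metrics = sqlContext.read().format("json").schema(schema)
        .load("hdfs:///somePath/*/poc.json");

// The aggregation is expressed once and executed by Spark SQL;
// "someDataPoint" and the timestamp bound are placeholders.
long distinctUsers = metrics
        .filter("event.eventId = 'someDataPoint'")
        .filter("creationDate >= 0")
        .select("event.data.UserId")
        .distinct()
        .count();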
I am attempting to write a simple Java utility that extracts data from SAP into a MySQL database, using JCo. I have gone through the JCo documentation and tried out the relevant examples mentioned in the SAP help portal, and I am able to retrieve data from a table and insert it into the MySQL DB.
What I would like to have is a facility to filter data in the following two ways:
I would like to fetch only the required fields.
I would like to fetch rows only if the value of a particular field matches a certain pattern.
After doing some research, I didn't find any way to specify query parameters so that only the filtered data is retrieved; it basically queries all the fields of a table. I think I will have to filter out the data that I don't want in my Java client layer. Please let me know if I am missing something here.
Here is a code example:
public static void readTables() throws JCoException, IOException {
    final JCoDestination destination = JCoDestinationManager
            .getDestination(DESTINATION_NAME2);
    final JCoFunction function = destination.getRepository().getFunction(
            "RFC_READ_TABLE");
    // Check that the function exists before using it.
    if (function == null) {
        throw new RuntimeException("BAPI RFC_READ_TABLE not found in SAP.");
    }
    function.getImportParameterList().setValue("QUERY_TABLE", "DD02L");
    function.getImportParameterList().setValue("DELIMITER", ",");
    try {
        function.execute(destination);
    } catch (final AbapException e) {
        System.out.println(e.toString());
        return;
    }
    final JCoTable codes = function.getTableParameterList().getTable(
            "FIELDS");
    String header = "SN";
    for (int i = 0; i < codes.getNumRows(); i++) {
        codes.setRow(i);
        header += "," + codes.getString("FIELDNAME");
    }
    final FileWriter outFile = new FileWriter("out.csv");
    outFile.write(header + "\n");
    final JCoTable rows = function.getTableParameterList().getTable("DATA");
    for (int i = 0; i < rows.getNumRows(); i++) {
        rows.setRow(i);
        outFile.write(i + "," + rows.getString("WA") + "\n");
        outFile.flush();
    }
    outFile.close();
}
This method reads a table where SAP stores metadata (the data dictionary) and writes the output to a CSV file. It works fine but takes 30-40 seconds and returns around 400,000 records with 32 columns. My intention was to ask whether there is a way to restrict the query to return only particular fields, instead of reading all the fields and discarding the unwanted ones in the client layer.
Thanks.
This works fine:
JCoTable table = function.getTableParameterList().getTable("FIELDS");
table.appendRow();
table.setValue("FIELDNAME", "TABNAME");
table.appendRow();
table.setValue("FIELDNAME", "TABCLASS");
Please check this Thread
Thanks.
I have a routine that I repeat for many projects, and I want to generalize it. I use iText for PDF manipulation.
Let's say I have 2000 PDFs inside a folder and I need to zip them together, with a limit of 1000 PDFs per zip. The name of each zip follows this rule: job name + job sequence. For example, the name of the zip with the first 1000 PDFs would be XNKXMN + AA, and the second zip's name would be XNKXMN + AB. Before zipping these PDFs, I need to add some text to each PDF. The text looks like this: job name + job sequence + pdf sequence. So the first PDF inside the first zip will have the text XNKXMN + AA + 000001, and the one after that XNKXMN + AA + 000002. Here is my attempt.
First, I have an abstract class GenericText that represents my text:
public abstract class GenericText {
    private float x;
    private float y;
    private float rotation;

    /**
     * Since the text that the user wants to insert onto the PDF might vary
     * from page to page, or from logical document to logical document, we allow
     * the user to write their own implementation of the text. To give the user enough
     * flexibility, we give them the reference to the physical page index and the logical page index.
     * @param physicalPage The actual page number that the user is currently looking at
     * @param logicalPage A PDF might contain multiple sub-documents; <code>logicalPage</code>
     *                    tells the user which logical sub-document the system is currently looking at
     */
    public abstract String generateText(int physicalPage, int logicalPage);

    GenericText(float x, float y, float rotation) {
        this.x = x;
        ...
    }
}
JobGenerator.java contains my generic API to do what I described above:
public String generatePrintJob(List<File> pdfList, String outputPath,
        String printName, String seq, List<GenericText> textList, int maxSize) {
    for (int currentPdfDocument = 0; currentPdfDocument < pdfList.size(); currentPdfDocument++) {
        File pdf = pdfList.get(currentPdfDocument);
        if (currentPdfDocument % maxSize != 0) {
            if (textList != null && !textList.isEmpty()) {
                for (GenericText gt : textList) {
                    String text = gt.generateText(currentPdfDocument, currentPdfDocument);
                    // Add the text content to the PDF using PdfReader and PdfWriter
                }
            }
            ...
        } else {
            // Close the current output stream and zip output stream
            seq = Utils.getNextSeq(seq);
            jobPath = outputPath + File.separator + printName + File.separator + seq + ".zip";
            // Open a new zip output stream with the new <code>jobPath</code>
        }
    }
}
So now, in my main class, I would just do this:
final String printName = printNameLookup.get(baseOutputName);
String jobSeq = config.getPrintJobSeq();
final String seq = jobSeq;
GenericText keyline = new GenericText(90, 640, 0) {
    @Override
    public String generateText(int physicalPage, int logicalPage) {
        // if logicalPage = 1, Utils.right(String.valueOf(logicalPage), 6, '0') -> 000001
        return printName + seq + " " + Utils.right(String.valueOf(logicalPage), 6, '0');
    }
};
textList.add(keyline);
JobGenerator pjg = new JobGenerator();
pjg.generatePrintJob(..., ..., printName, jobSeq, textList, 1000);
The problem I am having with this design is that, even though the process archives the PDFs into the two zips correctly, the text is not reflected correctly. The print name and the sequence do not change accordingly: they stay XNKXMN + AA for all 2000 PDFs instead of XNKXMN + AA for the first 1000 and XNKXMN + AB for the later 1000. There seems to be a flaw in my design; please help.
EDIT:
After looking at toto2's code, I see my problem. I created GenericText with the hope of adding text anywhere on the PDF page without affecting the basic logic of the process. However, the job sequence is by definition dependent on that logic, as it needs to increment when there are too many PDFs for one zip to handle (> maxSize). I need to rethink this.
When you create an anonymous GenericText, the final seq which you use in the overridden generateText method is truly final and will always keep the value given at creation time. The update you make to seq inside the else branch of generatePrintJob has no effect on it.
On a more general note, your code looks very complex, and you should probably take a step back and do some major refactoring.
EDIT:
I would instead try something different, with no template method pattern:
int numberOfZipFiles = (int) Math.ceil((double) pdfList.size() / maxSize);
for (int iZip = 0; iZip < numberOfZipFiles; iZip++) {
    String batchSubName = generateBatchSubName(iZip); // gives AA, AB,...
    for (int iFile = 0; iFile < maxSize; iFile++) {
        int fileNumber = iZip * maxSize + iFile;
        if (fileNumber >= pdfList.size()) // can happen for the last batch
            return;
        String text = jobName + batchSubName + iFile;
        ... // add "text" to pdfList.get(fileNumber)
    }
}
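For completeness, generateBatchSubName could be as simple as the following sketch (a hypothetical helper; it assumes a two-letter sequence, i.e. fewer than 676 batches):
// 0 -> "AA", 1 -> "AB", ..., 25 -> "AZ", 26 -> "BA", ...
static String generateBatchSubName(int iZip) {
    char first = (char) ('A' + iZip / 26);
    char second = (char) ('A' + iZip % 26);
    return "" + first + second;
}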
However, you might also want to keep the template method pattern. In that case, I would keep the for-loops I wrote above, but I would change the generating call to genericText.generateText(iZip, iFile), where iZip = 0 gives AA, iZip = 1 gives AB, etc.:
for (int iZip = 0; iZip < numberOfZipFiles; iZip++) {
    for (int iFile = 0; iFile < maxSize; iFile++) {
        int fileNumber = iZip * maxSize + iFile;
        if (fileNumber >= pdfList.size()) // can happen for the last batch
            return;
        String text = genericText.generateText(iZip, iFile);
        ... // add "text" to pdfList.get(fileNumber)
    }
}
It would also be possible to have genericText.generateText(fileNumber), which could itself decompose the fileNumber into AA000001, etc. But that would be somewhat dangerous, because maxSize would be used in two different places, and it might be bug-prone to have duplicate data like that.
I'm trying to get the list of workflows a document is attached to in an Alfresco webscript, but I am kind of stuck.
My original problem is that I have a list of files, and the current user may have workflows assigned to him involving these documents. So now I want to create a webscript that will look in a folder, take all the documents there, and assemble a list of the documents together with task references, if there are any for the current user.
I know about the "workflow" object that gives me the list of workflows for the current user, but that is not a solution for my problem.
So, can I get a list of workflows a specific document is attached to?
Well, for future reference, I've found a way to get all the active workflows on a document from JavaScript:
var nodeR = search.findNode('workspace://SpacesStore/' + doc.nodeRef);
for each (wf in nodeR.activeWorkflows) {
    // Do whatever here.
}
I used the packageContains association to find the workflows for a document.
Below I posted Alfresco JavaScript code for active workflows (as zladuric answered) and also for all workflows:
/*global search, logger, workflow*/
var getWorkflowsForDocument, getActiveWorkflowsForDocument;

getWorkflowsForDocument = function () {
    "use strict";
    var doc, parentAssocs, packages, packagesLen, i, pack, props, workflowId, instance, isActive;
    //
    doc = search.findNode("workspace://SpacesStore/8847ea95-108d-4e08-90ab-34114e7b3977");
    parentAssocs = doc.getParentAssocs();
    packages = parentAssocs["{http://www.alfresco.org/model/bpm/1.0}packageContains"];
    //
    if (packages) {
        packagesLen = packages.length;
        //
        for (i = 0; i < packagesLen; i += 1) {
            pack = packages[i];
            props = pack.getProperties();
            workflowId = props["{http://www.alfresco.org/model/bpm/1.0}workflowInstanceId"];
            instance = workflow.getInstance(workflowId);
            /* instance is org.alfresco.repo.workflow.jscript.JscriptWorkflowInstance */
            isActive = instance.isActive();
            logger.log(" + instance: " + workflowId + " (active: " + isActive + ")");
        }
    }
};

getActiveWorkflowsForDocument = function () {
    "use strict";
    var doc, activeWorkflows, activeWorkflowsLen, i, instance;
    //
    doc = search.findNode("workspace://SpacesStore/8847ea95-108d-4e08-90ab-34114e7b3977");
    activeWorkflows = doc.activeWorkflows;
    activeWorkflowsLen = activeWorkflows.length;
    for (i = 0; i < activeWorkflowsLen; i += 1) {
        instance = activeWorkflows[i];
        /* instance is org.alfresco.repo.workflow.jscript.JscriptWorkflowInstance */
        logger.log(" - instance: " + instance.getId() + " (active: " + instance.isActive() + ")");
    }
};
getWorkflowsForDocument();
getActiveWorkflowsForDocument();
Unfortunately the JavaScript API doesn't expose all the workflow functions. It looks like getting the list of workflow instances that are attached to a document only works in Java (or Java-backed webscripts):
List<WorkflowInstance> workflows = workflowService.getWorkflowsForContent(node.getNodeRef(), true);
A usage example can be found in the workflow list of the document details page: http://svn.alfresco.com/repos/alfresco-open-mirror/alfresco/HEAD/root/projects/web-client/source/java/org/alfresco/web/ui/repo/component/UINodeWorkflowInfo.java
To get to the users who have tasks assigned, you would then need to use the getWorkflowPaths and getTasksForWorkflowPath methods of the WorkflowService.
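Putting those pieces together, a rough sketch of that chain in a Java-backed webscript could look like the following. The injected WorkflowService and the use of cm:owner as the task assignee are assumptions; adapt them to your setup:
import java.util.ArrayList;
import java.util.List;
import org.alfresco.model.ContentModel;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.workflow.WorkflowInstance;
import org.alfresco.service.cmr.workflow.WorkflowPath;
import org.alfresco.service.cmr.workflow.WorkflowService;
import org.alfresco.service.cmr.workflow.WorkflowTask;

public List<String> getAssigneesForDocument(WorkflowService workflowService, NodeRef nodeRef) {
    List<String> assignees = new ArrayList<String>();
    // true = active workflows only
    for (WorkflowInstance wf : workflowService.getWorkflowsForContent(nodeRef, true)) {
        for (WorkflowPath path : workflowService.getWorkflowPaths(wf.getId())) {
            for (WorkflowTask task : workflowService.getTasksForWorkflowPath(path.getId())) {
                // Assumption: the task assignee is stored as cm:owner on the task.
                Object owner = task.getProperties().get(ContentModel.PROP_OWNER);
                if (owner != null) {
                    assignees.add(owner.toString());
                }
            }
        }
    }
    return assignees;
}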