What would be the best way to serialize information about SAP RFC function module parameters (i.e. parameter names and parameter values) with which the function was called in SAP (with the parameter DESTINATION pointing to an SAP JCo server) and then captured in JCo?
The point is that the serialization should be done in JCo (using Java) and this data would then be sent back to SAP and saved in a Z-table in SAP, so that later, using these table entries, it will be possible to "reserialize" the data in ABAP and call the given function again with exactly the same parameters and values.
To make it easier to understand, I will give an example:
Step 1. We call RFC FM in ABAP:
CALL FUNCTION 'remotefunction'
DESTINATION jco_server
EXPORTING e1 = exp1
* IMPORTING i1 =
TABLES t1 = tab1.
Step 2. We catch this function call in Jco and need to serialize the information about the parameters with which the function was called and save it in a SAP table, for example:
// Serialization of the import, export and table parameter lists
// (each list may be null if the function module declares no parameters of that kind)
String importParList = function.getImportParameterList().toXML();
String exportParList = function.getExportParameterList().toXML();
String tableParList  = function.getTableParameterList().toXML();
String parList = importParList + exportParList + tableParList;
// Call a function to save the content of "parList" in an SAP table
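For reference, here is a minimal sketch of how step 2 could look inside a JCo server handler. The saving function module Z_SAVE_RFC_LOG, the destination name SAVE_DEST and its parameters IV_FUNCNAME/IV_PAYLOAD are placeholders, not real objects; the null checks reflect that JCo returns null for parameter lists the function module does not declare.

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoParameterList;
import com.sap.conn.jco.server.JCoServerContext;
import com.sap.conn.jco.server.JCoServerFunctionHandler;

public class LoggingHandler implements JCoServerFunctionHandler {

    @Override
    public void handleRequest(JCoServerContext serverCtx, JCoFunction function) {
        // Each list may be null if the function module declares no such parameters.
        String parList = toXmlOrEmpty(function.getImportParameterList())
                       + toXmlOrEmpty(function.getExportParameterList())
                       + toXmlOrEmpty(function.getTableParameterList());
        try {
            // Hypothetical RFC-enabled FM that writes the payload into a Z-table.
            JCoDestination dest = JCoDestinationManager.getDestination("SAVE_DEST");
            JCoFunction save = dest.getRepository().getFunction("Z_SAVE_RFC_LOG");
            save.getImportParameterList().setValue("IV_FUNCNAME", function.getName());
            save.getImportParameterList().setValue("IV_PAYLOAD", parList);
            save.execute(dest);
        } catch (JCoException e) {
            throw new RuntimeException("Could not persist RFC parameters", e);
        }
        // ... normal processing of the inbound call continues here ...
    }

    private static String toXmlOrEmpty(JCoParameterList list) {
        return list == null ? "" : list.toXML();
    }
}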
Step 3. Using ABAP, we need to select the data from the SAP table and "reserialize" it (e.g. using the CALL TRANSFORMATION statement in ABAP) so that we can call the FM "remotefunction" again with the same parameters and values as before.
To sum it up:
A. Is there any standard Java method in JCo for such serialization (better than manually converting this to XML/JSON and saving it as a String)?
B. How should deep ABAP structures be handled, e.g. tables with further tables nested inside them? Should they also be converted to XML/JSON like the rest?
C. Do you have any other ideas how to perform this process better than what I presented?
Thanks in advance!
What I describe here may not be appropriate for you due to the complexity of installation and setup, but I am posting it for the benefit of other community members and as an acknowledgement that such functionality exists.
SAP has a special framework for exactly this use case, called LOGWIN (LOGCOM 200). Installation instructions are in SAP Note 1870371.
Details of the feature set:
The logging of RFCs allows you to establish which users had access to which data at what point in time. You can log data on RFC Function Module (FM) level, for example:
Type of parameters
Name and corresponding values of parameters
In order to minimize the amount of logged data, you can do the following:
Restrict logging to certain users
Filter the parameters that need to be logged before they are included in the log records
Enable logging on client level only for the RFC Function Modules that you want to log
You can fine-tune which RFC calls (modules) are logged, including whether successful or failed calls are included, via the BAdI /LOGWIN/BADI_RFC_LOG_FILTER.
Initially the log is stored temporarily in SAP and can be viewed via transaction /LOGWIN/SHOW_LOG; after that you can transfer the necessary log records to an external repository (which you should set up in advance) via transaction /LOGWIN/TSF_TO_EXT.
Architecture overview:
So you can set up the external repository in a Java-side store, or leave the log as is and read the parameter values from SAP; after that you can re-run failed modules from the SAP or Java side.
There is also a range of other settings for data archiving, user permissions, exclusions, mappings etc., which are too extensive to describe in this answer.
More documentation is here:
Configuration guide
Application guide (PDF)
Installation is done from https://support.sap.com/swdc > Software Downloads > Installations and Upgrades > A–Z Index > L > LOGGING OF RFC AND WEB SVCS 2.0.
Also check SAP Notes 1870371 and 1878916.
I am using OpenSearch to index JSON documents and make them searchable. All documents have an update timestamp field in epoch format. The problem is that I can receive an update request whose document body contains an older update time. My application should skip the update if the update time in the incoming request is older than the update time field in the existing document stored in OpenSearch.
To fulfil the requirement, I added an external version to the HTTP request: /test_index/_update/123?version=1674576432910&version_type=external.
But I am getting this error:
Validation Failed: 1: internal versioning can not be used for optimistic concurrency control. Please use if_seq_no and if_primary_term instead
I read about the if_seq_no and if_primary_term fields, but they can't be used to solve my problem. Has anyone else encountered this problem and solved it? Please share. Or if anyone knows of a plugin that I can install to support this, please share.
Sadly, neither OpenSearch nor Elasticsearch supports external versioning in the update request, and I don't see the feature being added in the near future. You can solve your specific problem using scripting. OpenSearch supports multiple scripting languages, including Painless. You can write a script that compares a specific field (in your case the update timestamp), and if the condition is true, it goes ahead and updates the fields with the new values.
{
  "script": {
    "lang": "painless",
    "source": "if (params.updateTimestamp > ctx._source.updateTimestamp) {for (entry in params.entrySet()) {ctx._source[entry.getKey()] = entry.getValue();}}"
  }
}
The sample script above silently skips any update whose new document has an older timestamp. You can also throw an exception instead and handle it from your application; that way you can track the number of requests with this issue.
You can use a similar script as a stored script and use it in your update request. You can get more details, including a sample HTTP request and Java code, in this article.
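As a rough Java sketch of the same idea, assuming the OpenSearch high-level REST client; the index name "test_index", the document id "123" and the updateTimestamp field are placeholders taken from the question.

import java.util.HashMap;
import java.util.Map;

import org.opensearch.action.update.UpdateRequest;
import org.opensearch.client.RequestOptions;
import org.opensearch.client.RestHighLevelClient;
import org.opensearch.script.Script;
import org.opensearch.script.ScriptType;

public class ScriptedConditionalUpdate {

    // Applies newDoc only if its updateTimestamp is newer than the stored one.
    static void updateIfNewer(RestHighLevelClient client, Map<String, Object> newDoc) throws Exception {
        String source = "if (params.updateTimestamp > ctx._source.updateTimestamp) "
                      + "{for (entry in params.entrySet()) {ctx._source[entry.getKey()] = entry.getValue();}}";

        UpdateRequest request = new UpdateRequest("test_index", "123") // placeholder index and id
            .script(new Script(ScriptType.INLINE, "painless", source, new HashMap<>(newDoc)));

        client.update(request, RequestOptions.DEFAULT);
    }
}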
You should use the "if_seq_no" and "if_primary_term" parameters to perform optimistic concurrency control.
To solve your problem, you could first retrieve the existing document from OpenSearch using the document ID, and check the update timestamp field. If the existing timestamp is newer than the one in the update request, you can skip the update. Otherwise, you can include the "if_seq_no" and "if_primary_term" parameters in your update request, along with the updated document. The "if_seq_no" parameter should be set to the sequence number of the existing document, and the "if_primary_term" parameter should be set to the primary term of the existing document.
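Here is a rough sketch of that read-then-conditionally-update flow with the high-level REST client; index, id and field name are placeholders. If another writer changed the document between the get and the update, the conditional update fails with a version-conflict exception that you can catch and retry.

import java.util.Map;

import org.opensearch.action.get.GetRequest;
import org.opensearch.action.get.GetResponse;
import org.opensearch.action.update.UpdateRequest;
import org.opensearch.client.RequestOptions;
import org.opensearch.client.RestHighLevelClient;

public class OccUpdate {

    static void updateIfNewer(RestHighLevelClient client, Map<String, Object> newDoc) throws Exception {
        GetResponse existing = client.get(new GetRequest("test_index", "123"), RequestOptions.DEFAULT);

        long storedTs = ((Number) existing.getSourceAsMap().get("updateTimestamp")).longValue();
        long incomingTs = ((Number) newDoc.get("updateTimestamp")).longValue();
        if (storedTs >= incomingTs) {
            return; // the existing document is newer, skip the update
        }

        UpdateRequest request = new UpdateRequest("test_index", "123")
            .doc(newDoc)
            .setIfSeqNo(existing.getSeqNo())              // optimistic concurrency control:
            .setIfPrimaryTerm(existing.getPrimaryTerm()); // fails if the doc changed in between

        client.update(request, RequestOptions.DEFAULT);
    }
}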
You can use the Update API for this... or the Optimistic Concurrency Control (OCC) mechanism, which is based on a combination of _seq_no and _primary_term fields.
I am trying to use Corb to search for and update a node in a large number of documents:
Sample input:
<hcmt xmlns="http://horn.thoery">
  <susceptible>X</susceptible>
  <reponsible>foresee–intervention</reponsible>
  <intend>Benefit Protagonist</intend>
  <justified>Goal Outwiegen</justified>
</hcmt>
Xquery:
declare namespace horn = "http://horn.thoery";

(: let $resp := "foresee–intervention" :)
let $docs :=
  cts:search(doc(),
    cts:and-query((
      cts:collection-query("hcmt"),
      cts:path-range-query("/horn:hcmt/horn:responsible", "=", $resp)
    ))
  )
return
  for $doc in $docs
  return
    xdmp:node-replace($doc/horn:hcmt/horn:responsible, "Foresee Intervention")
Expected output:
<hcmt xmlns="http://horn.thoery">
  <susceptible>X</susceptible>
  <reponsible>Foresee Intervention</reponsible>
  <intend>Benefit Protagonist</intend>
  <justified>Goal Outwiegen</justified>
</hcmt>
But the node-replace does not happen in Corb and no error is returned. Other queries work fine in Corb. How can I get node-replace to work correctly in Corb?
Thanks in advance for any help.
I create functions to reconcile the encoding issues. This not only mitigates potential API transaction failures but is also a prerequisite for validating and encoding parameter or element/property/URI names.
That said, a sample MarkLogic Java API implementation is:
Create a dynamic query construct on the filesystem, in my case product-query-option.xml (using the query value directly: Chooser–Option):
<search xmlns="http://marklogic.com/appservices/search">
  <query>
    <and-query>
      <collection-constraint-query>
        <constraint-name>Collection</constraint-name>
        <uri>proto</uri>
      </collection-constraint-query>
      <range-constraint-query>
        <constraint-name>ProductType</constraint-name>
        <value>Chooser–Option</value>
      </range-constraint-query>
    </and-query>
  </query>
</search>
Deploy the persistent query options to the modules database, in my case search-lexis.xml; the options file looks like this:
<options xmlns="http://marklogic.com/appservices/search">
  <constraint name="Collection">
    <collection prefix=""/>
  </constraint>
  <constraint name="ProductType">
    <range type="xs:string" collation="http://marklogic.com/collation/en/S1">
      <path-index xmlns:prod="schema://fc.fasset/product">/prod:requestProduct/prod:_metaData/prod:productType</path-index>
    </range>
  </constraint>
</options>
Follow on from Dynamic Java Search
// queryMgr is a QueryManager obtained from the DatabaseClient; queryOption is the name
// of the persistent options deployed above (e.g. "search-lexis.xml").
File file = new File("src/main/resources/queryoption/product-query-option.xml");
FileHandle fileHandle = new FileHandle(file);
RawCombinedQueryDefinition rcqDef = queryMgr.newRawCombinedQueryDefinition(fileHandle, queryOption);
You can, of course, combine the query and the options into one handle in the QueryDefinition.
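For context, here is a minimal sketch of wiring that together and executing the search; the host, port and credentials are placeholders, and search-lexis.xml is the persistent options name deployed above.

import java.io.File;

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.io.FileHandle;
import com.marklogic.client.io.Format;
import com.marklogic.client.io.SearchHandle;
import com.marklogic.client.query.QueryManager;
import com.marklogic.client.query.RawCombinedQueryDefinition;

public class RunRawQuery {
    public static void main(String[] args) {
        DatabaseClient client = DatabaseClientFactory.newClient(
            "localhost", 8000, new DatabaseClientFactory.DigestAuthContext("user", "password")); // placeholders

        QueryManager queryMgr = client.newQueryManager();
        FileHandle fileHandle = new FileHandle(new File("src/main/resources/queryoption/product-query-option.xml"))
            .withFormat(Format.XML);

        // Combine the raw query with the persistent options deployed as search-lexis.xml
        RawCombinedQueryDefinition rcqDef =
            queryMgr.newRawCombinedQueryDefinition(fileHandle, "search-lexis.xml");

        SearchHandle results = queryMgr.search(rcqDef, new SearchHandle());
        System.out.println("Total matches: " + results.getTotalResults());

        client.release();
    }
}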
Your original node-replace translates to a Java partial update; make sure the DocumentPatchBuilder calls setNamespaces with the correct NamespaceContext.
For batch data operations, the performant approach is MarkLogic Data Movement: instantiate a QueryBatcher with the searched URIs, supply the replacement value or data fragment via DocumentPatchBuilder.replaceValue, and complete the batch with:
dbClient.newXMLDocumentManager().patch(uri, patchHandle);
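Here is a hedged sketch of that partial-update path for the question's document, assuming the horn prefix is bound to the namespace shown in the sample input; the document URI is a placeholder.

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.document.DocumentPatchBuilder;
import com.marklogic.client.document.XMLDocumentManager;
import com.marklogic.client.marker.DocumentPatchHandle;
import com.marklogic.client.util.EditableNamespaceContext;

public class PatchResponsible {

    static void patch(DatabaseClient dbClient, String uri) {
        XMLDocumentManager docMgr = dbClient.newXMLDocumentManager();

        // Bind the prefix used in the patch paths to the document's namespace
        EditableNamespaceContext namespaces = new EditableNamespaceContext();
        namespaces.put("horn", "http://horn.thoery");

        DocumentPatchBuilder patchBuilder = docMgr.newPatchBuilder();
        patchBuilder.setNamespaces(namespaces);
        patchBuilder.replaceValue("/horn:hcmt/horn:responsible", "Foresee Intervention");

        DocumentPatchHandle patchHandle = patchBuilder.build();
        docMgr.patch(uri, patchHandle); // uri of the target document, e.g. taken from the QueryBatcher
    }
}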
MarkLogic Data Services: if you succeed with the above and want a more robust and scalable enterprise SOA approach, please review Data Services.
The implementation with Gradle looks like this:
(Note: all of the transformation inputs should be parameters, including path/element/property name, namespace, value, etc. Nothing is hardcoded.) One proxy service declared in service.json can serve multiple endpoints (under /root/df-ds/fxd) with different types of modules, which gives you free rein to develop pure Java or to extend the development platform to handle complex data operations.
If these operations are persistent node updates, you should consider an in-memory node transform before ingestion. Besides the MarkLogic data transformation tools, you can harness the power of XSLT 2.0+.
Saxon's XPathFactory could be a serviceable vehicle for querying/transforming nodes. I am not sure whether the reverse holds; the MarkLogic Java API implements XPath compilation to split large paths and stream the transaction. XSLT/Saxon is not my forte, so I cannot comment on how it compares with regard to this encode/decode particularity or how it handles transaction (insert, update, etc.) streaming.
I am trying to get the forest data directory in MarkLogic. I used the following method to get the data directory, using the Server Evaluation Call interface and running the query as admin. Is this the right approach? If not, please let me know how I can get the forest data directory.
// The admin library module must be imported for the admin:* functions to resolve in the eval.
ServerEvaluationCall forestDataDirCall = client.newServerEval()
    .xquery("import module namespace admin = \"http://marklogic.com/xdmp/admin\" "
          + "at \"/MarkLogic/admin.xqy\"; "
          + "admin:forest-get-data-directory(admin:get-configuration(), "
          + "admin:forest-get-id(admin:get-configuration(), \"" + forestName + "\"))");

for (EvalResult forestDataDirResult : forestDataDirCall.eval()) {
    String forestDataDir = forestDataDirResult.getString();
    System.out.println("forestDataDir is " + forestDataDir);
}
I see no reason to hit the server evaluation endpoint to ask the server this question. MarkLogic comes with a robust REST-based Management API, including getters for almost all items of interest.
Knowing that, you can use what is documented here:
http://yourserver:8002/manage/v2/forests
Results can be in JSON, XML or HTML
It is the getter for forest configurations. Which forests you care about can be found by iterating over all forests or by reaching through the database configuration and then to its forests. It all depends on what you already know from the outside.
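As an illustration, here is a small sketch that calls the forest-properties getter with Java's built-in HTTP client. The host, credentials and forest name are placeholders, and it assumes the Manage app server (port 8002) accepts basic authentication; the default is digest, which this simple client does not negotiate, so adjust the authentication scheme accordingly.

import java.net.Authenticator;
import java.net.PasswordAuthentication;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ForestDataDir {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
            .authenticator(new Authenticator() {
                @Override
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication("admin", "admin".toCharArray()); // placeholder credentials
                }
            })
            .build();

        // Forest properties include the data directory
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://yourserver:8002/manage/v2/forests/Documents/properties?format=json"))
            .GET()
            .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // look for the "data-directory" property in the JSON
    }
}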
References:
Management API
Scripting Administrative Tasks
This is most similar to this question.
I am creating a pipeline in Dataflow 2.x that takes streaming input from a Pubsub queue. Every single message that comes in needs to be streamed through a very large dataset that comes from Google BigQuery and have all the relevant values attached to it (based on a key) before being written to a database.
The trouble is that the mapping dataset from BigQuery is very large - any attempt to use it as a side input fails with the Dataflow runners throwing the error "java.lang.IllegalArgumentException: ByteString would be too long". I have attempted the following strategies:
1) Side input
As stated, the mapping data is (apparently) too large for this. If I'm wrong here or there is a work-around for this, please let me know, because this would be the simplest solution.
2) Key-Value pair mapping
In this strategy, I read the BigQuery data and Pubsub message data in the first part of the pipeline, then run each through ParDo transformations that change every value in the PCollections to KeyValue pairs. Then, I run a Merge.Flatten transform and a GroupByKey transform to attach the relevant mapping data to each message.
The trouble here is that streaming data requires windowing to be merged with other data, so I have to apply windowing to the large, bounded BigQuery data as well. It also requires that the windowing strategies are the same on both datasets. But no windowing strategy for the bounded data makes sense, and the few windowing attempts I've made simply send all the BQ data in a single window and then never send it again. It needs to be joined with every incoming pubsub message.
3) Calling BQ directly in a ParDo (DoFn)
This seemed like a good idea - have each worker declare a static instance of the map data. If it's not there, then call BigQuery directly to get it. Unfortunately this throws internal errors from BigQuery every time (as in the entire message just says "Internal error"). Filing a support ticket with Google resulted in them telling me that, essentially, "you can't do that".
It seems this task doesn't really fit the "embarrassingly parallelizable" model, so am I barking up the wrong tree here?
EDIT:
Even when using a high-memory machine in Dataflow and attempting to load the side input into a map view, I get the error java.lang.IllegalArgumentException: ByteString would be too long
Here is an example (pseudocode) of the code I'm using:
Pipeline pipeline = Pipeline.create(options);

PCollectionView<Map<String, TableRow>> mapData = pipeline
    .apply("ReadMapData", BigQueryIO.read().fromQuery("SELECT whatever FROM ...").usingStandardSql())
    .apply("BQToKeyValPairs", ParDo.of(new BQToKeyValueDoFn()))
    .apply(View.asMap());

PCollection<PubsubMessage> messages = pipeline.apply(PubsubIO.readMessages()
    .fromSubscription(String.format("projects/%1$s/subscriptions/%2$s", projectId, pubsubSubscription)));

messages.apply(ParDo.of(new DoFn<PubsubMessage, TableRow>() {
    @ProcessElement
    public void processElement(ProcessContext c) {
        JSONObject data = new JSONObject(new String(c.element().getPayload()));
        String key = getKeyFromData(data);
        TableRow sideInputData = c.sideInput(mapData).get(key);
        if (sideInputData != null) {
            LOG.info("holyWowItWOrked");
            c.output(new TableRow());
        } else {
            LOG.info("noSideInputDataHere");
        }
    }
}).withSideInputs(mapData));
The pipeline throws the exception and fails before logging anything from within the ParDo.
Stack trace:
java.lang.IllegalArgumentException: ByteString would be too long: 644959474+1551393497
com.google.cloud.dataflow.worker.repackaged.com.google.protobuf.ByteString.concat(ByteString.java:524)
com.google.cloud.dataflow.worker.repackaged.com.google.protobuf.ByteString.balancedConcat(ByteString.java:576)
com.google.cloud.dataflow.worker.repackaged.com.google.protobuf.ByteString.balancedConcat(ByteString.java:575)
com.google.cloud.dataflow.worker.repackaged.com.google.protobuf.ByteString.balancedConcat(ByteString.java:575)
com.google.cloud.dataflow.worker.repackaged.com.google.protobuf.ByteString.balancedConcat(ByteString.java:575)
com.google.cloud.dataflow.worker.repackaged.com.google.protobuf.ByteString.copyFrom(ByteString.java:559)
com.google.cloud.dataflow.worker.repackaged.com.google.protobuf.ByteString$Output.toByteString(ByteString.java:1006)
com.google.cloud.dataflow.worker.WindmillStateInternals$WindmillBag.persistDirectly(WindmillStateInternals.java:575)
com.google.cloud.dataflow.worker.WindmillStateInternals$SimpleWindmillState.persist(WindmillStateInternals.java:320)
com.google.cloud.dataflow.worker.WindmillStateInternals$WindmillCombiningState.persist(WindmillStateInternals.java:951)
com.google.cloud.dataflow.worker.WindmillStateInternals.persist(WindmillStateInternals.java:216)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext$StepContext.flushState(StreamingModeExecutionContext.java:513)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext.flushState(StreamingModeExecutionContext.java:363)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1000)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$800(StreamingDataflowWorker.java:133)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$7.run(StreamingDataflowWorker.java:771)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Check out the section called "Pattern: Streaming mode large lookup tables" in Guide to common Cloud Dataflow use-case patterns, Part 2. It might be the only viable solution since your side input doesn't fit into memory.
Description:
A large (in GBs) lookup table must be accurate, and changes often or does not fit in memory.
Example:
You have point of sale information from a retailer and need to associate the name of the product item with the data record which contains the productID. There are hundreds of thousands of items stored in an external database that can change constantly. Also, all elements must be processed using the correct value.
Solution:
Use the "Calling external services for data enrichment" pattern, but rather than calling a micro service, call a read-optimized NoSQL database (such as Cloud Datastore or Cloud Bigtable) directly. For each value to be looked up, create a Key Value pair using the KV utility class. Do a GroupByKey to create batches of the same key type to make the call against the database. In the DoFn, make a call out to the database for that key and then apply the value to all values by walking through the iterable. Follow best practices with client instantiation as described in "Calling external services for data enrichment".
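To make the shape of that pattern concrete, here is a rough Beam sketch. LookupClient and the productName field are stand-ins for whatever read-optimized store and schema you actually use, so treat those names as placeholders; note also that in streaming mode the input must be windowed before the GroupByKey.

import java.util.Collections;
import java.util.Map;

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

public class EnrichWithLookup {

    // Stand-in for a read-optimized NoSQL client (Cloud Datastore, Cloud Bigtable, ...).
    interface LookupClient {
        Map<String, String> fetch(String key);

        static LookupClient create() {
            // Stub: replace with a real client for your store.
            return key -> Collections.singletonMap("productName", "unknown-" + key);
        }
    }

    static PCollection<KV<String, String>> enrich(PCollection<KV<String, String>> keyedMessages) {
        return keyedMessages
            // Batch elements that share a key so one lookup serves the whole batch.
            // (In streaming mode, window the input before this GroupByKey.)
            .apply("BatchByKey", GroupByKey.<String, String>create())
            .apply("LookupAndEnrich", ParDo.of(new DoFn<KV<String, Iterable<String>>, KV<String, String>>() {

                private transient LookupClient client;

                @Setup
                public void setup() {
                    // One client per DoFn instance, per "Calling external services for data enrichment".
                    client = LookupClient.create();
                }

                @ProcessElement
                public void processElement(ProcessContext c) {
                    String key = c.element().getKey();
                    String productName = client.fetch(key).get("productName"); // one call per key
                    // Apply the looked-up value to every element in the batch.
                    for (String payload : c.element().getValue()) {
                        c.output(KV.of(key, payload + "|" + productName));
                    }
                }
            }));
    }
}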
Other relevant patterns are described in Guide to common Cloud Dataflow use-case patterns, Part 1:
Pattern: Slowly-changing lookup cache
Pattern: Calling external services for data enrichment
I have a template accountlist.scala.html looking like this:
@(accounts: models.domain.AccountList)

@titlebar = {<p>some html</p>}

@content = {
  @for(account <- accounts) {
    <p>@account.name</p>
  }
}

@main(titlebar)(content)
... and another template account.scala.html like this:
@(account: models.domain.Account)

@titlebar = {<p>@account.name</p>}

@content = {
  @for(transaction <- account.getTransactions()) {
    <p>@transaction.detail</p>
  }
}

@main(titlebar)(content)
From both of them I am invoking the template main.scala.html.
I have access to the entire Account POJO in the first view, accountlist.scala.html, so there is really no need to invoke the server to get the details of the account when I go to the detail view. I would just like to change the view on the client side. How could I call the second view, account.scala.html, from the view accountlist.scala.html when a user clicks on an account in the list? I am ready to change the templates as needed.
I have provided a previous answer, which is still available at the end of this post. From your comments, however, I understand that you are asking for something else, without understanding how dangerous it is.
There are three ways of handling your use case; let's start with the worst one.
A stateful web application
If you have loaded data into a POJO from some data source and you want to re-use the POJO between multiple requests, what you are trying to do is implement some kind of client state on the server, such as a cache. Web applications were developed in this way for a long time, and this was the source of major bugs and errors. What happens if the underlying account data in the database is updated between one HTTP request and the following one? Your client won't see it, because it uses cached data. Additionally, what happens when the user presses back in his browser? How do you get notified on the server side so you can keep track of where the user is in his navigation flow? For these and other reasons, Play! is designed to be stateless. If you are really into web applications, you probably need to read about the REST architectural style.
A stateless web application
In a stateless web application, you are not allowed to keep data between two HTTP requests, so you have two ways to handle it:
Generate the user interface in a single shot
This is the approach you can use when your account data is small. You embed all the necessary data from each account into your page and generate the view, which you keep hidden and show only when the user clicks. Please note that you can generate the HTML on the server side and use JavaScript to make only certain parts of your DOM visible, or just transfer a JSON representation of your accounts and use some kind of templating library to build the necessary UI directly on the client.
Generate the user interface when required
This approach becomes necessary when the account data structure contains too much information and you don't want to transfer all of it, for all the accounts, to the client up front. For example, if you know the user is going to be interested in seeing the details of only a very few accounts, you want to request the details only when the user asks for them.
For example, in your list of accounts you will have a button associated with each account, called details, and you will use the account id to send a new request to the server (a sketch of the matching controller action follows the template below).
@(accounts: models.domain.AccountList)

@titlebar = {<p>some html</p>}

@content = {
  @for(account <- accounts) {
    <p>@account.name <button class="details" href="@routes.Controllers.account(account.id)">details</button></p>
  }
}
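For the second option, here is a minimal sketch of the controller action such a link could point at, assuming Play's Java API and a controller class named Controllers in the controllers package; the Account.findById lookup is a placeholder for whatever data access you actually use.

package controllers;

import models.domain.Account;
import play.mvc.Controller;
import play.mvc.Result;

public class Controllers extends Controller {

    // Renders account.scala.html for a single account.
    public Result account(Long id) {
        Account account = Account.findById(id); // placeholder lookup, use your own data access
        if (account == null) {
            return notFound("No account with id " + id);
        }
        return ok(views.html.account.render(account));
    }
}

The matching line in conf/routes would look something like GET /accounts/:id controllers.Controllers.account(id: Long).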
Please note that you can also generate the user interface on the client side, but you will still need to retrieve the data structures from the server when the user clicks on the button. This ensures that the user retrieves the last available state of the account.
Old answer
If your goal is to reuse your views, Play views are nothing other than Scala classes, so you can import them:
@import packagename._
and then use them in another template:
@for(account <- accounts) {
  @account(account)
}
The question reveals a misunderstanding of Play Framework templates. When the Play project is compiled, the template code is transformed into HTML, CSS and JavaScript.
You cannot "invoke"/link another template showing the account transactions from an href attribute of your account row. However, you can do either of the following:
In case you have loaded all transactions from all accounts to the client in one go: extend the template to generate separate <div> sections for each account showing the transactions. Also generate JavaScript to 1) hide the overview div and 2) show the specific transaction div when clicking on one of the accounts in the overview. Please see the Knockout library proposed by Edmondo1984, or the accordion or tabs in Twitter Bootstrap.
In case you only load the account overview from the server: generate a link such as href="@routes.Controllers.account(account.id)" (see Edmondo1984's answer) and make another template to view this data.
Since the question concerns a case in which you already have all the data from the server, go with option 1.