Documentum DFS: Timeout for service calls

I'm working with the DFS Java API and was wondering whether anyone knows a simple way to configure a client-side timeout for service calls, for example one that can be set on the service context.
I have experienced some rare occasions where a Documentum repository was not responding, which is why I am considering a general timeout for all DFS calls.
For testing a hanging service call, I created a dummy TBO implementation that simply blocks the thread for 10 minutes when updating the document:
@Override
public void saveEx(boolean keepLock, String versionLabels) throws DfException {
    // Simulate a hanging repository: block for 10 minutes on every
    // update of an existing object before delegating to the real save.
    if (!isNew()) {
        try {
            Thread.sleep(1000 * 60 * 10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    super.saveEx(keepLock, versionLabels);
}
I'm not sure whether this behaves exactly like a hanging service call, but at least in my tests it worked as expected: my invocations of the Object Service's update method took about 10 minutes.
Is there any configuration I have not yet found, or maybe a runtime property to pass to the service context to configure the timeout?
I would prefer using existing features of DFS for this instead of implementing my own mechanism.

Have you tried editing the value in dfs-runtime.properties? I don't think the timeout can be context-specific, but you should be able to change it for the client as a whole.
Reposted from https://community.emc.com/message/3249#3249
"Please see the Server runtime startup settings section of the Deployment guide.
The following list describes the precedence that dfs-runtime.properties files take depending on their location:
1. local-dfs-runtime.properties file in the local classpath
2. runtime properties file specified with -Ddfs.runtime.properties.file
3. dfs-runtime.properties packaged with emc-dfs-rt.jar
For example, settings in the local-dfs-runtime.properties file on the local classpath take precedence over identical settings in the dfs-runtime.properties file that is located in emc-dfs-rt.jar or the one specified with the -D parameter. The DFS application must be restarted after any changes to the configuration. As a best practice, use the provided configuration file that is deployed in the emc-dfs-rt.jar file for your base settings and use an external file to override settings that you specifically wish to change."
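If your DFS version exposes a client-side timeout in dfs-runtime.properties, overriding it would follow the precedence above. A sketch of a local override file; note that the property name below is hypothetical, so look up the actual timeout key in the Deployment guide for your version:
# local-dfs-runtime.properties on the client classpath
# (takes precedence over the copy packaged in emc-dfs-rt.jar)
# HYPOTHETICAL key; verify the real property name for your DFS version
dfs.client.request.timeout=30000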


Error "io.questdb.cairo.CairoException: [2] could not open read-write [file=<dir>/_tab_index.d]"

Currently, I am testing QuestDB in an Apache Camel / Spring Boot scenario for our project. I set up a custom Camel component and a configuration bean holding the connection properties. As far as I can see, my custom Camel component properly connects to the server where a test instance of QuestDB is running. But when sending data over the Camel route, I get error messages:
io.questdb.cairo.CairoException: [2] could not open read-write [file=<dir>/_tab_index.d]
The exception is thrown when creating the CairoEngine like this (taken from the QuestDB API documentation):
try (CairoEngine engine = new CairoEngine(this.configuration)) {
    // ... other code ...
} catch (Exception e) {
    e.printStackTrace();
    // ...
}
where this.configuration is of type CairoConfiguration, contains the "data_dir", and is instantiated like this:
configuration = new DefaultCairoConfiguration(<quest db directory (String)>);
Currently, I am passing the fully qualified path to my database directory: /srv/questdb/db. I confirmed that the file _tab_index.d is available at this location.
What am I doing wrong? Maybe I should mention that I set the access rights on the questdb directory to 777 and changed the owner via chown root:questdb ...
Indeed, the embedded API is not suitable for what I want to do. I need to use one of the other APIs. I tested my scenario with the InfluxDB line protocol (see the Line protocol documentation) and the data gets written to the server without problems.
The doInsert method in my custom component looks like this (just for testing); it is called when building a route with the custom QuestDB "to" endpoint:
import io.questdb.client.Sender;

public class QuestDbProducer extends DefaultProducer {

    // ... other code ...

    private void doInsert(Exchange exchange, String tableName) throws InvalidPayloadException {
        // Hard-coded test data for now; the exchange and tableName
        // parameters are not used yet.
        try (Sender sender = Sender.builder().address("lxyrpc01.gsi.de:9009").build()) {
            sender.table("inventors")
                    .symbol("born", "Austrian Empire")
                    .longColumn("id", 0)
                    .stringColumn("name", "Nicola Tesla")
                    .atNow();
            sender.table("inventors")
                    .symbol("born", "USA")
                    .longColumn("id", 1)
                    .stringColumn("name", "Thomas Alva Edison")
                    .atNow();
        }
    }
}

How to run downloaded App Router via Service Marketplace

I downloaded the XS_JSCRIPT14_10-70001363 package from Service Marketplace.
Please suggest how to run this App Router login form on localhost.
I am trying with the npm start command, but I am getting a UAA service exception. How do I handle this on localhost?
When you download the approuter, either via npm or Service Marketplace, you have to provide two additional files for a basic setup inside the approuter directory (besides package.json, xs-app.json, etc.).
The default-services.json holds the variables that tell the approuter where to find the correct authentication server (e.g., XSUAA). You have to provide at least the clientid, clientsecret, and URL of the authorization server in this file, like this:
{
  "uaa": {
    "url": "http://my.uaa.server/",
    "clientid": "client-id",
    "clientsecret": "client-secret",
    "xsappname": "my-business-application"
  }
}
You can get these parameters, for example, after binding your application on SAP Cloud Platform Cloud Foundry to an (empty) instance of XSUAA, where you can retrieve the values via cf env <appname> from the VCAP_SERVICES/xsuaa properties (they have exactly the same property names).
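The relevant part of the cf env output has roughly this shape (a sketch; all values are placeholders):
{
  "VCAP_SERVICES": {
    "xsuaa": [{
      "credentials": {
        "url": "https://<your-uaa-server>/",
        "clientid": "client-id",
        "clientsecret": "client-secret",
        "xsappname": "my-business-application"
      }
    }]
  }
}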
In addition, you require the default-env.json file, which holds at least the destination variable describing the backend microservice to which you want to forward the received JSON Web Token. It may look like this:
{
  "destinations": [{
    "name": "my-destination",
    "url": "http://localhost:1234",
    "forwardAuthToken": true
  }]
}
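The destination name is then referenced from the routes in your xs-app.json; a minimal sketch (the source/target patterns are only illustrative):
{
  "routes": [{
    "source": "^/api/(.*)$",
    "target": "$1",
    "destination": "my-destination"
  }]
}
With this, requests to /api/... are forwarded to the URL behind my-destination, with the JWT attached because forwardAuthToken is true.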
Afterwards, inside the approuter directory, you can simply run npm start, which by default serves the approuter at http://localhost:5000. It also writes nice console output you can use to debug the parameters above.
EDIT: Turns out I was incorrect, it is apparently possible to run the approuter locally.
First of all, here is the documentation for the approuter: https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/01c5f9ba7d6847aaaf069d153b981b51.html
As far as I understood, you need to provide two files to the approuter for it to run locally, default-services.json and default-env.json (put them in the same directory as your package.json).
The default-services.json has a format like this:
{
  "uaa": {
    "url": "http://my.uaa.server/",
    "clientid": "client-id",
    "clientsecret": "client-secret",
    "xsappname": "my-business-application"
  }
}
The default-env.json is simply a json file holding the environment variables that the approuter needs to access, like so:
{
  "VCAP_SERVICES": <env>,
  ...
}
Unfortunately, the documentation does not state which variables are required, therefore I cannot provide you with a working example.
Hope this helps you! Should you manage to get this running, I'm sure others would appreciate it if you share your knowledge here.

Using external server restlet framework

I am using the Restlet Framework, but now I want to change to a proper server instead of using localhost.
I have already added my PHP files (they access the Java files using the rest_server URL) and my Java files to the server's folder, but I am not sure how to change the code so it points to the new location of the files.
Here is the code from IdentiscopeServer (the constructor is empty):
public static void main(String[] args) throws Exception {
    // Sets up our security manager
    if (System.getSecurityManager() == null) {
        System.setSecurityManager(new SecurityManager());
    }
    identiscopeServerApp = new IdentiscopeServerApplication();
    IdentiscopeServer server = new IdentiscopeServer();
    server.getServers().add(Protocol.HTTP, 8888);
    server.getDefaultHost().attach("", identiscopeServerApp);
    server.start();
}
I guess the correct line to change is the one with "Protocol.HTTP, 8888". If the address of my new server is http://devweb2013.co.uk/research/Identiscope, how exactly do I set this up? Is anything else necessary for it to work apart from moving the files to a folder on the server?
The IdentiscopeServerApplication is the following:
public class IdentiscopeServerApplication extends Application {

    public IdentiscopeServerApplication() {
    }

    public Restlet createInboundRoot() {
        Router router = new Router(getContext());
        // Attaches each path to its REST resource class
        router.attach("/collectionPublic", CollectionPublicREST.class);
        router.attach("/collectionPrivate", CollectionPrivateREST.class);
        router.attach("/analysis", AnalysisREST.class);
        return router;
    }
}
Thank you in advance; this is my first time using this framework.
If I understand you correctly, you just want to run your main() method as the server, correct? In that case, the code for main() needs to be in a location that, when running, can provide the service at http://devweb2013.co.uk/research/Identiscope. Since you haven't stated what kind of server you are putting the code on, I can't say where the best place for it would be. I assume you have superuser privileges on your deployment server, since the URL you provided implies that port 80 will be serving your Identiscope web service (port 80 is a privileged port on most OSes). So I can only provide general information.
On your deployment server, port 80 must be free (i.e., nothing else should be acting as a web server on port 80 on that machine) and the IdentiscopeApplication must be running on port 80. To do that, you need only change the line:
server.getServers().add(Protocol.HTTP,8888);
to:
server.getServers().add(Protocol.HTTP, 80);
then run the application as a user that is allowed to start servers on port 80 (preferably NOT the superuser). If you haven't already, you will need to get Java running on your deployment server and make sure all Restlet libraries are on the classpath where you plan to run your application.
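Note also that your URL contains the path /research/Identiscope. If the application should be reachable under that path rather than at the server root, attach it with that prefix on the virtual host; a sketch, assuming the default host:
server.getDefaultHost().attach("/research/Identiscope", identiscopeServerApp);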
If I understand what you are trying to do, then this should do the trick.

Inconsistent Results from javax.print library

I have a section of code within a web app, running in Tomcat 5.0, that calls the javax.print.PrintServiceLookup method lookupPrintServices(null, null). Previously, this code returned an array of substantial size, listing all the printers on the server as expected. Rather suddenly one day, it started behaving differently and now returns a zero-length array with no printers. Despite checking rather thoroughly, I could not determine what might have changed to cause this method to behave differently than it did before.
I made a small, stand-alone test program that contained this same method call.
PrintService[] printers = PrintServiceLookup.lookupPrintServices(null, null);
System.out.println("Java Version: " + System.getProperty("java.version"));
System.out.println("Printers found:");
if (printers != null) {
    for (PrintService printer : printers) {
        if (printer != null) {
            System.out.println("  " + printer.toString());
        }
    }
}
System.out.println("End");
Running as a stand-alone program, this code behaved differently, returning the full list of printers. Double-checking, I put the same code (using logging statements instead of System.out calls) in the context initialization method of the web app, and there it still returns zero printers. The method returns different results depending on whether it runs from the web app WAR or the stand-alone JAR.
Some of my colleagues suggested that it might have to do with the Security Manager, and indeed, the documentation for the PrintService class says that certain properties of a Security Manager can alter results from the method call. However, after adding some code to my test to retrieve and view the Security Manager, it appears that there is none in either case.
SecurityManager sec = System.getSecurityManager();
try {
    if (sec != null) {
        System.out.println(sec.toString());
        sec.checkPrintJobAccess();
    }
    System.out.println("*-*-*-*-*Printer Access allowed!!");
} catch (SecurityException e) {
    System.out.println("*-*-*-*-*Printer Access NOT allowed!!");
}
The result is that the Security Manager is null in both cases.
Trying it on a different server, both the web app and the stand-alone jar versions return no printers. There is no consistency that I can find.
What is going on here? What is causing this javax method call to return different results in different situations? What could have changed about the web app to alter its behavior between one day and the next?
Try starting the server with the option -DUseSunHttpHandler=true to initiate the HTTP URL connection with the JDK API instead of the server's API.
Hope this works for you too.
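Independent of that flag, it may also help to compare the environment of the two JVMs, since the set of visible print services typically depends on the OS user and session the JVM runs under (a Tomcat service account often sees no printers). A quick diagnostic sketch:
// Compare these values between the standalone run and the Tomcat webapp;
// a different user or JVM installation can explain different printer lists.
System.out.println("user.name = " + System.getProperty("user.name"));
System.out.println("java.home = " + System.getProperty("java.home"));
System.out.println("os.name   = " + System.getProperty("os.name"));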

Log4J SMTP digest/aggregate emails?

I have a JBoss batch application that sometimes sends hundreds of emails in a minute to the same email address with Log4J errors. This causes problems with Gmail, because it says we are sending emails too quickly for that Gmail account.
So I was wondering whether there is a way to create a "digest" or "aggregate" email that puts all the error logs into one email and sends it every 5 minutes. That way, every 5 minutes we may get one large email, but at least we actually get it instead of it being delayed for hours and hours by Gmail servers rejecting it.
I read a post that suggested using an evaluator for this, but I couldn't see how that is configured in the Log4J XML configuration file. It also seemed like it might not be able to digest all the logs into one email anyway.
Has anyone done this before? Or does anyone know whether it's possible?
From the (archived) SMTPAppender Usage page:
Set this property:
log4j.appender.myMail.evaluatorClass = com.mydomain.example.MyEvaluator
Now you have to create the evaluator class, implement the org.apache.log4j.spi.TriggeringEventEvaluator interface, and place this class on a path where log4j can access it.
// Example TriggeringEventEvaluator implementation
package com.mydomain.example;

import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.spi.TriggeringEventEvaluator;

public class MyEvaluator implements TriggeringEventEvaluator {
    public boolean isTriggeringEvent(LoggingEvent event) {
        // Returning true unconditionally mails every single event;
        // the digest/trigger logic belongs in this method.
        return true;
    }
}
You have to write the evaluator logic within this method.
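For the 5-minute digest described in the question, the evaluator could trigger at most once per interval, so that the events buffered by the SMTPAppender go out as one aggregate email. A sketch (assumption: the appender's cyclic buffer, sized via its BufferSize property, caps how many events fit into one email):
package com.mydomain.example;

import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.spi.TriggeringEventEvaluator;

// Sketch of a digest evaluator: triggers at most once every 5 minutes.
public class DigestEvaluator implements TriggeringEventEvaluator {
    private static final long INTERVAL_MS = 5 * 60 * 1000L;
    private long lastTriggered = System.currentTimeMillis();

    public synchronized boolean isTriggeringEvent(LoggingEvent event) {
        long now = System.currentTimeMillis();
        if (now - lastTriggered >= INTERVAL_MS) {
            lastTriggered = now;
            return true; // flush the buffered events as one digest email
        }
        return false;
    }
}
Note that nothing is sent if no further event arrives after the interval elapses, and events beyond BufferSize fall out of the cyclic buffer, so this is a sketch rather than a complete solution.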
I created a freely usable solution for log4j2 with an ExtendedSmtpAppender.
(If you still use log4j 1.x, simply replace your log4j-1.x.jar with log4j-1.2-api-2.x.jar - and log4j-core-2.x.jar + log4j-api-2.x.jar, of course.)
You get it from Maven Central as de.it-tw:log4j2-extras (this requires Java 7+ and log4j 2.8+).
If you are restricted to Java 6 (and thus log4j 2.3), use de.it-tw:log4j2-Java6-extras.
Additionally, see the GitLab project: https://gitlab.com/thiesw/log4j2-extras (or https://gitlab.com/thiesw/log4j2-Java6-extras)
[OLD text:
If you use log4j2, see answer to other stack overflow issue: https://stackoverflow.com/a/34072704/5074004
Or directly go to my external but publically available solution presented in https://issues.apache.org/jira/browse/LOG4J2-1192
]
