I am quite new to Sesame in Java. I have read through the documentation provided for Sesame, but unfortunately many things were not clear to me. I started by creating a repository as in the code below:
import java.io.File;

import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryException;
import org.openrdf.repository.manager.RemoteRepositoryManager;
import org.openrdf.repository.sail.SailRepository;
import org.openrdf.sail.memory.MemoryStore;

public class RDF {
    public void create() throws RepositoryException {
        // A local repository: an in-memory store that persists its data to dataDir.
        File dataDir = new File("myFile");
        Repository repo = new SailRepository(new MemoryStore(dataDir));
        repo.initialize();

        // A manager for repositories hosted on a remote Sesame server.
        String serverUrl = "http://localhost:8080/openrdf-sesame/repositories/rep";
        RemoteRepositoryManager manager = new RemoteRepositoryManager(serverUrl);
        manager.initialize();
    }
}
I am using Tomcat 6 and run the code in Eclipse: I right-click the Dynamic Web Project and select Run on Server. The code was taken from the documentation itself. My questions are: what is the dataDir file created for?
Is this address, http://localhost:8080/openrdf-sesame/repositories/rep, the location where the repository is created? After starting Tomcat I open the above link, but it shows me an error.
How can I make sure that the repository has been successfully created, and how can I start using it? Your assistance would be very much appreciated.
You are mixing up two things:
creating a local repository (a MemoryStore persisted to dataDir), and
accessing a remote repository living on the Sesame server.
If you just want to create a persistent repository, the first option is sufficient. If you also want a full Sesame server with its UI and services, then you have to install and set it up first, and use the second option.
In both cases, you can use the RepositoryManager API; an introduction to RepositoryManager can be found in the documentation.
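For the first option, here is a minimal sketch of creating a persistent local repository through a LocalRepositoryManager (assuming the Sesame 2.x openrdf API; the base directory "myBaseDir" and repository ID "rep" are just example names):

import java.io.File;

import org.openrdf.repository.Repository;
import org.openrdf.repository.config.RepositoryConfig;
import org.openrdf.repository.manager.LocalRepositoryManager;
import org.openrdf.repository.sail.config.SailRepositoryConfig;
import org.openrdf.sail.memory.config.MemoryStoreConfig;

public class CreateLocalRepo {
    public static void main(String[] args) throws Exception {
        // Manage repositories under a local base directory instead of a remote server.
        LocalRepositoryManager manager = new LocalRepositoryManager(new File("myBaseDir"));
        manager.initialize();

        // An in-memory store that persists its data to disk (persist = true).
        SailRepositoryConfig sailConfig = new SailRepositoryConfig(new MemoryStoreConfig(true));
        manager.addRepositoryConfig(new RepositoryConfig("rep", sailConfig));

        Repository repo = manager.getRepository("rep");
        // repo.getConnection() can now be used to add and query data.
    }
}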
You wouldn't typically start the Sesame server and workbench via Eclipse. You would use a standalone installation as described in the documentation.
I have used product flavors in Android Studio to deploy different versions (Demo, Production) of the app on one device, but I also use Firebase Remote Config, which I registered to a single package ID.
Is it possible to make Firebase Remote Config work with multiple package IDs? I want to use it in both my Demo and my Production app.
You can add multiple application package names under project settings so that they can all use common Firebase services. In this way, you can share a single project's resources among multiple applications.
The documented advice for working with multiple environments, such as development and production, is to have one project for each environment. Your app will typically have a unique application ID per environment, which lets it read the Remote Config parameters of the project it accesses; this is easy to configure with Android build variants.
Here is my answer.
Remote Config can be set up for the following conditions:
Different projects only, e.g. amazon, flipkart
Same project with different productFlavors, e.g. amazonPay, amazonShopping
Same project with different buildTypes, e.g. amazon.dev, amazon.prod, amazon.experiment
Here is sample code to set up Remote Config; it might help you or others. Make sure you call setupRemoteConfig() in onCreate() of your Application class:
private void setupRemoteConfig() {
    final FirebaseRemoteConfig remoteConfig = FirebaseRemoteConfig.getInstance();
    // In-app defaults, used until fetched values are activated.
    remoteConfig.setDefaults(R.xml.remote_config_default);
    // Fetch with a 300-second cache expiration, then activate on success.
    remoteConfig.fetch(300).addOnCompleteListener(task -> {
        if (task.isSuccessful()) {
            remoteConfig.activateFetched();
        }
    });
}
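Once the fetched values have been activated, reading a parameter is a one-liner. The key "welcome_message" below is a made-up example; it would have to exist in your Firebase console or in remote_config_default.xml:

// "welcome_message" is a hypothetical key defined in the console or defaults XML.
String message = FirebaseRemoteConfig.getInstance().getString("welcome_message");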
You can create service accounts for every environment and use those; if it's a database, you can create two separate nodes for Production and Development.
Thank you for reading.
I have a Java application hosted on Apache Tomcat. Some values are derived from a project.properties file, but to reflect any new or changed property value, I need to restart the Tomcat service/application.
So is there any way to reflect these changes at run time / on the fly?
Thanks in advance.
Your question is very broad and does not specify whether you're using a specific framework for building your application. Most frameworks have some default support for this, so if you're building a plain Java application running in Tomcat, you can do this by making use of, for instance, Apache Commons Configuration.
With Commons Configuration you can set up a properties configuration with a reloading strategy:
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;

String filePath = "/some/path/project.properties";
PropertiesConfiguration configuration = new PropertiesConfiguration(filePath);
FileChangedReloadingStrategy fileChangedReloadingStrategy = new FileChangedReloadingStrategy();
// Re-check the file for changes at most once per second.
fileChangedReloadingStrategy.setRefreshDelay(1000);
configuration.setReloadingStrategy(fileChangedReloadingStrategy);
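Values are then read through the configuration object; with the reloading strategy in place, a read after the file has changed on disk returns the new value. The key db.url is just an example name:

// Re-reads project.properties behind the scenes if it changed on disk.
String dbUrl = configuration.getString("db.url");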
A complete worked out example with Spring can be found in this tutorial.
I am working on setting up a Lagom application in production. I have tried contacting Lightbend for a ConductR license but haven't heard back in ages. So now I am looking for an alternative approach. I have multiple questions.
Since the scale of the application is pretty small, I think a static service locator works for me right now (open to other alternatives). Also, I am using MySQL as my event store instead of the default configuration of Cassandra (for reasons not relevant to this thread).
To suppress Cassandra and Lagom's Service Locator, I have added the following lines to my build.sbt:
lagomCassandraEnabled in ThisBuild := false
I have also added the following piece to the application.conf of my service1-impl module.
lagom.services {
  service1 = "http://0.0.0.0:8080"
}
For the dev environment, I have been able to successfully run my application using sbt runAll in a tmux session. With this configuration, there is no service locator running on the default 8000 port but I can individually hit service1 on 8080 port. (Not sure if this is the expected behaviour. Comments?)
I ran sbt dist to create a zip file, then unzipped it and ran the executable inside. Interestingly, the zip was created within the service1-impl folder. So if I have multiple modules (services?), will sbt dist create an individual zip file for each service?
When I run the executable created via sbt dist, it tries to connect to Cassandra and also launches a service locator, ignoring the static service locator configuration that I added. Basically, it looks like it ignores the lines I added to build.sbt. Can anyone explain this?
Lastly, if I were to have 2 services, service1 and service2, and 2 nodes in the cluster, with node 1 running service1 and node 2 running both services, what would my static service locator look like in application.conf? And since each service would have its own application.conf, would I have to copy the same static service locator configuration into all the application.confs?
Would it be something like this?
lagom.services {
  service1 = "http://0.0.0.0:8080"
  service1 = "http://1.2.3.4:8080"
  service2 = "http://1.2.3.4:8081"
}
Since each specific actor would be spawned on one of the nodes, how would it work with this service locator configuration?
Also, I don't want to run this in a tmux session in production. What would be the best way to finally run this code in production?
You can get started with ConductR in dev mode immediately, for free, without contacting sales. Instructions are at: https://www.lightbend.com/product/conductr/developer
You do need to register (read: provide a valid email) and accept TnC to access that page. The sandbox is free to use for dev mode today so you can see if ConductR is right for you quickly and easily.
For production, I'm thrilled to say that soon you'll be able to deploy up to 3 nodes in production if you register w/Lightbend.com (same as above) and generate a 'free tier' license key.
Lagom is opinionated about microservices. There's always Akka and Play if those opinions aren't shared by a project. Part of that opinion is that deployment should be easy. Good tools feel 'right' in the hand. You are of course free to deploy the app as you like, but be prepared to produce more polyfill the further from the marked trails you go.
Regarding service lookup, ConductR provides redirection for HTTP service lookups for use with 'withFollowRedirects' on Play WS [1]
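For illustration, here is a minimal sketch of following such a redirect from Java with Play WS (assuming Play 2.5+'s Java API, where the Scala 'withFollowRedirects' corresponds to setFollowRedirects; the URL and class name are made-up examples):

import java.util.concurrent.CompletionStage;
import play.libs.ws.WSClient;
import play.libs.ws.WSResponse;

public class LocatedServiceClient {
    private final WSClient ws;

    public LocatedServiceClient(WSClient ws) {
        this.ws = ws;
    }

    public CompletionStage<WSResponse> callService1() {
        // The service lookup answers with an HTTP redirect to a concrete
        // service instance; following redirects resolves it transparently.
        return ws.url("http://localhost:9008/services/service1/some/path")
                .setFollowRedirects(true)
                .get();
    }
}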
Regarding sbt dist, each sub-project service will be a package. You can see this in the Chirper example [2], for which sbt dist generates chirp-impl.zip, friend-impl.zip, activity-stream-impl.zip, etc., as seen in the Chirper top-level build.sbt file.
As ConductR is the clean and lighted path, you can reference how it does things in order to better understand how to replace Lagom's deployment polyfill with your own. That's the interface Lagom knows best. Much of ConductR except the core is already OSS, so you can try GitHub if the docs don't cover something.
Disclosure: I am a ConductR-ing Lightbender.
[1] http://conductr.lightbend.com/docs/1.1.x/ResolvingServices
[2] git@github.com:lagom/activator-lagom-java-chirper.git
I am trying to execute a rule in IBM JRules Rule Execution Server using a Java client. I have a WebSphere Community Edition V2.1 server, and I am able to call and execute the rules using JSF deployed on the same server.
I want to call and execute the rules using a Java client, but I haven't found any way to do this.
With EJB, we can call an EJB from the web as well as from a Java client by setting the InitialContext environment properties. Is there a similar way to call Rule Execution Server rules from a Java client? The web part is already working.
import ilog.rules.res.model.IlrPath;
import ilog.rules.res.session.IlrJ2SESessionFactory;
import ilog.rules.res.session.IlrPOJOSessionFactory;
import ilog.rules.res.session.IlrSessionFactory;
import ilog.rules.res.session.IlrSessionRequest;
import ilog.rules.res.session.IlrSessionResponse;
import ilog.rules.res.session.IlrStatelessSession;
import ilog.rules.res.session.extension.IlrExtendedJ2SESessionFactory;

import miniloan.Borrower;
import miniloan.Loan;

public class POJOEx {
    public static void main(String... arg) {
        // Create a rule session factory.
        //IlrSessionFactory sessionFactory = new IlrPOJOSessionFactory();
        //IlrExtendedJ2SESessionFactory sessionFactory = new IlrExtendedJ2SESessionFactory();
        // J2SE factory:
        IlrSessionFactory sessionFactory = new IlrJ2SESessionFactory();
        try {
            // Use a stateless session for the invocation.
            IlrStatelessSession statelessSession = sessionFactory.createStatelessSession();
            // Input parameter.
            Borrower borrower = new miniloan.Borrower("Joe", 600, 80000);
            // In/out parameter.
            Loan loan = new miniloan.Loan(500000, 240, 0.05);
            IlrSessionRequest request = sessionFactory.createRequest();
            // Ruleset path.
            request.setRulesetPath(IlrPath.parsePath("/miniloanruleapp/2.0/miniloanrules/1.0"));
            request.setUserData("miniloanruleapp.MiniloanrulesclientRunnerImpl.executeminiloanrules");
            request.setInputParameter("borrower", borrower);
            request.setInputParameter("loan", loan);
            // Execute the ruleset.
            IlrSessionResponse response = statelessSession.execute(request);
            System.out.println("userdata = " + response.getOutputParameters().get("loan"));
            System.out.println("outputString = " + (String) response.getUserData());
            System.out.println("executionId = " + response.getExecutionId());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
I am getting the error below.
ilog.rules.res.xu.ruleset.impl.archive.IlrRulesetArchiveInformationNotFoundException: Cannot get the information about the ruleset /miniloanruleapp/2.0/miniloanrules/1.0
Can anybody suggest where to specify the Rule Execution Server URL, username, and password, like we specify InitialContext values in EJB?
Let me clarify what RES is, because it seems there is a misunderstanding here (it may be mine).
RES is used in ILOG terminology to describe multiple things:
- The web interface that allows you to manage your ruleapps.
- The actual application that you deploy on your WebSphere CE (or other server) in order to execute the rules.
- The .jar files that allow you to execute the ruleapp locally.
AFAIK, you cannot connect to RES from a local Java application.
What you have coded calls the rule engine contained in the RES*.jar files in order to execute your ruleapp locally.
There is no way you can use your Java application like you are using your EJB application.
You have to use a web service or similar, which is feasible if you put the ruleapp name as a parameter of the web service, for instance.
You are using miniloan, so you probably know the example using the web interface where you can tell which version of the ruleset to use.
It is the same if you want to programmatically manage your ruleapp deployed on RES (the actual application on your application server): you will need to use MDBs, nothing else.
It is disappointing, I know, because I went through that, but there is no way I know of (at least) to do it. This is not the approach you should follow.
To make it work, put your ruleapp in the classpath (or the root of your Java application in Eclipse) and run it... then you will execute your rules.
RES doesn't provide the same tools as RTS, which you can access from any Java application in order to manipulate your rule project.
You are 100% correct: there is no way to tell the J2SE connection what the server URL is, and hence no way to run your rules from the server.
Hope it helps.
You can absolutely call a Rule Execution Server from J2EE code or, as in your case, via a remote J2SE call, and there is documentation provided for doing this. But I do want to clarify a few things regarding the first response.
The Rule Execution Server is the runtime for executing rules. It has a persistence layer (file or database) and a management console that is used to manage it and any other connected Rule Execution Server.
It is this management server you connect to when you use the server:port/res URL.
You do not connect to an actual RES, as many RES instances can be connected to a single management console. The management console has the details of the persistence layer and a way of extracting the ruleset you wish to execute.
To your question: you are getting that error because you have not configured which remote Rule Execution Server to pull the ruleset from.
To configure the remote connection, you use a file called 'ra.xml' and change its settings to point to your remote RES console.
There is a default ra.xml in the '/executionserver/bin' directory (by default ./IBM/ODM87/ODM/executionserver/bin).
The major aspects in that file to consider would be:
To enable management of Java SE XU instances that are running on a different JVM or JMX MBean server, you must configure the XU MBean plug-in with the TCP/IP protocol:
<config-property>
  <config-property-name>plugins</config-property-name>
  <config-property-type>java.lang.String</config-property-type>
  <config-property-value>{pluginClass=Management,xuName=default,protocol=tcpip,tcpip.port=TCPIP_PORT,tcpip.host=RES_CONSOLE_HOST,tcpip.retryInterval=INTERVAL}</config-property-value>
</config-property>
where:
RES_CONSOLE_HOST is the host on which the Rule Execution Server console is deployed.
TCPIP_PORT is the TCP/IP port on which the Rule Execution Server console management server is listening.
INTERVAL is the interval of time, in milliseconds, during which the console tries to reconnect to the management server if a connection fails.
As long as the ra.xml is in the classpath of the application you are running, the local J2SE engine should make a call to the remote RES console and request the ruleapp specified in the provided ruleset path.
For J2EE this is similar, but you actually execute the rules in the remote RES rather than pulling the ruleset locally.
If you check the ODM samples, there are both J2EE and J2SE samples that demonstrate these two techniques.
Adding the files below to the same folder as the *.dsar worked for me:
creation_date.txt, display_name.txt, properties.txt
I have written unit tests for several session beans I have created. When I try to run them, however, NetBeans gives me the following error:
No EJBContainer provider available. The following providers: org.glassfish.ejb.embedded.EJBContainerProviderImpl returned null from createEJBContainer call.
I highly suspect that this is the root cause of the issue:
SEVERE: EJB6004:Specified application server installation location [C:\Development\GlassFish\3.1\glassfish\domains\domain1] does not exist.
It's right: domain1 does not exist. I created a "development" domain myself and deleted domain1, but it seems there is a lingering reference somewhere, and I have no clue where to modify it. The non-embedded container that the embedded container refers to is registered in NetBeans as well and is correctly hooked up to the development domain. There are no problems with regular deployments of the project.
Any help very much appreciated!
I believe ScatteredWar is outdated. After a bunch of searching I found the incredibly helpful post Quick introduction to Embeddability of GlassFish Open Source Edition 3.1, which gives this snippet:
If your archive is not pre-built and its components are instead scattered across multiple directories, then you may be interested in using the scattered archive APIs:
import org.glassfish.embeddable.*;
import org.glassfish.embeddable.archive.*;

Deployer deployer = glassfish.getDeployer();
// Create a scattered web application.
ScatteredArchive archive = new ScatteredArchive("testapp", ScatteredArchive.Type.WAR);
// The target/classes directory contains my compiled servlets.
archive.addClassPath(new File("target", "classes"));
// resources/sun-web.xml is my WEB-INF/sun-web.xml.
archive.addMetadata(new File("resources", "sun-web.xml"));
// resources/MyLogFactory is my META-INF/services/org.apache.commons.logging.LogFactory.
archive.addMetadata(new File("resources", "MyLogFactory"), "META-INF/services/org.apache.commons.logging.LogFactory");
deployer.deploy(archive.toURI());
Other docs: Oracle GlassFish Server 3.1 Embedded Server Guide and The updated API.
Adam Bien and Arun Gupta speak about ways to embed GlassFish for unit testing.
The main piece is this:
GlassFish glassfish = new GlassFish(port);
ScatteredWar war = new ScatteredWar(NAME,
        new File("src/main/resources"),
        new File("src/main/resources/WEB-INF/web.xml"),
        Collections.singleton(new File("build/classes").toURI().toURL()));
glassfish.deploy(war);
An alternative approach would be to use OpenEJB for your unit testing, as this will ensure that you're sticking to standards. Adam also has an entry on setting that up.
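For reference, here is a minimal sketch of what such an embedded-container test bootstrap could look like with the standard javax.ejb.embeddable API (which OpenEJB implements); the JNDI name and bean class are made-up examples:

import javax.ejb.embeddable.EJBContainer;
import javax.naming.Context;

public class MySessionBeanTest {
    public static void main(String[] args) throws Exception {
        // Boots an embedded EJB container from the modules on the classpath.
        EJBContainer container = EJBContainer.createEJBContainer();
        try {
            Context ctx = container.getContext();
            // "java:global/myModule/MySessionBean" is a hypothetical global
            // JNDI name; adjust it to your module and bean names.
            Object bean = ctx.lookup("java:global/myModule/MySessionBean");
            System.out.println("Looked up: " + bean);
        } finally {
            container.close();
        }
    }
}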