SYSTEM repository not showing repository details - java

On my machine, the base/data directory contains multiple repositories, but when I access this data directory from a Java program it gives me only the SYSTEM repository record.
Code to retrieve the repositories :
String dataDir = "D:\\SesameStorage\\data\\";
LocalRepositoryManager localManager = new LocalRepositoryManager(new File(dataDir));
localManager.initialize();

// Get all repositories
Collection<Repository> repos = localManager.getAllRepositories();
System.out.println("LocalRepositoryManager All repositories : " + repos.size());
for (Repository repo : repos) {
    System.out.println("This is : " + repo.getDataDir());
    RepositoryResult<Statement> idStatementIter = repo.getConnection().getStatements(
            null, RepositoryConfigSchema.REPOSITORYID, null, true, new Resource[0]);
    Statement idStatement;
    try {
        while (idStatementIter.hasNext()) {
            idStatement = idStatementIter.next();
            if (idStatement.getObject() instanceof Literal) {
                Literal idLiteral = (Literal) idStatement.getObject();
                System.out.println("idLiteral.getLabel() : " + idLiteral.getLabel());
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Output :
LocalRepositoryManager All repositories : 1
This is : D:\SemanticStorage\data\repositories\SYSTEM
idLiteral.getLabel() : SYSTEM
Adding repository to LocalRepositoryManager :
String repositoryName = "data.ttl";
RepositoryConfig repConfig = new RepositoryConfig(repositoryName);
SailRepositoryConfig config = new SailRepositoryConfig(new MemoryStoreConfig());
repConfig.setRepositoryImplConfig(config);
manager.addRepositoryConfig(repConfig);
Getting the repository object :
Repository repository = manager.getRepository(repositoryName);
repository.initialize();
I can successfully add a new repository to the LocalRepositoryManager, and it then reports a repository count of 2. But when I restart the application it shows me only one repository, and that is the SYSTEM repository.
My SYSTEM repository is not getting updated. Please suggest how I should load that data directory into my LocalRepositoryManager object.

You haven't provided a comprehensive test case, just individual snippets of code with no clear indication of the order in which they get executed, which makes it somewhat hard to figure out what exactly is going wrong.
I would hazard a guess, however, that the problem is that you don't properly close and shut down resources. First of all you are obtaining a RepositoryConnection without ever closing it:
RepositoryResult<Statement> idStatementIter = repo.getConnection().getStatements(
        null, RepositoryConfigSchema.REPOSITORYID, null, true, new Resource[0]);
You will need to change this to something like this:
RepositoryConnection conn = repo.getConnection();
try {
    RepositoryResult<Statement> idStatementIter = conn.getStatements(
            null, RepositoryConfigSchema.REPOSITORYID, null, true, new Resource[0]);
    // ... do something with the result here ...
} finally {
    conn.close();
}
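As a side note: on Sesame/RDF4J versions where RepositoryConnection implements AutoCloseable (an assumption about your version, so check first), the same pattern can be written more compactly with try-with-resources:
// Sketch assuming RepositoryConnection is AutoCloseable in your Sesame/RDF4J version.
try (RepositoryConnection conn = repo.getConnection()) {
    RepositoryResult<Statement> idStatementIter = conn.getStatements(
            null, RepositoryConfigSchema.REPOSITORYID, null, true, new Resource[0]);
    // ... do something with the result here ...
} // the connection is closed automatically, even if an exception is thrown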
As an aside: if your goal is to retrieve repository meta-information (id, title, location), the above code is far too complex. There is no need to open a connection to the SYSTEM repository to read this information at all; you can obtain it directly from the RepositoryManager. For example, you can retrieve a list of repository identifiers simply by doing:
List<String> repoIds = localManager.getRepositoryIDs();
for (String id : repoIds) {
    System.out.println("repository id: " + id);
}
Or if you want to also get the file location and/or description, use:
Collection<RepositoryInfo> infos = localManager.getAllRepositoryInfos();
for (RepositoryInfo info : infos) {
    System.out.println("id: " + info.getId());
    System.out.println("description: " + info.getDescription());
    System.out.println("location: " + info.getLocation());
}
Another problem with your code is that I suspect you never properly call manager.shutDown() or repository.shutDown(). Calling these when your program exits allows the manager and the repository to properly close resources, save state, and exit gracefully. Since you are creating the RepositoryManager object yourself, you also need to take care of doing this on program exit yourself.
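For example, a minimal sketch (not part of your original code; it assumes the repository and localManager variables from the snippets above are effectively final so they can be referenced here) that registers these calls in a JVM shutdown hook:
// Minimal sketch: shut down the repository and the manager when the JVM exits.
// A shutdown hook is just one option; calling shutDown() at the end of main() works too.
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        try {
            repository.shutDown();
            localManager.shutDown();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
});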
An alternative to creating your own RepositoryManager object is to use a RepositoryProvider instead (see also the relevant section in the Sesame Programmers Manual). This is a utility class that comes with a built-in shutdown hook, saving you from having to deal with these manager/repository shutdown issues.
So instead of this:
LocalRepositoryManager localManager = new LocalRepositoryManager(new File(dataDir));
localManager.initialize();
Do this:
LocalRepositoryManager localManager =
        RepositoryProvider.getRepositoryManager(new File(dataDir));
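The manager you get back is then used exactly as before. Purely as an illustration (reusing the calls from your own snippets, not new API), adding and retrieving your repository through the provider-managed manager would look something like:
// Sketch only: the same calls as in the question, run against the provider-managed
// manager, so shutdown is handled by the provider's built-in shutdown hook.
String repositoryName = "data.ttl";
RepositoryConfig repConfig = new RepositoryConfig(repositoryName);
repConfig.setRepositoryImplConfig(new SailRepositoryConfig(new MemoryStoreConfig()));
localManager.addRepositoryConfig(repConfig);

Repository repository = localManager.getRepository(repositoryName);
repository.initialize();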

Related

Get token string from tokenID using Stanford Parser in GATE

I am trying to use some Java RHS code to get the string values of dependent tokens using the Stanford dependency parser in GATE, and add them as features of a new annotation.
I am having problems targeting just the 'dependencies' feature of the token, and getting the string value from the tokenID.
Using the code below, specifying only 'dependencies', also throws a Java NullPointerException:
for (Annotation lookupAnn : tokens.inDocumentOrder()) {
    FeatureMap lookupFeatures = lookupAnn.getFeatures();
    token = lookupFeatures.get("dependencies").toString();
}
I can use gate.Utils.inDocumentOrder to get all the features of a token, but it returns all features, including the dependent token IDs, e.g.:
dependencies = [nsubj(8390), dobj(8394)]
I would like to get just the dependent token's string value from these token IDs.
Is there any way to access the dependent token's string value and add it as a feature to the annotation?
Many thanks for your help
Here is a working JAPE example. It only prints to GATE's message window (stdout); it doesn't create the new annotations with the features you asked for, so please finish it yourself.
The Stanford_CoreNLP plugin has to be loaded in GATE to make this JAPE file loadable; otherwise you will get a class-not-found exception for the DependencyRelation class.
Imports: {
    import gate.stanford.DependencyRelation;
}

Phase: GetTokenDepsPhase
Input: Token
Options: control = all

Rule: GetTokenDepsRule
(
    {Token}
): token
-->
:token {
    // note that tokenAnnots contains only a single annotation, so the loop could be avoided...
    for (Annotation token : tokenAnnots) {
        Object deps = token.getFeatures().get("dependencies");
        // sometimes the dependencies feature is missing - skip it
        if (deps == null) continue;
        // token.getFeatures().get("string") could be used instead of gate.Utils.stringFor(doc, token)...
        System.out.println("Dependencies for token " + gate.Utils.stringFor(doc, token));
        // the dependencies feature has to be typed to List<DependencyRelation>
        List<DependencyRelation> typedDeps = (List<DependencyRelation>) deps;
        for (DependencyRelation r : typedDeps) {
            // use DependencyRelation.getTargetId() to get the id of the target token
            // use inputAS.get(id) to get the annotation for its id
            Annotation targetToken = inputAS.get(r.getTargetId());
            // use DependencyRelation.getType() to get the dependency type
            System.out.println("  " + r.getType() + ": " + gate.Utils.stringFor(doc, targetToken));
        }
    }
}
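To take it one step further in the direction the question asks for, here is a rough sketch (to be placed inside the loop over tokenAnnots, after the inner loop; the annotation type "TokenDeps" and feature name "depStrings" are made-up names for illustration, not GATE conventions) that collects the dependency strings and attaches them to a new annotation:
// Hypothetical continuation of the RHS above: build a "type: string" summary
// of the dependents and add it as a feature of a new annotation over the token.
StringBuilder depStrings = new StringBuilder();
for (DependencyRelation r : typedDeps) {
    Annotation targetToken = inputAS.get(r.getTargetId());
    if (depStrings.length() > 0) depStrings.append("; ");
    depStrings.append(r.getType()).append(": ").append(gate.Utils.stringFor(doc, targetToken));
}
gate.FeatureMap fm = gate.Factory.newFeatureMap();
fm.put("depStrings", depStrings.toString());
try {
    outputAS.add(gate.Utils.start(token), gate.Utils.end(token), "TokenDeps", fm);
} catch (gate.util.InvalidOffsetException e) {
    throw new RuntimeException(e);
}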

How to use github java API (org.eclipse.egit.github.*) to search for a given commit hash

It is possible to retrieve details for a given commit by calling the GitHub Search API found here, providing the relevant commit hash. Now I need to get the same response using the GitHub Java API (org.eclipse.egit.github.*), which can be found here. According to the documentation for version 2.1.5 found here, there is no method in the CommitService class to get commit information by providing only the commit hash. Is there a workaround? Thanks in advance.
You can use the CommitService.getCommit(IRepositoryIdProvider, String) method, just feed in one more argument, the repository where the commit will be searched. For example,
GitHubClient client = new GitHubClient(server).setCredentials(login, token);
RepositoryService repoService = new RepositoryService(client);

// If you know which repository to search (you know the owner and repo name)
Repository repository = repoService.getRepository(owner, repoName);
CommitService commitService = new CommitService(client);
Commit commit1 = commitService.getCommit(repository, sha).getCommit();
System.out.println("Author: " + commit1.getAuthor().getName());
System.out.println("Message: " + commit1.getMessage());
System.out.println("URL: " + commit1.getUrl());
Or you may just loop through each repository returned from the RepositoryService.getRepositories() method, if you don't know which repository to search. For example,
List<Repository> repositories = repoService.getRepositories();
Commit commit2 = null;
for (Repository repo : repositories) {
    try {
        commit2 = commitService.getCommit(repo, sha).getCommit();
        System.out.println("Repo: " + repo.getName());
        System.out.println("Author: " + commit2.getAuthor().getName());
        System.out.println("Message: " + commit2.getMessage());
        System.out.println("URL: " + commit2.getUrl());
        break;
    } catch (RequestException re) {
        if (re.getMessage().endsWith("Not Found (404)")) {
            continue;
        } else {
            throw re;
        }
    }
}

How to use SVNKit to get the list of all revisions merged from one branch to another?

Purpose:
As a quality check, we need to make sure that all qualified revisions are merged up to all higher revisions.
Example:
Release_10 --> Release_11
In TortoiseSVN, it is very easy to figure out which revisions have been merged by clicking the "Show Log" button. Merged revisions are clearly distinguished by formatting and a merge icon.
You can also list all merged revisions through pure SVN:
C:\Release_11>svn mergeinfo --show-revs merged https://svn_repo/project/branches/releases/Release_10
r32894
r32897
r32901
r32903
r32929
r32987
r32994
r32996
r33006
r33017
r33020
r33021
r33041
r33045
r33077
Currently, I have the code that lists all revision info:
SVNURL svnURL = SVNURL.parseURIEncoded(url);
SVNRepository repository = SVNRepositoryFactory.create(svnURL);
BasicAuthenticationManager authManager = BasicAuthenticationManager.newInstance("username", "password".toCharArray());
authManager.setProxy("00.00.00.00", 8080, null, (char[]) null);
repository.setAuthenticationManager(authManager);
@SuppressWarnings("unchecked")
Collection<SVNLogEntry> logEntries = repository.log(new String[] { "" }, null, 0, -1, true, true);
Now I just need to figure out which ones have been merged.
Preferences:
A single call to the repo to get all merged revisions. (Or better yet: A single call to get all the revisions and whether or not they were merged.)
Would prefer not to have a local checkout of the SVN repo.
Notes:
Using SVNKIT v1.8.14, but am not locked into a specific revision.
If you run svn mergeinfo with absolute URLs for both source and target, you won't need a local checkout of the repository:
svn mergeinfo --show-revs merged https://svn_repo/project/branches/releases/Release_10 https://svn_repo/project/branches/releases/Release_11
To run this command, on SVNKit 1.7 or later, you can use SvnLogMergeInfo:
SvnOperationFactory svnOperationFactory = new SvnOperationFactory();
try {
    String repo = "https://svn_repo/project/branches/releases/";
    String srcBranch = repo + "Release_10";
    String dstBranch = repo + "Release_11";
    // svnOperationFactory.setAuthenticationManager(null);

    SvnLogMergeInfo op = svnOperationFactory.createLogMergeInfo();
    op.setFindMerged(true); // --show-revs merged
    op.setSource(SvnTarget.fromURL(SVNURL.parseURIEncoded(srcBranch)));
    op.setSingleTarget(SvnTarget.fromURL(SVNURL.parseURIEncoded(dstBranch)));
    op.setReceiver(new ISvnObjectReceiver<SVNLogEntry>() {
        @Override
        public void receive(SvnTarget target, SVNLogEntry logEntry) throws SVNException {
            System.out.println("------\n" + logEntry);
        }
    });
    op.run();
} finally {
    svnOperationFactory.dispose();
}
For older versions, you can use SVNDiffClient.doGetLogMergedMergeInfo.
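A rough sketch of that older API (I have not verified the exact parameter list, which may differ between SVNKit releases, so treat it as an assumption) would look something like this:
// Rough sketch only: the exact doGetLogMergedMergeInfo signature may vary between SVNKit versions.
SVNClientManager clientManager = SVNClientManager.newInstance();
SVNDiffClient diffClient = clientManager.getDiffClient();
diffClient.doGetLogMergedMergeInfo(
        SVNURL.parseURIEncoded(dstBranch), SVNRevision.HEAD,  // merge target
        SVNURL.parseURIEncoded(srcBranch), SVNRevision.HEAD,  // merge source
        false,                                                // discoverChangedPaths
        null,                                                 // revisionProperties
        new ISVNLogEntryHandler() {
            @Override
            public void handleLogEntry(SVNLogEntry logEntry) throws SVNException {
                System.out.println(logEntry.getRevision());
            }
        });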

Retrieving modified/ checked in files from Clear Case Activity

This query relates to Rational ClearCase CM API programming in Java. We have a requirement to get the list of modified files of a particular stream. I am able to get the list of activities (of type CcActivity) from a given stream, and using that activity list info I am able to fetch the version information as well.
I am unable to get the change-set information, i.e. the names of the files that were modified, as there is no such method defined.
Could you please tell me which property or method I should use to fetch the list of modified files or the change-set information using the activity id or version information? Below is the code I have written to get the activity list and version information:
PropertyRequest propertyrequest = new PropertyRequest(
        CcStream.ACTIVITY_LIST, CcStream.TASK_LIST);
stream = (CcStream) stream.doReadProperties(propertyrequest);
List<CcActivity> listOfAct = stream.getActivityList();
for (int i = 0; i < listOfAct.size(); i++) {
    CcActivity ccActivity = listOfAct.get(i);
    PropertyRequest activityPropertyRequest = new PropertyRequest(
            CcActivity.COMMENT, CcActivity.ID, CcActivity.DISPLAY_NAME,
            CcActivity.LATEST_VERSION_LIST, CcActivity.CREATOR_DISPLAY_NAME,
            CcActivity.NAME_RESOLVER_VIEW, CcActivity.TASK_LIST,
            CcActivity.CREATOR_LOGIN_NAME, CcActivity.HEADLINE, CcActivity.COMMENT);
    ccActivity = (CcActivity) ccActivity.doReadProperties(activityPropertyRequest);
    trace(ccActivity.getDisplayName());
    trace(ccActivity.getCreatorDisplayName());
    trace("CREATOR_LOGIN_NAME :" + ccActivity.getCreatorLoginName());
    trace("Headline:" + ccActivity.getHeadline());
    ResourceList<javax.wvcm.Version> versionList = ccActivity.getLatestVersionList();
    for (int j = 0; j < versionList.size(); j++) {
        Version version = versionList.get(j);
        PropertyRequest versionPropertyRequest = new PropertyRequest(
                Version.PREDECESSOR_LIST, Version.VERSION_NAME,
                Version.VERSION_HISTORY.nest(VersionHistory.CHILD_MAP),
                Version.DISPLAY_NAME, Version.COMMENT, Version.PATHNAME_LOCATION,
                Version.ACTIVITY.nest(Resource.CONTENT_TYPE));
        version = (Version) version.doReadProperties(versionPropertyRequest);
        trace("Version Info");
        trace("Version Name : " + version.getVersionName());
        trace("Version Comment :" + version.getComment());
    }
}
Old thread, but I have recently been trying to learn how to list modified files through the API. I believe the issue is that the version object returned by the ResourceList get method should be cast to the CcVersion type.
This exposes the StpResource.doReadProperties(Resource context, Feedback feedback) method, which requires a handle to the view we are interested in. We can also request the CcVersion-specific properties we are interested in.
The example would then become something like:
ResourceList<javax.wvcm.Version> versionList = ccActivity.getLatestVersionList();
for (int j = 0; j < versionList.size(); j++) {
    // cast to CcVersion to expose doReadProperties(Resource context, Feedback feedback)
    CcVersion version = (CcVersion) versionList.get(j);
    PropertyRequest versionPropertyRequest = new PropertyRequest(
            CcVersion.DISPLAY_NAME, CcVersion.VIEW_RELATIVE_PATH, CcVersion.CREATION_DATE);
    version = (CcVersion) version.doReadProperties(view, versionPropertyRequest);
    trace("Version Info");
    trace("Version DISPLAY_NAME : " + version.getDisplayName());
    trace("Version VIEW_RELATIVE_PATH : " + version.getViewRelativePath());
    trace("Version CREATION_DATE : " + version.getCreationDate());
}

Neo4j ExecutionEngine does not return valid results

Trying to use a similar example from the sample code found here
My sample function is:
void query()
{
    String nodeResult = "";
    String rows = "";
    String resultString;
    String columnsString;
    System.out.println("In query");
    // START SNIPPET: execute
    ExecutionEngine engine = new ExecutionEngine( graphDb );
    ExecutionResult result;
    try ( Transaction ignored = graphDb.beginTx() )
    {
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // END SNIPPET: execute
        // START SNIPPET: items
        Iterator<Node> n_column = result.columnAs( "n" );
        for ( Node node : IteratorUtil.asIterable( n_column ) )
        {
            // note: we're grabbing the name property from the node,
            // not from the n.name in this case.
            nodeResult = node + ": " + node.getProperty( "Name" );
            System.out.println("In for loop");
            System.out.println(nodeResult);
        }
        // END SNIPPET: items
        // START SNIPPET: columns
        List<String> columns = result.columns();
        // END SNIPPET: columns
        // the result is now empty, get a new one
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // START SNIPPET: rows
        for ( Map<String, Object> row : result )
        {
            for ( Entry<String, Object> column : row.entrySet() )
            {
                rows += column.getKey() + ": " + column.getValue() + "; ";
                System.out.println("nested");
            }
            rows += "\n";
        }
        // END SNIPPET: rows
        resultString = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n.Name" ).dumpToString();
        columnsString = columns.toString();
        System.out.println(rows);
        System.out.println(resultString);
        System.out.println(columnsString);
        System.out.println("leaving");
    }
}
When I run this query in the web console I get many results (as there are multiple nodes whose Name attribute contains the pattern 79). Yet running this code returns no results. The debug print statements ('In for loop' and 'nested') never print either, so there must be no results in the iterator, which doesn't make sense.
And yes, I already checked and made sure that the graphDb variable points to the same path as the web console. I have other code, earlier on, that uses the same variable to write to the database.
EDIT - More info
If I place the contents of query() in the same function that creates my data, I get the correct results. If I run the query by itself it returns nothing. It's almost as if the query works only in the instance where I add the data, and not if I come back to the database cold in a separate instance.
EDIT2 -
Here is a snippet of code that shows the bigger context of how it is being called and sharing the same DBHandle
package ContextEngine;

import ContextEngine.NeoHandle;
import java.util.LinkedList;

/*
 * Class to handle streaming data from any coded source
 */
public class Streamer {

    private NeoHandle myHandle;
    private String contextType;

    Streamer()
    {
    }

    public void openStream(String contextType)
    {
        myHandle = new NeoHandle();
        myHandle.createDb();
    }

    public void streamInput(String dataLine)
    {
        Context context = new Context();
        /*
         * get database instance
         * write to database
         * check for errors
         * report errors & success
         */
        System.out.println(dataLine);
        // apply rules to data (make ContextRules do this, send type and string of data)
        ContextRules contextRules = new ContextRules();
        context = contextRules.processContextRules("Calls", dataLine);
        // write data (using linked list from contextRules)
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.processContextData(context);
    }

    public void runQuery()
    {
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.query();
    }

    public void closeStream()
    {
        /*
         * close database instance
         */
        myHandle.shutDown();
    }
}
Now, if I call streamInput AND query in the same instance (parent calls), the query returns results. If I only call query and do not enter ANY data in that instance (yet the web console shows data for the same query), I get nothing. Why would I have to create the nodes and enter them into the database at runtime just to get a valid query result? Shouldn't I ALWAYS get the same results with such a query?
You mention that you are using the Neo4j Browser, which comes with Neo4j. However, the example you posted is for Neo4j Embedded, which is the in-process version of Neo4j. Are you sure you are talking to the same database when you try your query in the Browser?
In order to talk to Neo4j Server from Java, I'd recommend looking at the Neo4j JDBC driver, which has good support for connecting to the Neo4j server from Java.
http://www.neo4j.org/develop/tools/jdbc
You can set up a simple connection by adding the Neo4j JDBC jar to your classpath; it is available here: https://github.com/neo4j-contrib/neo4j-jdbc/releases
Then just use Neo4j as you would any other JDBC driver:
Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
ResultSet rs = conn.executeQuery("start n=node({id}) return id(n) as id", map("id", id));
while (rs.next()) {
    System.out.println(rs.getLong("id"));
}
Refer to the JDBC documentation for more advanced usage.
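If you would rather stick to plain java.sql interfaces instead of the driver-specific executeQuery call shown above, a minimal sketch (the connection URL and the Cypher query here are just assumptions for illustration) would be:
// Minimal sketch using only standard JDBC interfaces.
Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
try {
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("MATCH (n) RETURN id(n) AS id LIMIT 10");
    while (rs.next()) {
        System.out.println(rs.getLong("id"));
    }
    rs.close();
    stmt.close();
} finally {
    conn.close();
}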
To answer your question on why the data is not durably stored, it may be one of many reasons. I would attempt to incrementally scale back the complexity of the code to try and locate the culprit. For instance, until you've found your problem, do these one at a time:
Instead of looping through the result, print it using System.out.println(result.dumpToString());
Instead of the regex query, try just MATCH (n) RETURN n, to return all data in the database
Make sure the data you are seeing in the browser is not "old" data inserted earlier on, but really is an insert from your latest run of the Java program. You can verify this by deleting the data via the browser before running the Java program using MATCH (n) OPTIONAL MATCH (n)-[r]->() DELETE n,r;
Make sure you are actually working against the same database directory. You can verify this by leaving the server running: if your Java program still starts up (and it is not using the Neo4j REST bindings), then it is not using the same directory, because two Neo4j instances cannot run against the same database directory simultaneously.
