I have an external text file that I want to bind to a label so that when the external file value is modified, my UI auto-updates the string value.
So far, I have tried:
val testid: ObservableStringValue = SimpleStringProperty(File("src/.../test").readText())
And in my borderpane, I reference the testid
label.bind(testid)
This reads the file successfully, but testid doesn't auto-update its value when I edit the test file. I thought about using a Handler() to force the variable to refresh every second, but I'm sure there's a smarter way to use Properties and .observable() to bind the file and the Property together.
EDIT:
Following on from mipa's suggestion to use nio2, I'm having trouble producing the object/class for the timer:
object DirectoryWatcher {
    @JvmStatic
    fun main(args: Array<String>) {
        val watchService = FileSystems.getDefault().newWatchService()
        // WatchService watches directories, so register the directory containing the file
        val path = Paths.get("src/pykotinterface")
        path.register(
            watchService,
            StandardWatchEventKinds.ENTRY_CREATE,
            StandardWatchEventKinds.ENTRY_DELETE,
            StandardWatchEventKinds.ENTRY_MODIFY)
        while (true) {
            val key: WatchKey = watchService.take() // blocks until events are available
            for (event in key.pollEvents()) {
                println("Event kind: " + event.kind() +
                        ". File affected: " + event.context() + ".")
            }
            key.reset() // re-arm the key so further events are delivered
        }
    }
}
How do I call this object to run? It's currently sitting inside my View() class, which TornadoFX calls to produce the view, so I can't call DirectoryWatcher.main(). Do I place a call to this object from within the other App class? I'm very lost.
There is no built-in mechanism in JavaFX that would allow such a binding, but you can use the Java watch service as described here:
http://www.baeldung.com/java-nio2-watchservice
The Oracle doc can be found here:
https://docs.oracle.com/javase/10/docs/api/java/nio/file/WatchService.html
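In outline, the suggestion can be wired up by running the watch service on a background daemon thread and handing each new file content to a callback; in the JavaFX app that callback would wrap the property update in Platform.runLater so the bound label refreshes on the FX thread. A minimal sketch in plain Java (class and method names here are my own, not from any SDK):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.function.Consumer;

// Sketch: watch a single file and push its new content to a callback whenever
// it changes. In a JavaFX/TornadoFX app the callback would do
// Platform.runLater(() -> stringProperty.set(content)) so the bound label
// updates on the FX application thread.
public class FileContentWatcher {
    public static Thread watch(Path file, Consumer<String> onChange) throws IOException {
        WatchService watchService = FileSystems.getDefault().newWatchService();
        // WatchService watches directories, so register the file's parent.
        file.toAbsolutePath().getParent().register(
                watchService, StandardWatchEventKinds.ENTRY_MODIFY);
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    WatchKey key = watchService.take(); // blocks until an event arrives
                    for (WatchEvent<?> event : key.pollEvents()) {
                        // Only react when the watched file itself changed.
                        if (file.getFileName().equals(event.context())) {
                            onChange.accept(Files.readString(file));
                        }
                    }
                    key.reset(); // re-arm the key or no further events arrive
                }
            } catch (InterruptedException | IOException e) {
                Thread.currentThread().interrupt(); // give up on error/interrupt
            }
        });
        t.setDaemon(true); // don't keep the JVM alive after the UI exits
        t.start();
        return t;
    }
}
```

The view could then call something like `FileContentWatcher.watch(path, text -> Platform.runLater(() -> testid.set(text)))` from its init block, instead of invoking a separate object's main().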
My problem is that scrollToItem doesn't work in an in-memory TreeGrid.
I tried the solution from the cookbook (https://cookbook.vaadin.com/scroll-to-item-in-tree-grid), but scrollToItem doesn't work, and I get a red error with the message "(TypeError): this.scrollWhenReady is not a function".
My question: is the scrollWhenReady function available under a different name, or is the cookbook recipe not applicable to Vaadin 14.6.4?
The function is specific to this recipe and is defined here:
private void initScrollWhenReady() {
runBeforeClientResponse(
ui ->
getElement()
.executeJs(
"this.scrollWhenReady = function(index, firstCall){" +
"if(this.loading || firstCall) {var that = this; setTimeout(function(){that.scrollWhenReady(index, false);}, 200);}" +
" else {this.scrollToIndex(index);}" +
"};"
)
);
}
Did you copy this code, and is it executed? It should be executed when the view is attached.
I'm using the Data Movement SDK from the MarkLogic Java API to transform several documents. So far I can transform documents by using a query batcher and a transform, but I'm only able to select URIs through StructuredQuery objects.
My question is: how can I use a selector module from my database instead of defining the selection in my Java application?
Update:
So far I have code that looks up document URIs and applies a transform to them. I want to change that query batcher to use a module (a URI selector module) instead of looking for all documents in a directory:
public TransformExecutionResults applyTransformByModule(String transformName, String filterText, int batchSize, int threadCount, String selectorModuleName, Map<String, String> parameters) {
    final ConcurrentHashMap<String, TransformExecutionResults> transformResult = new ConcurrentHashMap<>();
    try {
        // Specify a server-side transformation module (stored procedure) by name
        ServerTransform transform = new ServerTransform(transformName);
        ApplyTransformListener transformListener = new ApplyTransformListener()
                .withTransform(transform)
                .withApplyResult(ApplyResult.REPLACE) // Transform in-place, i.e. rewrite
                .onSuccess(batch -> {
                    transformResult.compute(transformName, (k, v) -> TransformExecutionResults.Success);
                    System.out.println("Transformation " + transformName + " executed successfully.");
                })
                .onSkipped(batch -> {
                    System.out.println("Transformation " + transformName + " skipped successfully.");
                    transformResult.compute(transformName, (k, v) -> TransformExecutionResults.Skipped);
                })
                .onFailure((batchListener, throwable) -> {
                    System.err.println("Transformation " + transformName + " executed with errors.");
                    transformResult.compute(transformName, (k, v) -> TransformExecutionResults.Failed);
                });
        // Apply the transformation to only the documents that match a query.
        QueryManager qm = DbClient.newQueryManager();
        StructuredQueryBuilder sqb = qm.newStructuredQueryBuilder();
        // Instead of this StructuredQueryDefinition, I want to use a module to get all URIs
        StructuredQueryDefinition queryBySubdirectory = sqb.directory(true, "/temp/" + filterText + "/");
        final QueryBatcher batcher = DMManager.newQueryBatcher(queryBySubdirectory);
        batcher.withBatchSize(batchSize);
        batcher.withThreadCount(threadCount);
        batcher.withConsistentSnapshot();
        batcher.onUrisReady(transformListener).onQueryFailure(exception -> {
            exception.printStackTrace();
            System.out.println("There was an error on the transform process.");
        });
        final JobTicket ticket = DMManager.startJob(batcher);
        batcher.awaitCompletion();
        DMManager.stopJob(ticket);
    } catch (Exception fault) {
        transformResult.compute(transformName, (k, v) -> TransformExecutionResults.GeneralException);
    }
    return transformResult.get(transformName);
}
If the job is small enough, you can just implement the document rewriting within your e-node code, either by making a call to a resource service extension:
http://docs.marklogic.com/guide/java/resourceservices#id_27702
http://docs.marklogic.com/javadoc/client/com/marklogic/client/extensions/ResourceServices.html
or by invoking a main module:
http://docs.marklogic.com/guide/java/resourceservices#id_84134
If the job is too long to fit in a single transaction, you can create a QueryBatcher with a document URI iterator instead of with a query. See:
http://docs.marklogic.com/javadoc/client/com/marklogic/client/datamovement/DataMovementManager.html#newQueryBatcher-java.util.Iterator-
For some examples illustrating the approach, see the second half of the second example in the class description for QueryBatcher:
http://docs.marklogic.com/javadoc/client/com/marklogic/client/datamovement/QueryBatcher.html
as well as the second half of this example:
http://docs.marklogic.com/javadoc/client/com/marklogic/client/datamovement/UrisToWriterListener.html
In your case, you could implement an Iterator that calls a resource service extension or invokes a main module to get and return the URIs (preferably with read-ahead), blocking when necessary.
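The blocking, read-ahead Iterator described above can be sketched with plain java.util.concurrent, independently of the MarkLogic client. In this sketch (class and names are illustrative), a producer thread, which in the real job would call your resource service extension or main module, fills a bounded queue, and the Iterator blocks until the next URI or an end-of-stream marker arrives:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Read-ahead URI iterator: the producer runs on its own thread and pushes
// URIs into a bounded queue; next()/hasNext() block until data (or the end
// marker) is available. Pass an instance to DMManager.newQueryBatcher(...)
// in place of a StructuredQueryDefinition.
public class ReadAheadUriIterator implements Iterator<String> {
    private static final String END = "\u0000END\u0000"; // sentinel, never a real URI
    private final BlockingQueue<String> queue;
    private String next;

    public ReadAheadUriIterator(int readAhead, Consumer<Consumer<String>> producer) {
        this.queue = new LinkedBlockingQueue<>(readAhead);
        Thread t = new Thread(() -> {
            try {
                // The producer calls the supplied callback once per URI;
                // put() blocks when the read-ahead buffer is full.
                producer.accept(uri -> {
                    try { queue.put(uri); } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            } finally {
                // Always signal end-of-stream, even if the producer fails.
                try { queue.put(END); } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        t.setDaemon(true);
        t.start();
        advance();
    }

    private void advance() {
        try { next = queue.take(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            next = END;
        }
    }

    @Override public boolean hasNext() { return !END.equals(next); }

    @Override public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        String current = next;
        advance();
        return current;
    }
}
```

The QueryBatcher then pulls URIs at its own pace while the producer keeps the buffer topped up, which is the "read ahead, blocking when necessary" behaviour mentioned above.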
By returning the URIs to the client, it's easy to log the URIs for later audit.
Hoping that helps,
On my machine, the base/data directory contains multiple repositories. But when I access this data directory from a Java program, it gives me only the SYSTEM repository record.
Code to retrieve the repositories:
String dataDir = "D:\\SesameStorage\\data\\";
LocalRepositoryManager localManager = new LocalRepositoryManager(new File(dataDir));
localManager.initialize();

// Get all repositories
Collection<Repository> repos = localManager.getAllRepositories();
System.out.println("LocalRepositoryManager All repositories : " + repos.size());
for (Repository repo : repos) {
    System.out.println("This is : " + repo.getDataDir());
    RepositoryResult<Statement> idStatementIter = repo.getConnection().getStatements(
            null, RepositoryConfigSchema.REPOSITORYID, null, true, new Resource[0]);
    Statement idStatement;
    try {
        while (idStatementIter.hasNext()) {
            idStatement = idStatementIter.next();
            if (idStatement.getObject() instanceof Literal) {
                Literal idLiteral = (Literal) idStatement.getObject();
                System.out.println("idLiteral.getLabel() : " + idLiteral.getLabel());
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Output:
LocalRepositoryManager All repositories : 1
This is : D:\SemanticStorage\data\repositories\SYSTEM
idLiteral.getLabel() : SYSTEM
Adding a repository to the LocalRepositoryManager:
String repositoryName = "data.ttl";
RepositoryConfig repConfig = new RepositoryConfig(repositoryName);
SailRepositoryConfig config = new SailRepositoryConfig(new MemoryStoreConfig());
repConfig.setRepositoryImplConfig(config);
manager.addRepositoryConfig(repConfig);
Getting the repository object:
Repository repository = manager.getRepository(repositoryName);
repository.initialize();
I have successfully added a new repository to the LocalRepositoryManager, and it shows me a repository count of 2. But when I restart the application, it shows me only one repository: the SYSTEM repository.
My SYSTEM repository is not getting updated. Please suggest how I should load that data directory into my LocalRepositoryManager object.
You haven't provided a comprehensive test case, just individual snippets of code with no clear indication of the order in which they get executed, which makes it somewhat hard to figure out what exactly is going wrong.
I would hazard a guess, however, that the problem is that you don't properly close and shut down resources. First of all, you are obtaining a RepositoryConnection without ever closing it:
RepositoryResult<Statement> idStatementIter = repo
.getConnection().getStatements(null,
RepositoryConfigSchema.REPOSITORYID, null,
true, new Resource[0]);
You will need to change it to something like the following:
RepositoryConnection conn = repo.getConnection();
try {
RepositoryResult<Statement> idStatementIter =
conn.getStatements(null,
RepositoryConfigSchema.REPOSITORYID, null,
true, new Resource[0]);
(... do something with the result here ...)
}
finally {
conn.close();
}
As an aside: if your goal is to retrieve repository meta-information (id, title, location), the above code is far too complex. There is no need to open a connection to the SYSTEM repository to read this information at all; you can obtain it directly from the RepositoryManager. For example, you can retrieve a list of repository identifiers simply by doing:
List<String> repoIds = localManager.getRepositoryIDs();
for (String id: repoIds) {
System.out.println("repository id: " + id);
}
Or if you want to also get the file location and/or description, use:
Collection<RepositoryInfo> infos = localManager.getAllRepositoryInfos();
for (RepositoryInfo info: infos) {
System.out.println("id: " + info.getId());
System.out.println("description: " + info.getDescription());
System.out.println("location: " + info.getLocation());
}
Another problem with your code is that I suspect you never properly call manager.shutDown() or repository.shutDown(). Calling these when your program exits allows the manager and the repository to properly close resources, save state, and exit gracefully. Since you are creating a RepositoryManager object yourself, you need to take care of this on program exit yourself as well.
An alternative to creating your own RepositoryManager object is to use a RepositoryProvider instead (see also the relevant section in the Sesame Programmers Manual). This is a utility class that comes with a built-in shutdown hook, saving you from having to deal with these manager/repository shutdown issues.
So instead of this:
LocalRepositoryManager localManager = new LocalRepositoryManager(new File(dataDir));
localManager.initialize();
Do this:
LocalRepositoryManager localManager =
RepositoryProvider.getRepositoryManager(new File(dataDir));
I'm using the azure-documentdb Java SDK in order to create and use User Defined Functions (UDFs).
From the official documentation I finally found the way (with a Java client) to create a UDF:
String regexUdfJson = "{"
+ "id:\"REGEX_MATCH\","
+ "body:\"function (input, pattern) { return input.match(pattern) !== null; }\","
+ "}";
UserDefinedFunction udfREGEX = new UserDefinedFunction(regexUdfJson);
getDC().createUserDefinedFunction(
myCollection.getSelfLink(),
udfREGEX,
new RequestOptions());
And here is a sample query :
SELECT * FROM root r WHERE udf.REGEX_MATCH(r.name, "mytest_.*")
I had to create the UDF only once, because I got an exception when I tried to recreate an existing UDF:
DocumentClientException: Message: {"Errors":["The input name presented is already taken. Ensure to provide a unique name property for this resource type."]}
What should I do to find out whether the UDF already exists?
I tried to use the "readUserDefinedFunctions" function without success. Any example / other ideas?
Maybe, for the long term, we should suggest a "createOrReplaceUserDefinedFunction(...)" on Azure feedback.
You can check for existing UDFs by running a query using queryUserDefinedFunctions.
Example:
List<UserDefinedFunction> udfs = client.queryUserDefinedFunctions(
        myCollection.getSelfLink(),
        new SqlQuerySpec("SELECT * FROM root r WHERE r.id = @id",
                new SqlParameterCollection(new SqlParameter("@id", myUdfId))),
        null).getQueryIterable().toList();
if (udfs.size() > 0) {
    // Found UDF.
}
An answer for .NET users.
var collectionAltLink = documentCollections["myCollection"].AltLink; // Target collection's AltLink
var udfLink = $"{collectionAltLink}/udfs/{sampleUdfId}"; // sampleUdfId is your UDF Id
var result = await _client.ReadUserDefinedFunctionAsync(udfLink);
var resource = result.Resource;
if (resource != null)
{
    // The UDF with udfId exists
}
Here _client is Azure's DocumentClient and documentCollections is a dictionary of your DocumentDB collections.
If there is no such UDF in the mentioned collection, the ReadUserDefinedFunctionAsync call throws a NotFound exception.
When using a directory-expression for an <int-file:outbound-gateway> endpoint, the method below is called on org.springframework.integration.file.FileWritingMessageHandler:
private File evaluateDestinationDirectoryExpression(Message<?> message) {
final File destinationDirectory;
final Object destinationDirectoryToUse = this.destinationDirectoryExpression.getValue(
this.evaluationContext, message);
if (destinationDirectoryToUse == null) {
throw new IllegalStateException(String.format("The provided " +
"destinationDirectoryExpression (%s) must not resolve to null.",
this.destinationDirectoryExpression.getExpressionString()));
}
else if (destinationDirectoryToUse instanceof String) {
final String destinationDirectoryPath = (String) destinationDirectoryToUse;
Assert.hasText(destinationDirectoryPath, String.format(
"Unable to resolve destination directory name for the provided Expression '%s'.",
this.destinationDirectoryExpression.getExpressionString()));
destinationDirectory = new File(destinationDirectoryPath);
}
else if (destinationDirectoryToUse instanceof File) {
destinationDirectory = (File) destinationDirectoryToUse;
} else {
throw new IllegalStateException(String.format("The provided " +
"destinationDirectoryExpression (%s) must be of type " +
"java.io.File or be a String.", this.destinationDirectoryExpression.getExpressionString()));
}
validateDestinationDirectory(destinationDirectory, this.autoCreateDirectory);
return destinationDirectory;
}
Based on this code, I see that if the directory to use evaluates to a String, that String is used to create a new java.io.File object.
Is there a reason that a ResourceLoader couldn't/shouldn't be used instead of directly creating a new file?
I ask because my expression was evaluating to a String of the form 'file://path/to/file/' which of course is an invalid path for the java.io.File(String) constructor. I had assumed that Spring would treat the String the same way as it treats the directory attribute on <int-file:outbound-gateway> and pass it through a ResourceLoader.
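To illustrate the failure mode (a quick sketch independent of Spring, using an invented /tmp path): java.io.File's String constructor treats a file: URL as a literal path, whereas the URI constructor, like a ResourceLoader, interprets the scheme:

```java
import java.io.File;
import java.net.URI;

// Shows why a String of the form "file://..." breaks new File(String):
// the text is taken literally as a (relative) path, while File(URI)
// actually resolves the file: scheme.
public class FileUriDemo {
    public static void main(String[] args) {
        // Taken literally; duplicate separators are collapsed, so this is a
        // relative path named "file:/tmp/data" — not the intended /tmp/data.
        File literal = new File("file://tmp/data");
        System.out.println(literal.getPath() + " absolute=" + literal.isAbsolute());

        // Interprets the file: scheme and yields the real absolute path.
        File fromUri = new File(URI.create("file:///tmp/data"));
        System.out.println(fromUri.getPath());
    }
}
```

Note that File(URI) typically rejects a URI with an authority component (like file://hostname/some/path/), which is part of why delegating resolution to a ResourceLoader is attractive here.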
Excerpt from my configuration file:
<int-file:outbound-gateway
request-channel="inputChannel"
reply-channel="updateTable"
directory-expression="
'${baseDirectory}'
+
T(java.text.MessageFormat).format('${dynamicPathPattern}', headers['Id'])
"
filename-generator-expression="headers.filename"
delete-source-files="true"/>
Where baseDirectory is a property that changes per environment, of the form 'file://hostname/some/path/'.
There's no particular reason that this is the case, it probably just wasn't considered at the time of implementation.
The request sounds reasonable to me and will benefit others (even though you have found a work-around), by providing simpler syntax. Please open an 'Improvement' JIRA issue; thanks.
While not directly answering the question, I wanted to post the workaround that I used.
In my XML configuration, I changed the directory-expression to evaluate to a File through the DefaultResourceLoader instead of a String.
So this is what my new configuration looked like:
<int-file:outbound-gateway
request-channel="inputChannel"
reply-channel="updateTable"
directory-expression=" new org.springframework.core.io.DefaultResourceLoader().getResource(
'${baseDirectory}'
+
T(java.text.MessageFormat).format('${dynamicPathPattern}', headers['Id'])).getFile()
"
filename-generator-expression="headers.filename"
delete-source-files="true"/>