I have started working on a Riak project via SpringSource.
According to their documentation, linking between objects and then link walking is very simple.
I am saving two objects, linking between them and then trying to retrieve the data:
MyPojo p1 = new MyPojo("o1", "m1");
MyPojo p2 = new MyPojo("o2", "m2");
riakManager.set(bucketName1, "k1", p1);
riakManager.set(bucketName2, "k2", p2);
riakManager.link(bucketName2, "k2", bucketName1, "k1", tagName);
System.out.println(riakManager.get(bucketName1, "k1"));
System.out.println(riakManager.linkWalk(bucketName1, "k1", "_"));
The problem is that after the link, the content of the source ("k1") is deleted; only the link stays. This is the printout:
null
[MyPojo [str1=o2, str2=m2, number=200]]
Any idea why the link operation deletes the value from the source?
If I try to set the source's value again after the link, then the link gets deleted...
Thanks,
Oved.
Riak requires that the link and the data are stored in one operation. You can't update one without the other (at the moment).
So any time you set the link, the operation must also write back the data. I don't know whether the Spring adapter takes that into account. I did see some messages between the Riak and Spring developers about this, but I don't know if anything has been fixed yet.
But in any case I am also tending towards using the native Riak Java client rather than Spring.
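To illustrate what "stored in one operation" means, here is a rough sketch against Riak's plain HTTP interface (no client library): a single PUT carries both the object's value and its Link header, so a later write that omits the header drops the link. The host, bucket names, key names and tag are taken from the example above and are otherwise placeholders.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class StoreValueAndLinkTogether {
    public static void main(String[] args) throws Exception {
        // One PUT writes the value AND the link; resend both on every update,
        // otherwise whichever part is missing gets overwritten.
        URL url = new URL("http://127.0.0.1:8098/riak/bucketName1/k1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Link", "</riak/bucketName2/k2>; riaktag=\"tagName\"");
        try (OutputStream out = conn.getOutputStream()) {
            out.write("{\"str1\":\"o1\",\"str2\":\"m1\"}".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Stored, HTTP " + conn.getResponseCode());
    }
}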
I have decided to abandon the Spring adapter; it doesn't seem to have good enough support.
I'm using the Riak Java client instead.
Does anyone think otherwise?
I am very new to DynamoDB. I have a lot of stale data in a table and want to delete it in a batch. What I have done so far is query the table using a GSI. Since there is not much relevant material on using batchWriteItem in Java, can someone please help me? A code example would be appreciated.
I have tried googling a lot and have read the AWS documentation for batchWriteItem, but it doesn't include any code examples.
You have to use addDeleteItem in the BatchWriteItemEnhancedRequest:
var builder = BatchWriteItemEnhancedRequest.builder();
items.forEach(item -> builder.addWriteBatch(
        WriteBatch.builder(ItemEntity.class)
                .mappedTableResource(table)   // the DynamoDbTable<ItemEntity> you are deleting from
                .addDeleteItem(item)          // or a Key built from the item's partition/sort key
                .build()));
enhancedClient.batchWriteItem(builder.build());
All of this is from the v2 SDK (the DynamoDB Enhanced Client).
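Note that a single BatchWriteItem call accepts at most 25 writes, so if the GSI query returned more than that, the deletes have to be chunked. A rough sketch under that constraint (ItemEntity is assumed to be your mapped bean class and table the DynamoDbTable you queried):
import java.util.List;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbTable;
import software.amazon.awssdk.enhanced.dynamodb.model.BatchWriteItemEnhancedRequest;
import software.amazon.awssdk.enhanced.dynamodb.model.WriteBatch;

public class StaleItemCleaner {

    // Deletes the already-queried items in chunks of 25, the maximum
    // number of writes DynamoDB accepts per BatchWriteItem request.
    static void deleteAll(DynamoDbEnhancedClient enhancedClient,
                          DynamoDbTable<ItemEntity> table,
                          List<ItemEntity> staleItems) {
        for (int i = 0; i < staleItems.size(); i += 25) {
            List<ItemEntity> chunk =
                    staleItems.subList(i, Math.min(i + 25, staleItems.size()));
            WriteBatch.Builder<ItemEntity> batch = WriteBatch.builder(ItemEntity.class)
                    .mappedTableResource(table);
            chunk.forEach(batch::addDeleteItem);
            enhancedClient.batchWriteItem(BatchWriteItemEnhancedRequest.builder()
                    .addWriteBatch(batch.build())
                    .build());
        }
    }
}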
I am looking for a solution using Maven and Java (not Spring) where I can upload all my keys, labels and flag values via JSON to deploy.
When I configure my project in Jenkins it should apply all the values that have changed.
Kindly provide me some directions; I have tried a lot, but there is little material on this topic.
I managed to work out a solution, basically following this Microsoft Azure link, although the link alone did not completely solve my problem. Below is the code snippet that solved it. The code is not tested or production-ready; it is just for reference.
public void process() {
    // Feature-flag payload: a JSON document in the App Configuration feature-flag format.
    String value = "{\"id\": \"test\", \"description\": \"Sample Feature\",\"enabled\": false,\"conditions\": { \"client_filters\": []}}";

    // Not used below; only needed if you authenticate with Azure AD instead of a connection string.
    DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();

    ConfigurationClient configurationClient = new ConfigurationClientBuilder()
            .connectionString(END_POINT) // connection string taken from the store's Access Keys
            .buildClient();

    final ConfigurationSetting configurationSetting = new ConfigurationSetting();
    // Feature flags live under a reserved key prefix; ".appconfig.abc" is the prefix from the portal.
    configurationSetting.setKey(format(".appconfig.abc/%s", "abc"));
    configurationSetting.setLabel("label");
    configurationSetting.setContentType("application/vnd.microsoft.appconfig.ff+json;charset=utf-8");
    configurationSetting.setValue(value);

    configurationClient.addConfigurationSettingWithResponse(configurationSetting, Context.NONE);
}
The key point here is ".appconfig.abc". At this point in time there is no direct API call for Feature Management, but we can add keys and labels to the configuration as in the code snippet above, using a key that starts with ".appconfig.abc"; you can get that prefix from the portal. The value should be a JSON object; how you build that JSON is really up to you.
Overall there is a lot of information spread around various sites, but none of it is joined up for Azure in the Java world. Maybe this is helpful to someone.
The endpoint/connection string can be obtained from the configuration store's Access Keys.
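For completeness, a minimal sketch of reading the flag back to verify the write; it assumes the same connection string, key prefix, and label as above, and getConfigurationSetting is part of the same ConfigurationClient:
void verify(ConfigurationClient configurationClient) {
    // Read the setting back using the same key and label that were written above.
    ConfigurationSetting stored = configurationClient.getConfigurationSetting(
            format(".appconfig.abc/%s", "abc"), "label");
    System.out.println(stored.getValue()); // should print the feature-flag JSON
}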
Background:
We have a 3-node Solr Cloud cluster that was migrated to Docker. It works as expected; however, newly inserted data can only be retrieved by id. Once we try to use filters, it doesn't show up. Note that old data can still be filtered without any issues.
The database is used via a Spring Boot CRUD-like application.
More background:
The app and Solr were migrated by another person, and I inherited the codebase recently, so I am not familiar with the implementation in much detail and am still digging and debugging.
The nodes were migrated as-is (the data was copied into a Docker mount).
What I have so far:
I have checked the logs of all the solr nodes and see the following happening when making the calls to the application:
Filtering:
2019-02-22 14:17:07.525 INFO (qtp15xxxxx-15) [c:content_api s:shard1 r:core_node1 x:content_api_shard1_replica0] o.a.s.c.S.Request
[content_api_shard1_replica0]
webapp=/solr path=/select
params=
{q=*:*&start=0&fq=id-lws-ttf:127103&fq=active-boo-ttf:(true)&fq=(publish-date-tda-ttf:[*+TO+2019-02-22T15:17:07Z]+OR+(*:*+NOT+publish-date-tda-ttf:[*+TO+*]))AND+(expiration-date-tda-ttf:[2019-02-22T15:17:07Z+TO+*]+OR+(*:*+NOT+expiration-date-tda-ttf:[*+TO+*]))&sort=create-date-tda-ttf+desc&rows=10&wt=javabin&version=2}
hits=0 status=0 QTime=37
Get by ID:
2019-02-22 14:16:56.441 INFO (qtp15xxxxxx-16) [c:content_api s:shard1 r:core_node1 x:content_api_shard1_replica0] o.a.s.c.S.Request
[content_api_shard1_replica0]
webapp=/solr path=/get params={ids=https://example.com/app/contents/127103/middle-east&wt=javabin&version=2}
status=0 QTime=0
Disclaimer:
I am an absolute beginner in working with Solr and am going through documentation ATM in order to get better insight into the nuts and bolts.
Assumptions and WIP:
The person who migrated it told me that only the data was copied, not the configuration. I have acquired the old config files (/opt/solr/server/solr/configsets/) and am trying to compare them to the new ones, but the assumption is that the configs were defaults.
The old version was 6.4.2 and the new one is 6.6.5 (not sure whether this could be the issue).
Is there something obvious that we are missing here? What is super confusing is the fact that the data can be retrieved by id AND that the OLD data can be filtered.
Update:
After some research, I have to say that I have ruled out a configuration issue, because when I inspect the configuration from the admin UI, I see the correct configuration.
Also, another weird behavior is that the data does become queryable after some time (more than 5 days). I can see this because I run the query from the UI ordered by descending creation date, and test documents that were not visible just days ago now show up.
Relevant commit config part:
<autoCommit>
<maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
<openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
<maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
More config output from the admin endpoint:
config:{
znodeVersion:0,
luceneMatchVersion:"org.apache.lucene.util.Version:6.0.1",
updateHandler:{
indexWriter:{
closeWaitsForMerges:true
},
commitWithin:{
softCommit:true
},
autoCommit:{
maxDocs:-1,
maxTime:15000,
openSearcher:false
},
autoSoftCommit:{
maxDocs:-1,
maxTime:-1
}
},
query:{
useFilterForSortedQuery:false,
queryResultWindowSize:20,
queryResultMaxDocsCached:200,
enableLazyFieldLoading:true,
maxBooleanClauses:1024,
filterCache:{
autowarmCount:"0",
size:"512",
initialSize:"512",
class:"solr.FastLRUCache",
name:"filterCache"
},
queryResultCache:{
autowarmCount:"0",
size:"512",
initialSize:"512",
class:"solr.LRUCache",
name:"queryResultCache"
},
documentCache:{
autowarmCount:"0",
size:"512",
initialSize:"512",
class:"solr.LRUCache",
name:"documentCache"
},
fieldValueCache:{
size:"10000",
showItems:"-1",
initialSize:"10",
name:"fieldValueCache"
}
},
...
According to your examples you're only retrieving the document when you're querying the realtime get endpoint - i.e. /get. This endpoint returns documents by querying by id, even if the document hasn't been committed to the index or a new searcher has been opened.
A new searcher has to be created before any changes to the index become visible to the regular search endpoints, since the old searcher will still use the old index files for searching. If a new searcher isn't created, the stale content will still be returned. This matches the behaviour you're seeing, where you're not opening any new searchers, and content becomes visible when the searcher is recycled for other reasons (possibly because of restarts/another explicit commit/merges/optimizes/etc.).
Your example configuration shows that autoSoftCommit is disabled, while the regular autoCommit is set not to open a new searcher (and thus no new content is shown). I usually recommend leaving autoSoftCommit disabled and instead relying on using commitWithin in the URL, as it allows greater configurability for different types of data and lets you ask for a new searcher to be opened within at most x seconds of data being added. The default behaviour for commitWithin is that a new searcher is opened after the commit has happened.
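For example, with SolrJ the application can pass commitWithin on the add call itself, so the document is guaranteed to become searchable within a bounded time; in this sketch the Solr URL and field are placeholders:
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class CommitWithinExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL pointing at the collection/core to index into.
        SolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/content_api").build();

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "example-1");

        // commitWithin = 15000 ms: Solr opens a new searcher within ~15 seconds,
        // making the document visible to /select without an explicit commit.
        solr.add(doc, 15000);

        solr.close();
    }
}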
Sounds like you might have switched to a default managed schema on upgrade. Look for schema.xml in your previous install, along with a <schemaFactory> section in your prior install's solrconfig.xml. More info at https://lucene.apache.org/solr/guide/6_6/schema-factory-definition-in-solrconfig.html#SchemaFactoryDefinitioninSolrConfig-SolrUsesManagedSchemabyDefault
Hi, I want to get all issues stored in JIRA from Java using JQL or any other way.
I tried to use this code:
for (String name : getProjectsNames()) {
    String jqlRequest = "project = \"" + name + "\"";
    SearchResult result = restClient.getSearchClient().searchJql(
            jqlRequest, 10000, 0, pm);
    final Iterable<BasicIssue> issues = result.getIssues();
    for (BasicIssue is : issues) {
        Issue issue = restClient.getIssueClient().getIssue(is.getKey(), pm);
        ...........
    }
}
It gives me the result, but it takes a very long time.
Is there a query, a REST API URL, or any other way to get all issues?
Please help me.
The JIRA REST API will give you all the info from each issue at a rate of a few issues/second. The Inquisitor add-on at https://marketplace.atlassian.com/plugins/com.citrix.jira.inquisitor will give you thousands of issues per second but only the standard JIRA fields.
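For illustration, a rough sketch (Java 11+) of paging through /rest/api/2/search, which returns each issue's fields in bulk so there is no separate getIssue round-trip per issue; the host, project key, credentials and page size are placeholders:
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JiraSearchPage {
    public static void main(String[] args) throws Exception {
        String jql = URLEncoder.encode("project = MYPROJ", StandardCharsets.UTF_8);
        String auth = Base64.getEncoder()
                .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/rest/api/2/search?jql=" + jql
                        + "&startAt=0&maxResults=100"))
                .header("Authorization", "Basic " + auth)
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response JSON contains "total" plus an "issues" array with fields inlined;
        // increase startAt by maxResults and repeat to page through everything.
        System.out.println(response.body());
    }
}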
There is one other way. There is a table in the JIRA database named "dbo.jiraissue". If you have access to that database, you can fetch the ids of all issues. After fetching this data you can send the REST request "localhost/rest/api/2/issue/issue_id" and get a JSON response. Of course you have to write some code for this, but this is one way I know to get all issues.
I would like to create a simple XMPP client in Java that shares its location (XEP-0080) with other clients.
I already know I can use the Smack library for XMPP and that it supports PEP, which is needed for XEP-0080.
Does anyone have an example of how to implement this, or any pointers? I can't find anything using Google.
Thanks in advance.
Kristof's right, the docs are sparse - but they are getting better. There is a good, albeit hard to find, set of docs on extensions though. The PubSub one is at http://www.igniterealtime.org/fisheye/browse/~raw,r=11613/svn-org/smack/trunk/documentation/extensions/pubsub.html.
After going the from-scratch custom IQProvider route with an extension, I found it was easier to use the managers as much as possible. The developers that wrote the managers have abstracted away a lot of the pain points.
Example (modified-for-geoloc version of one rcollier wrote on the Smack forum):
ConfigureForm form = new ConfigureForm(FormType.submit);
form.setPersistentItems(false);
form.setDeliverPayloads(true);
form.setAccessModel(AccessModel.open);
PubSubManager manager = new PubSubManager(connection, "pubsub.communitivity.com");
Node myNode = manager.createNode("http://jabber.org/protocol/geoloc", form);
StringBuilder body = new StringBuilder(); //ws for readability
body.append("<geoloc xmlns='http://jabber.org/protocol/geoloc' xml:lang='en'>");
body.append(" <country>Italy</country>");
body.append(" <lat>45.44</lat>");
body.append(" <locality>Venice</locality>");
body.append(" <lon>12.33</lon>");
body.append(" <accuracy>20</accuracy>");
body.append("</geoloc>");
SimplePayload payload = new SimplePayload(
"geoloc",
"http://jabber.org/protocol/geoloc",
body.toString());
String itemId = "zz234";
Item<SimplePayload> item = new Item<SimplePayload>(itemId, payload);
// Required to receive the events being published
myNode.addItemEventListener(myEventHandler);
// Publish item
myNode.publish(item);
Or at least that's the hard way :). Just remembered there's a PEPManager now...
PEPProvider pepProvider = new PEPProvider();
pepProvider.registerPEPParserExtension(
"http://jabber.org/protocol/tune", new TuneProvider());
ProviderManager.getInstance().addExtensionProvider(
"event",
"http://jabber.org/protocol/pubsub#event", pepProvider);
Tune tune = new Tune("jeff", "1", "CD", "My Title", "My Track");
pepManager.publish(tune);
You'd need to write the GeoLocProvider and GeoLoc classes.
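For reference, a minimal sketch of what such a GeoLoc extension class could look like against the older Smack 3.x PacketExtension API; only a couple of XEP-0080 fields are shown and all names here are illustrative:
import org.jivesoftware.smack.packet.PacketExtension;

public class GeoLoc implements PacketExtension {

    public static final String NAMESPACE = "http://jabber.org/protocol/geoloc";

    private final double lat;
    private final double lon;

    public GeoLoc(double lat, double lon) {
        this.lat = lat;
        this.lon = lon;
    }

    public String getElementName() { return "geoloc"; }

    public String getNamespace() { return NAMESPACE; }

    // Serialise the extension as a XEP-0080 payload.
    public String toXML() {
        return "<geoloc xmlns='" + NAMESPACE + "'>"
                + "<lat>" + lat + "</lat>"
                + "<lon>" + lon + "</lon>"
                + "</geoloc>";
    }
}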
I covered a pure PEP based approach as an alternative method in detail for Android here: https://stackoverflow.com/a/26719158/406920.
This will be very close to what you'd need to do with regular Smack.
Take a look at the existing code for implementations of other extensions. This will be your best example of how to develop with the current library. Unfortunately, there is no developers guide that I know of, so I just poked around to understand some of the basics myself until I felt comfortable with the environment. Hint: Use the providers extension facility to add custom providers for the extension specific stanzas.
You can ask questions on the developer forum for Smack, and contribute your code back to the project from here as well. If you produce an implementation of this extension, then you could potentially get commit privileges yourself if you want it.
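Following that hint, a rough sketch of a provider for the geoloc payload against the Smack 3.x provider API, paired with the GeoLoc class sketched earlier (all names illustrative):
import org.jivesoftware.smack.provider.PacketExtensionProvider;
import org.xmlpull.v1.XmlPullParser;

public class GeoLocProvider implements PacketExtensionProvider {

    // Parses a <geoloc/> payload into the GeoLoc extension sketched above.
    public GeoLoc parseExtension(XmlPullParser parser) throws Exception {
        double lat = 0, lon = 0;
        int event = parser.next();
        while (!(event == XmlPullParser.END_TAG && "geoloc".equals(parser.getName()))) {
            if (event == XmlPullParser.START_TAG) {
                if ("lat".equals(parser.getName())) {
                    lat = Double.parseDouble(parser.nextText());
                } else if ("lon".equals(parser.getName())) {
                    lon = Double.parseDouble(parser.nextText());
                }
            }
            event = parser.next();
        }
        return new GeoLoc(lat, lon);
    }
}
It would then be registered the same way as the pubsub event provider above, e.g. ProviderManager.getInstance().addExtensionProvider("geoloc", "http://jabber.org/protocol/geoloc", new GeoLocProvider()).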