How can I use a fetch plan with the Java API?

As stated here, if I define [*]in_*:-2 out_*:-2 as the fetch plan, the query should return only the properties and no info about the edges.
OrientGraph graph = new OrientGraph(URL, USER, USER);
try {
    Iterable resultList = graph.command(new OSQLSynchQuery("select from #11:0"))
            .setFetchPlan("[*]in_*:-2 out_*:-2").execute();
    OrientVertex user = (OrientVertex) resultList.iterator().next();
    for (String s : user.getRecord().fieldNames()) {
        System.out.println(s);
    }

    Iterable resultList2 = graph.command(new OSQLSynchQuery("select from #11:0")).execute();
    OrientVertex user2 = (OrientVertex) resultList2.iterator().next();
    for (String s : user2.getRecord().fieldNames()) {
        System.out.println(s);
    }
} finally {
    graph.shutdown();
}
I get the same output (which includes the edge info) with and without the fetch plan. What am I doing wrong?

Over the network protocol, the fetch plan is there to optimize network transfer, not to exclude connections. In the case above, the OrientDB client will still fetch the connected edges even if the fetch plan excludes them.
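If the goal is simply to print the plain properties without the connections, one client-side option (a sketch, assuming the default in_<label>/out_<label> edge field naming) is to skip those fields:

for (String s : user.getRecord().fieldNames()) {
    // Skip the edge-connection fields, named in_<label> / out_<label> by default
    if (!s.startsWith("in_") && !s.startsWith("out_")) {
        System.out.println(s);
    }
}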

Related

How many roundtrips are made to a MongoDB server when using transactions?

I wonder how many roundtrips are made to the server when using transactions in MongoDB. For example, if the Java driver is used like this:
ClientSession clientSession = client.startSession();
TransactionOptions txnOptions = TransactionOptions.builder()
        .readPreference(ReadPreference.primary())
        .readConcern(ReadConcern.LOCAL)
        .writeConcern(WriteConcern.MAJORITY)
        .build();
TransactionBody<String> txnBody = new TransactionBody<String>() {
    public String execute() {
        MongoCollection<Document> coll1 = client.getDatabase("mydb1").getCollection("foo");
        MongoCollection<Document> coll2 = client.getDatabase("mydb2").getCollection("bar");
        coll1.insertOne(clientSession, new Document("abc", 1));
        coll2.insertOne(clientSession, new Document("xyz", 999));
        return "Inserted into collections in different databases";
    }
};
try {
    clientSession.withTransaction(txnBody, txnOptions);
} catch (RuntimeException e) {
    // some error handling
} finally {
    clientSession.close();
}
In this case two documents are stored in a transaction:
coll1.insertOne(clientSession, new Document("abc", 1));
coll2.insertOne(clientSession, new Document("xyz", 999));
Are the "insert operations" stacked up and sent to the server in one roundtrip or are two calls (or more?) actually made to the server?
Each insert is sent separately. You can use bulk writes to batch write operations together.
The commit at the end is also a separate operation.
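As a sketch, the two inserts could be sent in one roundtrip with bulkWrite; note that a bulk write targets a single collection, so it would not cover the two-database example above as written:

MongoCollection<Document> coll = client.getDatabase("mydb1").getCollection("foo");
// Both inserts travel to the server as a single bulk write command
coll.bulkWrite(clientSession, Arrays.asList(
        new InsertOneModel<>(new Document("abc", 1)),
        new InsertOneModel<>(new Document("xyz", 999))));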

Cannot read Gremlin data from remote after writing

I use Java to connect to a "remote" (localhost:8182) Gremlin Server traversal source g this way:
traversalSource = traversal().withRemote(DriverRemoteConnection.using("localhost", 8182, "g"));
Then I write a node like this:
traversalSource.addV("TenantProfile");
From the Gremlin console, connected to the same Gremlin Server, I see all the created nodes and edges:
gremlin> g
==>graphtraversalsource[tinkergraph[vertices:42 edges:64], standard]
and queries work, but if I read the graph from Java it comes back empty, so a query like
traversalSource.V()
        .has("label", TENANT_PROFILE_LABEL)
        .has("fiscal id", "04228480408")
        .out(OWNS_LABEL)
        .has("type", "SH")
        .values("description")
        .toList();
returns an empty list.
Could anyone help me solve this mystery, please?
Thanks.
In reply to Stephen, I post the last instructions before iterate():
for (final Map<String, String> edgePropertyMap : edgePropertyTable) {
    edgeTraversal = traversalSource
            .V(vertices.get(edgePropertyMap.get(FROM_KEY)))
            .addE(edgeLabel)
            .to(vertices.get(edgePropertyMap.get(TO_KEY)));
    final Set<String> edgePropertyNames = edgePropertyMap.keySet();
    for (final String nodePropertyName : edgePropertyNames) {
        if (!nodePropertyName.equals(FROM_KEY) && !nodePropertyName.equals(TO_KEY)) {
            final String edgePropertyValue = edgePropertyMap.get(nodePropertyName);
            edgeTraversal = edgeTraversal.property(nodePropertyName, edgePropertyValue);
        }
    }
    edgeTraversal.as(edgePropertyMap.get(IDENTIFIER_KEY)).iterate();
}
Anyway, if no iterate() were present, how could the nodes and edges be visible from inside the console? How could they have been "finalized" on the remote server?
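For what it's worth, a sketch of the distinction (assuming the same traversalSource as above): with a remote traversal source, building a traversal sends nothing to the server by itself; only a terminal step submits it.

// Spawns a traversal object; nothing is sent to the server yet
GraphTraversal<Vertex, Vertex> t = traversalSource.addV("TenantProfile");
// A terminal step (iterate(), next(), toList(), ...) submits it to the remote server
t.iterate();
// Equivalent one-liner
traversalSource.addV("TenantProfile").iterate();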

Flexible search with parameters returns null value

I have to do this flexible search query in a Java service class:
select sum({oe:totalPrice})
from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk}
join OrderEntry as oe on {or.pk}={oe.order}}
where {or:versionID} is null and {or:orderType} in (8796093066999)
and {or:company} in (8796093710341)
and {or:pointOfSale} in (8796097413125)
and {oe:ecCode} in ('13','14')
and {or:yearSeason} in (8796093066981)
and {os:code} not in ('CANCELED', 'NOT_APPROVED')
When I perform this query in the hybris administration console I correctly obtain:
1164.00000000
In my Java service class I wrote this:
private BigDecimal findGroupedOrdersData(String total, String uncDisc, String orderPromo,
        Map<String, Object> queryParameters) {
    BigDecimal aggregatedValue = new BigDecimal(0);
    final StringBuilder queryBuilder = new StringBuilder();
    queryBuilder.append("select sum({oe:").append(total).append("})");
    queryBuilder.append(
            " from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk} join OrderEntry as oe on {or.pk}={oe.order}}");
    queryBuilder.append(" where {or:versionID} is null");
    if (queryParameters != null && !queryParameters.isEmpty()) {
        appendWhereClausesToBuilder(queryBuilder, queryParameters);
    }
    queryBuilder.append(" and {os:code} not in ('");
    queryBuilder.append(CustomerOrderStatus.CANCELED.getCode()).append("', ");
    queryBuilder.append("'").append(CustomerOrderStatus.NOT_APPROVED.getCode()).append("')");
    FlexibleSearchQuery query = new FlexibleSearchQuery(queryBuilder.toString(), queryParameters);
    List<BigDecimal> result = Lists.newArrayList();
    query.setResultClassList(Arrays.asList(BigDecimal.class));
    result = getFlexibleSearchService().<BigDecimal> search(query).getResult();
    if (!result.isEmpty() && result.get(0) != null) {
        aggregatedValue = result.get(0);
    }
    return aggregatedValue;
}

private void appendWhereClausesToBuilder(StringBuilder builder, Map<String, Object> params) {
    if ((params == null) || (params.isEmpty())) {
        return;
    }
    for (String paramName : params.keySet()) {
        builder.append(" and ");
        if (paramName.equalsIgnoreCase("exitCollection")) {
            builder.append("{oe:ecCode}").append(" in (?").append(paramName).append(")");
        } else {
            builder.append("{or:").append(paramName).append("}").append(" in (?").append(paramName).append(")");
        }
    }
}
The query string before the search(query).getResult() call is:
query: [select sum({oe:totalPrice}) from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk}
join OrderEntry as oe on {or.pk}={oe.order}} where {or:versionID} is null
and {or:orderType} in (?orderType) and {or:company} in (?company)
and {or:pointOfSale} in (?pointOfSale) and {oe:ecCode} in (?exitCollection)
and {or:yearSeason} in (?yearSeason) and {os:code} not in ('CANCELED', 'NOT_APPROVED')],
query parameters: [{orderType=OrderTypeModel (8796093230839),
pointOfSale=B2BUnitModel (8796097413125), company=CompanyModel (8796093710341),
exitCollection=[13, 14], yearSeason=YearSeasonModel (8796093066981)}]
but after search(query) the result is [null].
Why? Where am I wrong in the Java code? Thanks.
In addition, if you want to disable restrictions in your Java code, you can do it like this:
@Autowired
private SearchRestrictionService searchRestrictionService;

private BigDecimal findGroupedOrdersData(String total, String uncDisc, String orderPromo,
        Map<String, Object> queryParameters) {
    searchRestrictionService.disableSearchRestrictions();
    // your code here
    searchRestrictionService.enableSearchRestrictions();
    return aggregatedValue;
}
In the above code, you disable the search restrictions, run the search, and then enable them again.
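One caveat with that sketch: if the search throws, the restrictions stay disabled for the session, so a try/finally guard is safer:

searchRestrictionService.disableSearchRestrictions();
try {
    // run the flexible search here
} finally {
    // restrictions are re-enabled even if the search throws
    searchRestrictionService.enableSearchRestrictions();
}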
OR
You can also use sessionService to execute the flexible search query in a local view. The method executeInLocalView executes code within an isolated session:
(SearchResult<? extends ItemModel>) sessionService.executeInLocalView(new SessionExecutionBody()
{
    @Override
    public Object execute()
    {
        sessionService.setAttribute(FlexibleSearch.DISABLE_RESTRICTIONS, Boolean.TRUE);
        return flexibleSearchService.search(query);
    }
});
Here you set DISABLE_RESTRICTIONS = true, which runs the query in the admin context (without restrictions).
Better, I would suggest you check which restriction is actually being applied to your item type. You can simply check it in Backoffice/HMC.
Backoffice:
Go to System -> Personalization (SearchRestriction)
Search by Restricted Type
Check the Filter Query and analyse your item data based on that.
You can also check the Principal (UserGroup) the restriction is applied to.
To confirm, just check again after disabling the active flag.
The query in the code is (most likely) not running under the admin user, and in that case different search restrictions are applied to it.
You can see that the original query is changed:
start DB logging (/hac -> Monitoring -> Database -> JDBC logging);
run the query from the code;
stop DB logging and check the log file.
More information: https://wiki.hybris.com/display/release5/Restrictions
In the /hac console the admin user is usually used, and because of this restrictions are not applied there.
As the statement looks OK to me, I'm going to go with visibility of the data. Are you able to see all the items as whatever user you are running the query as? In the /hac you would be admin, obviously.

Find all the attached volumes for an EC2 instance

I'm using the code below to get all the available volumes in EC2, but I can't find any EC2 API to get the volumes already attached to an instance. Please let me know how to get all attached volumes using an instanceId.
EC2Api ec2Api = computeServiceContext.unwrapApi(EC2Api.class);
List<String> volumeLists = new ArrayList<String>();
if (null != volumeId) {
    volumeLists.add(volumeId);
}
String[] volumeIds = volumeLists.toArray(new String[0]);
LOG.info("the volume IDs got from user is ::" + Arrays.toString(volumeIds));
Set<Volume> ec2Volumes = ec2Api.getElasticBlockStoreApi().get()
        .describeVolumesInRegion(region, volumeIds);
Set<Volume> availableVolumes = Sets.newHashSet();
for (Volume volume : ec2Volumes) {
    if (volume.getSnapshotId() == null
            && volume.getStatus() == Volume.Status.AVAILABLE) {
        LOG.debug("available volume with no snapshots ::" + volume.getId());
        availableVolumes.add(volume);
    }
}
The AWS Java SDK now provides a method to get all the block device mappings for an instance. You can use that to get a list of all the attached volumes:
// First get the EC2 instance from the id
DescribeInstancesRequest describeInstancesRequest = new DescribeInstancesRequest().withInstanceIds(instanceId);
DescribeInstancesResult describeInstancesResult = ec2.describeInstances(describeInstancesRequest);
Instance instance = describeInstancesResult.getReservations().get(0).getInstances().get(0);

// Then get the mappings
List<InstanceBlockDeviceMapping> mappingList = instance.getBlockDeviceMappings();
for (InstanceBlockDeviceMapping mapping : mappingList) {
    System.out.println(mapping.getEbs().getVolumeId());
}
You can filter the output of the EC2 DescribeVolumes API call. There are various attachment.* filters available; the one you want filters by attached instance ID. Try the following code:
Multimap<String, String> filter = ArrayListMultimap.create();
filter.put("attachment.instance-id", instanceId);
filter.put("attachment.status", "attached");
Set<Volume> volumes = ec2Api.getElasticBlockStoreApi().get()
        .describeVolumesInRegionWithFilter(region, volumeIds, filter);
The filter is a Multimap with the keys and values you want to filter on. You can actually specify the same filter multiple times, for example to get all volumes attached to a number of different instances.
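For example, a sketch with two hypothetical instance IDs (repeating a filter key ORs its values together):

Multimap<String, String> filter = ArrayListMultimap.create();
// Matches volumes attached to either instance
filter.put("attachment.instance-id", instanceIdA);
filter.put("attachment.instance-id", instanceIdB);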
You can use volumeAttachmentApi.listAttachmentsOnServer() to do this.
NovaApi novaApi = context.unwrapApi(NovaApi.class);
VolumeApi volumeApi = novaApi.getVolumeExtensionForZone(region).get();
VolumeAttachmentApi volumeAttachmentApi = novaApi.getVolumeAttachmentExtensionForZone(region).get();
volumeAttachmentApi.listAttachmentsOnServer(serverId);
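To get the volume IDs out of the result, a sketch assuming jclouds' VolumeAttachment type (note this is the OpenStack Nova extension, not the EC2 API):

for (VolumeAttachment attachment : volumeAttachmentApi.listAttachmentsOnServer(serverId)) {
    // Each attachment exposes the id of the attached volume
    System.out.println(attachment.getVolumeId());
}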

MongoDB self-join query with one collection

I'd like to do something like
SELECT e1.sender
FROM email as e1, email as e2
WHERE e1.sender = e2.receiver;
but in MongoDB. I found many forum posts about JOINs, which can be implemented via MapReduce in MongoDB, but I don't understand how to do it for this self-join example.
I was thinking about something like this:
var map1 = function() {
    var output = {
        sender: db.collectionSender.email,
        receiver: db.collectionReceiver.findOne({email: db.collectionSender.email}).email
    };
    emit(this.email, output);
};

var reduce1 = function(key, values) {
    var outs = {sender: null, receiver: null};
    values.forEach(function(v) {
        if (outs.sender == null) {
            outs.sender = v.sender;
        }
        if (outs.receiver == null) {
            outs.receiver = v.receiver;
        }
    });
    return outs;
};

db.email.mapReduce(map1, reduce1, {out: 'rec_send_email'})
to create 2 new collections - collectionReceiver containing only the receiver emails and collectionSender containing only the sender emails
OR
var map2 = function() {
    var output = {
        sender: this.sender,
        receiver: db.email.findOne({receiver: this.sender})
    };
    emit(this.sender, output);
};

var reduce2 = function(key, values) {
    var outs = {sender: null, receiver: null};
    values.forEach(function(v) {
        if (outs.sender == null) {
            outs.sender = v.sender;
        }
        if (outs.receiver == null) {
            outs.receiver = v.receiver;
        }
    });
    return outs;
};

db.email.mapReduce(map2, reduce2, {out: 'rec_send_email'})
but neither of them works, and I don't understand this MapReduce thing well. Could somebody explain it to me, please? I was inspired by this article: http://tebros.com/2011/07/using-mongodb-mapreduce-to-join-2-collections/
Additionally, I need to write it in Java. Is there any way to solve it?
If you need to implement a "self-join" when using MongoDB then you may have structured your schema incorrectly (or sub-optimally).
In MongoDB (and noSQL in general) the schema structure should reflect the queries you will need to run against them.
It looks like you are assuming a collection of emails where each document has one sender and one receiver, and now you want to find all senders who also happen to be receivers of email? The only way to do this would be via two simple queries, not via map/reduce (which would be far more complex and unnecessary, and the way you've written them they wouldn't work anyway, as you can't query from within the map function).
You are writing in Java - why not make two queries: the first to get all unique senders, and the second to find all unique receivers who are also in the list of senders?
In the shell it would be:
var senderList = db.email.distinct("sender");
var receiverList = db.email.distinct("receiver", {"receiver":{$in:senderList}})
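In Java, a minimal sketch of the same two queries with the MongoDB Java driver (the database name mydb is an assumption here):

MongoCollection<Document> email = client.getDatabase("mydb").getCollection("email");

// 1. All unique senders
List<String> senderList = email.distinct("sender", String.class)
        .into(new ArrayList<String>());

// 2. Unique receivers who also appear in the sender list
List<String> receiverList = email
        .distinct("receiver", Filters.in("receiver", senderList), String.class)
        .into(new ArrayList<String>());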
