I am trying to construct a DTO to ferry data from the data layer to the view layer.
The data is as follows:
There are 7 days (the dates can be used as keys in a Map or any other data structure).
Each individual date will contain multiple records.
Each record contains contact details obtained from multiple tables.
One record needs to be constructed from 3 rows in the result table, i.e.:
A record may come back as three rows with the same values for all columns except the user details, which contain fields like id, name, and designation.
When displaying, I need to show the Manager's and the Assistant Manager's names in the same row.
Data Layer
T01 25/12/2012 ABC XYZ Manager
T01 25/12/2012 ABC IJK Asst.manager
Display:
Date 1
TaskID TaskDeadline TaskGivenBy TaskAssignedTo(Manager) TaskAssignedTo(Asst.Manager)
T01 25/12/2012 ABC XYZ IJK
T02 1/1/2013 BCE WUV MNO
Solution I tried:
Map<Date, Map<Position, Object>>
Map<25/12/2012, Map<(Manager, Object details), (Asst Manager, Object details)>>
and then repeat this for each date. But I think I am storing duplicate data, so I don't believe this is an ideal solution.
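One way to avoid the duplication is to collapse the three result rows into a single record object and group those records by date. Below is a minimal sketch of such a DTO; the class and field names are assumptions, not taken from the original code:

import java.time.LocalDate;

// One display row = one TaskRecord; the shared columns are stored once and the
// per-position user details become separate fields instead of separate rows.
public class TaskRecord {
    private String taskId;        // e.g. "T01"
    private LocalDate deadline;   // e.g. 2012-12-25
    private String givenBy;       // e.g. "ABC"
    private String manager;       // e.g. "XYZ"
    private String asstManager;   // e.g. "IJK"
    // constructor, getters and setters omitted for brevity
}

The data layer can then hand the view layer a Map<LocalDate, List<TaskRecord>>, so each date maps to exactly the rows to display and the shared columns are not stored twice.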
I hope you are well!
My database looks like below:
/ CLASSROOM1 [collection] ----> DATAS [document] ----> USERS [collection] ----> userId [document] ----> [user data (field)]
/ CLASSROOM2 [collection] ----> DATAS [document] ----> USERS [collection] ----> userId [document] ----> [user data (field)]
/ CLASSROOM3 [collection] ----> DATAS [document] ----> USERS [collection] ----> userId [document] ----> [user data (field)]
I would like to know if it is possible to search the entire Firestore database (CLASSROOM1, CLASSROOM2, CLASSROOM3, ...) and find a match for the "matricule" entered by the user.
The goal here is to find the "matricule" field inside the "userId" document and read its value, so as to know which user it belongs to.
Thank you. I am just getting started with Firestore.
I don't think this would be efficient. As @Doug mentioned above, each collection will require its own query, which would of course be costly.
The best way to do this would be to create a separate collection that uses the matricule as the document ID and stores the user ID in each document. Then when you want to fetch the data, query this new collection to get the user ID. Once you know what the user ID is, you can fetch anything regarding that user in your Firestore database.
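As an illustration of that approach, here is a minimal Android-SDK sketch; the "matricules" collection name and the "userId" field are assumptions used only for this example:

import com.google.firebase.firestore.FirebaseFirestore;

void findUserIdByMatricule(String matricule) {
    FirebaseFirestore db = FirebaseFirestore.getInstance();
    db.collection("matricules")
      .document(matricule)                // one lookup document per matricule
      .get()
      .addOnSuccessListener(snapshot -> {
          if (snapshot.exists()) {
              String userId = snapshot.getString("userId");
              // ...now fetch anything about this user elsewhere in Firestore
          }
      });
}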
It's not possible to perform a single query across multiple collections with different names. Each collection will require its own query, and you will have to merge the results from those queries on the client.
The only type of query that can use multiple collections is a collection group query, which requires that all of the subcollections have the same name.
I assume you have userId as input and want the corresponding matricule. Add "userId" as a field; then you can do a collection group query on the "USERS" collections:
collectionGroup("USERS").where("userId", "==", userId)
The problem is as follows: I have to write a Spring controller that takes a parameter (/getDocumentInfo?uniqueDocNo=100), but I can't get all the data because I can't make a proper join between the tables.
The first table, Document, contains data about a document, and the Email table contains data about who the document was sent to and what was in the email.
The first table contains the parameter uniqueDocNo and another column that I need, but the second table contains uniqueDocNo only as part of a concatenated field. I tried to write some mappings myself, but I'm not sure how I can get all my data into the controller with only one parameter (uniqueDocNo), or whether it is even possible.
Tables: Document(uniqueDocNo, seenYN, docName, crDate),
Email(emailNo, emailDate, emailTo, emailSubject, internNo).
The primary keys are uniqueDocNo and emailNo, and internNo is the concatenated field. For example: uniqueDocNo = 100, internNo = 100_other_numbers_i_dont_need.
I expect to have a mapper or something that can fetch the data from both tables for a given uniqueDocNo, more precisely: uniqueDocNo and seenYN (from the first table), plus all the data from the second table that matches the uniqueDocNo.
Thanks in advance.
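A join in the usual JPA sense isn't possible here because the link lives inside the concatenated internNo, but a prefix match can work. Below is a minimal Spring Data JPA sketch, assuming entities named Document and Email mapped to the two tables and that internNo always starts with "<uniqueDocNo>_"; the repository, controller, and DocumentInfo names are assumptions:

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Derived query: translates to "... WHERE intern_no LIKE :prefix%".
interface EmailRepository extends JpaRepository<Email, Long> {
    List<Email> findByInternNoStartingWith(String prefix);
}

@RestController
class DocumentController {
    private final DocumentRepository documentRepository;   // assumed to exist, keyed by uniqueDocNo
    private final EmailRepository emailRepository;

    DocumentController(DocumentRepository documentRepository, EmailRepository emailRepository) {
        this.documentRepository = documentRepository;
        this.emailRepository = emailRepository;
    }

    @GetMapping("/getDocumentInfo")
    DocumentInfo getDocumentInfo(@RequestParam String uniqueDocNo) {
        Document doc = documentRepository.findById(uniqueDocNo).orElseThrow();
        // Emails are linked only through the "<uniqueDocNo>_..." prefix of internNo.
        List<Email> emails = emailRepository.findByInternNoStartingWith(uniqueDocNo + "_");
        return new DocumentInfo(doc.getUniqueDocNo(), doc.getSeenYN(), emails);
    }
}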
I am trying to set up a new cache configuration with new DB tables. Following are the sample table structure and sample data.
TABLE_DEPT:
Dept Id Dept Name Dept Dtls
111 SALES A1
112 MARKET A2
TABLE_EMP:
Emp Id Emp Name Dept Id Working Started Working Ended
1 ABEmp 111 01-01-2017 02-02-2017
2 CDEmp 112 01-01-2017 03-12-2017
3 EFEmp 113 01-01-2017 03-12-2017
1 ABEmp 115 03-02-2017 03-12-2017
If I want to load the data into the cache keyed by Dept Id, I will have a unique dept Id with a list of emp Ids and their details.
If I want to search by dept Id in the Ehcache configuration, I can simply make "dept id" a search attribute.
But if I want to search by emp Id, I should get the list of departments the employee has worked in.
What should my Ehcache design be?
My Java bean/POJO looks like below:
class DeptDtls {
    private int deptId;
    private String deptName;
    private List<Integer> empIdList;
    // setters & getters
}
Cache: I want to put deptId as the key and the whole DeptDtls object as the value.
In this case, how can I allow a search operation based on empId?
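One option, since Ehcache itself is a plain key-value store, is to maintain a second cache as a reverse index (empId -> list of deptIds) that is updated whenever a DeptDtls is put. The sketch below uses plain maps to stand in for the two caches (the Ehcache put/get calls would be analogous); the class and method names are assumptions:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class DeptCacheIndex {
    private final Map<Integer, DeptDtls> deptCache = new HashMap<>();        // deptId -> DeptDtls
    private final Map<Integer, List<Integer>> empToDepts = new HashMap<>();  // empId  -> deptIds

    void put(DeptDtls dept) {
        deptCache.put(dept.getDeptId(), dept);
        // Keep the reverse index in sync so lookups by empId stay consistent.
        for (Integer empId : dept.getEmpIdList()) {
            empToDepts.computeIfAbsent(empId, k -> new ArrayList<>()).add(dept.getDeptId());
        }
    }

    DeptDtls byDeptId(int deptId) {
        return deptCache.get(deptId);
    }

    List<Integer> deptsForEmp(int empId) {
        return empToDepts.getOrDefault(empId, new ArrayList<>());
    }
}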
Ehcache provides in-memory data storage only, but in your case search is also required on top of the cache. I had the same scenario and used Elasticsearch for caching the data and searching over it. Elasticsearch has built-in support for search, the stored data can be indexed, and it can be configured to act as a cache.
I have an ADF View Object called SalesVO that I need to save to the database. In that View Object I have a foreign key (CustomerId) that is associated with another View Object called CustomersVO.
I dragged SalesVO onto a JSPX page. The user should enter the sales information on the page and then click save (commit).
The problem is that while the user is filling in the sales info, he should select a customer from an LOV; once he selects a customer, the page should automatically display the customer info (address, phone, etc.) from CustomersVO.
How can I display this read-only info for the selected customer?
The structure is as follows:
SalesVO : saleId, SaleDate, CustomerId ..... etc
CustomerVO: CustomerId, CustomerName, Phone, Address....etc
Are you saying that you basically want to join the Sales info with the Customer info, let the user select the Customer based on the Customer ID in the Sales VO, and then, when that customer is selected, display the retrieved customer data alongside the Sales VO, with only the Sales VO info being saved?
If so, this is a simple, basic thing to do in ADF BC. Have you looked at the many tutorials available online? Have you read the docs on how to create an LOV or a reference entity usage, or seen this or this?
The SalesVO is based on the Sales EO and is updatable. Add the CustomersVO as non-updatable, reference-only data, and then make the Customer ID in the Sales VO the basis for the LOV. The selected customer data will appear in the LOV, the selected Customer ID will be saved as the foreign key in the Sales VO, and the extra customer data provides non-updatable reference values. The steps to do this are widely available in many blog posts; Google is your friend here. Also consider reading the ADF BC docs on LOVs, reference entities, and joins of VOs.
I have a dataset similar to Twitter's graph. The data is in the following form:
<user-id1> <List of ids which he follows separated by spaces>
<user-id2> <List of ids which he follows separated by spaces>
...
I want to model this in the form of a unidirectional graph, expressed in Cypher syntax as:
(A:Follower)-[:FOLLOWS]->(B:Followee)
The same user can appear more than once in the dataset, as he might be in the friend list of more than one person, and he might also have his own friend list as part of the dataset. The challenge here is to make sure that there are no duplicate nodes for any user. And if a user appears as both a Follower and a Followee in the dataset, then the node should carry both labels, i.e., Follower:Followee. There are about 980k nodes in the graph and the size of the dataset is 1.4 GB.
I am not sure if Cypher's LOAD CSV will work here because each line of the dataset has a variable number of columns, making it impossible to write a query that generates the nodes for each of the columns. So what would be the best way to import this data into Neo4j without creating any duplicates?
I actually did exactly the same thing for the Friendster dataset, which has almost the same format as yours.
There the separator between the user and their friend list was ":".
The queries I used there are these:
create index on :User(id);
USING PERIODIC COMMIT 1000
LOAD CSV FROM "file:///home/michael/import/friendster/friends-000______.txt" as line FIELDTERMINATOR ":"
MERGE (u1:User {id:line[0]})
;
USING PERIODIC COMMIT 1000
LOAD CSV FROM "file:///home/michael/import/friendster/friends-000______.txt" as line FIELDTERMINATOR ":"
WITH line[1] as id2
WHERE id2 <> '' AND id2 <> 'private' AND id2 <> 'notfound'
UNWIND split(id2,",") as id
WITH distinct id
MERGE (:User {id:id})
;
USING PERIODIC COMMIT 1000
LOAD CSV FROM "file:///home/michael/import/friendster/friends-000______.txt" as line FIELDTERMINATOR ":"
WITH line[0] as id1, line[1] as id2
WHERE id2 <> '' AND id2 <> 'private' AND id2 <> 'notfound'
MATCH (u1:User {id:id1})
UNWIND split(id2,",") as id
MATCH (u2:User {id:id})
CREATE (u1)-[:FRIEND_OF]->(u2)
;
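If LOAD CSV is awkward for the space-separated file, the same MERGE pattern can also be driven from the Neo4j Java driver, reading the file line by line. A minimal sketch, assuming the single :User label used above, a local bolt endpoint, and a file named followers.txt (create the :User(id) index first, as in the first query above):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import java.util.List;
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;
import org.neo4j.driver.Values;

public class FollowerImport {
    public static void main(String[] args) throws Exception {
        // MERGE keeps the import idempotent: no duplicate nodes or relationships.
        String query =
            "MERGE (a:User {id: $follower}) " +
            "WITH a UNWIND $followees AS fid " +
            "MERGE (b:User {id: fid}) " +
            "MERGE (a)-[:FOLLOWS]->(b)";

        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                     AuthTokens.basic("neo4j", "password"));
             Session session = driver.session();
             BufferedReader reader = new BufferedReader(new FileReader("followers.txt"))) {

            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.trim().split("\\s+");   // variable number of columns per line
                if (parts.length < 2) continue;               // skip users with an empty follow list
                String follower = parts[0];
                List<String> followees = Arrays.asList(parts).subList(1, parts.length);
                session.run(query, Values.parameters("follower", follower, "followees", followees));
            }
        }
    }
}

For 980k nodes a per-line query will be slow, so batching several lines per transaction (or sticking with LOAD CSV) is worth considering.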