I am developing an app for ordering products online. I will first present part of the ER diagram and then explain.
Here the "products" are food items and they will go inside the fresh_products table. As you can see a fresh_product consists of product species, category, type, size and product grade.
Then when I place an order, the order id and order_status will be saved in order table. All the ordered items will be saved in the order_item table. In order_item table you can see I have a direct connection with the fresh_products.
At certain times, the management will decide to change the existing fresh_products. For an example, lets take fish products such as Tuna. This fish item has a category called Loin(fished with Loin), after sometime management decide to remove it or rename it because they no longer fish with Loin.
Now, my design will be affected because of the above change. Why? When you make a change to any of the fresh_product fields, it will directly affect the order_item which holds the information of already ordered items. So if you rename Loin , all order_items which has Loin will now be linked to the new name which is wrong. If you decide to delete a fresh_product you can't do that either, because there are existing order history bound to that fresh_product.
My Suggestion
As a solution to this problem, I am thinking of removing the relationship between fresh_products and order_item. Then I will add String fields to order_item representing all fields of fresh_products, for example productType, productCategory, and so on.
Now I don't have a direct connection with fresh_products, but I have all the information needed. At the same time, any fresh_product item can undergo any change without affecting already purchased items in order_item.
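In JPA terms, the denormalized order_item I have in mind would look roughly like this (a sketch only; the attribute names mirror the fresh_product fields described above and are illustrative):
@Entity
public class OrderItem {
    @Id
    @GeneratedValue
    private Long id;

    // Snapshot of the fresh_product's attributes at purchase time,
    // copied as plain strings with no foreign key back to fresh_products.
    private String productSpecies;
    private String productCategory;
    private String productType;
    private String productSize;
    private String productGrade;
}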
My question is: is my suggestion the best way to solve this issue? If you have better solutions, I am open to them.
Consider the following; in this example, only the product name changes, but you could easily add columns for each attribute (although at some point this would move from the sublime to the ridiculous)...
Also, I rarely use correlated subqueries, so apologies if there's an error there...
create table product_history
(product_id int not null
,product_name varchar(12) not null
,date date not null
,primary key (product_id,date)
);
insert into product_history values
(1,'soda','2020-01-01'),
(1,'pop','2020-01-04'),
(1,'cola','2020-01-07');
create table order_detail
(order_id int not null
,product_id int not null
,quantity int not null default 1
,primary key(order_id,product_id)
);
insert into order_detail values
(100,1,4);
create table orders
(order_id serial primary key
,customer_id int not null
,order_date date not null
);
insert into orders values
(100,22,'2020-01-05');
SELECT o.order_id
, o.customer_id
, o.order_date
, ph.product_id
, ph.product_name
, od.quantity
FROM orders o
JOIN order_detail od
ON od.order_id = o.order_id
JOIN product_history ph
ON ph.product_id = od.product_id
WHERE ph.date IN
( SELECT MAX(x.date)
FROM product_history x
WHERE x.product_id = ph.product_id
AND x.date <= o.order_date
);
| order_id | customer_id | order_date | product_id | product_name | quantity |
| -------- | ----------- | ---------- | ---------- | ------------ | -------- |
| 100 | 22 | 2020-01-05 | 1 | pop | 4 |
The order was placed on 2020-01-05, so the query picks the name in effect on that date: 'pop', which became effective 2020-01-04.
I have successfully called a stored procedure ("Users_info") from my Java app, which returns a table containing two columns: "user" and "status".
CallableStatement proc_stmt = con.prepareCall("{call Users_info (?)}");
proc_stmt.setInt(1, 1);
ResultSet rs = proc_stmt.executeQuery();
I managed to get all the user names and their respective statuses ("0" for inactive and "1" for active) into ArrayLists using ResultSet.getString.
ArrayList<String> userName = new ArrayList<String>();
ArrayList<String> userStatus = new ArrayList<String>();
while (rs.next()) {
    userName.add(rs.getString("user"));
    userStatus.add(rs.getString("status")); // column name per the procedure's result set
}
Database shows:
user | status|
------------|--------
Joshua H. | 1 |
Mark J. | 0 |
Jonathan R. | 1 |
Angie I. | 0 |
Doris S. | 0 |
I want to find the best or most efficient way to separate the active users from the inactive users.
I can do this by comparing values using if statements, but because of my limited knowledge of Java programming I suspect that is not the most efficient way, and maybe there is a better way that you can help me understand.
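For clarity, here is roughly what I mean: a minimal sketch (assuming the two-column result set above) that splits the users into two lists while reading the result set:
ArrayList<String> activeUsers = new ArrayList<String>();
ArrayList<String> inactiveUsers = new ArrayList<String>();
while (rs.next()) {
    String user = rs.getString("user");
    // "1" marks an active user, "0" an inactive one
    if ("1".equals(rs.getString("status"))) {
        activeUsers.add(user);
    } else {
        inactiveUsers.add(user);
    }
}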
Thanks.
I know this sounds like a simple question, but for some reason I can't find a clear answer online or through StackOverflow.
I have a DynamoDB table named "ABC". The primary key is "ID" (a String), and one of the other attributes is "Name" (a String). How can I delete an item from this table using Java?
AmazonDynamoDBClient dynamoDB;
.
.
.
// This attempt fails: no key is specified, only a condition expression.
DeleteItemRequest dir = new DeleteItemRequest();
dir.withConditionExpression("ID = 214141").withTableName("ABC");
DeleteItemResult deleteResult = dynamoDB.deleteItem(dir);
I get a validation exception:
Exception in thread "main" com.amazonaws.AmazonServiceException: 1 validation error detected: Value null at 'key' failed to satisfy constraint: Member must not be null (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: RQ70OIGOQAJ9MRGSUA0UIJLRUNVV4KQNSO5AEMVJF66Q9ASUAAJG)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1160)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:748)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:467)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:302)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:3240)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.deleteItem(AmazonDynamoDBClient.java:972)
at DynamoDBUploader.deleteItems(DynamoDBUploader.java:168)
at Main.main(Main.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
If I need to know the hash key in order to delete an item from a DynamoDB table, I think I may need to redesign my database to delete items efficiently.
My table looks like this:
ID | Name         | Date       | Value
-----------------------------------
1  | TransactionA | 2015-06-21 | 30
2  | TransactionB | 2015-06-21 | 40
3  | TransactionC | 2015-06-21 | 50
If that is the case, ahh... I think I need to re-design my database table.
Basically, I would like to easily delete all transactions with Date "2015-06-21". How can I do this simply and quickly without having to deal with the Hash Key ID?
AWS DynamoDB knows which attribute is the hash key of your table; you just need to specify the key value of the item to delete. DeleteItemRequest has a fluent API for that. With the v2 client (the one in your stack trace), the key is a map from attribute name to AttributeValue:
Map<String, AttributeValue> keyToDelete = new HashMap<String, AttributeValue>();
keyToDelete.put("ID", new AttributeValue("214141"));

DeleteItemRequest dir = new DeleteItemRequest()
        .withTableName("ABC")
        .withKey(keyToDelete);
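If the table also had a range (sort) key, it would have to be put into the same key map, as the Kotlin example below does for its account_id/message_id pair.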
For Kotlin:
I have a table with:
Partition key: account_id (String)
Sort key: message_id (String)
To delete an item from DynamoDB I do the following:
fun deleteMessageById(messageId: String, accountId: String) {
    val item = HashMap<String, AttributeValue>()
    item["account_id"] = AttributeValue(accountId)
    item["message_id"] = AttributeValue(messageId)
    val deleteRequest = DeleteItemRequest().withTableName(tableName).withKey(item)
    dynamoConfiguration.amazonDynamoDB().deleteItem(deleteRequest)
}
I have a db table Users with two columns: ID (AI) and NAME (UNIQUE).
When I add a new record to the db, everything is OK; this record gets ID = 1.
When I try to add a record with an existing name, I get this error:
com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry
When I then add the next record with valid data, it gets ID = 3 (the failed insert consumed ID = 2).
Is there any solution to avoid this? I want to have IDs like this:
1 | 2 | 3
not
1 | 3 | 5 etc.
Or maybe should I first check whether the name exists? Which option is better?
My code:
public boolean save() {
    hibernate.beginTransaction();
    hibernate.save(userObject);
    hibernate.getTransaction().commit();
    hibernate.close();
    return true;
}
There are different ID generation strategies. If the ID is generated from a sequence, the sequence's last number is incremented immediately whenever the "next value" is requested, even if the insert then fails. The problem you describe won't occur if the ID is generated from a table.
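A minimal sketch of table-based generation in JPA terms (the generator and table names here are illustrative, not taken from the original code):
@Entity
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "user_gen")
    @TableGenerator(name = "user_gen", table = "id_generator", allocationSize = 1)
    private Long id;

    // The UNIQUE constraint that triggers the duplicate-entry error
    @Column(unique = true)
    private String name;
}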
I'm having a big problem with the embedded Firebird database engine and Java SE.
I'm currently developing a filtering tool for users to filter out data.
So I have made two options for filtering; the user can choose one or both:
Filter out from a blacklist (the blacklist is controlled by the user).
Filter out according to a massive list that records every record ever uploaded and filtered out.
The data the user uploads is plain text, comma- or token-separated, like this:
(SET OF COLUMNS)| RECORD TO FILTER |
0-MANY COLUMNS | ABC2 |
0-MANY COLUMNS | ABC5 |
When I upload it to the DB, I add a flag for every filter:
(SET OF COLUMNS) | RECORD TO FILTER | FLAG FOR FILTER A | FLAG FOR FILTER B |
0-MANY COLUMNS | ABC2 | | |
0-MANY COLUMNS | ABC5 | | |
So, when it comes to the second filter: on the first run of the software the program has an empty main table, which it then fills with all the records from the very first upload.
The main table will have unique records, like the following table after a few text uploads made by the user:
Record | Date criteria for filtering |
ABC1 | 08/11/2012:1,07/11/2012:3,06/11/2012:5|
ABC2 | 05/11/2012:1,04/11/2012:0,03/11/2012:0|
ABC3 | 12/11/2012:3,11/11/2012:0,10/11/2012:0|
ABC4 | 12/11/2012:1,11/11/2012:0,10/11/2012:0|
ABC5 | 12/11/2012:3,11/11/2012:0,10/11/2012:3|
ABC9 | 11/11/2012:3,10/11/2012:1,09/11/2012:0|
When the data is processed, the software updates both the main table and the user table. For example:
(SET OF COLUMNS)| RECORD TO FILTER | FLAG FOR FILTER A | FLAG FOR FILTER B |
0-MANY COLUMNS | ABC4 | | |
0-MANY COLUMNS | ABC9 | | |
So the main table will be updated:
Record | Date criteria for filtering |
ABC1 | 08/11/2012:1,07/11/2012:3,06/11/2012:5|
ABC2 | 05/11/2012:1,04/11/2012:0,03/11/2012:0|
ABC3 | 12/11/2012:3,11/11/2012:0,10/11/2012:0|
ABC4 | 12/11/2012:1,11/11/2012:0,10/11/2012:0| ->12/11/2012:2,11/11/2012:0,10/11/2012:0
ABC5 | 12/11/2012:3,11/11/2012:0,10/11/2012:3|
ABC9 | 11/11/2012:3,10/11/2012:1,09/11/2012:0| ->12/11/2012:1,11/11/2012:3,10/11/2012:1
If, within the last three days, the criteria count has reached more than four, the user table will flag filter B. Notice that each date has an integer count next to it.
(SET OF COLUMNS)| RECORD TO FILTER | FLAG FOR FILTER A | FLAG FOR FILTER B |
0-MANY COLUMNS | ABC4 | | |
0-MANY COLUMNS | ABC9 | | X |
Both updates happen in a single transaction. The problem is that when the user uploads more than 800,000 records, my program throws the following exception in the while loop.
I use StringBuilder's parsing and append methods for maximum performance on the mutable days string.
java.lang.OutOfMemoryError: Java heap space
Here is my code; I use five days:
FactoriaDeDatos factoryInstace = FactoriaDeDatos.getInstance();
Connection cnx = factoryInstace.getConnection();
cnx.setAutoCommit(false);
PreparedStatement pstmt = null;
ResultSet rs = null;
// Note: pstmtDet and pstmtDet1 are PreparedStatements prepared elsewhere in this class.
pstmt = cnx.prepareStatement("SELECT CM.MAIL,CM.FECHAS FROM TCOMERCIALMAIL CM INNER JOIN TEMPMAIL TMP ON CM.MAIL=TMP."+colEmail);
rs = pstmt.executeQuery();
pstmtDet = cnx.prepareStatement("ALTER INDEX IDX_MAIL INACTIVE");
pstmtDet.executeUpdate();
pstmtDet = cnx.prepareStatement("SET STATISTICS INDEX IDX_FECHAS");
pstmtDet.executeUpdate();
pstmtDet = cnx.prepareStatement("ALTER INDEX IDX_FECHAS INACTIVE");
pstmtDet.executeUpdate();
pstmtDet = cnx.prepareStatement("SET STATISTICS INDEX IDX_FECHAS");
pstmtDet.executeUpdate();
sql_com_local_tranx=0;
int trxNum=0;
int ix=0;
int ixE1=0;
int ixAc=0;
StringBuilder sb;
StringTokenizer st;
String fechas;
int pos1,pos2,pos3,pos4,pos5,pos6,pos7,pos8,pos9;
StringBuilder s1,s2,sSQL,s4,s5,s6,s7,s8,s9,s10;
long startLoop = System.nanoTime();
long time2 ;
boolean ejecutoMax=false;
//int paginador_sql=1000;
//int trx_ejecutada=0;
sb=new StringBuilder();
s1=new StringBuilder();
s2=new StringBuilder();
sSQL=new StringBuilder();
s4=new StringBuilder();
s6=new StringBuilder();
s8=new StringBuilder();
s10=new StringBuilder();
while(rs.next()){
// From here
actConteoDia=0;
sb.setLength(0);
sb.append(rs.getString(2));
pos1= sb.indexOf(":",0);
pos2= sb.indexOf(",",pos1+1);
pos3= sb.indexOf(":",pos2+1);
pos4= sb.indexOf(",",pos3+1);
pos5= sb.indexOf(":",pos4+1);
pos6= sb.indexOf(",",pos5+1);
pos7= sb.indexOf(":",pos6+1);
pos8= sb.indexOf(",",pos7+1);
pos9= sb.indexOf(":",pos8+1);
s1.setLength(0);
s1.append(sb.substring(0, pos1));
s2.setLength(0);
s2.append(sb.substring(pos1+1, pos2));
s4.setLength(0);
s4.append(sb.substring(pos3+1, pos4));
s6.setLength(0);
s6.append(sb.substring(pos5+1, pos6));
s8.setLength(0);
s8.append(sb.substring(pos7+1, pos8));
s10.setLength(0);
s10.append(sb.substring(pos9+1));
actConteoDia=Integer.parseInt(s2.toString());
actConteoDia++;
sb.setLength(0);
//sb.append(s1).a
if(actConteoDia>MAXIMO_LIMITE_POR_SEMANA){
actConteoDia=MAXIMO_LIMITE_POR_SEMANA+1;
}
sb.append(s1).append(":").append(actConteoDia).append(",").append(rs.getString(2).substring(pos2+1, rs.getString(2).length()));
// Each date record takes approx. 8.3 ms to process
sSQL.setLength(0);
sSQL.append("UPDATE TCOMERCIALMAIL SET FECHAS='").append(sb.toString()).append("' WHERE MAIL='").append(rs.getString(1)).append("'");
pstmtDet1.addBatch(sSQL.toString());
//actConteoDia=0;
//actConteoDia+=Integer.parseInt(s2.toString());
actConteoDia+=Integer.parseInt(s4.toString());
actConteoDia+=Integer.parseInt(s6.toString());
actConteoDia+=Integer.parseInt(s8.toString());
actConteoDia+=Integer.parseInt(s10.toString());
if(actConteoDia>MAXIMO_LIMITE_POR_SEMANA){
sSQL.setLength(0);
sSQL.append("UPDATE TEMPMAIL SET DIASLIMITE='S' WHERE ").append(colEmail).append("='").append(rs.getString(1)).append("'");
pstmtDet.addBatch(sSQL.toString());
}
sql_com_local_tranx++;
if(sql_com_local_tranx%2000==0 || sql_com_local_tranx%7000==0 ){
brDias.setString("PROCESANDO "+sql_com_local_tranx);
pstmtDet1.executeBatch();
pstmtDet.executeBatch();
}
if(sql_com_local_tranx%100000==0){
System.gc();
System.runFinalization();
}
}
pstmtDet1.executeBatch();
pstmtDet.executeBatch();
cnx.commit();
I've run telemetry tests to trace where the problem lies.
The big while loop is the problem, I think, but I don't know exactly where the problem is.
I'm adding some images of the telemetry tests; please help me interpret them properly.
The GC becomes inverse to the time the JVM keeps objects alive:
http://imageshack.us/photo/my-images/849/66780403.png
The memory heap goes from 50 MB to 250 MB; when the used heap reaches 250 MB, the OutOfMemory exception is thrown:
50 MB
http://imageshack.us/photo/my-images/94/52169259.png
REACHING 250 MB
http://imageshack.us/photo/my-images/706/91313357.png
OUT OF MEMORY
http://imageshack.us/photo/my-images/825/79083069.png
The final set of generated objects, ordered by LiveBytes:
http://imageshack.us/photo/my-images/546/95529690.png
Any help, suggestion, or answer will be greatly appreciated.
The problem is that you are using a PreparedStatement as if it were a Statement, as you are calling addBatch(String). The javadoc of this method says:
Note: This method cannot be called on a PreparedStatement or CallableStatement.
This comment was added with JDBC 4.0; before that, it said the method was optional. The fact that Jaybird allows you to call this method on a PreparedStatement is therefore a bug: I created issue JDBC-288 in the Jaybird tracker.
Now to the cause of the OutOfMemoryError: when you use addBatch(String) on the PreparedStatement implementation of Jaybird (FBPreparedStatement), the statement is added to a list internal to the Statement implementation (FBStatement). In FBStatement, calling executeBatch() executes all statements in that list and then clears it. In FBPreparedStatement, however, executeBatch() is overridden to execute the originally prepared statement with batch parameters (in your example it won't do anything, as you never actually add a PreparedStatement-style batch). It will never execute the statements you added with addBatch(String), but it will also never clear the list of statements in FBStatement, and that is most likely the cause of your OutOfMemoryError.
Based on this, the solution should be to create a Statement using cnx.createStatement() and use that to execute your queries, or to investigate whether you could benefit from using one or more PreparedStatement objects with parameterized queries. It looks like you should be able to use two separate PreparedStatements, but I am not 100% sure; the added benefit would be protection against SQL injection and a minor performance improvement.
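A rough sketch of the two-PreparedStatement variant, reusing the table, column, and variable names from the question (treat it as an outline, not tested code):
PreparedStatement updFechas = cnx.prepareStatement(
        "UPDATE TCOMERCIALMAIL SET FECHAS = ? WHERE MAIL = ?");
PreparedStatement updLimite = cnx.prepareStatement(
        "UPDATE TEMPMAIL SET DIASLIMITE = 'S' WHERE " + colEmail + " = ?");

while (rs.next()) {
    // ... build the updated FECHAS string in sb, as in the original loop ...
    updFechas.setString(1, sb.toString());
    updFechas.setString(2, rs.getString(1));
    updFechas.addBatch(); // parameter-style batch, not addBatch(String)

    if (actConteoDia > MAXIMO_LIMITE_POR_SEMANA) {
        updLimite.setString(1, rs.getString(1));
        updLimite.addBatch();
    }
}
updFechas.executeBatch(); // executes and clears the parameter batch
updLimite.executeBatch();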
Addendum
This issue has been fixed since Jaybird 2.2.2.
Full disclosure: I am the developer of the Jaybird / Firebird JDBC driver.
Do not execute batch statements while iterating through the result set. Store the SQL you want to execute in a collection, and then, when you have finished processing the result set, start executing the new SQL. Does everything have to happen inside the same transaction?
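A minimal sketch of that collect-then-execute approach; buildUpdateSql is a hypothetical helper standing in for the string-building logic in the question:
List<String> pendingSql = new ArrayList<String>();
while (rs.next()) {
    pendingSql.add(buildUpdateSql(rs)); // hypothetical helper that builds each UPDATE
}
rs.close();

Statement stmt = cnx.createStatement();
for (String sql : pendingSql) {
    stmt.addBatch(sql); // addBatch(String) is valid on a plain Statement
}
stmt.executeBatch();
cnx.commit();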