Let me give an example first. It is a log table.
User A subscribes to service A = OK
User A unsubscribes from service A = OK
User A subscribes to service A again = OK
User A subscribes to service A again = Not OK, because you can't subscribe to the same service twice at the same time.
Sometimes the client goes crazy and sends 5 subscribe requests at the same time (with 4 Tomcat servers behind it); if I do nothing in this situation, 5 identical records will be inserted.
As you can see, I can't use a unique constraint here.
I guess I could use some kind of single-threaded block in Oracle, but I'm not sure.
I tried MERGE, but I think it works on specific records rather than on the last record.
begin single-threaded block
  select the last record
  if the last record is the same, don't insert
  if the last record is not the same, insert
end single-threaded block
Is this possible, and how can I achieve it?
Perhaps you need to check the user id and service type: if the same user tries to subscribe to the same service before the previous subscription request has been processed, alert the user.
Or maybe you want to limit how often a user can subscribe, say: a user can only subscribe to the same service once per day.
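If you go the "once per day" route, the check before the insert could look roughly like this JDBC sketch. The table and column names (subscr_log, scrid, service_item, status, created_at) are assumptions for illustration, not from the original schema:

import java.sql.*;

public class DailyLimitCheck {
    // Rough sketch: reject a subscribe if the same user already subscribed to the
    // same service within the last day. Table/column names are placeholders.
    public static boolean subscribedWithinLastDay(Connection con, String scrid, String serviceItem)
            throws SQLException {
        String sql = "SELECT COUNT(*) FROM subscr_log "
                   + "WHERE scrid = ? AND service_item = ? AND status = 'SUB' "
                   + "AND created_at > SYSDATE - 1";   // Oracle: within the last day
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, scrid);
            ps.setString(2, serviceItem);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1) > 0;   // true -> alert the user instead of inserting
            }
        }
    }
}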
You can update the record if it already exists, for example:
Run a query to check whether a record for the particular user and service exists:
SELECT * FROM table WHERE userid = :userid AND serviceid = :serviceid
If the query returns a result, the record exists, so do an update:
UPDATE table SET column1 = 'value', column2 = 'value2', ... WHERE userid = :userid AND serviceid = :serviceid
Otherwise, if no result is returned, the user hasn't subscribed to the service yet, so insert a record:
INSERT INTO table (column1, column2, ...) VALUES ('value1', 'value2', ...)
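In JDBC this check-then-write flow could look roughly like the sketch below (the table and column names are placeholders). Note that on its own it does not stop two concurrent requests from both seeing "no row" and both inserting, which is exactly the race the question describes:

import java.sql.*;

public class CheckThenWrite {
    // Placeholder table/column names (your_table, userid, serviceid, status).
    public static void save(Connection con, long userId, long serviceId, String status) throws SQLException {
        boolean exists;
        try (PreparedStatement check = con.prepareStatement(
                "SELECT 1 FROM your_table WHERE userid = ? AND serviceid = ?")) {
            check.setLong(1, userId);
            check.setLong(2, serviceId);
            try (ResultSet rs = check.executeQuery()) {
                exists = rs.next();
            }
        }
        String sql = exists
                ? "UPDATE your_table SET status = ? WHERE userid = ? AND serviceid = ?"
                : "INSERT INTO your_table (status, userid, serviceid) VALUES (?, ?, ?)";
        try (PreparedStatement write = con.prepareStatement(sql)) {
            write.setString(1, status);
            write.setLong(2, userId);
            write.setLong(3, serviceId);
            write.executeUpdate();
        }
    }
}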
I think you could solve this problem with a constraint. When a user subscribes, insert a row; when they unsubscribe, delete it. A row must be unique for the same user and the same service.
If you do not want to delete rows, add an ACTIVE column to this table and put the unique constraint on USER + SERVICE + ACTIVE.
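A rough sketch of how the application side could rely on such a constraint. The DDL in the comment and the table/column names are assumptions; depending on the JDBC driver you may get a plain SQLException with error code ORA-00001 instead of the more specific subclass:

import java.sql.*;

public class ConstraintBasedSubscribe {
    // Assumed DDL (run once):
    //   ALTER TABLE subscr_log ADD CONSTRAINT uq_user_service_active
    //       UNIQUE (userid, serviceid, active);
    public static boolean subscribe(Connection con, long userId, long serviceId) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO subscr_log (userid, serviceid, active) VALUES (?, ?, 'Y')")) {
            ps.setLong(1, userId);
            ps.setLong(2, serviceId);
            ps.executeUpdate();
            return true;                                      // first request wins
        } catch (SQLIntegrityConstraintViolationException e) {
            return false;                                     // duplicate subscribe rejected by the constraint
        }
    }
}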
I do not fully understand your problem, but it seems you need to implement mutual exclusion somewhere. Have you tried with a SELECT ... FOR UPDATE?
http://www.techonthenet.com/oracle/cursors/for_update.php
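A rough sketch of that idea, assuming there is a parent row per subscriber that every subscribe request locks first, so the check-then-insert runs one request at a time. The users table here is hypothetical; the other names follow the MERGE example further down:

import java.sql.*;

public class SubscribeWithLock {
    public static void subscribe(Connection con, String scrid, String serviceItem) throws SQLException {
        con.setAutoCommit(false);
        try {
            // Serialize concurrent requests for the same subscriber by locking a parent row.
            try (PreparedStatement lock = con.prepareStatement(
                    "SELECT scrid FROM users WHERE scrid = ? FOR UPDATE")) {
                lock.setString(1, scrid);
                lock.executeQuery();                       // blocks until a concurrent subscribe commits
            }
            // Check the status of the last record for this subscriber and service.
            try (PreparedStatement last = con.prepareStatement(
                    "SELECT status FROM subscr_log WHERE rid = "
                  + "(SELECT MAX(rid) FROM subscr_log WHERE scrid = ? AND service_item = ?)")) {
                last.setString(1, scrid);
                last.setString(2, serviceItem);
                try (ResultSet rs = last.executeQuery()) {
                    if (rs.next() && "SUB".equals(rs.getString(1))) {
                        con.rollback();                    // already subscribed: skip the duplicate
                        return;
                    }
                }
            }
            try (PreparedStatement ins = con.prepareStatement(
                    "INSERT INTO subscr_log (scrid, service_item, status) VALUES (?, ?, 'SUB')")) {
                ins.setString(1, scrid);
                ins.setString(2, serviceItem);
                ins.executeUpdate();
            }
            con.commit();
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}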
I tried "MERGE" and subquery to solve this case.
By the way, this problem is only happened when subscribe. First, I get the status(subscribe or unsubscribe) from the last record of a user and the service. If the last status in table is 'subscribe', means this subscribed request might be the duplicated one.
MERGE INTO subscr_log M
USING
(SELECT status
FROM subscr_log
WHERE rid=
(SELECT MAX(rid)
FROM monthly_subscr_log
WHERE SCRID ='123456'
AND service_item='CHANNEL1'
)
) C
ON (C.status = 'SUB') -- check whether the last record is a subscribe
WHEN MATCHED THEN
UPDATE SET M.REASON='N/A' WHERE M.STATUS='XXXXXXX' -- condition never matches, so the update does nothing
WHEN NOT MATCHED THEN
INSERT VALUES (9999,8888,'x','x','x','x','x','x','x','x','x',sysdate,'x','x','x','x');
I want to optimize the number of queries against the database. At the moment a list of devices arrives over REST, and I need to check whether new devices have been added. Right now it works like this: all devices for the current user are selected from the database and compared with the list received in the request to find new devices. I want to push all of this work into the database and do something like this:
select p
from :firstParam p
where p.sdauId NOT IN (select t.id
from Equipment t
where t.owner.id = :secondParam)
Param ":firstParam" is a list of devices received from the request. ":secondParam" is a user id.
Can i use the section "from" like that? After reading the documentation and making many attempts to make a similar request, nothing came of it. I will be grateful for any tips on writing a request or approach to solving such a problem.
Database object names (e.g. database, table, column names) cannot be bound using a placeholder in a prepared statement. So, you'll have to hard code the name of the first table:
select p
from yourTable p -- cannot use a parameter for table names
where p.sdauId NOT IN (select t.id from Equipment t where t.owner.id = :secondParam)
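If the goal is just to find which of the incoming devices are new, one workable alternative is to bind the list of ids as an IN parameter and compute the difference in Java. This is only a rough sketch; Device here is a hypothetical request DTO exposing getSdauId():

import java.util.List;
import java.util.stream.Collectors;
import javax.persistence.EntityManager;

public class NewDeviceCheck {
    // Hypothetical request DTO; in the real app this is whatever arrives over REST.
    public static class Device {
        private Long sdauId;
        public Long getSdauId() { return sdauId; }
    }

    public static List<Device> findNewDevices(EntityManager em, List<Device> fromRequest, Long userId) {
        List<Long> requestIds = fromRequest.stream()
                .map(Device::getSdauId)
                .collect(Collectors.toList());

        // Ask the database which of the incoming ids the user already owns.
        List<Long> known = em.createQuery(
                "select t.id from Equipment t where t.owner.id = :secondParam and t.id in :ids", Long.class)
            .setParameter("secondParam", userId)
            .setParameter("ids", requestIds)
            .getResultList();

        // Whatever is not known yet is a new device.
        return fromRequest.stream()
                .filter(d -> !known.contains(d.getSdauId()))
                .collect(Collectors.toList());
    }
}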
I have a Java application that reads rows from table A and writes those rows to table B.
Now, my other requirement is to check table A periodically (every x minutes) for newly inserted rows and move them to table B. If it helps, table A has created_at and updated_at TIMESTAMP fields.
Is there any way I can check for newly inserted rows in a certain interval, based on the TIMESTAMP value, add those new rows to a List and re-use the Java method that I already have, to write to table B?
I'm a Java/MySQL noob and would highly appreciate any advice or suggestion that'd help me get started. I'm using a MariaDB database.
Method 1: One way you can do this is by adding an additional column indicating the status as either open or processed. Once a row is picked from table A and sent to table B, you can update its status to processed with a simple UPDATE query. You can then keep checking table A every X minutes for any record whose status is still open, with a simple select query like select * from order_book where status = 'OPEN'.
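A rough JDBC sketch of Method 1; the column names and the copyRowToTableB helper are placeholders for your existing "write to table B" code:

import java.sql.*;

public class PollAndCopy {
    public static void moveOpenRows(Connection con) throws SQLException {
        String select = "SELECT id, col1, col2 FROM order_book WHERE status = 'OPEN'";
        String mark   = "UPDATE order_book SET status = 'PROCESSED' WHERE id = ?";
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(select);
             PreparedStatement upd = con.prepareStatement(mark)) {
            while (rs.next()) {
                copyRowToTableB(rs);          // re-use the existing "write to table B" logic
                upd.setLong(1, rs.getLong("id"));
                upd.executeUpdate();          // mark the row so it is not picked up again
            }
        }
    }

    private static void copyRowToTableB(ResultSet row) throws SQLException {
        // placeholder for the existing insert-into-table-B code
    }
}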
Method 2: You can use a trigger so that as soon as a record is inserted into table A, it is copied to table B. The benefit of this method is that it doesn't require you to keep a connection open to read the table; the DBMS does it directly. In MariaDB the trigger could look like this:
CREATE TRIGGER trigger_UpdateTableB
AFTER INSERT ON TableA
FOR EACH ROW
    INSERT INTO TableB
    (
        primary_column,
        column_tableA
    )
    VALUES
    (
        NEW.primary_column,
        NEW.column_tableA
    );
Method 3: You can also achieve a similar thing easily using Apache Camel.
<route>
<!--To set up a route that generates an event every 60 seconds -->
<from uri="timer://foo?fixedRate=true&period=60000" />
<to uri="sqlComponent:{{sql.selectTableA}}" />
<to uri="sqlComponent:{{sql.updateStatus}}" />
<to uri="sqlComponent:{{sql.InsertTableB}}" />
</route>
Method 4: You can also create a scheduled job in Java using ScheduledExecutorService. You can configure the job to run, say, once an hour (depending on the use case), pick up new records from table A, and insert them into table B.
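A minimal sketch of Method 4; the moveOpenRows call stands in for whatever method reads new rows from table A and writes them to table B (for instance, the Method 1 sketch above):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SyncScheduler {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run the copy job every 5 minutes (adjust the period to the use case).
        scheduler.scheduleAtFixedRate(() -> {
            try {
                moveOpenRows();               // existing table A -> table B logic
            } catch (Exception e) {
                e.printStackTrace();          // keep the scheduler alive on failure
            }
        }, 0, 5, TimeUnit.MINUTES);
    }

    private static void moveOpenRows() {
        // placeholder for the code that reads table A and writes table B
    }
}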
Appendix -
https://mariadb.com/kb/en/library/trigger-overview/ To read more about triggers in MariaDB
http://camel.apache.org/getting-started.html To know more about Apache Camel
https://mkyong.com/java/java-scheduledexecutorservice-examples/
This is such a weird situation. I have a Microsoft SQL Server database, an app written in Java (queries are run using javax.persistence.Query), and an external service written in C#.
I have a procedure that updates table MyTable.
There are only two update statements in this procedure:
UPDATE
MyTable
SET
status = dbo.getStat(id)
WHERE
EXISTS
(
...
)
UPDATE
MyTable
SET
status = 3
WHERE
status = 2
AND EXISTS
(
...
)
EXISTS part is the same for both statements.
In my Java app I use the following code to launch this procedure:
Query query = em.createNativeQuery(recources.getString("updateStatuses"));
query.setHint("toplink.refresh", "true");
query.setParameter(1, sessId);
query.executeUpdate();
Under updateStatuses I have the following SQL:
EXEC updateStatusesP ?
After that I select records from this table if they have status equal to 3:
SELECT
id
FROM
MyTable
WHERE
status = 3
AND EXISTS
(
...
)
I iterate through the ids and make a call to the external C# service, which modifies data in the database.
The thing is that currently I'm getting a timeout on a select query launched from this external service. It's a query selecting data from MyTable.
If I remove the first update statement from my updateStatusesP procedure, it works fine (no timeout). If I modify the first update to include a "status = " condition in the WHERE clause, it also works fine. But without it I get this timeout.
There are a trigger and an index on this table. I removed them to check whether something was happening there. No change.
If I make a direct call to this external service (via Postman), I get a correct response. Only when calling it from the application code (procedure, select query for the ids, then the call to the external service) do I get a response from this service with timeout info, and I can see in the DB monitoring tool that it didn't get past the first select query launched inside this service.
I don't understand what is happening. I tried putting those updates into a transaction and committing it at the end of the procedure, but it didn't help. Any ideas?
Say I have a table named users and a column named username with values in the format user1, user2, ...
I want to insert users into this table in a loop, and the value of every entry depends on the one before it. The value of the new entry is generated from the alphabetically greatest entry currently in the users table.
Since it's possible in the JDBC API to call getGeneratedKeys after an insert while auto-commit is set to false, in a situation like the one below:
connection.setAutoCommit(false);
while(someCondition)
{
    ResultSet rs = connection.createStatement()
            .executeQuery("select max(username) as username from users");
    if(rs.next())
    {
        name = rs.getString("username"); // returns user1
    }
    String newName = generateNewName(name); // simply makes user1 -> user2
    connection.createStatement()
            .executeUpdate("insert into users (username, ...) values ('" + newName + "', ...)"); // and inserts..
}
does the select query return the last inserted value,
or
does it return the max value that was in the table before I started the loop?
First, to make sure you see all changes in the database immediately, prefer a transaction isolation level of READ_UNCOMMITTED over using auto-commit. Alternatively, using auto-commit everywhere would do the job, too.
Once you have made sure you see every DB change immediately, the database will send you the maximum user as of some point during the select's execution. But by the time you actually receive the result, additional users might have been created by other threads. So this will only work with a single thread doing the work, and that most likely doesn't make sense nowadays.
TL;DR
No, don't do it!
I have a table xxx with an id (id_xxx int AUTO_INCREMENT) and a name (name_xxx varchar(50)).
When I insert a new row into the table, I do:
INSERT INTO xxx (name_xxx) VALUES ('name for test');
and the result of the insert (int = 1) is returned, then I display a "succeed!" message in my Java interface. Up to this point it's a very basic and simple operation...
BUT,
when I want to get back the inserted id_xxx, I have to make another query to the database:
INSERT INTO xxx (name_xxx) VALUES ('name for test');
-- after the insert response I run:
SELECT MAX(id_xxx) FROM xxx;
and I display in my Java interface "succeed, $$$ is your id_xxx"....
This second version can easily cause a serious error under concurrent access by multiple users:
imagine a case where user1 performs an insert... and then H2 interrupts this user's operations and executes user2's insert.
When user1 then executes SELECT MAX(id_xxx), H2 returns the WRONG id_xxx...
(I hope my example is clear; otherwise I will sketch the problem out).
How can I solve this problem?
You should be able to retrieve the keys generated by the insert query; see 5.1.4 Retrieving Automatically Generated Keys.
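A minimal JDBC sketch of that approach against the table from the question (assuming the AUTO_INCREMENT column id_xxx from the question; H2 supports getGeneratedKeys):

import java.sql.*;

public class InsertAndGetId {
    // Let the driver hand back the generated id_xxx instead of running a separate SELECT MAX(id_xxx).
    public static long insertName(Connection con, String name) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO xxx (name_xxx) VALUES (?)",
                Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, name);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                return keys.getLong(1);          // the id_xxx generated for this insert
            }
        }
    }
}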