I have a Facebook-like application: a virtual whiteboard for multiple 'teams' who share a 'wall' common to their project. There are about 9-12 entities for which I capture data. I'm trying to have the user's homepage display the activities that have happened since their last login, similar to how Facebook posts notifications:
"[USER] has done [some activity] on [some entity] - 20 minutes ago"
where [...] are clickable links and the activities are primarily (rather, only) CRUD.
I'll have to persist these updates. I'm using MySQL as the backend DB and thought of having an update table per project that could store the activities. But it seems there would need to be one trigger per table, which would just be redundant. What's more, it's difficult to nail down the schema for that update table since there are many different entities.
The constraint is to use MySQL but I'm open to other options of "how" to achieve this functionality.
Any ideas?
PS: Using jQuery + REST + Restlet + Glassfish + MySQL + Java
It doesn't have to be handled at the database level. You can have a transaction logging service that you call in each operation. Each transaction gets a (unique, sequential) key.
Store the key of the last item each user saw, and show any updates with a higher key; then update the last key seen.
A periodic routine can go through the user accounts to find the lowest last-seen transaction log key across all users (i.e. the newest log entry that every user has already seen) and delete/archive any entries with a key <= that one.
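A minimal sketch of that logging-service idea with plain JDBC, assuming a MySQL table such as activity_log (auto-increment id, user_id, action, entity_type, entity_id, created_at) and a last_seen_activity_id column on an app_user table; all names are illustrative, not taken from the question:

// Minimal sketch of the transaction/activity logging service.
// Table and column names (activity_log, app_user, last_seen_activity_id) are illustrative.
import java.sql.*;
import java.util.*;

public class ActivityLogService {

    private final Connection conn;

    public ActivityLogService(Connection conn) { this.conn = conn; }

    // Called from every CRUD operation in the service layer.
    public void log(long userId, String action, String entityType, long entityId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO activity_log (user_id, action, entity_type, entity_id, created_at) " +
                "VALUES (?, ?, ?, ?, NOW())")) {
            ps.setLong(1, userId);
            ps.setString(2, action);
            ps.setString(3, entityType);
            ps.setLong(4, entityId);
            ps.executeUpdate();
        }
    }

    // Fetch everything newer than the key the user saw last, then advance that key.
    public List<String> updatesSince(long userId, long lastSeenId) throws SQLException {
        List<String> updates = new ArrayList<>();
        long maxId = lastSeenId;
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id, user_id, action, entity_type, entity_id, created_at " +
                "FROM activity_log WHERE id > ? ORDER BY id")) {
            ps.setLong(1, lastSeenId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    maxId = rs.getLong("id");
                    updates.add(rs.getLong("user_id") + " did " + rs.getString("action")
                            + " on " + rs.getString("entity_type") + " #" + rs.getLong("entity_id")
                            + " at " + rs.getTimestamp("created_at"));
                }
            }
        }
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE app_user SET last_seen_activity_id = ? WHERE id = ?")) {
            ps.setLong(1, maxId);
            ps.setLong(2, userId);
            ps.executeUpdate();
        }
        return updates;
    }
}

The homepage handler would call updatesSince(userId, lastSeenId) and render each returned row through the "[USER] has done [some activity] on [some entity]" template, building the clickable links from entity_type and entity_id.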
I have a requirement to fetch records from Dataverse in which changes have been made to specific column values. For example, let's say we have a table named employee with a field called position, which can change over time from intern to software developer to development lead, etc. If we currently have 10 records and the position of one employee gets changed, I need only that one employee record. I have gone through Retrieve and detect changes to table definitions, but I believe that is related to changes in the schema, not to changes in the data. I am using Spring Boot with Java 11, and to work with Dataverse I am using the Olingo library; I may also use the Web APIs if required. Is there a way to detect changes in the data as described above?
EDIT
To add more details: we will have a scheduled job that triggers every X minutes and needs to fetch the employee records whose position has changed since the last fetch X minutes ago. As we can see in the image below, all 3 records were updated in that X-minute interval and the last modified time has been updated for all of them. I need to fetch only the records highlighted in green, for which the position attribute has changed. The record with Id 2 should not be fetched, as its position is the same.
Solution 1: Custom changes table
If you are able to extend your current Dataverse environment:
Create a new table called Employee Change. Add a column of type Lookup named Employee and link it to your Employee table
Modify the Main Form and add the Employee column to the form
Create a workflow process which fires on field change. Inside the workflow process, create an Employee Change record and set its lookup column value to the changed record
You can now query the Employee Change table for changed records. You would need to expand the lookup column to get the required columns from the Employee table.
Example Web API query:
GET [Organization URI]/api/data/v9.1/employeechanges?$select=createdon
&$expand=employee($select=employeeid,fullname,column2,column3) HTTP/1.1
Accept: application/json
OData-MaxVersion: 4.0
OData-Version: 4.0
More info on expanding lookup columns can be found in the Dataverse Web API documentation
Solution 2: Auditing
Use built-in auditing feature
Make sure auditing is enabled. Details can be found in the docs
Enable auditing on the required columns in the Employee table
Query the audit records for changes to the Employee table, paying attention only to changes to the specific attributes of interest (a rough query sketch follows below)
You will get a list of changed records, and then you have to query once more to retrieve the columns of those records
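As a rough sketch of that audit query, and under the assumption that your environment exposes the standard audits entity set through the Web API, a plain Java 11 HttpClient call could look roughly like this; the organization URI, the token handling, and the exact column and filter names are assumptions that may need adjusting:

// Hedged sketch: poll the Dataverse audits entity set for audit rows created
// since the last scheduled run. URI, token acquisition and column names are assumptions.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AuditPoller {
    public static void main(String[] args) throws Exception {
        String orgUri = "https://yourorg.crm.dynamics.com";        // assumption
        String accessToken = System.getenv("DATAVERSE_TOKEN");     // acquired elsewhere, e.g. via MSAL
        String since = "2023-01-01T00:00:00Z";                     // last fetch time of the scheduled job

        String filter = "createdon gt " + since;
        String query = orgUri + "/api/data/v9.1/audits"
                + "?$select=auditid,operation,createdon,_objectid_value"
                + "&$filter=" + filter.replace(" ", "%20");        // URI.create rejects raw spaces

        HttpRequest request = HttpRequest.newBuilder(URI.create(query))
                .header("Authorization", "Bearer " + accessToken)
                .header("Accept", "application/json")
                .header("OData-MaxVersion", "4.0")
                .header("OData-Version", "4.0")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // parse the JSON and collect the changed record ids
    }
}

From the returned audit rows you would extract the changed record ids (and, where available, the changed attributes), and then issue a second query against the employee table for just those ids, as described in the last step above.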
Solution 3: Push instead of pull
It might make more sense to push changes from Dataverse to your API instead of constantly querying for changes.
You could use Microsoft Power Automate to create a simple flow which calls your API / platform when a change is detected in Dataverse
A good starting point could be the following Power Automate template: When a record is updated in Microsoft Dataverse, send an email. You could then replace the "send email" step with calls to your own APIs
I'm currently developing an application in Java that connects to a MySQL database using JDBC and displays records in a JTable. The application is going to be used by more than one user at a time, and I'm trying to implement a way to see if the table has been modified, e.g. if user one modifies a column such as stock level and then user two tries to change the same record based on the level as it was before user one's change.
At the moment I'm storing the checksum of the table that's being displayed in a variable, and when a user tries to modify a record the application checks whether the stored checksum is the same as the one generated just before the edit.
As I'm new to this, I'm not sure whether this is a correct way to do it, as I have no experience in this area.
Calculating the checksum of an entire table is a very heavy-handed solution and definitely something that won't scale in the long term. There are multiple ways of handling this, but the core theme is to do as little work as possible so that you can scale as the number of users increases. Imagine implementing the checksum-based solution on a table with a million rows that is continuously updated by hundreds of users!
One solution (which requires minimal rework) would be to check only the stock against which the value is being updated. In the background, you fire a query at the table to see whether the data for that particular stock has been updated after the table was populated. If it has, you can warn the user or mark the updated cell as dirty to indicate that the value has changed. The problem here is that the query isn't fired until the user tries to save the updated value. You could poll the database to avoid that, but polling is hardly an efficient solution either.
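A minimal sketch of that per-record check with plain JDBC, assuming the stock table has (or can be given) a last_updated timestamp column; table and column names are illustrative:

// Minimal sketch of the per-record freshness check. Assumes a last_updated
// TIMESTAMP column on the stock table; names are illustrative.
import java.sql.*;

public class StockFreshnessCheck {

    // Returns true if the row changed in the DB after we loaded it into the JTable.
    public static boolean isStale(Connection conn, long stockId, Timestamp loadedAt) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT last_updated FROM stock WHERE id = ?")) {
            ps.setLong(1, stockId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return true; // row was deleted by someone else; treat as stale
                }
                return rs.getTimestamp("last_updated").after(loadedAt);
            }
        }
    }
}

The idea is to call isStale(...) right before writing the user's edit and, if it returns true, refresh that row in the JTable instead of saving.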
As a more robust solution, I would recommend using a database which implements native "push notifications" to all the connected clients. Redis is a NoSQL database which comes to mind for this.
Another tried and tested technique would be to forgo the direct database connection and use a middleware layer such as a message queue (e.g. RabbitMQ). Message queues enable the design of systems which communicate using messages. So, for example, every update to a stock value in the JTable would be sent as a message to an "update database" queue. Once the update is done, a message would be sent to an "update notification" queue to which all clients are subscribed. This lets all of them know that the value of a given stock has been updated, so they can act accordingly. The advantage of this solution is that you get to keep your existing stack (Java, MySQL) and can implement notifications without polling the DB and killing it.
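To illustrate the notification half of that design, here is a minimal sketch using the official RabbitMQ Java client; the exchange name and the message format are made up for the example, and the MySQL update itself is assumed to happen elsewhere:

// Minimal sketch of the notification-queue idea with the RabbitMQ Java client.
// Exchange name and message format are made up for the example.
import com.rabbitmq.client.*;

import java.nio.charset.StandardCharsets;

public class StockUpdateNotifications {

    private static final String EXCHANGE = "stock.updates"; // fanout: every client gets every notification

    // Called by the client that just committed an update to MySQL.
    public static void publishUpdate(Channel channel, long stockId, int newLevel) throws Exception {
        channel.exchangeDeclare(EXCHANGE, BuiltinExchangeType.FANOUT, true);
        String body = stockId + ":" + newLevel;
        channel.basicPublish(EXCHANGE, "", null, body.getBytes(StandardCharsets.UTF_8));
    }

    // Called once per client at startup; reacts when another client changes a stock.
    public static void subscribe(Channel channel) throws Exception {
        channel.exchangeDeclare(EXCHANGE, BuiltinExchangeType.FANOUT, true);
        String queue = channel.queueDeclare().getQueue(); // exclusive, auto-delete queue per client
        channel.queueBind(queue, EXCHANGE, "");
        channel.basicConsume(queue, true, (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            System.out.println("Stock changed elsewhere: " + body); // e.g. mark the JTable cell dirty
        }, consumerTag -> { });
    }

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            subscribe(channel);
            publishUpdate(channel, 42L, 17);
            Thread.sleep(1000); // give the consumer a moment before the connection closes
        }
    }
}

A fanout exchange with one exclusive queue per client is the simplest way to broadcast "this stock changed" to every open JTable; each client then decides whether to refresh the row or just flag it as dirty.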
A checksum is one way to see whether data has changed.
Still, I would suggest you store a last_update_date column; this column should be updated on every update of the record.
Then you just have to store this date (datetime precision) and do the check against that.
You can also add a version number column: a simple counter incremented by 1 on each update.
Note:
You can add a trigger on update to maintain last_update_date; that should be 100% reliable. You may not need the trigger if you control all the updates.
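A minimal sketch of the version-counter variant with plain JDBC; the stock table and its columns are illustrative, not taken from the question:

// Minimal sketch of the version-counter check with plain JDBC.
// Table and column names are illustrative.
import java.sql.*;

public class OptimisticStockUpdate {

    // Returns true if the update went through, false if someone changed the row first.
    public static boolean updateStockLevel(Connection conn, long stockId,
                                           int newLevel, long versionReadEarlier) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE stock " +
                "SET level = ?, version = version + 1, last_update_date = NOW() " +
                "WHERE id = ? AND version = ?")) {
            ps.setInt(1, newLevel);
            ps.setLong(2, stockId);
            ps.setLong(3, versionReadEarlier);
            return ps.executeUpdate() == 1; // 0 rows touched means the version moved on
        }
    }
}

If the method returns false, someone else saved first: reload the row, show its current values, and let the user decide whether to re-apply the change.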
When used in network communication:
A checksum is a count of the number of bits in a transmission unit that is included with the unit so that the receiver can check whether the same number of bits arrived. If the counts match, it's assumed that the complete transmission was received.
So it translates to checking whether two objects are different; your approach is correct.
I have to solve this situation: in my Spring + JPA web application I have a JSP similar to an Excel worksheet.
So I have a certain number of cells, and each cell is saved in a DB table along with some additional information: there is one row per cell.
id | value | column | row | ...
I use this structure because the number of columns in my JSP table is dynamic.
At the moment, when I save the cells I truncate the current set of rows in the DB table and re-insert all the new rows. This is the fastest way I found to update a large set of rows.
But now I have a concurrency problem: the JSP page can be used by different users at the same time, and this can cause one user's save to overwrite another's.
I need to implement some kind of lock in my web app. I found there are mainly two types of locking: optimistic vs pessimistic.
Can you suggest a common approach to this situation? Where do I need to implement the lock, at the data access level or at the service level?
NOTE, to be clearer: table values are shared among users, but can be updated by any of the authorized users.
The solution would probably depend on the behavior requirements.
How about the following scenario: users A and B start to change some values, then user A presses the Save button and saves the data, and after that user B does the same. User B gets an error message saying something like "the data has been updated, please reload the page". He reloads the page and loses all the changes he made :( Only after that is he able to save his changes, and he has to redo them.
Another possible scenario: users A and B access the page, but only the user who got there first will be able to save his work; the other users will see a message saying something like "someone else is editing the page, try again later".
For the first scenario you can implement the following: each row of the table (in the database) has a last-update timestamp which is set to the current time every time the row is changed.
Now, let's imagine user A got the row with timestamp 1 when he opened the page, and user B, a little slower, got the same row with timestamp 2. But B made his changes faster and pressed the Save button first, so the row is now saved in the DB with, let's say, timestamp 5. User A then tries to save his changes, but the timestamp of his data is 1, which differs from the 5 currently in the DB. That means someone has already changed the data, and he should see the error message I mentioned above.
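With Spring + JPA, that timestamp comparison does not have to be hand-rolled: JPA's built-in optimistic locking via @Version performs the same check for you. A hedged sketch, with an entity and fields made up for the example:

// Hedged sketch of the same idea using JPA's built-in optimistic locking.
// The Cell entity and its fields are made up for the example.
import javax.persistence.*;

@Entity
public class Cell {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String value;

    @Column(name = "col_index")
    private int columnIndex;

    @Column(name = "row_index")
    private int rowIndex;

    // JPA increments this on every update and adds "WHERE version = ?" to the UPDATE;
    // a stale save fails with OptimisticLockException instead of silently overwriting.
    @Version
    private long version;

    // getters and setters omitted for brevity
}

The service layer then catches OptimisticLockException (or Spring's ObjectOptimisticLockingFailureException) and turns it into the "please reload the page" message. Since the question truncates and re-inserts the cell rows, the version could instead live on a parent "sheet" row that every save reads and updates.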
The second scenario is a little bit harder to implement. I think the best way to do it is to open a DB transaction which:
reads the row(s) we want;
sets a flag like "locked" to true for all of them;
fails if some row is already locked (or returns the available rows, depending on what you need; but it should probably fail);
returns the rows to the JSP page.
Now, if another user requests the same rows, his transaction will fail and he will not be able to start changing the data.
User A should set these locked flags back to false when he saves the data.
Important: these locks should have a timeout to prevent the situation where a user opens the page and closes it without saving (or the browser crashes, or something else). You may also want to implement some kind of lock re-acquisition for the same user: when a user opened the page, closed it without saving, and then opened it again, he should still be able to edit the data. This can be done by identifying the user somehow: login, cookie, and so on.
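A rough sketch of that lock acquisition with plain JDBC and MySQL, assuming a sheet-level lock stored in locked_by / locked_at columns; the table, the columns and the timeout value are illustrative:

// Rough sketch of the second scenario: a sheet-level lock with a timeout,
// implemented as locked_by / locked_at columns. Names and timeout are illustrative.
import java.sql.*;

public class SheetLock {

    private static final int LOCK_TIMEOUT_MINUTES = 10;

    // Returns true if this user now holds the lock (fresh, expired, or already his own).
    public static boolean tryAcquire(Connection conn, long sheetId, String userId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE sheet SET locked_by = ?, locked_at = NOW() " +
                "WHERE id = ? AND (locked_by IS NULL " +
                "   OR locked_by = ? " +
                "   OR locked_at < NOW() - INTERVAL " + LOCK_TIMEOUT_MINUTES + " MINUTE)")) {
            ps.setString(1, userId);
            ps.setLong(2, sheetId);
            ps.setString(3, userId);
            return ps.executeUpdate() == 1; // 0 means someone else holds a fresh lock
        }
    }

    // Called after a successful save (or on page close) to release the lock.
    public static void release(Connection conn, long sheetId, String userId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE sheet SET locked_by = NULL, locked_at = NULL " +
                "WHERE id = ? AND locked_by = ?")) {
            ps.setLong(1, sheetId);
            ps.setString(2, userId);
            ps.executeUpdate();
        }
    }
}

A single conditional UPDATE keeps the check-and-set atomic, so two users racing for the lock cannot both succeed, while an expired or self-owned lock is re-acquired automatically.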
I am implementing an audit trail for a web application, recording the user id of the logged-in user who performs inserts, updates and deletes of records.
There are no issues with the insert/update triggers. However, for the delete trigger the database does not know the user id of whoever performed the delete.
I am using an Oracle database and JDBC with connection pooling.
How do I pass the "user id" to the delete trigger?
Take a look at Audit4j. It supports application auditing out of the box.
Are you using JPA? If so, are you using EclipseLink? If so, can this be done without triggers?
If you answered yes to all three questions, have I got an answer for you. Take a look at EclipseLink's History table feature. I've used this successfully in the past to implement an audit trail.
Otherwise, consider "soft deletes" by adding a Status column. Instead of physically removing the row from the database, you simply set the row's status to disabled, and in your Select queries you just add "and status != disabled" to the where clause.
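An illustrative JDBC sketch of the soft-delete route; since the "delete" becomes an ordinary UPDATE, the same mechanism that already records the user id for updates can apply to it. The table and column names are assumptions:

// Illustrative sketch of the soft-delete approach with JDBC: the "delete" becomes
// an UPDATE, so the user id can be written in the same statement the update trigger sees.
// Table and column names are assumptions.
import java.sql.*;

public class SoftDeleteDao {

    public static void softDelete(Connection conn, long recordId, String userId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE records SET status = 'DISABLED', modified_by = ?, modified_on = SYSTIMESTAMP " +
                "WHERE id = ?")) {
            ps.setString(1, userId);
            ps.setLong(2, recordId);
            ps.executeUpdate();
        }
    }
}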
I have a SOAP-based web service with Java + MySQL.
The web service consists of saving generated documents and sending them back as the response. Each user has a limited number of documents available. This service provides documents to external systems, so I have to know, at any time, how many documents are available for a specific user.
To handle this I built a trigger that updates the user row when a new document is created.
CREATE TRIGGER `Service`.`discount_doc_fromplan`
AFTER INSERT ON `Service`.`Doc` FOR EACH ROW
UPDATE `Service`.`User` SET User.DocAvailable = User.DocAvailable - 1 where User.id = NEW.idUser
The problem comes when a user tries to create 2 or more documents at the same time from their systems. This gives me "Deadlock found when trying to get lock".
Does somebody have an idea how to avoid the deadlock while still keeping the number of available documents correct? This is my first web service. Thanks.
You are trying to implement your business logic inside a database trigger. Instead of a trigger, you can implement this logic in either (1) your web service application middleware or (2) a stored procedure. I prefer approach (1), though. The basic idea in either case is to collect all of a user's inserts into the Doc table with a cumulative counter and, at the end of all the inserts, update the User table with DocAvailable = DocAvailable - counter in one go. You can do this in a transaction so that you can roll back in case of a problem. You will have to read the user's available Doc quota before starting the transaction.
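A rough sketch of approach (1) with plain JDBC; the User and Doc tables and their id / idUser / DocAvailable columns follow the trigger shown above, while the content column and the rest of the code are assumptions:

// Rough sketch of approach (1): quota check, inserts and one cumulative decrement
// in a single transaction instead of a per-row trigger. The content column is an assumption.
import java.sql.*;
import java.util.List;

public class DocumentService {

    public static void createDocuments(Connection conn, long userId, List<String> docs) throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try {
            // Lock the user row and read the remaining quota.
            int available;
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT DocAvailable FROM `User` WHERE id = ? FOR UPDATE")) {
                ps.setLong(1, userId);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) throw new SQLException("Unknown user " + userId);
                    available = rs.getInt(1);
                }
            }
            if (available < docs.size()) {
                throw new SQLException("Document quota exceeded for user " + userId);
            }

            // Insert all documents for this request.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO Doc (idUser, content) VALUES (?, ?)")) {
                for (String doc : docs) {
                    ps.setLong(1, userId);
                    ps.setString(2, doc);
                    ps.addBatch();
                }
                ps.executeBatch();
            }

            // One cumulative decrement instead of one trigger-driven update per row.
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE `User` SET DocAvailable = DocAvailable - ? WHERE id = ?")) {
                ps.setInt(1, docs.size());
                ps.setLong(2, userId);
                ps.executeUpdate();
            }

            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}

Locking the user row with FOR UPDATE serializes concurrent requests for the same user, which should remove the insert-versus-trigger lock ordering that was producing the deadlock, while still keeping DocAvailable accurate.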