SmartGWT RestDataSource and Paging (Large DataSet of Dynamic Data) - java

I have a database table for log messages, and new rows can be inserted at any time. I want to show them in a grid, and when you scroll down I want to request more rows from this table (server side), but without being affected by newly added rows. The new rows should only become visible when I refresh the whole grid.
I'm not sure how I can request rows in a range (from, to) using JDBC. I don't think there is a portable (across different databases) SQL query to do this? (I'm using MySQL.)
I think that after reading the first page of this table I have to send the max id from the log table to the client side, and after that request new rows using this max id as a parameter in the SQL (WHERE id <= MAXID), but I'm not sure how I can pass this parameter from server to client and back using RestDataSource?
Do you have any better ideas for how I can do this?
P.S. I'm using the LGPL SmartGWT version and my own servlets for the server side.

Here is what I would do; I imagine that you have either an auto-incrementing ID or a timestamp on each of your rows.
Before you start querying for data, you call a web service to query the current max id (e.g. the last row inserted is 12345).
Then you add a Criteria object to your datasource that says "rowId <= 12345". At this point, you can use the grid freely - paging, sorting, etc. will work automatically, and new rows will automatically be excluded.
(Or, if you use a custom datasource rather than the default RestDataSource, you basically do the same thing without using Criteria explicitly.)
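For the LGPL version with your own servlets, a minimal sketch of that idea could look like the following. The DataSource id "logMessages", the criterion field "maxId" and the separate "what is the current max id" call are assumptions for illustration; your servlet would translate the criterion into something like WHERE id <= ? (plus MySQL's LIMIT offset, count for the requested row range).

import com.smartgwt.client.data.Criteria;
import com.smartgwt.client.data.DataSource;
import com.smartgwt.client.widgets.grid.ListGrid;

public class LogGridSketch {

    // Sketch only: pin the grid to the rows that existed when it was opened.
    // maxId is the value returned by a separate "current max id" call to your servlet.
    public void showLogGrid(int maxId) {
        DataSource logDS = DataSource.get("logMessages");    // assumed RestDataSource id

        Criteria criteria = new Criteria();
        criteria.addCriteria("maxId", maxId);                // servlet maps this to "WHERE id <= ?"

        ListGrid grid = new ListGrid();
        grid.setDataSource(logDS);
        grid.setAutoFetchData(false);
        grid.fetchData(criteria);                            // paging requests carry the same criteria
    }
}

Every subsequent scroll/page fetch sent by the grid includes the same criteria, so newly inserted rows stay invisible until you refresh with a fresh max id.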

SmartGWT Pro and above do this automatically. Even if you don't want to use Pro, you can download the evaluation (smartclient.com/builds) and watch the server-side console, where the SQL queries are logged.

Related

Change summary after executing SQL query

I am trying to log a “change summary” from each INSERT/UPDATE MySQL/SQL Server query that executes in a Java program. For example, let’s say I have the following query:
Connection con = ...
PreparedStatement ps = con.prepareStatement("INSERT INTO cars (color, brand) VALUES (?, ?)");
ps.setString(1, "red");
ps.setString(2, "toyota");
ps.executeUpdate();
I want to build a "change set" from this query so I know that one row was inserted into the cars table with the values color=red and brand=toyota.
Ideally, I would like MySQL/SQL Server to tell me this information, as that would be the most accurate. I want to avoid using a Java SQL parser because I may have queries with "IF EXISTS BEGIN ELSE END", in which case I would want to know what the final insert/update actually was.
I only want to track INSERT/UPDATE queries. Is this possible?
What ORM do you use? If you don't use one, now could be the time to start - you give the impression that you have all these prepared statements scattered throughout the code, which is something that needs improving anyway.
Using something like Hibernate means you can just activate its logging and keep the query/parameter data. It might also make you focus your data layer a bit more (if it's a bit haphazardly structured right now).
If you're not willing to switch to an ORM, consider creating your own class, perhaps called LoggingPreparedStatement, that is identical to a normal PreparedStatement (a subclass or wrapper of PreparedStatement that uses all the same method names etc., so it's a drop-in replacement) and logs whatever you want. Use find/replace across the code base to switch to using it.
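A minimal sketch of that wrapper idea, using a JDK dynamic proxy so you don't have to hand-implement every PreparedStatement method (class name and log format are illustrative, not an existing library):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Map;
import java.util.TreeMap;

public final class LoggingStatements {

    // Returns a PreparedStatement proxy that records bound parameters and logs them on execute.
    public static PreparedStatement wrap(Connection con, String sql) throws Exception {
        PreparedStatement real = con.prepareStatement(sql);
        Map<Integer, Object> params = new TreeMap<>();       // parameter index -> bound value

        InvocationHandler handler = (proxy, method, args) -> {
            String name = method.getName();
            // Remember bound parameters (setString, setInt, setDate, ...).
            if (name.startsWith("set") && args != null && args.length >= 2
                    && args[0] instanceof Integer) {
                params.put((Integer) args[0], args[1]);
            }
            // Log the "change set" just before the statement runs.
            if (name.equals("executeUpdate") || name.equals("execute") || name.equals("executeBatch")) {
                System.out.println("SQL: " + sql + " params=" + params);
            }
            return method.invoke(real, args);
        };

        return (PreparedStatement) Proxy.newProxyInstance(
                LoggingStatements.class.getClassLoader(),
                new Class<?>[] { PreparedStatement.class },
                handler);
    }
}

You would then replace con.prepareStatement(sql) with LoggingStatements.wrap(con, sql) via find/replace, as suggested above.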
As an alternative to doing it on the client side, you can get the database to do it. SQL Server has change tracking; I don't know what MySQL offers, but it will be something proprietary.
For something consistent, most databases have triggers with some mechanism for identifying old and new data, and you can stash this in a history table (or tables) to see what was changed and when. Triggers that keep history have a regularity to their code, which means they can be generated programmatically from a list of the table's columns and datatypes, so you can query the db for the column names (most databases have virtual tables that describe the real tables) and generate your triggers in code, re-applying them whenever the schema changes. The advantage of using triggers is that they very easily identify the data that was changed. The disadvantage is that this is all they can see, so if you want your trigger to know more, you have to add that info to the table or the session so the trigger can access it - things like who ran the query and what the query was.
If you're not willing to add useless columns to a table (and indeed, why should you), you can rename all your tables and provide a set of views that select from the new names and are named like the old ones. These views can expose extra columns that your client side can update, and the views themselves can have INSTEAD OF triggers that update the real tables. This doesn't help for deletes, though, because deleting data doesn't need any data from the client, so the whole thing gets messy. If you were going that wholesale on your DB, you'd just switch to using stored procedures for your data modifications and embark on a massive job of changing your client-side calls.
An alternative that is also well leveraged for SQL Server is the CONTEXT_INFO variable, a 128-byte block of binary data that lives for the life of your connection/session, or its newer upgrade SESSION_CONTEXT, a 256 KB set of key/value pairs. If you're building something on the client side that logs the user, query and parameter data, and you're also building a trigger that logs the data change, you can use these variables, set programmatically at the start of each data-modification statement, to give your trigger something more substantial than "what is the current time" for matching a triggered data set to a logged query. For example, generate a GUID in the client and pass it to the db in some globally readable way so the database trigger can see it and log it in the history table, tying the client-side log of the statement and parameters to the server-side set of logged row changes.
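As a rough sketch of that SESSION_CONTEXT idea (SQL Server 2016+ only; the key name "change_id" and the audit trigger that reads SESSION_CONTEXT(N'change_id') are assumptions), the client side could tag each modification like this:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.util.UUID;

final class ChangeCorrelation {

    // Call this on the same connection immediately before the INSERT/UPDATE; log the returned id
    // together with the SQL text and parameters so it can be matched to the trigger's history row.
    static String tagStatement(Connection con) throws Exception {
        String changeId = UUID.randomUUID().toString();
        try (CallableStatement ctx = con.prepareCall("{call sp_set_session_context(?, ?)}")) {
            ctx.setString(1, "change_id");
            ctx.setString(2, changeId);
            ctx.execute();
        }
        return changeId;
    }
}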

Configuring database change notification to get only newly inserted or updated data in Java

I am building an application that does some processing after looking up a database (Oracle).
Currently, I have configured the application with Spring Integration, and it polls data periodically regardless of whether any data has been updated or inserted.
The problem here is that I cannot add or use any column to distinguish between old and new records. Also, even when there is no insert or update in the table, the poller still polls data from the database and feeds it into the message channel.
For that, I want to switch to database change notification, and I need to register a query, something like
SELECT * FROM EMPLOYEE WHERE STATUS='ACTIVE'
Now, this active status is true for both old and new entries, and I want to eliminate the old entries from my list, so that only after a new insert or an update to an existing row do I get the data that was newly added or recently updated.
Well, it really is unfortunate that you can't modify the data model in the database. I'd really suggest trying to insist on changing the table for your convenience. For example, it might be just one more column, LAST_MODIFIED, so you could filter out the old records and only poll those whose date is very fresh.
There is also the possibility of using a trigger in Oracle, so you can perform some action on INSERT/UPDATE and write to some other table for your purposes.
Otherwise you have no choice but to use one more persistence service to track the loaded records, for example a MetadataStore based on Redis or MongoDB: https://docs.spring.io/spring-integration/docs/4.3.12.RELEASE/reference/html/system-management-chapter.html#metadata-store
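If you go the MetadataStore route, a minimal sketch might look like the following (the Redis backing, the key name "employee.lastSeenId" and the filtering step are assumptions for illustration):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.integration.metadata.ConcurrentMetadataStore;
import org.springframework.integration.redis.metadata.RedisMetadataStore;

@Configuration
class PollerMetadataConfig {

    // Shared store that survives restarts, so the poller can remember what it already processed.
    @Bean
    ConcurrentMetadataStore metadataStore(RedisConnectionFactory connectionFactory) {
        return new RedisMetadataStore(connectionFactory);
    }
}

// In the service that handles each poll result (outline in comments):
//   String last = metadataStore.get("employee.lastSeenId");             // null on the first run
//   ...keep only rows whose id is greater than last...
//   metadataStore.put("employee.lastSeenId", String.valueOf(maxIdSeen));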

Dataweave stopped picking up Java linkedList data from query in input payload side

I created a query "select * from auto_policy;" in the Database component. It showed the data from the Java LinkedList in the DataWeave component (Mule), and I mapped it to a huge CDM XSD on the output side. I had an issue with the database missing things and having extra fields I didn't need, so I modified the table in the database. I spent hours trying to get the metadata to show the new columns. Finally I scrapped everything and tried a brand new project.
Now, I cannot get the same query ("select *", or with the columns written out, "from auto_policy;", including the semicolon) to show anything on the left (input) side of the DataWeave component. Stumped here in NC.
Open your Database connector properties, and make sure:
It is connected successfully to the database
The metadata (columns name) is listed in the Output tab as Payload
If they are not listed there (or not updated, e.g. only 5 columns when there should be 7), click the Refresh Metadata link at the bottom.
Once you get the expected metadata, it will be listed on the left (input) side of the DataWeave component.
It might be because the database is not connected, so first check the connectivity for the database. When you place DataWeave after the Database connector, it will fetch all the required fields directly; when you provide the output-side metadata, make sure it is added correctly, then map the fields as per your requirement.

JSP delete row and save

I am working with Spring MVC 2, JSP, Dojo and JavaScript.
I am populating a JSP page table/grid with a list of objects coming in the form command object. Let's say 3 records are displayed in the grid. I delete the third record with JavaScript getElementById and delete-row/removeChild functions. That record is removed from the presentation, i.e. the grid. Now when I save, it sends 3 records to the server side instead of 2. It should send 2 records because the third record was deleted. I am using Dojo to drag and drop grid rows.
If you're using a grid component that maintains a datastore - e.g. the DojoX DataGrid, you might be removing the markup for the row, but not telling the datastore to purge the row data. When the save occurs, the datastore sends all three rows.
If you are using the DataGrid, you should delete the row from the DataStore, which will be reflected automatically in the UI.
When I have this kind of issue, I always check the cache related headers in my response.
Could it be that the http request supposed to fetch saved data from the server in order to refresh the view doesn't hit the server, but instead hit the browser cache?
I could not resolve the issue, but another approach fulfils my need. Spring form tags were used to bind this form with the object class. Converting the deleted row's id to a negative value and hiding the row on the client side does the trick. When the form is submitted, the negative id is converted back to a positive value and the record is deleted from the DB.
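A small sketch of the server side of that trick (RowItem, RowDao and the getters are hypothetical names, not from the question):

import java.util.List;

public class GridSaveHandler {

    private final RowDao rowDao;                   // hypothetical DAO

    public GridSaveHandler(RowDao rowDao) {
        this.rowDao = rowDao;
    }

    // Rows whose id was negated (and hidden) on the client are deleted; the rest are saved as usual.
    public void save(List<RowItem> rows) {
        for (RowItem row : rows) {
            if (row.getId() != null && row.getId() < 0) {
                rowDao.deleteById(-row.getId());   // flip the id back to positive and delete from the DB
            } else {
                rowDao.save(row);
            }
        }
    }
}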

Client Side sorting + Hibernate Paging?

I use GWT for the UI and Hibernate/Spring for the business layer. The following GWT widget is used to display the records (http://collectionofdemos.appspot.com/demo/com.google.gwt.gen2.demo.scrolltable.PagingScrollTableDemo/PagingScrollTableDemo.html). I assume the sorting is done on the client side.
I do not retrieve the entire result set, since it is huge.
I use
principals = getHibernateTemplate().findByCriteria(criteria,
fromIndex, numOfRecords);
to retrieve data. There is no sorting criterion in the Hibernate layer.
This approach does not give the correct behaviour, since it only sorts the current data set on the client.
What is the best solution for this problem?
NOTE: I can get the primary sort column and the other sort columns from the UI framework.
Maybe I can sort the result using the primary sort column in the Hibernate layer?
You need to sort on the server.
Then you can either:
send the complete result set to the client and handle pagination on the client side. The problem is that the result set may be too big to retrieve from the db and send to the client.
handle the pagination on the server side. The client and the server request only one page at a time from the db. The problem then is that you will order the same data again and again to extract page 1, page 2, etc., each time you ask the db for a specific page. This can be a problem with a large database.
have a trade-off between both (for a large database; see the sketch after the notes below):
Set a limit, say 300 items
The server asks the db for the first 301 items according to the order by
The server keeps the result set (up to 301 items) in a cache
The client requests the server page by page
The server handles the pagination using the cache
If there are 301 items, the client displays "The hit list contains more than 300 items. It has been truncated".
Note 1: Usually, the client doesn't care if he can't go to the last page. You can improve the solution by counting the total number of rows first (no need for an order by then), so that you can display a more helpful message to the user, e.g. "The result contained 2023 elements; only the first 300 can be viewed".
Note 2: if you request the data page by page from the database without using any order criterion, most dbs (at least Oracle) don't guarantee any ordering, so you may get the same item on page 1 and page 2 if you make two requests to the database. The same problem happens if multiple items have the same value that is used to order by (e.g. the same date): the db doesn't guarantee any ordering between elements with the same value. If this is the case, I would suggest using the PK as the last order criterion (e.g. ORDER BY date, PK) so that the paging is done in a consistent way.
Note 3: I speak about client and server, but you can adapt the idea to your particular situation.
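Here is a rough sketch of that cache-based trade-off (class and method names are made up for illustration): the server fetches at most 301 ordered rows from the db once, caches them, and serves every page from the cache.

import java.util.Collections;
import java.util.List;

public class CachedHitList<T> {

    public static final int LIMIT = 300;

    private final List<T> rows;                    // up to LIMIT + 1 rows, already ordered by the db

    public CachedHitList(List<T> rowsFromDb) {     // pass the result of a "first LIMIT + 1 items" query
        this.rows = rowsFromDb;
    }

    public boolean isTruncated() {
        return rows.size() > LIMIT;                // the 301st row only signals "there is more"
    }

    public List<T> page(int pageIndex, int pageSize) {
        int from = pageIndex * pageSize;
        int to = Math.min(from + pageSize, Math.min(rows.size(), LIMIT));
        return from >= to ? Collections.<T>emptyList() : rows.subList(from, to);
    }
}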
Always have a sort column. By default it could be "name" or "id".
Use server-side paging, i.e. pass the current page index and fetch the appropriate data subset.
In the fetch criteria / query use the sort column. If none is selected by the client, use the default.
Thus you will have your desired behaviour without trade-offs.
It will be confusing to the user if you sort a partial result in the GUI and page on the server.
Since the data set is huge, sending the entire data set to the user and do both paging and sorting there is a no-go.
That only leaves both sorting and paging on the server. You can use Criteria.addOrder() to do sorting in Hibernate. See this tutorial.
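A minimal sketch of that combination, assuming a Principal entity and Spring's HibernateTemplate as in the question (the entity, field names and the default sort column are assumptions):

import java.util.List;
import org.hibernate.criterion.DetachedCriteria;
import org.hibernate.criterion.Order;
import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

public class PrincipalDao extends HibernateDaoSupport {

    // Sort on the server using the column reported by the UI (PK as tie-breaker for stable paging),
    // then page with firstResult/maxResults exactly as in the question.
    public List<?> findPage(String sortColumn, int fromIndex, int numOfRecords) {
        DetachedCriteria criteria = DetachedCriteria.forClass(Principal.class)
                .addOrder(Order.asc(sortColumn != null ? sortColumn : "name"))
                .addOrder(Order.asc("id"));
        return getHibernateTemplate().findByCriteria(criteria, fromIndex, numOfRecords);
    }
}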
