Currently I need to move my database from PostgreSQL to Google Cloud SQL. I use pg_cron to remove stale records, like this:
SELECT cron.schedule('30 3 * * 6', $$DELETE FROM events WHERE event_time < now() - interval '1 week'$$);
I've read the following article: https://cloud.google.com/sql/docs/postgres/extensions
and found nothing related to pg_cron.
I've also read "Does Cloud SQL Postgres have any cron-like tool?", but the approach there looks like overengineering for my task.
Is there a simpler way?
As of November 2021, pg_cron is available:
https://cloud.google.com/sql/docs/release-notes#November_19_2021
The flags to enable and configure pg_cron are documented here:
https://cloud.google.com/sql/docs/postgres/flags
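For illustration, here is a minimal JDBC sketch of turning the extension on and re-creating the job from the question, assuming the cloudsql.enable_pg_cron flag has already been set on the instance; the host and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PgCronSetup {
    public static void main(String[] args) throws Exception {
        // pg_cron jobs live in the database named by cron.database_name,
        // which defaults to "postgres".
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://host:5432/postgres", "username", "password");
        try (Statement st = conn.createStatement()) {
            // One-time setup once the instance flag is enabled.
            st.execute("CREATE EXTENSION IF NOT EXISTS pg_cron");
            // Re-create the weekly cleanup job from the question.
            st.execute("SELECT cron.schedule('30 3 * * 6', "
                    + "$$DELETE FROM events WHERE event_time < now() - interval '1 week'$$)");
        } finally {
            conn.close();
        }
    }
}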
Unfortunately pg_cron is not supported by Cloud SQL. To install this extension you need SUPERUSER privileges. As mentioned in the post you found, Cloud SQL is a fully-managed service. This means that while operations such as setting up, maintaining, managing and administering your databases are handled for you, you don't have the SUPERUSER privilege.
There is an open Feature Request for this improvement, but there is no estimated time of arrival.
If the workaround given in the same post is not suitable for you, you can always create a Compute Engine VM instance and set up PostgreSQL there. This will allow you to fully manage your database.
I found the commitlog (.log) files in the folder and would like to analyze them. For example, I want to know which queries were executed on the machine historically. Is there any code to do that?
Commit log files are specific to a version of Cassandra, and you may need to tinker with CommitLogReader, etc. You can find more information in the documentation on Change Data Capture.
But the main issue for you is that the commit log doesn't contain the queries executed; it contains the data that was modified. What you really need is audit functionality, and here you have several choices:
It's built into the upcoming Cassandra 4.0 - see the documentation on how to use it.
Use the ecAudit plugin open-sourced by Ericsson - it supports Cassandra 2.2, 3.0 & 3.11.
If you use DataStax Enterprise (DSE), it has built-in support for audit logging.
I am trying to understand "changing database without changing code". I am currently working with microservices using Spring Boot, Java, Thymeleaf and Cloud Foundry.
I have a Spring Boot application with a database attached as a service using Cloud Foundry.
My problem: my understanding is that the purpose of microservices is to make it easy to change services without changing code.
Here is where I got stuck:
In Java I have a SQL query, "select * from ORDER where Status = 'ACCEPTED';"
My database would be attached as a service on Cloud Foundry using CUPS:
"jdbc:oracle:thin:username/password//host:port/servicename"
So let's say I want to switch this database to one with a CUSTOMER table (take it as a different database). This will throw an error, because a database with only a CUSTOMER table has no ORDER table for "select * from ORDER where Status = 'ACCEPTED';" to run against.
I've changed the database, but wouldn't I still have to go back into my code and change the SQL query?
My attempt to resolve this issue
Instead of hard-coding the SQL in Java as "select * from ORDER where Status = 'ACCEPTED';",
I created a system environment variable named sqlScript with the value select * from ORDER where Status = 'ACCEPTED'.
Then in Java I read it: String sqlScript = System.getenv("sqlScript");
So now, instead of going back into the Java code to change the SQL, a user can change it through an environment variable.
This is a very dirty way to work around my issue; what would be a better alternative?
I know my understanding here is really wrong. Please guide me to the right path.
I think the phrase 'changing database without changing code' doesn't mean that if you add/remove fields in the DB you don't have to modify your codebase - that just wouldn't make any sense.
What it really means is that you should use good database abstractions, so that if you need to change your database vendor from, let's say, MySQL to Oracle, your Java code stays the same. The only thing that may differ is some configuration.
A good example of this is an ORM like Hibernate. You write your Java code once, no matter which SQL database you are using underneath. To switch databases, the only thing you need to change is the dialect configuration property. (In reality it's not that easy, but it's probably easier than if we were coupled to one specific DB.)
Hibernate gives you a good abstraction over SQL databases. Nowadays there is a new trend: an abstraction over different DB families, like SQL and NoSQL. So in the ideal world your codebase should stay unchanged even if you swap MySQL for MongoDB or even Neo4j. Spring Data is probably the most popular framework that tries to solve this problem. Another framework I found recently is Kundera, but I haven't used it so far.
So, answering your question: you do not need to keep your SQL queries in environment variables. All you need to do is use proper abstractions in your language of choice.
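For example, with Spring Data JPA the hard-coded query from the question can become a derived query method, so no SQL string lives in the code at all. A minimal sketch - the entity and repository names are illustrative, not from the question:

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.springframework.data.jpa.repository.JpaRepository;

// "ORDER" is a reserved word in SQL, so the table is mapped explicitly.
@Entity
@Table(name = "orders")
class Order {
    @Id
    private Long id;
    private String status;
    // getters and setters omitted for brevity
}

interface OrderRepository extends JpaRepository<Order, Long> {
    // Spring Data derives the query from the method name; the SQL it
    // generates uses whatever dialect the datasource is configured with.
    List<Order> findByStatus(String status);
}

Calling orderRepository.findByStatus("ACCEPTED") then replaces the hand-written select, and switching the underlying database only touches configuration.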
In my opinion, it would be better to use something like Flyway or Liquibase, which integrate really well with Spring Boot. You can find more information here.
I prefer Liquibase, since it uses a higher level format to describe your database migrations, allowing you to switch databases quite easily. This way, you can also use different databases per environment, for example:
HSQLDB during local development
MySQL in DEV and TEST
Oracle in Production
It's also possible to export the current schema from an existing database to get an initial version into Flyway or Liquibase; this will give you a good baseline for your scripts.
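To give a flavour of what a migration looks like in code, here is a minimal sketch of Flyway's Java-based migration form (both tools also support file-based formats - plain SQL for Flyway, XML/YAML changelogs for Liquibase); the table and class name are illustrative:

import java.sql.Statement;
import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;

// The class name encodes the version: V1, double underscore, description.
public class V1__Create_orders_table extends BaseJavaMigration {
    @Override
    public void migrate(Context context) throws Exception {
        try (Statement st = context.getConnection().createStatement()) {
            st.execute("CREATE TABLE orders ("
                    + "id BIGINT PRIMARY KEY, "
                    + "status VARCHAR(20) NOT NULL)");
        }
    }
}

Flyway runs each pending migration in version order at application startup, so every environment (HSQLDB, MySQL, Oracle) ends up with the same schema.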
I am trying to log query execution times for my application, which is based on EclipseLink JPA and an Oracle DB.
I've enabled tracing for my application, which uses ojdbc6_g.jar, as detailed here: http://docs.oracle.com/cd/B28359_01/java.111/b31224/diagnose.htm. It works well and does all the logging, but I didn't see the query execution time anywhere.
We are using EclipseLink JPA and tried its performance profiler, but it looks like that is only supported in the latest versions, whereas we are on 1.x and cannot upgrade now since we are close to a release. Ours is a standalone Java application.
Are there any other ways to achieve this?
log4jdbc is a Java JDBC driver that can log SQL and/or JDBC calls (and optionally SQL timing information) for other JDBC drivers. The jdbc.sqltiming logger provided by the project will allow you to record the execution time of the SQL statements executed.
Follow it here:
https://code.google.com/p/log4jdbc/
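Wiring it up is mostly a matter of loading the spy driver and prefixing your normal JDBC URL; a minimal sketch, with the Oracle host, service name and credentials as placeholders:

import java.sql.Connection;
import java.sql.DriverManager;

public class Log4jdbcTimingExample {
    public static void main(String[] args) throws Exception {
        // The spy driver delegates to the real Oracle driver underneath.
        Class.forName("net.sf.log4jdbc.DriverSpy");

        // Prefix the usual URL with "jdbc:log4jdbc:".
        Connection conn = DriverManager.getConnection(
                "jdbc:log4jdbc:oracle:thin:@//host:1521/servicename",
                "username", "password");

        // With the jdbc.sqltiming logger set to INFO in your logging
        // configuration, each statement is logged along with its
        // execution time in milliseconds.
        conn.createStatement().execute("SELECT 1 FROM DUAL");
        conn.close();
    }
}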
In my case, a while ago, I measured query execution times by hand:
execTime = timeAfterJPA - timeBeforeJPA
Or you can use @Juned's suggestion.
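As a minimal sketch of that manual-timing approach, assuming an EntityManager is already available (the JPQL query is illustrative):

import java.util.List;
import javax.persistence.EntityManager;

public class QueryTimer {
    // Times a single JPA query by hand and logs the result.
    static List<?> timedQuery(EntityManager em) {
        long timeBeforeJPA = System.nanoTime();
        List<?> results = em.createQuery("SELECT e FROM Event e").getResultList();
        long execTimeMillis = (System.nanoTime() - timeBeforeJPA) / 1000000L;
        System.out.println("Query took " + execTimeMillis + " ms");
        return results;
    }
}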
Thanks for the responses above. I found JProfiler to be very useful for displaying query execution times and other details. However, it's not free, and I am looking for something that will work the way logging does in the long run.
I need to copy all the data from an MS SQL Server to a MySQL server. I am planning to use the Quartz scheduler to do this: the job will run every night and move the data from MS SQL Server to MySQL. Can anyone tell me whether this is fine, or is there a better way to do it?
Update:
I need to transfer only one table with 40 columns (from MS SQL Server to MySQL).
I wouldn't involve Java unless I absolutely had to: Java would add no value here, only extra complexity.
This is a "DBA" type task that belongs in a script scheduled via crontab.
If I were implementing it, I would export the source database as an SQL script and then import it by running that script on the target.
SQL Server Management Studio's "Import Data" task (right-click on the DB name, then tasks) will do most of this for you. Run it from the database you want to copy the data into.
If the tables don't exist it will create them for you, but you'll probably have to recreate any indexes and such. If the tables do exist, it will append the new data by default but you can adjust that (edit mappings) so it will delete all existing data.
I use this all the time and it works fairly well.
I would recommend using http://www.talend.com for tasks like this.
UPDATE:
Talend Open Studio for Data Integration is open source; some other features are proprietary - details here.
As PbxMan said, I would use an ETL, but I recommend Pentaho (http://wiki.pentaho.com/display/EAI/Spoon+User+Guide), which I think is far easier for such simple jobs.
I agree with @bohemian - running a job to transfer a single table every night sounds like a great candidate for a cron job (or a "scheduled task" on Windows). Using a framework like Quartz for this seems like overkill.
There are many solutions for moving data from SQL Server to MySQL; others have listed great options such as Talend. If it would benefit you to transfer only certain columns (for example, to avoid leaking PII), my product SQLpipe might help, as it transfers the result of a query rather than an entire table.
Here is a blog post showing how to use the product to transfer data from SQL Server to MySQL.
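For reference, the underlying idea - transferring the result of a query rather than a whole table - can be sketched in plain JDBC, independent of any product. Everything below (hosts, credentials, table and column names) is a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryTransfer {
    public static void main(String[] args) throws Exception {
        try (Connection src = DriverManager.getConnection(
                     "jdbc:sqlserver://mssql-host;databaseName=sourcedb", "user", "pass");
             Connection dst = DriverManager.getConnection(
                     "jdbc:mysql://mysql-host:3306/targetdb", "user", "pass")) {
            dst.setAutoCommit(false);
            try (Statement sel = src.createStatement();
                 ResultSet rs = sel.executeQuery(
                         "SELECT id, name FROM source_table");
                 PreparedStatement ins = dst.prepareStatement(
                         "INSERT INTO target_table (id, name) VALUES (?, ?)")) {
                // Stream rows from SQL Server and batch-insert into MySQL.
                while (rs.next()) {
                    ins.setLong(1, rs.getLong("id"));
                    ins.setString(2, rs.getString("name"));
                    ins.addBatch();
                }
                ins.executeBatch();
                dst.commit();
            }
        }
    }
}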
I have a JDBC application that uses Apache Derby. How can I migrate my entire database system to use MySQL?
I have 3 Java programs that access the database
I have 3 tables and 2 views
I am using NetBeans. I have never used MySQL before and do not know where to begin. Is there nice integration between NetBeans, Java and MySQL, and how do I set it up?
All help is greatly appreciated!
Looks like this plugin would probably help you:
http://netbeans.org/kb/docs/ide/mysql.html
I found this tutorial on the Spring site, but I think it is only a partial solution.
Tutorial
In it they rely on Hibernate to drop and create the tables, and I really don't like that. You have to go through special coding to add static data. For example, if your app is tracking devices, you probably want a device_types table. At least some of those device types will already be in the DB, as well as devices, users, etc.
What I intend to do is use Derby until I am somewhat stable. From it, I will take the database schema and create it in MySQL. It seems that the dblook utility can be used for that. DB Look
As added security, I intend to run my web app with a DB user that does not have the ability to add or drop tables. It is also possible to remove the permission to delete rows if you adopt the convention of marking rows "inactive": instead of deleting a no-longer-used device type, you set its "active" flag to 'F'. Your device type query would then look like:
select * from device_type where active = 'T'
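A minimal JDBC sketch of that deactivate-instead-of-delete pattern (the id column is an assumption to make the example concrete):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DeviceTypeDao {
    // Marks a device type inactive instead of deleting the row, so the
    // application's DB user never needs DELETE permission.
    static void deactivate(Connection conn, long deviceTypeId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE device_type SET active = 'F' WHERE id = ?")) {
            ps.setLong(1, deviceTypeId);
            ps.executeUpdate();
        }
    }
}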