How to prevent dropping public schema in Heroku PostgreSQL? - java

Luckily I was still in pretty early development mode.
I wanted to update my schema with some FlywayDB magic by just dropping and re-creating the public schema on my local development database, but I wasn't paying attention and had the Heroku database open in pgAdmin as well. I dropped the Heroku one (the one that will become the "production" database once the application has some users), and it freaked me out, so here I am.
I would like some kind of safety from myself to keep me from dropping this schema without going through the Heroku Toolbelt, but I'm not sure that's possible. All the drop-schema prevention techniques I've found by Googling require admin rights, which I obviously don't have in a shared environment like Heroku.
Any help is greatly appreciated!

I'm not sure about preventing the drop, but if you take regular backups with Heroku PGBackups you can easily restore a backup if something bad happens.
EDIT
Here's some documentation on the default role and its limitations.
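As a concrete illustration of the backup suggestion above, here is a minimal sketch using the newer Heroku CLI syntax (the app name your-app is a placeholder; older Toolbelt versions used heroku pgbackups:... commands instead):

    # capture a manual backup before any risky schema change
    heroku pg:backups:capture --app your-app
    # list available backups (b001, b002, ...)
    heroku pg:backups --app your-app
    # restore a specific backup into the database attached as DATABASE_URL
    heroku pg:backups:restore b001 DATABASE_URL --app your-app
    # schedule automatic daily backups
    heroku pg:backups:schedule DATABASE_URL --at '02:00 UTC' --app your-app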

Related

App with derby database - client access needs changing to embedded?

I'm still rather new to Java and I think I've started a project with a problem.
I took on a job for a friend: an application that manages employees, shifts, and sites, all of which need to be loaded at startup.
I went looking for solutions and decided to use a Derby database. I've now programmed the application and it works fine with the database: it loads all the parameters and creates objects for handling them.
Now I need to deploy the project to my friend's computer so he can use it, and I think I have the database set up wrong. I believe I need it to be embedded, so that it travels with the application.
So my question is: what are my choices?
I've read that I can change the database to an 'embedded' one by making the database a class? I have no idea how to do this, and maybe because I'm new to Java, I'm finding all the write-ups on this subject difficult to understand.
Alternatively, I thought maybe I could install Derby separately on his machine and connect to that.
Or maybe I should drop the Derby idea and switch to another database entirely.
I'm a bit confused about my choices here. Basically, I've built an application around an installation of Derby DB, using this line to connect to it:
jdbc:derby://localhost:1527/SG_database
If someone can give me some 'Plain English' options here I would very much appreciate it.
To reconfigure your application to use Derby as an embedded database, all you have to do is change that JDBC connection URL to jdbc:derby:SG_database, and change your CLASSPATH so that your program references derby.jar rather than derbyclient.jar. You should probably add ;create=true to the end of the URL so that, the first time your friend runs the application, the database is created on their machine.
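As a minimal sketch of what the embedded configuration looks like in code (the class name is just illustrative; SG_database is resolved relative to the working directory unless you use an absolute path):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class EmbeddedDerbyExample {
        public static void main(String[] args) throws SQLException {
            // ";create=true" creates the database on the first run if it
            // doesn't exist yet; on later runs it is simply opened.
            // Requires derby.jar (not derbyclient.jar) on the classpath;
            // recent Derby versions register the driver automatically,
            // so no Class.forName call is needed.
            String url = "jdbc:derby:SG_database;create=true";
            try (Connection conn = DriverManager.getConnection(url)) {
                System.out.println("Connected: " + conn.getMetaData().getURL());
            }
        }
    }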
But yes, you have other choices, and without knowing a fair amount about your application it's hard to give you very detailed guidance.
When your friend is using the application, do you want you and your friend to be sharing the same set of data? Or is your application designed so that your data and your friend's data have nothing in common?
If you want to be sharing the data, then yes it will be important to have a single instance of the database, and both of you have to share it, in which case a client-server configuration can work quite well.
If you want to be two completely separate applications, with nothing shared, and each of you has your own copy of the data, then an embedded configuration can work quite well.
Perhaps you could simply try the embedded configuration, see how it behaves with your application, and then return here if you have a more specific question to ask?

Without using hibernate.hbm2ddl.auto, how do I export all the initial schema into Flyway?

I am at the almost-ready stage of my JEE development. Given the many recommendations NOT to use Hibernate's hbm2ddl.auto in production, I decided to remove it.
So now I've found Flyway, which seems great for future db changes and migrations, but I am stuck at the first step: I have many entities, and some entities inherit from base entities, which makes the CREATE statements very complex.
What is the best practice for creating the first migration file?
Thanks!
If you've taken an "entities first" approach during development, you'll need to generate the initial schema the same way for the first live deployment: this will produce the first creation script used by Flyway, and there may also need to be a second associated script for populating reference data.
In a nutshell, the reasons for no longer being able to use hbm2ddl.auto after the first deployment are that create will destroy existing data and update isn't reliable enough to cover all types of schema changes (as it sounds like you may already know from this SO question).
Flyway is a very useful tool but it does require a level of discipline that may not have existed during development. When going forward from the initial release, database update scripts need to be produced for Flyway that are equivalent to the changes made to the entities since the last release. There are tools (e.g. various commercial products from Redgate) that may help here: These attempt to "diff" two schemas and generate schema and/or data update scripts for getting from database A to database B. But in my experience, none of them are perfect and they don't quite reach the holy grail of enabling a completely automated approach.
Arguably, the best way is an "as you go" manual approach to ensure that non-destructive update scripts are committed to source control whenever an entity change is made that affects the schema or reference data - but as already mentioned, this will require some discipline and/or documented processes for all team members to follow.
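One concrete way to produce that first creation script from the entities, without letting hbm2ddl touch a live database, is JPA 2.1's script-generation feature. A minimal sketch; the persistence unit name, dialect, and output path are assumptions to adapt to your project:

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.Persistence;

    public class SchemaExporter {
        public static void main(String[] args) {
            Map<String, Object> props = new HashMap<>();
            // write a CREATE script to a file instead of executing DDL
            props.put("javax.persistence.schema-generation.scripts.action", "create");
            props.put("javax.persistence.schema-generation.scripts.create-target",
                    "src/main/resources/db/migration/V1__initial_schema.sql");
            // with Hibernate, setting the dialect lets this run without
            // a live database connection (PostgreSQL assumed here)
            props.put("hibernate.dialect", "org.hibernate.dialect.PostgreSQLDialect");
            // "my-unit" must match a persistence unit in persistence.xml
            Persistence.generateSchema("my-unit", props);
        }
    }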
For the first migration file, you just need the current ddl of your database. There are many tools which can get this for you (such as the "copy ddl" option in the IntelliJ IDEA Database tool or a GUI client from your database vendor).
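Once you have that DDL, the first migration is just a matter of putting it where Flyway looks by default and running a migration. A minimal sketch, assuming Flyway 5+ (older versions use new Flyway() plus setters) and a placeholder JDBC URL:

    // Flyway scans classpath:db/migration by default:
    //   src/main/resources/db/migration/V1__initial_schema.sql
    //   src/main/resources/db/migration/V2__reference_data.sql
    import org.flywaydb.core.Flyway;

    public class Migrate {
        public static void main(String[] args) {
            Flyway flyway = Flyway.configure()
                    .dataSource("jdbc:postgresql://localhost/mydb", "user", "pass")
                    .load();
            // applies any pending V*__*.sql scripts, each exactly once,
            // recording them in Flyway's schema history table
            flyway.migrate();
        }
    }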
I am not sure about Flyway, but there is an alternative: you can use Hibernate's Ant tasks to generate or update the schema.
Hope it helps.
If you build your project with Maven, you could use the Hibernate Maven plugin.

Setup java web application to run each SQL script only once

Question
Together with some friends from university, I'm building a web application, and we recently ran into the following problem. The server is synchronized with a remote repository (git). Everyone can run the application locally, and everyone has their own local database on their own machine. There is a database on the web host plugged into the application on the server. When someone wants to change something in the database, he writes an SQL script, pushes it to the repository, runs it locally, runs it on the server, and then makes sure every other developer executes it too. That has become very uncomfortable for us.
Bad idea
The obvious solution would be plugging everyone into the same database, but IMHO this is a bad idea because of:
Cost. We would need to buy another web host for SQL, because the one running currently serves worldwide users; for safety and testing reasons we would need a separate one.
Security. Having a database that is visible to the world, protected by a simple password only, seems dangerous to me. The current database is configured to be visible only locally (locally relative to the server, of course), so it is reachable only by the web server and by developers via ssh if needed.
Performance. Connecting to a remote database instead of a local one would be over a dozen times slower, and for developer use (more complicated queries, testing the site, lots of jUnit tests) that would be an incredibly painful solution.
Good idea
Some time ago I worked at a company where this problem was solved as follows. There was a Maven plugin configured to run each SQL script in a specified directory exactly once during the application build (mvn clean install), i.e. it remembered which scripts had already been executed and skipped them. Suppose someone wants to change something in the database, a new column for example. He writes a script and pushes it to the repository, and then he doesn't have to worry about anything, because the script is executed automatically for him, for the server, and for every other developer during the application build.
How to do it
Unfortunately I can't find that plugin or configuration. To be honest, I cannot find anything related to my problem on the web, which is surprising, because it seems like a common problem to me. So: can I do this with some Maven plugin? Maybe there is a way to do it with the proper Spring configuration? In case I'm forced to do it manually (in Java, at application start), what tools do I need? Any advice or class patterns?
Looking forward to your help. Also, sorry for my English; I'm not a native speaker.
Just a guess, but maybe the company you worked for used Liquibase or Flyway.
In the case of Liquibase, which can be used via Maven as well, information can be found here: http://www.liquibase.org/, and specifically for the Maven integration here: http://www.liquibase.org/documentation/maven/index.html
Spring comes with a Liquibase integration as well; information can be found here: http://www.liquibase.org/documentation/spring.html, or, if you're using Spring Boot: https://docs.spring.io/spring-boot/docs/current/reference/html/howto-database-initialization.html
Another possible solution for database migration is Flyway; your entry point for documentation: http://flywaydb.org/
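To make the Spring integration above concrete, here is a minimal sketch of wiring Liquibase into a Spring application so that every change set runs exactly once per database, which is essentially what the build-time plugin you remember was doing (the changelog path is an assumption; Liquibase records applied change sets in its DATABASECHANGELOG table):

    import javax.sql.DataSource;
    import liquibase.integration.spring.SpringLiquibase;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class LiquibaseConfig {
        @Bean
        public SpringLiquibase liquibase(DataSource dataSource) {
            SpringLiquibase liquibase = new SpringLiquibase();
            liquibase.setDataSource(dataSource);
            // master changelog on the classpath; adjust to your layout
            liquibase.setChangeLog("classpath:db/changelog/db.changelog-master.xml");
            return liquibase;
        }
    }

With this in place, each developer's local database and the server database are migrated automatically at application startup, so nobody has to run scripts by hand.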

How to deploy application updates to a production Spring/Hibernate Application?

Up until this point, I've been using Spring in a development mode of sorts, with hbm2ddl properties that drop all the tables and start again when I deploy the application to GlassFish. It works well as a development config, since I know exactly what my database is going to contain when I run my app.
However, this isn't appropriate for an application with a rolling release cycle, and I'm not exactly sure how to change it so it would be suitable in a production environment. Googling just gives me resources on how to update Spring or Hibernate itself, but nothing on maintaining a server. I get the feeling I'm going to have to start creating XML object-property mappings for Hibernate, but that seems over the top when all I want to do is update a schema with new tables and new columns with default values.
Thanks in advance for any answers; I'm completely stuck on this.
This question is a matter of opinion, so it is very broad.
There is no best way or right way of doing it.
Updating/upgrading/versioning a production database is always a risk-based activity, where the key is to mitigate the risk as much as possible.
Here is an example answer to your question: Best Practice for Updating a Production Database manually.
This is one of those areas where you have to do your research and find the deployment/upgrade method that works best for you. At the end of the day, you are accountable for any user/customer data in your database, so you have to be comfortable with the approach.
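Whatever migration tooling you settle on, one commonly recommended baseline is to switch Hibernate from create-drop to validate in production, so it checks at startup that the mapped entities match the schema but never modifies it:

    hibernate.hbm2ddl.auto=validate

Schema changes themselves (new tables, new columns with defaults) are then applied by versioned migration scripts, for example with Flyway or Liquibase as discussed in the previous question.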

Java Google App Engine inconsistent data loss after restarting dev server

I am using Java GAE. So far, I'm just scaffolding my data objects, and I'm seeing an interesting issue.
The records I'm playing around with get updated properly as long as my dev server stays up. The second my dev server gets restarted, I lose all of my changes.
That wouldn't be so alarming if I lost all of my records every time, but there was a point where my data persisted through a server restart. I'm worried that I could lose production data if I launched without fixing this potential bug.
Any idea on where I should look?
The datastore is persisted between instances, as described here. The Java SDK doesn't have any functionality to clear the datastore for you, so you, or something working on your behalf (e.g. your build process), must be deleting it.
This sounds like a local development environment problem. Check the location of local_db.bin and ensure your build process does not touch the database file. Maybe the restart happens before the data has been persisted? The local development datastore is not as stable as a local relational database; for example, after upgrading App Engine SDK versions, the old local datastore might not work at all.
How are you starting the dev server? Make sure you're not providing "c" or "clear" as a flag, which erases all the persisted data.
How long does the dev server take to persist the data to disk? Do you see log messages when the data is persisted?
