Feeling a bit stupid, but I have a simple architecture where the repositories are the only components that touch the jOOQ Record classes, while the services work on POJOs.
So the basic flow is:
1. repository fetches into a POJO
2. service modifies the POJO
3. repository receives the POJO to update the DB
4. repository matches the updated POJO to a record
5. repository stores (insert or update) the record
6. repository maps the updated record (which may have received generated values from the insert) back to a POJO
7. service receives the updated POJO
i.e. something like
fun save(set: MySet): MySet {
    set.description = set.description ?: ""
    val record = ctx.newRecord(MY_SET, set).apply {
        store()
    }
    // "When store() performs an INSERT statement, jOOQ attempts to load any generated keys from the database back into the record."
    // cf. https://www.jooq.org/doc/latest/manual/sql-execution/crud-with-updatablerecords/simple-crud/
    return record.into(MySet::class.java)
}
This fails; to quote the documentation for newRecord:
Create a new pre-filled Record that can be inserted into the corresponding table.
This performs roughly the inverse operation of Record.into(Class)
The resulting record will have its internal "changed" flags set to true for all values. This means that UpdatableRecord.store() will perform an INSERT statement. If you wish to store the record using an UPDATE statement, use executeUpdate(UpdatableRecord) instead.
I CAN, of course, check whether I have an id, and then either fetch the record from the database or create a new one:
fun save(set: MySet): MySet {
    set.description = set.description ?: ""
    val record = when (val setId = set.id) {
        null -> ctx.newRecord(MY_SET, set)
        else -> ctx.selectFrom(MY_SET).where(MY_SET.ID.eq(setId)).fetchSingle()
    }
    record.from(set) // copy the POJO's values into the record (Record.from); marks fields as changed so store() issues an UPDATE for the fetched record
    record.store()
    // "When store() performs an INSERT statement, jOOQ attempts to load any generated keys from the database back into the record."
    // cf. https://www.jooq.org/doc/latest/manual/sql-execution/crud-with-updatablerecords/simple-crud/
    return record.into(MySet::class.java)
}
But that is quite a lot of boilerplate code.
I DO have access to the MySetDao, but that one only has insert and update; there's no store or upsert, as far as I can see.
Is there a way to turn a POJO into an UpdatableRecord directly, or is this fetch-and-manual-update the way to go?
(Worth noting: the MySet POJO used here was generated by jOOQ.)
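One possible shortcut, sketched below (untested, and in Java syntax to match the rest of this page; MySetRecord stands for the generated record type): newRecord() already returns an UpdatableRecord, so instead of fetching first you could branch on the id and call insert() or update() explicitly rather than store():

MySetRecord record = ctx.newRecord(MY_SET, set); // all "changed" flags are true at this point
if (set.getId() == null) {
    record.insert();  // INSERT; generated keys are loaded back into the record
} else {
    record.update();  // UPDATE ... WHERE id = ?; no prior SELECT needed
}
return record.into(MySet.class);

This trades the extra round trip for the assumption that a non-null id always refers to an existing row.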
Related
I have a Spring application that runs a cron job: every few minutes it fetches new data from an external API. The data should be stored in a database (MySQL) in place of the old data (the old data should be overwritten by the new data). The data needs to be overwritten rather than updated. The application itself provides a REST API, so clients can read the data from the database. There should never be a situation where a client sees empty or partial data because an update is in progress.
So far I've tried deleting all the old data and inserting the new data, via the Spring Data deleteAll and saveAll methods, but there is still a window in which a client can read only part of the data.
@Override
@Transactional
public List<Country> overrideAll(@NonNull Iterable<Country> countries) {
    removeAllAndFlush();
    List<CountryEntity> countriesToCreate = stream(countries.spliterator(), false)
            .map(CountryEntity::from)
            .collect(toList());
    List<CountryEntity> createdCountries = repository.saveAll(countriesToCreate);
    return createdCountries.stream()
            .map(CountryEntity::toCountry)
            .collect(toList());
}

private void removeAllAndFlush() {
    repository.deleteAll();
    repository.flush();
}
I also thought about having a temporary table that receives the new data; when the data is complete, the main table would be replaced by the temporary one. Is that a good idea? Any other ideas?
It's a good idea. You can minimize the downtime by working on another table until it's ready, and then switch tables quickly by renaming. This also improves perceived performance for users, because no records need to be locked the way they would be with UPDATE/DELETE.
In MySQL, you can use RENAME TABLE if you don't have triggers on the table. It allows renaming multiple tables at once, and it works atomically (i.e. like a transaction: if any error happens, no change is made). You can use the following, for example:
RENAME TABLE countries TO countries_old, countries_new TO countries;
DROP TABLE countries_old;
Refer here for more details: https://dev.mysql.com/doc/refman/5.7/en/rename-table.html
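A hedged sketch of how that swap might be wired into the Spring service (assuming a JdbcTemplate bean and a countries_new table that the cron job has just populated; the names are illustrative):

// RENAME TABLE is atomic, so REST clients see either the complete old data
// or the complete new data, never a partially loaded table.
jdbcTemplate.execute(
        "RENAME TABLE countries TO countries_old, countries_new TO countries");
jdbcTemplate.execute("DROP TABLE countries_old");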
Currently I am mapping from a list of POJOs to Records, and I want to be able to insert multiple rows at once. How can I do that in jOOQ in one transaction?
List<Record> recordList = receiverList.stream()
        .map(r -> dslContext.newRecord(Table, r))
        .collect(Collectors.toList());
I have tried putting the list into values(...), but I get the exception "The number of values must match the number of fields":
dslContext.insertInto(Table).values(recordList);
Your error occurs because .values(...) expects field values, not Records.
Maybe you can do a batch execution:
dslContext.batchInsert(recordList);
As Lukas mentioned, this is executed as a single JDBC batch statement. Note that a batch is only atomic if it runs inside a transaction.
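A minimal sketch of that, assuming recordList is declared as a List<TableRecord<?>> (batchInsert does not accept plain Record references) and org.jooq.impl.DSL is imported:

// Run the whole batch inside one jOOQ transaction so it commits or rolls back as a unit.
dslContext.transaction(cfg -> DSL.using(cfg).batchInsert(recordList).execute());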
Instead of batchInsert you can also do:
var insertStepN = dslContext.insertInto(Table)
        .set(dslContext.newRecord(Table, recordList.get(0)));
for (var record : recordList.subList(1, recordList.size())) {
    insertStepN = insertStepN.newRecord().set(dslContext.newRecord(Table, record));
}
insertStepN.returning().fetch().into(YourClass.class);
This way you can get the inserted values back using returning(), which you won't get with batchInsert().
I have written code to fetch data from Google Datastore in my Google Cloud Dataflow program. I am able to fetch all fields of the entity except the Id field, which is auto-generated. I have tried entity.getKey(), but I am getting null.
Below is my code snippet:
Datastore datastore = DataflowDatastoreService.getDatastoreObject(null, null, null);
Query.Builder queryBuilder = Query.newBuilder();
Filter filter1 = Filter.newBuilder()
        .setPropertyFilter(PropertyFilter.newBuilder()
                .setProperty(PropertyReference.newBuilder().setName("cId"))
                .setOp(PropertyFilter.Operator.EQUAL)
                .setValue(Value.newBuilder().setIntegerValue(1059438885900008L).build())
                .build())
        .build();
Filter filter2 = Filter.newBuilder()
        .setPropertyFilter(PropertyFilter.newBuilder()
                .setProperty(PropertyReference.newBuilder().setName("active"))
                .setOp(PropertyFilter.Operator.EQUAL)
                .setValue(Value.newBuilder().setBooleanValue(Boolean.TRUE).build())
                .build())
        .build();
Filter composeFilter = Filter.newBuilder()
        .setCompositeFilter(CompositeFilter.newBuilder()
                .addFilters(filter1)
                .setOp(Operator.AND)
                .addFilters(filter2)
                .build())
        .build();
queryBuilder.addKind(KindExpression.newBuilder().setName("MyMaster").build());
queryBuilder.setFilter(composeFilter);
RunQueryRequest request = DataflowDatastoreService.makeRequest(queryBuilder.build(), null);
RunQueryResponse response = datastore.runQuery(request);
QueryResultBatch batch = response.getBatch();
List<EntityResult> entityResults = batch.getEntityResultsList();
Map<String, Value> entityMap = entityResults.get(0).getEntity().getPropertiesMap();
With this code I am able to get all the fields into entityMap, but I am not getting the key. Is there another way to fetch all the fields together with the Id?
Note: I'm not a Java user; this answer is based on Python experience.
Indeed, entities returned in a regular query result do not contain the entity key/ID. Attempting to obtain it afterwards is rather inefficient - you would need to reach out to the datastore for each individual entity (not even looking at why that doesn't appear to be working for you).
If I need the entity keys/IDs I'd instead use keys-only queries - obtaining the keys, from which I can easily get:
- the key IDs, locally, without making actual datastore calls (in Python via key.id(); I don't know the Java equivalent)
- the entities, via direct key lookup, which can be batched for efficiency.
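If I read the protobuf API correctly, a keys-only query is expressed as a projection on the special __key__ property. A hedged, unverified sketch against the same queryBuilder as in the question:

// Ask Datastore to return only the keys, not the full entities.
queryBuilder.addProjection(
        Projection.newBuilder()
                .setProperty(PropertyReference.newBuilder().setName("__key__")));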
This helped me achieve the result - getting the entity Id through the getKey method:
entity.getKey().getPathList().get(0).getId()
I'm trying to do an upsert using the MongoDB driver; here is the code:
BulkWriteOperation builder = coll.initializeUnorderedBulkOperation();
DBObject toDBObject;
for (T entity : entities) {
    toDBObject = morphia.toDBObject(entity);
    builder.find(toDBObject).upsert().replaceOne(toDBObject);
}
BulkWriteResult result = builder.execute();
where "entity" is morphia object. When I'm running the code first time (there are no entities in the DB, so all of the queries should be insert) it works fine and I see the entities in the database with generated _id field. Second run I'm changing some fields and trying to save changed entities and then I receive the folowing error from mongo:
E11000 duplicate key error collection: statistics.counters index: _id_ dup key: { : ObjectId('56adfbf43d801b870e63be29') }
What did I forget to configure in my example?
I don't know the structure of your DBObject, but a bulk upsert needs a valid query in order to work.
Let's say, for example, that you have a unique (_id) property called "id". A valid query would look like:
builder.find({id: toDBObject.id}).upsert().replaceOne(toDBObject);
This way, the engine can (a) find an object to update and then (b) update it (or, insert if the object wasn't found). Of course, you need the Java syntax for find, but same rule applies: make sure your .find will find something, then do an update.
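In Java syntax that might look like the following sketch (legacy driver API; assumes the Morphia-produced DBObject carries the generated _id after the first run):

// Match on _id alone so that changed fields no longer prevent the find from
// matching; the second run then becomes an update instead of a conflicting insert.
builder.find(new BasicDBObject("_id", toDBObject.get("_id")))
       .upsert()
       .replaceOne(toDBObject);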
I believe (just a guess) that the way it's written now will find "all" docs and try to update the first one ... but the behavior you are describing suggests it's finding "no doc" and attempting an insert.
I am using Spring, Hibernate and PostgreSQL.
Let's say I have a table looking like this:
CREATE TABLE test
(
  id integer NOT NULL,
  name character(10),
  CONSTRAINT test_unique UNIQUE (id)
)
So whenever I insert a record, the attribute id must be unique.
I would like to know which is the better way to insert a new record (in my Spring Java app):
1) Check if a record with the given id exists, and insert only if it doesn't, something like this:
if (testDao.find(id) == null) {
    Test test = new Test(id, name);
    testDao.create(test);
}
2) Call create directly and catch the DataAccessException if the insert fails...
Test test = new Test(id, name);
try {
    testDao.create(test);
} catch (DataAccessException e) {
    System.out.println("Error inserting record");
}
I consider the 1st way more appropriate, but it means extra processing for the DB. What is your opinion?
Thank you in advance for any advice.
Option (1) is subject to a race condition: a concurrent session could create the record between your existence check and your insert. This window is longer than you might expect, because the record might already have been inserted by another transaction that has not yet committed.
Option (2) is better, but will result in a lot of noise in the PostgreSQL error logs.
The best way is to use PostgreSQL 9.5's INSERT ... ON CONFLICT ... support to do a reliable, race-condition-free insert-if-not-exists operation.
On older versions you can use a loop in PL/pgSQL.
Both those options require use of native queries, of course.
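A hedged sketch of what such a native query could look like with an injected JPA EntityManager (Hibernate + PostgreSQL 9.5+; the table and column names are taken from the question):

// Inserts the row, or silently does nothing if the id already exists.
entityManager.createNativeQuery(
        "INSERT INTO test (id, name) VALUES (?, ?) ON CONFLICT (id) DO NOTHING")
        .setParameter(1, id)
        .setParameter(2, name)
        .executeUpdate();

executeUpdate() returns the number of affected rows, so a result of 0 tells you the row already existed.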
It depends on the source of your ID. If you generate it yourself, you can ensure uniqueness and simply rely on catching an exception, e.g. by using a UUID: http://docs.oracle.com/javase/1.5.0/docs/api/java/util/UUID.html
Another way would be to let Postgres generate the ID using the SERIAL data type: http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
If you have to take IDs from an untrusted source, do the prior check.
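For the SERIAL route, a hedged sketch of a matching JPA mapping (illustrative entity; getters and setters omitted):

import javax.persistence.*;

@Entity
public class Test {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // backed by a Postgres SERIAL column
    private Integer id;

    @Column(length = 10)
    private String name;
}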