Spring R2DBC + SQL Server: procedures query - java

I need to execute a stored procedure on a SQL Server database to fetch some data. Since I later save that data into MongoDB via ReactiveMongoTemplate, I introduced Spring R2DBC to keep the whole pipeline reactive:
implementation("org.springframework.data:spring-data-r2dbc:1.0.0.RELEASE")
implementation("io.r2dbc:r2dbc-mssql:0.8.1.RELEASE")
I see that I can do SELECT, INSERT, and so on with R2DBC, but is it possible to EXEC proc_name? I tried it and it hangs forever; the test then terminates with neither success nor failure. The last line of the log is:
io.r2dbc.mssql.QUERY - Executing query: EXEC "SCHEMA"."MY_PROCEDURE"
The code is like:
public Flux<Coupon> selectWithProcedure() {
    return databaseClient
            .execute("EXEC \"SCHEMA\".\"MY_PROCEDURE\" ")
            .as(Coupon.class)
            .fetch().all()
            .doOnNext(coupon -> {
                coupon.setCouponStatusRefFromId(coupon.getCouponStatusRefId());
            });
}
And it seems that no data is retrieved.
If I test other methods with simple queries like SELECT... it works. But the problem is that the DBAs do not allow my app to read the tables directly; instead, they created a stored procedure for me. If this kind of query is not possible, I have to fall back to the traditional JPA approach, and going reactive on the Mongo side loses its point.

Update: I just saw this on https://github.com/r2dbc/r2dbc-mssql (version 0.8.1), under "Next steps":
Execution of stored procedures
Add support for TVP and UDTs
And:
https://r2dbc.io/2019/05/13/r2dbc-0-8-milestone-8-released
We already have a few tickets lined up for the next milestone, and we know that they will require further SPI modifications:
Support for Auto-Commit
Connection Validation
Support for Stored Procedures
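So stored procedure execution simply isn't supported yet in r2dbc-mssql 0.8.1. Until it lands, one interim workaround is to run the EXEC through the blocking JDBC driver and bridge the result into Reactor, so only this one call is blocking and the rest of the pipeline stays reactive. A minimal sketch, assuming a plain javax.sql.DataSource is available and a hypothetical mapRow helper that turns a ResultSet row into a Coupon:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public Flux<Coupon> selectWithProcedureViaJdbc(DataSource dataSource) {
    return Flux.defer(() -> Flux.fromIterable(callProcedure(dataSource)))
            // keep the blocking JDBC call off the reactive event loop
            .subscribeOn(Schedulers.boundedElastic());
}

private List<Coupon> callProcedure(DataSource dataSource) {
    List<Coupon> coupons = new ArrayList<>();
    try (Connection connection = dataSource.getConnection();
         CallableStatement statement = connection.prepareCall("{call SCHEMA.MY_PROCEDURE}");
         ResultSet rs = statement.executeQuery()) {
        while (rs.next()) {
            coupons.add(mapRow(rs)); // hypothetical ResultSet-to-Coupon mapper
        }
    } catch (SQLException e) {
        throw new IllegalStateException("Stored procedure call failed", e);
    }
    return coupons;
}

Once r2dbc-mssql ships stored procedure support, this bridge can be replaced with a plain DatabaseClient call again.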

Related

SpringBoot + AWS DynamoDB and atomic code block

For a Spring Boot project I'm using DynamoDB as the database. I'm not a DynamoDB expert. Currently, in my service, I have this high-level logic implemented:
result = service1.checkStatus() // here I have a query on the DB and I check the result of the query
if (result) {
    saveNewEntity()
}
With this pseudocode I just want to give an overview of the logic in order to explain the issue: I probably need to run this code inside a 'transaction'. In a stress-test session I noticed that, by sending different requests in parallel, I'm able to save new entities bypassing the check (checkStatus).
How can I achieve this?
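One common way to close this kind of check-then-save race in DynamoDB is a conditional write, so the check and the save happen atomically on the server instead of as two separate calls. A minimal sketch with the AWS SDK for Java v2, assuming a hypothetical table my-table with partition key pk; the attribute names are placeholders:

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public boolean saveIfAbsent(DynamoDbClient dynamoDb, String id) {
    PutItemRequest request = PutItemRequest.builder()
            .tableName("my-table") // placeholder table name
            .item(Map.of(
                    "pk", AttributeValue.builder().s(id).build(),
                    "status", AttributeValue.builder().s("NEW").build()))
            // the write only succeeds if no item with this key exists yet,
            // replacing the separate checkStatus() + saveNewEntity() calls
            .conditionExpression("attribute_not_exists(pk)")
            .build();
    try {
        dynamoDb.putItem(request);
        return true;
    } catch (ConditionalCheckFailedException e) {
        return false; // a parallel request already saved the entity
    }
}

If the status check is more involved than key existence, the condition expression can also reference non-key attributes, and checks that span several items can be expressed with a TransactWriteItems request.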

Update Statement Issues with Apache Ignite (2.13.0) + Java Spring Boot

We are facing issues when updating tables that have a column of type timestamp.
Insert and update work fine if we use the Ignite repository for both.
Insert and update work fine if we use native queries for both.
Insert via the Ignite repository combined with update via native queries results in the error below:
class org.apache.ignite.binary.BinaryObjectException: Invalid flag value: 32
    at org.apache.ignite.internal.binary.builder.BinaryBuilderReader.parseValue(BinaryBuilderReader.java:863)
    at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:290)
    at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:103)
    at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:56)
    at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:297)
    at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:103)
    at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:56)
    at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:297)
If you can post example code, this would make a good bug report.
https://github.com/apache/ignite/blob/876a2ca190dbd88f42bc7acecff8b7783ce7ce54/modules/core/src/main/java/org/apache/ignite/internal/binary/builder/BinaryBuilderReader.java#L515
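For what it's worth, a self-contained reproduction only needs an entity with a Timestamp field written through the key-value/binary path and then updated with a native SQL statement. A rough sketch (the cache, class, and column names are made up, and it uses the core Ignite API rather than the Spring Data repository, so it only approximates the failing combination):

import java.sql.Timestamp;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class IgniteTimestampRepro {

    public static class Audit {
        @QuerySqlField private String name;
        @QuerySqlField private Timestamp updatedAt;

        public Audit(String name, Timestamp updatedAt) {
            this.name = name;
            this.updatedAt = updatedAt;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, Audit> cache = ignite.getOrCreateCache(
                    new CacheConfiguration<Long, Audit>("AuditCache")
                            .setIndexedTypes(Long.class, Audit.class));

            // insert through the key-value path (roughly what a repository save() boils down to)
            cache.put(1L, new Audit("first", new Timestamp(System.currentTimeMillis())));

            // update the timestamp column through a native SQL statement
            cache.query(new SqlFieldsQuery("UPDATE Audit SET updatedAt = ? WHERE name = ?")
                    .setArgs(new Timestamp(System.currentTimeMillis()), "first"))
                    .getAll();
        }
    }
}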

Performing an update using @Query via Spring Data MongoDB

I want to do a direct update in MongoDB, setting someflag to either true or false for my use case. To be efficient, I do not want to query all documents, set someflag, and save them back to the DB; I just want to update directly on the DB, like when doing an update in the mongo shell.
Here is a sample document. Note: the number of documents can range from 1 to N, so I need to handle large data sets efficiently.
{
    "_id": ObjectId("60db378d0abb980372f06fc1"),
    "someid": "23cad24fc5d0290f7d5274f5",
    "somedata": "some data of mine",
    "flag": false
}
Currently I'm using a @Query method on my repository:
@Query(value = "{someid: ?0}, {$set: {flag: false}}")
void updateFlag(String someid);
Using the above syntax it doesn't work; I always get the exception message below:
Failed to instantiate void using constructor NO_CONSTRUCTOR with arguments
How do I perform a direct update efficiently, without querying all those documents and saving them back to the DB?
Use the BulkOperations class (https://docs.spring.io/spring-data/mongodb/docs/current/api/org/springframework/data/mongodb/core/BulkOperations.html)
Sample code:
Spring Data Mongodb Bulk Operation Example
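For reference, a minimal sketch of such a server-side multi-document update through MongoTemplate and BulkOperations; the collection name mydocs is a placeholder:

import java.util.List;
import org.springframework.data.mongodb.core.BulkOperations;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

public void updateFlag(MongoTemplate mongoTemplate, String someid, boolean flag) {
    Query query = Query.query(Criteria.where("someid").is(someid));
    Update update = Update.update("flag", flag);
    // one round trip; every matching document is modified on the server,
    // nothing is loaded into the application
    mongoTemplate.updateMulti(query, update, "mydocs");
}

public void flagAll(MongoTemplate mongoTemplate, List<String> someids) {
    // for many criteria/update pairs, BulkOperations batches them into a single bulk call
    BulkOperations bulkOps = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, "mydocs");
    for (String id : someids) {
        bulkOps.updateMulti(Query.query(Criteria.where("someid").is(id)),
                Update.update("flag", true));
    }
    bulkOps.execute(); // returns a BulkWriteResult with the modified counts
}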

Rolling back a Postgres database using Liquibase in Java

I'm trying to roll back the changes to a Postgres table in between component tests so that each one has a clean DB to work with.
I'm using Liquibase to set up Postgres (a changelog XML to describe the setup, then the liquibase-core Kotlin/Java library to apply it). I'm also using Hibernate to interact with Postgres directly. The test framework is Kotest, using the beforeXXX methods to make sure all the setup happens before the tests run. The database is set up once before everything runs, and the idea is to roll back after each test.
From looking at the docs, tagDatabase and rollback seem to be what I need; however, when I run them they don't seem to actually roll anything back.
The code is roughly as follows (this is just test code to see if it works at all, mind; the code would ideally be segmented as I described above):
// 1 - (Pre-all-tests) Postgres setup
liquibase = Liquibase(
    "/db/changelog/changelog-master.xml",
    ClassLoaderResourceAccessor(),
    DatabaseFactory.getInstance().findCorrectDatabaseImplementation(JdbcConnection(connection))
)
liquibase.update(Contexts(), LabelExpression())
liquibase.tag("initialised")

// 2 - Something is inserted
val newEntity = ThingEntity()
entityManager.persist(newEntity)
entityManager.transaction.commit()
entityManager.clear()

// 3 - Cleanup
liquibase.rollback("initialised", Contexts())

// 4 - Fetching
entityManager.find(ThingEntity::class.java, id)
The thing is, after running liquibase.rollback, the newEntity I persisted earlier is still present. The tag has disappeared: if I run the doesTagExist method it returns true before and false after the rollback, so the tag at least is being removed.
Given that I'm clearing the entity manager after the commit, I don't think this is a caching issue, and as I said the tag is being removed, just not the data.
Can anyone tell me why the actual transactions (i.e. the persist) aren't being erased?
Thanks!
It looks like you are using Liquibase in the wrong way. What you are trying to do (roll back data that was added in a unit test) is close to what is described here: Rollback transaction after @Test
When you ask Liquibase to roll back to some tag, it just executes the rollback scripts (if any are provided) for the changesets that were applied after the changeset with that tag: https://docs.liquibase.com/commands/community/rollbackbytag.html
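In other words, a Liquibase rollback undoes schema changesets, not arbitrary rows written by the test. The usual way to get a clean slate per test is to run each test inside a database transaction that is rolled back afterwards. With Spring's test support that looks roughly like the sketch below (JUnit 5 rather than Kotest, and the entity/repository names are placeholders):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.transaction.annotation.Transactional;

@SpringBootTest
@Transactional // each test runs in a transaction that Spring rolls back when the test finishes
class ThingEntityTest {

    @Autowired
    private ThingRepository thingRepository; // placeholder repository

    @Test
    void persistedDataIsDiscardedAfterTheTest() {
        thingRepository.save(new ThingEntity()); // placeholder entity
        // assertions here; the insert disappears when the transaction rolls back
    }
}

Liquibase then only needs to run once to create the schema; no tagging or rollback between tests is required.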

EclipseLink can't retrieve entities inserted manually

I'm having some trouble with EclipseLink. My program has to interact with a database (representing a building). I've written a little input test mode where I can manually insert data through the console.
My problem: a normal getByID operation works just fine when I retrieve an entity I previously inserted through EclipseLink itself (via commit()), but it throws a NoResultException when trying to select a row inserted manually via SQL script (building -> lots of rooms -> script).
This (oversimplified) works fine:
main() {
    MyRoom r = new MyRoom();
    r.setID("floor1-roomnr4");
    em.persist(r);
    em.getTransaction().commit(); // em = entity manager
    DAO.getRoomByID("floor1-roomnr4"); // works
}
whereas the combination of generation script + a simple getRoomByID() call throws an exception.
If I try the exact SELECT statement that just threw a NoResultException in SQL Developer, I get the result I want. I also only get this problem in the input mode; otherwise, selecting the generated rows also works fine.
Does EclipseLink have some cache mechanism I'm unaware of that is causing the problem?
Are you sure EclipseLink and SQL Developer are connected to the same database? Please verify the connection information for both. Is the generation script committing the changes with the "commit" command?
If EclipseLink works similarly to Hibernate, then yes, there is a cache. The "first level cache" guarantees that you get the exact same instance within one transaction, which makes sense. If you know EclipseLink/transactions, try to evict all loaded instances, or commit the transaction and then try your DAO again. This would force EclipseLink to fetch the data from the database again.
See this answer to a similar question.
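If the shared cache does turn out to be involved, the standard JPA cache API can clear both the persistence context and EclipseLink's shared (second-level) cache before re-querying. A small sketch, assuming an EntityManager em is at hand:

import javax.persistence.EntityManager;

static MyRoom reloadFromDatabase(EntityManager em, String roomId) {
    em.clear(); // drop managed instances from the first-level cache (persistence context)
    em.getEntityManagerFactory().getCache().evictAll(); // evict the shared cache EclipseLink enables by default
    return em.find(MyRoom.class, roomId); // the lookup now has to hit the database
}

Alternatively, the shared cache can be switched off for the persistence unit with <shared-cache-mode>NONE</shared-cache-mode> in persistence.xml.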
