I need to retrieve the size of some data (in bytes) from a Postgres database.
Postgres has a function called octet_length that does exactly what I want.
But I can't find a way to do the same from Hibernate.
So, is it possible to retrieve this information through Hibernate?
It seems that Hibernate registers the octet_length function in its PostgreSQL dialects.
For example, in PostgreSQL8Dialect.java we can see:
registerFunction( "octet_length", new StandardSQLFunction("octet_length", Hibernate.LONG) );
So you can call octet_length directly in HQL, or use a SQL projection (e.g. Projections.sqlProjection) in your Criteria code.
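For illustration, since the function is registered in the dialect, it can be used in HQL. A minimal sketch, with a hypothetical entity Document that has a byte[] property content:

// octet_length resolves through the function registered in the dialect
Long size = (Long) session
        .createQuery("select octet_length(d.content) from Document d where d.id = :id")
        .setParameter("id", docId)
        .uniqueResult();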
I want to do a direct update in MongoDB, setting someflag to either true or false for my use case. To be efficient, I do not want to query all the documents, set someflag, and save them back to the DB. I just want to update them directly in the DB, like when doing an update in the MongoDB shell.
Here is a sample document. NOTE: these documents can number from 1 to N, so I need to handle large data sets efficiently.
{
    _id: 60db378d0abb980372f06fc1,
    someid: "23cad24fc5d0290f7d5274f5",
    somedata: "some data of mine",
    flag: false
}
Currently I'm using an @Query method on my repository:

@Query(value = "{someid: ?0}, {$set: {flag: false}}")
void updateFlag(String someid);

Using the above syntax it doesn't work; I always get the exception message below:
Failed to instantiate void using constructor NO_CONSTRUCTOR with arguments
How do I perform a direct update efficiently, without querying all those documents and saving them back to the DB?
Use the BulkOperations class (https://docs.spring.io/spring-data/mongodb/docs/current/api/org/springframework/data/mongodb/core/BulkOperations.html).
Sample code: Spring Data Mongodb Bulk Operation Example
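A minimal sketch (assuming a MongoTemplate bean and a hypothetical collection named "mydocs"; the field names come from the sample document above):

import org.springframework.data.mongodb.core.BulkOperations;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

public class FlagUpdater {

    private final MongoTemplate mongoTemplate;

    public FlagUpdater(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    // Updates every matching document server-side in one round trip,
    // without loading any of them into the application.
    public void updateFlag(String someid, boolean flag) {
        mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, "mydocs")
                .updateMulti(
                        Query.query(Criteria.where("someid").is(someid)),
                        new Update().set("flag", flag))
                .execute();
    }
}

For a single filter like this, MongoTemplate's own updateMulti would work as well; BulkOperations pays off when you batch many such updates into one call.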
I am required to execute a stored procedure in a SQL Server database to fetch some data, and since I will later save the data into Mongo, whose side uses ReactiveMongoTemplate and so on, I introduced Spring R2DBC.
implementation("org.springframework.data:spring-data-r2dbc:1.0.0.RELEASE")
implementation("io.r2dbc:r2dbc-mssql:0.8.1.RELEASE")
I see that I can do SELECT and INSERT and so on with R2DBC, but is it possible to EXEC a procedure? I tried it, and it hangs forever until the test terminates, with neither success nor failure. The last line of the log is:
io.r2dbc.mssql.QUERY - Executing query: EXEC "SCHEMA"."MY_PROCEDURE"
The code is like:
public Flux<Coupon> selectWithProcedure() {
    return databaseClient
            .execute("EXEC \"SCHEMA\".\"MY_PROCEDURE\" ")
            .as(Coupon.class)
            .fetch()
            .all()
            .doOnNext(coupon -> {
                coupon.setCouponStatusRefFromId(coupon.getCouponStatusRefId());
            });
}
And it seems that no data is retrieved.
If I test other methods with simple queries like SELECT..., they work. But the problem is that the DBAs do not allow my app to read the tables directly; instead, they create a procedure for me. If this query is not possible, I must go the traditional JPA way, and going reactive on the Mongo side loses its point.
Well, I just saw this at https://github.com/r2dbc/r2dbc-mssql (version 0.8.1):

Next steps:
- Execution of stored procedures
- Add support for TVP and UDTs

And at https://r2dbc.io/2019/05/13/r2dbc-0-8-milestone-8-released:

We already have a few tickets lined up for the next milestone, and we know that they will require further SPI modifications:
- Support for Auto-Commit
- Connection Validation
- Support for Stored Procedures

So stored procedure execution is simply not supported yet in the driver version you are using.
I am using the latest version of greenDAO... I am missing something about using the data from the DB.
I need to prevent the creation of records that have the same PROFILE_NUMBER. Currently, during testing, I have inserted one record with a PROFILE_NUMBER of 1.
I need someone to show me an example of how to obtain the actual value of that field from the DB.
I am using this:

SvecPoleDao svecPoleDao = daoSession.getSvecPoleDao();
List<SvecPole> poles = svecPoleDao.queryBuilder().where(SvecPoleDao.Properties.Profile_number.eq(1)).list();
and it obtains something... this:
[com.example.bobby.poleattachmenttest2_workingdatabase.db.SvecPole#bfe830c3.2]
Is this serialized? The actual value I am looking for here is 1.
Here is the solution. You'll need to use listLazy() instead of list():

List<SvecPole> poles = svecPoleDao.queryBuilder().where(SvecPoleDao.Properties.Profile_number.eq(1)).listLazy();
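What you printed is the entity's identity string (its default toString()), not serialized data. To read the stored value, call the generated getter on a loaded entity. A minimal sketch; the getter name getProfile_number() is assumed from the mapped Profile_number property:

if (!poles.isEmpty()) {
    SvecPole pole = poles.get(0);
    // greenDAO generates a getter per mapped property; this returns the stored value, e.g. 1
    long profileNumber = pole.getProfile_number();
}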
As of NiFi 1.7.1, the new DBCPConnectionPoolLookup enables dynamic selection of database connections: you set a database.name attribute on a FlowFile, and when a consuming processor accesses a configured DBCPConnectionPoolLookup controller service, the value of that attribute is used to pick a connection from the lookup's configured properties, which map possible values to DBCPConnectionPool controller services.
I'd like to list the tables in each database that I've configured in the lookup, but the ListDatabaseTables processor does not accept incoming FlowFiles. This seems to mean that it can't be used to list tables in a dynamic set of databases.
What is the best way to accomplish this?
ListDatabaseTables uses the JDBC API for getting table info from the metadata of an established JDBC connection. This hides the underlying method of how to actually get tables from a particular database.
If all your databases are of the same ilk and you have a list of them, you could generate one flow file per database, filling in the database.name attribute, then use ExecuteSQL with the DBCPConnectionPoolLookup to run the corresponding statement for listing that database's tables, such as SHOW TABLES. You can parse the records using any of the record-aware processors such as QueryRecord, UpdateRecord, ConvertRecord, etc., and if you need one table per flow file you can use SplitRecord. If the output is JSON, CSV, or XML, you could use EvaluateJsonPath, ExtractText, or EvaluateXPath respectively to get the table name into an attribute, and continue on from there.
I wrote up NIFI-5519 to cover the proposal for ListDatabaseTables to optionally accept incoming connections; in the meantime, you'd need one ListDatabaseTables instance for each of your DBCPConnectionPool instances.
I have attempted to use a stored procedure to extract a PDF stored as a BLOB in a SQL Server database, a bit like the following:
SimpleJdbcCall sp = new SimpleJdbcCall(getJdbcTemplate())
        .withSchemaName("dbo")
        .withProcedureName("sp_find_doc")
        .declareParameters(
                new SqlParameter("DocID", java.sql.Types.BIGINT),
                new SqlOutParameter("DocBLOB", java.sql.Types.BLOB));

SqlParameterSource args = new MapSqlParameterSource().addValue("DocID", 1L);
Map<String, Object> map = sp.execute(args);
The stored procedure declares the output as varbinary(max), so it should support pretty large outputs. However, the PDF is corrupted. Examining it more closely, I can see that the object the execute method returns into the map is truncated to 8000 bytes.
8000 bytes is the limit of a basic varbinary (without max), so I'm guessing that some component is failing to specify the output type correctly. Or perhaps the execute method only reads the first 8000 bytes?
Is there something missing from my code above to ensure that Spring JDBC defines the correct types and maps all the bytes in the BLOB?
Note that I have also tried specifying useLOBs=false on the database URL and LONGVARBINARY for the outputs, but again, 8000 bytes was the limit of the returned arrays.
FYI - I'm using SQL Server 2014, jTDS 1.3.1 and Spring Boot 1.3.1 (Spring JDBC 4.2.3).
Workaround
Note that I have implemented a workaround: rewriting the stored procedure to return a result set rather than output parameters. This way I can use rs.getBlob("DocBLOB"), which works nicely.
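For reference, a minimal sketch of that workaround with plain JdbcTemplate (procedure and column names taken from above; getBytes is used here instead of getBlob for brevity):

// Read the full BLOB from the procedure's result set instead of an out parameter.
byte[] pdf = getJdbcTemplate().query(
        "EXEC dbo.sp_find_doc ?",
        new Object[] { 1L },
        (ResultSet rs) -> rs.next() ? rs.getBytes("DocBLOB") : null);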
But of course I would be interested to find out whether there is a correct/better way to execute stored procedures which return BLOBs in output parameters.