How to use the PostgreSQL (set role user) command in SSM projects? - java

The project currently uses Spring MVC + Spring + MyBatis + Druid + PostgreSQL.
The users in the project correspond to the users in the database, so each time SQL is run we switch users with the (set role user) command and then perform the CRUD operations on the database.
My question:
Because there are many connections in the connection pool, the first step is to get a database connection, then switch users, and then run the business SQL against the database. But I don't know which part of the project should handle this logic, because getting connections from the pool and executing SQL are implemented by the underlying code. Do you have any good suggestions?
Can you provide me with a complete demo, covering operations such as the following?
Step 1: get the user's name from Spring Security (or Shiro).
Step 2: get the connection the database is currently using from the connection pool.
Step 3: execute SQL (set role user) to switch roles.
Step 4: perform the CRUD operation.
Step 5: reset the database connection (reset role).

Here is a simple way to do what you need with the help of mybatis-spring.
Unless you already use mybatis-spring, the first step is to change your project configuration so that you obtain the SqlSessionFactory via org.mybatis.spring.SqlSessionFactoryBean provided by mybatis-spring.
The next step is the implementation of setting/resetting the user role on the connection. In mybatis the connection lifecycle is controlled by the class implementing the org.apache.ibatis.transaction.Transaction interface. An instance of this class is used by the query executor to get the connection.
In a nutshell, you need to create your own implementation of this interface and configure mybatis to use it.
Your implementation can be based on SpringManagedTransaction from mybatis-spring and would look something like:
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import javax.sql.DataSource;

import org.mybatis.spring.transaction.SpringManagedTransaction;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

class UserRoleAwareSpringManagedTransaction extends SpringManagedTransaction {

    public UserRoleAwareSpringManagedTransaction(DataSource dataSource) {
        super(dataSource);
    }

    @Override
    public Connection getConnection() throws SQLException {
        Connection connection = getCurrentConnection();
        setUserRole(connection);
        return connection;
    }

    private Connection getCurrentConnection() throws SQLException {
        return super.getConnection();
    }

    @Override
    public void close() throws SQLException {
        resetUserRole(getCurrentConnection());
        super.close();
    }

    private void setUserRole(Connection connection) throws SQLException {
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        String username = authentication.getName();
        Statement statement = connection.createStatement();
        try {
            // note that this direct usage of username is subject to SQL injection,
            // so you need to use the suggestion from
            // https://stackoverflow.com/questions/2998597/switch-role-after-connecting-to-database
            // about quoting the username
            statement.execute("set role '" + username + "'");
        } finally {
            statement.close();
        }
    }

    private void resetUserRole(Connection connection) throws SQLException {
        Statement statement = connection.createStatement();
        try {
            statement.execute("reset role");
        } finally {
            statement.close();
        }
    }
}
Now you need to configure mybatis to use your Transaction implementation. For this, implement a TransactionFactory similar to org.mybatis.spring.transaction.SpringManagedTransactionFactory provided by mybatis-spring:
import java.sql.Connection;
import java.util.Properties;

import javax.sql.DataSource;

import org.apache.ibatis.session.TransactionIsolationLevel;
import org.apache.ibatis.transaction.Transaction;
import org.apache.ibatis.transaction.TransactionFactory;

public class UserRoleAwareSpringManagedTransactionFactory implements TransactionFactory {

    @Override
    public Transaction newTransaction(DataSource dataSource, TransactionIsolationLevel level, boolean autoCommit) {
        return new UserRoleAwareSpringManagedTransaction(dataSource);
    }

    @Override
    public Transaction newTransaction(Connection conn) {
        throw new UnsupportedOperationException("New Spring transactions require a DataSource");
    }

    @Override
    public void setProperties(Properties props) {
    }
}
Then define a bean of type UserRoleAwareSpringManagedTransactionFactory in your spring context and inject it into the transactionFactory property of the SqlSessionFactoryBean in your spring context.
Now every time mybatis obtains a Connection, the Transaction implementation will set the role to the current Spring Security user.
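For illustration, here is a minimal Java-config sketch of that wiring (the MyBatisConfig class name is made up; an XML <bean> definition setting the transactionFactory property works equally well):

import javax.sql.DataSource;

import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyBatisConfig {

    // Plug the custom TransactionFactory into the SqlSessionFactoryBean so
    // mybatis-spring builds every SqlSession with role-aware transactions.
    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSource dataSource) throws Exception {
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(dataSource);
        factoryBean.setTransactionFactory(new UserRoleAwareSpringManagedTransactionFactory());
        return factoryBean.getObject();
    }
}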

Best practice is that database users are applications. An application user's access to particular data/resources should be controlled in the application; applications should not rely on the database to restrict data/resource access. Therefore, application users should not have different roles in the database, and an application should use only a single database user account.
Spring is a manifestation of best practices, so Spring does not implement this functionality. If you want such functionality, you need to hack.
Referring to this, your best bet is:
@Autowired
JdbcTemplate jdbcTemplate;

// ...

public void runPerUserSql() {
    jdbcTemplate.execute("set role 'user_1';");
    jdbcTemplate.execute("SELECT 1;");
}
I still do not have much confidence in this. Unless you are writing a pgAdmin-style webapp for multiple users, you should reconsider your approach and design.


How to use a HikariCP connection pool with prepared statements?

I am developing a Spring Boot web (REST) application where I need to serve many requests. Therefore I wanted my application to be able to handle requests concurrently. Since Spring Boot REST services are concurrently usable out of the box, I only need to make the (PostgreSQL) database access concurrently accessible. For that I am using the HikariCP data source.
Since a lot of my statements are prepared statements, I collected them in one method where I call pstmt = connection.prepareStatement("SQLCODE"); once for every statement. Those prepared statements are then used in various methods when user interaction from the REST service is processed.
Now, when I use HikariCP I can't do that anymore, can I?
When I prepare a statement, that statement is bound to one connection. If I then try to access it concurrently, I can't, because the connection is not shared.
Am I missing something? How can I solve this? Do I need to retrieve a connection from the pool, prepare the statement locally, execute my query, and close the connection? If so, what's the point of using a prepared statement (other than preventing SQL injection)?
I know that statements are cached on the PostgreSQL side. So would it be a good idea to keep the method where all prepared statements are prepared, to send them to the database cache, and then just create the same statements again locally? That way one might still leverage the caching abilities of the database. But on the other hand it would be really ugly code.
I am using Spring 5.3.10, Java 11, PostgreSQL 14.0.
@RestController
public class RESTController {

    /** The database controller. */
    private DBController dbc;

    /** The data source object serving as a connection pool. */
    private HikariDataSource ds;

    /** The logger object for this class. */
    private static Logger logger = LoggerFactory.getLogger(RESTController.class);

    public RESTController(DBController dbc, Config config) {
        this.dbc = dbc;
        // Create the database
        if (!this.dbc.createDB(config)) {
            logger.error("Couldn't create the database. The service will now exit.");
            Runtime.getRuntime().halt(1);
        }
        // Create a connection pool
        ds = new HikariDataSource();
        ds.setJdbcUrl(config.getUrl());
        ds.setUsername(config.getUser());
        ds.setPassword(config.getPassword());
        ds.addDataSourceProperty("cachePrepStmts", "true");
        ds.addDataSourceProperty("prepStmtCacheSize", "250");
        ds.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
        // Create the necessary tables
        if (!this.dbc.createTables(ds)) {
            logger.error("Couldn't create the tables. The service will now exit.");
            ds.close();
            Runtime.getRuntime().halt(1);
        }
        // Prepare SQL statements
        if (!this.dbc.prepareStatements(ds)) {
            logger.error("Couldn't prepare the SQL statements. The service will now exit.");
            ds.close();
            Runtime.getRuntime().halt(1);
        }
    }

    @PostMapping("/ID")
    public ResponseEntity<String> createNewDomain(@RequestParam(name = "name", required = true) String name) {
        // Do stuff ...
    }

    // [...]
}
@Component
public class DBController {

    /** The logger object for this class. */
    private static Logger logger = LoggerFactory.getLogger(DBController.class);

    // Prepared Statements
    private PreparedStatement stmt1, stmt2, stmt3;

    public boolean prepareStatements(HikariDataSource ds) {
        try {
            // Get connection from the pool
            Connection c = ds.getConnection();
            // Prepare all the statements
            stmt1 = c.prepareStatement("SQLCODE");
            stmt2 = c.prepareStatement("SQLCODE1");
            stmt3 = c.prepareStatement("SQLCODE2");
            // [...]
        } catch (SQLException e) {
            logger.debug("Could not prepare the SQL statements: " + e.getMessage());
            return false;
        }
        logger.debug("Successfully prepared the SQL statements.");
        return true;
    }

    public boolean m1(int i) {
        stmt1.setInt(1, i);
        ResultSet rs = stmt1.executeQuery();
        // [...]
    }

    public boolean m2(int j) {
        stmt1.setInt(1, j);
        ResultSet rs = stmt1.executeQuery();
        // [...]
    }

    public boolean m3(String a) {
        stmt2.setString(1, a);
        ResultSet rs = stmt2.executeQuery();
        // [...]
    }

    // [...]
}
Thanks in advance.
Please read the Statement Cache section at https://github.com/brettwooldridge/HikariCP :
Many connection pools, including Apache DBCP, Vibur, c3p0 and others offer PreparedStatement caching. HikariCP does not. Why?
So it does not cache, and if you read the explanation there, you may well decide you don't need it to.
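Following that advice, here is a minimal sketch of the per-request pattern (the UserLookup class and the users table are made up for illustration): borrow a connection, prepare the statement locally, execute, and let try-with-resources return everything to the pool. Note also that the cachePrepStmts/prepStmtCacheSize properties in the question are MySQL Connector/J settings; the PostgreSQL JDBC driver maintains its own per-connection statement cache, so re-preparing the same SQL is cheap.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Optional;

import javax.sql.DataSource;

public class UserLookup {

    private final DataSource ds;

    public UserLookup(DataSource ds) {
        this.ds = ds;
    }

    // Borrow a connection per request; "closing" a pooled connection
    // simply returns it to HikariCP.
    public Optional<String> findNameById(int id) throws SQLException {
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement("SELECT name FROM users WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? Optional.of(rs.getString("name")) : Optional.empty();
            }
        }
    }
}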

Implementing Spring + Apache Flink project with Postgres

I have a Spring Boot Gradle project using Apache Flink to process datastream signals. When a new signal comes through the datastream, I would like to look up (i.e. findById()) its details using an ID in a PostgreSQL database table which is already created, in order to get additional information about the signal and enrich the data. I would like to avoid using Spring dependencies to perform the lookup (i.e. autowiring a repository) and want to stick with a Flink implementation for the lookup.
Where can I specify the Postgres connection config information such as port, database, url, username, password, etc.? (For simplicity, assume the Postgres DB is local on my machine.) Is it as simple as adding the configuration to the application.properties file? If so, how can I write the query method to look up the record in the Postgres table when searching by a non-primary-key value?
Some online sources suggest using this skeleton code, but I am not sure how/if it fits my use case. (I have an EventEntity model created which contains all the params/columns from the table I'm looking up.)
Like so:
public class DatabaseMapper extends RichFlatMapFunction<String, EventEntity> {
    // Declare DB connection & query statements

    @Override
    public void open(Configuration parameters) throws Exception {
        // Initialize DB connection
        // Prepare query statements
    }

    @Override
    public void flatMap(String value, Collector<EventEntity> out) throws Exception {
    }
}
Your sample code is correct. You can put all your custom initialization and preparation code for PostgreSQL in the open() method, then use the pre-configured fields in your flatMap() function.
Here is one sample for Redis operations.
I have used RichAsyncFunction here and I suggest you do the same, as it is the recommended best practice. Read here for more: https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/stream/operators/asyncio.html
You can pass configuration parameters to your constructor and use them in your initialization process:
import java.util.Collections;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public static class AsyncRedisOperations extends RichAsyncFunction<Object, Object> {

    private static final Logger logger = LoggerFactory.getLogger(AsyncRedisOperations.class);

    private transient JedisPool jedisPool;
    private final Configuration redisConf;

    public AsyncRedisOperations(Configuration redisConf) {
        this.redisConf = redisConf;
    }

    @Override
    public void open(Configuration parameters) {
        JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
        jedisPoolConfig.setMaxTotal(this.redisConf.getInteger("pool", 8));
        jedisPoolConfig.setMaxIdle(this.redisConf.getInteger("pool", 8));
        jedisPoolConfig.setMaxWaitMillis(this.redisConf.getInteger("maxWait", 0));
        JedisPool jedisPool = new JedisPool(jedisPoolConfig,
                this.redisConf.getString("host", "192.168.10.10"),
                this.redisConf.getInteger("port", 6379), 5000);
        try {
            this.jedisPool = jedisPool;
            logger.info("Redis connected: " + jedisPool.getResource().isConnected());
        } catch (Exception e) {
            logger.error("Exception while connecting Redis", e);
        }
    }

    @Override
    public void asyncInvoke(Object in, ResultFuture<Object> out) {
        try (Jedis jedis = this.jedisPool.getResource()) {
            // look up the incoming element and emit the result downstream
            String value = jedis.get(String.valueOf(in));
            logger.info("Redis value: " + value);
            out.complete(Collections.singletonList(value));
        }
    }
}
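Mapping that back to the PostgreSQL use case, a minimal synchronous sketch could look like the following (connection URL, credentials, table/column names and the EventEntity constructor are all assumptions): the connection and prepared statement are created once per task in open() and reused per record in flatMap(); the same lifecycle applies to a RichAsyncFunction if you go async.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class DatabaseMapper extends RichFlatMapFunction<String, EventEntity> {

    private transient Connection connection;
    private transient PreparedStatement lookup;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Opened once per task slot, not per record
        connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
        lookup = connection.prepareStatement(
                "SELECT id, details FROM events WHERE signal_id = ?"); // non-PK lookup
    }

    @Override
    public void flatMap(String value, Collector<EventEntity> out) throws Exception {
        lookup.setString(1, value);
        try (ResultSet rs = lookup.executeQuery()) {
            while (rs.next()) {
                // EventEntity(long, String) is a hypothetical constructor
                out.collect(new EventEntity(rs.getLong("id"), rs.getString("details")));
            }
        }
    }

    @Override
    public void close() throws Exception {
        if (lookup != null) lookup.close();
        if (connection != null) connection.close();
    }
}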

Multi-tenant hibernate doesn't switch between tenants

I am in the middle of changing my spring + hibernate + mysql setup to be multi-tenant. First of all, I have the following in my application.properties:
spring.datasource.url=jdbc:mysql://localhost:3306/test?useSSL=false
spring.datasource.username=root
spring.datasource.password=root
I am not sure if I'm supposed to make it connect to a specific schema here (test) after introducing multi-tenancy? It doesn't make sense to me, as it's supposed to use a default schema if no tenant is provided and otherwise connect to the schema associated with the tenant. However, if I remove it I get an error that no database was provided.
Secondly, the multi-tenant part doesn't seem to be working. All my queries go to the test schema. I have implemented the following MultiTenantConnectionProvider:
@Component
public class TenantConnectionProvider implements MultiTenantConnectionProvider {

    private static final Logger logger = LoggerFactory.getLogger(TenantConnectionProvider.class);

    private DataSource datasource;

    public TenantConnectionProvider(DataSource datasource) {
        this.datasource = datasource;
    }

    ...

    @Override
    public Connection getConnection(String tenantIdentifier) throws SQLException {
        logger.info("Get connection for tenant {}", tenantIdentifier);
        final Connection connection = getAnyConnection();
        connection.setSchema(tenantIdentifier);
        return connection;
    }

    @Override
    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        logger.info("Release connection for tenant {}", tenantIdentifier);
        connection.setSchema(DEFAULT_TENANT);
        releaseAnyConnection(connection);
    }
}
I am getting no errors, and when I make a query it correctly prints Get connection for tenant with the correct tenantIdentifier, which matches the name of one of my other schemas. Still, it queries the test schema. What am I missing? Thanks!
EDIT:
It seems like connection.setSchema(tenantIdentifier) has no effect. The method's description says the following: If the driver does not support schemas, it will silently ignore this request. So I'm guessing my driver does not support it? What to do in that case?
Using connection.createStatement().execute("USE " + tenantIdentifier); instead of connection.setSchema(tenantIdentifier); solved my problem.
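Applied to the provider above, the fix could look like this sketch (MySQL's Connector/J maps databases to catalogs by default, which is why setSchema() is silently ignored; also make sure tenantIdentifier comes from a trusted list, since it is concatenated into the statement):

    @Override
    public Connection getConnection(String tenantIdentifier) throws SQLException {
        final Connection connection = getAnyConnection();
        // setSchema() is a no-op for drivers without schema support,
        // so switch databases with an explicit USE statement instead.
        try (Statement statement = connection.createStatement()) {
            statement.execute("USE " + tenantIdentifier); // tenantIdentifier must be trusted
        }
        return connection;
    }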

Which layer can I put the MySQL database transaction logic in?

Basically I have two services; each one of them holds methods for each persistent object that I have in my project, and these services hold some methods which the endpoint (Google) will call to perform something.
I'm using Google Cloud Endpoints + MySQL Cloud + Hibernate.
Two POs
@Entity
public class Device {
    ...
}

@Entity
public class User {
    ...
}
The services for each one of the POs:
public class DeviceService {

    Device getDevice(Long devId) {
        return new Dao().getById(devId, Device.class);
    }

    void allocateDevice(Long userId) {
        User u = new UserService().getUser(userId);
        // ... do stuff
    }
}

public class UserService {

    User getUser(Long userId) {
        return new Dao().getById(userId, User.class);
    }
}
The endpoint for each one
public class DeviceEndpoint {

    @ApiMethod(
        name = "device.get",
        path = "device/{devId}",
        httpMethod = ApiMethod.HttpMethod.GET
    )
    Device getDevice(Long devId) {
        MyEntityManager em = new MyEntityManager();
        Device device;
        try {
            em.getTransaction().begin();
            device = new DeviceService().getDevice(devId);
            em.getTransaction().commit();
        } finally {
            em.cleanup(); // custom method that also rolls back
        }
        return device;
    }

    @ApiMethod(
        name = "device.allocate",
        path = "device/{userId}/allocate",
        httpMethod = ApiMethod.HttpMethod.GET
    )
    void allocateDevice(Long userId) {
        MyEntityManager em = new MyEntityManager();
        try {
            em.getTransaction().begin();
            new DeviceService().allocateDevice(userId);
            em.getTransaction().commit();
        } finally {
            em.cleanup(); // custom method that also rolls back
        }
    }
}
I would like to know where to put the database transaction logic (begin, commit, rollback).
Dao layer
Firstly I had inserted it into the Dao class, but then every query/insert/update opened and closed its own connection, and when I had to use more than one CRUD operation there were several open/close cycles, which was expensive and slow.
Example: in one endpoint request I want to obtain some object from the DB and update it. Two operations and two open/close connections.
Endpoint layer (as in the example)
Secondly I put the open/close logic in the endpoint methods (as in the example above), but my work colleagues said it isn't a good pattern; beginning and committing transactions in this layer isn't a good idea, so they suggested the third option.
Service layer
Put that logic (begin/commit/rollback) into the service layer, in each method. I tried, but some methods call others which also open and close the connection, so when the second method returns, the transaction is already closed.
Please let me know if I'm missing some important info.
Typically this type of action is performed in the service layer, as this layer is there to provide the logic that operates on the data sent to and from the DAO layer; that being said, you could bundle these together into the same module.
The comment "I tried, but some methods call others which also open and close the connection, so when the second method returns, the transaction is already closed" is interesting; I am not sure how you are managing your connections, but you may want/need to revisit whether your connections are being closed before transactions are completed. You may want to look at Hibernate's HibernateTransactionManager.
Where should "#Transactional" be place Service Layer or DAO

How to implement transactions in Spring Data Redis in a clean way?

I am following the RetwisJ tutorial available here. I don't think Redis transactions are implemented in it. For example, in the following function, if some exception occurs in between, the data will be left in an inconsistent state.
I want to know how a function like the following can be implemented in Spring Data Redis as a single transaction:
public String addUser(String name, String password) {
    String uid = String.valueOf(userIdCounter.incrementAndGet());
    // save user as hash
    // uid -> user
    BoundHashOperations<String, String, String> userOps = template.boundHashOps(KeyUtils.uid(uid));
    userOps.put("name", name);
    userOps.put("pass", password);
    valueOps.set(KeyUtils.user(name), uid);
    users.addFirst(name);
    return addAuth(name);
}
Here userIdCounter, valueOps and users are initialized in the constructor. I have come across this in the documentation (section 4.8), but I can't figure out how to fit it into this function, where some variables are initialized outside the function (please don't tell me I have to initialize these variables in each and every function where I need transactions!).
PS: Also, is there any @Transactional annotation or transaction manager available for Spring Data Redis?
UPDATE: I have tried using MULTI/EXEC. The code I wrote was for another project, but applied to this problem it is as follows:
public String addMyUser(String name, String password) {
    String uid = String.valueOf(userIdCounter.incrementAndGet());
    template.execute(new SessionCallback<Object>() {
        @Override
        public <K, V> Object execute(RedisOperations<K, V> operations) throws DataAccessException {
            operations.multi();
            getUserOps(operations, KeyUtils.uid(uid)).put("name", name);
            getUserOps(operations, KeyUtils.uid(uid)).put("pass", password);
            getValueOps(operations).set(KeyUtils.user(name), uid);
            getUserList(operations, KeyUtils.users()).leftPush(name);
            operations.exec();
            return null;
        }
    });
    return addAuth(name);
}

private ValueOperations<String, String> getValueOps(RedisOperations operations) {
    return operations.opsForValue();
}

private BoundHashOperations<String, String, String> getUserOps(RedisOperations operations, String key) {
    return operations.boundHashOps(key);
}

private BoundListOperations<String, String> getUserList(RedisOperations operations, String key) {
    return operations.boundListOps(key);
}
Please tell me whether this way of using MULTI/EXEC is recommended or not.
Up to SD Redis 1.2 you will have to take care of transaction handling yourself using TransactionSynchronizationManager.
The snippet above could then look something like this:
public String addUser(String name, String password) {
    String uid = String.valueOf(userIdCounter.incrementAndGet());
    // start the transaction
    template.multi();
    // register synchronisation
    if (TransactionSynchronizationManager.isActualTransactionActive()) {
        TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
            @Override
            public void afterCompletion(int status) {
                switch (status) {
                    case STATUS_COMMITTED: template.exec(); break;
                    case STATUS_ROLLED_BACK: template.discard(); break;
                    default: template.discard();
                }
            }
        });
    }
    BoundHashOperations<String, String, String> userOps = template.boundHashOps(KeyUtils.uid(uid));
    userOps.put("name", name);
    userOps.put("pass", password);
    valueOps.set(KeyUtils.user(name), uid);
    users.addFirst(name);
    return addAuth(name);
}
Please note that once in MULTI, read operations are also part of the transaction, which means you will likely not be able to read data from the Redis server.
The setup might differ from the one above, as you may want to additionally call WATCH. Furthermore, you will have to take care that multiple callbacks do not send MULTI and/or EXEC more than once.
The upcoming 1.3 RELEASE of Spring Data Redis will ship with support for Spring-managed transactions, taking care of MULTI|EXEC|DISCARD and allowing read operations (on already existing keys) while transaction synchronization is active. You can already give the BUILD-SNAPSHOT a spin and turn this on by setting template.setEnableTransactionSupport(true).
By default, RedisTemplate does not participate in managed Spring transactions. If you want RedisTemplate to make use of Redis transactions when using @Transactional or TransactionTemplate, you need to explicitly enable transaction support for each RedisTemplate by setting setEnableTransactionSupport(true). Enabling transaction support binds RedisConnection to the current transaction backed by a ThreadLocal. If the transaction finishes without errors, the Redis transaction gets committed with EXEC, otherwise rolled back with DISCARD. Redis transactions are batch-oriented. Commands issued during an ongoing transaction are queued and only applied when committing the transaction.
Spring Data Redis distinguishes between read-only and write commands in an ongoing transaction. Read-only commands, such as KEYS, are piped to a fresh (non-thread-bound) RedisConnection to allow reads. Write commands are queued by RedisTemplate and applied upon commit.
https://docs.spring.io/spring-data/data-redis/docs/current/reference/html/#tx.spring
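Putting the quoted behaviour into configuration terms, a minimal sketch might look like the following (class and bean names are made up; you also need a PlatformTransactionManager in the context, e.g. a DataSourceTransactionManager or JpaTransactionManager, since Spring Data Redis does not provide one):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class RedisConfig {

    @Bean
    public RedisTemplate<String, String> redisTemplate(RedisConnectionFactory connectionFactory) {
        RedisTemplate<String, String> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        // Opt in to Spring-managed transactions: MULTI/EXEC/DISCARD are then
        // driven by the surrounding @Transactional scope.
        template.setEnableTransactionSupport(true);
        return template;
    }
}

A service method annotated with @Transactional that uses this template then has its writes queued and applied with EXEC on commit, exactly as the reference describes.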
