I need to use Redis as a data source in Java, so I decided to use this code:
public class RedisService {
    private static final Jedis jedis = new Jedis("host", 6400);

    public static Device getDevice(String key) {
        // Do something using Redis.
        return null;
    }
}
I thought the server would automatically initialize Jedis (the Redis API for Java). Is this a good way to use Jedis?
Have a look at how we are using Jedis:
Create a singleton org.springframework.data.redis.connection.jedis.JedisConnectionFactory instance by passing it the host and port info.
Create a singleton org.springframework.data.redis.core.RedisTemplate instance by passing the connection factory to it.
Use the redisTemplate created above in your service; the benefit of using RedisTemplate is that you can use it to perform operations across all the data structures provided by Redis (lists, sets, hashes).
Just for your reference, here's the Spring code that does the same. You can use it if you are using Spring; otherwise you can create the same thing in plain Java code (a sketch of that follows the service class below).
<!-- Create Factory -->
<bean id="jedisFactory" class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory">
    <property name="hostName" value="localhost" />
    <property name="port" value="6370" />
    <property name="timeout" value="5000" />
</bean>

<!-- Create Redis Template -->
<bean id="redisTemplate" class="org.springframework.data.redis.core.RedisTemplate">
    <property name="connectionFactory" ref="jedisFactory" />
</bean>

<!-- Your Service class -->
<bean id="serviceClass" class="RedisService">
    <property name="redisTemplate" ref="redisTemplate" />
</bean>
public class RedisService
{
    // Injected by Spring via the "redisTemplate" property above,
    // or obtained from wherever you keep your singletons otherwise.
    private RedisTemplate redisTemplate;

    public void setRedisTemplate(RedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public Device getDevice(String key) {
        // Do something using Redis.
        return null;
    }
}
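If you are not using Spring, a rough plain-Java equivalent of the two beans above might look like the sketch below (an assumption-laden sketch, not verbatim library documentation; it presumes spring-data-redis is on the classpath, and afterPropertiesSet() stands in for the container's init callbacks):

import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;

public class RedisConfigExample {

    public static RedisTemplate<String, Object> buildRedisTemplate() {
        // Equivalent of the "jedisFactory" bean.
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("localhost");
        factory.setPort(6370);
        factory.afterPropertiesSet(); // no container, so initialize manually

        // Equivalent of the "redisTemplate" bean.
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        template.afterPropertiesSet();
        return template;
    }
}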
As Santosh Joshi tried to explain, it is best to use a JedisFactory. Your singleton Jedis can "die" due to network problems, overload, etc., and you would have to restart your application to get a new connection to Redis.
To counter that, you can define a Jedis pool and, if you don't want to use Spring (on which Santosh's solution is based), you can use the JedisPool class that ships with Jedis. You can then define it as a singleton (as a static final field, or via Spring, for instance) and get Jedis instances from it.
As it is a pool, you can get more than one connection to Redis at a time (this is configurable), and it deals with broken connections: it creates a fresh new Jedis when one has died.
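For illustration, here is a minimal sketch of a shared JedisPool, reusing the host, port and Device class from the question (the pool size is an assumption, and try-with-resources requires a Jedis version in which Jedis implements Closeable):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class RedisService {

    private static final JedisPool POOL;

    static {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(16); // assumed pool size; tune for your load
        POOL = new JedisPool(config, "host", 6400);
    }

    public static Device getDevice(String key) {
        // Borrow a connection; close() returns it to the pool,
        // and the pool replaces connections that have gone bad.
        try (Jedis jedis = POOL.getResource()) {
            // Do something using Redis, e.g. jedis.get(key)
            return null;
        }
    }
}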
I have a controller, a service and a DAO class, all singletons.
DAO class:
@Autowired
JdbcTemplate jdbcTemplate;

@Override
public String addUsers(UserDTO userDto) throws Exception {
    System.out.println("JDBC TEMPLATE::" + jdbcTemplate);
    // Parameterized query; concatenating the user name directly would be open to SQL injection.
    String query = "INSERT INTO users VALUES (?)";
    System.out.println(query);
    jdbcTemplate.update(query, userDto.getUserName());
    return "success";
}
applicationContext.xml
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="dataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource" >
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://localhost:3306/demo" />
<property name="username" value="" />
<property name="password" value="" />
</bean>
In the DAO class I am using the JdbcTemplate, which is defined as a singleton, and the dataSource bean is also a singleton.
Now I have the following doubts:
1) If my JdbcTemplate is a singleton and the dataSource bean is a singleton, will they cause any problem for concurrent requests?
2) Is this the ideal way to create the JdbcTemplate bean and inject it into the DAO?
3) Should request scope be used only when a class holds instance variables?
In order to be able to work concurrently against your DB, I would suggest using connection pooling.
When multiple requests arrive concurrently, the connection pool will assign each of them a dedicated connection to work against the DB.
Of course, it is your responsibility to make sure you're not accessing the same "area" of your DB.
MySQL has a locking mechanism for such scenarios, but I would recommend doing deeper research on it.
There are 2 well-known connection pools:
Apache DBCP http://commons.apache.org/proper/commons-dbcp/
c3p0 http://www.mchange.com/projects/c3p0/
More detailed explanation:
Connection Pooling
It's a technique to allow multiple clients to make use of a cached set of shared and reusable connection objects providing access to a database.
Opening/Closing database connections is an expensive process and hence connection pools improve the performance of execution of commands on a database for which we maintain connection objects in the pool.
It facilitates reuse of the same connection object to serve a number of client requests.
Every time a client request is received, the pool is searched for an available connection object and it's highly likely that it gets a free connection object.
Otherwise, either the incoming requests are queued or a new connection object is created and added to the pool (depending on how many connections are already there in the pool and how many the particular implementation and configuration can support).
As soon as a request finishes using a connection object, the object is given back to the pool from where it's assigned to one of the queued requests (based on what scheduling algorithm the particular connection pool implementation follows for serving queued requests).
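As an illustration, a pooled DataSource can be a drop-in replacement for the DriverManagerDataSource used above. Here is a minimal sketch with Apache Commons DBCP2 (DBCP1, linked above, uses setMaxActive instead of setMaxTotal; the credentials and pool sizes are assumptions):

import org.apache.commons.dbcp2.BasicDataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class PooledDataSourceExample {

    public static JdbcTemplate buildJdbcTemplate() {
        // Connections are created up front and reused across requests,
        // instead of being opened and closed per call.
        BasicDataSource dataSource = new BasicDataSource();
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource.setUrl("jdbc:mysql://localhost:3306/demo");
        dataSource.setUsername("user");     // assumed credentials
        dataSource.setPassword("password");
        dataSource.setInitialSize(5);       // assumed pool sizing; tune for your load
        dataSource.setMaxTotal(20);

        return new JdbcTemplate(dataSource);
    }
}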
I am using a third-party library in my application to do some task. They have provided a wrapper that I've added to my project using Maven. To use this wrapper, we have to give an access key to their client class in order to use its functionality. For example:
final WeatherApiService was = new WeatherApiServiceImpl(accessKey);
final WeatherApiClient weatherApiClient = new WeatherApiClient(was);
What I want is to remove the above code (since it's kind of a singleton and should be registered in the Spring context when the application starts) and do something so that I can just autowire the WeatherApiClient and be good to go (the wrapper isn't using Spring, FYI). Below is what I did: in my Spring context I registered two beans and put the access key in web.xml.
spring-context.xml
<bean id="was" class="my.librarypath.WeatherApiService ">
<constructor-arg type="java.lang.String" value="${accessKeyFromWebXml}"/>
</bean>
<bean id="weatherApiClient" class="my.librarypath.WeatherApiClient">
<constructor-arg type="my.librarypath.WeatherApiService" value="was"/>
</bean>
My component that will use the third-party library:
#Component("myComponent")
public class MyComponent IComponent {
#Resource(name = "weatherApiClient") // <--- getting Error here i.e: Couldn't aurtowire, bean should be of String type
private String weatherApiClient;
public void myFunction() {
weatherApiClient.getWeather();
}
}
Can someone confirm whether I'm doing it right, or whether there are better best-practice options available?
There were two issues. First, the constructor-arg needs ref rather than value:
<bean id="weatherApiClient" class="my.librarypath.WeatherApiClient">
<constructor-arg type="my.librarypath.WeatherApiService" value="was"/>
// ^---- should be ref
</bean>
Secondly, I was using String instead of WeatherApiClient. MY BAD :/
@Resource(name = "weatherApiClient")
private String weatherApiClient;
//      ^---- this should have been WeatherApiClient
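Put together, the corrected component looks like this (a sketch assuming the wrapper's classes are importable as in the question):

import javax.annotation.Resource;
import org.springframework.stereotype.Component;

@Component("myComponent")
public class MyComponent implements IComponent {

    // Corrected: the field type matches the bean class instead of String.
    @Resource(name = "weatherApiClient")
    private WeatherApiClient weatherApiClient;

    public void myFunction() {
        weatherApiClient.getWeather();
    }
}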
I am using the HDIV Web Application Security Framework for a Java web application. Every new web page request generates HDIV-internal security information that is cached and used for security checks.
I have the following scenario:
I have one order page that pops up a confirmation page for 2 seconds when something is added to or removed from the cart.
After 50 popups, the underlying order page is removed from the cache, and therefore an error occurs in the app.
Does anybody know how to influence the HDIV cache-removal strategy to keep the base page alive?
One workaround is to increase org.hdiv.session.StateCache.maxSize from 50 to 500, but this would only cure the symptoms, not the underlying cause.
Update:
Using @rbelasko's solution, I succeeded in using the original org.hdiv.session.StateCache to change maxSize to 20, and verified in the debug log that cache entries are discarded after 20 entries.
When I changed it to use my own implementation, it didn't work.
Bean definition
<bean id="cache" class="com.mycompany.session.StateCacheTest" singleton="false"
init-method="init">
<property name="maxSize">
<value>20</value>
</property>
</bean>
My own class
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.hdiv.session.StateCache;

public class StateCacheTest extends StateCache
{
    private static final Log log = LogFactory.getLog(StateCacheTest.class);

    public StateCacheTest()
    {
        log.debug("StateCacheTest()");
    }

    @Override
    public void setMaxSize(final int maxSize)
    {
        super.setMaxSize(maxSize);
        if (log.isDebugEnabled())
        {
            log.debug("setMaxSize to " + maxSize);
        }
    }
}
There were no entries from StateCacheTest in the debug log.
Any ideas?
Update 2:
While I was not able to load a different IStateCache implementation via Spring, I was able to make this error less likely using
<hdiv:config ... maxPagesPerSession="200" ... />
The bean-settings definition

<property name="maxSize">
    <value>20</value>
</property>

had no effect on the cache size in my system.
You could create a custom IStateCache interface implementation.
Using HDIV's explicit configuration (not HDIV's newer custom schema), this is the default configuration for the "cache" bean:
<bean id="cache" class="org.hdiv.session.StateCache" singleton="false"
init-method="init">
<property name="maxSize">
<value>200</value>
</property>
</bean>
You could create your own implementation and implement the strategy that fits your requirements.
Regards,
Roberto
For efficiency reasons, I am interested in limiting the number of threads that simultaneously use the beans of the Spring application context (I don't want an unlimited number of threads processing in my limited memory).
I have found in the Spring documentation a way to achieve this by pooling the beans in an EJB style, by doing the following:
Declare the target bean as scope "prototype".
Declare a Pool provider that will deliver a limited number of pooled "target" instances.
Declare a "ProxyFactoryBean" which function is not clear to me.
Here is the declaration of these beans:
<bean id="businessObjectTarget" class="com.mycompany.MyBusinessObject"
scope="prototype">
... properties omitted
</bean>
<bean id="poolTargetSource" class="org.springframework.aop.target.CommonsPoolTargetSource">
<property name="targetBeanName" value="businessObjectTarget"/>
<property name="maxSize" value="25"/>
</bean>
<bean id="businessObject" class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="targetSource" ref="poolTargetSource"/>
<property name="interceptorNames" value="myInterceptor"/>
</bean>
My problem is: when I declare another bean that should use pooled instances of "businessObjectTarget", how should I do it? I mean, when I try to do something like this:
<bean id="clientBean" class="com.mycompany.ClientOfTheBusinessObject">
<property name="businessObject" ref="WHAT TO PUT HERE???"/>
</bean>
What should be the value of the "ref"?
You cannot use properties to get instances of prototypes.
One option is to use the lookup methods (see chapter 3.3.7.1)
Another option to get your bean in code: make your com.mycompany.ClientOfTheBusinessObject implement the ApplicationContextAware interface and then call context.getBean("businessObject") (the pooling proxy, not the client bean itself).
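A minimal sketch of that second option (the doWork method is a hypothetical stand-in for your business call):

import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

public class ClientOfTheBusinessObject implements ApplicationContextAware {

    private ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext context) throws BeansException {
        this.context = context;
    }

    public void doWork() {
        // Each lookup goes through the pooling "businessObject" proxy,
        // so the call is served by one of the pooled target instances.
        MyBusinessObject businessObject =
                (MyBusinessObject) context.getBean("businessObject");
        // ... use businessObject ...
    }
}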
Please note the name of the third bean in the Spring example: "businessObject".
That is the bean through which you are supposed to access the common pool.
For your case, if you need your own client bean, you may have it as follows; but in such a case the businessObject proxy is not required:
<bean id="businessObjectTarget" class="com.mycompany.MyBusinessObject"
scope="prototype">
... properties omitted
</bean>
<bean id="poolTargetSource" class="org.springframework.aop.target.CommonsPoolTargetSource">
<property name="targetBeanName" value="businessObjectTarget"/>
<property name="maxSize" value="25"/>
</bean>
<bean id="clientBean" class="com.mycompany.ClientOfTheBusinessObject">
<property name="poolTargetSource" ref="poolTargetSource"/>
</bean>
Java classes:
public class ClientOfTheBusinessObject {

    private CommonsPoolTargetSource poolTargetSource;

    // <getter and setter for poolTargetSource>

    public void methodToAccessCommonPool() throws Exception {
        // The following line gets an object from the pool. If there is nothing
        // left in the pool, the thread will block. (The blocking can be replaced
        // with an exception by changing the properties of the CommonsPoolTargetSource bean.)
        MyBusinessObject mbo = (MyBusinessObject) poolTargetSource.getTarget();
        // Do whatever you want to do with mbo.
        // The following line puts the object back into the pool.
        poolTargetSource.releaseTarget(mbo);
    }
}
I'm pretty sure you can limit the number of simultaneous threads in a less convoluted way. Did you look at the Java Concurrency API, specifically at Executors.newFixedThreadPool()?
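For illustration, a minimal sketch of bounding concurrency with a fixed thread pool instead of pooling beans (the pool size and the task body are assumptions):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedPoolExample {

    public static void main(String[] args) {
        // At most 4 tasks run concurrently; the rest wait in the queue.
        ExecutorService executor = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 20; i++) {
            final int taskId = i;
            executor.submit(() -> {
                // Work that would otherwise have needed its own pooled bean.
                System.out.println("Task " + taskId + " on " + Thread.currentThread().getName());
            });
        }

        executor.shutdown(); // stop accepting tasks, let queued ones finish
    }
}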
I used Java configuration to construct a proxy over the interface that handles pooling, using Apache commons-pool to achieve invocation-level pooling.
I did it using annotation-based configuration:
I created my BusinessObject class as a POJO and annotated it this way:
#Component("businessObject")
#Scope("prototype")
public class BusinessObject { ... }
I gave it a specific name and marked it as prototype so that Spring doesn't create a singleton instance for it; every time the bean is required, Spring creates a new instance.
In my @Configuration class (or in the @SpringBootApplication class, if using Spring Boot) I created a CommonsPool2TargetSource instance to hold BusinessObject instances:
@Bean
public CommonsPool2TargetSource pooledTargetSource() {
    final CommonsPool2TargetSource commonsPoolTargetSource = new CommonsPool2TargetSource();
    commonsPoolTargetSource.setTargetBeanName("businessObject");
    commonsPoolTargetSource.setTargetClass(BusinessObject.class);
    commonsPoolTargetSource.setMaxSize(maxPoolSize);
    return commonsPoolTargetSource;
}
Here I'm indicating that the pool will hold BusinessObject instances. Notice that maxPoolSize is set to the maximum number of BusinessObject instances I want the pool to hold.
Finally, I accessed my pooled instances this way:
@Autowired
private CommonsPool2TargetSource pooledTargetSource;

void someMethod() throws Exception {
    // First, retrieve one pooled BusinessObject instance.
    BusinessObject businessObject = (BusinessObject) pooledTargetSource.getTarget();
    try {
        // Second, do some logic using the BusinessObject instance obtained.
    } catch (SomePossibleException e) {
        // Catch and handle any potential error, if any.
    } finally {
        // Finally, after executing the business logic,
        // release the BusinessObject instance so that it can be reused.
        pooledTargetSource.releaseTarget(businessObject);
    }
}
It is very important to always release the BusinessObject borrowed from the pool, regardless of whether the business logic finished successfully or with an error. Otherwise the pool could run empty, with all the instances borrowed and never released, and any further requests for instances would block forever.
Using Spring and Hibernate, I want to write to one MySQL master database and read from one or more replicated slaves in a cloud-based Java webapp.
I can't find a solution that is transparent to the application code. I don't really want to have to change my DAOs to manage different SessionFactories, as that seems really messy and couples the code with a specific server architecture.
Is there any way of telling Hibernate to automatically route CREATE/UPDATE queries to one datasource, and SELECT to another? I don't want to do any sharding or anything based on object type - just route different types of queries to different datasources.
An example can be found here: https://github.com/afedulov/routing-data-source.
Spring provides a variation of DataSource called AbstractRoutingDataSource. It can be used in place of standard DataSource implementations and enables a mechanism to determine which concrete DataSource to use for each operation at runtime. All you need to do is extend it and provide an implementation of the abstract determineCurrentLookupKey method. This is the place to implement your custom logic to determine the concrete DataSource. The returned Object serves as a lookup key. It is typically a String or an Enum, used as a qualifier in the Spring configuration (details to follow).
package website.fedulov.routing;

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class RoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return DbContextHolder.getDbType();
    }
}
You might be wondering what that DbContextHolder object is and how it knows which DataSource identifier to return. Keep in mind that the determineCurrentLookupKey method will be called whenever the TransactionManager requests a connection. It is important to remember that each transaction is "associated" with a separate thread. More precisely, the TransactionManager binds the Connection to the current thread. Therefore, in order to dispatch different transactions to different target DataSources, we have to make sure that every thread can reliably identify which DataSource is destined for it. This makes it natural to use ThreadLocal variables for binding a specific DataSource to a Thread, and hence to a Transaction. This is how it is done:
public enum DbType {
    MASTER,
    REPLICA1,
}

public class DbContextHolder {

    private static final ThreadLocal<DbType> contextHolder = new ThreadLocal<DbType>();

    public static void setDbType(DbType dbType) {
        if (dbType == null) {
            throw new NullPointerException();
        }
        contextHolder.set(dbType);
    }

    public static DbType getDbType() {
        return contextHolder.get();
    }

    public static void clearDbType() {
        contextHolder.remove();
    }
}
As you see, you can also use an enum as the key and Spring will take care of resolving it correctly based on the name. Associated DataSource configuration and keys might look like this:
....
<bean id="dataSource" class="website.fedulov.routing.RoutingDataSource">
<property name="targetDataSources">
<map key-type="com.sabienzia.routing.DbType">
<entry key="MASTER" value-ref="dataSourceMaster"/>
<entry key="REPLICA1" value-ref="dataSourceReplica"/>
</map>
</property>
<property name="defaultTargetDataSource" ref="dataSourceMaster"/>
</bean>
<bean id="dataSourceMaster" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver"/>
<property name="url" value="${db.master.url}"/>
<property name="username" value="${db.username}"/>
<property name="password" value="${db.password}"/>
</bean>
<bean id="dataSourceReplica" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver"/>
<property name="url" value="${db.replica.url}"/>
<property name="username" value="${db.username}"/>
<property name="password" value="${db.password}"/>
</bean>
At this point you might find yourself doing something like this:
@Service
public class BookService {

    private final BookRepository bookRepository;
    private final Mapper mapper;

    @Inject
    public BookService(BookRepository bookRepository, Mapper mapper) {
        this.bookRepository = bookRepository;
        this.mapper = mapper;
    }

    @Transactional(readOnly = true)
    public Page<BookDTO> getBooks(Pageable p) {
        DbContextHolder.setDbType(DbType.REPLICA1); // <----- set ThreadLocal DataSource lookup key
        // all connections from here will go to REPLICA1
        Page<Book> booksPage = bookRepository.findAll(p);
        List<BookDTO> pContent = CollectionMapper.map(mapper, booksPage.getContent(), BookDTO.class);
        DbContextHolder.clearDbType(); // <----- clear ThreadLocal setting
        return new PageImpl<BookDTO>(pContent, p, booksPage.getTotalElements());
    }

    // ...other methods
}
Now we can control which DataSource will be used and forward requests as we please. Looks good!
...Or does it? First of all, those static method calls to a magical DbContextHolder really stick out. They look like they do not belong in the business logic. And they don't. Not only do they fail to communicate the purpose, they are fragile and error-prone (what about forgetting to clear the dbType?). And what if an exception is thrown between setDbType and clearDbType? We cannot just ignore that. We need to be absolutely sure that we reset the dbType, otherwise the Thread returned to the ThreadPool might be in a "broken" state, trying to write to a replica in the next call. So we need this:
@Transactional(readOnly = true)
public Page<BookDTO> getBooks(Pageable p) {
    Page<Book> booksPage;
    List<BookDTO> pContent;
    try {
        DbContextHolder.setDbType(DbType.REPLICA1); // <----- set ThreadLocal DataSource lookup key
        // all connections from here will go to REPLICA1
        booksPage = bookRepository.findAll(p);
        pContent = CollectionMapper.map(mapper, booksPage.getContent(), BookDTO.class);
    } catch (Exception e) {
        throw new RuntimeException(e);
    } finally {
        DbContextHolder.clearDbType(); // <----- make sure the ThreadLocal setting is cleared
    }
    return new PageImpl<BookDTO>(pContent, p, booksPage.getTotalElements());
}
Yikes >_< ! This definitely does not look like something I would like to put into every read-only method. Can we do better? Of course! This pattern of "do something at the beginning of a method, then do something at the end" should ring a bell. Aspects to the rescue!
Unfortunately this post has already gotten too long to cover the topic of custom aspects. You can follow up on the details of using aspects using this link; a rough sketch of the idea follows.
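A minimal sketch of such an aspect (the pointcut, the @Order value, and the class name are assumptions, not the linked article's exact code); it keys the routing off the transaction's readOnly flag so the service methods stay clean:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.core.annotation.Order;
import org.springframework.transaction.annotation.Transactional;

@Aspect
@Order(0) // assumed: must run before the transaction advice obtains a connection
public class ReadOnlyRouteAspect {

    @Around("@annotation(transactional)")
    public Object route(ProceedingJoinPoint pjp, Transactional transactional) throws Throwable {
        try {
            // Route read-only transactions to the replica, everything else to the master.
            DbContextHolder.setDbType(transactional.readOnly() ? DbType.REPLICA1 : DbType.MASTER);
            return pjp.proceed();
        } finally {
            DbContextHolder.clearDbType(); // always reset the ThreadLocal
        }
    }
}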
I don't think that deciding that SELECTs should go to one DB (a slave) and CREATEs/UPDATEs should go to a different one (the master) is a very good decision. The reasons are:
replication is not instantaneous, so you could CREATE something in the master DB and, as part of the same operation, SELECT it from the slave and notice that the data hasn't yet reached the slave.
if one of the slaves is down, you shouldn't be prevented from writing data in the master, because as soon as the slave is back up, its state will be synchronized with master. In your case though, your write operations are dependent on both master and slave.
How would you then define transactionality if you're in fact using 2 dbs?
I would advise using the master DB for all the WRITE flows, with all the statements they might require (whether they are SELECTs, UPDATEs or INSERTs). Then, the application dealing with the read-only flows can read from the slave DB.
I'd also advise having separate DAOs, each with its own methods, so that you'll have a clear distinction between read-only flows and write/update flows.
You could create two session factories and have a BaseDao wrapping the two factories (or the two HibernateTemplates, if you use them), and use the get methods with one factory and the saveOrUpdate methods with the other; a sketch of that idea follows.
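For illustration, a minimal sketch of such a BaseDao (the names are assumptions, and getCurrentSession() presumes a current-session context or active transactions on both factories):

import java.io.Serializable;

import org.hibernate.SessionFactory;

public abstract class BaseDao {

    private final SessionFactory readFactory;  // bound to the slave
    private final SessionFactory writeFactory; // bound to the master

    protected BaseDao(SessionFactory readFactory, SessionFactory writeFactory) {
        this.readFactory = readFactory;
        this.writeFactory = writeFactory;
    }

    @SuppressWarnings("unchecked")
    protected <T> T get(Class<T> type, Serializable id) {
        // Reads are served by the slave-bound factory.
        return (T) readFactory.getCurrentSession().get(type, id);
    }

    protected void saveOrUpdate(Object entity) {
        // Writes are served by the master-bound factory.
        writeFactory.getCurrentSession().saveOrUpdate(entity);
    }
}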
Try this: https://github.com/kwon37xi/replication-datasource
It works nicely and is very easy to implement, without any extra annotations or code. It requires only @Transactional(readOnly=true|false).
I have been using this solution with Hibernate (JPA), Spring JDBC Template, and iBatis.
You can use DDAL to write to a master database and read from slave databases with a DefaultDDRDataSource, without modifying your DAOs; what's more, DDAL provides load balancing across multiple slave databases. It doesn't rely on Spring or Hibernate. There is a demo project that shows how to use it: https://github.com/hellojavaer/ddal-demos, and demo1 is exactly the scenario you described.