How to use Pooled Spring beans instead of Singleton ones? - java

For efficiency reasons, I am interested in limiting the number of threads that simultaneously use the beans of the Spring application context (I don't want an unlimited number of threads processing in my limited memory).
I have found here (Spring documentation) a way to achieve this by pooling the beans in an EJB style, by doing the following:
Declare the target bean with scope "prototype".
Declare a pool provider that will deliver a limited number of pooled "target" instances.
Declare a "ProxyFactoryBean" whose function is not clear to me.
Here is the declaration of these beans:
<bean id="businessObjectTarget" class="com.mycompany.MyBusinessObject"
scope="prototype">
... properties omitted
</bean>
<bean id="poolTargetSource" class="org.springframework.aop.target.CommonsPoolTargetSource">
<property name="targetBeanName" value="businessObjectTarget"/>
<property name="maxSize" value="25"/>
</bean>
<bean id="businessObject" class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="targetSource" ref="poolTargetSource"/>
<property name="interceptorNames" value="myInterceptor"/>
</bean>
My problem is: when I declare another bean that should use pooled instances of "businessObjectTarget", how should I do it? I mean, when I try to do something like this:
<bean id="clientBean" class="com.mycompany.ClientOfTheBusinessObject">
<property name="businessObject" ref="WHAT TO PUT HERE???"/>
</bean>
What should be the value of the ref attribute?

You cannot use properties to get instances of prototypes.
One option is to use lookup method injection (see chapter 3.3.7.1 of the Spring documentation).
Another option is to get the bean in code: make your com.mycompany.ClientOfTheBusinessObject implement the ApplicationContextAware interface and then call context.getBean("businessObject") to obtain the pooled proxy.
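For example, a minimal sketch of the second option, assuming the bean names from the question (the doSomething() call is just a placeholder for your own business method):
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

public class ClientOfTheBusinessObject implements ApplicationContextAware {

    private ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext context) throws BeansException {
        this.context = context;
    }

    public void useBusinessObject() {
        // "businessObject" is the pooling proxy, so every call through it is served
        // by one of at most maxSize pooled MyBusinessObject instances.
        MyBusinessObject mbo = (MyBusinessObject) context.getBean("businessObject");
        mbo.doSomething(); // placeholder for your real business call
    }
}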

Please note the name of the third bean in the Spring example: "businessObject".
That is the bean through which you are supposed to access the pool.
For your case, if you want your own client bean, you can wire it as follows.
In that case the "businessObject" proxy is not required:
<bean id="businessObjectTarget" class="com.mycompany.MyBusinessObject"
scope="prototype">
... properties omitted
</bean>
<bean id="poolTargetSource" class="org.springframework.aop.target.CommonsPoolTargetSource">
<property name="targetBeanName" value="businessObjectTarget"/>
<property name="maxSize" value="25"/>
</bean>
<bean id="clientBean" class="com.mycompany.ClientOfTheBusinessObject">
<property name="poolTargetSource" ref="poolTargetSource"/>
</bean>
Java class:
public class ClientOfTheBusinessObject {

    private CommonsPoolTargetSource poolTargetSource;

    // <getter and setter for poolTargetSource>

    public void methodToAccessCommonPool() throws Exception {
        // The following line gets an object from the pool. If nothing is left in the pool,
        // the thread will block. (The blocking can be replaced with an exception by changing
        // the properties of the CommonsPoolTargetSource bean.)
        MyBusinessObject mbo = (MyBusinessObject) poolTargetSource.getTarget();
        // Do whatever you want to do with mbo.
        // The following line puts the object back into the pool.
        poolTargetSource.releaseTarget(mbo);
    }
}

I'm pretty sure you can limit the number of simultaneous threads in a less convoluted way. Did you look at the Java concurrency API, specifically at Executors.newFixedThreadPool()?
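A rough sketch of that approach (the pool size and the wrapper class are assumptions, adjust them to your needs):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BusinessTaskRunner {

    // At most 25 tasks run concurrently; additional tasks wait in the executor's queue.
    private final ExecutorService executor = Executors.newFixedThreadPool(25);

    public void submit(Runnable task) {
        executor.submit(task);
    }

    public void shutdown() {
        executor.shutdown();
    }
}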

I used Java configuration to construct a proxy over the interface that handles pooling with Apache Commons Pool, achieving invocation-level pooling.

I did it using annotation-based configuration:
I created my BusinessObject class as a POJO and annotated it this way:
@Component("businessObject")
@Scope("prototype")
public class BusinessObject { ... }
I gave it a specific name and marked it as prototype so that Spring doesn't create a singleton instance for it; every time the bean is required, Spring creates a new instance.
In my @Configuration class (or in the @SpringBootApplication class, if using Spring Boot) I created a CommonsPool2TargetSource instance to hold BusinessObject instances:
@Bean
public CommonsPool2TargetSource pooledTargetSource() {
    final CommonsPool2TargetSource commonsPoolTargetSource = new CommonsPool2TargetSource();
    commonsPoolTargetSource.setTargetBeanName("businessObject");
    commonsPoolTargetSource.setTargetClass(BusinessObject.class);
    commonsPoolTargetSource.setMaxSize(maxPoolSize);
    return commonsPoolTargetSource;
}
Here I'm indicating that the pool will hold BusinessObject instances. Notice that maxPoolSize is set to the maximum number of BusinessObject instances I want to keep in the pool.
Finally, I accessed my pooled instances this way:
@Autowired
private CommonsPool2TargetSource pooledTargetSource;

void someMethod() throws Exception {
    // First I retrieve one pooled BusinessObject instance
    BusinessObject businessObject = (BusinessObject) pooledTargetSource.getTarget();
    try {
        // Second, I run some logic using the retrieved BusinessObject instance
    } catch (SomePossibleException e) {
        // Catch and handle any potential error, if any
    } finally {
        // Finally, after executing my business logic,
        // I release the BusinessObject instance so that it can be reused
        pooledTargetSource.releaseTarget(businessObject);
    }
}
It is very important to always release the BusinessObject borrowed from the pool, regardless of whether the business logic finished successfully or with an error. Otherwise the pool could become exhausted, with all instances borrowed and never released, and any further requests for instances would block forever.
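If you want to make forgetting the release impossible at the call sites, one option (just a sketch; the helper name is mine and it would live in the same class as the @Autowired pooledTargetSource above) is to wrap the borrow/release pair once and pass the business logic in as a function:
import java.util.function.Function;

public <T> T withPooledBusinessObject(Function<BusinessObject, T> work) throws Exception {
    BusinessObject businessObject = (BusinessObject) pooledTargetSource.getTarget();
    try {
        return work.apply(businessObject);
    } finally {
        // The instance goes back to the pool even if `work` throws.
        pooledTargetSource.releaseTarget(businessObject);
    }
}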

Related

Get new instance of a spring bean

I have an interface called MyInterface. The class that implements MyInterface (let's call it MyImplClass) also implements the Runnable interface so I can use it to instantiate threads. This is my code now:
for (OtherClass obj : someList) {
    MyInterface myInter = new MyImplClass(obj);
    Thread t = new Thread(myInter);
    t.start();
}
What I want to do is to declare the implementing class in my ApplicationContext.xml and get a new instance for each iteration. So my code will look something like this:
for (OtherClass obj : someList) {
    MyInterface myInter = // getting the implementation from elsewhere
    Thread t = new Thread(myInter);
    t.start();
}
I want to still keep the IoC pattern if possible.
How can I do so?
Thanks
You can try the factory pattern with Spring prototype scope, as below. Define an abstract factory class that will give you a MyInterface object:
public abstract class MyInterfaceFactoryImpl implements MyInterfaceFactory {
    @Override
    public abstract MyInterface getMyInterface();
}
Then define the Spring beans.xml file as below. Please note the myinterface bean is defined as prototype, so it will always give you a new instance.
<bean name="myinterface" class="com.xxx.MyInterfaceImpl" scope="prototype"/>
Then define the factory bean with the lookup method:
<bean name="myinterfaceFactory" class="com.xxx.MyInterfaceFactoryImpl">
<lookup-method bean="myinterface" name="getMyInterface" />
</bean>
Now you can call myinterfaceFactory to get a new instance.
for (OtherClass obj : someList) {
    MyInterface myInter = myInterfaceFactory.getMyInterface();
    Thread t = new Thread(myInter);
    t.start();
}
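If you prefer to avoid XML, Spring 4.1+ offers the @Lookup annotation, which does the same thing as lookup-method. A sketch, assuming MyImplClass is registered as a prototype bean named "myinterface" (whether in XML as above or via annotations):
import org.springframework.beans.factory.annotation.Lookup;
import org.springframework.stereotype.Component;

@Component
public abstract class MyInterfaceFactoryImpl implements MyInterfaceFactory {

    // Spring overrides this method at runtime and returns a fresh
    // instance of the prototype "myinterface" bean on every call.
    @Lookup("myinterface")
    public abstract MyInterface getMyInterface();
}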
Keep the Spring configuration file, beans.xml, in the root of the classpath.
Setting scope="prototype" will result in a different bean instance for each getBean() invocation.
beans.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="myinterface" class="MyImplClass" scope="prototype"/>
</beans>
Similarly, if you want Spring to return the same bean instance each time one is needed, you should declare the bean's scope attribute as singleton.
Once the IoC container is initialized, you can retrieve your Spring beans. But make sure you do the initialization below only once.
ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
Then you can change your code as below.
for (OtherClass obj : someList) {
    MyInterface myInter = (MyInterface) context.getBean("myinterface");
    Thread t = new Thread(myInter);
    t.start();
}
Given the context you provided in your comment to me, I would suggest you don't have the MyImplClass instances created by Spring. Having this prototyped object instantiated by Spring provides no benefit from what I can tell.
The best way, in my opinion, to keep with the IoC pattern here would be to instead utilize a Spring managed Factory that produces instances of MyImplClass. Something along the lines of this:
public class MyInterfaceFactory {
    public MyInterface newInstance(final OtherClass o) {
        return new MyImplClass(o);
    }
}
Depending on the usage needs, you can modify this factory's interface to return MyImplClass, or add some logic to return a different implementation of MyInterface.
I tend to think that Factories and IoC/DI work pretty well together, and your use case is a pretty good example of that.
Initial Note 1
Instead of creating and starting threads by hand, I would suggest using a pool of threads that is externally configured, so that you can manage the number of threads that are created. If the size of someList is 1000, creating that many threads is inefficient. You should instead use an executor backed by a pool of threads. Spring provides implementations that can be used as Spring beans configured with the task namespace, something like this:
<task:executor id="executor" queue-capacity="10" rejection-policy="CALLER_RUNS" />
queue-capacity is the capacity of the executor's task queue. If that capacity is exceeded and no thread is available, the current thread will run the additional task itself, blocking the loop until another thread is freed (rejection-policy="CALLER_RUNS"). See the task:executor documentation, or define any ThreadPoolExecutor (Spring or JDK concurrent) with your own configuration.
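The Java-config equivalent would look roughly like this (the pool sizes and queue capacity below are assumptions; tune them for your workload):
import java.util.concurrent.ThreadPoolExecutor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class ExecutorConfig {

    @Bean
    public ThreadPoolTaskExecutor executor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(10);
        // When the queue is full and all threads are busy, the submitting thread runs the task itself.
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        return executor;
    }
}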
Initial Note 2
If the only state that you intend to store in MyImplClass is the item from the list, then you can forget the rest of the explanation below (except for the thread pool part) and directly use a singleton bean: remove the Runnable interface and its no-arg run() method, add a run(OtherClass obj) method and do something like this:
final MyInterface task = // get it from Spring as a singleton
for (final OtherClass obj : someList) {
    executor.execute(new Runnable() {
        public void run() { task.run(obj); }
    });
    // JDK 8: executor.execute(() -> task.run(obj));
}
If you plan to store some state inside MyImplClass during the execution of run() (other than the processed object), read on. But you will still use the run(OtherClass obj) method instead of the no-arg run().
The basic idea is to get a different object for each running thread, based on some kind of model or prototype defined as a Spring bean. To achieve this, define the bean that you initially wanted to pass to each thread as a proxy that dispatches to an instance bound to the running thread. This means that the same proxy instance of task is injected everywhere, and during the thread's execution, the real task on which you invoke methods is bound to the current thread.
Main program
Since you are using the elements of the list to do your business, you will pass each element to its owning task.
public class Program {

    @Resource private MyInterface task; // this is a proxy
    @Resource private TaskExecutor executor;

    public void executeConcurrently(List<OtherClass> someList) {
        for (final OtherClass obj : someList) {
            executor.execute(new Runnable() {
                public void run() { task.run(obj); }
            });
            // JDK 8: executor.execute(() -> task.run(obj));
        }
    }
}
We suppose that Program is a Spring bean, so the dependencies can be injected. If Program is not a Spring bean, you will need to get the Spring ApplicationContext from somewhere, then autowire Program (i.e. inject the dependencies found in the ApplicationContext, based on annotations). Something like this (in the constructor):
public Program(ApplicationContext ctx) {
    ctx.getAutowireCapableBeanFactory().autowireBean(this);
}
Define the task
<bean id="taskTarget" class="MyImplClass" scope="prototype" autowire-candidate="false" />
<bean id="task" class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="targetSource">
<bean class="org.springframework.aop.target.ThreadLocalTargetSource">
<property name="targetBeanName" value="taskTarget"/>
<property name="targetClass" value="MyInterface"/>
</bean>
</property>
</bean>
taskTarget is where you define your business logic. This bean is defined as a prototype, since a new instance will be allocated to each thread. Thanks to this, you can even store state that depends on the run() parameter. This bean is never used directly by the application (hence autowire-candidate="false"); it is used through the task bean. In executeConcurrently() above, the line task.run(obj) is actually dispatched to one of the prototype taskTarget instances created by the proxy.
If you can determine at runtime which MyImplClass instance to use, you could list all implementations as beans in your context XML and autowire an array of type MyInterface to get all MyInterface implementors.
Given the following in the context xml:
<bean class="MyImplClass" p:somethingCaseSpecific="case1"/>
<bean class="MyImplClass" p:somethingCaseSpecific="case2"/>
Then a declaration
@Autowired
MyInterface[] allInterfaceBeans;
will result in allInterfaceBeans containing both beans defined above.
If you wanted the logic for determining which implementation to use to run at injection time, you could always put @Autowired on a setter method setAllInterfaceBeans(MyInterface[] allInterfaceBeans), as sketched below.
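A small sketch of that setter-based variant (the class name and the selection logic are placeholders of mine):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class MyInterfaceSelector {

    private MyInterface chosen;

    @Autowired
    public void setAllInterfaceBeans(MyInterface[] allInterfaceBeans) {
        // Placeholder: put your case-specific selection logic here.
        this.chosen = allInterfaceBeans[0];
    }

    public MyInterface getChosen() {
        return chosen;
    }
}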
First and foremost, we all know that by default the Spring container creates beans as singletons (if you don't explicitly specify the scope). As the name implies, singleton guarantees that every time you ask for the bean, it will give you the same instance. Nevertheless, there are slight differences between a Spring singleton and the singleton described by the GoF. In Spring, the created instance is scoped to the container (not to the JVM, as in the GoF pattern).
Additionally, in spring, you can define two different bean instances of the same type but with different names and they will be two different instances created on the heap. But every time you reference one of those beans by name (ref= in a bean definition or getBean on the appContext), you get the same object every time. That is obviously different than the actual singleton pattern but similar in concept anyway.
Generally speaking, there are implications of using a singleton in a multi-threaded application (Spring singleton or actual singleton). Any state that you keep on these objects must account for the fact that multiple threads will access it. Usually, any state that exists will be set during instantiation via a setter or constructor argument. This category of Spring bean makes sense for long lived objects, thread-safe objects. If you want something thread specific and still desire spring to create the object, then prototype scope works.

How to autowire a class with non-empty constructor?

I'd like to @Autowire a class that has a non-empty constructor.
Just take the following as an example; it does not necessarily have to be a view/service. It could be any component with a non-default constructor:
@Component
class MyViewService {

    // the "datasource" to show in the view
    private List<String> companies;
    private MyObject obj;

    public MyViewService(List<String> companies, MyObject obj) {
        this.companies = companies;
        this.obj = obj;
    }
}
Of course I cannot just write
@Autowired
private MyViewService viewService;
as I'd like to use the constructor with the list. But how?
Are there better approaches than refactoring these sort of constructors to setters? I wouldn't like this approach as ideally the constructor forces other classes to provide all objects that are needed within the service. If I use setters, one could easily forget to set certain objects.
If you want Spring to manage MyViewService you have to tell Spring how to create an instance of it. If you're using XML configuration:
<bean id="myViewService" class="org.membersound.MyViewService">
<constructor-arg index="0" ref="ref_to_list" />
<constructor-arg index="1" ref="ref_to_object" />
</bean>
If you're using Java configuration, then you'd call the constructor yourself in your @Bean-annotated method, as sketched below.
Check out the Spring docs on this topic. To address a comment you made to another answer, you can create a List bean in XML as shown in the Spring docs. If the list data isn't fixed (which it's probably not) then you want to use an instance factory method to instantiate the bean.
In short, the answers you seek are all in the Spring docs :)
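For example, the Java-config variant might look like this (the list contents are placeholders, and it assumes a MyObject bean exists in the context):
import java.util.Arrays;
import java.util.List;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ViewConfig {

    @Bean
    public MyViewService myViewService(MyObject myObject) {
        // The constructor is called explicitly; myObject is injected by Spring.
        List<String> companies = Arrays.asList("Company A", "Company B"); // placeholder data
        return new MyViewService(companies, myObject);
    }
}
In that case you would drop the @Component annotation from MyViewService so it isn't also picked up by component scanning.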
If a component has a non-default constructor then you need to configure the constructor in the bean configuration.
If you are using XML,
it might look like this (example from the spring reference document):
<beans>
<bean id="foo" class="x.y.Foo">
<constructor-arg ref="bar"/>
<constructor-arg ref="baz"/>
</bean>
<bean id="bar" class="x.y.Bar"/>
<bean id="baz" class="x.y.Baz"/>
</beans>
The key here is constructor wiring of the bean that will be used for the @Autowired injection.
The way you use the bean has no impact.
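With annotation configuration, the most direct equivalent is constructor injection on the component itself. A sketch, assuming the context can actually supply the constructor arguments (e.g. a MyObject bean, and the list provided via a @Bean method or a qualifier):
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
class MyViewService {

    private final List<String> companies;
    private final MyObject obj;

    @Autowired // optional since Spring 4.3 when the class has a single constructor
    public MyViewService(List<String> companies, MyObject obj) {
        this.companies = companies;
        this.obj = obj;
    }
}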

How to setup Hibernate to read/write to different datasources?

Using Spring and Hibernate, I want to write to one MySQL master database, and read from one or more replicated slaves in a cloud-based Java webapp.
I can't find a solution that is transparent to the application code. I don't really want to have to change my DAOs to manage different SessionFactories, as that seems really messy and couples the code with a specific server architecture.
Is there any way of telling Hibernate to automatically route CREATE/UPDATE queries to one datasource, and SELECT to another? I don't want to do any sharding or anything based on object type - just route different types of queries to different datasources.
An example can be found here: https://github.com/afedulov/routing-data-source.
Spring provides a variation of DataSource called AbstractRoutingDataSource. It can be used in place of standard DataSource implementations and enables a mechanism to determine which concrete DataSource to use for each operation at runtime. All you need to do is extend it and provide an implementation of the abstract determineCurrentLookupKey method. This is the place to implement your custom logic to determine the concrete DataSource. The returned Object serves as a lookup key. It is typically a String or an enum, used as a qualifier in the Spring configuration (details will follow).
package website.fedulov.routing;

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class RoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return DbContextHolder.getDbType();
    }
}
You might be wondering what that DbContextHolder object is and how it knows which DataSource identifier to return. Keep in mind that the determineCurrentLookupKey method will be called whenever the TransactionManager requests a connection. It is important to remember that each transaction is "associated" with a separate thread. More precisely, the TransactionManager binds the Connection to the current thread. Therefore, in order to dispatch different transactions to different target DataSources, we have to make sure that every thread can reliably identify which DataSource is destined for it to use. This makes it natural to utilize ThreadLocal variables for binding a specific DataSource to a Thread and hence to a Transaction. This is how it is done:
public enum DbType {
    MASTER,
    REPLICA1,
}

public class DbContextHolder {

    private static final ThreadLocal<DbType> contextHolder = new ThreadLocal<DbType>();

    public static void setDbType(DbType dbType) {
        if (dbType == null) {
            throw new NullPointerException();
        }
        contextHolder.set(dbType);
    }

    public static DbType getDbType() {
        return contextHolder.get();
    }

    public static void clearDbType() {
        contextHolder.remove();
    }
}
As you see, you can also use an enum as the key and Spring will take care of resolving it correctly based on the name. Associated DataSource configuration and keys might look like this:
....
<bean id="dataSource" class="website.fedulov.routing.RoutingDataSource">
    <property name="targetDataSources">
        <map key-type="website.fedulov.routing.DbType">
            <entry key="MASTER" value-ref="dataSourceMaster"/>
            <entry key="REPLICA1" value-ref="dataSourceReplica"/>
        </map>
    </property>
    <property name="defaultTargetDataSource" ref="dataSourceMaster"/>
</bean>
<bean id="dataSourceMaster" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="${db.master.url}"/>
    <property name="username" value="${db.username}"/>
    <property name="password" value="${db.password}"/>
</bean>
<bean id="dataSourceReplica" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="${db.replica.url}"/>
    <property name="username" value="${db.username}"/>
    <property name="password" value="${db.password}"/>
</bean>
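The same wiring in Java config might look roughly like this; it is only a sketch, and it assumes the master and replica DataSource beans are defined elsewhere (for example as above):
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RoutingDataSourceConfig {

    @Bean
    public DataSource dataSource(@Qualifier("dataSourceMaster") DataSource master,
                                 @Qualifier("dataSourceReplica") DataSource replica) {
        RoutingDataSource routingDataSource = new RoutingDataSource();
        Map<Object, Object> targetDataSources = new HashMap<>();
        targetDataSources.put(DbType.MASTER, master);
        targetDataSources.put(DbType.REPLICA1, replica);
        routingDataSource.setTargetDataSources(targetDataSources);
        routingDataSource.setDefaultTargetDataSource(master);
        return routingDataSource;
    }
}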
At this point you might find yourself doing something like this:
@Service
public class BookService {

    private final BookRepository bookRepository;
    private final Mapper mapper;

    @Inject
    public BookService(BookRepository bookRepository, Mapper mapper) {
        this.bookRepository = bookRepository;
        this.mapper = mapper;
    }

    @Transactional(readOnly = true)
    public Page<BookDTO> getBooks(Pageable p) {
        DbContextHolder.setDbType(DbType.REPLICA1); // <----- set ThreadLocal DataSource lookup key
        // all connections from here will go to REPLICA1
        Page<Book> booksPage = bookRepository.findAll(p);
        List<BookDTO> pContent = CollectionMapper.map(mapper, booksPage.getContent(), BookDTO.class);
        DbContextHolder.clearDbType(); // <----- clear ThreadLocal setting
        return new PageImpl<BookDTO>(pContent, p, booksPage.getTotalElements());
    }

    ...// other methods
}
Now we can control which DataSource will be used and forward requests as we please. Looks good!
...Or does it? First of all, those static method calls to a magical DbContextHolder really stick out. They look like they do not belong in the business logic. And they don't. Not only do they not communicate the purpose, but they seem fragile and error-prone (what about forgetting to clear the dbType?). And what if an exception is thrown between setDbType and clearDbType? We cannot just ignore it. We need to be absolutely sure that we reset the dbType, otherwise the Thread returned to the ThreadPool might be in a "broken" state, trying to write to a replica in the next call. So we need this:
@Transactional(readOnly = true)
public Page<BookDTO> getBooks(Pageable p) {
    try {
        DbContextHolder.setDbType(DbType.REPLICA1); // <----- set ThreadLocal DataSource lookup key
        // all connections from here will go to REPLICA1
        Page<Book> booksPage = bookRepository.findAll(p);
        List<BookDTO> pContent = CollectionMapper.map(mapper, booksPage.getContent(), BookDTO.class);
        return new PageImpl<BookDTO>(pContent, p, booksPage.getTotalElements());
    } catch (Exception e) {
        throw new RuntimeException(e);
    } finally {
        DbContextHolder.clearDbType(); // <----- make sure the ThreadLocal setting is cleared
    }
}
Yikes >_< ! This definitely does not look like something I would like to put into every read only method. Can we do better? Of course! This pattern of "do something at the beginning of a method, then do something at the end" should ring a bell. Aspects to the rescue!
Unfortunately this post has already gotten too long to cover the topic of custom aspects. You can follow up on the details of using aspects using this link.
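For reference, here is a sketch of what such an aspect could look like; the @ReadOnlyConnection annotation and the class names are made up for this example, not taken from the linked article:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

// Marker for methods that should run against the replica.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface ReadOnlyConnection {}

@Aspect
@Component
public class ReadOnlyConnectionAspect {

    @Around("@annotation(readOnlyConnection)")
    public Object aroundReadOnly(ProceedingJoinPoint pjp, ReadOnlyConnection readOnlyConnection) throws Throwable {
        try {
            DbContextHolder.setDbType(DbType.REPLICA1);
            return pjp.proceed();
        } finally {
            // Cleared even if the method throws, so the thread never stays "broken".
            DbContextHolder.clearDbType();
        }
    }
}
Note that the lookup key has to be set before a connection is actually bound to the transaction, so either order the aspect to run before the @Transactional advice or wrap the routing DataSource in a LazyConnectionDataSourceProxy.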
I don't think that deciding that SELECTs should go to one DB (one slave) and CREATE/UPDATES should go to a different one (master) is a very good decision. The reasons are:
replication is not instantaneous, so you could CREATE something in the master DB and, as part of the same operation, SELECT it from the slave and notice that the data hasn't yet reached the slave.
if one of the slaves is down, you shouldn't be prevented from writing data in the master, because as soon as the slave is back up, its state will be synchronized with master. In your case though, your write operations are dependent on both master and slave.
How would you then define transactionality if you're in fact using 2 dbs?
I would advise using the master DB for all the WRITE flows, with all the instructions they might require (whether they are SELECTs, UPDATE or INSERTS). Then, the application dealing with the read-only flows can read from the slave DB.
I'd also advise having separate DAOs, each with its own methods, so that you'll have a clear distinction between read-only flows and write/update flows.
You could create two session factories and have a BaseDao wrapping the two factories (or the two HibernateTemplates, if you use them), and use the get methods with one factory and the saveOrUpdate methods with the other.
Try this: https://github.com/kwon37xi/replication-datasource
It works nicely and is very easy to implement without any extra annotations or code. It requires only @Transactional(readOnly=true|false).
I have been using this solution with Hibernate (JPA), Spring JdbcTemplate, and iBatis.
You can use DDAL to write to the master database and read from slave databases through a DefaultDDRDataSource without modifying your DAOs; moreover, DDAL provides load balancing for multiple slave databases. It doesn't rely on Spring or Hibernate. There is a demo project showing how to use it: https://github.com/hellojavaer/ddal-demos, and demo1 is exactly the scenario you described.

Spring: Using "Lookup method injection" for my ThreadFactory looks not scalable

We're building a ThreadFactory so that every time a singleton controller needs a new thread, I get a new instance.
Lookup method injection looks good, but what if we have multiple thread classes? I like the fact that I can autowire my thread beans.
like:
public abstract class ThreadManager {
    public abstract Thread createThreadA();
    public abstract Thread createThreadB();
}
and config:
<bean id="threadManager" class="bla.ThreadManager" singleton="true">
    <lookup-method name="createThreadA" bean="threadA" />
    <lookup-method name="createThreadB" bean="threadB" />
</bean>
<!-- Yes! I can autowire now :) -->
<bean id="threadA" class="bla.ThreadA" singleton="false" autowire="byType"/>
<bean id="threadB" class="bla.ThreadB" singleton="false" autowire="byType"/>
and usage:
threadManager.createThreadA();
Question: I don't want to create an abstract "create" method for every new thread class.
Is it possible to make this generic, like:
threadManager.createThread(ThreadA.class);
I also looked at ServiceLocatorFactoryBean, but for multiple classes I have to pass the bean name (not type safe).
Thank you
I don't think there is a way to do that automatically. And if you don't want to use an ExecutorService, as suggested, you can achieve this manually, if it is such a problem for you (but I don't think it is):
make your threadManager implement ApplicationContextAware or BeanFactoryAware, thus obtaining the application context / bean factory;
in your createThread(..) method, use the context/factory obtained above to get an instance of the thread bean (which should be of scope prototype, of course). A sketch of this follows.
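A minimal sketch of that approach, assuming the thread beans are registered as prototypes of their concrete types:
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

public class ThreadManager implements ApplicationContextAware {

    private ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext context) throws BeansException {
        this.context = context;
    }

    // Returns a new instance per call because the thread beans are prototype-scoped.
    public <T extends Thread> T createThread(Class<T> threadClass) {
        return context.getBean(threadClass);
    }
}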

Should references in RMI exposed services be transient?

I'm exposing some services using RMI on Spring. Every service has a dependency on another service bean which does the real processing job. For example:
<bean id="accountService" class="example.AccountServiceImpl">
<!-- any additional properties, maybe a DAO? -->
</bean>
<bean id="rmiAccount" class="example.AccountRmiServiceImpl"/>
<bean class="org.springframework.remoting.rmi.RmiServiceExporter">
<!-- does not necessarily have to be the same name as the bean to be exported -->
<property name="serviceName" value="AccountService"/>
<property name="service" ref="accountService"/>
<property name="serviceInterface" value="example.AccountService"/>
<!-- defaults to 1099 -->
<property name="registryPort" value="1199"/>
</bean>
My AccountRmiServiceImpl looks like this:
public class AccountRmiServiceImpl implements AccountRmiService {

    private static final long serialVersionUID = -8839362521253363446L;

    private AccountService accountService;

    @Autowired
    public void setAccountService(AccountService accountService) {
        this.accountService = accountService;
    }
}
My question is: could AccountServiceImpl be created without implementing the Serializable marker interface? If that is the case, then its reference in AccountRmiServiceImpl should be made transient. This means that it would not be serialized and transferred to the client where the RMI invocation is being made. Is it possible?
Maybe.
You could definitely mark the accountService field as transient, which would indeed stop it from being serialised and sent over RMI (or, more accurately, from failing to be serialised and throwing an exception). However, at this point the AccountRmiServiceImpl that's reconstructed on the other side will have a null value for its accountService, which without any other changes would almost certainly lead to a NullPointerException later.
If your AccountServiceImpl is not serialisable (in the Java sense), but you are still able to create an instance of it based on some simple serialisable information, then you're in luck. You can implement the serialisation yourself using the writeObject/readObject or writeReplace/readResolve methods (see Serializable for details).
If instances of AccountServiceImpl are not serialisable in any sense of the word (e.g. an anonymous inner class with inline logic as well as references to final local variables in its outer scope), then there's no way to send this across. What kind of object would be recreated on the other side? If this is the situation you find yourself in, you'd need to refactor your code to make the class(es) serialisable.
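To make the writeReplace/readResolve suggestion a bit more concrete, a rough sketch is shown below; the StaticContextHolder helper is an assumption of mine (some statically reachable way to look up beans on the receiving side), not part of Spring or the original code:
import java.io.Serializable;

public class AccountRmiServiceImpl implements AccountRmiService, Serializable {

    private static final long serialVersionUID = -8839362521253363446L;

    // Not serialised; must be re-wired after deserialisation.
    private transient AccountService accountService;

    public void setAccountService(AccountService accountService) {
        this.accountService = accountService;
    }

    // Called by Java serialisation after the object is read; re-obtains the
    // non-serialisable dependency from a hypothetical static lookup helper.
    private Object readResolve() {
        this.accountService = StaticContextHolder.getBean(AccountService.class);
        return this;
    }
}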
