I need to pass data related to processing an item between item processors. I don't need to persist the data. What is the best approach? (Note: I'm currently using StepSynchronizationManager to access the StepExecution and store the data in its ExecutionContext.)
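Roughly what I am doing now (simplified; the "itemData" key and the computed value are just for illustration):
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.scope.context.StepSynchronizationManager;
import org.springframework.batch.item.ItemProcessor;

public class FirstProcessor implements ItemProcessor<String, String> {
    @Override
    public String process(String item) throws Exception {
        // fetch the current StepExecution and stash data for later processors
        StepExecution stepExecution = StepSynchronizationManager.getContext().getStepExecution();
        stepExecution.getExecutionContext().put("itemData", item.toUpperCase());
        return item;
    }
}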
What makes you think that your way, storing the data in the StepExecutionContext, is bad or not the best way?
You could try it without saving the data in the StepExecution, and instead transform the items between the processors:
public class FirstProcessor implements ItemProcessor<String, String> {...}
public class SecondProcessor implements ItemProcessor<String, OtherClass> {
    @Override
    public OtherClass process(String item) throws Exception {
        // wrap the item in an OtherClass carrying the data the next processor needs
        return otherClassObjectWithDataForNextProcessor;
    }
}
public class ThirdProcessor implements ItemProcessor<OtherClass, TargetClass> {...}
public class CustomItemWriter implements ItemWriter<TargetClass> {...}
See the Spring Batch documentation on chaining item processors.
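The chain itself is typically wired up with a CompositeItemProcessor; a minimal Java-config sketch (class and bean names are illustrative) could look like this:
import java.util.Arrays;
import org.springframework.batch.item.support.CompositeItemProcessor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ProcessorConfig {

    @Bean
    public CompositeItemProcessor<String, TargetClass> chainedProcessor() {
        CompositeItemProcessor<String, TargetClass> composite = new CompositeItemProcessor<>();
        // each delegate's output becomes the next delegate's input
        composite.setDelegates(Arrays.asList(
                new FirstProcessor(),    // String -> String
                new SecondProcessor(),   // String -> OtherClass
                new ThirdProcessor()));  // OtherClass -> TargetClass
        return composite;
    }
}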
I have a base class like (this is an artificial example):
@Document(collection = "cars")
public class BaseCar implements Serializable
{
    private int number;
    private String color;
    ...
}
Then I have a derived class like:
@Document(collection = "cars")
public class FastCar extends BaseCar implements Serializable
{
    private int numberOfWonRaces;
    ...
}
For both I have a MongoRepository interface:
public interface BaseCarRepository extends MongoRepository<BaseCar, String> {
    ...
}
and
public interface FastCarRepository extends MongoRepository<FastCar, String> {
    ...
}
If I now save a FastCar in MongoDB, an additional _class field is added which indicates where the data comes from. In this example it contains FastCar.
In my project I have a REST API interface to get cars. I use the findBy function to get a car by its color. For example:
BaseCar baseCar = baseCarRep.findByColor(color);
Even though I work with a BaseCar reference, Spring Data detects that it is a FastCar and returns a FastCar object with all the information.
Question:
Is there a way to force Spring Data to return only a BaseCar? I do not want to expose all the information through the REST API.
What I have done so far:
If I remove the _class field in MongoDB, Spring Data cannot automatically detect the class anymore and returns a BaseCar. But I do not want to lose this functionality by forcing Spring Data to omit the _class field (see Spring Data MongoDB: MappingMongoConverter remove _class).
It seems there is also a way to filter the returned fields with projections. To me this is not an elegant way, as I have to write down all the fields again and update them as soon as I update the BaseCar class.
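For reference, such an interface-based projection would look roughly like this (the names are illustrative); these getters are what would have to be kept in sync with BaseCar:
// BaseCarProjection.java - closed projection: only the declared getters are exposed
public interface BaseCarProjection {
    int getNumber();
    String getColor();
}

// BaseCarRepository.java - a derived query returning the projection
public interface BaseCarRepository extends MongoRepository<BaseCar, String> {
    BaseCarProjection findProjectedByColor(String color);
}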
Thank you for any help.
Philipp
Imagine I have a Storage bean which encapsulates the logic related to storing my entities.
public interface Storage {
    Object get(String id);
    String save(Object obj);
}
And I have 3 implementations:
public class FileStorage implements Storage { ... }  // needs FileService
public class RedisStorage implements Storage { ... } // needs JedisPool, RedisService and RedisSerializer
public class MixedStorage implements Storage { ... } // combines other Storages
I also have 2 properties:
redis.enabled
file.enabled
Depending on these properties, I have to create either one of the beans, or both of them combined in a MixedStorage (or none, but that is outside the scope of this question).
I have created a StorageFactory factory-bean:
public class StorageFactory {
    // decide which impl to create based on the properties
}
Right now I am passing in all the resources needed by all implementations (RedisSerializer, JedisPool, RedisService, FileService). The number of these resources can grow very fast as new implementations are added.
Is there any way not to pass all the dependencies, but initialize them later?
I am using XML
I don't know if it will be useful for you, but with annotations it looks like this:
For beans:
@Component("FileStorage")
public class FileStorage implements Storage { ... }
For service:
@Service
public class StorageFactory {

    @Autowired
    private Map<String, Storage> storageMap; // key = bean name, value = bean instance
}
And yes, the map will contain all Storage beans, but you will be able to implement some logic based on your property file, as sketched below.
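For example, a rough sketch of that logic (the bean names and the MixedStorage constructor are assumptions based on your description):
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

@Service
public class StorageFactory {

    @Autowired
    private Map<String, Storage> storageMap; // key = bean name, value = bean instance

    @Value("${redis.enabled}")
    private boolean redisEnabled;

    @Value("${file.enabled}")
    private boolean fileEnabled;

    public Storage createStorage() {
        if (redisEnabled && fileEnabled) {
            // combine both enabled implementations
            return new MixedStorage(storageMap.get("RedisStorage"), storageMap.get("FileStorage"));
        }
        return redisEnabled ? storageMap.get("RedisStorage") : storageMap.get("FileStorage");
    }
}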
I have an entity type that needs additional logic on saving (to be precise, I want to store the entity's position at the moment of saving). I don't want to do it with DB-specific features like triggers, because I'm not sure which data storage will be used in the future.
So I would like to override save() method.
In Spring Data JPA documentation I can see two ways of providing own implementation for repository classes:
1. Extend the base repository class and tell Spring Data to use it.
2. Define an extra interface (in my case, say, PositionedRepository) with an implementation class (PositionedRepositoryImpl).
The problem with the first way: I don't want to implement it for all repositories; only two entity types are positioned.
The problem with the second way: I don't have access to the base repository methods, so apart from the position calculation I would need to somehow rebuild all the queries normally provided by the base repository.
Is there any way to extend the base repository class just for specific repository types?
Don't put that logic in the repository itself. Think of repositories as a dumb layer between Java and the database; they just pass data from one end to the other.
Instead, you should handle that case in a different, more intelligent layer: the business logic layer.
See this example:
@Service
public class MyEntityService {

    private final MyEntityRepository myEntityRepository;
    private final OtherEntityRepository otherEntityRepository;

    @Autowired
    public MyEntityService(MyEntityRepository myEntityRepository,
                           OtherEntityRepository otherEntityRepository) {
        this.myEntityRepository = myEntityRepository;
        this.otherEntityRepository = otherEntityRepository;
    }

    public void save(MyEntity myEntity) {
        // do stuff with otherEntityRepository, e.g. calculate the position
        myEntityRepository.save(myEntity);
    }
}
You can:
public class CustomJpaRepository<T, ID extends Serializable> extends SimpleJpaRepository<T, ID> {

    private final JpaEntityInformation<T, ?> entityInformationWrap;
    private final EntityManager emWrap;

    public CustomJpaRepository(JpaEntityInformation<T, ?> entityInformation, EntityManager entityManager) {
        super(entityInformation, entityManager);
        this.entityInformationWrap = entityInformation;
        this.emWrap = entityManager;
    }

    @Override
    public <S extends T> S save(S entity) {
        // your additional save logic goes here, then delegate to the default behavior
        return super.save(entity);
    }
}
Then add this to your main class:
@EnableJpaRepositories(repositoryBaseClass = CustomJpaRepository.class)
As a third option, you can extend SimpleJpaRepository, which implements JpaRepository and JpaSpecificationExecutor. This way you benefit from the default implementation of JpaRepository while retaining the ability to override its methods.
For example:
@Repository
public class PositionedRepository extends SimpleJpaRepository<Positioned, Long> {

    public PositionedRepository(EntityManager em) {
        // SimpleJpaRepository has no default constructor
        super(Positioned.class, em);
    }

    @Override
    public <S extends Positioned> S save(S positioned) {
        ...
    }
}
As a fourth option, you can also define your own savePositioned() method that uses JpaRepository.save() under the hood.
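A sketch of that fourth option with a Java 8 default method (assuming Positioned exposes a position setter):
import org.springframework.data.jpa.repository.JpaRepository;

public interface PositionedRepository extends JpaRepository<Positioned, Long> {

    default Positioned savePositioned(Positioned positioned, long position) {
        positioned.setPosition(position); // assumed setter on Positioned
        return save(positioned);          // the stock JpaRepository.save() under the hood
    }
}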
I've got around 5 object types that I want to do similar things with. To avoid polluting the code, I decided to put the logic for those objects in one place.
public class MetaObjectController<T extends MetaObject> {

    @Autowired
    private final MetaObjectRepository<T> repository;

    // generic logic
Here's how the repository looks:
public interface MetaObjectRepository<T extends MetaObject> extends GraphRepository<T> {
    T findByName(String name);
}
Now I create a concrete class which uses delegation:
public class ExperimentalController {

    @Autowired
    private final MetaObjectController<MetaCategory> metaController;

    @RequestMapping(method = RequestMethod.POST)
    public void add(@RequestBody MetaCategory toAdd) {
        metaController.add(toAdd);
    }
Now, when I look at the generated queries, I see that although everything is instantiated correctly, the repository uses MetaObject as the entity name instead of the runtime type.
Is there a way to force the repository to use the runtime type?
Please don't advise me to use a @Query annotation; that's not what I am looking for.
This is most probably due to type erasure: at runtime, only the type bound is available, which is MetaObject. If you want Spring Data to use the actually relevant subclass, you will have to create explicit sub-interfaces of MetaObjectRepository like this:
public class Transmogrifier extends MetaObject {}

public interface MetaTransmogrifierRepository
        extends MetaObjectRepository<Transmogrifier> {}
I want to know when we need to use the abstract factory pattern.
Here is an example; I want to know whether the pattern is necessary there.
The UML diagram:
The above is the abstract factory pattern; it is recommended by my classmate.
The following is my own implementation. I do not think it is necessary to use the pattern.
And here is some of the core code:
package net;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
public class Test {
    public static void main(String[] args) throws IOException, InstantiationException,
            IllegalAccessException, ClassNotFoundException {
        DaoRepository dr = new DaoRepository();
        AbstractDao dao = dr.findDao("sql");
        dao.insert();
    }
}

class DaoRepository {
    Map<String, AbstractDao> daoMap = new HashMap<String, AbstractDao>();

    public DaoRepository() throws IOException, InstantiationException,
            IllegalAccessException, ClassNotFoundException {
        Properties p = new Properties();
        p.load(DaoRepository.class.getResourceAsStream("Test.properties"));
        initDaos(p);
    }

    public void initDaos(Properties p) throws InstantiationException,
            IllegalAccessException, ClassNotFoundException {
        String[] daoArray = p.getProperty("dao").split(",");
        for (String dao : daoArray) {
            AbstractDao ad = (AbstractDao) Class.forName(dao).newInstance();
            daoMap.put(ad.getID(), ad);
        }
    }

    public AbstractDao findDao(String id) { return daoMap.get(id); }
}
abstract class AbstractDao {
public abstract String getID();
public abstract void insert();
public abstract void update();
}
class SqlDao extends AbstractDao {
public SqlDao() {}
public String getID() {return "sql";}
public void insert() {System.out.println("sql insert");}
public void update() {System.out.println("sql update");}
}
class AccessDao extends AbstractDao {
public AccessDao() {}
public String getID() {return "access";}
public void insert() {System.out.println("access insert");}
public void update() {System.out.println("access update");}
}
And the content of the Test.properties is just one line:
dao=net.SqlDao,net.AccessDao
So, can anyone tell me whether the pattern is necessary in this situation?
------------------- The following is added to explain the real situation --------------
I used the DAO example because it is common and everyone knows it.
In fact, what I am working on now is not related to DAOs. I am building a web service that contains some algorithms to convert a file to other formats, for example net.CreatePDF, net.CreateWord, etc. It exposes two interfaces to the client: getAlgorithms and doProcess.
The getAlgorithms call returns the ids of all the algorithms; each id relates to the corresponding algorithm.
A user who calls the doProcess method also provides the id of the algorithm he wants.
All the algorithms extend AbstractAlgorithm, which defines a run() method.
I use an AlgorithmRepository to store all the algorithms (loaded from a properties file in which the web service admin configures the concrete Java classes of the algorithms). That is to say, the doProcess interface exposed by the web service is executed by the concrete algorithm.
I can give a simple example:
1) The user sends a getAlgorithms request:
http://host:port/ws?request=getAlgorithms
The user then gets a list of algorithms embedded in an XML document:
<AlgorithmsList>
    <algorithm>pdf</algorithm>
    <algorithm>word</algorithm>
</AlgorithmsList>
2) The user sends a doProcess request to the server:
http://xxx/ws?request=doProcess&algorithm=pdf&file=http://xx/Test.word
When the server receives this type of request, it gets the concrete algorithm instance from the AlgorithmRepository according to the "algorithm" parameter (pdf in this request) and calls the method:
AbstractAlgorithm algo = AlgorithmRepository.getAlgo("pdf");
algo.run();
Then a PDF file is sent to the user.
BTW, in this example each algorithm plays a role similar to SqlDao and AccessDao above.
Here is the design image:
Now, does the AlgorithmRepository need to use the abstract factory pattern?
The main difference between the two approaches is that the top one uses different DAO factories to create DAOs, while the bottom one stores a set of DAOs and returns references to the DAOs held in the repository.
The bottom approach has a problem if multiple threads need access to the same type of DAO concurrently, as JDBC connections are not synchronised.
This can be fixed by having the DAO implement a newInstance() method which simply creates and returns a new DAO.
abstract class AbstractDao {
public abstract String getID();
public abstract void insert();
public abstract void update();
public abstract AbstractDao newInstance();
}
class SqlDao extends AbstractDao {
public SqlDao() {}
public String getID() {return "sql";}
public void insert() {System.out.println("sql insert");}
public void update() {System.out.println("sql update");}
public AbstractDao newInstance() { return new SqlDao();}
}
The repository can then use the DAOs it holds as factories for the DAOs it hands out (in which case I would rename Repository to Factory), like this:
public AbstractDao newDao(String id) {
    return daoMap.containsKey(id) ? daoMap.get(id).newInstance() : null;
}
Update
As for your question whether your web service should implement a factory or can use the repository as you described: again, the answer depends on the details.
- For web services it is normal to expect multiple concurrent clients.
- Therefore the instances executing the process for two clients must not influence each other, which means they must not have shared state.
- A factory delivers a fresh instance on every request, so no state is shared when you use a factory pattern.
- If (and only if) the instances in your repository are stateless, your web service can also use the repository as you describe; they will then probably need to instantiate other objects to actually execute the process, based on the request parameters passed.
If you are asking to compare the two designs from the UML, the second API has the following disadvantage:
The caller needs to explicitly specify the type of DAO in the call to getDAO(). But the caller shouldn't care about the type of DAO it works with, as long as the DAO complies with the interface. The first design lets the caller simply call createDAO() and get an interface to work with. This way the control over which implementation to use is more flexible, and the caller doesn't carry this responsibility, which improves the overall coherence of the design.
Abstract Factory is useful if you need to separate multiple dimensions of choices in creating something.
In the common example case of windowing systems, you want to make a family of widgets for assorted windowing systems, and you create a concrete factory per windowing system which creates widgets that work in that system.
In your case of building DAOs, it is likely useful if you need to make a family of DAOs for the assorted entities in your domain and want to make a "sql" version and an "access" version of the entire family. This is, I think, the point your classmate is trying to make, and if that's what you're doing, it's likely to be a good idea.
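To make that concrete, here is a minimal sketch of such a family (the CarDao/UserDao product types are invented for illustration):
interface CarDao { void insert(); }
interface UserDao { void insert(); }

class SqlCarDao implements CarDao { public void insert() { System.out.println("sql car insert"); } }
class SqlUserDao implements UserDao { public void insert() { System.out.println("sql user insert"); } }
class AccessCarDao implements CarDao { public void insert() { System.out.println("access car insert"); } }
class AccessUserDao implements UserDao { public void insert() { System.out.println("access user insert"); } }

// the abstract factory: one concrete factory per backend, each creating the whole family
interface DaoFactory {
    CarDao createCarDao();
    UserDao createUserDao();
}

class SqlDaoFactory implements DaoFactory {
    public CarDao createCarDao() { return new SqlCarDao(); }
    public UserDao createUserDao() { return new SqlUserDao(); }
}

class AccessDaoFactory implements DaoFactory {
    public CarDao createCarDao() { return new AccessCarDao(); }
    public UserDao createUserDao() { return new AccessUserDao(); }
}

Switching the whole family is then a single decision, namely which DaoFactory to instantiate.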
If you have only one thing varying, it's overkill.