I'm new to ATG and I'm failing to do something that looks fairly simple.
I'm trying to get an Order in the database by the number of the order. But this number is not the orderId so I can't just use the OrderManager.loadOrder method.
This is the code I have so far:
Repository orderRepository = getOrderManager().getOrderTools().getOrderRepository();
RepositoryView view = orderRepository.getView("order");
RqlStatement statement = RqlStatement.parseRqlStatement("orderNumber EQUALS ?0");
Object params[] = { pOrderNumber };
RepositoryItem items[] = statement.executeQuery(view, params);
RepositoryItem order = null;
if( (items != null) && (items.length > 0) ) {
order = items[0];
}
//Now I want to convert this order of type "RepositoryItem" to an actual Order object
I could do this by getting the repository ID and calling loadOrder from the OrderManager, but that seems like going back to the database to find again what I already have in my hands.
Is there another way to get an actual Order object out of this RepositoryItem object?
If you only need properties off of the order item itself, then you can just retrieve them directly from the RepositoryItem using the getPropertyValue methods. If you find that you want to utilize the OrderImpl wrapper and its associated convenience methods, then you should retrieve the Order object instances via the OrderManager.loadOrder() method as you have suggested. While this will require slightly more work by the application to construct the Order wrapper, it does not necessarily mean another DB call against the order tables. Assuming you have not disabled repository caching for the order item, ATG will utilize the already cached order repository item when it is constructing the OrderImpl wrapper for you. This item would have been cached when you did the RQL lookup for the order by orderNumber, so a redundant DB call will not be performed.
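For example, something like this (untested; "state" and "profileId" are assumed property names, so check them against your order item descriptor):
RepositoryItem order = ...; // the item found by the RQL query above
if (order != null) {
    String state = (String) order.getPropertyValue("state");
    String profileId = (String) order.getPropertyValue("profileId");
    // ... use the values directly; no Order wrapper needed ...
}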
Note that it may require additional DB calls to retrieve related order items if those items have not already been cached (i.e. payment groups, shipping groups, commerce items, etc).
It really depends on what you are trying to do.
The question about whether the item is loaded from the database or from the cache depends on your repository settings, combined with the lazy loading settings. The documentation on this can be found here.
If you would like to update the order then you should use OrderManager.loadOrder() as this will ensure that the order is updated correctly and allows you to reprice the order and update other repository items which make up the order such as payment groups and shipping groups (remember to use a transaction wrapper to ensure the order is updated safely).
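For illustration, a rough sketch of such a wrapper using ATG's TransactionDemarcation (not tested; assumes a transactionManager property wired to /atg/dynamo/transaction/TransactionManager and that this lives in a GenericService):
TransactionDemarcation td = new TransactionDemarcation();
try {
    td.begin(getTransactionManager(), TransactionDemarcation.REQUIRED);
    Order order = getOrderManager().loadOrder(orderId);
    synchronized (order) {
        // ... modify the order, reprice, etc. ...
        getOrderManager().updateOrder(order);
    }
} catch (TransactionDemarcationException | CommerceException e) {
    if (isLoggingError()) {
        logError(e);
    }
} finally {
    try {
        td.end();
    } catch (TransactionDemarcationException e) {
        if (isLoggingError()) {
            logError(e);
        }
    }
}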
If you are simply trying to read values then going the repository way will be quicker. I would recommend creating a globally scoped component which your form handler references. Something along the lines of (code below not tested):
OrderTools.properties file:
$class=com.acme.commerce.order.OrderTools
$scope=global
orderRepository=/atg/commerce/order/OrderRepository
orderManager=/atg/commerce/order/OrderManager
OrderTools class:
public class OrderTools extends GenericService
{
private RepositoryView orderView;
private RqlStatement orderStm;
private MutableRepository orderRepository;
private OrderManager orderManager;
public void doStartService() throws ServiceException
{
try
{
orderView = getOrderRepository().getView(CommerceConstants.ORDER);
orderStm = RqlStatement.parseRqlStatement("uniqueOrderId = ?0");
} catch (RepositoryException e)
{
throw new ServiceException(e);
}
}
protected RepositoryItem getOrderItem(final String uniqueOrderId) throws RepositoryException
{
Object params[] = new Object[1];
params[0] = uniqueOrderId;
RepositoryItem[] orderItems = orderStm.executeQuery(orderView, params);
if (orderItems != null && orderItems.length > 0)
{
return getOrderRepository().getItem(orderItems[0].getRepositoryId(), CommerceConstants.ORDER);
} else
{
return null;
}
}
/*
This method demonstrates how to load an order using the OrderManager.loadOrder() method.
The code includes some basic timing so that a performance comparison can be done with the loadOrderSubItemsRepositoryMethod()
*/
public void loadOrderUsingOrderManager(String orderId) throws CommerceException {
long startTime = System.currentTimeMillis();
Order order = getOrderManager().loadOrder(orderId);
long orderLoadTime = System.currentTimeMillis();
//read your properties here ...
long totalTime = System.currentTimeMillis();
if(isLoggingDebug()) {
logDebug("The order load time was " + (orderLoadTime - startTime) + "ms");
logDebug("The time to read the list of properties was " + (totalTime - startTime) + "ms");
}
}
/*
This method shows how to get order items such as payment groups or shipping groups using the repository.
*/
public void loadOrderSubItemsRepositoryMethod(MutableRepositoryItem orderItem) {
long startTime = System.currentTimeMillis();
// Example of how to get the payment groups using the repository
List<RepositoryItem> paymentGroups = (List<RepositoryItem>) orderItem.getPropertyValue("paymentGroups");
// Put code here to iterate through the list of items you want to read
// Example of how to get the shipping groups
List<RepositoryItem> shippingGroups = (List<RepositoryItem>) orderItem.getPropertyValue("shippingGroups");
long totalTime = System.currentTimeMillis();
if(isLoggingDebug()) {
logDebug("The time to read the order sub-items was " + (totalTime - startTime) + "ms");
}
}
public MutableRepository getOrderRepository()
{
return orderRepository;
}
public void setOrderRepository(final MutableRepository orderRepository)
{
this.orderRepository = orderRepository;
}
public OrderManager getOrderManager()
{
return orderManager;
}
public void setOrderManager(final OrderManager orderManager)
{
this.orderManager = orderManager;
}
}
Hope this helps.
Related
I am using the Room architecture component from Android Jetpack in my app. I have implemented the repository class where I manage my data sources, such as the server and data from the Room database. I am using LiveData to get a list of all the objects present in my database, and I have attached an observer in my activity class. All works perfectly except one thing: before making a call to my server I want to check whether data is present in Room or not; if data is present in Room, I do not want to make a call to the server, to save resources. But when I try to get the data from the local database in the repository class it always returns null. I have also tried attaching an observer to it, but to no avail.
public LiveData<List<AllbrandsdataClass>> getAllBrands() {
    brandsDao.getallbrands().observeForever(new Observer<List<AllbrandsdataClass>>() {
        @Override
        public void onChanged(@Nullable final List<AllbrandsdataClass> allbrandsdataClasses) {
            final List<AllbrandsdataClass> listofbrandsobjectfromdb = allbrandsdataClasses;
            if (listofbrandsobjectfromdb == null) {
                Log.d(TAG, "Repository getallbrands number of brands in the DB is: 0");
            } else {
                // perform the logic to check and then fetch from the server
            }
        }
    });
    return brandsDao.getallbrands();
}
Here is my getallbrands() method in the interface annotated with @Dao:
#Query("SELECT * FROM AllbrandsdataClass order by timeStamp desc")
LiveData<List<AllbrandsdataClass>> getallbrands();
What I want is to perform a check in the repository class for data from the local database before fetching the data from the server, but I am unable to do it when using LiveData as shown above.
Below, in the repository class, I am using two LiveData streams (income and expense) of type "SumOfRowsFromDB" (yours can be any type, depending upon your business logic) to get a single LiveData "remainingIncome" of type Long.
First, I added both of my input LiveData objects as sources to my output LiveData "remainingIncome", and in the lambda I set the value of my output LiveData using a method that is defined below. Now, whenever any of the input LiveData changes, my method combinedResult(income, expense) gets called and I can change the value of my output accordingly, as per my business logic.
public LiveData<Long> getRemainingIncome() {
MediatorLiveData<Long> remainingIncome = new MediatorLiveData<>();
LiveData<SumOfRowsFromDB> income = mainDashBoardDao.getTansWiseSum(Constants.TRANS_TYPES.get(2));
LiveData<SumOfRowsFromDB> expense = mainDashBoardDao.getTansWiseSum(Constants.TRANS_TYPES.get(1));
remainingIncome.addSource(income, value -> {
remainingIncome.setValue(combinedResult(income, expense));
});
remainingIncome.addSource(expense, value -> {
remainingIncome.setValue(combinedResult(income, expense));
});
return remainingIncome;
}
private Long combinedResult(LiveData<SumOfRowsFromDB> income, LiveData<SumOfRowsFromDB> expense) {
if (income.getValue() != null && expense.getValue() != null) {
return (income.getValue().getSumOfRow() - expense.getValue().getSumOfRow());
} else {
return 0L;
}
}
I have a stream of objects which I would like to collect the following way.
Let's say we are handling forum posts:
class Post {
private Date time;
private Data data;
}
I want to create a list which groups posts by a period. If there were no posts for X minutes, create a new group.
class PostsGroup{
List<Post> posts = new ArrayList<> ();
}
I want to get a List<PostsGroup> containing the posts grouped by the interval.
Example: interval of 10 minutes.
Posts:
[{time:x, data:{}}, {time:x + 3, data:{}}, {time:x + 12, data:{}}, {time:x + 45, data:{}}]
I want to get a list of posts group:
[
{posts : [{time:x, data:{}}, {time:x + 3, data:{}}, {time:x + 12, data:{}}]},
{posts : [{time:x + 45, data:{}}]}
]
Notice that the first group lasted till x + 22 (the last post at x + 12 plus the 10-minute interval). Then a new post was received at x + 45.
Is this possible?
This problem could be easily solved using the groupRuns method of my StreamEx library:
long MAX_INTERVAL = TimeUnit.MINUTES.toMillis(10);
StreamEx.of(posts)
.groupRuns((p1, p2) -> p2.time.getTime() - p1.time.getTime() <= MAX_INTERVAL)
.map(PostsGroup::new)
.toList();
I assume that you have a constructor
class PostsGroup {
private List<Post> posts;
public PostsGroup(List<Post> posts) {
this.posts = posts;
}
}
The StreamEx.groupRuns method takes a BiPredicate which is applied to two adjacent input elements and returns true if they must be grouped together. This method creates the stream of lists where each list represents the group. This method is lazy and works fine with parallel streams.
You need to retain state between stream entries and write yourself a grouping classifier. Something like this would be a good start.
class Post {
private final long time;
private final String data;
public Post(long time, String data) {
this.time = time;
this.data = data;
}
@Override
public String toString() {
return "Post{" + "time=" + time + ", data=" + data + '}';
}
}
public void test() {
System.out.println("Hello");
long t = 0;
List<Post> posts = Arrays.asList(
new Post(t, "One"),
new Post(t + 1000, "Two"),
new Post(t + 10000, "Three")
);
// Group every 5 seconds.
Map<Long, List<Post>> grouped = posts
.stream()
.collect(Collectors.groupingBy(new ClassifyByTimeBetween(5000)));
grouped.entrySet().stream().forEach((e) -> {
System.out.println(e.getKey() + " -> " + e.getValue());
});
}
class ClassifyByTimeBetween implements Function<Post, Long> {
final long delay;
long currentGroupBy = -1;
long lastDateSeen = -1;
public ClassifyByTimeBetween(long delay) {
this.delay = delay;
}
@Override
public Long apply(Post p) {
if (lastDateSeen >= 0) {
if (p.time > lastDateSeen + delay) {
// Grab this one.
currentGroupBy = p.time;
}
} else {
// First time - start there.
currentGroupBy = p.time;
}
lastDateSeen = p.time;
return currentGroupBy;
}
}
Since no one has provided a solution with a custom collector as it was required in the original problem statement, here is a collector-implementation that groups Post objects based on the provided time-interval.
The Date class mentioned in the question is obsolete since Java 8 and not recommended for use in new projects. Hence, LocalDateTime will be utilized instead.
Post & PostsGroup
For testing purposes, I've used Post implemented as a Java 16 record (if you substitute it with a class, the overall solution will be fully compliant with Java 8):
public record Post(LocalDateTime dateTime) {}
Also, I've enhanced the PostsGroup object. My idea is that it should be capable of deciding whether the offered Post should be added to the list of posts or rejected, as the Information expert principle suggests (in short: all manipulations with the data should happen only inside the class to which that data belongs).
To facilitate this functionality, two extra fields were added: interval of type Duration from the java.time package, representing the maximum interval between the earliest post and the latest post in a group, and intervalBound of type LocalDateTime, which gets initialized when the first post is added and is later used internally by the method isWithinInterval() to check whether the offered post fits into the interval.
public class PostsGroup {
private Duration interval;
private LocalDateTime intervalBound;
private List<Post> posts = new ArrayList<>();
public PostsGroup(Duration interval) {
this.interval = interval;
}
public boolean tryAdd(Post post) {
if (posts.isEmpty()) {
intervalBound = post.dateTime().plus(interval);
return posts.add(post);
} else if (isWithinInterval(post)) {
return posts.add(post);
}
return false;
}
public boolean isWithinInterval(Post post) {
return post.dateTime().isBefore(intervalBound);
}
@Override
public String toString() {
return "PostsGroup{" + posts + '}';
}
}
I'm making two assumptions:
All posts in the source are sorted by time (if it is not the case, you should introduce sorted() operation in the pipeline before collecting the results);
Posts need to be collected into the minimum number of groups; as a consequence of this, it's not possible to split this task and execute the stream in parallel.
Building a Custom Collector
We can create a custom collector either inline by using one of the versions of the static method Collector.of() or by defining a class that implements the Collector interface.
These parameters have to be provided while creating a custom collector:
Supplier Supplier<A> is meant to provide a mutable container which stores elements of the stream. In this case, ArrayDeque (as an implementation of the Deque interface) will be handy as a container to facilitate convenient access to the most recently added element, i.e. the latest PostsGroup.
Accumulator BiConsumer<A,T> defines how to add elements into the container provided by the supplier. For this task, we need to provide the logic that determines whether the next element from the stream (i.e. the next Post) should go into the last PostsGroup in the Deque, or whether a new PostsGroup needs to be allocated for it.
Combiner BinaryOperator<A> combiner() establishes a rule on how to merge two containers obtained while executing the stream in parallel. Since this operation is treated as not parallelizable, the combiner is implemented to throw an AssertionError in case of parallel execution.
Finisher Function<A,R> is meant to produce the final result by transforming the mutable container. The finisher function in the code below turns the container, a deque containing the result, into an immutable list.
Note: Java 16 method toList() is used inside the finisher function, for Java 8 it can be replaced with collect(Collectors.toUnmodifiableList()) or collect(Collectors.toList()).
Characteristics allow providing additional information; for instance, Collector.Characteristics.UNORDERED denotes that the order in which partial results of the reduction are produced while executing in parallel is not significant. In this case, the collector doesn't require any characteristics.
The method below is responsible for generating the collector based on the provided interval.
public static Collector<Post, ?, List<PostsGroup>> groupPostsByInterval(Duration interval) {
return Collector.of(
ArrayDeque::new,
(Deque<PostsGroup> deque, Post post) -> {
if (deque.isEmpty() || !deque.getLast().tryAdd(post)) { // if no groups have been created yet or if adding the post into the most recent group fails
PostsGroup postsGroup = new PostsGroup(interval);
postsGroup.tryAdd(post);
deque.addLast(postsGroup);
}
},
(Deque<PostsGroup> left, Deque<PostsGroup> right) -> { throw new AssertionError("should not be used in parallel"); },
(Deque<PostsGroup> deque) -> deque.stream().toList());
}
main() - demo
public static void main(String[] args) {
List<Post> posts =
List.of(new Post(LocalDateTime.of(2022,4,28,15,0)),
new Post(LocalDateTime.of(2022,4,28,15,3)),
new Post(LocalDateTime.of(2022,4,28,15,5)),
new Post(LocalDateTime.of(2022,4,28,15,8)),
new Post(LocalDateTime.of(2022,4,28,15,12)),
new Post(LocalDateTime.of(2022,4,28,15,15)),
new Post(LocalDateTime.of(2022,4,28,15,18)),
new Post(LocalDateTime.of(2022,4,28,15,27)),
new Post(LocalDateTime.of(2022,4,28,15,48)),
new Post(LocalDateTime.of(2022,4,28,15,54)));
Duration interval = Duration.ofMinutes(10);
List<PostsGroup> postsGroups = posts.stream()
.collect(groupPostsByInterval(interval));
postsGroups.forEach(System.out::println);
}
Output:
PostsGroup{[Post[dateTime=2022-04-28T15:00], Post[dateTime=2022-04-28T15:03], Post[dateTime=2022-04-28T15:05], Post[dateTime=2022-04-28T15:08]]}
PostsGroup{[Post[dateTime=2022-04-28T15:12], Post[dateTime=2022-04-28T15:15], Post[dateTime=2022-04-28T15:18]]}
PostsGroup{[Post[dateTime=2022-04-28T15:27]]}
PostsGroup{[Post[dateTime=2022-04-28T15:48], Post[dateTime=2022-04-28T15:54]]}
You can also play around with this Online Demo
I am new to Java and Hibernate.
I have implemented a functionality where I generate request nos. based on the already saved request no. This is done by finding the maximum request no., incrementing it by 1, and then saving it to the database again.
However I am facing issues with multithreading. When two threads access my code at the same time, both generate the same request no. My code is already synchronized. Please suggest some solution.
synchronized (this.getClass()) {
System.out.println("start");
certRequest.setRequestNbr(generateRequestNumber(certInsuranceRequestAddRq.getAccountInfo().getAccountNumberId()));
reqId = Utils.getUniqueId();
certRequest.setRequestId(reqId);
ItemIdInfo itemIdInfo = new ItemIdInfo();
itemIdInfo.setInsurerId(certRequest.getRequestId());
certRequest.setItemIdInfo(itemIdInfo);
dao.insert(certRequest);
addAccountRel();
System.out.println("end");
}
Following is the output showing my synchronization:
start
end
start
end
Is it some Hibernate issue?
Does the use of the transactional attribute in Spring affect the commit in my case?
I am using the following Transactional Attribute:
@Transactional(readOnly = false, propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
EDIT: code for generateRequestNumber() shown in chat room.
public String generateRequestNumber(String accNumber) throws Exception {
String requestNumber = null;
if (accNumber != null) {
String SQL_QUERY = "select CERTREQUEST.requestNbr from CertRequest as CERTREQUEST, "
+ "CertActObjRel as certActObjRel where certActObjRel.certificateObjkeyId=CERTREQUEST.requestId "
+ " and certActObjRel.certObjTypeCd=:certObjTypeCd "
+ " and certActObjRel.certAccountId=:accNumber ";
String[] parameterNames = {"certObjTypeCd", "accNumber"};
Object[] parameterValues = new Object[]
{
Constants.REQUEST_RELATION_CODE, accNumber
};
List<?> resultSet = dao.executeNamedQuery(SQL_QUERY,
parameterNames, parameterValues);
// List<?> resultSet = dao.retrieveTableData(SQL_QUERY);
if (resultSet != null && resultSet.size() > 0) {
requestNumber = (String) resultSet.get(0);
}
int maxRequestNumber = -1;
if (requestNumber != null && requestNumber.length() > 0) {
maxRequestNumber = maxValue(resultSet.toArray());
requestNumber = Integer.toString(maxRequestNumber + 1);
} else {
requestNumber = Integer.toString(1);
}
System.out.println("inside function request number" + requestNumber);
return requestNumber;
}
return null;
}
Don't synchronize on the Class instance obtained via getClass(). It can have some strange side effects. See https://www.securecoding.cert.org/confluence/pages/viewpage.action?pageId=43647087
For example use:
synchronized (this) {
// synchronized code
}
or
private synchronized void myMethod() {
// synchronized code
}
To synchronize on the object instance.
Or do:
private static final Object lock = new Object();
private void myMethod() {
synchronized (lock) {
// synchronized code
}
}
Like @diwakar suggested. This uses a constant field to synchronize on, to guarantee that this code is synchronizing on the same lock.
EDIT: Based on information from chat, you are using a SELECT to get the maximum requestNumber and increasing the value in your code. Then this value is set on the CertRequest, which is then persisted in the database via a DAO. If this persist action is not committed (e.g. by making the method @Transactional or some other means) then another thread will still see the old requestNumber value. So you could solve this by making the code transactional (how depends on which frameworks you use, etc.). But I agree with @VA31's answer, which states that you should use a database sequence for this instead of incrementing the value in code. Instead of a sequence you could also consider using an auto-increment field in CertRequest, something like:
@GeneratedValue(strategy=GenerationType.AUTO)
private int requestNumber;
For getting the next value from a sequence you can look at this question.
You mentioned this information in your question.
I have implemented a functionality where I generate request nos. based on the already saved request no. This is done by finding the maximum request no., incrementing it by 1, and then saving it to the database again.
On first look, it seems the problem is caused by code running on multiple app servers. Threads are synchronized inside one JVM (app server). If you are using more than one app server then you have to do it differently, using a more robust approach such as server-to-server communication or batch allocation of request numbers to each app server.
But if you are using only one app server and multiple threads are accessing the same code, then you can put a lock on the instance of the class rather than the class itself.
synchronized(this) {
lastName = name;
nameCount++;
}
Or you can use a lock private to the class instance:
private final Object lock = new Object();
.
.
synchronized(lock) {
System.out.println("start");
certRequest.setRequestNbr(generateRequestNumber(certInsuranceRequestAddRq.getAccountInfo().getAccountNumberId()));
reqId = Utils.getUniqueId();
certRequest.setRequestId(reqId);
ItemIdInfo itemIdInfo = new ItemIdInfo();
itemIdInfo.setInsurerId(certRequest.getRequestId());
certRequest.setItemIdInfo(itemIdInfo);
dao.insert(certRequest);
addAccountRel();
System.out.println("end");
}
But make sure that your DB is updated with the new sequence number before the next thread accesses it to get a new one.
It is good practice to generate the request number (unique id) using a database sequence so that you don't need to synchronize your Service/DAO methods.
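For example, with JPA/Hibernate you could map the request number to a sequence (a sketch only; the generator and sequence names are made up, and the sequence must exist in your schema first):
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "requestNbrGen")
@SequenceGenerator(name = "requestNbrGen", sequenceName = "REQUEST_NBR_SEQ", allocationSize = 1)
private Long requestNbr; // the database hands out the next number, so no synchronized block is needed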
First thing:
Why are you acquiring the lock inside the method? It is not required here.
Also, one thing:
Can you try it like this once:
final static Object lock = new Object();
synchronized (lock)
{
.....
}
What I feel is that the object you are locking on is different each time, so try this once.
I'm writing a Java application that copies one database's information (DB2) to another database (SQL Server). The order of operations is very simple:
Check to see if anything has been updated in a certain time frame
Grab everything from the first database that is within the designated time frame
Map database information to POJOs
Divide subsets of POJOs into threads (pre defined # in properties file)
Threads cycle through each POJO individually
Update the second database
I have everything working just fine, but at certain times of the day there is a huge jump in the number of updates that need to take place (it can get into the hundreds of thousands).
Below you can see a generic version of my code. It follows the basic algorithm of the application. Object is generic; the actual application has 5 different types of specific objects, each with its own updater thread class. But the generic functions below are exactly what they all look like. And in the updateDatabase() method, they all get added to threads and all run at the same time.
private void updateDatabase()
{
List<Thread> threads = new ArrayList<>();
addObjectThreads( threads );
startThreads( threads );
joinAllThreads( threads );
}
private void addObjectThreads( List<Thread> threads )
{
List<Object> objects = getTransformService().getObjects();
logger.info( "Found " + objects.size() + " Objects" );
createThreads( threads, objects, ObjectUpdaterThread.class );
}
private void createThreads( List<Thread> threads, List<?> objects, Class threadClass )
{
final int BASE_OBJECT_LOAD = 1;
int objectLoad = objects.size() / Database.getMaxThreads() > 0 ? objects.size() / Database.getMaxThreads() + BASE_OBJECT_LOAD : BASE_OBJECT_LOAD;
for (int i = 0; i < (objects.size() / objectLoad); ++i)
{
int startIndex = i * objectLoad;
int endIndex = (i + 1) * objectLoad;
try
{
List<?> objectSubList = objects.subList( startIndex, endIndex > objects.size() ? objects.size() : endIndex );
threads.add( new Thread( (Thread) threadClass.getConstructor( List.class ).newInstance( objectSubList ) ) );
}
catch (Exception exception)
{
logger.error( exception.getMessage() );
}
}
}
public class ObjectUpdaterThread extends BaseUpdaterThread
{
private List<Object> objects;
final private Logger logger = Logger.getLogger( ObjectUpdaterThread.class );
public ObjectUpdaterThread( List<Object> objects)
{
this.objects = objects;
}
public void run()
{
for (Object object : objects)
{
logger.info( "Now Updating Object: " + object.getId() );
getTransformService().updateObject( object );
}
}
}
All of these go to a Spring service that looks like the code below. Again, it's generic, but each type of object has the exact same type of logic. The getObjects() calls from the code above are just one-line pass-throughs to the DAO, so no need to really post that.
@Service
@Scope(value = "prototype")
public class TransformServiceImpl implements TransformService
{
final private Logger logger = Logger.getLogger( TransformServiceImpl.class );
@Autowired
private TransformDao transformDao;
@Override
public void updateObject( Object object )
{
String sql;
if ( object.exists() )
{
sql = Object.Mapper.UPDATE;
}
else
{
sql = Object.Mapper.INSERT;
}
boolean isCompleted = false;
while ( !isCompleted )
{
try
{
transformDao.updateObject( object, sql );
isCompleted = true;
}
catch (Exception exception)
{
logger.error( exception.getMessage() );
threadSleep();
logger.info( "Now retrying update for Object: " + object.getId() );
}
}
logger.info( "Updated Object: " + object.getId() );
}
}
Finally these all go to the DAO that looks like this:
@Repository
@Scope(value = "prototype")
public class TransformDaoImpl implements TransformDao
{
//@Resource is like @Autowired but with the added option of being able to specify the name
//Good for autowiring two different instances of the same class [NamedParameterJdbcTemplate]
//Another alternative = @Autowired @Qualifier(BEAN_NAME)
#Resource(name = "db2")
private NamedParameterJdbcTemplate db2;
#Resource(name = "sqlServer")
private NamedParameterJdbcTemplate sqlServer;
final private Logger logger = Logger.getLogger( TransformerImpl.class );
#Override
public void updateObject( Objet object, String sql )
{
MapSqlParameterSource source = new MapSqlParameterSource();
source.addValue( "column1_value", object.getColumn1Value() );
//put all source values from the POJO in just like above
sqlServer.update( sql, source );
}
}
My insert statements look like this:
"INSERT INTO dbo.OBJECT_TABLE " +
"(COLUMN1, COLUMN2...) " +
"VALUES(:column1_value, :column2_value... "
And my update statements look like this:
"UPDATE dbo.OBJECT_TABLE SET " +
"COLUMN1 = :column1_value, COLUMN2 = :column2_value, " +
"WHERE PRIMARY_KEY_COLUMN = :primary_key_value"
It's a lot of code and stuff, I know, but I just wanted to lay out everything I have in hopes that I can get help making this faster or more efficient. It takes hours upon hours to update so many rows, and it would be nice if it only took a couple/few hours instead. Thanks for any help. I welcome all learning experiences about Spring, threads and databases.
If you're sending large amounts of SQL to the server, you should consider Batching it using the Statement.addBatch and Statement.executeBatch methods. The batches are finite in size (I always limited mine to 64K of SQL), but they dramatically lower the round trips to the database.
As I was iterating and creating SQL, I would keep track of how much I had batched already, when the SQL crossed the 64K boundary, I'd fire off an executeBatch and start a fresh one.
You may want to experiment with the 64K number, it may have been an Oracle limitation, which I was using at the time.
I can't speak to Spring, but batching is a part of the JDBC Statement. I'm sure it's straightforward to get to this.
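A rough sketch of that pattern (untested; connection, objects and MyObject are placeholders for your own wiring, and the 1000-row flush threshold is an arbitrary choice to tune):
String sql = "INSERT INTO dbo.OBJECT_TABLE (COLUMN1, COLUMN2) VALUES (?, ?)";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    int pending = 0;
    for (MyObject object : objects) {
        ps.setString(1, object.getColumn1Value());
        ps.setString(2, object.getColumn2Value());
        ps.addBatch();
        if (++pending >= 1000) { // flush periodically to keep batches bounded
            ps.executeBatch();
            pending = 0;
        }
    }
    if (pending > 0) {
        ps.executeBatch(); // flush the remainder
    }
}
Since the code in the question is already on Spring, NamedParameterJdbcTemplate.batchUpdate(sql, batchArgs) gives the same effect without managing the connection by hand.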
Check to see if anything has been updated in a certain time frame
Grab everything from the first database that is within the designated time frame
Is there an index on the LAST_UPDATED_DATE column (or whatever you're using) in the source table? Rather than put the burden on your application, if it's within your control, why not write some triggers in the source database that create entries in an "update log" table? That way, all that your app would need to do is consume and execute those entries.
How are you managing your transactions? If you're creating a new transaction for each operation it's going to be brutally slow.
Regarding the threading code, have you considered using something more standard rather than writing your own? What you have is a pretty typical producer/consumer and Java has excellent support for that type of thing with ThreadPoolExecutor and numerous queue implementations to move data between threads that perform different tasks.
The benefit with using something off the shelf is that 1) it's well tested 2) there are numerous tuning options and sizing strategies that you can adjust to increase performance.
Also, rather than use 5 different thread types for each type of object that needs to be processed, have you considered encapsulating the processing logic for each type into separate strategy classes? That way, you could use a single pool of worker threads (which would be easier to size and tune).
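As a rough, untested sketch of that suggestion (reusing names from the question such as Database.getMaxThreads() and getTransformService()), the hand-rolled thread slicing could collapse into a single fixed pool:
private void updateDatabase() throws InterruptedException, ExecutionException {
    ExecutorService pool = Executors.newFixedThreadPool(Database.getMaxThreads());
    List<Future<?>> futures = new ArrayList<>();
    for (Object object : getTransformService().getObjects()) {
        // each task is one per-object update; per-type logic would live in
        // a strategy object rather than a dedicated thread subclass
        futures.add(pool.submit(() -> getTransformService().updateObject(object)));
    }
    for (Future<?> future : futures) {
        future.get(); // wait for completion and surface the first failure, if any
    }
    pool.shutdown();
}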
I'm working on an App that has objects that must be available to all instances but also have synchronized access for certain methods within the object.
For instance I have this object:
public class PlanetID implements Serializable {
public PlanetID() {
id = 0;
}
public long generateID() {
id++;
return id;
}
private long id;
}
It's a simple object that creates a long (id) in series. It's necessary that this object generate a unique id every time. At the moment I have a static synchronized method that handles the Datastore access and storage along with the MemCache access and storage. It works for this particular method but I can already see issues with more complex objects that require a user to be able to access non-synchronized variables along with synchronized variables.
Is there some way to make an object global and allow for both synchronized methods and non-synchronized methods along with the storage of the object when those synchronized objects are accessed?
EDIT: I think people focused too much on the example I gave them and not on the bigger question of having a global variable which can be accessed by all instances, with synchronized access to specific methods while allowing unsynchronized access to others.
Here's a better example in hopes it makes things a bit clearer.
Ex.
public class Market implements Serializable {
public Market() {
mineral1 = new ArrayList<Listing>();
mineral2 = new ArrayList<Listing>();
mineral3 = new ArrayList<Listing>();
mineral4 = new ArrayList<Listing>();
}
public void addListing(int mineral, String userID, int price, long amount) { //Doesn't require synchronized access
switch (mineral) {
case MINERAL1:
mineral1.add(new Listing(userID, price, amount));
break;
case MINERAL2:
mineral2.add(new Listing(userID, price, amount));
break;
case MINERAL3:
mineral3.add(new Listing(userID, price, amount));
break;
case MINERAL4:
mineral4.add(new Listing(userID, price, amount));
break;
}
}
public void purchased(int mineral, String userID, long amount) { //Requires synchronized access
ArrayList<Listing> mineralList = null;
switch (mineral) {
case MINERAL1:
mineralList = mineral1;
break;
case MINERAL2:
mineralList = mineral2;
break;
case MINERAL3:
mineralList = mineral3;
break;
case MINERAL4:
mineralList = mineral4;
break;
}
Listing remove = null;
for (Listing listing : mineralList)
if (listing.userID.equals(userID))
if (listing.amount > amount) {
listing.amount -= amount;
return;
} else{
remove = listing;
break;
}
mineralList.remove(remove);
Collections.sort(mineralList);
}
public JSONObject toJSON(int mineral) { //Does not require synchronized access
JSONObject jsonObject = new JSONObject();
try {
switch (mineral) {
case MINERAL1:
for (Listing listing : mineral1)
jsonObject.accumulate(Player.MINERAL1, listing.toJSON());
break;
case MINERAL2:
for (Listing listing : mineral2)
jsonObject.accumulate(Player.MINERAL2, listing.toJSON());
break;
case MINERAL3:
for (Listing listing : mineral3)
jsonObject.accumulate(Player.MINERAL3, listing.toJSON());
break;
case MINERAL4:
for (Listing listing : mineral4)
jsonObject.accumulate(Player.MINERAL4, listing.toJSON());
break;
}
} catch (JSONException e) {
}
return jsonObject;
}
public static final int MINERAL1 = 0;
public static final int MINERAL2 = 1;
public static final int MINERAL3 = 2;
public static final int MINERAL4 = 3;
private ArrayList<Listing> mineral1;
private ArrayList<Listing> mineral2;
private ArrayList<Listing> mineral3;
private ArrayList<Listing> mineral4;
private class Listing implements Serializable, Comparable<Listing> {
public Listing(String userID, int price, long amount) {
this.userID = userID;
this.price = price;
this.amount = amount;
}
public JSONObject toJSON() {
JSONObject jsonObject = new JSONObject();
try {
jsonObject.put("UserID", userID);
jsonObject.put("Price", price);
jsonObject.put("Amount", amount);
} catch (JSONException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return jsonObject;
}
@Override
public int compareTo(Listing listing) {
return (price < listing.price ? -1 : (price == listing.price ? 0 : 1));
}
public String userID;
public int price;
public long amount;
}
}
With GAE, the Java language is NOT going to hide all the datastore abstractions for you.
Stop thinking in terms of global variables and methods. These are Java language constructs. Start thinking in terms of datastore constructs - entities, datastore accesses, and transactions.
On GAE, your code will be simultaneously running on many servers, they will not share global variables, the "shared data" is in the datastore (or memcache)
An entity is an object in the datastore. You can make datastore fetches from anywhere in your code, so they can replace your global variables. You define transactions within your methods to synchronize datastore accesses and ensure a transaction only happens once. You can use transactions in some methods, and don't use transactions when you don't need synchronization.
You shouldn't need your global ArrayLists of minerals. When you handle a purchase, you essentially need a transaction where you fetch a listing from the datastore, or create it if it doesn't exist, update the user, and write it back to the datastore. You probably want to read up on the datastore before continuing.
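For illustration only, a sketch of that purchase flow with the low-level datastore API (the "Listing" kind, key and property names here are invented, not taken from your code):
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
Transaction txn = ds.beginTransaction();
try {
    Key key = KeyFactory.createKey("Listing", listingId); // hypothetical kind and key
    Entity listing;
    try {
        listing = ds.get(txn, key);
    } catch (EntityNotFoundException e) {
        listing = new Entity("Listing", listingId); // create it if it doesn't exist
        listing.setProperty("amount", 0L);
    }
    long amount = (Long) listing.getProperty("amount");
    listing.setProperty("amount", amount - purchasedAmount);
    ds.put(txn, listing);
    txn.commit();
} finally {
    if (txn.isActive()) {
        txn.rollback(); // commit failed or an exception escaped
    }
}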
Another approach besides transactions is to use a single backend instance to keep your global object, and have all access to the object synchronized there. All other instances need to access this backend instance using URLFetch to get the state of the object.
This is a horrible performance bottleneck though; if you want your app to scale up smoothly, please don't use it. I'm just pointing out alternative approaches. In fact, if possible, please kindly avoid the need for a synchronized global object on a distributed application in the first place.
Have a look at DatastoreService.allocateIds - it doesn't matter whether or not you're actually using the generated ID to write a datastore entity, you can get unique long numbers out of this in an efficient manner.
Please note however that they're not guaranteed to be in sequence, they're only guaranteed to be unique - the question doesn't state being sequential as a requirement.
public class PlanetID implements Serializable
{
private DatastoreService ds;
public PlanetID()
{
ds = DatastoreServiceFactory.getDatastoreService();
}
public long generateID()
{
return ds.allocateIds("Kind_That_Will_Never_Be_Used", 1L).getStart().getId();
}
}
As noted in comments - you can use transactions to achieve this:
Start a transaction
Create or update the SomeObject entity with index and someText properties.
Commit the transaction. If two instances try to do this simultaneously then one will get an exception and will need to retry (get the data again, increment, put, all in a transaction), as sketched below.
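A sketch of those steps as a retry loop (MAX_RETRIES and the key name are assumptions; the SomeObject/index/someText names come from the steps above):
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
    Transaction txn = ds.beginTransaction();
    try {
        Entity obj;
        try {
            obj = ds.get(txn, KeyFactory.createKey("SomeObject", "singleton"));
        } catch (EntityNotFoundException e) {
            obj = new Entity("SomeObject", "singleton");
            obj.setProperty("index", 0L);
        }
        obj.setProperty("index", (Long) obj.getProperty("index") + 1);
        obj.setProperty("someText", "updated");
        ds.put(txn, obj);
        txn.commit(); // throws ConcurrentModificationException if another instance won
        break;
    } catch (ConcurrentModificationException e) {
        // lost the race - loop and retry with fresh data
    } finally {
        if (txn.isActive()) {
            txn.rollback();
        }
    }
}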
Edit:
(removed section on sharded counters as they do not guarantee )
Note that the above solution has a write bottleneck of about 1 write/s. If you need a higher performance solution you could look into using backend instances.
Whilst technically possible, you should heed the advice of other answers and use the services supplied by Google, such as datastore and memcache.
However, you could use a single backend which contains your data, then use your favourite RPC method to read and write data into the shared object. You will need to be aware that although it doesn't happen often, backends are not guaranteed not to die randomly - so you could lose all the data in this object.