How to remove the time measurement logic - Java

I need to calculate the execution time of some methods. These are private methods in the class, so Spring AOP is not appropriate. The code currently looks like this:
public void method() {
    StopWatch sw = new StopWatch();
    sw.start();
    innerMethod1();
    sw.stop();
    Monitoring.add("eventType1", sw.getLastTaskTimeMillis());

    sw.start();
    innerMethod2("abs");
    sw.stop();
    Monitoring.add("eventType2", sw.getLastTaskTimeMillis());

    sw.start();
    innerMethod3(5, 29);
    sw.stop();
    Monitoring.add("eventType3", sw.getLastTaskTimeMillis());
}
But these time-measurement insertions get tangled up with the business logic. Are there any solutions? The data will later be recorded in a database for Grafana. I'm looking towards AspectJ, but I can't pass JVM keys when starting the app.
When class instrumentation is required in environments that do not support or are not supported by the existing LoadTimeWeaver implementations, a JDK agent can be the only solution. For such cases, Spring provides InstrumentationLoadTimeWeaver, which requires a Spring-specific (but very general) VM agent, org.springframework.instrument-{version}.jar (previously named spring-agent.jar).
To use it, you must start the virtual machine with the Spring agent, by supplying the following JVM options:
-javaagent:/path/to/org.springframework.instrument-{version}.jar
@Mark Bramnik:
If I understand you correctly, then for methods
private List<String> innerMethod3(int value, int count) {
    // ...
}

private String innerMethod2(String event) {
    // ...
}
I need methods such as:
public <T, R, U> U timed(T value, R count, BiFunction<T, R, U> function) {
    long start = System.currentTimeMillis();
    U result = function.apply(value, count);
    Monitoring.add("method", System.currentTimeMillis() - start);
    return result;
}

public <T, R> R timed(T value, Function<T, R> function) {
    long start = System.currentTimeMillis();
    R result = function.apply(value);
    Monitoring.add("method", System.currentTimeMillis() - start);
    return result;
}
And call them like this:
List<String> timed = timed(5, 5, this::innerMethod3);
String string = timed("string", this::innerMethod2);
But if method4 has 4 parameters, then I need a new method for measuring time and a new functional interface
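For illustration, that hypothetical four-argument case would need a hand-rolled interface, since the JDK has no function type with four parameters (QuadFunction is a made-up name, shown only to make the boilerplate concrete):

@FunctionalInterface
interface QuadFunction<A, B, C, D, R> {
    R apply(A a, B b, C c, D d);
}

public <A, B, C, D, R> R timed(A a, B b, C c, D d, QuadFunction<A, B, C, D, R> function) {
    long start = System.currentTimeMillis();
    R result = function.apply(a, b, c, d);
    Monitoring.add("method", System.currentTimeMillis() - start);
    return result;
}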

There are many approaches you can take but all will boil down to refactoring.
Approach 1:
class Timed {
    public static void timed(String name, Runnable codeBlock) {
        long from = System.currentTimeMillis();
        codeBlock.run();
        long to = System.currentTimeMillis();
        System.out.println("Monitored: " + name + " : " + (to - from) + " ms");
    }

    public static <T> T timed(String name, Supplier<T> codeBlock) {
        long from = System.currentTimeMillis();
        T result = codeBlock.get();
        long to = System.currentTimeMillis();
        System.out.println("Monitored: " + name + " : " + (to - from) + " ms");
        return result;
    }
}
Notes:
I've used the Runnable / Supplier interfaces for simplicity; you might want to create your own functional interfaces for this.
I've used System.out - you'll use the existing Monitoring.add call instead
The aforementioned code can be used like this:
Timed.timed("sample.runnable", ()-> { // Timed. can be statically imported for even further brevity
// some code block here
});
// will measure
int result = Timed.timed("sample.callable", () -> 42);
// will measure and result will be 42
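As the first note above mentions, one reason to define your own functional interface is that Runnable and Supplier don't let checked exceptions propagate. A minimal sketch of such an interface (ThrowingSupplier is a made-up name, not a JDK type; the method is named timedChecked to avoid overload ambiguity with the Supplier version):

@FunctionalInterface
interface ThrowingSupplier<T> {
    T get() throws Exception;
}

public static <T> T timedChecked(String name, ThrowingSupplier<T> codeBlock) throws Exception {
    long from = System.currentTimeMillis();
    try {
        return codeBlock.get();
    } finally {
        // the finally block ensures the measurement is reported even if the code throws
        System.out.println("Monitored: " + name + " : " + (System.currentTimeMillis() - from) + " ms");
    }
}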
Another approach.
Refactor the code to public methods and integrate with Micrometer, which already has annotation support (see @Timed).
I don't know what Monitoring is, but Micrometer already contains integration with Prometheus (and other similar products that can store the metrics for later use from Grafana), and it keeps an in-memory statistical model of your measurements rather than keeping every individual measurement in memory. In a custom implementation that is complicated code to maintain.
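For illustration, a minimal sketch of the programmatic Micrometer API (assuming micrometer-core is on the classpath; in a Spring app the MeterRegistry would normally be injected rather than created by hand):

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

MeterRegistry registry = new SimpleMeterRegistry(); // stands in for the injected registry
Timer timer = registry.timer("eventType1");
timer.record(() -> innerMethod1());                 // times a void block
String s = timer.record(() -> innerMethod2("abs")); // times a block and returns its value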
Update 1
No, you got it wrong: you don't need to maintain different versions of timed, only the two versions that I've provided in the solution. In the case that you've presented in the question, you won't even need the second version of timed.
Your code will become:
public void method() {
    Timed.timed("eventType1", () -> {
        innerMethod1();
    });
    Timed.timed("eventType2", () -> {
        innerMethod2("abs");
    });
    Timed.timed("eventType3", () -> {
        innerMethod3(5, 29);
    });
}
The second version is required for the cases where you actually return some value from the "timed" code:
Example:
Let's say you have innerMethod4 that returns String, so you'll write the following code:
String result = Timed.timed("eventType3", () -> {
    return innerMethod4(5, 29);
});

Related

Java short circuit CompletableFuture

I am trying to find a way to skip a CompletableFuture based on specific conditions.
For example:
public CompletableFuture<Void> delete(Long id) {
    CompletableFuture<T> preFetchCf = get(id);
    CompletableFuture<Boolean> cf1 = execute();
    /* This is where I want a different execution path: if the result of this future is true, go further, else do not */
    // Execute this only if the result of cf1 is true
    CompletableFuture<T> deleteCf = _delete(id);
    // Execute this only if the result of cf1 is true
    CompletableFuture<T> postDeleteProcess = postDelete(id);
}
What is a good way to achieve this?
I will use a different example than the one in your question, because the intent of your code is not quite clear from a reader's perspective.
First, suppose the existence of a CompletableFuture<String> that provides the name of a Star Wars character.
CompletableFuture<String> character = CompletableFuture.completedFuture("Luke");
Now, imagine I have two other CompletableFutures that represent different paths I may want to follow, depending on whether the first completable future provides a character that is a Jedi.
Supplier<CompletableFuture<String>> thunk1 = () -> CompletableFuture.completedFuture("This guy is a Jedi");
Supplier<CompletableFuture<String>> thunk2 = () -> CompletableFuture.completedFuture("This guy is not a Jedi");
Notice that I wrapped each CompletableFuture in a Supplier, to avoid having them eagerly evaluated (this is a concept known as a thunk).
Now I build my asynchronous chain:
character.thenApply(c -> isJedi(c))
         .thenCompose(isJedi -> isJedi ? thunk1.get() : thunk2.get())
         .whenComplete((answer, error) -> System.out.println(answer));
The use of thenCompose lets me choose a path based on the boolean result. There I evaluate one of the thunks, causing it to create a new CompletableFuture for the path I care about.
This will print "This guy is a Jedi" to the screen.
So, I believe what you're looking for is the thenCompose method.
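Applied to the delete() flow from the question, the same idea could look roughly like this (a sketch; it assumes _delete and postDelete return CompletableFuture<Void>, unlike the generic T in the question, and drops the unused pre-fetch):

public CompletableFuture<Void> delete(Long id) {
    return execute().thenCompose(ok -> ok
            // only runs the delete and post-delete steps when execute() yields true
            ? _delete(id).thenCompose(deleted -> postDelete(id))
            // otherwise short-circuits to an already-completed future
            : CompletableFuture.completedFuture(null));
}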
Not sure if I understand your objective, but why don't you just go with future chaining like you said in the comment? Something like this, just to illustrate:
public class AppTest {
    @Test
    public void testCompletableFutures() {
        Integer id = (int) (Math.random() * 1000); // note the parentheses; (int) Math.random() * 1000 would always be 0
        CompletableFuture<Void> testing = AppTest.execute()
            .thenAcceptAsync(result -> {
                System.out.println("Result is: " + result);
                if (result)
                    AppTest.delete(id);
                else
                    throw new RuntimeException("Execution failed");
            })
            .thenApplyAsync(result -> AppTest.postDelete())
            .thenAcceptAsync(postDeleteResult -> {
                if (postDeleteResult)
                    System.out.println("Post delete cleanup success");
                else
                    throw new RuntimeException("Post delete failed");
            });
    }

    private static boolean postDelete() {
        System.out.println("Post delete cleanup");
        return Math.random() > 0.3;
    }

    private static CompletableFuture<Boolean> delete(int i) {
        System.out.println("Deleting id = " + i);
        return CompletableFuture.completedFuture(true);
    }

    private static CompletableFuture<Boolean> execute() {
        return CompletableFuture.supplyAsync(() -> Math.random() > 0.5);
    }
}
Of course that doesn't make much real-life sense, but I think it works to show a concept.
If you want to skip the second call after execute based on its result, that's clearly not possible, since you need that result. The point is that it should not matter to you whether it was skipped or not: since it's asynchronous, you are not blocking to wait for that result.

Akka stream - limiting Flow rate without introducing delay

I'm working with Akka (version 2.4.17) to build an observation Flow in Java (let's say of elements of type <T> to stay generic).
My requirement is that this Flow should be customizable to deliver a maximum number of observations per unit of time as soon as they arrive. For instance, it should be able to deliver at most 2 observations per minute (the first that arrive, the rest can be dropped).
I looked very closely at the Akka documentation, in particular this page, which details the built-in stages and their semantics.
So far, I tried the following approaches.
With throttle and shaping() mode (to not close the stream when the limit is exceeded):
Flow.of(T.class)
    .throttle(2,
            new FiniteDuration(1, TimeUnit.MINUTES),
            0,
            ThrottleMode.shaping())
With groupedWith and an intermediary custom method:
final int nbObsMax = 2;
Flow.of(T.class)
    .groupedWithin(Integer.MAX_VALUE, new FiniteDuration(1, TimeUnit.MINUTES))
    .map(list -> {
        List<T> listToTransfer = new ArrayList<>();
        for (int i = list.size() - nbObsMax; i > 0 && i < list.size(); i++) {
            listToTransfer.add(new T(list.get(i)));
        }
        return listToTransfer;
    })
    .mapConcat(elem -> elem) // Splitting List<T> into a Flow of T objects
The previous approaches give me the correct number of observations per unit of time, but these observations are retained and only delivered at the end of the time window (so there is an additional delay).
To give a more concrete example, if the following observations arrives into my Flow:
[Obs1 t=0s] [Obs2 t=45s] [Obs3 t=47s] [Obs4 t=121s] [Obs5 t=122s]
It should only output the following ones as soon as they arrive (processing time can be neglected here):
Window 1: [Obs1 t~0s] [Obs2 t~45s]
Window 2: [Obs4 t~121s] [Obs5 t~122s]
Any help will be appreciated, thanks for reading my first StackOverflow post ;)
I cannot think of an out-of-the-box solution that does what you want. Throttle will emit in a steady stream because it is implemented with the bucket model, rather than granting the permitted amount at the start of every time period.
To get the exact behavior you are after you would have to create your own custom rate-limit stage (which might not be that hard). You can find the docs on how to create custom stages here: http://doc.akka.io/docs/akka/2.5.0/java/stream/stream-customize.html#custom-linear-processing-stages-using-graphstage
One design that could work: keep an allowance counter saying how many elements can still be emitted, and reset it every interval. For every incoming element you subtract one from the counter and emit; when the allowance is used up, you keep pulling upstream but discard the elements rather than emitting them. Using TimerGraphStageLogic for your GraphStageLogic allows you to set a timed callback that resets the allowance.
I think this is exactly what you need: http://doc.akka.io/docs/akka/2.5.0/java/stream/stream-cookbook.html#Globally_limiting_the_rate_of_a_set_of_streams
Thanks to the answer of @johanandren, I've successfully implemented a custom time-based GraphStage that meets my requirements.
I post the code below, if anyone is interested:
import akka.stream.Attributes;
import akka.stream.FlowShape;
import akka.stream.Inlet;
import akka.stream.Outlet;
import akka.stream.stage.*;
import scala.concurrent.duration.FiniteDuration;

public class CustomThrottleGraphStage<A> extends GraphStage<FlowShape<A, A>> {

    private final FiniteDuration silencePeriod;
    private int nbElemsMax;

    public CustomThrottleGraphStage(int nbElemsMax, FiniteDuration silencePeriod) {
        this.silencePeriod = silencePeriod;
        this.nbElemsMax = nbElemsMax;
    }

    public final Inlet<A> in = Inlet.create("TimedGate.in");
    public final Outlet<A> out = Outlet.create("TimedGate.out");

    private final FlowShape<A, A> shape = FlowShape.of(in, out);

    @Override
    public FlowShape<A, A> shape() {
        return shape;
    }

    @Override
    public GraphStageLogic createLogic(Attributes inheritedAttributes) {
        return new TimerGraphStageLogic(shape) {

            private boolean open = false;
            private int countElements = 0;

            {
                setHandler(in, new AbstractInHandler() {
                    @Override
                    public void onPush() throws Exception {
                        A elem = grab(in);
                        if (open || countElements >= nbElemsMax) {
                            pull(in); // we drop all incoming observations since the rate limit has been reached
                        } else {
                            if (countElements == 0) { // we schedule the next instant to reset the observation counter
                                scheduleOnce("resetCounter", silencePeriod);
                            }
                            push(out, elem); // we forward the incoming observation
                            countElements += 1; // we increment the counter
                        }
                    }
                });
                setHandler(out, new AbstractOutHandler() {
                    @Override
                    public void onPull() throws Exception {
                        pull(in);
                    }
                });
            }

            @Override
            public void onTimer(Object key) {
                if (key.equals("resetCounter")) {
                    open = false;
                    countElements = 0;
                }
            }
        };
    }
}
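For completeness, the stage can then be plugged into a flow with via, mirroring the Flow.of(T.class) placeholder style used in the question (a sketch, reusing the 2-per-minute limit):

Flow.of(T.class)
    .via(new CustomThrottleGraphStage<>(2, new FiniteDuration(1, TimeUnit.MINUTES)))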

Reduce on Pojo field with Apache Flink using Java

I'm building a benchmarking tool for some distributed processing tools at the moment, and have some trouble with Apache Flink.
The setup is simple: LogPojo is a simple Pojo with three fields (long date, double value, String data). Out of a List I'm looking for the one LogPojo with the minimum "value" field. Basically the equivalent to:
pojoList.stream().min(new LogPojo.Comp()).get().getValue();
My flink setup looks like:
public double processLogs(List<LogPojo> logs) {
    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    DataSet<LogPojo> logSet = env.fromCollection(logs);
    double result = 0.0;
    try {
        ReduceOperator<LogPojo> ro = logSet.reduce(new LogReducer());
        List<LogPojo> c = ro.collect();
        result = c.get(0).getValue();
    } catch (Exception ex) {
        System.out.println("Exception caught: " + ex);
    }
    return result;
}
public class LogReducer implements ReduceFunction<LogPojo> {
    @Override
    public LogPojo reduce(LogPojo o1, LogPojo o2) {
        return (o1.getValue() < o2.getValue()) ? o1 : o2;
    }
}
It stops with:
Exception in thread "main" java.lang.NoSuchMethodError: scala.collection.immutable.HashSet$.empty()Lscala/collection/immutable/HashSet;
So somehow it seems to be unable to apply the reduce function. I just can't find out why. Any hints?
First of all, you should check your imports. You get an exception from a Scala class, but your program is implemented in Java. You might have accidentally imported the Scala DataSet API; using the Java API should not result in a Scala exception (unless you are using classes which depend on Scala). This kind of NoSuchMethodError is also a typical symptom of mixing Flink artifacts built against different Scala versions on the classpath.
Regardless of that, Flink has built-in aggregation methods for min, max, etc.
DataSet<LogPojo> logSet = env.fromCollection(logs);

// map LogPojo to a Tuple1<Double>
// (Flink's built-in aggregation functions work only on Tuple types)
DataSet<Tuple1<Double>> values = logSet.map(new MapFunction<LogPojo, Tuple1<Double>>() {
    @Override
    public Tuple1<Double> map(LogPojo l) throws Exception {
        return new Tuple1<>(l.value);
    }
});

// fetch the min value (at position 0 in the Tuple)
List<Tuple1<Double>> c = values.min(0).collect();
// get the first field of the Tuple
Double minVal = c.get(0).f0;

Complex custom Collector with Java 8

I have a stream of objects which I would like to collect the following way.
Let's say we are handling forum posts:
class Post {
    private Date time;
    private Data data;
}
I want to create a list which groups posts by a period. If there were no posts for X minutes, create a new group.
class PostsGroup {
    List<Post> posts = new ArrayList<>();
}
I want to get a List<PostsGroup> containing the posts grouped by the interval.
Example: interval of 10 minutes.
Posts:
[{time: x, data: {}}, {time: x + 3, data: {}}, {time: x + 12, data: {}}, {time: x + 45, data: {}}]
I want to get a list of posts group:
[
  {posts: [{time: x, data: {}}, {time: x + 3, data: {}}, {time: x + 12, data: {}}]},
  {posts: [{time: x + 45, data: {}}]}
]
Notice that the first group lasted till x + 22. Then a new post was received at x + 45.
Is this possible?
This problem could be easily solved using the groupRuns method of my StreamEx library:
long MAX_INTERVAL = TimeUnit.MINUTES.toMillis(10);
StreamEx.of(posts)
        .groupRuns((p1, p2) -> p2.time.getTime() - p1.time.getTime() <= MAX_INTERVAL)
        .map(PostsGroup::new)
        .toList();
I assume that you have a constructor
class PostsGroup {
    private List<Post> posts;

    public PostsGroup(List<Post> posts) {
        this.posts = posts;
    }
}
The StreamEx.groupRuns method takes a BiPredicate which is applied to two adjacent input elements and returns true if they must be grouped together. It creates a stream of lists where each list represents a group. The method is lazy and works fine with parallel streams.
You need to retain state between stream entries and write yourself a grouping classifier. Something like this would be a good start.
class Post {
    private final long time;
    private final String data;

    public Post(long time, String data) {
        this.time = time;
        this.data = data;
    }

    @Override
    public String toString() {
        return "Post{" + "time=" + time + ", data=" + data + '}';
    }
}
public void test() {
    System.out.println("Hello");
    long t = 0;
    List<Post> posts = Arrays.asList(
            new Post(t, "One"),
            new Post(t + 1000, "Two"),
            new Post(t + 10000, "Three")
    );
    // Group every 5 seconds.
    Map<Long, List<Post>> grouped = posts
            .stream()
            .collect(Collectors.groupingBy(new ClassifyByTimeBetween(5000)));
    grouped.entrySet().stream().forEach((e) -> {
        System.out.println(e.getKey() + " -> " + e.getValue());
    });
}
class ClassifyByTimeBetween implements Function<Post, Long> {
    final long delay;
    long currentGroupBy = -1;
    long lastDateSeen = -1;

    public ClassifyByTimeBetween(long delay) {
        this.delay = delay;
    }

    @Override
    public Long apply(Post p) {
        if (lastDateSeen >= 0) {
            if (p.time > lastDateSeen + delay) {
                // Start a new group.
                currentGroupBy = p.time;
            }
        } else {
            // First time - start there.
            currentGroupBy = p.time;
        }
        lastDateSeen = p.time;
        return currentGroupBy;
    }
}
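One caveat worth noting: Collectors.groupingBy with no map supplier returns a HashMap, so the groups may not iterate in chronological order, and the stateful classifier means the stream must remain sequential. Passing an explicit LinkedHashMap supplier preserves encounter order; a sketch:

Map<Long, List<Post>> grouped = posts.stream()
        .collect(Collectors.groupingBy(new ClassifyByTimeBetween(5000),
                LinkedHashMap::new,       // keeps groups in insertion (i.e. chronological) order
                Collectors.toList()));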
Since no one has provided a solution with a custom collector, as was required in the original problem statement, here is a collector implementation that groups Post objects based on the provided time interval.
The Date class mentioned in the question has been obsolete since Java 8 and is not recommended for use in new projects. Hence, LocalDateTime will be used instead.
Post & PostGroup
For testing purposes, I've used Post implemented as a Java 16 record (if you substitute it with a class, the overall solution will be fully compliant with Java 8):
public record Post(LocalDateTime dateTime) {}
Also, I've enhanced the PostsGroup object. My idea is that it should be capable of deciding whether an offered Post should be added to its list of posts or rejected, as the Information expert principle suggests (in short: all manipulations with the data should happen only inside the class to which that data belongs).
To facilitate this functionality, two extra fields were added: interval of type Duration from the java.time package, representing the maximum interval between the earliest post and the latest post in a group, and intervalBound of type LocalDateTime, which gets initialized when the first post is added and is later used internally by the method isWithinInterval() to check whether an offered post fits into the interval.
public class PostsGroup {
    private Duration interval;
    private LocalDateTime intervalBound;
    private List<Post> posts = new ArrayList<>();

    public PostsGroup(Duration interval) {
        this.interval = interval;
    }

    public boolean tryAdd(Post post) {
        if (posts.isEmpty()) {
            intervalBound = post.dateTime().plus(interval);
            return posts.add(post);
        } else if (isWithinInterval(post)) {
            return posts.add(post);
        }
        return false;
    }

    public boolean isWithinInterval(Post post) {
        return post.dateTime().isBefore(intervalBound);
    }

    @Override
    public String toString() {
        return "PostsGroup{" + posts + '}';
    }
}
I'm making two assumptions:
All posts in the source are sorted by time (if that is not the case, you should introduce a sorted() operation in the pipeline before collecting the results);
Posts need to be collected into the minimum number of groups; as a consequence, it's not possible to split this task and execute the stream in parallel.
Building a Custom Collector
We can create a custom collector either inline by using one of the versions of the static method Collector.of() or by defining a class that implements the Collector interface.
These parameters have to be provided while creating a custom collector:
Supplier Supplier<A> is meant to provide a mutable container which stores the elements of the stream. In this case, ArrayDeque (as an implementation of the Deque interface) will be handy as a container, giving convenient access to the most recently added element, i.e. the latest PostsGroup.
Accumulator BiConsumer<A,T> defines how to add elements into the container provided by the supplier. For this task, we need to provide logic that determines whether the next element from the stream (i.e. the next Post) should go into the last PostsGroup in the Deque, or whether a new PostsGroup needs to be allocated for it.
Combiner BinaryOperator<A> combiner() establishes a rule for merging two containers obtained while executing the stream in parallel. Since this operation is treated as not parallelizable, the combiner is implemented to throw an AssertionError in case of parallel execution.
Finisher Function<A,R> is meant to produce the final result by transforming the mutable container. The finisher function in the code below turns the container, a deque containing the result, into an immutable list.
Note: the Java 16 method toList() is used inside the finisher function; for Java 8 it can be replaced with collect(Collectors.toList()).
Characteristics allow providing additional information; for instance, Collector.Characteristics.UNORDERED denotes that the order in which partial results of the reduction are produced during parallel execution is not significant. In this case, the collector doesn't require any characteristics.
The method below is responsible for generating the collector based on the provided interval.
public static Collector<Post, ?, List<PostsGroup>> groupPostsByInterval(Duration interval) {
    return Collector.of(
            ArrayDeque::new,
            (Deque<PostsGroup> deque, Post post) -> {
                // if no groups have been created yet, or if the post doesn't fit into the most recent group
                if (deque.isEmpty() || !deque.getLast().tryAdd(post)) {
                    PostsGroup postsGroup = new PostsGroup(interval);
                    postsGroup.tryAdd(post);
                    deque.addLast(postsGroup);
                }
            },
            (Deque<PostsGroup> left, Deque<PostsGroup> right) -> {
                throw new AssertionError("should not be used in parallel");
            },
            (Deque<PostsGroup> deque) -> deque.stream().toList());
}
main() - demo
public static void main(String[] args) {
    List<Post> posts =
            List.of(new Post(LocalDateTime.of(2022, 4, 28, 15, 0)),
                    new Post(LocalDateTime.of(2022, 4, 28, 15, 3)),
                    new Post(LocalDateTime.of(2022, 4, 28, 15, 5)),
                    new Post(LocalDateTime.of(2022, 4, 28, 15, 8)),
                    new Post(LocalDateTime.of(2022, 4, 28, 15, 12)),
                    new Post(LocalDateTime.of(2022, 4, 28, 15, 15)),
                    new Post(LocalDateTime.of(2022, 4, 28, 15, 18)),
                    new Post(LocalDateTime.of(2022, 4, 28, 15, 27)),
                    new Post(LocalDateTime.of(2022, 4, 28, 15, 48)),
                    new Post(LocalDateTime.of(2022, 4, 28, 15, 54)));

    Duration interval = Duration.ofMinutes(10);

    List<PostsGroup> postsGroups = posts.stream()
            .collect(groupPostsByInterval(interval));

    postsGroups.forEach(System.out::println);
}
Output:
PostsGroup{[Post[dateTime=2022-04-28T15:00], Post[dateTime=2022-04-28T15:03], Post[dateTime=2022-04-28T15:05], Post[dateTime=2022-04-28T15:08]]}
PostsGroup{[Post[dateTime=2022-04-28T15:12], Post[dateTime=2022-04-28T15:15], Post[dateTime=2022-04-28T15:18]]}
PostsGroup{[Post[dateTime=2022-04-28T15:27]]}
PostsGroup{[Post[dateTime=2022-04-28T15:48], Post[dateTime=2022-04-28T15:54]]}

How to cast a RepositoryItem to an Order on ATG

I'm new to ATG and I'm failing to do something that looks fairly simple.
I'm trying to get an Order in the database by the number of the order. But this number is not the orderId, so I can't just use the OrderManager.loadOrder method.
This is the code I have so far:
Repository orderRepository = getOrderManager().getOrderTools().getOrderRepository();
RepositoryView view = orderRepository.getView("order");
RqlStatement statement = RqlStatement.parseRqlStatement("orderNumber EQUALS ?0");
Object params[] = { pOrderNumber };
RepositoryItem items[] = statement.executeQuery(view, params);
RepositoryItem order = null;
if ((items != null) && (items.length > 0)) {
    order = items[0];
}
// Now I want to convert this order of type "RepositoryItem" to an actual Order object
I can do this by getting the repository ID and calling loadOrder from the OrderManager, but that seems like going back to the database to find again what I already have in my hands.
Is there another way to get an actual Order object out of this RepositoryItem object?
If you only need properties off of the order item itself, then you can just retrieve them directly from the RepositoryItem using the getPropertyValue methods. If you find that you want to utilize the OrderImpl wrapper and its associated convenience methods, then you should retrieve the Order object instances via the OrderManager.loadOrder() method as you have suggested. While this will require slightly more work by the application to construct the Order wrapper, it does not necessarily mean another DB call against the order tables. Assuming you have not disabled repository caching for the order item, ATG will utilize the already-cached order repository item when it constructs the OrderImpl wrapper for you. This item would have been cached when you did the RQL lookup for the order by orderNumber, so a redundant DB call will not be performed.
Note that it may require additional DB calls to retrieve related order items if those items have not already been cached (e.g. payment groups, shipping groups, commerce items, etc.).
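For instance, reading a value directly off the repository item found by the RQL query needs no Order wrapper at all; a minimal sketch (the property names below are illustrative and should be checked against your order item descriptor):

RepositoryItem orderItem = items[0];
// property names depend on your order repository definition (orderrepository.xml)
String profileId = (String) orderItem.getPropertyValue("profileId");
Object state = orderItem.getPropertyValue("state");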
It really depends on what you are trying to do.
The question about whether the item is loaded from the database or from the cache depends on your repository settings, combined with the lazy loading settings. The documentation on this can be found here.
If you would like to update the order, then you should use OrderManager.loadOrder(), as this ensures that the order is updated correctly and allows you to reprice the order and update the other repository items which make up the order, such as payment groups and shipping groups (remember to use a transaction wrapper to ensure the order is updated safely; a sketch follows below).
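The transaction wrapper mentioned above typically follows ATG's TransactionDemarcation pattern. A rough sketch, assuming a transactionManager property wired to /atg/dynamo/transaction/TransactionManager (error handling trimmed for brevity):

public void updateOrderSafely(String orderId) throws Exception
{
    TransactionDemarcation td = new TransactionDemarcation();
    try
    {
        td.begin(getTransactionManager(), TransactionDemarcation.REQUIRED);
        Order order = getOrderManager().loadOrder(orderId);
        synchronized (order)
        {
            // ... apply your changes to the order here ...
            getOrderManager().updateOrder(order);
        }
    } finally
    {
        td.end(); // ends the demarcation; mark the transaction for rollback first if an error occurred
    }
}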
If you are simply trying to read values then going the repository way will be quicker. I would recommend creating a globally scoped component which your form handler references. Something along the lines of (code below not tested):
OrderTools.properties file:
$class=com.acme.commerce.order.OrderTools
$scope=global
orderRepository=/atg/commerce/order/OrderRepository
OrderTools class:
public class OrderTools extends GenericService
{
    private RepositoryView orderView;
    private RqlStatement orderStm;
    private MutableRepository orderRepository;
    private OrderManager orderManager;

    public void doStartService() throws ServiceException
    {
        try
        {
            orderView = getOrderRepository().getView(CommerceConstants.ORDER);
            orderStm = RqlStatement.parseRqlStatement("uniqueOrderId EQUALS ?0");
        } catch (RepositoryException e)
        {
            throw new ServiceException(e);
        }
    }

    protected RepositoryItem getOrderItem(final String uniqueOrderId) throws RepositoryException
    {
        Object params[] = new Object[1];
        params[0] = uniqueOrderId;
        RepositoryItem[] orderItems = orderStm.executeQuery(orderView, params);
        if (orderItems != null && orderItems.length > 0)
        {
            return getOrderRepository().getItem(orderItems[0].getRepositoryId(), CommerceConstants.ORDER);
        } else
        {
            return null;
        }
    }

    /*
     * This method demonstrates how to load an order using the OrderManager.loadOrder() method.
     * The code includes some basic timing so that a performance comparison can be done with
     * loadOrderSubItemsRepositoryMethod().
     */
    public void loadOrderUsingOrderManager(String orderId) throws CommerceException
    {
        long startTime = System.currentTimeMillis();
        Order order = getOrderManager().loadOrder(orderId);
        long orderLoadTime = System.currentTimeMillis();
        // read your properties here ...
        long totalTime = System.currentTimeMillis();
        if (isLoggingDebug())
        {
            logDebug("The order load time was " + (orderLoadTime - startTime) + "ms");
            logDebug("The time to read the list of properties was " + (totalTime - orderLoadTime) + "ms");
        }
    }

    /*
     * This method shows how to get order items such as payment groups or shipping groups
     * using the repository.
     */
    public void loadOrderSubItemsRepositoryMethod(MutableRepositoryItem orderItem)
    {
        long startTime = System.currentTimeMillis();
        // Example of how to get the payment groups using the repository
        List paymentGroups = (List) orderItem.getPropertyValue("paymentGroups");
        // Put code here to iterate through the list of items you want to read
        // Example of how to get the shipping groups
        List shippingGroups = (List) orderItem.getPropertyValue("shippingGroups");
        long totalTime = System.currentTimeMillis();
        if (isLoggingDebug())
        {
            logDebug("The time to read the sub-items was " + (totalTime - startTime) + "ms");
        }
    }

    public MutableRepository getOrderRepository()
    {
        return orderRepository;
    }

    public void setOrderRepository(final MutableRepository orderRepository)
    {
        this.orderRepository = orderRepository;
    }

    public OrderManager getOrderManager()
    {
        return orderManager;
    }

    public void setOrderManager(final OrderManager orderManager)
    {
        this.orderManager = orderManager;
    }
}
Hope this helps.
