Invert list of periods - java

I have a list of periods - each period contains startTime and endTime (as a timestamp).
I want to create a list which will contain the missing gaps in a given range.
Example:
from 100 to 500, for the given list:
Range[150, 200]
Range[230, 400]
It will produce a list:
Range[100, 150]
Range[200, 230]
Range[400, 500]
I created a simple algorithm which iterates over my input list and builds a valid result list, but I wonder if I can do the same using the Java 8 time API, or whether there is an external library for that. A minimal sketch of that kind of loop is shown below.
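For reference, a minimal sketch of the straightforward loop (assuming a hypothetical Range class with a (start, end) constructor and getStart()/getEnd() accessors, long timestamps, and sorted non-overlapping input):
List<Range> invert(List<Range> ranges, long from, long to) {
    List<Range> gaps = new ArrayList<>();
    long cursor = from;                       // left edge of the next potential gap
    for (Range r : ranges) {                  // input assumed sorted and non-overlapping
        if (r.getStart() > cursor) {
            gaps.add(new Range(cursor, r.getStart()));
        }
        cursor = Math.max(cursor, r.getEnd());
    }
    if (cursor < to) {
        gaps.add(new Range(cursor, to));      // trailing gap up to the window end
    }
    return gaps;
}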

Using a list of the individual range boundaries, you can construct the result from a complete, sorted set of boundaries that includes the min and max:
I'm using int[2] arrays, which should be easy to translate into your Range object.
The logic is simple: take only the range boundary numbers, make a complete set of them, and then pair up all consecutive boundaries. For that, a sorted list of all (distinct) numbers, including the window boundaries, is created first...
List<Integer> flat = Arrays.<int[]>asList(new int[] { 150, 200 },
                new int[] { 230, 400 }).stream()
        .flatMap(e -> Arrays.asList(e[0], e[1]).stream())
        .collect(Collectors.toList());
List<Integer> fullRange = new ArrayList<>();
fullRange.add(100);
fullRange.add(500);
fullRange.addAll(flat);
List<Integer> all = fullRange.stream()
        .distinct()
        .sorted()
        .collect(Collectors.toList());
System.out.println(
        IntStream.range(0, all.size() - 1) // excluding the last element
                .mapToObj(index -> Arrays.asList( // you can create Range objects here
                        all.get(index),
                        all.get(index + 1)))
                .collect(Collectors.toList()));
This outputs:
[[100, 150], [150, 200], [200, 230], [230, 400], [400, 500]]
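Note that this output is the full partition, original ranges included. If you only want the gaps, one option (an addition to this answer, not part of the original) is to drop every pair that starts where one of the input ranges starts:
// Keep only the gaps by skipping pairs whose start coincides with the
// start of an input range.
Set<Integer> inputStarts = new HashSet<>(Arrays.asList(150, 230));
List<List<Integer>> gaps = IntStream.range(0, all.size() - 1)
        .mapToObj(i -> Arrays.asList(all.get(i), all.get(i + 1)))
        .filter(pair -> !inputStarts.contains(pair.get(0)))
        .collect(Collectors.toList());
// gaps: [[100, 150], [200, 230], [400, 500]]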

Here is a solution using my lib Time4J. I have assumed that your timestamps are modelled as "milliseconds since the Unix epoch", but you are free to use any other type. Time4J knows many different types of date- or time-related intervals and offers various methods to calculate interferences of intervals, here the complement of an interval collection.
// define/create your intervals
MomentInterval i1 =
MomentInterval.between(Instant.ofEpochMilli(150), Instant.ofEpochMilli(200));
MomentInterval i2 =
MomentInterval.between(Instant.ofEpochMilli(230), Instant.ofEpochMilli(400));
// collect the intervals into an interval-collection
IntervalCollection<Moment> ic =
IntervalCollection.onMomentAxis().plus(Arrays.asList(i1, i2));
// define/create the outer time window
MomentInterval window =
MomentInterval.between(Instant.ofEpochMilli(100), Instant.ofEpochMilli(500));
// create/calculate the complement of the interval collection
ic.withComplement(window)
.getIntervals()
.forEach(
i ->
System.out.println(
"Range["
+ i.getStart().getTemporal().toTemporalAccessor().toEpochMilli()
+ ", "
+ i.getEnd().getTemporal().toTemporalAccessor().toEpochMilli()
+ "]"
)
);
Range[100, 150]
Range[200, 230]
Range[400, 500]
By the way, Time4J uses the half-open approach for moment/instant intervals, meaning that the end boundary of such intervals is excluded. Therefore, I would rather choose the open bracket ")" instead of "]", but here I have closely followed your question.

Another solution could look like the one below. For this I defined some classes and functions.
Class Range :
class Range {
    private int start;
    private int end;

    Range(int start, int end) { this.start = start; this.end = end; }
    int getStart() { return start; }
    int getEnd() { return end; }
}
class RangeInfo:
class RangeInfo {
private List<Range> ranges;
private int newStart;
private int newEnd;
}
function1:
This function creates a new Range object (the gap) from two adjacent Range objects.
BiFunction<Range,Range,Range> function1 = (r1,r2)->new Range(r1.getEnd(),r2.getStart());
function:
This function takes two parameters, the List of given ranges (Range[150, 200] and Range[230, 400]) and the outer Range (100, 500), and returns a List of new Ranges.
BiFunction<List<Range>, Range, List<Range>> function = (r1, pair) -> { // you can use new Pair(100,500) instead of a Range object
    List<Range> result = new ArrayList<>();
    result.add(new Range(pair.getStart(), r1.get(0).getStart()));
    IntStream.range(0, r1.size() - 1)
            .mapToObj(i -> function1.apply(r1.get(i), r1.get(i + 1)))
            .forEachOrdered(result::add);
    result.add(new Range(r1.get(r1.size() - 1).getEnd(), pair.getEnd()));
    result.addAll(r1);
    return result;
};
Finally, apply the function and sort the resulting Ranges like below.
function.apply(ranges,new Range(100,500))
.stream()
.sorted(Comparator.comparingInt(Range::getStart))
.collect(Collectors.toList());

This would be the code in plain Java.
Code
public final class RangesCalculator {
    private static final String START_RANGE = "100";
    private static final String END_RANGE = "500";

    public static List<Range> invertRange(List<Range> ranges) {
        List<Range> toReturn = new ArrayList<>();
        for (Range range : ranges) {
            if (toReturn.isEmpty()) {
                if (!START_RANGE.equals(range.getStart())) {
                    toReturn.add(Range.newBuilder().setStart(START_RANGE).setEnd(range.getStart()).build());
                }
            } else {
                Range lastRange = toReturn.get(toReturn.size() - 1);
                lastRange.setEnd(range.getStart());
            }
            if (!END_RANGE.equals(range.getEnd())) {
                toReturn.add(Range.newBuilder().setStart(range.getEnd()).setEnd(END_RANGE).build());
            }
        }
        return toReturn.stream().filter(range -> !range.getStart().equals(range.getEnd())).collect(Collectors.toList());
    }
}
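A quick usage sketch (hypothetical, assuming the builder-style Range with String boundaries used above):
// Hypothetical usage of the builder-style Range assumed by the code above.
List<Range> input = Arrays.asList(
        Range.newBuilder().setStart("150").setEnd("200").build(),
        Range.newBuilder().setStart("230").setEnd("400").build());
List<Range> gaps = RangesCalculator.invertRange(input);
gaps.forEach(r -> System.out.println("Range[" + r.getStart() + ", " + r.getEnd() + "]"));
// Expected: Range[100, 150], Range[200, 230], Range[400, 500]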

Related

Using Java Stream API, finding highest value of variable, with the stream of the changes made to the variable

Context/Scenario
Let's say we have an immutable object called Transaction, where transaction.getAction() would return a TransactionAction enum which can be DEPOSIT or WITHDRAW, and transaction.getAmount() would return an Integer which specifies the amount of money being deposited or withdrawn.
enum TransactionAction {
WITHDRAW,
DEPOSIT
}
public class Transaction {
private final TransactionAction action;
private final int amount;
public Transaction(TransactionAction action, int amount) {
this.action = action;
this.amount = amount;
}
public TransactionAction getAction() {
return action;
}
public int getAmount() {
return amount;
}
}
Question
We now have a Stream<Transaction> which is a stream filled with Transaction that can either be DEPOSIT or WITHDRAW. We can imagine this Stream<Transaction> as a history of transactions of one particular bank account.
What I am trying to achieve is to get the highest balance the account has ever reached, in the most efficient manner (hence using the Stream API).
Example
Bob transaction history is:
// balance start at 0
[DEPOSIT] 1200 // balance: 1200
[DEPOSIT] 500 // balance: 1700
[WITHDRAW] 700 // balance: 1000
[DEPOSIT] 300 // balance: 1300
[WITHDRAW] 800 // balance: 500
[WITHDRAW] 500 // balance: 0
Bob's highest balance is 1700.
What you need is to find the maximum value of a cumulative sum. In pseudo-code, this would be something like:
transactions = [1200, 500, -700, 300, -800, -500]
csum = cumulativeSum(transactions) // should be [1200,1700,1000,1300,500,0]
max(csum) // should be 1700
The imperative way:
The traditional for-loop is well suited for such cases. It should be fairly easy to write and is probably the most efficient alternative both in time and space. It does not require multiple iterations and it does not require extra lists.
int max = 0;
int csum = 0;
for (Transaction t: transactions) {
int amount = (t.getAction() == TransactionAction.WITHDRAW ? -1 : 1) * t.getAmount();
csum += amount;
if (csum > max) max = csum;
}
Diving into functional:
Streams are a functional programming concept and, as such, they are free of side-effects and well suited for stateless operations. Keeping the cumulative state is considered a side-effect, and then we would have to talk about Monads to bring those side-effects under control and... we don't want to go that way.
Java, not being a functional language (although allowing for functional style), cares less about purity. You could simply have a control variable outside the stream to keep track of that external state within the current map or reduce operations. But that would also be giving up everything Streams are meant for.
So let's see how Java's experienced fellows do in this matter. In pure Haskell, the cumulative sum can be achieved with a Scan Left operation:
λ> scanl1 (+) [1200, 500, -700, 300, -800, -500]
[1200,1700,1000,1300,500,0]
Finding the maximum of this would be as simple as:
λ> maximum ( scanl1 (+) [1200, 500, -700, 300, -800, -500] )
1700
A Java Streams solution:
Java does not have such an idiomatic way of expressing a scan left, but you may achieve a similar result with collect.
transactions.stream()
.map(t -> (t.getAction() == TransactionAction.WITHDRAW ? -1 : 1) * t.getAmount())
.collect(ArrayList<Integer>::new, (csum, amount) ->
csum.add(csum.size() > 0 ? csum.get(csum.size() - 1) + amount : amount),
ArrayList::addAll)
.stream()
.max(Integer::compareTo);
// returns Optional[1700]
EDIT: As correctly pointed out in the comments, this accumulator function is not associative and problems would appear if trying to use parallelStream instead of stream.
This can be further simplified. For example, if you enrich your TransactionAction enum with a multiplier (-1 for WITHDRAW and 1 for DEPOSIT), then map could be replaced with:
.map(t -> t.getAction().getMultiplier() * t.getAmount())
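A sketch of that enriched enum (getMultiplier() is the accessor name assumed above, not an existing API):
enum TransactionAction {
    WITHDRAW(-1),
    DEPOSIT(1);

    private final int multiplier;

    TransactionAction(int multiplier) { this.multiplier = multiplier; }

    // sign to apply to the amount: -1 for withdrawals, +1 for deposits
    public int getMultiplier() { return multiplier; }
}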
EDIT: Yet another approach: Parallel Prefix Sum
Since Java 8, arrays offer a parallelPrefix operation that could be used like:
Integer[] amounts = transactions.stream()
.map(t -> (t.getAction() == TransactionAction.WITHDRAW ? -1 : 1) * t.getAmount())
.toArray(Integer[]::new);
Arrays.parallelPrefix(amounts, Integer::sum);
Arrays.stream(amounts).max(Integer::compareTo);
// returns Optional[1700]
Like Streams' collect, it also requires an associative function; Integer::sum satisfies that property. The downside is that it requires an array and can't be used with lists. And although parallelPrefix is very efficient, setting up an array just to use it might not pay off.
Wrapping up:
Again, it's possible to achieve this with Java Streams although it won't be as efficient as a traditional loop both in time and space. But you benefit from the compositionality of streams. As always, it's a trade-off.
A stream would not help here. Use a list and a for-loop:
List<Transaction> transactions = ...;
int balance = 0;
int max = 0;
for (Transaction transaction : transactions) {
balance += (transaction.getAction() == TransactionAction.DEPOSIT ? 1 : -1)
* transaction.getAmount();
max = Math.max(max, balance);
}
The problem is that you need to keep track of some state while processing transactions, and you wouldn't be able to do this with streams without introducing complicated or mutable data structures that would make this code bug-prone.
Here is another Stream solution:
AtomicInteger balance = new AtomicInteger(0);
int highestBalance = transactions
.stream()
.mapToInt(transaction -> {
int amount = transaction.getAmount();
if (transaction.getAction() == TransactionAction.WITHDRAW) {
amount = -amount;
}
return balance.accumulateAndGet(amount, Integer::sum);
})
.max()
.orElse(0);
The cumulative sum at each position can be computed like this:
List<Integer> integers = Arrays.asList(1200, 500, -700, 300, -800, -500);
Stream<Integer[]> cumulativeSum = Stream.iterate(
new Integer[]{0, integers.get(0)},
p -> new Integer[]{p[0] + 1, p[1] + integers.get(p[0] + 1)}
)
.limit(integers.size());
With this you can get the max balance in this way:
Integer[] max = cumulativeSum
.max(Comparator.comparing(p -> p[1]))
.get();
System.out.println("Position: " + max[0]);
System.out.println("Value: " + max[1]);
Or with an iterator, but there is a problem here: the last sum won't be computed:
Stream<Integer> integerStream = Arrays.stream(new Integer[]{
1200, 500, -700, 300, -800, -500});
Iterator<Integer> iterator = integerStream.iterator();
Integer maxCumulativeSum = Stream.iterate(iterator.next(), p -> p + iterator.next())
.takeWhile(p -> iterator.hasNext())
.max(Integer::compareTo).get();
System.out.println(maxCumulativeSum);
The problem is with takeWhile; it may be solved with takeWhileInclusive (from an external library).
A wrong solution
// Deposit is positive, withdrawal is negative.
final Stream<Integer> theOriginalDepositWithdrawals = Stream.of(1200, 500, -700, 300, -800, -500);
final Stream<Integer> sequentialDepositWithdrawals = theOriginalDepositWithdrawals.sequential();
final CurrentBalanceMaximumBalance currentMaximumBalance = sequentialDepositWithdrawals.<CurrentBalanceMaximumBalance>reduce(
// Identity.
new CurrentBalanceMaximumBalance(0, Integer.MIN_VALUE),
// Accumulator.
(currentAccumulation, elementDepositWithdrawal) -> {
final int newCurrentBalance =
currentAccumulation.currentBalance +
elementDepositWithdrawal;
final int newMaximumBalance = Math.max(
currentAccumulation.maximumBalance,
newCurrentBalance
);
return new CurrentBalanceMaximumBalance(
newCurrentBalance,
newMaximumBalance
);
},
// Combiner.
(res1, res2) -> {
final int newCurrentBalance =
res1.currentBalance +
res2.currentBalance;
final int newMaximumBalance = Math.max(
res1.maximumBalance,
res2.maximumBalance
);
return new CurrentBalanceMaximumBalance(
newCurrentBalance, newMaximumBalance
);
}
);
System.out.println("Maximum is: " + currentMaximumBalance.maximumBalance);
Helper class:
class CurrentBalanceMaximumBalance {
public final int currentBalance;
public final int maximumBalance;
public CurrentBalanceMaximumBalance(
int currentBalance,
int maximumBalance
) {
this.currentBalance = currentBalance;
this.maximumBalance = maximumBalance;
}
}
This is a wrong solution. It might arbitrarily work, but there is no guarantee that it will.
It breaks the interface of reduce. The properties that are broken are associativity for both the accumulator function and the combiner function. It also doesn't require that the stream respects the order of the original transactions.
This makes it possibly dangerous to use, and might well give wrong results depending on what the implementation of reduce happens to be as well as whether the stream respects the original order of the deposits and withdrawals or not.
Using sequential() here is not sufficient, since sequential() is about sequential/parallel execution. An example of a stream that executes sequentially but does not have ordering is a stream created from a HashSet and then have sequential() called on it.
A correct solution
The problem uses the concept of a "current balance", and that is only meaningful when computed from the first transaction onwards, in order, to the end. For instance, if you have the list [-1000, 10, 10, -1000], you cannot start in the middle and then say that the "current balance" was 20 at some point. You must apply the operations regarding the "current balance" in the order of the original transactions.
So, one straightforward solution is to:
Require that the stream respects the original order of transactions, with a defined "encounter order".
Apply forEachOrdered(), as in the sketch below.
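A minimal sketch of those two steps (AtomicInteger holders are just one possible way to keep the running state):
// Sketch: requires a stream with a defined encounter order matching the
// transaction history (e.g. built from a List).
AtomicInteger balance = new AtomicInteger(0);
AtomicInteger max = new AtomicInteger(0);
transactions.stream().forEachOrdered(t -> {
    int amount = (t.getAction() == TransactionAction.WITHDRAW ? -1 : 1) * t.getAmount();
    int current = balance.addAndGet(amount);
    max.accumulateAndGet(current, Math::max);
});
System.out.println("Highest balance: " + max.get()); // 1700 for Bob's history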

Hackerrank: Frequency Queries Question Getting Timeout Error, How to optimize the code further?

I am getting a timeout error for my code, which I wrote using HashMap functions in Java 8. When I submitted my answer on the HackerRank platform, 5 out of 14 test cases failed due to the timeout.
Below is the question
You are given q queries. Each query is of the form of two integers described below:
1 x : Insert x in your data structure.
2 y : Delete one occurrence of y from your data structure, if present.
3 z : Check if any integer is present whose frequency is exactly z. If yes, print 1, else 0.
The queries are given in the form of a 2-D array where queries[i][0] contains the operation and queries[i][1] contains the data element.
How should I optimize this code further?
static HashMap<Integer, Integer> buffer = new HashMap<Integer, Integer>();

// Complete the freqQuery function below.
static List<Integer> freqQuery(List<List<Integer>> queries) {
    // iterate over each query, perform the operation, and collect the outputs
    List<Integer> output = queries.stream()
            .map(query -> performQuery(query))
            .filter(v -> v != -1)
            .collect(Collectors.toList());
    return output;
}

private static Integer performQuery(List<Integer> query) {
    if (query.get(0) == 1) {
        buffer.put(query.get(1), buffer.getOrDefault(query.get(1), 0) + 1);
    } else if (query.get(0) == 2) {
        if (buffer.containsKey(query.get(1)) && buffer.get(query.get(1)) > 0) {
            buffer.put(query.get(1), buffer.get(query.get(1)) - 1);
        }
    } else {
        if (buffer.containsValue(query.get(1))) {
            return 1;
        } else {
            return 0;
        }
    }
    return -1;
}
public static void main(String[] args) {
List<List<Integer>> queries = Arrays.asList(
Arrays.asList(1,5),
Arrays.asList(1,6),
Arrays.asList(3,2),
Arrays.asList(1,10),
Arrays.asList(1,10),
Arrays.asList(1,6),
Arrays.asList(2,5),
Arrays.asList(3,2)
);
long start = System.currentTimeMillis();
System.out.println(freqQuery(queries));
long end = System.currentTimeMillis();
//finding the time difference and converting it into seconds
float sec = (end - start) / 1000F;
System.out.println("FreqQuery function Took "+sec + " s");
}
}
The problem with your code is the z operation. Specifically, the method containsValue has linear time complexity, making the overall complexity of the algorithm O(n*n). Here is a hint: add another HashMap on top of the one that you have, which counts the occurrences of each frequency in the first map. That way you can query the second map directly by frequency (because the frequency is the key in this case).
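A minimal sketch of that hint (the names are mine; freqOfFreq maps each frequency to how many distinct values currently have it):
static Map<Integer, Integer> count = new HashMap<>();      // value -> its current frequency
static Map<Integer, Integer> freqOfFreq = new HashMap<>(); // frequency -> number of values having it

private static Integer performQuery(List<Integer> query) {
    int op = query.get(0), x = query.get(1);
    if (op == 1) {                                  // insert x
        int old = count.getOrDefault(x, 0);
        if (old > 0) freqOfFreq.merge(old, -1, Integer::sum);
        count.put(x, old + 1);
        freqOfFreq.merge(old + 1, 1, Integer::sum);
    } else if (op == 2) {                           // delete one occurrence of x
        int old = count.getOrDefault(x, 0);
        if (old > 0) {
            freqOfFreq.merge(old, -1, Integer::sum);
            count.put(x, old - 1);
            if (old - 1 > 0) freqOfFreq.merge(old - 1, 1, Integer::sum);
        }
    } else {                                        // is any value's frequency exactly x?
        return freqOfFreq.getOrDefault(x, 0) > 0 ? 1 : 0; // O(1) instead of containsValue
    }
    return -1;
}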

What is the best way to find common elements from 2 sets?

Recently I had an interview and I was asked one question.
I have 2 sets with around 1 Million records each.
I have to find the common element in 2 sets.
My response:
I will create a new empty Set. I gave him the solution below, but he was not happy with it. He said there are 1 million records, so the solution won't be good.
public Set<Integer> commonElements(Set<Integer> s1, Set<Integer> s2) {
Set<Integer> res = new HashSet<>();
for (Integer temp : s1) {
if(s2.contains(temp)) {
res.add(temp);
}
}
return res;
}
What is the better way to solve this problem then?
First of all: in order to determine the intersection of two sets, you absolutely have to look at all entries of at least one of the two sets (to figure out whether each one is in the other set). There is no magic that would tell you the answer in less than O(min(size(s1), size(s2))). Period.
The next thing to tell the interviewer: "1 million entries. You must be kidding. Any decent piece of hardware crunches two 1-million-entry sets in less than a second". (Of course, that only applies to objects that are cheap to compare, as is the case for Integer instances here. If oneRecord.equals(anotherRecord) is a super expensive operation, then 1 million entries could still be a problem.)
Then you briefly mention that there are various built-in ways to solve this, as well as various 3rd party libraries. But you avoid the mistake that the other two answers make: pointing to a library that does compute the intersect is not at all something you sell as "solution" to this question.
You see, regarding coding: the Java Set interface has an easy solution for that: s1.retainAll(s2) computes the intersection of the two sets, as it removes all elements from s1 that aren't in s2.
Obviously, you have to mention within the interview that this will modify s1.
In case the requirement is to not modify s1 or s2, your solution is a viable way to go, and there isn't anything one can do about the runtime cost. If anything, you could call size() on both sets and iterate over the one that has fewer entries.
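A small sketch of that size() optimization:
// Iterate the smaller set; lookups go against the larger one.
Set<Integer> smaller = s1.size() <= s2.size() ? s1 : s2;
Set<Integer> larger = (smaller == s1) ? s2 : s1;
Set<Integer> res = new HashSet<>();
for (Integer value : smaller) {
    if (larger.contains(value)) {
        res.add(value);
    }
}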
Alternatively, you can do
Set<Integer> result = new HashSet<>(s1);
result.retainAll(s2);
return result;
but in the end, you have to iterate one set and for each element determine whether it is in the second set.
But of course, the real answer to such questions is always always always to show the interviewer that you are able to dissect the problem into its different aspects. You outline basic constraints, you outline different solutions and discuss their pros and cons. Me for example, I would expect you to sit down and maybe write a program like this:
import java.util.HashSet;
import java.util.Random;
import java.util.Set;
import java.util.stream.Collectors;

import org.junit.Before;
import org.junit.Test;

public class Numbers {
    private final static int numberOfEntries = 20_000_000;
    private final static int maxRandom = numberOfEntries;
    private Set<Integer> s1;
    private Set<Integer> s2;

    @Before
    public void setUp() throws Exception {
        Random random = new Random(42);
        s1 = fillWithRandomEntries(random, numberOfEntries);
        s2 = fillWithRandomEntries(random, numberOfEntries);
    }

    private static Set<Integer> fillWithRandomEntries(Random random, int entries) {
        Set<Integer> rv = new HashSet<>();
        for (int i = 0; i < entries; i++) {
            rv.add(random.nextInt(maxRandom));
        }
        return rv;
    }

    @Test
    public void classic() {
        long start = System.currentTimeMillis();
        HashSet<Integer> intersection = new HashSet<>();
        s1.forEach((i) -> {
            if (s2.contains(i))
                intersection.add(i);
        });
        long end = System.currentTimeMillis();
        System.out.println("foreach duration: " + (end - start) + " ms");
        System.out.println("intersection.size() = " + intersection.size());
    }

    @Test
    public void retainAll() {
        long start = System.currentTimeMillis();
        s1.retainAll(s2);
        long end = System.currentTimeMillis();
        System.out.println("Retain all duration: " + (end - start) + " ms");
        System.out.println("intersection.size() = " + s1.size());
    }

    @Test
    public void streams() {
        long start = System.currentTimeMillis();
        Set<Integer> intersection = s1.stream().filter(i -> s2.contains(i)).collect(Collectors.toSet());
        long end = System.currentTimeMillis();
        System.out.println("streaming: " + (end - start) + " ms");
        System.out.println("intersection.size() = " + intersection.size());
    }

    @Test
    public void parallelStreams() {
        long start = System.currentTimeMillis();
        Set<Integer> intersection = s1.parallelStream().filter(i -> s2.contains(i)).collect(Collectors.toSet());
        long end = System.currentTimeMillis();
        System.out.println("parallel streaming: " + (end - start) + " ms");
        System.out.println("intersection.size() = " + intersection.size());
    }
}
The first observation here: I decided to run with 20 million entries. I started with 2 million, but all three tests would run well below 500 ms. Here is the print out for 20 million on my Mac Book Pro:
foreach duration: 9304 ms
intersection.size() = 7990888
streaming: 9356 ms
intersection.size() = 7990888
Retain all duration: 685 ms
intersection.size() = 7990888
parallel streaming: 6998 ms
intersection.size() = 7990888
As expected: all intersects have the same size (because I seeded the random number generator to get to comparable results).
And surprise: modifying s1 in place ... is by far the cheapest option. It beats streaming by a factor of 10. Also note: the parallel streaming is quicker here. When running with 1 million entries, the sequential stream was faster.
That is why I initially said to mention that "1 million entries is not a performance problem". That is a very important statement, as it tells the interviewer that you are not one of those people wasting hours micro-optimizing non-existing performance issues.
You can use CollectionUtils from Apache Commons Collections:
CollectionUtils.intersection(Collection a, Collection b)
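A small usage sketch (assuming the commons-collections4 artifact is on the classpath):
import java.util.Collection;
import java.util.Set;

import org.apache.commons.collections4.CollectionUtils;

class IntersectionExample {
    static Collection<Integer> common(Set<Integer> s1, Set<Integer> s2) {
        // Returns a new collection; neither input set is modified.
        return CollectionUtils.intersection(s1, s2);
    }
}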
The answer is:
s1.retainAll(s2);
Ref. https://www.w3resource.com/java-exercises/collection/java-collection-hash-set-exercise-11.php

Compartmentalizing loops over a large iteration

The goal of my question is to enhance the performance of my algorithm by splitting the range of my loop iterations over a large array list.
For example: I have an ArrayList with a size of about 10 billion entries of long values. The goal I am trying to achieve is to run the loop from 0 to 100 million entries and output the result of whatever calculations happen inside the loop for those 100 million entries; then begin at 100 million to 200 million, doing the same and outputting the result; then 200-300 million, 300-400 million, and so on and so forth.
After I get all of the partial results for each 100-million chunk, I can then sum them up outside of the loop, collecting the results from the loop outputs in parallel.
I have tried to use a range that might be able to achieve something similar by trying to use a dynamic range-shift method, but I can't seem to get the logic fully implemented like I would like to.
public static void tt4() {
    long essir2 = 0;
    long essir3 = 0;
    List<Long> cc = new ArrayList<>();
    List<Long> range = new ArrayList<>();
    // Breakpoint() is a method that returns list values; they were converted to
    // String because of some concatenations and are converted back to long here
    for (String ari1 : Breakpoint()) {
        cc.add(Long.valueOf(ari1));
    }
    // the size of the list is huge, about 1 trillion entries at the minimum
    long hy = cc.size() - 1;
    for (long k = 0; k < hy; k++) {
        long t1 = cc.get((int) k);
        long t2 = cc.get((int) (k + 1));
        // My main question: I am trying to iterate the entire list in a dynamic way
        // which would exclude repeated endpoints on each iteration.
        range = LongStream.rangeClosed(t1 + 1, t2)
                .boxed()
                .collect(Collectors.toList());
        for (long i : range) {
            // Hard is another method call in the iteration;
            // complexcalc is a method as well
            essir2 = complexcalc((int) i, (int) Hard(i));
            essir3 += essir2;
        }
    }
    System.out.println("\n" + essir3);
}
I don't have any errors; I am just looking for a way to enhance performance and reduce running time. I can do a million entries in under a second directly, but when I use the size I require, it runs forever. The sizes I'm giving are abstractions to illustrate orders of magnitude; I don't want opinions like "100 billion is not much". If I can do a million in under a second, I'm still talking about massively huge numbers I need to iterate over while doing complex tasks and calls. I just need help with the logic I'm trying to achieve, if possible.
One thing I would suggest right off the bat would be to store your Breakpoint return value inside a simple array rather than using a List. This should improve your execution time significantly:
List<Long> cc = new ArrayList<>();
for (String ari1 : Breakpoint()) {
cc.add(Long.valueOf(ari1));
}
Long[] ccArray = cc.toArray(new Long[0]);
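Going a step further (my suggestion, not part of the original answer), a primitive long[] avoids boxing entirely:
// Assumes Breakpoint() returns a collection of numeric strings, as implied
// by the loop above; a primitive array means no Long boxing at all.
long[] ccPrimitive = Breakpoint().stream()
        .mapToLong(Long::parseLong)
        .toArray();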
I believe what you're looking for is to split your tasks across multiple threads. You can do this with ExecutorService "which simplifies the execution of tasks in asynchronous mode".
Note that I am not overly familiar with this whole concept but have experimented with it a bit recently and give you a quick draft of how you could implement this.
I welcome those more experienced with multi-threading to either correct this post or provide additional information in the comments to help improve this answer.
Runnable Task class
public class CompartmentalizationTask implements Runnable {
    private final ArrayList<Long> cc;
    private final long index;

    public CompartmentalizationTask(ArrayList<Long> list, long index) {
        this.cc = list;
        this.index = index;
    }

    @Override
    public void run() {
        Main.compartmentalize(cc, index);
    }
}
Main class
private static ExecutorService exeService = Executors.newCachedThreadPool();
private static List<Future<?>> futureTasks = new ArrayList<>();

public static void tt4() throws ExecutionException, InterruptedException {
    ArrayList<Long> cc = new ArrayList<>();
    // Breakpoint() is a method that returns list values; they were converted to
    // String because of some concatenations and are converted back to long here
    for (String ari1 : Breakpoint()) {
        cc.add(Long.valueOf(ari1));
    }
    // the size of the list is huge, about 1 trillion entries at the minimum
    long hy = cc.size() - 1;
    for (long k = 0; k < hy; k++) {
        futureTasks.add(Main.exeService.submit(new CompartmentalizationTask(cc, k)));
    }
    for (int i = 0; i < futureTasks.size(); i++) {
        futureTasks.get(i).get(); // wait for each task to finish
    }
    exeService.shutdown();
}
public static void compartmentalize(ArrayList<Long> cc, long index) {
    long t1 = cc.get((int) index);
    long t2 = cc.get((int) (index + 1));
    long essir2;
    long essir3 = 0;
    // iterate the sub-range in a dynamic way that excludes repeated endpoints
    List<Long> range = LongStream.rangeClosed(t1 + 1, t2)
            .boxed()
            .collect(Collectors.toList());
    for (long i : range) {
        // Hard is another method call in the iteration; complexcalc is a method as well
        essir2 = complexcalc((int) i, (int) Hard(i));
        essir3 += essir2;
    }
    // NOTE: as a Runnable, this partial sum (essir3) is lost; see the Callable
    // sketch below for one way to return it to the caller.
}
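Since a Runnable cannot hand the partial sum back, a variant using Callable<Long> and summing the Futures might look like this (a sketch under the same assumptions about cc, Hard() and complexcalc()):
// Sketch: each chunk returns its partial sum via Callable<Long>,
// and the caller sums the Futures.
static long tt4Parallel(List<Long> cc) throws Exception {
    ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    List<Future<Long>> partials = new ArrayList<>();
    for (long k = 0; k < cc.size() - 1; k++) {
        final long index = k;
        partials.add(pool.submit(() -> {
            long t1 = cc.get((int) index);
            long t2 = cc.get((int) (index + 1));
            long sum = 0;
            for (long i = t1 + 1; i <= t2; i++) { // plain loop: no boxing, no interim list
                sum += complexcalc((int) i, (int) Hard(i));
            }
            return sum;
        }));
    }
    long total = 0;
    for (Future<Long> f : partials) {
        total += f.get(); // blocks until that chunk is done
    }
    pool.shutdown();
    return total;
}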

JVisualVM HeapDump OQL rendering array inside an Object

I am trying to write a query such as this:
select {r: referrers(f), count:count(referrers(f))}
from com.a.b.myClass f
However, the output doesn't show the actual objects:
{
count = 3.0,
r = [object Object]
}
Removing the Javascript Object notation once again shows referrers normally, but they are no longer compartmentalized. Is there a way to format it inside the Object notation?
So I see that you asked this question a year ago, and I don't know if you still need the answer, but since I was searching around for something similar, I can answer it. The problem is that referrers(f) returns an enumeration, so it doesn't translate well when you try to put it into your hashmap. I was doing a similar type of analysis, trying to find unique char arrays (counting the unique combinations of char arrays up to the first 50 characters). What I came up with was this:
var counts = {};
filter(
map(
unique(
map(
filter(heap.objects('char[]'), "it.length > 50"), // filter out strings less than 50 chars in length
function(charArray) { // chop the string at 50 chars and then count the unique combos
var subs = charArray.toString().substr(0,50);
if (! counts[subs]) {
counts[subs] = 1;
} else {
counts[subs] = counts[subs] + 1;
}
return subs;
}
) // map
) // unique
, function(subs) { // map the strings into an array that has the string and the counts of that string
return { string: subs, count: counts[subs] };
}) // map
, "it.count > 5000"); // filter out strings that have counts < 5000
This essentially shows how to take an enumeration (heap.objects('char[]') in this case) and filter and map it so that you can compute statistics on it. Hope this helps someone.
