I have a list of session expiration times and the session timeout value, so I can get each session's start time by subtracting the timeout from the expiration time.
I know how to check whether two date ranges overlap:
public boolean isOverlapped(LocalDateTime start1, LocalDateTime end1,
                            LocalDateTime start2, LocalDateTime end2) {
    return (start1.isBefore(end2) || start1.equals(end2))
        && (start2.isBefore(end1) || start2.equals(end1));
}
but I have no idea how to do this for a whole list of sessions.
As a result, I want to get the longest chain of overlapping (concurrent) sessions.
Appreciate any help!
First off, make a new class that represents these sessions (if you don't have one already):
class Session {
    private LocalDateTime start;
    private LocalDateTime end;

    public boolean isOverlapped(Session other) {
        return (start.isBefore(other.end) || start.equals(other.end))
            && (end.isAfter(other.start) || end.equals(other.start));
    }
    ...
}
Your input list will have to be a list of Sessions.
Next, here is an algorithm that does what you asked for. It takes a list and, for each element, checks whether it overlaps with any other element in the list (except itself). If so, it puts that element in the result list:
public static List<Session> filter(List<Session> in) {
    List<Session> result = new ArrayList<>();
    for (Session current : in) {
        for (Session other : in) {
            if (current != other && current.isOverlapped(other)) {
                result.add(current);
                break;
            }
        }
    }
    return result;
}
Here is also an example program: Ideone
The result will be a list containing sessions that were concurrent with any other session.
This is a rather classic problem. You didn't specify whether you want the longest chain in terms of time or in terms of number of intervals, but both work the same way.
First sort all your sessions by start time. Then the logic would be the following:
current_chain = []
best_chain = []
for session in sessions_sorted_by_start:
    if session doesn't overlap with any session in current_chain:  [1]
        update best_chain if current_chain is better               [2]
        current_chain = []
    current_chain.insert(session)                                  [3]
update best_chain if current_chain is better                       [2]
The idea here is the following: we maintain a current chain. If a new session overlaps with any session in the chain, we just add it to the chain. If it doesn't overlap with any session in the current chain, then its start is to the right of the end of the furthest-reaching session in the current chain, so no other remaining session will overlap with anything in the current chain either (since they are sorted by start date). That means the current chain is as long as it will ever get, so we check whether it is better than the best chain so far ([2]), by whichever criterion (time or number of sessions), and reset it.
Now, to make it linear in time, it would be cool to do the overlap check of a session against a chain at [1] in constant time. This is easily done if for the chain you always maintain its furthest-reaching session. When you insert a new session at [3], if its end extends beyond the end of the current furthest session, update the furthest session, otherwise don't. This way, at [1] you only need to check the overlap with the furthest session instead of checking all of them, which makes that particular check constant time and the whole algorithm linear (not counting the initial sorting, which is of course O(n log n)).
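For reference, here is a minimal Java sketch of this sweep. It assumes the Session class from the other answer, extended with getStart()/getEnd() accessors (not shown there), and it uses "more sessions" as the "better" test at [2]; swap that comparison for total time if that's your criterion:
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public static List<Session> longestChain(List<Session> sessions) {
    List<Session> sorted = new ArrayList<>(sessions);
    sorted.sort(Comparator.comparing(Session::getStart));

    List<Session> best = new ArrayList<>();
    List<Session> current = new ArrayList<>();
    LocalDateTime furthestEnd = null; // end of the furthest-reaching session in current

    for (Session s : sorted) {
        // [1] constant-time overlap check against the whole chain
        if (furthestEnd != null && s.getStart().isAfter(furthestEnd)) {
            if (current.size() > best.size()) { // [2]
                best = new ArrayList<>(current);
            }
            current.clear();
            furthestEnd = null;
        }
        current.add(s); // [3]
        if (furthestEnd == null || s.getEnd().isAfter(furthestEnd)) {
            furthestEnd = s.getEnd();
        }
    }
    if (current.size() > best.size()) { // [2] once more for the final chain
        best = current;
    }
    return best;
}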
I'm using Ignite 2.13 in a Windows environment.
I have exclusive use of a 110-node compute grid. I want to complete a ComputeTask that maps to 100 ComputeJobs. The jobs take a variable amount of time to complete. As the compute progresses, many of the nodes become idle, waiting for the remaining jobs to complete. Midway through the compute I'd like to find all the idle nodes and then kill some subset of them, leaving a few in case one of the remaining jobs randomly fails and needs to fail over.
In the ComputeTask, the "result" method is called as jobs complete. From the "result" method I'm trying to find all the grid nodes with an active job. Any node that does not have an active job is assumed idle and is potentially eligible to be killed off.
I have two methods for finding the nodes with an active job: getActiveNodesOld and getActiveNodesLocal. They return drastically different answers.
// This version uses this node's view of the whole cluster.
// This status lags behind reality by 2 seconds.
public static Collection<UUID> getActiveNodesOld(Ignite grid)
{
    Set<UUID> retval = new LinkedHashSet<>();
    Collection<ClusterNode> nodes = grid.cluster().nodes();
    for (ClusterNode node : nodes)
    {
        int activeJobs = node.metrics().getCurrentActiveJobs();
        if (activeJobs > 0)
        {
            retval.add(node.id());
        }
    }
    return retval;
}
// This version asks each node about its own status. The results are supposed
// to be more accurate.
public static Collection<UUID> getActiveNodesLocal(Ignite grid)
{
    Set<UUID> retval = new LinkedHashSet<>();
    IgniteCompute compute = grid.compute();
    Collection<IgniteBiTuple<UUID, Integer>> metricsMap =
        compute.broadcast(new ActiveJobCallable());
    for (Map.Entry<UUID, Integer> entry : metricsMap)
    {
        UUID id = entry.getKey();
        int activeJobs = entry.getValue();
        if (activeJobs > 0)
        {
            retval.add(id);
        }
        if (activeJobs > 1)
        {
            logger.info("Warning: Node:" + id + " has " + activeJobs + " active jobs. ");
        }
    }
    return retval;
}
private static class ActiveJobCallable implements IgniteCallable<IgniteBiTuple<UUID, Integer>>
{
    @IgniteInstanceResource
    private transient Ignite ignite;

    public ActiveJobCallable()
    {
    }

    @Override
    public IgniteBiTuple<UUID, Integer> call()
    {
        ClusterNode clusterNode = ignite.cluster().localNode();
        ClusterMetrics nodeMetrics = clusterNode.metrics();
        return new IgniteBiTuple<>(clusterNode.id(), nodeMetrics.getCurrentActiveJobs());
    }
}
These two methods return very different results with regard to which nodes are active. The first time through, getActiveNodesLocal reports that there are 0 active nodes. I can see how this is possible: perhaps a job is not considered complete until "result" returns, so I'll give it the benefit of the doubt. The other method, getActiveNodesOld, indicates there is a large number of idle nodes, even though it's being called for the first completed compute job, so the rest of the grid should still be working on compute jobs.
As the compute progresses, neither method produces the answers I'd expect. The nodes seem to start reporting 1 active job and then never report 0 active jobs, even after the job is complete. Do I need to be resetting the ClusterMetrics somehow?
The method getActiveNodesLocal broadcasts an IgniteCallable into the grid. Does that callable get counted in the results of getCurrentActiveJobs()? It doesn't seem to, but I thought it might explain the weird results I'm seeing: I never see the logger message I added that should be triggered if "activeJobs > 1".
Is there some other way I should accomplish my goal of finding nodes that are idling? Can I find, for example, which node each job is currently assigned to, and then determine which nodes don't have a job assignment? I don't see a method on the ComputeTask to determine the node<->job mapping, but maybe I'm just overlooking it.
I suppose I could have the nodes send a grid message when they start and when they complete a job, and track which jobs are active that way, but that feels like the wrong solution.
Skip ClusterMetrics entirely. Create an IgniteCache, put the node UUID in the cache when a job starts, and remove the UUID when the job completes.
Finding the nodes that are still in the cluster and have a job could be done like this:
public static Collection<UUID> getActiveNodesCache(Ignite grid)
{
    Set<UUID> retval = new TreeSet<>();
    IgniteCache<UUID, ComputeJob> workingOnMap = grid.cache(JOB_CACHE);
    workingOnMap.query(new ScanQuery<>(null)).forEach(entry -> retval.add((UUID) entry.getKey()));

    // We count on the nodes to add and remove themselves from the cache when
    // they start/complete jobs. Nodes that crashed aren't removed from this
    // cache - they just die. So retval could contain nodes that are gone at
    // this point.
    Set<UUID> inGrid = grid.cluster().nodes().stream().map(ClusterNode::id).collect(Collectors.toSet());
    Set<UUID> notInGrid = new LinkedHashSet<>(retval);
    notInGrid.removeAll(inGrid);
    retval.retainAll(inGrid); // remove any nodes that are no longer in the cluster
    logger.log(Level.INFO,
        "Found {0} nodes in grid. {1} of those are active. {2} nodes were active but left grid.",
        new Object[]{inGrid.size(), retval.size(), notInGrid.size()});
    return retval;
}
The ComputeJobs can create the cache like this:
IgniteCache<UUID, ComputeJob> workingOnMap = grid.getOrCreateCache(getInProgressConfig());
Because you want to kill the nodes, be sure the cache is REPLICATED across the grid so that dead nodes don't take portions of the cache with them.
public static CacheConfiguration<UUID, ComputeJob> getInProgressConfig()
{
    CacheConfiguration<UUID, ComputeJob> config = new CacheConfiguration<>();
    config.setName(JOB_CACHE);
    config.setCacheMode(CacheMode.REPLICATED);
    config.setAtomicityMode(CacheAtomicityMode.ATOMIC);
    config.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    return config;
}
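For completeness, here is a rough sketch of a job that keeps that cache up to date. doActualWork is a hypothetical placeholder for your real job body, and I'm assuming the getInProgressConfig() above is accessible from the job class; the rest is the standard Ignite API:
import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.compute.ComputeJob;
import org.apache.ignite.resources.IgniteInstanceResource;

public class TrackedJob implements ComputeJob
{
    @IgniteInstanceResource
    private transient Ignite ignite;

    @Override
    public Object execute()
    {
        UUID myNodeId = ignite.cluster().localNode().id();
        IgniteCache<UUID, ComputeJob> workingOnMap = ignite.getOrCreateCache(getInProgressConfig());
        workingOnMap.put(myNodeId, this);
        try
        {
            return doActualWork(); // hypothetical placeholder for the real job body
        }
        finally
        {
            // Remove ourselves even on failure, so this node reads as idle again.
            workingOnMap.remove(myNodeId);
        }
    }

    @Override
    public void cancel()
    {
        // No-op for this sketch.
    }

    private Object doActualWork()
    {
        return null; // replace with the actual computation
    }
}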
I am working on a Spigot 1.8.9 plugin and am trying to add a feature where, when a staff member right-clicks an item, it teleports them to the next player who isn't vanished and isn't themselves; if there is no such player, it should return null.
On click, I attempted to add all possible users to a list using
public static List<User> getPossibleUsers(User user) {
    List<User> result = new ArrayList<>();
    for (User target : users)
        if (!target.isVanished() && !user.getUUID().equals(target.getUUID()))
            result.add(target);
    return result;
}
The staff member is also assigned an int called nextPlayer, which is set to 0 when they log in. Then when they click, I add one to the int so that next time it gets the next user.
private User getNextPlayer(User user) {
    int next = user.nextPlayer;
    List<User> users = getPossibleUsers(user);
    if (users.size() == 0)
        return null;
    int current = 0;
    for (User target : users) {
        if (current == next) {
            return target;
        }
        current++;
    }
    user.nextPlayer = next;
}
The problem is that I don't know how to write the getNextPlayer method correctly and efficiently. I would also like to make it so that once it hits the last player, it loops back to the first.
I'd suggest thinking about your problem entirely differently if you want it to be efficient, but efficiency really isn't a concern in this situation, so I'm opting not to prematurely optimize and will instead work with the code you already have.
public static List<User> getPossibleUsers(User user) {
    List<User> result = new ArrayList<>();
    for (User target : users)
        if (!target.isVanished() && !user.getUUID().equals(target.getUUID()))
            result.add(target);
    return result;
}
This currently returns the Users in the same order as they are defined in users.
This had better have a natural sort order; otherwise you are going to have issues when people join or leave the server, as that will change people's ordering in the list.
Now let's get back to first principles.
int next = user.nextPlayer;
Looks like you are storing on the 'user' the index of how far through the list you have already gone.
Once you have this, you can access that index directly from the list.
https://docs.oracle.com/javase/8/docs/api/java/util/List.html#get-int-
E get(int index)
So, doing users.get(next++); is all you need to 'fix' the code you have above: it gets the user at that position and then increments next (assuming the ordering is consistent and hasn't changed). However, it may throw an exception if the index is out of range of the list, so we wrap it in
if (next < users.size()) {
    return users.get(next++);
} else {
    return null;
}
This changes it to return null where it would otherwise throw an exception.
BUT all of this still has a fatal flaw: if the list is mutated between calls, you could be skipping users or changing the order around.
A far better solution is to instead cache the visited users, as well as the last visited user.
If the users are ordered, and you store the last visited user, instead of the index, you are storing data that is much more resilient to change, and more closely matches the behavior you want.
To more closely match your needs, you are asking to:
Generate a predictable, ordered list of users that excludes the admin and anyone who is vanished, so the admin can predict where they are going.
Rotate through this list by right-clicking with a tool (note this is async, so all the state needs to be saved).
Ensure that every user is visited before the sequence repeats.
public class TeleportTooldata {
    private ListIterator<UUID> cursor;
    private List<UUID> cachedOrder;

    public TeleportTooldata(List<UUID> applicableUsers) {
        cachedOrder = new ArrayList<>(applicableUsers);
        cursor = cachedOrder.listIterator();
    }

    @Nullable
    public UUID next() {
        return cursor.hasNext() ? cursor.next() : null;
    }

    public void update(List<UUID> applicableUsers) {
        // Append only the users we have not seen yet, then rebuild the cursor
        // at its old position (appending directly would invalidate the iterator).
        int pos = cursor.nextIndex();
        List<UUID> newcomers = new ArrayList<>(applicableUsers);
        newcomers.removeAll(cachedOrder);
        cachedOrder.addAll(newcomers);
        cursor = cachedOrder.listIterator(pos);
    }
}
public class TeleportToolUtil {
    YourPluginUserRepo repo;
    Map<User, TeleportTooldata> storage; // this could be a cache: remove the entry when the staff member logs out, or after a timeout

    public List<UUID> getApplicableUsers(User staff) {
        return repo.getOnlineUsers().stream()
            .filter(target -> !target.isVanished())
            .filter(target -> !target.getUUID().equals(staff.getUUID()))
            .sorted(Comparator.comparing(User::getUUID)) // you can change the sort order
            .map(User::getUUID)
            .collect(Collectors.toList());
    }

    public void onToolUse(User user) {
        TeleportTooldata data = storage.computeIfAbsent(user, x -> new TeleportTooldata(getApplicableUsers(user)));
        UUID next = data.next();
        if (next == null) {
            data.update(getApplicableUsers(user));
            next = data.next();
            if (next == null) {
                // Truly out of users: start over with a completely fresh list.
                data = new TeleportTooldata(getApplicableUsers(user));
                storage.put(user, data);
                next = data.next();
            }
        }
        if (next != null) {
            user.teleportTo(next);
        }
    }
}
A few changes.
We are now caching the ordering, so you could conceptually also let the user go backwards through the list.
We are using ListIterator. ListIterator is an object that walks through a list and stores the current position for you, much like you were doing before, but without manual indexes.
We now have the possibility to update the data in case a player joins late or someone unvanishes; they will be put at the back of the list if they are not already in it.
When we run out of users, we attempt an update; if we are really out, we start again with a brand-new list. (Note this won't guarantee the same order every time: people will be 'properly' sorted on an update if they were previously appended, but it's close enough for this use case.)
However! We still need to be mindful of memory leaks. Using UUIDs rather than players or users means this class is very lightweight; we should be pretty safe from memory leaks in the list of UUIDs AS LONG AS the TeleportTooldata doesn't live too long.
You can replace the Map of TeleportTooldata with a cache (maybe from Guava?) to remove the data some time after the admin leaves the game.
If TeleportTooldata were expected to be long-lived, we would want to seriously consider removing UUIDs from the history.
Also, not handled in my example, is the possibility of users going offline after the order is cached.
To handle this, before teleporting the player, check whether the UUID is online; otherwise go to the next one and follow the same logic again, as sketched below.
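Something like this, where getOnlineUser is a hypothetical lookup on your repo that returns null for offline players:
// Pull from the cached order until we hit a UUID that is still online
// (and still not vanished); returns null when the order is exhausted.
private User nextOnlineUser(TeleportTooldata data) {
    UUID next = data.next();
    while (next != null) {
        User target = repo.getOnlineUser(next); // hypothetical repo lookup
        if (target != null && !target.isVanished()) {
            return target;
        }
        next = data.next();
    }
    return null;
}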
I need a Java utility method (for my application, which gets thousands of requests per second) with the following behavior.
Each request has an arrivaltime in the format DD-MM-YYYY-HH:MM:SS and a bucketNumber (1-100).
I want that, if the same arrivaltime comes in for the same bucketNumber, the method increments the request's arrivaltime by 1 millisecond.
For example:
If for bucketNumber=1 the arrival time of the 1st, 2nd, and 3rd requests is 01-01-2016-10:00:00 (actually, to the millisecond, 01-01-2016-10:00:00:000), and a 4th request arrives with 01-01-2016-10:00:01, then:
for the 2nd request the utility method should return 01-01-2016-10:00:00 (but actually 01-01-2016-10:00:00:001),
for the 3rd request it should return 01-01-2016-10:00:00 (but actually 01-01-2016-10:00:00:002),
and for the 4th request it should return 01-01-2016-10:00:01 unchanged, without performing any operation.
I don't want to keep a huge cache to perform this action (and if I use a set, I have to keep removing the redundant data as well).
// The signature should be like this:
Date getIncrementedArrivalTimeIfSame(Date arrivaltime, int bucketNumber) {
    // return incremented if equal, else return the original arrivaltime
}
Should I use a global map with bucketNumber as key and a set of arrival times as value? Please help me implement this. The method will be invoked from within a synchronized block, in a thread-safe way.
Below is my solution.
I finally used a map:
static Map<Integer, Date> arrivalTimeMap = new HashMap<>();

static Date getIncrementedArrivalTimeIfEqual(Date arrivaltime,
        int bucketNumber) {
    Date lastArrivalTime = arrivalTimeMap.get(bucketNumber);
    // If the last arrival time for this bucket is not strictly earlier,
    // bump it by one millisecond and use that instead.
    if (lastArrivalTime != null && !lastArrivalTime.before(arrivaltime)) {
        arrivaltime = incrementDateByMilliSeconds(lastArrivalTime, 1);
    }
    arrivalTimeMap.put(bucketNumber, arrivaltime);
    return arrivaltime;
}
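For what it's worth, here is a quick sanity check against the example from the question. The incrementDateByMilliSeconds helper isn't shown above, so I'm assuming it simply adds the given number of milliseconds:
// Assumed helper: add n milliseconds to a Date.
static Date incrementDateByMilliSeconds(Date date, int millis) {
    return new Date(date.getTime() + millis);
}

public static void main(String[] args) {
    Date t0 = new Date(0);    // stands in for 01-01-2016-10:00:00.000
    Date t1 = new Date(1000); // one second later

    System.out.println(getIncrementedArrivalTimeIfEqual(t0, 1).getTime()); // 0
    System.out.println(getIncrementedArrivalTimeIfEqual(t0, 1).getTime()); // 1
    System.out.println(getIncrementedArrivalTimeIfEqual(t0, 1).getTime()); // 2
    System.out.println(getIncrementedArrivalTimeIfEqual(t1, 1).getTime()); // 1000
}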
I'm given a problem where the user is given an empty recipe book into which they can input and sort recipes.
I know that a book is sorted if it is empty or has one recipe, and with two recipes it is either ascending or descending. These cases can use binary search.
But when the user inputs a third recipe it could be either "cookies, donut, turkey" (which is sorted) or "cookies, donut, apples" (which is not sorted). If it's not sorted then I have to use linear search.
This is what I have so far
public void sortBook(int choice, boolean ascend) {
    RecipeBookComparator comparing = new RecipeBookComparator(choice, ascend);
    mList.sort(comparing);
}

public class RecipeBookComparator implements Comparator {
    private int mSortRBook;
    private boolean mAscend;

    public RecipeBookComparator(int choice, boolean ascend) {
        mSortRBook = choice;
        mAscend = ascend;
    }

    public int compare(Object o1, Object o2) {
        Recipe s1 = (Recipe) o1, s2 = (Recipe) o2;
        switch (mSortRBook) {
            case 1:
                if (mAscend == true) {
                    int compareName = s1.getName().compareTo(s2.getName());
                    if (compareName != 0) {
                        return compareName;
                    }
                } else {
                    int compareName = s1.getName().compareTo(s2.getName());
                    if (compareName != 0) {
                        return compareName * -1;
                    }
                } // ...more cases...
I know what I'm supposed to do, but I don't know how to approach it "code-wise"
To find out if a list is sorted, you would have to compare each element with its neighbors. If even one element out of thousands is out of order, a binary search can fail, so you would have to check the complete list. But going through the whole list to check whether it is sorted takes longer than looking for one element with linear search, so that makes no sense. Use linear search if you don't know for sure whether the list is sorted. That's it.
Your code says:
mList.sort(comparing);
I believe you've misunderstood what you've been asked to do - assuming that "use binary search if sorted otherwise use linear search", the title of your question, is what you are supposed to do. You are not supposed to sort them at all. This problem requires no knowledge of how to sort things in Java whatsoever.
What you need to understand is searching, not sorting. And how to check whether a sequence of inputs is already sorted.
Now, admittedly, technically you can check if a sequence is already sorted by actually sorting it and then checking if the resulting sequence is in the same order as what you started with. But I wouldn't recommend that.
What I would recommend, instead, is using a Comparator to compare each pair of adjacent elements (if any) in the sequence, to check if the sequence is monotonically increasing or monotonically decreasing.
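A minimal sketch of that check (the method name is mine, not part of any assignment API):
// Returns true if the list is already in the order implied by cmp.
public static <T> boolean isSorted(List<T> list, Comparator<? super T> cmp) {
    for (int i = 1; i < list.size(); i++) {
        if (cmp.compare(list.get(i - 1), list.get(i)) > 0) {
            return false;
        }
    }
    return true;
}
If it returns true, you can use Collections.binarySearch with that same comparator; otherwise fall back to a linear scan.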
My problem
Let's say I want to hold my messages in some sort of data structure for a longpolling application:
1. "dude"
2. "where"
3. "is"
4. "my"
5. "car"
Asking for messages from index [4,5] should return:
"my", "car".
Next let's assume that after a while I would like to purge old messages because they aren't useful anymore and I want to save memory. Let's say after time x, messages [1-3] become stale. I assume it would be most efficient to just do the deletion once every x seconds. My data structure should then contain:
4. "my"
5. "car"
My solution?
I was thinking of using a ConcurrentSkipListSet or ConcurrentSkipListMap. I was also thinking of deleting the old messages from inside a newSingleThreadScheduledExecutor. I would like to know how you would implement this (efficiently/thread-safe), or maybe there is a library for it?
The big concern, as I gather it, is how to let certain elements expire after a period. I had a similar requirement, and I created a message class that implements the Delayed interface. This class holds everything I need for a message and (through the Delayed interface) tells me when it has expired.
I used instances of this object within a concurrent collection; you could use a ConcurrentMap, since it will allow you to key those objects with an integer key.
I reaped the collection once every so often, removing items whose delay has passed. We test for expiration by using the getDelay method of the Delayed interface:
message.getDelay(TimeUnit.MILLISECONDS);
I used a normal thread that would sleep for a period then reap the expired items. In my requirements it wasn't important that the items be removed as soon as their delay had expired. It seems that you have a similar flexibility.
If you needed to remove items as soon as their delay expired, then instead of sleeping a set period in your reaping thread, you would sleep for the delay of the message that will expire first.
Here's my delayed message class:
import java.util.Date;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

class DelayedMessage implements Delayed {
    long endOfDelay;
    Date requestTime;
    String message;

    public DelayedMessage(String m, int delay) {
        requestTime = new Date();
        endOfDelay = System.currentTimeMillis() + delay;
        this.message = m;
    }

    public long getDelay(TimeUnit unit) {
        long delay = unit.convert(
            endOfDelay - System.currentTimeMillis(),
            TimeUnit.MILLISECONDS);
        return delay;
    }

    public int compareTo(Delayed o) {
        DelayedMessage that = (DelayedMessage) o;
        if (this.endOfDelay < that.endOfDelay) {
            return -1;
        }
        if (this.endOfDelay > that.endOfDelay) {
            return 1;
        }
        return this.requestTime.compareTo(that.requestTime);
    }

    @Override
    public String toString() {
        return message;
    }
}
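The reaping thread I mentioned could then be as simple as this. The map, the 5-second period, and the thread setup are illustrative, not prescriptive:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;

class MessageReaper {
    final ConcurrentMap<Integer, DelayedMessage> messages = new ConcurrentHashMap<>();

    void startReaping() {
        Thread reaper = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(5000); // reap every 5 seconds (illustrative)
                } catch (InterruptedException e) {
                    return;
                }
                // Drop every message whose delay has already passed.
                messages.values().removeIf(
                    m -> m.getDelay(TimeUnit.MILLISECONDS) <= 0);
            }
        });
        reaper.setDaemon(true);
        reaper.start();
    }
}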
I'm not sure if this is what you want, but it looks like you need a NavigableMap<K,V> to me.
import java.util.*;

public class NaviMap {
    public static void main(String[] args) {
        NavigableMap<Integer,String> nmap = new TreeMap<Integer,String>();
        nmap.put(1, "dude");
        nmap.put(2, "where");
        nmap.put(3, "is");
        nmap.put(4, "my");
        nmap.put(5, "car");

        System.out.println(nmap);
        // prints "{1=dude, 2=where, 3=is, 4=my, 5=car}"

        System.out.println(nmap.subMap(4, true, 5, true).values());
        // prints "[my, car]"       ^inclusive^

        nmap.subMap(1, true, 3, true).clear();
        System.out.println(nmap);
        // prints "{4=my, 5=car}"

        // wrap into synchronized SortedMap
        SortedMap<Integer,String> ssmap = Collections.synchronizedSortedMap(nmap);
        System.out.println(ssmap.subMap(4, 5));
        // prints "{4=my}"          ^exclusive upper bound!
        System.out.println(ssmap.subMap(4, 5+1));
        // prints "{4=my, 5=car}"   ^ugly but "works"
    }
}
Now, unfortunately there's no easy way to get a synchronized version of a NavigableMap<K,V> (Collections.synchronizedNavigableMap was only added later, in Java 8). A SortedMap does have subMap, but only an overload where the upper bound is strictly exclusive.
API links
SortedMap.subMap
NavigableMap.subMap
Collections.synchronizedSortedMap