I have a class Period that is represented by start and end dates, where end is after start. I need to write a function to check if periods overlap.
The straightforward approach is to check every period with every other period. Is there a way to introduce a data structure that will perform faster?
class Period {
    LocalDateTime start;
    LocalDateTime end;
}

boolean isOverlap(Set<Period> periods) {
    // TODO put the code here
}
isOverlap should return true when at least two of the periods overlap.
Checking every period against every other period has O(n²) time complexity. Instead, I'd sort them by start and end times and then iterate over the list. After sorting, a period can only overlap the period directly before or after it (it may also overlap several subsequent ones, but that's inconsequential, since you're looking for a single overlap to return true). So you can iterate over the sorted list and check each adjacent pair. The total cost of this algorithm is dominated by the sort, O(n log n):
boolean isOverlap(Set<Period> periods) {
    List<Period> sorted = periods.stream()
            .sorted(Comparator.comparing((Period p) -> p.start)
                    .thenComparing(p -> p.end))
            .collect(Collectors.toList());
    for (int i = 0; i < sorted.size() - 1; ++i) {
        if (sorted.get(i).end.compareTo(sorted.get(i + 1).start) > 0) {
            return true;
        }
    }
    return false;
}
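For a quick sanity check (assuming a Period(start, end) constructor, which the class above doesn't show):

Period a = new Period(LocalDateTime.of(2024, 1, 1, 9, 0), LocalDateTime.of(2024, 1, 1, 10, 0));
Period b = new Period(LocalDateTime.of(2024, 1, 1, 9, 30), LocalDateTime.of(2024, 1, 1, 11, 0));
Period c = new Period(LocalDateTime.of(2024, 1, 1, 11, 0), LocalDateTime.of(2024, 1, 1, 12, 0));

isOverlap(new HashSet<>(Arrays.asList(a, c)));    // false: a ends before c starts
isOverlap(new HashSet<>(Arrays.asList(a, b, c))); // true: a and b overlap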
for (int i = 1; i < list.size(); i++) {
    // do something
    // e.g. move a marker to a new position on the map
}
I want the above loop to complete all the iterations irrespective of the size of the list, and I also want the entire task to run for 1 minute (60 seconds).
I don't really know if this is what you want but I hope this helps.
import java.util.concurrent.TimeUnit;

for (int i = 1; i < list.size(); i++) {
    try {
        TimeUnit.SECONDS.sleep(1);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    // Execute the thing you want executed every second
}
As an explanation: on each pass through the for loop, the thread waits for one second before executing the code after TimeUnit.SECONDS.sleep(1).
If the list's size is 60, it would therefore take a minute for the loop to end.
Edit: It occurred to me that it's smarter to wrap the sleep call in a try-catch, as shown above.
You can, for example, use System.nanoTime() to measure the duration of your loop, and then use TimeUnit.NANOSECONDS.sleep(...) to make it wait for the rest of the time, like this:
long start = System.nanoTime();
long desiredDuration = 60L * 1_000_000_000L; // 60 seconds, in nanoseconds

// your loop goes here

long duration = System.nanoTime() - start;
if (duration < desiredDuration) {
    TimeUnit.NANOSECONDS.sleep(desiredDuration - duration); // declare or catch InterruptedException
}
Another solution is to compute the desired finish time first and then run the loop until that time is reached:
long finish = System.currentTimeMillis() + 60 * 1000; // 60 seconds from now
while (System.currentTimeMillis() < finish) {
    // statements;
    // statements;
}
If you are simply trying to keep the CPU spinning for this amount of time, the technique is known as busy waiting; it is not considered good practice in most cases, so I recommend using Thread.sleep(duration) for this purpose instead.
To spread N invocations uniformly across a minute, set the delay between invocations to 60/(N-1) seconds. The -1 is optional, but it makes the first and last invocations exactly 60 seconds apart (just as a ladder with N rungs has N-1 spaces between them).
Of course, using sleep() with the number calculated above is not only subject to round-off errors, but also drift, because you do stuff between the delays, and that stuff also takes time.
A more accurate solution works with the time at which each invocation should occur (defined by startTime + 60*i/(N-1)). Reorder and reformulate that formula and you can subtract the already elapsed time from the 'time that should have elapsed by the next invocation' to get the amount to sleep.
Of course 'elapsed time' should be calculated using System.nanoTime() and not System.currentTimeMillis() as the latter can jump when the clock changes or the computer resumes from stand-by.
For this example I changed 60 seconds to 6 seconds so you can more easily see what's going on when you run it.
public static void main(String... args) throws Exception {
    int duration = 6; // seconds
    List<Double> list = IntStream.range(0, 10)
            .mapToDouble(i -> ThreadLocalRandom.current().nextDouble())
            .boxed()
            .collect(Collectors.toList());
    long startTime = System.nanoTime();
    long elapsed = 0;
    for (int i = 0; i < list.size(); i++) { // Bug fixed: start at 0, not at 1.
        if (i > 0) {
            long nextInvocation = TimeUnit.NANOSECONDS.convert(duration, TimeUnit.SECONDS) * i / (list.size() - 1);
            long sleepAmount = nextInvocation - elapsed;
            TimeUnit.NANOSECONDS.sleep(sleepAmount);
        }
        elapsed = System.nanoTime() - startTime;
        doSomething(elapsed, list.get(i));
    }
}

private static void doSomething(long elapsedNanos, Double d) {
    System.out.println(elapsedNanos / 1.0e9f + "\t" + d);
}
Of course, when the task you perform per list element takes longer than 60/(N-1) seconds, you get contention and the 'elapsed time' deadlines are always exceeded; with this algorithm the total time then simply ends up longer than a minute. However, if some earlier invocations exceed the deadline and later invocations take much less time than 60/(N-1), this algorithm will show 'catch-up' behavior. This can be partially solved by sleeping at least a minimum amount even when sleepAmount is smaller, as sketched below.
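For instance, a minimal sketch of that mitigation (minSleep is an assumed tuning value, not part of the original code):

// Never sleep less than an assumed floor of 10 ms, even when behind schedule;
// this trades an exact total duration for smoother spacing between invocations.
long minSleep = TimeUnit.MILLISECONDS.toNanos(10);
TimeUnit.NANOSECONDS.sleep(Math.max(sleepAmount, minSleep));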
Check this out:
long start = System.currentTimeMillis();
long end = start + 60 * 1000; // 60 seconds * 1000 ms/sec
int i = 0;
while (System.currentTimeMillis() < end) {
    // do something, iterate your list
    i++;
    if (i == list.size()) { // the iteration over the list is complete
        // if time has not yet expired, sleep for the rest of the time
        Thread.sleep(end - System.currentTimeMillis()); // declare or catch InterruptedException
        break; // avoid running past the end of the list
    }
}
Don't forget to check the size of the list.
I was in a job interview and got this question: "Write a function that takes 2 strings s, t that represent 2 times of day (in the format HH:MM:SS). It is known that s is earlier than t.
The function needs to count how many times between the two given times contain at most 2 distinct digits."
For example: s = 10:59:00, t = 11:00:59.
Answer: 4, namely 11:00:00, 11:00:01, 11:00:10, 11:00:11.
I tried to do it with while loops and got really stuck. Unfortunately, I didn't pass the interview.
How can I go over all the times (every second is a new time) between 2 given times in Java, as explained above? Thanks a lot.
Java 8 allows you to use LocalTime.
LocalTime time1 = LocalTime.parse(t1);
LocalTime time2 = LocalTime.parse(t2);
The logic would require you to count the number of distinct digits in a LocalTime, something like:
boolean isWinner(LocalTime current) {
    String onlyDigits = DateTimeFormatter.ofPattern("HHmmss").format(current);
    Set<Character> set = new HashSet<>();
    for (int index = 0; index < onlyDigits.length(); index++) {
        set.add(onlyDigits.charAt(index));
    }
    return set.size() <= 2;
}
You can loop between the times like this
int count = 0;
for (LocalTime current = time1; current.isBefore(time2); current = current.plusSeconds(1)) {
    if (isWinner(current)) {
        count++;
    }
}
That's it.
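Putting the fragments together with the question's sample input (a hypothetical wiring, not part of the original answer) reports 4, matching the example:

// reuses the isWinner method and counting loop shown above
LocalTime time1 = LocalTime.parse("10:59:00");
LocalTime time2 = LocalTime.parse("11:00:59");

int count = 0;
for (LocalTime current = time1; current.isBefore(time2); current = current.plusSeconds(1)) {
    if (isWinner(current)) {
        count++;
    }
}
System.out.println(count); // prints 4: 11:00:00, 11:00:01, 11:00:10, 11:00:11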
The question is really more geared towards getting a feel for how you'd approach the problem, and whether you know about the LocalTime API, etc.
I have a List of TrackDay objects for a runner going around a track field on different days. Each pair of start/finish times signals a single lap run by the runner. We are guaranteed that there is a matching start/finish date (in the order in which they appear in the appropriate lists):
class TrackDay {
    List<DateTime> startTimes;
    List<DateTime> finishTimes;
}
I would like to find the top N days (let's say 3) on which the runner ran the most. This translates to finding the N largest total start/finish durations among the TrackDay objects. The naive way would be to do the following:
for (TrackDay td : listOfTrackDays) {
    // loop through the start/finish lists and find the finish - start time for each pair
    // add the delta times (finish - start) up for each pair of start/finish entries
    // create a map to store the total time for each TrackDay
    // sort the map and get the first N entries
}
Is there a better, more clean/efficient way to do the above?
The problem you're trying to solve is the well-known selection problem, in particular quickselect. While sorting generally works fine, for large collections it is worth considering this approach, since it gives you expected linear time instead of O(n log n).
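A rough sketch of how quickselect could look here (the helper names are mine, and like the answer below I assume the start/finish lists hold java.time.LocalDateTime and support random access): compute each day's total duration once, then partition so that the 'limit' largest totals end up at the front, in expected linear time.

import java.time.Duration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

static List<TrackDay> topN(List<TrackDay> days, int limit) {
    TrackDay[] a = days.toArray(new TrackDay[0]);
    // Precompute each TrackDay's total lap time once (assumes matching start/finish lists).
    Map<TrackDay, Long> totals = new IdentityHashMap<>();
    for (TrackDay td : a) {
        long nanos = 0;
        for (int i = 0; i < td.startTimes.size(); i++) {
            nanos += Duration.between(td.startTimes.get(i), td.finishTimes.get(i)).toNanos();
        }
        totals.put(td, nanos);
    }
    int k = Math.min(limit, a.length);
    int lo = 0, hi = a.length - 1;
    Random rnd = new Random();
    while (lo < hi) {
        int p = partition(a, lo, hi, lo + rnd.nextInt(hi - lo + 1), totals);
        if (p == k - 1) break;     // the k largest totals now occupy a[0..k-1]
        if (p < k - 1) lo = p + 1; // not enough large elements on the left yet
        else hi = p - 1;           // too many; keep partitioning the left part
    }
    return new ArrayList<>(Arrays.asList(a).subList(0, k));
}

// Moves every element with a larger total than the pivot to its left and returns the pivot's final index.
static int partition(TrackDay[] a, int lo, int hi, int pivotIndex, Map<TrackDay, Long> totals) {
    long pivot = totals.get(a[pivotIndex]);
    swap(a, pivotIndex, hi);
    int store = lo;
    for (int i = lo; i < hi; i++) {
        if (totals.get(a[i]) > pivot) {
            swap(a, i, store++);
        }
    }
    swap(a, store, hi);
    return store;
}

static void swap(TrackDay[] a, int i, int j) {
    TrackDay tmp = a[i];
    a[i] = a[j];
    a[j] = tmp;
}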
This solution should be linear time. I have assumed that startTimes and finishTimes support random access. I don't know what API your DateTime is part of, so I have used java.time.LocalDateTime.
public List<TrackDay> findTop(List<TrackDay> trackDays, int limit) {
    limit = Math.min(limit, trackDays.size());
    List<Duration> durations = new ArrayList<>(Collections.nCopies(limit, Duration.ZERO));
    List<TrackDay> result = new ArrayList<>(Collections.nCopies(limit, null));
    int lastIndex = limit - 1;
    for (TrackDay trackDay : trackDays) {
        Duration duration = Duration.ZERO;
        for (int i = 0, n = trackDay.startTimes.size(); i < n; i++) {
            duration = duration.plus(Duration.between(trackDay.startTimes.get(i), trackDay.finishTimes.get(i)));
        }
        Integer destinationIndex = null;
        for (int i = lastIndex; i >= 0; i--) {
            if (durations.get(i).compareTo(duration) >= 0) {
                break;
            }
            destinationIndex = i;
        }
        if (destinationIndex != null) {
            durations.remove(lastIndex);
            result.remove(lastIndex);
            durations.add(destinationIndex, duration);
            result.add(destinationIndex, trackDay);
        }
    }
    return result;
}
I want to be able to ask an object 'how many events have occurred in the last x seconds', where x is an argument.
E.g. how many events have occurred in the last 120 seconds.
My approach below is linear in the number of events; I was wondering what the most efficient way (in space and time) to achieve this requirement would be:
public class TimeSinceStat {

    private List<DateTime> eventTimes = new ArrayList<>();

    public void apply() {
        eventTimes.add(DateTime.now());
    }

    public int eventsSince(int seconds) {
        DateTime startTime = DateTime.now().minus(Seconds.seconds(seconds));
        for (int i = 0; i < eventTimes.size(); i++) {
            DateTime dateTime = eventTimes.get(i);
            if (dateTime.compareTo(startTime) > 0) {
                return eventTimes.subList(i, eventTimes.size()).size();
            }
        }
        return 0;
    }
}
(PS: I'm using Joda-Time for the date/time representation.)
Edit:
The key of this algorithm is to find all events that have happened in the last x seconds; the exact start time (e.g. now - 30 seconds) may or may not be in the collection.
Store the DateTimes in a TreeSet and then use tailSet to get the most recent events. This saves you from having to find the starting point by iteration (which is O(n)); instead you find it by searching (which is O(log n)).
TreeSet<DateTime> eventTimes;

public int eventsSince(int seconds) {
    return eventTimes.tailSet(DateTime.now().minus(Seconds.seconds(seconds)), true).size();
}
Of course, you could also binary search on your sorted list, but this does the work for you.
Edit
If it's a concern that multiple events could occur at the same DateTime, you can take the exact same approach with a SortedMultiset from Guava:
TreeMultiset<DateTime> eventTimes;

public int eventsSince(int seconds) {
    return eventTimes.tailMultiset(
            DateTime.now().minus(Seconds.seconds(seconds)),
            BoundType.CLOSED
    ).size();
}
Edit x2
Here's a much more efficient approach that leverages the fact that you only log events that happened after all other events. With each event, store the number of events up to that date:
NavigableMap<DateTime, Integer> eventCounts = initEventMap();

public NavigableMap<DateTime, Integer> initEventMap() {
    TreeMap<DateTime, Integer> map = new TreeMap<>();
    // prime the map to make subsequent operations much cleaner
    map.put(DateTime.now().minus(Seconds.seconds(1)), 0);
    return map;
}

private int totalCount() {
    // you can handle the edge condition here
    return eventCounts.lastEntry().getValue();
}

public void logEvent() {
    eventCounts.put(DateTime.now(), totalCount() + 1);
}
Then getting the count since a date is super efficient: just take the total and subtract the count of events that occurred before that date.
public int eventsSince(int seconds) {
    DateTime startTime = DateTime.now().minus(Seconds.seconds(seconds));
    return totalCount() - eventCounts.lowerEntry(startTime).getValue();
}
This eliminates the inefficient iteration: it's a constant-time lookup plus an O(log n) lookup.
If you were implementing a data structure from scratch, and the data are not in sorted order, you'd want to construct a balanced order statistic tree (also see code here). This is just a regular balanced tree with the size of the tree rooted at each node maintained in the node itself.
The size fields enable efficient calculation of the "rank" of any key in the tree. You can do the desired range query by making two O(log n) probes into the tree for the ranks of the minimum and maximum range values, finally taking their difference.
The proposed tree set and map tail operations are great, except that the tail views take time to construct even though all you need is their size. The asymptotic complexity is the same as the OST's, but the OST avoids this overhead completely. The difference could be meaningful if performance is very critical.
Of course I'd definitely use the standard library solution first and consider the OST only if the speed turned out to be inadequate.
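For illustration, a minimal sketch of the size-augmented node and the rank query (balancing omitted for brevity; the names are mine, not from any library):

// Size-augmented BST node: each node stores how many nodes are in its subtree.
class Node {
    DateTime key;
    Node left, right;
    int size = 1;

    static int size(Node n) {
        return n == null ? 0 : n.size;
    }

    // Number of keys strictly less than 'key' in the subtree rooted at 'n'; O(log n) when the tree is balanced.
    static int rank(Node n, DateTime key) {
        if (n == null) {
            return 0;
        }
        if (key.compareTo(n.key) <= 0) {
            return rank(n.left, key);
        }
        return size(n.left) + 1 + rank(n.right, key);
    }

    // Events at or after 'startTime': total size minus the number of strictly earlier events.
    static int eventsSince(Node root, DateTime startTime) {
        return size(root) - rank(root, startTime);
    }
}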
Since DateTime already implements the Comparable interface, I would recommend storing the data in a TreeMap instead; you could then use TreeMap#tailMap to get a view of the DateTimes that occur in the desired time range.
Based on your code:
public class TimeSinceStat {

    // just in case two or more events start at the "same time"
    private NavigableMap<DateTime, Integer> eventTimes = new TreeMap<>();
    // if this class needs to be used in multiple threads, use ConcurrentSkipListMap instead of TreeMap

    public void apply() {
        DateTime dateTime = DateTime.now();
        Integer times = eventTimes.containsKey(dateTime) ? eventTimes.get(dateTime) + 1 : 1;
        eventTimes.put(dateTime, times);
    }

    public int eventsSince(int seconds) {
        DateTime startTime = DateTime.now().minus(Seconds.seconds(seconds));
        NavigableMap<DateTime, Integer> eventsInRange = eventTimes.tailMap(startTime, true);
        int counter = 0;
        for (Integer time : eventsInRange.values()) {
            counter += time;
        }
        return counter;
    }
}
Assuming the list is sorted, you could do a binary search. Java Collections already provides Collections.binarySearch, and DateTime implements Comparable (according to the Joda-Time JavaDoc). binarySearch returns the index of the value you want if it exists in the list; otherwise it returns (-(insertion point) - 1), where the insertion point is the index of the first value greater than the one you searched for. So, all you need to do in your eventsSince method is:
// find where startTime would sit in the sorted list
int index = Collections.binarySearch(eventTimes, startTime);
if (index < 0) {
    index = -(index + 1); // not found: index of the first event after startTime
} else {
    // found: skip duplicates so we end up just past the last event at startTime
    while (index < eventTimes.size() && eventTimes.get(index).equals(startTime)) {
        index++;
    }
}
// return the number of events strictly after startTime
return eventTimes.size() - index;
This should be a faster way to do what you want.
I wrote a little program that tries to find a connection between two equal-length English words. Word A transforms into Word B by changing one letter at a time; each newly created word has to be an English word.
For example:
Word A = BANG
Word B = DUST
Result:
BANG -> BUNG -> BUNT -> DUNT -> DUST
My process:
1. Load an English word list (109,582 words) into a Map<Integer, List<String>> _wordMap = new HashMap<>(), keyed by word length.
2. The user puts in 2 words.
3. createGraph creates a graph.
4. Calculate the shortest path between those 2 nodes.
5. Print out the result.
Everything works perfectly fine, but I am not satisfied with the time it took in step 3.
See:
Completely loaded 109582 words!
CreateMap took: 30 milsecs
CreateGraph took: 17417 milsecs
(HOISE : HORSE)
(HOISE : POISE)
(POISE : PRISE)
(ARISE : PRISE)
(ANISE : ARISE)
(ANILE : ANISE)
(ANILE : ANKLE)
The wholething took: 17866 milsecs
I am not satisfied with the time it takes to create the graph in step 3. Here's my code for it (I am using JGraphT for the graph):
private List<String> _wordList = new ArrayList<>(); // list of all 109582 English words
private Map<Integer, List<String>> _wordMap = new HashMap<>(); // map grouping all the words by their length()
private UndirectedGraph<String, DefaultEdge> _wordGraph =
        new SimpleGraph<String, DefaultEdge>(DefaultEdge.class); // graph used to calculate the shortest path from one node to the other

private void createGraph(int wordLength) {
    long before = System.currentTimeMillis();
    List<String> words = _wordMap.get(wordLength);
    for (String word : words) {
        _wordGraph.addVertex(word); // adds a node
        for (String wordToTest : _wordList) {
            if (isSimilar(word, wordToTest)) {
                _wordGraph.addVertex(wordToTest); // adds another node
                _wordGraph.addEdge(word, wordToTest); // connects 2 nodes if they are one letter off from each other
            }
        }
    }
    System.out.println("CreateGraph took: " + (System.currentTimeMillis() - before) + " milsecs");
}
private boolean isSimilar(String wordA, String wordB) {
    if (wordA.length() != wordB.length()) {
        return false;
    }
    if (wordA.equalsIgnoreCase(wordB)) {
        return false;
    }
    int matchingLetters = 0;
    for (int i = 0; i < wordA.length(); i++) {
        if (wordA.charAt(i) == wordB.charAt(i)) {
            matchingLetters++;
        }
    }
    return matchingLetters == wordA.length() - 1;
}
My question:
How can I improve my algorithm in order to speed up the process?
For any redditors that are reading this, yes I created this after seeing the thread from /r/askreddit yesterday.
Here's a starting thought:
Create a Map<String, List<String>> (or a Multimap<String, String> if you're using Guava), and for each word, "blank out" one letter at a time, and add the original word to the list for that blanked-out word. So you'd end up with:
.ORSE => NORSE, HORSE, GORSE (etc)
H.RSE => HORSE
HO.SE => HORSE, HOUSE (etc)
At that point, given a word, you can very easily find all the words it's similar to - just go through the same process again, but instead of adding to the map, just fetch all the values for each "blanked out" version.
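A minimal sketch of that idea (the method names are illustrative, and it assumes the words are already lowercased and of equal length):

static Map<String, List<String>> buildBuckets(List<String> words) {
    Map<String, List<String>> buckets = new HashMap<>();
    for (String word : words) {
        for (int i = 0; i < word.length(); i++) {
            // blank out the i-th letter, e.g. HORSE -> .ORSE, H.RSE, HO.SE, ...
            String key = word.substring(0, i) + '.' + word.substring(i + 1);
            buckets.computeIfAbsent(key, k -> new ArrayList<>()).add(word);
        }
    }
    return buckets;
}

static Set<String> neighboursOf(String word, Map<String, List<String>> buckets) {
    Set<String> neighbours = new HashSet<>();
    for (int i = 0; i < word.length(); i++) {
        String key = word.substring(0, i) + '.' + word.substring(i + 1);
        neighbours.addAll(buckets.getOrDefault(key, Collections.emptyList()));
    }
    neighbours.remove(word); // a word is not its own neighbour
    return neighbours;
}

Building the buckets touches each word once per letter, so for words of length L it is O(L * n); looking up a word's neighbours is then L map lookups instead of a scan over all 109,582 words.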
You probably need to run it through a profiler to see where most of the time is taken, especially since you are using library classes - otherwise you might put in a lot of effort but see no significant improvement.
You could lowercase all the words before you start, to avoid the equalsIgnoreCase() on every comparison. In fact, this is an inconsistency in your code - you use equalsIgnoreCase() initially, but then compare chars in a case-sensitive way: if (wordA.charAt(i) == wordB.charAt(i)). It might be worth eliminating the equalsIgnoreCase() check entirely, since this is doing essentially the same thing as the following charAt loop.
You could change the comparison loop so it finishes early when it finds more than one differing letter, rather than comparing all the letters and only then checking how many matched; see the sketch below.
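For example, a sketch of that early-exit comparison (it assumes the words have been pre-lowercased and are of equal length):

private boolean isSimilar(String wordA, String wordB) {
    int differences = 0;
    for (int i = 0; i < wordA.length(); i++) {
        if (wordA.charAt(i) != wordB.charAt(i) && ++differences > 1) {
            return false; // bail out as soon as the second difference shows up
        }
    }
    return differences == 1; // similar means exactly one letter differs
}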
(Update: this answer is about optimizing your current code. I realize, reading your question again, that you may be asking about alternative algorithms!)
You can sort the list of words of the same length and then use a loop nesting of the kind for (int i = 0; i < n; ++i) for (int j = i + 1; j < n; ++j) { }, so that each pair is compared only once (see the sketch below). And in isSimilar, count the differences and return false as soon as you reach 2.
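A brief sketch of that pairwise loop (it reuses the early-exit isSimilar shown earlier; only the restriction to words of one length matters for correctness):

List<String> words = _wordMap.get(wordLength); // all words of this length, pre-lowercased
for (String word : words) {
    _wordGraph.addVertex(word); // add every vertex up front
}
for (int i = 0; i < words.size(); i++) {
    for (int j = i + 1; j < words.size(); j++) { // each pair is checked exactly once
        if (isSimilar(words.get(i), words.get(j))) {
            _wordGraph.addEdge(words.get(i), words.get(j));
        }
    }
}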