I'm working on a k-means algorithm that takes in a double[][] storing locations and returns two clusters of locations.
I just have a really quick question: what is the best way to choose what the initial cluster values should be?
I've tried randomizing the values but that doesn't always work well, and I can't find any good answers to this question online. Any help is much appreciated.
One popular strategy that usually works better than random selection is to pick the first center at random, then choose the second by finding the data point farthest from the first. Each subsequent center is chosen to be the farthest from all of the centers picked so far.
This is similar to the slightly more complex k-means++ initialization, which picks each new center randomly with probability proportional to its squared distance from the nearest existing center.
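A minimal Java sketch of this farthest-point idea for the two-cluster case in the question (the class and method names are illustrative, and the distance metric is assumed to be Euclidean):

```java
import java.util.Random;

// Farthest-point initialization for two clusters, given a double[][]
// of locations as in the question. For k > 2 you would repeat the
// "farthest from all chosen centers" step.
public class FarthestPointInit {

    static double[][] initTwoCenters(double[][] points, Random rng) {
        // Pick the first center at random.
        double[] first = points[rng.nextInt(points.length)];

        // Pick the second center as the point farthest from the first.
        double[] second = points[0];
        double bestDist = -1;
        for (double[] p : points) {
            double d = squaredDistance(p, first);
            if (d > bestDist) {
                bestDist = d;
                second = p;
            }
        }
        return new double[][] { first, second };
    }

    // Squared Euclidean distance (no sqrt needed for comparisons).
    static double squaredDistance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            sum += diff * diff;
        }
        return sum;
    }
}
```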
I'm trying to solve a variant of the partition problem, with two important twists. First, I need to solve for k partitions, not just 2 as in the classic partition problem.
The following code does that:
https://gist.github.com/ishikawa/21680
Second, I need the freedom to reorder the items so that I can reach the optimal solution. Where the classic problem requires the order of the elements to be left intact, with the array simply split at semi-optimal points, I need to allow the array to be reordered so that the difference between the partitions is as small as possible.
How can I tackle this? Both twists are necessary for this real world application. I'd be extremely happy if I could find a Java library that already handles this.
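For reference, a minimal sketch of the standard greedy heuristic for k-way partitioning (sort descending, always add the next item to the partition with the smallest sum). It reorders items freely, as the question requires, but only approximates the optimum; the names are illustrative:

```java
import java.util.*;

// Greedy k-way partitioning: an approximation, not an optimal solver.
public class GreedyKPartition {

    static List<List<Integer>> partition(int[] items, int k) {
        Integer[] sorted = Arrays.stream(items).boxed()
                .sorted(Comparator.reverseOrder())
                .toArray(Integer[]::new);
        List<List<Integer>> parts = new ArrayList<>();
        long[] sums = new long[k];
        for (int i = 0; i < k; i++) parts.add(new ArrayList<>());
        for (int item : sorted) {
            int min = 0; // partition with the smallest current sum
            for (int i = 1; i < k; i++) if (sums[i] < sums[min]) min = i;
            parts.get(min).add(item);
            sums[min] += item;
        }
        return parts;
    }

    public static void main(String[] args) {
        // Prints [[8], [7, 4], [6, 5]] -- sums 8, 11, 11
        System.out.println(partition(new int[] {7, 5, 6, 4, 8}, 3));
    }
}
```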
I recently wrote a sudoku solver in Java using backtracking.
Is it possible, given the solutions, to formulate a problem or puzzle?
EDIT
To formulate the original puzzle, is there some way to achieve this?
An additional question, given the puzzle and the solutions: if I can derive puzzles from the solutions (the result is puzzles) and, at the same time, derive solutions from the puzzle (the result is solutions), which has the greater number: the puzzles or the solutions?
It is possible to formulate one of the multiple possible original states:
1. Start with the final solution (all numbers are present).
2. Remove one number (chosen randomly or not).
3. Check whether this number can be deduced from the current state of the board (you already have a solver, so this should be easy).
4. If the number can be deduced, everything is OK. Go back to step 2.
5. If the number cannot be deduced, put it back where it was. Go back to step 2.
6. If no more numbers can be removed, you have reached one of the possible original states of the puzzle.
If you choose the numbers you remove randomly (step 2), you can run this several times and get different starting puzzles that lead to the same final solution.
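A compilable Java sketch of that removal loop, assuming a hypothetical Solver interface (its name and method are assumptions standing in for the solver you already have):

```java
import java.util.*;

public class PuzzleGenerator {

    // Hypothetical hook into your existing solver.
    public interface Solver {
        // true if the blanked cell has exactly one value deducible
        // from the rest of the board
        boolean isUniquelyDeducible(int[][] board, int row, int col);
    }

    public static int[][] generate(int[][] solution, Solver solver, Random rng) {
        int n = solution.length;
        int[][] board = new int[n][];
        for (int i = 0; i < n; i++) board[i] = solution[i].clone();

        // Visit the cells in random order, trying to blank each one.
        List<int[]> cells = new ArrayList<>();
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n; c++)
                cells.add(new int[] { r, c });
        Collections.shuffle(cells, rng);

        for (int[] cell : cells) {
            int r = cell[0], c = cell[1];
            int saved = board[r][c];
            board[r][c] = 0; // step 2: remove the number
            if (!solver.isUniquelyDeducible(board, r, c)) {
                board[r][c] = saved; // step 5: cannot be deduced, restore
            }
        }
        return board; // one of the possible original states
    }
}
```

Using a different Random seed on each call gives you the different starting points mentioned above.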
Creating a puzzle from a solution is very simple. You just apply the solving steps backwards.
For example, if a line contains 8 digits, you can fill in the 9th. Applied backwards: if a line contains 9 digits, you can remove one. This will yield a very boring puzzle, but still a valid one (a valid puzzle being a puzzle with only one solution).
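As a tiny illustration of the "fill in the 9th" rule: digits 1 through 9 sum to 45, so when a row has exactly one empty cell (encoded here as 0), the missing digit is 45 minus the sum of the others:

```java
// Returns the digit missing from a row with exactly one empty cell,
// or -1 if the rule doesn't apply (more or fewer than one empty cell).
static int missingDigit(int[] row) {
    int sum = 0, empties = 0;
    for (int v : row) {
        sum += v;
        if (v == 0) empties++;
    }
    return (empties == 1) ? 45 - sum : -1;
}
```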
The more complicated the solving steps you reverse, the more difficult the puzzle will be. Brute-forcing the puzzle is the most difficult strategy; executing it backwards boils down to removing a random digit and brute-force checking whether there is still only one solution. Note that you do not have to solve the entire puzzle: it's enough to prove there is only one way to add the removed digit back.
As for the second part of your question: This feels a bit like a mathematical question, but let me answer:
A good puzzle has only 1 solution. Since there are multiple puzzles that yield the same solution (for example, there are 81 ways to fill in 80 of the 81 squares, all yielding the same solution from a different puzzle), you could say there are many more puzzles than solutions.
If you also allow puzzles with multiple solutions, it changes. For every puzzle, there must be one or more solutions, but all those solutions also belong to that puzzle, so the number of solutions to puzzles is equal to the number of puzzles to solutions. Invalid puzzles do not change this: Since they belong to 0 solutions, you need no additional puzzles belonging to those solutions.
P.S. It's also trivial to create puzzles if they do not need to be uniquely solvable: just randomly remove some digits and you're done.
If I can derive puzzles from the solutions (the result is puzzles) and, at the same time, derive solutions from the puzzle (the result is solutions), which has the greater number: the puzzles or the solutions?
There is no well-defined answer.
Each solution has exactly 2^81 corresponding puzzles, including the trivial one, since each of the 81 cells is either given or blank. Not all of these have a unique solution, but many do. Therefore, if the chosen set of solutions contains only one element, the maximal set of puzzles that jointly correspond to the solution set is vastly larger.
On the other hand, the completely blank puzzle admits 6,670,903,752,021,072,936,960 solutions (the known count of completed Sudoku grids). There are many pairs of those that share only the blank grid as a common puzzle. Thus, if the chosen set of puzzles includes only the blank grid, the set of corresponding solutions is vastly larger.
An interviewer asked me how to find the largest number in a random array. I answered that I would loop over the whole array to find the largest number, but he said it would take too much time. I wonder if there is a better solution. Any suggestions?
Your interviewer probably wanted you to drill into details. When I interview someone, I would be looking for someone to ask these questions:
How many numbers are there?
What is the range of values in the array?
Is the array sorted? (Yes, really)
Where did the data come from?
How often will this need to run?
I might answer, "They are ages between 15 and 99. And there are 100,000,000 of them." This would lead to an obvious optimization: if you see a 99, break your loop and return 99. That could save a lot of time. If the numbers are evenly distributed (you should ask!), this would take the average number of items you have to look at from 100,000,000 to under 100.
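That early-exit loop might look like this in Java, assuming the stated bounds (99 is the largest possible value):

```java
// Scans for the maximum, but stops early once the known upper
// bound (99) is seen -- on average after fewer than 100 items
// if values are evenly distributed over 15..99.
static int maxAge(int[] ages) {
    int max = Integer.MIN_VALUE;
    for (int age : ages) {
        if (age > max) max = age;
        if (max == 99) break; // can't do better; stop scanning
    }
    return max;
}
```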
What I look for is questions. I don't want someone jumping in and doing things they think are 'right' without knowing the details.
Even without silly constraints, a good candidate would try to figure out what type of system this is going to fit into. Obviously, finding the highest number in an array isn't going to be a one-time thing. If I need to repeatedly get the next highest number, again and again, sorting it first makes sense. If the array will be growing and I need to keep pulling out the highest then a heap makes sense.
You'll never know what your interviewer was thinking because you didn't drill down. That might have been all she was looking for, your questions. But even if she wasn't, even if she was just fumbling the question, by nailing down the details you would have demonstrated that you know what you're talking about.
If the items in the array are random, the largest number could be at any position and you therefore need to read each item in the array to find the largest. Your proposal is the most efficient method given the assumptions...
If the array is big, there is a faster way -- not faster in terms of CPU cycles, but faster in terms of wall-clock time. You could subdivide the array into sections (not by copying it, but by identifying start/end points) and then search each section in a separate thread. When all of the threads are done, find the largest number among the ones each thread found. That way, you can solve the problem up to X times faster, where X is the number of cores on your machine.
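In Java, parallel streams are a compact way to express this subdivide-and-search idea (a sketch; the actual speedup depends on array size and hardware):

```java
import java.util.Arrays;

// Splits the array into sections internally and searches them on
// multiple cores; throws if the array is empty.
static int parallelMax(int[] values) {
    return Arrays.stream(values).parallel().max().getAsInt();
}
```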
It was most likely a trick interviewers use to put you under pressure and see whether you stand by your answer.
There is no faster way to do this if you know nothing else about the array. If the values are sorted or pre-organized in some way, it might look different.
You will need to look at every element in the array since it is unsorted. I think when the interviewer said it would take long, you should have said, "yes, it could take long, but that's the only way".
Perhaps he was testing your knowledge of algorithms.
Your method takes O(n). There are sorting algorithms that can turn the random array into an ordered one, for example quicksort, which makes O(n log n) comparisons on average; it has a bad worst case, but that is rare.
In interviews I find it is always best to voice your ideas, your assumptions, and your logic, so that your interviewer is in the loop and gets an idea of how you think.
I'm practicing for a programming competition, and I'm going over some difficult problems that I wasn't able to answer in the past. One of them is the King's Maze. Essentially, you are given an NxN array of numbers -50 < x < 50 that represent "tokens". You have to start at position 1,1 (I assume that's 0,0 in array indexes) and finish at N,N. You have to pick up the tokens on cells you visit, and you can't step on a cell with no token (represented by a 0). If you get surrounded by 0s, you lose. If there is no solution to a maze, you output "No solution"; otherwise, you output the highest possible number you can get from adding up the tokens you pick up.
I have no idea how to solve this problem. I figured you could write a maze algorithm to solve it, but that takes time, and in programming contests you are only given two hours to solve multiple problems. I'm guessing there's some sort of pattern I'm missing. Does anyone know how I should approach this?
Also, it might help to mention that this problem is meant for high school students.
This type of problem is typically solved using dynamic programming or memoization.
Basically you formulate a recursive solution, and solve it bottom up while remembering and reusing previously computed results.
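A minimal sketch of memoization on a deliberately simplified variant, with moves restricted to right/down only (the real King's Maze allows more moves, so treat this as an illustration of the technique rather than a solution):

```java
public class MazeDP {
    static int[][] grid;      // token values; 0 means the cell is blocked
    static Integer[][] memo;  // best score from (r, c) to the goal, or null

    // Best total of tokens collectible from (r, c) to (n-1, n-1),
    // or Integer.MIN_VALUE if the goal is unreachable.
    static int best(int r, int c) {
        int n = grid.length;
        if (r >= n || c >= n || grid[r][c] == 0) return Integer.MIN_VALUE;
        if (r == n - 1 && c == n - 1) return grid[r][c];
        if (memo[r][c] != null) return memo[r][c];
        int sub = Math.max(best(r + 1, c), best(r, c + 1));
        memo[r][c] = (sub == Integer.MIN_VALUE)
                ? Integer.MIN_VALUE   // no path to the goal from here
                : grid[r][c] + sub;   // pick up this token, then continue
        return memo[r][c];
    }

    public static void main(String[] args) {
        grid = new int[][] { {1, 2, 0}, {3, 0, 4}, {5, 6, 7} };
        memo = new Integer[grid.length][grid.length];
        int result = best(0, 0);
        System.out.println(result == Integer.MIN_VALUE ? "No solution" : result); // 22
    }
}
```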
The simple approach (i.e. the simplest to code) is to try all the possible paths: try each first step; for each first step, try each second step; for each first/second step combination, try each third step; and so on. However, depending on how big the maze is, this may take too long to run (or it may not).
Your next step is to think about how you can do this faster. The first step is usually to eliminate moves that you know can't lead to a finish, or can't lead to a finish with more points than the best you already have. Since this is practice for a competition, we'll leave you to do this work yourself.
Think "graph" algorithms: The Algorithm Design Manual
I have a data set with attributes like this:
Marital_status = {M,S,W,D}
IsBlind = {Y,N}
IsDisabled = {Y,N}
IsVeteran = {Y,N}
etc. There are about 200 such variables.
I need an algorithm to generate combinations of the attributes, with one value at a time.
In other words, my first combination would be:
Marital_status = M, IsBlind = Y, IsDisabled = Y, IsVeteran = Y
The next set would be:
Marital_status = M, IsBlind = Y, IsDisabled = Y, IsVeteran = N
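For illustration, here is a minimal odometer-style enumerator over the four example attributes; it produces exactly the sequence above, and over all 200 attributes it blows up exactly as described next:

```java
// Enumerates the Cartesian product by incrementing the rightmost
// attribute first and carrying leftwards, like an odometer.
public class CartesianEnumerator {
    public static void main(String[] args) {
        String[] names = { "Marital_status", "IsBlind", "IsDisabled", "IsVeteran" };
        String[][] values = {
            { "M", "S", "W", "D" },
            { "Y", "N" },
            { "Y", "N" },
            { "Y", "N" }
        };
        int[] idx = new int[values.length];
        while (true) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < values.length; i++) {
                if (i > 0) sb.append(", ");
                sb.append(names[i]).append(" = ").append(values[i][idx[i]]);
            }
            System.out.println(sb);
            // increment the "odometer", carrying from right to left
            int pos = values.length - 1;
            while (pos >= 0 && ++idx[pos] == values[pos].length) {
                idx[pos] = 0;
                pos--;
            }
            if (pos < 0) break; // all combinations emitted
        }
    }
}
```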
I tried to use a simple combination generator, treating each value of each attribute as an attribute itself. It did not work, because mutually exclusive choices were included in the combinations and the number of possible combinations was really huge (133873417996074857185490633899939406700260683726864088366400, to be precise).
Could you please suggest an algorithm (preferably coded in Java)?
Thanks!!
As others (and you yourself) have pointed out, it is impossible to test this exhaustively.
I suggest you take a sampling approach and test with that. You have a strong theoretical background, so you will be able to find and understand the relevant material online.
But let me give a small example. For now, I will ignore possible "clusters" of parameters (that are strongly related).
Create one sample for each possible value of each of your 200 parameters. This exhaustiveness ensures that no parameter value can be forgotten.
The samples don't have to be created up front; the values can be generated in a loop.
To each of these one-value samples, you then need to add values for the other parameters. A simple approach is to choose a number of times you want to test each one-value sample (say N = 100) and, for each one, generate the other values randomly N times.
If there are 1000 possible values across all 200 parameters and N = 100, that gives 100K tests.
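A hedged Java sketch of this sampling scheme (all names are illustrative): for every (parameter, value) pair it generates N test cases with that value fixed and every other parameter drawn at random.

```java
import java.util.*;

public class SamplingGenerator {

    // For each (parameter, value) pair, emit n test cases where that
    // value is fixed and all other parameters are chosen randomly.
    public static List<Map<String, String>> generate(
            Map<String, String[]> params, int n, Random rng) {
        List<Map<String, String>> tests = new ArrayList<>();
        for (Map.Entry<String, String[]> fixed : params.entrySet()) {
            for (String value : fixed.getValue()) {
                for (int i = 0; i < n; i++) {
                    Map<String, String> t = new LinkedHashMap<>();
                    t.put(fixed.getKey(), value); // the exhaustive part
                    for (Map.Entry<String, String[]> p : params.entrySet()) {
                        if (p.getKey().equals(fixed.getKey())) continue;
                        String[] choices = p.getValue(); // the random part
                        t.put(p.getKey(), choices[rng.nextInt(choices.length)]);
                    }
                    tests.add(t);
                }
            }
        }
        return tests;
    }

    public static void main(String[] args) {
        Map<String, String[]> params = new LinkedHashMap<>();
        params.put("Marital_status", new String[] { "M", "S", "W", "D" });
        params.put("IsBlind", new String[] { "Y", "N" });
        params.put("IsDisabled", new String[] { "Y", "N" });
        // (4 + 2 + 2) values x N=2 repetitions = 16 test cases
        System.out.println(generate(params, 2, new Random(42)).size());
    }
}
```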
You could elaborate on this basic idea in many ways:
If you want your test to be repeatable, you could generate it only once, store it, and then reuse the same sets in all future tests.
You could control your distribution so that each value gets selected a fair number of times.
In real life, your 200 parameters wouldn't all be independent. Many parameters are actually connected to others, in that the probabilities of finding their values together are not even. Instead of building the initial exhaustive set on a single parameter as I did above, I would build the exhaustive set on each cluster of connected parameters.
Find another way. If you have 200 variables, and each one has at least 2 choices, you're going to have >= 2^200 combinations, which is about 1.6 x 10^60. If you generate one combination each nanosecond, it would take about 10^43 years to enumerate them all.
As Keith pointed out, the number of combinations will be impossibly large if there are no excluded combinations, which would make your need unmeetable. However, since you've already said that you have mutually exclusive choices, the solution space will be smaller.
How much smaller? That depends on how many choices are mutually exclusive. I recommend doing some math on that before going much further.
Assuming that enough choices are exclusive, you're still going to have to essentially brute-force it, but you're very unlikely to find an existing, useful algorithm.
Which brings me to the question: what's your reason for doing this? Exhaustive testing? That sounds good, but you may find it's not possible. I've encountered this issue myself, and in the end you may well be forced into some combination of carefully selected "edge" cases plus some quasi-randomly selected other cases.
Having read your comment above, it appears you define "mutual exclusion" differently than I do, and I fear that you may have a problem.
So a given patient is not both blind and not blind. Great. But that's not what I (and, I suspect, everyone else here) understood when you mentioned mutual exclusions.
By those, I mean constraints such as: if blind, cannot be non-disabled, or something like that.
Without a significant number of mutually exclusive inter-relationships between your attributes which limit their combinations, you will be unable to complete your exhaustive testing.