Algorithm for distributing members over activities (with individual preferences) - Java

So in my school we have a day where everybody participates in different activities.
Each activity can have about 10 members. The whole day is divided into 2 or 3 blocks, and the pupils assigned to each activity change between blocks (so in block 1 pupil x takes part in activity a, and in block 2 in activity d).
Before the day starts, we hand out lists on which each pupil can tell us his 3 (or 4) favorite activities, ordered from most to least favorite (he will only take part in two of them).
Our job is now to assign the pupils so that overall satisfaction is as high as possible (so everybody more or less gets his/her chosen activities). What would be a good algorithm to solve this? I'm quite familiar with programming (especially Java), so a description of the approach would be enough, although some (pseudo-)code would be great too :)
Is there any way to do this apart from calculating such a "satisfaction" value for every possible solution?
An optional feature would be that if someone can't get into his/her chosen activity, they would be placed into a similar one (also this sounds kind of sexist, but you could for example rate how "female"/"male" an activity is and choose similar activities according to that scale).
I hope this question fits on Stack Exchange; if it is totally off-topic I would be happy if you told me about a more suitable site.
Looking forward to your suggestions,
John

If the students are ranking each of their favorite activities (1-4), then it's simple to assign those rankings a weight (1-4). You group everyone that ranks a certain activity at a certain level and compare the number of students to the number of available spots. If there are more students than spots, the method of choosing is up to you. I would say random for fairness, or if you want to get fancy you can track it from day to day so that everyone gets a chance to participate in a favorite activity.
If there are more spots than students, then you could poll the people who rated it a 3, and so on down the line.
That seems like a fair place to start at least.
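To make that concrete, here is a minimal Java sketch of the greedy idea for a single block. Everything in it (the data structures, the capacity map, the random tie-break) is an assumption made for illustration, and pupils who remain unassigned after all ranks are processed would still need a fallback activity:

import java.util.*;

public class GreedyAssigner {

    // preferences: pupil -> ranked activity list (favorite first)
    // capacity:    activity -> number of free spots in this block
    public static Map<String, List<String>> assign(Map<String, List<String>> preferences,
                                                   Map<String, Integer> capacity) {
        Map<String, List<String>> roster = new HashMap<>();
        Set<String> unassigned = new HashSet<>(preferences.keySet());
        Random rnd = new Random();

        int maxRank = preferences.values().stream().mapToInt(List::size).max().orElse(0);
        for (int rank = 0; rank < maxRank; rank++) {                 // rank 0 = favorite
            for (String activity : capacity.keySet()) {
                // everyone still unassigned who put this activity at this rank
                List<String> wanters = new ArrayList<>();
                for (String pupil : unassigned) {
                    List<String> prefs = preferences.get(pupil);
                    if (rank < prefs.size() && prefs.get(rank).equals(activity)) {
                        wanters.add(pupil);
                    }
                }
                Collections.shuffle(wanters, rnd);                   // random tie-break for fairness
                int free = capacity.get(activity);
                for (String pupil : wanters) {
                    if (free == 0) break;
                    roster.computeIfAbsent(activity, a -> new ArrayList<>()).add(pupil);
                    unassigned.remove(pupil);
                    free--;
                }
                capacity.put(activity, free);
            }
        }
        return roster;   // pupils left in 'unassigned' need a similar-activity fallback
    }
}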

I don't have an algorithm for you but there is a package which will do much of the job for you. The site is http://www.optaplanner.org/ and it is part of the Drools project.
Configuring the application requires some work. By the time you finish the configuration you will have gotten some hint as to how hard the task is, and why no simple algorithm will do the job.

Related

What's a good way of generating football scores based on team strength?

About a year ago I tried to make a simple text-based football simulation program but I abandoned it and I want to start from the beginning since it wasn't really good. My problem is finding a good algorithm for calculating the scores.
I don't want this program to just be a big Math.random() mess; instead I want to use a combination of Math.random() and team strength (each team was given attack and defence ratings of 1-20, but that could be improved too, since I'm not sure that's the optimal way of doing it) in order to calculate the scores, and maybe throw home advantage and form/morale into the mix (but that would come later since it's slightly more complicated).
In my previous program, the algorithm compared the attack and defense ratings of the 2 teams and then decided how many goals to randomize. So for example, if a team had a max of 20 attack rating and played against a team with 5 defence rating, they would most likely score a few goals. A few problems with this algorithm are:
I think it will be really hard to incorporate home advantage and form using this algorithm. It was a lot of ifs and elses with a random integer between 1 and 100 to get different chance percentages.
There were not a lot of surprise results, despite the fact that I gave smaller teams a chance to score a few goals.
The scores were insane, with teams winning 5-1 week in, week out.
It didn't technically check if a team is better overall, just decided the number of goals a team would score based on attack vs defense. So theoretically, attack could be overpowered (I might be wrong though).
Say a team that has to win the league is playing against a team that has already secured a mid-table finish: that wouldn't have had any effect on the game. I suppose this is something to add alongside form, but I don't know.
Considering these points, do you have a good idea for a better algorithm? I don't know machine learning and all that stuff, I want this program to be somewhat simple. I also don't need you to write the whole algorithm for me of course, just give me some general ideas. Thanks a lot :)
My suggestion (just thinking out loud, I have no experience at all in football score simulation):
I would assign every team a relative "goal potential", meaning roughly "how many goals that team scores on average" (against the whole population of teams). Then the simulation would be based on a normal distribution with the mean equal to the difference of the goal potentials of the two teams, and some standard deviation.
To get an idea of appropriate values, you could take historical data of a few teams and compute their average number of goals (per match played), and the global standard deviation.
You could refine the model by assigning the teams different standard deviations, which will reflect their "regularity". Again you can compute these on actual data to get an idea of realistic values (and check if using different values makes any sense).
The home advantage can be modelled as just a fixed increment to the goal potential. You can also measure the actual home advantage by computing a given team's two average goals-per-match figures, one when playing at home and one when visiting, and taking the difference (this is a way to see whether home advantage is real or just folklore).
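A rough Java sketch of this model, slightly adapted: instead of drawing the goal difference directly, each team's goal count is drawn from its own normal distribution (mean = its goal potential, plus a fixed home increment for the host), so the expected difference is still the difference of the potentials. All the numbers are invented and would normally be estimated from historical goals-per-match data:

import java.util.Random;

public class MatchSimulator {

    private static final Random RND = new Random();

    // One team's goal count: goal potential + optional home bonus + gaussian noise
    static int goals(double goalPotential, double stdDev, double homeBonus) {
        double g = goalPotential + homeBonus + RND.nextGaussian() * stdDev;
        return (int) Math.max(0, Math.round(g));    // no negative goals
    }

    public static void main(String[] args) {
        double homePotential = 1.8, homeStdDev = 1.1;   // strong, fairly regular team
        double awayPotential = 1.1, awayStdDev = 1.4;   // weaker, more erratic team
        double homeAdvantage = 0.3;                     // fixed increment for the host

        int homeGoals = goals(homePotential, homeStdDev, homeAdvantage);
        int awayGoals = goals(awayPotential, awayStdDev, 0.0);
        System.out.println("Result: " + homeGoals + " - " + awayGoals);
    }
}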

Recursion with 1D array

I've been staring at this problem for hours. For some reason I understand flood fill and recursion with 2-D arrays, but I can't seem to even get started on this problem:
You have three assistants working for you. One is named Jeff, the other is named Jeff, and the third is named Jeff. They all type with the same speed of one page per minute. You came to your office today with a bunch of papers you need to have typed as soon as possible. You have to distribute the papers among your assistants in such a way that they finish all the papers at the earliest possible time. "Jeff do this," you yell. "Jeff do that," you say. "Jeff, finish the job," you admonish. So Jeff does. But you need to help him. So you have this APT.
Your task is, given a int[] with the number of pages for each paper, return the minimum number of minutes needed for your assistants to type all these papers. Assume that they can't divide a paper into parts, that is, each paper is typed by one person. For example, given {1,2,3,4,5,6,7}, the function should return 10 because 7+3=10, 6+2+1=9, and 5+4=9 (there are also other combinations of these numbers that would yield the same result).
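One straightforward brute-force recursion for this (just a sketch, not the reference solution): give the current paper to each of the three assistants in turn, recurse, and keep the assignment whose busiest assistant finishes earliest. It is exponential (3^n), which is fine for small inputs like the example:

public class TypingTime {

    public static int minTime(int[] pages) {
        return assign(pages, 0, new int[3]);
    }

    // loads[i] = minutes already assigned to assistant i
    private static int assign(int[] pages, int index, int[] loads) {
        if (index == pages.length) {
            return Math.max(loads[0], Math.max(loads[1], loads[2]));
        }
        int best = Integer.MAX_VALUE;
        for (int i = 0; i < 3; i++) {
            loads[i] += pages[index];
            best = Math.min(best, assign(pages, index + 1, loads));
            loads[i] -= pages[index];          // undo the choice (backtracking)
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(minTime(new int[]{1, 2, 3, 4, 5, 6, 7}));   // prints 10
    }
}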

Beat Matching Algorithm

I've recently begun trying to create a mobile app (iOS/Android) that will automatically beat match (http://en.wikipedia.org/wiki/Beatmatching) two songs.
I know that this exists out there, and there have been others who have had some success, but I'm running into issues related to the accuracy of the players.
Specifically, I run into "sync" issues where the "beats" don't line up. The various methods used to date are:
Calculating the BPM in advance, identifying a "beat" (using something like sonicapi.com), trying to line things up appropriately, and starting the incoming mix with its playback rate adjusted (tempo adjustment)
Utilizing a bunch of metadata to trigger specific starts and stops
What does NOT work:
Leveraging echonest's API (it beat matches on the server, we want to do it on the client)
Something like pydub (does not do it in realtime)
Who uses this algorithm today:
iwebdj
Traktor
Does anyone have any suggestions on how to solve this problem? I've seen lots of people do it, but doing it in real time on a mobile device seems to be an issue.
There are lots of methods for solving this problem, some of which work better than others. Matthew Davies, among many others, has published several papers on the matter. This article seems to break down some of the steps necessary for doing this. I built a beat tracker in Matlab (unfortunately...) with a fellow student, and our goal was to create an outro/intro between 2 songs so that the tempo was seamless between them. We wanted to do this for songs that varied in BPM by a small amount (+-7 or so BPM between the two). Our method went sort of like this:
Find two songs in our database that had an overlapping 'key center'. So let's say 2 songs, both in Am.
Find this particular overlap of key centers between the two. Say 30 seconds into song 1 and 60 seconds into song 2
Now create a beat map, using an onset-detection algorithm with peak picking; Also, this was helpful for us.
Pick the first 'beat' for each track, and overlap the two tracks at that point. Now, since they are slightly different BPM from each other, the beats won't really line up with each other.
From this, we created a sort of map that gave us the sample offsets between beats of song A and beats of song B. From this, we wanted to be able to time-stretch the fade-in region of song B so that each one of its onsets (beats in this case) lined up at the correct sample index as the onsets from song A, over ITS fade-out region. So for example, if onset 2 from song B was shown as 5,000 samples ahead of onset 2 from song A, we simply stretched that 5,000 sample region so that onset 2 matched exactly between both songs.
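As an illustration of that offset map (this is not our original Matlab code, just a Java-flavoured sketch with made-up beat positions), you can compute, for each inter-beat interval of song B, the stretch ratio that makes it line up with the corresponding interval of song A; the beat positions themselves would come from an onset detector:

public class BeatOffsetMap {

    // beatsA/beatsB: sample indices of beats in the fade-out/fade-in regions,
    // both already shifted so that beat 0 of each song falls on sample 0.
    public static double[] stretchRatios(long[] beatsA, long[] beatsB) {
        int n = Math.min(beatsA.length, beatsB.length) - 1;
        double[] ratios = new double[n];
        for (int i = 0; i < n; i++) {
            long intervalA = beatsA[i + 1] - beatsA[i];   // target inter-beat gap (song A)
            long intervalB = beatsB[i + 1] - beatsB[i];   // current gap in song B
            ratios[i] = (double) intervalA / intervalB;   // >1 stretch B, <1 compress B
        }
        return ratios;
    }

    public static void main(String[] args) {
        long[] beatsA = {0, 22050, 44100, 66150};         // ~120 BPM at 44.1 kHz
        long[] beatsB = {0, 21000, 42100, 63050};         // a slightly faster song
        for (double r : stretchRatios(beatsA, beatsB)) {
            System.out.printf("stretch segment by %.4f%n", r);
        }
    }
}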
This seems like it would sound weird, but it actually sounded pretty good. Although this was done entirely offline in Matlab, I am also looking for a way to do this in real-time in a mobile app. Not entirely sure about libraries you can use for this in Android world, but I imagine that it would be most efficient in C++.
A couple of libraries I have come across would be good for prototyping something, or at least studying the source code to get a better understanding of how you could do this in a mobile app:
Essentia (great community, open-source)
Aubio (also seems to be maintained pretty well, open-source)
Additional things to read up on for doing this kind of stuff in iOS land:
vDSP Programming guide
This article may also help
I came across this project that is doing some beat detection. Although it seems pretty out-dated unfortunately, it may offer some additional insights.
Unfortunately it isn't as simple as just 'pressing play' at the same time to align beats, unless you are assuming very specific aspects about them (exact tempos, etc.).
If you reallllly have some time on your hands, you should check out Tristan Jehan's (founder of Echonest) thesis; it is jam packed with algorithms and methods for beat detection, etc.

Design for a Debate club assignment application

For my university's debate club, I was asked to create an application to assign debate sessions, and I'm having some difficulty coming up with a good design for it. I will do it in Java. Here's what's needed:
What you need to know about BP debates: There are four teams of 2 debaters each and a judge. The four groups are assigned a specific position: gov1, gov2, op1, op2. There is no significance to the order within a team.
The goal of the application is to take as input the debaters who are present (for example, if there are 20 people, we will hold 2 debates) and assign them to teams and roles with regard to the history of each debater, so that:
Each debater should debate with (be on the same team as) as many people as possible.
Each debater should uniformly debate in different positions.
The debate should be fair - debaters have different levels of experience and this should be as even as possible - i.e., there shouldn't be a team of two very experienced debaters and a team of junior debaters.
There should be an option for the user to restrict the assignment in various ways, such as:
Specifying that two people should debate together, in a specific position or not.
Specifying that a single debater should be in a specific position, regardless of the partner.
If anyone can try to give me some pointers for a design for this application, I'll be so thankful!
Also, I've never implemented a GUI before, so I'd appreciate some pointers on that as well, but it's not the major issue right now.
Also, there is the issue of keeping Debater information in a file, which I have also never implemented in Java, and I would like some tips on that as well.
This seems like a textbook constraint problem. GUI notwithstanding, it'd be perfect for a technology like Prolog (ECLiPSe prolog has a couple of different Java integration libraries that ship with it).
But since you want this in Java, why not store the debaters' history in a SQL database and use SQL to express the constraints? You can then wrap those SQL queries in Java methods.
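As a loose sketch of what "wrap those SQL queries in Java methods" could look like (the table and column names debate_history, debater_a, debater_b are invented for illustration; any JDBC connection would do):

import java.sql.*;

public class HistoryDao {

    private final Connection conn;

    public HistoryDao(Connection conn) {
        this.conn = conn;
    }

    // How many times have these two debaters already been on the same team?
    public int timesTeamedUp(int debaterA, int debaterB) throws SQLException {
        String sql = "SELECT COUNT(*) FROM debate_history "
                   + "WHERE (debater_a = ? AND debater_b = ?) "
                   + "   OR (debater_a = ? AND debater_b = ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, debaterA);
            ps.setInt(2, debaterB);
            ps.setInt(3, debaterB);
            ps.setInt(4, debaterA);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}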
There are two parts (three if you count entering and/or saving the data), the underlying algorithm and the UI.
For the UI, I'm weird. I use this technique (there is a link to my sourceforge project). A Java version would have to be done, which would not be too hard. It's weird because very few people have ever used it, but it saves an order of magnitude coding effort.
For the algorithm, the problem looks small enough that I would approach it with a simple tree search. I would have a scoring algorithm and just report the schedule with the best score.
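Here is a bare-bones sketch of that tree search for a single debate, assuming 8 speaker slots (4 positions x 2 seats) and a placeholder score() that only balances experience between the two government teams; a real scoring function would fold in each debater's history, all four positions, and the user's restrictions:

import java.util.*;

public class DebateSearch {

    static final String[] POSITIONS = {"gov1", "gov2", "op1", "op2"};

    static double bestScore = Double.NEGATIVE_INFINITY;
    static List<Integer> bestAssignment = null;

    // slotOwner.get(s) = id of the debater placed in slot s (2 slots per position)
    static void search(List<Integer> debaters, List<Integer> slotOwner) {
        if (slotOwner.size() == 8) {
            double s = score(slotOwner);
            if (s > bestScore) {
                bestScore = s;
                bestAssignment = new ArrayList<>(slotOwner);
            }
            return;
        }
        for (int d : debaters) {
            if (slotOwner.contains(d)) continue;        // already placed in this debate
            slotOwner.add(d);
            search(debaters, slotOwner);
            slotOwner.remove(slotOwner.size() - 1);     // backtrack
        }
    }

    // Placeholder score: penalise an experience gap between gov1 and gov2.
    static double score(List<Integer> slotOwner) {
        return -Math.abs(experience(slotOwner.get(0)) + experience(slotOwner.get(1))
                       - experience(slotOwner.get(2)) - experience(slotOwner.get(3)));
    }

    // Invented experience lookup, just to make the sketch runnable.
    static int experience(int debaterId) {
        return debaterId % 5;
    }

    public static void main(String[] args) {
        search(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9), new ArrayList<>());
        for (int p = 0; p < 4; p++) {
            System.out.println(POSITIONS[p] + ": " + bestAssignment.get(2 * p)
                               + " and " + bestAssignment.get(2 * p + 1));
        }
    }
}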
That's a bird's-eye overview of how I would approach it.

Calculating Mutual Information For Selecting a Training Set in Java

Scenario
I am attempting to implement supervised learning over a data set within a Java GUI application. The user will be given a list of items or 'reports' to inspect and will label them based on a set of available labels. Once this labelling is complete, the labelled instances will be given to a learning algorithm, which will attempt to order the rest of the items by how likely it is that the user will want to view them.
To get the most from the user's time I want to pre-select the reports that will provide the most information about the entire collection of reports, and have the user label them. As I understand it, to calculate this, it would be necessary to find the sum of all the mutual information values for each report, and order them by that value. The labelled reports from supervised learning will then be used to form a Bayesian network to find the probability of a binary value for each remaining report.
Example
Here, an artificial example may help to explain, and may clear up the confusion where I've undoubtedly used the wrong terminology :-) Consider an example where the application displays news stories to the user, and chooses which news stories to display first based on the user's demonstrated preference. Features of a news story that correlate with the user's interest are country of origin, category and date. So if a user labels a single news story as interesting when it came from Scotland, it tells the machine learner that there's an increased chance other news stories from Scotland will be interesting to the user. Similarly for a category such as Sport, or a date such as December 12th 2004.
This preference could be calculated by choosing any order for all news stories (e.g. by category, by date) or randomly ordering them, then calculating the preference as the user goes along. What I would like to do is to get a kind of "head start" on that ordering by having the user look at a small number of specific news stories and say whether they're interested in them (the supervised learning part). To choose which stories to show the user, I have to consider the entire collection of stories. This is where mutual information comes in. For each story I want to know how much it can tell me about all the other stories when it is classified by the user. For example, if there is a large number of stories originating from Scotland, I want to get the user to classify (at least) one of them. Similarly for other correlating features such as category or date. The goal is to find examples of reports which, when classified, provide the most information about the other reports.
Problem
Because my math is a bit rusty, and I'm new to machine learning, I'm having some trouble converting the definition of mutual information into an implementation in Java. Wikipedia gives the equation for the mutual information of two discrete random variables X and Y as I(X;Y) = Σ_y Σ_x p(x,y) log( p(x,y) / (p(x) p(y)) ).
However, I'm unsure if this can actually be used when nothing has been classified, and the learning algorithm has not calculated anything yet.
As in my example, say I had a large number of new, unlabelled instances of this class:
public class NewsStory {
    private String countryOfOrigin;
    private String category;
    private Date date;
    // constructor, etc.
}
In my specific scenario, the correlation between fields/features is based on an exact match so, for instance, one day and 10 years difference in date are equivalent in their inequality.
The factors for correlation (e.g. is date more correlating than category?) are not necessarily equal, but they can be predefined and constant. Does this mean that the result of the function p(x,y) is the predefined value, or am I mixing up terms?
The Question (finally)
How can I go about implementing the mutual information calculation given this (fake) example of news stories? Libraries, javadoc, code examples etc. are all welcome information. Also, if this approach is fundamentally flawed, explaining why that is the case would be just as valuable an answer.
PS. I am aware of libraries such as Weka and Apache Mahout, so just mentioning them is not really useful for me. I'm still searching through documentation and examples for both these libraries looking for stuff on Mutual Information specifically. What would really help me is pointing to resources (code examples, javadoc) where these libraries help with mutual information.
I am guessing that your problem is something like...
"Given a list of unlabeled examples, sort the list by how much the predictive accuracy of the model would improve if the user labelled the example and added it to the training set."
If this is the case, I don't think mutual information is the right thing to use because you can't calculate MI between two instances. The definition of MI is in terms of random variables and an individual instance isn't a random variable, it's just a value.
The features and the class label can be thought of as random variables. That is, they have a distribution of values over the whole data set. You can calculate the mutual information between two features, to see how 'redundant' one feature is given the other one, or between a feature and the class label, to get an idea of how much that feature might help prediction. This is how people usually use mutual information in a supervised learning problem.
I think ferdystschenko's suggestion that you look at active learning methods is a good one.
In response to Grundlefleck's comment, I'll go a bit deeper into terminology by using his idea of a Java object analogy...
Collectively, we have used the terms 'instance', 'thing', 'report' and 'example' to refer to the object being classified. Let's think of these things as instances of a Java class (I've left out the boilerplate constructor):
class Example {
    String f1;
    String f2;
}

Example e1 = new Example("foo", "bar");
Example e2 = new Example("foo", "baz");
The usual terminology in machine learning is that e1 is an example, that all examples have two features f1 and f2 and that for e1, f1 takes the value 'foo' and f2 takes the value 'bar'. A collection of examples is called a data set.
Take all the values of f1 for all examples in the data set; this is a list of strings, and it can also be thought of as a distribution. We can think of the feature as a random variable, and each value in the list as a sample taken from that random variable. So we can, for example, calculate the MI between f1 and f2. The pseudocode would be something like:
N = total number of examples
mi = 0
for each value x taken by f1:
{ sum = 0
  for each value y taken by f2:
  { p_xy = (number of examples where f1=x and f2=y) / N
    p_x  = (number of examples where f1=x) / N
    p_y  = (number of examples where f2=y) / N
    if p_xy > 0:
      sum += p_xy * log(p_xy/(p_x*p_y))
  }
  mi += sum
}
However, you can't calculate MI between e1 and e2; it's just not defined that way.
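A small, self-contained Java version of that pseudocode for two String-valued features might look like the sketch below (the class and method names are made up; base-2 log gives the answer in bits):

import java.util.*;

public class MutualInformation {

    public static double mi(List<String> f1, List<String> f2) {
        int n = f1.size();                                  // assumes f1.size() == f2.size()
        Map<String, Integer> countX = new HashMap<>();
        Map<String, Integer> countY = new HashMap<>();
        Map<String, Map<String, Integer>> countXY = new HashMap<>();
        for (int i = 0; i < n; i++) {
            countX.merge(f1.get(i), 1, Integer::sum);
            countY.merge(f2.get(i), 1, Integer::sum);
            countXY.computeIfAbsent(f1.get(i), k -> new HashMap<>())
                   .merge(f2.get(i), 1, Integer::sum);
        }
        double mi = 0.0;
        for (Map.Entry<String, Map<String, Integer>> ex : countXY.entrySet()) {
            double px = (double) countX.get(ex.getKey()) / n;
            for (Map.Entry<String, Integer> ey : ex.getValue().entrySet()) {
                double py = (double) countY.get(ey.getKey()) / n;
                double pxy = (double) ey.getValue() / n;
                mi += pxy * (Math.log(pxy / (px * py)) / Math.log(2));   // log base 2
            }
        }
        return mi;
    }

    public static void main(String[] args) {
        List<String> country  = Arrays.asList("Scotland", "Scotland", "England", "England");
        List<String> category = Arrays.asList("Sport", "Sport", "Politics", "Politics");
        System.out.println(mi(country, category));          // perfectly correlated: 1.0 bit
    }
}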
I know information gain only in connection with decision trees (DTs), where in the construction of a DT, the split to make on each node is the one which maximizes information gain. DTs are implemented in Weka, so you could probably use that directly, although I don't know if Weka lets you calculate information gain for any particular split underneath a DT node.
Apart from that, if I understand you correctly, I think what you're trying to do is generally referred to as active learning. There, you first need some initial labeled training data which is fed to your machine learning algorithm. Then you have your classifier label a set of unlabeled instances and return confidence values for each of them. Instances with the lowest confidence values are usually the ones which are most informative, so you show these to a human annotator and have him/her label these manually, add them to your training set, retrain your classifier, and do the whole thing over and over again until your classifier has a high enough accuracy or until some other stopping criterion is met. So if this works for you, you could in principle use any ML-algorithm implemented in Weka or any other ML-framework as long as the algorithm you choose is able to return confidence values (in case of Bayesian approaches this would be just probabilities).
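A schematic Java sketch of that loop (the Classifier, Oracle and Example types here are placeholders I made up, not Weka classes; the point is only the shape of the uncertainty-sampling loop):

import java.util.*;

public class ActiveLearningLoop {

    interface Classifier {
        void train(List<Example> labelled);
        double confidence(Example e);              // how sure the model is about e
    }

    interface Oracle {                             // the human annotator
        String askLabel(Example e);
    }

    static class Example {
        Map<String, String> features = new HashMap<>();
        String label;                              // null while unlabelled
    }

    static void run(Classifier model, Oracle user,
                    List<Example> labelled, List<Example> unlabelled,
                    int rounds, int queriesPerRound) {
        for (int r = 0; r < rounds && !unlabelled.isEmpty(); r++) {
            model.train(labelled);
            // least-confident examples are treated as the most informative
            unlabelled.sort(Comparator.comparingDouble(model::confidence));
            for (int q = 0; q < queriesPerRound && !unlabelled.isEmpty(); q++) {
                Example e = unlabelled.remove(0);
                e.label = user.askLabel(e);
                labelled.add(e);
            }
        }
        model.train(labelled);                     // final retrain with all labels
    }
}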
With your edited question I think I'm coming to understand what you're aiming at. If what you want is to calculate MI, then StompChicken's answer and pseudocode couldn't be much clearer in my view. I also think that MI is not what you want and that you're trying to reinvent the wheel.
Let's recapitulate: you would like to train a classifier which can be updated by the user. This is a classic case for active learning. But for that, you need an initial classifier (you could basically just give the user random data to label but I take it this is not an option) and in order to train your initial classifier, you need at least some small amount of labeled training data for supervised learning. However, all you have are unlabeled data. What can you do with these?
Well, you could cluster them into groups of related instances, using one of the standard clustering algorithms provided by Weka or some specific clustering tool like Cluto. If you now take the x most central instances of each cluster (x depending on the number of clusters and the patience of the user) and have the user label them as interesting or not interesting, you can adopt this label for the other instances of that cluster as well (or at least for the central ones). Voila, now you have training data which you can use to train your initial classifier and kick off the active learning process, updating the classifier each time the user marks a new instance as interesting or not. I think what you're trying to achieve by calculating MI is essentially similar, but it may just be the wrong tool for the job.
Not knowing the details of your scenario, I should think that you may not even need any labeled data at all, except if you're interested in the labels themselves. Just cluster your data once, let the user pick an item interesting to him/her from the central members of each cluster, and suggest other items from the selected clusters as perhaps being interesting as well. Also suggest some random instances from other clusters here and there, so that if the user selects one of these, you may assume that the corresponding cluster might generally be interesting too. If there is a contradiction and a user likes some members of a cluster but not others in the same one, then you try to re-cluster the data into finer-grained groups which discriminate the good from the bad ones. The re-training step could even be avoided by using hierarchical clustering from the start and travelling down the cluster hierarchy at every contradiction the user's input causes.
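For illustration, a sketch of that cluster-based bootstrapping step in Java, assuming the clustering itself has already been done elsewhere (Weka, Cluto, ...) and that items live in a numeric feature space; the Item type, the Euclidean distance and the askUser callback are all placeholders:

import java.util.*;
import java.util.function.Function;

public class ClusterBootstrap {

    static class Item {
        double[] features;
        String label;                                   // filled in by propagation
        Item(double[] features) { this.features = features; }
    }

    static double distance(double[] a, double[] b) {    // plain Euclidean distance
        double d = 0;
        for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(d);
    }

    static Item mostCentral(List<Item> cluster) {       // item closest to the centroid
        double[] centroid = new double[cluster.get(0).features.length];
        for (Item it : cluster)
            for (int i = 0; i < centroid.length; i++)
                centroid[i] += it.features[i] / cluster.size();
        return Collections.min(cluster,
                Comparator.comparingDouble(it -> distance(it.features, centroid)));
    }

    // For each cluster: ask the user about its most central item, copy the answer.
    static void bootstrap(Collection<List<Item>> clusters, Function<Item, String> askUser) {
        for (List<Item> cluster : clusters) {
            String label = askUser.apply(mostCentral(cluster));
            for (Item it : cluster) it.label = label;
        }
    }
}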
