Simulation in Java [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I am a novice to the simulation world and want to learn how programmers develop real simulation projects in Java. I would use Eclipse.
Could anyone point me to the other things I need to know (e.g. packages, software, etc. and their purposes)?
I am afraid the question might seem a bit vague, as it is not clear which type of project I am talking about. But being a novice, let me say that I want to learn how to begin coding a simulation project.

If you are building a Monte-Carlo model for a discrete event simulation, or a simulation model for pricing derivatives, you should find there is a body of framework code already out there. If you are doing a numerical simulation such as a finite-element model, you will be basing your simulation on a matrix computation library. Other types of simulation exist, but these are the two most likely cases.
I've never written a finite element model and know next to nothing about these, although I did have occasion to port one to DEC Visual FORTRAN at one point. Although the program (SAFIR, if anyone cares) was commented in French, the porting exercise consisted of modifying two date functions for a total of 6 lines of FORTRAN code - and writing a makefile.
Monte-Carlo models start by measuring some base population to get distributions of one or more variables of interest. Then you take a pseudo-random number generator with good statistical and geometric properties (the Mersenne Twister algorithm is widely used for this) and write a function to convert its output to a random variable with the appropriate distribution. You will probably be able to find library functions that do this unless your variables have a really unusual distribution.
Then you build or obtain a simulation framework and write a routine that takes the random variables and does whatever computation you want to do for the model. You run it, storing the results of each simulation, until the error is within some desired tolerance level. After that, you calculate statistics (means, distributions etc.) from all of the runs of the simulation model.
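The loop described above can be sketched in a few lines of Java. This is a minimal illustration, not framework code: the standard library has no Mersenne Twister, so `SplittableRandom` stands in for it, and the "model" is just an exponential variate drawn by inverse-transform sampling.

```java
import java.util.SplittableRandom;

public class MonteCarloSketch {
    // Inverse-transform sampling: turn a uniform draw into an
    // exponentially distributed variate with the given rate.
    static double nextExponential(SplittableRandom rng, double rate) {
        return -Math.log(1.0 - rng.nextDouble()) / rate;
    }

    // Standard error of the mean from running sums.
    static double stdError(double sum, double sumSq, long n) {
        double mean = sum / n;
        double variance = (sumSq / n) - mean * mean;
        return Math.sqrt(variance / n);
    }

    public static void main(String[] args) {
        SplittableRandom rng = new SplittableRandom(42); // stand-in for a Mersenne Twister
        double sum = 0, sumSq = 0;
        long n = 0;
        double tolerance = 0.01; // desired standard error of the mean
        do {
            double x = nextExponential(rng, 2.0); // one "run" of the model
            sum += x;
            sumSq += x * x;
            n++;
        } while (n < 1000 || stdError(sum, sumSq, n) > tolerance);
        System.out.printf("mean=%.4f after %d runs%n", sum / n, n);
    }
}
```

In a real model the body of the loop would run your whole computation and you would accumulate whatever statistics you care about, but the run-until-tolerance structure is the same.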
There are quite a lot of resources on the web, and many books on simulation modelling, particularly in the area of derivatives pricing. You should hunt around and see what you can find.
As an aside, Python's random module has conversion functions for quite a few distributions. If you want one, you could port the appropriate conversion function to Java, then feed the Python version the same random number seed to test the correctness of the Java one.

Discrete-event simulation is a good option for problems that can be modeled as individual events that take place at specific times. Key activities are:
randomly generating times and durations based on empirical data, and
accumulating statistics as the simulation runs.
For example, you could simulate the activity in a parking garage as the entries and departures of cars and the loss of customers who can't enter because the garage is full. This can be done with two model classes, a Car and the Garage, and three infrastructure classes: an Event class (described below), a Schedule to manage events, and a Monitor to accumulate data.
Here's a brief sketch of how it could work.
Event
An Event has a time, and represents calling a specific method on an object of a specific class.
Schedule
The Schedule keeps a queue of Events, ordered by Event time. The Schedule drives the overall simulation with a simple loop. As long as there are remaining Events (or until the Event that marks the end of the simulation run):
take the earliest Event from the queue,
set the "world clock" to the time of that event, and
invoke whatever action the Event specifies.
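The Event and Schedule just described might be sketched like this in Java (class and method names are illustrative, not from any particular framework; this version simply drains the queue, but the loop condition is where you would check for an end-of-simulation Event):

```java
import java.util.PriorityQueue;

// An Event pairs a simulation time with the action to invoke.
class Event implements Comparable<Event> {
    final double time;
    final Runnable action;

    Event(double time, Runnable action) {
        this.time = time;
        this.action = action;
    }

    public int compareTo(Event other) {
        return Double.compare(time, other.time);
    }
}

// The Schedule drives the simulation: pop the earliest Event,
// advance the world clock, invoke the action.
class Schedule {
    private final PriorityQueue<Event> queue = new PriorityQueue<>();
    private double clock = 0;

    void post(Event e) { queue.add(e); }
    double now()       { return clock; }

    void run() {
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            clock = e.time;   // set the "world clock"
            e.action.run();   // invoke whatever the Event specifies
        }
    }
}
```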
Car
The Car class holds the inter-arrival and length-of-stay statistics.
When a Car arrives, it:
logs its arrival with the Monitor,
consults the world clock, determines how long before the next Car should arrive, and posts that arrival Event on the Schedule.
asks the Garage whether it is full:
if full, the Car logs its departure as a lost customer with the Monitor.
if not full, the Car:
logs its entry with the Monitor,
tells the Garage it has entered (so that the Garage can decrease its available capacity),
determines how long it will stay, and posts its departure Event with the Schedule.
When a Car departs, it:
tells the Garage (so the Garage can increase available capacity), and
logs its departure with the Monitor.
Garage
The Garage keeps track of the Cars that are currently inside, and knows about its available capacity.
Monitor
The Monitor keeps track of the statistics in which you're interested: number of customers (successfully-arriving Cars), number of lost customers (who arrived when the lot was full), average length of stay, revenue (based on rate charged for parking), etc.
A simulation run
Start the simulation by putting two Events into the schedule:
the arrival of the first Car (modeled by instantiating a Car object and calling its "arrive" event) and
the end of the simulation.
Repeat the basic simulation loop until the end-of-simulation event is encountered. At that point, ask the Garage to report on its current occupants, and ask the Monitor to report the overall statistics for the session.
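Putting the pieces together, here is a compressed, runnable sketch of the whole garage model. The Garage and Monitor are reduced to plain counters, the arrival and stay distributions are made up, and the run simply ends when no events remain instead of using an explicit end-of-simulation Event:

```java
import java.util.PriorityQueue;
import java.util.SplittableRandom;

public class GarageSim {
    // Minimal infrastructure: a time-ordered queue of (time, action) pairs.
    static final class Ev {
        final double time;
        final Runnable action;
        Ev(double time, Runnable action) { this.time = time; this.action = action; }
    }

    static final PriorityQueue<Ev> schedule =
        new PriorityQueue<>((a, b) -> Double.compare(a.time, b.time));
    static double clock = 0;
    static final SplittableRandom rng = new SplittableRandom(7);

    // Garage reduced to a capacity counter; Monitor reduced to tallies.
    static int freeSpaces = 3;
    static int served = 0, lost = 0;

    static void carArrives() {
        // Post the next arrival (exponential inter-arrival, mean 1.0),
        // stopping new arrivals after time 100.
        double next = clock - Math.log(1 - rng.nextDouble());
        if (next < 100) schedule.add(new Ev(next, GarageSim::carArrives));

        if (freeSpaces == 0) {
            lost++;                         // garage full: lost customer
        } else {
            freeSpaces--; served++;         // enter, take a space
            double departure = clock + 2 + rng.nextDouble() * 3;
            schedule.add(new Ev(departure, () -> freeSpaces++));
        }
    }

    public static void main(String[] args) {
        schedule.add(new Ev(0, GarageSim::carArrives)); // the first Car
        while (!schedule.isEmpty()) {
            Ev e = schedule.poll();
            clock = e.time;
            e.action.run();
        }
        System.out.println("served=" + served + " lost=" + lost);
    }
}
```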

The short answer is that it depends.
Unless you can make the question more specific, there is no way to give an answer.
What do you want to simulate?
For example, if you want to simulate adding two numbers, you can do it using something like:
a = b + c;
If you want to simulate the bouncing of a ball, you can do that using a little bit of math equations and the graphic libraries.
If you want to simulate a web browser, you can do that too.
So the exact answer depends on what simulation you want to do.

Come up with a problem first.
There's no such things as a generic "simulation". There are lots of techniques out there.
If you're just a gamer who wants to have pseudo-physics, maybe something like this would be what you had in mind.

Have a look at Repast Simphony: http://repast.sourceforge.net/repast_simphony.html
"Repast Simphony 2.0 Beta, released on 12/3/2010, is a tightly integrated, richly interactive, cross platform Java-based modeling system that runs under Microsoft Windows, Apple Mac OS X, and Linux. It supports the development of extremely flexible models of interacting agents for use on workstations and small computing clusters.
Repast Simphony models can be developed in several different forms including the ReLogo dialect of Logo, point-and-click flowcharts, Groovy, or Java, all of which can be fluidly interleaved. NetLogo models can also be imported.
Repast Simphony has been successfully used in many application domains including social science, consumer products, supply chains, possible future hydrogen infrastructures, and ancient pedestrian traffic, to name a few."

This is an old question, but for Simulation in Java I just installed and tested JavaSim by
Mark Little, University of Newcastle upon Tyne. As far as I can tell, it works very well if you have a model you can convert into a discrete event simulation. See Mark's site http://markclittle.blogspot.com.au/2008/03/csimjavasim.html. I also attempted to use Desmo-J, which is very extensive and has a 2-D graphical mode, but could not get it going under JDK 1.6 on a Mac.

Related

Leader Boards in Java

I am looking to create a leader board for work to encourage a bit of competitiveness and a general morale boost for the teams. I have created two pages: on the first you input data, and it does a sum in the background for each department and then orders them.
This information should appear on the second page, the leader board itself.
My question is: is it possible to have a delayed "thinking time" for the application, where the names and numbers of each department cycle quickly and load from last place to first? Kind of like a one-armed bandit machine in a casino.
I am using Java SE 8 in NetBeans and I'm unsure if this is possible, or if there is an alternative.

Beat Matching Algorithm

I've recently begun trying to create a mobile app (iOS/Android) that will automatically beat match (http://en.wikipedia.org/wiki/Beatmatching) two songs.
I know that this exists out there, and there have been others who have had some success, but I'm running into issues related to the accuracy of the players.
Specifically, I run into "sync" issues where the "beats" don't line up. The various methods used to date are:
Calculate the BPM in advance, identify a "beat" (using something like sonicapi.com), line the tracks up appropriately, and begin mixing one in with its playback rate adjusted (tempo adjustment)
Utilizing a bunch of meta data to trigger specific starts and stops
What does NOT work:
Leveraging echonest's API (it beat matches on the server, we want to do it on the client)
Something like pydub (does not do it in realtime)
Who uses this algorithm today:
iwebdj
Traktor
Does anyone have any suggestions on how to solve this problem? I've seen lots of people do it, but doing it in real time on a mobile device seems to be an issue.
There are lots of methods for solving this problem, some of which work better than others. Matthew Davies has published several papers on the matter, among many others. Glancing at this article seems to break down some of the steps necessary for doing this. I built a beat tracker in Matlab (unfortunately...) with a fellow student, and our goal was to create an outro/intro between two songs so that the tempo was seamless between them. We wanted to do this for songs that varied in BPM by a small amount (±7 or so BPM between the two). Our method went sort of like this:
Find two songs in our database that had an overlapping 'key center'. So let's say two songs, both in Am.
Find this particular overlap of key centers between the two; say 30 seconds into song 1 and 60 seconds into song 2.
Now create a beat map, using an onset-detection algorithm with peak picking; Also, this was helpful for us.
Pick the first 'beat' for each track, and overlap the two tracks at that point. Now, since they are slightly different BPM from each other, the beats won't really line up with each other.
From this, we created a sort of map that gave us the sample offsets between beats of song A and beats of song B. From this, we wanted to be able to time-stretch the fade-in region of song B so that each one of its onsets (beats in this case) lined up at the correct sample index as the onsets from song A, over ITS fade-out region. So for example, if onset 2 from song B was shown as 5,000 samples ahead of onset 2 from song A, we simply stretched that 5,000 sample region so that onset 2 matched exactly between both songs.
This seems like it would sound weird, but it actually sounded pretty good. Although this was done entirely offline in Matlab, I am also looking for a way to do this in real-time in a mobile app. Not entirely sure about libraries you can use for this in Android world, but I imagine that it would be most efficient in C++.
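The offset map from that last step is simple to compute once you have the two onset lists. A Java sketch (onset times in samples; names and numbers are illustrative only):

```java
public class BeatOffsets {
    // For each beat index present in both tracks, the number of samples
    // track B's onset leads (+) or lags (-) track A's onset. Each entry
    // tells you how far to time-stretch that inter-beat region of B.
    static long[] offsetMap(long[] onsetsA, long[] onsetsB) {
        int n = Math.min(onsetsA.length, onsetsB.length);
        long[] offsets = new long[n];
        for (int i = 0; i < n; i++) {
            offsets[i] = onsetsB[i] - onsetsA[i];
        }
        return offsets;
    }

    public static void main(String[] args) {
        long[] a = {0, 22050, 44100};   // 120 BPM at 44.1 kHz
        long[] b = {0, 23000, 46000};   // a slightly slower track
        System.out.println(java.util.Arrays.toString(offsetMap(a, b)));
    }
}
```

The hard parts, of course, are the onset detection that produces those lists and the time-stretching itself; this only shows the bookkeeping in between.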
A couple of libraries I have come across would be good for prototyping something, or at least studying the source code to get a better understanding of how you could do this in a mobile app:
Essentia (great community, open-source)
Aubio (also seems to be maintained pretty well, open-source)
Additional things to read up on for doing this kind of stuff in iOS land:
vDSP Programming guide
This article may also help
I came across this project that is doing some beat detection. Although it seems pretty out-dated unfortunately, it may offer some additional insights.
Unfortunately it isn't as simple as just 'pressing play' at the same time to align beats, unless you are assuming very specific aspects about them (exact tempos, etc.).
If you reallllly have some time on your hands, you should check out Tristan Jehan's (founder of Echonest) thesis; it is jam packed with algorithms and methods for beat detection, etc.

Calculating the average of multiple time-series with random sampling interval

I've tried searching for answers, but couldn't find one that exactly match my problem.
I'm writing a stochastic simulator of biological systems, where the outcome is a "scatter-plot" time series with concentration levels at random points in time. I would now like to take the average time series over multiple simulation runs, and I am in doubt about how to proceed, since up to 500 simulation runs, each with several thousand measurements, can be expected.
Naturally, I could "bucket" the intervals, probably losing some precision, or try to interpolate the missing measurements. But what is the preferred method in my case?
This has to be implemented in Java, and I would prefer a citation to a paper that explains the method.
Thanks!
If you want a book, Simulation Modeling & Analysis by Law or Discrete Event System Simulation by Banks, Carson, Nelson & Nicol both devote several chapters to time series output analysis. For "breaking news", there are several analysis tracks that have papers on recent developments in the field in the Paper Archives section at WinterSim.org. For a flow-chart of how to decide what type of analysis may be appropriate, see Figure 4 on p.60 of this tutorial paper from WinterSim 2007.
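As a concrete starting point before settling on a method from those references, the interpolation option from the question can be sketched in Java like this: resample every run onto a shared uniform time grid with linear interpolation, then average pointwise. (This is an illustration, not a method from the cited books; it assumes each run's samples are sorted by time.)

```java
public class SeriesAverage {
    // Linear interpolation of one run's value at time t
    // (clamped to the endpoints outside the sampled range).
    static double valueAt(double[] times, double[] values, double t) {
        if (t <= times[0]) return values[0];
        int n = times.length;
        if (t >= times[n - 1]) return values[n - 1];
        int i = 1;
        while (times[i] < t) i++;
        double w = (t - times[i - 1]) / (times[i] - times[i - 1]);
        return values[i - 1] + w * (values[i] - values[i - 1]);
    }

    // Average several runs on a uniform grid of `points` samples in [t0, t1].
    static double[] average(double[][] times, double[][] values,
                            double t0, double t1, int points) {
        double[] avg = new double[points];
        for (int k = 0; k < points; k++) {
            double t = t0 + (t1 - t0) * k / (points - 1);
            double sum = 0;
            for (int r = 0; r < times.length; r++) {
                sum += valueAt(times[r], values[r], t);
            }
            avg[k] = sum / times.length;
        }
        return avg;
    }
}
```

At 500 runs of a few thousand points each this is cheap; the statistical question of what the interpolation does to your variance estimates is the part the textbooks above address.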

Calculating Mutual Information For Selecting a Training Set in Java

Scenario
I am attempting to implement supervised learning over a data set within a Java GUI application. The user will be given a list of items or 'reports' to inspect and will label them based on a set of available labels. Once the supervised learning is complete, the labelled instances will then be given to a learning algorithm. This will attempt to order the rest of the items on how likely it is the user will want to view them.
To get the most from the user's time I want to pre-select the reports that will provide the most information about the entire collection of reports, and have the user label them. As I understand it, to calculate this, it would be necessary to find the sum of all the mutual information values for each report, and order them by that value. The labelled reports from supervised learning will then be used to form a Bayesian network to find the probability of a binary value for each remaining report.
Example
Here, an artificial example may help to explain, and may clear up confusion when I've undoubtedly used the wrong terminology :-) Consider an example where the application displays news stories to the user. It chooses which news stories to display first based on the user's preference shown. Features of a news story which have a correlation are country of origin, category or date. So if a user labels a single news story as interesting when it came from Scotland, it tells the machine learner that there's an increased chance other news stories from Scotland will be interesting to the user. Similar for a category such as Sport, or a date such as December 12th 2004.
This preference could be calculated by choosing any order for all news stories (e.g. by category, by date) or randomly ordering them, then calculating preference as the user goes along. What I would like to do is to get a kind of "head start" on that ordering by having the user to look at a small number of specific news stories and say if they're interested in them (the supervised learning part). To choose which stories to show the user, I have to consider the entire collection of stories. This is where Mutual Information comes in. For each story I want to know how much it can tell me about all the other stories when it is classified by the user. For example, if there is a large number of stories originating from Scotland, I want to get the user to classify (at least) one of them. Similar for other correlating features such as category or date. The goal is to find examples of reports which, when classified, provide the most information about the other reports.
Problem
Because my math is a bit rusty and I'm new to machine learning, I'm having some trouble converting the definition of mutual information to an implementation in Java. Wikipedia gives the equation for mutual information as I(X;Y) = Σ_x Σ_y p(x,y) log( p(x,y) / (p(x) p(y)) ).
However, I'm unsure if this can actually be used when nothing has been classified and the learning algorithm has not calculated anything yet.
As in my example, say I had a large number of new, unlabelled instances of this class:
public class NewsStory {
    private String countryOfOrigin;
    private String category;
    private Date date;
    // constructor, etc.
}
In my specific scenario, the correlation between fields/features is based on an exact match so, for instance, one day and 10 years difference in date are equivalent in their inequality.
The factors for correlation (e.g. is date more correlating than category?) are not necessarily equal, but they can be predefined and constant. Does this mean that the result of the function p(x,y) is the predefined value, or am I mixing up terms?
The Question (finally)
How can I go about implementing the mutual information calculation given this (fake) example of news stories? Libraries, javadoc, code examples etc. are all welcome information. Also, if this approach is fundamentally flawed, explaining why that is the case would be just as valuable an answer.
PS. I am aware of libraries such as Weka and Apache Mahout, so just mentioning them is not really useful for me. I'm still searching through documentation and examples for both these libraries looking for stuff on Mutual Information specifically. What would really help me is pointing to resources (code examples, javadoc) where these libraries help with mutual information.
I am guessing that your problem is something like...
"Given a list of unlabeled examples, sort the list by how much the predictive accuracy of the model would improve if the user labelled the example and added it to the training set."
If this is the case, I don't think mutual information is the right thing to use because you can't calculate MI between two instances. The definition of MI is in terms of random variables and an individual instance isn't a random variable, it's just a value.
The features and the class label can be thought of as random variables. That is, they have a distribution of values over the whole data set. You can calculate the mutual information between two features, to see how 'redundant' one feature is given the other one, or between a feature and the class label, to get an idea of how much that feature might help prediction. This is how people usually use mutual information in a supervised learning problem.
I think ferdystschenko's suggestion that you look at active learning methods is a good one.
In response to Grundlefleck's comment, I'll go a bit deeper into terminology by using his idea of a Java object analogy...
Collectively, we have used the terms 'instance', 'thing', 'report' and 'example' to refer to the object being classified. Let's think of these things as instances of a Java class (I've left out the boilerplate constructor):
class Example {
    String f1;
    String f2;
}

Example e1 = new Example("foo", "bar");
Example e2 = new Example("foo", "baz");
The usual terminology in machine learning is that e1 is an example, that all examples have two features f1 and f2 and that for e1, f1 takes the value 'foo' and f2 takes the value 'bar'. A collection of examples is called a data set.
Take all the values of f1 for all examples in the data set; this list of strings can also be thought of as a distribution. We can treat the feature as a random variable, with each value in the list a sample taken from that random variable. So we can, for example, calculate the MI between f1 and f2. The pseudocode would be something like:
N = number of examples
mi = 0
for each value x taken by f1:
    for each value y taken by f2:
        p_xy = (number of examples where f1=x and f2=y) / N
        p_x  = (number of examples where f1=x) / N
        p_y  = (number of examples where f2=y) / N
        if p_xy > 0:
            mi += p_xy * log(p_xy / (p_x * p_y))
However you can't calculate MI between e1 and e2, it's just not defined that way.
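That pseudocode translates fairly directly to Java once the counts are normalized into probabilities. A sketch (feature values as Strings in two parallel arrays; natural log, so the result is in nats):

```java
import java.util.HashMap;
import java.util.Map;

public class MutualInfo {
    // MI between two features, estimated from counts over parallel arrays.
    static double mi(String[] f1, String[] f2) {
        int n = f1.length;
        Map<String, Integer> cx = new HashMap<>();
        Map<String, Integer> cy = new HashMap<>();
        Map<String, Integer> cxy = new HashMap<>();
        for (int i = 0; i < n; i++) {
            cx.merge(f1[i], 1, Integer::sum);
            cy.merge(f2[i], 1, Integer::sum);
            // NUL is an arbitrary separator assumed absent from the values.
            cxy.merge(f1[i] + "\u0000" + f2[i], 1, Integer::sum);
        }
        double mi = 0;
        for (Map.Entry<String, Integer> e : cxy.entrySet()) {
            String[] xy = e.getKey().split("\u0000");
            double pxy = (double) e.getValue() / n;
            double px = (double) cx.get(xy[0]) / n;
            double py = (double) cy.get(xy[1]) / n;
            mi += pxy * Math.log(pxy / (px * py));
        }
        return mi;
    }
}
```

Identical features give MI equal to the entropy of the feature; independent features give MI near zero.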
I know information gain only in connection with decision trees (DTs), where in the construction of a DT, the split to make on each node is the one which maximizes information gain. DTs are implemented in Weka, so you could probably use that directly, although I don't know if Weka lets you calculate information gain for any particular split underneath a DT node.
Apart from that, if I understand you correctly, I think what you're trying to do is generally referred to as active learning. There, you first need some initial labeled training data which is fed to your machine learning algorithm. Then you have your classifier label a set of unlabeled instances and return confidence values for each of them. Instances with the lowest confidence values are usually the ones which are most informative, so you show these to a human annotator and have him/her label these manually, add them to your training set, retrain your classifier, and do the whole thing over and over again until your classifier has a high enough accuracy or until some other stopping criterion is met. So if this works for you, you could in principle use any ML-algorithm implemented in Weka or any other ML-framework as long as the algorithm you choose is able to return confidence values (in case of Bayesian approaches this would be just probabilities).
With your edited question I think I'm coming to understand what you're aiming at. If what you want is to calculate MI, then StompChicken's answer and pseudocode couldn't be much clearer in my view. I also think that MI is not what you want and that you're trying to re-invent the wheel.
Let's recapitulate: you would like to train a classifier which can be updated by the user. This is a classic case for active learning. But for that, you need an initial classifier (you could basically just give the user random data to label but I take it this is not an option) and in order to train your initial classifier, you need at least some small amount of labeled training data for supervised learning. However, all you have are unlabeled data. What can you do with these?
Well, you could cluster them into groups of related instances, using one of the standard clustering algorithms provided by Weka or some specific clustering tool like Cluto. If you now take the x most central instances of each cluster (x depending on the number of clusters and the patience of the user), and have the user label it as interesting or not interesting, you can adopt this label for the other instances of that cluster as well (or at least for the central ones). Voila, now you have training data which you can use to train your initial classifier and kick off the active learning process by updating the classifier each time the user marks a new instance as interesting or not. I think what you're trying to achieve by calculating MI is essentially similar but may be just the wrong carriage for your charge.
Not knowing the details of your scenario, I should think that you may not even need any labeled data at all, except if you're interested in the labels themselves. Just cluster your data once, let the user pick an item interesting to him/her from the central members of all clusters and suggest other items from the selected clusters as perhaps being interesting as well. Also suggest some random instances from other clusters here and there, so that if the user selects one of these, you may assume that the corresponding cluster might generally be interesting, too. If there is a contradiction and a user likes some members of a cluster but not some others of the same one, then you try to re-cluster the data into finer-grained groups which discriminate the good from the bad ones. The re-training step could even be avoided by using hierarchical clustering from the start and traveling down the cluster hierarchy at every contradiction user input causes.

Best way to implement game playback?

I'm creating a grid based game in Java and I want to implement game recording and playback. I'm not sure how to do this, although I've considered 2 ideas:
Several times every second, I'd record the entire game state. To play it back, I write a renderer to read the states and try to create a visual representation. With this, however, I'd likely have a large save file, and any playback attempts would likely have noticeable lag.
I could also write every key press and mouse click into the save file. This would give me a smaller file, and could play back with less lag. However, the slightest error at the start of the game (For example, shooting 1 millisecond later) would result in a vastly different game state several minutes into the game.
What, then, is the best way to implement game playback?
Edit- I'm not sure exactly how deterministic my game is, so I'm not sure the entire game can be pieced together exactly by recording only keystrokes and mouse clicks.
A good playback mechanism is not something that can simply be added to a game without major difficulties. The best approach would be to design the game infrastructure with it in mind. The command pattern can be used to achieve such a game infrastructure.
For example:
public interface Command {
    void execute();
}

public class MoveRightCommand implements Command {
    private Grid theGrid;
    private Player thePlayer;

    public MoveRightCommand(Player player, Grid grid) {
        this.theGrid = grid;
        this.thePlayer = player;
    }

    public void execute() {
        thePlayer.modifyPosition(0, 1, 0, 0);
    }
}
And then the command can be pushed in an execution queue both when the user presses a keyboard button, moves the mouse or without a trigger with the playback mechanism. The command object can have a time-stamp value (relative to the beginning of the playback) for precise playback...
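A timestamped recording of such commands might be replayed like this (a sketch, not tied to any engine; a minimal Command interface is redeclared here to keep it self-contained, and times are milliseconds since the recording started):

```java
import java.util.ArrayList;
import java.util.List;

public class Replay {
    interface Command { void execute(); }

    // A command paired with when it was issued during the original game.
    static final class Timestamped {
        final long atMillis;
        final Command command;
        Timestamped(long atMillis, Command command) {
            this.atMillis = atMillis;
            this.command = command;
        }
    }

    private final List<Timestamped> log = new ArrayList<>();

    // During play: record the command as you execute it.
    void record(long atMillis, Command c) {
        log.add(new Timestamped(atMillis, c));
        c.execute();
    }

    // During playback: re-execute each command at (a scaled version of)
    // its original offset. Sleeping is fine for a simple game loop;
    // a real engine would check timestamps in its update tick instead.
    void playback(double speed) throws InterruptedException {
        long start = System.currentTimeMillis();
        for (Timestamped t : log) {
            long due = start + (long) (t.atMillis / speed);
            long wait = due - System.currentTimeMillis();
            if (wait > 0) Thread.sleep(wait);
            t.command.execute();
        }
    }
}
```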
Shawn Hargreaves had a recent post on his blog about how they implemented replay in MotoGP. Goes over several different approaches and their pros and cons.
http://blogs.msdn.com/shawnhar/archive/2009/03/20/motogp-replays.aspx
Assuming that your game is deterministic, it might be sufficient if you recorded the inputs of the users (option 2). However, you would need to make sure that you are recognizing the correct and consistent times for these events, such as when it was recognized by the server. I'm not sure how you handle events in the grid.
My worry is that if you don't have a mechanism that can uniformly reference timed events, there might be a problem with the way your code handles distributed users.
Consider a game like Halo 3 on the XBOX 360 for example - each client records his view of the game, including server-based corrections.
Why not record several times a second and then compress your output, or perhaps do this:
recordInitialState();
...
runs 30 times a second:
recordChangeInState(previousState, currentState);
...
If you only record the change in state with a timestamp(and each change is small, and if there is no change, then record nothing), you should end up with reasonable file sizes.
There is no need to save everything in the scene for every frame. Save changes incrementally and use some good interpolation techniques. I would not really use a command pattern based approach, but rather make checks at a fixed rate for every game object and see if it has changed any attribute. If there is a change that change is recorded in some good encoding and the replay won't even become that big.
How you approach this will depend greatly on the language you are using for your game, but in general terms there are many approaches, depending on whether you want to use a lot of storage or can tolerate some delay. It would be helpful if you could say what sacrifices you are willing to make.
But it would seem the best approach may be to just save the input from the user, as was mentioned, and at the same time store the positions of all the actors/sprites in the game, which is as simple as saving direction, velocity and tile x,y; or, if everything is deterministic, ignore the actors/sprites, since you can recompute their information.
How non-deterministic your game is would also be useful to give a better suggestion.
If there is a great deal of dynamic motion, such as a crash derby, then you may want to save information each frame, as you should be updating the players/actors at a certain framerate.
I would simply say that the best way to record a replay of a game depends entirely on the nature of the game. Being grid based isn't the issue; the issue is how predictable behaviour is following a state change, how often there are new inputs to the system, whether there is random data being injected at any point, etc. You can store an entire chess game just by recording each move in turn, but that wouldn't work for a first person shooter where there are no clear turns. You could store a first person shooter by noting the exact time of each input, but that won't work for an RPG where the result of an input might be modified by the result of a random dice roll. Even the seemingly foolproof idea of taking a snapshot as often as possible isn't good enough if important information appears instantaneously and doesn't persist in any capturable form.
Interestingly this is very similar to the problem you get with networking. How does one computer ensure that another computer is made aware of the game state, without having to send that entire game state at an impractically high frequency? The typical approach ends up being a bespoke mixture of event notifications and state updates, which is probably what you'll need here.
I did this once by borrowing an idea from video compression: keyframes and intermediate frames. Basically, every few seconds you save the complete state of the world. Then, once per game update, you save all the changes to the world state that have happened since the last game update. The details (how often do you save keyframes? What exactly counts as a 'change to the world state'?) will depend on what sort of game information you need to preserve.
In our case, the world consisted of many, many game objects, most of which were holding still at any given time, so this approach saved us a lot of time and memory in recording the positions of objects that weren't moving. In yours the tradeoffs might be different.
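A minimal version of the keyframe-plus-delta idea, with the world state reduced to a map of named integer values (illustrative only; object deletion is not handled, and a real game would serialize richer state):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StateRecorder {
    // Either a full snapshot (keyframe) or just the entries that changed.
    static final class Frame {
        final boolean keyframe;
        final Map<String, Integer> data;
        Frame(boolean keyframe, Map<String, Integer> data) {
            this.keyframe = keyframe;
            this.data = data;
        }
    }

    private final List<Frame> frames = new ArrayList<>();
    private Map<String, Integer> last = new HashMap<>();
    private int sinceKeyframe = 0;
    private static final int KEYFRAME_EVERY = 60; // e.g. one per second at 60 fps

    // Called once per game update with the current world state.
    void capture(Map<String, Integer> state) {
        if (frames.isEmpty() || ++sinceKeyframe >= KEYFRAME_EVERY) {
            frames.add(new Frame(true, new HashMap<>(state)));
            sinceKeyframe = 0;
        } else {
            Map<String, Integer> delta = new HashMap<>();
            for (Map.Entry<String, Integer> e : state.entrySet()) {
                if (!e.getValue().equals(last.get(e.getKey()))) {
                    delta.put(e.getKey(), e.getValue());
                }
            }
            frames.add(new Frame(false, delta)); // empty if nothing moved
        }
        last = new HashMap<>(state);
    }

    // Rebuild the state at frame i by replaying from the last keyframe.
    Map<String, Integer> stateAt(int i) {
        int k = i;
        while (!frames.get(k).keyframe) k--;
        Map<String, Integer> state = new HashMap<>(frames.get(k).data);
        for (int j = k + 1; j <= i; j++) state.putAll(frames.get(j).data);
        return state;
    }
}
```

Objects holding still contribute nothing to the deltas, which is exactly the saving described above.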
