I am trying to solve a problem:
A point is at an initial position X. It can be shifted either left or right. If it is moved either left or right with equal probability 10 times, what is the probability of it ending up in its initial position X?
I used the following Java program:
import java.util.Random;

public class ProbMain {
    int left;
    int right;
    int error;
    double middle;

    public ProbMain() {
        left = right = error = 0;
        middle = 0.0;
    }

    void pushLeft() {
        ++left;
    }

    void pushRight() {
        ++right;
    }

    void push() {
        int whichWay;
        Random rand = new Random();
        whichWay = rand.nextInt(2);
        if (whichWay == 0)
            pushLeft();
        else if (whichWay == 1)
            pushRight();
        else
            ++error;
    }

    public static void main(String[] args) {
        ProbMain obj = new ProbMain();
        for (int b = 0; b < 10000; b++) {
            for (int a = 0; a < 10000; a++) {
                for (int i = 0; i < 10; i++)
                    obj.push();
                if (obj.left == obj.right)
                    ++obj.middle;
                obj.left = obj.right = 0;
            }
        }
        System.out.println("Error: " + obj.error);
        System.out.println("Probability of middle: " + obj.middle / (10000 * 10000));
    }
}
The weird thing is that when I run this in Eclipse I get a result around 0.05, but when I run it from the command line I get a result around 0.24. Why is that, and which one is correct?
You are creating a new Random object each time you want to retrieve a random number (in the push() method). This can lead to very poor entropy and produce strange results when the program is run with different timings: running from Eclipse is usually much slower due to the attached debugger, which yields better random results when the RNG is initialized with a time value as its seed.
You should change your program to use only ONE Random instance, for example by declaring a new Random member variable and initializing it once in your ProbMain constructor.
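For reference, the analytic probability of ending up back at the start after 10 equally likely left/right moves is C(10,5)/2^10 = 252/1024 ≈ 0.246, so of the two numbers you observed, the one near 0.24 is the value a correct simulation should approach. A minimal sketch of the suggested change (only the parts that change are shown; the error counter is dropped here because nextInt(2) can only return 0 or 1):

import java.util.Random;

public class ProbMain {
    // One Random instance for the whole run, seeded once when the object is created.
    private final Random rand = new Random();

    int left;
    int right;
    double middle;

    void pushLeft() {
        ++left;
    }

    void pushRight() {
        ++right;
    }

    void push() {
        // Reuse the same generator instead of constructing a new one per call.
        if (rand.nextInt(2) == 0) // nextInt(2) only ever returns 0 or 1
            pushLeft();
        else
            pushRight();
    }

    // main() and the counting loops can stay exactly as in the question.
}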
I'm a first-year computer science student and I am currently dabbling in some algorithmic competitions. The code below has a flaw that I'm not sure how to fix.
Here is the problem statement:
http://www.usaco.org/index.php?page=viewproblem2&cpid=811
In the statement, I missed where it said that Farmer John can only switch boots on tiles that both pairs of boots can stand on. I tried adding constraints in different places, but none seemed to address the problem fully, and I don't really see a way to do it without butchering the code.
Basically, the problem is that John keeps switching boots on tiles that the new boots can't stand on, and I can't seem to fix it.
Here is my code (sorry for the one-letter variables):
import java.io.*;
import java.util.*;

public class snowboots {
    static int n, k;
    static int[] field, a, b; // a, b --> strength, distance
    static int pos;

    public static void main(String[] args) throws IOException {
        BufferedReader br = new BufferedReader(new FileReader("snowboots.in"));
        PrintWriter pw = new PrintWriter(new BufferedWriter(new FileWriter("snowboots.out")));
        StringTokenizer st = new StringTokenizer(br.readLine());
        n = Integer.parseInt(st.nextToken());
        k = Integer.parseInt(st.nextToken());
        st = new StringTokenizer(br.readLine());
        field = new int[n];
        a = new int[k];
        b = new int[k];
        for (int i = 0; i < n; i++)
            field[i] = Integer.parseInt(st.nextToken());
        for (int i = 0; i < k; i++) {
            st = new StringTokenizer(br.readLine());
            a[i] = Integer.parseInt(st.nextToken());
            b[i] = Integer.parseInt(st.nextToken());
        }
        pw.println(solve());
        pw.close();
    }

    static int solve() {
        pos = 0;
        int i = 0; // which boot are we on?
        while (pos < n - 1) {
            while (move(i)); // move with boot i as far as possible
            i++; // use the next boot
        }
        i--;
        return i;
    }

    static boolean move(int c) {
        for (int i = pos + b[c]; i > pos; i--) {
            if (i < n && field[i] <= a[c]) { // snow has to be less than boot strength
                pos = i;
                return true;
            }
        }
        return false;
    }
}
I tried adding a constraint in the "move" method, and one when updating i, but they are both too strict and activate at unwanted times.
Is it salvageable?
Yes, it's possible to salvage your solution, by adding an extra for-loop.
What you need to do is, if you find that your previous pair of boots can get you all the way to a tile that's too deep in snow for your next pair, then you need to try "backtracking" to the latest tile that's not too deep. This ends up giving a solution in worst-case O(N·B) time and O(1) extra space.
It may not be obvious why it's OK to backtrack to that tile — after all, just because you can reach a given tile, that doesn't necessarily mean that you were able to reach all the tiles before it — so let me explain a bit why it is OK.
Let maxReachableTileNum be the number (between 1 and N) of the last tile that you were able to reach with your previous boots, and let lastTileNumThatsNotTooDeep be the number (between 1 and N) of the last tile on or before maxReachableTileNum that's not too deeply snow-covered for your next pair. (We know that there is such a tile, because tile #1 has no snow at all, so if nothing else we know that we can backtrack to the very beginning.) Now, since we were able to get to maxReachableTileNum, some previous boot must have either stepped on lastTileNumThatsNotTooDeep (in which case, no problem, it's reachable) or skipped over it to some later tile (on or before maxReachableTileNum). But that later tile must be deeper than lastTileNumThatsNotTooDeep (because that later tile's depth is greater than s_currentBootNum, which is at least as great as the depth of lastTileNumThatsNotTooDeep), which means that the boot that skipped over lastTileNumThatsNotTooDeep certainly could have stepped on lastTileNumThatsNotTooDeep instead: it would have meant taking a shorter step (OK) onto a less-deeply-covered tile (OK) than what it actually did. So, either way, we know that lastTileNumThatsNotTooDeep was reachable. So it's safe for us to try backtracking to lastTileNumThatsNotTooDeep. (Note: the code below uses the name reachableTileNum instead of lastTileNumThatsNotTooDeep, because it continues to use the reachableTileNum variable for searching forward to find reachable tiles.)
However, we still have to hold onto the previous maxReachableTileNum: backtracking might turn out not to be helpful (because it may not let us make any further forward progress than we already have), in which case we'll just discard these boots, and move on to the next pair, with maxReachableTileNum at its previous value.
So, overall, we have this:
public static int solve(
    final int[] tileSnowDepths,         // tileSnowDepths[0] is f_1
    final int[] bootAllowedDepths,      // bootAllowedDepths[0] is s_1
    final int[] bootAllowedTilesPerStep // bootAllowedTilesPerStep[0] is d_1
) {
    final int numTiles = tileSnowDepths.length;
    final int numBoots = bootAllowedDepths.length;
    assert numBoots == bootAllowedTilesPerStep.length;

    int maxReachableTileNum = 1; // can reach tile #1 even without boots
    for (int bootNum = 1; bootNum <= numBoots; ++bootNum) {
        final int allowedDepth = bootAllowedDepths[bootNum - 1];
        final int allowedTilesPerStep = bootAllowedTilesPerStep[bootNum - 1];

        // Find the starting-point for this boot -- ideally the last tile
        // reachable so far, but may need to "backtrack" if that tile is too
        // deep; see explanation above of why it's safe to assume that we
        // can backtrack to the latest not-too-deep tile:
        int reachableTileNum = maxReachableTileNum;
        while (tileSnowDepths[reachableTileNum - 1] > allowedDepth) {
            --reachableTileNum;
        }

        // Now see how far we can go, updating both maxReachableTileNum and
        // reachableTileNum when we successfully reach new tiles:
        for (int tileNumToTry = maxReachableTileNum + 1;
             tileNumToTry <= numTiles
                 && tileNumToTry <= reachableTileNum + allowedTilesPerStep;
             ++tileNumToTry
        ) {
            if (tileSnowDepths[tileNumToTry - 1] <= allowedDepth) {
                maxReachableTileNum = reachableTileNum = tileNumToTry;
            }
        }

        // If we've made it to the last tile, then yay, we're done:
        if (maxReachableTileNum == numTiles) {
            return bootNum - 1; // had to discard this many boots to get here
        }
    }

    throw new IllegalArgumentException("Couldn't reach last tile with any boot");
}
(I tested this on USACO's example data, and it returned 2, as expected.)
This can potentially be optimized further, e.g. with logic to skip pairs of boots that clearly aren't helpful (because they're neither stronger nor more agile than the previous successful pair), or with an extra data structure to keep track of the positions of latest minima (to optimize the backtracking process), or with logic to avoid backtracking further than is conceivably useful; but given that N·B ≤ 250² = 62,500, I don't think any such optimizations are warranted.
Edited to add (2019-02-23): I've thought about this further, and it occurs to me that it's actually possible to write a solution in worst-case O(N + B log N) time (which is asymptotically better than O(N·B)) and O(N) extra space. But it's much more complicated; it involves three extra data-structures (one to keep track of the positions of latest minima, to allow backtracking in O(log N) time; one to keep track of the positions of future minima, to allow checking in O(log N) time if the backtracking is actually helpful (and if so to move forward to the relevant minimum); and one to maintain the necessary forward-looking information in order to let the second one be maintained in amortized O(1) time). It's also complicated to explain why that solution is guaranteed to be within O(N + B log N) time (because it involves a lot of amortized analysis, and making a minor change that might seem like an optimization (e.g., replacing a linear search with a binary search) can break the analysis and actually increase the worst-case time complexity). Since N and B are both known to be at most 250, I don't think all the complication is worth it.
You can solve this problem by Dynamic Programming. You can see the concept in this link (just read the "Computer programming" part).
It has the following two steps:
First solve the problem recursively.
Memoize the states.
#include<bits/stdc++.h>
using namespace std;

#define ll long long
#define mx 100005
#define mod 1000000007

int n, b;
int f[333], s[333], d[333];
int dp[251][251];

int rec(int snowPos, int bootPos)
{
    if(snowPos == n-1){
        return 0;
    }
    int &ret = dp[snowPos][bootPos];
    if(ret != -1) return ret;
    ret = 1000000007;
    // Option 1: switch to a later pair of boots that can stand on this tile.
    for(int i = bootPos+1; i<b; i++)
    {
        if(s[i] >= f[snowPos]){
            ret = min(ret, i - bootPos + rec(snowPos, i));
        }
    }
    // Option 2: step forward with the current boots.
    for(int i = 1; i<=d[bootPos] && snowPos+i < n; i++){
        if(f[snowPos + i] <= s[bootPos]){
            ret = min(ret, rec(snowPos+i, bootPos));
        }
    }
    return ret;
}

int main()
{
    freopen("snowboots.in", "r", stdin);
    freopen("snowboots.out", "w", stdout);
    scanf("%d %d", &n, &b);
    for(int i = 0; i<n; i++)
        scanf("%d", &f[i]);
    for(int i = 0; i<b; i++){
        scanf("%d %d", &s[i], &d[i]);
    }
    memset(dp, -1, sizeof dp);
    printf("%d\n", rec(0, 0));
    return 0;
}
This is my solution to this problem (in C++).
This is just a recursion. As the problem says,
you can change boots, or
you can make a jump with the current boots.
The memoization part is done by the 2-dimensional array dp[][].
One way is to solve it using BFS. You may refer to the code below for details. Hope this helps.
import java.util.*;
import java.io.*;

public class SnowBoots {

    public static int n;
    public static int[] deep;
    public static int nBoots;
    public static Boot[] boots;

    public static void main(String[] args) throws Exception {
        // Read in all of the input.
        Scanner stdin = new Scanner(new File("snowboots.in"));
        n = stdin.nextInt();
        nBoots = stdin.nextInt();
        deep = new int[n];
        for (int i = 0; i < n; ++i) {
            deep[i] = stdin.nextInt();
        }
        boots = new Boot[nBoots];
        for (int i = 0; i < nBoots; ++i) {
            int depth = stdin.nextInt();
            int maxStep = stdin.nextInt();
            boots[i] = new Boot(depth, maxStep);
        }

        PrintWriter out = new PrintWriter(new FileWriter("snowboots.out"));
        out.println(bfs());
        out.close();
        stdin.close();
    }

    // Breadth First Search Algorithm [https://en.wikipedia.org/wiki/Breadth-first_search]
    // over states (tile, boot), encoded as tile * nBoots + boot.
    public static int bfs() {
        // used[tile][boot] marks the states we have already visited.
        boolean[][] used = new boolean[n][nBoots];
        Arrays.fill(used[0], true);

        // Put each starting state (tile 0 with any boot) into the queue.
        LinkedList<Integer> q = new LinkedList<Integer>();
        for (int i = 0; i < nBoots; ++i) {
            q.offer(i);
        }

        // Usual BFS.
        while (q.size() > 0) {
            int cur = q.poll();
            int step = cur / nBoots;
            int bNum = cur % nBoots;

            // Try stepping forward with this boot...
            for (int i = 1; ((step + i) < n) && (i <= boots[bNum].maxStep); ++i) {
                if ((deep[step + i] <= boots[bNum].depth) && !used[step + i][bNum]) {
                    q.offer(nBoots * (step + i) + bNum);
                    used[step + i][bNum] = true;
                }
            }

            // Try switching to a later boot on the current tile.
            for (int i = bNum + 1; i < nBoots; ++i) {
                if ((boots[i].depth >= deep[step]) && !used[step][i]) {
                    q.offer(nBoots * step + i);
                    used[step][i] = true;
                }
            }
        }

        // Find the earliest boot that got us to the last tile.
        for (int i = 0; i < nBoots; ++i) {
            if (used[n - 1][i]) {
                return i;
            }
        }

        // Should never get here.
        return -1;
    }
}

class Boot {
    public int depth;
    public int maxStep;

    public Boot(int depth, int maxStep) {
        this.depth = depth;
        this.maxStep = maxStep;
    }
}
I have a list of 10 probabilities (assume these are sorted in descending order): <p1, p2, ..., p10>. I want to sample (without replacement) 10 elements such that the probability of selecting the i-th index is p_i.
Is there a ready-to-use Java method in common libraries like Random, etc., that I could use to do that?
Example: a 5-element list: <0.4, 0.3, 0.2, 0.1, 0.0>
Select 5 indexes (no duplicates) such that their probability of selection is given by the probability at that index in the list above. So index 0 would be selected with probability 0.4, index 1 with probability 0.3, and so on.
I have written my own method to do that but feel that an existing method would be better to use. If you are aware of such a method, please let me know.
This is how this is typically done:
static int sample(double[] pdf) {
    // Transform your probabilities into a cumulative distribution
    double[] cdf = new double[pdf.length];
    cdf[0] = pdf[0];
    for (int i = 1; i < pdf.length; i++)
        cdf[i] = pdf[i] + cdf[i - 1];
    // Let r be a random number in [0,1)
    double r = Math.random();
    // Search the bin corresponding to that quantile
    int k = Arrays.binarySearch(cdf, r);
    k = k >= 0 ? k : (-k - 1);
    return k;
}
If you want to return a probability do:
return pdf[k];
EDIT: I just noticed you say in the title sampling without replacement. This is not so trivial to do fast (I can give you some code I have for that). Anyhow, your question does not make any sense in that case. You cannot sample without replacement from a probability distribution. You need absolute frequencies.
For example: if I tell you that I have a box filled with orange and blue balls in the proportions 20% and 80%, but I do not tell you how many balls there are of each (in absolute terms), then I cannot tell you how many of each you will have drawn after a few turns.
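That said, if what you actually want is to draw indexes one at a time, where each remaining index is picked with probability proportional to its weight (one common reading of "weighted sampling without replacement"), a minimal sketch is to pick, mark as taken, and renormalize over the remaining weights on each draw. This is just one interpretation, not a standard library method, and as noted above it is not the same thing as fixing each index's overall inclusion probability; the class and method names below are made up for illustration:

import java.util.Arrays;
import java.util.Random;

class WeightedSampler {

    static final Random random = new Random();

    // Returns the indexes 0..weights.length-1 in a random order, where at each
    // step the next index is drawn with probability proportional to its weight
    // among the indexes not yet drawn.
    static int[] sampleWithoutReplacement(double[] weights) {
        int n = weights.length;
        boolean[] taken = new boolean[n];
        int[] order = new int[n];

        for (int pick = 0; pick < n; pick++) {
            // Total weight still in play.
            double remaining = 0;
            for (int i = 0; i < n; i++)
                if (!taken[i]) remaining += weights[i];

            // Draw a point in [0, remaining) and walk the untaken weights.
            double r = random.nextDouble() * remaining;
            int chosen = -1;
            for (int i = 0; i < n; i++) {
                if (taken[i]) continue;
                chosen = i;                // fall back to the last untaken index
                if (r < weights[i]) break; // landed in this index's slice
                r -= weights[i];
            }

            taken[chosen] = true;
            order[pick] = chosen;
        }
        return order;
    }

    public static void main(String[] args) {
        double[] weights = { 0.4, 0.3, 0.2, 0.1, 0.0 };
        System.out.println(Arrays.toString(sampleWithoutReplacement(weights)));
    }
}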
EDIT2: A faster version. This is not how it is typically done, but I have found this suggestion on the web, and I have used it in projects of mine as well.
static int sample(double[] pdf) {
    double r = random.nextDouble();
    for (int i = 0; i < pdf.length; i++) {
        if (r < pdf[i])
            return i;
        r -= pdf[i];
    }
    return pdf.length - 1; // should not happen
}
To test this:
// javac Test.java && java Test
import java.util.Arrays;
import java.util.Random;

class Test
{
    static Random random = new Random();

    public static int sample(double[] pdf) {
        // ... one of the sample() implementations above ...
    }

    public static void main(String[] args) {
        double[] pdf = new double[] { 0.3, 0.4, 0.2, 0.1 };
        int[] counts = new int[pdf.length];
        final int tests = 1000000;
        for (int i = 0; i < tests; i++)
            counts[sample(pdf)]++;
        for (int i = 0; i < counts.length; i++)
            System.out.println(counts[i] / (double) tests);
    }
}
You can see we get output very similar to the PDF that was used:
0.3001356
0.399643
0.2001143
0.1001071
These are the times I get when running each version:
1st version: 0m0.680s
2nd version: 0m0.296s
Use sample[i] as an index into your values array.
public static int[] WithoutReplacement(int m, int n) {
    // Start with the identity permutation 0..n-1.
    int[] perm = new int[n];
    for (int i = 0; i < n; i++) {
        perm[i] = i;
    }
    // Take the sample: a partial Fisher-Yates shuffle of the first m slots.
    for (int i = 0; i < m; i++) {
        int r = i + (int) (Math.random() * (n - i));
        int tmp = perm[i];
        perm[i] = perm[r];
        perm[r] = tmp;
    }
    int[] sample = new int[m];
    for (int i = 0; i < m; i++) {
        sample[i] = perm[i];
    }
    return sample;
}
I am creating a concentration game.
I have a BufferedImage array where I load in a 25-image sprite sheet.
public static BufferedImage[] card = new BufferedImage[25];
Index 0 is the card back, and indexes 1 - 24 are the card faces whose values are checked to see whether two cards match.
What I am trying to do is this: I will have 4 difficulties: Easy, Normal, Hard, and Extreme. Each difficulty needs to draw a certain number of cards and then double the ones it has chosen. For example, the default level will be Normal, which is 12 matches, so it needs to randomly choose 12 unique cards from the BufferedImage array, then double each value so there are exactly 2 of each card, and then shuffle the result.
This is what I have so far, but it seems to produce duplicates about 99% of the time.
//generate cards
Random r = new Random();
int j = 0;
int[] rowOne = new int[12];
int[] rowTwo = new int[12];
boolean[] rowOneBool = new boolean[12];
for (int i = 0; i < rowOneBool.length; i++)
    rowOneBool[i] = false;
for (int i = 0; i < rowOne.length; i++) {
    int typeId = r.nextInt(12) + 1;
    while (rowOneBool[typeId]) {
        typeId = r.nextInt(12) + 1;
        if (rowOneBool[typeId] == false);
    }
    rowOne[i] = typeId;
    j = 0;
}
The 3 amounts I will need to generate are Easy 6, Normal 12, and Hard 18; Extreme will use all of the images except index 0, which is the back of the cards.
This is more or less in the nature of random numbers: sometimes they are duplicates. You can easily factor that in, though, if you want them to be unique: just discard the number and generate again if it's not unique.
Here's a simple method to generate unique random numbers with a specified allowance of duplicates:
public static void main(String[] args) {
    int[] randoms = uniqueRandoms(new int[16], 1, 25, 3);
    for (int r : randoms) System.out.println(r);
}

public static int[] uniqueRandoms(int[] randoms, int lo, int hi, int allowance) {
    // should do some error checking up here
    int range = hi - lo, duplicates = 0;
    Random gen = new Random();
    for (int i = 0, k; i < randoms.length; i++) {
        randoms[i] = gen.nextInt(range) + lo;
        for (k = 0; k < i; k++) {
            if (randoms[i] == randoms[k]) {
                if (duplicates < allowance) {
                    duplicates++;
                } else {
                    i--;
                }
                break;
            }
        }
    }
    return randoms;
}
Edit: Tested and corrected. Now it works. : )
From what I understand from your question, the answer should look something like this:
Have 2 classes, one called Randp and the other called Main. Run Main, and edit the code to suit your needs.
package randp;

public class Main {
    public static void main(String[] args) {
        Randp randp = new Randp(10);
        for (int i = 0; i < 10; i++) {
            System.out.print(randp.nextInt());
        }
    }
}

package randp;

public class Randp {
    private int numsLeft;
    private int MAX_VALUE;
    int[] chooser;

    public Randp(int startCounter) {
        MAX_VALUE = startCounter; // set the amount we go up to
        numsLeft = startCounter;
        chooser = new int[MAX_VALUE];
        for (int i = 1; i <= chooser.length; i++) {
            chooser[i - 1] = i; // fill the array up
        }
    }

    public int nextInt() {
        if (numsLeft == 0) {
            return 0; // nothing left in the array
        }
        int a = chooser[(int) (Math.random() * MAX_VALUE)]; // picking a random index
        if (a == 0) {
            return this.nextInt(); // we hit an index that's been used already, pick another one!
        }
        chooser[a - 1] = 0; // don't want to use it again
        numsLeft--; // keep track of the numbers
        return a;
    }
}
This is how I would handle it. You would move your BufferedImage objects to a List, although I would consider creating an object for the 'cards' you're using...
int removalAmount = 3; // Remove 3 cards at random... Use a switch to change this based upon difficulty or whatever...
List<BufferedImage> list = new ArrayList<BufferedImage>();
list.addAll(Arrays.asList(card)); // Add the cards to the list, from your array.
Collections.shuffle(list);
for (int i = 0; i < removalAmount; i++) {
    list.remove(list.size() - 1);
}
list.addAll(list);
Collections.shuffle(list);
for (BufferedImage specificCard : list) {
    // Do something
}
Ok, I said I'd give you something better, and I will. First, let's improve Jeeter's solution.
It has a bug. Because it relies on 0 to be the "used" indicator, it won't actually produce index 0 until the end, which is not random.
It fills an array with indices, then uses 0 as effectively a boolean value, which is redundant. If a value at an index is not 0 we already know what it is, it's the same as the index we used to get to it. It just hides the true nature of algorithm and makes it unnecessarily complex.
It uses recursion when it doesn't need to. Sure, you can argue that this improves code clarity, but then you risk running into a StackOverflowError from too many recursive calls.
Thus, I present an improved version of the algorithm:
class Randp {
    private int MAX_VALUE;
    private int numsLeft;
    private boolean[] used;

    public Randp(int startCounter) {
        MAX_VALUE = startCounter;
        numsLeft = startCounter;
        // All false by default.
        used = new boolean[MAX_VALUE];
    }

    public int nextInt() {
        if (numsLeft <= 0)
            return 0;
        numsLeft--;
        int index;
        do {
            index = (int) (Math.random() * MAX_VALUE);
        } while (used[index]);
        used[index] = true; // mark it so we never hand this index out again
        return index;
    }
}
I believe this is much easier to understand, but now it becomes clear the algorithm is not great. It might take a long time to find an unused index, especially when we wanted a lot of values and there's only a few left. We need to fundamentally change the way we approach this. It'd be better to generate the values randomly from the beginning:
class Randp {
    private ArrayList<Integer> chooser = new ArrayList<Integer>();
    private int count = 0;

    public Randp(int startCounter) {
        for (int i = 0; i < startCounter; i++)
            chooser.add(i);
        Collections.shuffle(chooser);
    }

    public int nextInt() {
        if (count >= chooser.size())
            return 0;
        return chooser.get(count++);
    }
}
This is the most efficient and extremely simple since we made use of existing classes and methods.
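For the card game in the original question, a possible way to put this class to use (a sketch only: the card[] array and the 12-match Normal difficulty come from the question, while the variable names and the copy-then-shuffle steps are just illustrative):

// Sketch: pick 12 unique face indexes (1..24), pair them up, and shuffle.
Randp picker = new Randp(24);                 // hands out 0..23, each exactly once, in random order
List<Integer> faces = new ArrayList<Integer>();
for (int i = 0; i < 12; i++) {
    faces.add(picker.nextInt() + 1);          // shift to 1..24 so index 0 (the card back) is never used
}
faces.addAll(new ArrayList<Integer>(faces));  // add one duplicate of each face -> 24 cards total
Collections.shuffle(faces);
// faces.get(i) is now the index into the card[] array for board position i.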
I am trying to simulate the math puzzle I found at http://blog.xkcd.com/2010/02/09/math-puzzle/. However, the Java Random class is returning weird results. In the code below, the result is what is expected: the output is somewhere around .612 for the first line and between .49 and .51 for the second.
int trials = 10000000;
int success = 0;
int returnstrue = 0;
for (int i = 0; i < trials; i++) {
    Random r = new Random();
    //double one = r.nextDouble()*10000;
    //double two = r.nextDouble()*10000;
    double one = 1;
    double two = Math.PI;
    double check = r.nextDouble();
    boolean a = r.nextBoolean();
    if (a) {
        returnstrue++;
    }
    if (a) {
        if ((check > p(one)) && two > one) {
            success++;
        }
        if ((check < p(one)) && two < one) {
            success++;
        }
    } else {
        if ((check > p(two)) && two < one) {
            success++;
        }
        if ((check < p(two)) && two > one) {
            success++;
        }
    }
}
System.out.println(success / (double) trials);
System.out.println(returnstrue / (double) trials);
However, when I switch the lines of
double check = r.nextDouble();
boolean a = r.nextBoolean();
to
boolean a = r.nextBoolean();
double check = r.nextDouble();
the output is around .476 for the first number and .710 for the second. This implies that the nextBoolean() method is returning true 70% of the time in the latter configuration. Am I doing something wrong or is this just a bug?
Move the instantiation of r to outside the for loop, as in:
Random r = new Random();
for (int i = 0; i < trials; i++) {
    :
}
What you are doing now is creating a new one every time the loop iterates and, since the seed is based on the time (milliseconds), you're likely to get quite a few with the same seed.
That's almost certainly what's skewing your results.
So, yes, it is a bug, just in your code rather than in Java. That tends to be the case about 99.9999% of the time when people ask that question, since Java itself is continuously being tested by millions around the world and that snippet of yours has been tested by, well, just you :-)
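A quick way to see the effect described above is to construct two generators with the same seed explicitly: java.util.Random guarantees that they produce identical sequences, which is effectively what you get when repeated new Random() calls end up with the same time-based seed. A small, purely illustrative sketch:

import java.util.Random;

public class SameSeedDemo {
    public static void main(String[] args) {
        // Two generators built with the same seed produce the same sequence;
        // this is part of java.util.Random's documented contract.
        Random r1 = new Random(12345L);
        Random r2 = new Random(12345L);
        for (int i = 0; i < 5; i++) {
            System.out.println(r1.nextBoolean() + " " + r2.nextBoolean());
        }
        // Every line prints two identical values, so a simulation that keeps
        // re-creating its generator from a repeated seed is not sampling
        // independently at all.
    }
}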
For the code below, it stops running when n gets to around 100,000, and I need it to run up to 1 million. I don't know where it's going wrong; I am still learning Java, so there might be simple mistakes in the code as well.
public class Problem14 {
    public static void main(String[] args) {
        int chainLength;
        int longestChain = 0;
        int startingNumber = 0;
        for (int n = 2; n <= 1000000; n++) {
            chainLength = getChain(n);
            if (chainLength > longestChain) {
                System.out.println("chainLength: " + chainLength + " start: " + n);
                longestChain = chainLength;
                startingNumber = n;
            }
        }
        System.out.println("longest:" + longestChain + " " + "start:" + startingNumber);
    }

    public static int getChain(int y) {
        int count = 0;
        while (y != 1) {
            if ((y % 2) == 0) {
                y = y / 2;
            } else {
                y = (3 * y) + 1;
            }
            count = count + 1;
        }
        return count;
    }
}
Please use long as the data type instead of int.
I want to bring to light that the intermediate numbers fly much higher than 1,000,000, so the variable y needs a long to hold them.
It's the datatype for y. It should be long. Otherwise it wraps round to -2 billion.
I thought I recognised this - it's Euler problem 14. I've done this myself.
The getChain() method is causing the problem: y overflows to a negative value, and then the while loop hangs forever.
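A minimal sketch of the fix the answers describe: keep the method's return type and caller unchanged, but track the intermediate value in a long (the chain length itself still fits comfortably in an int):

public static int getChain(int start) {
    long y = start; // intermediate Collatz values overflow int, so hold them in a long
    int count = 0;
    while (y != 1) {
        if (y % 2 == 0) {
            y = y / 2;
        } else {
            y = 3 * y + 1;
        }
        count++;
    }
    return count;
}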