I'm trying to make a video game: a randomly generated world made of 3D blocks, inhabited by various creatures, with simple 2D graphics and tick-based time. I tried to generate a simple testing world like this:
World currentWorld = new World(24576, 24576, 150);
for (int ix = 0; ix < currentWorld.worldArray.length; ix++) {
    for (int iy = 0; iy < currentWorld.worldArray[0].length; iy++) {
        for (int iz = 0; iz < currentWorld.worldArray[0][0].length; iz++) {
            currentWorld.worldArray[ix][iy][iz] = new Element(random.nextInt(10));
            System.out.println(currentWorld.worldArray[ix][iy][iz].ID);
        }
    }
}
And I got this error:
Exception in thread "Display" java.lang.OutOfMemoryError: Java heap space
at cz.sargon.realms.world.World.<init>(World.java:11)
I understand that the error means I ran out of Java's heap memory, which is intended for temporary storage of the data currently being worked with. Now, how do I go about storing this world somehow and working with it later? I will need to read from it every time I want to render something, and also run the AI of some of the creatures every few ticks (I plan for every creature in the world to check its surroundings every few ticks and react appropriately to its circumstances).
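For scale: 24576 x 24576 x 150 is roughly 90 billion Element objects, which no heap will hold. A common workaround is to store block IDs as primitive ints rather than objects, and to split the world into fixed-size chunks that are generated lazily (and could later be evicted or saved to disk). Here is a minimal sketch of that idea; ChunkedWorld and every name in it are invented for illustration, not an existing API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Sketch of lazy, chunked world storage. Blocks are plain int IDs instead of
// Element objects, and a chunk's data is only generated when first touched.
class ChunkedWorld {
    static final int CHUNK = 16;          // each chunk covers CHUNK x CHUNK columns
    final int height;                     // world height in blocks
    final Random random = new Random();
    final Map<Long, int[]> chunks = new HashMap<>();

    ChunkedWorld(int height) { this.height = height; }

    // Pack the two chunk coordinates into a single map key.
    private static long key(int cx, int cy) {
        return ((long) cx << 32) | (cy & 0xFFFFFFFFL);
    }

    // Fetch a chunk, generating its block data on first access.
    private int[] chunk(int cx, int cy) {
        return chunks.computeIfAbsent(key(cx, cy), k -> {
            int[] data = new int[CHUNK * CHUNK * height];
            for (int i = 0; i < data.length; i++) data[i] = random.nextInt(10);
            return data;
        });
    }

    // Look up a block ID by world coordinates.
    int blockId(int x, int y, int z) {
        int[] data = chunk(Math.floorDiv(x, CHUNK), Math.floorDiv(y, CHUNK));
        int lx = Math.floorMod(x, CHUNK), ly = Math.floorMod(y, CHUNK);
        return data[(lx * CHUNK + ly) * height + z];
    }
}
```

With this layout, rendering and creature AI only touch the handful of chunks near the camera or the creature, so only those ever occupy memory.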
I want to create a genetic algorithm that recreates images. I have written the program for this in Processing, but the images that evolve are nothing close to the input image.
I believe the problem is with my fitness function. I have tried many things: changing the polygon types that make up the DNA, using both crossover and a single parent, and multiple fitness functions: histogram comparison across all channels, pixel comparison, and brightness comparison (for black-and-white images).
public void calcFitness(PImage tar) {
    tar.loadPixels();
    image.loadPixels();
    int brightness = 0;
    for (int i = 0; i < image.pixels.length; i++) {
        brightness += Math.abs(parent.brightness(tar.pixels[i]) - parent.brightness(image.pixels[i]));
    }
    fitness = 1.0 / (Math.pow(1 + brightness, 2) / 2);
}
public void calculateFitness() {
    int[] rHist = new int[256], gHist = new int[256], bHist = new int[256];
    image.loadPixels();
    // Calculate red histogram
    for (int i = 0; i < image.pixels.length; i++) {
        int red = image.pixels[i] >> 16 & 0xFF;
        rHist[red]++;
    }
    // Calculate green histogram
    for (int i = 0; i < image.pixels.length; i++) {
        int green = image.pixels[i] >> 8 & 0xFF;
        gHist[green]++;
    }
    // Calculate blue histogram
    for (int i = 0; i < image.pixels.length; i++) {
        int blue = image.pixels[i] & 0xFF;
        bHist[blue]++;
    }
    // Compare the target histograms and the current ones
    for (int i = 0; i < 256; i++) {
        double totalDiff = 0;
        totalDiff += Math.pow(main.rHist[i] - rHist[i], 2) / 2;
        totalDiff += Math.pow(main.gHist[i] - gHist[i], 2) / 2;
        totalDiff += Math.pow(main.bHist[i] - bHist[i], 2) / 2;
        fitness += Math.pow(1 + totalDiff, -1);
    }
}
public void evaluate() {
    int totalFitness = 0;
    for (int i = 0; i < POPULATION_SIZE; i++) {
        population[i].calcFitness(target);
        //population[i].calculateFitness();
        totalFitness += population[i].fitness;
    }
    if (totalFitness > 0) {
        for (int i = 0; i < POPULATION_SIZE; i++) {
            population[i].prob = population[i].fitness / totalFitness;
        }
    }
}
public void selection() {
    SmartImage[] newPopulation = new SmartImage[POPULATION_SIZE];
    for (int i = 0; i < POPULATION_SIZE; i++) {
        DNA parentA = pickOne();
        DNA parentB = pickOne();
        DNA child = parentA.crossover(parentB);
        child.mutate(mutationRate);
        newPopulation[i] = new SmartImage(parent, child, target.width, target.height);
    }
    population = newPopulation;
    generation++;
}
What I expect from this is to get a general shape and color similar to my target image, but all I get is random polygons with random colors and alphas.
The code looks fine at first glance. You should first check that your code is capable of converging to a target at all, for example by feeding it a target image that was generated by your algorithm from a random genome, or a very simple image that it should be able to recreate easily.
You are using the SAD (sum of absolute differences) metric between pixels to calculate fitness. You could try SSD (sum of squared differences), as you already do in the histogram method, but between pixels or blocks; that heavily penalizes large differences, so the surviving images won't stray too far from the target. You could also try a more perceptual color space like HSV, so the images end up closer visually even if they are farther apart in RGB space.
I think comparing the histogram of the entire image may be too lax, as many different images produce the same histogram. Comparing individual pixels may be too strict: the image needs to be aligned very precisely to get small differences, so everything gets a low fitness value unless you are very lucky, and convergence becomes very slow. I would recommend comparing histograms between overlapping blocks, and not using all 256 levels, only about 16 or so (or use some kind of overlap between bins).
Read about the histogram of oriented gradients (HOG) and similar techniques for ideas to improve your fitness function. I took an online course about object recognition in images, Coursera - Detección de Objetos (Object Detection) by the University of Barcelona, but it's in Spanish. I'm pretty sure you can find similar study materials in English.
Edit: before trying anything more complex, a good first step would be doing the SAD or SSD on the average of each overlapping block (which has a similar effect to strongly blurring the reference and generated images before comparing pixels, but is faster). The fitness function should be resilient to small changes: an image that is shifted by a few pixels, or that is very similar once low-level detail is discarded, should score much better than a genuinely different image, and I think blurring will have that effect.
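To make the block-averaging idea concrete, here is a minimal sketch in plain Java. The class and method names are mine, and it works on raw ARGB pixel arrays rather than Processing's PImage, so adapt it to your types:

```java
// Sketch of block-averaged fitness: comparing the mean brightness of blocks
// approximates a strong blur, so small misalignments are punished far less
// than with per-pixel SAD. All names here are illustrative.
class BlockFitness {
    // Mean brightness of one bw x bh block starting at (bx, by).
    static double blockMean(int[] px, int imgW, int bx, int by, int bw, int bh) {
        long sum = 0;
        for (int y = by; y < by + bh; y++)
            for (int x = bx; x < bx + bw; x++) {
                int p = px[y * imgW + x];
                // quick brightness approximation: average of the RGB channels
                sum += ((p >> 16 & 0xFF) + (p >> 8 & 0xFF) + (p & 0xFF)) / 3;
            }
        return sum / (double) (bw * bh);
    }

    // Sum of squared differences between block means; lower is better,
    // so the score is mapped to (0, 1] via 1 / (1 + ssd).
    static double fitness(int[] a, int[] b, int imgW, int imgH, int block) {
        double ssd = 0;
        for (int by = 0; by + block <= imgH; by += block)
            for (int bx = 0; bx + block <= imgW; bx += block) {
                double d = blockMean(a, imgW, bx, by, block, block)
                         - blockMean(b, imgW, bx, by, block, block);
                ssd += d * d;
            }
        return 1.0 / (1.0 + ssd);
    }
}
```

This version uses non-overlapping blocks for brevity; stepping `bx` and `by` by half a block gives the overlapping variant suggested above.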
My question is not about which operators I need to manipulate matrices, but about what this procedure is actually meant to achieve.
I have, for example, an image in matrix form on which I need to perform several operations (this filter is one of them). After converting said image to grayscale, I need to apply the following filter
float[][] smoothKernel = {
    {0.1f, 0.1f, 0.1f},
    {0.1f, 0.2f, 0.1f},
    {0.1f, 0.1f, 0.1f}
};
on it.
The assignment file gives this example, so I assumed that when asked to "smooth" the image, I had to replace every individual pixel with a weighted average of its neighbors (while also making sure special cases such as corners and sides were handled properly).
The basic idea is this:
public static float[][] filter(float[][] gray, float[][] kernel) {
    // gray is the image matrix, and kernel is the array I specified above
    float current = 0.0f;
    float around = 0.0f;
    float[][] smooth = new float[gray.length][gray[0].length];
    for (int col = 0; col < gray.length; col++) {
        for (int row = 0; row < gray[0].length; row++) {
            // The first two loops visit every pixel;
            // the inner two visit the pixels around the one in question.
            for (int i = -1; i < 2; i++) {
                for (int j = -1; j < 2; j++) {
                    // at() safely fetches the neighboring pixel
                    around = at(gray, i + col, j + row);
                    // weight the neighbor by the kernel and accumulate
                    current += around * kernel[i + 1][j + 1];
                }
            }
            // store the new value in the smooth matrix and reset the accumulator
            smooth[col][row] = current;
            current = 0.0f;
        }
    }
    return smooth;
}
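For reference, the at helper used above is commonly implemented by clamping out-of-range indices to the nearest edge pixel, so corners and sides reuse their edge neighbors. A sketch under that assumption (the wrapper class name is illustrative, and your assignment may instead require zero-padding, which darkens the borders):

```java
// Edge-clamping lookup for the at(...) helper. Out-of-range indices are
// clamped to the nearest valid pixel, so the 3x3 kernel never reads outside
// the image at corners and sides. Wrapped in a class so it compiles standalone.
class ImageFilters {
    public static float at(float[][] gray, int col, int row) {
        int c = Math.max(0, Math.min(gray.length - 1, col));
        int r = Math.max(0, Math.min(gray[0].length - 1, row));
        return gray[c][r];
    }
}
```

Note that the kernel's weights sum to 1.0 (eight times 0.1 plus 0.2), so the smoothed values stay within the original brightness range: the filter only blurs, it never brightens or darkens overall.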
My dilemma lies in whether I have to create this new array, float[][] smooth, so as to avoid overwriting the values of the original (the image output is all white in this case...). From the end product in the example I linked above, I just cannot understand what is going on.
What is the correct way of applying the filter? Is this a universal method or does it vary for different filters?
Thank you for taking the time to clarify this.
EDIT: I have found the two errors, which I detail in the comments below, implemented the fixes back into the code, and everything is working fine now.
I have also been able to verify that some of the values in the example are calculated incorrectly (which contributed to my confusion), so I will be sure to point that out in my next class.
The question has been solved by other means; I am, however, not deleting it in the hope that other people can benefit from it. The original code can be found in the edits.
A more advanced colleague of mine helped me notice that I was missing two things. One was resetting the current variable after computing each "smoothed" value in the new array (without the reset, the accumulated value keeps growing past the color limit and gets clamped to the maximum, producing a white image). The second was that I was reading from the array I was writing to (iterating over the new array), which caused the whole image to end up the same color. Once I added both fixes, everything works fine.
I am having a problem with a basic program which displays animated images in a 150x150 grid on the screen. (Note: yes, I know the images go off the edge of the screen, but in the end I was planning to scale them as required to fit.) However, the program only runs at 2 FPS, causing the animation to sometimes not work. My loop is currently as follows (in Java):
for (int i = 0; i < 22; i++) {
    for (int j = 0; j < 11; j++) {
        g2d.drawImage(getImage(texture_Ocean, l), i * 64, j * 64, i * 64 + 64, j * 64 + 64, 0, 0, 64, 64, this);
    }
}
And getImage:
public Image getImage(Image i, Long l) {
    BufferedImage b = (BufferedImage) i;
    int w = b.getWidth();
    if (b.getHeight() % w == 0) {
        int frames = b.getHeight() / w;
        int frame = Math.round((l % 1000) / (1000 / frames));
        System.out.println(frame);
        return b.getSubimage(0, (int) (w * frame), w, w);
    } else {
        return texture_error;
    }
}
My question is: how can I make my program more efficient and run quicker? I know there has to be a way, as you see games such as Prison Architect and RimWorld with worlds that are 300x300 and have hundreds of entities, and games such as TF2 which display thousands of polygons in 3D space. How?
The problem is that you are using the CPU (and through inefficient memory access patterns, at that) for a job the GPU is much better suited to.
You should look at using a 2D graphics or game library, or something similar, to get the sort of performance you are looking for.
The thing is, when developing games you have to care a lot about optimization. You can't simply call a paint method when you don't need to and hope everything will be all right.
Besides that, you should look for a library dedicated to graphics (like OpenGL), since such libraries can hand the optimization work off to the hardware.
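Even before reaching for a GPU library, one cheap CPU-side win in the code above is to stop calling getSubimage on every draw: slice each animation strip into its frames once and cache them. A sketch of that idea (FrameCache is an invented name, not an existing API):

```java
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.util.HashMap;
import java.util.Map;

// Pre-slice each vertical animation strip into its square frames once, then
// index into the cached array on every draw instead of calling getSubimage
// per tile per frame. Names here are illustrative.
class FrameCache {
    private final Map<BufferedImage, BufferedImage[]> cache = new HashMap<>();

    Image frame(BufferedImage strip, long timeMillis) {
        BufferedImage[] frames = cache.computeIfAbsent(strip, s -> {
            int w = s.getWidth();
            int n = s.getHeight() / w;   // square frames stacked vertically
            BufferedImage[] f = new BufferedImage[n];
            for (int i = 0; i < n; i++)
                f[i] = s.getSubimage(0, i * w, w, w);
            return f;
        });
        // pick the frame for the current point in a one-second cycle;
        // the index is always in [0, n) so it never goes out of bounds
        int idx = (int) ((timeMillis % 1000) * frames.length / 1000);
        return frames[idx];
    }
}
```

In the draw loop, one call per tile becomes a cheap array lookup; the per-tile println should also go, since console output inside a render loop is a notorious frame-rate killer.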
I have this code:
public static void program() throws Exception {
    BufferedImage input = null;
    long start = System.currentTimeMillis();
    while ((System.currentTimeMillis() - start) / 1000 < 220) {
        for (int i = 1; i < 13; i++) {
            for (int j = 1; j < 7; j++) {
                input = robot.createScreenCapture(new Rectangle(3 + i * 40, 127 + j * 40, 40, 40));
                if ((input.getRGB(6, 3) > -7000000) && (input.getRGB(6, 3) < -5000000)) {
                    robot.mouseMove(10 + i * 40, 137 + j * 40);
                    robot.mousePress(InputEvent.BUTTON1_MASK);
                    robot.mouseRelease(InputEvent.BUTTON1_MASK);
                }
            }
        }
    }
}
On a webpage there is a 12x6 grid in which images randomly spawn. Some are bad, some are good.
I'm looking for a better way to check for good images. At the moment, the RGB color of good images at location (6, 3) differs from that of bad images.
I'm taking a screenshot of every 40x40 box and looking at the pixel at location (6, 3).
I don't know how to explain my code any better.
EDIT:
Picture of the webpage. External links ok?
http://i.imgur.com/B5Ev1Y0.png
I'm not sure what exactly the bottleneck is in your code, but I have a hunch it might be the repeated calls to robot.createScreenCapture.
You could try calling robot.createScreenCapture once on the entire matrix (i.e. a large rectangle that covers all the smaller rectangles you are interested in) outside your nested loops, and then look up the pixel values at the points of interest using x and y offsets into the sub-rectangles you are inspecting.
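That could look roughly like this. GridScan and goodCells are invented names, the coordinates are copied from the question's code, and the capture itself (robot.createScreenCapture(GridScan.gridRect())) plus the click loop are left out so the scanning logic stands alone:

```java
import java.awt.Point;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

// One screen capture for the whole 12 x 6 grid, then pixel lookups with
// offsets, instead of 72 captures per pass. Obtain the image once with
// robot.createScreenCapture(GridScan.gridRect()) and hand it to goodCells.
class GridScan {
    // the full grid area in the question's screen coordinates
    static Rectangle gridRect() {
        return new Rectangle(3 + 40, 127 + 40, 12 * 40, 6 * 40);
    }

    // return the screen points to click, reading pixel (6, 3) of each cell
    static List<Point> goodCells(BufferedImage shot) {
        Rectangle g = gridRect();
        List<Point> clicks = new ArrayList<>();
        for (int i = 0; i < 12; i++) {
            for (int j = 0; j < 6; j++) {
                int rgb = shot.getRGB(i * 40 + 6, j * 40 + 3);
                if (rgb > -7000000 && rgb < -5000000) {
                    // same click offset as the original mouseMove
                    clicks.add(new Point(g.x + i * 40 + 7, g.y + j * 40 + 10));
                }
            }
        }
        return clicks;
    }
}
```

One capture per pass instead of 72 should remove most of the overhead, since each createScreenCapture call is comparatively expensive.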
OK. I managed to make and use a QuadTree for my collision detection algorithm and it works just fine. I have my enemies, put them in the QuadTree, then retrieve the candidates that could possibly collide with my hero. That is hitTestObject(), many against one.
The problem I've reached is how to quickly test whether any of the enemies collide with my hero's bullets. I have roughly 4-6 bullets on the stage at the same time. In this case I hitTestObject 4-6 bullets against many enemy objects, which in turn gives me a loop inside a loop, so even using the quad tree, after a while things start to lag on the stage :)
I used this tutorial, quadtree in java, to develop my algorithm, but it works fine only in the above case. What should I do in this circumstance? Use another algorithm for many-against-many, or something else?
Roughly this is the code
bulletsQuadTree.clear();
for (var bIndex:uint; bIndex < allEnemies.length; bIndex += 1)
{
    bulletsQuadTree.insert(allEnemies[bIndex]);
}
for (var bc:uint = 0; bc < bullets.length; bc += 1)
{
    var enemiesCollideBullets:Array = new Array();
    bulletsQuadTree.retrieve(enemiesCollideBullets, bullets[bc]);
    for (var dc:uint = 0; dc < enemiesCollideBullets.length; dc += 1)
    {
        if (enemiesCollideBullets[dc].hitTestObject(bullets[bc]))
        {
            enemiesCollideBullets[dc].destroy();
            enemiesCollideBullets.splice(dc, 1);
        }
    }
}
So this happens on each frame, which is a lot of operations per frame :(
Each bullet is treated as a hero, and for each bullet an array of the enemies it could possibly collide with is returned.
If you want to improve the performance of this loop, change this line:
enemiesCollideBullets[dc].hitTestObject(bullets[bc]);
The ActionScript hit-test functions are slow. A much better approach for bullets is to check the distance instead.
var distanceSquared:Number = (bullet.width / 2 + object.width / 2) * (bullet.width / 2 + object.width / 2);
if ((bullet.x - object.x) * (bullet.x - object.x) + (bullet.y - object.y) * (bullet.y - object.y) < distanceSquared) {
    // it's a hit!
}