Images use too much RAM in Processing - java

I'm trying to load a few thousand small images into Processing and display them as a larger map. The total file size of the images together is 130 MiB; however, when I run the program it uses all of my RAM, and I even get an OutOfMemoryError as the RAM usage exceeds 2 GiB.
What causes over 10x the memory usage compared to the filesize, and is there any way I can mitigate this?
EDIT:
Example code
ArrayList<PImage> images = new ArrayList<PImage>();
void setup() {
  for (int i = 0; i < 2000 /*num of images*/; i++) {
    images.add(loadImage(Integer.toString(i) + ".jpg"));
  }
}
//in reality, never gets here
void draw() {
  for (PImage i : images) {
    image(i, /*precalculated x and y*/ random(500), random(500));
  }
}

JPEG compression is typically on the order of 10:1, so that would explain it: the images are compressed on disk, but when loaded into your program they are decompressed into raw pixels, roughly 10 times the size (a decoded image needs about 4 bytes per pixel, no matter how small the JPEG file was).
To improve your code, don't load all the images at once: process them a few at a time.
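If the goal is a single composed map, one way to apply that advice in Processing is to draw each tile onto an offscreen PGraphics as it is loaded, so every decoded PImage can be garbage collected immediately. This is only a sketch of the idea; the 500x500 canvas and random() placement are just the placeholders from the question:
PGraphics composite;

void setup() {
  size(500, 500);
  composite = createGraphics(500, 500); // one composed image instead of 2000 PImages
  composite.beginDraw();
  for (int i = 0; i < 2000; i++) {
    PImage tile = loadImage(i + ".jpg");
    if (tile != null) {
      composite.image(tile, /*precalculated x and y*/ random(500), random(500));
    }
    // 'tile' goes out of scope here, so its pixel data can be reclaimed
  }
  composite.endDraw();
}

void draw() {
  image(composite, 0, 0);
}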

Related

How to stop images from staying in RAM even after the variable that held them has been assigned a new image

I am a third-year programmer in high school, so not a complete beginner, but I cannot fix this bug. In Processing 3.5.3 I load an image into a variable, copy it to a second variable, set the first variable to a new image, transfer it over to the second once it loads, and repeat an undetermined number of times. No matter what I do to clear the variables, the sketch eventually runs out of memory.
I have tried setting everything to null with each iteration of the code and running the garbage collector but it always runs out of memory eventually.
Here is my code:
import java.io.FileWriter;
import java.io.FileReader;
int m=0, last=0, nums;
PImage show, img;
private FileWriter csvWriter;
int count=1;
void setup()
{
//fullScreen();
size(1800, 900);
imageMode(CENTER);
noStroke();
nums = /*the number of images to be cycled through*/;
frameRate(.1);
}
void draw()
{
testDraw();
Runtime.getRuntime().gc();
g.removeCache(img);
g.removeCache(show);
System.gc();
}
public void testDraw()
{
int num = (int)(Math.random()*nums);
println("image number: "+ num);
int count=0;
String data=null;
try
{
BufferedReader csvReader = new BufferedReader(new FileReader(/*a csv with the paths of the images to be loaded*/));
while (count<num)
{
csvReader.readLine();
count++;
}
data=csvReader.readLine();
csvReader.close();
csvReader=null;
}
catch (IOException e)
{
e.printStackTrace();
}
if (frameCount ==1)
{
try
{
img = loadImage(data);
}
catch (Exception e)
{
e.printStackTrace();
}
while (data != null && img.width<=0)
{
//println("loading...");
}
}
show = img.copy();
img=null;
displayImage(show);
show=null;
try
{
println("available ram: " + Runtime.getRuntime().freeMemory());
img = loadImage(data);
}
catch (Exception e)
{
e.printStackTrace();
}
while (data != null && img.width<=0)
{
//println("loading...");
}
data=null;
}
public void displayImage(PImage in)
{
if ((((float)(width)/in.width)*in.height)<height)
{
image(in, width/2, height/2, width, ((float)(width)/in.width)*in.height);
} else
{
image(in, width/2, height/2, ((float)(height)/in.height)*in.width, height);
}
}
The code is supposed to load and display an image on a screen from a network drive. The network part works and it displays images, and it is supposed to load a new image every few seconds, forever, but it crashes with the error message:
OutOfMemoryError: You may need to increase the memory setting in Preferences.
and the printout:
java.lang.OutOfMemoryError: Java heap space
OutOfMemoryError: Java heap space
at java.awt.image.DataBufferInt.<init>(DataBufferInt.java:75)
at java.awt.image.Raster.createPackedRaster(Raster.java:467)
at java.awt.image.DirectColorModel.createCompatibleWritableRaster(DirectColorModel.java:1032)
at sun.awt.image.ImageRepresentation.createBufferedImage(ImageRepresentation.java:253)
at sun.awt.image.ImageRepresentation.setPixels(ImageRepresentation.java:559)
at sun.awt.image.ImageDecoder.setPixels(ImageDecoder.java:138)
at sun.awt.image.JPEGImageDecoder.sendPixels(JPEGImageDecoder.java:119)
at sun.awt.image.JPEGImageDecoder.readImage(Native Method)
at sun.awt.image.JPEGImageDecoder.produceImage(JPEGImageDecoder.java:141)
at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:269)
at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:205)
at sun.awt.image.ImageFetcher.run(ImageFetcher.java:169)
An OutOfMemoryError means that your code is either using up too much memory
because of a bug (e.g. creating an array that's too large, or unintentionally
loading thousands of images), or that your sketch may need more memory to run.
If your sketch uses a lot of memory (for instance if it loads a lot of data files)
you can increase the memory available to your sketch using the Preferences window.
I have tried increasing the memory, but it just delays the eventual out-of-memory crash.
Please help, and I will do my best to answer any questions about my code.
As a programmer, you can never directly force the garbage collector to run. Calling Runtime.getRuntime().gc() is just a suggestion. The JVM can (and probably will) ignore it. There's some verbiage about that in Javadoc here: https://docs.oracle.com/javase/7/docs/api/java/lang/Runtime.html#gc()
Depending on how much you've increased JVM memory, and for how long you've let it run, there may still be value in trying larger memory amounts. As an example, you could try running with a max heap size of 4 GB by using -Xmx4096m.
Lastly, I would look more closely at what happens when you call img.copy() and image(). It's possible that one or both somehow results in retaining a reference to the underlying image, such that img=null or show=null doesn't have the effect you want (of allowing the underlying image data to be garbage collected).
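For what it's worth, here is a minimal sketch of that idea in the same style as the code above, assuming the renderer's per-image cache (the thing g.removeCache() clears) is what keeps the old pixels reachable: remove the cache entry for the image that was actually drawn before dropping the last reference to it. The pickRandomPathFromCsv() helper is hypothetical and stands in for the CSV lookup in the question.
PImage current;

void draw() {
  String path = pickRandomPathFromCsv(); // hypothetical helper for the CSV lookup above
  PImage next = loadImage(path);
  if (next != null && next.width > 0) {
    if (current != null) {
      g.removeCache(current); // drop the renderer's cached copy of the previous image
    }
    current = next;           // the previous PImage is now unreachable and can be collected
  }
  if (current != null) {
    displayImage(current);    // same displayImage() as in the question
  }
}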

Increase/decrease audio play speed of AudioInputStream with Java

Getting into the complex world of audio with Java, I am using this
library, which I basically improved and published on GitHub.
The main class of the library is StreamPlayer and the code has comments and is straightforward to understand.
The problem is that it supports many features, but not increasing/decreasing the playback speed of the audio, say, the way YouTube does when you change the video speed.
I have no clue how to implement such functionality. I mean, what can I do about the sample rate of targetFormat when writing the audio? I have to restart the audio again and again every time...
AudioFormat targetFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, sourceFormat.getSampleRate()*2, nSampleSizeInBits, sourceFormat.getChannels(),
nSampleSizeInBits / 8 * sourceFormat.getChannels(), sourceFormat.getSampleRate(), false);
The code of playing the audio is:
/**
* Main loop.
*
* Player Status == STOPPED || SEEKING = End of Thread + Freeing Audio Resources.<br>
* Player Status == PLAYING = Audio stream data sent to Audio line.<br>
* Player Status == PAUSED = Waiting for another status.
*/
@Override
public Void call() {
// int readBytes = 1
// byte[] abData = new byte[EXTERNAL_BUFFER_SIZE]
int nBytesRead = 0;
int audioDataLength = EXTERNAL_BUFFER_SIZE;
ByteBuffer audioDataBuffer = ByteBuffer.allocate(audioDataLength);
audioDataBuffer.order(ByteOrder.LITTLE_ENDIAN);
// Lock stream while playing.
synchronized (audioLock) {
// Main play/pause loop.
while ( ( nBytesRead != -1 ) && status != Status.STOPPED && status != Status.SEEKING && status != Status.NOT_SPECIFIED) {
try {
//Playing?
if (status == Status.PLAYING) {
// System.out.println("Inside Stream Player Run method")
int toRead = audioDataLength;
int totalRead = 0;
// Reads up a specified maximum number of bytes from audio stream
//wtf i have written here xaxaxoaxoao omg //to fix! cause it is complicated
for (; toRead > 0
&& ( nBytesRead = audioInputStream.read(audioDataBuffer.array(), totalRead, toRead) ) != -1; toRead -= nBytesRead, totalRead += nBytesRead)
// Check for under run
if (sourceDataLine.available() >= sourceDataLine.getBufferSize())
logger.info(() -> "Underrun> Available=" + sourceDataLine.available() + " , SourceDataLineBuffer=" + sourceDataLine.getBufferSize());
//Check if anything has been read
if (totalRead > 0) {
trimBuffer = audioDataBuffer.array();
if (totalRead < trimBuffer.length) {
trimBuffer = new byte[totalRead];
//Copies an array from the specified source array, beginning at the specified position, to the specified position of the destination array
// The number of components copied is equal to the length argument.
System.arraycopy(audioDataBuffer.array(), 0, trimBuffer, 0, totalRead);
}
//Writes audio data to the mixer via this source data line
sourceDataLine.write(trimBuffer, 0, totalRead);
// Compute position in bytes in encoded stream.
int nEncodedBytes = getEncodedStreamPosition();
// Notify all registered Listeners
listeners.forEach(listener -> {
if (audioInputStream instanceof PropertiesContainer) {
// Pass audio parameters such as instant
// bit rate, ...
listener.progress(nEncodedBytes, sourceDataLine.getMicrosecondPosition(), trimBuffer, ( (PropertiesContainer) audioInputStream ).properties());
} else
// Pass audio parameters
listener.progress(nEncodedBytes, sourceDataLine.getMicrosecondPosition(), trimBuffer, emptyMap);
});
}
} else if (status == Status.PAUSED) {
//Flush and stop the source data line
if (sourceDataLine != null && sourceDataLine.isRunning()) {
sourceDataLine.flush();
sourceDataLine.stop();
}
try {
while (status == Status.PAUSED) {
Thread.sleep(50);
}
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
logger.warning("Thread cannot sleep.\n" + ex);
}
}
} catch (IOException ex) {
logger.log(Level.WARNING, "\"Decoder Exception: \" ", ex);
status = Status.STOPPED;
generateEvent(Status.STOPPED, getEncodedStreamPosition(), null);
}
}
// Free audio resources.
if (sourceDataLine != null) {
sourceDataLine.drain();
sourceDataLine.stop();
sourceDataLine.close();
sourceDataLine = null;
}
// Close stream.
closeStream();
// Notification of "End Of Media"
if (nBytesRead == -1)
generateEvent(Status.EOM, AudioSystem.NOT_SPECIFIED, null);
}
//Generate Event
status = Status.STOPPED;
generateEvent(Status.STOPPED, AudioSystem.NOT_SPECIFIED, null);
//Log
logger.info("Decoding thread completed");
return null;
}
Feel free to download and check out the library alone if you want. :) I need some help on this... Library link.
Short answer:
For speeding up a single person speaking, use my Sonic.java native Java implementation of my Sonic algorithm. An example of how to use it is in Main.Java. A C-language version of the same algorithm is used by Android's AudioTrack. For speeding up music or movies, find a WSOLA based library.
Bloated answer:
Speeding up speech is more complex than it sounds. Simply increasing the sample rate without adjusting the samples will cause speakers to sound like chipmunks. There are basically two good schemes for linearly speeding up speech that I have listened to: fixed-frame based schemes like WSOLA, and pitch-synchronous schemes like PICOLA, which is used by Sonic for speeds up to 2X. One other scheme I've listened to is FFT-based, and IMO those implementations should be avoided. I hear rumor that FFT-based can be done well, but no open-source version I am aware of was usable the last time I checked, probably in 2014.
I had to invent a new algorithm for speeds greater than 2X, since PICOLA simply drops entire pitch periods, which works well so long as you don't drop two pitch periods in a row. For faster than 2X, Sonic mixes in a portion of samples from each input pitch period, retaining some frequency information from each. This works well for most speech, though some languages such as Hungarian appear to have parts of speech so short that even PICOLA mangles some phonemes. However, the general rule that you can drop one pitch period without mangling phonemes seems to work well most of the time.
Pitch-synchronous schemes focus on one speaker, and will generally make that speaker clearer than fixed-frame schemes, at the expense of butchering non-speech sounds. However, the improvement of pitch-synchronous schemes over fixed-frame schemes is hard to hear at speeds less than about 1.5X for most speakers. This is because fixed-frame algorithms like WSOLA basically emulate pitch-synchronous schemes like PICOLA when there is only one speaker and no more than one pitch period needs to be dropped per frame. The math works out basically the same in this case if WSOLA is tuned well to the speaker. For example, if it is able to select a sound segment of +/- one frame in time, then a 50ms fixed frame will allow WSOLA to emulate PICOLA for most speakers who have a fundamental pitch > 100 Hz. However, a male with a deep voice of say 95 Hz would be butchered with WSOLA using those settings. Also, parts of speech, such as at the end of a sentence, where our fundamental pitch drops significantly, can also be butchered by WSOLA when parameters are not optimally tuned. Also, WSOLA generally falls apart for speeds greater than 2X, where, like PICOLA, it starts dropping multiple pitch periods in a row.
On the positive side, WSOLA will make most sounds including music understandable, if not high fidelity. Taking non-harmonic multi-voice sounds and changing the speed without introducing substantial distortion is impossible with overlap-and-add (OLA) schemes like WSOLA and PICOLA.
Doing this well would require separating the different voices, changing their speeds independently, and mixing the results together. However, most music is harmonic enough to sound OK with WSOLA.
It turns out that the poor quality of WSOLA at > 2X is one reason folks rarely listen at higher speeds than 2X. Folks simply don't like it. Once Audible.com switched from WSOLA to a Sonic-like algorithm on Android, they were able to increase the supported speed range from 2X to 3X. I haven't listened on iOS in the last few years, but as of 2014, Audible.com on iOS was miserable to listen to at 3X speed, since they used the built-in iOS WSOLA library. They've likely fixed it since then.
Looking at the library you linked, it doesn't seem like a good place to start specifically for this playback speed issue; is there any reason you aren't using AudioTrack? It seems to support everything you need.
EDIT 1: AudioTrack is Android-specific, but the OP's question is about desktop Java SE; I will only leave it here for future reference.
1. Using AudioTrack and adjusting playback speed (Android)
Thanks to an answer on another SO post (here), there is a class posted which uses the built in AudioTrack to handle speed adjustment during playback.
public class AudioActivity extends Activity {
AudioTrack audio = new AudioTrack(AudioManager.STREAM_MUSIC,
44100,
AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT,
intSize, //size of pcm file to read in bytes
AudioTrack.MODE_STATIC);
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//read track from file
File file = new File(getFilesDir(), fileName);
int size = (int) file.length();
byte[] data = new byte[size];
try {
FileInputStream fileInputStream = new FileInputStream(file);
fileInputStream.read(data, 0, size);
fileInputStream.close();
audio.write(data, 0, data.length);
} catch (IOException e) {}
}
//change playback speed by factor
void changeSpeed(double factor) {
audio.setPlaybackRate((int) (audio.getPlaybackRate() * factor));
}
}
This just streams the whole file in one write command, but you could adjust it otherwise (the setPlaybackRate method is the main part you need).
2. Applying your own playback speed adjustment
In theory, there are two ways to adjust playback speed:
Adjust the sample rate
Change the number of samples per unit time
Since you are using the initial sample rate (because I'm assuming you have to reset the library and stop the audio when you adjust the sample rate?), you will have to adjust the number of samples per unit time.
For example, to speed up an audio buffer's playback you can use this pseudo code (Python-style), found thanks to Coobird (here).
original_samples = [0, 0.1, 0.2, 0.3, 0.4, 0.5]
def faster(samples):
    new_samples = []
    # average each pair of neighbouring samples, halving their number
    for i in range(0, len(samples) - 1, 2):
        new_samples.append(0.5 * (samples[i] + samples[i + 1]))
    return new_samples
faster_samples = faster(original_samples)
This is just one example of speeding up the playback, not the only algorithm for doing so, but one to get started with. Once you have calculated your sped-up buffer, you can write it to your audio output and the data will play back at whatever scaling you choose to apply.
To slow down the audio, apply the opposite by adding data points between the current buffer values with interpolation as desired.
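A rough Java version of the same idea, assuming the audio has already been decoded into a float array of mono PCM samples (this is the naive resampling approach sketched above, not the library's API; like the pseudocode, it changes pitch along with speed):
// Naive speed change by resampling with linear interpolation.
// factor > 1 speeds playback up (fewer output samples),
// factor < 1 slows it down (more output samples).
static float[] resample(float[] samples, double factor) {
  int outLength = (int) (samples.length / factor);
  float[] out = new float[outLength];
  for (int i = 0; i < outLength; i++) {
    double pos = i * factor;          // fractional read position in the input
    int idx = (int) pos;
    double frac = pos - idx;
    float a = samples[idx];
    float b = (idx + 1 < samples.length) ? samples[idx + 1] : a;
    out[i] = (float) (a * (1.0 - frac) + b * frac); // linear interpolation
  }
  return out;
}
// e.g. float[] doubleSpeed = resample(originalSamples, 2.0);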
Please note that when adjusting playback speed it is often worth low pass filtering at the maximum frequency desired to avoid unnecessary artifacts.
As you can see, the second approach is a much more challenging task, as it requires you to implement such functionality yourself, so I would probably use the first, but thought the second was worth mentioning.

Parallel screencapture with robot

I want to write a program that captures parts of my screen. In order to improve the number of pictures taken per second, I use 4 threads. My threads look like this:
class Sub1 extends Thread{
public void run(){
Rectangle screenRect1 = new Rectangle(0,0,89,864);
for(int i = 0; i<1000; i++) {
try {
Robot robot = new Robot();
BufferedImage screenLeft = robot.createScreenCapture(screenRect1);
} catch (AWTException ex) {
System.err.println(ex);
}
}
}
}
With different numbers for the rectangle object in each thread.
I call this 4 times so I can get the most out of my i5 processor. However, when I try to run it, the CPU usage is at about 30%. If I fill the threads with while(true){} I get 100% usage. Does this mean the code can't run in parallel? If so, what can I do to execute it in parallel?
Your program is working in parallel, but the CPU is not its only bottleneck; I/O is the real bottleneck of your program.
I'm not an expert on screen-capture programs, but I think the I/O operations performed by classes such as BufferedImage are the reason your CPU usage is about 30%: the CPU is spending its time waiting on I/O.
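One way to check where the time goes is to time the capture call itself while reusing a single Robot and Rectangle per thread, rather than constructing a new Robot on every iteration (a small sketch; the 89x864 region is just the value from the question):
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.image.BufferedImage;

class TimedCapture extends Thread {
  public void run() {
    try {
      Robot robot = new Robot();                      // created once, not per iteration
      Rectangle region = new Rectangle(0, 0, 89, 864);
      long captureNanos = 0;
      for (int i = 0; i < 1000; i++) {
        long t0 = System.nanoTime();
        BufferedImage img = robot.createScreenCapture(region);
        captureNanos += System.nanoTime() - t0;
      }
      System.out.println("avg capture time: " + (captureNanos / 1000 / 1_000_000.0) + " ms");
    } catch (AWTException ex) {
      System.err.println(ex);
    }
  }
}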

Windows hangs during (or after) execution of Java code generating many images

I am developing code which sometimes has to generate a lot of images. The program works perfectly fine when I have a relatively small number of images to generate; however, when I have to generate tens of thousands of images, something strange can happen.
At some random point Windows appears to hang during execution of the code for up to a few minutes. Task Manager claims that the Java application uses 0% of the processor at that time. In fact, every application that tries to use a resource from the hard drive hangs, but applications that are already open and don't require access to the hard drive seem to work.
What is stranger, this behavior can happen even a few seconds/minutes after my program has finished. But sometimes it doesn't happen at all.
Here is the simplified example:
public static void main(String[] args) {
try {
for (int i = 1; i <= 100; i++) {
String dirName = "tmp/" + i + "/";
File dir = new File(dirName);
dir.mkdirs();
//put only 200 files into one directory
for (int j = 0; j < 200; j++) {
drawImage(dirName, j);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
private static void drawImage(String dirName, int j)
throws FileNotFoundException, IOException {
BufferedImage bi = new BufferedImage(512, 512, BufferedImage.TYPE_INT_ARGB);
Graphics2D graphics = bi.createGraphics();
//draw something on the image
for (int k = 0; k < 10; k++)
graphics.drawLine(k, 0, k*2, 512);
BufferedImage tmpBI=new BufferedImage(512, 512, BufferedImage.TYPE_INT_ARGB);
Graphics2D tmpGraphics = tmpBI.createGraphics();
tmpGraphics.drawImage(bi, 0, 0, 512, 512, 0, 0, 512, 512, null);
//write image to png
FileOutputStream fos;
fos = new FileOutputStream(new File(dirName + "img" + j + ".PNG"));
ImageIO.write(tmpBI, "PNG", fos);
fos.close();
}
My first guess is that there are some problems with file handlers in OS or that my Java code improperly handles files.
The second guess is that garbage collector does some magic things that I don't understand.
But to be honest I have no idea how to find out what the real problem is and how to fix it.
I run the code on Windows 7 64bit and jdk1.7 64-bit with NTFS file system.
UPDATE
A few responses proposed some workarounds. I tested all of them, with the same effect:
change the output directory to USB memory stick
additional thread for computations
single ZIP file as the output stream for all files
The last try surprised me. I expected that in this case it wouldn't hang. So, I performed another test and instead of writing to a file I used NullOutputStream. The result was the same...
My conclusion: either there is something wrong in the Swing library (very improbable) or maybe there is something wrong with my computer/OS. I will check it on other computers/OSes. If the problem persists I will get back to it.
The code you write accesses some kernel functions.
The open and close functions of OutputStream are partly implemented in native code, and so is createGraphics().
When accessing native functions, keep in mind that they have their own synchronization implemented.
I believe you are in a scenario where the information sent to the native thread arrives at a much higher pace than the native thread can write to disk (when you think about it, RAM access is way quicker than HDD access).
Even if your Java machine quits, the "tasks" were already handed to the native thread to write.
That's probably why all HDD operations are hanging.
Try adding a Thread.sleep() to your thread, to effectively give the Java thread lower priority, and see if this helps.
As a general rule of thumb, file creation is a very expensive operation. 20,000 files are hard even to list in a folder :).
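Applied to the inner loop from the question, that suggestion is just a short pause between images so pending writes get a chance to drain; the 10 ms value is an arbitrary starting point, not a recommendation:
for (int j = 0; j < 200; j++) {
  drawImage(dirName, j);
  try {
    Thread.sleep(10); // give the OS time to flush pending writes before queuing more
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    return;
  }
}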
My feeling is that you should invoke the drawing on the UI thread, because you make use of awt.Graphics and BufferedImage. These are UI-specific components and you always want to perform UI operations on the UI thread.
Try something like:
EventQueue.invokeLater(new Runnable() {
public void run() {
try {
// do your graphics related operations here
} catch (Exception e) {
e.printStackTrace();
}
}
});
Try going multithreaded, so you don't tie up your main thread. I guess this code is originally run in a Swing form rather than a console; if you do massive operations on the main thread, the program won't redraw until the operation is over.
public static void main(String[] args) {
Thread operation = new Thread(() -> threadOperation());
operation.start();
}
public static void threadOperation() {
try {
for (int i = 1; i <= 100; i++) {
String dirName = "tmp/" + i + "/";
File dir = new File(dirName);
dir.mkdirs();
//put only 200 file into one directory
for (int j = 0; j < 200; j++) {
drawImage(dirName, j);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
Also you may track the progress with a taskbar with a little more coding.
Now, you should really expect other programs to hang while your program saves a massive amount of data into the disk, since hard disks aren't as fast as your processor can be. Unless (or even if) you have an SSD HD you will have this issue.
Other programs hanging while your process writes your data, only means they're trying to write data into the disk using their main thread and won't redraw until they finish writing their data, then windows thinks: Wait this guy isn't redrawing... must be dead.
PS: This code might be wrong somewhere; I've been coding way too much in C# lately.

Increasing screen capture speed when using Java and awt.Robot

Edit: If anyone has any other recommendations for increasing the performance of screen capture, please feel free to share, as they might fully address my problem!
Hello Fellow Developers,
I'm working on some basic screen capture software for myself. As of right now I've got some proof of concept/tinkering code that uses java.awt.Robot to capture the screen as a BufferedImage. Then I do this capture for a specified amount of time and afterwards dump all of the pictures to disk. From my tests I'm getting about 17 frames per second.
Trial #1
Length: 15 seconds
Images Captured: 255
Trial #2
Length: 15 seconds
Images Captured: 229
Obviously this isn't nearly good enough for a real screen capture application, especially since these captures were just me selecting some text in my IDE, nothing graphically intensive.
I have two classes right now a Main class and a "Monitor" class. The Monitor class contains the method for capturing the screen. My Main class has a loop based on time that calls the Monitor class and stores the BufferedImage it returns into an ArrayList of BufferedImages.
If I modify my main class to spawn several threads that each execute that loop and also record the system time at which each image was captured, could I increase performance? My idea is to use a shared data structure that automatically sorts the frames by capture time as I insert them, instead of a single loop that inserts successive images into an ArrayList.
Code:
Monitor
public class Monitor {
/**
* Returns a BufferedImage
* @return
*/
public BufferedImage captureScreen() {
Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
BufferedImage capture = null;
try {
capture = new Robot().createScreenCapture(screenRect);
} catch (AWTException e) {
e.printStackTrace();
}
return capture;
}
}
Main
public class Main {
public static void main(String[] args) throws InterruptedException {
String outputLocation = "C:\\Users\\ewillis\\Pictures\\screenstreamer\\";
String namingScheme = "image";
String mediaFormat = "jpeg";
DiscreteOutput output = DiscreteOutputFactory.createOutputObject(outputLocation, namingScheme, mediaFormat);
ArrayList<BufferedImage> images = new ArrayList<BufferedImage>();
Monitor m1 = new Monitor();
long startTimeMillis = System.currentTimeMillis();
long recordTimeMillis = 15000;
while( (System.currentTimeMillis() - startTimeMillis) <= recordTimeMillis ) {
images.add( m1.captureScreen() );
}
output.saveImages(images);
}
}
Re-using the screen rectangle and Robot class instances will save you a little overhead. The real bottleneck is storing all your BufferedImages in an ArrayList.
I would first benchmark how fast your robot.createScreenCapture(screenRect); call is without any IO (no saving or storing the buffered image). This will give you an ideal throughput for the robot class.
long frameCount = 0;
while( (System.currentTimeMillis() - startTimeMillis) <= recordTimeMillis ) {
BufferedImage image = m1.captureScreen();
if (image != null) {
frameCount++;
}
try {
Thread.yield();
} catch (Exception ex) {
}
}
If it turns out that captureScreen can reach the FPS you want there is no need to multi-thread robot instances.
Rather than having an array list of buffered images I'd have an array list of Futures from the AsynchronousFileChannel.write.
Capture loop:
1. Get a BufferedImage
2. Convert the BufferedImage to a byte array containing JPEG data
3. Create an async channel to the output file
4. Start a write and add the immediate return value (the Future) to your ArrayList
Wait loop:
1. Go through your ArrayList of Futures and make sure they all finished
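A sketch of that capture-then-async-write structure, reusing the Monitor and timing variables from the question; the JPEG encoding via ImageIO and the file naming are assumptions, not part of the original code:
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import javax.imageio.ImageIO;

static void captureAndWriteAsync(Monitor m1, String outputLocation, long recordTimeMillis) throws Exception {
  List<Future<Integer>> pendingWrites = new ArrayList<>();
  List<AsynchronousFileChannel> channels = new ArrayList<>();
  long startTimeMillis = System.currentTimeMillis();
  int frame = 0;
  while ((System.currentTimeMillis() - startTimeMillis) <= recordTimeMillis) {
    BufferedImage capture = m1.captureScreen();
    // encode to JPEG bytes in memory (this still costs CPU on the capture thread)
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ImageIO.write(capture, "jpeg", baos);
    ByteBuffer data = ByteBuffer.wrap(baos.toByteArray());
    // start an asynchronous write and keep the Future instead of the image
    AsynchronousFileChannel channel = AsynchronousFileChannel.open(
        Paths.get(outputLocation, "image" + frame++ + ".jpeg"),
        StandardOpenOption.CREATE, StandardOpenOption.WRITE);
    channels.add(channel);
    pendingWrites.add(channel.write(data, 0));
  }
  // wait loop: make sure every write finished, then close the channels
  for (Future<Integer> f : pendingWrites) {
    f.get();
  }
  for (AsynchronousFileChannel c : channels) {
    c.close();
  }
}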
I guess that the intensive memory usage is an issue here. You are capturing about 250 screenshots in your tests. Depending on the screen resolution, and at roughly 3 bytes per pixel, this is:
1280x800 : 250 * 1280*800 * 3/1024/1024 == 732 MB data
1920x1080: 250 * 1920*1080 * 3/1024/1024 == 1483 MB data
Try capturing without keeping all those images in memory.
As @Obicere said, it is a good idea to keep the Robot instance alive.
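For reference, keeping the Robot and the Rectangle alive as fields instead of recreating them on every call might look roughly like this (the same Monitor class as above, just restructured; imports included for completeness):
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

public class Monitor {
  private final Robot robot;          // created once and reused for every capture
  private final Rectangle screenRect; // full-screen region, computed once

  public Monitor() throws AWTException {
    robot = new Robot();
    screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
  }

  public BufferedImage captureScreen() {
    return robot.createScreenCapture(screenRect);
  }
}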
