How to add stereo, treble options in an audio equalizer? - java

I am trying to make a small audio equalizer for songs. I want to add treble and stereo options to it, like the ones in the Poweramp player.
I implemented an equalizer with 5 bands successfully like this:
public class FragmentEqualizer extends Fragment {
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,Bundle savedInstanceState) {
super.onCreateView(inflater,container,savedInstanceState);
equalizer = new EQ(getActivity(), new Equalizer(0,com.androidhive.musicplayer.AndroidBuildingMusicPlayerActivity.mp.getAudioSessionId()));
for(Bar bar : eqBars)
bar.setActiveEQ();
maximum= EQ.getEqualizer().getBandLevelRange()[1];
minimum= EQ.getEqualizer().getBandLevelRange()[0];
}
public void onActivityCreated(Bundle savedInstanceState) {
super.onActivityCreated(savedInstanceState);
lvforprest.setOnItemClickListener(new AdapterView.OnItemClickListener() {
@Override
public void onItemClick(AdapterView<?> parent, View view,int position, long id) {
btnformenu.setText(gtuforpresets.get(position).gtumcaFirstName);
if(position!=0 && position <=10)
{
try
{
EQ.getEqualizer().usePreset((short) (position-1));
EQ.getEqualizer().setBandLevel((short)0, EQ.getEqualizer().getBandLevel((short) 0));
EQ.getEqualizer().setBandLevel((short)1, EQ.getEqualizer().getBandLevel((short) 1));
EQ.getEqualizer().setBandLevel((short)2, EQ.getEqualizer().getBandLevel((short) 2));
EQ.getEqualizer().setBandLevel((short)3, EQ.getEqualizer().getBandLevel((short) 3));
EQ.getEqualizer().setBandLevel((short)4, EQ.getEqualizer().getBandLevel((short) 4));
eqBars.get(0).setEQPosition(EQ.getEqualizer().getBandLevel((short) 0));
eqBars.get(1).setEQPosition(EQ.getEqualizer().getBandLevel((short) 1));
eqBars.get(2).setEQPosition(EQ.getEqualizer().getBandLevel((short) 2));
eqBars.get(3).setEQPosition(EQ.getEqualizer().getBandLevel((short) 3));
eqBars.get(4).setEQPosition(EQ.getEqualizer().getBandLevel((short) 4));
seekbar1katop.setText(EQ.getEqualizer().getBandLevel((short) 0)+"mB");
seekbar2katop.setText(EQ.getEqualizer().getBandLevel((short) 1)+"mB");
seekbar3katop.setText(EQ.getEqualizer().getBandLevel((short) 2)+"mB");
seekbar4katop.setText(EQ.getEqualizer().getBandLevel((short) 3)+"mB");
seekbar5katop.setText(EQ.getEqualizer().getBandLevel((short) 4)+"mB");
}
catch(IllegalStateException e)
{
Toast.makeText(getActivity(),"Unable",Toast.LENGTH_SHORT).show();
}
catch(IllegalArgumentException e)
{
Toast.makeText(getActivity(),"Unable",Toast.LENGTH_SHORT).show();
}
catch(UnsupportedOperationException e)
{
Toast.makeText(getActivity(),"Unable",Toast.LENGTH_SHORT).show();
}
}
// Toast.makeText(getApplicationContext(),"You Clicked : " + mEqualizer.getEnabled(),Toast.LENGTH_SHORT).show();
}
});
}
}
The above code is just a short excerpt of my equalizer code; it won't work as-is, I only posted it here as an example.
I also want to add treble, stereo, and mono effects to my equalizer.
I already implemented bass boost like this:
public static void setBassBoost(BassBoost bassBoost, int percent) {
    try {
        // map percent (0..100) onto the 0..1000 strength range
        bassBoost.setStrength((short) ((short) 1000 / 100 * percent));
        bassBoost.setEnabled(true);
    } catch (Exception e) {
        // ignored: the device may not support strength control
    }
}

public static void setBassBoostOff(BassBoost bassBoost) {
    bassBoost.setEnabled(false);
}
I used a built-in class for the bass boost.
How can I add treble and stereo/mono effects to my app?

In order to change the bass, mid, and treble there's no need to use the AudioTrack object (especially since with that object you could only play back uncompressed PCM data).
You just need to adjust the proper frequency bands level using your Equalizer object. To get the number of available bands, just call:
myEqualizer.getNumberOfBands()
Considering the number of available bands, you can now set the level for each band using the following method:
myEqualizer.setBandLevel(band, level);
where:
band: the frequency band that will have the new gain. The numbering of the bands starts from 0 and ends at (number of bands - 1).
level: the new gain in millibels that will be set to the given band. getBandLevelRange() will define the maximum and minimum values.
The meaning of each band, from left to right, is summarized in the following image:
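Since the upper bands correspond to the higher frequencies, a treble control can be approximated by raising only those bands. Below is a minimal sketch (not the original code; the helper name, the 4 kHz cutoff and the 0..100 percent range are my own assumptions):

public static void setTreble(Equalizer equalizer, int percent) {
    short minLevel = equalizer.getBandLevelRange()[0];
    short maxLevel = equalizer.getBandLevelRange()[1];
    for (short band = 0; band < equalizer.getNumberOfBands(); band++) {
        // getCenterFreq() returns the band's center frequency in milliHertz,
        // so treat bands centered above ~4 kHz as the "treble" bands
        if (equalizer.getCenterFreq(band) >= 4000000) {
            short level = (short) (minLevel + (maxLevel - minLevel) * percent / 100);
            equalizer.setBandLevel(band, level);
        }
    }
}

With percent = 50 the upper bands sit in the middle of the supported range; with percent = 100 they are pushed to the maximum gain.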
UPDATE
To implement a trivial balance effect, just differentiate the left/right volume on your player (MediaPlayer, SoundPool,...):
mediaPlayer.setVolume(left, right)
To obtain a mono effect you can consider using a Virtualizer, which provides a stereo widening effect. You can set the strength of the virtualization effect using the method:
virtualizer.setStrength(1000); //range is [0..1000]
You need to read the documentation carefully in order to check if the current configuration of your virtualizer is really supported by the underlying system.
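A minimal sketch of that check, assuming mediaPlayer is the same player used above:

Virtualizer virtualizer = new Virtualizer(0, mediaPlayer.getAudioSessionId());
if (virtualizer.getStrengthSupported()) {
    virtualizer.setStrength((short) 1000); // range is [0..1000]
}
virtualizer.setEnabled(true);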
Anyway, this is not a real mono output, and I think you won't be able to obtain a mono output on stereo speakers without using a low-level API such as AudioTrack (Poweramp actually relies on native JNI libraries for its audio pipeline).
If you want to use an AudioTrack for playback you need to consider that it only supports PCM data (WAV) as input: this means you won't be able to play compressed audio files (like MP3, FLAC, ...) directly, since you need to manually decode the compressed audio file first.
[Compressed File (MP3)] ===> decode() ===> [PCM data] ===> customEffect() ===> AudioTrack playback()
Thus, in order to play compressed audio using an AudioTrack (and possibly create a custom effect), the following steps are required:
decode the compressed file using a decoder (NO PUBLIC SYSTEM API is available for this, you need to do it manually!);
if necessary, transform the uncompressed data into a PCM format which is compatible with AudioTrack;
(optionally) apply your transformation on the PCM data stream (e.g. you can merge both L/R channels and create a mono effect);
play the PCM stream using an AudioTrack.
I suggest you skip this effect ;)
Regarding the bass-boost effect, you need to check whether your current configuration is supported by the running device (just like with the virtualizer). Take a look here for more info on this.
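The same guard applies to the bass boost; a hedged sketch (the 500 strength value is arbitrary):

BassBoost bassBoost = new BassBoost(0, mediaPlayer.getAudioSessionId());
if (bassBoost.getStrengthSupported()) {
    bassBoost.setStrength((short) 500); // range is [0..1000]
}
bassBoost.setEnabled(true);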

Related

JFugue: Is there a way to get the current note that a player() is on?

I am making a virtual piano project and I have some transcribed sample songs. I would like to know if I could get the current note that the player is on, so that it could be displayed visually on a piano.
Edit: I'm still learning Java, so sorry in advance if I need some more explanation than usual.
You can create a ParserListener to listen for any musical event that any parser is parsing. I have adjusted one of the examples to print out the note position in an octave. You can modify this to find out exactly which note is pressed:
public class ParserDemo {
    public static void main(String[] args) throws InvalidMidiDataException, IOException {
        MidiParser parser = new MidiParser(); // Remember, you can use any Parser!
        MyParserListener listener = new MyParserListener();
        parser.addParserListener(listener);
        parser.parse(MidiSystem.getSequence(new File(PUT A MIDI FILE HERE)));
    }
}

// Extend the ParserListenerAdapter and override the onNoteParsed event to find the current note
class MyParserListener extends ParserListenerAdapter {
    @Override
    public void onNoteParsed(Note note) {
        // A "C" note is in the 0th position of an octave
        System.out.println("Note pushed at position " + note.getPositionInOctave());
    }
}
Source: http://www.jfugue.org/examples.html
When JFugue is playing music, it is using javax.sound.midi.Sequencer to playback a MIDI Sequence. That means you can listen to the MIDI events themselves using a Receiver on the same MidiDevice, and since MIDI is a system-wide resource on your computer, you can even do this outside of Java.
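A minimal sketch of that idea (the sequencer variable is assumed to be the javax.sound.midi.Sequencer doing the playback):

import javax.sound.midi.*;

class NoteOnReceiver implements Receiver {
    @Override
    public void send(MidiMessage message, long timeStamp) {
        if (message instanceof ShortMessage) {
            ShortMessage sm = (ShortMessage) message;
            // NOTE_ON with velocity > 0 means a key was just pressed
            if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
                System.out.println("Key pressed: MIDI note " + sm.getData1());
            }
        }
    }

    @Override
    public void close() { }
}

// Usage: sequencer.getTransmitter().setReceiver(new NoteOnReceiver());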

Why the size of audio after being recorded using a post delayed handler sometimes not the same

I am making a simple audio recording application. I want all the audio files to have the same duration, which is why I followed this article and used a post-delayed Handler to make stopRecording fire automatically after 3000 milliseconds. Here is my current code to start recording:
@Override
public void onClick(View v) {
    switch (v.getId()) {
        case R.id.btnStart: {
            AppLog.logString("Start Recording");
            startRecording();
            new Handler().postDelayed(new Runnable() {
                @Override
                public void run() {
                    stopRecording();
                    enableButtons(false);
                    AppLog.logString("Stop Recording");
                    Toast.makeText(MainActivity.this, "File name: " + getFilename(),
                            Toast.LENGTH_SHORT).show();
                }
            }, 3000);
            break;
        }
    }
}
All audio files are stored in internal memory; this is a picture of all the audio files that I recorded:
My question is: do all the audio files (sampletest1.wav - sampletest6.wav) have the same duration even though their file sizes are different? And why did this happen?
The simple answer is, you won't get the accuracy you're expecting using timers such as postDelayed. The files you've shown are different in length; the difference between the longest (519 KB) and shortest (512 KB) is about 40 milliseconds.
Why are they different? Because the processor, which is measuring the 3000 ms and calling your handler, is doing a lot of other work too, servicing the operating system and other applications.
Incidentally, given the 44.1 kHz sample rate and the sizes shown, I guess the sample size is 32 bits. Exactly 3 seconds of audio would have a data size of:
44100 x 4 x 3 = 529,200 bytes
(ignoring the WAV header, which is normally only about 44 bytes). This is 516.8 KB.
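If you need the files to be exactly the same length, one option is to stop based on the number of PCM bytes written rather than on a timer. A minimal sketch, assuming you record with an AudioRecord into an output stream (audioRecord and outputStream are placeholders for whatever the article's recorder uses):

int sampleRate = 44100;
int bytesPerSample = 4; // 32-bit samples, as suggested by the file sizes above
long targetBytes = (long) sampleRate * bytesPerSample * 3; // exactly 3 seconds of audio
long written = 0;
byte[] buffer = new byte[4096];
while (written < targetBytes) {
    int toRead = (int) Math.min(buffer.length, targetBytes - written);
    int read = audioRecord.read(buffer, 0, toRead);
    if (read <= 0) break; // recorder stopped or failed
    outputStream.write(buffer, 0, read);
    written += read;
}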

Increase/decrease audio play speed of AudioInputStream with Java

Getting into the complex world of audio using Java, I am using this library, which I basically improved and published on GitHub.
The main class of the library is StreamPlayer, and the code has comments and is straightforward to understand.
The problem is that it supports many functionalities, but not increasing/decreasing the audio speed, like YouTube does when you change the video speed.
I have no clue how I can implement such functionality. I mean, what can I do? Change the sample rate of targetFormat when writing the audio? Then I have to restart the audio again and again every time....
AudioFormat targetFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, sourceFormat.getSampleRate()*2, nSampleSizeInBits, sourceFormat.getChannels(),
nSampleSizeInBits / 8 * sourceFormat.getChannels(), sourceFormat.getSampleRate(), false);
The code of playing the audio is:
/**
* Main loop.
*
* Player Status == STOPPED || SEEKING = End of Thread + Freeing Audio Resources.<br>
* Player Status == PLAYING = Audio stream data sent to Audio line.<br>
* Player Status == PAUSED = Waiting for another status.
*/
@Override
public Void call() {
// int readBytes = 1
// byte[] abData = new byte[EXTERNAL_BUFFER_SIZE]
int nBytesRead = 0;
int audioDataLength = EXTERNAL_BUFFER_SIZE;
ByteBuffer audioDataBuffer = ByteBuffer.allocate(audioDataLength);
audioDataBuffer.order(ByteOrder.LITTLE_ENDIAN);
// Lock stream while playing.
synchronized (audioLock) {
// Main play/pause loop.
while ( ( nBytesRead != -1 ) && status != Status.STOPPED && status != Status.SEEKING && status != Status.NOT_SPECIFIED) {
try {
//Playing?
if (status == Status.PLAYING) {
// System.out.println("Inside Stream Player Run method")
int toRead = audioDataLength;
int totalRead = 0;
// Reads up a specified maximum number of bytes from audio stream
//wtf i have written here xaxaxoaxoao omg //to fix! cause it is complicated
for (; toRead > 0
&& ( nBytesRead = audioInputStream.read(audioDataBuffer.array(), totalRead, toRead) ) != -1; toRead -= nBytesRead, totalRead += nBytesRead)
// Check for under run
if (sourceDataLine.available() >= sourceDataLine.getBufferSize())
logger.info(() -> "Underrun> Available=" + sourceDataLine.available() + " , SourceDataLineBuffer=" + sourceDataLine.getBufferSize());
//Check if anything has been read
if (totalRead > 0) {
trimBuffer = audioDataBuffer.array();
if (totalRead < trimBuffer.length) {
trimBuffer = new byte[totalRead];
//Copies an array from the specified source array, beginning at the specified position, to the specified position of the destination array
// The number of components copied is equal to the length argument.
System.arraycopy(audioDataBuffer.array(), 0, trimBuffer, 0, totalRead);
}
//Writes audio data to the mixer via this source data line
sourceDataLine.write(trimBuffer, 0, totalRead);
// Compute position in bytes in encoded stream.
int nEncodedBytes = getEncodedStreamPosition();
// Notify all registered Listeners
listeners.forEach(listener -> {
if (audioInputStream instanceof PropertiesContainer) {
// Pass audio parameters such as instant
// bit rate, ...
listener.progress(nEncodedBytes, sourceDataLine.getMicrosecondPosition(), trimBuffer, ( (PropertiesContainer) audioInputStream ).properties());
} else
// Pass audio parameters
listener.progress(nEncodedBytes, sourceDataLine.getMicrosecondPosition(), trimBuffer, emptyMap);
});
}
} else if (status == Status.PAUSED) {
//Flush and stop the source data line
if (sourceDataLine != null && sourceDataLine.isRunning()) {
sourceDataLine.flush();
sourceDataLine.stop();
}
try {
while (status == Status.PAUSED) {
Thread.sleep(50);
}
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
logger.warning("Thread cannot sleep.\n" + ex);
}
}
} catch (IOException ex) {
logger.log(Level.WARNING, "\"Decoder Exception: \" ", ex);
status = Status.STOPPED;
generateEvent(Status.STOPPED, getEncodedStreamPosition(), null);
}
}
// Free audio resources.
if (sourceDataLine != null) {
sourceDataLine.drain();
sourceDataLine.stop();
sourceDataLine.close();
sourceDataLine = null;
}
// Close stream.
closeStream();
// Notification of "End Of Media"
if (nBytesRead == -1)
generateEvent(Status.EOM, AudioSystem.NOT_SPECIFIED, null);
}
//Generate Event
status = Status.STOPPED;
generateEvent(Status.STOPPED, AudioSystem.NOT_SPECIFIED, null);
//Log
logger.info("Decoding thread completed");
return null;
}
Feel free to download and check out the library alone if you want. :) I need some help on this... Library link.
Short answer:
For speeding up a single person speaking, use my Sonic.java native Java implementation of my Sonic algorithm. An example of how to use it is in Main.Java. A C-language version of the same algorithm is used by Android's AudioTrack. For speeding up music or movies, find a WSOLA based library.
Bloated answer:
Speeding up speech is more complex than it sounds. Simply increasing the sample rate without adjusting the samples will cause speakers to sound like chipmunks. There are basically two good schemes for linearly speeding up speech that I have listened to: fixed-frame based schemes like WSOLA, and pitch-synchronous schemes like PICOLA, which is used by Sonic for speeds up to 2X. One other scheme I've listened to is FFT-based, and IMO those implementations should be avoided. I hear rumor that FFT-based can be done well, but no open-source version I am aware of was usable the last time I checked, probably in 2014.
I had to invent a new algorithm for speeds greater than 2X, since PICOLA simply drops entire pitch periods, which works well so long as you don't drop two pitch periods in a row. For faster than 2X, Sonic mixes in a portion of samples from each input pitch period, retaining some frequency information from each. This works well for most speech, though some languages such as Hungarian appear to have parts of speech so short that even PICOLA mangles some phonemes. However, the general rule that you can drop one pitch period without mangling phonemes seems to work well most of the time.
Pitch-synchronous schemes focus on one speaker, and will generally make that speaker clearer than fixed-frame schemes, at the expense of butchering non-speech sounds. However, the improvement of pitch-synchronous schemes over fixed-frame schemes is hard to hear at speeds less than about 1.5X for most speakers. This is because fixed-frame algorithms like WSOLA basically emulate pitch-synchronous schemes like PICOLA when there is only one speaker and no more than one pitch period needs to be dropped per frame. The math works out basically the same in this case if WSOLA is tuned well to the speaker. For example, if it is able to select a sound segment of +/- one frame in time, then a 50ms fixed frame will allow WSOLA to emulate PICOLA for most speakers who have a fundamental pitch > 100 Hz. However, a male with a deep voice of say 95 Hz would be butchered with WSOLA using those settings. Also, parts of speech, such as at the end of a sentence, where our fundamental pitch drops significantly, can also be butchered by WSOLA when parameters are not optimally tuned. Also, WSOLA generally falls apart for speeds greater than 2X, where, like PICOLA, it starts dropping multiple pitch periods in a row.
On the positive side, WSOLA will make most sounds including music understandable, if not high fidelity. Taking non-harmonic multi-voice sounds and changing the speed without introducing substantial distortion is impossible with overlap-and-add (OLA) schemes like WSOLA and PICOLA.
Doing this well would require separating the different voices, changing their speeds independently, and mixing the results together. However, most music is harmonic enough to sound OK with WSOLA.
It turns out that the poor quality of WSOLA at > 2X is one reason folks rarely listen at higher speeds than 2X. Folks simply don't like it. Once Audible.com switched from WSOLA to a Sonic-like algorithm on Android, they were able to increase the supported speed range from 2X to 3X. I haven't listened on iOS in the last few years, but as of 2014, Audible.com on iOS was miserable to listen to at 3X speed, since they used the built-in iOS WSOLA library. They've likely fixed it since then.
Looking at the library you linked, it doesn't seem like a good place to start specifically for this playback speed issue; is there any reason you aren't using AudioTrack? It seems to support everything you need.
EDIT 1: AudioTrack is Android-specific, but the OP's question is desktop Java SE based; I will only leave this here for future reference.
1. Using AudioTrack and adjusting playback speed (Android)
Thanks to an answer on another SO post (here), there is a class which uses the built-in AudioTrack to handle speed adjustment during playback.
public class AudioActivity extends Activity {
AudioTrack audio = new AudioTrack(AudioManager.STREAM_MUSIC,
44100,
AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT,
intSize, //size of pcm file to read in bytes
AudioTrack.MODE_STATIC);
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//read track from file
File file = new File(getFilesDir(), fileName);
int size = (int) file.length();
byte[] data = new byte[size];
try {
FileInputStream fileInputStream = new FileInputStream(file);
fileInputStream.read(data, 0, size);
fileInputStream.close();
audio.write(data, 0, data.length);
} catch (IOException e) {}
}
//change playback speed by factor
void changeSpeed(double factor) {
audio.setPlaybackRate((int) (audio.getPlaybackRate() * factor));
}
}
This just uses a file to stream the whole file in one write command, but you could adjust it otherwise (the setPlaybackRate method is the main part you need).
2. Applying your own playback speed adjustment
The theory of adjusting playback speed is with two methods:
Adjust the sample rate
Change the number of samples per unit time
Since you are using the initial sample rate (because I'm assuming you have to reset the library and stop the audio when you adjust the sample rate?), you will have to adjust the number of samples per unit time.
For example, to speed up an audio buffer's playback you can use this pseudo code (Python-style), found thanks to Coobird (here).
original_samples = [0, 0.1, 0.2, 0.3, 0.4, 0.5]
def faster(samples):
new_samples = []
for i = 0 to samples.length:
if i is even:
new_samples.add(0.5 * (samples[i] + samples[i+1]))
return new_samples
faster_samples = faster(original_samples)
This is just one example of speeding up the playback and is not the only algorithm on how to do so, but one to get started on. Once you have calculated your sped up buffer you can then write this to your audio output and the data will playback at whatever scaling you choose to apply.
To slow down the audio, apply the opposite by adding data points between the current buffer values with interpolation as desired.
Please note that when adjusting playback speed it is often worth low pass filtering at the maximum frequency desired to avoid unnecessary artifacts.
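For reference, a minimal Java version of the pseudo code above, assuming the 16-bit PCM samples have already been unpacked into a short[] (averaging adjacent pairs halves the sample count, so the buffer plays back at roughly 2x speed with the pitch raised accordingly):

static short[] faster(short[] samples) {
    short[] out = new short[samples.length / 2];
    for (int i = 0; i < out.length; i++) {
        // average each pair of neighbouring samples into one output sample
        out[i] = (short) ((samples[2 * i] + samples[2 * i + 1]) / 2);
    }
    return out;
}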
As you can see the second attempt is a much more challenging task as it requires you implementing such functionality yourself, so I would probably use the first but thought it was worth mentioning the second.

Slow Multithreading in Java - Air Percussion project

I am creating "Air Percussion" using IMU sensors and Arduino to communicate with computer (3 separate IMUs and Arduinos). They are connected to the computer through USBs. I am gathering data on separate Threads (each thread for each sensor). When I connect only one "set" my program is working really fast. I can get even 5 plays of sound per second. Unfortunatelly when i am trying to connect 3 sensors and run them on separate Threads at the same time my program slows down horribly. Even when im moving only one of sensors, I can get like 1 "hit" per second and sometimes it's even losing some of the sounds it should play. I'll show only important parts of the code below.
In main I've got an ActionListener for the button that should start gathering the data. There I run 3 separate threads, one for each USB port.
connectButton.addActionListener(new ActionListener(){
@Override public void actionPerformed(ActionEvent arg0) {
int dialogButton = 1;
if(!flagaKalibracjiLewa || !flagaKalibracjiPrawa){ //some unimportant flags
dialogButton = JOptionPane.showConfirmDialog(null, "Rozpoczynając program bez kalibracji będziesz miał do dyspozycji mniejszą ilość dzwięków. Czy chcesz kontynuować?","Warning",JOptionPane.YES_NO_OPTION); // "Starting the program without calibration you will have fewer sounds available. Do you want to continue?"
}else{
dialogButton = JOptionPane.YES_OPTION;
}
if(dialogButton == JOptionPane.YES_OPTION){
if(connectButton.getText().equals("Connect")) {
if(!flagaKalibracjiLewa && !flagaKalibracjiPrawa) podlaczPorty();
Thread thread = new Thread(){
@Override public void run() {
Scanner data = new Scanner(chosenPort.getInputStream());
dataIncoming(data, "lewa");
data.close();
}
};
Thread thread2 = new Thread(){
@Override public void run() {
Scanner data = new Scanner(chosenPort2.getInputStream());
dataIncoming(data, "prawa");
data.close();
}
};
Thread thread3 = new Thread(){
@Override public void run() {
Scanner data = new Scanner(chosenPort3.getInputStream());
dataIncoming(data, "stopa");
data.close();
}
};
thread.start();
thread2.start();
thread3.start();
connectButton.setText("Disconnect");
} else {
// disconnect from the serial port
chosenPort.closePort();
chosenPort2.closePort();
chosenPort3.closePort();
portList.setEnabled(true);
portList2.setEnabled(true);
portList3.setEnabled(true);
connectButton.setText("Connect");
}
}
}
});
in "dataIncoming" method there is bunch of not important things (like picking, which sound should be played etc.). The important part is in the while loop. In the "while" im gathering next lines of data from sensor. When one of the values is higher than something it should play a sound but only if some time has passed and the sensor has moved a certain way. (when the drumstick is going down the "imuValues[4]" is increasing, when its going up its decreasing, so when its past 160 it means that the player has taken the drumstick up so its ready for the next hit)
while(data.hasNextLine()) {
try{
imuValues = data.nextLine().split(",");
if(Double.parseDouble(imuValues[4])>200 && flagaThreada) {
flagaThreada = false;
playSound(sound1);
}
if(Double.parseDouble(imuValues[4])<160 && System.currentTimeMillis()-startTime>100) {
flagaThreada = true;
startTime=System.currentTimeMillis();
}
}catch(Exception e){
System.out.println("ERROR");
}
}
and finally the method for playing the sound is :
public static synchronized void playSound(String sound) {
    try {
        String url = "/sounds/" + sound + ".wav";
        Clip clip = AudioSystem.getClip();
        AudioInputStream inputStream = AudioSystem.getAudioInputStream(
                Main.class.getResourceAsStream(url));
        clip.open(inputStream);
        clip.start();
    } catch (Exception e) {
        System.err.println("ERROR IN OPENING");
    }
}
Is my computer too slow to compute and play sounds for 3 sensors at the same time? Or is there a way to create those threads in a better fashion?
I wrote a version of Clip, called AudioCue, which allows multi-threading on the play commands. It is open source, BSD license (free), consists of three files which you can cut and paste into your program. There is also an API link for it. More info at AudioCue. The site has code examples as well as link to API and source code. There is also some dialogue about its use at Java-gaming.org, under the "Sound" topic thread.
The basic principle behind the code is to make the audio data available in a float array, and send multiple, independent "cursors" through it (one per play command). The setup lets us also do real time volume fading, pitch changes and panning. The audio is output via a SourceDataLine which you can configure (set thread priority, buffer size).
I'm maybe a week or two away from sharing a more advanced version that allows all AudioCues to be mixed through a single output line. This version has five classes/interfaces instead of three, and is being set up for release on github. I'm also hoping to get a donate button and the like set up for this next iteration. The next version might be more useful for Arduino in that I believe you are only allowed up to 8 audio outputs on that system.
Other than that, the steps you have taken (separating the open from the play, using setFramePosition for restarts) are correct. I can't think of anything else to add to help out besides writing your own mixer/cue player (as I have done and am willing to share).
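For completeness, a minimal sketch of the "open once, replay many times" pattern mentioned above (class and method names are illustrative, not from AudioCue): each drum sample is decoded into a Clip once, and every hit just rewinds and restarts it instead of reopening the stream.

import java.io.BufferedInputStream;
import javax.sound.sampled.*;

public class DrumSample {
    private final Clip clip;

    public DrumSample(String resourcePath) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(
                new BufferedInputStream(DrumSample.class.getResourceAsStream(resourcePath)));
        clip = AudioSystem.getClip();
        clip.open(in); // decoded once and kept in memory
    }

    public synchronized void hit() {
        if (clip.isRunning()) {
            clip.stop(); // cut off the previous hit
        }
        clip.setFramePosition(0);
        clip.start();
    }
}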

Increasing screen capture speed when using Java and awt.Robot

Edit: If anyone has any other recommendations for increasing screen capture performance, please feel free to share, as it might fully address my problem!
Hello Fellow Developers,
I'm working on some basic screen capture software for myself. As of right now I've got some proof of concept/tinkering code that uses java.awt.Robot to capture the screen as a BufferedImage. Then I do this capture for a specified amount of time and afterwards dump all of the pictures to disk. From my tests I'm getting about 17 frames per second.
Trial #1
Length: 15 seconds
Images Captured: 255
Trial #2
Length: 15 seconds
Images Captured: 229
Obviously this isn't nearly good enough for a real screen capture application, especially since these captures were of me just selecting some text in my IDE and nothing graphically intensive.
I have two classes right now: a Main class and a "Monitor" class. The Monitor class contains the method for capturing the screen. My Main class has a time-based loop that calls the Monitor class and stores the BufferedImage it returns in an ArrayList of BufferedImages.
If I modify my main class to spawn several threads that each execute that loop and also collect the system time at which each image was captured, could I increase performance? My idea is to use a shared data structure that automatically sorts the frames by capture time as I insert them, instead of a single loop that inserts successive images into an ArrayList.
Code:
Monitor
public class Monitor {
    /**
     * Captures the current screen contents.
     * @return the captured screen as a BufferedImage, or null if the capture failed
     */
    public BufferedImage captureScreen() {
        Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        BufferedImage capture = null;
        try {
            capture = new Robot().createScreenCapture(screenRect);
        } catch (AWTException e) {
            e.printStackTrace();
        }
        return capture;
    }
}
Main
public class Main {
public static void main(String[] args) throws InterruptedException {
String outputLocation = "C:\\Users\\ewillis\\Pictures\\screenstreamer\\";
String namingScheme = "image";
String mediaFormat = "jpeg";
DiscreteOutput output = DiscreteOutputFactory.createOutputObject(outputLocation, namingScheme, mediaFormat);
ArrayList<BufferedImage> images = new ArrayList<BufferedImage>();
Monitor m1 = new Monitor();
long startTimeMillis = System.currentTimeMillis();
long recordTimeMillis = 15000;
while( (System.currentTimeMillis() - startTimeMillis) <= recordTimeMillis ) {
images.add( m1.captureScreen() );
}
output.saveImages(images);
}
}
Re-using the screen rectangle and robot class instances will save you a little overhead. The real bottleneck is storing all your BufferedImage's into an array list.
I would first benchmark how fast your robot.createScreenCapture(screenRect); call is without any IO (no saving or storing the buffered image). This will give you an ideal throughput for the robot class.
long frameCount = 0;
while ((System.currentTimeMillis() - startTimeMillis) <= recordTimeMillis) {
    BufferedImage image = m1.captureScreen();
    if (image != null) {
        frameCount++;
    }
    Thread.yield();
}
If it turns out that captureScreen can reach the FPS you want there is no need to multi-thread robot instances.
Rather than having an ArrayList of BufferedImages, I'd have an ArrayList of Futures from AsynchronousFileChannel.write; a minimal sketch of this approach follows the outline below.
Capture loop:
    Get a BufferedImage
    Convert the BufferedImage to a byte array containing JPEG data
    Create an async channel to the output file
    Start a write and add the immediate return value (the Future) to your ArrayList
Wait loop:
    Go through your ArrayList of Futures and make sure they all finished
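A minimal sketch of that outline (naming and error handling are illustrative; it assumes the Monitor class from the question and needs java.nio.channels.AsynchronousFileChannel, java.nio.file.*, javax.imageio.ImageIO and java.util.concurrent.Future):

static void captureAsync(Monitor m1, String outputLocation, long recordTimeMillis) throws Exception {
    List<AsynchronousFileChannel> channels = new ArrayList<>();
    List<Future<Integer>> pendingWrites = new ArrayList<>();
    long startTimeMillis = System.currentTimeMillis();
    int frame = 0;
    while ((System.currentTimeMillis() - startTimeMillis) <= recordTimeMillis) {
        BufferedImage image = m1.captureScreen();
        if (image == null) continue; // capture failed, skip this frame
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageIO.write(image, "jpeg", baos); // encode the frame to JPEG bytes in memory
        AsynchronousFileChannel channel = AsynchronousFileChannel.open(
                Paths.get(outputLocation, "image" + frame++ + ".jpeg"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        channels.add(channel);
        pendingWrites.add(channel.write(ByteBuffer.wrap(baos.toByteArray()), 0));
    }
    for (int i = 0; i < pendingWrites.size(); i++) {
        pendingWrites.get(i).get(); // block until this frame's write has completed
        channels.get(i).close();
    }
}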
I guess that the intensive memory usage is an issue here. You are capturing in your tests about 250 screenshots. Depending on the screen resolution, this is:
1280x800 : 250 * 1280*800 * 3/1024/1024 == 732 MB data
1920x1080: 250 * 1920*1080 * 3/1024/1024 == 1483 MB data
Try capturing without keeping all those images in memory.
As @Obicere said, it is a good idea to keep the Robot instance alive.
