I have this code:
Synthesizer synthesizer = MidiSystem.getSynthesizer();
synthesizer.open();
Instrument[] instrument = synthesizer.getDefaultSoundbank().getInstruments();
synthesizer.loadInstrument(instrument[29]);
MidiChannel[] channels = synthesizer.getChannels();
MidiChannel channel = channels[1];
channel.programChange(29);
channel.noteOn(noteNumber, 127);
Teszthang.sleep(2000);
channel.noteOff(noteNumber);
This example plays a note at maximum velocity (127) for 2 seconds. But I want to control the channel's volume, so that after the 2 seconds the volume fades out over another 2 seconds. How could I do that? I know these methods:
channel.controlChange(controller, value);
channel.setPolyPressure(noteNumber, pressure);
but neither of these seems to change the volume, and I don't know how to use them. How can I change the channel's volume after noteOn(), while the note is still playing?
You can use CC 7 for setting channel volume.
channel.controlChange(7, value);
see: http://improv.sapp.org/doc/class/MidiOutput/controllers/controllers.html
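For the fade-out, here is a minimal sketch, assuming the channel and noteNumber setup from the question. The step size and sleep interval are arbitrary choices that add up to roughly 2 seconds, and handling of Thread.sleep's InterruptedException is omitted:
channel.noteOn(noteNumber, 127);
Thread.sleep(2000); // hold at full volume for 2 seconds
for (int v = 127; v >= 0; v -= 8) { // 16 steps of 125 ms each, about 2 seconds total
    channel.controlChange(7, v); // CC 7 = channel volume
    Thread.sleep(125);
}
channel.noteOff(noteNumber);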
Sometimes the MIDI file itself contains volume (CC 7) events, so your channel volume changes get overridden.
After getting the sequence, remove those volume events:
Track[] tracks = sequence.getTracks();
for (Track track : tracks) {
    // iterate backwards so removals don't shift the indices still to be visited
    for (int i = track.size() - 1; i >= 0; i--) {
        MidiMessage m = track.get(i).getMessage();
        // volume events are control changes with controller number 7
        if (m instanceof ShortMessage
                && ((ShortMessage) m).getCommand() == ShortMessage.CONTROL_CHANGE
                && ((ShortMessage) m).getData1() == 7) {
            if (!track.remove(track.get(i)))
                System.out.println("MIDI event not removed");
        }
    }
}
I am writing an application that records MIDI data from a keyboard.
It should be able to record multiple tracks and store them in one MIDI file.
It is controlled with three buttons - "record", "stop recording" and "write to a MIDI file".
I followed this tutorial on StackOverflow to write the recording method.
Everything was going well until I tried to record a second track. Track #2 is recorded, but it is placed in the sequence after track #1, i.e. they don't play simultaneously in the MIDI file, and if I try to play only track #2, there is a lot of blank space at its beginning.
I read that I have to use
Sequencer.setTickPosition(0);
which I did, but without any effect.
Here are parts of the code:
The user presses the "record" button:
sequencer.open();
sequencer.setSequence(mySequence);
newTrack = mySequence.createTrack();
sequencer.setTickPosition(0);
sequencer.recordEnable(newTrack, -1);
sequencer.startRecording();
When "stop recording" button is pressed:
sequencer.stopRecording();
sequencer.recordDisable(newTrack);
And finally, when the "write to MIDI file" button is pressed:
Sequence tmp = sequencer.getSequence();
MidiSystem.write(tmp, 1, new File("Sequence.MID"));
//*******************************************************************
Here is my full code:
private MidiDevice inputDevice; // selected from a combo box
private Sequencer sequencer = MidiSystem.getSequencer();
private Transmitter transmitter = inputDevice.getTransmitter();
private Receiver receiver = sequencer.getReceiver();
transmitter.setReceiver(receiver);
private Sequence seq = new Sequence(Sequence.PPQ,24);
private Track newTrack;
/*
.
Buttons added
.
*/
public void actionPerformed( ActionEvent e ){
Object source = e.getSource();
if (source == record){
sequencer.open();
sequencer.setSequence(seq);
newTrack = seq.createTrack();
sequencer.setTickPosition(0);
sequencer.recordEnable(newTrack, -1);
sequencer.startRecording();
}
//*********************************************************************
if (source == stop){
sequencer.stopRecording();
sequencer.recordDisable(newTrack);
}
//******************************************************************
if(source == write){
Sequence tmp = sequencer.getSequence();
MidiSystem.write(tmp, 1, new File("MyMidiFile1.mid"));
}
}
Thanks in advance for any advice!
I'm trying to create an AudioPlayer with a buffer-queue source and an output-mix sink. I've configured the source with a PCM format very similar to the one shown in the NDK samples, but OpenSL is rejecting SL_DATAFORMAT_PCM ("data format 2"). This doesn't make any sense to me.
Here's the error (on a Samsung Galaxy S2):
02-27 15:43:47.315: E/libOpenSLES(12681): pAudioSrc: data format 2 not allowed
02-27 15:43:47.315: W/libOpenSLES(12681): Leaving Engine::CreateAudioPlayer (SL_RESULT_CONTENT_UNSUPPORTED)
and here's the relevant code:
SLuint32 channels = 2;
SLuint32 speakers = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
SLuint32 sr = SL_SAMPLINGRATE_48;
//...
SLDataFormat_PCM format_pcm = {
    SL_DATAFORMAT_PCM,           // formatType
    channels,                    // numChannels
    sr,                          // samplesPerSec (in milliHz, like the SL_SAMPLINGRATE_* constants)
    SL_PCMSAMPLEFORMAT_FIXED_16, // bitsPerSample
    SL_PCMSAMPLEFORMAT_FIXED_16, // containerSize
    speakers,                    // channelMask
    SL_BYTEORDER_LITTLEENDIAN    // endianness
};
// Configure audio player source
SLDataLocator_AndroidBufferQueue loc_bufq =
{SL_DATALOCATOR_ANDROIDBUFFERQUEUE, 2};
SLDataSource audioSrc = {&loc_bufq, &format_pcm};
// configure audio player sink
SLDataLocator_OutputMix loc_outmix =
{SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
SLDataSink audioSnk = {&loc_outmix, NULL};
// create audio player
const SLInterfaceID iidsOutPlayer[] = {SL_IID_ANDROIDBUFFERQUEUESOURCE};
const SLboolean reqsOutPlayer[] = {SL_BOOLEAN_TRUE};
result = (*engineItf)->CreateAudioPlayer(
engineItf,
&(outPlayerObject),
&audioSrc, &audioSnk,
1, iidsOutPlayer,reqsOutPlayer);
Does anyone know what's causing this? Thanks!
Maybe the audio sampling rate is not supported on your device. Try SL_SAMPLINGRATE_8 instead of SL_SAMPLINGRATE_48, or pick another device (Nexus 4/HTC One) to test on.
If you hear distorted sound during voice communication, increase your recorder buffer size; you may also try changing the sampling rate. Other sampling-rate options are SL_SAMPLINGRATE_16, SL_SAMPLINGRATE_32, SL_SAMPLINGRATE_44_1, etc.
Each Android device has a specific preferred buffer size and sampling rate. You can obtain them from the following Java code. Note that audioManager.getProperty() doesn't work below API level 17.
AudioManager audioManager = (AudioManager) this.getSystemService(Context.AUDIO_SERVICE);
String rate = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
String size = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
Log.d("Buffer Size and sample rate", "Size :" + size + " & Rate: " + rate);
It turns out I needed to be using SLDataLocator_AndroidSimpleBufferQueue instead of SLDataLocator_AndroidBufferQueue: on Android, a PCM buffer-queue source must use the simple buffer queue locator (with the matching SL_IID_ANDROIDSIMPLEBUFFERQUEUE interface); the plain Android buffer queue locator is not accepted for PCM data.
TargetDataLine is, for me so far, the easiest way to capture microphone input in Java. I want to encode the audio that I capture together with a video of the screen [in a screen-recorder application] so that the user can create a tutorial, slidecast, etc.
I use Xuggler to encode the video.
They do have a tutorial on encoding audio with video but they take their audio from a file. In my case, the audio is live.
To encode the video I use com.xuggle.mediatool.IMediaWriter. The IMediaWriter object allows me to add a video stream and has an
encodeAudio(int streamIndex, short[] samples, long timeStamp, TimeUnit timeUnit)
method. I could use that if I could get the samples from the TargetDataLine as short[], but it returns byte[].
So two questions are:
How can I encode the live audio with video?
How do I maintain the proper timing of the audio packets so that they are encoded at the proper time?
References:
1. JavaDoc for TargetDataLine: http://docs.oracle.com/javase/1.4.2/docs/api/javax/sound/sampled/TargetDataLine.html
2. Xuggler Documentation: http://build.xuggle.com/view/Stable/job/xuggler_jdk5_stable/javadoc/java/api/index.html
Update
My code for capturing video
public void run() {
    final IRational FRAME_RATE = IRational.make(frameRate, 1);
    final IMediaWriter writer = ToolFactory.makeWriter(completeFileName);
    writer.addVideoStream(0, 0, FRAME_RATE, recordingArea.width, recordingArea.height);
    long startTime = System.nanoTime();
    while (keepCapturing) {
        image = bot.createScreenCapture(recordingArea);
        PointerInfo pointerInfo = MouseInfo.getPointerInfo();
        Point globalPosition = pointerInfo.getLocation();
        int relativeX = globalPosition.x - recordingArea.x;
        int relativeY = globalPosition.y - recordingArea.y;
        BufferedImage bgr = convertToType(image, BufferedImage.TYPE_3BYTE_BGR);
        if (cursor != null) {
            bgr.getGraphics().drawImage(((ImageIcon) cursor).getImage(), relativeX, relativeY, null);
        }
        try {
            writer.encodeVideo(0, bgr, System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
        } catch (Exception e) {
            writer.close();
            JOptionPane.showMessageDialog(null,
                    "Recording will stop abruptly because an error has occurred",
                    "Error", JOptionPane.ERROR_MESSAGE, null);
        }
        try {
            sleep(sleepTime);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    writer.close();
}
I answered most of that recently under this question: Xuggler encoding and muxing
Code sample:
writer.addVideoStream(videoStreamIndex, 0, videoCodec, width, height);
writer.addAudioStream(audioStreamIndex, 0, audioCodec, channelCount, sampleRate);
while (... have more data ...)
{
BufferedImage videoFrame = ...;
long videoFrameTime = ...; // this is the time to display this frame
writer.encodeVideo(videoStreamIndex, videoFrame, videoFrameTime, DEFAULT_TIME_UNIT);
short[] audioSamples = ...; // the size of this array should be number of samples * channelCount
long audioSamplesTime = ...; // this is the time to play back this bit of audio
writer.encodeAudio(audioStreamIndex, audioSamples, audioSamplesTime, DEFAULT_TIME_UNIT);
}
In the case of TargetDataLine, getMicrosecondPosition() will tell you the time you need for audioSamplesTime. This appears to start from the time the TargetDataLine was opened. You need to figure out how to get a video timestamp referenced to the same clock, which depends on the video device and/or how you capture video. The absolute values do not matter as long as they are both using the same clock. You could subtract the initial value (at start of stream) from both your video and your audio times so that the timestamps match, but that is only a somewhat approximate match (probably close enough in practice).
You need to call encodeVideo and encodeAudio in strictly increasing order of time; you may have to buffer some audio and some video to make sure you can do that. More details here.
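To connect this with the byte[] issue from the question, here is a minimal conversion sketch. It assumes line is your open TargetDataLine delivering 16-bit signed little-endian PCM (match the ByteOrder to your AudioFormat), and reuses writer and audioStreamIndex from the samples above; java.nio.ByteBuffer and java.nio.ByteOrder must be imported:
byte[] buffer = new byte[4096];
int bytesRead = line.read(buffer, 0, buffer.length);
short[] samples = new short[bytesRead / 2]; // 2 bytes per 16-bit sample
ByteBuffer.wrap(buffer, 0, bytesRead).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(samples);
writer.encodeAudio(audioStreamIndex, samples, line.getMicrosecondPosition(), TimeUnit.MICROSECONDS);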
I am building a Java application that programmatically generates a MIDI Sequence that is then sent over the LoopBe Internal MIDI Port, so that I can use Ableton Live instruments for better sound playback quality.
Please correct me if I am wrong. What I need is to generate a Sequence, which will contain Tracks, which will contain MidiEvents, which will contain MIDI messages with timing information. That part I think I have down.
The real problem is how to send it over the LoopBe MIDI port. For that I supposedly need a Sequencer, but I only know how to obtain the default one, and I don't want that.
I guess a workaround would be to write the Sequence to a .mid file and then programmatically play it back on the LoopBe port.
So my question is: How can I obtain a non-default Sequencer?
You need the method MidiSystem.getSequencer(boolean). When you call it with a false argument, it gives you a sequencer that is not connected to the default device.
Get a Receiver instance from your target MIDI device and wire it to the sequencer with seq.getTransmitter().setReceiver(rec).
Example snippet:
MidiDevice device = ... // obtain the MidiDevice instance
Sequencer seq = MidiSystem.getSequencer(false);
Receiver rec = device.getReceiver();
seq.getTransmitter().setReceiver(rec);
For examples on Sequencer use, see tutorial on http://docs.oracle.com/javase/tutorial/sound/MIDI-seq-methods.html
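Once the sequencer is wired up, playback is the usual open/set/start flow. A minimal continuation sketch, assuming mySequence is the Sequence you generated (a name made up here) and that the target device has been opened:
seq.open();
seq.setSequence(mySequence);
seq.start(); // events now reach the LoopBe port through rec
// ... later, when playback has finished:
seq.close();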
For my own project I use LoopBe1 to send MIDI signals to REAPER.
Of course, LoopBe1 should already be installed.
In this example I iterate through the system's MIDI devices to find the external MIDI port of LoopBe, and then send middle C 10 times.
import javax.sound.midi.*;

public class Main {
    public static void main(String[] args) throws MidiUnavailableException, InvalidMidiDataException, InterruptedException {
        MidiDevice external;
        MidiDevice.Info[] devices = MidiSystem.getMidiDeviceInfo();
        // Iterate through the devices to get the external LoopBe MIDI port
        for (MidiDevice.Info deviceInfo : devices) {
            if (deviceInfo.getName().equals("LoopBe Internal MIDI")) {
                if (deviceInfo.getDescription().equals("External MIDI Port")) {
                    external = MidiSystem.getMidiDevice(deviceInfo);
                    System.out.println("Device Name : " + deviceInfo.getName());
                    System.out.println("Device Description : " + deviceInfo.getDescription() + "\n");
                    external.open();
                    Receiver receiver = external.getReceiver();
                    ShortMessage message = new ShortMessage();
                    for (int i = 0; i < 10; i++) {
                        // Start playing the note Middle C (60),
                        // moderately loud (velocity = 93).
                        message.setMessage(ShortMessage.NOTE_ON, 0, 60, 93);
                        long timeStamp = -1;
                        receiver.send(message, timeStamp);
                        Thread.sleep(1000);
                    }
                    external.close();
                }
            }
        }
    }
}
For further information about sending MIDI messages, refer to this link:
https://docs.oracle.com/javase/tutorial/sound/MIDI-messages.html
I hope this helps!
I can't seem to get the instrument to change. I switch the value of the instrument but get nothing different in the output. I can only get a piano instrument to play, no matter what value I try. Here is the simple code below. Does anyone have any suggestions? Or am I missing a fundamental aspect of the Instrument object?
import javax.sound.midi.*;

public class Drum {
    static int instrument = 45;
    static int note = 100;
    static int timbre = 0;
    static int force = 100;

    public static void main(String[] args) {
        Synthesizer synth = null;
        try {
            synth = MidiSystem.getSynthesizer();
            synth.open();
        } catch (Exception e) {
            System.out.println(e);
        }
        Soundbank soundbank = synth.getDefaultSoundbank();
        Instrument[] instr = soundbank.getInstruments();
        synth.loadInstrument(instr[instrument]); // Changing this int (instrument) does nothing
        MidiChannel[] mc = synth.getChannels();
        mc[4].noteOn(note, force);
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {}
        System.out.println(instr[instrument].getName());
        synth.close();
    }
}
You need to tell the channel to use the instrument. I admit I've never used MIDI in Java, but something like mc[4].programChange(instr[instrument].getPatch().getProgram()) sounds promising.
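Spelled out against the question's code, a minimal sketch of that fix (the Patch carries the bank and program numbers of the loaded instrument):
Instrument ins = instr[instrument];
synth.loadInstrument(ins);
Patch patch = ins.getPatch();
mc[4].programChange(patch.getBank(), patch.getProgram()); // tell the channel to use the instrument
mc[4].noteOn(note, force);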
To play percussion instruments you have to use channel 10; in General MIDI that channel is reserved for percussion. (http://en.wikipedia.org/wiki/General_MIDI)
For example:
int instrument = 36;
Sequence sequence = new Sequence(Sequence.PPQ, 1);
Track track = sequence.createTrack();
ShortMessage sm = new ShortMessage( );
sm.setMessage(ShortMessage.PROGRAM_CHANGE, 9, instrument, 0); // channel index 9 is MIDI channel 10 (zero-based)
track.add(new MidiEvent(sm, 0));
Then every note you add to the track will sound as percussion.
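For completeness, a short sketch of adding one such note to the same track (note 36 is the General MIDI bass drum; the sequence above uses a resolution of 1 tick per quarter note):
ShortMessage on = new ShortMessage();
on.setMessage(ShortMessage.NOTE_ON, 9, 36, 100); // bass drum, velocity 100
track.add(new MidiEvent(on, 0));
ShortMessage off = new ShortMessage();
off.setMessage(ShortMessage.NOTE_OFF, 9, 36, 0);
track.add(new MidiEvent(off, 1)); // one beat later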
You need to send a program change event to the receiver. How? Send a ShortMessage.
ShortMessage sound = new ShortMessage();
Receiver rcvr = MidiSystem.getReceiver();
long timeStamp = -1; // -1 means "deliver immediately"

sound.setMessage(ShortMessage.PROGRAM_CHANGE, channel, instrument, 0); // the last data byte is unused here
rcvr.send(sound, timeStamp);

sound = new ShortMessage();
sound.setMessage(ShortMessage.NOTE_ON, channel, note, velocity);
rcvr.send(sound, timeStamp);
The variables channel, note, instrument, and velocity are all ints.
Also, I suggest learning about MIDI events. Events are how MIDI plays notes, stops notes, changes instruments, changes tempo, applies control changes, etc. I spent 2 years working with a MIDI program.