I am using a combination of Wireshark's tshark and jnetpcap to decode offline captures and extract the RTP audio payload from files, for the forward and reverse directions.
In the first step I isolate only the RTP packets and save them to a separate file.
Then I simply loop through that file with jnetpcap and write the RTP payload to an output file.
When I need both channels, the produced file can be played, but the sampling is not right: it sounds a bit too fast (pitched too high). So something must be done differently.
Does anybody have a hint on how to save the two directions as two channels, so the result plays as stereo instead of mono?
final StringBuilder errbuf = new StringBuilder();
Pcap pcap = Pcap.openOffline(filename, errbuf);
if(pcap == null) {
System.err.printf("Error while opening device for capture: "
+ errbuf.toString());
return false;
}
PcapPacketHandler<String> handler = new PcapPacketHandler<String>() {
public void nextPacket(PcapPacket packet, String user) {
System.out.println("size of packet is=" + packet.size());
Rtp rtp = new Rtp();
if(packet.hasHeader(rtp)) {
System.out.println("rtp.headerLength = "+rtp.getHeaderLength()+ "rtp.payloadLength = "+rtp.getPayloadLength());
try {
dos.write(rtp.getPayload());
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
};
int ret = pcap.dispatch(-1, handler, "I rock");
System.out.println("Ret = "+ret);
try {
dos.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
The solution to the question is to build a ring buffer as a jitter buffer, synchronize the packets of the two directions, and generate proper silence for the gaps.
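To illustrate only the muxing step, here is a minimal sketch (not the original code) that interleaves two already-decoded, already-aligned mono streams into one stereo WAV. It assumes both directions have been decoded to 16-bit signed little-endian PCM at the same sample rate and padded with silence by the jitter buffer so they line up in time:

import javax.sound.sampled.*;
import java.io.*;

// Sketch only: interleave the forward and reverse payload streams into
// one stereo file so each direction becomes its own channel.
public class StereoMuxSketch {
    public static void writeStereoWav(byte[] forward, byte[] reverse,
                                      float sampleRate, File out) throws IOException {
        int frames = Math.min(forward.length, reverse.length) / 2; // 2 bytes per 16-bit sample
        byte[] stereo = new byte[frames * 4];                      // left + right per frame
        for (int i = 0; i < frames; i++) {
            stereo[i * 4]     = forward[i * 2];                    // left, low byte
            stereo[i * 4 + 1] = forward[i * 2 + 1];                // left, high byte
            stereo[i * 4 + 2] = reverse[i * 2];                    // right, low byte
            stereo[i * 4 + 3] = reverse[i * 2 + 1];                // right, high byte
        }
        AudioFormat fmt = new AudioFormat(sampleRate, 16, 2, true, false); // signed, little-endian
        try (AudioInputStream ais = new AudioInputStream(
                new ByteArrayInputStream(stereo), fmt, frames)) {
            AudioSystem.write(ais, AudioFileFormat.Type.WAVE, out);
        }
    }
}

The key point is that the two directions end up as the left and right channels of each frame, instead of being written one after the other.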
I am currently trying to create a water level readout as a progress bar in a simple Android app. I am using an Arduino Mega 2560 with an HC-05 to transmit the readout of the water level sensor. To simplify things, the Arduino code just counts up from 0 to 1000 and back down, as follows.
void setup() {
// put your setup code here, to run once:
Serial.begin(9600);
Serial.println("Test for Water Sensor");
Serial1.begin(9600);
}
void loop() {
// put your main code here, to run repeatedly:
for (int i = 0; i <= 1000; i++)
{
Serial1.println(i);
Serial.println(i);
delay(100);
}
for (int i = 1000; i >= 0; i--)
{
Serial1.println(i);
Serial.println(i);
delay(100);
}
}
On the Android end, I am using this to convert the value to an int and then update the progress bar. It also currently displays the unconverted message in a TextView.
mHandler = new Handler(Looper.getMainLooper()){
@Override
public void handleMessage(Message msg){
if(msg.what == MESSAGE_READ){
String readMessage = null;
try {
readMessage = new String((byte[]) msg.obj, "UTF-8");
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
mReadBuffer.setText(readMessage);
try {
waterLevelValue = NumberFormat.getInstance().parse(readMessage).intValue();
waterLevel.setProgress(waterLevelValue);
} catch (ParseException e) {
e.printStackTrace();
}
}
if(msg.what == CONNECTING_STATUS){
if(msg.arg1 == 1)
mBluetoothStatus.setText("Connected to Device: " + msg.obj);
else
mBluetoothStatus.setText("Connection Failed");
}
}
};
The issue I am getting is that quite often (maybe 1-2 times a second) it does not read the first digit. I can see on the Serial Monitor that all digits arrive there, but the Android app will sometimes miss the first one (e.g. 443, 444, 45, 446, 447, etc.).
What could be causing the issue here? I am very new to Bluetooth, so please help! More than happy to send more portions of code if needed.
EDIT: Adding the code for reading the input stream. It probably was important in the first place.
public void run() {
byte[] buffer = new byte[1024]; // buffer store for the stream
int bytes; // bytes returned from read()
// Keep listening to the InputStream until an exception occurs
while (true) {
try {
// Read from the InputStream
bytes = mmInStream.available();
if(bytes != 0) {
SystemClock.sleep(100); //pause and wait for rest of data. Adjust this depending on your sending speed.
bytes = mmInStream.available(); // how many bytes are ready to be read?
bytes = mmInStream.read(buffer, 0, bytes); // record how many bytes we actually read
mHandler.obtainMessage(MESSAGE_READ, bytes, -1, buffer)
.sendToTarget(); // Send the obtained bytes to the UI activity
}
} catch (IOException e) {
e.printStackTrace();
break;
}
}
}
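For what it's worth, the available()/sleep pattern above forwards whatever bytes happen to have arrived, so a reading such as "445" can be split across two reads and its leading digits end up attached to the previous message. Below is a minimal sketch (not a drop-in fix) of an alternative read loop that buffers bytes and only forwards complete newline-terminated lines; field and constant names like mmInStream, mHandler and MESSAGE_READ are taken from the question, and java.nio.charset.StandardCharsets is assumed to be imported:

byte[] buffer = new byte[1024];
StringBuilder pending = new StringBuilder();
while (true) {
    try {
        int n = mmInStream.read(buffer);               // blocks until data arrives
        if (n == -1) break;                            // stream closed
        pending.append(new String(buffer, 0, n, StandardCharsets.UTF_8));
        int newline;
        while ((newline = pending.indexOf("\n")) != -1) {
            String line = pending.substring(0, newline).trim();  // one complete reading
            pending.delete(0, newline + 1);
            if (!line.isEmpty()) {
                // msg.obj is now a complete String, so the handler can parse it directly
                mHandler.obtainMessage(MESSAGE_READ, line).sendToTarget();
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
        break;
    }
}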
I have a client-server system where the server is written in C++ and the client is written in Java (an Android application).
The server reads an image from a local directory through an ifstream, using its read method.
The reading is done inside a loop, where the program reads a part of the image on each iteration. Every time a part of the image is read, it is sent over a socket to the client, which collects all the pieces in a ByteBuffer; when all the bytes of the image have been transferred, the client attempts to turn that array of bytes (after calling byteBuffer.array()) into a Bitmap.
This is where the problem begins: I've tried a few methods, but it seems that I'm unable to turn this array of bytes into a Bitmap.
From what I understand, this byte array is probably a raw representation of the image, which can't be decoded using methods like BitmapFactory.decodeByteArray() since it wasn't encoded in the first place.
Ultimately, my question is: how can I process this array of bytes so that I'll be able to set the image as the source of an ImageView?
Note: I've already made sure that all the data is sent over the socket correctly and that the pieces are collected in the right order.
Client code:
byte[] image_bytes;
byte[] response_bytes;
private void receive_image ( final String protocol, final int image_size, final int buffer_size)
{
if (image_size <= 0 || buffer_size <= 0)
return;
Thread image_receiver = new Thread(new Runnable() {
@Override
public void run() {
ByteBuffer byteBuffer = ByteBuffer.allocate(image_size);
byte[] buffer = new byte[buffer_size];
int bytesReadSum = 0;
try {
while (bytesReadSum != image_size) {
activeReader.read(buffer);
String message = new String(buffer);
if (TextUtils.substring(message, 0, len_of_protocol_number).equals(protocol)) {
int bytesToRead = Integer.parseInt(TextUtils.substring(message,
len_of_protocol_number,
len_of_protocol_number + len_of_data_len));
byteBuffer.put(Arrays.copyOfRange(buffer,
len_of_protocol_number + len_of_data_len,
bytesToRead + len_of_protocol_number + len_of_data_len));
bytesReadSum += bytesToRead;
} else {
response_bytes = null;
break;
}
}
if (bytesReadSum == image_size) {
image_bytes = byteBuffer.array();
if (image_bytes.length > 0)
response_bytes = image_bytes;
else
response_bytes = null;
}
} catch (IOException e) {
response_bytes = null;
}
}
});
image_receiver.start();
try {
image_receiver.join();
} catch (InterruptedException e) {
response_bytes = null;
}
if (response_bytes != null)
{
final ImageView imageIV = (ImageView) findViewById(R.id.imageIV);
File image_file = new File(Environment.getExternalStorageDirectory(), "image_file_jpg");
try
{
FileOutputStream stream = new FileOutputStream(image_file);
stream.write(image_bytes);
// close the stream so the file is fully written before decoding it below
stream.close();
}
catch (FileNotFoundException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
//Here the method returns null
final Bitmap image_bitmap = BitmapFactory.decodeFile(image_file.getAbsolutePath());
main.this.runOnUiThread(new Runnable() {
@Override
public void run() {
imageIV.setImageBitmap(image_bitmap);
imageIV.invalidate();
}
});
}
}
Whenever you exchange data between two machines of different architectures via sockets, you need to know the endianness (big-endian/little-endian) of each machine. If they differ, you will need to convert the bytes to correct the data. Perhaps that's your issue. Here's a link with sample code: Converting Little Endian to Big Endian. You should be able to easily find more articles explaining the concept.
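As a small illustration (not taken from the question's code), a multi-byte field such as an image-size prefix written by a little-endian C++ sender could be read on the Java side like this; Java's ByteBuffer defaults to big-endian, so the order has to be set explicitly:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

byte[] raw = {0x10, 0x00, 0x00, 0x00};         // the value 16 encoded little-endian
int value = ByteBuffer.wrap(raw)
        .order(ByteOrder.LITTLE_ENDIAN)         // interpret the bytes as little-endian
        .getInt();                              // value == 16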
It turned out that something was wrong with my sending protocol.
After patching it up a bit it actually worked.
Thanks for the help.
I am using org.glassfish.jersey.server.ChunkedOutput to get the chunked response to my request.
When I hit the URL through a browser, instead of getting the output as separate chunks, I am getting all the chunks at once.
But when I use a Test Client to hit the resource, I get the output as separate chunks.
Server Used: Glassfish 4.0
Jersey version 2.13
Resource method is as follows:
@GET
@Path("chunk")
public ChunkedOutput<String> getChunkedResponse(@Context HttpServletRequest request) {
final ChunkedOutput<String> output = new ChunkedOutput<String>(
String.class);
new Thread() {
public void run() {
try {
Thread.sleep(2000);
String chunk;
String arr[] = { "America\r\n", "London\r\n", "Delhi\r\n", "null" };
int i = 0;
while (!(chunk = arr[i]).equals("null")) {
output.write(chunk);
i++;
Thread.sleep(2000);
}
} catch (IOException e) {
logger.error("IOException : ", e);
} catch (InterruptedException e) {
logger.error("InterruptedException : ", e);
e.printStackTrace();
} finally {
try {
output.close();
} catch (IOException e) {
// TODO Auto-generated catch block
logger.error("IOException IN finally : ", e);
}
}
}
}.start();
// the output will be probably returned even before
// a first chunk is written by the new thread
return output;
}
Test Client method is as follows:
private static void testChunkedResponse(WebTarget target){
final Response response = target.path("restRes").path("chunk")
.request().get();
final ChunkedInput<String> chunkedInput =
response.readEntity(new GenericType<ChunkedInput<String>>() {});
String chunk;
while ((chunk = chunkedInput.read()) != null) {
logger.info("Next chunk received: " + chunk);
}
}
Can someone please help me understand why the response is not getting chunked in the browser, and what can be done about it?
I am also working on a client to process the ChunkedOutput response. From what I know:
For the browser, you need to write longer strings. I am not sure why, but it seems Firefox has a buffer, so unless the buffer size is met the browser won't render the HTML. From my test on Chrome, you can see the effect even for short strings.
For ChunkedInput, there is a parser that keeps searching for a separator in the string. When using Jersey, ChunkedInput won't know on its own how to split the stream; it uses the parser to split it and returns the split substring when you call read(). By default the separator is "\r\n", so if you write "\r\n" to your ChunkedOutput you should see the client code run as you expected.
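For example, the client can be told explicitly which separator to look for (a sketch against Jersey 2.x's org.glassfish.jersey.client.ChunkedInput; the boundary string has to match whatever the server actually writes):

ChunkedInput<String> chunkedInput =
        response.readEntity(new GenericType<ChunkedInput<String>>() {});
// Jersey splits chunks on "\r\n" by default; set the parser explicitly if
// the server writes a different boundary.
chunkedInput.setParser(ChunkedInput.createParser("\r\n"));
String chunk;
while ((chunk = chunkedInput.read()) != null) {
    System.out.println("Next chunk received: " + chunk);
}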
I had the same problem. Adding a line separator when writing solved it:
output.write(chunk + System.lineSeparator());
Hey there Stack Overflow.
I'm creating a playlist player in Java; so far so good, I've got all the logic down and the project is nearing completion. We've been testing the playback by creating some large playlists and just letting the thing run from start to end. The playback sounds good, but sometimes the audio is cut off at the end. This happens very rarely; the last x seconds (the time varies) are not played.
The files I'm testing with are all PCM WAVE files with a 16 or 24 bit sample size. I'm using the Java Sound engine in combination with JavaZoom's MP3 and Ogg SPIs to support other types of audio files.
So far I have logged this a couple of times, and my first thought was that the file might be corrupt; this is not the case. I've tried playing the file on its own and it played fully!
I've tried to find the problem but I just can't find it. I don't think there's anything wrong with my audio player; I'm running out of ideas.
Here is how I create my audio input stream:
public static AudioInputStream getUnmarkableAudioInputStream(Mixer mixer, File file)
throws UnsupportedAudioFileException
{
if (!file.exists() || !file.canRead()) {
return null;
}
AudioInputStream stream;
try {
stream = getAudioInputStream(file);
} catch (IOException e) {
logger.error("failed to retrieve stream from file", e);
return null;
}
AudioFormat baseFormat = stream.getFormat();
DataLine.Info info = new DataLine.Info(SourceDataLine.class, baseFormat);
boolean supportedDirectly = false;
if (mixer == null) {
supportedDirectly = AudioSystem.isLineSupported(info);
} else {
supportedDirectly = mixer.isLineSupported(info);
}
// compare the AudioFormat with the desired one
if (baseFormat.getEncoding() != AudioFormat.Encoding.PCM_SIGNED || !supportedDirectly) {
AudioFormat decodedFormat = new AudioFormat(
AudioFormat.Encoding.PCM_SIGNED,
baseFormat.getSampleRate(), 16, baseFormat.getChannels(),
baseFormat.getChannels() * 2, baseFormat.getSampleRate(),
false);
// convert the audio format to the supported one
if (AudioSystem.isConversionSupported(decodedFormat, baseFormat)) {
stream = AudioSystem.getAudioInputStream(decodedFormat, stream);
} else {
logger.debug(
"Audio format {} is not supported "
+ "and can not be converted to default format",
baseFormat.toString());
return null;
}
}
return stream;
}
And this is my audio player thread:
final class PlayerThread extends Thread
{
private byte[] buffer;
/**
* Initialize the buffer
*/
public void initBuffer()
{
linelock.lock();
try {
buffer = new byte[line.getBufferSize() / 5];
} finally {
linelock.unlock();
}
}
public void run()
{
initBuffer();
while (!isInterrupted()) {
checkState();
// if the line is just cleared go to the start of the loop
if (line == null || isInterrupted()) {
continue;
}
write();
}
// clean up all resources
close();
// change the state
state = Player.State.STOPPED;
}
private void checkState()
{
if (state != Player.State.PLAYING) {
if (line != null) {
line.flush();
}
try {
synchronized (this) {
this.wait();
}
} catch (InterruptedException e) {
// reset the interupt status
interrupt();
}
}
}
private void write()
{
// how much bytes could be written on the line
int available = line.available();
// is the space on the line big enough to write the buffer to
if (available >= buffer.length) {
// fill the buffer array
int read = 0;
try {
read = audioStream.read(buffer, 0, buffer.length);
} catch (Throwable ball) {
logger.error("Error in audio engine (read)", ball);
}
// if there was something to read, write it to the line
// otherwise stop the player
if (read >= 0) {
try {
linelock.lock();
line.write(buffer, 0, read);
} catch (Throwable ball) {
logger.error("Error in audio engine (write)", ball);
} finally {
linelock.unlock();
}
bytesRead += read;
} else {
line.drain();
MoreDefaultPlayer.this.stop();
}
}
}
private void close()
{
// invoke close on listeners
invokePlayerClosedOnListeners();
// destroy the volume chain
vc.removeVolumeListener(MoreDefaultPlayer.this);
// close the stream
try {
audioStream.close();
} catch (IOException e) {
logger.error("failed to close audio stream");
}
clearAllListeners();
linelock.lock();
try {
// quit the line
line.stop();
line.close();
line = null;
} finally {
linelock.unlock();
}
}
}
As you can see, I drain the line at the end, so I don't think the problem is the line being closed before everything from the stream has been played.
Can anyone see what might be wrong with this code?
I don't see an obvious answer, but there are a couple of things that raise yellow flags for me. The common practice is to put the line.write() method in a while loop, not to invoke it repeatedly. There is usually no need to test line.available() or to handle locking the line: line.write() will do the necessary blocking if there is no space available on the line. I've always been cautioned not to lock or block audio lines unnecessarily.
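A minimal sketch of that common pattern, reusing the audioStream and line names from the question (no available() check, no explicit locking, exception handling omitted):

byte[] buffer = new byte[4096];
int read;
while ((read = audioStream.read(buffer, 0, buffer.length)) != -1) {
    line.write(buffer, 0, read);   // blocks until the bytes are queued on the line
}
line.drain();                      // wait for everything queued to actually play
line.stop();
line.close();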
Is the locking logic an integral part of handling the sequence of cues? The error you are describing could be in that handling. (Maybe there is an interaction with the test of available() compared to the buffer size? Is the amount of cutoff roughly equal to the buffer size?)
I would consider implementing a LineListener to announce when a cue is finished, and making that event the trigger for playback of the next cue. A LineEvent of type STOP can be issued when the given file is done, notifying whatever handles the queue to proceed to the next file.
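A rough sketch of that idea (the listener itself is standard javax.sound.sampled API; playNext() is a hypothetical method on whatever manages the playlist):

line.addLineListener(new LineListener() {
    @Override
    public void update(LineEvent event) {
        // STOP is reported when the line stops, e.g. after drain() + stop()
        // at the end of a file; use it to advance to the next cue.
        if (event.getType() == LineEvent.Type.STOP) {
            playNext();
        }
    }
});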
Previously I was working with JMF, but JMF needs to be installed and I don't want to add that overhead. That's why I want to move to FMJ. And FMJ is open source. :)
There are some sample examples provided with the FMJ source, and there is FMJStudio, from which I can run and transmit RTP audio captured from the microphone.
But when I try to transmit RTP using the source below, it can't find any capture device.
The complete source can be found in fmj-20070928-0938_2.zip in FMJ.
The class name of this source is SimpleVoiceTransmiter.
//final String urlStr = URLUtils.createUrlStr(new File("samplemedia/gulp2.wav"));//"file://samplemedia/gulp2.wav";
Format format;
format = new AudioFormat(AudioFormat.ULAW_RTP, 8000, 8, 1);
//format = new AudioFormat(AudioFormat.ULAW_RTP, 8000.0, 8, 1, AudioFormat.LITTLE_ENDIAN, AudioFormat.SIGNED);
//format = new AudioFormat(BonusAudioFormatEncodings.ALAW_RTP, 8000, 8, 1);
//format = new AudioFormat(BonusAudioFormatEncodings.SPEEX_RTP, 8000, 8, 1, -1, AudioFormat.SIGNED);
//format = new AudioFormat(BonusAudioFormatEncodings.ILBC_RTP, 8000.0, 16, 1, AudioFormat.LITTLE_ENDIAN, AudioFormat.SIGNED);
CaptureDeviceInfo di = null;
//Set to true if you want to transmit audio from capture device, like microphone.
if (true)
{
// First find a capture device that will capture linear audio
// data at 8bit 8Khz
AudioFormat captureFormat = new AudioFormat(AudioFormat.LINEAR, 8000, 8, 1);
Vector devices = CaptureDeviceManager.getDeviceList(captureFormat);
if (devices.size() > 0)
{
di = (CaptureDeviceInfo) devices.elementAt(0);
} else
{
System.err.println("No capture devices");
// exit if we could not find the relevant capturedevice.
System.exit(-1);
}
}
// Create a processor for this capturedevice & exit if we
// cannot create it
Processor processor = null;
try
{
//processor = Manager.createProcessor(new MediaLocator(urlStr));
processor = Manager.createProcessor(di.getLocator());
} catch (IOException e)
{
e.printStackTrace();
System.exit(-1);
} catch (NoProcessorException e)
{
e.printStackTrace();
System.exit(-1);
}
// configure the processor
processor.configure();
while (processor.getState() != Processor.Configured)
{
try
{
Thread.sleep(10);
} catch (InterruptedException e)
{
// TODO Auto-generated catch block
e.printStackTrace();
}
}
processor.setContentDescriptor(new ContentDescriptor(ContentDescriptor.RAW_RTP));
TrackControl track[] = processor.getTrackControls();
boolean encodingOk = false;
// Go through the tracks and try to program one of them to
// output g.711 data.
for (int i = 0; i < track.length; i++)
{
if (!encodingOk && track[i] instanceof FormatControl)
{
if (((FormatControl) track[i]).setFormat(format) == null)
{
track[i].setEnabled(false);
} else
{
encodingOk = true;
}
} else
{
// we could not set this track to g.711, so disable it
track[i].setEnabled(false);
}
}
// At this point, we have determined where we can send out
// g.711 data or not.
// realize the processor
if (encodingOk)
{
if (!new net.sf.fmj.ejmf.toolkit.util.StateWaiter(processor).blockingRealize())
{
System.err.println("Failed to realize");
return;
}
// get the output datasource of the processor and exit
// if we fail
DataSource ds = null;
try
{
ds = processor.getDataOutput();
} catch (NotRealizedError e)
{
e.printStackTrace();
System.exit(-1);
}
// hand this datasource to manager for creating an RTP
// datasink our RTP datasink will multicast the audio
try
{
String url = "rtp://192.168.1.99:49150/audio/1";
MediaLocator m = new MediaLocator(url);
DataSink d = Manager.createDataSink(ds, m);
d.open();
d.start();
System.out.println("Starting processor");
processor.start();
Thread.sleep(30000);
} catch (Exception e)
{
e.printStackTrace();
System.exit(-1);
}
}
When I run this source, the output is: No capture devices
What may be the problem? :-(
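For reference, a small diagnostic sketch (my own, not from the FMJ samples) that prints every capture device the framework has registered; passing null instead of a Format asks for all devices:

import javax.media.CaptureDeviceInfo;
import javax.media.CaptureDeviceManager;
import java.util.Vector;

public class ListCaptureDevices {
    public static void main(String[] args) {
        // null = no format filter, so every registered capture device is returned
        Vector devices = CaptureDeviceManager.getDeviceList(null);
        if (devices.isEmpty()) {
            System.out.println("No capture devices registered");
        }
        for (Object o : devices) {
            CaptureDeviceInfo info = (CaptureDeviceInfo) o;
            System.out.println(info.getName() + " -> "
                    + java.util.Arrays.toString(info.getFormats()));
        }
    }
}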
Edit: I uninstalled JMF from my system.
OK, after two and a half days stuck in the middle of nowhere, I figured out the problem myself.
The problem was that when I uninstalled JMF, it wasn't removed from the CLASSPATH user variable. There was something like:
"C:\PROGRA~1\JMF21~1.1E\lib\sound.jar;C:\PROGRA~1\JMF21~1.1E\lib\jmf.jar;C:\PROGRA~1\JMF21~1.1E\lib;"
and when I removed those entries and restarted my computer, bingo: the code ran without any problem. :)