I'm trying to write a Java program to send live microphone data over UDP and then receive it in VLC. I'm basically using the same code as in this post to package up the stream and send it over. When I receive the data in VLC, I get nothing: I see a bunch of input coming in, but none of it is interpreted as audio data. VLC tries to resolve it as mpga or mpgv, but I'm pretty sure it's being sent as raw audio. Is the problem on VLC's end? Should I configure VLC to receive a specific format? Or is the problem that my program isn't packaging the data in a way VLC can interpret?
The first thing you should do is capture the live microphone data to a file and figure out exactly what format it is in. Then feed that file to VLC to see whether VLC can cope with the data in that form.
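For that first step, a minimal sketch using javax.sound.sampled is below. The 44.1 kHz / 16-bit / mono format and the 5-second duration are just assumptions, so substitute whatever format your sender is actually using:

```java
import javax.sound.sampled.*;
import java.io.File;

public class MicToWav {
    public static void main(String[] args) throws Exception {
        // Assumed format -- match it to whatever your program actually uses.
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        TargetDataLine line = AudioSystem.getTargetDataLine(format);
        line.open(format);
        line.start();

        // Wrap the line in an AudioInputStream and dump a few seconds to disk.
        AudioInputStream in = new AudioInputStream(line);
        File out = new File("mic-test.wav");

        // AudioSystem.write blocks until the line is closed, so stop it from another thread.
        new Thread(() -> {
            try {
                Thread.sleep(5000);   // record roughly 5 seconds
                line.stop();
                line.close();
            } catch (InterruptedException ignored) {}
        }).start();

        AudioSystem.write(in, AudioFileFormat.Type.WAVE, out);
        System.out.println("Wrote " + out.getAbsolutePath() + " -- open this in VLC.");
    }
}
```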
If you are going to use UDP in the long term, you need to be sure that the audio format you are using can cope with the loss of chunks of data in the middle of the audio stream due to network packet loss. If not, you should use TCP rather than UDP.
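For what it's worth, the UDP sending side usually looks something like the sketch below: read fixed-size chunks from the microphone line and ship each chunk as its own datagram. The destination address, port, and audio format are placeholders, and raw PCM over UDP is exactly the case where a lost packet simply means a lost slice of audio:

```java
import javax.sound.sampled.*;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class MicUdpSender {
    public static void main(String[] args) throws Exception {
        // Assumed format and destination -- adjust to your setup.
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        InetAddress dest = InetAddress.getByName("192.168.1.10"); // hypothetical receiver
        int port = 1234;                                          // hypothetical port

        TargetDataLine line = AudioSystem.getTargetDataLine(format);
        line.open(format);
        line.start();

        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] buffer = new byte[1024]; // one chunk per datagram
            while (true) {
                int read = line.read(buffer, 0, buffer.length);
                if (read <= 0) break;
                // Each datagram carries raw PCM; a dropped packet is a dropped chunk of audio.
                socket.send(new DatagramPacket(buffer, read, dest, port));
            }
        } finally {
            line.stop();
            line.close();
        }
    }
}
```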
Related
I have a Java application that needs to play a few distinct 'sounds/riffs' to indicate status. I would like to know whether it is better to record these as audio files (wav or whatever format) and play them back using the Java audio classes, or whether it would be better to store MIDI data and play them using the Java MIDI classes.
In my case, storage space is not a problem (within reason). I have about 5-8 statuses that I would like to play different melodies for. Each melody would be 1-3 seconds long, consisting of 2-8 notes.
PCM (found within your WAV file) and MIDI are tools for entirely different jobs.
PCM is a way to encode audio, the sound itself. MIDI is a way to encode messages for controlling synthesizers... note on, note off, etc.
If you're playing back music and you don't particularly need tight control over what it sounds like (each system's MIDI synth can sound different), MIDI is an efficient way to encode it. If you need good-quality instruments, vocals, etc., you need an actual sound format like PCM in a WAV, MP3, AAC, etc.
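If the MIDI route sounds right for your status riffs, a short melody only needs javax.sound.midi and no stored audio at all. A minimal sketch; the note numbers, velocity, and timing are just an example melody, not anything from your app:

```java
import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Synthesizer;

public class StatusRiff {
    public static void main(String[] args) throws Exception {
        Synthesizer synth = MidiSystem.getSynthesizer();
        synth.open();
        MidiChannel channel = synth.getChannels()[0];

        // A made-up three-note "success" melody: MIDI notes 60/64/67 (C, E, G).
        int[] notes = {60, 64, 67};
        for (int note : notes) {
            channel.noteOn(note, 80);   // velocity 80
            Thread.sleep(300);          // hold each note ~300 ms
            channel.noteOff(note);
        }
        synth.close();
    }
}
```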
I'm studying networking. I sent an image file between two PCs over ICMP and captured the traffic with Wireshark. Now I'm trying to read the raw data from the pcap file in Java and decode it to recover the data I sent. I've spent a lot of time on this but I don't know how to do it. I'd really appreciate some help. Sorry, English is not my mother tongue.
I want to delete certain packets (which I don't want to be part of my new pcap file) from the Wireshark file (a copy of the original pcap file) via Java code.
Is it possible to create a new pcap file with certain packets removed?
Pcap isn't a format specific to Wireshark. Wireshark just happens to be able to both perform a packet capture and save it in pcap format, as well as process pcap files for you to view. So you could drop the Wireshark part of the question and just ask how to manipulate pcap files using Java, which is far easier than trying to work out how to drive Wireshark from Java to produce the resulting capture.
In terms of manipulating a pcap file in Java, there are many third-party libraries available that expose the pcap format, or that wrap the pcap libraries, and I'd expect most of them to offer some way to filter the captured data and save it back to a file.
Check out http://code.google.com/p/sjpcap/, which is a simple alternative to the popular wrapper http://netresearch.ics.uci.edu/kfujii/Jpcap/doc/; both are able to process/filter/manipulate pcap files. The latter is more complex and potentially overkill for what you are doing.
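As a rough illustration of the filter-and-rewrite idea with Jpcap (the method and field names below are from memory, so check them against the library's documentation), the shouldKeep predicate is a hypothetical placeholder for whatever "certain packets" test you need:

```java
import jpcap.JpcapCaptor;
import jpcap.JpcapWriter;
import jpcap.packet.Packet;

public class PcapFilter {
    public static void main(String[] args) throws Exception {
        // Open the existing capture and a new dump file to write the kept packets into.
        JpcapCaptor captor = JpcapCaptor.openFile("input.pcap");
        JpcapWriter writer = JpcapWriter.openDumpFile(captor, "filtered.pcap");

        Packet packet;
        while ((packet = captor.getPacket()) != null && packet != Packet.EOF) {
            if (shouldKeep(packet)) {
                writer.writePacket(packet);
            }
        }
        writer.close();
        captor.close();
    }

    // Hypothetical predicate -- replace with the actual test for packets you want to keep.
    private static boolean shouldKeep(Packet packet) {
        return true;
    }
}
```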
I need to create an audio streamer for Android. I want it to play MP3 (and other formats too, if possible). I also want to be able to progressively download the audio. Does anyone know a good way to do that?
Thanks!
You should start by checking this link: http://developer.android.com/guide/topics/media/index.html
Apparently this is already done for video, since you can specify a URL as the source for your stream.
You will also have to check some references on audio streaming and on the size of the buffer you should use, as well as how that depends on the rate at which you're getting data from the stream in question.
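For audio, the same progressive-download behaviour is exposed through MediaPlayer. A minimal sketch, assuming a reachable MP3 URL (the URL here is a placeholder):

```java
import android.media.AudioManager;
import android.media.MediaPlayer;

public class StreamPlayer {
    private MediaPlayer player;

    // Starts progressive playback of a remote MP3; the URL is a placeholder.
    public void play() throws Exception {
        player = new MediaPlayer();
        player.setAudioStreamType(AudioManager.STREAM_MUSIC);
        player.setDataSource("http://example.com/audio/stream.mp3");
        // prepareAsync() buffers in the background instead of blocking the UI thread.
        player.setOnPreparedListener(mp -> mp.start());
        player.prepareAsync();
    }

    public void stop() {
        if (player != null) {
            player.release();
            player = null;
        }
    }
}
```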
I am working on a voice report feature. The user submits a voice report and the app should simultaneously encode the audio data using a Vorbis encoder. It's working fine, but encoding only starts after the recording is over. I need to run the Vorbis encoder on the fly. Please share any sample code; it would be very helpful.
Be more specific about how you record the audio. Do you only get the finished WAV file, or do you get chunks of data as they are recorded? If the latter, why not feed them to the encoder in real time?
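If you do get chunks (for example from AudioRecord on Android), the on-the-fly version is just a read loop that hands each chunk to the encoder as soon as it arrives. A rough sketch; encodeChunk is a hypothetical stand-in for whatever streaming call your Vorbis wrapper exposes:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class LiveVorbisRecorder {
    private volatile boolean recording = true;

    public void record() {
        int sampleRate = 44100;
        int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);

        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        recorder.startRecording();

        byte[] chunk = new byte[bufferSize];
        while (recording) {
            int read = recorder.read(chunk, 0, chunk.length);
            if (read > 0) {
                // Hand the chunk straight to the encoder instead of buffering the whole recording.
                encodeChunk(chunk, read);
            }
        }
        recorder.stop();
        recorder.release();
    }

    // Hypothetical hook -- replace with a call into your Vorbis encoder's streaming API.
    private void encodeChunk(byte[] data, int length) {
        // e.g. vorbisEncoder.write(data, 0, length);
    }
}
```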