Well, I'm working on a project using the jNetPcap API, and I was able to run the ClassicPcapExample successfully:
import java.util.ArrayList;
import java.util.List;

import org.jnetpcap.Pcap;
import org.jnetpcap.PcapIf;

public class ClassicPcapExample {

    /**
     * Main startup method
     *
     * @param args ignored
     */
    public static void main(String[] args) {
        List<PcapIf> alldevs = new ArrayList<PcapIf>(); // Will be filled with NICs
        StringBuilder errbuf = new StringBuilder();     // For any error msgs

        /***************************************************************************
         * First get a list of devices on this system
         **************************************************************************/
        int r = Pcap.findAllDevs(alldevs, errbuf);
        if (r == Pcap.NOT_OK || alldevs.isEmpty()) {
            System.err.printf("Can't read list of devices, error is %s", errbuf.toString());
            return;
        }

        System.out.println("Network devices found:");
        int i = 0;
        for (PcapIf device : alldevs) {
            String description =
                (device.getDescription() != null) ? device.getDescription()
                    : "No description available";
            System.out.printf("#%d: %s [%s]\n", i++, device.getName(), description);
        }

        PcapIf device = alldevs.get(0); // We know we have at least 1 device
        System.out.printf("\nChoosing '%s' on your behalf:\n",
            (device.getDescription() != null) ? device.getDescription()
                : device.getName());

        /***************************************************************************
         * Second we open up the selected device
         **************************************************************************/
        int snaplen = 64 * 1024;           // Capture all packets, no truncation
        int flags = Pcap.MODE_PROMISCUOUS; // capture all packets
        int timeout = 10 * 1000;           // 10 seconds in millis
        Pcap pcap = Pcap.openLive(device.getName(), snaplen, flags, timeout, errbuf);
        if (pcap == null) {
            System.err.printf("Error while opening device for capture: " + errbuf.toString());
            return;
        }
        pcap.close();
    }
}
The problem I'm having now is that my wireless interface is not listed among the interfaces, so I can't sniff HTTP packets.
Here is the output of this program:
Network devices found:
#0: \Device\NPF_{273EF1C6-92B4-446F-9D88-553E18695A27} [VMware Virtual Ethernet Adapter]
#1: \Device\NPF_{C69FC3BE-1E6C-415B-9AAC-36D4654C7AD8} [Microsoft]
#2: \Device\NPF_{46AC6814-0644-4B81-BAC9-70FEB2002E07} [VMware Virtual Ethernet Adapter]
#3: \Device\NPF_{037F3BF4-B510-4A1D-90C0-1014FB3974F7} [Microsoft]
#4: \Device\NPF_{CA7D4FF0-B88B-4D0D-BBDC-A1923AF8D4B3} [Realtek PCIe GBE Family Controller]
#5: \Device\NPF_{3E2983E7-11F8-415A-BC81-E1B99CA8B092} [Microsoft]
Choosing 'VMware Virtual Ethernet Adapter' on your behalf:
jNetPcap doesn't label your wireless interface as "wireless" the way Wireshark does.
It's listed as "Microsoft" instead, so your interface is one of the Microsoft devices in that list.
Let only your wireless adapter access the net, then try the sniffer on each one until you find which is the wireless interface.
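If you want to automate that probing, here is a rough sketch (it reuses only the jNetPcap calls already shown above; the 5-packet count per device is an arbitrary choice):

PcapPacketHandler<String> probeHandler = (PcapPacket packet, String user) ->
    System.out.printf("  packet on %s, caplen=%d%n", user,
        packet.getCaptureHeader().caplen());

for (PcapIf candidate : alldevs) {
    StringBuilder err = new StringBuilder();
    Pcap handle = Pcap.openLive(candidate.getName(), 64 * 1024,
        Pcap.MODE_PROMISCUOUS, 10 * 1000, err);
    if (handle == null) {
        continue; // could not open this device, try the next one
    }
    System.out.println("Probing " + candidate.getName());
    handle.loop(5, probeHandler, candidate.getName()); // grab 5 packets, then return
    handle.close();
}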
I'm making an application that captures network traffic. I need to get the URL that the browser displays to the user, and I thought I could use InetAddress to get the domain from the IP I had captured, but it's not the same. For example, the domain I get for facebook.com is xx-fbcdn-shv-01-dft4.fbcdn.net, and it isn't always the same. Is there any other way to do it? Is it even possible?
I'm using JNetPCap library and the example code of it's page just to learn how to implement it in my proyect.
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.jnetpcap.*;
import org.jnetpcap.packet.PcapPacket;
import org.jnetpcap.packet.PcapPacketHandler;
import org.jnetpcap.protocol.network.Ip4;
/**
*
* @author User
*/
public class Sniffer {
static private String device;
public static void main(String[] args) {
List<PcapIf> alldevs = new ArrayList<>(); // Will be filled with NICs
StringBuilder errbuf = new StringBuilder(); // For any error msgs
/**
* *************************************************************************
* First get a list of devices on this system
*************************************************************************
*/
int r = Pcap.findAllDevs(alldevs, errbuf);
if (r == Pcap.NOT_OK || alldevs.isEmpty()) {
System.err.printf("Can't read list of devices, error is %s", errbuf
.toString());
return;
}
System.out.println("Network devices found:");
int i = 0;
for (PcapIf device : alldevs) {
String description
= (device.getDescription() != null) ? device.getDescription()
: "No description available";
System.out.printf("#%d: %s [%s]\n", i++, device.getName(), description);
System.out.println(device.toString());
}
PcapIf device = alldevs.get(2);
/***************************************************************************
* Second we open up the selected device
**************************************************************************/
int snaplen = 64 * 1024; // Capture all packets, no truncation
int flags = Pcap.MODE_PROMISCUOUS; // capture all packets
int timeout = 10 * 1000; // 10 seconds in millis
Pcap pcap
= Pcap.openLive(device.getName(), snaplen, flags, timeout, errbuf);
if (pcap == null) {
System.err.printf("Error while opening device for capture: "
+ errbuf.toString());
return;
}
/***************************************************************************
 * Set a BPF filter so that only port-80 traffic is captured
 **************************************************************************/
PcapBpfProgram program = new PcapBpfProgram();
String expression = "port 80";
int optimize = 0; // false
int netmask = 0xFFFFFF00; // 255.255.255.0
if (pcap.compile(program, expression, optimize, netmask) != Pcap.OK) {
System.err.println(pcap.getErr());
return;
}
if (pcap.setFilter(program) != Pcap.OK) {
System.err.println(pcap.getErr());
return;
}
/**
* *************************************************************************
* Third we create a packet handler which will receive packets from the
* libpcap loop.
*************************************************************************
*/
PcapPacketHandler<String> jpacketHandler = (PcapPacket packet, String user) -> {
System.out.printf("Received packet at %s caplen=%-4d len=%-4d %s\n",
new Date(packet.getCaptureHeader().timestampInMillis()),
packet.getCaptureHeader().caplen(), // Length actually captured
packet.getCaptureHeader().wirelen(), // Original length
user // User supplied object
);
Ip4 ip = new Ip4();
//Tcp tcp= new Tcp();
byte[] sIP;
if (packet.hasHeader(ip)) {
try {
sIP = packet.getHeader(ip).source();
String sourceIP = org.jnetpcap.packet.format.FormatUtils.ip(sIP);
String myIp = InetAddress.getLocalHost().getHostAddress();
if (!myIp.equals(sourceIP)) {
System.out.println("source= "+sourceIP);
String domain;
domain = InetAddress.getByName(sourceIP).getHostName();
System.out.println("--------------------------"+domain);
}
} catch (UnknownHostException ex) {
Logger.getLogger(Sniffer.class.getName()).log(Level.SEVERE, null, ex);
}
}
};
/***************************************************************************
 * Fourth we enter the loop and tell it to capture packets indefinitely
 * (a count of -1 means loop forever). The loop method does a mapping of
 * pcap.datalink() DLT value to JProtocol ID, which is needed by JScanner.
 * The scanner scans the packet buffer and decodes the headers. The mapping
 * is done automatically, although a variation on the loop method exists
 * that allows the programmer to specify exactly which protocol ID to use
 * as the data link type for this pcap interface.
 **************************************************************************/
pcap.loop(-1, jpacketHandler, "jNetPcap rocks!");
/**
* *************************************************************************
* Last thing to do is close the pcap handle
*************************************************************************
*/
pcap.close();
}
}
You can't. Big sites have several addresses, so facebook.com resolves to many addresses, but a reverse lookup of those addresses does not resolve back to facebook.com.
If you can capture the conversation, you can read the HTTP headers; there you can find the URL you are interested in.
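For example, jNetPcap ships an HTTP decoder (org.jnetpcap.protocol.tcpip.Http, the same class used in the solution below), so a sketch of a handler that reads the Host request header from captured port-80 traffic could look like this:

PcapPacketHandler<String> hostHandler = (PcapPacket packet, String user) -> {
    Http http = new Http();
    if (packet.hasHeader(http)) {
        // The Host request header carries the name the browser actually asked for.
        String host = http.fieldValue(Http.Request.Host);
        if (host != null) {
            System.out.println("Browser requested host: " + host);
        }
    }
};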
I found a way to do it by reading the TCP packets and using their destination port. Thank you for your help, minus; I wouldn't have found the solution without it. This is the handler method:
static String[] dominios = {"com", "org", "net", "info", "biz", "edu", "gob"};

PcapPacketHandler<String> jpacketHandler = (PcapPacket packet, String user) -> {
    Tcp tcp = new Tcp();
    Http http = new Http();
    if (packet.hasHeader(tcp)) {
        if (tcp.destination() == 443) {
            int payloadstart = tcp.getOffset() + tcp.size();
            JBuffer buffer = new JBuffer(64 * 1024);
            buffer.peer(packet, payloadstart, packet.size() - payloadstart);
            String payload = buffer.toHexdump(packet.size(), false, true, true);
            for (String b : dominios) {
                if (payload.contains(b)) {
                    procesarTcp(payload, b);
                }
            }
        } else if (packet.hasHeader(http)) {
            System.out.println(http.fieldValue(Http.Request.Host));
        }
    }
};

public static void procesarTcp(String payload, String dominio) {
    payload = payload.substring(0, payload.indexOf(dominio) + dominio.length());
    StringTokenizer token = new StringTokenizer(payload, "\n");
    String pagina;
    String aux;
    payload = "";
    while (token.hasMoreTokens()) {
        aux = token.nextToken();
        payload += aux.substring(aux.indexOf(" ") + 4, aux.length());
    }
    pagina = payload.substring(payload.lastIndexOf("..") + 2, payload.length());
    System.err.println(pagina);
}
I am developing an Android app that receives data from a BLE sensor at a rate of about 8000 bytes per second.
The connection logic in my app is based upon Google's BluetoothLeGatt sample. It works. I haven't changed anything and don't explicitly set any connection parameters like interval between connection events (I don't think the Android 4.4 APIs support that).
I am testing on two Android phones, both using Android version 4.4.2 and am using the TI BLE sniffer to monitor BLE traffic.
One phone negotiates a 7.5ms interval and exchanges three 20-byte packets per connection event. The other phone negotiates 48.75ms between connection events and exchanges nineteen 20-byte packets per event (the effective data transfer rate per second is about the same).
My problem is that I am trying to log the data from the BLE service activity to a SQLite database as it comes in. The logging works for the phone with the 7.5ms interval. However, the app locks up on the phone with the 48.75ms interval. (In general, that phone's connection is a lot less stable.) I assume that's because it has to process 19 packets arriving right on top of each other.
My questions:
1. Is there any way I can make both phones (and any future devices) use the 7.5ms interval, since that seems to work better? Is there a way to control the Minimum/Maximum_CE_Length parameters?
2. Is there a better way to log the data than directly from the BLE service activity? These SQLite Android Developer pages suggest using an AsyncTask, but that doesn't seem appropriate since the data isn't going to the UI thread.
My code snippets:
This is my connection code, taken directly from the BluetoothLeGatt sample:
/**
 * Connects to the GATT server hosted on the Bluetooth LE device.
 *
 * @param address The device address of the destination device.
 *
 * @return Return true if the connection is initiated successfully. The connection result
 * is reported asynchronously through the
 * {@code BluetoothGattCallback#onConnectionStateChange(android.bluetooth.BluetoothGatt, int, int)}
 * callback.
 */
public boolean connect(final String address) {
if (mBluetoothAdapter == null || address == null) {
Log.w(TAG, "BluetoothAdapter not initialized or unspecified address.");
return false;
}
// Previously connected device. Try to reconnect.
if (mBluetoothDeviceAddress != null && address.equals(mBluetoothDeviceAddress)
&& mBluetoothGatt != null) {
Log.d(TAG, "Trying to use an existing mBluetoothGatt for connection.");
if (mBluetoothGatt.connect()) {
mConnectionState = STATE_CONNECTING;
return true;
} else {
return false;
}
}
final BluetoothDevice device = mBluetoothAdapter.getRemoteDevice(address);
if (device == null) {
Log.w(TAG, "Device not found. Unable to connect.");
return false;
}
// We want to directly connect to the device, so we are setting the autoConnect
// parameter to false.
mBluetoothGatt = device.connectGatt(this, false, mGattCallback);
Log.d(TAG, "Trying to create a new connection.");
mBluetoothDeviceAddress = address;
mConnectionState = STATE_CONNECTING;
return true;
}
My logging code is in the broadcastUpdate function:
private void broadcastUpdate(final String action,
final BluetoothGattCharacteristic characteristic) {
final Intent intent = new Intent(action);
StringBuilder stringBuilder = new StringBuilder();
StringBuilder descStringBuilder = new StringBuilder();
byte[] newData = characteristic.getValue();
String dataString;
if (newData != null && newData.length > 0) {
if (UUID_SENSOR_FFF4.equals(characteristic.getUuid())) {
totalDataBytes += newData.length;
// https://stackoverflow.com/questions/8150155/java-gethours-getminutes-and-getseconds
estimatedTime = System.currentTimeMillis();
Date timeDiff = new Date(estimatedTime - startTime - 19 * 3600000);
SimpleDateFormat timeFormat = new SimpleDateFormat("HH:mm:ss.SSS");
descStringBuilder.append("CHAR_FFF4\n");
descStringBuilder.append("Total Data: " + totalDataBytes + " Bytes\n");
descStringBuilder.append("Elapsed Time: " + timeFormat.format(timeDiff) + "\n");
for (int i = 0; i + 1 < newData.length; i += 2) {
    // Each sample arrives low byte first; reorder the two bytes so
    // ByteBuffer.getShort() (big-endian) decodes them correctly.
    // (The bound i + 1 < newData.length also avoids reading past the
    // end of the array if an odd number of bytes arrives.)
    byte[] tempArray = { newData[i + 1], newData[i] };
    ByteBuffer wrapper = ByteBuffer.wrap(tempArray);
    short tempShort = wrapper.getShort();
    stringBuilder.append(tempShort);
    stringBuilder.append(", ");
}
dataString = stringBuilder.toString();
values.put(NbmcContract.NmbcDeviceData.COLUMN_TIMESTAMP, estimatedTime );
values.put(NbmcContract.NmbcDeviceData.COLUMN_DATA_STRING, dataString);
long newRowId = db.insert(NbmcContract.NmbcDeviceData.TABLE_NAME, null, values);
descStringBuilder.append("Row ID: " + newRowId + "\n");
} else {
descStringBuilder.append(getCharacteristicString(characteristic) + "\nDATA: ");
// We expect these characteristics to return ASCII strings
if ( DEVICE_NAME_CHAR.equals(characteristic.getUuid()) ||
MODEL_NUM_CHAR.equals(characteristic.getUuid()) ||
SERIAL_NUM_CHAR.equals(characteristic.getUuid()) ||
FIRMWARE_REV_CHAR.equals(characteristic.getUuid()) ||
HARDWARE_REV_CHAR.equals(characteristic.getUuid()) ||
SOFTWARE_REV_CHAR.equals(characteristic.getUuid()) ||
MANUF_NAME_STRING_CHAR.equals(characteristic.getUuid()))
{
for (byte byteChar : newData) {
stringBuilder.append(String.format("%c", byteChar));
}
}
else {
for (byte byteChar : newData) {
stringBuilder.append(String.format("%02X", byteChar));
}
}
dataString = stringBuilder.toString();
}
String descString = descStringBuilder.toString();
intent.putExtra("DESC_STRING", descString);
UUID uuid = characteristic.getUuid();
String uuidString = uuid.toString();
intent.putExtra("CHAR_UUID", uuidString);
intent.putExtra("EXTRA_DATA", dataString);
}
sendBroadcast(intent);
}
1) On API level 21+ you can try bluetoothGatt.requestConnectionPriority(int connectionPriority).
But that is all you can do: the actual connection interval is negotiated by the Bluetooth stack, and the pre-21 APIs expose no control over it at all.
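For example (API level 21 and up; note this is only a hint to the stack, not a guarantee):

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
    // Ask for the shortest connection interval the stack is willing to grant.
    mBluetoothGatt.requestConnectionPriority(BluetoothGatt.CONNECTION_PRIORITY_HIGH);
}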
2)
The BTLE stack on Android is not the best. I would suggest taking the data without any processing and doing the processing after all the data has been received. Please note that the GATT callbacks happen on a random thread that is NOT your UI thread. Two calls can originate from two different threads, and if one writes into a database while the other comes along and starts to write as well, you get a mess.
My suggestion:
On every receive event, copy the byte[] array (as Android may reuse the given array for future data) and send it to the main thread with a Handler. On the main thread, collect the arrays in a List or Collection, and once a certain amount has been collected, start a new thread, give it the list, and let it insert the data into the database while the main thread creates a new list for new data.
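A minimal sketch of that pattern (field and method names here, including writeBatchToDb and BATCH_SIZE, are illustrative, not part of the BluetoothLeGatt sample):

private final Handler mainHandler = new Handler(Looper.getMainLooper());
private final List<byte[]> pending = new ArrayList<>();
private static final int BATCH_SIZE = 50; // flush threshold, tune as needed

// Called from whatever thread the GATT callback happens to run on.
private void onCharacteristicData(byte[] value) {
    final byte[] copy = Arrays.copyOf(value, value.length); // Android may reuse 'value'
    mainHandler.post(() -> {
        pending.add(copy); // only the main thread touches 'pending'
        if (pending.size() >= BATCH_SIZE) {
            final List<byte[]> batch = new ArrayList<>(pending);
            pending.clear();
            // Hand the batch to a worker thread for the SQLite inserts.
            new Thread(() -> writeBatchToDb(batch)).start();
        }
    });
}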
I went over Java's tutorial on sound, but somehow it is way too complex for a beginner.
It is here
My aim is this:
Detect all the audio input and output devices
Let the user select an audio input device
Capture what the user says
Output it to the default audio output device
Now how do I go about doing that?
Is there a better tutorial available?
What I have tried:
import javax.sound.sampled.*;
public class SoundTrial {
public static void main(String[] args) {
Mixer.Info[] info = AudioSystem.getMixerInfo();
int i =0;
for(Mixer.Info print : info){
System.out.println("Name: "+ i + " " + print.getName());
i++;
}
}
}
output:
Name: 0 Primary Sound Driver
Name: 1 Speakers and Headphones (IDT High Definition Audio CODEC)
Name: 2 Independent Headphones (IDT High Definition Audio CODEC
Name: 3 SPDIF (Digital Out via HP Dock) (IDT High Definition Audio CODEC)
Name: 4 Primary Sound Capture Driver
Name: 5 Integrated Microphone Array (ID
Name: 6 Microphone (IDT High Definition
Name: 7 Stereo Mix (IDT High Definition
Name: 8 Port Speakers and Headphones (IDT Hi
Name: 9 Port SPDIF (Digital Out via HP Dock)
Name: 10 Port Integrated Microphone Array (ID
Name: 11 Port Microphone (IDT High Definition
Name: 12 Port Stereo Mix (IDT High Definition
Name: 13 Port Independent Headphones (IDT Hi
This code may help you. Note this has been taken from this link: Audio Video. I found it using a Google search; I've posted the code here in case the link becomes outdated.
import javax.media.*;
import javax.media.format.*;
import javax.media.protocol.*;
import java.util.*;
/*******************************************************************************
* A simple application to allow users to capture audio or video through devices
* connected to the PC. Via command-line arguments the user specifies whether
* audio (-a) or video (-v) capture, the duration of the capture (-d) in
* seconds, and the file to write the media to (-f).
*
* The application would be far more useful and versatile if it provided control
* over the formats of the audio and video captured as well as the content type
* of the output.
*
* The class searches for capture devices that support the particular default
* track formats: linear for audio and Cinepak for video. As a fall-back two
* device names are hard-coded into the application as an example of how to
* obtain DeviceInfo when a device's name is known. The user may force the
* application to use these names by using the -k (known devices) flag.
*
* The class is static but employs the earlier Location2Location example to
* perform all the Processor and DataSink related work. Thus the application
* chiefly involves CaptureDevice related operations.
*
* @author Michael (Spike) Barlow
******************************************************************************/
public class SimpleRecorder {
/////////////////////////////////////////////////////////////
// Names for the audio and video capture devices on the
// author's system. These will vary system to system but are
// only used as a fallback.
/////////////////////////////////////////////////////////////
private static final String AUDIO_DEVICE_NAME = "DirectSoundCapture";
private static final String VIDEO_DEVICE_NAME = "vfw:Microsoft WDM Image Capture:0";
///////////////////////////////////////////////////////////
// Default names for the files to write the output to for
// the case where they are not supplied by the user.
//////////////////////////////////////////////////////////
private static final String DEFAULT_AUDIO_NAME = "file://./captured.wav";
private static final String DEFAULT_VIDEO_NAME = "file://./captured.avi";
///////////////////////////////////////////
// Type of capture requested by the user.
//////////////////////////////////////////
private static final String AUDIO = "audio";
private static final String VIDEO = "video";
private static final String BOTH = "audio and video";
////////////////////////////////////////////////////////////////////
// The only audio and video formats that the particular application
// supports. A better program would allow user selection of formats
// but would grow past the small example size.
////////////////////////////////////////////////////////////////////
private static final Format AUDIO_FORMAT = new AudioFormat(
AudioFormat.LINEAR);
private static final Format VIDEO_FORMAT = new VideoFormat(
VideoFormat.CINEPAK);
public static void main(String[] args) {
//////////////////////////////////////////////////////
// Object to handle the processing and sinking of the
// data captured from the device.
//////////////////////////////////////////////////////
Location2Location capture;
/////////////////////////////////////
// Audio and video capture devices.
////////////////////////////////////
CaptureDeviceInfo audioDevice = null;
CaptureDeviceInfo videoDevice = null;
/////////////////////////////////////////////////////////////
// Capture device's "location" plus the name and location of
// the destination.
/////////////////////////////////////////////////////////////
MediaLocator captureLocation = null;
MediaLocator destinationLocation;
String destinationName = null;
////////////////////////////////////////////////////////////
// Formats the Processor (in Location2Location) must match.
////////////////////////////////////////////////////////////
Format[] formats = new Format[1];
///////////////////////////////////////////////
// Content type for an audio or video capture.
//////////////////////////////////////////////
ContentDescriptor audioContainer = new ContentDescriptor(
FileTypeDescriptor.WAVE);
ContentDescriptor videoContainer = new ContentDescriptor(
FileTypeDescriptor.MSVIDEO);
ContentDescriptor container = null;
////////////////////////////////////////////////////////////////////
// Duration of recording (in seconds) and period to wait afterwards
///////////////////////////////////////////////////////////////////
double duration = 10;
int waitFor = 0;
//////////////////////////
// Audio or video capture?
//////////////////////////
String selected = AUDIO;
////////////////////////////////////////////////////////
// All devices that support the format in question.
// A means of "ensuring" the program works on different
// machines with different capture devices.
////////////////////////////////////////////////////////
Vector devices;
//////////////////////////////////////////////////////////
// Whether to search for capture devices that support the
// format or use the devices whose names are already
// known to the application.
//////////////////////////////////////////////////////////
boolean useKnownDevices = false;
/////////////////////////////////////////////////////////
// Process the command-line options as to audio or video,
// duration, and file to save to.
/////////////////////////////////////////////////////////
for (int i = 0; i < args.length; i++) {
    // NOTE: the body of this argument-parsing loop was garbled in the original
    // post; this is a plausible reconstruction based on the flags described in
    // the class javadoc (-a audio, -v video, -d duration, -f file, -k known devices).
    if (args[i].equals("-a"))
        selected = AUDIO;
    else if (args[i].equals("-v"))
        selected = VIDEO;
    else if (args[i].equals("-d"))
        duration = Double.parseDouble(args[++i]);
    else if (args[i].equals("-f"))
        destinationName = args[++i];
    else if (args[i].equals("-k"))
        useKnownDevices = true;
}
/////////////////////////////////////////////////////////////////
// Perform setup for audio capture. Includes finding a suitable
// device, obtaining its MediaLocator and setting the content
// type.
////////////////////////////////////////////////////////////////
if (selected.equals(AUDIO)) {
devices = CaptureDeviceManager.getDeviceList(AUDIO_FORMAT);
if (devices.size() > 0 && !useKnownDevices) {
audioDevice = (CaptureDeviceInfo) devices.elementAt(0);
} else
audioDevice = CaptureDeviceManager.getDevice(AUDIO_DEVICE_NAME);
if (audioDevice == null) {
System.out.println("Can't find suitable audio device. Exiting");
System.exit(1);
}
captureLocation = audioDevice.getLocator();
formats[0] = AUDIO_FORMAT;
if (destinationName == null)
destinationName = DEFAULT_AUDIO_NAME;
container = audioContainer;
}
/////////////////////////////////////////////////////////////////
// Perform setup for video capture. Includes finding a suitable
// device, obtaining its MediaLocator and setting the content
// type.
////////////////////////////////////////////////////////////////
else if (selected.equals(VIDEO)) {
devices = CaptureDeviceManager.getDeviceList(VIDEO_FORMAT);
if (devices.size() > 0 && !useKnownDevices)
videoDevice = (CaptureDeviceInfo) devices.elementAt(0);
else
videoDevice = CaptureDeviceManager.getDevice(VIDEO_DEVICE_NAME);
if (videoDevice == null) {
System.out.println("Can't find suitable video device. Exiting");
System.exit(1);
}
captureLocation = videoDevice.getLocator();
formats[0] = VIDEO_FORMAT;
if (destinationName == null)
destinationName = DEFAULT_VIDEO_NAME;
container = videoContainer;
} else if (selected.equals(BOTH)) {
captureLocation = null;
formats = new Format[2];
formats[0] = AUDIO_FORMAT;
formats[1] = VIDEO_FORMAT;
container = videoContainer;
if (destinationName == null)
destinationName = DEFAULT_VIDEO_NAME;
}
////////////////////////////////////////////////////////////////////
// Perform all the necessary Processor and DataSink preparation via
// the Location2Location class.
////////////////////////////////////////////////////////////////////
destinationLocation = new MediaLocator(destinationName);
System.out.println("Configuring for capture. Please wait.");
capture = new Location2Location(captureLocation, destinationLocation,
formats, container, 1.0);
/////////////////////////////////////////////////////////////////////////////
// Start the recording and tell the user. Specify the length of the
// recording. Then wait around for up to 4-times the duration of
// recording
// (can take longer to sink/write the data so we should wait a bit in case).
/////////////////////////////////////////////////////////////////////////////
System.out.println("Started recording " + duration + " seconds of "
+ selected + " ...");
capture.setStopTime(new Time(duration));
if (waitFor == 0)
waitFor = (int) (4000 * duration);
else
waitFor *= 1000;
int waited = capture.transfer(waitFor);
/////////////////////////////////////////////////////////
// Report on the success (or otherwise) of the recording.
/////////////////////////////////////////////////////////
int state = capture.getState();
if (state == Location2Location.FINISHED)
System.out.println(selected
+ " capture successful in approximately "
+ ((int) ((waited + 500) / 1000))
+ " seconds. Data written to " + destinationName);
else if (state == Location2Location.FAILED)
System.out.println(selected
+ " capture failed after approximately "
+ ((int) ((waited + 500) / 1000)) + " seconds");
else {
System.out.println(selected
+ " capture still ongoing after approximately "
+ ((int) ((waited + 500) / 1000)) + " seconds");
System.out.println("Process likely to have failed");
}
System.exit(0);
}
}
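If you would rather stay with javax.sound.sampled (which the question's SoundTrial code already uses) instead of JMF, a minimal capture-to-playback sketch looks like this. It assumes the default input and output lines both support the chosen PCM format; to honor the user's selection from the mixer list, open the lines from the chosen Mixer instead of the defaults:

import javax.sound.sampled.*;

public class Loopback {
    public static void main(String[] args) throws LineUnavailableException {
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false); // 44.1 kHz, 16-bit, mono, signed PCM
        TargetDataLine mic = AudioSystem.getTargetDataLine(format);      // default capture device
        SourceDataLine speakers = AudioSystem.getSourceDataLine(format); // default playback device
        mic.open(format);
        speakers.open(format);
        mic.start();
        speakers.start();
        byte[] buf = new byte[4096];
        while (true) { // echo microphone to speakers; stop with Ctrl-C
            int n = mic.read(buf, 0, buf.length);
            if (n > 0) {
                speakers.write(buf, 0, n);
            }
        }
    }
}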
I am trying to send data over a USB to serial cable using the jUSB library. I'm coding in the NetBeans IDE on Windows.
What is the problem behind the message: "USB Host support is unavailable" in the following code:
package usb.core;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.List;
import usb.core.*;
public class Main {
public static void main(String[] args) throws IOException {
try {
// Bootstrap by getting the USB Host from the HostFactory.
Host host = HostFactory.getHost();
// Obtain a list of the USB buses available on the Host.
Bus[] bus = host.getBusses();
int total_bus = bus.length;
System.out.println(total_bus);
// Traverse through all the USB buses.
for (int i = 0; i < total_bus; i++) {
// Access the root hub on the USB bus and obtain the
// number of USB ports available on the root hub.
Device root = bus[i].getRootHub();
int total_port = root.getNumPorts();
// Traverse through all the USB ports available on the
// root hub. It should be mentioned that the numbering
// starts from 1, not 0.
for (int j = 1; j <= total_port; j++) {
// Obtain the Device connected to the port.
Device device = root.getChild(j);
if (device != null) {
// USB device available, do something here.
// Obtain the current Configuration of the device
// and the number of interfaces available under the
// current Configuration.
Configuration config = device.getConfiguration();
int total_interface = config.getNumInterfaces();
// Traverse through the Interfaces
for (int k = 0; k < total_interface; k++) {
// Access the current Interface and obtain the
// number of endpoints available on the Interface
Interface itf = config.getInterface(k, 0);
int total_ep = itf.getNumEndpoints();
// Traverse through all the endpoints.
for (int l = 0; l < total_ep; l++) {
// Access the endpoint and
// obtain its I/O type
Endpoint ep = itf.getEndpoint(l);
String io_type = ep.getType();
boolean input = ep.isInput();
// If the endpoint is an input endpoint,
// obtain its InputStream and read in data.
if (input) {
InputStream in;
in = ep.getInputStream();
// Read in data here
in.close();
}
else {
// If the Endpoint is an output Endpoint,
// obtain its OutputStream and
// write out data.
OutputStream out;
out = ep.getOutputStream();
// Write out data here.
out.close();
}
}
}
}
}
}
}
catch (Exception e) {
    // Print the full stack trace rather than just the message,
    // so the underlying cause of the failure is visible.
    e.printStackTrace();
}
}
}
I Googled that error message and I found:
A forum post from 2005 that said that on Linux this can be due to something else having grabbed exclusive use of the USB controller: http://bytes.com/topic/java/answers/16736-jusb
An online copy of the source code, which indicates that this happens if getHost's attempt to create a (platform-specific) HostFactory fails. Unfortunately, the code eats unexpected exceptions (*), so you'll need to use a Java debugger to figure out what the real cause is.
(* The code catches Exception in maybeGetHost and other places and throws away the diagnostics! This is a major no-no, and a big red flag about the overall code quality of the library. If I were you, I'd be looking for a better-quality library to use.)
I want to capture and stream audio using JMF 2.1.1e over RTP. I wrote a simple transmitter, and I can transmit and receive the audio. But when I looked in Wireshark, I saw the packets as UDP. Can anyone point out the problem, please?
And here is my function responsible for audio capture and transmission.
public void captureAudio(){
// Get the device list for ULAW
Vector devices = captureDevices();
CaptureDeviceInfo captureDeviceInfo = null;
if (devices.size() > 0) {
//get the first device from the list and cast it as CaptureDeviceInfo
captureDeviceInfo = (CaptureDeviceInfo) devices.firstElement();
}
else {
// exit if we could not find the relevant capture device.
System.out.println("No such device found");
System.exit(-1);
}
Processor processor = null;
try {
//Create a Processor for the specified media.
processor = Manager.createProcessor(captureDeviceInfo.getLocator());
} catch (IOException ex) {
System.err.println(ex);
} catch (NoProcessorException ex) {
System.err.println(ex);
}
//Prepares the Processor to be programmed.
//puts the Processor into the Configuring state.
processor.configure();
//Wait till the Processor is configured.
while (processor.getState() != Processor.Configured){
try {
Thread.sleep(100);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
//Sets the output content-type for this Processor
processor.setContentDescriptor(CONTENT_DESCRIPTOR);
/**
ContentDescriptor CONTENT_DESCRIPTOR
= new ContentDescriptor(ContentDescriptor.RAW_RTP);
*/
//Gets a TrackControl for each track in the media stream.
TrackControl track[] = processor.getTrackControls();
boolean encodingOk = false;
//searching through tracks to get a supported audio format track.
for (int i = 0; i < track.length; i++) {
if (!encodingOk && track[i] instanceof FormatControl) {
if (((FormatControl)
track[i]).setFormat( new AudioFormat(AudioFormat.ULAW_RTP, 8000, 8, 1) ) == null)
{
track[i].setEnabled(false);
}
else {
encodingOk = true;
track[i].setEnabled(encodingOk);
System.out.println("enc: " + i);
}
} else {
// we could not set this track to ULAW, so disable it
track[i].setEnabled(false);
}
}
//If we could set this track to ULAW we proceed
if (encodingOk){
processor.realize();
while (processor.getState() != Processor.Realized){
try {
Thread.sleep(100);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
DataSource dataSource = null;
try {
dataSource = processor.getDataOutput();
} catch (NotRealizedError e) {
e.printStackTrace();
}
try {
String url= "rtp://192.168.1.99:49150/audio/1";
MediaLocator m = new MediaLocator(url);
DataSink d = Manager.createDataSink(dataSource, m);
d.open();
d.start();
System.out.println("transmitting...");
processor.start();
} catch (Exception e) {
e.printStackTrace();
}
}
}
And please ask, if you find anything improper or vague.
Thanks in advance. :)
Clarification: I have a piece of C# code for RTP streaming. When I capture its traffic using Wireshark, I can see the packets as RTP. The problem is that when I capture the data stream from JMF, Wireshark shows the packets as UDP. My question is: why?
I know the difference between UDP and RTP.
RTP is in the application layer,
UDP is in the transport layer;
they are not at the same level!
Wikipedia helps to explain this in detail.
So your data is sent as an RTP stream inside a UDP "frame".
A small overview...
Application layers:
* DHCP
* DHCPv6
* DNS
* FTP
* HTTP
* IMAP
* IRC
* LDAP
* MGCP
* NNTP
* BGP
* NTP
* POP
* RPC
* RTP
* RTSP
* RIP
* SIP
* SMTP
* SNMP
* SOCKS
* SSH
* Telnet
* TLS/SSL
* XMPP
* (more)
Transport layer
* TCP
* UDP
* DCCP
* SCTP
* RSVP
* (more)
Internet layer
* IP
o IPv4
o IPv6
* ICMP
* ICMPv6
* ECN
* IGMP
* IPsec
* (more)
Link layer
* ARP/InARP
* NDP
* OSPF
* Tunnels
o L2TP
* PPP
* Media access control
o Ethernet
o DSL
o ISDN
o FDDI
* (more)
If I understand your question correctly, you want to know why the RTP packets are not recognised as RTP packets in Wireshark. In my experience this can be the case if Wireshark does not have enough information about the RTP session.
For example: 1) if I sniff an RTP session that has been set up using RTSP, or SIP and SDP, then Wireshark will detect RTP. 2) However, in another application in which I was setting up the session using local SDP descriptions, the packets showed up as UDP. In the second scenario Wireshark sees the RTP packets but lacks the information to classify them as RTP. Since RTP usually sits on top of UDP and Wireshark can classify UDP packets, they are classified as such.
FYI, you can then select a packet from the stream that you know is RTP and choose "Decode as". Then select RTP from the protocol list for the appropriate stream (and RTCP for the other one, if applicable), and Wireshark will show the packets as RTP and you'll be able to see the packet info.
I'm not sure if this is the reason in your specific case, but perhaps there is a signaling difference between your JMF and C# examples? It sounds like you might be using SIP for the C# application and nothing for the JMF one.