Based on Peter Brinkmann's sample class, I am running libpd and Processing in Eclipse, but I don't completely understand how to get audio input from the Android microphone into Pure Data.
When I run it on an actual or virtual device, I get a bunch of errors saying:
E/AudioRecord(1079): Could not get audio input for record source 1
E/AudioRecord-JNI(1079): Error creating AudioRecord instance: initialization check failed.
Here's the main class:
package com.noisepages.nettoyeur.processing.sample;
import org.puredata.android.io.AudioParameters;
import org.puredata.android.processing.PureDataP5Android;
import processing.core.PApplet;
/**
* @author Peter Brinkmann (peter.brinkmann@gmail.com)
*/
public class PdP5Sample extends PApplet {
PureDataP5Android pd;
int zipId = com.noisepages.nettoyeur.processing.sample.R.raw.patch; // Processing masks R
int ins = AudioParameters.suggestInputChannels();
int sampleRate = AudioParameters.suggestSampleRate();
public void setup() {
pd = new PureDataP5Android(this, sampleRate, ins, 2);
pd.unpackAndOpenPatch(zipId, "audiotest.pd");
pd.start();
}
public void draw() {
background(0);
fill(mouseY, mouseX, 0);
stroke(mouseY, mouseX, 0);
ellipseMode(CENTER);
ellipse(mouseX, mouseY, 100, 100);
}
public void stop() {
pd.release();
super.stop();
}
/*
// Implement methods like the following if you want to receive messages from Pd.
// You'll also need to subscribe to the symbols you're interested in if you
// want to receive messages.
public void pdPrint(String s) {
// Handle string s, printed by Pd
}
public void receiveBang(String source) {
// Handle bang sent to symbol source in Pd
}
public void receiveFloat(String source, float x) {
// Handle float x sent to symbol source in Pd
}
public void receiveSymbol(String source, String sym) {
// Handle symbol sym sent to symbol source in Pd
}
*/
// boilerplate
public int sketchWidth() { return this.screenWidth; }
public int sketchHeight() { return this.screenHeight; }
public String sketchRenderer() { return PApplet.OPENGL; }
}
Did you add this to AndroidManifest.xml?
<uses-permission android:name="android.permission.RECORD_AUDIO" />
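Also note that on Android 6.0 (API 23) and newer, the manifest entry alone is not enough: RECORD_AUDIO is a dangerous permission and must be granted at runtime as well. A minimal sketch, assuming the sketch runs inside an Activity and you use the AndroidX core library; the helper class name and request code are mine, not from the sample. Call it before pd.start():
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;
public final class MicPermissionHelper {
    private static final int REQUEST_RECORD_AUDIO = 1; // arbitrary request code
    // Ask for the microphone permission if it has not been granted yet.
    // The result is delivered to the activity's onRequestPermissionsResult().
    public static void requestIfNeeded(Activity activity) {
        if (ContextCompat.checkSelfPermission(activity, Manifest.permission.RECORD_AUDIO)
                != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(activity,
                    new String[]{Manifest.permission.RECORD_AUDIO}, REQUEST_RECORD_AUDIO);
        }
    }
}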
Related
I would like to use the PocketSphinx library in my Android project, but I'm failing with it: nothing happens, it doesn't work, and I don't get any errors.
This is how I tried it:
Added pocketsphinx-android-5prealpha-release.aar to /app/libs
Added assets.xml to /app
Added the following to /app/build.gradle:
ant.importBuild 'assets.xml'
preBuild.dependsOn(list, checksum)
clean.dependsOn(clean_assets)
Added sync (with all sub-files) into /app/assets
Cloned the following repos into my root-directory:
git clone https://github.com/cmusphinx/sphinxbase
git clone https://github.com/cmusphinx/pocketsphinx
git clone https://github.com/cmusphinx/pocketsphinx-android
Executed gradle build
This is what my code looks like:
import android.app.Service;
import android.content.Intent;
import android.os.AsyncTask;
import android.os.IBinder;
import android.util.Log;
import androidx.annotation.Nullable;
import java.io.File;
import java.io.IOException;
import java.lang.ref.WeakReference;
import java.util.HashMap;
import ch.yourclick.kitt.R;
import edu.cmu.pocketsphinx.Assets;
import edu.cmu.pocketsphinx.Hypothesis;
import edu.cmu.pocketsphinx.RecognitionListener;
import edu.cmu.pocketsphinx.SpeechRecognizer;
import edu.cmu.pocketsphinx.SpeechRecognizerSetup;
public class SttService extends Service implements RecognitionListener {
private static final String TAG = "SstService";
/* Named searches allow to quickly reconfigure the decoder */
private static final String KWS_SEARCH = "wakeup";
private static final String FORECAST_SEARCH = "forecast";
private static final String DIGITS_SEARCH = "digits";
private static final String PHONE_SEARCH = "phones";
private static final String MENU_SEARCH = "menu";
/* Keyword we are looking for to activate menu */
private static final String KEYPHRASE = "oh mighty computer";
/* Used to handle permission request */
private static final int PERMISSIONS_REQUEST_RECORD_AUDIO = 1;
private SpeechRecognizer recognizer;
private HashMap<String, Integer> captions;
public SttService() {
// Prepare the data for UI
captions = new HashMap<>();
captions.put(KWS_SEARCH, R.string.kws_caption);
captions.put(MENU_SEARCH, R.string.menu_caption);
captions.put(DIGITS_SEARCH, R.string.digits_caption);
captions.put(PHONE_SEARCH, R.string.phone_caption);
captions.put(FORECAST_SEARCH, R.string.forecast_caption);
Log.e(TAG, "SttService: Preparing the recognition");
// Recognizer initialization is a time-consuming and it involves IO,
// so we execute it in async task
new SetupTask(this).execute();
}
private static class SetupTask extends AsyncTask<Void, Void, Exception> {
WeakReference<SttService> activityReference;
SetupTask(SttService activity) {
this.activityReference = new WeakReference<>(activity);
}
@Override
protected Exception doInBackground(Void... params) {
try {
Assets assets = new Assets(activityReference.get());
File assetDir = assets.syncAssets();
activityReference.get().setupRecognizer(assetDir);
} catch (IOException e) {
return e;
}
return null;
}
@Override
protected void onPostExecute(Exception result) {
if (result != null) {
Log.e(TAG, "onPostExecute: Failed to init recognizer " + result);
} else {
activityReference.get().switchSearch(KWS_SEARCH);
}
}
}
@Override
public void onDestroy() {
super.onDestroy();
if (recognizer != null) {
recognizer.cancel();
recognizer.shutdown();
}
}
@Nullable
@Override
public IBinder onBind(Intent intent) {
return null;
}
/**
* In partial result we get quick updates about current hypothesis. In
* keyword spotting mode we can react here, in other modes we need to wait
* for final result in onResult.
*/
@Override
public void onPartialResult(Hypothesis hypothesis) {
if (hypothesis == null)
return;
String text = hypothesis.getHypstr();
if (text.equals(KEYPHRASE))
switchSearch(MENU_SEARCH);
else if (text.equals(DIGITS_SEARCH))
switchSearch(DIGITS_SEARCH);
else if (text.equals(PHONE_SEARCH))
switchSearch(PHONE_SEARCH);
else if (text.equals(FORECAST_SEARCH))
switchSearch(FORECAST_SEARCH);
else
Log.e(TAG, "onPartialResult: " + text);
}
/**
* This callback is called when we stop the recognizer.
*/
@Override
public void onResult(Hypothesis hypothesis) {
if (hypothesis != null) {
String text = hypothesis.getHypstr();
Log.e(TAG, "onResult: " + text);
}
}
@Override
public void onBeginningOfSpeech() {
}
/**
* We stop recognizer here to get a final result
*/
@Override
public void onEndOfSpeech() {
if (!recognizer.getSearchName().equals(KWS_SEARCH))
switchSearch(KWS_SEARCH);
}
private void switchSearch(String searchName) {
recognizer.stop();
// If we are not spotting, start listening with timeout (10000 ms or 10 seconds).
if (searchName.equals(KWS_SEARCH))
recognizer.startListening(searchName);
else
recognizer.startListening(searchName, 10000);
String caption = getResources().getString(captions.get(searchName));
Log.e(TAG, "switchSearch: "+ caption);
}
private void setupRecognizer(File assetsDir) throws IOException {
// The recognizer can be configured to perform multiple searches
// of different kind and switch between them
recognizer = SpeechRecognizerSetup.defaultSetup()
.setAcousticModel(new File(assetsDir, "en-us-ptm"))
.setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
.setRawLogDir(assetsDir) // To disable logging of raw audio comment out this call (takes a lot of space on the device)
.getRecognizer();
recognizer.addListener(this);
/* In your application you might not need to add all those searches.
They are added here for demonstration. You can leave just one.
*/
// Create keyword-activation search.
recognizer.addKeyphraseSearch(KWS_SEARCH, KEYPHRASE);
// Create grammar-based search for selection between demos
File menuGrammar = new File(assetsDir, "menu.gram");
recognizer.addGrammarSearch(MENU_SEARCH, menuGrammar);
// Create grammar-based search for digit recognition
File digitsGrammar = new File(assetsDir, "digits.gram");
recognizer.addGrammarSearch(DIGITS_SEARCH, digitsGrammar);
// Create language model search
File languageModel = new File(assetsDir, "weather.dmp");
recognizer.addNgramSearch(FORECAST_SEARCH, languageModel);
// Phonetic search
File phoneticModel = new File(assetsDir, "en-phone.dmp");
recognizer.addAllphoneSearch(PHONE_SEARCH, phoneticModel);
}
@Override
public void onError(Exception error) {
Log.e(TAG, "onError: " + error.getMessage());
}
@Override
public void onTimeout() {
switchSearch(KWS_SEARCH);
}
}
My code is almost the same as the pocketsphinx-android-demo. The only differences are that I am doing this in a Service class instead of an Activity, and I am not asking the user for microphone permission since I already do that in the MainActivity. My code has some warnings, but no errors.
When I run my app, I get this message (see the full stack trace):
E/SstService: switchSearch: To start demonstration say "oh mighty
computer".
But when I say "oh mighty computer" (or anything else), nothing happens. I don't even get an error. So I have no idea where I am stuck and what I am doing wrong.
If there is someone familiar with that library, any help will be appreciated!
I tried making a synth and it works; I can play music with it. However, the first synth I made had delay and you couldn't play fast songs. So I tried again, using the sourceDataLine.flush() method to speed it up. That somewhat fixes it, but the delay is still too much. I also tried reducing the sample rate, but the delay is still too much.
Edit: it turns out you can comment out the line keyStateInterface.setFlush(false);
that improves the delay, but you still can't play fast songs.
Here is the code:
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;
public class SoundLine implements Runnable{
KeyStateInterface keyStateInterface;
public SoundLine(KeyStateInterface arg){
keyStateInterface=arg;
}
@Override
public void run() {
AudioFormat audioFormat = new AudioFormat(44100,8,1,true,false);
try {
SourceDataLine sourceDataLine = AudioSystem.getSourceDataLine(audioFormat);
sourceDataLine.open(audioFormat);
sourceDataLine.start();
SynthMain synthMain = new SynthMain();
int v = 0;
while (true) {
int bytesAvailable = sourceDataLine.available();
if (bytesAvailable > 0) {
int sampling = 256/(64);
byte[] bytes = new byte[sampling];
for (int i = 0; i < sampling; i++) {
//bytes[i] = (byte) (Math.sin(angle) * 127f);
float t = (float) (synthMain.makeSound((double)v,44100,keyStateInterface)* 127f);
bytes[i] = (byte) (t);
v += 1;
}
if(keyStateInterface.getFlush()){
sourceDataLine.flush();
}
sourceDataLine.write(bytes, 0, sampling);
//if(!keyStateInterface.isCacheKeysSame())sourceDataLine.flush();
//System.out.println(bytesWritten);
} else {
Thread.sleep(1);
}
//System.out.println(bytesAvailable);
//System.out.println();
//if((System.currentTimeMillis()-mil)%50==0)freq+=0.5;
}
}catch (Exception e){
}
}
}
public class SynthMain {
double[] noteFrequency = {
466.1637615181,
493.8833012561,
523.2511306012,
554.3652619537,
587.3295358348,
622.2539674442,
659.2551138257,
698.4564628660,
739.9888454233,
783.9908719635,
830.6093951599,
880.0000000000,
932.3275230362,
987.7666025122,
1046.5022612024,
1108.7305239075,
1174.6590716696,
1244.5079348883,
1318.5102276515,
1396.9129257320,
1479.9776908465,
1567.9817439270,
1661.2187903198,
1760.0000000000,
1864.6550460724,
1975.5332050245,
2093.0045224048,
2217.4610478150,
2349.3181433393,
2489.0158697766,
2637.0204553030,
2793.8258514640,
2959.9553816931,
3135.9634878540,
3322.4375806396,
3520.0000000000,
3729.3100921447,
};
boolean[] keys = new boolean[noteFrequency.length];
public double makeSound(double dTime,double SampleRate,KeyStateInterface keyStateInterface){
if(keyStateInterface.getSizeOfMidiKey()>0){
keyStateInterface.setFlush(true);
for(int i=0;i<keyStateInterface.getSizeOfMidiKey();i++) {
KeyRequest keyRequest = keyStateInterface.popMidiKey();
if(keyRequest.getCommand()==-112){
if(keyRequest.getVelocity()>0)keys[keyRequest.getArg1()] = true;
if(keyRequest.getVelocity()<1)keys[keyRequest.getArg1()] = false;
System.out.println(keyRequest.getVelocity());
}
}
}else{
keyStateInterface.setFlush(false);
}
//System.out.println("makeSound");
double a = 0.0;
for(int i=0;i<keys.length;i++){
if(keys[i]){
a+=Oscillate(dTime,noteFrequency[i],(int)SampleRate);
}
}
return a*0.4;
}
public double Oscillate(double dTime,double dFreq,int sampleRate){
double period = (double)sampleRate / dFreq;
return Math.sin(2.0 * Math.PI * (int)dTime / period);
}
}
import java.util.ArrayList;
import java.util.Stack;
public class KeyState implements KeyStateInterface{
boolean isFlush;
ArrayList<KeyRequest> keyRequest = new ArrayList<KeyRequest>();
ArrayList<KeyRequest> midiKeyRequest = new ArrayList<KeyRequest>();
@Override
public void pushKey(int keyCode, boolean press) {
keyRequest.add(new KeyRequest(KeyRequest.KEY,keyCode,press));
}
@Override
public void pushMidiKey(int command, int arg1, int velocity) {
midiKeyRequest.add(new KeyRequest(KeyRequest.MIDI_KEY,command,arg1,velocity));
}
@Override
public KeyRequest popKey() {
// pop the oldest pending key request; indexing with size() would run past the end of the list
KeyRequest t = keyRequest.get(0);
keyRequest.remove(0);
return t;
}
@Override
public KeyRequest popMidiKey() {
// pop from midiKeyRequest, not keyRequest, so the index always refers to the right list
KeyRequest t = midiKeyRequest.get(0);
midiKeyRequest.remove(0);
return t;
}
@Override
public int getSizeOfKey() {
return keyRequest.size();
}
@Override
public int getSizeOfMidiKey() {
return midiKeyRequest.size();
}
@Override
public boolean getFlush() {
boolean v = isFlush;
isFlush = false;
return v;
}
@Override
public void setFlush(boolean arg) {
isFlush=arg;
}
}
I haven't dug deep into your code, but perhaps the following info will be useful.
The SourceDataLine.write() method uses a blocking queue internally. It will only progress as fast as the data can be processed. So, there is no need to test for available capacity before populating and shipping bytes.
I'd give the SDL thread a priority of 10, since most of its time is spent in a blocked state anyway.
Also, I'd leave the line open and running. I first got that advice from Neil Smith of Praxis Live; there is a cost associated with continually rebuilding a line. It also looks to me like you are shipping only about 4 bytes of audio data per write, which would be highly inefficient. I suspect that shipping somewhere in the range of 256 bytes to 8K on a line that is left open would be a better choice, but I don't have hard facts to back up that opinion. Neil wrote about having all the transporting arrays be the same size (e.g., the array of data produced by the synth should be the same size as the SDL write).
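Roughly, that looks like the following untested sketch. It reuses your SynthMain and KeyStateInterface; the 4096-byte line buffer and 1024-byte write chunk are assumptions to experiment with, not measured values:
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;
public class BlockingSoundLine implements Runnable {
    private final KeyStateInterface keyState;
    private final SynthMain synth = new SynthMain();
    public BlockingSoundLine(KeyStateInterface keyState) {
        this.keyState = keyState;
    }
    @Override
    public void run() {
        AudioFormat format = new AudioFormat(44100, 8, 1, true, false);
        try {
            SourceDataLine line = AudioSystem.getSourceDataLine(format);
            line.open(format, 4096); // open once with a small line buffer and leave it running
            line.start();
            byte[] buffer = new byte[1024]; // fill and ship same-sized chunks every pass
            int v = 0;
            while (true) {
                for (int i = 0; i < buffer.length; i++) {
                    buffer[i] = (byte) (synth.makeSound(v++, 44100, keyState) * 127f);
                }
                // write() blocks until the mixer has drained enough of the line buffer,
                // so there is no need to poll available() or sleep
                line.write(buffer, 0, buffer.length);
            }
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        }
    }
}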
I've made a real-time theremin with Java, where the latency includes the task of reading the mouse click and position, then sending that to the synth that is generating the audio data. I wouldn't claim that my latency is down to a precision that allows "in the pocket" starts and stops to notes, but it is still pretty good. I suspect further optimization is possible on my end.
I think Neil (mentioned earlier) has had better results. He's spoken of achieving latencies in the range of 5 milliseconds and less, as far back as 2011.
I am trying to make a module for react-native that will turn a video into a GIF. I have little to no experience with Android Studio/Java, but I would love to learn more! I am using this library to convert the video to a GIF. Here is my code:
package com.reactlibrary;
import android.widget.Toast;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;
import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
public class RNGifMakerModule extends ReactContextBaseJavaModule {
private final ReactApplicationContext reactContext;
public RNGifMakerModule(ReactApplicationContext reactContext) {
super(reactContext);
this.reactContext = reactContext;
}
@Override
public String getName() {
return "RNGifMakerModule";
}
@ReactMethod
public void alert(String message) {
Toast.makeText(getReactApplicationContext(), "Error", Toast.LENGTH_LONG).show();
String[] cmd = {"-i"
, message
, "Image.gif"};
conversion(cmd);
}
public void conversion(String[] cmd) {
FFmpeg ffmpeg = FFmpeg.getInstance(this.reactContext);
try {
// to execute "ffmpeg -version" command you just need to pass "-version"
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
@Override
public void onStart() {
}
@Override
public void onProgress(String message) {
}
@Override
public void onFailure(String message) {
}
@Override
public void onSuccess(String message) {
}
@Override
public void onFinish() {
}
});
} catch (FFmpegCommandAlreadyRunningException e) {
// Handle if FFmpeg is already running
e.printStackTrace();
}
}
}
And I get this error:
Error:(43, 31) error: cannot find symbol class ExecuteBinaryResponseHandler
This seems odd to me, because the documentation for ffmpeg-android-java says to use almost exactly the same code.
Bounty
The bounty will be awarded to you if you can find a way to convert a video.mp4 into a GIF. You do not necessarily have to use FFmpeg, but your solution has to work with Java/Android Studio.
First of all, you should initialize FFmpeg correctly.
FFmpeg ffmpeg = FFmpeg.getInstance(this.reactContext);
// please add the following call right after getInstance()
ffmpeg.loadBinary(new FFmpegLoadBinaryResponseHandler() {
@Override
public void onFailure() {
// probably your device is not supported
}
@Override
public void onSuccess() {
// you should set a flag here (isLoaded, isReady, etc.)
}
});
Only after onSuccess() can you work with commands. Then please check the following answer by LordNeckbeard.
So your code should be something like this:
if (isFFmpegLoaded) {
// ffmpeg.execute(commands from link from the answer)
}
Please do not forget to remove all spaces from the command string and to drop the "ffmpeg" word itself; each argument becomes its own array element. To keep the command more readable, I recommend building it like this:
final String[] command = new String[10]; // example of the first command in the answer
command[0] = "-y";
command[1] = "-ss";
command[2] = "30";
command[3] = "-t";
command[4] = "3";
command[5] = "-i";
command[6] = "filePath";
command[7] = "-vf";
command[8] = "fps=10,scale=320:-1:flags=lanczos,palettegen";
command[9] = "palette.png";
Please make sure that you have the storage permission to work with files, just in case you are working on external storage.
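For reference, that is the usual manifest entry (on Android 6.0 and newer the permission also has to be requested at runtime):
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />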
Based on this strategy, FFmpeg works well for me. Thanks and good luck!
First of all, you should use File - Invalidate Caches/Restart - Invalidate and Restart and try to reimport ExecuteBinaryResponseHandler. If the problem hasn't been resolved, you can try a small hack. Inside your project, create the package com.github.hiteshsondhi88.libffmpeg and this class:
package com.github.hiteshsondhi88.libffmpeg;
public class ExecuteBinaryResponseHandler implements FFmpegExecuteResponseHandler {
@Override
public void onSuccess(String message) {
}
@Override
public void onProgress(String message) {
}
@Override
public void onFailure(String message) {
}
@Override
public void onStart() {
}
@Override
public void onFinish() {
}
}
The class must sit in exactly that package inside your project.
Then, inside your build.gradle file, add multiDexEnabled true to the defaultConfig block:
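For example, a minimal sketch of that block (everything else in your defaultConfig stays as it is):
android {
    defaultConfig {
        multiDexEnabled true
    }
}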
Then you will be able to use that class.
I'm trying to create a native extension which can receive broadcasts sent from a native Android app as intent broadcasts.
The sending part works; I've tested this with a native app that has a broadcast receiver, but I can't get it to work in the native extension.
Here's what I have so far:
Here is the Java side of the ANE:
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.util.Log;
import com.adobe.fre.FREContext;
public class ReceiverPhidget extends BroadcastReceiver {
private FREContext mFREContext;
public ReceiverPhidget(FREContext mFREContext) {
this.mFREContext = mFREContext;
}
@Override
public void onReceive(Context context, Intent intent) {
String action = intent.getAction();
if (action.equals(IntentsKeys.INTENT_PHIDGET_CONNECTED)){
//Send listener in ANE project with message that phidget connected (not must)
System.out.println("Phidget connected");
mFREContext.dispatchStatusEventAsync("Yes", Keys.KEY_CONNECTED);
} else
if (action.equals(IntentsKeys.INTENT_PHIDGET_DISCONNECTED)){
//Send listener in ANE project with message that phidget disconnected (not must)
System.out.println("Phidget disconnected");
mFREContext.dispatchStatusEventAsync("Yes", Keys.KEY_DISCONNECTED);
} else
if (action.equals(IntentsKeys.INTENT_PHIDGET_GAIN_TAG)){
//Send listener with data in ANE project with message that phidget gain receive
String message = intent.getStringExtra(IntentsKeys.INTENT_PHIDGET_EXTRA_DATA);
System.out.println("Phidget gain message: " + message);
Log.d("TAG FOUND", message);
mFREContext.dispatchStatusEventAsync(message, Keys.KEY_TAG_GAIN);
}
}
public static IntentFilter getIntentFilter(){
final IntentFilter intentFilter = new IntentFilter();
intentFilter.addAction(IntentsKeys.INTENT_PHIDGET_CONNECTED);
intentFilter.addAction(IntentsKeys.INTENT_PHIDGET_DISCONNECTED);
intentFilter.addAction(IntentsKeys.INTENT_PHIDGET_GAIN_TAG);
return intentFilter;
}
}
And the FREExtension
import com.adobe.fre.FREContext;
import com.adobe.fre.FREExtension;
public class ReceiverExtension implements FREExtension {
private ReceiverPhidget mReceiverPhidget;
private ReceiverExtensionContext mContext;
@Override
public void initialize() {
mReceiverPhidget = new ReceiverPhidget(mContext);
mContext.getActivity().registerReceiver(mReceiverPhidget, ReceiverPhidget.getIntentFilter());
}
@Override
public FREContext createContext(String s) {
return mContext = new ReceiverExtensionContext();
}
@Override
public void dispose() {
mContext.getActivity().unregisterReceiver(mReceiverPhidget);
}
}
And here is the Flash library side of the ANE:
package nl.mediaheads.anetest.extension {
import flash.events.EventDispatcher;
import flash.events.StatusEvent;
import flash.external.ExtensionContext;
public class RFIDController extends EventDispatcher {
private var extContext:ExtensionContext;
private var channel:int;
private var scannedChannelList:Vector.<int>;
public function RFIDController() {
extContext = ExtensionContext.createExtensionContext(
"nl.mediaheads.anetest.exntension.RFIDController", "");
extContext.addEventListener(StatusEvent.STATUS, onStatus);
}
private function onStatus(event:StatusEvent):void {
if (event.level == EventKeys.KEY_TAG_GAIN) {
dispatchEvent (new TagEvent(TagEvent.TAG_GAINED, event.code) );
}
}
}
}
And here is my test mobile project class to test the ANE
package
{
import flash.display.Sprite;
import flash.display.StageAlign;
import flash.display.StageScaleMode;
import flash.events.Event;
import flash.text.TextField;
import nl.mediaheads.anetest.extension.RFIDController;
[SWF(width="1280", height="800", frameRate="60", backgroundColor="#ffffff")]
public class AneTestApp extends Sprite
{
private var tf:TextField;
private var rc:RFIDController;
public function AneTestApp()
{
super();
// support autoOrients
stage.align = StageAlign.TOP_LEFT;
stage.scaleMode = StageScaleMode.NO_SCALE;
stage.color = 0xFFFFFF;
addEventListener(Event.ADDED_TO_STAGE, onAdded);
}
private function onAdded(event:Event):void {
//
tf = new TextField();
tf.width = 200;
tf.height = 50;
tf.x = 10;
tf.y = 64;
tf.mouseEnabled = false;
tf.background = true;
tf.backgroundColor = 0xF50000;
addChild(tf);
rc = new RFIDController();
tf.text = "test 1";
this.addEventListener( TagEvent.TAG_GAINED , onTagAdded);
tf.text = "test 2";
//
}
private function onTagAdded(event:TagEvent):void
{
tf.text = event.params;
}
}
}
I have signed the ANE accordingly, and I also signed the test app itself.
I have a Log.d in the Java part of the ANE which should show up in logcat, but it doesn't. Also, the text field just becomes blank as soon as I initialize the RFIDController, even without adding the event listener.
If you need any more code or information to help me solve this problem feel free to ask.
I could really use some help because I'm completely lost. I've followed multiple tutorials and guides on how to do this and should have done everything correctly, but I clearly have not.
UPDATE 1:
The extension xml
<extension xmlns="http://ns.adobe.com/air/extension/3.5">
<id>nl.mediaheads.anetest.exntension.RFIDController</id>
<versionNumber>0.0.1</versionNumber>
<platforms>
<platform name="Android-ARM">
<applicationDeployment>
<nativeLibrary>AneTest.jar</nativeLibrary>
<initializer>nl.mediaheads.anetest.ReceiverExtension</initializer>
<finalizer>nl.mediaheads.anetest.ReceiverExtension</finalizer>
</applicationDeployment>
</platform>
</platforms>
</extension>
UPDATE 2:
I fixed it. It was a context issue, combined with Flash somehow stripping my custom event, so I used a StatusEvent to pass data from the Flash side of the ANE to the AIR application itself.
Currently you are creating your receiver at the initialization point of the extension, which will most likely be called before the context creation, so your context may be null at that point, causing your errors.
Try moving the creation of your ReceiverPhidget to the constructor of your ReceiverExtensionContext. Something like the following (I haven't tested this):
import java.util.HashMap;
import java.util.Map;
import com.adobe.fre.FREContext;
import com.adobe.fre.FREFunction;
public class ReceiverExtensionContext extends FREContext
{
private ReceiverPhidget mReceiverPhidget;
public ReceiverExtensionContext()
{
mReceiverPhidget = new ReceiverPhidget( this );
getActivity().registerReceiver( mReceiverPhidget, ReceiverPhidget.getIntentFilter() );
}
@Override
public Map<String, FREFunction> getFunctions()
{
Map<String, FREFunction> functionMap = new HashMap<String, FREFunction>();
return functionMap;
}
@Override
public void dispose()
{
getActivity().unregisterReceiver( mReceiverPhidget );
}
}
I am trying to understand how this piece of code works together. While testing, I see in Tomcat that when the device receives the signal, I get the message "== Deleted all data #" + uri. The tablet essentially receives a signal from the computer to begin this wiping process. As soon as it gets the signal and initializes the connection, this is displayed in Tomcat and then it begins the wipe. How does it do this wipe?
DSer.java
import android.content.ContextWrapper;
import android.net.Uri;
import android.util.Log;
import java.util.Vector;
public abstract class DSer implements Wipeable {
protected void clearExistingData(ContextWrapper cw) {
this.clearExistingData(cw, getContentURI());
}
protected void clearExistingData(ContextWrapper cw, Uri uri) {
cw.getContentResolver().delete(uri, null, null);
Log.d("DataSerializer", "== Deleted all data #" + uri);
}
@Override
public void wipeData(ContextWrapper cw, Vector<DSFileInfo> files) {
this.clearExistingData(cw);
}
}
Wipeable.java
import java.util.Vector;
import android.content.ContextWrapper;
public interface Wipeable {
public void wipeData(ContextWrapper cw, Vector<DSFileInfo> files);
}
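For context on where the actual wipe happens: ContentResolver.delete(uri, null, null) hands the call to whichever ContentProvider is registered for that URI's authority, and a null selection means no WHERE clause, so the provider deletes every row it maps to that URI. A purely illustrative provider sketch; the ExampleProvider class, the "items" table, and the database name are assumptions, not taken from this app:
import android.content.ContentProvider;
import android.content.ContentValues;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;
import android.net.Uri;
public class ExampleProvider extends ContentProvider {
    private SQLiteOpenHelper dbHelper;
    @Override
    public boolean onCreate() {
        dbHelper = new SQLiteOpenHelper(getContext(), "example.db", null, 1) {
            @Override
            public void onCreate(SQLiteDatabase db) {
                db.execSQL("CREATE TABLE items (_id INTEGER PRIMARY KEY, value TEXT)");
            }
            @Override
            public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { }
        };
        return true;
    }
    @Override
    public int delete(Uri uri, String selection, String[] selectionArgs) {
        SQLiteDatabase db = dbHelper.getWritableDatabase();
        // A null selection means no WHERE clause, so every row in the table is removed.
        // This is the code path that clearExistingData() triggers for each content URI.
        return db.delete("items", selection, selectionArgs);
    }
    // The remaining ContentProvider methods are stubbed out for brevity.
    @Override
    public Cursor query(Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) { return null; }
    @Override
    public Uri insert(Uri uri, ContentValues values) { return null; }
    @Override
    public int update(Uri uri, ContentValues values, String selection, String[] selectionArgs) { return 0; }
    @Override
    public String getType(Uri uri) { return null; }
}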