I have to develop an Android application for educational purposes.
This application uses a smartphone's hardware sensors, especially the accelerometer.
The application must be made up of several "modules" (which could be, for example, background services), which all need to access the same accelerometer data in real time, so that each module can use the data to make its own calculations.
For example, suppose I have an activity/a user interface from which I can choose which services I want to start.
For simplicity, suppose I have just two services, and I can choose to start just the first one, just the second one or both.
The first service reads data from the accelerometer and writes all of it to a text file;
the second service, instead, uses the accelerometer's data to calculate the magnitude of the acceleration and, if it is greater than a given threshold, writes the magnitude to another file; otherwise it does nothing.
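To be concrete, the calculation the second service performs on each sensor event is roughly this (just a sketch; THRESHOLD is a placeholder value I would tune):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Minimal sketch of what the second service does with each sensor event.
// THRESHOLD is just a placeholder value to be tuned.
class MagnitudeListener implements SensorEventListener {
    private static final double THRESHOLD = 12.0; // placeholder

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0];
        float y = event.values[1];
        float z = event.values[2];
        double magnitude = Math.sqrt(x * x + y * y + z * z);
        if (magnitude > THRESHOLD) {
            // append the magnitude to the second service's output file
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```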
How can I make the two services share the same data?
Should I put the data in a buffer or something like that? It is possible that I will have to collect accelerometer data for a long period, for example 8-12 hours at the fastest sampling rate.
I don't have any clue how to do this.
Could you help me please?
Many thanks
I've gone through the tutorials for the Java Sound API and I've successfully read off data from my microphone.
I would now like to go a step further and get data synchronously from multiple microphones in a microphone array (like a PS3 Eye or Respeaker).
I could get a TargetDataLine for each microphone and open/start/write the input to buffers, but I don't know how to do this in a way that will give me data I can then line up time-wise (I would eventually like to do beamforming).
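Roughly what I am doing per line today (a sketch; the format values are assumptions about my devices):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

// Roughly what I do per microphone today: each line is opened and read on its
// own, so the buffers from different lines have no common time reference.
// The format values here are assumptions about my devices.
class MicOpener {
    static TargetDataLine openMicLine() throws LineUnavailableException {
        AudioFormat format = new AudioFormat(16000f, 16, 1, true, false); // mono, 16-bit signed PCM
        TargetDataLine line = AudioSystem.getTargetDataLine(format);      // or from a specific Mixer
        line.open(format);
        line.start();
        return line; // caller then does line.read(buffer, 0, buffer.length) in a loop
    }
}
```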
When reading from something like ALSA I would get the bytes from the different microphones simultaneously, so I know that each byte from each microphone is from the same time instant. The Java Sound API, however, seems to have an abstraction that obscures this, because you are just dumping/writing data out of separate line buffers and processing it, and each line acts separately; you don't interact with the whole device/mic-array at once.
However, I've found someone who managed to do beamforming in Java with the Kinect 1.0, so I know it should be possible. The problem is that the secret sauce is inside a custom Mixer object inside a .jar that was pulled out of some other software, so I don't have any easy way to figure out how they pulled it off.
You will only be able to align data from multiple sources with the time-synchronous accuracy needed for beamforming if this is supported by the underlying hardware drivers.
If the underlying hardware provides you with multiple, synchronised data streams (e.g. recording in 2 channels, in stereo), then your array data will be time-synchronised.
If you are relying on the OS to simply provide you with two independent streams, then maybe you can rely on timestamping. Do you get the timestamp of the first element? If so, then you can re-align data by dropping samples based on your sample rate. There may be a final difference (delta-t) that you will have to factor in to your beam-forming algorithm.
Reading about the PS3 Eye (which has an array of microphones), you will be able to do this if the audio driver provides all the channels at once.
For Java, this probably means "Can you open the line with an AudioFormat that includes 4 channels?" If yes, then your buffers will contain multi-channel frames and the decoded frame data will (almost certainly) be time-aligned.
To quote the Java docs: "A frame contains the data for all channels at a particular time".
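A hedged sketch of that check, using Java Sound directly (the sample rate and bit depth below are assumptions, not necessarily what the PS3 Eye reports):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

// Sketch of the "does the driver expose all 4 channels on one line?" check.
// The sample rate and bit depth are assumptions, not the device's actual format.
class FourChannelCheck {
    static TargetDataLine openFourChannelLine() throws LineUnavailableException {
        AudioFormat fourChannel = new AudioFormat(16000f, 16, 4, true, false);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, fourChannel);
        if (!AudioSystem.isLineSupported(info)) {
            return null; // the driver does not expose the array as one multi-channel line
        }
        TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
        line.open(fourChannel);
        line.start();
        return line; // each frame now interleaves one sample per channel for the same instant
    }
}
```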
IDK what "beamforming" is, but if there is hardware that can provide synchronization, using that would obviously be the best solution.
Here, for what it is worth, is what should be a plausible algorithmic way to manage synchronization (a rough sketch follows the numbered steps).
(1) Set up a frame counter for each TargetDataLine. You will have to convert bytes to PCM as part of this process.
(2) Set up some code to monitor the volume level on each line, some sort of RMS algorithm I would assume, on the PCM data.
(3) Create a loud, instantaneous burst that reaches each microphone at the same time, one that the RMS algorithm is able to detect and to give the frame count for the onset.
(4) Adjust the frame counters as needed, and reference them going forward on each line of incoming data.
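For what it's worth, steps (1) and (2) might look roughly like this, assuming 16-bit little-endian mono PCM on each line (that format is an assumption):

```java
// Rough sketch of steps (1) and (2), assuming 16-bit little-endian mono PCM.
// Each TargetDataLine gets one of these; step (4) then amounts to subtracting
// the onset frames of the different lines from one another.
class LineClock {
    long frameCount = 0;  // step (1): frames consumed so far on this line
    long onsetFrame = -1; // frame index at which the burst was first heard

    // Feed every buffer read from the TargetDataLine through this method.
    void accept(byte[] buffer, int bytesRead, double rmsThreshold) {
        int frames = bytesRead / 2; // 2 bytes per 16-bit mono frame
        double sumSquares = 0;
        for (int f = 0; f < frames; f++) {
            int sample = (short) ((buffer[2 * f] & 0xFF) | (buffer[2 * f + 1] << 8));
            double norm = sample / 32768.0;
            sumSquares += norm * norm;
        }
        double rms = frames == 0 ? 0 : Math.sqrt(sumSquares / frames); // step (2)
        if (onsetFrame < 0 && rms > rmsThreshold) {
            onsetFrame = frameCount; // step (3): the burst landed in this buffer
        }
        frameCount += frames;
    }
}
```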
Rationale: Java doesn't offer real-time guarantees, as explained in this article on real-time, low latency audio processing. But in my experience, the correspondence between the byte data and time (per the sample rate) is very accurate on lines closest to where Java interfaces with external audio services.
How long would frame counting remain accurate without drifting? I have never done any tests to research this. But on a practical level, I have coded a fully satisfactory "audio event" scheduler based on frame-counting, for playing multipart scores via real-time synthesis (all done with Java), and the timing is impeccable for the longest compositions attempted (6-7 minutes in length).
My app builds a complex model based on socket input. Assume that the input arrives regularly at two different time intervals, an hourly interval and a daily interval. The data is treated exactly the same; I just want to build an "hourly" model and a "daily" model at the same time, in parallel.
The simplest solution would be to duplicate the code with two different socket endpoints, in order to send the hourly and daily data to different endpoints. Obviously this is not an elegant solution. So, is there an elegant way/design pattern/architecture that supports my purposes?
The requirements would be as following:
1. Use the same code base to build different models at the same time, based on the type of input
2. At the same time, process/access the data of the models in a central place to draw conclusions/combine the models
I thought about letting the application run in two different threads, e.g. one to build the hourly model and one to build the daily model. However, I don't want to share my variables between the threads. Currently my code stores the incoming data into a list, which is then further processed. So when the input is hourly and daily data, I don't want it to be mixed (otherwise I wouldn't get two separate models), but rather treated separately, without duplicating my code or doing a huge refactoring like making the code work for two inputs instead of one. Basically I want my code to be scalable.
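To make requirement 1 concrete, this is roughly the shape I am hoping for (all names here are made up, just a sketch):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the same builder code, instantiated once per input type,
// so hourly and daily data never share a list. All names are made up.
class ModelBuilder {
    private final List<double[]> buffer = new ArrayList<>(); // per-instance state, never shared

    synchronized void onData(double[] point) {
        buffer.add(point);
        // ... identical processing for every input type ...
    }

    synchronized int size() {
        return buffer.size(); // stand-in for "build the model from the buffered data"
    }
}

// The central place (requirement 2): one builder per input type, fed by whatever
// reads the socket and knows which interval the data belongs to.
class Dispatcher {
    final ModelBuilder hourly = new ModelBuilder();
    final ModelBuilder daily = new ModelBuilder();

    void route(String intervalTag, double[] point) {
        ("hourly".equals(intervalTag) ? hourly : daily).onData(point);
    }
}
```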
Android provides a default of 15 steps for its sound systems, which you can access through AudioManager. However, I would like to have finer control.
One method of doing so seems to be altering specific files within the Android system to divide the sound levels even further than the default. I would like to achieve the same effect programmatically using Java.
Fine Volume Control is an example of an app that is able to divide the sound levels into one hundred distinct intervals. How do I achieve this?
One way, in Java, to get very precise volume adjustment is to access the PCM data directly and multiply it by some factor, usually from 0 up to 1. Another is to try to access the line's volume control, if it has one. I've given up trying to do the latter: the precision is okay in terms of amplitude, but the timing is terrible, since you can only have one volume change per audio buffer read.
To access the PCM data directly, one has to iterate through the audio read buffer, translate the bytes into PCM, perform the multiplication, then translate back to bytes. But this gives you per-frame control, so very smooth and fast fades can be made.
EDIT: To do this in Java, first check out the sample code snippet at the start of this java tutorial link, in particular, the section with the comment
// Here, do something useful with the audio data that's now in the audioBytes array...
There are several StackOverflow questions that show code for the math to convert audio bytes to PCM and back, using Java. They should not be hard to uncover with a search.
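A rough sketch of that loop, assuming 16-bit little-endian signed PCM (check your AudioFormat before relying on this byte layout):

```java
// Rough sketch of per-sample volume scaling, assuming 16-bit little-endian
// signed PCM. factor is the volume, usually between 0.0 and 1.0; ramping it
// a little per frame gives the smooth fades mentioned above.
class PcmVolume {
    static void scale(byte[] audioBytes, int length, double factor) {
        for (int i = 0; i + 1 < length; i += 2) {
            // bytes -> PCM sample
            int sample = (short) ((audioBytes[i] & 0xFF) | (audioBytes[i + 1] << 8));
            // apply the factor and clamp to the 16-bit range
            sample = (int) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sample * factor));
            // PCM sample -> bytes
            audioBytes[i] = (byte) (sample & 0xFF);
            audioBytes[i + 1] = (byte) ((sample >> 8) & 0xFF);
        }
    }
}
```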
Pretty late to the party, but I'm currently trying to solve this issue as well. IF you are making your own media player app and are running an instance of a MediaPlayer, then you can use the method setVolume(leftScalar, rightScalar), where leftScalar and rightScalar are floats in the range 0.0 to 1.0, representing logarithmic-scale volume for each respective ear.
HOWEVER, this means that you must have a reference to the currently active MediaPlayer instance. If you are making a music app, no biggie. If you're trying to run a background service that gives users finer control over all media output, I'm not sure how to use this in that scenario.
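For example, to get roughly one hundred steps you could map an integer step onto the scalar before calling setVolume (a sketch; whether a linear or exponential curve sounds right is something to tune by ear):

```java
import android.media.MediaPlayer;

// Sketch: map 0..100 UI steps onto MediaPlayer's 0.0..1.0 scalar.
// The exponent is just a guess; adjust it until the steps sound even.
class VolumeSteps {
    static void setVolumeStep(MediaPlayer player, int step) {
        float scalar = (float) Math.pow(step / 100.0, 2.0);
        player.setVolume(scalar, scalar);
    }
}
```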
Hope this helps.
I am trying to implement a real-time updating framework, where changing input data automatically leads to recalculating all dependent results. So I need a kind of subscription mechanism, but a clever one, as I have to handle enormous amounts of data. I like to think about the mechanism as a "calculation tree" or directed graph, with the nodes representing the results, and the edges representing the functions.
Something similar must have been implemented in MS Excel, with the cells being the nodes, but Excel will not fulfill my needs as it is not able to handle large amounts of data, and is not flexible enough.
While in principle I want to be able to browse through the complete calculation tree (including all results in the complete depth of the tree), I know that this could mean storing several Terabytes of data. So I need to be able to forget or skip nodes if the computer runs out of memory, and then recalculate them as needed. And not to forget: while programming the (short!) functions, I don't want to be bothered with endless technical subscribe stuff (ideally this should be taken care of automatically in the framework).
Do you think it's doable, and if so, how would you attack it? Do you know of any component/library which one could use for this type of thing? I have thought about publish/subscribe mechanisms and message brokers, but fear they are going to slow down my calculations.
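To make it a bit more concrete, the node abstraction I have in mind is roughly this (just a sketch, all names invented):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Rough sketch of the node abstraction I picture (all names invented).
// A node caches its result, knows its dependents, and may drop the cached
// value under memory pressure; asking for the value recomputes it on demand.
class Node<T> {
    private final Supplier<T> compute;           // the (short!) user function
    private final List<Node<?>> dependents = new ArrayList<>();
    private T cached;                            // null = forgotten or never computed
    private boolean dirty = true;

    Node(Supplier<T> compute) { this.compute = compute; }

    void addDependent(Node<?> d) { dependents.add(d); }

    T get() {
        if (dirty || cached == null) {
            cached = compute.get();              // pulls values from upstream nodes
            dirty = false;
        }
        return cached;
    }

    void invalidate() {                          // called when an input changes
        if (dirty) return;                       // already propagated downstream
        dirty = true;
        for (Node<?> d : dependents) d.invalidate();
    }

    void forget() { cached = null; }             // free memory; get() will recompute
}
```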
Thx in advance for your responses!
Calle
I've got a problem in a game project. I am developing a bot for a video game. Every game tick, the game engine gives me information about the track, and I use that information to make decisions about the bot's strategy. I want to store that information for all these game ticks in a txt file. However, I noticed that when I store the data in txt files, my bot fails to make correct decisions; the bot's behaviour actually slows down. Is there an efficient way to store my data in RAM? My project is in Java.
If the bot needs the data to make its decisions, it's best to keep all that data in RAM.
If you need to save the data to disk for other reasons, you might want to consider only saving the data every minute, and not every game tick, as disk I/O tends to be slow.
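Something along these lines, for example (a sketch; the one-minute interval and the string format are arbitrary):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: keep tick data in RAM for the bot, and flush it to disk on a
// background thread once a minute so the game loop never waits on file I/O.
class TickLogger {
    private final List<String> pending = new ArrayList<>();
    private final ScheduledExecutorService writer = Executors.newSingleThreadScheduledExecutor();

    TickLogger(String fileName) {
        writer.scheduleAtFixedRate(() -> flush(fileName), 1, 1, TimeUnit.MINUTES);
    }

    // Called from the game loop every tick; just an in-memory append.
    synchronized void log(String tickInfo) {
        pending.add(tickInfo);
    }

    private void flush(String fileName) {
        List<String> toWrite;
        synchronized (this) {                     // swap the buffer out quickly
            toWrite = new ArrayList<>(pending);
            pending.clear();
        }
        try (FileWriter out = new FileWriter(fileName, true)) { // append mode
            for (String line : toWrite) out.write(line + System.lineSeparator());
        } catch (IOException e) {
            e.printStackTrace();                  // a logging failure should not kill the bot
        }
    }
}
```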
File writing is comparatively very slow, which is why your game slows down. What information exactly do you need to store? Defining a class (used statically if necessary, but preferably not) whose members represent the data you need is probably the way to go about it...