Locking resources for an indeterminate amount of time - java

My application contains a number of objects which contain getters and setters. These correspond to changing the state of physical objects (for example, a stepper motor).
Other threads may call methods on this object in order to do things to the stepper motor - this provides an interface between the stepper motor and the underlying hardware. So, for example, we may have a function that causes the motor to rotate by 15 degrees, or we may have a function that causes it to return to a neutral position.
Now, these objects are threadsafe, but that's not good enough. Consider the situation where one thread tries to rotate the motor by 90 degrees (by firing six calls to rotate by 15 degrees) and half way through, another resets the motor, meaning that it's only moved 45 degrees.
My design solution is to allow the controlling objects to take out locks on the controller, but I'm unsure how to manage this. It seems that most of the Java locking methods are designed to be atomic over a single method call, where I wish to have the objects locked for an indeterminate amount of time.
Would a simple Java lock be sufficient for this purpose, or does anyone know of something better? I'm worried by the fact that the standard ReentrantLock would seem to almost require the try-finally paradigm, meaning that I'd likely be bastardising it to a certain extent.
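For context, a minimal sketch of the kind of client-held lock being described, with a hypothetical MotorController and a shared ReentrantLock (none of these names come from the actual code):

import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: MotorController, rotate15() and the shared lock are illustrative only.
void rotateNinety(MotorController controller, ReentrantLock motorLock) {
    motorLock.lock();
    try {
        for (int i = 0; i < 6; i++) {
            controller.rotate15();   // six 15-degree steps = 90 degrees
        }
    } finally {
        motorLock.unlock();          // the lock is held across all six calls, then released
    }
}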

You could provide a method to submit several commands atomically. Assuming all your methods are synchronized, it could simply be:
public synchronized void submitAtomically(List<Command> commands) {
    for (Command c : commands) {
        submit(c);
    }
}

public synchronized void submit(Command c) {
    // rotate or reset or ...
}
If you don't want the methods to block other threads for too long, the simplest approach would be to use a typical producer/consumer pattern:
private final BlockingQueue<Command> queue = new LinkedBlockingQueue<>();

public synchronized void submit(Command c) throws InterruptedException {
    queue.put(c);
}

// somewhere else:
new Thread(new Runnable() {
    public void run() {
        try {
            while (true) {
                Command c = queue.take();
                c.execute();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // stop the worker when interrupted
        }
    }
}).start();
If a client submits 6 rotations via submitAtomically, it prevents other threads from inserting other commands in the middle of the 6 rotations. However, the submit operation is very fast (it does not actually execute the command), so it will not block the other threads for long.

In the database world, this is done by having transactions, which is a way of grouping small low-level operations into big high-level operations that are atomic. Going down that road would be quite painful.
I think you need to go back and decide what the fundamental atomic operations are to be.
Consider the situation where one thread tries to rotate the motor by 90 degrees (by firing six calls to rotate by 15 degrees) and half way through, another resets the motor, meaning that it's only moved 45 degrees.
It seems that you have decided that "rotate 15 degrees" is the only atomic operation, but that is evidently a poor fit for your application.
Do you also need "rotate 45 degrees" and "rotate 90 degrees" as atomic operations? Perhaps you need "rotate X degrees" as an atomic operation?
What is the purpose of rotating the motor to a particular position? If thread A rotates the motor to position X, and then thread B immediately rotates it to a different position, what has been achieved? Is some operation to be done (by thread A) once the motor is at position X? If so, you want the rotation and that operation together to be one atomic operation.
And why rotate the motor by a given amount before performing an operation? Do you in fact want the motor to be in a specific position (absolute, not relative to its previous position) when the operation is performed? In that case you do not want your operations to be given the amount to rotate by, but rather the required position. The atomic operation would be responsible for deciding how much to rotate the motor by.
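For example, a minimal sketch of making "move to an absolute position" the atomic operation; the class and member names here are assumptions, not the asker's actual code:

// Sketch only: MotorController, currentDegrees and rotate() are assumed names.
public class MotorController {
    private int currentDegrees = 0;

    // The whole move is one synchronized operation, so another thread cannot
    // reset or re-aim the motor halfway through it.
    public synchronized void moveTo(int targetDegrees) {
        int delta = targetDegrees - currentDegrees;
        rotate(delta);                    // however many 15-degree steps that takes
        currentDegrees = targetDegrees;
    }

    public synchronized void reset() {
        moveTo(0);
    }

    private void rotate(int degrees) {
        // talk to the stepper-motor hardware here
    }
}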

Concurrently accessing different members of the same object in Java

I am familiar with many of the mechanisms and idioms surrounding concurrency in Java. Where I am confused is with a simple concept: concurrent access of different members of the same object.
I have a set of variables which can be accessed by two threads, in this case concerning graphical information within a game engine. I need to be able to modify the position of an object in one thread and read it off in another. The standard approach to this problem is to write the following code:
private int xpos;
private final Object xposAccess = new Object(); // lock object guarding xpos

public int getXpos() {
    int result;
    synchronized (xposAccess) {
        result = xpos;
    }
    return result;
}

public void setXpos(int xpos) {
    synchronized (xposAccess) {
        this.xpos = xpos;
    }
}
However, I'm writing a real-time game engine, not a 20 questions application. I need things to work fast, especially when I access and modify them as often as I do the position of a graphical asset. I want to remove the synchronized overhead. Even better, I'd like to remove the function call overhead altogether.
private int xpos;    // written by one thread
private int bufxpos; // read by the other thread
...

public void finalize()
{
    bufxpos = xpos; // copy the live value into the buffered copy
    ...
}
Using locks, I can make the threads wait on each other, and then call finalize() while the object is neither being accessed nor modified. After this quick buffering step, both threads are free to act on the object, with one modifying/accessing xpos and one accessing bufxpos.
I have already had success using a similar method where the information was copied into a second object, and each thread acted on a separate object. However, both members are still part of the same object in the above code, and some funny things begin to happen when both my threads access the object concurrently, even when acting on different members. Unpredictable behaviour, phantom graphical objects, random errors in screen position, etc. To verify that this was indeed a concurrency issue, I ran the code for both threads in a single thread, where it executed flawlessly.
I want performance above all else, and I am considering buffering the critical data into separate objects. Are my errors caused by concurrent access of the same objects? Is there a better solution for concurrency?
EDIT: If you are doubting my valuation of performance, I should give you more context. My engine is written for Android, and I use it to draw hundreds or thousands of graphic assets. I have a single-threaded solution working, but I have seen a near doubling in performance since implementing the multi-threaded solution, despite the phantom concurrency issues and occasional uncaught exceptions.
EDIT: Thanks for the fantastic discussion about multi-threading performance. In the end, I was able to solve the problem by buffering the data while the worker threads were dormant, and then allowing them each their own set of data within the object to operate on.
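A rough sketch of that buffering approach (the class and field names are illustrative, not the asker's actual code):

// Sketch: the "live" fields are written by the update thread, the "buf" fields
// are read by the render thread; swapBuffers() runs only while both worker
// threads are dormant, so no locking is needed at that point.
public class Sprite {
    private int xpos, ypos;       // written by the game-logic thread
    private int bufXpos, bufYpos; // read by the render thread

    public void swapBuffers() {
        bufXpos = xpos;
        bufYpos = ypos;
    }

    public int renderX() { return bufXpos; }
    public int renderY() { return bufYpos; }
}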
If you are dealing with just individual primitives, the atomic classes such as AtomicInteger, which has operations like compareAndSet, are great. They are non-blocking, you can get a good deal of atomicity, and you can fall back to blocking locks when needed.
For atomically setting and accessing variables or objects, you can leverage non-blocking constructs, falling back to traditional locks when necessary.
However, the simplest step forward from where you are in your code is to use synchronized, but not on the implicit this object: use several different member objects, one per partition of members that need atomic access: synchronized(partition1) { /* ... */ }, synchronized(partition2) { /* ... */ }, etc., where you have members such as private final Object partition1 = new Object(); and private final Object partition2 = new Object();.
However, if the members cannot be partitioned, then each operation must acquire more than one lock. If so, use the Lock object linked earlier, but make sure that every operation acquires the locks it needs in some universal order, otherwise your code might deadlock.
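A rough sketch of that partitioning idea, with a fixed lock-acquisition order for operations that span both partitions (all names here are illustrative):

public class GameObject {
    // one lock object per partition of members
    private final Object positionLock = new Object(); // guards xpos, ypos
    private final Object velocityLock = new Object(); // guards dx, dy

    private int xpos, ypos;
    private int dx, dy;

    public void setPosition(int x, int y) {
        synchronized (positionLock) { xpos = x; ypos = y; }
    }

    public void setVelocity(int vx, int vy) {
        synchronized (velocityLock) { dx = vx; dy = vy; }
    }

    // An operation spanning both partitions takes the locks in one fixed,
    // universal order (position before velocity) to avoid deadlock.
    public void stepForward() {
        synchronized (positionLock) {
            synchronized (velocityLock) {
                xpos += dx;
                ypos += dy;
            }
        }
    }
}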
Update: Perhaps it is genuinely not possible to increase the performance if even volatile presents an unacceptable hit to performance. The fundamental underlying aspect, which you cannot work around, is that mutual exclusion necessarily implies a tradeoff with the substantial benefits of a memory hierarchy, i.e. caches. The fastest per-processor-core memory cache cannot hold variables that you are synchronizing. Processor registers are arguably the fastest "cache", and even if the processor is sophisticated enough to keep the closest caches consistent, it still precludes keeping values in registers. Hopefully this helps you see that it is a fundamental block to performance and there is no magic wand.
In the case of mobile platforms, the platform is deliberately designed against letting arbitrary apps run as fast as possible, because of battery life concerns. It is not a priority to let any one app exhaust the battery in a couple of hours.
Given the first factor, the best thing to do would be to redesign your app so that it doesn't need as much mutual exclusion -- consider tracking x-pos only loosely, except when two objects come close to each other, say within a 10x10 box. So you have locking on a coarse grid of 10x10 boxes, and as long as an object is within a box you track its position inconsistently. Not sure if that applies or makes sense for your app, but it is just an example to convey the spirit of an algorithm redesign rather than a search for a faster synchronization method.
I don't think that I get exactly what you mean, but generally
Is there a better solution for concurrency?
Yes, there is:
Prefer the Java Lock API over the intrinsic built-in lock.
Think of using the non-blocking constructs provided in the atomic API, such as AtomicInteger, for better performance; a minimal sketch follows below.
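For instance, keeping the x position in an AtomicInteger might look like this (names are illustrative, not from the question):

import java.util.concurrent.atomic.AtomicInteger;

public class Position {
    private final AtomicInteger xpos = new AtomicInteger();

    public int getXpos()       { return xpos.get(); } // non-blocking read
    public void setXpos(int v) { xpos.set(v); }       // non-blocking write

    // Atomic check-then-act: only moves if the position is still what we expect.
    public boolean moveIfAt(int expected, int next) {
        return xpos.compareAndSet(expected, next);
    }
}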
I think synchronization or any kind of locking can be avoided here by using an immutable object for inter-thread communication. Let's say the message to be sent looks like this:
public final class ImmutableMessage {
    private final int xPos;
    // ... other fields, adhering to the rules of immutability

    public ImmutableMessage(int xPos /* , other arguments */) { this.xPos = xPos; }

    public int getXPos() { return xPos; }
}
Then somewhere in the writer thread:
sharedObject.message = new ImmutableMessage(1);
The reader thread:
ImmutableMessage message = sharedObject.message;
int xPos = message.getXPos();
The shared object (public field for the sake of simplicity):
public class SharedObject {
    public volatile ImmutableMessage message;
}
I guess things change rapidly in a real-time game engine, which might end up creating a lot of ImmutableMessage objects, which in the end may degrade performance, but maybe that is balanced by the non-locking nature of this solution.
Finally, if you have one free hour for this topic, I think it's worth watching this video about the Java Memory Model by Angelika Langer.

How to use threads for collision detection simultaneously for different pairs of objects

I found many questions regarding collision detection and I have created an efficient enough method which will detect whether a given pair of objects collide or not. The thing is, when I increase the number of objects from 2 to 20, suddenly the algorithm stops working correctly. For example, if particle one hits particle ten, then particle ten in turn skips the other objects and collides directly with the wall.
The reason behind it is that when, say, particle one is actually colliding with particle ten, my algorithm is not checking for collision between them, but is checking for other pairs.
The solution, according to me, would be to run the collision detection method for each pair simultaneously. Now for that I need to pass Particle One and Particle Two to each thread, where One and Two are the objects for which collision is to be detected.
Here's the pseudo code:
private double isColliding(Particle One, Particle Two) {
    // Collision detection mechanism
    // Returns 0 if no collision
    // Otherwise returns a double between 0 and 1,
    // used to clip the velocity vector so that it stops right before collision
}
So, what I want to know is how to convert the above method to run on different threads for different pairs of objects.
Also, is there any other way this could be done?
Note: the above method doesn't change any values of particle One or Two, so it can be used asynchronously.
Here's the lightweight version of threading ^^
final double[] result = new double[1]; // a local must be (effectively) final to be written from the inner class

Thread t = new Thread() {
    public void run() {
        result[0] = isColliding(particleOne, particleTwo);
    }
};
t.start();
t.join(); // declared to throw InterruptedException, so handle or propagate it
The join lets the main thread wait for the thread t to finish.
Put this in a two-dimensional for loop to iterate over every particle pair, with a result array and a thread array to call join on at the end, and you've got what you wanted.
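Alternatively, a rough sketch using a thread pool instead of one thread per pair, assuming a particles list and that this lives in the same class as the isColliding method from the question:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch only: 'particles' and isColliding(...) are assumed to exist as in the question.
void checkAllPairs(List<Particle> particles) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());
    List<Future<Double>> results = new ArrayList<>();

    for (int i = 0; i < particles.size(); i++) {
        for (int j = i + 1; j < particles.size(); j++) {
            final Particle a = particles.get(i);
            final Particle b = particles.get(j);
            // isColliding only reads the particles, so the tasks can run in parallel
            Callable<Double> task = () -> isColliding(a, b);
            results.add(pool.submit(task));
        }
    }
    for (Future<Double> f : results) {
        double clip = f.get(); // blocks until that pair has been checked
    }
    pool.shutdown();
}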
BUT there is no point in trying to make something work in multiple threads if it's not running correctly in a single one ^^
So actually my answer to you is: you did something wrong. You should fix your collision detection function. If you want help, you should provide us with more code.

Python - Threading, Timing, or function use?

I am having an issue formulating an idea on how to work this problem. Please help.
My project consists of an N x N grid with a series of blocks that are supposed to move in a random direction and random velocity within this grid (every .1 seconds, the location of the block is updated with the velocity). I have three "special" blocks that are expected to have individual movement functions. I will have other blocks (many of them) doing nothing but updating their location, and making sure they remain in the grid.
Now these three blocks have functions beyond movement, but each of these runs individually, waiting for the other block's special function to finish (block 2 will wait on block 1, block 3 will wait on block 2 and hand it back to block 1, etc.). This queue of sorts will be running while the motion is happening. I want the motion to never stop. After each block's non-movement function runs n times, the code finishes.
My question is this: should I use threads to start and stop the non-movement functions, or is there a way to just set a timer and some booleans so that a class function runs every .1 seconds to continuously move the objects (looping over and over), and then use counts to end the program altogether? If so, how would you write the main function for this in Python? With all of this happening, does anyone think that Java would be significantly faster than Python at running this, especially when writing the data to a .txt file?
Your best bet is probably to handle all of them at once in a single update function rather than attempting to use Threads. This is primarily because the Global Interpreter Lock will prevent multiple threads from processing concurrently anyway. What you're after then is something like this:
def tick():
    for box in randomBoxes:
        box.relocate()

    specialBlock1.relocate()
    specialBlock2.relocate()
    specialBlock3.relocate()
Then we define a second function that will run our first function indefinitely:
from time import sleep

def worker():
    while True:
        tick()
        sleep(0.1)
Now that we have an interval of sorts, we'll launch a Thread that runs in the background and handles our display updates.
from threading import Thread
t = Thread(target = worker, name = "Grid Worker")
t.daemon = True # Useful when this thread is not the main thread.
t.start()
In our tick() function we've worked in the requirement that specialBlocks 1, 2, and 3 work in a set order. The other boxes each take their actions regardless of what the others do.
If you put the calls to the special functions together in a single function, you get the coordination between them for free.
def run(n, blocks):
    for i in range(n):
        for b in blocks:
            b.special()
As for the speed of Python versus Java, it depends on many things, such as the exact implementation. There is too little information to say.

How to rewind application state?

I'm developing a Java desktop flight simulation. I need to record all the pilot actions as they occur in the cockpit, such as throttle controls, steering, weapon deployment, etc. so that I can view these events at a later time (or stream them live).
I'd like to add a visual replay feature on playback of the events so I can visually see the cockpit as I move forward and backward in time. There's no problem with the replay as long as I play back the events in chronological order, but the rewind is a little trickier.
How would you implement the rewind feature?
I would use a modified Memento pattern.
The difference would be that I would have the Memento object store a list of all of the pilot actions.
The Memento pattern is typically used for rolling back (undo), however in your case I could see it applying as well. You would need to have the pilot actions be store-able states as well.
You could use a variant of the Command Pattern and have each one of your pilot actions implement an undo operation.
For example, if your pilot made the action "steer left" (simple, I know), the inverse of it would be "steer right".
public interface IPilotAction {
    void doAction(CockpitState state);
    void undoAction(CockpitState state);
}

public class ThrottleControl implements IPilotAction {
    private boolean increase;
    private int speedAmount;

    public ThrottleControl(boolean increase, int speedAmount) {
        this.increase = increase;
        this.speedAmount = speedAmount;
    }

    public void doAction(CockpitState state) {
        if (increase) {
            state.speed += speedAmount;
        } else {
            state.speed -= speedAmount;
        }
    }

    public void undoAction(CockpitState state) {
        if (increase) {
            state.speed -= speedAmount;
        } else {
            state.speed += speedAmount;
        }
    }
}
What you're looking for is actually a blend of the Command and Memento patterns. Every pilot action should be a command that you can log. Every logged command has, if req'd, a memento recording any additional state that (A) is not in the command, and (B) cannot reliably be reconstructed. The "B" is important, there's some of this state in pretty much any non-trivial domain. It needs to be stored to recover an accurate reconstruction.
If you merge these concepts, essentially attaching a memento to each command, you'll have a fully logged series of deterministic events.
I discussed this at more length in a different answer. Don't be afraid to substantially adapt the design patterns to your specific needs. :)
RE Performance Concerns:
If you expect jumping a number of minutes to be a frequent case, and after implementation you show that it's an unworkable performance bottleneck, I would suggest implementing an occasional "snapshot" along with the logging mechanism. Essentially save the entire application state once every few minutes to minimize the amount of log-rolling that you need to perform. You can then access the desired timeframe from the nearest saved state. This is analogous to key frames in animation and media.
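A rough sketch of that snapshot-plus-log idea, reusing CockpitState and IPilotAction from the answer above (the copy() method, the no-arg constructor and the snapshot interval are assumptions made for illustration):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class FlightRecorder {
    private static final int SNAPSHOT_EVERY = 1000; // actions between snapshots

    private final List<IPilotAction> log = new ArrayList<>();
    private final NavigableMap<Integer, CockpitState> snapshots = new TreeMap<>();

    public void record(IPilotAction action, CockpitState current) {
        log.add(action);
        if (log.size() % SNAPSHOT_EVERY == 0) {
            snapshots.put(log.size(), current.copy()); // assumes a deep copy
        }
    }

    // State after the first 'index' actions: start from the nearest earlier
    // snapshot and replay only the commands recorded since then.
    public CockpitState stateAt(int index) {
        Map.Entry<Integer, CockpitState> nearest = snapshots.floorEntry(index);
        CockpitState state = (nearest != null) ? nearest.getValue().copy() : new CockpitState();
        int start = (nearest != null) ? nearest.getKey() : 0;
        for (int i = start; i < index; i++) {
            log.get(i).doAction(state);
        }
        return state;
    }
}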
Not a direct answer, but check out the discussion of implementing undo. Mostly it will be about text editors, but the same principles should apply.
It helps if you prefer immutability. Undoing complex changes is difficult. Even automated systems have performance problems (Software Transactional Memory, STM).
Make sure that you've implemented the simulation in such a way that the simulation's "state" is a function. That is, a function of time.
Given an initial state at time T0, you should be able to construct the simulation frame at time Tn for any n. For example, an initial stationary state and no events (yet) might equal the identity function, so Tn == Tn+1.
Given some pilot action event at time Ta, you should be able to construct a frame Ta+n for any n. So you think of events as modifying a function that takes a time value as argument and returns the frame of the simulation for that time.
I would implement the history of events as a Zipper of (time, function) pairs representing the control state of the simulation. The "current" state would be in focus, with a list of future states on the right, and past states on the left. Like so:
([past], present, [future])
Every time the simulation state changes, record a new state function in the future. Running the simulation then becomes a matter of taking functions out of the future list and passing the current time into them. Running it backwards is exactly the same except that you take events out of the past list instead.
So if you're at time Tn and you want to rewind to time Tn-1, look into the past list for the latest state whose time attribute is less than n-1. Pass n-1 into its function attribute, and you have the state of simulation at time Tn-1.
I've implemented a Zipper datastructure in Java, here.
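For illustration, a very small Java sketch of the ([past], present, [future]) structure described above; all names are mine, and the real implementation linked above will differ:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Function;

// Sketch: each entry pairs a time with a function that builds the frame for a
// given time; F stands in for whatever frame type the simulation uses.
public class ReplayZipper<F> {
    public static final class TimedState<F> {
        final double time;
        final Function<Double, F> frameAt;
        public TimedState(double time, Function<Double, F> frameAt) {
            this.time = time;
            this.frameAt = frameAt;
        }
    }

    private final Deque<TimedState<F>> past = new ArrayDeque<>();
    private final Deque<TimedState<F>> future = new ArrayDeque<>();
    private TimedState<F> present;

    public ReplayZipper(TimedState<F> initial) {
        this.present = initial;
    }

    // Record a new state function; it becomes the next thing to play forward.
    public void append(TimedState<F> next) {
        future.addLast(next);
    }

    // Move forward while the next recorded state starts at or before time t.
    public F frameForward(double t) {
        while (!future.isEmpty() && future.peekFirst().time <= t) {
            past.push(present);
            present = future.removeFirst();
        }
        return present.frameAt.apply(t);
    }

    // Move backward until the state in focus starts at or before time t.
    public F frameBackward(double t) {
        while (!past.isEmpty() && present.time > t) {
            future.addFirst(present);
            present = past.pop();
        }
        return present.frameAt.apply(t);
    }
}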
You could just store the state at every instant: 1 KB of state (wind speed, object speeds + orientations, control input states) x 30 fps x 20 min ~ 36 MB. 1 KB of state would let you record about 16 objects (position / speed / angular speed / orientation / and 5 axes of control / effect).
That may be too much for you, but it will be the easiest to implement. There is no work to do at all to recreate state (instant access), and you can interpolate between states pretty easily (for faster / slower playback). For disk space you can just zip it, and that can be done while recording, so that memory is not being hogged while playing.
A quick way to save space would be to paginate the recording file and compress each bin separately, i.e. one zip stream for each minute. That way you would only have to decompress the current bin, saving a bunch of memory, but that depends on how well your state data zips.
Recording commands and having your classes implement multiple directions of playback would require a lot of debugging work. Slowing down / speeding up playback would also be more computationally intensive, and the only thing you save on is space.
If that's at a premium, there are other ways to save on it too.

Multi-threading: Objects being set to null while using them

I have a small app that has a Render thread. All this thread does is draw my objects at their current location.
I have some code like:
public void render()
{
    // ... rendering various objects
    if (mouseBall != null) mouseBall.draw();
}
Then I also have some mouse handler that creates and sets mouseBall to a new ball when the user clicks the mouse. The user can then drag the mouse around and the ball will follow where the mouse goes. When the user releases the ball I have another mouse event that sets mouseBall = null.
The problem is, my render loop is running fast enough that at random times the conditional (mouseBall != null) will return true, but in that split second after that point the user will let go of the mouse and I'll get a nullpointer exception for attempting .draw() on a null object.
What is the solution to a problem like this?
The problem lies in the fact that you are accessing mouseBall twice, once to check whether it is not null and another to call a function on it. You can avoid this problem by using a temporary like this:
public void render()
{
    // ... rendering various objects
    MouseBall tmpBall = mouseBall;       // read the shared field once into a local
    if (tmpBall != null) tmpBall.draw();
}
You have to synchronize the if and draw statements so that they are guaranteed to be run as one atomic sequence. In Java, this would be done like so:
public void render()
{
    // ... rendering various objects
    synchronized (this) {
        if (mouseBall != null) mouseBall.draw();
    }
}
I know you've already accepted other answers, but a third option would be to use the java.util.concurrent.atomic package's AtomicReference class. This provides retrieval, update and compare operations that act atomically without you needing any supporting code. So in your example:
public void render()
{
    // in practice the AtomicReference would be a field shared with the mouse handler
    AtomicReference<MouseBall> mouseBall = ...;
    // ... rendering various objects
    MouseBall tmpBall = mouseBall.get();
    if (tmpBall != null) tmpBall.draw();
}
This looks very similar to Greg's solution, and conceptually they are similar in that behind the scenes both use volatility to ensure freshness of values, and take a temporary copy in order to apply a conditional before using the value.
Consequently the exact example used here isn't that good for showing the power of AtomicReferences. Consider instead that your other thread will update the mouseBall variable only if it was already null - a useful idiom for various initialisation-style blocks of code. In this case, it would usually be essential to use synchronization, to ensure that if you checked and found the ball was null, it would still be null when you tried to set it (otherwise you're back in the realms of your original problem). However, with the AtomicReference you can simply say:
mouseBall.compareAndSet(null, possibleNewBall);
because this is an atomic operation, so if one thread "sees" the value as null it will also set it to the possibleNewBall reference before any other threads get a chance to read it.
Another nice idiom with atomic references is if you are unconditionally setting something but need to perform some kind of cleanup with the old value. In which case you can say:
MouseBall oldBall = mouseBall.getAndSet(newMouseBall);
// Cleanup code using oldBall
AtomicIntegers have these benefits and more; the getAndIncrement() method is wonderful for globally shared counters as you can guarantee each call to it will return a distinct value, regardless of the interleaving of threads. Thread safety with a minimum of fuss.
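For example, a minimal sketch of such a shared counter:

import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a globally shared counter where every call returns a distinct value,
// no matter how the threads interleave.
public class IdGenerator {
    private static final AtomicInteger NEXT = new AtomicInteger();

    public static int nextId() {
        return NEXT.getAndIncrement();
    }
}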
