I'm very new to Android programming, and trying to understand touch events with nested views. To start, here's a description of my app:
I have a RelativeLayout that I've added via the GUI editor; everything is default. I've also created a class called ClipGrid that extends ScrollView. Nested inside that, I create a HorizontalScrollView, and inside of that, a TableLayout and its rows. The rows contain buttons.
The end result is a grid of buttons. It displays 4x4 at once, but can scroll either direction to display other buttons.
I call it to the screen from my main activity like this:
ClipGrid clip_grid = new ClipGrid(this);
setContentView(clip_grid);
I did that just for testing purposes, and I think I will have to change it later when I want to add other views to my RelativeLayout, but I think it might have implications for touch events.
In the end, I want to detect when the grid has been moved and snap the newly visible 4x4 grid of buttons to the edge of my layout when the user lifts their finger. I'm just not sure how to go about implementing this, and any help would be appreciated. Thanks.
Touch events are handled in a cascading fashion: they start at the top-level view and are passed down to the nested child views. Essentially, Android offers the event to each view until one of them returns true to say it handled it.
The general way you could implement the onTouchEvent method of a View would be:
@Override
public boolean onTouchEvent(MotionEvent event) {
    boolean actionHandled = false;
    final int action = event.getActionMasked();
    switch (action) {
        case MotionEvent.ACTION_DOWN:
            // user first touches the view with a pointer
            break;
        case MotionEvent.ACTION_MOVE:
            // user is still touching the view and moving the pointer around
            break;
        case MotionEvent.ACTION_UP:
            // user lifts the pointer
            break;
    }
    // if this view did not handle the action, let the superclass try
    if (!actionHandled) {
        actionHandled = super.onTouchEvent(event);
    }
    return actionHandled;
}
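For the snapping the question asks about, one possibility is to watch for ACTION_UP in the view that scrolls horizontally and smooth-scroll to the nearest 4x4 page. This is only a rough sketch under my own assumptions: the class name, the pageWidth field, and the idea that the horizontal scrolling happens in a HorizontalScrollView subclass are illustrative, not taken from the question's code.
import android.content.Context;
import android.view.MotionEvent;
import android.widget.HorizontalScrollView;

public class SnappingHorizontalScrollView extends HorizontalScrollView {
    private int pageWidth = 1; // assumed: width of one 4x4 page of buttons, set once known

    public SnappingHorizontalScrollView(Context context) {
        super(context);
    }

    public void setPageWidth(int pageWidth) {
        this.pageWidth = pageWidth;
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        boolean handled = super.onTouchEvent(event);
        if (event.getActionMasked() == MotionEvent.ACTION_UP) {
            // round the current offset to the nearest page and glide there
            int target = Math.round((float) getScrollX() / pageWidth) * pageWidth;
            smoothScrollTo(target, getScrollY());
        }
        return handled;
    }
}
Note that the HorizontalScrollView's own fling handling can compete with the snap, so treat this as a starting point rather than a finished solution.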
I have a RecyclerView where I want to enable Swipe to Delete/Star on items. My item is a FrameLayout where a MaterialCardView is on the top and the revealed star and archive layouts are under it.
I already made the swipe behavior work using an onTouch method applied to the CardView only, but it makes it far too hard to scroll through the list or to use onClick or onLongClick, because onTouch overrides them. The only way to scroll the RecyclerView or invoke onClick or onLongClick is to move only along the Y axis without moving even half a pixel in X, since any X movement triggers an ACTION_MOVE event that redirects all subsequent touch events to the CardView (requestDisallowInterceptTouchEvent() is the first statement in the ACTION_MOVE case of the switch).
So I want to apply ItemTouchHelper or something similar to the CardView, while keeping the ability to modify how the card's X position changes (to make it move slower than the user's swipe, like the irremovable notifications in Android) and to receive the MotionEvents the user generates. ItemTouchHelper is not overly eager about what it treats as a swipe, so it would still allow onClick and onLongClick on small movements, and allow scrolling the list when the movement along the Y axis is much greater than along X.
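Roughly, I imagine something like the following sketch (this isn't working code I have; the 0.5f damping factor and the R.id.card_view id are just placeholders for illustration):
ItemTouchHelper.SimpleCallback callback = new ItemTouchHelper.SimpleCallback(
        0, ItemTouchHelper.LEFT | ItemTouchHelper.RIGHT) {
    @Override
    public boolean onMove(RecyclerView rv, RecyclerView.ViewHolder vh,
                          RecyclerView.ViewHolder target) {
        return false; // no drag-and-drop
    }

    @Override
    public void onSwiped(RecyclerView.ViewHolder vh, int direction) {
        // star or archive the item here, depending on the direction
    }

    @Override
    public void onChildDraw(Canvas c, RecyclerView rv, RecyclerView.ViewHolder vh,
                            float dX, float dY, int actionState, boolean isCurrentlyActive) {
        if (actionState == ItemTouchHelper.ACTION_STATE_SWIPE) {
            // translate only the foreground card, at half the finger speed
            View card = vh.itemView.findViewById(R.id.card_view); // placeholder id
            card.setTranslationX(dX * 0.5f);
        } else {
            super.onChildDraw(c, rv, vh, dX, dY, actionState, isCurrentlyActive);
        }
    }
};
new ItemTouchHelper(callback).attachToRecyclerView(recyclerView);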
Please don't close this question saying "Too broad" like a lot of other questions I had a chance to answer :(
I'm trying to make drop targets visible (or add them) as soon as the user starts a drag.
The documentation explains that this should be handled in onDrag when ACTION_DRAG_STARTED is received, which could then be used to say highlight the View as being able to accept the drag.
However, my view (which is actually a LinearLayout) should look different when no drag is going on, and show drop targets when a drag is initiated.
Normal look:
[Item A][Item B]
When drag starts it should look like:
[ ][Item A][ ][Item B][ ]
Where the empty brackets represent locations where the drag can be dropped.
I've tried the following things to achieve this:
1) Dynamically add views
When top-level container receives ACTION_DRAG_STARTED, dynamically add the drop target views. Problem: the newly added views never receive ACTION_DRAG_STARTED themselves (or any other events) and so they cannot accept the drop.
2) Have hidden drop targets
Always have View.GONE drop targets in between the real items, and just make them View.VISIBLE when the drag starts:
if (event.getAction() == DragEvent.ACTION_DRAG_STARTED) {
    // Make all containers visible:
    for (int i = 0; i < cc.getChildCount(); i++) {
        cc.getChildAt(i).setVisibility(View.VISIBLE);
    }
}
Problem: apparently View.GONE also means the view does not receive events. Same thing with View.INVISIBLE.
So, what are my options? Keeping the drop targets View.VISIBLE all the time and doing some dynamic resizing when the drag starts/ends? That seems really silly...
Any better suggestions?
You need to change the visibility just before calling startDrag() or startDragAndDrop(). I demonstrate that in this sample app (from this chapter of this book, FWIW).
In that sample, if you run it on a tablet, I will initiate a drag-and-drop operation on a long-click of an item in a RecyclerView:
@Override
public boolean onLongClick(View v) {
    if (listener != null) {
        listener.onStartDrag();
    }
    ClipData clip = ClipData.newRawUri(title.getText(), videoUri);
    View.DragShadowBuilder shadow = new View.DragShadowBuilder(thumbnail);
    itemView.startDrag(clip, shadow, Boolean.TRUE, 0);
    return (true);
}
But before I call startDrag(), I let a registered listener know that I am about to start the drag. That listener is the hosting activity, which makes my "hotspot" drop target visible:
@Override
public void onStartDrag() {
    info.setVisibility(View.VISIBLE);
}
The net effect is akin to a home screen, where specific "actions" appear (e.g., uninstall) when you start the drag.
I tried your second approach initially, and it didn't work. My assumption is that calling startDrag() or startDragAndDrop() basically captures the roster of visible drop targets, and so changes to that roster (new widgets, newly-visible widgets) have no effect after this point.
I'm working in SWT (no JFace), and I'm attempting to customize the behavior of a Tree that I'm using as a sidebar.
The top-level items in the tree shouldn't be selectable; they're basically headers. Only the children of these items should be selectable. As a result, I would like the UI to behave in a way that indicates this: clicking on one of these top-level items should expand or collapse it, but shouldn't provide any visual feedback beyond the indicator changing its state and the children's visibility changing.
The problem seems to be that on OS X, the expand/collapse indicator is a triangle (I don't know if it's an image or a Unicode character) that points either right or down, but is also colored based on the "selection" state. I've managed to override all of the relevant behavior for the top-level items except for the arrow changing color.
I've used an SWT.EraseItem listener to hook in to the background drawing so that the background doesn't change color. This works as expected.
I've used an SWT.PaintItem listener to make sure that the text in the top-level item doesn't change color. This works as expected, but doesn't seem to have any influence over the indicator; I've even tried not resetting the GC's foreground color, but the color of the indicator still changes:
(Screenshots: the indicator when the item is not selected vs. selected.)
Some of the things that I've attempted to do, all of which have failed:
Just drawing a rectangle on top of the indicator. The indicator seems to always be on top, no matter what I attach the PaintListener to.
Using a Listener on the Selection event type, checking to see if the event.item is one of the top-level items, and then fiddling with the event bits and redraw flags. This doesn't seem to work well at all; the behavior is completely unpredictable. The code looks something like this:
sideBar.addListener(SWT.Selection, new Listener()
{
    @Override
    public void handleEvent(Event event)
    {
        TreeItem eventItem = (TreeItem) event.item;
        if (sideBarRoots.contains(eventItem))
        {
            event.detail = 0;
            event.type = 0;
            event.doit = false;
            sideBar.setRedraw(false);
        }
        else
        {
            sideBar.setRedraw(true);
            event.type = SWT.Selection;
            event.doit = true;
        }
    }
});
Since the redraw flag is just a hint, sometimes it gets set properly and others it doesn't. It can have either no effect, or can get locked in to a state where nothing redraws in the sidebar.
I'm aware that a lot of this behavior is highly coupled to the behavior of the underlying native widgets, but I was wondering if there is any way to get the behavior I'm looking for without rolling a custom widget from scratch?
I think in a situation like yours you should ideally not allow selection on a tree header at all.
You could cancel the selection event so that the header is effectively not clickable, but that tends to be a kludgy implementation, especially for keyboard navigation.
The safer approach, if possible with your data, would be to move the selection whenever a parent node is selected: when a parent node is selected, whether by keyboard or mouse, expand it and move the selection to its first child.
This way you can avoid all the paint trickery to hide the selection state.
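A minimal sketch of that idea, assuming (as in the question's code) that the root items are kept in a collection called sideBarRoots:
sideBar.addListener(SWT.Selection, new Listener()
{
    @Override
    public void handleEvent(Event event)
    {
        TreeItem item = (TreeItem) event.item;
        if (sideBarRoots.contains(item))
        {
            item.setExpanded(true);
            if (item.getItemCount() > 0)
            {
                // move the selection down to the first child instead of the header
                sideBar.setSelection(item.getItem(0));
            }
        }
    }
});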
Imagine a layout with 4 buttons
_______________________________
| | |
| A | B |
|______________|________________|
| | |
| C | D |
|______________|________________|
I'd like to detect the fling gesture over the whole layout, but when the fling starts over a button it is not detected.
I'm using:
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
    gesturedetector = new GestureDetector(this, this);
    findViewById(R.id.touchContainer).setOnTouchListener(new OnTouchListener() {
        @Override
        public boolean onTouch(View v, MotionEvent event) {
            Log.e("", "TouchEvent");
            return gesturedetector.onTouchEvent(event);
        }
    });
}
It works when there are no clickable items, but fails if the fling starts over a clickable item.
How can I solve that? Offering a bounty of 50 points for a complete working answer.
One way I have achieved this is to override the following method:
public boolean onInterceptTouchEvent(MotionEvent event){
super.onInterceptTouchEvent(event);
...
You can override this method in your layout container (e.g. the ViewGroup that holds the buttons) and keep returning false from it, so that the child Views (i.e. your buttons) still receive and consume the touch events while your override still gets to see them. Within that overridden method you can then pass the MotionEvents to your gesture detector object. This method also 'sees' events that target the ViewGroup itself, which means, if I remember correctly, you would only need to call your gesture detector from within that method; the detector will 'see' all events, no matter whether they're over the buttons or not. So if you drag your finger starting over a button and ending at some point on the layout background, the gesture detector should see the entire swipe. You would not need to feed the gesture detector with the events from the layout's own onTouchEvent(), because it will have already seen them.
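To illustrate, here is a minimal sketch of that first approach; the class name FlingInterceptingLayout, the setGestureDetector() method, and the choice of RelativeLayout as the base class are all assumptions for the example, not part of the original answer:
import android.content.Context;
import android.util.AttributeSet;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.widget.RelativeLayout;

public class FlingInterceptingLayout extends RelativeLayout {
    private GestureDetector gestureDetector;

    public FlingInterceptingLayout(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public void setGestureDetector(GestureDetector detector) {
        this.gestureDetector = detector;
    }

    @Override
    public boolean onInterceptTouchEvent(MotionEvent event) {
        super.onInterceptTouchEvent(event);
        if (gestureDetector != null) {
            // let the detector see every event, including those the buttons will consume
            gestureDetector.onTouchEvent(event);
        }
        return false; // never steal the events, so the buttons keep working
    }
}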
A second way:
I just looked at the project where I used this and realised that I switched to a different way of doing it. What I actually did was design all of my child Views such that the parent Activity (or the containing ViewGroup) could register the same gesture detector object with all of those child Views (each of my special Views has a method called registerGestureDetector()). Then, in the overridden onTouchEvent() of my child Views, I pass the MotionEvents to the gesture detector that has been registered with that View. In other words, the parent ViewGroup layout and all the child Views simply share the same gesture detector.
I realise that this may sound like a bit of a hassle and not necessary, considering it could be done using onInterceptTouchEvent(), but my application deals with some pretty complicated rules regarding how my Views need to respond to touch events and gestures, and this approach allowed me to apply some additional logic specific to my application. However, both of these methods achieve the same basic objective: to channel the MotionEvents that targeted various Views into the same gesture detector object.
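A rough sketch of what one of those child Views might look like; the class name GestureAwareButton is made up here, and registerGestureDetector() is simply a setter you would call from the parent with the shared detector:
import android.content.Context;
import android.util.AttributeSet;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.widget.Button;

public class GestureAwareButton extends Button {
    private GestureDetector sharedDetector;

    public GestureAwareButton(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    // the parent registers the same detector instance with every child
    public void registerGestureDetector(GestureDetector detector) {
        this.sharedDetector = detector;
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (sharedDetector != null) {
            // forward the event so the shared detector sees the whole gesture
            sharedDetector.onTouchEvent(event);
        }
        return super.onTouchEvent(event);
    }
}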
I have a Java class that uses mouse listeners, which I want to convert over to my Android app, but I can't quite find the equivalent events.
My Java app makes use of the following methods:
mouseClicked
mousePressed
mouseReleased
I want to do something similar, but with touch events rather than click events. I have come across OnTouchListener and overridden the onTouch method.
What are the alternatives to mousePressed and mouseReleased?
Edit - (updated after Peter's response)
Are the following events correct:
ACTION_DOWN : mouseClicked
ACTION_MOVE : mousePressed
ACTION_UP : mouseReleased
EDIT - 2 Example Source
My Activity doesn't have any OnTouchListener at the moment because I was hoping I could keep all the touch logic in my View.
View:
/* Inside my View - Is it proper to do onTouch logic here?
   Or should I be doing this from the Activity? */
public class myView {
    public boolean onTouch(MotionEvent event) {
        switch (event.getAction() & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_DOWN:
                // draw arrow when screen is simply touched
                break;
            case MotionEvent.ACTION_MOVE:
                // Do Logic
                break;
            case MotionEvent.ACTION_UP:
                // Do Logic
                break;
        }
        return true; // report the event as handled
    }
}
The reason I am doing the logic in my View is that I have some variables I would like to access directly rather than creating multiple extra get methods.
Am I able to do it like this? Or will I have to override the onTouch method in my Activity and do the logic there?
All touch events (down, up, move, multitouch) are handled via 'onTouch'. See this tutorial: http://www.zdnet.com/blog/burnette/how-to-use-multi-touch-in-android-2-part-3-understanding-touch-events/1775
If you want to register clicks on a View, implement and add an OnClickListener to it.
When you want to register touch events, you need to implement OnTouchListener and add it to that View.
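As a rough illustration of the mapping (a sketch only; myView stands for whatever View you attach it to): mousePressed corresponds to ACTION_DOWN, mouseReleased to ACTION_UP, a "click" is roughly a DOWN followed shortly by an UP without much movement, and ACTION_MOVE has no direct MouseListener equivalent (it is closer to mouseDragged).
myView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
                // roughly mousePressed
                return true;
            case MotionEvent.ACTION_MOVE:
                // pointer dragged while down; closest to mouseDragged
                return true;
            case MotionEvent.ACTION_UP:
                // roughly mouseReleased; DOWN followed by UP is effectively a click
                return true;
        }
        return false;
    }
});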