In my SWT-based application, I have a Canvas-derived custom widget that displays a bunch of "items". The whole purpose of these items is for the user to drag them out of the widget. I had no trouble implementing a DragSource, DragDetectListener and all that stuff to make DND work. The problem I am trying to solve is that I want the drag to be detected much earlier, i.e. after a much shorter mouse drag distance, than the default platform behavior.
I know I can override dragDetect() of the Widget class. However, this only allows me to veto the superclass implementation, not to signal that a drag has already happened before the superclass would think it has.
Basically, if I could generate the drag event myself, i.e. if I could just call Widget.postEvent(SWT.DragDetect, eventWhichIAllocatedAndFilledOut) (which is package-private), that would seem like my solution. I've looked at the code for drag detection in Widget, and it doesn't seem to be designed for this use case. Is there a workaround that lets me initiate drags anytime I want?
I have figured it out. It is possible to generate a custom event and distribute it to the DragDetect listener mechanism. The code below does the same as the internal implementation, but can be called at will from within the Widget implementation, for example from a MouseMoveListener's mouseMove(MouseEvent e) hook:
Event event = new Event();
event.type = SWT.DragDetect;
event.display = getDisplay();
event.widget = this;
event.button = e.button;
event.stateMask = e.stateMask;
event.time = e.time;
event.x = e.x;
event.y = e.y;
notifyListeners(SWT.DragDetect, event);
It is noteworthy that the built-in drag detection has to be disabled for this to work as intended. The default implementation is exposed via the dragDetect(MouseEvent e) method, which can be called from a mouseDown() handler (as explained in the documentation for dragDetect()). It works by busy-looping in the event thread until a drag is detected; on the GTK backend at least, it simply consumes mouse-move events from the native event queue. Registering a DragDetectListener with the Widget enables this behavior automatically, so unless you disable the mechanism via setDragDetect(false), a custom drag detection would only run after the built-in detection, which imposes the delay (because it blocks the event thread), besides detecting the drag a second time, of course.
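Below is a minimal sketch of how this can be wired up inside the widget; the ItemCanvas name, the 4-pixel threshold and the dragStart bookkeeping are just illustrative choices, not anything mandated by SWT:

import org.eclipse.swt.SWT;
import org.eclipse.swt.events.MouseAdapter;
import org.eclipse.swt.events.MouseEvent;
import org.eclipse.swt.events.MouseMoveListener;
import org.eclipse.swt.graphics.Point;
import org.eclipse.swt.widgets.Canvas;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Event;

// Sketch: disable the built-in detection and fire SWT.DragDetect from mouseMove.
public class ItemCanvas extends Canvas {

    private static final int THRESHOLD = 4; // illustrative, much smaller than the platform default
    private Point dragStart;                // where the mouse went down, or null if no button is pressed

    public ItemCanvas(Composite parent, int style) {
        super(parent, style);
        setDragDetect(false); // turn off the built-in, blocking drag detection

        addMouseListener(new MouseAdapter() {
            @Override
            public void mouseDown(MouseEvent e) {
                dragStart = new Point(e.x, e.y);
            }

            @Override
            public void mouseUp(MouseEvent e) {
                dragStart = null;
            }
        });

        addMouseMoveListener(new MouseMoveListener() {
            @Override
            public void mouseMove(MouseEvent e) {
                if (dragStart == null) {
                    return;
                }
                if (Math.abs(e.x - dragStart.x) > THRESHOLD || Math.abs(e.y - dragStart.y) > THRESHOLD) {
                    dragStart = null; // report the drag only once per gesture
                    Event event = new Event();
                    event.type = SWT.DragDetect;
                    event.display = getDisplay();
                    event.widget = ItemCanvas.this;
                    event.button = e.button;
                    event.stateMask = e.stateMask;
                    event.time = e.time;
                    event.x = e.x;
                    event.y = e.y;
                    notifyListeners(SWT.DragDetect, event);
                }
            }
        });
    }
}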
Related
In a fairly complicated codebase, there is functionality both to
(a) drag objects to new places, implemented with DragGestureRecognizer, and
(b) drag boxes around things to select them, implemented via e.g. mousePressed() listeners.
Normally they behave "correctly".
However there are some objects which are marked immovable, and when the user begins a mouse gesture on top of one of them, the DragGestureRecognizer is finding it, and apparently consuming the mouse event.
What I'd like to be able to do is e.g. add something to my DragGestureRecognizer to say "oh look, we found an immovable object, let's not have this be a drag after all", and allow the dragging-a-box-around-things to take control.
I realize I'm not providing code because there is way too much to provide, but short of disabling the DragGestureRecognizer entirely (bad), I haven't found a way to get it to (selectively) turn loose of mouse events. Any help much appreciated!
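To make the idea concrete, here is a rough sketch of the kind of early-out I have in mind in the gesture listener (findObjectAt(), isImmovable() and buildTransferable() stand in for application code, and whether the underlying mouse events would then reach the box-selection code presumably depends on how the recognizer was installed):

// Sketch only: bail out of the drag gesture when the object under the cursor is immovable.
DragGestureListener gestureListener = new DragGestureListener() {
    @Override
    public void dragGestureRecognized(DragGestureEvent dge) {
        Object target = findObjectAt(dge.getDragOrigin()); // hypothetical lookup
        if (target == null || isImmovable(target)) {
            return; // do not start a drag; ideally the selection machinery takes over
        }
        dge.startDrag(DragSource.DefaultMoveDrop, buildTransferable(target));
    }
};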
I have been learning OOP for about 2 to 3 years, but I am still really confused about interfaces.
In Java for example there is an interface called View.OnClickListener.
Now I know the class which implements this interface has to provide bodies for its abstract methods, but what I do not get is that the API documentation says the onClick method is called when a view is clicked.
So who calls this method? Is there some other class which implements the interface and then somehow calls the method when the view is clicked?
Link (https://developer.android.com/reference/android/view/View.OnClickListener)
You are probably a lot less confused about interfaces than you think.
Basically, GUI frameworks like Android's view framework, classic Java Swing, JavaFX, Eclipse SWT and so on are all big and complicated1. But they all work by turning the application "upside down". Instead of your application code controlling everything, the framework is in control. Your application registers a bunch of listeners or "callbacks". Then it hands over control to the framework, and waits to be called back to do something.
The GUI framework will have a master "event thread" that reads the stream of low-level events coming from the device (clicks and movement from the mouse, or touch screen events, key presses and releases on the keyboard, etcetera). For each one, it maps the low-level event to the appropriate part of your application, and then calls your application to say something has happened.
Suppose that an Android user taps her finger on the screen. The framework will translate the touch screen event's physical screen position to the virtual screen location of some "component" of your app. If that component is a button, and you had (previously) registered an OnClickListener for that button, the framework will find the button and call the listener's onClick() method to tell your application "this button has just been clicked".
It is all rather simple and elegant ... and good sound OOP. But the relationship between your code and the framework is upside down to what you would expect.
1 - and too complicated for most people to fully understand. Though in fact, you don't need to fully understand them to use them.
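To make the listener registration described above concrete, here is roughly what it looks like in an Android Activity (MainActivity, the layout and R.id.my_button are placeholders):

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main); // placeholder layout

        Button button = (Button) findViewById(R.id.my_button); // placeholder id
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // The framework calls this when the user taps the button.
                // Your code never calls onClick() itself; it only registers the listener.
            }
        });
    }
}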
Those methods are called by the Android framework itself. This is one of the useful things the framework does on your behalf, so that you don't need to interpret raw user finger input and figure out what constitutes a "click"!
From https://developer.android.com/guide/topics/ui/ui-events (my emphasis):
Within the various View classes that you'll use to compose your layout, you may notice several public callback methods that look useful for UI events. These methods are called by the Android framework when the respective action occurs on that object.
For onClick specifically, that same page says:
This is called when the user either touches the item (when in touch mode), or focuses upon the item with the navigation-keys or trackball and presses the suitable "enter" key or presses down on the trackball.
I've had this problem repeatedly over the years with AWT and Swing based interfaces: some mouse click events do not trigger mouseClicked in a MouseListener because the mouse moved at least one pixel during the click (which happens on most clicks with some mice). The click is interpreted as a drag operation instead.
Is there a way to tell AWT/Swing to be a bit more permissive in its definition of a click?
Manually implementing a workaround is quite cumbersome if you want a full solution (for instance, if your components must also handle drag operations):
Adding a unique MouseListener and MouseMotionListener to every component
Doing some sort of calculation of a click in mouseMoved, mouseDragged and mouseReleased (which requires measuring time elapsed, distance traveled...)
Forwarding the "fixed" mouse events to a new mouse input interface implemented by the components
There's got to be a better way! I'm hoping for a global setting but I just can't find one...
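For reference, the per-component calculation I mean looks roughly like this (component, CLICK_SLOP and handleClick() are placeholders, and it would still need to be repeated or generalized for every component):

component.addMouseListener(new MouseAdapter() {
    private static final int CLICK_SLOP = 5; // pixels; arbitrary tolerance
    private Point pressed;

    @Override
    public void mousePressed(MouseEvent e) {
        pressed = e.getPoint();
    }

    @Override
    public void mouseReleased(MouseEvent e) {
        // Treat press/release pairs that stayed within a few pixels as clicks,
        // even if AWT classified the gesture as a drag and skipped mouseClicked.
        if (pressed != null && pressed.distance(e.getPoint()) <= CLICK_SLOP) {
            handleClick(e); // placeholder for the real click handling
        }
        pressed = null;
    }
});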
Related posts
https://stackoverflow.com/questions/20493509/drag-threshold-for-some-components-too-low
Making a component less sensitive to Dragging in Swing
Try setting the drag-and-drop sensitivity using:
System.setProperty("awt.dnd.drag.threshold", "5");
Note: this is only honored by List, Table, Text and Tree components!
See the usages of the javax.swing.plaf.basic.DragRecognitionSupport#mouseDragged() method.
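The property has to be set before the drag machinery first reads it; here is a minimal sketch (the value 10 is only an example, and DragSource.getDragThreshold() reports what is actually in effect):

import java.awt.dnd.DragSource;

public class DragThresholdDemo {
    public static void main(String[] args) {
        // Set this before any drag gesture recognizers read it, e.g. as the first
        // statement in main(), or pass -Dawt.dnd.drag.threshold=10 on the command line.
        System.setProperty("awt.dnd.drag.threshold", "10");

        // Reports the property value when it is a positive integer, otherwise a platform fallback.
        System.out.println("Effective drag threshold: " + DragSource.getDragThreshold());

        // ... build and show the Swing UI here ...
    }
}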
GXT 3.x only.
It is becoming apparent to me that Sencha has deliberately designed FileUploadField to shunt key-press events away so they are never detected.
I tried to intercept onBrowserEvent(Event) and could not detect any of the key-press events I generated by pressing keys while the FileUploadField component had focus.
Where is the key-press event shunt?
I could not find any keypress handler insertion methods.
I wish to allow triggering the file upload by pressing either the space bar or the Enter key.
Short of rewriting a whole new component from scratch, could someone advise me what I could do to achieve my goal of a keyboard activated file upload?
onBrowserEvent won't receive any events unless you sink them - did you make sure to call sinkEvents? How are you adding handlers? If you use addDomHandler, it will sink them for you, but addHandler either assumes that they are not DOM events, or that you have already called sinkEvents. Without sinking an event, the browser doesn't know to pass that event on to a GWT widget. If all events were sunk automatically, then every time you moved the mouse across the page you would see a firestorm of events as mousemove fired for every widget you passed, and all of its parents.
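As a rough sketch of the addDomHandler route (not verified against FileUploadField specifically; triggerUpload() is a placeholder for whatever opens the file dialog or submits the form in your code):

final FileUploadField field = new FileUploadField();

// addDomHandler sinks the key event for you, unlike addHandler.
field.addDomHandler(new KeyDownHandler() {
    @Override
    public void onKeyDown(KeyDownEvent event) {
        int key = event.getNativeKeyCode();
        if (key == KeyCodes.KEY_ENTER || key == 32 /* space bar */) {
            triggerUpload(field); // placeholder for your upload-triggering logic
        }
    }
}, KeyDownEvent.getType());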
If you override onBrowserEvent, then you are building the method that describes how to handle the actual event that comes from the browser - that is where the com.google.gwt.user.client.DOM class wires into the Widget to give it events. Short of making that method final, there is no way to prevent you, the widget user, from getting those events as long as the browser is generating them and passing them through the event listener.
Even if onBrowserEvent has been overridden and made final, you can still get access to many events by creating a NativePreviewHandler and checking where the event is occurring. This gets you into the event before it even goes to the widget itself - there you can call NativePreviewEvent.cancel() to prevent it from happening on the widget itself, or you can handle it early in the handler.
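The preview route looks roughly like this (deciding whether the event actually targets the upload field is application-specific):

HandlerRegistration registration = Event.addNativePreviewHandler(new Event.NativePreviewHandler() {
    @Override
    public void onPreviewNativeEvent(Event.NativePreviewEvent preview) {
        if (preview.getTypeInt() == Event.ONKEYDOWN) {
            // Inspect preview.getNativeEvent().getEventTarget() to decide whether the key
            // press belongs to the upload field, then handle it here or call preview.cancel()
            // to stop it from reaching the widget.
        }
    }
});
// Call registration.removeHandler() when the preview is no longer needed.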
I am interested in creating a new widget similar to the JSlider widget, but with a "ghost" (translucent) knob displaying the previous slider position, as well as adding a trails-like animation to the real knob's movement.
I am a bit confused on the proper way of extending a Java Swing widget to add new functionality.
As I see it I have a few options:
1) Extend JSlider, place all my new model information there, and override paint to first draw the JSlider and then overlay my desired additions. The problem with this solution is that I would need trial and error to get the marks in the correct position (as I won't have access to the full geometry information), which could make it not work in all situations.
2) Extend various classes of the Slider widget (JSlider, SliderUI, BasicSliderUI, MetalSliderUI, BoundedRangeModel, DefaultBoundedRangeModel). The advantage I see here is maintaining the proper model-view-controller architecture. However, this will require overriding various methods within those classes. I believe the result would seem very hacked together.
3) Copy all of the Slider widget's code and modify it to create my new widget. This would be similar to option (2), but it may be a bit simpler to modify the code than to extend it (which would basically amount to copying/modifying code anyway).
4) Re-create the slider widget from scratch with my desired functionality. After looking at the existing Swing Slider widget's code, this is not a trivial task.
Am I missing some more elegant method of creating a new widget that borrows functionality from an existing one?
Thank you for your insight.
I would choose 2), with some changes:
Create a WrapperSliderUI class which delegates all method calls to the wrapped UI, and override just the paint method.
The setUI() of your JSlider should wrap the original UI in a WrapperSliderUI:
@Override
public void setUI(SliderUI ui) {
    super.setUI(new WrapperSliderUI(ui));
}
Then, in paint, you check the original UI class and adapt your painting accordingly.
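A minimal sketch of the delegating UI might look like this (WrapperSliderUI is not an existing Swing class, and you may need to forward more ComponentUI methods depending on what your look and feel relies on):

import java.awt.Dimension;
import java.awt.Graphics;
import javax.swing.JComponent;
import javax.swing.JSlider;
import javax.swing.plaf.SliderUI;

public class WrapperSliderUI extends SliderUI {

    private final SliderUI delegate;

    public WrapperSliderUI(SliderUI delegate) {
        this.delegate = delegate;
    }

    @Override
    public void installUI(JComponent c) {
        delegate.installUI(c);
    }

    @Override
    public void uninstallUI(JComponent c) {
        delegate.uninstallUI(c);
    }

    @Override
    public Dimension getPreferredSize(JComponent c) {
        return delegate.getPreferredSize(c);
    }

    @Override
    public Dimension getMinimumSize(JComponent c) {
        return delegate.getMinimumSize(c);
    }

    @Override
    public Dimension getMaximumSize(JComponent c) {
        return delegate.getMaximumSize(c);
    }

    @Override
    public void paint(Graphics g, JComponent c) {
        delegate.paint(g, c);            // let the original UI paint the slider first
        paintGhostKnob(g, (JSlider) c);  // then overlay the custom decorations
    }

    private void paintGhostKnob(Graphics g, JSlider slider) {
        // custom painting of the translucent "ghost" knob goes here
    }
}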