Thursday, 25 December 2014

Android SDK: Introduction to Gestures

One of the most widespread changes in the use of touch screen devices over the last couple of years has been the adoption of finger gestures, such as swipes and flings. They make the interaction between user and device feel intuitive and natural. In this tutorial, you’ll learn how to begin using gestures in your own Android applications.
This tutorial will use code provided in an open source project. The authors are assuming the reader has some experience with Android and Java. However, if you have questions about what we’ve done, feel free to ask.
This tutorial will teach you how to begin handling finger gestures within your applications. We’ll be doing this by using a basic Canvas object drawing on a custom View object. This technique can be applied to whatever graphical environment you’re using, be it a 2D surface or even OpenGL ES rendering. If you’re interested in multitouch gestures (a more advanced gesture handling topic), we’ll be covering that in another upcoming tutorial.
Let’s start simple. Create a new Android project. We’ve named our project Gesture Fun and configured its one Activity, which is named GestureFunActivity. Modify the default layout file, main.xml, to the following, very basic layout:
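Something along these lines will do (this is only a minimal sketch; your own layout may differ, as long as the FrameLayout id matches the graphics_holder id used in the Activity code below):

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <FrameLayout
        android:id="@+id/graphics_holder"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</LinearLayout>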
The important item in this layout is the FrameLayout definition. The FrameLayout control is used to hold the custom View that will draw an image.
Finally, let’s update the onCreate() method of the Activity class to initialize the FrameLayout control and provide it with some content:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);

    FrameLayout frame = (FrameLayout) findViewById(R.id.graphics_holder);
    PlayAreaView image = new PlayAreaView(this);
    frame.addView(image);
}
At this point, we have not defined the PlayAreaView class yet, so the project won’t compile. Have patience, we’ll get to this in the next step.
With all our project setup out of the way, we can now focus on the interesting part: drawing on the Canvas object. An easy way to get a Canvas object to draw on is to override the onDraw() method of a View object. Conveniently, this method has a single parameter: the Canvas object. Drawing a Bitmap graphic on a Canvas object is as easy as calling the drawBitmap() method of the Canvas object. Here is a simple example of an onDraw() method implementation, as defined within our new PlayAreaView class:
private class PlayAreaView extends View {
    private GestureDetector gestures;
    private Matrix translate;
    private Bitmap droid;

    @Override
    protected void onDraw(Canvas canvas) {
        // Draw the droid graphic wherever the translate matrix currently places it
        canvas.drawBitmap(droid, translate, null);
        Matrix m = canvas.getMatrix();
        Log.d(DEBUG_TAG, "Matrix: " + translate.toShortString());
        Log.d(DEBUG_TAG, "Canvas: " + m.toShortString());
    }
}
Our onDraw() method implementation is pretty basic. As usual, you’ll need to define your DEBUG_TAG logging tag variable somewhere in your Activity. Most of the onDraw() method is just informational output. The only real work done in this method takes place in the drawBitmap() call, where the first parameter is the image to draw. The second parameter is a Matrix object called translate which, as the name implies, dictates where the bitmap will be drawn relative to the View in which the Canvas object resides. All of the rest of the code in this tutorial will involve manipulating the translate Matrix based upon certain user touch events. This, in turn, will change where the Bitmap object draws within the Canvas and, therefore, on the screen.
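As mentioned, DEBUG_TAG has to be defined somewhere; a one-line definition near the top of GestureFunActivity will do (the tag text itself is arbitrary):

private static final String DEBUG_TAG = "GestureFunActivity";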
The PlayAreaView class needs a constructor to perform some initial setup. Since our custom View will need to react to gestures, we need to create a GestureDetector here. A GestureDetector is an Android class that can take motion events, do some mathematical magic to determine what they are, and then delegate calls to a GestureListener object as specific gesture or other motion callbacks. The GestureListener object, a class we implement, receives these calls for specific gestures that the GestureDetector recognizes and allows us to react to them as we see fit (in this case, to move a graphic around within our PlayAreaView). Although the GestureDetector handles the detection of certain motions, it doesn’t do anything specific with them, nor does it handle all types of gestures. However, for the purposes of this tutorial, it provides just enough information. So, let’s get it hooked up:
public PlayAreaView(Context context) {
    super(context);
    translate = new Matrix();
    gestures = new GestureDetector(GestureFunActivity.this,
            new GestureListener(this));
    droid = BitmapFactory.decodeResource(getResources(),
            R.drawable.droid_g);
}
Let’s look at the PlayAreaView constructor in a bit more detail. First, we initialize the translate Matrix to an identity matrix (the default). Recall that an identity matrix will make no modifications to a Bitmap: it will be drawn in its original location.
Next, we create and initialize the GestureDetector – a default one – and assign it a valid GestureListener object (we’ll talk more about this in a moment). Finally, the Bitmap drawable, called droid, is loaded directly from the project resources. You can use any image you want – a baseball, apple, fortune cookie, etc. This is the graphic that you will be flinging around on the Canvas object.
We’ll get to the GestureListener next, as it’s a custom object. For now, let’s wire up the GestureDetector object called gestures so that it receives the motion data it needs to do its gesture recognizing magic. To do this, override the View control’s onTouchEvent() method within the PlayAreaView class as follows:
@Override
public boolean onTouchEvent(MotionEvent event) {
    return gestures.onTouchEvent(event);
}
What we’ve done here is make the GestureDetector the final word in all touch events for this custom View. However, the GestureDetector doesn’t actually do anything with motion events; it simply recognizes them and makes a call to the registered GestureListener class.
In order to react to the events recognized by the GestureDetector class, we need to implement the GestureListener class. The motion events we are most interested in are double taps and gestures of any kind. To listen for these types of motion events, our GestureListener class must implement both the OnGestureListener and OnDoubleTapListener interfaces.
private class GestureListener implements GestureDetector.OnGestureListener,
        GestureDetector.OnDoubleTapListener {
    PlayAreaView view;

    public GestureListener(PlayAreaView view) {
        this.view = view;
    }
}
After adding this class as an inner class of the Activity, add default implementations for each of the required methods. For example, here is an implementation for the onDown() method:
@Override
public boolean onDown(MotionEvent e) {
    Log.v(DEBUG_TAG, "onDown");
    return true;
}
Implementing these methods allows you to study the various events as they are recognized by the GestureDetector object. Interestingly, if the onDown() method does not return true, the main gesture we’re interested in here – scroll (or drag) – won’t be detected. You can, however, return false for the other recognized events that you are not interested in.
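For reference, here’s one possible set of stubs for the remaining required methods (we’ll flesh out onScroll(), onDoubleTap(), and onFling() below); each simply logs the event and, where a return value is required, declines it:

@Override
public void onShowPress(MotionEvent e) {
    Log.v(DEBUG_TAG, "onShowPress");
}

@Override
public boolean onSingleTapUp(MotionEvent e) {
    Log.v(DEBUG_TAG, "onSingleTapUp");
    return false;
}

@Override
public void onLongPress(MotionEvent e) {
    Log.v(DEBUG_TAG, "onLongPress");
}

@Override
public boolean onSingleTapConfirmed(MotionEvent e) {
    Log.v(DEBUG_TAG, "onSingleTapConfirmed");
    return false;
}

@Override
public boolean onDoubleTapEvent(MotionEvent e) {
    Log.v(DEBUG_TAG, "onDoubleTapEvent");
    return false;
}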
The MotionEvent object that is passed as a parameter to each callback method sometimes represents the touch event that started the gesture recognition and other times the last event that completed the gesture recognition. For our purposes, we’re letting the GestureDetector class handle all the details of deciphering which kind of motion the MotionEvent represents.
Note: The Android framework also provides a convenience class called SimpleOnGestureListener which combines the two interfaces (OnGestureListener & OnDoubleTapListener) into a single class with default implementations for all methods. The default implementations return false.
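If you go that route, a minimal sketch might look like the class below (the SimpleListener name is just for illustration); you only override the callbacks you care about, keeping in mind that onDown() still needs to return true for scrolling and flinging to be detected:

private class SimpleListener extends GestureDetector.SimpleOnGestureListener {
    @Override
    public boolean onDown(MotionEvent e) {
        // Returning true here is required so scroll and fling gestures are detected
        return true;
    }
}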
The first event we’d like to handle is the scroll event. A scroll event occurs when the user touches the screen and then moves their finger across it. This gesture is also known as a drag event. This event comes in through the onScroll() method of the OnGestureListener interface.
Here’s the implementation of the onScroll() method:
@Override
public boolean onScroll(MotionEvent e1, MotionEvent e2,
        float distanceX, float distanceY) {
    Log.v(DEBUG_TAG, "onScroll");

    view.onMove(-distanceX, -distanceY);
    return true;
}
Use the scroll event to pass along a move request to the PlayAreaView object. The implementation of this method is an important first step in mapping a finger motion event to movement of the graphic. We’ll get to this shortly. In the meantime, you’ve handled your first gesture!
The graphic can now be moved all over the screen – and sometimes even off of it. By definition, the image is only visible when drawn within the bounds of the View object. If the graphic’s coordinates land outside the boundaries of the View object, the graphic is clipped (not visible). You could put in edge detection and various other bits of logic (beyond the scope of this tutorial, we’re afraid), or just add detection for double-tapping and reset the graphic’s location. Here’s the sample implementation of the onDoubleTap() method (from the OnDoubleTapListener interface):
@Override
public boolean onDoubleTap(MotionEvent e) {
    Log.v(DEBUG_TAG, "onDoubleTap");
    view.onResetLocation();
    return true;
}
As with the previous method implementation, we are using this recognized motion, a double tap, to trigger a change within our view control. In this case, we simply reset the location of the graphic within the view.
A fling gesture is, essentially, leaving velocity on an item that was being dragged across a screen. The item in motion will usually slow down gradually, but this behavior is dictated by the developer’s implementation. In a game, for example, the velocity could be subject to the physics of the game world. In other applications, the velocity could be based on whatever formula feels right for the action it represents. Testing is the best way to get an idea of how it feels. In our experience, some trial and error is needed to settle upon something that feels – and looks – just right.
In our implementation, we’re going to pick a length of time before the motion caused by the fling stops, and then simply animate the image to a final destination calculated from the velocity passed to us by the onFling() method and the amount of time we set. Remember, the fling gesture isn’t detected until after the user’s finger is no longer touching the display. Think of it like throwing a rock: the rock continues to travel after you release it, and that continued travel is the part we want to animate once the user “lets go”.
Sounds complex? Here’s the code:
@Override
public boolean onFling(MotionEvent e1, MotionEvent e2,
        final float velocityX, final float velocityY) {
    Log.v(DEBUG_TAG, "onFling");
    final float distanceTimeFactor = 0.4f;
    final float totalDx = (distanceTimeFactor * velocityX / 2);
    final float totalDy = (distanceTimeFactor * velocityY / 2);

    view.onAnimateMove(totalDx, totalDy,
            (long) (1000 * distanceTimeFactor));
    return true;
}
We don’t even need to examine the two MotionEvent parameters; the velocity data is sufficient for our purposes. The velocity units are pixels per second. We use a fixed scaling factor to determine the length of time before the image comes fully to rest – in our case, 40% of a second (400 ms). So, multiplying half of each velocity value by 0.4 (the distanceTimeFactor variable), we come up with the total movement achieved after that fraction of a second. Finally, we pass this information on to our custom onAnimateMove() method of the View object, which will actually make our graphic appear to move across the screen using the information provided by the fling motion event.
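To make the arithmetic concrete: if onFling() reports a horizontal velocity of, say, 2400 pixels per second, then totalDx = 0.4 * 2400 / 2 = 480 pixels, and the animation runs for 0.4 * 1000 = 400 ms.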
Why half the initial velocity? If we’re starting at, say, velocity A and ending at velocity B over any amount of time, the average velocity is (A+B)/2. In this case, the ending velocity is 0. So, we cut the velocity in half here so that we don’t inadvertently make the image look like it’s jumping away from our finger faster than it was going before we released it.
Why 400 ms? No reason at all, but it looks reasonably nice on most devices. It’s not unlike calibrating your mouse movements – too fast and it feels jumpy and hard to see, too slow and you’re waiting for your sluggish mouse pointer to catch up with your brain. This value is the main variable to adjust for the “feel” of your fling. The higher the value, the less “friction” the image will seem to have when sliding to a stop on the screen. If you wanted realistic surface behavior, you’d need to apply real physics calculations. Here, we’re just doing a fixed-duration slow down without any real physics.
Now that all the gestures we’re interested in are handled, it’s time to implement the actual movement of the underlying graphic. Back in the PlayAreaView class, add the onMove() method:
public void onMove(float dx, float dy) {
    translate.postTranslate(dx, dy);
    invalidate();
}
This method does two things. First, it translates (translate is the graphics term for moving from point A to point B) our own matrix by the distance the finger moved. Then it invalidates the view so that it will be redrawn, and when it is, the image draws at its new location within the view. If we had wanted to pan the entire View object, we could have called the translate() method on the Canvas itself, which shifts the matrix it uses for all of its drawing. That might work well for some things, but if we had some static (by which we mean fixed, stationary, unmoving, like mountains) content inside the view, it wouldn’t. Instead, for this case, we just update our own Matrix, translate, which we use every time we draw the graphic.
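For comparison, a rough sketch of that whole-canvas approach (not what we do in this tutorial, since only the droid graphic should move) would look something like this inside onDraw(); the offsetX and offsetY fields are hypothetical pan offsets you would track yourself:

@Override
protected void onDraw(Canvas canvas) {
    canvas.save();
    canvas.translate(offsetX, offsetY); // hypothetical pan offsets tracked elsewhere
    canvas.drawBitmap(droid, 0, 0, null);
    // ...anything else drawn here would be shifted by the same amount...
    canvas.restore();
}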
Now we add the onResetLocation() method as well:
public void onResetLocation() {
    translate.reset();
    invalidate();
}
This method simply resets the matrix to the identity matrix and causes the view to be redrawn again via the invalidate() method. When the view is redrawn, the graphic will be back in its initial position.
For the fling movement, we have a little more to do than just draw the graphic in a new location. We want it to smoooooothly animate to that position. A smooth movement can be achieved through animation – that is, drawing the image at slightly different locations in rapid succession. Android has built-in animation classes, but they apply to entire views. We aren’t animating a View object. Instead, we’re moving an image on the Canvas controlled by a View. So, we have to implement our own animation. Darn. ☺
Android provides different interpolators to calculate the position of an object at a specific point in time during animations that use the built-in animation classes. We can leverage these interpolators within our own animation to save a little work – and apply some fun effects. This is possible because the provided interpolators are nicely generic and not tied to the specifics of how the built-in View animations work.
Let’s start with the onAnimateMove() method:
private Matrix animateStart;
private Interpolator animateInterpolator;
private long startTime;
private long endTime;
private float totalAnimDx;
private float totalAnimDy;

public void onAnimateMove(float dx, float dy, long duration) {
    animateStart = new Matrix(translate);
    animateInterpolator = new OvershootInterpolator();
    startTime = System.currentTimeMillis();
    endTime = startTime + duration;
    totalAnimDx = dx;
    totalAnimDy = dy;
    post(new Runnable() {
        @Override
        public void run() {
            onAnimateStep();
        }
    });
}
In this method, we track the starting location, the starting time, the ending time, and the total distance to travel.
Here, we initialize our animation using the OvershootInterpolator class. Since this interpolator causes the image to move a little farther, overall, than what we calculated, the image will technically start out slightly faster. It’s a small enough difference that it shouldn’t be noticed, but if you were going after precision accuracy, you’d need to adjust for that (which would mean writing your own interpolator – beyond the scope of this tutorial – and having a method for calculating total distance traveled).
All of this information is used to determine how far (in time) we are along the total duration of the animation. We pass this elapsed-time percentage to the interpolator, which tells us how far (in percentage of distance) we are from the starting point to the ending point of our motion. We use this, in turn, to determine where (in pixels) we are from the starting point.
This calculation is all done in the onAnimateStep() method, shown below. We call the onAnimateStep() method via a post to the message queue. We don’t want to get into a tight loop, which would make the system unresponsive. Instead, a simple approach is to just post the messages. This allows the system to remain responsive by providing asynchronous behavior without having to deal with threads. Since we have to do our drawing on the UI thread anyway, there is little point to a thread in this simple example.
Now let’s implement the onAnimateStep() method:
private void onAnimateStep() {
    long curTime = System.currentTimeMillis();
    float percentTime = (float) (curTime - startTime)
            / (float) (endTime - startTime);
    float percentDistance = animateInterpolator
            .getInterpolation(percentTime);
    float curDx = percentDistance * totalAnimDx;
    float curDy = percentDistance * totalAnimDy;
    translate.set(animateStart);
    onMove(curDx, curDy);

    Log.v(DEBUG_TAG, "We're " + percentDistance + " of the way there!");
    if (percentTime < 1.0f) {
        post(new Runnable() {
            @Override
            public void run() {
                onAnimateStep();
            }
        });
    }
}
First, we determine the percentage of time that the animation has gone through, stored in the float variable percentTime. Then we use that data so the interpolator can tell us where we are as a percentage from start to end, stored as the float variable called percentDistance. Then, we use that data to determine where we are, in pixels, along both the x and y axes from the starting position, stored as curDx and curDy (standing for current delta x and current delta y). The translation matrix is then reset to the initial value we stored (animateStart). Finally, the onMove() method is used with the calculated curDx and curDy to actually move the graphic to its next position along the animation. Whew!
The interpolator does not give the distance changed from the previous location. Instead, it only gives the current percentage of the total distance travelled. This is why we reset to our known starting location before each move. Finally, if the animation is not yet complete, we post another call to onAnimateStep().
Note: Consider exploring the other interpolators provided with the Android framework. The LinearInterpolator, for example, spreads the motion evenly over the duration, with no easing, and would produce exactly the total distance we calculated. Try it out and see how it differs from the OvershootInterpolator.
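Swapping interpolators is a one-line change in onAnimateMove(); for example, to try the linear (or a decelerating) variant you might write:

animateInterpolator = new LinearInterpolator();
// or: animateInterpolator = new DecelerateInterpolator();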
That’s all there is to it. If you’ve been following along with your own code, you can verify that you entered everything correctly by comparing it against our open source code. When you run it, your image will animate something like what you see in this video:
In this tutorial, you’ve learned how to connect up a GestureDetector to a custom View to smoothly move and animate an image within it on a Canvas. In doing so, you’ve learned how to handle basic gestures such as fling, as well as how to implement a custom View. In a future tutorial, you’ll learn how to add multitouch gestures for some other interesting effects.
We look forward to your feedback.
