
     

     

    Technical Article

    Windows Phone 7™ Gestures Compared

     

     

    Lab version: 1.0.0

    Last updated: August 25, 2014

     


    Contents

    Overview

    Objectives

    Introduction to Gestures

    Phone Operating System Flavors

    Creating a Gesture-Aware Application

    Raw Touch Information

    Summary

     


    Overview

    Windows Phone 7 introduces gestures as part of the operating system.

    This technical article compares ideas about gestures and their implementation across several phone operating systems, focusing on Windows Phone 7 as a reference.

     

    Objectives

    When you’ve finished the article, you will have:

    · A high-level understanding of the gestures used by phone operating systems.

    · A clear view of the ways that gesture implementations differ across resistive and capacitive touch screens, and across various phone operating systems.

    · Knowledge of how to implement gesture-aware applications in Windows Phone 7.

     

    Introduction to Gestures

    Handheld devices, and smartphones in particular, have evolved over the years to use touch screen interaction. The primary user interface evolved from stylus-operated devices—with or without additional hardware buttons—to finger-based resistive touch screens that use the human finger as a "big stylus." Since resistive screens rely on the application of firm pressure, these input devices had to settle for press/long press/release operations, and had little use for drag-and-drop actions.

    Dragging an object with a finger over a resistive touch screen is a frustrating task. Why? Because during the drag operation we instinctively tend to release some of the pressure from the screen and drop the dragged object too soon.  Moreover, resistive touch screens are capable of detecting only a single touch point, thereby limiting the possible actions the user can perform.

    Capacitive touch screens

    The recent use of capacitive touch screen hardware in phones has allowed the user to have better control over the location of the selecting finger. The hardware detects the touch location—regardless of the amount of pressure applied to the screen—by applying a tiny amount of electrical current through the finger, and then by calculating the position accordingly.

    Because of their hardware implementation, capacitive screens, unlike older resistive touch screens, support multiple touch locations. Capacitive screens provide for a multitouch experience and unlock a wide range of innovative gestures that users can apply to control applications.

    New interaction possibilities

    In addition to using an input method to select and then drag an object on the screen, we can use capacitive touch screens to hold, pinch, rotate, enlarge, and throw away an object, and so on.

    These input methods, known as gestures, are virtual actions that the user applies to the phone screen. The hardware determines the type of gesture based on the location, velocity, and direction of each finger touching the screen.

     

    Phone Operating System Flavors

    As of today, three major phone operating systems employ gestures: Apple iOS™, Google Android™, and Microsoft Windows Phone 7.

    Because there is no uniform standard for gestures, each operating system has its own set. Some gestures match those of other operating systems; others do not.

    Next is a review of the differences among operating systems with regard to gesture support.

    Apple iOS gesture set

    Apple iOS, used mainly on iPhone™, iPad™, and iPod touch™ devices, supports the following gestures:

     Tap – Press or select a screen object–a brief touch within a bounded area on the screen.

     Double tap – Two rapid sequential taps on the same object.

     Swipe – Move a finger across the screen and raise it without stopping.

     Drag/pan – Hold the finger over a screen object and move it around.

     Pinch – Hold two fingers on the screen, and in a relatively straight, virtual line move toward (pinch in) or away from (pinch out) each other.

     Rotate – Hold two fingers on the screen and move them in opposite directions on different virtual lines; one finger acts as a center while the other circles around it.

    Apple iOS 4 introduces two new gestures:

     Long press – Touch a screen object without releasing.

     Three-finger tap

    Google Android™ gesture set

    Google Android™ takes a different approach to gestures. The basic set is limited, but the operating system also lets developers record custom gestures into gesture sets that applications can recognize later.

    The basic set includes:

     Single tap – Press or select a screen object.

     Double tap – Two rapid sequential taps on the same object.

     Down – Touch a spot on the screen with a finger. This is the first phase of a tap, fling, or another gesture.

     Up – Finger no longer touches the screen. This is the last phase of a tap, fling, or another gesture.

     Fling – Move a finger across the screen and raise the finger without stopping.

     Long press – Touch a screen object without releasing it.

     Scroll – Hold the finger over a screen object, and then move the finger.

    Windows Phone 7 gesture set

    Windows Phone 7 supports the following gestures:

     Tap – Press or select a screen object—a brief touch within a bounded area on the screen.

     Double tap – Two rapid sequential taps on the same object.

     Pan – Hold a finger on the screen and move it around.

     Flick – Move a finger across the screen and raise it while it is still in motion. This gesture may be used to create kinetic movements, and can follow a pan gesture.

     Touch and hold – Touch a screen object for a defined time.

    Also, the following gestures are supported for multitouch:

     Pinch – Hold two fingers on the screen,  and move the fingers toward each other.

     Stretch – Hold two fingers on the screen, and move the fingers away from each other.

    Similarities and differences

    Windows Phone 7 gestures are compared to the gestures of the other two operating systems in the following table.

    Table 1. Gesture comparison (illustrations omitted)

    Gesture name   | Windows Phone 7 usage                                          | Apple iOS equivalent | Google Android equivalent
    ---------------|----------------------------------------------------------------|----------------------|--------------------------
    Tap            | Select an object, or stop content moving on the screen         | Tap                  | Tap
    Double tap     | Toggle between zoomed-in and zoomed-out states                 | Double tap           | Double tap
    Pan            | Move ("drag") an object on the screen to a different location  | Drag/pan             | Scroll
    Flick          | Move the whole canvas in any direction                         | Swipe                | Fling
    Touch and hold | Display a context menu or options page for an item             | Long press           | Long press
    Pinch          | Zoom out, or diminish an object (depending on the application) | Pinch                | No standard gesture
    Stretch        | Zoom in, or enlarge an object (depending on the application)   | Pinch                | No standard gesture

    Creating a Gesture-Aware Application

    In order to create a gesture-aware application in Windows Phone 7 under XNA, we first must know how Windows Phone 7 exposes gestures to the programmer. Once we know how gestures are exposed, we can use programmable gestures to define which gestures our application allows, to sample incoming gestures, and to react to them.

    Programmable gestures

    Windows Phone 7 breaks the above logical gestures into more elaborate programmable gestures, as follows:

    Table 2. Logical vs. programmable gestures

    Logical gesture | Programmable gestures                                                          | Notes
    ----------------|--------------------------------------------------------------------------------|------
    Tap             | Tap                                                                            |
    Double tap      | DoubleTap                                                                      |
    Pan             | FreeDrag (holding and moving in any direction), HorizontalDrag, or VerticalDrag, followed by a DragComplete gesture | Pan starts with the detection of a drag gesture and ends with the detection of DragComplete. An application may limit the user to horizontal-only, vertical-only, both horizontal and vertical, or free drag types.
    Flick           | Flick                                                                          | A flick may be detected following a drag gesture set, and should be treated accordingly.
    Touch and hold  | Hold                                                                           |
    Pinch           | Pinch, followed by a PinchComplete gesture                                     | Both pinch and stretch logical gestures are achieved by the programmable Pinch/PinchComplete gesture set; the changing deltas between the touch points let the programmer determine whether a pinch or a stretch is being performed.
    Stretch         | Pinch, followed by a PinchComplete gesture                                     | See the pinch note above.

     

    Enabling desired gestures

    Assuming we already have an XNA Windows Phone 7 Game project open in Visual Studio 2010, we should instruct the framework to enable the desired gestures in our application.
    As described in the previous section, XNA allows the programmer to define which gestures the application is capable of consuming, thus allowing the user to perform those gestures. There might be a case, for example, where you would like to disallow FreeDrag/VerticalDrag gestures in your application.  You may, however, want to allow HorizontalDrag or to support the Flick gesture.

    In order to define the above instruction, we use the namespace Microsoft.Xna.Framework.Input.Touch to access the static class TouchPanel. Within this static class, we then access the static (Flags enumeration) property EnabledGestures and set it to the desired set of enabled gestures.

    Gestures must be enabled before we can use them. Thus, TouchPanel.EnabledGestures must be set to the appropriate set of gesture types before calling TouchPanel.IsGestureAvailable or TouchPanel.ReadGesture (both are described later in the article) for the first time.

    The following example shows how to allow Tap, DoubleTap, and Hold touch gestures, and to disallow all the rest:

    C#

    TouchPanel.EnabledGestures = GestureType.Tap |
                                 GestureType.DoubleTap |
                                 GestureType.Hold;
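
    Similarly, the scenario described earlier, allowing a horizontal drag and flicks while keeping free and vertical drags disabled, could be enabled as follows. This is a sketch; DragComplete is included so the application is also told when the drag ends:

```csharp
// Allow horizontal dragging and flicks only; free and vertical
// drags remain disabled. DragComplete reports the end of the drag.
TouchPanel.EnabledGestures = GestureType.HorizontalDrag |
                             GestureType.DragComplete |
                             GestureType.Flick;
```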

     

    Waiting for a gesture and reacting accordingly

    We now want to detect and react to incoming gestures. To do this, we sample the current gestures from within the XNA project Update override method. We test if new gestures are available by checking the Boolean property TouchPanel.IsGestureAvailable. Next, we sample the gestures, and then react accordingly.

    When true is returned, we know that there are new gestures waiting to be sampled. We then call the TouchPanel.ReadGesture method, which returns an instance of the GestureSample class. The returned instance holds a sample of the detected gesture, supplying information such as the type and location of the gesture and the deltas between touch points (for multitouch gestures). In the following code example, the Boolean property is tested, a sample is acquired, and information is collected into a string for later display:

    C#

    string infoMessage = "";

    while (TouchPanel.IsGestureAvailable)
    {
        GestureSample gestureSample = TouchPanel.ReadGesture();

        infoMessage += String.Format(
            "Type: {0} First touch point position: {1},{2}; Delta: {3},{4} " +
            "Second touch point position: {5},{6}; Delta: {7},{8} ",
            gestureSample.GestureType,
            gestureSample.Position.X, gestureSample.Position.Y,
            gestureSample.Delta.X, gestureSample.Delta.Y,
            gestureSample.Position2.X, gestureSample.Position2.Y,
            gestureSample.Delta2.X, gestureSample.Delta2.Y);
    }
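
    For multitouch gestures, the deltas in the sample are what let the application tell a pinch from a stretch, as noted in Table 2. The following is a minimal sketch (not from the original article) that assumes a GestureSample variable named gestureSample, such as the one read in the loop above, and compares the distance between the two touch points before and after the sampled movement:

```csharp
// Sketch: distinguish pinch (fingers closing) from stretch (fingers opening).
// Position/Position2 are the current touch points; subtracting Delta/Delta2
// reconstructs their positions at the previous sample.
if (gestureSample.GestureType == GestureType.Pinch)
{
    Vector2 first = gestureSample.Position;
    Vector2 second = gestureSample.Position2;
    Vector2 firstPrevious = first - gestureSample.Delta;
    Vector2 secondPrevious = second - gestureSample.Delta2;

    float currentDistance = Vector2.Distance(first, second);
    float previousDistance = Vector2.Distance(firstPrevious, secondPrevious);

    if (currentDistance < previousDistance)
    {
        // Fingers moved toward each other: treat as a pinch (e.g., zoom out).
    }
    else if (currentDistance > previousDistance)
    {
        // Fingers moved apart: treat as a stretch (e.g., zoom in).
    }
}
```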

     

    Special considerations

    A single gesture on the screen may produce several subsequent gesture samples. For example, a DoubleTap gesture will always be preceded by a Tap gesture located near the succeeding DoubleTap gesture. When coding against gestures in the XNA Framework, such subsequent gestures must be considered when reacting to incoming gestures.
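
    As an illustration of this point (the pendingTapPosition field is hypothetical, not from the article), an application that supports both Tap and DoubleTap might defer acting on a Tap briefly so that a following DoubleTap can cancel it:

```csharp
// Hypothetical state kept by the game class:
//   Vector2? pendingTapPosition;  // set by Tap, cleared by DoubleTap
while (TouchPanel.IsGestureAvailable)
{
    GestureSample gesture = TouchPanel.ReadGesture();

    switch (gesture.GestureType)
    {
        case GestureType.Tap:
            // Do not act yet: a DoubleTap sample may follow this Tap.
            // Act on the pending tap in a later Update call if no
            // DoubleTap arrives within the double-tap interval.
            pendingTapPosition = gesture.Position;
            break;

        case GestureType.DoubleTap:
            // Cancel the pending single-tap action and handle the
            // double tap instead (for example, toggle the zoom state).
            pendingTapPosition = null;
            break;
    }
}
```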

     

    Raw Touch Information

    Gestures are not always required in our applications. Sometimes we only need to know where the user's fingers are on the screen at any given moment.

    This could be compared to registering a MouseClick event on a standard Windows application, or registering MouseDown and MouseUp events.

    A MouseClick event represents the combination of subsequent MouseDown and MouseUp events at the same location. Registering to the raw MouseDown or MouseUp events allows us to make our own decisions about what the user is trying to do.

    Reading raw touch information

    In order to react to raw user activities, we call the static method TouchPanel.GetState from within the project's Update override method. The static method returns an instance of the TouchCollection structure, holding a collection of touch locations (each represented by the TouchLocation structure). The collection holds one instance of the structure for each finger touching the screen.

    Using the raw method requires the programmer to watch closely for touch screen changes, and to interpret them accordingly.

    The following code example runs within the Update override method, gets the current touch state, tests if there are any current touched positions, and collects the information into a string for later display:

    C#

    string infoMessage = "";

    TouchCollection touchLocations = TouchPanel.GetState();

    if (touchLocations.Count > 0)
    {
        infoMessage = String.Format(
            "Detected {0} touch points at the following locations: ",
            touchLocations.Count);

        for (int i = 0; i < touchLocations.Count; i++)
            infoMessage += String.Format("{0}. {3} at {1},{2}; ",
                i, touchLocations[i].Position.X,
                touchLocations[i].Position.Y, touchLocations[i].State);
    }
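
    When interpreting raw touch changes over time, each TouchLocation can also report its position in the previous frame. The following sketch (an illustration, assuming it runs inside the Update override) uses TryGetPreviousLocation to compute how far each finger moved since the last sample:

```csharp
// Sketch: per-finger movement between frames using raw touch data.
TouchCollection touches = TouchPanel.GetState();

foreach (TouchLocation touch in touches)
{
    TouchLocation previous;

    // TryGetPreviousLocation fails for a finger that just touched down,
    // so test the state as well.
    if (touch.State == TouchLocationState.Moved &&
        touch.TryGetPreviousLocation(out previous))
    {
        Vector2 movement = touch.Position - previous.Position;
        // touch.Id stays constant for the finger's whole contact,
        // so movement can be accumulated per finger if needed.
    }
}
```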

     

     

    Summary

    Gestures are an exciting new way for applications to interact with the user under Windows Phone 7, employing the power of capacitive touch screen hardware.

    Programming gesture-aware applications against the XNA Framework is quite simple and straightforward.

    Instead of having to interpret and calculate touch locations over time, the programmer may rely on the XNA Framework to do the major part of the work of interpreting standard gestures, leaving only the application-specific logic to implement. Nevertheless, the programmer may still use the raw touch input detection method when required, and combine the two methods where needed.

     

    Original article: https://www.cnblogs.com/jx270/p/3934167.html