In the week or so since the last devlog I've mostly been working on mouse and touch input, as you can see from the commit history. The player can now drag a selection box to select multiple units, as well as scroll around a small level with both mouse and touch input.
Dragging a selection box to mass-select a group of units.
As ever, all the code is on GitHub and you can play it live at CommandAndConstruct.com.
New player controls
Now that the level is scrollable, the supported controls are:
Mouse controls
- Click individual units to select them
- Click in a space to command selected units to move there
- Click and drag to drag a selection box to select multiple units
- Hold middle mouse button to pan the view
- Mouse wheel to zoom
Touch controls
- Tap individual units to select them
- Tap in a space to command selected units to move there
- Swipe with one finger to drag a selection box to select multiple units
- Pinch with two fingers to zoom or pan the view. If you use two fingers to pan and then release one finger, the finger you keep down can continue to be used to pan the view only.
Implementing controls
For a long time the web had to deal with separate mouse and touch events. However, these days there is one unified API for all kinds of mouse, touch and even pen input, and it's now widely supported: Pointer Events.
I've added a new PointerManager class to handle pointer inputs (the "pointerdown", "pointermove" and "pointerup" events), and also the mouse wheel. Another class, PointerInfo, is used to track pointer state over time. PointerManager keeps a Map of pointer IDs (uniquely assigned to each individual input) to PointerInfo objects, creating an entry in "pointerdown" and removing it in "pointerup". This is a really nice pattern for handling multi-touch, including even properly handling simultaneous mouse and touch input (e.g. for touchscreen laptops). Long-term state for tracking inputs can be conveniently held in the PointerInfo class.
All this user input code is fully client side - there's no need to involve the server in how the client views their own game state. However, handling inputs is something that has to be coded very carefully. Making sure players feel like it "just works" means handling very specific sequences of input.
Tap vs. drag
One example of how PointerInfo comes in handy is: how do you tell the difference between a tap and a drag? This applies to both mouse and touch input. For example, a mouse click in an empty space means to command selected units to move there, but dragging the mouse should create a selection box and not command units to move.
The answer is that you don't handle inputs in "pointerdown" (when a button is pressed or a touch starts) - you handle them in "pointerup". Then you can distinguish taps and drags by how far the pointer moved in between. If it stayed in the same area, it's a tap; if it moved beyond a certain minimum distance, it's a drag.
PointerInfo is perfect for tracking this kind of change over time. It stores the pointer start position in class properties (#startClientX and #startClientY). In OnMove(), it checks the current position; if the pointer has moved more than MAX_TAP_DIST pixels (currently 30), then it switches the pointer into dragging mode. That's done in #StartDrag(), which sets the #actionType to "drag" and creates a selection box. In the pointer up event, if the action type is still its default "tap", the pointer stayed in the same area and so it's treated as a tap; otherwise, in drag mode, it selects all units in the selection box area.
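The tap-vs-drag logic above can be sketched like this. MAX_TAP_DIST, #startClientX, #startClientY, #actionType and #StartDrag() are names from the post; the surrounding structure and the selection-box handling are illustrative, not the project's actual implementation.

```javascript
const MAX_TAP_DIST = 30;	// CSS pixels of movement before a tap becomes a drag

class PointerInfo {
	#startClientX;
	#startClientY;
	#actionType = "tap";	// stays "tap" until movement exceeds the threshold

	constructor(clientX, clientY) {
		this.#startClientX = clientX;
		this.#startClientY = clientY;
	}

	OnMove(clientX, clientY) {
		const dx = clientX - this.#startClientX;
		const dy = clientY - this.#startClientY;
		// Switch to dragging once the straight-line distance from the
		// start position exceeds the tap threshold
		if (this.#actionType === "tap" && Math.hypot(dx, dy) > MAX_TAP_DIST)
			this.#StartDrag();
	}

	#StartDrag() {
		this.#actionType = "drag";
		// ...create the selection box here...
	}

	OnUp() {
		// Only decide what the input meant once it ends:
		// "tap" => move command, "drag" => select units in the box
		return this.#actionType;
	}
}
```

Note the decision is deferred to pointer up, so a small wobble under 30px still counts as a tap.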
Client co-ordinates
Another important detail is that pointer events are mostly dealt with in client co-ordinates. These are basically CSS pixels on the page. They don't change as the game scrolls or zooms; instead, client co-ordinates are converted to game co-ordinates (using Construct's layer.cssPxToLayer(clientX, clientY) method) when necessary. Scrolling or zooming the game changes the entire game co-ordinate system, so game co-ordinates aren't really appropriate for tracking pointers - it's all too easy to accidentally set up a feedback loop where moving a pointer changes the game, which changes the pointer position in game co-ordinates, which changes the game again, and so on. Consistently using client positions avoids this kind of problem and means all the inputs are handled in terms of what the user is physically doing.
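As a sketch of "convert only when necessary": track everything in client co-ordinates, and translate to layer co-ordinates only at the moment a command is issued. The cssPxToLayer and getLayer calls follow Construct's scripting API; the function name, the "Main" layer name and the unit's moveTo method are hypothetical placeholders.

```javascript
// Issue a move command at a stored client (CSS pixel) position.
// Conversion to game co-ordinates happens here, at the last moment.
function commandMoveTo(runtime, selectedUnits, clientX, clientY) {
	const layer = runtime.layout.getLayer("Main");	// assumed layer name
	// Construct's API returns the position in layer (game) co-ordinates
	const [layerX, layerY] = layer.cssPxToLayer(clientX, clientY);
	for (const unit of selectedUnits)
		unit.moveTo(layerX, layerY);	// hypothetical unit method
}
```

Because the stored clientX/clientY never change as the view scrolls or zooms, this conversion always reflects wherever that screen position currently points in the game world.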
Zooming to/from a position
When scrolling the mouse wheel, it's nice to make the zoom go in to, or out from, the mouse position. This lets you zoom out, move the mouse to a new point of interest, and then zoom right in to that area, rather than just wherever the middle of the screen is. That's important for a strategy game where the player will likely want to jump between different places frequently.
Initially I had the code for scrolling and zooming in PointerManager, but pretty quickly realised it was best split off into its own class, ViewManager. This took several tries to get right, but I'm pretty happy with the pattern I landed on. You can find the zoom-to-position calculation in #SetActualZoom(), which is where the zoom level actually changes. (ViewManager also smooths zoom changes so they don't jump: setting the zoom level actually changes the target zoom, and the actual zoom level moves towards the target zoom in Tick().) The methods for both panning and zooming the view are shared between mouse and touch inputs.
Pinch-to-zoom
Probably the trickiest part of this code is the pinch-to-zoom gesture. This starts when there are two touch pointers. If one of the pointers was already dragging a selection box, that action is cancelled. It's also possible to remove one finger, keep pan scrolling with the other finger, and then add a second finger to resume zooming. The rules to decide when to start a pinch-to-zoom are in #MaybeStartPinchToZoom() in PointerManager.
Once two pointers are in "pinch-to-zoom" mode, if either moves then it will call #UpdatePinchZoom() in PointerManager at the end of the tick. (I decided to do this since both pointers could move in the same tick; this approach allows handling them both at the same time, rather than one at a time.) This method then either does just panning with one pointer, or pinch-to-zoom with two pointers.
Pinch-to-zoom does two actions simultaneously:
- Panning - the view is scrolled as the mid-point of the two touches moves.
- Zooming - the view is zoomed as the distance between the two touches changes.
This logic mainly happens in #UpdatePinchZoom_2Pointers() in PointerManager. You can see how it calculates both the starting mid-point and current mid-point, and the starting distance and current distance, and applies panning and zooming accordingly. Note it also uses the current mid-point as the zoom position, so zooming by touch zooms in to and out from the touch position like it does with the mouse cursor when using the mouse wheel.
Conclusion
This was all fairly tricky code to write and I had to think about it carefully, but I'm pretty pleased with the result - I think the code is about as straightforward as it can be while dealing with all these different types of input. Anyway, if you need a reference pinch-to-zoom implementation, you can find one here!
It's nice to have it working with touch input too. I think mobile support is important for this game, and so far I think it feels nice to use on a tablet (and still usable on a phone).
Being able to zoom and scroll is obviously important. I want to have games with 1000 units, and they definitely won't be able to fit on a single screen! Making player inputs feel natural is also essential to make the game playable and feel fun rather than feeling like you're fighting the controls as much as the other team. So this was an important precursor to the next step, which is to set up a huge battle and optimise performance to handle it!
Past blog posts
In case you missed them here are the past blogs about this project: