Really excited to see the Web Haptics API explainer.
We have been looking at something extremely similar for Google Search. We've prototyped several options in our native apps within WebViews by bridging calls to native APIs.
This is similar to the first version we considered, and it seems to fulfill most common haptic use cases. We're also exploring a much more advanced API for more delightful moments (e.g. Android's charging cable plug-in ripple) but have found a universal design to be more complicated and the use cases more specific.
Overall I am highly supportive of this proposal. A number of thoughts in no particular order:
We haven't found a universal mapping of named primitives between platforms, but we prefer the idea that the haptics map to existing primitives. This will feel familiar to users and avoid random, bespoke patterns. We've tried to focus on primitives that are common on the web and also have equivalents on native platforms, e.g. success and error, plus some more general intensities (low, medium, high). We're considering adding more soon, but I don't have confidence in the right semantic set.
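To make the "named primitives, progressively enhanced" idea concrete, here is a minimal sketch. The `navigator.haptics.play(name)` shape is an assumption for illustration, not the explainer's confirmed surface:

```javascript
// Hypothetical API shape: navigator.haptics.play(name) is an assumption,
// not the explainer's actual surface. The point is the shape of the call
// site: a named primitive plus silent feature detection.
function playHaptic(name) {
  const haptics = typeof navigator !== "undefined" && navigator.haptics;
  if (!haptics || typeof haptics.play !== "function") {
    return false; // progressive enhancement: no-op where unsupported
  }
  haptics.play(name); // e.g. "success", "error", "low", "medium", "high"
  return true;
}
```

A call site like `playHaptic("success")` then degrades gracefully on browsers or devices without haptic support.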
We currently trigger from JS, but we're considering building a declarative API. This could have a number of benefits if done at the browser level:
- Haptics need to trigger close to input to feel responsive. JS is vulnerable to delay and jank and declarative haptics could play almost instantly.
- It could be possible to more consistently apply haptics with CSS, like setting a success haptic on buttons with a `.success` class.
- A concept of `default` or `auto` haptics, similar to user agent stylesheets, could be extremely useful to drive consistency in implementation. Especially if the declarative haptic can be reasonably expressed in different element/component contexts, `default` could select the expected haptic for that platform if one exists, e.g. `snap-point-haptic: default` or `overscroll-haptic: default`. The design might be more powerful with a generalized version of interest invokers to link actions with outcomes.
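As a sketch of what the declarative direction could look like, assuming entirely hypothetical property names (nothing here is proposed syntax):

```css
/* Hypothetical syntax: none of these property names exist; they only
   illustrate a declarative, cascading design for haptics. */
button.success {
  haptic-feedback: success;      /* named primitive on activation */
}
input[type="range"] {
  snap-point-haptic: default;    /* platform-chosen haptic, like a UA default */
}
.scroller {
  overscroll-haptic: default;
}
```

Because the browser resolves these at input time rather than waiting on script, the haptic could fire close to the gesture even when the main thread is janky.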
It's worth considering fallback behavior, especially if new haptics are added in the future. It may be fine for sites to keep track of which browser versions support certain primitives, but if they don't, they might try to play a haptic that doesn't exist, leaving an unexpected gap in the experience. Given that it's a progressive enhancement, though, that isn't especially harmful. A fallback design could resemble fonts: either a string or an array that chooses the first available haptic pattern.
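The font-style fallback list could be sketched like this, assuming a hypothetical `canPlay(name)` capability check (not part of the explainer):

```javascript
// Font-family-style fallback: take an ordered list of candidate primitives
// and return the first one the device supports. canPlay is a hypothetical
// capability check, injected here so the selection logic is testable.
function firstSupportedHaptic(candidates, canPlay) {
  for (const name of candidates) {
    if (canPlay(name)) return name;
  }
  return null; // no supported haptic: play nothing, as today
}
```

Usage would mirror `font-family`: `firstSupportedHaptic(["soft-tick", "click", "medium"], canPlay)` plays the newest primitive where available and degrades on older browsers.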
The "let the device decide what haptic to play" approach is something I strongly support. Different devices have different actuators and capabilities, and a generic pattern will never feel right on all devices. It's better to let each one tune the pattern for its own hardware. That said, it's worth considering the consequences of this further. I believe that for Android, device manufacturers have to tune haptics for their hardware. I don't imagine that kind of tuning will be done for a web API specifically, and mapping web primitives to native platform primitives could be the best strategy to leverage the work that's done per-device. Using Android as an example, we have a cascade of fallbacks to different primitives, depending on the API version available on the device, for each web-facing primitive.
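The cascade idea could be sketched like this. The effect names and API levels below are assumptions for illustration, not a verified Android mapping:

```javascript
// Illustrative only: the effect names and minimum API levels are
// assumptions sketching the fallback-cascade idea, not a verified
// mapping of web primitives to Android effects.
const CASCADE = {
  success: [
    { minApi: 30, effect: "CONFIRM" },         // hypothetical: richest effect
    { minApi: 29, effect: "CLICK" },           // hypothetical: simpler predefined effect
    { minApi: 26, effect: "ONE_SHOT_VIBRATION" }, // hypothetical: plain vibration
  ],
};

// Walk the cascade top-down and return the best effect this device supports.
function resolveEffect(primitive, apiLevel) {
  for (const step of CASCADE[primitive] ?? []) {
    if (apiLevel >= step.minApi) return step.effect;
  }
  return null; // very old device: skip the haptic entirely
}
```

Each web-facing primitive gets its own ordered list, so newer devices play the tuned, manufacturer-calibrated effect while older ones degrade step by step.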
On the subject of user expectations and defaults, another principle we're still trying to translate into implementation is matching the platform itself. Standard native components often have default haptics associated with them. The mappings aren't always exposed, but ideally we'd copy them, because users habituate to the catalog of native components and their haptic behaviors. If the web API can provide a sensible way for standard web components to behave like their equivalent native components, it will feel more familiar to users. Though I'm not sure if this is feasible.
We've also favored reactive, input-driven haptics as the most logical place to start. The other major use case is async content. Especially with LLMs that can take several seconds to respond on expensive operations, a light haptic to give the user a signal that something is happening can be useful. That said, we've also noticed the initial surge of haptics in products for this use case has gone from extremely aggressive (per token chunk) down to a few key response boundaries, suggesting that the best practice is "less is more". I'm not sure if this falls under long-running notifications. I would support async haptics with O(seconds) delay, but would consider platform-level notifications (after leaving the page) on O(minutes) out of scope.
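The "less is more" boundary pattern could be sketched like this, with `playHaptic` standing in for whatever the eventual API is (both the primitive names and the injection are assumptions for illustration):

```javascript
// Sketch of the "less is more" pattern for streamed LLM responses:
// fire a light haptic only at response boundaries, never per chunk.
// playHaptic is a stand-in for the eventual web API.
function makeBoundaryHaptics(playHaptic) {
  let started = false;
  return {
    onChunk() {
      if (!started) {
        started = true;
        playHaptic("light"); // first chunk: signal "response started"
      }
      // subsequent chunks: intentionally silent
    },
    onDone() {
      started = false;
      playHaptic("success"); // final boundary: signal completion
    },
  };
}
```

A hundred streamed chunks then produce exactly two haptic events, which matches where products in this space seem to have landed.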