We’re always working to improve New Zealand government digital services for users, and we’ve recently been looking at how we optimise content for query and context.
As part of this work, we have a small team looking at how people use voice assistants (Alexa, Google Home and others). We want to understand better how our content serves voice queries ‘as-is’, and what we need to be doing now to optimise its structure and meet future needs.
Why voice, why now?
There’s already some great research on voice, looking at usage trends and typical query complexity. We also know that voice assistant usage is increasing.
We want to understand more about what users expect (if anything) from government when using voice to access and engage with services.
We’re exploring usage trends (for example, big changes in voice assistant market share) to see how they should inform our content strategy. We’re also working on structuring content so it serves users well now, and is ready for future queries, regardless of how those queries are made.
We’ll explore how we can optimise content for our users, and the nuances of serving useful content in different contexts:
- Input (“ok google, insert dental appointment next Monday at 3pm”)
- Enquiry (“hey google, what’s the weather going to be like tomorrow?”)
- Command (“hey Alexa, turn the volume down”)
- Conversation (“hey Alexa, I want to get a driving licence”… Alexa: “what type of licence would you like to get?”)
We’ll build a prototype to understand more about how we can better structure content to meet these types of query. We want to release machine-readable, user-friendly content across our products.
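As a rough illustration of what machine-readable content can look like, one widely used approach is schema.org JSON-LD markup, which search engines and voice assistants can draw on when answering queries. This is a sketch only, not a description of our prototype, and the question-and-answer text in it is invented:

```python
import json

# Illustrative sketch: a schema.org FAQPage expressed as JSON-LD.
# The content here is made up for the example.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How do I renew my passport?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "You can renew your passport online at passports.govt.nz.",
            },
        }
    ],
}

# A page would embed this inside a <script type="application/ld+json"> tag
# so that crawlers can parse the question/answer pairs directly.
print(json.dumps(faq, indent=2))
```

Structuring content this way keeps the underlying answer independent of the query method — the same markup can feed a featured snippet, a voice response, or the page itself.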
We already see featured snippets surfacing well for some use cases (e.g. passport queries to passports.govt.nz), and we want to find out why. This work will further inform how we build services that address our users’ current and future needs.