Google once again reminds us how behind Siri is
Our household only has two people in it, but if you were just listening from the outside, you’d be excused for thinking that there was a third: a sometimes helpful but often annoyingly recalcitrant entity named Siri.
We use Siri a lot, thanks to the HomePod mini that’s lived in the kitchen for the last nine months or so, as well as the stereo pair of full-size HomePods in the office. Then, of course, there’s the Apple TV with its Siri Remote and the myriad iPhones and iPads that litter the house.
But for all of the usefulness we get out of the voice assistant—and I’ll be the first to admit that it can be very handy for a lot of the tasks that come up every day—it remains a source of frequent frustration. And that frustration is what prevents the technology from really getting to the next level.
Au naturale language processing
At this past week’s Google I/O conference, the company showed off improvements that it’s been making to natural language processing with its new LaMDA platform. The goal is to make talking to an AI feel more like talking to a human; in the demo, a LaMDA-based AI “posed” as, first, the (dwarf) planet Pluto, and later a paper airplane, answering questions and generally responding in a conversational fashion much closer to that of a human.
Apple, for its part, has spent a lot of time making Siri’s voices sound like real people, including recently adding new voices that reflect more diverse accents and pronunciations. But that, unfortunately, doesn’t always translate into the conversations themselves feeling more human. After the sixth or seventh time that Siri has responded to a query with the exact same unhelpful answer, you might still very well feel as though you’re talking to a toddler.
In the past, Google has also rolled out software designed to make people think they were talking to a human—but that end goal feels decidedly uncomfortable. The real aim with Siri and other voice assistants should be similar, but not quite the same: they should be good enough to let you forget you’re talking to a machine.
Siri-tuational awareness
Much like the titular parents of one of Will Smith’s most iconic hits, sometimes Siri just doesn’t understand. Or sometimes maybe it does understand, but it still doesn’t do what you want. Examples have included everything from adding bizarre entries to our shopping list to turning on the lights in the wrong room in the house. Moreover, sometimes, the devices don’t even seem to hear a particular request, ignoring us in a manner that borders on willful. Or, alternatively, a device two rooms away decides that it’s the best one to handle a query.
One might think that having multiple HomePods would improve Siri’s responses by letting the devices do a better job of hearing a request and picking up contextual information, such as which specific location is being referenced. There are some indications of that—for example, Siri can now understand that when you say “turn the lights on” while standing in the kitchen, you probably mean the kitchen lights. But by and large, it sometimes seems as if each of the HomePods lives in its own silo.
For example, when a timer goes off on the HomePod mini in the kitchen, I can now tell the office HomePod to “stop the timer in the kitchen,” which works. But if, while the timer is still running, I interrogate the office HomePod about the timer in the kitchen, I might as well have asked it to expound upon the meaning of life. It has no idea what I’m talking about.
Better late than never
Look, I’ve been beating this drum for a long time, and while there have been minor improvements to Siri over the years, it’s never really felt as though Apple has invested in deep, fundamental change—this despite the company’s increased focus on key underlying technologies like AI and machine learning.
That said, it does seem as though Apple is doing work behind the scenes—more than once I’ve noticed that Siri’s responses to a certain request have changed. For example, at one point it went from responding to requests to add things to my shopping list with a verbose “Okay, Dan, I’ve added bananas to your shopping list” to briefly just throwing back a terse “On it.” That wasn’t a bad change, per se, but without understanding why the change happened, it contributed to the ongoing feeling of Siri as a black box, or an experiment where I was the subject.
For six years, I’ve been arguing that Apple is desperately in need of a “Siri 2.0” initiative: a clear roadmap of where it’s going with the virtual assistant. Unbelievably, this fall will see the 10th anniversary of the launch of Siri as an Apple product, and while it has come a long way, there’s still a hell of a long way left to go.