Are you thinking about creating a speech bot-driven app for your business? Some of the guidelines around bot creation—as outlined at Microsoft Build by noted Swedish entrepreneur, podcaster, and Windows Platform Development MVP Jessica Engstrom—are common sense. For example, don't build a voice bot just because it's cool new technology, and make sure it fits your business model.
But there are plenty of scenarios where voice does fit. One argument is that the average person types 40 words per minute but speaks 150. Approximately 3,000 new bots are released per week on the Microsoft platform alone, and 95 percent of smartphone owners have tried a personal assistant.
It's not all smooth sailing, though. Engstrom mentioned Microsoft's own disastrous voice-plus-AI experiment, Tay, which the company had to pull in less than a day after the internet taught it to be racist. And she pointed to Burger King, which ran a commercial designed to trigger Google Home but instead read a Wikipedia page saying the Whopper contained cyanide.
When designing a voice assistant, Engstrom said, you should limit the scope of possible answers: don't have it ask open-ended questions. Train the voice assistant to handle many ways of phrasing the same question or command. She even suggested writing a full script of a conversation that makes sense for your bot. Finally, provide audio help, giving examples of the kinds of things a user can say.
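Engstrom's advice about handling many phrasings and offering audio help can be sketched in a few lines of Python. The intent names and sample phrasings below are hypothetical illustrations, not anything shown in her session:

```python
# Minimal sketch: map many user phrasings onto a single intent,
# and generate an audio-help prompt from the same table.
# All intents and phrasings here are hypothetical examples.
INTENT_PHRASES = {
    "check_order_status": [
        "where is my order",
        "track my order",
        "has my order shipped",
    ],
    "store_hours": [
        "when are you open",
        "what are your hours",
    ],
}

def match_intent(utterance: str):
    """Return the intent whose sample phrasing matches the utterance, or None."""
    text = utterance.lower().strip("?!. ")
    for intent, phrases in INTENT_PHRASES.items():
        for phrase in phrases:
            if phrase in text or text in phrase:
                return intent
    return None  # out of scope -- fall back to the help prompt

def help_prompt() -> str:
    """Audio help: tell the user what kinds of things they can say."""
    examples = [phrases[0] for phrases in INTENT_PHRASES.values()]
    return "You can say things like: " + "; or ".join(f'"{e}"' for e in examples)
```

A real bot would use a trained language-understanding model rather than substring matching, but the design point is the same: a closed set of intents, many phrasings per intent, and a help prompt built from concrete examples.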
One of the big announcements at the Build Keynote was the ability to transcribe multiparty speech in meetings while keeping track of which speaker said what. In a separate session, Aarthy Longino, Principal Program Manager for Speech and Language at Microsoft, showed this working in a custom development interface.
At last year's Build, the biggest hit was a meeting "cone" that recognized participants and transcribed what each said. Now that cone, which also sports a 360-degree camera, is being tested by Microsoft customers in private preview. But there are other devices that anyone can get to test the transcription, including the Roobo Smart Audio Dev Kit, which was impressively demoed in the session.
You can find these Cognitive Services Speech Devices at aka.ms/sdsdk-get.
On the other end of speech, and at least as impressive, is text to speech (TTS). Microsoft's Qinying Liao, a Principal Program Manager on Speech Services, showed advances such as the remarkably natural-sounding new Neural Voices, which were so smooth that attendees in the room voted for one over an actual human reader.
Currently, Neural Voices are only available for nine regional English dialects, but Japanese, Spanish, and Portuguese are in the works.
Another new capability is adding emotion to TTS: a simple keyword in code can make the generated voice sound cheerful or empathetic. Emotion works the other way, too: Microsoft's transcription technologies for call centers can detect when an interaction starts to go negative. The Speech Services will also let businesses customize recognition and TTS with their own terminology in a new Custom Speech Portal. You can read about all the Azure Speech Services at this help page.
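The emotion keyword Liao described is expressed through the SSML markup that Speech Services accepts, where a style attribute wraps the text to be spoken. Here is a hedged sketch in Python that builds such a request body; the element names follow Azure's SSML format, but the specific voice name and style values used here are assumptions for illustration:

```python
def build_ssml(text: str, voice: str = "en-US-JessaNeural",
               style: str = "cheerful") -> str:
    """Build an SSML document asking a neural voice to speak `text`
    in the given emotional style (e.g. 'cheerful', 'empathetic').
    The voice and style names are illustrative assumptions."""
    return (
        "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
        "xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='en-US'>"
        f"<voice name='{voice}'>"
        f"<mstts:express-as style='{style}'>{text}</mstts:express-as>"
        "</voice>"
        "</speak>"
    )

# The resulting string would be posted to the Speech Services TTS endpoint;
# swapping the style keyword is all it takes to change the emotional tone.
ssml = build_ssml("Your order is on its way!", style="cheerful")
```

The point of the demo was exactly this: the emotional rendering is a one-keyword change in the markup, not a retraining of the voice.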