I'm trying to implement speech recognition with Azure in an iOS project using Swift, and I've run into a problem: the function that stops recognition (stopContinuousRecognition()) blocks the app's UI for a few seconds, yet there is no memory pressure, CPU load, or leak. I tried moving the call into DispatchQueue.main.async {}, but that made no difference. Has anyone hit this? Is it necessary to run it on a separate thread, and why does the function take so long to complete?
Edit: it's hard to provide a complete working example, but basically I call this function on a button press:
private func startListenAzureRecognition(lang: String) {
    let audioFormat = SPXAudioStreamFormat.init(usingPCMWithSampleRate: 8000, bitsPerSample: 16, channels: 1)
    azurePushAudioStream = SPXPushAudioInputStream(audioFormat: audioFormat!)
    let audioConfig = SPXAudioConfiguration(streamInput: azurePushAudioStream!)!
    var speechConfig: SPXSpeechConfiguration?
    do {
        let sub = "enter your subscription key here"
        let region = "enter your region here"
        speechConfig = try SPXSpeechConfiguration(subscription: sub, region: region)
        speechConfig!.enableDictation()
        speechConfig?.speechRecognitionLanguage = lang
    } catch {
        print("error \(error) happened")
        speechConfig = nil
    }
    self.azureRecognition = try! SPXSpeechRecognizer(speechConfiguration: speechConfig!, audioConfiguration: audioConfig)
    self.azureRecognition!.addRecognizingEventHandler { reco, evt in
        if let text = evt.result.text, !text.isEmpty {
            print(text)
        }
    }
    self.azureRecognition!.addRecognizedEventHandler { reco, evt in
        if let text = evt.result.text, !text.isEmpty {
            print(text)
        }
    }
    do {
        try self.azureRecognition?.startContinuousRecognition()
    } catch {
        print("error \(error) happened")
    }
}
When I press the button again to stop recognition, I call this function:
private func stopListenAzureRecognition() {
    DispatchQueue.main.async {
        print("start")
        // app blocks here
        try! self.azureRecognition?.stopContinuousRecognition()
        self.azurePushAudioStream!.close()
        self.azureRecognition = nil
        self.azurePushAudioStream = nil
        print("stop")
    }
}
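Note that DispatchQueue.main.async does not move the work off the UI thread: the main queue is the UI thread, so a slow call dispatched there still freezes the interface. The usual pattern is to run blocking work on a background queue instead. A minimal Foundation-only sketch, where blockingStop() is a hypothetical stand-in for the slow SDK call:

```swift
import Foundation

// Hypothetical stand-in for the blocking SDK call
// (stopContinuousRecognition() can take seconds to return).
func blockingStop() {
    Thread.sleep(forTimeInterval: 0.2)
}

let done = DispatchSemaphore(value: 0)
var ranOffMain = false

// Run the blocking work on a background queue, so the main
// (UI) thread stays responsive while it completes.
DispatchQueue.global(qos: .userInitiated).async {
    ranOffMain = !Thread.isMainThread
    blockingStop()
    done.signal()
}

done.wait()
print(ranOffMain)   // prints true
```

In the real function, any UI updates after the stop (button state, spinners) would then be dispatched back with DispatchQueue.main.async from inside the background block.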
Also, I'm feeding the recognizer raw audio data from the microphone (recognizeOnce worked perfectly in an earlier stage, so the audio data itself is fine).
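Since the recognizer is fed through a push stream, the raw microphone samples have to be packed as bytes before being written to it. A minimal sketch of that packing, assuming 16-bit mono PCM at 8 kHz to match the SPXAudioStreamFormat above; the pcmData(from:) helper is hypothetical, not part of the SDK:

```swift
import Foundation

// Hypothetical helper: pack 16-bit PCM samples into Data, the shape a
// push stream's write method expects. Assumes native little-endian
// samples, matching the 8 kHz / 16-bit / mono format configured above.
func pcmData(from samples: [Int16]) -> Data {
    samples.withUnsafeBufferPointer { Data(buffer: $0) }
}

let silence = [Int16](repeating: 0, count: 160)   // 20 ms at 8 kHz
let chunk = pcmData(from: silence)
print(chunk.count)   // 320 bytes: 160 samples × 2 bytes each
```

The resulting Data is what would be handed to the push stream for each captured audio chunk.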
Posted on 2021-12-14 11:30:47
Try closing the stream first, and only then stopping continuous recognition:
azurePushAudioStream!.close()
try! azureRecognition?.stopContinuousRecognition()
azureRecognition = nil
azurePushAudioStream = nil
You don't even need to do it asynchronously.
At least that worked for me.
https://stackoverflow.com/questions/70153518