Which packages can I use to create an app that handles speech-to-text?
Among other things, it should support continuous listening.
So far I have found speech_recognition, but it says:
"The iOS API sends intermediate results; on my Android device, only the final transcription is received. Other limitations: on iOS, the plugin is configured by default for French, English, Russian, Spanish, and Italian. On Android, without additional installation, it will most likely only work with the default device locale."
Has anyone tested this package with good results? Or do you have other suggestions?
Posted on 2020-03-28 08:16:52
I am currently using speech_to_text. It is actively maintained and works quite well. I think you can write some custom code to make it listen continuously.
Edit:
As requested, the continuous-listening logic is below. I only used it as a proof of concept, so I would not recommend it for production apps. As far as I know, Android does not support continuous listening natively, so this restarts a new session whenever the previous one finishes.
SpeechRecognitionBloc
import 'package:bloc/bloc.dart';
import 'package:meta/meta.dart';
import 'package:template_mobile/core/sevices/speech_recognition_service.dart';
import 'package:template_mobile/core/state/event/speech_recognition_event.dart';
import 'package:template_mobile/core/state/state/speech_recognition_state.dart';

class SpeechRecognitionBloc
    extends Bloc<SpeechRecognitionEvent, SpeechRecognitionState> {
  final SpeechRecognitionService speechRecognitionService;

  SpeechRecognitionBloc({
    @required this.speechRecognitionService,
  }) : assert(speechRecognitionService != null) {
    // Forward recognition errors from the service into the bloc as events.
    speechRecognitionService.errors.stream.listen((errorResult) {
      add(SpeechRecognitionErrorEvent(
        error: "${errorResult.errorMsg} - ${errorResult.permanent}",
      ));
    });
    // When the platform status changes after a final result,
    // trigger the restart cycle.
    speechRecognitionService.statuses.stream.listen((status) {
      if (state is SpeechRecognitionRecognizedState) {
        var currentState = state as SpeechRecognitionRecognizedState;
        if (currentState.finalResult) {
          add(SpeechRecognitionStatusChangedEvent());
        }
      }
    });
    // Forward recognized words into the bloc as events.
    speechRecognitionService.words.stream.listen((speechResult) {
      add(SpeechRecognitionRecognizedEvent(
        words: speechResult.recognizedWords,
        finalResult: speechResult.finalResult,
      ));
    });
  }

  @override
  SpeechRecognitionState get initialState =>
      SpeechRecognitionUninitializedState();

  @override
  Stream<SpeechRecognitionState> mapEventToState(
      SpeechRecognitionEvent event) async* {
    if (event is SpeechRecognitionInitEvent) {
      var hasSpeech = await speechRecognitionService.initSpeech();
      if (hasSpeech) {
        yield SpeechRecognitionAvailableState();
      } else {
        yield SpeechRecognitionUnavailableState();
      }
    }
    if (event is SpeechRecognitionStartPressEvent) {
      yield SpeechRecognitionStartPressedState();
      add(SpeechRecognitionStartEvent());
    }
    if (event is SpeechRecognitionStartEvent) {
      speechRecognitionService.startListening();
      yield SpeechRecognitionStartedState();
    }
    if (event is SpeechRecognitionStopPressEvent) {
      yield SpeechRecognitionStopPressedState();
      add(SpeechRecognitionStopEvent());
    }
    if (event is SpeechRecognitionStopEvent) {
      speechRecognitionService.stopListening();
      yield SpeechRecognitionStopedState();
    }
    if (event is SpeechRecognitionCancelEvent) {
      speechRecognitionService.cancelListening();
      yield SpeechRecognitionCanceledState();
    }
    if (event is SpeechRecognitionRecognizedEvent) {
      yield SpeechRecognitionRecognizedState(
          words: event.words, finalResult: event.finalResult);
      if (event.finalResult == true &&
          speechRecognitionService.statuses.value == 'notListening') {
        // Brief delay so the UI can render the recognized state
        // before the restart cycle kicks in.
        await Future.delayed(Duration(milliseconds: 50));
        add(SpeechRecognitionStatusChangedEvent());
      }
    }
    if (event is SpeechRecognitionErrorEvent) {
      yield SpeechRecognitionErrorState(error: event.error);
      // Short delays just so the UI updates as the states propagate,
      // then re-initialize and restart listening.
      await Future.delayed(Duration(milliseconds: 50));
      add(SpeechRecognitionInitEvent());
      await Future.delayed(Duration(milliseconds: 50));
      add(SpeechRecognitionStartPressEvent());
    }
    if (event is SpeechRecognitionStatusChangedEvent) {
      yield SpeechRecognitionStatusState();
      // Restart listening to emulate continuous recognition.
      add(SpeechRecognitionStartPressEvent());
    }
  }
}

SpeechRecognitionService
import 'dart:async';
import 'package:rxdart/rxdart.dart';
import 'package:speech_to_text/speech_recognition_error.dart';
import 'package:speech_to_text/speech_recognition_result.dart';
import 'package:speech_to_text/speech_to_text.dart';

class SpeechRecognitionService {
  final SpeechToText speech = SpeechToText();

  // Streams the bloc subscribes to; statuses is a BehaviorSubject so the
  // latest status can also be read synchronously via .value.
  var errors = StreamController<SpeechRecognitionError>();
  var statuses = BehaviorSubject<String>();
  var words = StreamController<SpeechRecognitionResult>();

  var _localeId = '';

  Future<bool> initSpeech() async {
    bool hasSpeech = await speech.initialize(
      onError: errorListener,
      onStatus: statusListener,
    );
    if (hasSpeech) {
      var systemLocale = await speech.systemLocale();
      _localeId = systemLocale.localeId;
    }
    return hasSpeech;
  }

  void startListening() {
    // Stop any session that is still running before starting a new one.
    speech.stop();
    speech.listen(
        onResult: resultListener,
        listenFor: Duration(minutes: 1),
        localeId: _localeId,
        onSoundLevelChange: null,
        cancelOnError: true,
        partialResults: true);
  }

  void errorListener(SpeechRecognitionError error) {
    errors.add(error);
  }

  void statusListener(String status) {
    statuses.add(status);
  }

  void resultListener(SpeechRecognitionResult result) {
    words.add(result);
  }

  void stopListening() {
    speech.stop();
  }

  void cancelListening() {
    speech.cancel();
  }
}
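For completeness, wiring the service into the bloc might look roughly like the sketch below. The bloc's own import path and the event class definitions are not shown in this answer, so treat those names as assumptions based on the imports above.

// Minimal wiring sketch; SpeechRecognitionBloc's import path and the event
// classes are assumed from the imports shown earlier in this answer.
import 'package:template_mobile/core/sevices/speech_recognition_service.dart';
import 'package:template_mobile/core/state/event/speech_recognition_event.dart';

void main() {
  final bloc = SpeechRecognitionBloc(
    speechRecognitionService: SpeechRecognitionService(),
  );

  // Initialize the recognizer; the bloc yields either
  // SpeechRecognitionAvailableState or SpeechRecognitionUnavailableState.
  bloc.add(SpeechRecognitionInitEvent());

  // In the UI, a mic button press would then add:
  bloc.add(SpeechRecognitionStartPressEvent());
}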
Posted on 2019-10-04 11:42:37
speech_recognition is the best choice at the moment. It is based on the native SpeechRecognizer and provides offline speech-to-text.
Continuous listening is not possible. Even the paid online cloud speech-to-text APIs do not allow it, because it would be dangerous (misuse, etc.).
On iOS, the plugin is configured by default for French, English, Russian, Spanish, and Italian, but you can add the missing languages in the Swift source file.
So in the end you will not find a better speech recognition plugin, even if it is not perfect.
Posted on 2022-09-04 05:49:47
A simpler solution that uses version 5.6.1 of the Flutter speech_to_text library and does not use the bloc library from the previous answer.
Basically, whenever statusListener is called with the done status, we call the listen method again.
main.dart
import 'package:flutter/material.dart';
import 'package:speech_to_text/speech_recognition_error.dart';
import 'package:speech_to_text/speech_recognition_result.dart';
import 'package:speech_to_text/speech_to_text.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      title: 'Flutter Demo',
      home: MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({Key? key}) : super(key: key);

  @override
  MyHomePageState createState() => MyHomePageState();
}

class MyHomePageState extends State<MyHomePage> {
  final SpeechToText _speechToText = SpeechToText();
  bool _speechEnabled = false;
  bool _speechAvailable = false;
  String _lastWords = '';
  String _currentWords = '';
  final String _selectedLocaleId = 'es_MX';

  printLocales() async {
    var locales = await _speechToText.locales();
    for (var local in locales) {
      debugPrint(local.name);
      debugPrint(local.localeId);
    }
  }

  @override
  void initState() {
    super.initState();
    _initSpeech();
  }

  void errorListener(SpeechRecognitionError error) {
    debugPrint(error.errorMsg.toString());
  }

  void statusListener(String status) async {
    debugPrint("status $status");
    if (status == "done" && _speechEnabled) {
      setState(() {
        _lastWords += " $_currentWords";
        _currentWords = "";
        _speechEnabled = false;
      });
      await _startListening();
    }
  }

  /// This has to happen only once per app
  void _initSpeech() async {
    _speechAvailable = await _speechToText.initialize(
        onError: errorListener, onStatus: statusListener);
    setState(() {});
  }

  /// Each time to start a speech recognition session
  Future _startListening() async {
    debugPrint("=================================================");
    await _stopListening();
    await Future.delayed(const Duration(milliseconds: 50));
    await _speechToText.listen(
        onResult: _onSpeechResult,
        localeId: _selectedLocaleId,
        cancelOnError: false,
        partialResults: true,
        listenMode: ListenMode.dictation);
    setState(() {
      _speechEnabled = true;
    });
  }

  /// Manually stop the active speech recognition session
  /// Note that there are also timeouts that each platform enforces
  /// and the SpeechToText plugin supports setting timeouts on the
  /// listen method.
  Future _stopListening() async {
    setState(() {
      _speechEnabled = false;
    });
    await _speechToText.stop();
  }

  /// This is the callback that the SpeechToText plugin calls when
  /// the platform returns recognized words.
  void _onSpeechResult(SpeechRecognitionResult result) {
    setState(() {
      _currentWords = result.recognizedWords;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('Speech Demo'),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Container(
              padding: const EdgeInsets.all(16),
              child: const Text(
                'Recognized words:',
                style: TextStyle(fontSize: 20.0),
              ),
            ),
            Expanded(
              child: Container(
                padding: const EdgeInsets.all(16),
                child: Text(
                  _lastWords.isNotEmpty
                      ? '$_lastWords $_currentWords'
                      : _speechAvailable
                          ? 'Tap the microphone to start listening...'
                          : 'Speech not available',
                ),
              ),
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed:
            _speechToText.isNotListening ? _startListening : _stopListening,
        tooltip: 'Listen',
        child: Icon(_speechToText.isNotListening ? Icons.mic_off : Icons.mic),
      ),
    );
  }
}
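One caveat: nothing in this example stops the recognizer when the widget goes away, so the done-status handler keeps restarting sessions. A possible dispose override for MyHomePageState, as a sketch (my addition, not part of the original answer):

  // Hypothetical addition: stop the restart loop when the widget is disposed.
  @override
  void dispose() {
    // Clear the flag first so statusListener("done") does not restart listening.
    _speechEnabled = false;
    _speechToText.cancel();
    super.dispose();
  }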
https://stackoverflow.com/questions/58060889