
Python voice chatbot with intelligent dialogue, compatible with both Linux and Raspberry Pi

十四君 · Published 2019-11-28 · Column: Urlteam

Project overview: Baidu Speech is used for speech-to-Chinese-text recognition and for speech synthesis, and the intelligent dialogue is handled by the Turing robot API. For recording, the Linux build uses the PyAudio module; the Raspberry Pi build uses arecord instead, because PyAudio has compatibility problems there. The final code is about 150 lines and is published on GitHub: https://github.com/luyishisi/python_yuyinduihua

1. Environment setup

This step is critical: most of the problems later on come from environment incompatibilities.

1.1: Linux version

Environment

Python

# -*- coding: utf-8 -*-
from pyaudio import PyAudio, paInt16
import numpy as np
from datetime import datetime
import wave
import time
import urllib, urllib2, pycurl
import base64
import json
import os
import sys
reload(sys)
sys.setdefaultencoding("utf-8")

This environment is the easiest one to set up. Commands along the lines of

apt-get install python-wave*

take care of it. Installing a module mostly comes down to finding the right package name; I usually take a word that is sure to appear in the name and append * for fuzzy matching.

If you are unsure how to install a particular module, a quick search will sort it out. You also need mpg123, which is used for playback; the sketch below shows how the finished script calls it.
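A minimal sketch (mine, not lifted verbatim from the project) of how the finished script plays Baidu's synthesized reply through mpg123; the text and token values here are placeholders:

Python

# -*- coding: utf-8 -*-
import os
import urllib

text = "你好"                    # reply text returned by the Turing robot (placeholder)
token = "YOUR_BAIDU_TOKEN"       # placeholder: obtained from Baidu's OAuth token endpoint
url = ("http://tsn.baidu.com/text2audio?tex=" + urllib.quote(text) +
       "&lan=zh&per=0&pit=1&spd=7&cuid=7519663&ctp=1&tok=" + token)
os.system('mpg123 "%s"' % url)   # mpg123 streams and plays the returned MP3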

1.2: Raspberry Pi version

If you hit the errors described in the post linked below, cut your losses: switch to command-line recording and stop fighting with PyAudio.

Raspberry Pi PyAudio recording

Shell

## update the package lists first
sudo apt-get update
sudo apt-get upgrade
## install the necessary tools
sudo apt-get -y install alsa-utils alsa-tools alsa-tools-gui alsamixergui

The main tools

To adjust the speaker volume from the terminal, just run alsamixer. This matters: the capture volume of your recording device has to be set here, and it also lets you see at a glance whether the sound card itself has problems.

The recording device I used is this one: https://item.taobao.com/item.htm?spm=a1z10.5-c.w4002-3667091491.40.mktumv&id=41424706506

Recording is done with the arecord command.

arecord and aplay are the command-line recording and playback tools for the ALSA sound-card driver. arecord is the command-line ALSA recorder and supports multiple file formats and multiple sound cards; aplay is the command-line playback tool and also supports multiple file formats.

Command syntax (worth reading through; the -d, -f and -r options are the main ones used here):

Shell

arecord [flags] [filename]
aplay [flags] [filename [filename]] ...
Options:
    -h, --help                 show help.
    --version                  print version information.
    -l, --list-devices         list all sound cards and digital audio devices.
    -L, --list-pcms            list all PCM definitions.
    -D, --device=NAME          select the PCM device by name.
    -q, --quiet                quiet mode.
    -t, --file-type TYPE       file type (voc, wav, raw or au).
    -c, --channels=#           set the number of channels.
    -f, --format=FORMAT        set the sample format. Formats include: S8 U8 S16_LE S16_BE U16_LE
                               U16_BE S24_LE S24_BE U24_LE U24_BE S32_LE S32_BE U32_LE U32_BE
                               FLOAT_LE FLOAT_BE FLOAT64_LE FLOAT64_BE IEC958_SUBFRAME_LE
                               IEC958_SUBFRAME_BE MU_LAW A_LAW IMA_ADPCM MPEG GSM
    -r, --rate=#<Hz>           set the sampling rate.
    -d, --duration=#           set the duration in seconds.
    -s, --sleep-min=#          set the minimum sleep time.
    -M, --mmap                 use an mmap'ed stream.
    -N, --nonblock             set non-blocking mode.
    -B, --buffer-time=#        buffer duration, in microseconds.
    -v, --verbose              show PCM structure and settings.
    -I, --separate-channels    write one file per channel.

Examples:

Shell

aplay -c 1 -t raw -r 22050 -f mu_law foobar
    # play the raw file foobar at 22050 Hz, mono, 8-bit, mu-law format
arecord -d 10 -f cd -t wav -D copy foobar.wav
    # record foobar.wav for 10 seconds at CD quality, using the PCM device named "copy"

2: Baidu speech synthesis and recognition

This part is not difficult; the test code is below. If anything is unclear, see my earlier post:

Calling the Baidu speech recognition API from Python

Python

# Baidu speech recognition test: posts a recorded WAV file and prints the result
# encoding=utf-8
import wave
import urllib, urllib2, pycurl
import base64
import json

## get access token by api key & secret key
## fill in your own apiKey and secretKey
def get_token():
    apiKey = "Ll0c53MSac6GBOtpg22ZSGAU"
    secretKey = "44c8af396038a24e34936227d4a19dc2"
    auth_url = "https://openapi.baidu.com/oauth/2.0/token?grant_type=client_credentials&client_id=" + apiKey + "&client_secret=" + secretKey
    res = urllib2.urlopen(auth_url)
    json_data = res.read()
    return json.loads(json_data)['access_token']

def dump_res(buf):
    print (buf)

## post audio to server
def use_cloud(token):
    fp = wave.open('2.wav', 'rb')   ## a pre-recorded audio clip
    nf = fp.getnframes()
    f_len = nf * 2
    audio_data = fp.readframes(nf)

    cuid = "7519663"  # your product id
    srv_url = 'http://vop.baidu.com/server_api' + '?cuid=' + cuid + '&token=' + token
    http_header = [
        'Content-Type: audio/pcm; rate=8000',
        'Content-Length: %d' % f_len
    ]

    c = pycurl.Curl()
    c.setopt(pycurl.URL, str(srv_url))  # curl doesn't support unicode
    #c.setopt(c.RETURNTRANSFER, 1)
    c.setopt(c.HTTPHEADER, http_header)   # must be list, not dict
    c.setopt(c.POST, 1)
    c.setopt(c.CONNECTTIMEOUT, 30)
    c.setopt(c.TIMEOUT, 30)
    c.setopt(c.WRITEFUNCTION, dump_res)
    c.setopt(c.POSTFIELDS, audio_data)
    c.setopt(c.POSTFIELDSIZE, f_len)
    c.perform()  # pycurl.perform() has no return val

if __name__ == "__main__":
    token = get_token()   # get the access token
    use_cloud(token)      # run recognition; the result is printed inside dump_res

3: Turing robot

Official site: http://www.tuling123.com/

Test code for the Turing robot part

This part is easy. Register an account, use the key and API endpoint they give you, and the rest is just extracting the text field from the JSON response.

Python

# -*- coding: utf-8 -*-
import urllib
import json

def getHtml(url):
    page = urllib.urlopen(url)
    html = page.read()
    return html

if __name__ == '__main__':
    key = '05ba411481c8cfa61b91124ef7389767'
    api = 'http://www.tuling123.com/openapi/api?key=' + key + '&info='
    while True:
        info = raw_input('我: ')
        request = api + info
        response = getHtml(request)
        dic_json = json.loads(response)
        print '机器人: '.decode('utf-8') + dic_json['text']

4: Audio processing with PyAudio on Linux

On a normal PC this part is straightforward as long as the environment is in order. The code lives in the full source listing; here is a brief explanation.

The snippet below is not runnable on its own (it is in the full source), but it is worth pulling out here to aid understanding.

The pa object is a PyAudio instance. Each block of samples is read and its peak amplitude is checked; once the peak exceeds the threshold (2000 in the code below), recording starts, with an additional 5-second time limit.

Python

NUM_SAMPLES = 2000      # block size of pyAudio's internal buffer
SAMPLING_RATE = 8000    # sampling rate
LEVEL = 1500            # threshold above which a sample counts as "sound"
COUNT_NUM = 20          # record if COUNT_NUM samples out of NUM_SAMPLES exceed LEVEL
SAVE_LENGTH = 8         # minimum recording length: SAVE_LENGTH * NUM_SAMPLES samples

# open the audio input stream
pa = PyAudio()
stream = pa.open(format=paInt16, channels=1, rate=SAMPLING_RATE, input=True,
                 frames_per_buffer=NUM_SAMPLES)

string_audio_data = stream.read(NUM_SAMPLES)
# convert the raw bytes into an array
audio_data = np.fromstring(string_audio_data, dtype=np.short)
# count the samples above LEVEL
large_sample_count = np.sum(audio_data > LEVEL)

temp = np.max(audio_data)
if temp > 2000 and t == 0:
    t = 1  # start recording
    print "检测到信号,开始录音,计时五秒"
    begin = time.time()
    print temp

5: Recording with arecord on the Raspberry Pi

This section mainly collects reference material. If the command below runs successfully on your Raspberry Pi, you are fine; the rest is background gathered along the way.

sudo arecord -D "plughw:1,0" -d 5 f1.wav

Option meanings: -D selects the device. An external USB device is plughw:1,0 and the on-board device would be plughw:0,0, but the Raspberry Pi has no built-in recording hardware, so there is no internal capture device. -d 5

means record for 5 seconds; without it, arecord keeps recording until you stop it with Ctrl+C. The resulting file is named f1.wav.

Baidu's speech API expects 16-bit samples, so you also need to set -f (see the sketch just below).
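As a quick illustration grounded in the final script, this is how the Raspberry Pi version invokes arecord from Python with the options discussed here (-D for the device, -f S16_LE for 16-bit samples, -r 8000 for the rate sent to Baidu, -d 5 for five seconds); the output path is only an example:

Python

import os

# record 5 seconds of 16-bit, 8 kHz mono audio from the external sound card
# (plughw:1,0) into 2.wav; adjust the path and device for your own setup
os.system('arecord -D "plughw:1,0" -f S16_LE -r 8000 -d 5 2.wav')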

The PCM sample formats are explained below:

These are all ways of representing the PCM value range: the minimum values are equivalent to one another, the maximum values are equivalent, and the levels in between scale accordingly, so every format can be mapped onto the range -1 to 1.

  • S8:     signed 8 bits, i.e. char, range -128~127
  • U8:     unsigned 8 bits, i.e. unsigned char, range 0~255
  • S16_LE: little-endian signed 16 bits, i.e. short, range -32768~32767
  • S16_BE: big-endian signed 16 bits, i.e. byte-swapped short (PPC), range -32768~32767
  • U16_LE: little-endian unsigned 16 bits, i.e. unsigned short, range 0~65535
  • U16_BE: big-endian unsigned 16 bits, i.e. byte-swapped unsigned short (PPC), range 0~65535
  • There are also S24_LE, S32_LE and so on; PCM data can be stored in any of these representations.
  • For all of the formats above, every minimum value (-128, 0, -32768, -32768, 0, 0) means the same thing in PCM terms, the minimum level, which quantises to -1.0 as a float; every maximum value quantises to 1.0; the values in between scale proportionally.

PCMU presumably means unsigned PCM, covering U8, U16_LE, U16_BE, … PCMA presumably means signed PCM, covering S8, S16_LE, S16_BE, …
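As a concrete illustration of that mapping (my own sketch, assuming a little-endian machine such as the Raspberry Pi), here is how a block of S16_LE samples read from the stream can be scaled into the -1 to 1 range with numpy:

Python

import numpy as np

# a raw block of S16_LE bytes, e.g. string_audio_data = stream.read(NUM_SAMPLES);
# these six bytes encode the samples 0, 32767 and -32768
raw = '\x00\x00\xff\x7f\x00\x80'

samples = np.fromstring(raw, dtype=np.short)  # same call used in the post's Python 2 code
floats = samples / 32768.0                    # -32768 -> -1.0, 32767 -> +0.99997
print floats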

Checking the sound cards

Shell

cat /proc/asound/cards
cat /proc/asound/modules

6: Putting it all together and debugging on Linux

The full source is below; explanations are in the comments.

Python

# -*- coding: utf-8 -*-
from pyaudio import PyAudio, paInt16
import numpy as np
from datetime import datetime
import wave
import time
import urllib, urllib2, pycurl
import base64
import json
import os
import sys
reload(sys)
sys.setdefaultencoding("utf-8")

# some global variables
save_count = 0
save_buffer = []
t = 0
sum = 0
time_flag = 0
flag_num = 0
filename = ''
duihua = '1'

def getHtml(url):
    page = urllib.urlopen(url)
    html = page.read()
    return html

def get_token():
    apiKey = "Ll0c53MSac6GBOtpg22ZSGAU"
    secretKey = "44c8af396038a24e34936227d4a19dc2"
    auth_url = "https://openapi.baidu.com/oauth/2.0/token?grant_type=client_credentials&client_id=" + apiKey + "&client_secret=" + secretKey
    res = urllib2.urlopen(auth_url)
    json_data = res.read()
    return json.loads(json_data)['access_token']

def dump_res(buf):  # handle the Baidu speech recognition result
    global duihua
    print "字符串类型"
    print (buf)
    a = eval(buf)
    print type(a)
    if a['err_msg'] == 'success.':
        # the recognised sentence is in result[0]
        duihua = a['result'][0]
        print duihua

def use_cloud(token):  # send the recording to Baidu for recognition
    fp = wave.open(filename, 'rb')
    nf = fp.getnframes()
    f_len = nf * 2
    audio_data = fp.readframes(nf)

    cuid = "7519663"  # product id
    srv_url = 'http://vop.baidu.com/server_api' + '?cuid=' + cuid + '&token=' + token
    http_header = [
        'Content-Type: audio/pcm; rate=8000',
        'Content-Length: %d' % f_len
    ]

    c = pycurl.Curl()
    c.setopt(pycurl.URL, str(srv_url))  # curl doesn't support unicode
    #c.setopt(c.RETURNTRANSFER, 1)
    c.setopt(c.HTTPHEADER, http_header)   # must be list, not dict
    c.setopt(c.POST, 1)
    c.setopt(c.CONNECTTIMEOUT, 30)
    c.setopt(c.TIMEOUT, 30)
    c.setopt(c.WRITEFUNCTION, dump_res)
    c.setopt(c.POSTFIELDS, audio_data)
    c.setopt(c.POSTFIELDSIZE, f_len)
    c.perform()  # pycurl.perform() has no return val

# save the data to a WAV file named filename
def save_wave_file(filename, data):
    wf = wave.open(filename, 'wb')
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(SAMPLING_RATE)
    wf.writeframes("".join(data))
    wf.close()

NUM_SAMPLES = 2000      # block size of pyAudio's internal buffer
SAMPLING_RATE = 8000    # sampling rate
LEVEL = 1500            # threshold above which a sample counts as "sound"
COUNT_NUM = 20          # record if COUNT_NUM samples out of NUM_SAMPLES exceed LEVEL
SAVE_LENGTH = 8         # minimum recording length: SAVE_LENGTH * NUM_SAMPLES samples

# open the PyAudio input stream
pa = PyAudio()
stream = pa.open(format=paInt16, channels=1, rate=SAMPLING_RATE, input=True,
                 frames_per_buffer=NUM_SAMPLES)

token = get_token()  # get the Baidu access token
key = '05ba411481c8cfa61b91124ef7389767'  # Turing robot key and API endpoint
api = 'http://www.tuling123.com/openapi/api?key=' + key + '&info='

while True:
    # read NUM_SAMPLES samples
    string_audio_data = stream.read(NUM_SAMPLES)
    # convert the raw bytes into an array
    audio_data = np.fromstring(string_audio_data, dtype=np.short)
    # count the samples above LEVEL
    large_sample_count = np.sum(audio_data > LEVEL)

    temp = np.max(audio_data)
    if temp > 2000 and t == 0:
        t = 1  # start recording
        print "检测到信号,开始录音,计时五秒"
        begin = time.time()
        print temp
    if t:
        print np.max(audio_data)
        if np.max(audio_data) < 1000:
            sum += 1
            print sum
        end = time.time()
        if end - begin > 5:
            time_flag = 1
            print "五秒到了,准备结束"
        # if enough loud samples were seen, keep at least SAVE_LENGTH more blocks
        if large_sample_count > COUNT_NUM:
            save_count = SAVE_LENGTH
        else:
            save_count -= 1

        if save_count < 0:
            save_count = 0

        if save_count > 0:
            # append the block to save_buffer
            save_buffer.append(string_audio_data)
        else:
            # write save_buffer out to a WAV file
            #if time_flag:
            if len(save_buffer) > 0 or time_flag:
                #filename = datetime.now().strftime("%Y-%m-%d_%H_%M_%S") + ".wav"  # originally named by timestamp
                filename = str(flag_num) + ".wav"
                flag_num += 1
                save_wave_file(filename, save_buffer)
                save_buffer = []
                t = 0
                sum = 0
                time_flag = 0
                print filename, "保存成功正在进行语音识别"
                use_cloud(token)
                print duihua
                info = duihua
                duihua = ""
                request = api + info
                response = getHtml(request)
                dic_json = json.loads(response)

                #print '机器人: '.decode('utf-8') + dic_json['text']  # character encoding is the tricky part
                #huida = ' '.decode('utf-8') + dic_json['text']
                a = dic_json['text']
                print type(a)
                unicodestring = a
                # encode the unicode object into a plain utf-8 string
                utf8string = unicodestring.encode("utf-8")
                print type(utf8string)
                print str(a)
                url = "http://tsn.baidu.com/text2audio?tex=" + dic_json['text'] + "&lan=zh&per=0&pit=1&spd=7&cuid=7519663&ctp=1&tok=24.a5f341cf81c523356c2307b35603eee6.2592000.1464423912.282335-7519663"
                os.system('mpg123 "%s"' % (url))  # play the reply with mpg123

7: Main bugs and pitfalls

Apart from environment issues, the main pitfalls are Chinese character encoding and parsing the response object. The value returned by Baidu speech recognition is a dictionary: some fields are plain strings while others are arrays. First read the err_msg string to check that it is "success.", then read the Chinese text out of the result array.
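As an illustration, the same check can be written with json.loads instead of the eval call used in the code above (the response body is JSON text, so either works); this is a sketch, not the project's exact function:

Python

import json

def dump_res(buf):
    # parse the Baidu recognition response: check err_msg first, then read result[0]
    global duihua
    a = json.loads(buf)
    if a.get('err_msg') == 'success.':
        duihua = a['result'][0]   # the recognised Chinese sentence
        print duihua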

The other bug is Chinese encoding, which is handled like this:

Python

import sys
reload(sys)
sys.setdefaultencoding("utf-8")

# and also:
#print '机器人: '.decode('utf-8') + dic_json['text']
#huida = ' '.decode('utf-8') + dic_json['text']
a = dic_json['text']
print type(a)
unicodestring = a
# encode the unicode object into a plain utf-8 byte string
utf8string = unicodestring.encode("utf-8")

After porting to the Raspberry Pi, the main problem was arecord reporting that the file or directory could not be found. That means you have selected the wrong sound card; the same goes for recordings that come out far too quiet. Sort both out with alsamixer.

There is also the matter of recognition accuracy. Baidu has its own requirements, so the recording must be 16-bit. Then listen back to the recording: check whether the volume is too high and whether there is any harsh noise. It is best to test each piece separately.
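To test the pieces separately, as suggested above, one simple approach (a sketch, not part of the original code) is to record a short clip with the same options the project uses and play it straight back with aplay before involving Baidu at all:

Python

import os

# record five seconds, then listen to it to judge volume and noise
os.system('arecord -D "plughw:1,0" -f S16_LE -r 8000 -d 5 test.wav')
os.system('aplay test.wav')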

8: Source code for the Raspberry Pi

PyAudio threw so many errors on the Pi that I gave up on it and worked around the problem: recording is done with the arecord command, which Python then invokes. The code is much shorter, but it loses the ability to process the waveform in real time.

Python

# -*- coding: utf-8 -*-
from pyaudio import PyAudio, paInt16   # left over from the Linux version; not used here
import numpy as np
from datetime import datetime
import wave
import time
import urllib, urllib2, pycurl
import base64
import json
import os
import sys
reload(sys)
sys.setdefaultencoding("utf-8")

save_count = 0
save_buffer = []
t = 0
sum = 0
time_flag = 0
flag_num = 0
filename = '2.wav'
duihua = '1'

def getHtml(url):
    page = urllib.urlopen(url)
    html = page.read()
    return html

def get_token():
    apiKey = "Ll0c53MSac6GBOtpg22ZSGAU"
    secretKey = "44c8af396038a24e34936227d4a19dc2"
    auth_url = "https://openapi.baidu.com/oauth/2.0/token?grant_type=client_credentials&client_id=" + apiKey + "&client_secret=" + secretKey
    res = urllib2.urlopen(auth_url)
    json_data = res.read()
    return json.loads(json_data)['access_token']

def dump_res(buf):
    global duihua
    print "字符串类型"
    print (buf)
    a = eval(buf)
    print type(a)
    if a['err_msg'] == 'success.':
        # the recognised sentence is in result[0]
        duihua = a['result'][0]
        print duihua

def use_cloud(token):
    fp = wave.open(filename, 'rb')
    nf = fp.getnframes()
    f_len = nf * 2
    audio_data = fp.readframes(nf)

    cuid = "7519663"  # product id
    srv_url = 'http://vop.baidu.com/server_api' + '?cuid=' + cuid + '&token=' + token
    http_header = [
        'Content-Type: audio/pcm; rate=8000',
        'Content-Length: %d' % f_len
    ]

    c = pycurl.Curl()
    c.setopt(pycurl.URL, str(srv_url))  # curl doesn't support unicode
    #c.setopt(c.RETURNTRANSFER, 1)
    c.setopt(c.HTTPHEADER, http_header)   # must be list, not dict
    c.setopt(c.POST, 1)
    c.setopt(c.CONNECTTIMEOUT, 30)
    c.setopt(c.TIMEOUT, 30)
    c.setopt(c.WRITEFUNCTION, dump_res)
    c.setopt(c.POSTFIELDS, audio_data)
    c.setopt(c.POSTFIELDSIZE, f_len)
    c.perform()  # pycurl.perform() has no return val

# save the data to a WAV file named filename (kept from the Linux version; unused here)
def save_wave_file(filename, data):
    wf = wave.open(filename, 'wb')
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(8000)
    wf.writeframes("".join(data))
    wf.close()

token = get_token()
key = '05ba411481c8cfa61b91124ef7389767'
api = 'http://www.tuling123.com/openapi/api?key=' + key + '&info='

while(True):
    os.system('arecord -D "plughw:1,0" -f S16_LE -d 5 -r 8000 /home/luyi/yuyinduihua/2.wav')
    use_cloud(token)
    print duihua
    info = duihua
    duihua = ""
    request = api + info
    response = getHtml(request)
    dic_json = json.loads(response)

    a = dic_json['text']
    print type(a)
    unicodestring = a
    # encode the unicode object into a plain utf-8 string
    utf8string = unicodestring.encode("utf-8")
    print type(utf8string)
    print str(a)
    url = "http://tsn.baidu.com/text2audio?tex=" + dic_json['text'] + "&lan=zh&per=0&pit=1&spd=7&cuid=7519663&ctp=1&tok=24.a5f341cf81c523356c2307b35603eee6.2592000.1464423912.282335-7519663"
    os.system('mpg123 "%s"' % (url))

Originally published 2016-05-04 on the author's personal site/blog.
