I am trying to run the BigScience BLOOM AI model on my MacBook M1 Max (64 GB), with a fresh install of PyTorch running on the M1 chip and Python 3.10.6. I am not getting any output at all. I have the same problem with other AI models, and I honestly don't know how I am supposed to fix it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pick the best available device: Apple's MPS backend, then CUDA, then CPU
device = "mps" if torch.backends.mps.is_available() else "cpu"
if device == "cpu" and torch.cuda.is_available():
    device = "cuda"
print(f"Using {device} device")

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom").to(device)

input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
I have tried other models (smaller BERT models), and I also tried letting it run on the CPU only, without using the MPS device at all; the kind of smaller-model, CPU-only test I mean is sketched below.
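A minimal sketch of that sanity check (bert-base-uncased is just an illustrative checkpoint, not necessarily the exact model I used):

from transformers import pipeline

# Force CPU execution (device=-1) to take the MPS backend out of the equation
fill_mask = pipeline("fill-mask", model="bert-base-uncased", device=-1)
print(fill_mask("Paris is the capital of [MASK]."))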
Hopefully someone can help.
Posted on 2022-11-14 16:41:42
It is probably just taking far too long to produce output: the full BLOOM model has 176B parameters spread over 70 transformer blocks, and its weights take up hundreds of gigabytes, far more than the machine's 64 GB of RAM, so they cannot all stay resident at once. Would you like to break it down into serial calls instead? a) the embedding layer, b) the 70 BLOOM blocks, c) the output layer norm, d) token decoding.
An example running this code is available at https://nbviewer.org/urls/arteagac.github.io/blog/bloom_local.ipynb.
It basically boils down to:
def forward(input_ids):
    # 1. Create attention mask and position encodings
    attention_mask = torch.ones(len(input_ids)).unsqueeze(0).bfloat16().to(device)
    alibi = build_alibi_tensor(input_ids.shape[1], config.num_attention_heads,
                               torch.bfloat16).to(device)

    # 2. Load and use word embeddings
    embeddings, lnorm = load_embeddings()
    hidden_states = lnorm(embeddings(input_ids))
    del embeddings, lnorm

    # 3. Load and use the 70 BLOOM blocks sequentially, one at a time
    for block_num in range(70):
        load_block(block, block_num)
        hidden_states = block(hidden_states, attention_mask=attention_mask, alibi=alibi)[0]
        print(".", end='')
    hidden_states = final_lnorm(hidden_states)

    # 4. Load and use the language model head
    lm_head = load_causal_lm_head()
    logits = lm_head(hidden_states)

    # 5. Compute the most likely next token
    return torch.argmax(logits[:, -1, :], dim=-1)
Please refer to the linked notebook for the implementations of the functions used in this forward call.
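For completeness, here is a minimal sketch of how this forward function can be driven as a greedy-decoding loop; it assumes tokenizer and device are already set up as in the notebook, and it reuses the prompt from the question:

input_ids = tokenizer("translate English to German: How old are you?",
                      return_tensors="pt").input_ids.to(device)
max_new_tokens = 10
for _ in range(max_new_tokens):
    next_id = forward(input_ids)  # id of the most likely next token
    input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)
print(tokenizer.decode(input_ids[0]))

Each iteration reloads every block from disk, so expect minutes per token; the dots printed inside forward show progress through the 70 blocks.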
https://stackoverflow.com/questions/74319809