I have a project currently using CMake that I'd like to switch over to Bazel. The main dependency is LLVM, which I use to generate LLVM IR. Looking around, there doesn't seem to be much guidance on this, since only TensorFlow appears to use LLVM from Bazel (and, as far as I can tell, it auto-generates its configuration). I also found a bazel-discuss thread about a similar problem, though my attempts to replicate it have failed.
Currently, my best attempt at getting something running is this (fetcher.bzl):
def _impl(ctx):
    # Download LLVM master
    ctx.download_and_extract(url = "https://github.com/llvm-mirror/llvm/archive/master.zip")
    # Run `cmake llvm-master` to generate configuration.
    ctx.execute(["cmake", "llvm-master"])
    # The bazel-discuss thread says to delete llvm-master, but I've
    # found that only generated files are pulled out of master, so all
    # the non-generated ones get dropped if I delete this.
    # ctx.execute(["rm", "-r", "llvm-master"])
    # Generate a BUILD file for the LLVM dependency.
    ctx.file('BUILD', """
# Build a library with all the LLVM code in it.
cc_library(
    name = "lib",
    srcs = glob(["**/*.cpp"]),
    hdrs = glob(["**/*.h"]),
    # Include the x86 target and all include files.
    # Add those under llvm-master/... as well because only built files
    # seem to appear under include/...
    copts = [
        "-Ilib/Target/X86",
        "-Iinclude",
        "-Illvm-master/lib/Target/X86",
        "-Illvm-master/include",
    ],
    # Include here as well, not sure whether this or copts is
    # actually doing the work.
    includes = [
        "include",
        "llvm-master/include",
    ],
    visibility = ["//visibility:public"],
    # Currently picking up some gtest targets, I have that dependency
    # already, so just link it here until I filter those out.
    deps = [
        "@gtest//:gtest_main",
    ],
)
""")
    # Generate an empty WORKSPACE file.
    ctx.file('WORKSPACE', '')

get_llvm = repository_rule(implementation = _impl)
My WORKSPACE file then looks like this:
load(":fetcher.bzl", "get_llvm")
git_repository(
name = "gflags",
commit = "46f73f88b18aee341538c0dfc22b1710a6abedef", # 2.2.1
remote = "https://github.com/gflags/gflags.git",
)
new_http_archive(
name = "gtest",
url = "https://github.com/google/googletest/archive/release-1.8.0.zip",
sha256 = "f3ed3b58511efd272eb074a3a6d6fb79d7c2e6a0e374323d1e6bcbcc1ef141bf",
build_file = "gtest.BUILD",
strip_prefix = "googletest-release-1.8.0",
)
get_llvm(name = "llvm")
I then run this with bazel build @llvm//:lib --verbose_failures.
I keep getting errors about missing header files. Eventually I worked out that running cmake llvm-master generates many headers into the current directory, but seems to leave the non-generated ones under llvm-master/. I added the same include directories under llvm-master/ as well, and that seems to pick up a lot of the files. However, it looks like tblgen is not being run, and I'm still missing key headers needed for the compile. My current error is:
In file included from external/llvm/llvm-master/include/llvm/CodeGen/MachineOperand.h:18:0,
from external/llvm/llvm-master/include/llvm/CodeGen/MachineInstr.h:24,
from external/llvm/llvm-master/include/llvm/CodeGen/MachineBasicBlock.h:22,
from external/llvm/llvm-master/include/llvm/CodeGen/GlobalISel/MachineIRBuilder.h:20,
from external/llvm/llvm-master/include/llvm/CodeGen/GlobalISel/ConstantFoldingMIRBuilder.h:13,
from external/llvm/llvm-master/unittests/CodeGen/GlobalISel/PatternMatchTest.cpp:10:
external/llvm/llvm-master/include/llvm/IR/Intrinsics.h:42:38: fatal error: llvm/IR/IntrinsicEnums.inc: No such file or directory
Looking for this file in particular, I don't see any IntrinsicEnums.inc, IntrinsicEnums.h, or IntrinsicEnums.td. I do see a lot of Intrinsics*.td files, so maybe one of those generates this particular file?
My understanding is that tblgen is supposed to convert the *.td files into *.h and *.cpp files (please correct me if I'm wrong about that). However, it doesn't seem to be running. I see that TensorFlow's project has a gentbl() build macro, though it isn't practical for me to copy it because it depends too heavily on the rest of TensorFlow's build infrastructure.
Is there a way to do this without a system as large and complex as TensorFlow's?
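To make concrete the kind of stripped-down alternative I'm imagining: one genrule per generated file inside the BUILD file that the repository rule writes, with an llvm-tblgen binary obtained separately (prebuilt, or built once out of band). Everything here — the tool label, the flag, and the paths — is a sketch rather than something I have working:

# Hypothetical minimal stand-in for TensorFlow's gentbl() macro: a genrule
# that runs llvm-tblgen over Intrinsics.td. Assumes an :llvm-tblgen binary
# target exists in this repository; names, paths, and the flag are guesses.
genrule(
    name = "intrinsic_enums_inc",
    srcs = glob(["llvm-master/include/llvm/**/*.td"]),
    outs = ["include/llvm/IR/IntrinsicEnums.inc"],
    tools = [":llvm-tblgen"],
    cmd = "$(location :llvm-tblgen) -gen-intrinsic-enums" +
          " -I $$(dirname $(location llvm-master/include/llvm/IR/Intrinsics.td))/../.." +
          " $(location llvm-master/include/llvm/IR/Intrinsics.td)" +
          " -o $@",
)

The generated .inc would then have to be made visible to the cc_library (e.g. listed in srcs or textual_hdrs so that includes = ["include"] can find it), and the same pattern would need repeating for every other tablegen'd file — which is exactly what gentbl() automates.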
Answered on 2018-10-14 14:34:57
I posted to the llvm-dev mailing list here and got a few interesting responses. LLVM was apparently not designed to support Bazel and doesn't support it particularly well. It seems theoretically possible by having Ninja output all the compile commands and then consuming them from Bazel, but this is likely to be quite difficult and would require a separate tool that outputs Skylark code to be run by Bazel.
That seemed rather complicated for the scale of the project I'm working on, so my workaround is to download the prebuilt binaries from releases.llvm.org. These include all the necessary headers, libraries, and tool binaries. I was able to make a simple yet functional toolchain in Bazel for my custom programming language.
Simple example (limited but focused): https://github.com/dgp1130/llvm-bazel-foolang
Full example (more complex, less focused): https://github.com/dgp1130/sanity-lang
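For what it's worth, the core of that setup is fairly small. Roughly the following, where the release URL, archive layout, glob patterns, and linkopts are illustrative — pick the archive for your platform from releases.llvm.org and adjust:

# In WORKSPACE: fetch a prebuilt LLVM release instead of building from source.
# (Add a sha256 for reproducibility.)
new_http_archive(
    name = "llvm",
    url = "http://releases.llvm.org/7.0.0/clang+llvm-7.0.0-x86_64-linux-gnu-ubuntu-16.04.tar.xz",
    strip_prefix = "clang+llvm-7.0.0-x86_64-linux-gnu-ubuntu-16.04",
    build_file = "llvm.BUILD",
)

# In llvm.BUILD: expose the prebuilt headers and static archives as one target.
cc_library(
    name = "lib",
    srcs = glob(["lib/libLLVM*.a"]),
    hdrs = glob(["include/**/*.h", "include/**/*.inc", "include/**/*.def"]),
    includes = ["include"],
    # Plus whatever system libraries your LLVM build needs (zlib, terminfo, ...).
    linkopts = ["-ldl", "-lpthread"],
    visibility = ["//visibility:public"],
)

Depending on which parts of LLVM you use, you may need to list specific archives in a particular order instead of globbing them all, since static link order matters.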
https://stackoverflow.com/questions/51585688