
Internship Notes (31): Android Multi-dex, Part 2

Author: wust小吴
Published: 2019-07-08 18:25:28

This installment covers background knowledge you need before going any further with Android multi-dex.

As an Android developer, once the business your app serves grows past a certain point, new features and new libraries keep getting added, the code base balloons, and the APK grows with it. Sooner or later you will run into one of these errors:

  1. The generated APK fails to install on Android 2.3 and earlier devices with INSTALL_FAILED_DEXOPT.
  2. The build fails because there are too many methods: Conversion to Dalvik format failed: Unable to execute dex: method ID not in [0, 0xffff]: 65536

The specific causes are as follows:

  1. The install failure on Android 2.3 (INSTALL_FAILED_DEXOPT) is caused by dexopt's LinearAlloc limit, which has gone through 4 MB / 5 MB / 8 MB / 16 MB across Android versions; mainstream 4.2.x systems are mostly at 16 MB, while on Gingerbread and below the LinearAllocHdr allocation is only 5 MB, raised to 8 MB after Gingerbread. Dalvik's LinearAlloc is a fixed-size buffer. During installation the system runs a program called dexopt to prepare the app for the current device, and dexopt uses LinearAlloc to store the app's method information. The buffer is 5 MB on Android 2.2/2.3 and 8 MB or 16 MB on Android 4.x; when the method count pushes past the buffer size, dexopt crashes.
  2. The method-count error comes from the DEX file format itself: methods in a DEX file are referenced by a native short index, i.e. 16 bits, so a single DEX can address at most 65536 methods, and fields and classes have the same limit. When Android's packaging step merges and compresses all of the project's class files into one classes.dex, the total number of methods that single DEX can reference (your own code plus the Android framework and every library you depend on) is therefore capped at 65536. (The standard workaround is sketched right after this list.)
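The usual workaround for both limits is to split the app across multiple dex files and let a support library install the secondary ones at startup. Below is a minimal sketch, assuming the android.support.multidex support library is on the classpath and `multiDexEnabled true` is set in the module's build.gradle defaultConfig; the class name MyApplication is just an illustrative placeholder.

```java
// Minimal sketch: enabling multidex with the support library.
// Assumes a 'com.android.support:multidex' dependency and multiDexEnabled true
// in build.gradle; MyApplication is a placeholder name.
import android.app.Application;
import android.content.Context;
import android.support.multidex.MultiDex;

public class MyApplication extends Application {
    @Override
    protected void attachBaseContext(Context base) {
        super.attachBaseContext(base);
        // On pre-Lollipop devices only classes.dex is loaded natively;
        // this call installs the secondary classes2.dex, classes3.dex, ...
        MultiDex.install(this);
    }
}
```

The Application subclass then has to be registered in AndroidManifest.xml via android:name (or MultiDexApplication can be used directly).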

 The `dexopt` step is expensive, and experienced developers report that opt time scales directly with dex size. The most obvious idea is to load the secondary dex only after the main UI is up, but if a user action touches a class in the secondary dex while it is still loading, things go wrong. An even more common case: Activity A lives in the secondary dex, the process is killed in the background, and the user returns to that Activity straight from the recents list; in that scheme a crash is guaranteed. (A manual delay-load sketch follows.)
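If you do want to delay-load a secondary dex by hand, the low-level primitive is DexClassLoader. The sketch below only resolves a class from a secondary dex that has been copied to app-private storage; the paths and class names are illustrative arguments, and a production scheme (which is what the MultiDex library implements) also has to merge the loaded dex into the application class loader so that classes the framework instantiates, such as the Activity A case above, can be found.

```java
// Hedged sketch: resolving a class from a secondary dex with DexClassLoader.
// dexPath and className are illustrative caller-supplied values; a real scheme
// must also patch the loaded dex into the app's own class loader.
import android.content.Context;
import dalvik.system.DexClassLoader;

public final class SecondaryDexLoader {
    public static Class<?> loadClass(Context context, String dexPath, String className)
            throws ClassNotFoundException {
        String optimizedDir = context.getDir("outdex", Context.MODE_PRIVATE).getAbsolutePath();
        DexClassLoader loader = new DexClassLoader(
                dexPath,                    // path to the secondary .dex/.jar/.apk
                optimizedDir,               // where the system writes the dexopt output
                null,                       // no extra native library path
                context.getClassLoader());  // parent: the application's class loader
        return loader.loadClass(className);
    }
}
```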

Given all that, the opt step itself is worth understanding first. Here is a technical document that explains it:

Dalvik Optimization and Verification with Dexopt

The Dalvik virtual machine was designed specifically for the Android mobile platform. The target systems have little RAM, store data on slow internal flash memory, and generally have the performance characteristics of decade-old desktop systems. They also run Linux, which provides virtual memory, processes and threads, and UID-based security mechanisms.

That is: Dalvik was built specifically for the Android mobile platform. The target devices have very little RAM, their internal flash storage is slow, and overall they perform roughly like desktop machines from a decade earlier. They run Linux, which supplies virtual memory, processes and threads, and UID-based security mechanisms.

The features and limitations caused us to focus on certain goals:

These characteristics and limitations led to a focus on the following goals:

  • Class data, notably bytecode, must be shared between multiple processes to minimize total system memory usage.
  • Class data, bytecode in particular, must be shared across processes to keep total system memory usage down. [Note: read this as "once it has been processed by opt".]
  • The overhead in launching a new app must be minimized to keep the device responsive.
  • The cost of launching a new app must be kept as low as possible so the device stays responsive. [Note: put another way, the opt step is part of what makes an app start quickly; in practice the main dex should not be doing too much work.]
  • Storing class data in individual files results in a lot of redundancy, especially with respect to strings. To conserve disk space we need to factor this out.
  • Storing class data in individual files creates a lot of redundancy, especially for strings, so to save disk space that redundancy has to be factored out. This is something the opt step handles.
  • Parsing class data fields adds unnecessary overhead during class loading. Accessing data values (e.g. integers and strings) directly as C types is better.
  • Parsing class data fields adds needless overhead during class loading; it is better to access data values such as integers and strings directly as C types. The opt step can prepare this as well.
  • Bytecode verification is necessary, but slow, so we want to verify as much as possible outside app execution.
  • Bytecode verification is necessary but slow, so as much of it as possible should happen outside of app execution [in other words, as much as possible should be verified before the program ever runs].
  • Bytecode optimization (quickened instructions, method pruning) is important for speed and battery life.
  • Bytecode optimization (quickened instructions, method pruning) matters for both speed and battery life.
  • For security reasons, processes may not edit shared code.
  • For security reasons, processes are not allowed to modify shared code.

The typical VM implementation uncompresses individual classes from a compressed archive and stores them on the heap. This implies a separate copy of each class in every process, and slows application startup because the code must be uncompressed (or at least read off disk in many small pieces). On the other hand, having the bytecode on the local heap makes it easy to rewrite instructions on first use, facilitating a number of different optimizations.

In a typical VM implementation, individual classes are uncompressed out of a compressed archive and stored on the heap. That means every process carries its own copy of each class, and application startup is slower because the code has to be uncompressed first (or at least read off disk in many small pieces). On the other hand, having the bytecode on the local heap makes it easy to rewrite instructions on first use, which opens the door to a number of different optimizations.

The goals led us to make some fundamental decisions:

These goals led to some fundamental decisions; compared with a typical VM, this is what the opt step makes possible:

  • Multiple classes are aggregated into a single "DEX" file.
  • Many class files are merged into one DEX file.
  • DEX files are mapped read-only and shared between processes.
  • The DEX file is mapped read-only and shared across processes.
  • Byte ordering and word alignment are adjusted to suit the local system.
  • Byte ordering and word alignment are adjusted to suit the local system, i.e. the particular device the file will run on.
  • Bytecode verification is mandatory for all classes, but we want to "pre-verify" whatever we can.
  • Bytecode verification is mandatory for every class, but whatever can be "pre-verified" ahead of time should be.
  • Optimizations that require rewriting bytecode must be done ahead of time.
  • Optimizations that involve rewriting bytecode have to be done ahead of time (at dexopt time), since the mapped code is read-only once the app is running.

The consequences of these decisions are explained in the following sections.

The consequences of these decisions are explained in detail in the sections below.

VM Operation

Application code is delivered to the system in a .jar or .apk file. These are really just .zip archives with some meta-data files added. The Dalvik DEX data file is always called classes.dex.

Application code is delivered to the system as a .jar or .apk file; these are really just .zip archives with a few extra metadata files. The Dalvik DEX data file inside is always named classes.dex.

The bytecode cannot be memory-mapped and executed directly from the zip file, because the data is compressed and the start of the file is not guaranteed to be word-aligned. These problems could be addressed by storing classes.dex without compression and padding out the zip file, but that would increase the size of the package sent across the data network.

Bytecode cannot be memory-mapped and executed straight out of the zip file, because the data is compressed and the start of the entry is not guaranteed to be word-aligned. Storing classes.dex uncompressed and padding out the zip file would solve both problems, but it would also make the package sent over the network larger.

We need to extract classes.dex from the zip archive before we can use it. While we have the file available, we might as well perform some of the other actions (realignment, optimization, verification) described earlier. This raises a new question however: who is responsible for doing this, and where do we keep the output?

Before classes.dex can be used it has to be extracted from the zip archive, and while the file is in hand it makes sense to perform some of the other steps described earlier (realignment, optimization, verification). That raises a new question, though: who is responsible for doing this, and where does the output go?
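In plain java.util.zip terms, "extract classes.dex before use" looks roughly like the sketch below: locate the entry, note that it is stored DEFLATED (compressed) and therefore cannot simply be memory-mapped in place, and copy it out. The output directory is an assumption; on a real device this work is done by the VM or the installer, not by app code.

```java
// Sketch: pulling classes.dex out of an apk/zip with java.util.zip.
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public final class DexExtractor {
    public static File extractClassesDex(File apk, File outDir) throws IOException {
        try (ZipFile zip = new ZipFile(apk)) {
            ZipEntry entry = zip.getEntry("classes.dex");
            if (entry == null) {
                throw new IOException("no classes.dex in " + apk);
            }
            // DEFLATED means the entry is compressed and cannot be mmapped as-is.
            System.out.println("compressed=" + (entry.getMethod() == ZipEntry.DEFLATED)
                    + ", size=" + entry.getSize());
            File out = new File(outDir, "classes.dex");
            try (InputStream in = zip.getInputStream(entry);
                 OutputStream os = new FileOutputStream(out)) {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) {
                    os.write(buf, 0, n);
                }
            }
            return out;
        }
    }
}
```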

Preparation

There are at least three different ways to create a "prepared" DEX file, sometimes known as "ODEX" (for Optimized DEX):

There are at least three ways to create a "prepared" DEX file, sometimes called an ODEX (optimized DEX):

  1. The VM does it "just in time". The output goes into a special dalvik-cache directory. This works on the desktop and engineering-only device builds where the permissions on the dalvik-cache directory are not restricted. On production devices, this is not allowed.
  (That is, the VM optimizes on the fly and writes the result into dalvik-cache; fine on desktop and engineering builds, where that directory's permissions are open, but forbidden on production devices.)
  2. The system installer does it when an application is first added. It has the privileges required to write to dalvik-cache.
  (That is, optimization happens at install time, done by the installer, which is privileged to write into dalvik-cache.)
  3. The build system does it ahead of time. The relevant jar / apk files are present, but the classes.dex is stripped out. The optimized DEX is stored next to the original zip archive, not in dalvik-cache, and is part of the system image.
  (That is, the ODEX is produced at build time: classes.dex is stripped from the jar/apk, and the optimized DEX sits next to the original archive as part of the system image rather than in dalvik-cache.)

The dalvik-cache directory is more accurately $ANDROID_DATA/data/dalvik-cache. The files inside it have names derived from the full path of the source DEX. On the device the directory is owned by system / system and has 0771 permissions, and the optimized DEX files stored there are owned by system and the application's group, with 0644 permissions. DRM-locked applications will use 640 permissions to prevent other user applications from examining them. The bottom line is that you can read your own DEX file and those of most other applications, but you cannot create, modify, or remove them.

The dalvik-cache directory is, more precisely, $ANDROID_DATA/data/dalvik-cache. The files inside it are named after the full path of the source DEX. On a device the directory is owned by system / system with 0771 permissions, and the optimized DEX files stored there are owned by system and the application's group with 0644 permissions. DRM-locked applications use 640 permissions instead, to keep other user applications from examining them. The bottom line: you can read your own DEX file and those of most other applications, but you cannot create, modify, or remove them.
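The exact naming rule lives inside the VM's dexopt code; as a rough, hedged illustration, the cache file name is essentially the source path with its slashes replaced, along these lines:

```java
// Rough illustration only: dalvik-cache file names are derived from the source
// path. The real rule is implemented in the VM; this mirrors the commonly seen
// "slashes become '@'" convention and should not be treated as authoritative.
public final class DalvikCacheName {
    // e.g. "/data/app/com.example.app-1.apk" -> "data@app@com.example.app-1.apk@classes.dex"
    public static String cacheFileName(String sourcePath) {
        String trimmed = sourcePath.startsWith("/") ? sourcePath.substring(1) : sourcePath;
        return trimmed.replace('/', '@') + "@classes.dex";
    }
}
```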

Preparation of the DEX file for the "just in time" and "system installer" approaches proceeds in three steps:

Preparing the DEX file in the "just in time" and "system installer" cases proceeds in three steps:

First, the dalvik-cache file is created. This must be done in a process with appropriate privileges, so for the "system installer" case this is done within installd, which runs as root.

First, the dalvik-cache file is created. This must happen in a process with appropriate privileges, so in the "system installer" case it is done inside installd, which runs as root.

Second, the classes.dex entry is extracted from the the zip archive. A small amount of space is left at the start of the file for the ODEX header.

Second, the classes.dex entry is extracted from the zip archive, leaving a small amount of space at the start of the file for the ODEX header.

Third, the file is memory-mapped for easy access and tweaked for use on the current system. This includes byte-swapping and structure realigning, but no meaningful changes to the DEX file. We also do some basic structure checks, such as ensuring that file offsets and data indices fall within valid ranges.

Third, the file is memory-mapped for easy access and tweaked for use on the current system. That includes byte-swapping and structure realignment, but no meaningful change to the DEX contents. Some basic structural checks are done as well, such as making sure file offsets and data indices fall within valid ranges. (A sketch of one such check follows.)
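As a hedged illustration of that kind of structural check, the sketch below inspects the endian_tag field of a plain DEX header (assumed to sit at byte offset 40, with the published constants 0x12345678 and 0x78563412) to decide whether byte-swapping is needed; the exact ODEX layout is not shown here.

```java
// Hedged sketch of a structural check on a plain DEX header.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Paths;

public final class DexHeaderCheck {
    private static final int ENDIAN_CONSTANT = 0x12345678;
    private static final int REVERSE_ENDIAN_CONSTANT = 0x78563412;

    public static String describe(String dexPath) throws IOException {
        byte[] bytes = Files.readAllBytes(Paths.get(dexPath));
        // endian_tag is assumed to be the 32-bit field at offset 40 of the header.
        int endianTag = ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).getInt(40);
        if (endianTag == ENDIAN_CONSTANT) {
            return "byte order already matches";
        } else if (endianTag == REVERSE_ENDIAN_CONSTANT) {
            return "file needs byte-swapping before use";
        }
        return "not a well-formed DEX header";
    }
}
```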

The build system uses a hairy process that involves starting the emulator, forcing just-in-time optimization of all relevant DEX files, and then extracting the results from dalvik-cache. The reasons for doing this, rather than using a tool that runs on the desktop, will become more apparent when the optimizations are explained.

Once the code is byte-swapped and aligned, we're ready to go. We append some pre-computed data, fill in the ODEX header at the start of the file, and start executing. (The header is filled in last, so that we don't try to use a partial file.) If we're interested in verification and optimization, however, we need to insert a step after the initial prep.

Once the code has been byte-swapped and aligned, things are nearly ready: some pre-computed data is appended, the ODEX header at the start of the file is filled in (last, so a partially written file is never mistaken for a usable one), and execution can begin. If verification and optimization are wanted, however, an extra step has to be inserted after this initial prep.

dexopt

We want to verify and optimize all of the classes in the DEX file. The easiest and safest way to do this is to load all of the classes into the VM and run through them. Anything that fails to load is simply not verified or optimized. Unfortunately, this can cause allocation of some resources that are difficult to release (e.g. loading of native shared libraries), so we don't want to do it in the same virtual machine that we're running applications in.

We want to verify and optimize all of the classes in the DEX file. The easiest and safest way is to load them all into a VM and run through them; whatever fails to load simply is not verified or optimized. Unfortunately that can allocate resources that are hard to release (loading native shared libraries, for example), so we do not want to do it in the same VM that runs applications.

The solution is to invoke a program called dexopt, which is really just a back door into the VM. It performs an abbreviated VM initialization, loads zero or more DEX files from the bootstrap class path, and then sets about verifying and optimizing whatever it can from the target DEX. On completion, the process exits, freeing all resources.

The solution is to invoke a program called dexopt, which is really just a back door into the VM. It performs an abbreviated VM initialization, loads zero or more DEX files from the bootstrap class path, and then verifies and optimizes whatever it can in the target DEX. When it is done, the process exits and all resources are freed.

It is possible for multiple VMs to want the same DEX file at the same time. File locking is used to ensure that dexopt is only run once.

Several VMs may want the same DEX file at the same time; file locking is used to make sure dexopt runs only once. (A sketch of the idea follows.)
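The locking idea is ordinary exclusive file locking; a minimal sketch in java.nio terms, with illustrative names, looks like this: the first process to take the lock does the work, later processes block until it finishes and then find the output already present.

```java
// Minimal sketch of "only one process does the work": take an exclusive file
// lock, re-check whether the output already exists, and only then run the
// expensive step. Names are illustrative.
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.util.function.BooleanSupplier;

public final class OnceGuard {
    public static void runOnce(String lockPath, BooleanSupplier alreadyDone, Runnable work)
            throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(lockPath, "rw");
             FileChannel channel = raf.getChannel();
             FileLock lock = channel.lock()) {     // blocks until the lock is free
            if (!alreadyDone.getAsBoolean()) {     // e.g. "is a valid ODEX already present?"
                work.run();                        // e.g. the dexopt step itself
            }
        }
    }
}
```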

Verification

The bytecode verification process involves scanning through the instructions in every method in every class in a DEX file. The goal is to identify illegal instruction sequences so that we don't have to check for them at run time. Many of the computations involved are also necessary for "exact" garbage collection. See Dalvik Bytecode Verifier Notes for more information.

Bytecode verification means scanning the instructions of every method of every class in the DEX file. The goal is to identify illegal instruction sequences up front so they do not have to be checked at run time. Many of the computations involved are also needed for "exact" garbage collection; see Dalvik Bytecode Verifier Notes for more information.

For performance reasons, the optimizer (described in the next section) assumes that the verifier has run successfully, and makes some potentially unsafe assumptions. By default, Dalvik insists upon verifying all classes, and only optimizes classes that have been verified. If you want to disable the verifier, you can use command-line flags to do so. See also Controlling the Embedded VM for instructions on controlling these features within the Android application framework.

For performance reasons the optimizer (described in the next section) assumes the verifier has run successfully and makes some potentially unsafe assumptions on that basis. By default Dalvik insists on verifying every class and only optimizes classes that have been verified. The verifier can be disabled with command-line flags; see Controlling the Embedded VM for how to control these features from within the Android application framework.

Reporting of verification failures is a tricky issue. For example, calling a package-scope method on a class in a different package is illegal and will be caught by the verifier. We don't necessarily want to report it during verification though -- we actually want to throw an exception when the method call is attempted. Checking the access flags on every method call is expensive though. The Dalvik Bytecode Verifier Notes document addresses this issue.

Reporting verification failures is a tricky issue. For example, calling a package-scope method on a class in a different package is illegal and the verifier will catch it, but we do not necessarily want to report it during verification; what we really want is to throw an exception at the moment the call is attempted. Checking the access flags on every method call is expensive, though. The Dalvik Bytecode Verifier Notes document goes into this.

Classes that have been verified successfully have a flag set in the ODEX. They will not be re-verified when loaded. The Linux access permissions are expected to prevent tampering; if you can get around those, installing faulty bytecode is far from the easiest line of attack. The ODEX file has a 32-bit checksum, but that's chiefly present as a quick check for corrupted data.

Classes that verify successfully get a flag set in the ODEX, and that flag means they are not re-verified when loaded. Linux access permissions are expected to prevent tampering, and if you can get around those, installing faulty bytecode is far from the easiest line of attack. The ODEX file does carry a 32-bit checksum, but that is mainly a quick check for corrupted data (sketched below).
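As a rough illustration of what a cheap 32-bit integrity check buys: recompute a checksum over the file body and compare it with the stored value. The plain DEX format uses Adler-32 computed over everything after its checksum field; the ODEX layout is not spelled out here, so the offset parameter below is an assumption.

```java
// Illustrative integrity check: recompute Adler-32 over the file body and
// compare it with a stored value. bodyOffset stands in for "everything after
// the header fields covered by the checksum" and is an assumption.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.Adler32;

public final class QuickChecksum {
    public static boolean bodyMatches(String path, int bodyOffset, long storedChecksum)
            throws IOException {
        byte[] bytes = Files.readAllBytes(Paths.get(path));
        Adler32 adler = new Adler32();
        adler.update(bytes, bodyOffset, bytes.length - bodyOffset);
        // Cheap corruption check, not a security measure.
        return adler.getValue() == storedChecksum;
    }
}
```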

Optimization

Virtual machine interpreters typically perform certain optimizations the first time a piece of code is used. Constant pool references are replaced with pointers to internal data structures, operations that always succeed or always work a certain way are replaced with simpler forms. Some of these require information only available at runtime, others can be inferred statically when certain assumptions are made.

The first time a piece of code runs, a VM interpreter typically performs certain optimizations: constant-pool references are replaced with pointers to internal data structures, and operations that always succeed or always work the same way are replaced with simpler forms. Some of this needs information that is only available at run time; other parts can be inferred statically once certain assumptions are made.

The Dalvik optimizer does the following:

The Dalvik optimizer does the following:

  • For virtual method calls, replace the method index with a vtable index.
  • For virtual method calls, the method index is replaced with a vtable index.
  • For instance field get/put, replace the field index with a byte offset. Also, merge the boolean / byte / char / short variants into a single 32-bit form (less code in the interpreter means more room in the CPU I-cache).
  • For instance field get/put, the field index is replaced with a byte offset. The boolean / byte / char / short variants are also merged into a single 32-bit form (less code in the interpreter means more room in the CPU I-cache).
  • Replace a handful of high-volume calls, like String.length(), with "inline" replacements. This skips the usual method call overhead, directly switching from the interpreter to a native implementation.
  • Prune empty methods. The simplest example is Object.<init>, which does nothing, but must be called whenever any object is allocated. The instruction is replaced with a new version that acts as a no-op unless a debugger is attached.
  • Empty methods are pruned. The simplest example is Object.<init>, which does nothing but must be called whenever any object is allocated; the instruction is replaced with a version that acts as a no-op unless a debugger is attached.
  • Append pre-computed data. For example, the VM wants to have a hash table for lookups on class name. Instead of computing this when the DEX file is loaded, we can compute it now, saving heap space and computation time in every VM where the DEX is loaded.
  • Pre-computed data is appended. For example, the VM wants a hash table for class-name lookups; rather than computing it every time the DEX file is loaded, it can be computed now, saving heap space and computation time in every VM that loads this DEX. (A conceptual sketch of this idea follows the list.)
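As a conceptual sketch (illustrative only, not Dalvik internals) of why appending a pre-computed class-name table helps: build the name-to-class-definition map once, at dexopt time, so every VM that maps the file gets constant-time lookups without recomputing anything.

```java
// Conceptual sketch only: a class-name index built once and reused for O(1)
// lookups, standing in for the hash table dexopt appends to the ODEX.
import java.util.HashMap;
import java.util.Map;

final class ClassNameIndex {
    private final Map<String, Integer> nameToClassDefIndex = new HashMap<>();

    // Built once, conceptually at dexopt time, from the DEX class definitions.
    ClassNameIndex(String[] classDescriptors) {
        for (int i = 0; i < classDescriptors.length; i++) {
            nameToClassDefIndex.put(classDescriptors[i], i);
        }
    }

    // Consulted many times, conceptually at class-load time.
    Integer find(String descriptor) {
        return nameToClassDefIndex.get(descriptor);
    }
}
```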

All of the instruction modifications involve replacing the opcode with one not defined by the Dalvik specification. This allows us to freely mix optimized and unoptimized instructions. The set of optimized instructions, and their exact representation, is tied closely to the VM version.

All of these instruction modifications replace an opcode with one that is not defined by the Dalvik specification, which is what allows optimized and unoptimized instructions to be mixed freely. The set of optimized instructions, and their exact representation, is closely tied to the VM version.

Most of the optimizations are obvious "wins". The use of raw indices and offsets not only allows us to execute more quickly, we can also skip the initial symbolic resolution. Pre-computation eats up disk space, and so must be done in moderation.

Most of the optimizations are obvious wins. Using raw indices and offsets not only speeds up execution, it also lets the VM skip the initial symbolic resolution. Pre-computation does eat disk space, though, so it has to be used in moderation.

There are a couple of potential sources of trouble with these optimizations. First, vtable indices and byte offsets are subject to change if the VM is updated. Second, if a superclass is in a different DEX, and that other DEX is updated, we need to ensure that our optimized indices and offsets are updated as well. A similar but more subtle problem emerges when user-defined class loaders are employed: the class we actually call may not be the one we expected to call.

These optimizations also bring a couple of potential problems. First, vtable indices and byte offsets are subject to change if the VM is updated. Second, if a superclass lives in a different DEX and that other DEX is updated, the optimized indices and offsets have to be updated as well. A similar but subtler problem appears when user-defined class loaders are involved: the class actually called may not be the one that was expected.

These problems are addressed with dependency lists and some limitations on what can be optimized.

These problems are addressed with dependency lists and some limits on what can be optimized.

Dependencies and Limitations

The optimized DEX file includes a list of dependencies on other DEX files, plus the CRC-32 and modification date from the originating classes.dex zip file entry. The dependency list includes the full path to the dalvik-cache file, and the file's SHA-1 signature. The timestamps of files on the device are unreliable and not used. The dependency area also includes the VM version number.

The optimized DEX file includes a list of dependencies on other DEX files, plus the CRC-32 and modification date taken from the original classes.dex zip entry. The dependency list includes the full path of each dalvik-cache file and that file's SHA-1 signature; timestamps of files on the device are unreliable and are not used. The dependency area also records the VM version number. (A sketch of where the CRC-32 and date come from follows.)
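As a hedged sketch of where those two dependency fields can come from: the CRC-32 and modification time of the classes.dex entry are already recorded in the zip central directory, so they can be read without extracting anything. The apk path handling is illustrative.

```java
// Sketch: reading the CRC-32 and modification time of classes.dex from the
// zip central directory of an apk.
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public final class DexEntryStamp {
    public static long[] crcAndTime(String apkPath) throws IOException {
        try (ZipFile apk = new ZipFile(apkPath)) {
            ZipEntry dex = apk.getEntry("classes.dex");
            if (dex == null) {
                throw new IOException("no classes.dex in " + apkPath);
            }
            return new long[] { dex.getCrc(), dex.getTime() };  // CRC-32, mtime (epoch millis)
        }
    }
}
```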

An optimized DEX is dependent upon all of the DEX files in the bootstrap class path. DEX files that are part of the bootstrap class path depend upon the DEX files that appeared earlier. To ensure that nothing outside the dependent DEX files is available, dexopt only loads the bootstrap classes. References to classes in other DEX files fail, which causes class loading and/or verification to fail, and classes with external dependencies are simply not optimized.

An optimized DEX depends on every DEX file in the bootstrap class path, and DEX files that are themselves part of the bootstrap class path depend on the ones that appear before them. To ensure nothing outside those dependencies is visible, dexopt loads only the bootstrap classes; references to classes in other DEX files fail, which makes class loading and/or verification fail, and classes with external dependencies simply are not optimized.

This means that splitting code out into many separate DEX files has a disadvantage: virtual method calls and instance field lookups between non-boot DEX files can't be optimized. Because verification is pass/fail with class granularity, no method in a class that has any reliance on classes in external DEX files can be optimized. This may be a bit heavy-handed, but it's the only way to guarantee that nothing breaks when individual pieces are updated.

Another negative consequence: any change to a bootstrap DEX will result in rejection of all optimized DEX files. This makes it hard to keep system updates small.

Splitting code into many separate DEX files therefore has a downside: virtual method calls and instance field lookups between non-boot DEX files cannot be optimized. Because verification passes or fails with class granularity, no method of a class that relies on classes in external DEX files can be optimized; that may be heavy-handed, but it is the only way to guarantee nothing breaks when individual pieces are updated.

Another unpleasant consequence: any change to a bootstrap DEX causes all optimized DEX files to be rejected, which makes it hard to keep system updates small.

Despite our caution, there is still a possibility that a class in a DEX file loaded by a user-defined class loader could ask for a bootstrap class (say, String) and be given a different class with the same name. If a class in the DEX file being processed has the same name as a class in the bootstrap DEX files, the class will be flagged as ambiguous and references to it will not be resolved during verification / optimization. The class linking code in the VM does additional checks to plug another hole; see the verbose description in the VM sources for details (vm/oo/Class.c).

If one of the dependencies is updated, we need to re-verify and re-optimize the DEX file. If we can do a just-in-time dexopt invocation, this is easy. If we have to rely on the installer daemon, or the DEX was shipped only in ODEX, then the VM has to reject the DEX.

The output of dexopt is byte-swapped and struct-aligned for the host, and contains indices and offsets that are highly VM-specific (both version-wise and platform-wise). For this reason it's tricky to write a version of dexopt that runs on the desktop but generates output suitable for a particular device. The safest way to invoke it is on the target device, or on an emulator for that device.

Generated DEX

Some languages and frameworks rely on the ability to generate bytecode and execute it. The rather heavy dexopt verification and optimization model doesn't work well with that.

We intend to support this in a future release, but the exact method is to be determined. We may allow individual classes to be added or whole DEX files; may allow Java bytecode or Dalvik bytecode in instructions; may perform the usual set of optimizations, or use a separate interpreter that performs on-first-use optimizations directly on the bytecode (which won't be mapped read-only, since it's locally defined).
