
Buildroot

Build process

Directory tree produced by a build:

r@r-work ~/osp/buildroot-2019.02.1/output $ tree -L 2
.
├── build
│   ├── buildroot-config
│   ├── buildroot-fs
│   ... (a pile of per-package build directories and build-related logs omitted) ...
│   ├── toolchain-external
│   └── toolchain-external-custom
├── host
│   ├── arm-buildroot-linux-uclibcgnueabi
│   ├── bin
│   ├── etc
│   ├── include
│   ├── lib
│   ├── lib64 -> lib
│   ├── share
│   └── usr -> .
├── images
│   ├── rootfs.tar
│   └── rootfs.tar.gz
├── staging -> /home/r/osp/buildroot-2019.02.1/output/host/arm-buildroot-linux-uclibcgnueabi/sysroot
└── target
    ├── bin
    ├── dev
    ├── etc
    ├── lib
    ├── lib32 -> lib
    ├── media
    ├── mnt
    ├── opt
    ├── proc
    ├── root
    ├── run
    ├── sbin
    ├── sys
    ├── THIS_IS_NOT_YOUR_ROOT_FILESYSTEM
    ├── tmp
    └── usr

What each directory is for:

build   per-package source trees and build logs
host    tools the host needs during cross-compilation, e.g. automake and m4, plus the cross toolchain itself, here "arm-buildroot-linux-uclibcgnueabi"
target  the target root filesystem in its intermediate form
images  the final artifacts, e.g. the built kernel and the packaged root filesystem (image or archive)

The build flow is roughly as follows (the make targets that drive it are sketched after this list):

  1. Build the cross toolchain: if Buildroot is configured to build the toolchain itself, it is compiled and installed into host; if an external toolchain is configured, it is copied into host according to a fixed set of rules
  2. Build the host tools
  3. Build the target packages: download, patch, compile, and install into target
  4. Assemble the images
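
In practice this flow is driven entirely by make. A few targets worth knowing (these are standard Buildroot per-package targets; libfoo is a placeholder package name):

make menuconfig       # configure the system
make                  # run the whole flow: toolchain, host tools, packages, images
make libfoo           # build a single package (and whatever it depends on)
make libfoo-rebuild   # recompile a package without re-extracting its source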

Handling toolchains incompatible with Buildroot

When an external toolchain is configured, Buildroot copies it (mainly the libraries) from the given path into the host directory. This copy step is what makes toolchains with unusual directory layouts unusable with Buildroot.
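
For reference, a custom external toolchain is normally declared with options like these in the Buildroot config (a sketch; the path and prefix are placeholders):

BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/my-toolchain"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="arm-linux-gnueabi"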

A few ways around this:

  1. Give up on the external toolchain and let Buildroot build one
  2. Build the toolchain with Crosstool-NG or a similar tool; Buildroot understands the layouts these common tools produce
  3. Try patching the Buildroot source

Using Buildroot during development

The packages in Buildroot are mostly stable releases, so each one goes through extract-tarball, compile, install. That is exactly the flow you don't want for a project under active development, where you'd rather build straight from your working source tree. The solution:

The following is adapted from: https://buildroot.org/downloads/manual/manual.html#_using_buildroot_during_development

Buildroot's normal operation is to download a tarball, extract it, then configure, compile and install the software found inside. The source is extracted into output/build/<package>-<version>, a temporary directory: whenever make clean is used, this directory is removed entirely and re-created at the next make invocation. Even when a Git or Subversion repository is used as the input for the package source code, Buildroot creates a tarball out of it and then behaves as it normally does with tarballs.

This behavior is well suited when Buildroot is used mainly as an integration tool, to build and integrate all the components of an embedded Linux system. It is not very convenient, however, when Buildroot is used while developing some component of the system: you want to make a small change to one package's source code and quickly rebuild the system with Buildroot. Making changes directly in output/build/<package>-<version> is not a workable solution, because the directory is wiped on make clean.

Buildroot therefore provides a specific mechanism for this use case: <pkg>_OVERRIDE_SRCDIR. Buildroot reads an override file in which the user tells Buildroot where the source of certain packages lives. By default this file is named local.mk and sits in the top-level directory of the Buildroot source tree, but a different location can be set with the BR2_PACKAGE_OVERRIDE_FILE configuration option.
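
A minimal local.mk sketch (libfoo and the path are placeholders):

LIBFOO_OVERRIDE_SRCDIR = /home/r/work/libfoo

With this in place, make libfoo-rebuild copies (rsyncs) the source from that directory into the build directory and recompiles, instead of unpacking a tarball.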

Customizing on top of Buildroot

For any real platform we end up adding or modifying packages, patches, defconfigs and so on.

Adding custom packages does not require touching the Buildroot tree itself: Buildroot provides a mechanism for keeping extra packages, configs, etc. outside the tree:

Documentation: https://github.com/fabiorush/buildroot/blob/master/docs/manual/customize-outside-br.txt
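
That mechanism is BR2_EXTERNAL. You keep a tree like the following outside Buildroot (a sketch; all names are placeholders, and the exact set of required files, e.g. external.desc, varies between Buildroot versions):

my-external/
├── Config.in
├── external.mk
├── configs/
│   └── myboard_defconfig
└── package/
    └── mypkg/
        ├── Config.in
        └── mypkg.mk

and point Buildroot at it: make BR2_EXTERNAL=/path/to/my-external menuconfig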

Dependencies when adding a package

In Kconfig, select/depends on describe the dependencies between packages, but that alone does not fix the build order; the dependency also has to be declared in the package's .mk file.

For example, for a generic-package, the LIBFOO_DEPENDENCIES variable in the .mk file declares the build-order dependencies, as in the sketch below.
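
A minimal sketch of such a package (libfoo and libbar are placeholders):

package/libfoo/Config.in:

config BR2_PACKAGE_LIBFOO
        bool "libfoo"
        select BR2_PACKAGE_LIBBAR

package/libfoo/libfoo.mk:

LIBFOO_VERSION = 1.0
LIBFOO_SOURCE = libfoo-$(LIBFOO_VERSION).tar.gz
LIBFOO_SITE = http://example.com/downloads
# the select in Config.in is not enough; declare the build-order dependency too
LIBFOO_DEPENDENCIES = libbar

$(eval $(generic-package))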

Overriding the behavior of stock packages

local.mk can also override the behavior of existing packages. For example, the libite package does not build a static library by default; add:

LIBITE_CONF_OPTS = --enable-static
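
After changing configure options like this, force the package through its configure step again, for example (libite-dirclean is the standard per-package target that wipes the package's build directory):

make libite-dirclean && make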

Auto-login

When debugging it's convenient to land in a shell without logging in. In /etc/inittab, change:

ttySAC0::respawn:/sbin/getty -L  ttySAC0 115200 vt100 # GENERIC_SERIAL

to the following (the leading '-' tells BusyBox init to run it as a login shell):

::respawn:-/bin/sh

todo: the sections below probably don't belong under Buildroot; they're generic knowledge

Choosing how device nodes are managed

From the list thread "[Buildroot] Device files and Buildroot": http://lists.busybox.net/pipermail/buildroot/2011-December/048057.html

With Buildroot, you have four ways of managing the device files
in /dev. The mechanism used to manage device files is configured from
System configuration -> /dev management. The four ways are :

 * Static using device table. In this case the "System configuration ->
   Path to the device tables" option gives a space-separated list of
   files, each of which containing a list of devices to create at build
   time in the root filesystem. By default, this list is defined to
   just the target/generic/device_table_dev.txt, which creates some
   basic device files. Those device files are created at *build* time
   and are statically present in the root filesystem image generated by
   Buildroot. All basic devices such as /dev/console, /dev/null et al.
   are already present in the default device table. If you are in this
   mode and want to add more device files, then you should add them to
   target/generic/device_table_dev.txt, or better, create your own
   additional device table in
   board/<yourcompany>/<yourproject>/device_table.txt, and add it to
   the space-separated list in "System configuration -> Path to the
   device tables".

 * Dynamic using devtmpfs only. Devtmpfs is a virtual filesystem
   implemented in the Linux kernel that can be mounted in /dev. The
   kernel will automatically create/remove device files from this
   filesystem as devices appear/disappear from the system. devtmpfs
   exists in the Linux kernel since 2.6.32. When this option is
   selected *and* Buildroot is responsible for building the kernel,
   then Buildroot ensures that the kernel is built with the appropriate
   options to make devtmpfs work. When Buildroot is *not* responsible
   for building the kernel (the user builds it on their own), then the user
   is responsible for making sure that CONFIG_DEVTMPFS and
   CONFIG_DEVTMPFS_MOUNT are both enabled in the kernel configuration.
   When this mode is used, no static device files are created in the
   root filesystem: the device files are automatically created at boot
   time by the kernel.

 * Dynamic using mdev. This is exactly like with 'devtmpfs' (i.e.,
   devtmpfs is required for this mode to work), but Buildroot adds the
   mdev utility into the mix. mdev is a utility bundled with Busybox
   which gets executed when the kernel notifies that a device has been
   added or removed from the system. Compared to a pure 'devtmpfs'
   solution, it allows executing arbitrary applications or shell
   scripts when devices appear/disappear. mdev behaviour can be
   configured from /etc/mdev.conf, refer to the Busybox documentation
   for more details. Since this case relies on devtmpfs, there are no
   static device files created in the root filesystem, and no device
   table is used.

 * Dynamic using udev. This is also exactly like with 'devtmpfs' (i.e.,
   devtmpfs is required for this mode to work), but Buildroot adds the
   udev daemon into the mix. udev is the "device event manager" used in
   all Linux desktop and server systems and can be seen as a
   "full-featured" mdev. It is more configurable, provides a library
   called libudev to allow applications to query for which devices are
   available, etc.
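
For reference, a static device table (the first method above) is a makedevs-format file; its columns are name, type, mode, uid, gid, major, minor, start, increment, count. A couple of illustrative lines (the modes here are examples, not authoritative):

/dev/console  c  622  0  0  5  1  -  -  -
/dev/null     c  666  0  0  1  3  -  -  -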

Hotplug

mdev is BusyBox's udev-like implementation; its help text and /etc/init.d/S10mdev give a rough idea of how it's used. (There is also dedicated mdev documentation.)

# mdev
BusyBox v1.29.3 (2019-04-10 20:07:42 CST) multi-call binary.

Usage: mdev [-s]

mdev -s is to be run during boot to scan /sys and populate /dev.

Bare mdev is a kernel hotplug helper. To activate it:
        echo /sbin/mdev >/proc/sys/kernel/hotplug

It uses /etc/mdev.conf with lines
        [-][ENV=regex;]...DEVNAME UID:GID PERM [>|=PATH]|[!] [@|$|*PROG]
where DEVNAME is device name regex, @major,minor[-minor2], or
environment variable regex. A common use of the latter is
to load modules for hotplugged devices:
        $MODALIAS=.* 0:0 660 @modprobe "$MODALIAS"

If /dev/mdev.seq file exists, mdev will wait for its value
to match $SEQNUM variable. This prevents plug/unplug races.
To activate this feature, create empty /dev/mdev.seq at boot.

If /dev/mdev.log file exists, debug log will be appended to it.
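
Tying the help text together, a small /etc/mdev.conf might look like this (a sketch; the helper script path is a placeholder):

# DEVNAME (regex)  UID:GID  PERM  [command]
null               0:0      666
console            0:0      600
sd[a-z][0-9]*      0:0      660   */usr/share/mdev/automount.sh
# load modules for hotplugged devices
$MODALIAS=.*       0:0      660   @modprobe "$MODALIAS"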