Compiling, Packaging, and Releasing Kubernetes

Tags: kubernetes 

Table of Contents

Quick Start

Pre-built binaries can be downloaded directly from the k8s release binary page; see also "To start developing kubernetes".

KUBE_BUILD_PLATFORMS specifies the target platform, WHAT specifies the components to build, and GOFLAGS and GOGCFLAGS pass flags through to the compiler:

KUBE_BUILD_PLATFORMS=linux/amd64 make all WHAT=cmd/kubelet GOFLAGS=-v GOGCFLAGS="-N -l"

If WHAT is not specified, everything is built.

make all builds in the local environment.

make release and make quick-release build inside a container and package the results into Docker images.

Building in the Local Environment

kubernetes requires specific Go versions; see the k8s development guide for details:

kubernetes   requires Go
1.0 - 1.2      1.4.2
1.3, 1.4       1.6
1.5, 1.6       1.7 - 1.7.5
1.7+           1.8.1

If you are working on a Mac, note that the Mac's shell commands are BSD-style, so you need to install the GNU command tools:

brew install coreutils
brew install gnu-tar

Fetch the code and build it with the local Go toolchain:

go get -d k8s.io/kubernetes
cd $GOPATH/src/k8s.io/kubernetes
KUBE_BUILD_PLATFORMS=linux/amd64 make all 

Building with the Official Container

To build in a container, you first need to prepare the Docker image locally because of the firewall:

gcr.io/google_containers/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG

The TAG is in the file build-image/cross/VERSION:

$ cat build-image/cross/VERSION
v1.7.5-2

Since fetching from gcr.io requires getting past the firewall, you can use a cross image that someone else has uploaded to docker.io, for example:

docker pull tacylee/kube-cross:v1.7.5-2
docker tag tacylee/kube-cross:v1.7.5-2  gcr.io/google_containers/kube-cross:v1.7.5-2

Build the targets:

KUBE_BUILD_PLATFORMS=linux/amd64 build/run.sh make all

Copy the build artifacts out of the container:

build/copy-output.sh

Enter the build container:

build/shell.sh

Compiling and Building Images in One Step

Because of the firewall, first prepare the base images specified in build/common.sh:

kube::build::get_docker_wrapped_binaries() {
  debian_iptables_version=v7
  ...
  kube-proxy,gcr.io/google-containers/debian-iptables-amd64:${debian_iptables_version}

You can pull images that others have uploaded to docker.io:

docker pull googlecontainer/debian-iptables-amd64:v7
docker tag googlecontainer/debian-iptables-amd64:v7  gcr.io/google-containers/debian-iptables-amd64:v7

docker pull googlecontainer/debian-iptables-arm:v7
docker tag googlecontainer/debian-iptables-arm:v7  gcr.io/google-containers/debian-iptables-arm:v7

docker pull googlecontainer/debian-iptables-arm64:v7
docker tag googlecontainer/debian-iptables-arm64:v7  gcr.io/google-containers/debian-iptables-arm64:v7

docker pull googlecontainer/debian-iptables-ppc64le:v7
docker tag googlecontainer/debian-iptables-ppc64le:v7  gcr.io/google-containers/debian-iptables-ppc64le:v7

docker pull googlecontainer/debian-iptables-s390x:v7
docker tag googlecontainer/debian-iptables-s390x:v7  gcr.io/google-containers/debian-iptables-s390x:v7

Also remove the --pull from build/lib/release.sh so the image build stops trying to pull:

 "${DOCKER[@]}" build --pull -q -t "${docker_image_tag}" ${docker_build_path} >/dev/null

Change it to:

 "${DOCKER[@]}" build -q -t "${docker_image_tag}" ${docker_build_path} >/dev/null

Building all target platforms takes a long time, especially the tests:

make release

Build only linux/amd64 and skip the tests:

make quick-release

Running this on a Mac would also build for the mac platform; set the environment variable OSTYPE to skip it:

OSTYPE=notdetected make quick-release

By default the build happens in a container. To package binaries built locally instead, comment out the following code in build/release.sh:

#kube::build::verify_prereqs
#kube::build::build_image
#kube::build::run_build_command make cross

#if [[ $KUBE_RELEASE_RUN_TESTS =~ ^[yY]$ ]]; then
#  kube::build::run_build_command make test
#  kube::build::run_build_command make test-integration
#fi

#kube::build::copy_output

# comment out all the build steps, keeping only the packaging code
kube::release::package_tarballs
kube::release::package_hyperkube

Server components are released as images in the _output/release-stage directory:

$cd ./_output/
$find ./release-stage/ -name "*.tar"
./release-stage/server/linux-amd64/kubernetes/server/bin/kube-aggregator.tar
./release-stage/server/linux-amd64/kubernetes/server/bin/kube-apiserver.tar
./release-stage/server/linux-amd64/kubernetes/server/bin/kube-controller-manager.tar
./release-stage/server/linux-amd64/kubernetes/server/bin/kube-proxy.tar
./release-stage/server/linux-amd64/kubernetes/server/bin/kube-scheduler.tar
...

Client components are released as tarballs in the _output/release-tars directory:

$ls _output/release-tars/
kubernetes-client-darwin-386.tar.gz    kubernetes-client-windows-386.tar.gz
kubernetes-client-darwin-amd64.tar.gz  kubernetes-client-windows-amd64.tar.gz
kubernetes-client-linux-386.tar.gz     kubernetes-manifests.tar.gz
kubernetes-client-linux-amd64.tar.gz   kubernetes-node-linux-amd64.tar.gz
kubernetes-client-linux-arm.tar.gz     kubernetes-node-linux-arm.tar.gz
kubernetes-client-linux-arm64.tar.gz   kubernetes-salt.tar.gz
kubernetes-client-linux-ppc64le.tar.gz kubernetes-server-linux-amd64.tar.gz
kubernetes-client-linux-s390x.tar.gz   kubernetes-src.tar.gz

Detailed Analysis

There are two ways to build kubernetes: directly, or inside Docker.

If you are working on a Mac, note that the Mac's shell commands are BSD-style, so you need to install the GNU command tools:

brew install coreutils
brew install gnu-tar

Target Platforms and Target Components

KUBE_BUILD_PLATFORMS specifies the target platform, WHAT specifies the components to build, and GOFLAGS and GOGCFLAGS pass flags through to the compiler:

KUBE_BUILD_PLATFORMS=linux/amd64 make all WHAT=cmd/kubelet GOFLAGS=-v GOGCFLAGS="-N -l"

If WHAT is not specified, everything is built.

make all builds in the local environment.

make release and make quick-release build inside a container and package the results into Docker images.

Supported Target Platforms

The target platform is specified via the environment variable KUBE_BUILD_PLATFORMS, in GOOS/GOARCH format:

KUBE_BUILD_PLATFORMS=linux/amd64 

GOOS options:

linux, darwin, windows, netbsd

GOARCH options:

amd64, 386, arm, ppc64

Supported Target Components

The target components are defined in src/k8s.io/kubernetes/hack/lib/golang.sh:

readonly KUBE_ALL_TARGETS=(
  "${KUBE_SERVER_TARGETS[@]}"
  "${KUBE_CLIENT_TARGETS[@]}"
  "${KUBE_TEST_TARGETS[@]}"
  "${KUBE_TEST_SERVER_TARGETS[@]}"
  cmd/gke-certificates-controller
)
...
if [[ ${#targets[@]} -eq 0 ]]; then
  targets=("${KUBE_ALL_TARGETS[@]}")
fi

The related variables are also defined in hack/lib/golang.sh:

KUBE_SERVER_TARGETS:
	cmd/kube-proxy
	cmd/kube-apiserver
	cmd/kube-controller-manager
	cmd/cloud-controller-manager
	cmd/kubelet
	cmd/kubeadm
	cmd/hyperkube
	vendor/k8s.io/kube-aggregator
	vendor/k8s.io/kube-apiextensions-server
	plugin/cmd/kube-scheduler

KUBE_CLIENT_TARGETS
	cmd/kubectl
	federation/cmd/kubefed

KUBE_TEST_TARGETS
	cmd/gendocs
	cmd/genkubedocs
	cmd/genman
	cmd/genyaml
	cmd/mungedocs
	cmd/genswaggertypedocs
	cmd/linkcheck
	federation/cmd/genfeddocs
	vendor/github.com/onsi/ginkgo/ginkgo
	test/e2e/e2e.test

KUBE_TEST_SERVER_TARGETS
	cmd/kubemark
	vendor/github.com/onsi/ginkgo/ginkgo

cmd/gke-certificates-controller

Building in a Container

Building kubernetes describes how to build in a container.

build/run.sh, build/copy-output.sh, build/make-clean.sh, and build/shell.sh are the scripts used directly for in-container builds.

build/run.sh:           runs a command in the build container; everything that follows executes inside the container.
build/copy-output.sh:   copies the build artifacts from the container to the local machine.
build/make-clean.sh:    cleans up the containers and files produced during the build.
build/shell.sh:         enters the build container.

As shown below, the make all command runs inside the container:

build/run.sh make all

Note in particular that make release and make quick-release already build inside a container by themselves; they do not need build/run.sh.

Building in a container works the same as building locally; both support the following make targets:

make       
make cross
make test
make test-integration
make test-cmd

When build/run.sh runs, it builds the build image from the files in build/build-image and then starts the build:

...
kube::build::verify_prereqs
kube::build::build_image
kube::build::run_build_command "$@"
...

The files in build/build-image:

▾ build/
  ▾ build-image/
    ▸ cross/
      Dockerfile  <-- the Dockerfile used to build the image
      rsyncd.sh*
      VERSION

When build/copy-output.sh runs, it copies the build artifacts back out:

kube::build::verify_prereqs
kube::build::copy_output

Four containers take part in an in-container build: data, rsyncd, build, and then rsyncd again.

All four containers come from the image built by run.sh; they just run different commands:

data:    creates the volumes; it is not removed when it exits, and holds the source and the build artifacts
rsyncd:  mounts the data container's volumes, runs an rsyncd service to receive the source upload, and is removed afterwards
build:   mounts the data container's volumes, runs the build, and is removed when it finishes
rsyncd:  mounts the data container's volumes, runs an rsyncd service to sync the build artifacts back to the local machine

Building the Image

kube::build::build_image在build/common.sh中实现:

function kube::build::build_image() {
  ...
  cp /etc/localtime "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
  cp build/build-image/Dockerfile "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
  cp build/build-image/rsyncd.sh "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
  dd if=/dev/urandom bs=512 count=1 2>/dev/null | LC_ALL=C tr -dc 'A-Za-z0-9' | dd bs=32 count=1 2>/dev/null > "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
  chmod go= "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
  kube::build::update_dockerfile
  kube::build::docker_build "${KUBE_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false'
  ...
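A side note on the dd/tr pipeline above: it generates a 32-character alphanumeric password for rsyncd. A standalone sketch, printing the length instead of writing the password file:

```shell
# Generate a 32-character alphanumeric password the same way build_image
# does for rsyncd.password (printed here instead of written to a file):
pw=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | LC_ALL=C tr -dc 'A-Za-z0-9' | dd bs=32 count=1 2>/dev/null)
echo "${#pw}"
# 32
```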

The image it builds is named:

${KUBE_BUILD_IMAGE}
=> ${KUBE_BUILD_IMAGE_REPO}:${KUBE_BUILD_IMAGE_TAG}
=> kube-build:${KUBE_BUILD_IMAGE_TAG_BASE}-${KUBE_BUILD_IMAGE_VERSION}
=> kube-build:build-${KUBE_ROOT_HASH}-${KUBE_BUILD_IMAGE_VERSION_BASE}-${KUBE_BUILD_IMAGE_CROSS_TAG}

KUBE_ROOT_HASH is derived from HOSTNAME and KUBE_ROOT.

KUBE_BUILD_IMAGE_VERSION_BASE is defined in build/build-image/VERSION.

KUBE_BUILD_IMAGE_CROSS_TAG is defined in build/build-image/cross/VERSION.
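Substituting example values shows how the pieces combine; the hash and version numbers below are made up for illustration:

```shell
# Assemble the build-image name from its parts (all values illustrative):
KUBE_ROOT_HASH=abc123def0            # derived from HOSTNAME and KUBE_ROOT
KUBE_BUILD_IMAGE_VERSION_BASE=4      # from build/build-image/VERSION
KUBE_BUILD_IMAGE_CROSS_TAG=v1.7.5-2  # from build/build-image/cross/VERSION
KUBE_BUILD_IMAGE="kube-build:build-${KUBE_ROOT_HASH}-${KUBE_BUILD_IMAGE_VERSION_BASE}-${KUBE_BUILD_IMAGE_CROSS_TAG}"
echo "${KUBE_BUILD_IMAGE}"
# kube-build:build-abc123def0-4-v1.7.5-2
```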

The Dockerfile is in build/build-image:

FROM gcr.io/google_containers/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG
...

As you can see, gcr.io/google_containers/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG is the base image.

This is the image that requires getting past the firewall to fetch.

Creating the data Container and Preparing the Volumes

After the image is built, kube::build::ensure_data_container is called to create a data container.

function kube::build::build_image() {
  ...
  kube::build::ensure_data_container
  kube::build::sync_to_container
  ...

A data container named ${KUBE_DATA_CONTAINER_NAME}:

src/k8s.io/kubernetes/build/common.sh

function kube::build::ensure_data_container() {
...
local -ra docker_cmd=(
  "${DOCKER[@]}" run
  --volume "${REMOTE_ROOT}"   # white-out the whole output dir
  --volume /usr/local/go/pkg/linux_386_cgo
  --volume /usr/local/go/pkg/linux_amd64_cgo
  --volume /usr/local/go/pkg/linux_arm_cgo
  --volume /usr/local/go/pkg/linux_arm64_cgo
  --volume /usr/local/go/pkg/linux_ppc64le_cgo
  --volume /usr/local/go/pkg/darwin_amd64_cgo
  --volume /usr/local/go/pkg/darwin_386_cgo
  --volume /usr/local/go/pkg/windows_amd64_cgo
  --volume /usr/local/go/pkg/windows_386_cgo
  --name "${KUBE_DATA_CONTAINER_NAME}"
  --hostname "${HOSTNAME}"
  "${KUBE_BUILD_IMAGE}"
  chown -R ${USER_ID}:${GROUP_ID}
    "${REMOTE_ROOT}"
    /usr/local/go/pkg/
)
"${docker_cmd[@]}"

Where:

REMOTE_ROOT="/go/src/${KUBE_GO_PACKAGE}"

KUBE_DATA_CONTAINER_NAME
=>${KUBE_DATA_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}
=>kube-build-data-${KUBE_ROOT_HASH}-${KUBE_BUILD_IMAGE_VERSION}

The data container prepares the volumes; the later rsync and build containers use them via --volumes-from.

Starting the rsyncd Container and Uploading the Source

Start a container running the rsyncd service and transfer the source into it with the rsync command:

src/k8s.io/kubernetes/build/common.sh

function kube::build::sync_to_container() {
  kube::log::status "Syncing sources to container"
  ...
  kube::build::start_rsyncd_container
  kube::build::rsync \
	--delete \
	--filter='+ /staging/**' \
	--filter='- /.git/' \
	--filter='- /.make/' \
	--filter='- /_tmp/' \
	--filter='- /_output/' \
	--filter='- /' \
	--filter='- zz_generated.*' \
	--filter='- generated.proto' \
	"${KUBE_ROOT}/" "rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/"

When the rsync container starts, it mounts the data container's volumes:

function kube::build::start_rsyncd_container() {
	...
  kube::build::run_build_command_ex \
	"${KUBE_RSYNC_CONTAINER_NAME}" -p 127.0.0.1:${KUBE_RSYNC_PORT}:${KUBE_CONTAINER_RSYNC_PORT} -d -- /rsyncd.sh >/dev/null
	...

function kube::build::run_build_command_ex() {
	...
	local -a docker_run_opts=(
		"--name=${container_name}"
		"--user=$(id -u):$(id -g)"
		"--hostname=${HOSTNAME}"
		"${DOCKER_MOUNT_ARGS[@]}"
	)
	...

The variable DOCKER_MOUNT_ARGS is:

DOCKER_MOUNT_ARGS=(--volumes-from "${KUBE_DATA_CONTAINER_NAME}")

So the source is actually uploaded to the volumes of the data container created at the very beginning.

function kube::build::rsync {
  local -a rsync_opts=(
    --archive
    --prune-empty-dirs
    --password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
  )
  if (( ${KUBE_VERBOSE} >= 6 )); then
    rsync_opts+=("-iv")
  fi
  if (( ${KUBE_RSYNC_COMPRESS} > 0 )); then
     rsync_opts+=("--compress-level=${KUBE_RSYNC_COMPRESS}")
  fi
  V=3 kube::log::status "Running rsync"
  rsync "${rsync_opts[@]}" "$@"
}

Starting the build Container and Compiling

After the source is uploaded to the data container's volumes, the build container starts and runs the build command in the build container's HOME directory. When it finishes, the files in the build container are synced back to the local machine.

function kube::build::run_build_command() {
  kube::log::status "Running build command..."
  kube::build::run_build_command_ex "${KUBE_BUILD_CONTAINER_NAME}" -- "$@"
}

function kube::build::run_build_command_ex() {
 ...
  local -a docker_run_opts=(
    "--name=${container_name}"
    "--user=$(id -u):$(id -g)"
    "--hostname=${HOSTNAME}"
    "${DOCKER_MOUNT_ARGS[@]}"
  )
...
    docker_run_opts+=("$1")    # append the $@ arguments
...
  docker_run_opts+=(
    --env "KUBE_FASTBUILD=${KUBE_FASTBUILD:-false}"
    --env "KUBE_BUILDER_OS=${OSTYPE:-notdetected}"
    --env "KUBE_VERBOSE=${KUBE_VERBOSE}"
  )
...
  local -ra docker_cmd=(
    "${DOCKER[@]}" run "${docker_run_opts[@]}" "${KUBE_BUILD_IMAGE}")

At build time, the make command passed in (e.g. make all, make cross) is simply executed inside the container.

Starting the rsyncd Container and Copying the Artifacts Back

function kube::build::copy_output() {
  ...
  kube::build::start_rsyncd_container
  ...

Common Variables

Variables starting with KUBE_ appear throughout the scripts below; they are generally brought in via hack/lib/init.sh.

KUBE_CLIENT_TARGETS and KUBE_CLIENT_BINARIES

Defined in hack/lib/golang.sh; the client programs:

readonly KUBE_CLIENT_TARGETS=(
  cmd/kubectl
  federation/cmd/kubefed
)

readonly KUBE_CLIENT_BINARIES=("${KUBE_CLIENT_TARGETS[@]##*/}")

These are the programs packaged into the client tarball at release time.
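The ${...##*/} expansion above strips everything through the last slash, turning target paths into bare binary names; it can be checked standalone:

```shell
# "${targets[@]##*/}" keeps only the component after the last slash:
targets=(cmd/kubectl federation/cmd/kubefed)
binaries=("${targets[@]##*/}")
echo "${binaries[@]}"
# kubectl kubefed
```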

KUBE_NODE_TARGETS and KUBE_NODE_BINARIES

Defined in hack/lib/golang.sh; the node programs:

kube::golang::node_targets() {
  local targets=(
    cmd/kube-proxy
    cmd/kubelet
  )
  echo "${targets[@]}"
}

readonly KUBE_NODE_TARGETS=($(kube::golang::node_targets))
readonly KUBE_NODE_BINARIES=("${KUBE_NODE_TARGETS[@]##*/}")

The Local and In-Container Build Process

The Development Guide describes how to build locally; the build process is managed with make:

make all
make cross
make test
make test-integration
make test-cmd

kubernetes requires specific Go versions; keep this in mind when building:

kubernetes   requires Go
1.0 - 1.2      1.4.2
1.3, 1.4       1.6
1.5, 1.6       1.7 - 1.7.5
1.7+           1.8.1

Top-Level Makefile

# Build code.
#
# Args:
#   WHAT: Directory names to build.  If any of these directories has a 'main'
#     package, the build will produce executable files under $(OUT_DIR)/go/bin.
#     If not specified, "everything" will be built.
#   GOFLAGS: Extra flags to pass to 'go' when building.
#   GOLDFLAGS: Extra linking flags passed to 'go' when building.
#   GOGCFLAGS: Additional go compile flags passed to 'go' when building.
#
# Example:
#   make
#   make all
#   make all WHAT=cmd/kubelet GOFLAGS=-v
#   make all GOGCFLAGS="-N -l"
#     Note: Use the -N -l options to disable compiler optimizations an inlining.
#           Using these build options allows you to subsequently use source
#           debugging tools like delve.

make all

src/k8s.io/kubernetes/Makefile:

.PHONY: all
ifeq ($(PRINT_HELP),y)
all:
	@echo "$$ALL_HELP_INFO"
else
all: generated_files
	hack/make-rules/build.sh $(WHAT)
endif

hack/make-rules/build.sh starts the build; $(WHAT) is the target to build.

generated_files

.PHONY: generated_files
ifeq ($(PRINT_HELP),y)
generated_files:
	@echo "$$GENERATED_FILES_HELP_INFO"
else
generated_files:
	$(MAKE) -f Makefile.generated_files $@ CALLED_FROM_MAIN_MAKEFILE=1
endif

generated_files is handled in another Makefile: src/k8s.io/kubernetes/Makefile.generated_files

Makefile.generated_files

Makefile.generated_files, as its name suggests, defines the targets for code generation.

.PHONY: generated_files
generated_files: gen_deepcopy gen_defaulter gen_conversion gen_openapi

The four targets that generated_files depends on all work the same way: each runs a program of the same name over files tagged in code comments, automatically generating the corresponding methods. The generator code lives in a repo named gengo.

gen_deepcopy

.PHONY: gen_deepcopy
gen_deepcopy: $(DEEPCOPY_FILES) $(DEEPCOPY_GEN)
	$(RUN_GEN_DEEPCOPY)

The command it runs:

RUN_GEN_DEEPCOPY =                                                          \
    function run_gen_deepcopy() {                                           \
        if [[ -f $(META_DIR)/$(DEEPCOPY_GEN).todo ]]; then                  \
            ./hack/run-in-gopath.sh $(DEEPCOPY_GEN)                         \
                --v $(KUBE_VERBOSE)                                         \
                --logtostderr                                               \
                -i $$(cat $(META_DIR)/$(DEEPCOPY_GEN).todo | paste -sd, -)  \
                --bounding-dirs $(PRJ_SRC_PATH)                             \
                -O $(DEEPCOPY_BASENAME)                                     \
                "$$@";                                                      \
        fi                                                                  \
    };                                                                      \
    run_gen_deepcopy
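The `cat ... | paste -sd, -` in the -i argument joins the lines of the .todo file into a single comma-separated list, which is the input format deepcopy-gen expects. Standalone, with hypothetical package paths:

```shell
# paste -sd, - joins stdin lines with commas:
printf '%s\n' k8s.io/kubernetes/pkg/api k8s.io/kubernetes/pkg/apis/extensions | paste -sd, -
# k8s.io/kubernetes/pkg/api,k8s.io/kubernetes/pkg/apis/extensions
```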

DEEPCOPY_GEN is an executable program:

# The tool used to generate deep copies.
DEEPCOPY_GEN := $(BIN_DIR)/deepcopy-gen

run-in-gopath.sh sets up GOPATH and related variables, then runs the given command in that environment:

set -o errexit
set -o nounset
set -o pipefail

KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/hack/lib/init.sh"

# This sets up a clean GOPATH and makes sure we are currently in it.
kube::golang::setup_env

# Run the user-provided command.
"${@}"

The question, then, is what deepcopy-gen actually does; look at the dependencies to see how it is produced.

$(DEEPCOPY_FILES): $(DEEPCOPY_GEN)
	mkdir -p $$(dirname $(META_DIR)/$(DEEPCOPY_GEN))
	echo $(PRJ_SRC_PATH)/$(@D) >> $(META_DIR)/$(DEEPCOPY_GEN).todo

$(DEEPCOPY_GEN):
	hack/make-rules/build.sh cmd/libs/go2idl/deepcopy-gen
	touch $@

k8s.io/kubernetes/cmd/libs/go2idl/deepcopy-gen/main.go:

// deepcopy-gen is a tool for auto-generating DeepCopy functions.
//
// Given a list of input directories, it will generate functions that
// efficiently perform a full deep-copy of each type.  For any type that
// offers a `.DeepCopy()` method, it will simply call that.  Otherwise it will
// use standard value assignment whenever possible.  If that is not possible it
// will try to call its own generated copy function for the type, if the type is
// within the allowed root packages.  Failing that, it will fall back on
// `conversion.Cloner.DeepCopy(val)` to make the copy.  The resulting file will
// be stored in the same directory as the processed source package.

Marking Files That Need deep-copy

$(DEEPCOPY_FILES) generates the .todo file named in the arguments, writing into it the directories that need deep-copy processing.

If a file contains the comment // +k8s:deepcopy-gen, it is marked as needing deep-copy processing.

k8s.io/kubernetes/Makefile.generated_files:

# Deep-copy generation
#
# Any package that wants deep-copy functions generated must include a
# comment-tag in column 0 of one file of the form:
#     // +k8s:deepcopy-gen=<VALUE>
#
# The <VALUE> may be one of:
#     generate: generate deep-copy functions into the package
#     register: generate deep-copy functions and register them with a
#               scheme

gen_defaulter

# Defaulter generation
#
# Any package that wants defaulter functions generated must include a
# comment-tag in column 0 of one file of the form:
#     // +k8s:defaulter-gen=<VALUE>
#
# The <VALUE> depends on context:
#     on types:
#       true:  always generate a defaulter for this type
#       false: never generate a defaulter for this type
#     on functions:
#       covers: if the function name matches SetDefault_NAME, instructs
#               the generator not to recurse
#     on packages:
#       FIELDNAME: any object with a field of this name is a candidate
#                  for having a defaulter generated

gen_conversion

# Conversion generation
#
# Any package that wants conversion functions generated must include one or
# more comment-tags in any .go file, in column 0, of the form:
#     // +k8s:conversion-gen=<CONVERSION_TARGET_DIR>
#
# The CONVERSION_TARGET_DIR is a project-local path to another directory which
# should be considered when evaluating peer types for conversions.  Types which
# are found in the source package (where conversions are being generated)
# but do not have a peer in one of the target directories will not have
# conversions generated.

gen_openapi

# Open-api generation
#
# Any package that wants open-api functions generated must include a
# comment-tag in column 0 of one file of the form:
#     // +k8s:openapi-gen=true

hack/make-rules/build.sh

build.sh builds the specified targets.

Building the Targets

set -o errexit
set -o nounset
set -o pipefail

KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../..
KUBE_VERBOSE="${KUBE_VERBOSE:-1}"
source "${KUBE_ROOT}/hack/lib/init.sh"

kube::golang::build_binaries "$@"
kube::golang::place_bins

If no build target is passed in, all targets are built by default.

The function kube::golang::build_binaries() takes the build targets and performs the build.

Setting Up the Build Environment

Set the environment variables: kube::golang::setup_env()

The GOPATH at build time is: _output/local/go

KUBE_OUTPUT_SUBPATH="${KUBE_OUTPUT_SUBPATH:-_output/local}"
KUBE_OUTPUT="${KUBE_ROOT}/${KUBE_OUTPUT_SUBPATH}"
KUBE_GOPATH="${KUBE_OUTPUT}/go"
GOPATH=${KUBE_GOPATH}
GOPATH="${GOPATH}:${KUBE_EXTRA_GOPATH}"

You can add extra paths to GOPATH by setting the environment variable KUBE_EXTRA_GOPATH.

The source path at build time: _output/local/go/src/k8s.io/kubernetes

KUBE_GO_PACKAGE=k8s.io/kubernetes
${KUBE_GOPATH}/src/${KUBE_GO_PACKAGE}

Build-time flags:

goflags=(${KUBE_GOFLAGS:-})
gogcflags="${KUBE_GOGCFLAGS:-}"
goldflags="${KUBE_GOLDFLAGS:-} $(kube::version::ldflags)"

The link-time flags use -X to set variables in pkg/version and vendor/k8s.io/client-go/pkg/version:

kube::version::ldflag() {
  local key=${1}
  local val=${2}

  echo "-X ${KUBE_GO_PACKAGE}/pkg/version.${key}=${val}"
  echo "-X ${KUBE_GO_PACKAGE}/vendor/k8s.io/client-go/pkg/version.${key}=${val}"
}

# Prints the value that needs to be passed to the -ldflags parameter of go build
# in order to set the kubernetes based on the git tree status.
kube::version::ldflags() {
  kube::version::get_version_vars

  local -a ldflags=($(kube::version::ldflag "buildDate" "$(date -u +'%Y-%m-%dT%H:%M:%SZ')"))
  if [[ -n ${KUBE_GIT_COMMIT-} ]]; then
    ldflags+=($(kube::version::ldflag "gitCommit" "${KUBE_GIT_COMMIT}"))
    ldflags+=($(kube::version::ldflag "gitTreeState" "${KUBE_GIT_TREE_STATE}"))
  fi

  if [[ -n ${KUBE_GIT_VERSION-} ]]; then
    ldflags+=($(kube::version::ldflag "gitVersion" "${KUBE_GIT_VERSION}"))
  fi

  if [[ -n ${KUBE_GIT_MAJOR-} && -n ${KUBE_GIT_MINOR-} ]]; then
    ldflags+=(
      $(kube::version::ldflag "gitMajor" "${KUBE_GIT_MAJOR}")
      $(kube::version::ldflag "gitMinor" "${KUBE_GIT_MINOR}")
    )
  fi

  # The -ldflags parameter takes a single string, so join the output.
  echo "${ldflags[*]-}"
}
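A standalone sketch of the string this produces, with a made-up version value; the real functions read the git tree state instead:

```shell
# Reproduce kube::version::ldflag with an example value; each version
# variable yields a pair of -X flags:
KUBE_GO_PACKAGE=k8s.io/kubernetes
ldflag() {
  echo "-X ${KUBE_GO_PACKAGE}/pkg/version.${1}=${2}"
  echo "-X ${KUBE_GO_PACKAGE}/vendor/k8s.io/client-go/pkg/version.${1}=${2}"
}
ldflags=($(ldflag gitVersion v1.7.0))
echo "${ldflags[*]}"
# -X k8s.io/kubernetes/pkg/version.gitVersion=v1.7.0 -X k8s.io/kubernetes/vendor/k8s.io/client-go/pkg/version.gitVersion=v1.7.0
```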

At runtime, any argument starting with - is treated as an additional goflag:

for arg; do
  if [[ "${arg}" == "--use_go_build" ]]; then
    use_go_build=true
  elif [[ "${arg}" == -* ]]; then
    # Assume arguments starting with a dash are flags to pass to go.
    goflags+=("${arg}")
  else
    targets+=("${arg}")
  fi
done
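This classification loop can be exercised standalone with sample arguments:

```shell
# Classify arguments the way build.sh does: flags (leading -) vs targets:
goflags=(); targets=(); use_go_build=false
for arg in -v cmd/kubelet --use_go_build plugin/cmd/kube-scheduler; do
  if [[ "${arg}" == "--use_go_build" ]]; then
    use_go_build=true
  elif [[ "${arg}" == -* ]]; then
    goflags+=("${arg}")
  else
    targets+=("${arg}")
  fi
done
echo "goflags=${goflags[*]} targets=${targets[*]} use_go_build=${use_go_build}"
# goflags=-v targets=cmd/kubelet plugin/cmd/kube-scheduler use_go_build=true
```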

Preparing the Toolchain

Prepare the build toolchain: kube::golang::build_kube_toolchain()

kube::golang::build_kube_toolchain() {
  local targets=(
    hack/cmd/teststale
    vendor/github.com/jteeuwen/go-bindata/go-bindata
  )

  local binaries
  binaries=($(kube::golang::binaries_from_targets "${targets[@]}"))

  kube::log::status "Building the toolchain targets:" "${binaries[@]}"
  go install "${goflags[@]:+${goflags[@]}}" \
        -gcflags "${gogcflags}" \
        -ldflags "${goldflags}" \
        "${binaries[@]:+${binaries[@]}}"
}

go-bindata compiles arbitrary files into Go source files.

Source Preprocessing

go generate produces the bindata

readonly KUBE_BINDATAS=(
  test/e2e/generated/gobindata_util.go
)
...
for bindata in ${KUBE_BINDATAS[@]}; do
  if [[ -f "${KUBE_ROOT}/${bindata}" ]]; then
    go generate "${goflags[@]:+${goflags[@]}}" "${KUBE_ROOT}/${bindata}"
  fi
done

go generate runs the directives in comment lines that start with //go:generate in the target .go files.

In test/e2e/generated/gobindata_util.go, that runs generate-bindata.sh:

//go:generate ../../../hack/generate-bindata.sh

generate-bindata.sh packs the following files into the corresponding source files:

# These are files for e2e tests.
BINDATA_OUTPUT="test/e2e/generated/bindata.go"
go-bindata -nometadata -o "${BINDATA_OUTPUT}.tmp" -pkg generated \
	-ignore .jpg -ignore .png -ignore .md \
	"examples/..." \
	"test/e2e/testing-manifests/..." \
	"test/images/..." \
	"test/fixtures/..."

BINDATA_OUTPUT="pkg/generated/bindata.go"
go-bindata -nometadata -nocompress -o "${BINDATA_OUTPUT}.tmp" -pkg generated \
	-ignore .jpg -ignore .png -ignore .md \
	"translations/..."

Setting the Target Platform

Set the target platform: kube::golang::set_platform_envs()

export GOOS=${platform%/*}
export GOARCH=${platform##*/}

The target platform is specified via the environment variable KUBE_BUILD_PLATFORMS, in OS/ARCH format:

local -a platforms=(${KUBE_BUILD_PLATFORMS:-})
if [[ ${#platforms[@]} -eq 0 ]]; then
  platforms=("${host_platform}")
fi

For example:

darwin/amd64

GOARCH:

the target CPU architecture: amd64, 386, arm, ppc64

GOOS:

the target operating system: linux, darwin, windows, netbsd
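The %/* and ##*/ expansions used by set_platform_envs can be checked standalone:

```shell
# Split a GOOS/GOARCH platform string the way set_platform_envs does:
platform=linux/amd64
echo "GOOS=${platform%/*} GOARCH=${platform##*/}"
# GOOS=linux GOARCH=amd64
```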

Compiling

Compilation starts in kube::golang::build_binaries_for_platform().

The targets are split into three groups: statically linked, dynamically linked, and tests.

# binary is one of the build targets listed above
for binary in "${binaries[@]}"; do
	if [[ "${binary}" =~ ".test"$ ]]; then
	  tests+=($binary)
	elif kube::golang::is_statically_linked_library "${binary}"; then
	  statics+=($binary)
	else
	  nonstatics+=($binary)
	fi
done

Targets ending in .test are for testing. Apart from those listed below, which are statically linked, everything else is dynamically linked:

readonly KUBE_STATIC_LIBRARIES=(
  cloud-controller-manager
  kube-apiserver
  kube-controller-manager
  kube-scheduler
  kube-proxy
  kube-aggregator
  kubeadm
  kubectl
)
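A minimal standalone sketch of this grouping, with the static list cut down to two entries and a simplified membership check (the real kube::golang::is_statically_linked_library lives in hack/lib/golang.sh and may differ in detail):

```shell
# Group build targets into tests, statically linked, and dynamically linked:
KUBE_STATIC_LIBRARIES=(kube-apiserver kubectl)
is_static() {
  local lib
  for lib in "${KUBE_STATIC_LIBRARIES[@]}"; do
    [[ "${1##*/}" == "${lib}" ]] && return 0
  done
  return 1
}
tests=(); statics=(); nonstatics=()
for binary in cmd/kube-apiserver cmd/kubelet test/e2e/e2e.test; do
  if [[ "${binary}" =~ ".test"$ ]]; then
    tests+=("${binary}")
  elif is_static "${binary}"; then
    statics+=("${binary}")
  else
    nonstatics+=("${binary}")
  fi
done
echo "statics=${statics[*]} nonstatics=${nonstatics[*]} tests=${tests[*]}"
# statics=cmd/kube-apiserver nonstatics=cmd/kubelet tests=test/e2e/e2e.test
```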

Building the statically linked targets:

CGO_ENABLED=0 go build -o "${outfile}" \
"${goflags[@]:+${goflags[@]}}" \
-gcflags "${gogcflags}" \
-ldflags "${goldflags}" \
"${binary}"

Building the non-static targets:

go build -o "${outfile}" \
"${goflags[@]:+${goflags[@]}}" \
-gcflags "${gogcflags}" \
-ldflags "${goldflags}" \
"${binary}"

make update

.PHONY: update
ifeq ($(PRINT_HELP),y)
update:
	@echo "$$UPDATE_HELP_INFO"
else
update:
	hack/update-all.sh
endif

hack/update-all.sh

update-all.sh runs the hack/XXX.sh scripts in turn to perform the updates.

BASH_TARGETS="
	update-generated-protobuf
	update-codegen
	update-codecgen
	update-generated-docs
	update-generated-swagger-docs
	update-swagger-spec
	update-openapi-spec
	update-api-reference-docs
	update-federation-openapi-spec
	update-staging-client-go
	update-staging-godeps
	update-bazel"

hack/update-staging-client-go.sh

The purpose of update-staging-client-go.sh is to update the files under staging/src/k8s.io/client-go/.

It first checks whether godep restore has been run, making sure kubernetes's dependencies are installed in $GOPATH.

It then runs staging/copy.sh, which updates staging/src/k8s.io/client-go; for the details see: using k8s's third-party packages

make release

make release directly executes the build/release.sh script:

release:
	build/release.sh

If the variable KUBE_FASTBUILD is "true", only linux/amd64 is released; otherwise all platforms are.

hack/lib/golang.sh:

if [[ "${KUBE_FASTBUILD:-}" == "true" ]]; then
  readonly KUBE_SERVER_PLATFORMS=(linux/amd64)
  readonly KUBE_NODE_PLATFORMS=(linux/amd64)
  if [[ "${KUBE_BUILDER_OS:-}" == "darwin"* ]]; then
    readonly KUBE_TEST_PLATFORMS=(
      darwin/amd64
      linux/amd64
    )
    readonly KUBE_CLIENT_PLATFORMS=(
      darwin/amd64
      linux/amd64
    )
...

build/release.sh

release.sh first builds in a container, then packages the results:

src/k8s.io/kubernetes/build/release.sh:

kube::build::verify_prereqs
kube::build::build_image
kube::build::run_build_command make cross

if [[ $KUBE_RELEASE_RUN_TESTS =~ ^[yY]$ ]]; then
  kube::build::run_build_command make test
  kube::build::run_build_command make test-integration
fi

kube::build::copy_output

kube::release::package_tarballs
kube::release::package_hyperkube

kube::release::package_tarballs

function kube::release::package_tarballs() {
  # Clean out any old releases
  rm -rf "${RELEASE_DIR}"
  mkdir -p "${RELEASE_DIR}"
  # package the source code
  kube::release::package_src_tarball &
  # package the client programs
  kube::release::package_client_tarballs &
  kube::release::package_salt_tarball &
  kube::release::package_kube_manifests_tarball &
  kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }

  # _node and _server tarballs depend on _src tarball
  kube::release::package_node_tarballs &
  kube::release::package_server_tarballs &
  kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }

  kube::release::package_final_tarball & # _final depends on some of the previous phases
  kube::release::package_test_tarball & # _test doesn't depend on anything
  kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
}

kube::release::package_src_tarball()

# Package the source code we built, for compliance/licensing/audit/yadda.
function kube::release::package_src_tarball() {
  kube::log::status "Building tarball: src"
  local source_files=(
    $(cd "${KUBE_ROOT}" && find . -mindepth 1 -maxdepth 1 \
      -not \( \
        \( -path ./_\*        -o \
           -path ./.git\*     -o \
           -path ./.config\* -o \
           -path ./.gsutil\*    \
        \) -prune \
      \))
  )
  "${TAR}" czf "${RELEASE_DIR}/kubernetes-src.tar.gz" -C "${KUBE_ROOT}" "${source_files[@]}"
}

kube::release::package_client_tarballs

Packages the programs in the variable KUBE_CLIENT_BINARIES:

.:
kubernetes

./kubernetes:
client

./kubernetes/client:
bin

./kubernetes/client/bin:
kubectl  kubefed

kube::release::package_salt_tarball

Packages the cluster/saltbase/ directory, a set of scripts for deploying a k8s cluster with SaltStack.

kube::release::package_kube_manifests_tarball

# This will pack kube-system manifests files for distros without using salt
# such as GCI and Ubuntu Trusty. We directly copy manifests from
# cluster/addons and cluster/saltbase/salt. The script of cluster initialization
# will remove the salt configuration and evaluate the variables in the manifests.

kube::release::package_node_tarballs

Packages the programs listed in the variable KUBE_NODE_BINARIES:

.:
kubernetes

./kubernetes:
kubernetes-src.tar.gz  LICENSES  node

./kubernetes/node:
bin

./kubernetes/node/bin:
kubectl  kubefed  kubelet  kube-proxy

kube::release::package_server_tarballs

Packages the programs listed in KUBE_NODE_BINARIES and KUBE_SERVER_BINARIES:

.:
kubernetes

./kubernetes:
addons  kubernetes-src.tar.gz  LICENSES  server

./kubernetes/addons:

./kubernetes/server:
bin

./kubernetes/server/bin:
cloud-controller-manager  kube-aggregator          kubectl  kube-proxy
hyperkube                 kube-apiserver           kubefed  kube-scheduler
kubeadm                   kube-controller-manager  kubelet

It also creates docker images for the server programs:

kube::release::create_docker_images_for_server "${release_stage}/server/bin" "${arch}"

kube::release::create_docker_images_for_server

Defined in lib/release.sh; it packages each of these programs into its own docker image:

local binaries=($(kube::build::get_docker_wrapped_binaries ${arch}))

get_docker_wrapped_binaries is defined in build/common.sh:

kube::build::get_docker_wrapped_binaries() {
  debian_iptables_version=v7
  case $1 in
    "amd64")
        local targets=(
          kube-apiserver,busybox
          kube-controller-manager,busybox
          kube-scheduler,busybox
          kube-aggregator,busybox
          kube-proxy,gcr.io/google-containers/debian-iptables-amd64:${debian_iptables_version}
        );;
    "arm")
        local targets=(
          kube-apiserver,armel/busybox
          kube-controller-manager,armel/busybox
          kube-scheduler,armel/busybox
          kube-aggregator,armel/busybox
          kube-proxy,gcr.io/google-containers/debian-iptables-arm:${debian_iptables_version}
        );;
    ......
  esac

echo "${targets[@]}"
}

In each target, the part before the comma is binary_name, the program to package into the container; the part after is base_image.

The Dockerfile:

printf " FROM ${base_image} \n ADD ${binary_name} /usr/local/bin/${binary_name}\n" > ${docker_file_path}
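The generated Dockerfile is only two lines; running the printf with example values (busybox wrapping kube-scheduler, as in the amd64 target list) shows exactly what gets built:

```shell
# Print the wrapper Dockerfile that release.sh generates (example values):
base_image=busybox
binary_name=kube-scheduler
printf " FROM ${base_image} \n ADD ${binary_name} /usr/local/bin/${binary_name}\n"
# prints:
#  FROM busybox
#  ADD kube-scheduler /usr/local/bin/kube-scheduler
```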

The final images are saved as:

"${DOCKER[@]}" save ${docker_image_tag} > ${binary_dir}/${binary_name}.tar

./_output/release-stage/server/linux-amd64/kubernetes/server/bin/kube-controller-manager.tar
./_output/release-stage/server/linux-amd64/kubernetes/server/bin/kube-scheduler.tar
./_output/release-stage/server/linux-amd64/kubernetes/server/bin/kube-proxy.tar
./_output/release-stage/server/linux-amd64/kubernetes/server/bin/kube-apiserver.tar
./_output/release-stage/server/linux-amd64/kubernetes/server/bin/kube-aggregator.tar

kube::release::package_final_tarball

# This is all the platform-independent stuff you need to run/install kubernetes.
# Arch-specific binaries will need to be downloaded separately (possibly by
# using the bundled cluster/get-kube-binaries.sh script).
# Included in this tarball:
#   - Cluster spin up/down scripts and configs for various cloud providers
#   - Tarballs for salt configs that are ready to be uploaded
#     to master by whatever means appropriate.
#   - Examples (which may or may not still work)
#   - The remnants of the docs/ directory

kube::release::package_test_tarball() {
# This is the stuff you need to run tests from the binary distribution.

release

The release process has already been split out and published as a separate project: k8s release.

Building the RPM

cd rpm
./docker-build.sh

Appendix

Output of make all:

+++ [0518 15:19:14] Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
    k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0518 15:19:14] Generating bindata:
    test/e2e/generated/gobindata_util.go
~/Work/Docker/GOPATH/src/k8s.io/kubernetes ~/Work/Docker/GOPATH/src/k8s.io/kubernetes/test/e2e/generated
~/Work/Docker/GOPATH/src/k8s.io/kubernetes/test/e2e/generated
+++ [0518 15:19:14] Building go targets for darwin/amd64:
    cmd/libs/go2idl/deepcopy-gen
+++ [0518 15:19:18] Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
    k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0518 15:19:18] Generating bindata:
    test/e2e/generated/gobindata_util.go
~/Work/Docker/GOPATH/src/k8s.io/kubernetes ~/Work/Docker/GOPATH/src/k8s.io/kubernetes/test/e2e/generated
~/Work/Docker/GOPATH/src/k8s.io/kubernetes/test/e2e/generated
+++ [0518 15:19:19] Building go targets for darwin/amd64:
    cmd/libs/go2idl/defaulter-gen
+++ [0518 15:19:23] Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
    k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0518 15:19:23] Generating bindata:
    test/e2e/generated/gobindata_util.go
~/Work/Docker/GOPATH/src/k8s.io/kubernetes ~/Work/Docker/GOPATH/src/k8s.io/kubernetes/test/e2e/generated
~/Work/Docker/GOPATH/src/k8s.io/kubernetes/test/e2e/generated
+++ [0518 15:19:23] Building go targets for darwin/amd64:
    cmd/libs/go2idl/conversion-gen
+++ [0518 15:19:27] Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
    k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0518 15:19:27] Generating bindata:
    test/e2e/generated/gobindata_util.go
~/Work/Docker/GOPATH/src/k8s.io/kubernetes ~/Work/Docker/GOPATH/src/k8s.io/kubernetes/test/e2e/generated
~/Work/Docker/GOPATH/src/k8s.io/kubernetes/test/e2e/generated
+++ [0518 15:19:28] Building go targets for darwin/amd64:
    cmd/libs/go2idl/openapi-gen
+++ [0518 15:19:33] Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
    k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0518 15:19:33] Generating bindata:
    test/e2e/generated/gobindata_util.go
~/Work/Docker/GOPATH/src/k8s.io/kubernetes ~/Work/Docker/GOPATH/src/k8s.io/kubernetes/test/e2e/generated
~/Work/Docker/GOPATH/src/k8s.io/kubernetes/test/e2e/generated
+++ [0518 15:19:34] Building go targets for darwin/amd64:
    cmd/kube-proxy
    cmd/kube-apiserver
    cmd/kube-controller-manager
    cmd/cloud-controller-manager
    cmd/kubelet
    cmd/kubeadm
    cmd/hyperkube
    vendor/k8s.io/kube-aggregator
    plugin/cmd/kube-scheduler
    cmd/kubectl
    federation/cmd/kubefed
    cmd/gendocs
    cmd/genkubedocs
    cmd/genman
    cmd/genyaml
    cmd/mungedocs
    cmd/genswaggertypedocs
    cmd/linkcheck
    examples/k8petstore/web-server/src
    federation/cmd/genfeddocs
    vendor/github.com/onsi/ginkgo/ginkgo
    test/e2e/e2e.test
    cmd/kubemark
    vendor/github.com/onsi/ginkgo/ginkgo
    test/e2e_node/e2e_node.test
    cmd/gke-certificates-controller
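The log above is from a darwin/amd64 host. Once `make all` finishes, the compiled binaries end up under `_output/`; a sketch of where to look (layout observed in 1.7-era builds, and it has shifted between versions):

```shell
# Host-platform binaries (convenience symlink/directory)
ls _output/bin/

# Per-platform output from make all with KUBE_BUILD_PLATFORMS set
ls _output/local/bin/darwin/amd64/

# Cross builds done via build/run.sh land under
# _output/dockerized/bin/<os>/<arch>/ instead.
```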

References

  1. k8s development Guide
  2. Building kubernetes
  3. Install and Use GNU Command Line Tools on macOS/OS X
  4. gengo
  5. k8s的第三方包的使用
  6. To start developing kubernetes
  7. k8s release
  8. k8s release binary
  9. k8s build local
  10. k8s build in docker

