History: Mar-11-2020 | Xiaohua Zhang | Init.
Mar-23-2020 | Xiaohua Zhang | Add the image build process.
a. Remove all the existing partitions (run "sudo gparted" to start the partition tool, then delete every partition on the SD card).
b. Program the image, for example: "dd if=xxx.wic of=/dev/sde bs=4M" (replace /dev/sde with your SD card device).
c. Resize the root partition to fit the SD card size.
Linux ~ $sudo parted /dev/sde
GNU Parted 3.2
Using /dev/sde
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: Generic- USB3.0 CRW -SD (scsi)
Disk /dev/sde: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End     Size    Type     File system  Flags
 1      4096B   34.1MB  34.1MB  primary  fat16        boot, lba
 2      34.1MB  1971MB  1937MB  primary  ext4
(parted) resizepart 2 100G
(parted) p
Model: Generic- USB3.0 CRW -SD (scsi)
Disk /dev/sde: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End     Size    Type     File system  Flags
 1      4096B   34.1MB  34.1MB  primary  fat16        boot, lba
 2      34.1MB  110GB   110GB   primary  ext4
(parted) quit
Information: You may need to update /etc/fstab.
Linux ~ $
Linux ~ $sudo resize2fs /dev/sde2 100000M
resize2fs 1.42.13 (17-May-2015)
The containing partition (or device) is only 26847145 (4k) blocks.
You requested a new size of 28672000 blocks.
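Note that resize2fs rejects the request above because the new size is larger than the containing partition. A simpler alternative (standard resize2fs behavior, not shown in the original) is to omit the size argument, which grows the file system to exactly fill the partition:
Linux ~ $sudo resize2fs /dev/sde2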
d. Repair the file system.
Linux ~ $sudo gparted
Right-click the file system of the second partition of the SD card, select "Check", and apply the pending operation.
Set the correct boot arguments if needed. The following is just an example:
setenv othbootargs defaulthugepagesz=2m hugepagesz=2m hugepages=1024
setenv bootargs console=ttyAMA0,115200 earlycon=pl011,mmio32,0x21c0000 root=/dev/mmcblk0p2 rw no_console_suspend ip=dhcp ${othbootargs}
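These commands are entered at the U-Boot prompt. If the settings should survive a reboot (assuming the board's U-Boot has persistent environment storage configured), save the environment before booting:
=> saveenv
=> boot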
a. Modify /etc/hosts to add the IP address and hostname of the board.
root@nxp-lx2xxx:~# vi /etc/hosts
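For example, assuming the board uses the address and hostname that appear in the kubeadm transcript below (adjust both to your own network), the entry would look like:
128.224.180.85   nxp-lx2xxx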
b. Load the Kubernetes images.
root@nxp-lx2xxx:~/k8s_img# docker load -i k8s_arm_docker_img.tar.bz2
fb61a074724d: Loading layer [==================================================>] 479.7kB/479.7kB
72b921579c94: Loading layer [==================================================>] 37.13MB/37.13MB
Loaded image: k8s.gcr.io/coredns:1.3.1
d74829304bab: Loading layer [==================================================>] 1.506MB/1.506MB
3220e8ea3516: Loading layer [==================================================>] 274.3MB/274.3MB
6ad8f49c4795: Loading layer [==================================================>] 24.74MB/24.74MB
Loaded image: k8s.gcr.io/etcd:3.3.10
a764d9373677: Loading layer [==================================================>] 101.1MB/101.1MB
Loaded image: k8s.gcr.io/kubernetes-dashboard-arm64:v1.8.3
32626eb1fe89: Loading layer [==================================================>] 526.8kB/526.8kB
Loaded image: k8s.gcr.io/pause:3.1
186143d216c2: Loading layer [==================================================>] 46.24MB/46.24MB
5443b3d4b49a: Loading layer [==================================================>] 3.314MB/3.314MB
9db5a558d3dd: Loading layer [==================================================>] 35.39MB/35.39MB
Loaded image: k8s.gcr.io/kube-proxy:v1.15.2
5ccd40985620: Loading layer [==================================================>] 37.11MB/37.11MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.15.2
f73db37b5f6f: Loading layer [==================================================>] 110.9MB/110.9MB
Loaded image: k8s.gcr.io/kube-controller-manager:v1.15.2
41c8d5c437b8: Loading layer [==================================================>] 159.8MB/159.8MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.15.2
root@nxp-lx2xxx:~/k8s_img# docker images
REPOSITORY                              TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-controller-manager      v1.15.2   944ad6366dd1   7 months ago    155MB
k8s.gcr.io/kube-apiserver               v1.15.2   6d05a94aba7b   7 months ago    204MB
k8s.gcr.io/kube-proxy                   v1.15.2   e9d3fffed72c   7 months ago    82.9MB
k8s.gcr.io/kube-scheduler               v1.15.2   b9dbcd37c857   7 months ago    81.7MB
k8s.gcr.io/coredns                      1.3.1     7e8edeee9a1e   13 months ago   37.4MB
k8s.gcr.io/etcd                         3.3.10    ad99d3ead043   15 months ago   300MB
k8s.gcr.io/kubernetes-dashboard-arm64   v1.8.3    564d0a97a393   2 years ago     101MB
k8s.gcr.io/pause                        3.1       6cf7c80fe444   2 years ago     525kB
c. Initialize Kubernetes:
1) kubeadm init --kubernetes-version v1.15.2 --pod-network-cidr=10.244.0.0/16
root@nxp-lx2xxx:~# kubeadm init --kubernetes-version v1.15.2 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [nxp-lx2xxx localhost] and IPs [128.224.180.85 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [nxp-lx2xxx localhost] and IPs [128.224.180.85 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [nxp-lx2xxx kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 128.224.180.85]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.008154 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node nxp-lx2xxx as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node nxp-lx2xxx as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: tzsi2u.uyrk8dhzz29pa6gw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 128.224.180.85:6443 --token tzsi2u.uyrk8dhzz29pa6gw \
--discovery-token-ca-cert-hash sha256:129d1e286043554745d6386a3a17a059abb0e16cf295e347cd033a9f89177c97
root@nxp-lx2xxx:~#
d. Save the configuration.
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
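When working as root, an equivalent shortcut (standard kubectl behavior, not part of the original steps) is to point KUBECONFIG at the admin configuration directly:
root@nxp-lx2xxx:~# export KUBECONFIG=/etc/kubernetes/admin.conf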
e. Untaint the node. This is needed only for a single-node setup in which the same node runs as both master and worker at the same time:
kubectl taint nodes nxp-lx2xxx node-role.kubernetes.io/master-
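To verify that the taint is gone (a quick check, not in the original), inspect the node; the Taints field should read <none>:
root@nxp-lx2xxx:~# kubectl describe node nxp-lx2xxx | grep -i taints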
f. Start the Flannel CNI.
kubectl apply -f $HOME/k8s_img/kube-flannel.yml
root@nxp-lx2xxx:~/k8s_img# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
g. Start the Kubernetes Dashboard.
kubectl apply -f $HOME/k8s_img/kubernetes-dashboard-admin.rbac.yaml
kubectl apply -f $HOME/k8s_img/kubernetes-dashboard.yaml
root@nxp-lx2xxx:~/k8s_img# kubectl apply -f $HOME/k8s_img/kubernetes-dashboard-admin.rbac.yaml
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created
root@nxp-lx2xxx:~/k8s_img# kubectl apply -f $HOME/k8s_img/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
root@nxp-lx2xxx:~/k8s_img# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
nxp-lx2xxx   Ready    master   5m3s   v1.15.2-dirty
root@nxp-lx2xxx:~/k8s_img# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-5nbxd                1/1     Running   0          5m4s
kube-system   coredns-5c98db65d4-lvrl5                1/1     Running   0          5m4s
kube-system   etcd-nxp-lx2xxx                         1/1     Running   0          4m27s
kube-system   kube-apiserver-nxp-lx2xxx               1/1     Running   0          4m31s
kube-system   kube-controller-manager-nxp-lx2xxx      1/1     Running   0          4m33s
kube-system   kube-flannel-ds-arm64-hffh7             1/1     Running   0          2m22s
kube-system   kube-proxy-8qprt                        1/1     Running   0          5m3s
kube-system   kube-scheduler-nxp-lx2xxx               1/1     Running   0          4m27s
kube-system   kubernetes-dashboard-8588454c87-fkdtq   1/1     Running   0          75s
h. Get the token for accessing the Kubernetes dashboard.
kubectl -n kube-system get secret | grep dashboard-admin-token
kubectl -n kube-system describe secret kubernetes-dashboard-admin-token-<suffix>
Use the secret name printed by the first command; the suffix is generated randomly, so it differs on every deployment, as the transcript below shows.
root@nxp-lx2xxx:~/k8s_img# kubectl -n kube-system get secret |grep dashboard-admin-token
kubernetes-dashboard-admin-token-2pfsf   kubernetes.io/service-account-token   3      13m
root@nxp-lx2xxx:~/k8s_img# kubectl -n kube-system describe secret kubernetes-dashboard-admin-token-2pfsf
Name:         kubernetes-dashboard-admin-token-2pfsf
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: 5ea2db81-cf24-4adc-aedb-9405daa642ee
Type:         kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi0ycGZzZiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjVlYTJkYjgxLWNmMjQtNGFkYy1hZWRiLTk0MDVkYWE2NDJlZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.d000kdRczW-E39ddv1HBfYK6WZt_hqrzD_0VD6Wl0fZ1cgYcEMImRGWwueqkZccd5h6Ul3pmTWC8xQg5T4gWnsqp6Z8XyC5CoqnuoIRBZ_LgXnnRqqAIQZYCxWo8dSpWwwvP62Id9r7i_jCrqico72MKHV8DkOC3SN5ym0Ga5WaRMWhhzKvNPfqrnnw4UgurMPeFF4cTZ8VcIhlu_w2A-fjLFgvRt7-16AipW1u1GO5k6esFE5dBkk7ENN4mpyItQ_u0_PL9sSVD08BzgZtP5gG8En3Mtz35m_3nJMgAEljsWjNsLvK9WSeAr5nF9pf42O_kMPLFdkPodLkAOOlt0w
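The original does not say how the dashboard is exposed. One common way to reach a v1.x dashboard (an assumption, not confirmed by the source) is through kubectl proxy, logging in with the token above:
root@nxp-lx2xxx:~/k8s_img# kubectl proxy --address=0.0.0.0 --accept-hosts='.*' &
Then open http://<board-ip>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ in a browser and select the "Token" sign-in option.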
a. Download the wr-app-container-nxp-lx2xxx.tar.bz2.
b. docker import wr-app-container-nxp-lx2xxx.tar.bz2 wr-app-container:v1
c. docker run -it --privileged=true -v /root:/home -v /dev:/dev wr-app-container:v1 /bin/bash
(The image name and a command must be given explicitly, because docker import creates an image without a default command; /bin/bash is assumed to exist in the container.)
apiVersion: v1
kind: Pod
metadata:
  name: du-l1l2
spec:
  containers:
  - name: du-l1
    image: "wr-app-container:v1"
    securityContext:
      privileged: true
      capabilities:
        add:
        - ALL
    command:
    - sleep
    - infinity
    stdin: true
    tty: true
    volumeMounts:
    - name: dev-volume
      mountPath: /dev
    - name: home-volume
      mountPath: /home
  - name: du-l2
    image: "wr-app-container:v1"
    securityContext:
      privileged: true
      capabilities:
        add:
        - ALL
    command:
    - sleep
    - infinity
    stdin: true
    tty: true
    volumeMounts:
    - name: dev-volume
      mountPath: /dev
    - name: home-volume
      mountPath: /home
  volumes:
  - name: dev-volume
    hostPath:
      path: /dev
  - name: home-volume
    hostPath:
      path: /root
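To run the two containers under Kubernetes, save the manifest to a file (the name du-l1l2.yaml is assumed here) and apply it, then attach to either container with kubectl exec:
root@nxp-lx2xxx:~# kubectl apply -f du-l1l2.yaml
root@nxp-lx2xxx:~# kubectl exec -it du-l1l2 -c du-l1 -- /bin/bash
The -c flag selects which of the two containers (du-l1 or du-l2) to enter.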
git clone https://gerrit.o-ran-sc.org/r/pti/rtp.git
./rtp/scripts/build_oran.sh -w workspace_nxp -b nxp-lx2xxx
After the build process finishes, you will find the final image at oran-inf-arm/workspace_nxp/prj_oran-inf/tmp-glibc/deploy/images/nxp-lx2xxx/oran-image-inf-host-nxp-lx2xxx.wic