Posted on November 15, 2019 2:07 am
 |  Asked by Yury Glukhovskoy
 |  204 views

Hi Colleagues,

I've been trying to start cEOS on Fedora 31.
The otherwise handy guide at https://eos.arista.com/ceos-lab-topo/ doesn't fully work there.

I managed to start the container, but couldn't execute the CLI.
It fails with:
"Error: executable file not found in $PATH: No such file or directory: OCI runtime command not found error".

Has anyone managed to get it running on a similar platform?

Regards,
Yury

Posted by Yury Glukhovskoy
Answered on November 15, 2019 3:56 pm

Just an update:

I installed cEOS on Ubuntu under genuine Docker and hit the same error when trying to "exec Cli".

However, "exec sh" works on both Fedora and Ubuntu.

Posted by Tamas Plugor
Answered on November 15, 2019 5:01 pm

Which EOS version are you using, and how did you build the container?
The syntax for container creation changed after 4.22:

docker create --name=ceos1 --privileged \
  -e INTFTYPE=eth -e ETBA=1 -e SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 \
  -e CEOS=1 -e EOS_PLATFORM=ceoslab -e container=docker \
  -i -t ceosimage:4.23.0F /sbin/init \
  systemd.setenv=INTFTYPE=eth systemd.setenv=ETBA=1 \
  systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 \
  systemd.setenv=CEOS=1 systemd.setenv=EOS_PLATFORM=ceoslab \
  systemd.setenv=container=docker

Can you try using these variables?

Thanks,
Tamas
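The duplication in the command above (every variable passed both with -e for the container runtime and as a systemd.setenv= argument to /sbin/init) is easy to get wrong by hand. Here is a sketch that generates the command from a single variable list; IMAGE, NAME, CMD, and the helper structure are illustrative placeholders, not part of the official procedure:

```shell
#!/bin/sh
# Build the cEOS "docker create" command for EOS >= 4.22. Each variable
# must appear twice: once as -e for the container runtime, and once as a
# systemd.setenv= argument so /sbin/init exports it inside the container.
IMAGE="ceosimage:4.23.0F"   # placeholder image tag
NAME="ceos1"                # placeholder container name
VARS="INTFTYPE=eth ETBA=1 SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 CEOS=1 EOS_PLATFORM=ceoslab container=docker"

ENV_FLAGS=""
SETENV_ARGS=""
for v in $VARS; do
    ENV_FLAGS="$ENV_FLAGS -e $v"
    SETENV_ARGS="$SETENV_ARGS systemd.setenv=$v"
done

CMD="docker create --name=$NAME --privileged$ENV_FLAGS -i -t $IMAGE /sbin/init$SETENV_ARGS"
echo "$CMD"
```

Generating the command this way keeps the -e flags and the systemd.setenv= arguments from drifting apart when you add or change a variable.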

Posted by Yury Glukhovskoy
Answered on November 18, 2019 11:37 am

Hi Tamas,

I know about the new syntax; it's mentioned in cEOS-lab-README-generic.txt in the download area.

I used essentially the same set of parameters to create the container, and I've just re-created it with an exact copy of your suggested parameters.
Unfortunately, that doesn't help.

Regards,
Yury

I've just installed Podman on CentOS 7 and it works fine for me:
[root@localhost ~]#podman import  cEOS-lab.tar.xz ceosimage:4.23.0F
Getting image source signatures
Copying blob cbd8f51e099a done
Copying config 3a55de2204 done
Writing manifest to image destination
Storing signatures
3a55de2204db97ae78dd86d7b69a71caaa187c49ed42843bef3c5b505d3df8ae

[root@localhost ~]# podman create --name=ceos1 --privileged -e INTFTYPE=eth -e ETBA=1 -e SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 -e CEOS=1 -e EOS_PLATFORM=ceoslab -e container=podman -i -t ceosimage:4.23.0F /sbin/init systemd.setenv=INTFTYPE=eth systemd.setenv=ETBA=1 systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 systemd.setenv=CEOS=1 systemd.setenv=EOS_PLATFORM=ceoslab systemd.setenv=container=podman
ab3cdc296595e1f8f9c66e495a7e5dca2bb1e1dcb2725d40a2c0a18a47258563


[root@localhost ~]# podman start ceos1
ceos1
Wait 20-30 seconds for the container and EOS to initialize, then you can attach via Cli or bash:
[root@localhost ~]# podman exec -it ceos1 Cli
ab3cdc296595>en
localhost#
localhost#
localhost#sh run
! Command: show running-config
! device: localhost (cEOSLab, EOS-4.23.0F)
!
transceiver qsfp default-mode 4x10G
!
agent PowerManager shutdown
agent LedPolicy shutdown
agent Thermostat shutdown
agent PowerFuse shutdown
agent StandbyCpld shutdown
agent LicenseManager shutdown
!
spanning-tree mode mstp
!
no aaa root
!
no ip routing
!
end
Let me know if this works for you! Thanks! Tamas
(Tamas Plugor at November 18, 2019 2:59 pm)
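The 20-30 second wait mentioned above can be scripted rather than guessed. This is a minimal POSIX-shell sketch; wait_for_ready is a hypothetical helper name, and the podman exec line in the comment is just an example of how it might be used:

```shell
#!/bin/sh
# Retry a command until it succeeds, up to MAX_TRIES attempts,
# sleeping SLEEP_SECS seconds between attempts.
# Returns non-zero if the command never succeeds.
wait_for_ready() {
    tries=0
    until "$@" >/dev/null 2>&1; do
        tries=$((tries + 1))
        if [ "$tries" -ge "${MAX_TRIES:-30}" ]; then
            return 1
        fi
        sleep "${SLEEP_SECS:-2}"
    done
    return 0
}

# Example usage (assumes a started container named ceos1):
#   wait_for_ready podman exec ceos1 Cli -c "show version" \
#       && podman exec -it ceos1 Cli
```

Polling like this avoids the "executable file not found" race when attaching immediately after "podman start".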
Posted by Yury Glukhovskoy
Answered on November 18, 2019 4:27 pm

Thanks for your comment, Tamas.

I tried the same and ran into this error:

[yurich@localhost Downloads]$ podman import cEOS-lab-4.23.0F.tar.xz ceosimage:4.23.0F
Getting image source signatures
Copying blob cbd8f51e099a done
Copying config 6dd59b52fb done
Writing manifest to image destination
Storing signatures
6dd59b52fbbc8c10d96456ab230718c0df0b0ee91a83a73cad131decdd384267
[yurich@localhost Downloads]$ podman create --name=ceos1 --privileged -e INTFTYPE=eth -e ETBA=1 -e SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 -e CEOS=1 -e EOS_PLATFORM=ceoslab -e container=podman -i -t ceosimage:4.23.0F /sbin/init systemd.setenv=INTFTYPE=eth systemd.setenv=ETBA=1 systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 systemd.setenv=CEOS=1 systemd.setenv=EOS_PLATFORM=ceoslab systemd.setenv=container=podman
2d60b332a82617a93159a68c80bf7908c168d0e53b61456beab07ac4108655e7
[yurich@localhost Downloads]$ podman start ceos1
2d60b332a82617a93159a68c80bf7908c168d0e53b61456beab07ac4108655e7
[yurich@localhost Downloads]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2d60b332a826 docker.io/library/ceosimage:4.23.0F /sbin/init system... 3 minutes ago Up 3 minutes ago ceos1
[yurich@localhost Downloads]$ podman exec -it ceos1 Cli
Error: executable file not found in $PATH: No such file or directory: OCI runtime command not found error
[yurich@localhost Downloads]$
[yurich@localhost Downloads]$

Regards,
Yury

That happened to me as well when I tried to attach to the container straight after starting it, but waiting half a minute worked. Does it work if you go to bash and then launch the Cli from there? In the meantime, I'll try this on Fedora 31 too.
(Tamas Plugor at November 18, 2019 5:13 pm)
Actually, I was trying to attach to the container 3 minutes after starting it; the output of "podman ps" above shows this.
(Yury Glukhovskoy at November 19, 2019 8:46 am)
I've deployed Fedora 31 and hit the same issue as you did. After a bit of research and comparing with CentOS, it seems there are known issues on Fedora 31 (Fedora 30 apparently didn't have them), and they're not related to cEOS, as Alpine and other container images break as well. Here's what I did: since Podman was throwing a runtime error, I diffed the Podman config between CentOS and Fedora and saw that CentOS uses runc, whereas Fedora uses crun by default. I switched Fedora to runc as well by editing
/usr/share/containers/libpod.conf
and setting
# Default OCI runtime
runtime = "runc"

After this, starting the container threw:
[root@localhost ~]# podman start ceos1
ERRO[0000] oci runtime "runc" does not support CGroups V2: use system migrate to mitigate
Error: unable to start container "ceos1": this version of runc doesn't work on cgroups v2: OCI runtime error
On some forums I found that I needed to run
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
and reboot, but that didn't help, so I just did what the error message suggested, i.e. ran "podman system migrate":
[root@localhost ~]# podman info --format '{{ .host.OCIRuntime.name }}'
runc
[root@localhost ~]# podman system migrate
stopped 5677b149c1cfed6e8554cdab311a50dc384d6cf124cd9f77d15d2c54f0a2f9d2
Error: Error shutting down container storage: A layer is mounted: layer is in use by a container
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# podman system migrate
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# podman ps -a
CONTAINER ID  IMAGE                                COMMAND               CREATED         STATUS                       PORTS  NAMES
5677b149c1cf  docker.io/library/ceosimage:4.23.0F  /sbin/init system...  11 minutes ago  Exited (137) 13 seconds ago         ceos1
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# podman start ceos1
5677b149c1cfed6e8554cdab311a50dc384d6cf124cd9f77d15d2c54f0a2f9d2
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# podman exec -it ceos1 bash
bash-4.2#
bash-4.2#
bash-4.2# cd /mnt/flash/
bash-4.2# ls -l
total 20
-rw-rw-r--+ 1 root root 462 Nov 19 14:31 AsuFastPktTransmit.log
drwxrwxr-x+ 2 root root   6 Nov 19 14:29 Fossil
-rw-rw-r--+ 1 root root 284 Nov 19 14:31 SsuRestore.log
-rw-rw-r--+ 1 root root 284 Nov 19 14:31 SsuRestoreLegacy.log
drwxrwx---+ 3 root root  77 Nov 19 14:30 debug
drwxrwxr-x+ 2 root root   6 Nov 19 14:29 fastpkttx.backup
-rw-rw-r--+ 1 root root 161 Nov 19 14:31 kickstart-config
drwxrwxr-x+ 3 root root  72 Nov 19 14:31 persist
-rw-rwx---+ 1 root root  13 Sep 26 20:27 zerotouch-config
bash-4.2# ^C
bash-4.2#
bash-4.2#
bash-4.2# exit
Error: non zero exit code: 130: OCI runtime error
[root@localhost ~]# podman exec -it ceos1 Cli
5677b149c1cf>en
% Authorization denied for command 'enable': Default authorization provider rejects all commands
5677b149c1cf>
5677b149c1cf>en
% Authorization denied for command 'enable': Default authorization provider rejects all commands
5677b149c1cf>en
5677b149c1cf#
localhost#
localhost#sh run
! Command: show running-config
! device: localhost (cEOSLab, EOS-4.23.0F)
!
transceiver qsfp default-mode 4x10G
!
agent PowerManager shutdown
agent LedPolicy shutdown
agent Thermostat shutdown
agent PowerFuse shutdown
agent StandbyCpld shutdown
agent LicenseManager shutdown
!
spanning-tree mode mstp
!
no aaa root
!
no ip routing
!
end
localhost#
As you can see, I was able to log in to the Cli after a few seconds (10-20 seconds). Let me know if this works for you! Thanks, Tamas
(Tamas Plugor at November 19, 2019 2:40 pm)
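The "runc doesn't work on cgroups v2" error above stems from Fedora 31 switching to the unified cgroup hierarchy by default. A quick way to check which hierarchy a host is on is to look for the cgroup.controllers file at the cgroup mount root, which only cgroups v2 exposes; cgroup_version is a hypothetical helper name for this sketch:

```shell
#!/bin/sh
# Report whether a cgroup mount point is v1 or v2 (unified hierarchy).
# Only cgroups v2 exposes a cgroup.controllers file at the mount root.
cgroup_version() {
    if [ -f "$1/cgroup.controllers" ]; then
        echo v2
    else
        echo v1
    fi
}

# On a real host:
#   cgroup_version /sys/fs/cgroup
```

Checking this up front tells you whether switching to runc (which at the time only supported cgroups v1) will need the grubby kernel argument or a different runtime.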
Posted by Yury Glukhovskoy
Answered on November 21, 2019 11:51 am

Hi Tamas,

It seems your solution works only for root.
Non-root users still have an issue; have a look below:

[yurich@localhost ~]$ sudo podman exec -it ceos1 Cli
localhost>exit
[yurich@localhost ~]$ podman exec -it ceos1 bash
bash-4.2# ls -la /mnt/flash/
total 44
drwxr-xr-x. 6 root root 4096 Nov 21 10:51 .
drwxr-xr-x. 3 root root 4096 Nov 21 10:51 ..
-rw-r--r--. 1 root root 231 Nov 21 10:51 AsuFastPktTransmit.log
drwxr-xr-x. 2 root root 4096 Nov 21 10:51 Fossil
-rw-r--r--. 1 root root 142 Nov 21 10:51 SsuRestore.log
-rw-r--r--. 1 root root 142 Nov 21 10:51 SsuRestoreLegacy.log
drwxrwx---. 3 root root 4096 Nov 21 10:51 debug
drwxr-xr-x. 2 root root 4096 Nov 21 10:51 fastpkttx.backup
-rw-r--r--. 1 root root 161 Nov 21 10:51 kickstart-config
drwxr-xr-x. 2 root root 4096 Nov 21 10:51 persist
-rw-------. 1 root root 13 Sep 26 20:27 zerotouch-config
bash-4.2# exit
exit
[yurich@localhost ~]$ podman exec -it ceos1 Cli
Cannot connect to ConfigAgent
Entering standalone shell.

Note: Standalone CLI does not share data with other agents, but it can
run certain commands such as 'bash' to troubleshoot the system within
authentication and authorization constraints.

Standalone>en
waiting for agent localhost:ar.Aaa to start .............................................timed out

% Internal error
% To see the details of this error, run the command 'show error 0'
Standalone>show error 0
waiting for agent localhost:ar.Aaa to start .............................................timed out

% Internal error
% To see the details of this error, run the command 'show error 1'
Standalone>

Regards,
Yury


A non-root user on CentOS has the same issue when the CLI starts...
(Yury Glukhovskoy at November 22, 2019 9:41 am)
It has nothing to do with the distros; you need to set up rootless Podman as per https://github.com/containers/libpod/blob/master/docs/tutorials/rootless_tutorial.md. Thanks to Rob Martin for pointing this out! We've been trying this out but are hitting some permission issues:
[tamas@localhost containers]$ podman logs ceos1
systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization other.
Detected architecture x86-64.

Welcome to CentOS Linux 7 (AltArch)!

Set hostname to .
Failed to read AF_UNIX datagram queue length, ignoring: No such file or directory
Failed to create root cgroup hierarchy: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object, freezing.
systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization other.
Detected architecture x86-64.

Welcome to CentOS Linux 7 (AltArch)!

Set hostname to .
Failed to read AF_UNIX datagram queue length, ignoring: No such file or directory
Failed to create root cgroup hierarchy: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object, freezing.
This seems to be an issue with most container images; I've seen many threads on various RHEL and GitHub forums about it and am still trying to figure out a fix. If anyone else finds a solution in the meantime, feel free to chime in! Thanks, Tamas
(Tamas Plugor at November 22, 2019 12:14 pm)
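For reference, the rootless tutorial linked above starts from subordinate UID/GID ranges for the unprivileged user. A sketch of the entries follows; the username and ranges are examples only, and note that this alone may not resolve the cgroup permission errors shown in the log above, which relate to running systemd inside a rootless container:

```
# /etc/subuid and /etc/subgid -- one line per user
# (run "podman system migrate" after changing these)
yurich:100000:65536
```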
