Hosts
Actual hosts are not yet allocated. Each machine expected to be used has identical specifications, as detailed below.
...
The IOL RI2 pod 1 is currently deployed across 6 hosts. Five of them (hpe12, hpe29, hpe27, hpe31, and hpe07) make up the Kubernetes (k8s) cluster, and a sixth node (hpe30) serves as the jumphost.
The IOL RI2 pod 2 is deployed with the same topology, with hpe9 as the jumphost and hpe16, hpe18, hpe21, hpe37, and hpe38 as node1 through node5.
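Where later steps need the pod layout programmatically, it can be captured as a small mapping. A minimal sketch, with hostnames as listed above; the node1..node5 ordering for pod 1 is not stated on this page, so that order is an assumption:

```python
# Pod layout as described above. The pod 1 node ordering is an assumption;
# pod 2 nodes are listed in node1..node5 order per the text.
PODS = {
    "pod1": {"jumphost": "hpe30", "nodes": ["hpe12", "hpe29", "hpe27", "hpe31", "hpe07"]},
    "pod2": {"jumphost": "hpe9",  "nodes": ["hpe16", "hpe18", "hpe21", "hpe37", "hpe38"]},
}
```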
HPE x86_64 node:
Component | Attribute | Value |
---|---|---|
Memory | Capacity | 512 GB |
Memory | Technology | DDR4 |
Network Interface Type 1 | Count | 4 |
Network Interface Type 1 | Speed | 25 Gbit |
Network Interface Type 1 | Model | |
Network Interface Type 2 | Count | 2 |
Network Interface Type 2 | Speed | 10 Gbit |
Network Interface Type 2 | Model | |
CPU | Socket count | 2 |
CPU | Cores/Socket | 22 |
CPU | Threads/Core | 2 |
CPU | Model | |
Disk Type 1 | Capacity | ~1 TiB |
Disk Type 1 | Count | 3 |
Disk Type 1 | Interface | SATA 3 |
Disk Type 1 | Storage Type | SSD |
Disk Type 1 | RAID | None |
Disk Type 2 | Capacity | ~800 GiB |
Disk Type 2 | Count | 1 |
Disk Type 2 | Interface | SATA 3 |
Disk Type 2 | RAID | 1 (two 480 GiB members) |
Disk Type 2 | Storage Type | SSD |
Feature Support | RedFish | |
Feature Support | IPMI | Yes |
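Since the nodes support Redfish (and IPMI), the inventory above can be spot-checked out of band. A minimal sketch using the standard Redfish system resources; the BMC address and credentials below are placeholders, not values from this page:

```python
# Hypothetical out-of-band inventory check against a node's BMC via Redfish.
# The BMC hostname and credentials are placeholders; adjust to the pod's OOB setup.
import requests

BMC = "https://hpe12-bmc.example"   # placeholder BMC address
AUTH = ("admin", "password")        # placeholder credentials

# Standard Redfish entry point: enumerate the systems exposed by the BMC.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()

for member in systems.get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print("Memory (GiB):", system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))
    print("CPU sockets: ", system.get("ProcessorSummary", {}).get("Count"))
    print("CPU model:   ", system.get("ProcessorSummary", {}).get("Model"))
```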
Networking
...
RI2 pod 1 has 7 layer 3 networks:
Name | DHCP Provided | Subnet | Gateway | Prefix Length | Underlay VLAN |
---|---|---|---|---|---|
public | no | 10.200.120.0 | 10.200.120.1 | 24 | 120 |
oob | no | 10.200.122.0 | N/A | 24 | 122 |
mgmt | no | 10.200.123.0 | 10.200.123.1 | 24 | 123 |
private_1 | no | 127.0.101.0 | N/A | 24 | 201 |
private_2 | no | 127.0.102.0 | N/A | 24 | 202 |
private_3 | no | 127.0.103.0 | N/A | 24 | 203 |
private_4 | no | 127.0.104.0 | N/A | 24 | 200 |
RI2 pod 2 also has 7 layer 3 networks, with different underlay VLANs:
Name | DHCP Provided | Subnet | Gateway | Prefix Length | Underlay VLAN |
---|---|---|---|---|---|
public | no | 10.200.138.0 | 10.200.120.1 | 24 | 138 |
oob | no | 10.200.111.0 | N/A | 24 | 111 |
mgmt | no | 10.200.128.0 | 10.200.123.1 | 24 | 128 |
private_1 | no | 127.0.101.0 | N/A | 24 | 210 |
private_2 | no | 127.0.102.0 | N/A | 24 | 211 |
private_3 | no | 127.0.103.0 | N/A | 24 | 212 |
private_4 | no | 127.0.104.0 | N/A | 24 | 209 |
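For scripting against this layout, the network tables can be captured as plain data. A minimal sketch for pod 1 only (pod 2 would be a parallel structure with its own subnets and VLANs); this is illustrative and not taken from the actual generation scripts:

```python
# Pod 1 layer-3 networks, transcribed from the table above.
import ipaddress

POD1_NETWORKS = {
    "public":    {"subnet": "10.200.120.0/24", "gateway": "10.200.120.1", "vlan": 120},
    "oob":       {"subnet": "10.200.122.0/24", "gateway": None,           "vlan": 122},
    "mgmt":      {"subnet": "10.200.123.0/24", "gateway": "10.200.123.1", "vlan": 123},
    "private_1": {"subnet": "127.0.101.0/24",  "gateway": None,           "vlan": 201},
    "private_2": {"subnet": "127.0.102.0/24",  "gateway": None,           "vlan": 202},
    "private_3": {"subnet": "127.0.103.0/24",  "gateway": None,           "vlan": 203},
    "private_4": {"subnet": "127.0.104.0/24",  "gateway": None,           "vlan": 200},
}

# Parsed form, convenient for deriving per-node addresses.
POD1_SUBNETS = {name: ipaddress.ip_network(net["subnet"]) for name, net in POD1_NETWORKS.items()}
```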
The jumphost has the following connections by interface name:
Interface | Network | Handles Default Route | IP |
---|---|---|---|
ens1f0 | mgmt | yes | 10.200.123.11 |
ens1f1 | oob | no | 10.200.122.16 |
Each cluster node is connected to the networks as follows:
Interface | Network | Handles Default Route | IP |
---|---|---|---|
ens1f0 | oob | no | <subnet>.<node # + 10> |
ens1f1 | public | yes | <subnet>.<node # + 10> |
eno49 | private_2 | no | <subnet>.<node # + 10> |
eno50 | private_3 | no | <subnet>.<node # + 10> |
ens4f0 | private_4 | no | <subnet>.<node # + 10> |
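The `<subnet>.<node # + 10>` convention can be expressed directly in code. A minimal sketch using pod 1's public and oob subnets; the node numbering is assumed to run 1 through 5 as described above:

```python
# Per-node addressing convention: on each connected /24, node N takes host
# address N + 10 (so node1 -> .11, node5 -> .15).
import ipaddress

def node_ip(network: ipaddress.IPv4Network, node_number: int) -> ipaddress.IPv4Address:
    """Return the address of node `node_number` on `network` per the .<N+10> rule."""
    return network.network_address + node_number + 10

public = ipaddress.ip_network("10.200.120.0/24")  # pod 1 public
oob = ipaddress.ip_network("10.200.122.0/24")     # pod 1 oob
for n in range(1, 6):
    print(f"node{n}: public={node_ip(public, n)}  oob={node_ip(oob, n)}")
# node1: public=10.200.120.11  oob=10.200.122.11
# ...
# node5: public=10.200.120.15  oob=10.200.122.15
```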
- Update topology, integrate with draft PDF/IDF
-
The validated descriptor files (PDF and IDF) may help establish the network topology we need. They are defined as YAML schemas under hw_config/intel and can be used as a reference. The code can be found at https://gerrit.opnfv.org/gerrit/admin/repos/kuberef .
Draft PDF: attached to this page as pdf.yaml.
- Given that the IDF appears to have two networks with DNS/gateway, should oob be routable to the outside world, and is that a requirement?
- Yes. We are expected to make minimal changes before a successful run.
- Source IDF/PDF uses multiple undeclared private networks for hosts
-
- Static leases/direct pipes for external traffic
- We can't provide directly assigned static IPs (traffic needs to be NATed); find out whether this is workable or a more exotic solution is necessary.
...
OS Environment
The node OS is wiped by kuberef (bare-metal deploy); the jumphost can run Ubuntu or CentOS.
- Determine the OS variant and version for the jumphost
- Access/permissions for the jumphost: keys/accounts for all involved parties (add each as sub-points below this)
PDF/IDF
...
The current IDF and PDF for the pod are idf.yaml and pdf.yaml.
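To inspect the descriptor files alongside the tables on this page, they can be loaded with PyYAML. A minimal sketch; the top-level keys shown (idf, net_config) are assumptions about the kuberef schema and may need adjusting:

```python
# Load the attached descriptors and print the IDF network section for comparison
# with the Networking tables above. Key names are assumptions about the schema.
import yaml

with open("idf.yaml") as f:
    idf = yaml.safe_load(f) or {}
with open("pdf.yaml") as f:
    pdf = yaml.safe_load(f) or {}

print(yaml.safe_dump(idf.get("idf", {}).get("net_config", {}), sort_keys=False))
```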
Generation Scripts
A series of scripts was created to generate the configuration files and the PDF/IDF pair for the pod. These files are available in the kuberef repository at <awaiting PR>.
The versions used to create the current deployment of the pod are attached to this page as install.py and gen_net_configs.py.
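For context on what such generation scripts do, the sketch below renders a netplan-style fragment per node from the conventions above. This is not the attached install.py or gen_net_configs.py; the interface-to-network pairing follows the host connection table, while the output format and gateway handling are assumptions:

```python
# Illustrative only: emit a netplan-style network config per node using the
# pod 1 subnets, the interface mapping from the host connection table, and the
# .<N+10> addressing rule. The actual scripts may produce a different format.
import ipaddress
import yaml

IFACE_TO_SUBNET = {
    "ens1f0": "10.200.122.0/24",  # oob
    "ens1f1": "10.200.120.0/24",  # public, carries the default route
    "eno49":  "127.0.102.0/24",   # private_2
    "eno50":  "127.0.103.0/24",   # private_3
    "ens4f0": "127.0.104.0/24",   # private_4
}
PUBLIC_GATEWAY = "10.200.120.1"

def netplan_for_node(node_number: int) -> str:
    ethernets = {}
    for iface, subnet in IFACE_TO_SUBNET.items():
        net = ipaddress.ip_network(subnet)
        addr = str(net.network_address + node_number + 10)
        cfg = {"addresses": [f"{addr}/{net.prefixlen}"]}
        if iface == "ens1f1":  # default route via the public gateway
            cfg["routes"] = [{"to": "default", "via": PUBLIC_GATEWAY}]
        ethernets[iface] = cfg
    return yaml.safe_dump({"network": {"version": 2, "ethernets": ethernets}}, sort_keys=False)

print(netplan_for_node(1))
```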