
OpenStack fails with Landscape

I followed the instructions from this site: http://www.openstackbasement.com/

I follow it all the way to the point where I have to issue "sudo openstack-install". It gets quite far, but then it stays stuck on the following screen: openstack-fail
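For reference, the command is simply run as below, and the installer's progress can also be followed from a second terminal. The commands.log file name is an assumption on my part; the ~/.cloud-install state directory itself shows up in the controller log further down.

# Start the installer (this is the step that eventually hangs)
sudo openstack-install

# In another terminal, follow the installer's own log while it runs
# (file name assumed; the installer keeps its state under ~/.cloud-install)
tail -f ~/.cloud-install/commands.log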

After about 2900 seconds it fails. Here is the error log from the MAAS controller: http://Pastebin.com/raw/A7qtJm4v (1)

The error log on the deployed node looks like this: http://Pastebin.com/raw/zuX1TJcB (2)

The screen of the deployed node looks like this: openstack-fail2

Question: what is causing this, and how can I resolve it?

P.S. I had to skip some text at the beginning of log 2, otherwise it would have been too long to post. The Pastebin contains the full file.

1)

[INFO: 09-05 23:32:16, openstack-install:227] Starting OpenStack Installer v0.99.28
[INFO: 09-05 23:32:16, openstack-install:228] Start command: ['/usr/bin/openstack-install']
[INFO: 09-05 23:32:16, openstack-install:239] Creating juju directories: /home/[user]/.cloud-install/juju
[INFO: 09-05 23:32:19, openstack-install:295] Running Liberty release
[INFO: 09-05 23:32:32, installbase.py:132] Performing an Autopilot install
[INFO: 09-05 23:32:32, utils.py:780] pollinate: sudo su - -c 'pollinate -q -r --curl-opts "-k --user-agent uoi/bb2f64a2-82d8-4196-b55b-1f42c0141c1b/IL"'
[DEBUG: 09-05 23:32:58, landscape.py:75] Existing MAAS defined, doing a LDS installation with existing MAAS.
[DEBUG: 09-05 23:33:14, utils.py:627] ssh keys exist for this user, they will be used instead.
[DEBUG: 09-05 23:33:14, multi.py:139] Bootstrapping Juju: JUJU_HOME=/home/[user]/.cloud-install/juju juju  bootstrap 
[ERROR: 09-05 23:38:49, multi.py:218] Failed to get ip directly: [Errno -2] Name or service not known
[DEBUG: 09-05 23:38:51, multi.py:177] Finished MAAS step, now deploying Landscape.
[INFO: 09-06 00:27:05, utils.py:780] pollinate: sudo su - -c 'pollinate -q -r --curl-opts "-k --user-agent uoi/bb2f64a2-82d8-4196-b55b-1f42c0141c1b/ET"'
[ERROR: 09-06 00:27:05, multi.py:384] Problem deploying Landscape: {'err': "2016-09-05 23:39:04 
[DEBUG] deployer.cli: Using runtime GoEnvironment on maas\n2016-09-05 23:39:04 
[INFO] deployer.cli: Starting deployment of landscape-dense-maas\n2016-09-05 23:39:04 
[DEBUG] deployer.import: Getting charms...\n2016-09-05 23:39:04 [DEBUG] deployer.charm: Cache dir /home/[user]/.cloud-install/juju/.deployer-store-cache/cs_trusty_haproxy-16\n2016-09-05 23:39:04 
[DEBUG] deployer.charm: Retrieving store charm cs:trusty/haproxy-16\n2016-09-05 23:39:04 [DEBUG] deployer.charm: Cache dir /home/[user]/.cloud-install/juju/.deployer-store-cache/cs_trusty_landscape-server\n2016-09-05 23:39:04 
[DEBUG] deployer.charm: Retrieving store charm cs:trusty/landscape-server-15\n2016-09-05 23:39:04 
[DEBUG] deployer.charm: Cache dir /home/[user]/.cloud-install/juju/.deployer-store-cache/cs_trusty_postgresql-40\n2016-09-05 23:39:04       
[DEBUG] deployer.charm: Retrieving store charm cs:trusty/postgresql-40\n2016-09-05 23:39:04 
[DEBUG] deployer.charm: Cache dir /home/[user]/.cloud-install/juju/.deployer-store-cache/cs_trusty_rabbitmq-server-43\n2016-09-05 23:39:04     
[DEBUG] deployer.charm: Retrieving store charm cs:trusty/rabbitmq-server-43\n2016-09-05 23:39:05 
[DEBUG] deployer.deploy: Resolving configuration\n2016-09-05 23:39:05      
[DEBUG] deployer.env: Connecting to environment...\n2016-09-05 23:39:05    
[DEBUG] deployer.env: Connected to environment\n2016-09-05 23:39:05     
[INFO] deployer.import: Deploying services...\n2016-09-05 23:39:05         
[INFO] deployer.import:  Deploying service haproxy using cs:trusty/haproxy-16\n2016-09-05 23:39:05 [DEBUG] deployer.import:  Refetching status for placement deploys\n2016-09-05 23:39:14   
[DEBUG] deployer.import:  Setting annotations\n2016-09-05 23:39:15        
[INFO] deployer.import:  Deploying service landscape-server using cs:trusty/landscape-server\n2016-09-05 23:39:19 
[DEBUG] deployer.import:  Setting annotations\n2016-09-05 23:39:20 
[INFO] deployer.import:  Deploying service postgresql using cs:trusty/postgresql-40\n2016-09-05 23:39:25 
[DEBUG] deployer.import:  Setting annotations\n2016-09-05 23:39:25       
[INFO] deployer.import:  Deploying service rabbitmq-server using cs:trusty/rabbitmq-server-43\n2016-09-05 23:39:29 
[DEBUG] deployer.import:  Setting annotations\n2016-09-05 23:39:34  
[DEBUG] deployer.import: Adding units...\n2016-09-05 23:39:34 
[DEBUG] deployer.import:  Service 'haproxy' does not need any more units added.\n2016-09-05 23:39:34 [DEBUG] deployer.import:  Service 'landscape-server' does not need any more units added.\n2016-09-05 23:39:34 
[DEBUG] deployer.import:  Service 'postgresql' does not need any more units added.\n2016-09-05 23:39:34     
[DEBUG] deployer.import:  Service 'rabbitmq-server' does not need any more units added.\n2016-09-05 23:39:34 
[DEBUG] deployer.import: Waiting for units before adding relations\n2016-09-05 23:46:02     
[DEBUG] deployer.env:  Delta machine: 0/lxc/0 change:pending\n2016-09-05 23:47:03 [DEBUG] deployer.env:  Delta machine: 0/lxc/1 change:pending\n2016-09-05 23:48:03     
[DEBUG] deployer.env:  Delta machine: 0/lxc/2 change:pending\n2016-09-05 23:49:11 [DEBUG] deployer.env:  Delta machine: 0/lxc/3 change:pending\n2016-09-06 00:24:03 
[DEBUG] deployer.env: Connecting to environment...\n2016-09-06 00:24:04    
[DEBUG] deployer.env: Connected to environment\n2016-09-06 00:24:04  
[INFO] deployer.import: Adding relations...\n2016-09-06 00:24:04 
[INFO] deployer.import:  Adding relation landscape-server <-> rabbitmq-server\n2016-09-06 00:24:04 
[INFO] deployer.import:  Adding relation landscape-server <-> haproxy\n2016-09-06 00:24:04 
[INFO] deployer.import:  Adding relation landscape-server:db <-> postgresql:db-admin\n2016-09-06 00:24:05 
[DEBUG] deployer.import: Waiting for relation convergence 180s\n2016-09-06 00:27:05 
[ERROR] deployer.import: Reached deployment timeout.. exiting\n2016-09-06 00:27:05     
[INFO] deployer.cli: Deployment stopped. run time: 2881.35\n", 'output': '', 'status': 1}
[ERROR: 09-06 00:27:05, gui.py:269] A fatal error has occurred: Error deploying Landscape.

[ERROR: 09-06 00:27:05, gui.py:270] Error deploying Landscape.
Traceback (most recent call last):
File "/usr/lib/python3.4/concurrent/futures/thread.py", line 54, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/share/openstack/cloudinstall/controllers/install/multi.py", line 181, in do_install
self.loop).run()
File "/usr/share/openstack/cloudinstall/controllers/install/multi.py", line 319, in run
self.deploy_landscape()
File "/usr/share/openstack/cloudinstall/controllers/install/multi.py", line 344, in deploy_landscape
self.run_deployer()
File "/usr/share/openstack/cloudinstall/controllers/install/multi.py", line 385, in run_deployer
raise Exception("Error deploying Landscape.")
Exception: Error deploying Landscape.
[DEBUG: 09-06 00:27:05, error.py:35] showing error view for: Error deploying Landscape.

2)

2016-09-06 08:01:45 INFO juju.mongo open.go:125 dialled mongo successfully on address "127.0.0.1:37017"
2016-09-06 08:01:45 DEBUG juju.apiserver apiserver.go:262 <- [4] machine-0 {"RequestId":20,"Type":"Rsyslog","Request":"GetRsyslogConfig","Params":"'params redacted'"}
2016-09-06 08:01:45 DEBUG juju.apiserver apiserver.go:262 <- [4] machine-0 {"RequestId":21,"Type":"Provisioner","Version":1,"Request":"Life","Params":"'params redacted'"}
2016-09-06 08:01:45 DEBUG juju.apiserver apiserver.go:276 -> [4] machine-0 146.525279ms {"RequestId":13,"Response":"'body redacted'"} Machiner[""].Life
2016-09-06 08:01:45 DEBUG juju.apiserver apiserver.go:276 -> [4] machine-0 147.19989ms {"RequestId":14,"Response":"'body redacted'"} Reboot[""].WatchForRebootEvent
2016-09-06 08:01:45 INFO juju.mongo open.go:125 dialled mongo successfully on address "192.168.1.151:37017"
2016-09-06 08:01:45 INFO juju.mongo open.go:125 dialled mongo successfully on address "192.168.1.151:37017"
2016-09-06 08:01:45 INFO juju.mongo open.go:125 dialled mongo successfully on address "192.168.1.151:37017"
2016-09-06 08:01:45 INFO juju.mongo open.go:125 dialled mongo successfully on address "192.168.1.151:37017"
2016-09-06 08:01:45 DEBUG juju.network network.go:268 no lxc bridge addresses to filter for machine
2016-09-06 08:01:45 INFO juju.worker.machiner machiner.go:132 setting addresses for machine-0 to ["local-machine:127.0.0.1" "local-cloud:192.168.1.151" "local-machine:::1"]
2016-09-06 08:01:45 DEBUG juju.apiserver apiserver.go:276 -> [4] machine-0 136.703326ms {"RequestId":15,"Response":"'body redacted'"} Machiner[""].WatchAPIHostPorts
2016-09-06 08:01:45 INFO juju.mongo open.go:125 dialled mongo successfully on address "192.168.1.151:37017"
2016-09-06 08:01:45 DEBUG juju.apiserver apiserver.go:262 <- [4] machine-0 {"RequestId":22,"Type":"Machiner","Request":"SetMachineAddresses","Params":"'params redacted'"}
2016-09-06 08:01:45 DEBUG juju.apiserver apiserver.go:262 <- [4] machine-0 {"RequestId":23,"Type":"Reboot","Version":1,"Request":"GetRebootAction","Params":"'params redacted'"}
2016-09-06 08:01:45 DEBUG juju.apiserver apiserver.go:262 <- [4] machine-0 {"RequestId":24,"Type":"NotifyWatcher","Id":"3","Request":"Next","Params":"'params redacted'"}
2016-09-06 08:01:45 DEBUG juju.state open.go:57 connection established
2016-09-06 08:01:45 DEBUG juju.worker.peergrouper worker.go:432 found new machine "0"
2016-09-06 08:01:45 DEBUG juju.state open.go:64 mongodb login successful
2016-09-06 08:01:45 INFO juju.worker.diskmanager diskmanager.go:62 block devices changed: [{sda [/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0]    scsi@32:0.0.0 25600  true } {sdb [/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0]    scsi@32:0.1.0 25600  false }]
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:276 -> [4] machine-0 230.408525ms {"RequestId":16,"Response":"'body redacted'"} ProxyUpdater[""].WatchForProxyConfigAndAPIHostPortChanges
2016-09-06 08:01:46 INFO juju.mongo open.go:125 dialled mongo successfully on address "127.0.0.1:37017"
2016-09-06 08:01:46 INFO juju.mongo open.go:125 dialled mongo successfully on address "127.0.0.1:37017"
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:262 <- [4] machine-0 {"RequestId":25,"Type":"Machiner","Request":"APIHostPorts","Params":"'params redacted'"}
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:262 <- [4] machine-0 {"RequestId":26,"Type":"NotifyWatcher","Id":"4","Request":"Next","Params":"'params redacted'"}
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:262 <- [4] machine-0 {"RequestId":27,"Type":"DiskManager","Version":1,"Request":"SetMachineBlockDevices","Params":"'params redacted'"}
2016-09-06 08:01:46 INFO juju.mongo open.go:125 dialled mongo successfully on address "127.0.0.1:37017"
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:276 -> [4] machine-0 216.92964ms {"RequestId":21,"Response":"'body redacted'"} Provisioner[""].Life
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:262 <- [4] machine-0 {"RequestId":28,"Type":"ProxyUpdater","Version":1,"Request":"ProxyConfig","Params":"'params redacted'"}
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:262 <- [4] machine-0 {"RequestId":29,"Type":"NotifyWatcher","Id":"5","Request":"Next","Params":"'params redacted'"}
2016-09-06 08:01:46 INFO juju.cmd.jujud machine.go:1092 update apiserver worker with new certificate
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:276 -> [4] machine-0 37.154757ms {"RequestId":25,"Response":"'body redacted'"} Machiner[""].APIHostPorts
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:276 -> [4] machine-0 139.568869ms {"RequestId":23,"Response":"'body redacted'"} Reboot[""].GetRebootAction
2016-09-06 08:01:46 INFO juju.worker.certupdater certupdater.go:175 State Server cerificate addresses updated to ["192.168.1.151" "anything" "juju-apiserver" "juju-mongodb" "localhost"]
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:262 <- [4] machine-0 {"RequestId":30,"Type":"Provisioner","Version":1,"Request":"SetSupportedContainers","Params":"'params redacted'"}
2016-09-06 08:01:46 INFO juju.apiserver apiserver.go:143 updating api server certificate
2016-09-06 08:01:46 INFO juju.apiserver apiserver.go:150 new certificate addresses: 192.168.1.151
2016-09-06 08:01:46 DEBUG juju.network network.go:268 no lxc bridge addresses to filter for machine
2016-09-06 08:01:46 INFO juju.agent agent.go:565 API server address details [["node2.maas:17070" "192.168.1.151:17070"]] written to agent config as ["192.168.1.151:17070"]
2016-09-06 08:01:46 DEBUG juju.worker.reboot reboot.go:67 Reboot worker got action: noop
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:276 -> [4] machine-0 323.212726ms {"RequestId":17,"Response":"'body redacted'"} Logger[""].LoggingConfig
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:276 -> [4] machine-0 322.960531ms {"RequestId":19,"Response":"'body redacted'"} KeyUpdater[""].AuthorisedKeys
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:276 -> [4] machine-0 323.659437ms {"RequestId":18,"Response":"'body redacted'"} StorageProvisioner[""].WatchForEnvironConfigChanges
2016-09-06 08:01:46 DEBUG juju.apiserver apiserver.go:276 -> [4] machine-0 52.597332ms {"RequestId":28,"Response":"'body redacted'"} ProxyUpdater[""].ProxyConfig
2016-09-06 08:01:46 DEBUG juju.worker.logger logger.go:45 reconfiguring logging from "<root>=DEBUG" to "<root>=WARNING;unit=DEBUG"
2016-09-06 08:02:17 ERROR juju.state.unit unit.go:738 unit haproxy/0 cannot get assigned machine: unit "haproxy/0" is not assigned to a machine
2016-09-06 08:02:17 WARNING juju.state allwatcher.go:351 getting a public  address for unit "haproxy/0" failed: "unit haproxy/0 cannot get assigned machine: unit \"haproxy/0\" is not assigned to a machine"
2016-09-06 08:02:17 ERROR juju.state.unit unit.go:748 unit haproxy/0 cannot get assigned machine: unit "haproxy/0" is not assigned to a machine
2016-09-06 08:02:17 WARNING juju.state allwatcher.go:355 getting a private address for unit "haproxy/0" failed: "unit haproxy/0 cannot get assigned machine: unit \"haproxy/0\" is not assigned to a machine"
2016-09-06 08:02:22 WARNING juju.state allwatcher.go:351 getting a public address for unit "haproxy/0" failed: "public no address"
2016-09-06 08:02:22 WARNING juju.state allwatcher.go:355 getting a private address for unit "haproxy/0" failed: "private no address"
2016-09-06 08:02:22 ERROR juju.state.unit unit.go:738 unit landscape-server/0 cannot get assigned machine: unit "landscape-server/0" is not assigned to a machine
2016-09-06 08:02:22 WARNING juju.state allwatcher.go:351 getting a public address for unit "landscape-server/0" failed: "unit landscape-server/0 cannot get assigned machine: unit \"landscape-server/0\" is not assigned to a machine"
2016-09-06 08:02:22 ERROR juju.state.unit unit.go:748 unit landscape-server/0 cannot get assigned machine: unit "landscape-server/0" is not assigned to a machine
2016-09-06 08:02:22 WARNING juju.state allwatcher.go:355 getting a private address for unit "landscape-server/0" failed: "unit landscape-server/0 cannot get assigned machine: unit \"landscape-server/0\" is not assigned to a machine"
2016-09-06 08:02:27 WARNING juju.state allwatcher.go:351 getting a public address for unit "landscape-server/0" failed: "public no address"
2016-09-06 08:02:27 WARNING juju.state allwatcher.go:355 getting a private address for unit "landscape-server/0" failed: "private no address"
2016-09-06 08:02:27 ERROR juju.state.unit unit.go:738 unit postgresql/0 cannot get assigned machine: unit "postgresql/0" is not assigned to a machine
2016-09-06 08:02:27 WARNING juju.state allwatcher.go:351 getting a public address for unit "postgresql/0" failed: "unit postgresql/0 cannot get assigned machine: unit \"postgresql/0\" is not assigned to a machine"
2016-09-06 08:02:27 ERROR juju.state.unit unit.go:748 unit postgresql/0 cannot get assigned machine: unit "postgresql/0" is not assigned to a machine
2016-09-06 08:02:27 WARNING juju.state allwatcher.go:355 getting a private address for unit "postgresql/0" failed: "unit postgresql/0 cannot get assigned machine: unit \"postgresql/0\" is not assigned to a machine"
2016-09-06 08:02:32 WARNING juju.state allwatcher.go:351 getting a public address for unit "postgresql/0" failed: "public no address"
2016-09-06 08:02:32 WARNING juju.state allwatcher.go:355 getting a private address for unit "postgresql/0" failed: "private no address"
2016-09-06 08:02:32 ERROR juju.state.unit unit.go:738 unit rabbitmq-server/0 cannot get assigned machine: unit "rabbitmq-server/0" is not assigned to a machine
2016-09-06 08:02:32 WARNING juju.state allwatcher.go:351 getting a public address for unit "rabbitmq-server/0" failed: "unit rabbitmq-server/0 cannot get assigned machine: unit \"rabbitmq-server/0\" is not assigned to a machine"
2016-09-06 08:02:32 ERROR juju.state.unit unit.go:748 unit rabbitmq-server/0 cannot get assigned machine: unit "rabbitmq-server/0" is not assigned to a machine
2016-09-06 08:02:32 WARNING juju.state allwatcher.go:355 getting a private address for unit "rabbitmq-server/0" failed: "unit rabbitmq-server/0 cannot get assigned machine: unit \"rabbitmq-server/0\" is not assigned to a machine"
2016-09-06 08:02:37 WARNING juju.state allwatcher.go:351 getting a public address for unit "rabbitmq-server/0" failed: "public no address"
2016-09-06 08:02:37 WARNING juju.state allwatcher.go:355 getting a private address for unit "rabbitmq-server/0" failed: "private no address"
2016-09-06 16:16:56 WARNING juju.apiserver.client status.go:465 error fetching public address: "public no address"
2016-09-06 16:16:56 WARNING juju.apiserver.client status.go:465 error fetching public address: "public no address"
2016-09-06 16:16:56 WARNING juju.apiserver.client status.go:465 error fetching public address: "public no address"
2016-09-06 16:16:56 WARNING juju.apiserver.client status.go:465 error fetching public address: "public no address"
2016-09-06 16:16:56 WARNING juju.apiserver.client status.go:679 error fetching public address: public no address
2016-09-06 16:16:56 WARNING juju.apiserver.client status.go:679 error fetching public address: public no address
2016-09-06 16:16:56 WARNING juju.apiserver.client status.go:679 error fetching public address: public no address
2016-09-06 16:16:56 WARNING juju.apiserver.client status.go:679 error fetching public address: public no address
4
user3892683

I finally found the answer to the problem. I was using a single VMware ESXi server to run both the nodes and the controller. After some troubleshooting it turned out that Juju was starting containers in LXC, but those containers were not getting an IP address. At that point it was not clear whether Juju or LXC was causing the problem.
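A quick way to confirm this symptom, assuming you can get a shell on the deployed node (the container names will differ on your machine):

# On the deployed node: list the LXC containers and their addresses.
# Containers stuck without an IPV4 entry point at a networking problem
# rather than a charm problem.
sudo lxc-ls --fancy

# From the install host: the containers 0/lxc/0 ... 0/lxc/3 stayed
# "pending" in the controller log above, which juju status also shows.
juju status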

After some searching on the internet it turned out that neither of them was at fault, once I found this link: https://www.reddit.com/r/homelab/comments/4p3k9j/trouble_getting_lxc_networking_up_containers_not/

From that thread: "I had a similar issue a while back with LXC on an Ubuntu VM in ESXi. My issue was that the virtual switch in ESXi was not in promiscuous mode. This caused network traffic from the LXC container to be dropped."

After enabling promiscuous mode in VMware ESXi, the containers got their IP addresses and the OpenStack installer completed successfully.
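For anyone hitting the same thing, here is a rough sketch of checking and changing the setting from the ESXi shell. vSwitch0 is a placeholder for your actual vSwitch, the same option is available in the vSphere client under the vSwitch security settings, and exact option names may vary between ESXi versions:

# Show the current security policy of the standard vSwitch
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0

# Allow promiscuous mode so traffic for the nested LXC containers
# (which have their own MAC addresses behind the VM's NIC) is not dropped.
# Depending on the setup, --allow-forged-transmits=true may be needed as well.
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true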

0
user3892683

This does not necessarily indicate a problem with openstack-install or Autopilot. What you should do is a basic deployment with juju bootstrap; juju deploy ubuntu and make sure that works with your current MAAS setup. I was also told that you are using Wake on LAN as the BMC, which is known to be unreliable.
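A sketch of that sanity check; it assumes a Juju environment configured against the same MAAS that has not already been bootstrapped (bootstrapping twice into the same environment will fail):

# Basic sanity check against the same MAAS, in a fresh or cleaned environment
juju bootstrap
juju deploy ubuntu

# Watch whether MAAS actually powers on and provisions a node; if the
# machine stays "pending" or the unit never gets an address, the problem
# is in MAAS / power control (e.g. Wake on LAN) or the network, not in
# the OpenStack installer itself.
juju status
juju debug-log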

1
battlemidget