
kubectl cluster-info get 502 Bad Gateway error

I used juju deploy canonical-kubernetes to deploy a Kubernetes cluster. But when I run ./kubectl cluster-info as the Canonical Distribution of Kubernetes charm documentation says, I get the error below:

Error from server: an error on the server ("<html>\r\n<head><title>502
Bad Gateway</title></head>\r\n<body bgcolor=\"white\">\r\n<center>
<h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx/1.10.0
 (Ubuntu)</center>\r\n</body>\r\n</html>") has prevented the request from succeeding
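
Since the error page is served by nginx/1.10.0, it presumably comes from the kubeapi-load-balancer unit sitting in front of the apiserver, which would mean the load balancer is up but nothing is answering behind it. A minimal sketch of how one might check, using standard juju ssh against the unit names from the status output below (the nginx log path is the stock default and the process name is the standard Kubernetes binary; treat both as assumptions about this charm's layout):

# Inspect the nginx error log on the load balancer unit:
juju ssh kubeapi-load-balancer/0 'sudo tail -n 50 /var/log/nginx/error.log'

# Check whether the apiserver process is running on the master at all:
juju ssh kubernetes-master/0 'ps aux | grep [k]ube-apiserver'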

Juju status:

MODEL    CONTROLLER  CLOUD/REGION         VERSION
default  lxd-test    localhost/localhost  2.0-rc3

APP                    VERSION  STATUS       SCALE  CHARM                  STORE       REV  OS      NOTES
easyrsa                3.0.1    active           1  easyrsa                jujucharms    2  ubuntu  
elasticsearch                   active           2  elasticsearch          jujucharms   19  ubuntu  
etcd                   2.2.5    active           3  etcd                   jujucharms   13  ubuntu  
filebeat                        active           4  filebeat               jujucharms    5  ubuntu  
flannel                0.6.1    waiting          4  flannel                jujucharms    3  ubuntu  
kibana                          active           1  kibana                 jujucharms   15  ubuntu  
kubeapi-load-balancer  1.10.0   active           1  kubeapi-load-balancer  jujucharms    2  ubuntu  exposed
kubernetes-master      1.4.0    maintenance      1  kubernetes-master      jujucharms    3  ubuntu  
kubernetes-worker      1.4.0    waiting          3  kubernetes-worker      jujucharms    3  ubuntu  exposed
topbeat                         active           3  topbeat                jujucharms    5  ubuntu  

UNIT                      WORKLOAD     AGENT      MACHINE  PUBLIC-ADDRESS  PORTS            MESSAGE
easyrsa/0*                active       idle       0        10.181.160.79                    Certificate Authority connected.
elasticsearch/0*          active       idle       1        10.181.160.62   9200/tcp         Ready
elasticsearch/1           active       idle       2        10.181.160.72   9200/tcp         Ready
etcd/0*                   active       idle       3        10.181.160.41   2379/tcp         Healthy with 3 known peers. (leader)
etcd/1                    active       idle       4        10.181.160.135  2379/tcp         Healthy with 3 known peers.
etcd/2                    active       idle       5        10.181.160.204  2379/tcp         Healthy with 3 known peers.
kibana/0*                 active       idle       6        10.181.160.54   80/tcp,9200/tcp  ready
kubeapi-load-balancer/0*  active       idle       7        10.181.160.42   443/tcp          Loadbalancer ready.
kubernetes-master/0*      maintenance  idle       8        10.181.160.208                   Rendering authentication templates.
  filebeat/0              active       idle                10.181.160.208                   Filebeat ready.
  flannel/0*              waiting      idle                10.181.160.208                   Flannel is starting up.
kubernetes-worker/0*      waiting      idle       9        10.181.160.94                    Waiting for cluster-manager to initiate start.
  filebeat/1*             active       idle                10.181.160.94                    Filebeat ready.
  flannel/1               waiting      idle                10.181.160.94                    Flannel is starting up.
  topbeat/0               active       idle                10.181.160.94                    Topbeat ready.
kubernetes-worker/1       waiting      idle       10       10.181.160.95                    Waiting for cluster-manager to initiate start.
  filebeat/2              active       idle                10.181.160.95                    Filebeat ready.
  flannel/2               waiting      idle                10.181.160.95                    Flannel is starting up.
  topbeat/1*              active       executing           10.181.160.95                    (update-status) Topbeat ready.
kubernetes-worker/2       waiting      idle       11       10.181.160.148                   Waiting for cluster-manager to initiate start.
  filebeat/3              active       idle                10.181.160.148                   Filebeat ready.
  flannel/3               waiting      idle                10.181.160.148                   Flannel is starting up.
  topbeat/2               active       idle                10.181.160.148                   Topbeat ready.

MACHINE  STATE    DNS             INS-ID          SERIES  AZ
0        started  10.181.160.79   juju-23ce86-0   xenial  
1        started  10.181.160.62   juju-23ce86-1   trusty  
2        started  10.181.160.72   juju-23ce86-2   trusty  
3        started  10.181.160.41   juju-23ce86-3   xenial  
4        started  10.181.160.135  juju-23ce86-4   xenial  
5        started  10.181.160.204  juju-23ce86-5   xenial  
6        started  10.181.160.54   juju-23ce86-6   trusty  
7        started  10.181.160.42   juju-23ce86-7   xenial  
8        started  10.181.160.208  juju-23ce86-8   xenial  
9        started  10.181.160.94   juju-23ce86-9   xenial  
10       started  10.181.160.95   juju-23ce86-10  xenial  
11       started  10.181.160.148  juju-23ce86-11  xenial  

RELATION           PROVIDES               CONSUMES               TYPE
certificates       easyrsa                kubeapi-load-balancer  regular
certificates       easyrsa                kubernetes-master      regular
certificates       easyrsa                kubernetes-worker      regular
peer               elasticsearch          elasticsearch          peer
elasticsearch      elasticsearch          filebeat               regular
rest               elasticsearch          kibana                 regular
elasticsearch      elasticsearch          topbeat                regular
cluster            etcd                   etcd                   peer
etcd               etcd                   flannel                regular
etcd               etcd                   kubernetes-master      regular
juju-info          filebeat               kubernetes-master      regular
juju-info          filebeat               kubernetes-worker      regular
sdn-plugin         flannel                kubernetes-master      regular
sdn-plugin         flannel                kubernetes-worker      regular
loadbalancer       kubeapi-load-balancer  kubernetes-master      regular
kube-api-endpoint  kubeapi-load-balancer  kubernetes-worker      regular
beats-host         kubernetes-master      filebeat               subordinate
host               kubernetes-master      flannel                subordinate
kube-dns           kubernetes-master      kubernetes-worker      regular
beats-host         kubernetes-worker      filebeat               subordinate
host               kubernetes-worker      flannel                subordinate
beats-host         kubernetes-worker      topbeat                subordinate
fkpwolf

This appears to be because you are deploying Kubernetes on LXD. According to the README for Canonical Kubernetes:

kubernetes-master, kubernetes-worker, kubeapi-load-balancer, and etcd are not supported on LXD at this time.

This is a limitation between Docker and LXD, one we hope to have resolved soon. In the meantime, these components need to run on at least a virtual machine.
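
As background on that limitation: Docker inside an LXD container typically needs the container's confinement loosened before it will run at all. A sketch of the kind of change involved, purely for illustration (the juju-default profile name and the exact key values are assumptions, and none of this makes the charms supported on LXD):

# Illustration only: LXD config keys that Docker-in-LXD setups commonly
# touch. "juju-default" assumes Juju named the profile after the model.
lxc profile set juju-default security.privileged true
lxc profile set juju-default linux.kernel_modules ip_tables,ip6_tables,netlink_diag,nf_nat,overlay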

You can do this manually with LXD by deploying the rest of the components on LXD, then launching a few KVM instances on your machine by hand; a rough sketch follows.
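
This is only a sketch: the uvtool commands, guest IP addresses, and machine numbers are illustrative, and the numbers Juju actually assigns will differ.

# Launch a couple of KVM guests on the host, e.g. with uvtool
# (names and release are illustrative):
uvt-kvm create k8s-master release=xenial
uvt-kvm create k8s-worker release=xenial

# Enlist each guest into the Juju model via manual provisioning
# (replace the addresses with the guests' actual IPs):
juju add-machine ssh:ubuntu@192.168.122.10
juju add-machine ssh:ubuntu@192.168.122.11

# Then target the components that need a real VM at those machines,
# using whatever machine numbers juju status reports for them:
juju deploy kubernetes-master --to 12
juju deploy kubernetes-worker --to 13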

I'll try to get clear instructions for this written up and will post them here.

Marco Ceppi