Reference article: http://gliderlabs.com/registrator/latest/user/quickstart/
Consul is a strongly consistent datastore that uses gossip to form dynamic clusters. It provides a hierarchical key/value store that can not only hold data but also register watches for a variety of tasks, from sending notifications on data changes to running health checks and custom commands, depending on their output.
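Since that key/value store is exposed over a plain HTTP API, you can exercise it with curl. A minimal sketch, assuming the Consul server from the quickstart below is reachable at 192.168.10.138:8500 (the key name foo is purely illustrative):
# curl -X PUT -d 'bar' http://192.168.10.138:8500/v1/kv/foo
true
# curl http://192.168.10.138:8500/v1/kv/foo
[{"Key":"foo", ..., "Value":"YmFy"}]
Note that values come back base64-encoded ("YmFy" decodes to "bar"); append ?raw to the GET to receive the value verbatim.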
Registrator automatically registers and deregisters services by watching containers as they come online or stop running. It currently supports etcd, Consul, and SkyDNS 2.
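Registrator derives service metadata from container metadata, and lets you override it through environment variables such as SERVICE_NAME and SERVICE_TAGS (used again in the Compose file further down). A hedged example, assuming Registrator is already running against Consul:
# docker run -d -P -e "SERVICE_NAME=web" -e "SERVICE_TAGS=frontend" nginx
The container's service would then appear in Consul under the name web, tagged frontend, instead of the image-derived default.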
# docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp --name node4 gliderlabs/consul-server:0.6 -bootstrap -advertise 192.168.10.138
# curl 192.168.10.138:8500/v1/catalog/services
{"consul":[]}
# docker run -d --name=registrator --net=host --volume=/var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://192.168.10.138:8500
# docker run -d -P --name=redis redis
# curl 192.168.10.138:8500/v1/catalog/services
{"consul":[],"redis":[]}
# curl 192.168.10.138:8500/v1/catalog/service/redis
[{"Node":"23dcba46458b","Address":"192.168.10.138","ServiceID":"localhost.localdomain:redis:6379","ServiceName":"redis","ServiceTags":[],"ServiceAddress":"","Servi
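Because host port 8600 was mapped to Consul's DNS interface (container port 53/udp) when node4 was started, registered services can also be resolved over DNS. A sketch, assuming dig is installed on the host:
# dig @192.168.10.138 -p 8600 redis.service.consul
The answer section would list the node address registered for redis above.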
Compose file:
version: '2'
services:
  consul-server:
    image: gliderlabs/consul-server:0.6
    command: -bootstrap -advertise 192.168.10.138
    hostname: consul-server
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8600:53/udp"
  registrator:
    image: gliderlabs/registrator:latest
    command: consul://consul-server:8500
    hostname: registrator
    depends_on:
      - consul-server
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
  app:
    image: tutum/hello-world:latest
    environment:
      # Environment variables used by registrator to register services in consul
      SERVICE_NAME: app
      SERVICE_TAGS: sample
    ports:
      - "8081:80"
    depends_on:
      - consul-template-nginx
  # Nginx Load Balancer
  consul-template-nginx:
    image: 1science/nginx:1.9.6-consul
    ports:
      - 80:80
    volumes:
      - ./etc/consul-template:/etc/consul-tem
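To bring the stack up, the usual Compose workflow applies (assuming the file above is saved as docker-compose.yml in the current directory):
# docker-compose up -d
# curl 192.168.10.138:8500/v1/catalog/services
The second command should now list the app service that Registrator registered on behalf of the hello-world container.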
In practical Docker use, two things matter above all: service discovery, and building a cluster of Docker hosts with a cross-host overlay network. This section walks through the commonly paired Swarm + Consul cluster setup (everything here runs in Docker containers).
192.168.11.30 is the Consul leader and a Swarm server and client node; it acts as the primary.
192.168.11.32 is a Consul node and a Swarm server and client node; it acts as the backup.
192.168.11.30:
consul, swarm, nginx
192.168.11.32:
consul, swarm, nexus, jenkins, registry
cluster-store is the address of the Consul leader.
cluster-advertise is the address of the Swarm client, i.e. the current host.
On 11.30:
vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon --tls=false -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 --cluster-store=consul://192.168.11.30:8500 --cluster-advertise=192.168.11.30:2375
systemctl daemon-reload
systemctl restart docker
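After the daemon restarts, you can confirm the flags took effect; Docker versions of this era print Cluster Store and Cluster Advertise lines in docker info:
# docker info | grep -i cluster
With the daemons wired to Consul, the standalone Swarm containers the post describes (a primary manager on 11.30, a replica on 11.32, plus a join agent on each host) would be started roughly as below; port 4000 for the manager is an assumption, not from the original:
# docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise 192.168.11.30:4000 consul://192.168.11.30:8500
# docker run -d swarm join --advertise=192.168.11.30:2375 consul://192.168.11.30:8500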
On 11.32:
vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon --tls=false -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 --cluster-store=consul://192.168.11.30:8500 --cluster-advertise=192.168.11.32:2375
systemctl daemon-reload
systemctl restart docker
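With both daemons pointing at the same cluster store, the cross-host overlay network mentioned in the introduction can be created from either host. A sketch; the network name demo_overlay and the subnet are illustrative choices, not from the original:
# docker network create -d overlay --subnet=10.0.9.0/24 demo_overlay
# docker network ls
Containers started on either host with --net=demo_overlay can then reach each other by name across the two machines.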