Traffic Configuration in Istio



Containers injected by Istio

Istio's data plane injects two containers into the pod: istio-init and istio-proxy.

istio-init

istio-init takes over traffic by creating iptables rules:

  • The command-line argument -p 15001 means outbound traffic is redirected by iptables to Envoy's port 15001

  • The command-line argument -z 15006 means inbound traffic is redirected by iptables to Envoy's port 15006

  • The command-line argument -u 1337 excludes traffic from user ID 1337, i.e. Envoy's own traffic, so that iptables does not redirect data sent by Envoy back to Envoy and create a loop. Running the following command in the istio-proxy container shows that Envoy runs as user ID 1337:

```
$ id
uid=1337(istio-proxy) gid=1337(istio-proxy) groups=1337(istio-proxy)
```

The full istio-iptables invocation:

```
istio-iptables -p 15001 -z 15006 -u 1337 -m REDIRECT -i '*' -x "" -b '*' -d 15090,15021,15020 --run-validation --skip-rule-apply
```
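The rules istio-init creates look roughly like the following sketch (assembled from the redirect rules quoted later in this article; the exact chains and rule ordering vary by Istio version):

```
# Outbound TCP traffic is redirected to Envoy's port 15001
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
# Inbound TCP traffic is redirected to Envoy's port 15006
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
# Traffic owned by UID 1337 (Envoy itself) is returned untouched to avoid a loop
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
```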

istio-proxy

The istio-proxy container runs two programs: pilot-agent and envoy:

```
$ ps -ef|cat
UID PID PPID C STIME TTY TIME CMD
istio-p+ 1 0 0 Sep10 ? 00:03:39 /usr/local/bin/pilot-agent proxy sidecar --domain default.svc.cluster.local --serviceCluster sleep.default --proxyLogLevel=warning --proxyComponentLogLevel=misc:error --trust-domain=cluster.local --concurrency 2
istio-p+ 27 1 0 Sep10 ? 00:14:30 /usr/local/bin/envoy -c etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster sleep.default --service-node sidecar~10.80.3.109~sleep-856d589c9b-x6szk.default~default.svc.cluster.local --local-address-ip-version v4 --log-format-prefix-with-location 0 --log-format %Y-%m-%dT%T.%fZ.%l.envoy %n.%v -l warning --component-log-level misc:error --concurrency 2
```

Envoy architecture

Envoy handles inbound/outbound requests as follows, passing each request through its filters in order.

A typical inbound request flow: the listener filter chain first parses the TLS of the inbound packet, a connection is then established through the transport socket, and finally the network filter chain (including the HTTP connection manager) processes the request.

Initial configuration file generated by pilot-agent

pilot-agent generates Envoy's bootstrap file (/etc/istio/proxy/envoy-rev0.json) from its startup parameters and the configuration in the K8S API Server, and is responsible for starting the Envoy process (note that Envoy's parent process is pilot-agent); envoy then dynamically fetches configuration from istiod over the xDS interfaces. The structure of the initial envoy-rev0.json configuration file is as follows:

  • node: information about this Envoy instance

```json
"node": {
  "id": "sidecar~10.80.3.109~sleep-856d589c9b-x6szk.default~default.svc.cluster.local","cluster": "sleep.default","locality": {
  },"metadata": {"APP_CONTAINERS":"sleep,istio-proxy","CLUSTER_ID":"Kubernetes","EXCHANGE_KEYS":"NAME,NAMESPACE,INSTANCE_IPS,LABELS,OWNER,PLATFORM_METADATA,WORKLOAD_NAME,MESH_ID,SERVICE_ACCOUNT,CLUSTER_ID","INSTANCE_IPS":"10.80.3.109,fe80::40fb:daff:feed:e56c","INTERCEPTION_MODE":"REDIRECT","ISTIO_PROXY_SHA":"istio-proxy:f642a7fd07d0a99944a6e3529566e7985829839c","ISTIO_VERSION":"1.7.0","LABELS":{"app":"sleep","istio.io/rev":"default","pod-template-hash":"856d589c9b","security.istio.io/tlsMode":"istio","service.istio.io/canonical-name":"sleep","service.istio.io/canonical-revision":"latest"},"MESH_ID":"cluster.local","NAME":"sleep-856d589c9b-x6szk","NAMESPACE":"default","OWNER":"kubernetes://apis/apps/v1/namespaces/default/deployments/sleep","POD_PORTS":"[{\"name\":\"http-envoy-prom\",\"containerPort\":15090,\"protocol\":\"TCP\"}]","PROXY_CONFIG":{"binaryPath":"/usr/local/bin/envoy","concurrency":2,"configPath":"./etc/istio/proxy","controlPlaneAuthPolicy":"MUTUAL_TLS","discoveryAddress":"istiod.istio-system.svc:15012","drainDuration":"45s","envoyAccessLogService":{},"envoyMetricsService":{},"parentShutdownDuration":"60s","proxyAdminPort":15000,"proxyMetadata":{"DNS_AGENT":""},"serviceCluster":"sleep.default","statNameLength":189,"statusPort":15020,"terminationDrainDuration":"5s","tracing":{"zipkin":{"address":"zipkin.istio-system:9411"}}},"SDS":"true","SERVICE_ACCOUNT":"sleep","WORKLOAD_NAME":"sleep","k8s.v1.cni.cncf.io/networks":"istio-cni","sidecar.istio.io/interceptionMode":"REDIRECT","sidecar.istio.io/status":"{\"version\":\"8e6e902b765af607513b28d284940ee1421e9a0d07698741693b2663c7161c11\",\"initContainers\":[\"istio-validation\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-data\",\"istio-podinfo\",\"istiod-ca-cert\"],\"imagePullSecrets\":null}","traffic.sidecar.istio.io/excludeInboundPorts":"15020","traffic.sidecar.istio.io/includeOutboundIPRanges":"*"}
},
```
  • admin: Envoy's log path and admin port; for example, the log level can be set to trace with curl -X POST localhost:15000/logging?level=trace

```json
"admin": {
  "access_log_path": "/dev/null", /* access log path of the admin server */
  "profile_path": "/var/lib/istio/data/envoy.prof", /* CPU profile output path of the admin server */
  "address": { /* TCP address the admin server listens on */
    "socket_address": {
      "address": "127.0.0.1","port_value": 15000
    }
  }
},
```
  • dynamic_resources: dynamic resources, consisting of lds_config, cds_config and ads_config.

    Envoy obtains the address of the xDS service through the xds-grpc cluster (see static_resources).

```json
"dynamic_resources": {
  "lds_config": { /* listeners are configured through an LDS */
    "ads": {},"resource_api_version": "V3" /* API version used by LDS */
  },"cds_config": { /* clusters are configured through a CDS */
    "ads": {},"resource_api_version": "V3"
  },"ads_config": { /* API configuration: the API type and the cluster from which Envoy fetches the xDS APIs */
    "api_type": "GRPC", /* fetch xDS information over gRPC */
    "transport_api_version": "V3", /* version of the xDS transport protocol */
    "grpc_services": [
      {
        "envoy_grpc": {
          "cluster_name": "xds-grpc" /* cluster used to fetch dynamic xDS configuration */
        }
      }
    ]
  }
},
```
  • static_resources: static resources, mainly of two kinds: clusters and listeners:

    • clusters: several statically configured clusters are shown below.

```json
"clusters": [
  {
    "name": "prometheus_stats", /* exposes metrics via Prometheus at 127.0.0.1:15000/stats/prometheus */
    "type": "STATIC", /* upstream hosts are specified explicitly by network name (IP address/port etc.) */
    "connect_timeout": "0.250s","lb_policy": "ROUND_ROBIN",
    "load_assignment": { /* only used for the STATIC, STRICT_DNS and LOGICAL_DNS types; embeds EDS-equivalent endpoints into a non-EDS cluster */
      "cluster_name": "prometheus_stats","endpoints": [{
        "lb_endpoints": [{ /* load-balanced backends */
          "endpoint": {
            "address": {
              "socket_address": {
                "protocol": "TCP","address": "127.0.0.1","port_value": 15000
              }
            }
          }
        }]
      }]
    }
  },{
    "name": "agent", /* exposes the health-check endpoint; try curl http://127.0.0.1:15020/healthz/ready -v */
    "type": "STATIC","connect_timeout": "0.250s","load_assignment": {
      "cluster_name": "prometheus_stats","endpoints": [{
        "lb_endpoints": [{
          "endpoint": {
            "address": {
              "socket_address": {
                "protocol": "TCP","address": "127.0.0.1","port_value": 15020
              }
            }
          }
        }]
      }]
    }
  },{
    "name": "sds-grpc", /* the SDS cluster */
    "type": "STATIC","http2_protocol_options": {},"connect_timeout": "1s","load_assignment": {
      "cluster_name": "sds-grpc","endpoints": [{
        "lb_endpoints": [{
          "endpoint": {
            "address": {
              "pipe": {
                "path": "./etc/istio/proxy/SDS" /* UNIX socket used for SDS; istio-agent and the proxy communicate over it during mTLS */
              }
            }
          }
        }]
      }]
    }
  },{
    "name": "xds-grpc", /* configuration of the gRPC server used for dynamic xDS */
    "type": "STRICT_DNS","respect_dns_ttl": true,"dns_lookup_family": "V4_ONLY","transport_socket": { /* transport socket for upstream connections */
      "name": "envoy.transport_sockets.tls", /* name of the transport socket to instantiate */
      "typed_config": {
        "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext","sni": "istiod.istio-system.svc", /* SNI string to use when creating the connection to the TLS backend (the SDS server) */
        "common_tls_context": { /* TLS context used by client and server */
          "alpn_protocols": [ /* ALPN protocols exposed by the listener; if empty, ALPN is not used */
            "h2"
          ],"tls_certificate_sds_secret_configs": [ /* configuration for fetching TLS certificates over the SDS API */
            {
              "name": "default","sds_config": { /* with sds_config set, secrets are loaded from the configured source */
                "resource_api_version": "V3", /* xDS API version */
                "initial_fetch_timeout": "0s","api_config_source": { /* SDS API configuration: version and SDS service */
                  "api_type": "GRPC","transport_api_version": "V3", /* API version of the xDS transport protocol */
                  "grpc_services": [
                    { /* the SDS server is the sds-grpc cluster configured above */
                      "envoy_grpc": { "cluster_name": "sds-grpc" }
                    }
                  ]
                }
              }
            }
          ],"validation_context": {
            "trusted_ca": {
              "filename": "./var/run/secrets/istio/root-cert.pem" /* local-filesystem data source; mounts the istio-ca-root-cert configmap of the current namespace, whose CA certificate equals the one in istio-ca-secret in the istio-system namespace, and is used to validate the peer istiod certificate */
            },"match_subject_alt_names": [{"exact":"istiod.istio-system.svc"}] /* validate the SAN in the certificate, i.e. that it comes from istiod */
          }
        }
      }
    },"load_assignment": {
      "cluster_name": "xds-grpc", /* the backend of xds-grpc is port 15012 of istiod */
      "endpoints": [{
        "lb_endpoints": [{
          "endpoint": {
            "address": {
              "socket_address": {"address": "istiod.istio-system.svc","port_value": 15012}
            }
          }
        }]
      }]
    },"circuit_breakers": { /* circuit breaker configuration */
      "thresholds": [
        {
          "priority": "DEFAULT","max_connections": 100000,"max_pending_requests": 100000,"max_requests": 100000
        },{
          "priority": "HIGH","max_requests": 100000
        }
      ]
    },"upstream_connection_options": {
      "tcp_keepalive": {
        "keepalive_time": 300
      }
    },"max_requests_per_connection": 1,"http2_protocol_options": { }
  },{
    "name": "zipkin", /* cluster configuration for zipkin distributed tracing */
    "type": "STRICT_DNS","load_assignment": {
      "cluster_name": "zipkin","endpoints": [{
        "lb_endpoints": [{
          "endpoint": {
            "address": {
              "socket_address": {"address": "zipkin.istio-system","port_value": 9411}
            }
          }
        }]
      }]
    }
  }
],
```

      The transport socket above is configured through the type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext API; sni and common_tls_context are both fields of the UpstreamTlsContext message.

      The static cluster resources can be viewed with the istioctl pc cluster command; the first column corresponds to the Cluster.name above. sds-grpc provides the SDS service; see the official documentation for how SDS works.

```
# istioctl pc cluster sleep-856d589c9b-x6szk.default |grep STATIC
BlackHoleCluster - - - STATIC
agent - - - STATIC
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
sleep.default.svc.cluster.local 80 http inbound STATIC
```
    • listeners: the HTTP connection manager network filter is used below.

```json
"listeners":[
  {
    "address": { /* address the listener listens on */
      "socket_address": {
        "protocol": "TCP","address": "0.0.0.0","port_value": 15090
      }
    },"filter_chains": [
      {
        "filters": [
          {
            "name": "envoy.http_connection_manager","typed_config": { /* configuration of the extension API */
              "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager","codec_type": "AUTO", /* the connection manager decides which codec to use */
              "stat_prefix": "stats","route_config": { /* static route table of the connection manager */
                "virtual_hosts": [ /* virtual hosts used by the route table */
                  {
                    "name": "backend", /* virtual host used by the route table */
                    "domains": [ /* list of domains matched to this virtual host */
                      "*"
                    ],"routes": [ /* routes matched against incoming requests; the first match is used */
                      {
                        "match": { /* requests with HTTP path /stats/prometheus are routed to the prometheus_stats cluster */
                          "prefix": "/stats/prometheus"
                        },"route": {
                          "cluster": "prometheus_stats"
                        }
                      }
                    ]
                  }
                ]
              },"http_filters": [{ /* filters forming the chain that processes requests; no extra rules are defined here */
                "name": "envoy.router","typed_config": {
                  "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
                }
              }]
            }
          }
        ]
      }
    ]
  },{
    "address": {
      "socket_address": {
        "protocol": "TCP","port_value": 15021
      }
    },"filter_chains": [
      {
        "filters": [
          {
            "name": "envoy.http_connection_manager","typed_config": {
              "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager","stat_prefix": "agent","route_config": { /* static route table configuration */
                "virtual_hosts": [
                  {
                    "name": "backend","domains": [
                      "*"
                    ],"routes": [
                      {
                        "match": { /* requests with HTTP path /healthz/ready are routed to the agent cluster */
                          "prefix": "/healthz/ready"
                        },"route": {
                          "cluster": "agent"
                        }
                      }
                    ]
                  }
                ]
              },"http_filters": [{
                "name": "envoy.router","typed_config": {
                  "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
                }
              }]
            }
          }
        ]
      }
    ]
  }
]
```
  • tracing: corresponds to the zipkin cluster defined in static_resources above.

```json
"tracing": {
  "http": {
    "name": "envoy.zipkin","typed_config": {
      "@type": "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig","collector_cluster": "zipkin","collector_endpoint": "/api/v2/spans","collector_endpoint_version": "HTTP_JSON","trace_id_128bit": true,"shared_span_context": false
    }
  }
}
```

    The basic flow is shown in the original article's diagram (omitted here).

Full configuration obtained from the Envoy admin interface

The full configuration can be obtained by running curl localhost:15000/config_dump inside a pod with the Envoy sidecar injected. It consists of five main parts: BootstrapConfig, ClustersConfig, ListenersConfig, RoutesConfig and SecretsConfig.
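As a quick way to explore such a dump offline, the cluster names can be pulled out of the saved JSON with a few lines of Python (a sketch over a trimmed, hypothetical fragment; a real config_dump is far larger):

```python
import json

# A trimmed, hypothetical config_dump fragment; a real dump saved from
# `curl localhost:15000/config_dump` contains much more detail.
config_dump = json.loads("""
{
  "configs": [
    {
      "@type": "type.googleapis.com/envoy.admin.v3.ClustersConfigDump",
      "static_clusters": [
        {"cluster": {"name": "prometheus_stats"}},
        {"cluster": {"name": "xds-grpc"}}
      ],
      "dynamic_active_clusters": [
        {"cluster": {"name": "BlackHoleCluster"}},
        {"cluster": {"name": "PassthroughCluster"}}
      ]
    }
  ]
}
""")

def cluster_names(dump):
    """Collect static and dynamic cluster names from a ClustersConfigDump section."""
    names = {"static": [], "dynamic": []}
    for cfg in dump["configs"]:
        if cfg["@type"].endswith("ClustersConfigDump"):
            for c in cfg.get("static_clusters", []):
                names["static"].append(c["cluster"]["name"])
            for c in cfg.get("dynamic_active_clusters", []):
                names["dynamic"].append(c["cluster"]["name"])
    return names

print(cluster_names(config_dump))
```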

  • Bootstrap: identical to the content of the envoy-rev0.json file generated by pilot-agent above, i.e. the initial configuration handed to the Envoy proxy, giving the address of the xDS server and other information.

  • Clusters: in Envoy, a cluster is a group of service backends; each cluster contains one or more endpoints (a cluster can loosely be thought of as a k8s service).

    ClustersConfig contains two kinds of cluster configuration: static_clusters and dynamic_active_clusters. The former holds the static cluster resources configured in envoy-rev0.json: agent, prometheus_stats, sds-grpc, xds-grpc and zipkin; the latter holds the dynamic configuration fetched from the istio control plane over the xDS interfaces. dynamic_active_clusters mainly fall into the following four types:

    • BlackHoleCluster:

```json
"cluster": {
  "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster","name": "BlackHoleCluster", /* cluster name */
  "type": "STATIC","connect_timeout": "10s","filters": [ /* filter configuration for outbound connections */
    {
      "name": "istio.metadata_exchange","typed_config": { /* configuration of the extension API */
        "@type": "type.googleapis.com/udpa.type.v1.TypedStruct","type_url": "type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange","value": {
          "protocol": "istio-peer-exchange"
        }
      }
    }
  ]
},
```

      The API type used by BlackHoleCluster is type.googleapis.com/udpa.type.v1.TypedStruct, which indicates that the control plane lacks the schema definition for this extension; the client converts the content into a typed configuration resource using the API given by type_url.

      Above, Istio uses the istio-peer-exchange protocol; two Envoy instances inside the service mesh use it to exchange node metadata. The NodeMetadata data structure is as follows:

```go
type NodeMetadata struct {
	// ProxyConfig defines the proxy config specified for a proxy.
	// Note that this setting may be configured differently for each proxy, due to user overrides
	// or from different versions of proxies connecting. While Pilot has access to the meshConfig.defaultConfig,
	// this field should be preferred if it is present.
	ProxyConfig *NodeMetaProxyConfig `json:"PROXY_CONFIG,omitempty"`
	// IstioVersion specifies the Istio version associated with the proxy
	IstioVersion string `json:"ISTIO_VERSION,omitempty"`
	// Labels specifies the set of workload instance (ex: k8s pod) labels associated with this node.
	Labels map[string]string `json:"LABELS,omitempty"`
	// InstanceIPs is the set of IPs attached to this proxy
	InstanceIPs StringList `json:"INSTANCE_IPS,omitempty"`
	// Namespace is the namespace in which the workload instance is running.
	Namespace string `json:"NAMESPACE,omitempty"`
	// InterceptionMode is the name of the metadata variable that carries info about
	// traffic interception mode at the proxy
	InterceptionMode TrafficInterceptionMode `json:"INTERCEPTION_MODE,omitempty"`
	// ServiceAccount specifies the service account which is running the workload.
	ServiceAccount string `json:"SERVICE_ACCOUNT,omitempty"`
	// RouterMode indicates whether the proxy is functioning as a SNI-DNAT router
	// processing the AUTO_PASSTHROUGH gateway servers
	RouterMode string `json:"ROUTER_MODE,omitempty"`
	// MeshID specifies the mesh ID environment variable.
	MeshID string `json:"MESH_ID,omitempty"`
	// ClusterID defines the cluster the node belongs to.
	ClusterID string `json:"CLUSTER_ID,omitempty"`
	// Network defines the network the node belongs to. It is optional metadata,
	// set at injection time. When set, the endpoints returned to a node that is not on the same network
	// will be replaced with the gateway defined in the settings.
	Network string `json:"NETWORK,omitempty"`
	// RequestedNetworkView specifies the networks that the proxy wants to see
	RequestedNetworkView StringList `json:"REQUESTED_NETWORK_VIEW,omitempty"`
	// PodPorts defines the ports on a pod. This is used to lookup named ports.
	PodPorts PodPortList `json:"POD_PORTS,omitempty"`
	// TLSServerCertChain is the absolute path to server cert-chain file
	TLSServerCertChain string `json:"TLS_SERVER_CERT_CHAIN,omitempty"`
	// TLSServerKey is the absolute path to server private key file
	TLSServerKey string `json:"TLS_SERVER_KEY,omitempty"`
	// TLSServerRootCert is the absolute path to server root cert file
	TLSServerRootCert string `json:"TLS_SERVER_ROOT_CERT,omitempty"`
	// TLSClientCertChain is the absolute path to client cert-chain file
	TLSClientCertChain string `json:"TLS_CLIENT_CERT_CHAIN,omitempty"`
	// TLSClientKey is the absolute path to client private key file
	TLSClientKey string `json:"TLS_CLIENT_KEY,omitempty"`
	// TLSClientRootCert is the absolute path to client root cert file
	TLSClientRootCert string `json:"TLS_CLIENT_ROOT_CERT,omitempty"`
	CertBaseDir string `json:"BASE,omitempty"`
	// IdleTimeout specifies the idle timeout for the proxy, in duration format (10s).
	// If not set, no timeout is set.
	IdleTimeout string `json:"IDLE_TIMEOUT,omitempty"`
	// HTTP10 indicates the application behind the sidecar is making outbound http requests with HTTP/1.0
	// protocol. It will enable the "AcceptHttp_10" option on the http options for outbound HTTP listeners.
	// Alpha in 1.1; based on feedback it may be turned into an API or change. Set to "1" to enable.
	HTTP10 string `json:"HTTP10,omitempty"`
	// Generator indicates the client wants to use a custom Generator plugin.
	Generator string `json:"GENERATOR,omitempty"`
	// DNSCapture indicates whether the workload has enabled dns capture
	DNSCapture string `json:"DNS_CAPTURE,omitempty"`
	// ProxyXDSViaAgent indicates that xds data is being proxied via the agent
	ProxyXDSViaAgent string `json:"PROXY_XDS_VIA_AGENT,omitempty"`
	// Raw contains a copy of the raw metadata. This is needed to lookup arbitrary values.
	// If a value is known ahead of time it should be added to the struct rather than reading from here.
	Raw map[string]interface{} `json:"-"`
}
```

      Istio enables TCP policy and control through certain TCP attributes generated by the Envoy proxy, which are obtained from Envoy's node metadata. Envoy forwards node metadata to the peer Envoy using an ALPN tunnel and a prefix-based protocol. Istio defines a new protocol, istio-peer-exchange, which is advertised and prioritized during TLS negotiation by the client and server sidecars in the mesh. When both ends have the istio proxy enabled, ALPN negotiation resolves the protocol to istio-peer-exchange (so the exchange is limited to interactions within the istio service mesh), and subsequent TCP exchanges follow the rules of the istio-peer-exchange protocol.
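A minimal sketch of the ALPN offer (illustrative only — Envoy performs this negotiation internally in C++; Python's ssl module is used here merely to show how the protocol list from the configurations in this article would be advertised by a TLS client):

```python
import ssl

# ALPN values taken from the sidecar's transport socket configuration;
# the first entry has the highest priority during negotiation.
ALPN_PROTOCOLS = ["istio-peer-exchange", "istio"]

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_alpn_protocols(ALPN_PROTOCOLS)  # raises ValueError on malformed entries
print("offered ALPN protocols:", ALPN_PROTOCOLS)
```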

      The following command shows that the BlackHoleCluster cluster has no endpoints:

```
# istioctl pc endpoint sleep-856d589c9b-x6szk.default --cluster BlackHoleCluster
ENDPOINT STATUS OUTLIER CHECK CLUSTER
```

      The following content draws on the official blog.

      Istio offers two ways to manage external services: setting global.outboundTrafficPolicy.mode to REGISTRY_ONLY blocks all access to external services, while setting it to ALLOW_ANY allows all access to external services. By default all access to external services is allowed.

      BlackHoleCluster: when global.outboundTrafficPolicy.mode is set to REGISTRY_ONLY, Envoy creates a virtual cluster named BlackHoleCluster. In this mode, all access to external services is blocked (unless a service entry is added for each service). To implement this, the default outbound listener (listening on 0.0.0.0:15001) sets up a TCP proxy using the original destination, with BlackHoleCluster as a static cluster. Since BlackHoleCluster has no endpoints, all outbound external traffic is dropped. In addition, Istio creates a dedicated listener for every port/protocol combination of platform services; a request to an external service on one of these ports will not hit the virtual listener. For that case the route configuration is extended with a BlackHoleCluster route: if no other route matches, the Envoy proxy directly returns HTTP status 502 (BlackHoleCluster can be seen as a routing black hole).

```json
{
  "name": "block_all","domains": [
    "*"
  ],"routes": [
    {
      "match": {
        "prefix": "/"
      },"direct_response": {
        "status": 502
      },"name": "block_all"
    }
  ],"include_request_attempt_count": true
},
```
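For reference, the mode can also be set through the IstioOperator meshConfig field, which corresponds to the global.outboundTrafficPolicy.mode value discussed above (a sketch; apply at install time, e.g. with istioctl install -f):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY   # block unregistered external services; ALLOW_ANY is the default
```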
    • PassthroughCluster: note that PassthroughCluster also uses the istio-peer-exchange protocol for TCP.

```json
"cluster": {
  "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster","name": "PassthroughCluster","type": "ORIGINAL_DST", /* the type is ORIGINAL_DST, a special kind of cluster */
  "connect_timeout": "10s","lb_policy": "CLUSTER_PROVIDED","circuit_breakers": {
    "thresholds": [
      {
        "max_connections": 4294967295,"max_pending_requests": 4294967295,"max_requests": 4294967295,"max_retries": 4294967295
      }
    ]
  },"protocol_selection": "USE_DOWNSTREAM_PROTOCOL","filters": [
    {
      "name": "istio.metadata_exchange", /* node metadata is exchanged over the ALPN istio-peer-exchange protocol */
      "typed_config": {
        "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
```

      The following command shows that the PassthroughCluster cluster has no endpoints either:

```
# istioctl pc endpoint sleep-856d589c9b-x6szk.default --cluster PassthroughCluster
ENDPOINT STATUS OUTLIER CHECK CLUSTER
```

      PassthroughCluster: when global.outboundTrafficPolicy.mode is set to ALLOW_ANY, Envoy creates a virtual cluster named PassthroughCluster. In this mode, all access to external services is allowed. To implement this, the default outbound listener (listening on 0.0.0.0:15001) configures a TCP proxy using SO_ORIGINAL_DST, with PassthroughCluster as a static cluster.

      The PassthroughCluster cluster uses the original-destination load-balancing policy, which makes Envoy send traffic to its original destination.

      BlackHoleCluster类似,对于每个基于端口/协议的listener,都会添加虚拟路由,将PassthroughCluster作为为默认路由。

```json
{
  "name": "allow_any","domains": [
    "*"
  ],"routes": [
    {
      "match": {
        "prefix": "/"
      },"route": {
        "cluster": "PassthroughCluster","timeout": "0s","max_grpc_timeout": "0s"
      },"name": "allow_any"
    }
  ],"include_request_attempt_count": true
},
```

      Since global.outboundTrafficPolicy.mode can hold only one value, BlackHoleCluster and PassthroughCluster are mutually exclusive. Their routes exist only inside the istio service mesh, i.e. in pods with the sidecar injected.

      Access to BlackHoleCluster and PassthroughCluster can be monitored with Prometheus metrics.
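In REGISTRY_ONLY mode, access to a particular external service is restored by registering it with a ServiceEntry. A minimal sketch (the resource name and host below are just examples):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-httpbin      # hypothetical name
spec:
  hosts:
  - httpbin.org               # example external host
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL     # the service lives outside the mesh
```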

    • inbound cluster: the cluster that handles inbound requests. For the sleep application below there is only one local backend, 127.0.0.1:80, and load_assignment specifies the cluster name and endpoint load information. Since traffic on this listener does not leave the pod, no filters are configured below.

```json
"cluster": {
  "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster","name": "inbound|80|http|sleep.default.svc.cluster.local","type": "STATIC","circuit_breakers": {
    "thresholds": [
      {
        "max_connections": 4294967295,"max_retries": 4294967295
      }
    ]
  },"load_assignment": { /* endpoints and load balancing of the inbound cluster */
    "cluster_name": "inbound|80|http|sleep.default.svc.cluster.local","endpoints": [
      {
        "lb_endpoints": [
          {
            "endpoint": {
              "address": {
                "socket_address": {
                  "address": "127.0.0.1","port_value": 80
                }
              }
            }
          }
        ]
      }
    ]
  }
},
```

      The inbound cluster information can also be viewed with:

```
# istioctl pc cluster sleep-856d589c9b-c6xsm.default --direction inbound
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
sleep.default.svc.cluster.local 80 http inbound STATIC
```
    • outbound cluster: clusters for services outside this Envoy node, configuring how to connect upstream. The EDS type below means the cluster's endpoints come from EDS service discovery. The outbound cluster shown below is the service on istiod's port 15012. The basic structure is as follows; transport_socket_matches appears only when TLS is used and configures the TLS-certificate related information.

      The EDS content can be viewed with istioctl pc endpoint:

```
# istioctl pc endpoint sleep-856d589c9b-rn7dw.default --cluster "outbound|15012||istiod.istio-system.svc.cluster.local"
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.80.3.141:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
```

    The full content is as follows:

```json
{
  "version_info": "2020-09-15T08:05:54Z/4","cluster": {
    "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster","name": "outbound|15012||istiod.istio-system.svc.cluster.local","type": "EDS","eds_cluster_config": { /* EDS configuration */
      "eds_config": {
        "ads": {},"resource_api_version": "V3"
      },"service_name": "outbound|15012||istiod.istio-system.svc.cluster.local" /* alternative name of the EDS cluster; need not equal the cluster name */
    },"circuit_breakers": { /* circuit breaker settings */
      "thresholds": [
        {
          "max_connections": 4294967295
        }
      ]
    },"filters": [ /* node metadata is exchanged over the istio-peer-exchange protocol */
      {
        "name": "istio.metadata_exchange","typed_config": {
          "@type": "type.googleapis.com/udpa.type.v1.TypedStruct","value": {
            "protocol": "istio-peer-exchange"
          }
        }
      }
    ],"transport_socket_matches": [ /* matched backends use a transport socket with TLS */
      {
        "name": "tlsMode-istio", /* match name */
        "match": { /* backend matching condition; pods injected with the istio sidecar carry the label security.istio.io/tlsMode=istio */
          "tlsMode": "istio"
        },"transport_socket": { /* transport socket configuration used for the matched backends of this cluster */
          "name": "envoy.transport_sockets.tls","typed_config": {
            "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext","common_tls_context": { /* TLS context used by client and server */
              "alpn_protocols": [ /* ALPN protocol set offered for the upstream to choose from */
                "istio-peer-exchange","istio"
              ],"tls_certificate_sds_secret_configs": [ /* configuration for fetching TLS certificates over the SDS API */
                {
                  "name": "default","sds_config": {
                    "api_config_source": {
                      "api_type": "GRPC","grpc_services": [ /* the SDS cluster */
                        {
                          "envoy_grpc": {
                            "cluster_name": "sds-grpc" /* the statically configured cluster above */
                          }
                        }
                      ],"transport_api_version": "V3"
                    },"initial_fetch_timeout": "0s","resource_api_version": "V3"
                  }
                }
              ],"combined_validation_context": { /* combines a CertificateValidationContext (the default_validation_context below) with an SDS configuration; when the SDS service returns a dynamic CertificateValidationContext, the dynamic and default contexts are merged into a new CertificateValidationContext used for validation */
                "default_validation_context": { /* how to authenticate the certificate of the peer istiod service */
                  "match_subject_alt_names": [ /* Envoy validates the SAN in the certificate as follows */
                    {
                      "exact": "spiffe://new-td/ns/istio-system/sa/istiod-service-account" /* the Istio data plane uses the service account for authorization */
                    }
                  ]
                },"validation_context_sds_secret_config": { /* SDS configuration, also served over the static sds-grpc cluster */
                  "name": "ROOTCA", /* the CA certificate used to authenticate the peer */
                  "sds_config": {
                    "api_config_source": {
                      "api_type": "GRPC","grpc_services": [
                        {
                          "envoy_grpc": {
                            "cluster_name": "sds-grpc" /* the SDS server serving the CA certificate */
                          }
                        }
                      ],"resource_api_version": "V3"
                    }
                  }
                }
              }
            },"sni": "outbound_.15012_._.istiod.istio-system.svc.cluster.local" /* SNI string used when creating the TLS connection, i.e. the value of the TLS server_name extension */
          }
        }
      },{
        "name": "tlsMode-disabled", /* backends that do not match (i.e. backends outside the istio mesh) are reached in plaintext */
        "match": {},"transport_socket": {
          "name": "envoy.transport_sockets.raw_buffer"
        }
      }
    ]
  },"last_updated": "2020-09-15T08:06:23.565Z"
},
```
  • Listeners: Envoy uses listeners to receive and process requests from downstream. Like clusters, listeners come in static and dynamic configurations; the static configuration comes from the envoy-rev0.json file generated by istio-agent. The dynamic configuration is:

    • virtualOutbound listener: when injecting the sidecar, Istio sets iptables rules via the init container that intercept all outbound TCP traffic to local port 15001:

```
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
```

      An istio-agent configuration contains exactly one virtualOutbound listener. Note that it has no transport_socket configured: its downstream traffic comes from the business container in the same pod, so no TLS validation is needed; traffic is simply redirected to port 15001 and then forwarded to the listener matching the original destination IP:Port.

```json
{
  "name": "virtualOutbound","active_state": {
    "version_info": "2020-09-15T08:05:54Z/4","listener": {
      "@type": "type.googleapis.com/envoy.config.listener.v3.Listener","name": "virtualOutbound","address": { /* address the listener listens on */
        "socket_address": {
          "address": "0.0.0.0","port_value": 15001
        }
      },"filter_chains": [ /* filter chains applied to this listener */
        {
          "filters": [ /* filters used when a connection is established to this listener, processed in order; if the filter list is empty the connection is closed by default */
            {
              "name": "istio.stats","typed_config": {
                "@type": "type.googleapis.com/udpa.type.v1.TypedStruct","type_url": "type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm","value": {
                  "config": { /* Wasm plugin configuration */
                    "root_id": "stats_outbound", /* filters/services with the same root_id within a VM share the same RootContext and Contexts; if this field is empty, all filters/services with an empty root_id share the Context(s) of the same vm_id */
                    "vm_config": { /* Wasm VM configuration */
                      "vm_id": "tcp_stats_outbound", /* code with the same vm_id will use the same VM */
                      "runtime": "envoy.wasm.runtime.null", /* the Wasm runtime: v8 or null */
                      "code": {
                        "local": {
                          "inline_string": "envoy.wasm.stats"
                        }
                      }
                    },"configuration": {
                      "@type": "type.googleapis.com/google.protobuf.StringValue","value": "{\n \"debug\": \"false\",\n \"stat_prefix\": \"istio\"\n}\n"
                    }
                  }
                }
              }
            },{
              "name": "envoy.tcp_proxy", /* the filter handling TCP */
              "typed_config": {
                "@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy","stat_prefix": "PassthroughCluster","cluster": "PassthroughCluster", /* the upstream cluster to connect to */
                "access_log": [
                  {
                    "name": "envoy.file_access_log","typed_config": { /* output format and path of the log */
                      "@type": "type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog","path": "/dev/stdout","format": "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% \"%DYNAMIC_METADATA(istio.mixer:status)%\" \"%UPSTREAM_TRANSPORT_FAILURE_REASON%\" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%\n"
                    }
                  }
                ]
              }
            }
          ],"name": "virtualOutbound-catchall-tcp"
        }
      ],"hidden_envoy_deprecated_use_original_dst": true,"traffic_direction": "OUTBOUND"
    },"last_updated": "2020-09-15T08:06:24.066Z"
  }
},
```

      The cluster of the envoy.tcp_proxy filter above is PassthroughCluster because global.outboundTrafficPolicy.mode is set to ALLOW_ANY, so external services are reachable by default. If global.outboundTrafficPolicy.mode were set to REGISTRY_ONLY, this would instead be the BlackHoleCluster cluster, and all requests to external services would be dropped by default.

      Telemetry above is recorded via wasm (WebAssembly); the Envoy documentation currently says little about wasm, see the open-source code for details. The runtime field being null shows that it is not actually enabled here. Wasm-based telemetry can be enabled when installing istio with the following parameters:

```
$ istioctl install --set values.telemetry.v2.metadataExchange.wasmEnabled=true --set values.telemetry.v2.prometheus.wasmEnabled=true
```

      Once enabled, the wasm-related telemetry filter configuration becomes the following; note that its runtime is now envoy.wasm.runtime.v8. See the official blog for more.

```json
{
  "name": "istio.stats","typed_config": {
    "@type": "type.googleapis.com/udpa.type.v1.TypedStruct","value": {
      "config": {
        "root_id": "stats_outbound","vm_config": { /* wasm VM configuration */
          "vm_id": "tcp_stats_outbound","runtime": "envoy.wasm.runtime.v8", /* the wasm runtime in use */
          "code": {
            "local": {
              "filename": "/etc/istio/extensions/stats-filter.compiled.wasm" /* path of the compiled wasm plugin */
            }
          },"allow_precompiled": true
        },"configuration": {
          "@type": "type.googleapis.com/google.protobuf.StringValue","value": "{\n \"debug\": \"false\",\n \"stat_prefix\": \"istio\"\n}\n"
        }
      }
    }
  }
},
```

      The compiled wasm programs can be found under /etc/istio/extensions/ in the istio-proxy container: metadata-exchange-filter.wasm for exchanging node metadata and stats-filter.wasm for telemetry; the variants with compiled in the name are used for HTTP.

      1. $ ls
      2. Metadata-exchange-filter.compiled.wasm Metadata-exchange-filter.wasm stats-filter.compiled.wasm stats-filter.wasm

      The Istio filter processing pipeline is illustrated below:

    • VirtualInbound/Inbound listener: similar to the virtualOutbound listener; the following iptables rule redirects all inbound TCP traffic to port 15006:

        -A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006

      Below is a typical configuration from a demo environment. For each listened address there are two filter chains, one with a transport_socket and one without, handling TLS and plaintext connections respectively. The main inbound listeners handle:

      • TCP over IPv4, with TLS
      • TCP over IPv4, without TLS
      • TCP over IPv6, with TLS
      • TCP over IPv6, without TLS
      • HTTP over IPv4, with TLS
      • HTTP over IPv4, without TLS
      • HTTP over IPv6, with TLS
      • HTTP over IPv6, without TLS
      • application traffic, with or without TLS

      The inbound listener excerpt below covers:

      • TCP over IPv4 with TLS
      • TCP over IPv4 without TLS
      • application traffic with TLS
        {
          "name": "virtualInbound",
          "address": { /* address and port the listener binds to */
            "socket_address": {
              "address": "0.0.0.0",
              "port_value": 15006
            }
          },
          "filter_chains": [
            /* matches TLS connections from any IPv4 address whose ALPN is istio-peer-exchange or istio */
            {
              "filter_chain_match": { /* criteria used to match a connection to this filter chain */
                "prefix_ranges": [ /* IP prefix matched when the listener binds to 0.0.0.0/::; this entry matches every address */
                  {
                    "address_prefix": "0.0.0.0",
                    "prefix_len": 0
                  }
                ],
                "transport_protocol": "tls", /* matched transport protocol */
                "application_protocols": [ /* matched ALPN values */
                  "istio-peer-exchange",
                  "istio"
                ]
              },
              "filters": [
                {
                  "name": "istio.metadata_exchange", /* node metadata exchange */
                  "typed_config": {
                    "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
                    "value": {
                      "protocol": "istio-peer-exchange"
                    }
                  }
                },
                {
                  "name": "istio.stats", /* wasm-based telemetry */
                  "typed_config": {
                    "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
                    "value": {
                      "config": {
                        "root_id": "stats_inbound",
                        "vm_config": {
                          "vm_id": "tcp_stats_inbound",
                          "runtime": "envoy.wasm.runtime.null",
                          "code": {
                            "local": {
                              "inline_string": "envoy.wasm.stats"
                            }
                          }
                        },
                        ...
                },
                {
                  /* access log for connections to the upstream cluster InboundPassthroughClusterIpv4, which handles IPv4 HTTP traffic */
                  "name": "envoy.tcp_proxy",
                  "typed_config": {
                    "@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
                    "stat_prefix": "InboundPassthroughClusterIpv4",
                    "cluster": "InboundPassthroughClusterIpv4",
                    "access_log": [
                      {
                        "name": "envoy.file_access_log",
                        "typed_config": {
                          "@type": "type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog",
                          ...
                        }
                      }
                    ]
                  }
                }
              ],
              "transport_socket": { /* transport socket handling TLS */
                "name": "envoy.transport_sockets.tls",
                "typed_config": {
                  "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
                  "common_tls_context": {
                    "alpn_protocols": [ /* ALPN list offered by the listener */
                      "istio-peer-exchange",
                      "h2",
                      "http/1.1"
                    ],
                    "tls_certificate_sds_secret_configs": [ /* certificate fetched via the SDS API */
                      {
                        "name": "default",
                        "sds_config": {
                          "api_config_source": {
                            "api_type": "GRPC",
                            "grpc_services": [
                              {
                                "envoy_grpc": {
                                  "cluster_name": "sds-grpc"
                                }
                              }
                            ],
                            "transport_api_version": "V3"
                          },
                          "resource_api_version": "V3"
                        }
                      }
                    ],
                    "combined_validation_context": {
                      "default_validation_context": { /* validates the SAN of the peer certificate */
                        "match_subject_alt_names": [
                          {
                            "prefix": "spiffe://new-td/"
                          },
                          {
                            "prefix": "spiffe://old-td/"
                          }
                        ]
                      },
                      "validation_context_sds_secret_config": { /* CA certificate fetched via the SDS API */
                        "name": "ROOTCA",
                        "resource_api_version": "V3"
                      }
                    }
                  },
                  "require_client_certificate": true
                }
              },
              "name": "virtualInbound"
            },
            /* unlike the chain above, this one matches plaintext (non-TLS) connections */
            {
              "filter_chain_match": {
                "prefix_ranges": [
                  {
                    "address_prefix": "0.0.0.0",
                    "prefix_len": 0
                  }
                ]
              },
              ...
            },
            ...
            /* application filter chain, matching HTTP port 80 */
            {
              "filter_chain_match": {
                "destination_port": 80, /* matched destination port */
                "application_protocols": [ /* matched ALPN values, only used with TLS */
                  "istio",
                  "istio-http/1.0",
                  "istio-http/1.1",
                  "istio-h2"
                ]
              },
              "filters": [
                /* node metadata exchange */
                ...
                {
                  "name": "envoy.http_connection_manager", /* HTTP connection manager filter */
                  "typed_config": {
                    "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
                    "stat_prefix": "inbound_0.0.0.0_80",
                    "route_config": { /* static route table */
                      "name": "inbound|80|http|sleep.default.svc.cluster.local", /* name of the route configuration */
                      "virtual_hosts": [ /* virtual hosts that make up the route table */
                        {
                          "name": "inbound|http|80", /* virtual host name */
                          "domains": [ /* domains matched to this virtual host */
                            "*"
                          ],
                          "routes": [ /* routes inbound HTTP requests with path "/" to cluster inbound|80|http|sleep.default.svc.cluster.local */
                            {
                              "match": {
                                "prefix": "/"
                              },
                              "route": {
                                "cluster": "inbound|80|http|sleep.default.svc.cluster.local",
                                "max_grpc_timeout": "0s"
                              },
                              "decorator": {
                                "operation": "sleep.default.svc.cluster.local:80/*"
                              },
                              "name": "default" /* name of the route */
                            }
                          ]
                        }
                      ],
                      "validate_clusters": false
                    },
                    "http_filters": [ /* HTTP filter chain */
                      {
                        "name": "istio.metadata_exchange", /* HTTP-based metadata exchange */
                        ...
                      },
                      {
                        "name": "istio_authn", /* istio mTLS default */
                        "typed_config": {
                          "@type": "type.googleapis.com/istio.envoy.config.filter.http.authn.v2alpha1.FilterConfig",
                          "policy": {
                            "peers": [
                              {
                                "mtls": {
                                  "mode": "PERMISSIVE"
                                }
                              }
                            ]
                          },
                          "skip_validate_trust_domain": true
                        }
                      },
                      {
                        "name": "envoy.filters.http.cors",
                        "typed_config": {
                          "@type": "type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors"
                        }
                      },
                      {
                        "name": "envoy.fault",
                        "typed_config": {
                          "@type": "type.googleapis.com/envoy.extensions.filters.http.fault.v3.HTTPFault"
                        }
                      },
                      {
                        "name": "istio.stats", /* HTTP-based telemetry */
                        ...
                      },
                      {
                        "name": "envoy.router",
                        "typed_config": {
                          "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
                        }
                      }
                    ],
                    "tracing": {
                      "client_sampling": {
                        "value": 100
                      },
                      "random_sampling": {
                        "value": 1
                      },
                      "overall_sampling": {
                        "value": 100
                      }
                    },
                    "server_name": "istio-envoy",
                    "access_log": [ /* access log format */
                      {
                        "name": "envoy.file_access_log",
                        ...
                      }
                    ],
                    "use_remote_address": false,
                    "generate_request_id": true,
                    "forward_client_cert_details": "APPEND_FORWARD",
                    "set_current_client_cert_details": {
                      "subject": true,
                      "dns": true,
                      "uri": true
                    },
                    "upgrade_configs": [
                      {
                        "upgrade_type": "websocket"
                      }
                    ],
                    "stream_idle_timeout": "0s",
                    "normalize_path": true
                  }
                }
              ],
              "transport_socket": { /* TLS transport socket configuration */
                "name": "envoy.transport_sockets.tls",
                ...
              },
              "name": "0.0.0.0_80"
            },
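      The filter chain selection described above (port, transport protocol, ALPN) can be sketched in simplified form (an illustrative sketch; real Envoy matching also considers server names, source ranges, and more):

```python
def select_chain(chains, dest_port, transport, alpn=None):
    """Return the name of the first filter chain whose match criteria
    accept the connection (most specific chains are listed first)."""
    for chain in chains:
        m = chain["match"]
        if "destination_port" in m and m["destination_port"] != dest_port:
            continue
        if "transport_protocol" in m and m["transport_protocol"] != transport:
            continue
        if "application_protocols" in m and alpn not in m["application_protocols"]:
            continue
        return chain["name"]
    return None

chains = [
    {"name": "0.0.0.0_80", "match": {"destination_port": 80,
        "application_protocols": ["istio", "istio-http/1.0", "istio-http/1.1", "istio-h2"]}},
    {"name": "virtualInbound-mtls", "match": {"transport_protocol": "tls",
        "application_protocols": ["istio-peer-exchange", "istio"]}},
    {"name": "virtualInbound-plaintext", "match": {}},
]

print(select_chain(chains, 80, "tls", "istio"))    # 0.0.0.0_80
print(select_chain(chains, 9090, "tls", "istio"))  # virtualInbound-mtls
print(select_chain(chains, 9090, "raw_buffer"))    # virtualInbound-plaintext
```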
    • Outbound listener: below is the outbound listener for the Prometheus service on port 9092. 10.84.30.227 is the k8s service address of Prometheus; the listener points at the backend cluster outbound|9092||prometheus-k8s.openshift-monitoring.svc.cluster.local, and the route_config_name field names the route it uses, prometheus-k8s.openshift-monitoring.svc.cluster.local:9092.

        {
          "name": "10.84.30.227_9092",
          "address": {
            "socket_address": {
              "address": "10.84.30.227",
              "port_value": 9092
            }
          },
          "filter_chains": [
            {
              "filters": [
                {
                  "name": "istio.stats",
                  ...
                },
                {
                  /* TCP proxy filter; also sets the access-log format for connections to the target cluster */
                  "name": "envoy.tcp_proxy",
                  "typed_config": {
                    "@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
                    "stat_prefix": "outbound|9092||prometheus-k8s.openshift-monitoring.svc.cluster.local",
                    "cluster": "outbound|9092||prometheus-k8s.openshift-monitoring.svc.cluster.local",
                    "access_log": [
                      ...
                    ]
                  }
                }
              ]
            },
            {
              "filter_chain_match": {
                "application_protocols": [
                  "http/1.0",
                  "http/1.1",
                  "h2c"
                ]
              },
              "filters": [
                {
                  "name": "envoy.http_connection_manager", /* HTTP connection manager */
                  "typed_config": {
                    "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
                    "stat_prefix": "outbound_10.84.30.227_9092",
                    "rds": { /* RDS configuration */
                      "config_source": {
                        "ads": {},
                        "resource_api_version": "V3"
                      },
                      "route_config_name": "prometheus-k8s.openshift-monitoring.svc.cluster.local:9092" /* route configuration to use */
                    },
                    "http_filters": [
                      {
                        "name": "istio.metadata_exchange",
                        "typed_config": {
                          "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
                          "type_url": "type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm",
                          ...
                        }
                      },
                      {
                        "name": "istio.alpn",
                        "typed_config": {
                          "@type": "type.googleapis.com/istio.envoy.config.filter.http.alpn.v2alpha1.FilterConfig",
                          "alpn_override": [
                            {
                              "alpn_override": [
                                "istio-http/1.0",
                                "istio"
                              ]
                            },
                            {
                              "upstream_protocol": "HTTP11",
                              "alpn_override": [
                                "istio-http/1.1",
                                "istio"
                              ]
                            },
                            {
                              "upstream_protocol": "HTTP2",
                              "alpn_override": [
                                "istio-h2",
                                "istio"
                              ]
                            }
                          ]
                        }
                      },
                      ...
                    ],
                    "tracing": {
                      ...
                    },
                    ...
                    "normalize_path": true
                  }
                }
              ]
            }
          ],
          "deprecated_v1": {
            "bind_to_port": false
          },
          "listener_filters": [
            {
              "name": "envoy.listener.tls_inspector",
              "typed_config": {
                "@type": "type.googleapis.com/envoy.extensions.filters.listener.tls_inspector.v3.TlsInspector"
              }
            },
            {
              "name": "envoy.listener.http_inspector",
              "typed_config": {
                "@type": "type.googleapis.com/envoy.extensions.filters.listener.http_inspector.v3.HttpInspector"
              }
            }
          ],
          "listener_filters_timeout": "5s",
          "traffic_direction": "OUTBOUND",
          "continue_on_listener_filters_timeout": true
        },
        "last_updated": "2020-09-15T08:06:23.989Z"
        }
        },

      As the configuration above shows, route configuration lives inside the HttpConnectionManager filter, so a listener that does not handle HTTP has no corresponding route. For example, the istiod listener on port 15012 below serves xDS and CA over gRPC (with TLS) and is configured purely as a TCP proxy.

        {
          "name": "10.84.251.157_15012",
          "active_state": {
            "version_info": "2020-09-16T07:48:42Z/22",
            "listener": {
              "@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
              "name": "10.84.251.157_15012",
              "address": {
                "socket_address": {
                  "address": "10.84.251.157",
                  "port_value": 15012
                }
              },
              "filter_chains": [
                {
                  "filters": [
                    {
                      "name": "istio.stats",
                      ...
                    },
                    {
                      "name": "envoy.tcp_proxy",
                      "typed_config": {
                        "@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
                        "stat_prefix": "outbound|15012||istiod.istio-system.svc.cluster.local",
                        "cluster": "outbound|15012||istiod.istio-system.svc.cluster.local",
                        "access_log": [
                          ...
                        ]
                      }
                    }
                  ]
                }
              ],
              "deprecated_v1": {
                "bind_to_port": false
              },
              "traffic_direction": "OUTBOUND"
            },
            "last_updated": "2020-09-16T07:49:34.134Z"
          }
        },
    • Route: Istio routes are also split into static and dynamic configuration. Static route configuration comes from static listeners and from the static route tables embedded in dynamic inbound listeners (the route_config field of envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager).

      Below is the dynamic route for the Prometheus service on port 9092; its route_config.name matches the value of the route_config_name field in the Prometheus outbound listener above.

        {
          "version_info": "2020-09-16T07:48:42Z/22",
          "route_config": {
            "@type": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
            "name": "prometheus-k8s.openshift-monitoring.svc.cluster.local:9092",
            "virtual_hosts": [
              {
                "name": "prometheus-k8s.openshift-monitoring.svc.cluster.local:9092",
                "domains": [
                  "prometheus-k8s.openshift-monitoring.svc.cluster.local",
                  "prometheus-k8s.openshift-monitoring.svc.cluster.local:9092",
                  "prometheus-k8s.openshift-monitoring",
                  "prometheus-k8s.openshift-monitoring:9092",
                  "prometheus-k8s.openshift-monitoring.svc.cluster",
                  "prometheus-k8s.openshift-monitoring.svc.cluster:9092",
                  "prometheus-k8s.openshift-monitoring.svc",
                  "prometheus-k8s.openshift-monitoring.svc:9092",
                  "10.84.30.227",
                  "10.84.30.227:9092"
                ],
                "routes": [
                  {
                    "match": {
                      "prefix": "/"
                    },
                    "route": { /* target backend cluster */
                      "cluster": "outbound|9092||prometheus-k8s.openshift-monitoring.svc.cluster.local",
                      "retry_policy": {
                        "retry_on": "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes",
                        "num_retries": 2,
                        "retry_host_predicate": [
                          {
                            "name": "envoy.retry_host_predicates.previous_hosts"
                          }
                        ],
                        "host_selection_retry_max_attempts": "5",
                        "retriable_status_codes": [
                          503
                        ]
                      },
                      "max_grpc_timeout": "0s"
                    },
                    "decorator": {
                      "operation": "prometheus-k8s.openshift-monitoring.svc.cluster.local:9092/*"
                    },
                    "name": "default"
                  }
                ],
                "include_request_attempt_count": true
              }
            ],
            "validate_clusters": false
          },
          "last_updated": "2020-09-16T07:49:52.551Z"
        },
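      Route resolution against this table works in two steps: pick a virtual host by the request's Host header, then pick a route by path. A simplified sketch (illustrative only; real Envoy also supports wildcard and suffix domain matching):

```python
def resolve_cluster(route_config, host, path):
    """Return the target cluster for a request, or None if nothing matches."""
    for vh in route_config["virtual_hosts"]:
        if host in vh["domains"] or "*" in vh["domains"]:
            for r in vh["routes"]:
                if path.startswith(r["match"]["prefix"]):
                    return r["route"]["cluster"]
    return None

route_config = {
    "name": "prometheus-k8s.openshift-monitoring.svc.cluster.local:9092",
    "virtual_hosts": [{
        "name": "prometheus-k8s.openshift-monitoring.svc.cluster.local:9092",
        "domains": ["prometheus-k8s.openshift-monitoring.svc.cluster.local",
                    "prometheus-k8s.openshift-monitoring", "10.84.30.227:9092"],
        "routes": [{"match": {"prefix": "/"},
                    "route": {"cluster": "outbound|9092||prometheus-k8s.openshift-monitoring.svc.cluster.local"}}],
    }],
}

print(resolve_cluster(route_config, "prometheus-k8s.openshift-monitoring", "/metrics"))
# outbound|9092||prometheus-k8s.openshift-monitoring.svc.cluster.local
```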

The overall traffic flow can be summarized as:

  • Inbound request:

    +----------+    +----------------------+    +-----------------+    +----------+
    | iptables +--->+ virtualInbound:15006 +--->+ Inbound Cluster +--->+ endpoint |
    +----------+    +----------------------+    +-----------------+    +----------+

  • Outbound request:

    +----------+    +-----------------------+    +-------------------+
    | iptables +--->+ virtualOutbound:15001 +--->+ Outbound Listener +---+
    +----------+    +-----------------------+    +-------------------+   |
                                                                         |
        +----------------------------------------------------------------+
        |
        |    +-------+    +------------------+    +----------+
        +--->+ route +--->+ Outbound Cluster +--->+ endpoint |
             +-------+    +------------------+    +----------+

See the official Envoy documentation for more details.

Below is a request flow diagram based on Istio's official BookInfo sample, which helps put the whole flow together.

SDS

The diagram below comes from this article.

SDS dynamically delivers two certificates: default and ROOTCA. The former is the certificate used by the workload itself; its SAN encodes the serviceaccount in the workload's namespace. The latter is the cluster CA, mounted into the pod via the configmap istio-ca-root-cert (identical to the secret istio-ca-secret in the istio-system namespace) and used to validate the default certificates of other workloads. Different workloads therefore use different default certificates but share the same ROOTCA certificate.

    {
      "@type": "type.googleapis.com/envoy.admin.v3.SecretsConfigDump",
      "dynamic_active_secrets": [
        {
          "name": "default",
          "version_info": "09-21 20:10:52.178",
          "last_updated": "2020-09-21T20:10:52.446Z",
          "secret": {
            "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret",
            "name": "default", /* certificate used by this workload */
            "tls_certificate": {
              "certificate_chain": {
                "inline_bytes": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JS..."
              },
              "private_key": {
                "inline_bytes": "W3JlZGFjdGVkXQ=="
              }
            }
          }
        },
        {
          "name": "ROOTCA",
          "version_info": "2020-09-15 08:05:53.174860205 +0000 UTC m=+1.073140142",
          "last_updated": "2020-09-15T08:05:53.275Z",
          "secret": {
            "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret",
            "name": "ROOTCA", /* CA certificate used to validate workload certificates */
            "validation_context": {
              "trusted_ca": {
                "inline_bytes": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t..."
              }
            }
          }
        }
      ]
    }

Export the default certificate and inspect it: its SAN is spiffe://new-td/ns/default/sa/sleep, built from the serviceaccount sleep in the default namespace, which provides the workload's identity.

    # openssl x509 -in ca-chain.crt -noout -text
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number:
                37:31:36:d4:25:64:18:10:27:47:75:79:6c:ff:21:3a
        Signature Algorithm: sha256WithRSAEncryption
            Issuer: O=cluster.local
            Validity
                Not Before: Sep 21 08:34:33 2020 GMT
                Not After : Sep 22 08:34:33 2020 GMT
            Subject:
            Subject Public Key Info:
            ...
                X509v3 Subject Alternative Name: critical
                    URI:spiffe://new-td/ns/default/sa/sleep
        Signature Algorithm: sha256WithRSAEncryption
        ...
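The SAN above follows Istio's SPIFFE ID convention, which can be expressed as:

```python
def spiffe_id(trust_domain: str, namespace: str, service_account: str) -> str:
    """Build the SPIFFE ID Istio places in the SAN of a workload certificate."""
    return f"spiffe://{trust_domain}/ns/{namespace}/sa/{service_account}"

print(spiffe_id("new-td", "default", "sleep"))
# spiffe://new-td/ns/default/sa/sleep
```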

到处defaultROOTCA证书,分别对应下面的pod.crt和root-cat.crt,可以看到,能够使用ROOTCA来验证default

  1. # openssl verify -CAfile root-ca.crt pod.crt
  2. ca-chain.crt: OK

There are two places in the Istio sidecar configuration where certificates are configured via SDS:

  • Inbound listener: the inbound listener accepts connections from downstream peers, and its DownstreamTlsContext configures how those connections are authenticated, specifying the ROOTCA certificate used to validate them. The trust-domain prefixes listed in match_subject_alt_names come from the values.global.trustDomain parameter set when installing Istio.

    1. "common_tls_context": {
    2. "alpn_protocols": [
    3. "istio-peer-exchange","tls_certificate_sds_secret_configs": [
    4. {
    5. "name": "default",/* 服务器使用的证书名称,由SDS下发 */
    6. "sds_config": {
    7. "api_config_source": {
    8. "api_type": "GRPC","combined_validation_context": {
    9. "default_validation_context": {
    10. "match_subject_alt_names": [ /* 指定的信任域 */
    11. {
    12. "prefix": "spiffe://new-td/"
    13. },"validation_context_sds_secret_config": {
    14. "name": "ROOTCA",/* 用于认证下游连接的CA证书 */
    15. "sds_config": {
    16. "api_config_source": {
    17. "api_type": "GRPC",
  • Outbound cluster: an outbound cluster (here outbound|80||sleep.default.svc.cluster.local), acting as the client side, also needs to validate the server's certificate. Its UpstreamTlsContext is configured as follows. Since the upstream service is sleep.default.svc.cluster.local, the match_subject_alt_names field pins the SAN expected in the server's certificate.

    1. "common_tls_context": {
    2. "alpn_protocols": [
    3. "istio-peer-exchange","istio"
    4. ],"tls_certificate_sds_secret_configs": [
    5. {
    6. "name": "default","sds_config": {
    7. "api_config_source": {
    8. "api_type": "GRPC","grpc_services": [
    9. {
    10. "envoy_grpc": {
    11. "cluster_name": "sds-grpc"
    12. }
    13. }
    14. ],"transport_api_version": "V3"
    15. },"resource_api_version": "V3"
    16. }
    17. }
    18. ],"combined_validation_context": {
    19. "default_validation_context": {
    20. "match_subject_alt_names": [ /* 上游服务使用的证书中的SAN */
    21. {
    22. "exact": "spiffe://new-td/ns/default/sa/sleep"
    23. }
    24. ]
    25. },"validation_context_sds_secret_config": {
    26. "name": "ROOTCA","resource_api_version": "V3"
    27. }
    28. }
    29. }
    30. },

In short: as a server, a workload only needs to validate the client certificate with the ROOTCA certificate (when no trust domain is specified); as a client, it must both validate the server certificate with ROOTCA and check the SAN contained in the server's certificate.

On a gateway, to perform mutual TLS against services outside the mesh, see Traffic Management in the Istio documentation.
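The two SAN checks described above (trust-domain prefixes on the server side, an exact SPIFFE ID on the client side) can be sketched as (illustrative only):

```python
def san_matches(san, matchers):
    """Return True if the SAN satisfies any of the configured matchers
    (mirroring Envoy's exact/prefix match_subject_alt_names entries)."""
    for m in matchers:
        if "exact" in m and san == m["exact"]:
            return True
        if "prefix" in m and san.startswith(m["prefix"]):
            return True
    return False

server_side = [{"prefix": "spiffe://new-td/"}, {"prefix": "spiffe://old-td/"}]
client_side = [{"exact": "spiffe://new-td/ns/default/sa/sleep"}]

print(san_matches("spiffe://new-td/ns/default/sa/sleep", server_side))    # True
print(san_matches("spiffe://other-td/ns/default/sa/sleep", server_side))  # False
print(san_matches("spiffe://new-td/ns/default/sa/httpbin", client_side))  # False
```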
