Template error while templating string: expected token 'end of print statement', got '{'

The following task is throwing me an error because Jinja2 templating doesn't support this.

- name: Print the ip address
  debug:
    msg: "{{ ansible_{{ item }}['ipv4']['address'] }}"
  with_items: "{{ ansible_interfaces|reject('search', 'lo')|list|sort }}"

The error thrown is:

"msg": "template error while templating string: expected token 'end of print statement', got '{'. String: {{ ansible_{{ item }}['ipv4']['address'] }}"

Any pointers on how to solve this issue?

asked Aug 24, 2020 at 7:35 by Shruti

You cannot use jinja2 expansion when you are already inside a jinja2 expansion expression. In other words, mustaches don't stack.

In your case you can use the vars lookup to fetch your dynamically named var:

- name: Print the ip address
  vars:
    interface_var_name: "ansible_{{ item }}"
  debug:
    msg: "{{ lookup('vars', interface_var_name)['ipv4']['address'] }}"
  with_items: "{{ ansible_interfaces | reject('search', 'lo') | list | sort }}"

answered Aug 24, 2020 at 8:13 by Zeitounator

Use the lookup plugin vars. For example:

    - name: Print the ip address
      debug:
        msg: "{{ my_ifc.ipv4.address|default('Undefined') }}"
      loop: "{{ ansible_interfaces|reject('search', 'lo')|list|sort }}"
      vars:
        my_ifc: "{{ lookup('vars', 'ansible_' ~ item) }}"

gives

ok: [localhost] => (item=eth0) => 
  msg: 10.1.0.27
ok: [localhost] => (item=wlan0) => 
  msg: Undefined

answered Aug 24, 2020 at 8:24 by Vladimir Botka


Still have the same problem, none of the solutions worked:

TASK [kubernetes/node : Write kubelet environment config file (kubeadm)] ******
task path: /root/kubespray/roles/kubernetes/node/tasks/kubelet.yml:16
fatal: [node1]: FAILED! => {
"changed": false,
"msg": "AnsibleError: template error while templating string: expected token '=', got 'end of statement block'. String:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v={{ kube_log_level }}"
KUBELET_ADDRESS="--node-ip={{ kubelet_address }}"
{% if kube_override_hostname|default('') %}
KUBELET_HOSTNAME="--hostname-override={{ kube_override_hostname }}"
{% endif %}

{# Base kubelet args #}
{% set kubelet_args_base -%}
{# start kubeadm specific settings #}
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
--config={{ kube_config_dir }}/kubelet-config.yaml \
--kubeconfig={{ kube_config_dir }}/kubelet.conf \
{# end kubeadm specific settings #}
{% if container_manager == 'docker' %}
--pod-infra-container-image={{ pod_infra_image_repo }}:{{ pod_infra_image_tag }} \
{% else %}
--container-runtime=remote \
--container-runtime-endpoint=unix://{{ cri_socket }} \
{% endif %}
{% if dynamic_kubelet_configuration %}
--dynamic-config-dir={{ dynamic_kubelet_configuration_dir }} \
{% endif %}
--runtime-cgroups={{ kubelet_runtime_cgroups }} \
{% endset %}

{# Kubelet node taints for gpu #}
{% if nvidia_gpu_nodes is defined and nvidia_accelerator_enabled|bool %}
{% if inventory_hostname in nvidia_gpu_nodes and node_taints is defined %}
{% set dummy = node_taints.append('nvidia.com/gpu=:NoSchedule') %}
{% elif inventory_hostname in nvidia_gpu_nodes and node_taints is not defined %}
{% set node_taints = [] %}
{% set dummy = node_taints.append('nvidia.com/gpu=:NoSchedule') %}
{% endif %}
{% endif %}

KUBELET_ARGS="{{ kubelet_args_base }} {% if node_taints|default([]) %}--register-with-taints={{ node_taints | join(',') }} {% endif %} {% if kube_feature_gates %} --feature-gates={{ kube_feature_gates|join(',') }} {% endif %} {% if kubelet_custom_flags is string %} {{kubelet_custom_flags}} {% else %}{% for flag in kubelet_custom_flags %} {{flag}} {% endfor %}{% endif %}{% if inventory_hostname in groups['kube-node'] %}{% if kubelet_node_custom_flags is string %} {{kubelet_node_custom_flags}} {% else %}{% for flag in kubelet_node_custom_flags %} {{flag}} {% endfor %}{% endif %}{% endif %}"
{% if kubelet_flexvolumes_plugins_dir is defined %}
KUBELET_VOLUME_PLUGIN="--volume-plugin-dir={{ kubelet_flexvolumes_plugins_dir }}"
{% endif %}
{% if kube_network_plugin is defined and kube_network_plugin in ["calico", "canal", "cni", "flannel", "weave", "cilium", "kube-ovn", "ovn4nfv", "kube-router", "macvlan"] %}
KUBELET_NETWORK_PLUGIN="--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
{% elif kube_network_plugin is defined and kube_network_plugin == "cloud" %}
KUBELET_NETWORK_PLUGIN="--hairpin-mode=promiscuous-bridge --network-plugin=kubenet"
{% endif %}
{% if cloud_provider is defined and cloud_provider in ["openstack", "azure", "vsphere", "aws", "external"] %}
KUBELET_CLOUDPROVIDER="--cloud-provider={{ cloud_provider }} --cloud-config={{ kube_config_dir }}/cloud_config"
{% else %}
KUBELET_CLOUDPROVIDER=""
{% endif %}

PATH={{ bin_dir }}:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
}
fatal: [node2]: FAILED! => (same error)
fatal: [node3]: FAILED! => (same error)
fatal: [node4]: FAILED! => (same error)
fatal: [node5]: FAILED! => (same error)
fatal: [node6]: FAILED! => (same error)

NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************************************************************************

PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node1 : ok=420 changed=5 unreachable=0 failed=1 skipped=525 rescued=0 ignored=0
node2 : ok=384 changed=5 unreachable=0 failed=1 skipped=454 rescued=0 ignored=0
node3 : ok=341 changed=5 unreachable=0 failed=1 skipped=415 rescued=0 ignored=0
node4 : ok=288 changed=2 unreachable=0 failed=1 skipped=362 rescued=0 ignored=0
node5 : ok=288 changed=2 unreachable=0 failed=1 skipped=362 rescued=0 ignored=0
node6 : ok=288 changed=2 unreachable=0 failed=1 skipped=362 rescued=0 ignored=0

Contents

  1. Jinja errors when running Ansible tasks
  2. identifier validation for variable names #14004
  3. Comments
  4. Issue Type:
  5. Ansible Version:
  6. Ansible Configuration
  7. Environment
  8. Summary:
  9. Steps To Reproduce:
  10. Expected Results:
  11. Unable to read registered output of docker inspect --format=. #10156
  12. Comments
  13. kubeadm-config.v1beta1.yaml.j2 breaks on AWS - AnsibleError: template error while templating string: expected token '=' #5958
  14. Comments
  15. for loop in jinja2 [closed]
  16. 1 Answer
Jinja errors when running Ansible tasks

One of the great things about Ansible is being able to use Jinja filters both in templates and in YAML files.
I came across a not-so-great thing today though. Running a play, I got a "template error while templating string" failure.

But the error was 'thrown' by a task I hadn't changed in ages.
The task does use a template, but that hadn't changed either. Mystery!

It turns out that I had used a variable assignment elsewhere, in a vars file referred to by a subsequent task within the same play.
This assignment used a Jinja filter (lookup); the example is simplified for the sake of this post.

The adaptation was to turn a plain variable assignment into a dictionary, so I could specify the source, destination and mode of the file(s).
This was the error: the filter needs to have double braces around it to be recognised as such, but I had left the double braces around the whole element, and this threw the error.
The correct assignment in this case is shown below, next to the broken form.
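(Reconstructed sketch; the post's original snippets were lost, so every name except src and sourcefile is illustrative.)

# Broken: the braces wrap the whole element, so Jinja2 trips over the ':' tokens
files: "{{ src: lookup('file', sourcefile), dest: /etc/app/app.conf, mode: '0644' }}"

# Correct: the braces wrap only the filter call, the mapping is plain YAML
files:
  src: "{{ lookup('file', sourcefile) }}"
  dest: /etc/app/app.conf
  mode: '0644'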

This will get the contents of sourcefile (in this case a string for a single file path) and place them in the variable src.
It appears that the templating engine was awoken by the failed task and then validated the vars used in the subsequent task.
So if you get this error, don't assume the problem is inside the task where it is reported.
Best to start looking at recent changes and work back!
A good argument for 'commit often', so your changes are easier to review.


identifier validation for variable names #14004

Issue Type:

Ansible Version:

ansible 2.0.0.1
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides

Ansible Configuration

Didn’t modify the default /etc/ansible/ansible.cfg, and I didn’t create any other ansible.cfg files.
Installed using make && make install. Almost certainly irrelevant.

Environment

Summary:

Variable names / identifiers aren't validated at all. On the one hand, that's arguably just an undocumented feature, and it's quite fun to play with, but it also means that errors are thrown way later than they ought to be (if they're thrown at all), which would have been more confusing if I weren't intentionally testing this.

Steps To Reproduce:

If you put the following in a vars file and include the file, that raises zero red flags. Of course, most of these variables aren't actually usable after evaluation (with the delightful exception of and["{{ hello }}"]):
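(The vars file itself was lost from this copy; the sketch below is a guess at the kind of keys the report describes, and every name and value here is illustrative.)

# All of these load without a single red flag:
and:
  "{{ hello }}": "later reachable as and['{{ hello }}']"
foo-bar: "the dash reads as subtraction when referenced"
1digit: "not a valid Python identifier"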

Expected Results:

A screen full of Angry Red Text along the lines of "Ansible variables are basically just python identifiers, so the same rules apply (use [a-zA-Z_][a-zA-Z_0-9]* only)".

When used in tasks, I got errors including (but not limited to):

And so on. All that was just a way for me to figure out a sane way to implement namespacing. But that’s a discussion for another day / issue ticket.



Unable to read registered output of docker inspect --format=. #10156

Issue Type:
Ansible Version:
Environment:
Summary:

Trying to use the registered output of docker inspect --format=. results in a fatal error.

Interestingly enough, this works with Ansible 1.7.2 but not with 1.8.2. It also works when just using the output of docker inspect without the --format option.

Steps To Reproduce:

Use the following playbook.yml:
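(The playbook did not survive in this copy. As a minimal stand-in of the same shape, the task below triggers the same class of template error; the container name sleep comes from the expected results further down, everything else is an assumption. The point is that Jinja2 parses the Go-template braces before Docker ever sees them.)

- hosts: localhost
  tasks:
    # Jinja2 tries to parse {{ .Name }} itself and fails with "unexpected '.'"
    - command: docker inspect --format={{ .Name }} sleep
      register: name
    - debug: var=name.stdout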

Execute it on a host which has Docker installed and running:

Expected Results:
  • inspect.stdout should contain the complete JSON output
  • image.stdout should be the hash of the official busybox Docker image
  • name.stdout should be /sleep
  • volume.stdout should be something like /var/lib/docker/vfs/dir/92fd93ceedababafb4d61615749e093dd170cae4f2f1ec092fc11159b6b0b7c5
Actual Results:
  • inspect.stdout correctly contains the complete JSON output with
  • the debug task for image.stdout fails with: fatal: [localhost] => template error while templating string: expected name or number
  • the debug task for name.stdout fails with: fatal: [localhost] => template error while templating string: unexpected ‘.’
  • the debug task for volume.stdout fails with: fatal: [localhost] => template error while templating string: expected token ‘end of print statement’, got ‘string’


Just in case this helps anyone, a workaround for now is to escape the literal braces:
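(The exact snippet from this comment was lost; both idioms below are standard Jinja2 escaping, wrapped in an illustrative task.)

# {% raw %} stops Jinja2 from parsing the Go-template braces
- command: docker inspect --format '{% raw %}{{ .NetworkSettings.IPAddress }}{% endraw %}' sleep
  register: ip_out

# Alternatively, emit literal braces from string expressions
- command: docker inspect --format={{ "{{" }}.Name{{ "}}" }} sleep
  register: name_out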

I had to double escape the curly braces to get it working with Ansible 1.9.1:

Even tried the double-escaping mentioned above.

PS: Downgraded back to 1.7.2

same issue here, might work on something to make this easier today

For those of you trying to call docker inspect by hand, skip that and use either the docker_containers variable, which you magically get by running the docker module and which has the inspect info, or get the info from the docker_facts module: http://patg.net/ansible,docker/2014/07/10/ansible-docker-facts/

However, the bug is still valid; there's no way to escape the double curly braces.

@conrado Do you know if ansible has to actually launch the container in the same session for the docker_containers variable to be created? I’m trying to get the IP of already running containers.

yes, I believe the docker_containers requires you to use the docker module at least once. I would recommend you try the docker_facts module instead.

I tried it but got an error I couldn’t decode. Probably user error (I’m new to ansible). My workaround was to put the command in a separate script and use the script: action.

I just came across the following, and it works. The question is why? Can this be explained in the docs so everyone can make use of it? I assume the fact that this is happening to docker is a tragic coincidence, so it should be useful in general?

Basically the idea is to keep the escaped braces ( {{ and }} ) out of any quotes; this must be some bash-related thing, my best guess.

I wrote a python script to return the data I needed. @ianbytchek would you be interested in using it?

oh, wow. that’s a crazy workaround.

@zstarer thanks man. No, I really prefer to keep things plain and simple without any scripts or crazy hacks. Appreciate the hand of help though. PS: updated the earlier comment with further findings.

yep, understandable.. it quickly turned into a multi step process involving sysargs, an api call into /containers/$container_id/json, patience, etc.

I took the non-cross-platform-compatible part out and just used this to obtain the IP:

Hi, this situation is no longer present in the devel branch (which will be ansible 2.0). Here is the output using a version of your example above:

If you continue seeing any problems related to this issue, or if you have any further questions, please let us know by stopping by one of the two mailing lists, as appropriate:

Because this project is very active, we’re unlikely to see comments made on closed tickets, but the mailing list is a great way to ask questions, or post if you don’t think this particular issue is resolved.


kubeadm-config.v1beta1.yaml.j2 breaks on AWS - AnsibleError: template error while templating string: expected token '=' #5958

I am deploying to AWS (CentOS 7 / RHEL). Over several days, the deployment breaks with the below error.

I have changed the jinja2 templater from 2.11.2 to 2.9.5 but the problem remains.
Ansible 2.9.6
Kubespray code (committed about 10/04/2020)
Deployment to Core Linux, CentOS and AMZ Linux failed for the same reason.
The code that checks etcd (when deployed to host) uses API v2 instead of v3 and also fails.
Not being a jinja2 expert, I strongly suspect the below code stanza in the kubeadm-config.v1beta1.yaml.j2 file, which appears to be missing a ']' somewhere.

excludeCIDRs: {{ "[]" if kube_proxy_exclude_cidrs is not defined or kube_proxy_exclude_cidrs == "null" or kube_proxy_exclude_cidrs | length == 0
else
(kube_proxy_exclude_cidrs if kube_proxy_exclude_cidrs[0] == '['
else ("[" + kube_proxy_exclude_cidrs + "]"
if (kube_proxy_exclude_cidrs[0] | length) == 1
else "[" + kube_proxy_exclude_cidrs | join(",") + "]")) }}


Which version of kubespray are you using?
The kubeadm-config.v1beta2.yaml.j2 code is not the same and I guess the error won't be there.

@fabianofranz please use the GitHub issue template with appropriate (and standardized) environment and system info.

Environment:
CentOS

Cloud provider or hardware configuration:
AWS
6 x nodes (3 masters and 3 workers)
t3.medium

OS ( printf "$(uname -srm)\n$(cat /etc/os-release)\n" ):
cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

uname -sr
Linux 3.10.0-1062.18.1.el7.x86_64

  • Version of Ansible ( ansible --version ):
    ansible 2.9.6
    config file = /home/centos/kubespray/ansible.cfg
    configured module search path = [u'/home/centos/kubespray/library']
    ansible python module location = /usr/lib/python2.7/site-packages/ansible
    executable location = /usr/bin/ansible
    python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
  • Version of Python ( python --version ):
    python version = 2.7.5 (I also experience exactly the same issue in Python 3)

Kubespray version (commit) ( git rev-parse --short HEAD ):
03c8d01

Network plugin used:
Calico

Full inventory with variables ( ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]" ):
[all]
ip-10-72-53-9.eu-west-2.compute.internal ansible_host=10.72.53.9 access_ip=10.72.53.9 ip=10.72.53.9
ip-10-72-91-10.eu-west-2.compute.internal ansible_host=10.72.91.10 access_ip=10.72.91.10 ip=10.72.91.10
ip-10-72-112-254.eu-west-2.compute.internal ansible_host=10.72.112.254 access_ip=10.72.112.254 ip=10.72.112.254
ip-10-72-44-144.eu-west-2.compute.internal ansible_host=10.72.44.144 access_ip=10.72.44.144 ip=10.72.44.144
ip-10-72-82-50.eu-west-2.compute.internal ansible_host=10.72.82.50 access_ip=10.72.82.50 ip=10.72.82.50
ip-10-72-97-223.eu-west-2.compute.internal ansible_host=10.72.97.223 access_ip=10.72.97.223 ip=10.72.97.223

## configure a bastion host if your nodes are not directly reachable

#bastion ansible_host=10.72.8.210 ansible_user=centos

[kube-master]
ip-10-72-53-9.eu-west-2.compute.internal
ip-10-72-91-10.eu-west-2.compute.internal
ip-10-72-112-254.eu-west-2.compute.internal

[etcd]
ip-10-72-53-9.eu-west-2.compute.internal
ip-10-72-91-10.eu-west-2.compute.internal
ip-10-72-112-254.eu-west-2.compute.internal

[kube-node]
ip-10-72-53-9.eu-west-2.compute.internal
ip-10-72-91-10.eu-west-2.compute.internal
ip-10-72-112-254.eu-west-2.compute.internal
ip-10-72-44-144.eu-west-2.compute.internal
ip-10-72-82-50.eu-west-2.compute.internal
ip-10-72-97-223.eu-west-2.compute.internal

[k8s-cluster:children]
kube-master
kube-node
calico-rr

Command used to invoke ansible:
ansible-playbook -i inventory/centos/hosts.yaml cluster.yml -b --become-user=root -v --private-key=/home/centos/.ssh/AdvancedCFN.pem --flush-cache

Anything else we need to know:
I set cloud_provider = aws.
I believe this is why the templating fails. When I deployed to the DigitalOcean cloud and did not have to set a cloud_provider, the run was successful. I feel that an argument in the template needs looking at.


for loop in jinja2 [closed]

Questions describing a problem that can’t be reproduced and seemingly went away on its own (or went away when a typo was fixed) are off-topic as they are unlikely to help future readers.

Closed 4 years ago.

Please explain to me how can I fix this problem?
I have this file defaults/main.yml:

---
node1:
 ip: 1.1.1.1

node2:
 ip: 2.2.2.2

node3:
 ip: 3.3.3.3

Now, I want the template file ip.j2 to access the IP of each server with a for loop and save them in an address variable, like this:

address=1.1.1.1,2.2.2.2,3.3.3.3

I tried this code:

address={% for x in {{number_nodes}} %}
{{node[x].ip}}
{% if loop.last %},{% endif %}
{% endfor %}

But an error occurs. How should I do this?

Error:

TASK [Gathering Facts] *********************************************************************

ok: [db2]
ok: [db3]
ok: [db1]

TASK [ssh : -- my loop --] *************************************************************************

fatal: [db1]: FAILED! => {"changed": false, "msg": "AnsibleError: template error while templating string: expected token ':', got '}'. String: \r\naddress={% for x in {{number_nodes}} %}\r\n{{node[x].ip}}\r\n{% if loop.last %},{% endif %}\r\n{% endfor %}"}
fatal: [db2]: FAILED! => (same error)
fatal: [db3]: FAILED! => (same error)
        to retry, use: --limit @/etc/ansible/playbooks/get_ip_ssh.retry

PLAY RECAP ********************************************************

db1                        : ok=1    changed=0    unreachable=0    failed=1
db2                        : ok=1    changed=0    unreachable=0    failed=1
db3                        : ok=1    changed=0    unreachable=0    failed=1

Edit 1

I changed the template and the defaults/main.yml code. I have the names (nodes) but I cannot access the IPs yet.
defaults/main.yml:

nodes:
 node1:
     ip: 1.1.1.1

 node2:
     ip: 2.2.2.2

 node3:
     ip: 3.3.3.3

get-ip.j2:

address={% for host in nodes %}{{host}}{% if not loop.last %},{% endif %}{% endfor %}

The output is:

address=node1,node3,node2

I also used this code:

address={% for host in nodes %}{{host.ip}}{% if not loop.last %},{% endif %}{% endfor %}

OR

address={% for host in nodes %}{{host.[ip]}}{% if not loop.last %},{% endif %}{% endfor %}

But it does not work yet!

Update

My problem is solved, I used this code:

address={% for host in nodes %}{{ nodes[host].ip }}{% if not loop.last %},{% endif %}{% endfor %}

1 Answer

Firstly, assuming number_nodes has the values 1,2,3, you are trying to access elements of node, but you do not have such a variable in the provided YAML.

Secondly, you cannot iterate over three different variables in such a way.

However, if your YAML file looked like this:
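(The answer's snippet was lost in this copy; reconstructed from the bullet-point explanation below, so the exact values are assumptions.)

nodes:
  - ip: 1.1.1.1
  - ip: 2.2.2.2
  - ip: 3.3.3.3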

Your code could look like this:
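(Also reconstructed from the line-by-line explanation that follows.)

address={% for x in nodes %}
{{ x.ip }}
{% if not loop.last %},{% endif %}
{% endfor %}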

What is different from your code is:

  • In the first line we loop over the elements of nodes.
  • In the second you select the ip element of x, which is each element in the loop.
  • In the third line, assuming you want commas between all elements except after the last one, you need a not.

