Ansible module failure: "MODULE FAILURE\nSee stdout/stderr for the exact error"

SUMMARY

We are running Ansible on an Ubuntu server, with AIX servers as the managed clients.
We plan to delete some files on the AIX servers, and we can ping the AIX servers from the Ubuntu control node.

When we run the playbook, it fails with a module failure error.
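
For context, the end goal is a cleanup task along these lines (a minimal sketch; the file path is a placeholder, not taken from the report):

- name: delete old files on the AIX servers
  file:
    path: /tmp/old_archive.log   # placeholder path, not from the report
    state: absent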

ISSUE TYPE
  • Bug Report
COMPONENT NAME
"module_stderr": "Shared connection to XX.XX.XX.XX closed.rn",
"module_stdout": "rn",
"msg": "MODULE FAILUREnSee stdout/stderr for the exact error",
ANSIBLE VERSION
CONFIGURATION

root@devops:/home/ashwinij# ansible --version
ansible 2.9.2
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4]

OS / ENVIRONMENT

Server: Ubuntu 14.04.5 LTS
Client: AIX 7100-05-02-1832

STEPS TO REPRODUCE
---
- name: Firstcheck
  hosts: "{{ HOSTNAME}}"
  become: true
  gather_facts: yes
  tasks:
   - name: printing echo
     command: echo {{ HOSTNAME }}
     register: echo

   - debug:
       var: echo.stdout
   - name: printing ip
     command: echo {{ ansible_host }}
     register: ip3

   - debug:
       var: ip3

   - name: setting facts
     set_fact: hostname="{{ echo.stdout }}"
     register: output
     delegate_to: localhost

   - copy:
       dest: /home/XXX/back.txt
       content: |
                {{ hostname }}
     register: store
     delegate_to: localhost
   
   - debug:
       var: store

   - name: value finding
     shell: sh /home/XXXX/host.sh
     register: value1
     delegate_to: localhost

   - debug:
       var: value1

   - name: check last backup status (checking whether the hostname is present in the backup list)
     shell: cat /home/XXXXX/list.txt | grep -i {{ value1.stdout }}
     register: back
     delegate_to: localhost
     ignore_errors: yes

   - name: Mail Notification - Automated Archive Log deletion
     mail:
       host: localhost
       port: 25
       to: asdas@asd.com
       from: adasa@asda.com
       subject: 'Automated Archive log deletion failed: {{ HOSTNAME }} server is not in the backup list.'
       body: |
              Dear Team,

              Below mentioned server is not in Backup List.

              {{ HOSTNAME }} : {{ ansible_host }}
     delegate_to: 127.0.0.1
     tags: mail
     ignore_errors: true
     when: back.rc == 1

   - name: Grepping last backup date
     shell: cat /home/sdasdasdas/list.txt | grep -i {{ value1.stdout }} | awk -F " " '{print $2}'
     delegate_to: localhost
     register: result
     when: back.rc == 0
EXPECTED RESULTS

The playbook should run without a module failure.

ACTUAL RESULTS
fatal: [XXXXXX]: FAILED! => {
    "ansible_facts": {},
    "changed": false,
    "failed_modules": {
        "setup": {
            "ansible_facts": {
                "discovered_interpreter_python": "/usr/bin/python"
            },
            "failed": true,
            "module_stderr": "Shared connection to XXXXXXXXX closed.rn",
            "module_stdout": "rn",
            "msg": "MODULE FAILUREnSee stdout/stderr for the exact error",
            "rc": 1,
            "warnings": [
                "Platform aix on host XXXXX is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information."
            ]
        }
    },
    "msg": "The following modules failed to execute: setupn"
}
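
The warning in this output is a useful hint: on AIX, the interpreter Ansible discovers at /usr/bin/python is not always a usable Python build, and a broken or missing interpreter on the target commonly produces exactly this "Shared connection closed" module failure. One thing worth ruling out is pinning the interpreter explicitly in the inventory. A minimal sketch, where the host alias is hypothetical and the path assumes Python was installed from the AIX Toolbox (which typically lands in /opt/freeware/bin):

[aix]
aixhost01 ansible_host=XX.XX.XX.XX ansible_python_interpreter=/opt/freeware/bin/python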


Question:

On the remote server, the normal user has sudo access, but NOPASSWD is not enabled; "sudo su -" asks for the user's password. I am trying to run a command with Ansible while providing the sudo password, but it is not working: I get a "MODULE FAILURE\nSee stdout/stderr for the exact error" error. Please check the logs below.

Inventory file

[root@**-*****2 ~]# cat inventory
[prod]
10.***.***.250 ansible_user=m**** ansible_password=*******

It works with the normal user:

[root@****** ~]# ansible prod -m ping
10.***.***.250 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

But when I switch to --become:

[root@****** ~]# ansible prod -m ping --become
10.***.***.250 | FAILED! => {
    "msg": "Missing sudo password"
}

When I provide the sudo password:

[root@****** ~]# ansible prod -m ping --become -K
BECOME password:
10.***.***.250 | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "module_stderr": "Shared connection to 10.***.***.250 closed.rn",
    "module_stdout": "rn",
    "msg": "MODULE FAILUREnSee stdout/stderr for the exact error",
    "rc": 1
}

The verbose output of the above error is:

10.***.***.250 | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "module_stderr": "OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017rndebug1: Reading configuration data /etc/ssh/ssh_configrndebug1: /etc/ssh/ssh_config line 58: Applying options for *rndebug1: auto-mux: Trying existing masterrndebug2: fd 3 setting O_NONBLOCKrndebug2: mux_client_hello_exchange: master version 4rndebug3: mux_client_forwards: request forwardings: 0 local, 0 remoterndebug3: mux_client_request_session: enteringrndebug3: mux_client_request_alive: enteringrndebug3: mux_client_request_alive: done pid = 21356rndebug3: mux_client_request_session: session request sentrndebug1: mux_client_request_session: master session id: 2rndebug3: mux_client_read_packet: read header failed: Broken piperndebug2: Received exit status from master 1rnShared connection to 10.***.***.250 closed.rn",
    "module_stdout": "rn",
    "msg": "MODULE FAILUREnSee stdout/stderr for the exact error",
    "rc": 1
}

It works on hosts where sudo is configured with NOPASSWD. Kindly suggest.
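
One variant worth ruling out is supplying the become password as an inventory variable instead of via -K; this is a minimal sketch using the standard ansible_become_password variable, with the masked values carried over from the original inventory as placeholders:

[prod]
10.***.***.250 ansible_user=m**** ansible_password=******* ansible_become_password=*******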

Question:

I am trying to install the AWS CloudWatch agent on an EC2 CentOS VM using ansible-playbook. It worked well in the sandbox, but it fails when I run it in production (integrated with a Jenkins pipeline).

Here is my task snippet:

- name: setup temp directory for install
  file:
    path: /tmp/aws-cw-agent
    state: directory

- name: download installer
  get_url:
    url: "{{ aws_cw_agent_url }}"
    dest: /tmp/aws-cw-agent/amazon-cloudwatch-agent.rpm

- name: install agent
  become: true
  shell: rpm -U /tmp/aws-cw-agent/amazon-cloudwatch-agent.rpm

Up to "download installer" it works fine, and I can find the RPM binary by navigating to the directory manually. But the next task, "install agent", fails. It also fails if I use the "yum" module instead of shell (see the sketch below).
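
For reference, a yum-module equivalent of the install task would look roughly like this (a sketch reconstructed from the description above, not the original task; the yum module accepts a local RPM path as the package name):

- name: install agent (yum variant)
  become: true
  yum:
    name: /tmp/aws-cw-agent/amazon-cloudwatch-agent.rpm
    state: present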

The error says:

17:16:07 task path: /home/jenkins/workspace/groupesiph-dsir/03227/03227_Cloudwatch_Agent_deploy_hprod/playbook/deployment/roles/aws_cw_agent/tasks/main.yml:22
17:16:07 Tuesday 10 March 2020 17:16:07 +0100 (0:00:00.098) 0:00:05.352 *********
17:16:08 Using module file /usr/lib/python3.6/site-packages/ansible/modules/commands/command.py
17:16:08 Pipelining is enabled.
17:16:08 <10.45.1.136> ESTABLISH SSH CONNECTION FOR USER: ansible
17:16:08 <10.45.1.136> SSH: EXEC sshpass -d10 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="ansible"' -o ConnectTimeout=10 -o ServerAliveInterval=60 -o ServerAliveCountMax=10 -o ControlPath=/home/jenkins/.ansible/cp/84b84369b7 10.45.1.136 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-syqwibhfpdecwpfqddhe ; /usr/bin/python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
17:16:08 Escalation succeeded
17:16:08 <10.45.1.136> (1, b'', b'')
17:16:08 <10.45.1.136> Failed to connect to the host via ssh:
17:16:08 fatal: [prod05]: FAILED! => {
17:16:08     "changed": false,
17:16:08     "module_stderr": "",
17:16:08     "module_stdout": "",
17:16:08     "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
17:16:08     "rc": 1
17:16:08 }

But if it were a real problem with SSH, how did the previous task succeed in downloading the installer?

What could be the problem?

Thanks in advance.

Answer:

The problem was that the user did not have sudoers access to install the binary via yum ("/tmp/aws-cw-agent/amazon-cloudwatch-agent.rpm"). I found the error with the debug messages.
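
In other words, the fix is to grant the connecting user passwordless sudo for the package commands (or broader rights, per your security policy). A minimal sketch of a sudoers entry, assuming the connecting user is named "ansible" as in the log above:

# /etc/sudoers.d/ansible -- hypothetical entry; edit with visudo
ansible ALL=(root) NOPASSWD: /usr/bin/rpm, /usr/bin/yum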
