Unable to read the cmd header on the pmi context error 1


Contents

  1. MPI runtime error: unable to read the cmd header on the pmi context, error = -1
  2. 2 Answers
  3. Unable to read the cmd header on the pmi context error 1
  4. Parallel (MPI) error
  5. Unable to read the cmd header on the pmi context error 1
  6. The error message of Intel MPI

MPI runtime error: unable to read the cmd header on the pmi context, error = -1

I have a problem with mpich2. I wrote a program in C++ using MPI. The program compiles successfully, but when I try to run it I get

 error: unable to read the cmd header on the pmi context, error = -1.

I tried reinstalling mpich, but the problem was not solved.

Does anyone know how to solve this? Thanks!

asked Jun 11 '12 at 09:06

I have seen this site, but there is no answer to my question there. - Nurlan

@NurlanKenzhebekov does this happen on Windows? Which version of mpich2 do you have (mpirun --version)? Does it happen when the program runs with a single process? Is this the only error message, or are there others? - Dmitri Chubarov

It doesn't matter how many processes are launched, the error is the same. For example, if I try to run with 4 processes, the error above appears 4 times; with 3 processes, 3 times. So yes, and there are no other errors. mpirun --version does not work. - Nurlan

2 Answers

There is more than one mpiexec.exe on your system, from different vendors. For example, one from c:\program files\mpich2\bin and one from c:\program files\microsoft hpc pack 2008 r2\bin.

Try disabling one of them.

answered Apr 16

This solved the problem for me. Instead of "disabling" one of the executables, you can simply specify the full path to the correct mpiexec.exe, e.g. "C:\Program Files\MPICH2\bin\mpiexec.exe". - Adversus

In my case this kind of message was caused by the AVAST shield. I disabled it and everything started working smoothly.
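To separate a launcher problem from an application problem, it can also help to compile a minimal MPI program and launch it through the full path of the mpiexec.exe you intend to use (for example the MPICH2 path mentioned above). This is only a sketch; the compiler wrapper and executable names are placeholders and may differ in your installation:

#include <mpi.h>
#include <iostream>

int main(int argc, char **argv)
{
    // If even this trivial program fails with the same PMI error, the problem
    // is in the MPI runtime / process-manager setup, not in your application.
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::cout << "Hello from rank " << rank << " of " << size << std::endl;

    MPI_Finalize();
    return 0;
}

Run it, for instance, as "C:\Program Files\MPICH2\bin\mpiexec.exe" -n 2 hello.exe (hello.exe being whatever you named the compiled test program). If this already reproduces the error, the conflicting-mpiexec or antivirus explanations above are the likely causes.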

Source

I always get an error message when I use ANSYS FLUENT 12 on a Windows 7 x64 system with ANSYS 12 x64.

unable to read the cmd header on the pmi context, Undefined dynamic error code.

999999 (..\src\mpsystem.c@1149): mpt_read: failed: errno = 10054

999999: mpt_read: error: read failed trying to read 4 bytes: Invalid argument
unable to read the cmd header on the pmi context, Undefined dynamic error code.
unable to read the cmd header on the pmi context, Undefined dynamic error code.
unable to read the cmd header on the pmi context, Undefined dynamic error code.

job aborted:
rank: node: exit code[: error message]
.
The Parallel FLUENT process could not be started.

999999 (..\src\mpsystem.c@1149): mpt_read: failed: errno = 10054

999999: mpt_read: error: read failed trying to read 4 bytes: No such file or directory
unable to read the cmd header on the pmi context, Undefined dynamic error code.
SuspendThread failed with error 5 for process 2:627A4F0C-CA4B-4435-B90A-4D110B465C35:'D:\ANSYS Inc\v121\v121\fluent\fluent12.1.2\win64\3ddp_node\fl_mpi1212.exe -mport 10.10.10.193:10.10.10.193:49174:0 node -mpiw mpich2 -pic ethernet'
unable to suspend process.
unable to read the cmd header on the pmi context, Undefined dynamic error code.
unable to read the cmd header on the pmi context, Undefined dynamic error code.
unable to read the cmd header on the pmi context, Undefined dynamic error code.
received kill command for a pmi context that doesn't exist
received kill command for a pmi context that doesn't exist
job aborted:
rank: node: exit code[: error message]
.
The Parallel FLUENT process could not be started.

Can anyone help me or provide ideas to fix this problem?

Thank you very much

Hi,
did you solve your problem in Fluent?
I got the same problem too. If you have found the solution, please help me.

my email id :dy-369@163.com

sunflower December 24, 2010 06:02

Have you solved your problem? Actually I have the same problem as yours. If you find the solution, would you please tell me? Thanks.

skinnyfluid January 18, 2011 09:04

I get the same error only when I run a Composition PDF transport model in the species / transport modelling. Every other model has no problem running in parallel. Can you please tell me how I can solve this problem?

I got the same problem when I read a mesh file in Fluent 6.3, and my PC's CPU is an Intel i7 930, which has 8 threads. If all 8 processes are turned on, this problem appears; with 7 or fewer processes there is no problem. But I don't know why.

I think this may be useful for you.

I have a Core i7 860 myself, and it is basically the same setup. I tried your trick but was unsuccessful. Let me know what else you had to modify.

Maybe you need to turn off more processors and try again. I also found another route lately: set up all options and save the case file first in a single-process session, then iterate it in a parallel session, and it will be OK. If it is still unsuccessful, try rebuilding your mesh.

hi zeuxxx, can you please share with us what you did to solve the problem?
Thanks!

Hi,
I know this is the Fluent forum, and I'm asking about Star-CCM, but my problem is the same when I try to start a parallel simulation in Star-CCM 4.04.
Is it, for Fluent, a problem of the software itself or something related to an external program like MPICH?

Many thanks.
Tiziano

Adjust your pagefile as follows:

My Computer > Properties > Advanced > Performance > Settings > Advanced > Change

Set the pagefile size to 1.0 to 2.0 times the size of physical memory.

akshaydongre February 15, 2013 08:48

Having the same problem. Reduced the number of processors to 4. Not helping! I am using the GRI 2.11 mechanism to describe the chemistry.
Is it because the CHEMKIN mechanism fails when used in parallel FLUENT Version 13.0?

This seems to be a problem all around the web, but I can't figure out a solution to it. I've got it installed on a number of computers, and what I've noticed is that I get this error on AMD processors, while Intel ones seem to run fine. Has anyone had a similar case?

999999 (..\src\mpsystem.c@1149): mpt_read: failed: errno = 10054

999999: mpt_read: error: read failed trying to read 4 bytes: Invalid argument
MPI Application rank 0 exited before MPI_Finalize() with status -1073741819
The Parallel FLUENT process could not be started.

I am so sorry that I didn't come back to this forum for a long time.

Now I post my experience and I hope it can help someone

I am not sure whether my experience works well.


If you use Win7, try going to [Control Panel] -> [Programs and Features] -> [Turn Windows features on or off].

In that dialog, MAKE SURE you select ALL items under "Microsoft .NET Framework 3.5.1".

Maybe the names I wrote above are not completely correct, because I use a Chinese version of Win7, but I think you can understand what I mean.

Thanks for your reply, I tried that and it didn't work for me 🙁
I tried restarting the computer and still nothing. Maybe the problem you had was slightly different.


Maybe you'd better check your UDF first.

Check whether you use UDM or UDS code in the UDF, and make sure you open the UDM in the DEFINE menu (i.e. allocate the user-defined memory locations).

Hi,
I have the same problem. What did you mean by 'open UDM in the DEFINE menu'? I am using C_UDMI in a DEFINE_MASS_TRANSFER function, but it doesn't work in parallel.

Thanks in advance.


Source

Parallel (MPI) error


A problem occurs when I use parallel simulation with Fluent 15.0 x64 / Windows 7 64-bit / Intel MPI / Visual Studio 2008.

The error message shows up after 2 time steps, when I try to output flow-field data (pressure coefficient, velocity, etc.).

My UDF runs well on Fluent 14.

What I have tried:

1. re-installed Intel MPI
2. unchecked "Double Precision"
3. disabled the firewall
4. checked that Windows .NET Framework 3.5 is running
5. computed with a serial processor, and it runs fine

I've googled everything possible but still can't solve this problem.

Can someone help me, please?

Node 999999: Process 6860: Received signal SIGSEGV.

==================================
vis_StateClear: ierr= 8
SYS_ERROR_OPERATION An operation failed in current state
Entity number mismatch
unable to read the cmd header on the pmi context, Error = -1
.
Error posting readv (10054)
unable to read the cmd header on the pmi context, Error = -1

received suspend command for a pmi context that doesn't exist
received suspend command for a pmi context that doesn't exist
received suspend command for a pmi context that doesn't exist
.
.
.

job aborted:
rank: node: exit code[: error message]
0: user-PC: 2: process 0 exited without calling finalize
1: user-PC: 2: process 1 exited without calling finalize
2: user-PC: 2: process 2 exited without calling finalize
3: user-PC: 2: process 3 exited without calling finalize
.
.
.
received suspend command for a pmi context that doesn't exist
received suspend command for a pmi context that doesn't exist
received suspend command for a pmi context that doesn't exist
.
.
.
The fl process could not be started.
Error: CX_Send_Help_Command, could not get Help server port = 0.

Source


The error message of Intel MPI


I run the following command on Windows 7 x64.

mpiexec.exe -localonly -n 2 MyMPIApp.exe

And I got the following errors.

result command received but the wait_list is empty.

unable to handle the command: «cmd=result src=1 dest=0 tag=0 cmd_tag=0 cmd_orig=

start_dbs kvs_name=30BA3E56-CB7A-40dd-9D07-8F5342D03976 domain_name=CB42CD80-05B

error closing the unknown context socket: Error = -1

sock_op_close returned while unknown context is in state: SMPD_IDLE

Is there any way to do further debugging? Thank you!


There are several debugging options you can try. Setting the environment variable I_MPI_DEBUG at runtime will generate some debugging information; 5 is generally a good starting value.

mpiexec -n 2 -env I_MPI_DEBUG 5 test.exe

You can compile with -check_mpi in order to link to the correctness checking libraries. You can get logs from the smpd by using

smpd -traceon   (as administrator)
mpiexec -n 2 test.exe
smpd -traceoff  (as administrator)

I would recommend using this, as the error you are getting appears to be from smpd. What version of the Intel MPI Library are you using? Have you been able to run one of the provided sample programs located in the test folder in the installation path?

Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
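Alongside I_MPI_DEBUG and the smpd trace, a tiny self-contained program can confirm that the processes actually start and report which MPI standard level and host each rank sees. This is only a rough sanity-check sketch, not an Intel-specific diagnostic:

#include <mpi.h>
#include <iostream>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, version = 0, subversion = 0, namelen = 0;
    char procname[MPI_MAX_PROCESSOR_NAME];

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_version(&version, &subversion);      // MPI standard level supported by the library
    MPI_Get_processor_name(procname, &namelen);  // host this rank is running on

    std::cout << "rank " << rank << " on " << procname
              << " reports MPI " << version << "." << subversion << std::endl;

    MPI_Finalize();
    return 0;
}

If mpiexec.exe -localonly -n 2 on this program already fails in the same way, the problem is in the smpd/launcher layer rather than in MyMPIApp.exe.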

Source


Hi everyone,

I'm a new telemac2d user (V7P1R1).

I have some difficulties with an apparently very simple model, but I failed!

My CAS file is:

/
/------------------------------------------------------------------------------------------------------------
/                      ENVIRONNEMENT INFORMATIQUE
/------------------------------------------------------------------------------------------------------------
/
FICHIER DES CONDITIONS AUX LIMITES  : './BC_S1_ini.cli'
FICHIER DE GEOMETRIE                : './FOND_STB.geo'
FICHIER DES RESULTATS               : './ini.rep'
/
/------------------------------------------------------------------------------------------------------------
/                        OPTIONS GENERALES
/------------------------------------------------------------------------------------------------------------
/
/
VARIABLES POUR LES SORTIES GRAPHIQUES :'U,V,S,B,H,Q,M,L,W,F'
VARIABLES A IMPRIMER                  : ''
PAS DE TEMPS                          : 5
DUREE DU CALCUL                       : 20
PERIODE POUR LES SORTIES GRAPHIQUES   : 1
PERIODE DE SORTIE LISTING             : 1
/PROCESSEURS PARALLELES                : 8
NOMBRE MAXIMUM DE FRONTIERES        : 9000
/
/------------------------------------------------------------------------------------------------------------
/                    CONDITIONS INITIALES
/------------------------------------------------------------------------------------------------------------
/
/
CONDITIONS INITIALES : 'HAUTEUR NULLE'
/SUITE DE CALCUL                       : OUI
/REMISE A ZERO DU TEMPS                : OUI
/STRUCTURES VERTICALES                  : OUI
/
/------------------------------------------------------------------------------------------------------------
/                    CONDITIONS AUX LIMITES
/------------------------------------------------------------------------------------------------------------
/
LOI DE FROTTEMENT SUR LE FOND         : 3
COEFFICIENT DE FROTTEMENT             : 30
COTES IMPOSEES          : 297.2;260.3;206;201.5;0;0;0;0;255
COEFFICIENT DE DIFFUSION DES VITESSES             : 0.01
MODELE DE TURBULENCE                              : 1
/
/------------------------------------------------------------------------------------------------------------
/                       OPTIONS NUMERIQUES
/------------------------------------------------------------------------------------------------------------
/
BANCS DECOUVRANTS                                 : OUI
BILAN DE MASSE                                    : OUI
PRECISION DU SOLVEUR                              : 1.D-8
FORME DE LA CONVECTION                            : 1;5
SOLVEUR                                           : 1
MASS-LUMPING SUR H                                : 1
MASS-LUMPING SUR LA VITESSE                       : 1
TRAITEMENT DU SYSTEME LINEAIRE                    : 2
PROFILS DE VITESSE                                : 1
COMPATIBILITE DU GRADIENT DE SURFACE LIBRE        : 0.9
TRAITEMENT DES HAUTEURS NEGATIVES                 : 2
CORRECTION DE CONTINUITE                          : OUI
OPTION DE SUPG                                    : 0;0;0;0
MAXIMUM D'ITERATIONS POUR LE SOLVEUR              : 500
IMPLICITATION POUR LA VITESSE                     : 1
IMPLICITATION POUR LA HAUTEUR                     : 1

The end of my output log is:

****************************************
                     * FIN DE L'ALLOCATION DE LA MEMOIRE  : *
                     ****************************************

 INBIEF (BIEF) : MACHINE NON VECTORIELLE (SELON VOS DONNEES)
 FONSTR : COEFFICIENTS DE FROTTEMENT LUS DANS
          LE FICHIER DE GEOMETRIE
 STRCHE (BIEF) : PAS DE MODIFICATION DU FROTTEMENT

 FRONT2 : DEPASSEMENT DE TABLEAUX
          AUGMENTER LE MOT-CLE
          NOMBRE MAXIMUM DE FRONTIERES
          DANS LE CODE APPELANT
          LA VALEUR ACTUELLE EST         9000

 PLANTE : ARRET DU PROGRAMME APRES ERREUR
 RETURNING EXIT CODE:            2
unable to read the cmd header on the pmi context, Error = -1
.
Error posting readv, Une connexion existante a dû être fermée par l’hôte distant.(10054)

job aborted:
rank: node: exit code[: error message]
0: Aix-P1085.ingerop.com: 1: process 0 exited without calling finalize

I don't understand why I have an error message.
NOMBRE MAXIMUM DE FRONTIERES -> it is the maximum number of boundaries. When I try to change the value in my CAS file, it is not taken into account in the log...

Does someone have an idea about how I can fix this problem?

Thanks a lot,

Laurie

The administrator has disabled public write access.

As usual, when I finally post a message to ask for some help, I find a solution.

I am almost ashamed of the solution. I think I got confused in the files...

In short, I managed to change the maximum number of boundaries (NOMBRE MAXIMUM DE FRONTIERES: 9000 -> 91000) so that it is finally taken into account in the calculation. Everything works normally.

Thanks for the reactivity of the admins for the publication of my post.

The administrator has disabled public write access.

In fact, it seems I have a real problem.

My first message explains my problem. I attach a screenshot to illustrate the situation.

I really don't know how it happened, but the value written in my CAS file is definitely different from the one read by TELEMAC...

Do you have an idea about how to fix this?

Thanks!

The administrator has disabled public write access.

I tried to launch the calculation in a sequential way (not in parallel):

FICHIER DES CONDITIONS AUX LIMITES  : './BC_S1.cli'
FICHIER DE GEOMETRIE                : './Drac_Complet_STB.geo'
FICHIER DES RESULTATS               : './RES_Test.res'
FICHIER DES FRONTIERES LIQUIDES     : './2400T_Q10_Complet.txt'
FICHIER DU CALCUL PRECEDENT         : './ini.rep'
FICHIER DES COURBES DE TARAGE       : './Barrage_StEg_ok.txt'
/FICHIER DES SECTIONS DE CONTROLE    : './controleG6.txt'
/FICHIER DE SORTIE DES SECTIONS DE CONTROLE : './resu.txt'
IMPRESSION DU CUMUL DES FLUX        : OUI
NOMBRE MAXIMUM DE FRONTIERES        : 92000
/FICHIER DE DONNEES DES BRECHES      : './breachG6.txt'
/
/------------------------------------------------------------------------------------------------------------
/                        OPTIONS GENERALES
/------------------------------------------------------------------------------------------------------------
/
/
VARIABLES POUR LES SORTIES GRAPHIQUES :'U,V,S,B,H,Q,
M,L,W,F,MAXV,MAXZ'
VARIABLES A IMPRIMER                  : ''
PAS DE TEMPS                          : 1.5
DUREE DU CALCUL                       : 250000
PERIODE POUR LES SORTIES GRAPHIQUES   : 600
PERIODE DE SORTIE LISTING             : 600
/PROCESSEURS PARALLELES                : 12
/

I get another error:

READ_DATASET : LECTURE A L'ENREGISTREMENT     5

 TEMPS DE L'ENREGISTREMENT :     20.00000     S
 TEMPS ECOULE REMIS A ZERO

job aborted:
rank: node: exit code[: error message]
0: Aix-P1085.ingerop.com: 29: process 0 exited without calling finalize

-> What is it?

The administrator has disabled public write access.

I deactivated the "breach" option. Now it's OK in sequential mode.
But in parallel mode, I get another error:

 +-------------------------------------------------+
   PARTEL/PARRES: TELEMAC METISOLOGIC PARTITIONER
                                                    
   REBEKKA KOPMANN & JACEK A. JANKOWSKI (BAW)
                  JEAN-MICHEL HERVOUET (LNHE)
                  CHRISTOPHE DENIS     (SINETICS) 
                  YOANN AUDOUIN        (LNHE) 
   PARTEL (C) COPYRIGHT 2000-2002 
   BUNDESANSTALT FUER WASSERBAU, KARLSRUHE
  
   METIS 5.0.2 (C) COPYRIGHT 2012 
   REGENTS OF THE UNIVERSITY OF MINNESOTA 
  
   BIEF 7.1 (C) COPYRIGHT 2012 EDF
 +-------------------------------------------------+
  
  
   MAXIMUM NUMBER OF PARTITIONS:       100000
  
 +--------------------------------------------------+
  
 --INPUT FILE NAME <INPUT_NAME>: 
 INPUT: T2DGEO
 --INPUT FILE FORMAT <INPFORMAT> [MED,SERAFIN,SERAFIND]: 
  INPUT: SERAFIN 
 --BOUNDARY CONDITIONS FILE NAME: 
 INPUT: T2DCLI
--NUMBER OF PARTITIONS <NPARTS> [2 -100000]: 
  INPUT:           12
  PARTITIONING METHOD <PMETHOD>  [1 (METIS) OR 2 (SCOTCH)]: 
 --INPUT:            1
 --CONTROL SECTIONS FILE NAME (OR RETURN) : 
  NO SECTIONS 
 --CONTROL ZONES FILE NAME (OR RETURN) : 
  NO ZONES 
 --GEOMETRY FILE NAME <INPUT_NAME>: 
 INPUT: T2DGEO
 --GEOMETRY FILE FORMAT <GEOFORMAT> [MED,SERAFIN,SERAFIND]: 
  INPUT: SERAFIN 
 +---- PARTEL: BEGINNING -------------+


 READ_MESH_INFO: TITLE= newSelafin                                                              
            NUMBER OF ELEMENTS:  1266897
            NUMBER OF POINTS:   690952

            FORMAT NOT INDICATED IN TITLE
  
  
 ONE-LEVEL MESH.
 NDP NODES PER ELEMENT:                    3
 ELEMENT TYPE :                           10
 NPOIN NUMBER OF MESH NODES:          690952
 NELEM NUMBER OF MESH ELEMENTS:      1266897
  
 THE INPUT FILE ASSUMED TO BE 2D
 THERE ARE            1  TIME-DEPENDENT RECORDINGS
 FRONT2: SIZE OF ARRAYS EXCEEDED
         INCREASE THE KEYWORD
         MAXIMUM NUMBER OF BOUNDARIES
         IN THE CALLING PROGRAM
         THE CURRENT VALUE IS         9000



 PLANTE: PROGRAM STOPPED AFTER AN ERROR
 RETURNING EXIT CODE:            2

-> What is it?

The administrator has disabled public write access.

In front2.f, the parameter concerned is MAXFRO.

The routine code that produces my error is at line 398:

IF(LNG.EQ.2) THEN
WRITE(LU,*) 'FRONT2: SIZE OF ARRAYS EXCEEDED'
WRITE(LU,*) ' INCREASE THE KEYWORD'
WRITE(LU,*) ' MAXIMUM NUMBER OF BOUNDARIES'
WRITE(LU,*) ' IN THE CALLING PROGRAM'
WRITE(LU,*) ' THE CURRENT VALUE IS ',MAXFRO
ENDIF
CALL PLANTE(1)

But in my CAS file, MAXFRO = 92000 (I tried a lot of other values), yet the value read is still 9000...

Please, does someone have an idea about this problem?

The administrator has disabled public write access.

#include <stdio.h>
#include <math.h>
#include <iostream>
#include <fstream>
#include <iomanip>
#include <time.h>
#include <mpi.h>
 
using namespace std;
 
float normal_distance(float x1, float y1, float z1, float x2, float y2, float z2)
{
    return sqrt(pow((x2-x1),2)+pow((y2-y1),2)+pow((z2-z1),2));
}
 
int main(int argc, char **argv)
{
    int N, my_rank, size, k;
    float *xord, *yord, *zord, g;
    double starttime;
    fstream filearray;
 
    MPI_Init(&argc,&argv);
    MPI_Comm_size(MPI_COMM_WORLD,&size);
    MPI_Comm_rank(MPI_COMM_WORLD,&my_rank);
 
    if(my_rank == 0)
    {
        starttime = MPI_Wtime();
 
        char p;
 
        N=0;
        filearray.open("C:\the_text.txt",ios::in);
        if(!filearray.fail())
        {
            while(!filearray.eof())
            {
                filearray.read(&p,1);
                if(p=='\n') N++;  // count newlines to get the number of points
            }
        }
        filearray.close();
        filearray.clear();
 
        N-=1;
 
        xord = new float[N];
        yord = new float[N];
        zord = new float[N];
 
        int t = 0;
        filearray.open("C:\the_text.txt",ios::in);
        if(!filearray.fail())
        {
            while(!filearray.eof())
            {
                filearray>>xord[t];
                filearray>>yord[t];
                filearray>>zord[t];
                t++;
            }
        }
 
        filearray.close();
    }
    
    MPI_Bcast(&N,1,MPI_INT,0,MPI_COMM_WORLD);
    k=N/size;
    MPI_Bcast(&k,1,MPI_INT,0,MPI_COMM_WORLD);
 
    MPI_Bcast(xord,N,MPI_FLOAT,0,MPI_COMM_WORLD);
    MPI_Bcast(yord,N,MPI_FLOAT,0,MPI_COMM_WORLD);
    MPI_Bcast(zord,N,MPI_FLOAT,0,MPI_COMM_WORLD);
 
    int *num = new int[2*k];
 
    int p=0;
    int m;
    for(int i=my_rank*k;i<(my_rank+1)*k;i++)
        {
            g=200.0;
            m=0;
            for(int j=0;j<N-1;j++)
            {
                if(i!=j)
                {
                    if(g>normal_distance(xord[i],yord[i],zord[i],xord[j],yord[j],zord[j]))
                    {
                        g=normal_distance(xord[i],yord[i],zord[i],xord[j],yord[j],zord[j]);
                        m=j;
                    }
                }
            }
            num[p]=i;
            p++;
            num[p]=m;
            p++;
        }
 
    int *numb = new int[2*k*size];
    MPI_Gather(num,2*k,MPI_INT,numb,2*k,MPI_INT,0,MPI_COMM_WORLD);
 
    if(my_rank == 0)
    {
        int s=0;
        while(s<2*k*size)
        {
            cout<<numb[s]<<" ";
            s++;
            cout<<numb[s]<<endl;
            s++;
        }
        
        if(N%size != 0){
            int m;
            for(int i=0;i<N%size;i++)
            {
                g=200.0;
                m=0;
                for(int j=0;j<N-1;j++)
                {
                    if(i!=j)
                    {
                        if(g>normal_distance(xord[i],yord[i],zord[i],xord[j],yord[j],zord[j]))
                        {
                            g=normal_distance(xord[i],yord[i],zord[i],xord[j],yord[j],zord[j]);
                            m=j;
                        }
                    }
                }
                cout<<i<<" "<<m<<endl;
            }
        }
 
        cout<<MPI_Wtime()-starttime<<endl;
    }

    // The coordinate arrays are now allocated on every rank, so free them on every rank.
    delete [] xord;
    delete [] yord;
    delete [] zord;

    delete [] numb;
    delete [] num;
 
    MPI_Finalize();
 
    return 0;
}
