Node warmstarted due to an internal error



Known issues for the IBM PureData System for Operational Analytics, Fix Pack V1.0.0.5

Abstract

This document contains the restrictions and known issues for the IBM PureData System for Operational Analytics, Fix Pack V1.0.0.5 (Fix Pack 5).

Content

Flash storage upgrade gets stalled during apply phase

Problem: The flash storage upgrade stalls during the apply phase, which results in an apply failure.

Sample output of the failure from the PL log:

[06 Mar 2017 07:45:40,711] apply: 172.23.1.182: waiting for upgrade to complete, iteration : update_status=
[06 Mar 2017 07:50:41,284] get_update_status: status is >
[06 Mar 2017 07:50:41,286] apply: 172.23.1.182: waiting for upgrade to complete, iteration : update_status=
[06 Mar 2017 07:55:42,812] get_update_status: status is >
[06 Mar 2017 07:55:42,813] apply: 172.23.1.182: waiting for upgrade to complete, iteration : update_status=
[06 Mar 2017 08:00:42,814] apply: 172.23.1.182: broke out of timed wait after 4 iterations of maximum 48. update status is
[06 Mar 2017 08:00:42,815] Extracted msg from NLS: apply: 172.23.1.182 Error: The update status of the end point is .
[06 Mar 2017 08:00:42,815] apply: 172.23.1.182: error: error state, update status is
[06 Mar 2017 08:00:42,853] apply: storage1: apply failed

Verify the status on the failed node by executing the lssoftwareupgradestatus command:

$ ssh admin@172.23.1.182
IBM_FlashSystem:ibisFlash_00:admin>lssoftwareupgradestatus
status percent_complete
stalled 23

Resolution: 1. Abort the upgrade by using the applysoftware -abort command. Wait until the status becomes inactive:

IBM_FlashSystem:ibisFlash_00:admin>applysoftware -abort
IBM_FlashSystem:ibisFlash_00:admin>lssoftwareupgradestatus
status percent_complete
downgrading 23
IBM_FlashSystem:ibisFlash_00:admin>lssoftwareupgradestatus
status percent_complete
downgrading 23
.
IBM_FlashSystem:ibisFlash_00:admin>lssoftwareupgradestatus
status percent_complete
downgrading 23
IBM_FlashSystem:ibisFlash_00:admin>lssoftwareupgradestatus
status percent_complete
downgrading 23
IBM_FlashSystem:ibisFlash_00:admin>lssoftwareupgradestatus
status percent_complete
inactive 0
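
Waiting for the abort to complete can be scripted instead of re-running the command by hand. A minimal sketch, assuming the enclosure IP from this example and a 60-second polling interval:

# Poll the flash enclosure until the upgrade status returns to inactive (sketch).
while true; do
  st=$(ssh -n admin@172.23.1.182 lssoftwareupgradestatus | awk 'NR==2 {print $1}')
  echo "current status: $st"
  [ "$st" = "inactive" ] && break
  sleep 60
done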

2. Verify whether there are any events for the internal error "Node warmstarted due to an internal error".
This is a known issue with the flash storage. You can clear the event by using the cheventlog command.

IBM_FlashSystem:ibisFlash_00:superuser>lseventlog
sequence_number last_timestamp object_type object_id object_name copy_id status fixed event_id error_code description secondary_object_type secondary_object_id
103 170209221539 node 2 node2 message no 980349 Node added
114 170301053034 drive 0 message no 988024 Flash module format complete
115 170301053034 drive 6 message no 988024 Flash module format complete
.
129 170306074033 cluster ibisFlash_00 message no 980506 Update prepared
130 170306075323 node 1 node1 alert no 074002 2030 Internal error canister 1
131 170306075401 enclosure 1 alert no 085048 2060 Reconditioning of batteries required
.
135 170306075411 cluster ibisFlash_00 message no 980509 Update stalled
136 170306075411 node 1 node1 alert no 009100 2010 Update process failed
IBM_FlashSystem:ibisFlash_00:superuser>

Here, event 130 shows an internal error on the node with error code 2030.

Detailed listing of the event:
IBM_FlashSystem:ibisFlash_00:superuser>lseventlog 130
sequence_number 130
first_timestamp 170306075323
first_timestamp_epoch 1488815603
last_timestamp 170306075323
last_timestamp_epoch 1488815603
object_type node
object_id 1
object_name node1
copy_id
reporting_node_id
reporting_node_name
root_sequence_number
event_count 1
status alert
fixed no
auto_fixed no
notification_type error
event_id 074002
event_id_text Node warmstarted due to an internal error
error_code 2030
error_code_text Internal error
machine_type 9840AE2
serial_number 1351351
FRU None
fixed_timestamp
fixed_timestamp_epoch
callhome_type software
sense1 41 73 73 65 72 74 20 46 69 6C 65 20 2F 62 75 69
sense2 6C 64 2F 74 6D 73 2F 53 56 43 5F 4F 44 45 5F 52
sense3 32 2F 32 30 31 35 2D 30 33 2D 30 39 5F 31 32 2D
sense4 31 34 2D 30 34 2F 72 32 2F 73 72 63 2F 75 73 65
sense5 72 2F 64 72 76 2F 70 61 2F 70 6C 70 61 2E 63 20
sense6 4C 69 6E 65 20 31 35 39 33 00 00 00 00 00 00 00
sense7 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
secondary_object_type canister
secondary_object_id 1
IBM_FlashSystem:ibisFlash_00:superuser>

The event_id_text field shows "Node warmstarted due to an internal error".

Clear the event on the node by using the cheventlog -fix command:

IBM_FlashSystem:ibisFlash_00:admin>cheventlog -fix 130
IBM_FlashSystem:ibisFlash_00:admin>lseventlog
sequence_number last_timestamp object_type object_id object_name copy_id status fixed event_id error_code description secondary_object_type secondary_object_id
103 170209221539 node 2 node2 message no 980349 Node added
114 170301053034 drive 0 message no 988024 Flash module format complete
115 170301053034 drive 6 message no 988024 Flash module format complete
117 170301053039 drive 2 message no 988024 Flash module format complete
118 170301053039 drive 7 message no 988024 Flash module format complete
119 170301053039 drive 8 message no 988024 Flash module format complete
120 170301053044 drive 1 message no 988024 Flash module format complete
121 170301053044 drive 3 message no 988024 Flash module format complete
122 170301053044 drive 4 message no 988024 Flash module format complete
123 170301053044 drive 5 message no 988024 Flash module format complete
124 170301053044 drive 9 message no 988024 Flash module format complete
129 170306074033 cluster ibisFlash_00 message no 980506 Update prepared
131 170306075401 enclosure 1 alert no 085048 2060 Reconditioning of batteries required
132 170306075401 enclosure 1 alert no 085048 2060 Reconditioning of batteries required
133 170306075406 enclosure 1 message no 988030 External data link degraded canister 2
134 170306075406 enclosure 1 message no 988030 External data link degraded canister 1
135 170306075411 cluster ibisFlash_00 message no 980509 Update stalled
137 170306223519 cluster ibisFlash_00 message no 980510 Update aborted
138 170306224949 node 2 node2 message no 980349 Node added
139 170306224949 cluster ibisFlash_00 message no 980508 Update Failed
IBM_FlashSystem:ibisFlash_00:admin>

3. Once the update has been aborted and the event cleared, resume the upgrade.

Console does not allow resume after PFW update reboots management node

Problem: When the non-management apply is running, the CEC reboots during the PFW update. This brings down the management node and stops the fix pack update.

Resolution: To resume the update:

Verify that the console is started by using the mistatus command, run as root on the management host.

(0) root @ ibis01: 7.1.0.0: /
$ mistatus
CDTFS000063I The system console is started.
(0) root @ ibis01: 7.1.0.0: /
$

The command 'miupdate -resume' will not work. Instead:

Determine the 'bwr#' associated with the fixpack. Use the appl_ls_cat command, run as root on the management host.

$ appl_ls_cat
NAME VERSION STATUS DESCRIPTION
bwr0 3.0.3.0 Committed Initial images for IBM PureData System for Operational Analytics
bwr1 4.0.5.0 Applied Updates for IBM_PureData_System_for_Operational_Analytics

Substitute the 'bwr#' found above (in this example, bwr1 at version 4.0.5.0) and run the appl_install_sw command as root on the management host.

echo "appl_install_sw -l bwr1 -resume > /tmp/appl_install_sw_$(date +"%Y%m%d_%H%M%S").out 2>&1" | at now

This will run the fixpack application outside of the session (these can be long-running commands susceptible to terminal loss). Tail the /tmp/appl_install_sw_<timestamp>.out file to view the progress.
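
For example, to follow the progress (a sketch; the timestamp suffix is generated when the job is submitted):

tail -f $(ls -t /tmp/appl_install_sw_*.out | head -1)   # follow the most recent output file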

Apply output shows failed even though there is no failure or error in the log

Problem: The output excerpt from the mi command line or the console GUI:
=====================================================
Log file:
Infrastructure Infrastructure SAN switch firmware: SANFW apply started 1 of 1 task completed

Log file:
Infrastructure Infrastructure Network switch firmware: NetFW apply started 1 of 1 task completed

Log file:
Infrastructure Infrastructure Storage firmware: StorageFW apply started 3 of 3 task completed

Log file:
Infrastructure Infrastructure Storage firmware: StorageFW apply started 3 of 3 task completed

========================
"The operation failed during the apply stage. The resume phase for the release 'bwr1' failed.
Refer to the platform layer log file for details."
========================

Resolution: Verify that the update completed by running the appl_ls_cat command.

In resume scenarios during the management update, the status should be M_Applied:

(0) root @ ibis01: 7.1.0.0: /BCU_share/aixappl/pflayer/log
$ appl_ls_cat
NAME VERSION STATUS DESCRIPTION
bwr0 4.0.4.2 Committed Updates for IBM_PureData_System_for_Operational_Analytics
bwr1 4.0.5.0 M_Applied Updates for IBM_PureData_System_for_Operational_Analytics_DB2105

(0) root @ ibis01: 7.1.0.0: /BCU_share/aixappl/pflayer/log

In resume scenarios during the non-management/core update, the status should be Applied:

(0) root @ ibis01: 7.1.0.0: /BCU_share/aixappl/pflayer/log
$ appl_ls_cat
NAME VERSION STATUS DESCRIPTION
bwr0 4.0.4.2 Committed Updates for IBM_PureData_System_for_Operational_Analytics
bwr1 4.0.5.0 Applied Updates for IBM_PureData_System_for_Operational_Analytics_DB2105

(0) root @ ibis01: 7.1.0.0: /BCU_share/aixappl/pflayer/log

If the status shows either Applied or M_Applied, proceed to the next step and ignore the failure message.

ISW update failed during the management (MGMT) update

Problem: During the PDOA V1.0 FP5 / V1.1 FP1 management apply phase, the fix pack may encounter an error. The pflayer log file may show the following error:

.
[10 Nov 2016 20:07:24,829] Node: 172.23.1.1 Return: 256
[10 Nov 2016 20:07:24,844] TASK_END::10::5 of 6::ISW_APPLY::172.23.1.1:: ::RC=1::CDTFS000048E An error occurred while updating InfoSphere Data Warehouse.\n\nDetails:\nThe command "/BCU_share/bwr1/software/ISW/isw/install.bin -DDS_HA_MODE=TRUE -i silent -f /BCU_share/update_105_tFFF.rsp -Dprofile=BCU_share/bwr1/software/ISW/PDS -Dlog=/tmp/isw_full.log" failed with the error:\n\n""\n\nUser Response:\nContact IBM Support for assistance.
.

Resolution: There is a known issue with the ISW installer returning a status of 256 back to the caller of the install.bin command line even though the installation was a success. To verify:

1. Login to the management node as root in an ssh session.

2. Look for one of the following directories:

/usr/IBM/dwe/appserver_001/iswapp_10.5/logs
or
/usr/IBM/dwe/appserver_001/iswapp_10/logs

3. Look for a file called ISWinstall_summary_<timestamp>.log with a recent date.

4. Run the following:

grep -i status logs/ISWinstall_summary_1701121953.log

This should return a large number of lines with 'Status: SUCCESSFUL'.

If this is the case, it is safe to resume the fix pack as the update was successful.
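
As a convenience, the check can be scripted. A minimal sketch, assuming the directories from step 2 and the filename pattern from step 3:

# Count SUCCESSFUL and FAILED status lines in the most recent ISW install summary log (sketch).
cd /usr/IBM/dwe/appserver_001/iswapp_10.5/logs 2>/dev/null || cd /usr/IBM/dwe/appserver_001/iswapp_10/logs
latest=$(ls -t ISWinstall_summary_*.log | head -1)
echo "checking $latest"
grep -ci 'Status: SUCCESSFUL' "$latest"   # expect a large count
grep -ci 'Status: FAILED' "$latest"       # expect 0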

OPM update failed due to a DBI connect issue in wait_for_start.pl

Problem: During the management apply phase, the apply phase may fail with the following symptoms in the logs.

RC=1::Can't locate DBI.pm in @INC (@INC contains: /usr/opt/perl5/lib/5.10.1/aix-thread-multi /usr/opt/perl5/lib/5.10.1 /usr/opt/perl5/lib/site_perl/5.10.1/aix-thread-multi /usr/opt/perl5/lib/site_perl/5.10.1 /usr/opt/perl5/lib/site_perl .) at /BCU_share/bwr1/code/ISAS/Update/Common/OPM/scripts/wait_for_start.pl line 149.\n

DBI connect('OPMDB', ...) failed: [IBM][CLI Driver] SQL1031N The database directory cannot be found on the indicated file system. SQLSTATE=58031
at /BCU_share/bwr1/code/ISAS/Update/Common/OPM/scripts/wait_for_start.pl line 155

and hals shows that the DPM components are failed over to the standby management host.

Resolution: These messages indicate that during the management apply phase, the DB2 Performance Monitor (DPM) component failed over to the management standby node. There are some known issues with DPM on startup that can lead to failures. If the above symptoms are seen, the next steps are as follows (a consolidated sketch follows the list):

1. Use hals to determine if the DPM resources are indeed failed over.

2. Use lssam on the management host to determine if there are any failed states.

3. Use resetrsrc on any DPM resources that are in a failed state.

4. Verify with lssam that the resources are no longer in a failed state.

5. Use hafailover DPM to move the DPM resources to the management host.

6. Verify that the DPM resources successfully moved to the management host.

7. Resume the Fix Pack
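
A minimal sketch of steps 1 through 6, run as root on the management host. The resource name opm-rs is hypothetical; substitute the failed resource names that lssam actually reports:

hals                                              # step 1: confirm the DPM resources are failed over
lssam | grep -i failed                            # step 2: list any failed resource states
resetrsrc -s 'Name == "opm-rs"' IBM.Application   # step 3: reset a failed DPM resource (hypothetical name)
lssam | grep -i failed                            # step 4: verify nothing remains in a failed state
hafailover DPM                                    # step 5: move DPM back to the management host
hals                                              # step 6: confirm the DPM resources moved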

Storage update failed during apply phase because of drive update failure

Problem: Storage update fails during the apply phase because of drive update failures; one or more storage drives might be in the offline state.

Console output during failure:
========================
The operation failed during the apply stage. Storage update failed on 172.23.1.186.
Refer to the platform layer log file for details.
========================

Sample output of the failure from the PL log:

[08 Mar 2017 04:13:43,315] Extracted msg from NLS: apply: 172.23.1.186 ssh admin@172.23.1.186 ssh admin@172.23.1.186 LANG=en_US svctask applydrivesoftware -file IBM2076_DRIVE_20160923 -type firmware -drive 0:1:2:3:4:5:6:7:8:9:10:11:12:13:14:15:16:17:18:19:20:21:22:23:24:25:26:27:28:29:30:31:32:33:34:35:36:37:38:39:40:41:42:43:44:45:46:47 command failed.
[08 Mar 2017 04:13:43,315] apply: 172.23.1.186: error: ssh admin@172.23.1.186 ssh admin@172.23.1.186 LANG=en_US svctask applydrivesoftware -file IBM2076_DRIVE_20160923 -type firmware -drive 0:1:2:3:4:5:6:7:8:9:10:11:12:13:14:15:16:17:18:19:20:21:22:23:24:25:26:27:28:29:30:31:32:33:34:35:36:37:38:39:40:41:42:43:44:45:46:47 command failed , rc=127

By executing the lsdrive command, we can verify the drive statuses on the failed storage enclosure. As an example:

$ ssh superuser@172.23.1.186 «lsdrive»
id status error_sequence_number use tech_type capacity mdisk_id mdisk_name member_id enclosure_id slot_id node_id node_name auto_manage
0 online member sas_hdd 837.9GB 0 ARRAY3 11 2 1 inactive
1 online member sas_hdd 837.9GB 0 ARRAY3 10 1 1 inactive
2 online member sas_hdd 837.9GB 0 ARRAY3 9 2 2 inactive
3 online member sas_hdd 837.9GB 0 ARRAY3 8 1 10 inactive
4 online member sas_hdd 837.9GB 0 ARRAY3 7 1 2 inactive
5 offline 273 failed sas_hdd 837.9GB 2 10 inactive
6 online member sas_hdd 837.9GB 0 ARRAY3 5 2 9 inactive
7 online member sas_hdd 837.9GB 0 ARRAY3 4 1 9 inactive
8 online member sas_hdd 837.9GB 0 ARRAY3 3 2 11 inactive
9 online member sas_hdd 837.9GB 0 ARRAY3 2 2 8 inactive
10 online member sas_hdd 837.9GB 0 ARRAY3 1 1 8 inactive
11 online member sas_hdd 837.9GB 0 ARRAY3 0 1 11 inactive
12 online member sas_hdd 837.9GB 1 ARRAY4 11 2 7 inactive
13 offline 268 spare sas_hdd 837.9GB 1 7 inactive
14 online member sas_hdd 837.9GB 1 ARRAY4 9 2 6 inactive
15 online member sas_hdd 837.9GB 1 ARRAY4 8 1 6 inactive
16 online member sas_hdd 837.9GB 1 ARRAY4 7 1 5 inactive
17 online member sas_hdd 837.9GB 1 ARRAY4 6 1 12 inactive

Here we see that two drives (drive 5 and drive 13) are in the offline state, which is the reason for the failure.

Resolution: Run lsdrive to see the list of failed drives, and then do the following (a verification sketch follows the list):

1. Fix the drives.

2. Ensure the statuses of the drives are all online.

3. Resume the fix pack update.
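
To verify step 2, a quick check on the failed enclosure (a sketch using the example IP 172.23.1.186 from above); no output means all drives are online:

ssh -n superuser@172.23.1.186 lsdrive | awk 'NR > 1 && $2 != "online"'   # print any drive whose status is not online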

CECs are rebooted and /BCU_share is unmounted after a power firmware update

Problem: The upgrade of the CEC that hosts the management and admin nodes is performed, the CEC is rebooted, and the run halts because of this reboot.
When the CEC comes back online and the run is resumed, /BCU_share is mounted on the management and admin nodes.
Subsequently, the other CECs are upgraded and rebooted.
However, when the respective nodes come back up, /BCU_share is not mounted again, and the upgrade proceeds to the point where it tries to access /BCU_share and fails.

Symptoms:
1. The output excerpt from the log
=====================================================
[16 Nov 2016 08:32:35,041] Failed to unpack the adapter firmware file /BCU_share/bwr1/firmware/fc_adapter/df1000f114100104/image/df1000f114100104.203305.aix.rpm on 172.23.1.4.
=====================================================

2. The /BCU_share NFS mount shared from the management host is not mounted on all hosts.

Resolution: After the failure is identified, simply resume. The resume code verifies that /BCU_share is mounted across the hosts.
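
A quick way to check the mounts yourself, as a sketch run as root on the management host ($ALL is the host list variable used elsewhere in this document):

dsh -n $ALL "mount | grep -c BCU_share" | dshbak -c   # hosts that print 0 are missing the /BCU_share mount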

miinfo compliance command shows compliance issue for some of the products

Problem: Running the miinfo compliance command (miinfo -d -c) shows that some levels are not correct.

You may see the following under some servers:

IBM Systems Director Common Agent 6.3.3.1 Higher
IBM InfoSphere Warehouse 10.5.0.20151104_10.5.0.8..0 Lower
IBM InfoSphere Optim Query Workload Tuner The version of the product cannot be determined or it is not installed. NA

Resolution: 1. IBM Systems Director Common Agent 6.3.3.1 Higher
The common agent should no longer be tracked by the compliance software. This is a defect in the compliance check program and will not impact the operation of the appliance.

2. IBM InfoSphere Warehouse 10.5.0.20151104_10.5.0.8..0 Lower
The InfoSphere Warehouse software compliance check uses 20151117 instead of 20151104. This is a defect in the compliance check code and will not impact the operation of the appliance.

3. IBM InfoSphere Optim Query Workload Tuner The version of the product cannot be determined or it is not installed. NA
This is normal for nodes that are currently running as standby hosts. The compliance checker has a limitation in that it cannot check the level when a core host is currently a designated standby host.

FP5 cannot be directly applied to FP3 without some additional fixes and modifications.

Problem: The IBM PureData System for Operational Analytics V1.0 FP5 package cannot be directly applied to FP3 environments. There are three distinct issues with FP5 on FP3 environments.

1. It is possible to register the fixpack; however, after registration the console will no longer start. During the preview step, messages similar to the following appear in the log:

[03 Apr 2017 03:54:54,222] PRODUCT_UPDATES::OPM_PROD::OPM::Management::5.3.1.0.8440::5.3.0.0.7336::OPM
[03 Apr 2017 03:54:54,481] server_logical_names = server6
[03 Apr 2017 03:55:01,876] Stage getlevel failed:
[03 Apr 2017 03:55:01,877] Use of uninitialized value in split at /opt/ibm/aixappl/pflayer/lib/ISAS/PlatformLayer/TSA/Topology.pm line 138, <> line 10.
[03 Apr 2017 03:55:01,878] Use of uninitialized value in split at /opt/ibm/aixappl/pflayer/lib/ISAS/PlatformLayer/TSA/Topology.pm line 138, <> line 10.
[03 Apr 2017 03:55:01,927] Executing query Logical_name=bwr1 AND Solution_version=4.0.5.0, to update status of Solution
[03 Apr 2017 03:55:02,056] PHASE_END PREVIEW
[03 Apr 2017 03:55:02,058] The preview phase for the release 'bwr1' failed.

2. It is possible to apply the fixpack via the command line; however, the fixpack will fail validation because the firmware levels on the V7000s are too low for FP5 to update.

3. It is possible to apply the fixpack via the command line; however, the fixpack will fail validation because the firmware levels on the SAN switches are too low for FP5 to update.

Resolution: See the document 'How to apply the IBM PureData System for Operational Analytics V1.0 FP5 on a FP3 environment?' for more information about the FP3 to FP5 scenario.

Failed paths after the fixpack apply stage

Problem: The AIX hosts may have failed paths to the external storage. Run the following command as root on the management host:

dsh -n $ALL "lspath | grep hdisk | grep -v Enabled | wc -l" | dshbak -c

It returns output if there are failed paths to the storage.

Resolution: There are two remedy options.

1. Reboot the host with the failed paths. This will effectively bounce the port. This may require an outage.

2. Follow the instructions below to bounce only the port. All access to the storage is fully redundant with multiple paths which is why the system can start even with failed paths. This method avoids an outage and effectively bounces the port.

The following should be done one port at a time and should be performed either in an outage window or at a time when the system has very low I/O activity.

a. For each host, log in and determine the ports with failed paths using the 'lspath | grep hdisk | grep -v Enabled | while read stat disk dev rest;do echo "$dev";done | sort | uniq' command (here $dev is the parent fscsi device of each failed path). This command returns the unique set of ports connected to hdisk devices whose paths are Missing or Failed.

Example output:
fscsi10

b. For each port, create a script and update the export statement to match the number in the fscsi# ID of the failed path. This script will remove all paths to that port, set the port devices to the Defined state, and then rediscover the paths. This effectively bounces the port.

c. Change the id to match the fscsi number (the variable name id below is illustrative). Run each script to remove the paths and put the devices in the Defined state, then use cfgmgr to reinitialize; this should create all of the new paths. Run these scripts one at a time, and then verify that the path no longer appears in the output of the command shown in step a.

export id=10   # set to the fscsi number with the failed paths
lspath -p fscsi$id | while read st hd fs;do echo $hd;done | sort | uniq | while read disk;do rmpath -d -l $disk -p fscsi$id;done
rmdev -l sfwcomm$id;rmdev -l fscsi$id;rmdev -l fcs$id
cfgmgr -s

d. After the commands run, verify that there are no more failed paths over the port and that the port has discovered the existing paths.

Bouncing the port in this way preserves any settings stored in ODM for the fcs and fscsi devices.
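
For example, to confirm that a host is clean after the port bounce, re-run the check from the start of this section (sketch; the count should be 0):

lspath | grep hdisk | grep -v Enabled | wc -l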

DB2 and/or ISW preview failures due to incorrect fixpack or incomplete DB2 10.5 upgrade.

Problem: I downloaded and registered the fixpack; however, I'm receiving preview errors related to the DB2 and/or InfoSphere Warehouse (ISW) levels.

There are two different Fix Central downloads for IBM PureData System for Operational Analytics V1.0 Fix Pack 5.

IBM PureData System for Operational Analytics Fix Pack 5 (for systems with DB2 Version 10.1)

IBM PureData System for Operational Analytics Fix Pack 5 (for systems with DB2 Version 10.5)

There are a couple of scenarios where problems arise.

1. Customer has DB2 V10.1, downloads and registers the fixpack with DB2 10.5.
2. Customer has followed the instructions to uplift or upgrade DB2 to 10.5 by following the technote Upgrading an IBM PureData System for Operational Analytics Version 1.0 environment to DB2 10.5 and downloads the fixpack with DB2 10.1.
3. Customer has only partially followed the instructions to uplift or upgrade DB2 to 10.5 by following the technote Upgrading an IBM PureData System for Operational Analytics Version 1.0 environment to DB2 10.5, downloads the fixpack with DB2 10.5, but encounters preview errors related to the ISW level being at the incorrect version.

Resolution: 1. Customer has DB2 V10.1, downloads and registers the fixpack with DB2 10.5.
2. Customer has followed the instructions to uplift or upgrade DB2 to 10.5 by following the technote Upgrading an IBM PureData System for Operational Analytics Version 1.0 environment to DB2 10.5 and downloads the fixpack with DB2 10.1.

Contact IBM Support for help to de-register the incorrect fixpack.
Download the fixpack with the correct DB2 levels.
Follow the fixpack instructions as usual.

3. Customer has only partially followed the instructions to uplift or upgrade DB2 to 10.5 by following the technote Upgrading an IBM PureData System for Operational Analytics Version 1.0 environment to DB2 10.5, downloads the fixpack with DB2 10.5, but encounters preview errors related to the ISW level being at the incorrect version.

This scenario is most likely due to the technote not being fully followed. This can happen because of confusion about the relationship between InfoSphere Warehouse and DB2: most customers understand how to upgrade DB2, and it is easy to miss that the InfoSphere Warehouse product must be updated as well. The fixpack catalog does not at present support mixing DB2 10.5 and InfoSphere Warehouse 10.1. The customer will need to revisit the Upgrading an IBM PureData System for Operational Analytics Version 1.0 environment to DB2 10.5 technote to verify that all of the update steps were followed, that the InfoSphere Warehouse levels are at 10.5, and that the WebSphere Application Server levels are at 8.5.5.x, as required by the technote.

Once the levels are updated per the technote, the fixpack can be resumed and the preview should no longer fail.

The FixCentral download inadvertently includes XML files that are not part of the fixpack.

Problem: I downloaded all of the files included in the Fix Central entry for the fixpack, and there are extra XML files included. What are they for?

The XML files are of the following pattern:

*.fo.xml
*SG*.xml

Resolution: These files were inadvertently included in the fixpack packages; they should either not be downloaded, or they should be deleted.

Storage update failed during the apply phase with an 'update is already in progress' message. [Added 2017-09-18]

Problem: During the fixpack apply phase the fixpack fails.

[16 Sep 2017 21:14:05,518] STORAGE:storage0:172.23.1.181:1:Storage firmware update failed.

—————————————————————————————-
/BCU_share/applmgmt/pflayer/log/pl_update.trace:
--> This excerpt shows an attempt to update the drive fw failing. The critical message is this one: "CMMVC6055E The action failed as an update is in progress.\n"], sleep time is not configured, defaults will be applied
[16 Sep 2017 21:13:38,576] apply: 172.23.1.181: now installing drive updates.
[16 Sep 2017 21:13:38,577] drive_id: 0:1:2:3:4:6:7:8:9:10:11:13:14:15:16:17:18:19:20:21:22:23:24:25:26:27:29:30:31:32:33:34:35:36:37:38:39:40:41:42:43:44:45:46:47:48:49:50:51:52:53:54:55:56:58:59:60:61:62:64:65:66:67:68:69:70:71
[16 Sep 2017 21:13:38,577] Number of drive id's is less than 128
[16 Sep 2017 21:13:38,578] Drive update command execution cnt : 0.
[16 Sep 2017 21:13:43,150] command: ssh admin@172.23.1.181 LANG=en_US svctask applydrivesoftware -file IBM2076_DRIVE_20160923 -type firmware -drive 0:1:2:3:4:6:7:8:9:10:11:13:14:15:16:17:18:19:20:21:22:23:24:25:26:27:29:30:31:32:33:34:35:36:37:38:39:40:41:42:43:44:45:46:47:48:49:50:51:52:53:54:55:56:58:59:60:61:62:64:65:66:67:68:69:70:71
[16 Sep 2017 21:13:43,151] CMMVC6055E The action failed as an update is in progress.
[16 Sep 2017 21:13:43,151] Rc = 1
[16 Sep 2017 21:13:43,152] Extracted msg from NLS: apply: 172.23.1.181 ssh admin@172.23.1.181 LANG=en_US svctask applydrivesoftware -file IBM2076_DRIVE_20160923 -type firmware -drive 0:1:2:3:4:6:7:8:9:10:11:13:14:15:16:17:18:19:20:21:22:23:24:25:26:27:29:30:31:32:33:34:35:36:37:38:39:40:41:42:43:44:45:46:47:48:49:50:51:52:53:54:55:56:58:59:60:61:62:64:65:66:67:68:69:70:71 command failed.
[16 Sep 2017 21:13:43,153] apply: 172.23.1.181: error: ssh admin@172.23.1.181 LANG=en_US svctask applydrivesoftware -file IBM2076_DRIVE_20160923 -type firmware -drive 0:1:2:3:4:6:7:8:9:10:11:13:14:15:16:17:18:19:20:21:22:23:24:25:26:27:29:30:31:32:33:34:35:36:37:38:39:40:41:42:43:44:45:46:47:48:49:50:51:52:53:54:55:56:58:59:60:61:62:64:65:66:67:68:69:70:71 command failed , rc=1
[16 Sep 2017 21:13:43,153] < Entering Ctrl::Updates::Storage::search_token (Called from /opt/ibm/aixappl/pflayer/lib/Ctrl/Updates/Storage.pm line 1127)
[16 Sep 2017 21:13:43,154] Args:[["CMMVC8325E","None of the specified drives needed to be upgraded or downgraded"],[], ]
[16 Sep 2017 21:13:43,155] Not able to find CMMVC8325E None of the specified drives needed to be upgraded or downgraded in the output, an unexpected error occured
[16 Sep 2017 21:13:43,155] Return: 0
[16 Sep 2017 21:13:43,156] Exiting Ctrl::Updates::Storage::search_token >
[16 Sep 2017 21:13:43,156] < Entering Ctrl::Updates::Storage::search_token (Called from /opt/ibm/aixappl/pflayer/lib/Ctrl/Updates/Storage.pm line 1128)
[16 Sep 2017 21:13:43,157] Args:[["CMMVC8325E","None of the specified drives needed to be upgraded or downgraded"],["CMMVC6055E The action failed as an update is in progress.\n"], ]
[16 Sep 2017 21:13:43,157] Not able to find CMMVC8325E None of the specified drives needed to be upgraded or downgraded in the output, an unexpected error occured
[16 Sep 2017 21:13:43,158] Return: 0
[16 Sep 2017 21:13:43,158] Exiting Ctrl::Updates::Storage::search_token >
[16 Sep 2017 21:13:43,158] < Entering Ctrl::Updates::Storage::search_token (Called from /opt/ibm/aixappl/pflayer/lib/Ctrl/Updates/Storage.pm line 1138)
[16 Sep 2017 21:13:43,159] Args:[["CMMVC6546E","The current drive status is degraded"],[], ]
[16 Sep 2017 21:13:43,159] Not able to find CMMVC6546E The current drive status is degraded in the output, an unexpected error occured
[16 Sep 2017 21:13:43,160] Return: 0
[16 Sep 2017 21:13:43,160] Exiting Ctrl::Updates::Storage::search_token >
[16 Sep 2017 21:13:43,160] < Entering Ctrl::Updates::Storage::search_token (Called from /opt/ibm/aixappl/pflayer/lib/Ctrl/Updates/Storage.pm line 1139)
[16 Sep 2017 21:13:43,161] Args:[["CMMVC6546E","The current drive status is degraded"],["CMMVC6055E The action failed as an update is in progress.\n"], ]
[16 Sep 2017 21:13:43,161] Not able to find CMMVC6546E The current drive status is degraded in the output, an unexpected error occured
[16 Sep 2017 21:13:43,162] Return: 0
[16 Sep 2017 21:13:43,162] Exiting Ctrl::Updates::Storage::search_token >
[16 Sep 2017 21:13:43,163] Function search_token failed, exiting the loop.
[16 Sep 2017 21:13:43,163] Drive update got failed on 172.23.1.181 storage.
[16 Sep 2017 21:13:43,203] apply: storage0: apply failed
[16 Sep 2017 21:13:43,204] For message id::1021
[16 Sep 2017 21:13:58,212] < Entering Ctrl::Query::Status::read_status_n_details (Called from /opt/ibm/aixappl/pflayer/lib/Ctrl/Updates/Storage.pm line 1210)
[16 Sep 2017 21:13:58,213] Args:[«172.23.1.181 storage0 storage 0 NA»]
[16 Sep 2017 21:13:58,213] < Entering Ctrl::Util::util_details (Called from /opt/ibm/aixappl/pflayer/lib/Ctrl/Query/Status.pm line 45)
[16 Sep 2017 21:13:58,214] Args:[«172.23.1.181″,»storage»,0]
.
[16 Sep 2017 21:14:05,520] Command Status:Details:
[16 Sep 2017 21:14:05,520] Status: Online
[16 Sep 2017 21:14:05,520] AccessState: Unlocked
[16 Sep 2017 21:14:05,520] Model: 124
[16 Sep 2017 21:14:05,520] IPv4Address: [«172.23.1.181»]
[16 Sep 2017 21:14:05,520] Manufacturer: IBM
.
[16 Sep 2017 21:14:05,520] FWBuild: 115.54.1610251759000
[16 Sep 2017 21:14:05,520] PLLogicalName: storage0
[16 Sep 2017 21:14:05,520] MachineType: 2076
[16 Sep 2017 21:14:05,520] HostName: V7_00_1
[16 Sep 2017 21:14:05,521] FWLevel: 7.5.0.11
[16 Sep 2017 21:14:05,521] Description: IBM Storwize V7000 Storage
[16 Sep 2017 21:14:05,521] STORAGE:storage0:172.23.1.181:1:Storage firmware update failed.
[16 Sep 2017 21:14:05,521] , Command Status->1
[16 Sep 2017 21:14:05,522] TASK_END::13::1 of 1::StorageUPD::172.23.1.181. RC=1::Storage update failed on 172.23.1.181
[16 Sep 2017 21:14:05,523] Return: 0
.
[16 Sep 2017 21:14:05,831] Error on nodes (172.23.1.181).
[16 Sep 2017 21:14:05,848] STEP_END::13::StorageFW_UPD::FAILED
.
[16 Sep 2017 21:14:06,220] Exiting /opt/ibm/aixappl/pflayer/lib/ManageCatalogStatus.pm => ODM::change_record >
[16 Sep 2017 21:14:06,224] Exiting ManageCatalogStatus::update_status >
[16 Sep 2017 21:14:06,225] PHASE_END APPLY IMPACT
[16 Sep 2017 21:14:06,225] For message id::640
[16 Sep 2017 21:14:06,227] The apply phase for the release 'bwr5' failed.
[16 Sep 2017 21:14:06,228] PHASE_END RESUME
[16 Sep 2017 21:14:06,228] For message id::640
[16 Sep 2017 21:14:06,229] The resume phase for the release 'bwr5' failed.

ssh -n superuser@172.23.1.181 'lsupdate'
status system_completion_required
event_sequence_number 131
progress
estimated_completion_time
suggested_action complete
system_new_code_level
system_forced no
system_next_node_status none
system_next_node_time
system_next_node_id
system_next_node_name

This is a documented issue when upgrading V7000 firmware from 7.3.x to 7.4.x as indicated in the 7.4.0 release notes.

https://public.dhe.ibm.com/storage/san/sanvc/release_notes/740_releasenotes.html

Resolution: In the PureData System for Operational Analytics Console, use the Service Level Access page to find the link to the Management Interface for each of the V7000s.

Navigate to the Events page, which should show an alert. Select this alert and follow the fix procedures to initiate the second phase.

Do this for each of the V7000s that has this issue. This step takes approximately 40 minutes per enclosure and can be run in parallel.

After the second phase is completed, you should see the message 'System update completion finished' in the lseventlog output:

lseventlog -fixed yes

130 170916183340 cluster V7_00_1 message no 980507 Update completed
131 170916183340 cluster V7_00_1 alert yes 009198 2050 System update completion required
132 170917022703 cluster V7_00_1 message no 980511 System update completion started
133 170917022713 node 3 node2 message no 980513 Node restarted for system update completion
134 170917022713 io_grp 0 io_grp0 message no 981102 SAS discovery occurred, configuration changes pending
135 170917022729 io_grp 0 io_grp0 message no 981103 SAS discovery occurred, configuration changes complete
136 170917022818 node 3 node2 message no 980349 Node added
137 170917022818 io_grp 0 io_grp0 message no 981102 SAS discovery occurred, configuration changes pending
138 170917022828 io_grp 0 io_grp0 message no 981103 SAS discovery occurred, configuration changes complete
139 170917025828 node 1 node1 message no 980513 Node restarted for system update completion
140 170917025828 io_grp 0 io_grp0 message no 981102 SAS discovery occurred, configuration changes pending
141 170917025828 io_grp 0 io_grp0 message no 981103 SAS discovery occurred, configuration changes complete
142 170917025939 node 1 node1 message no 980349 Node added
143 170917025941 io_grp 0 io_grp0 message no 981102 SAS discovery occurred, configuration changes pending
144 170917025941 cluster V7_00_1 message no 980512 System update completion finished
145 170917025946 io_grp 0 io_grp0 message no 981103 SAS discovery occurred, configuration changes complete

lsupdate on that host should show 'status' = success.

ssh -n superuser@172.23.1.181 'lsupdate'
status success
event_sequence_number
progress
estimated_completion_time
suggested_action start
system_new_code_level
system_forced no
system_next_node_status none
system_next_node_time
system_next_node_id
system_next_node_name
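
To check all of the V7000s at once, a loop in the style used elsewhere in this document can help (sketch; $c is the IP address field parsed from xcluster.cfg):

# Show the lsupdate status line for every SAN frame (sketch).
grep 'SAN_FRAME[0-9][0-9]*_IP' /pschome/config/xcluster.cfg | while read a b c d;do echo "*** $c ***";ssh -n superuser@$c lsupdate | grep '^status';done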

Once this is completed for all V7000s, resume the apply phase.

HMC fw update fails in getupgfiles step. [Added 2017-10-07]

Problem: The fixpack fails with the following message in the pl_update.log file.

[07 Oct 2017 14:51:11,167] iso file validation failed/not-applicable
[07 Oct 2017 14:51:11,168] Updates failed
[07 Oct 2017 14:51:11,188] TASK_END::2::1 of 1::HMCUPD::172.23.1.246. RC=1::Update failed for HMC
[07 Oct 2017 14:51:11,285] Executing query Logical_name=Management AND Solution_version=4.0.5.0, to update status of Product
[07 Oct 2017 14:51:11,340] Executing query Sub_module_type=Management AND Solution_version=4.0.5.0, to update status of sub module
[07 Oct 2017 14:51:11,570] Executing query Logical_name=Management AND Solution_version=4.0.5.0, to update status of Product
[07 Oct 2017 14:51:11,616] Executing query Sub_module_type=Management AND Solution_version=4.0.5.0, to update status of sub module
[07 Oct 2017 14:51:11,735] Error on nodes (172.23.1.246 172.23.1.245).
[07 Oct 2017 14:51:11,752] STEP_END::2::HMC_UPD::FAILED
[07 Oct 2017 14:51:11,756] Error occured in apply for product hmc1
[07 Oct 2017 14:51:11,806] Executing query Logical_name=hmc1 AND Solution_version=4.0.5.0, to update status of Product
[07 Oct 2017 14:51:11,925] Apply (impact) phase for management module has failed.
[07 Oct 2017 14:51:12,011] Executing query Logical_name=bwr1 AND Solution_version=4.0.5.0, to update status of Solution
[07 Oct 2017 14:51:12,129] PHASE_END APPLY IMPACT
[07 Oct 2017 14:51:12,131] The apply phase for the release 'bwr1' failed.
[07 Oct 2017 14:51:12,132] PHASE_END RESUME
[07 Oct 2017 14:51:12,134] The resume phase for the release 'bwr1' failed.

Looking earlier in the log we see the following message:

[07 Oct 2017 14:51:07,582] Last login: Sat Oct 7 14:41:30 2017 from 172.23.1.1^M
[07 Oct 2017 14:51:07,582] ^[[?1034hhscroot@pddrmd7hmc1:

> getupgfiles -h 172.23.1.1 -u root -d /BCU_share/bwr1/firmware/hmc/CR6/image/imports/HMC_Recovery_V8R830_5 -s
[07 Oct 2017 14:51:07,582] Enter the current password for user root:
[07 Oct 2017 14:51:07,582]
[07 Oct 2017 14:51:07,582] The file transfer did not complete sucessfully.
[07 Oct 2017 14:51:07,582] Verify the remote directory exists, all required files needed for upgrade are there,
[07 Oct 2017 14:51:07,583] you have read access to both the directory and the files, and then try the operation again.
[07 Oct 2017 14:51:07,583] hscroot@pddrmd7hmc1:

> echo $?
[07 Oct 2017 14:51:07,583] 1
[07 Oct 2017 14:51:07,583] hscroot@pddrmd7hmc1:

>
[07 Oct 2017 14:51:07,585] From process 4522578: STDERR:
[07 Oct 2017 14:51:07,585]
[07 Oct 2017 14:51:07,586] Exit code: 1
[07 Oct 2017 14:51:07,587] Command return code -> 1
[07 Oct 2017 14:51:07,588] getupgfiles command failed
[07 Oct 2017 14:51:07,626] Failed to upgrade release

Resolution: Check the log file to see whether the update completed on the second or other HMC in the environment. If it completed successfully there, then the most likely reason is that the known_hosts file for the root user on the failing HMC has an incorrect ssh host key for the management host. This should be rare, but it can happen if the ssh host keys on the management host change over time and, during troubleshooting or a deployment step, an ssh session was initiated from the root user on the HMC to the management host.

Resolving this issue requires PDOA support to open a secondary case with the HMC support team. The HMC support team will lead the customer through obtaining pesh access; this process is described in the pesh documentation for POWER7 and POWER8 HMCs.

Access to pesh requires the hscpe user and the root user. In PDOA environments the hscpe user is removed before the system is turned over, but it may have been created during earlier troubleshooting. It may therefore be necessary to create the hscpe user, or to change its password if it already exists from a previous troubleshooting step. The same is true for the root user: if the root password is not known, it must be changed. Both the hscpe and root passwords can be modified from the hscroot user by using the chhmcuser command.

Once a pesh session is established and the customer is able to access the root account, it is possible to confirm that this is indeed the problem by running the following as the root user:

bash-4.1# ssh root@172.23.1.1
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is

To fix the issue, run the following as the root user on the HMC:

Note that if your management host internal network IP address is different from 172.23.1.1, substitute that IP address in the ssh-keygen command.

ssh-keygen -R 172.23.1.1

This command removes the entry from the /root/.ssh/known_hosts file on the HMC. It must be run as the root user on the HMC.

"Could not start the product 'GPFS' on" during apply phase. [Added 2017-11-22]

Problem: When the fixpack attempts to restart GPFS on a host, it may fail to start GPFS, causing the fixpack process to fail.
Resolution: This happens because of a limitation in the pflayer code that determines whether all of the GPFS filesystem mount points are mounted before allowing the fixpack process to proceed to the next step. This code relies on a very specific naming convention for NSDs and their associated GPFS filesystems, as well as a one-to-one mapping of NSDs to filesystems. If a filesystem and NSD do not follow these conventions, the GPFS startup code cannot determine when all filesystems are mounted. Customers who have added GPFS filesystems that do not follow these two conventions will need to contact IBM for possible remediation options.

Here is the test.

Run the following commands on the hosts identified in the pl_update.log file that could not start GPFS. These commands can be run prior to the fixpack process.

/usr/lpp/mmfs/bin/mmlsfs all -d 2> /dev/null | grep -- "-d" | awk '{ sub(/nsd/, "", $2); print $2 }' | sort

The expectation is that the output (the NSD names with the nsd prefix removed) exactly matches the names of the GPFS filesystems.
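
A hedged sketch of the comparison, assuming the mmlsfs header format 'File system attributes for /dev/<name>:'; if the two lists differ, the naming convention is broken:

# Derive filesystem names from NSD names and compare them with the actual filesystem names (sketch).
/usr/lpp/mmfs/bin/mmlsfs all -d 2>/dev/null | grep -- "-d" | awk '{ sub(/nsd/, "", $2); print $2 }' | sort > /tmp/names_from_nsd
/usr/lpp/mmfs/bin/mmlsfs all 2>/dev/null | awk '/File system attributes for/ { gsub(/^\/dev\/|:$/, "", $NF); print $NF }' | sort > /tmp/fs_names
diff /tmp/names_from_nsd /tmp/fs_names && echo "naming convention OK"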

Drive update required for product ID ST900MM0006 [Added 2017-11-22]

Problem: Before starting the apply phases of the fixpack, it is necessary to apply an update to the V7000 drives. These steps can be applied while the system is online. See the linked V7000 technote for more information.

Drives with product ID ST900MM0006 need to be updated to firmware level B56S before the applydrivesoftware command is run.

Data Integrity Issue when Drive Detects Unreadable Data
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1005289

Resolution: 1. In an ssh session, log in as the root user on the management host.

2. Determine the IP addresses of all of the V7000 enclosures in the environment.

The SAN_FRAME entries in the xcluster.cfg file are V7000 enclosures.

$ grep 'SAN_FRAME[0-9][0-9]*_IP' /pschome/config/xcluster.cfg
SAN_FRAME1_IP = 172.23.1.181
SAN_FRAME2_IP = 172.23.1.182
SAN_FRAME3_IP = 172.23.1.183
SAN_FRAME4_IP = 172.23.1.184
SAN_FRAME5_IP = 172.23.1.185
SAN_FRAME6_IP = 172.23.1.186
SAN_FRAME7_IP = 172.23.1.187

Use the following command to query the console for the storage enclosures.

$ appl_ls_hw -r storage -A M_IP_address,Description
"172.23.1.181","IBM Storwize V7000 Storage"
"172.23.1.182","IBM Storwize V7000 Storage"
"172.23.1.183","IBM Storwize V7000 Storage"
"172.23.1.184","IBM Storwize V7000 Storage"
"172.23.1.185","IBM Storwize V7000 Storage"
"172.23.1.186","IBM Storwize V7000 Storage"
"172.23.1.187","IBM Storwize V7000 Storage"

In the above examples there are several V7000 storage enclosures, and the IP addresses are 172.23.1.181 to 172.23.1.187.

3. Determine if your system has the impacted drive. The following command reports the number of drives of the 900 GB type ST900MM0006 on each enclosure (in the loop, $c is the IP address field read from xcluster.cfg):

$ grep 'SAN_FRAME[0-9]*[0-9]_IP' /pschome/config/xcluster.cfg | while read a b c d;do echo "*** $c ***";ssh -n superuser@$c 'lsdrive -nohdr | while read id rest;do lsdrive $id;done' | grep -c "product_id ST900MM0006";done

4. The PureData System for Operational Analytics V1.0 FP5 image includes the necessary files to perform the drive update. These files were unpacked as part of the fixpack registration. Determine the location of the fixpack on the management host.

$ appl_ls_cat
NAME VERSION STATUS DESCRIPTION
bwr0 4.0.4.0 Committed Updates for IBM_PureData_System_for_Operational_Analytics
bwr1 4.0.5.0 Committed Updates for IBM_PureData_System_for_Operational_Analytics_DB2105

In the above output the fixpack files are part of the ID 'bwr1'. This means the files were unpacked on the management host in /BCU_share/bwr1.

5. Determine the fix path by substituting the identifier determined in step 4 into the path /BCU_share/<id>/firmware/storage/2076/image/imports/drives. From the above example, the ID was 'bwr1', so the path is /BCU_share/bwr1/firmware/storage/2076/image/imports/drives.

6. Verify that the fix file exists, and check the cksum of the file.

$ ls -la /BCU_share/bwr1/firmware/storage/2076/image/imports/drives
total 162728
drwxr-xr-x 2 26976 19768 256 Jan 18 08:53 .
drwxr-xr-x 5 26976 19768 256 Jan 18 08:53 ..
-rw-r--r-- 1 26976 19768 83313381 Jan 18 08:53 IBM2076_DRIVE_20160923

$ cksum /BCU_share/bwr1/firmware/storage/2076/image/imports/drives/IBM2076_DRIVE_20160923
3281318949 83313381 /BCU_share/bwr1/firmware/storage/2076/image/imports/drives/IBM2076_DRIVE_20160923

7. Perform the following for each IP address identified in step 2. The example below uses 172.23.1.183; all V7000s can be updated concurrently.

a. Copy the image to the Storwize location /home/admin/upgrade:

scp /BCU_share/bwr1/firmware/storage/2076/image/imports/drives/IBM2076_DRIVE_20160923 admin@172.23.1.183:/home/admin/upgrade
IBM2076_DRIVE_20160923 100% 79MB 39.7MB/s 00:02

b. Update the drive using the command below:
ssh admin@172.23.1.183 "applydrivesoftware -file IBM2076_DRIVE_20160923 -all"

8. Monitor the status of the drive upgrade by using the lsdriveupgradeprogress command. The following command reports on the progress of all of the V7000s. Repeat it until there is no longer any output, which indicates that the updates have finished.

$ grep 'SAN_FRAME[0-9]*[0-9]_IP' /pschome/config/xcluster.cfg | while read a b c d;do echo "*** $c ***";ssh -n superuser@$c lsdriveupgradeprogress;done
*** 172.23.1.181 ***
*** 172.23.1.182 ***
*** 172.23.1.183 ***
*** 172.23.1.184 ***
*** 172.23.1.185 ***
*** 172.23.1.186 ***
*** 172.23.1.187 ***

HA Tools Version 2.0.5.0 hareset fails with "syntax error at line 854: `else' unexpected" error. [Added 2018-01-23]

Problem: When attempting to back up or restore the core TSA domains, the hareset command fails with an error similar to the following:

/usr/IBM/analytics/ha_tools/hareset: syntax error at line 854: `else' unexpected

This is due to an errant edit that was incorporated into the hatools in the March fixpacks (V1.0.0.5/V1.1.0.1) as part of HA Tools version 2.0.5.0.

Resolution: To fix in the field:

Log in to the management host as root and back up the file:

cp /usr/IBM/analytics/ha_tools/hareset /usr/IBM/analytics/ha_tools/hareset.bak

Using the vi editor, modify the file /usr/IBM/analytics/ha_tools/hareset.

Find line 850 in this file.

Modify 'if' to say 'fi'.

$ diff hareset.bak hareset
850c850
< if
---
> fi

Copy this new hareset file to the rest of the hosts.
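
A minimal sketch of the copy, assuming the host list is available in $ALL (as used elsewhere in this document) in comma- or space-separated form:

for h in $(echo $ALL | tr ',' ' '); do
  scp -p /usr/IBM/analytics/ha_tools/hareset root@$h:/usr/IBM/analytics/ha_tools/hareset   # push the corrected script
done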

On FP3->FP5 the TSA upgrade does not include the appropriate TSA license. [Added 2018-09-05]

This issue only affects customers who apply PDOA V1.0.0.5 (FP5) to V1.0.0.3 (FP3).

There have been two symptoms that have appeared in the field.

The first symptom occurs after trying to run a command to change rsct / TSA policies.

(mkrsrc-api) 2621-309 Command not allowed as daemon does not have a valid license.
mkequ: 2622-009 An unexpected RMC error occurred.The RMC return code was 1.

The second symptom can occur when trying to update TSA when there is no license. The following error can show up when running installSAM:

prereqSAM: All prerequisites for the ITSAMP installation are met on operating system: AIX 7100-05
installSAM: Cannot upgrade because no valid license was found.
installSAM: No installation was performed.
installSAM: For details, refer to the 'Error:' entries in the log file: /tmp/installSAM.2.log

1. Verify that the license is not applied by running the samlicm -s command as root on the management host (see the example in step 7 below).

2. If you plan to update to PDOA V1.0.0.6, that release will include instructions on how to remedy this issue. When PDOA V1.0.0.6 is available, download the fixpack from Fix Central, follow the instructions to unpack it, and then see the appendix that describes how to apply the license as part of the TSA update. If you do not plan to apply FP6, contact IBM Support to obtain the sam41.lic file and proceed to step 3.

3. Create the directory /stage/FP3_FP5/TSA.

mkdir -p /stage/FP3_FP5/TSA

4. Copy the sam41.lic file to the /stage/FP3_FP5/TSA directory.

5. Verify that /stage is mounted on all hosts in the domain.

6. Run the following command to apply the license to all domains. This does not require a restart.

dsh -n $ALL "samlicm -i /stage/FP3_FP5/TSA/sam41.lic"

7. Verify the license was applied successfully. The output should be similar to the output below once the TSA copies are licensed.

$ dsh -n $ALL "samlicm -s" | dshbak -c
HOSTS ————————————————————————-
host01, host02, host03, host04
——————————————————————————-
Product: IBM Tivoli System Automation for Multiplatforms 4.1.0.0
Creation date: Fri Aug 16 00:00:01 MST 2013
Expiration date: Thu Dec 31 00:00:01 MST 2037

Multiple DB2 Copies installed on the core hosts can confuse the fixpack. [Added 2018-10-03]

The PDOA appliance is designed as follows:

1 DB2 9.7 copy on the management host to support IBM Systems Director.

1 DB2 10.1 or 10.5 copy on the management and management standby hosts supporting the Warehouse Tools and DPM.

1 DB2 10.1, 10.5, or 11.1 copy on all core hosts supporting the core database.

This assumption is built into the PDOA Console and can impact the following:

--> compliance checks comparing what is on the system to the validated stack

--> fixpack application (preview, prepare, apply, commit phases).

The most likely scenario is that a customer who is very familiar with DB2 installs additional copies as part of a fixpack or special-build installation. This is supported by DB2, but if the previous copy is left on the system, it can cause various issues with the console, with the most severe issues occurring during the fixpack application.

This issue will minimally impact customers on V1.0.0.5 or V1.1.0.1, as the non-cumulative V1.0.0.6 (FP6) / V1.1.0.2 (FP2) releases have changed significantly and no longer have this restriction, and the compliance check for DB2 in the platform layer is not a critical function.

Remove any extra DB2 copies from the environment on all hosts before running the fixpack preview. This will prevent fixpack failures due to multiple DB2 copies.

If a problem is encountered during the appliance fixpack related to multiple db2 copies it will be necessary to seek guidance from IBM Support as the next steps will depend on the failure as well as when in the process of applying the fixpack.


Hi folks,

I've run an upgrade of Confluence from 5.5.2 to 6.2. After this upgrade, Confluence attempts to restart, but the log shows that Tomcat is failing to start due to a severe error, below. From the internet searches I've done up to now, it appears the message is referring to a <filter> tag in web.xml. It also suggests that more info can be found by looking at the appropriate log file. I don't know what log file this message is referring to, and I have looked at all the logs in the Atlassian\Confluence\logs folder.

So, I’m thinking the message is telling me that there might be a way to get a more verbose message that will at least tell me what <filter> is the problem, and I can go from there. It’s either that, or back out of the upgrade and try again, but I get a feeling I’m going to have the same web.xml problem.

Can anyone suggest how to get more info about this error?

Your help is very much appreciated.

Chris

08-Jun-2017 09:29:33.657 INFO [localhost-startStop-2] org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping.register Mapped "{[/reload],methods=[PUT]}" onto public org.springframework.http.ResponseEntity com.atlassian.synchrony.proxy.web.SynchronyProxyRestController.reloadConfiguration(com.atlassian.synchrony.proxy.web.SynchronyProxyConfigPayload)

08-Jun-2017 09:29:33.657 INFO [localhost-startStop-2] org.springframework.web.servlet.handler.SimpleUrlHandlerMapping.registerHandler Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.DefaultServletHttpRequestHandler]

08-Jun-2017 09:29:33.720 INFO [localhost-startStop-2] org.springframework.context.support.DefaultLifecycleProcessor.start Starting beans in phase 2147483647

08-Jun-2017 09:29:33.767 INFO [localhost-startStop-2] org.springframework.web.servlet.DispatcherServlet.initServletBean FrameworkServlet 'dispatcher': initialization completed in 2079 ms

08-Jun-2017 09:33:12.539 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal One or more Filters failed to start. Full details will be found in the appropriate container log file

08-Jun-2017 09:33:12.539 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors

08-Jun-2017 09:33:31.320 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesJdbc The web application [ROOT] registered the JDBC driver [org.h2.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.

08-Jun-2017 09:33:31.320 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesJdbc The web application [ROOT] registered the JDBC driver [net.sourceforge.jtds.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.

08-Jun-2017 09:33:31.320 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesJdbc The web application [ROOT] registered the JDBC driver [com.github.gquintana.metrics.sql.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.

08-Jun-2017 09:33:31.320 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesJdbc The web application [ROOT] registered the JDBC driver [org.postgresql.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.

Your management card may send you the following alerts:

"System: Warmstart" (or, in AOS v5.1.5 or higher, "System: Network Interface restarted") and "System: Coldstart" (or, in AOS v5.1.5 or higher, "System: Network Interface Coldstarted").

These alerts are sent when the Network Management Card restarts. The alerts don’t necessarily indicate a problem, and, when they do, they only affect the Network Management Card’s interface. Your UPS load is unaffected.

A "System: Coldstart" alert means that the Network Management Card (NMC) has just been powered on; this may happen if the device powering the Network Management Card suffers an interruption of power.

A "System: Warmstart" alert means that the Network Management Card (NMC) has restarted without losing power. This may happen for multiple reasons:

  • The default gateway is wrong, or the network traffic is too heavy and the gateway cannot be reached.
  • After a new AOS or Application firmware upgrade has been uploaded to the NMC.
  • Modification of some NMC settings.
  • The Reset button on the front panel of the NMC is pressed.
  • Web Interface Reboot request
  • Network settings have changed – At least one of the TCP/IP settings changed.
  • A request to restart the current SNMP agent service was received.
  • An internal request to load and execute a new SNMP agent service was received.
  • A request to clear the NMC’s network settings and restart the SNMP agent service was received.
  • Smart-UPS Output Voltage Change
  • Remote Monitoring Service (RMS) communication has been lost (NMC2 only)
  • An internal firmware error was detected by the NMC and to clear the error, the NMC firmware explicitly reboots itself as a failsafe.
  • An undetected firmware error occurred and the hardware watchdog reboots the NMC to clear the error.

What you can do:

You should download all available event logs for your product: event.txt, data.txt, and config.ini for NMC1 and NMC2, as well as debug.txt and dump.txt for NMC2 only.

  • Review the event.txt file to see if any of the causes listed above could be why your Network Management Card has restarted or coldstarted.
  • Is this affecting more than one Network Management Card in your environment? This may point to a network traffic issue, causing the Management Card to reboot due to the watchdog mechanism outlined above.
  • Note the frequency of the events in question. Can you pinpoint it to a certain time or a certain set of events before and after? If the restarts always occur at the same intervals, this may relate to a network traffic issue (see the sketch after this list).
  • Depending on what you find, try rebooting your card’s interface or resetting the card to defaults (after backing up your configuration and obtaining the aforementioned log files). See if the issue persists.
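A quick way to check interval regularity is to pull the restart events out of event.txt and print the gap between consecutive occurrences. A rough sketch, assuming the exported log uses one event per line with tab-separated date, time, and event text (formats vary by firmware version, so verify against your own export):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;

// Extracts restart-related events from an exported NMC event.txt and
// prints the time between consecutive occurrences.
public class WarmstartIntervals {

    // Assumed layout: MM/dd/yyyy<TAB>HH:mm:ss<TAB>event text (verify yours).
    private static final DateTimeFormatter TS =
            DateTimeFormatter.ofPattern("MM/dd/yyyy HH:mm:ss");

    public static void main(String[] args) throws IOException {
        List<LocalDateTime> restarts = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            if (line.contains("Warmstart") || line.contains("Coldstart")
                    || line.contains("Network Interface restarted")) {
                String[] f = line.split("\t");
                if (f.length >= 3) {
                    restarts.add(LocalDateTime.parse(f[0] + " " + f[1], TS));
                }
            }
        }
        // Evenly spaced gaps suggest a periodic cause such as a traffic burst.
        for (int i = 1; i < restarts.size(); i++) {
            Duration gap = Duration.between(restarts.get(i - 1), restarts.get(i));
            System.out.printf("%s -> %s : %d min%n",
                    restarts.get(i - 1), restarts.get(i), gap.toMinutes());
        }
    }
}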

The following Network Management Cards may generate these alerts:

  1. Web/SNMP Card – AP9606

Which is embedded in, among others: APC Environmental Monitoring Unit 1 (AP9312TH)

  2. Network Management Card 1 (NMC1) – AP9617, AP9618, AP9619

Which are embedded in, among others: Metered/Switched Rack PDUs (APC AP78XX, AP79XX), Rack Automatic Transfer Switches (APC AP77XX), Environmental Monitoring Units (APC AP9320, AP9340, NetBotz 200)

  3. Network Management Card 2 (NMC2) – AP9630/AP9630CH, AP9631/AP9631CH, AP9635/AP9635CH

Which are embedded in, among others: APC 2G Metered/Switched Rack PDUs (AP84XX, AP86XX, AP88XX, AP89XX), and some audio/video network management enabled products.

Chapter 2

System Messages and Recovery Procedures

(Excerpted from the Cisco NX-OS System Messages Reference, SVC Messages.)

Error Message: SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_PL_ER_FC_ADAP_FAIL ) Logical Fibre Channel port reported failed [chars]

Explanation: No active/functioning logical FC port was detected by the software.

Recommended Action: Reload the node. If the problem persists, replace the Caching Services Module.

Error Message: SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_PL_ER_FC_ADAP_QUANTITY ) Logical Fibre Channel port missing [chars]

Explanation: No active/functioning FC port was detected by the software.

Recommended Action: Reload the node. If the problem persists, replace the Caching Services Module.

Error Message: SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_PL_ER_FC_PORT_QUANTITY ) Logical Fibre Channel port reported not operational [chars]

Explanation: No active/functioning logical FC port was detected by the software.

Recommended Action: Reload the node. If the problem persists, replace the Caching Services Module.

Error Message: SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_PL_ER_NODE_WARMSTART ) Node warmstarted due to software error [chars]

Explanation: The error that is logged in the cluster error log indicates a software problem in the cluster.

Recommended Action: 1. Ensure that the software is at the latest level on the cluster. 2. Run a configuration dump and a logged data dump. Save the dump data. 3. Contact the Cisco Technical Assistance Center (TAC) through the Cisco Support web site http://www.cisco.com/tac. 4. Mark the error you have just repaired as fixed.

Error Message: SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_PL_ER_POWER_DOMAIN ) Power domain error [chars]

Explanation: The two nodes in an IO group are on the same Caching Services Module.

Recommended Action: Determine what the configuration should be. Remove one of the nodes and add a node to the IO group which is not on the same Caching Services Module.

Error Message: SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_VG_ER_MDISK_GROUP_OFFLINE ) A Managed Disk group is offline [chars]

Explanation: An Mdisk group is offline.

Recommended Action: 1. Repair the enclosure or disk controller. 2. Start a cluster discovery operation. 3. Check managed disk status. If all managed disks show a status of online, mark the error you have just repaired as fixed.
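When these SVC messages are forwarded to a central syslog, the SS_EID_* token is the stable key for routing an alert to the matching recovery procedure above. A hedged sketch of extracting the module/node numbers and the event code (the regex follows the message format shown in this section; adjust it to your collector's actual framing):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Parses the (module/node) pair and SS_EID_* event code out of an
// SVC-3-NODE_ERR_MSG line so alerts can be routed by event type.
public class SvcMessageParser {

    private static final Pattern SVC_MSG = Pattern.compile(
            "SVC-\\d-NODE_ERR_MSG: \\(SVC (\\d+)/(\\d+) (SS_EID_[A-Z_]+) \\)");

    public static void main(String[] args) {
        String line = "SVC-3-NODE_ERR_MSG: (SVC 1/2 SS_EID_PL_ER_NODE_WARMSTART )";
        Matcher m = SVC_MSG.matcher(line);
        if (m.find()) {
            System.out.printf("module=%s node=%s event=%s%n",
                    m.group(1), m.group(2), m.group(3));
        }
    }
}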

  • #1

Valve edited the maps = broke (once again) community servers.
The signatures changed, and as a result servers went into a constant crash loop (SM 1.10).

[MaZa] [HotGuard] - Failed Offset 1
[SM] Unable to load extension "hotguard.ext":
[SDKTOOLS] Sigscan for WriteBaselines failed
[SDKTOOLS] Failed to find WriteBaselines signature - stringtable error workaround disabled.
[AntiDLL] Sigscan for Signature failed
[SM] Unable to load extension "AntiDLL.ext": Failed to create interceptor
[SM] Failed to load plugin "hotguard.smx": Unable to load plugin (bad header).
[SM] Unable to load plugin "AntiDLL.smx": Required extension "AntiDLL" file("AntiDLL.ext") not running
[SM] Exception reported: Failed to get engine poiters. Data: 0, 0, F0D92D44, F0E311CC.
[SM] Blaming: block_print_garbage_messages.smx
[SM] Call stack trace:
[SM] [0] SetFailState
[SM] [1] Line 48, d:\SourcePawn\1.10\block_print_garbage_messages.sp::OnPluginStart
[SM] Unable to load plugin "block_print_garbage_messages.smx": Error detected in plugin startup (see error logs)
[SM] Unable to load plugin "CrashPlayer_AntiDLL.smx": Required extension "AntiDLL" file("AntiDLL.ext") not running
[SM] Exception reported: Can't get offset for "CBaseServer::RejectConnection".
[SM] Blaming: server_redirect.smx
[SM] Call stack trace:
[SM] [0] SetFailState
[SM] [1] Line 9, server_redirect/redirect.sp::SetupSDKCalls
[SM] [2] Line 198, C:\Users\art\Desktop\addonsё\sourcemod\scripting\server_redirect.sp::OnPluginStart
[SM] Unable to load plugin "server_redirect.smx": Error detected in plugin startup (see error logs)
[SM] Exception reported: Failed to load CBaseServer::IsExclusiveToLobbyConnections signature from gamedata
[SM] Blaming: nolobbyreservation.smx
[SM] Call stack trace:
[SM] [0] SetFailState
[SM] [1] Line 87, nolobbyreservation.sp::OnPluginStart
[SM] Unable to load plugin "nolobbyreservation.smx": Error detected in plugin startup (see error logs)

The signatures that broke:
CBaseServer::RejectConnection
CBaseServer::IsExclusiveToLobbyConnections

upd: If you still want to use SM 1.10 on Linux, download the SM 1.11 build 6928 archive and copy all the files from its addons/sourcemod/gamedata/ folder over yours, replacing them (don't touch the files from the other folders).
For fixes to the other plugins, look for files with corrected signatures in the corresponding threads.
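For context: the gamedata "signatures" in these errors are masked byte patterns that SourceMod scans for inside the server binary to locate non-exported functions such as CBaseServer::RejectConnection. When Valve ships a rebuilt binary, the compiled bytes shift and the scan stops matching, which is exactly what the "Sigscan ... failed" lines mean. A toy illustration of how such a masked pattern scan works (Java here purely for illustration; SourceMod's scanner runs over the loaded binary image):

// Toy masked signature scan: 'x' in the mask means the byte must match
// exactly, '?' is a wildcard. Returns the offset of the first match, or -1.
public class SigScan {

    static int find(byte[] image, byte[] pattern, String mask) {
        outer:
        for (int i = 0; i + pattern.length <= image.length; i++) {
            for (int j = 0; j < pattern.length; j++) {
                if (mask.charAt(j) == 'x' && image[i + j] != pattern[j]) {
                    continue outer; // mismatch, slide the window forward
                }
            }
            return i; // every masked byte matched
        }
        return -1; // signature "broke": the bytes changed after an update
    }

    public static void main(String[] args) {
        byte[] image = {0x55, (byte) 0x8B, (byte) 0xEC, 0x12, 0x34};
        byte[] sig   = {0x55, (byte) 0x8B, (byte) 0xEC, 0x00};
        System.out.println(find(image, sig, "xxx?")); // prints 0
    }
}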

Last edited: Saturday at 10:30

  • #661

The server runs for 10-15 minutes and crashes. I disabled the plugins folder and ran it without plugins, and it doesn't crash. Now I'm going through the plugins one by one to find which of them is crashing it.

  • #662

@j1ton, recompile all the scripts for the update; everything works for me, only the VIP syringes don't work, waiting for an update.

Last edited: Saturday at 14:43

  • #663

Does anyone else's server crash on map change, or is it just me? Everything is updated... not a single error.

Posts automatically merged: Saturday at 14:41

@j1ton, recompile all the scripts for the update; everything works for me, only the VIP syringes don't work, waiting for an update.
upd: the server goes down after about 10 minutes online, an error reading errors_log, I don't know what the problem is.

Update your gamedata and that's it.

Last edited: Saturday at 14:41

  • #664

Does anyone else's server crash on map change, or is it just me? Everything is updated... not a single error.

Posts automatically merged: Saturday at 14:41

Update your gamedata and that's it.

Updating the gamedata didn't help me; the problem is apparently in one of my plugins, so I'm sitting here looking for it.

  • #665

Does anyone else's server crash on map change, or is it just me? Everything is updated... not a single error.

Posts automatically merged: Saturday at 14:41

Update your gamedata and that's it.

Same here. When I compile, it reports syntax errors.

  • #666

I'm on SourceMod 1.11 and the server works fine; the only problems are with the shop_skins.smx plugin (skins can't be turned off) and the res.smx plugin (music doesn't play).
I'm attaching my gamedata and extensions (delete whatever you don't need), give them a try.
"Don't forget to set 'DisableAutoUpdate' to 'yes' in /addons/sourcemod/configs/core.cfg"

  • gamedata.zip

    135.7 KB

    · Views: 24

  • extensions.zip

    22.1 MB

    · Views: 24

  • #667

L 02/04/2023 - 15:11:04: Info (map "de_mirage") (file "/home/server26921/game/csgo/addons/sourcemod/logs/errors_20230204.log")
L 02/04/2023 - 15:11:04: [SM] Exception reported: Failed to create native "BaseComm_IsClientGagged", name is probably already in use
L 02/04/2023 - 15:11:04: [SM] Blaming: basecomm.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM]   [0] CreateNative
L 02/04/2023 - 15:11:04: [SM]   [1] Line 71, /home/builds/sourcemod/debian9-1.11/build/plugins/basecomm.sp::AskPluginLoad2
L 02/04/2023 - 15:11:04: [SM] Failed to load plugin "basecomm.smx": unexpected error 23 in AskPluginLoad callback.
L 02/04/2023 - 15:11:04: [AntiDLL] Sigscan for Signature failed
L 02/04/2023 - 15:11:04: [SM] Unable to load extension "AntiDLL.ext": Failed to create interceptor
L 02/04/2023 - 15:11:04: [Discord/DropsSummoner_discord.smx] At address g_pDropForAllPlayersPatch received not what we expected, drop for all players will be unavailable.
L 02/04/2023 - 15:11:04: [SM] Exception reported: [System Panel] [Users Chat DataBase] Failed to connection SP_users in databased.cfg
L 02/04/2023 - 15:11:04: [SM] Blaming: users_chat.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM]   [0] SetFailState
L 02/04/2023 - 15:11:04: [SM]   [1] Line 39, c:\Users\autht\Desktop\plugins-syspanel\addons\sourcemod\scripting\users_chat.sp::Connection_BD
L 02/04/2023 - 15:11:04: [SM]   [2] Line 31, c:\Users\autht\Desktop\plugins-syspanel\addons\sourcemod\scripting\users_chat.sp::OnPluginStart
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "users_chat.smx": Error detected in plugin startup (see error logs)
L 02/04/2023 - 15:11:04: [SM] Exception reported: [MA] Database failure: Could not find Database conf "materialadmin"
L 02/04/2023 - 15:11:04: [SM] Blaming: admin/materialadmin.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM]   [0] SetFailState
L 02/04/2023 - 15:11:04: [SM]   [1] Line 44, materialadmin/database.sp::ConnectBd
L 02/04/2023 - 15:11:04: [SM]   [2] Line 16, materialadmin/database.sp::MAConnectDB
L 02/04/2023 - 15:11:04: [SM]   [3] Line 286, materialadmin.sp::OnPluginStart
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "admin/materialadmin.smx": Error detected in plugin startup (see error logs)
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "admin/ma_mutenotification.smx": Could not find required plugin "materialadmin"
L 02/04/2023 - 15:11:04: [SM] Exception reported: [Clans] No database configuration in databases.cfg!
L 02/04/2023 - 15:11:04: [SM] Blaming: clans/clans.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM]   [0] SetFailState
L 02/04/2023 - 15:11:04: [SM]   [1] Line 11, clans/database.sp::ConnectToDatabase
L 02/04/2023 - 15:11:04: [SM]   [2] Line 240, A:\ssmod\scripting\clans.sp::OnPluginStart
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "clans/clans.smx": Error detected in plugin startup (see error logs)
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "clans/clan_createall.smx": Native "Clans_GetClientTimeToCreateClan" was not found
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "clans/clans_coinsbykill.smx": Native "Clans_AreClansLoaded" was not found
L 02/04/2023 - 15:11:04: [SM] Exception reported: [CustomPlayerArms] - Не удалось получить адрес s_playerViewmodelArmConfigs
L 02/04/2023 - 15:11:04: [SM] Blaming: CustomPlayerArms.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM]   [0] SetFailState
L 02/04/2023 - 15:11:04: [SM]   [1] Line 38, C:\Users\anakaine\Desktop\xxx\CustomPlayerArms.sp::OnPluginStart
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "CustomPlayerArms.smx": Error detected in plugin startup (see error logs)
L 02/04/2023 - 15:11:04: [SM] Exception reported: [System Panel] [Users Visits DataBase] Failed to connection SP_users in databased.cfg
L 02/04/2023 - 15:11:04: [SM] Blaming: users_visits.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM]   [0] SetFailState
L 02/04/2023 - 15:11:04: [SM]   [1] Line 28, c:\Users\autht\Desktop\plugins-syspanel\addons\sourcemod\scripting\users_visits.sp::Connection_BD
L 02/04/2023 - 15:11:04: [SM]   [2] Line 23, c:\Users\autht\Desktop\plugins-syspanel\addons\sourcemod\scripting\users_visits.sp::OnPluginStart
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "users_visits.smx": Error detected in plugin startup (see error logs)
L 02/04/2023 - 15:11:05: [SM] Unable to load plugin "vip/vip_clancreate.smx": Native "Clans_SetCreatePerm" was not found
L 02/04/2023 - 15:11:05: [SM] Unable to load plugin "Admins.smx": Could not find required plugin "materialadmin"
L 02/04/2023 - 15:11:05: [SM] Exception reported: [System Panel] [Users DataBase] Failed to connection SP_users in databased.cfg
L 02/04/2023 - 15:11:05: [SM] Blaming: users.smx
L 02/04/2023 - 15:11:05: [SM] Call stack trace:
L 02/04/2023 - 15:11:05: [SM]   [0] SetFailState
L 02/04/2023 - 15:11:05: [SM]   [1] Line 44, c:\Users\autht\Desktop\plugins-syspanel\addons\sourcemod\scripting\users.sp::Connection_BD
L 02/04/2023 - 15:11:05: [SM]   [2] Line 21, c:\Users\autht\Desktop\plugins-syspanel\addons\sourcemod\scripting\users.sp::OnPluginStart
L 02/04/2023 - 15:11:05: [SM] Unable to load plugin "users.smx": Error detected in plugin startup (see error logs)

Are there fixes for these plugins?
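Note that most of the failures in that log are not broken signatures at all: users_chat.smx, users_visits.smx, users.smx, materialadmin.smx, and clans.smx fail because the database sections they look up ("SP_users", "materialadmin", and a clans entry) are missing from addons/sourcemod/configs/databases.cfg. A sketch of the kind of entries these plugins expect; the driver, host, database names, and credentials below are placeholders to replace with your own:

"Databases"
{
    // Section name must match what the plugin asks for ("SP_users" here).
    "SP_users"
    {
        "driver"        "mysql"
        "host"          "127.0.0.1"
        "database"      "syspanel"
        "user"          "sm_user"
        "pass"          "CHANGE_ME"
    }

    "materialadmin"
    {
        "driver"        "mysql"
        "host"          "127.0.0.1"
        "database"      "materialadmin"
        "user"          "sm_user"
        "pass"          "CHANGE_ME"
    }
}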

  • #668

I'm on SourceMod 1.11 and the server works fine; the only problems are with the shop_skins.smx plugin (skins can't be turned off) and the res.smx plugin (music doesn't play).
I'm attaching my gamedata and extensions (delete whatever you don't need), give them a try.
"Don't forget to set 'DisableAutoUpdate' to 'yes' in /addons/sourcemod/configs/core.cfg"

I start the server and it comes up, but the console prints Could not establish connection to Steam servers.

  • #669

I'm also seeing crashes, but so far I can't figure out which plugin is causing them...

  • #670

L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVServer::BroadcastLocalChat failed
L 02/04/2023 - 15:29:43: [STVM] CHLTVServer::BroadcastLocalChat detour could not be initialized.
L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVDemoRecorder::StartRecording failed
L 02/04/2023 - 15:29:43: [STVM] CHLTVDemoRecorder::StartRecording detour could not be initialized.
L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVDemoRecorder::StopRecording failed
L 02/04/2023 - 15:29:43: [STVM] CHLTVDemoRecorder::StopRecording detour could not be initialized.
L 02/04/2023 - 15:29:43: [STVM] Failed to get CHLTVServer::m_DemoRecorder offset.

gamedata sourcetvmanager.

  • #671

I'm also seeing crashes, but so far I can't figure out which plugin is causing them...

Try disabling everything related to skins (shop, ws, vip).

For example, after the official fix my server wouldn't start with fenya's ws.

And one error remains:

[CSTRIKE] [CStrike] Failed to locate NET_SendPacket signature.

I decided to try moving to 1.12, but nothing changed, the error is still there.
Can you tell me what this is and how to fix it? I'd be very grateful.

  • #672

L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVServer::BroadcastLocalChat failed
L 02/04/2023 - 15:29:43: [STVM] CHLTVServer::BroadcastLocalChat detour could not be initialized.
L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVDemoRecorder::StartRecording failed
L 02/04/2023 - 15:29:43: [STVM] CHLTVDemoRecorder::StartRecording detour could not be initialized.
L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVDemoRecorder::StopRecording failed
L 02/04/2023 - 15:29:43: [STVM] CHLTVDemoRecorder::StopRecording detour could not be initialized.
L 02/04/2023 - 15:29:43: [STVM] Failed to get CHLTVServer::m_DemoRecorder offset.

gamedata sourcetvmanager.

Solution.

  • sourcetvmanager.games.txt

    12.5 KB

    · Views: 14

  • #673

Does anyone have working gamedata and extensions for 1.11? I've already tried everything and nothing will start.

  • #674

Does anyone have working gamedata and extensions for 1.11? I've already tried everything and nothing will start.

Scroll through the thread, everything has been posted here.
Everything starts and works for me, but the problem is that my server is Mirage-only, and for some reason it changes the map to a random one and the server crashes.

  • #675

Scroll through the thread, everything has been posted here.
Everything starts and works for me, but the problem is that my server is Mirage-only, and for some reason it changes the map to a random one and the server crashes.

The ones that were posted don't work.

  • #676

Does anyone have Fenix's .so for 1.11?

  • #677

Could someone please post a working basecomm.smx?

  • #678

Does anyone have Fenix's .so for 1.11?

There's no .so for 1.11, if I'm not mistaken.

  • #680

Has anyone else had AntiDLL not working too?

Unable to load plugin "AntiDLL.smx": Required extension "AntiDLL" file("AntiDLL.ext") not running
