Known issues for the IBM PureData System for Operational Analytics, Fix Pack V1.0.0.5
Abstract
This document contains the restrictions and known issues for the IBM PureData System for Operational Analytics, Fix Pack V1.0.0.5 (Fix Pack 5).
Content
Flash storage upgrade gets stalled during apply phase
Problem: The flash storage upgrade stalls during the apply phase, which results in an apply failure.
Sample output of the failure from the PL log:
[06 Mar 2017 07:45:40,711] apply: 172.23.1.182: waiting for upgrade to complete, iteration : update_status=
Verify the status on the failed node by running the lssoftwareupgradestatus command:
$ ssh admin@172.23.1.182
Resolution:
1. Abort the upgrade by using the applysoftware -abort command and wait until the status becomes inactive.
IBM_FlashSystem:ibisFlash_00:admin>applysoftware -abort
2. Verify whether there are any events for internal errors ("Node warmstarted due to an internal error"):
IBM_FlashSystem:ibisFlash_00:superuser>lseventlog
In this example, event 130 has an internal error on the node with error code 2030. The detailed listing of the event (event_id_text) shows "Node warmstarted due to an internal error". Clear the event on the node by using the cheventlog -fix command:
IBM_FlashSystem:ibisFlash_00:admin>cheventlog -fix 130
3. Once the update is aborted and the event is cleared, resume the upgrade.
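A minimal command sketch of the abort-and-resume sequence above, assuming the flash enclosure service IP 172.23.1.182 from the log excerpt; event ID 130 is only the example from this note and must be replaced with the ID that lseventlog reports:
$ ssh admin@172.23.1.182 "applysoftware -abort"        # abort the stalled upgrade
$ ssh admin@172.23.1.182 "lssoftwareupgradestatus"     # repeat until the status shows inactive
$ ssh superuser@172.23.1.182 "lseventlog"              # look for "Node warmstarted due to an internal error"
$ ssh admin@172.23.1.182 "cheventlog -fix 130"         # clear the example event 130 identified above
After these steps, resume the fix pack apply.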
Console does not allow resume after PFW update reboots management node
Problem: While the non-management apply is running, the CEC reboots during the PFW update. This brings down the management node and stops the fix pack update.
Resolution: To resume the update:
1. Verify that the console is started by using the mistatus command, run as root on the management host.
(0) root @ ibis01: 7.1.0.0: /
2. The command 'miupdate -resume' will not work. Instead, determine the 'bwr#' associated with the fix pack by using the appl_ls_cat command, run as root on the management host.
$ appl_ls_cat
3. Substitute the 'bwr#' found above (in this example, the 4.0.5.0 entry with id bwr1) and run the appl_install_sw command as root on the management host:
echo "appl_install_sw -l bwr1 -resume > /tmp/appl_install_sw_$(date +"%Y%m%d_%H%M%S").out 2>&1" | at now
This runs the fix pack application outside of the session (these can be long-running commands susceptible to terminal loss). Tail the /tmp/appl_install_sw_<timestamp>.out file to view the progress.
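A minimal sketch for monitoring the detached resume job, assuming the timestamped output file name produced by the at command above:
$ ls -t /tmp/appl_install_sw_*.out | head -1             # newest output file written by the at job
$ tail -f $(ls -t /tmp/appl_install_sw_*.out | head -1)  # follow the progress of the resume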
Apply output shows failed even though there is no failure or error in the log
Problem: The output excerpt from the mi command line or through the console GUI:
=====================================================
Log file:
Infrastructure
Infrastructure SAN switch firmware: SANFW apply started
1 of 1 task completed
Log file:
Log file:
Log file:
========================
Resolution: Verify that the update was completed by running the appl_ls_cat command.
In resume scenarios during the management update, the status should be M_Applied.
In resume scenarios during the non-management/core update, the status should be Applied.
If the status shows either Applied or M_Applied, proceed to the next step and ignore the failure message.
ISW update failed during the MGMT update
Problem: During the PDOA V1.0 FP5 / V1.1 FP1 management apply phase, the fix pack may encounter an error. The pflayer log file shows the corresponding error from the ISW installer.
Resolution: There is a known issue with the ISW installer returning a status of 256 to the caller of the install.bin command line even though the installation was a success. To verify:
1. Log in to the management node as root in an ssh session.
2. Look for one of the following directories: /usr/IBM/dwe/appserver_001/iswapp_10.5/logs
3. Look for a file called 'ISWinstall_summary_<timestamp>.log' with a recent date.
4. Run the following:
grep -i status logs/ISWinstall_summary_1701121953.log
This should return a large number of lines with 'Status: SUCCESSFUL'. If this is the case, it is safe to resume the fix pack because the update was successful.
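A minimal verification sketch, assuming the log path and timestamped file name from the example above (substitute the timestamp of your most recent summary log):
$ cd /usr/IBM/dwe/appserver_001/iswapp_10.5
$ grep -ic "Status: SUCCESSFUL" logs/ISWinstall_summary_1701121953.log              # count of successful steps
$ grep -i "Status:" logs/ISWinstall_summary_1701121953.log | grep -iv SUCCESSFUL    # should print nothing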
OPM update failed due to a DBI connect issue in wait_for_start.pl
Problem: During the management apply phase, the apply may fail with the following symptoms in the logs:
RC=1::Can't locate DBI.pm in @INC (@INC contains: /usr/opt/perl5/lib/5.10.1/aix-thread-multi /usr/opt/perl5/lib/5.10.1 /usr/opt/perl5/lib/site_perl/5.10.1/aix-thread-multi /usr/opt/perl5/lib/site_perl/5.10.1 /usr/opt/perl5/lib/site_perl .) at /BCU_share/bwr1/code/ISAS/Update/Common/OPM/scripts/wait_for_start.pl line 149.
DBI connect('OPMDB', ...) failed: [IBM][CLI Driver] SQL1031N The database directory cannot be found on the indicated file system. SQLSTATE=58031
In addition, hals shows that the DPM components are failed over to the standby management host.
Resolution: These messages indicate that, during the management apply phase, the DB2 Performance Monitor (DPM) component failed over to the management standby node. There are some known issues with DPM on startup that can lead to failures. If the above symptoms are seen, the next steps are to:
1. Use hals to determine whether the DPM resources are indeed failed over.
2. Use lssam on the management host to determine whether there are any failed states.
3. Use 'resetrsrc' on any DPM resources that are in a failed state.
4. Verify with lssam that the resources are no longer in a failed state.
5. Use 'hafailover DPM' to move the DPM resources to the management host.
6. Verify that the DPM resources successfully moved to the management host.
7. Resume the fix pack.
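A minimal command sketch of the recovery sequence described above, assuming it is run as root on the management host; the resource selection string passed to resetrsrc is a hypothetical example and must be replaced with the failed resource name that lssam reports:
$ hals                                                       # confirm the DPM resources are on the standby host
$ lssam | grep -i failed                                     # list any resources in a Failed state
$ resetrsrc -s 'Name == "db2_dpm_resource"' IBM.Application  # hypothetical resource name: reset each failed DPM resource
$ lssam | grep -i failed                                     # verify nothing remains failed
$ hafailover DPM                                             # move the DPM resources back to the management host
$ hals                                                       # verify the move, then resume the fix pack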
Storage update failed during apply phase because of drive update failure
Problem: Storage update fails during the apply phase because of drive update failures; one or more storage drives might be in the offline state.
Console output during failure (sample output of the failure from the PL log):
[08 Mar 2017 04:13:43,315] Extracted msg from NLS: apply: 172.23.1.186 ssh admin@172.23.1.186 LANG=en_US svctask applydrivesoftware -file IBM2076_DRIVE_20160923 -type firmware -drive 0:1:2:3:4:5:6:7:8:9:10:11:12:13:14:15:16:17:18:19:20:21:22:23:24:25:26:27:28:29:30:31:32:33:34:35:36:37:38:39:40:41:42:43:44:45:46:47 command failed.
By executing the lsdrive command, we can verify the drive statuses on the failed storage box. As an example:
$ ssh superuser@172.23.1.186 "lsdrive"
Here we see that two drives (drive 5 and drive 13) are in the offline state, which is the reason for the failure.
Resolution: Run lsdrive to see the list of failed drives, and then do the following:
1. Fix the drives.
2. Ensure that the statuses of the drives are all online.
3. Resume the fix pack update.
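A minimal check sketch, assuming the storage enclosure service IP 172.23.1.186 from the example above:
$ ssh superuser@172.23.1.186 "lsdrive" | grep -w offline    # list any drives still offline
$ ssh superuser@172.23.1.186 "lsdrive 5"                    # detailed view of one offline drive (example drive ID 5)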
CECs are rebooted and /BCU_share is unmounted after a power firmware update
Problem: The upgrade of the CEC that hosts the management and admin nodes completes, the CEC is rebooted, and the run is halted because of this reboot. When the CEC comes back online and the run is resumed, /BCU_share gets mounted on the management and admin nodes. Subsequently, the other CECs get upgraded and rebooted, but when the respective nodes come back up, /BCU_share does not get mounted again, and the upgrade proceeds to the point where it tries to access /BCU_share and fails. Symptoms: The /BCU_share NFS mount shared from the management host is not mounted on all hosts.
Resolution: After the failure is identified, simply resume. The resume code will verify that /BCU_share is mounted across the hosts.
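A minimal sketch for checking the mount manually before resuming, assuming the dsh node group $ALL that is used elsewhere in this note:
$ dsh -n $ALL "mount | grep -w /BCU_share" | dshbak -c     # every host should report the NFS mount from the management host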
The miinfo compliance command shows a compliance issue for some products
Problem: Running the miinfo compliance command ('miinfo -d -c') shows that some levels are not correct.
You may see the following under some servers:
IBM Systems Director Common Agent 6.3.3.1 Higher
Resolution:
1. IBM Systems Director Common Agent 6.3.3.1 Higher: The common agent should no longer be tracked by the compliance software. This is a defect in the compliance check program and will not impact the operation of the appliance.
2. IBM InfoSphere Warehouse 10.5.0.20151104_10.5.0.8..0 Lower
3. IBM InfoSphere Optim Query Workload Tuner NA: The version of the product cannot be determined or it is not installed.
FP5 cannot be directly applied to FP3 without some additional fixes and modifications
Problem: The IBM PureData System for Operational Analytics V1.0 FP5 package cannot be directly applied to FP3 environments. There are three distinct issues with FP5 on FP3 environments.
1. It is possible to register the fixpack, however after registration the console will no longer start. During the preview step, messages similar to the following will appear in the log.
2. It is possible to apply the fixpack via the command line, however the fixpack will fail validation because the firmware levels on the V7000s are too low for FP5 to update.
3. It is possible to apply the fixpack via the command line, however the fixpack will fail validation because the firmware levels on the SAN switches are too low for FP5 to update.
Resolution: See the document 'How to apply the IBM PureData System for Operational Analytics V1.0 FP5 on a FP3 environment?' for more information about the FP3 to FP5 scenario.
Failed paths after the fixpack apply stage
Problem: The AIX hosts may have failed paths to the external storage. Running the following command as root on the management host:
dsh -n $ALL "lspath | grep hdisk | grep -v Enabled | wc -l" | dshbak -c
will return output if there are failed paths to the storage.
Resolution: There are two remedy options.
1. Reboot the host with the failed paths. This will effectively bounce the port. This may require an outage.
2. Follow the instructions below to bounce only the port. All access to the storage is fully redundant with multiple paths, which is why the system can start even with failed paths. This method avoids an outage and effectively bounces the port. The following should be done one port at a time and should be performed either in an outage window or at a time when the system will have very low I/O activity.
a. For each host login, determine the ports with failed paths using the command
lspath | grep hdisk | grep -v Enabled | while read stat disk dev rest;do echo "$dev";done | sort | uniq
This command returns the unique set of ports connected to hdisk devices that are Missing or Failed. Example output:
b. For each port, create a script and update the export statement to match the number in the fscsi# id of the failed path. This script will remove all paths to that port, set the port to the defined state, and then rediscover the paths. This effectively bounces the port.
c. Change the id to match the fscsi number. Run each script to remove the paths and to put the device in the defined state, then use cfgmgr to reinitialize. This should create all of the new paths. Run these scripts one at a time and then verify that the path no longer appears in the command shown in step a.
export ID=<fscsi#>
lspath -p fscsi$ID | while read st hd fs;do echo $hd;done | sort | uniq | while read disk;do rmpath -d -l $disk -p fscsi$ID;done
d. After the commands run, verify that there are no more failed paths over the port and that the port has discovered the existing paths. Bouncing the port in this way preserves any settings stored in ODM for the fcs and fscsi devices.
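A minimal end-to-end sketch of the port bounce for one adapter, assuming the failed paths are on fscsi1 (substitute the fscsi number found in step a). The rmdev step that places the device in the Defined state is implied by the prose above but was not part of the original script excerpt, so treat it as an assumption and review it for your environment:
export ID=1                                      # fscsi number with the failed paths (assumption: fscsi1)
# remove every path that goes through this fscsi device
lspath -p fscsi$ID | while read st hd fs; do echo $hd; done | sort | uniq | while read disk; do rmpath -d -l $disk -p fscsi$ID; done
# put the fscsi device (and any remaining children) into the Defined state; ODM settings are preserved
rmdev -R -l fscsi$ID
# rediscover the paths
cfgmgr
# verify: this should no longer list any non-Enabled paths for the bounced port
lspath | grep hdisk | grep -v Enabled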
DB2 and/or ISW preview failures due to incorrect fixpack or incomplete DB2 10.5 upgrade.
Problem: I downloaded and registered the fixpack, however I'm receiving preview errors related to the DB2 and/or InfoSphere Warehouse (ISW) levels.
There are two different Fix Central downloads for IBM PureData System for Operational Analytics V1.0 fixpack 5:
- IBM PureData System for Operational Analytics Fix Pack 5 (for systems with DB2 Version 10.1)
- IBM PureData System for Operational Analytics Fix Pack 5 (for systems with DB2 Version 10.5)
There are a couple of scenarios where problems arise.
1. Customer has DB2 V10.1, and downloads and registers the fixpack with DB2 10.5.
Resolution:
1. Customer has DB2 V10.1, and downloads and registers the fixpack with DB2 10.5.
2. Customer has followed the instructions to uplift or upgrade DB2 to 10.5 by following the technote Upgrading an IBM PureData System for Operational Analytics Version 1.0 environment to DB2 10.5, and downloads the fixpack with DB2 10.1. Contact IBM Support for help to de-register the incorrect fixpack.
3. Customer has only partially followed the instructions to uplift or upgrade DB2 to 10.5 by following the technote Upgrading an IBM PureData System for Operational Analytics Version 1.0 environment to DB2 10.5, downloads the fixpack with DB2 10.5, but encounters preview errors related to the ISW level being at the incorrect version level. This scenario is most likely due to the technote not being fully followed. This can happen because of confusion about the relationship between InfoSphere Warehouse and DB2: most customers understand how to upgrade DB2, and it is easy to miss that it is important to update the InfoSphere Warehouse product as well. Our fixpack catalog does not at present support mixing DB2 10.5 and InfoSphere Warehouse 10.1 together. The customer will need to revisit the Upgrading an IBM PureData System for Operational Analytics Version 1.0 environment to DB2 10.5 technote to verify that all of the update steps were followed, that the InfoSphere Warehouse levels are at 10.5, and that the WebSphere Application Server levels are at 8.5.5.x as required by the technote. Once the levels are updated per the technote, the fixpack can be resumed and the preview should no longer fail.
The FixCentral download inadvertently includes XML files that are not part of the fixpack.
Problem: I downloaded all of the files included on Fix Central for the fixpack and there are extra XML files included. What are they for?
The XML files follow this pattern: *.fo.xml
Resolution: These files were inadvertently included in the fixpack packages and should either not be downloaded or should be deleted.
Storage update failed during apply phase because of an "update is already in progress" message. [ Added 2017-09-18 ]
Problem: During the fixpack apply phase the fixpack fails.
[16 Sep 2017 21:14:05,518] STORAGE:storage0:172.23.1.181:1:Storage firmware update failed.
----------------------------------------------------------------------------------------
ssh -n superuser@172.23.1.181 'lsupdate'
This is a documented issue when upgrading V7000 firmware from 7.3.x to 7.4.x, as indicated in the 7.4.0 release notes:
https://public.dhe.ibm.com/storage/san/sanvc/release_notes/740_releasenotes.html
Resolution: Using the PureData System for Operational Analytics Console, use the Service Level Access page to find the link to access the Management Interface for each of the V7000s.
Navigate to the Events page, which should show an alert. Select this alert and follow the fix procedures to initiate the second phase. Do this for each of the V7000s that has this issue. This step will take approximately 40 minutes per enclosure and can be run in parallel.
After the second phase is completed, you should see the system update completion message from lseventlog:
lseventlog -fixed yes
130 170916183340 cluster V7_00_1 message no 980507 Update completed
lsupdate on that host should show 'status' = success.
ssh -n superuser@172.23.1.181 'lsupdate'
Once this is completed for all V7000s, resume the apply phase.
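A minimal status-check sketch, assuming the V7000 service IPs used in the examples in this note (adjust the list to match your environment):
for ip in 172.23.1.181 172.23.1.183 172.23.1.186; do
    echo "*** $ip ***"
    ssh -n superuser@$ip 'lsupdate'        # look for status = success on every enclosure
done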
HMC fw update fails in getupgfiles step. [ Added 2017-10-07 ]
Problem: The fixpack fails with the following message in the pl_update.log file:
[07 Oct 2017 14:51:11,167] iso file validation failed/not-applicable
Looking earlier in the log we see the following message:
[07 Oct 2017 14:51:07,582] Last login: Sat Oct 7 14:41:30 2017 from 172.23.1.1^M
> getupgfiles -h 172.23.1.1 -u root -d /BCU_share/bwr1/firmware/hmc/CR6/image/imports/HMC_Recovery_V8R830_5 -s
> echo $?
>
Resolution: Check the log file to see whether the update completed on the second or other HMC in the environment. If the update completed successfully, then the most likely reason is that the known_hosts file for the root user has an incorrect ssh host key associated with the management host. This should be a rare occurrence, but it can happen if the ssh host keys on the management host change over time and, during troubleshooting or a deployment step, an ssh session was initiated from the root user on the HMC to the management host.
Resolving this issue requires PDOA support to open a secondary case with the HMC support team. The HMC support team will lead the customer to obtain pesh access; this is described in the pesh documentation for Power 7 and Power 8. Access to pesh requires accessing the hscpe user and the root user. In PDOA environments the hscpe user is removed before the system is turned over, but it may have been created during troubleshooting steps. Therefore it may be necessary to create the hscpe user, or to change the password for the hscpe user if that user already exists because of a previous troubleshooting step. The same is true for the root user: if the root password is not known, it will be necessary to modify the root password. Both the hscpe and root passwords can be modified through the hscroot user by using the chhmcuser command.
Once a pesh session is established and the customer is able to access the root account, it is possible to test that this is indeed the problem by running the following as the root user:
bash-4.1# ssh root@172.23.1.1
To fix the issue, run the following as the root user on the HMC. Note that if your management host internal network IP address is different from 172.23.1.1, substitute that IP address in the ssh-keygen command.
ssh-keygen -R 172.23.1.1
This command removes the entry from the /root/.ssh/known_hosts file on the HMC.
"Could not start the product 'GPFS' on" during apply phase. [ Added 2017-11-22 ]
Problem: When the fixpack attempts to restart GPFS on a host, it may fail to start GPFS, causing the fixpack process to fail.
Resolution: This happens because of a limitation in the pflayer code which determines whether all of the GPFS filesystem mount points are indeed mounted, allowing the fixpack process to proceed to the next step. This code relies on a very specific naming convention for NSDs and their associated GPFS filesystems, as well as a one-to-one mapping of NSDs to filesystems. If a filesystem and NSD do not follow either of these conventions, the GPFS startup code will not be able to determine when all filesystems are indeed mounted. Customers that have added GPFS filesystems that do not follow these two conventions will need to contact IBM for possible remediation options.
Here is the test. Run the following commands on the hosts identified in the pl_update.log file that could not start GPFS. These commands can be run prior to the fixpack process.
/usr/lpp/mmfs/bin/mmlsfs all -d 2> /dev/null | grep -- "-d" | awk '{ sub(/nsd/, "", $2); print $2 }' | sort
The expectation is that the output is exactly the same.
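A minimal convention-check sketch, written as an assumption because the second command of the original test is not shown above: it assumes mmlsnsd's default output lists the file system in the first column and the NSD name in the second, and that the convention is NSD name = "nsd" plus the filesystem name. Any rows it prints would indicate a layout the fixpack GPFS startup code cannot track:
/usr/lpp/mmfs/bin/mmlsnsd 2> /dev/null | awk 'NF >= 2 && $1 != "File" && $0 !~ /^-/ { nsd = $2; sub(/^nsd/, "", nsd); if (nsd != $1) print "non-standard:", $1, $2 }'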
Drive update required for product ID ST900MM0006 [ Added 2017-11-22 ]
Problem: Before starting the apply phases of the fixpack, it is necessary to apply an update to the V7000 drives. These steps can be applied while the system is online. See the linked V7000 tech note for more information.
Drives with product ID ST900MM0006 need to be updated to firmware level B56S before running the "applydrivesoftware" command. (Data Integrity Issue when Drive Detects Unreadable Data)
Resolution:
1. In an ssh session, log in as the root user on the management host.
2. Determine the IP addresses of all of the V7000 enclosures in the environment. The SAN_FRAME entries in the xcluster.cfg file are V7000 enclosures.
$ grep 'SAN_FRAME[0-9][0-9]*_IP' /pschome/config/xcluster.cfg
Use the following command to query the console for the storage enclosures.
$ appl_ls_hw -r storage -A M_IP_address,Description
In the above examples there are several V7000 storage enclosures and the IP addresses are 172.23.1.181 to 172.23.1.187.
3. Determine whether your system has the impacted drive. This command will provide the number of drives that match the 900 GB type ST900MM0006.
$ grep 'SAN_FRAME[0-9]*[0-9]_IP' /pschome/config/xcluster.cfg | while read a b c d;do echo "*** $ ***";ssh -n superuser@$ 'lsdrive -nohdr| while read id rest;do lsdrive $id;done' | grep -c "product_id ST900MM0006";done
4. The PureData System for Operational Analytics V1.0 FP5 image includes the necessary files to perform the drive update. These files were unpacked as part of the fixpack registration. Determine the location of the fixpack on the management host.
$ appl_ls_cat
In the above command the fixpack files are part of the id 'bwr1'. This means the files were unpacked on the management host in /BCU_share/bwr1.
5. Determine the fix path by changing the variable to the identifier determined in step 4 in the path /BCU_share/<id>/firmware/storage/2076/image/imports/drives. From the above example, the id was 'bwr1' so the path is "/BCU_share/bwr1/firmware/storage/2076/image/imports/drives".
6. Verify that the fix file exists and also check the cksum of the file.
$ ls -la /BCU_share/bwr1/firmware/storage/2076/image/imports/drives
$ cksum /BCU_share/bwr1/firmware/storage/2076/image/imports/drives/IBM2076_DRIVE_20160923
7. For each IP address identified in step 2 (the example below uses 172.23.1.183; all V7000s can be updated concurrently):
a. Copy the image to the Storwize location /home/admin/upgrade:
scp /BCU_share/bwr1/firmware/storage/2076/image/imports/drives/IBM2076_DRIVE_20160923 admin@172.23.1.183:/home/admin/upgrade
b. Update the drives using the command below (see the sketch after this list).
8. Monitor the status of the drive upgrade using the lsdriveupgradeprogress command. The following command will report on the progress of all of the V7000s. Repeat this command until there is no longer any output, indicating the updates have finished.
$ grep 'SAN_FRAME[0-9]*[0-9]_IP' /pschome/config/xcluster.cfg | while read a b c d;do echo "*** $ ***";ssh -n superuser@$ lsdriveupgradeprogress;done
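A minimal sketch of the drive update command for step 7b, modeled on the applydrivesoftware invocation quoted in the PL-log excerpt earlier in this note; the enclosure IP and the drive ID list are examples only and must match the drives reported for your enclosure:
# run against one enclosure (example IP 172.23.1.183); repeat for each V7000
ssh superuser@172.23.1.183 'applydrivesoftware -file IBM2076_DRIVE_20160923 -type firmware -drive 0:1:2:3'
# monitor until no upgrades remain in progress
ssh superuser@172.23.1.183 'lsdriveupgradeprogress'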
HA Tools Version 2.0.5.0 hareset fails with "syntax error at line 854 : `else' unexpected" error. [ Added 2018-01-23 ]
Problem: When attempting to back up or restore the core TSA domains, the hareset command fails with an error similar to the following:
/usr/IBM/analytics/ha_tools/hareset: syntax error at line 854 : `else' unexpected
This is due to an errant edit that was incorporated into the hatools in the March fixpacks (V1.0.0.5/V1.1.0.1) as part of HA Tools version 2.0.5.0.
Resolution: To fix in the field:
1. Log in to the management host as root.
2. Back up the original script:
cp /usr/IBM/analytics/ha_tools/hareset /usr/IBM/analytics/ha_tools/hareset.bak
3. Using the vi editor, modify the file /usr/IBM/analytics/ha_tools/hareset. Find line 850 in this file and modify 'if' to say 'fi'.
4. Compare the files to confirm the change:
$ diff hareset.bak hareset
5. Copy the new hareset file to the rest of the hosts.
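A minimal distribution sketch for step 5, assuming the core host names host01 through host04 from the example output later in this note; substitute your own host list:
for h in host01 host02 host03 host04; do
    scp /usr/IBM/analytics/ha_tools/hareset root@$h:/usr/IBM/analytics/ha_tools/hareset
done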
On FP3->FP5 The TSA upgrade does not include the appropriate TSA license. [ Added 9/5/2018 ]
This issue only affects customers who apply PDOA V1.0.0.5 (FP5) to V1.0.0.3 (FP3).
There have been two symptoms that have appeared in the field.
The first symptom occurs after trying to run a command to change rsct / TSA policies.
(mkrsrc-api) 2621-309 Command not allowed as daemon does not have a valid license.
mkequ: 2622-009 An unexpected RMC error occurred. The RMC return code was 1.
The second symptom can occur when trying to update TSA when there is no license. The following error can show up when running installSAM:
prereqSAM: All prerequisites for the ITSAMP installation are met on operating system: AIX 7100-05
installSAM: Cannot upgrade because no valid license was found.
installSAM: No installation was performed.
installSAM: For details, refer to the ‘Error:’ entries in the log file: /tmp/installSAM.2.log
1. Verify that the license is not applied by running the following command from root on the management host.
2. If you are planning to update to PDOA V1.0.0.6, then V1.0.0.6 will include instructions on how to remedy this issue. When PDOA V1.0.0.6 is available, download the fixpack from Fix Central, follow the instructions to unpack the fixpack, and then follow the appendix, which describes how to apply the license as part of the TSA update. If you are not planning on applying FP6, contact IBM Support to obtain the sam41.lic file and proceed to step 3.
3. Create the directory /stage/FP3_FP5/TSA.
mkdir -p /stage/FP3_FP5/TSA
4. Copy the sam41.lic file to the /stage/FP3_FP5/TSA directory.
5. Verify that stage is mounted on all hosts in the domain.
6. Run the following command to apply the license to all domains. This does not require restart.
dsh -n $ALL "samlicm -i /stage/FP3_FP5/TSA/sam41.lic"
7. Verify the license was applied successfully. The output should be similar to the output below once the TSA copies are licensed.
$ dsh -n $ALL "samlicm -s" | dshbak -c
HOSTS ————————————————————————-
host01, host02, host03, host04
——————————————————————————-
Product: IBM Tivoli System Automation for Multiplatforms 4.1.0.0
Creation date: Fri Aug 16 00:00:01 MST 2013
Expiration date: Thu Dec 31 00:00:01 MST 2037
Multiple DB2 Copies installed on the core hosts can confuse the fixpack. [ Added 2018-10-03 ]
The PDOA appliance is designed as follows:
1 DB2 9.7 copy on the management host to support IBM Systems Director.
1 DB2 10.1 or 10.5 DB2 copy on the management and management standby hosts supporting Warehouse Tools and DPM.
1 DB2 10.1, 10.5, 11.1 copy on all core hosts supporting the core database.
This assumption is built into the PDOA Console and can impact the following:
-> compliance checks comparing what is on the system to the validated stack
-> fixpack application (preview, prepare, apply, commit phases).
The most likely scenario is that a customer who is very familiar with DB2 installs additional copies as part of a fixpack or special build installation. This is supported by DB2, but if the previous copy is left on the system it can cause various issues with the console, with the most severe issues occurring during the fixpack application.
This issue will minimally impact customers on V1.0.0.5 or V1.1.0.1: the non-cumulative V1.0.0.6 (FP6) / V1.1.0.2 (FP2) releases have changed significantly and no longer have this restriction, and the compliance check for DB2 in the platform layer is not a critical function.
Remove any extra DB2 copies from the environment on all hosts before running the fixpack preview. This will prevent fixpack failures due to multiple DB2 copies.
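A minimal check sketch for spotting extra copies before the preview, assuming db2ls is available at its default path on every host and that $ALL is the dsh node group used elsewhere in this note:
$ dsh -n $ALL "/usr/local/bin/db2ls" | dshbak -c     # each host should list only the copies described above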
If a problem related to multiple DB2 copies is encountered during the appliance fixpack, it will be necessary to seek guidance from IBM Support, because the next steps depend on the failure as well as on where in the process of applying the fixpack it occurred.
Hi folks,
I've run an upgrade of Confluence from 5.5.2 to 6.2. After this upgrade, Confluence attempts to restart, but the log shows that Tomcat is failing to start due to a severe error, below. From the internet searches I've done up to now, it appears the message is referring to a <filter> tag in web.xml. It also suggests that more info can be found by looking at the appropriate log file. I don't know what log file this message is referring to, and I have looked at all the logs in the Atlassian\Confluence\logs folder.
So, I’m thinking the message is telling me that there might be a way to get a more verbose message that will at least tell me what <filter> is the problem, and I can go from there. It’s either that, or back out of the upgrade and try again, but I get a feeling I’m going to have the same web.xml problem.
Can anyone suggest how to get more info about this error?
Your help is very much appreciated.
Chris
…
08-Jun-2017 09:29:33.657 INFO [localhost-startStop-2] org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping.register Mapped "{[/reload],methods=[PUT]}" onto public org.springframework.http.ResponseEntity com.atlassian.synchrony.proxy.web.SynchronyProxyRestController.reloadConfiguration(com.atlassian.synchrony.proxy.web.SynchronyProxyConfigPayload)
08-Jun-2017 09:29:33.657 INFO [localhost-startStop-2] org.springframework.web.servlet.handler.SimpleUrlHandlerMapping.registerHandler Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.DefaultServletHttpRequestHandler]
08-Jun-2017 09:29:33.720 INFO [localhost-startStop-2] org.springframework.context.support.DefaultLifecycleProcessor.start Starting beans in phase 2147483647
08-Jun-2017 09:29:33.767 INFO [localhost-startStop-2] org.springframework.web.servlet.DispatcherServlet.initServletBean FrameworkServlet 'dispatcher': initialization completed in 2079 ms
08-Jun-2017 09:33:12.539 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal One or more Filters failed to start. Full details will be found in the appropriate container log file
08-Jun-2017 09:33:12.539 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors
08-Jun-2017 09:33:31.320 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesJdbc The web application [ROOT] registered the JDBC driver [org.h2.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
08-Jun-2017 09:33:31.320 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesJdbc The web application [ROOT] registered the JDBC driver [net.sourceforge.jtds.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
08-Jun-2017 09:33:31.320 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesJdbc The web application [ROOT] registered the JDBC driver [com.github.gquintana.metrics.sql.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
08-Jun-2017 09:33:31.320 WARNING [localhost-startStop-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesJdbc The web application [ROOT] registered the JDBC driver [org.postgresql.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
…
Your management card may send you the following alerts:
“System: Warmstart” (or, in AOS v5.1.5 or higher, “System: Network Interface restarted”) and “System: Coldstart” (or, in AOS v5.1.5 or higher, “System: Network Interface Coldstarted”).
These alerts are sent when the Network Management Card restarts. The alerts don’t necessarily indicate a problem, and, when they do, they only affect the Network Management Card’s interface. Your UPS load is unaffected.
A “System: Coldstart” alert means that the Network Management Card (NMC) has just been powered; this may happen if the device powering the Network Management Card suffers an interruption of power.
A “System: Warmstart” alert means that the Network Management Card (NMC) has restarted without losing power. This may happen for multiple reasons:
- The default gateway is wrong or the network traffic is too heavy and the gateway can not be reached.
- After a new AOS or Application firmware upgrade has been uploaded to the NMC.
- Modification of some NMC settings.
- The Reset button on the front panel of the NMC is pressed.
- Web Interface Reboot request
- Network settings have changed – At least one of the TCP/IP settings changed.
- A request to restart the current SNMP agent service was received.
- An internal request to load and execute a new SNMP agent service was received.
- A request to clear the NMC’s network settings and restart the SNMP agent service was received.
- Smart-UPS Output Voltage Change
- Remote Monitoring Service (RMS) communication has been lost (NMC2 only)
- An internal firmware error was detected by the NMC and to clear the error, the NMC firmware explicitly reboots itself as a failsafe.
- An undetected firmware error occurred and the hardware watchdog reboots the NMC to clear the error.
What you can do:
You should download all available event logs for your product: event.txt, data.txt, and config.ini for NMC1 and NMC2, as well as debug.txt and dump.txt for NMC2 only.
- Review the event.txt file to see if any of the causes listed above could be why your Network Management Card has restarted or coldstarted.
- Is this affecting more than one Network Management Card in your environment? This may point to a network traffic issue, causing the Management Card to reboot due to the watchdog mechanism outlined above.
- Note the frequency of the events in question. Can you pinpoint it to a certain time/certain set of events before and after? If the restarts are always at the same intervals, this may relate to a network traffic issue.
- Depending on what you find, try rebooting your card’s interface or resetting the card to defaults (after backing up your configuration and obtaining the aforementioned log files). See if the issue persists.
The following Network Management Cards may generate these alerts:
- Web/SNMP Card – AP9606
Which is embedded in, among others: APC Environmental Monitoring Unit 1 (AP9312TH)
- Network Management Card 1 (NMC1) – AP9617, AP9618, AP9619
Which are embedded in, among others: Metered/Switched Rack PDUs (APC AP78XX, AP79XX), Rack Automatic Transfer Switches (APC AP77XX), Environmental Monitoring Units (APC AP9320, AP9340, NetBotz 200)
- Network Management Card 2 (NMC2) – AP9630/AP9630CH, AP9631/AP9631CH, AP9635/AP9635CH
Which are embedded in, among others: APC 2G Metered/Switched Rack PDUs (AP84XX, AP86XX, AP88XX, AP89XX), and some audio/video network management enabled products.
Chapter 2
System Messages and Recovery Procedures
SVC Messages

Error Message
SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_PL_ER_FC_ADAP_FAIL ) Logical Fibre Channel port reported failed [chars]
Explanation
No active/functioning logical FC port was detected by the software.
Recommended Action
Reload the node. If the problem persists, replace the Caching Services Module.

Error Message
SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_PL_ER_FC_ADAP_QUANTITY ) Logical Fibre Channel port missing [chars]
Explanation
No active/functioning FC port was detected by the software.
Recommended Action
Reload the node. If the problem persists, replace the Caching Services Module.

Error Message
SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_PL_ER_FC_PORT_QUANTITY ) Logical Fibre Channel port reported not operational [chars]
Explanation
No active/functioning logical FC port was detected by the software.
Recommended Action
Reload the node. If the problem persists, replace the Caching Services Module.

Error Message
SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_PL_ER_NODE_WARMSTART ) Node warmstarted due to software error [chars]
Explanation
The error that is logged in the cluster error log indicates a software problem in the cluster.
Recommended Action
1. Ensure that the software is at the latest level on the cluster. 2. Run a configuration dump and a logged data dump. Save the dump data. 3. Contact the Cisco Technical Assistance Center (TAC) through the Cisco Support web site http://www.cisco.com/tac. 4. Mark the error you have just repaired as fixed.

Error Message
SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_PL_ER_POWER_DOMAIN ) Power domain error [chars]
Explanation
The two nodes in an IO group are on the same Caching Services Module.
Recommended Action
Determine what the configuration should be. Remove one of the nodes and add a node to the IO group which is not on the same Caching Services Module.

Error Message
SVC-3-NODE_ERR_MSG: (SVC [dec]/[dec] SS_EID_VG_ER_MDISK_GROUP_OFFLINE ) A Managed Disk group is offline [chars]
Explanation
An Mdisk group is offline.
Recommended Action
1. Repair the enclosure or disk controller. 2. Start a cluster discovery operation. 3. Check managed disk status. If all managed disks show a status of online, mark the error you have just repaired as fixed.
-
#1
Valve edited the maps = broke community servers (once again)
The signatures changed, and as a result servers went into constant crashing (SM 1.10)
[MaZa] [HotGuard] - Failed Offset 1
[SM] Unable to load extension "hotguard.ext":
[SDKTOOLS] Sigscan for WriteBaselines failed
[SDKTOOLS] Failed to find WriteBaselines signature - stringtable error workaround disabled.
[AntiDLL] Sigscan for Signature failed
[SM] Unable to load extension "AntiDLL.ext": Failed to create interceptor
[SM] Failed to load plugin "hotguard.smx": Unable to load plugin (bad header).
[SM] Unable to load plugin "AntiDLL.smx": Required extension "AntiDLL" file("AntiDLL.ext") not running
[SM] Exception reported: Failed to get engine poiters. Data: 0, 0, F0D92D44, F0E311CC.
[SM] Blaming: block_print_garbage_messages.smx
[SM] Call stack trace:
[SM] [0] SetFailState
[SM] [1] Line 48, d:\SourcePawn\1.10\block_print_garbage_messages.sp::OnPluginStart
[SM] Unable to load plugin "block_print_garbage_messages.smx": Error detected in plugin startup (see error logs)
[SM] Unable to load plugin "CrashPlayer_AntiDLL.smx": Required extension "AntiDLL" file("AntiDLL.ext") not running
[SM] Exception reported: Can't get offset for "CBaseServer::RejectConnection".
[SM] Blaming: server_redirect.smx
[SM] Call stack trace:
[SM] [0] SetFailState
[SM] [1] Line 9, server_redirect/redirect.sp::SetupSDKCalls
[SM] [2] Line 198, C:\Users\art\Desktop\addons\ё\sourcemod\scripting\server_redirect.sp::OnPluginStart
[SM] Unable to load plugin "server_redirect.smx": Error detected in plugin startup (see error logs)
[SM] Exception reported: Failed to load CBaseServer::IsExclusiveToLobbyConnections signature from gamedata
[SM] Blaming: nolobbyreservation.smx
[SM] Call stack trace:
[SM] [0] SetFailState
[SM] [1] Line 87, nolobbyreservation.sp::OnPluginStart
[SM] Unable to load plugin "nolobbyreservation.smx": Error detected in plugin startup (see error logs)
The signatures that broke:
CBaseServer::RejectConnection
CBaseServer::IsExclusiveToLobby
upd: If you still want to use SM 1.10 on Linux, download the SM 1.11 build 6928 archive and copy all files from its addons/sourcemod/gamedata/ folder over yours, replacing existing files (do not touch the files in the other folders). A sketch of this is shown below.
For fixes for the other plugins, look for files with patched signatures in the corresponding threads.
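A minimal shell sketch of the gamedata swap described above, assuming the SM 1.11 build 6928 Linux archive was downloaded as sourcemod-1.11.0-git6928-linux.tar.gz (the real file name may differ) and that the server's game directory is ./csgo:
mkdir -p /tmp/sm111 && tar -xzf sourcemod-1.11.0-git6928-linux.tar.gz -C /tmp/sm111
# copy only the gamedata files over the existing SM 1.10 install, replacing on conflict
cp -rf /tmp/sm111/addons/sourcemod/gamedata/. ./csgo/addons/sourcemod/gamedata/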
Last edited: Saturday at 10:30
-
#661
The server runs for 10-15 minutes and then crashes. I disabled the plugins folder and started without plugins and it doesn't crash. Now I'm going through them one by one to find which plugin is crashing it.
-
#662
@j1ton, recompile all the scripts for the update, everything works for me, only the VIP healthshots (syringes) don't work, waiting for an update
Last edited: Saturday at 14:43
-
#663
Is anyone else getting crashes on map change? Or is it just me, everything is updated… without a single error
Posts automatically merged: Saturday at 14:41
@j1ton, recompile all the scripts for the update, everything works for me, only the VIP healthshots (syringes) don't work, waiting for an update
upd: the server goes down after about 10 minutes online, error reading errors_log, I don't know what the problem is
Update your gamedata and that's it.
Last edited: Saturday at 14:41
-
#664
Is anyone else getting crashes on map change? Or is it just me, everything is updated… without a single error
Posts automatically merged: Saturday at 14:41
Update your gamedata and that's it.
Updating the gamedata didn't help me, apparently the problem is in one of my plugins, so I'm searching for it now
-
#665
Is anyone else getting crashes on map change? Or is it just me, everything is updated… without a single error
Posts automatically merged: Saturday at 14:41
Update your gamedata and that's it.
Same here. When compiling it reports syntax errors.
-
#666
I have SourceMod 1.11 and the server works fine, the only problems are with the shop_skins.smx plugin (skins don't turn off) and the res.smx plugin (music doesn't play)
I'm attaching my gamedata and extensions (delete whatever you don't need), give them a try.
Don't forget to set "DisableAutoUpdate" to "yes" in /addons/sourcemod/configs/core.cfg
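A minimal check for that setting, assuming the server's game directory is csgo/ and core.cfg is at its stock location; after the edit the value should read "yes":
grep -n "DisableAutoUpdate" csgo/addons/sourcemod/configs/core.cfg    # expect: "DisableAutoUpdate"  "yes"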
-
gamedata.zip
135.7 KB
· Views: 24
-
extensions.zip
22.1 MB
· Views: 24
-
#667
L 02/04/2023 - 15:11:04: Info (map "de_mirage") (file "/home/server26921/game/csgo/addons/sourcemod/logs/errors_20230204.log")
L 02/04/2023 - 15:11:04: [SM] Exception reported: Failed to create native "BaseComm_IsClientGagged", name is probably already in use
L 02/04/2023 - 15:11:04: [SM] Blaming: basecomm.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM] [0] CreateNative
L 02/04/2023 - 15:11:04: [SM] [1] Line 71, /home/builds/sourcemod/debian9-1.11/build/plugins/basecomm.sp::AskPluginLoad2
L 02/04/2023 - 15:11:04: [SM] Failed to load plugin "basecomm.smx": unexpected error 23 in AskPluginLoad callback.
L 02/04/2023 - 15:11:04: [AntiDLL] Sigscan for Signature failed
L 02/04/2023 - 15:11:04: [SM] Unable to load extension "AntiDLL.ext": Failed to create interceptor
L 02/04/2023 - 15:11:04: [Discord/DropsSummoner_discord.smx] At address g_pDropForAllPlayersPatch received not what we expected, drop for all players will be unavailable.
L 02/04/2023 - 15:11:04: [SM] Exception reported: [System Panel] [Users Chat DataBase] Failed to connection SP_users in databased.cfg
L 02/04/2023 - 15:11:04: [SM] Blaming: users_chat.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM] [0] SetFailState
L 02/04/2023 - 15:11:04: [SM] [1] Line 39, c:UsersauthtDesktopplugins-syspaneladdonssourcemodscriptingusers_chat.sp::Connection_BD
L 02/04/2023 - 15:11:04: [SM] [2] Line 31, c:UsersauthtDesktopplugins-syspaneladdonssourcemodscriptingusers_chat.sp::OnPluginStart
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "users_chat.smx": Error detected in plugin startup (see error logs)
L 02/04/2023 - 15:11:04: [SM] Exception reported: [MA] Database failure: Could not find Database conf "materialadmin"
L 02/04/2023 - 15:11:04: [SM] Blaming: admin/materialadmin.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM] [0] SetFailState
L 02/04/2023 - 15:11:04: [SM] [1] Line 44, materialadmin/database.sp::ConnectBd
L 02/04/2023 - 15:11:04: [SM] [2] Line 16, materialadmin/database.sp::MAConnectDB
L 02/04/2023 - 15:11:04: [SM] [3] Line 286, materialadmin.sp::OnPluginStart
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "admin/materialadmin.smx": Error detected in plugin startup (see error logs)
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "admin/ma_mutenotification.smx": Could not find required plugin "materialadmin"
L 02/04/2023 - 15:11:04: [SM] Exception reported: [Clans] No database configuration in databases.cfg!
L 02/04/2023 - 15:11:04: [SM] Blaming: clans/clans.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM] [0] SetFailState
L 02/04/2023 - 15:11:04: [SM] [1] Line 11, clans/database.sp::ConnectToDatabase
L 02/04/2023 - 15:11:04: [SM] [2] Line 240, A:ssmodscriptingclans.sp::OnPluginStart
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "clans/clans.smx": Error detected in plugin startup (see error logs)
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "clans/clan_createall.smx": Native "Clans_GetClientTimeToCreateClan" was not found
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "clans/clans_coinsbykill.smx": Native "Clans_AreClansLoaded" was not found
L 02/04/2023 - 15:11:04: [SM] Exception reported: [CustomPlayerArms] - Не удалось получить адрес s_playerViewmodelArmConfigs
L 02/04/2023 - 15:11:04: [SM] Blaming: CustomPlayerArms.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM] [0] SetFailState
L 02/04/2023 - 15:11:04: [SM] [1] Line 38, C:UsersanakaineDesktopxxxCustomPlayerArms.sp::OnPluginStart
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "CustomPlayerArms.smx": Error detected in plugin startup (see error logs)
L 02/04/2023 - 15:11:04: [SM] Exception reported: [System Panel] [Users Visits DataBase] Failed to connection SP_users in databased.cfg
L 02/04/2023 - 15:11:04: [SM] Blaming: users_visits.smx
L 02/04/2023 - 15:11:04: [SM] Call stack trace:
L 02/04/2023 - 15:11:04: [SM] [0] SetFailState
L 02/04/2023 - 15:11:04: [SM] [1] Line 28, c:UsersauthtDesktopplugins-syspaneladdonssourcemodscriptingusers_visits.sp::Connection_BD
L 02/04/2023 - 15:11:04: [SM] [2] Line 23, c:UsersauthtDesktopplugins-syspaneladdonssourcemodscriptingusers_visits.sp::OnPluginStart
L 02/04/2023 - 15:11:04: [SM] Unable to load plugin "users_visits.smx": Error detected in plugin startup (see error logs)
L 02/04/2023 - 15:11:05: [SM] Unable to load plugin "vip/vip_clancreate.smx": Native "Clans_SetCreatePerm" was not found
L 02/04/2023 - 15:11:05: [SM] Unable to load plugin "Admins.smx": Could not find required plugin "materialadmin"
L 02/04/2023 - 15:11:05: [SM] Exception reported: [System Panel] [Users DataBase] Failed to connection SP_users in databased.cfg
L 02/04/2023 - 15:11:05: [SM] Blaming: users.smx
L 02/04/2023 - 15:11:05: [SM] Call stack trace:
L 02/04/2023 - 15:11:05: [SM] [0] SetFailState
L 02/04/2023 - 15:11:05: [SM] [1] Line 44, c:UsersauthtDesktopplugins-syspaneladdonssourcemodscriptingusers.sp::Connection_BD
L 02/04/2023 - 15:11:05: [SM] [2] Line 21, c:UsersauthtDesktopplugins-syspaneladdonssourcemodscriptingusers.sp::OnPluginStart
L 02/04/2023 - 15:11:05: [SM] Unable to load plugin "users.smx": Error detected in plugin startup (see error logs)
Are there fixes for these plugins?
-
#668
I have SourceMod 1.11 and the server works fine, the only problems are with the shop_skins.smx plugin (skins don't turn off) and the res.smx plugin (music doesn't play)
I'm attaching my gamedata and extensions (delete whatever you don't need), give them a try.
Don't forget to set "DisableAutoUpdate" to "yes" in /addons/sourcemod/configs/core.cfg
I start the server and it comes up on launch, but the console prints Could not establish connection to Steam servers.
-
#669
I'm also seeing crashes, but so far I can't figure out which plugin is causing them…
-
#670
L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVServer::BroadcastLocalChat failed
L 02/04/2023 - 15:29:43: [STVM] CHLTVServer::BroadcastLocalChat detour could not be initialized.
L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVDemoRecorder::StartRecording failed
L 02/04/2023 - 15:29:43: [STVM] CHLTVDemoRecorder::StartRecording detour could not be initialized.
L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVDemoRecorder::StopRecording failed
L 02/04/2023 - 15:29:43: [STVM] CHLTVDemoRecorder::StopRecording detour could not be initialized.
L 02/04/2023 - 15:29:43: [STVM] Failed to get CHLTVServer::m_DemoRecorder offset.
gamedata sourcetvmanager.
-
#671
I'm also seeing crashes, but so far I can't figure out which plugin is causing them…
Try disabling everything related to skins (shop, ws, vip)
For example, after the official fix my server wouldn't start with the ws plugin from Feny.
And one error remains:
[CSTRIKE] [CStrike] Failed to locate NET_SendPacket signature.
I decided to try switching to 1.12, but nothing changed, the error is still there.
Can you tell me what this is and how to fix it? I'd be very grateful
-
#672
L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVServer::BroadcastLocalChat failed L 02/04/2023 - 15:29:43: [STVM] CHLTVServer::BroadcastLocalChat detour could not be initialized. L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVDemoRecorder::StartRecording failed L 02/04/2023 - 15:29:43: [STVM] CHLTVDemoRecorder::StartRecording detour could not be initialized. L 02/04/2023 - 15:29:43: [STVM] Sigscan for CHLTVDemoRecorder::StopRecording failed L 02/04/2023 - 15:29:43: [STVM] CHLTVDemoRecorder::StopRecording detour could not be initialized. L 02/04/2023 - 15:29:43: [STVM] Failed to get CHLTVServer::m_DemoRecorder offset.
gamedata sourcetvmanager.
Solution.
-
sourcetvmanager.games.txt
12.5 KB
· Views: 14
-
#673
Does anyone have working gamedata and extensions for 1.11? I've already tried everything and nothing wants to start
-
#674
Does anyone have working gamedata and extensions for 1.11? I've already tried everything and nothing wants to start
Scroll through the thread, everything has been posted here.
Everything starts and works for me, but the problem is that my server is Mirage-only, and for some reason it changes the map to a random one and the server crashes.
-
#675
Scroll through the thread, everything has been posted here.
Everything starts and works for me, but the problem is that my server is Mirage-only, and for some reason it changes the map to a random one and the server crashes.
The ones that were posted don't work
-
#676
Does anyone have Phoenix's SourceMod build for 1.11?
-
#677
Please post a working basecomm.smx
-
#678
Does anyone have Phoenix's SourceMod build for 1.11?
There's no Phoenix build for 1.11, if I'm not mistaken
-
#680
Has anyone else had AntiDLL not working too?
Unable to load plugin "AntiDLL.smx": Required extension "AntiDLL" file("AntiDLL.ext") not running