Premature EOF parse error


Contents

  1. JSON Parser error: parse error: premature EOFx0a — POST request with valid JSON and libYAJL compiled on Centos7 #2052
  2. Comments
  3. parse error: premature EOF — what causes this ? #4
  4. Comments
  5. JSON requests with no body fail as «premature EOF» #1392
  6. Comments

JSON Parser error: parse error: premature EOFx0a — POST request with valid JSON and libYAJL compiled on Centos7 #2052

The Problem
I’m getting the following rule triggered when parsing a POST request with a JSON body:

ModSecurity: Warning. Matched "Operator `Eq' with parameter `0' against variable `REQBODY_ERROR' (Value: `1' ) [file "/path/to/httpd/conf.d/modsecurity/mod_security.conf"] [line "53"] [id "200002"] [rev ""] [msg "Failed to parse request body."] [data "JSON parsing error: parse error: premature EOF\x0a"] [severity "2"] [ver ""] [maturity "0"] [accuracy "0"] [hostname "hostname.local"] [uri "/Path/To/Endpoint"] [unique_id "154342236916.285253"] [ref "v185,1"]

This is on ModSecurity v3.0.3 with OWASP CRS v3.2.0 on CentOS 7, and ModSecurity is compiled with libYAJL.

I’ve tried sending the most basic JSON payload and still get this error. It’s also triggered if I send an empty body.

The request has the Content-Type header set to application/json.

To Reproduce

Steps to reproduce the behavior:

curl -v -H 'Content-Type: application/json' -d '{"test":"thisisatest"}' -k "https://hostname.local/PATH/to/Endpoint"

curl -v -H 'Content-Type: application/json' -d '{}' -k "https://hostname.local/PATH/to/Endpoint"

curl -v -H 'Content-Type: application/json' -d '' -k "https://hostname.local/PATH/to/Endpoint"

Expected behavior

None of these requests should trigger any rules.

Instead, each of them triggers rule 200002, which proceeds to trigger blocking rules once the anomaly scoring threshold is exceeded.
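For reference, rule 200002 is shipped in modsecurity.conf-recommended and looks roughly like the following (quoted from memory of the shipped config; check your local file for the exact text):

```
SecRule REQBODY_ERROR "!@eq 0" \
 "id:'200002', phase:2,t:none,log,deny,status:400,msg:'Failed to parse request body.',logdata:'%{reqbody_error_msg}',severity:2"
```

It fires whenever the body processor reports any parse failure, which is why an unparseable (or, here, empty) JSON body shows up as "Failed to parse request body."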

Server:

ModSecurity v3.0.3 / Release 2.el7

OS (and distro): CentOS 7


Here is a snippet from the debug log set to level 9:

[155352621654.189888] [/PATH/to/Endpoint] [4] Starting phase REQUEST_BODY. (SecRules 2)
[155352621654.189888] [/PATH/to/Endpoint] [9] This phase consists of 380 rule(s).
[155352621654.189888] [/PATH/to/Endpoint] [4] (Rule: 200002) Executing operator «Eq» with param «0» against REQBODY_ERROR.
[155352621654.189888] [/PATH/to/Endpoint] [9] Target value: «1» (Variable: REQBODY_ERROR)
[155352621654.189888] [/PATH/to/Endpoint] [9] Matched vars updated.
[155352621654.189888] [/PATH/to/Endpoint] [9] This rule severity is: 2 current transaction is: 255
[155352621654.189888] [/PATH/to/Endpoint] [9] Saving msg: Failed to parse request body.
[155352621654.189888] [/PATH/to/Endpoint] [4] Rule returned 1.
[155352621654.189888] [/PATH/to/Endpoint] [9] Running action: log
[155352621654.189888] [/PATH/to/Endpoint] [9] Saving transaction to logs
[155352621654.189888] [/PATH/to/Endpoint] [9] Running action: auditlog
[155352621654.189888] [/PATH/to/Endpoint] [9] Running action: log
[155352621654.189888] [/PATH/to/Endpoint] [9] Saving transaction to logs
[155352621654.189888] [/PATH/to/Endpoint] [9] Running action: status
[155352621654.189888] [/PATH/to/Endpoint] [4] Not running disruptive action: deny. SecRuleEngine is not On.

[155352621654.189888] [/PATH/to/Endpoint] [4] (Rule: 920130) Executing operator «Eq» with param «0» against REQBODY_ERROR.
[155352621654.189888] [/PATH/to/Endpoint] [9] Target value: «1» (Variable: REQBODY_ERROR)
[155352621654.189888] [/PATH/to/Endpoint] [9] Matched vars updated.
[155352621654.189888] [/PATH/to/Endpoint] [4] Running [independent] (non-disruptive) action: setvar
[155352621654.189888] [/PATH/to/Endpoint] [8] Saving variable: TX:msg with value: Failed to parse request body.
[155352621654.189888] [/PATH/to/Endpoint] [4] Running [independent] (non-disruptive) action: setvar
[155352621654.189888] [/PATH/to/Endpoint] [8] Saving variable: TX:anomaly_score_pl1 with value: 5
[155352621654.189888] [/PATH/to/Endpoint] [4] Running [independent] (non-disruptive) action: setvar
[155352621654.189888] [/PATH/to/Endpoint] [8] Saving variable: TX:-OWASP_CRS/PROTOCOL_VIOLATION/INVALID_REQ-REQBODY_ERROR with value: 1
[155352621654.189888] [/PATH/to/Endpoint] [9] This rule severity is: 2 current transaction is: 2
[155352621654.189888] [/PATH/to/Endpoint] [9] Saving msg: Failed to parse request body.
[155352621654.189888] [/PATH/to/Endpoint] [4] Rule returned 1.
[155352621654.189888] [/PATH/to/Endpoint] [9] Running action: log
[155352621654.189888] [/PATH/to/Endpoint] [9] Saving transaction to logs
[155352621654.189888] [/PATH/to/Endpoint] [9] Running action: auditlog


parse error: premature EOF — what causes this? #4

When running the acs_income image, occasionally when it gets to the end of the processing it quits and gives the following error:
(long list of zip codes. i.e:
18013.
13241.
followed by:)
downloading Median household income in the past 12 months (in 2015 Inflation-adjusted dollars) from 2015 5-year American Community Survey
Error in parse_con(txt, bigint_as_char) : parse error: premature EOF
(right here) ——^
Calls: map_df . -> fromJSON_string -> parseJSON -> parse_con -> .Call
Execution halted

Re-running the image on the same file multiple times does not change the output — i.e. it is not a matter of just re-running the process and it eventually works. It always fails. Also, the processing appears to get through the complete list of zip codes before quitting. Any ideas what might be causing this?


Likely a non-alphanumeric character is present in the address fields. These can wreak havoc on geocoding results and should be omitted before processing. See: degauss-org/geocoder#1

I already removed all addresses with non-alphanumeric characters, as indicated in the geocoding tips you provided on the DeGAUSS website.
So I checked the list of processed counties (I noticed it’s counties, not zip codes, that are listed in the acs_income docker image output), and one county is missing. Assuming the address connected to this county might be causing the issue, I looked at the file being fed into the acs_income image; only one address is in that missing county, and there isn’t anything different about it compared to the other addresses in the file. The address was successfully processed through the geocoder and census tract docker images.
I’m trying other possibilities now.

Here’s a list of things I’ve tried, sort of randomly, trying to figure out what might be causing the error.
Each of these was tried separately, i.e. the addresses removed in step 1 were put back before removing the addresses in step 2, and so on, so that I could better determine which addresses were causing the issue.

  1. Removed addresses in Puerto Rico, thinking that the acs income docker image might not work in Puerto Rico — the error still occurred.
  2. Removed addresses where the geocoded output «zip» contained «NA» — the error still occurred.
  3. Removed addresses where the geocoded output «city», «state», «street1», and «street2» all contained «NA» — the error still occurred.
  4. Removed addresses where the geocoded output «city», «state», «street1», «street2», «prenum» and «number» all contained «NA» — the data was processed successfully!

So, my question is: does this make sense? How come the acs_income docker image can’t calculate the 2015 ACS income for these troublesome addresses in step 4, given that they all have valid "zip", "city", "state", "score", "precision", "county", "tract", "lon", and "lat" outputs?

Hi, the acs_income image relies on valid lat/lon coordinates and does not use any of the fields you listed above to determine which census tract a row is in. Can you give an example address where the image fails, and I can try to troubleshoot it?

So, the addresses which were removed in step 4 (above) do have valid lat/lon coordinates. My comment above may not have been clear: for a given batch of addresses, when I remove the addresses that satisfy the rule indicated in step 4, the remaining addresses process successfully in acs_income. The removed addresses themselves do have valid lat/lon coordinates.

Also, I can’t share the addresses here, as they are patient addresses and are therefore private. However, I checked a few of the lat/lon pairs for the addresses which could not be processed with acs_income and the lat/lon pairs appear to be valid. For example, one of the addresses is in Clarksville TN and was geocoded with «precision» = «city». When I map the geocoded lat/lon pair for that address in Google, it is the center of Clarksville TN.

Addresses geocoded with a precision of anything other than range or street should not be used for further analysis, including extracting census tract or median household income. For example, the address that was geocoded with a precision of city means that the geocoder used the centroid of the city of Clarksville TN as the geocoded location, and it is highly unlikely that that is where the subject’s address actually is.

Also, be careful about using PHI on the internet. It’s good that you’re not posting private addresses here, but you also probably should not share them with everyone at Google!

It’s hard to imagine a scenario where the geocoder assigns a missing value for city, state, street1, street2, prenum, and number, but assigns valid lat/lon coordinates using a precision method of range or street. If it does, it’s likely a bug in the geocoder, but I can’t say for sure without being able to reproduce the problem locally.

Источник

JSON requests with no body fail as «premature EOF» #1392

I have set up the v3/master version with the nginx connector. Our team is using it with CentOS 7.3 SE and nginx 1.10.3. Further details can be provided on request.

While running our API test suite I noticed basic GET requests were failing with a 400. Upon inspection of ModSecurity’s audit logs, it was apparent there was an issue parsing the request body; the error returned was: [msg "Failed to parse request body."]. This was followed by: [data "XML parsing error: parse error: premature EOF\x0a"]

The error message was initially confusing as the request contained the header Content-Type: application/json , but we confirmed that was just a minor bug with the error message (PR to follow soon).

Assuming there was some unknown content contained within the body of the request, we spent some time evaluating the exact contents of the payload and concluded that no body was being sent at all (as expected).

So it seems ModSecurity validates the body of the request even if there is none to parse. It works correctly with XML but an empty string is considered invalid JSON. I noted the commented out lines above the parsing execution code at: https://github.com/SpiderLabs/ModSecurity/blob/v3/master/src/transaction.cc#L656-L663

I do wonder on what basis the decision to parse even when there is no content was made. Not wanting to presume too much, our temporary fix was to move that check into the JSON parsing block only (we could have excluded the GET requests and others explicitly in a chained rule, but if a body is present, we believe it should be parsed).
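The shape of that temporary fix can be sketched as follows (illustrative only; the names here are not ModSecurity’s actual internals, and the real code would hand the body to yajl):

```cpp
#include <string>

// Illustrative sketch, not ModSecurity's real code: gate the JSON parser on
// the body actually having content, so an empty body never sets REQBODY_ERROR.
bool jsonBodyOk(const std::string &body) {
    if (body.empty()) {
        return true;  // nothing to parse: skip the parser instead of failing
    }
    // A real implementation would feed the body to yajl here; this stand-in
    // only checks that the body starts like a JSON object or array.
    return body.front() == '{' || body.front() == '[';
}
```

With a guard like this, GET requests (and bodyless POSTs) pass through the request-body phase without tripping rule 200002, while any non-empty body is still parsed as before.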

I am happy to provide another PR but wanted to get the community’s thoughts on the above first.


Regardless of the request body content, there are rules that still need to be processed during the request body phase. Those may or may not be related to the request body. The key point is this call here:
https://github.com/SpiderLabs/ModSecurity/blob/v3/master/src/transaction.cc#L799

Ideally there is no reason to put a rule in the request body phase if it is not directly related to the request body, but some users think otherwise.

One thing I was testing here was the behavior of ModSecurity version 2 when dealing with an empty JSON body: it treats an empty JSON body as an error. Check the logs here:
https://gist.github.com/zimmerle/bef703e5f57c42d5d958fe20a59e8971

Ultimately, ModSecurity versions 2 and 3 have to behave in the same fashion, at least for the first release of ModSecurity version 3.

I’m wondering about this issue because, as far as I can see here, in its default recommended configuration ModSecurity is not triggering rule 200002, and it looks like it is properly validating and parsing JSON data in the POST body when the request is well formed.

This generates a request which seems to be non-compliant with the JSON RFCs, as valid JSON requests ("JSON value" or "JSON text") should contain at least one of the following: a number, boolean, string, {} (empty object), or [] (empty array). So, being non-compliant (no data), I think the behaviour is correct, and it should be the user’s choice to either disable/change rule 200002 or, if possible, make sure POST requests with JSON data are compliant.

Now, with GET and JSON as a parameter, the behaviour is a bit different.

curl -H "Content-Type: application/json" -X GET http://localhost/a?b=%7B%22name%3A%22ABC%22,%22id%22%3A%221%22

This works fine and seems OK from a standards perspective (although I didn’t dig much to see whether ModSecurity’s JSON parser properly parses this as it does with POST requests; that is a different matter).

curl -H "Content-Type: application/json" -X GET -d '' http://localhost/a?b=%7B%22name%3A%22ABC%22,%22id%22%3A%221%22

The request is sent like this:

GET /a?query=%5B%7B%22percentage_alcohol%3E%22%3A+0%2C+%22country%22%3A+null%2C+%22type%22%3A+%22%2Ffood%2Fwine%22%2C+%22name%22%3A+null%2C+%22percentage_alcohol%22%3A+null%7D%5D HTTP/1.1
Host: localhost
User-Agent: curl/7.47.0
Accept: */*
Content-Type: application/json
Content-Length: 0
Connection: close

As far as I know, the standard behaviour would be that the receiving party (or the ModSecurity parser in this case) expects some body data to be sent because the Content-Length field is present, even though common practice suggests that a request containing body data should not be sent as GET. This ends up being handled as a bad request, breaking the parser and triggering rule 200002.

Now, it looks like this request is NOT strictly non-standard according to the RFCs, but it might be interpreted as bad practice, and RFC 7231 does mention the following:

A payload within a GET request message has no defined semantics;
sending a payload body on a GET request might cause some existing
implementations to reject the request.

All that said, I’m not sure how strict this should be. Opinions? 🙂


Quick post to document the solution, without the underlying root cause, to an exception.
The setup is a client implemented with the JAX-WS reference implementation in Java 6 (JAX-WS RI 2.1.6 in JDK 6) calling a server also using JAX-WS hosted by Jetty using the jetty-jaxws2spi package.
Set up as normal, I get an exception in the client when it tries to parse the response from the server.

com.sun.xml.internal.ws.streaming.XMLStreamReaderException: XML reader error: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,507]
Message: Premature EOF

I used the TCPProxy from The Grinder to look at the request and response from the communication.
In the output below I’ve obscured some of the namespaces and element names but kept the content length the same.
You can see that the response is being sent back chunked, in two chunks.

--- localhost:49970->127.0.0.1:8070 opened --
--- 127.0.0.1:8070->localhost:49970 opened --
------ localhost:49970->127.0.0.1:8070 ------
POST /serviceAddress HTTP/1.1
Content-type: text/xml;charset="utf-8"
Soapaction: "sendAuthorizationEmail"
Accept: text/xml, multipart/related, text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
User-Agent: JAX-WS RI 2.1.6 in JDK 6
Host: localhost:8071
Connection: keep-alive
Content-Length: 523


------ localhost:49970->127.0.0.1:8070 ------
<?xml version="1.0" ?><S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/"><S:Body><ns2:sendAuthorizationEmailRequest xmlns:ns2="http://___________________________________________________/types" xmlns:ns3="http://___________________________________________________/faults"><sendAuthorizationEmailRequest><partyId>mem001</partyId><logicalID>NOT REQUIRED</logicalID><dynamicContent>Some content for an email</dynamicContent></sendAuthorizationEmailRequest></ns2:sendAuthorizationEmailRequest></S:Body></S:Envelope>
------ 127.0.0.1:8070->localhost:49970 ------
HTTP/1.1 200 OK
Content-Type: text/xml;charset=UTF-8
Transfer-Encoding: chunked
Server: Jetty(8.1.3.v20120416)

5E
<?xml version="1.0" ?><S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/"><S:Body>

------ 127.0.0.1:8070->localhost:49970 ------
19C
<ns2:sendAuthorizationEmailResponse xmlns:ns2="http://___________________________________________________/types" xmlns:ns3="http://___________________________________________________/faults"><sendAuthorizationEmailResponse><XXXXMessageType><messageCode>OK</messageCode><messageReason>OK</messageReason></XXXXMessageType></sendAuthorizationEmailResponse></ns2:sendAuthorizationEmailResponse></S:Body></S:Envelope>

--- 127.0.0.1:8070->localhost:49970 closed --
--- localhost:49970->127.0.0.1:8070 closed --

Eventually, after much poking around, made more difficult by lack of source code, I decided it might be that the client code could not handle the chunked response from the server.
To tell the server that the client didn’t want / couldn’t handle chunked responses, I needed to add the “Connection: close” header to the request.
As with all things in the Java WebServices world this was not easy, involving poorly documented options that all seem to be controlled by passing around untyped Maps.
Here the magic incantation was

Map<String, Object> ctx = ((BindingProvider) service).getRequestContext();
ctx.put(MessageContext.HTTP_REQUEST_HEADERS,
    Collections.singletonMap("Connection", Collections.singletonList("close")));

Setting this changed the response from the server, as shown below, and the client was able to successfully parse the result.

--- localhost:51386->127.0.0.1:8070 opened --
--- 127.0.0.1:8070->localhost:51386 opened --
------ localhost:51386->127.0.0.1:8070 ------
POST /serviceAddress HTTP/1.1
Content-type: text/xml;charset="utf-8"
Connection: close
Soapaction: "sendAuthorizationEmail"
Accept: text/xml, multipart/related, text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
User-Agent: JAX-WS RI 2.1.6 in JDK 6
Host: localhost:8071
Content-Length: 523

<?xml version="1.0" ?><S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/"><S:Body><ns2:sendAuthorizationEmailRequest xmlns:ns2="http://___________________________________________________/types" xmlns:ns3="http://___________________________________________________/faults"><sendAuthorizationEmailRequest><partyId>mem001</partyId><logicalID>NOT REQUIRED</logicalID><dynamicContent>Some content for an email</dynamicContent></sendAuthorizationEmailRequest></ns2:sendAuthorizationEmailRequest></S:Body></S:Envelope>
------ 127.0.0.1:8070->localhost:51386 ------
HTTP/1.1 200 OK
Content-Type: text/xml;charset=UTF-8
Connection: close
Server: Jetty(8.1.3.v20120416)

<?xml version="1.0" ?><S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/"><S:Body>
------ 127.0.0.1:8070->localhost:51386 ------
<ns2:sendAuthorizationEmailResponse xmlns:ns2="http://___________________________________________________/types" xmlns:ns3="http://___________________________________________________/faults"><sendAuthorizationEmailResponse><XXXXMessageType><messageCode>OK</messageCode><messageReason>OK</messageReason></XXXXMessageType></sendAuthorizationEmailResponse></ns2:sendAuthorizationEmailResponse></S:Body></S:Envelope>
--- 127.0.0.1:8070->localhost:51386 closed --
--- localhost:51386->127.0.0.1:8070 closed --

For my game project, I have a fog of war shader that hides out-of-view enemies:

uniform sampler2D objtexture;
uniform sampler2D fogtexture;
uniform sampler2D lighttexture;

void main()
{
    // Load textures into pixels
    vec4 objpixel = texture2D(objtexture, gl_TexCoord[0].xy);
    vec4 fogpixel = texture2D(fogtexture, gl_TexCoord[0].xy);
    vec4 lightpixel = texture2D(lighttexture, gl_TexCoord[0].xy);

    // Draw objects if a lighttexture pixel is fully-transparent
    // Otherwise, hide objects behind fog
    bool changealpha = bool(ceil(lightpixel.a));
    objpixel = vec4((lightpixel.rgb) * float(changealpha) + objpixel.rgb * float(!changealpha), lightpixel.a * float(changealpha) + objpixel.a * float(!changealpha));
    objpixel = mix(objpixel, fogpixel, fogpixel.a);

    gl_FragColor = objpixel;
}


When running the program from Code::Blocks, everything works fine. When running the executable directly, it can’t find the fog.frag file. I could include fog.frag in the download, but I’d rather not let users cheat by editing the shader. As a solution, I tried embedding the shader in my program like the example shown here.

After running it through a Notepad++ macro to eliminate any human error, my shader now looks like this:

const std::string shaderdata = "uniform sampler2D objtexture;"
"uniform sampler2D fogtexture;"
"uniform sampler2D lighttexture;"
""
"void main()"
"{"
"    // Load textures into pixels"
"    vec4 objpixel = texture2D(objtexture, gl_TexCoord[0].xy);"
"    vec4 fogpixel = texture2D(fogtexture, gl_TexCoord[0].xy);"
"    vec4 lightpixel = texture2D(lighttexture, gl_TexCoord[0].xy);"
"    "
"    // Draw objects if a lighttexture pixel is fully-transparent"
"    // Otherwise, hide objects behind fog"
"    bool changealpha = bool(ceil(lightpixel.a));"
"    objpixel = vec4((lightpixel.rgb) * float(changealpha) + objpixel.rgb * float(!changealpha), lightpixel.a * float(changealpha) + objpixel.a * float(!changealpha));"
"    objpixel = mix(objpixel, fogpixel, fogpixel.a);"
"    "
"    gl_FragColor = objpixel;"
"}";

Unfortunately, when I compile, I get this error:

Failed to compile fragment shader:
Fragment shader failed to compile with the following errors:
ERROR: 0:1: error(#131) Syntax error: pre-mature EOF parse error
ERROR: error(#273) 1 compilation errors.  No code generated

If I print out my shaderdata string, it looks just fine. If I remove all of the comments, line breaks and empty lines, I get the same exact error. Looking at the special characters in Notepad++, all I see are line breaks, tabs and spaces. Could someone please explain what I’m doing wrong?
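One frequent cause of this particular error (offered as a guess, not a confirmed diagnosis): the concatenated string literals contain no newline characters, so the first // comment extends to the end of the entire source and swallows the rest of the shader. Terminating each line with \n preserves the original line structure:

```cpp
#include <string>

// Sketch: embed the shader with explicit newlines so "//" comments
// end where the original source lines ended.
const std::string shaderdata =
    "uniform sampler2D objtexture;\n"
    "uniform sampler2D fogtexture;\n"
    "uniform sampler2D lighttexture;\n"
    "\n"
    "void main()\n"
    "{\n"
    "    // Load textures into pixels\n"
    "    vec4 objpixel = texture2D(objtexture, gl_TexCoord[0].xy);\n"
    "    vec4 fogpixel = texture2D(fogtexture, gl_TexCoord[0].xy);\n"
    "    vec4 lightpixel = texture2D(lighttexture, gl_TexCoord[0].xy);\n"
    "\n"
    "    // Draw objects if a lighttexture pixel is fully-transparent\n"
    "    bool changealpha = bool(ceil(lightpixel.a));\n"
    "    objpixel = vec4(lightpixel.rgb * float(changealpha)\n"
    "                        + objpixel.rgb * float(!changealpha),\n"
    "                    lightpixel.a * float(changealpha)\n"
    "                        + objpixel.a * float(!changealpha));\n"
    "    objpixel = mix(objpixel, fogpixel, fogpixel.a);\n"
    "    gl_FragColor = objpixel;\n"
    "}\n";
```

If the error persists even with the comments removed, the compiler may be receiving a truncated or empty string; printing the string’s length immediately before the compile call is a quick way to rule that out.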

The XML I am reading from is free of errors. However, when I systematically System.out.println each line of the web file, a good chunk of the beginning lines don’t show. I am wondering if the XML is just too big for the program to handle. If that’s the case, are there alternate ways of going about reading the XML?

Here is the code:

public static void main(String args[]) {
    String nextLine;
    URL url = null;
    URLConnection urlConn = null;
    InputStreamReader inStream = null;
    BufferedReader buff = null;
    try {
        System.out.println("okay");
        System.setProperty("http.proxyHost", "isaravapp01");
        System.setProperty("http.proxyPort", "80");
        url = new URL("http://media.vaticanradiowebcast.org/palxml/20121001.xml");
        urlConn = url.openConnection();
        inStream = new InputStreamReader(urlConn.getInputStream());
        buff = new BufferedReader(inStream);

        while (true) {
            nextLine = buff.readLine();
            if (nextLine != null) {
                System.out.println(nextLine);
            } else {
                break;
            }
        }

        DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
        DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
        Document doc = dBuilder.parse(urlConn.getInputStream());
    } catch (MalformedURLException e) {
        System.out.println("Please check the URL:" + e.toString());
    } catch (IOException e1) {
        System.out.println("Can't read from the Internet: " + e1.toString());
    } catch (Exception e2) {
        System.out.println("Trouble parsing XML" + e2.toString());
        e2.printStackTrace();
    }

    try {
        inStream.close();
        buff.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

And the error message I receive is:

XMLorg.xml.sax.SAXParseException: Premature end of file.
[Fatal Error] :1:1: Premature end of file.
org.xml.sax.SAXParseException: Premature end of file.
	at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source)
	at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(Unknown Source)
	at Main.main(Main.java:44)
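A likely cause, judging from the code alone (not verified against the poster’s environment): the while loop drains the connection’s input stream line by line, so the later dBuilder.parse(urlConn.getInputStream()) reads from an already-exhausted stream and the parser sees an empty document, hence “Premature end of file.” Reading the bytes once and reusing them for both the debug printout and the parse avoids the double read; a minimal sketch (the ByteArrayInputStream here stands in for the real urlConn.getInputStream()):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ReadOnce {
    // Drain a stream into a byte array so the same bytes can be
    // printed for debugging and then parsed.
    static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            out.write(chunk, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for urlConn.getInputStream(); use the real connection in practice.
        InputStream in = new ByteArrayInputStream(
                "<?xml version=\"1.0\"?><root><item>ok</item></root>".getBytes("UTF-8"));
        byte[] body = readAll(in);
        System.out.println(new String(body, "UTF-8"));   // the debug printout
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(body));  // parse the very same bytes
        System.out.println(doc.getDocumentElement().getTagName());
    }
}
```

Alternatively, drop the println loop entirely and pass the stream straight to the parser; the point is that the stream can only be consumed once.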

Problem

After a Bamboo Server restart, the application doesn’t start up and the following appears in <bamboo-home-directory>/logs/atlassian-bamboo.log:

2016-03-21 13:42:26,022 INFO [localhost-startStop-1] [lifecycle] Starting Bamboo 5.9.7 (build #5920 Wed Oct 14 07:27:00 BRT 2015) using Java 1.8.0_71 from Oracle Corporation
2016-03-21 13:42:26,022 INFO [localhost-startStop-1] [lifecycle] Real path of servlet context: /Users/bamboouser/Documents/Atlassian/bamboo/bamboo-5.9.7/install/atlassian-bamboo
2016-03-21 13:42:26,039 ERROR [localhost-startStop-1] [DefaultAtlassianBootstrapManager] Home is not configured properly: 
com.atlassian.config.ConfigurationException: Failed to parse config file: Error on line -1 of document  : Premature end of file. Nested exception: Premature end of file.
    at com.atlassian.config.xml.DefaultDom4jXmlConfigurationPersister.load(DefaultDom4jXmlConfigurationPersister.java:35)
    at com.atlassian.config.xml.DefaultDom4jXmlConfigurationPersister.load(DefaultDom4jXmlConfigurationPersister.java:65)
    at com.atlassian.config.ApplicationConfig.load(ApplicationConfig.java:365)
    at com.atlassian.config.bootstrap.DefaultAtlassianBootstrapManager.init(DefaultAtlassianBootstrapManager.java:68)
    at com.atlassian.bamboo.setup.BootstrapLoaderListener.contextInitialized(BootstrapLoaderListener.java:98)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5016)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5524)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1575)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1565)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.dom4j.DocumentException: Error on line -1 of document  : Premature end of file. Nested exception: Premature end of file.
    at org.dom4j.io.SAXReader.read(SAXReader.java:355)
    at org.dom4j.io.SAXReader.read(SAXReader.java:261)
    at com.atlassian.config.xml.AbstractDom4jXmlConfigurationPersister.loadDocument(AbstractDom4jXmlConfigurationPersister.java:68)
    at com.atlassian.config.xml.DefaultDom4jXmlConfigurationPersister.load(DefaultDom4jXmlConfigurationPersister.java:31)
    ... 13 more
2016-03-21 13:42:26,041 INFO [localhost-startStop-1] [lifecycle] Bamboo home directory: /Users/bamboouser/Documents/Atlassian/bamboo/bamboo-5.9.7/home
2016-03-21 13:42:26,042 INFO [localhost-startStop-1] [lifecycle] Default charset: UTF-8

Diagnosis

The <bamboo-home-directory>/bamboo.cfg.xml file is empty. This file cannot be empty: it contains important information such as the database connection, license, and paths to important directories.

Workaround

Force Bamboo to regenerate the bamboo.cfg.xml file. To do that we need to reinstall Bamboo and copy the file to our existing Bamboo Home directory. Please read all the steps carefully before going through the process.

  1. Create a new empty directory to use as the Bamboo Home directory. This is temporary.
  2. Create a new empty database. This is also temporary.
  3. Edit the <bamboo-installation-directory>/atlassian-bamboo/WEB-INF/classes/bamboo-init.properties file.
  4. Point bamboo.home to the new directory created in Step 1.

    ## You can specify your bamboo.home property here or in your system environment variables.
    
    #bamboo.home=/old/path/to/home/directory
    bamboo.home=/new/path/to/temporary/home/directory
  5. Go to <bamboo-installation-directory>/bin and start Bamboo.

    If you access Bamboo through the web browser, you should be prompted to set up a new Bamboo instance. This is what we want.

  6. Go through the setup wizard process but point Bamboo to the new database (created in Step 2) and not to the existing one.
  7. After finishing the setup process, stop Bamboo.
  8. Go to the new Bamboo Home directory and copy the bamboo.cfg.xml file.
  9. Put the bamboo.cfg.xml file inside the old Bamboo Home directory.

    IMPORTANT: don’t start Bamboo yet. Instead open up the bamboo.cfg.xml file.

  10. Edit the database connection properties to point to your existing Bamboo database.
  11. Go back to the <bamboo-installation-directory>/atlassian-bamboo/WEB-INF/classes/bamboo-init.properties file.
  12. Point bamboo.home back to the old directory.

    ## You can specify your bamboo.home property here or in your system environment variables.
    
    bamboo.home=/old/path/to/home/directory
  13. Start Bamboo again.
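The file-shuffling parts of the steps above can be condensed into a shell sketch. All paths here are throwaway placeholders created only so the sketch runs; substitute your real installation and home directories, and note that the setup-wizard steps (5–7) cannot be scripted:

```shell
# Placeholder directories standing in for the real install/old-home/temp-home.
BAMBOO_INSTALL=$(mktemp -d)/install
OLD_HOME=$(mktemp -d)/old-home
TMP_HOME=$(mktemp -d)/tmp-home
mkdir -p "$BAMBOO_INSTALL/atlassian-bamboo/WEB-INF/classes" "$OLD_HOME" "$TMP_HOME"
echo "bamboo.home=$OLD_HOME" > "$BAMBOO_INSTALL/atlassian-bamboo/WEB-INF/classes/bamboo-init.properties"

# Steps 3-4: point bamboo.home at the temporary home directory.
sed -i "s|^bamboo.home=.*|bamboo.home=$TMP_HOME|" \
    "$BAMBOO_INSTALL/atlassian-bamboo/WEB-INF/classes/bamboo-init.properties"

# Steps 5-7 happen interactively: start Bamboo, run the setup wizard against
# the temporary database, then stop Bamboo. This echo stands in for the
# bamboo.cfg.xml the wizard regenerates.
echo '<application-configuration/>' > "$TMP_HOME/bamboo.cfg.xml"

# Steps 8-9: copy the regenerated file into the old home directory.
cp "$TMP_HOME/bamboo.cfg.xml" "$OLD_HOME/bamboo.cfg.xml"

# Step 10 onward: edit the database properties in $OLD_HOME/bamboo.cfg.xml,
# point bamboo.home back at $OLD_HOME, then start Bamboo again.
cat "$OLD_HOME/bamboo.cfg.xml"
```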
