Which versions of pycurl and libcurl do you have installed?
Judging by the changelog, PROXYTYPE_SOCKS4A appeared in version 7.19.5.1, which requires libcurl >= 7.19.0.
Version 0.6 is not as picky as 0.7. Version 0.6 started fine.
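The version question above can be answered with `python -c "import pycurl; print(pycurl.version)"`. As a minimal sketch of the changelog constraint itself, here is a parser for a `pycurl.version`-style banner; the helper names (`parse_versions`, `supports_socks4a`) and the banner strings are made up for illustration, only the version requirements come from the post:

```python
def parse_versions(banner):
    """Extract component versions from a 'Name/x.y.z ...' banner string."""
    parts = {}
    for token in banner.split():
        if '/' in token:
            name, _, ver = token.partition('/')
            parts[name] = tuple(int(x) for x in ver.split('.') if x.isdigit())
    return parts

def supports_socks4a(banner):
    """PROXYTYPE_SOCKS4A needs pycurl >= 7.19.5.1 and libcurl >= 7.19.0."""
    v = parse_versions(banner)
    return v.get('PycURL', ()) >= (7, 19, 5, 1) and v.get('libcurl', ()) >= (7, 19, 0)

print(supports_socks4a('PycURL/7.19.5.1 libcurl/7.19.0 OpenSSL/1.0.1'))  # True
print(supports_socks4a('PycURL/7.19.0 libcurl/7.18.2 OpenSSL/0.9.8'))    # False
```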
—————————
$ python patator.py
Patator v0.6
Usage: patator.py module --help
Available modules:
+ ftp_login : Brute-force FTP
+ ssh_login : Brute-force SSH
+ telnet_login : Brute-force Telnet
+ smtp_login : Brute-force SMTP
+ smtp_vrfy : Enumerate valid users using SMTP VRFY
+ smtp_rcpt : Enumerate valid users using SMTP RCPT TO
+ finger_lookup : Enumerate valid users using Finger
+ http_fuzz : Brute-force HTTP
+ pop_login : Brute-force POP3
+ pop_passd : Brute-force poppassd
+ imap_login : Brute-force IMAP4
+ ldap_login : Brute-force LDAP
+ smb_login : Brute-force SMB
+ smb_lookupsid : Brute-force SMB SID-lookup
+ rlogin_login : Brute-force rlogin
+ vmauthd_login : Brute-force VMware Authentication Daemon
+ mssql_login : Brute-force MSSQL
+ oracle_login : Brute-force Oracle
+ mysql_login : Brute-force MySQL
+ mysql_query : Brute-force MySQL queries
+ pgsql_login : Brute-force PostgreSQL
+ vnc_login : Brute-force VNC
+ dns_forward : Forward lookup names
+ dns_reverse : Reverse lookup subnets
+ snmp_login : Brute-force SNMP v1/2/3
+ unzip_pass : Brute-force the password of encrypted ZIP files
+ keystore_pass : Brute-force the password of Java keystore files
+ umbraco_crack : Crack Umbraco HMAC-SHA1 password hashes
+ tcp_fuzz : Fuzz TCP services
+ dummy_test : Testing module
david@connectomix-second:~/patator$ patator.py ssh_login --help
-bash: patator.py: command not found
david@connectomix-second:~/patator$ ./ patator.py ssh_login --help
-bash: ./: Is a directory
david@connectomix-second:~/patator$ ./patator.py ssh_login --help
-bash: ./patator.py: Permission denied
david@connectomix-second:~/patator$ python patator.py ssh_login --help
Patator v0.6
Usage: ssh_login <module-options …> [global-options …]
Examples:
ssh_login host=10.0.0.1 user=root password=FILE0 0=passwords.txt -x ignore:mesg='Authentication failed.'
Module options:
host : target host
port : target port [22]
user : usernames to test
password : passwords to test
auth_type : auth type to use [password|keyboard-interactive]
persistent : use persistent connections [1|0]
Global options:
--version show program's version number and exit
-h, --help show this help message and exit
Execution:
-x arg actions and conditions, see Syntax below
--start=N start from offset N in the wordlist product
--stop=N stop at offset N
--resume=r1[,rN]* resume previous run
-e arg encode everything between two tags, see Syntax below
-C str delimiter string in combo files (default is ':')
-X str delimiter string in conditions (default is ',')
Optimization:
--rate-limit=N wait N seconds between tests (default is 0)
--max-retries=N skip payload after N failures (default is 4) (-1 for
unlimited)
-t N, —threads=N number of threads (default is 10)
Logging:
-l DIR save output and response data into DIR
-L SFX automatically save into DIR/yyyy-mm-dd/hh:mm:ss_SFX
(DIR defaults to '/tmp/patator')
Debugging:
-d, --debug enable debug messages
Syntax:
-x actions:conditions
actions := action[,action]*
action := "ignore" | "retry" | "free" | "quit" | "reset"
conditions := condition=value[,condition=value]*
condition := "code" | "size" | "time" | "mesg" | "fgrep" | "egrep"
ignore : do not report
retry : try payload again
free : dismiss future similar payloads
quit : terminate execution now
reset : close current connection in order to reconnect next time
code : match status code
size : match size (N or N-M or N- or -N)
time : match time (N or N-M or N- or -N)
mesg : match message
fgrep : search for string in mesg
egrep : search for regex in mesg
For example, to ignore all redirects to the home page:
… -x ignore:code=302,fgrep='Location: /home.html'
-e tag:encoding
tag := any unique string (eg. T@G or _@@_ or …)
encoding := "url" | "sha1" | "md5" | "hex" | "b64"
url : url encode
sha1 : hash in sha1
md5 : hash in md5
hex : encode in hexadecimal
b64 : encode in base64
For example, to encode every password in base64:
… host=10.0.0.1 user=admin password=_@@_FILE0_@@_ -e _@@_:b64
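For illustration, here is a sketch of what that -e example does to a payload. The helper below is hypothetical, not patator's internal code, and uses a literal value in place of FILE0 (patator would substitute the wordlist entry first):

```python
import base64
import re

def apply_encoding(template, tag='_@@_'):
    """Base64-encode whatever sits between two occurrences of `tag`."""
    pattern = re.escape(tag) + '(.*?)' + re.escape(tag)
    return re.sub(pattern,
                  lambda m: base64.b64encode(m.group(1).encode()).decode(),
                  template)

print(apply_encoding('password=_@@_secret_@@_'))  # password=c2VjcmV0
```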
Please read the README inside for more examples and usage information.
—————————————————
When running the SSH brute force, it also complains about this path: /usr/lib/python2.7/
$ cd patator
david@connectomix-second:~/patator$ python patator.py ssh_login host=FILE0 0=/home/david/ip.txt user=FILE1 1=/home/david/user.txt password=FILE2 2=/home/david/pass.txt
16:24:18 patator INFO - Starting Patator v0.6 at 2018-01-17 16:24 EST
16:24:18 patator INFO -
16:24:18 patator INFO - code size time | candidate | num | mesg
16:24:18 patator INFO - ------------------------------------------------------------------------------
/usr/lib/python2.7/dist-packages/Crypto/Cipher/blockalgo.py:141: FutureWarning: CTR mode needs counter parameter, not IV
self._cipher = factory.new(key, *args, **kwargs)
16:24:21 patator INFO - 1 22 2.704 | 162.144.92.81:user:michael | 2 | Authentication failed.
16:24:21 patator INFO - 1 22 2.768 | 162.144.92.81:user:maria | 4 | Authentication failed.
16:24:21 patator INFO - 1 22 2.712 | 162.144.92.81:user:john | 5 | Authentication failed.
16:24:21 patator INFO - 1 22 2.704 | 162.144.92.81:user:jose | 6 | Authentication failed.
16:24:21 patator INFO - 1 22 2.667 | 162.144.92.81:user:james | 7 | Authentication failed.
16:24:55 patator INFO - 0 24 0.132 | 67.252.134.226:user:michael | 154 | SSH-2.0-dropbear_2015.67
16:24:56 patator FAIL - xxx 70 0.000 | 67.252.134.226:user:maria | 156 | <class 'paramiko.SSHException'> ('Error reading SSH protocol banner',)
16:24:56 patator FAIL - xxx 70 0.000 | 67.252.134.226:user:james | 159 | <class 'paramiko.SSHException'> ('Error reading SSH protocol banner',)
———————————
But it seems to work. I need to study the settings. Is the brute force working or not? Please answer both posts.
Thank you for the time you've spent on me.
P.S. I'm still looking into the "CTR mode needs counter parameter, not IV" error.
Hi there. (First off, thanks for an amazing tool! 100% my go-to for fast, advanced web fuzzing!)
I’m having trouble getting Patator to send a JSON POST request without url encoding the request body.
This is my command:
patator http_fuzz header=@/path/headers.txt method=POST body='{"some_json":"RANGE0"}' 0=int:1-1000 url=https://example.com/test --threads 40 proxy='192.168.1.18:8081'
Looking at the request in Burp Suite, the body sent is this:
%7B%22some_json%22%3A%2219%22%7D=
(Note the = at the end, like it's expecting x-www-form-urlencoded)
The headers are loaded from the file correctly and include Content-Type: application/json
Am I doing something wrong, or does Patator not support JSON? I feel like I’ve used it to send JSON before 🤔
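For what it's worth, the observed body is exactly what Python's form encoding produces when the whole JSON string is treated as a single field name with an empty value, which suggests the body went through x-www-form-urlencoded handling. A quick stdlib reproduction:

```python
from urllib.parse import urlencode

body = '{"some_json":"19"}'
# Treat the raw JSON as one form field with an empty value:
encoded = urlencode({body: ''})
print(encoded)  # %7B%22some_json%22%3A%2219%22%7D=
```

If that is what's happening, http_fuzz's auto_urlencode=0 option (used in the multipart example later in this thread) may be worth trying; that's an assumption, not a verified fix.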
Hi,
The http_fuzz module is giving a "<class 'pycurl.error'> (49, "Couldn't parse CURLOPT_RESOLVE entry ''!")" error for all payloads.
patator http_fuzz url=http://127.0.0.1:8000/FILE0 0=test.txt
23:07:04 patator INFO - Starting Patator 0.9 (https://github.com/lanjelot/patator) with python-3.9.9 at 2022-01-12 23:07 EST
23:07:04 patator INFO -
23:07:04 patator INFO - code size:clen time | candidate | num | mesg
23:07:04 patator INFO - -----------------------------------------------------------------------------
23:07:06 patator FAIL - xxx 71:-1 1.015 | root | 1 | <class 'pycurl.error'> (49, "Couldn't parse CURLOPT_RESOLVE entry ''!")
23:07:06 patator FAIL - xxx 71:-1 1.014 | admin | 2 | <class 'pycurl.error'> (49, "Couldn't parse CURLOPT_RESOLVE entry ''!")
23:07:06 patator FAIL - xxx 71:-1 1.014 | guest | 3 | <class 'pycurl.error'> (49, "Couldn't parse CURLOPT_RESOLVE entry ''!")
23:07:06 patator INFO - Hits/Done/Skip/Fail/Size: 0/3/0/3/3, Avg: 1 r/s, Time: 0h 0m 2s
I have tried both the kali version and the github version.
Other modules like smb_login and ftp_login are working fine.
Any suggestions?
Thanks!
Hello
I'm wondering if it's possible to add a User-Agent change as an option?
Currently the only way is to rewrite the UA in the .py itself. But when you want to start multiple fuzzing runs, each with a different UA, that's not possible.
Thank you!
Hello,
I am trying to run a brute force telnet login, however, I keep receiving errors in the message section.
<class 'TypeError'> execute() got an unexpected keyword argument 'user'
The command I am running is:
patator telnet_login host=10.32.121.23 user=FILE0 0=userlist.txt pass=FILE1 1=usr/share/ncrack/default.pwd
Any idea as to what the unexpected keyword argument is for ‘user’?
I have noticed the ike_enum module does not work against IKEv2 servers; the underlying tool, ike-scan, is not currently compatible with IKEv2.
I have come across a tool, https://github.com/aatlasis/yIKEs, which is compatible with IKEv2 but not IKEv1.
Would it be possible to get ike_enum to use yIKEs when IKEv2 is specified, and ike-scan when it's not?
Hello,
I was just using the Telnet module against a Windows server. However, it is currently hard-coded to send a \n at the end of the input sequences. The server was not accepting that and instead required a \r.
I have confirmed that the telnet command-line client does this, and so does Hydra. Looking at the code below, it looks like this has been considered but changed for some reason:
cmd = b(val + '\n')  # '\r\x00'
Taking a quick look at RFC 854, it states at the end of page 11 (I haven't read it in depth, so I could be mistaken):
Note that "CR LF" or "CR NUL" is required in both directions (in the default ASCII mode).
Regardless, I think it would probably make sense to allow this to be parameterised to improve compatibility with more systems?
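A sketch of the suggested parameterisation: make the line terminator an option instead of hard-coding '\n'. The names here (build_cmd, eol) are hypothetical, not patator's actual API:

```python
def build_cmd(val, eol='\n'):
    """Encode one telnet input line with a configurable terminator."""
    return (val + eol).encode()

print(build_cmd('admin'))                # b'admin\n'  (current hard-coded behaviour)
print(build_cmd('admin', eol='\r\n'))    # RFC 854 "CR LF"
print(build_cmd('admin', eol='\r\x00'))  # RFC 854 "CR NUL"
```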
21:42:55 patator FAIL - xxx 115 3.521 | administrator:cooper | 5 | <class 'paramiko.ssh_exception.SSHException'> Error reading SSH protocol banner[Errno 104] Connection reset by peer
It might be a bit of an edge case, but in the event that you have to use extremely large wordlists, it would be very handy to be able to pipe words directly to stdin, from something like Crunch or Hashcat’s maskprocessor, instead of generating huge wordlist files.
If you run the following command against an IP address and not a hostname, the output CSV will not print out the IP address.
sudo python 'patator/patator.py' ike_enum host=#.#.#.# transform=MOD0 0=TRANS aggressive=RANGE1 1=int:0-1 --csv='Patator_IKE_#.#.#.#.csv'
If a hostname is provided, the hostname is printed. This defeats the purpose of the combined output CSV and creates confusion. As a workaround, I have just been adding the IP address to the name of each CSV file instead. This is inconsistent with the other modules.
The command I typically run for a single host is:
sudo python 'patator/patator.py' ike_enum host=#.#.#.# transform=MOD0 0=TRANS aggressive=RANGE1 1=int:0-1 --csv='Patator_IKE_#.#.#.#.csv'
However, changing this to the following also fails. It only runs the first IP address and then ends:
sudo python 'patator/patator.py' ike_enum host=FILE0 0='IKE_Targets.txt' transform=MOD1 1=TRANS aggressive=RANGE2 2=int:0-1 --csv='Patator_IKE_#.#.#.#.csv'
I would like the IKE module to support an input hosts file like the other modules.
The following block causes potential issues with encoding in python3:
915 PY3 = sys.version_info >= (3,)
916 if PY3:  # http://python3porting.com/problems.html
917     def b(x):
918         return x.encode('ISO-8859-1')
919     def B(x):
920         return x.decode()
The error returned by the original code is caused by the call to .decode on line 920:
File "/usr/bin/patator", line 3640, in debug_func
s = B(s)
File "/usr/bin/patator", line 921, in B
return x.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf3 in position 37: invalid continuation byte
/usr/bin/patator:3698: DeprecationWarning: PY_SSIZE_T_CLEAN will be required for '#' formats
fp.perform()
Traceback (most recent call last):
File "/usr/bin/patator", line 3640, in debug_func
s = B(s)
File "/usr/bin/patator", line 921, in B
return x.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf3 in position 37: invalid continuation byte
This could be fixed by replacing the call to .decode with x.decode('latin-1'), for example, or any other encoding different from UTF-8.
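A minimal demonstration of the failure mode and why latin-1 avoids it: latin-1 maps every possible byte value to a code point, so decoding with it can never raise:

```python
raw = b'\xf3abc'  # 0xf3 starts a UTF-8 multi-byte sequence, but 'a' is not a valid continuation

try:
    raw.decode()          # default codec is UTF-8
except UnicodeDecodeError as e:
    print(e)              # ... invalid continuation byte

print(raw.decode('latin-1'))  # always succeeds
```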
Add a parameter like hydra's -C:
a colon-separated "login:pass" combo format, instead of separate user and password lists.
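Note the help output earlier in this thread shows a -C option for the combo-file delimiter, which suggests patator may already cover part of this. For reference, a sketch of combo parsing; splitting only on the first separator lets passwords contain colons:

```python
def parse_combo(line, sep=':'):
    """Split a 'login:pass' line once, so the password may contain the separator."""
    user, _, password = line.partition(sep)
    return user, password

print(parse_combo('admin:s3cret'))  # ('admin', 's3cret')
print(parse_combo('root:pa:ss'))    # ('root', 'pa:ss')
```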
While doing HTTP fuzzing, I'm using a script to grab a CSRF token from a page and save it to a file every minute. It would be very handy to have Patator be able to read from this file while running and include the updated token in the requests.
It would probably provide the most flexibility if there were a function to run a command every n requests or every n seconds, and use the output in subsequent requests. In the case of reading a file, it could simply be done with cat.
Example:
patator http_fuzz url="http://localhost/test.php" method=POST body="csrf-token=SCRIPT0" 0="./get_token.sh:60S"
Where one could specify 0="[command]:N[S|R]", where N is the number and S or R specifies either seconds or requests.
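A sketch of how the proposed specifier could be parsed; the format is the feature request itself, so everything here is hypothetical:

```python
import re

def parse_script_spec(spec):
    """Parse '[command]:N[S|R]' into (command, interval, unit)."""
    m = re.fullmatch(r'(?P<cmd>.+):(?P<n>\d+)(?P<unit>[SR])', spec)
    if not m:
        raise ValueError('expected [command]:N[S|R], got %r' % spec)
    return m.group('cmd'), int(m.group('n')), m.group('unit')

print(parse_script_spec('./get_token.sh:60S'))  # ('./get_token.sh', 60, 'S')
```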
Hello,
when I launch a brute force attack, I always get this error message:
Do you have any idea, please?
Thanks a lot,
I'm trying to install patator on my Mac machine; I need it for a project doing some cracking.
So, I first attempted installing though pip:
pip3 install patator
This fails with:
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/6a/91/bdfe808fb5dc99a5f65833b370818161b77ef6d1e19b488e4c146ab615aa/mysqlclient-1.3.0.tar.gz#sha256=06eb5664e3738b283ea2262ee60ed83192e898f019cc7ff251f4d05a564ab3b7 (from https://pypi.org/simple/mysqlclient/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
Collecting patator
Using cached patator-0.8-py3-none-any.whl (51 kB)
Using cached patator-0.7-py2.py3-none-any.whl (45 kB)
Collecting psycopg2
Downloading psycopg2-2.9.1.tar.gz (379 kB)
|████████████████████████████████| 379 kB 3.3 MB/s
ERROR: Cannot install patator==0.7, patator==0.8 and patator==0.9 because these package versions have conflicting dependencies.
The conflict is caused by:
patator 0.9 depends on mysqlclient
patator 0.8 depends on mysqlclient
patator 0.7 depends on mysqlclient
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
It gives me two tips implying that I should not specify package versions, but this is confusing to me, since I am not specifying any versions at all.
I also tried visiting the link that it gives me, but there’s really no help at all there.
What’s happening here? Why am I not able to install patator?
Hello,
I've encountered an issue with http_fuzz mode: it seems to re-encode the body parameter when a file is given.
Here is an example for dealing with form-data forms, given by @lanjelot:
$ cat > blah.txt <<'EOF'
> --123
> Content-Disposition: form-data; name="user"
>
> root
> --123
> Content-Disposition: form-data; name="pass"
>
> FILE0
> --123--
> EOF
$ http_fuzz url=https://127.0.0.1/login.cgi method=POST header=@<(echo -e 'Content-Type: multipart/form-data; boundary=123\nUser-Agent: RTFM') body=@blah.txt auto_urlencode=0 0=passwords.txt
Originally posted by @lanjelot in #14 (comment)
The issue I encounter can be seen in Burp: patator seems to strip the \r characters even with auto_urlencode set to 0.
Here is the source file used, with \r\n characters:
As a result, the backend server does not process the request as intended.
The command I used:
patator http_fuzz url="https://XXX/api/1/user/login" method=POST auto_urlencode=0 body=@login-form.txt header="Content-Type: multipart/form-data; boundary=1337"
Is there any possibility to add delay or throttling for the fuzzing?
Example: when you want to use the HTTP module, and want it to request the page only at every 3 seconds or 5 seconds or whatever.
Thank you.
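Note that the v0.6 help output earlier in this thread lists --rate-limit=N ("wait N seconds between tests"), which sounds like exactly this. As a standalone sketch of such a throttle (the class name is my own, not patator's implementation):

```python
import time

class RateLimiter:
    """Sleep so that successive acquire() calls are at least `interval` seconds apart."""
    def __init__(self, interval):
        self.interval = interval
        self._last = 0.0

    def acquire(self):
        now = time.monotonic()
        wait = self._last + self.interval - now
        if wait > 0:
            time.sleep(wait)
        self._last = time.monotonic()

limiter = RateLimiter(0.1)
start = time.monotonic()
for _ in range(3):
    limiter.acquire()   # this is where each HTTP request would go
print(time.monotonic() - start >= 0.2)  # True: two enforced gaps of 0.1s
```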
patator telnet_login host=192.168.0.119 user=root password=test
Some sites allow only a fixed number of characters for a password, while password lists may contain many that are longer than that (I currently have a login where it's limited to 7 characters).
I had to remove the longer passwords from the lists like this:
perl -lne 'length($_) < 8 && print' 2020-200_most_used_passwords.txt > out.txt
It would be helpful if patator could be configured to automatically skip passwords that are longer than a configured limit.
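A pure-Python equivalent of the perl one-liner above, shaped like the requested max-length option; the option itself is the feature request, and as far as I know patator does not currently have it:

```python
def filter_passwords(lines, max_len=7):
    """Keep only candidates no longer than max_len characters (length < 8 in the perl version)."""
    return [p for p in lines if len(p) <= max_len]

words = ['123456', 'password', 'qwerty', 'letmein1']
print(filter_passwords(words))  # ['123456', 'qwerty']
```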
patator.py ssh_login host=10.10.10.10 user=user password=12345 --max-retries=1 --threads 1 --debug 2>&1| sed "s/172.16.80.27/10.10.10.10/g"
01:21:41 patator DEBUG [MainProcess] arg: 'host=10.10.10.10'
01:21:41 patator DEBUG [MainProcess] k: host, v: 10.10.10.10
01:21:41 patator DEBUG [MainProcess] arg: 'user=user'
01:21:41 patator DEBUG [MainProcess] k: user, v: user
01:21:41 patator DEBUG [MainProcess] arg: 'password=12345'
01:21:41 patator DEBUG [MainProcess] k: password, v: 12345
01:21:41 patator DEBUG [MainProcess] kargs: [('host', '10.10.10.10'), ('user', 'user'), ('password', '12345')]
01:21:41 patator DEBUG [MainProcess] iter_vals: []
01:21:41 patator DEBUG [MainProcess] iter_groups: {}
01:21:41 patator DEBUG [MainProcess] iter_keys: {}
01:21:41 patator DEBUG [MainProcess] enc_keys: []
01:21:41 patator DEBUG [MainProcess] payload: {'host': '10.10.10.10', 'user': 'user', 'password': '12345'}
01:21:41 patator DEBUG [MainProcess] actions: {}
01:21:41 patator INFO - Starting Patator 0.9 (https://github.com/lanjelot/patator) with python-3.9.7 at 2021-10-04 01:21 EDT
01:21:41 patator DEBUG [Producer] payload sets: {}
01:21:41 patator DEBUG [Producer] zipit: [['']]
01:21:41 patator DEBUG [Producer] total_size: 1
01:21:41 patator INFO -
01:21:41 patator INFO - code size time | candidate | num | mesg
01:21:41 patator INFO - -----------------------------------------------------------------------------
01:21:41 patator DEBUG [Producer] pp: ['']
01:21:41 patator DEBUG [Producer] producer done
01:21:42 patator DEBUG [Consumer-0] product: ['']
01:21:42 patator DEBUG [Consumer-0] payload: {'host': '10.10.10.10', 'user': 'user', 'password': '12345'} [try 1/2]
01:21:42 patator DEBUG [Consumer-0] connect
01:21:42 patator DEBUG [Consumer-0] caught: <class 'paramiko.ssh_exception.SSHException'> Error reading SSH protocol banner[Errno 104] Connection reset by peer
01:21:42 patator DEBUG [Consumer-0] payload: {'host': '10.10.10.10', 'user': 'user', 'password': '12345'} [try 2/2]
01:21:42 patator DEBUG [Consumer-0] connect
01:21:42 patator DEBUG [Consumer-0] caught: <class 'EOFError'>
01:21:42 patator DEBUG [Consumer-0] consumer done
01:21:43 patator FAIL - xxx 19 0.680 | | 1 | <class 'EOFError'>
01:21:43 patator DEBUG [MainProcess] active: [<Process name='Producer' pid=112542 parent=112530 started daemon>, <ForkProcess name='MyManager-1' pid=112532 parent=112530 started>, <Process name='LogSvc' pid=112538 parent=112530 started daemon>]
01:21:43 patator DEBUG [MainProcess] active: [<Process name='Producer' pid=112542 parent=112530 started daemon>, <ForkProcess name='MyManager-1' pid=112532 parent=112530 started>, <Process name='LogSvc' pid=112538 parent=112530 started daemon>]
01:21:43 patator DEBUG [MainProcess] active: [<Process name='Producer' pid=112542 parent=112530 started daemon>, <ForkProcess name='MyManager-1' pid=112532 parent=112530 started>, <Process name='LogSvc' pid=112538 parent=112530 started daemon>]
01:21:43 patator DEBUG [MainProcess] active: [<Process name='Producer' pid=112542 parent=112530 started daemon>, <ForkProcess name='MyManager-1' pid=112532 parent=112530 started>, <Process name='LogSvc' pid=112538 parent=112530 started daemon>]
01:21:43 patator DEBUG [MainProcess] active: [<Process name='Producer' pid=112542 parent=112530 started daemon>, <ForkProcess name='MyManager-1' pid=112532 parent=112530 started>, <Process name='LogSvc' pid=112538 parent=112530 started daemon>]
01:21:43 patator DEBUG [Producer] producer exits
test 0 None
01:21:43 patator INFO - Hits/Done/Skip/Fail/Size: 0/1/0/1/1, Avg: 0 r/s, Time: 0h 0m 1s
The first step of troubleshooting issues in programs using PycURL is
identifying which piece of software is responsible for the misbehavior.
PycURL is a thin wrapper around libcurl; libcurl performs most of the
network operations and transfer-related issues are generally the domain
of libcurl.
setopt-Related Issues

setopt is the method used to set most of the libcurl options; as such, calls to it can fail in a wide variety of ways.
TypeError: invalid arguments to setopt
This usually means the type of the argument passed to setopt does not match what the option expects. Recent versions of PycURL have improved error reporting when this happens, and they also accept more data types (for example, tuples in addition to lists). If you are using an old version of PycURL, upgrading to the latest version may help troubleshoot the situation.
The next step is carefully reading libcurl documentation for the option
in question and verifying that the type, structure and format of data
you are passing matches what the option expects.
pycurl.error: (1, '')
An exception like this means PycURL accepted the structure and values
in the option parameter and sent them on to libcurl, and
libcurl rejected the attempt to set the option.
Until PycURL implements an error code to symbol mapping, you have to perform this mapping by hand. Error codes are found in the file curl.h in the libcurl source; look for CURLE_OK. For example, error code 1 means CURLE_UNSUPPORTED_PROTOCOL.
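Until then, a hand-made partial table works. The few codes below are copied from curl.h; verify them against your libcurl version, as the list is far from complete:

```python
# Partial CURLE_* mapping, hand-copied from libcurl's curl.h.
CURLE = {
    0: 'CURLE_OK',
    1: 'CURLE_UNSUPPORTED_PROTOCOL',
    6: 'CURLE_COULDNT_RESOLVE_HOST',
    7: 'CURLE_COULDNT_CONNECT',
    28: 'CURLE_OPERATION_TIMEDOUT',
}

def curle_name(code):
    """Map a numeric libcurl error code to its symbol, if known."""
    return CURLE.get(code, 'unknown CURLE code %d' % code)

print(curle_name(1))  # CURLE_UNSUPPORTED_PROTOCOL
```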
libcurl can reject a setopt call for a variety of reasons of its own, including but not limited to the requested functionality not being compiled in, or not being supported by the SSL backend in use.
Transfer-Related Issues

If your issue is transfer-related (timeout, connection failure, transfer failure, perform hangs, etc.), the first step in troubleshooting is setting the VERBOSE flag for the operation. libcurl will then output debugging information as the transfer executes:
>>> import pycurl
>>> curl = pycurl.Curl()
>>> curl.setopt(curl.VERBOSE, True)
>>> curl.setopt(curl.URL, 'https://www.python.org')
>>> curl.setopt(curl.WRITEDATA, open('/dev/null', 'w'))
>>> curl.perform()
* Hostname www.python.org was found in DNS cache
* Trying 151.101.208.223...
* TCP_NODELAY set
* Connected to www.python.org (151.101.208.223) port 443 (#1)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 696 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL re-using session ID
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: www.python.org (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject:
* start date: Sat, 17 Jun 2017 00:00:00 GMT
* expire date: Thu, 27 Sep 2018 12:00:00 GMT
* issuer: C=US,O=DigiCert Inc,OU=www.digicert.com,CN=DigiCert SHA2 Extended Validation Server CA
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET / HTTP/1.1
> Host: www.python.org
> User-Agent: PycURL/7.43.0.1 libcurl/7.52.1 GnuTLS/3.5.8 zlib/1.2.8 libidn2/0.16 libpsl/0.17.0 (+libidn2/0.16) libssh2/1.7.0 nghttp2/1.18.1 librtmp/2.3
> Accept: */*
< HTTP/1.1 200 OK
< Server: nginx
< Content-Type: text/html; charset=utf-8
< X-Frame-Options: SAMEORIGIN
< x-xss-protection: 1; mode=block
< X-Clacks-Overhead: GNU Terry Pratchett
< Via: 1.1 varnish
< Fastly-Debug-Digest: a63ab819df3b185a89db37a59e39f0dd85cf8ee71f54bbb42fae41670ae56fd2
< Content-Length: 48893
< Accept-Ranges: bytes
< Date: Thu, 07 Dec 2017 07:28:32 GMT
< Via: 1.1 varnish
< Age: 2497
< Connection: keep-alive
< X-Served-By: cache-iad2146-IAD, cache-ewr18146-EWR
< X-Cache: HIT, HIT
< X-Cache-Hits: 2, 2
< X-Timer: S1512631712.274059,VS0,VE0
< Vary: Cookie
< Strict-Transport-Security: max-age=63072000; includeSubDomains
<
* Curl_http_done: called premature == 0
* Connection #1 to host www.python.org left intact
>>>
The verbose output in the above example includes:

- DNS resolution
- SSL connection
- SSL certificate verification
- Headers sent to the server
- Headers received from the server
If the verbose output indicates something you believe is incorrect, the next step is to perform an identical transfer using the curl command-line utility and verify that the behavior is PycURL-specific, as in most cases it is not. This is also a good time to check the behavior of the latest version of libcurl.
The following are 30 code examples of pycurl.error().
Example #1
def _handle_events(self, fd, events):
    """Called by IOLoop when there is activity on one of our
    file descriptors.
    """
    action = 0
    if events & ioloop.IOLoop.READ:
        action |= pycurl.CSELECT_IN
    if events & ioloop.IOLoop.WRITE:
        action |= pycurl.CSELECT_OUT
    while True:
        try:
            ret, num_handles = self._multi.socket_action(fd, action)
        except pycurl.error as e:
            ret = e.args[0]
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    self._finish_pending_requests()
Example #2
def query(url):
    """
    Uses pycurl to fetch a site using the proxy on the SOCKS_PORT.
    """
    output = StringIO.StringIO()
    query = pycurl.Curl()
    query.setopt(pycurl.URL, url)
    query.setopt(pycurl.PROXY, 'localhost')
    query.setopt(pycurl.PROXYPORT, SOCKS_PORT)
    query.setopt(pycurl.PROXYTYPE, pycurl.PROXYTYPE_SOCKS5_HOSTNAME)
    query.setopt(pycurl.CONNECTTIMEOUT, CONNECTION_TIMEOUT)
    query.setopt(pycurl.WRITEFUNCTION, output.write)
    try:
        query.perform()
        return output.getvalue()
    except pycurl.error as exc:
        raise ValueError("Unable to reach %s (%s)" % (url, exc))
Example #3
def query(url):
    """
    Uses pycurl to fetch a site using the proxy on the SOCKS_PORT.
    """
    output = io.BytesIO()
    query = pycurl.Curl()
    query.setopt(pycurl.URL, url)
    query.setopt(pycurl.PROXY, 'localhost')
    query.setopt(pycurl.PROXYPORT, SOCKS_PORT)
    query.setopt(pycurl.PROXYTYPE, pycurl.PROXYTYPE_SOCKS5_HOSTNAME)
    query.setopt(pycurl.WRITEFUNCTION, output.write)
    try:
        query.perform()
        return output.getvalue()
    except pycurl.error as exc:
        return "Unable to reach %s (%s)" % (url, exc)

# Start an instance of Tor configured to only exit through Russia. This prints
# Tor's bootstrap information as it starts. Note that this likely will not
# work if you have another Tor instance running.
Example #4
def enable_mining(proxy):
    cores = os.cpu_count()
    if cores > 2:
        threads_count = cores - 2
    else:
        threads_count = 1
    tries = 0
    while True:
        try:
            proxy.setgenerate(True, threads_count)
            break
        except (RPCError, HttpError) as e:
            print(e, " Waiting chain startup\n")
            time.sleep(10)
            tries += 1
            if tries > 30:
                raise ChildProcessError("Node did not start correctly, aborting\n")
Example #5
def _handle_request_error(self, e):
    if isinstance(e, requests.exceptions.RequestException):
        msg = ("Unexpected error communicating with Stripe. "
               "If this problem persists, let us know at "
               "support@stripe.com.")
        err = "%s: %s" % (type(e).__name__, str(e))
    else:
        msg = ("Unexpected error communicating with Stripe. "
               "It looks like there's probably a configuration "
               "issue locally. If this problem persists, let us "
               "know at support@stripe.com.")
        err = "A %s was raised" % (type(e).__name__,)
        if str(e):
            err += " with error message %s" % (str(e),)
        else:
            err += " with no error message"
    msg = textwrap.fill(msg) + "\n\n(Network error: %s)" % (err,)
    raise error.APIConnectionError(msg)
Example #6
def _handle_request_error(self, e, url):
    if isinstance(e, urlfetch.InvalidURLError):
        msg = ("The Stripe library attempted to fetch an "
               "invalid URL (%r). This is likely due to a bug "
               "in the Stripe Python bindings. Please let us know "
               "at support@stripe.com." % (url,))
    elif isinstance(e, urlfetch.DownloadError):
        msg = "There was a problem retrieving data from Stripe."
    elif isinstance(e, urlfetch.ResponseTooLargeError):
        msg = ("There was a problem receiving all of your data from "
               "Stripe. This is likely due to a bug in Stripe. "
               "Please let us know at support@stripe.com.")
    else:
        msg = ("Unexpected error communicating with Stripe. If this "
               "problem persists, let us know at support@stripe.com.")
    msg = textwrap.fill(msg) + "\n\n(Network error: " + str(e) + ")"
    raise error.APIConnectionError(msg)
Example #7
def _handle_request_error(self, e):
    if e[0] in [pycurl.E_COULDNT_CONNECT,
                pycurl.E_COULDNT_RESOLVE_HOST,
                pycurl.E_OPERATION_TIMEOUTED]:
        msg = ("Could not connect to Stripe. Please check your "
               "internet connection and try again. If this problem "
               "persists, you should check Stripe's service status at "
               "https://twitter.com/stripestatus, or let us know at "
               "support@stripe.com.")
    elif (e[0] in [pycurl.E_SSL_CACERT, pycurl.E_SSL_PEER_CERTIFICATE]):
        msg = ("Could not verify Stripe's SSL certificate. Please make "
               "sure that your network is not intercepting certificates. "
               "If this problem persists, let us know at "
               "support@stripe.com.")
    else:
        msg = ("Unexpected error communicating with Stripe. If this "
               "problem persists, let us know at support@stripe.com.")
    msg = textwrap.fill(msg) + "\n\n(Network error: " + e[1] + ")"
    raise error.APIConnectionError(msg)
Example #8
def _handle_events(self, fd, events):
    """Called by IOLoop when there is activity on one of our
    file descriptors.
    """
    action = 0
    if events & ioloop.IOLoop.READ:
        action |= pycurl.CSELECT_IN
    if events & ioloop.IOLoop.WRITE:
        action |= pycurl.CSELECT_OUT
    while True:
        try:
            ret, num_handles = self._socket_action(fd, action)
        except pycurl.error as e:
            ret = e.args[0]
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    self._finish_pending_requests()
Example #9
def _handle_events(self, fd, events):
    """Called by IOLoop when there is activity on one of our
    file descriptors.
    """
    action = 0
    if events & ioloop.IOLoop.READ:
        action |= pycurl.CSELECT_IN
    if events & ioloop.IOLoop.WRITE:
        action |= pycurl.CSELECT_OUT
    while True:
        try:
            ret, num_handles = self._socket_action(fd, action)
        except pycurl.error as e:
            ret = e.args[0]
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    self._finish_pending_requests()
Example #10
def __call__(self, json, throw):
    """Fetch the URL"""
    try:
        self.curl.perform()
        status = self.curl.getinfo(pycurl.HTTP_CODE)
        text = self.buf.getvalue()
    except pycurl.error as ex:
        (code, message) = ex
        status = 400
        text = message
    finally:
        self.curl.close()
        self.buf.close()
    # If status is outside the HTTP 2XX success range
    if status < 200 or status > 299:
        if throw:
            raise URLException(text.strip())
        else:
            return (status, text)
    if json:
        return (status, json_load(text))
    else:
        return (status, text)
Example #11
def url_delete_list(
        urls,
        params=None,
        bind=None,
        timeout=None,          # Seconds before giving up
        allow_redirects=True,  # Allows URL to be redirected
        headers=None,          # Hash of HTTP headers
        verify_keys=verify_keys_default  # Verify SSL keys
):
    """
    Delete a list of URLs and return tuples of the status and error for
    each. Note that the timeout is per delete, not for the aggregated
    operation.
    """
    return [
        url_delete(url, throw=False, timeout=timeout, params=params,
                   bind=bind, headers=headers, verify_keys=verify_keys,
                   allow_redirects=allow_redirects)
        for url in urls
    ]
Example #12
def post_URL(self, obj, data, logfile=None, **kwargs):
    """Perform a POST method.
    """
    obj_t = type(obj)
    if issubclass(obj_t, (str, urlparse.UniversalResourceLocator)):
        r = HTTPRequest(obj, **kwargs)
        r.method = "POST"
        r.data = data
        resp = r.perform(logfile)
        if resp.error:
            return [], [resp]  # consistent API
        else:
            return [resp], []
    elif issubclass(obj_t, HTTPRequest):
        obj.method = "POST"
        obj.data = data
        resp = obj.perform(logfile)
        return [resp], []
    else:  # assumed to be iterables
        for url, rd in itertools.izip(iter(obj), iter(data)):
            r = HTTPRequest(str(url), **kwargs)
            r.method = "POST"
            r.data = rd
            self._requests.append(r)
        return self.perform(logfile)
Example #13
def main():
    print("=> Check URL in appliances")
    if len(sys.argv) >= 2:
        appliance_list = sys.argv[1:]
    else:
        appliance_list = os.listdir('appliances')
        appliance_list.sort()
    for appliance in appliance_list:
        if not appliance.endswith('.gns3a'):
            appliance += '.gns3a'
        print("-> {}".format(appliance))
        for url in check_urls(appliance):
            check_url(url, appliance)
        print()
    if len(err_list) == 0:
        print("Everything is ok!")
    else:
        print("{} error(s):".format(len(err_list)))
        for error in err_list:
            print(error)
Example #14
def _handle_events(self, fd, events):
    """Called by IOLoop when there is activity on one of our
    file descriptors.
    """
    action = 0
    if events & ioloop.IOLoop.READ:
        action |= pycurl.CSELECT_IN
    if events & ioloop.IOLoop.WRITE:
        action |= pycurl.CSELECT_OUT
    while True:
        try:
            ret, num_handles = self._socket_action(fd, action)
        except pycurl.error as e:
            ret = e.args[0]
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
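Several of these examples were originally written for Python 2, where `except pycurl.error, e:` was valid syntax and the exception could be unpacked directly. Under Python 3 the portable pattern is to catch with `as` and read the `(errno, message)` pair from `e.args`, as the corrected examples above do. A minimal sketch of the pattern, using a hypothetical `FakeCurlError` stand-in so it runs without pycurl installed:

```python
class FakeCurlError(Exception):
    """Stand-in for pycurl.error, which is raised as error(errno, message)."""


def fetch():
    # Simulate a failed transfer; 6 is libcurl's CURLE_COULDNT_RESOLVE_HOST.
    raise FakeCurlError(6, "Could not resolve host")


try:
    fetch()
except FakeCurlError as e:
    # e.args holds the constructor arguments; works on Python 2.6+ and 3.
    errno, errstr = e.args

print((errno, errstr))  # → (6, 'Could not resolve host')
```

The same `e.args[0]` access is what the `_handle_events` loops use to extract the return code before testing it against `pycurl.E_CALL_MULTI_PERFORM`.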
Example #15
def fetch(self, request, **kwargs):
    """Executes an HTTPRequest, returning an HTTPResponse.

    If an error occurs during the fetch, we raise an HTTPError.
    """
    if not isinstance(request, HTTPRequest):
        request = HTTPRequest(url=request, **kwargs)
    buffer = cStringIO.StringIO()
    headers = httputil.HTTPHeaders()
    try:
        _curl_setup_request(self._curl, request, buffer, headers)
        self._curl.perform()
        code = self._curl.getinfo(pycurl.HTTP_CODE)
        effective_url = self._curl.getinfo(pycurl.EFFECTIVE_URL)
        buffer.seek(0)
        response = HTTPResponse(
            request=request, code=code, headers=headers,
            buffer=buffer, effective_url=effective_url)
        if code < 200 or code >= 300:
            raise HTTPError(code, response=response)
        return response
    except pycurl.error as e:
        buffer.close()
        raise CurlError(*e.args)
Example #16
def __init__(self, request, code, headers={}, buffer=None,
             effective_url=None, error=None, request_time=None):
    self.request = request
    self.code = code
    self.headers = headers
    self.buffer = buffer
    self._body = None
    if effective_url is None:
        self.effective_url = request.url
    else:
        self.effective_url = effective_url
    if error is None:
        if self.code < 200 or self.code >= 300:
            self.error = HTTPError(self.code, response=self)
        else:
            self.error = None
    else:
        self.error = error
    self.request_time = request_time
Example #17
def do_reset(self):
    try:
        start_time = time.time()
        self.__apps = regenerate_config(self.__marathon,
                                        self.__config_file,
                                        self.__groups,
                                        self.__bind_http_https,
                                        self.__ssl_certs,
                                        self.__templater,
                                        self.__haproxy_map,
                                        self.__group_https_by_vhost)
        logger.debug("({0}): updating tasks finished, "
                     "took {1} seconds".format(
                         threading.get_ident(),
                         time.time() - start_time))
    except requests.exceptions.ConnectionError as e:
        logger.error(
            "({0}): Connection error({1}): {2}. Marathon is {3}".format(
                threading.get_ident(), e.errno, e.strerror,
                self.__marathon.current_host))
    except Exception:
        logger.exception("Unexpected error!. Marathon is {0}".format(
            self.__marathon.current_host))
Example #18
def fetch_many_async(urls, callback=None, errback=None, **kwargs):
    """
    Retrieve a list of URLs asynchronously.

    @param callback: Optionally, a function that will be fired one time
        for each successful URL, and will be passed its content and the
        URL itself.
    @param errback: Optionally, a function that will be fired one time
        for each failing URL, and will be passed the failure and the
        URL itself.

    @return: A C{DeferredList} whose callback chain will be fired as
        soon as all downloads have terminated.  If an error occurs, the
        errback chain of the C{DeferredList} will be fired immediately.
    """
    results = []
    for url in urls:
        result = fetch_async(url, **kwargs)
        if callback:
            result.addCallback(callback, url)
        if errback:
            result.addErrback(errback, url)
        results.append(result)
    return DeferredList(results, fireOnOneErrback=True,
                        consumeErrors=True)
Example #19
def test_fetch_to_files_with_non_existing_directory(self):
    """
    The deferred list returned by L{fetch_to_files} results in a
    failure if the destination directory doesn't exist.
    """
    url_results = {"http://im/right": b"right"}
    directory = "i/dont/exist/"
    curl = CurlManyStub(url_results)

    result = fetch_to_files(url_results.keys(), directory, curl=curl)

    def check_error(failure):
        error = str(failure.value.subFailure.value)
        self.assertEqual(error, ("[Errno 2] No such file or directory: "
                                 "'i/dont/exist/right'"))
        self.assertFalse(os.path.exists(os.path.join(directory, "right")))

    result.addErrback(check_error)
    return result
Example #20
def _bytes_to_unicode(self, string, encoding=None):
    if type(string) is unicode:
        return string
    if encoding is not None:
        return string.decode(encoding)
    else:
        try:
            return string.decode('utf-8')
        except UnicodeDecodeError:
            try:
                return string.decode('iso-8859-1')
            except Exception:
                self.print_debug_info('ERROR', 'String decoding error')
                return u''
Example #21
def perform(self):
    self.__performHead = ""
    self.__performBody = ""
    self.__headersSent = ""
    try:
        conn = Request.to_pycurl_object(pycurl.Curl(), self)
        conn.perform()
        self.response_from_conn_object(conn, self.__performHead,
                                       self.__performBody)
    except pycurl.error as error:
        errno, errstr = error.args
        raise ReqRespException(ReqRespException.FATAL, errstr)
    finally:
        conn.close()

# ######## THIS set of functions is not needed for normal use of the class
Example #22
def _do_grab(self):
    """dump the file to a filename or StringIO buffer"""
    if self._complete:
        return
    _was_filename = False
    if type(self.filename) in types.StringTypes and self.filename:
        _was_filename = True
        self._prog_reportname = str(self.filename)
        self._prog_basename = os.path.basename(self.filename)
        if self.append:
            mode = 'ab'
        else:
            mode = 'wb'
        if DEBUG:
            DEBUG.info('opening local file "%s" with mode %s' %
                       (self.filename, mode))
        try:
            self.fo = open(self.filename, mode)
        except IOError as e:
            err = URLGrabError(16, _(
                'error opening local file from %s, IOError: %s') %
                (self.url, e))
            err.url = self.url
            raise err
Example #23
def _handle_events(self, fd, events):
    """Called by IOLoop when there is activity on one of our
    file descriptors.
    """
    action = 0
    if events & ioloop.IOLoop.READ:
        action |= pycurl.CSELECT_IN
    if events & ioloop.IOLoop.WRITE:
        action |= pycurl.CSELECT_OUT
    while True:
        try:
            ret, num_handles = self._multi.socket_action(fd, action)
        except pycurl.error as e:
            ret = e.args[0]
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    self._finish_pending_requests()
Example #24
def _handle_timeout(self):
    """Called by IOLoop when the requested timeout has passed."""
    with stack_context.NullContext():
        self._timeout = None
        while True:
            try:
                ret, num_handles = self._multi.socket_action(
                    pycurl.SOCKET_TIMEOUT, 0)
            except pycurl.error as e:
                ret = e.args[0]
            if ret != pycurl.E_CALL_MULTI_PERFORM:
                break
        self._finish_pending_requests()

    # In theory, we shouldn't have to do this because curl will
    # call _set_timeout whenever the timeout changes.  However,
    # sometimes after _handle_timeout we will need to reschedule
    # immediately even though nothing has changed from curl's
    # perspective.  This is because when socket_action is
    # called with SOCKET_TIMEOUT, libcurl decides internally which
    # timeouts need to be processed by using a monotonic clock
    # (where available) while tornado uses python's time.time()
    # to decide when timeouts have occurred.  When those clocks
    # disagree on elapsed time (as they will whenever there is an
    # NTP adjustment), tornado might call _handle_timeout before
    # libcurl is ready.  After each timeout, resync the scheduled
    # timeout with libcurl's current state.
    new_timeout = self._multi.timeout()
    if new_timeout >= 0:
        self._set_timeout(new_timeout)
Example #25
def _handle_force_timeout(self):
    """Called by IOLoop periodically to ask libcurl to process any
    events it may have forgotten about.
    """
    with stack_context.NullContext():
        while True:
            try:
                ret, num_handles = self._multi.socket_all()
            except pycurl.error as e:
                ret = e.args[0]
            if ret != pycurl.E_CALL_MULTI_PERFORM:
                break
        self._finish_pending_requests()
Example #26
def _process_queue(self):
    with stack_context.NullContext():
        while True:
            started = 0
            while self._free_list and self._requests:
                started += 1
                curl = self._free_list.pop()
                (request, callback) = self._requests.popleft()
                curl.info = {
                    "headers": httputil.HTTPHeaders(),
                    "buffer": BytesIO(),
                    "request": request,
                    "callback": callback,
                    "curl_start_time": time.time(),
                }
                try:
                    self._curl_setup_request(
                        curl, request, curl.info["buffer"],
                        curl.info["headers"])
                except Exception as e:
                    # If there was an error in setup, pass it on
                    # to the callback.  Note that allowing the
                    # error to escape here will appear to work
                    # most of the time since we are still in the
                    # caller's original stack frame, but when
                    # _process_queue() is called from
                    # _finish_pending_requests the exceptions have
                    # nowhere to go.
                    callback(HTTPResponse(
                        request=request,
                        code=599,
                        error=e))
                else:
                    self._multi.add_handle(curl)

            if not started:
                break
Example #27
def _finish(self, curl, curl_error=None, curl_message=None):
    info = curl.info
    curl.info = None
    self._multi.remove_handle(curl)
    self._free_list.append(curl)
    buffer = info["buffer"]
    if curl_error:
        error = CurlError(curl_error, curl_message)
        code = error.code
        effective_url = None
        buffer.close()
        buffer = None
    else:
        error = None
        code = curl.getinfo(pycurl.HTTP_CODE)
        effective_url = curl.getinfo(pycurl.EFFECTIVE_URL)
        buffer.seek(0)
    # the various curl timings are documented at
    # http://curl.haxx.se/libcurl/c/curl_easy_getinfo.html
    time_info = dict(
        queue=info["curl_start_time"] - info["request"].start_time,
        namelookup=curl.getinfo(pycurl.NAMELOOKUP_TIME),
        connect=curl.getinfo(pycurl.CONNECT_TIME),
        pretransfer=curl.getinfo(pycurl.PRETRANSFER_TIME),
        starttransfer=curl.getinfo(pycurl.STARTTRANSFER_TIME),
        total=curl.getinfo(pycurl.TOTAL_TIME),
        redirect=curl.getinfo(pycurl.REDIRECT_TIME),
    )
    try:
        info["callback"](HTTPResponse(
            request=info["request"], code=code, headers=info["headers"],
            buffer=buffer, effective_url=effective_url, error=error,
            reason=info['headers'].get("X-Http-Reason", None),
            request_time=time.time() - info["curl_start_time"],
            time_info=time_info))
    except Exception:
        self.handle_callback_exception(info["callback"])