I am getting this error on the current master branch…
commit bc4b6f0 (HEAD -> master, origin/master, origin/HEAD)
Author: John Bodley <4567245+john-bodley@users.noreply.github.com>
Date: Mon Aug 23 10:20:52 2021 -0700
I am connecting to PostgreSQL 11.10.
Here is what I see in my logs…
superset_app | 2021-08-23 18:17:13,022:INFO:superset.views.core:Triggering query_id: 34
superset_app | Query 34: Executing 1 statement(s)
superset_app | 2021-08-23 18:17:13,052:INFO:superset.sql_lab:Query 34: Executing 1 statement(s)
superset_app | Query 34: Set query to 'running'
superset_app | 2021-08-23 18:17:13,052:INFO:superset.sql_lab:Query 34: Set query to 'running'
superset_app | Query 34: Running statement 1 out of 1
superset_app | 2021-08-23 18:17:13,177:INFO:superset.sql_lab:Query 34: Running statement 1 out of 1
superset_app | Query 34: <class 'TypeError'>
superset_app | Traceback (most recent call last):
superset_app | File "/app/superset/sql_lab.py", line 276, in execute_sql_statement
superset_app | data = db_engine_spec.fetch_data(cursor, increased_limit)
superset_app | File "/app/superset/db_engine_specs/postgres.py", line 174, in fetch_data
superset_app | return super().fetch_data(cursor, limit)
superset_app | File "/app/superset/db_engine_specs/base.py", line 510, in fetch_data
superset_app | raise cls.get_dbapi_mapped_exception(ex)
superset_app | File "/app/superset/db_engine_specs/base.py", line 508, in fetch_data
superset_app | return cursor.fetchall()
superset_app | File "/usr/local/lib/python3.7/site-packages/pytz/__init__.py", line 403, in __init__
superset_app | if abs(minutes) >= 1440:
superset_app | TypeError: '>=' not supported between instances of 'datetime.timedelta' and 'int'
superset_app | 2021-08-23 18:17:13,201:ERROR:superset.sql_lab:Query 34: <class 'TypeError'>
superset_app | [SupersetError(message="postgresql error: '>=' not supported between instances of 'datetime.timedelta' and 'int'", error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'PostgreSQL', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
superset_app | Traceback (most recent call last):
superset_app | File "/app/superset/views/base.py", line 204, in wraps
superset_app | return f(self, *args, **kwargs)
superset_app | File "/app/superset/utils/log.py", line 242, in wrapper
superset_app | value = f(*args, **kwargs)
superset_app | File "/app/superset/views/core.py", line 2578, in sql_json
superset_app | return self.sql_json_exec(request.json, log_params)
superset_app | File "/app/superset/views/core.py", line 2767, in sql_json_exec
superset_app | session, rendered_query, query, expand_data, log_params
superset_app | File "/app/superset/views/core.py", line 2563, in _sql_json_sync
superset_app | [SupersetError(**params) for params in data["errors"]]
superset_app | superset.exceptions.SupersetErrorsException: [SupersetError(message="postgresql error: '>=' not supported between instances of 'datetime.timedelta' and 'int'", error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'PostgreSQL', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
superset_app | 2021-08-23 18:17:13,219:WARNING:superset.views.base:[SupersetError(message="postgresql error: '>=' not supported between instances of 'datetime.timedelta' and 'int'", error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'PostgreSQL', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
The error is…
PostgreSQL Error
postgresql error: '>=' not supported between instances of 'datetime.timedelta' and 'int'
This may be triggered by:
Issue 1002 - The database returned an unexpected error.
This is the DDL for the main timestamp field…
created_at TIMESTAMP WITH TIME ZONE
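The traceback points at pytz's guard `if abs(minutes) >= 1440` (line 403 of `pytz/__init__.py`): it expects an integer minute count but receives a `datetime.timedelta`, and Python 3 refuses to compare the two. A stdlib-only sketch of the same comparison; the offset value of 330 minutes is just an assumption for illustration:

```python
import datetime

# Mimic pytz's fixed-offset sanity check receiving a timedelta
# instead of the integer minute count it expects.
minutes = datetime.timedelta(minutes=330)
try:
    if abs(minutes) >= 1440:  # pytz expects an int here, not a timedelta
        pass
except TypeError as e:
    print(e)  # '>=' not supported between instances of 'datetime.timedelta' and 'int'

# With an integer minute count the same guard works fine.
minutes = 330
print(abs(minutes) >= 1440)  # False
```

A common workaround while the library versions are sorted out is to strip the offset in SQL, e.g. `SELECT created_at AT TIME ZONE 'UTC' ...`, so the driver returns a plain timestamp that never reaches this code path.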
Contents
- Issue Code Reference
- Issue 1000
- Issue 1001
- Issue 1002
- Issue 1003
- Issue 1004
- Issue 1005
- Issue 1006
- Issue 1007
- Issue 1008
- Issue 1009
- Issue 1010
- Issue 1011
- Issue 1012
- Issue 1013
- Issue 1014
- Issue 1015
- Issue 1016
- Issue 1017
- Issue 1018
- Issue 1019
- Issue 1020
- Issue 1021
- Issue 1022
- Issue 1023
- Issue 1024
- Issue 1025
- Issue 1026
- Issue 1027
- Issue 1028
- Issue 1029
- Issue 1030
- Issue 1031
- Issue 1032
- Issue 1033
- Issue 1034
- Issue 1035
- Issue 1036
- Python Issue leaks to SQL Lab when querying Clickhouse #19253
- Comments
- How to reproduce the bug
- Environment
- Checklist
- Query from Google sheet not working #65
- Comments
- Apache Impala Error is not descriptive enough #17058
- Comments
- How to reproduce the bug
- Expected results
- Actual results
- Screenshots
- Environment
- Checklist
- Additional context
Issue Code Reference
This page lists issue codes that may be displayed in Superset and provides additional context.
Issue 1000
It's likely your datasource has grown too large to run the current query, and is timing out. You can resolve this by reducing the size of your datasource or by modifying your query to only process a subset of your data.
Issue 1001
Your query may have timed out because of unusually high load on the database engine. You can make your query simpler, or wait until the database is under less load and try again.
Issue 1002
Your query failed because of an error that occurred on the database. This may be due to a syntax error, a bug in your query, or some other internal failure within the database. This is usually not an issue within Superset, but instead a problem with the underlying database that serves your query.
Issue 1003
Your query failed because of a syntax error within the underlying query. Please validate that all columns or tables referenced within the query exist and are spelled correctly.
Issue 1004
Your query failed because it is referencing a column that no longer exists in the underlying datasource. You should modify the query to reference the replacement column, or remove this column from your query.
Issue 1005
Your query failed because it is referencing a table that no longer exists in the underlying database. You should modify your query to reference the correct table.
Issue 1006
Your query was not submitted to the database because it's missing one or more parameters. You should define all the parameters referenced in the query in a valid JSON document. Check that the parameters are spelled correctly and that the document has a valid syntax.
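The parameter document described for Issue 1006 can be sanity-checked with the standard library before submitting; the parameter names below are hypothetical:

```python
import json

# Hypothetical SQL Lab template parameters, as entered in the parameters editor.
raw_params = '{"table_name": "events", "row_limit": 100}'

params = json.loads(raw_params)  # raises json.JSONDecodeError on invalid syntax
print(sorted(params))  # every parameter referenced in the query must appear here
```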
Issue 1007
The hostname provided when adding a new database is invalid and cannot be resolved. Please check that there are no typos in the hostname.
Issue 1008
The port provided when adding a new database is not open. Please check that the port number is correct, and that the database is running and listening on that port.
Issue 1009
The host provided when adding a new database doesn't seem to be up. Additionally, it cannot be reached on the provided port. Please check that there are no firewall rules preventing access to the host.
Issue 1010
Something unexpected happened, and Superset encountered an error while running a command. Please reach out to your administrator.
Issue 1011
Something unexpected happened in the Superset backend. Please reach out to your administrator.
Issue 1012
The user provided a username that doesn't exist in the database. Please check that the username is typed correctly and exists in the database.
Issue 1013
The user provided a password that is incorrect. Please check that the password is typed correctly.
Issue 1014
Either the username provided does not exist or the password was written incorrectly. Please check that the username and password were typed correctly.
Issue 1015
Either the database was written incorrectly or it does not exist. Check that it was typed correctly.
Issue 1016
The schema was either removed or renamed. Check that the schema is typed correctly and exists.
Issue 1017
We were unable to connect to your database. Please confirm that your service account has the Viewer and Job User roles on the project.
Issue 1018
Not all parameters required to test, create, or edit a database were present. Please double check which parameters are needed, and that they are present.
Issue 1019
Please check that the request payload has the correct format (e.g., JSON).
Issue 1020
Please check that the request payload has the expected schema.
Issue 1021
Your instance of Superset doesn't have a results backend configured, which is needed for asynchronous queries. Please contact an administrator for further assistance.
Issue 1022
Only SELECT statements are allowed against this database. Please contact an administrator if you need to run DML (data manipulation language) on this database.
Issue 1023
The last statement in a query run as CTAS (create table as select) MUST be a SELECT statement. Please make sure the last statement in the query is a SELECT.
Issue 1024
When running a CVAS (create view as select) the query should have a single statement. Please make sure the query has a single statement, and no extra semicolons other than the last one.
Issue 1025
When running a CVAS (create view as select) the query should be a SELECT statement. Please make sure the query has a single statement and it's a SELECT statement.
Issue 1026
The submitted query might be too complex to run under the time limit defined by your Superset administrator. Please double check your query and verify if it can be optimized. Alternatively, contact your administrator to increase the timeout period.
Issue 1027
The database might be under heavy load, running too many queries. Please try again later, or contact an administrator for further assistance.
Issue 1028
The query contains one or more malformed template parameters. Please check your query and confirm that all template parameters are surrounded by double braces, for example, "{{ ds }}". Then, try running your query again.
Issue 1029
Either the schema, column, or table do not exist in the database.
Issue 1030
The query might have a syntax error. Please check and run again.
Issue 1031
The results from the query might have been deleted from the results backend after some period. Please re-run your query.
Issue 1032
The query associated with the stored results no longer exists. Please re-run your query.
Issue 1033
The query results were stored in a format that is no longer supported. Please re-run your query.
Issue 1034
Please check that the provided database port is an integer between 0 and 65535 (inclusive).
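That range check is easy to apply client-side before saving a connection; a minimal sketch (the helper name is invented for illustration):

```python
def is_valid_port(value) -> bool:
    """True only for integers in the inclusive range 0-65535."""
    # Exclude bool explicitly, since bool is a subclass of int in Python.
    return isinstance(value, int) and not isinstance(value, bool) and 0 <= value <= 65535

print([is_valid_port(v) for v in (5432, 0, 65535, 65536, -1, "5432")])
# [True, True, True, False, False, False]
```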
Issue 1035
The query was not started by an asynchronous worker. Please reach out to your administrator for further assistance.
Issue 1036
The operation failed because the database referenced no longer exists. Please reach out to your administrator for further assistance.
Source
Python Issue leaks to SQL Lab when querying Clickhouse #19253
How to reproduce the bug
Configure the database:
clickhouse://test:XXXXXXXXXX@127.0.0.1:8123/default
Run a ClickHouse SQL query.
Error log:
2022-03-18 20:33:51,425:DEBUG:urllib3.connectionpool:http://180.97.87.196:8070 "POST /?query_id=77d02bd7-28ad-418f-b23d-6ff575f5e55f&database=default HTTP/1.1" 200 None
2022-03-18 20:33:51,426:DEBUG:superset.stats_logger:[stats_logger] (timing) sqllab.query.time_executing_query | 97.369140625
2022-03-18 20:33:51,428:ERROR:superset.sql_lab:Query 6:
Traceback (most recent call last):
File "/Users/test/projects/PycharmProjects/superset/superset/sql_lab.py", line 248, in execute_sql_statement
db_engine_spec.execute(cursor, sql, async_=True)
File "/Users/test/projects/PycharmProjects/superset/superset/db_engine_specs/base.py", line 1098, in execute
raise cls.get_dbapi_mapped_exception(ex)
File "/Users/test/projects/PycharmProjects/superset/superset/db_engine_specs/base.py", line 1096, in execute
cursor.execute(query)
File "/Users/test/sandai/py3env/lib/python3.9/site-packages/clickhouse_sqlalchemy/drivers/http/connector.py", line 117, in execute
self._process_response(response_gen)
File "/Users/test/sandai/py3env/lib/python3.9/site-packages/clickhouse_sqlalchemy/drivers/http/connector.py", line 216, in _process_response
self.columns = next(response, None)
File "/Users/test/sandai/py3env/lib/python3.9/site-packages/clickhouse_sqlalchemy/drivers/http/transport.py", line 136, in execute
convs = [get_type(type) for type in types]
File "/Users/test/sandai/py3env/lib/python3.9/site-packages/clickhouse_sqlalchemy/drivers/http/transport.py", line 136, in <listcomp>
convs = [get_type(type) for type in types]
File "/Users/test/sandai/py3env/lib/python3.9/site-packages/clickhouse_sqlalchemy/drivers/http/transport.py", line 81, in _get_type
if type_str.startswith('Decimal'):
AttributeError: 'NoneType' object has no attribute 'startswith'
2022-03-18 20:33:51,428:DEBUG:superset.sql_lab:Query 6: 'NoneType' object has no attribute 'startswith'
2022-03-18 20:33:51,449:WARNING:superset.views.base:[SupersetError(message="clickhouse error: 'NoneType' object has no attribute 'startswith'", error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'ClickHouse', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
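The traceback shows the driver calling `.startswith` on a type string that came back as `None`. The sketch below reproduces the failure pattern and adds the obvious defensive guard; the function is an illustrative stand-in, not clickhouse_sqlalchemy's actual internals:

```python
def get_converter(type_str):
    # Guard: the server may return no type metadata for some responses, in
    # which case type_str is None and .startswith would raise AttributeError.
    if type_str is None:
        return lambda v: v  # identity fallback instead of crashing
    if type_str.startswith('Decimal'):
        return float
    return str

print(get_converter(None)('raw'))               # raw
print(get_converter('Decimal(10, 2)')('3.14'))  # 3.14
```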
Environment
(please complete the following information):
- browser type and version: Chrome
- superset version: 1.4.1
- python version: 3.9 / 3.8
- node.js version: 16.9.1
Checklist
Make sure to follow these steps before submitting your issue — thank you!
- [ Y ] I have checked the superset logs for python stacktraces and included it here as text if there are any.
- [ Y ] I have reproduced the issue with at least the latest released version of superset.
- [ Y ] I have checked the issue tracker for the same issue and I haven’t found one similar.
Others have encountered this issue: #15892. That issue was closed, but it still occurs in Superset 1.4.1.
Source
Query from Google sheet not working #65
This is the error message:
I remember on the old deploy I had to manually install the google sheets plugin for sqlalchemy:
https://superset.apache.org/docs/databases/google-sheets
It seems that there is some problem with enum34:
epigraphhub_1 | [SupersetError(message="The 'enum34' distribution was not found and is required by the application", error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'Google Sheets', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
epigraphhub_1 | 2022-03-24 03:44:31,144:WARNING:superset.views.base:[SupersetError(message="The 'enum34' distribution was not found and is required by the application", error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'Google Sheets', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
enum34 was installed, so I'm not sure about this problem. I will investigate.
Did you install Shillelagh?
Oh nope, I will try that.
sorry, I was confused because in the link you shared, there is another link to https://preset.io/blog/2020-06-01-connect-superset-google-sheets/
and there they are using: gsheetsdb
Right, but I think that currently they are using Shillelagh for that
At least according to the official documentation.
My old friend Beto de Almeida is the author of this library.
Using shillelagh is also raising an error:
Unable to load dialect : type object 'MSDialect_adodbapi' has no attribute 'dbapi'
epigraphhub_1 | 2022-03-24 13:25:57,637:WARNING:superset.db_engine_specs:Unable to load dialect : type object 'MSDialect_adodbapi' has no attribute 'dbapi'
epigraphhub_1 | 2022-03-24 13:25:57,853:INFO:werkzeug:135.181.41.20 - - [24/Mar/2022 13:25:57] "GET /api/v1/database/3 HTTP/1.0" 200 -
epigraphhub_1 | Unable to load SQLAlchemy dialect : No module named 'google'
epigraphhub_1 | 2022-03-24 13:25:57,859:WARNING:superset.db_engine_specs:Unable to load SQLAlchemy dialect : No module named 'google'
epigraphhub_1 | 2022-03-24 13:25:57,944:INFO:werkzeug:135.181.41.20 - - [24/Mar/2022 13:25:57] "GET /api/v1/database/available/ HTTP/1.0" 200 -
epigraphhub_1 | Could not load database driver: GSheetsEngineSpec
epigraphhub_1 | 2022-03-24 13:25:59,364:WARNING:superset.views.base:Could not load database driver: GSheetsEngineSpec
I will be back to this issue at the end of the day
Source
Apache Impala Error is not descriptive enough #17058
How to reproduce the bug
- Go to 'SQL Lab'
- Click on 'Run' for a query that returns a certain number of rows, usually more than 100 rows
- Scroll down to '. '
- See error
2021-10-11 14:41:41,504 WARNING: superset.views.base: [SupersetError(message='impala error: bytes expected', error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'Apache Impala', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
Traceback (most recent call last):
File "/home/superset/venv/lib/python3.7/site-packages/superset/views/base.py", line 204, in wraps
return f(self, *args, **kwargs)
File "/home/superset/venv/lib/python3.7/site-packages/superset/utils/log.py", line 241, in wrapper
value = f(*args, **kwargs)
File "/home/superset/venv/lib/python3.7/site-packages/superset/views/core.py", line 2573, in sql_json
return self.sql_json_exec(request.json, log_params)
File "/home/superset/venv/lib/python3.7/site-packages/superset/views/core.py", line 2762, in sql_json_exec
session, rendered_query, query, expand_data, log_params
File "/home/superset/venv/lib/python3.7/site-packages/superset/views/core.py", line 2558, in _sql_json_sync
[SupersetError(**params) for params in data["errors"]]
superset.exceptions.SupersetErrorsException: [SupersetError(message='impala error: bytes expected', error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'Apache Impala', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
Triggering query_id: 2447
Expected results
A result set, or at least a more detailed stack trace to investigate with impyla, as there is no exception in the Impala log.
Actual results
Apache Impala Error
Screenshots
Environment
(please complete the following information):
- browser type and version: Chrome
- superset version: 1.3.1
- python version: 3.7
- any feature flags active: more descriptive stack trace
Checklist
Make sure to follow these steps before submitting your issue — thank you!
- I have checked the superset logs for python stacktraces and included it here as text if there are any.
- I have reproduced the issue with at least the latest released version of superset.
- I have checked the issue tracker for the same issue and I haven’t found one similar.
Additional context
trying to connect to Impala
Docker image python:3.7.9
impyla==0.15.0
thrift==0.13.0
thrift-sasl==0.4.2
thriftpy2==0.4.0
Correct me if I understand wrong: when you connect db:impala to Superset and run queries that are supposed to return 100+ rows in SQL, you receive an error.
Do you also get this error while running the query in Explore?
Does it happen with other DBs?
What happens when you run smaller queries?
@gregmazur 🙏
Correct.
We've been using Superset 0.20.4 with the same impyla version and it still works; however, when we connect to the DB from the newer version it gives the following error. And yes, when I request a smaller amount of data I receive the result. For example, if I select one column, then I can set the row limit up to 10k rows.
Also, while checking the Impala log, I found the following at the time of the query:
7:14:09.662 AM | INFO | cc:490 | 2d4616d71326a901:be2fab1e00000002] Error preparing scanner for scan range hdfs://nameservice/tablename/ladd_recs/1633960990608/partitionName=psnh/part-00001-2cada810-593f-4cd6-b19b-850ad70d0454.c000.snappy.parquet(0:21846). Cancelled in ScannerContext
7:14:09.662 AM | INFO | cc:490 | 2d4616d71326a901:be2fab1e00000002] Error preparing scanner for scan range hdfs://nameservice/tablename/ladd_recs/1633960990608/partitionName=gpc/part-00001-2cada810-593f-4cd6-b19b-850ad70d0454.c000.snappy.parquet(5544:102400). Cancelled in IoMgr RequestContext
7:14:09.662 AM | INFO | cc:490 | 2d4616d71326a901:be2fab1e00000002] Error preparing scanner for scan range hdfs://nameservice/tablename/ladd_recs/1633960990608/partitionName=apco/part-00001-2cada810-593f-4cd6-b19b-850ad70d0454.c000.snappy.parquet(5544:102400). Cancelled in IoMgr RequestContext
So I assume it has something to do with Superset.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. For admin, please label this issue .pinned to prevent stale bot from closing the issue.
Source
When updating from version 1.3.0 to version 1.4.0, queries stop working with a 'NoneType' object is not callable error.
This also happens with versions 2.0 and 1.5.0.
Query results should appear as for a normal query.
Trino Error
'NoneType' object is not callable
This may be triggered by:
Issue 1002 - The database returned an unexpected error.
2022-07-19 14:05:50,063:WARNING:superset.views.base:[SupersetError(message="'NoneType' object is not callable", error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'Trino', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
Traceback (most recent call last):
File "/app/superset/views/base.py", line 203, in wraps
return f(self, *args, **kwargs)
File "/app/superset/utils/log.py", line 242, in wrapper
value = f(*args, **kwargs)
File "/app/superset/views/core.py", line 2462, in sql_json
command_result: CommandResult = command.run()
File "/app/superset/sqllab/command.py", line 104, in run
raise ex
File "/app/superset/sqllab/command.py", line 96, in run
status = self._run_sql_json_exec_from_scratch()
File "/app/superset/sqllab/command.py", line 138, in _run_sql_json_exec_from_scratch
raise ex
File "/app/superset/sqllab/command.py", line 133, in _run_sql_json_exec_from_scratch
return self._sql_json_executor.execute(
File "/app/superset/sqllab/sql_json_executer.py", line 111, in execute
raise SupersetErrorsException(
superset.exceptions.SupersetErrorsException: [SupersetError(message="'NoneType' object is not callable", error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'Trino', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
superset --version
Loaded your LOCAL configuration at [/app/pythonpath/superset_config.py]
Loaded your LOCAL configuration at [/app/pythonpath/superset_config.py]
Python 3.8.12
Flask 1.1.4
Werkzeug 1.0.1
Add any other context about the problem here.
Superset returns an error when I try a query with a tstzrange column.
Given that Superset has a connection to a PostgreSQL database and there is a table with at least one column of type tstzrange, go to "SQL Lab" -> "SQL Editor" and try a select query that includes the tstzrange column, e.g.: select * from user
PostgreSQL Error
Unserializable object [2022-05-06 21:12:02.578651+00:00, None) of type <class 'psycopg2._range.DateTimeTZRange'>
This may be triggered by:
Issue 1002 - The database returned an unexpected error.
2022-05-09 14:33:34,658:INFO:superset.sql_lab:Query 879: Executing 1 statement(s)
--
Query 879: Set query to 'running'
2022-05-09 14:33:34,658:INFO:superset.sql_lab:Query 879: Set query to 'running'
Query 879: Running statement 1 out of 1
2022-05-09 14:33:35,068:INFO:superset.sql_lab:Query 879: Running statement 1 out of 1
[SupersetError(message="Unserializable object [2022-05-06 21:12:02.578651+00:00, None) of type <class 'psycopg2._range.DateTimeTZRange'>", error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'PostgreSQL', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
Traceback (most recent call last):
File "/app/superset/views/base.py", line 211, in wraps
return f(self, *args, **kwargs)
File "/app/superset/utils/log.py", line 245, in wrapper
value = f(*args, **kwargs)
File "/app/superset/views/core.py", line 2574, in sql_json
command_result: CommandResult = command.run()
File "/app/superset/sqllab/command.py", line 104, in run
raise ex
File "/app/superset/sqllab/command.py", line 96, in run
status = self._run_sql_json_exec_from_scratch()
File "/app/superset/sqllab/command.py", line 138, in _run_sql_json_exec_from_scratch
raise ex
File "/app/superset/sqllab/command.py", line 133, in _run_sql_json_exec_from_scratch
return self._sql_json_executor.execute(
File "/app/superset/sqllab/sql_json_executer.py", line 111, in execute
raise SupersetErrorsException(
superset.exceptions.SupersetErrorsException: [SupersetError(message="Unserializable object [2022-05-06 21:12:02.578651+00:00, None) of type <class 'psycopg2._range.DateTimeTZRange'>", error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'PostgreSQL', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
2022-05-09 14:33:35,871:WARNING:superset.views.base:[SupersetError(message="Unserializable object [2022-05-06 21:12:02.578651+00:00, None) of type <class 'psycopg2._range.DateTimeTZRange'>", error_type=<SupersetErrorType.GENERIC_DB_ENGINE_ERROR: 'GENERIC_DB_ENGINE_ERROR'>, level=<ErrorLevel.ERROR: 'error'>, extra={'engine_name': 'PostgreSQL', 'issue_codes': [{'code': 1002, 'message': 'Issue 1002 - The database returned an unexpected error.'}]})]
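The tstzrange failure above is a serialization problem: psycopg2 returns a `DateTimeTZRange` object that Superset's JSON encoder does not know how to render. One workaround is to cast the column in SQL (`SELECT period::text ...`); the sketch below shows the equivalent Python-side idea with a stand-in class, since the real psycopg2 type is not imported here:

```python
import datetime
import json

class DateTimeTZRangeStub:
    """Illustrative stand-in for psycopg2's DateTimeTZRange."""
    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper

def json_default(obj):
    # Render a range as a two-element list; None marks an open bound.
    if isinstance(obj, DateTimeTZRangeStub):
        return [obj.lower.isoformat() if obj.lower else None,
                obj.upper.isoformat() if obj.upper else None]
    raise TypeError(f"Unserializable object {obj!r}")

row = {"period": DateTimeTZRangeStub(
    datetime.datetime(2022, 5, 6, 21, 12, 2, tzinfo=datetime.timezone.utc), None)}
print(json.dumps(row, default=json_default))
# {"period": ["2022-05-06T21:12:02+00:00", null]}
```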
Hello everyone,
I have an Exchange organization that holds 2 Exchange 2007 servers (1 CAS/HT and 1 Mailbox) and we are migrating to 2 Exchange 2013 servers (1 CAS and 1 Mailbox). The migration process works very smoothly and without any issue.
On Exchange 2013 we have 4 databases. One of them, named Employees, was corrupted a few weeks ago and we had to restore it from a backup.
Since last week we have been seeing the following events on the Exchange 2013 Mailbox server:
Log Name: Application
Source: MSExchangeIS
Date: 28/11/2016 10:24:01 a.m.
Event ID: 2006
Task Category: Physical Access
Level: Error
Keywords: Classic
User: N/A
Computer: mailbox01.mydomain.com
Description:
Microsoft Exchange Information Store worker process (1272) has encountered an unexpected database error (Microsoft.Isam.Esent.Interop.EsentKeyDuplicateException: Illegal duplicate key
at Microsoft.Isam.Esent.Interop.Server2003.Server2003Api.JetUpdate2(JET_SESID sesid, JET_TABLEID tableid, Byte[] bookmark, Int32 bookmarkSize, Int32& actualBookmarkSize, UpdateGrbit grbit)
at Microsoft.Exchange.Server.Storage.PhysicalAccessJet.JetTableOperator.Insert(IList`1 columns, IList`1 values, Column identityColumnToFetch, Boolean unversioned, Boolean ignoreDuplicateKey, Object& identityValue)) for database ‘Employees’
with a call stack of
at Microsoft.Exchange.Server.Storage.PhysicalAccessJet.JetTableOperator.Insert(IList`1 columns, IList`1 values, Column identityColumnToFetch, Boolean unversioned, Boolean ignoreDuplicateKey, Object& identityValue)
at Microsoft.Exchange.Server.Storage.PhysicalAccessJet.JetInsertOperator.ExecuteScalar()
at Microsoft.Exchange.Server.Storage.PhysicalAccess.DataRow.Insert(IConnectionProvider connectionProvider)
at Microsoft.Exchange.Server.Storage.StoreCommonServices.ObjectPropertyBag.Flush(Context context)
at Microsoft.Exchange.Server.Storage.LogicalDataModel.Item.Flush(Context context)
at Microsoft.Exchange.Server.Storage.LogicalDataModel.Message.Flush(Context context)
at Microsoft.Exchange.Server.Storage.LogicalDataModel.Message.SaveChanges(Context context)
at Microsoft.Exchange.Server.Storage.LogicalDataModel.TopMessage.SaveChanges(Context context, SaveMessageChangesFlags flags)
at Microsoft.Exchange.Protocols.MAPI.MapiMessage.SaveChangesInternal(MapiContext context, MapiSaveMessageChangesFlags saveFlags, ExchangeId& newMid)
at Microsoft.Exchange.Protocols.MAPI.MapiMessage.SaveChanges(MapiContext context, MapiSaveMessageChangesFlags saveFlags, ExchangeId& newMid)
at Microsoft.Exchange.Server.Storage.MapiDisp.RopHandler.SaveChangesMessage(MapiContext context, MapiMessage message, SaveChangesMode saveChangesMode, SaveChangesMessageResultFactory resultFactory)
at Microsoft.Exchange.Server.Storage.MapiDisp.RopHandlerBase.SaveChangesMessage(IServerObject serverObject, SaveChangesMode saveChangesMode, SaveChangesMessageResultFactory resultFactory)
at Microsoft.Exchange.RpcClientAccess.Parser.RopSaveChangesMessage.InternalExecute(IServerObject serverObject, IRopHandler ropHandler, ArraySegment`1 outputBuffer)
at Microsoft.Exchange.RpcClientAccess.Parser.InputRop.Execute(IConnectionInformation connection, IRopDriver ropDriver, ServerObjectHandleTable handleTable, ArraySegment`1 outputBuffer)
at Microsoft.Exchange.RpcClientAccess.Parser.RopDriver.ExecuteRops(List`1 inputArraySegmentList, ServerObjectHandleTable serverObjectHandleTable, ArraySegment`1 outputBuffer, Int32 outputIndex, Int32 maxOutputSize, Boolean isOutputBufferMaxSize,
Int32& outputSize, AuxiliaryData auxiliaryData, Boolean isFake, Byte[]& fakeOut)
at Microsoft.Exchange.RpcClientAccess.Parser.RopDriver.Execute(IList`1 inputBufferArray, ArraySegment`1 outputBuffer, Int32& outputSize, AuxiliaryData auxiliaryData, Boolean isFake, Byte[]& fakeOut)
at Microsoft.Exchange.Server.Storage.MapiDisp.MapiRpc.<>c__DisplayClass9.<DoRpc>b__6(MapiContext operationContext, MapiSession& session, Boolean& deregisterSession, AuxiliaryData auxiliaryData)
at Microsoft.Exchange.Server.Storage.MapiDisp.MapiRpc.Execute(IExecutionDiagnostics executionDiagnostics, MapiContext outerContext, String functionName, Boolean isRpc, IntPtr& contextHandle, Boolean tryLockSession, String userDn, IList`1 dataIn,
Int32 sizeInMegabytes, ArraySegment`1 auxIn, ArraySegment`1 auxOut, Int32& sizeAuxOut, ExecuteDelegate executeDelegate)
at Microsoft.Exchange.Server.Storage.MapiDisp.MapiRpc.DoRpc(IExecutionDiagnostics executionDiagnostics, IntPtr& contextHandle, IList`1 ropInArraySegments, ArraySegment`1 ropOut, Int32& sizeRopOut, Boolean internalAccessPrivileges, ArraySegment`1
auxIn, ArraySegment`1 auxOut, Int32& sizeAuxOut, Boolean fakeRequest, Byte[]& fakeOut)
at Microsoft.Exchange.Server.Storage.MapiDisp.PoolRpcServer.EcDoRpc(MapiExecutionDiagnostics executionDiagnostics, IntPtr& sessionHandle, UInt32 flags, UInt32 maximumResponseSize, ArraySegment`1 request, ArraySegment`1 auxiliaryIn, IPoolSessionDoRpcCompletion
completion)
at Microsoft.Exchange.Server.Storage.MapiDisp.PoolRpcServer.EcPoolSessionDoRpc_Unwrapped(MapiExecutionDiagnostics executionDiagnostics, IntPtr contextHandle, UInt32 sessionHandle, UInt32 flags, UInt32 maximumResponseSize, ArraySegment`1 request,
ArraySegment`1 auxiliaryIn, IPoolSessionDoRpcCompletion completion)
at Microsoft.Exchange.Server.Storage.MapiDisp.PoolRpcServer.<>c__DisplayClassf.<EcPoolSessionDoRpc>b__c()
at Microsoft.Exchange.Common.IL.ILUtil.DoTryFilterCatch[T](TryDelegate tryDelegate, GenericFilterDelegate filterDelegate, GenericCatchDelegate catchDelegate, T state)
at Microsoft.Exchange.Server.Storage.MapiDisp.PoolRpcServer.EcPoolSessionDoRpc(IntPtr contextHandle, UInt32 sessionHandle, UInt32 flags, UInt32 maximumResponseSize, ArraySegment`1 request, ArraySegment`1 auxiliaryIn, IPoolSessionDoRpcCompletion
completion)
at EcPoolSessionDoRpcRpc.EcDispatchCall(EcPoolSessionDoRpcRpc* , SafeRpcAsyncStateHandle pAsyncState, IPoolRpcServer server)
at PoolRpcServer_Wrapper.InternalExecute(PoolRpcServer_Wrapper* , SafeRpcAsyncStateHandle pAsyncState)
at Microsoft.Exchange.Rpc.ManagedExceptionAsyncCrashWrapper.Execute<class Microsoft::Exchange::Rpc::PoolRpc::SafeEcPoolSessionDoRpcRpcAsyncStateHandle>(ManagedExceptionAsyncCrashWrapper* , _RPC_ASYNC_STATE* pAsyncState)
at EcPoolSessionDoRpc_Managed(_RPC_ASYNC_STATE* pAsyncState, Void* cpxh, UInt32 ulSessionHandle, UInt32* pulFlags, UInt32 cbIn, Byte* rgbIn, UInt32* pcbOut, Byte** ppbOut, UInt32 cbAuxIn, Byte* rgbAuxIn, UInt32* pcbAuxOut, Byte** ppbAuxOut)
Log Name: Application
Source: MSExchangeIS
Date: 28/11/2016 10:24:01 a.m.
Event ID: 1046
Task Category: General
Level: Error
Keywords: Classic
User: N/A
Computer: mailbox01.mydomain.com
Description:
Unexpected error encountered in critical block. Location:(Microsoft.Exchange.Diagnostics.LID), scope: (MailboxShared), callstack: ( at Microsoft.Exchange.Server.Storage.StoreCommonServices.Context.OnCriticalBlockFailed(LID lid, CriticalBlockScope
criticalBlockScope)
at Microsoft.Exchange.Server.Storage.StoreCommonServices.Context.CriticalBlockFrame.Dispose()
at Microsoft.Exchange.Server.Storage.LogicalDataModel.TopMessage.SaveChanges(Context context, SaveMessageChangesFlags flags)
at Microsoft.Exchange.Protocols.MAPI.MapiMessage.SaveChangesInternal(MapiContext context, MapiSaveMessageChangesFlags saveFlags, ExchangeId& newMid)
at Microsoft.Exchange.Protocols.MAPI.MapiMessage.SaveChanges(MapiContext context, MapiSaveMessageChangesFlags saveFlags, ExchangeId& newMid)
at Microsoft.Exchange.Server.Storage.MapiDisp.RopHandler.SaveChangesMessage(MapiContext context, MapiMessage message, SaveChangesMode saveChangesMode, SaveChangesMessageResultFactory resultFactory)
at Microsoft.Exchange.Server.Storage.MapiDisp.RopHandlerBase.SaveChangesMessage(IServerObject serverObject, SaveChangesMode saveChangesMode, SaveChangesMessageResultFactory resultFactory)
at Microsoft.Exchange.RpcClientAccess.Parser.RopSaveChangesMessage.InternalExecute(IServerObject serverObject, IRopHandler ropHandler, ArraySegment`1 outputBuffer)
at Microsoft.Exchange.RpcClientAccess.Parser.InputRop.Execute(IConnectionInformation connection, IRopDriver ropDriver, ServerObjectHandleTable handleTable, ArraySegment`1 outputBuffer)
at Microsoft.Exchange.RpcClientAccess.Parser.RopDriver.ExecuteRops(List`1 inputArraySegmentList, ServerObjectHandleTable serverObjectHandleTable, ArraySegment`1 outputBuffer, Int32 outputIndex, Int32 maxOutputSize, Boolean isOutputBufferMaxSize,
Int32& outputSize, AuxiliaryData auxiliaryData, Boolean isFake, Byte[]& fakeOut)
at Microsoft.Exchange.RpcClientAccess.Parser.RopDriver.Execute(IList`1 inputBufferArray, ArraySegment`1 outputBuffer, Int32& outputSize, AuxiliaryData auxiliaryData, Boolean isFake, Byte[]& fakeOut)
at Microsoft.Exchange.Server.Storage.MapiDisp.MapiRpc.<>c__DisplayClass9.<DoRpc>b__6(MapiContext operationContext, MapiSession& session, Boolean& deregisterSession, AuxiliaryData auxiliaryData)
at Microsoft.Exchange.Server.Storage.MapiDisp.MapiRpc.Execute(IExecutionDiagnostics executionDiagnostics, MapiContext outerContext, String functionName, Boolean isRpc, IntPtr& contextHandle, Boolean tryLockSession, String userDn, IList`1 dataIn,
Int32 sizeInMegabytes, ArraySegment`1 auxIn, ArraySegment`1 auxOut, Int32& sizeAuxOut, ExecuteDelegate executeDelegate)
at Microsoft.Exchange.Server.Storage.MapiDisp.MapiRpc.DoRpc(IExecutionDiagnostics executionDiagnostics, IntPtr& contextHandle, IList`1 ropInArraySegments, ArraySegment`1 ropOut, Int32& sizeRopOut, Boolean internalAccessPrivileges, ArraySegment`1
auxIn, ArraySegment`1 auxOut, Int32& sizeAuxOut, Boolean fakeRequest, Byte[]& fakeOut)
at Microsoft.Exchange.Server.Storage.MapiDisp.PoolRpcServer.EcDoRpc(MapiExecutionDiagnostics executionDiagnostics, IntPtr& sessionHandle, UInt32 flags, UInt32 maximumResponseSize, ArraySegment`1 request, ArraySegment`1 auxiliaryIn, IPoolSessionDoRpcCompletion
completion)
at Microsoft.Exchange.Server.Storage.MapiDisp.PoolRpcServer.EcPoolSessionDoRpc_Unwrapped(MapiExecutionDiagnostics executionDiagnostics, IntPtr contextHandle, UInt32 sessionHandle, UInt32 flags, UInt32 maximumResponseSize, ArraySegment`1 request,
ArraySegment`1 auxiliaryIn, IPoolSessionDoRpcCompletion completion)
at Microsoft.Exchange.Server.Storage.MapiDisp.PoolRpcServer.<>c__DisplayClassf.<EcPoolSessionDoRpc>b__c()
at Microsoft.Exchange.Common.IL.ILUtil.DoTryFilterCatch[T](TryDelegate tryDelegate, GenericFilterDelegate filterDelegate, GenericCatchDelegate catchDelegate, T state)
at Microsoft.Exchange.Server.Storage.MapiDisp.PoolRpcServer.EcPoolSessionDoRpc(IntPtr contextHandle, UInt32 sessionHandle, UInt32 flags, UInt32 maximumResponseSize, ArraySegment`1 request, ArraySegment`1 auxiliaryIn, IPoolSessionDoRpcCompletion
completion)
at EcPoolSessionDoRpcRpc.EcDispatchCall(EcPoolSessionDoRpcRpc* , SafeRpcAsyncStateHandle pAsyncState, IPoolRpcServer server)
at PoolRpcServer_Wrapper.InternalExecute(PoolRpcServer_Wrapper* , SafeRpcAsyncStateHandle pAsyncState)
at Microsoft.Exchange.Rpc.ManagedExceptionAsyncCrashWrapper.Execute<class Microsoft::Exchange::Rpc::PoolRpc::SafeEcPoolSessionDoRpcRpcAsyncStateHandle>(ManagedExceptionAsyncCrashWrapper* , _RPC_ASYNC_STATE* pAsyncState)
at EcPoolSessionDoRpc_Managed(_RPC_ASYNC_STATE* pAsyncState, Void* cpxh, UInt32 ulSessionHandle, UInt32* pulFlags, UInt32 cbIn, Byte* rgbIn, UInt32* pcbOut, Byte** ppbOut, UInt32 cbAuxIn, Byte* rgbAuxIn, UInt32* pcbAuxOut, Byte** ppbAuxOut)
Log Name: Application
Source: MSExchangeIS
Date: 28/11/2016 10:24:01 a.m.
Event ID: 1002
Task Category: General
Level: Error
Keywords: Classic
User: N/A
Computer: mailbox01.mydomain.com
Description:
Unhandled exception (Microsoft.Exchange.Server.Storage.Common.DuplicateKeyException: JetTableOperator.Insert —> Microsoft.Isam.Esent.Interop.EsentKeyDuplicateException: Illegal duplicate key
at Microsoft.Isam.Esent.Interop.Server2003.Server2003Api.JetUpdate2(JET_SESID sesid, JET_TABLEID tableid, Byte[] bookmark, Int32 bookmarkSize, Int32& actualBookmarkSize, UpdateGrbit grbit)
at Microsoft.Exchange.Server.Storage.PhysicalAccessJet.JetTableOperator.Insert(IList`1 columns, IList`1 values, Column identityColumnToFetch, Boolean unversioned, Boolean ignoreDuplicateKey, Object& identityValue)
— End of inner exception stack trace —
at Microsoft.Exchange.Server.Storage.PhysicalAccessJet.JetTableOperator.Insert(IList`1 columns, IList`1 values, Column identityColumnToFetch, Boolean unversioned, Boolean ignoreDuplicateKey, Object& identityValue)
at Microsoft.Exchange.Server.Storage.PhysicalAccessJet.JetInsertOperator.ExecuteScalar()
at Microsoft.Exchange.Server.Storage.PhysicalAccess.DataRow.Insert(IConnectionProvider connectionProvider)
at Microsoft.Exchange.Server.Storage.StoreCommonServices.ObjectPropertyBag.Flush(Context context)
at Microsoft.Exchange.Server.Storage.LogicalDataModel.Item.Flush(Context context)
at Microsoft.Exchange.Server.Storage.LogicalDataModel.Message.Flush(Context context)
at Microsoft.Exchange.Server.Storage.LogicalDataModel.Message.SaveChanges(Context context)
at Microsoft.Exchange.Server.Storage.LogicalDataModel.TopMessage.SaveChanges(Context context, SaveMessageChangesFlags flags)
at Microsoft.Exchange.Protocols.MAPI.MapiMessage.SaveChangesInternal(MapiContext context, MapiSaveMessageChangesFlags saveFlags, ExchangeId& newMid)
at Microsoft.Exchange.Protocols.MAPI.MapiMessage.SaveChanges(MapiContext context, MapiSaveMessageChangesFlags saveFlags, ExchangeId& newMid)
at Microsoft.Exchange.Server.Storage.MapiDisp.RopHandler.SaveChangesMessage(MapiContext context, MapiMessage message, SaveChangesMode saveChangesMode, SaveChangesMessageResultFactory resultFactory)
at Microsoft.Exchange.Server.Storage.MapiDisp.RopHandlerBase.SaveChangesMessage(IServerObject serverObject, SaveChangesMode saveChangesMode, SaveChangesMessageResultFactory resultFactory)
at Microsoft.Exchange.RpcClientAccess.Parser.RopSaveChangesMessage.InternalExecute(IServerObject serverObject, IRopHandler ropHandler, ArraySegment`1 outputBuffer)
at Microsoft.Exchange.RpcClientAccess.Parser.InputRop.Execute(IConnectionInformation connection, IRopDriver ropDriver, ServerObjectHandleTable handleTable, ArraySegment`1 outputBuffer)
at Microsoft.Exchange.RpcClientAccess.Parser.RopDriver.ExecuteRops(List`1 inputArraySegmentList, ServerObjectHandleTable serverObjectHandleTable, ArraySegment`1 outputBuffer, Int32 outputIndex, Int32 maxOutputSize, Boolean isOutputBufferMaxSize,
Int32& outputSize, AuxiliaryData auxiliaryData, Boolean isFake, Byte[]& fakeOut)
at Microsoft.Exchange.RpcClientAccess.Parser.RopDriver.Execute(IList`1 inputBufferArray, ArraySegment`1 outputBuffer, Int32& outputSize, AuxiliaryData auxiliaryData, Boolean isFake, Byte[]& fakeOut)
at Microsoft.Exchange.Server.Storage.MapiDisp.MapiRpc.<>c__DisplayClass9.<DoRpc>b__6(MapiContext operationContext, MapiSession& session, Boolean& deregisterSession, AuxiliaryData auxiliaryData)
at Microsoft.Exchange.Server.Storage.MapiDisp.MapiRpc.Execute(IExecutionDiagnostics executionDiagnostics, MapiContext outerContext, String functionName, Boolean isRpc, IntPtr& contextHandle, Boolean tryLockSession, String userDn, IList`1 dataIn,
Int32 sizeInMegabytes, ArraySegment`1 auxIn, ArraySegment`1 auxOut, Int32& sizeAuxOut, ExecuteDelegate executeDelegate)).
I would appreciate any advice about these errors.
Best regards,
Manuel
Manuel's Microsoft Forums Threads
Requesting access to analytics-privatedata-users group for Abban Dunne
Hi,
Can I get added to the analytics-privatedata-users group so I can debug fundraising banner issues?
- Wikitech username: Abban Dunne
- Email address: abban.dunne@wikimedia.de
- SSH public key (must be a separate key from Wikimedia cloud SSH access): I don't need SSH access to the analytics cluster; running queries through the web client is enough for my purposes.
- Requested group membership: analytics-privatedata-users
- Reason for access: I need to query fundraising banner issues.
- Name of approving party (manager for WMF/WMDE staff): Conny Kawohl
- Ensure you have signed the L3 Wikimedia Server Access Responsibilities document: I believe I signed this last year.
- Please coordinate obtaining a comment of approval on this task from the approving party.
SRE Clinic Duty Confirmation Checklist for Access Requests
This checklist should be used on all access requests to ensure that all steps are covered, including expansion to existing access. Please double check the step has been completed before checking it off.
This section is to be confirmed and completed by a member of the SRE team.
- — User has signed the L3 Acknowledgement of Wikimedia Server Access Responsibilities Document.
- — User has a valid NDA on file with WMF legal. (All WMF Staff/Contractor hiring are covered by NDA. Other users can be validated via the NDA tracking sheet)
- — User has provided the following: wikitech username, email address, and full reasoning for access (including what commands and/or tasks they expect to perform)
- — User has provided a public SSH key. This SSH key pair should only be used for WMF cluster access, and not shared with any other service (this includes not sharing with WMCS access; no shared keys). N/A: no SSH access requested.
- — access request (or expansion) has sign off of WMF sponsor/manager (sponsor for volunteers, manager for wmf staff)
- — access request (or expansion) has sign off of group approver indicated by the approval field in data.yaml
For additional details regarding access request requirements, please see https://wikitech.wikimedia.org/wiki/Requesting_shell_access
Previous Request
Hi, I already have access to Superset, but when I try to run queries on the Presto Analytics Hive I get a permissions error: `presto error: Permission denied: user=abban, access=EXECUTE`. Would it be possible to allow me to execute queries there? I'm not sure if this is an LDAP issue or if I need to chase it up elsewhere. Employee Name: Abban Dunne. Role: Full Stack Developer, Fundraising Tech, WMDE. Wikitech Username: Abban Dunne. Kind regards, Abban.
Event Timeline
Hi @jcrespo, I'm logging in via the browser at https://superset.wikimedia.org with the same credentials I use for Wikitech.
My first thought was that maybe I'm missing a MySQL permission. Here's the full output of the error:
DB engine Error presto error: Permission denied: user=abban, access=EXECUTE, inode="/wmf/data/event":analytics:analytics-privatedata-users:drwxr-x---
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:351)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:238)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:541)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1705)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1723)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:642)
at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:55)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:3660)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:1147)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:671)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)
This may be triggered by: Issue 1002 - The database returned an unexpected error.
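The denial itself is a plain POSIX-style traverse check: `/wmf/data/event` is `drwxr-x---`, owned by `analytics:analytics-privatedata-users`, so any user outside that group falls into the "other" class, which lacks the execute bit needed to descend into the directory. A minimal sketch of that check (an illustration, not HDFS's actual implementation):

```python
# Minimal sketch of the POSIX-style traverse check HDFS applies to
# /wmf/data/event (mode drwxr-x---, owner "analytics",
# group "analytics-privatedata-users"): a user needs the execute bit
# from the owner, group, or other class to descend into a directory.
def can_traverse(mode, owner, group, user, user_groups):
    if user == owner:
        return bool(mode & 0o100)  # owner execute bit
    if group in user_groups:
        return bool(mode & 0o010)  # group execute bit
    return bool(mode & 0o001)      # other execute bit

MODE = 0o750  # drwxr-x---

# "abban" without the group falls into "other": no x bit, so access=EXECUTE is denied.
assert not can_traverse(MODE, "analytics", "analytics-privatedata-users", "abban", [])

# Once added to analytics-privatedata-users, the group execute bit applies.
assert can_traverse(MODE, "analytics", "analytics-privatedata-users", "abban",
                    ["analytics-privatedata-users"])
```

This is why the fix discussed below is group membership rather than any change on the Superset/Presto side.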
Ah, this part is interesting:
analytics-privatedata-users
Maybe you are missing some server access as an LDAP-only user, or you are expected to use a different way to query that data. I know some permissions depend on UNIX permissions (which you wouldn't have). Hopefully someone on Analytics can clarify.
Hi @Abban,
Indeed, to query (most) data with Presto you need to be in the analytics-privatedata-users group. This requires a formal request with manager approval (you'll get access to PII data), as documented here: https://wikitech.wikimedia.org/wiki/Analytics/Data_access#Requesting_access
I think you should update this task with the SRE-Access-Requests tag and request to be added to the group. This would require approval from your manager, I assume.
AbbanWMDE renamed this task from "Add Execute Permissions for WMDE Developer Abban Dunne" to "Requesting access to analytics-privatedata-users group for Abban Dunne". Aug 27 2021, 9:19 AM
The intention here is to provide UNIX filesystem access on the analytics servers by adding the user to the right groups, but not SSH access. I will add some security people on the patch to make sure that is not too crazy.
Assigning to Data Engineering Director for review and formal sign-off @odimitrijevic
JAllemandou lowered the priority of this task from High to Medium. Aug 27 2021, 4:03 PM
Hi @AbbanWMDE,
The change has been merged now that it has been approved. It will take ~30 minutes to fully propagate, at which point you should be able to use Superset fully. In case you also need SSH access later on, you'll need to open a task with your SSH public key.
I'll ask (apologies for the repetition if this has already been asked) that you read and understand https://wikitech.wikimedia.org/wiki/Analytics/Data_access#User_responsibilities since you will be handling PII.
I'll resolve this, but feel free to reopen in case anything is amiss.