
PXF Errors

The following table describes some errors you may encounter while using PXF:

Error: Protocol "pxf" does not exist
Cause: The pxf extension was not registered.
Solution: Create (enable) the PXF extension for the database as described in the PXF Enable Procedure.
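
For example, enabling the extension boils down to a single statement run in each database that will use PXF (a minimal sketch; it requires a user with privileges to create extensions):

CREATE EXTENSION pxf;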

Error: Invalid URI pxf://<path-to-data>: missing options section
Cause: The LOCATION URI does not include the profile or other required options.
Solution: Provide the profile and required options in the URI when you submit the CREATE EXTERNAL TABLE command.
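
For example, a minimal LOCATION clause that names a profile (the path and profile here are hypothetical placeholders):

CREATE EXTERNAL TABLE ext_demo (line text)
  LOCATION ('pxf://tmp/demo.txt?PROFILE=hdfs:text')
FORMAT 'TEXT' (delimiter ',');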

Error: PXF server error : Input path does not exist: hdfs://<namenode>:8020/<path-to-file>
Cause: The HDFS file that you specified in <path-to-file> does not exist.
Solution: Provide the path to an existing HDFS file.

Error: PXF server error : NoSuchObjectException(message:<schema>.<hivetable> table not found)
Cause: The Hive table that you specified with <schema>.<hivetable> does not exist.
Solution: Provide the name of an existing Hive table.

Error: PXF server error : Failed connect to localhost:5888; Connection refused (<segment-id> slice<N> <segment-host>:<port> pid=<process-id>)
Cause: The PXF Service is not running on <segment-host>.
Solution: Restart PXF on <segment-host>.
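
For example, assuming the standard pxf CLI, you can restart PXF locally on the affected host, or restart the whole cluster from the master host:

gpadmin@<segment-host>$ pxf restart
gpadmin@gpmaster$ pxf cluster restart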

Error: PXF server error: Permission denied: user=<user>, access=READ, inode="<filepath>":-rw-------
Cause: The Greenplum Database user that ran the PXF operation does not have permission to access the underlying Hadoop service (HDFS or Hive).
Solution: See Configuring the Hadoop User, User Impersonation, and Proxying.

Error: PXF server error: PXF service could not be reached. PXF is not running in the tomcat container
Cause: The pxf extension was updated to a new version but the PXF server has not been updated to a compatible version.
Solution: Ensure that the PXF server has been updated and restarted on all hosts.

Error: ERROR: could not load library "/usr/local/greenplum-db-x.x.x/lib/postgresql/pxf.so"
Cause: Some steps have not been completed after a Greenplum Database upgrade or migration, such as pxf cluster register.
Solution: Make sure you follow the steps outlined for [PXF Upgrade and Migration](https://docs.vmware.com/en/VMware-Tanzu-Greenplum/6/greenplum-database/GUID-pxf-pxf_upgrade_migration.html).
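
The registration step itself is a single command from the master host (a sketch, assuming the new PXF build is already installed on all hosts; the upgrade guide also has you update the pxf extension in each affected database):

gpadmin@gpmaster$ pxf cluster register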

Most PXF error messages include a HINT that you can use to resolve the error, or to collect more information to identify the error.

PXF Logging

Refer to the Logging topic for more information about logging levels, configuration, and the pxf-app.out and pxf-service.log log files.

Addressing PXF JDBC Connector Time Zone Errors

You use the PXF JDBC connector to access data stored in an external SQL database. Depending upon the JDBC driver, the driver may return an error if there is a mismatch between the default time zone set for the PXF Service and the time zone set for the external SQL database.

For example, if you use the PXF JDBC connector to access an Oracle database with a conflicting time zone, PXF logs an error similar to the following:

java.io.IOException: ORA-00604: error occurred at recursive SQL level 1
ORA-01882: timezone region not found

Should you encounter this error, you can set a default time zone for the PXF Service via the PXF_JVM_OPTS property in the $PXF_BASE/conf/pxf-env.sh configuration file. For example, to set the time zone:

export PXF_JVM_OPTS="<current_settings> -Duser.timezone=America/Chicago"

You can use the PXF_JVM_OPTS property to set other Java options as well.
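
For instance, PXF ships with default JVM options that size the heap; a plausible combined setting keeps those and appends the time zone (a sketch; check your pxf-env.sh for the actual current values):

export PXF_JVM_OPTS="-Xmx2g -Xms1g -Duser.timezone=America/Chicago"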

As described in previous sections, you must synchronize the updated PXF configuration to the Greenplum Database cluster and restart the PXF Service on each host.
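
For example:

gpadmin@gpmaster$ pxf cluster sync
gpadmin@gpmaster$ pxf cluster restart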

About PXF External Table Child Partitions

Greenplum Database supports partitioned tables, and permits exchanging a leaf child partition with a PXF external table.
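
For example, a sketch of the exchange (hypothetical table, partition, and external table names; the external table's columns must match the partitioned table, and WITHOUT VALIDATION is typically required because external data cannot be scanned for validation):

CREATE TABLE sales (id int, year int, amount numeric)
  DISTRIBUTED BY (id)
  PARTITION BY RANGE (year)
  ( PARTITION yr_2019 START (2019) END (2020),
    PARTITION yr_2020 START (2020) END (2021) );

-- swap the 2019 leaf partition for an existing PXF external table
ALTER TABLE sales EXCHANGE PARTITION yr_2019
  WITH TABLE ext_sales_2019 WITHOUT VALIDATION;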

When you read from a partitioned Greenplum table where one or more partitions is a PXF external table and there is no data backing the external table path, PXF returns an error and the query fails. This default PXF behavior is not optimal in the partitioned table case; an empty child partition is valid and should not cause a query on the parent table to fail.

The IGNORE_MISSING_PATH PXF custom option is a boolean that specifies the action PXF takes when the external table path is missing or invalid. The default value is false; PXF returns an error when it encounters a missing path. If the external table is a child partition of a Greenplum table, you want PXF to ignore a missing path error, so set this option to true.

For example, PXF ignores missing path errors generated from the following external table:

CREATE EXTERNAL TABLE ext_part_87 (id int, some_date date)
  LOCATION ('pxf://bucket/path/?PROFILE=s3:parquet&SERVER=s3&IGNORE_MISSING_PATH=true')
FORMAT 'CUSTOM' (formatter = 'pxfwritable_import');

The IGNORE_MISSING_PATH custom option applies only to file-based profiles, including *:text, *:parquet, *:avro, *:json, *:AvroSequenceFile, and *:SequenceFile. This option is not available when the external table specifies the hbase, hive[:*], or jdbc profiles, or when reading from S3 using S3-Select.

Addressing Hive MetaStore Connection Errors

The PXF Hive connector uses the Hive MetaStore to determine the HDFS locations of Hive tables. Starting in PXF version 6.2.1, PXF retries a failed connection to the Hive MetaStore a single time. If you encounter one of the following error messages or exceptions when accessing Hive via a PXF external table, consider increasing the retry count:

  • Failed to connect to the MetaStore Server.
  • Could not connect to meta store ...
  • org.apache.thrift.transport.TTransportException: null

PXF uses the hive-site.xml hive.metastore.failure.retries property setting to identify the maximum number of times it will retry a failed connection to the Hive MetaStore. The hive-site.xml file resides in the configuration directory of the PXF server that you use to access Hive.

Perform the following procedure to configure the number of Hive MetaStore connection retries that PXF will attempt; you may be required to add the hive.metastore.failure.retries property to the hive-site.xml file:

  1. Log in to the Greenplum Database master host.

  2. Identify the name of your Hive PXF server.

  3. Open the $PXF_BASE/servers/<hive-server-name>/hive-site.xml file in the editor of your choice, add the hive.metastore.failure.retries property if it does not already exist in the file, and set the value. For example, to configure 5 retries:

    <property>
        <name>hive.metastore.failure.retries</name>
        <value>5</value>
    </property>
    
  4. Save the file and exit the editor.

  5. Synchronize the PXF configuration to all hosts in your Greenplum Database cluster:

    gpadmin@gpmaster$ pxf cluster sync
    
  6. Re-run the failing SQL external table command.

Addressing a Missing Compression Codec Error

By default, PXF does not bundle the LZO compression library. If the Hadoop cluster is configured to use LZO compression, PXF returns the error message Compression codec com.hadoop.compression.lzo.LzoCodec not found on first access to Hadoop. To remedy the situation, you must register the LZO compression library with PXF as described below (for more information, refer to Registering a JAR Dependency):

  1. Locate the LZO library in the Hadoop installation directory on the Hadoop NameNode. For example, the file system location of the library may be /usr/lib/hadoop-lzo/lib/hadoop-lzo.jar.

  2. Log in to the Greenplum Database master host.

  3. Copy hadoop-lzo.jar from the Hadoop NameNode to the PXF configuration directory on the Greenplum Database master host. For example, if $PXF_BASE is /usr/local/pxf-gp6:

    gpadmin@gpmaster$ scp <hadoop-user>@<namenode-host>:/usr/lib/hadoop-lzo/lib/hadoop-lzo.jar /usr/local/pxf-gp6/lib/
    
  4. Synchronize the PXF configuration and restart PXF:

    gpadmin@gpmaster$ pxf cluster sync
    gpadmin@gpmaster$ pxf cluster restart
    
  5. Re-run the query.

Reading from a Hive table STORED AS ORC Returns NULLs

If you use PXF to read from a Hive table STORED AS ORC and one or more columns that contain values are returned as NULL, there may be a case sensitivity mismatch between the column names specified in the Hive table definition and those specified in the ORC embedded schema definition. This can happen when the table was created and populated by another system, such as Spark.

The workaround described in this section applies when all of the following hold true:

  • The Greenplum Database PXF external table that you created specifies the hive:orc profile.
  • The Greenplum Database PXF external table that you created specifies the VECTORIZE=false (the default) setting.
  • There is a case mis-match between the column names specified in the Hive table schema and the column names specified in the ORC embedded schema.
  • You confirm that the field names in the ORC embedded schema are not all in lowercase by performing the following tasks:

    1. Run DESC FORMATTED <table-name> in the hive subsystem and note the returned location; for example, location:hdfs://namenode/hive/warehouse/<table-name>.
    2. List the ORC files comprising the table by running the following command:

      $ hdfs dfs -ls <location>
      
    3. Dump each ORC file with the following command. For example, if the first step returned hdfs://namenode/hive/warehouse/hive_orc_tbl1, run:

      $ hive --orcfiledump /hive/warehouse/hive_orc_tbl1/<orc-file> > dump.out
      
    4. Examine the output, specifically the value of Type (sample output: Type: struct<COL0:int,COL1:string>). If the field names are not all lowercase, continue with the workaround below.

To remedy this situation, perform the following procedure:

  1. Log in to the Greenplum Database master host.

  2. Identify the name of your Hadoop PXF server configuration.

  3. Locate the hive-site.xml configuration file in the server configuration directory. For example, if $PXF_BASE is /usr/local/pxf-gp6 and the server name is <server_name>, the file is located here:

    /usr/local/pxf-gp6/servers/<server_name>/hive-site.xml
    
  4. Add or update the following property definition in the hive-site.xml file, and then save and exit the editor:

    <property>
        <name>orc.schema.evolution.case.sensitive</name>
        <value>false</value>
        <description>A boolean flag to determine if the comparison of field names in schema evolution is case sensitive.</description>
    </property>
    
  5. Synchronize the PXF server configuration to your Greenplum Database cluster:

    gpadmin@gpmaster$ pxf cluster sync
    
  6. Try the query again.


Hi! My name is Artemiy Kozyr, and I am an Analytics Engineer at Wheely.

Massively parallel processing (MPP) databases keep growing in popularity for analytical workloads. Today I would like to talk about the widely used Greenplum DBMS and, in particular, about the Platform Extension Framework (PXF), an extension that opens up almost unlimited possibilities for integrating with a multitude of external systems and data formats.

In this post you will find:

  • The core PXF capabilities, configuration, and optimization techniques.

  • Organizing Extract-Load with PXF (Data Lake / OLTP).

  • Combining local and external tables in queries (federated queries).

  • Writing data to external systems (Clickhouse).

The basic idea behind Greenplum PXF

The idea is to offload data processing to the external systems themselves, giving the user access to the entire history of the data regardless of where it is stored, without spending extra resources and time on full replication.

The demo scenario describes sources holding several years of sales data. Operational data (for example, the current month: hot data) lives in an OLTP MySQL database; Greenplum stores the data used for analytical reporting (say, the last year or two: warm data); and data for earlier periods is archived in AWS S3 (cold data).

Thus, a query that sums sales grouped by month can be distributed across the 3 systems and executed in parallel, while the user queries a single table.
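
A minimal sketch of the idea (hypothetical table names): warm local data and an S3-backed PXF external table exposed through a single view:

CREATE VIEW sales_all AS
SELECT order_month, amount FROM sales_warm          -- local Greenplum table
UNION ALL
SELECT order_month, amount FROM ext_sales_archive;  -- PXF external table over S3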

PXF advertises support for accessing data in the following systems:

  • Hadoop / Hive / HBase

  • AWS S3 / Google Cloud Storage / Azure Blob Storage / MinIO

  • Relational databases (via JDBC)

  • Network file systems

And the following storage formats:

  • Text (plain, delimited, embedded line feeds)

  • JSON

  • Avro, AvroSequenceFile

  • SequenceFile, RCFile

  • ORC / Parquet

In this post I will use the managed service from Yandex Cloud: Yandex Managed Service for Greenplum®. Cluster configuration: 2 x master + 2 x segment hosts based on s2.micro (2 vCPU, 100% vCPU rate, 8 GB RAM).

Managed Service for Greenplum already includes the PXF extension and its basic configuration. With a self-hosted Greenplum you would have to carry out all the installation and configuration steps yourself:

  • Meeting a number of prerequisites before installation.

  • Downloading the PXF package.

  • Installing the package on the hosts.

  • Initializing and starting the PXF service.

The step-by-step instructions are laid out in the official documentation, so I won't dwell on them here.

PXF is an extension for working with external data through EXTERNAL TABLEs

PXF implements a protocol for Greenplum that you can use when creating external tables.

The syntax of the CREATE EXTERNAL TABLE command with the pxf protocol looks like this:

CREATE [WRITABLE] EXTERNAL TABLE <table_name>
( <column_name> <data_type> [, ...] | LIKE <other_table> )
LOCATION('pxf://<path-to-data>?PROFILE=<profile_name>[&SERVER=<server_name>][&<custom-option>=<value>[...]]')
FORMAT '[TEXT|CSV|CUSTOM]' (<formatting-properties>)
;

A brief explanation of the key configuration parameters: <path-to-data> identifies the external data, PROFILE selects the connector and data format, SERVER points at a named server configuration, and the custom options carry connector-specific settings such as credentials.

PXF lets you both read from external sources and write data to external systems (the WRITABLE keyword). You can also combine data from different sources in a single query (so-called federated queries).

I want to draw particular attention to the support for filter pushdown and column projection.

Filter pushdown applies the row restriction from the WHERE clause of a SELECT query on the data source side, significantly reducing the load and the volume of data sent over the network. For example, it can be used when querying external databases, for partition pruning in Hive, and for reading row groups in columnar formats (ORC, Parquet).

Column projection means that only the columns requested by the SELECT query are returned from the external systems. For example, if you request 5 columns out of the 100 available in Parquet files, only those 5 are returned (transferred over the network), which cuts the data volume many times over.
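
Filter pushdown for external tables is governed by a Greenplum server configuration parameter (enabled by default in recent versions); you can check and toggle it per session, for example:

SHOW gp_external_enable_filter_pushdown;
SET gp_external_enable_filter_pushdown = on;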

Integration with a Data Lake (S3 / GCS / HDFS / MinIO)

Let's create an EXTERNAL TABLE pointing at data in object storage:

-- 1. Create EXTERNAL TABLE pointing to S3 (Text file)
DROP EXTERNAL TABLE IF EXISTS src_orders ;
CREATE EXTERNAL TABLE src_orders(
O_ORDERKEY BIGINT,
O_CUSTKEY INT,
O_ORDERSTATUS CHAR(1),
O_TOTALPRICE DECIMAL(15,2),
O_ORDERDATE DATE,
O_ORDERPRIORITY CHAR(15),
O_CLERK  CHAR(15),
O_SHIPPRIORITY INTEGER,
O_COMMENT VARCHAR(79)
)
LOCATION ('pxf://otus-dwh/tpch-dbgen/orders.csv?PROFILE=s3:csv&accesskey=<>&secretkey=<>&endpoint=storage.yandexcloud.net'
)
FORMAT 'TEXT'
(DELIMITER '|')
;

Let me explicitly stress the importance of configuring the file read correctly, namely:

  • The address within the bucket: otus-dwh/tpch-dbgen/orders.csv

  • The profile for reading the file format: PROFILE=s3:csv

  • The access keys for the bucket: accesskey=<>&secretkey=<>

  • The endpoint for S3-compatible storage, e.g. endpoint=storage.yandexcloud.net

  • Format-specific read parameters; for text that is (DELIMITER '|')

With an incorrect configuration you will not be able to get data out of the external table.

The file contains exactly 15M rows in total.

-- 2. Count rows
SELECT count(1) FROM src_orders ; -- 15000000 ROWS

Now let's run an analytical query and look at its execution plan:

-- 3. Run OLAP query
EXPLAIN ANALYZE
SELECT
    DATE_TRUNC('month', O_ORDERDATE) AS order_year
    ,count(1) AS num_orders
FROM src_orders
WHERE O_ORDERSTATUS = 'P'
GROUP BY 1
ORDER BY 1 ASC 
;

The query took 21.5 seconds, and the lion's share of that time was spent reading the data in S3. Even with the WHERE filter in place, a full read of the file was required, since text files do not support predicate pushdown.

Next, using a WRITABLE table, let's write this same data back to S3, but this time in the columnar Parquet format:

-- 4. Write data back to S3 in columnar format (Parquet)
DROP EXTERNAL TABLE IF EXISTS trg_orders ;
CREATE WRITABLE EXTERNAL TABLE trg_orders(
    O_ORDERKEY BIGINT,
    O_CUSTKEY INT,
    O_ORDERSTATUS CHAR(1),
    O_TOTALPRICE DECIMAL(15,2),
    O_ORDERDATE DATE,
    O_ORDERPRIORITY CHAR(15), 
    O_CLERK  CHAR(15), 
    O_SHIPPRIORITY INTEGER,
    O_COMMENT VARCHAR(79)
)
LOCATION ('pxf://otus-dwh/tpch-dbgen-parquet/orders?PROFILE=s3:parquet&accesskey=<>&secretkey=<>&endpoint=storage.yandexcloud.net'    
)
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export');
INSERT INTO trg_orders SELECT * FROM src_orders ORDER BY O_ORDERSTATUS, O_ORDERDATE
;

Let's run the very same OLAP query against the new table and compare the results:

This time the query completed in 4 seconds (5x faster), thanks to the predicate pushdown (WHERE O_ORDERSTATUS = 'P') and column projection (effectively only one column, O_ORDERDATE, was read) supported by the Parquet format.

Integration with an OLTP database (PostgreSQL)

In this scenario we move on to working with external OLTP databases, which continuously receive new data and update existing rows. Suppose that in our case this is advertising campaign statistics:

-- 1. Create EXTERNAL TABLE pointing to PostgreSQL table
DROP EXTERNAL TABLE IF EXISTS src_direct_ads_facts ;
CREATE EXTERNAL TABLE src_direct_ads_facts (
    id serial ,
    account_id int4 ,
    dates_id int4 ,
    sites_id int4 ,
    traffic_id int4 ,
    device varchar(16) ,
    impressions_context int4 ,
    impressions_search int4 ,
    impressions int4 ,
    clicks_context int4 ,
    clicks_search int4 ,
    clicks int4 ,
    cost_context numeric(18, 2) ,
    cost_search numeric(18, 2) ,
    "cost" numeric(18, 2) ,
    average_position numeric(18, 2) ,
    average_position_clicks numeric(18, 2) ,
    campaigns_id int4 ,
    ads_id int4 ,
    adgroups_id int4 ,
    bounces int4
)
LOCATION ('pxf://public.direct_ads_facts?PROFILE=JDBC&JDBC_DRIVER=org.postgresql.Driver&DB_URL=jdbc:postgresql://<hostname>:6432/<database>&USER=<username>&PASS=<password>')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import')
;

With JDBC we see a somewhat different set of parameters:

  • The schema and table name: public.direct_ads_facts

  • The read profile: PROFILE=JDBC

  • The JDBC driver class: JDBC_DRIVER=org.postgresql.Driver

  • The DB URI: DB_URL=jdbc:postgresql://<hostname>:6432/<database>&USER=<username>&PASS=<password>

Let's assume we need to pull data from the source system into Greenplum on a regular basis. This can be done in two ways:

  • Copying the entire table on every run.

  • Setting up incremental loading.

The first approach has obvious drawbacks: excessive load on the source and copying the same data over and over. Incremental loading is far more efficient, but it requires a way to isolate the delta (the changes), for example a monotonically increasing key:

-- 1. Initialize
CREATE TABLE direct_ads_facts AS SELECT * FROM src_direct_ads_facts ;
-- 2. Incremental load
INSERT INTO direct_ads_facts SELECT * FROM src_direct_ads_facts
WHERE id > (SELECT MAX(id) FROM direct_ads_facts) ;
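
As another optimization worth knowing about, the JDBC profile can split a read into several parallel queries when you describe a partitioning column in the LOCATION options; PARTITION_BY, RANGE, and INTERVAL are the documented option names, while the values below are hypothetical:

-- A sketch: parallel JDBC read partitioned by the id column
CREATE EXTERNAL TABLE src_direct_ads_facts_part (LIKE direct_ads_facts)
LOCATION ('pxf://public.direct_ads_facts?PROFILE=JDBC&JDBC_DRIVER=org.postgresql.Driver&DB_URL=jdbc:postgresql://<hostname>:6432/<database>&PARTITION_BY=id:int&RANGE=1:15000000&INTERVAL=1000000')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');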

Using federated queries

Now imagine a situation where a huge fact table needs to be joined to a small but frequently changing dimension. A real-world example is the dimension of advertising campaign names: the campaigns always keep the same identifier but periodically change their names (labels).

PXF is a perfect fit here, with its ability to query the src_direct_campaigns source directly to fetch the most up-to-date dimension values:

-- 2.7. Create Data Mart
CREATE TABLE direct_search_facts AS
SELECT    
      cf.account_id AS account_id
    , MD5(CONCAT(CAST(cf.id AS varchar), 'yandex.search')) AS id
    , ga.caption AS caption
    , gd.simple_date AS dt
    , cf.device AS device
    , 'yandex.search' AS SOURCE
    , gt.medium AS medium
    , CAST(cp.campaign_id AS Int) AS campaign_id
    , cp.name AS campaign -- Always correct and up-to-date
    , gt.campaign AS traffic_campaign
    , gt.content AS CONTENT
    , gt.keyword AS keyword
    , gs.domain AS DOMAIN
    , gt.landing_page AS landing_page
    , cf.impressions_search AS impressions
    , cf.clicks_search AS clicks
    , cf.cost_search AS COST
FROM direct_ads_facts AS cf
    LEFT JOIN general_accounts AS ga
            ON ga.account_id = cf.account_id
    LEFT JOIN general_dates AS gd
            ON gd.id = cf.dates_id
    LEFT JOIN src_direct_campaigns AS cp -- ! From PostgreSQL directly
            ON cp.id = cf.campaigns_id
    LEFT JOIN general_sites AS gs
            ON gs.id = cf.sites_id
    LEFT JOIN general_traffic AS gt
            ON gt.id = cf.traffic_id
;

Writing data to Clickhouse for ultra-fast reads

The resulting wide data mart table contains all the data needed to answer numerous analytical questions. However, at times even Greenplum can struggle under a multitude of complex analytical queries, losing substantially in speed and interactivity. Ultra-fast Clickhouse to the rescue! Our task is to move the finished data mart somewhere it can be worked with at ping-pong pace, regardless of the data volume.

To start, an empty table (the schema) must be created on the target side:

CREATE TABLE direct_search(
	  account_id Int8
    , id String
	, caption String
	, dt String
	, device String
	, "source" String
	, medium String
	, campaign_id Int8
	, campaign String
	, traffic_campaign String
	, "content" String
	, keyword String
	, "domain" String
	, landing_page String
	, impressions Int8
	, clicks Int8
	, "cost" Float32
)
ENGINE = MergeTree()  
ORDER BY (dt)
;

After that we are ready to write the data to Clickhouse with PXF:

DROP EXTERNAL TABLE IF EXISTS trg_ch_direct_search ;

CREATE WRITABLE EXTERNAL TABLE trg_ch_direct_search(
account_id int4 ,
id varchar(128) ,
caption varchar(128) ,
dt varchar(128) ,
device varchar(16) ,
"source" varchar(1024) ,
medium varchar(1024) ,
campaign_id int4 ,
campaign varchar(1024) ,
traffic_campaign varchar(1024) ,
"content" varchar(1024) ,
keyword varchar(1024) ,
"domain" varchar(1024) ,
landing_page varchar(1024) ,
impressions int4 ,
clicks int4 ,
"cost" numeric(18, 2)
)
LOCATION ('pxf://direct_search?PROFILE=JDBC&JDBC_DRIVER=ru.yandex.clickhouse.ClickHouseDriver&DB_URL=jdbc:clickhouse://<hostname>:8123/default')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export')
;

INSERT INTO trg_ch_direct_search
SELECT
account_id
,id
,caption
,dt
,device
,"source"
,medium
,campaign_id
,campaign
,traffic_campaign
,"content"
,keyword
,"domain"
,landing_page
,impressions
,clicks
,"cost"
FROM direct_search
;

Nothing complicated: a few simple steps, a bit of patience, and done. Now this data mart can be queried in Clickhouse, which, as everyone knows, doesn't slow down!

Problems and difficulties

While researching and preparing this material I ran into a few difficulties that I'd like to draw your attention to:

  1. When working with S3 you must specify the access keys accesskey=<>&secretkey=<> even for publicly accessible buckets.

  2. For PXF to work in Yandex Managed Service for Greenplum you must enable egress NAT for the subnet the hosts are in.

A relevant notice recently appeared on the documentation site:

To connect to external sources, you must enable NAT to the internet for the subnet in which the Managed Service for Greenplum® cluster is located.

Otherwise you will get a timeout error: SQL Error [08000]: ERROR: PXF server error : Could not obtain datasource for server default and PoolDescriptor

  3. No SSL support for JDBC connections.

PXF does not support SSL connections at the level of the ClickHouse JDBC driver parameters.

Workaround: you can leave one host in the ClickHouse cluster without public access and query that host from Greenplum.

The ability to build end-to-end solutions that answer business needs

That is what hiring managers want to see. Generalists, multi-instrumentalists who work autonomously and can independently solve problems and create value for the business, are in higher demand than ever.

These are exactly the aspects I kept in mind while working on the Analytics Engineer, Data Engineer, and DataOps Engineer course programs at OTUS.

It is not just a set of lessons on individual topics but a single, coherent story with an emphasis on understanding customer needs. In the live sessions my colleagues and I share our experience and real-world cases.

Share in the comments: which PXF use case have you had a chance to work with, or how do you plan to apply it?

Yes, my catalina.out:

Feb 06, 2019 10:58:41 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-bio-5888"]
Feb 06, 2019 10:58:41 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 286 ms
Feb 06, 2019 10:58:41 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Catalina
Feb 06, 2019 10:58:41 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.62
Feb 06, 2019 10:58:41 PM org.apache.catalina.startup.HostConfig deployWAR
INFO: Deploying web application archive /home/gpadmin/greenplum_db/pxf/pxf-service/webapps/pxf.war
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader readClasspathFile
INFO: Trying to read classpath file /home/gpadmin/greenplum_db/pxf/conf/pxf-private.classpath
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addRepositories
INFO: Classpath file /home/gpadmin/greenplum_db/pxf/conf/pxf-private.classpath has 5 entries
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/pxf/conf/ added from entry /home/gpadmin/pxf/conf
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/conf/ added from entry /home/gpadmin/greenplum_db/pxf/conf
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addRepositories
WARNING: Entry /home/gpadmin/pxf/lib/*.jar doesn't match any files
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/pxf-hive-5.0.1.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/pxf-*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/pxf-api-5.0.1.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/pxf-*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/pxf-hdfs-5.0.1.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/pxf-*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/pxf-json-5.0.1.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/pxf-*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/pxf-hbase-5.0.1.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/pxf-*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/pxf-jdbc-5.0.1.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/pxf-*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/pxf-ignite-5.0.1.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/pxf-*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/pxf-service-5.0.1.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/pxf-*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/shared/datanucleus-api-jdo-4.2.1.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/shared/*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/shared/jackson-databind-2.9.8.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/shared/*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/shared/hbase-protocol-1.1.2.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/shared/*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/shared/jackson-mapper-asl-1.9.13.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/shared/*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/shared/commons-configuration-1.6.jar added from entry /home/gpadmin/greenplum_db/pxf/lib/shared/*.jar
Feb 06, 2019 10:58:41 PM org.greenplum.pxf.service.utilities.CustomWebappLoader addPathToRepository
INFO: Repository file:///home/gpadmin/greenplum_db/pxf/lib/shared/jd

my localhost.2019-02-18.log:

Feb 18, 2019 1:32:03 AM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [PXF REST Service] in context with path [/pxf] threw exception
java.io.IOException: com.microsoft.sqlserver.jdbc.SQLServerDriver
at org.greenplum.pxf.service.rest.BridgeResource$1.write(BridgeResource.java:149)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:71)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:57)
at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1437)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.greenplum.pxf.service.servlet.SecurityServletFilter.lambda$doFilter$0(SecurityServletFilter.java:105)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
at org.greenplum.pxf.service.servlet.SecurityServletFilter.doFilter(SecurityServletFilter.java:120)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:957)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:423)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1079)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:620)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)

[The same SEVERE entry and java.io.IOException: com.microsoft.sqlserver.jdbc.SQLServerDriver stack trace repeat at 1:58:07 AM, 2:32:14 AM, 2:37:30 AM, 3:55:26 AM, and 7:05:03 AM.]

my pxf-service.log:

2019-02-06 22:58:46.0511 INFO localhost-startStop-1 org.greenplum.pxf.service.utilities.Log4jConfigure - log4jConfigLocation = /home/gpadmin/pxf/conf/pxf-log4j.properties
2019-02-06 22:58:46.0511 INFO localhost-startStop-1 javax.servlet.ServletContextListener - webapp initialized
2019-02-06 22:58:46.0519 INFO localhost-startStop-1 org.greenplum.pxf.service.utilities.SecureLogin - User impersonation is enabled
2019-02-06 22:58:46.0689 WARN localhost-startStop-1 org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-02-06 22:58:46.0726 INFO localhost-startStop-1 org.greenplum.pxf.service.utilities.SecureLogin - Kerberos Security is not enabled
2019-02-07 23:26:23.0685 WARN tomcat-http--47 org.greenplum.pxf.service.profile.ProfilesConf - Profile file 'pxf-profiles.xml' is empty
2019-02-07 23:26:23.0692 INFO tomcat-http--47 org.greenplum.pxf.service.profile.ProfilesConf - PXF profiles loaded: [adl:avro, adl:AvroSequenceFile, adl:json, adl:parquet, adl:SequenceFile, adl:text, adl:text:multi, Apache Ignite, Avro, GemFireXD, gs:avro, gs:AvroSequenceFile, gs:json, gs:parquet, gs:SequenceFile, gs:text, gs:text:multi, HBase, hdfs:avro, hdfs:AvroSequenceFile, hdfs:json, hdfs:parquet, hdfs:SequenceFile, hdfs:text, hdfs:text:multi, HdfsTextMulti, HdfsTextSimple, Hive, HiveORC, HiveRC, HiveText, HiveVectorizedORC, Jdbc, Json, Parquet, s3:avro, s3:AvroSequenceFile, s3:json, s3:parquet, s3:SequenceFile, s3:text, s3:text:multi, SequenceWritable, wasbs:avro, wasbs:AvroSequenceFile, wasbs:json, wasbs:parquet, wasbs:SequenceFile, wasbs:text, wasbs:text:multi]
2019-02-07 23:26:25.0985 ERROR tomcat-http--47 org.greenplum.pxf.service.rest.BridgeResource - Exception thrown when streaming
java.lang.ClassNotFoundException: com.microsoft.sqlserver.jdbc.SQLServerDriver
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1720)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1571)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.greenplum.pxf.plugins.jdbc.JdbcBasePlugin.getConnection(JdbcBasePlugin.java:133)
at org.greenplum.pxf.plugins.jdbc.JdbcAccessor.openForRead(JdbcAccessor.java:71)
at org.greenplum.pxf.service.bridge.ReadBridge.beginIteration(ReadBridge.java:72)
at org.greenplum.pxf.service.rest.BridgeResource$1.write(BridgeResource.java:132)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:71)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:57)
at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1437)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.greenplum.pxf.service.servlet.SecurityServletFilter.lambda$doFilter$0(SecurityServletFilter.java:105)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
at org.greenplum.pxf.service.servlet.SecurityServletFilter.doFilter(SecurityServletFilter.java:120)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:957)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:423)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1079)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:620)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
2019-02-07 23:27:04.0711 ERROR tomcat-http--13 org.greenplum.pxf.service.rest.BridgeResource - Exception thrown when streaming
java.lang.ClassNotFoundException: com.microsoft.sqlserver.jdbc.SQLServerDriver
    (stack trace identical to the one above)

……

2019-02-18 07:05:03.0491 ERROR tomcat-http--43 org.greenplum.pxf.service.rest.BridgeResource - Exception thrown when streaming
java.lang.ClassNotFoundException: com.microsoft.sqlserver.jdbc.SQLServerDriver
    (stack trace identical to the one above)
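
Cause: The repeated java.lang.ClassNotFoundException for com.microsoft.sqlserver.jdbc.SQLServerDriver means the PXF Service cannot find the Microsoft SQL Server JDBC driver on its classpath. PXF does not bundle vendor JDBC drivers; you must obtain the driver JAR yourself and make it available to PXF on every Greenplum host.

Solution: Copy the driver JAR into the PXF user library directory, synchronize the PXF configuration across the cluster, and restart the PXF Service. The following is a minimal sketch, assuming a PXF 5.x layout where user JARs go in $PXF_CONF/lib (newer PXF releases use $PXF_BASE/lib instead); the JAR file name is a placeholder for whichever driver version you downloaded from Microsoft:

# Copy the SQL Server JDBC driver JAR into the PXF user library directory
# (mssql-jdbc-7.2.2.jre8.jar is a hypothetical name; use your downloaded JAR)
cp mssql-jdbc-7.2.2.jre8.jar $PXF_CONF/lib/

# Synchronize the PXF configuration to all hosts, then restart the
# PXF Service so the new JAR is picked up on the classpath
pxf cluster sync
pxf cluster restart    # on older releases: pxf cluster stop, then pxf cluster start

After the restart, re-run the query against the JDBC external table; the ClassNotFoundException should no longer appear in pxf-service.log.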
