Hi @lhoestq, below is the entire error message after I moved both tsv files to the same directory. It’s the same as what I got before.
/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
08/29/2021 22:56:43 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False
08/29/2021 22:56:43 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=/projectnb/media-framing/pred_result/label1/, overwrite_output_dir=True, do_train=True, do_eval=False, do_predict=True, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=8.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs/Aug29_22-56-43_scc1, logging_first_step=False, logging_steps=500, save_steps=3000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=/projectnb/media-framing/pred_result/label1/, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, _n_gpu=0)
08/29/2021 22:56:43 - INFO - __main__ - load a local file for train: /project/media-framing/transformer4/temp_train.tsv
08/29/2021 22:56:43 - INFO - __main__ - load a local file for test: /project/media-framing/transformer4/temp_test.tsv
08/29/2021 22:56:43 - WARNING - datasets.builder - Using custom data configuration default-df627c23ac0e98ec
Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /usr4/cs542sp/hey1/.cache/huggingface/datasets/csv/default-df627c23ac0e98ec/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff...
Traceback (most recent call last):
File "/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py", line 1166, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/arrow_writer.py", line 428, in finalize
self.stream.close()
File "pyarrow/io.pxi", line 132, in pyarrow.lib.NativeFile.close
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: error closing file
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_glue.py", line 487, in <module>
main()
File "run_glue.py", line 244, in main
datasets = load_dataset("csv", data_files=data_files, delimiter="t")
File "/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/load.py", line 852, in load_dataset
use_auth_token=use_auth_token,
File "/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py", line 616, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py", line 699, in _download_and_prepare
+ str(e)
OSError: Cannot find data file.
Original error:
error closing file
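For reference, the csv builder reads tab-separated files when the delimiter is an escaped tab ("\t"). A minimal sketch is below; the file names are placeholders, and whether the delimiter="t" in the failing call above is the actual cause is only a guess.

from datasets import load_dataset

# hypothetical file names; adjust to the real temp_train.tsv / temp_test.tsv paths
data_files = {"train": "temp_train.tsv", "test": "temp_test.tsv"}
# note the escaped tab, not the literal letter "t"
datasets = load_dataset("csv", data_files=data_files, delimiter="\t")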
Question
Everything was fine last night, but this morning when I opened my VS 2005 web project, I got this message:
Failed to load dataset because of the following error: Unable to find connection X for object ‘web.config’. The connection string could not be found in application settings, or the data provider associated with the connection string could not be loaded.
Also, in my DataSet.xsd file, there are now 102 warnings that were not present before, and when I try to switch to Design view, I get "Cannot open a designer for the file because the class within it does not inherit from a class that can be visually designed."
I cannot see anything obviously wrong with the XSD file, but here are some of the warnings I get:
"The 'AllowDbNull' attribute is not declared."
(Also, lots of other attributes are "not declared.")
"The element cannot contain white space. Content model is empty."
Can someone help me get my project repaired?
Thanks.
Answers
It turns out that the problem was duplicate connection strings in the .xsd file for the database, in the App_Code folder. I don’t know what put them in there or why the error messages gave so little useful information, but when I removed the duplicates, everything was fine.
Below is a snippet from the .xsd file to illustrate. The <Connection> entries are identical in every way. To fix the problem, I removed all but one of them.
<Connections>
<Connection AppSettingsObjectName="Web.config" AppSettingsPropertyName="MyConnectionString" … </Connection>
<Connection AppSettingsObjectName="Web.config" AppSettingsPropertyName="MyConnectionString" … </Connection>
</Connections>
ERROR: LOADING Redis is loading the dataset in memory.
This Redis error is shown when the system is not ready to accept connection requests. It usually goes away once Redis finishes loading the data into memory, but sometimes it persists.
As a part of our Server Management Services, we help our customers fix Redis-related errors like this.
Today we’ll take a look at what causes this persistent error, and how to fix it.
What causes “ERROR: LOADING Redis is loading the dataset in memory”?
Redis keeps the whole data set in memory and answers all queries from memory. This often helps to reduce the application load time.
The Redis replication system allows replica Redis instances to be exact copies of master instances. The replica will automatically reconnect to the master every time the link breaks and will attempt to be an exact copy of it regardless of what happens to the master.
As mentioned earlier, the “LOADING Redis is loading the dataset in memory” error occurs if connection requests arrive before the system has completely loaded the dataset into memory and made Redis ready for connections. This generally happens in two different scenarios:
- At a master startup.
- When a slave reconnects and performs a full resynchronization with a master.
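On the client side this error is typically transient. Below is a minimal retry sketch using the redis-py client; the choice of client library and the connection details are assumptions, not part of the article.

import time
import redis

client = redis.Redis(host="localhost", port=6379)  # assumed host/port

for _ in range(30):
    try:
        client.ping()  # raises BusyLoadingError while the dataset is still loading
        break          # Redis is ready to accept commands
    except redis.exceptions.BusyLoadingError:
        time.sleep(2)  # still loading the dataset in memory; wait and retry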
Let us now look at the possible fixes for this error.
How to fix the error “LOADING Redis is loading the dataset in memory”?
In most cases, a frequent display of the error message is related to recent changes made on the site in relation to Redis. Such changes may increase the data stored in Redis considerably, and it can become saturated easily. As a result, Redis replicas may disconnect frequently, and when a replica tries to reconnect, the message “LOADING Redis is loading the dataset in memory” may be displayed.
The quick solution here would be to flush the Redis cache. Let us discuss how to flush the Redis cache:
Flush Redis Cache
To flush the Redis cache, either the FLUSHDB or the FLUSHALL command can be used. The FLUSHDB command deletes all the keys of the selected database, while the FLUSHALL command deletes all the keys of all the existing databases, not just the selected one.
The syntax for the commands is:
redis-cli FLUSHDB
redis-cli -n DB_NUMBER FLUSHDB
redis-cli -n DB_NUMBER FLUSHDB ASYNC
redis-cli FLUSHALL
redis-cli FLUSHALL ASYNC
For instance, to delete all the keys of database #4 from the Redis cache, the syntax to be used is:
$ redis-cli -n 4 FLUSHDB
This will help to fix the issue. To prevent it from happening frequently, we need to revert the changes that were made earlier. It is always preferable to keep the data stored in Redis minimal.
Conclusion
In short, the error “LOADING Redis is loading the dataset in memory” occurs at Redis master startup, or when a slave reconnects and performs a full resynchronization with the master. When connection requests arrive before the dataset is completely loaded into memory, the error message is triggered. Today, we discussed how our Support Engineers fix this error.
Methods for listing and loading datasets and metrics:
Datasets
datasets.list_datasets(with_community_datasets=True, with_details=False)

Parameters

with_community_datasets (bool, optional, defaults to True) — Include the community provided datasets.
with_details (bool, optional, defaults to False) — Return the full details on the datasets instead of only the short name.

List all the dataset scripts available on the Hugging Face Hub.

Example:

>>> from datasets import list_datasets
>>> list_datasets()
['acronym_identification', 'ade_corpus_v2', 'adversarial_qa', 'aeslc', 'afrikaans_ner_corpus', 'ag_news', ...]
datasets.load_dataset(
    path: str,
    name: typing.Optional[str] = None,
    data_dir: typing.Optional[str] = None,
    data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None,
    split: typing.Union[str, datasets.splits.Split, NoneType] = None,
    cache_dir: typing.Optional[str] = None,
    features: typing.Optional[datasets.features.features.Features] = None,
    download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None,
    download_mode: typing.Optional[datasets.download.download_manager.DownloadMode] = None,
    ignore_verifications: bool = False,
    keep_in_memory: typing.Optional[bool] = None,
    save_infos: bool = False,
    revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None,
    use_auth_token: typing.Union[str, bool, NoneType] = None,
    task: typing.Union[str, datasets.tasks.base.TaskTemplate, NoneType] = None,
    streaming: bool = False,
    num_proc: typing.Optional[int] = None,
    **config_kwargs
) → Dataset or DatasetDict
Parameters
path (str) —
Path or name of the dataset.
Depending on path, the dataset builder that is used comes from a generic dataset script (JSON, CSV, Parquet, text etc.) or from the dataset script (a python file) inside the dataset directory.

For local datasets:
- if path is a local directory (containing data files only) -> load a generic dataset builder (csv, json, text etc.) based on the content of the directory, e.g. './path/to/directory/with/my/csv/data'.
- if path is a local dataset script or a directory containing a local dataset script (if the script has the same name as the directory) -> load the dataset builder from the dataset script, e.g. './dataset/squad' or './dataset/squad/squad.py'.

For datasets on the Hugging Face Hub (list all available datasets and ids with datasets.list_datasets()):
- if path is a dataset repository on the HF Hub (containing data files only) -> load a generic dataset builder (csv, text etc.) based on the content of the repository, e.g. 'username/dataset_name', a dataset repository on the HF Hub containing your data files.
- if path is a dataset repository on the HF Hub with a dataset script (if the script has the same name as the directory) -> load the dataset builder from the dataset script in the dataset repository, e.g. glue, squad or 'username/dataset_name', a dataset repository on the HF Hub containing a dataset script 'dataset_name.py'.
name (str, optional) —
Defining the name of the dataset configuration.

data_dir (str, optional) —
Defining the data_dir of the dataset configuration. If specified for the generic builders (csv, text etc.) or the Hub datasets and data_files is None, the behavior is equal to passing os.path.join(data_dir, **) as data_files to reference all the files in a directory.

data_files (str or Sequence or Mapping, optional) —
Path(s) to source data file(s).

split (Split or str) —
Which split of the data to load.
If None, will return a dict with all splits (typically datasets.Split.TRAIN and datasets.Split.TEST).
If given, will return a single Dataset.
Splits can be combined and specified like in tensorflow-datasets.

cache_dir (str, optional) —
Directory to read/write data. Defaults to "~/.cache/huggingface/datasets".
features (Features, optional) —
Set the features type to use for this dataset.

download_config (DownloadConfig, optional) —
Specific download configuration parameters.

download_mode (DownloadMode, defaults to REUSE_DATASET_IF_EXISTS) —
Download/generate mode.

ignore_verifications (bool, defaults to False) —
Ignore the verifications of the downloaded/processed dataset information (checksums/size/splits/…).

keep_in_memory (bool, defaults to None) —
Whether to copy the dataset in-memory. If None, the dataset will not be copied in-memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the improve performance section.

save_infos (bool, defaults to False) —
Save the dataset information (checksums/size/splits/…).

revision (Version or str, optional) —
Version of the dataset script to load.
As datasets have their own git repository on the Datasets Hub, the default version "main" corresponds to their "main" branch.
You can specify a different version than the default "main" by using a commit SHA or a git tag of the dataset repository.

use_auth_token (str or bool, optional) —
Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.
If True, or not specified, will get token from "~/.huggingface".

task (str) —
The task to prepare the dataset for during training and evaluation. Casts the dataset’s Features to standardized column names and types as detailed in datasets.tasks.

streaming (bool, defaults to False) —
If set to True, don’t download the data files. Instead, it streams the data progressively while iterating on the dataset. An IterableDataset or IterableDatasetDict is returned instead in this case.
Note that streaming works for datasets that use data formats that support being iterated over, like txt, csv and jsonl for example.
Json files may be downloaded completely. Also streaming from remote zip or gzip files is supported, but other compressed formats like rar and xz are not yet supported. The tgz format doesn’t allow streaming.

num_proc (int, optional, defaults to None) —
Number of processes when downloading and generating the dataset locally.
Multiprocessing is disabled by default.
Added in 2.7.0

**config_kwargs (additional keyword arguments) —
Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.
Returns: Dataset or DatasetDict
- if split is not None: the dataset requested,
- if split is None: a DatasetDict with each split.

Or IterableDataset or IterableDatasetDict if streaming=True:
- if split is not None: the dataset requested,
- if split is None: a ~datasets.streaming.IterableDatasetDict with each split.
Load a dataset from the Hugging Face Hub, or a local dataset.
You can find the list of datasets on the Hub or with datasets.list_datasets().
A dataset is a directory that contains:
- some data files in generic formats (JSON, CSV, Parquet, text, etc.).
- and optionally a dataset script, if it requires some code to read the data files. This is used to load any kind of formats or structures.
Note that dataset scripts can also download and read data files from anywhere — in case your data files already exist online.
This function does the following under the hood:
- Download and import in the library the dataset script from path if it’s not already cached inside the library.
  If the dataset has no dataset script, then a generic dataset script is imported instead (JSON, CSV, Parquet, text, etc.).
  Dataset scripts are small python scripts that define dataset builders. They define the citation, info and format of the dataset, contain the path or URL to the original data files and the code to load examples from the original data files.
  You can find the complete list of datasets in the Datasets Hub.
- Run the dataset script which will:
  - Download the dataset file from the original URL (see the script) if it’s not already available locally or cached.
  - Process and cache the dataset in typed Arrow tables for caching.
    Arrow tables are arbitrarily long, typed tables which can store nested objects and be mapped to numpy/pandas/python generic types.
    They can be directly accessed from disk, loaded in RAM or even streamed over the web.
- Return a dataset built from the requested splits in split (default: all).

It also allows you to load a dataset from a local directory or a dataset repository on the Hugging Face Hub without a dataset script.
In this case, it automatically loads all the data files from the directory or the dataset repository.
Example:

Load a dataset from the Hugging Face Hub:

>>> from datasets import load_dataset
>>> ds = load_dataset('rotten_tomatoes', split='train')

>>> data_files = {'train': 'train.csv', 'test': 'test.csv'}
>>> ds = load_dataset('namespace/your_dataset_name', data_files=data_files)

Load a local dataset:

>>> from datasets import load_dataset
>>> ds = load_dataset('csv', data_files='path/to/local/my_dataset.csv')

>>> from datasets import load_dataset
>>> ds = load_dataset('json', data_files='path/to/local/my_dataset.json')

>>> from datasets import load_dataset
>>> ds = load_dataset('path/to/local/loading_script/loading_script.py', split='train')

Load an IterableDataset:

>>> from datasets import load_dataset
>>> ds = load_dataset('rotten_tomatoes', split='train', streaming=True)

Load an image dataset with the ImageFolder dataset builder:

>>> from datasets import load_dataset
>>> ds = load_dataset('imagefolder', data_dir='/path/to/images', split='train')
datasets.load_from_disk(
    dataset_path: str,
    fs = 'deprecated',
    keep_in_memory: typing.Optional[bool] = None,
    storage_options: typing.Optional[dict] = None
) → Dataset or DatasetDict
Parameters
dataset_path (str) —
Path (e.g. "dataset/train") or remote URI (e.g. "s3://my-bucket/dataset/train") of the Dataset or DatasetDict directory where the dataset will be loaded from.

fs (~filesystems.S3FileSystem or fsspec.spec.AbstractFileSystem, optional) —
Instance of the remote filesystem used to download the files from.
Deprecated in 2.9.0: fs was deprecated in version 2.9.0 and will be removed in 3.0.0.
Please use storage_options instead, e.g. storage_options=fs.storage_options.

keep_in_memory (bool, defaults to None) —
Whether to copy the dataset in-memory. If None, the dataset will not be copied in-memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the improve performance section.

storage_options (dict, optional) —
Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.9.0

Returns: Dataset or DatasetDict
- If dataset_path is a path of a dataset directory: the dataset requested.
- If dataset_path is a path of a dataset dict directory: a DatasetDict with each split.
Loads a dataset that was previously saved using save_to_disk() from a dataset directory, or from a filesystem using either S3FileSystem or any implementation of fsspec.spec.AbstractFileSystem.
Example:

>>> from datasets import load_from_disk
>>> ds = load_from_disk('path/to/dataset/directory')
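A short round-trip sketch may help illustrate the relationship with save_to_disk() mentioned above; the directory path is a placeholder:

>>> from datasets import load_dataset, load_from_disk
>>> ds = load_dataset('rotten_tomatoes', split='train')
>>> ds.save_to_disk('path/to/dataset/directory')  # write the Arrow files to disk
>>> ds = load_from_disk('path/to/dataset/directory')  # reload them later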
datasets.load_dataset_builder(
    path: str,
    name: typing.Optional[str] = None,
    data_dir: typing.Optional[str] = None,
    data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None,
    cache_dir: typing.Optional[str] = None,
    features: typing.Optional[datasets.features.features.Features] = None,
    download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None,
    download_mode: typing.Optional[datasets.download.download_manager.DownloadMode] = None,
    revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None,
    use_auth_token: typing.Union[str, bool, NoneType] = None,
    **config_kwargs
)
Parameters
path (str) —
Path or name of the dataset.
Depending on path, the dataset builder that is used comes from a generic dataset script (JSON, CSV, Parquet, text etc.) or from the dataset script (a python file) inside the dataset directory.

For local datasets:
- if path is a local directory (containing data files only) -> load a generic dataset builder (csv, json, text etc.) based on the content of the directory, e.g. './path/to/directory/with/my/csv/data'.
- if path is a local dataset script or a directory containing a local dataset script (if the script has the same name as the directory) -> load the dataset builder from the dataset script, e.g. './dataset/squad' or './dataset/squad/squad.py'.

For datasets on the Hugging Face Hub (list all available datasets and ids with datasets.list_datasets()):
- if path is a dataset repository on the HF Hub (containing data files only) -> load a generic dataset builder (csv, text etc.) based on the content of the repository, e.g. 'username/dataset_name', a dataset repository on the HF Hub containing your data files.
- if path is a dataset repository on the HF Hub with a dataset script (if the script has the same name as the directory) -> load the dataset builder from the dataset script in the dataset repository, e.g. glue, squad or 'username/dataset_name', a dataset repository on the HF Hub containing a dataset script 'dataset_name.py'.
name (str, optional) —
Defining the name of the dataset configuration.

data_dir (str, optional) —
Defining the data_dir of the dataset configuration. If specified for the generic builders (csv, text etc.) or the Hub datasets and data_files is None, the behavior is equal to passing os.path.join(data_dir, **) as data_files to reference all the files in a directory.

data_files (str or Sequence or Mapping, optional) —
Path(s) to source data file(s).

cache_dir (str, optional) —
Directory to read/write data. Defaults to "~/.cache/huggingface/datasets".

features (Features, optional) —
Set the features type to use for this dataset.

download_config (DownloadConfig, optional) —
Specific download configuration parameters.

download_mode (DownloadMode, defaults to REUSE_DATASET_IF_EXISTS) —
Download/generate mode.

revision (Version or str, optional) —
Version of the dataset script to load.
As datasets have their own git repository on the Datasets Hub, the default version "main" corresponds to their "main" branch.
You can specify a different version than the default "main" by using a commit SHA or a git tag of the dataset repository.

use_auth_token (str or bool, optional) —
Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.
If True, or not specified, will get token from "~/.huggingface".

**config_kwargs (additional keyword arguments) —
Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.
Load a dataset builder from the Hugging Face Hub, or a local dataset. A dataset builder can be used to inspect general information that is required to build a dataset (cache directory, config, dataset info, etc.)
without downloading the dataset itself.
You can find the list of datasets on the Hub or with datasets.list_datasets().
A dataset is a directory that contains:
- some data files in generic formats (JSON, CSV, Parquet, text, etc.)
- and optionally a dataset script, if it requires some code to read the data files. This is used to load any kind of formats or structures.
Note that dataset scripts can also download and read data files from anywhere — in case your data files already exist online.
Example:

>>> from datasets import load_dataset_builder
>>> ds_builder = load_dataset_builder('rotten_tomatoes')
>>> ds_builder.info.features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
 'text': Value(dtype='string', id=None)}
datasets.get_dataset_config_names(
    path: str,
    revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None,
    download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None,
    download_mode: typing.Optional[datasets.download.download_manager.DownloadMode] = None,
    dynamic_modules_path: typing.Optional[str] = None,
    data_files: typing.Union[str, typing.List, typing.Dict, NoneType] = None,
    **download_kwargs
)
Parameters
path (str) — path to the dataset processing script with the dataset builder. Can be either:
- a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
- a dataset identifier on the Hugging Face Hub (list all available datasets and ids with datasets.list_datasets()), e.g. 'squad', 'glue' or 'openai/webtext'

revision (Union[str, datasets.Version], optional) —
If specified, the dataset module will be loaded from the datasets repository at this version.
By default:
- it is set to the local version of the lib.
- it will also try to load it from the main branch if it’s not available at the local version of the lib.
Specifying a version that is different from your local version of the lib might cause compatibility issues.

download_config (DownloadConfig, optional) —
Specific download configuration parameters.

download_mode (DownloadMode, defaults to REUSE_DATASET_IF_EXISTS) —
Download/generate mode.

dynamic_modules_path (str, defaults to ~/.cache/huggingface/modules/datasets_modules) —
Optional path to the directory in which the dynamic modules are saved. It must have been initialized with init_dynamic_modules.
By default the datasets and metrics are stored inside the datasets_modules module.

data_files (Union[Dict, List, str], optional) —
Defining the data_files of the dataset configuration.

**download_kwargs (additional keyword arguments) —
Optional attributes for DownloadConfig which will override the attributes in download_config if supplied, for example use_auth_token.
Get the list of available config names for a particular dataset.
Example:

>>> from datasets import get_dataset_config_names
>>> get_dataset_config_names("glue")
['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']
datasets.get_dataset_infos(
    path: str,
    data_files: typing.Union[str, typing.List, typing.Dict, NoneType] = None,
    download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None,
    download_mode: typing.Optional[datasets.download.download_manager.DownloadMode] = None,
    revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None,
    use_auth_token: typing.Union[str, bool, NoneType] = None,
    **config_kwargs
)
Parameters
path (str) — path to the dataset processing script with the dataset builder. Can be either:
- a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
- a dataset identifier on the Hugging Face Hub (list all available datasets and ids with datasets.list_datasets()), e.g. 'squad', 'glue' or 'openai/webtext'

revision (Union[str, datasets.Version], optional) —
If specified, the dataset module will be loaded from the datasets repository at this version.
By default:
- it is set to the local version of the lib.
- it will also try to load it from the main branch if it’s not available at the local version of the lib.
Specifying a version that is different from your local version of the lib might cause compatibility issues.

download_config (DownloadConfig, optional) —
Specific download configuration parameters.

download_mode (DownloadMode, defaults to REUSE_DATASET_IF_EXISTS) —
Download/generate mode.

data_files (Union[Dict, List, str], optional) —
Defining the data_files of the dataset configuration.

use_auth_token (str or bool, optional) —
Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.
If True, or not specified, will get token from "~/.huggingface".

**config_kwargs (additional keyword arguments) —
Optional attributes for builder class which will override the attributes if supplied.
Get the meta information about a dataset, returned as a dict mapping config name to DatasetInfoDict.
Example:

>>> from datasets import get_dataset_infos
>>> get_dataset_infos('rotten_tomatoes')
{'default': DatasetInfo(description="Movie Review Dataset. This is a dataset of containing 5,331 positive and 5,331 negative processed sentences from Rotten Tomatoes movie reviews...), ...}
datasets.get_dataset_split_names(
    path: str,
    config_name: typing.Optional[str] = None,
    data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None,
    download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None,
    download_mode: typing.Optional[datasets.download.download_manager.DownloadMode] = None,
    revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None,
    use_auth_token: typing.Union[str, bool, NoneType] = None,
    **config_kwargs
)
Parameters
path (str) — path to the dataset processing script with the dataset builder. Can be either:
- a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
- a dataset identifier on the Hugging Face Hub (list all available datasets and ids with datasets.list_datasets()), e.g. 'squad', 'glue' or 'openai/webtext'

config_name (str, optional) —
Defining the name of the dataset configuration.

data_files (str or Sequence or Mapping, optional) —
Path(s) to source data file(s).

download_config (DownloadConfig, optional) —
Specific download configuration parameters.

download_mode (DownloadMode, defaults to REUSE_DATASET_IF_EXISTS) —
Download/generate mode.

revision (Version or str, optional) —
Version of the dataset script to load.
As datasets have their own git repository on the Datasets Hub, the default version "main" corresponds to their "main" branch.
You can specify a different version than the default "main" by using a commit SHA or a git tag of the dataset repository.

use_auth_token (str or bool, optional) —
Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.
If True, or not specified, will get token from "~/.huggingface".

**config_kwargs (additional keyword arguments) —
Optional attributes for builder class which will override the attributes if supplied.
Get the list of available splits for a particular config and dataset.
Example:

>>> from datasets import get_dataset_split_names
>>> get_dataset_split_names('rotten_tomatoes')
['train', 'validation', 'test']
datasets.inspect_dataset(
    path: str,
    local_path: str,
    download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None,
    **download_kwargs
)
Parameters
path (str) — Path to the dataset processing script with the dataset builder. Can be either:
- a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'.
- a dataset identifier on the Hugging Face Hub (list all available datasets and ids with list_datasets()), e.g. 'squad', 'glue' or 'openai/webtext'.

local_path (str) —
Path to the local folder to copy the dataset script to.

download_config (DownloadConfig, optional) —
Specific download configuration parameters.

**download_kwargs (additional keyword arguments) —
Optional arguments for DownloadConfig which will override the attributes of download_config if supplied.
Allow inspection/modification of a dataset script by copying it to the local drive at local_path.
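This entry has no usage example; a minimal sketch in the style of the other entries (the local folder path is a placeholder):

>>> from datasets import inspect_dataset
>>> inspect_dataset('squad', local_path='/path/to/local/folder')  # copies the squad script locally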
Metrics
Metrics is deprecated in 🤗 Datasets. To learn more about how to use metrics, take a look at the library 🤗 Evaluate! In addition to metrics, you can find more tools for evaluating models and datasets.
datasets.list_metrics(with_community_metrics=True, with_details=False)
Parameters
with_community_metrics (bool, optional, default True) — Include the community provided metrics.
with_details (bool, optional, default False) — Return the full details on the metrics instead of only the short name.
List all the metric scripts available on the Hugging Face Hub.
Deprecated in 2.5.0
Use evaluate.list_evaluation_modules instead, from the new library 🤗 Evaluate: https://huggingface.co/docs/evaluate
Example:

>>> from datasets import list_metrics
>>> list_metrics()
['accuracy', 'bertscore', 'bleu', 'bleurt', 'cer', 'chrf', ...]
datasets.load_metric(
    path: str,
    config_name: typing.Optional[str] = None,
    process_id: int = 0,
    num_process: int = 1,
    cache_dir: typing.Optional[str] = None,
    experiment_id: typing.Optional[str] = None,
    keep_in_memory: bool = False,
    download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None,
    download_mode: typing.Optional[datasets.download.download_manager.DownloadMode] = None,
    revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None,
    **metric_init_kwargs
)
Parameters
path (str) — path to the metric processing script with the metric builder. Can be either:
- a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './metrics/rouge' or './metrics/rouge/rouge.py'
- a metric identifier on the HuggingFace datasets repo (list all available metrics with datasets.list_metrics()), e.g. 'rouge' or 'bleu'

config_name (str, optional) — selecting a configuration for the metric (e.g. the GLUE metric has a configuration for each subset).

process_id (int, optional) — for distributed evaluation: id of the process.

num_process (int, optional) — for distributed evaluation: total number of processes.

cache_dir (str, optional) — path to store the temporary predictions and references (defaults to ~/.cache/huggingface/metrics/).

experiment_id (str) — A specific experiment id. This is used if several distributed evaluations share the same file system.
This is useful to compute metrics in distributed setups (in particular non-additive metrics like F1).

keep_in_memory (bool) — Whether to store the temporary results in memory (defaults to False).

download_config (datasets.DownloadConfig, optional) — specific download configuration parameters.

download_mode (DownloadMode, default REUSE_DATASET_IF_EXISTS) — Download/generate mode.

revision (Union[str, datasets.Version], optional) — if specified, the module will be loaded from the datasets repository at this version. By default, it is set to the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.
Load a datasets.Metric.
Deprecated in 2.5.0
Use evaluate.load instead, from the new library 🤗 Evaluate: https://huggingface.co/docs/evaluate
Example:

>>> from datasets import load_metric
>>> accuracy = load_metric('accuracy')
>>> accuracy.compute(references=[1, 0], predictions=[1, 1])
{'accuracy': 0.5}
datasets.inspect_metric(
    path: str,
    local_path: str,
    download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None,
    **download_kwargs
)
Parameters
path (str) — path to the metric processing script with the metric builder. Can be either:
- a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './metrics/rouge' or './metrics/rouge/rouge.py'
- a metric identifier on the Hugging Face Hub (list all available metrics with datasets.list_metrics()), e.g. 'rouge' or 'bleu'

local_path (str) — path to the local folder to copy the metric script to.

download_config (datasets.DownloadConfig, optional) — specific download configuration parameters.

**download_kwargs (additional keyword arguments) — optional attributes for DownloadConfig() which will override the attributes in download_config if supplied.
Allow inspection/modification of a metric script by copying it to the local drive at local_path.
Deprecated in 2.5.0
Use evaluate.inspect_evaluation_module instead, from the new library 🤗 Evaluate: https://huggingface.co/docs/evaluate
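This entry has no usage example either; a minimal sketch (the local folder path is a placeholder):

>>> from datasets import inspect_metric
>>> inspect_metric('accuracy', local_path='/path/to/local/folder')  # copies the accuracy metric script locally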
From files
Configurations used to load data files.
They are used when loading local files or a dataset repository:
- local files: load_dataset("parquet", data_dir="path/to/data/dir")
- dataset repository: load_dataset("allenai/c4")

You can pass arguments to load_dataset to configure data loading.
For example you can specify the sep parameter to define the CsvConfig that is used to load the data:

load_dataset("csv", data_dir="path/to/data/dir", sep="\t")
Text
class datasets.packaged_modules.text.TextConfig(
    name: str = 'default',
    version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0,
    data_dir: typing.Optional[str] = None,
    data_files: typing.Optional[datasets.data_files.DataFilesDict] = None,
    description: typing.Optional[str] = None,
    features: typing.Optional[datasets.features.features.Features] = None,
    encoding: str = 'utf-8',
    errors: typing.Optional[str] = None,
    chunksize: int = 10485760,
    keep_linebreaks: bool = False,
    sample_by: str = 'line'
)
BuilderConfig for text files.
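As noted above, these config fields can be passed as keyword arguments to load_dataset; a minimal sketch that samples one example per paragraph instead of per line (the file path is a placeholder):

>>> from datasets import load_dataset
>>> ds = load_dataset('text', data_files='path/to/my_text.txt', sample_by='paragraph')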
CSV
class datasets.packaged_modules.csv.CsvConfig(
    name: str = 'default',
    version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0,
    data_dir: typing.Optional[str] = None,
    data_files: typing.Optional[datasets.data_files.DataFilesDict] = None,
    description: typing.Optional[str] = None,
    sep: str = ',',
    delimiter: typing.Optional[str] = None,
    header: typing.Union[int, typing.List[int], str, NoneType] = 'infer',
    names: typing.Optional[typing.List[str]] = None,
    column_names: typing.Optional[typing.List[str]] = None,
    index_col: typing.Union[int, str, typing.List[int], typing.List[str], NoneType] = None,
    usecols: typing.Union[typing.List[int], typing.List[str], NoneType] = None,
    prefix: typing.Optional[str] = None,
    mangle_dupe_cols: bool = True,
    engine: typing.Optional[str] = None,
    converters: typing.Dict[typing.Union[int, str], typing.Callable[[typing.Any], typing.Any]] = None,
    true_values: typing.Optional[list] = None,
    false_values: typing.Optional[list] = None,
    skipinitialspace: bool = False,
    skiprows: typing.Union[int, typing.List[int], NoneType] = None,
    nrows: typing.Optional[int] = None,
    na_values: typing.Union[str, typing.List[str], NoneType] = None,
    keep_default_na: bool = True,
    na_filter: bool = True,
    verbose: bool = False,
    skip_blank_lines: bool = True,
    thousands: typing.Optional[str] = None,
    decimal: str = '.',
    lineterminator: typing.Optional[str] = None,
    quotechar: str = '"',
    quoting: int = 0,
    escapechar: typing.Optional[str] = None,
    comment: typing.Optional[str] = None,
    encoding: typing.Optional[str] = None,
    dialect: typing.Optional[str] = None,
    error_bad_lines: bool = True,
    warn_bad_lines: bool = True,
    skipfooter: int = 0,
    doublequote: bool = True,
    memory_map: bool = False,
    float_precision: typing.Optional[str] = None,
    chunksize: int = 10000,
    features: typing.Optional[datasets.features.features.Features] = None,
    encoding_errors: typing.Optional[str] = 'strict',
    on_bad_lines: typing.Literal['error', 'warn', 'skip'] = 'error'
)
BuilderConfig for CSV.
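Likewise for CSV, a minimal sketch overriding a few of the fields above via load_dataset; the file path and separator are placeholders:

>>> from datasets import load_dataset
>>> ds = load_dataset('csv', data_files='path/to/my_data.csv', sep=';', on_bad_lines='skip')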
JSON
class datasets.packaged_modules.json.JsonConfig(
    name: str = 'default',
    version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0,
    data_dir: typing.Optional[str] = None,
    data_files: typing.Optional[datasets.data_files.DataFilesDict] = None,
    description: typing.Optional[str] = None,
    features: typing.Optional[datasets.features.features.Features] = None,
    field: typing.Optional[str] = None,
    use_threads: bool = True,
    block_size: typing.Optional[int] = None,
    chunksize: int = 10485760,
    newlines_in_values: typing.Optional[bool] = None
)
BuilderConfig for JSON.
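For JSON, the field option selects the key inside the file that holds the list of records; a minimal sketch (the file path and field name 'data' are placeholders):

>>> from datasets import load_dataset
>>> ds = load_dataset('json', data_files='path/to/my_data.json', field='data')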
Parquet
class datasets.packaged_modules.parquet.ParquetConfig(
    name: str = 'default',
    version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0,
    data_dir: typing.Optional[str] = None,
    data_files: typing.Optional[datasets.data_files.DataFilesDict] = None,
    description: typing.Optional[str] = None,
    batch_size: int = 10000,
    columns: typing.Optional[typing.List[str]] = None,
    features: typing.Optional[datasets.features.features.Features] = None
)
BuilderConfig for Parquet.
SQL
class datasets.packaged_modules.sql.SqlConfig(
    name: str = 'default',
    version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0,
    data_dir: typing.Optional[str] = None,
    data_files: typing.Optional[datasets.data_files.DataFilesDict] = None,
    description: typing.Optional[str] = None,
    sql: typing.Union[str, ForwardRef('sqlalchemy.sql.Selectable')] = None,
    con: typing.Union[str, ForwardRef('sqlalchemy.engine.Connection'), ForwardRef('sqlalchemy.engine.Engine'), ForwardRef('sqlite3.Connection')] = None,
    index_col: typing.Union[str, typing.List[str], NoneType] = None,
    coerce_float: bool = True,
    params: typing.Union[typing.List, typing.Tuple, typing.Dict, NoneType] = None,
    parse_dates: typing.Union[typing.List, typing.Dict, NoneType] = None,
    columns: typing.Optional[typing.List[str]] = None,
    chunksize: typing.Optional[int] = 10000,
    features: typing.Optional[datasets.features.features.Features] = None
)
BuilderConfig for SQL.
Images
class datasets.packaged_modules.imagefolder.ImageFolderConfig(
    name: str = 'default',
    version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0,
    data_dir: typing.Optional[str] = None,
    data_files: typing.Optional[datasets.data_files.DataFilesDict] = None,
    description: typing.Optional[str] = None,
    features: typing.Optional[datasets.features.features.Features] = None,
    drop_labels: bool = None,
    drop_metadata: bool = None
)
BuilderConfig for ImageFolder.
Audio
class datasets.packaged_modules.audiofolder.AudioFolderConfig(
    name: str = 'default',
    version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0,
    data_dir: typing.Optional[str] = None,
    data_files: typing.Optional[datasets.data_files.DataFilesDict] = None,
    description: typing.Optional[str] = None,
    features: typing.Optional[datasets.features.features.Features] = None,
    drop_labels: bool = None,
    drop_metadata: bool = None
)
Builder Config for AudioFolder.
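As with the ImageFolder example earlier, an audio directory can be loaded with the audiofolder builder; a minimal sketch (the directory path is a placeholder):

>>> from datasets import load_dataset
>>> ds = load_dataset('audiofolder', data_dir='/path/to/audio/folder')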