Let's get started with Frictionless! We will learn how to install and use the framework. The simple example below will showcase the framework's basic functionality. For an introduction to the concepts behind the Frictionless Framework, please read the Frictionless Introduction.
The framework requires Python 3.6+. Versioning follows the SemVer standard.
The framework supports CSV, Excel, and JSON formats by default. Installing with an extra, e.g. `pip install frictionless[sql]`, adds a plugin for SQL support. There are plugins for SQL, Pandas, HTML, and others (check the list of Frictionless Framework plugins and their status). Usually, you don't need to think about this in advance: Frictionless will display a useful error message about a missing plugin, along with installation instructions.
Did you have an error installing Frictionless? Here are some dependencies and common errors:
- `pip: command not found`: see the pip docs for help installing pip.
- Help installing Python (Mac)
- Help installing Python (Windows)
The framework can be used:
- as a Python library
- as a command-line interface
- as a RESTful API server (for advanced use cases)
For instance, all the examples below do the same thing:
All these interfaces are as similar as possible regarding naming conventions and the way you interact with them. Usually, it's straightforward to translate, for instance, Python code into a command-line call. Frictionless provides code completion for Python and the command line, which should give you useful hints in real time. You can find the API reference here.
Arguments conform to the following naming convention:
- for Python interfaces, they are snake_cased
- within dictionaries or JSON objects, they are camelCased
- in the command line, they use dashes
To get the documentation for the command-line interface, just use the `--help` flag, for example: `frictionless --help`.
We will take a very messy data file:
First of all, let's use `describe` to infer the metadata directly from the tabular data. We can then edit and save it to provide others with useful information about the data:
This output is in YAML; it is the default Frictionless output format.
Now that we have inferred a table schema from the data file (e.g., the expected format of the table and the expected type of each value in a column), we can use `extract` to read the normalized tabular data from the source CSV file:
Last but not least, let's get a validation report. This report will help us identify and fix all the errors present in the tabular data, as it provides comprehensive information about every problem:
Now that we have all this information:
- we can clean up the table to ensure data quality
- we can use the metadata to describe and share the dataset
- we can include validation in our workflow to guarantee data validity
- and much more: don't hesitate to read the following sections of the documentation!