
Commit 84a9bd9

SajidAlamQB authored and lvijnck committed
[KED-2630] Strip out versioning notes from docs (kedro-org#1273)
Signed-off-by: Laurens Vijnck <laurens_vijnck@mckinsey.com>
1 parent fc23bee commit 84a9bd9

23 files changed (+0, -99 lines)

docs/source/04_kedro_project_setup/02_configuration.md (-4 lines)

@@ -2,10 +2,6 @@

 This section contains detailed information about configuration, for which the relevant API documentation can be found in [kedro.config.ConfigLoader](/kedro.config.ConfigLoader).

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 ## Configuration root

 We recommend that you keep all configuration files in the `conf` directory of a Kedro project. However, if you prefer, you may point Kedro to any other directory and change the configuration paths by setting the `CONF_ROOT` variable in `src/<project-package>/settings.py` as follows:

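The snippet that the last context line introduces ("as follows:") falls outside this hunk. For orientation, a minimal sketch of the kind of override it refers to, assuming the Kedro 0.17.x `settings.py` layout; the directory name used here is hypothetical:

```python
# src/<project-package>/settings.py
# Point Kedro at a non-default configuration directory.
# "config" is a hypothetical replacement for the default "conf".
CONF_ROOT = "config"
```
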
docs/source/04_kedro_project_setup/03_session.md (-4 lines)

@@ -1,9 +1,5 @@
 # Lifecycle management with `KedroSession`

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 ### Overview
 A `KedroSession` allows you to:

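The bullet list after "A `KedroSession` allows you to:" is cut off by this hunk. As a rough sketch of the interface that section documents, assuming the Kedro 0.17.x API (the package name is a placeholder):

```python
from kedro.framework.session import KedroSession

# Create a session for a project package and run its default pipeline.
# "my_project" stands in for the project's package name.
with KedroSession.create("my_project") as session:
    session.run()
```
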
docs/source/05_data/01_data_catalog.md (-5 lines)

@@ -1,10 +1,5 @@
 # The Data Catalog

-
-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 This section introduces `catalog.yml`, the project-shareable Data Catalog. The file is located in `conf/base` and is a registry of all data sources available for use by a project; it manages loading and saving of data.

 All supported data connectors are available in [`kedro.extras.datasets`](/kedro.extras.datasets).

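The catalog examples themselves sit outside this hunk; a minimal sketch of the programmatic counterpart to `catalog.yml`, with in-memory datasets chosen purely for illustration:

```python
from kedro.io import DataCatalog, MemoryDataSet

# A tiny in-code catalog; in a project this registry normally lives in
# conf/base/catalog.yml and is loaded for you.
catalog = DataCatalog({"example_numbers": MemoryDataSet([1, 2, 3])})

catalog.save("example_numbers", [4, 5, 6])
numbers = catalog.load("example_numbers")  # [4, 5, 6]
```
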
docs/source/05_data/02_kedro_io.md (-4 lines)

@@ -3,10 +3,6 @@

 In this tutorial, we cover advanced uses of the [Kedro IO](/kedro.io.rst) module to understand the underlying implementation. The relevant API documentation is [kedro.io.AbstractDataSet](/kedro.io.AbstractDataSet) and [kedro.io.DataSetError](/kedro.io.DataSetError).

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 ## Error handling

 We have custom exceptions for the main classes of errors that you can handle to deal with failures.

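The error-handling snippet that follows in the original page is not visible here; a minimal sketch of the pattern, using the `DataSetError` exception named in the context above:

```python
from kedro.io import DataCatalog, DataSetError

catalog = DataCatalog()  # empty catalog, so the load below fails

try:
    catalog.load("missing_dataset")
except DataSetError:
    # Handle the failure: log it, fall back to a default, or re-raise.
    print("Dataset could not be loaded")
```
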
docs/source/06_nodes_and_pipelines/01_nodes.md (-4 lines)

@@ -4,10 +4,6 @@ In this section we introduce the concept of a node, for which the relevant API d

 Nodes are the building blocks of pipelines and represent tasks. Pipelines are used to combine nodes to build workflows, which range from simple machine learning workflows to end-to-end (E2E) production workflows.

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 You will first need to import libraries from Kedro and other standard tools to run the code snippets demonstrated below.

 ```python

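The import block opened by the fence above is truncated by the diff view. As a very small sketch of the sort of import the node examples need (the exact list in the original page is not visible here):

```python
# A guess at the minimal Kedro import for the node examples that follow
# in the original page; other standard-library imports may also be needed.
from kedro.pipeline import node
```
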
docs/source/06_nodes_and_pipelines/02_pipeline_introduction.md (-4 lines)

@@ -1,9 +1,5 @@
 # Pipelines

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 We previously introduced [Nodes](./01_nodes.md) as building blocks that represent tasks, and which can be combined in a pipeline to build your workflow. A pipeline organises the dependencies and execution order of your collection of nodes, and connects inputs and outputs while keeping your code modular. The pipeline determines the node execution order by resolving dependencies and does *not* necessarily run the nodes in the order in which they are passed in.

 To benefit from Kedro's automatic dependency resolution, you can chain your nodes into a [pipeline](/kedro.pipeline.Pipeline), which is a list of nodes that use a shared set of variables.

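The worked example lives outside this hunk; here is a minimal sketch of chaining nodes into a `Pipeline` (the `mean` and `variance` functions are stand-ins for the variance example referenced elsewhere in this commit):

```python
from kedro.pipeline import Pipeline, node


def mean(xs):
    return sum(xs) / len(xs)


def variance(xs, m):
    return sum((x - m) ** 2 for x in xs) / len(xs)


# Kedro resolves the execution order from each node's inputs and outputs,
# not from the order in which the nodes are listed.
pipeline = Pipeline(
    [
        node(variance, ["xs", "m"], "v", name="variance_node"),
        node(mean, "xs", "m", name="mean_node"),
    ]
)
```
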
docs/source/06_nodes_and_pipelines/03_modular_pipelines.md (-4 lines)

@@ -1,9 +1,5 @@
 # Modular pipelines

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 ## What are modular pipelines?

 In many typical Kedro projects, a single (“main”) pipeline increases in complexity as the project evolves. To keep your project fit for purpose, we recommend that you create modular pipelines, which are logically isolated and can be reused. Modular pipelines are easier to develop, test and maintain, and are portable so they can be copied and reused between projects.

docs/source/06_nodes_and_pipelines/04_run_a_pipeline.md (-4 lines)

@@ -1,9 +1,5 @@
 # Run a pipeline

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 ## Runners

 Runners are the execution mechanisms used to run pipelines. They all inherit from `AbstractRunner`.

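The runner descriptions continue past this hunk; as a small, self-contained sketch of the idea (the pipeline and catalog below are placeholders invented for illustration):

```python
from kedro.io import DataCatalog, MemoryDataSet
from kedro.pipeline import Pipeline, node
from kedro.runner import SequentialRunner


def double(x):
    return x * 2


pipeline = Pipeline([node(double, "x", "doubled")])
catalog = DataCatalog({"x": MemoryDataSet(21)})

# Runners expose run(pipeline, catalog); outputs that are not registered
# in the catalog are returned in a dictionary.
result = SequentialRunner().run(pipeline, catalog)
print(result["doubled"])  # 42
```
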
docs/source/06_nodes_and_pipelines/05_slice_a_pipeline.md (-4 lines)

@@ -1,9 +1,5 @@
 # Slice a pipeline

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 Sometimes it is desirable to run a subset, or a 'slice' of a pipeline's nodes. In this page, we illustrate the programmatic options that Kedro provides. You can also use the [Kedro CLI to pass parameters to `kedro run`](../09_development/03_commands_reference.md#run-the-project) command and slice a pipeline.

 Let's look again at the example pipeline from the [pipeline introduction documentation](./02_pipeline_introduction.md#how-to-build-a-pipeline), which computes the variance of a set of numbers:

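The slicing walk-through sits outside this hunk; a brief, self-contained sketch of the programmatic options the paragraph alludes to (the single-node pipeline is a placeholder):

```python
from kedro.pipeline import Pipeline, node


def mean(xs):
    return sum(xs) / len(xs)


pipeline = Pipeline([node(mean, "xs", "m", name="mean_node")])

# Each slicing call returns a new, smaller Pipeline; the original is untouched.
only_mean = pipeline.only_nodes("mean_node")  # select nodes by name
fed_by_xs = pipeline.from_inputs("xs")        # everything downstream of "xs"
needed_for_m = pipeline.to_outputs("m")       # everything required to produce "m"
```
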
docs/source/07_extend_kedro/04_plugins.md (-5 lines)

@@ -1,10 +1,5 @@
 # Kedro plugins

-
-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 Kedro plugins allow you to create new features for Kedro and inject additional commands into the CLI. Plugins are developed as separate Python packages that exist outside of any Kedro project.

 ## Overview

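The plugin mechanics are documented beyond this hunk; as a rough sketch of the usual pattern, a `click` command group exposed through the `kedro.project_commands` entry point (the plugin, module, and command names are hypothetical):

```python
# my_kedro_plugin/plugin.py (hypothetical plugin module)
import click


@click.group(name="MyPlugin")
def commands():
    """Top-level command group that Kedro discovers via entry points."""


@commands.command()
def hello():
    """An extra command injected into the `kedro` CLI."""
    click.echo("Hello from my plugin!")


# The plugin's setup.py would register the group, for example:
# entry_points={"kedro.project_commands": ["my_plugin = my_kedro_plugin.plugin:commands"]}
```
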
docs/source/08_logging/01_logging.md (-4 lines)

@@ -1,9 +1,5 @@
 # Logging

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 Kedro uses, and facilitates, the use of Python’s `logging` library by providing a default logging configuration. This can be found in `conf/base/logging.yml` in every project generated using Kedro’s CLI `kedro new` command.

 ## Configure logging

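The configuration walk-through continues past this hunk; a minimal sketch of how node code typically logs against that default configuration (the `split_data` function is hypothetical):

```python
import logging

log = logging.getLogger(__name__)


def split_data(data):
    # Messages are routed according to conf/base/logging.yml
    # (console and file handlers in the default project template).
    log.info("Splitting %d rows of data", len(data))
    return data
```
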
docs/source/09_development/01_set_up_vscode.md (-5 lines)

@@ -1,10 +1,5 @@
 # Set up Visual Studio Code

-
-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 Start by opening a new project directory in VS Code and installing the Python plugin under **Tools and languages**:

 ![](../meta/images/vscode_startup.png)

docs/source/09_development/02_set_up_pycharm.md (-4 lines)

@@ -1,9 +1,5 @@
 # Set up PyCharm

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 This section will present a quick guide on how to configure [PyCharm](https://www.jetbrains.com/pycharm/) as a development environment for working on Kedro projects.

 Open a new project directory in PyCharm. You will need to add your **Project Interpreter**, so go to **PyCharm | Preferences** for macOS or **File | Settings** for Windows and Linux:

docs/source/09_development/03_commands_reference.md (-4 lines)

@@ -1,9 +1,5 @@
 # Kedro's command line interface

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 Kedro's command line interface (CLI) is used to give commands to Kedro via a terminal shell (such as the terminal app on macOS, or cmd.exe or PowerShell on Windows). You need to use the CLI to set up a new Kedro project, and to run it.

 ### Autocompletion (optional)

docs/source/09_development/04_lint.md (-4 lines)

@@ -2,10 +2,6 @@

 To follow these instructions, you will need to install the `pylint` package, subject to GPL licence.

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 You can lint your project code to ensure code quality using the `kedro lint` command, your project is linted with [`black`](https://github.com/psf/black) (projects created with Python 3.6 and above), [`flake8`](https://gitlab.com/pycqa/flake8) and [`isort`](https://github.com/PyCQA/isort). If you prefer to use [pylint](https://www.pylint.org/), a popular linting tool, then the sample commands you can use to help with this are included in the script below:

 ```bash

docs/source/09_development/05_debugging.md (-4 lines)

@@ -1,9 +1,5 @@
 # Debugging

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 ## Introduction

 If you're running your Kedro pipeline from the CLI or you can't/don't want to run Kedro from within your IDE debugging framework, it can be hard to debug your Kedro pipeline or nodes. This is particularly frustrating because:

docs/source/10_deployment/08_databricks.md (-5 lines)

@@ -1,10 +1,5 @@
 # Deployment to a Databricks cluster

-
-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 This tutorial uses the [PySpark Iris Kedro Starter](https://github.com/quantumblacklabs/kedro-starters/tree/master/pyspark-iris) to illustrate how to bootstrap a Kedro project using Spark and deploy it to a [Databricks cluster on AWS](https://databricks.com/aws). It is split into 2 sections:

 * [Databricks Connect workflow](#run-the-kedro-project-with-databricks-connect) (recommended)

docs/source/10_deployment/09_aws_sagemaker.md (-5 lines)

@@ -1,10 +1,5 @@
 # How to integrate Amazon SageMaker into your Kedro pipeline

-
-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 This tutorial explains how to integrate a Kedro project with [Amazon SageMaker](https://aws.amazon.com/sagemaker/) in order to train a machine learning model. It shows how to build machine learning pipelines in Kedro and while taking advantage of the power of SageMaker for potentially compute-intensive machine learning tasks.

 The Kedro project will still run locally (or on one of many supported workflow engines like [Argo](./04_argo.md), [Prefect](./05_prefect.md), [Kubeflow](./06_kubeflow.md), [AWS Batch](./07_aws_batch.md) and others), but the model training step will be offloaded onto SageMaker.

docs/source/10_deployment/10_aws_step_functions.md (-5 lines)

@@ -1,10 +1,5 @@
 # How to deploy your Kedro pipeline with AWS Step Functions

-
-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 This tutorial explains how to deploy a Kedro project with [AWS Step Functions](https://aws.amazon.com/step-functions/?step-functions.sort-by=item.additionalFields.postDateTime&step-functions.sort-order=desc) in order to run a Kedro pipeline in production on AWS [Serverless Computing](https://aws.amazon.com/serverless/) platform.

 ## Why would you run a Kedro pipeline with AWS Step Functions

docs/source/10_deployment/11_airflow_astronomer.md (-5 lines)

@@ -1,10 +1,5 @@
 # How to deploy your Kedro pipeline on Apache Airflow with Astronomer

-
-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 This tutorial explains how to deploy a Kedro project on [Apache Airflow](https://airflow.apache.org/) with [Astronomer](https://www.astronomer.io/). Apache Airflow is an extremely popular open-source workflow management platform. Workflows in Airflow are modelled and organised as [DAGs](https://en.wikipedia.org/wiki/Directed_acyclic_graph), making it a suitable engine to orchestrate and execute a pipeline authored with Kedro. [Astronomer](https://www.astronomer.io/docs/cloud/stable/develop/cli-quickstart) is a managed Airflow platform which allows users to spin up and run an Airflow cluster easily in production. Additionally, it also provides a set of tools to help users get started with Airflow locally in the easiest way possible.

 The following discusses how to run the [example Iris classification pipeline](../02_get_started/05_example_project) on a local Airflow cluster with Astronomer.

docs/source/11_tools_integration/01_pyspark.md (-4 lines)

@@ -1,9 +1,5 @@
 # Build a Kedro pipeline with PySpark

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 This page outlines some best practices when building a Kedro pipeline with [`PySpark`](https://spark.apache.org/docs/latest/api/python/index.html). It assumes a basic understanding of both Kedro and `PySpark`.

 ## Centralise Spark configuration in `conf/base/spark.yml`

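The section the heading above introduces is elaborated later in that page; as a sketch of the general idea (reading Spark options from `conf/base/spark.yml` — the option values shown here are hypothetical and hard-coded only for illustration):

```python
from pyspark import SparkConf
from pyspark.sql import SparkSession

# In a Kedro project these key/value pairs would come from conf/base/spark.yml
# via the config loader; they are inlined here purely as an example.
spark_params = {
    "spark.driver.maxResultSize": "3g",
    "spark.sql.execution.arrow.pyspark.enabled": "true",
}

spark_conf = SparkConf().setAll(spark_params.items())
spark = (
    SparkSession.builder.appName("kedro-project")
    .config(conf=spark_conf)
    .getOrCreate()
)
```
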
docs/source/11_tools_integration/02_ipython.md (-4 lines)

@@ -1,9 +1,5 @@
 # Use Kedro with IPython and Jupyter Notebooks/Lab

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.1``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 This section follows the [Iris dataset example](../02_get_started/05_example_project.md) and demonstrates how to use Kedro with IPython and Jupyter Notebooks / Lab. We also recommend a video that explains the transition from the use of vanilla Jupyter Notebooks to using Kedro, from [Data Engineer One](https://www.youtube.com/watch?v=dRnCovp1GRQ&t=50s&ab_channel=DataEngineerOne).


docs/source/12_faq/02_architecture_overview.md (-4 lines)

@@ -1,9 +1,5 @@
 # Kedro architecture overview

-```eval_rst
-.. note:: This documentation is based on ``Kedro 0.17.4``. If you spot anything that is incorrect then please create an `issue <https://github.com/quantumblacklabs/kedro/issues>`_ or pull request.
-```
-
 ![Kedro architecture diagram](../meta/images/kedro_architecture.png)

 At a high level, Kedro consists of five main parts:
