
Commit 642a4d4

docs/features: add feature flags guide
1 parent 58d72f5 commit 642a4d4

3 files changed (+90, -4 lines)


FEATURE-FLAGS.md

+84
@@ -0,0 +1,84 @@
# Feature flags in Leaf

## The problem(s)

Supporting different backends is an important concept in Leaf.

Optimally, we would always like to have the choice of running Leaf on all backends.
In reality, however, there are some tradeoffs that have to be made.

One problem is that certain backends require the presence of special hardware to
run (CUDA needs NVIDIA GPUs), or the libraries needed to address them, which are
required for compilation, are not present on the developer's machine.

Another challenge is that not all backends support the same operations, which
constrains neural networks with special requirements to the backends that
provide those operations. Due to some limitations in the current version of Rust
(1.7), differently featured backends cannot easily be supported.
See [Issue #81](https://github.com/autumnai/leaf/issues/81).

## The solution

Feature flags are a well-known concept for adding opt-in functionality that is
not necessary for every use case of a library, and they are a good solution to the
first problem.
Luckily, Cargo, Rust's package manager, has built-in support for feature flags.

A simple dependency with additional features enabled looks like this in `Cargo.toml`:
```toml
[dependencies]
leaf = { version = "0.2.0", features = ["cuda"] }
```
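
The other side of this mechanism is the library's own `Cargo.toml`, which declares
the flags a consumer can enable. This guide does not show Leaf's actual manifest, so
the snippet below is only a hypothetical sketch of how such a `[features]` section
could look; the optional dependency names are placeholders, while the flag names
(`native`, `opencl`, `cuda`) and the `native` default are taken from this document:
```toml
# Hypothetical sketch of a backend-providing crate's Cargo.toml.
# The dependency names are placeholders, not Leaf's actual dependencies.
[dependencies]
cuda-backend-placeholder   = { version = "0.1", optional = true }
opencl-backend-placeholder = { version = "0.1", optional = true }

[features]
default = ["native"]                       # enabled unless --no-default-features is passed
native  = []                               # no extra dependencies assumed for the native backend
cuda    = ["cuda-backend-placeholder"]     # opt-in: pulls in the CUDA bindings
opencl  = ["opencl-backend-placeholder"]   # opt-in: pulls in the OpenCL bindings
```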

Feature flags are usually used in an additive way, but **some configurations
of features for Leaf might actually take away some functionality**.
We do this because we want models to be portable across different backends,
which is not possible if e.g. the CUDA backend supports Convolution layers while
the Native backend doesn't. To make this possible, we deactivate those features that
are only available on a single backend, effectively "dumbing down" the backends,
as the code sketch after the example list below illustrates.

Example:
- feature flags are `cuda` -> the `Convolution` layer **is available**, since the CUDA backend provides the required traits and there is no native backend it has to be compatible with.
- feature flags are `native` -> the `Convolution` layer **is not available**, since the native backend does not provide the required traits and no other frameworks are present.
- feature flags are `native cuda` -> the `Convolution` layer **is not available**, since the native backend does not provide the required traits, and the CUDA backend has been dumbed down.
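
In code, this kind of gating is expressed with `cfg` attributes; the
`examples/benchmarks.rs` change in this commit uses exactly this pattern. The
sketch below is only illustrative, with a placeholder function rather than
Leaf's actual API, and it assumes at least one of the two features is enabled:
```rust
// Illustrative sketch of feature-gated compilation. `run_model` is a
// placeholder, not part of Leaf's API.

// Compiled whenever `native` is enabled (alone or together with `cuda`):
// the portable, "dumbed down" path without the Convolution layer.
#[cfg(feature = "native")]
fn run_model() {
    println!("running the portable path without Convolution");
}

// Compiled only when `cuda` is enabled *and* `native` is not:
// the full CUDA path, where the Convolution layer is available.
#[cfg(all(feature = "cuda", not(feature = "native")))]
fn run_model() {
    println!("running the CUDA-only path with Convolution");
}

fn main() {
    // Fails to compile if neither `native` nor `cuda` is enabled,
    // because no `run_model` definition would exist.
    run_model();
}
```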

## Using the feature flags

One thing we have ignored until now is default feature flags. Cargo allows us to
define a set of features that are included in a package by default.
One of the default feature flags of Leaf is the `native` flag. Looking at
the example above, you might notice that the only way we can unleash the full
power of the CUDA backend is by deactivating the default `native` flag.
Cargo allows us to do that either via `--no-default-features` on the CLI or
by specifying `default-features = false` for a dependency in `Cargo.toml`.
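
Concretely, one way a dependency entry that unleashes the CUDA backend could look
(a sketch, reusing the version number from the example above) is:
```toml
[dependencies]
# Drop Leaf's default `native` feature and opt into CUDA only, so the
# CUDA backend is not "dumbed down" for compatibility with the native one.
leaf = { version = "0.2.0", default-features = false, features = ["cuda"] }
```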

#### In your project

The simple `Cargo.toml` example above works in simple cases, but if you want
to provide the same flexibility of backends in your own project, you can reexport
the feature flags.

A typical example (including collenchyma) would look like this:
```toml
[dependencies]
leaf = { version = "0.2.0", default-features = false }
# the native collenchyma feature is necessary to read/write tensors
collenchyma = { version = "0.0.8", default-features = false, features = ["native"] }

[features]
default = ["native"]
native = ["leaf/native"]
opencl = ["leaf/opencl", "collenchyma/opencl"]
cuda = ["leaf/cuda", "collenchyma/cuda"]
```

Building your project would then look like this:
```sh
# both the native and the CUDA backend:
# `native` is provided by default, and `cuda` is explicitly enabled via `--features cuda`
cargo build --features cuda
# unleashing CUDA:
# the default `native` feature is excluded by `--no-default-features`, and `cuda` is explicitly enabled via `--features cuda`
cargo build --no-default-features --features cuda
```
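
For completeness, the other combinations defined in the `[features]` table above
would be selected the same way (a sketch, assuming the `Cargo.toml` shown above):
```sh
# native backend only (the `default` feature set)
cargo build
# OpenCL instead of CUDA, again without the default `native` backend
cargo build --no-default-features --features opencl
# native and OpenCL together (the portable, "dumbed down" configuration)
cargo build --features opencl
```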

README.md

+3-1
@@ -86,6 +86,8 @@ cuda = ["leaf/cuda"]
 opencl = ["leaf/opencl"]
 ```

+> More information on the use of feature flags in Leaf can be found in [FEATURE-FLAGS.md](./FEATURE-FLAGS.md)
+

 ## Examples

@@ -100,7 +102,7 @@ the install guide, clone this repoistory and then run

 ```bash
 # The examples currently require CUDA support.
-cargo run --release --example benchmarks
+cargo run --release --no-default-features --features cuda --example benchmarks alexnet
 ```

 [leaf-examples]: https://github.com/autumnai/leaf-examples

examples/benchmarks.rs

+3-3
@@ -115,7 +115,7 @@ fn get_time_scale<'a>(sec: f64) -> (f64, &'a str) {
 #[cfg(feature="native")]
 fn bench_alexnet() {
     println!("Examples run only with CUDA support at the moment, because of missing native convolution implementation for the Collenchyma NN Plugin.");
-    println!("Try running with `cargo run --no-default-features --features cuda --example benchmarks alexnet`.");
+    println!("Try running with `cargo run --release --no-default-features --features cuda --example benchmarks alexnet`.");
 }
 #[cfg(all(feature="cuda", not(feature="native")))]
 fn bench_alexnet() {
@@ -197,7 +197,7 @@ fn bench_alexnet() {
 #[cfg(feature="native")]
 fn bench_overfeat() {
     println!("Examples run only with CUDA support at the moment, because of missing native convolution implementation for the Collenchyma NN Plugin.");
-    println!("Try running with `cargo run --no-default-features --features cuda --example benchmarks overfeat`.");
+    println!("Try running with `cargo run --release --no-default-features --features cuda --example benchmarks overfeat`.");
 }
 #[cfg(all(feature="cuda", not(feature="native")))]
 fn bench_overfeat() {
@@ -279,7 +279,7 @@ fn bench_overfeat() {
 #[cfg(feature="native")]
 fn bench_vgg_a() {
     println!("Examples run only with CUDA support at the moment, because of missing native convolution implementation for the Collenchyma NN Plugin.");
-    println!("Try running with `cargo run --no-default-features --features cuda --example benchmarks vgg`.");
+    println!("Try running with `cargo run --release --no-default-features --features cuda --example benchmarks vgg`.");
 }
 #[cfg(all(feature="cuda", not(feature="native")))]
 fn bench_vgg_a() {
