Commit 3453976 (1 parent: ce9d8f6)

build -> job

2 files changed (+46 / -47 lines)

README.md (+45 / -44)
@@ -6,14 +6,14 @@ Sparky is a flexible and minimalist continuous integration server and distribute

 Sparky features:

-* Defining builds scheduling times in crontab style
-* Triggering builds using external APIs and custom logic
-* Build scenarios are defined as Raku scripts with support of [Sparrow6](https://github.com/melezhik/Sparrow6/blob/master/documentation/dsl.md) DSL
-* CICD code could be extended using various scripting languages
+* Defining job scheduling times in crontab style
+* Triggering jobs using external APIs and custom logic
+* Job scenarios are pure Raku code with additional support of the [Sparrow6](https://github.com/melezhik/Sparrow6/blob/master/documentation/dsl.md) automation framework
+* Use of plugins in different programming languages
 * Everything is kept in SCM repository - easy to port, maintain and track changes
-* Builds gets run in one of 3 flavors - 1) on localhost 2) on remote machines via ssh 3) on docker instances
-* Nice web UI to run build and read reports
-* Runs in a peer-to-peer network fashion with distributed tasks support
+* Jobs get run in one of 3 flavors - 1) on localhost 2) on remote machines via ssh 3) on docker instances
+* Nice web UI to run jobs and read reports
+* Can run in a peer-to-peer network fashion with distributed tasks support

 # Build status

@@ -23,10 +23,10 @@ Sparky features:

 # Sparky workflow in 4 lines:

 ```bash
-$ nohup sparkyd & # run Sparky daemon to trigger build jobs
-$ nohup cro run & # run Sparky CI UI to see build statuses and reports
-$ nano ~/.sparky/projects/my-project/sparrowfile # write a build scenario
-$ firefox 127.0.0.1:4000 # run builds and get reports
+$ nohup sparkyd & # run Sparky daemon to trigger jobs
+$ nohup cro run & # run Sparky CI UI to see job statuses and reports
+$ nano ~/.sparky/projects/my-project/sparrowfile # write a job scenario
+$ firefox 127.0.0.1:4000 # run jobs and get reports
 ```

 # Installation
@@ -57,7 +57,7 @@ $ sparkyd

 * Sparky daemon traverses sub directories found at the project root directory.

-* For every directory found initiate build process invoking sparky worker ( `sparky-runner.raku` ).
+* For every directory found it initiates a job run, invoking the sparky worker ( `sparky-runner.raku` ).

 * Sparky root directory default location is `~/.sparky/projects`.
@@ -94,7 +94,7 @@ $ sparrowdo --sparrowfile=utils/install-sparkyd-systemd.raku --no_sudo --localho

 # Sparky Web UI

-And finally Sparky has a simple web UI to show builds statuses and reports.
+And finally Sparky has a simple web UI to show job statuses and reports.

 To run Sparky CI web app:

@@ -126,23 +126,28 @@ Sparky project is just a directory located at the sparky root directory:
 $ mkdir ~/.sparky/projects/teddy-bear-app
 ```

-# Build scenario
+# Job scenario

-Sparky is built on top of Sparrow/Sparrowdo, read [Sparrowdo](https://github.com/melezhik/sparrowdo)
-_to know how to write Sparky scenarios_.
+Sparky uses pure [Raku](https://raku.org) as the job language, so a simple job is just
+Raku code:

-Here is a short example.
+```bash
+$ nano ~/.sparky/projects/hello-world/sparrowfile
+```

-Git check out a Raku project, install dependencies and run unit tests:
+```raku
+say "hello Sparky!";
+```
+
+To leverage useful tasks and plugins, Sparky is fully integrated with the [Sparrow](https://github.com/melezhik/Sparrow6) automation framework.
+
+Here is an example of a job that checks out a Raku project, installs dependencies and runs unit tests:

 ```bash
-$ nano ~/.sparky/projects/teddy-bear-app/sparrowfile
+$ nano ~/.sparky/projects/raku-build/sparrowfile
 ```

-And add content like this:
-
 ```raku
-
 directory "project";

 git-scm 'https://github.com/melezhik/rakudist-teddy-bear.git', %(
@@ -163,8 +168,7 @@ bash 'prove6 -l', %(

 # Configure Sparky workers

-By default the build scenario gets executed _on the same machine you run Sparky at_, but you can change this
-to _any remote host_ setting Sparrowdo related parameters in the `sparky.yaml` file:
+By default the job scenario gets executed _on the same machine you run Sparky at_, but you can change this to _any remote host_ by setting the Sparrowdo section in the `sparky.yaml` file:

 ```bash
 $ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
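Such a Sparrowdo section might look like the sketch below; the host and ssh user are placeholder values, and the exact keys should be checked against the Sparrowdo documentation:

```yaml
# illustrative sparky.yaml fragment - host and ssh_user are placeholders
sparrowdo:
  host: 192.168.0.1
  ssh_user: sparky
  no_sudo: true
```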
@@ -195,31 +199,30 @@ sparrowdo:

 # Purging old builds

-To remove old build set `keep_builds` parameter in `sparky.yaml`:
+To remove old job builds, set the `keep_builds` parameter in `sparky.yaml`:

 ```bash
 $ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
 ```

-Put number of past builds to keep:
+Put the number of builds to keep:

 ```yaml
 keep_builds: 10
 ```

-That makes Sparky remove old build and only keep last `keep_builds` builds.
+That makes Sparky remove old builds and only keep the last `keep_builds` builds.

 # Run by cron

 It's possible to setup scheduler for Sparky builds, you should define `crontab` entry in sparky yaml file.
-for example to run a build every hour at 30,50 or 55 minute say this:
+
+For example, to run a job every hour at the 30, 50 or 55 minute mark:

 ```bash
 $ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
 ```

-With this schedule:
-
 ```cron
 crontab: "30,50,55 * * * *"
 ```
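As another sketch in the same five-field crontab format (minute, hour, day of month, month, day of week), a nightly schedule would be:

```yaml
# illustrative: run the job once a day at 00:30
crontab: "30 0 * * *"
```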
@@ -228,22 +231,19 @@ Follow [Time::Crontab](https://github.com/ufobat/p6-time-crontab) documentation

 # Manual run

-If you want to build a project from web UI, use `allow_manual_run`:
+To trigger a job manually from the web UI, use `allow_manual_run`:

 ```bash
 $ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
 ```

-And activate manual run:
 ```yaml
 allow_manual_run: true
 ```

-# Trigger build by SCM changes
+# Trigger job by SCM changes

-** warning ** - the feature is not properly tested, feel free to post issues or suggestions
-
-To trigger Sparky builds on SCM changes, define `scm` section in `sparky.yaml` file:
+To trigger Sparky jobs on SCM changes, define the `scm` section in the `sparky.yaml` file:

 ```yaml
 scm:
@@ -264,7 +264,7 @@ scm:
   branch: master
 ```

-Once a build is triggered respected SCM attributes available via `tags()<SCM_*>` elements:
+Once a job is triggered, the respective SCM data is available via the `tags()<SCM_*>` function:

 ```raku
 directory "scm";
@@ -281,15 +281,14 @@ bash "ls -l {%*ENV<PWD>}/scm";

 To set default values for SCM_URL and SCM_BRANCH, use sparrowdo `tags`:

-
 `sparky.yaml`:

 ```yaml
 sparrowdo:
   tags: SCM_URL=https://github.com/melezhik/rakudist-teddy-bear.git,SCM_BRANCH=master
 ```

-These is useful when trigger build manually.
+This is useful when triggering a job manually.

 # Flappers protection mechanism

@@ -305,17 +304,19 @@ worker:

 # Disable project

-You can disable project builds by setting `disable` option to true:
+You can disable job runs by setting the `disabled` option to true:

 ```bash
 $ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml

 disabled: true
 ```
-It's handy when you start a new project and don't want to add it into build pipeline.

 # Advanced topics

+Following are some advanced topics that might be of interest once you
+are familiar with the basics.
+
 # Job UIs

 Sparky UI DSL allows to grammatically describe UI for Sparky jobs
@@ -337,13 +338,13 @@ Read more at [docs/stp.md](https://github.com/melezhik/sparky/blob/master/docs/s

 ## Job API

-Job API allows to trigger new builds from a main scenario.
+Job API allows one to orchestrate multiple Sparky jobs.

 Read more at [docs/job_api.md](https://github.com/melezhik/sparky/blob/master/docs/job_api.md)

 ## Sparky plugins

-Sparky plugins are extensions points to add extra functionality to Sparky builds.
+Sparky plugins are a way to extend Sparky jobs by writing plugins as Raku modules.

 Read more at [docs/plugins.md](https://github.com/melezhik/sparky/blob/master/docs/plugins.md)

@@ -393,7 +394,7 @@ tls:

 # Command line client

-You can build the certain project using sparky command client called `sparky-runner.raku`:
+To trigger a Sparky job from the terminal, use the `sparky-runner.raku` cli:

 ```bash
 $ sparky-runner.raku --dir=/home/user/.sparky/projects/teddy-bear-app

docs/job_api.md (+1 / -3)

@@ -1,8 +1,6 @@
 # Job API

-Job API allows to trigger new builds from a main scenario.
-
-This allow one to create multi stage scenarios.
+Job API allows one to orchestrate multiple Sparky jobs.

 For example:
