
Look Into Small Read Throughput #539

Closed
alexanderkiel opened this issue Nov 19, 2021 · 1 comment
@alexanderkiel
Member

In #538 we migrated from Aleph to Jetty, because Aleph and Manifold are no longer maintained by Zachary Tellman, and Jetty is certainly more mature and has a good chance of receiving security updates more frequently.

However, in a performance test of small HTTP GETs (the read interaction on Patient resources), Aleph showed better throughput.

To test this, I used the following vegeta Patient resource template:

{
  "method": "PUT",
  "url": "http://localhost:8080/fhir/Patient/",
  "body": {
    "resourceType": "Patient",
    "id": "0",
    "gender": "male",
    "birthDate": "1994-01-10"
  },
  "header": {
    "Content-Type": [
      "application/fhir+json"
    ],
    "Accept": [
      "application/fhir+json"
    ]
  }
}

and the following script:

#!/usr/bin/env bash

RATE=100
ID_START=0
DURATION=60

cat patient-update.json | \
jq -cM --argjson start ${ID_START} --argjson rate ${RATE} --argjson duration ${DURATION} \
  '. as $request | range($start; $start + $rate * $duration) | tostring as $id | $request | .url += $id | .body.id = $id | .body |= @base64' | \
vegeta attack -rate=${RATE} -format=json -duration=${DURATION}s | \
vegeta report

to create 6000 Patient resources and the following vegeta Patient read template:

{
  "method": "GET",
  "url": "http://localhost:8080/fhir/Patient/",
  "header": {
    "Accept": [
      "application/fhir+json"
    ]
  }
}

and script:

#!/usr/bin/env bash

ID_START=0
ID_END=3000

cat patient-read.json | \
jq -cM --argjson id_start ${ID_START} --argjson id_end ${ID_END} \
  '. as $request | range($id_start; $id_end) | tostring as $id | $request | .url += $id' | \
vegeta attack -rate=0 -max-workers=100 -format=json -duration=60s | \
vegeta report

to read 3000 of those patients with 100 concurrent workers for 60 seconds. The results with Aleph are:

Requests      [total, rate, throughput]         1901817, 31696.12, 31695.96
Requests      [total, rate, throughput]         1899174, 31652.92, 31651.82
Requests      [total, rate, throughput]         1944818, 32413.61, 32412.64

Latencies     [min, mean, 50, 90, 95, 99, max]  164.163µs, 2.518ms, 2.095ms, 4.389ms, 5.186ms, 14.156ms, 50.423ms
Latencies     [min, mean, 50, 90, 95, 99, max]  167.228µs, 2.5ms, 2.08ms, 4.291ms, 5.116ms, 13.384ms, 42.848ms
Latencies     [min, mean, 50, 90, 95, 99, max]  162.848µs, 2.45ms, 2.127ms, 4.372ms, 5.143ms, 8.823ms, 34.437ms

and with Jetty 9.4:

Requests      [total, rate, throughput]         1266512, 21104.36, 21103.25
Requests      [total, rate, throughput]         1258705, 20967.70, 20967.24
Requests      [total, rate, throughput]         1253666, 20892.89, 20891.83

Latencies     [min, mean, 50, 90, 95, 99, max]  189.163µs, 4.15ms, 2.195ms, 5.266ms, 16.018ms, 42.078ms, 79.751ms
Latencies     [min, mean, 50, 90, 95, 99, max]  188.872µs, 4.147ms, 2.208ms, 5.293ms, 16.082ms, 40.939ms, 81.982ms
Latencies     [min, mean, 50, 90, 95, 99, max]  190.789µs, 4.111ms, 2.167ms, 5.35ms, 16.077ms, 40.434ms, 95.686ms
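Averaging the three runs on each side, Jetty 9.4 sustains roughly a third less throughput than Aleph. A quick sanity check on the reported numbers (not part of the original test):

```python
# Mean throughput (requests/s) taken from the three Aleph and Jetty 9.4 runs above
aleph_mean = (31695.96 + 31651.82 + 32412.64) / 3
jetty_mean = (21103.25 + 20967.24 + 20891.83) / 3

# Relative throughput loss when moving from Aleph to Jetty 9.4
drop = 1 - jetty_mean / aleph_mean
print(f"{drop:.0%}")  # roughly a 34% drop
```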

The test was carried out on a VM with Xeon E5-2687W v4 @ 3.00GHz CPUs, plenty of RAM, and the Docker container constrained to 4 CPUs.

Jetty 10 was even slower.
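For anyone reproducing the load: the jq pipelines above just expand one template into a stream of per-ID vegeta targets, appending the ID to the URL and (for updates) setting the body's `id` and base64-encoding the body as vegeta's JSON format requires. A rough Python equivalent of that expansion (a sketch, not the script actually used here) is:

```python
import base64
import copy
import json

def expand_targets(template, id_start, count):
    """Expand one vegeta target template into one JSON target per
    patient ID, mirroring the jq pipeline: append the ID to the URL,
    set body.id, and base64-encode the body."""
    lines = []
    for i in range(id_start, id_start + count):
        target = copy.deepcopy(template)
        target["url"] += str(i)
        if "body" in target:
            target["body"]["id"] = str(i)
            # vegeta's JSON target format expects the body as a
            # base64-encoded string
            compact = json.dumps(target["body"], separators=(",", ":"))
            target["body"] = base64.b64encode(compact.encode()).decode()
        lines.append(json.dumps(target))
    return "\n".join(lines)

# Same shape as the patient-update.json template above
template = {
    "method": "PUT",
    "url": "http://localhost:8080/fhir/Patient/",
    "body": {
        "resourceType": "Patient",
        "id": "0",
        "gender": "male",
        "birthDate": "1994-01-10",
    },
    "header": {"Content-Type": ["application/fhir+json"],
               "Accept": ["application/fhir+json"]},
}
targets = expand_targets(template, 0, 3)
```

The resulting newline-delimited targets can be piped straight into `vegeta attack -format=json`.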

This issue should serve as a reminder that throughput in this particular scenario was higher with Aleph. Because the improved maturity of Jetty matters more than the better throughput of Aleph, this issue has low priority.

@alexanderkiel alexanderkiel added the performance Performance improvement label Nov 19, 2021
@alexanderkiel
Member Author

With Jetty 11 we reach about 50k requests/s, so I'll close this issue.

@alexanderkiel alexanderkiel self-assigned this Oct 23, 2024