MB-29577 license readability and ver update #2

Closed
wants to merge 1 commit

Conversation

1tylermitchell

For updating the Vulcan license in the product, does it need to be based on a different branch?

@ceejatec
Contributor

We were working on the last step of moving the license text out of ns_server last night when we had the hardware crash. We will get back to that soon.

The proper location going forward will be github.com/couchbase/product-texts. Please take a look over there. For now there is no branching on that repo so the Vulcan license can go to master.

Also, both product-texts and ns_server use Gerrit, so we can't accept GitHub pull requests. As such, I'm closing this PR.

@ceejatec ceejatec closed this May 15, 2018
ns-codereview pushed a commit that referenced this pull request Mar 21, 2019
- hide/show chart controls & restyle title
- re-configure chart panels for layout sml/md/lg
- move chart CSS to cbui-components.css
- revise delete chart dialog

Change-Id: Icce3ca90b8f76ae6b0ea5fd957d91fbc350718ae
Reviewed-on: http://review.couchbase.org/106408
Reviewed-by: Pavel Blagodov <stochmail@gmail.com>
Tested-by: Pavel Blagodov <stochmail@gmail.com>
ns-codereview pushed a commit that referenced this pull request Mar 25, 2019
- fixed broken style inheritance from .charts-group-row
- removed button-like style from save button

Change-Id: Ia5ef5c95ed2b9ba7b148521a11da5ac412b31add
Reviewed-on: http://review.couchbase.org/106737
Reviewed-by: Rob Ashcom <rob.ashcom@gmail.com>
Tested-by: Rob Ashcom <rob.ashcom@gmail.com>
ns-codereview pushed a commit that referenced this pull request Mar 29, 2019
- changed placeholder text
- made certificate types conditional on pre-selecting a mode
- changed help text
- changed some field labels

Change-Id: Id24619d2057643709c491fac1039072359354667
Reviewed-on: http://review.couchbase.org/106962
Tested-by: Rob Ashcom <rob.ashcom@gmail.com>
Well-Formed: Build Bot <build@couchbase.com>
Reviewed-by: Pavel Blagodov <stochmail@gmail.com>
Tested-by: Pavel Blagodov <stochmail@gmail.com>
ns-codereview pushed a commit that referenced this pull request Oct 24, 2019
- added groups to “All Services” preset
- fixed some stat naming typos
- re-ordered some charts
- re-named 2 stats for clarity/consistency

Change-Id: Icbe3d4fe0ab0af23addd9c0650ae4b5f26402d3a
Reviewed-on: http://review.couchbase.org/116940
Reviewed-by: Rob Ashcom <rob.ashcom@gmail.com>
Tested-by: Rob Ashcom <rob.ashcom@gmail.com>
ns-codereview pushed a commit that referenced this pull request Oct 21, 2020
... metrics if there are too many of them reported.

The goal is to maintain a sane size for the stats portion of the cbcollect dump.

The prometheus_cfg process wakes up every 10 min and performs
the following steps:
1) First, it gets the latest scrape information for each target
   from prometheus. Right now we need to know only how many samples
   are reported in each scrape by each service. Prometheus keeps this
   information in the scrape_samples_scraped metric.
2) All samples are divided into two parts: those for which the scrape
   interval is static, and those for which the scrape interval can be
   changed. The first group is all the low-cardinality metrics plus
   the high-cardinality metrics for which the scrape interval is set
   explicitly. All other samples fall into the second group (all
   high-cardinality metrics where the scrape interval is not
   explicitly set).
3) Then it calculates how many samples can be written per second to
   satisfy the cbcollect dump size requirement and subtracts from it
   the rate of "static" samples (the first group from #2). The
   resulting number is the maximum sample rate for metrics from the
   second group.
4) Now that it knows the max sample rate and the number of samples
   per scrape, it is easy to calculate the scrape interval for each
   service. For example: with a max sample rate of 100 samples per
   second and 2000 samples per scrape, the scrape interval is
   2000 samples / 100 samples per second = 20 s (see the sketch
   after this list).
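
The arithmetic in steps 3 and 4 can be double-checked with a short
sketch (Python used purely for illustration; the names below are
hypothetical and do not come from prometheus_cfg):

def max_dynamic_rate(dump_rate_budget, static_rate):
    # Step 3: samples/sec left over for metrics whose scrape
    # interval we are allowed to change.
    return dump_rate_budget - static_rate

def scrape_interval(samples_per_scrape, dynamic_rate):
    # Step 4: interval [s] = samples per scrape / allowed samples/sec.
    return samples_per_scrape / dynamic_rate

# Example from step 4: a 100 samples/sec budget, no static samples,
# and 2000 samples per scrape => 2000 / 100 = 20 s.
print(scrape_interval(2000, max_dynamic_rate(100, 0)))  # 20.0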

Change-Id: I383dacfaf88a0ba392c97a72bd809f9428469535
Reviewed-on: http://review.couchbase.org/c/ns_server/+/136831
Tested-by: Timofey Barmin <timofey.barmin@couchbase.com>
Reviewed-by: Steve Watanabe <steve.watanabe@couchbase.com>
Reviewed-by: Sam Cramer <sam.cramer@couchbase.com>
ns-codereview pushed a commit that referenced this pull request Aug 29, 2024
The idea is to modify encryption keys in persistent_term atomically
and avoid lost-update scenarios like the following (a sketch of the
fix appears after the list):

1. proc1 reads keys from disk
2. proc3 changes the keys on disk
3. proc2 reads keys from disk
4. proc2 writes keys to persistent_term based on #3
5. proc1 writes keys to persistent_term based on #1 (overwriting the
   newer keys from #4)
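
A minimal sketch of the fix, assuming a lock serializes the
read-modify-write (Python stands in for the actual Erlang
persistent_term code; all names here are hypothetical):

import threading

_term_store = {}               # stands in for persistent_term
_keys_lock = threading.Lock()  # serializes the read-modify-write

def update_keys_atomically(read_keys_from_disk):
    # Holding the lock across the disk read and the store write means
    # a process that read first (proc1 above) can no longer overwrite
    # keys written on the basis of a later read (proc2 above).
    with _keys_lock:
        _term_store["encryption_keys"] = read_keys_from_disk()

# Usage: update_keys_atomically(lambda: {"active": "key-v2"})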

Change-Id: I8d08717170e7b9c920778b7918fc74877d06bbe8
Reviewed-on: https://review.couchbase.org/c/ns_server/+/213279
Tested-by: Timofey Barmin <timofey.barmin@couchbase.com>
Reviewed-by: Navdeep S Boparai <navdeep.boparai@couchbase.com>
Well-Formed: Build Bot <build@couchbase.com>
ns-codereview pushed a commit that referenced this pull request Sep 27, 2024
Fix the permissions check: it should check not only the usages that
are being set but also the usages that are being replaced. Without
this check, a bucket admin can, for example, overwrite a secret
created by a full admin that was supposed to be used for things like
config encryption.

This change also fixes a race in which two parallel changes can
hypothetically overwrite some settings of the secret being modified:
1. PUT takes the current secret properties and prepares new
   properties based on that value;
2. Another process modifies some secret properties (auto-rotation);
3. PUT finishes and sets the properties prepared at step #1;
4. The change made at step #2 is lost.

This obvious race was considered impossible in the very first
implementation, but after several changes it became possible.
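
One standard way to close this kind of read-modify-write race is to
compare a revision number at write time; a rough sketch under that
assumption (Python, illustrative only; not the ns_server code):

import threading

_lock = threading.Lock()
_secret = {"props": {"usage": ["config-encryption"]}, "rev": 0}

def get_secret():
    # Return a copy of the properties plus the revision they were
    # read at.
    with _lock:
        return dict(_secret["props"]), _secret["rev"]

def put_secret(new_props, expected_rev):
    # Refuse the write if anything (e.g. auto-rotation, step 2 above)
    # changed the secret after the caller read revision expected_rev.
    with _lock:
        if _secret["rev"] != expected_rev:
            raise RuntimeError("secret changed; re-read and retry")
        _secret["props"] = new_props
        _secret["rev"] += 1

# Usage: props, rev = get_secret()
#        put_secret({**props, "name": "s1"}, rev)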

Change-Id: I3c508e9eb8d8b367bc63bb8aaadfc050c4204160
Reviewed-on: https://review.couchbase.org/c/ns_server/+/216863
Tested-by: Timofey Barmin <timofey.barmin@couchbase.com>
Well-Formed: Build Bot <build@couchbase.com>
Reviewed-by: Navdeep S Boparai <navdeep.boparai@couchbase.com>
ns-codereview pushed a commit that referenced this pull request Oct 21, 2024
This allows us to use ctest to run ns_test in parallel with a given
number of instances (-jN). This spawns N Erlang instances and runs
one module per instance.

Before (locally):
$ time ninja ns_test

...
real	6m55.331s
user	7m9.756s
sys	0m59.668s

After (locally):
$ time ctest -j4 -R ns_test.

...
100% tests passed, 0 tests failed out of 98

Total Test time (real) = 226.95 sec

real	3m47.234s
user	11m12.976s
sys	1m39.428s

Before (CV):

11:52:57 ============================================
11:52:57 ===          Run unit tests              ===
11:52:57 ============================================
11:52:57 # make test ARGS=-j3 --output-on-failure --no-compress-output -T Test --exclude-regex "api_test|cluster_test" --tests-regex .*
11:52:57 Running tests...
11:52:57    Site: a89a34ed7991
11:52:57    Build name: Linux-g++
11:52:57 Create new tag: 20241014-1052 - Experimental
11:52:57 Test project /home/couchbase/jenkins/workspace/ns-server-ns-test/master/ns_server/build
11:52:57     Start 1: ns_server_tests_build_for_idiotic_cmake
11:53:15 1/2 Test #1: ns_server_tests_build_for_idiotic_cmake ...   Passed   20.26 sec
11:53:15     Start 2: ns_test
12:02:11 2/2 Test #2: ns_test ...................................   Passed  535.39 sec
12:02:11
12:02:11 100% tests passed, 0 tests failed out of 2
12:02:11
12:02:11 Total Test time (real) = 555.81 sec

After (CV):

16:03:37 ============================================
16:03:37 ===          Run unit tests              ===
16:03:37 ============================================
16:03:37 # make test ARGS=-j3 --output-on-failure --no-compress-output -T Test --exclude-regex a^ --tests-regex "ns_test."
16:03:37 Running tests...
16:03:37    Site: f58292625dac
16:03:37    Build name: Linux-g++
16:03:37 Create new tag: 20231121-1503 - Experimental
16:03:37 Test project /home/couchbase/jenkins/workspace/ns-server-ns-test-ben-h/ns_server/build
16:03:37         Start   1: ns_server_tests_build_for_idiotic_cmake
16:04:19   1/101 Test   #1: ns_server_tests_build_for_idiotic_cmake ...   Passed   41.64 sec
16:04:19         Start   2: ns_test_active_cache
16:04:19         Start   3: ns_test_addr_util
16:04:19         Start   4: ns_test_analytics_settings_manager
16:04:23   2/101 Test   #4: ns_test_analytics_settings_manager ........   Passed    4.51 sec
16:04:23         Start   5: ns_test_async
16:04:23   3/101 Test   #3: ns_test_addr_util .........................   Passed    4.65 sec
16:04:23         Start   6: ns_test_auto_failover
16:04:28   4/101 Test   #6: ns_test_auto_failover .....................   Passed    4.30 sec
16:04:28         Start   7: ns_test_auto_failover_logic
16:04:31   5/101 Test   #5: ns_test_async .............................   Passed    7.18 sec
16:04:31         Start   8: ns_test_auto_reprovision
16:04:33   6/101 Test   #7: ns_test_auto_failover_logic ...............   Passed    5.24 sec
16:04:33         Start   9: ns_test_bucket_info_cache
16:04:38   7/101 Test   #8: ns_test_auto_reprovision ..................   Passed    7.06 sec
16:04:38         Start  10: ns_test_bucket_placer
16:04:42   8/101 Test  #10: ns_test_bucket_placer .....................   Passed    4.67 sec
16:04:42         Start  11: ns_test_cb_dist
16:04:47   9/101 Test  #11: ns_test_cb_dist ...........................   Passed    4.41 sec
16:04:47         Start  12: ns_test_cb_epmd
16:04:51  10/101 Test  #12: ns_test_cb_epmd ...........................   Passed    4.17 sec
16:04:51         Start  13: ns_test_cb_util
16:04:52  11/101 Test   #2: ns_test_active_cache ......................   Passed   33.49 sec
16:04:52         Start  14: ns_test_ciphers
16:04:54  12/101 Test   #9: ns_test_bucket_info_cache .................   Passed   20.72 sec
16:04:54         Start  15: ns_test_cluster_compat_mode
16:04:55  13/101 Test  #13: ns_test_cb_util ...........................   Passed    4.35 sec
16:04:55         Start  16: ns_test_collections
16:04:57  14/101 Test  #14: ns_test_ciphers ...........................   Passed    4.32 sec
16:04:57         Start  17: ns_test_dcp_consumer_conn
16:04:58  15/101 Test  #15: ns_test_cluster_compat_mode ...............   Passed    4.19 sec
16:04:58         Start  18: ns_test_dcp_proxy
16:05:01  16/101 Test  #17: ns_test_dcp_consumer_conn .................   Passed    4.79 sec
16:05:01         Start  19: ns_test_dcp_replicator
16:05:02  17/101 Test  #18: ns_test_dcp_proxy .........................   Passed    4.46 sec
16:05:02         Start  20: ns_test_dcp_traffic_monitor
16:05:06  18/101 Test  #19: ns_test_dcp_replicator ....................   Passed    4.90 sec
16:05:06         Start  21: ns_test_diag_handler
16:05:07  19/101 Test  #20: ns_test_dcp_traffic_monitor ...............   Passed    4.63 sec
16:05:07         Start  22: ns_test_event_log_server
16:05:11  20/101 Test  #21: ns_test_diag_handler ......................   Passed    4.21 sec
16:05:11         Start  23: ns_test_failover
16:05:11  21/101 Test  #22: ns_test_event_log_server ..................   Passed    4.14 sec
16:05:11         Start  24: ns_test_global_tasks
16:05:19  22/101 Test  #24: ns_test_global_tasks ......................   Passed    7.59 sec
16:05:19         Start  25: ns_test_guardrail_enforcer
16:05:22  23/101 Test  #23: ns_test_failover ..........................   Passed   11.56 sec
16:05:22         Start  26: ns_test_guardrail_monitor
16:05:24  24/101 Test  #16: ns_test_collections .......................   Passed   28.77 sec
16:05:24         Start  27: ns_test_health_monitor
16:05:25  25/101 Test  #25: ns_test_guardrail_enforcer ................   Passed    6.71 sec
16:05:25         Start  28: ns_test_hibernation_manager
16:05:32  26/101 Test  #26: ns_test_guardrail_monitor .................   Passed    9.44 sec
16:05:32         Start  29: ns_test_index_monitor
16:05:36  27/101 Test  #29: ns_test_index_monitor .....................   Passed    4.61 sec
16:05:36         Start  30: ns_test_index_settings_manager
16:05:41  28/101 Test  #30: ns_test_index_settings_manager ............   Passed    5.35 sec
16:05:41         Start  31: ns_test_janitor_agent
16:05:43  29/101 Test  #28: ns_test_hibernation_manager ...............   Passed   17.61 sec
16:05:43         Start  32: ns_test_kv_stats_monitor
16:05:46  30/101 Test  #27: ns_test_health_monitor ....................   Passed   21.93 sec
16:05:46         Start  33: ns_test_ldap_auth
16:05:47  31/101 Test  #32: ns_test_kv_stats_monitor ..................   Passed    4.28 sec
16:05:47         Start  34: ns_test_ldap_filter_parser
16:05:48  32/101 Test  #31: ns_test_janitor_agent .....................   Passed    6.01 sec
16:05:48         Start  35: ns_test_ldap_util
16:05:51  33/101 Test  #33: ns_test_ldap_auth .........................   Passed    4.78 sec
16:05:51         Start  36: ns_test_mb_map
16:05:52  34/101 Test  #34: ns_test_ldap_filter_parser ................   Passed    4.35 sec
16:05:52         Start  37: ns_test_mb_master
16:05:52  35/101 Test  #35: ns_test_ldap_util .........................   Passed    4.40 sec
16:05:52         Start  38: ns_test_memcached_auth_server
16:05:56  36/101 Test  #37: ns_test_mb_master .........................   Passed    4.47 sec
16:05:56         Start  39: ns_test_memcached_permissions
16:05:57  37/101 Test  #36: ns_test_mb_map ............................   Passed    5.87 sec
16:05:57         Start  40: ns_test_memory_quota
16:05:57  38/101 Test  #38: ns_test_memcached_auth_server .............   Passed    5.48 sec
16:05:57         Start  41: ns_test_menelaus_alert
16:06:01  39/101 Test  #40: ns_test_memory_quota ......................   Passed    4.40 sec
16:06:01         Start  42: ns_test_menelaus_roles
16:06:02  40/101 Test  #39: ns_test_memcached_permissions .............   Passed    5.55 sec
16:06:02         Start  43: ns_test_menelaus_stats
16:06:02  41/101 Test  #41: ns_test_menelaus_alert ....................   Passed    4.38 sec
16:06:02         Start  44: ns_test_menelaus_users
16:06:06  42/101 Test  #43: ns_test_menelaus_stats ....................   Passed    4.35 sec
16:06:06         Start  45: ns_test_menelaus_util
16:06:07  43/101 Test  #44: ns_test_menelaus_users ....................   Passed    4.76 sec
16:06:07         Start  46: ns_test_menelaus_web_alerts_srv
16:06:08  44/101 Test  #42: ns_test_menelaus_roles ....................   Passed    7.42 sec
16:06:08         Start  47: ns_test_menelaus_web_autocompaction
16:06:11  45/101 Test  #45: ns_test_menelaus_util .....................   Passed    4.90 sec
16:06:11         Start  48: ns_test_menelaus_web_buckets
16:06:12  46/101 Test  #46: ns_test_menelaus_web_alerts_srv ...........   Passed    5.73 sec
16:06:12         Start  49: ns_test_menelaus_web_cluster
16:06:14  47/101 Test  #47: ns_test_menelaus_web_autocompaction .......   Passed    5.92 sec
16:06:14         Start  50: ns_test_menelaus_web_collections
16:06:17  48/101 Test  #49: ns_test_menelaus_web_cluster ..............   Passed    4.77 sec
16:06:17         Start  51: ns_test_menelaus_web_guardrails
16:06:20  49/101 Test  #50: ns_test_menelaus_web_collections ..........   Passed    6.19 sec
16:06:20         Start  52: ns_test_menelaus_web_node
16:06:23  50/101 Test  #51: ns_test_menelaus_web_guardrails ...........   Passed    5.84 sec
16:06:23         Start  53: ns_test_menelaus_web_pools
16:06:25  51/101 Test  #52: ns_test_menelaus_web_node .................   Passed    4.47 sec
16:06:25         Start  54: ns_test_menelaus_web_prometheus
16:06:28  52/101 Test  #53: ns_test_menelaus_web_pools ................   Passed    4.89 sec
16:06:28         Start  55: ns_test_menelaus_web_rbac
16:06:30  53/101 Test  #54: ns_test_menelaus_web_prometheus ...........   Passed    4.73 sec
16:06:30         Start  56: ns_test_menelaus_web_samples
16:06:35  54/101 Test  #55: ns_test_menelaus_web_rbac .................   Passed    6.97 sec
16:06:35         Start  57: ns_test_menelaus_web_settings
16:06:36  55/101 Test  #56: ns_test_menelaus_web_samples ..............   Passed    6.21 sec
16:06:36         Start  58: ns_test_menelaus_web_settings2
16:06:39  56/101 Test  #57: ns_test_menelaus_web_settings .............   Passed    4.42 sec
16:06:39         Start  59: ns_test_menelaus_web_stats
16:06:42  57/101 Test  #58: ns_test_menelaus_web_settings2 ............   Passed    6.63 sec
16:06:42         Start  60: ns_test_menelaus_web_xdcr_target
16:06:44  58/101 Test  #59: ns_test_menelaus_web_stats ................   Passed    4.89 sec
16:06:44         Start  61: ns_test_misc
16:06:47  59/101 Test  #60: ns_test_menelaus_web_xdcr_target ..........   Passed    4.69 sec
16:06:47         Start  62: ns_test_new_concurrency_throttle
16:06:49  60/101 Test  #48: ns_test_menelaus_web_buckets ..............   Passed   37.85 sec
16:06:49         Start  63: ns_test_node_monitor
16:06:52  61/101 Test  #62: ns_test_new_concurrency_throttle ..........   Passed    4.47 sec
16:06:52         Start  64: ns_test_node_status_analyzer
16:06:53  62/101 Test  #63: ns_test_node_monitor ......................   Passed    4.37 sec
16:06:53         Start  65: ns_test_ns_audit
16:06:56  63/101 Test  #64: ns_test_node_status_analyzer ..............   Passed    4.56 sec
16:06:56         Start  66: ns_test_ns_bucket
16:07:02  64/101 Test  #65: ns_test_ns_audit ..........................   Passed    8.62 sec
16:07:02         Start  67: ns_test_ns_cluster
16:07:02  65/101 Test  #66: ns_test_ns_bucket .........................   Passed    6.09 sec
16:07:02         Start  68: ns_test_ns_config
16:07:06  66/101 Test  #67: ns_test_ns_cluster ........................   Passed    4.28 sec
16:07:06         Start  69: ns_test_ns_config_auth
16:07:09  67/101 Test  #61: ns_test_misc ..............................   Passed   25.21 sec
16:07:09         Start  70: ns_test_ns_config_default
16:07:09  68/101 Test  #68: ns_test_ns_config .........................   Passed    7.06 sec
16:07:09         Start  71: ns_test_ns_config_rep
16:07:12  69/101 Test  #69: ns_test_ns_config_auth ....................   Passed    5.49 sec
16:07:12         Start  72: ns_test_ns_doctor
16:07:14  70/101 Test  #71: ns_test_ns_config_rep .....................   Passed    4.51 sec
16:07:14         Start  73: ns_test_ns_janitor
16:07:15  71/101 Test  #70: ns_test_ns_config_default .................   Passed    5.90 sec
16:07:15         Start  74: ns_test_ns_orchestrator
16:07:17  72/101 Test  #72: ns_test_ns_doctor .........................   Passed    5.01 sec
16:07:17         Start  75: ns_test_ns_ports_setup
16:07:20  73/101 Test  #74: ns_test_ns_orchestrator ...................   Passed    4.97 sec
16:07:20         Start  76: ns_test_ns_pubsub
16:07:21  74/101 Test  #75: ns_test_ns_ports_setup ....................   Passed    4.92 sec
16:07:21         Start  77: ns_test_ns_rebalance_observer
16:07:25  75/101 Test  #73: ns_test_ns_janitor ........................   Passed   11.10 sec
16:07:25         Start  78: ns_test_ns_rebalancer
16:07:26  76/101 Test  #76: ns_test_ns_pubsub .........................   Passed    5.89 sec
16:07:26         Start  79: ns_test_ns_server_stats
16:07:29  77/101 Test  #77: ns_test_ns_rebalance_observer .............   Passed    7.70 sec
16:07:29         Start  80: ns_test_ns_single_vbucket_mover
16:07:30  78/101 Test  #78: ns_test_ns_rebalancer .....................   Passed    4.86 sec
16:07:30         Start  81: ns_test_ns_ssl_services_setup
16:07:30  79/101 Test  #79: ns_test_ns_server_stats ...................   Passed    4.28 sec
16:07:30         Start  82: ns_test_ns_storage_conf
16:07:34  80/101 Test  #80: ns_test_ns_single_vbucket_mover ...........   Passed    4.41 sec
16:07:34         Start  83: ns_test_ns_tick_agent
16:07:35  81/101 Test  #82: ns_test_ns_storage_conf ...................   Passed    4.26 sec
16:07:35         Start  84: ns_test_ns_vbucket_mover
16:07:36  82/101 Test  #81: ns_test_ns_ssl_services_setup .............   Passed    6.25 sec
16:07:36         Start  85: ns_test_pipes
16:07:39  83/101 Test  #84: ns_test_ns_vbucket_mover ..................   Passed    4.26 sec
16:07:39         Start  86: ns_test_promQL
16:07:40  84/101 Test  #83: ns_test_ns_tick_agent .....................   Passed    6.54 sec
16:07:40         Start  87: ns_test_prometheus
16:07:40  85/101 Test  #85: ns_test_pipes .............................   Passed    4.31 sec
16:07:40         Start  88: ns_test_prometheus_cfg
16:07:43  86/101 Test  #86: ns_test_promQL ............................   Passed    4.59 sec
16:07:43         Start  89: ns_test_query_settings_manager
16:07:46  87/101 Test  #87: ns_test_prometheus ........................   Passed    5.73 sec
16:07:46         Start  90: ns_test_rebalance_agent
16:07:47  88/101 Test  #88: ns_test_prometheus_cfg ....................   Passed    6.68 sec
16:07:47         Start  91: ns_test_rebalance_stage_info
16:07:48  89/101 Test  #89: ns_test_query_settings_manager ............   Passed    4.37 sec
16:07:48         Start  92: ns_test_recovery
16:07:50  90/101 Test  #90: ns_test_rebalance_agent ...................   Passed    4.24 sec
16:07:50         Start  93: ns_test_scram_sha
16:07:51  91/101 Test  #91: ns_test_rebalance_stage_info ..............   Passed    4.13 sec
16:07:51         Start  94: ns_test_service_index
16:07:59  92/101 Test  #94: ns_test_service_index .....................   Passed    8.22 sec
16:07:59         Start  95: ns_test_service_stats_collector
16:08:00  93/101 Test  #93: ns_test_scram_sha .........................   Passed    9.77 sec
16:08:00         Start  96: ns_test_sigar
16:08:03  94/101 Test  #92: ns_test_recovery ..........................   Passed   15.65 sec
16:08:03         Start  97: ns_test_sjson
16:08:05  95/101 Test  #95: ns_test_service_stats_collector ...........   Passed    5.65 sec
16:08:05         Start  98: ns_test_stat_names_mappings
16:08:06  96/101 Test  #96: ns_test_sigar .............................   Passed    5.75 sec
16:08:06         Start  99: ns_test_validator
16:08:09  97/101 Test  #98: ns_test_stat_names_mappings ...............   Passed    4.40 sec
16:08:09         Start 100: ns_test_vclock
16:08:10  98/101 Test  #99: ns_test_validator .........................   Passed    4.29 sec
16:08:10         Start 101: ns_test_yaml
16:08:14  99/101 Test #100: ns_test_vclock ............................   Passed    4.40 sec
16:08:14 100/101 Test #101: ns_test_yaml ..............................   Passed    4.35 sec
16:09:00 101/101 Test  #97: ns_test_sjson .............................   Passed   56.40 sec
16:09:00
16:09:00 100% tests passed, 0 tests failed out of 101
16:09:00
16:09:00 Total Test time (real) = 322.74 sec

(Note the '.' after ns_test in the above command; it tells ctest not
to execute the `ns_test` suite, which still exists and runs all tests
(unless filtered with T_WILDCARD). See the quick regex check below.)

This has the added bonuses of automatically suppressing the output of
tests that pass, and providing easy-to-access timing info for test
suites (the successful ctest output).
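
The exclusion works by plain regex matching: the trailing '.' must
match one extra character, so the bare name ns_test never matches.
A quick check (Python here purely for illustration):

import re

pattern = re.compile("ns_test.")              # '.' needs one extra char
print(bool(pattern.search("ns_test")))        # False: bare suite excluded
print(bool(pattern.search("ns_test_misc")))   # True: per-module tests match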

Change-Id: Ic59fbb1ed89aba431fc367cfee89c008547b16b6
Reviewed-on: https://review.couchbase.org/c/ns_server/+/182675
Reviewed-by: Peter Searby <peter.searby@couchbase.com>
Well-Formed: Build Bot <build@couchbase.com>
Tested-by: Ben Huddleston <ben.huddleston@couchbase.com>
ns-codereview pushed a commit that referenced this pull request Dec 20, 2024
Before this commit we ignored read-key errors, because we had to
support the case where the log dir is removed together with the log
deks. Now that log deks are stored in the config dir, they can't be
removed when logs are removed, so it should be safe to assume that
deks are always readable.

There is another scenario that needs to be kept in mind:
say we have a dek encrypted by an AWS key, and that AWS key
is unavailable at startup, so we can't read that dek.
There are two ways to handle that:
1. Continue to start up, but retry reading deks later;
2. Fail to start up.

Option #1 is hard to implement because the code that uses that dek
would have to handle the case where the dek is not available.

This is another reason why this commit implements option #2.
Note that this scenario was not supported before this commit either.

Change-Id: Ib01c009957ae7f413428b38c6f2c32bb19f193db
Reviewed-on: https://review.couchbase.org/c/ns_server/+/221170
Reviewed-by: Navdeep S Boparai <navdeep.boparai@couchbase.com>
Well-Formed: Build Bot <build@couchbase.com>
Tested-by: Timofey Barmin <timofey.barmin@couchbase.com>
ns-codereview pushed a commit that referenced this pull request Feb 26, 2025
Change #1:
set_active_key for buckets should treat enoent and not_supported as
"bucket not found".
When the bucket is on disk but not in memcached (e.g. when the
cluster membership is inactiveAdded or inactiveFailed), we can't push
keys to memcached. If we treat that as an error (the behavior before
this change), we won't be able to modify encryption-at-rest settings,
because the cb_cluster_secrets update_bucket_deks status will show an
error (the issues list will not be empty).
At the same time it seems fine to treat this as ok, because memcached
is not encrypting any data in this bucket, so it doesn't need new
keys. When the bucket is activated (e.g. we add the node back to
the cluster), ns_memcached will push the actual keys to memcached in
create_bucket.

Change #2:
Treat not_found in set_active_key as ok, but only when the
ns_memcached process didn't exist before the set_active_key attempt.
This is important in order to avoid races when set_active_key and
create_bucket are called in parallel, as in the following scenario
(a sketch of the guard appears after the list):

1. (process1) ns_memcached fetches old keys
2. (process2) set_active_dek is called (and gets not_found)
3. (process1) ns_memcached creates the bucket with old keys
4. (process1) ns_memcached crashes
5. (process2) we check if ns_memcached is running and return ok
6. Bucket is created with old keys
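
A rough sketch of that guard, checking liveness before the attempt
rather than after (Python pseudocode with hypothetical names; not the
actual ns_server implementation):

def set_active_key_guarded(memcached, bucket, key):
    # Snapshot liveness BEFORE the attempt: if ns_memcached was
    # already running, a later not_found may mean it crashed mid
    # create_bucket (step 4 above), so success must not be reported.
    was_running = memcached.is_running(bucket)
    result = memcached.set_active_key(bucket, key)
    if result == "not_found" and not was_running:
        return "ok"  # bucket genuinely absent; create_bucket will
                     # push the actual keys later
    return result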

Change #3:
get_dek_id_in_use should return not_found when the bucket doesn't
exist or when memcached returns not_supported.
The reasoning is the same as in Change #1.
Basically, when there is no bucket in memcached, we should assume
that all current deks are still in use and not drop anything.
The goal of the change is to not treat this as an error, because
treating it as one leads to a situation where we can't modify
encryption-at-rest settings.

Change-Id: I63cc3e2d7ddbadf5f5866c662858c0dd2d81b270
Reviewed-on: https://review.couchbase.org/c/ns_server/+/223510
Tested-by: Timofey Barmin <timofey.barmin@couchbase.com>
Well-Formed: Build Bot <build@couchbase.com>
Reviewed-by: Navdeep S Boparai <navdeep.boparai@couchbase.com>