
[Canvas] TagCloud arguments. #107729

Merged: 13 commits into elastic:master on Sep 17, 2021

Conversation


@Kuznietsov Kuznietsov commented Aug 5, 2021

Completes part of #101377.

This PR adds the TagCloud side panel to Canvas.

What is added/changed:

  • A UI view for the tagcloud expression was added.
  • The vis_dimension argument was added.
  • The type of the palette argument of the tagcloud function was changed from string to palette.
  • The way palettes are handled in the tagcloud to_ast was changed (see the sketch after this list).
  • The way the default palette is passed to the tagcloud fn was changed.
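For context, a minimal sketch of what the new palette handling in to_ast could look like. This is a simplified illustration, not the actual implementation: the AST types are inlined stand-ins, DEFAULT_PALETTE is a hypothetical constant, and the system_palette function name and argument shape are assumptions.

```ts
// Simplified stand-ins for the expression AST types (illustration only).
type ExpressionFunctionAst = {
  type: 'function';
  function: string;
  arguments: Record<string, unknown[]>;
};
type ExpressionAst = { type: 'expression'; chain: ExpressionFunctionAst[] };

// Assumption: name of the default palette.
const DEFAULT_PALETTE = 'default';

interface TagcloudParams {
  palette?: { name: string };
  // ...other vis params elided
}

// Instead of passing a plain string, the palette argument becomes a
// sub-expression that resolves to a palette object at runtime.
function buildPaletteArg(params: TagcloudParams): ExpressionAst {
  return {
    type: 'expression',
    chain: [
      {
        type: 'function',
        function: 'system_palette',
        arguments: {
          name: [params.palette?.name ?? DEFAULT_PALETTE],
        },
      },
    ],
  };
}
```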

@Kuznietsov Kuznietsov added Team:Presentation Presentation Team for Dashboard, Input Controls, and Canvas loe:medium Medium Level of Effort v8.0.0 impact:medium Addressing this issue will have a medium level of impact on the quality/strength of our product. Feature:Canvas release_note:feature Makes this part of the condensed release notes auto-backport Deprecated - use backport:version if exact versions are needed v7.15.0 labels Aug 5, 2021
@Kuznietsov Kuznietsov self-assigned this Aug 5, 2021
@Kuznietsov Kuznietsov changed the title Tag cloud canvas arguments [Canvas] Tag cloud canvas arguments Aug 5, 2021
@Kuznietsov Kuznietsov changed the title [Canvas] Tag cloud canvas arguments [Canvas] TagCloud arguments. Aug 5, 2021
@Kuznietsov Kuznietsov marked this pull request as ready for review August 10, 2021 13:26
@Kuznietsov Kuznietsov requested a review from a team August 10, 2021 13:26
@Kuznietsov Kuznietsov requested review from a team as code owners August 10, 2021 13:26
@elasticmachine

Pinging @elastic/kibana-presentation (Team:Presentation)


@jbudz jbudz left a comment


storybook aliases LGTM

@Kuznietsov

@elasticmachine merge upstream

@Kuznietsov

@elasticmachine merge upstream

@mistic mistic added v7.16.0 and removed v7.15.0 labels Aug 18, 2021
@Kuznietsov

@elasticmachine merge upstream

1 similar comment
@Kuznietsov

@elasticmachine merge upstream

@timroes timroes requested review from a team and removed request for a team August 31, 2021 15:04

@crob611 crob611 left a comment


Presentation changes look good.

onValueChange,
argValue,
argId,
choises: dynamicChoises,
Contributor

Nit: spelling should be choices / dynamicChoices. Needs to be corrected in a few places.

);

const options = [
{ value: '', text: 'select column', disabled: true },
Contributor

Static text Select Column should be an i18n string

Contributor Author

Done.
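For reference, a minimal sketch of how such a static label is usually localized in Kibana via `@kbn/i18n`; the translation id below is hypothetical:

```ts
import { i18n } from '@kbn/i18n';

// Hypothetical translation id; the real id lives in the Canvas plugin sources.
const selectColumnLabel = i18n.translate(
  'xpack.canvas.visDimensionArg.selectColumnLabel',
  { defaultMessage: 'select column' }
);

const options = [
  { value: '', text: selectColumnLabel, disabled: true },
  // ...column options derived from the datatable follow
];
```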

argId,
choises: dynamicChoises,
}) => {
const existChoises = typeInstance.options.choices ?? dynamicChoises?.[typeInstance.name] ?? [];
Contributor

Can you explain where the dynamic choices option comes from? I'm not sure I'm following why it's needed.

Contributor Author

Oh, it seems that's left over from the expression-builder version. I'll remove it. Previously it allowed passing options for the select from the model's resolve method.

Contributor Author

Completely removed this part of the code.
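For context, a rough sketch of the simplified shape once the dynamic choices path is removed; the type names here are hypothetical:

```ts
// Choices now come only from the statically declared options on the
// argument type instance (the dynamicChoises fallback is gone).
interface Choice {
  value: string;
  text: string;
}

interface TypeInstance {
  name: string;
  options: { choices?: Choice[] };
}

function getChoices(typeInstance: TypeInstance): Choice[] {
  return typeInstance.options.choices ?? [];
}
```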

@Kuznietsov

@elasticmachine merge upstream

@Kuznietsov

@elasticmachine merge upstream


@stratoula stratoula left a comment


  • Something goes wrong with the palette in existing tagcloud visualizations. For example, here the colors are not the correct ones:
    [screenshot]

  • When I export a tagcloud SO from 7.15 and import it again on localhost, I get this error:
    [screenshot]

@Kuznietsov

@elasticmachine merge upstream

@Kuznietsov Kuznietsov force-pushed the tag_cloud_canvas_arguments branch from 02f4c87 to bf3f3c8 Compare September 15, 2021 08:18
@Kuznietsov

Kuznietsov commented Sep 15, 2021

  • Something goes wrong with the palette in existing tagcloud visualizations. For example, here the colors are not the correct ones:
    [screenshot]
  • When I export a tagcloud SO from 7.15 and import it again on localhost, I get this error:
    [screenshot]

@stratoula, I've fixed the code to avoid migrations.
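For illustration, one common way to stay backward compatible without a saved-object migration is to accept both the old and the new argument shape at runtime. This is a hedged sketch with hypothetical names, not necessarily how this PR implements it:

```ts
// Legacy saved objects (7.15) stored the palette as a plain string name;
// the new expression passes a palette output object. Normalize both.
type PaletteOutput = { type: 'palette'; name: string };

function resolvePaletteName(
  palette: string | PaletteOutput | undefined,
  defaultName = 'default'
): string {
  if (!palette) return defaultName;
  return typeof palette === 'string' ? palette : palette.name;
}
```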

@stratoula

@elasticmachine merge upstream

setPalette(paramName, {
type: 'palette',
name: newPalette,
name: palette?.value ?? 'clear',

@stratoula stratoula Sep 16, 2021


What does the fallback to 'clear' do here?


@Kuznietsov Kuznietsov Sep 17, 2021


@stratoula, I've updated the code.

This line updates the chart colors to the default palette if the active palette doesn't exist in the palettes list:

This line changes the palette in the palette picker to the default if the active palette is not defined:

valueOfSelected={activePalette?.name || DEFAULT_PALETTE}
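Roughly, the fallback being described: if the stored palette is not among the registered palettes, both the chart colors and the picker value fall back to the default. A minimal sketch with hypothetical names:

```ts
const DEFAULT_PALETTE = 'default';

interface Palette {
  name: string;
  colors?: string[];
}

// Resolve the palette used for rendering: prefer the active one if it is
// registered, otherwise fall back to the default palette.
function resolvePalette(palettes: Palette[], activePalette?: Palette): Palette | undefined {
  const found = palettes.find((p) => p.name === activePalette?.name);
  return found ?? palettes.find((p) => p.name === DEFAULT_PALETTE);
}

// In the picker, the selected value falls back the same way:
// valueOfSelected={activePalette?.name || DEFAULT_PALETTE}
```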

Contributor

Perfect! Thank you :)

@Kuznietsov

@elasticmachine merge upstream


@stratoula stratoula left a comment


Kibana Vis Editors team changes LGTM. I tested it locally and it works fine; it is also bwc ;)

@Kuznietsov Kuznietsov enabled auto-merge (squash) September 17, 2021 09:09
@kibanamachine

💛 Build succeeded, but was flaky


Test Failures

Kibana Pipeline / general / X-Pack API Integration Tests.x-pack/test/api_integration/apis/ml/results/get_anomalies_table_data·ts.apis Machine Learning ResultsService GetAnomaliesTableData should fetch anomalies table data

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 1 times on tracked branches: https://github.com/elastic/kibana/issues/112417

[00:00:00]       │
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook in "apis"
[00:09:58]           └-: Machine Learning
[00:09:58]             └-> "before all" hook in "Machine Learning"
[00:09:58]             └-> "before all" hook in "Machine Learning"
[00:09:58]               │ debg creating role ft_ml_source
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_ml_source]
[00:09:58]               │ debg creating role ft_ml_source_readonly
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_ml_source_readonly]
[00:09:58]               │ debg creating role ft_ml_dest
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_ml_dest]
[00:09:58]               │ debg creating role ft_ml_dest_readonly
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_ml_dest_readonly]
[00:09:58]               │ debg creating role ft_ml_ui_extras
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_ml_ui_extras]
[00:09:58]               │ debg creating role ft_default_space_ml_all
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_default_space_ml_all]
[00:09:58]               │ debg creating role ft_default_space1_ml_all
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_default_space1_ml_all]
[00:09:58]               │ debg creating role ft_all_spaces_ml_all
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_all_spaces_ml_all]
[00:09:58]               │ debg creating role ft_default_space_ml_read
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_default_space_ml_read]
[00:09:58]               │ debg creating role ft_default_space1_ml_read
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_default_space1_ml_read]
[00:09:58]               │ debg creating role ft_all_spaces_ml_read
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_all_spaces_ml_read]
[00:09:58]               │ debg creating role ft_default_space_ml_none
[00:09:58]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [node-01] added role [ft_default_space_ml_none]
[00:09:58]               │ debg creating user ft_ml_poweruser
[00:09:58]               │ info [o.e.x.s.a.u.TransportPutUserAction] [node-01] added user [ft_ml_poweruser]
[00:09:58]               │ debg created user ft_ml_poweruser
[00:09:58]               │ debg creating user ft_ml_poweruser_spaces
[00:09:58]               │ info [o.e.x.s.a.u.TransportPutUserAction] [node-01] added user [ft_ml_poweruser_spaces]
[00:09:58]               │ debg created user ft_ml_poweruser_spaces
[00:09:58]               │ debg creating user ft_ml_poweruser_space1
[00:09:59]               │ info [o.e.x.s.a.u.TransportPutUserAction] [node-01] added user [ft_ml_poweruser_space1]
[00:09:59]               │ debg created user ft_ml_poweruser_space1
[00:09:59]               │ debg creating user ft_ml_poweruser_all_spaces
[00:09:59]               │ info [o.e.x.s.a.u.TransportPutUserAction] [node-01] added user [ft_ml_poweruser_all_spaces]
[00:09:59]               │ debg created user ft_ml_poweruser_all_spaces
[00:09:59]               │ debg creating user ft_ml_viewer
[00:09:59]               │ info [o.e.x.s.a.u.TransportPutUserAction] [node-01] added user [ft_ml_viewer]
[00:09:59]               │ debg created user ft_ml_viewer
[00:09:59]               │ debg creating user ft_ml_viewer_spaces
[00:09:59]               │ info [o.e.x.s.a.u.TransportPutUserAction] [node-01] added user [ft_ml_viewer_spaces]
[00:09:59]               │ debg created user ft_ml_viewer_spaces
[00:09:59]               │ debg creating user ft_ml_viewer_space1
[00:09:59]               │ info [o.e.x.s.a.u.TransportPutUserAction] [node-01] added user [ft_ml_viewer_space1]
[00:09:59]               │ debg created user ft_ml_viewer_space1
[00:09:59]               │ debg creating user ft_ml_viewer_all_spaces
[00:09:59]               │ info [o.e.x.s.a.u.TransportPutUserAction] [node-01] added user [ft_ml_viewer_all_spaces]
[00:09:59]               │ debg created user ft_ml_viewer_all_spaces
[00:09:59]               │ debg creating user ft_ml_unauthorized
[00:09:59]               │ info [o.e.x.s.a.u.TransportPutUserAction] [node-01] added user [ft_ml_unauthorized]
[00:09:59]               │ debg created user ft_ml_unauthorized
[00:09:59]               │ debg creating user ft_ml_unauthorized_spaces
[00:09:59]               │ info [o.e.x.s.a.u.TransportPutUserAction] [node-01] added user [ft_ml_unauthorized_spaces]
[00:09:59]               │ debg created user ft_ml_unauthorized_spaces
[00:25:55]             └-: ResultsService
[00:25:55]               └-> "before all" hook in "ResultsService"
[00:25:55]               └-: GetAnomaliesTableData
[00:25:55]                 └-> "before all" hook for "should fetch anomalies table data"
[00:25:55]                 └-> "before all" hook for "should fetch anomalies table data"
[00:25:55]                   │ info [x-pack/test/functional/es_archives/ml/farequote] Loading "mappings.json"
[00:25:55]                   │ info [x-pack/test/functional/es_archives/ml/farequote] Loading "data.json.gz"
[00:25:55]                   │ info [x-pack/test/functional/es_archives/ml/farequote] Skipped restore for existing index "ft_farequote"
[00:25:56]                   │ debg applying update to kibana config: {"dateFormat:tz":"UTC"}
[00:25:58]                   │ debg Creating anomaly detection job with id 'fq_multi_1_ae' ...
[00:25:58]                   │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.ml-anomalies-shared] creating index, cause [api], templates [.ml-anomalies-], shards [1]/[1]
[00:25:58]                   │ info [o.e.c.r.a.AllocationService] [node-01] updating number_of_replicas to [0] for indices [.ml-anomalies-shared]
[00:25:58]                   │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.ml-annotations-6] creating index, cause [api], templates [], shards [1]/[1]
[00:25:58]                   │ info [o.e.c.r.a.AllocationService] [node-01] updating number_of_replicas to [0] for indices [.ml-annotations-6]
[00:25:58]                   │ info [o.e.c.m.MetadataMappingService] [node-01] [.ml-anomalies-shared/LwkecsTuScKNjDO-dBw80Q] update_mapping [_doc]
[00:25:59]                   │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.ml-config] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[00:25:59]                   │ info [o.e.c.r.a.AllocationService] [node-01] updating number_of_replicas to [0] for indices [.ml-config]
[00:25:59]                   │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.ml-notifications-000002] creating index, cause [auto(bulk api)], templates [.ml-notifications-000002], shards [1]/[1]
[00:25:59]                   │ info [o.e.c.r.a.AllocationService] [node-01] updating number_of_replicas to [0] for indices [.ml-notifications-000002]
[00:25:59]                   │ debg Waiting up to 5000ms for 'fq_multi_1_ae' to exist...
[00:25:59]                   │ debg > AD job created.
[00:25:59]                   │ debg Creating datafeed with id 'datafeed-fq_multi_1_ae' ...
[00:26:00]                   │ debg Waiting up to 5000ms for 'datafeed-fq_multi_1_ae' to exist...
[00:26:00]                   │ debg > Datafeed created.
[00:26:00]                   │ debg Opening anomaly detection job 'fq_multi_1_ae'...
[00:26:00]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [node-01] Opening job [fq_multi_1_ae]
[00:26:00]                   │ info [o.e.x.c.m.u.MlIndexAndAlias] [node-01] About to create first concrete index [.ml-state-000001] with alias [.ml-state-write]
[00:26:00]                   │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.ml-state-000001] creating index, cause [api], templates [.ml-state], shards [1]/[1]
[00:26:00]                   │ info [o.e.c.r.a.AllocationService] [node-01] updating number_of_replicas to [0] for indices [.ml-state-000001]
[00:26:00]                   │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ml-state-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ml-size-based-ilm-policy]
[00:26:00]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [node-01] [fq_multi_1_ae] Loading model snapshot [N/A], job latest_record_timestamp [N/A]
[00:26:01]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [node-01] [fq_multi_1_ae] [autodetect/223187] [CResourceMonitor.cc@82] Setting model memory limit to 20 MB
[00:26:01]                   │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ml-state-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [ml-size-based-ilm-policy]
[00:26:01]                   │ debg > AD job opened.
[00:26:01]                   │ debg Starting datafeed 'datafeed-fq_multi_1_ae' with start: '0', end: '1631873104328'...
[00:26:01]                   │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ml-size-based-ilm-policy]
[00:26:01]                   │ info [o.e.x.m.d.DatafeedJob] [node-01] [fq_multi_1_ae] Datafeed started (from: 1970-01-01T00:00:00.000Z to: 2021-09-17T10:05:04.328Z) with frequency [600000ms]
[00:26:01]                   │ debg > Datafeed started.
[00:26:01]                   │ debg Waiting up to 120000ms for datafeed state to be stopped...
[00:26:01]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:01]                   │ debg --- retry.waitForWithTimeout error: expected job state to be stopped but got started
[00:26:01]                   │ info [o.e.c.m.MetadataMappingService] [node-01] [.ml-anomalies-shared/LwkecsTuScKNjDO-dBw80Q] update_mapping [_doc]
[00:26:01]                   │ info [o.e.x.m.j.p.DataCountsReporter] [node-01] [fq_multi_1_ae] 10000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:26:01]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:01]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:02]                   │ info [o.e.x.m.j.p.DataCountsReporter] [node-01] [fq_multi_1_ae] 20000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:26:02]                   │ info [o.e.x.m.j.p.DataCountsReporter] [node-01] [fq_multi_1_ae] 30000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:26:02]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:02]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:02]                   │ info [o.e.x.m.j.p.DataCountsReporter] [node-01] [fq_multi_1_ae] 40000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:26:03]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:03]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:03]                   │ info [o.e.x.m.j.p.DataCountsReporter] [node-01] [fq_multi_1_ae] 50000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:26:03]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:03]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:03]                   │ info [o.e.x.m.j.p.DataCountsReporter] [node-01] [fq_multi_1_ae] 60000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:26:04]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:04]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:04]                   │ info [o.e.x.m.j.p.DataCountsReporter] [node-01] [fq_multi_1_ae] 70000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:26:04]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:04]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:04]                   │ info [o.e.x.m.j.p.DataCountsReporter] [node-01] [fq_multi_1_ae] 80000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:26:05]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:05]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:05]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:05]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:06]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:06]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:06]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:06]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:06]                   │ info [o.e.x.m.d.DatafeedJob] [node-01] [fq_multi_1_ae] Lookback has finished
[00:26:06]                   │ info [o.e.x.m.d.DatafeedRunner] [node-01] [no_realtime] attempt to stop datafeed [datafeed-fq_multi_1_ae] for job [fq_multi_1_ae]
[00:26:06]                   │ info [o.e.x.m.d.DatafeedRunner] [node-01] [no_realtime] try lock [20s] to stop datafeed [datafeed-fq_multi_1_ae] for job [fq_multi_1_ae]...
[00:26:06]                   │ info [o.e.x.m.d.DatafeedRunner] [node-01] [no_realtime] stopping datafeed [datafeed-fq_multi_1_ae] for job [fq_multi_1_ae], acquired [true]...
[00:26:06]                   │ info [o.e.x.m.d.DatafeedRunner] [node-01] [no_realtime] datafeed [datafeed-fq_multi_1_ae] for job [fq_multi_1_ae] has been stopped
[00:26:07]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [node-01] Closing job [fq_multi_1_ae], because [close job (api)]
[00:26:07]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [node-01] [fq_multi_1_ae] [autodetect/223187] [CCmdSkeleton.cc@66] Handled 86273 records
[00:26:07]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [node-01] [fq_multi_1_ae] [autodetect/223187] [CAnomalyJob.cc@1601] Pruning obsolete models
[00:26:07]                   │ info [o.e.c.m.MetadataMappingService] [node-01] [.ml-anomalies-shared/LwkecsTuScKNjDO-dBw80Q] update_mapping [_doc]
[00:26:07]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_ae
[00:26:07]                   │ info [o.e.x.m.p.AbstractNativeProcess] [node-01] [fq_multi_1_ae] State output finished
[00:26:07]                   │ debg Waiting up to 120000ms for job state to be closed...
[00:26:07]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:26:07]                   │ debg > AD job stats fetched.
[00:26:07]                   │ debg --- retry.waitForWithTimeout error: expected job state to be closed but got closing
[00:26:07]                   │ info [o.e.x.m.j.p.a.o.AutodetectResultProcessor] [node-01] [fq_multi_1_ae] 120 buckets parsed from autodetect output
[00:26:07]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:26:07]                   │ debg > AD job stats fetched.
[00:26:07]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:08]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:26:08]                   │ info [o.e.x.m.j.p.a.AutodetectCommunicator] [node-01] [fq_multi_1_ae] autodetect connection for job closed
[00:26:08]                   │ debg > AD job stats fetched.
[00:26:08]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:26:08]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:26:08]                   │ debg > AD job stats fetched.
[00:26:08]                 └-> should fetch anomalies table data
[00:26:08]                   └-> "before each" hook: global before each for "should fetch anomalies table data"
[00:26:08]                   └- ✖ fail: apis Machine Learning ResultsService GetAnomaliesTableData should fetch anomalies table data
[00:26:08]                   │       Error: expected 14 to sort of equal 13
[00:26:08]                   │       + expected - actual
[00:26:08]                   │ 
[00:26:08]                   │       -14
[00:26:08]                   │       +13
[00:26:08]                   │       
[00:26:08]                   │       at Assertion.assert (/dev/shm/workspace/parallel/6/kibana/node_modules/@kbn/expect/expect.js:100:11)
[00:26:08]                   │       at Assertion.eql (/dev/shm/workspace/parallel/6/kibana/node_modules/@kbn/expect/expect.js:244:8)
[00:26:08]                   │       at Context.<anonymous> (test/api_integration/apis/ml/results/get_anomalies_table_data.ts:83:40)
[00:26:08]                   │       at runMicrotasks (<anonymous>)
[00:26:08]                   │       at processTicksAndRejections (internal/process/task_queues.js:95:5)
[00:26:08]                   │       at Object.apply (/dev/shm/workspace/parallel/6/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
[00:26:08]                   │ 
[00:26:08]                   │ 

Stack Trace

Error: expected 14 to sort of equal 13
    at Assertion.assert (/dev/shm/workspace/parallel/6/kibana/node_modules/@kbn/expect/expect.js:100:11)
    at Assertion.eql (/dev/shm/workspace/parallel/6/kibana/node_modules/@kbn/expect/expect.js:244:8)
    at Context.<anonymous> (test/api_integration/apis/ml/results/get_anomalies_table_data.ts:83:40)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at Object.apply (/dev/shm/workspace/parallel/6/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) {
  actual: '14',
  expected: '13',
  showDiff: true
}

Metrics [docs]

Module Count

Fewer modules leads to a faster build time

id before after diff
canvas 1061 1063 +2

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

id before after diff
canvas 1.0MB 1.0MB +6.2KB
expressionTagcloud 8.4KB 8.5KB +114.0B
total +6.3KB

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100kb

id before after diff
expressionTagcloud 7.2KB 7.2KB -2.0B
visDefaultEditor 18.5KB 18.6KB +103.0B
visTypeTagcloud 5.8KB 6.0KB +117.0B
total +218.0B

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

cc @Kuznietsov

@Kuznietsov Kuznietsov merged commit c63fff9 into elastic:master Sep 17, 2021
kibanamachine added a commit to kibanamachine/kibana that referenced this pull request Sep 17, 2021
* Added arguments to Tagcloud at Canvas.

Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
@kibanamachine

💚 Backport successful

Status Branch Result
7.x

This backport PR will be merged automatically after passing CI.

kibanamachine added a commit that referenced this pull request Sep 17, 2021
* Added arguments to Tagcloud at Canvas.

Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>

Co-authored-by: Yaroslav Kuznietsov <kuznetsov.yaroslav.yk@gmail.com>
@timroes timroes added release_note:skip Skip the PR/issue when compiling release notes and removed release_note:feature Makes this part of the condensed release notes labels Oct 27, 2021
Labels
auto-backport Deprecated - use backport:version if exact versions are needed Feature:Canvas impact:medium Addressing this issue will have a medium level of impact on the quality/strength of our product. loe:medium Medium Level of Effort release_note:skip Skip the PR/issue when compiling release notes Team:Presentation Presentation Team for Dashboard, Input Controls, and Canvas v7.16.0 v8.0.0
9 participants