# wx-conrefs.yml — 679 lines (624 loc), 24.5 KB
# Do NOT edit this file, to edit conrefs edit the common/conrefs.yml file
attributedefs: >
{:codeblock: .codeblock}
{:custom_width: width="550"}
{:iih: height="20" style="vertical-align:text-bottom"}
{:biw: style="max-width:90%;height:auto;width:auto"}
{:external: target="_blank" .external}
{:java: #java .ph data-hd-programlang='java'}
{:javascript: #javascript .ph data-hd-programlang='javascript'}
{:new_window: target="_blank" rel="noopener noreferrer" title="Opens a new
window or tab"}
{:php: #php .ph data-hd-programlang='php'}
{:pre: .pre}
{:python: #python .ph data-hd-programlang='python'}
{:r: #r .ph data-hd-programlang='r'}
{:ruby: #ruby .ph data-hd-programlang='ruby'}
{:scala: #scala .ph data-hd-programlang='scala'}
{:screen: .screen}
{:shortdesc: .shortdesc}
{:tip: .tip}
{:important: .important}
<style>.midd::after { content:"\A\00A0\00A0"; white-space: pre }</style>
kyle: kylewx
AI_Factsheets: AI Factsheets
Amazon_DynamoDB: Amazon DynamoDB
Amazon_EMR_short: Amazon EMR
Amazon_EMR_full: Amazon Elastic MapReduce
Amazon_S3_short: Amazon S3
Amazon_S3_full: Amazon Simple Storage Service
Amazon_sagemaker: Amazon SageMaker
apis: Watson Data API
aws_glue: AWS Glue
aws_glue_cat: AWS Glue Data Catalog
hbase: Apache HBase
at_full_notm: IBM Cloud Activity Tracker with LogDNA
avatar: Avatar
aws: AWS
aws_long: Amazon Web Services
Anaconda: Anaconda Repository for IBM Cloud Pak for Data
autoai: AutoAI
azure: Microsoft Azure
mssqlserver: Microsoft SQL Server
azure-sql: Microsoft Azure SQL Database
azure-synapse: Microsoft Azure Synapse Analytics
apachehdfs: Apache HDFS
azureblob: Azure Blob Storage
cosmosdb: Azure Cosmos DB
azurefs: Azure File Storage
conn-sdk: IBM Connector SDK
azuredls: Azure Data Lake Storage
hadoopst: Analytics for Apache Hadoop
BigInsightsHDFS: Analytics Engine HDFS
Bluemix: IBM Cloud
Bluemix_short: IBM Cloud
Bluemix_notm: IBM Cloud
redshift: Amazon Redshift
azrds-mysql: Amazon RDS for MySQL
azrds-oracle: Amazon RDS for Oracle
azrds-postresql: Amazon RDS for PostgreSQL
cassandra: Apache Cassandra
cass-opt: Apache Cassandra for DataStage
derby: Apache Derby
hive: Apache Hive
kudu: Apache Kudu
box: Box
cloudera: Apache Impala
cognos: Cognos Analytics
cos_short: Cloud Object Storage
cos: IBM Cloud Object Storage
cos_infra: Cloud Object Storage (infrastructure)
cos_s3: Cloud Object Storage (S3 API)
cp: IBM Cloud Private
cp_short: Cloud Private
cloudant: Cloudant
cpddocs: https://www.ibm.com/docs/SSQNUZ_5.0.x
dynamdashbemb_short: Cognos Dashboard Embedded
composeForMySQL: IBM Cloud Databases for MySQL
composeForPostgreSQL: IBM Cloud Databases for PostgreSQL
dbdatastax: IBM Cloud Databases for DataStax
datastax-op: DataStax Enterprise
databricks: Databricks
databricks-m: Databricks (Manta)
databricks-azure: Microsoft Azure Databricks
dashboards: Cognos Dashboards
ibmdashboards: IBM Dashboards
dashdbshort: dashDB
dashdb_long: IBM dashDB (now named IBM Db2 Warehouse on Cloud)
dashdb_long_bold: '**IBM dashDB** (now named IBM Db2 Warehouse on Cloud)'
data: Cloud Pak for Data
datalong: IBM Cloud Pak for Data
databand: IBM Databand
cpdaas: IBM watsonx
cpdaas_p: Cloud Pak for Data as a Service
installer-full: IBM Cloud Pak for Data control plane
datahub: IBM Knowledge Catalog
datahub_full: IBM Knowledge Catalog
dpx: Data Product Hub
dpx_long: IBM Data Product Hub as a Service
dpx_long_onprem: IBM Data Product Hub
dpriv: Masking flow
drpiv_lc: masking flow
catalog: catalog
classifier_short: Watson Natural Language Classifier
classifier_long: IBM Watson Natural Language Classifier
classifier: Natural Language Classifier
ccs_lower: common core services
ccs_upper: Common Core Services
conversationshort: Watson Assistant
conversationfull: IBM Watson Assistant
cos_swift: Object Storage OpenStack Swift
cos_swift_infra: Object Storage OpenStack Swift (infrastructure)
data_refinery: Data Refinery
data_connect: Data Refinery
data_connect_full: Data Refinery
data_rep: Data Replication
data_rep_full: IBM Data Replication
datastage: DataStage
data_virt: Watson Query
data_virt_cpd: Data Virtualization
data_virt_ui: Data virtualization
data_virt_conn: Data Virtualization
datavirt-full: IBM Watson Query
datavirt-z: Data Virtualization Manager for z/OS
discovery_short: Watson Discovery
discovery_long: IBM Watson Discovery
discovery: Discovery
DSX_short: Watson Studio
DSX_full: IBM Watson Studio
DSX_cloud_full: IBM Watson Studio Cloud
DSX: Watson Studio
DSX_ep: Watson Studio Enterprise
DSX_local: Watson Studio Local
Db2: Db2
db2-dstage: IBM Db2 for DataStage
Db2fori: Db2 for i
Db2forzOS: Db2 for z/OS
Db2forzOS_long: IBM Db2 for z/OS
Db2_Hosted_short: Db2 Hosted
Db2_Hosted_long: IBM Db2 Hosted (previously named IBM DB2 on Cloud)
Db2_on_Cloud_short: Db2 on Cloud
Db2_on_Cloud_long: IBM® Db2 on Cloud
Db2_on_Cloud_long_notm: IBM Db2 on Cloud
Db2Warehouse: Db2 Warehouse
Db2WarehouseonCloud_short: Db2 Warehouse on Cloud
Db2WarehouseonCloud_med: IBM Db2 Warehouse on Cloud
Db2WarehouseonCloud_long: IBM Db2 Warehouse on Cloud (previously named IBM dashDB)
do: Decision Optimization
dq-rules: Data quality rules
eventstore: Db2 Event Store
docker: Docker
dremio: Dremio
dropbox: Dropbox
edb: Databases for EDB
es: Elasticsearch
elastic-cloud: Elastic Cloud
eefah: Execution Engine for Apache Hadoop
exasol: Exasol
flight_service: Flight service
fm_prompt: Prompt Lab
fm_tuning: Tuning Studio
gallery: Resource hub
asset_sample: Resource hub sample
globalrep: global search repository
gbq: Google BigQuery
google-pub-sub: Google Cloud Pub/Sub
governance-console: Governance console
governance-console-long: Governance console on IBM watsonx
gcs: Google Cloud Storage
hadoop-hdfs: HDFS via Execution Engine for Hadoop
hadoop-hive: Hive via Execution Engine for Hadoop
helm: Helm
ikcpremium: IBM Knowledge Catalog Premium
ikcpremiumfull: IBM Knowledge Catalog Premium Cartridge
ikcstandard: IBM Knowledge Catalog Standard
ikcstandardfull: IBM Knowledge Catalog Standard Cartridge
ikcanyedition: IBM Knowledge Catalog any edition
guard-short: Guardium Data Protection
guard-long: IBM Security Guardium Data Protection
iae_short: Analytics Engine
iae_full: IBM Analytics Engine
iae_full_notm: IBM Analytics Engine
iae_cpd_short: Analytics Engine powered by Apache Spark
iae_cpd_full: IBM Analytics Engine powered by Apache Spark
iae_cpd_full_notm: IBM Analytics Engine powered by Apache Spark
iamshort: Cloud Identity and Access Management
IBM: IBM
ibmid: IBMid
IBM_notm: IBM
icp: IBM Cloud Pak
ifpc: Informatica PowerCenter
igc_short: Information Governance Catalog
igc_long: IBM InfoSphere Information Governance Catalog
ieeh: Impala via Execution Engine for Hadoop
impala: Impala
Informix: Informix
iis: InfoSphere Information Server
kafka: Apache Kafka
kafka-short: Kafka
k_studio_short: Watson Knowledge Studio
k_studio_long: IBM Knowledge Studio
k_studio: Knowledge Studio
keymanagementservicefull: IBM Key Protect for IBM Cloud
keymanagementserviceshort: Key Protect
looker: Google Looker
mariadb: MariaDB
mdm-oc_full: IBM Match 360 with Watson
mdm-oc_short: IBM Match 360
m3: Match 360
ipm-full: IBM Product Master
ipm-short: Product Master
message_hub: Event Streams
message_hub_full: IBM Event Streams
meta-enrich: Metadata enrichment
meta-import: Metadata import
meta-import-discovery: Metadata import (discovery)
meta-import-lineage: Metadata import (lineage)
microstrategy: MicroStrategy
milvus: Milvus
minio: MinIO
mongo: MongoDB
mongodb: IBM Cloud Databases for MongoDB
mq: MQ
mq-long: IBM MQ
mysql: MySQL
nlc_full: IBM Watson Natural Language Classifier
nlc: Natural Language Classifier
nlp: Watson Natural Language Processing
nlpfull: IBM Watson Natural Language Processing
nlufull: IBM Watson Natural Language Understanding
nlushort: Natural Language Understanding
odata: OData
odi: Open Data for Industries
openlineage: OpenLineage
oracle: Oracle
oracle-dstage: Oracle Database for DataStage
oraclebi: Oracle Business Intelligence Enterprise Edition
oracledi: Oracle Data Integrator
p_insights_short: Watson Personality Insights
p_insights_long: IBM Watson Personality Insights
p_insights: Personality Insights
pipeline_full: IBM Orchestration Pipelines
pipeline_short: Orchestration Pipelines
pipeline: Pipelines
pivgreenplum: Greenplum
pc: Connectivity
pconn: Platform assets
pm_short: Machine Learning
pm_wml: Watson Machine Learning
pm_full: IBM Watson Machine Learning
plananalytics: Planning Analytics
postgresql: PostgreSQL
powerbi: Power BI
powerbi-desktop: Microsoft Power BI Desktop
powerbi-rs: Microsoft Power BI (Azure)
PureData: Netezza Performance Server
nzopt: IBM Netezza Performance Server for DataStage
qlik: Qlik Sense
rhos-short: OpenShift
rhos-long: Red Hat OpenShift
sapase: SAP ASE
sapbo: SAP BusinessObjects
sapbapi: SAP BAPI
sapiq: SAP IQ
sapodata: SAP OData
saphana: SAP HANA
sapbulk: SAP Bulk Extract
sapdelta: SAP Delta Extract
sapidoc: SAP IDoc
sapabap: SAP Data Dictionary
sap-s4hana: SAP S/4HANA
sas: Statistical Analysis System
sat_long: IBM Cloud Satellite
sat: Satellite
satlink: Satellite Link
satloc: Satellite location
satctr: Satellite Connector
singlestore: SingleStoreDB
spark_short: Apache Spark
spark_long: IBM Apache Spark
spark: Spark
speechtext_short: Watson Speech to Text
speechtext_full: IBM Watson Speech to Text
speechtext: Speech to Text
snowflake: Snowflake
spssas: SPSS Analytic Server
spss_modeler: SPSS Modeler
sqldb: SQL Database
sqlquery: IBM Cloud Data Engine
sqlquery-short: Cloud Data Engine
ssas: Microsoft SQL Server Analysis Services
ssis: Microsoft SQL Server Integration Services
ssrs: Microsoft SQL Server Reporting Services
st-volume: Storage volume
streaminganalyticsshort: Streaming Analytics
streaminganalyticslong: IBM Streaming Analytics
streams_short: Streams Designer
streams_long: IBM Streams Designer
svsdio_long: IBM Streams Flows Extension for Microsoft Visual Studio Code
svsdio: Streams Flows extension for VS Code
synthetic: Synthetic Data Generator
tableau: Tableau
talend: Talend
tera: Teradata
tera-dstage: Teradata database for DataStage
textspeech_short: Watson Text to Speech
textspeech_full: IBM Watson Text to Speech
textspeech: Text to Speech
till: Tiller
tone_short: Watson Tone Analyzer
tone_long: IBM Watson Tone Analyzer
tone: Tone Analyzer
translator_short: Watson Language Translator
translator_long: IBM Watson Language Translator
translator: Language Translator
vertica: Vertica
visu: Visualizations
Visutab: Visualization
WA_short: Watson Analytics
WA_long: IBM Watson Analytics
wx-platform: watsonx
wx-platform_cap: Watsonx
wx-platform_full: IBM watsonx as a Service
wxai: watsonx.ai
wxai_cap: Watsonx.ai
wxai_full: IBM watsonx.ai
wxai_lite: watsonx.ai lightweight engine
wxdata_full: IBM watsonx.data
wxdata_cap: Watsonx.data
wxdata: watsonx.data
watsonxd-conn: watsonx.data Presto
wxgov: watsonx.governance
wxgovernance_cap: Watsonx.governance
wxgovernance_full: IBM watsonx.governance
wmla: Watson Machine Learning Accelerator
wmla_full: IBM Watson Machine Learning Accelerator
Platform: IBM Watson
Platform_full: IBM Watson
presto: Presto
visualrecognitionfulltm: IBM Watson Visual Recognition
visualrecognitionfull: IBM Watson Visual Recognition
visualrecognitionshort: Visual Recognition
ais_full: IBM Watson OpenScale
ais_short: Watson OpenScale
ais: Watson OpenScale
wos4d_full: IBM Watson OpenScale for IBM Cloud Pak for Data
wos4d_notm: IBM Watson OpenScale for IBM Cloud Pak for Data
op_platform: IBM OpenPages with Watson
op_mrg: IBM OpenPages Model Risk Governance
op_short: IBM OpenPages
op_shorter: OpenPages
ai_func: AI function
ai_func_pl: AI functions
bigsql: Db2 Big SQL
bigsql_full: IBM Db2 Big SQL
objectdetectionfulltm: IBM Watson Object Detection
objectdetectionfull: IBM Watson Object Detection
objectdetectionshort: Object Detection
connectivity: >-
For **Private connectivity**, to connect to a database that is not
externalized to the internet (for example, behind a firewall), you must set up
a [secure connection](securingconn.html).
jdbc-cpd: >-
See [Importing JDBC drivers](../../cpd/admin/jdbc-drivers.html) for the
procedure and required permissions to upload the JAR file to Cloud Pak for
Data. **Important**: By default, uploading JDBC driver files is disabled and
users cannot view the list of JDBC drivers in the web client. An administrator
must [enable users to upload or view JDBC
drivers](../../cpd/admin/post-install-enable-jdbc-upload.html).
chooseproject: >-
Click **Assets > New asset > Connect to a data source**. See [Adding a
connection to a project](create-conn.html).
v-cred-ssl: >-
For **Credentials** and **Certificates**, you can use secrets if a vault is
configured for the platform and the service supports vaults. For information,
see [Using secrets from vaults in connections](vaults-conn.html).
v-cred: >-
For **Credentials**, you can use secrets if a vault is configured for the
platform and the service supports vaults. For information, see [Using secrets
from vaults in connections](vaults-conn.html).
v-ssl: >-
For **Certificates**, you can use secrets if a vault is configured for the
platform and the service supports vaults. For information, see [Using secrets
from vaults in connections](vaults-conn.html).
choosecatalog: >-
Click **Add to catalog > Connection**. See [Adding a connection asset to a
catalog](../catalog/c-add-conn.html).
choosedsp: >-
Click **Import assets > Data access > Connection**. See [Adding data assets to
a deployment space](../analyze-data/ml-space-add-assets.html).
dataproject: See [Add data from a connection in a project](connected-data.html).
datacatalog: See [Add data from a connection in a catalog](../catalog/conn-data.html).
experimental: >-
<span class="carbon-tag-magenta bx--tag bx--tag--magenta">Experimental</span>
This is an experimental release and is not yet supported for use in production
environments.
dpxcreds: >-
Data Product Hub uses the credentials of the connection owner to create and
deliver data products. The personal credentials entered by the connection
owner are automatically saved and used to create and deliver data products.
dph-conn: >-
**Data Product Hub**
: You can connect to this data source from Data Product Hub. For
instructions, see [Connectors for Data Product
Hub](../../wsj/data-products/dph_conn_types.html).
pv: Version 1.1
pv2: Version 2.0.0-1
number2: 2\.0\.0-1
watson: Watson
wml: Watson Machine Learning
wmls: Watson Machine Learning Server
wmls-full: IBM Watson Machine Learning Server
service-label: <span class="carbon-tag-blue bx--tag bx--tag--blue">Service</span>
tech-preview: >-
<span class="carbon-tag-magenta bx--tag bx--tag--magenta">Tech preview</span>
This is a technology preview and is not yet supported for use in production
environments.
lineage: lineage
Lineage: Lineage
preinstall-elasticsearch: >
Log in as a system administrator and set the virtual memory kernel parameter
to `262144` on every compute node in the cluster to support the Elasticsearch
microservice. Elasticsearch helps check the health of your clusters and
creating data visualizations.
Set the parameter on every node in the cluster:
```
echo "vm.max_map_count=262144" >> /etc/sysctl.conf ; sysctl -p
```
If you don't set the parameter, the Elasticsearch pod fails with the following
error message:
```
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to
at least [262144]
```
timezone-master-node: >
If the service is installed from a remote machine (the installer node) that
runs in a different time zone from the master node, the time zone for the
master node is overwritten by the time zone for the installer node. This time
zone discrepancy results in scheduled jobs that don't run at the correct time.
If necessary, log in as a system administrator and set the timezone for the
master node:
1. Locate the [tz database code
format](https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/html-single/version_3_rest_api_guide/index#appe-Timezones)
associated with the master node time zone.
1. If the `override.yaml` file does not exist, create it.
1. Add the tz database code value to the `override.yaml` file. For example, if
you're using the America/Los_Angeles database code, add the following value:
```
global:
masterTimezone: 'America/Los_Angeles'
```
1. Inform the user who is installing the service to include the `--override
override.yaml` option in the installation command.
salesforce: |
Salesforce
sfc: Salesforce.com
salesforce-opt: Salesforce API for DataStage
services-install_role: >
**Required role:** To complete this task, you must be an administrator of the
project (namespace) where you will deploy the service.
services-install_prereq: >
**Prerequisite:** Watson Studio Local must be installed on your cluster before
you can install this service. See [Installing Watson Studio
Local](../install/install-ws.html).
services-install_1: >
1. Transfer the service tar file to the cluster.
1. Extract the service tar file in the same directory where the Watson Studio
Local tar file was extracted.
services-install_2: |
1. Push the Docker images to the registry by running this command:
services-install_3: |
1. Install the setup administration and security by running this command:
services-install_4: |
1. Install the service by running this command:
services-install_icparg: >
Include the `--icp` argument if you are installing the service on an IBM Cloud
Private cluster.
services-install_override: >
Include the `--override override.yaml` argument if the cluster administrator
set the time zone for the master node.
services-install-vars: >
Ensure that you have information from your cluster administrator so that you
can replace the following values in the installation commands:
- `<Package_location>` → The installation directory where the Watson Studio
Local tar file was extracted.
- `<Registry_password>` → The password for the Docker registry.
- `<Registry_username>` → The user name for the Docker registry.
- `<Assembly_version>` → The version of the service assembly to install.
- `<Storage_class_name>` → The name of the storage class to use to provision
storage for the service.
- `<Registry_location>` → The location to store the images in the registry
server.
- `<Registry_from_cluster>` → The location from which pods on the cluster can
pull images.
- `<Project>` → The project (namespace) where the Watson Studio Local control
plane is installed.
- `<Install_override_file>` → The name of the `override.yaml` file.
config-custom-certs: >
You can use your existing certificates without modifying the system
truststore. The following configuration properties enable DSXHI to perform
the following customizations:
**custom_jks**
: DSXHI typically generates a keystore, converts it to a `.crt`, and adds
the `.crt` to the Java truststore. However, with this configuration, DSXHI
allows you to provide a custom keystore that can be used to generate the
required `.crt`.
**dsxhi_cacert**
: DSXHI previously detected the appropriate truststore to use as part of the
installation. With the `dsxhi_cacert` property, DSXHI allows you to provide
any custom truststore (CACERTS), where DSXHI certs are added.
**add_certs_to_truststore**
: This configuration determines whether you or DSXHI adds the host
certificate to the truststore. If you set the configuration to False, users
must add the host certificate to the truststore themselves; DSXHI makes no
changes to the truststore. If you set the configuration to True, DSXHI
retains its default behavior and adds the host certificate to the Java
truststore and to detected datanodes for gateway and web services.
action: >-
<button class="bx--tag bx--tag--red"><span class="bx--tag__label">Action
required</span></button>
cloud-new-plan: >-
<button class="bx--tag bx--tag--teal"><span
class="bx--tag__label">New</span></button> This information describes the new
service plan.
startofchange: >-
<button class="bx--tag bx--tag--magenta"><span class="bx--tag__label">Start of
change</span></button>
endofchange: >-
<button class="bx--tag bx--tag--magenta"><span class="bx--tag__label">End of
change</span></button>
question: >-
<button class="bx--tag bx--tag--red"><span
class="bx--tag__label">Question</span></button>
beginner: >-
<button class="bx--tag bx--tag--green"><span
class="bx--tag__label">Beginner</span></button>
intermediate: >-
<button class="bx--tag bx--tag--blue"><span
class="bx--tag__label">Intermediate</span></button>
advanced: >-
<button class="bx--tag bx--tag--purple"><span
class="bx--tag__label">Advanced</span></button>
nocode: >-
<button class="bx--tag bx--tag--green"><span class="bx--tag__label">No
code</span></button>
lowcode: >-
<button class="bx--tag bx--tag--blue"><span class="bx--tag__label">Low
code</span></button>
allcode: >-
<button class="bx--tag bx--tag--purple"><span class="bx--tag__label">All
code</span></button>
cont: >-
<button class="bx--tag bx--tag--red"><span
class="bx--tag__label">Continuation</span></button>
df_data: Data integration
df_AI: AI governance
df_pipeline: Orchestrate an AI pipeline
df_data_science: Data Science and MLOps
df_pipeline_tut1: Orchestrate an AI pipeline with data integration
df_pipeline_tut2: Orchestrate an AI pipeline with model monitoring
df_360: Master Data Management
df_govern: Data governance
create_connection: New asset > Connect to a data source
create_synthetic_data: New asset > Generate synthetic tabular data
create_prompt: New asset > Chat and build prompts with foundation models
create_tuning: New asset > Tune a foundation model with labeled data
create_vector_index: New asset > Ground gen AI with vectorized documents
create_dr: New asset > Prepare and visualize data
create_autoai: New asset > Build machine learning models automatically
create_spss: New asset > Build models as a visual flow
create_do: New asset > Solve optimization problems
create_fl: New asset > Train models on distributed data
create_notebook: New asset > Work with data and models in Python or R notebooks
create_pipeline: New asset > Automate model lifecycle
create_masking: New asset > Copy and mask data
create_metadata_enr: New asset > Enrich data assets with metadata
create_metadata_imp: New asset > Import metadata for data assets
create_query: New asset > Create a dynamic view of data
create_replicate: New asset > Replicate data
create_ds: New asset > Transform and integrate data
create_ds_component: New asset > Create reusable DataStage components
create_data_quality_definition: New asset > Define how to measure data quality
create_data_quality_rule: New asset > Measure and monitor data quality
create_parameter_set: New asset > Define reusable sets of parameters
create_dashboard: New asset > Visualize data in dashboards
create_experiment: New asset > Build deep learning experiments
create_mdm_configuration: New asset > Consolidate data into 360-degree views
predefined_terms: >-
Predefined business terms and the Knowledge Accelerator Sample Personal Data
category that includes them are available only if you create a Watson
Knowledge Catalog service instance with a Lite or Standard plan after 7
October 2022. For more information, see [Predefined business
terms](predefined-business-terms.html).
data_lineage: IBM Manta Data Lineage
data_lineage_short: Manta Data Lineage
manta: MANTA Automated Data Lineage for IBM Cloud Pak for Data
manta_short: MANTA Automated Data Lineage
manta_alone: MANTA
manta_admin_ui: Automatic Data Lineage
manta_install_guide: >-
MANTA Automated Data Lineage for IBM Cloud Pak for Data Installation and Usage
Manual
manta_use_guide: >-
MANTA Automated Data Lineage for IBM Cloud Pak for Data Visualization User
Documentation
manta_admin_guide: MANTA Automated Data Lineage for IBM Cloud Pak for Data Server Administration
manta_architecture_guide: >-
MANTA Automated Data Lineage for IBM Cloud Pak for Data Container Architecture
(36.x)
manta_openshift_guide: >-
Guide to Managing MANTA Automated Data Lineage on IBM Cloud Pak for Data on
OpenShift Using Operator (36.x)
manta_docset_URL: https://www.ibm.com/support/pages/node/6597457
svc-install: >-
<span class="carbon-tag-blue bx--tag bx--tag--blue">Service</span> This
service is not available by default. An administrator must install this
service on the IBM Cloud Pak for Data platform, and you must be given access
to the service. To determine whether the service is installed, open the
Services catalog and check whether the service is enabled.
icpd-optional: >-
<button class="bx--tag bx--tag--blue"><span
class="bx--tag__label">Optional</span></button> This feature is not available
by default.
unreal_data: Unreal Data