
Commit 30db745

Merge pull request #19924 from MicrosoftDocs/release-2019-cu12
Publish notes for SQL Server Big Data Cluster 2019 cu12
2 parents f8b001e + 98f6e6d

14 files changed

Lines changed: 400 additions & 265 deletions

docs/azdata/install/deploy-install-azdata-pip.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -4,8 +4,8 @@ titleSuffix:
 description: Learn how to install the azdata tool with pip.
 author: MikeRayMSFT
 ms.author: mikeray
-ms.reviewer: mihaelab
-ms.date: 09/30/2020
+ms.reviewer: danibunny
+ms.date: 07/29/2021
 ms.topic: conceptual
 ms.prod: sql
 ms.technology: big-data-cluster
```
````diff
@@ -22,15 +22,15 @@ This article describes how to install the [!INCLUDE [azure-data-cli-azdata](../.
 
 ## <a id="prerequisites"></a> Prerequisites
 
-`azdata` is a command-line utility written in Python that enables cluster administrators to bootstrap and manage data resources via REST APIs. The minimum Python version required is v3.5. `pip` is required to download and install the `azdata` tool. The instructions below provide examples for Windows, Linux (Ubuntu) and macOS/OS X. For installing Python on other platforms, see the [Python documentation](https://wiki.python.org/moin/BeginnersGuide/Download). In addition, install and update the latest version of `requests` Python package:
+`azdata` is a command-line utility written in Python that enables cluster administrators to bootstrap and manage data resources via REST APIs. The minimum Python version required is v3.6. `pip` is required to download and install the `azdata` tool. The instructions below provide examples for Windows, Linux (Ubuntu), and macOS/OS X. For installing Python on other platforms, see the [Python documentation](https://wiki.python.org/moin/BeginnersGuide/Download). In addition, install and update to the latest version of the `requests` Python package:
 
 ```bash
 pip3 install -U requests
 ```
 
 ## <a id="windows"></a> Windows `azdata` installation
 
-1. On a Windows client, download the necessary Python package from [https://www.python.org/downloads/](https://www.python.org/downloads/). For python 3.5.3 and later, pip3 is also installed when you install Python.
+1. On a Windows client, download the necessary Python package from [https://www.python.org/downloads/](https://www.python.org/downloads/). For Python 3.6 and later, pip3 is also installed when you install Python.
 
 > [!TIP]
 > When installing Python3, select to add Python to your `PATH`. If you do not, you can later find where pip3 is located and manually add it to your `PATH`.
````
```diff
@@ -56,7 +56,7 @@ pip3 install -U requests
 
 ## <a id="linux"></a> Linux `azdata` installation
 
-On Linux, you must install Python 3.5 and then upgrade pip. The following example shows the commands that would work for Ubuntu. For other Linux platforms, see the [Python documentation](https://wiki.python.org/moin/BeginnersGuide/Download).
+On Linux, you must install Python 3.6 and then upgrade pip. The following example shows the commands that would work for Ubuntu. For other Linux platforms, see the [Python documentation](https://wiki.python.org/moin/BeginnersGuide/Download).
 
 1. Install the necessary Python packages:
```
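As a quick sanity check before running the install commands above, the documented Python 3.6 floor can be verified with a short shell sketch (an illustration, not part of the official docs; it assumes `python3` is on `PATH`):

```shell
# Verify the local Python meets the documented minimum (3.6) before
# installing azdata with pip. The version floor comes from the
# prerequisites above; the check itself is a generic version compare.
MIN="3.6"
VER="$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')"
# sort -V orders version strings; if MIN sorts first, VER is new enough.
if [ "$(printf '%s\n' "$MIN" "$VER" | sort -V | head -n1)" = "$MIN" ]; then
  echo "Python $VER OK; continue with: pip3 install -U requests"
else
  echo "Python $VER is older than $MIN; upgrade before installing azdata" >&2
fi
```

The same pattern works on Windows under Git Bash or WSL; on plain PowerShell, compare `(python --version)` output instead.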

docs/big-data-cluster/big-data-cluster-faq.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -232,7 +232,7 @@ sections:
       SQL Server ML Services support policy is same as that of SQL Server, except that every major release comes with a new runtime version. SparkML library itself is open source software (OSS). We do package many OSS components in Big Data Cluster and this is supported by Microsoft.
   - question: Is Red Hat Enterprise Linux 8 (RHEL8) supported platform for SQL Server Big Data Clusters?
     answer: |
-      Not at this time. See here for the [supported platforms](release-notes-big-data-cluster.md#supported-platforms).
+      Not at this time. See here for the [tested configurations](release-notes-big-data-cluster.md#tested-configurations).
   - name: Tools
     questions:
       - question: Are the notebooks available in Azure Data Studio essentially Jupyter notebooks?
```

docs/big-data-cluster/concept-application-deployment.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -55,7 +55,7 @@ When an application is executed, the Kubernetes service for the application prox
 
 ## <a id="app-deploy-security"></a> Security considerations for applications deployments on OpenShift
 
-SQL Server 2019 CU5 enables support for BDC deployment on Red Hat OpenShift and an updated security model for BDC so privileged containers no longer required. In addition to non-privileged, containers are running as non-root user by default for all new deployments using [SQL Server 2019 CU5](release-notes-big-data-cluster.md#cu5).
+SQL Server 2019 CU5 enables support for BDC deployment on Red Hat OpenShift and an updated security model for BDC, so privileged containers are no longer required. In addition to being non-privileged, containers run as a non-root user by default for all new deployments using [SQL Server 2019 CU5](release-notes-cumulative-updates-history.md#cu5).
 
 At the time of the CU5 release, the setup step of the applications deployed with [app deploy](app-create.md) interfaces will still run as *root* user. This is required since during setup extra packages that application will use are installed. Other user code deployed as part of the application will run as low privilege user.
```

docs/big-data-cluster/deploy-offline.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -17,7 +17,7 @@ This article describes how to perform an offline deployment of a [!INCLUDE[big-d
 
 ## Prerequisites
 
-- Docker Engine 1.8+ on any supported Linux distribution or Docker for Mac/Windows. For more information, see [Install Docker](https://docs.docker.com/engine/installation/).
+- Docker Engine on any supported Linux distribution or Docker for Mac/Windows. Validate the engine version against the tested configurations in the [SQL Server Big Data Clusters release notes](release-notes-big-data-cluster.md). For more information, see [Install Docker](https://docs.docker.com/engine/installation/).
 
 ## Load images into a private repository
```

````diff
@@ -26,7 +26,7 @@ The following steps describe how to pull the big data cluster container images f
 > [!TIP]
 > The following steps explain the process. However, to simplify the task, you can use the [automated script](#automated) instead of manually running these commands.
 
-1. Pull the big data cluster container images by repeating the following command. Replace `<SOURCE_IMAGE_NAME>` with each [image name](#images). Replace `<SOURCE_DOCKER_TAG>` with the tag for the big data cluster release, such as **2019-GDR1-ubuntu-16.04**.
+1. Pull the big data cluster container images by repeating the following command. Replace `<SOURCE_IMAGE_NAME>` with each [image name](#images). Replace `<SOURCE_DOCKER_TAG>` with the tag for the big data cluster release, such as **2019-CU12-ubuntu-20.04**.
 
    ```PowerShell
   docker pull mcr.microsoft.com/mssql/bdc/<SOURCE_IMAGE_NAME>:<SOURCE_DOCKER_TAG>
````
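The manual pull-tag-push sequence described above can be sketched as a dry-run loop that only prints the docker commands for review. The source registry and the **2019-CU12-ubuntu-20.04** tag come from the page; `registry.example.com` and the placeholder image names are assumptions to substitute with your private registry and the full image list:

```shell
# Dry-run sketch of the offline image copy: print the docker commands
# for each image instead of running them. SOURCE_* values follow the
# doc above; LOCAL_REGISTRY and IMAGES are placeholders for illustration.
SOURCE_REGISTRY="mcr.microsoft.com/mssql/bdc"
SOURCE_TAG="2019-CU12-ubuntu-20.04"              # tag for this release, per the doc
LOCAL_REGISTRY="registry.example.com/mssql/bdc"  # assumption: your private registry
IMAGES="image-one image-two"                     # replace with the image names listed in the doc
for IMG in $IMAGES; do
  echo "docker pull $SOURCE_REGISTRY/$IMG:$SOURCE_TAG"
  echo "docker tag $SOURCE_REGISTRY/$IMG:$SOURCE_TAG $LOCAL_REGISTRY/$IMG:$SOURCE_TAG"
  echo "docker push $LOCAL_REGISTRY/$IMG:$SOURCE_TAG"
done
```

Once the image list matches your release, drop the `echo` wrappers (or pipe the output to `sh`) to perform the copy, or use the automated script the docs mention.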

docs/big-data-cluster/deploy-openshift.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -4,8 +4,8 @@ titleSuffix: SQL Server Big Data Cluster
 description: Learn how to upgrade SQL Server Big Data Clusters on OpenShift.
 author: mihaelablendea
 ms.author: mihaelab
-ms.reviewer: mikeray
-ms.date: 06/22/2020
+ms.reviewer: dacoelho
+ms.date: 07/29/2021
 ms.topic: conceptual
 ms.prod: sql
 ms.technology: big-data-cluster
```
```diff
@@ -21,7 +21,7 @@ This article explains how to deploy a SQL Server Big Data Cluster on OpenShift e
 > For a quick way to bootstrap a sample environment using ARO and then BDC deployed on this platform, you can use the Python script available [here](quickstart-big-data-cluster-deploy-aro.md).
 
-SQL Server 2019 CU5 introduces support for SQL Server Big Data Clusters on OpenShift. You can deploy big data clusters to on-premises OpenShift or on Azure Red Hat OpenShift (ARO). Deployment requires OpenShift cluster version minimum 4.3. While the deployment workflow is similar to deploying in other Kubernetes based platforms ([kubeadm](deploy-with-kubeadm.md) and [AKS](deploy-on-aks.md)), there are some differences. The difference is mainly in relation to running applications as non-root user and the security context used for the namespace BDC is deployed in.
+You can deploy big data clusters to on-premises OpenShift or on Azure Red Hat OpenShift (ARO). Validate the OpenShift CRI-O version against the tested configurations in the [SQL Server Big Data Clusters release notes](release-notes-big-data-cluster.md). While the deployment workflow is similar to deploying in other Kubernetes-based platforms ([kubeadm](deploy-with-kubeadm.md) and [AKS](deploy-on-aks.md)), there are some differences. The differences mainly relate to running applications as a non-root user and the security context used for the namespace BDC is deployed in.
 
 For deploying the OpenShift cluster on-premises see the [Red Hat OpenShift documentation](https://docs.openshift.com/container-platform/4.3/release_notes/ocp-4-3-release-notes.html#ocp-4-3-installation-and-upgrade). For ARO deployments see the [Azure Red Hat OpenShift](/azure/openshift/intro-openshift).
```

````diff
@@ -112,7 +112,7 @@ This article outlines deployment steps that are specific to the OpenShift platfo
    azdata bdc config init --source aro-dev-test --target custom-openshift
    ```
 
-1. Customize the configuration files control.json and bdc.json. Here are some additional resources that guide you through the customizations supported for various use cases:
+1. Customize the configuration files control.json and bdc.json. Here are some additional resources that guide you through the customizations for various use cases:
 
    - [Storage](concept-data-persistence.md)
    - [AD related settings](active-directory-deploy.md)
````
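The CRI-O validation the updated text asks for can be done by reading each node's reported container runtime. The `kubectl` JSONPath below uses the standard `status.nodeInfo.containerRuntimeVersion` field; the sample string stands in for live cluster output, so take the actual tested version from the release notes:

```shell
# Parse a node's containerRuntimeVersion string (e.g. "cri-o://1.18.2")
# into a bare version number. The sample value stands in for:
#   kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.containerRuntimeVersion}'
RUNTIME="cri-o://1.18.2"            # sample value for illustration
CRIO_VERSION="${RUNTIME#cri-o://}"  # strip the runtime prefix
echo "CRI-O version: $CRIO_VERSION"
```

`kubectl get nodes -o wide` shows the same information in its CONTAINER-RUNTIME column if you prefer a human-readable view.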

docs/big-data-cluster/deployment-guidance.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -4,7 +4,7 @@ titleSuffix: SQL Server Big Data Clusters
 description: Learn how to deploy SQL Server Big Data Clusters on Kubernetes.
 author: WilliamDAssafMSFT
 ms.author: wiassaf
-ms.reviewer:
+ms.reviewer: dacoelho
 ms.date: 06/22/2020
 ms.topic: conceptual
 ms.prod: sql
```
```diff
@@ -21,9 +21,9 @@ SQL Server Big Data Cluster is deployed as docker containers on a Kubernetes clu
 - Install the cluster configuration tool [!INCLUDE [azure-data-cli-azdata](../includes/azure-data-cli-azdata.md)] on your client machine.
 - Deploy a SQL Server big data cluster in a Kubernetes cluster.
 
-## Supported platforms
+## Tested configurations
 
-See [Supported platforms](release-notes-big-data-cluster.md#supported-platforms) for a complete list of the various Kubernetes platforms validated for deploying SQL Server Big Data Clusters.
+See [Tested configurations](release-notes-big-data-cluster.md#tested-configurations) for a complete list of the various Kubernetes platforms validated for deploying SQL Server Big Data Clusters.
 
 ### SQL Server editions
```

docs/big-data-cluster/deployment-script-single-node-kubeadm.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -20,7 +20,7 @@ In this tutorial, you use a sample bash deployment script to deploy a single nod
 
 ## Prerequisites
 
-- A vanilla Ubuntu 18.04 or 16.04 **server** virtual or physical machine. All dependencies are set up by the script, and you run the script from within the VM.
+- A vanilla Ubuntu 20.04 **server** virtual or physical machine. All dependencies are set up by the script, and you run the script from within the VM.
 
 > [!NOTE]
 > Using Azure Linux VMs is not yet supported.
```
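Since the prerequisite is now a single Ubuntu release, a small guard at the top of the deployment script can warn early on other distributions. This sketch reads the standard `/etc/os-release` file; the `20.04` value comes from the updated prerequisite above, and the guard itself is illustrative, not part of the sample script:

```shell
# Warn early if this machine is not the Ubuntu release the tutorial
# targets. /etc/os-release is present on systemd-based distributions;
# VERSION_ID holds the release number (e.g. "20.04").
EXPECTED="20.04"
RELEASE="$( (. /etc/os-release 2>/dev/null && echo "$VERSION_ID") )"
if [ "$RELEASE" = "$EXPECTED" ]; then
  echo "Ubuntu $RELEASE detected; OK to run the deployment script"
else
  echo "Expected Ubuntu $EXPECTED, found ${RELEASE:-unknown}" >&2
fi
```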

docs/big-data-cluster/reference-open-source-software.md

Lines changed: 19 additions & 19 deletions
```diff
@@ -19,25 +19,25 @@ A SQL Server Big Data Cluster includes some containers that are developed by ope
 
 ## Project list
 
-The table below shows the open-source projects in use as of [!INCLUDE [sssql19-md](../includes/sssql19-md.md)] CU8 and prior, and CU9 and later.
-
-| Project | CU8 and prior | Beginning with CU9 |
-|--|--|--|
-| [collectd](https://collectd.org/) | 5.8.1 | 5.12 |
-| [InfluxDB](https://www.influxdata.com) | 1.7.6 | 1.8.3 |
-| [Elasticsearch](https://www.elastic.co/) | 7.0.1 | 7.9.1 |
-| [Fluent Bit](https://docs.fluentbit.io/manual/about/what-is-fluent-bit) | 1.1.1 | 1.6.3 |
-| [Grafana](https://grafana.com/) | 6.3.6 | 7.3.1 |
-| Hadoop <br/>[HDFS DataNode](concept-storage-pool.md)<br/>[HDFS NameNode](https://cwiki.apache.org/confluence/display/HADOOP2/NameNode) |3.1.3+|3.3.0|
-| [Hive (Metastore)](https://hive.apache.org/) |2.3.7|2.3.7<br/>3.0.0 (standalone)<br/>3.1.2 (hive)|
-| [Kibana](https://www.elastic.co/kibana) | 7.0.1 | 7.9.1 |
-| [Knox](https://knox.apache.org/) |1.2.0|1.4.0|
-| [Livy](https://livy.apache.org/) |0.6.0|0.7.0|
-| [opendistro-elasticsearch-security](https://www.elastic.co/what-is/elastic-stack-security) | 1.0.0.1 | 1.10.1.0 |
-| [Openresty (Nginx)](https://openresty.org/) | 1.15.8 | 1.17.8.2 |
-| [Spark](configure-spark-hdfs.md) |2.4.6+|2.4.10|
-| [Telegraf](https://docs.influxdata.com/telegraf/) | 1.10.3 | 1.16.1 |
-| [ZooKeeper](https://cwiki.apache.org/confluence/display/zookeeper) |3.5.8|3.6.2|
+The table below shows the open-source projects in use on [!INCLUDE [sssql19-md](../includes/sssql19-md.md)]. For the exact version used on each cumulative update, see the [SQL Server Big Data Clusters platform release notes](release-notes-big-data-cluster.md).
+
+| Project |
+|--|
+| [collectd](https://collectd.org/) |
+| [InfluxDB](https://www.influxdata.com) |
+| [Elasticsearch](https://www.elastic.co/) |
+| [Fluent Bit](https://docs.fluentbit.io/manual/about/what-is-fluent-bit) |
+| [Grafana](https://grafana.com/) |
+| Hadoop <br/>[HDFS DataNode](concept-storage-pool.md)<br/>[HDFS NameNode](https://cwiki.apache.org/confluence/display/HADOOP2/NameNode) |
+| [Hive (Metastore)](https://hive.apache.org/) |
+| [Kibana](https://www.elastic.co/kibana) |
+| [Knox](https://knox.apache.org/) |
+| [Livy](https://livy.apache.org/) |
+| [opendistro-elasticsearch-security](https://www.elastic.co/what-is/elastic-stack-security) |
+| [Openresty (Nginx)](https://openresty.org/) |
+| [Spark](configure-spark-hdfs.md) |
+| [Telegraf](https://docs.influxdata.com/telegraf/) |
+| [ZooKeeper](https://cwiki.apache.org/confluence/display/zookeeper) |
 
 ## Next steps
```
