docs/big-data-cluster/concept-application-deployment.md
2 additions & 2 deletions
@@ -102,7 +102,7 @@ The following are the target scenarios for app deploy:
In app deploy, the BDC Python runtime allows Python applications inside the big data cluster to address a variety of use cases, such as machine learning inferencing, API serving, and more.
- Python 3.5 for Ubuntu 16.04 and Python 3.8 for Ubuntu 20.04.
+ The app deploy Python runtime uses Python 3.8 on SQL Server Big Data Clusters CU10+.
In app deploy, `spec.yaml` is where you provide the information the controller needs to deploy your application. The following fields can be specified:
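As an illustrative sketch of the shape of such a file (the field names and all values below are assumptions drawn from typical app-deploy examples, not the complete field reference), a minimal Python app spec might look like:

```shell
# Write a minimal, hypothetical spec.yaml for a Python app-deploy app.
# Every value here (name, version, entrypoint, ...) is illustrative.
cat > spec.yaml <<'EOF'
name: add-app          # name the app is deployed under
version: v1            # app version
runtime: Python        # app deploy runtime to use
src: ./add-app         # directory containing the app source
entrypoint: add.py     # script executed on each run
replicas: 1            # number of pod replicas
poolsize: 1            # number of ready runtimes kept warm
inputs:                # typed input parameters injected into the script
  x: int
  y: int
output:                # typed outputs read back after the run
  result: int
EOF
```

The controller reads this file at deployment time; adjust the fields to your own application.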
@@ -146,7 +146,7 @@ The app deploy Python runtime doesn't support the scheduling scenario. Once a Python app is
In app deploy, the BDC R runtime allows R applications inside the big data cluster to address a variety of use cases, such as machine learning inferencing, API serving, and more.
- The app deploy R runtime supports Microsoft R Open (MRO) 3.5.2.
+ The app deploy R runtime uses Microsoft R Open (MRO) version 3.5.2 on SQL Server Big Data Clusters CU10+.
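Both runtimes described above are exercised through the same `azdata app` workflow. As a hedged sketch against a live cluster (the app name, version, and inputs are hypothetical, and an `azdata login` session is assumed):

```shell
# Deploy the app described by ./my-app/spec.yaml to the cluster
azdata app create --spec ./my-app

# Invoke the deployed app with its declared inputs
azdata app run --name my-app --version v1 --inputs x=1,y=2

# Check the deployment state of all apps
azdata app list
```

These commands require a reachable Big Data Cluster, so they are shown for orientation rather than as a runnable snippet.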
docs/big-data-cluster/configure-bdc-postdeployment.md
18 additions & 3 deletions
@@ -4,8 +4,8 @@ titleSuffix: SQL Server big data clusters
description: Big Data Clusters Post-Deployment Configuration Overview
author: MikeRayMSFT
ms.author: mikeray
- ms.reviewer: rahul.ajmera
- ms.date: 02/11/2021
+ ms.reviewer: dacoelho
+ ms.date: 08/04/2021
ms.topic: reference
ms.prod: sql
ms.technology: big-data-cluster
@@ -24,37 +24,51 @@ Cluster, service, and resource scoped settings for Big Data Clusters can be conf
## Step by Step: Configure BDC to meet your Spark workload requirements
### View the current configurations of the Big Data Cluster Spark service
The following example shows how to view the user-configured settings of the Spark service. Optional parameters let you also view system-managed settings, all configurable settings, and pending settings. See the [`azdata bdc spark` statement](../azdata/reference/reference-azdata-bdc-spark-statement.md) reference for more information.
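The viewing commands this paragraph describes can be sketched as follows (flag names are taken from the commands used later on this page; a live `azdata` session against a cluster is assumed):

```shell
# User-configured settings of the Spark service
azdata bdc spark settings show

# Include system-managed settings and extra detail
azdata bdc spark settings show --include-details

# Only the settings changes that are staged but not yet applied
azdata bdc spark settings show --filter-option=pending --include-details
```

As with the other `azdata bdc` examples here, these commands only run against a deployed Big Data Cluster.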
### Change the default number of cores and memory for the Spark driver across all resources with Spark (that is, for the Spark service)
Update the default number of cores to 2 and default memory to 7424m for the Spark service.
```bash
azdata bdc spark settings set --settings spark-defaults-conf.spark.driver.cores=2,spark-defaults-conf.spark.driver.memory=7424m
```
### Change the default number of cores and memory for the Spark executors in the Storage Pool
Update the default number of executor cores to 4 for the Storage Pool.
```bash
azdata bdc spark settings set --settings spark-defaults-conf.spark.executor.cores=4 --resource=storage-0
```
+ ### Configure additional paths to the default classpath of Spark applications
+
+ The `/opt/hadoop/share/hadoop/tools/lib/` path contains several libraries that your Spark applications can use, but it is not loaded into the classpath of Spark applications by default. To enable it, apply the following configuration:
+
+ ```bash
+ azdata bdc hdfs settings set --settings hadoop-env.HADOOP_CLASSPATH="/opt/hadoop/share/hadoop/tools/lib/*"
+ ```
### View the pending settings changes staged in the big data cluster
View the pending settings changes for the Spark service only and across the entire big data cluster.
#### Pending Spark Service Settings
```bash
azdata bdc spark settings show --filter-option=pending --include-details
```
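Staged changes take effect only once they are applied. As a hedged sketch of the final step in this workflow (assuming the apply subcommand of `azdata bdc settings`; applying triggers a rolling update of the affected pods):

```shell
# Apply all pending settings changes across the big data cluster
azdata bdc settings apply
```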
For more information about [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ss-nover.md)], see [Introducing [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ver15.md)]](big-data-cluster-overview.md).