
Commit f20746c

Author: stevestein

Merge branch 'release-sqlseattle' of https://github.com/MicrosoftDocs/sql-docs-pr into ssms-18

2 parents: efe4139 + 1892bcd

92 files changed

Lines changed: 941 additions & 685 deletions

docs/advanced-analytics/java/extension-java.md

Lines changed: 2 additions & 0 deletions
@@ -77,6 +77,8 @@ Install the JDK under the default /Program Files/ folder if you want to avoid ha
 > [!Note]
 > The authorization and isolation model for extensions has changed in this release. For more information, see [Differences in a SQL Server 2019 Machine Learning Services installation](../install/sql-machine-learning-services-ver15.md).

+<a name="perms-nonwindows"></a>
+
 ### Grant access to non-default JDK folder (Windows only)

 You can skip this step if you installed the JDK/JRE in the default folder. For a non-default folder installation, run the following PowerShell scripts to grant the **SQLRUserGroup** and SQL Server service accounts (in ALL_APPLICATION_PACKAGES) access to the JVM and the Java classpath.
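The PowerShell scripts referenced above are the supported route for granting access. As a troubleshooting aid, you could also run a small Java probe under the service account to confirm it can actually reach a non-default JDK folder. This is an illustrative sketch only; `JvmAccessCheck` is not part of the documentation:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class JvmAccessCheck {
    // Returns true if the account running this process can read and
    // traverse the given folder -- the same access SQLRUserGroup and the
    // service accounts need on the JDK folder and the classpath.
    public static boolean canReadAndExecute(String folder) {
        Path p = Paths.get(folder);
        return Files.isDirectory(p) && Files.isReadable(p) && Files.isExecutable(p);
    }

    public static void main(String[] args) {
        // java.home points at the running JRE/JDK, which must be accessible.
        String javaHome = System.getProperty("java.home");
        System.out.println(javaHome + " accessible: " + canReadAndExecute(javaHome));
    }
}
```

Run it as the account in question; `false` for the JDK folder indicates the grants above are still missing.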

docs/advanced-analytics/java/java-first-sample.md

Lines changed: 14 additions & 12 deletions
@@ -12,7 +12,7 @@ manager: cgronlun
 monikerRange: ">=sql-server-ver15||=sqlallproducts-allversions"
 ---

-# SQL Server Java sample
+# SQL Server Java sample walkthrough

 This example demonstrates a Java class that receives two columns (ID and text) from SQL Server and returns two columns back to SQL Server (ID and ngram). For a given ID and string combination, the code generates permutations of ngrams (substrings), returning those permutations along with the original ID. The length of the ngram is defined by a parameter sent to the Java class.

@@ -24,7 +24,7 @@ This example demonstrates a Java class that receives two columns (ID and text) f

 + Java SE Development Kit (JDK) 1.10 on Windows, or JDK 1.8 on Linux.

-A Java IDE is helpful for creating and compiling classes. If you don't have one, we recommend [Visual Studio Code](https://code.visualstudio.com/download) with the [Java extension](https://code.visualstudio.com/docs/languages/java) (not related to the SQL Server Java extension).
+Command-line compilation using **javac** is sufficient for this tutorial.

 ## 1 - Load sample data

@@ -165,7 +165,7 @@ public class InputRow {

 ## 4 - Class OutputRow.java

-The third and final class is called **OutputRow.java**. Copy the code into the class and save it in the same location as the others.
+The third and final class is called **OutputRow.java**. Copy the code and save it as OutputRow.java in the same location as the others.

 ```java
 package pkg;
@@ -187,21 +187,23 @@ public class OutputRow {

 ## 5 - Compile

-Once you have your classes ready, run javac to compile them into ".class" files (`javac Ngram.java InputRow.java OutputRow.java). You should get three .class files for this sample (Ngram.class, InputRow.class, and OutputRow.class).
+Once you have your classes ready, run javac to compile them into ".class" files (`javac Ngram.java InputRow.java OutputRow.java`). You should get three .class files for this sample (Ngram.class, InputRow.class, and OutputRow.class).

-On the SQL Server computer, place these files in a subfolder called "pkg" in your classpath location. For example, on Linux, if the classpath location is called '/home/myclasspath/', then the .class files should be in '/home/myclasspath/pkg'. In this sample, the CLASSPATH provided in the sp_execute_external_script is '/home/myclasspath/' assuming Linux.
+Place the compiled code into a subfolder called "pkg" in the classpath location. If you are working on a development workstation, this step is where you copy the files to the SQL Server computer.

-On Windows, set the value to a Windows folder path 'C:\myJavaCode' and then create a subfolder called "pkg" to contain the compiled classes. In this CTP, use a relatively shallow folder structure to simplify permissions.
+The classpath is the location of compiled code. For example, on Linux, if the classpath is '/home/myclasspath/', then the .class files should be in '/home/myclasspath/pkg'. In the example script in step 7, the CLASSPATH provided in sp_execute_external_script is '/home/myclasspath/' (assuming Linux).

-For instructions on how to set the classpath, see [Set CLASSPATH](howto-call-java-from-sql.md#set-classpath).
+On Windows, we recommend using a relatively shallow folder structure, one or two levels deep, to simplify permissions. For example, your classpath might look like 'C:\myJavaCode' with a subfolder called '\pkg' containing the compiled classes.
+
+For more information about classpath, see [Set CLASSPATH](howto-call-java-from-sql.md#set-classpath).

 ### Using .jar files

-If you plan to package your classes and dependencies into .jar files, provide the full path to the .jar file in the sp_execute_external_script CLASSPATH parameter. For example, if the jar file is called 'ngram.jar', the CLASSPATH will be '/home/myclasspath/ngram.jar'
+If you plan to package your classes and dependencies into .jar files, provide the full path to the .jar file in the sp_execute_external_script CLASSPATH parameter. For example, if the jar file is called 'ngram.jar', the CLASSPATH will be '/home/myclasspath/ngram.jar' on Linux.

 ## 6 - Permissions

-Grant permissions on the compiled code so that SQL Server Launchpad service and AppContainers can execute it.
+Script execution only succeeds if the process identities have access to your code.

 ### On Linux

@@ -220,19 +222,19 @@ Grant 'Read and Execute' permissions to **SQLRUserGroup** and the **All applicat
 5. Enter **SQLRUserGroup**, check the name, and then click OK to add the group.
 6. Enter **all application packages**, check the name, and then click OK to add. If the name doesn't resolve, revisit the Locations step. The SID is local to your machine.

-Make sure both security identities have 'Read and Execute' permissions on the folder and on the "pkg" subfolder.
+Make sure both security identities have 'Read and Execute' permissions on the folder and "pkg" subfolder.

 <a name="call-method"></a>

 ## 7 - Call *getNgrams()*

-To call the code from SQL Server, specify the Java method *getNgrams()* from the "script" parameter of sp_execute_external_script. This method belongs to a package called "pkg" and a class file called **Ngram.java**.
+To call the code from SQL Server, specify the Java method **getNgrams()** in the "script" parameter of sp_execute_external_script. This method belongs to a package called "pkg" and a class file called **Ngram.java**.

 This example passes the CLASSPATH parameter to provide the path to the Java files. It also uses "params" to pass a parameter to the Java class. Make sure that the classpath does not exceed 30 characters. If it does, increase the value in the script below.

 + On Linux, run the following code in SQL Server Management Studio or another tool used for running Transact-SQL.

-+ On Windows, change **@myClassPath** to N'C:\myJavaCode\' (assuming it's the parent folder of \pkg) before executing the query.
++ On Windows, change **@myClassPath** to N'C:\myJavaCode\' (assuming it's the parent folder of \pkg) before executing the query in SQL Server Management Studio or another tool.

 ```sql
 DECLARE @myClassPath nvarchar(30)
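The ngram generation this sample describes can be sketched in isolation. The following is a minimal standalone sketch with a hypothetical class name; it is not the tutorial's actual Ngram.java, which additionally carries the row ID and the SQL Server extension plumbing:

```java
import java.util.ArrayList;
import java.util.List;

public class NgramSketch {
    // For a given input string, return every contiguous substring of
    // length n, in order -- the "ngrams" returned alongside the original ID.
    public static List<String> getNgrams(String text, int n) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i + n <= text.length(); i++) {
            result.add(text.substring(i, i + n));
        }
        return result;
    }

    public static void main(String[] args) {
        // "hello" with n = 3 yields [hel, ell, llo]
        System.out.println(getNgrams("hello", 3));
    }
}
```

The ngram length `n` corresponds to the parameter passed through "params" in sp_execute_external_script.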

docs/analytics-platform-system/toc.yml

Lines changed: 8 additions & 9 deletions
@@ -1,13 +1,12 @@
-- name: About this release
+- name: What's new in APS
+  href: whats-new-analytics-platform-system.md
   items:
-  - name: What's new
-    items:
-    - name: APS CU7.1
-      href: whats-new-analytics-platform-system.md#h2-aps-cu7.1
-    - name: APS AU7
-      href: whats-new-analytics-platform-system.md#h2-aps-au7
-    - name: APS 2016
-      href: whats-new-analytics-platform-system.md#h2-aps-au6
+  - name: APS CU7.1
+    href: whats-new-analytics-platform-system.md#h2-aps-cu7.1?view=aps-pdw-2016-au7
+  - name: APS AU7
+    href: whats-new-analytics-platform-system.md#h2-aps-au7?view=aps-pdw-2016-au7
+  - name: APS 2016
+    href: whats-new-analytics-platform-system.md#h2-aps-au6?view=aps-pdw-2016
 - name: Architecture
   items:
   - name: Parallel Data Warehouse overview

docs/analytics-platform-system/whats-new-analytics-platform-system.md

Lines changed: 4 additions & 1 deletion
@@ -16,6 +16,7 @@ See what’s new in the latest Appliance Updates for Microsoft® Analytics Platf
 ::: moniker range=">= aps-pdw-2016-au7 || = sqlallproducts-allversions"
 <a name="h2-aps-cu7.1"></a>
 ## APS CU7.1
+Release date - July 2018

 ### DBCC commands do not consume concurrency slots (behavior change)
 APS supports a subset of the T-SQL [DBCC commands](https://docs.microsoft.com/sql/t-sql/database-console-commands/dbcc-transact-sql) such as [DBCC DROPCLEANBUFFERS](https://docs.microsoft.com/sql/t-sql/database-console-commands/dbcc-dropcleanbuffers-transact-sql). Previously, these commands would consume a [concurrency slot](https://docs.microsoft.com/en-us/sql/analytics-platform-system/workload-management?view=aps-pdw-2016-au7#concurrency-slots), reducing the number of user loads/queries that could be executed. The `DBCC` commands now run in a local queue that does not consume a user concurrency slot, improving overall query execution performance.

@@ -34,7 +35,9 @@ We have upgraded to SQL Server 2016 SP2 CU2 with APS CU7.1. The upgrade fixes so

 <a name="h2-aps-au7"></a>
 ## APS AU7
-APS 2016 is a prerequisite to upgrade to AU7. The following are new in APS AU7:
+Release date - May 2018
+
+APS 2016 is a prerequisite to upgrade to AU7. The following are new features in APS AU7:

 ### Auto-create and auto-update statistics
 APS AU7 creates and updates statistics automatically, by default. To update statistics settings, administrators can use a new feature switch menu item in the [Configuration Manager](appliance-configuration.md#CMTasks). The [feature switch](appliance-feature-switch.md) controls the auto-create, auto-update, and asynchronous update behavior of statistics. You can also update statistics settings with the [ALTER DATABASE (Parallel Data Warehouse)](../t-sql/statements/alter-database-transact-sql.md?tabs=sqlpdw) statement.

docs/big-data-cluster/big-data-cluster-overview.md

Lines changed: 8 additions & 20 deletions
@@ -27,13 +27,13 @@ By leveraging [SQL Server PolyBase](../relational-databases/polybase/polybase-gu

 ### Data lake

-A SQL Big Data Cluster includes a scalable HDFS [storage pool](concept-storage-pool.md). This can be used to directly store big data, potentially ingested from multiple external sources. Once in the Big Data Cluster, you can analyze and query the data and combine it with your high-value relational data.
+A SQL Big Data Cluster includes a scalable HDFS *storage pool*. This can be used to directly store big data, potentially ingested from multiple external sources. Once in the Big Data Cluster, you can analyze and query the data and combine it with your high-value relational data.

 ![Data lake](media/big-data-cluster-overview/data-lake.png)

 ### Scale-out data mart

-SQL Big Data Clusters provides scale-out compute and storage to improve the performance of analyzing any data. Data from a variety of sources can be ingested and distributed across [data pool](concept-data-pool.md) nodes for further analysis.
+SQL Big Data Clusters provides scale-out compute and storage to improve the performance of analyzing any data. Data from a variety of sources can be ingested and distributed across *data pool* nodes for further analysis.

 ![Data mart](media/big-data-cluster-overview/data-mart.png)

@@ -60,7 +60,7 @@ You can use Azure Data Studio to perform a variety of tasks on the Big Data Clus

 ## <a id="architecture"></a> Architecture

-A SQL Big Data Cluster is a cluster of Linux nodes orchestrated by [Kubernetes](https://kubernetes.io/docs/concepts/).
+A SQL Big Data Cluster is a cluster of Linux nodes orchestrated by [Kubernetes](https://kubernetes.io/docs/concepts/).

 ### Kubernetes concepts

@@ -82,29 +82,17 @@ Nodes in the cluster are arranged into three logical planes: the control plane,

 ### <a id="controlplane"></a> Control plane

-The control plane provides management and security for the cluster. It contains the Kubernetes master, the [SQL Server master instance](concept-master-instance.md), and other cluster-level services such as the Hive Metastore and Spark Driver.
+The control plane provides management and security for the cluster. It contains the Kubernetes master, the *SQL Server master instance*, and other cluster-level services such as the Hive Metastore and Spark Driver.

 ### <a id="computeplane"></a> Compute plane

-The compute plane provides computational resources to the cluster. It contains nodes running SQL Server on Linux pods. The pods in the compute plane are divided into [compute pools](concept-compute-pool.md) for specific processing tasks. A compute pool can act as a [PolyBase](../relational-databases/polybase/polybase-guide.md) scale-out group for distributed queries over different data sources—such as HDFS, Oracle, MongoDB, or Teradata.
+The compute plane provides computational resources to the cluster. It contains nodes running SQL Server on Linux pods. The pods in the compute plane are divided into *compute pools* for specific processing tasks. A compute pool can act as a [PolyBase](../relational-databases/polybase/polybase-guide.md) scale-out group for distributed queries over different data sources—such as HDFS, Oracle, MongoDB, or Teradata.

 ### <a id="dataplane"></a> Data plane

-The data plane is used for data persistence and caching. It contains the SQL data pool, and storage nodes. The SQL [data pool](concept-data-pool.md) consists of one or more nodes running SQL Server on Linux. It is used to ingest data from SQL queries or Spark jobs. SQL Big Data Cluster data marts are persisted in the data pool. The [storage pool](concept-storage-pool.md) consists of storage nodes comprised of SQL Server on Linux, Spark, and HDFS. All the storage nodes in a SQL Big Data cluster are members of an HDFS cluster.
-
-## Get started
-
-SQL Big Data Clusters is first available as a limited public preview through the SQL Server 2019
-Early Adoption Program. To request access, register [here](https://aka.ms/eapsignup), and specify your interest to try Big Data Clusters. Microsoft will triage all requests and respond as soon as possible.
+The data plane is used for data persistence and caching. It contains the SQL data pool and storage nodes. The SQL data pool consists of one or more nodes running SQL Server on Linux. It is used to ingest data from SQL queries or Spark jobs. SQL Big Data Cluster data marts are persisted in the data pool. The storage pool consists of storage nodes comprised of SQL Server on Linux, Spark, and HDFS. All the storage nodes in a SQL Big Data cluster are members of an HDFS cluster.

 ## Next steps

-Learn more about SQL Server Big Data Clusters in the following articles:
-
-[HDFS](concept-hdfs.md)
-[Spark](concept-spark.md)
-[Controller](concept-controller.md)
-[Master instance](concept-master-instance.md)
-[Compute pool](concept-compute-pool.md)
-[Data pool](concept-data-pool.md)
-[Storage pool](concept-storage-pool.md)
+SQL Big Data Clusters is first available as a limited public preview through the SQL Server 2019
+Early Adoption Program. To request access, register [here](https://aka.ms/eapsignup), and specify your interest to try Big Data Clusters. Microsoft will triage all requests and respond as soon as possible.

docs/big-data-cluster/concept-compute-pool.md

Lines changed: 0 additions & 31 deletions
This file was deleted.
