
Commit e65afba

Merge remote-tracking branch 'upstream/release-2019-cu12' into release-2019-cu
2 parents 05e1435 + 9259164 commit e65afba

12 files changed

Lines changed: 68 additions & 49 deletions

docs/connect/odbc/linux-mac/install-microsoft-odbc-driver-sql-server-macos.md

Lines changed: 2 additions & 2 deletions

@@ -1,7 +1,7 @@
 ---
 title: Install the Microsoft ODBC driver for SQL Server (macOS)
 description: Learn how to install the Microsoft ODBC Driver for SQL Server on macOS clients to enable database connectivity.
-ms.date: 07/30/2021
+ms.date: 08/02/2021
 ms.prod: sql
 ms.prod_service: connectivity
 ms.technology: connectivity
@@ -19,7 +19,7 @@ This article explains how to install the Microsoft ODBC Driver for SQL Server on
 This article provides commands for installing the ODBC driver from the bash shell. If you want to download the packages directly, see [Download ODBC Driver for SQL Server](../download-odbc-driver-for-sql-server.md).
 
 > [!Note]
-> The Microsoft ODBC driver for SQL Server on macOS is only supported on the x64 architecture. The Apple M1 is not supported.
+> The Microsoft ODBC driver for SQL Server on macOS is supported only on the x64 architecture through version 17.7. The Apple M1 (ARM64) is supported starting with version 17.8. The Homebrew formula detects the architecture and installs the correct package automatically: if your command prompt is running in x64 emulation mode on the M1, the x64 package is installed; otherwise, the ARM64 package is installed.
 
 ## Microsoft ODBC 17
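The driver name is the same on both architectures, so client connection strings don't change across x64 and ARM64. A minimal sketch, assuming Python (the server, database, and credential values below are placeholders, not from the commit):

```python
# Sketch: assemble an ODBC connection string for the macOS driver.
# The driver name is identical on x64 and ARM64 (17.8+), so nothing
# here depends on which package the Homebrew formula installed.
def build_connection_string(server: str, database: str, uid: str, pwd: str) -> str:
    parts = {
        "DRIVER": "{ODBC Driver 17 for SQL Server}",
        "SERVER": server,
        "DATABASE": database,
        "UID": uid,
        "PWD": pwd,
    }
    return ";".join(f"{key}={value}" for key, value in parts.items())

conn_str = build_connection_string("tcp:myserver,1433", "mydb", "my_user", "my_password")
print(conn_str)
```

A library such as pyodbc accepts a string in this shape; the helper itself is purely illustrative.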

docs/connect/odbc/linux-mac/release-notes-odbc-sql-server-linux-mac.md

Lines changed: 3 additions & 2 deletions

@@ -1,8 +1,8 @@
 ---
-title: "Release Notes ODBC Driver for SQL Server on Linux and macOS"
+title: Release Notes ODBC Driver for SQL Server on Linux and macOS
 description: "Learn what's new and changed in released versions of the Microsoft ODBC Driver for SQL Server."
 ms.custom: ""
-ms.date: "07/30/2021"
+ms.date: 08/02/2021
 ms.prod: sql
 ms.prod_service: connectivity
 ms.reviewer: v-daenge
@@ -36,6 +36,7 @@ GeneMi. 2019/04/03.
 | New item | Details |
 | :------- | :------ |
 | New distributions supported. | Ubuntu 21.04, Alpine 3.13 |
+| Support for Apple M1 ARM64 hardware | See [Install the ODBC driver (macOS)](install-microsoft-odbc-driver-sql-server-macos.md). |
 | Replication option added to the connection string | See [DSN and Connection String Attributes and Keywords](../dsn-connection-string-attribute.md). |
 | KeepAlive and KeepAliveInterval options added to the connection string | See [DSN and Connection String Attributes and Keywords](../dsn-connection-string-attribute.md). |
 | Bug fixes. | [Bug fixes](../bug-fixes.md). |

docs/database-engine/availability-groups/windows/automatic-seeding-secondary-replicas.md

Lines changed: 1 addition & 1 deletion

@@ -88,7 +88,7 @@ CREATE AVAILABILITY GROUP [<AGName>]
 WITH (
 ENDPOINT_URL = N'TCP://Primary_Replica.Contoso.com:5022',
 FAILOVER_MODE = AUTOMATIC,
-AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
+AVAILABILITY_MODE = SYNCHRONOUS_COMMIT
 ),
 N'Secondary_Replica' WITH (
 ENDPOINT_URL = N'TCP://Secondary_Replica.Contoso.com:5022',

docs/database-engine/availability-groups/windows/change-the-availability-mode-of-an-availability-replica-sql-server.md

Lines changed: 5 additions & 2 deletions

@@ -48,8 +48,11 @@ You must be connected to the server instance that hosts the primary replica.
 2. Use the [ALTER AVAILABILITY GROUP](../../../t-sql/statements/alter-availability-group-transact-sql.md) statement, as in the following example:
 
 ```sql
-ALTER AVAILABILITY GROUP *group_name* MODIFY REPLICA ON '*server_name*'
-WITH ( AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT , FAILOVER_MODE = MANUAL );
+ALTER AVAILABILITY GROUP [<availability_group_name>] MODIFY REPLICA ON '*server_name*'
+WITH ( AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT );
+
+ALTER AVAILABILITY GROUP [<availability_group_name>] MODIFY REPLICA ON '*server_name*'
+WITH ( FAILOVER_MODE = MANUAL );
 ```
 
 Where *group_name* is the name of the availability group and *server_name* is the name of the server instance that hosts the replica to be modified.
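The two-statement pattern above can also be generated from code. This hypothetical Python helper just formats the two T-SQL statements for illustration (the group and server names are made up, not from the commit):

```python
def modify_replica_statements(group_name: str, server_name: str) -> list:
    """Format the two-step ALTER AVAILABILITY GROUP change as T-SQL strings.

    Availability mode and failover mode are modified in two separate
    statements, mirroring the doc change above. Purely illustrative.
    """
    template = "ALTER AVAILABILITY GROUP [{g}] MODIFY REPLICA ON '{s}' WITH ( {opt} );"
    return [
        template.format(g=group_name, s=server_name,
                        opt="AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT"),
        template.format(g=group_name, s=server_name,
                        opt="FAILOVER_MODE = MANUAL"),
    ]

for statement in modify_replica_statements("MyAg", "SQLNODE2"):
    print(statement)
```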

docs/database-engine/availability-groups/windows/create-an-availability-group-transact-sql.md

Lines changed: 3 additions & 3 deletions

@@ -226,7 +226,7 @@ ms.author: chadam
 
 The following code example creates a transaction log backup on MyDb1 and on MyDb2.
 
-```sql
+```sql
 -- On the server instance that hosts the primary replica,
 -- Backup the transaction log on each primary database:
 BACKUP LOG MyDb1
@@ -236,7 +236,7 @@ ms.author: chadam
 
 BACKUP LOG MyDb2
 TO DISK = N'\\FILESERVER\SQLbackups\MyDb2.bak'
-WITHNOFORMAT;
+WITH NOFORMAT;
 GO
 ```
@@ -409,7 +409,7 @@ GO
 
 BACKUP LOG MyDb2
 TO DISK = N'\\FILESERVER\SQLbackups\MyDb2.bak'
-WITHNOFORMAT
+WITH NOFORMAT
 GO
 
 -- Restore the transaction log on each secondary database,

docs/database-engine/availability-groups/windows/secondary-replica-connection-redirection-always-on-availability-groups.md

Lines changed: 7 additions & 7 deletions

@@ -90,7 +90,7 @@ The following transact-SQL script creates this AG. In this example, Each replica
 CREATE AVAILABILITY GROUP MyAg
 WITH ( CLUSTER_TYPE = NONE )
 FOR
-DATABASE <Database1>
+DATABASE [<Database1>]
 REPLICA ON
 'COMPUTER01' WITH
 (
@@ -100,8 +100,8 @@ CREATE AVAILABILITY GROUP MyAg
 SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL,
 READ_ONLY_ROUTING_URL = 'TCP://COMPUTER01.<domain>.<tld>:1433' ),
 PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
-READ_ONLY_ROUTING_LIST = (COMPUTER02, COMPUTER03),
-READ_WRITE_ROUTING_URL = 'TCP://COMPUTER01.<domain>.<tld>:1433' )
+READ_ONLY_ROUTING_LIST = ('COMPUTER02', 'COMPUTER03'),
+READ_WRITE_ROUTING_URL = 'TCP://COMPUTER01.<domain>.<tld>:1433' ),
 SESSION_TIMEOUT = 10
 ),
 'COMPUTER02' WITH
@@ -112,8 +112,8 @@ CREATE AVAILABILITY GROUP MyAg
 SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL,
 READ_ONLY_ROUTING_URL = 'TCP://COMPUTER02.<domain>.<tld>:1433' ),
 PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
-READ_ONLY_ROUTING_LIST = (COMPUTER01, COMPUTER03),
-READ_WRITE_ROUTING_URL = 'TCP://COMPUTER02.<domain>.<tld>:1433' )
+READ_ONLY_ROUTING_LIST = ('COMPUTER01', 'COMPUTER03'),
+READ_WRITE_ROUTING_URL = 'TCP://COMPUTER02.<domain>.<tld>:1433' ),
 SESSION_TIMEOUT = 10
 ),
 'COMPUTER03' WITH
@@ -124,8 +124,8 @@ CREATE AVAILABILITY GROUP MyAg
 SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL,
 READ_ONLY_ROUTING_URL = 'TCP://COMPUTER03.<domain>.<tld>:1433' ),
 PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
-READ_ONLY_ROUTING_LIST = (COMPUTER01, COMPUTER02),
-READ_WRITE_ROUTING_URL = 'TCP://COMPUTER03.<domain>.<tld>:1433' )
+READ_ONLY_ROUTING_LIST = ('COMPUTER01', 'COMPUTER02'),
+READ_WRITE_ROUTING_URL = 'TCP://COMPUTER03.<domain>.<tld>:1433' ),
 SESSION_TIMEOUT = 10
 );
 GO

docs/machine-learning/deploy/modify-r-python-code-to-run-in-sql-server.md

Lines changed: 17 additions & 4 deletions

@@ -72,12 +72,25 @@ How much you change your code depends on whether you intend to submit the code f
 
 + When running code in a stored procedure, you can pass through multiple **scalar** inputs. For any parameters that you want to use in the output, add the **OUTPUT** keyword.
 
-For example, the following scalar input `@model_name` contains the model name, which is also output in its own column in the results:
+For example, the following scalar input `@model_name` contains the model name, which is modified by the R script and then output in its own column in the results:
 
 ```sql
-EXECUTE sp_execute_external_script @model_name = "DefaultModel" OUTPUT
-,@language = N'R'
-,@script = N'R code here'
+-- Declare a local scalar variable that is passed into the R script.
+DECLARE @local_model_name AS NVARCHAR (50) = 'DefaultModel';
+
+-- The following defines an OUTPUT variable named model_name in the scope of the R script.
+-- Syntactically, it is defined by using the @model_name name. The order of these
+-- parameters matters: mandatory parameters to sp_execute_external_script must
+-- appear first, followed by additional parameter definitions such as @params.
+EXECUTE sp_execute_external_script @language = N'R', @script = N'
+model_name <- "Model name from R script"
+OutputDataSet <- data.frame(InputDataSet$c1, model_name)'
+, @input_data_1 = N'SELECT 1 AS c1'
+, @params = N'@model_name nvarchar(50) OUTPUT'
+, @model_name = @local_model_name OUTPUT;
+
+-- Optionally, examine the new value of the local variable:
+SELECT @local_model_name;
 ```
 
 + Any variables that you pass in as parameters of the stored procedure [sp_execute_external_script](../../relational-databases/system-stored-procedures/sp-execute-external-script-transact-sql.md) must be mapped to variables in the code. By default, variables are mapped by name. All columns in the input dataset must also be mapped to variables in the script.
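The name-based mapping described in the last bullet can be mimicked in a few lines. This toy Python parser is a sketch of the idea only; it is an assumption for illustration, not the engine's actual logic:

```python
def map_params(params_decl: str, values: dict) -> dict:
    """Toy model: map '@name type [OUTPUT]' declarations to script variables by name."""
    script_vars = {}
    for decl in params_decl.split(","):
        param_name = decl.strip().split()[0]          # e.g. '@model_name'
        script_vars[param_name.lstrip("@")] = values[param_name]
    return script_vars

# The R script would then see a variable called model_name:
script_vars = map_params("@model_name nvarchar(50) OUTPUT", {"@model_name": "DefaultModel"})
print(script_vars)
```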

docs/relational-databases/indexes/columnstore-indexes-data-loading-guidance.md

Lines changed: 9 additions & 7 deletions

@@ -79,15 +79,17 @@ If you are loading data only to stage it before running more transformations, lo
 A common pattern for data load is to load the data into a staging table, do some transformation, and then load it into the target table using the following command:
 
 ```sql
-INSERT INTO <columnstore index>
-SELECT <list of columns> FROM <Staging Table>
+INSERT INTO [<columnstore index>]
+SELECT col1 /* include actual list of columns in place of col1 */
+FROM [<Staging Table>]
 ```
 
 This command loads the data into the columnstore index in a similar way to BCP or Bulk Insert, but in a single batch. If the number of rows in the staging table is < 102400, the rows are loaded into a delta rowgroup; otherwise, the rows are loaded directly into a compressed rowgroup. One key limitation was that this `INSERT` operation was single-threaded. To load data in parallel, you could create multiple staging tables or issue `INSERT`/`SELECT` with non-overlapping ranges of rows from the staging table. This limitation goes away with [!INCLUDE[sssql16-md](../../includes/sssql16-md.md)]. The command below loads the data from the staging table in parallel, but you will need to specify `TABLOCK`. You may find this contradicts what was said earlier about bulk load, but the key difference is that the parallel data load from the staging table is executed under the same transaction.
 
 ```sql
-INSERT INTO <columnstore index> WITH (TABLOCK)
-SELECT <list of columns> FROM <Staging Table>
+INSERT INTO [<columnstore index>] WITH (TABLOCK)
+SELECT col1 /* include actual list of columns in place of col1 */
+FROM [<Staging Table>]
 ```
 
 The following optimizations are available when loading into a clustered columnstore index from a staging table:
@@ -101,7 +103,7 @@ SELECT <list of columns> FROM <Staging Table>
 *Trickle insert* refers to the way individual rows move into the columnstore index. Trickle inserts use the [INSERT INTO](../../t-sql/statements/insert-transact-sql.md) statement. With trickle insert, all of the rows go to the deltastore. This is useful for small numbers of rows, but not practical for large loads.
 
 ```sql
-INSERT INTO <table-name> VALUES (<set of values>)
+INSERT INTO [<table-name>] VALUES ('some value' /* replace with actual set of values */)
 ```
 
 > [!NOTE]
@@ -110,13 +112,13 @@ INSERT INTO <table-name> VALUES (<set of values>)
 Once the rowgroup contains 1,048,576 rows, the delta rowgroup is marked closed. It is still available for queries and update/delete operations, but newly inserted rows go into an existing or newly created deltastore rowgroup. A background thread, the *Tuple Mover (TM)*, compresses the closed delta rowgroups periodically, about every 5 minutes. You can explicitly invoke the following command to compress the closed delta rowgroup:
 
 ```sql
-ALTER INDEX <index-name> on <table-name> REORGANIZE
+ALTER INDEX [<index-name>] on [<table-name>] REORGANIZE
 ```
 
 If you want to force a delta rowgroup closed and compressed, you can execute the following command. You may want to run this command if you are done loading the rows and don't expect any new rows. By explicitly closing and compressing the delta rowgroup, you can save storage further and improve analytics query performance. A best practice is to invoke this command if you don't expect new rows to be inserted.
 
 ```sql
-ALTER INDEX <index-name> on <table-name> REORGANIZE with (COMPRESS_ALL_ROW_GROUPS = ON)
+ALTER INDEX [<index-name>] on [<table-name>] REORGANIZE with (COMPRESS_ALL_ROW_GROUPS = ON)
 ```
 
 ## How loading into a partitioned table works
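The rowgroup thresholds described above (batches below 102,400 rows land in a delta rowgroup, larger batches go straight to a compressed rowgroup, and a delta rowgroup closes at 1,048,576 rows) can be expressed as a small decision helper. The numbers come from the text; the function names are illustrative:

```python
DELTA_THRESHOLD = 102_400      # batches below this land in a delta rowgroup
ROWGROUP_CAPACITY = 1_048_576  # a delta rowgroup is marked closed at this row count

def rowgroup_for_batch(row_count: int) -> str:
    """Where a single INSERT ... SELECT batch of row_count rows lands."""
    return "delta rowgroup" if row_count < DELTA_THRESHOLD else "compressed rowgroup"

def delta_rowgroup_state(rows_in_delta: int) -> str:
    """Whether a delta rowgroup holding rows_in_delta rows is open or closed."""
    return "closed" if rows_in_delta >= ROWGROUP_CAPACITY else "open"

print(rowgroup_for_batch(50_000))        # delta rowgroup
print(rowgroup_for_batch(500_000))       # compressed rowgroup
print(delta_rowgroup_state(1_048_576))   # closed
```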

0 commit comments
