
Commit b32292f
Merge branch 'master' of https://github.com/MicrosoftDocs/sql-docs-pr into release-sqlseattle
2 parents 576d6ec + 770140f
20 files changed: 153 additions & 99 deletions

docs/2014/database-engine/sql-server-managed-backup-to-windows-azure-retention-and-storage-settings.md

Lines changed: 2 additions & 2 deletions
@@ -127,9 +127,9 @@ manager: craigg
 ```
 
 ## <a name="InstanceConfigure"></a> Enable and Configure Default [!INCLUDE[ss_smartbackup](../includes/ss-smartbackup-md.md)] settings for the Instance
-You can enable and configure default [!INCLUDE[ss_smartbackup](../includes/ss-smartbackup-md.md)] settings at the instance level in two ways: By using the system stored procedure `smart_backup.set_instance_backup` or **SQL Server Management Studio**. The two methods are explained below:
+You can enable and configure default [!INCLUDE[ss_smartbackup](../includes/ss-smartbackup-md.md)] settings at the instance level in two ways: by using the system stored procedure `smart_admin.set_instance_backup` or **SQL Server Management Studio**. The two methods are explained below:
 
-**smart_backup.set_instance_backup:**. By specifying the value **1** for *@enable_backup* parameter, you can enable backup and set the default configurations. Once applied at the instance level, these default settings are applied to any new database that is added to this instance. When [!INCLUDE[ss_smartbackup](../includes/ss-smartbackup-md.md)] is enabled for the first time, the following information must be provided in addition to enabling [!INCLUDE[ss_smartbackup](../includes/ss-smartbackup-md.md)] on the instance:
+**smart_admin.set_instance_backup:** By specifying the value **1** for the *@enable_backup* parameter, you can enable backup and set the default configurations. Once applied at the instance level, these default settings are applied to any new database that is added to this instance. When [!INCLUDE[ss_smartbackup](../includes/ss-smartbackup-md.md)] is enabled for the first time, the following information must be provided in addition to enabling [!INCLUDE[ss_smartbackup](../includes/ss-smartbackup-md.md)] on the instance:
 
 - The retention period.
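As a worked illustration of the procedure described in the changed paragraph above, an instance-level call might look like the following sketch. The `@retention_days` and `@credential_name` parameter names and values are assumptions for illustration; verify them against the managed backup documentation for your SQL Server version.

```sql
-- Hypothetical sketch only: enable managed backup defaults for the instance.
-- @enable_backup = 1 enables backup, per the paragraph above; the other
-- parameter names and all values are illustrative placeholders.
USE msdb;
GO
EXEC smart_admin.set_instance_backup
    @enable_backup = 1,                -- 1 enables managed backup
    @retention_days = 30,              -- the retention period (assumed name)
    @credential_name = 'MyAzureCred';  -- credential for Windows Azure storage (assumed name)
```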

docs/integration-services/connection-manager/azure-storage-connection-manager.md

Lines changed: 1 addition & 1 deletion
@@ -31,6 +31,6 @@ Following properties are available.
 - **Authentication:** Specifies the authentication method to use. **AccessKey** and **ServicePrincipal** authentication are supported.
 - **AccessKey:** For this authentication method, specify the **Account key**.
 - **ServicePrincipal:** For this authentication method, specify the **Application ID**, **Application key**, and **Tenant ID** of the service principal.
-The service principal should be assigned **Storage Blob Data Contributor** role to the storage account.
+For **Test Connection** to work, the service principal should be assigned at least the **Storage Blob Data Reader** role on the storage account.
 Refer to [this](https://docs.microsoft.com/azure/storage/common/storage-auth-aad-rbac-portal#assign-rbac-roles-using-the-azure-portal) page for details.
 - **Environment:** Specifies the cloud environment hosting the storage account.

docs/integration-services/control-flow/flexible-file-task.md

Lines changed: 20 additions & 1 deletion
@@ -37,8 +37,27 @@ For **Copy** operation, following properties are available.
 - **SourceConnection:** Specifies the source connection manager.
 - **SourceFolderPath:** Specifies the source folder path.
 - **SourceFileName:** Specifies the source file name. If left blank, the source folder will be copied.
-- **SearchRecursively:** Specifies whether to recursively copy sub-folders.
+- **SearchRecursively:** Specifies whether to recursively copy subfolders.
 - **DestinationConnectionType:** Specifies the destination connection manager type.
 - **DestinationConnection:** Specifies the destination connection manager.
 - **DestinationFolderPath:** Specifies the destination folder path.
 - **DestinationFileName:** Specifies the destination file name.
+
+***Notes on Service Principal Permission Configuration***
+
+For **Test Connection** to work (either blob storage or Data Lake Storage Gen2), the service principal should be assigned at least the **Storage Blob Data Reader** role on the storage account.
+This is done with [RBAC](https://docs.microsoft.com/azure/storage/common/storage-auth-aad-rbac-portal#assign-rbac-roles-using-the-azure-portal).
+
+For blob storage, read and write permissions are granted by assigning at least the **Storage Blob Data Reader** and **Storage Blob Data Contributor** roles, respectively.
+
+For Data Lake Storage Gen2, permission is determined by both RBAC and [ACLs](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-how-to-set-permissions-storage-explorer).
+Note that ACLs are configured using the Object ID (OID) of the service principal for the app registration, as detailed [here](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-access-control#how-do-i-set-acls-correctly-for-a-service-principal).
+This is different from the Application (client) ID that is used with RBAC configuration.
+When a security principal is granted RBAC data permissions through a built-in or custom role, these permissions are evaluated first when a request is authorized.
+If the requested operation is authorized by the security principal's RBAC assignments, authorization is resolved immediately and no additional ACL checks are performed.
+If the security principal does not have an RBAC assignment, or the request's operation does not match the assigned permission, ACL checks are performed to determine whether the security principal is authorized to perform the requested operation.
+
+- For read permission, grant at least **Execute** permission starting from the source file system, along with **Read** permission for the files to copy. Alternatively, grant at least the **Storage Blob Data Reader** role with RBAC.
+- For write permission, grant at least **Execute** permission starting from the sink file system, along with **Write** permission for the sink folder. Alternatively, grant at least the **Storage Blob Data Contributor** role with RBAC.
+
+See [this](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-access-control) article for details.
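The RBAC-first, ACL-fallback evaluation order added in this file can be sketched as a small decision function. This is an illustrative simplification only; the role names and permission sets are stand-ins, not the actual Azure authorization engine.

```python
# Illustrative sketch of the evaluation order described above:
# RBAC assignments are checked first; if they authorize the operation,
# no ACL check happens. Otherwise, ACLs decide. Names are stand-ins.

RBAC_GRANTS = {
    "Storage Blob Data Reader": {"read"},
    "Storage Blob Data Contributor": {"read", "write"},
}

def is_authorized(operation, rbac_roles, acl_permissions):
    """Return True if the request is allowed under RBAC-then-ACL evaluation."""
    for role in rbac_roles:
        if operation in RBAC_GRANTS.get(role, set()):
            return True  # resolved by RBAC; ACLs are never consulted
    # No matching RBAC grant: fall back to the ACL check
    return operation in acl_permissions

# A principal with only the Reader role cannot write via RBAC,
# but an ACL granting "write" on the sink folder still allows it.
print(is_authorized("write", ["Storage Blob Data Reader"], set()))      # False
print(is_authorized("write", ["Storage Blob Data Reader"], {"write"}))  # True
print(is_authorized("read", ["Storage Blob Data Contributor"], set())) # True
```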

docs/integration-services/control-flow/foreach-loop-container.md

Lines changed: 13 additions & 1 deletion
@@ -501,7 +501,19 @@ Specifies an existing Azure Storage Connection Manager or creates a new one that
 Specifies the path of the folder to enumerate files in.
 
 **SearchRecursively**
-Specifies whether to search recursively within the specified folder.
+Specifies whether to search recursively within the specified folder.
+
+***Notes on Service Principal Permission Configuration***
+
+Data Lake Storage Gen2 permission is determined by both [RBAC](https://docs.microsoft.com/azure/storage/common/storage-auth-aad-rbac-portal#assign-rbac-roles-using-the-azure-portal) and [ACLs](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-how-to-set-permissions-storage-explorer).
+Note that ACLs are configured using the Object ID (OID) of the service principal for the app registration, as detailed [here](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-access-control#how-do-i-set-acls-correctly-for-a-service-principal).
+This is different from the Application (client) ID that is used with RBAC configuration.
+When a security principal is granted RBAC data permissions through a built-in or custom role, these permissions are evaluated first when a request is authorized.
+If the requested operation is authorized by the security principal's RBAC assignments, authorization is resolved immediately and no additional ACL checks are performed.
+If the security principal does not have an RBAC assignment, or the request's operation does not match the assigned permission, ACL checks are performed to determine whether the security principal is authorized to perform the requested operation.
+For the enumerator to work, grant at least **Execute** permission starting from the root file system, along with **Read** permission for the target folder.
+Alternatively, grant at least the **Storage Blob Data Reader** role with RBAC.
+See [this](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-access-control) article for details.
 
 ## Variable Mappings Page - Foreach Loop Editor
 Use the **Variables Mappings** page of the **Foreach Loop Editor** dialog box to map variables to the collection value. The value of the variable is updated with the collection values on each iteration of the loop.

docs/integration-services/data-flow/flexible-file-destination.md

Lines changed: 18 additions & 1 deletion
@@ -46,12 +46,29 @@ Following properties are available on the **Advanced Editor**.
 - **escapeChar:** The special character used to escape a column delimiter in the content of the input file. You cannot specify both escapeChar and quoteChar for a table. Only one character is allowed. No default value.
 - **quoteChar:** The character used to quote a string value. Column and row delimiters inside the quote characters are treated as part of the string value. This property applies to both input and output datasets. You cannot specify both escapeChar and quoteChar for a table. Only one character is allowed. No default value.
 - **nullValue:** One or more characters used to represent a null value. The **default** value is \N.
-- **encodingName:** Specify the encoding name. See [Encoding.EncodingName](https://docs.microsoft.com/en-us/dotnet/api/system.text.encoding?redirectedfrom=MSDN&view=netframework-4.8) Property.
+- **encodingName:** Specify the encoding name. See the [Encoding.EncodingName](https://docs.microsoft.com/dotnet/api/system.text.encoding?redirectedfrom=MSDN&view=netframework-4.8) property.
 - **skipLineCount:** Indicates the number of non-empty rows to skip when reading data from input files. If both skipLineCount and firstRowAsHeader are specified, the lines are skipped first and then the header information is read from the input file.
 - **treatEmptyAsNull:** Specifies whether to treat a null or empty string as a null value when reading data from an input file. The **default** value is True.
 
 After specifying the connection information, switch to the **Columns** page to map source columns to destination columns for the SSIS data flow.
 
+**Notes on Service Principal Permission Configuration**
+
+For **Test Connection** to work (either blob storage or Data Lake Storage Gen2), the service principal should be assigned at least the **Storage Blob Data Reader** role on the storage account.
+This is done with [RBAC](https://docs.microsoft.com/azure/storage/common/storage-auth-aad-rbac-portal#assign-rbac-roles-using-the-azure-portal).
+
+For blob storage, write permission is granted by assigning at least the **Storage Blob Data Contributor** role.
+
+For Data Lake Storage Gen2, permission is determined by both RBAC and [ACLs](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-how-to-set-permissions-storage-explorer).
+Note that ACLs are configured using the Object ID (OID) of the service principal for the app registration, as detailed [here](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-access-control#how-do-i-set-acls-correctly-for-a-service-principal).
+This is different from the Application (client) ID that is used with RBAC configuration.
+When a security principal is granted RBAC data permissions through a built-in or custom role, these permissions are evaluated first when a request is authorized.
+If the requested operation is authorized by the security principal's RBAC assignments, authorization is resolved immediately and no additional ACL checks are performed.
+If the security principal does not have an RBAC assignment, or the request's operation does not match the assigned permission, ACL checks are performed to determine whether the security principal is authorized to perform the requested operation.
+For write permission, grant at least **Execute** permission starting from the sink file system, along with **Write** permission for the sink folder.
+Alternatively, grant at least the **Storage Blob Data Contributor** role with RBAC.
+See [this](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-access-control) article for details.
+
 **Prerequisite for ORC/Parquet File Format**
 
 Java is required to use the ORC/Parquet file format.

docs/integration-services/data-flow/flexible-file-source.md

Lines changed: 18 additions & 1 deletion
@@ -44,12 +44,29 @@ Following properties are available on the **Advanced Editor**.
 - **escapeChar:** The special character used to escape a column delimiter in the content of the input file. You cannot specify both escapeChar and quoteChar for a table. Only one character is allowed. No default value.
 - **quoteChar:** The character used to quote a string value. Column and row delimiters inside the quote characters are treated as part of the string value. This property applies to both input and output datasets. You cannot specify both escapeChar and quoteChar for a table. Only one character is allowed. No default value.
 - **nullValue:** One or more characters used to represent a null value. The **default** value is \N.
-- **encodingName:** Specify the encoding name. See [Encoding.EncodingName](https://docs.microsoft.com/en-us/dotnet/api/system.text.encoding?redirectedfrom=MSDN&view=netframework-4.8) Property.
+- **encodingName:** Specify the encoding name. See the [Encoding.EncodingName](https://docs.microsoft.com/dotnet/api/system.text.encoding?redirectedfrom=MSDN&view=netframework-4.8) property.
 - **skipLineCount:** Indicates the number of non-empty rows to skip when reading data from input files. If both skipLineCount and firstRowAsHeader are specified, the lines are skipped first and then the header information is read from the input file.
 - **treatEmptyAsNull:** Specifies whether to treat a null or empty string as a null value when reading data from an input file. The **default** value is True.
 
 After you specify the connection information, switch to the **Columns** page to map source columns to destination columns for the SSIS data flow.
 
+**Notes on Service Principal Permission Configuration**
+
+For **Test Connection** to work (either blob storage or Data Lake Storage Gen2), the service principal should be assigned at least the **Storage Blob Data Reader** role on the storage account.
+This is done with [RBAC](https://docs.microsoft.com/azure/storage/common/storage-auth-aad-rbac-portal#assign-rbac-roles-using-the-azure-portal).
+
+For blob storage, read permission is granted by assigning at least the **Storage Blob Data Reader** role.
+
+For Data Lake Storage Gen2, permission is determined by both RBAC and [ACLs](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-how-to-set-permissions-storage-explorer).
+Note that ACLs are configured using the Object ID (OID) of the service principal for the app registration, as detailed [here](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-access-control#how-do-i-set-acls-correctly-for-a-service-principal).
+This is different from the Application (client) ID that is used with RBAC configuration.
+When a security principal is granted RBAC data permissions through a built-in or custom role, these permissions are evaluated first when a request is authorized.
+If the requested operation is authorized by the security principal's RBAC assignments, authorization is resolved immediately and no additional ACL checks are performed.
+If the security principal does not have an RBAC assignment, or the request's operation does not match the assigned permission, ACL checks are performed to determine whether the security principal is authorized to perform the requested operation.
+For read permission, grant at least **Execute** permission starting from the source file system, along with **Read** permission for the files to read.
+Alternatively, grant at least the **Storage Blob Data Reader** role with RBAC.
+See [this](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-access-control) article for details.
+
 **Prerequisite for ORC/Parquet File Format**
 
 Java is required to use the ORC/Parquet file format.
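The quoteChar behavior documented in the property lists above (column delimiters inside quote characters are treated as part of the string value) mirrors standard delimited-text parsing. A minimal sketch using Python's stdlib `csv` module as a stand-in for the SSIS parser (an assumption for illustration, not the actual Flexible File implementation):

```python
import csv
import io

# A comma inside a quoted field is part of the value, not a delimiter,
# matching the quoteChar behavior described above (quotechar defaults to '"').
data = 'name,notes\n"Smith, John","likes ssis"\n'
rows = list(csv.reader(io.StringIO(data)))

header, first = rows[0], rows[1]
print(header)  # ['name', 'notes']
print(first)   # ['Smith, John', 'likes ssis']
```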

docs/linux/quickstart-install-connect-docker.md

Lines changed: 5 additions & 5 deletions
Original file line numberDiff line numberDiff line change
@@ -89,7 +89,7 @@ Before starting the following steps, make sure that you have selected your prefe
 
 ::: zone pivot="cs1-bash"
 ```bash
-sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' \
+sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong!Passw0rd>" \
    -p 1433:1433 --name sql1 \
    -d mcr.microsoft.com/mssql/server:2017-latest
 ```
@@ -208,7 +208,7 @@ Before starting the following steps, make sure that you have selected your prefe
 
 ::: zone pivot="cs1-bash"
 ```bash
-sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' \
+sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong!Passw0rd>" \
    -p 1433:1433 --name sql1 \
    -d mcr.microsoft.com/mssql/server:2019-CTP3.1-ubuntu
 ```
@@ -299,7 +299,7 @@ The **SA** account is a system administrator on the SQL Server instance that get
 ::: zone pivot="cs1-bash"
 ```bash
 sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd \
-   -S localhost -U SA -P '<YourStrong!Passw0rd>' \
+   -S localhost -U SA -P "<YourStrong!Passw0rd>" \
    -Q 'ALTER LOGIN SA WITH PASSWORD="<YourNewStrong!Passw0rd>"'
 ```
 ::: zone-end
@@ -347,7 +347,7 @@ The following steps use the SQL Server command-line tool, **sqlcmd**, inside the
 2. Once inside the container, connect locally with sqlcmd. Sqlcmd is not in the path by default, so you have to specify the full path.
 
    ```bash
-   /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourNewStrong!Passw0rd>'
+   /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "<YourNewStrong!Passw0rd>"
   ```
 
 > [!TIP]
@@ -449,7 +449,7 @@ The following steps use **sqlcmd** outside of your container to connect to SQL S
 
 ::: zone pivot="cs1-bash"
 ```bash
-sqlcmd -S <ip_address>,1433 -U SA -P '<YourNewStrong!Passw0rd>'
+sqlcmd -S <ip_address>,1433 -U SA -P "<YourNewStrong!Passw0rd>"
 ```
 ::: zone-end
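Since the hunks above switch the password arguments from single to double quotes, a quick way to sanity-check what the shell actually passes is to assign the placeholder to a variable and print it. This is an illustrative script only (in a non-interactive script `!` is literal inside either quoting style; in an interactive bash session `!` inside double quotes can trigger history expansion):

```shell
# Illustrative: verify the literal string the shell would pass as a password.
# The placeholder below is the same one used in the commands above.
PASSWORD="<YourStrong!Passw0rd>"
printf '%s\n' "$PASSWORD"            # prints the placeholder verbatim
printf 'length=%s\n' "${#PASSWORD}"  # prints length=21
```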
455455
