- SQL Server Machine Learning Services
- PolyBase
- Expanded support for Persistent Memory (PMEM) devices

[Big Data Cluster](#bigdatacluster)

- Deploy a SQL Server Big Data Cluster with Linux containers on Kubernetes
- Use Azure Data Studio to run Jupyter Notebooks
- Ingest external data into a data pool
- Query HDFS data in the storage pool

[SQL Server on Linux](#sqllinux)

- Replication support
- Support for the Microsoft Distributed Transaction Coordinator (MSDTC)
- Machine Learning on Linux
- New container registry
- New RHEL-based container images

[Master Data Services](#mds)

- Silverlight controls replaced

[Security](#security)

- Certificate management in SQL Server Configuration Manager

[Tools](#tools)

- SQL Server Management Studio (SSMS) 18.0 (preview)
- Azure Data Studio (preview)
Continue reading for more details about these features.

**Row mode memory grant feedback** expands on the memory grant feedback feature introduced in SQL Server 2017 by adjusting memory grant sizes for both batch and row mode operators. For an excessive memory grant condition, if the granted memory is more than two times the size of the actual used memory, memory grant feedback will recalculate the memory grant. Consecutive executions will then request less memory. For an insufficiently sized memory grant that results in a spill to disk, memory grant feedback will trigger a recalculation of the memory grant. Consecutive executions will then request more memory. This feature is enabled by default under database compatibility level 150.
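If row mode memory grant feedback ever needs to be turned off while investigating a plan regression, it can be controlled per database. A minimal sketch using the database scoped configuration option:

```sql
-- Disable row mode memory grant feedback for the current database
-- (it is on by default under compatibility level 150).
ALTER DATABASE SCOPED CONFIGURATION
    SET ROW_MODE_MEMORY_GRANT_FEEDBACK = OFF;

-- Re-enable it later.
ALTER DATABASE SCOPED CONFIGURATION
    SET ROW_MODE_MEMORY_GRANT_FEEDBACK = ON;
```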
**Approximate COUNT DISTINCT** returns the approximate number of unique non-null values in a group. This function is designed for use in big data scenarios, and is optimized for queries where all the following conditions are true:

- The query accesses data sets of at least millions of rows.
- The query aggregates a column or columns that have a large number of distinct values.
- Responsiveness is more critical than absolute precision.

`APPROXIMATE_COUNT_DISTINCT` returns results that are typically within 2% of the precise answer, in a small fraction of the time needed for the precise answer.
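A sketch comparing the approximate and exact counts (the function ships as `APPROX_COUNT_DISTINCT`; the table and column names here are hypothetical):

```sql
-- Approximate distinct count: fast, typically within 2% of the exact value.
SELECT APPROX_COUNT_DISTINCT(O_OrderKey) AS approx_distinct_orders
FROM dbo.Orders;

-- Exact distinct count, for comparison: precise, but slower and more
-- memory-hungry on very large tables.
SELECT COUNT(DISTINCT O_OrderKey) AS exact_distinct_orders
FROM dbo.Orders;
```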
**Batch mode on rowstore** no longer requires a columnstore index to process a query in batch mode. Batch mode allows query operators to work on a set of rows, instead of just one row at a time. This feature is enabled by default under database compatibility level 150. Batch mode improves the speed of queries that access rowstore tables when all the following are true:

- The query uses analytic operators, such as joins or aggregation operators.
- The query involves 100,000 or more rows.
- The query is CPU bound, rather than input/output bound.
- Creation and use of a columnstore index would add too much overhead to the transactional part of your workload, or is not feasible because your application depends on a feature that is not yet supported with columnstore indexes.
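A sketch of the kind of query that qualifies, assuming a hypothetical `dbo.Sales` rowstore table:

```sql
-- Batch mode on rowstore is on by default under compatibility level 150.
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;

-- A CPU-bound analytic aggregation over a large rowstore table is a
-- batch mode candidate. In the actual execution plan, check each
-- operator's "Actual Execution Mode" property for "Batch".
SELECT CustomerId, SUM(Amount) AS total_amount
FROM dbo.Sales
GROUP BY CustomerId;
```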
**Table variable deferred compilation** improves plan quality and overall performance for queries referencing table variables. During optimization and initial compilation, this feature will propagate cardinality estimates that are based on actual table variable row counts. This accurate row count information will be used for optimizing downstream plan operations. This feature is enabled by default under database compatibility level 150.
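The pattern below illustrates where deferred compilation helps: before this feature, the join was optimized assuming the table variable held one row (the `dbo.Orders` table and its columns are hypothetical):

```sql
-- Deferred compilation delays the first compilation of statements that
-- reference @recent_orders until its actual row count is known.
DECLARE @recent_orders TABLE (OrderId INT PRIMARY KEY);

INSERT INTO @recent_orders (OrderId)
SELECT OrderId FROM dbo.Orders
WHERE OrderDate >= '2018-01-01';

-- The join below is optimized with the real row count of @recent_orders.
SELECT o.OrderId, o.Total
FROM dbo.Orders AS o
JOIN @recent_orders AS r ON r.OrderId = o.OrderId;
```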
### Lightweight query profiling infrastructure enabled by default

The lightweight query profiling infrastructure provides query performance data more efficiently than standard profiling technologies. Lightweight profiling is now enabled by default. It was introduced in SQL Server 2016 SP1. Lightweight profiling offers a query execution statistics collection mechanism with an expected overhead of 2% CPU, compared with an overhead of up to 75% CPU for the standard query profiling mechanism. On previous versions, it was OFF by default. Database administrators could enable it with [trace flag 7412](../t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql.md).

For more information, see [Developers Choice: Query progress – anytime, anywhere](http://blogs.msdn.microsoft.com/sql_server_team/query-progress-anytime-anywhere/).
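On versions prior to SQL Server 2019, lightweight profiling can be enabled server-wide with the trace flag mentioned above:

```sql
-- Enable lightweight query profiling globally (-1 applies the flag
-- server-wide) on versions where it is off by default.
DBCC TRACEON (7412, -1);

-- Verify the trace flag status.
DBCC TRACESTATUS (7412);
```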
### Data Discovery and Classification

Data discovery and classification provides advanced capabilities that are natively built into SQL Server for classifying, labeling, and protecting the sensitive data in your databases. Classifying and labeling your most sensitive data (business, financial, healthcare, personal information, and so on) provides the following benefits:

- Helps meet data privacy standards and regulatory compliance requirements.
- Supports security scenarios, such as monitoring (auditing) and alerting on anomalous access to sensitive data.
- Makes it easier to identify where sensitive data resides in the enterprise, so that administrators can take the right steps to secure the database.

For more information, see [SQL Data Discovery and Classification](../relational-databases/security/sql-data-discovery-and-classification.md).
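SQL Server 2019 builds also expose classification through T-SQL; a sketch, assuming a hypothetical customer table:

```sql
-- Classify a column as confidential financial data
-- (table and column names are hypothetical).
ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.CreditCardNumber
WITH (LABEL = 'Highly Confidential', INFORMATION_TYPE = 'Financial');

-- Review the classifications recorded in the database.
SELECT * FROM sys.sensitivity_classifications;
```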
```sql
SELECT page_info.*
FROM sys.dm_exec_requests AS d
CROSS APPLY sys.fn_PageResCracker(d.page_resource) AS r
CROSS APPLY sys.dm_db_page_info(r.db_id, r.file_id, r.page_id, 'DETAILED') AS page_info;
```
**Up to five synchronous replicas** – SQL Server 2019 preview increases the maximum number of synchronous replicas to five, up from three in SQL Server 2017. You can configure this group of five replicas to have automatic failover within the group: one primary replica, plus four synchronous secondary replicas.
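A sketch of setting an existing secondary replica to synchronous commit with automatic failover (the availability group and server names are hypothetical):

```sql
-- Make a secondary replica synchronous, then allow it to participate
-- in automatic failover.
ALTER AVAILABILITY GROUP [ag1]
MODIFY REPLICA ON 'SQLSERVER-5'
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

ALTER AVAILABILITY GROUP [ag1]
MODIFY REPLICA ON 'SQLSERVER-5'
WITH (FAILOVER_MODE = AUTOMATIC);
```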
**Secondary-to-primary replica connection redirection**: Allows client application connections to be directed to the primary replica regardless of the target server specified in the connection string. This capability allows connection redirection without a listener. Use secondary-to-primary replica connection redirection in the following cases:

- The cluster technology does not offer a listener capability.
- A multi-subnet configuration where redirection becomes complex.
- Read scale-out or disaster recovery scenarios where cluster type is `NONE`.
For details, see [Secondary to primary replica read/write connection redirection (Always On Availability Groups)](../database-engine/availability-groups/windows/secondary-replica-connection-redirection-always-on-availability-groups.md).
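A sketch of the configuration involved, assuming hypothetical server names: each replica advertises a read/write routing URL so that a read/write connection landing on a secondary is redirected to the current primary.

```sql
-- Set the read/write routing URL for a replica. Connections specifying
-- ApplicationIntent=ReadWrite that reach a secondary are redirected to
-- the primary (server name and port are hypothetical).
ALTER AVAILABILITY GROUP [ag1]
MODIFY REPLICA ON 'SQLSERVER-1'
WITH (PRIMARY_ROLE (READ_WRITE_ROUTING_URL = 'TCP://SQLSERVER-1.contoso.com:1433'));
```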
### Always Encrypted with secure enclaves

Expands upon Always Encrypted with in-place encryption and rich computations, by enabling computations on plaintext data inside a secure enclave on the server side.

Cryptographic operations, such as encrypting columns and rotating column encryption keys, can now be issued by using Transact-SQL, and they do not require that data be moved out of the database. Secure enclaves extend Always Encrypted to a broader set of scenarios that have both of the following requirements:

- Sensitive data must be protected while in use.
- Rich computations on protected data must be supported within the database system.

For details, see [Always Encrypted with secure enclaves](../relational-databases/security/encryption/always-encrypted-enclaves.md).
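A sketch of the kind of rich computation this enables, assuming a hypothetical table whose `SSN` column is encrypted with an enclave-enabled column encryption key:

```sql
-- With secure enclaves, comparison and pattern matching can run
-- server-side against an encrypted column, without decrypting the
-- data outside the enclave (table and column names are hypothetical).
SELECT PatientId, LastName
FROM dbo.Patients
WHERE SSN LIKE '%6789';   -- evaluated inside the secure enclave
```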
> [!NOTE]
> Always Encrypted with secure enclaves is only available on Windows OS.