Query performance issues account for the majority of bottlenecks we encounter in hosted .NET applications running on SQL Server. Most can be resolved by applying targeted optimization techniques instead of scaling hardware or instances prematurely. The foundation is understanding exactly how the query optimizer translates your T-SQL into an execution plan and pinpointing where scans, lookups, or inaccurate cardinality estimates create drag.
The techniques below are drawn from production workloads using ASP.NET Core with EF Core or Dapper against SQL Server 2019 and 2022. They focus on measurable improvements: reducing CPU, cutting I/O, and lowering query duration without architectural overhauls. Start every optimization effort with data from execution plans and Query Store rather than assumptions.
# Reading Execution Plans
The execution plan is the single most valuable diagnostic artifact SQL Server produces. It shows join order, index usage, estimated versus actual rows, memory grants, and operator costs. Large discrepancies between estimated and actual rows usually indicate stale statistics or missing indexes. In SSMS, enable actual plans with Ctrl+M or via the toolbar button. For production systems without direct SSMS access, rely on Query Store (available since SQL Server 2016) or Extended Events to capture plans during peak load.
```sql
SET STATISTICS IO, TIME ON;
GO
SELECT OrderId, CustomerId, OrderDate, TotalAmount
FROM Orders
WHERE CustomerId = 42
  AND OrderDate > '2023-01-01';
GO
```
Focus first on the thickest arrows (high row counts), operators exceeding roughly 30% of plan cost, and any warnings. A table scan on the inner side of a Nested Loops join is almost always a scalability killer in OLTP workloads. Look for Key Lookup operators that can be eliminated with covering indexes. Use the missing index DMV suggestions only as hints; evaluate them against your full workload before implementing any of them.
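To review those suggestions in bulk rather than plan by plan, the missing-index DMVs can be queried directly. A sketch that ranks suggestions by a rough impact score (column aliases are my own; the views are the standard system DMVs):

```sql
-- Rank missing-index suggestions by seeks x estimated impact.
-- Treat as hints only: these DMVs ignore write cost and overlapping indexes.
SELECT mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```

Note the DMV contents reset on instance restart, so rankings only reflect activity since then.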
# Indexing Strategies That Deliver Results
Proper indexing typically yields the largest single improvement. Prioritize indexes that support your most frequent WHERE, JOIN, and ORDER BY predicates. Remember every additional index increases write latency and storage overhead, so measure before and after on representative production-like loads. Use the INCLUDE clause to create covering indexes that satisfy all columns needed by the query without key lookups.
```sql
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
ON Orders (CustomerId, OrderDate)
INCLUDE (TotalAmount, Status, OrderNotes);
```
The example above creates a covering index for the earlier query, allowing a single index seek with no additional clustered index lookups. Filtered indexes work well for queries that always target a subset, such as active orders (WHERE IsActive = 1). Monitor index usage with sys.dm_db_index_usage_stats and remove indexes that show zero seeks over a representative period. Reorganize indexes at moderate fragmentation and rebuild them once fragmentation exceeds roughly 30%.
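As an illustration of both points, here is a hypothetical filtered index for the active-orders case plus a usage check; the IsActive column and index name are assumptions, not part of the earlier schema:

```sql
-- Hypothetical filtered index: only rows with IsActive = 1 are indexed,
-- keeping the index small and cheap to maintain on writes.
CREATE NONCLUSTERED INDEX IX_Orders_Active_CustomerId
ON Orders (CustomerId)
WHERE IsActive = 1;

-- List indexes in the current database with zero seeks since the last restart.
-- Evaluate over a full business cycle before dropping anything.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND s.user_seeks = 0
ORDER BY s.user_updates DESC;
```

A high user_updates count next to zero seeks is the clearest signal an index costs more than it returns.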
# T-SQL Patterns That Scale
Even perfect indexes fail when T-SQL hides predicates from the optimizer. Applying functions to indexed columns is the most common anti-pattern. YEAR(), MONTH(), or CONVERT on a datetime column prevents index seeks. Rewrite these as range predicates that allow the optimizer to use existing indexes. Similar rules apply to leading-wildcard LIKE expressions and certain OR conditions that expand into UNION ALL under the covers.
```sql
-- Non-sargable - forces scan
SELECT * FROM Orders WHERE YEAR(OrderDate) = 2023;

-- Sargable - supports index seek
SELECT * FROM Orders
WHERE OrderDate >= '2023-01-01'
  AND OrderDate < '2024-01-01';
```
Explicitly list columns instead of SELECT *. Replace IN with EXISTS for correlated checks. Limit use of table variables in complex queries because they lack statistics. For queries with highly variable parameters that produce different optimal plans, consider OPTION (RECOMPILE) or dynamic SQL. These patterns consistently appear in slow-query reports from .NET applications we support.
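Two of those rewrites sketched in T-SQL, assuming a hypothetical Customers table alongside the Orders table used earlier (the second statement is a fragment that expects @CustomerId to be a parameter):

```sql
-- Correlated check: EXISTS stops at the first matching row,
-- and the optimizer can implement it as a semi join.
SELECT c.CustomerId, c.Name
FROM Customers AS c
WHERE EXISTS (SELECT 1
              FROM Orders AS o
              WHERE o.CustomerId = c.CustomerId
                AND o.OrderDate >= '2023-01-01');

-- Parameter-sensitive query: trade compile time for a plan
-- tailored to each parameter value.
SELECT OrderId, TotalAmount
FROM Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);
```

OPTION (RECOMPILE) is best reserved for queries that run infrequently relative to their cost, since every execution pays for compilation.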
# Maintaining Performance in Production
Optimization is ongoing. Enable Query Store on every production database and review regressed queries after each deployment. Update statistics more frequently than the auto-update threshold on large tables that grow daily. Schedule index maintenance during low-activity windows using scripts that reorganize indexes with fragmentation between 5-30% and rebuild those above 30%. Set up Extended Events sessions to capture queries exceeding 500ms duration or 5000 logical reads.
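A minimal sketch of that setup, assuming default Query Store settings and a session name of my choosing (SlowQueries):

```sql
-- Enable Query Store with default settings (tune retention as needed).
ALTER DATABASE CURRENT SET QUERY_STORE = ON;

-- Refresh statistics on a fast-growing table ahead of the auto-update threshold.
UPDATE STATISTICS Orders WITH FULLSCAN;

-- Capture statements over 500ms (duration is in microseconds) or 5000 logical reads.
CREATE EVENT SESSION SlowQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE duration > 500000 OR logical_reads > 5000)
ADD TARGET package0.event_file (SET filename = N'SlowQueries.xel', max_file_size = 100);

ALTER EVENT SESSION SlowQueries ON SERVER STATE = START;
```

The event_file target rolls over at the configured size (in MB), so the session can run continuously without unbounded growth.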
The practical takeaway is to always measure first. Capture baseline metrics with Query Store, identify the top five highest-cost queries by CPU or duration, apply one targeted change (index, rewrite, or statistics update), then re-measure. In production .NET environments these disciplined steps routinely deliver 5-20x faster queries while reducing overall server load and operational costs.
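As a starting point for that baseline, the Query Store catalog views can rank queries by total CPU across the captured intervals (avg_cpu_time is reported in microseconds):

```sql
-- Top five queries by total CPU time recorded in Query Store.
SELECT TOP (5)
       qt.query_sql_text,
       SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_us,
       SUM(rs.count_executions) AS executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY total_cpu_us DESC;
```

Swap avg_cpu_time for avg_duration or avg_logical_io_reads to rank by duration or I/O instead.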