Your websites are not subject to bandwidth throttling or traffic limits, provided usage complies with our Terms of Service. However, we do limit the CPU and RAM (memory) resources that your website(s) can use. This policy has been consistent since the company began offering Windows and .NET hosting in 2003.
The distinction matters because server CPU and memory are shared finite resources. Without per-account limits, a single application with inefficient code or an unexpected traffic pattern could degrade performance for every other site on the same host. Bandwidth, by contrast, is abundant on modern networks, so we do not restrict data transfer volume or visitor count when the traffic is legitimate.
#Bandwidth and Traffic Guidelines
We impose no caps on the volume of data transferred to or from your websites and no restrictions on concurrent visitors or API requests. You can serve static assets, dynamic ASP.NET pages, REST endpoints, or database-driven content without artificial speed reductions or overage charges. This applies equally to small corporate sites and higher-traffic applications, provided all activity stays within the acceptable use outlined in the Terms of Service. Prohibited behaviors such as spam, illegal content distribution, or denial-of-service activities remain enforceable regardless of bandwidth consumed.
The absence of traffic metering removes a common barrier to growth. Marketing campaigns, seasonal peaks, or successful product launches will not trigger throttling or unexpected billing. Monitor your own logs and analytics to understand usage patterns; the hosting platform itself will not interrupt service based on megabytes transferred or requests processed.
#CPU and RAM Limitations Explained
Computational resources are allocated and metered at the account level to protect the shared environment and keep performance consistent across all customers. When an application exceeds its permitted share, the platform automatically lowers process priority, queues threads, or temporarily constrains further execution until demand subsides. This mechanism is not bandwidth throttling; network throughput remains unaffected. It governs only how much processor time and working memory the account may claim at any moment.
Limits exist because every physical server has fixed cores and memory. One runaway process—whether caused by an unindexed database query, a memory leak, or an infinite loop—can starve other sites of cycles. By enforcing ceilings we maintain stability without requiring customers to move to dedicated servers for moderate workloads. The practical outcome is that well-designed .NET applications run smoothly while poorly optimized ones surface performance problems that must be corrected in code.
#Common Pitfalls Leading to High Usage
- Database queries without indexes or proper filtering that force full table scans and consume excessive CPU during peak load.
- Failure to dispose of large objects or to use `using` statements, resulting in memory leaks that steadily increase RAM consumption until limits are hit.
- Blocking synchronous I/O or long-running computations on the IIS thread pool, starving the application of available threads and driving up CPU wait times.
- Running debug builds in production, disabling compiler optimizations and enabling verbose logging that together inflate both processor and memory demands.
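Two of the pitfalls above, blocking I/O on the thread pool and undisposed objects, share the same remedy: asynchronous reads combined with `using` declarations. The following is a minimal, self-contained sketch (a console program rather than a real IIS request handler, so the types and file names are illustrative only):

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;

public static class Program
{
    public static async Task Main()
    {
        // `using` declarations guarantee Dispose() runs when the scope
        // exits, releasing the stream's buffers promptly instead of
        // waiting for a garbage collection to reclaim them.
        using var stream = new MemoryStream(Encoding.UTF8.GetBytes("hello"));
        using var reader = new StreamReader(stream);

        // Awaiting an asynchronous read frees the calling thread while
        // I/O is pending; on the IIS thread pool this keeps threads
        // available to serve other requests instead of blocking.
        string text = await reader.ReadToEndAsync();
        Console.WriteLine(text); // prints "hello"
    }
}
```

In a real controller the same shape applies: replace `MemoryStream` with a `FileStream` or network stream, and the thread-pool and memory benefits carry over directly.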
#Optimization Strategies and Code Examples
Design applications from the outset to respect CPU and RAM ceilings. Use asynchronous controllers and middleware, implement layered caching, and keep database access minimal and indexed. Profile regularly with Visual Studio diagnostics or dotnet-trace to locate hot paths. Offload non-critical work such as email delivery or report generation to queued background tasks rather than handling it inline with web requests. For classic ASP.NET, output caching can be configured through directives in web.config; ASP.NET Core applications should instead use response or output caching middleware, alongside sensible application pool recycling settings in IIS.
```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class DataService
{
    private readonly IMemoryCache _cache;
    private readonly IDatabaseRepository _repo;

    public DataService(IMemoryCache cache, IDatabaseRepository repo)
    {
        _cache = cache;
        _repo = repo;
    }

    public async Task<IEnumerable<ReportData>> GetReportAsync()
    {
        string cacheKey = "report_daily";

        // Serve from RAM when possible; hit the database only on a miss.
        if (!_cache.TryGetValue(cacheKey, out IEnumerable<ReportData> data))
        {
            data = await _repo.RunExpensiveQueryAsync();

            // Sliding expiration keeps hot data cached; SetSize lets the
            // cache's configured SizeLimit bound total memory use.
            var options = new MemoryCacheEntryOptions()
                .SetSlidingExpiration(TimeSpan.FromMinutes(10))
                .SetSize(1);
            _cache.Set(cacheKey, data, options);
        }

        return data;
    }
}
```
The example injects an in-memory cache and checks for existing data before executing a heavy repository call. On a cache miss, the result is stored with a sliding expiration and a size entry, so subsequent requests are served from RAM instead of repeating CPU-intensive database work. Applied consistently across an application, patterns like this keep both processor and memory usage well inside permitted bounds.
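The queued-background-task approach mentioned above can be sketched with `System.Threading.Channels`. In a real ASP.NET Core site the consumer loop would normally live in a `BackgroundService`; a plain task is used here so the sketch runs standalone, and names like `EmailJob` are illustrative rather than part of any framework API:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Hypothetical unit of deferred work.
public record EmailJob(string Recipient);

public static class Program
{
    public static async Task Main()
    {
        // A bounded channel makes producers wait when the queue is full,
        // capping the RAM the backlog can consume.
        var queue = Channel.CreateBounded<EmailJob>(capacity: 100);

        // Consumer: drains jobs off the request path, so web requests
        // return immediately instead of doing this work inline.
        var consumer = Task.Run(async () =>
        {
            await foreach (var job in queue.Reader.ReadAllAsync())
                Console.WriteLine($"sending to {job.Recipient}");
        });

        // Producer: roughly what a controller action would do.
        await queue.Writer.WriteAsync(new EmailJob("user@example.com"));
        queue.Writer.Complete();
        await consumer;
    }
}
```

The bounded capacity is the key design choice: an unbounded queue would quietly trade CPU pressure for unbounded memory growth, which is exactly the kind of usage the platform's RAM ceiling penalizes.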
The practical takeaway is to treat CPU and RAM limits as architectural constraints rather than afterthoughts. Profile early, cache aggressively, eliminate blocking calls, and review the Terms of Service for full acceptable-use details. Addressing resource usage in development prevents production throttling, reduces support tickets, and delivers consistently responsive applications.