Avoid premature optimizations where unnecessary and focus on Macro-level optimizations

Many developers generally preach that you should never prematurely optimize, however I'm more of the mind that since you're already writing the code, you should think about how often it gets run and optimize accordingly. In practice this means I never optimize one-off code, yet spend considerable effort speeding up [my serializers](http://www.servicestack.net/mythz_blog/?p=344) where any improvement has the potential to improve all my services. You don't know when your head will be back in the space you're currently in, so a decent effort at efficient code (without hacks) is not a bad idea.

With that said, if you're developing an Ajax app, something like [Page Speed](http://code.google.com/speed/page-speed/) that tests the complete end-user experience is invaluable for visualizing which areas of optimization will yield the most value.

### Caching
The other most important facet, and one that runs at the heart of all high-performance systems, is Caching, where even the most inefficient systems can be masked by good caching strategies. The level of caching which provides the most value is what I like to call **front-line caching**, where you cache the outermost layer (saving the most CPU) in its most optimal output format, e.g. if you're developing web services, you want to cache your gzipped/deflated output. The most optimal place to store your cache is in-memory, although if you have load-balanced servers (as many popular systems do) you will want to consider the leading caching servers in this area, which are capable of some [impressive numbers](http://antirez.com/post/update-on-memcached-redis-benchmark.html) (a minimal sketch of the front-line pattern follows this list):

* [Memcached](http://memcached.org) - The original and industry standard
* [Redis](http://redis.io) - The hot new entry into this space, like a Memcached on steroids, used by a [growing number of companies](http://redis.io/topics/whos-using-redis) including .NET's own [StackOverflow](http://highscalability.com/blog/2011/3/3/stack-overflow-architecture-update-now-at-95-million-page-vi.html)
* AppFabric - Worth a mention since it's Microsoft's entry into this area, but their earlier recommendation of SQL Server as a distributed cache and the weak fine-grained caching options inherent in the ASP.NET caching provider leave me unimpressed.
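
To make front-line caching concrete, here is a minimal sketch of the pattern (not ServiceStack's implementation): the final, already-gzipped response bytes are cached against the request URL, so a hit costs a single lookup. The `ConcurrentDictionary`, the key scheme and the `renderGzipped` delegate are stand-ins for whichever cache server and rendering pipeline you actually use.

```csharp
using System;
using System.Collections.Concurrent;

// Front-line caching sketch: cache the outermost, already-gzipped payload so a
// cache hit skips rendering, serialization and compression entirely.
// ConcurrentDictionary stands in for Memcached/Redis here.
static class FrontLineCache
{
    static readonly ConcurrentDictionary<string, byte[]> Cache =
        new ConcurrentDictionary<string, byte[]>();

    // renderGzipped is only invoked on a cache miss
    // (e.g. serialize the response DTO, then gzip the result).
    public static byte[] GetOrRender(string requestUrl, Func<byte[]> renderGzipped)
    {
        return Cache.GetOrAdd(requestUrl, _ => renderGzipped());
    }
}
```
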
Since caching should be treated as an important first-class concept, I like to keep in mind the cacheability and use-cases of my services and pages when designing the level of granularity of my API. Since I like fine-grained control of caching, I prefer to use an [abstract caching layer](https://github.com/ServiceStack/ServiceStack/wiki/Caching), which at the moment has swappable Redis, Memcached, In-Memory and File system providers.

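
As a rough illustration of how such an abstraction reads in practice, here is a sketch against ServiceStack's `ICacheClient` as described in the linked wiki; exact namespaces and provider registration vary by version, and `UserProfile`/`LoadUserFromDb` are assumed names for this example, not library code.

```csharp
// Swappable cache abstraction: the same Get/Set calls work whether the registered
// provider is In-Memory, Redis, Memcached or the file system.
ICacheClient cache = new MemoryCacheClient();             // swap for a Redis/Memcached provider in production

var cacheKey = "urn:user:42";                             // example key scheme
var profile = cache.Get<UserProfile>(cacheKey);
if (profile == null)
{
    profile = LoadUserFromDb(42);                         // hypothetical data-access call
    cache.Set(cacheKey, profile, TimeSpan.FromHours(1));  // time-based expiry only as a fallback
}
```
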
Most .NET developers will likely just make do with 'Time-based caching' as that's the default behaviour in ASP.NET caching provider APIs and OutputCaching directives. My preferred option is to cache on 'Validity', where I invalidate caches manually (e.g. when a user modifies his profile, clear his cache), which means you always get the latest version and you never need to hit the database again to rehydrate the cache if it hasn't changed.

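
A sketch of what validity-based invalidation looks like, reusing the hypothetical `ICacheClient` and key scheme from the previous example (`SaveProfileToDb` is an assumed persistence call):

```csharp
// Validity-based caching: rather than waiting for a timeout, the entry is removed
// the moment the data changes, so the next read always rehydrates the latest version.
public class ProfileService
{
    private readonly ICacheClient cache;
    public ProfileService(ICacheClient cache) { this.cache = cache; }

    public void UpdateProfile(int userId, UserProfile updated)
    {
        SaveProfileToDb(userId, updated);        // hypothetical persistence call
        cache.Remove("urn:user:" + userId);      // invalidate: no stale reads, no needless cache refreshes
    }
}
```
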
### Compression
Related to the subject of Caching is Compression, since they both usually operate on the 'outermost layer', i.e. the final output, which in a lot of cases you should consider compressing if it isn't already (as .jpg, .png, etc. already are). The .NET framework comes with its own GZipStream and DeflateStream classes to do this for you. (Note: prior to 3.5, .NET had a weak implementation of DeflateStream, so I used [DotNetZip](http://dotnetzip.codeplex.com/) instead.)

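
As an example, a small helper along these lines gzips a buffer with the built-in class (`DeflateStream` has the same shape); it's a sketch of the standard framework API rather than code from this project:

```csharp
using System.IO;
using System.IO.Compression;

static class Compressor
{
    // Compresses a buffer with the framework's GZipStream; swap in DeflateStream
    // for raw deflate output.
    public static byte[] Gzip(byte[] uncompressed)
    {
        using (var ms = new MemoryStream())
        {
            using (var gzip = new GZipStream(ms, CompressionMode.Compress))
                gzip.Write(uncompressed, 0, uncompressed.Length);
            return ms.ToArray();   // GZipStream must be closed before reading the buffer
        }
    }
}
```
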