Commit 73a74e2

Edited README.md via GitHub
1 parent 11ab4d4 commit 73a74e2

File tree: 1 file changed (+4 / -4 lines changed)

README.md

Lines changed: 4 additions & 4 deletions
@@ -38,7 +38,7 @@ I also like to coin this the **Google approach** where in all their strategic pr
Avoid premature optimizations where unnecessary and focus on Macro-level optimizations

- Many developers generally preach that you should never prematurely optimize, however I'm more of the mind that since you're already writing code, think about how often this piece of code gets run and optimize accordingly. In practice this means I never optimize one-off code yet spend considerable effort speeding up my serializers where any improvement has the potential to improve all my services. You don't know when your head is going to be back in the space you're currently in, so a decent effort at efficient code (without hacks) is not a bad idea.
+ Many developers generally preach that you should never prematurely optimize, however I'm more of the mind that since you're already writing code, think about how often this piece of code gets run and optimize accordingly. In practice this means I never optimize one-off code yet spend considerable effort speeding up [my serializers](http://www.servicestack.net/mythz_blog/?p=344) where any improvement has the potential to improve all my services. You don't know when your head is going to be back in the space you're currently in, so a decent effort at efficient code (without hacks) is not a bad idea.
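The "optimize what runs often" rule above can be sketched with a quick micro-benchmark — here in Python for brevity (the payload and iteration count are made-up illustrations, not from the original):

```python
import json
import timeit

# Hypothetical hot-path payload: serialized on every service response,
# so small per-call savings multiply across all services.
payload = {"id": 1, "name": "customer", "tags": ["a", "b", "c"]}

# Measure the hot path before deciding whether it's worth optimizing --
# one-off code would never justify this effort.
cost = timeit.timeit(lambda: json.dumps(payload), number=100_000)
print(f"100k serializations: {cost:.3f}s")
```

A measurement like this is what separates "premature" optimization from targeted optimization of code you know runs constantly.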

With that said, if you're developing an Ajax app, something like [Page Speed](http://code.google.com/speed/page-speed/) that tests the complete end-user experience is invaluable to visualize where the areas of optimization will yield the most value.

@@ -66,13 +66,13 @@ Send packet CA -> Netherlands -> CA | 150,000,000
### Caching
The other most important facet, and one that runs at the heart of all high-performance systems, is Caching, where even the most inefficient systems can be masqueraded by good caching strategies. The level of caching which provides the most value is what I like to call **front-line caching** where you cache the outer-most layer (saving the most CPU) in the most optimal output format. E.g. if you're developing web services, you want to cache your gzipped/deflated output. The most optimal place to store your cache is in-memory, although if you have load-balanced servers (as many popular systems do) you will want to consider the leading caching servers in this area capable of some [impressive numbers](http://antirez.com/post/update-on-memcached-redis-benchmark.html):
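The front-line caching idea — cache the final gzipped bytes, not the intermediate objects — can be sketched as follows (a minimal Python analogue; the cache key and DTO builder are hypothetical):

```python
import gzip
import json

# Hypothetical front-line cache: key -> final gzipped bytes. Caching the
# outer-most layer means a hit skips BOTH serialization and compression.
_cache: dict[str, bytes] = {}

def get_response(key: str, build_dto) -> bytes:
    if key not in _cache:
        # Only a cache miss pays for rendering and compressing.
        _cache[key] = gzip.compress(json.dumps(build_dto()).encode("utf-8"))
    return _cache[key]

body = get_response("/customers/1", lambda: {"id": 1, "name": "Alice"})
assert json.loads(gzip.decompress(body)) == {"id": 1, "name": "Alice"}
```

In a load-balanced deployment the `_cache` dict would be replaced by a shared store like Memcached or Redis, but the principle of storing the output in its final wire format is the same.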

- * Memcached - The original and industry standard
- * Redis - The hot new entry into this space, like a Memcached on steroids used by a [growing number of companies](http://redis.io/topics/whos-using-redis) including .NET's own [StackOverflow](http://highscalability.com/blog/2011/3/3/stack-overflow-architecture-update-now-at-95-million-page-vi.html)
+ * [Memcached](http://memcached.org) - The original and industry standard
+ * [Redis](http://redis.io) - The hot new entry into this space, like a Memcached on steroids used by a [growing number of companies](http://redis.io/topics/whos-using-redis) including .NET's own [StackOverflow](http://highscalability.com/blog/2011/3/3/stack-overflow-architecture-update-now-at-95-million-page-vi.html)
* AppFabric - Worth a mention since it's Microsoft's entry into this area, but their earlier recommendations for SQL as a distributed cache and the weak fine-grained caching options inherent in the ASP.NET caching provider leave me unimpressed.

Since caching should be treated and thought about as an important first-class concept, I like to keep in mind the cacheability and use-cases of my services and pages when designing the level of granularity of my API. Because I like fine-grained control of caching, I prefer to use an [abstract caching layer](https://github.com/ServiceStack/ServiceStack/wiki/Caching) which at the moment has swappable Redis, Memcached, In-Memory and File system providers.
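The swappable-provider idea can be sketched with a minimal interface — note this is a Python analogue of the pattern, not ServiceStack's actual API; the class and method names are illustrative:

```python
from typing import Any, Optional

class CacheClient:
    """Minimal sketch of a swappable cache-provider interface. A real
    implementation set would also include Redis, Memcached and file-backed
    providers behind the same contract."""

    def get(self, key: str) -> Optional[Any]:
        raise NotImplementedError

    def set(self, key: str, value: Any) -> None:
        raise NotImplementedError

    def remove(self, key: str) -> None:
        raise NotImplementedError

class InMemoryCacheClient(CacheClient):
    """Simplest provider: a process-local dict, useful for dev and tests."""

    def __init__(self) -> None:
        self._store: dict[str, Any] = {}

    def get(self, key: str) -> Optional[Any]:
        return self._store.get(key)

    def set(self, key: str, value: Any) -> None:
        self._store[key] = value

    def remove(self, key: str) -> None:
        self._store.pop(key, None)
```

Because application code depends only on the abstract interface, switching from in-memory to a distributed cache is a configuration change rather than a rewrite.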

- Most .NET developers will likely just make do with 'Time based caching' as that's the default behaviour in ASP.NET's caching provider APIs and OutputCaching directives. My preferred option is to cache on 'Validity' where I would invalidate caches manually (e.g. when a user modified is profile, clear his cache) which means you always get the latest version and you never need to hit the database again to rehydrate the cache if it hasn't changed.
+ Most .NET developers will likely just make do with 'Time based caching' as that's the default behaviour in ASP.NET's caching provider APIs and OutputCaching directives. My preferred option is to cache on 'Validity' where I would invalidate caches manually (e.g. when a user modified his profile, clear his cache) which means you always get the latest version and you never need to hit the database again to rehydrate the cache if it hasn't changed.

### Compression
Related to the subject of Caching is Compression, since they usually operate on the 'outer-most layer', i.e. the final output, which in a lot of cases you should consider compressing if it's not already (e.g. .jpg, .png, etc.). The .NET framework comes with its own GzipStream and DeflateStream classes to do this for you. (Note: prior to 3.5, .NET had a weak impl of DeflateStream so I used [DotNetZip](http://dotnetzip.codeplex.com/) instead).
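The same gzip/deflate pair exists in most platforms; here is the idea sketched with Python's `gzip` and `zlib` modules as analogues of .NET's GzipStream and DeflateStream (the sample payload is made up):

```python
import gzip
import zlib

# A repetitive text payload compresses very well...
text = b"some repetitive service response " * 100

gzipped = gzip.compress(text)    # analogue of .NET's GzipStream
deflated = zlib.compress(text)   # analogue of DeflateStream (raw zlib/deflate)

assert len(gzipped) < len(text)
assert len(deflated) < len(text)
# ...whereas already-compressed formats (.jpg, .png, etc.) won't shrink
# meaningfully, so skip re-compressing them.
```

Deflate output is a few bytes smaller than gzip (no gzip header/CRC), which is why both are common choices for HTTP `Content-Encoding`.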
