Just a quick thought: as non-volatile storage becomes faster and more affordable, I/O will cease to be the bottleneck it currently is, especially for database servers. Granted, there are applications and web sites out there that will always have to shard their database layer because they deal with a volume of writes well above what a single DB server can handle (I'm talking about mammoth social media sites such as Facebook, Twitter, Tumblr, etc.). By database in this context I mean relational databases; NoSQL-like databases worth their salt are distributed from the get-go, so I am not referring to them in this discussion.
For people who are hoping not to have to shard their RDBMS, things like memcached for reads and super-fast storage such as FusionIO for writes give them a chance to scale a single database server up for a much longer period of time. (By a single database server I mostly mean the server the writes go to, since reads can be scaled more easily, for example by sending them to slaves of the master server in the MySQL world.)
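To make the read-offloading idea concrete, here is a minimal read-through caching sketch in Python. The python-memcached and MySQLdb libraries, the users table, and the key format are my illustrative assumptions, not something prescribed above.

    # Read-through cache sketch: serve reads from memcached, fall back
    # to MySQL on a miss, then cache the row with a TTL. Library choices
    # (python-memcached, MySQLdb) and the schema are assumptions.
    import memcache
    import MySQLdb

    mc = memcache.Client(['127.0.0.1:11211'])
    db = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='appdb')

    def get_user(user_id, ttl=300):
        key = 'user:%d' % user_id
        user = mc.get(key)
        if user is None:                      # cache miss: hit the database
            cur = db.cursor()
            cur.execute("SELECT id, name, email FROM users WHERE id = %s",
                        (user_id,))
            user = cur.fetchone()
            cur.close()
            if user is not None:
                mc.set(key, user, time=ttl)   # subsequent reads skip the DB
        return user

Every read that hits the cache is one less query on the master, which is exactly what buys you time before sharding.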
In this new world, the bottleneck at the database server layer becomes not the I/O subsystem but the CPU. Hence the need to squeeze every ounce of performance out of your code and your SQL queries. Good DBAs will become more important, and good developers writing efficient code will be at a premium. Performance testing will gain a greater place in the overall testing strategy, as developers and DBAs will need to test their code and their SQL queries against in-memory databases to make sure there are no inefficiencies.
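As one sketch of what that query-level testing might look like in the MySQL world, the snippet below runs EXPLAIN against a query and flags full table scans. The helper name and this particular MySQLdb usage are my own illustration, not a tool mentioned in this post.

    # Sketch: run EXPLAIN on a query and warn about full table scans,
    # which burn CPU once I/O stops being the limiting factor.
    import MySQLdb
    import MySQLdb.cursors

    def explain(db, query, params=None):
        cur = db.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("EXPLAIN " + query, params)
        for row in cur.fetchall():
            if row['type'] == 'ALL':          # 'ALL' means a full table scan
                print("WARNING: full scan of %s (~%s rows examined)"
                      % (row['table'], row['rows']))
        cur.close()

    db = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='appdb')
    explain(db, "SELECT * FROM orders WHERE customer_id = %s", (42,))

A check like this can run as part of the test suite, so an unindexed query fails the build instead of quietly eating CPU in production.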
I am using the future tense here, but the future is upon us already, and it's exciting!
6 comments:
Hi Grig. Nice article.
I'm interested to know, from a tester rather than a developer perspective - what tools do you think are useful for measuring code performance?
I disagree. Even if the I/O subsystem becomes an order of magnitude faster, that will not be enough to keep the CPU busy all the time; in latency terms, I/O subsystem >> RAM >> CPU cache, and there are multiple orders of magnitude between each level.
Koen -- you refer to raw speeds here. I refer to poorly written SQL queries and poorly written code that uses those queries. That combination can tie up the CPU, and potentially cause deadlocks as well, no matter how fast your CPU is.
Simon -- thanks for the kind words. I'll follow up on your question with another post soon.
Poorly written SQL is usually poor because the code issues redundant queries that hit the DB, not because so much CPU is spent processing DB results. DBs an order of magnitude faster are going to lessen this problem, not increase it.
Profiling is always the best way to see where to focus optimization efforts. SSDs don't change that.
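(A minimal sketch of Benjamin's profiling advice, using Python's standard-library cProfile; the function being profiled is a made-up placeholder.)

    # Sketch: find CPU hot spots with the standard-library profiler.
    # process_results is a stand-in for the code under test.
    import cProfile
    import pstats

    def process_results(rows):
        return sorted(rows)              # placeholder for real result handling

    cProfile.run('process_results(range(1000000))', 'profile.out')
    stats = pstats.Stats('profile.out')
    stats.sort_stats('cumulative').print_stats(10)   # top 10 by cumulative time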
Benjamin -- I disagree. I think slow I/O masks a lot of issues that come to the forefront once I/O is not the issue anymore. Once you eliminate your #1 problem, #2 becomes #1. And #2 for most people is poorly written queries and poorly written code that makes too much redundant use of queries.
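To illustrate the redundant-query pattern being debated above, here is a hypothetical before-and-after in Python; the orders/customers schema and the MySQLdb usage are assumptions for illustration only.

    # The classic redundant pattern: N+1 queries (one per row) versus a
    # single JOIN. The orders/customers schema is an assumption.
    import MySQLdb

    db = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='appdb')
    cur = db.cursor()

    # Wasteful: one round trip per order; per-query overhead dominates.
    cur.execute("SELECT id, customer_id FROM orders")
    for order_id, customer_id in cur.fetchall():
        cur.execute("SELECT name FROM customers WHERE id = %s", (customer_id,))
        name = cur.fetchone()[0]

    # Better: one JOIN lets the database do the work in a single pass.
    cur.execute("SELECT o.id, c.name FROM orders o "
                "JOIN customers c ON c.id = o.customer_id")
    rows = cur.fetchall()

With slow disks, the N+1 version hides behind I/O wait; on fast storage, its per-query overhead shows up directly as CPU load.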