
Google TechTalk: Open Source Performance Testing

Goranka Bjedov - Using Open Source Tools for Performance Testing, Google TechTalk, 8 September 2006.

Goranka provides a great overview of how Google are doing performance testing:

"solving problems by just adding more machines is contrary to our mission statement, which is, 'let's try to conserve resources'. Make sure you can always throw more machines at it, but eventually you run out of space, you're usng too much electricity and it's just the wrong thing to do."

Performance Testing

"So any time you run a test and you're actually measuring how quickly a system responds to work load - that means performance test. So, in a sense, you're answering the question, given a load x, how fast will the system return the result, right? But the important thing is I'm really interested in the time. That's what I'm measuring."

Stress Testing

"Like, in the case of a stress test, what I'm really more interested in is when will the system fail and how will it fail. I don't necessarily care about how fast the response times are, but hwat I really want to know is under what load will the system fail. And hopefully it's going to do so gracefully."

Load Testing

"Load test, for me means, I'm putting certain load on the system and what I want to find out over a prolonged period of time, how will the system behave. ... But in general for my load tests, ... we try to use about 80 percent of the maximum load that the system can handle and see what happens if we're running the system under that load for a long period of time."

Benchmark Testing

 "And so a benchmark test, most important to us, is ... it's simplified, it's measurable and it's repeatable, right? Those are the three things that are very important to me. ... But basically what bench marks [mean] to us is, let's try to figure out what the customer or other users are doing out in the real world and then klet's extract some subsystem or subset of those operations that we can reasonablt, easily recreat and that reasonably well represents the behaviour of the system. And then every single time we make a code change, lets' run that benchmark and find out, did we mess up the system to the point that it's unusable or did something really change tremendously. So we tend to do a lot of benchmark testing."

Scalability Testing

"And scalability means, if I increase a paricular resource, for example, on a typical system where I may have clients and then some frontends and a load [balancer] in front of them and then backends and so on. What if I have 5 frontends as opposed to 10 or as oppsed to 15. If I double the number of frontends, how will my throughput change? ... So, some particular variable is being changed ... how does it affect the performance of the system or the throoughput of the system ..."

Reliability Testing

"So basically, reliability testing - and I honestly believe that yuo should run it on pretty much any product that you have, and run it for at least 72 hours and find out how is your system handling the load? You'll find out strange things happen."

Availability Testing

"Availability testing is tied to reliability testing ... when people talk about four nines, five nies and so on, that's what they're talking about. Availability testing is something slightly different and it basically says when a system fails how quickly will it come back up. ... So availability testing tells you , okay,  so finally something has failed, but can it be back up online very, very quickly."

Very good talk.
