How We Benchmark at LiteSpeed Technologies
Permit me to introduce myself.
My name is Steven Antonucci, and I am the head of Business Development for LiteSpeed Technologies. In essence, I own sales and marketing for LiteSpeed, but I come from a very technical background. I have been around the industry for 28 years and hold a degree in Electrical Engineering from Stevens Institute of Technology. I spent 17 of those years at Mercury Interactive/HP Software in a mix of sales, marketing, and TECHNICAL roles… including a stint as the NYC lead performance architect for LoadRunner.
So, briefly… I know what a good benchmark looks like.
When I joined LiteSpeed Technologies a few weeks ago, I immediately became aware that people doubted our benchmark results. And frankly, they were correct.
It wasn’t so much that the testing we did was rigged (it wasn’t); it was that the way we talked about it made no sense. So, now that I have been here a full month and have a much better understanding of the market, the players, and our process… I’m fixing that.
Right now.
The Four Covenants of a Benchmark
I have been working on this post (mentally) for a few days now, including at 3 AM this morning. When you consider what it takes to produce a meaningful benchmark, I would offer that it boils down to Four Covenants:
- Ethics
- Tools
- Skills
- Methodology
If you are missing any of these items, your benchmark is meaningless.
Ethics
Everything starts with ethics. If you cannot trust the people, you cannot trust the result.
When we benchmark, we will always describe the environments. We know that our software performs better than our competitors’ does, and we want to prove it on a level playing field. If we are comparing A to B, we compare them on the same hardware, each with its recommended base installation. We do not “tune” any environment to perform better or worse, nor do we change the loads in any way to make one solution look better than another.
I am in charge of the ethics of the benchmarks we produce here at LiteSpeed. I simply will not tolerate any violations of this policy.
Tools
When I got here, most of our tooling was built around Siege. Most of our benchmark data lived in spreadsheets that had been assembled “somehow”, and conclusions were reached “somehow”.
Being comfortable with LoadRunner (I am LoadRunner certified), I immediately upgraded our tooling. If you are not familiar with LoadRunner’s position, it is the dominant load-testing platform, used by more than 75% of the market to assure mission-critical applications and websites. It also provides visibility into benchmark data that we never had in the past, and it automates A LOT of the manual data assembly and presentation work.
LoadRunner also works differently from Siege. LoadRunner proxies the browser to record a context-sensitive script that can follow a business process: logging in, browsing a catalog, adding an item to a shopping cart, and checking out. This is particularly important on platforms like Magento, because shopping-cart functionality is not something you can benchmark with Siege (or similar tools that simply call a list of URLs).
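To make that difference concrete, here is a rough sketch of the kind of script VuGen (LoadRunner’s scripting tool, which generates C) produces when you record a shopper walking through a store. Everything below is a hypothetical, hand-simplified illustration: the URLs, form fields, values, and transaction names are placeholders, and a real recording against a Magento site would also involve correlation and parameterization that I am leaving out.

```c
/*
 * Illustrative only: a hand-simplified approximation of what VuGen records
 * when you proxy a browser through a storefront. All URLs, form fields,
 * and values here are hypothetical placeholders.
 */
Action()
{
    /* Step 1: load the home page and log in as a customer */
    lr_start_transaction("login");
    web_url("home",
        "URL=https://store.example.com/",
        "Mode=HTML",
        LAST);
    web_submit_form("customer_login",
        ITEMDATA,
        "Name=login[username]", "Value=shopper@example.com", ENDITEM,
        "Name=login[password]", "Value={pCustomerPassword}", ENDITEM,  /* parameterized value */
        LAST);
    lr_end_transaction("login", LR_AUTO);

    /* Step 2: browse a catalog category */
    lr_start_transaction("browse_catalog");
    web_url("category_page",
        "URL=https://store.example.com/apparel/shirts.html",
        "Mode=HTML",
        LAST);
    lr_end_transaction("browse_catalog", LR_AUTO);

    /* Step 3: add an item to the shopping cart */
    lr_start_transaction("add_to_cart");
    web_submit_form("add_to_cart",
        ITEMDATA,
        "Name=qty", "Value=1", ENDITEM,
        LAST);
    lr_end_transaction("add_to_cart", LR_AUTO);

    /* Step 4: proceed to checkout */
    lr_start_transaction("checkout");
    web_url("checkout",
        "URL=https://store.example.com/checkout/onepage/",
        "Mode=HTML",
        LAST);
    lr_end_transaction("checkout", LR_AUTO);

    return 0;
}
```

The transaction markers are the point: each step gets its own timing bucket, so under load we can report login, browse, add-to-cart, and checkout response times separately, rather than one averaged number for a flat list of URLs.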
We monitor the environments we are benchmarking with HP’s SiteScope tool alongside LoadRunner. Watching CPU, memory, and other metrics while we apply the test loads helps us explain why we are faster. It also gives us a demo environment for trade shows like WHD, Magento Imagine, and HostingCon.
That’s right. If you want to see it live, we’ll let you put your hands on it. If you want to replicate our tooling, I’m happy to help (santonucci@litespeedtech.com). We are using the Community Editions of both products, and there is zero cost.
Skills
In my 28 years in technology, I’ve talked to A LOT of companies, and they all claim to have the best and smartest people working for them. Statistically, we all know that this is impossible.
Even though I am LoadRunner certified, I wouldn’t hire myself to do benchmarking. What I do have is a breadth of industry contacts to help me do my job. Today, my friends at Foulk Consulting (http://foulkconsulting.com/) are helping with our benchmarking. I’ve known Ryan and his team for 15 years, and I trust that they have the right people for the job. If the largest sports-apparel manufacturer trusts Ryan’s guys to fix their shopping-cart issues… you should too.
Methodology
Being at Mercury during the first dot-com wave, I learned A LOT about methodology. I supported the ActiveTest products, where we guaranteed a 2X performance gain or we refunded the customer’s money. My job was to clearly understand each customer’s environment and assess the risk of the engagement. Our technical team handled all of the heavy lifting for things like the Democratic National Convention and NBA.com, when Yao Ming joined the Houston Rockets and suddenly brought a billion Chinese fans to the site every morning… big stuff.
The way you design a benchmark test really matters. When I arrived, our tests were very simplistic and our conclusions very grandiose. I am bringing far more rigor to the effort, so our benchmarks will be more focused and our results significantly more concrete. You will begin to see mixed-traffic, scenario-based benchmark data describing specific, validated data points drawn from many more test runs. For example, instead of every virtual user hammering the same page, a scenario might have most users browsing the catalog while a smaller share completes a full checkout.
Which is what every world-class engineering team at every company on the planet does.
Summary
Moving forward, you should expect to be blown away by our benchmarks. They will be far more detailed than anything else available in the industry. We’re going to “open source” everything we do around them, meaning you’ll be able to download the same tools we use, run our scripts and our scenarios… and get the same results.
Finally, that offer stands for anyone who wants it… feel free to reach out to me.
santonucci@litespeedtech.com