Commit 2069f250 authored by rniwa@webkit.org

Use perf.webkit.org JSON format in results page

https://bugs.webkit.org/show_bug.cgi?id=110842

Reviewed by Benjamin Poulain.

PerformanceTests: 

Updated the results page template to use the new JSON format.

Since the new JSON format doesn't contain statistics such as stdev and min, added statistics.js to compute
these values. Also use the 95% confidence interval instead of the standard deviation in various places.

* resources/results-template.html: Added statistics.js as dependency.
(TestResult): Updated to take a metric instead of its test. Replaced stdev() with confidenceIntervalDelta()
now that we have a fancy Statistics class.

(TestRun.webkitRevision):
(PerfTestMetric): Renamed from PerfTest since this object now encapsulates each measurement (such as time,
JS heap, and malloc) in a test. Also added a conversion table from metric names to units since the new format
doesn't contain units.
(PerfTestMetric.name): Updated to compute the full metric name from test name and metric name, matching
the old behavior.
(PerfTestMetric.isMemoryTest): Explicitly look for 'JSHeap' and 'Malloc' tests.
(PerfTestMetric.smallerIsBetter):

(attachPlot): Deleted the code to deal with tests that don't provide individual iteration measurements
since such tests no longer exist. Also fixed up the code that computes the y-axis range.

(createTableRow.markupForRun): Updated to use confidenceIntervalDelta() instead of stdev().
        
(init.addTests): Added. Recursively add metrics.

* resources/statistics.js: Added. Imported from perf.webkit.org.
(Statistics.max):
(Statistics.min):
(Statistics.sum):
(Statistics.squareSum):
(Statistics.sampleStandardDeviation):
(Statistics.supportedConfidenceLevels):
(Statistics.confidenceIntervalDelta):
(Statistics.confidenceInterval):
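For reference, the confidence-interval math imported from perf.webkit.org can be sketched in Python (an illustration, not part of the commit): `T_QUANTILE_975` holds the first few one-sided Student-t values for probability 0.975 from the table in statistics.js, and the sample values are the Time samples from EventTargetWrapperTestData in the integration test.

```python
import math

# One-sided Student-t inverse-CDF values for probability 0.975
# (first rows of the 0.975 table in statistics.js; index = degreesOfFreedom - 1).
T_QUANTILE_975 = [12.706205, 4.302653, 3.182446, 2.776445, 2.570582, 2.446912]

def sample_standard_deviation(n, total, square_sum):
    # O(1) sample stdev from the running sum and sum of squares,
    # as in Statistics.sampleStandardDeviation.
    if n < 2:
        return 0.0
    return math.sqrt(square_sum / (n - 1) - total * total / (n - 1) / n)

def confidence_interval_delta_95(values):
    # Delta d such that (mean - d, mean + d) is the 95% confidence interval,
    # mirroring Statistics.confidenceIntervalDelta(0.95, ...).
    n = len(values)
    if n < 2:
        return float('nan')
    total = sum(values)
    square_sum = sum(v * v for v in values)
    quantile = T_QUANTILE_975[n - 2]  # df = n - 1; first table entry is df = 1
    return quantile * sample_standard_deviation(n, total, square_sum) / math.sqrt(n)

# Time samples from EventTargetWrapperTestData (mean 1490, stdev ~15.14):
values = [1486.0, 1471.0, 1510.0, 1505.0, 1478.0, 1490.0]
```

With six samples the delta is roughly 15.89 ms, which is why the results page now displays "± delta/mean" where it previously displayed "± stdev/mean".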

Tools: 

Change the default JSON format from that of webkit-perf.appspot.com to that of perf.webkit.org.

A whole bunch of integration tests have been updated to use the new JSON format.
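As an illustration (not part of the commit), the nested `tests`/`metrics` shape of the new format, and the recursive flattening that init.addTests performs in the results page, can be sketched in Python. The sample data mirrors MemoryTestData in the integration test; `collect_metrics` is a hypothetical, simplified variant of the walk.

```python
# Hypothetical run in the new perf.webkit.org format, shaped like the
# fixtures in perftestsrunner_integrationtest.py.
run = {
    'tests': {
        'Parser': {
            'tests': {
                'memory-test': {
                    'metrics': {
                        'Time': {'current': [1080, 1120, 1095, 1101, 1104]},
                        'Malloc': {'current': [529000, 511000, 548000, 536000, 521000]},
                    }
                }
            }
        }
    }
}

def collect_metrics(tests, parent_full_name=''):
    # Recursively flatten nested tests into 'Test/Path:Metric' -> values,
    # following the same walk as init.addTests in results-template.html.
    metrics = {}
    for test_name, test in tests.items():
        full_name = parent_full_name + '/' + test_name if parent_full_name else test_name
        for metric_name, metric in test.get('metrics', {}).items():
            metrics[full_name + ':' + metric_name] = metric['current']
        if 'tests' in test:
            metrics.update(collect_metrics(test['tests'], full_name))
    return metrics
```

Flattening `run['tests']` yields keys such as `Parser/memory-test:Time`, matching the full metric names that PerfTestMetric.name computes for the results table.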

* Scripts/webkitpy/performance_tests/perftestsrunner.py:
(PerfTestsRunner._generate_and_show_results): Renamed output and output_path to legacy_output
and legacy_output_json_path respectively.
(PerfTestsRunner._generate_results_dict): Don't assume meta build information is always available.
(PerfTestsRunner._generate_output_files): Generate json_output, which is used for the default
JSON file and the results page, from perf_webkit_output instead of legacy_output.
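The meta-info handling described above can be sketched as a standalone Python function (a hypothetical free-function rendering of the logic inside _generate_results_dict; the function name and parameters are illustrative):

```python
def build_perf_webkit_contents(description, build_time, platform, revisions,
                               builder_name, build_number):
    # Optional build metadata is only included when present, so bots without
    # a builder name or build number still produce valid perf.webkit.org JSON.
    contents = {'tests': {}}
    if description:
        contents['description'] = description
    meta_info = {
        'buildTime': build_time,
        'platform': platform,
        'revisions': revisions,
        'builderName': builder_name,
        'buildNumber': int(build_number) if build_number else None,
    }
    for key, value in meta_info.items():
        if value:
            contents[key] = value
    return contents
```

A run without bot metadata simply omits `builderName` and `buildNumber` instead of emitting null fields.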

* Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py:
(MainTest.test_run_memory_test):
(MainTest._test_run_with_json_output.mock_upload_json):
(MainTest):
(MainTest.test_run_with_json_output):
(MainTest.test_run_with_description):
(MainTest.test_run_generates_json_by_default):
(MainTest.test_run_merges_output_by_default):
(MainTest.test_run_respects_reset_results):
(MainTest.test_run_generates_and_show_results_page):
(MainTest.test_run_with_slave_config_json):
(MainTest.test_run_with_multiple_repositories):
(MainTest.test_run_with_upload_json):
(MainTest.test_run_with_upload_json_should_generate_perf_webkit_json):


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@144141 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent 1dc41965
2013-02-25 Ryosuke Niwa <rniwa@webkit.org>
@@ -137,8 +137,9 @@ Reference <span id="reference" class="checkbox"></span>
<script>
(function () {
var jQuery = ['PerformanceTests/Dromaeo/resources/dromaeo/web/lib/jquery-1.6.4.js'];
var plugins = ['PerformanceTests/resources/jquery.flot.min.js', 'PerformanceTests/resources/jquery.tablesorter.min.js'];
var jQuery = 'PerformanceTests/Dromaeo/resources/dromaeo/web/lib/jquery-1.6.4.js';
var plugins = ['PerformanceTests/resources/jquery.flot.min.js', 'PerformanceTests/resources/jquery.tablesorter.min.js',
'PerformanceTests/resources/statistics.js'];
var localPath = '%AbsolutePathToWebKitTrunk%';
var remotePath = 'https://svn.webkit.org/repository/webkit/trunk';
var numberOfFailures = 0;
@@ -183,27 +184,29 @@ Reference <span id="reference" class="checkbox"></span>
});
})();
function TestResult(associatedTest, result, associatedRun) {
this.unit = function () { return result.unit; }
this.test = function () { return associatedTest; }
this.values = function () { return result.values ? result.values.map(function (value) { return associatedTest.scalingFactor() * value; }) : undefined; }
this.unscaledMean = function () { return result.avg; }
this.mean = function () { return associatedTest.scalingFactor() * result.avg; }
this.min = function () { return associatedTest.scalingFactor() * result.min; }
this.max = function () { return associatedTest.scalingFactor() * result.max; }
this.stdev = function () { return associatedTest.scalingFactor() * result.stdev; }
this.stdevRatio = function () { return result.stdev / result.avg; }
this.percentDifference = function(other) { return (other.mean() - this.mean()) / this.mean(); }
function TestResult(metric, values, associatedRun) {
this.test = function () { return metric; }
this.values = function () { return values.map(function (value) { return metric.scalingFactor() * value; }); }
this.unscaledMean = function () { return Statistics.sum(values) / values.length; }
this.mean = function () { return metric.scalingFactor() * this.unscaledMean(); }
this.min = function () { return metric.scalingFactor() * Statistics.min(values); }
this.max = function () { return metric.scalingFactor() * Statistics.max(values); }
this.confidenceIntervalDelta = function () {
return metric.scalingFactor() * Statistics.confidenceIntervalDelta(0.95, values.length,
Statistics.sum(values), Statistics.squareSum(values));
}
this.confidenceIntervalDeltaRatio = function () { return this.confidenceIntervalDelta() / this.mean(); }
this.percentDifference = function(other) { return (other.unscaledMean() - this.unscaledMean()) / this.unscaledMean(); }
this.isStatisticallySignificant = function (other) {
var diff = Math.abs(other.mean() - this.mean());
return diff > this.stdev() && diff > other.stdev();
return diff > this.confidenceIntervalDelta() && diff > other.confidenceIntervalDelta();
}
this.run = function () { return associatedRun; }
}
function TestRun(entry) {
this.description = function () { return entry['description']; }
this.webkitRevision = function () { return entry['webkit-revision']; }
this.webkitRevision = function () { return entry['revisions']['WebKit']['revision']; }
this.label = function () {
var label = 'r' + this.webkitRevision();
if (this.description())
@@ -212,10 +215,11 @@ function TestRun(entry) {
}
}
function PerfTest(name) {
function PerfTestMetric(name, metric) {
var testResults = [];
var cachedUnit = null;
var cachedScalingFactor = null;
var unit = {'FrameRate': 'fps', 'Runs': 'runs/s', 'Time': 'ms', 'Malloc': 'bytes', 'JSHeap': 'bytes'}[metric];
// We can't do this in TestResult because all results for each test need to share the same unit and the same scaling factor.
function computeScalingFactorIfNeeded() {
@@ -224,7 +228,6 @@ function PerfTest(name) {
if (!testResults.length || cachedUnit)
return;
var unit = testResults[0].unit(); // FIXME: We should verify that all results have the same unit.
var mean = testResults[0].unscaledMean(); // FIXME: We should look at all values.
var kilo = unit == 'bytes' ? 1024 : 1000;
if (mean > 10 * kilo * kilo && unit != 'ms') {
@@ -239,8 +242,8 @@ function PerfTest(name) {
}
}
this.name = function () { return name; }
this.isMemoryTest = function () { return name.indexOf(':') >= 0; }
this.name = function () { return name + ':' + metric; }
this.isMemoryTest = function () { return metric == 'JSHeap' || metric == 'Malloc'; }
this.addResult = function (newResult) {
testResults.push(newResult);
cachedUnit = null;
@@ -255,7 +258,7 @@ function PerfTest(name) {
computeScalingFactorIfNeeded();
return cachedUnit;
}
this.smallerIsBetter = function () { return testResults[0].unit() == 'ms' || testResults[0].unit() == 'bytes'; }
this.smallerIsBetter = function () { return unit == 'ms' || unit == 'bytes'; }
}
var plotColor = 'rgb(230,50,50)';
@@ -353,22 +356,15 @@ function attachPlot(test, plotContainer, minIsZero) {
return newValues ? values.concat(newValues.map(function (value) { return [index, value]; })) : values;
}, []);
var plotData = [];
if (values.length)
plotData = [$.extend(true, {}, subpointsPlotOptions, {data: values})];
else {
function makeSubpoints(id, callback) { return $.extend(true, {}, subpointsPlotOptions, {id: id, data: results.map(callback)}); }
plotData = [makeSubpoints('min', function (result, index) { return [index, result.min()]; }),
makeSubpoints('max', function (result, index) { return [index, result.max()]; }),
makeSubpoints('-&#963;', function (result, index) { return [index, result.mean() - result.stdev()]; }),
makeSubpoints('+&#963;', function (result, index) { return [index, result.mean() + result.stdev()]; })];
}
var plotData = [$.extend(true, {}, subpointsPlotOptions, {data: values})];
plotData.push({id: '&mu;', data: results.map(function (result, index) { return [index, result.mean()]; }), color: plotColor});
var overallMax = Statistics.max(results.map(function (result, index) { return result.max(); }));
var overallMin = Statistics.min(results.map(function (result, index) { return result.min(); }));
var margin = (overallMax - overallMin) * 0.1;
var currentPlotOptions = $.extend(true, {}, mainPlotOptions, {yaxis: {
min: minIsZero ? 0 : Math.min.apply(Math, results.map(function (result, index) { return result.min(); })) * 0.98,
max: Math.max.apply(Math, results.map(function (result, index) { return result.max(); })) * (minIsZero ? 1.1 : 1.01)}});
min: minIsZero ? 0 : overallMin - margin,
max: minIsZero ? overallMax * 1.1 : overallMax + margin}});
currentPlotOptions.xaxis.max = results.length - 0.5;
currentPlotOptions.xaxis.ticks = results.map(function (result, index) { return [index, result.run().label()]; });
@@ -475,13 +471,13 @@ function createTableRow(runs, test, referenceIndex) {
}
}
var statistics = '&sigma;=' + toFixedWidthPrecision(result.stdev()) + ', min=' + toFixedWidthPrecision(result.min())
var statistics = '&sigma;=' + toFixedWidthPrecision(result.confidenceIntervalDelta()) + ', min=' + toFixedWidthPrecision(result.min())
+ ', max=' + toFixedWidthPrecision(result.max()) + '\n' + regressionAnalysis;
// Tablesorter doesn't know about the second cell so put the comparison in the invisible element.
return '<td class="result" title="' + statistics + '">' + toFixedWidthPrecision(result.mean()) + hiddenValue
+ '</td><td class="stdev" title="' + statistics + '">&plusmn; '
+ formatPercentage(result.stdevRatio()) + warning + '</td>' + comparisonCell;
+ '</td><td class="confidenceIntervalDelta" title="' + statistics + '">&plusmn; '
+ formatPercentage(result.confidenceIntervalDeltaRatio()) + warning + '</td>' + comparisonCell;
}
function markupForMissingRun(isReference) {
@@ -547,24 +543,42 @@ function init() {
});
var runs = [];
var tests = {};
var metrics = {};
$.each(JSON.parse(document.getElementById('json').textContent), function (index, entry) {
var run = new TestRun(entry);
runs.push(run);
$.each(entry.results, function (test, result) {
if (!tests[test])
tests[test] = new PerfTest(test);
tests[test].addResult(new TestResult(tests[test], result, run));
});
function addTests(tests, parentFullName) {
for (var testName in tests) {
var fullTestName = parentFullName + '/' + testName;
var rawMetrics = tests[testName].metrics;
for (var metricName in rawMetrics) {
var fullMetricName = fullTestName + ':' + metricName;
var metric = metrics[fullMetricName];
if (!metric) {
metric = new PerfTestMetric(fullTestName, metricName);
metrics[fullMetricName] = metric;
}
metric.addResult(new TestResult(metric, rawMetrics[metricName].current, run));
}
if (tests[testName].tests)
addTests(tests[testName].tests, fullTestName);
}
}
addTests(entry.tests, '');
});
var shouldIgnoreMemory = true;
var referenceIndex = 0;
createTable(tests, runs, shouldIgnoreMemory, referenceIndex);
createTable(metrics, runs, shouldIgnoreMemory, referenceIndex);
$('#time-memory').bind('change', function (event, checkedElement) {
shouldIgnoreMemory = checkedElement.textContent == 'Time';
createTable(tests, runs, shouldIgnoreMemory, referenceIndex);
createTable(metrics, runs, shouldIgnoreMemory, referenceIndex);
});
runs.map(function (run, index) {
@@ -573,7 +587,7 @@ function init() {
$('#reference').bind('change', function (event, checkedElement) {
referenceIndex = parseInt(checkedElement.getAttribute('value'));
createTable(tests, runs, shouldIgnoreMemory, referenceIndex);
createTable(metrics, runs, shouldIgnoreMemory, referenceIndex);
});
$('.checkbox').each(function (index, checkbox) {
/*
* Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
var Statistics = new (function () {
this.max = function (values) {
return Math.max.apply(Math, values);
}
this.min = function (values) {
return Math.min.apply(Math, values);
}
this.sum = function (values) {
return values.reduce(function (a, b) { return a + b; }, 0);
}
this.squareSum = function (values) {
return values.reduce(function (sum, value) { return sum + value * value;}, 0);
}
// With sum and sum of squares, we can compute the sample standard deviation in O(1).
// See https://rniwa.com/2012-11-10/sample-standard-deviation-in-terms-of-sum-and-square-sum-of-samples/
this.sampleStandardDeviation = function (numberOfSamples, sum, squareSum) {
if (numberOfSamples < 2)
return 0;
return Math.sqrt(squareSum / (numberOfSamples - 1)
- sum * sum / (numberOfSamples - 1) / numberOfSamples);
}
this.supportedConfidenceLevels = function () {
var supportedLevels = [];
for (var quantile in tDistributionInverseCDF)
supportedLevels.push((1 - (1 - quantile) * 2).toFixed(2));
return supportedLevels;
}
// Computes the delta d s.t. (mean - d, mean + d) is the confidence interval with the specified confidence level in O(1).
this.confidenceIntervalDelta = function (confidenceLevel, numberOfSamples, sum, squareSum) {
var probability = (1 - (1 - confidenceLevel) / 2);
if (!(probability in tDistributionInverseCDF)) {
throw 'We only support ' + this.supportedConfidenceLevels().map(
function (level) { return level * 100 + '%'; } ).join(', ') + ' confidence intervals.';
}
if (numberOfSamples - 2 < 0)
return NaN;
var cdfForProbability = tDistributionInverseCDF[probability];
var degreesOfFreedom = numberOfSamples - 1;
if (degreesOfFreedom > cdfForProbability.length)
throw 'We only support up to ' + cdfForProbability.length + ' degrees of freedom';
// delta = tDistributionQuantile(degreesOfFreedom, probability) * sampleStandardDeviation / sqrt(numberOfSamples)
var quantile = cdfForProbability[degreesOfFreedom - 1]; // The first entry is for the one degree of freedom.
return quantile * this.sampleStandardDeviation(numberOfSamples, sum, squareSum) / Math.sqrt(numberOfSamples);
}
this.confidenceInterval = function (values, probability) {
var sum = this.sum(values);
var mean = sum / values.length;
var delta = this.confidenceIntervalDelta(probability || 0.95, values.length, sum, this.squareSum(values));
return [mean - delta, mean + delta];
}
// See http://en.wikipedia.org/wiki/Student's_t-distribution#Table_of_selected_values
// This table contains one sided (a.k.a. tail) values.
var tDistributionInverseCDF = {
0.9: [
3.077684, 1.885618, 1.637744, 1.533206, 1.475884, 1.439756, 1.414924, 1.396815, 1.383029, 1.372184,
1.363430, 1.356217, 1.350171, 1.345030, 1.340606, 1.336757, 1.333379, 1.330391, 1.327728, 1.325341,
1.323188, 1.321237, 1.319460, 1.317836, 1.316345, 1.314972, 1.313703, 1.312527, 1.311434, 1.310415,
1.309464, 1.308573, 1.307737, 1.306952, 1.306212, 1.305514, 1.304854, 1.304230, 1.303639, 1.303077,
1.302543, 1.302035, 1.301552, 1.301090, 1.300649, 1.300228, 1.299825, 1.299439, 1.299069, 1.298714,
1.298373, 1.298045, 1.297730, 1.297426, 1.297134, 1.296853, 1.296581, 1.296319, 1.296066, 1.295821,
1.295585, 1.295356, 1.295134, 1.294920, 1.294712, 1.294511, 1.294315, 1.294126, 1.293942, 1.293763,
1.293589, 1.293421, 1.293256, 1.293097, 1.292941, 1.292790, 1.292643, 1.292500, 1.292360, 1.292224,
1.292091, 1.291961, 1.291835, 1.291711, 1.291591, 1.291473, 1.291358, 1.291246, 1.291136, 1.291029,
1.290924, 1.290821, 1.290721, 1.290623, 1.290527, 1.290432, 1.290340, 1.290250, 1.290161, 1.290075],
0.95: [
6.313752, 2.919986, 2.353363, 2.131847, 2.015048, 1.943180, 1.894579, 1.859548, 1.833113, 1.812461,
1.795885, 1.782288, 1.770933, 1.761310, 1.753050, 1.745884, 1.739607, 1.734064, 1.729133, 1.724718,
1.720743, 1.717144, 1.713872, 1.710882, 1.708141, 1.705618, 1.703288, 1.701131, 1.699127, 1.697261,
1.695519, 1.693889, 1.692360, 1.690924, 1.689572, 1.688298, 1.687094, 1.685954, 1.684875, 1.683851,
1.682878, 1.681952, 1.681071, 1.680230, 1.679427, 1.678660, 1.677927, 1.677224, 1.676551, 1.675905,
1.675285, 1.674689, 1.674116, 1.673565, 1.673034, 1.672522, 1.672029, 1.671553, 1.671093, 1.670649,
1.670219, 1.669804, 1.669402, 1.669013, 1.668636, 1.668271, 1.667916, 1.667572, 1.667239, 1.666914,
1.666600, 1.666294, 1.665996, 1.665707, 1.665425, 1.665151, 1.664885, 1.664625, 1.664371, 1.664125,
1.663884, 1.663649, 1.663420, 1.663197, 1.662978, 1.662765, 1.662557, 1.662354, 1.662155, 1.661961,
1.661771, 1.661585, 1.661404, 1.661226, 1.661052, 1.660881, 1.660715, 1.660551, 1.660391, 1.660234],
0.975: [
12.706205, 4.302653, 3.182446, 2.776445, 2.570582, 2.446912, 2.364624, 2.306004, 2.262157, 2.228139,
2.200985, 2.178813, 2.160369, 2.144787, 2.131450, 2.119905, 2.109816, 2.100922, 2.093024, 2.085963,
2.079614, 2.073873, 2.068658, 2.063899, 2.059539, 2.055529, 2.051831, 2.048407, 2.045230, 2.042272,
2.039513, 2.036933, 2.034515, 2.032245, 2.030108, 2.028094, 2.026192, 2.024394, 2.022691, 2.021075,
2.019541, 2.018082, 2.016692, 2.015368, 2.014103, 2.012896, 2.011741, 2.010635, 2.009575, 2.008559,
2.007584, 2.006647, 2.005746, 2.004879, 2.004045, 2.003241, 2.002465, 2.001717, 2.000995, 2.000298,
1.999624, 1.998972, 1.998341, 1.997730, 1.997138, 1.996564, 1.996008, 1.995469, 1.994945, 1.994437,
1.993943, 1.993464, 1.992997, 1.992543, 1.992102, 1.991673, 1.991254, 1.990847, 1.990450, 1.990063,
1.989686, 1.989319, 1.988960, 1.988610, 1.988268, 1.987934, 1.987608, 1.987290, 1.986979, 1.986675,
1.986377, 1.986086, 1.985802, 1.985523, 1.985251, 1.984984, 1.984723, 1.984467, 1.984217, 1.983972],
0.99: [
31.820516, 6.964557, 4.540703, 3.746947, 3.364930, 3.142668, 2.997952, 2.896459, 2.821438, 2.763769,
2.718079, 2.680998, 2.650309, 2.624494, 2.602480, 2.583487, 2.566934, 2.552380, 2.539483, 2.527977,
2.517648, 2.508325, 2.499867, 2.492159, 2.485107, 2.478630, 2.472660, 2.467140, 2.462021, 2.457262,
2.452824, 2.448678, 2.444794, 2.441150, 2.437723, 2.434494, 2.431447, 2.428568, 2.425841, 2.423257,
2.420803, 2.418470, 2.416250, 2.414134, 2.412116, 2.410188, 2.408345, 2.406581, 2.404892, 2.403272,
2.401718, 2.400225, 2.398790, 2.397410, 2.396081, 2.394801, 2.393568, 2.392377, 2.391229, 2.390119,
2.389047, 2.388011, 2.387008, 2.386037, 2.385097, 2.384186, 2.383302, 2.382446, 2.381615, 2.380807,
2.380024, 2.379262, 2.378522, 2.377802, 2.377102, 2.376420, 2.375757, 2.375111, 2.374482, 2.373868,
2.373270, 2.372687, 2.372119, 2.371564, 2.371022, 2.370493, 2.369977, 2.369472, 2.368979, 2.368497,
2.368026, 2.367566, 2.367115, 2.366674, 2.366243, 2.365821, 2.365407, 2.365002, 2.364606, 2.364217]
};
})();
if (typeof module != 'undefined') {
for (var key in Statistics)
module.exports[key] = Statistics[key];
}
2013-02-25 Ryosuke Niwa <rniwa@webkit.org>
@@ -207,25 +207,25 @@ class PerfTestsRunner(object):
def _generate_and_show_results(self):
options = self._options
output_json_path = self._output_json_path()
output, perf_webkit_output = self._generate_results_dict(self._timestamp, options.description, options.platform, options.builder_name, options.build_number)
perf_webkit_json_path = self._output_json_path()
legacy_output, perf_webkit_output = self._generate_results_dict(self._timestamp, options.description, options.platform, options.builder_name, options.build_number)
if options.slave_config_json_path:
output, perf_webkit_output = self._merge_slave_config_json(options.slave_config_json_path, output, perf_webkit_output)
if not output:
legacy_output, perf_webkit_output = self._merge_slave_config_json(options.slave_config_json_path, legacy_output, perf_webkit_output)
if not legacy_output:
return self.EXIT_CODE_BAD_SOURCE_JSON
output = self._merge_outputs_if_needed(output_json_path, output)
if not output:
perf_webkit_output = self._merge_outputs_if_needed(perf_webkit_json_path, perf_webkit_output)
if not perf_webkit_output:
return self.EXIT_CODE_BAD_MERGE
perf_webkit_output = [perf_webkit_output]
legacy_output = [legacy_output]
results_page_path = self._host.filesystem.splitext(output_json_path)[0] + '.html'
perf_webkit_json_path = self._host.filesystem.splitext(output_json_path)[0] + '-perf-webkit.json' if options.test_results_server else None
self._generate_output_files(output_json_path, perf_webkit_json_path, results_page_path, output, perf_webkit_output)
results_page_path = self._host.filesystem.splitext(perf_webkit_json_path)[0] + '.html'
legacy_output_json_path = self._host.filesystem.splitext(perf_webkit_json_path)[0] + '-legacy.json' if options.test_results_server else None
self._generate_output_files(legacy_output_json_path, perf_webkit_json_path, results_page_path, legacy_output, perf_webkit_output)
if options.test_results_server:
if not self._upload_json(options.test_results_server, output_json_path):
if not self._upload_json(options.test_results_server, legacy_output_json_path):
return self.EXIT_CODE_FAILED_UPLOADING
# FIXME: Remove this code once we've made transition to use perf.webkit.org
@@ -253,13 +253,20 @@ class PerfTestsRunner(object):
if value:
contents[key] = value
contents_for_perf_webkit = {
'builderName': builder_name,
'buildNumber': str(build_number),
contents_for_perf_webkit = {'tests': {}}
if description:
contents_for_perf_webkit['description'] = description
meta_info = {
'buildTime': self._datetime_in_ES5_compatible_iso_format(self._utc_timestamp),
'platform': platform,
'revisions': revisions_for_perf_webkit,
'tests': {}}
'builderName': builder_name,
'buildNumber': int(build_number) if build_number else None}
for key, value in meta_info.items():
if value:
contents_for_perf_webkit[key] = value
# FIXME: Make this function shorter once we've transitioned to use perf.webkit.org.
for metric_full_name, result in self._results.iteritems():
@@ -322,12 +329,13 @@ class PerfTestsRunner(object):
_log.error("Failed to merge output JSON file %s: %s" % (output_json_path, error))
return None
def _generate_output_files(self, output_json_path, perf_webkit_json_path, results_page_path, output, perf_webkit_output):
def _generate_output_files(self, output_json_path, perf_webkit_json_path, results_page_path, legacy_output, perf_webkit_output):
filesystem = self._host.filesystem
json_output = json.dumps(output)
filesystem.write_text_file(output_json_path, json_output)
if output_json_path:
filesystem.write_text_file(output_json_path, json.dumps(legacy_output))
json_output = json.dumps(perf_webkit_output)
if perf_webkit_json_path:
filesystem.write_text_file(perf_webkit_json_path, json.dumps(perf_webkit_output))
@@ -92,8 +92,8 @@ Finished: 0.1 s
"""
results = {"max": 1510, "avg": 1490, "median": 1488, "min": 1471, "stdev": 15.13935, "unit": "ms",
"values": [1486, 1471, 1510, 1505, 1478, 1490]}
results = {'url': 'http://trac.webkit.org/browser/trunk/PerformanceTests/Bindings/event-target-wrapper.html',
'metrics': {'Time': {'current': [1486.0, 1471.0, 1510.0, 1505.0, 1478.0, 1490.0]}}}
class SomeParserTestData:
@@ -157,12 +157,9 @@ median= 529000.0 bytes, stdev= 14124.44689 bytes, min= 511000.0 bytes, max= 5480
Finished: 0.1 s
"""
results = {'values': [1080, 1120, 1095, 1101, 1104], 'avg': 1100, 'min': 1080, 'max': 1120,
'stdev': 14.50861, 'median': 1101, 'unit': 'ms'}
js_heap_results = {'values': [825000, 811000, 848000, 837000, 829000], 'avg': 830000, 'min': 811000, 'max': 848000,
'stdev': 13784.04875, 'median': 829000, 'unit': 'bytes'}
malloc_results = {'values': [529000, 511000, 548000, 536000, 521000], 'avg': 529000, 'min': 511000, 'max': 548000,
'stdev': 14124.44689, 'median': 529000, 'unit': 'bytes'}
results = {'current': [1080, 1120, 1095, 1101, 1104]}
js_heap_results = {'current': [825000, 811000, 848000, 837000, 829000]}
malloc_results = {'current': [529000, 511000, 548000, 536000, 521000]}
class TestDriver:
@@ -302,19 +299,10 @@ class MainTest(unittest.TestCase):
stdout, stderr, log = output.restore_output()
self.assertEqual(unexpected_result_count, 0)
self.assertEqual(self._normalize_output(log), MemoryTestData.output + '\nMOCK: user.open_url: file://...\n')
results = self._load_output_json(runner)[0]['results']
values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
# Stdev for test doesn't match on some bots
self.assertEqual(sorted(results['Parser/memory-test'].keys()), sorted(MemoryTestData.results.keys()))
for key in MemoryTestData.results:
if key == 'stdev':
self.assertAlmostEqual(results['Parser/memory-test'][key], MemoryTestData.results[key], places=4)
else:
self.assertEqual(results['Parser/memory-test'][key], MemoryTestData.results[key])
self.assertEqual(results['Parser/memory-test'], MemoryTestData.results)
self.assertEqual(results['Parser/memory-test:JSHeap'], MemoryTestData.js_heap_results)
self.assertEqual(results['Parser/memory-test:Malloc'], MemoryTestData.malloc_results)
parser_tests = self._load_output_json(runner)[0]['tests']['Parser']['tests']
self.assertEqual(parser_tests['memory-test']['metrics']['Time'], MemoryTestData.results)
self.assertEqual(parser_tests['memory-test']['metrics']['JSHeap'], MemoryTestData.js_heap_results)
self.assertEqual(parser_tests['memory-test']['metrics']['Malloc'], MemoryTestData.malloc_results)
def _test_run_with_json_output(self, runner, filesystem, upload_suceeds=False, results_shown=True, expected_exit_code=0):
filesystem.write_text_file(runner._base_path + '/inspector/pass.html', 'some content')
@@ -325,7 +313,7 @@ class MainTest(unittest.TestCase):
def mock_upload_json(hostname, json_path, host_path=None):
# FIXME: Get rid of the hard-coded perf.webkit.org once we've completed the transition.
self.assertIn(hostname, ['some.host', 'perf.webkit.org'])
self.assertIn(json_path, ['/mock-checkout/output.json', '/mock-checkout/output-perf-webkit.json'])
self.assertIn(json_path, ['/mock-checkout/output.json', '/mock-checkout/output-legacy.json'])
self.assertIn(host_path, [None, '/api/report'])
uploaded[0] = upload_suceeds
return upload_suceeds
@@ -351,16 +339,17 @@ class MainTest(unittest.TestCase):
return logs
_event_target_wrapper_and_inspector_results = {
"Bindings/event-target-wrapper": EventTargetWrapperTestData.results,
"inspector/pass.html:group_name:test_name": 42}
"Bindings":
{"url": "http://trac.webkit.org/browser/trunk/PerformanceTests/Bindings",
"tests": {"event-target-wrapper": EventTargetWrapperTestData.results}}}
def test_run_with_json_output(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server=some.host'])
self._test_run_with_json_output(runner, port.host.