Commit 96514be3 authored by rniwa@webkit.org

PerformanceTests: Dromaeo should report individual test result

https://bugs.webkit.org/show_bug.cgi?id=99800

Reviewed by Eric Seidel.

Made one small modification to Dromaeo's webrunner.js so that it reports individual runs/s values
for each subtest. This allows us to compute the aggregated runs/s for each iteration like other
performance tests.
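
To make the aggregation concrete, here is a minimal sketch of the arithmetic (the subtest names and values
are made up for illustration): each subtest's runs/s value is converted to seconds per run, those times are
summed across subtests, and the reciprocal of the sum is the aggregated runs/s for that iteration.

    // Hypothetical per-subtest results for a single iteration, in runs/s (illustrative values only).
    var subtestRunsPerSecond = { 'dom-attr': 120.5, 'dom-modify': 89.2, 'dom-query': 310.0 };
    var totalSecondsPerRun = 0;
    for (var name in subtestRunsPerSecond)
        totalSecondsPerRun += 1 / subtestRunsPerSecond[name];   // seconds spent on one run of this subtest
    var aggregatedRunsPerSecond = 1 / totalSecondsPerRun;       // aggregated runs/s for this iteration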

Also stopped measuring memory usage in Dromaeo tests because some Dromaeo tests (e.g. jslib-modify-jquery)
have unrealistic memory usage, and measuring them at the time of teardown doesn't make much sense.

* Animation/balls.html: Fixed typo: measureValueAync.
* Dromaeo/resources/dromaeo/web/webrunner.js:

* Dromaeo/resources/dromaeorunner.js:
(DRT.setup): Call prepareToMeasureValuesAsync so that DRT.teardown can use measureValueAsync, and log
"Running 5 times". Since the log container will be inserted before the iframe, we need to explicitly insert
the iframe as the first child of the body element to prevent logs from affecting the iframe's position.
Also specify the number of iterations by calling PerfTestRunner.iterationCount() so that we may adjust
the number of iterations in PerfTestRunner; a condensed sketch of this setup path follows this entry.

(DRT.progress): Log individual measurement for each subtest.
(DRT.teardown): Compute the aggregated result for each iteration, and log them using measureValueAsync.
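
The dromaeorunner.js hunk later in this commit implements the setup described above; the following is a
condensed, commented sketch of that path (the iframe load handler and error handling are elided):

    setup: function(testName) {
        // Declare the async-measured test up front so that DRT.teardown can call measureValueAsync,
        // and opt out of memory sampling and warm-up skipping for Dromaeo.
        PerfTestRunner.prepareToMeasureValuesAsync({iterationCount: 5, doNotMeasureMemoryUsage: true,
                                                    doNotIgnoreInitialRun: true, unit: 'runs/s'});
        // Tell Dromaeo how many iterations to run so both sides agree on the count.
        var url = DRT.baseURL + "?" + testName + '&numTests=' + PerfTestRunner.iterationCount();
        var iframe = document.createElement("iframe");
        iframe.setAttribute("src", url);
        // Log lines are inserted before the iframe, so keep the iframe as the first child of <body>
        // to prevent accumulating logs from shifting its position.
        document.body.insertBefore(iframe, document.body.firstChild);
    },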

* resources/runner.js:
* resources/runner.js:
(PerfTestRunner.logStatistics): Merged printStatistics into this function since a separate printStatistics is
no longer needed after r131638.
(PerfTestRunner): Removed getAndPrintMemoryStatistics since it was only used by Dromaeo tests, which no
longer measure memory usage.

(start): Increment completedIterations from -1 to 0 for Dromaeo tests, where we don't want to ignore the initial
measurement (see the warm-up sketch below). Note that ignoreWarmUpAndLog ignores the measurements for which
completedIterations is negative.

(ignoreWarmUpAndLog): We don't measure memory usage in Dromaeo tests. See above.
(PerfTestRunner.iterationCount): Added. This abstraction allows us to auto-adjust the number of iterations from
run-perf-tests in the near future.
(PerfTestRunner.measureValueAsync): Renamed from measureValueAync.
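
A minimal sketch of the warm-up convention mentioned above, condensed from the runner.js hunks below
(variable names follow the diff; the warm-up check is paraphrased from the ChangeLog text, and the
surrounding runner machinery is elided):

    // start(): the iteration counter normally begins at -1, so the very first measurement is treated as
    // a warm-up run. Dromaeo opts out because its first run is a real measurement.
    if (test.doNotIgnoreInitialRun)
        completedIterations++;                      // -1 -> 0: nothing gets discarded

    // ignoreWarmUpAndLog(): measurements taken while the counter is negative are logged but discarded.
    if (completedIterations < 0)
        PerfTestRunner.log("Ignoring warm-up run (" + labeledResult + ")");
    else {
        results.push(measuredValue);
        // Memory sampling is skipped entirely when the test sets doNotMeasureMemoryUsage.
        if (window.internals && !currentTest.doNotMeasureMemoryUsage) {
            jsHeapResults.push(getUsedJSHeap());
            mallocHeapResults.push(getUsedMallocHeap());
        }
        PerfTestRunner.log(labeledResult);
    }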

Tools: Dromaeo should report individual test result
https://bugs.webkit.org/show_bug.cgi?id=99800

Reviewed by Eric Seidel.

Ignore the subtest results that Dromaeo tests now emit.

* Scripts/webkitpy/performance_tests/perftest.py:
(PerfTest): Added a pattern for these subtest result lines to the list of lines to ignore (see the sketch below).
* Scripts/webkitpy/performance_tests/perftest_unittest.py:
(MainTest.test_parse_output_with_subtests): Added.
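
The new ignore pattern matches subtest lines of the form "name: [v1, v2, ...]"; the Python original is in
the perftest.py hunk below. As a quick illustration, an equivalent (slightly tightened) JavaScript regular
expression applied to made-up log lines:

    // A subtest name, a colon, then a bracketed, comma-separated list of numbers.
    var subtestLine = /^(.+): \[((\d+(\.\d+)?,\s+)*\d+(\.\d+)?)\]$/;
    console.log(subtestLine.test('some test: [1, 2, 3, 4, 5]'));   // true: ignored as a subtest line
    console.log(subtestLine.test('avg 1100 ms'));                  // false: statistics lines are still parsed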

LayoutTests: Fix a test and re-enable fast/harness/perftests on Chromium.

* fast/harness/perftests/runs-per-second-log.html:
* platform/chromium/TestExpectations:


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@136492 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent eb9e8ccd
2012-12-04 Ryosuke Niwa <rniwa@webkit.org>
Fix a test and re-enable fast/harness/perftests on Chromium.
* fast/harness/perftests/runs-per-second-log.html:
* platform/chromium/TestExpectations:
2012-12-04 Takashi Sakamoto <tasak@google.com>
Unreviewed, WebKit gardening.
......@@ -22,12 +22,19 @@ PerfTestRunner.now = function () {
PerfTestRunner.storeHeapResults = function () { }
var printStatistics = PerfTestRunner.printStatistics;
PerfTestRunner.printStatistics = function (statistics, title) {
if (statistics.unit == 'bytes')
var logStatistics = PerfTestRunner.logStatistics;
PerfTestRunner.logStatistics = function (statistics, unit, title) {
if (unit == 'bytes')
return;
statistics.stdev = statistics.stdev.toPrecision(3);
return printStatistics.call(PerfTestRunner, statistics, title);
return logStatistics.call(PerfTestRunner, statistics, unit, title);
}
var computeStatistics = PerfTestRunner.computeStatistics;
PerfTestRunner.computeStatistics = function (values, unit) {
var statistics = computeStatistics(values, unit);
if (statistics.stdev)
statistics.stdev = statistics.stdev.toPrecision(3);
return statistics;
}
PerfTestRunner.measureRunsPerSecond({
......
......@@ -249,9 +249,6 @@ platform/qt/fast/forms [ WontFix ]
inspector/styles/vendor-prefixes.html [ WontFix ]
fast/css/apple-prefix.html [ WontFix ]
# Perf tests aren't loaded on the chromium bots, so skip testing the framework
fast/harness/perftests [ WontFix ]
# Perf tests are really slow in debug builds and there are few benefits in
# running them.
[ Debug ] perf [ WontFix ]
......
......@@ -140,7 +140,7 @@
var frameRateVal = FRAMES_PER_TIMER_READING * 1000 / ((currTime - frameTimes[0]) / (frameTimes.length - 1));
if (!isNaN(frameRateVal))
PerfTestRunner.measureValueAync(frameRateVal);
PerfTestRunner.measureValueAsync(frameRateVal);
}
}
......
2012-12-03 Ryosuke Niwa <rniwa@webkit.org>
Dromaeo should report individual test result
https://bugs.webkit.org/show_bug.cgi?id=99800
Reviewed by Eric Seidel.
Made one small modification to Dromaeo's webrunner.js so that it reports individual runs/s values
for each subtest. This allows us to compute the aggregated runs/s for each iteration like other
performance tests.
Also stopped measuring memory usage in Dromaeo tests because some Dromaeo tests (e.g. jslib-modify-jquery)
have unrealistic memory usage, and measuring them at the time of teardown doesn't make much sense.
* Animation/balls.html: Fixed typo: measureValueAync.
* Dromaeo/resources/dromaeo/web/webrunner.js:
* Dromaeo/resources/dromaeorunner.js:
(DRT.setup): Call prepareToMeasureValuesAsync so that DRT.teardown can use measureValueAsync, and log
"Running 5 times". Since the log container will be inserted before the iframe, we need to explicitly insert
the iframe as the first child of the body element to prevent logs from affecting the iframe's position.
Also specify the number of iterations by calling PerfTestRunner.iterationCount() so that we may adjust
the number of iterations in PerfTestRunner.
(DRT.progress): Log individual measurement for each subtest.
(DRT.teardown): Compute the aggregated result for each iteration, and log them using measureValueAsync.
* resources/runner.js:
(PerfTestRunner.logStatistics): Merged printStatistics into this function since a separate printStatistics is
no longer needed after r131638.
(PerfTestRunner): Removed getAndPrintMemoryStatistics since it was only used by Dromaeo tests, which no
longer measure memory usage.
(start): Increment completedIterations from -1 to 0 for Dromaeo tests, where we don't want to ignore the initial
measurement. Note that ignoreWarmUpAndLog ignores the measurements for which completedIterations is negative.
(ignoreWarmUpAndLog): We don't measure memory usage in Dromaeo tests. See above.
(PerfTestRunner.iterationCount): Added. This abstraction allows us to auto-adjust the number of iterations from
run-perf-tests in the near future.
(PerfTestRunner.measureValueAsync): Renamed from measureValueAync.
2012-11-29 Shinya Kawanaka <shinyak@chromium.org>
[Shadow] Performance tests of distribution for changing select attribute
......
......@@ -139,6 +139,7 @@
data.version = testVersions[curID];
data.name = title;
data.scale = num;
data.times = times;
logTest(data);
......
......@@ -2,32 +2,13 @@
var DRT = {
baseURL: "./resources/dromaeo/web/index.html",
computeScores: function (results) {
var mean = 0, min = 0, max = 0, stdev = 0, varsum = 0;
for (var i = 0; i < results.length; ++i) {
var item = results[i];
mean += item.mean;
min += item.min;
max += item.max;
varsum += item.deviation * item.deviation;
}
return {
median: 0,
mean: mean,
min: min,
max: max,
stdev: Math.sqrt(varsum),
unit: "runs/s"
};
},
setup: function(testName) {
PerfTestRunner.prepareToMeasureValuesAsync({iterationCount: 5, doNotMeasureMemoryUsage: true, doNotIgnoreInitialRun: true, unit: 'runs/s'});
var iframe = document.createElement("iframe");
var url = DRT.baseURL + "?" + testName;
var url = DRT.baseURL + "?" + testName + '&numTests=' + PerfTestRunner.iterationCount();
iframe.setAttribute("src", url);
document.body.appendChild(iframe);
document.body.insertBefore(iframe, document.body.firstChild);
iframe.addEventListener(
"load", function() {
DRT.targetDocument = iframe.contentDocument;
......@@ -56,18 +37,25 @@
},
progress: function(message) {
if (message.status.score)
DRT.log(message.status.score.mean);
var score = message.status.score;
if (score)
DRT.log(score.name + ': [' + score.times.join(', ') + ']');
},
teardown: function(data) {
var scores = DRT.computeScores(data.result);
PerfTestRunner.printStatistics(scores, "Time:");
PerfTestRunner.getAndPrintMemoryStatistics();
window.setTimeout(function() {
if (window.testRunner)
testRunner.notifyDone();
}, 0);
PerfTestRunner.log('');
var tests = data.result;
var times = [];
for (var i = 0; i < tests.length; ++i) {
for (var j = 0; j < tests[i].times.length; ++j) {
var runsPerSecond = tests[i].times[j];
times[j] = (times[j] || 0) + 1 / runsPerSecond;
}
}
for (var i = 0; i < times.length; ++i)
PerfTestRunner.measureValueAsync(1 / times[i]);
},
targetDelegateOf: function(functionName) {
......
......@@ -89,10 +89,6 @@ if (window.testRunner) {
PerfTestRunner.logStatistics = function (values, unit, title) {
var statistics = this.computeStatistics(values, unit);
this.printStatistics(statistics, title);
}
PerfTestRunner.printStatistics = function (statistics, title) {
this.log("");
this.log(title);
if (statistics.values)
......@@ -104,25 +100,15 @@ if (window.testRunner) {
this.log("max " + statistics.max + " " + statistics.unit);
}
PerfTestRunner.getUsedMallocHeap = function() {
function getUsedMallocHeap() {
var stats = window.internals.mallocStatistics();
return stats.committedVMBytes - stats.freeListBytes;
}
PerfTestRunner.getUsedJSHeap = function() {
function getUsedJSHeap() {
return console.memory.usedJSHeapSize;
}
PerfTestRunner.getAndPrintMemoryStatistics = function() {
if (!window.internals)
return;
var jsMemoryStats = PerfTestRunner.computeStatistics([PerfTestRunner.getUsedJSHeap()], "bytes");
PerfTestRunner.printStatistics(jsMemoryStats, "JS Heap:");
var mallocMemoryStats = PerfTestRunner.computeStatistics([PerfTestRunner.getUsedMallocHeap()], "bytes");
PerfTestRunner.printStatistics(mallocMemoryStats, "Malloc:");
}
PerfTestRunner.gc = function () {
if (window.GCController)
window.GCController.collect();
......@@ -170,6 +156,8 @@ if (window.testRunner) {
iterationCount = test.iterationCount || 20;
logLines = window.testRunner ? [] : null;
PerfTestRunner.log("Running " + iterationCount + " times");
if (test.doNotIgnoreInitialRun)
completedIterations++;
if (runner)
scheduleNextRun(runner);
}
......@@ -206,9 +194,9 @@ if (window.testRunner) {
PerfTestRunner.log("Ignoring warm-up run (" + labeledResult + ")");
else {
results.push(measuredValue);
if (window.internals) {
jsHeapResults.push(PerfTestRunner.getUsedJSHeap());
mallocHeapResults.push(PerfTestRunner.getUsedMallocHeap());
if (window.internals && !currentTest.doNotMeasureMemoryUsage) {
jsHeapResults.push(getUsedJSHeap());
mallocHeapResults.push(getUsedMallocHeap());
}
PerfTestRunner.log(labeledResult);
}
......@@ -235,12 +223,16 @@ if (window.testRunner) {
testRunner.notifyDone();
}
PerfTestRunner.iterationCount = function () {
return iterationCount;
}
PerfTestRunner.prepareToMeasureValuesAsync = function (test) {
PerfTestRunner.unit = test.unit;
start(test);
}
PerfTestRunner.measureValueAync = function (measuredValue) {
PerfTestRunner.measureValueAsync = function (measuredValue) {
completedIterations++;
try {
......
2012-12-03 Ryosuke Niwa <rniwa@webkit.org>
Dromaeo should report individual test result
https://bugs.webkit.org/show_bug.cgi?id=99800
Reviewed by Eric Seidel.
Ignore the subtest results that Dromaeo tests now emit.
* Scripts/webkitpy/performance_tests/perftest.py:
(PerfTest): Added a pattern for these subtest result lines to the list of lines to ignore.
* Scripts/webkitpy/performance_tests/perftest_unittest.py:
(MainTest.test_parse_output_with_subtests): Added.
2012-12-03 Zan Dobersek <zandobersek@gmail.com>
Unreviewed, functionality fix after r136437.
......
......@@ -126,7 +126,10 @@ class PerfTest(object):
re.compile(re.escape("""frame "<!--framePath //<!--frame0-->-->" - has 1 onunload handler(s)""")),
re.compile(re.escape("""frame "<!--framePath //<!--frame0-->/<!--frame0-->-->" - has 1 onunload handler(s)""")),
# Following is for html5.html
re.compile(re.escape("""Blocked access to external URL http://www.whatwg.org/specs/web-apps/current-work/"""))]
re.compile(re.escape("""Blocked access to external URL http://www.whatwg.org/specs/web-apps/current-work/""")),
# Dromaeo reports values for subtests. Ignore them for now.
re.compile(r'(?P<name>.+): \[(?P<values>(\d+(.\d+)?,\s+)*\d+(.\d+)?)\]'),
]
def _should_ignore_line_in_parser_test_result(self, line):
return self._should_ignore_line(self._lines_to_ignore_in_parser_result, line)
......
......@@ -117,6 +117,33 @@ class MainTest(unittest.TestCase):
for line in non_ignored_lines:
self.assertFalse(test._should_ignore_line_in_stderr(line))
def test_parse_output_with_subtests(self):
output = DriverOutput('\n'.join([
'Running 20 times',
'some test: [1, 2, 3, 4, 5]',
'other test = else: [6, 7, 8, 9, 10]',
'',
'Time:',
'values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ms',
'avg 1100 ms',
'median 1101 ms',
'stdev 11 ms',
'min 1080 ms',
'max 1120 ms']), image=None, image_hash=None, audio=None)
output_capture = OutputCapture()
output_capture.capture_output()
try:
test = PerfTest(MockPort(), 'some-test', '/path/some-dir/some-test')
self.assertEqual(test.parse_output(output),
{'some-test': {'avg': 1100.0, 'median': 1101.0, 'min': 1080.0, 'max': 1120.0, 'stdev': 11.0, 'unit': 'ms',
'values': [i for i in range(1, 20)]}})
finally:
pass
actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
self.assertEqual(actual_stdout, '')
self.assertEqual(actual_stderr, '')
self.assertEqual(actual_logs, 'RESULT some-test= 1100.0 ms\nmedian= 1101.0 ms, stdev= 11.0 ms, min= 1080.0 ms, max= 1120.0 ms\n')
class TestPageLoadingPerfTest(unittest.TestCase):
class MockDriver(object):
......