Commit 855a7724 authored by rniwa@webkit.org

Some perf. tests have variances that differ greatly between runs

https://bugs.webkit.org/show_bug.cgi?id=97510

Reviewed by Benjamin Poulain.

PerformanceTests: 

In order to control the number of iterations and processes from run-perf-tests, always use 20
iterations by default on all tests except Dromaeo, where even 5 iterations are prohibitively slow.
Without this change, it would become extremely hard for us to tweak the number of iterations and
processes from run-perf-tests.

* Animation/balls.html:
* DOM/DOMTable.html:
* DOM/resources/dom-perf.js:
(runBenchmarkSuite.PerfTestRunner.measureTime):
* Dromaeo/resources/dromaeorunner.js:
* Layout/floats_100_100.html:
* Layout/floats_100_100_nested.html:
* Layout/floats_20_100.html:
* Layout/floats_20_100_nested.html:
* Layout/floats_2_100.html:
* Layout/floats_2_100_nested.html:
* Layout/floats_50_100.html:
* Layout/floats_50_100_nested.html:
* Layout/subtree-detaching.html:
* Parser/html5-full-render.html:
* SVG/SvgHitTesting.html:
* resources/runner.js:
* resources/results-template.html:

Tools: 

Use multiple instances of DumpRenderTree or WebKitTestRunner to amortize the effect of the runtime
environment on test results (the instances run one after another, not in parallel).

We use 4 instances of the test runner, each executing 5 in-process iterations, for a total of 20
iterations, matching what was previously done in a single process. These values are hard-coded in
perftest.py and runner.js for now, but they are intended to become configurable in the future.
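
The control flow this describes is visible in the perftest.py hunk further down; as a minimal sketch
(simplified, with logging and description handling omitted, not the verbatim patch):

    # Rough sketch of the new PerfTest.run (Python 2, as in webkitpy): one driver
    # process per outer loop iteration, 5 in-process iterations per driver.
    def run(self, time_out_ms):
        for _ in xrange(self._process_run_count):      # hard-coded to 4 by default
            driver = self._create_driver()
            try:
                # _run_with_driver asks the page to do its 5 iterations and appends
                # them to the shared metrics as one group; it returns False on failure.
                if not self._run_with_driver(driver, time_out_ms):
                    return None
            finally:
                driver.stop()
        results = {}
        for metric_name in self._ordered_metrics_name:
            metric = self._metrics[metric_name]
            results[metric.name()] = metric.grouped_iteration_values()
        return results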

A set of 5 iterations obtained from the same test runner instance is treated as an "iteration group",
and each metric now reports an array of length 4 whose elements are the arrays of 5 iteration values
obtained from each test runner instance, as opposed to a flattened array of 20 iteration values.
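
For example, a metric now accumulates one group per test runner instance; a sketch of the resulting
shapes (the numbers are invented, only the structure matters):

    # Illustrative only; the values are made up.
    metric = PerfTestMetric('Time', 'ms')
    metric.append_group([1080, 1120, 1095, 1101, 1104])   # from runner instance 1
    metric.append_group([1082, 1118, 1096, 1100, 1105])   # from runner instance 2
    metric.append_group([1079, 1121, 1094, 1102, 1103])   # from runner instance 3
    metric.append_group([1081, 1119, 1097, 1099, 1106])   # from runner instance 4

    metric.grouped_iteration_values()    # [[1080, ...], [1082, ...], [1079, ...], [1081, ...]] (4 groups of 5)
    metric.flattened_iteration_values()  # all 20 values in a single list (used when logging statistics)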

Unfortunately, we can't use the same trick for Dromaeo because it already does only 5 iterations,
and repeating the entire Dromaeo suite 4 times would take too long. We may need to disable more
Dromaeo tests as needed. To this end, added SingleProcessPerfTest to preserve the old behavior.
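
Concretely, SingleProcessPerfTest just pins the driver count to one, and PerfTestFactory routes
Dromaeo tests to it (this mirrors the perftest.py hunks below):

    class SingleProcessPerfTest(PerfTest):
        def __init__(self, port, test_name, test_path):
            # Same behavior as before this change: a single DumpRenderTree/WebKitTestRunner
            # instance runs all of the iterations in one process.
            super(SingleProcessPerfTest, self).__init__(port, test_name, test_path, process_run_count=1)

    # In PerfTestFactory._pattern_map, Dromaeo tests are matched to this class:
    #     (re.compile(r'^Dromaeo/'), SingleProcessPerfTest),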

* Scripts/webkitpy/performance_tests/perftest.py:
(PerfTestMetric.append_group): Renamed from append.
(PerfTestMetric.grouped_iteration_values): Added.
(PerfTestMetric.flattened_iteration_values): Renamed from iteration_values.

(PerfTest.__init__): Now takes the number of processes (drivers) to run the test with.
Only SingleProcessPerfTest passes a non-default value.

(PerfTest.run): Repeats the test using a fresh driver process for each run.
(PerfTest._run_with_driver): Returns a boolean instead of a list of measured metrics,
since metrics are now shared between multiple drivers (i.e. multiple calls to _run_with_driver).
We instead use _ensure_metrics to obtain the matching metric and store the data there;
see the sketch after this file list.
(PerfTest._ensure_metrics): Added.

(SingleProcessPerfTest): Added. Used to run Dromaeo tests, for which running on 4 different
instances of DumpRenderTree/WebKitTestRunner would take too long.
(SingleProcessPerfTest.__init__):

(ReplayPerfTest._run_with_driver): Updated to use _ensure_metrics.

(PerfTestFactory): Use SingleProcessPerfTest to run Dromaeo tests.

* Scripts/webkitpy/performance_tests/perftest_unittest.py: Updated various tests that expected
_run_with_driver to return a list of metrics. It now returns a boolean indicating whether
the test succeeded, and the dictionary of metrics is obtained via test._metrics instead.

(TestPerfTestMetric.test_append): Updated for the rename and added some test cases for
grouped_iteration_values.

(TestPerfTest._assert_results_are_correct):

(TestSingleProcessPerfTest): Added.
(TestSingleProcessPerfTest.test_use_only_one_process):
(TestSingleProcessPerfTest.test_use_only_one_process.run_single):

(TestReplayPerfTest.test_run_with_driver_accumulates_results):
(TestReplayPerfTest.test_run_with_driver_accumulates_memory_results):

* Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py: Updated the expected
sample standard deviations since each test now runs 4 times.
(MainTest._test_run_with_json_output.mock_upload_json):
(MainTest.test_run_with_upload_json_should_generate_perf_webkit_json):
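
The sketch referenced above: because the metrics dictionary is shared across driver runs,
_run_with_driver looks up (or lazily creates) the metric for each "values" line it parses and
appends the parsed numbers as one iteration group. Roughly (simplified from the patch):

    def _ensure_metrics(self, metric_name, unit=None):
        # Create the metric on first use and remember insertion order so results
        # are reported in the order the test emitted them.
        if metric_name not in self._metrics:
            self._metrics[metric_name] = PerfTestMetric(metric_name, unit)
            self._ordered_metrics_name.append(metric_name)
        return self._metrics[metric_name]

    # Inside _run_with_driver, for each "values" line of the test output:
    #     metric = self._ensure_metrics(current_metric, score.group('unit'))
    #     metric.append_group([float(value) for value in score.group('value').split(', ')])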

LayoutTests: 

Use dromaeoIterationCount now that we no longer support iterationCount.

* fast/harness/perftests/runs-per-second-iterations.html:


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@144583 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent c7485c73
2013-03-03 Ryosuke Niwa <rniwa@webkit.org>
Some perf. tests have variances that differ greatly between runs
https://bugs.webkit.org/show_bug.cgi?id=97510
Reviewed by Benjamin Poulain.
Use dromaeoIterationCount now that we no longer support iterationCount.
* fast/harness/perftests/runs-per-second-iterations.html:
2013-03-03 Sheriff Bot <webkit.review.bot@gmail.com>
Unreviewed, rolling out r144567.
@@ -33,7 +33,7 @@ PerfTestRunner.measureRunsPerSecond({
callsInIterations[i] = 0;
callsInIterations[i]++;
},
iterationCount: 1,
dromaeoIterationCount: 1,
done: function () {
debug("Returning times: [" + originalTimesInIterations.join(", ") + "]");
shouldEvaluateTo("callsInIterations[0]", 1);
@@ -106,7 +106,7 @@
var particles = [];
window.onload = function () {
PerfTestRunner.prepareToMeasureValuesAsync({iterationCount: 10, done: onCompletedRun, unit: 'fps'});
PerfTestRunner.prepareToMeasureValuesAsync({done: onCompletedRun, unit: 'fps'});
// Create the particles
for (var i = 0; i < MAX_PARTICLES; i++)
2013-03-03 Ryosuke Niwa <rniwa@webkit.org>
Some perf. tests have variances that differ greatly between runs
https://bugs.webkit.org/show_bug.cgi?id=97510
Reviewed by Benjamin Poulain.
In order to control the number of iterations and processes to use from run-perf-tests, always use 20
iterations on all tests except Dromaeo, where even doing 5 iterations is prohibitively slow, by default.
Without this change, it'll become extremely hard for us to tweak the number of iterations and processes
to use from run-perf-tests.
* Animation/balls.html:
* DOM/DOMTable.html:
* DOM/resources/dom-perf.js:
(runBenchmarkSuite.PerfTestRunner.measureTime):
* Dromaeo/resources/dromaeorunner.js:
* Layout/floats_100_100.html:
* Layout/floats_100_100_nested.html:
* Layout/floats_20_100.html:
* Layout/floats_20_100_nested.html:
* Layout/floats_2_100.html:
* Layout/floats_2_100_nested.html:
* Layout/floats_50_100.html:
* Layout/floats_50_100_nested.html:
* Layout/subtree-detaching.html:
* Parser/html5-full-render.html:
* SVG/SvgHitTesting.html:
* resources/runner.js:
* resources/results-template.html:
2013-02-25 Ryosuke Niwa <rniwa@webkit.org>
Use perf.webkit.org JSON format in results page
@@ -6,8 +6,7 @@
<script type="text/javascript" src="resources/dom-perf.js"></script>
<script type="text/javascript" src="resources/dom-perf/domtable.js"></script>
<script>
runBenchmarkSuite(DOMTableTest, 10);
// iterationCount: 10 since this test is very slow (~12m per run on Core i5 2.53Hz MacBookPro)
runBenchmarkSuite(DOMTableTest);
</script>
</body>
</html>
@@ -330,7 +330,7 @@ BenchmarkSuite.prototype.generateLargeTree = function() {
return this.generateDOMTree(26, 26, 4);
};
function runBenchmarkSuite(suite, iterationCount) {
function runBenchmarkSuite(suite) {
PerfTestRunner.measureTime({run: function () {
var container = document.getElementById('container');
var content = document.getElementById('benchmark_content');
@@ -346,7 +346,6 @@ function runBenchmarkSuite(suite, iterationCount) {
}
return totalMeanTime;
},
iterationCount: iterationCount,
done: function () {
var container = document.getElementById('container');
if (container.firstChild)
@@ -3,10 +3,11 @@
baseURL: "./resources/dromaeo/web/index.html",
setup: function(testName) {
PerfTestRunner.prepareToMeasureValuesAsync({iterationCount: 5, doNotMeasureMemoryUsage: true, doNotIgnoreInitialRun: true, unit: 'runs/s'});
var ITERATION_COUNT = 5;
PerfTestRunner.prepareToMeasureValuesAsync({dromaeoIterationCount: ITERATION_COUNT, doNotMeasureMemoryUsage: true, doNotIgnoreInitialRun: true, unit: 'runs/s'});
var iframe = document.createElement("iframe");
var url = DRT.baseURL + "?" + testName + '&numTests=' + PerfTestRunner.iterationCount();
var url = DRT.baseURL + "?" + testName + '&numTests=' + ITERATION_COUNT;
iframe.setAttribute("src", url);
document.body.insertBefore(iframe, document.body.firstChild);
iframe.addEventListener(
@@ -9,8 +9,7 @@
<body>
<pre id="log"></pre>
<script>
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(100, 100, 0, 3),
iterationCount: 2});
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(100, 100, 0, 3)});
</script>
</body>
</html>
@@ -9,8 +9,7 @@
<body>
<pre id="log"></pre>
<script>
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(100, 100, 100, 3),
iterationCount: 2});
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(100, 100, 100, 3)});
</script>
</body>
</html>
@@ -9,8 +9,7 @@
<body>
<pre id="log"></pre>
<script>
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(20, 100, 0, 100),
iterationCount: 7});
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(20, 100, 0, 100)});
</script>
</body>
</html>
@@ -9,8 +9,7 @@
<body>
<pre id="log"></pre>
<script>
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(20, 100, 100, 100),
iterationCount: 6});
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(20, 100, 100, 100)});
</script>
</body>
</html>
@@ -9,8 +9,7 @@
<body>
<pre id="log"></pre>
<script>
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(2, 100, 0, 500),
iterationCount: 10});
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(2, 100, 0, 500)});
</script>
</body>
</html>
@@ -9,8 +9,7 @@
<body>
<pre id="log"></pre>
<script>
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(2, 100, 100, 250),
iterationCount: 10});
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(2, 100, 100, 250)});
</script>
</body>
</html>
@@ -9,8 +9,7 @@
<body>
<pre id="log"></pre>
<script>
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(50, 100, 0, 20),
iterationCount: 5});
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(50, 100, 0, 20)});
</script>
</body>
</html>
@@ -9,8 +9,7 @@
<body>
<pre id="log"></pre>
<script>
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(50, 100, 100, 20),
iterationCount: 5});
PerfTestRunner.measureTime({run: createFloatsLayoutTestFunction(50, 100, 100, 20)});
</script>
</body>
</html>
@@ -36,7 +36,7 @@ function runTest() {
buildTree();
PerfTestRunner.measureTime({run: runTest, iterationCount: 20, description: "This benchmark checks the time spend in detaching an tree." });
PerfTestRunner.measureTime({run: runTest, description: "This benchmark checks the time spend in detaching an tree." });
</script>
</body>
</html>
@@ -5,8 +5,7 @@
// Running from the onload callback just makes the UI nicer as it shows the logs before starting the test.
window.onload = function() {
PerfTestRunner.measurePageLoadTime({path: "resources/html5.html",
chunkSize: 500000, // 6.09mb / 500k = approx 13 chunks (thus 13 forced layouts/style resolves).
iterationCount: 5 }); // Depending on the chosen chunk size, iterations can take over 60s to run on a fast machine, so we only run 5.
chunkSize: 500000 }); // 6.09mb / 500k = approx 13 chunks (thus 13 forced layouts/style resolves).
}
</script>
@@ -111,7 +111,7 @@
var wrapper = document.getElementById('wrapper');
if (wrapper)
wrapper.parentNode.removeChild(wrapper);
}, iterationCount: 10});
}});
</script>
</body>
</html>
@@ -185,6 +185,13 @@ Reference <span id="reference" class="checkbox"></span>
})();
function TestResult(metric, values, associatedRun) {
if (values[0] instanceof Array) {
var flattenedValues = [];
for (var i = 0; i < values.length; i++)
flattenedValues = flattenedValues.concat(values[i]);
values = flattenedValues;
}
this.test = function () { return metric; }
this.values = function () { return values.map(function (value) { return metric.scalingFactor() * value; }); }
this.unscaledMean = function () { return Statistics.sum(values) / values.length; }
@@ -153,7 +153,9 @@ if (window.testRunner) {
return;
}
currentTest = test;
iterationCount = test.iterationCount || 20;
// FIXME: We should be using multiple instances of test runner on Dromaeo as well but it's too slow now.
// FIXME: Don't hard code the number of in-process iterations to use inside a test runner.
iterationCount = test.dromaeoIterationCount || (window.testRunner ? 5 : 20);
logLines = window.testRunner ? [] : null;
PerfTestRunner.log("Running " + iterationCount + " times");
if (test.doNotIgnoreInitialRun)
@@ -226,10 +228,6 @@ if (window.testRunner) {
testRunner.notifyDone();
}
PerfTestRunner.iterationCount = function () {
return iterationCount;
}
PerfTestRunner.prepareToMeasureValuesAsync = function (test) {
PerfTestRunner.unit = test.unit;
start(test);
2013-03-03 Ryosuke Niwa <rniwa@webkit.org>
Some perf. tests have variances that differ greatly between runs
https://bugs.webkit.org/show_bug.cgi?id=97510
Reviewed by Benjamin Poulain.
Use multiple instances of DumpRenderTree or WebKitTestRunner to amortize the effect of the runtime
environment on test results (we run each instance after one another, not in parallel).
We use 4 instances of the test runner, each executing 5 in-process iterations, for the total of 20
iterations as it was done previously in single process. These values are hard-coded in perftest.py
and runner.js but they are to be configurable in the future.
Set of 5 iterations obtained by the same test runner is treated as an "iteration group" and each
metric now reports an array of the length 4 with each element containing an array of 5 iteration
values obtained by each test runner instance as opposed to a flattened array of 20 iteration values.
Unfortunately, we can use the same trick on Dromaeo because we're already doing only 5 iterations
and repeating the entire Dromaeo 4 times will take too long. We need to disable more Dromaeo tests
as needed. To this end, added SingleProcessPerfTest to preserve the old behavior.
* Scripts/webkitpy/performance_tests/perftest.py:
(PerfTestMetric.append_group): Renamed from append.
(PerfTestMetric.grouped_iteration_values): Added.
(PerfTestMetric.flattened_iteration_values): Renamed from iteration_values.
(PerfTest.__init__): Takes the number of processes (drivers) to run tests with.
This parameter is only used by SingleProcessPerfTest.
(PerfTest.run): Repeat tests using different driver processes.
(PerfTest._run_with_driver): Returns a boolean instead of a list of measured metrics
since metrics are shared between multiple drivers (i.e. multiple calls to _run_with_driver).
We instead use _ensure_metrics to obtain the matched metrics and store the data there.
(PerfTest._ensure_metrics): Added.
(SingleProcessPerfTest): Added. Used to run Dromaeo tests where running it on 4 different
instances of DumpRenderTree/WebKitTestRunner takes too long.
(SingleProcessPerfTest.__init__):
(ReplayPerfTest._run_with_driver): Updated to use _ensure_metrics.
(PerfTestFactory): Use SingleProcessPerfTest to run Dromaeo tests.
* Scripts/webkitpy/performance_tests/perftest_unittest.py: Updated various tests that expect
_run_with_driver to return a list of metrics. Now it returns a boolean indicating whether
the test succeeded or not. Obtain the dictionary of metrics via test._metrics instead.
(TestPerfTestMetric.test_append): Updated per name and added some test cases for
grouped_iteration_values.
(TestPerfTest._assert_results_are_correct):
(TestSingleProcessPerfTest): Added.
(TestSingleProcessPerfTest.test_use_only_one_process):
(TestSingleProcessPerfTest.test_use_only_one_process.run_single):
(TestReplayPerfTest.test_run_with_driver_accumulates_results):
(TestReplayPerfTest.test_run_with_driver_accumulates_memory_results):
* Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py: Updated values of
sample standard deviations since we're now running tests 4 times.
(MainTest._test_run_with_json_output.mock_upload_json):
(MainTest.test_run_with_upload_json_should_generate_perf_webkit_json):
2013-03-03 Alexandre Elias <aelias@chromium.org>
[chromium] Remove WebLayerTreeView::setViewportSize call
@@ -65,12 +65,16 @@ class PerfTestMetric(object):
def has_values(self):
return bool(self._iterations)
def append(self, value):
self._iterations.append(value)
def append_group(self, group_values):
assert isinstance(group_values, list)
self._iterations.append(group_values)
def iteration_values(self):
def grouped_iteration_values(self):
return self._iterations
def flattened_iteration_values(self):
return [value for group_values in self._iterations for value in group_values]
def unit(self):
return self._unit
@@ -85,11 +89,14 @@
class PerfTest(object):
def __init__(self, port, test_name, test_path):
def __init__(self, port, test_name, test_path, process_run_count=4):
self._port = port
self._test_name = test_name
self._test_path = test_path
self._description = None
self._metrics = {}
self._ordered_metrics_name = []
self._process_run_count = process_run_count
def test_name(self):
return self._test_name
@@ -110,26 +117,26 @@ class PerfTest(object):
return self._port.create_driver(worker_number=0, no_timeout=True)
def run(self, time_out_ms):
driver = self._create_driver()
try:
metrics = self._run_with_driver(driver, time_out_ms)
finally:
driver.stop()
if not metrics:
return metrics
for _ in xrange(self._process_run_count):
driver = self._create_driver()
try:
if not self._run_with_driver(driver, time_out_ms):
return None
finally:
driver.stop()
should_log = not self._port.get_option('profile')
if should_log and self._description:
_log.info('DESCRIPTION: %s' % self._description)
results = {}
for metric in metrics:
results[metric.name()] = metric.iteration_values()
for metric_name in self._ordered_metrics_name:
metric = self._metrics[metric_name]
results[metric.name()] = metric.grouped_iteration_values()
if should_log:
legacy_chromium_bot_compatible_name = self.test_name_without_file_extension().replace('/', ': ')
self.log_statistics(legacy_chromium_bot_compatible_name + ': ' + metric.name(),
metric.iteration_values(), metric.unit())
metric.flattened_iteration_values(), metric.unit())
return results
@@ -164,14 +171,10 @@ class PerfTest(object):
output = self.run_single(driver, self.test_path(), time_out_ms)
self._filter_output(output)
if self.run_failed(output):
return None
return False
current_metric = None
results = []
for line in re.split('\n', output.text):
if not line:
continue
description_match = self._description_regex.match(line)
metric_match = self._metrics_regex.match(line)
score = self._score_regex.match(line)
@@ -181,15 +184,22 @@
elif metric_match:
current_metric = metric_match.group('metric').replace(' ', '')
elif score:
key = score.group('key')
if key == 'values' and results != None:
values = [float(number) for number in score.group('value').split(', ')]
results.append(PerfTestMetric(current_metric, score.group('unit'), values))
if score.group('key') != 'values':
continue
metric = self._ensure_metrics(current_metric, score.group('unit'))
metric.append_group(map(lambda value: float(value), score.group('value').split(', ')))
else:
results = None
_log.error('ERROR: ' + line)
return False
return results
return True
def _ensure_metrics(self, metric_name, unit=None):
if metric_name not in self._metrics:
self._metrics[metric_name] = PerfTestMetric(metric_name, unit)
self._ordered_metrics_name.append(metric_name)
return self._metrics[metric_name]
def run_single(self, driver, test_path, time_out_ms, should_run_pixel_test=False):
return driver.run_test(DriverInput(test_path, time_out_ms, image_hash=None, should_run_pixel_test=should_run_pixel_test), stop_when_done=False)
@@ -247,6 +257,11 @@ class PerfTest(object):
output.text = '\n'.join([line for line in re.split('\n', output.text) if not self._should_ignore_line(self._lines_to_ignore_in_parser_result, line)])
class SingleProcessPerfTest(PerfTest):
def __init__(self, port, test_name, test_path):
super(SingleProcessPerfTest, self).__init__(port, test_name, test_path, process_run_count=1)
class ChromiumStylePerfTest(PerfTest):
_chromium_style_result_regex = re.compile(r'^RESULT\s+(?P<name>[^=]+)\s*=\s+(?P<value>\d+(\.\d+)?)\s*(?P<unit>\w+)$')
@@ -361,18 +376,19 @@ class ReplayPerfTest(PerfTest):
return True
def _run_with_driver(self, driver, time_out_ms):
times = PerfTestMetric('Time')
malloc = PerfTestMetric('Malloc')
js_heap = PerfTestMetric('JSHeap')
times = []
malloc = []
js_heap = []
for i in range(0, 20):
for i in range(0, 6):
output = self.run_single(driver, self.test_path(), time_out_ms)
if not output or self.run_failed(output):
return None
return False
if i == 0:
continue
times.append(output.test_time * 1000)
if not output.measurements:
continue
@@ -383,7 +399,14 @@
else:
js_heap.append(result)
return filter(lambda metric: metric.has_values(), [times, malloc, js_heap])
if times:
self._ensure_metrics('Time').append_group(times)
if malloc:
self._ensure_metrics('Malloc').append_group(malloc)
if js_heap:
self._ensure_metrics('JSHeap').append_group(js_heap)
return True
def run_single(self, driver, url, time_out_ms, record=False):
server = self._start_replay_server(self._archive_path, record)
@@ -426,6 +449,7 @@
class PerfTestFactory(object):
_pattern_map = [
(re.compile(r'^Dromaeo/'), SingleProcessPerfTest),
(re.compile(r'^inspector/'), ChromiumStylePerfTest),
(re.compile(r'(.+)\.replay$'), ReplayPerfTest),
]
@@ -41,6 +41,7 @@ from webkitpy.performance_tests.perftest import PerfTest
from webkitpy.performance_tests.perftest import PerfTestMetric
from webkitpy.performance_tests.perftest import PerfTestFactory
from webkitpy.performance_tests.perftest import ReplayPerfTest
from webkitpy.performance_tests.perftest import SingleProcessPerfTest
class MockPort(TestPort):
@@ -69,25 +70,32 @@ class TestPerfTestMetric(unittest.TestCase):
self.assertFalse(metric.has_values())
self.assertFalse(metric2.has_values())
metric.append(1)
metric.append_group([1])
self.assertTrue(metric.has_values())
self.assertFalse(metric2.has_values())
self.assertEqual(metric.iteration_values(), [1])
metric.append(2)
self.assertEqual(metric.iteration_values(), [1, 2])
self.assertEqual(metric.grouped_iteration_values(), [[1]])
self.assertEqual(metric.flattened_iteration_values(), [1])
metric2.append(3)
metric.append_group([2])
self.assertEqual(metric.grouped_iteration_values(), [[1], [2]])
self.assertEqual(metric.flattened_iteration_values(), [1, 2])
metric2.append_group([3])
self.assertTrue(metric2.has_values())
self.assertEqual(metric.iteration_values(), [1, 2])
self.assertEqual(metric2.iteration_values(), [3])
self.assertEqual(metric.flattened_iteration_values(), [1, 2])
self.assertEqual(metric2.flattened_iteration_values(), [3])
metric.append_group([4, 5])
self.assertEqual(metric.grouped_iteration_values(), [[1], [2], [4, 5]])
self.assertEqual(metric.flattened_iteration_values(), [1, 2, 4, 5])
class TestPerfTest(unittest.TestCase):
def _assert_results_are_correct(self, test, output):
test.run_single = lambda driver, path, time_out_ms: output
parsed_results = test._run_with_driver(None, None)
self.assertEqual(len(parsed_results), 1)
self.assertEqual(parsed_results[0].iteration_values(), [1080, 1120, 1095, 1101, 1104])
self.assertTrue(test._run_with_driver(None, None))
self.assertEqual(test._metrics.keys(), ['Time'])
self.assertEqual(test._metrics['Time'].flattened_iteration_values(), [1080, 1120, 1095, 1101, 1104])
def test_parse_output(self):
output = DriverOutput("""
@@ -133,7 +141,7 @@ max 1120 ms
try:
test = PerfTest(MockPort(), 'some-test', '/path/some-dir/some-test')
test.run_single = lambda driver, path, time_out_ms: output
self.assertIsNone(test._run_with_driver(None, None))
self.assertFalse(test._run_with_driver(None, None))
finally:
actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
self.assertEqual(actual_stdout, '')
@@ -200,6 +208,30 @@ max 1120 ms
self.assertEqual(actual_logs, '')
class TestSingleProcessPerfTest(unittest.TestCase):
def test_use_only_one_process(self):
called = [0]
def run_single(driver, path, time_out_ms):
called[0] += 1
return DriverOutput("""
Running 20 times
Ignoring warm-up run (1115)
Time:
values 1080, 1120, 1095, 1101, 1104 ms
avg 1100 ms
median 1101 ms
stdev 14.50862 ms
min 1080 ms
max 1120 ms""", image=None, image_hash=None, audio=None)
test = SingleProcessPerfTest(MockPort(), 'some-test', '/path/some-dir/some-test')
test.run_single = run_single
self.assertTrue(test.run(0))
self.assertEqual(called[0], 1)
class TestReplayPerfTest(unittest.TestCase):
class ReplayTestPort(MockPort):
def __init__(self, custom_run_test=None):
@@ -301,7 +333,7 @@ class TestReplayPerfTest(unittest.TestCase):
output_capture.capture_output()
try:
driver = port.create_driver(worker_number=1, no_timeout=True)
metrics = test._run_with_driver(driver, None)
self.assertTrue(test._run_with_driver(driver, None))
finally:
actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
@@ -309,9 +341,8 @@
self.assertEqual(actual_stderr, '')
self.assertEqual(actual_logs, '')