Commit 3ad9d8de authored by jsbell@chromium.org's avatar jsbell@chromium.org

IndexedDB: Optimize encodeString/decodeString

https://bugs.webkit.org/show_bug.cgi?id=97794

Reviewed by Tony Chang.

Optimize string encoding/decoding, which showed up as a CPU hot spot during profiling.
The backing store uses big-endian ordering for 16-bit code unit strings, so a plain memcpy
isn't sufficient; the code previously used StringBuilder::append() character by character
with custom byte-swapping, which was slow.
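As a standalone illustration of the new approach, here is a sketch with std::u16string and std::vector<char> standing in for WTF's String and Vector<char>, and POSIX htons/ntohs doing the host/big-endian conversion as in the patch; memcpy replaces the patch's reinterpret_cast so the sketch avoids aliasing concerns. The stand-in types and names are assumptions for illustration, not the committed code:

```cpp
#include <arpa/inet.h>  // htons/ntohs
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Encode host-endian UTF-16 code units as UTF-16BE bytes, one swap per unit.
// (std::u16string/std::vector<char> are illustrative stand-ins for WTF types.)
std::vector<char> encodeString(const std::u16string& s)
{
    std::vector<char> ret(s.size() * sizeof(char16_t));
    for (size_t i = 0; i < s.size(); ++i) {
        uint16_t be = htons(static_cast<uint16_t>(s[i]));  // host -> big-endian
        std::memcpy(ret.data() + i * sizeof(char16_t), &be, sizeof(be));
    }
    return ret;
}

// Decode UTF-16BE bytes back into host-endian code units.
std::u16string decodeString(const char* start, const char* end)
{
    assert(end >= start);
    assert((end - start) % sizeof(char16_t) == 0);
    size_t length = static_cast<size_t>(end - start) / sizeof(char16_t);
    std::u16string result(length, u'\0');
    for (size_t i = 0; i < length; ++i) {
        uint16_t be;
        std::memcpy(&be, start + i * sizeof(char16_t), sizeof(be));
        result[i] = static_cast<char16_t>(ntohs(be));  // big-endian -> host
    }
    return result;
}
```

Because htons/ntohs are no-ops on big-endian hosts, the round trip is correct regardless of host endianness, which is the property the patch relies on.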

Ran a test with DumpRenderTree (to avoid multiprocess overhead) taking a 10k-character string,
putting it 20k times, and getting it 20k times. On my test box, the mean time before the
patch was 8.2s; the mean time after the patch was 4.6s.

Tested by Chromium's webkit_unit_tests --gtest_filter='IDBLevelDBCodingTest.*String*'

* Modules/indexeddb/IDBLevelDBCoding.cpp:
(WebCore::IDBLevelDBCoding::encodeString):
(WebCore::IDBLevelDBCoding::decodeString):


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@130299 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent a5ce50fd
@@ -32,6 +32,7 @@
 #include "IDBKey.h"
 #include "IDBKeyPath.h"
 #include "LevelDBSlice.h"
+#include <wtf/ByteOrder.h>
 #include <wtf/text/StringBuilder.h>

 // LevelDB stores key/value pairs. Keys and values are strings of bytes, normally of type Vector<char>.
@@ -289,36 +290,33 @@ const char* decodeVarInt(const char* p, const char* limit, int64_t& foundInt)
 Vector<char> encodeString(const String& s)
 {
-    Vector<char> ret(s.length() * 2);
-    for (unsigned i = 0; i < s.length(); ++i) {
-        UChar u = s[i];
-        unsigned char hi = u >> 8;
-        unsigned char lo = u;
-        ret[2 * i] = hi;
-        ret[2 * i + 1] = lo;
-    }
+    // Backing store is UTF-16BE, convert from host endianness.
+    size_t length = s.length();
+    Vector<char> ret(length * sizeof(UChar));
+    const UChar* src = s.characters();
+    UChar* dst = reinterpret_cast<UChar*>(ret.data());
+    for (unsigned i = 0; i < length; ++i)
+        *dst++ = htons(*src++);
     return ret;
 }

-String decodeString(const char* p, const char* end)
+String decodeString(const char* start, const char* end)
 {
-    ASSERT(end >= p);
-    ASSERT(!((end - p) % 2));
-    size_t len = (end - p) / 2;
-    StringBuilder result;
-    result.reserveCapacity(len);
-    for (size_t i = 0; i < len; ++i) {
-        unsigned char hi = *p++;
-        unsigned char lo = *p++;
-        result.append(static_cast<UChar>((hi << 8) | lo));
-    }
-    return result.toString();
+    // Backing store is UTF-16BE, convert to host endianness.
+    ASSERT(end >= start);
+    ASSERT(!((end - start) % sizeof(UChar)));
+    size_t length = (end - start) / sizeof(UChar);
+    Vector<UChar> buffer(length);
+    const UChar* src = reinterpret_cast<const UChar*>(start);
+    UChar* dst = buffer.data();
+    for (unsigned i = 0; i < length; ++i)
+        *dst++ = ntohs(*src++);
+    return String::adopt(buffer);
 }

 Vector<char> encodeStringWithLength(const String& s)