By Vladimir Grigorov


2008-09-29 12:03:23 8 Comments

Is it possible to convert a UTF-8 string held in a std::string to a std::wstring, and vice versa, in a platform-independent manner? In a Windows application I would use MultiByteToWideChar and WideCharToMultiByte. However, the code is compiled for multiple OSes and I'm limited to the standard C++ library.

10 Answers

@TarmoPikaro 2019-06-02 13:09:27

I created my own library for UTF-8 to UTF-16/UTF-32 conversion, though in the end I made it a fork of an existing project for that purpose.

https://github.com/tapika/cutf

(Originated from https://github.com/noct/cutf )

API works with plain C as well as with C++.

Function prototypes look like this (for the full list, see https://github.com/tapika/cutf/blob/master/cutf.h ):

//
//  Converts utf-8 string to wide version.
//
//  returns target string length.
//
size_t utf8towchar(const char* s, size_t inSize, wchar_t* out, size_t bufSize);

//
//  Converts wide string to utf-8 string.
//
//  returns filled buffer length (not string length)
//
size_t wchartoutf8(const wchar_t* s, size_t inSize, char* out, size_t outsize);

#ifdef __cplusplus

std::wstring utf8towide(const char* s);
std::wstring utf8towide(const std::string& s);
std::string  widetoutf8(const wchar_t* ws);
std::string  widetoutf8(const std::wstring& ws);

#endif
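
The C functions might be used like this (a sketch; the prototypes above don't show whether utf8towchar null-terminates the output or whether the returned length counts a terminator, so check cutf.h before relying on either assumption):

#include <cstdio>
#include <cstring>
#include "cutf.h"

int main()
{
    const char* utf8 = "h\xC3\xA9llo";   // "héllo" encoded as UTF-8
    wchar_t wide[64];

    size_t len = utf8towchar(utf8, std::strlen(utf8), wide, 64);
    wide[len] = L'\0';                   // assumption: output is not null-terminated

    std::printf("converted length: %zu\n", len);
    return 0;
}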

Sample usage / simple test application for utf conversion testing:

#include "cutf.h"

#define ok(statement)                                       \
    if( !(statement) )                                      \
    {                                                       \
        printf("Failed statement: %s\n", #statement);       \
        r = 1;                                              \
    }

int simpleStringTest()
{
    const wchar_t* chineseText = L"主体";
    auto s = widetoutf8(chineseText);
    size_t r = 0;

    printf("simple string test:  ");

    ok( s.length() == 6 );
    uint8_t utf8_array[] = { 0xE4, 0xB8, 0xBB, 0xE4, 0xBD, 0x93 };

    for(int i = 0; i < 6; i++)
        ok(((uint8_t)s[i]) == utf8_array[i]);

    auto ws = utf8towide(s);
    ok(ws.length() == 2);
    ok(ws == chineseText);

    if( r == 0 )
        printf("ok.\n");

    return (int)r;
}

If this library does not satisfy your needs, feel free to open the following link:

http://utf8everywhere.org/

scroll down to the end of the page, and pick whichever heavier-weight library you like.

@Mark Ransom 2008-09-29 14:00:12

The problem definition explicitly states that the 8-bit character encoding is UTF-8. That makes this a trivial problem; all it requires is a little bit-twiddling to convert from one UTF spec to another.

Just look at the encodings on these Wikipedia pages for UTF-8, UTF-16, and UTF-32.

The principle is simple - go through the input and assemble a 32-bit Unicode code point according to one UTF spec, then emit the code point according to the other spec. The individual code points need no translation, as would be required with any other character encoding; that's what makes this a simple problem.

Here's a quick implementation of wchar_t to UTF-8 conversion and vice versa. It assumes that the input is already properly encoded - the old saying "Garbage in, garbage out" applies here. I believe that verifying the encoding is best done as a separate step.

#include <string>

std::string wchar_to_UTF8(const wchar_t * in)
{
    std::string out;
    unsigned int codepoint = 0;
    for (;  *in != 0;  ++in)
    {
        if (*in >= 0xd800 && *in <= 0xdbff)
            codepoint = ((*in - 0xd800) << 10) + 0x10000;
        else
        {
            if (*in >= 0xdc00 && *in <= 0xdfff)
                codepoint |= *in - 0xdc00;
            else
                codepoint = *in;

            if (codepoint <= 0x7f)
                out.append(1, static_cast<char>(codepoint));
            else if (codepoint <= 0x7ff)
            {
                out.append(1, static_cast<char>(0xc0 | ((codepoint >> 6) & 0x1f)));
                out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f)));
            }
            else if (codepoint <= 0xffff)
            {
                out.append(1, static_cast<char>(0xe0 | ((codepoint >> 12) & 0x0f)));
                out.append(1, static_cast<char>(0x80 | ((codepoint >> 6) & 0x3f)));
                out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f)));
            }
            else
            {
                out.append(1, static_cast<char>(0xf0 | ((codepoint >> 18) & 0x07)));
                out.append(1, static_cast<char>(0x80 | ((codepoint >> 12) & 0x3f)));
                out.append(1, static_cast<char>(0x80 | ((codepoint >> 6) & 0x3f)));
                out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f)));
            }
            codepoint = 0;
        }
    }
    return out;
}

The above code works for both UTF-16 and UTF-32 input, simply because the range D800 through DFFF is reserved for surrogates and contains no valid code points; seeing values in that range tells you you're decoding UTF-16. If you know that wchar_t is 32 bits, you could remove some code to optimize the function.

std::wstring UTF8_to_wchar(const char * in)
{
    std::wstring out;
    unsigned int codepoint = 0;
    while (*in != 0)
    {
        unsigned char ch = static_cast<unsigned char>(*in);
        if (ch <= 0x7f)
            codepoint = ch;
        else if (ch <= 0xbf)
            codepoint = (codepoint << 6) | (ch & 0x3f);
        else if (ch <= 0xdf)
            codepoint = ch & 0x1f;
        else if (ch <= 0xef)
            codepoint = ch & 0x0f;
        else
            codepoint = ch & 0x07;
        ++in;
        if (((*in & 0xc0) != 0x80) && (codepoint <= 0x10ffff))
        {
            if (sizeof(wchar_t) > 2)
                out.append(1, static_cast<wchar_t>(codepoint));
            else if (codepoint > 0xffff)
            {
                codepoint -= 0x10000;  // surrogate pairs encode the offset from 0x10000
                out.append(1, static_cast<wchar_t>(0xd800 + (codepoint >> 10)));
                out.append(1, static_cast<wchar_t>(0xdc00 + (codepoint & 0x03ff)));
            }
            else if (codepoint < 0xd800 || codepoint >= 0xe000)
                out.append(1, static_cast<wchar_t>(codepoint));
        }
    }
    return out;
}

Again, if you know that wchar_t is 32 bits you could remove some code from this function, but in this case it shouldn't make any difference. The expression sizeof(wchar_t) > 2 is known at compile time, so any decent compiler will recognize the dead code and remove it.
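
With C++17 the same compile-time branch can be spelled out with if constexpr, which guarantees the untaken branch generates no code rather than relying on the optimizer. A minimal sketch (append_codepoint is a hypothetical helper, not part of the code above):

#include <string>

// Emit one code point as UTF-16 or UTF-32 depending on the width of wchar_t.
void append_codepoint(std::wstring& out, unsigned int codepoint)
{
    if constexpr (sizeof(wchar_t) > 2)
    {
        out.append(1, static_cast<wchar_t>(codepoint));  // UTF-32: emit directly
    }
    else
    {
        if (codepoint > 0xffff)                          // UTF-16: surrogate pair
        {
            codepoint -= 0x10000;
            out.append(1, static_cast<wchar_t>(0xd800 + (codepoint >> 10)));
            out.append(1, static_cast<wchar_t>(0xdc00 + (codepoint & 0x03ff)));
        }
        else
            out.append(1, static_cast<wchar_t>(codepoint));
    }
}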

@Nemanja Trifunovic 2008-09-29 16:59:41

I don't see that he said anything about std::string containing UTF-8 encoded strings in the original question: "Is it possible to convert std::string to std::wstring and vice versa in a platform independent manner?"

@Mark Ransom 2008-09-29 18:07:06

UTF-8 is specified in the title of the post. You are correct that it is missing from the body of the text.

@Vladimir Grigorov 2008-09-30 08:55:08

Thank you for the correction, I did intend to use UTF-8. I edited the question to be clearer.

@moogs 2008-10-16 10:23:43

But "wide char" does not necessarily mean UTF-16.

@Craig McQueen 2011-07-23 23:56:26

What you've got may be a good "proof of concept". It's one thing to convert valid encodings successfully. It is another level of effort to handle conversion of invalid encoding data (e.g. unpaired surrogates in UTF-16) correctly according to the specifications. For that you really need some more thoroughly designed and tested code.
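
For that separate validation step, a structural check might look like this (a minimal sketch; it rejects truncated sequences, bad continuation bytes, overlong forms, surrogates, and values above 0x10FFFF, and checks nothing else):

#include <string>

bool is_valid_utf8(const std::string& s)
{
    size_t i = 0;
    while (i < s.size())
    {
        unsigned char c = static_cast<unsigned char>(s[i++]);
        int extra;
        unsigned int cp;
        if      (c <= 0x7f)          { extra = 0; cp = c; }
        else if ((c & 0xe0) == 0xc0) { extra = 1; cp = c & 0x1f; }
        else if ((c & 0xf0) == 0xe0) { extra = 2; cp = c & 0x0f; }
        else if ((c & 0xf8) == 0xf0) { extra = 3; cp = c & 0x07; }
        else return false;                        // stray continuation or invalid lead byte

        for (int k = 0; k < extra; ++k, ++i)
        {
            if (i >= s.size() || (static_cast<unsigned char>(s[i]) & 0xc0) != 0x80)
                return false;                     // truncated or bad continuation byte
            cp = (cp << 6) | (static_cast<unsigned char>(s[i]) & 0x3f);
        }

        static const unsigned int min_cp[4] = { 0x00, 0x80, 0x800, 0x10000 };
        if (cp < min_cp[extra])           return false;  // overlong encoding
        if (cp >= 0xd800 && cp <= 0xdfff) return false;  // UTF-16 surrogate
        if (cp > 0x10ffff)                return false;  // beyond Unicode range
    }
    return true;
}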

@Mark Ransom 2011-07-24 01:00:47

@Craig McQueen, you're absolutely right. I made the assumption that the encoding was already correct, and it was just a mechanical conversion. I'm sure there are situations where that's the case, and this code would be adequate - but the limitations should be stated explicitly. It's not clear from the original question if this should be a concern or not.

@Tyler Long 2013-03-23 16:35:04

I have the same feeling as you. The question already states "UTF8", so it is an encoding/decoding issue. It has nothing to do with locale. The answers that mention locale miss the point entirely.

@Mark Ransom 2017-09-29 04:39:48

@moogs after all these years I just realized how close this was to working for both UTF-16 and UTF-32 wchar_t. I've updated the answer.

@Assaf Lavie 2008-09-29 14:42:30

@vharron 2010-09-28 18:10:25

ConvertUTF.h ConvertUTF.c

Credit to bames53 for providing updated versions

@Adriano Lucas 2012-07-02 15:05:09

Can be downloaded from here

@Vladimir Grigorov 2013-02-11 09:47:17

I asked this question 5 years ago. This thread was very helpful for me back then; I came to a conclusion, then moved on with my project. It is funny that I needed something similar recently, totally unrelated to that project from the past. As I was researching possible solutions, I stumbled upon my own question :)

The solution I chose now is based on C++11. The boost libraries that Constantin mentions in his answer are now part of the standard. If we replace std::wstring with the new string type std::u16string, then the conversions will look like this:

UTF-8 to UTF-16

std::string source;
...
std::wstring_convert<std::codecvt_utf8_utf16<char16_t>,char16_t> convert;
std::u16string dest = convert.from_bytes(source);    

UTF-16 to UTF-8

std::u16string source;
...
std::wstring_convert<std::codecvt_utf8_utf16<char16_t>,char16_t> convert;
std::string dest = convert.to_bytes(source);    
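
Put together as a self-contained round trip (a sketch assuming a C++11 compiler; as a comment below notes, std::wstring_convert was later deprecated in C++17):

#include <codecvt>
#include <cstdio>
#include <locale>
#include <string>

int main()
{
    std::wstring_convert<std::codecvt_utf8_utf16<char16_t>, char16_t> convert;

    std::string utf8 = "\xE4\xB8\xBB\xE4\xBD\x93";    // "主体" in UTF-8
    std::u16string utf16 = convert.from_bytes(utf8);  // UTF-8 -> UTF-16
    std::string back = convert.to_bytes(utf16);       // UTF-16 -> UTF-8

    std::printf("round trip %s\n", back == utf8 ? "ok" : "failed");
    return 0;
}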

As seen from the other answers, there are multiple approaches to the problem. That's why I refrain from picking an accepted answer.

@Chawathe Vipul 2013-04-25 09:14:26

wstring implies 2 or 4 bytes per character instead of single-byte characters. Where does the question ask to switch away from UTF-8 encoding?

@Xtra Coder 2014-10-04 20:06:44

I've seen some strangely poor performance with codecvt; look here for details: stackoverflow.com/questions/26196686/…

@Navin 2015-06-16 20:48:50

I think you should accept this answer. Sure, there are multiple ways to solve this, but this is the only portable solution that does not need a library.

@thomthom 2015-12-14 14:46:54

Is this UTF-16 with LE or BE?

@HojjatJafary 2017-06-19 10:35:16

std::wstring_convert is deprecated in C++17.

@Chris Jester-Young 2008-09-29 12:07:48

You can use the codecvt locale facet. There's a specific specialisation defined, codecvt<wchar_t, char, mbstate_t>, that may be of use to you, although the behaviour of that facet is system-specific and does not guarantee conversion to UTF-8 in any way.
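
Driving that facet by hand might look like this (a sketch, with the caveat above: the output is in whatever multibyte encoding the locale's facet produces, which need not be UTF-8; the result code from out() is ignored for brevity):

#include <cwchar>
#include <locale>
#include <string>

std::string narrow(const std::wstring& ws, const std::locale& loc = std::locale())
{
    if (ws.empty()) return std::string();

    typedef std::codecvt<wchar_t, char, std::mbstate_t> facet_t;
    const facet_t& facet = std::use_facet<facet_t>(loc);

    std::mbstate_t state = std::mbstate_t();
    std::string out(ws.size() * facet.max_length(), '\0');  // worst-case size
    const wchar_t* from_next = 0;
    char* to_next = 0;

    facet.out(state,
              ws.data(), ws.data() + ws.size(), from_next,
              &out[0], &out[0] + out.size(), to_next);
    out.resize(to_next - &out[0]);
    return out;
}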

@Tyler Long 2013-03-23 16:24:25

Doing encoding/decoding according to locale is a bad idea. Just as you said: "does not guarantee".

@Basilevs 2014-05-11 12:11:44

@TylerLong obviously one should configure std::locale instance specifically for the required conversion.

@Tyler Long 2014-12-08 12:52:02

@Basilevs I still think using locale to encode/decode is wrong. The correct way is to configure the encoding instead of the locale. As far as I can tell, there is no locale that can represent every single Unicode character. Let's say I want to encode a string which contains all of the Unicode characters; which locale do you suggest I configure? Correct me if I am wrong.

@Basilevs 2014-12-20 14:14:48

@TylerLong Locale in C++ is a very abstract concept that covers far more things than just regional settings and encodings. Basically one can do everything with it. While codecvt_facet indeed handles more than just simple recoding, absolutely nothing prevents it from performing simple Unicode transformations.

@Trisch 2011-09-09 08:19:10

UTFConverter - check out this library. It does such a conversion, but you also need the ConvertUTF class - I've found it here

@Constantin 2008-09-29 13:36:25

You can extract utf8_codecvt_facet from Boost serialization library.

Their usage example:

  typedef wchar_t ucs4_t;

  std::locale old_locale;
  std::locale utf8_locale(old_locale,new utf8_codecvt_facet<ucs4_t>);

  // Set a new global locale
  std::locale::global(utf8_locale);

  // Send the UCS-4 data out, converting to UTF-8
  {
    std::wofstream ofs("data.ucd");
    ofs.imbue(utf8_locale);
    std::copy(ucs4_data.begin(),ucs4_data.end(),
          std::ostream_iterator<ucs4_t,ucs4_t>(ofs));
  }

  // Read the UTF-8 data back in, converting to UCS-4 on the way in
  std::vector<ucs4_t> from_file;
  {
    std::wifstream ifs("data.ucd");
    ifs.imbue(utf8_locale);
    ucs4_t item = 0;
    while (ifs >> item) from_file.push_back(item);
  }

Look for utf8_codecvt_facet.hpp and utf8_codecvt_facet.cpp files in boost sources.

@Martin York 2008-11-11 05:33:56

I thought you had to imbue the stream before it is opened, otherwise the imbue is ignored!

@Constantin 2008-11-11 15:15:01

Martin, it seems to work with Visual Studio 2005: 0x41a is successfully converted to {0xd0, 0x9a} UTF-8 sequence.

@Ben Straub 2008-09-29 13:44:25

There are several ways to do this, but the results depend on what the character encodings are in the string and wstring variables.

If you know the string is ASCII, you can simply use wstring's iterator constructor:

string s = "This is surely ASCII.";
wstring w(s.begin(), s.end());

If your string has some other encoding, however, you'll get very bad results. If the encoding is Unicode, you could take a look at the ICU project, which provides a cross-platform set of libraries that convert to and from all sorts of Unicode encodings.
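
With ICU, for example, a round trip through its UTF-16 string type might look like this (a sketch, assuming ICU is installed and linked; icu::UnicodeString stores UTF-16 internally):

#include <unicode/unistr.h>
#include <cstdio>
#include <string>

int main()
{
    std::string utf8 = "\xE4\xB8\xBB\xE4\xBD\x93";             // "主体" in UTF-8
    icu::UnicodeString u = icu::UnicodeString::fromUTF8(utf8);  // UTF-8 -> UTF-16

    std::string back;
    u.toUTF8String(back);                                       // UTF-16 -> UTF-8
    std::printf("round trip %s\n", back == utf8 ? "ok" : "failed");
    return 0;
}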

If your string contains characters in a code page, then may $DEITY have mercy on your soul.

@Martin York 2008-09-29 16:12:37

ICU converts to/from every character encoding I have ever come across. It's huge.

@Martin Cote 2008-09-29 12:16:55

I don't think there's a portable way of doing this. C++ doesn't know the encoding of its multibyte characters.

As Chris suggested, your best bet is to play with codecvt.

@Tyler Long 2013-03-23 16:26:00

The question says "UTF8", so "the encoding of its multibyte characters" is known.
