I have a VB.Net application that works in conjunction with a C++ app. The C++ app writes the contents of a printer DEVMODE structure to a file as raw bytes, and the VB.Net app reads the DEVMODE from that file into a VB.Net structure using a BinaryReader. The DEVMODE contains two fixed-length character-array fields, dmDeviceName and dmFormName, each 32 characters long. I expect (and inspecting the file appears to confirm) that these fields are written out as 2-byte characters, i.e. Unicode/UTF-16. To read them in VB.Net I use the ReadChars method of the BinaryReader, so my expectation is that I should construct the BinaryReader with the Unicode (UTF-16) encoding and call ReadChars(32), since each field is 32 characters.
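To make that concrete, here is roughly what I expected to work. This is a trimmed-down sketch rather than my actual code: the file name and the module wrapper are placeholders, and it only reads dmDeviceName, which happens to be the first member of DEVMODE and therefore sits at the very start of the file.

Imports System.IO
Imports System.Text

Module ReadDevModeSketch
    Sub Main()
        Using fs As New FileStream("devmode.bin", FileMode.Open, FileAccess.Read)
            ' The file holds the raw DEVMODE bytes written by the C++ app.
            ' dmDeviceName is declared as TCHAR[32], so I expected 32 UTF-16
            ' characters here, i.e. 64 bytes consumed from the stream.
            Using reader As New BinaryReader(fs, Encoding.Unicode)
                Dim deviceName As Char() = reader.ReadChars(32)
                Console.WriteLine(New String(deviceName).TrimEnd(ControlChars.NullChar))
            End Using
        End Using
    End Sub
End Module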
When I do this, though, the Char() returned by ReadChars is not correct. To get the correct Char() back, I have to construct the BinaryReader with the UTF-8 encoding and call ReadChars(64) instead. That makes no sense to me given the documentation. The DEVMODE character fields are declared as TCHAR (WCHAR in a Unicode build), which should be UTF-16, so ReadChars(32) should return the correct character array and advance the stream by 2 bytes per character. I can't see how UTF-8 decoding is supposed to work against a UTF-16 field, and even granting the UTF-8 encoding, I don't understand why ReadChars has to be told to read 64 characters when the field only holds 32 characters (written as 2-byte characters, 64 bytes in total).
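For comparison, this is the variant that actually returns the expected name in my case (same placeholder file name, and the same Imports and module boilerplate as the sketch above); the only differences are the encoding and the count passed to ReadChars:

Using fs As New FileStream("devmode.bin", FileMode.Open, FileAccess.Read)
    ' UTF-8 encoding and a count of 64, even though the field is 32 TCHARs.
    Using reader As New BinaryReader(fs, Encoding.UTF8)
        Dim deviceName As Char() = reader.ReadChars(64)
    End Using
End Using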
I am just not getting what is going on here.