1. Method 1: find the garbled characters in the file and delete them
Building a game with VS2005 plus the DirectX 9 SDK (I have tested the October 2004 and April 2006 releases) produces the following warning:
--------------------------------------------------------------------------------
d:\microsoft directx 9.0 sdk (october 2004)\include\d3d9types.h(1385) : warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss
--------------------------------------------------------------------------------
To fix the problem this way, you do not have to save the file as UTF-8. Instead, search d3d9types.h for _D3DDEVINFO_VCACHE, and you will see:
typedef struct _D3DDEVINFO_VCACHE {
    DWORD Pattern;     /* bit pattern, return value must be FOUR_CC(慍? 慉? 慍? 慔? */
    DWORD OptMethod;   /* optimization method 0 means longest strips, 1 means vertex cache based */
    DWORD CacheSize;   /* cache size to optimize for (only required if type is 1) */
    DWORD MagicNumber; /* used to determine when to restart strips (only required if type is 1) */
} D3DDEVINFO_VCACHE, *LPD3DDEVINFO_VCACHE;
Delete those four garbled characters from the comment and the warning goes away.
2. Method 2: save the file as UTF-8
(This took me quite a while to sort out, so I am writing it down.)
3. Other related information
This is actually just a warning; you can adjust the compiler options so that warnings are not treated as errors.
Even though it is only a warning, it can break debugging: breakpoints in the affected file will not be hit.
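If you cannot or do not want to edit the SDK header, MSVC also lets you silence this specific warning around the offending include. This is a sketch: the pragma and the warning number are standard MSVC features, and d3d9types.h is the header from the warning above.

```cpp
// MSVC only: suppress C4819 just for a header we cannot edit,
// then restore the previous warning state.
#pragma warning(push)
#pragma warning(disable : 4819) // character not representable in current code page
#include <d3d9types.h>          // the offending DirectX SDK header
#pragma warning(pop)
```

The equivalent command-line switch is /wd4819, which disables the warning for the whole translation unit rather than a single include.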
To get rid of the warning (C4819) you need to save the file in a Unicode format.
Go to File -> Advanced Save Options and select the new encoding you want to save it as. UTF-8 or Unicode codepage 1200 (UTF-16) are the settings you want.
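The same thing can be done outside the IDE by prepending a UTF-8 byte-order mark, which is how Visual Studio tags a file as UTF-8. A minimal sketch, assuming the file names and the helper name addUtf8Bom are my own (not part of any SDK or tool):

```cpp
#include <fstream>
#include <iterator>
#include <string>

// Prepend a UTF-8 BOM (0xEF 0xBB 0xBF) so Visual Studio and cl.exe
// treat the file as UTF-8 regardless of the system code page.
// Returns false if either file cannot be opened.
bool addUtf8Bom(const std::string& inPath, const std::string& outPath) {
    std::ifstream src(inPath.c_str(), std::ios::binary);
    if (!src) return false;

    // Slurp the whole file as raw bytes.
    std::string data((std::istreambuf_iterator<char>(src)),
                     std::istreambuf_iterator<char>());

    // Skip if a BOM is already present.
    bool hasBom = data.size() >= 3 &&
                  (unsigned char)data[0] == 0xEF &&
                  (unsigned char)data[1] == 0xBB &&
                  (unsigned char)data[2] == 0xBF;

    std::ofstream dst(outPath.c_str(), std::ios::binary);
    if (!dst) return false;
    if (!hasBom) dst.write("\xEF\xBB\xBF", 3);
    dst.write(data.data(), (std::streamsize)data.size());
    return true;
}
```

Note that this only changes how the compiler detects the encoding; any bytes in the file that are not valid UTF-8 (such as the garbled characters above) still need to be fixed by hand.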
Encoding in C++ is quite a bit complicated. Here is my understanding of it. Every implementation has to support characters from the basic source character set. These include the common characters listed in §2.2/1 of the standard, and they should all fit into one byte. This is all nice, but the mapping from bytes in the file to source characters (used at compile time) is implementation-defined. This mapping constitutes the source encoding. For gcc, you can change it using the -finput-charset option (with -fexec-charset controlling the execution character set).
The C++ standard doesn't say anything about source-code file encoding, so far as I know. The usual encoding is (or used to be) 7-bit ASCII -- some compilers (Borland's, for instance) would balk at ASCII characters that used the high-bit. There's no technical reason that Unicode characters can't be used, if your compiler and editor accept them -- most modern Linux-based tools, and many of the better Windows-based editors, handle UTF-8 encoding with no problem, though I'm not sure that Microsoft's compiler will. EDIT: It looks like Microsoft's compilers will accept Unicode-encoded files, but will sometimes produce errors on 8-bit ASCII too:
warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss