Is it possible, using Windows CNG API and AES in GCM mode, to encrypt a buffer of data with a size that is not a multiple of 16 bytes (128 bits) when chaining is enabled?
When I try to pass a 60-byte buffer to the BCryptEncrypt function with chaining enabled, I get the error 0xc0000206 (STATUS_INVALID_BUFFER_SIZE), which translates to the error string: "The size of the buffer is invalid for the specified operation."
Here is a code snippet I made to demonstrate my problem:
#include <windows.h>
#include <stdio.h>
#include <bcrypt.h>

#pragma comment(lib, "bcrypt.lib")

int main() {
    unsigned char key[16] = { 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c, 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08 };
    unsigned char iv[12] = { 0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88 };
    unsigned char pt[60] = { 0xd9, 0x31, 0x32, 0x25, 0xf8, 0x84, 0x06, 0xe5, 0xa5, 0x59, 0x09, 0xc5, 0xaf, 0xf5, 0x26, 0x9a,
                             0x86, 0xa7, 0xa9, 0x53, 0x15, 0x34, 0xf7, 0xda, 0x2e, 0x4c, 0x30, 0x3d, 0x8a, 0x31, 0x8a, 0x72,
                             0x1c, 0x3c, 0x0c, 0x95, 0x95, 0x68, 0x09, 0x53, 0x2f, 0xcf, 0x0e, 0x24, 0x49, 0xa6, 0xb5, 0x25,
                             0xb1, 0x6a, 0xed, 0xf5, 0xaa, 0x0d, 0xe6, 0x57, 0xba, 0x63, 0x7b, 0x39 };
    unsigned char ct[60];
    unsigned char tag[16];
    NTSTATUS bcryptResult = 0;
    DWORD bytesDone = 0;

    BCRYPT_ALG_HANDLE algHandle = 0;
    bcryptResult = BCryptOpenAlgorithmProvider(&algHandle, BCRYPT_AES_ALGORITHM, 0, 0);
    bcryptResult = BCryptSetProperty(algHandle, BCRYPT_CHAINING_MODE, (BYTE*)BCRYPT_CHAIN_MODE_GCM, sizeof(BCRYPT_CHAIN_MODE_GCM), 0);

    BCRYPT_KEY_HANDLE keyHandle = 0;
    bcryptResult = BCryptGenerateSymmetricKey(algHandle, &keyHandle, 0, 0, key, sizeof(key), 0);

    /* ---Encrypt data--- */
    {
        unsigned char macContext[16] = { 0 }; // scratch buffer for the running MAC state
        unsigned char contextIV[16] = { 0 };  // scratch IV buffer carried between chained calls

        BCRYPT_AUTHENTICATED_CIPHER_MODE_INFO authInfo;
        BCRYPT_INIT_AUTH_MODE_INFO(authInfo);
        authInfo.pbNonce = iv;
        authInfo.cbNonce = sizeof(iv);
        authInfo.pbTag = tag;
        authInfo.cbTag = sizeof(tag);

        // Enable chaining of BCryptEncrypt calls
        authInfo.pbMacContext = macContext;
        authInfo.cbMacContext = sizeof(macContext);
        authInfo.dwFlags = BCRYPT_AUTH_MODE_CHAIN_CALLS_FLAG;

        bcryptResult = BCryptEncrypt(keyHandle, pt, sizeof(pt), &authInfo, contextIV, sizeof(contextIV), ct, sizeof(ct), &bytesDone, 0);
        if (!BCRYPT_SUCCESS(bcryptResult)) {
            printf("Error: 0x%x\n", bcryptResult);
        }

        // Disable chaining and call BCryptEncrypt once more to finalize and get the tag
        authInfo.dwFlags &= ~BCRYPT_AUTH_MODE_CHAIN_CALLS_FLAG;
        bcryptResult = BCryptEncrypt(keyHandle, NULL, 0, &authInfo, contextIV, sizeof(contextIV), NULL, 0, &bytesDone, 0);
        if (!BCRYPT_SUCCESS(bcryptResult)) {
            printf("Error: 0x%x\n", bcryptResult);
        }
    }

    BCryptDestroyKey(keyHandle);
    BCryptCloseAlgorithmProvider(algHandle, 0);
    return 0;
}
I have noticed two things.
- Encrypting a buffer whose size is a multiple of 16 bytes works perfectly with the code I have supplied. For example, if I change the sizes of the pt and ct buffers to 64, everything works fine.
- If I do not enable chaining and simply call BCryptEncrypt once with a buffer of size 60 everything works fine as well, and I get the correct tag.
Since AES-GCM itself does not require padding, I find it odd that cbInput apparently has to be a multiple of the block size (16 bytes) when chaining is enabled, but not when chaining is disabled. Is this intended behavior?