mirror of https://github.com/narzoul/DDrawCompat synced 2024-12-30 08:55:36 +01:00
DDrawCompat/DDrawCompat/CompatDepthBuffer.h
narzoul b9b4a2aafd Fixed incorrect z-buffer bit depths reported in D3DDEVICEDESC
Legacy DirectDraw interfaces specify the z-buffer format as a single bit depth
number instead of as a DDPIXELFORMAT struct. DirectDraw seems to convert a
legacy z-buffer bit depth of N into a DDPIXELFORMAT with dwFlags = DDPF_ZBUFFER,
dwZBufferBitDepth = N, and dwZBitMask with the lowest N bits set.
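The conversion described above can be sketched as follows. This is a minimal illustration, not the actual DirectDraw implementation; the `PixelFormat` struct is a cut-down stand-in for the real DDPIXELFORMAT from ddraw.h, which has many more fields and unions.

```cpp
#include <cstdint>

// DDPF_ZBUFFER as defined in ddraw.h.
const uint32_t DDPF_ZBUFFER = 0x400;

// Cut-down stand-in for DDPIXELFORMAT (illustration only).
struct PixelFormat
{
    uint32_t dwFlags;
    uint32_t dwZBufferBitDepth;
    uint32_t dwZBitMask;
};

// Convert a legacy z-buffer bit depth N into the equivalent pixel format:
// DDPF_ZBUFFER flag, dwZBufferBitDepth = N, and dwZBitMask with the
// lowest N bits set.
PixelFormat convertLegacyZBufferBitDepth(uint32_t bitDepth)
{
    PixelFormat pf = {};
    pf.dwFlags = DDPF_ZBUFFER;
    pf.dwZBufferBitDepth = bitDepth;
    pf.dwZBitMask = bitDepth >= 32 ? 0xFFFFFFFF : (1u << bitDepth) - 1;
    return pf;
}
```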

Some drivers (so far observed only with AMD) report the list of supported
z-buffer bit depths incorrectly, so a game may select a bit depth that can't
actually be created via the legacy interfaces.
For example, the driver may report 16 and 32 bits as supported, whereas all
32-bit z-buffer pixel formats actually use only 24 bits for depth (with the
remaining bits unused or serving as a stencil buffer). Meanwhile, the same
driver fails to report 24 bits as supported even though it is.

This fix overrides the set of supported z-buffer bit depths in D3DDEVICEDESC
structs for HAL devices to align with the actually supported pixel formats.
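One way such an override could work is to rebuild the DDBD_* bit-depth mask from the depths of the z-buffer pixel formats the driver actually enumerates. A sketch under assumptions: the helper names are hypothetical, and the real fix operates on D3DDEVICEDESC fields rather than a plain vector of depths.

```cpp
#include <cstdint>
#include <vector>

// DDBD_* mask values as defined in ddraw.h.
const uint32_t DDBD_8  = 0x800;
const uint32_t DDBD_16 = 0x400;
const uint32_t DDBD_24 = 0x200;
const uint32_t DDBD_32 = 0x100;

// Map a z-buffer bit depth to its DDBD_* flag (0 if not a known depth).
// Hypothetical helper name, for illustration.
uint32_t getBitDepthFlag(uint32_t bitDepth)
{
    switch (bitDepth)
    {
    case 8:  return DDBD_8;
    case 16: return DDBD_16;
    case 24: return DDBD_24;
    case 32: return DDBD_32;
    default: return 0;
    }
}

// Recompute the supported bit-depth mask from the depths of the pixel
// formats the driver actually enumerates. The result would replace the
// incorrect mask reported by the driver in D3DDEVICEDESC.
uint32_t computeSupportedZBufferBitDepths(const std::vector<uint32_t>& enumeratedDepths)
{
    uint32_t mask = 0;
    for (uint32_t depth : enumeratedDepths)
    {
        mask |= getBitDepthFlag(depth);
    }
    return mask;
}
```

With the AMD example above, a driver whose enumerated formats have 16 and 24 depth bits would yield DDBD_16 | DDBD_24, instead of the incorrectly reported 16/32 combination.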

Fixes a startup issue in Rainbow Six mentioned in issue #2.
2016-06-12 14:21:09 +02:00

#pragma once

#include <guiddef.h>

#include "CompatPtr.h"

namespace CompatDepthBuffer
{
	template <typename TDirect3d, typename TD3dDeviceDesc>
	void fixSupportedZBufferBitDepths(CompatPtr<TDirect3d> d3d, TD3dDeviceDesc& desc);
}