When the 3D projection matrix is not set (transformation is performed in
software), pixel (table) fog effects appear to be missing on some drivers.
The matrix itself normally isn't passed to the user-mode SetTransform driver
function anyway, because the fixed-function shader converters are enabled by
default. Instead, only UpdateWInfo is called with the W-range calculated from
the matrix.
For the initial identity matrix, UpdateWInfo receives the range (1.0, 1.0).
This seems to cause Intel (HD 4600) drivers to interpret pixel fog ranges as
Z-depth values, whereas any other specified range makes them use eye-relative
(W) depth. Beyond that, the actual values don't seem to affect how fog is
rendered.
On AMD (RX 480), eye-relative depth is used irrespective of the W-range.
There is currently no information on how NVIDIA drivers behave, except that
with the default W-range no fog seems to be produced.
This fix replaces the W-range (1.0, 1.0) with (0.0, 1.0), which makes Intel
drivers also interpret fog ranges as eye-relative depth with the default
identity projection matrix.
Fixes missing fog on Intel HD (and hopefully NVIDIA) in Combat Mission
games reported in issue #20.
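
A minimal sketch of the substitution described above, assuming a hypothetical
WInfo struct in place of the driver's actual W-range data; the
fixDefaultWRange name is made up for illustration:

    struct WInfo
    {
        float wNear;
        float wFar;
    };

    // Hypothetical helper: called with the W-range computed from the current
    // projection matrix before it is forwarded to the driver. The identity
    // matrix yields the degenerate range (1.0, 1.0); substituting (0.0, 1.0)
    // makes Intel HD drivers use eye-relative (W) fog depth as well.
    void fixDefaultWRange(WInfo& wInfo)
    {
        if (1.0f == wInfo.wNear && 1.0f == wInfo.wFar)
        {
            wInfo.wNear = 0.0f;
            wInfo.wFar = 1.0f;
        }
    }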
AMD drivers reject system memory surfaces that are larger than the maximum
texture width/height supported by the driver (usually 4096x4096 for AMD).
This can cause issues in games that create larger system memory surfaces.
This workaround crops the driver resource dimensions to the allowed maximums
and handles blits that fall outside the cropped region by creating a temporary
resource mapped onto the affected part of the system memory surface.
Fixes crashes with AMD drivers in Rainbow Six games and Desperados, mentioned
in issues #2 and #8.
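
A rough sketch of the idea, with made-up types and helpers standing in for the
real DDI structures; maxTextureSize corresponds to the limits reported by the
driver caps:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    struct Dimensions { unsigned width; unsigned height; };

    // The driver-visible resource never exceeds the driver's texture limits.
    Dimensions cropToDriverLimits(Dimensions dim, const Dimensions& maxTextureSize)
    {
        dim.width = std::min(dim.width, maxTextureSize.width);
        dim.height = std::min(dim.height, maxTextureSize.height);
        return dim;
    }

    // Start of a row inside the original system memory surface. A temporary
    // driver resource created over this memory (with the same pitch) covers
    // blits that touch rows beyond the cropped resource.
    std::uint8_t* rowPointer(void* sysMem, unsigned pitch, unsigned row)
    {
        return static_cast<std::uint8_t*>(sysMem) +
            static_cast<std::size_t>(pitch) * row;
    }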
On recent drivers, double buffered DirectDraw flips no longer wait for the
vertical sync before returning and instead just insert the flip into the
flip queue for later execution. This effectively results in triple buffered
behavior (in the render-ahead sense) and causes up to an extra frame of
latency even if the flip queue size is set to 1.
To restore the legacy double buffered behavior, each flip waits for the
presented frame to leave the flip queue before returning.
To restore the legacy triple buffered behavior, the flip queue size is
forced to 1. This causes the flip to wait if the previous flip is still
pending.
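
As an application-level analogue of the double buffered case only, a flip can
be followed by polling the surface's flip status until the frame has actually
been displayed. Whether GetFlipStatus reflects the queued flip on a given
driver is an assumption, the compatFlip name is made up, and forcing the flip
queue size to 1 is not shown:

    #include <windows.h>
    #include <ddraw.h>

    HRESULT compatFlip(IDirectDrawSurface7* surface, DWORD flags, bool doubleBuffered)
    {
        HRESULT result = surface->Flip(nullptr, flags | DDFLIP_WAIT);
        if (SUCCEEDED(result) && doubleBuffered)
        {
            // Legacy behavior: don't return while the flip is still pending.
            while (DDERR_WASSTILLDRAWING == surface->GetFlipStatus(DDGFS_ISFLIPDONE))
            {
                Sleep(1);
            }
        }
        return result;
    }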
Video memory render target surfaces get an additional video memory off-screen
plain surface used to provide efficient direct memory access (locking).
Once a lock occurs, all 2D operations are redirected to the off-screen plain
surface until the next rendering operation requires the render target surface.
The two surfaces are kept in sync via blits as needed.
Fixes performance issues in Rogue Spear menus, mentioned in issue #2.
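
A simplified sketch of the scheme; the class and member names are invented for
illustration and error handling is omitted:

    #include <windows.h>
    #include <ddraw.h>

    class RenderTargetProxy
    {
    public:
        // Lock-based (2D) access goes to the off-screen plain surface.
        // The caller is expected to have set desc.dwSize.
        HRESULT lock(DDSURFACEDESC2& desc, DWORD flags)
        {
            if (!m_lockSurfaceUpToDate)
            {
                m_lockSurface->Blt(nullptr, m_renderTarget, nullptr, DDBLT_WAIT, nullptr);
                m_lockSurfaceUpToDate = true;
            }
            m_renderTargetUpToDate = false;
            return m_lockSurface->Lock(nullptr, &desc, flags, nullptr);
        }

        // Called when a rendering operation needs the real render target.
        void prepareForRendering()
        {
            if (!m_renderTargetUpToDate)
            {
                m_renderTarget->Blt(nullptr, m_lockSurface, nullptr, DDBLT_WAIT, nullptr);
                m_renderTargetUpToDate = true;
            }
        }

    private:
        IDirectDrawSurface7* m_renderTarget = nullptr;  // video memory render target
        IDirectDrawSurface7* m_lockSurface = nullptr;   // video memory off-screen plain surface
        bool m_lockSurfaceUpToDate = false;
        bool m_renderTargetUpToDate = true;
    };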
Disabled the indication of mirrored blit support in the driver's GetCaps
routine, as modern GPUs may report it as supported even when it isn't.
Removing this indication allows the HEL (Hardware Emulation Layer) to handle
all mirrored blits correctly. It also seems to perform better than the
previous clumsy line-by-line mirroring emulation.
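
The equivalent bit manipulation on a DDCAPS structure looks like this; the
real change applies to the caps reported on the driver side, and the
removeMirroredBlitCaps name is made up:

    #include <windows.h>
    #include <ddraw.h>

    void removeMirroredBlitCaps(DDCAPS& caps)
    {
        // With these bits cleared, DirectDraw routes mirrored blits to the HEL.
        caps.dwFXCaps &= ~(DDFXCAPS_BLTMIRRORLEFTRIGHT | DDFXCAPS_BLTMIRRORUPDOWN);
    }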
This flag forces linear (non-swizzled) surface layout on Intel HD drivers.
Fixes issues with games that keep writing to unlocked surfaces (e.g. Nox)
and also greatly improves lock performance.