BMS DirectX 8 Mod

Not quite that way, really. DX7, DX8 and DX9 use different render paths. It's not just "enable some more optional effects if we're on DX9"; a pretty huge part of the render pipeline is different. In DX7 all we had was a fixed-function, shaderless pipeline similar to what is available through OpenGL 1.1. With DX8 and DX8.1 we could use some basic shaders, known as Shader Model 1.0, 1.1, 1.3 and 1.4 (different GPUs of that era supported different feature sets, and something supported in SM 1.1 wasn't always available in 1.3, and vice versa). With DX9 the render pipeline moved even further away from the classic fixed-function approach. The Source engine actually supports three types of DX9 render paths: one utilizing SM 2.0 shaders (dxlevel 90), another utilizing SM 3.0 shaders (dxlevel 95; SM 3.0 was introduced with DX9c) and yet another utilizing so-called SM4 shaders (dxlevel 98, i.e. DX10-capable hardware with drivers/DX emulating the DX9 semi-fixed pipeline on Vista+). The 2007 Source SDK still retains a sort of DX7 fixed-function compatibility, but it's deprecated and not really usable. DX80/81 support in this SDK is in pretty decent shape, but to use it correctly mod authors have to provide specialized versions of assets and shaders that work with it. It's not that hard from a programmer's PoV, but it is work that has to be done. I.e. BMS could possibly be ported to dxlevel 80/81, but it isn't a matter of setting a checkbox in a settings dialog and pressing a "recompile" button.
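(For reference: a quick way to check which render path a Source game is currently using is to query the mat_dxlevel cvar with no arguments in the developer console, assuming the console is enabled; it prints the current value, e.g. 81, 90, 95 or 98.)

mat_dxlevel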

Well, I'd bet the real reason is simpler: there's no point in fixing something that isn't broken. TF2 was initially implemented with dxlevel 80/81 compatibility built in, so there's no point in spending valuable manpower "porting" it to dxlevel 90+ only to throw out part of the work that has already been done.

And, BTW, Valve broke dxlevel 80 support in HL2 in the process of porting it to the 2007+ SDK, namely on the maps that had been adapted to use HDR (Black Mesa East, Ravenholm). The shame is that they simply forgot to compile LDR lightmaps for those maps, resulting in mat_fullbright being permanently turned on when you play in non-HDR DX9 or in ordinary DX8/8.1.

BMS minimum system requirements include a GPU with DX9c SM 3.0 hardware-level support. Why would you assume that BMS should work correctly when you de facto restrict those cards from using their full hardware capabilities?

The published minimum system requirements IMO are enough. There are a lot of ways to break the game by incorrectly configuring your PC or by messing with cvars in the game console.

Um, are you sure you correctly understood the meaning of the original sentence you refer to here? Raminator's point was exactly what you wrote: the move to DX9 was made because it allows developers to do more things in a better way.

That's simply not true. Older API versions allowed fewer things to be implemented, and thus using them would mean "less fancy things and effects". And "less fancy things and effects" generally means "better performance". People want to use dxlevel 80/81 not because they love DX8; they just want the damn game to be less laggy, nothing more. As BMS and/or the Source 2007 SDK engine don't offer many ways to turn off fancy things in DX9 mode in order to gain performance, people try to downgrade the API in the hope that it will simplify some fancy effects or turn them off altogether.

TBH it's BMS's fault that there are a lot of places in the game where it lags like hell even on pretty decent setups. You really don't expect to get something like 20-30 FPS on a PC equipped with a two-year-old quad-core CPU and a GPU from the 200-300 USD range like a GTX 560 Ti, when other games using the same engine run at a solid ~100+ FPS with pretty decent visuals (EP1, EP2, Portal 2). And because these problems are BMS's fault, it would be good for them to be fixed eventually. If some future patch introduced an option to enable lower-quality versions of the smoke cloud effects, people would welcome it and would stop trying to use older APIs. If some future patch addressed the problem of dynamic lighting sometimes dropping FPS down to the 10-40 range, people would welcome it and would stop trying to fall back to dxlevel 80. And so on.

Trouble is that BMS turned out to be an extremely CPU/GPU-taxing game in some places, and that is the real problem. Optimizing it is what's needed urgently.

Well, not the particles by themselves, but the fact that they are alpha-blended (semi-transparent) and there are LOTS of them on screen at the same moment. The smoke cloud effects are the most problematic here and are the first target for optimization. Dynamic lighting is another candidate to work on. And the third thing to optimize is the large amount of complicated prop_static geometry causing CPU hogging and GPU triangle-setup pipeline stalls.

I spent $50 on my 5850 and it runs BM fine with everything on the highest settings. The time you waste trying to get this to work could be time spent doing a quarter of a day's work for that $50.

I don’t mean to sound too harsh but why should the team spend time and resources that could be spent on Xen because a very small minority have painfully outdated hardware that would be reasonably cheap to replace?

First of all, you either play some different BMS than everyone else plays, or you're simply lying when you state that "BM runs fine with everything on the highest". 1920x1200 + 16x MSAA + 16x aniso will drop the FPS below 60 in some places even with today's high-end GPUs like the GTX 680. For example, start up the game, open the console, type in "map bm_c1a1a" and try zooming in on the smoke cloud located at the center of the test chamber. Monitor the FPS (FRAPS, DXtory, the in-game cl_showfps 1/2 counter). Still sure you have the game performing fine with everything maxed?
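In console terms the test boils down to something like this (cl_showfps 2 shows frame times as well as the FPS); then zoom in on the smoke cloud in the middle of the chamber and watch the counter:

cl_showfps 2
map bm_c1a1a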

I've got a GTX 550 Ti at home in one workstation and a GTX 560 Ti in another. My brother has a GTX 670 in his gaming rig. All these cards cost much more than the $50 you mention, but they still can't provide a "rock solid 60 FPS with vsync on" in BMS even with AA completely turned off. So I can totally understand people with lower-end current GPUs (GTX 440/450/540/550/640/650) struggling to make the game run without lag. People simply assume that earlier API versions could result in better performance, and that's generally true, since less complicated effects are usually used with earlier API versions.

Personally, I think some people are just too damn picky about framerate. At 1920x1080 with all in-game video settings maxed + ambient occlusion, mine drops to ~40 when repeating your example (with AO off, it only drops to ~57), but I barely notice that it's slowed down. It looks and feels fine to me, and I certainly have no problem with an occasional minor slowdown if it means not making the game look like dogshit.

using DX8 does improve performance for some ultra-slow GPUs,
I remember testing TF2 with DX8 on a GeForce 6100 and an Intel GMA 3100, and in both there was a huge FPS boost from using DX8… but to be honest you can buy a cheap $30 card on eBay far better than those things and enough to play the game in DX9 decently,
also, if their minimum requirements ask for DX9 hardware, then it's fair to assume that the game will possibly not work with DX8 hardware (or in DX8 mode).

I'm playing the game on my old 5750 and it plays quite well: max details, MSAA 4x, AF 16x, vsync off as I always do (lower input latency), and most of the game is really smooth. Only in some spots does there seem to be a problem: I noticed a drop to ±40 FPS in that small room behind the initial reception room (when you go back and it's on fire…), and the worst part was Residue Processing, where there is a spot where the framerate goes below 30. But in most places it works really well, although when I turn on the flashlight or shoot a gun for the first time it sometimes stutters a little bit,

the point of Black Mesa was to recreate HL1 with more modern tech, so I think ignoring DX8 is fair enough,

I was kind of joking about DX6 earlier, although I really did test HL2 in DX6 mode, and I think it still had support for it up until 2009.

Call me stupid, but that’s the exact same issue I’m trying to solve here…

After I tried to force the white line you were having trouble with yesterday, I had the same pink & black checkerboard problem until we resolved it.

I previously added "-dxlevel 8" to the launch options (which was absolutely wrong; I guess "+mat_dxlevel 81" would've been right?), then tried to "deactivate" it by removing it from said options and adding "+mat_dxlevel 98", but still: the game would either crash on load (or on new game), show the Vertex Error, or load just fine with the checkerboards all over the place.

The in-game menu seems to have a problem, as well.

It’s 100% transparent for a while, then it goes back to 100% opacity / solidity after a few seconds.

The clothing of the scientists also looks a bit weird (I'll try to make a screenshot of it in just a few moments); it looks like there are blood decals all over their nice clean white coats…

If the clothing looks like that, the game starts (with the checkerboards); if their clothing looks just fine, it crashes.

Sorry to post this here, but I'm running out of ideas (would adding "-mat_dxlevel 81" to the launch options help? Since +variable and -variable usually stand for ON and OFF in the Source engine)…

Thanks in advance.


EDIT (11:13): Well, so much for that screenshot…

For some odd reason, I can’t seem to reproduce the bug I mentioned earlier anymore.

Neither can I force the game to load the first level (the one with the tram station) into the main menu background.

Sad day is sad, but I hope you can imagine what it’d look like. :frowning:

~57 without vsync means that you'd get stuttering and drops to 30 FPS with vsync on. And, yet again, what AA level do you use at 1920x1080? And what is your video card? Because I get ~35-45 FPS with vsync turned off @1920x1200 + no AA + 8x aniso when I zoom in on the particle-system smoke cloud in the test chamber with a GTX 560 Ti. With the 550 Ti it gets even worse: 20-30 FPS at best @1400x900. Now think about people who have GPUs like the GTX 260, GTS 250/9800 GTX/8800 or GTS 440/450/540: they would have even worse frame rates (i.e. in the 10-20 range). At the same time they have no such FPS problems with EP1, EP2 or Portal 2. That's why I think it's BMS's fault. Yet again: people simply want to play the game without lag. Drops to 30-40 FPS are extremely noticeable when the rest of the game runs at a rock-solid 60 FPS (with vsync) or even faster (without vsync). And anything below 30 FPS is a show-stopper for a first-person shooter.

Then, if the FPS drop problems occurred only in places where there are no fights (like the test chamber example I posted earlier), it would be OK. But they also occur when you set some zombie on fire (dlighting-related stutters), when you fight a lot of marines on the surface (yet again dlighting-related), when you fight sentry autoturrets (dlighting), and in some places that seem "random" at first glance, like the place where you meet the bullsquid for the first time (a lot of high-detail prop_static and other models coupled with expensive water surfaces doubling/tripling the geometry-processing work the engine has to do), and so on. Yes, I play the game as is and still think it's awesome and really fun, but I would really welcome it being made lag-free. For now I use the following binds, which help me get through the problematic areas with FPS drops:

bind "u" "incrementvar r_dynamic 0 1 1;incrementvar r_dynamiclighting 0 1 1;" bind "j" "incrementvar r_WaterDrawReflection 0 1 1;" bind "m" "incrementvar r_drawparticles 0 1 1;"

These allow me to turn the problematic in-game effects off and on by pressing a single button (well, there are three of them, but usually only one is needed to get the FPS back to 60 with vsync).

No, both ways are valid, but changing the render path via the command line is the preferred one: it makes the game not only change the render path but also reset all renderer-related settings to safe defaults for that render path. Note that the game should be started with the "-dxlevel XX" command line parameter only once per render-path switch. Don't keep this command line parameter there once you have successfully switched the game to another render path; it would break the game, making it refuse to save renderer-related settings like resolution, texture detail levels and so on.

Add "-dxlevel 90" (older video cards on XP and up), "-dxlevel 95" (newer video cards on XP and up) or "-dxlevel 98" (DX10-capable video cards on Vista and up) to the BMS startup command line parameters. Start up the game once, confirm that it has switched to the DX9.x render path (check what it prints in the advanced gfx settings dialog in the "Software DirectX version" box), quit the game and remove "-dxlevel XX" from the command line.
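As a sketch of the sequence (using dxlevel 95 as an example target; "-novid" is just an example of an option you might normally keep):

first launch (Steam launch options): -dxlevel 95
every launch afterwards: (remove it; keep only whatever you normally use, e.g. -novid)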

As for "+something" vs. "-something" on the command line: these are not "on"/"off" selectors, it's a naming convention of the Source engine. Words preceded by "-" are command line parameters. There's a list of the ones the Source engine recognizes here: https://developer.valvesoftware.com/wiki/Command_Line_Options

Things preceded with "+" on the command line instruct the engine to change the value of the specified cvar to the specified value. It's the same as adding a "<cvar> <value>" line to the autoexec.cfg of your BMS installation.
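To illustrate with the cvar from your post (just an example of the equivalence, not a recommendation to switch render paths this way):

command line: +mat_dxlevel 81
autoexec.cfg: mat_dxlevel 81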

I’ll try that right away.

Thank you so much for the effort! :heart::heart::heart:

Thanks for explaining it, as well. :slight_smile:

EDIT (06:29 PM): Just tried it and it works! No more pink & black checkerboards!

The overall higher visual quality amazed me, to be quite honest…

Thanks again!

well, that's not how it works.

graphics cards tailored for newer rendering pipelines (programmable shaders) actually run programs made with the older fixed-function pipeline slower. usually that doesn’t have an effect, because you typically have both programmable shaders and fixed-function code, and if something has only the old method, it’s old enough that your graphics card can run it thrice in parallel, but if you force your graphics card into a non-native mode, for a new game, you lose performance.

if there are effects to be disabled, that doesn't necessarily have anything to do with "DirectX versions".

TL;DR: don't assume you know much about graphics programming & performance if you haven't actually learned it in depth.

Being a programmer who has worked a lot with 3D rendering, I can tell you that things are not that simple. Benchmarks we did internally at the company that employed me several years ago showed that the special blocks accelerating the fixed-function pipeline (HW T&L specifically) were removed from GPUs not that long ago, around the release of the 4xx series of nVIDIA GPUs. What we were doing was basically replicating the fixed-function pipeline with simple OpenGL vertex/fragment shaders and comparing the performance of the fixed-function pipeline vs. the GLSL-replicated approach. Fixed function performed somewhat faster on all cards up to the 4xx series, where the fixed-pipeline performance dropped and the two became equal. We could elaborate on this topic more in PMs, as it's really offtopic for this thread.

TL;DR: don't assume the level of other people's knowledge and don't assume your kung-fu is better unless you have spent a lot of time discussing the topic in question with them and have the numbers to prove it.

This really is one of the dumbest things I've seen asked so far. If you're running hardware so outdated that you want to use DirectX 8, then you'd be running hardware that can only play the game on Low-Normal at 800x600. Which I don't think most of us consider 'gaming'.

It's sort of a paradoxical request.

8xAA, 16xAF, GTX 560 Ti. Like I said, I have everything maxed out in the in-game options, as well as ambient occlusion enabled through the Nvidia Control Panel. I have adaptive vsync globally enabled, so dropping to 30 when it dips below 60 doesn’t happen.

Maybe it’s because I grew up playing Jedi Knight on a PC that was below the minimum specs (100MHz Cyrix Cx5x86, 8MB of RAM, 540MB hard drive, and no 3D acceleration) and got 15 FPS if I was lucky at 640x480, but framerates below 60 really don’t bother me at all.

I don't even notice games whose frame rates drop below 60. Apparently we can't see anything above 30 anyway, though games are noticeably smoother at 60 as opposed to 30.

Keeping V-Sync on, even when you can't get 60 frames due to lackluster hardware, will usually stop you from getting screen tearing and won't always force you down to a really low frame rate like previously stated; that only happens when you enable triple buffering. If both are enabled and your hardware can't maintain 60 FPS, you will get a performance drop. If just V-Sync (or 'Wait for V-Sync') is enabled and triple buffering isn't, then not being able to attain 60 FPS while having V-Sync on won't result in a performance loss, and will usually prevent screen tearing. So it's best to keep it on, rather than have your frame rate go over 60 in small/low-draw-distance areas, which would create tearing.

Yeah, it helps, but you get tearing at that moment. That's generally OK but not perfect.

Well, many of us here on these forums grew up in those times. Back in 1994 I was playing Wolf 3D and then Doom on an AMD clone of the 386DX40, and it surely wasn't a "60 FPS experience" ;-). Then again, as soon as you get used to something good, it's hard to simply accept something worse. Especially for a person who spent a lot of his youth playing QuakeWorld in competitive tournaments. Any experienced QuakeWorld, Quake 3/Live or original CS player will tell you that even an FPS drop from a stable 60 or 75 down to something like 50 or 60 is extremely noticeable and annoying. Yes, it's a matter of personal preference, but still, there are a lot of people out there who can feel the change in input lag from even slight FPS drops and find it annoying.

That's not entirely true. With peripheral vision (i.e. when you don't look directly at the light source) an average person can spot luminance flicker even at around a 60 Hz rate. If you happen to have an older CRT monitor available, try setting it to a 60 Hz refresh rate, open up some bright white picture and look at it from about 1.5-2 m away out of the corner of your eye. You will clearly be able to spot the rapid flicker of the displayed picture.

This topic is more complicated than it's usually thought to be. Double-buffered rendering with vsync always drops to 30 FPS as soon as you can't hit a 60+ rate, drops to 20 or 15 FPS when you can't reach 30+ or 20+, and so on. Triple buffering can be implemented in several different ways. What is readily and easily available in the D3D9 API is a render-queue implementation of the triple-buffering approach. It helps mitigate the hard FPS drop to 30 but at the same time introduces extra input lag. This is what tools like D3DOverrider enable when you use them on a double-buffered D3D game. The more proper triple-buffering approach is harder to implement through the D3D API and isn't really widely used. I could elaborate on the technical details of such an implementation from a programmer's PoV, but IMO it's offtopic here really :-).

All in all, going back to BMS: it seems to be using the double-buffered approach. So if you want to be tearing-free with it and also want to avoid 60-30-60 FPS stutters, the only way to go is to use D3DOverrider to force a triple-buffered render queue. If you don't mind a slight amount of tearing, then adaptive vsync is the better thing to use. In both cases it's quite important to limit the max framerate to something slightly higher than 60 FPS, to mitigate the extra input lag of the triple-buffered render-queue approach and to avoid jumps from "noticeable input lag" to "no input lag" with the adaptive vsync approach.

Then again, input lag is a matter of personal preference. Some people don't notice vsync-related input lag at all, while others (me included) can clearly feel the difference between 1-frame vs. 2-frame vs. 3-frame lag with vsync and different render-queue depths.

it would be interesting to establish a "benchmark", I mean choosing a place in the game for people to test and compare performance (maybe a fixed test, demo, timedemo),
because I have an older card and I'm not seeing any dramatic performance issues,
maybe it's because of my resolution (1280x1024), or something else, but with my 5750 with MSAA 4x and the rest on max, most of the time I have an OK framerate (over 60 most of the time), with a few spots during the game which were bad (less than 30), but overall not something that would be enough to make me disable effects (simply because I experienced constant low framerate on maybe just 4-5 occasions, and not in moments essential for the gameplay),
also I always play with vsync off; if I turn it on I notice some input latency right away, and while I can see tearing, it's not too bad for me,

so I have a card comparable to a GTS 250/9800GTX+/GTS 450 and I'm pretty happy with the game; again, it's probably the resolution and lack of vsync helping, but I suppose if you are happy with a card of this level you are probably not playing at 1080p anyway.

now what about CPU performance? the game seems to use a single core, so high IPC is preferable (so something like an AMD FX will suffer in this game?)

IMO there are some pretty obvious places to test for performance issues:
a) The middle of the test chamber right after the resonance cascade. There's a black smoke cloud there which seems to be extremely fillrate-hungry. map bm_c1a1a, zoom in on the cloud, watch the FPS/budget panel.
b) bm_c1a1b, the first encounters with zombies where you can set them on fire. Light the flare, set a zombie on fire and, while its corpse is burning, observe the FPS/budget panel.
c)

map bm_c1a3a
sv_cheats 1
god
notarget
setpos -823.756348 -1593.614258 -255.968750;
setang 0.681325 16.047066 0.000000

Check the FPS counter and budget panel.
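To bring the counters up for any of these spots, something along these lines should do (a sketch; +showbudget is the stock Source engine budget panel toggle and may require sv_cheats 1, assuming BM's build of the engine still exposes it):

cl_showfps 2
sv_cheats 1
+showbudget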

Sure, resolution matters a lot for fillrate-constrained cases. At 1280x1024 (or at 1400x900, the two are pretty close to each other) I get far fewer fillrate-related FPS drops compared to what I get @1680x1050 or @1920x1200.

In general I prefer to have vsync on if possible, as long as there's a way to force a triple-buffered render-queue mode for the game (D3DOverrider allows doing it with HL2 and thus BM) and to limit the target FPS to slightly less than 60 (this can be done using the "Frame target" setting in nVIDIA Inspector or simply using the fps_max cvar). Limiting FPS guarantees that there won't be an extra frame of input lag (in the case of a 2-frame-deep render queue), and forcing a triple-buffered render queue guarantees that the game engine won't stall on the present/swap-buffers call once I turn vsync on with the FPS limiter set slightly below 60.
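For reference, the cvar route is just a single line in the console or autoexec.cfg (59 is only an example value; pick whatever suits your display):

fps_max 59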

TBH the GTX 550 Ti I have in one of my home workstations is roughly on par with the GTS 250 WRT the features used by the HL2 engine. Typical use of this PC is software development duties, i.e. I have two 1920x1200 displays attached to it, so I have plenty of screen space to place my Eclipse stuff around; thus I wouldn't mind being able to play BM at 1080p on this PC :-). But I really don't want to spend an extra 200 or 300 bucks right now on a new gfx card for this PC, especially when I have another PC semi-dedicated to gaming with a 1920x1200 120Hz display and a GTX 560 Ti card in it.

Yes. Multicore CPUs with lower IPC will suffer in BM. On the other hand, Turbo Boost/Turbo Core comes to the rescue as long as the OS scheduler doesn't go nuts tossing the main game thread from core to core. And turning SMT off on quad(+) core CPUs might also help here: you wouldn't want the OS scheduler placing some other thread on the "core" that shares most of its resources with the one the main game thread happens to run on.

Even people who have a laptop :smiley:

for those like me with an AMD graphics card I think the best option for a framerate limit is DXtory (but yes, in Source engine games you can set it in-game), but I haven't really played around much with it, I just tested the default vsync

I'm using Windows XP with a dual-core SMT-capable CPU, so looking at the task manager it basically has one core (core 0) at ~90% usage most of the time and the other threads at low usage,

about the places you mentioned for testing, are you sure about C?
I had to use noclip to get into that position, but there is nothing interesting going on
https://i49.tinypic.com/1zprtro.jpg

in A there is clearly a performance problem, but without zooming in it's more acceptable,
https://i45.tinypic.com/14cxxk6.jpg
https://i48.tinypic.com/250nuvt.jpg

B
https://i45.tinypic.com/2cy30i9.jpg

now during my normal playthrough the worst place that grabbed my attention was certainly this

entering the room,
and while the "live" zombie is burning; when it dies, and in the rest of the chapter, it gets a lot better.
https://i49.tinypic.com/9k6tyb.jpg
https://i50.tinypic.com/dcznr6.jpg

but as I said these were extreme cases; most of the time it was over 60, with some places at 40-60, which was still more acceptable than under 30
