I was wondering about the face creation system and how it works. Like, let's say you get a face that looks like the person should have a deep voice, but he has some shrimpy voice, or vice versa. Can this happen? Or do they/you have some script that does not allow this?
The face creation system has nothing to do with voice, though that would be interesting. All the face creation system does is randomize the faces so you're not looking at the same people each time you find a new scientist, guard, or soldier. Now what I'd like to know is if it's even possible to use with HL2.
Yeah, that would be cool to use in Half-Life 2.
All characters of a specific type share the same voice actor, just like in the original. That is to say, all security guards will sound exactly the same, regardless of how they look. The same goes for the grunts and the male and female scientists.
We do not use the character system to exchange voices per profile, but in theory it is possible. Instead, as Acinonyx has said, “…all security guards [for example] will sound exactly the same, regardless of how they look.”
The system uses predefined settings to determine either static or random variables for different systems such as model flexes, model body-groups, model skins, criteria for choreo, and more.
If you declare a set of criteria for choreo, you can adjust the choreo scripts to play a different set altogether, thus having different voices for each profile.
More information on the modeling side of the character system can be found here: https://www.wiki.blackmesasource.com/Face_Creation_System
Example profile…
npc_human_scientist
//Face_01
{
flex_data
{
cheek_depth .5
cheek_fat_max .5
chin_butt 1.0
chin_width .5
ears_angle 1.0
ears_height .5
eyes_ang_min .5
jaw_depth .5
lowlip_size 1.0
mouth_w_max .1
mouth_h_min .4
mouth_depth .3
nose_w_min .5
nose_angle .5
nost_height .5
nost_width .9
nose_tip .5
}
bodygroup_data
{
glasses 0,1,2,3,4,5,6
}
model "models/humans/scientist.mdl"
skin 0,1,2,3,6,10,12,13
}
Quite frankly, that’s brilliant.
This being said, I doubt it would be difficult to assign each generated character their own "voice profile": an additional variable which makes the NPC speak their lines in a slightly higher or lower pitch (by a factor of no more than, say, 10 percent up or down, or whatever still sounds natural). You'd be surprised what a difference this makes to variety during gameplay, and, from a technical standpoint, you could still use the same speech files - no need for more recording or lip-syncing.
npc_human_scientist
//Face_01
{
flex_data
{
"stuff"
}
bodygroup_data
{
"stuff"
}
speech_data
{
"pitch" "94" //between 85 - 105
}
model "stuff"
skin "things"
}
et cetera
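To illustrate the idea (this is just a sketch, not actual Black Mesa or Source engine code; the profile-name format and the 85-105 range are taken from the hypothetical example above), the pitch could be derived deterministically from the character's profile, so the same guard keeps the same voice after a save/load without storing any extra state:

```python
import hashlib

# Pitch bounds from the hypothetical speech_data example above
# (100 = normal pitch, so this is roughly +/-10 percent).
PITCH_MIN, PITCH_MAX = 85, 105

def pitch_for_character(profile_name: str, spawn_seed: int) -> int:
    """Derive a stable voice pitch from a profile name and spawn seed.

    Hashing the inputs makes the result deterministic: the same
    character always gets the same pitch, yet different characters
    spread out across the allowed range.
    """
    digest = hashlib.sha256(f"{profile_name}:{spawn_seed}".encode()).digest()
    span = PITCH_MAX - PITCH_MIN + 1
    return PITCH_MIN + digest[0] % span

# Same inputs always give the same pitch, and it stays in range:
p1 = pitch_for_character("npc_human_scientist/Face_01", 42)
p2 = pitch_for_character("npc_human_scientist/Face_01", 42)
assert p1 == p2 and PITCH_MIN <= p1 <= PITCH_MAX
```

In an actual engine integration, the returned value would simply be passed as the pitch parameter when emitting the character's speech sounds.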
lol,
chin_butt 1.0
Tried it, did not pan out so well…
Cool! Thanks for the reply.
Another question: let's say you have a security guard following you, and you save, exit, then go back into the game. Will he be the same?
yes
That’s awesome thanks!
At what point does a cleft chin cross the line between “heroic” and “butt”? :fffuuu:
about 0.7
Just out of curiosity, what exactly went wrong?
I was going to ask the same thing.
can’t wait till people start fucking with the faces if at all possible.
A sex mod would hardly be appropriate, xalener. :hmph:
Building on this, and my own interpretation of the facial system: the way I'm understanding it, the faces are the same every time you play the game. It's just that, throughout the course of the game, no two of the faces are the same. Is this about right, or are there certain faces (on the soldiers, for example) that are randomly generated each time you start a new game, so that you don't see the same batch of characters on each consecutive playthrough?