Sunjammer has been doing a great thing in the other subforums, taking stock of the state of player projects and the issues people are still having with the toolset and progress toward solutions. He might be heading our way at some point, but I'd like to contribute a bit to the process, and I invite the rest of the "eared" community to chime in on what you think we still need. I'm a generalist, so I've dabbled in almost every aspect of module building but can't really claim to be a master of any. But my sense, so far, is that sound has received the least attention (there have been a lot of issues with modeling, too, but that appears to be on Bioware's radar, at least).
The wiki has a few items on sound, but they tend to focus on specific issues:
Voice-over (http://social.biowar....php/Voice-Over): This covers part of the process, but doesn't flesh out the technical side of doing voiceover with FMOD.
Cutscene Music (http://social.biowar...dex.php/How-tos) is very useful, and while it doesn't cover everything, I learned far more about how FMOD works from this short tutorial than from anything else on the site.
Placing sound volumes and objects in areas (http://social.biowar...#Volume_objects)
FMOD Tutorial (http://social.biowar.../index.php/FMOD) has some useful, if very technical, information, but it seems primarily geared toward creating sound effects emitters. It is made more difficult by the fact that the first two steps ("General techniques - starting up" and "Folders and subfolders - setting up audio") are impossible, because we do not have access to Bioware's projects in a form we can open.
There are some ways to extract the sound files from the fsb files (http://social.biowar...Request:_Sounds), but they are not universally successful, and we still don't have a way to see how Bioware used the raw sounds in a project. We have the source data for levels, scripts, and even the conversations, cutscenes, plots, and areas of the single player game. Having access to the sound projects would make it much easier for the building community to figure things out on our own.
Things we seem to be struggling with:
Figuring out new music sets and placing them in areas (new compositions, and new combinations of existing music resources - I suspect it has something to do with the parameters set in FMOD, but where do we find guidance on what those parameters are?).
Player and creature soundsets: though there seems to be a lot of progress on this
Some more information about how the game processes the music and sounds (most of the other resources have 2DAs associated with them; music and sound have a couple of 2DAs as well, but they give us a very incomplete picture).
Any thoughts out there? I'm sure it would be helpful to the powers that be, who are generally of a benevolent nature, to get an overview of our sound issues in one place. And as we solve some of our sound issues, we have to make sure to contribute to the wiki, because most people will eventually run into the same problems many of us have.
I'm no expert, so I might have missed other problems, or existing solutions to the problems I've noticed.
[Request] More robust guidance on sound: State of the Nation?
Started by
Qutayba
, March 01 2010 06:33
#1
Posted 01 March 2010 - 06:33
#2
Posted 01 March 2010 - 11:44
I can help with advice on recording and mastering, but I have little to no experience with the sound implementation in the toolset. I may have a look into it if I have time, but at least I can help with getting good quality sound at source.
#3
Posted 21 March 2010 - 06:25
I knew I should have trade-marked the "State of the Nation" topics!

One thing I am interested to understand is how you guys prepare a script for an external voice artist. I've not noticed a "Builder to Voice Artist" export tool, so is it simply a case of sending them all the conversations that the character they are voicing is involved in and asking them to pick out the correct lines?
I'm asking because I recently threw together a "quick and dirty" utility for one of my writers which extracts all the dialogue and comments from every conversation in a B2B export (.dadbdata) file and saves them in a simple text file. It occurred to me that it should be straightforward to add a front end and tweak this so it could be used to create a file for each NPC in a conversation which contained, for example, the string ID; any comments; and the line itself in an Excel-compatible format. However, I've no idea if something better than this already exists in the toolset or as a standalone utility.
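For anyone wanting to roll their own, the per-NPC export step could be sketched like this. It's a minimal illustration only: the record layout (speaker, string ID, comment, line) is a hypothetical stand-in for whatever your .dadbdata dump actually produces, and the sample lines are made up.

```python
import csv
from collections import defaultdict

# Hypothetical records as they might come out of a conversation dump:
# (speaker, string_id, comment, line) -- placeholder data, not real toolset output.
lines = [
    ("Guard",  "STR_1001", "gruff, suspicious", "Halt! Who goes there?"),
    ("PLAYER", "STR_1002", "",                  "Just a traveler."),
    ("Guard",  "STR_1003", "relaxing",          "Move along, then."),
]

def write_sides(records, out_prefix="sides"):
    """Group dialogue records by speaker and write one CSV per character."""
    by_speaker = defaultdict(list)
    for speaker, string_id, comment, text in records:
        by_speaker[speaker].append((string_id, comment, text))
    for speaker, rows in by_speaker.items():
        path = f"{out_prefix}_{speaker}.csv"
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["string_id", "comment", "line"])
            writer.writerows(rows)

write_sides(lines)
```

CSV opens directly in Excel, which covers the "Excel-compatible format" requirement without any extra dependencies.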
#4
Posted 22 March 2010 - 01:09
I am no voice actor but wouldn't the voice actors require the entire conversation to get a feel for it? (I know we can put in comments but still....)
#5
Posted 22 March 2010 - 07:47
Having the conversation definitely makes for a more accurate interpretation of a line, which is especially important in a situation like this, where there are often multiple potential ways to lead into the same line (in the original campaign, for example, there are many cases where your response doesn't change the overall path of the conversation, but you do get an answer-specific bit of dialogue before rejoining the main conversation thread). It takes work to get an interpretation that can work like this. Even in the Bioware campaign there are some places where these are less than 100% successful (which is how I know it works that way).
An ideal output for use by voiceover talent would be something along the lines of a "choose your own adventure" version of the script. Basically you'd traverse the tree of choices by always taking option 1, labeling each new segment. When you reach the end, start again with the second branch of the first node, referring back to previous segments any time you merge back into the first progression. The one thing that would be different from the CYOA books is that you'd want to note the total number of ways to reach a particular segment (and from which segments), so the actor knows the segment has to fit multiple lead-ins (and can reference them).
This would look something like:
-- Sides for Character 1 --
SEGMENT 1 (Root - initiates conversation)
Character 1: (comments/stage direction) Dialogue by actor beginning the conversation. Initial dialogue ends in a question?
PC option: Answer 1 [links to SEGMENT 2]
PC option: Answer 2 [links to SEGMENT 3]
PC option: Don't want to talk right now [END]
SEGMENT 2 ( 1 FORK from SEGMENT 1)
Character 1: More dialogue, further clarifying the question asked in Segment 1.
PC Option: Answer 1 [link to SEGMENT 3]
PC Option: Answer 2 [link to SEGMENT 4]
SEGMENT 3 ( 2 FORKS from SEGMENT 1 and SEGMENT 2)
Character 1: More dialogue
PC Option: Answer 1 [link to SEGMENT 4]
PC Option: Answer 2 [END]
SEGMENT 4 ( 2 FORKS from SEGMENT 2 and SEGMENT 3)
Character 1: Final dialogue, end of conversation.
[END]
Something like this should be fairly easy to generate, though providing the segment fork info might require loading the data into a stack or making multiple passes instead of a simple once-through parse. I've only started playing around with the toolset, though. On the audio side, my background is as a sound engineer and director in the theatre biz... so I'm looking at something that uses a similar principle.
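The two-pass approach described above (one pass to collect fork counts, one depth-first walk to label and print segments) could be sketched roughly as follows. The node structure is entirely hypothetical, just a stand-in for whatever the toolset's conversation export actually contains; the dialogue mirrors the four-segment example.

```python
from collections import defaultdict

# Hypothetical conversation tree: node id -> line plus PC options.
# A target of None means the option ends the conversation.
nodes = {
    "n1": {"line": "Initial dialogue ends in a question?",
           "options": [("Answer 1", "n2"), ("Answer 2", "n3"),
                       ("Don't want to talk right now", None)]},
    "n2": {"line": "More dialogue, clarifying the question.",
           "options": [("Answer 1", "n3"), ("Answer 2", "n4")]},
    "n3": {"line": "More dialogue",
           "options": [("Answer 1", "n4"), ("Answer 2", None)]},
    "n4": {"line": "Final dialogue, end of conversation.",
           "options": []},
}

def export_sides(nodes, root):
    # Pass 1: record which segments fork into each node.
    forks = defaultdict(list)
    for src, node in nodes.items():
        for _, dst in node["options"]:
            if dst is not None:
                forks[dst].append(src)
    # Pass 2: depth-first walk, numbering each node the first time it is seen.
    seg, order, stack = {}, [], [root]
    while stack:
        nid = stack.pop()
        if nid in seg:
            continue
        seg[nid] = len(seg) + 1
        order.append(nid)
        # Push children in reverse so option 1 is explored first.
        for _, dst in reversed(nodes[nid]["options"]):
            if dst is not None and dst not in seg:
                stack.append(dst)
    out = []
    for nid in order:
        header = f"SEGMENT {seg[nid]}"
        if nid == root:
            header += " (Root - initiates conversation)"
        elif forks[nid]:
            header += f" ({len(forks[nid])} FORK(S))"
        out.append(header)
        out.append(f"Character 1: {nodes[nid]['line']}")
        for text, dst in nodes[nid]["options"]:
            target = f"[links to SEGMENT {seg[dst]}]" if dst else "[END]"
            out.append(f"PC option: {text} {target}")
    return "\n".join(out)

print(export_sides(nodes, "n1"))
```

On this sample tree the walk labels the segments 1 through 4 in the same order as the worked example, and the fork counts fall out of the first pass for free.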
#6
Posted 22 March 2010 - 08:51
I don't see a problem with supplying the full conversation for context, but surely the goal is to make the VA's life easy and ensure that a) all lines have been recorded and b) they have been assigned the correct string IDs?
#7
Posted 23 March 2010 - 10:39
When I record voiceovers for performance, I keep things as simple as possible when doing recordings. We have sides ready for actors (monologues are given 'as is'... scenes usually include the interspersed dialogue, unless there's a long break until the next line, which we usually skip). Typically we will highlight the lines that the actor reads (leaving stage direction and other characters' lines as plain text, available for interpretation).
Once I begin a session, though, I leave the mic live, breaking only to save sections when we are clearly moving to another piece. Sometimes we move through lines quickly, getting the right interpretation in one or two tries... sometimes the director gives notes a number of times to get a line right. I leave the mic live because I often get useful cuts or phonemes out of bits that wouldn't have been recorded otherwise. I've spliced together entire speeches out of eight or ten fragmented readings.
As we go, I'll jot down timestamps of the cuts that the director thought were good; sometimes I will ask that these be repeated right away because I could hear a glitch in the sample (like a pop from a consonant hit too hard... or a script rustling, etc). Then when we're done (or done with a section, if it's a larger project), I'll go back through and play the good samples for the director. If they're approved, I'll transfer those cuts into their own individual files later and clean them up for performance use (as well as adding in any processing that needs doing).
I typically save files in an Act_Scene_line-First_couple_words_of_the_line.wav format... and when I build a show using our cueing system, it's simply a matter of plugging the .wav files into cues in order. If the VA has a script formatted as I suggested above, you could easily plug the string ID for naming into the segment header. Or maybe as:
Character (string ID): Line of dialogue, bleh bleh bleh.
But really, the voice talent doesn't 'need' string IDs. Right? That's just a convenience so the builder who is putting them in place in the toolset doesn't have to review the sample and label it properly before it's usable. If you're fortunate enough to have an engineer handling the recordings and sample cleanup for you, they're probably going to be responsible for this step (meaning the actor really just needs a script/sides). But if it's all on the VA, then sure; you would want to include it.
I was taking a look at the explanation/process given in the wiki (http://social.biowar...g_for_recording) for this tonight. I'm going to have to take a look at what this generates... if someone has one up somewhere, I'd like to take a look. It seems like it would have nearly everything that you'd see in my recommendation... except for info on lines that loop back to others, or cases where a line is used for multiple things. Some of this would probably be obvious when you got to the line later on... some of it might be completely invisible to the VA, though. Dialogue writers don't 'have' to reuse speech, though... and even if you did say the identical words, you wouldn't have to use the same recording; the new version could have a different interpretation.
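The naming convention above is easy to automate so the engineer (or VA) doesn't label files by hand. A small sketch, with the function name and the optional string-ID prefix being my own invention rather than anything from the toolset:

```python
import re

def sample_filename(act, scene, line_no, first_words, string_id=None):
    """Build a cue filename in the Act_Scene_line-First_couple_words.wav
    style, optionally prefixed with the toolset string ID for the builder."""
    # Strip punctuation and keep the first few words of the line.
    words = re.sub(r"[^A-Za-z0-9 ]", "", first_words).split()[:3]
    stem = f"{act}_{scene}_{line_no}-{'_'.join(words)}"
    if string_id:
        stem = f"{string_id}-{stem}"
    return stem + ".wav"

print(sample_filename(1, 2, 14, "Halt! Who goes there?"))
# -> 1_2_14-Halt_Who_goes.wav
```

With the string ID included, the files sort and match up against the toolset lines without the builder having to listen to each sample first.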
Edited by Jaesic, 23 March 2010 - 10:42.