After the pitches on St Nicholas Day we decided to give everyone a role in order to facilitate the decision-making process and general workflow. We had named Owen as project leader before the pitch, and he was now also in charge of the narrative. Matthias was put in charge of the card game mechanics, and Andreas would be most concerned with the meta-game and the mixed-reality aspect. Sarah was appointed lead artist and put in charge of logos, and I took the position of lead programmer.
We decided that the person in charge would have the final say on decisions concerning their field, but everyone could always make suggestions and offer criticism to anyone.
We decided to meet in person every Monday, and over Skype every Thursday.
We had been using Google Docs already and they had proven effective, so we decided to keep all written text there, giving us nice, dynamic documents. We used WhatsApp as our main communication channel, and Dropbox as storage for larger files. We also decided to use Slack as a sort of hub to connect everything, but at the time it hadn’t been used much. We used Skype for the Thursday calls, but not otherwise.
Developing the core mechanics for the card game had proven challenging. Given that we needed to nail those down ASAP, we had spent most of our efforts that week coming up with ideas, then revising them again and again. We hadn’t quite finalized them, but we had a solid foundation and just needed to shave off some of the rough edges.
The narrative had been developed alongside the mechanics. We had had the 8 factions from the start and wanted to stick with them. We had decided that we could not possibly finish all the necessary cards for the 8 factions by February, so we chose three factions that were different enough to show the possibilities the game would offer.
The meta-game had changed as well. We had found some flaws and found ways to turn them into mechanics, but it still needed more work.
On the day of the Kickoff meeting I had a few ideas about what I would like to do and wouldn’t like to do. I knew I didn’t want to do VR like in the previous semester, and I was interested in creating a board game, perhaps with QR codes.
After the kick-off meeting I went to the open space and started bouncing ideas around with the others who were there. Owen and I started talking about posting masses of recognizable QR codes around the city, so many that people would become interested and just scan them out of curiosity. The idea was that there would be this corporation whose only product is its logo which they post everywhere.
We met again on Friday, along with Andreas and a few other people, to bounce ideas around some more. We had the idea of making a card game out of it, where you need to go outside and scan QR codes in order to earn new cards. The card game would feature a corporate theme, to cover the ideas we had before.
After the brainstorm meeting at CGL, Owen and I spent the weekend thinking about different aspects of the game. I was mostly concerned with the game mechanics and the meta-game of collecting QR codes, as well as the legal consequences of posting stickers in public places. I also spent a lot of time researching the stock market, to see what kind of game mechanics we could build from it.
Adding Team Members
During the weekend Andreas decided to join us. I was very happy to get Andreas on the team, as I had wanted to work with him before, and he seemed very interested in the Mixed Reality aspect, which I didn’t want to concern myself with too much.
We met on Monday to polish the ideas we had and start working on the pitch.
During our pitch Sarah joined, and afterwards Matthias joined as well, which made us a pretty strong team. Owen and I had thought about asking Sarah to join anyway, as we were both impressed with her previous work, so it was great that she joined on her own initiative. Matthias has a lot of experience with card games and was excited to create one from scratch, so he too was a much needed addition to the team!
Later that day Caterina, who hadn’t been there for the pitches, asked to join the team. After some discussion we invited her the next day, as we realised how much artwork would need to be done for the cards, the app, the corporate logos and so on. From what I could tell, Caterina’s art style was quite similar to Sarah’s, so they should be able to work together well!
The Toddler Connection is a detective game for the HTC Vive. It features investigative dialogue, with a unique dialogue mechanic, as well as two distinct perspectives on the world. You take on the role of a toddler in kindergarten who likes playing detective and solving mysteries. Once again playing his game of detective, he searches for the stolen bracelet of his crush, the kindergarten teacher. By talking to the other toddlers in kindergarten, or rather their film-noir alter egos, you find clues and hints about the missing bracelet until you can finally face the thief!
The Toddler Connection is the second semester project I did at the Cologne Game Lab, from May to July 2016. I worked on it with Felix Schade, Pierre Schlömp and Dominique Bodden.
During the entire process of working on The Toddler Connection, we thought about a lot of different features we could implement. A lot of these came up during my programming sessions with Felix, and we just put them aside as “stretch goals” for the time being, hoping that we might get to them eventually. Sadly, we had enough to do just getting the game to a “playable” state, so we had no time to start implementing them.
We wanted to make the kindergarten world a lot more fun: we wanted to implement physics-based toys like marble runs, wooden labyrinths, or things like the “cheese mouse game”. We also wanted to make the Musician’s piano playable, we thought about making a working pen or chalkboard, and about giving the detective a proper cigarette or cigar to smoke.
We also wanted to make the glass bottles and cups in the noir world breakable. I noticed that a lot of people tried breaking bottles during playtesting, and I was sad that we couldn’t implement it.
We had thought about a feature where the detective/player had to go to the “Crazy Wall” in his office and combine his clues to get new ones, or to be able to accuse someone. The asset made it into the game; however, we couldn’t implement it as a game mechanic.
Initially we also wanted to make the other toddlers in the kindergarten much more active; we wanted them to move around and play instead of standing around idly.
Throughout the project phase I met up with Felix Schade at his place to work on programming. This was an absolute necessity, as it was impossible to test anything in our game without the Vive, the sole exception being the XML.
We met up about twice a week on average, and worked on implementing new features and fixing bugs, all the while making design choices about how things should look or behave. Whenever we had to make an important choice we implemented it first and asked for feedback and opinions from the rest of the group afterwards, which generally worked out well.
I realized that working with two people on the same machine is actually very efficient, as Felix and I could pick up on each other’s mistakes and fill the gaps the other left. Having two people is especially helpful when working with the Vive, as one person can test whatever changes the other is making, and because we both knew the code, the person testing could notice problems that someone who didn’t know the code couldn’t.
I did notice, however, that I found this co-programming quite boring after a while of not writing the code myself (just looking over Felix’s shoulder); I quickly lost motivation and got tired much faster. When I eventually told Felix about this he happily let me do more of the typing, and from then on I was a lot more motivated.
While working with Felix I was quite surprised that he had enrolled as a game designer instead of a programmer. I have to admit that Felix is a lot more experienced with Unity than I am (he has been working on his own game for a while now), and he’s good at making things look good, something I haven’t spent much time learning yet.
We were VERY ambitious with our dialog system. I absolutely love the idea; I love how much freedom you have, talking about everything with everyone. But we vastly underestimated the amount of work involved. For the final version, we have 9 interactable NPCs and 185 possible item combinations, adding up to a staggering 1665 dialog lines, not including detective replies and a few alternative lines. I don’t know exactly how many hours he put in, but Pierre spent a good part of the last 3 weeks or so working on little else than writing the dialogs, and eventually cutting the voice recordings into lines for the NPCs to say.
I had decided to work with Pierre on the dialogs, checking his spelling and grammar as well as tweaking some lines that seemed hard to understand. I had to touch the dialogs anyway, as I was the one who would later put them into the XML file, so during this process I worked quite closely with Pierre to make sure we had the correct dialogs and items.
Yesterday, after working on the dialogs on and off for the last month, I finally finished writing them into the XML file. It’s still missing a few lines, but they will likely be added later today.
There is not much I can say about the process of copying the dialogs into the XML file, except that I watched 21 hours of a DnD game in the last three days, and that I might need new ‘Ctrl’, ‘C’ and ‘V’ keys.
Around the end of the first week of July, I decided to rework the XML I had made, because the code was quite ugly. The script that reads the XML file was built in such a way that it depended on the file’s structure: if the XML file had errors in the order of its elements, the dialog would not be displayed correctly.
Script-centric instead of XML-centric code
I took the current XML file and the reader script and started flipping things around. The first thing I did was make the script read all of the elements of a dialog and save them to local variables, instead of executing the actions as soon as an element was read.
The strings that I get out at the end are sent to another method, which deals with executing the dialog.
For each of the strings (each an element in the XML) a few things are done. First I check whether there is more than one node of the same name (e.g. <GainDialogItem />). The original reader method already deals with this by adding a ‘+’ between multiple nodes of the same name, so in this method I can check for that. In most cases there should never be more than one node of the same name, but I like to put some kind of error handling in, to catch human (especially my own) errors.
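The joining idea can be illustrated with a small Python sketch (the project itself is Unity C#; the element and item names here are made up, not the game's real ones):

```python
import xml.etree.ElementTree as ET

def collect_children(dialog_node):
    """Read all child elements into a dict, joining repeated node names with '+'."""
    values = {}
    for child in dialog_node:
        if child.tag in values:
            # More than one node of the same name: join the texts with '+',
            # so the executing method can detect and split them later.
            values[child.tag] += "+" + (child.text or "")
        else:
            values[child.tag] = child.text or ""
    return values

node = ET.fromstring(
    "<Dialog><GainDialogItem>Key</GainDialogItem>"
    "<GainDialogItem>Badge</GainDialogItem><NpcText>Here.</NpcText></Dialog>"
)
print(collect_children(node))
# {'GainDialogItem': 'Key+Badge', 'NpcText': 'Here.'}
```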
After that I do whatever that element is supposed to do. In the case of GainDialogItem and LoseDialogItem it’s simple: I save the item as “true” or “false” in the PlayerPrefs, thereby saving player progress, and for GainDialogItem I also display a little icon so the player knows they gained a new item.
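In spirit it looks something like this Python sketch, where a plain dict stands in for Unity's PlayerPrefs key-value store (item names are placeholders):

```python
# A dict standing in for Unity's PlayerPrefs (which persists across sessions).
player_prefs = {}

def gain_dialog_item(item):
    player_prefs[item] = "true"
    # In the game, a little icon is also shown here so the player
    # knows they gained a new item.

def lose_dialog_item(item):
    player_prefs[item] = "false"

gain_dialog_item("boozeBottle")
lose_dialog_item("bracelet")
print(player_prefs)
# {'boozeBottle': 'true', 'bracelet': 'false'}
```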
SendAMessage is very straightforward as well. I just call the method specified in the inner text, something like “OpenMansionDoors”, which allows the player to enter the mansion to talk to the boss and also saves that information to a PlayerPref, so that progress is saved.
Dealing with what the npc and possibly the detective say, and especially their audio, is a bit more complicated.
Unlike for the other strings, I don’t have error handling here for the case of more than one element for the NPC’s text; I just remind myself not to do that.
The first step is to put the text in the speech bubble for the NPC (commented out here, because the script is in a testing project). Then I do a quick check to see if the line has been filled in properly: if it is “TEXTLINE”, the dialog is missing somehow. After that I create strings for the audio files that are to be played, using the naming conventions I decided on. I save the detectiveSays and detectiveAudio strings globally to use later on.
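A tiny Python sketch of that check plus the string building. The post doesn't reproduce the actual naming convention, so the "npc_itemCombo" format below is invented for illustration:

```python
def prepare_line(npc, item_combo, npc_text):
    """Check that a line was filled in, then derive the audio-file names."""
    if npc_text == "TEXTLINE":
        # The placeholder is still in the XML: the dialog is missing somehow.
        raise ValueError(f"missing dialog for {npc} / {item_combo}")
    npc_audio = f"{npc}_{item_combo}"          # hypothetical naming convention
    detective_audio = f"det_{npc}_{item_combo}"
    return npc_audio, detective_audio

print(prepare_line("Barkeep", "boozeBottle+theDon", "Sure, pal."))
# ('Barkeep_boozeBottle+theDon', 'det_Barkeep_boozeBottle+theDon')
```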
Now the audio file for the NPC is played. Most of the text is commented out here again, for the reasons explained above.
First I try to load the NPC’s audio file from the Resources; this is why the naming convention is so important.
Then the currently playing audio for this NPC and the detective is stopped to avoid overlapping. Now I play the audio file for the NPC, and -if the detective has a reply for this dialog option- I start a coroutine to let the detective talk, delayed by the length of the NPC’s audio. Here I also have a debug warning for missing audio files, again commented out.
After the specified delay, the method “DoDetectiveTalking” is called. This method does pretty much the same as the above: it writes the text on a speech bubble, stops whatever the detective is currently saying, and then plays the new audio file.
When I realised just how big the XML file would be (it’s >6500 lines, 185 combinations per character), I imagined that traversing all of it could take a lot of computing power and might severely lower the framerate, which I absolutely wanted to avoid, especially in a VR game! I did a quick stress test, copy-pasting about 3000 options for one character, and -as expected- the game stuttered badly for 2 seconds. Of course we wouldn’t have nearly that many options in the game, but then again, there is a lot of other stuff happening too. I did some optimisation in the code, like stopping the search after the item is found, but it was obvious that that would not help much.
I thought I could at least improve things by doing the search in parallel with the Update function, instead of attempting to do it all within one frame, so I had to look deeper into Coroutines, which had so far eluded my comprehension.
I tested around a bit and finally learned how exactly they work. I had believed that they would actually run in parallel with the Update function, like multithreading. However, I now understood that they run on the same thread: a Coroutine can halt its execution with a “yield return null” statement, let the Update finish, and then continue where it left off in the next Update.
At the end of my testing I decided to “yield return null” after every 50th item, and that has been working just fine.
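Python generators behave analogously to Unity coroutines, so the pattern can be sketched like this (not the project's code; entry names are made up): the search hands control back to the "Update loop" every 50 items instead of blocking a whole frame.

```python
def search_dialogs(entries, wanted):
    """Search a big list, yielding None every 50 items (~ 'yield return null')."""
    for i, entry in enumerate(entries):
        if i % 50 == 49:
            yield None          # pause here; resume on the next "frame"
        if entry == wanted:
            yield entry
            return

entries = [f"combo{i}" for i in range(200)]
frames, result = 0, None
for step in search_dialogs(entries, "combo120"):
    if step is None:
        frames += 1             # one Update "frame" ran in between
    else:
        result = step
print(result, frames)
# combo120 2
```

The search spreads over three frames instead of stalling one, which is exactly the trade-off that keeps the framerate stable in VR.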
Checking for Human Error
As I mentioned before, I like to make my code check itself for errors, and I definitely wanted to check the huge XML file for any missing combinations, so I wrote a script for it. The script goes through all of the characters and all of the possible combinations, DOES those combinations, and leaves me with a whole bunch of awesome DebugLogs!
First I add all NPCs and items to two lists, so I can go through them later.
Then I build the combinations. I go through all the single items, combine each with every other single item -alphabetically sorted- and add the result to the final list if it’s not already in there. I also exclude a few combinations here that are not possible due to the story of the game.
Finally I do all of the combinations for all of the characters.
Note that I wrote this as a Coroutine. Even when yielding after every 20 items, the game runs at a framerate of around 10 while doing this, and the method takes around a minute to finish checking the XML file.
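The combination-building step can be sketched in a few lines of Python (item and NPC names are placeholders, and the story-based exclusions are left out):

```python
from itertools import combinations

items = ["boozeBottle", "bracelet", "cigar"]
npcs = ["Barkeep", "theDon"]

# Single items first, then every alphabetically sorted pair (no duplicates,
# since combinations() never repeats a pair).
combos = list(items)
combos += ["+".join(sorted(pair)) for pair in combinations(items, 2)]

# Every (npc, combination) pairing the checker would run through.
checks = [(npc, combo) for npc in npcs for combo in combos]
print(len(checks))
# 12  (2 NPCs x (3 singles + 3 pairs))
```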
I knew that we needed to write our dialogs outside of Unity, because it would be way too much text, and editing and keeping track of text within Unity is a pain. I knew from previous experience that XML is used a lot in Unity and AAA games, but I only had a very basic idea of how to use it. I researched on my own for a while and tried a few different approaches based on tutorials from the Unity wiki and some other sites.
For the first few tests I was using the Unity XmlSerializer, and I noticed that it had quite a few flaws that would make my work more annoying. For one, I had to write three scripts just to read the file properly. I also found out that the serializer only goes down to a certain depth of tags, which meant that I would have to create a different XML file for each character, so we would end up with quite a few files.
So eventually I decided to ask Markus, my programming professor, for help. I was quite happy to hear that my approach wasn’t absolutely terrible, but he recommended reading the XML file “directly” instead of using Unity’s serializer. He gave me an example Unity project which explained the method very well, so I could implement it quickly. Now I needed only one script to read the file, and I could put all dialogs in one single XML file too! I had already revised the XML file after our decision on the InventoryItem approach, so I only had to tweak it a little.
I then wrote a prototype scene for it, using a few buttons and text fields to test whether it all worked as intended.
Finally, this is the core method for reading the XML-file.
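The original post showed the Unity C# reader here; as a rough stand-in, this Python sketch shows the "read the XML directly" idea of walking the tree and looking up a dialog by character and item (the XML structure and names below are invented, not the game's actual schema):

```python
import xml.etree.ElementTree as ET

DIALOG_XML = """
<Dialogs>
  <Npc name="Barkeep">
    <Dialog item="boozeBottle">
      <NpcText>Refills cost extra, pal.</NpcText>
    </Dialog>
  </Npc>
</Dialogs>
"""

def find_dialog(root, npc_name, item):
    """Walk the tree directly and return the NPC's line, or None if missing."""
    for npc in root.iter("Npc"):
        if npc.get("name") != npc_name:
            continue
        for dialog in npc.iter("Dialog"):
            if dialog.get("item") == item:
                return dialog.findtext("NpcText")
    return None

root = ET.fromstring(DIALOG_XML)
print(find_dialog(root, "Barkeep", "boozeBottle"))
# Refills cost extra, pal.
```

Reading the file directly like this means one reader script handles any depth of nesting, which is what made a single XML file for all characters possible.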
Now we only had to implement this in the main project. That’ll be a whole other post though.
Today I started skinning a model that Pierre made, the coolguy_prisoner (was there a character like that?). I’m happy I get to apply the skills I learned from Fridi during our Profil² Woche!
After briefly struggling to find the mesh in the file Pierre provided, I got to work. I hadn’t done anything in Maya for a few months, but I got back into the controls surprisingly fast! I quickly created a few joints along the spine and the left arm, just to test whether I still knew how to do this at all!
I tried to let Maya do the skin weights for me but apparently I can’t be that lazy…
So I got to painting the skin weights myself. I wasn’t sure where to start, so I had to experiment a bit, but eventually I just flooded the first child of the arm, the clavicle, with all of the weight for the arm and went from there. I did pretty much the same for the spine.
The model had a pretty complicated structure, with a few hollow areas like the jacket on the arm, which resulted in some weird glitches.
I’ll keep in close contact with Pierre (I’m talking to him right now) in order to optimize our workflow! Eventually I managed to get nice weighting, and it actually looked pretty good for the short amount of time!
As a final test for my rig I made a short animation. Thanks for reading 🙂
Our game is a detective game: we want the player to think while playing and to be able to solve the mysteries themselves, instead of being presented with the solution at the end of the game. We want investigative dialog and freedom of choice in what to ask.
How does the dialog work?
Instead of a set of dialog options for every character, we decided that you should be able to talk to every character about anything that might be relevant. So, for example, you could talk to a character about another character you’ve met, or maybe about where to get alcohol to refill a bottle of booze. We drew some inspiration from Ace Attorney here, where, in similar fashion, you can present an item to the court in order to highlight a discrepancy in a witness’s testimony.
In the beginning we thought about how to limit the amount of things you can talk about. In Ace Attorney, you can highlight any part of a witness’s testimony and try to find a mistake in it. However we knew that we couldn’t allow the player to talk about absolutely everything, because that would require us to write ridiculous amounts of dialog!
We all agreed on an Items category, which would contain the different physical objects the player may carry. We also knew that we had to include persons and locations. For a while I also tried to bring in a sort of “abstract” category, which would contain ideas like money, alcohol or power, and phrases like “the don said I would get a free refill”.
However, in the end we found that we could make this much more intuitive by combining items, like “empty booze bottle” + “the Don” + “Barkeep” to tell a character about getting a refill from the barkeep.
So after two meetings and lots of discussion about abstracts, physicals and topics, we came up with three final categories of InventoryItems:
Items: physical objects the player has, like an empty booze bottle. Persons: the characters you can talk to, like the Don. Locations: the unique places of the noir world, like the bar.
How does the inventory work?
The inventory is closely related to the dialog, so you’ll already have an idea of it, but we still had to find a way to design it visually!
What was important to us was that, like the environment, the inventory should feel good in VR. We liked how the interaction with the environment felt, so we wanted to see how we could achieve that for the inventory too. The solution was to make the inventory behave much the same as the interactable objects in the world. The InventoryItems would spawn when you opened the inventory, and you could grab them and move them around and -to be implemented- have them float around you in zero gravity. To give the item(s) to an NPC in order to talk about them, we would need some kind of container to collect them in, and then give the container to the NPC.
For the first draft we came up with a ring around the player that displayed the different items. It looked very sci-fi, so we soon moved on from that. As of writing this (June 21st) we are still not 100% set on the visual look of the inventory, as we got some negative feedback at the intermediate presentation, but we intend to make it fit the noir style!