Multiverse travel immersive experience

Tl;dr: I built a multiverse travel immersive experience with an old phone, a Pi, and AI tools for a local art show!

My wife and I are fortunate to know a ton of talented people through work and random chance. Some of our friends, inspired by collectives who design cool experiential stuff like Meow Wolf, decided to pull things together for a few shows where they get to:

  • “make some cool shit”
  • have fun with friends
  • show people a good time

And that is the story of how the Spectacle Society of Kansas City came to be!

My wife and I had been to a Spectacle Society show at a brewery during a local First Friday. A few weeks later, we were invited to contribute a couple pieces for the next show! With a theme for the show (forests and portals) and a deadline set, I went to my stash of unused stuff, picked some parts and set to work.

Go grab a drink and a snack, people: this is going to be a long one!

The experience

If we could travel through time, space and alternate dimensions, what would we need? A professional multiverse travel agency, of course! I decided this would be the perfect build for a touch tone phone I had thrifted a while back. I also knew that I wanted it to be a Raspberry Pi project (I have way too many of these lying around, and they're perfect for an immersive experience – stable, low-maintenance, rugged).

During the experience, guests would pick up the receiver of the travel agency’s courtesy phone, and the phone system would:

  • Welcome prospective travelers.
  • Offer a selection of choices – top destinations, travel precautions and making a reservation.
  • Loop back around if no choice has been made.

Easy on paper, but in practice it means this:

  1. The Raspberry Pi boots up
  2. The Pi initializes the available audio hardware
  3. The Pi starts the phone system script (and restarts the script if it crashes)
  4. The script keeps an eye on the phone’s handset hook
  5. If someone picks up the handset, the script plays a welcome message and the options available over the phone
  6. The script waits for input from users
  7. The script plays the message matching the user selection (including invalid selections)
  8. The script loops back around to step 4 once the selection message is done playing
  9. If the guest hangs up, the script stops playing messages and waits for the next person to pick up the handset

And so on and so forth, until the Pi is turned off – in this case, for the couple of weeks the show would run.
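In Python terms, the heart of it is a loop along these lines – a rough sketch with stubbed-out helpers, not the final script (in the real thing, the helpers read GPIO pins and play audio files):

```python
import time

def handset_off_hook() -> bool:
    return False  # stub: the real version reads the hook switch pin

def play_message(name: str) -> None:
    print(f"playing {name}")  # stub: the real version plays a sound file

def wait_for_keypress(timeout: float):
    time.sleep(timeout)  # stub: the real version scans the keypad matrix
    return None

while True:
    if not handset_off_hook():
        time.sleep(0.1)                # step 4: keep an eye on the hook
        continue
    play_message("welcome")            # step 5: welcome message and options
    while handset_off_hook():
        key = wait_for_keypress(timeout=10.0)  # step 6: wait for user input
        if key is None:
            play_message("menu")       # no choice made: loop back around
        else:
            play_message(f"option_{key}")      # step 7: matching message
    # Handset back on the hook: stop and wait for the next guest (step 9)
```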

Having outlined all that, it was time to dive in!

Audio hardware setup

Since I had decided this would be a Pi project, I installed the latest version of Raspberry Pi OS. Nothing special here: all I did was flash a fresh image with the Raspberry Pi Imager.

This whole experience revolved around sound: to play audio from the Pi into the handset, I connected a small amplifier from Adafruit between the Pi and the handset's speaker, replacing the phone's original system. I didn't bother doing anything with the microphone, since guests would not have voice interactions with the system.
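For an I2S amp like Adafruit's MAX98357A, for example (shown here as a general illustration, not necessarily the exact board), the Pi side of the setup boils down to two lines in /boot/config.txt:

```
# /boot/config.txt – swap the Pi's onboard audio for the I2S DAC overlay
dtoverlay=hifiberry-dac
# dtparam=audio=on   (commented out to disable the onboard audio)
```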

Thanks to the setup above and the many troubleshooting scenarios covered in Adafruit's documentation, that part was pretty easy! The only difficulty I had was picking the correct sampling rate for the amplifier to drive the speaker in the handset. The sound from the handset had to be crackly in just the right way. I believe I brought the sampling rate down to 4600 Hz in the software settings for the amplifier to get just the right kind of retro audio!

Raspberry Pi OS comes with a variety of programs to test sound output (a couple of examples below). Once I was happy with what came out of the handset, I called it good and moved on to setting up the various switches on the phone.
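For instance, the stock ALSA utilities are enough for this kind of check – something like:

```bash
speaker-test -t wav -c 2      # play the built-in ALSA test sound
aplay some_message.wav        # play any WAV file (file name hypothetical)
```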

Handset hook and keypad hardware setup

Getting the various switches on the phone to register correctly was pretty easy, considering I would only have to deal with 13 switches: 12 for the keypad, and one for the handset hook that detects whether someone wants to use the phone!

For the hook switch, I just removed all the insides of the phone and tested the various spade connectors with a multimeter until I got a continuity reading with the hook pressed down: then, I knew I had my wires!

The next part was figuring out the keypad. Keypad matrices are a timeless way to wire keys while making the most of your electronics, and this touch tone phone is no exception. All I had to do was identify the traces corresponding to the rows and columns of the keypad and label them. Because it was only a few switches – MUCH simpler than any keyboard I've made thus far – that went pretty fast.

At that point, all I had to do was pick the pins for the switches on the Raspberry Pi – using pinout.xyz as one does – and just like that, it was time to start writing the Python script for the phone system.

By then, the show was two to three weeks away: it was time to speed things up a bit, and I had heard through the grapevine that Large Language Models had recently made some serious progress on the development front. Since this whole project was similar to other builds of mine, I thought it would be a good benchmark to see what OpenAI's ChatGPT and Anthropic's Claude had to offer!

Writing keypad code with AI

Getting user input from the touch tone keypad seemed pretty straightforward. The “keyboard” itself was just your average key matrix – very close to my Pi Pico Macro Keypad. I knew what working code for user input from a small key matrix should look like, so I decided I would gauge what ChatGPT and Claude could do based on that first task and work from there.

I went to OpenAI’s ChatGPT and prompted it for exactly that: Python code that reads key presses from a small key matrix wired to the Raspberry Pi’s GPIO pins.

In return, ChatGPT gave me working matrix-scanning code. Good enough for me! If you’re curious, I gave Claude the exact same prompt; both results are in the project repository I link to in the conclusion.
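For flavor, here's a minimal sketch of that kind of matrix-scanning code – with hypothetical pin choices, and my paraphrase rather than the verbatim LLM output:

```python
import time
import RPi.GPIO as GPIO

ROW_PINS = [5, 6, 13, 19]   # hypothetical BCM pins for the 4 rows
COL_PINS = [12, 16, 20]     # hypothetical BCM pins for the 3 columns
KEYS = [["1", "2", "3"],
        ["4", "5", "6"],
        ["7", "8", "9"],
        ["*", "0", "#"]]

GPIO.setmode(GPIO.BCM)
for row in ROW_PINS:
    GPIO.setup(row, GPIO.OUT, initial=GPIO.HIGH)
for col in COL_PINS:
    GPIO.setup(col, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def scan_keypad():
    """Drive each row low in turn; a pressed key pulls its column low."""
    for r, row in enumerate(ROW_PINS):
        GPIO.output(row, GPIO.LOW)
        for c, col in enumerate(COL_PINS):
            if GPIO.input(col) == GPIO.LOW:
                GPIO.output(row, GPIO.HIGH)
                return KEYS[r][c]
        GPIO.output(row, GPIO.HIGH)
    return None

try:
    while True:
        key = scan_keypad()
        if key:
            print("pressed:", key)
            time.sleep(0.3)  # crude debounce
        time.sleep(0.01)
finally:
    GPIO.cleanup()
```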

After a little testing and a few tweaks, I called it good enough for what it was supposed to do. I repeated the process for the handset’s hook switch. Once I was happy with the results and the reliability of my tests, it was time to move on to growing the phone tree part of our immersive experience!
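The hook switch is even simpler: one input with the Pi's internal pull-up. Again a sketch – the pin number is made up, and which reading means "off hook" depends on how the switch is wired:

```python
import RPi.GPIO as GPIO

HOOK_PIN = 26  # hypothetical BCM pin wired to the hook switch

GPIO.setmode(GPIO.BCM)
GPIO.setup(HOOK_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def handset_off_hook() -> bool:
    # With the internal pull-up, the pin reads LOW while the switch is closed.
    return GPIO.input(HOOK_PIN) == GPIO.HIGH
```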

Writing the phone tree code with AI

ChatGPT and Claude did so well on that first, very simple part that I decided to keep experimenting. I had written code similar to what I had in mind for my previous employer – a sound-based treasure hunt. As a reminder, here’s what the phone tree system needed to do:

  1. The phone tree script plays an intro audio file once the guest picks up the handset
  2. The guest pushes a button and the phone system plays the corresponding audio file
  3. The script loops back to the top of the phone tree and waits until the guest hangs up

I knew based on some Adafruit tutorials and older projects that I wanted to use the pygame module for handling sound. After looking for sound files to play when guests would press keys on the keypad, I decided instead to use the numpy module to generate the correct DTMF tones on the fly – something I had absolutely no experience with.
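DTMF is a neat fit for generating tones on the fly: each key is just the sum of two sine waves, one row frequency and one column frequency. Here's a sketch of that idea with numpy and pygame – the standard DTMF frequency pairs are real, but the sample rate and other parameter values here are my own placeholders, not the project's:

```python
import numpy as np
import pygame

SAMPLE_RATE = 44100  # placeholder; the project used a much lower, crunchier rate

# Standard DTMF pairs: (row Hz, column Hz) for each key.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

pygame.mixer.init(frequency=SAMPLE_RATE, size=-16, channels=1)

def dtmf_tone(key: str, duration: float = 0.2, volume: float = 0.5):
    """Build a pygame Sound from the two summed sine waves for a key."""
    low, high = DTMF[key]
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    wave = np.sin(2 * np.pi * low * t) + np.sin(2 * np.pi * high * t)
    samples = (wave / 2 * volume * 32767).astype(np.int16)  # scale to 16-bit
    return pygame.sndarray.make_sound(samples)

dtmf_tone("5").play()
pygame.time.wait(300)  # give the tone time to finish playing
```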

I requested a simplified version of the phone tree script from ChatGPT and Claude, with only one option to pick. They both did great at writing code that roughly worked out of the box. It was only later that things got more complicated, for two reasons:

  1. The first limit is purely technical. Large Language Models can only keep track of so much data at once – their so-called context window. Past that point, LLMs can lose track of what they’re doing, introduce bugs and variations, go off the rails or stop the conversation.
  2. The second limit is… financial! OpenAI and Anthropic only let you use the best of their LLMs for free for so long. Because of that, I often had to pause between sessions to let my free usage reset.

Overall, it probably took two or three days of work with the LLMs to gradually build up the phone system. Once the script worked smoothly on my laptop and the Pi, I moved on to the rest of the work: integrating all the parts, creating the sound files and adding the finishing touches!

I did save time when it came to troubleshooting some problems directly tied to the keypad – more on that later!

Phone integration

I removed all the unnecessary parts from inside the phone and added electrical tape where the Pi and metal frame might be in contact. I also rewired the switches, moving from the spade terminals to pins I could just plug into the Raspberry Pi.

Then, I reinstalled the keypad in place – and noticed I had wired it wrong! Could I de-solder and re-solder the whole thing? Sure! Or I could just turn the keypad upside down, because the matrix would still work anyway 😉

Finally, I added the Pi, wired up the phone hook, the keypad and the handset speaker, then made room for the power supply line. I tested that everything was working correctly and let it run overnight for good measure. Stability is critical when you’re developing a project that you won’t be around to troubleshoot! Once I was satisfied with that, I moved on to building the audio files for the phone system.

Writing the phone script and audio files

I wrote a quick outline for the phone tree system. The premise of the immersive experience was that the Infinite Destinations Travel Agency’s courtesy phone would guide would-be travelers to:

  • Learn about their top destinations (key 1)
  • Get safe travel tips (key 2)
  • Book tickets (key 3)
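In the script itself, that menu is little more than a lookup from key to audio file (the file names below are hypothetical):

```python
# Key-to-audio lookup for the phone tree menu.
MENU = {
    "1": "top_destinations.wav",    # learn about top destinations
    "2": "travel_precautions.wav",  # get safe travel tips
    "3": "reservations.wav",        # book tickets
}
```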

After I fleshed out the outline for each menu option, I fed the script to ElevenLabs’ text-to-speech generator. The whole project fit within the free monthly generation allowance.

I decided to use the voice option Matilda as the voice of Infinite Destinations. After a few tweaks, I downloaded the generated audio files and spiced them up with free sound effects.

With all the ingredients downloaded, I mixed the audio tracks together in Audacity. Once I was happy with it, you probably guessed it: I called it good and moved on!

Finishing touches

At that point I could finally see the finish line! There were a few unexpected issues I ran into that I thought might be interesting to you all.

First, I found out I couldn’t collect keypresses from the phone’s keypad the way keyboards do. Blame it on not having an Enter key! After a little research and some additional prompting, Claude walked me through using the termios module to get around that problem, and even provided working sample code for me to try. I had never done that before, so it was a very welcome solution to a last minute problem!
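The termios trick itself looks like this – a sketch of the standard termios/tty approach for reading one character from stdin without waiting for Enter, not Claude's exact sample:

```python
import sys
import termios
import tty

def read_single_key() -> str:
    """Return one character from stdin as soon as it is typed."""
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setcbreak(fd)  # cbreak mode: single keypresses, no Enter required
        return sys.stdin.read(1)
    finally:
        # Always restore the terminal, even if the read is interrupted.
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)

if __name__ == "__main__":
    print("Press a key...")
    print("You pressed:", read_single_key())
```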

Second, the script wouldn’t start on Raspberry Pi startup. I had run into that before: a little while ago, my MakerSpace Thermal Printout Machine had the same problem. I went with another suggestion from Claude and used the screen command to start the script in a user session of its own, independently from any user logging in on the Raspberry Pi itself. Check this article from How-To-Geek for more details!
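In practice that means a single line – session name and path are hypothetical here – in the crontab (crontab -e), or wherever you prefer to hook into boot:

```
@reboot screen -dmS phonetree python3 /home/pi/phone_tree.py
```

The -dmS flags start a detached, named screen session, so the script runs in the background from boot and you can reattach to it later with screen -r phonetree to check on it.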

How immersive is an experience without props? Not very! I whipped together a flier for our multiverse travel agency – now called Infinite Destinations Travels! – with Creative Fabrica Studio. A quick hop to the local library for a couple of paper prints and that was that!

Finally, I grabbed a black Sharpie and wrote the word “PUCE” all over the phone body (a teaser for the safety section of the multiverse travel experience). I also wrote “Pick up pick up pick up” on the receiver as a call to action for guests.

I was done – with one week to spare!

Conclusion – the show

The show went really well. Full-time members of the Spectacle Society told me we had at least 300 guests on the show’s first evening. As far as I know, the installation never crashed over the few weeks that the show ran at the Digital Dreams KC art gallery.

One of the Spectacle Society’s core members had designed another immersive experience based on a rotary phone. This is someone I look up to, with a resume that makes me look dumb as rocks. They said they designed their phone jukebox because… they didn’t think they had the time to develop a whole phone tree system! I felt very proud 🙂

Conclusion – developing with AI

First things first – you can find all of the code for this project on GitHub, warts and all. Each file more or less covers a specific function or milestone in the development.

Second, would it have gone faster if I had written the script from scratch by hand? Hard to tell. I would say that early drafting and iteration went very fast. I knew exactly what I wanted, how to do it, and how to express it to the Large Language Models. That helped me get the results I wanted really fast.

After that initial step, is there still a gain? Well… it depends! Development of new features or functions was always relatively straightforward, but consolidating those features together always felt tricky. That’s when bugs would pop up from one prompt to the next, requiring cleanup and troubleshooting. I wouldn’t recommend asking ChatGPT or Claude to correct code that isn’t working: it’s easy for them to pile more continuity-breaking changes onto the code. Instead, paste in the faulty code and ask the LLMs to walk you through it so you can fix it yourself.

Claude was a huge help when it came to troubleshooting last-minute issues, such as the script not starting, or getting user input working via the alternate termios method. Ironically, looking things up online often brought up vague, unhelpful articles – possibly written by AI. Claude, on the other hand, was able to guide me methodically through what termios does, one step at a time.

Ultimately, it feels like Claude was the more reliable of the two chatbots. Its newly introduced Artifacts feature was very convenient – so convenient that OpenAI recently introduced a similar feature called Canvas.

Overall, this was a positive experience! In the end, it’s all subjective: I’d recommend you give it a shot and see if LLMs can be useful to you. Personally, I’m going to continue experimenting with AI-assisted development!