The Sound of Music: Timbre, Synthesizers, and Orchestration

The hills are alive… with curiosity! Why do instruments “sound” different from one another, and how do we respond to the different sounds we hear?

To explore these questions, Bey and Kirsten speak with music perception researcher Dr. Stephen McAdams, going in depth about timbre, orchestration, and the ways we perceive tone qualities. Then, the two sit down with electronic musician and educator Ben Runyan to learn about the history and technology behind synthesizers, including a live jam session!

Links for today’s episode:

Transcript
Speaker:

Hello, hello, hello, hello everyone, to Philadelphia and beyond.

Speaker:

I am Kirsten Michelle Cills.

Speaker:

And I'm The Bul Bey, we are your hosts.

Speaker:

That's what they tell us!

Speaker:

And this is the So Curious podcast, presented by The Franklin Institute.

Speaker:

This whole season of the podcast is about the science of music, and today's episode

Speaker:

is going to be about the actual sounds we hear in music.

Speaker:

All right, so first we're going to talk to music cognition expert Dr.

Speaker:

Stephen McAdams, and we're going to go

Speaker:

into depth about a term that we've already heard a few times this season, timbre.

Speaker:

So what is it?

Speaker:

How does it work?

Speaker:

Why are we still talking about it? Absolutely.

Speaker:

And then we'll be joined by local

Speaker:

electronic musician Ben Runyan to teach us how synthesizers work, how the sounds they

Speaker:

make can be morphed, and maybe even give us a little demonstration.

Speaker:

What are your favorite sounds, Bey?

Speaker:

I know that's vague as hell, what are your favorite sounds?

Speaker:

No, that's great!

Speaker:

That's a great question, thank you for asking.

Speaker:

Whether it's, like, an instrument or, like, a sound in the world.

Speaker:

Something I never get tired of are birds in the morning.

Speaker:

They can be a little loud at times, but I'm never mad about it.

Speaker:

I'm not like, "oh, man, birds are chirping again!"

Speaker:

Yeah, right. Yeah.

Speaker:

It's like it reminds you that you're alive.

Speaker:

It's really pretty sound in the city. Absolutely.

Speaker:

Also, another beautiful sound in the city.

Speaker:

The garbage truck coming through,

Speaker:

the people who were working on my street this morning, who woke me up at 05:00 a.m.

Speaker:

Gotta love it. Digging up the pavement...

Speaker:

Well, to help us figure out why we might

Speaker:

like those sounds, we are now joined by Dr.

Speaker:

McAdams.

Speaker:

Thank you so much for joining us, Stephen.

Speaker:

It's a great pleasure.

Speaker:

So can you just do us a favor?

Speaker:

Introduce yourself, tell us about what you do.

Speaker:

Well, okay. I'm a professor of Music Psychology at

Speaker:

McGill University and the Schulich School of Music.

Speaker:

And my interest is in a number of areas around music perception and cognition.

Speaker:

And in particular, I'm interested in how we perceive the timbre of musical

Speaker:

instruments, and how we play with those in orchestration.

Speaker:

And for those who don't know, not me.

Speaker:

Can you tell us what is timbre?

Speaker:

Timbre has always been the tough one because the official definitions, like,

Speaker:

basically said, everything that's not pitch.

Speaker:

It's not how loud the sound is.

Speaker:

It's not how long in duration it is.

Speaker:

It's not where it is in the room or anything like that.

Speaker:

And so it's everything else.

Speaker:

One way we can think about timbre is,

Speaker:

you can think about it in terms of sound color.

Speaker:

So you might have brighter sounds and darker sounds.

Speaker:

You might have smoother sounds or rougher sounds.

Speaker:

You might have sounds that have a sharper

Speaker:

attack or a more mellow attack, things like that.

Speaker:

All those are different properties of timbre.

Speaker:

And so we think of timbre as something that's multidimensional.

Speaker:

It's got a lot of different dimensions in play all at the same time.

Speaker:

And that's what makes it really fascinating.

Speaker:

It's also really hard to study and it's very hard to define, as you just noticed.

Speaker:

All right, so jumping into the next question, if two instruments are playing

Speaker:

the exact same pitch, we can still tell if it's played by a piano or violin.

Speaker:

Like, why do instruments sound different?

Speaker:

Is it the material?

Speaker:

Is it the physical sound waves or the way our brains perceive it?

Speaker:

Explain that. All of the above.

Speaker:

Nice.

Speaker:

So it starts with the mechanics, basically.

Speaker:

Okay, so what's a piano?

Speaker:

Piano has a hammer that hits a string.

Speaker:

That string starts vibrating.

Speaker:

Those vibrations go into the sound board

Speaker:

of the piano and then they get radiated out into the room.

Speaker:

A flute, somebody's blowing air into a

Speaker:

little hole that sets up all these vibrations that

Speaker:

then resonate inside the tube of the flute.

Speaker:

And the way they resonate depends on which fingering you're using.

Speaker:

So that changes the pitch and so on and so forth.

Speaker:

So those are very different mechanical mechanisms.

Speaker:

And then the sounds that get radiated, of

Speaker:

course, that's where we get into acoustics.

Speaker:

Okay, so that's the waveforms and the waveforms from a flute sound and a piano

Speaker:

sound are very different in the way they look.

Speaker:

They have different frequencies in them.

Speaker:

Different frequencies that are present, even if they're playing the same pitch,

Speaker:

are going to have different weights to them.

Speaker:

So, for example, you might have a brighter

Speaker:

sound in a flute and a less bright sound in the piano, for example.

Speaker:

So those are all part of timbre, and that depends on the waveform.

Speaker:

And then, of course, then there's our

Speaker:

ears, which pick up that waveform and analyze it in our brain and figure out the

Speaker:

relations between all these different parts.

Speaker:

How do they change over time, which

Speaker:

frequencies are strong, which frequencies are weak?

Speaker:

And all that goes into the sound color and

Speaker:

the sharpness of the attack and the general shape of things.
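A rough way to hear this point in action: the same pitch built from two different sets of harmonic weights already sounds like two different "instruments". A small Python sketch (the weight values here are hypothetical, chosen purely for illustration):

```python
import math

def tone_sample(t, f0, harmonic_weights):
    """Amplitude at time t of a tone: sine partials at integer multiples
    of the fundamental f0, each scaled by its harmonic weight."""
    return sum(w * math.sin(2 * math.pi * f0 * (k + 1) * t)
               for k, w in enumerate(harmonic_weights))

bright = [1.0, 0.9, 0.8, 0.7]   # strong upper harmonics -> "brighter" color
dark   = [1.0, 0.3, 0.1, 0.05]  # energy mostly in the fundamental -> "darker"

rate = 8000  # samples per second
bright_wave = [tone_sample(n / rate, 220.0, bright) for n in range(800)]
dark_wave   = [tone_sample(n / rate, 220.0, dark)   for n in range(800)]
```

Same pitch (220 Hz), same loudness scale, different spectral weights: that difference is one of the timbre dimensions Dr. McAdams describes.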

Speaker:

Wow. Yeah.

Speaker:

I'm thinking right now also about our human voices and how Bey and I could hum

Speaker:

the same note, and you would know it's not one voice.

Speaker:

Right? And I guess our voices are instruments.

Speaker:

Yeah, so what's the distinction between our voices?

Speaker:

I mean, I guess the timbre, if I had to guess!

Speaker:

Is it the timbre again?

Speaker:

That's timbre again!

Speaker:

All roads lead to timbre.

Speaker:

Of course they do!

Speaker:

Well, you have different shapes of vocal tracts and different sizes of the glottis,

Speaker:

so that's all the resonances that are going on.

Speaker:

So it also depends on what pitch you're speaking at, the way you shape your mouth,

Speaker:

so that's going to change the vowels and consonants and things like that.

Speaker:

All those are aspects of timbre and voice, for example.

Speaker:

How do you track how brains perceive different timbres?

Speaker:

I don't do brains very much myself.

Speaker:

I presume that all the people in my experiments have one,

Speaker:

because if you take it out, they can't hear very well anymore!

Speaker:

But basically, we're just asking people questions about what they're hearing.

Speaker:

It's not necessarily what brainwaves are going on.

Speaker:

My colleague up the hill, Robert Zatorre, does that kind of stuff, but I'm more

Speaker:

interested in what people's experience is, in that sense.

Speaker:

So we ask them what they're hearing.

Speaker:

Are you hearing this as blended together, or do you hear separate voices in it?

Speaker:

Is the sound darker, lighter? Is it brighter?

Speaker:

Is it so on and so forth?

Speaker:

So we ask them questions about what they're perceiving.

Speaker:

Yeah.

Speaker:

And so do different timbres, will they elicit different emotions from people?

Speaker:

Exactly. So if you want to do deep, dark

Speaker:

melancholy, you're not going to use like a glockenspiel or something like that.

Speaker:

You're going to go for like, you know,

Speaker:

bass clarinets and cellos in low register and things like that.

Speaker:

But if you want something happy and jubilant and things like that, you're

Speaker:

going to go for flutes and oboes, for example.

Speaker:

Interesting.

Speaker:

I was going to ask, are the associations

Speaker:

with them inherent to our brain or taught through culture?

Speaker:

Like the adjectives that we would use to describe...

Speaker:

Yeah, is that inherent in our brain, or is that taught through culture?

Speaker:

Oh, I think that's got a lot to do with culture, quite frankly.

Speaker:

We did some interesting experiments on people's emotional response to music,

Speaker:

comparing sort of Pygmies in the Congo and Canadians, and we used some Western music,

Speaker:

like some film music, and also some orchestral music, but also a lot of the

Speaker:

music came from that Pygmy culture, the Mbenzele culture in the Congo.

Speaker:

And what we found was one dimension of emotion, which we call arousal.

Speaker:

So that's excited or calm.

Speaker:

That was pretty common across the cultures.

Speaker:

But the other dimension of emotion, which we call valence, that's sort of the

Speaker:

positive, negative aspect, that's very different according to culture.

Speaker:

And we found this also recently doing some experiments comparing Chinese listeners

Speaker:

and Western listeners to both Chinese music and Western music.

Speaker:

And so it seems to me that there's aspects that are universal.

Speaker:

Some people like to say, "oh, music is the

Speaker:

universal language of the emotions." Well, only partly.

Speaker:

Only partly. And the excited-calm part does it!

Speaker:

But what people perceive as positive emotion and negative emotion, that's very

Speaker:

much tied to culture and even subcultures, I would say.

Speaker:

So you can get some people that can get very excited and elated by a Baroque

Speaker:

opera, for example, but they might not at all like death metal and then vice versa.

Speaker:

So somebody who's doing death metal is

Speaker:

going to, like, be disgusted by listening to Baroque opera.

Speaker:

So even in subcultures, you get differences in emotional response, which

Speaker:

has a lot to do with the sort of the sound colors that are being used and the kind of

Speaker:

musical structures that are being played as well.

Speaker:

Yeah.

Speaker:

Kirsten, I've always been blown away by how globally accepted the Backstreet Boys are.

Speaker:

Yeah! They are Southeast Asia, Canada...

Speaker:

The universal language.

Speaker:

When they say that, they mean the Backstreet Boys.

Speaker:

They mean the song Everybody by Backstreet Boys.

Speaker:

Yeah, even Snowball the cockatoo likes to dance to the Backstreet Boys!

Speaker:

Can you explain Snowball to our listeners, by the way?

Speaker:

Oh yeah, this is some work done by Aniruddh Patel, who's at Tufts University

Speaker:

now, and he started working with a lady who had a cockatoo.

Speaker:

But Snowball basically, you put on The

Speaker:

Backstreet Boys, or you put on Queen, for example, and Snowball will actually start

Speaker:

dancing to the music, doing his little legs.

Speaker:

He does little shuffles to the side, his head's bobbing.

Speaker:

And so what Ani Patel has shown is that

Speaker:

cockatoos, but also elephants and also some seals, actually can perceive rhythm,

Speaker:

and they react to rhythm in ways that we can actually measure.

Speaker:

So it's not unique to humans, this aspect of music, which is rhythm, perception.

Speaker:

Looking back at the things I've done... That is so funny!

Speaker:

Wow.

Speaker:

Tell us, what is orchestration?

Speaker:

Can you break down that word and that definition?

Speaker:

Once you pick an instrument, you're

Speaker:

already making an orchestrational decision.

Speaker:

Okay, so I've got a song I want to do, or I'm writing a piece.

Speaker:

Why do I pick, like, an oboe or an electric guitar?

Speaker:

Or if I've got an electric guitar, what

Speaker:

kind of different effects do I want to put on it?

Speaker:

All of those are orchestrational decisions.

Speaker:

I'm picking sounds that go to the musical idea I'm trying to get across.

Speaker:

The other thing is combinations of sounds

Speaker:

are put together to create new sounds that you can't do with any given instrument.

Speaker:

So, for example, I'm thinking of a piece

Speaker:

by Claude Debussy, a French composer, where he mixes together an English horn

Speaker:

and a muted trumpet, and he gets a kind of foghorn-like sound out of that.

Speaker:

Wow.

Speaker:

Which you couldn't have gotten with any of the instruments by themselves.

Speaker:

So by blending them together and getting

Speaker:

these new emergent timbres, as we call them, these sound colors, that's an

Speaker:

orchestrational thing that one does, for example.

Speaker:

It's like an instrument cocktail, basically.

Speaker:

Exactly! Exactly. This is really cool.

Speaker:

I kind of consider myself a songwriter, so this is like research to me.

Speaker:

I'm like, tell me about this ACTOR

Speaker:

project, which stands for Analysis, Creation, and Teaching of Orchestration.

Speaker:

And you're doing a lot of this work to

Speaker:

bring timbre to the attention of orchestral work.

Speaker:

And just why is this important?

Speaker:

And in the future of your research, in the

Speaker:

not so distant future, what are you excited about?

Speaker:

What do we have to look forward to?

Speaker:

Well, I think one thing I've noticed is

Speaker:

that if you look at a lot of the music scholarship that goes on.

Speaker:

So here I'm talking about musicologists and music theorists.

Speaker:

They talk a lot about pitch, they talk

Speaker:

about rhythm, they talk about harmony, talk about global form, but they don't

Speaker:

very often talk about the sound colors that are actually being used.

Speaker:

So our aim is really to try to sensitize all these music scholars to the fact that

Speaker:

there's a huge part of music that they're actually missing out on by not paying

Speaker:

attention to timbre and orchestration, because there's stuff they can analyze,

Speaker:

and it doesn't get at where the emotion is coming from, for example.

Speaker:

And a lot of that comes through timbre.

Speaker:

So for me, that's one of the big

Speaker:

directions we're going here is trying to look at those kinds of things.

Speaker:

Obviously, major and minor scales give you

Speaker:

happy and sad, slow and fast and things like that.

Speaker:

But you can also turn that around by picking different kinds of instruments

Speaker:

that have inherent emotional qualities to them that are really important.

Speaker:

So that's I think, the main direction we're going is that and then, of course,

Speaker:

we've got the whole thing of how do you teach orchestration?

Speaker:

Because it's really complicated.

Speaker:

I've got one student from Singapore who's been looking a lot at the role of timbral

Speaker:

expression in different kinds of instruments, sort of Chinese instruments

Speaker:

and also Western instruments, and how people are sensitive to those kinds of

Speaker:

things as they communicate emotion in a certain sense.

Speaker:

So these are, I think, all directions that are really going in an interesting way.

Speaker:

And we're doing that with a big consortium.

Speaker:

The ACTOR Project has about 200 people

Speaker:

involved in Europe, in North America, and also in Asia, and about 20 different

Speaker:

partner institutions, mostly universities, but some conservatories, like the Paris

Speaker:

Conservatory and the Geneva Conservatory, for example.

Speaker:

Yeah, you talk about global impact. Yeah.

Speaker:

That's a huge reach you have there!

Speaker:

That was the whole idea was we want to go abroad because the idea of orchestration,

Speaker:

in our sense, applies to all music all over the world.

Speaker:

And so how can we bring all that in?

Speaker:

And we actually had this year a really

Speaker:

interesting series on Afrological perspectives on timbre and orchestration.

Speaker:

So we had a bunch of scholars from Africa, but also from North America.

Speaker:

We're looking at different ways that we

Speaker:

can approach this outside of the, sort of the white Western canon, and look at

Speaker:

different approaches to what's going on in music.

Speaker:

If you look at jazz music, you look at music from different African cultures,

Speaker:

there's a complete different approach to things.

Speaker:

And so we need to have all these different kinds of approaches to understand really

Speaker:

how rich the whole world of timbre and orchestration actually is.

Speaker:

Well, thank you so much, Dr.

Speaker:

Stephen McAdams. Oh, it's a pleasure!

Speaker:

Awesome to chat with you. Thank you.

Speaker:

Great, thanks Bey and Kirsten, really nice talking to you!

Speaker:

It's wild how something as important to music as timbre is so abstract.

Speaker:

Yeah. Well, now we are joined in studio by Ben

Speaker:

Runyan to teach us a little bit about synthesizers.

Speaker:

Thank you so much for joining us, Ben! Thanks for having me.

Speaker:

Yeah. So can you introduce yourself?

Speaker:

Tell us about what you do?

Speaker:

Sure! My name is Ben.

Speaker:

I am a music producer.

Speaker:

I've done anything from being a touring musician to writing music for TV and film.

Speaker:

I'm a synth nerd.

Speaker:

I like sports and cheesesteaks.

Speaker:

Awesome! You mentioned synthesizers, that's why we have you here.

Speaker:

How did you get into it? Please tell us about that.

Speaker:

I was a Temple student and there was one

Speaker:

class called Music 389, and it was an audio production class.

Speaker:

This is in 2007.

Speaker:

And there was this program called Ableton Live that my professor showed me and it

Speaker:

had all these synthesizers and I knew that I liked electronic music.

Speaker:

I was just like, I got to learn how to make this stuff.

Speaker:

I started making my own music with my own vocals and lyrics.

Speaker:

And then I started a band called City Rain, and then we were on MTV, and all of

Speaker:

a sudden it's just like one thing led to another and my childlike fascination with

Speaker:

all of this technology became an obsession.

Speaker:

So can you give us, like a little crash course in the history of synthesizers?

Speaker:

So synthesizers actually showed up in the 1930s and 40s.

Speaker:

It was used mostly in classical music.

Speaker:

Really, like, strange avant-garde classical music, stuff like that.

Speaker:

Then moving into the 50s and 60s, you

Speaker:

had synthesizers like the Moog, not unlike the one we're looking at right now.

Speaker:

For those who are listening, you can

Speaker:

Google Behringer, B-E-H-R-I-N-G-E-R, Model D.

Speaker:

It's like a big rectangle with, like, 20 different knobs.

Speaker:

Then you had things like the Mellotron, which was like the first sampling keyboard

Speaker:

that had tape loops inside of the keyboard where you could press down a key and it

Speaker:

would play like a drum loop or an organ loop or something like that.

Speaker:

And then you had Pet Sounds by the Beach Boys and that was kind of like the

Speaker:

floodgates opened with synthesizers and music, and then the Beatles got hella

Speaker:

jealous and were like, we got to do that too!

Speaker:

And then by the time you get to the 70s we're talking like Emerson Lake and Palmer

Speaker:

and really like mathy nerdy rock music with synthesizers.

Speaker:

And then the 80s happened.

Speaker:

And then hip hop, new wave music, house music, techno music, even acid house.

Speaker:

And then like the revolution. Yeah.

Speaker:

Would you say the 80s is like, the golden age?

Speaker:

The 80s, that is like... Flock of Seagulls, yeah.

Speaker:

And ironically, in the 90s, other than

Speaker:

rave music and hip hop music, there was kind of a really big anti-revolution against electronic music and they're like, "let's go back!"

Speaker:

And there was a push again. Yeah.

Speaker:

They're like, "Let's make guitar music

Speaker:

again!" And "it's not real music if it's electronic!"

Speaker:

The computers are going to take over! Well they did!

Speaker:

What's next?

Speaker:

And so, a lot of that music went over to like, Europe.

Speaker:

And so that's why a lot of the times,

Speaker:

unfortunately, when you hear people say, "oh, house music, techno music" and you

Speaker:

say, "well, where did that come from?" They think Europe.

Speaker:

They think it's like a European phenomenon.

Speaker:

Yeah, like Berlin.

Speaker:

And it's a black American art form that came from Chicago and came from Detroit.

Speaker:

And so it's unfortunate that the

Speaker:

cultural heritage around it is kind of misunderstood.

Speaker:

But of course, it's a shared world stage when it comes to electronic music.

Speaker:

But nowadays, of course, 2010 happened and EDM happened and now it's a world

Speaker:

phenomenon that isn't looked down upon anymore as like, when I was making this

Speaker:

music in college, people are like, "Oh, but you don't really make music.

Speaker:

You know, you're not playing guitar." Yeah.

Speaker:

Are you able to speak to any of the,

Speaker:

maybe, social science behind why we like synth sounds?

Speaker:

Like, why are we drawn to it?

Speaker:

Why do we play with it?

Speaker:

How does it become what it is today?

Speaker:

So there's a guy named Brian Eno, and

Speaker:

Brian Eno is one of the great producers of all time.

Speaker:

He produced many of the Talking Heads records, a bunch of Coldplay records.

Speaker:

He created ambient music.

Speaker:

He's the inventor of ambient music. Damn.

Speaker:

And what he said about it was when the

Speaker:

synthesizers came about in the 80s there was no way to play it.

Speaker:

Like, there was no known way to play it.

Speaker:

It wasn't like a guitar or piano where there were rules to it.

Speaker:

Right. So you could do anything.

Speaker:

You could paint with sounds.

Speaker:

You could just start inputting.

Speaker:

And because of the math based arena that it was in, you were pushing up against

Speaker:

certain parameters almost like painting inside of numbers in a certain way.

Speaker:

And that alone made it really cool because

Speaker:

nobody could tell you you were playing it wrong.

Speaker:

And when you made stuff, it was the first time anybody had heard anything like that.

Speaker:

It was kind of like being like an astronaut.

Speaker:

You were going out and finding things that people hadn't heard yet.

Speaker:

And when you hear a sound that sounds like that there's no reference point in life

Speaker:

where you're like "Oh, that sounds like a string." Maybe sort of, but not really.

Speaker:

Especially in a synth like this that isn't in the digital realm.

Speaker:

There's no presets.

Speaker:

Whatever way you left it is the way that it is.

Speaker:

And so for you to get a different sound

Speaker:

out of it, you have to start touching it and playing around with it.

Speaker:

Yeah, for the listeners, we're watching

Speaker:

him twist knobs, press buttons, pull out cords, flip around.

Speaker:

I'm not even sure how to describe that, like he's defusing something.

Speaker:

So, Ben, I noticed that there's no keyboard on the synthesizer.

Speaker:

How does that work?

Speaker:

So, there is a language called MIDI, and

Speaker:

that's Musical Instrument Digital Interface.

Speaker:

And this was invented in the late 70s.

Speaker:

Different synthesizers had proprietary ports on them.

Speaker:

Meaning they, like, pigeonholed you into

Speaker:

buying their synthesizers so that you could hook your stuff up with their gear.

Speaker:

And then they're like, "Wait, this doesn't work.

Speaker:

We need to agree on a standard." And so

Speaker:

the standard they agreed on was these ports right here, which were called MIDI.

Speaker:

And so you could hook them together, so you could hook up a keyboard to this to

Speaker:

play the notes, and then you could have a keyboard that then you could switch over

Speaker:

between which synthesizer you're playing, or, all of them at once.

Speaker:

So you could have a keyboard actually triggering four or five different synths

Speaker:

at once to layer the sounds together to get different sounds.

Speaker:

So the MIDI was the agreed upon standard.
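The note messages that travel over those MIDI ports (or over USB, as Ben describes) are tiny. Per the MIDI 1.0 specification, a Note On is just three bytes; a minimal Python sketch (the helper names are ours, not from any particular library):

```python
# Per the MIDI 1.0 spec, a Note On message is three bytes:
# status (0x90 plus the 0-based channel), key number (0-127), velocity (0-127).
def note_on(channel, note, velocity):
    if not (0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128):
        raise ValueError("value out of MIDI range")
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    # status 0x80 = Note Off; velocity 0 here
    return bytes([0x80 | channel, note, 0])

# Middle C (key 60) at a moderate velocity on channel 1 (index 0):
msg = note_on(0, 60, 100)
```

Because every manufacturer agreed on this byte layout, one keyboard really can drive four or five different synths at once, exactly as Ben says.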

Speaker:

Now, that is to say, plenty of synthesizers have keyboards as well, but

Speaker:

they're generally speaking less expensive when they don't have a keyboard.

Speaker:

And then right now, this is being hooked

Speaker:

up via USB to my computer, and USB can send MIDI signal as well.

Speaker:

And so basically there's also then the

Speaker:

level of like, having to know which key is on your laptop...

Speaker:

Yeah, when I press a key on here, it triggers the MIDI to go out the USB cable

Speaker:

into the synth and then the audio comes out to the board.

Speaker:

I mean, we might as well do it. Demo time! Let's jump into it.

Speaker:

Let's create. Let's see if we can make a square wave?

Speaker:

A triangle wave?

Speaker:

Yeah, a square wave. A triangle wave.

Speaker:

A sine wave is exactly what it sounds like.

Speaker:

It's like, the shape. So a sine wave doesn't exist in nature.

Speaker:

Like, it's a perfect sound.

Speaker:

So like, if I pull up a sine wave,

Speaker:

that sound is so digital, it doesn't exist anywhere in nature.

Speaker:

It has no harmonics.

Speaker:

It's an absence of harmonics.

Speaker:

Right, but it also like if you add reverb

Speaker:

to it right, it's like a beautiful, very pleasant sound.

Speaker:

Right, we like it. Yeah, you're right, because the sine wave

Speaker:

does sound - no part of me would think that is...

Speaker:

Space Odyssey, 2001... Right right, exactly!

Speaker:

Okay, then let's hear the square wave.

Speaker:

All right, so then we're going to do a square wave.

Speaker:

So let's hear. Yeah, right, so that's a

Speaker:

little bit... there's a little more "zzzz" in there.

Speaker:

A little more harmonics. And if I go lower.

Speaker:

And walk us through a triangle wave? Yeah, triangle wave.

Speaker:

So triangle wave.

Speaker:

It doesn't have that same "urghh", that bass rumble.

Speaker:

A and B it, right?

Speaker:

And then let's go back to the square.

Speaker:

Right, you can hear the difference, especially if I go down an octave.

Speaker:

And then I go back to the...

Speaker:

It doesn't have that same bass tone to it. Yeah, absolutely.

Speaker:

Same top tone, but not the same bottom one.

Speaker:

And what about a sawtooth?

Speaker:

Even more harmonics! Yeah

Speaker:

It does sound like a saw!

Speaker:

Like it has that - yeah.
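For the curious, the four classic oscillator shapes Ben steps through differ exactly in their harmonic recipes, which a few lines of Python can sketch (band-limited approximations via the standard Fourier series; function names are ours, for illustration only):

```python
import math

# A sine is the fundamental alone (no harmonics). A square adds odd
# harmonics at strength 1/k, a triangle adds odd harmonics at 1/k^2 with
# alternating sign (hence mellower, less bass "rumble"), and a saw has
# ALL harmonics at 1/k -- Ben's "even more harmonics".
def square(t, f0, n=15):
    return sum(math.sin(2 * math.pi * f0 * k * t) / k
               for k in range(1, n + 1, 2))

def triangle(t, f0, n=15):
    return sum((-1) ** ((k - 1) // 2) * math.sin(2 * math.pi * f0 * k * t) / k ** 2
               for k in range(1, n + 1, 2))

def saw(t, f0, n=15):
    return sum(math.sin(2 * math.pi * f0 * k * t) / k
               for k in range(1, n + 1))
```

The buzzier a shape sounds in the demo, the more upper harmonics its recipe keeps.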

Speaker:

So now, I'm going to add some movement to it.

Speaker:

Okay.

Speaker:

So right now when I press the key, not much happens.

Speaker:

I'm going to add what's called a filter envelope.

Speaker:

So let me explain what that means.

Speaker:

It's a shape - attack, decay, sustain, release.

Speaker:

Attack means when I hit the sound, does

Speaker:

the sound happen instantly, or does it take a while to happen?

Speaker:

Instantly!

Speaker:

Okay, so would it be a fast attack or a slow attack?

Speaker:

Fast attack. You got it. Right.

Speaker:

So then after I let go of the key, does it

Speaker:

go away really quickly or does it ring out?

Speaker:

Goes away. So short decay.

Speaker:

If I hold it down, does it sustain, or - it does.

Speaker:

So it's a long sustain, and then the release is kind of like decay.

Speaker:

Same thing. It doesn't ring out.

Speaker:

So, right now, with this sound, right, if

Speaker:

I were to change the release, when I let go, the sound...

Speaker:

Just ringing out, I'm not holding it down. Right.

Speaker:

So that's attack, decay, sustain, release.
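The attack-decay-sustain-release shape Ben just walked through is easy to write down. A minimal sketch (the parameter values are made up, and it assumes the key is held past the end of the decay stage):

```python
def adsr(t, attack=0.01, decay=0.1, sustain=0.7, release=0.2, note_off=0.5):
    """Envelope level (0..1) at time t, for a key released at note_off seconds.
    A fast attack means the level jumps up almost instantly; a short release
    means the sound does not ring out after the key is let go."""
    if t < attack:                       # attack: ramp from 0 up to 1
        return t / attack
    if t < attack + decay:               # decay: fall from 1 to the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < note_off:                     # sustain: hold while the key is down
        return sustain
    rel = (t - note_off) / release       # release: fall from sustain to silence
    return max(0.0, sustain * (1.0 - rel))
```

Multiplying a raw oscillator by this curve is what turns a steady drone into a note with a beginning, middle, and end.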

Speaker:

But I'm going to assign this attack, decay, sustain, release...

Speaker:

Now we're getting really heady... to what's called a filter.

Speaker:

Again, a filter is the idea that - hear how the harmonics are cut off?

Speaker:

Yeah. Opening up, closing, right?

Speaker:

Opening up, closing. But I'm doing that with my hand.

Speaker:

I want that to happen when I press the key.

Speaker:

Yeah.

Speaker:

So I'm assigning this filter envelope to the filter.

Speaker:

Now you hear how it kind of plucks a little bit?

Speaker:

Right.

Speaker:

So basically, each time I hit the key, that filter opens and closes very fast.

Speaker:

Okay.

Speaker:

So then let's say I'm going to throw what's called an arpeggiator on it.

Speaker:

Arpeggio, basically, I can hold down the key and it'll play at intervals of, like,

Speaker:

eighths or sixteenths or something like that.
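What the arpeggiator does can be sketched as a tiny scheduler: hold some notes down, and it cycles through them at a fixed rhythmic division. A hypothetical Python sketch (the function and parameter names are ours):

```python
def arpeggiate(held_notes, bpm=120, division=8, steps=8):
    """Cycle through the held notes at a fixed subdivision of a 4/4 bar.
    division=8 gives 8th notes, division=16 gives 16ths. Returns a list
    of (start_time_seconds, midi_note) pairs."""
    step_seconds = (60.0 / bpm) * 4 / division  # length of one subdivision
    return [(i * step_seconds, held_notes[i % len(held_notes)])
            for i in range(steps)]

# Holding a C minor triad (MIDI notes 60, 63, 67) at 120 BPM, 8th notes:
pattern = arpeggiate([60, 63, 67], bpm=120, division=8, steps=6)
```

Switching `division` from 8 to 16 is exactly the 8ths-to-16ths change Ben demonstrates a moment later.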

Speaker:

So, I'm going to put on a MIDI effect called an arpeggiator.

Speaker:

And now when I hold down the key -

Speaker:

It's playing at 8th notes, but I can have it play at 16ths, right?

Speaker:

So - what fun! That was fun!

Speaker:

We have fun here!

Speaker:

Okay, I'm going to record that because that's kind of hot. 1,2,3,4.

Speaker:

Oh sh - I'm gonna cuss on the podcast! Are we doing this?

Speaker:

Is this happening?

Speaker:

So we're going to add something real

Speaker:

quick, just to add some context that is not synth related.

Speaker:

I'm going to find some guitars or something.

Speaker:

Let's see.

Speaker:

How about some... I didn't know I'd be jamming live on the podcast!

Speaker:

You never do know what's going to happen in here.

Speaker:

Okay, great.

Speaker:

Let's see if we can add some melody to this.

Speaker:

Yeah, okay.

Speaker:

I'll add it with this bad boy. Boop boop boo...

Speaker:

Wait, I have to tune it! The really cool thing about this synth

Speaker:

is it falls out of tune, because it's actual electric circuitry.

Speaker:

So each time you turn it on and it warms up, you have to retune it.

Speaker:

Oh, that's interesting. Isn't that kind of cool?

Speaker:

Yeah.

Speaker:

So what's kind of cool about that, unlike digital synths, is that when you make a

Speaker:

record, it's always like, a little bit off.

Speaker:

And that gives it kind of a cool character.

Speaker:

It's like... it's like alive. Like a little signature.

Speaker:

And it needs some imperfection.

Speaker:

It needs some imperfection, yeah!

Speaker:

All right, so here we'll do that.

Speaker:

So I'm taking the sound, and I'm just getting the raw sound, right?

Speaker:

So this is what comes straight out of the synth.

Speaker:

This is like all that's coming out.

Speaker:

I could add some noise to it, maybe add a little bit.

Speaker:

But now I'm going to add effects inside of the computer.

Speaker:

So like some reverb.

Speaker:

So that's delay.

Speaker:

Oooh, and now maybe I'll add an arpeggiator on that, too.

Speaker:

So I'm going to hold down a chord...

Speaker:

(Ben jamming)

Speaker:

Ben is going back to the synthesizer

Speaker:

again, this rectangular device with knobs, switches and buttons.

Speaker:

So I'm adding another note.

Speaker:

And he's adding another note!

Speaker:

So this is legitimately happening in the studio as we speak, right?

Speaker:

He's creating as we're talking.

Speaker:

So he's adding...

Speaker:

Things are falling out of tune a little bit...

Speaker:

He's tuning them, giving it a human imperfection!

Speaker:

Yeah. So he's adding and subtracting and finding

Speaker:

a way to put this all together into one sound.

Speaker:

Very well said, Bey.

Speaker:

This is like play by play. I know.

Speaker:

I feel like we're sports commentators.

Speaker:

I added a little envelope to it to give it that bounce.

Speaker:

Right.
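(Editor's note: the "envelope" Ben mentions shapes a note's loudness over time; the classic form is ADSR, for attack, decay, sustain, release. A short attack with a quick decay gives the plucky "bounce" he describes. Here's a minimal sketch with illustrative timing values, not ones taken from his patch.)

```python
def adsr(t, attack=0.01, decay=0.1, sustain=0.6, release=0.2, note_len=0.5):
    """Amplitude (0..1) of an ADSR envelope at time t seconds.

    All timing and level values are illustrative defaults.
    """
    if t < 0:
        return 0.0
    if t < attack:                 # ramp up from silence to full level
        return t / attack
    if t < attack + decay:         # fall from full level to the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < note_len:               # hold while the key is down
        return sustain
    if t < note_len + release:     # fade out after the key is released
        frac = (t - note_len) / release
        return sustain * (1.0 - frac)
    return 0.0                     # note has fully died away
```

Multiplying a raw oscillator signal by this curve, sample by sample, is what turns a steady drone into a note with shape.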

Speaker:

So as the sounds are changing, I'm talking to the listener right now, as

Speaker:

the sound is changing, Ben is literally turning these knobs and switches.

Speaker:

This is manually happening.

Speaker:

Yeah.

Speaker:

That's Ben.

Speaker:

And then that is Ben, also.

Speaker:

Great.

Speaker:

Ben, let's say someone is like, at home and they want to try to play around with

Speaker:

synths themselves and they want to learn how to get into this.

Speaker:

What are some resources, free things we can recommend?

Speaker:

There's actually a YouTube channel, SynthHacker.

Speaker:

He basically will take songs that are

Speaker:

popular and say, "how do we make that sound?" And I learned a lot of what I did

Speaker:

by saying like, "okay, that's a really cool sound.

Speaker:

How do we make that?" And he'll provide a

Speaker:

preset that you can download into one of the synths, and then you can kind of like

Speaker:

- I think the best way to do it is take a preset on a synth that's from a popular

Speaker:

song, and then break down how it was made.

Speaker:

Think of something you'd build, and then take it apart to see how it works.

Speaker:

Like a car or something. Yeah.

Speaker:

And we were talking a lot about tech and science in music.

Speaker:

Obviously, that's the theme of this whole season.

Speaker:

And I think it is like, I mean, something like a synth is such a great example.

Speaker:

Like you said, it has been around forever

Speaker:

and so I think it's really cool that this makes it so attainable.

Speaker:

Ben Runyan, thank you so much for coming on the So Curious podcast!

Speaker:

Thank you for being here. And...

Speaker:

Go Birds! Go Birds!!! Super Bowl 52!

Speaker:

Let's go!!!

Speaker:

Thank you again to Ben Runyan for taking

Speaker:

the time to show us the basics of synthesizers.

Speaker:

But the term "basic" feels reductive because

Speaker:

I feel like there are no basics to synthesizers.

Speaker:

That was a lot, wasn't it? Yeah.

Speaker:

I love how - and I know this is not the

Speaker:

technical term, but, you know, like the little bleep bloops.

Speaker:

Absolutely. When it's like "Bleep bloop bleep bloop

Speaker:

bloop bloop bloop bleep!" I love the bleep bloops.

Speaker:

I love also how the synth can sound like instruments, but still, he was able to be

Speaker:

like, but this doesn't sound like anything you could have heard in the world.

Speaker:

Right. And it's like, damn, you're right!

Speaker:

You're right.

Speaker:

Like certain sounds you can't just play on a guitar.

Speaker:

It's not an organic sound.

Speaker:

It's not an organic sound, but it's still a sound.

Speaker:

And people create and do all kinds of

Speaker:

things with synthesizers, so it's just as credible, just as real, and just as much a

Speaker:

part of the world of music as everything else.

Speaker:

As always, thank you for joining us.

Speaker:

And make sure you tune in to next week's

Speaker:

episode because we're exploring the impact music can have on mental health.

Speaker:

You can be listening to a song and then

Speaker:

all of a sudden become extremely emotional.

Speaker:

You're not really sure why, but by the end

Speaker:

of the song you're like, "Oh my goodness, I know why this song moved me!"

Speaker:

Oooo, little throw back to season three, huh?

Speaker:

Yeah. Don't forget to listen.

Speaker:

Please make sure that you subscribe to the So Curious podcast on Apple Podcasts,

Speaker:

Stitcher, wherever you listen. Please don't miss out.

Speaker:

And please remember, if you like what you're listening to, take a minute.

Speaker:

Write a five star review.

Speaker:

If you want to write anything amazing about me.

Speaker:

Kirsten is spelled K-I-R-S-T-E-N, and Bey is spelled B-E-Y.

Speaker:

Tell everyone how much you are learning

Speaker:

and how fun this is! So thank you all so much.

Speaker:

Don't forget to be here next week, Tuesday.

Speaker:

This podcast is made in partnership with

Speaker:

RADIO KISMET, Philadelphia's premier podcast production studio.

Speaker:

This podcast is produced by Amy Carson.

Speaker:

The Franklin Institute's director of digital editorial is Joy Montefusco.

Speaker:

Dr. Jayatri Das is the Franklin Institute's Chief Bioscientist, and Erin Armstrong

Speaker:

runs marketing, communications and digital media.

Speaker:

Head of Operations is Christopher Plant.

Speaker:

Our mixing engineer is Justin Berger, and our audio editor is Lauren DeLuca.

Speaker:

Our graphic designer is Emma Seager.

Speaker:

And I'm The Bul Bey!

Speaker:

And I'm Kirsten Michelle Cills. Thanks!

Speaker:

Thank you! See ya.
