Discussion:
Research and training materials for live coding
ddw_music
2014-06-18 12:41:10 UTC
Hi -- I'm inching my way back into the SC online communities, for now as more
of an asker than answerer. It's been very helpful to relieve myself of the
(self-imposed) pressure to answer questions just because they're there.

Now I have a question. I started playing with JITLib while preparing for a
workshop, and that led to some experiments with live coding. I ran into some
challenges with that pretty quickly, and I think they're the sorts of
challenges that more experienced live coders have already dealt with. The
main one is expressing control routines (sequencing) efficiently. I'm good
with patterns, but... they're a lot of typing when an audience is waiting
for you to do something cool. And that was even using Pbindef to build up
the complexity item by item.

I'm wondering about research and training materials. Like, my conservatory
department also houses the jazz major, and the students learn improvisation
from teachers and from established workbooks. But if I want to learn to
improvise in code, I feel like I have to be an auto-didact, figuring
everything out for myself even though other people figured it out before me.
I'm fully aware that nobody gets good at live coding without individual
practice, but continuing the jazz analogy - the teachers at my school give
the students skills to practice, and practice methodologies.

I had a quick look at the TOPLAP wiki, but didn't see much in the way of
research papers explaining solutions to the (thorny) problems of expressive
but compact code. I did see the page for practice exercises, but they seemed
to me to be intermediate-level, and when it comes to live coding, I'm very
much a beginner.

Thanks,
hjh



Cole Ingraham
2014-06-18 13:30:41 UTC
Hi James and welcome back =).

While I don't know about specific educational resources for learning live
coding, I have been exploring it myself quite a lot, especially over the
past year or so. I certainly echo your concern about not wanting the
audience to sit and wait while you figure out how to make something happen.
Here are some of my thoughts and approaches (whether anyone finds them
useful or not). I'm also hoping this spawns a larger discussion on the
topic since it's something that seems to have growing interest.

"All out live coding" is really hard. Yes there are some insanely talented
ninjas with 9000+ wpm that can make some amazing things happen from an
empty file, but for the rest of us I recommend having a handful of starting
code. This could be skeletons of things, complete code that you intend to
extend/tweak, infrastructure for effects, whatever is appropriate. If
anyone complains about that, tell them to go live code in Assembly.
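
For what it's worth, here is a minimal sketch of what such a prepared
starting file might look like. The names (\ping, \beat, \verb) are
placeholders I made up, and the content is deliberately plain -- the point
is only to have hooks ready to extend on stage:

(
s.waitForBoot {
    // a simple SynthDef so there is something to trigger immediately
    SynthDef(\ping, { |out = 0, freq = 440, amp = 0.1, pan = 0|
        var sig = SinOsc.ar(freq) * EnvGen.kr(Env.perc(0.01, 0.3), doneAction: 2);
        Out.ar(out, Pan2.ar(sig, pan, amp));
    }).add;

    // a minimal Pbindef, ready to be built up item by item during the set
    Pbindef(\beat, \instrument, \ping, \dur, 0.5, \degree, 0);

    // an effect slot on a NodeProxy, so sources can be routed into it later
    Ndef(\verb, { |mix = 0.2| FreeVerb.ar(\in.ar(0!2), mix) }).play;
};
)

// on stage, extend the skeleton rather than typing it from scratch:
Pbindef(\beat).play;
Pbindef(\beat, \degree, Pseq([0, 2, 4, 7], inf), \dur, 0.25);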

Since you brought up jazz and conservatory training, regular practice
helps. Also having some general idea, goal, form, timbres, etc. in mind is
quite helpful, as with all music.

One big issue is the amount of typing needed to make something happen (as
mentioned before). Even if you have some prefab code ready, navigating and
editing it can be rough. I switched to vim for my live coding work (emacs is
just as suitable -- said in the hope of preventing a flame war for bringing
up vim) because I found it far easier to get around, and I really like some
of the customization that the IDE just doesn't have (yet).

That still doesn't change how much typing you need, though. SC, while
totally awesome and crazy powerful, is rather verbose when it comes to live
coding. If you look at other live coding languages (ixi, tidal, lich), they
are built from the ground up to be the shortest distance between thought and
compile (with the trade-off of giving up plenty of flexibility, but that's
part of the deal). I've been hacking on a utility to make Pdef feel a little
more like those languages, if you are interested:

https://github.com/coleingraham/sc_utils/blob/master/algorave/SymbolToPdef.sc

This is still very rough but I have performed with it. The idea is that I
add methods to Symbol to replace a lot of the extra typing of Pdef and
Pbindef. Here's an example

// the following both produce the same Pdef
(
Pdef(\example,
    Pbind(
        \instrument, "default",
        \dur, Pseq([1, 0.5, 0.5], inf),
        \degree, Pseq([0, 2, 4], inf),
        \amp, Prand([0.25, 0.25], inf)
    )
);
)

Pdef(\example).play;
Pbindef(\example, \dur, Prand([0.25, 0.5], inf));
Pdef(\example).stop;

(
\example +> (
    inst: "default",
    dur: [1, 0.5, 0.5],
    degree: [0, 2, 4],
    amp: Prand([0.25, 0.25], inf)
);
)

\example.play;
\example.set(\dur, Prand([0.25, 0.5], inf));
\example.stop;


It's a small difference but ends up being on average 30% less typing. This
somewhat assumes you will be using lots of Pseq but that's not a
requirement. Things just start looking much more like a normal Pdef if you
want to do other things.

That's all I've got for now.

-Cole
--
DMA Music Composition - University of Colorado at Boulder
MFA Electronic Music and Recording Media - Mills College
BM Music Composition - University of the Pacific

http://www.coleingraham.com
http://www.glitchlich.com
https://soundcloud.com/coledingraham
http://www.facebook.com/coleingrahammusic
joesh
2014-06-18 14:53:07 UTC
Interesting remark. I'm coming at this from the other side, as a 'jazz
musician' who is interested in integrating live coding into my work. I've
also started looking into the idea of building a syllabus which tackles this
and can be taught in a conservatory. There are, as I'm sure you're aware,
several conservatories teaching coding (or sonology as it's called over here
- I'm in the Benelux), but often it's taught as a harmonically static
process, which as you well know jazz isn't.

I'll be interested to see/read what others have come up with, but at the
moment I'm only learning SC, so I can't really make any solid remarks as
yet. Unfortunately, as with other programs - PD, Csound, Usine or the like -
there isn't much scope for moving quickly through harmonic space; beats are
easy, but syncing tempos with a live drummer is surely a slight problem.

Lastly, as you point out, you're not sure what methodologies to use, whereas
jazzers now have a complete 'workbook' technique they can build from. I
guess that until more coders work in shifting harmonic areas and less in
ostinato-based riffs there will be little progress in that area - meaning
development of code/programs to tackle those specific problems.




Scott Carver
2014-06-18 17:37:22 UTC
Though I've done minimal live coding in an actual performance context, I've
had occasional times when I'm working on music AS IF I were live coding,
i.e. the code I'm using is simple and expressive enough that I can
improvise with no down-time and achieve interesting results.

Easily the most powerful concept I've run into w/r/t that is the
composability of events and event streams via Pchain. I've had very good
results building out a piece by constructing a library of Pdef's containing
different gestural components -- separately, i.e. Pdef's for note / pitch
data, separate Pdefs encapsulating time gestures, Pdef's for different
instruments, etc. At that point, playing and modifying can look like:


Pdef(\main).play;

Pdef(\main, Pchain(
    Pdef(\humInstrA),
    Pdef(\fastShortC),
    Pdef(\varyNotes).set(\vAmount, 0.2),
    Pdef(\notesD)
));

...which is much closer to the kind of gestural thinking that's interesting
when performing (as opposed to, e.g., fiddling with decimal points inside
patterns) (and -- nothing about the above precludes also fiddling with
decimal points...).
- Scott
Iannis Zannos
2014-06-18 18:11:08 UTC
Perhaps relevant:

https://github.com/iani/tiny-sc/blob/master/Classes/SynthTree/Patterns/Edef/EventPatternTries.scd

Cheers,

IZ
ddw_music
2014-06-19 09:43:27 UTC
Post by joesh
Unfortunately as with other programs - PD, Csound, Usine or the like
- there isn't any scope for moving quickly through harmonic space.
Generative harmony is a matter of data representation and algorithm design.
SC has sophisticated data structures and it's (AFAIK) a "Turing-complete"
language, meaning it can express any algorithm that standard languages can.
There is definitely scope for generative harmony! You're correct that few
people have done it, but that reflects the complexity of the task, not any
lack of scope for it. (Probably one reason why it isn't done very much is,
if you're going to figure it out all by yourself, it takes both music-theory
knowledge and insight into algorithm design from computer science. It's a
lot to expect of one person. I think there are some books out there, but
free training materials would help... perhaps a future TODO for me...)

It's less likely to be done in Pd or Max, because their data structures are
weaker and complex algorithms are *much* more cumbersome to express in
graphical data flows than in code. That doesn't rule out writing externals,
or embedding JavaScript in a Max patch; still, SC's native paradigm is much,
much, MUCH better suited to harmony processing.

A quick, off-the-cuff list of the requirements might include:

- Diatonic representation of pitch (SC has that, including accidentals)

- Ways to measure the consonance or dissonance of a collection of pitches
(not built in, but if you decide what the measurement is, you can certainly
implement it. For instance, in ddwChucklib, I have functions that measure a
chord's "fit" above a given root pitch. The fifth, third, seventh and ninth
above the root get positive scores, in descending order, while the sixth
gets a negative score because it pushes the sonority more toward a root
that's a third lower. In this formulation, chromatic notes get punished
pretty harshly, but you can handle jazz chords with chromatic alterations by
testing the chord notes against several scales -- e.g., a major mode with #4
and b7 would match a dominant #13 chord. So you could choose a V root, the
#4b7 mode, and voice a chord based on that.)

- I also noticed, in some pieces where I wrote the harmony by hand, that I
was using more-or-less melodic motion in the top voice for continuity
between chords. So I built that into my algorithms: Choose a top note, and
adjust the chord notes to it (checking the "fitness" of the voicing against
the root note, as above).

That ended up being less of a list format than I expected, but I think it
makes the point: If you analyze your approach to harmony, you can model some
aspects of it in SC code.
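
Just to make the "fit" idea concrete, here is a rough illustrative sketch --
emphatically not the ddwChucklib implementation; the function name, weights
and mode below are invented for this example -- of scoring a set of MIDI
pitches against a candidate root and mode, along the lines described above:

(
~chordFit = { |pitches, root, modeSemitones|
    var weight = { |interval|
        switch(interval,
            7, 4,          // fifth
            4, 3, 3, 3,    // major or minor third
            10, 2, 11, 2,  // seventh
            2, 1,          // ninth
            9, -2,         // sixth: pulls toward a root a third lower
            nil            // anything else: decided by mode membership below
        )
    };
    pitches.sum { |p|
        var interval = (p - root) % 12;
        weight.(interval) ?? {
            if(modeSemitones.includes(interval)) { 0 } { -4 }  // punish chromatic notes
        };
    };
};
)

~mode = [0, 2, 4, 6, 7, 9, 10];  // major with #4 and b7, as in the example above

~chordFit.value([64, 67, 70, 66], 60, ~mode);  // E G Bb F# over C: strong dominant color
~chordFit.value([61, 65, 68], 60, ~mode);      // Db F Ab over C: heavily penalized

Choosing a top note and then adjusting the remaining chord tones until the
score comes out acceptable is, in that formulation, "just" a search over
candidate voicings.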

See also, for instance, Paul Lansky's "Heavy Set" (on the **really
flippingly excellent** /Ride/ album which I cannot recommend highly enough
-- IMO, a must-hear), which models "the right hand of an imaginary (and very
large) improvising pianist."

hjh



ddw_music
2014-06-19 09:58:30 UTC
Correction: where I wrote "a major mode with #4 and b7 would match a dominant
#13 chord," that should be a dominant #11 chord (oops).

hjh




Julian Rohrhuber
2014-06-19 19:03:33 UTC
Post by ddw_music
- Ways to measure the consonance or dissonance of a collection of pitches
One source would be the Dissonance quark, and Juan's thesis on the topic...

henrique matias
2014-06-19 20:46:38 UTC
I believe if you want to make something cool live, you must be able to
perform your code, and not only type.

IMHO, many people expect to see you writing down loads of stuff in order to
look "hax0r", but that generally doesn't lead to very musical or interesting
results from a listener's perspective, or from a "musical point of view".

I believe you should create some musical content whatever way you manage to
do it, then wrap it into an API (for instance by making your own class, or
Proto), and have special messages and properties which you will use in your
live show.
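
As a very small sketch of the idea -- using plain environment functions and
a Pdef rather than a proper class or Proto, with names (~riffNotes, \riff,
etc.) invented just for this example:

(
// the content: a pattern wrapped behind a Pdef
~riffNotes = [0, 2, 4, 7];
Pdef(\riff,
    Pbind(
        \degree, Pn(Plazy { Pseq(~riffNotes) }),
        \dur, 0.25,
        \amp, 0.1
    )
);

// the "API": a few named operations to reach for during the show
~riffTranspose = { |n| ~riffNotes = ~riffNotes + n };
~riffThin = { ~riffNotes = ~riffNotes.scramble.keep(2) };
)

Pdef(\riff).play;
~riffTranspose.(3);   // shift the riff up mid-performance
~riffThin.();         // thin it out to two notes
Pdef(\riff).stop;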


If I were a teacher and I had pupils, the skill I would teach them is to
create simple musical content first, then find simple ways of modulating it
that yield musical results.

Then build a few of those little modular musical things and eventually work
out a way of making them talk to each other, somewhat like a jazz band.


good luck!
Daniel Mayer
2014-06-19 21:41:40 UTC
Post by joesh
there isn't any scope for moving quickly through harmonic space, beats are
easy, syncing tempos with a live drummer surely a slight problem.
The pattern system is very well suited to
accomplishing harmonic sequencing.
Patterns are enormously flexible,
though it takes some time to get through the possibilities.

First, there is built-in stuff concerning scales, transposing, temperament etc.
Referring to harmonic states can be done in a number of ways;
data sharing is especially relevant:

http://doc.sccode.org/Tutorials/A-Practical-Guide/PG_06g_Data_Sharing.html

Here is an example with on-the-fly changes - you could
also use predetermined timing for scales etc. and use data sharing.

Not that this example is musically very elaborate,
but it might give a taste of where you can start experimenting.


(
// some defaults, also default instrument

~steps = 12; // 12 steps per octave (equal temperament)
~scale = Scale([0,1,3,5,7,9,11]);
~shift = 0; // chromatic shift
~chords = [[0, 2]]; // pool of chords

p = Pbind(
\degree, Pseq((0..5), inf) + Pfunc { ~chords.choose },
\octave, Prand((4..6), inf),
\scale, Pfunc { ~scale },
\ctranspose, Pfunc { ~shift },
\dur, 0.2,
\amp, 0.05,
\legato, 2.5
).trace.play;
)

// change scale

~scale = Scale([0,2,4,5,7,10]);

// chromatic shift

~shift = 4;

// microtonal shift

~shift = 2.5;


// change chord pool (steps in scale)

~chords = [[0, -2], [0, 4, 7]];

~chords = [[0, 3]];


// scale within equal division of octave into 7 steps

~scale = Scale([0,1,3,4,5], 7)

~chords = [[-3, 0, 3]];


p.stop;


And still this is quite conventional thinking in strict sequencing of pitch pools,
but anything can be sequenced, and harmonic relations might grow out of
data other than fixed pools of pitch classes.

Greetings

Daniel

-----------------------------
www.daniel-mayer.at
-----------------------------




Cole Ingraham
2014-06-19 22:48:40 UTC
I'd also like to add that live coding with a group is often more varied and
intricate than solo, as obvious as that may sound. Continuing with the jazz
analogy: not everyone needs to be a solo pianist.

-Cole
ddw_music
2014-06-19 09:26:04 UTC
Post by Cole Ingraham
"All out live coding" is really hard. Yes there are some insanely talented
ninjas with 9000+ wpm that can make some amazing things happen from an
empty file, but for the rest of us I recommend having a handful of
starting code. This could be skeletons of things, complete code that you
intend to extend/tweak, infrastructure for effects, whatever is
appropriate. If anyone complains about that, tell them to go live code in
Assembly.
Actually, this was one of the intended uses of ddwChucklib. I ended up using
it more for modeling complex musical behaviors, but there's no reason why
chucklib objects couldn't be components of improvisation. I may have been
dissuaded from that by editorial comments from Nick Collins on my SC book
chapter, where I had proposed chucklib objects could be part of a live
coding framework, but he seemed adamant that this was Simply Not Live
Coding. More precisely (IIRC -- this was a few years ago), the objection was
that the code displayed to the audience should, in live coding, not merely
use algorithms, but communicate something about the nature of the
algorithms. "Code should be seen as well as heard, *underlying algorithms
viewed* as well as their visual outcome"[1] (emphasis mine).

A process prototype could, for instance, accept a string to define a drum
rhythm - hypothetically (I haven't written this prototype... yet):

PR(\strRhy).chuck(BP(\snr), nil, (sample: \snare, rhy: ".x.x"));
BP(\snr).play;

BP(\snr).rhy = "..x. ....x..x";

PR(\strRhy).chuck(BP(\k), nil, (sample: \kick, rhy: "xxxx"));
BP(\k).play;

BP(\k).rhy = "x x..x .x .";

... which is less convenient but not radically different from ixilang's "ac
-> hat |H Hhh hhh H h|"[2].
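
In the meantime, a plain-pattern sketch of the same idea is already possible
without ddwChucklib -- treating each character of the string as one step,
with 'x' as a hit and anything else (including spaces) as a rest. The names
here (~rhy, \strBeat) are made up for illustration:

(
~rhy = { |str, step = 0.25|
    Pbind(
        \instrument, \default,
        \dur, step,
        \type, Pseq(
            str.as(Array).collect { |c| if(c == $x) { \note } { \rest } },
            inf
        )
    )
};
Pdef(\strBeat, ~rhy.("x.x. ..x."));
)

Pdef(\strBeat).play;
Pdef(\strBeat, ~rhy.("x x..x .x ."));   // swap the rhythm on the fly
Pdef(\strBeat).stop;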

Hm... it's probably worth my while to revive that line of thinking.

~~

tiny-sc also looks very interesting. I like the way NodeProxies can
modularize synthesis -- assuming a ProxySpace is pushed:

~src = { ... };
~out = { \in.ar(0!2) };
~out.vol = 0.2;
~out.play;
~src <>> ~out;

~fx = { ... do something to \in.ar(0!2) ... };
~src <>> ~fx <>> ~out;

When I discovered that last technique... oh wow... mind blown.
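
For anyone who wants to hear that routing without filling in the blanks
themselves, here's one self-contained way to flesh out the skeleton -- the
source and effect are arbitrary placeholders; any UGen function reading
\in.ar(0!2) will do:

p = ProxySpace.push(s.boot);

~src = { Pulse.ar([100, 101], 0.3, 0.1) };
~out = { \in.ar(0!2) };
~out.vol = 0.2;
~out.play;
~src <>> ~out;

~fx = { CombL.ar(\in.ar(0!2), 0.4, 0.375, 3) };
~src <>> ~fx <>> ~out;

// when done:
~out.stop;
p.pop;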

But starting from an empty code file and not using pattern helpers will
tend to push stylistically toward drones. tiny-sc seems like it would (once
it stabilizes) give you the benefits of Ndefs with tighter syntax and easier
pattern specification.

Really interesting topic -- thanks for all the suggestions!
hjh

[1] http://toplap.org/wiki/ManifestoDraft
[2] http://onewolftechblog.wordpress.com/2014/03/15/ixilang/



thor
2014-06-19 09:53:32 UTC
Hi James

Just read this thread and I'd like to add a few notes. Firstly, the field of live coding is
still so young that there is no 'canon' as you'd find in jazz. Many people design their own
systems and it's all very experimental at this stage. That's why it's such a fun field.
But there is certainly theory of various kinds, well exemplified by a recent issue of Computer
Music Journal, edited by Julian, Nick and Alex.

http://www.mitpressjournals.org/toc/comj/38/1

You mention Nick's definition of live coding, and coincidentally in my paper I argue that this
is a "strong definition" of live coding, as opposed to the weaker definitions used by practically
everyone else. But we should all aspire to Nick's : )

There is a guy called Click Nilson, who's written a paper on practice:

http://composerprogrammer.com/research/livecodingpractice.pdf

As to training materials, I'd say observe performances, but mostly DIY is the best approach.

thor
Edward Nixon
2014-06-19 13:53:53 UTC
I guess there's a long tradition of virtuosity, to a greater or lesser extent, at the expense of aesthetic quality. I think of Liszt and Paganini as exemplars. Which is not to say neither of them ever wrote fine, high-quality music. The distinction is more ambiguous, perhaps non-existent, when I think of jazz. Jazz is founded on improvisation; each performance is its own contribution to the canon. And a celebration of the Live, even when recorded.

I confess to having some trouble with the whole live coding thing in this context. The ergonomics, the level of contact between the performer and the music is just that step or two removed from the spontaneous relationship. It's very chunky and, to a general audience, very obscure.

I'm sure my attitude is, in large part, a function of my ignorance and lack of skill. The exemplary electronic music that I love, almost anything by Francois Bayle or Bernard Parmegiani, sounds performed to me. I don't know to what extent it actually is and, again, I'm ignorant of their working methods.

I guess the answer may come out of a decision about what it is of the tradition, i.e., in the simple sense of 'what has gone before,' that one wants to incorporate or emulate. As an amateur/dilettante, I have the luxury of thinking that SC in particular and electronic music in general can and should try to go where no man/woman has gone before and not look back to imitate earlier forms and genres. Which is not to say we don't venture out there trailing a whole long tail of attitudes, states of mind, preconceptions and sonic maps. I guess we just have to be conscious of the influences and keep them in some sort of perspective. Harold Bloom wrote an interesting book called The Anxiety of Influence, mainly for poets but more generally of interest.

We can make decisions such as the one Morton Subotnick talks about: eschewing the black and white keyboard (and, in my mind, the ET MIDI straitjacket). But he has the luxury of being a pioneer, of being in at a beginning. The rest of us have to pick our ways through the still primitive garden that those guys planted and tended.

In any event, unformed thoughts. I actually intended this note to direct your attention to a post in the Matrixsynth blog from last week; it's an interview with Alessandro Cortini wearing his NIN hat. A show and tell. It's about his performance setup. I found it really interesting in terms of the balance he attempts to strike between spontaneous possibilities and packaged structure in the context of a show. I don't know if seeing this helps in any way; he certainly has more money to spend on his "instrument" than I do. Just another data point, I guess. Here's the link:

http://www.matrixsynth.com/2014/06/nine-inch-nails-alessandro-cortini-live.html

…edN
Julian Rohrhuber
2014-06-19 16:24:17 UTC
A lot of the question about how to approach live coding depends on what you have in mind: live coding, by definition, may or may not have the aim of pleasing an audience, and may or may not have an audience at all.

The tension of having to think in terms of changes of rules as basic musical movements is a very interesting task in itself, and it is actually a relief from the idea that a program should represent a final state in a production process.

In teaching, it seems very productive to begin early with live coding, simply as a means to experiment with differences between variations of algorithms. In terms of traditional performance, a general issue in the current state of live coding is the link between the two poles: "blank sheet" vs. "code DJ". If you start from scratch, the dish is cooked completely in front of your eyes. If you modify a larger existing system, the transitions are potentially more interesting. But in the first case as well as in the second, certainly there are preparations hidden from view. Often there is memorisation, and there are automatisms; what I long for most (both for myself and others) is "public thought", improvised reasoning in sound.

What seems to make this easier is a spectrum between "custom live coding setups" and "piece-specific languages". To find one's way there, it is helpful to work with two documents, one for setup and one for live coding, and when practicing, to move code back and forth between them until it warms up. So a live coding piece is defined by a thematic frame that sets the scene, but is structured in a way that makes it easy not only to change parameters but to work grammatically. Think of improvised poetry: how would this work with a programming language instead of a natural language?

In general for learning improvisation, there is really nothing that could replace playing together with others.

But you asked for papers on the topic - did you look at http://www.sussex.ac.uk/Users/nc81/research/livecodingpractice.pdf?

As for practice materials, this is what is generally lacking in contemporary art!


Richard Wentk
2014-06-19 17:06:10 UTC
I guess the aims are:

1. Write code under time pressure
2. Please an audience
3. Be seen to be doing something clever with code in front of an audience
4. Make interesting music

There may be little or no overlap between the aims.

Item 1 is probably the most interesting, because it can potentially be used as a way of breaking out of cliches.

(Of course with algo-music, it’s just as likely to turn you into a cliche machine. So there’s that.)

The tragic thing about algo-music in general - and I’m including live coding as a subset - is that people keep reinventing it, over and over. I’ve spent the last six months reading every paper I can find about it, and the amount of previous work that has been forgotten, ignored, or never seen is simply staggering.

My guess is there’s something about machine-(semi)-composed music that seems to make it inherently forgettable. You need to put a face to it, or it just doesn’t stick.

Hiding behind a laptop and writing manifestos isn’t enough to give it a face, IMO.

There’s also the uncanny valley problem. To audience ears, a lot of algo-music sounds not-quite-right musically. The musical shapes made by sonifying algorithms and semi-randomised walks through melodic and harmonic spaces are fine as curiosities, but I’m not convinced they’re any more than that.

Naive oversimplification is one of the curses of computer modelling. Deep models are difficult, and music made by relatively simple machines falls some way short, even of supposedly trivial forms like dance/trance.

The odds of being able to improvise a really good deep model on stage are - I’d guess - not high, even if you know what you’re doing and are writing in a hypothetical musical modelling language, not trying to win a fight against a bits/bytes/objects language.

Richard
Chad McKinney
2014-06-19 17:21:01 UTC
I've heard various criticisms against live coding, algorithmic music, and
just electronic music in general. Live coding, as with all music, has a
range of practitioners, with a few being very good, the majority being
mediocre, and some absolutely terrible. It's music after all.
ddw_music
2014-06-20 02:04:00 UTC
Post by Richard Wentk
The tragic thing about algo-music in general - and I’m including live
coding as a subset - is that people keep reinventing it, over and over.
I’ve spent the last six months reading every paper I can find about it, and
the amount of previous work that has been forgotten, ignored, or never seen
is simply staggering.
That's exactly why I started this thread...
Post by Richard Wentk
My guess is there’s something about machine-(semi)-composed music that
seems to make it inherently forgettable. You need to put a face to it, or
it just doesn’t stick.
I think this is why I'm mainly interested in live performance with computers.
Algorithmically-composed fixed-media works are unlikely to compare well
with handcrafted ones -- facelessness is likely to be a problem here. In
performance, the face is onstage. I think this is where algorave gets it right.
Post by Richard Wentk
There’s also the uncanny valley problem. To audience ears, a lot of
algo-music sounds not-quite-right musically. The musical shapes made by
sonifying algorithms and semi-randomised walks through melodic and harmonic
spaces are fine as curiosities, but I’m not convinced they’re any more than
that.
But can they be more than that in the future? Or, are they more than that now?

One thing that interests me very much is alternatives to a narrowly-drawn
definition of music as relationships among sounds. Music is also a social
practice. Live performance with computers proposes some kinds of social
music-making that never existed before. If a spectator approaches these
with the same mindset she would bring to a Beethoven concert, she's likely
to be disappointed... by which I mean to suggest that machine mastery of
12et harmonic practices may not be the most useful yardstick.

One is free, of course, to be uninterested in the social relationships
embodied in live computer performance...
Post by Richard Wentk
The odds of being able to improvise a really good deep model on stage are -
I’d guess - not high, even if you know what you’re doing and are writing in
a hypothetical musical modelling language, not trying to win a fight
against a bits/bytes/objects language.
If we retain a loose analogy with jazz -- jazz improvisers don't make up
the musical model on the spot either.







ddw_music
2014-06-20 02:08:31 UTC
Post by Richard Wentk
The odds of being able to improvise a really good deep model on stage are -
I’d guess - not high, even if you know what you’re doing and are writing in
a hypothetical musical modelling language, not trying to win a fight
against a bits/bytes/objects language.
Sorry, hit send accidentally.

If we retain a loose analogy with jazz -- jazz improvisers don't make up
the musical model on the spot either. When you're at a jazz club, you're
not hearing the hours of scale and arpeggio practice that are necessary for
a good solo... If live coding means building the model on stage, then the
results will be simplistic.

hjh







Cole Ingraham
2014-06-20 02:45:08 UTC
To paraphrase a definition that I believe is from Nick Collins: live coding
involves altering the functionality of software as part of the performance.
This is how I've been approaching the practice for a while now, and I feel
it gets at the core ideals pretty well. There is no real distinction
between the "code it all on the fly" person and the "code DJ" (which I feel
is a bit too derogatory outside of dance music centric circles).

For my own workflow, when I first started learning SC I would just have a
file with all my code in blocks and run those one at a time (like many
people, I assume). Gradually I started writing classes, GUIs, and more
self-contained programs so that I wouldn't need to deal with a ton of raw
code when performing. I was very happy with that for a while, but I gradually
started feeling limited by having everything wrapped in a neat little
package, and I have returned to keeping much of the precomposed material of a
performance in raw code. The difference/advantage for me now is that I feel
comfortable enough making changes on the fly (and confident enough about
what the result will be), and there are some things that are just far
simpler to express as pure code than within some higher-level system (this
is of course why we aren't talking about using Max here). I would totally
consider this to be live coding.

-Cole
--
DMA Music Composition - University of Colorado at Boulder
MFA Electronic Music and Recording Media - Mills College
BM Music Composition - University of the Pacific

http://www.coleingraham.com
http://www.glitchlich.com
https://soundcloud.com/coledingraham
http://www.facebook.com/coleingrahammusic
Fabrice Mogini
2014-06-20 07:38:01 UTC
Permalink
This is a picture of my current live coding system, inspired by Event.
I use Pbindef in the PlayFunction, which uses data from many other
functions. Any aspect can be changed in real time: it can be a value, or it
can be a whole function that is rewritten, as long as it has a valid output
that can be used by the other functions.

[image: Inline images 1]
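
A minimal sketch of the general idea in plain Pbindef terms (not the actual
PlayFunction from the picture; ~melody and ~durs are just placeholder names,
and it assumes the default SynthDef):

(
~melody = { [0, 2, 4, 7].choose };   // any function returning a degree is a valid replacement
~durs   = { 0.25 };                  // any function returning a duration is a valid replacement

Pbindef(\live,
    \degree, Pfunc { ~melody.value },
    \dur,    Pfunc { ~durs.value }
).play;
)

// later, rewrite whole functions on the fly; the running Pbindef picks them up at the next event
~melody = { [0, 3, 5].choose + 7.rand };
~durs   = { [0.125, 0.25].choose };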
Scott Wilson
2014-06-20 07:54:40 UTC
Permalink
A few musings…
Post by ddw_music
If live coding means building the model on stage, then the
results will be simplistic.
Live coding is a technique and an interface; whether or not it's used with a pre-defined musical structure/form/model is a different matter.

I know some people are very purist about it, and I enjoy that as a virtuosic activity, but for myself live coding arose as a natural and obvious development of other work I'm doing, so I didn't get that religion!

A normal SC workflow generally involves aspects of live coding. At some point in a live performance setup that I was using, it just seemed a no-brainer that I should have code windows pop open and organise stuff in such a way that I could modify running algorithms with code. It was maybe naive in some ways, but it just seemed clear that it was the best way to do it.

Viewed from that perspective, live coding is not a religion or a revolution, but just another tool in the toolbox, with appropriate uses and cases where it's less helpful. It can be used as part of a structured improvisation, which of course has a long history in different forms in jazz and other musics. That's basically what we've been doing in BEER. See here if you're interested:

http://www.mitpressjournals.org/doi/abs/10.1162/COMJ_a_00229#.U6PnpMZqMtI

One interesting difference is of course the pace of things, and the nature of the musical thought. Live coders often do think more structurally than other improvisers might, because they are more often having to think about what they are going to do in the near or not-so-near future, rather than in the moment. Put another way, as an interface it is very flexible, and allows you the freedom to do an amazing array of things, but not necessarily particularly quickly! This is one reason why ensemble approaches seem to work well.

There is probably some interesting research to do on the impact of this on the music. Lack of speed, the need for forward planning, and the utility of ensemble improv all tend to lead to certain sorts of musical results. One thing a structured approach can do is undermine that, which is one thing we've tried to explore with BEER.

S.
Patrick Borgeat
2014-06-20 08:49:25 UTC
Permalink
I was thinking a lot about why I got better at live coding (of course, after
hours and hours of performances and practice) and whether I could have become
as good in less time if I could have traveled back in time and given my past
self some advice. From my experience, a lot of things come together. I would
have taught myself some best-practice live coding "patterns" that worked out
quite well for me.

I would stress the importance of writing code that offers many possibilities
to make interesting, organic musical changes with few keystrokes. I'm not a
fan of writing a big chunk of code live and only executing it when it's
finished. Don't bury too much code in functions/methods, as you can't change
them in a local context. (I always wanted to build an editor for a LISPy
language where you can replace a function call with the actual code of the
function OR with the result of the function call, e.g. an Array literal.)

Keep all your currently running sounds/patterns in mind. Nothing is more
annoying than a sound/pattern that keeps playing for minutes (even if it's
very soft) just because your mind is elsewhere. Be aware that your perceived
time != how the audience will perceive time: if you're currently having fun
debugging something, your audience might be annoyed because there is no
meaningful change in the music. Think about an overall time structure, with
dynamic/sonic changes, progressions, etc.

Keep in flow with your live coding "instrument": it can be fun trying to play
punk rock with SuperCollider, but you will probably always suck at it. Try to
find a musical style that is unique to your approach to live coding and
build on that.
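
To make the "few keystrokes" point concrete, here is only a minimal
JITLib-flavoured sketch (not code from one of my sets; the names are made
up): expose the musically interesting parameters as named controls up front,
so each live change is one short set/xset call rather than a rewrite.

(
Ndef(\drone, { |freq = 60, detune = 1.003, cutoff = 800|
    var sig = Saw.ar(freq * [1, detune]).sum;
    LPF.ar(sig, cutoff) * 0.1 ! 2
}).play;
)

Ndef(\drone).set(\cutoff, 2500);   // one cheap edit, one clearly audible change
Ndef(\drone).fadeTime = 4;
Ndef(\drone).xset(\freq, 45);      // crossfaded change of a central parameter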

One thing I don't hear often:

In general I like to think that live coding can create a unique experience,
of which the code projection is just one part. Many other things matter, some
of which you, as a performer, can't do anything about – but it's very
important that you're staged right and that the audience is able to get in
the right mood, so talk with the host. Does the audience know what you're
doing? What are they looking for/enjoying? The coding? The concept? The
sound/music? Visuals? The room? (e.g. ambient live coding in a church.) Be
careful with audience expectations; hype can be your enemy. Crafting a great
experience for the audience gives a performance meaning.

cheerios,
Patrick
Scott Wilson
2014-06-20 13:20:37 UTC
Permalink
Post by Patrick Borgeat
In general I like to think that live coding can create a unique experience, of which the code projection is just one part. Many other things matter, some of which you, as a performer, can't do anything about – but it's very important that you're staged right and that the audience is able to get in the right mood, so talk with the host. Does the audience know what you're doing? What are they looking for/enjoying? The coding? The concept? The sound/music? Visuals? The room? (e.g. ambient live coding in a church.) Be careful with audience expectations; hype can be your enemy. Crafting a great experience for the audience gives a performance meaning.
I think code projection is often misunderstood by commentators, and for that reason tends to be either treated as something giving the audience unique insight, or conversely irrelevant to the musical result. Certainly it can be both of those things, and neither!

I think of it as part of the performative trappings of a particular cultural/social setting. It has a sort of meaning or value, but that's contingent on who/what/where, and has more to do with what's understood as part of that setting than on whether an individual audience member really 'gets it'. In this sense it can have a lot in common with more widely experienced performance conventions and rituals, and should be seen as performative in its own right regardless of specialist knowledge. The same thing could be said of non-specialists watching someone play the tuba!

S.
Edward Nixon
2014-06-20 13:24:17 UTC
Permalink
Post by Richard Wentk
1. Write code under time pressure
2. Please an audience
3. Be seen to be doing something clever with code in front of an audience
4. Make interesting music
There may be little or no overlap between the aims.
Item 1 is probably the most interesting, because it can potentially be used as a way of breaking out of cliches.
(Of course with algo-music, it’s just as likely to turn you into a cliche machine. So there’s that.)
The tragic thing about algo-music in general - and I’m including live coding as a subset - is that people keep reinventing it, over and over. I’ve spent the last six months reading every paper I can find about it, and the amount of previous work that has been forgotten, ignored, or never seen is simply staggering.
My guess is there’s something about machine-(semi)-composed music that seems to make it inherently forgettable. You need to put a face to it, or it just doesn’t stick.
Think about the dynamics of performances using analog/acoustic resources -- piano recitals, symphony, choral concerts, even jazz concerts & club dates. Traditional performance.

There is a canonical body of work, a repertoire that has developed over many, many years. The audience, by and large, already remembers the pieces. It has a bunch of memories associated with the music. It comes out to hear a particular interpretation of its favourite pieces; this is the case with jazz performances as well; witness the core importance of the American Song Book.

An audience won't remember a piece of music unless it has been repeated, commented on, and otherwise built into the narrative that encases a canonical body of work. It takes time, yes, but it also takes a living, breathing performance tradition, a history of memorable celebrations, the creation of legends, triumphs. There is an element of the theatrical in the whole enterprise. The theatrical provides the arc, the living and breathing sense of life, the spectacle.

The theatrical -- in the sense of the performer appearing out front and centre stage -- may be an aspect of algo-music that needs the most work in terms of bringing electronic music and its audience together in performance. It has to "happen", and it really can't if its practitioners are sitting at a table in the dark, with their faces illuminated dimly by a computer monitor. So, to my mind, there is a fundamental presentation issue that needs to be overcome. It's a problem that has been solved already by the conventional, historical performance streams. A lot of it is convention, familiarity, but I don't think one should wait for the audience to get used to people sitting behind desks projecting stuff onto screens. In the dark.
Post by Richard Wentk
Hiding behind a laptop and writing manifestos isn’t enough to give it a face, IMO.
There’s also the uncanny valley problem. To audience ears, a lot of algo-music sounds not-quite-right musically. The musical shapes made by sonifying algorithms and semi-randomised walks through melodic and harmonic spaces are fine as curiosities, but I’m not convinced they’re any more than that.
The audience has always had other things, other than the music, to hold on to. Other reasons to be present. A lot of it was and remains social, i.e., the idea of being at a concert with like-minded, politically and socially compatible people. It's a social institution.
Post by Richard Wentk
Naive oversimplification is one of the curses of computer modelling. Deep models are difficult, and music made by relatively simple machines falls some way short, even of supposedly trivial forms like dance/trance.
The odds of being able to improvise a really good deep model on stage are - I’d guess - not high, even if you know what you’re doing and are writing in a hypothetical musical modelling language, not trying to win a fight against a bits/bytes/objects language.
All it has to do is breathe. If you can write something that sounds like it's living and breathing and evolving AND it's being presented by someone who is paying as much or more attention to the audience as to the instrument, I think it will at least be accepted as a valid attempt. It will become a candidate for the canon. But if the performer shows no evidence of caring whether the audience is there or not, of course the audience isn't going to care either.

There's an interesting biography of John Cage called Where the Heart Beats by Kay Larson. At about the half-way point, Cage is grappling with chance operations in his methodology. He decides he must suspend his judgement about whether or not a particular chance move has created a valid or aesthetic result. His focus then shifts to creating a very detailed and elaborate framework in which the chance operations should occur. The point of control shifts, the notion of value and judgement is abstracted. I don't guess this kind of suspension of judgement is for everyone; ultimately it may only work for John Cage. But I think it's a suggestive area of research and experimentation, particularly in a medium for which chance operations are so readily accessible.

…edN
ddw_music
2014-06-20 16:25:01 UTC
Permalink
Post by Edward Nixon
The theatrical -- in the sense of the performer appearing out front and
centre stage -- may be an aspect of algo-music that needs the most work in
terms of bringing electronic music and its audience together in
performance. It has to "happen" and it really can't if its practitioners
are sitting at a table in the dark, with their faces illuminated dimly by a
computer monitor. So, to my mind, there is a fundamental presentation issue
that needs to be overcome.
This is one area where algorave gets it right: the audience has something
to do besides watch the theatrical spectacle -- i.e., dance.

I have a feeling that the concert model -- quiet room, passive and
attentive audience -- will not serve live coding very well. Live coding may
be one of many activities in the space at the time. This is bad for musical
creators' egos, but nobody cares about the Composer As Prophet anymore anyway.
Post by Edward Nixon
All it has to do is breathe.
A wise remark.

There's an awful lot in this thread. It will take me some time to digest.
One immediate take-away is that it inspires me to return to an idea I had
early in the development of chucklib, but which I didn't realize. I always
meant it to support improvised performance of prepared materials, but my
pieces tended to harden into set forms while composing and I drifted away
from that goal. Now I'm thinking I can prepare "agents" in advance of a
performance, which respond to various messages representing things that
dance music should do. Then, invoke the agents not from a prepared script
but from an empty document. So I could activate a bunch of layers, then do
something like (not implemented yet) "BP.message(\slowrise)" and layers
would reenter, building tension and energy etc.
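
Since BP doesn't answer such messages yet, here is only a generic sketch of
the "agents answering messages from an empty document" idea, using plain
Events and Pbindef rather than chucklib (~agents, ~tell and \slowrise are
made-up names, and it assumes a Pbindef(\bass, ...) layer is already running):

(
~agents = ();
~agents[\bass] = (
    // swell from 0.02 to 0.3 over 16 beats, then hold (Pn keeps the stream alive)
    slowrise: {
        Pbindef(\bass, \amp,
            Pseq([Pseg([0.02, 0.3], [16], \lin), Pn(0.3, inf)])
        )
    },
    drop: { Pbindef(\bass, \amp, 0.02) }
);
~tell = { |name, msg| ~agents[name][msg].value };
)

// then, from an otherwise empty document during the performance:
~tell.(\bass, \slowrise);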

I have plenty of time to ponder, which usually works best for me -- put
ideas into the hopper, turn the crank for a few months, and good stuff
comes out later.

Thanks everyone!
hjh

Edward Nixon
2014-06-20 17:46:25 UTC
Permalink
Post by ddw_music
I have a feeling that the concert model -- quiet room, passive and
attentive audience -- will not serve live coding very well. Live coding may
be one of many activities in the space at the time. This is bad for musical
creators' egos, but nobody cares about the Composer As Prophet anymore anyway.
Nobody becomes a prophet in real time. It's always lumbered on after the fact, in retrospect, by others. But maybe prophets as a class are toast, unless they have a radio show. Or a blog.

But you're right. Personally, I very much look forward to whatever you come up with out of this, particularly if it circles back around chucklib et al.

…edN
alex
2014-06-20 21:34:53 UTC
Permalink
nice thread!

I'd add a couple of things..

With algorave let's not forget the experience of the live coder.
There's something special about being in flow with code while in a
room full of people dancing to it. It turns programming into something
else. I hope more people will have this experience. For the audience
maybe it's not that different from a standard club night, I'm not
sure.

The approach of modelling profound ideas while live coding doesn't
work for me, at least not in the hylomorphic sense of projecting an
idea into code. I think it is better to just try something out, and
then be led by the music to make changes to the code. The difference
is that you are not transcribing an idea but following a thread, and
don't necessarily have an idea of what the code as a whole is doing...
There's no need to, as long as you can change it.

I think live coding is very different from (the conventional view of)
software development. e.g. when your abstractions are woven into the
present, then you are no longer really dealing with abstractions.

alex
--
http://yaxu.org/

Julian Rohrhuber
2014-06-20 21:40:50 UTC
Permalink
Post by alex
I think live coding is very different from (the conventional view of)
software development. e.g. when your abstractions are woven into the
present, then you are no longer really dealing with abstractions.
I agree very much, only I think that this is where the whole situation truly becomes abstract ...

Daniel Mayer
2014-06-18 16:28:36 UTC
Permalink
I'm good with patterns, but... they're a lot of typing when an audience is waiting
for you to do something cool. And that was even using Pbindef to build up
the complexity item by item.
Not being a live coder myself, I've noticed on occasion that,
related to Cole's remark, abbreviations of Pbindef syntax can be
not only convenient but can also lead to new musical thoughts.

I had some fun setting multiple Pbindefs at once
with the function ~pbindefs from this thread,
though I haven't followed this much further yet:

new-supercollider-mailing-lists-forums-use-these.2681727.n2.nabble.com/Pattern-question-How-to-set-parameters-of-generators-inside-a-local-pattern-scope-during-runtime-and-td7591518.html
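
Just as an illustration of that kind of abbreviation (a minimal sketch, not
the actual ~pbindefs from the thread above; the names \lead, \pad, \bass are
made up), one function that applies the same key/value pairs to several
named Pbindefs could look like this:

(
~pbindefs = { |names ... pairs|
    names.do { |name|
        pairs.pairsDo { |key, val| Pbindef(name, key, val) };
    };
};
)

// instead of typing three separate Pbindef(...) calls:
~pbindefs.([\lead, \pad, \bass], \dur, 0.25, \degree, Pwhite(0, 7, inf));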


Regards

Daniel

-----------------------------
www.daniel-mayer.at
-----------------------------


Charlie Roberts
2014-06-18 17:20:29 UTC
Permalink
I think the syntax in this lib by Iannis Zannos looks really nice for quick
sequencing:

https://github.com/iani/tiny-sc

I'd guess that many of us working with live coding are coming from jazz /
improvisation backgrounds. I spent some time recently trying to make it
easier to move through harmony in Gibber (http://gibber.mat.ucsb.edu).
There's still a ways to go but I've already given some performances that
made more use of harmony than what has been typical for me. Simple example:

// play octaves using global scale with 1/8th note durations
FM( 'bass' ).play( [0,7], 1/8 )

// sequence root of global scale to change each measure; you can also change modes this way
// individual ugens can also have their own scale object to override the current global setting
Gibber.scale.root.seq( ['c4','eb4','g4', 'f4'], 1 )

// play three-note chords randomly picked from current scale
Synth( 'rhodes' ).chord.seq( Rndi( 0,7,3 ), 1 )

// just because
Drums( 'x*o*x*o-' )

If anyone is curious, this giblet goes through the details in more depth:
http://gibber.mat.ucsb.edu/?path=gibber/publications/Scales*audio*

- Charlie
Iannis Zannos
2014-06-18 17:52:51 UTC
Permalink
Hi Charlie,

thanks for mentioning tiny-sc. To comment: I am still working on this on a
daily basis, and the syntax is changing. Not a good thing for releasing,
real *alpha version prerelease* stuff. If you take a look at
Notes/Roadmap.org you'll see that it took me many iterations to reach some
simple state, while some details are still not settled.

https://github.com/iani/tiny-sc/blob/master/Notes/Roadmap.org

That's why I've been meaning to announce "tiny" for weeks now but always
putting it off. However, I have already done a few gigs with it and it
works well. It does many things that JITLib does, in an easier and shorter
way. At some point, I think a discussion is due, but it is a big subject
and that's why I keep postponing.

To say one thing maybe: I think an interesting approach is to distinguish
three types of objects:

1. templates for creating signals (functions, synthdefs),
2. control sources (patterns, views, kr busses, oscfuncs, midifuncs etc), and
3. players that play signals and/or patterns (in JITLib: Ndefs, Pdefs etc).

My operator/mini-language/coding-interface design became much clearer once
I got that idea. Especially 2 and 3 above create a good
guideline/distinction. One accesses the sources by sending messages to
symbols and then sends them to players with the => operator, using an
adverb, if needed, to address a parameter of the player. So:

{ func } => \synthtree; // play new func in synthtree
number =>.param \synthtree; // set parameter of synthtree to value

but also:

\nameofbus.k =>.param \synthtree // map the parameter to a bus

\nameofview.v =>.param \synthtree // set up a view to control the parameter (under development)

(event) => \synthtree // play an event, quasi-Pbind-style, in the synthtree (using a simpler, more flexible alternative to Pbind)

\sourcetree.st => \fxtree // play the output of \sourcetree into the input of \fxtree. An adverb optionally specifies which input of \fxtree to link to the output of \sourcetree.
Etc.

Just to give an idea...

Cheers.

Iannis Zannos