The Intermorphic (MIDI) Music Engine (IME) 19 is an 'adaptive' generative music engine and is the latest iteration in our line of IME engines. It is also the evolution of the SSEYO Koan Generative Music Engine (SKME) and Noatikl Music Engine (NME).
The IME utilises a range of generative / aleatoric / stochastic / algorithmic music techniques and parameters, and creates MIDI notes and MIDI controller events through the interaction of 4 different kinds of IME Objects: Voice (with 7 Voice Types), Rule (Scale, Harmony, Rhythm & Next Note), Piece and File. There can be multiple instances of the first two objects in a "piece", and there is a large number of Voice parameters to affect and direct the composition.
It is referred to as 'adaptive' because its "Listening Voice" allows external MIDI notes to influence a live composition.
New in the IME 19:
The IME MIDI data can (as app available or permitted) be used to drive external MIDI software or hardware synths, FX units and samplers or can be played through the Intermorphic Sound Engine (ISE) for truly portable generative music.
Fundamental to understanding how the IME works is understanding two types of Objects: Voice and Rule. We cover these right at the start as once you get your head around them, you are on your way! Before we do that, though, we first need to introduce the "Generator Network", which defines how everything is hooked up together.
The Intermorphic Music Engine (IME) "Generator Network" defines how all the generators/voice objects in a piece are associated with MIDI channels and how these generators/voices are linked to each other (e.g. Following voices [below], where one voice follows the output of another).
These objects can in turn be associated with Intermorphic Sound Engine (ISE) Sound Networks and FX Networks.
In the Generator Network you can:
If you drag from a Voice box to a MIDI channel box, that tells the system that the Generator/Voice will play through the defined MIDI channel. When a new voice is added to a piece, the voice isn't assigned to a specific MIDI channel; if you haven't yet hooked up a Generator/Voice to a MIDI channel, the IME will choose a free MIDI channel for the Voice when it starts playing.
Every IME piece/template (which currently uses the file extension .noatikl) includes one or more Voice Objects ("Voices"). These are MIDI generators: the things that actually generate the MIDI notes and MIDI events according to the various parameter values each uses. They're called Voices for historical reasons, but are equally well referred to as "Generators".
Each Voice Object composes according to its defined Voice Type, of which there are 6 (Text to Music Voices being a subtype of the Patterns Voice Type), and each Voice Type has an associated colour:
Tip: Where the interface displays the Voice's name (e.g. Fred), that element will flash when the Voice plays.
Rhythmic Voices are the default Voice Type and compose according to the parameters in the Rhythmic / Basics parameter set. Notes have their durations composed according to the Rhythm Rules you define for your voice, to fit as well as possible within the bar structure of your piece.
Ambient Voices have notes composed such that their durations are defined by the parameters in the Ambient parameter set. Rhythm rules are not used for Ambient Voices. An Ambient Voice plays its notes without any respect for tempo or bar timings! Ambient Voices are wonderful for creating drifting, floating sounds for either background or foreground use as drones or for musical texture.
Following Voices are fascinating. They are used to create Voices which work in a call-response manner, following the behaviour of other Voices in a variety of exciting ways according to parameters in the Following parameter set.
Repeat Voices are like Rhythmic Voices, with the added feature that they can be defined to repeat work that they have composed in previous bars, according to rules defined in the Repeat parameter set. When not repeating previous bars, Repeat Voices work in exactly the same way as Rhythmic Voices.
Patterns Voices are Voices that play in accordance with various fixed MIDI-like patterns. These patterns have a specific syntax, are able to follow generative sequencing rules and can adapt automatically to changes in Scale Rules. They are great for bringing some structure to your composition. Patterns Voices are capable of mutating their patterns while playing, according to parameters defined in the Patterns parameter set. When not following a pattern (e.g. when mutating), Pattern Voices compose as Rhythmic Voices.
"Text to Music" Voices are a sub type of the Patterns Voice Type. Text to Music Voices have their own special set of parameters and allow text in any language to generate a seed melody which, even though not visible, is in a Patterns syntax. Use English, Chinese, Japanese, Russian, German, French - whatever language takes your fancy, you will always get a melody!
Listening Voices are Voices that detect incoming MIDI note events (monophonic, not polyphonic) and present them as a virtual composed note which can be followed by a Following Voice. Listening Voices therefore allow you to create simple hyperinstruments where external input can influence the music that is generated. Such music is also referred to as Adaptive Generative Music (AGM).
Listening voices do not require any use of scripting, but they can also be used with Intermorphic Wotja Script for more advanced hyperinstruments.
Every Voice composes notes according to the settings of 4 types of Rule Objects it has associated with it.
The four Rule Objects are Scale Rule, Harmony Rule, Next Note Rule, Rhythm Rule. Each Rule Object contains one or more "Rules", such as "Major" or "Minor".
Each Rule is an array of values for the elements of that Rule Object (such as the values of 24 scale intervals). These values represent the relative probability, from 0% to 100% (the "weighting"), of that Rule's element being chosen by a Voice for a note to be composed. An element with a value of 0 will not be chosen for composition.
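The weighted selection described above can be sketched as follows. This is an illustrative Python sketch, not the IME's actual code; the function name and scale values are assumptions.

```python
import random

def choose_element(weights):
    """Pick an index from `weights` in proportion to each value (0-100).
    Elements weighted 0 are never chosen."""
    candidates = [(i, w) for i, w in enumerate(weights) if w > 0]
    total = sum(w for _, w in candidates)
    pick = random.uniform(0, total)
    for index, weight in candidates:
        pick -= weight
        if pick <= 0:
            return index
    return candidates[-1][0]

# A hypothetical "Major"-like Scale Rule over one octave:
# only the scale tones carry non-zero weightings.
major = [100, 0, 50, 0, 50, 50, 0, 100, 0, 50, 0, 50]
semitone = choose_element(major)  # always one of 0, 2, 4, 5, 7, 9, 11
```

Raising one element's weighting relative to the others makes the corresponding note proportionally more likely; zeroing it removes the note entirely, which is exactly how a Scale Rule constrains a Voice to a scale.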
Important: When the Pitch Range for a Voice is wide enough, Scale Rules will "wrap around" to accommodate the extra range in notes available.
Changes to the values in these Rules, as with changes to any other IME parameter, allow an instant change in the music generated.
Tip: Where the interface displays the Rule elements, tap or click one at the appropriate position to change the value of that element; the IME will attempt to make an element flash when it has been selected to play a note.
These elements represent the semitone distance from the Root note defined for this Piece, these being: P1 (root), m2, M2, m3, M3, P4 (perfect 4th), b5, P5 (perfect 5th), m6, M6, m7, M7, P8 (octave), m9, M9, m10, M10, P11 (perfect 11th), b12, P12 (perfect 12th), m13, M13, m14, M14.
These elements represent in semitones the possible harmonies available, these being: P1 (root), m2, M2, m3, M3, P4 (perfect 4th), b5, P5 (perfect 5th), m6, M6, m7, M7, P8 (octave), m9, M9, m10, M10, P11 (perfect 11th), b12, P12 (perfect 12th), m13, M13, m14, M14.
Imagine that you have three Voices, called X, Y and Z.
Imagine that at some time in the piece, the IME has already chosen to play note C for Voice X, and note G for Voice Y. At that time, the IME thinks about composing a note for Voice Z. It looks at the notes available for it to compose, and adjusts the probabilities of choosing each of those notes by applying the Harmony Rule element values for each of the two composed notes which are active at that time (i.e. notes C and G).
Harmonies are always calculated in a rising direction up from the Piece Root. So, if the Piece Root is B and the IME is considering whether it can compose note D for Voice Z, and the note it needs to harmonize with is note C for Voice X; then the IME figures out the Harmony Rule values for Voice Z from the C of Voice X, up and through the octave (i.e. E, F, F#, G, G# etc.).
These elements represent in semitones the distance a new note will be from the last composed note, these being: P1 (root), m2, M2, m3, M3, P4 (perfect 4th), b5, P5 (perfect 5th), m6, M6, m7, M7, P8 (octave), m9, M9, m10, M10, P11 (perfect 11th), b12, P12 (perfect 12th), m13, M13, m14, M14.
These elements represent the permitted note durations, these being: 1, 1/2., 1/2, 1/4., 1/4, 1/8., 1/8, Triplet, 1/16
Voice Type - Rhythmic & "Basics" parameters.
Rhythmic Voices are the default Voice Type and compose according to the parameters in the Rhythmic / Basics parameter set. Notes have their durations composed according to the Rhythm Rules you define for your voice, to fit as well as possible within the bar structure of your piece.
The "Rhythmic"/"Basics" parameters encompass the main parameters of every Voice and also govern those of the default Voice Type, which is the Rhythmic Voice.
The "Rhythmic"/"Basics" parameters govern how Rhythmic Voices work but, as they are the core parameters, they are also used (where relevant) in other Voice Types.
Every voice in a Noatikl file has a unique name. You can use any name you want, provided it is not empty, and provided it is not a single question mark (which has a reserved meaning for use with Rules, which you will find out about later).
Toggle this setting (Yes/No) to mute or unmute the voice. Certain Voice Types might take some time to respond, depending on how far in advance their notes are composed.
When the keyboard focus is on the Mute cell, you have various extra menu options available to you in the "Control" Menu. These are as follows:
If you hold down the ctrl key when you click on the mute cell, you will toggle all other Voices' mute states, without changing the mute state of the voice that you ctrl-click on. This can be very handy.
Every Voice is assigned a given Patch. This specifies the sound that you will hear whenever the Voice plays a note. The exact sound you hear depends on the sound source you have associated with the Voice; this could be the ISE or you might be driving an external MIDI synth.
In general, the IME does not emit any MIDI Bank Select CC information for a Voice before it emits the Patch Change MIDI event. However, you can force the IME to emit such information by typing in a patch value in the special format patch.msb.lsb, for example:
98.53.4
In this example, the IME will emit Bank Select CCs for both MSB and LSB according to the values you supply (53 and 4 respectively, in this case). If you don't specify a value for the lsb, then the IME will only emit a Bank Select MSB CC (CC number 0). If you do supply the lsb, then the IME will also emit a Bank Select LSB CC (CC number 32).
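The patch.msb.lsb behaviour above can be sketched like this. The parsing function is a hypothetical illustration (not the IME's implementation); it emits Bank Select MSB (CC 0), optionally Bank Select LSB (CC 32), and finally the Patch (Program) Change, in that order.

```python
def patch_events(value):
    """Parse a 'patch', 'patch.msb' or 'patch.msb.lsb' string into an
    ordered list of MIDI events to emit."""
    parts = [int(p) for p in str(value).split(".")]
    events = []
    if len(parts) >= 2:
        events.append(("cc", 0, parts[1]))    # Bank Select MSB (CC 0)
    if len(parts) >= 3:
        events.append(("cc", 32, parts[2]))   # Bank Select LSB (CC 32)
    events.append(("program", parts[0]))      # Patch Change last
    return events

patch_events("98.53.4")
# [('cc', 0, 53), ('cc', 32, 4), ('program', 98)]
```

With a bare patch value such as "98", no Bank Select CCs are emitted at all, matching the default behaviour described above.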
Not all software synthesizers for your favourite sequencer like having Patch data supplied to them via a Patch Change MIDI event. If this is the case, simply change the Send MIDI Bank/Patch? parameter to No (unchecked), and the IME won't send any MIDI patch change events.
A Voice emits data on a MIDI Channel. MIDI channels are numbered from 1 to 16. The default MIDI channel for a Voice is actually MIDI channel 0 – which tells the IME to assign a free channel from 1 to 16 automatically, as best it can. MIDI channel 10 is always reserved for percussion, such as drum sounds or other untuned sounds.
Every Voice composes according to its Voice Type.
Set the Pitch to be the minimum pitch for which you want your Voice to compose. The IME will ensure that it composes no notes less than this pitch value.
Set this to define the shortest possible sequence of notes that your Voice will compose in sequence. The Voice composes a sequence of notes, followed by a sequence of rests. The length of each sequence of notes is governed by this and the Phrase Length Range parameter.
This value defines the upper limit to the number of notes that your Voice will compose in sequence. For example, if the Phrase Length is 3, and the Phrase Length Range is 25, then the minimum phrase will be 3 notes, and the maximum phrase length will be (3+25) = 28 notes.
Set this to define the shortest possible sequence of rests that your Voice will compose. Your Voice composes a sequence of notes, followed by a sequence of rests. The length of each sequence of rests is governed by this and the Phrase Gaps Range parameter.
This value defines the upper limit to the number of rests that your Voice will compose in sequence. For example, if the Phrase Gaps is 3, and the Phrase Gaps Range is 25, then the minimum gap will be 3 rests, and the maximum gap will be (3+25) = 28 rests.
This value defaults to zero. If not zero, then the defined percentage of notes that would otherwise be played by your Voice will instead be treated as a rests of the same duration. This is very useful for making any Voice sound sparser. Give it a go: this parameter is very powerful, and applies to all Voice Types.
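The interaction of Phrase Length, Phrase Length Range and Rests % described above can be sketched as follows. This is an illustrative Python sketch; the function and its parameter names are assumptions, not the IME's internals.

```python
import random

def compose_phrase(length=3, length_range=25, rests_percent=0):
    """Draw a phrase length from [length, length + length_range], then
    silence each note with probability rests_percent (0-100)."""
    count = random.randint(length, length + length_range)
    return ["rest" if random.uniform(0, 100) < rests_percent else "note"
            for _ in range(count)]

phrase = compose_phrase()                       # between 3 and 28 events
sparse = compose_phrase(rests_percent=50)       # roughly half become rests
```

The same draw-from-a-range pattern applies to the Phrase Gaps / Phrase Gaps Range pair, which governs the run of rests between phrases.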
Voice - Type Ambient
Ambient Voices have notes composed such that their durations are defined by the parameters in the Ambient parameter set. Rhythm rules are not used for Ambient Voices. An Ambient Voice plays its notes without any respect for tempo or bar timings! Ambient Voices are wonderful for creating drifting, floating sounds for either background or foreground use as drones or for musical texture.
The "Ambient" parameters govern how Ambient Voices work.
You define the Unit of Measure for which the other Ambient Voice parameters are interpreted. This may be one of the following values:
The parameters including Duration are all interpreted as being in thousandths of a second (i.e. Milliseconds). So, a Duration value of 1000 means one second.
The parameters including Duration are all interpreted as being in 60ths of a beat. In the IME a Beat is defined as being one crotchet; you get 4 beats in a bar of 4:4 music. So, a Duration value of 60 means one beat. A Duration value of 30 means a quaver. A Duration value of 20 means a triplet. A Duration value of 15 means a semi-quaver. A Duration value of 240 means 4 beats (which is a full bar if the Piece Meter is 4:4).
The parameters including Duration are all interpreted as being in seconds. So, a Duration value of 10 means ten seconds.
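When the Unit of Measure is 60ths of a beat, converting a Duration into wall-clock time depends on the tempo. A small illustrative helper (an assumption for this sketch, not an IME function):

```python
def units_to_ms(duration_units, bpm):
    """Convert a Duration in 60ths of a beat to milliseconds at `bpm`,
    where one beat is a crotchet."""
    beat_ms = 60_000 / bpm          # one crotchet in milliseconds
    return duration_units / 60 * beat_ms

units_to_ms(60, 120)   # one beat at 120 BPM -> 500.0 ms
units_to_ms(240, 120)  # a full 4:4 bar at 120 BPM -> 2000.0 ms
```

So the same Duration value of 60 lasts half a second at 120 BPM but a full second at 60 BPM, whereas in Milliseconds or Seconds mode durations are tempo-independent.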
The Ambient Voice parameters govern how Ambient Voices work.
This defines the minimum duration for which the Ambient Voice will play when it composes a note. The actual value chosen for each note is a value between Duration, and Duration plus the Duration Range. Each and every note composed for this Ambient Voice has its duration separately calculated.
This is combined with the Duration parameter to determine the duration for which the Ambient Voice will play when it composes a note. The actual value chosen for each note is a value between Duration, and Duration plus the Duration Range. Each and every note composed for this Ambient Voice has its duration separately calculated.
This defines the minimum duration for which the Ambient Voice will play when it composes a rest. The actual value chosen for each rest is a value between Gap Minimum, and Gap Minimum plus the Gap Range. Each and every rest composed for this Ambient Voice has its duration separately calculated.
Tip: This is the Duration used for the Phrase Gaps / Phrase Gaps Range parameters i.e. it allows the duration of gaps to be different to that of the notes.
This is combined with the Gap Minimum parameter to determine the duration for which the Ambient Voice will play when it composes a rest. The actual value chosen for each rest is a value between Gap Minimum, and Gap Minimum plus the Gap Range. Each and every rest composed for this Ambient Voice has its duration separately calculated.
Tip: This is the Duration Range used for the Phrase Gaps / Phrase Gaps Range parameters i.e. it allows the duration of gaps to be different to that of the notes.
Set this to define the shortest possible sequence of notes that your Voice will compose in sequence. The Voice composes a sequence of notes, followed by a sequence of rests. The length of each sequence of notes is governed by this and the Phrase Length Range parameter.
This value defines the upper limit to the number of notes that your Voice will compose in sequence. For example, if the Phrase Length is 3, and the Phrase Length Range is 25, then the minimum phrase will be 3 notes, and the maximum phrase length will be (3+25) = 28 notes.
Set this to define the shortest possible sequence of rests that your Voice will compose. Your Voice composes a sequence of notes, followed by a sequence of rests. The length of each sequence of rests is governed by this and the Phrase Gaps Range parameter.
This value defines the upper limit to the number of rests that your Voice will compose in sequence. For example, if the Phrase Gaps is 3, and the Phrase Gaps Range is 25, then the minimum gap will be 3 rests, and the maximum gap will be (3+25) = 28 rests.
This value defaults to zero. If not zero, then the defined percentage of notes that would otherwise be played by your Voice will instead be treated as a rests of the same duration. This is very useful for making any Voice sound sparser. Give it a go: this parameter is very powerful, and applies to all Voice Types.
Voice - Type Follow
Following Voices are fascinating. They are used to create Voices which work in a call-response manner, following the behaviour of other Voices in a variety of exciting ways according to parameters in the Following parameter set.
The "Follow" parameters govern how Following Voices work. Select the Voice which you want your Voice to follow. If you don't specify a Voice to follow, then the following Voice won't play. You may follow any Voice in the piece you want, of any type. You may even follow a Voice that is following another Voice that is following another Voice... provided that you don't try to define a cyclic dependency which loops back to the current Voice!
Important: If you are following a voice that is using the Chords parameter (i.e. to create more than one note), then only the first note in the chord is followed.
This parameter sets the percentage of notes from the Followed Voice that the Following Voice responds to by emitting a note. Set it to 100 if you want the Following Voice to emit a note for every note played by the Followed Voice. Set it to a smaller value if you want to thin out the notes played by the Following Voice. This is also useful for building networks of chords: if you have a number of Following Voices all following either each other or one main Voice, and those Following Voices have the Percent parameter set to less than 100, then sometimes you will hear dense chords, and sometimes you will hear thinner chords.
This parameter defines the pitch to use for a note generated by the Following Voice. The available values are:
This causes the Following Voice to choose notes which respect the currently defined Scale, Harmony and Next Note Rules.
This causes the Following Voice to choose notes which are offset from the followed note, such that they are at an interval within the Scale Rule, defined as a value randomly selected between Shift / Interval and Shift / Interval plus Shift / Interval Range.
For example, if these values are 1 and 2 respectively, then each time a note is chosen, it will be between 1 and (1+2)=3 Scale Rule intervals up from the Followed Voice's note. It is important to understand that this refers to the non-zeroed elements in the current Scale Rule, in other words only those notes that are available within the Scale Rule.
So, in our example, if we were using a Major Scale Rule, and if the followed note were C4 (Middle C), and if the IME chose a value of 2 as its random value; then the played note would be E4, which is the second scale note up from Middle C within the Major Scale Rule.
This causes the Following Voice to choose notes which are offset up from the followed note by a number of semitones randomly selected between Shift / Interval and Shift / Interval plus Shift / Interval Range. A note chosen in this way ignores the current Scale Rule.
For example, if these values are 1 and 2 respectively, then each time a note is chosen, it will be between 1 and (1+2)=3 semitones up from the Followed Voice's note.
So, in our example, if the followed note were C4 (Middle C), and if the IME chose a value of 3 as its random value; then the played note would be D#4, which is three semitones up from Middle C. This value is used even though D# is not in the current (Major) Scale Rule.
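The two strategies above can be sketched side by side. This is an illustrative Python sketch (names and the scale table are assumptions); it assumes the followed pitch is itself a scale tone when stepping within the scale.

```python
import random

# Semitone offsets of the non-zero elements of a Major Scale Rule.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]

def follow_pitch(followed, strategy, shift, shift_range, rng=random):
    """Offset `followed` (a MIDI pitch) by shift..shift+shift_range,
    either in semitones or in scale-degree steps."""
    steps = rng.randint(shift, shift + shift_range)
    if strategy == "semitone":
        return followed + steps          # ignores the Scale Rule
    # "interval": step through the scale's non-zero degrees, wrapping
    # into higher octaves as needed (assumes followed is a scale tone).
    degree = MAJOR_SCALE.index(followed % 12)
    octaves, degree = divmod(degree + steps, len(MAJOR_SCALE))
    return followed - followed % 12 + 12 * octaves + MAJOR_SCALE[degree]

follow_pitch(60, "interval", 2, 0)  # C4 + 2 scale steps -> 64 (E4)
follow_pitch(60, "semitone", 3, 0)  # C4 + 3 semitones   -> 63 (D#4)
```

With a non-zero Shift / Interval Range, each followed note gets a fresh random draw, so the same followed note can produce different offsets over time.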
You define the Unit of Measure by which the Delay and Delay Range parameters are interpreted. This may be one of the following values:
The parameters including Duration are all interpreted as being in thousandths of a second (i.e. Milliseconds). So, a Duration value of 1000 means one second.
The parameters including Duration are all interpreted as being in 60ths of a beat. In the IME a Beat is defined as being one crotchet; you get 4 beats in a bar of 4:4 music. So, a Duration value of 60 means one beat. A Duration value of 30 means a quaver. A Duration value of 20 means a triplet. A Duration value of 15 means a semi-quaver. A Duration value of 240 means 4 beats (which is a full bar if the Piece Meter is 4:4).
The parameters including Duration are all interpreted as being in seconds. So, a Duration value of 10 means ten seconds.
This defines the minimum delay after which the Following Voice will play a followed note. The actual value chosen for each note is a value between Delay, and Delay plus the Delay Range. Each and every note composed for this Following Voice has its delay separately calculated.
This is combined with the Delay parameter to determine the delay after which the Following Voice will play a followed note. The actual value chosen for each note is a value between Delay, and Delay plus the Delay Range. Each and every note composed for this Following Voice has its delay separately calculated.
Used when the Strategy is either Interval Within Scale Rule or Semitone Shift.
This causes the Following Voice to choose notes which are offset in some way from the followed note, according to the Strategy; where the offset is defined as a value randomly selected between the Shift / Interval and Shift / Interval plus Shift / Interval Range values.
This represents the "Shift/Interval Range", and is used when the Strategy is either Interval Within Scale Rule or Semitone Shift.
This causes the Following Voice to choose notes which are offset in some way from the followed note, according to the Strategy; where the offset is defined as a value randomly selected between the Shift / Interval and Shift / Interval plus Shift / Interval Range values.
Voice - Type Repeat
Repeat Voices are like Rhythmic Voices, with the added feature that they can be defined to repeat material that they have composed in previous bars, according to rules defined in the Repeat parameter set. When not repeating previous bars, Repeat Voices work in exactly the same way as Rhythmic Voices.
The "Repeat" parameters govern how Repeat Voices work.
You define the name of the Voice from which you would like, from time-to-time, to repeat past bars of music. If you simply want to repeat bars played in the past for the current Voice, simply select the magic value of '?', which is also the default value.
When the Voice starts composing a new bar, it takes a look at this parameter value. This defines for what percentage of the time the Voice should repeat previously-composed music. Set this parameter to 100 if you always want past composed music to be repeated (where available!); set it to 0 if you never want past music repeated by this Voice. When the Voice doesn't choose to repeat past data, it composes a new bar of music as if it were of the Rhythmic Voice Type.
Defines the number of bars for which the Voice should repeat a past-composed chunk of music. The actual value chosen is somewhere between Bars and Bars + Bars Range.
Defines the upper limit of the number of bars for which the Voice should repeat a past-composed chunk of music. The actual value chosen is somewhere between Bars and Bars + Bars Range.
Defines the number of bars in the past, from which the Voice will choose the past-composed music to repeat. The actual value chosen is somewhere between History and History + History Range.
Defines the upper limit of the number of bars in the past, from which the Voice will choose the past-composed music to repeat. The actual value chosen is somewhere between History and History + History Range.
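The Bars / Bars Range and History / History Range draws described above can be sketched together (an illustrative Python sketch; the function is an assumption, not the IME's code):

```python
import random

def choose_repeat(bars, bars_range, history, history_range):
    """Decide how many bars to repeat, and how many bars back to look,
    each drawn uniformly from [value, value + range]."""
    repeat_bars = random.randint(bars, bars + bars_range)
    look_back = random.randint(history, history + history_range)
    return repeat_bars, look_back

n_bars, back = choose_repeat(bars=2, bars_range=2, history=4, history_range=4)
# repeats between 2 and 4 bars, drawn from 4 to 8 bars in the past
```

Setting both ranges to 0 makes the repeat behaviour fully deterministic: always the same number of bars, always from the same distance back.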
Voice - Type Patterns
Patterns Voices are Voices that play in accordance with various fixed MIDI-like patterns. These patterns have a specific syntax, are able to follow generative sequencing rules and can adapt automatically to changes in Scale Rules. They are great for bringing some structure to your composition. Patterns Voices are capable of mutating their patterns while playing, according to parameters defined in the Patterns parameter set. When not following a pattern (e.g. when mutating), Pattern Voices compose as Rhythmic Voices.
The "Patterns" parameters govern how Patterns Voices work. When not following a pattern, Patterns Voices compose as if they were of the Rhythmic Voice Type. Text to Music (TTM) Voices also use the Patterns Voice Type, as TTM is a subtype of it.
See also the IME Pattern Syntax section.
When the Voice starts a new sub-pattern at the start of a bar, it consults the value you have defined for Use Percent. This parameter determines the probability of the Voice using the Pattern for the bar; alternatively, the Voice composes a completely new bar (that you will hear only once!) as if it were of the Rhythmic Voice Type.
If Use Percent is 100, then the Voice will always use the pattern. If Use Percent is 50, then the Voice will instead compose a new bar every other bar or so. Note that the Voice will never interrupt a sub-pattern that it is playing; the Use Percent parameter is considered only on a sub-pattern boundary, at the start of a new bar.
The Mutation Factor is used when a bar is considered for mutation (which can happen only if Bars Between is not zero!). It determines the level of mutation to apply. If set to 10.0%, then when playing from a sub-pattern, each note that would be played from the pattern has a 10% chance of having a different one composed, with subsequent pattern playbacks keeping that mutation. Note that if Mutate Rhythm? is set to Yes, then if the composed note is longer than the composed-over pattern note, it might overlap and cancel-out some other notes in the sub-pattern.
This parameter defines the number of bars the Voice waits before trying to mutate a bar in a pattern according to the Mutation Factor. If Bars Between is set to zero, the Voice will never mutate. Set it to 1 if you want mutation every bar, 2 if you want mutation every other bar, and so on.
The actual number of bars used is selected randomly each time, somewhere in the range from Bars Between, to Bars Between plus Bars Range.
This parameter is used to help define the number of bars between attempts by the Voice to mutate the current pattern. The actual number of bars used is selected randomly each time, somewhere in the range from Bars Between, to Bars Between plus Bars Range.
If set to No, then the timing of the sub-pattern is preserved perfectly; only the frequency of the pattern notes will be changed when the pattern is mutated. Otherwise, the duration of each note is chosen from the rhythm rules and phrase/phrase gap rules for the Voice.
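A minimal sketch of the per-note mutation described above (illustrative Python; the function and the `compose_note` callback are assumptions, not the IME's internals):

```python
import random

def mutate_bar(notes, mutation_factor, compose_note):
    """Give each pattern note a mutation_factor % chance of being
    replaced by a freshly composed note; the rest are kept as-is."""
    return [compose_note() if random.uniform(0, 100) < mutation_factor
            else note
            for note in notes]

pattern = [60, 62, 64, 65]
mutated = mutate_bar(pattern, 10.0, lambda: random.randint(48, 72))
# roughly 10% of notes replaced; per the text, the mutated bar then
# persists on subsequent pattern playbacks
```

Because the mutated bar replaces the stored pattern bar, mutations accumulate over time, letting a pattern drift gradually away from its original form.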
Defines the Meter to be used by the Voice, such as 4:4 or 3:4 or 6:8. A value of ?, which is the default, means use the Meter defined for the Piece. A different value allows the Voice to work with a completely different meter, which can be used for interesting polyrhythmic effects.
Defines the Pattern to be used by a Fixed Pattern Voice.
Select the Patterns parameter to display any Pattern Editor Dialog that is available. Information on the syntax to use is detailed below.
The pattern syntax is somewhat complicated, and, until such time as we have created a pattern editor for the IME, will require a bit of effort to get to grips with.
A pattern is a text string in a specific format, surrounded by < and > symbols. Patterns can be grouped together as a list of patterns. Because of this we refer to a pattern as a "Pattern" when there is only one, or as "Sub-Patterns" when there are many in the list. We know it is a bit confusing, so we'll say it again in another way: each Pattern is made up of a number of Sub-Patterns.
There are 4 types of Sub-Pattern:
The IME employs an underlying "time unit" that is 1/60th of a crotchet/quarter note. IME note Duration values map onto standard music notation in the following way (irrespective of meter) [# IME time units - Composed note length]:
So by way of example:
You are of course free to experiment using other time unit values, which will mean different things. E.g. 10 time units is a 24th note etc.
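The 1/60th-crotchet time grid above can be captured in a small lookup (an illustrative sketch; the table names are assumptions, with values taken from the mapping described above):

```python
# Duration values in IME time units (1/60th of a crotchet) mapped to
# standard note lengths.
UNIT_TO_NOTE = {
    240: "whole note",      # 4 beats
    120: "half note",       # 2 beats
    60:  "quarter note",    # 1 beat (crotchet)
    30:  "eighth note",     # quaver
    20:  "eighth triplet",  # 1/3 beat
    15:  "sixteenth note",  # semiquaver
}

def units_per_bar(crotchet_beats):
    """Time units in one bar, e.g. 240 for 4:4, 180 for 3:4."""
    return crotchet_beats * 60

units_per_bar(4)  # 240
```

Any other value is legal too; 10 units, for instance, is a sixth of a beat (a 24th note), as noted above.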
Tip: To aid understanding and visual clarity, we use the following color coding: durations, velocity values and scale intervals.
Where:
See above for note duration values.
In R patterns a negative duration indicates a rest for that time; in F patterns a pitch of -1 indicates a rest for the note duration.
If a note sub-pattern is not an exact whole number of bars (e.g. 2 and a half bars at the current meter!) then the engine will pad with silence to the end of the nearest bar boundary.
Syntax
<[S][prob][.M] R {[seqnum[-seqnumrange].[repeattimes[-repeattimesrange]] [seqnum[-seqnumrange].[repeattimes[-repeattimesrange]]}* >
Where:
Which sub-pattern is chosen by the Voice depends on a few things:
If there is at least one sequenced sub-pattern, then a sequence is used to drive the sub-pattern choice. Which sequence to use is based on the Voice making a weighted random selection from the available sub-patterns. When (if!) the end of the sequenced sub-pattern is reached, the Voice will make another selection as to which sequenced sub-pattern to use.
Otherwise, a sub-pattern is chosen, based on the Voice making a weighted random selection from the available sub-patterns. This sub-pattern is played through to the end, at which point the Voice will make another selection as to which sub-pattern to play.
A Note Sub-Pattern that is less than a whole number of bars at the Voice's current Meter, will be padded automatically with silence to ensure that it remains bar synchronised.
The IME has its own pattern format, which allows it to play specified notes and rests in different ways. Patterns are affected by the various Rules being used by the Voice.
Tip: To aid understanding and visual clarity in the examples, we use the following color coding: durations, velocity values and scale intervals.
Below are a number of example patterns. Copy and paste these into Noatikl to try them out.
Rhythm: <100 R 60 60 60.127 60>
Both: <100 B 60.15-30 1 60 2 60.127 3 15 7>
Forced Frequency: <100 F60 60.127 1 60 4 30 5 15.70-120 7>
Sequence: <S100.M R 1.1 2-0.1-0 3-0.1-0 >
Two patterns. Select randomly from these two each time!
<100 B 60 1 60 2 60 3 60 4>
<100 B 30 9 30 8 30 7 30 6 30 5 30 4 30 3 30 2>
Forced Frequency pattern. Plays note with pitch 58 (or MIDI patch 59 on Ch10): note on, note off, note on, note off etc.
<100 F58 15 1 45 -1 15 1 45 -1 15 1 45 -1>
Forced Frequency pattern: Plays a series of notes for a whole bar (or patches in the drum kit on Ch10) starting at base pitch 40, each one a 16th note...
<100 F40 15 1 15 7 15 6 15 13 15 2 15 8 15 4 15 11 15 13 15 2 15 8 15 2 15 2 15 8 15 4 15 14>
Two sequenced sub-patterns. Play 1 once, then 2 once...
<S100 R 1.1 2.1>
<100 B 60 1 60 2 60 3 60 4>
<100 B 30 9 30 8 30 7 30 6 30 5 30 4 30 3 30 2>
Two sequenced sub-patterns. Play 1 twice, then 2 twice...
<S100 R 1.2 2.2>
<100 B 60 1 60 2 60 3 60 4>
<100 B 30 9 30 8 30 7 30 6 30 5 30 4 30 3 30 2>
Two sequenced sub-patterns. Play 1 or 2 twice, then 1 or 2 twice...
<S100 R 1-1.2 1-1.2>
<100 B 60 1 60 2 60 3 60 4>
<100 B 30 9 30 8 30 7 30 6 30 5 30 4 30 3 30 2>
Two sequenced sub-patterns. Play 1 once, then 2 twice, then 1 or 2 once, then 2 once...
<S100 R 1.1 2.2 1-1.1 2.1>
<100 B 60 1 60 2 60 3 60 4>
<100 B 30 9 30 8 30 7 30 6 30 5 30 4 30 3 30 2>
Two sequenced sub-patterns. Play 1 once, then 2 forever...
<S100 R 1.1 2.0>
<100 B 60 1 60 2 60 3 60 4>
<100 B 30 9 30 8 30 7 30 6 30 5 30 4 30 3 30 2>
Voice - Type Text to Music
"Text to Music" Voices are a sub-type of the Patterns Voice Type. Text to Music Voices have their own special set of parameters and allow text in any language to generate a seed melody which, even though not visible, is in Patterns syntax. Use English, Chinese, Japanese, Russian, German, French - whatever language takes your fancy, you will always get a melody!
The "Text to Music" parameters govern how Text to Music Voices work.
Voice - Type Listening
Listening Voices are Voices that detect incoming MIDI note events (monophonic, not polyphonic) and present them as a virtual composed note which can be followed by a Follows Voice. Listening Voices therefore allow you to create simple hyperinstruments where external input can influence the music that is generated. Such music is also referred to as Adaptive Generative Music (AGM).
Listening voices do not require any use of scripting, but they can also be used with Intermorphic Wotja Script for more advanced hyperinstruments.
A Listening Voice does not have any special parameters, which is why none are shown in the parameter list.
The monophonic "Listening Voice" listens out on its MIDI channel for incoming MIDI notes. It will only detect a subsequent note if it first receives a note off for the previous note.
When an incoming MIDI note is detected, e.g. C60, then:
The virtual note created by a Listening Voice is pitch shifted, if necessary, to fit within the band of pitch values set by its Pitch and Pitch Range parameters (see Rhythmic Voice).
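The monophonic detection and pitch-banding behaviour described above can be sketched like this. The class and method names are illustrative (the IME's internals are not exposed), and the octave-shifting approach to fitting the band is an assumption; the band here is assumed to span at least one octave.

```python
class ListeningVoiceSketch:
    """Tracks incoming MIDI notes monophonically: a new note is only
    accepted after a note-off has been received for the previous one."""

    def __init__(self, low=48, high=72):
        self.active_pitch = None          # currently sounding incoming note
        self.low, self.high = low, high   # Pitch .. Pitch + Pitch Range band

    def note_on(self, pitch):
        """Returns the virtual composed note, or None if still waiting
        for a note-off for the previous note (the new note is ignored)."""
        if self.active_pitch is not None:
            return None
        self.active_pitch = pitch
        return self.fit_to_band(pitch)

    def note_off(self, pitch):
        if pitch == self.active_pitch:
            self.active_pitch = None

    def fit_to_band(self, pitch):
        # Assumed behaviour: octave-shift until pitch lies within the band.
        while pitch < self.low:
            pitch += 12
        while pitch > self.high:
            pitch -= 12
        return pitch
```

For example, while C4 (60) is held, any further note-ons are ignored; once its note-off arrives, the next incoming note is detected again.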
The Chord parameters let you configure any Voice Type to generate chords automatically.
In outline, use the Depth and Depth Range values to define the "Chord Depth"; which is the number of notes that will play at any one time for a given Voice. The first note in any chord is composed according to the normal mechanism for the Voice Type; additional notes that cause a chord to be built-up may be generated automatically according to the Chord parameters.
Specify the minimum Depth of chord that you want your Voice to play with. A value of 1 will mean that the Voice will not chord (unless the Depth Range parameter is greater than zero).
The Depth defines the number of notes that are played by the voice at any one time.
Specify the relative maximum Depth of chord that you want your Voice to play with. A value of 0 means that whenever the Voice is played, it will play a number of notes equal to the Depth. A value of one or more means that whenever the Voice is played, it will play a number of notes equal to a randomly selected value between the Depth and the Depth plus the Depth Range.
This parameter tells the Voice the percentage chance that it should actually emit any given note in the chord (after the first note, of course!). Set to 100 if you want the Chording Voice to always emit a note for every note played by the Voice. Set to a smaller value if you want to thin-out the notes played within the chord. This allows you to create chords of varying depth; sometimes dense, sometimes thin.
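The Depth, Depth Range and per-note percentage chance combine as sketched below. This is a minimal illustration with hypothetical names, not the IME's actual code.

```python
import random

def emit_chord(depth, depth_range, percent_chance):
    """Decide which chord slots actually sound for one played chord.

    The number of candidate notes is Depth plus a random 0..Depth Range
    extra. The first note always plays (it is composed by the Voice's
    normal mechanism); each additional note plays with the given
    percentage chance, which lets you thin-out dense chords."""
    count = depth + random.randint(0, depth_range)
    slots = [True]  # first note of the chord always sounds
    for _ in range(count - 1):
        slots.append(random.randint(1, 100) <= percent_chance)
    return slots
```

With a percentage of 100 every candidate note sounds; with, say, 50, roughly half of the additional notes are dropped, giving chords of varying density.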
This parameter tells the Voice what it should do when it decides the pitch to use for a note generated within a chord. The available values are:
This causes the Voice's chord notes to be selected according to the currently defined Scale, Harmony and Next Note Rules.
This causes the Voice's chord note to be selected offset from the followed note, such that it is at an interval within the Scale Rule beyond the previous note in the chord, defined as a value randomly selected between the Shift / Interval and Shift / Interval plus Shift / Interval Range values.
For example, if these values are 1 and 2 respectively, then each time a note is chosen within the chord, it will be between 1 and (1+2)=3 Scale Rule intervals up from the previous note in the chord. It is important to understand that this refers to the non-zeroed elements in the current Scale Rule, in other words only those notes that are available within the Scale Rule.
So, in our example, if we were using a Major Scale Rule, and if the first note in the chord were C4 (Middle C), and if the Voice chose a value of 2 as its random value; then the played note would be E4, which is the second note up from Middle C within the Major Scale Rule.
This causes the Voice's chord note to be selected offset up from the previous note in the chord, such that it is offset from the previous chord note by a number of semitones which is a value randomly selected between the Shift / Interval and Shift / Interval plus Shift / Interval Range values. A note chosen in this way ignores the current Scale Rule.
For example, if these values are 1 and 2 respectively, then each time a note is chosen, it will be between 1 and (1+2)=3 semitones up from the previous note in the chord.
So, in our example, if we were using a Major Scale Rule, and if the previous note in the chord were C4 (Middle C), and if the Voice chose a value of 3 as its random value; then the played note would be D#4, which is the third semitone up from Middle C. This value is used even though it is not in the current Scale Rule.
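The two strategies can be sketched together as below. This is an illustrative Python sketch (hypothetical names), assuming a Major Scale Rule and that the previous chord note's pitch class lies within that scale.

```python
import random

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # non-zeroed elements of a Major Scale Rule

def next_chord_pitch(prev_pitch, strategy, shift, shift_range,
                     scale=MAJOR_SCALE):
    """Pick the next chord note relative to the previous one.

    "scale" strategy: move up by N positions through the scale degrees
    (only the notes available within the Scale Rule count).
    "semitone" strategy: move up by N semitones, ignoring the Scale Rule.
    N is randomly selected in [shift, shift + shift_range]."""
    n = random.randint(shift, shift + shift_range)
    if strategy == "semitone":
        return prev_pitch + n
    # Interval within Scale Rule: step n scale degrees upward.
    base = prev_pitch - prev_pitch % 12      # start of prev note's octave
    idx = scale.index(prev_pitch % 12) + n   # assumes prev pitch is in scale
    return base + 12 * (idx // len(scale)) + scale[idx % len(scale)]
```

This reproduces the worked examples above: from C4 (60), a scale interval of 2 gives E4 (64), while a semitone shift of 3 gives D#4 (63).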
You define the Unit of Measure by which the Delay and Delay Range parameters are interpreted. This may be one of the following values:
The parameters including Duration are all interpreted as being in thousandths of a second (i.e. Milliseconds). So, a Duration value of 1000 means one second.
The parameters including Duration are all interpreted as being in 60ths of a beat. In the IME a Beat is defined as being one crotchet; you get 4 beats in a bar of 4:4 music. So, a Duration value of 60 means one beat. A Duration value of 30 means a quaver. A Duration value of 20 means a triplet. A Duration value of 15 means a semi-quaver. A Duration value of 240 means 4 beats (which is a full bar if the Piece Meter is 4:4).
This works the same way as Beats (60ths of a beat), except that where the Delay has a special value of 10, 15 or 20, the delay is interpreted in a special way that is very useful for some breakbeat-based music. Specifically, in this case, the calculated value for the delay is rounded to the nearest multiple of the Delay value. So, for example, if the engine calculates a value of 43, and if Delay is 20, the value used for the delay is actually 40 (which is the nearest multiple of 20).
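The rounding rule just described amounts to a one-liner; this sketch (hypothetical function name) shows it with the example from the text.

```python
def near_beats_delay(calculated, delay):
    """Round a calculated delay (in 60ths of a beat) to the nearest
    multiple of the Delay value - but only when Delay is one of the
    special breakbeat values 10, 15 or 20; otherwise the calculated
    value is used as-is."""
    if delay in (10, 15, 20):
        return round(calculated / delay) * delay
    return calculated

# The example from the text: a calculated 43 with Delay 20 becomes 40.
```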
This defines the minimum delay after which the Chording Voice will play a followed note. The actual value chosen for each note is a value between Delay, and Delay plus the Delay Range. Each and every note composed for this Chording Voice will have a note whose delay is separately calculated.
This is combined with the Delay parameter, to determine the delay after which the Chording Voice will play a followed note. The actual value chosen for each note is a value between Delay, and Delay plus the Delay Range. Each and every note composed for this Chording Voice will have a note whose delay is separately calculated.
Used when the Strategy is either Interval Within Scale Rule or Semitone Shift.
This causes the Chording Voice to choose notes which are offset in some way from the followed note, according to the Strategy; where the offset is defined as a value randomly selected between the Shift / Interval and Shift / Interval plus Shift / Interval Range values.
This represents the "Shift/Interval Range", and is used when the Strategy is either Interval Within Scale Rule or Semitone Shift.
This causes the Chording Voice to choose notes which are offset in some way from the followed note, according to the Strategy; where the offset is defined as a value randomly selected between the Shift / Interval and Shift / Interval plus Shift / Interval Range values.
This parameter defines the amount that the pitch of each note in the chord should be offset, in semitones, from the previous note in the chord; the actual value selected might be overridden according to the various rules that apply to the Voice, but in general, this parameter allows you to "shape" a chord to have a given range of pitch values. In combination with the Delay-related parameters, this allows you to create some very interesting arpeggiation effects.
For example, a value of +12 would tend to space each note in the chord by a range of 12 semitones (which is one octave), with each subsequent note in the chord being higher in pitch than the previous.
For example, a value of -12 would tend to space each note in the chord by a range of 12 semitones (which is one octave), with each subsequent note in the chord being lower in pitch than the previous.
This parameter allows you to specify the range of relative velocities for the notes in a chord. Each subsequent note in the chord is the defined percentage louder (for a positive value) or quieter (for a negative value) than the previous note in the chord. A value of zero means that all notes in the chord are played with the same velocity.
The Voice velocity envelope values are ignored when this parameter is applied.
For example, a value of -30 would tell the Voice to generate its chords such that each auto-chorded note is 30% quieter than each preceding note in the chord; giving a noticeable tailing-off effect.
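The velocity tailing-off can be sketched as follows. The function name is illustrative, and the rounding and MIDI-range clamping are assumptions for the sketch.

```python
def chord_velocities(first_velocity, percent_change, depth):
    """Velocity for each note in a chord: each subsequent note is
    percent_change % louder (positive value) or quieter (negative
    value) than the previous note, clamped to the MIDI range 0..127."""
    velocities = [first_velocity]
    for _ in range(depth - 1):
        v = velocities[-1] * (100 + percent_change) / 100.0
        velocities.append(max(0, min(127, round(v))))
    return velocities

# The -30 example from the text, for a 4-note chord starting at 100:
# chord_velocities(100, -30, 4) -> [100, 70, 49, 34]
```

A value of zero leaves every note in the chord at the same velocity, as described above.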
The Voice Rule parameters let you select the various rules that govern how your Voice works. The Rules themselves are edited in the relevant Rule Object. See Rule Objects.
Select the Harmony Rule that you want your Voice to use. If you select the value labelled ?, this tells the Voice to use the value defined in the Piece Object. If that value is in turn the ? value, then your Voice will choose a rule to use at random when it starts playing. See Rule Objects.
Select the Next Note Rule that you want your Voice to use. If you select the value labelled ?, this tells the Voice to use the value defined in the Piece Object. If that value is in turn the ? value, then your Voice will choose a rule to use at random when it starts playing. See Rule Objects.
Select the Rhythm Rule that you want your Voice to use. If you select the value labelled ?, then your Voice will choose a rule to use at random when it starts playing. See Rule Objects.
Select the Scale Rule that you want your Voice to use. If you select the value labelled ?, this tells the Voice to use the value defined in the Piece Object. If that value is in turn the ? value, then your Voice will choose a rule to use at random when it starts playing. See Rule Objects.
The default value for this parameter is "Yes", which means that the Voice will be considered for harmonisation with other voices. Set to "No" if for some reason you do not want other voices to harmonize with this voice.
Normally, you want your Voice to use the Piece Root. This is represented by the value ? However, sometimes you really want to force your Voice to use a different Root note; in which case, set the Voice Root to be whatever value suits.
This allows you to work-around the following sort of problem:
Imagine that you have a sampler, where you load-up a variety of loops against MIDI note C3 up to D3. To have your piece drive this from a Rhythmic Voice such that the sounds you hear are not affected by changes to the Piece Root, you should set the Voice Root to e.g. C3 and your Voice will then be unaffected by changes to the Piece Root. Note that in this specific example, it would probably be a good idea to set the Harmonize? Flag to No.
The Voice Scripting parameter allows your Voice to use Intermorphic Wotja Script. For full details please refer to the Intermorphic Wotja Script Guide.
The Voice Comments parameters allow you to store comments in your Voice, in the form of copyright information and any notes you might want to make for future reference.
Enter the Copyright information you might want to record for the Voice. In the case of a Voice from a template pack, this might contain a copyright notice associated with that template.
Enter any detailed notes you might want to make about this Voice for future reference.
The Voice Articulation parameters define the percentage of the duration of a composed note, i.e. they determine how long a composed note actually plays for. The IME used to always compose notes to be played "Legato" (no gap between one note and the next); these parameters allow you to play them staccato.
1 is very staccato and 100 is legato (the new default).
Max articulation is the value of Articulation (min) + Articulation range, and is used in combination with the variation values (below).
The minimum variation in staccato between notes; cf. other parameters that adopt the min + range approach.
The range in variation of staccato between the notes (in addition to the min).
The Voice Controllers parameters define some of the key MIDI controller values that are emitted by the Voice.
Set this value to other than the default of "-1", if you want to emit a Damper/Hold MIDI controller (MIDI CC 64) at the specified value on this Voice's MIDI line. This is an unusual MIDI controller with only two states: a value of 64 or greater activates Damper/Hold, and a value of 63 or less turns it off! Leave this value at the default of "-1" if you don't want the Voice to emit any information for this MIDI controller.
Set this value to other than the default of "-1", if you want to emit a Harmonic Content MIDI controller (MIDI CC 71) at the specified value on this Voice's MIDI line. Leave this value at the default of "-1" if you don't want the Voice to emit any information for this MIDI controller.
Set this value to other than the default of "-1", if you want to emit a Reverb MIDI controller (MIDI CC 91) at the specified value on this Voice's MIDI line. Leave this value at the default of "-1" if you don't want the Voice to emit any information for this MIDI controller.
Set this value to other than the default of "-1", if you want to emit a Chorus MIDI controller (MIDI CC 93) at the specified value on this Voice's MIDI line. Leave this value at the default of "-1" if you don't want the Voice to emit any information for this MIDI controller.
If you are using Damper/Hold (64), then you will find that your notes can start building-up and never decay! In which case, set the Damper Release parameter to "Yes", which tells the Voice to momentarily release the damper just before the end of every bar. This prevents build-up of notes and generally sounds wonderful.
Set this value to other than the default of "-1", if you want to emit a Portamento MIDI controller (MIDI CC 65) at the specified value on this Voice's MIDI line. Leave this value at the default of "-1" if you don't want the Voice to emit any information for this MIDI controller.
The default value of "Yes" means that this Voice can share its MIDI channel with other Voices. This is only considered if you have defined the MIDI Channel parameter for a Voice to be 0.
The Voice Microcontroller parameters (there are two) allow you to define very powerful Microcontrollers to be associated with your Voice.
Microcontrollers are very powerful and you can think of them as built-in, highly configurable MIDI event generators. They can either synchronise to the tempo of your piece, or you can let them run free-floating. Experiment with them – they can do a huge amount to make your music interesting and dynamic.
Tip: if you want to synchronise your Microcontroller to the time-base, so that your MIDI controller is synchronised to bar boundaries in your music, you'll need to use the Beat Cycle Length parameter.
This tells your Voice which MIDI controller (also referred to as the MIDI CC) to emit for this microcontroller. When the Microcontroller is active, the Voice will emit values for this MIDI controller that change at various times, with behaviour that you define using the various parameters in this parameter group.
The Mode defines the shape of the waveform that the Voice uses to generate this Microcontroller's values.
The Mode may be one of the following values:
The microcontroller is off. This is the default value.
The microcontroller will drift between the Minimum and Minimum plus Range, changing at times specified by the Update and Update Range parameters, by an amount between the Change and Change plus Change Range parameters.
A triangular waveform, that starts at the minimum value, works up to the maximum value, and works back to the minimum value.
A triangular waveform, that starts at the maximum value, works down to the minimum value, and works back to the maximum value.
A sawtooth waveform, that starts at the minimum value, works up to the maximum value, and then starts again from the minimum value.
A sawtooth waveform, that starts at the maximum value, works down to the minimum value, and then starts again from the maximum value.
Defines the minimum value that may be emitted by the Microcontroller.
The microcontroller will emit a value between the Minimum and Minimum plus Range values.
So for example, if you define Minimum to be 20, and Range to be 100, the value that is emitted will be in the range 20 to 120 inclusive.
Defines the amount by which the microcontroller will change, every time it is allowed to change. Typically set to a value of 1. If this value is set to 0, the Microcontroller will change only if the Change Range is greater than or equal to 1.
Defines the upper limit to the amount by which the microcontroller will change, every time it is allowed to change. Each change is a value between Change and Change plus Change Range.
For example, if you define Change to be 1, and Change Range to be 3, the value that is emitted will vary by a value between 1 and (3+1)=4 each time.
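A single update of the Drift mode can be sketched as below. The function name is illustrative, and the 50/50 choice of direction is an assumption; the sketch just shows how Minimum/Range bound the value and Change/Change Range bound each step.

```python
import random

def drift_step(current, minimum, value_range, change, change_range):
    """One update of a Drift-mode Microcontroller: move up or down by
    an amount between Change and Change + Change Range, keeping the
    emitted value within [Minimum, Minimum + Range]."""
    amount = random.randint(change, change + change_range)
    if random.random() < 0.5:          # assumed: direction chosen 50/50
        amount = -amount
    return max(minimum, min(minimum + value_range, current + amount))
```

Calling this repeatedly (at times governed by the Update parameters) yields a wandering controller value, e.g. for a slowly drifting filter cutoff.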
Defines the minimum time in milliseconds between changes in the emitted Microcontroller value. The system might not be able to emit changes as quickly as you want, if you set a very small value! If you don't want changes to happen very often, then use a large value.
Ignored if Beat Cycle Length is non-zero.
Defines the upper limit in the time in milliseconds between changes in the emitted Microcontroller value. Use this parameter to apply some uncertainty in when the changes will occur.
For example, if you define Update to be 1000, and Update Range to be 500, the value that is emitted will change every 1000 to 1500 milliseconds (or in other words, every 1 to 1.5 seconds).
Ignored if Beat Cycle Length is non-zero.
You define the Unit of Measure by which the Update and Update Range parameters are interpreted. This may be one of the following values:
The Update and Update Range are interpreted as being in thousandths of a second (i.e. Milliseconds). So, an Update value of 1000 means one second.
The Update and Update Range are interpreted as being in seconds. So, an Update value of 10 means ten seconds.
This parameter is critical for generating effects which synchronise with the bar timing of your Voice. If you want to achieve an effect like a filter-sweep that synchronises to your bar boundary, then this is the parameter to use.
Here are some of the values you could use.
Note: in the IME a Beat is defined as being one crotchet; you get 4 beats in a bar of 4:4 music. So, a Duration value of 60 means one beat. A Duration value of 30 means a quaver. A Duration value of 20 means a triplet. A Duration value of 15 means a semi-quaver. A Duration value of 240 means 4 beats (which is a full bar if the Piece Meter is 4:4).
Use this parameter if you want to start the microcontroller from a start-point other than at the very start of its cycle.
The Voice Micro Note Delay parameters provide fine variation in the times of Note events generated by a Voice. This can be used to give a Voice more “human” feel.
The maximum amount of delay generated by micro note delay changes, that may be applied to note events. Zero means off (which is the default).
The amount of change in the micro delay that is applied by Wotja between note on/off events. The value drifts between zero (off) and the Delay Range, changing by plus or minus the Delay Change value each time.
Fixed amount of offset note delay to apply, used only when the Micro Note Delay controller is in use. The default value is zero.
The Voice Micro Pitch parameters provide fine variation through use of the MIDI Pitch Wheel controller.
Tip: This is not normally used on MIDI line 10, which is the drum/percussion line!
A value from 0 to 24, meaning how many semitones are controlled by the full available range of Micro Pitch parameters. The default value is 2, which represents two semitones.
Fixed amount of pitch-bend to apply on this MIDI line, used to tune/de-tune an instrument.
The default value is zero, which means to apply no offset pitch bend.
From -8192 to +8191; which covers a range of pitch bend defined by the Bend Sensitivity parameter.
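The relationship between Bend Sensitivity and the 14-bit offset range can be sketched with a small helper. The function is hypothetical (a convenience for working in cents, i.e. hundredths of a semitone); the -8192..+8191 range and the default sensitivity of 2 semitones are as stated above.

```python
def cents_to_pitch_bend(cents, bend_sensitivity=2):
    """Convert a detune in cents into a 14-bit pitch-bend offset.

    bend_sensitivity is the number of semitones covered by the full
    bend range (-8192 .. +8191); the default of 2 matches the text."""
    semitones = cents / 100.0
    value = round(semitones / bend_sensitivity * 8192)
    return max(-8192, min(8191, value))

# Detune up by a quarter-tone (50 cents) at the default sensitivity:
# cents_to_pitch_bend(50) -> 2048
```

Note that a detune beyond the sensitivity range simply clamps at the limits.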
The maximum amount of micro pitch change that can be applied. Zero means off (which is the default). The maximum value allowed is 8191. The value chosen is added to the pitch bend offset.
The amount of change in Micro Pitch that is applied by Wotja between “update” periods. The value drifts between zero (off) and the Pitch Range, changing by plus or minus the Pitch Change value each time.
The time in milliseconds between updates to the pitch controller. The actual value chosen is selected randomly each time, to be a value somewhere between Pitch Update and Pitch Update plus Update Range.
The upper limit of time between updates to the pitch controller. The actual value chosen is between Pitch Update and Pitch Update plus Update Range.
Normally, Voices compose and emit MIDI note events. The Voice Note to MIDI CC Mapping parameters allow you to tell a Voice to emit MIDI controller data instead of MIDI note events.
Why would you want to do this? Well, it lets you use a Voice as a very powerful generative MIDI event generator with a huge range of potential applications.
If you want this Voice to emit a MIDI CC instead of note on/off events, set this parameter to Yes.
If you have set CC for Note On? to Yes, then instead of emitting a note on event, the Voice will emit the specified MIDI CC, with a value equal to the composed pitch.
If you want this Voice to emit a MIDI CC proportionate to the Velocity of the composed note (in addition to any controller defined for Note On CC), then set this parameter to Yes.
If you have set CC for Velocity? to Yes, then the Voice will (in addition to the Note On CC value) emit the specified MIDI CC, with a value equal to the composed velocity.
If you want this Voice to emit a MIDI CC when a note off occurs, set this parameter to Yes. This applies only if CC for Note On? is set to Yes.
If you have set CC for Note Off? to Yes, then instead of emitting a note off event, the Voice will emit the specified MIDI CC, with a value equal to the composed pitch of the stopped note.
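The mapping rules above can be sketched as a single translation function. This is an illustrative sketch with hypothetical names; the event representation (a small dict) is an assumption for the example.

```python
def map_note_event(event, cc_for_note_on, note_on_cc,
                   cc_for_velocity=False, velocity_cc=None,
                   cc_for_note_off=False, note_off_cc=None):
    """Translate a composed note event into MIDI CC messages, per the
    Note to MIDI CC Mapping parameters.

    'event' is a dict with keys 'type' ('on'/'off'), 'pitch' and
    'velocity'. Returns a list of (cc_number, value) pairs to emit
    instead of the note event."""
    out = []
    if event["type"] == "on" and cc_for_note_on:
        out.append((note_on_cc, event["pitch"]))       # CC value = pitch
        if cc_for_velocity:
            out.append((velocity_cc, event["velocity"]))
    elif event["type"] == "off" and cc_for_note_on and cc_for_note_off:
        out.append((note_off_cc, event["pitch"]))      # CC value = pitch
    return out
```

For example, with CC for Note On? and CC for Velocity? both Yes, a composed C4 at velocity 90 becomes two CC messages rather than a note-on.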
Voice Envelopes are supported for a number of parameters. Envelopes work in the same way for all of these, so they are all grouped here.
Each envelope is a collection of up to 100 data points. A piece starts with the value at the left side of the envelope, and as it progresses, eventually ends up with the value from the far right of the envelope.
You can draw directly onto the envelope with your mouse.
Alternatively, you may use one of the various powerful envelope editing tools that we have made available to you.
To use the envelope editing tools:
- Right-click (win) or ctrl-click (mac) on the envelope tool
- Select the option you want. e.g. random, curve up etc.
- Select the range using the mouse...
- Then: either press space or enter, or select pop-up envelope tool to apply to the selected range.
- Select freehand mode to return to the normal click-to-paint mode.
This allows you to define how the Velocity level is changed automatically for your Voice throughout playback of the piece. The velocity defines relatively how loud each note is.
The actual Velocity value used at any point in the piece is calculated as being a value somewhere in the range from Velocity, to Velocity plus the value of the Velocity Range envelope.
The value in this envelope is added to the value in the Velocity Envelope.
The velocity for any composed note can change by a value between Velocity Change Envelope value (VCE) and the VCE plus the Velocity Change Range Envelope value.
The value in this envelope is added to the value in the Velocity Change Envelope.
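A sketch of how these four envelope values might combine for one note follows. Names are hypothetical, the envelope lookup itself is not shown, and the choice of drift direction (up or down at random) is an assumption not stated in the text.

```python
import random

def note_velocity(velocity, velocity_range, change, change_range,
                  previous=None):
    """Velocity for one composed note from the envelope values at the
    current point in the piece.

    Base value: between Velocity and Velocity + Velocity Range. If the
    Change envelopes are non-zero and there is a previous note, the
    velocity instead drifts from the previous note's by an amount
    between Change and Change + Change Range (direction assumed to be
    chosen at random). Clamped to the MIDI range 0..127."""
    if previous is None or (change == 0 and change_range == 0):
        v = velocity + random.randint(0, velocity_range)
    else:
        step = random.randint(change, change + change_range)
        if random.random() < 0.5:
            step = -step
        v = previous + step
    return max(0, min(127, v))
```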
This allows you to define an envelope that is used to emit a MIDI CC of your choice. The default value for this MIDI CC is 7, which is the Volume controller.
Use this to define the MIDI CC that you want to be emitted by this envelope. The default value is 7, which is the Volume controller.
Use this to turn your envelope on or off.
This provides fine variation in the values generated by the User Envelope 1. Any value generated by this micro controller is added to the Micro User Envelope 1 value, to give fine variation in any such envelope.
The maximum amount of micro change in the User Envelope 1 that can be applied. Zero means off (which is the default).
The amount of micro change in the User Envelope that is applied by Wotja between “update” periods. The value drifts between zero (off) and the Range, changing by plus or minus the Change value each time.
The time in milliseconds between updates to the Micro User Envelope value. The actual value chosen is selected randomly each time, to be a value somewhere between Update and Update plus Update Range.
The upper limit of time between updates to the Micro User Envelope value. The actual value chosen is between Update and Update plus Update Range.
This allows you to define an envelope that is used to emit a MIDI CC of your choice. The default value for this MIDI CC is 10, which is the Pan controller.
Use this to define the MIDI CC that you want to be emitted by this envelope. The default value is 10, which is the Pan controller.
Use this to turn your envelope on or off.
The Mix Tempo allows you to define the tempo for your mix.
The Mix Root parameter allows you to define the Root Pitch to use for the mix. For example, if you are using a Major Scale Rule, then set this value to be C for your Piece to play in the key of C Major.
Sets how long it takes for the file volume to ramp up once the file has started to play.
Sets how long the file will play for (Piece time) in the Playlist.
Sets a range on how long the file will play for (Piece time) in the Playlist.
Sets how long it takes for the file volume to ramp down before play moves to the next file in the Playlist.
The Piece Rules parameters allow you to define the default Rules used by the piece in the selected Cell. Each Cell can have different Piece Rules.
Set this to define the default Scale Rule to use when the piece plays. Individual Voices are allowed to override this setting if they so wish.
Set this to define the default Harmony Rule to use when the piece plays. Individual Voices are allowed to override this setting if they so wish.
Set this to define the default Next Note Rule to use when the piece plays. Individual Voices are allowed to override this setting if they so wish.
The Piece Time parameters allow you to define some of the key parameters that govern your composition at the Piece level.
The duration time units for which the piece should play. The default value is Seconds, but you can set to Bars if you so wish.
The minimum duration, in seconds / bars, for which to play this piece before restarting. The duration chosen is somewhere in the range of Piece Length to Piece Length plus Piece Length Range. If Length Units is Seconds, then the chosen duration is rounded up to the nearest bar boundary.
Defines the upper range of values for how long to play this piece before restarting.
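The duration selection for Seconds units can be sketched as below. Names are hypothetical; `bar_seconds` (the length of one bar in seconds, derived from the Mix Tempo and Meter) is an assumed input for the example.

```python
import math
import random

def piece_duration_seconds(length, length_range, bar_seconds):
    """Choose how long the piece plays before restarting: a value
    between Piece Length and Piece Length + Piece Length Range (in
    seconds), rounded up to the nearest bar boundary."""
    chosen = random.uniform(length, length + length_range)
    return math.ceil(chosen / bar_seconds) * bar_seconds

# With 2-second bars, a chosen duration of 63.5 s rounds up to 64 s.
```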
Defines the Meter to be used by this Piece, such as 4:4 or 3:4 or 6:8. A Voice will generally use this Meter, but the Meter value to be used for each Voice may actually override this setting. This approach allows Voices to be configured to work with a completely different Meter, which can be used for interesting polyphonic effects.
The Piece Scripting parameter allows your piece (Content Cell) to use Intermorphic Wotja Script. For full details please refer to the Intermorphic Wotja Script Guide.
The File Object allows you to define some book-keeping parameters for your piece (and each cell in a mix can use a different piece). None of these parameters affect the way that your piece sounds.
The title of your Piece.
The author of your Piece.
Any notes you want to include for your Piece.