Playback in Dorico: past, present, and future


Editor’s note: This post was written when Dorico 1.2.10 was the most current version available. Since then, Dorico Pro 2 was released on May 30, 2018 with more playback improvements such as support for automation and video. Support for NotePerformer 3, also released on May 30, 2018, was added as well.


One could easily have the impression that requirements and expectations regarding modern notation software and playback could not be more divided.

I remember a presentation of VSL (Vienna Symphonic Library) where Paul Steinbauer, speaking about different requirements, said that there are almost as many individual (VSL) setups as there are users.

There are engravers who prefer simple playback, as for them it is usually just about catching mistakes. Other users want adequate sound rendering without thinking too much about settings and post-editing. Yet another group expects playback and editing functionality comparable to MIDI editing within a DAW.

Dorico’s development team puts an enormous effort into meeting all those different playback requirements. This comes at a price, though: what’s just a staccato dot in notation can become a mammoth task in terms of playback. Such tasks take time.

When I started this post I quickly realized that I needed to give a lot of background information in order to give a good overview of the developments in the world of sound libraries and to also give a prognosis about further developments. At the same time this information gives an impression how playback was influenced and even limited by notation in the past.

Just as Dorico’s notation features (meeting the ideals of the Old Masters as well as the requirements of modern composers and media alike) were designed and developed from scratch, it is time to re-think the requirements of sample providers and general development perspectives.

But more about that later…

Take it from the top

To better grasp the enormous potential for playback in future versions of Dorico (for which the foundation is laid already and is being continually expanded), let’s have a short look back.

When I started with synthesizers and other MIDI devices there were no computers. The General MIDI (GM) format had just been established as a standard, and synthesizers and rack modules were explicitly intended for live performances, even in the studio.

I had a whole arsenal of Kawai Q-80 hardware sequencers when the “ST” entered the market, and being able to claim ownership of C-Lab’s Notator was somewhat of a moral imperative. But there was also another little bit of software, coming out of Norderstedt (Germany), which was highly praised by Keyboards magazine and was considered obligatory among insiders: Cubit, which I purchased in its 1.0.3 version.

It is interesting that two different approaches to MIDI editing were already established in both of these programs, each of which can still be found in modern software.

Roughly speaking those are “graphics to MIDI” or “MIDI to graphics”. I’m aware, of course, that much more far-reaching mechanisms are involved and that database structures play a crucial role in real-time data processing.

With the Atari as the core of the setup, all synthesizers (Roland D-50, D-10, Yamaha DX 7, etc.) were controlled from a Roland A-880 MIDI patch bay. The Q-80 sequencers served as backup, but also as trill generators (sixteenths at a tempo of 56 bpm equal 14.9, or just about 15 notes per second). I am still laughing when I remember these days.

Looking at the development of sound generators from that point in time on, two general trends can be noticed: to push beyond the frontier of the possible (synthesizers) on the one hand, with incredible synthetic sounds; and to get as close as possible to realistic sound rendering (PCM, AI) on the other.

The latter is also apparent in my setup. Starting with the Roland D-50 and the Yamaha DX-7, soon Roland’s U series was added, Akai, E-mu and Roland samplers, Korg M1 to 01/W, the first physical modelling synth, Yamaha’s VL-1, and of course many more.

This is approximately how the world of sound generators looked when Finale and later Sibelius stepped into the light of the world. As the core focus of these programs was clearly music notation, internal playback only worked if one had a computer with a built-in Sound Blaster card. That a soundcard came with a GM module was not something that could be taken for granted at the time. External devices could be controlled, of course, provided that a MIDI output board was installed.

Obviously such results were not comparable with those created through sequencers (hardware and software) or a live performance. Then again, this proved to be the incentive to continually improve playback editing capabilities.

Since, at that time, articulations and playing techniques were not possible except via multimode, they had to be simulated with functions and macros, some of which are in use to this day.

It was only a question of time, though, until the first samplers for computers captured the market, and increasing processing power made it possible to achieve quite realistic results. (And that development hasn’t yet reached its end.) By the turn of the millennium at the latest, notation packages also came with integrated sample players, be it Finale’s forerunner of the Aria player or Sibelius’s EastWest QLSO Silver (or Gold), back then along with the Kontakt engine by Native Instruments.

Notation programs grew along with the demands, while sequencer programs grew along with the possibilities – a tremendous difference that carries a good amount of disadvantage.

Any new development by the producers of sample libraries had to be implemented into already existing code, during a time with scant standards and little outlook of where the journey might actually be heading.

While sequencing programs, by their nature, can react more flexibly, with notation programs there are more and more barriers appearing nowadays, originating from the initial development work.

A well-intended attempt to establish a certain standard with Sibelius’s SoundWorld ultimately turned out to be more of an obstacle: it was too clumsy and performance-heavy, and offered users only limited means of employing sophisticated playing techniques (in particular, temporary key switches).

So while notation features more and more approach the ideal standard that should have applied from day one, none of the prevalent programs’ playback functionality is an appropriate fit for today’s market.

It is possible, of course, to produce excellent audio files – but it comes at the cost of usability and is overall very time-consuming.

Along came Polly… oops, Dorico

While the currently established programs, coming from music engraving, had to struggle, in their ongoing development, with restrictions and obstacles regarding playback, the Dorico team, having 20 years of experience with engraving as well as playback, could avoid any limitations from the start.

The real tightrope for the developer team to walk is to accommodate every conceivable sound library. This requires a maximum amount of flexibility from Dorico.

Embertone’s Chapman Trumpet, for example, comes merely with legato, poly-sustain and staccato. A trill has to be generated by an automation.

On the other side of the spectrum there is Vienna Symphonic Library’s B flat Trumpet, equipped with an enormous number of different articulations. For many of those articulations there are even variations available, such as staccato and fast staccato, or performance legato and fast performance legato.

Beyond that there are fast repetitions for double-tongue and triple-tongue. It is even possible to build custom articulations or whole sequences (e.g. for ornaments) from the available samples, to be triggered by the press of a key.

But even here automation is still necessary to achieve a realistic sound rendering appropriate for the respective kind of music. Just the many ways to play a trill alone give a vague idea of the complexity involved at times in the creation of a good mockup.

The core conclusion: there is no “one” setting for all instruments – neither for articulations nor for controller data enabling expressive performance.

An orchestral crescendo is not a linear increase in volume. Usually, strings, woodwinds and brass follow different curve progressions. For example: while a woodwind crescendo proceeds linearly, the brass crescendo follows a logarithmic curve.
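To make the idea concrete, here is a toy sketch of generating a controller ramp with a linear versus a logarithmic shape. This is purely illustrative, not how Dorico computes its curves; the value range of 40 to 110 is an arbitrary choice for the example.

```python
import math

def crescendo_curve(steps, shape="linear", lo=40, hi=110):
    """Return a list of controller values (0-127) ramping from lo to hi.

    shape="linear" rises evenly; shape="log" rises quickly at first and
    flattens out, roughly how a brass crescendo can bloom early.
    """
    values = []
    for i in range(steps):
        t = i / (steps - 1)                       # normalized position 0..1
        if shape == "log":
            t = math.log1p(9 * t) / math.log(10)  # logarithmic, still 0..1
        values.append(round(lo + t * (hi - lo)))
    return values
```

Sending such a ramp on, say, CC-11 at regular tick intervals would give the two instrument families their different crescendo characters.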

Now let’s look at Dorico in detail. Obviously, as a consequence of the complexity described above, a good number of features are still a bit down the road.

As someone who has, for years, been creating audio tracks in Cubase on a daily basis, I will take the liberty of occasionally adding my personal reasoning, thoughts and expectations. I am certain that many readers will have expectations and requirements of their own as well.

The setup

The first setting to check is found under Edit > Device Setup.

Here the desired ASIO driver can be set, along with the sample rate, provided that the driver of the interface offers several options.

A separate button opens the control window of the interface, allowing for further settings (e.g. output channels).

Play mode

Changing to Play mode, two new additions immediately catch the eye.

To the right, below the VST instruments, there is a new section called MIDI Instruments.

All kinds of MIDI ports can be activated here, which are then available in the drop-down menu of the instrument tracks. The soundcard’s GS wavetable is also accessible here for those who only need a simple controller device for notation work.

I usually use a MIDI port here to control my Roland FP-3 E-Piano.

To the left, above the instrument tracks, there is a separate track for chords. Here, any chord symbols from the score can be assigned to a Player and the corresponding MIDI channel. This makes it possible to add playback of the respective chords on top of the actual sound rendering. The loudspeaker button can be used to deactivate the chord track during playback.

The chord track is certainly a reasonable idea, as far as control of a score is concerned.

Personally speaking, I find the chord rendering too jumpy when chord symbols are entered with the computer keyboard. Sure, this is clearer for review, but I wouldn’t mind an additional option to have chords played in a more homogeneous way, with a certain logic for voicings.

When chords are added using a MIDI keyboard the chord track follows the voicing played on the keyboard, in case one would like to use the chord track for specific automated sample patches (Action Strings, Session Guitarist, Session Horns and many more).

All the way to the left, the toolbox has been amended to include some new items.

Played Duration

This view shows, in the form of a bar, note duration as it is played during playback. A thin black line shows the actually notated duration.

Notated Duration

This view shows, in the form of a bar/beam, the actual notated duration, without graphical consideration of played duration.

Object Selection

This tool enables a cursor that allows you to select and manipulate events.

With Notated Duration active, notes can be moved forward and backward, and their pitch can be changed.
It is also possible to change the notated duration by clicking the event bar’s right-side edge and dragging left or right.

With Played Duration active, note start positions can also be moved without affecting the music’s notation.

Generally speaking, edits applied with Notated Duration active will affect notation, while edits made with Played Duration active will only affect playback.


Draw

This tool is for adding notes. Note events can be added retroactively in the piano roll editor, and will appear in the music notation, regardless of the currently set view.

Here I would like a way to add notes which will not appear in the music notation, for the following reason: most sample libraries require both a start note and an end note for producing glissandi, portamenti, rips, and so on. In many situations, though, these items should not show up as actual notational content. Another use case would be ornaments of all kinds, which are not notated in full in the sheet music. Even accelerating trills and ornaments would be possible that way.

Draw Percussion

This tool is for adding notes within instrument tracks assigned to unpitched percussion staves. I will cover this in some further detail below.


Erase

This tool is for deleting unnecessary notes.

All changes made in Play mode can be reverted in whole or in part, by selecting the relevant notes and clicking Play > Reset Playback Overrides.

It should be mentioned that a note’s duration and start point, for playback, can already be changed numerically in Write mode.

Something unfortunately not yet integrated is an ability to draw control command curves. We know that this will arrive eventually, thanks to a video that had been published by accident showing an early state of controller lanes.

However, it is easy to imagine that an implementation of that magnitude won’t happen overnight, as all of a score’s dynamic indications and playing techniques will have to interact with those of the playback track, while still remaining highly flexible.

Playback options

In Play mode, under Play > Playback Options we have the possibility to change various settings for playback. As can be expected from Dorico by now, the interface is very tidy and clear; there are several graphics that help understand the settings.


The first entry to be found is Dynamics. Here, Dorico’s behaviour regarding dynamic indications can be set.

The intensity of a dynamics curve can be changed by a numerical value. This affects dynamic levels in relation to each other. With a value of “1” the dynamics curve follows a linear path. Individual dynamic indications relate to each other in linear proportion.

If the mid-range of dynamics (p to f) is to be amplified, and therefore brought more clearly into the foreground, this is done by increasing the numerical value (>1.0). To decrease mid-range dynamics, the value is below 1 (<1.0); this particularly increases expressive playback, since smaller nuances can then come out more clearly.
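As an illustration of the principle, here is a small sketch of such a mapping. The exact formula Dorico uses is not documented in this post, so the shape below – a power curve whose exponent lifts or lowers the mid-range, as described above – is an assumption for illustration only.

```python
def dynamic_to_cc(level, power=1.0):
    """Map a normalized dynamic level (0.0 = softest, 1.0 = loudest)
    to a MIDI controller value 0-127.

    power = 1.0 gives a straight line; power > 1.0 lifts the mid-range
    (p to f stand out more); power < 1.0 lowers the mid-range so that
    smaller nuances come out more clearly.
    """
    return round((level ** (1.0 / power)) * 127)
```

With power = 2.0, a mid-level dynamic that would sit at CC value 64 on the linear curve rises to around 90, which is exactly the “amplified mid-range” effect the setting describes.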

Note Dynamics allows the user to define dynamics behavior of notes in regard to beat position as well as accents and marcati.

There is a setting to manipulate dynamics for all notes on the first beat of a bar and an additional option for setting all other beats. This enables accentuated playback if needed.

Further below, there are settings for accent and marcato, allowing one to emphasize them by their dynamics in relation to “normal” notes. Personally, in addition to the given settings, I would love to see pre-defined curves, since a natural marcato follows a curve that returns to the given volume (of a “natural” note).

Last, there is the Humanize feature, which adds natural, random fluctuations to a dynamic by a percentage value, to approximate – dynamics-wise – a more natural way of playing. A value of 100% correlates to a potential deviation from the actual value by no more than 40%.

Overall I find these settings and options very well-designed, as they especially accommodate those users who do not want to be hassled too much by settings in the area of playback.

I would like the settings even better if they were not global, but specific to particular tracks, since every orchestral instrument family will usually approach dynamics a bit differently. Additionally, multiple CC values should be supported as settings, because – in my experience – not just the Expression controller (CC-11), but also the sound of a particular dynamic level is often edited (the CC varies by library producer; VSL uses CC-12). Then again, playback functionality is only in its beginnings, and I wouldn’t be surprised if such details were already on the backlog. And with the future addition of controller tracks this will be possible by just drawing curves – but then the global setting should be overridden or disabled.

Pedal lines

The way pedal lines can be controlled is remarkable. Durations for Depression Length, Retake Length and Release Length can all be set independently.

In the past, at the beginning of a pedal line the pedal controller – usually CC-64 – would merely be set to 127 (“on”) and then back to 0 (“off”) again at the end of the line. That could mean, at times, that one had to be extra careful about where a line was starting or ending, and often this would not really correspond to the intended score notation.
In Dorico one can choose whether the pedal should be active before or after the playing of a note, and to have re-takes or a full release, respectively.

Pedaling levels indicated in the score (¼, ½, ¾) will also affect the CC values going out. Sound libraries or hardware e-pianos without support for exact reproduction of pedal nuances will usually apply a full sustain for any value above ½ (64-127) and none for any value below ½ (0-63).
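The behavior described above can be sketched in a few lines. The switch-like fallback for instruments without half-pedal support is taken directly from the text; the helper names are of course made up for the example.

```python
def pedal_cc(level):
    """Convert a notated pedal level (0.0 = up, 0.25, 0.5, 0.75,
    1.0 = fully depressed) to a CC-64 value in the range 0-127."""
    return round(level * 127)

def effective_sustain(cc_value, half_pedal_support=False):
    """Instruments without half-pedal support treat CC-64 as a switch:
    values 64-127 mean full sustain, values 0-63 mean none at all."""
    if half_pedal_support:
        return cc_value / 127          # continuous, nuanced response
    return 1.0 if cc_value >= 64 else 0.0
```

So a notated ¼ pedal reaches a half-pedal-capable library as a genuine quarter depression, while a simpler instrument simply ignores it.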


Within the Timing section one can set the note duration for certain articulations and also the start points for note playback.

The first entry allows setting the duration of the gap that is to be inserted between two subsequent Flows during playback. The default value of 5 seconds is reasonable. Individual settings for specific Flows are not possible yet.

Further below, deviations from nominal duration can be set globally for notes without articulation, with staccato, staccatissimo, tenuto, marcato and notes subject to legato. For staccato and staccatissimo, in particular, it is advisable to test and, if needed, adjust the default values.

Next follows another Humanize setting. This time, however, it is about random variances of a note’s start time. The maximum possible deviation of 100% corresponds to +/- 10 ticks. I found this maximum value a bit low at first, since in Cubase I am used to working with up to 25 ticks. But it is sufficient to avoid the dreaded “MIDI one”, and polyphonic passages won’t exhibit any “pumping”.
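In code, timing humanization amounts to nothing more than a small random offset per note. The sketch below follows the numbers given above (100% = at most +/- 10 ticks); the function itself is a hypothetical helper, not Dorico’s implementation.

```python
import random

def humanize_start(start_tick, amount=1.0, max_ticks=10, rng=random):
    """Offset a note's start position by a random number of ticks.

    amount = 1.0 corresponds to the 100% setting, i.e. a deviation of
    at most +/- max_ticks (10 in Dorico, per the description above).
    """
    spread = round(amount * max_ticks)
    return start_tick + rng.randint(-spread, spread)
```

Passing a seeded `random.Random` as `rng` makes the "humanized" result reproducible, which is handy when comparing renders.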

The next sub-section is notable, I think, because it allows meaningful adjustments to appoggiaturas.

For (long) appoggiaturas it can be decided whether they should be played on the beat (standard) or before. Beyond that, it is even possible to specify a threshold duration for playing an appoggiatura’s notated length in full. Any durations below are played as (short) acciaccaturas. For now, an appoggiatura will always be played with half the duration of the subsequent note. It would be desirable here to also have the notated value considered.
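The half-duration rule described above is easy to state precisely. The following hypothetical helper computes the resulting playback positions for both the on-beat and the before-the-beat option; tick values are arbitrary.

```python
def appoggiatura_playback(grace_on_beat, main_start, main_duration):
    """Return (grace_start, grace_dur, main_start, main_dur) in ticks.

    The appoggiatura always takes half the duration of the note it
    ornaments, per the current behavior described above.
    """
    grace_dur = main_duration // 2
    if grace_on_beat:
        # Grace note occupies the beat; main note is pushed back.
        return (main_start, grace_dur,
                main_start + grace_dur, main_duration - grace_dur)
    # Before the beat: the grace note steals time from the span
    # preceding the main note, which keeps its notated position.
    return (main_start - grace_dur, grace_dur, main_start, main_duration)
```

Honoring the notated grace-note value, as wished for above, would simply mean replacing the fixed `main_duration // 2` with that value.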

Unfortunately, for acciaccaturas there is no way to have them play on the beat, as is done traditionally. Given that this issue has been contentious since the beginning of the 20th century, it may be noted that it really is at the discretion of a composer (or conductor/artist) to decide whether an acciaccatura is to be played before or on the beat. Until an implementation comes along, this has to be corrected manually, if needed.

Next, unmeasured tremolos can be fine-tuned. Alongside the minimum number of strokes that must be present to trigger playback of unmeasured tremolos, one can also set the duration of the single notes in relation to a quarter note at bpm=120. It is worthwhile to test different values with different sample libraries here.

The available settings for arpeggios are found last and are very impressive.

Not only is it possible to set the playback length of an arpeggio, it can also be decided whether it should start before or on the beat.

The minimum and maximum durations are based on actual note durations and are specified as a fraction of those durations.

Overall, Playback Options are well-designed and presented in a clear fashion, especially through the use of informative graphics. Particularly for older generations of sound libraries – with their limitations of articulations and expressive playback – it is possible to quickly find satisfactory settings here, without much need for tweaking.

This especially accommodates those users who want to obsess about playback as little as possible. It also makes sense that automation has not been integrated first, since some important things are still lacking in the area of music engraving.
However, looking at how much playback has prospered already and which clever solutions have been conjured up by the Dorico team, obvious progress towards a promising direction is unmistakable.

Expression maps

Until one year ago, expression maps were largely unknown outside the world of Cubase. But what are they, really?

Basically, it is nothing other than the familiar Sound ID system, but more pragmatic and flexible. In contrast to, for example, Sibelius’s Soundset Editor, articulations and playing techniques can be assigned and directly tested in a user-friendly way.

The logical framework of how such an assignment works in the background is actually rather simple. A note is associated with an attribute. The note can be an object to which a class is added, or the note is stored in a table holding the attribute. It can even be something much more clever, beyond what my programming imagination can conjure.

The important thing is that such combinations of note and attribute are brought into a form readable by users. Instead of a snippet of code they now find musically meaningful terms like staccato, tenuto, trill, etc.

In the Expression Maps editor, these terms, in turn, can now be associated with certain commands, for example: keyswitches, program changes, or control change commands. Beyond that, one can also directly assign behaviors to notes, be it a transposition, a change of duration, or something else.
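A minimal sketch of this lookup might look as follows. All keyswitch notes, CC numbers and values below are invented for illustration; real maps depend entirely on the sample library in question.

```python
# Hypothetical expression map: each playing technique maps to the list
# of MIDI commands that select it in the (imaginary) sound library.
EXPRESSION_MAP = {
    "natural":  [("note_on", 24)],                  # keyswitch C0
    "staccato": [("note_on", 25)],                  # keyswitch C#0
    "tenuto":   [("note_on", 26), ("cc", 72, 40)],  # plus shorter release
    "trill":    [("program_change", 12)],
}

def resolve_technique(technique):
    """Return the MIDI commands to send before a note is played,
    falling back to 'natural' for any unmapped technique."""
    return EXPRESSION_MAP.get(technique, EXPRESSION_MAP["natural"])
```

The point of the editor is precisely that the user deals with the dictionary keys ("staccato", "trill") and never with the raw command tuples.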

Let‘s take a closer look at the Expression Maps editor.

The first thing that caught my eye may very well be irrelevant for many, but is actually something that I truly miss in Cubase: the description field. By now I am using so many different expression maps in Cubase for the same sounds (depending on the project) that it is easy to quickly get lost.

But onto something more substantial: the editor appears very clear and is, by and large, already self-explanatory.

In the left column, already predefined expression maps are found for the provided sounds of the HSO player. Furthermore, there is the expression map “Modulation Wheel Dynamics”, rerouting dynamics to CC-1 (the modulation wheel). The expression map “C11 Expression” works by the same principle, with the difference of rerouting dynamics to CC-11 (Expression).

Another pre-defined expression map is “Transpose Up 1 Octave”, which is especially useful with GM sounds; just consider a piccolo flute, which is most likely not octave-transposed within the GM world, while most libraries have sampled a piccolo already octave-transposed. And timpani rolls quite often can be found 3 or 4 octaves higher in the same patch (e.g. C2 = note c2, single hit; C5 = note c2, roll).

A default map can serve as starting point for a new expression map. Likewise, all other existing maps can be duplicated and edited. In order to edit a duplicate map, it has to be unlocked (see the lock icon at the top right).

In the lower area, several buttons can be found with which saved Dorico expression maps can be exported or imported. The possibility to import Cubase expression maps is especially clever.

This way, maps created in Cubase can be imported and adjusted without problems. For some sound libraries, expression maps already exist, which are provided for download on the Steinberg web page. It should be noted that Cubase expression maps generally use the articulation IDs of the sample manufacturers. Therefore, on occasion it is necessary to manually assign the ID “sustain” to the playing technique “natural”.

In the screenshot, I have chosen the HSO Flute Solo. The central area now shows how an expression map, with its various entries, is structured.

At the top, core data is provided:

  • name of the expression map
  • internal ID (requires a unique string; when creating from scratch or when duplicating a map, “User” is added automatically)
  • creator and version (self-explanatory)
  • plug-ins, a reminder of which sound library is controlled by the expression map
  • description, which allows detailed notes about the map; in my case it serves more as a memo box

In the middle field we find the actual expression map entries: the assignment of possible articulations and playing techniques.

The left-side list (under Techniques) can be extended arbitrarily, either by duplicating and editing an existing entry, or by adding a new one. The tools for doing this are found directly below the list.

To the right of the central area there are three sections for making some basic settings.

The first section gives some settings for the played note itself, like transposition, duration (%) and velocity (%), dedicated areas for pitch bend, and lower and upper margins for velocity. More on that in a moment.

The Volume Dynamic section serves for setting how a given sound library is to be controlled in relation to played dynamics. The really clever thing is the possibility to have different assignments for each entry.

In the third section, a so-called exception group can be assigned. This, too, is still under development, not functional at present, and might change in a future version. Some combinations of articulations are given on the notation side of the program, so that at least these articulation groups can be assigned to one expression map entry. As an example, portato (staccato + tenuto) has its own sign. But then the question comes to mind how another way of notating a “portato” (slur + staccato = legato + staccato) will be handled in a future version.

In the lower field, MIDI commands are assigned to the articulations selected above, such as keyswitches, program changes or control changes. These can be combined in any way. For example, in addition to a keyswitch, a filter or a sample’s release time can be controlled.

The tools below the edit field should be self-explanatory.

One special feature is the possibility to send a command shortly before as well as shortly after an event. This can make sense, as each sound library treats various settings differently.

Once all settings of an expression map are done, they can be assigned to the individual tracks of a VST instrument.

For doing this, there is a small gear icon to the left of each VST instrument in Play mode. Clicking it opens a table for setting the number of MIDI channels to use. Below, an expression map can be chosen and assigned for each channel.

The advantage here is that maps can be set individually, in contrast to the soundset architecture of other programs, where a single instance with 16 channels can most likely be used with one soundset only, irrespective of how many different instruments with different articulation assignments are really needed in an instance.

When starting note entry with dedicated articulations, in Play mode the corresponding expression map entries will now show automatically.

Personally, I think it would be worth allowing changes that do not affect the music notation. Additionally, there should be a possibility to create custom expressions that do not map to pre-defined articulations or playing techniques.

Here’s why: With a quickly played staccato, individual notes sound different, somewhat crisp. Depending on musical style, staccato also means something different. At some times it means “short”, at others it means to play notes detached from each other, but not explicitly short. What I would do for such a case is to create several expression map entries; one would be made the default, while the others could be exchanged as needed in the lower field of the piano roll, overwriting the default expression with a custom one to get a better result if needed.

A bit more on expression maps

Some articulations and playing techniques are not functional yet. There is a reason for this. Integration has to be well thought out. It is about maximum flexibility in order to be prepared for future developments from the side of manufacturers of sound libraries, while still taking into account the currently available libraries. Each of the manufacturers is doing their own thing.

Let’s take a trill, for example. There are sound libraries that do not have dedicated trill samples. Here an automation is needed. Other manufacturers provide half tone and whole tone trills. However, individual libraries differ in the way they are controlled. While for many libraries a trill is controlled via keyswitch, Orchestral Tools’ Berlin Woodwinds requires a keyswitch and then a second note to be played immediately after the main note.

But even sampled trills have a disadvantage. In practice, there are several ways of playing a trill, depending on musical style – beginning with the upper note, beginning with the main note, beginning slowly and then accelerating, and so on.

For use cases like these, some manufacturers offer so-called “fast repetition legato patches”. Here, a keyswitch or program change for controlling the patch is needed, as well as an automation to create the trill, as known from other programs.

Many factors have to be considered, and any snap decision would doubtlessly prove fatal for future enhancements.

Percussion maps

Dorico 1.2 introduced percussion maps as a way of assigning appropriate sounds to a score’s various percussion instruments.

Right up front it should be said that playback of percussion instruments is usable overall, but that certain features are currently still in development and not yet fully functional. This is due to the enormous complexity and inconsistent standards of notation, and because of the different approaches to percussion sounds by sound library producers.

What, exactly, is needed for notation and playback of percussion instruments?

From the notation we need a note’s staff position, its notehead type, its stem direction, and any assigned articulations. For playback we need the output note: that is, the MIDI note triggering the desired sound according to the sound library. For this it should ideally also be possible to trigger another MIDI channel, since an orchestral percussion part can be made up from different sound libraries, for example the common set of bass drum, snare drum, and suspended cymbal.

This note, of course, also has to be entered. This can be done via mouse, computer keyboard or MIDI keyboard, with the latter two requiring an input note assignment.
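Put as data, a percussion map is essentially a lookup from the notated appearance to an output channel and note. The sketch below is hypothetical; the channel and note numbers follow no particular library, and the keys are simplified to notehead type plus playing technique.

```python
# Hypothetical percussion map for a snare drum: the notated appearance
# selects an output MIDI channel and note. Routing the roll to another
# channel models the case where it lives in a different sound library.
PERCUSSION_MAP = {
    ("normal", "natural"): (10, 38),   # ordinary hit
    ("cross",  "rim"):     (10, 37),   # rim click, cross notehead
    ("normal", "roll"):    (11, 38),   # roll patch on another channel
}

def percussion_output(notehead, technique):
    """Return (midi_channel, midi_note) for a notated percussion event."""
    try:
        return PERCUSSION_MAP[(notehead, technique)]
    except KeyError:
        raise ValueError(f"no mapping for {notehead!r} + {technique!r}")
```

A full map would also key on staff position, stem direction and articulations, exactly the notation-side information listed above.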

About percussion maps in Dorico

In Dorico, single percussion instruments can be assigned to a player, with Dorico automatically creating instrument changes between those instruments as needed, exactly like with pitched instruments. Alternatively, a player can have a Percussion Kit assigned, which allows the user to control several instruments at once.

Any single percussion instruments held by a player can be combined into a Kit at any later time in Setup mode, by right-clicking to bring up the context menu and then choosing Combine Instruments into Kit.

In contrast to pitched instruments, the assignment of playing techniques to articulations in the notation is not “hard-wired”, since percussion instruments are handled too differently, depending on musical style and region. There also is no real global notation standard that is comprehensive and unified. For example, notating a buzz roll on snare drum by putting a Z on the note stem is something that usually won’t be encountered in German-speaking regions of the world.

Playing technique assignment is already done in Setup mode. If we have added a player with a percussion instrument that is not a Kit (the instrument name appears in blue), on clicking the small arrow next to the instrument name we find the entry Edit Percussion Playing Techniques…, while Edit Percussion Kit… is greyed out.

When a Kit has been chosen, however, it is Edit Percussion Playing Techniques… that is greyed out. In this case we reach the percussion playing technique editor by first opening the percussion kit editor (by clicking Edit Percussion Kit…).

There, an instrument first has to be chosen, for which playing techniques can then be edited in the percussion playing technique editor, accessed by clicking the middle button Edit Percussion Playing Techniques… at the bottom.

Let’s look at the assignments more closely. In the previously opened Edit Percussion Kit… dialog, I have selected the Snare Drum and, by clicking the button Edit Percussion Playing Techniques… at the bottom, opened the Percussion Instrument Playing Techniques dialog.

Here I have already set up some common playing techniques. “Natural” stands for a simple hit on the snare drum. “Rim” I have set up as a rim click. “Roll” is for rendering of a drumroll sound and has been assigned to the “Natural” playing technique.

The combination of a notehead type and a defined staff position constitutes a “unique” constellation, consisting of a default playing technique and an arbitrary number of additional playing directions.

If a notehead type is used more than once for a particular staff position, the warning “The appearance for this playing technique is not unique” appears.

Identical notehead types should be used with deliberation; still, during my tests, the warning had no effect on playback unless the settings for articulations and tremolos also matched exactly.
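Based on those tests, the uniqueness check appears to take articulations and tremolo strokes into account as well, not just notehead and staff position. A hypothetical Python model of that behavior (the field names and structure are mine for illustration, not Dorico’s internals):

```python
# Hypothetical model of the observed uniqueness check: an entry only truly
# clashes when notehead, staff position, articulations, AND tremolo strokes
# all match. This is an illustration, not Dorico's actual implementation.

def find_duplicates(entries):
    """Return entries whose full appearance is not unique."""
    seen = {}
    duplicates = []
    for entry in entries:
        key = (
            entry["notehead"],
            entry["position"],
            frozenset(entry.get("articulations", ())),
            entry.get("tremolo_strokes", 0),
        )
        if key in seen:
            duplicates.append(entry)
        else:
            seen[key] = entry
    return duplicates

techniques = [
    {"name": "Natural", "notehead": "default", "position": 0},
    # Same notehead and position, but three tremolo strokes: Dorico warns,
    # yet playback still distinguished the two in my tests.
    {"name": "Roll", "notehead": "default", "position": 0, "tremolo_strokes": 3},
]
assert find_duplicates(techniques) == []
```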

To the left below the upper viewing area there are two buttons for adding further techniques and for editing existing ones. The trash can — for deletion — is positioned to the far right.

Below is a drop-down menu for assigning a notehead type to the playing technique selected above.

To the right of the drop-down menu users can choose – for the playing technique selected above – a position for the notehead relative to the staff line, which is relevant if one has opted for the Grid or Single-line Instruments representation. This setting does not affect the 5-line Staff representation in any way.

Once a new playing technique has been added or an existing one has been selected, further edits are possible in the lower portion of the editor, related to articulations and tremolos.

With the new playing technique the lower left column is still empty. There, the first steps should be to set up a “Default” articulation, acting as a fallback in case the playing technique is not supposed to be notated by way of an articulation.

For this, click the [+] at the lower left edge of the column. This creates a “Default” entry automatically.
In contrast to Expression Maps, where the possible combinations of notes and articulations/playing techniques are hardwired, different combinations of articulations and tremolo strokes can be chosen freely here. Articulations are divided into three groups: any articulation from one group can be combined with one from another group, while articulations belonging to the same group are mutually exclusive (for example accent and marcato).
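The grouping rule can be sketched in a few lines of Python. Only the accent/marcato pairing is confirmed above; the exact group memberships in this sketch are assumptions for illustration:

```python
# Sketch of the "one articulation per group" rule. Only the accent/marcato
# grouping is confirmed by the text; the other memberships are assumed.
ARTICULATION_GROUPS = [
    {"accent", "marcato"},                    # force-type articulations
    {"staccato", "staccatissimo", "tenuto"},  # duration-type articulations
    {"stress", "unstress"},                   # stress-type articulations
]

def is_valid_combination(articulations):
    """A combination is valid if no two articulations share a group."""
    for group in ARTICULATION_GROUPS:
        if len(group & set(articulations)) > 1:
            return False
    return True

assert is_valid_combination({"accent", "staccato"})     # different groups: OK
assert not is_valid_combination({"accent", "marcato"})  # same group: rejected
```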

Below the choices for tremolo strokes we find Playing Technique. This is where we assign a playing technique which can then be used within a percussion map to trigger the required sound, just as we’re used to from expression maps. What is hardwired in an expression map is assigned manually here – and therefore more flexibly.

An important setting is found to the right of the assignment field for Playing Technique. Here a choice is made as to whether the technique set up for particular articulations and tremolos should be added to “Default” or replace it entirely. The deciding question is whether or not the sample library offers a dedicated sample for the selected playing technique.

Here the settings shown in the picture further above follow once more as a series of images, with explanations if needed.

Here I have set up the “Roll” playing technique for three tremolo strokes. It is supposed to replace the default mapping since my sample library (Vir2 Elite Drums) contains a drumroll sample of its own. For this reason Replace has been chosen.

The articulation “Accent” is added to the “Default” technique. Thus Dorico automatically plays an accent according to the settings in Playback Options.

A drum roll (three tremolo strokes) with accent, however, is not supposed to be added to “Default” but to replace it. It is important here to once again choose “Roll” for Playing Technique and to have the accent performed by way of automation.

Articulations and tremolo strokes are added within the notation itself, just as with any other instruments. According to our example, writing a note with a default notehead type onto the snare drum line results in “Single Hit” rendering unless a three-stroke tremolo is added.

A very important note: during note entry (covered in detail further below), playing techniques set up with the method described above are not considered when toggling. Only techniques set up in the upper area (“Natural” and “Rim”, in our case) can, along with the required notehead types, be toggled through via MIDI keyboard or the shortcut Alt+Shift+Up/Down.

If we still want direct access by toggling to, let’s say, a drumroll playing technique during note entry, the drumroll should be set up – in spite of the warning message – with an identical notehead type and three tremolo strokes. The procedure is the same as described above; however, the drumroll playing technique is not assigned to the default note (i.e., without any articulations or tremolo strokes). Instead, a new entry is created in the uppermost area, with “Roll” chosen and the same notehead type assigned. Next, in the lower left field a playing technique must be added for three tremolo strokes (unless it already exists). Lastly, Replace has to be applied.

Comparison of both methods, with the example of a drum roll

Method A: Assignment of a drum roll to the default notehead type, as described step-by-step above

The playing technique cannot be chosen directly during note entry. Even after adding a three-stroke tremolo, any selected (clicked-on) note will be played as a single hit (default). After having entered a large number of bars it can be explicitly decided which note should be a drumroll.

Method B: Assignment of a drum roll as a separate playing technique

The playing technique can be chosen during note entry by “toggling”. The drumroll sound will play back properly even without the application of a three-stroke tremolo to the note. After having entered a large number of bars the track might have to be listened through in order to apply three-stroke tremolos to the drum roll notes. If a mistake has been made regarding the drum roll playing technique, it is not sufficient – as with method A – to select the note and remove the tremolo strokes; additionally, the underlying playing technique has to be changed via Alt+Shift+Up/Down as well.

After having set up a percussion instrument in Setup mode, the next crucial step is to assign playing techniques to the appropriate sounds from the sound library of our choice.

Switching to Play mode, one peculiarity becomes apparent right away: all instruments that we have set up in our kit each have their own track. Thus it is possible to choose a distinct MIDI channel for each instrument, and therefore also a distinct sound library with its own distinct percussion map.

In general, for controlling drum patches there are two different approaches used by producers of sound libraries.

Either it is done by way of a mapping, with any and all available sounds each assigned to a specific MIDI note (the GM drum map probably being the best-known example); or the control is handled with keyswitches, with a specific default sound assigned to [generic] MIDI pitches and with commands for different playing techniques attached to separate MIDI notes that lie outside of the instrument patch’s playable pitch range.

So, while with the first approach all sounds are scattered about the MIDI keyboard, with the second one the notation program always triggers the same MIDI key [for a particular pitch] and changes playing techniques via keyswitches.

Dorico supports both approaches.
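The difference between the two approaches can be sketched in a few lines of Python. The mapped note numbers follow the common GM drum layout; the keyswitch values are invented for the illustration and will differ from library to library:

```python
# Illustrative model of the two triggering approaches. The mapped note
# numbers follow the GM drum layout; the keyswitch numbers are made up.

# Approach 1: direct mapping - every sound has its own trigger note.
MAPPED = {"bass drum": 36, "snare hit": 38, "snare rim": 37}

def trigger_mapped(sound):
    return [("note_on", MAPPED[sound])]

# Approach 2: keyswitches - one playable note per instrument, with the
# playing technique selected by a note outside the playable range.
SNARE_NOTE = 38
KEYSWITCHES = {"hit": 24, "rim": 25, "roll": 26}  # hypothetical values

def trigger_keyswitched(technique):
    return [("note_on", KEYSWITCHES[technique]),  # select the technique...
            ("note_on", SNARE_NOTE)]              # ...then play the sound

assert trigger_mapped("snare hit") == [("note_on", 38)]
assert trigger_keyswitched("rim") == [("note_on", 25), ("note_on", 38)]
```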

The Percussion Maps editor

To assign playing techniques, in Play mode, open the Percussion Maps editor via Play > Percussion Maps.

Just as in the Expression Maps editor, the Percussion Maps editor has a clear and consistent design. To the left is a list of preconfigured maps. A new percussion map is created either from scratch or by duplicating an existing one; the respective buttons are located at the bottom of the list. To the right there are three main areas labeled Percussion Map, Drum Kit Note Map and Edit Drum Kit Note.

Just like with expression maps, the Percussion Map area allows the user to give a map a name and a unique ID. On top of that it can be decided whether the map should apply to multiple instruments or to a single instrument.

The Drum Kit Note Map area below provides a list of all MIDI notes serving as trigger pitches for the respective sound library. A filter allows restricting the view to only those notes actually in use.

With a list entry selected we can modify that entry in the editing area below (Edit Drum Kit Note), or we can set up a new entry if the selected entry is still empty.

As can be seen in the picture I have gone with the mapping method for the snare drum of the kit that I prepared in Setup mode earlier, since my Vir2 Snare requires different trigger pitches for all playing techniques. Single Hit is triggered by MIDI note 36 (C2), Rim by MIDI note 37 (C#2), and drum roll by MIDI note 48 (C3).

To create a playing technique from scratch, one first clicks Show all in order to display all MIDI notes that are not yet in use. Next one scrolls to the MIDI note that the producers of the sound library have designated for the intended playing technique.

With the note selected, the editing area allows entering a name, choosing the corresponding Instrument, and assigning the playing technique. The Keyswitches field is not used in this case.

Once all settings for the chosen MIDI note are provided, they are confirmed with the Apply button before moving on to the next MIDI note.

After all required assignments are set up, the Percussion Maps editor can be closed.

For a drum kit note map using keyswitches the approach is roughly the same, with the exception that, as a first step, the MIDI note designated for the default sound of the sound library (usually a Single Hit) is selected and set up as described above. The Keyswitches field, once again, is not used, since this entry serves as a default for the map to fall back on if no playing technique is assigned to a note.

With this done, clicking Add Keyswitch Alternative creates a duplicate of the entry, still referring to the same MIDI note. For this entry all necessary settings (like name and playing technique) can now be changed and the MIDI pitch number for the keyswitch can be added. Which keyswitch is required for which playing technique is documented in a sound library’s user manual; often it is also apparent from the interface of the library’s engine.

The important thing is to assign all entries to the same MIDI note, as shown in the picture below.
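The resulting keyswitch-style note map can be pictured as a list of entries that all share one trigger note, with the keyswitch-less default acting as a fallback when no playing technique is assigned. The field names and numbers in this Python sketch are illustrative only:

```python
# Illustrative structure of a keyswitch-style drum kit note map: several
# entries share one trigger note and differ only in technique and keyswitch.
entries = [
    {"note": 38, "technique": "natural", "keyswitch": None},  # default entry
    {"note": 38, "technique": "rim",     "keyswitch": 24},
    {"note": 38, "technique": "roll",    "keyswitch": 25},
]

def lookup(entries, technique):
    """Pick the entry for a technique, falling back to the default."""
    for entry in entries:
        if entry["technique"] == technique:
            return entry
    return entries[0]  # first entry acts as the map's fallback

assert lookup(entries, "roll")["keyswitch"] == 25
assert lookup(entries, "unknown")["technique"] == "natural"  # fallback
assert len({entry["note"] for entry in entries}) == 1  # one shared trigger note
```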

With all settings finalized and saved via Apply, the next step is to assign the percussion map to the audio engine of the sample library.

To do this, in Play mode we click the gear symbol to the left of the sound library under VST Instruments.

This opens the dialog that we already know from handling expression maps.

The right column is for assigning percussion maps. Using a drop-down menu, one of the existing percussion maps can be chosen – in our case “snare_vir2” is the right one.

In addition to the percussion map there is also an expression map assigned, controlling dynamics with MIDI’s CC7 (Volume). This expression map should have no playing techniques set up whatsoever, since otherwise it is possible that entries contradict each other or that an expression map’s keyswitches are played back as actual notes.

Note entry for unpitched percussion

The introduction of unpitched percussion notation also comes with some new entries in Edit > Preferences. Let’s take a quick look.

In the Percussion Input section there are settings to adjust the way note entry for percussion staves behaves.

Percussion Input is divided into two areas by way of the grey background. The upper area offers note entry settings relevant to the 5-line Staff and Grid representations.

The Input onto kit or grid buttons decide whether a percussion map or a note’s staff position should be used for note entry within a 5-line Staff or Grid representation.

Use Percussion Map might be somewhat misleading. It refers to the MIDI note assignment provided by a sample library’s producers; the purpose is to enable users to enter particular sounds directly with the MIDI keyboard. The classic example of such a layout is the GM standard, where, for example, a C1 corresponds to a bass drum and a D1 to a snare drum (sometimes there is an octave shift, though). In practice this means pressing, for entry, exactly the MIDI key to which the desired sound of the drum patch is assigned. The library I use (Vir2 Elite Percussion) has several single hits mapped to C3–B3, as well as drumrolls to C4–B4.

I mention this just for the sake of completeness, since this feature is not fully functional yet (for reasons given earlier) and should not be used for now.

Use staff position allows note entry by way of specifying a staff position within the 5-line Staff or Grid representation, with Interpret as: determining whether a 5-line Staff representation should behave like a regular staff with a treble clef or a bass clef, respectively.

Thus, for entering a snare drum note (set to be notated between the third and fourth staff lines), the MIDI key C5 is used with Treble G Clef chosen, or MIDI note E3 if the option is set to Bass F Clef.
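The arithmetic behind this can be sketched in Python, assuming the numbering convention used in this article (MIDI 60 = C4) and counting staff positions diatonically upward from the clef’s bottom line. The function name is mine, not Dorico’s:

```python
# Sketch of the "Use staff position" arithmetic: each staff position (line
# or space, counted from the bottom line) is one diatonic step upward.
LETTER_SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
LETTERS = "CDEFGAB"
BOTTOM_LINE = {"treble": ("E", 4), "bass": ("G", 2)}  # clef bottom-line pitch

def staff_position_to_midi(clef, position):
    """MIDI note for a staff position (0 = bottom line) under a clef,
    using the article's convention of MIDI 60 = C4."""
    letter, octave = BOTTOM_LINE[clef]
    index = LETTERS.index(letter) + position
    letter = LETTERS[index % 7]
    octave += index // 7
    return 12 * (octave + 1) + LETTER_SEMITONES[letter]

# Snare position: the space between the third and fourth line = position 5.
assert staff_position_to_midi("treble", 5) == 72  # C5
assert staff_position_to_midi("bass", 5) == 52    # E3
```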

If the Bass F Clef setting is used it is important to change the subsequent MIDI pitch number (labeled Input techniques from MIDI key: with the default MIDI note 48 = C3), because it otherwise conflicts with the key range for note entry. My recommendation is to set this value to MIDI note 36 (C2).

What is that feature about, though?

It allows the user to switch, via MIDI keyboard, between the various playing techniques of an instrument during note entry; the specified MIDI note and the one directly above it act as backward and forward toggles, respectively. This way it is not necessary to move the hand away from the MIDI keyboard to choose a particular playing technique for entry.
With Interpret as: set to treble clef and with the default setting unchanged, MIDI note 49 (C#3) toggles forward and MIDI note 48 (C3) toggles backward.

A further very practical feature is the ability to choose a playing technique directly.

With the set MIDI pitch value (48 by default, or the user’s choice) as reference, the MIDI note a whole tone above it (with 48 (C3), this would be MIDI note 50 (D3)) is assigned to the very first playing technique set up in the Percussion Instrument Playing Techniques editor. Each additional playing technique follows in half-step increments.

By way of example:

Earlier we had set up a Snare with the playing techniques “Natural”, “Rim” and “Roll”.

Leaving the Interpret as: and Input techniques from MIDI key: options at their default settings (treble clef and 48, respectively), MIDI note 50 (D3) chooses “Natural”, MIDI note 51 (D#3) chooses “Rim” and MIDI note 52 (E3) chooses “Roll” directly.

Especially with a larger number of playing techniques this is quite an advantage. In all, up to ten playing techniques can be handled in this manner; above that number the toggle function must be employed.
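Under the defaults described above, the whole key layout can be summarized in a short Python sketch; the function name and the returned structure are mine for illustration, not anything exposed by Dorico:

```python
# Sketch of the MIDI key layout for technique selection during note entry.
# With the default base key 48: 48/49 toggle backward/forward, and direct
# selection starts a whole tone above the base, in half-step increments,
# for up to ten techniques.
def technique_keys(base=48, techniques=("Natural", "Rim", "Roll")):
    keys = {"toggle_backward": base, "toggle_forward": base + 1}
    for i, name in enumerate(techniques[:10]):  # direct keys cover ten at most
        keys[name] = base + 2 + i
    return keys

keys = technique_keys()
assert keys["Natural"] == 50  # D3
assert keys["Rim"] == 51      # D#3
assert keys["Roll"] == 52     # E3
assert keys["toggle_forward"] == 49
```

The sketch also shows why the default base of 48 collides with Bass F Clef note entry: the toggle and selection keys would sit inside the playable staff-position range.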

But this mechanism for choosing playing techniques directly is also the reason why, with Bass F clef selected, the default value for Input techniques from MIDI key: has to be changed, since otherwise choice of playing technique and actual note entry collide.

It is important to remember that this feature only relates to playing techniques that are set up in the upper field of the Percussion Instrument Playing Techniques editor. Playing techniques controlled by application of articulations or tremolo strokes to already existing notes cannot be selected or toggled in the manner just described (also see the section about Percussion Instrument Playing Techniques above).

Personally speaking, I would like the ability to re-arrange the playing techniques set up in the Percussion Instrument Playing Techniques editor by mouse afterwards, since such mapping often can be a “growing” process.

The available settings for the Single-line Instruments representation differ in that they offer no choice between treble and bass clef.

Instead, a base note can be specified, which will then, along with the notes above it, serve for playing technique input. A note will be positioned on the line, unless another position has been set in the Percussion Instrument Playing Techniques editor. The default value is MIDI note 60 (C4), which can be changed at the user’s discretion.

Lastly, an important note: if multiple instruments have been assigned to the same staff position in the 5-line Staff representation, it is possible to toggle through them with the Up/Down Arrow keys; this cycles through all positions onto which instruments have been placed in the Edit Percussion Kit dialog.


I am often told that I tend to see more in things than is actually the case. Even though playback in Dorico is only in its early stages, the foundation already laid is more than promising. The introduction of Expression Maps – allowing quick, user-friendly setup and editing of articulations and playing techniques for all available and future sound libraries – is a pioneering implementation. Percussion Maps make an intuitive workflow possible, considering the complexity of the task itself as well as the different approaches of each sound library producer, with the further development of such libraries being a perpetual process.

Expectations towards playback from notation programs have changed drastically in recent years. Decision-makers at publishing houses as well as orchestras by now prefer audio files for a first assessment – not because they are unable to read scores, but because it is just faster. More and more, composers are expected to offer a high quality demo reel.

Sure, this can also be done in post-production with a DAW – but why should it be, when there is another way? Film composers, on the other hand, usually work in a DAW but require reasonable engraving afterwards. This can certainly be done by exporting and further editing in a notation program. But with film scoring in particular, time is a serious factor, as I can confirm from my own experience. I have spent many sleepless nights, with a tight deadline looming, preparing sheet music for colleagues.

The same question here: why, when there is another way?

And the basis for this is already present in Dorico right now. It’s not about building Dorico up into a full DAW, though; Steinberg already has excellent specimens of those on the market. Rather, it is about integrating the MIDI editing capabilities of a DAW into Dorico in a way that facilitates first-class playback.

Looking at my own workflow in Cubase – after importing a MIDI file from a notation program and deleting all redundant CC data – I find only a few features missing in Dorico, but they are all the more important. The following steps constitute only a rough sketch.

The first step is to add “initializing blocks” in bar [-1], preceding a Flow, which would allow setting default values for all conceivable CC data – something that is sensible especially for VSL. In Dorico this could be done via an Expression Map, but for this a note in a preceding bar would be needed, with an unused articulation assigned to that note.

I would welcome a possibility to specify an arbitrary set of data within the process of assigning instruments in Play mode, which would then be sent in advance of playback.

The next step is the editing of the tempo track. Dorico adequately obeys the tempo indications of a score, but in order to achieve realistic playback, a large number of tempo indications would have to be inserted into a score. A detailed framework for editing the tempo track without affecting the actual notation would be desirable here.
Knowing the Dorico development team, I would make the bet, though, that this is already on their to-do list. Just a guess, obviously.

Next after setting up the tempo track is the editing of articulations. Probably the most important issue here is setting up a variety of samples for one and the same articulation. Once more as an example I would like to bring up staccato, which can have extremely different meanings even within one score.

As I already explained above when covering Expression Maps, users should have the option to create custom Expression Map entries that are independent of the program, which could then be swapped in Play mode in a specific instrument’s Playing Technique track.

What I mean by this are the playing techniques selectable in the list of playing techniques. Currently, all of them are hardwired to articulations and playing techniques in the score notation. I would love to be able to add custom playing techniques and articulations which are not hardwired to notation items and can then be switched in the playing techniques track without changing items in the score itself.

Taking staccato as an example, this could mean “staccato”, “staccato long”, “staccato fast”, “staccato soft”, and so on. In VSL, for staccato alone I have more than 30 different variants available, some of which contain sustain sounds.
This is similar with other playing techniques as well, of course.

A further step is the editing of attack and release points and of a note event’s duration. This can already be done in Dorico in a very relaxed and user-friendly manner.

Another important issue would be to insert notes into a track that are not supposed to appear in the score notation, especially with regard to ornaments and their wide range of possible interpretations. Such notes are also required by the producers of sound libraries for certain articulations and playing techniques, for example the trill or the tremolos of “Berlin Woodwinds” by Orchestral Tools.

Probably the most extensive and most important workflow step is the often very detailed editing of control change commands in CC tracks. Reading through the many comments in forums and Facebook posts, this may be one of the features everyone is waiting for. Especially when working without key velocity, some CC curves depend on each other. In my workflow these are CC11 (Expression) and CC12 (the sound of a certain dynamic level), where CC12 is usually just a certain percentage of the CC11 value (especially when a “French”-sounding sax is needed instead of a big-band-sounding one), yet always remains free for further manual adjustments. In Cubase I handle this with a Logical Editor preset (a kind of macro) which calculates and copies the selected CC11 values to the CC12 lane. For a good workflow it is very important to be able to see several CC lanes at once on one screen.
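That CC11-to-CC12 coupling can be sketched as a simple scaling pass over a lane of values. The ratio here is an arbitrary example value, not a recommendation; the real percentage depends on the library and the desired sound:

```python
# Sketch of the CC11 -> CC12 coupling described above: CC12 is derived as a
# percentage of CC11 and then remains free for manual adjustment.
def derive_cc12(cc11_lane, ratio=0.6):
    """Copy a CC11 lane to CC12, scaled and clamped to the 0-127 MIDI range."""
    return [min(127, round(value * ratio)) for value in cc11_lane]

cc11 = [0, 32, 64, 96, 127]
cc12 = derive_cc12(cc11)
assert cc12 == [0, 19, 38, 58, 76]
```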

Maybe an unintended minor spoiler was given by the development team of Dorico about half a year before the initial release of the program. Watching this video at 3:50, an early state of CC Lanes can be glimpsed.

For such a feature it would be imperative to provide users with various tools comparable to those of Cubase/Nuendo; especially the different curve forms allow a more precise and faster workflow.

The final step would be audio editing of a sound library’s output channels.

In this area, Dorico leaves little to be desired, with a generous amount of Cubase/Nuendo technology already implemented. In fact, the only thing that I miss is a possibility to create custom groups and to control gates and dynamic EQs via sidechain.

An automation feature would certainly be useful for a number of users.

A hotly debated point would be the ability to synchronize Dorico with Cubase/Nuendo or, even better, with any DAW, thereby providing seamless data exchange “on-the-fly”.

In particular, I am thinking of true ReWire, in the way that I am accustomed to from Propellerhead’s Reason in combination with Cubase. There, each of Reason’s MIDI channels appears as a MIDI input in Cubase, and each audio output set up in Reason appears as an audio input in Cubase – and vice versa.

Given that extreme processing workload can result in skipping and loss of synchronization, I am generally of the conviction that a new and more efficient protocol would make sense here. But that is just my humble opinion.

This list might appear long, but it can actually be broken down into a few bullet points:

  • Init control change send
  • Tempo track editing
  • User-customizable Expression Map entries (or, respectively, selectable playing techniques independent of the program)
  • CC lanes
  • Integration with DAWs

We should not forget, however, the unbelievable achievement of the development team in so short a time. And of course the focal point was and is first and foremost in the area of notation and engraving.

In the area of playback, likewise, the primary goal has been and remains first and foremost to give a faithful audio rendering of the score notation regarding dynamics, tempo, articulations, and playing techniques; more detailed editing tools come second.

But seeing all the clever ideas – even for small details like a fermata – that the development team has come up with, one can confidently look towards the future of Dorico’s playback. “We are just about to begin” has by now become a familiar phrase that sums up things well.


  1. Jochen

    Very interesting read and a comprehensive insight into the subject – thanks. I did not dive that deep into it once I learned that many things are not yet fully implemented. But I played a bit with Spitfire Symphonic Strings and did a quick test of the expression map features – and the result out of the box gave me goose bumps. Dorico’s playback will definitely be fantastic when all of the above is implemented. Still, there is much setting-up left to the user – and using high-quality samples takes up a lot of CPU and memory in the background… I hope that for lazy people like me, Arne Wallander will be able to develop a version of the ultra-CPU-friendly NotePerformer for Dorico…

  2. Johan


    Thank you for this elaborated review.
    Very promising for the future.
    But … I did not find the following in your review.
    Basic repeats are not implemented yet.
    And will not be in next version.



    1. Andrew Noah Cap

      Hi Johan, I am sorry that I left this out. This post was not an overnight thing. Actually I started when the latest update was released. The more lines I wrote, the more it became clear to me that a deeper look into the world of VSTis and their manufacturers’ philosophies was needed.

      In the latest release the basic repeat functionality is not implemented, but as far as I know it is a high priority on the list, and chances are high that it will be implemented in the next version.

  3. Peter Roos

    Excellent review – thanks Andy.

  4. Charles Gaskell

    I would estimate that for about 80% of standard notated music, there will be agreement by 80% of performers as to how it should be played.

    In my view, what is needed is something that will do the heavy-lifting for that 80% and fully automate translating this into Midi instructions (including CC instructions) required by a sound library. There also needs to be a way that these Midi instructions can be edited manually, to cater for the other 20%, but associated with the original notation, so that adding a bar in the middle of the piece for example does not force you to re-input all the manual edits and overrides.

    As these instructions vary from sound library to sound library it would seem inefficient and inextensible to produce individual mappings for individual libraries (as is the case with Sibelius Sound Sets). Perhaps a better way would be to adopt the model of Java, and produce meta-Midi instructions (similar to Java’s pseudo-machine code) which represent the instructions for a virtual sound library, with a separate process to turn these meta-Midi instructions into true Midi instructions suitable for a specific sound library.

    Such an approach would also have the bonus that if you have the ability to translate true Midi instructions for a sound library into a virtualised generic set of meta-Midi instructions, then the problems you get when you substitute one sound library with another (or where you layer one sound library with another) would be much reduced and changing from (say) VSL Synchron Strings to LA Scoring Strings or EWQL Hollywood Strings would be as simple as generating the meta-Midi instructions from the VSL String Midi, then generating the LASS or EWQL HS specific Midi instructions and assigning them to the appropriate track (and layering as required).

    Thanks Andy for a detailed look at the playback capabilities of Dorico – much food for thought and exploration there. I do hope that we are at the beginning of an era where the challenges that notation brings (or at least 80% of them) are finally tamed and the drudgery of forever drawing “realistic” expression curves etc. becomes a thing of the past.

  5. DaddyO

    Outstanding sketch of the situation, Andy. Thanks. I share most of your wish list.

    Re: Playback, certainly part of it is delivering adequate audio output to serve as a demo for customers, or as an acceptable sketch prior to full DAW implementation, or when customers are not the target audience. As an additional note, I think there are some (many?) among us who, when composing, want better playback not so much to catch “mistakes,” although this is certainly helpful, but to serve as a sandbox for composing.

    I suppose there are a number of experienced, professional composers and orchestrators who do not need this step because they can hear something in their head already orchestrated…all they need is to transfer that to notation. But there are likely also a number of Dorico users who do not have that capability, who hear something general in their head but then need to try out various ways of orchestrating it, continually reworking things until they are satisfied with what they are composing.

    All the existing and prospective playback work you cover will help towards meeting this need. But I do want to make sure there is some mention of the people who want to use Dorico as their go-to composition tool. These people, like myself, strongly prefer composing in notation rather than in a DAW, at least initially. But the playback needs to be convenient and strong enough to play back on the fly, in a targeted way if needed, and to give a good enough rendering of techniques, ornaments, tempos, etc. that one can evaluate what they have written.

    1. Kerwin

      I tend to use the sound functions of my scoring program as a sandbox when composing, to inspire textures I might not have tried before or to fiddle with on-the-fly ideas. As such, while very interesting, I feel expression maps are way too far into mockup territory for me. When I need to do a mockup, a full DAW is likely going to be more convenient than Dorico (at least for the next few years). What Dorico lacks is pseudo-realistic playback (YMMV) out of the box, of the kind something like NotePerformer can deliver without hours of tuning expression maps and libraries. I did spend several weekends trying to get Dorico to play nice with Spitfire’s symphonic libraries, but was not overly impressed with the result, simply because CC lanes and other niceties of a DAW are still too nascent in Dorico. The out-of-the-box sound of HALion, while having some good individual patches, is both incomplete and doesn’t seem to “blend” very well in Dorico’s engine (compared with the same patches thoughtfully worked in Cubase).

      As an editorial comment about the piece in general, I’m not really sure why this is categorized as a “Review”… it feels more like lots of tips about the existing Dorico product, along with some speculation/wishing as to where Dorico is going in the sound-producing area. It doesn’t make a serious attempt to compare and contrast solutions already on the market. Perhaps this would be better categorized as “Tips” or a “Tutorial” of sorts?

      1. Philip Rothman

        Hi Kerwin,

        Thanks to you and everyone so far for all of these incredible comments. In answer to your question about whether or not it is a “review”: several times prior to publication I went back and forth, categorizing it as “tips”, “tutorial”, or “opinion”, but in the end went with “review”. Really, though, it is probably its own category of blog post born from Andy’s unique genius!

Leave a Comment

Your email address will not be published. Required fields are marked *