On working at Steinberg
Q: What’s your work environment like?
There are 14 of us in the office, and of the 14, 12 of us worked together before. We’re very fortunate that we’ve been working together for a very long time; many of us for more than ten years. We have a strong rapport; we’re friends as well as colleagues, and have been since we were much younger and didn’t have babies and spouses. That, I think, goes a long way towards what will hopefully make this a successful endeavor.
We all have a strong feel for the field that we’re working in and we have a really strong team dynamic that has survived completely intact from being transplanted. We’re a small team, but I like to think that we “punch above our weight” in terms of the results that we get given the number of people that we have. That’s no reflection on me; that’s because the people doing the programming are tremendously talented individuals who no doubt could turn their skills to any field and excel, and we’re lucky that we have them working with us on this project. They often take hare-brained ideas of mine and improve them a lot by telling me, not in so many words, that I’m an idiot, and that’s quite all right!
The nice thing about working here at Steinberg is that we don’t have anything else to do other than work on this project. So these days I have no excuse for not being able to come up with answers to pressing development questions on demand. In a way, that’s a wonderful thing because it means that, as much as my mental energy will allow it, it can be 100% dedicated to building a better program.
We collaborate with our colleagues in Hamburg, but it’s not like we’re poking around into the internals of Cubase. Steinberg as a whole is run on the agile software methodology, so we work in short iterations, or sprints. There’s a fairly flexible allocation of people among projects. We do work with them in ways you would imagine: we go over to Hamburg for a few days at a time to do in-person work and we have weekly meetings via teleconference.
VST hosting and the VST plug-ins that are currently included with Cubase are just a couple of the ways, no doubt, that the first version of our program will bear the fruit of those efforts. We don’t know what kinds of sounds will ship with our product when it becomes available, and that’s a bit of a challenge: pros have probably already chosen what libraries they wish to use for playback, so in a sense it’s probably even more important that we work well with existing third-party libraries than that we ship yet another orchestra in the box with our software.
Down the line, as I said, we want to look at more specific integration to transfer data between Cubase projects and our program’s projects, and we’ll come up with little teams that are focused on that for a period of time according to the way that the priorities all shake out.
Comparing features to existing products
Q: Are you looking at the existing music notation software products as a baseline for features? Is there anything you’ve ruled out including in your program?
Naturally, we have constraints in terms of how much we can do in a reasonable time frame so that we can come up with a coherent first version that we can release and hopefully sell, and that customers will hopefully want to buy. Inevitably we won’t have all of the features that the existing, mature programs have. They have an accumulated 20-plus years of features. I won’t categorically rule out adding any particular features in the future, of course. One example of a feature that we’re unlikely to have in our first version is support for scanned music, but that’s the kind of thing that we might very well add later on down the line because it is a genuinely useful workflow.
Do we look at the existing notation products as a baseline? Only in one sense: the actual repertoire of types of music notations that you simply have to be able to handle in order to be useful to anybody is very large. In other words, pretty much every type of instrument needs notes, accidentals, staff lines, barlines, slurs, ties, beams, dots, plain text, techniques, rehearsal marks, bar numbers, staff labels… the list goes on and on. So we have to make sure that our program is capable of doing conventional notation in the first version. It’s inevitable that some more specialized notations, such as those used by early music, or perhaps some guitar things, will have to wait for later versions.
The challenge is, every time you turn over a stone, whether it’s ties, or beams, or stems, you discover a huge amount of stuff you have to do. If we can’t do, for example, instrument naming at least as well as in Finale and Sibelius, why would any pro who’s likely already using one of those programs bother to take a look at our program? So that sets the bar in terms of core notational elements; we have to be at least as good as those guys at those things. But it can’t be just “me, too.” We have to have things that are unique to our program — and I believe we have a number of things that we are doing that are unique to us — but if we can do some fancy thing, but can’t label our staves, then, it’s a non-starter. We have to cover the same ground, and we have to cover it well.
SMuFL and Bravura
Q: You’ve talked about SMuFL (Standard Music Font Layout) and Bravura on your blog and elsewhere, but what is it like seeing the response to it in the field, and how it has begun to be used and implemented?
SMuFL was born of necessity. We needed a font, and we knew that it couldn’t be done the way it was done before. So it made sense to approach it in a way that would be useful beyond our little world. Existing music fonts are difficult to interchange, and hopefully this will help.
It’s been really fantastic to see the enthusiasm for it among a variety of different types of software. Logic Pro X 10.1 now works with Bravura and is, effectively, SMuFL-compliant, in that it knows what code points to pull out of the font to get the right symbols. MuseScore 2.0 will ship with Bravura; SoundSlice is using SMuFL and Bravura; MakeMusic will support it in an upcoming version of Finale; there are a number of independent developers that are making use of it in various mobile apps that will eventually see the light of day; and Robert Piéchaud just released a new version of November, which is the first commercial SMuFL font. It’s really cool to see another font designer working through the issues and coming up with a font.
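To make the “code points” idea concrete: SMuFL gives each musical symbol a canonical name and a fixed code point in Unicode’s Private Use Area, so any compliant font (Bravura, November, etc.) can be swapped in without remapping. Here is a minimal Python sketch; the two code points shown are the ones SMuFL assigns to those glyphs, though a real application would load the full mapping from SMuFL’s glyphnames.json metadata rather than hard-coding it:

```python
# Tiny subset of the SMuFL name-to-code-point mapping (Private Use Area).
# A real application would load all of this from SMuFL's glyphnames.json.
SMUFL_GLYPHS = {
    "gClef": 0xE050,         # treble clef
    "noteheadBlack": 0xE0A4, # filled notehead
}

def glyph_char(name: str) -> str:
    """Return the character to draw, using any SMuFL-compliant font."""
    return chr(SMUFL_GLYPHS[name])

print(hex(ord(glyph_char("gClef"))))  # 0xe050
```

Because every SMuFL font agrees on these code points, an application that asks for `gClef` gets the right symbol whether the user has selected Bravura or a third-party font.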
My goal is that we should have fonts developed to this standard so that people who use our program and hopefully other programs, if other developers choose to support it, will have a wider variety of fonts they can use in the same way that you can with your word processor. I would like for a font designer to think that it will be worth their while to produce a music font because its use won’t just be limited to one particular program.
The state of music notation and the field
Q: Is it a good time to be investing in the development of music notation software, with all the predictions of the decline of the art form?
I’m always very dubious of people warning that the death of classical music is upon us. If you’re going to have music of any sort of complexity that’s going to require multiple humans to play it, you need music notation. So I don’t worry about the state of our field. It’s a bit like saying, “is the e-book going to be the death of reading,” or “is the internet going to be the death of reading,” or “is TV going to be the death of reading?” No!
The point is, we’re still training musicians to play acoustic instruments. Obviously, yes, kids are more attracted to guitars and drums these days than they are to piano, clarinets and violins; all the same, we still have wonderful orchestras, wonderful ballet houses, and wonderful opera houses. We still have fantastic conservatories that are teaching these things, and I haven’t heard about any of them closing, other than the occasional university dropping a music program here and there. Juilliard isn’t closing; the Royal Academy of Music isn’t closing.
We are still training musicians, and we still have an appetite for music that can move us. I’m sure that there is always going to be a need for music notation, just as much as there is going to be a need for the written word. It is a way in which humanity’s artistic endeavors are communicated to the current and future generations. I cannot see it going away.
To refer to something I said earlier, we see the role we have in building this program as being part of a tradition that goes back hundreds of years: how these things are communicated so that, in the end, they can be brought to life by humans actually picking up instruments and playing them. That’s our role. We’re the toolmaker who replaces the craftsman that was doing that job — or at least enables somebody who doesn’t have the training to do it at all, and enables somebody who does have the training to do it better and get closer to that tradition — and then helps make that music come alive.
In the end, that’s what this is all about. We make music notation software so that you can make music notation that can be played by a human being, so that we can have the shared experiences of music. That’s not going away, and I think that there is room for us to make something that does these things in a new way that’s faster and better, so that it’s worth somebody’s while to check it out when it arrives.
Ah! I knew it! When you posted about oTranscribe, and showed an excerpt of an interview you were going to post later, I thought to myself, “that sounds just like Daniel Spreadbury!” I’m pleased to know I’m right, but more importantly pleased to hear some more juicy details of their development process. So fascinating! Indeed, when we have a foundation to build off of (and I’m speaking from the LilyPond realm), it can be so easy to take for granted everything that already exists to make my development work easier.
Did Daniel mention anything about a scripting capability for their notation program? If not at first, I assume they would incorporate it eventually.
Thanks for sharing, Phil!
Fantastic interview, thanks for sharing! Very exciting news.
I enjoyed reading about the philosophy of this new product and welcome its entry into the notation arena. However, I’m still waiting for the day when a product developer will explicitly state that what the user *sees* is what drives the functionality. The case is sad indeed when well into the 21st century we still have to *tell* computers what to do step by step by step.
Granted, we must dialog with our software in the beginning so that it might know our intent, but application developers seem to ignore the potential of heuristic interaction and instead seem to believe that salvation lies in zillions of lines of code.
The application must be built to learn from our input, not just passively sit there and have to be instructed and re-instructed. How it learns is from the display – ‘looking’ at precisely the information that the user sees. As it stands currently, however, an application works in the dark, constructing its output from the dungeons of its processes, then simply waits for us to continue regardless of how well it executed its task.
If the computer struggles, it should have a tool that allows users to manually construct the element in graphical form. The tool would see what the element does, then it would adapt itself to the ‘new’ function. Instead, software such as Sibelius must either have the function built in or someone has to write a plug-in, else, the user has to wait for the next patch or major release, then we go round and round again.
For example, Sibelius doesn’t ‘understand’ that there are common cases where tied notes extend to a 2nd Ending. Thus, the tie marks are absent. There is no way to tell Sibelius that you want a tie mark – you have to manually build one. Sibelius can’t learn what it is you want to do (which is a most common procedure).
Properly constructed, the application would permit the user to temporarily create the tie mark (draw it, if need be), then examine the output as it appears on the screen to the user. A dialog would come up and say, “I see that you’re creating a tie from the last note of the phrase to the 2nd Ending – would you like me to add this function to your library?”
The other capability that a good notation product would possess is to understand what a user is attempting to do with the interface. In Sibelius, if I drag a quarter note, for example, from one part of a bar to another, Sibelius refuses to recognize this and instead thinks I’m trying to *insert* a quarter note, which it then proceeds to break up into whatever denominations it thinks are appropriate for that part of the bar. Even if the functionality can be circumvented in another part of the application, the type of action that a user is taking – dragging and dropping – is not comprehended by the application, thus the undesired result.
(The above happens when changing the meter for a measure. All I want to do is ‘move’ the motif I originally constructed to another part of the bar, but Sibelius does not permit this easily because it can’t understand what I’m trying to do. However, MS Word and countless other applications understand this principle.)
Ralph L. Bowers Jr.
Wonderful write up of your conversation Philip.
Was the subject of the ability for developers to write plugins for this new notation software brought up?
Scanned music ability does not need to be native in the program. A good external program like SmartScore X Pro, or even PhotoScore, would be preferable.
Thanks, everyone, for the comments so far. I had meant to ask Daniel about plug-ins, but our time ran long as it was. Perhaps in the next chat!
Thank you for a wonderful and informative interview. Waiting (im)patiently for the first release! I just know this will be a superior program.
A fantastic interview and I’m really looking forward to checking out the first version of this software when it finally launches.
Thanks for the blog…an interesting read!
I’m really looking forward to seeing what this team cooks up working with Steinberg and Yamaha. There seems to be a great deal of potential with VST3 protocols, a very mature DAW, and the great but under-appreciated Halion sound engine.
I’ve always been tugged back and forth between CuBase for production work (since the 1990s with Atari platforms) and Sibelius for Windows as an Educator, so it seems to be a great alliance in the making!
In the short term I’m thinking good pricing and thoughtful bundling into studio-centric DAW packages could prove to be a rather effective way to get people hooked on the early versions while opening up untapped and appreciative markets for better scoring tools. Many ‘bridges’ can be built in the music world with this team!
So when can we expect this new product?
It seems that Mr. Spreadbury is concentrating mostly on composers with no classical training, who will go to the orchestral score only after the mock-up in Cubase, which is fair enough since he is working for Steinberg, and this is what Hollywood attracts these days.
However, for straight-to-“paper” composers like myself, who rarely go to the DAW (since I mostly deal with live ensembles directly), and only after the score is finished, the most important thing after the professional engraving features is the immediacy of musical feedback from the notation program during the act of composition. That is why an aural result from the program that is as faithful as possible is paramount for me, without first having to waste time setting it up with external libraries and using big computer resources in order to compose.
Something similar to – or, better, an improvement on – what Wallander have done (that is, an immediate reading of the music on the page with some basic reverberation) is more important to me than what happens after the composition is done (taking the MIDI information to the DAW, seamlessly or not, to play with the most realistic instruments, draw automation, tweak nuances, EQ, etc.).
Every trained composer that I have spoken with about this agrees with these views, so please pass this on, for what it is worth.
I absolutely agree with your remarks – it’s the aural version of what I posted earlier and a condition I also share.
When I speak of this to the notation developers, they think I’m talking about the ‘inaccuracy factor’ – that ‘human feel’ thing. I’ve rarely had someone understand the issue like you seem to. Instead, it’s about simply getting the Playback to properly read the elements of the score (dynamics, articulation, etc.). My Sibelius piano sound plays an accent. My Sibelius cello sound does not. Playback doesn’t recognize decrescendos. Etc., etc.
Wallander’s product did not work for me (they have a long way to go yet), but a notation software which would render sound performed precisely as a human musician would (and the technology is here) would be wonderful.
Dr. Dave Walker
Great interview Philip! This is such an important development for the notation community; thanks for keeping an eye on it for us.
You referred to Daniel’s blog but didn’t give the URL. I think the answers to a lot of the questions asked here, as well as some fascinating details for us computer nerds, are given there, at: http://blog.steinberg.net/
It sounds like Steinberg’s TODO list is pretty full, but I hope it includes a top-level item for IMPORT: i.e. being able to import Sibelius and Finale files without having to tweak the imported file. (That probably means importing directly, not via MusicXML, which requires a lot of tweaking when I move files between Finale and Sibelius).
@Phil Shaw, this will not happen unless there is an open source plugin interface to let users create such features. Finale won’t read Sibelius files, Sibelius won’t read Finale files, so why would this new Steinberg Product read both of them? It’s just like designing a new car with the option to install the doors of your neighbors’ cars.
Surely it would stand to reason that should Steinberg wish to provide that facility then they will do so?
That would in fact be a neat way of gaining customers who might otherwise say, “It’s not worth getting another program!”
My 2 Cents worth FWIW!
@Jochen: Sibelius and Finale may not read each other’s internal formats, but Open Office and other (non-Word) word processors read and write all versions of Word internal formats.
They have to do that in order to compete with a monopoly product.
Metaharmony and Gregory…
I hear you guys loud and clear…in searching out how studios have managed to do it thus far….I’ve learned a little bit to share here.
In any advanced score package to date, like Sibelius or Finale (and there are others….these are just the two I have some experience with), the degree of realism with which notations are rendered is determined by the quality and thoroughness of the sample library and expression maps installed (maps that correlate something like an accent or staccato to the various patch or controller changes needed by the sound engine).
If you are at a level where you require quality mock-up audio, then it’s time to purchase a top-level sample library with updated expression maps for your chosen notation package (so the program knows to use an accented/stabbed/doit/fall/tremolo/or whatever patch for the job).
Most of the pro level notation software CAN and DOES insert patch/channel changes, and invoke the key/velocity switches required by more powerful sample libraries to call up different articulations and directions on the fly. You simply need to invest in better sampler/synth-plugins that actually supply such articulations….or….take the time to build them.
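As a rough illustration of what such an expression map boils down to (in Python; the keyswitch note numbers below are invented for the example, since every sample library defines its own switches):

```python
# Toy expression map: notated articulation -> keyswitch note that tells the
# sampler which patch to use. These note numbers are hypothetical; real
# libraries each publish their own keyswitch layouts.
KEYSWITCHES = {
    "natural":  24,  # C1  (sustain patch)
    "staccato": 25,  # C#1
    "accent":   26,  # D1
    "tremolo":  27,  # D#1
}

def events_for_note(pitch, velocity, articulation="natural"):
    """Emit the keyswitch just before the note so the sampler
    switches to the right articulation patch on the fly."""
    return [
        ("note_on", KEYSWITCHES[articulation], 1),  # keyswitch at minimal velocity
        ("note_on", pitch, velocity),               # the actual note
    ]
```

The notation program walks the score, looks up each articulation it finds, and prepends the matching switch event; that is essentially all the “6,000 line xml expression map” mentioned later is describing, scaled up to a full orchestra.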
There is a trade off……
Better audio plugins and sample sets tend to cost a lot…as in thousands of dollars for a fully outfitted orchestra with all the articulations, mutes, and tonguing and bowing styles – etc. You can start at around $150 for ‘just a basic orchestra’ sound-set, up into the thousands ‘per instrument family’. So by the time you get to mid or high end libraries…that could be 4 or 5 times the cost of any score editor alone that’s on the market right now.
I.E. Steinberg Symphony Orchestra is a decent start, and it’ll cost about $150 US shipped (if you don’t already have a Steinberg Dongle). Garritan offers libraries in the same price range (but don’t need a dongle). The list goes on….Vienna Symphony library, The East West series….etc. The more articulations and styles you need, the more advanced libraries you’ll require.
An alternative to spending big, is to gradually make your own sound library as you need it….and do mock ups in a DAW where it’s a LOT easier to tweak synth parameters by drawing dots, curves, and ramps on a linear control track, and drag the lengths of notes around than it is to maintain a 6,000 line xml expression map.
Big libraries like that will also require LOTS of storage space….plenty of memory, and maybe even a fairly modern quad processor or better….so they’re not going to work so well on iPads, Androids, and low-end laptops. You could use FLAC compression and get them a little smaller…but then you’ll definitely want at least a mid range Sandy Bridge system or better to take care of the decompression work.
On dragging stuff around in a musical score. It is possible to make interfaces on the screen where one can do that….the problem is, you can lose a dozen other abilities if that behavior is allowed. Again, it’s also pretty seriously linked to the quality of sound library or MIDI device one is using (if you drag a down bow violin part to a staff that has an up bow patch assigned to it….what takes precedence?). I think maybe that’s why the team represented in this interview is looking into offering different ‘screen modes’ where users get some choices in those sorts of things.
Steinberg has been working on a VST3 protocol for a while now that supports expressions for ‘individual notes’….which should make it a little easier to support dragging stuff from stave to stave and having it convert (or stay the same) on the fly. Presently….the HALion 5 synth/sampler system supports it….not sure what others (if any) support VST3 protocols at this time. Nuendo and Cubase have made giant strides in being able to support drag and drop expression and direction symbols, and in making it easy for you to tweak them as you like while working on a project. They’ve reached a point where they just need a score editor that’s a bit easier and more straightforward to use, with more polished output, to make for a seriously first class ‘printable score’.
IF Steinberg takes scoring seriously…..they could release a range of products that will marry the DAW and Score editing worlds in ways that are easy enough to use that few in their right minds would ever complain about having a tracking engine (which could become optional to use for those who balk at the idea of working with anything other than traditional notation) under the hood.
It won’t be for everyone…..but the elements are certainly in place to put out one heck of a nice product that has ‘more’ of what people want/need at lower price points.
Lots of good information, thanks. It sounds as though those who develop and market libraries have borrowed the business model of cable television: find the handful of features that 90% of the general public wants, then sprinkle them throughout a layered set of library packages until you force users to purchase the full monty. I don’t have much use for a Fluba or a set of Boobam drums in my composing, but it *would* be nice that if I pay $400 like I did with Sibelius’ library that I could get a trumpet to play a simple accent.
What I would like to see is a Library marketed instrument-by-instrument in a sort of crosstable of musical functions, the same way that you purchase fonts. I know exactly what I want – and what I don’t want: Each instrument’s sampling set that I purchase should recognize each and every music command in standard notation, provided that the sound variation is possible in the physical world. (Yes, Sibelius, it is possible to play an accent on a synthesizer.)
If the samples would simply follow the scores, we would never need to resort to the nightmare of the world of DAWs.
As to dragging the screen artifacts, all you’d have to do is go back to school with standard sheet music. Anything related to the page text should be able to be located and anchored based upon its resting place – it would never be relative to anything else. This is standard sheet music layout. Simple drag and drop should tell the application where – at the moment – the text exists. The only time it ever would move is if the users themselves move it.
Everything related to the music systems, however *would* be relative – it certainly has to move about as the scores are edited because the system text makes sense only where it is located in relation to the systems.
Go back to the basics of desktop publishing with layout – page setup elements behave like margins and pages sizes, etc. They are constraints on where the systems are located and how they will move around. Don’t make it so we have to learn pixel counts and other expressions of trial and error in order to fix text. When we drag it, there it should sit. WYSIWYG design means that the user expects something to behave as it appears – it’s the only way we know how to interact with the computer (thus, the term UI).
I don’t see why this would negatively impact the development of the functionality of systems at all. If a user finds that there isn’t enough space for the systems, then s/he must do something about page setup – this is how all desktop publishing and word processing is done. Once page setup is complete, the Magnetic Layout would respect the location and size of the page text objects and move the systems around accordingly.
I’m willing to have to deal with a few notation issues (re manual fix vs automated placement) in exchange for knowing that my page is not going to be destroyed simply because I’ve now got sixteenth notes in Bar 1 rather than a whole note.
All great points and I fully agree that right now sample sets are pretty expensive and come in some weird ‘bundles’, etc.
The one thing that I didn’t make clear…
In the past…just dragging a note from one stave to another wasn’t so simple as it seems. While they could have made it that way….there’s a bunch of other features and functions it would break in the process. Most of which has to do with the sound engine stuff.
Actually, I’ve seen a few ‘pilot’ tests where users were given a UI that did allow you to drag a note from one staff to another. It caused a whole slew of confusion and frustrated users to no end. Just some examples of what went wrong:
1. What goes in the hole where you dragged the note from? Does it ‘move’ the note, or does it ‘copy’ it? If it ‘moves’ the note, does it put a rest in its place or close the gap? OK, so the user needs to decide and set rules…then space is needed to set those rules, and you then get another 14 pages of ‘user presets and configurations’ for users to find, learn, master…etc.
This also leads to a big problem you’ve described….any time we change the value of some note….it can have a run-away effect on every note and rest that follows and warp the entire score!
(It’s one reason that for some time now I compose with piano-roll editors in a DAW instead….it’s just a lot less hassle to control things and then clean up the score later).
2. If the note has a bunch of channel data with it for the sound engine, i.e. accents, mutes, staccato, etc….it’s not always going to need or use the same information as the staff it came from. So…someone would have to code every possibility for every patch in the sound engine and set up cross-referenced XML tables if the program is going to automatically decide how to ‘convert’ a note that was intended for ‘one synth patch/sample’ into something usable by ‘another’. One ‘quick’ alternative is to just clear everything but the MIDI note, velocity, and length….but you can still run into issues where the user spends as much time correcting what didn’t convert properly as they would have simply ‘drawing a fresh note’ on the second staff to begin with.
3. So much is already done with dragging…that adding even more can add a complex array of modifier keys. I.E. Hold ctrl while dragging. It’s not a problem for some users….but in the pilot tests….quite a few users DO have problems with learning lots of key combos.
The great news:
VST3 helps with a good deal of that. It goes beyond the ‘General MIDI’ specifications and allows more information to be attached to single notes. It also sets some standards for sample and synth libraries so that more things are done the same way across the different sound-sets. Things that were once always ‘channel’ data can now be attached to ‘individual notes’. So progress is being made in that area…Cubase has been gradually implementing the newer protocols since about version 5. We’re at 8 now, so it’s getting pretty robust :)
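The core idea (expression data traveling with the note instead of on the whole channel) can be sketched in a few lines. This is a conceptual illustration only, not the actual VST3 SDK API; the field names are invented for the example:

```python
# Conceptual model of per-note expression: each note carries its own
# dictionary of expression values, instead of relying on channel-wide
# controllers that affect every note on that channel at once.
from dataclasses import dataclass, field

@dataclass
class NoteEvent:
    pitch: int          # MIDI note number
    velocity: float     # 0.0 .. 1.0
    expression: dict = field(default_factory=dict)  # per-note, not per-channel

# One note bent a quarter-tone up; its neighbors on the same staff
# (i.e. the same "channel") are completely unaffected.
bent_note = NoteEvent(pitch=60, velocity=0.9, expression={"tuning": 0.25})
plain_note = NoteEvent(pitch=64, velocity=0.9)
```

Under the old channel-data model, that tuning change would have detuned every sounding note on the channel, which is exactly why dragging a note between staves with different patches was so fragile.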
Better things are coming…unless someone drops the ball. Steinberg offers this fresh team a lot of great engines to use under the hood in getting us a step or three closer to your wish list.
1. If you use the UI as your basis, then you usually don’t run into a lot of contradictions. It’s only when you foist the programming logic on your users do the problems arise. For example, in traditional music notation, each measure has to contain the correct number of beats, including fractions of a beat. (For those of you who wish to re-invent the language of Western music, feel free to invent your own software.)
What this means is that when a measure has accommodated a note in place of the rest(s) that it began with, when that note is dragged (without the Ctrl key held down), then its default operation should be MOVED and a rest should now appear where that note had been originally placed. If the user wishes to COPY the note, then Ctrl+drag or right-click Copy functions should be used. This is all standard Windows and why there seems to be such a penchant for insisting that users develop an entirely new set of UI skills just to learn a single application is beyond me.
2. I don’t understand the necessity of having things ‘run away’ when editing is performed. It would be better to restrain the ‘thinking’ of the application than force the user to clean up a big mess. This would mean that, by and large, only single measures would be recalculated mathematically, and everything else would be related (again) to the display. I believe the bottom-up approach to scripting – that is, make sure everything works at each level of automation – would be the best way to go. Why sacrifice ease of use for 90% of the application for the purpose of force-fitting those last pieces of automation? Either that, or build heuristic intelligence into the app: have the app ask the users what it should do. I would be glad to ‘teach’ any application what I was doing as long as it would remember.
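The standard drag semantics described in point 1 (plain drag moves the note and leaves a rest; Ctrl+drag copies it) are simple enough to state as code. A hedged sketch, with a bar modeled as a plain list of (symbol, duration) pairs purely for illustration:

```python
# Standard desktop drag semantics applied to a bar of music:
# plain drag MOVES the note (a rest fills the vacated beat, so the bar
# still contains the correct number of beats); Ctrl+drag COPIES it.
def drag_note(bar, index, new_index, ctrl_held=False):
    bar = list(bar)          # work on a copy; the caller's bar is untouched
    note = bar[index]
    if not ctrl_held:
        bar[index] = ("rest", note[1])  # rest of the same duration fills the hole
    bar[new_index] = note
    return bar

bar = [("C4", 1), ("rest", 1), ("rest", 1), ("rest", 1)]  # 4/4, one quarter note
```

A real implementation has far more to worry about (overwrites, tuplets, voices), but the point stands: the default behavior can follow platform conventions instead of requiring a new set of UI rules per application.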
Lastly, to me, a DAW is a poor substitute for composing music when you’re used to traditional methods. Leave the DAW world for the sequencer crowd. I didn’t spend the past 50+ years of my life learning the language of music notation only to be foiled by an over-burdened software interface.
For example, so what if we have to use separate staves to play back separate parts? I don’t recall ever asking a notation software developer to design an app which would allow two different human beings to read off the same part while articulating it differently. I view ‘parts’ as representing humans in an orchestra, and if there are two notes occurring simultaneously in the same line of a score and one has an accent and the other doesn’t, what real-world situation would have the same individual playing both? I imagine a polyphonic instrument player (e.g. piano) could theoretically play a chord where one of the notes is accented while the rest aren’t (even though struck simultaneously), but must we develop an entire application founded on this type of bizarre requirement?
Those two violin parts you use in your example are played by two different humans. If the application is designed to understand this from the start, then there will be far fewer problems learning how to use it and make it do what we need it to do.
I agree 200% about having a different editing mode for dragging and fine tuning elements before printing…maybe call it a ‘plate mode’ or something….
That would be great! And should be doable….but I think the piece needs to be more or less composed before diving into that last step.
What does Yamaha bring to the table?
They have a LOT of experience with their ROMplers and synths over the past 3 decades. In short, they’ve learned to do with kilobytes and sub-megahertz processors things that many modern plugin developers are still trying to master with gigabytes and octa-core 4 GHz processors!
They understand ‘psycho-acoustics’ quite well……and they might even be the ‘best in the industry’ at understanding what human ears can hear, not hear, and how to ‘trick the brain’ into hearing things that quite frankly aren’t there, or dismissing various unwanted digital artifacts that are really difficult to predict and totally remove. That means smaller and more efficient plugins that can even run on iPads and stuff!
Hopefully we’ll get a world-class score editor/printer for Cubase in the not-so-distant future :) The composition part of the Cubase editor is already heading in the right direction….it simply needs to PRINT better, more sample libraries need to come with expression maps pre-built for it, and we need MORE libraries geared for HALion and VST3.
Oh…I think it might also be helpful not to think of it as “trying to appeal to DAW users and then bring ‘academic composers’ DOWN to their level.” Rather, think of it as bridging the best of both worlds….and making BOTH approaches more powerful, and easier to use.
Consumers will be able to afford stuff that the Pros have been using for almost a decade now….
I work a good deal in both worlds. I’ve learned a heck of a lot from the ‘pointy headed engineers’ about new ways to think about music composition, and make it all go faster and smoother…AND how to take steps from the very first note in the composition stage to make it sound better.
Case in point….
Music is VERY repetitious in nature. It’s usually very pattern based. Time after time I’ve watched ‘classically trained’ composers sit down and take a week to plug a symphony into a scoring package….that an ‘uneducated’ sound engineer can slam into a DAW and have playing ‘very realistically’, along with a properly marked-up score, in about 2 hours (or even less if they’re pretty experienced engineers).
So much of what is done with music can be modularized…cut, pasted, exported, imported, and used again and again in fresh scores. Yep, it’s amazing the tricks that can be used to speed up even complex symphonic composition (with proper voicings and everything). DAWs also have all sorts of tools to automate otherwise time-consuming composition tasks. E.g. slap in a bunch of block chord progressions that you’ve stored up in a tool box….then run a process that instantly puts it in drop-2 voicing…then go back and move a few things around rhythmically…and presto! 18 bars of composing that would take an hour or more to enter note by note can be worked up in minutes….auditioned…and edited and shaped.
Score editors often have these same tools buried in some menu somewhere….but the irony is that in the academic world they’re sometimes perceived as tools for uneducated poke-and-plug ‘ear composers’…..a misconception that it’s those who ‘know nothing about serious music’ and ‘the theory that shapes it’ who tend to spend a few moments learning the tools and theory that will save them HOURS per session down the road.
Who knows what the former Sibelius team will cook up in their new home. They might do something that looks and feels NOTHING like Nuendo or Cubase. That’s fine :) The cool thing is….they still get access to the underlying ‘engines’….and it has the potential to be AWESOME :)
Just one quick example, then I’ll stop hogging the thread (apologies to those I’m annoying…almost done here…just excited about the potential ahead).
VST3 (Nuendo/Cubase) brings stuff to the table that, as far as I know, nothing else on the market has implemented yet.
Imagine you’ve chosen to score two violin parts divisi style on the same staff. Say you want one to decrescendo while the other crescendos. Imagine you want one to be up bowed, accented, and staccato, and the other to be down bowed, legato, with a slight pitch bend up and a filter clamping down to give it a ‘raspy’ sound.
Traditionally, you’d have no choice but to split the two parts up and put each on its own staff (if you want it to play back right). The reason is that if you draw in volume controller data for the crescendo and synth filters…it’s also going to crescendo and change the filters for all the other notes on that same staff; it’s impossible to make one note get louder and the other softer on the same MIDI channel. It’s also really difficult if not impossible to force one to use an up-bow sample while the other uses a down-bow sample if they’re both on the same staff.
There are also limits in that general MIDI is stuck at 16 channels per port. VST3 can work around some of that as well….perhaps cutting down on the dreaded practice of opening extra ‘instances’ of a plugin just to add another voice or two.
A VST3 engine and plugin can change that…..each note can work independently….even if it’s on the same staff in a score editor, or track in a DAW. Each note can have volume curves and other controller data applied to it independently.
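The difference can be illustrated with a toy model (this is an illustration of the concept only, not the actual VST3 note-expression API): a MIDI CC7 volume message is a CHANNEL message, so it scales every note sounding on that channel, whereas per-note expression attaches its own value to each individual note.

```python
# Toy contrast between channel-wide MIDI volume and per-note expression.
# Function names and data shapes are invented here for illustration.

def channel_volume(notes, cc7):
    # One CC7 value applies to every note on the channel —
    # you cannot crescendo one note while another decrescendos.
    return {n: cc7 / 127.0 for n in notes}

def per_note_volume(note_curves):
    # Each note carries its own expression value, independent of the
    # others, even when they share a staff or track.
    return {n: v / 127.0 for n, v in note_curves.items()}
```

With the channel model both divisi violins are forced to the same level; with the per-note model each can follow its own curve.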
For most of us….we’re really quite used to working with monophonic staves or tracks, so we can deal with issues like portamento glides, pitch bends, slurs, shakes, sforzando attacks, crescendos and decrescendos, different articulations that use key-switches or patch changes, etc. It’s become second nature….but addressing that little issue of it being ‘channel data’ will open up a lot of doors to making ‘score editing’ more powerful and flexible.
That’s just scratching the surface of the tools and engines this team might get to play with that they never had easy access to at AVID. AVID offered some great tools and ideas as well….but now this team gets to play with a new set of toys that, at least in theory….could speed up their task of getting some interesting products out there :)
I do understand your points. I’m simply trying to help connect the dots as to why it developed the way it did. Also, to point out that some new protocols are now on the table that can FIX some of those issues.
1. Other rules and styles were tried in very complex pilot programs…where musicians of all levels come in to play with the software. For stuff that failed or confused users, yet could still be accomplished by pressing ctrl-c, making a single click on the new staff and then pressing ctrl-v, the simple approach worked out better (and took far less code…as well as raising fewer issues that would stall third-party development of audio plugins).
2. The playback engine of all of this stuff is indeed a sequencer, that has little choice but to play by MIDI rules.
3. Notation is ‘interpretive’, and a human can look at a dot and know it means ‘play this at half value’. No problem…but for a computer it’s not always that simple. Particularly if you also add some slurs into the mix which might need to send a portamento signal (which is a channel message, not a ‘note message’).
4. I gave the example of two notes doing different things on the same staff…PRECISELY because that is one of the many challenges that must be faced when simply ‘dragging’ something from one staff to another. What does it do with the accompanying ‘channel data’? What if there is ALSO data on the new staff…does it ‘average’ it or ‘replace’ it, or just ‘clear’ it and make you put it back in?
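That design question can be made concrete with a tiny sketch. The three policies below are exactly the hypothetical options named above — averaging, replacing, or clearing the controller data — not what any real application actually does:

```python
# Sketch of the open design question: when a dragged note lands on a staff
# that already has controller ("channel") data, what happens to the two
# streams? Policy names mirror the options raised in the post.

def merge_cc(existing, incoming, policy):
    """Merge two controller-value streams under a chosen policy."""
    if policy == "replace":
        return incoming          # the dragged note's data wins outright
    if policy == "clear":
        return []                # wipe everything; the user re-enters it
    if policy == "average":
        # Blend the two streams value by value.
        return [(a + b) // 2 for a, b in zip(existing, incoming)]
    raise ValueError(f"unknown policy: {policy}")
```

Each policy is trivially implementable — the hard part, as the post says, is that none of them is obviously what the user meant.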
VST3 can help get around a lot of those issues.
Just as yet another example…
To play a single note on a MIDI instrument there are several instructions involved.
1. Note on.
2. Velocity (speed of key strike).
3. Tons of clocks and stuff for timing.
4. Aftertouch expression (pressure applied to the keys…usually used to change filters and pitch over time so it doesn’t sound like a robot talking instead of music).
5. Note off.
All that stuff is very dependent on ‘clocks’ and ‘timing’. A lot of math goes into trying to interpret what humans plop on the page vs what is actually sounded by the synth engine.
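The raw bytes behind those instructions are standard MIDI channel-voice messages: a note-on (status `0x90` plus the channel) carrying the velocity, an optional channel aftertouch (`0xD0` plus the channel), and a note-off (`0x80` plus the channel). Timing ("clocks") is handled separately by the sequencer and is omitted from this sketch:

```python
# Minimal sketch of the MIDI 1.0 channel-voice messages behind "one note".
# Status bytes follow the MIDI spec; the helper names are our own.

def note_on(channel, pitch, velocity):
    # 0x90-0x9F: note-on for channels 0-15, then pitch and velocity.
    return bytes([0x90 | channel, pitch, velocity])

def aftertouch(channel, pressure):
    # 0xD0-0xDF: channel aftertouch — note it applies to the whole
    # channel, not to an individual note.
    return bytes([0xD0 | channel, pressure])

def note_off(channel, pitch):
    # 0x80-0x8F: note-off; release velocity 0 is the common default.
    return bytes([0x80 | channel, pitch, 0])
```

Even this simplest case shows the channel-centric design the thread keeps running into: aftertouch, like CC volume, addresses the channel rather than the note.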
So…..if all you were doing was drawing a score…and playback were not important, then yes……you could just make objects and elements and place them to your heart’s content….but music works in about 30 dimensions at least. It’s really difficult for a computer to ‘guess’ what you ‘mean’ for it to do.
It needs rules and instructions. Not everyone agrees on the best way to teach the computer to take it all in. So….
They set up big research labs….
Bring in hundreds of average humans…ranging from doctorate-holding opera composers to 12-year-old piano students…and WATCH them all using tools over and over again.
They try things………LOTS of things. And if 7 out of 10 people are confused by something….they save it until the technology is advanced enough to implement it without making the experience WORSE for users.
I think you’ve got GREAT ideas….every single one of them. I’m just excited because this team hopefully has a lot of tools at their disposal that could make implementing your ideas much more plausible, and PART of that IS to use DAW technology (even if you slap a UI on top of it that looks and acts NOTHING like a DAW) and methods to ‘fill the gaps’ and bring the best of both worlds together.
Those two violin parts in divisi……….
I used that as a more ‘obvious’ example….to explain ‘limitations’ of sound engines…and why MANY things are done in notation packages the way they are.
Designing EVERYTHING from the ground up……à la throwing MIDI out the window (in development since the early 1980s)….and creating entirely new standards that third-party developers can work to (which they probably will not if it is radically different from every other product on the market they are also developing for)….well….if they do that….we’re looking at decades before ANY product gets released….not months……….as they’d also have to reinvent synth and sampler technologies….and all the protocols that tie them together.
We don’t have the ability to post screen shots here…..or I’d show you that not all DAWs are glorified beat boxes. A few of them have taken ‘composing’ and ‘editing’ in the score editor quite seriously.
In many ways the ‘poor substitute’ is not much different from any score-only package. They don’t look as good or print as well (yet)….but the ease-of-use gap on the ‘composition’ curve while working with notation is closing fast. A Sibelius-class score module built into a DAW would harm no one….and benefit many.
There’s already a decent score editor…
You click a note value then drop it on the score (or set up any key strokes you like if you’d rather not click with the mouse…as nearly EVERY command in Cubase can be moved to any key on the keyboard you like…or even across multiple keyboards, joysticks, and other UI devices, as well as external MIDI controllers…keyboards…MPC pads…whatever you like). You drag dynamic markers, articulations, etc. into your score.
If the expression is missing for that marking, unlike Sibelius…where you’d have to wade through a massive XML file (and have some understanding of how VST plugins work as well) to figure out how to invent and insert it……you just click an expression box and put in whatever data you need (e.g. stomp a pedal, change the volume, increase the attack velocity, halve the value of the note, etc.) and save it……it’ll be there waiting for all your future sessions. We stop and tell real musicians how we want things played all the time…then often have to ‘rehearse’ it several minutes before we can move on. Sometimes computers need that same kind of attention.
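Conceptually, that ‘expression box’ is a lookup from a score marking to the playback data it should trigger. The sketch below is a toy model of the idea — the markings, action names, and values are invented for illustration, not Cubase’s actual expression-map format:

```python
# Toy "expression map": score marking -> playback actions to fire when
# the marking is reached. All entries here are illustrative assumptions.

expression_map = {
    "staccato":  {"note_length": 0.5},     # halve the sounded note value
    "sfz":       {"velocity_boost": 30},   # harder attack
    "con sord.": {"keyswitch": 24},        # switch to muted samples
    "ped.":      {"cc": (64, 127)},        # stomp the sustain pedal
}

def actions_for(marking):
    # Unknown markings fall back to doing nothing, like a plain notehead.
    return expression_map.get(marking, {})
```

Once saved, a map like this means every future session interprets the same marking the same way — the ‘rehearsal’ only has to happen once.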
When something doesn’t work out quite right….instead of right-clicking through a dozen menus to try to adjust things as dedicated score packages have us do….just swap to a piano-scroll editor and drag the note start and/or note end to ‘precisely’ where you want it in time. No guessing, no ‘interpretation’. You get quantization tools, block edit, and cut-and-paste tools that just don’t translate easily from simple notation at all… And that’s because a scroll editor shows you EXACTLY what is happening…..every single event…from note on, to pitch bend, to note off, etc…in any color you want so it STANDS OUT at you.
Want to put in a percussion part? Sick of it sounding like a robot playing tin cans, and of taking hours to set up all the different note-head shapes and possible staff positions? No sweat…open a drum grid editor that maps everything out for you so that things ‘snap’ in place, drag any needed samples right onto the pad….and it all ends up translating better in score mode with half the effort. The drum map even puts it all in the right place, with the right kind of note-heads, back in the score view.
Sick of drawing those intricate percussion rudiments over and over…note by note? Open your tool box and drag it in the DAW. It’s not quite as easy and intuitive with a score package.
Hit a snag…or even have ‘writer’s block’ with a tricky chord progression? No problem in a DAW….type in chord names on a track and adjust till it’s doing what you want. Want a ‘voicing’ that’s more appropriate for that family of instruments in your new progression? Tell it with a click to voice it for guitar, or harp, or a baroque era string section. And this is just the beginning of the ‘poor substitute’ we get from these DAW applications.
In a real orchestra, every single note has to be ‘interpreted’ by a musician. They have to look at the conductor and lock into a common tempo…get louder when he gives that signal, softer, start and end together, etc. They have to use their ‘ears’ to make it blend and work together.
Before VST3….any notes living on the same staff had to be given secret new channels (which run out fast), or all march along and more or less do the same thing. Not so anymore. You can actually do a harp part…or a contrapuntal organ arrangement on the same grand staff now and drop in the details you need, and with a little practice…the headache of keeping up with different channels and patches for all the details becomes as easy as drag and drop…rather than the ‘move this to staff x and make it invisible…mute that so it doesn’t play…etc.’ that we get with score packages.
Teaching samples to ‘follow along properly’ is just as complex as developing a high-school orchestra….where every kid has to be taught to play….to follow instructions…to interpret the data on the page, and LISTEN to the conductor when they don’t interpret it as he wishes. The computer has to make ALL of those decisions as well. And all those ‘samples’ were made by real musicians……and the system has to decide what to do with them all. When it gets it wrong…you want the tools to ‘quickly and easily’ correct that computer. DAWs give you that level of detail and control……notes on a page do not, and until relatively recently, dedicated score packages would not do this either (and they still need a lot of user help to be corrected…if it’s possible to correct it [in a DAW it’s almost always a few-click fix]).
The old score packages only had 16 channels and a very limited amount of power to decide what to do for all those notes….and getting past those limits has been extraordinary. We’re now asking it to tongue this note hard, slur this one, change to a mute for that one, play a harmonic instead for yet another one…down bow, up bow, sustain with mute and add vibrato………………………….stuff that every individual in that ‘real’ 30 piece orchestra has to contend with.
Currently, the poor substitute is working with a DAW if you need better control and more output than a notation package can do alone. You get many many many power tools for the money, and the learning curve isn’t really that bad.
Also with the poor DAW substitute, we get about 60 layers of ‘undo’ and ‘redo with parameters’…..so if the mouse slips….and throws the score wonky, don’t waste time trying to ‘fix it’….just hit ctrl-z, and try again with a different approach (maybe one of those other poor editors that’ll get it right the first time). Many dedicated score editors have that undo thing as well….but not as many layers and degrees of flexibility as are at hand with a comparably priced DAW.
Grooves are another thing where today’s stand alone score packages have a LOT of catching up to do. If your piece ‘swings’ or has gradual tempo changes, or needs to be precisely timed or side chained to other events…….good luck with a stand alone score editor that’s not synced to special film industry hardware (extra cost options). Stuff that’s easy peasy in a good off the shelf DAW ‘substitute’ can be a nightmare in a run of the mill stand alone score package.
Nothing about western music is being reinvented. Piano rolls actually predate a major percentage of the ‘notation practices’ we use today (jazz in particular…but even classical-style scoring looks MUCH different today….we’ve even renamed half the instruments or substituted them with stuff that didn’t even exist in the 1700s), and having single parts each get their own score staff is actually something rather new as well (mainly because of computer composition).
Condensed scores are far easier for many conductors to analyze and follow. Classical pieces usually had around 8 or 9 staff lines, but far more ‘parts’………where today your typical 6th-grade band music can have as many as 16 staves in a system….even though 12 of them are often in ‘unison’, or simply transposed to new octaves.
So…the example about divisi parts on one line is quite relevant to making the tools more conducive to ‘traditional western music’. It’s about being able to talk to synths and samplers so they can ‘follow along’ with what is on the page.
It’s about making things easier for human conductors….instead of force fitting the technology, and being strapped to highly interpretive mediums for communication.
I know I’d much rather conduct and teach from a well designed condensed score ANY day. I don’t need, nor want every part transposed for me on the page (other than maybe the first woodshed rehearsal with REALLY young kids performing).
And…VST3 (which WAS ‘asked for’, to break free of many general MIDI constraints, as well as UI focus issues on the screen) helps bridge the gap, making it a distinct possibility to drag and drop stuff virtually anywhere on the score and have it still work properly.
In short….if I just want to hammer out a quick score or worksheet that needs to LOOK great, Sibelius is the tool for me.
If I need some oddball contemporary layout to do unconventional notation, I open Finale.
If I am feeling ‘creative’ and want it to sound anything like what is on the page according to how I’d interpret it as a conductor, or just want the ‘easiest’ method of composing any style of music I can imagine, I fire up Cubase every time.
Hopefully, we’re not too far from getting most of that, if not all of it, in the same box.
First of all, what Phil Shaw said!! I don’t see why the big boys cannot agree among themselves on a way to allow as seamless a transfer as possible of notated music from one platform to the other. Are they so insecure? I don’t believe their clientele would fluctuate at all; the same size demographic would more or less move from one to the other. However, it would make it immensely more efficient for those of us who collaborate with, or employ, people using the ‘competing’ software to work on the same piece of music. Huge advantage, well spotted Phil!
I would like to write an extensive note to address what Gregory and Brian have been expatiating on; however, I find this tiny window inappropriate for writing something expansive, and I would like to request that it be redesigned for more than a quick note.
Suffice it to say that a) since I have a little background in programming myself, I have the greatest respect for the people and their efforts to make music-making as intuitive as possible for all of us, and b) I do own Garritan, Wallander, and VSL as a matter of fact, and I have been sequencing for the last 27 years. It takes a huge amount of time to do properly, and it is a counter-intuitive activity while in the realm of inspiration or when working out a piece conceptually on the page.
Sequencing may be equivalent to composition to high-school students and those who think the latest soundtrack to some movie is orchestral music just because an orchestra is performing it (the actual music could be puerile – 2–3 parts and no polyphony to speak of), but I don’t “try to find ‘chords..!’ on a grid” when I am out of inspiration, nor can I conduct Ligeti and Schnittke scores with 16 staves for the strings alone from a condensed score. And I certainly don’t need faithful real-time playback for composition if I am writing something as uncomplicated as the Gladiator main title.
But for real voicings and proper polyphony and balance, and so many other aspects, an automatic and as faithful as possible rendition of the score, to begin with, would be most valuable. More valuable, in fact, than VSLing the score in Cubase, which anyway can always take place subsequently should one wish it. I am basically saying that composition (including orchestration) and realisation are, for many of us, two separate musical activities.
I realise that asking for Playback during composition appears to be contradicting this, but I don’t view Playback as the realisation of the score. I would say it is akin to a composer trying passages on the piano while writing a concerto that Pollini will one day perform properly in Carnegie Hall. Yes, it would be nice to have Pollini try everything during composition, but if I had to cook and clean for him and massage him and explain the music to him every time before he tries any passage for me in my studio, it would unfocus my mind from composition, no matter how much better he would be at playing that music for me.
There have been many improvements in the Playback engines of notation programs through the years (I have used Sibelius since version 2), and I am just urging that this momentum does not cease due to the advent of Rewire and the people who inescapably need electricity to compose orchestral music (DAWs, libraries with ostinati, orchestrated chords, and the like).
Congratulations on what you have achieved thus far, and thank you for your consideration.
I totally support what Metaharmony is suggesting! Score-quality audio output from sample playback, with automation options for each staff, is a must, please. I am presently doing a score for choir and orchestra, with a second version for reduced orchestra.
The existing Sibelius library is not for me, and the Wallander NotePerformer option, which is better, still doesn’t give the audio output that all of us composers need. We need credibility, to give a realistic impression of each instrument to our clients. A mixer option with automation for every instrument staff would really help give the score better representation. Each instrument staff should have the option to export to MIDI and audio in sync with the others; this is not the case now. My wish is a notation package that will quickly give me credible, professional audio output in several formats, with the flexibility to export audio and MIDI in a format from which most DAWs can recall tempo changes, time signature, and the key of the composition. I am presently a Sibelius 7.5 user, a Sonar Platinum user, and a Logic Pro X user. I use Sonic Implants,
Vienna, and Spectrasonics virtual instruments, to name a few. So yes, these days we need to be Mac and PC users.
Daniel, I believe you can do this, and I will be there to become a user of this new product.
Don’t forget to include video tutorials with the new software. The learning curve and the compatibility with Finale & Sibelius are always the scary parts for me when moving on.
Wishing you success.
Absolutely fantastic interview.