
Lomse Hacking Guide

5.1. Sound subsystem overview

5.1.1. Introduction: How Lomse playback works

From the Lomse internal point of view, playing back a score basically involves two steps:

  • The score internal model is parsed to build the sound model.
  • The sound model is then traversed and three kinds of events are generated:
    1. Sound events, for generating sounds.
    2. Highlight events, for adding visual tracking effects to the displayed score. For instance, highlighting notes as they are being played or displaying a vertical line at current beat position.
    3. End of playback events, oriented to facilitate GUI controls synchronization and housekeeping.

It is the responsibility of the user application to handle these events and do whatever is needed, that is, transform sound events into sounds and generate the required visual effects.

The Lomse library provides an implementation for generating visual effects, so that, if desired, the user application can delegate their generation to Lomse.

From the Lomse user application point of view, to play back a score the application has to:

  • Define a class, derived from MidiServerBase. This class will receive the sound events and will have the responsibility of generating the sounds.
  • Create an instance of class ScorePlayer. This class takes care of most of the work to do. By using it, playing a score takes just two tasks:
    1. Load the score to play in the ScorePlayer instance.
    2. Ask ScorePlayer to play it, specifying the desired options (e.g. visual tracking, metronome settings, count-off, etc.).
  • Deal with generated events:
    • All sound events will be sent to the MidiServerBase derived class.
    • All highlight events will be sent to the standard callback for events. To process them, the user application can delegate to Lomse by invoking method Interactor::on_highlight_event().

So, as you can see, implementing score playback in an application is simple, and the only real burden for the application is coding a MidiServerBase derived class for generating the sounds.

5.1.2. The sound model

The sound model is mainly a table containing all sound related events for the score. The table entries are ordered by time, so that generating the real events during playback is just a matter of traversing the table.

Class SoundEventsTable represents the sound model. Its responsibility is to store and manage all sound events for a score. Each sound event is represented by an instance of class SoundEvent.

In order to facilitate starting playback on any measure and also finishing it on any other measure, the sound model is, in fact, composed of two tables:

  • The already commented table, containing all sound events ordered by time. It is represented by variable m_events, defined as std::vector<SoundEvent*>.
  • An auxiliary table of measures, containing the index over m_events for the first event of each measure. It is represented by variable m_measures, defined as std::vector<int>.
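The way the two tables cooperate can be sketched with plain standard containers. This is not Lomse code, just an illustration under the assumption that the measures table holds, for each measure, the index of its first event:

```cpp
#include <utility>
#include <vector>

// Hypothetical sketch: 'measures[i]' is the index into the events table
// of the first event of measure i+1 (measure numbers are 1-based here).
struct MeasureIndex
{
    std::vector<int> measures;

    // Returns the [first, last) event range for playing back measure n.
    // 'numEvents' is the total size of the events table.
    std::pair<int, int> events_for_measure(int n, int numEvents) const
    {
        int first = measures[n - 1];
        int last = (n < (int)measures.size()) ? measures[n] : numEvents;
        return {first, last};
    }
};
```

With this layout, playing from measure m to measure n is just a matter of two lookups, with no search over the events table.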

5.1.3. Building the sound model

Building the sound model is triggered by invoking method ScorePlayer::load_score(). To avoid re-creating the sound model each time the score is played, it is cached in the ImoScore object.

When ScorePlayer::load_score() is invoked, it asks ImoScore for its sound model. If it has already been created and stored, it is just returned. Otherwise, ImoScore triggers its creation by invoking SoundEventsTable::create_table():

void ScorePlayer::load_score(ImoScore* pScore, PlayerGui* pPlayerGui,
                             int metronomeChannel, int metronomeInstr,
                             int tone1, int tone2)
{
    stop();

    m_pScore = pScore;
    m_pPlayerGui = pPlayerGui;
    m_MtrChannel = metronomeChannel;
    m_MtrInstr = metronomeInstr;
    m_MtrTone1 = tone1;
    m_MtrTone2 = tone2;

    m_pTable = m_pScore->get_midi_table();
}

SoundEventsTable* ImoScore::get_midi_table()
{
    if (!m_pMidiTable)
    {
        m_pMidiTable = LOMSE_NEW SoundEventsTable(this);
        m_pMidiTable->create_table();
    }
    return m_pMidiTable;
}

Method SoundEventsTable::create_table() is responsible for building the sound model. To do so, the internal model is traversed (in fact, what is traversed is the StaffObjsCollection table) to identify and create the event entries. Once all entries are created, the table is sorted by time. Finally, the measures table is created:

void SoundEventsTable::create_table()
{
    program_sounds_for_instruments();
    create_events();
    sort_by_time();
    close_table();
    create_measures_table();
}
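The last step, create_measures_table(), could be sketched as follows. This is not the actual Lomse implementation; it just illustrates the idea, under the assumption that once the table is sorted, each event carries the number of the measure it belongs to:

```cpp
#include <vector>

// Hypothetical event record: only the field relevant to this sketch.
struct Event
{
    int measure;    // measure this event belongs to (1-based)
};

// Build the auxiliary table: for each measure, the index of its first event.
// Assumes 'events' is already sorted by time.
std::vector<int> create_measures_table(const std::vector<Event>& events)
{
    std::vector<int> measures;
    int current = 0;                    // last measure seen (0 = none yet)
    for (int i = 0; i < (int)events.size(); ++i)
    {
        if (events[i].measure != current)
        {
            measures.push_back(i);      // first event of a new measure
            current = events[i].measure;
        }
    }
    return measures;
}
```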

5.1.4. Playing back the score

Method ScorePlayer::play() and its relatives (play_measure() and play_from_measure()) are responsible for all playback. They just determine the first and last events to play and invoke the protected method play_segment(firstEvent, lastEvent), which does the real work.

As playing music is a real-time task, all playback is done in an independent thread. Otherwise, the application would be “frozen” while playback is taking place:

void ScorePlayer::play_segment(int nEvStart, int nEvEnd)
{
    //Create a new thread. It starts immediately, executing do_play()
    delete m_pThread;
    m_pThread = LOMSE_NEW SoundThread(&ScorePlayer::thread_main, this,
                                nEvStart, nEvEnd, m_playMode, m_fVisualTracking,
                                m_fCountOff, m_nMM, m_pInteractor);
}

Once created, the sound thread will immediately run method do_play(), which is the core of playback.
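The core idea of do_play() is a real-time dispatch loop: for each event, wait until its time arrives, then fire the corresponding action. A simplified, standalone sketch of such a loop (the event fields and the tempo conversion are assumptions for illustration, not Lomse's actual code):

```cpp
#include <chrono>
#include <functional>
#include <thread>
#include <vector>

// Hypothetical event: a time in score ticks plus a callback standing in
// for the note_on()/note_off() dispatch done by the real do_play().
struct TimedEvent
{
    long ticks;                         // event time, in score ticks
    std::function<void()> fire;         // action: e.g. send note_on to MIDI
};

// Play events[first..last), sleeping between events so that each one
// fires at the right wall-clock moment. 'msPerTick' encodes the tempo.
void play_segment(const std::vector<TimedEvent>& events,
                  int first, int last, double msPerTick)
{
    using namespace std::chrono;
    auto start = steady_clock::now();
    for (int i = first; i < last; ++i)
    {
        auto when = start + milliseconds((long)(events[i].ticks * msPerTick));
        std::this_thread::sleep_until(when);    // idle until event time
        events[i].fire();                       // real time: make the sound now
    }
}
```

This also shows why the MIDI server needs no time computations: by the time one of its methods is invoked, the loop has already waited for the right moment.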

5.1.5. Transforming events into sounds: sound API

The sound thread method do_play() generates sound events and sends them directly to the MidiServerBase derived class provided by your application. Remember that an instance of your derived class was a required parameter for creating the ScorePlayer object:

ScorePlayer(LibraryScope& libScope, MidiServerBase* pMidi);

Class MidiServerBase is very simple:

class MidiServerBase
{
public:
    MidiServerBase() {}
    virtual ~MidiServerBase() {}

    virtual void program_change(int channel, int instr) {}
    virtual void voice_change(int channel, int instr) {}
    virtual void note_on(int channel, int pitch, int volume) {}
    virtual void note_off(int channel, int pitch, int volume) {}
    virtual void all_sounds_off() {}
};

Events are sent to your derived class just by invoking any of these virtual methods. Invocation of these methods is done in real time, that is, the do_play method is responsible for computing and determining the exact time at which a note on / note off has to take place, and it invokes the respective method, note_on() or note_off(), at the appropriate time. This implies that your MIDI server implementation is only responsible for generating or stopping sounds when requested; no time computations are needed.
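Since MidiServerBase is shown in full above, a derived class can be sketched completely. This toy implementation just records the calls it receives; a real one would forward them to a MIDI device or synthesis library instead:

```cpp
#include <string>
#include <vector>

// Base class, as declared by Lomse (reproduced from the listing above).
class MidiServerBase
{
public:
    MidiServerBase() {}
    virtual ~MidiServerBase() {}

    virtual void program_change(int channel, int instr) {}
    virtual void voice_change(int channel, int instr) {}
    virtual void note_on(int channel, int pitch, int volume) {}
    virtual void note_off(int channel, int pitch, int volume) {}
    virtual void all_sounds_off() {}
};

// Toy server: logs events instead of producing sound. A real implementation
// would call the platform MIDI API from these methods, and nothing else:
// all timing is handled by Lomse before the methods are invoked.
class LoggingMidiServer : public MidiServerBase
{
public:
    std::vector<std::string> log;

    void note_on(int channel, int pitch, int volume) override
    {
        log.push_back("on " + std::to_string(pitch));
    }
    void note_off(int channel, int pitch, int volume) override
    {
        log.push_back("off " + std::to_string(pitch));
    }
};
```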

[TODO6]: I would like to modify the Lomse library to add an interface to the JACK audio system (http://jackaudio.org/). I don’t know if this will widen the options; perhaps the current MIDI interface is enough, but I would like to study the issue when I find time.

5.1.6. Generating visual effects

Apart from generating sound events, the do_play method also generates highlight events, that is, events to add visual tracking effects, synchronized with the sound, on the displayed score.

Highlight events are modelled by class EventScoreHighlight. It contains a list of sub-events and the affected objects:

k_highlight_on_event,           // add highlight to a note/rest
k_highlight_off_event,          // remove highlight from a note/rest
k_end_of_higlight_event,        // end of score playback. Remove all highlight.
k_advance_tempo_line_event,     // move tempo line to next time position

These highlight events are sent to your application via the event handling callback, set up at Lomse initialization.

Important

When your application receives and handles a highlight event, it MUST create an application event and enqueue it in the application events queue. The idea is to return control immediately to the sound thread, so as not to introduce delays that would negatively affect the sound tempo. Then, when the sound thread is idle, waiting for the right time to generate the next sound event, your application will get control and will process the enqueued highlight events without interfering with sound generation.
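The enqueue-and-return pattern can be sketched with a small thread-safe queue. The event type and the queue are illustrative, not part of Lomse; the point is that the handler invoked by the sound thread only pushes and returns, while the GUI thread pops and does the actual drawing later:

```cpp
#include <mutex>
#include <queue>

// Minimal application event: in a real application this would wrap the
// Lomse highlight event received in the callback.
struct AppEvent
{
    int type;       // e.g. a copy of the Lomse event type code
};

// Thread-safe queue shared between the sound thread and the GUI thread.
class EventQueue
{
public:
    // Called from the Lomse callback (sound thread): push and return at
    // once, so the sound tempo is not disturbed.
    void post(const AppEvent& ev)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_queue.push(ev);
    }

    // Called from the GUI thread when idle: fetch one pending event,
    // returning false when the queue is empty.
    bool poll(AppEvent& ev)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_queue.empty())
            return false;
        ev = m_queue.front();
        m_queue.pop();
        return true;
    }

private:
    std::mutex m_mutex;
    std::queue<AppEvent> m_queue;
};
```

Most GUI toolkits already provide an equivalent mechanism (e.g. posting custom events to the main loop), which can be used instead of a hand-written queue.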

5.1.7. Default visual effects in Lomse

For generating visual effects you have two options: either do it yourself, by modifying the graphic model as desired, or delegate to Lomse the generation of standard visual effects. Currently, Lomse offers two types of visual effects:

  • Colouring notes as they are being played. This is fully operational.
  • Displaying a vertical colored tempo line across the system, positioned at the current beat. This is not yet finished and, therefore, not yet available.

When handling a highlight event, if your application would like to delegate visual effects generation to Lomse, the only thing to do is to pass the event to the interactor:

pInteractor->handle_event(pEvent);

The Interactor will handle the event and modify the graphic model. Finally, it will send an update window event to your application, for updating the display.

[TODO7]: As explained above, handling highlight events in your application can be a round trip that ends by delegating their handling to Lomse. This round trip is very important to ensure that the code implementing visual highlight is executed in your application thread instead of in the Lomse playback thread. To remove this burden from your application, probably the best solution would be to implement, in Lomse, an event subsystem with its own thread. But there is a lot of work to do and I have to prioritize the necessities.

5.1.8. End of playback events

Besides generating highlight events, the do_play method generates a k_end_of_playback_event to signal the end of playback. This can be useful for restoring GUI controls related to playback. This event is also sent to your application via the event handling callback set up at Lomse initialization.

5.1.9. To do: future improvements

The main pending improvements are those noted above as [TODO6] and [TODO7]: adding an interface to the JACK audio system, and implementing an event subsystem with its own thread so that applications do not have to route highlight events back to Lomse.

If you would like to contribute to the Lomse project by working on any of these issues, you are welcome. Please send me a message. Thank you.