Musical Instrument Digital Interface
Interface: “…a means of interaction between two systems…” (Webster’s)
The term is used broadly, and sometimes confusingly, in electronics, computing, and programming, as well as in less technical subjects.
This word – and many of these articles – embraces a topic which I call digital control. A simple form of digital control is when you walk up to a light switch in your house and turn it on or off. One major use of computing machines is to automate this basic action to control all sorts of things: huge electrical machines, power generation and distribution systems, lighting systems for buildings and entertainment, manufacturing tools, and many other devices.
Any control system has three basic components:
- a controller;
- the thing to be controlled;
- some sort of feedback mechanism to tell the controller if it was successful.
In MIDI and several other simpler digital control systems, the only feedback channel is through the human senses. In other words, when a musician hits a key on a MIDI controller, the only way he knows it worked the way he wanted is by listening to what comes out of the speakers.
My resource for this little introductory article is a book from my library: The MIDI Companion by Jeffrey Rona (1994).
MIDI was introduced in 1983 by the growing electronic musical instrument industry as a standard way for controllers to talk to synthesizers. As the name implies, it was digital. In fact, it depends entirely on microcontrollers to work at all.
I define a microcontroller as any electronic control device built into a piece of equipment that requires software (or firmware – embedded software) to make it work. The actual technical definition is maybe a little narrower than this.
Thus, MIDI has two main technical aspects to it:
- The technology of making musical sounds using electronics, which heavily uses the terminologies of acoustics, the science of sound.
- The technology of digital communications, which uses the terminologies of computer systems and computer science.
Advanced users of MIDI must learn both technologies and the various terminologies involved. A “casual” user of MIDI only needs to learn the basic acoustical terminologies, along with having some knowledge of music, of course.
The photo at the top of this article shows my controller. It has an ordinary musical keyboard to play notes, and quite a few control knobs, sliders and switches that tell the synthesizer more about the acoustical properties of each note.
Controller hardware can have a wide variety of capabilities built into it. But what it must do is send instructions to the synthesizer that will result in musical sounds coming out.
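To make those "instructions to the synthesizer" concrete: a MIDI key-press is just three bytes sent down a serial cable. The sketch below builds a standard Note On message in Python; the function name `note_on` is my own illustration, not part of any library, but the byte layout (status byte, note number, velocity) is how MIDI actually defines the message.

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI "Note On" message.

    channel:  0-15  (shown to users as channels 1-16)
    note:     0-127 (60 = middle C)
    velocity: 0-127 (how hard the key was struck; 0 acts as Note Off)
    """
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("MIDI data out of range")
    status = 0x90 | channel          # 0x90-0x9F: Note On, one per channel
    return bytes([status, note, velocity])

# Pressing middle C fairly hard on channel 1:
msg = note_on(0, 60, 100)
print(msg.hex())   # -> "903c64"
```

Every channel message follows this same pattern: one status byte whose upper bits say what kind of event it is, followed by one or two data bytes carrying the details.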
The synthesizer is the device being controlled. Its job is to make sounds by responding correctly to the control messages sent to it over its MIDI connection from the controller. Most synths also have some stand-alone sound production capability, so they can be tested without a full controller attached.
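Seen from the synth's side, "responding correctly" means decoding those same bytes. Here is a hypothetical sketch of that dispatch step (the function `handle_message` is my own name for illustration); splitting the status byte into its two halves is how MIDI really encodes message type and channel.

```python
def handle_message(data: bytes) -> str:
    """Decode a 3-byte MIDI channel message into a human-readable string."""
    status = data[0]
    kind = status & 0xF0      # upper nibble: the message type
    channel = status & 0x0F   # lower nibble: channel 0-15
    if kind == 0x90 and data[2] > 0:
        return f"note {data[1]} on, channel {channel + 1}"
    if kind == 0x80 or (kind == 0x90 and data[2] == 0):
        return f"note {data[1]} off, channel {channel + 1}"
    return "other message"

print(handle_message(bytes([0x90, 60, 100])))   # -> note 60 on, channel 1
```

Note the quirk in the second branch: a Note On with velocity 0 is treated as a Note Off, a shortcut real instruments use so a stream of key events can stay in one message type.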
I have recently acquired a hardware synth, which is the main reason for this article. Before, I only had "software" synths, which run on a computer and play through the computer's sound system. That adds a layer of complexity that I didn't want to have to address in writing about MIDI. With this synth, that complexity is eliminated.
Future articles will explore this technology in more depth.