Monday, August 3, 2015

Reasons Why I Use the Mutt Mailreader

My favored mailreader is mutt.

The running joke is that I like it because it's conspicuously antediluvian.  Well, I can't say I dislike that about it, but there are better and more accurate reasons why I actually like it.

The first and most important reason is that it has support (after a fashion) for tagging of mail messages.  I grew up (so to speak) on the Berkeley mailreader, which stored old messages into an array of files within an archive directory.  Although it's the term "directory" and not "file" that implies "folder" in a post-Windows world, these files are the moral equivalent of modern mail folders.

And folders are a distinctly sub-optimal way of organizing mail.  Suppose I have a folder for bills and statements, and a separate folder for medical.  So an e-mail receipt for the gas bill goes in the bills-and-statements folder, and an eyeglass prescription goes in the medical folder.  But what happens if I get a medical statement?  Where does that go?  Either I have to pick one folder for it to go in, or I save a copy in both folders.  The former makes it more difficult for me to find the message later on, and the latter is more tedious (some mailreaders consciously resist any attempts to store multiple copies) and causes consistency problems in case you want to go in and edit messages (for example, to make notes).

The proper solution to this problem is to support mail tagging, à la Gmail.  In Gmail, one creates tags, not folders, and then any number of tags can be attached to a given message.  One can put both the bills-and-statements tag and the medical tag on a medical statement e-mail, and then it will show up whenever you search either.  More usefully, you can search for both tags together, and then only medical statements (and anything else that has both tags simultaneously) will show up.  When I started using my Gmail account, I was blown away by how powerful an organizing mechanism tags were.  They basically implement multiple inheritance.  I never wanted to go back to folders for my personal e-mail.  I mean, social networking (including this blog) relies critically on tagging; why shouldn't e-mail?

Work e-mail, alas, was a different matter.  Understandably, they wanted people to use the company e-mail address and not a Gmail address, and the corporate IT infrastructure didn't support using the Gmail interface (at either of the places I worked at)—until, that is, I discovered mutt's tagging support.

To be sure, it is support after a fashion: It provides support for the X-Label header field, in terms of displaying it, but scripts have to be added in order to support adding the tags yourself (because tags aren't very useful if you have to insert them into the e-mail manually).  There's a certain amount of, ahh, customization needed to make the experience minimally unpleasant, but it's worth it.  The corporate-approved mailreader doesn't support tagging, and I won't (willingly) switch to it until it does.  We recently switched to an Exchange server, and that threatened to coerce me into the corporate mailreader, but I found a solution, Davmail, that provides an IMAP interface to an Exchange server, and that has permitted me to happily continue tagging my e-mail.
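For the curious, the mutt side of this looks roughly like the following.  This is a sketch rather than my literal configuration: the "add-xlabel" helper is hypothetical (any script that inserts an X-Label header would do, and as noted above, making the edit stick takes a certain amount of extra scripting), while %y, ignore/unignore, and macro are standard muttrc features.

```muttrc
# Show the X-Label value (the %y index-format escape) in the message index.
set index_format="%4C %Z %{%b %d} %-15.15L (%?l?%4l&%4c?) %?y?[%y] ?%s"

# Make the X-Label header visible when reading a message.
ignore *
unignore from: to: cc: date: subject: x-label:

# Pipe the current message through an external tagging script
# (add-xlabel is a hypothetical helper that applies the tag).
macro index ,t "<pipe-message>~/bin/add-xlabel<enter>" "tag message with an X-Label"
```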

But that's only the most important reason I cleave to mutt.  Among others:
  • It can be used on any dumb text terminal you can think of, as long as it can log into my machine.  I occasionally have to check my mail on some remarkably incapable devices, and mutt will work on all of them.
  • It is blindingly fast, meaning that I can access and search my entire mail archive from years back and expect results back effectively the moment I hit the enter key.
  • It is remarkably configurable.  That's not a bonus for some people, but I like tinkering with my e-mail interface, and this suits me.
  • A somewhat backhanded compliment of mutt is that it prevents me from being exposed to e-mail attacks that depend on code being automatically loaded and executed within the e-mail message.  Well, OK, I do like that, but it's really a way of admitting that mutt can't possibly support the same kind of message display interface that a graphical mailreader can.
Mutt's slogan sums it up nicely: "All mail clients suck. This one just sucks less."

Thursday, June 11, 2015

Harmolodics and Holomorphy

Ornette Coleman died today.

And with him died any chance for an authoritative version of his treatise on harmolodics, which he had reportedly been working on for decades.  Oh, I daresay we may eventually see some fractured notes (pun intended) about harmolodics, but we will not see the definitive statement of what it is.

To be sure, it's entirely possible that any treatise about harmolodics would have been allusive and telegraphic at best.  Coleman was notoriously cagey about describing harmolodics, and players in Prime Time, Coleman's group, were obviously fearful of being pinned down to any concrete statement that might get back to Coleman (who understandably might be upset about his creation being characterized in a way not to his liking).

Practically speaking, harmolodics was what Coleman played with Prime Time, or at least aimed at playing.  He was said to have denied that any of his albums actually achieved harmolodic playing.  So we have no guarantee that any particular piece was exemplary of his musical philosophy.  In some sense, then, there might not be any ironclad difference between harmolodics and entirely free jazz.

Nonetheless, the nagging suspicion of many a listener was that there was something to harmolodics, that it didn't sound entirely free, that there was some structure lurking in there somewhere.  We might even imagine Ornette himself, driven by inspirations even he couldn't completely articulate, nonetheless moving the music in directions that felt "right" to him, if not specified or unique.  It's a tantalizing task to try to describe what that structure might be like.

If any authoritative vision of harmolodics died with him, so however did the possibility of being declared definitively wrong.  Musicology is in a sense freer now to come up with a descriptive notion of harmolodics, as opposed to what might have been Coleman's own more prescriptive one.  So here are my personal thoughts on harmolodics, based on a moderate amount of listening to Ornette Coleman recordings.

It's an odd idea, the concept of Coleman prescribing what harmolodics was, because even if it wasn't entirely free, he still viewed it as being freer than traditional jazz.  Still, he did seem to consistently assert that harmolodics was about denying the hegemony of harmony.  He viewed harmolodic music as equal parts harmony, melody, rhythm, dynamics, articulation, etc., all acknowledged as parts of a musical performance.  Granted, it's probably not possible to say precisely what "equal" means in this context (can you imagine measuring a particular piece to be exactly 75 percent harmony and 25 percent melody?), but it's hard to deny that traditional jazz performance is driven more by harmony—the chord changes—than by the melody in the head.  Presumably, that dominance is what Coleman wanted to counter; he frequently alluded to a "democracy" amongst the performers and the music they created.

One of the things that strikes me when listening to Prime Time and other ostensibly harmolodic groups play is that although any piece may seem to meander along aimlessly, individual segments of it typically do not.  That is, if you were to listen to any one-second snippet of a harmolodic piece, it "makes sense" in a way that we don't usually associate with harmolodics.  It sounds like it could come out of many a jazz piece.  So perhaps one thing that distinguishes harmolodics from other jazz forms is that the parts that make sense don't persist as long in harmolodics.

Let me try to make that more explicit by reference to traditional jazz pieces.  Suppose we're looking at a twelve-bar blues, the most traditional of the traditional jazz forms.  Everyone plays this at some point.  Even in a jazz setting, with its penchant for alteration, a fairly standard chord progression runs

| C7    | F7    | C7    | %     |
| F7    | %     | C7    | Em A7 |
| Dm7   | G7    | C7 A7 | D7 G7 |


Because everyone is playing to the same chart, whenever the bass is playing G7, so is the piano, so is the horn, etc.  It all "makes sense," because each performer is playing notes in the same scale.  We might characterize such playing as all taking place along the same line, or "linear."

What's more, the transition from, say, G7 to C7, although it's not exactly the same scale, is very nearly the same.  It differs in exactly one spot: The position occupied by B in the G7 scale becomes a Bb in the C7 scale.  So although it's not exactly on the same line, it's still diatonic.  We might say that it's in the same plane, to stretch (ever so slightly) a mathematical metaphor.  Thus it's not very surprising to hear.  Most of the other transitions in these changes are like that, and even those that aren't, are so familiar to our ears that we don't find them jarring at all.  On the contrary, those transitions are so familiar that it becomes jarring when we don't follow them.
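To make that concrete (this sketch is mine, not part of the original argument), we can compare the pitch-class sets of G Mixolydian and C Mixolydian, the scales most naturally played over G7 and C7, and confirm that they differ in exactly one spot:

```python
# Compare the notes available over G7 and C7, using pitch classes 0-11 (C = 0).
NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def mixolydian(root):
    # Mixolydian scale: root, M2, M3, P4, P5, M6, m7 above the root
    return {(root + step) % 12 for step in [0, 2, 4, 5, 7, 9, 10]}

g7 = mixolydian(7)   # G Mixolydian
c7 = mixolydian(0)   # C Mixolydian
diff = g7 ^ c7       # notes in one scale but not the other
print(sorted(NAMES[p] for p in diff))  # the single point of difference: B vs. Bb
```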

It occurs to me that there is an analogue to be made here between the familiar plane of traditional jazz and harmolodics on one hand, and the familiar plane of Euclidean geometry and curved space on the other hand.

I've talked about curved space in other contexts before, where it's directly related to gravitation.  Here, obviously, the application is less precise, but I'll try to keep it from being wholly vacuous.  The idea is that when we say a section of music is diatonic, that's like saying it's flat—and I don't mean "flat" as in opposite of "sharp," or even that it's uninspired.  It simply means that it obeys the familiar rules of traditional jazz.

When it came time to specify what curved space means in physics, one of the central motivating tenets is that although it's globally curved, locally it's flat, in the limit.  That's why wherever you are in the universe, as long as you're relatively small (small compared to the curvature of spacetime), things behave more or less the way you're used to.  That's relativity.


In the same way, when you're listening to a piece of harmolodic music, although the whole of it doesn't constrain itself to any single musical plane, locally (that is, at any immediate moment), it does.  In particular, that means there aren't any immediately jarring transitions; the music changes smoothly (differentiably, we might say!) from one moment to the next.  That's what gives harmolodic music the feel of being unanchored, and yet not having any moments of discontinuity, where what happens next is wholly divorced from what came before.

And how does one arrive at what comes next?  To my ear, that's where the democracy that Coleman was striving for comes in.  In traditional jazz, the lead chart—the chord sequence—dictates what comes next.  When I listen to harmolodic music, what I hear is an instantaneous bending of the musical fabric, where at any moment, any performer might play the note, or the rhythm, or even the articulation that changes the direction of the group and the music as a whole.  Maybe, if the recent actions of the rhythm section have pointed toward a C major scale, the horn might begin C-E-D-F—

—but then continue E-G#-F#-A-Ab-C-Bb-Db, following the intervallic motive of up a major third, down a major second, up a minor third, down a minor second, and then repeating a major third higher.  The bass and piano might follow suit—perhaps playing in double time for a moment to match the speed of the melodic line—but only for the moment, before one or the other of them again takes the lead in steering the music in yet another direction.
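The motive above can be spelled out mechanically.  Here's a sketch of my own in Python; the pitch spellings come out enharmonically (Ab where the prose says G#, Gb for F#), since it works in raw semitones:

```python
# The intervallic motive described above: up a major third (+4 semitones),
# down a major second (-2), up a minor third (+3), down a minor second (-1),
# with the whole cell repeating a major third higher each time.
NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
MOTIVE = [4, -2, 3, -1]  # semitone steps

def spin_motive(start=0, cells=3):
    pitch, line = start, [start]
    for _ in range(cells):
        for step in MOTIVE:
            pitch += step
            line.append(pitch)
    return [NAMES[p % 12] for p in line]

print(spin_motive())  # C-E-D-F, then the same shape a major third higher, etc.
```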

Obviously, carrying such an idea to fruition requires the performers to listen intently to each other, and to develop an almost preternatural intuition about their fellow musicians and their likely directions.  It's an interesting balance, though, since too little anticipation means that the music won't make sense for long stretches, while too much anticipation means implicitly restricting where the music can and can't go, and paradoxically limiting the very freedom that the approach was meant to foster.  Still, properly handled, it could enable a group to produce music that sounds cohesive and yet is freed from many of the shackles of traditional jazz.  To put it in the vernacular of the time in which harmolodics started, it would allow the music to ascend to a higher dimension.

I hope to make some time in the future to look at specific recordings and use them to substantiate the general framework I've described here.  (Also, I realize there's precious little reference to holomorphy here, other than the one mention of differentiability, but I couldn't resist the alliteration.)

Tuesday, May 26, 2015

No More Dirty Looks

This article makes for interesting reading, and I love the introductory comic. But despite making some insightful points, this open letter tends to put up a barrier to progress—a barrier that could be resolved with a more conciliatory approach, I believe.

Some of the problems are minor:
  • The letter is repetitive. Homework creates a burden for parents. It also takes away time. It also causes conflict for families. Each point is placed in a separate bullet, which can't help but overwhelm the reader into thinking the conclusion must be right along so many different directions.
  • He also believes that a single anecdotal piece of evidence (the educational background of his daughter) is compelling.  I'm sure it is to him, because she's his daughter, but that's an advantage that her teachers don't have.  They are beholden to many more people than that.
  • The letter places any objection in a belittling light.  This is a "(hopefully minor) conflict."  The implication is that it's minor, unless the teacher makes it major.  (That won't happen, so long as the teacher simply acquiesces, perhaps.)
But some of the other flaws run deeper. In an attempt to bring the teacher on board, the writer also commiserates about the burden that assigning homework places on teachers. Homework may well create such a burden, and teachers may sometimes complain about it. Nonetheless, it was a burden they knew was there when they decided to become teachers.  That burden is still there, and is now accompanied by the burdens of coming up with new ways to ensure that Johnny is figuring out what he needs to figure out now that he's a homework Conscientious Objector. Oh, not to mention the letters and phone calls from parents who (quite rightly) wonder why their kids should have to do homework when Johnny doesn't. Or worrying about keeping their job under administrators who aren't particularly sympathetic.

This letter doesn't much recognize these additional pressures that its unilateral declaration imposes on the teacher. (One of the problems with such an open letter is that it biases the discussion—the open letter becomes the presumed position, from which opponents must come to dislodge the writer, rather than the position arising out of a balanced dialogue.)  That might be because the writer is also writing school administrators, city council members, legislators, etc., in a broad campaign aimed at reforming the way homework is assigned and managed in the school curriculum.  Or it might be because the writer recognizes that any such acknowledgement will weaken support for this position and therefore chooses to omit it.  Without further elaboration, one simply can't tell.

As a reader, and as a parent, I think that the conclusion (that homework should mostly be done away with) is appealing, and should therefore be viewed with the greatest suspicion.  The notion that homework is an outmoded relic is enticing on so many different levels that we are predisposed to accept it.  But one of the lessons of science is that one can so easily convince oneself to accept imperfect arguments and insufficient evidence on behalf of a position one is inclined to believe in the first place.  We sometimes hear that extraordinary claims require extraordinary evidence.  Attractive claims should be added to that maxim.

To be sure, some of the observations do occasionally fit the bill.  Sometimes, homework is just busy work.  It's excessive.  It's misguided.  It's boring.  Does that mean that the inevitable solution is to jettison practically everything with the bathwater, except (here are the writer's two exceptions) reading and other homework that the kids find engaging?  Instead of getting rid of homework because it's broken (if indeed it is), why not figure out what's broken about it, and how to fix it?  And while there'll be no objection from me about requiring reading, who's to say what kids find engaging?  The kids themselves?  The writer?  As human beings, we often find grass-roots approaches like this engaging because they feel organic, natural, unforced, and while there may be something to that, it's one thing for an approach to work at a family or even a single-school level, quite another for it to scale to the district level, let alone the state.

Despite a perfunctory invitation to discussion at the end of this letter, its tone brooks no debate, and therefore runs the risk of setting the interaction on an oppositional edge practically before it begins.  It seems to me that whatever change the writer hopes to make could be achieved less confrontationally (if less social-networkily) by making a series of observations to educators about what he finds flawed about homework.  That could progress to a discussion of what the aim of homework (whatever form it might take) should be, and at what levels change should take place in order to benefit children most pervasively.  Interested parents and teachers could support each other.  Instead, the writer chooses a direct and public we-will-not-actively-support-you-on-any-homework-we-don't-approve-of line.  An interesting approach to public consensus, but I can think of better.

Friday, May 22, 2015

The Most Beautiful Equation in Mathematics

What follows is a bit I did over at Math StackExchange.  Posting it over here was an experiment in whether the mathematical typesetting would transfer correctly in a copy-and-paste.  For the most part, as long as I leave it alone, it seems to have done so (modulo the line breaks being lost in the shuffle).

Euler's equation

e^(iπ) + 1 = 0

is considered by many to be the most beautiful equation in mathematics—rightly, in my opinion. However, despite what Gauss might say, it's not the most obvious thing in the world, so let's perhaps try to sneak up on it, rather than land right on it with a bang.

It's possible to think of complex numbers simply as combinations of real values and imaginary values (that is, square roots of negative numbers). However, plotting them on the complex plane provides a kind of geometric intuition that can be valuable.


On the complex plane, a complex number a+bi is plotted at the point (a,b). Adding complex numbers is then just like adding vectors—(a+bi)+(c+di)=(a+c)+(b+d)i, for instance—just as you might have expected. (It's probably useful to draw some of these out on graph paper, if you can.)
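If graph paper isn't handy, Python's built-in complex type (which writes i as j) will do:

```python
# Complex numbers add like vectors: real and imaginary parts add separately.
a = 3 + 4j   # the point (3, 4)
b = 1 - 2j   # the point (1, -2)
print(a + b) # (4+2j), i.e. the point (4, 2)
```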

Multiplication is where things get a little unusual. Multiplication by real values is just as you'd expect, generalizing from the one-dimensional real number line to the two-dimensional complex plane: Just as k times a positive number is (for positive k) another positive number k times as far from the origin, and correspondingly for negative numbers, k times a complex number is another complex number, k times as far from the origin, and in the same direction.

But multiplication by imaginary values is different. When you multiply something by i, you don't scale that something, you rotate it counter-clockwise, by 90 degrees. Thus, the number 5, which is 5 steps to the east (so to speak) of the origin, when multiplied by i becomes 5i, which is 5 steps to the north of the origin; and 3+4i, which is to the northeast, becomes −4+3i, which is to the northwest. And so on.
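Again, this is easy to check with Python's complex type:

```python
# Multiplying by i (1j in Python) rotates a point 90 degrees
# counter-clockwise about the origin:
print(5 * 1j)         # 5 steps east -> 5 steps north: 5j
print((3 + 4j) * 1j)  # northeast -> northwest: (-4+3j)
```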



OK, let's step away from the complex plane for a moment, and proceed to the exponential function. We're going to start with the ordinary ol' real-valued exponential function, y = e^x. There are lots of exponential functions: 2^x, 10^x, π^x, and so on. But there's something special about the exponential function with e, Euler's number, as its base.

If you graph y = e^x, you get a curve that starts out at the far left, coming in from (−∞, 0) (so to speak), and proceeds rightward, crawling very slowly upward, so slowly that by the time it gets to x = 0, it's gotten no further upward than (0, 1). After that, however, it picks up speed, so that further points are (1, e), (2, e^2), (3, e^3), and so on, and by the time x = 20, we're nearly halfway to a billion.

Another way to put that is that the derivative of y = e^x, which you might think of as its slope, starts out as an almost vanishingly small number far to the left of the origin, but becomes very large when we get to the right of the origin.

To be sure, all exponential functions do that basic thing. However, the very unusual thing about y = e^x is that its derivative—its slope, in other words—is exactly itself. Other exponential functions have derivatives that are themselves multiplied by some constant. But only the exponential function, with e as its base, has a derivative that is exactly equal to itself.

It's very rare that an expression has that property. The function y = x^2, for instance, has derivative (or slope) y′ = 2x, which is not equal to x^2. But if you want to know the slope of y = e^x at any point, you just figure out what y is, and there's your slope. At x = 1, for instance, y = e ≈ 2.71828, so the slope there is also e ≈ 2.71828.

The only functions that have that property have the form y = Ce^x, where C is any constant.

There's another way to think of the derivative that is not the slope, although it's related. It has to do with the effect that incremental changes in x have on y. As we saw above, the derivative of y = e^x, at x = 1, is also y = e^1 = e ≈ 2.71828.

That means that if you make a small change in x, from 1 to 1 + 0.001 = 1.001, then y makes approximately 2.71828 times as much of a change, from 2.71828 to 2.71828 + 0.00271828 ≈ 2.72100. This is only accurate for small changes, the smaller the better, and in this case at least is exact only in the limit, as the change approaches zero. That is, in fact, the definition of the derivative.
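This incremental view is easy to check numerically; a small Python sketch of my own:

```python
import math

# Numerical check that the derivative of e^x at x = 1 is e^1 itself:
# a small nudge h to x changes e^x by about e^x * h.
x, h = 1.0, 1e-6
slope = (math.exp(x + h) - math.exp(x)) / h
print(slope, math.exp(x))  # both about 2.71828
```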



Now, let's return to the complex plane, and put the whole thing together. Let's start with e^0 = 1. We can plot that point on the complex plane, and it will be at the point with coordinates (1, 0). It's important to remember that this does not mean that 0 = e^1. The value of x is not being plotted here; all we're doing is plotting y = e^0 = 1 = 1 + 0i, and that 1 and 0 are the coordinates of (1, 0), which is one step east of the origin. By the unusual property of e^x, the derivative is also 1.

Suppose we then consider making a small change to x = 0. If we add 0.001 to x, we make a change to e^x that is equal to the derivative times the small change in x. That is to say, we add the derivative, 1, times the small change, 0.001, or just 0.001 again. So the new value would be close to (though not quite exactly) 1.001, which is represented by the point (1.001, 0). It would be in the same direction from the origin—east—as the original point, but 0.001 further away.

But what happens if we add not 0.001 to x, but 0.001i? The derivative is still 1, so the incremental impact on e^x is the derivative, e^x = 1, times 0.001i, or 0.001i again. So the new value would be close to (though, again, not quite exactly) 1 + 0.001i, which is represented by the point (1, 0.001). It would be 0.001 steps to the north of (1, 0), because the extra factor of i rotates the increment counter-clockwise by 90 degrees.

Symbolically, we would say

e^(0.001i) ≈ 1 + 0.001i

Now, suppose we added another 0.001i to the exponent, so that we are now evaluating e^(0.002i). We'll do what we did before, which was to multiply the increment in the exponent, 0.001i, by the derivative. And what is the derivative? Is it 1, as it was before? No, since we're making an incremental step from e^(0.001i), it should be the derivative at 0.001i, which is equal to e^(0.001i) again, which we determined above to be about 1 + 0.001i. If we multiply this new derivative value by the increment 0.001i, we get an incremental impact on e^x of −0.000001 + 0.001i, which is a tiny step that is mostly northward, but which is also just an almost infinitesimal bit to the west (that's the −0.000001 bit). We've veered ever so slightly to the left, so the new estimated value at x = 0.002i is

e^(0.002i) ≈ 0.999999 + 0.002i

One thing to observe about the small steps that we've taken is that each one is at right angles to where we are from the origin. When we were directly east of the origin, our small step was directly northward. When we were just a tiny bit north of east from the origin, our small step was mostly northward, but a tiny bit westward, too.

What curve could we put around the origin, such that if we traced its path, the direction we're moving would always be at right angles to our direction from the origin? That curve is, as you might have guessed already, a circle. And since we start off 1 step east of the origin, the circle has radius 1. Unsurprisingly, this circle is called the unit circle.

If we follow this line of reasoning, then the value of e^(iπ) must be somewhere along this unit circle; that is, if e^(iπ) = m + ni, then m^2 + n^2 = 1 (since that's the equation of a circle of radius 1, centered at the origin). The only reason our estimated values weren't exactly on the unit circle is that we made steps of positive size, whereas the derivative is technically good only for steps of infinitesimal size. But where on the unit circle is e^(iπ)?

The crucial observation is in how fast we make our way around the circle. When we made our first step, from x = 0 to 0.001i, that step had a size, a magnitude, of 0.001, and the incremental impact on e^x was also of magnitude 0.001. Our second step, from x = 0.001i to 0.002i, was also of magnitude 0.001, and the incremental impact on e^x was, again, about 0.001.

In order to get to e^(iπ), we would have to make a bunch of steps whose combined magnitude totals π. The result would be, if we reason as we did above, to move a distance π around the unit circle. Since the unit circle has radius 1, and diameter 2, its circumference must be 2π. Therefore, e^(iπ) must be halfway around the circle, at coordinates (−1, 0). That is none other than the complex value −1 + 0i = −1:

e^(iπ) = −1

or, in its more common form,

e^(iπ) + 1 = 0
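The whole small-steps argument can be retraced numerically. The sketch below (my own, an Euler-method walk, appropriately enough) takes many tiny imaginary steps whose magnitudes total π, each time adding the derivative (the current value) times the increment:

```python
import cmath
import math

# Start at e^0 = 1 and take n tiny steps of size i*pi/n, adding
# (current value) * (step) each time -- the derivative times the increment.
n = 100_000
step = 1j * math.pi / n
z = 1 + 0j
for _ in range(n):
    z += z * step      # equivalently, z *= (1 + i*pi/n)
print(z)               # approaches -1 + 0i as n grows
print(cmath.exp(1j * math.pi))  # the exact value, up to rounding
```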

The foregoing is not, by any means, a rigorous demonstration. It's an attempt to give some kind of intuition behind the mysterious-looking formula.