Tuesday, 19 January 2016

A Personal Finance Application Wish List

Many years ago I used Microsoft Money, so when I recently looked for a personal finance app, I was surprised to find that Microsoft had end-of-lifed the product. Even more odd, nothing seems to have come along to replace it. Building an app to track personal finances should be relatively easy, and I'm sure many programmers have noticed that it's a "point of pain", so why isn't there anything really good out there?

A bit of digging revealed that there are a couple of good apps, like Mint, and some spending trackers from the banks, though many of them are region-centric: if you're not in the US or the UK, you might be out of luck. One of the leading independent apps appears to be YNAB. It's great for what it is, but it's not a full-featured personal finance application. I now use YNAB (much more on this later).

The problem may be that there are several distinct activities that I want from my PFM (personal finance manager), and these activities get confused all the time. There isn’t a convention that I know of for what these activities are called, so for now I’m calling them Tracking, Budgeting, Planning, and Cashflow Planning.

Tracking

Tracking is pretty simple – you just record all your transactions, add some meaningful categories, and at the end of the month you can look at how much you spent in each category. This was the way I used Microsoft Money a lot of the time. When I was using it, the mere fact that I was tracking everything and analyzing it regularly had a definite impact on my spending. I was much less likely to spend on trivial things. I understood that my bank balance was just a number, and the real picture could only be seen when all the commitments that money had to serve were taken into account.
Tracking is the least complex feature of a PFM, and it probably delivers the bulk of the benefits, mostly because you're forming the habit of regularly analyzing your spending and breaking the illusion that the cash in your bank account is money you can spend. Perhaps part of the reason there aren't many good apps out there is that this is easy enough to do with a spreadsheet, and many banks and credit cards now offer this kind of view of spending.

Budgeting

To most people, budgeting means writing down all the transactions you think are going to happen in the next month or longer. Then you either try to reduce some (if you don't have enough money to cover them), or you see that you've got enough money and you're done. At the end of the month you check whether you're within budget (you almost certainly aren't), feel bad for a bit, resolve to do better in future, and eventually give up budgeting.

Until I used YNAB for a while, I didn’t realize that my idea of budgeting was nothing like true budgeting. YNAB is a software version of the old “envelope” system, where back in the days of paper paychecks and using cash for everything, the thrifty household would cash their paycheck, then put amounts of cash into envelopes labelled with the categories they were for. An envelope for Groceries, one for Heating Oil, another for Clothing and so on. If you were looking further out, you might have one for Christmas, and another for the Vacation you were planning for next year.

There's a certain genius to this system: more than any other, it forces you to break the delusion that the cash you have is the cash you can spend. Even better, if you do spend too much on something trivial, you have to physically take cash out of one of the other envelopes to do it; and it's kind of tough to let yourself take money out of the Kids' College Fund or Vacation envelope to go bowling.

YNAB is a very opinionated piece of software (a good thing, I think). It's tough to learn how to use it in the intended way without spending a lot of time reading the docs and watching the excellent online videos. I'm thankful I stuck with it; I don't think I would ever have really understood what budgeting means otherwise. That's probably why there seems to be a steep learning curve for such a simple piece of software – it forces you to look at budgeting in a different way.

Planning

I've grown to respect the YNAB approach and I hope I'll always see budgeting in this new, powerful way. However, I also need to plan. The YNAB way is that you don't allocate money to any category until you actually have that money, so when you're starting out, you rarely budget more than a month ahead. Let's say you get paid on the 1st of the month, and your mortgage payment goes out on the 2nd. In the YNAB budget view, you don't show that mortgage payment until you have the cash for it. I get why this is right: forcing you not to budget until you've got the cash is the best way to be realistic and build up a proper buffer, so you're paying this month's expenses with the paycheck from a month or two ago.

But. I also want to be able to see a plan or a projection of those transactions. It's important for me to think beyond the next month and remember that I pay my car insurance annually in March, and that it's nearly 700. In YNAB, the way you deal with this is by creating a category / envelope for car insurance and putting enough into it each month so you'll be able to pay the 700 in March without screwing up your cash position. That's fine. But there isn't anywhere in YNAB where you can see that car insurance of 700 is due in March, property tax is due in April, school supplies are due in September, and so on.

I also need a planning system that helps me make big decisions. Like what happens if I switch jobs to something that pays less? A short-term budget doesn't give me what I need here. Should my wife go back to work? What happens if my son gets into an expensive school? These questions require plans that go months and years into the future, and sometimes the output is along the lines of "get a second job or a big promotion, buddy, you need way more cash". And that's ok (I've done this in the past, many people do), but we should have tools that make it easier to explore the what-if scenarios.

Cashflow Planning

I also need to resort to a spreadsheet in order to figure out how much cash I’m going to have in my checking account, my family joint account, and what the balance is going to be on any credit cards we use. The purist budgeter approach is that this stuff doesn’t really matter – you have income coming in, when you receive that income, you allocate it to your expenses, and when you pay those expenses, it doesn’t matter which account it comes from, what’s really happening is income is being moved to outflows.

Again, this took some getting used to, and when it finally clicked, it made a ton of sense and I can see why it’s the right way to think when budgeting. But. I still need something that tells me when our joint account needs to be topped up so it doesn’t go overdrawn. It would be nice if I could see when that was likely to happen, and since I can input the expenses I’m going to be paying this month and next, along with the dates and amounts, I’m pretty sure the computer could do this for me.
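A projection like this is just a small fold over dated transactions. Here's a minimal sketch of what I mean, in Haskell for fun; the `Txn` type and the use of ISO date strings are my own invented conventions, not anything YNAB provides:

```haskell
import Data.List (sortOn)

-- Hypothetical transaction type, invented for illustration.
data Txn = Txn
  { txnDate   :: String  -- ISO date, e.g. "2016-02-01", so dates sort lexicographically
  , txnAmount :: Double  -- positive = inflow, negative = outflow
  }

-- Running balance after each planned transaction, in date order.
project :: Double -> [Txn] -> [(String, Double)]
project opening txns = zip (map txnDate sorted) balances
  where
    sorted   = sortOn txnDate txns
    balances = drop 1 (scanl (+) opening (map txnAmount sorted))

-- The first date the account would go overdrawn, if any.
firstOverdraft :: Double -> [Txn] -> Maybe String
firstOverdraft opening txns =
  case [d | (d, bal) <- project opening txns, bal < 0] of
    []      -> Nothing
    (d : _) -> Just d
```

Given an opening balance and the planned inflows and outflows, `firstOverdraft` answers the "when does the joint account need topping up" question directly.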

More urgently, when you first adopt YNAB, you're probably living paycheck to paycheck. In fact, you're probably living off next month's paycheck, i.e. spending on a credit card and then paying it off when you get paid, then repeating the cycle. While you work to fix this and move to living off income you already have in the bank, you really need help managing your credit cards. The most urgent question: which expenses should I pay with my credit card this month, so that I don't put my checking account into overdraft? Some expenses can't be paid by credit card, so it's not good enough to pay everything out of your checking account until it's used up and then switch to credit.
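One naive strategy the app could offer: move the largest card-eligible expenses onto the card until whatever remains can be paid from checking. This is a sketch of my guess at a sensible default, with an invented `Expense` type – not how any real app does it:

```haskell
import Data.List (partition, sortOn)
import Data.Ord (Down (..))

-- Hypothetical expense record, invented for illustration.
data Expense = Expense
  { expName   :: String
  , expAmount :: Double
  , cardOK    :: Bool  -- can this be paid by credit card?
  }

-- Move card-eligible expenses (largest first) onto the card until
-- everything left can be paid from checking without going overdrawn.
-- Returns (pay by card, pay from checking), or Nothing if even putting
-- every eligible expense on the card isn't enough.
splitPayments :: Double -> [Expense] -> Maybe ([Expense], [Expense])
splitPayments checking expenses = go [] (sortOn (Down . expAmount) eligible)
  where
    (eligible, cashOnly) = partition cardOK expenses
    mustPay = sum (map expAmount cashOnly)  -- can only come from checking
    go card fromChecking
      | mustPay + sum (map expAmount fromChecking) <= checking =
          Just (card, cashOnly ++ fromChecking)
      | (e : rest) <- fromChecking = go (e : card) rest
      | otherwise = Nothing
```

A real version would also have to respect due dates and the card's own statement cycle, but even this much would beat doing it by eye.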

Overspending and Amendments

When you overspend in YNAB, or you add an item to your budget mid-way through the month, you overwrite the budget with this new information. There's no way to see what your original budget was versus how reality ended up. Other users have pointed this out, and the YNAB response is that this is just the right way to do things.

I think it may be ok to display the current budget without all the clutter of the original amounts, but the app should show you how often you underestimate categories and by how much, and how often you completely omit items. Then, when you're budgeting next month, a helpful tip would be something like "You underestimate the total budget by an average of 15% each month, and regularly overspend in Entertainment by 50%". I think it would also be a helpful discipline for the user to be asked to "declare" that a budget is final. As it is now, it's too easy to keep tweaking, and there's no way of tracking what the original intent was.

The Huh…? Factor

The first months I used YNAB, I entered every transaction diligently, allocated my income, amended my overspending as recommended, and created a buffer category to start building toward living off old income. I also cut back on a lot of spending, making pretty significant changes to get things under control. But when I put all this in YNAB and looked at the reports and the budget screen, I had no clue what it all meant. Was I doing well? Did I have enough cash to cover my main expenses next month, assuming I got paid as usual? I had no idea, and I still think it's tough to see.

Yes, you need to stop thinking of the cash in your bank account as the money you can spend, but it would be nice to be able to move into "budget space" in YNAB, and then have it translate back into "cash and time" space. Putting it simply: if I give the computer my current balances and all the upcoming transactions with dates and amounts, it should be able to tell me what my balances will be in future. I know this isn't budgeting, but it's an important part of what I want from a personal finance app, and if I find another app that does this, I'm unlikely to keep entering my transactions and projections into both YNAB and that app.

Commitments, Covered, Discretionary

In a budget app, I’d like to be able to label some future expenses as “Hard commitments”. These are things I’m legally obligated to pay, like rent or mortgage, my internet service, gym fees, anything where I’ve signed a contract and I can’t just decide to reduce my spending in that category in the short term.

When income comes in, the budget app should allocate to the Hard Commitments automatically, in the order that they will have to be paid. If there’s income left over, then I can use that to allocate to the discretionary categories.

One way to resolve the conflict between YNAB's idea that you only enter the expenses that you have income to cover, and the need to properly plan future cashflows, might be to make it possible to enter future months' forecast outflows, but to clearly mark them as "uncovered". When income is allocated to outflows, they become "covered".

So let's say you've got no income yet this month, and a couple of hard commitments in your budget for the month along with some discretionaries. The hard commitments should be colored red, making sure you can't miss the fact that if you don't get some source of funds to pay them, you're in trouble. The discretionaries can be colored amber, indicating that they're not covered. As income arrives, the hard commitments turn green, as do the discretionaries. The commitments for future months stay red until you've got the cash to cover them. This way, the budget view will eventually show you a month, or two, or three, with all categories in green, making it clear that you have a buffer and how far into the future it reaches.
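The coloring rule above is simple enough to write down. A sketch, with a data model invented purely for illustration:

```haskell
-- Hypothetical model of the red/amber/green scheme described above.
data Kind   = HardCommitment | Discretionary deriving (Eq)
data Status = Red | Amber | Green deriving (Eq, Show)

data Item = Item
  { itemLabel :: String
  , itemDue   :: Double
  , itemKind  :: Kind
  }

-- Walk the items in the order they fall due (commitments listed first),
-- covering each one while the cash lasts. A covered item is Green; an
-- uncovered hard commitment is Red; an uncovered discretionary is Amber.
-- Leftover cash can still cover a later, cheaper item.
colorize :: Double -> [Item] -> [(String, Status)]
colorize _ [] = []
colorize cash (i : rest)
  | cash >= itemDue i            = (itemLabel i, Green) : colorize (cash - itemDue i) rest
  | itemKind i == HardCommitment = (itemLabel i, Red)   : colorize cash rest
  | otherwise                    = (itemLabel i, Amber) : colorize cash rest
```

Run over the current and future months' items, this gives exactly the "how far does my buffer reach" view I'm after.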


The ideal outcome for me would be that YNAB adds functionality to their app that makes it easier to plan and see a "cash and time" view of my data. Or perhaps, other apps get built that can read the YNAB data files, so I don't have to keep separate records. I love the way YNAB just uses Dropbox to sync and share the data files, and it looks like they've designed a good format for the data. Maybe what we need is for this format to be made "open" or somehow standardized by convention so that app developers can all read the user's data, no need to rekey or import.

I’m sure there are a bunch of great apps out there that I’m totally unaware of, or perhaps the right thing is to get a professional accounting package like Sage or Quicken and do all of this the way the pros do it. Let me know what I’m missing! 

Friday, 18 July 2014

MH17 flight path history

MH17 crashed in Ukraine on 17th July 2014, with the loss of all on board. According to reports, the aircraft was flying directly over the disputed territory at an altitude of 33,000 feet.

Eurocontrol reportedly said that they had kept the corridor open even as Ukrainian authorities banned aircraft from flying below 32,000 feet.

The historical flight paths followed by MH17 show that the flight on the 17th followed a route further to the north than previous flights, taking it over the disputed territory. 

12th July, almost completely avoids Ukrainian airspace, flying to the south over the Black Sea:

13th July, slightly further north, well inside Ukrainian space:

14th July, entered Ukrainian airspace further north, but routed far to the south of the disputed territory:

15th July, similar to the previous day's route:

16th July, way farther north, skirting the disputed territory:

17th July, further north still, directly over the disputed territory, with tragic result:

As reported in some media, other airlines had routed away from the disputed territory's airspace for months,
e.g. Air India Delhi to Frankfurt:

All images snapped from flightaware.com

Thursday, 1 August 2013

Good Intentions

The biggest waste of time I know.

I'm one of those programmers who spends a lot of his time wondering "how could I do this better/easier/faster/safer/simpler". Often the conclusion is recognising some common pattern across a codebase, or refactoring functions into more meaningful units – the bread and butter of keeping your code in shape. Quite often, however, "this" becomes programming itself, or more accurately, application development. I start out asking how to make a module better, and trace the question back until I'm questioning the overall architecture of applications in general, rather than the particular program I'm working on.

For example, a little project I'm working on at the moment has a data access layer. It's a small file of functions that wrap database calls. There's a certain amount of marshalling and demarshalling, and a few hard coded mappings from query results to types. So naturally, I'm asking "why can't I generate all this code automatically and not even have to think about it, and have something automatically generate typed mappings? I could just give it my types and it should figure out everything else!" And so I spend a few days reading about ORMs, trying them out, debugging exploratory code, getting a buzz from seeing something work "automagically" and then getting battered and bruised by dependencies, versioning issues, performance gotchas that are now opaque to my reason, eventually whimpering back to my simple wrappers, cleaning them up for an hour and moving on. Yeah, yeah, there'll be a few bugs in that module and I'll probably come back to it a few times, but doing it this way is highly likely to be better than downing tools and rearchitecting the entire project to use an ORM. (Ok, I knew about ORMs a long time ago, but it's a typical example).

And that's a real time waster. Probably the biggest waste of time in my entire life.

Don't get me wrong, about 5% of the time, I'll learn about a method or library or framework or approach that really is better. The next project I design, I'll be able to make an informed decision as to whether an ORM is appropriate or not. (Disclaimer: I said "informed" not "good"). Sometimes it's even relevant to the project. Once in a while, it's not a completely foregone conclusion or utterly obvious, so I actually get to make the decision.

But 95% of the time it leads to nothing: even if I find something interesting, it's not applicable to the current project, or the next one, and the one after that will be running on hexacore smart watches with 6G connectivity, so it's probably better to wait until we're starting work on that before doing a review and making decisions about what stack to build it with.

And speaking of stacks, a lot of the irresolvable flamewars are because fanboi A is arguing about how great his language is, and fanboi B is arguing about how great his entire stack is. They're both right, they're just talking about two completely different things. I'll even propose the humbly named "dubhrosa relation of language elegance": the more elegant a language, the less likely it's embedded in a productive stack. Why? Evolution.

A productive stack is one in which everything important to commercial app development is basically possible and reasonably straightforward, in which you can change code in one part of your stack without creating massive explosions of infeasibility in some other part. Usually these stacks have ugly edges. In some cases, they have entire continents of ugliness, like stacks with PHP in them.

Perhaps this ugliness is a kind of genetic trait that hints at what drove its creation. Take perl, or PHP. They're both ugly. There are pockets of elegance, and undoubtedly they are "powerful", but in general you can tell they grew without any especially well-grounded master design. You can also tell that the people driving the development of these kinds of languages were really focused on making stuff work. I can be fairly confident that perl and PHP will let me connect to any major database. There might be a few ugly bits, but I know it's really unlikely that I'll find myself boxed into a corner, like not being able to connect to a major db, or being able to but ending up rewriting the standard library for doing so because its performance is so poor. I don't have the same expectation with Haskell or OCaml, for instance.

So, in conclusion, if you're building an app these days, you should pick a stack, a popular, well used one, and stick with it for the duration of the project. Learn to love the one you're with. When you encounter overwhelming ugliness, put it on your workflowy list under "There must be a better way - discuss", write the code that makes you cringe, comment it so everyone knows you're aware of how ugly it is so there's no risk to your programmer-tribe status levels, and move on.

Gratuitous blogpost controversial statements that I probably don't really mean exactly 

Let's put some languages in order of ugliness:

Python, PHP, perl, Java, Ruby, C++, C#, F#, Haskell

(yeah, yeah, Python really is way uglier than C++, just not at first glance, it's the beer-goggles language. Its ugliness is hidden under a layer of cakey makeup and whitespace and bitchy bravado. It tells you how much you deserve list comprehensions and before you know it you're in too deep. You're shouting at the screen "if I wanted a glorified dictionary wrapper masquerading as a programming language I could have built my own! a fast one!". It's uglier than PHP in a deep way. PHP sits in the corner of the library sniffling and covered in acne but is honest and friendly and if you ask him out on a date he says "are you sure?" three or four times. Then he brings you to the fairground and you have a great time up until you fall off the rollercoaster (did he get too excited and push against the guardrail? you'll never know for sure) and break your arms and wake up in hospital and somehow you've caught his acne and his cold but he's there beside your bed with video games and taytos. Python, by contrast, roofies you with syntax and when you see through it, tells you that you're inadequate if you don't understand how great it is, and when you finally leave, he spams your twitwall for months.)

And yes, Haskell is the most elegant, by about a billion miles, don't even try to argue with me on this one. Haskell's community is fantastic, but its ecosystem sucks. Configuration management can be tricky (google "cabal hell"), and it doesn't fit well in any of the widespread stacks. I've tried using Haskell in a web app, first using the Haskell webapp frameworks, and then just as a CGI within Apache/postgres, pausing only to build a custom json protocol with session-oriented connections and a custom client. Oh how I wept. It was like finding the partner of your dreams, perfect in every way, except they wake up and stab you in the back of your head at 3 in the morning on a regular basis. I say this in part because it's sufficiently vague that the absolutely lovely and helpful Haskell guys don't implode in exasperation telling me that the random stabbings were fixed in a later version of yesod than the one I tried (and I'd explain that I couldn't use that version because I was using a version of the bytestring library hand-picked to be compatible with a particular version of the vector library, which meant I couldn't upgrade to that version of yesod even if I wanted to, oh wait that's fixed now but I need to build ghc from source you say? How about the x64 bug?), and in part because it's entirely true.

There's also the little-discussed, but very important, matter of a project's "moron impact factor" that you need to consider when understanding the successful stacks. Think of it this way: every successful project basically never completes; it's a successful application, so people are using it, asking for new features, and burdening the system with ever increased load. At some point you'll have new programmers: old ones might leave, or you're adding because there's just more work to be done. The law of large numbers and cretinous recruitment agents mean that at some point some very stupid programmers will work on your application. The amount of damage they can do before anyone notices (the moron detection time) is a characteristic of the stack you're using. This is the hidden value of verbose languages – it takes stupid programmers so long to wade through all the verbose code that they can't do as much damage; there simply aren't enough hours for them to get to it. What's more, it's easier to make your peace with adding a mediocre programmer to the team to add some feature, because the stack is pretty ugly anyway, so we're all acclimatised to the necessity of ugliness, and we can tolerate ugly but functional code written by a mediocre programmer who was drafted in.

Please let me know when my Haskell stack is ready. Until then I'll be in a darkened office rubbing my temples and sighing.

Monday, 31 December 2012

XVoice: speech control of Linux desktop applications


An open source speech control project up for adoption

The Early Days

Around the end of the 1990s, IBM released a Linux version of their ViaVoice speech recognition engine. It was always a beta product – it never had the full set of features of the original Windows program – but at the time it was the only good recognition engine for Linux, so I started playing around with it.

Soon I discovered an open source project, XVoice. XVoice was an application that used the ViaVoice engine for speech-to-text, but then used the resulting text to control the Linux desktop. It was a hack that used a bunch of programs in ways they were never designed for, and it achieved something rather exciting: you could speak to Linux and it would do your bidding. 

One of the great features of the ViaVoice engine was that it allowed you to define a grammar, and then the engine would match whatever speech was input against that grammar. This meant that without training, recognition rates were near-perfect for a domain specific grammar (nicely defined in BNF). 

Progress and Success

After a few months of regular development in early 2000, XVoice had proper support for user-defined command grammars. These grammars mapped spoken commands to keystrokes, and you could have multiple grammars, one for each application. XVoice had some (hacky) heuristics: you could specify a regex that would match against the window title, which would then automatically load the right grammar file. You could control the mouse too; XVoice split the screen into ever smaller 3x3 grids that you navigated until the mouse was where you wanted it. The grammars were hierarchical, so you could include the grammar for spelling out numbers in your emacs control grammar, and they supported pattern substitution, so the command sent to an application could include some of the words you said.

There were some quite motivated users who contributed a lot to the development. One was a programmer who used Vi but had severe RSI that was making it difficult for him to work. He defined a comprehensive Vi grammar that allowed him to program, and interestingly, claimed he was more efficient because he was using higher level Vi commands than he would normally. Some Emacs users had huge grammars that let them read news, send emails, program in lisp and who knows what else. 

As an aside, my experience working on XVoice left me in no doubt that for regular people, voice control of your computer is a fun trick, but the only people who would use it on an ongoing basis are those who have no other choice. Talking for several hours a day is physically difficult, and even with the clever grammars some people designed, it's not something you'd choose to do unless you had to. We had at least a few quadriplegic users who were starting to use the system with some success; for them, XVoice was the only way they could operate a Unix machine. This realization made the future of the project clearer: we'd focus on features that helped people who couldn't type at all, or only with great difficulty.

As a programming project, working on XVoice was just great, and I learned a lot from the other programmers who were much more experienced and capable than I was. By the end (version 0.9.6 I think...) it was a really cool program, being used by people who really needed it to work. We encountered and solved some of the problems of voice control of graphical user interfaces. The command grammar was a pretty elegant and extensible system, and I recall that the sense at the time was that most of the interesting work ahead lay in defining bigger and better libraries of grammars. 

An Abrupt End

Unfortunately IBM didn't seem very interested in ViaVoice on Linux. We had some contact with the developers, who were quite helpful, but the "official" IBM people would never tell anyone what the plans for continuing Linux support were. The only contact we had with them was when they demanded that we make it clear on the XVoice related websites that we didn't distribute ViaVoice with XVoice, that people had to buy it separately (which we always did). 

Then one day IBM discontinued ViaVoice for Linux. It just disappeared from their website. At the time, CMU Sphinx was the only plausible candidate for an open-source speech-to-text engine that we could use instead of ViaVoice, but it wasn't very mature and had some issues that would have been tough to work with. The main coders on the project had personal or work issues that meant they couldn't work on XVoice for a while, and so the project lost momentum. 

Wistful Thinking

Every once in a while when I read an article about speech to text, particularly about command and control systems, I wish that we'd had the time to rework XVoice to work against an open source engine. It's disappointing that there's still no clear (open source or other) solution for people who can only interact using speech. Many big tech companies pay lip service to accessibility, but in this case the big boys didn't do anyone any favors.

The Future

I think it's important to distinguish between projects that focus on building better general recognition accuracy for dictation, and accessibility-oriented command and control systems. Complete control with speech is a difficult problem; I don't think you solve it as part of a larger generic command and control platform. You have to focus on accessibility, talk to users who have real accessibility issues, and get them to work with you to overcome them. If you are working on command and control, it's worth remembering that these people are the only ones who will still be using your software when the novelty wears off and their throat is sore. In the final few months, that's where XVoice was focusing; there were a bunch of awkward problems we'd need to fix, but it was pretty exciting.

The key feature that XVoice or any other command and control system relies on is the ability to feed context-specific grammars to the recognition engine on the fly. The underlying accuracy of the engine isn't very critical if the grammar is sufficiently constrained; all modern engines are likely good enough. But as far as I know, the current HTML5 implementations of speech input don't yet support setting grammars. CMU Sphinx appears to, but it's not clear how well it works in practice – their configuration files seem quite complex.

The XVoice code is all GPL; it's on SourceForge and now on GitHub, so please feel free to go nuts. Before today it had been about 10 years since I looked at it, but the docs are actually pretty good and the code isn't as bad to read as I expected. It's mostly pre-RAII C++, so it would need a cleanup and a dose of smart pointers to bring it up to modern standards. Even if the project isn't resurrected, the ideas around how command grammars are structured and used might be useful to another project, or the code for generating X events could be reused. There's a sample set of grammars in the modules subdirectory that make for interesting reading – there's even one for "netscape", how quaint.

Sunday, 23 December 2012

Lessons learning Haskell


It's often claimed that learning Haskell will make you a better programmer in other languages. I like the idea that there's no such thing as a good programmer, just a programmer who follows good practices. As soon as we stop following good practices, we suck again. So Haskell must introduce and indoctrinate better practices that we carry back to our other languages. Right? I think it's true, but it's not obvious, so I've written this article to outline some of the habits and practices that I think changed after I used Haskell for a while.

Pure by Default

Haskell makes your functions pure by default. You have to change the type signature if you want to do IO, and mark every function between your function and main as tainted with IO. This forces you to be conscious of IO. It encourages you to keep functions that do IO as high up the stack (close to main) as possible. Purity also means you can't read or write global variables – that's just another kind of IO. If your function needs some data, you pass it as a parameter. So the type signature of a pure function is a complete inventory of everything it can access, and is therefore a very good spec for what the function does in most cases.

Experienced programmers who pay attention already know that IO and global vars mustn't be taken lightly. Every IO operation is a potential source of errors, exceptions, and failures. Functions that do IO are difficult or impossible to test. Programmers know this, but Haskell makes sure you never forget it when it matters. It incessantly shunts you in the direction of keeping the call-stack of IO-doers as small as possible.

When I go back to my other languages, I now put all my IO in top-level functions that are called directly from main or the event loop. I gather every scrap of data I need to do the computation. I marshal the data into typed structures and pass it all into a pure function that does the work. Then the structure that's returned is demarshalled and transmitted, displayed or stored as required. If I need to do some computation to determine what data I need to fetch, I make sure this is not commingled with the IO functions, so my code fetches data, calculates what else needs to be fetched, fetches that data, and so on.

This has some nice effects. The IO doers are isolated and distinct. Error handling and exception catching is clearer and simpler. The compute code is pure. This makes it much easier to test, debug, and understand.

Clean Syntax for Static Types

boo :: Map Integer String -> String -> Integer

I have no idea what the word "boo" is supposed to mean when used as a function name. But I can be almost certain what this "boo" does. It takes a map that has Integer keys and String values, and a second argument that is just a String, and it returns an Integer. So I'm fairly sure that this function does a reverse mapping - you give it a String value and a Map, and it finds the Integer key for that value.

A lot of this depends on the fact that the signature tells me that there is no IO going on. If the bit at the end of the line was "-> IO Integer" instead of "-> Integer", all bets are off. The function could be sending the Map and String to launch control, and -> IO Integer could be the number of seconds it took to get a response, or the price of a gallon of gas in pennies (hence "boo", perhaps). The point is, you can't confidently reason about a function from its signature if IO is involved.
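For what it's worth, the reverse-lookup guess is easy to write down. This is just my own sketch of what a boo with that signature might be, not anything definitive:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- One plausible implementation matching the signature above.
-- (A total return type hides the "value not present" case; a kinder
-- API would return Maybe Integer instead.)
boo :: Map Integer String -> String -> Integer
boo m v = case [k | (k, s) <- Map.toList m, s == v] of
            (k:_) -> k
            []    -> error "boo: value not present"

main :: IO ()
main = print (boo (Map.fromList [(1, "one"), (2, "two")]) "two")  -- prints 2
```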

The Haskell type signature of a function is particularly clear and easy to follow. Functions just map one type to another: "foo :: Author -> DateOfBirth". Parameterized types just list the parameter types: "Map Integer String". There are very few boilerplate tokens for the eye to scan.

But how has this changed what I do in other languages? I now sketch out the design for larger components in this Haskell signature notation. Particularly if I'm writing a library with a public API. I've shown these sketches to other developers as we discuss the design of a program, and they get it. Most of the time, I don't mention that the notation is Haskell. The only slight oddity for them is the use of -> between function "inputs". They expect foo :: A,B -> C instead of foo :: A -> B -> C. But they get over it immediately, and I have never had to mention currying or partial application, since they're usually just pleased that the notation is clearer than anything else we've ever used.
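To give a flavour of what such a sketch looks like, here's a hypothetical miniature cache API in that notation. The types and bodies are stand-ins of my own; at design time only the signatures matter.

```haskell
-- A made-up session-cache API, sketched signatures-first.
type Key = String
type Val = Int

newtype Cache = Cache [(Key, Val)]

emptyCache :: Cache
emptyCache = Cache []

insertC :: Key -> Val -> Cache -> Cache
insertC k v (Cache kvs) = Cache ((k, v) : kvs)

lookupC :: Cache -> Key -> Maybe Val
lookupC (Cache kvs) k = lookup k kvs

main :: IO ()
main = print (lookupC (insertC "alice" 1 emptyCache) "alice")  -- prints Just 1
```

In a design discussion, only the four signature lines go on the whiteboard; no IO appears in any of them, which is itself part of the proposal.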

Container Operations

I think one of the reasons I started using Lisp, then Erlang and then Haskell was that I must have typed "for (size_t i=0; i<..." just about a million times and I was sick of it. C++ teases with approximations to map, filter, fold, scan, just enough so that you'll try them for a few months until you eventually give up or your colleagues smack you. When I want to filter items from a container, I don't want to start by saying "for(size_t i=0...". I want to say "filter f xs" and I want my colleagues to read that too.

It might seem like an overreaction. But even in big, class-heavy C++ projects, where I was a senior developer, I spent my days writing functions. Functions consisting of loops and branches, because C++ didn't do a great job of accommodating operations on containers. Despite all the guff written about the STL separating iterators from algorithms from containers (from allocators... ahem), nobody provided a simple set of primitive container operations that regular programmers would actually use.

After using Haskell for a while, the effect went further: it forced me to think of every such problem as a chain of the primitive list operations, maps, folds, filters, and scans. Now I always think in these terms. I "see" the transformation of a container as a simple sequence of these operations. Before, I would have thought in terms of munging multiple actions into the body of a single loop.
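For example, "keep the even numbers, square them, add them up" stops being one loop with three jobs and becomes a named chain of those primitives:

```haskell
-- One loop's worth of mixed concerns, rewritten as a pipeline of primitives:
-- filter keeps the evens, map squares them, foldr sums the result.
sumEvenSquares :: [Int] -> Int
sumEvenSquares = foldr (+) 0 . map (^ 2) . filter even

main :: IO ()
main = print (sumEvenSquares [1, 2, 3, 4])  -- prints 20
```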

I see things using higher-level concepts, and I write my comments and code with these in mind. I usually still reach for the trusty for loop in C++, but where appropriate I'll factor common container operations into higher-level functions. Filtering is the one that seems to come up all the time.

What's Changed

Using Haskell definitely gives you a lot of warm fuzzy feelings (until your filehandle is closed before you've actually read the data, because you didn't ask for a result, silly). Part of the joy of the language is that it forces you to take a new approach to problems you've solved conventionally before. When the answer clicks and you see that the new approach is more elegant, powerful, and general than what you've been using all these years, it's hard not to smile with sheer pleasure.

In real world commercial software projects, if you don't properly test your code, and do code reviews, it doesn't matter what language you use, you're leaving the big wins on the table. A team that does these things well consistently will beat any team that does not, regardless of what language or technology stack they're using.

Using Haskell changed my practices so that the code I write is easier to test and easier to code review. There's a bunch of other stuff too; some of it can be articulated, some will probably always be just a "warm fuzzy".