IBM T.J. Watson Research Center • email@example.com
Pages: 4–6
A few weeks ago, a new Google Labs feature for their Gmail system made some news. They called it "mail goggles" and the tag line was "Stop sending mail you later regret" ( http://gmailblog.blogspot.com/2008/10/new-in-labs-stop-sending-mail-you-later.html).
The announcement said that one of their engineers wanted to keep from sending pleas to his ex-girlfriend when he wasn't thinking straight, so he came up with a simple test. I checked my calendar — no, it wasn't April 1st — but I still couldn't help thinking this was an odd joke. Nope! Sure enough, if I go to the Labs page from my Gmail account, I see that the feature exists, with examples of simple — but not too simple — math problems it might pose.
Wow. I would love to know how many people have adopted this by now, and how many continue to use it. I'd especially like to know how many times people have failed to send mail because they flunked the test and whether they feel this actually saved them from a mistake.
I don't think I'm particularly prone to saying things in late-night emails that I wouldn't say otherwise, but I have other email vulnerabilities. In particular, I have to confess to a profound need to set the record straight or get the last word in email exchanges, a trait that can result in longer and more heated debates than would be desirable. Perhaps the next Google goggle can add even more checks to protect us from ourselves:
I could go on, but you get the idea. In fact, as machine understanding continues to improve, I can imagine the analysis of such things moving from mechanical checks (any mail sent at 2 a.m. Sunday morning requires a sanity check, or flow analysis shows that many messages went out over a short period) to a deeper understanding. After all, if the mailer could distinguish between the drunken appeal to the ex-girlfriend and "normal" exchanges, it wouldn't have to encumber sending normal email with the extra process. And if it could analyze intent, it could sense tone, examine what it knows about a recipient, and raise the red flag suggested in my last bullet item.
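The mechanical checks described above are easy to imagine in code. Here's a minimal, purely illustrative sketch in Python; the rule names, thresholds, and function are my own invention, not anything Gmail actually implements. It flags a message for an extra sanity check if it's being sent in the small hours of a weekend or as part of a burst of messages in a short window:

```python
from datetime import datetime, timedelta

# Illustrative "mail goggles" rules (hypothetical thresholds):
LATE_START, LATE_END = 0, 5          # midnight to 5 a.m. counts as "late"
BURST_WINDOW = timedelta(minutes=10) # look-back window for burst detection
BURST_LIMIT = 5                      # >= 5 messages in the window is a burst

def needs_sanity_check(send_time, recent_send_times):
    """Return True if this message should trigger an extra check.

    send_time: datetime of the message being sent
    recent_send_times: datetimes of the sender's recent outgoing mail
    """
    # Rule 1: mail sent in the small hours of a weekend morning
    late_night = (send_time.weekday() >= 5 and
                  LATE_START <= send_time.hour < LATE_END)
    # Rule 2: flow analysis shows many messages in a short period
    burst = sum(1 for t in recent_send_times
                if timedelta(0) <= send_time - t <= BURST_WINDOW) >= BURST_LIMIT
    return late_night or burst
```

A real system would of course need the "deeper understanding" mentioned above to avoid encumbering normal email, but even rules this crude capture the 2 a.m. Sunday case.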
Of course, email is just one way in which our computer systems and, in fact, other mechanisms are increasingly protecting us from our own mistakes.
One example is navigation systems. Cheap GPS technologies have made such systems almost as ubiquitous as mobile phones. The ability to get live updates on traffic status is a relatively recent addition to the technology, and a welcome one to anyone who drives a lot. But navigation systems (both handheld and online) could do other things better, such as factoring the time of a drive into the directions to avoid known, predictable bottlenecks.
Navigation systems should also be reliable; that is, their directions must be trustworthy. I'm on my second GPS, and it's occasionally given me some strange directions, such as telling me to get off at one part of an exit and make a U-turn rather than take the exit going in the right direction in the first place. But this is nothing compared to some of the odd directions I received from my first GPS (circa 2002), which became the "GPS that cried wolf." It was so unbelievable that when it once offered a genuine shortcut, I went miles out of my way before I realized that it had wanted me to cut across a peninsula rather than traverse its perimeter.
Several utilities, such as gas, electricity, and phone companies, let customers specify in advance whom to notify in the event of nonpayment. The idea is that if a senior citizen or other vulnerable customer becomes forgetful or has other issues, a family member can take countermeasures before a vital service is disconnected.
Online services such as banks and credit cards have similar features: email alerts for late payments, unusual activity, low balances, and so on. Identity-theft defense mechanisms will notify you when new activity appears on your credit report. When someone is deemed especially vulnerable, such as when a hacker steals personal information from a company or government agency, extra countermeasures such as credit report freezes can deter abuse. (In fact, you can now freeze access even without an actual explicit fraud threat; see www.aarp.org/money/wise_consumer/scams/block_your_credit_reports_to_prevent_id_theft.html).
The catch is that, over time, it becomes easier and easier to become complacent. The first time I was notified that a thief had gained access to my US Social Security number, I raced to block my credit files. The second time, I reactivated the freeze, perhaps after a few more days. More recently, I've just given up. But I ask myself: why is it that the default is to allow access to my report, without my explicit permission, in the first place? These days, everyone is vulnerable; just as the utility companies inform a relative when a problem occurs, each consumer should be entitled to free notification of anything affecting their credit report.
I recently saw an item on Slashdot about a woman who fell victim to a "Nigerian 419 scheme" (see http://tech.slashdot.org/article.pl?sid=08/11/13/1659214, which refers to the original article at www.katu.com/news/34292654.html). I imagine that every reader is by now well familiar with the 419 schemes, named for an article in the Nigerian criminal code (see http://en.wikipedia.org/wiki/419_Advanced_Fee_Schemes for a history), and of course these scams aren't limited to Nigerians.
The woman was suckered in part by the scammers' email, which used her grandfather's name and claimed he had left behind millions of dollars. As with all these scams, she was asked for a little money, then some more, and then still more. This particular article was interesting because the woman reportedly lost US$400,000 by mortgaging her (fully paid for) house, borrowing against her (paid for) car, and so on, and during this process she ignored many people's advice to give up and stop throwing good money after bad.
What does this have to do with IEEE Internet Computing? Well, the victim here was taken in by a bogus email. Many mail systems and Web browsers are adept at spotting "phishing" attempts that impersonate well-known institutions such as banks. Some mail systems can even identify other spam, such as the note this woman received: I have never seen Google's Gmail categorize a 419 email as valid mail rather than spam. But what about everyone else?
There's plenty of room for improvement. The article on katu.com states, "For more than two years, Spears sent tens and hundreds of thousands of dollars. Everyone she knew, including law enforcement officials, her family and bank officials, told her to stop, that it was all a scam. She persisted." There is then a commentary from the investigative reporter: "The Nigerian scammers look for people like Janella Spears. Most of us have the sense to hit delete. But what about that older relative who has email, but is gullible or easily confused? Obviously, the scam works. That's why it continues."
Clearly, Ms. Spears needed some mail goggles, not to mention some bank goggles. She needed a system that would not only flag the incoming mail as spam but question her desire to reply. When she did reply, she needed next-of-kin notification, just as if her electricity was about to be shut off. She was finally stopped because her wire transfers to Nigeria caught the attention of the government as possible money laundering. But surely the system could have done more earlier to detect that her financial activity was an unusual pattern and merited scrutiny.
To put it another way, given that the existing bank controls eventually put a stop to her generosity, we clearly have the technology, just not the right threshold. As I understand it, transactions of US$10,000 or more generally merit special notification. To some, losing $10,000 would already be disastrous. Spears lost $400,000, and it's good to know she's down but not out ("Spears said it would take her at least three to four years to dig out of the debt she ran up in pursuit of the nonexistent pot of Nigerian gold"). But I'm sure her family is asking, "What took you so long?"
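The weakness of a fixed per-transaction threshold is easy to see: a long run of transfers just under the limit never trips it. A cumulative check over a rolling window would have. Here's a hypothetical sketch; the dollar thresholds and window size are illustrative, not any bank's actual policy:

```python
# Hypothetical sketch: flag single large transfers AND unusual cumulative
# outflow over a rolling window. All thresholds are illustrative.

SINGLE_LIMIT = 10_000    # classic per-transaction reporting threshold
ROLLING_LIMIT = 25_000   # illustrative cumulative threshold
WINDOW_DAYS = 30         # rolling window for the cumulative check

def flag_transfers(transfers):
    """transfers: list of (day_number, amount) pairs, in chronological order.
    Returns the sorted day numbers on which a flag would be raised."""
    flagged = set()
    for day, amount in transfers:
        # Rule 1: a single transfer at or above the reporting threshold
        if amount >= SINGLE_LIMIT:
            flagged.add(day)
        # Rule 2: cumulative outflow in the trailing window
        window_total = sum(a for d, a in transfers
                           if day - WINDOW_DAYS < d <= day)
        if window_total >= ROLLING_LIMIT:
            flagged.add(day)
    return sorted(flagged)
```

Three transfers of $9,000 each slip past the per-transaction rule, but the cumulative rule catches the pattern on the third one, which is exactly the kind of earlier scrutiny Spears needed.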
There are a lot of scammers out there, and we need better tripwires.
Virgilio A.F. Almeida is a professor in the Computer Science Department at the Federal University of Minas Gerais (UFMG), Brazil. His research interests include large-scale distributed systems, Internet computing, autonomic computing, and performance modeling and analysis. Almeida has a BSEE from UFMG, an MS in computer science from the Pontifical Catholic University in Rio de Janeiro, and a PhD in computer science from Vanderbilt University. He has published more than 100 technical papers and coauthored five books on performance modeling, including Performance By Design (2004), Capacity Planning for Web Services (2002), and Scaling for E-Business (2000), all published by Prentice Hall.
Stephen Farrell is a research fellow in the Department of Computer Science at Trinity College Dublin, where he teaches and researches security and delay/disruption-tolerant networking (DTN). He is also chief technologist with NewBay Software ( www.newbay.com), a provider of digital lifestyle solutions for mobile network operators. He has a PhD in computer science from Trinity College Dublin. Farrell has been involved in Internet standards for more than a decade and currently cochairs the IETF DomainKeys Identified Mail Working Group, is an invited expert in the W3C Web Security Context group, and cochairs the IRTF Delay Tolerant Networking Research Group. He coauthored Delay and Disruption Tolerant Networking (Artech House, 2006).
The opinions expressed in this column are my personal opinions. I speak neither for my employer nor for IEEE Internet Computing in this regard, and any errors or omissions are my own.